Great points, Sahar.
One debate that is top of mind is whether open source is good or bad. You touch on this a bit, but to go further, I might also add that there are risk/reward trade-offs with open-source AI models. On the one hand, they democratize access (generally good, imo). On the other, they also give bad actors easy access to the rules of the system, which they can then manipulate. Having worked in tech, I recognize that sophisticated threat actors run their own experiments on closed-access systems and eventually figure out those rules to exploit for their nefarious aims... but sharing the code might empower less sophisticated threat actors to do things previously limited mostly to their better-resourced counterparts. I haven't figured out where I stand on this, but it's one of the complicated integrity issues that make me uneasy.