A while ago, a few friends of mine asked me for help thinking through their draft piece on factions in the “AI regulation” space. It was fun! We talked a good amount and I went over a bunch of ideas. A fair number of them ended up in the eventual article in the New York Times.
Read it! And also, here are a few ideas I shared with them that may not have come through as clearly in the final, edited piece:
This is a fight over terminology.
Some of the factions are fighting over who owns certain AI terminology.
They are claiming ownership of terms like “AI risk”, “AI alignment”, “AI safety”, and “AI ethics”.
Some have successfully conquered territory.
For example, to a naive observer, “AI ethics” could plausibly mean a lot of different things. Maybe it means “how do we not abuse emerging intelligence? Maybe programs can have sentience, after all”. Or it could mean “how do we take the spoils of an AI-boosted economy and share them with everyone?”. In practice, however, it means something pretty narrow.
The main strategies in this fight are more about seizing control of language and specific terms than about arguing ideology outright.
This feels a lot like integrity work
There’s a mysterious, complex system that no one really understands. You can tweak it, but the only way to understand your tweaks is by tests and metrics that only show a small sliver of the overall impact of your changes. It’s powerful, and you use it. But also a little out of control.
You try your best to tweak it towards doing good, and not harm. But you know you’ll never get it “perfect”, that there are always tradeoffs, and that thinking structurally (rather than just banning certain words or phrases) is the way to go.
This is the reality of dealing with ranking and recommender systems on social media (aka integrity work, or “trust and safety” engineering). It’s also what AI alignment people do right now.
That’s because AI alignment workers are integrity workers.
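To make that analogy a bit more concrete, here’s a toy sketch of the dynamic. Everything in it is hypothetical and made up for illustration (the post data, the harm_weight knob, the metric): you nudge a ranking function toward doing less harm, and the one number you can easily read off a dashboard only shows a sliver of what actually changed.

```python
# Toy illustration only; all names and numbers here are hypothetical.
posts = [
    {"text": "miracle cure doctors hate", "predicted_engagement": 0.9, "predicted_harm": 0.8},
    {"text": "local park cleanup this weekend", "predicted_engagement": 0.4, "predicted_harm": 0.0},
    {"text": "new study on sleep and memory", "predicted_engagement": 0.6, "predicted_harm": 0.1},
]

def rank(posts, harm_weight=0.0):
    """Sort by predicted engagement, minus a structural penalty for predicted harm."""
    return sorted(
        posts,
        key=lambda p: p["predicted_engagement"] - harm_weight * p["predicted_harm"],
        reverse=True,
    )

baseline = rank(posts)                  # engagement only
tweaked = rank(posts, harm_weight=1.0)  # structurally downweight likely harm

# The metric that's easy to read off a dashboard: engagement at the top slot.
print("baseline top slot:", baseline[0]["text"], baseline[0]["predicted_engagement"])
print("tweaked top slot: ", tweaked[0]["text"], tweaked[0]["predicted_engagement"])

# What the dashboard doesn't show: harm avoided, second-order effects on users
# and creators, long-run trust. The tweak reads as a pure engagement "loss"
# even when it's the right call, and no setting of harm_weight makes the
# tradeoff disappear.
```

That gap between what you can measure and what you actually changed is the shared daily texture of both kinds of work.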
Fuzzy analogies to nuclear weapons don’t help
It seems like both the “well-intentioned” hawks and the more mercenary policy entrepreneurs reach for the analogy of nuclear weapons: a thing that is scary, useful, and must not fall into the hands of enemies.
By well-intentioned I just mean this: is business truly the overwhelming reason that the China hawks are freaked out? I’d say there are people who are genuinely thinking about this from a geostrategic/military/national perspective and who aren’t in hock to Lockheed Martin.
This analogy kind of works: there’s a specific technology you don’t want falling into the wrong hands.
It also mostly doesn’t: the tech doesn’t require that kind of access to capital, and the genie is already out of the bottle. The tech also isn’t binary the way a nuke is (you either use one or you don’t), so the analogy just doesn’t fit.
“Open source” isn’t a magic wand
I ran Linux all through college. Yes, on a huge, heavy laptop with about 20 minutes of battery. I lived the struggle because I believed in the dream of libre software. And yet, some people seem to hear the words “open source” and automatically turn their brains off. (For some, it just registers as “good thing, no problems”.)
I also don’t see the functional difference between an “open source model” and a “cheap provider of AI services that’s open to anyone who pays”.
Power matters. Monopolies matter.
There is a faction of people who criticize the potential for AI monopolies where only the rich have access.
They want to ensure AI benefits all of society, not just the wealthy elite.
Check out “Four Futures” by Peter Frase in Jacobin for a great, clear, vibrant explanation. One of the best essays of the 2010s.
I probably said more, but this is all I can remember / have notes about. Hope it’s useful!
Great points, Sahar.
One debate that is top of mind is whether open source is good or bad. You touch on this a bit, but to go further, I might also add that there are risk/reward trade-offs with open source AI models. On the one hand, it democratizes access (generally good, imo). On the other, it also gives bad actors easy access to the rules of the system, which they can then manipulate. Having worked in tech, I recognize that sophisticated threat actors run their own experiments on closed-access systems and eventually figure out those rules and manipulate them for their nefarious aims anyway... but sharing the code might actually empower less sophisticated threat actors to do things previously mostly limited to their better-resourced counterparts. I haven't figured out where I stand on this, but it's one of the complicated integrity issues that makes me uneasy.