As mentioned in my weekly newsletter, I’m busier than usual, courtesy of the page proofs for my upcoming book. But the sailing is smooth enough that I can take a break for at least a roundup: a collection of sharp quotes on two topics, LLM/AI companies ripping off artists (which I wrote about over at medium.com) and OpenAI’s call for government oversight (which I wrote about here).
First off, ripping off artists:
@sanctionedany on Mastodon:
If I sell bootleg DVDs on the corner of the street, the police will come and arrest me; but if a company takes every creative effort I have ever published online and feeds it into a neural network that can regurgitate it verbatim, that’s fine.
On the same topic, Emily Bender on Mastodon:
Sharing art online used to be low-risk to artists: freely available just meant many individual people could experience the art. And if someone found a piece they really liked and downloaded a copy (rather than always visiting its url), the economic harms were minimal.
But the story changes when tech bros mistake “free for me to enjoy” for “free for me to collect” and there is an economic incentive (at least in the form of VC interest) to churn out synthetic media based on those collections.
Exactly.
Then, government oversight for AI:
@Pwnallthethings on OpenAI’s latest and most ridiculous “governance of superintelligence” nonsense:
OpenAI’s statement on “governance of superintelligence” is basically “in our humble opinion you should regulate AI, but not *our* company’s AI which is good, but instead only imaginary evil AI that exists only in the nightmares you have after reading too many Sci-Fi books, and the regulatory framework you choose should be this laughably guaranteed-to-fail regulatory framework which we designed here on a napkin while laughing”
Timnit Gebru on the same general topic in The Guardian:
That conversation ascribes agency to a tool rather than the humans building the tool. That means you can abdicate responsibility: “It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.” Well, no—it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.
Finally, my favorite quote of the week, by @jenniferplusplus:
Sam Altman is asking for a brand new agency to be created and staffed on his recommendation in order to regulate that AIs won’t “escape”.
That’s not a thing. It’s sci-fi nonsense. It’s like environmental regulations to prevent creating godzilla. What he’s actually trying to do is invent a set of rules that only he can win, and establish that he can’t be liable for the real harm he does. AI has only ever been a liability laundering machine, and he wants to enshrine that function in law.
He’s also really desperate not to be regulated by the FTC, because their charter is to prevent and remedy actual harm to actual people.
As transparently ridiculous as OpenAI’s latest “Artificial Superintelligence” snake oil is, you’d better brace yourself for the onslaught of AI tea-leaf cognoscenti who will rave about *ASI* with gullible snootiness on every channel.