
AI/LLM/GPT Roundup, March 06: Lofty Ideals & Harsh Realities

My original plan for today’s roundup involved resources and discussions on questions of copyright, but I had to put that off until next week. I’m on the final stretch of a COVID infection, and I’m not yet feeling up to tackling such a complex topic.

Instead, here are three general sources you might want to read.

First, OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit by Chloe Xiang:

OpenAI was founded in 2015 as a nonprofit research organization by Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, among other tech leaders. In its founding statement, the company declared its commitment to research “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The blog stated that “since our research is free from financial obligations, we can better focus on a positive human impact,” and that all researchers would be encouraged to share “papers, blog posts, or code, and our patents (if any) will be shared with the world.”

Now, eight years later, we are faced with a company that is neither transparent nor driven by positive human impact[.] According to investigative reporter Karen Hao, who spent three days at the company in 2020, OpenAI’s internal culture began to reflect less on the careful, research-driven AI development process, and more on getting ahead, leading to accusations of fueling the “AI hype cycle.” Employees were now being instructed to keep quiet about their work and embody the new company charter.

“There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration,” Hao wrote.

Personally, I don’t think there were any “founding ideals” that could have been “eroded” in the first place; the idea that anyone ever took these lofty ideals at face value, knowing that people like Musk or Thiel were involved, strikes me as a serious case of Orwellian doublethink. It was simply a mask that was convenient for a time, and a very transparent one at that.

Next, You Are Not a Parrot—And a ChatBot Is Not a Human by Elizabeth Weil, an absolutely terrific portrait of Emily M. Bender in New York magazine’s Intelligencer section:

We go around assuming ours is a world in which speakers—people, creators of products, the products themselves—mean to say what they say and expect to live with the implications of their words. This is what philosopher of mind Daniel Dennett calls “the intentional stance.” But we’ve altered the world. We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”

While parrots are great, humans aren’t parrots. Go read the whole thing. I have just one minor quibble: Weil’s throwaway remark concerning books and copyrights, to which I’ll return next week.

Finally, the Federal Trade Commission weighed in on AI in advertising last week, and it’s a blast. Michael Atleson in Keep Your AI Claims in Check at the FTC’s Business Blog:

And what exactly is “artificial intelligence” anyway? It’s an ambiguous term with many possible definitions. It often refers to a variety of technological tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendations. But one thing is for sure: it’s a marketing term. Right now it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.

AI hype is playing out today across many products, from toys to cars to chatbots and a lot of things in between. Breathless media accounts don’t help, but it starts with the companies that do the developing and selling. We’ve already warned businesses to avoid using automated tools that have biased or discriminatory impacts. […]

Advertisers should take another look at our earlier AI guidance, which focused on fairness and equity but also said, clearly, not to overpromise what your algorithm or AI-based tool can deliver. Whatever it can or can’t do, AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.

It certainly won’t stop the hype train, but it’s a decent warning shot.