
AI/LLM/GPT Roundup, February 13

As I wrote more than eight (woah!) years ago in the About section, my secret level/side blog, just drafts, is one part news ticker with commentary on everything related to games, and one part research-adjacent blog posts about game-based learning and ethics. Discussing current AI models fits that agenda pretty well.

What’s more, I started preparing course materials and practical exercises around AI/LLM/GPT models for the upcoming term in April. These will be balanced topic cocktails for second and sixth term students, revolving around creative assistance (game design, art, coding, and writing), development support (production process and project management), and social ramifications (potentials, risks, economic viability, sustainability, equity/fairness, acceptance, workplace integration, and so on).

Thus, on top of my regular posts, linked-list items, and essays, these roundups will serve as food for thought in general and as a repository for upcoming discussions with my students.

Sooooo, let’s go!

1. “Chatbots Could One Day Replace Search Engines. Here’s Why That’s a Terrible Idea.” Will Douglas Heaven in MIT Technology Review, March 29, 2022.

This one’s a bit older. It has held up well, but most of its arguments are familiar by now. One aspect, however, is worth exploring. From an interview with Emily M. Bender, one of the coauthors of the paper over which Timnit Gebru was forced out of Google:

“The Star Trek fantasy—where you have this all-knowing computer that you can ask questions and it just gives you the answer—is not what we can provide and not what we need,” says Bender[.] It isn’t just that today’s technology is not up to the job, she believes. “I think there is something wrong with the vision,” she says. “It is infantilizing to say that the way we get information is to ask an expert and have them just give it to us.”

– – – – –

2. “The Guardian View on ChatGPT Search: Exploiting Wishful Thinking.” The Guardian editorial, February 10, 2023.

Since the British Guardian began, some time ago, to excel at publishing disgustingly transphobic opinion pieces, I have all but stopped linking to it. But this one adds an intriguing metaphor to the preceding point of view:

In his 1991 book Consciousness Explained, the cognitive scientist Daniel Dennett describes the juvenile sea squirt, which wanders through the sea looking for a “suitable rock or hunk of coral to … make its home for life.” On finding one, the sea squirt no longer needs its brain and eats it. Humanity is unlikely to adopt such culinary habits but there is a worrying metaphorical parallel. The concern is that in the profit-driven competition to insert artificial intelligence into our daily lives, humans are dumbing themselves down by becoming overly reliant on “intelligent” machines—and eroding the practices on which their comprehension depends.

The operative term here is “practices,” mind. That’s the important thing.

– – – – –

3. “We Asked ChatGPT to Write Performance Reviews and They Are Wildly Sexist (and Racist).” Kieran Snyder in Fast Company, February 2, 2023.

Across the board, the feedback ChatGPT writes for these basic prompts isn’t especially helpful. It uses numerous cliches, it doesn’t include examples, and it isn’t especially actionable. […] Given this, it’s borderline amazing how little it takes for ChatGPT to start baking gendered assumptions into this otherwise highly generic feedback.

[O]ne important difference: feedback written for female employees was simply longer—about 15% longer than feedback written for male employees or feedback written in response to prompts with gender-neutral pronouns. In most cases, the extra words added critical feedback [while the feedback written for a male employee] is unilaterally positive.

To be expected; sexism, racism, and similar nasty stuff are always baked into historical data. Keeping any AI trained on such data from developing racist-aunt-or-uncle opinions, with the potential to ruin a lot more than merely your Thanksgiving family dinner, will remain one of the biggest challenges in AI.
