
OpenAI’s Five Stages of Nonsense

Ed Zitron, pulling no punches:

It’s time to treat OpenAI, Anthropic, Google, Meta, and any other company pushing generative AI with suspicion, to see their acts as an investor-approved act of deception, theft, and destruction. There is no reason to humor their “stages of artificial intelligence”—it’s time to ask them where the actual intelligence is, where the profits are, how we get past the environmental destruction and the fact that they’ve trained on billions of pieces of stolen media, something that every single journalist should consider an insult and a threat. And when they give vague, half-baked answers, the response should be to push harder, to look them in the eye and ask “why can’t you tell me?”

Go and read the whole thing. We all should be outraged at how these billionaire con artists continue to plunder our cultural resources and poison our information ecosystem while distracting us with this five-stages nonsense, along with their completely bogus promise of artificial general intelligence as carrot and their equally bogus threat of a humanity-destroying superintelligence as stick.

As @SwiftOnSecurity put it:

Literally every argument about AI risk is entirely made up from exactly nothing. All the terminology is fart-huffing. It has the same evidentiary basis as a story about floopflorps on Gavitron9 outcompeting nuubidons.

And, boy, does Sam Altman have a keen sense of risk and human values! Lobbying for an “oversight” agency—which would effectively create an AI market-controlling monopoly or oligopoly—to prevent humanity-destroying superintelligence from happening has been described, as I wrote at the time, as calling with a straight face for environmental regulations to prevent the creation of Godzilla. (And don’t miss this Honest Government Ad on the risks of AI!)

Yes, the first cracks do appear (PDF) in the hype machine, and the light falling through them comes from colossal piles of money on fire. Generative AI as such will certainly stay with us and do marvelous things that are both beneficial and evil, but the grandiose earth-shattering, humanity-changing claims from these serial snake-oil merchants will sooner or later go the way of the dodo—just like those grandiose claims around 3D printing, Mars colonies, crypto, blockchain, or Level 5 autonomous cars.


LLM Support for “Clarity” in Creative and Scholarly Writing

If you’re using grammar and spell checkers, you’re already using LLM-enhanced algorithms whether you want it or not. You can argue whether that counts as “AI use” that needs to be disclosed—I’d say it doesn’t. They’re not doing a better or worse job than dedicated grammar and spell checkers did before.

But let’s look at the next step up: “AI enhanced” spell and grammar checkers that promise to “clarify” your text, make it “more concise,” and similar. Now, the “clear and concise” cudgel has been used—but rarely substantiated beyond the most obviously overcooked examples—since Strunk & White inflicted modern history’s most terrible style guide on English speakers. So I became curious, and I experimented with this “clarify” feature a lot. You really can’t say all it ever does is make your text as bland and uninteresting as (in)humanly possible; it also often suggests “stylish” substitutions with results reminiscent of using more flowery expressions at a dinner conversation after the third martini.

But I actually use that feature from time to time, and not only for its entertainment value!

What I use it for is to check whether the original meaning of a complex sentence is retained in the “clarifying” suggestions. If yes, everything’s fine, nothing needs to be changed, and I can move on. If not, there’s obviously some potential for confusion or misunderstanding, so I need to go back to my sentence or paragraph and remedy that.

Which is actually no different from the way I work with copyedited manuscripts that come back from the publisher. Whenever the copy editor suggests a rewrite “for clarity,” that rewrite is almost invariably completely wrong. But it’s really helpful, because now I see the potential for confusion and misunderstanding and can rewrite that sentence or paragraph on my own.

Thus, checking whether “AI” correctly understands your complex sentence or paragraph is—I think—a perfectly legitimate use for creative or scholarly writing. Relying on and click-approving such suggestions, however, seems to me neither legitimate nor helpful in these domains.

For other domains, well, go ahead! This “clarify” feature can certainly assist you in squeezing the last drop of personal identity from your business emails.


AI/LLM/GPT Roundup, February: Some Technical Points of Interest

There are several drafts in my WordPress backend I eventually didn’t bother finishing, partly because I was busy with other things, partly because I feel I’ve wasted enough time and energy writing about AI/LLM/GPT baseline topics here and in my essay on Medium already. But also because I was getting tired of the usual suspects—from antidemocracy-bankrolling techno-authoritarian brunchlord billionaires to hype-cycle-chasing journos who gullibly savor every serial fabulist’s sidefart like a transubstantiated host to singularity acolytes on Xittler who have less understanding of the underlying technologies and attached academic subjects than my late Pseudosasa japonica yet corkscrew themselves into grandiose assertions about the future while flying high from snorting up innuendos of imminent ASI like so many lines of coke.

Thus, I focused on more practical things instead. To start with, I dug into the topic of copyright, with which I’m not yet done. I’m a copyright minimalist and zealous supporter of Creative Commons, Open Science, Library Genesis, Internet Archive, and so on, but you don’t have to be a copyright maximalist to find the wholesale looting of our collective art, science, and culture by that new strain of plantation owners bred from capitalism’s worst actors truly revolting. However, the arguments against the latter have to be very precise against the background of the former, and I’m still at it.

The other thing I did was to focus on aspects—technological, philosophical, social, ethical—that are interesting and a bit more rewarding to think about in the context of large language models. For this roundup, I picked three aspects from the technical side.

The Scaling Process
The first aspect is the scaling process, as observed so far. Making large language models larger not only leads to quantitative improvement but also to qualitative improvement in terms of new abilities like arithmetic or summarization. These abilities are often interpreted as evidence for “emergence,” a difficult (and loaded) term with different definitions in different fields. This recommended article from AssemblyAI from about a year ago is a good introduction, and it also explains why we cannot simply slap the label “emergence” on this scaling process and call it a day. Well, you can define emergent behavior simply as “an ability that is not present in smaller models but present in larger models,” but I believe that’s not only profoundly misguided but actually a cheap parlor trick. While we can’t predict which abilities will pop up at which scale, there’s scant reason to believe that any of them did, do, or will indicate an actual “regime change” that emergence in this definitional context requires, i.e., a fundamental change of the rules that govern a system. But that no “emergence magic” is involved, by all sane accounts, doesn’t make the scaling process any less intriguing.
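To see how cheap that parlor trick can get, here’s a toy sketch (entirely my own illustration, not from the AssemblyAI article, with made-up numbers): a per-token competence that improves perfectly smoothly with scale still looks like a sudden “emergent” jump once you measure it with an all-or-nothing metric, because a ten-token answer only counts when every single token is right. No regime change required.

```python
import math

def per_token_accuracy(log_scale: float) -> float:
    # Hypothetical smooth competence curve: per-token accuracy rises
    # gradually with log model scale (a logistic centered at 10^9 params).
    return 1.0 / (1.0 + math.exp(-(log_scale - 9.0)))

def exact_match(log_scale: float, answer_len: int = 10) -> float:
    # All-or-nothing metric: the answer counts only if every token is
    # right, so the smooth curve gets raised to the answer length.
    return per_token_accuracy(log_scale) ** answer_len

for s in [6, 7, 8, 9, 10, 11, 12]:  # log10 of parameter count
    print(f"10^{s} params: per-token {per_token_accuracy(s):.2f}, "
          f"exact match {exact_match(s):.2f}")
```

The per-token curve crawls smoothly from about 0.05 to 0.95, while the exact-match score sits near zero until the largest scales and then shoots up—an “ability not present in smaller models but present in larger models,” manufactured by nothing but the choice of metric.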

Interpolation and Extrapolation
About a year ago, there was this fantastic article “ChatGPT Is a Blurry JPEG of the Web” by Ted Chiang in The New Yorker. The article makes several important points, but it might be a mistake to think that large language models merely “interpolate” their training data, comparable to how decompression algorithms interpolate pixels to recreate images or videos from lossy compression. Data points in large language models, particularly words, are represented as vectors in a high-dimensional vector space, which are then processed and manipulated in the network’s modules and submodules. And here’s the gist: for very high-dimensional vector spaces like those in large language models, there’s evidence that interpolation and extrapolation become equivalent, so that data processing in large language models is much more complex and much more interesting than mere interpolation.
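That evidence can be made tangible with a quick experiment (a sketch of the general geometric argument, not of anything LLM-specific; sample sizes and dimensions are picked for illustration): the probability that a fresh data point lands inside the convex hull of a training sample (the textbook definition of interpolation) collapses toward zero as the dimension grows.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def in_convex_hull(point, points):
    # `point` lies in the hull iff some lambda >= 0 with sum(lambda) = 1
    # satisfies points.T @ lambda = point; we check feasibility via an LP.
    n = len(points)
    A_eq = np.vstack([points.T, np.ones(n)])
    b_eq = np.append(point, 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.status == 0  # 0 = a feasible solution was found

def fraction_interpolating(dim, n_train=200, trials=50):
    # How often does a fresh Gaussian sample fall inside the convex hull
    # of a Gaussian training set, i.e. count as "interpolation"?
    hits = 0
    for _ in range(trials):
        train = rng.standard_normal((n_train, dim))
        test = rng.standard_normal(dim)
        hits += in_convex_hull(test, train)
    return hits / trials

low_d = fraction_interpolating(dim=2)
high_d = fraction_interpolating(dim=12)
print(f"fraction inside hull: d=2 -> {low_d:.2f}, d=12 -> {high_d:.2f}")
```

With the same 200 training points, nearly every test point interpolates in two dimensions and almost none does in twelve; keeping interpolation common would require exponentially more data per added dimension, which is why the interpolation/extrapolation distinction loses its bite at LLM scale.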

Fractal Boundaries
Finally, a recent paper by former Google Brain and DeepMind researcher Sohl-Dickstein investigates the similarities between fractal generation and neural network training. There’s evidence that the boundary between hyperparameters that lead to successful or unsuccessful training of neural networks behaves like the boundary, in function iteration for fractal generation, between starting points whose iterations converge (remain bounded) and those that diverge (go to infinity). While fractal generation iterates low-dimensional functions and neural network training iterates complex functions acting in a high-dimensional space, these similarities might nevertheless explain the chaotic behavior of hyperparameters in what Sohl-Dickstein calls “meta-loss landscapes,” the spaces that meta-learning algorithms try to optimize. In a nutshell: meta-loss is at its minimum at the “fractal edge,” the boundary between convergence and divergence, which is exactly the region where balancing is most difficult. Lots of stuff, all very preliminary, but highly captivating.
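The converge/diverge boundary itself is easy to reproduce in miniature (a toy sketch in the spirit of the paper, not its actual setup: the network, data, and learning-rate grid below are all made up for illustration): train a one-hidden-unit network with a separate learning rate per weight and mark which hyperparameter pairs blow up.

```python
import numpy as np

# Tiny one-hidden-unit network y_hat = w2 * tanh(w1 * x), with a separate
# learning rate per weight -- the two hyperparameters we scan.
X = np.array([-1.0, 0.5, 1.0])
Y = np.array([-1.0, 0.5, 1.0])

def diverges(lr1, lr2, steps=200):
    # Plain gradient descent on the squared error; report whether the
    # iteration stays bounded or blows up.
    w1, w2 = 1.5, 1.5
    for _ in range(steps):
        h = np.tanh(w1 * X)
        err = w2 * h - Y
        g1 = 2 * np.mean(err * w2 * (1 - h**2) * X)
        g2 = 2 * np.mean(err * h)
        w1 -= lr1 * g1
        w2 -= lr2 * g2
        if not (abs(w1) < 1e6 and abs(w2) < 1e6):
            return True
    return False

# Coarse scan of the (lr1, lr2) plane; in the paper, the boundary between
# the two outcomes turns out to be fractal at fine resolution.
outcomes = [(a, b, diverges(a, b))
            for a in np.linspace(0.1, 5.0, 8)
            for b in np.linspace(0.1, 5.0, 8)]
n_div = sum(d for _, _, d in outcomes)
print(f"{n_div} of {len(outcomes)} hyperparameter pairs diverge")
```

Zooming into the border between the bounded and the exploding region with a much finer grid is where the fractal structure shows up, and it is exactly there, per Sohl-Dickstein, that meta-loss is lowest and tuning is most treacherous.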


Study: “Kool-Aid Drinkers Found to Match Top 1% of Human Thinkers on Standard Cognition Test”

Even though I try to filter out AI hype machine news and focus on things that are actually interesting, sometimes such gullible idiocy comes my way that I can’t resist having a look at it. Here’s such a specimen: “GPT-4 found to match the top 1% of human thinkers on a standard creativity test,” a headline that popped up way too often in my AI/ML/LLM keyword feeds to be ignored.

Let’s have a look at this polished gem of bollocks.

It was distributed by dozens of news sources, most of which referred either to Science Daily (a press-release aggregator) or to another site (run by a guy with an intercollege bachelor’s degree in neuroscience from the UC Riverside College of Natural and Agricultural Sciences who “continues to learn via MOOCs”). Both published the exact same press release, written up by the University of Montana’s News Service Associate Director, about a “study” on “entrepreneurial creativity” conducted at that very university by an Assistant Clinical Professor of Management with a PhD in economics. The study hasn’t been published, let alone peer-reviewed, but “will be presented at the Southern Oregon University Creativity Conference.”

I’m impressed.

You might remember the Sagan Standard that “extraordinary claims require extraordinary evidence”—a standard I try to hammer into my students year after year for their own academic theses. Naturally, it doesn’t apply to shills, but it damn well should apply to academics and news reporting.


AI/LLM/GPT Roundup, June 28: A Whiff of Doom

While we’re still arguing whether the future looks bright or bleak with respect to AI/LLM/GPT, some parts of this future begin to look like a Boschian nightmare from hell. Let’s start with “search.” From Avram Piltch’s “Plagiarism Engine: Google’s Content-Swiping AI Could Break the Internet”:

Even worse, the answers in Google’s SGE boxes are frequently plagiarized, often word-for-word, from the related links. Depending on what you search for, you may find a paragraph taken from just one source or get a whole bunch of sentences and factoids from different articles mashed together into a plagiarism stew. […]

From a reader’s perspective, we’re left without any authority to take responsibility for the claims in the bot’s answer. Who, exactly, says that the Ryzen 7 7800X3D is faster and on whose authority is it recommended? I know, from tracing back the text, that Tom’s Hardware and Hardware Times stand behind this information, but because there’s no citation, the reader has no way of knowing. Google is, in effect, saying that its bot is the authority you should believe. […]

Though Google is telling the public that it wants to drive traffic to publishers, the SGE experience looks purpose-built to keep readers from leaving and going off to external sites, unless those external sites are ecomm vendors or advertisers. […] If Google were to roll its SGE experience out of beta and make it the default, it would be detonating a 50-megaton bomb on the free and open web.

As always, you’ll have to read the article in full for yourself. While I haven’t beta-tested Google’s SGE personally, I’ve explored generative AI well enough to say that this doesn’t strike me as overly alarmist: if Google really goes online with this, search will be screwed. What are we (collectively) going to do about it? I have no answer to that.

Next, data in general, or “knowledge.” You probably remember the Time article, which I referenced several times in blog posts and in my essay on Medium, on how OpenAI used Kenyan workers earning less than $2 per hour to make ChatGPT less toxic. That practice, apparently, is not merely continuing—it’s getting worse on a much larger, even comprehensive scale. Here’s Josh Dzieza at The Verge with “AI Is a Lot of Work”:

A few months after graduating from college in Nairobi, a 30-year-old I’ll call Joe got a job as an annotator—the tedious work of processing the raw information used to train artificial intelligence. AI learns by finding patterns in enormous quantities of data, but first that data has to be sorted and tagged by people, a vast workforce mostly hidden behind the machines. In Joe’s case, he was labeling footage for self-driving cars—identifying every vehicle, pedestrian, cyclist, anything a driver needs to be aware of—frame by frame and from every possible camera angle. It’s difficult and repetitive work. A several-second blip of footage took eight hours to annotate, for which Joe was paid about $10. […]

Much of the public response to language models like OpenAI’s ChatGPT has focused on all the jobs they appear poised to automate. But behind even the most impressive AI system are people—huge numbers of people labeling data to train it and clarifying data when it gets confused. Only the companies that can afford to buy this data can compete, and those that get it are highly motivated to keep it secret. The result is that, with few exceptions, little is known about the information shaping these systems’ behavior, and even less is known about the people doing the shaping. […]

The data vendors behind familiar names like OpenAI, Google, and Microsoft come in different forms. There are private outsourcing companies with call-center-like offices, such as the Kenya- and Nepal-based CloudFactory, where Joe annotated for $1.20 an hour before switching to Remotasks. There are also “crowdworking” sites like Mechanical Turk and Clickworker where anyone can sign up to perform tasks. In the middle are services like Scale AI. Anyone can sign up, but everyone has to pass qualification exams and training courses and undergo performance monitoring. Annotation is big business.

It’s a veritable hellscape into which OpenAI, Google, and Microsoft pitch workers and information alike; you really have to read it for yourself to get the full picture. And it’s getting even worse than that because evidence has popped up that these annotators have begun to use AI for their jobs, with all the “data poisoning” implications you can think of. Boschian, again.

Finally, “social risks.” You might remember how OpenAI’s Sam Altman lobbied for a new “oversight agency” which, as someone quipped, amounts to calling for environmental regulations that prevent the creation of Godzilla. And, who would have thunk, Altman is of course lobbying the EU to water down AI regulations that would address the actual risks of their products in terms of misinformation, labor impact, safety, and similar. On that, here’s another Time exclusive, “OpenAI Lobbied the E.U. to Water Down AI Regulation”:

[B]ehind the scenes, OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company, according to documents about OpenAI’s engagement with E.U. officials obtained by TIME from the European Commission via freedom of information requests. In several cases, OpenAI proposed amendments that were later made to the final text of the E.U. law—which was approved by the European Parliament on June 14, and will now proceed to a final round of negotiations before being finalized as soon as January. […]

One expert who reviewed the OpenAI White Paper at TIME’s request was unimpressed. “What they’re saying is basically: trust us to self-regulate,” says Daniel Leufer, a senior policy analyst focused on AI at Access Now’s Brussels office. “It’s very confusing because they’re talking to politicians saying, ‘Please regulate us,’ they’re boasting about all the [safety] stuff that they do, but as soon as you say, ‘Well, let’s take you at your word and set that as a regulatory floor,’ they say no.”

And, as noted in the article, Google and Microsoft have lobbied the EU in similar ways, which shouldn’t come as a surprise.

So, take a deep breath and enjoy a whiff of actual doom, drifting down from Google’s SGE to exploitation and data poisoning to AI regulations fashioned by the worst possible actors.


AI/LLM/GPT Roundup, June 13: AI Gods and Old Timey Religion

Mercifully, particularly after OpenAI’s Calvinball move of raising to “ASI,” the AI horseshit fire hose has been throttled somewhat. Which doesn’t mean nothing’s happening, of course. But it gives me time to collect quotes on an aspect that fell by the wayside: religion!

Here are some juicy quotes, starting with Graydon Saunders on Mastodon:

AI needs abstraction, extrapolation, and generalization. (People are not as one should say ideal at any of those!)

What we’ve got is a measure of similarity. It’s not any better than a butterfly recognizing flowers, and it’s orders of magnitude more energy intensive. (Especially if you count the hardware manufacturing.)

People want a god. If they have a god, they don’t have to be responsible.

And where there’s any hint of a god, there’s a grift. That’s all this is.

Snake oil’s always been successful, but successful beyond belief when sold from the pulpit. The recurring motif of passing one’s responsibility to someone else, i.e., from human actors to a diffuse entity, was already touched upon in two quotes from the last roundup, that “AI has only ever been a liability laundering machine” and that you can abdicate responsibility by saying “It’s not me that’s the problem. It’s the tool. It’s super-powerful.”

Next, Rob Pickering at Mastodon:

AI isn’t going to press the button, but it is already being used as an occult tool by humans whose only qualities are wealth and self serving ambition to exert control over societal opinion.

Sounds eerily familiar, right? Then, Kenan Malik in the Guardian:

We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be. […] Too often when we talk of the “problem” of AI, we remove the human from the picture. We practise a form of what the social scientist and tech developer Rumman Chowdhury calls “moral outsourcing”: blaming machines for human decisions.

Such moral outsourcing works terrifically well, naturally and historically, when superhuman, godlike abilities are conveniently projected to any given entity, be that “god” or “satan” or “ASI.” Finally, not least for its entertainment value, this spirited quote from Charlie Stross:

The AI singularity is horseshit. Warmed-over Christian apocalypse horseshit (minus the God/Jesus nonsense). Roko’s Basilisk should be the final clue: it emerges logically from the whole farrago of nonsense but it boils down to AI Satan punishing the acausal infidels for their AI original sin. If it walks like a duck and quacks like a duck it’s a duck: or in this case, a Christian heresy.

Seems as if Neil Gaiman’s New Gods have become just another pack of Old Gods rather quickly.


AI/LLM/GPT Roundup, May 24: From Selling Bootlegs to Creating Godzilla

As mentioned in my weekly newsletter, I’m busier than usual, courtesy of the page proofs for my upcoming book. But sailing is smooth enough that I can at least take a break for a roundup, with a collection of sharp quotes on two topics: LLM/AI companies ripping off artists (on which I wrote elsewhere) and OpenAI’s call for government oversight (on which I wrote here).

First off, ripping off artists:

@sanctionedany on Mastodon:

If I sell bootleg DVDs on the corner of the street the police will come and arrest me, if a company takes every creative effort i have ever published online and feeds it into a neural network that can regurgitate it verbatim, that’s fine

On the same topic, Emily Bender on Mastodon:

Sharing art online used to be low-risk to artists: freely available just meant many individual people could experience the art. And if someone found a piece they really liked and downloaded a copy (rather than always visiting its url), the economic harms were minimal.

But the story changes when tech bros mistake “free for me to enjoy” for “free for me to collect” and there is an economic incentive (at least in the form of VC interest) to churn out synthetic media based on those collections.


Then, government oversight for AI:

@Pwnallthethings on OpenAI’s latest and most ridiculous “governance of superintelligence” nonsense:

OpenAI’s statement on “governance of superintelligence” is basically “in our humble opinion you should regulate AI, but not *our* company’s AI which is good, but instead only imaginary evil AI that exists only in the nightmares you have after reading too many Sci-Fi books, and the regulatory framework you choose should be this laughably guaranteed-to-fail regulatory framework which we designed here on a napkin while laughing”

Timnit Gebru on the same general topic in The Guardian:

That conversation ascribes agency to a tool rather than the humans building the tool. That means you can abdicate responsibility: “It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.” Well, no—it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.

Finally, my favorite quote of the week, by @jenniferplusplus:

Sam Altman is asking for a brand new agency to be created and staffed on his recommendation in order to regulate that AIs won’t “escape”.

That’s not a thing. It’s sci-fi nonsense. It’s like environmental regulations to prevent creating godzilla. What he’s actually trying to do is invent a set of rules that only he can win, and establish that he can’t be liable for the real harm he does. AI has only ever been a liability laundering machine, and he wants to enshrine that function in law.

He’s also really desperate not to be regulated by the FTC, because their charter is to prevent and remedy actual harm to actual people.

As transparently ridiculous as OpenAI’s latest “Artificial Superintelligence” snake oil is, you’d better brace yourself for the onslaught of AI tea leaves cognoscenti who will rave over *ASI* with gullible snootiness on all channels.


Shorter Sam Altman Senate Hearing: “Let Me Bamboozle You Into Handing OpenAI a Monopoly on a Silver Platter”

One could rightfully ask how seriously to take a Senate hearing when the person supposed to testify is allowed to first dazzle its members with what Emily Bender calls a Magic Show.


Sam Altman Wows Lawmakers at Closed AI Dinner: “Fantastic…forthcoming”

OpenAI CEO Sam Altman spoke to an engaged crowd of around 60 lawmakers at a dinner Monday about the advanced artificial technology his company produces and the challenges of regulating it.

The dinner discussion comes at a peak moment for AI, which has thoroughly captured Congress’ fascination. […] “He gave fascinating demonstrations in real time,” Johnson said. “I think it amazed a lot of members. And it was a standing-room-only crowd in there.”

And of course it gets worse from there.

During the hearing of the Senate Judiciary Subcommittee on Privacy and Technology itself, Sam Altman ostensibly calls for regulation, licensing, and oversight. But what he’s really asking for is a monopoly for OpenAI.

These are his propositions:

  1. Form a new government agency charged with licensing large AI models, and empower it to revoke that license for companies whose models don’t comply with government standards.
  2. Create a set of safety standards for AI models, including evaluations of their dangerous capabilities. For instance, models would have to pass certain tests for safety, such as whether they could “self-replicate” and “exfiltrate into the wild”—that is, to go rogue and start acting on their own.
  3. Require independent audits, by independent experts, of the models’ performance on various metrics.


In other words: please create a new agency to oversee and license large AI models that isn’t the FTC, because the FTC has already made it repeatedly clear that it will clamp down on actual risks, on what makes LLM technologies actually dangerous: consumer safety, unsubstantiated claims, fraud, discriminatory results, training-data transparency, privacy, using artists’ work without permission (aka theft), and so on.

So no, not the FTC! Not the agency that’s been called “Washington’s most powerful technology cop!”

Instead, please create a new, better agency that will concern itself with imaginary threats like “self-replicating rogue AI acting on its own.” An agency that audits “performance on various metrics” and sets standards for AI in the same way governments “regulate nuclear weapons.” An agency, moreover, that gives out government licenses and makes it prohibitively expensive for potential competitors to enter the market. And, for good measure, please stack this new agency with our own experts, if that isn’t too much to ask.

It’s true, there’s a general problem with recruiting, for oversight agencies, experienced professionals who are not deeply in cahoots with the industry they’re supposed to oversee—a problem that, for example, has most visibly plagued the Federal Communications Commission since forever. And it’s not as if Altman were in any way subtle about it:

Still, many of the senators seem eager to trust Altman with self-regulation. Sen. John Neely Kennedy (R-La.) asked if Altman himself might be qualified to oversee a federal regulatory body overseeing AI. Altman demurred, saying he loves his current job, but offered to have his company send a list of suitable candidates.

You can’t make this up.

For some senators, at least, having a giant sign nailed on your forehead that says “Hello I’m a Fraudster and Con Man!” reliably counts as a stellar letter of recommendation.


That Which We Call Autonomous / By Any Other Name Would Smell as Sweet

Just to be clear, here’s where we stand right now with regard to LLM technology:

  • LLM-based tools and applications will be extremely useful; they will assist us in practically everything we do (whether it makes sense or not); and they will change how we work in every imaginable field.
  • LLM-based tools and applications, if left unchecked and unregulated, will be used to create a global privacy nightmare; they will pull the rug out from under artists and writers and scientists not because they’re better, but because they loot, anonymize, and disseminate people’s work wholesale; and they will cause information contamination on an unprecedented scale.
  • What LLM tools and applications are decidedly not is “intelligent”; they’re not about to become “sentient”; and they’re not fixing to annihilate humanity (but they can certainly assist humans in that endeavor).

So why all that ever-crescendoing lament about LLM taking over and annihilating humanity? The reason is that “Skynet” is merely an extreme case of what’s actually attractive to investors.

Cory Doctorow:

The entire case for “AI” as a disruptive tool worth trillions of dollars is grounded in the idea that chatbots and image-generators will let bosses fire hundreds of thousands or even millions of workers. […]

We’ve seen this movie before. It was just five years ago that we were promised that self-driving technology was about to make truck-driving obsolete. […] But the twin lies of self-driving trucks—that these were on the horizon, and that they would replace 3,000,000 workers—were lucrative lies. They were the story that drove billions in investment and sky-high valuations for any company with “self-driving” in its name.

Will Kaufman, in response:

the most annoying part is that AI fearmongering *is* AI marketing. “It will replace people” is absolutely the value-add for AI. It’s why every C-suite in the country is suddenly obsessed.

“AI will gain sentience and replace people!!!”
“You mean we can cut payroll without cutting productivity???”

Dave Rahardja, also in response:

This is the fundamental reason that #AI is getting so many billions poured into it. It’s the lure of replacing *expensive* labor with automation. Corporations have endlessly squeezed blood from lower-cost labor and they are salivating at the prospect of getting rid of high-cost labor.

This is unfortunate, because AI has *actual* uses. As models get more specialized and smaller, I can see AI automating a lot of rote work away at reasonable cost. Unfortunately, the corporate hype is so strong right now it’s muddying all the conversations

The take-away is, just as with fully autonomous vehicles, that these people lie. They lie! It’s as simple as that. Investors certainly don’t believe the crackpots screaming from the rooftops that LLM will kill us all (yet they wouldn’t be horrified at the prospect if they did). However, as a more high-pitched variation of the ostensibly saner “We’re Close to AGI” bullshit narrative, delivered with a straight face by con artists like Sam Altman, this doomsday hype feeds into and gives emphasis to the promise that LLM will make as many people redundant and replaceable and obsolete as possible—that’s what attracts money.

For the mindset of investment opportunists, scale doesn’t matter—“autonomous” is the Open Sesame for their deepest purses, no matter if it’s about self-driving trucks or coding assistance or Skynet dropping the bomb.


Can We Mind-Upload These Tech Bros Into Their Chatbot Trainers and Send Them on Fully Autonomous Rockets to Mars?

Turns out, when I wrote about Geoffrey Hinton’s CBS interview, I was still too generous on several counts. In the Guardian last week, his position not only degraded to the usual “AI will take over humanity” tech bro nonsense, but proceeded into the bonkers territory of “maybe these big models are actually much better than the brain.”

And then he goes full Skynet from there:

You need to imagine something that is more intelligent than us by the same degree that we are more intelligent than a frog. It’s all very well to say: “Well, don’t connect them to the internet,” but as long as they’re talking to us, they can make us do things.

In another article, in the New York Times, he makes a big fuss about leaving Google “because of the dangers ahead,” only that these “dangers” are the exact same smoke grenades all the other tech bros are throwing left and right with abandon. And to add insult to injury, he happily dismisses the actual threats and throws the whistleblowers who’ve warned about these real threats for years—and actually got fired from Google for it—under the bus.

What a trash human. And then there’s this stupendous gem:

I’m not a policy guy. I’m just someone who’s suddenly become aware that there’s a danger of something really bad happening. I wish I had a nice solution, like: “Just stop burning carbon, and you’ll be OK.” But I can’t see a simple solution like that.

If you listen long enough to Hinton, you begin to wish Skynet became a reality. Why, using conventional notions of intelligence for a moment, must those people who currently build our most intelligent systems be the least intelligent? I wonder.


Keep Your Fire Extinguisher Handy for This Interview with Geoffrey Hinton on ChatGPT

Barely one minute into this CBS interview, Geoffrey Hinton, the “Godfather of AI,” as CBS puts it, has already set my brain on fire with this claim about text synthesis machines:

You tell it a joke and, not for all jokes, but for quite a few, it can tell you why it’s funny. It seems very hard to say it doesn’t understand when it can tell you why a joke is funny.

High time we reassessed how smart these “legendary” AI researchers really are and removed them from the pedestals we’ve built.

Certainly, everything he goes on to say about the neural net approach is true, that it became the dominant approach and eventually led us to where we are at this moment in time. And, to his merit, he doesn’t think that we’re actually creating a brain:

I think there’s currently a divergence between the artificial neural networks that are the basis of all this new AI and how the brain actually works. I think they’re going different routes now. […] All the big models now use the technique called back propagation […] and I don’t think that’s what the brain is doing.

But that doesn’t keep him from proposing that LLM are becoming equivalent to the brain.

To start with, he “lowered his expectations” for the time it would take to create general purpose AI/AGI from “20–50 years” to “20 years” and wouldn’t completely rule out that it “could happen in 5.” Then, explaining the difference between LLM technology and the brain as revolving around quantities of communication and data, he stresses how much ChatGPT “knows” and “understands.” And he even claims that, yes, you can say an LLM is a sophisticated auto-complete system, “but so is the brain,” as both systems have to “understand the sentence” to do all the things they do. And he supports that claim with a translation example that completely erases any possible difference between “understanding” and “computing probabilities.”
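To make concrete what “computing probabilities” means here, a toy sketch of my own (not from the interview): a bigram auto-complete that predicts the next word purely from co-occurrence counts, with nothing resembling understanding anywhere in sight.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM does the same kind of thing with vastly more
# context and parameters, but the principle is counting, not comprehension.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word followed which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word):
    """Return the most probable next word, or None if the word is unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(complete("sat"))  # "on" -- "sat" was always followed by "on"
print(complete("the"))  # the most frequent follower of "the"
```

Scaled up by twelve orders of magnitude, this is the “sophisticated auto-complete” in question; whether scaling it also turns it into “the brain” is precisely the claim under dispute.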

Of course, I can see how you get there, by switching from using a model of the brain to build an LLM to taking that LLM and how it works to create a model of the brain. It’s a trick, a sleight of hand! Because now you can claim that LLM will develop into General Purpose Intelligence/AGI because it effectively works like the brain—which is like claiming you’re close to creating an antigravity device by recreating ever more perfect states of weightlessness through parabolic flights. I can’t be kind about this, but if you really believe that text synthesis machines will become self-aware and develop into entities like Skynet or Bishop, then congratulations, here’s your one-and-twenty Cargo Cult membership card.

There is an interesting bit around the 20-minute mark when Hinton talks about how the LLM of the future might be able to adapt its output to “different world views,” which is a fancy way of saying that it might become capable of generating output that is context-sensitive with regard to the beliefs of the recipient. All the ramifications such developments would entail merit their own blog post. Other interesting points this interview touches upon are autonomous lethal weapons and the alignment problem.

But just when my brain had cooled off a bit, Hinton lobs the next incendiary grenade at it by creating the most idiotic straw man imaginable around the concept of “sentience”:

When it comes to sentience, I’m amazed that people can confidently pronounce that these things are not sentient, and when you ask them what they mean by sentient, they say, well, they don’t really know. So how can you be confident they’re not sentient if you don’t know what sentience means?

I need a drink.

Finally, there’s this gem:

To their credit, the people who’ve been really staunch critics of neural nets and said these things are never going to work—when they worked, they did something that scientists don’t normally do, which is “oh, it worked, we’ll do that.”


I’ve read my Kuhn and my Feyerabend very thoroughly. Neural net vs. symbolic logic is about competing approaches to AI, not paradigm change, and the actual problem with paradigm change is that, for an interim period, the established paradigm is likely to have as much or even more explanatory power, and indeed work better, than the new one.

We really should do something about these pedestals.


Absolute banger of a post (as Andy Baio put it) by Nilay Patel:

Okay, now here’s the problem: if Ghostwriter977 simply uploads “Heart on my Sleeve” without that Metro Boomin tag, they will kick off a copyright war that pits the future of Google against the future of YouTube in a potentially zero-sum way. Google will either have to kneecap all of its generative AI projects, including Bard and the future of search, or piss off major YouTube partners like Universal Music, Drake, and The Weeknd. Let’s walk through it.

It might be a good idea to start stocking popcorn.


Why Eliezer Yudkowsky’s Time Op-Ed on How Current AI Systems Will Kill Us All Is Even More Unhinged than You Think

Once upon a time, for more than a decade, I had a Time magazine print subscription; it was a pretty decent publication at the time, especially if you consider that the only competing weekly was Newsweek. However, the fact that Time magazine has now gone and published Eliezer Yudkowsky’s “Pausing AI Developments Isn’t Enough: We Need to Shut It All Down” doesn’t imply that Yudkowsky isn’t a shrieking, dangerous crackpot; it merely implies that Time magazine, no longer being the magazine it once was, is making use of the riches it found at the bottom of the barrel.

The article immediately sets your brain on fire before it even starts, with the ridiculously self-aggrandizing assertion that he’s “widely regarded as a founder of the field,” and that he’s been “aligning” “Artificial General Intelligence” since 2001.

Which doesn’t come as a surprise. The reason that Eliezer Yudkowsky is also widely known for his stupendous intelligence and monumental expertise is that he rarely fails to tell you, and I can see no reason why we shouldn’t believe him.

Teaser (not from the article):

I am a human, and an educated citizen, and an adult, and an expert, and a genius… but if there is even one more gap of similar magnitude remaining between myself and the Singularity, then my speculations will be no better than those of an eighteenth-century scientist.

His Machine Intelligence Research Institute—the successor of the Singularity Institute—is associated with Effective Altruism and LessWrong, which in turn are military-grade magnets for longtermists, about whom Émile P. Torres wrote last year and again more recently. Yudkowsky, to sum it up, is a vibrant member of TESCREAL—Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism—a secular cult whose two-decade-old AGI Is Just Around the Corner! sales pitch perpetually oscillates between AGI utopia and AGI apocalypse. Why do so many people fall for it? There’s an unsound fascination attached to both utopian and apocalyptic scenarios in the context of personal existence, fed by the human desire to either live forever or die together with everyone else. I get that, certainly. But it shouldn’t be the lens through which we look at AI and other things in the world.

Now, that article. As a reminder, we’re talking about large language models here: text synthesis machines, trained with well-understood technology on ginormous piles of undisclosed scraped data and detoxified by traumatized exploited workers. These machines, as has been pointed out, present you with statements about what answers to your questions would probably look like, which is categorically different from what answers in the proper sense actually are. All that, however, keeps being blissfully overlooked or proactively forgotten or dismissed through armchair expertise in neuroscience.

Against this backdrop, you can now fully savor the delicacies on the menu of urgent demands in Yudkowsky’s article:

  • Enact a comprehensive indefinite worldwide moratorium on training LLM.
  • Shut down all the large GPU clusters for LLM training.
  • Cap the allowed computing power to train AI systems.
  • Track all GPUs sold.
  • Destroy rogue data centers by airstrike and be prepared for war to enforce the moratorium.

If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.

  • Be prepared to launch a full-scale nuclear conflict to prevent AI extinction scenarios.

Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.

Bonus demand on Twitter (screenshot):

  • Send in nanobots to switch off all large GPU clusters.

Sounds wild? Here’s his rationale (if you can call it that):

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

On top of that, he added a bizarre story about his daughter losing a tooth “on the same day” that “ChatGPT blew away those standardized tests” (it didn’t), and his thoughts that “she’s not going to get a chance to grow up.” But don’t let that fool you. Here are two Twitter gems that put his sentiments in perspective; one stunner about when the “right to life” sets in (screenshot | tweet), and another one he later deleted, answering a question about nuclear war and how many people he thinks are allowed to die in order to prevent AGI (screenshot | delete notice):

There should be enough survivors on Earth in close contact to form a viable reproduction population, with room to spare, and they should have a sustainable food supply. So long as that’s true, there’s still a chance of reaching the stars someday.

Well, that’s longtermism for you! And the follow-up problem with longtermism is that we all know what kind of “population” will most likely survive, and is even expected and preferred to survive in certain circles.

Unhinged soapbox meltdowns like this, about how LLM systems are on the verge of becoming alive and will inevitably annihilate humanity in a global extinction event, do us no favors. All they do is distract us from the actual risks and dangers of AI and what we really should do: research the crap out of LLM and put it to use for everything that makes sense but also implement strict protocols; enforce AI liability, transparency, and Explainable AI where needed; hold manufacturers and deployers of AI systems responsible and accountable; and severely discourage the unfettered distribution of untested models based on undisclosed data sets, theft, and exploitation (besides the posts on this here blog, I wrote a lengthy illustrated essay on these topics over at Medium).

Personally, I’ve always been, and still am, very excited about the development and promises of advanced AI systems (with AlphaGo and “move 37” in particular as a beloved milestone back in 2016), and I will be happily engaging my game design students with LLM concepts and applications this term. How does this excitement rhyme with my positions on “intelligent” AI and Explainable AI? Well, it’s a matter of how you align creativity and accountability!

With regard to the latter, it’s perfectly fine when nobody’s able to explain how a creative idea or move came about; but it’s not fine at all if nobody’s able to explain why your credit score dropped, why your social welfare benefits were denied, or why you’ve been flagged by the CPS as a potential abuser.

And with regard to the former, I don’t think “intelligence,” whatever it is, is a precondition for being creative at all.


The German Ethics Council’s Statement on “Humans and Machines”

Regarding the German ethics council’s published statement (PDF) on “Humans and Machines: Challenges Posed by Artificial Intelligence,” the first thing to notice is that it has been referenced and commented on a lot on social media, especially by those who haven’t read it. And most of what’s been commented on, which is the second thing to notice, are personal opinions (or interpretations, I’ll come to that) by high-ranking authors of that statement, in interviews and even on their own website, that don’t quite rhyme with what they actually wrote.

For example, in this Süddeutsche Zeitung article, the ethics council’s chairperson and deputy chairperson are quoted as saying, respectively, “AI must not replace humans” and “AI applications cannot replace human intelligence, responsibility, and evaluation.” But what their published statement actually does is discuss, in much more nuanced ways, “the social dimension of the relationship between delegating, expanding, and reducing [and how] people can be affected differently by processes of delegating or replacing” (page 255). Furthermore, the same newspaper article proclaims that “The German ethics council has now also dealt with questions relating to the relationship between humans and machines and has spoken out in favor of strict limitations on the use of AI,” and mdr aktuell quotes the council’s chairperson as speaking out for “pausing the further development of artificial intelligence.” All that’s quite curious, because none of that is part of the ethics council’s published statement itself.

Here are two detailed quotes from the published statement (the original German text is so horribly stilted that even Google Translate had its problems; I tried to defuse it a bit, but don’t expect miracles). They paint a different and, as mentioned, more nuanced picture:

For such a context-specific perspective, the German ethics council looked into representative applications in medicine, K-12 (school) education, public communication, and public administration. Fields were deliberately selected where the penetration of AI-based technologies is very different, and where different extents of the replacement of previously human actions by AI can be illustrated. In all four fields, deployment scenarios are characterized by sometimes significant relationship and power asymmetries, which makes the responsible use of AI and the consideration of the interests and well-being of particularly vulnerable groups of people all the more important. Considering the different ways in which AI is used, and their respective degrees of delegation to machines, allows nuanced ethical considerations to be made. (p.45)


In order to prevent the diffusion of responsibility, the use of AI-supported digital technologies must be designed in the sense of decision support and not decision replacement. It must not come at the expense of effective control options. Those affected by algorithmically supported decisions must be given the opportunity to access the basis for the decision, particularly in areas with a high degree of intervention. This presupposes that, at the end of the technical procedures, decision-making people remain visible who are able and obliged to take responsibility. (p.264)

If anything, the news media quotes from the council’s chairpersons are interpretations of their published statement, which is odd and raises questions. However, let’s proceed to an overview of what the ethics council’s published statement actually contains.

The first part is a historical summary and definition of “AI,” which includes the challenges in defining “intelligence” and “reason” and touches upon topics like authorship, intention, responsibility, and the relationship between humans and technology in general. All in all, it’s pretty solid, and I’m not going to nitpick.

The second part dives into four representative fields of application: medicine, K–12 (school) education, public communication, and public administration.

  1. The discussion of the first field, medicine, is solid and specific, and even includes psychotherapy. It’s a surprisingly positive view, provided further advances are conducted in a careful and risk-conscious manner, with adequate controls and protocols in place.

  2. The discussion of the second field, K–12 education, is excessively vague and unspecific and doesn’t come up with anything resembling actionable advice. Not only does it become clear right away that the responsible council members are hopelessly stuck in the learning model of cognitivism; they also insert a gratuitous swipe at behaviorism to broadcast their blissful obliviousness to conceptual innovations and applications in that field (up to and including game-based learning). The best they can come up with are unspecified advantages of “tutor systems” and “classroom analytics,” and the risks of “attention monitoring” and “affect recognition,” which, like, yeah sure. However, I suspect that the complete uselessness of this chapter merely reflects how utterly obsolete and unsalvageable K–12-based educational systems are in the Information Age.

  3. The discussion of the third field, public communication and how people form opinions, can be called pretty solid again, but sadly for the wrong reasons. On the one hand, their severe criticism is certainly valid—ranging from algorithmic timelines to sensationalism and clickbait to the challenges of content moderation to chilling effects to undesirable shifts in public discourse (basically the Overton Window effect, even if they don’t refer to it explicitly). Many of these arguments I presented myself, right before COVID-19 hit, as a consultant on right-wing extremism and discrimination in gaming communities. But here’s the hitch: this chapter’s relation to current advances in AI technologies is tenuous at best, and much of that criticism should be leveled as ferociously at good old traditional media as well, from newspapers to news channels and other TV formats. (Which I do in my presentations and recommendations.) So yes, the council’s basically correct here, and their case for regulation is reasonable. But the whole chapter strikes me as both too tangential and too narrow in scope.

  4. The discussion of the fourth and last field, public administration, is again solid and presents well-argued points and recommendations with regard to automatic and algorithmic decision-making systems, automation bias and algorithmic bias, and data-related systemic distortions and discrimination. There’s nothing to complain about here.

To sum it up, the ethics council’s published statement is neither as terrible as some have made it out to be, nor as useful in certain areas as it could, and probably should, be. For all its weaknesses, however, it doesn’t fall for AI hype, addresses actual risks instead of illusory ones, and presents views and recommendations not altogether different from what other experts and scientists say, including the authors of the “Stochastic Parrots” paper.

Which, I fear, is too nuanced for politicians to comprehend and act upon. And that high-ranking members of the ethics council themselves undermine their published statement by front-loading the public discussion with urgent opinion soundbites doesn’t bode well for rational discussions or a positive impact either.

*As yet, no English-language version of the council’s published statement exists; all the quotes in this post are translated by yours truly with some help from Google Translate. (For this post’s preview image, by the way, click here.)


Last week, I linked to and commented on reactions and responses to the Future of Life Institute’s open letter “Pause Giant AI Experiments.” Sayash Kapoor and Arvind Narayanan had written another excellent comment, “A Misleading Open Letter About Sci-fi AI Dangers Ignores the Real Risks”:

We agree that misinformation, impact on labor, and safety are three of the main risks of AI. Unfortunately, in each case, the letter presents a speculative, futuristic risk, ignoring the version of the problem that is already harming people. It distracts from the real issues and makes it harder to address them. The letter has a containment mindset analogous to nuclear risk, but that’s a poor fit for AI. It plays right into the hands of the companies it seeks to regulate.

Here’s the overview risk matrix they created for the occasion, to which their item-by-item explanations provide the depth.


Adam Conover of Adam Ruins Everything fame, ripping hype marketing from “self-driving cars” to “AI” to shreds on his YouTube channel:

All of this has gotten people worried that a super intelligent AI is right around the corner, that the robots are gonna take over[.] It’s all bullshit. That fear is a tech bro fantasy. Designed to distract you from the real risk of AI. That a bunch of dumb companies are lying about what their broken tech can do so they can trick you into using a worse version of things we already have, all while stealing the work of real people to do it.

Even though the video was uploaded only yesterday, it’s glaringly obvious that most of it was recorded roughly two weeks ago—that’s how fast things move right now. Thus, if you’ve followed the news cycle in general and this blog in particular, you will recognize most if not all of the events and motifs, only much funnier and with a lot more f-words.


The “Stochastic Parrots” authors responded to the Future of Life Institute’s “Pause Giant AI Experiments” open letter:

It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a “flourishing” or “potentially catastrophic” future. Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media. This not only lures people into uncritically trusting the outputs of systems like ChatGPT, but also misattributes agency. Accountability properly lies not with the artifacts but with their builders. […]

Contrary to the letter’s narrative that we must “adapt” to a seemingly pre-determined technological future and cope “with the dramatic economic and political disruptions (especially to democracy) that AI will cause,” we do not agree that our role is to adjust to the priorities of a few privileged individuals and what they decide to build and proliferate.

Read the whole thing. Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell are positively on fire.

(The post-preview image of a delivery robot traveling into an uncertain future is here. Also, for a better understanding of the Future of Life Institute and their open letter’s background, here’s an introduction to longtermism by Émile P. Torres, which I had belatedly added to my initial post on “Pause AI” two days ago.)


Bill Gates Again Manages to Keep AI in Perspective (And Autonomous Driving)

Earlier this week, I wrote about how Bill Gates isn’t carried away by AGI nonsense, while “Microsoft Research” simultaneously went off the rails with a publication that can charitably be called a badly written sf novelette disguised as a paper.

I still had GatesNotes open in a browser tab, and last night a new entry popped up on “The Rules of the Road Are About to Change,” with a take on AI for autonomous vehicles that is interesting for two different reasons.

First, there’s the LLM-related approach to training:

When you get behind the wheel of a car, you rely on the knowledge you’ve accumulated from every other drive you’ve ever taken. That’s why you know what to do at a stop sign, even if you’ve never seen that particular sign on that specific road before. Wayve uses deep learning techniques to do the same thing. The algorithm learns by example. It applies lessons acquired from lots of real world driving and simulations to interpret its surroundings and respond in real time.

The result was a memorable ride. The car drove us around downtown London, which is one of the most challenging driving environments imaginable, and it was a bit surreal to be in the car as it dodged all the traffic. (Since the car is still in development, we had a safety driver in the car just in case, and she assumed control several times.)

I think this is a nifty approach; I can imagine it will push things forward quite a bit.

Then, Gates’s personal predictions on autonomous driving are equally interesting, not least because it’s refreshingly free of AI hype:

Right now, we’re close to the tipping point—between levels 2 and 3—when cars are becoming available that allow the driver to take their hands off the wheel and let the system drive in certain circumstances. The first level 3 car was recently approved for use in the United States, although only in very specific conditions: Autonomous mode is permitted if you’re going under 40 mph on a highway in Nevada on a sunny day.

At Level 3 (SAE 3), to recall, the driver can have their “eyes off” the road and busy themselves with other tasks, but must “still be prepared to intervene within some limited time.” The level of automation can be thought of “as a co-driver or co-pilot that’s ready to alert the driver in an orderly fashion when swapping their turn to drive.”
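For quick reference, the SAE J3016 ladder the discussion revolves around, paraphrased in a lookup table (my wording, not the standard’s; consult SAE J3016 itself for the authoritative definitions):

```python
# Paraphrased summary of the SAE J3016 driving-automation levels.
SAE_LEVELS = {
    0: "No automation: the human does all the driving.",
    1: "Driver assistance: steering OR speed support (e.g., cruise control).",
    2: "Partial automation: steering AND speed support; hands on, eyes on.",
    3: "Conditional automation: eyes off, but be ready to intervene when asked.",
    4: "High automation: no human intervention needed within a limited domain.",
    5: "Full automation: drives anywhere a human could; no driver required.",
}

def eyes_off(level):
    """From SAE 3 upward, the system drives and the human may look away."""
    return level >= 3

print(SAE_LEVELS[3])
```

The jump the text calls monumental is visible right in the table: between 2 and 3, responsibility for monitoring the road moves from the human to the system.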

Make no mistake—this step from SAE 2 to SAE 3 is a monumental one. However, just like everything else the AI hype promises, SAE 3 isn’t right around the corner either. Gates again:

Over the next decade, we’ll start to see more vehicles crossing this threshold. […]

A lot of highways have high-occupancy lanes to encourage carpooling—will we one day have “autonomous vehicles only” lanes? Will AVs eventually become so popular that you have to use the “human drivers only” lane if you want to be behind the wheel?

That type of shift is likely decades away, if it happens at all.

If SAE 3 really happens “over the next decade,” that would be in the ballpark of what I consistently (and insistently) predicted around seven or eight years ago—that even SAE 3 as autonomous “co-pilot” driving would take fifteen years of development and infrastructural changes at least to become feasible. (A prediction that got screamed at a lot at the time, metaphorically speaking.)

But all that notwithstanding, I do not think that the scenario of autonomous individual cars will serve us well. In 2019, Space Karen took a swipe at Singapore’s government for not being “welcoming” to Tesla and not “supportive of electric vehicles,” which is complete bullshit in and of itself, of course. (I visited CREATE in 2019, Singapore NTU’s Campus for Research Excellence and Technological Enterprise, and the halls were filled to the brim with research projects on autonomous electric transportation.) But in Singapore, the focus is on public and semi-public transportation, not private cars. And I gotta say, I loved it when the Singaporean secretary for the environment and water resources, Masagos Zulkifli, shot back by saying that Singapore is prioritizing public transportation: “What Elon Musk wants to produce is a lifestyle. We are not interested in a lifestyle. We are interested in proper solutions that will address climate problems.”


Thus, while I’m pretty excited about advances in autonomous driving, I’d be even more excited about advances in autonomous driving toward sustainable transportation.


The day before yesterday, I worked my way through this terrible “Pause Giant AI Experiments” open letter, but didn’t get around to commenting on it. Luckily, I don’t have to! Emily Bender has meanwhile torn into it, into the institution* that published it, and into the letter’s footnotes and what they refer to:

Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the “Sparks paper” and OpenAI’s non-technical ad copy for GPT4. ROFLMAO.

What it boils down to is this. On the one hand, one can and should agree with this open letter that the way LLM development is handled right now is really bad for everybody. On the other, this open letter advocates stepping on the brake by stepping on the gas to accelerate the AI hype, which is entirely counterproductive:

I mean, I’m glad that the letter authors & signatories are asking “Should we let machines flood our information channels with propaganda and untruth?” but the questions after that are just unhinged #AIhype, helping those building this stuff sell it.

Go read the whole thing.

*Addendum: If you want to learn more about longtermism—both the Future of Life Institute and all the people cited in footnote #1 except the Stochastic Parrots authors are longtermists—here’s an excellent article by Émile P. Torres on “The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’” (h/t Timnit Gebru).


Sometimes I wonder if there will ever come a time when I will no longer find everything about Microsoft, Windows, and its founders and their “foundations” and “philanthropies” outright revolting. But, in stark contrast to this recent inanity by “Microsoft Research,” Bill Gates keeps things in perspective:

What is powering things like ChatGPT is artificial intelligence. It is learning how to do chat better but can’t learn other tasks. By contrast, the term artificial general intelligence refers to software that’s capable of learning any task or subject. AGI doesn’t exist yet—there is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all. […]

These “strong” AIs, as they’re known, will probably be able to establish their own goals. What will those goals be? What happens if they conflict with humanity’s interests? Should we try to prevent strong AI from ever being developed? These questions will get more pressing with time.

But none of the breakthroughs of the past few months have moved us substantially closer to strong AI.

Most of his post then goes on about reasonable business applications, sprinkled with pseudo-naïve bullshit about how AI, e.g., will free up people’s time so they can care more for the elderly and some such. (I’m not making this up.) Which, however, culminates in a clincher that brings me right back to where I started at the beginning of this post:

When productivity goes up, society benefits because people are freed up to do other things, at work and at home. Of course, there are serious questions about what kind of support and retraining people will need. Governments need to help workers transition into other roles.

Governments! These institutions, you know, that corporations and billionaires don’t pay taxes to, and whose interference in even their most atrocious business practices is resisted inch by inch. These will be responsible for “help[ing] workers transition into other roles” so that productivity can go up. Right. I got that.

While LLM/ChatGPT/AI systems will certainly change how we work in ways comparable to the introduction of visual user interfaces, the World Wide Web, or the iPhone, or perhaps even the steam engine, who knows, it will remain business as usual: socialism for the rich and capitalism for the poor. In that regard, if we don’t also change the system, AI will change fuck nothing.


In an earlier post, I commented on how companies like ElevenLabs are beginning to force voice actors to sign away their voice rights to AI. Now, Levi Strauss announced an AI partnership with Lalaland a few days ago:

Today, LS&Co. announced our partnership with Lalaland.ai, a digital fashion studio that builds customized AI-generated models. Later this year, we are planning tests of this technology using AI-generated models to supplement human models, increasing the number and diversity of our models for our products in a sustainable way. […]

“While AI will likely never fully replace human models for us, we are excited for the potential capabilities this may afford us for the consumer experience.” [italics mine]

Which perfectly illustrates what I recently wrote in my essay over at Medium on Artificial Intelligence, ChatGPT, and Transformational Change:

One reason why all this is not glaringly obvious is the dazzling and distracting bombardment with AI sideshow acts. Where endless streams of parlor tricks and sleights of hand are presented, from fake Kanji, robot lawyers, and crypto comparisons to made-up celebrity conversations, emails to the manager, and 100 ChatGPT Prompts to Power Your Business. Through all that glitter and fanfare and free popcorn, many people don’t notice—or don’t want to notice or profess not to notice—that the great attraction in the center ring is just business as usual, only that the acrobats have been replaced by their likenesses and no longer need to be paid.

Press releases like this I will call Minidiv or Minisus announcements from now on. Marketing the replacement of human models through AI not only as progress toward “diversity” but also “sustainability”—a term currently thrown around with regard to AI in marketing and PR like confetti—has the exact same vibe as Orwell’s Ministries of Love and Peace.


A few days ago, I wrote about how training large language models is still prohibitively expensive, but that the costs of running them are coming down fast.

Today, Jan-Keno Janssen (c’t 3003) posted a YouTube video on how to get Stanford’s Alpaca/LLaMA ChatGPT clone to run locally on ordinary hardware at home. It’s in German, but you can switch on English subtitles, and there’s a companion page with the full German transcript that you can feed to Google Translate (or whatever you work with).

As Janssen points out, the copyright situation is murky; it also became apparent yesterday that Facebook has begun to take down LLaMA repos. But, as an instant countermeasure, the dalai creator has already announced the launch of a decentralized AI model distribution platform named GOAT.

Point being, we’re approaching Humpty Dumpty territory. Once all that stuff is in the wild and runs reasonably well on ordinary hardware, the LLM business model that sat on the wall will take a great fall. And OpenAI’s horses and Facebook’s men won’t be able to put it together again.
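Part of why this stuff suddenly runs on ordinary hardware is plain arithmetic: quantization shrinks the weights. A rough sketch (parameter count approximate, activations and overhead ignored):

```python
def model_memory_gb(params, bits_per_weight):
    """Approximate weight memory for an LLM, ignoring activations and overhead."""
    return params * bits_per_weight / 8 / 1e9

# LLaMA 7B needs ~14 GB at 16 bits per weight, but only ~3.5 GB once
# 4-bit quantized -- small enough for an ordinary laptop, which is what
# makes local clones like Alpaca practical in the first place.
print(model_memory_gb(7e9, 16))  # 14.0
print(model_memory_gb(7e9, 4))   # 3.5
```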


As mentioned previously, I’m preparing course materials and practical exercises around AI/LLM/GPT models for the upcoming term, and I talked to coders (game engineering, mostly) who tried out ChatGPT’s coding assistance abilities. The gist: ChatGPT gets well-documented routines right, but even then its answers are blissfully oblivious to best-practice considerations; it gets more taxing challenges wrong or, worse, subtly wrong; and it begins to fall apart and/or make shit up down the road. (As a disclaimer, I’m not a programmer; I’ve always restricted myself to scripting, as there are too many fields and subjects in my life I need to keep up with already.)

Against this backdrop, here’s a terrific post by Tyler Glaiel on Substack: “Can GPT-4 *Actually* Write Code?” Starting with an example of not setting the cat on fire, his post is quite technical and goes deep into the weeds—but that’s exactly what makes it interesting. Glaiel’s summary:

Would this have helped me back in 2020? Probably not. I tried to take its solution and use my mushy human brain to modify it into something that actually worked, but the path it was going down was not quite correct, so there was no salvaging it. […] I tried this again with a couple of other “difficult” algorithms I’ve written, and it’s the same thing pretty much every time. It will often just propose solutions to similar problems and miss the subtleties that make your problem different, and after a few revisions it will often just fall apart. […]

The crescent example is a bit damning here. ChatGPT doesn’t know the answer, there was no example for this in its training set and it can’t find that in its model. The useful thing to do would be to just say “I do not know of an algorithm that does this.” But instead it’s overconfident in its own capabilities, and just makes shit up. It’s the same problem it has with plenty of other fields, though its strange competence in writing simple code sorta hides that fact a bit.

As a curiosity, he found that GPT-3.5 came closer to one specific answer than GPT-4:

When I asked GPT-3.5 accidentally it got much much closer. This is actually a “working solution, but with some bugs and edge cases.” It can’t handle a cycle of objects moving onto each other in a chain, but yeah this is much better than the absolute nothing GPT-4 gave… odd…

Generally, we shouldn’t automatically expect GPT to become enormously better with each version; besides the law of diminishing returns, there is no reason to assume, without evidence, that making LLMs bigger and bigger will make them better and better. And, as Glaiel’s example shows, we can’t even take for granted that updated versions won’t perform worse.

And then there’s this illuminating remark by Glaiel, buried in the comments:

GPT-4 can’t reason about a hard problem if it doesn’t have example references of the same problem in its training set. That’s the issue. No amount of tweaking the prompt or overexplaining the invariants (without just writing the algorithm in English, which if you can get to that point then you already solved the hard part of the problem) will get it to come to a proper solution, because it doesn’t know one. You’re welcome to try it yourself with the problems I posted here.

That’s the point. LLMs do not think and cannot reason. They can only find and deliver solutions that already exist.

Finally, don’t miss ChatGPT’s self-assessment on these issues, after Glaiel fed the entire conversation back to it and asked it to write the “final paragraph” for his post!


Bryant Francis at Game Developer on “Ghostwriter,” Ubisoft’s new AI tool for its narrative team to generate barks:

Ubisoft is directly integrating Ghostwriter into its general narrative tool called “Omen.” When Ubisoft writers are creating NPCs, they are able to create cells that contain barks about different topics. An NPC named “Gaspard” might want to talk about being hungry or speeding while driving a car. To generate lines about speeding, the writer can either write their own barks, or click on the Ghostwriter tool to generate lines about that topic. Ghostwriter is able to generate these lines by combining the writer’s input with input from different large language models. […]

Ghostwriter is also used to generate large amounts of lines for “crowd life.” Ubisoft games often feature large crowds of NPCs in urban environments, and when players walk through those crowds, they generally will hear snippets of fake conversations or observations about what’s going on in the plot or game world.

Putting on my writer’s hat, I think that’s a great tool I’d love to work with! But there are trepidations, of course. Kaan Serin for Rock Paper Shotgun:

On Twitter, Radical Forge’s lead UI artist Edd Coates argued that this work could have been handed to a junior, making Ghostwriter seem like just another cost-cutting measure. He also said, “They’re clearly testing the waters with the small stuff before rolling out more aggressive forms of AI.” […] Other devs have argued that writing barks isn’t a pain.

As to the latter, sure—if your idea of a great workday consists of filling spreadsheet after spreadsheet with variations of dialogue snippets, you do you. But editing and refining them could be, and even should be, just as enjoyable, if not more so.

As to the former, Coates does have a point. In our corporate climate, as mentioned before, companies rarely use new technologies to make life better, hours shorter, and workdays more enjoyable for their employees. Instead, they will happily use these new technologies as a lever to “increase productivity” by reducing their workforce, replacing skilled with less-skilled personnel at lower wages, and cranking output up to whatever these new technologies allow.

Ghostwriter sounds like a terrific tool, and the problem isn’t new technologies. Unfettered capitalism is.


Bellingcat founder Eliot Higgins just got banned from Midjourney for posting a monumental Twitter thread about the Orange Fascist getting arrested, prosecuted, thrown into jail, escaping, and ending up at a McDonald’s as a fugitive, illustrated through several dozen Midjourney v5 images.

(In case Space Karen takes this thread down in the name of Freeze Peach, here are some excerpts on BuzzFeed.)

Whether that’s funny or not, I leave to you. The interesting bit is what else Midjourney banned:

The word “arrested” is now banned on the platform.

As someone intimately familiar with language change and semiotics and stuff, and as someone who’s also followed the perpetual cat-and-mouse game between China’s Great Censorship Firewall and its clever citizens over the years, I recommend we’all order a tanker truck of popcorn and place bets on who’s going to throw in the towel first.


A few days ago, John Carmack posted this DM conversation on his Twitter account. Asked whether coding will become obsolete in the future due to AI/ChatGPT, he replied:

If you build full “product skills” and use the best tools for the job, which today might be hand coding, but later may be AI guiding, you will probably be fine.

And when asked about the nature of these full product skills:

Software is just a tool to help accomplish something for people—many programmers never understood that. Keep your eyes on the delivered value, and don’t over focus on the specifics of the tools.

I think that’s great advice for the future, and it has always been great advice in the past. I remember well how I looked up to John Carmack’s work in awe, many years ago, when I first became interested in games.


Takahashi Keijiro / 高橋啓治郎 released some nifty proof-of-concept code on a ChatGPT-powered shader generator (Twitter | GitHub) and a natural language prompt for editing Unity projects (Twitter | GitHub).

Takahashi with regard to the latter:

Is it practical?
Definitely no! I created this proof-of-concept and proved that it doesn’t work yet. It works nicely in some cases and fails very poorly in others. I got several ideas from those successes and failures, which is this project’s main aim.

Can I install this to my project?
This is just a proof-of-concept project, so there is no standard way to install it in other projects. If you want to try it with your project anyway, you can simply copy the Assets/Editor directory to your project.

As I wrote in the final paragraph of my essay over at Medium on Artificial Intelligence, ChatGPT, and Transformational Change:

[The] dynamics to watch out for [will happen] in tractable fields with reasonably defined rule sets and fact sets, approximately traceable causes and effects, and reasonably unambiguous victory/output conditions. Buried beneath the prevailing delusions and all the toys and the tumult, they won’t be easy to spot.

I didn’t have any concrete applications in mind there, but Takahashi’s experiments are certainly part of what I meant. Now, while I’m not using Unity personally, and I’m certainly not going to for a variety of reasons, I’m confident that anything clever coders will eventually get to work in a major commercial game engine will sooner or later find its way into open source engines like Godot.

There are obstacles, of course—licensing and processing power prominently among them. The training process for models like ChatGPT is prohibitively expensive; licensing costs will accordingly be high; and you don’t want to have stuff that doesn’t run under a free software license in your open source engine in the first place.

But not all is lost! You absolutely do not need vastly oversized behemoths like ChatGPT for tasks like this. In principle, anybody can create a large language model, and the technologies required are well known, well documented, and open source.

There’s BLOOM, for starters, a transformer-based large language model “created by over 1000 AI researchers to provide a free large language model for everyone who wants to try.” And while training such models still costs a bunch even at lesser scales (access to Google’s cloud-based TPUs, custom accelerator chips designed for high volumes of low-precision computing, isn’t exactly cheap), prices will eventually come down, and older, less efficient hardware will do. Here’s an example of how fast these things develop: according to David Silver et al.’s 2017 paper in Nature, AlphaGo Zero defeated its predecessor AlphaGo Lee (the version that beat Lee Sedol, distributed over a whole bunch of computers and 48 TPUs) by 100:0 using a single computer with four TPUs.
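To get a feel for the scale involved, here is a back-of-envelope sketch using the widely cited rule of thumb that training a transformer takes roughly 6 FLOPs per parameter per training token. The hardware figures below are illustrative assumptions, not quotes from any vendor:

```python
# Rough training-cost estimate via the common ~6 * N * D FLOPs approximation
# for transformer training (N parameters, D training tokens).
def training_flops(params, tokens):
    return 6 * params * tokens

def training_days(params, tokens, chip_flops=100e12, chips=8, utilization=0.3):
    # chip_flops, chips, and utilization are illustrative assumptions.
    per_second = chip_flops * chips * utilization
    return training_flops(params, tokens) / per_second / 86400

# A 7B-parameter model on 1T tokens, on 8 accelerators at 100 TFLOP/s each,
# lands in the thousands of days -- hence the big clusters, and the big bills.
print(round(training_days(7e9, 1e12)))  # prints 2025
```

Shrink the model and the token count, though, and the same arithmetic explains why smaller, domain-specific models are within reach of far more modest budgets.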

All that’s pretty exciting. The trick is to jump off of the runaway A(G)I hype train before it becomes impossible to do so without breaking your neck, and start exploring and playing around with this stuff in your domain of expertise, or for whatever catches your interest, in imaginative ways.


Unearthed by @thebookisclosed, Microsoft is embedding a crypto-wallet in its Edge browser that handles multiple types of cryptocurrency, records transactions and currency fluctuations, and offers a tab to keep track of NFTs.

Andrew Cunningham, senior tech reporter at Ars Technica:

This is only one of many money and shopping-related features that Microsoft has bolted onto Edge since it was reborn as a Chromium-based browser a few years ago. In late 2021, the company faced backlash after adding a “buy now, pay later” short-term financing feature to Edge. And as an Edge user, the first thing I do in a new Windows install is disable the endless coupon code, price comparison, and cash-back pop-ups generated by Shopping in Microsoft Edge (many settings automatically sync between Edge browsers when you sign in with a Microsoft account; the default search engine and all of these shopping add-ons need to be changed manually every time).

How Windows users put up with this, I can’t fathom. Aside from that, it’s a positively trustworthy scammy-spammy environment that can only gain by adding Bing’s text synthesis machine, which makes stuff up with abandon but is regarded as potentially “intelligent” by people who know neither what large language models are nor how marketing works.

This has the potential to become the most fun event since Mentos met Coke.


Regarding transparency, large language models and algorithms in general pose two distinct challenges: companies’ willful obfuscation of the data sets and the human labor involved in training their models, and the opacity of how these models arrive at their decisions. While I’m usually focused on disputes over the former, problems arising from the latter are equally important and can be just as harmful.

A proposal by the European Commission, whose general approach the European Council adopted last December, would update AI liability to include cases involving black box AI systems so “complex, autonomous, and opaque” that it becomes difficult for victims to identify in detail how the damage was caused. Thus, recipients of automated decisions must be able “to express their point of view and to contest the decision.” Which, in practice, requires convincing explanations. But how do you get convincing explanations when you’re dealing with black box AI systems?

One of the first questions would certainly be why black box AI systems need to exist in the first place. In her 2022 paper “Does Explainability Require Transparency?,” Elena Esposito puts it like this:

The dilemma is often presented as a trade-off between model performance and model transparency: if you want to take full advantage of the intelligence of the algorithms, you have to accept their unexplainability—or find a compromise. If you focus on performance, opacity will increase—if you want some level of explainability you can maybe better control negative consequences, but you will have to give up some intelligence.

This goes back to Shmueli’s distinction between explanatory and predictive modeling and the suggested trade-off between comprehensibility and efficiency, which suggests that obscure algorithms can be more accurate and efficient by “disengaging from the burden of comprehensibility”—an approach on which I’m not completely sold with regard to AI implementation and practice, even though it’s probably the case with regard to evolved complex systems.

However, following Esposito, approaches from the sociological perspective can change the question and show that this “somewhat depressing approach” to XAI (Explainable AI) is not the only possible one:

Explanations can be observed as a specific form of communication, and their conditions of success can be investigated. This properly sociological point of view leads to question an assumption that is often taken for granted, the overlap between transparency and explainability: the idea that if there is no transparency (that is, if the system is opaque), it cannot be explained—and if an explanation is produced, the system becomes transparent. From the point of view of a sociological theory of communication, the relationship between the concepts of transparency and explainability can be seen in a different way: explainability does not necessarily require transparency, and the approach to incomprehensible machines can change radically.

If this perspective sounds familiar, it’s because it’s based on Luhmann’s notion of communication, to which she explicitly refers:

Machines must be able to produce adequate explanations by responding to the requests of their interlocutors. This is actually what happens in the communication with human beings as well. I refer here to Niklas Luhmann’s notion of communication[.] Each of us, when we understand a communication, understand in our own way what the others are saying or communicating, and do not need access to their thoughts. […]

Social structures such as language, semantics, and communication forms normally provide for sufficient coordination, but perplexities may arise, or additional information may be needed. In these cases, we may be asked to give explanations[.] But what information do we get when we are given an explanation? We continue to know nothing about our partner’s neurophysiological or psychic processes—which (fortunately) can remain obscure, or private. To give a good explanation we do not have to disclose our thoughts, even less the connections of our neurons. We can talk about our thoughts, but our partners only know of them what we communicate, or what they can derive from it. We simply need to provide our partners with additional elements, which enable them to understand (from their perspective) what we have done and why.

This, obviously, rhymes with the EU proposal and the requirement of providing convincing explanations. That way, the requirement of transparency could be abandoned:

Even when harm is produced by an intransparent algorithm, the company using it and the company producing it must respond to requests and explain that they have done everything necessary to avoid the problems—enabling the recipients to challenge their decisions. [T]he companies using the algorithms have to deliver motivations, not “a complex explanation of the algorithms used or the disclosure of the full algorithm” (European Data Protection Board, 2017, p.25).

Seen in this light, the EU’s GDPR §22.3, its 2017 Annual Report on Data Protection and Privacy, and the aforementioned proposal sound a lot more reasonable than they’re often painted in press releases or news reports.
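Esposito’s “explanation without transparency” can even be sketched in code: a counterfactual explanation answers the recipient’s “why?” without disclosing the model’s internals. Everything here (the toy scoring rule, the feature names, the threshold) is invented purely for illustration:

```python
# Toy "black box": callers see only inputs and a yes/no decision,
# never the scoring rule inside.
def black_box_loan_decision(income, debt):
    score = 3 * income - 10 * debt  # opaque internals, from the caller's view
    return score >= 200

def counterfactual_explanation(income, debt, step=10):
    """Explain a rejection by finding the smallest income increase (in `step`
    units) that flips the decision, without disclosing the model's internals."""
    if black_box_loan_decision(income, debt):
        return "approved"
    extra = step
    while not black_box_loan_decision(income + extra, debt):
        extra += step
    return f"rejected; an income higher by {extra} would have flipped the decision"

print(counterfactual_explanation(50, 10))
# prints: rejected; an income higher by 50 would have flipped the decision
```

The recipient gets an actionable, contestable account of the decision, and the model stays as opaque as before, which is exactly the decoupling of explainability from transparency that Esposito describes.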

But beyond algorithms, particularly with regard to LLMs, the other side of the black box equation must be solved too, where by “solved” I of course mean “regulated.” To prevent all kinds of horrific consequences inflicted on us by these high-impact technologies, convincing explanations for what comes out of the black box must be complemented with full transparency of what goes in.


Following OpenAI Into the Rabbit Hole

“Oh my fur and whiskers! I’m late, I’m late, I’m late!”

OpenAI, December 2015:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

OpenAI, April 2018:

We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges. We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.

OpenAI, February 2023:

We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society).

OpenAI, March 2023:

Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this [GPT-4 Technical Report] contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

Everything’s fine. Tech bros funded by sociopaths, fantasizing about AGI while rushing to market, without oversight, untested high-impact technology with the potential to affect almost everyone in every industry. What could possibly go wrong?


Rarely do I disagree with Cory Doctorow’s analyses, which are always highly enjoyable to read, both for their brilliant insights and their remarkable rhetorical punch. His recent post on Google’s chatbot panic, however, is a bit more complicated:

The really remarkable thing isn’t just that Microsoft has decided that the future of search isn’t links to relevant materials, but instead lengthy, florid paragraphs written by a chatbot who happens to be a habitual liar—even more remarkable is that Google agrees.

Microsoft has nothing to lose. It’s spent billions on Bing, a search engine no one voluntarily uses. Might as well try something so stupid it might just work. But why is Google, a monopolist who has a 90+% share of search worldwide, jumping off the same bridge as Microsoft?

According to him, it looks inexplicable at first why Google isn’t trying to figure out how to exclude or fact-check LLM garbage the way it excludes or fact-checks the “confident nonsense of the spammers and SEO creeps.” Referring to another article he wrote for The Atlantic, Doctorow makes the case that Google had one amazing idea, and that every product or service since that wasn’t bought or otherwise acquired has failed (with the lone exception of its “Hotmail clone”). This, he says, created a cognitive dissonance: the true genius of this self-styled creative genius lies in “spending other people’s money to buy other people’s products and take credit for them.” That dissonance, in turn, feeds a pathology that drives these inexplicable decisions to follow trailing competitors over the cliff, like Bing now or Yahoo in the past:

Google has long exhibited this pathology. In the mid-2000s—after Google chased Yahoo into China and started censoring its search-results and collaborating on state surveillance—we used to say that the way to get Google to do something stupid and self-destructive was to get Yahoo to do it first. [Yahoo] going into China was an act of desperation after it was humiliated by Google’s vastly superior search. Watching Google copy Yahoo’s idiotic gambits was baffling.

But if you look at it from a different perspective, these maneuvers could actually appear clever. In game theory, there’s the “reversed follow-the-leader” strategy, most often illustrated with sailboat races, particularly Dixit and Nalebuff’s example of the 1983 America’s Cup finals in The Art of Strategy. The leading party (sailboat or company) copies the strategy of the trailing party (sailboat or company) as a surefire way to keep its leading position, even if that imitated strategy happens to be extremely stupid. If being the winner (or market leader) is the only thing that counts, it doesn’t matter whether the copied strategy is successful or unsuccessful or clever or stupid. Now, while that strategy doesn’t work when there’s not just one but two or more close competitors, the tech industry’s tendency to create duopolies and even quasi-monopolies naturally leads to situations where a reversed follow-the-leader strategy keeps making sense.
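A toy simulation (all numbers invented) shows why copying the trailer works: if the leader mirrors the trailing boat’s tactic on every leg, both boats gain or lose identically, so the starting gap can never shrink, however stupid the copied tactic may be:

```python
import random

# Two-boat race over several legs. Each leg, a tactic pays off only if it
# happens to match the random weather. A leader who copies the trailer's
# tactic gains or loses exactly as much as the trailer every leg.
def leg_gain(tactic, weather):
    return 10 if tactic == weather else 0

def race(leader_copies, legs=20, seed=1):
    rng = random.Random(seed)
    lead = 5  # the leader starts ahead by 5
    for _ in range(legs):
        trailer_tactic = rng.choice(["left", "right"])
        weather = rng.choice(["left", "right"])
        if leader_copies:
            leader_tactic = trailer_tactic  # cover the trailer
        else:
            leader_tactic = rng.choice(["left", "right"])  # sail your own race
        lead += leg_gain(leader_tactic, weather) - leg_gain(trailer_tactic, weather)
    return lead

print(race(leader_copies=True))   # gap preserved, prints 5
print(race(leader_copies=False))  # gap drifts with luck; the lead can be lost
```

The covering leader never looks brilliant, but it mathematically cannot be overtaken; the independent leader sometimes wins big and sometimes loses the race, which is precisely the gamble a market leader has no reason to take.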

The drawbacks of this strategy are that winning doesn’t look spectacular or even clever; that it appears as if the winner has no confidence in their own strategy; and that imitating the runner-up might turn out to be very costly. But still, they win! And if that’s all there is to it, it’s just another form of pathology, and Doctorow’s analysis is on the mark after all.


Luckily, LinkedIn hasn’t messaged me yet for whatever expertise it thinks I have, but some users received the following request:

Help unlock community knowledge with us. Add your insights into this AI-powered collaborative article.

This is a new type of article that we started with the help of AI, but it isn’t complete without insights from experts like you. Share your thoughts directly into each section—you’re in a select group of experts that has access to do so. Learn more

— The LinkedIn Team

That’s straight out of bizarro land. LinkedIn member Autumn on Mastodon:

Wait wait wait wait

Let me get this straight…

LinkedIn wants to generate crappy AI content and then invite me to fix it, for free, under some guise of flattery calling out my “expertise”


There are no lengths these platforms won’t go to on the final leg of their enshittification journey.


James Vincent at The Verge:

LinkedIn announced last week it’s using AI to help write posts for users to chat about. Snap has created its own chatbot, and Meta is working on AI “personas.” It seems future social networks will be increasingly augmented by AI.

According to his report, LinkedIn has begun sharing “AI-powered conversation starters” with the express purpose of provoking discussion among users. Reminder here that LinkedIn belongs to Microsoft; also, as Vincent quips, LinkedIn is full of “workfluencers” whose posts and engagement bait range in tone from “management consultant bland to cheerfully psychotic,” which is, happily, “the same emotional spectrum on which AI tends to operate.”

But conversation starters might be merely the beginning. Vincent imagines semiautomated social networks with fake users that “needle, encourage, and coddle” their respective user bases, giving “quality personalized content at a scale.” And while that’s mostly tongue-in-cheek, there’s an observable vector for that with conversational chatbots populating more and more social media sites like Snap or Discord.

And then there’s Facebook:

Meta, too, seems to be developing similar features, with Mark Zuckerberg promising in February that the company is exploring the creation of “AI personas that can help people in a variety of ways.” What that means isn’t clear, but Facebook already runs a simulated version of its site populated by AI users in order to model and predict the behavior of their human counterparts.

Welcome to Hell on Earth. I’m sure all this will turn out fine.


Joseph Cox, reporting for Motherboard:

Voice actors are increasingly being asked to sign rights to their voices away so clients can use artificial intelligence to generate synthetic versions that could eventually replace them, and sometimes without additional compensation, according to advocacy organizations and actors who spoke to Motherboard.

Which, how could it be otherwise, is already packaged and marketed as the tech industry’s latest shit sandwich. According to ElevenLabs, e.g., voice actors “will no longer be limited by the number of recording sessions they can attend and instead they will be able to license their voices for use in any number of projects simultaneously, securing additional revenue and royalty streams.”

A shit sandwich voice actor Fryda Wolff doesn’t buy:

“[A]ctors don’t want the ability to license or ‘secure additional revenue streams,’ that nonsense jargon gives away the game that ElevenLabs have no idea how voice actors make their living.” Wolff added, “we can just ask musicians how well they’ve been doing since streaming platforms licensing killed ‘additional revenue and royalty streams’ for music artists. ElevenLabs’ verbiage is darkly funny.”

There’s so much that LLM technologies, like practically every new technology, could do to benefit almost everyone in almost every industry, humanity as a whole, and even the climate and the planet. Right now, however, we’d better be prepared to be bombarded with shit sandwiches left and right.

Instead of asking what these technologies can do for everyone (e.g., how AI/LLM can assist in smart city-planning or medical diagnostics), the major players are rather asking what they can do for shareholders and billionaires (e.g., “Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI”). The dominant vector here is relentlessly handing out snake oil to the effect that “A(G)I” will solve all of our problems in an exciting future full of marvels, while in reality the foundations are laid down in the present for rampant exploitation with breathtaking speed.

As Cory Doctorow put it:

Markets value automation primarily because automation allows capitalists to pay workers less. The textile factory owners who purchased automatic looms weren’t interested in giving their workers raises and shortening working days. They wanted to fire their skilled workers and replace them with small children kidnapped out of orphanages and indentured for a decade, starved and beaten and forced to work, even after they were mangled by the machines. Fun fact: Oliver Twist was based on the bestselling memoir of Robert Blincoe, a child who survived his decade of forced labor.

If you think Cory’s example is a purely historical one, you haven’t kept up with current events. In the right hands, LLM technologies can be a terrific addition to our toolbox to help us help ourselves as a species. But LLM technologies as such will solve nothing at best, and make life for large parts of humanity more miserable at worst.


AI/LLM/GPT Roundup, March 06: Lofty Ideals & Harsh Realities

My original plan for today’s roundup involved resources and discussions on questions of copyright, but I had to put that off until next week. I’m on the final stretch of a COVID infection, and I’m not yet feeling up to tackling such a complex topic.

So instead, here are three general sources you might want to read.

First, OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit by Chloe Xiang:

OpenAI was founded in 2015 as a nonprofit research organization by Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, among other tech leaders. In its founding statement, the company declared its commitment to research “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The blog stated that “since our research is free from financial obligations, we can better focus on a positive human impact,” and that all researchers would be encouraged to share “papers, blog posts, or code, and our patents (if any) will be shared with the world.”

Now, eight years later, we are faced with a company that is neither transparent nor driven by positive human impact[.] According to investigative reporter Karen Hao, who spent three days at the company in 2020, OpenAI’s internal culture began to reflect less on the careful, research-driven AI development process, and more on getting ahead, leading to accusations of fueling the “AI hype cycle.” Employees were now being instructed to keep quiet about their work and embody the new company charter.

“There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration,” Hao wrote.

Personally, I don’t think there were any “founding ideals” that could have been “eroded” in the first place; the idea that anyone ever took these lofty ideals at face value, knowing that people like Musk or Thiel were involved, strikes me as a serious case of Orwellian doublethink. It was simply a mask that was convenient for a time, and a very transparent one at that.

Next, You Are Not a Parrot—And a ChatBot Is Not a Human by Elizabeth Weil, an absolutely terrific portrait of Emily M. Bender in New York Magazine’s Intelligencer section:

We go around assuming ours is a world in which speakers—people, creators of products, the products themselves—mean to say what they say and expect to live with the implications of their words. This is what philosopher of mind Daniel Dennett calls “the intentional stance.” But we’ve altered the world. We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”

While parrots are great, humans aren’t parrots. Go read the whole thing. I have just one minor quibble, Weil’s throwaway remark concerning books and copyright, which I’ll get back to next week.

Finally, the Federal Trade Commission weighed in on AI in advertising last week, and it’s a blast. Michael Atleson in Keep Your AI Claims in Check at the FTC’s Business Blog:

And what exactly is “artificial intelligence” anyway? It’s an ambiguous term with many possible definitions. It often refers to a variety of technological tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendations. But one thing is for sure: it’s a marketing term. Right now it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.

AI hype is playing out today across many products, from toys to cars to chatbots and a lot of things in between. Breathless media accounts don’t help, but it starts with the companies that do the developing and selling. We’ve already warned businesses to avoid using automated tools that have biased or discriminatory impacts. […]

Advertisers should take another look at our earlier AI guidance, which focused on fairness and equity but also said, clearly, not to overpromise what your algorithm or AI-based tool can deliver. Whatever it can or can’t do, AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.

It certainly won’t stop the hype train, but it’s a decent warning shot.


AI/LLM/GPT Roundup, February 26: Teaching and Research

There were several reports over the last few weeks on how teachers have begun to use ChatGPT in various ways, mostly as a research assistant for their students. One example, as related by SunRev on Reddit:

My friend is in university and taking a history class. The professor is using ChatGPT to write essays on the history topics and as the assignments, the students have to mark up its essays and point out where ChatGPT is wrong and correct it.

As mentioned in an earlier roundup post, I’m preparing to let my students try out a few things with creative assistance in my upcoming game design lectures. But I have doubts about the use of LLMs for research assistance. It certainly has its good sides; as ChatGPT and similar models are ridiculously unreliable, it forces students to fact-check, which is great. However, research isn’t—or shouldn’t be—all about fact-checking data. Rather, it should be about learning and internalizing the entire process of doing research, be it for postgraduate projects or college essays: gaining a thumbnail understanding and accumulating topic-specific keywords; following references and finding resources; weighing these resources according to factors like time/place/context, domain expertise, trustworthiness, soundness of reasoning, and so on; and eventually producing an interesting argument on the basis of source analysis and synthesis. I’m not sure that fact-checking and correcting LLM output is a big step forward in that direction.

Then, there’s the problem of research data contamination. xmlns=“Dan” on Mastodon:

There is a demand for low-background steel, steel produced before the nuclear tests mid century, for use in Geiger counters. They produce it from scavenging ships sunk during world war one, as it’s the only way they can be sure there is no radiation.

The same is going to happen for internet data, only archives pre-2022 will be usable for sociology research and the like as the rest will be contaminated by AI nonsense. Absolute travesty.

This might indeed develop into a major challenge. How big of a challenge? We can’t know yet, but it will most likely depend on how good LLM-based machines become at differentiating between LLM output, human output, and mixed output. Around 2010, there was the Great Content Farm Panic, when kazillions of websites began to speed-vomit optimized keyword garbage into the web. Luckily, Google’s engineers upgraded their search algorithms in clever ways, so that most of that garbage was ranked into oblivion relatively quickly. Can Google or anyone else pull that off again, with regard to a tidal wave of LLM sewage? There’s no guarantee, but those search engines and knowledge repositories that become better at it will gain an advantage over their competitors, so there’s a capitalist incentive at least.

Finally, this bombshell suggestion by Kevin Roose in an article about ChatGPT for teachers (yes, that Kevin Roose of “Bing’s A.I. Chat Reveals Its Feelings: ‘I Want to Be Alive’” fame as mentioned in my last roundup):

ChatGPT can also help teachers save time preparing for class. Jon Gold, an eighth grade history teacher at Moses Brown School, a pre-K through 12th grade Quaker school in Providence, R.I., said that he had experimented with using ChatGPT to generate quizzes. He fed the bot an article about Ukraine, for example, and asked it to generate 10 multiple-choice questions that could be used to test students’ understanding of the article. (Of those 10 questions, he said, six were usable.)

In light of my recent essay-length take on AI, ChatGPT, and Transformational Change, particularly my impression that ChatGPT might at least liberate us from large swaths of business communication, multiple-choice tests, and off-the-shelf essay topics, this made my eyes roll back so hard that I could see my brain catching fire.


Just when ChatGPT and other large language models bumped their heads harshly and audibly against the ceiling of reality, the promises swiftly became more spectacular than ever.

From “Planning for AGI and Beyond” on OpenAI’s website:

If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.

That’s the entire article in a nutshell: pretentious, pompous, and perfectly vacuous.

Assuming these people live in a fantasy world where they see themselves as modern-day demiurges who will soon create Artificial General Intelligence from large language models is the charitable option. Assuming they’re grifters is the other.


AI/LLM/GPT Roundup, February 20: Bing Antics & AI Pareidolia

Originally, I planned this week’s roundup to be specifically about AI/LLM/ChatGPT in research and education, but I pushed these topics back a week in the light of current events. You’ve probably heard by now that Microsoft’s recent Bing AI demo, after some people took a closer look, was a far greater disaster than Google’s Bard AI demo had been a few days earlier.

Dmitri Brereton:

According to this [Bing AI’s] pros and cons list, the “Bissell Pet Hair Eraser Handheld Vacuum” sounds pretty bad. Limited suction power, a short cord, and it’s noisy enough to scare pets? Geez, how is this thing even a best seller?

Oh wait, this is all completely made up information.

It gets worse from there. Predictably, Bing AI also resorts to gaslighting, a topic I touched upon in my recent essay on Artificial Intelligence, ChatGPT, and Transformational Change.

But hey, people want to believe! Which can be quite innocuous, as exemplified by this German-language Golem article. Shorter version: “On the one hand, Bing AI’s answers to our computer hardware questions were riddled with errors, but on the other, it generated a healthy meal plan for the week that we took at face value, so we have to conclude that Google now has a problem.”

Then, it can be stark, ridiculous nonsense. Which especially flowered in Kevin Roose’s “conversational article” in The New York Times, under the original headline “Bing’s A.I. Chat Reveals Its Feelings: ‘I Want to Be Alive.’”

Emily M. Bender gave it the thrashing it deserved:

And then here: “I had a long conversation with the chatbot” frames this as though the chatbot was somehow engaged and interested in “conversing” with Roose so much so that it stuck with him through a long conversation.

It didn’t. It’s a computer program. This is as absurd as saying: “On Tuesday night, my calculator played math games with me for two hours.”

That paragraph gets worse, though. It doesn’t have any desires, secret or otherwise. It doesn’t have thoughts. It doesn’t “identify” as anything. […] And let’s take a moment to observe the irony (?) that the NYTimes, famous for publishing transphobic trash, is happy to talk about how a computer program supposedly “identifies.”

You can learn more about what journalism currently gets terribly wrong from Bender’s essay On NYT Magazine on AI: Resist the Urge to Be Impressed. There, she looks into topics like misguided metaphors and framing; misconceptions about language, language acquisition, and reading comprehension; troublesome training data; and how documentation, transparency, and democratic governance fall prey to the sycophantic exultation (my phrasing) of Silicon Valley techbros and their sociopathic enablers (ditto).

Finally, there’s the outright pathetic. Jumping into swirling, vertiginous abysses of eschatological delusion, many people, particularly on Twitter, seem to believe or pretend to believe that the erratic behavior of Bing’s “Sydney,” which at times even resembled bizarre emotional breakdowns until Microsoft pulled the plug, is evidence of internal experiences and the impending rise of self-aware AI.

Linguist Mark Liberman:

But since the alliance between OpenAI and Microsoft added (a version of) this LLM to (a version of) Bing, people have been encountering weirder issues. As Mark Frauenfelder pointed out a couple of days ago at BoingBoing, “Bing is having bizarre emotional breakdowns and there’s a subreddit with examples.” One question about these interactions is where the training data came from, since such systems just spin out word sequences that their training estimates to be probable.

After some excerpts from OpenAI’s own page on their training model, he concludes:

So an army of low-paid “AI trainers” created training conversations, and also evaluated such conversations comparatively—which apparently generated enough sad stuff to fuel those “bizarre emotional breakdowns.”

A second question is what this all means, in practical terms. Most of us (anyhow me) have seen this stuff as somewhere between pathetic and ridiculous, but C. M. [Corey McMillan] pointed out to me that there might be really bad effects on naive and psychologically vulnerable people.
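The comparative evaluation of conversations that Liberman describes is the preference-ranking step of reinforcement learning from human feedback: raters pick the better of two model outputs, and a reward model is trained to score the preferred one higher. A minimal sketch of the standard Bradley–Terry-style objective, with all names invented here for illustration:

```python
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Reward-model loss for one human comparison.

    Equals -log sigmoid(r_chosen - r_rejected): it shrinks as the
    reward model learns to separate the rater-preferred output from
    the rejected one.
    """
    margin = score_chosen - score_rejected
    return math.log(1.0 + math.exp(-margin))

# The wider the margin in favor of the chosen output, the smaller the loss.
assert preference_loss(2.0, 0.0) < preference_loss(0.5, 0.0)
```

Training conversations written by hired raters supply the raw material; losses like this one turn their pairwise judgments into a trainable signal.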

As evidenced by classic research as well as Pac-Man’s ghosts, humans are more than eager to anthropomorphize robots’ programmed behavioral patterns as “shy,” “curious,” “aggressive,” and so on. That the equivalent holds for programmed communication patterns shouldn’t come as a surprise.

However, for those who join the I Want to Believe train deliberately, it doesn’t seem to have anything to do with a lack of technical knowledge or “intelligence” in general, whatever that is. Not counting those who seize this as a juicy consulting career opportunity, the purported advent of self-aware machines is a dizzyingly large wishful thinking buffet that offers either delicacies or indigestibles for a broad range of sensibilities.

On a final note for today, this doesn’t mean that technical knowledge of how ChatGPT works is purely optional, useless, or snobbish. Adding to the growing number of sources already out there, Stephen Wolfram last week published a 19,000-word essay on ChatGPT that strikes a reasonable balance between depth and accessibility. And even if you don’t agree with his hypotheses or predictions on human and computational language, that’s where all this stuff becomes really interesting. Instead of chasing Pac-Man’s ghosts and seeing faces in toast, we should be thrilled about what we can and will learn from LLM research—from move 37 to Sydney and beyond—about decision processes, creativity, language, and other deep aspects of the human condition.


AI/LLM/GPT Roundup, February 13

As I wrote more than eight (woah!) years ago in the About section, my secret level/side blog just drafts is one part news ticker with commentary on everything related to games, and one part research-adjacent blog posts about game-based learning and ethics. Discussing current AI models fits that agenda pretty well.

What’s more, I started preparing course materials and practical exercises around AI/LLM/GPT models for the upcoming term in April. These will be balanced topic cocktails for second and sixth term students, revolving around creative assistance (game design, art, coding, and writing), development support (production process and project management), and social ramifications (potentials, risks, economic viability, sustainability, equity/fairness, acceptance, workplace integration, and so on).

Thus, on top of my regular posts, linked list items, or essays, these roundups will serve as food for thought in general, and as a repository for upcoming discussions with my students as well.

Sooooo, let’s go!

1. “Chatbots Could One Day Replace Search Engines. Here’s Why That’s a Terrible Idea.” Will Douglas Heaven in MIT Technology Review, March 29, 2022.

This one’s a bit older. It held up well, but most of the arguments are familiar by now. One aspect, however, is worth exploring. From an interview with Emily M. Bender, one of the coauthors on the paper that led Timnit Gebru to be forced out of Google:

“The Star Trek fantasy—where you have this all-knowing computer that you can ask questions and it just gives you the answer—is not what we can provide and not what we need,” says Bender[.] It isn’t just that today’s technology is not up to the job, she believes. “I think there is something wrong with the vision,” she says. “It is infantilizing to say that the way we get information is to ask an expert and have them just give it to us.”

– – – – –

2. “The Guardian View on ChatGPT Search: Exploiting Wishful Thinking.” The Guardian editorial, February 10, 2023.

Since the British Guardian began, some time ago, to excel at publishing disgustingly transphobic opinion pieces, I’ve all but stopped linking to it. But this one adds an intriguing metaphor to the preceding point of view:

In his 1991 book Consciousness Explained, the cognitive scientist Daniel Dennett describes the juvenile sea squirt, which wanders through the sea looking for a “suitable rock or hunk of coral to … make its home for life.” On finding one, the sea squirt no longer needs its brain and eats it. Humanity is unlikely to adopt such culinary habits but there is a worrying metaphorical parallel. The concern is that in the profit-driven competition to insert artificial intelligence into our daily lives, humans are dumbing themselves down by becoming overly reliant on “intelligent” machines—and eroding the practices on which their comprehension depends.

The operative term here is “practices,” mind. That’s the important thing.

– – – – –

3. “We Asked ChatGPT to Write Performance Reviews and They Are Wildly Sexist (and Racist).” Kieran Snyder in Fast Company, February 2, 2023.

Across the board, the feedback ChatGPT writes for these basic prompts isn’t especially helpful. It uses numerous cliches, it doesn’t include examples, and it isn’t especially actionable. […] Given this, it’s borderline amazing how little it takes for ChatGPT to start baking gendered assumptions into this otherwise highly generic feedback.

[O]ne important difference: feedback written for female employees was simply longer—about 15% longer than feedback written for male employees or feedback written in response to prompts with gender-neutral pronouns. In most cases, the extra words added critical feedback [while the feedback written for a male employee] is unilaterally positive.

To be expected; sexism, racism, and similar nasty stuff are always baked into historical data. Keeping any AI trained on historical data from developing racist-aunt-or-uncle opinions, with the potential to ruin a lot more than merely your Thanksgiving family dinner, will remain one of the biggest challenges in AI.


Last Tuesday, in my essay on “Artificial Intelligence, ChatGPT, and Transformational Change,” I was generally pessimistic about current AI’s social implications and generally optimistic about its technical implications.

However, yesterday’s article “ChatGPT Is a Blurry JPEG of the Web” by Ted Chiang in The New Yorker gave me second thoughts about the latter:

Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

Also, Google’s Bard presentation yesterday (and some Bing shenanigans) gave me second thoughts from a different but related perspective. Not because things went a bit sideways there; rather, it occurred to me that chatbots obscure both their sources’ origins and the selection process far more than conventional search engines already do, which might transform search engines in the long run into portaled successors of the AOL internet. Sure, people can still use conventional search, but we all know how things are done at Google. If more and more people switch to chatbot search and conventional search begins to deliver fewer and fewer ads, resources might be cut, and conventional search might even be dropped for good someday (viz: Google Reader, Feedburner, Wave, Inbox, Rockmelt, Web & Realtime APIs, Site Search, Map Maker, Spaces, Picasa, Orkut, Google+, and so on).

But there’s more. Ted Chiang again:

Imagine what it would look like if ChatGPT were a lossless algorithm. If that were the case, it would always answer questions by providing a verbatim quote from a relevant Web page. We would probably regard the software as only a slight improvement over a conventional search engine, and be less impressed by it. The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.

Go and read the whole thing. It’s chock-full of insights and interesting thoughts.
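Chiang’s lossless/lossy distinction can be made concrete with a toy sketch: lossless compression (here, zlib) reproduces the source byte for byte, while a deliberately crude lossy scheme, dropping every other word as a stand-in for paraphrase, returns only an approximation from which the exact original can never be recovered. The lossy scheme is invented purely for illustration:

```python
import zlib

source = ("ChatGPT retains much of the information on the Web, "
          "but only as an approximation of the original text. ") * 10
data = source.encode()

# Lossless: every byte survives the round trip exactly.
restored = zlib.decompress(zlib.compress(data)).decode()
assert restored == source

# Lossy (toy stand-in for paraphrase): keep every other word.
# The result is smaller, but the exact sequence is gone for good;
# only an approximation of the source remains.
approximation = " ".join(source.split()[::2])
assert approximation != source
assert len(approximation) < len(source)
```

That irretrievability is Chiang’s point: a lossy system can only ever hand back an approximation, and when the approximation is fluent text, it looks like understanding.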


An illustrated essay by yours truly.

Today, figuratively speaking, these technologies will be implemented in our design tools and graphics editors and search engines and word processors and translation/transformation apps and game engines and coding environments and digital audio workstations and video production editors and business communication platforms and diagnostic tools and statistical analysis software in everything everywhere all at once fashion with the possible exception of our singularly immobile educational systems, and we will work with them without batting an eye once the novelty value’s worn off. And by tomorrow, what were once miracles will have become children’s toys.

It’s a 14-minute read. It started out as a post for this blog, but grew to a size more conveniently digestible over there. Along with all the things mentioned in the quote and in the headline, I also touch upon AlphaGo’s successor, cotton candy, social and economic dynamics, and late-night visits to the refrigerator.


GPT: Like Crocodile Tears in Rain

If you don’t live under a rock, you might have noticed a few remarkable—but not altogether unpredictable—advances in natural language processing and reinforcement learning, most prominently text-to-image models and ChatGPT. And now every concerned pundit comes out of the woodwork, decrying how terrible these developments are for our educational system and our starving artists.

Is this deluge of AI-generated images and texts terrible? Of course it is. But don’t let them tell you that this is the problem. It’s only the symptom of a problem.

Let’s start with all those people suddenly feeling deeply concerned about the death of the college essay. Education, if you think about it, should do three things: make children and students curious about as many subjects as possible; give them the tools to develop interests around these subjects; and facilitate the acquisition of skills, knowledge, and understanding along these interests. For these ends, virtually every new technology would be useful one way or another. Our educational systems’ priority, however, is feeding children and students standardizable packets of information—many with very short best-before dates stamped on them—for evaluation needs and immediate workplace fitness. Just think of it: the world wide web became accessible for general use around 1994! In all that time, almost thirty years, the bulk of written and oral exams didn’t adapt to integrate the internet but has been kept meticulously isolated from it. Nor, for that matter, has the underlying system changed of keeping information expensive and disinformation free, an infrastructure into which AI-generated nonsense can now be fed with abandon. And all this gatekeeping for what? When there’s a potential majority in any given country to elect or keep electing fascists into high office, the survival of the college essay probably isn’t the most pressing thing on our educational plate.

Then, the exploitation of artists. Could these fucking techbros have trained their fucking models on work that artists consented to, or on work in the public domain? That’s what they should’ve done and of course didn’t, but please spare me your punditry tears. While it is thoroughly reprehensible, it’s only possible because at that intersection where the tech and creative industries meet, a towering exploitation machine has stood all along—co-opting or replacing or straight-up ripping off the work of artists and freelancers, and practically everybody who doesn’t own an army of copyright lawyers, the moment their work becomes even marginally successful.

AI will advance, and everything it can do will be done. Nexus-6 models aren’t right around the corner, but they’re an excellent metaphor. We could try and legislate all those leash-evading applications to death, chasing after them, always a few steps behind and a few years too late, trying to prevent new ways of exploitation while reinforcing old ways of exploitation. Or we could try and change our educational and creative economies in ways that make these applications actually useful and welcome, for educators and artists in particular and humanity and this planet in general.


From a conversation:

Ferrari and some of the other high-end car manufacturers still use clay and carving knives. It’s a very small portion of the gaming industry that works that way, and some of these people are my favourite people in the world to fight with—they’re the most beautiful and pure, brilliant people. They’re also some of the biggest fucking idiots.

That’s CEO John Riccitiello responding to criticism of Unity’s recent merger with ironSource.

For the rest of the conversation, Riccitiello builds his giant strawman of developers who “don’t care about what their player thinks” and equates this strawman with everyone who doesn’t embrace Unity’s publishing model that is driven by, let’s call it by its name, advertising and addiction.

One of the anecdotes with which he fleshes out his strawman:

I’ve seen great games fail because they tuned their compulsion loop to two minutes when it should have been an hour.

That’s not just strawman nonsense on so many levels; it also tells you everything about the mindset behind it. I’m well aware that “compulsion loop” has become an industry term in the mobile games sphere, replacing “gameplay loop,” the term we still use when we want to make games that players can enjoy. (Ferraris, according to Riccitiello.)

Just to refresh your memory: while the gameplay loop or core loop* consists of a sequence of activities, or sets of activities, that the player engages in again and again during play and that defines the mechanical aspect of the playing experience, the compulsion loop is a behaviorally constructed, dopamine-dependent, addiction-susceptible, near-perpetual anticipation–avoidance–reward loop as an extrinsic motivation package that keeps players playing to maximize their exposure to advertising, their willingness to spend money on in-game purchases, or both.

That’s what Unity’s business model is about, industry term or no.


* To maximize confusion, there’s also the game loop, which is the piece of code that updates and renders the game from game state to game state.
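The footnote’s game loop, the code that carries the game from state to state, can be sketched in a few lines. This is a minimal fixed-timestep variant with invented names, not any particular engine’s implementation:

```python
import time

def run_game_loop(update, render, fps=60, frames=3):
    """Minimal fixed-timestep game loop: advance the game state,
    render it, then sleep off the remainder of the frame budget."""
    dt = 1.0 / fps
    state = {"tick": 0}
    for _ in range(frames):
        frame_start = time.perf_counter()
        state = update(state, dt)   # update: state -> next state
        render(state)               # render: draw the current state
        elapsed = time.perf_counter() - frame_start
        time.sleep(max(0.0, dt - elapsed))  # hold the target frame rate
    return state

# Trivial usage: an update that counts ticks, a render that does nothing.
final = run_game_loop(lambda s, dt: {"tick": s["tick"] + 1}, lambda s: None)
assert final["tick"] == 3
```

Note how mechanical this is: nothing in the code knows or cares whether what it drives each frame is a gameplay loop or a compulsion loop, which is exactly why the three terms are worth keeping apart.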



Layoffs have afflicted Unity’s offices across the globe. […] On Blind, the anonymous messaging board commonly used by employees in the tech industry, Unity staffers say that roughly 300 or 400 people have been let go, and that layoffs are still ongoing. […]

Unity has been a “shit show” lately, one person familiar with the situation, who requested anonymity for fear of reprisal, told Kotaku. Attrition. Mismanagement. Strategic pivots at a rapid, unpredictable rate.

Two weeks prior, apparently, CEO John Riccitiello had lied in an all-hands meeting that they wouldn’t be laying off anyone.

What’s more, there has been a flurry of acquisitions lately, most recently digital effects studio Weta for $1.62 billion and Parsec for $320 million, while investment in creative ventures has all but dried up. The only creative team that had been working internally on a game was fired as well.

And then there’s this:

This project that Unity debuted this year, aimed at improving users’ knowledge of Unity, improve tooling, level up creator skills, that was fun and inspiring, that a lot of people were looking forward to? Everyone in that team picture has been fired.

I always had my reasons to distrust Unity deeply, and indicators of a smoldering fire under the hood have been there for a very long time. So if its C-suite is now officially a coterie of lying weasels, maybe we should rewind to about a year ago and take their protestations in this matter with the tanker truck of salt they deserved all along.


The Swedish Embracer Group, which has lately been gobbling up developer studios and IPs like people eat popcorn at the movie theater, is building an archive “for every game ever made,” according to this YouTube video and this page on their website:

Imagine a place where all physical video games, consoles and accessories are gathered at the same place. And think about how much that could mean for games’ culture and enabling video games research. This journey has just been started and we are at an early stage. But already now, we have a large collection to take care of at the Embracer Games Archive’s premises in Karlstad, Sweden. A team of experts has been recruited and will start building the foundation for the archive.

Frankly, I don’t see the point. A “secret vault” with the “long-term ambition” to exhibit parts of the archive “locally and through satellite exhibitions at other locations” so people can—what—look at boxes?

I’m not saying collecting colorful boxes is a bad thing. I like colorful boxes! But any preservation effort whose primary goal isn’t to restore and archive these games’ source code, clean up their fucked-up license entanglement train wrecks, and make them playable in open-source emulators that won’t be shot down by predatory copyright policies upheld and lobbied for by the games industry and its assorted associations is neither true preservation nor progress.

Imagine all the paintings in the Louvre were not behind glass but inside boxes, and all you could see were descriptions written on these boxes on what these paintings are about and who created them when.


Bloomberg Law:

Judge John C. Coughenour let part of the case move forward in the U.S. District Court for the Western District of Washington, saying it’s plausible Valve exploits its market dominance to threaten and retaliate against developers that sell games for less through other retailers or platforms.

The company “allegedly enforces this regime through a combination of written and unwritten rules” imposing its own conditions on how even “non-Steam-enabled games are sold and priced,” Coughenour wrote. “These allegations are sufficient to plausibly allege unlawful conduct.”

The consolidated dispute is one of several legal challenges to the standard 30% commission taken by leading sales and app distribution platforms across Silicon Valley.

About time.

Steam is an exploitation machine that doesn’t invest the shitloads of money it extracts back into its platform to make it less exploitative and a better experience, particularly for indie developers. Instead, Steam has always had a habit of passing on everything it should do itself to its community, which, besides being unpaid work, is among the most patriarchal things I can think of. Also, remember for whom they lowered their cut when people began to complain? They lowered it for the big players who brought in over $10 million in sales. In other words, only for those who have the muscle to push against them.
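The lowered cut works like marginal tax brackets. Per Valve’s late-2018 announcement the tiers are roughly 30% on the first $10 million of a game’s sales, 25% from $10 million to $50 million, and 20% beyond that; treat the exact figures as illustrative. A sketch of who actually benefits:

```python
def steam_cut(gross: float) -> float:
    """Valve's cut of a game's gross revenue under the tiered
    revenue share (thresholds as publicly announced in 2018)."""
    # Marginal tiers, like income tax brackets: (upper bound, rate).
    tiers = [(10_000_000, 0.30), (50_000_000, 0.25), (float("inf"), 0.20)]
    cut, lower = 0.0, 0.0
    for upper, rate in tiers:
        if gross > lower:
            cut += (min(gross, upper) - lower) * rate
        lower = upper
    return cut

# A $100k indie title still pays the full 30% ...
assert steam_cut(100_000) == 30_000
# ... while a $50M hit pays a blended 26% ($13M instead of $15M).
assert steam_cut(50_000_000) == 13_000_000
```

The arithmetic makes the complaint concrete: the discount only ever kicks in above the $10 million line, so the typical indie developer sees exactly none of it.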

I remember well when this cartoon was popular ten years ago, when people regarded Valve as “Your Cat—Loyal, friendly, and the internet loves him” (scroll all the way down). That’s a very nice picture that I didn’t quite buy even at the time. Today, if you catch a glimpse of Steam’s real image behind the curtain, it looks more like a portrait of Dorian Gray’s cat.


Square Enix, the day before yesterday, in their Press Release (PDF):

[We] today signed a share transfer agreement with Sweden-based Embracer Group AB concerning the divestiture of select overseas studios and IP. The company’s primary assets to be divested in the transaction are group subsidiaries such as Crystal Dynamics, Eidos Interactive and IP such as Tomb Raider, Deus Ex, Thief, and Legacy of Kain.

[corporate jargon] In addition, the transaction enables the launch of new businesses by moving forward with investments in fields including blockchain, AI, and the cloud. [more corporate jargon]

Well, I’m not an analyst. But if you ask me: when you sell off some of your jewels in a fire sale*, not because they made a loss but, purportedly, because the profits they made were “below expectations,” to a publishing group that already owns more than a hundred studios and a caboodle of other media companies, with the expressed intention of investing in cryptoshit, then I’d say there are some invisible pipelines under the hood that will funnel staggering amounts of money over time into some select investors’ fathomless pockets.

*Compare the estimated price tag of $300 million for all these studios and IPs to this year’s other buying sprees—Sony’s acquisition of Bungie/Destiny for $3.6 billion; Take-Two’s acquisition of Zynga/FarmVille for $12.7 billion; or Microsoft’s acquisition of Activision Blizzard for just under $70 billion. Plus, Embracer’s own recent acquisitions of Asmodee Games for $3.1 billion and Gearbox for $1.3 billion. Compared to these, what Embracer paid for all these studios and IPs is the equivalent of the tip money you put aside for pizza and package deliveries.


One of the topics this secret level focuses on is game-based learning, and sometimes I also post rants about school systems as such. But there’s also higher education—and the university system in the U.S., which has been eroded, dismantled, assaulted, and hijacked for and by monetary and political interests for quite a while. Recently, it’s come under sustained assault more than ever, and even academic tenure’s now in the crosshairs. Tenure certainly has its flaws and can be exploited, but it’s there for a reason.

SMBC, my favorite web comic together with XKCD, regularly and notoriously makes fun of every academic discipline—natural sciences, social sciences, math, you name it—but it always felt to me that the jokes targeting the humanities were a bit less playful and a bit more deprecating.

Thus, I was blown away by this jumbo-sized SMBC comic the other day, with its lovingly crafted metaphor of liberal education as an “old dank pub”—so much so that I can’t but link to it here for posterity, and to make you cry too.

Don’t forget to click on the red button at the bottom for a bonus panel, and read a few other comics as well while you’re there!


Week before last, I mentioned in my weekly newsletter that People Make Games had released a documentary on YouTube about their investigations into emotional abuse at three different, highly prestigious indie studios. Several aspects made this even more depressing than the regular big-studio toxicity news we have more or less come to expect.

One of these studios was Funomena, founded in 2013 by Robin Hunicke and Martin Middleton. Yesterday, Chris Bratt from People Make Games reported on Twitter that Funomena will be closed:

I’m absolutely gutted to report that Funomena is set to be closed by the end of this month, with all contractors already having been laid off as of last Wednesday.

This is an extremely sad end to the studio’s story and I hope everyone affected is able to land on their feet.

This announcement has caught many employees by surprise, who now find themselves looking for other work, with their last paycheck coming this Friday.

Then there’s Funomena’s official statement, also on Twitter, which paves the way for laying the blame on a new funding round that didn’t materialize. Finally, there seem to be voices who blame shuttering the studio on the release of the documentary, but (former) Funomena employees beg to differ.

Apparently, we can’t have good things.


Last week, The New Yorker featured an interview with FromSoftware’s game director Miyazaki Hidetaka, conducted by Simon Parkin.

Mostly, it’s about difficulty, but also about writing. For Elden Ring, as you might know, Miyazaki collaborated with George R. R. Martin. But it’s not at all your regular “let’s hire a writer/screenwriter for the story” approach:

Miyazaki placed some key restraints on Martin’s contributions. Namely, Martin was to write the game’s backstory, not its actual script. Elden Ring takes place in a world known as the Lands Between. Martin provided snatches of text about its setting, its characters, and its mythology, which includes the destruction of the titular ring and the dispersal of its shards, known as the Great Runes. Miyazaki could then explore the repercussions of that history in the story that the player experiences directly. “In our games, the story must always serve the player experience,” he said. “If [Martin] had written the game’s story, I would have worried that we might have to drift from that. I wanted him to be able to write freely and not to feel restrained by some obscure mechanic that might have to change in development.”

In Ludotronics, my game design book, I wrote about how a game’s setting, location/environment, backstory, and lore can be crafted with audiovisual, kinesthetic, and mythological means to create the game’s world narrative. How Miyazaki approached this for Elden Ring would make a terrific example for an updated edition.


There’s an irony in Martin—an author known for his intricate, clockwork plots—working with Miyazaki, whose games are defined by their narrative obfuscation. In Dark Souls, a crucial plot detail is more likely to be found in the description of an item in your inventory than in dialogue. It’s a technique Miyazaki employs to spark players’ imaginations[.]

For many reasons, I think it’s an excellent approach. (I also think that Miyazaki is very polite.)


The people of Ukraine are under attack. As game developers we want to create new worlds, not to destroy the one we have. That’s why we’ve banded together to present this charity bundle to help Ukrainians survive this ordeal and thrive after the war ends.

Over 700 creators have joined in support to donate their work.

We kept the minimum low, but we highly urge you to pay above the minimum if you can afford to do so. All proceeds will be split between the charities 50/50.

Only paid products were allowed into the bundle this time, DRM-free and download-ready, no external Steam Keys or any other bullshit. Proceeds will be split evenly between the International Medical Corps and the Ukrainian organization Voices of Children.

The yield from itch.io’s Bundle for Racial Justice and Equality two years ago was phenomenal.

Let’s do this again.


Alice O’Connor at Rock Paper Shotgun:

For reference, Activision Blizzard’s valuation by market cap was $51 billion right before Microsoft announced their plan to buy Actiblizz for $69 billion. On that scale, Ubisoft look tiny with a market cap of $6-7-ish billion. Microsoft could accidentally buy Ubisoft by misclicking on Amazon and not even realise until Yves Guillemot was dumped on their doorstep in a cardboard box.

Being open to buy-outs seems to be the fashionable thing to do nowadays if you’re a game publisher and your company’s rocked by scandals around sexism, racism, misogyny, and toxic work conditions in general. (Paradox, for now, being an exception.)

What it doesn’t do, of course, is help. After the better part of a year, none of the demands from the “A Better Ubisoft” initiative have been met. And with regard to Activision Blizzard, well. Last Tuesday, this absolute gem of an anti-union presentation [slide | slide] popped up from Reed Smith, whose lawyers—you can’t make this up—represent Activision in the NLRB hearing on the publisher’s union-busting activities. (That presentation’s been hastily deleted, of course, but the Wayback Machine has a long memory.)


Matthew Gault on Vice:

America’s Army: Proving Grounds, a game used as a recruitment tool by the United States government, is shutting down its servers on March 5 after existing in various iterations for 20 years. After that date, the game will be delisted on Steam and removed from the PSN store. Offline matches and private servers will work, but the game will no longer track stats or provide online matches.

Once an avid player, I haven’t played America’s Army for years, but it sure feels like the end of an era.

Of course, it’s one of those “old school” shooters from the late 1990s and 2000s where you can fire up your own server or play the game offline with friends over LAN forever, like I still do from time to time with the original 1999 Unreal Tournament.

But I guess you’ll no longer be court-martialed and sent up the river (i.e., banned from online play for a week or so) for friendly fire, disobeying orders, or being an asshat in general.


As an update to my original linked-list item on Wordle, I mentioned C Thi Nguyen’s thread on Wordle’s social graphic representation:

But the cleverest bit about Wordle is its social media presence. The best thing about Wordle is *the graphic design of the shareable Wordle chart*. There’s a huge amount of information—and drama—packed into that little graph.

Every game of Wordle is a particular little arc of decisions, attempts, and failures.

But each little posted box is *a neat synopsis of somebody else’s arc of action, failure, choice, and success*.

Now, if you’re so inclined, you can turn your lovely little graphs into lovely little buildings:

Wordle2Townscaper is meant to convert Wordle tweets into Townscaper houses using yellow and green building blocks. You can download the tweet contents, parse pasted tweet contents, or manually edit a grid of 6 rows and 5 columns. Optionally, you can also choose whether wrong guesses should be left blank in Townscaper or filled with white blocks. The ground floor is always needed because you can’t change its color.
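For flavor, here’s a minimal sketch of the kind of parsing such a tool has to do first: scanning a pasted Wordle share text for the emoji tile rows and mapping them to simple tokens. (The function name, the token letters, and all details are my own assumptions for illustration, not Wordle2Townscaper’s actual code.)

```python
# Hypothetical sketch: extract the 6x5 guess grid from a pasted Wordle
# share text, mapping emoji tiles to simple tokens.

# Map the emoji squares to tokens: green, yellow, and miss.
TILES = {
    "\U0001F7E9": "G",  # 🟩 green tile
    "\U0001F7E8": "Y",  # 🟨 yellow tile
    "\u2B1B": ".",      # ⬛ black tile (dark mode miss)
    "\u2B1C": ".",      # ⬜ white tile (light mode miss)
}

def parse_wordle_share(text: str) -> list[str]:
    """Return the guess rows (up to 6 rows of 5 tiles) found in the text."""
    rows = []
    for line in text.splitlines():
        tiles = [TILES[ch] for ch in line if ch in TILES]
        if len(tiles) == 5:          # a valid guess row has exactly 5 tiles
            rows.append("".join(tiles))
    return rows[:6]                  # Wordle allows at most 6 guesses

share = (
    "Wordle 213 4/6\n\n"
    "\u2B1B\U0001F7E8\u2B1B\u2B1B\u2B1B\n"
    "\U0001F7E9\u2B1B\u2B1B\U0001F7E8\u2B1B\n"
    "\U0001F7E9\U0001F7E9\U0001F7E8\u2B1B\u2B1B\n"
    "\U0001F7E9\U0001F7E9\U0001F7E9\U0001F7E9\U0001F7E9\n"
)
print(parse_wordle_share(share))    # ['.Y...', 'G..Y.', 'GGY..', 'GGGGG']
```

From a token grid like this, placing yellow, green, or white blocks in the right spots is the easy part.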



Here’s a brief interview on BBC with Josh Wardle, the creator of the Wordle game (it starts at around 1:24):

I like the idea of doing the opposite of that—what about a game that deliberately doesn’t want much of your attention? Wordle is very simple and you can play it in three minutes, and that is all you get.

There are also no ads and I am not doing anything with your data, and that is also quite deliberate.

There’s a lot to love about Wordle—that you can play it only once per day, or that you can share your result on social media in a clever, spoiler-free way (the word you have to guess on any given day is the same for all players).

On the other hand, will it last? Ian Bogost, in particular, made some astute remarks to that effect regarding Wordle’s rules and its life cycle.

The interview will be taken down after four weeks, but this BBC news item has some quotes. What I didn’t know, as mentioned in this article, is that Josh Wardle was also the creator of Reddit’s The Button, an “experimental game” that ticks all the right boxes for being a social experiment.


Update January 13: Here’s a terrific Twitter thread by philosopher C Thi Nguyen (@add_hawk) on Wordle’s graphic social communication, and another thread by Steven Cravotta with a great story on how Wordle impacted his own game on the app store, and what he’s going to do with it.


Turns out, I was still way too generous in my linked-list entry on Ken Levine’s 2014 GDC talk.


In Levine’s interpretation, auteurism has meant discarding months of work, much to his staff’s dismay. During development of BioShock Infinite at his previous studio, Levine said he “probably cut two games worth of stuff,” according to a 2012 interview with the site AusGamers. The final months of work on that game demanded extensive overtime, prompting managers to meet informally with some employees’ spouses to apologize.

Ghost Story employees spent weeks or months building components of the new game, only for Levine to scrap them. Levine’s tastes occasionally changed after playing a hot indie release, such as the side-scrolling action game Dead Cells or the comic book-inspired shooter Void Bastards, and he insisted some features be overhauled to emulate those games. Former staff say the constant changes were demoralizing and felt like a hindrance to their careers.

Those who worked with Levine say his mercurial demeanor caused strife. Some who sparred with Levine mysteriously stopped appearing in the office, former staff say. When asked, managers typically described the person as a bad match and said they had been let go, say five people who worked there. Others simply quit. The studio’s top producer resigned in 2017 following clashes with Levine.



Harrison Kinsley and Daniel Kukiela:

We are playing entirely within a neural network’s representation of Grand Theft Auto V. We’ve seen AI play within an environment, but here the AI is the environment.


To collect training data for their neural network, they first created a rules-based AI and let twelve instances of it drive around in the same GTA V game instance on the same stretch of road.

And there’s more:

This model learned to model GTA’s modeled physics. But could it have just as well learned, maybe, some basic real-life physics?

Stay tuned.


Finally got around to listening to Ken Levine’s 2014 GDC talk on “Narrative Lego” and I’m kind of…underwhelmed. It’s all very tropey, it’s all rather old-hattish, good people have tried, and the “yeah but let’s get the system off the ground first” drill is precisely how we bake old problems into new technologies.

As far as I know, nothing much has happened since, except for the studio’s rebranding to “Ghost Story Games” and some adumbrations about being inspired by the Nemesis AI System* and doing things differently than Telltale Games. And while Ghost Story Games’s catch phrase “Radical Recognition” is certainly catchy, I’m not holding my breath.

* Which, in an egregious dick move by Warner Bros., has been patented in the meantime.


A great podcast chat between Robin Hunicke and Kimberly Voll. It docks right into the mixed graduate/undergraduate course on “Social & Ethical Dimensions of Player Behavior and Game Interactions” that we wrapped up two weeks ago.

We’ve really come a long way. Once, as related by Kimberly Voll, it was like this:

One of my defining moments prior to Riot was sitting in a meeting with someone discussing a survey that had just gone out, a survey of players, and they then really said, we can throw away the results from all the female players, because that clearly wasn’t their demographic.

This podcast delivers a barrage of experiences, current research, and interesting approaches to digest: from the differentiation between imaginative space and social space in MMORPGs, to the effects of competition on player behavior, to the absence in digital games of the social constraints that promote and enforce behavior like sportsmanship in non-digital games.


This is the bleakest assessment of teaching I’ve read in a very long time. I don’t think it’s wholly unwarranted—my own assessment of school-level teaching in particular is equally grim, as I’ve written about lately here and here. But I’m way more optimistic with respect to university-level teaching.

John says:

[T]here is the value problem. Is the information I am sharing and asking students to critically evaluate, really important? Is this stuff they need to know? I often kid myself that it is. I will claim that a subject as apparently dry and boring as contract law is intrinsically fascinating because it raises important questions about trust, freedom, reciprocity, economic value and so on. […]

I could (and frequently do) argue that students are learning the capacity for critical and self-reflective awareness as result of my teaching […] The claim is often made that critical thinking skills are valuable from a social perspective: people with the capacity for critical thought are more discerning consumers of information, better problem solvers, better citizens and so on. But I don’t know how true this is.




Today, I made some changes to just drafts.

To begin with, I switched the background color from its original daringfireball-flavored grayish-blue to a more maritime-flavored grayish-green. (It’s called “Dark Slate Gray,” but never mind.) If it’s still grayish-blue, hit CMD-R on Mac or CTRL-R on PC a few times to reload the cached style sheet.

Then, I installed a lightbox plugin and added a “slide” icon to the link class; it will display the post’s Open Graph/Twitter Card image in an overlay if you click on it. (That’s the image you see when the post appears on social media.) I might throw in a few other pictures as well.

Finally, I remembered that I created this side blog back in 2014 not merely for research-adjacent blog posts about game-based learning and game-related ethics; I also created it to link to, and to briefly comment on, external content related to games in general. (Whereby, in linked-list fashion, the post’s headline links to the source, not to the post.) This will recommence soon.

Also, I fixed a few bugs.

That’s it.


The Zombie Model of Cognitivism

Last week, I wrapped up two undergraduate-level game design courses with final workshop sessions, and I wanted them to be an enjoyable experience for everyone.

If there’s one thing teaching should accomplish, it’s this: show students a field, its tools, and its biggest challenges in inspiring ways and help them find and solve challenges which a) they find interesting and b) are manageable at their current level of expertise in terms of skill, knowledge, understanding, and attitude. With that, you provide everything that’s needed: autonomy, mastery, purpose, and also—as solving interesting problems almost always demands collaboration—relatedness.

When I was a student, working together existed mostly as group work or team papers, and I hated both. So I always try to set up my workshop sessions in ways even my freshman self would appreciate. As you might have guessed already, the learning theory I’m trying to follow, to these ends and in general, is constructivism, not cognitivism.

Cognitivism is more or less what you experienced in school. It’s about communicating simplified and standardized knowledge to students in the most efficient and effective manner; the students are supposed to memorize this knowledge in a first step, and then memorize the use of that knowledge in a second step by solving simplified and standardized problems. Sounds familiar, right? Cognitivism is all about memory and mental states and knowledge transfer. It’s not at all about creating curiosity, facilitating exploration, or providing purpose. Cognitivism doesn’t foster and stimulate mastery goals and personal growth (getting really good at something); it focuses instead on performance goals (getting better grades, and getting better grades than others). There’s no autonomy to speak of, only strict curricula, nailed-down syllabi, and standardized testing. And there’s no relatedness to speak of either, because there’s no collaboration or cocreation beyond the aforementioned occasional group work or team paper. Because real collaboration and cocreation would mess up standardized testing and individual grading.

Constructivist learning theory, in contrast, is about everything cognitivism ignores: interests, exploration, autonomy, mastery goals, purpose, and relatedness through collaborative and cocreative problem-solving. Certainly, constructivism has its own tough challenges to navigate, but that’s no different from any other learning theory, like cognitivism or behaviorism. According to constructivism, students learn through experiences, along which they create internal representations of the things in the world and attach meaning to these internal representations. Thus, it’s not about learning facts and procedures but about developing concepts and contexts: exploring a field’s authentic problem space, picking a challenge, and then exploring the field’s current knowledge space and its toolboxes to solve that challenge. The critical role of the teacher is to provide assistance and turn unknown unknowns into known unknowns, so that students can develop a better understanding of the scope, the scale, and the difficulties involved. Without that assistance, learning and problem-solving can easily become a frustrating experience instead of an enjoyable one.

And that’s what I mean by enjoyable. In practice, in my final workshop sessions, everybody can pick a problem they’re burning to solve, gather like-minded fellow students, and try to solve that problem together in a breakout room by applying the knowledge and the tools we’ve explored and collected throughout the course. Or, alternatively, they can stay in the “main track” and tackle a challenge whose rough outlines I set up in advance, moderated by me. In the end, every group presents their findings and shares their most interesting insights.

That’s a setup even my younger self would have appreciated! Everyone can, but by no means has to, shoulder the burden of responsibility by adopting a challenge. Everyone can, but by no means has to, win over other students for a challenge, or join an individual group. Everyone can decide to stay in the main track, which is perfectly fine.

The problem with cognitivism is not that it doesn’t want to die, it’s that it hasn’t noticed it’s already dead. Just think about it: almost everybody can tell, or knows somebody who can tell, an uplifting story about that one outstanding teacher who inspired them despite everything else going down in school. But the very foundation of our educational system is that it’s supposed to work independently of any specific individual, of any specific outstanding teaching performance.

Should we ever design a framework that makes our educational system truly work on the system level (which might or might not happen in the future), cognitivism certainly won’t be it. Cognitivism is a shambling zombie that, over time, has gradually slipped into its second career as a political fetish.


Opening Schools in Times of Plague

2020 went by in a daze. For the first time since 2015, I didn’t publish anything, neither a paper nor a book, and I practically stopped blogging. I did write, but I’m months behind my self-set schedule. It wasn’t just the Corona crisis; it was also the developments in Hong Kong, the U.S., and Israel that put me into a negative mental state that continually drained my energy.

Now, things have improved in the U.S., but the situation in Hong Kong becomes more terrifying by the minute, and from Israel comes the most alarming news. Moreover, I live in Germany—where politicians, propagandists, mercenary scientists, and dangerous alliances of Covid-19 denialists alike torpedo every sensible solution that real scientists and a handful of public figures come up with to fight the virus and keep people safe.

Among the most disastrous measures, in every respect, is the premature opening of schools. This has no rational explanation. Schools in Germany are notoriously underfunded and have no digital infrastructure to speak of. Teachers are bogged down by administrative work. The integration of technologies in the classroom very rarely exceeds the use of calculators and overhead projectors. And, countless political statements of intent notwithstanding, no one ever really gave a shit, and nothing ever changed.

Now, during times of plague, education is suddenly the most terribly precious thing, and sending children back into crowded classrooms is more important than all the people this might kill in the process, or damage for life.

Of course, there are economic reasons to get children out of the way as fast as political decorum allows, going hand in hand with the ministerial refusal to make working from home mandatory where possible. Dying for the GDP is something we all understand.

But, considering the tremendous scope of suffering inflicted by Covid-19, that’s too rational an explanation for the consistently irrational demeanor and decision-making, where the state-level secretaries of education push toward opening the schools and, like clockwork, strengthen the next pandemic wave every time. What’s really going on, and it took me forever to understand this, is that “school” is not, or no longer, structured like a place of education & learning. Instead, school is structured like a fetish, forever pointing toward education as a lost referent that can no longer be retrieved.

As such, it is hyper-resistant to any kind of change or reform; to scientific and technological progress; to social and psychological and pedagogical insights.

As such, it wastes twelve or thirteen years of everyone’s life on memorizing swiftly perishable facts instead of teaching curiosity; focuses on tests and grades instead of fostering skill, creativity, and understanding; insists on following the curriculum instead of inspiring students with the love, and thirst, for knowledge and for lifelong learning.

As such, it demands that everyone suffer. And now, true to its nature as a fetish, it demands the willingness to sacrifice your loved ones in times of plague as well.


Hanukkah Special — Day Eight | Summary & Outlook

Today, on the last day of Hanukkah, we will do three things: wrap everything up, briefly; talk about the three missing elements period style, game type, and game loop; and draft a task list with items that would be needed to advance our conceptual sketch toward something pitchable, i.e., a game treatment.

Read More »


Hanukkah Special — Day Seven | Architectonics

During the fourth, fifth, and sixth day of Hanukkah, we sketched a number of design decisions for the Interactivity, Plurimediality, and Narrativity territories.

Today, we will engage the fourth and final design territory of the Ludotronics paradigm, Architectonics.

Instead of merely covering “story” or “narrative,” Architectonics denotes the design and arrangement of a game’s dramatic structure in terms of both its narrative structure and its game mechanics and rule dynamics.

Read More »


Hanukkah Special — Day Six | Narrativity

Yesterday’s design territory, Plurimediality, was about “compelling aesthetics,” seen from the viewpoint of functional aesthetics and working toward a game’s consistent look-and-feel for a holistic user experience.

Narrativity, our design territory for today, is about “emotional appeal,” seen from the viewpoint of narrative qualities or properties that work toward conveying specific meaning in a specific dramatic unit for a memorable gameplay moment.

For the latter, the Ludotronics paradigm works with four content domains: visual, auditory, kinesthetic, and mythological. Obviously, to be able to plan specific narrative qualities or properties for memorable gameplay moments, we need to know a lot more about our purposeful/educational Hanukkah game than we know at the moment. All we can do right now is sketch a few general principles that we can apply in each domain.

Read More »


Hanukkah Special — Day Five | Plurimediality

While yesterday’s Interactivity territory with its game mechanics, rule dynamics, and player interactions has the “just-right amount of challenge” at its motivational core, Plurimediality is associated with “compelling aesthetics.”

Which, in a game, comprises not only graphics, sound, music, writing, voice acting, and the game’s look-and-feel in general, but also usability in its many forms including player controls. That’s because aesthetics and usability are two sides of the same coin, united by function. Accordingly, Plurimediality integrates design thinking from the perspective of functional aesthetics (informed by the Ludology dimension) and the perspective of aesthetic experience (informed by the Cinematology dimension). One can’t be great without the other, and every element in turn must connect with the theme.

To recap our prospective purposeful/educational Hanukkah game’s concept so far, we’re considering a game with intergenerational cooperative gameplay for Jewish and non-Jewish children and Jewish parents/relatives with a core experience in the ludological dimension, the general theme of “hope,” and a strong focus on the motifs “anticipation,” “ambition,” and “trust” in the Interactivity design territory.

So what’s the aesthetic experience of our Hanukkah game going to be? To answer this question, tentatively, we will look at the game’s style and sound.

Read More »


Hanukkah Special — Day Four | Interactivity

To recapitulate our insights from the first three days of Hanukkah, our purposeful/educational Hanukkah game’s theme will be “hope”; its core will fall into the ludological dimension; it will have a complex primary and secondary target audience of Jewish and non-Jewish children and Jewish parents and relatives; and its USP will be intergenerational cooperative gameplay.

Today, we will explore the first design territory for our Hanukkah game, Interactivity. Within the Ludotronics paradigm, Interactivity is informed by game mechanical aspects, i.e., the mechanics and rules of the game, and ludological aspects, i.e., how players interact with the game and with other players. (The other three design territories besides Interactivity, to be visited in the following days, are Plurimediality, Narrativity, and Architectonics.)

Read More »


Hanukkah Special — Day Three | Audience & USP

From the research we did on the first and the second day of Hanukkah, we collected a number of design parameters for our purposeful/educational Hanukkah game. These are:

  • it’s not well suited for a (pseudo-)historical combat or strategy game;
  • it must be playable for non-Jewish players without having to “play-act” Jewish rituals;
  • “hope” is a suitable theme for design decisions across all four design territories;
  • the best fit for the core playing experience is the ludological dimension.

Today, on the third day of Hanukkah, we will explore possible target audiences and the game’s unique selling proposition, or USP.

Read More »


Hanukkah Special — Day Two | Theme & Core Experience

Yesterday, on the first day of Hanukkah, we sounded out the historical context of the Hanukkah story, thereby eliminating a historical combat or strategy game. Now we will turn to the meaning or meanings that Hanukkah has acquired over time, from the Talmud until today, to find our purposeful/educational game’s theme and its core experience in one of the four dimensions, i.e., Game Mechanics, Ludology, Cinematology, or Narratology.

Read More »


Hanukkah Special — Day One | Intel

Usually, everything starts with a game idea. From there, you can proceed, step by step, maybe along the Ludotronics paradigm, until you have a pitchable concept—no matter whether it’s a publisher pitch, an in-house pitch, or a war cry to assemble a crackerjack team for your indie title. Now what if someone—a customer, a publisher—approaches you not with an idea, but with a topic?

Along the eight days of Hanukkah, of which today is the first, let’s sketch a purposeful/educational game concept about Hanukkah as an exercise.

Read More »


Native Informants in Game-Based Learning: Gender

Game-based learning comes in many different shapes and styles that encompass everything from dedicated educational content in serious games to training simulations to communicating knowledge about historical eras or events and questions of ethnicity or gender in AAA action-adventure games. Yet, as always, well-meant isn’t necessarily well-done. In the case of AAA and high-quality independent games, it is often the lack of “native informants” during the development process that turns good intentions into public meltdowns.

“Native informant” is a term from post-colonial discourse, particularly as developed by Gayatri Chakravorty Spivak. It indicates the voice of the “other” which always runs the risk of being overwritten by, or coopted and absorbed into, a dominant public discourse. Even the term itself always runs this risk.

There are three topics of the “other” I’d like to touch upon in three consecutive posts: gender, ethnicity, and madness. For this post on gender, let’s look at some well-known examples.

With a native informant, the original portrayal of Hainly Abrams wouldn’t have failed as abysmally as it did in Mass Effect: Andromeda. BioWare’s reaction was laudable, certainly—they reached out to the transgender community and made the appropriate changes. But why didn’t they reach out in the first place? Who, on the team, was comfortable with writing the “other” without listening to their voices? Another egregious example was RimWorld’s early gender algorithms—where, in contrast, the developers’ reaction was anything but laudable. (I might be going out on a limb here, but I don’t think OkCupid data qualifies as a native informant.)

Calling in native informants as consultants is exceptionally useful in more than one respect.

First, obviously, they prevent your team from making outrageous mistakes. In the case of Mass Effect: Andromeda, it was a particular mistake that could be fixed with a patch. But when a character’s actions or even whole chunks of the plot are based on faulty premises, that’s not easily fixed at all. Enter Assassin’s Creed: Odyssey’s DLC “Legacy of the First Blade: Shadow Heritage,” where Kassandra was “hetconned,” i.e., forced into a heterosexual relationship to become a parent.

Then, why would you want to stop at merely not getting it wrong? Native informants will provide you with vivid details that will turn cliché attitudes into motivated actions and emotions and transform your cardboard characters into engaging personalities.

Finally, hiring native informants as consultants adds diversity to your team, enriches your game, and broadens your reach beyond your mainstream audience. Toward people, that is, who would gladly buy your game if they see themselves represented in it—in the contexts of visibility, of acceptance, and of role models as drivers of empowerment.


Boss Exam!, or: Grading in Games

One of the great advantages of game-based learning is that, in well-designed games, players are “tested” exclusively on skills, knowledge, understanding, or attitudes that they have learned, or should have learned, while playing the game. Certainly, there are exceptions. Games can be badly designed. Or, problems of familiarity might arise, as discussed in Ludotronics, when conventions of a certain class of games (misleadingly called “genre”) raise the Skill Threshold for players who are not familiar with them and might also interfere later in test situations. Most of the time, though, players are indeed tested on what the game has taught them, and well-designed “term finals” like level bosses do not simply test the player’s proficiency, but push them even further.

This rhymes beautifully with the rule “you should not grade what you don’t teach” and its extension “if you do teach it, grade it only after you’ve taught it.” While this sounds like a no-brainer, there are areas in general education that have a strong tendency toward breaking this rule, with grammar as a well-known example. And then there’s the problem of bad or insufficient teaching. When students, through no fault of their own, have failed to acquire a critical skill or piece of knowledge demanded by the advanced task at hand, how would you grade them? Is it okay, metaphorically speaking, to send students on bicycles to a car race and then punish them for poor lap times? But if you don’t grade them on the basis of what they should have learned, doesn’t that mean lowering the standards? As for the students, they can’t just up and leave badly designed education the way they can stop playing a badly designed game.

Another aspect, already mentioned, is that tests in games not only test what has been taught, but are designed to push player proficiency even further. This is possible, and possible only, because the player is allowed to fail, and generally also allowed to fail as often as necessary. Such tests, moreover, can become very sophisticated, very well-designed. In Sekiro: Shadows Die Twice, you might be able to take down a boss with what you’ve already learned instead of what you’re supposed to be learning while fighting that boss, but you’ll be punished for it later when you desperately need the skill you had been invited to learn. You can read all about this example in Patrick Klepek’s “15 Hours In, Sekiro Gave Me a Midterm Exam That Exposed My Whole Ass,” which is recommended reading.


The Other Thing on the Doorstep: Games and the Classroom

The second reverse question I love to discuss in my lecture on media psychology/media didactics for game designers is to ask, instead of what games can bring to the classroom, what the classroom brings to the table compared to games. Again, it’s about non-specialized public gymnasien in Germany, so the results are not expected to be particularly invigorating.

What about motor skills, dexterity, agility, hand-eye coordination, reaction times, and so on? What about endurance, persistence, ambition (not in terms of grades), or patience? Here, the classroom has little to offer. Possible contributions are expected to come from physical education, certainly. But—with the exception of specialized boarding schools, or sportgymnasien—physical education is almost universally despised by students for a whole raft of reasons, and it is not renowned for advancing any of the qualities enumerated above in systematic ways.

What about »Kombinationsfähigkeit« in terms of logical thinking, deduction, and reasoning? What about strategy, tactics, anticipatory thinking (which actions will trigger which events), algorithmic thinking (what will happen in what order or sequence), and similar? Again, the classroom in non-vocational or non-specialized schools has little to offer here, if anything at all.

Finally, what about creativity, ingenuity, resourcefulness, imagination, improvisation, and similar? Some, the lucky ones, have fond memories of art lessons that fostered creativity. But even then: on this palette, creativity is just one color among others. Music lessons have potential too, but—barring specialized schools again—it’s rare to hear students reminisce lovingly about music lessons that systematically fostered any of these qualities.

Now, the curious thing is that serious games and game-based learning projects have a tendency to try and compete precisely with what the classroom does bring to the table for the cognitive domain, notably in its traditional knowledge silos that we call subjects. This, not to mince words, is fairly useless. Serious games, in careful studies against control groups, almost never beat the classroom significantly in terms of learning events, learning time, depths of knowledge, or even knowledge retention—but they come with a long time to market, a stiff price tag, and tend to burn content like a wildfire. Instead, game-based learning should focus on the psychomotor domain, the affective domain, those parts of the cognitive domain that the classroom notoriously neglects, and rigorously unsiloed knowledge from every domain. And if all that can’t be integrated into the classroom because the classroom can’t change or adapt and integrate games and new technologies in general, then let’s build our own classroom, maybe as a massively multilearner online game-based learning environment. The century’s still young.


The Thing on the Doorstep: Technology and the Classroom

In my lecture on media psychology/media didactics for game designers, reverse-questioning how school lessons and school curricula relate to technology and games counts among my favorite exercises. For example, instead of asking what technology in general and games in particular can do for the classroom, we ask which technological advances schools have actually adopted since the middle of the twentieth century.

Now, as I am teaching in Germany, and most of my students come from non-specialized public gymnasien, you can probably guess where this is going. With the rare exception of interactive whiteboard use (roughly one student out of fifteen), the two technological advances properly integrated into regular classroom use in seven decades are [check notes] the pocket calculator and [check notes again] the overhead projector.

Historically, all technological advances that have been proposed for classroom use in Germany, including pocket calculators and TV/VCR sets and computers and cell phones and tablets, and so on, were viciously opposed by teachers, parents, administrators, and politicians alike, with an inexhaustible reservoir of claims ranging from the decline of educational integrity (»Kulturverfall«) to wi-fi radiation panic (»Strahlenesoterik«). Today, the copy-and-paste function in particular is viewed as a creeping horror that must never be allowed to make it past the hallowed doorstep of higher education.

Add to that an often desolate financial situation. There are cases where schools can’t afford a decent wi-fi infrastructure, desktop or mobile hardware, software licenses, or, especially, teacher training. But all these problems could be surmounted in principle if it weren’t for the one titanic challenge: the curriculum. Cognitivism has brought us this far, and we certainly shouldn’t abandon it. But it should become part of something new, a curriculum based on what and how we should teach and what and how we should learn in the 21st century. This requires motivational models that are up to the task; we can start with Connectivism and with models like flow and self-determination, both of which feed the Ludotronics motivational model, and advance from there. Technological, social, and other kinds of advances should become deeply integrated into this new curriculum. To turn our information society into a true knowledge society, we must leave the era of age cohorts, classes, repeated years, and silo teaching behind and embrace learning and teaching as a thrilling and, before all else, shared quest of lifelong exploration.


Hot off the press:

Ludotronics: A Comprehensive Game Design Methodology From First Ideas to Spectacular Pitches and Proposals

It’s a conceptually complete paradigm and design methodology for intermediate and advanced game designers, from coming up with a raw idea for a game to greenlighting a refined version of that idea for development.


Note: For three years, from March 2019 through April 2022, a high-quality e-book version of Ludotronics was on sale at DriveThruRPG. I have since pulled that edition from sale; it will be succeeded by a print edition scheduled for 2023.


This is my fourth paper, certainly not my last, but the last paper I will prepublish before my book comes out. (For those of you who haven’t followed along or have forgotten all about it: three years ago, I committed to writing one academic paper per year during book research.)

Link barrage first.

My initial paper from 2015 on dialogic speech in video games is here. My second paper from 2016 on sensory design for video games is here. My third paper from last year about emotional design for video games is here. Finally, my book website is here. The book will be released later this year (or in early 2019, at the latest). I decided to publish the electronic book with the help of DriveThruRPG. It’s a site I love, conditions are decent, and Amazon is not an option (re: format, royalties).

Now, here’s my fourth paper. It’s on learning design in video games (PDF, direct link): “Learning to Play, Playing to Learn: An Integrated Design Approach for Learning Experiences in Video Games.”*

Once again, it’s an electronic preprint, not published in an academic journal at this point in time. Enjoy!

*This paper has now been re-uploaded with some minor typo corrections.


Passage is structure, not story. It’s not about challenges or plot points, it’s about rules and mood. As a passage, it should serve as an introduction when the game is played for the first time, and from then on, well-chosen details of it should serve as a reminder every single time until it is finished.

A brief essay on passages in game design that I wrote for my university’s news room page.


Hooray, my third paper! As I wrote here two years ago, I committed myself to writing one paper each year while I’m researching and writing my book on game design methodologies. Which has a full title now: The Ludotronics Game Design Methodology: From First Ideas to Spectacular Pitches and Proposals. Which, if all goes well, I will publish next year.

With regard to my third paper, this time it’s all about emotions. You’ll find the paper here (PDF, direct link): “Tuning Aristotle: An Applied Model of Emotions for Interactive Dramatic Structures.”* Again, it’s an electronic preprint; it hasn’t been published in an academic journal yet.

(My second paper on sensory design is here.)


*This paper has now been re-uploaded with some minor typo corrections.


The general principle to put players in control of their flow channels is to design the game’s challenge structure in a way that concurrently escalates risk, relief, and reward, and not just over time, but “stacked” at any given gameplay moment.

A brief essay on “flow” in game design that I wrote for my university’s news room page.
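To make the “stacked” escalation above concrete, here is a minimal sketch of how one might lint a level’s challenge structure for it. All names and numbers are my own illustration, not from the essay: each gameplay beat carries a risk, relief, and reward value, and a well-stacked sequence escalates all three concurrently rather than one dimension at a time.

```python
# Illustrative sketch (hypothetical names/values): check that a sequence of
# gameplay beats escalates risk, relief, and reward concurrently ("stacked"),
# not just one of the three.
from dataclasses import dataclass

@dataclass
class Beat:
    risk: float     # chance or severity of failure at this beat
    relief: float   # breathing room granted after the challenge
    reward: float   # payoff for overcoming the challenge

def escalates_concurrently(beats):
    """True if every later beat raises risk, relief, AND reward together."""
    return all(
        b.risk > a.risk and b.relief > a.relief and b.reward > a.reward
        for a, b in zip(beats, beats[1:])
    )

# A sequence where all three dimensions escalate together...
level = [Beat(0.2, 0.1, 1.0), Beat(0.4, 0.3, 2.5), Beat(0.7, 0.5, 5.0)]
print(escalates_concurrently(level))

# ...versus a lopsided one where only risk grows, which would push
# players out of their flow channel.
lopsided = [Beat(0.2, 0.1, 1.0), Beat(0.9, 0.1, 1.0)]
print(escalates_concurrently(lopsided))
```

A real challenge structure would of course be richer than three numbers per beat; the point of the sketch is only the concurrency check itself.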


As I wrote last year, I committed myself to generating one academic paper per year from the research I’m doing for my book on game design methodologies, Ludotronics. (The website is up, but there’s not much to see yet. The book won’t be released before 2018, and that’s when the full website will come alive too.)

This is the second academic paper from my research (PDF, direct link): “Making Sense: Juxtaposing Visual, Auditory, and Kinesthetic Design Elements to Create Meaning, Reinforce Emotions, and Strengthen Player Memory Formation and Retrieval.”* It’s a rather long title for a paper, granted. But I did put a lot into it!

Again, as I’m not yet pitching my papers to academic journals, this is an electronic preprint. Enjoy!

*This paper has now been re-uploaded with some minor typo corrections.


Early this year, I began conducting research into game design methodologies for a book I’m going to write. Its title, or at least part of its title, will be Ludotronics. Trademark’s filed, and the domain is up, though it’s only a placeholder page right now, and will be for a very long time. But when it’s done, the whole book will also be freely available on that website under a Creative Commons license.

Taking my research plan and my other three lives into account, writing this book will take me about three to four years, as an estimate. So if all goes well, that’s a release date in 2018, probably late 2018.

But also …

I committed myself to generating one academic paper per year from the research I’m doing for my book. This is the first paper (PDF, direct link): “A Functional Model for Dialogic Speech in Video Game Design.”* For a number of reasons, I decided not to pitch these papers to academic journals before my book research is complete, maybe not even until after the book’s release. (I know, nothing’s ever complete in science.) So take these academic papers as what they are: electronic preprints. Enjoy!

*This paper has now been re-uploaded with some minor typo corrections.


Breaking Down Siloed Educational Subjects & Learning with Computer Games: Finland Leads the Way

Recently, I asked about the possible effects of “one-size-fits-all educational methodology with predetermined curricula and standardized testing” and, especially, “conditioned learning of siloed educational subjects detached from personal experience in large classes solely determined by year of birth.” That, of course, was purely a rhetorical question as these effects are clearly visible to everybody. Rarely do schools instill in us a deep and abiding love for learning and for the subjects taught, the historical events envisioned, the works of literature read, the math problems solved. Rarely do we fondly remember our school days and sentimentalize them into nostalgic yearnings for a lost pleasure. In my 2013 inaugural lecture “Knowledge Attacks!: Storyfication, Gamification, and Futurification for Learning Experiences and Experiential Learning in the 21st Century” which, alas, I still haven’t managed to put online, I developed two sample scenarios to break down educational silos in our schools.

The first sample scenario took off from the topic of “calculus,” typically siloed and restricted to “math” courses. Why don’t we confront students instead with the exact same problems Newton and Leibniz faced and connect the invention of calculus with learning content from its historical context: Newton’s Principia and the Great Plague of London; the English civil war and the Bill of Rights; the baby and toddler years of the Scientific Method and the principles of rational inquiry; the Great Fire of London and modern city planning; the formation of the United Kingdom and colonial power struggles; the journalist, author, and spy Daniel Defoe and Protestant work ethic; the transition from Baroque to Rococo in painting and music; the Newtonian telescope and Newton’s laws of motion; the dissociation of physics and metaphysics. Interim summary: math, physics, philosophy, religion, English language, history, geography, music, art, literature, astronomy. Oh, and sports: the evolution of cricket during this time in post-restoration England as the origin of professional team sport.

Or “democracy,” aspects of which are usually siloed in history courses and/or a variety of elective courses like “politics” or “citizenship.” What if students were given the task, perhaps in a time-travel setup, to convince the Athenian assembly in 482 B.C.E. via political maneuvering to distribute the newfound silver seam’s wealth among the Athenian citizens instead of following Themistocles’s proposal to build a naval fleet? So that the Battle of Salamis would never happen, the Greek city states would be swallowed by the Persian empire, neither the Roman republic nor large parts of the world would become hellenized, and the European Renaissance as we know it would never come to pass? The potential in terms of learning content: the dynamics of democracy and political maneuvering; the history of classical antiquity; general economics, trade, and the economics of coins and currencies; the structure and ramifications of democratic systems built on slave economies; Greek language; rhetoric; comedy and tragedy; myth; Herodotus and the beginnings and nature of historical writing; geometry; the turn from pre-Socratic to Socratic philosophy; sculpture and architecture; ship building; astronomy; geography; the Olympic Games. Strategy, probably—especially naval strategy if the plan to change history fails: Salamis from the point of view of Themistocles on the side of the Greeks and from the point of view of Artemisia—the skillful and clear-sighted commander of a contingent of allied forces in Xerxes’s fleet—on the side of the Persians. Infinite possibilities.

Now, what’s being introduced in Finland right now as “cross-subject topics” and “phenomenon-based teaching” looks quite similar in principle to my developing concept of “scenario learning.” As the Independent's headline puts it, “Subjects Scrapped and Replaced with ‘Topics’ as Country Reforms Its Education System”:

Finland is about to embark on one of the most radical education reform programmes ever undertaken by a nation state—scrapping traditional “teaching by subject” in favour of “teaching by topic.” […] Subject-specific lessons—an hour of history in the morning, an hour of geography in the afternoon—are already being phased out for 16-year-olds in the city’s upper schools. They are being replaced by what the Finns call “phenomenon” teaching—or teaching by topic. For instance, a teenager studying a vocational course might take “cafeteria services” lessons, which would include elements of maths, languages (to help serve foreign customers), writing skills and communication skills.

More academic pupils would be taught cross-subject topics such as the European Union—which would merge elements of economics, history (of the countries involved), languages and geography.

This sounds exciting already, but there’s more:

There are other changes too, not least to the traditional format that sees rows of pupils sitting passively in front of their teacher, listening to lessons or waiting to be questioned. Instead there will be a more collaborative approach, with pupils working in smaller groups to solve problems while improving their communication skills.

Many teachers, of course, aren’t exactly thrilled. But a “co-teaching” approach to lesson planning with input from more than one subject specialist has also been introduced; participating teachers receive a “small top-up in salary,” and, most importantly, “about 70 per cent of the city’s high school teachers have now been trained in adopting the new approach.”

And even that’s not the end of it. A game-based learning approach, upon which my scenario learning concept is largely built, will also be introduced to Finland’s schools (emphases mine):

Meanwhile, the pre-school sector is also embracing change through an innovative project, the Playful Learning Centre, which is engaged in discussions with the computer games industry about how it could help introduce a more “playful” learning approach to younger children.

“We would like to make Finland the leading country in terms of playful solutions to children’s learning,” said Olavi Mentanen, director of the PLC project.

Finally, from the case studies:

We come across children playing chess in a corridor and a game being played whereby children rush around the corridors collecting information about different parts of Africa. Ms. Jaatinen describes what is going on as “joyful learning.” She wants more collaboration and communication between pupils to allow them to develop their creative thinking skills.

What I feel now is a powerful urge to immediately move to Finland.

(h/t @JohnDanaher)



Austin Walker, whose eye-opening article “Real Human Beings: Shadow of Mordor, Watch Dogs and the New NPC” I referenced in an earlier post, writes terrific video game criticism at New Statesman, Paste Magazine, and ClockworkWorlds.

It would be great if you could spare a few bucks to support his work on Patreon. Critical voices like his are sorely needed.


#ECGBL 2014: The Pitfalls of Gamified Learning Design

The first #ECGBL2014 presentation I attended was “Experimenting on How to Create a Sustainable Gamified Learning Design That Supports Adult Students When Learning Through Designing Learning Games” (Source) by Charlotte Lærke Weitze, PhD Fellow, Department of Education, Learning and Philosophy, Aalborg University Copenhagen, Denmark.

Weitze’s paper relates to a double challenge so crucial for designing game-based learning solutions that you’ll see me coming back to it on this blog time and again. One side of this challenge is students burning through content way faster than educators and game designers are able to produce it; you just can’t keep up. The other side is the challenge of practicing complex, non-siloed learning content within game-based learning environments that are incompatible with practices such as rote learning, flash cards, repeat training, or standardized testing.

In other words, it’s non-educational games’ “replay value” challenge on screaming steroids. Among the most promising solutions is mapping the “learning by teaching” approach to designing content, i.e., having students design fresh game content as a learning-by-teaching exercise. Obviously, both the effectiveness and the efficiency of this approach depend on the number of students in the system, where output will take off exponentially only after a certain threshold has been reached, both numerically and by way of network effects.
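As a back-of-the-envelope illustration of that threshold dynamic (every number here is hypothetical, chosen only to make the shape of the curve visible): if each student burns through content at a fixed rate, while peer production grows with the number of student pairs, the simplest stand-in for a network effect, then the system runs a content deficit below some population threshold and a surplus above it.

```python
# Toy model with made-up parameters: cumulative content produced minus
# consumed in a learning-by-teaching setup with a simple network effect.

def content_balance(students, weeks=10):
    """Return content units produced minus consumed over `weeks`."""
    consume_rate = 5.0   # units each student burns per week (hypothetical)
    produce_rate = 0.02  # per-pair production per week (hypothetical)
    consumed = students * consume_rate * weeks
    # Production scales with the number of student pairs, i.e. ~ n^2,
    # the simplest possible stand-in for a network effect.
    produced = produce_rate * students * (students - 1) * weeks
    return produced - consumed

# Below the threshold the system burns more content than it creates;
# above it, production takes off.
for n in (50, 100, 251, 400):
    print(n, round(content_balance(n), 1))
```

With these particular rates the break-even point sits at 251 students; the real-world threshold would depend on factors the toy model ignores, such as content quality and the curation overhead for teachers.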

Of course there are pitfalls. One obvious pitfall is that students—with the exception of game design students, of course—are students, not game designers. And while, along more reasonable approaches, they don’t have to design fully functional games by themselves with game mechanics, rules, and everything, it’s still hard to design entertaining, i.e., “playable” content for games without at least some game design knowledge from various fields—prominently balancing, guidance strategies for player actions, or interactive storytelling. This was one of the reasons why, regrettably, Weitze’s experiment, where students also had to build the content with software, fell short.

As for the experimental setup (which employs the terminology of James Paul Gee’s more interaction-/discourse-oriented differentiation between little “g” games and big “G” Games), students design and build little “g” games for a digital platform within the framework of a “gamified learning experience,” the big “G” Game, which includes embedding learning goals and evaluating learning success. The little “g” games in this case are built for other students around cross-disciplinary learning content from the fields of history, religion, and social studies in a three-step process consisting of “concept development, introduction and experiments with the digital game design software (GameSalad), and digital game design” (4). Moreover, this whole development process (within the big “G” Game) had to be designed in such a way as to be motivating and engaging for the students on the one hand, and to yield evaluable data as to its motivational impact on individual learning successes on the other.

Experiences from this experiment, unsurprisingly, were described in the presentation as “mixed”—which is academic parlance for “did not fucking work as planned at all.” The problems with this setup are of course manifold and among these, “focus” is the elephant in the room.

There are far too many layers, especially for an experiment: the gamification of game design processes for learning purposes (the big “G” Game) and the design thereof with in-built learning goals and evaluation strategies; below that, then, the game design processes for the little “g” games including, again, learning goals and evaluation strategies. In other words, the students were supposed to be learning inside the big “G” Game by designing little “g” games in groups with which other students in turn would be able to “learn from playing the games and thus gain knowledge, skills and competence while playing” (3)—all that for cross-disciplinary content, executed in a competitive manner (the big “G” Game) by 17 students and three teachers, none of whom had sufficient game design experience to start with, and under severe time constraints of three workshop sessions of four hours each.

Besides “focus,” the second elephant in the room is “ambition,” which the paper acknowledges as such:

In the current experiment the overall game continued over the course of three four‐hour‐long workshops. Though this was a long time to spend on an experiment, curriculum‐wise, for the upper‐secondary students, it is very little time when both teachers and students are novices in game‐design. (3)


This is an ambitious goal, since a good learning‐game‐play is difficult to achieve even for trained learning game designers and instructors. (3)

And the nail in the coffin, again unsurprisingly:

At this point [the second workshop] they were asked to start considering how to create their game concept in a digital version. These tasks were overwhelming and off‐putting for some of the students to such a degree that they almost refused to continue. This was a big change in their motivation to continue in the big Game and thereby the students learning process was hindered as well. (7)

Finally, what also generated palpable problems during this experiment was the competitive nature of the big “G” Game. While the paper goes to some lengths to defend this setup (competition between groups vs. collaboration within groups), I don’t find this approach to game-based learning convincing. Indeed I think that for game-based learning this approach—competition on the macro-level, collaboration on the micro- or group-level—has it upside down: that’s what we already have, everywhere. Progress, in contrast, would be to collaborate on the macro-level with stimulating, non-vital competitive elements on the micro- or group level.

Game-based learning can provide us with the tools to learn and create collaboratively, and to teach us to learn and create collaboratively, for sustained lifelong learning experiences. Competition can and should be involved, but as a stimulating part of a much greater experience, not the other way round. The other way round—where collaboration has to give way to competition as soon as things threaten to become important—is exactly what game-based learning has the potential to overturn and transcend in the long run.


#ECGBL2014: Welcome and Keynote

Last year’s ECGBL 2014 (8th European Conference for Game-Based Learning), October 9–10, in Berlin, which included the satellite conference SGDA 2014 (5th International Conference on Serious Games Development & Applications), took place at Campus Wilhelminenhof in Berlin-Oberschöneweide and was hosted by the University of Applied Sciences for Engineering and Economics HTW Berlin. The conference was opened by Dr.-Ing. Carsten Busch, program chair and professor of media economics/media information technology, followed by an introduction to the HTW Berlin by Dr.-Ing. Helen Leemhuis, faculty dean and professor of engineering management.

Then, the keynote. Oh well.

How to put this politely. To be sure, there are occasions and circumstances where it is a good idea to engage with stakeholders outside academia by inviting industry representatives to academic conferences as keynote speakers, but in this case it rather wasn’t.

The invited speaker was Dr. jur. Maximilian Schenk, formerly “Director Operations and member of the management team” of the German VZ network (which has gone down in history for, among other things, setting the bar for future copy-&-paste operations spectacularly high by copying Facebook literally wholesale, down to style sheets and code files named fbook.css or poke.php), at present managing director of the BIU (Bundesverband interaktive Unterhaltungssoftware / German Trade Association of Interactive Entertainment Software). He addressed his audience of highly qualified postgraduate, postdoc, and tenured veteran researchers from the fields of game-based learning and serious games across a wide range of disciplines verbatim with:

You are the specialists so I won’t go into your terrain, so instead I will tell you something about the fundamentals of serious games that you have to understand to know what making serious games is all about.

During the stupefied silence that followed, Maximilian Schenk acquainted the audience with the BIU and its sundry activities, explained how the traditionally bad image of gaming in Germany, including serious games, was changing as it had been found out that “games make people smarter” (evidence: Spiegel), pontificated about “games as a medium” from “tent fires, maybe 10,000 years” ago to today’s video games, and ended his thankfully brief keynote with an enthusiastic barrage of growth forecasts relating to game-based learning/serious games industries whose outrageously optimistic numbers were inversely proportional to the amount of actual evidence corroborating them.

While appreciated in general, it was rather obvious that the keynote’s briefness had taken the organizers by surprise, and Carsten Busch jumped in to introduce, with all the little tell-tale signs of hurried improvisation, the Swedish Condom08 gamification project. This presentation—its general drift into inappropriately didactic terrain notwithstanding (“What did you learn?”; “What technologies were used?”)—turned out to be enjoyable and stimulating.

And so the conference began. Follow-up posts are in the pipeline.


Brianna Wu:

Something has to change. I have 3 games to get out the door–Revolution 60 PC, which is almost done, Cupcake Crisis and Project: Gogo. My team needs me leading them, not fighting Gamergate.

I’m not sure what the answer is. I might start a Patreon for an assistant to help with all these Gamergate tasks. I might just start doing less. But, I do know I’m going to lead GSX more this week and fight Gamergate less.

I got into the game industry to make games. And it’s time for me to get back to it.


Only, for Brianna Wu and some others, it’s way more difficult to get back to work. So if it comes down to Patreon assistance, count me in.

Also, Brianna Wu:

If I were being honest—I’m more than a little resentful. The vast majority of our male-dominated games press wrote a single piece condemning Gamergate and has been radio silent ever since. The publishers are silent, the console makers are silent. And so, Anita, Zoe, Randi and myself are out here doing the majority of the work, while everyone whines about wanting it to be over.

Meanwhile, the rest of the industry is doing what they do best, which is nothing.



European Conference for Game-Based Learning

This week, I’m off to the ECGBL 2014 in Berlin, including the SGDA 2014 satellite conference.

Not sure if I will have time for blogging but if not, bear with me—there’ll be no shortage of posts about conference papers soon.

If anyone reading this blog also happens to attend this year’s ECGBL and wants to connect, get in touch through any channel you can find.



It’s All in Your Head: How Do We Get People to Understand Programming, Math, Literature?

The negative motivational potential of programming textbooks and tutorials is second only to the motivational potential of how we teach math. And while visual “learning by doing” systems like Khan Academy’s Computer Programming online course seem like progress, they’re sugarcoating the problem instead of providing a solution.

Bret Victor, two years ago, in his eye-opening essay “Learnable Programming”:

We often think of a programming environment or language in terms of its features—this one “has code folding”, that one “has type inference”. This is like thinking about a book in terms of its words—this book has a “fortuitous”, that one has a “munificent”. What matters is not individual words, but how the words together convey a message.

Everything else—a must-read—follows from there toward building a mental model, based on an interesting premise: “A programming system has two parts. The programming ‘environment’ is the part that’s installed on the computer. The programming ‘language’ is the part that’s installed in the programmer’s head.” The inspiration to build on this premise, as Victor remarks, came from Will Wright’s thoughts on interactive design in “Sims, BattleBots, Cellular Automata God and Go: A Conversation with Will Wright” by Celia Pearce:

So what we’re trying to do as designers is build up these mental models in the player. The computer is just an incremental step, an intermediate model to the model in the player’s head. The player has to be able to bootstrap themselves into understanding that model. You’ve got this elaborate system with thousands of variables, and you can’t just dump it on the user or else they’re totally lost. So we usually try to think in terms of, what’s a simpler metaphor that somebody can approach this with? What’s the simplest mental model that you can walk up to one of these games and start playing it, and at least understand the basics? Now it might be the wrong model, but it still has to bootstrap into your learning process. So for most of our games, there’s some overt metaphor that allows you to approach the simulation. (Game Studies: The International Journal of Computer Game Research Vol.2 Issue 1, July 2002)

That, of course, holds important implications for game-based learning design—not just for teaching programming, but that’s a particularly obvious example. In game-based learning design, bootstrapping must take place both at the system level and the content level: the GBL model must be designed in such a way that you “can walk up to it and start playing,” and the same thing applies to how the learning content is designed so that the player “can walk up to it and start learning.” (A fine example of Hayden White’s The Content of the Form principle at work, incidentally.)

As of now, there is no shortage of games that try to teach programming to kids, but just browsing the blurbs opens jar after jar crawling with inadequacies, to put it mildly. Games aimed at teens tend to float belly up the next time you check (yes I’m looking at you, CodeHero), or remain perpetually promising but unfinished like Dre’s Detox. And you won’t find anything remotely suitable for adults or seniors.

Obviously, if we managed to create games that teach how to code along the lines imagined by Bret Victor, we’d create new generations of coders who could put to good use not only the principles they’ve learned, but the principles with which they’ve learned what they’ve learned, to create great game-based learning designs and experiences for the future.


Intel, responding to criticism that they caved to demands from the misogynist #GamerGate mob and pulled their ads from Gamasutra:

We apologize and we are deeply sorry if we offended anyone.

No, Intel, that’s not how to do it. Here’s why:

Jebus, but I hate that poor excuse for an apology. It happens all the time; someone says something stupid and wrong, and instead of saying, “I was wrong, I’m sorry and will try to change,” they say, “I’m sorry you were offended by my remarks”—suddenly, the problem lies not in the error of the speaker but in the sensitivity of the listener.

That’s not an apology. It’s a transparent attempt to twist the blame to fall on everyone else but the person who made the mistake.

The only thing this “apology” demonstrates is that Intel’s PR department is run by spineless weasels.


What About #GBL and the Humanities?

Whenever you watch the rare event of big chunks of money flying in the direction of serious GBL development like wild geese in winter, you can bet your tenure on it that it’ll be all about STEM. But education isn’t just Science, Technology, Engineering, and Math—it’s about a zillion other things too! And that includes Social Sciences and the Humanities.

The humanities, despite being ridiculously underfunded, are doing fine in principle. And if we care at all about what kind of society, political system, historical self-image, or perspective on justice should shape our future—or what public and individual capacities for critical and self-critical thinking, introspection, and general knowledge to make informed decisions about anything and everything we want future generations to have at their disposal—then we should be as deeply interested in bringing the humanities and social sciences into game-based learning as we already are with respect to STEM.

Because if you aren’t interested in such things or can’t motivate yourself to take them seriously, then you’d better prepare for a near-future society best represented right now by FOX News and talk radio programs, comment sections of online newspapers, and tech dudebro social media wankfests.

That said, there are at least two fatal mistakes the humanities must avoid at all costs: neither should they put themselves on the defensive about their own self-worth, nor should they position themselves conveniently in the “training” camp.

Jeffrey T. Nealon in Post-Postmodernism: Or, The Cultural Logic of Just-in-Time Capitalism (Stanford: Stanford UP, 2012):

The other obvious way to articulate the humanities’ future value is to play up the commitment to communication skills that one sees throughout the humanities. For example, Cathy Davidson writes in the Associated Departments of English Bulletin (2000), “If we spend too much of our energy lamenting the decline in the number of positions for our doctoral students, … we are giving up the single most compelling argument we have for our existence”: the fact that we “teach sophisticated techniques for reading, writing, and sorting information into a coherent argument.” “Reading, writing, evaluating and organizing information have probably never been more central to everyday life,” Davidson points out, so—by analogy—the humanities have never been so central to the curriculum and the society at large. This seems a compelling enough line of reasoning—and donors, politicians, students, and administrators love anything that smacks of a training program.

But, precisely because of that fact, I think there’s reason to be suspicious of teaching critical-thinking skills as the humanities’ primary reason for being. The last thing you want to be in the new economy is an anachronism, but the second-to-last thing you want to be is the “training” wing of an organization. And not because training is unnecessary or old line, far from it; rather, you want to avoid becoming a training facility because training is as outsourceable as the day is long: English department “writing” courses, along with many other introductory skills courses throughout the humanities, are already taught on a mass scale through distance education, bypassing the bricks-and-mortar university’s (not-for-profit) futures altogether, and becoming a funding stream for distance ed’s (for-profit) virtual futures. Tying our future exclusively to skills training is tantamount to admitting that the humanities are a series of service departments—confirming our future status as corporate trainers. And, given the fact that student writing and communication skills are second only to the weather as a perennial source of complaint among those who employ our graduates, I don’t think we want to wager our futures solely on that. (187–88)

Everybody who’s involved in a rare GBL game pitch for the humanities or social sciences that travels down this road should turn the wheel hard and fast in a different direction—or get out and run.


Cas Prince over at Puppyblog about the declining value of the indie game customer (slightly densified):

Back in the early 2000s, games would sell for about $20. Of course, 99% of the time, when things didn’t work it was just because the customer had shitty OEM drivers. So what would happen was we spent a not insignificant proportion of our time—time which we could have been making new games and thus actually earning a living—fixing customers’ computers. So we jokingly used to say that we sold you a game for a dollar and then $19 of support.

Then Steam came (and to a lesser extent, Big Fish Games). Within 5 short years, the value of an independent game plummeted from about $20 to approximately $1, with very few exceptions.

Then came the Humble Bundle and all its little imitators.

It was another cataclysmically disruptive event, so soon on the heels of the last. Suddenly you’ve got a massive problem on your hands. You’ve sold 40,000 games! But you’ve only made enough money to survive full-time for two weeks because you’re selling them for 10 cents each. And several hundred new customers suddenly want their computers fixed for free. And when the dust from all the bundles has settled you’re left with a market expectation of games now that means you can only sell them for a dollar. That’s how much we sell our games for. One dollar. They’re meant to be $10, but nobody buys them at $10. They buy them when a 90% discount coupon lands in their Steam inventory. We survive only by the grace of 90% coupon drops, which are of course entirely under Valve’s control. It doesn’t matter how much marketing we do now, because Valve control our drip feed.

A long, rambling, and eminently realistic post that everybody interested in gaming culture and indie games should go and read from A–Z.


At ProfHacker, Anastasia Salter has collected five recommendations for critical readings on games and learning. A quick check of my own personal library (and memory) reveals that of these I’ve read only two, namely James Paul Gee’s What Video Games Have to Teach Us About Learning and Literacy and Jesper Juul’s The Art of Failure.

But except for Gee’s, the publication dates are in the neighborhood of 2013/2014, and my backlog is disheartening anyway.


From Ian Bogost’s talk at the 2013 Games for Change conference:

When people talk about “changing the world with games,” in addition to checking for your wallet, perhaps you should also check to see if there are any games involved in those world changing games[.] The dirty truth about most of these serious games, the one that nobody wants to talk about in public, is they’re not really that concerned about being games. This is mostly because games are hip, they make appealing peaks in your grant application, they offer new terrain, undiscovered country, they give us new reasons to pursue existing programs in order to keep them running.

Maybe what we want are not “serious” games, but earnest games. Games that aren’t just instrumental or opportunistic in their intentions.

Nails it.


Don’t Let Your Simulation Game Become a Shit Sandwich

According to a 2011 metastudy by Traci Sitzmann in Personnel Psychology, declarative and procedural knowledge and retention were observed to be higher in groups taught with computer-based simulation games than in groups taught without, and even self-efficacy was observed to be substantially higher—surprisingly high, I might say. But that isn’t the whole story.

Common knowledge, and often among the main rationales for developing computer-based simulation games, is that wrapping entertainment around course materials will boost motivation. Motivation, hopefully, for learning new skills and not merely for playing the simulation game.

But do we know for sure that this works?

Two key simulation game theories propose that the primary benefit of using simulation games in training is their motivational potential. Thus, it is ironic that a dearth of research has compared posttraining motivation for trainees taught with simulation games to a comparison group. A number of studies have compared changes in motivation and other affective outcomes from pre- to posttraining for trainees taught with simulation games, but this research design suffers from numerous internal validity threats, including history, selection, and maturation. Also, the use of pre-to-post comparisons may result in an upward bias in effect sizes, leading researchers to overestimate the effect of simulation games on motivational processes.

Sounds bad enough. But there’s more! In a corporate environment, motivation is intimately linked to work-motivation—think of it as a special case of transfer of learning—but which, it turns out, hasn’t so far been tested in any meaningful manner at all:

However, the instructional benefits of simulation games would be maximized if trainees were also motivated to utilize the knowledge and skills taught in simulation games on the job. Confirming that simulation games enhance work-related motivation is a critical area for future research.

Also, there’s something else. How well declarative and procedural knowledge, retention, and self-efficacy are raised depends, according to this meta-analysis, on several factors. The best results were observed for games where work-related competencies were actively rather than passively learned during game play; when the game could be played as often as desired; and when the simulation game was embedded in an instructional program rather than a stand-alone device.

Lots of implications there. And ample opportunity to turn your corporate simulation game into a veritable shit sandwich: when the game is merely the digital version of your textbooks, training handbooks, or field guides; when the replay value is low; and when you think you can cut down on your programs, trainers, and field exercises.

In other words: a good simulation game will cost you, and you can’t recover these costs by cutting down on your training environment. Instead, a simulation game is a substantial investment in your internal market, and you better make sure to get the right team on board so that motivation will translate into training success and training success into work-motivation.
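The three moderators from the meta-analysis, and their failure modes described above, can be distilled into a hypothetical checklist—function and message names are mine, purely illustrative, not anything from Sitzmann’s paper:

```python
# Hypothetical sketch: flag "shit sandwich" risks in a corporate
# simulation-game project, based on the three moderators the
# meta-analysis identified (active learning, unlimited replay,
# embedding in an instructional program).
def shit_sandwich_risks(active_learning: bool,
                        unlimited_replay: bool,
                        embedded_in_program: bool) -> list[str]:
    risks = []
    if not active_learning:
        risks.append("competencies taught passively (a digital textbook)")
    if not unlimited_replay:
        risks.append("low replay value or capped play time")
    if not embedded_in_program:
        risks.append("game deployed stand-alone, instruction cut back")
    return risks

# A game that is active but can only be played once, outside any program:
print(shit_sandwich_risks(True, False, False))
```

An empty list is the target; every entry is a documented way to waste the investment.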

Paper cited: Sitzmann, Traci. “A Meta-Analytic Examination of the Instructional Effectiveness of Computer-Based Simulation Games.” Personnel Psychology. Vol.64, Issue 2 (Summer 2011). 489–528.


I-Chun Hung and Nian-Shing Chen at Oxford University Press blogs:

When younger learners study natural science, their body movements with external perceptions can positively contribute to knowledge construction during the period of performing simulated exercises. The way of using keyboard/mouse for simulated exercises is capable of conveying procedural information to learners. However, it only reproduces physical experimental procedures on a computer. […]

If environmental factors, namely bodily states and situated actions, were well-designed as external information, the additional input can further help learners to better grasp the concepts through meaningful and educational body participation.

Exciting research. Add to that implications from Damasio’s somatic marker hypothesis and the general question of the vanishing of movement and physicality from learning processes as an as yet underresearched psychological—or even philosophical, think peripatetics—observable.

This is a direction we should follow through in game-based learning research with some financial muscle, so to speak.


A Design Paradigm for Serious Games

How serious games are developed has changed quite a bit since Gunter et al.’s paper “A Case for a Formal Design Paradigm for Serious Games” (link to PDF) from 2006, but that doesn’t invalidate its point of departure in principle:

We are witnessing a mad rush to pour educational content into games or to use games in the classroom in an inappropriate manner and in an ad hoc manner in hopes that players are motivated to learn simply because the content is housed inside a game.

While this paper is neither a rigorously written research study nor exactly informed by deep knowledge about the psychology of learning (all three authors have their backgrounds in the technology of learning), and the concluding “method for creating designed choices” falls flat on its face because the paper regrettably fails to define “choice” in this context, we can still extract its basic idea, strip off its naïve linearity, and expand on it.

In brief:

The basic design process for educational games should occur within a three-dimensional space whose conceptual axes are Game Mechanics, Dramatic Structure, and the Psychology of Learning. Simply trying to “map” these parameters onto each other in a largely linear approach—one that, among other things, is destined to lose sight of participatory elements and agenticity rather quickly—will run into problems and lead to bad games. And the best way to build such a matrix for a given objective is to assemble a collaborative team of top-notch professionals from all three areas, i.e., game design, narrative design, and the psychology of learning and motivation.

Paper cited: Gunter, Glenda A., Robert F. Kenny, & Erik Henry Vick. “A Case for a Formal Design Paradigm for Serious Games.” The Journal of the International Digital Media and Arts Association. Vol.3 No.1 (2006). 1-19.
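Treating the three axes as explicit, inspectable dimensions—a sketch of my own, with illustrative names and thresholds, not anything proposed in the paper—might look like this:

```python
from dataclasses import dataclass

# Hypothetical sketch: score a serious-game design concept along the
# three conceptual axes named above. All names and thresholds are
# illustrative assumptions, not taken from Gunter et al.
@dataclass
class DesignConcept:
    name: str
    game_mechanics: float      # 0..1: how well the mechanics carry the play
    dramatic_structure: float  # 0..1: narrative/dramatic coherence
    learning_psychology: float # 0..1: grounding in the psychology of learning

    def weakest_axis(self) -> str:
        scores = {
            "game_mechanics": self.game_mechanics,
            "dramatic_structure": self.dramatic_structure,
            "learning_psychology": self.learning_psychology,
        }
        return min(scores, key=scores.get)

    def is_lopsided(self, spread: float = 0.5) -> bool:
        # A large spread between the strongest and weakest axis signals
        # content merely "poured into a game" (or a game with no learning).
        values = [self.game_mechanics, self.dramatic_structure,
                  self.learning_psychology]
        return max(values) - min(values) > spread

# The "mad rush" case from the quote: strong learning content,
# weak mechanics and drama.
concept = DesignConcept("quiz-wrapper", 0.2, 0.3, 0.9)
print(concept.weakest_axis())  # game_mechanics
print(concept.is_lopsided())   # True
```

The point of the sketch is only that the three axes get evaluated jointly rather than mapped onto each other one after the other.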


The Trouble with Game-Based Learning Research

From a 2013 Research Roundup on Game-Based Learning:

While serious games have been embraced by educators in and out of the classroom, many questions remain. What are the possible effects of digital gaming, connectivity and multitasking for younger learners, whose bodies and brains are still maturing?

Let me rephrase this just a bit:

While 20th century-style classroom learning has been embraced by educators all over the world, many questions remain. What are the possible effects of one-size-fits-all educational methodology with predetermined curricula and standardized testing, conditioned learning of siloed educational subjects detached from personal experience, and large class sizes solely determined by year of birth, for younger learners whose bodies and brains are still maturing?

What this comes down to is this. With their defensive positions reflected in both their arguments and their study designs, game-based learning proponents often paint themselves into a corner. You just can’t conclusively identify (let alone “prove”) the effects and effect sizes of a particular teaching method for all times, ages, and contexts. Moreover, it’s proponents of that archaic industrial processing of learning and learners that we, somewhat misleadingly, call our “modern educational system” who should scramble to legitimize their adherence to outdated structures and methods, not the other way round.

Another thing that’s screwed, of course, is that of the twenty studies on game-based learning listed by this research roundup, only three are freely available — “Video Game–Based Learning: An Emerging Paradigm for Instruction” (Link); “Gamification in a Social Learning Environment” (Link to PDF); “A Meta-Analytic Examination of the Instructional Effectiveness of Computer-Based Simulation Games” (Link). And of the other seventeen articles’ overall ten sources, even the excellently equipped university and state library I’m privileged to enjoy research access to subscribes, again, to only three.


From education game-maker FilamentGames:

Commencing Operation Play, a call-to-arms for all believers in the positive impact of game-based learning! From September 15th–19th, we’re celebrating educators that utilize game-based learning in their classrooms and the benefits games can have on student engagement and understanding. We’ve partnered with some of the most powerful forces in the industry to build a hub of teacher resources for adding game-based learning to your classroom curriculum.

On board for digital games are, among others, MIT’s Education Arcade and Institute of Play’s GlassLab.

Worth checking out. I almost missed this.


Frank Catalano at GeekWire about the possible consequences of Microsoft’s acquisition of Mojang for TeacherGaming and MinecraftEdu:

With the Mojang buy, Microsoft will have an automatic presence in two hot and growing areas of importance in K-12 schools: STEM education, and game-based learning. It could choose to:

  • Maintain the licensing and direct support relationship for TeacherGaming’s MinecraftEdu,
  • Distribute Minecraft directly to schools as a Microsoft Education initiative (perhaps also buying TeacherGaming), or
  • Let education-specific efforts wither as it pursues world domination in mass market video games.

Early indications are somewhat promising, if not yet specific.

The Bill & Melinda Gates Foundation’s activities notwithstanding, Microsoft’s past in edu is checkered, to say the least. While Microsoft’s new CEO Satya Nadella indirectly confirmed that Ballmer’s departure marked the end of Microsoft’s platform-centric “domination” strategy, it will take time until we know whether that’s just marketing lingo or a real change of heart.

Remember the time when education was one of Apple’s rare strongholds and Microsoft proposed to pay out $1.1 billion in legal settlements from a class-action lawsuit “in Microsoft software to needy schools”?

Be wary, we should.