All of this has gotten people worried that a super intelligent AI is right around the corner, that the robots are gonna take over[.] It’s all bullshit. That fear is a tech bro fantasy. Designed to distract you from the real risk of AI. That a bunch of dumb companies are lying about what their broken tech can do so they can trick you into using a worse version of things we already have, all while stealing the work of real people to do it.
Even though the video was uploaded only yesterday, it’s glaringly obvious that most of it was recorded roughly two weeks ago—that’s how fast things move right now. Thus, if you’ve followed the news cycle in general and this blog in particular, you will recognize most if not all of the events and motifs, only much funnier and with a lot more f-words.
It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a “flourishing” or “potentially catastrophic” future. Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media. This not only lures people into uncritically trusting the outputs of systems like ChatGPT, but also misattributes agency. Accountability properly lies not with the artifacts but with their builders. […]
Contrary to the letter’s narrative that we must “adapt” to a seemingly pre-determined technological future and cope “with the dramatic economic and political disruptions (especially to democracy) that AI will cause,” we do not agree that our role is to adjust to the priorities of a few privileged individuals and what they decide to build and proliferate.
Read the whole thing. Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell are positively on fire.
(The post-preview image of a delivery robot traveling into an uncertain future is here. Also, for a better understanding of the Future of Life Institute and their open letter’s background, here’s an introduction to longtermism by Émile P. Torres at Salon.com, which I had belatedly added to my initial post on “Pause AI” two days ago.)
Earlier this week, I wrote about how Bill Gates isn’t carried away by AGI nonsense, while “Microsoft Research” simultaneously went off the rails at arxiv.org with a publication that can charitably be called a badly written sf novelette disguised as a paper.
I still had GatesNotes open in a browser tab, and last night a new entry popped up on “The Rules of the Road Are About to Change,” with a take on AI for autonomous vehicles that is interesting for two different reasons.
First, there’s the LLM-related approach to training:
When you get behind the wheel of a car, you rely on the knowledge you’ve accumulated from every other drive you’ve ever taken. That’s why you know what to do at a stop sign, even if you’ve never seen that particular sign on that specific road before. Wayve uses deep learning techniques to do the same thing. The algorithm learns by example. It applies lessons acquired from lots of real world driving and simulations to interpret its surroundings and respond in real time.
The result was a memorable ride. The car drove us around downtown London, which is one of the most challenging driving environments imaginable, and it was a bit surreal to be in the car as it dodged all the traffic. (Since the car is still in development, we had a safety driver in the car just in case, and she assumed control several times.)
I think this is a nifty approach; I can imagine it will push things forward quite a bit.
Then, Gates’s personal predictions on autonomous driving are equally interesting, not least because they’re refreshingly free of AI hype:
Right now, we’re close to the tipping point—between levels 2 and 3—when cars are becoming available that allow the driver to take their hands off the wheel and let the system drive in certain circumstances. The first level 3 car was recently approved for use in the United States, although only in very specific conditions: Autonomous mode is permitted if you’re going under 40 mph on a highway in Nevada on a sunny day.
At Level 3 (SAE 3), to recall, the driver can have their “eyes off” the road and busy themselves with other tasks, but must “still be prepared to intervene within some limited time.” The level of automation can be thought of “as a co-driver or co-pilot that’s ready to alert the driver in an orderly fashion when swapping their turn to drive.”
Make no mistake—this step from SAE 2 to SAE 3 is a monumental one. However, just like so many other hyped AI milestones, SAE 3 isn’t right around the corner either. Gates again:
Over the next decade, we’ll start to see more vehicles crossing this threshold. […]
A lot of highways have high-occupancy lanes to encourage carpooling—will we one day have “autonomous vehicles only” lanes? Will AVs eventually become so popular that you have to use the “human drivers only” lane if you want to be behind the wheel?
That type of shift is likely decades away, if it happens at all.
If SAE 3 really happens “over the next decade,” that would be in the ballpark of what I consistently (and insistently) predicted around seven or eight years ago—that even SAE 3 as autonomous “co-pilot” driving would take fifteen years of development and infrastructural changes at least to become feasible. (A prediction that got screamed at a lot at the time, metaphorically speaking.)
But all that notwithstanding, I do not think that the scenario of autonomous individual cars will serve us well. In 2019, Space Karen took a swipe at Singapore’s government for not being “welcoming” to Tesla and not “supportive of electric vehicles,” which is complete bullshit in and of itself, of course. (I visited CREATE in 2019, Singapore NTU’s Campus for Research Excellence and Technological Enterprise, and the halls were filled to the brim with research projects on autonomous electric transportation.) But in Singapore, the focus is on public and semi-public transportation, not private cars. And I gotta say, I loved it when the Singaporean minister for the environment and water resources, Masagos Zulkifli, shot back by saying that Singapore is prioritizing public transportation: “What Elon Musk wants to produce is a lifestyle. We are not interested in a lifestyle. We are interested in proper solutions that will address climate problems.”
Boom.
Thus, while I’m pretty excited about advances in autonomous driving, I’d be even more excited about advances in autonomous driving toward sustainable transportation.
The day before yesterday, I worked my way through this terrible “Pause Giant AI Experiments” open letter, but didn’t get around to commenting on it. Luckily, I don’t have to! Emily Bender has meanwhile torn into it, into the institution* that published it, and into the letter’s footnotes and what they refer to:
Human-competitive at general tasks, eh? What does footnote 3 reference? The speculative fiction novella known as the “Sparks paper” and OpenAI’s non-technical ad copy for GPT4. ROFLMAO.
What it boils down to is this. On the one hand, one can and should agree with this open letter that the way LLM development is handled right now is really bad for everybody. On the other, this open letter advocates stepping on the brake by stepping on the gas to accelerate the AI hype, which is entirely counterproductive:
I mean, I’m glad that the letter authors & signatories are asking “Should we let machines flood our information channels with propaganda and untruth?” but the questions after that are just unhinged #AIhype, helping those building this stuff sell it.
———————— *Addendum: If you want to learn more about longtermism—both the Future of Life Institute and all the people cited in footnote #1 except the Stochastic Parrots authors are longtermists—here’s an excellent article by Émile P. Torres on “The Dangerous Ideas of ‘Longtermism’ and ‘Existential Risk’” (h/t Timnit Gebru).
Sometimes I wonder if there will ever come a time when I will no longer find everything about Microsoft, Windows, and its founders and their “foundations” and “philanthropies” outright revolting. But, in stark contrast to this recent inanity by “Microsoft Research” on arxiv.org, Bill Gates keeps things in perspective:
What is powering things like ChatGPT is artificial intelligence. It is learning how to do chat better but can’t learn other tasks. By contrast, the term artificial general intelligence refers to software that’s capable of learning any task or subject. AGI doesn’t exist yet—there is a robust debate going on in the computing industry about how to create it, and whether it can even be created at all. […]
These “strong” AIs, as they’re known, will probably be able to establish their own goals. What will those goals be? What happens if they conflict with humanity’s interests? Should we try to prevent strong AI from ever being developed? These questions will get more pressing with time.
But none of the breakthroughs of the past few months have moved us substantially closer to strong AI.
Most of his post then goes on about reasonable business applications, sprinkled with pseudo-naïve bullshit about how AI, e.g., will free up people’s time so they can care more for the elderly and some such. (I’m not making this up.) Which, however, culminates in a clincher that brings me right back to where I started at the beginning of this post:
When productivity goes up, society benefits because people are freed up to do other things, at work and at home. Of course, there are serious questions about what kind of support and retraining people will need. Governments need to help workers transition into other roles.
Governments! These institutions, you know, that corporations and billionaires don’t pay taxes to, and whose interference in even their most atrocious business practices is resisted inch by inch. These will be the ones to “help workers transition into other roles” so that productivity can go up. Right. I got that.
While LLM/ChatGPT/AI systems will certainly change how we work in ways comparable to the introduction of visual user interfaces, the World Wide Web, or the iPhone, or perhaps even the steam engine, who knows, one thing will remain business as usual: socialism for the rich and capitalism for the poor. In that regard, if we don’t also change the system, AI will change fuck nothing.
Today, LS&Co. announced our partnership with Lalaland.ai, a digital fashion studio that builds customized AI-generated models. Later this year, we are planning tests of this technology using AI-generated models to supplement human models, increasing the number and diversity of our models for our products in a sustainable way. […]
“While AI will likely never fully replace human models for us, we are excited for the potential capabilities this may afford us for the consumer experience.” [italics mine]
One reason why all this is not glaringly obvious is the dazzling and distracting bombardment with AI sideshow acts, where endless streams of parlor tricks and sleights of hand are presented, from fake Kanji, robot lawyers, and crypto comparisons to made-up celebrity conversations, emails to the manager, and 100 ChatGPT Prompts to Power Your Business. Through all that glitter and fanfare and free popcorn, many people don’t notice—or don’t want to notice or profess not to notice—that the great attraction in the center ring is just business as usual, only that the acrobats have been replaced by their likenesses and no longer need to be paid.
Press releases like this I will call Minidiv or Minisus announcements from now on. Marketing the replacement of human models through AI not only as progress toward “diversity” but also “sustainability”—a term currently thrown around with regard to AI in marketing and PR like confetti—has the exact same vibe as Orwell’s Ministries of Love and Peace.
A few days ago, I wrote about how training large language models is still prohibitively expensive, but that the costs of running them are coming down like a rocket.
Today, Jan-Keno Janssen (c’t 3003) posted a YouTube video on how to get Stanford’s Alpaca/LLaMA ChatGPT clone to run locally on ordinary hardware at home. It’s in German, but you can switch on English subtitles, and there’s a companion page with the full German transcript that you can feed to Google Translate (or whatever you work with).
As Janssen points out, the copyright situation is murky; also, it became apparent yesterday that Facebook’s begun to take down LLaMA repos. But, as an instant countermeasure, the dalai creator already announced the launch of a decentralized AI model distribution platform named GOAT.
Point being, we’re approaching Humpty Dumpty territory. Once all that stuff is in the wild and runs reasonably well on ordinary hardware, the LLM business model that sat on the wall will take a great fall. And OpenAI’s horses and Facebook’s men won’t be able to put it together again.
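To illustrate how little ceremony is left once the weights are out in the wild, here’s a minimal sketch of local inference. It assumes the llama-cpp-python bindings (one of several wrappers around the same llama.cpp engine that dalai uses) and a 4-bit quantized GGML conversion of the 7B weights already sitting on disk; the model path is a placeholder.

```python
# A minimal sketch of local LLM inference on ordinary hardware.
# Assumptions: llama-cpp-python is installed and a quantized 7B model file exists
# at the (hypothetical) path below; dalai and alpaca.cpp wrap the same engine.
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")  # hypothetical local path
out = llm("Instruction: Explain in one sentence what a language model does.\nResponse:",
          max_tokens=64)
print(out["choices"][0]["text"].strip())
```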
As mentioned previously, I’m preparing course materials and practical exercises around AI/LLM/GPT models for the upcoming term, and I talked to coders (game engineering, mostly) who tried out ChatGPT’s coding assistance abilities. The gist is, ChatGPT gets well-documented routines right, but even then its answers are blissfully oblivious of best practice considerations; gets more taxing challenges wrong or, worse, subtly wrong; and begins to fall apart and/or make shit up down the road. (As a disclaimer, I’m not a programmer; I’ve always restricted myself to scripting, as there are too many fields and subjects in my life I need to keep up with already.)
Against this backdrop, here’s a terrific post by Tyler Glaiel on Substack: “Can GPT-4 *Actually* Write Code?” Starting with an example of not setting the cat on fire, his post is quite technical and goes deep into the weeds—but that’s exactly what makes it interesting. Glaiel’s summary:
Would this have helped me back in 2020? Probably not. I tried to take its solution and use my mushy human brain to modify it into something that actually worked, but the path it was going down was not quite correct, so there was no salvaging it. […] I tried this again with a couple of other “difficult” algorithms I’ve written, and it’s the same thing pretty much every time. It will often just propose solutions to similar problems and miss the subtleties that make your problem different, and after a few revisions it will often just fall apart. […]
The crescent example is a bit damning here. ChatGPT doesn’t know the answer, there was no example for this in its training set and it can’t find that in its model. The useful thing to do would be to just say “I do not know of an algorithm that does this.” But instead it’s overconfident in its own capabilities, and just makes shit up. It’s the same problem it has with plenty of other fields, though its strange competence in writing simple code sorta hides that fact a bit.
As a curiosity, he found that ChatGPT-3.5 came closer to one specific answer than ChatGPT-4:
When I asked GPT-3.5 accidentally it got much much closer. This is actually a “working solution, but with some bugs and edge cases.” It can’t handle a cycle of objects moving onto each other in a chain, but yeah this is much better than the absolute nothing GPT-4 gave… odd…
Generally, we shouldn’t automatically expect GPT to become enormously better with each version; besides the law of diminishing returns, there is no reason to assume, without evidence, that making LLMs bigger and bigger will make them better and better. But yes, we can at least expect that updated versions won’t perform worse.
And then there’s this illuminating remark by Glaiel, buried in the comments:
GPT-4 can’t reason about a hard problem if it doesn’t have example references of the same problem in its training set. That’s the issue. No amount of tweaking the prompt or overexplaining the invariants (without just writing the algorithm in English, which if you can get to that point then you already solved the hard part of the problem) will get it to come to a proper solution, because it doesn’t know one. You’re welcome to try it yourself with the problems I posted here.
That’s the point. LLMs do not think and cannot reason. They can only find and deliver solutions that already exist.
Finally, don’t miss ChatGPT’s self-assessment on these issues, after Glaiel fed the entire conversation back to it and asked it to write the “final paragraph” for his post!
Bryant Francis at Game Developer on “Ghostwriter,” Ubisoft’s new AI tool for its narrative team to generate barks:
Ubisoft is directly integrating Ghostwriter into its general narrative tool called “Omen.” When Ubisoft writers are creating NPCs, they are able to create cells that contain barks about different topics. An NPC named “Gaspard” might want to talk about being hungry or speeding while driving a car. To generate lines about speeding, the writer can either write their own barks, or click on the Ghostwriter tool to generate lines about that topic. Ghostwriter is able to generate these lines by combining the writer’s input with input from different large language models. […]
Ghostwriter is also used to generate large amounts of lines for “crowd life.” Ubisoft games often feature large crowds of NPCs in urban environments, and when players walk through those crowds, they generally will hear snippets of fake conversations or observations about what’s going on in the plot or game world.
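To make the described workflow a bit more concrete, here’s a minimal toy sketch of what such a bark drafter could look like from the outside (my own reconstruction, assuming OpenAI’s Python client as the language model backend; this is emphatically not Ubisoft’s Ghostwriter or its “Omen” pipeline): the writer names a character and a topic, the model proposes rough first lines, and a human keeps, rewrites, or discards them.

```python
# A toy reconstruction of a bark-drafting workflow (not Ubisoft's actual tool):
# several candidate one-liners per topic, intended as raw material for a writer.
import openai  # assumes the openai package and an API key in OPENAI_API_KEY

def draft_barks(character: str, topic: str, n: int = 8) -> list[str]:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0.9,
        n=n,  # ask for several alternative lines in one request
        messages=[
            {"role": "system",
             "content": f"You write single-line video game NPC barks for {character}."},
            {"role": "user", "content": f"One short bark about: {topic}"},
        ],
    )
    return [choice["message"]["content"].strip() for choice in resp["choices"]]

for line in draft_barks("Gaspard, a perpetually hungry getaway driver",
                        "speeding through the old town"):
    print("-", line)
```

The point, as the Game Developer piece stresses, is that these are first drafts for a writer to pick over, not finished dialogue.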
Putting on my writer’s hat, I think that’s a great tool I’d love to work with! But there are trepidations, of course. Kaan Serin for Rock Paper Shotgun:
On Twitter, Radical Forge’s lead UI artist Edd Coates argued that this work could have been handed to a junior, making Ghostwriter seem like just another cost-cutting measure. He also said, “They’re clearly testing the waters with the small stuff before rolling out more aggressive forms of AI.” […] Other devs have argued that writing barks isn’t a pain.
As to the latter, sure—if your idea of a great workday consists of filling spreadsheet after spreadsheet with variations of dialogue snippets, you do you. But editing and refining them could be, and even should be, just as enjoyable, if not more so.
As to the former, Coates does have a point. In our corporate climate, as mentioned before, companies rarely use new technologies to make life better, hours shorter, and workdays more enjoyable for their employees. Instead, they will happily use these new technologies as a lever to “increase productivity” by reducing their workforce, replacing skilled with less-skilled personnel at lower wages, and cranking up output to whatever the technology allows.
Ghostwriter sounds like a terrific tool, and the problem isn’t new technologies. Unfettered capitalism is.
Bellingcat founder Eliot Higgins just got banned from Midjourney for posting a monumental Twitter thread about the Orange Fascist getting arrested, prosecuted, thrown into jail, escaping, and ending up at a McDonald’s as a fugitive, illustrated through several dozen Midjourney v5 images.
(In case Space Karen takes this thread down in the name of Freeze Peach, here are some excerpts on BuzzFeed.)
Whether that’s funny or not, I leave to you. The interesting bit is what else Midjourney did ban:
The word “arrested” is now banned on the platform.
As someone intimately familiar with language change and semiotics and stuff, and as someone who’s also followed the perpetual cat-and-mouse game between China’s Great Censorship Firewall and its clever citizens over the years, I recommend we all order a tanker truck of popcorn and place bets on who’s going to throw in the towel first.
A few days ago, John Carmack posted this DM conversation on his Twitter account. Questioned whether coding will become obsolete in the future due to AI/ChatGPT, he replied:
If you build full “product skills” and use the best tools for the job, which today might be hand coding, but later may be AI guiding, you will probably be fine.
And when asked about the nature of these full product skills:
Software is just a tool to help accomplish something for people—many programmers never understood that. Keep your eyes on the delivered value, and don’t over focus on the specifics of the tools.
I think that’s both great advice for the future and has always been great advice in the past. I remember well how I looked up to John Carmack’s work in awe, many years ago, when I began to become interested in games.
Is it practical?
Definitely no! I created this proof-of-concept and proved that it doesn’t work yet. It works nicely in some cases and fails very poorly in others. I got several ideas from those successes and failures, which is this project’s main aim.
Can I install this to my project?
This is just a proof-of-concept project, so there is no standard way to install it in other projects. If you want to try it with your project anyway, you can simply copy the Assets/Editor directory to your project.
[The] dynamics to watch out for [will happen] in tractable fields with reasonably defined rule sets and fact sets, approximately traceable causes and effects, and reasonably unambiguous victory/output conditions. Buried beneath the prevailing delusions and all the toys and the tumult, they won’t be easy to spot.
I didn’t have any concrete applications in mind there, but Takahashi’s experiments are certainly part of what I meant. Now, while I’m not using Unity personally, and I’m certainly not going to for a variety of reasons, I’m confident that anything clever coders will eventually get to work in a major commercial game engine will sooner or later find its way into open source engines like Godot.
There are obstacles, of course—licensing and processing power prominently among them. The training process for models like ChatGPT is prohibitively expensive; licensing costs will accordingly be high; and you don’t want to have stuff that doesn’t run under a free software license in your open source engine in the first place.
But not all is lost! You absolutely do not need vastly oversized behemoths like ChatGPT for tasks like this. Everybody can create a large language model in principle, and the technologies it requires are well known, well documented, and open source.
There’s BLOOM, for starters, a transformer-based large language model “created by over 1000 AI researchers to provide a free large language model for everyone who wants to try.” And while training such models still costs a bunch even on lesser scales—access to Google’s cloud-based TPUs (custom accelerator chips designed for high volumes of low-precision computing) isn’t exactly cheap—prices will come down eventually for older and less efficient hardware. Here’s an example of how fast these things develop: according to David Silver et al.’s 2017 paper in Nature, AlphaGo Zero defeated its predecessor AlphaGo Lee (the version that beat Lee Sedol, distributed over a whole bunch of computers and 48 TPUs) by 100:0 using a single computer with four TPUs.
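For a sense of how low the entry barrier for experimenting already is, here’s a minimal sketch, assuming the Hugging Face transformers library and one of the small, openly licensed BLOOM checkpoints; it is nowhere near ChatGPT territory, but it runs on an ordinary laptop CPU.

```python
# A minimal sketch: text generation with a small, openly licensed BLOOM checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")
result = generator("A non-player character who is afraid of heights might say:",
                   max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```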
All that’s pretty exciting. The trick is to jump off of the runaway A(G)I hype train before it becomes impossible to do so without breaking your neck, and start exploring and playing around with this stuff in your domain of expertise, or for whatever catches your interest, in imaginative ways.
Unearthed by @thebookisclosed, Microsoft is embedding a crypto-wallet in its Edge browser that handles multiple types of cryptocurrency, records transactions and currency fluctuations, and offers a tab to keep track of NFTs.
Andrew Cunningham, senior tech reporter at Ars Technica:
This is only one of many money and shopping-related features that Microsoft has bolted onto Edge since it was reborn as a Chromium-based browser a few years ago. In late 2021, the company faced backlash after adding a “buy now, pay later” short-term financing feature to Edge. And as an Edge user, the first thing I do in a new Windows install is disable the endless coupon code, price comparison, and cash-back pop-ups generated by Shopping in Microsoft Edge (many settings automatically sync between Edge browsers when you sign in with a Microsoft account; the default search engine and all of these shopping add-ons need to be changed manually every time).
How Windows users put up with this, I can’t fathom. Aside from that, it’s a positively trustworthy scammy-spammy environment that can only win by adding Bing’s text synthesis machine to it, which makes stuff up with abandon but is looked upon as being potentially “intelligent” by people who neither know what large language models are nor how marketing works.
This has the potential to become the most fun event since Mentos met Coke.
Regarding transparency, large language models and algorithms in general pose two distinct challenges: the companies’ willful obfuscation of the data sets and the human agents involved in training their models, and the question of how these models arrive at their decisions. While I’m usually focused on disputes over the former, problems arising from the latter are equally important and can be just as harmful.
A proposal by the European Commission, whose general approach was adopted by the European Council last December, seeks to update AI liability to include cases that involve black box AI systems that are so “complex, autonomous, and opaque” that it becomes difficult for victims to identify in detail how the damage was caused. Thus, recipients of automated decisions must be able “to express their point of view and to contest the decision.” Which, in practice, requires convincing explanations. But how would you get convincing explanations when you’re dealing with black box AI systems?
The dilemma is often presented as a trade-off between model performance and model transparency: if you want to take full advantage of the intelligence of the algorithms, you have to accept their unexplainability—or find a compromise. If you focus on performance, opacity will increase; if you want some level of explainability, you can perhaps better control negative consequences, but you will have to give up some intelligence.
This goes back to Shmueli’s distinction between explanatory and predictive modeling and the associated trade-off between comprehensibility and efficiency, which suggests that obscure algorithms can be more accurate and efficient by “disengaging from the burden of comprehensibility”—an approach on which I’m not completely sold with regard to AI implementation and practice, even though it’s probably the case with regard to evolved complex systems.
However, following Esposito, approaches from the sociological perspective can change the question and show that this “somewhat depressing approach” to XAI (Explainable AI) is not the only possible one:
Explanations can be observed as a specific form of communication, and their conditions of success can be investigated. This properly sociological point of view leads to question an assumption that is often taken for granted, the overlap between transparency and explainability: the idea that if there is no transparency (that is, if the system is opaque), it cannot be explained—and if an explanation is produced, the system becomes transparent. From the point of view of a sociological theory of communication, the relationship between the concepts of transparency and explainability can be seen in a different way: explainability does not necessarily require transparency, and the approach to incomprehensible machines can change radically.
Machines must be able to produce adequate explanations by responding to the requests of their interlocutors. This is actually what happens in the communication with human beings as well. I refer here to Niklas Luhmann’s notion of communication[.] Each of us, when we understand a communication, understand in our own way what the others are saying or communicating, and do not need access to their thoughts. […]
Social structures such as language, semantics, and communication forms normally provide for sufficient coordination, but perplexities may arise, or additional information may be needed. In these cases, we may be asked to give explanations[.] But what information do we get when we are given an explanation? We continue to know nothing about our partner’s neurophysiological or psychic processes—which (fortunately) can remain obscure, or private. To give a good explanation we do not have to disclose our thoughts, even less the connections of our neurons. We can talk about our thoughts, but our partners only know of them what we communicate, or what they can derive from it. We simply need to provide our partners with additional elements, which enable them to understand (from their perspective) what we have done and why.
This, obviously, rhymes with the EU proposal and the requirement of providing convincing explanations. That way, the requirement of transparency could be abandoned:
Even when harm is produced by an intransparent algorithm, the company using it and the company producing it must respond to requests and explain that they have done everything necessary to avoid the problems—enabling the recipients to challenge their decisions. [T]he companies using the algorithms have to deliver motivations, not “a complex explanation of the algorithms used or the disclosure of the full algorithm” (European Data Protection Board, 2017, p.25).
But beyond algorithms, particularly with regard to LLM, the other side of the black box equation must be solved too, where by “solved” I of course mean “regulated.” To prevent all kinds of horrific consequences inflicted on us by these high-impact technologies, convincing explanations for what comes out of the black box must be complemented with full transparency of what goes in.
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges. We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society).
Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this [GPT-4 Technical Report] contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.
Everything’s fine. Tech bros funded by sociopaths fantasizing about AGI while rushing untested high-impact technology to market without oversight with the potential to affect almost everyone in every industry. What could possibly go wrong?
Rarely do I disagree with Cory Doctorow’s analyses, which are always highly enjoyable to read, both for their brilliant insights and their remarkable rhetorical punch. In his recent post on Google’s Chatbot Panic, however, things are a bit more complicated:
The really remarkable thing isn’t just that Microsoft has decided that the future of search isn’t links to relevant materials, but instead lengthy, florid paragraphs written by a chatbot who happens to be a habitual liar—even more remarkable is that Google agrees.
Microsoft has nothing to lose. It’s spent billions on Bing, a search-engine no one voluntarily uses. Might as well try something so stupid it might just work. But why is Google, a monopolist who has a 90+% share of search worldwide, jumping off the same bridge as Microsoft?
According to him, it looks inexplicable at first why Google isn’t trying to figure out how to exclude or fact-check LLM garbage, like they exclude or fact-check the “confident nonsense of the spammers and SEO creeps.” Referring to another article he wrote for The Atlantic, Doctorow makes the case that Google had one amazing idea, but then every product or service that wasn’t bought or otherwise acquired failed (with the lone exception of their “Hotmail clone”). This, he says, triggered a cognitive dissonance: a self-styled creative genius whose true genius turns out to be “spending other people’s money to buy other people’s products and take credit for them.” This cognitive dissonance in turn triggered a pathology that drives these inexplicable decisions to follow trailing competitors over the cliff, like Bing now or Yahoo in the past:
Google has long exhibited this pathology. In the mid-2000s—after Google chased Yahoo into China and started censoring its search-results and collaborating on state surveillance—we used to say that the way to get Google to do something stupid and self-destructive was to get Yahoo to do it first. [Yahoo] going into China was an act of desperation after it was humiliated by Google’s vastly superior search. Watching Google copy Yahoo’s idiotic gambits was baffling.
But if you look at it from a different perspective, these maneuvers could actually appear clever. In game theory, there’s the “reversed follow-the-leader” strategy, most often illustrated with sailboat races, particularly Dixit and Nalebuff’s example of the 1983 America’s Cup finals in The Art of Strategy. The leading party (sailboat or company) copies the strategy of the trailing party (sailboat or company) as a surefire way to keep its leading position, even if that imitated strategy happens to be extremely stupid. If being the winner (or market leader) is the only thing that counts, it doesn’t matter whether the copied strategy is successful or unsuccessful or clever or stupid. Now, while that strategy doesn’t work when there’s not just one but two or more close competitors, the tech industry’s tendency to create duopolies and even quasi-monopolies naturally leads to situations where a reversed follow-the-leader strategy keeps making sense.
The drawbacks of this strategy are that winning doesn’t look spectacular or even clever; that it appears as if the winner has no confidence in their own strategy; and that imitating the runner-up might turn out to be very costly. But still, they win! And if that’s all there is to it, it’s just another form of pathology, and Doctorow’s analysis is on the mark after all.
Luckily, LinkedIn hasn’t messaged me yet for whatever expertise it thinks I have, but some users received the following request:
Help unlock community knowledge with us. Add your insights into this AI-powered collaborative article.
This is a new type of article that we started with the help of AI, but it isn’t complete without insights from experts like you. Share your thoughts directly into each section—you’re in a select group of experts that has access to do so. Learn more…
— The LinkedIn Team
That’s straight out of bizarro land. LinkedIn member Autumn on Mastodon:
Wait wait wait wait
Let me get this straight…
LinkedIn wants to generate crappy AI content and then invite me to fix it, for free, under some guise of flattery calling out my “expertise”
Really?
There are no lengths these platforms won’t go to on the final leg of their enshittification journey.
LinkedIn announced last week it’s using AI to help write posts for users to chat about. Snap has created its own chatbot, and Meta is working on AI “personas.” It seems future social networks will be increasingly augmented by AI.
According to his report, LinkedIn has begun sharing “AI-powered conversation starters” with the express purpose of provoking discussion among users. Reminder here that LinkedIn belongs to Microsoft; also, as Vincent quips, LinkedIn is full of “workfluencers” whose posts and engagement baits range in tone from “management consultant bland to cheerfully psychotic,” which is, happily, “the same emotional spectrum on which AI tends to operate.”
But conversation starters might be merely the beginning. Vincent imagines semiautomated social networks with fake users that “needle, encourage, and coddle” their respective user bases, giving “quality personalized content at a scale.” And while that’s mostly tongue-in-cheek, there’s an observable vector for that with conversational chatbots populating more and more social media sites like Snap or Discord.
And then there’s Facebook:
Meta, too, seems to be developing similar features, with Mark Zuckerberg promising in February that the company is exploring the creation of “AI personas that can help people in a variety of ways.” What that means isn’t clear, but Facebook already runs a simulated version of its site populated by AI users in order to model and predict the behavior of their human counterparts.
Welcome to Hell on Earth. I’m sure all this will turn out fine.
Voice actors are increasingly being asked to sign rights to their voices away so clients can use artificial intelligence to generate synthetic versions that could eventually replace them, and sometimes without additional compensation, according to advocacy organizations and actors who spoke to Motherboard.
Which, how could it be otherwise, is already packaged and marketed as the tech industry’s latest shit sandwich. According to ElevenLabs, e.g., voice actors “will no longer be limited by the number of recording sessions they can attend and instead they will be able to license their voices for use in any number of projects simultaneously, securing additional revenue and royalty streams.”
A shit sandwich voice actor Fryda Wolff doesn’t buy:
“[A]ctors don’t want the ability to license or ‘secure additional revenue streams,’ that nonsense jargon gives away the game that ElevenLabs have no idea how voice actors make their living.” Wolff added, “we can just ask musicians how well they’ve been doing since streaming platforms licensing killed ‘additional revenue and royalty streams’ for music artists. ElevenLabs’ verbiage is darkly funny.”
There’s so much stuff LLM technologies could do, like practically every new technology, to benefit almost everyone in almost every industry, humanity as a whole, and even the climate and the planet. Right now, however, we’d better be prepared to be bombarded with shit sandwiches left and right.
Instead of asking what these technologies can do for everyone (e.g., how AI/LLM can assist in smart city-planning or medical diagnostics), the major players are rather asking what they can do for shareholders and billionaires (e.g., “Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI”). The dominant vector here is relentlessly handing out snake oil to the effect that “A(G)I” will solve all of our problems in an exciting future full of marvels, while in reality the foundations are laid down in the present for rampant exploitation with breathtaking speed.
Markets value automation primarily because automation allows capitalists to pay workers less. The textile factory owners who purchased automatic looms weren’t interested in giving their workers raises and shortening working days. They wanted to fire their skilled workers and replace them with small children kidnapped out of orphanages and indentured for a decade, starved and beaten and forced to work, even after they were mangled by the machines. Fun fact: Oliver Twist was based on the bestselling memoir of Robert Blincoe, a child who survived his decade of forced labor.
If you think Cory’s example is a purely historical one, you haven’t kept up with current events. In the right hands, LLM technologies can be a terrific addition to our toolbox to help us help ourselves as a species. But LLM technologies as such will solve nothing at best, and make life for large parts of humanity more miserable at worst.
My original plan for today’s roundup involved resources and discussions on questions of copyright, but I had to put that off until next week. I’m on the final stretch of a Corona infection, and I’m not yet feeling up to tackling such a complex topic.
Thus, here are three general sources instead that you might want to read.
OpenAI was founded in 2015 as a nonprofit research organization by Altman, Elon Musk, Peter Thiel, and LinkedIn cofounder Reid Hoffman, among other tech leaders. In its founding statement, the company declared its commitment to research “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The blog stated that “since our research is free from financial obligations, we can better focus on a positive human impact,” and that all researchers would be encouraged to share “papers, blog posts, or code, and our patents (if any) will be shared with the world.”
Now, eight years later, we are faced with a company that is neither transparent nor driven by positive human impact[.] According to investigative reporter Karen Hao, who spent three days at the company in 2020, OpenAI’s internal culture began to reflect less on the careful, research-driven AI development process, and more on getting ahead, leading to accusations of fueling the “AI hype cycle.” Employees were now being instructed to keep quiet about their work and embody the new company charter.
“There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration,” Hao wrote.
Personally, I don’t think that there were any “founding ideals” that could have been “eroded” in the first place; the idea that anyone ever took these lofty ideals at face value in the knowledge that people like Musk or Thiel were involved strikes me as a serious case of Orwellian doublethink. It was simply a mask that was convenient for a time, and a very transparent one at that.
We go around assuming ours is a world in which speakers—people, creators of products, the products themselves—mean to say what they say and expect to live with the implications of their words. This is what philosopher of mind Daniel Dennett calls “the intentional stance.” But we’ve altered the world. We’ve learned to make “machines that can mindlessly generate text,” Bender told me when we met this winter. “But we haven’t learned how to stop imagining the mind behind it.”
While parrots are great, humans aren’t parrots. Go read the whole thing. I have just one minor quibble, Weil’s throwaway remark concerning books and copyrights, to which I’ll get back next week.
Finally, the Federal Trade Commission weighed in on AI in advertising last week, and it’s a blast. Michael Atleson in Keep Your AI Claims in Check at the FTC’s Business Blog:
And what exactly is “artificial intelligence” anyway? It’s an ambiguous term with many possible definitions. It often refers to a variety of technological tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendations. But one thing is for sure: it’s a marketing term. Right now it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.
AI hype is playing out today across many products, from toys to cars to chatbots and a lot of things in between. Breathless media accounts don’t help, but it starts with the companies that do the developing and selling. We’ve already warned businesses to avoid using automated tools that have biased or discriminatory impacts. […]
Advertisers should take another look at our earlier AI guidance, which focused on fairness and equity but also said, clearly, not to overpromise what your algorithm or AI-based tool can deliver. Whatever it can or can’t do, AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.
It certainly won’t stop the hype train, but it’s a decent warning shot.
There have been several reports over the last few weeks on how teachers have begun to use ChatGPT in various ways, mostly as a research assistant for their students. One example, as related by SunRev on Reddit:
My friend is in university and taking a history class. The professor is using ChatGPT to write essays on the history topics and as the assignments, the students have to mark up its essays and point out where ChatGPT is wrong and correct it.
As mentioned in an earlier roundup post, I’m preparing to let my students try out a few things with creative assistance in my upcoming game design lectures. But I have doubts about the use of LLM for research assistance. It certainly has its good sides; as ChatGPT and similar models are ridiculously unreliable, it forces students to fact-check, which is great. However, research isn’t—or shouldn’t be—all about fact-checking data. Rather, it should be about learning and internalizing the entire process of doing research, be it for postgraduate projects or college essays: gaining a thumbnail understanding and accumulating topic-specific keywords; following references and finding resources; weighing these resources according to factors like time/place/context, domain expertise, trustworthiness, soundness of reasoning, and so on; and eventually producing an interesting argument on the basis of source analysis and synthesis. I’m not sure if fact-checking and correcting LLM output is a big step forward in that direction.
There is a demand for low-background steel, steel produced before the nuclear tests mid century, for use in Geiger counters. They produce it from scavenging ships sunk during world war one, as it’s the only way they can be sure there is no radiation.
The same is going to happen for internet data, only archives pre-2022 will be usable for sociology research and the like as the rest will be contaminated by AI nonsense. Absolute travesty.
This might indeed develop into a major challenge. How big of a challenge? We can’t know yet, but it will most likely depend on how good LLM-based machines will become at differentiating between LLM output, human output, and mixed output. Around 2010, there was the Great Content Farm Panic, when kazillions of websites began to speed-vomit optimized keyword garbage into the web. Luckily, Google’s engineers upgraded their search algorithms in clever ways, so that most of that garbage was ranked into oblivion relatively quickly. Can Google or anyone else pull that off again, with regard to a tidal wave of LLM sewage? There’s no guarantee, but those search engines and knowledge repositories that become better at it will gain an advantage over their competitors, so there’s a capitalist incentive at least.
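For what it’s worth, one family of detection heuristics scores how statistically “predictable” a text is to a reference language model, since machine-generated text tends to come out unusually predictable. Here’s a minimal sketch, assuming GPT-2 via the Hugging Face transformers library as a stand-in scorer; this is certainly not what Google’s engineers would deploy, and any threshold is anyone’s guess.

```python
# A rough sketch of a perplexity-based heuristic: unusually low perplexity under a
# reference language model is a (weak) hint that a text was machine-generated.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # cross-entropy over the tokens
    return math.exp(out.loss.item())

print(perplexity("The quick brown fox jumps over the lazy dog."))
```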
Finally, this bombshell suggestion by Kevin Roose in an article about ChatGPT for teachers (yes, that Kevin Roose of “Bing’s A.I. Chat Reveals Its Feelings: ‘I Want to Be Alive’” fame as mentioned in my last roundup):
ChatGPT can also help teachers save time preparing for class. Jon Gold, an eighth grade history teacher at Moses Brown School, a pre-K through 12th grade Quaker school in Providence, R.I., said that he had experimented with using ChatGPT to generate quizzes. He fed the bot an article about Ukraine, for example, and asked it to generate 10 multiple-choice questions that could be used to test students’ understanding of the article. (Of those 10 questions, he said, six were usable.)
In the light of my recent essay-length take on AI, ChatGPT, and Transformational Change at medium.com, particularly my impression that ChatGPT might at least liberate us from large swaths of business communication, multiple-choice tests, or off-the-shelf essay topics, this made my eyes roll back so hard that I could see my brain catching fire.
Just when ChatGPT and other large language models bumped their heads harshly and audibly against the ceiling of reality, the promises swiftly became more spectacular than ever.
If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.
That’s the entire article in a nutshell: pretentious, pompous, and perfectly vacuous.
Assuming these people live in a fantasy world where they see themselves as modern-day demiurges who will soon create Artificial General Intelligence from large language models is the charitable option. Assuming they’re grifters is the other.
Originally, I planned this week’s roundup to be specifically about AI/LLM/ChatGPT in research and education, but I pushed these topics back a week in the light of current events. You’ve probably heard by now that Microsoft’s recent Bing AI demo, after some people took a closer look, was a far greater disaster than Google’s Bard AI demo had been a few days earlier.
According to this [Bing AI’s] pros and cons list, the “Bissell Pet Hair Eraser Handheld Vacuum” sounds pretty bad. Limited suction power, a short cord, and it’s noisy enough to scare pets? Geez, how is this thing even a best seller?
Oh wait, this is all completely made up information.
But hey, people want to believe! Which can be quite innocuous, as exemplified by this Golem article in German. Shorter version: “On the one hand, Bing AI’s answers to our computer hardware questions were riddled with errors, but on the other, it generated a healthy meal plan for the week that we took at face value, so we have to conclude that Google now has a problem.”
And then here: “I had a long conversation with the chatbot” frames this as though the chatbot was somehow engaged and interested in “conversing” with Roose so much so that it stuck with him through a long conversation.
It didn’t. It’s a computer program. This is as absurd as saying: “On Tuesday night, my calculator played math games with me for two hours.”
That paragraph gets worse, though. It doesn’t have any desires, secret or otherwise. It doesn’t have thoughts. It doesn’t “identify” as anything. […] And let’s take a moment to observe the irony (?) that the NYTimes, famous for publishing transphobic trash, is happy to talk about how a computer program supposedly “identifies.”
You can learn more about what journalism currently gets terribly wrong from Bender’s essay On NYT Magazine on AI: Resist the Urge to Be Impressed, again over at medium.com. There, she looks into topics like misguided metaphors and framing; misconceptions about language, language acquisition, and reading comprehension; troublesome training data; and how documentation, transparency, and democratic governance fall prey to the sycophantic exaltation (my phrasing) of Silicon Valley techbros and their sociopathic enablers (ditto).
Finally, there’s the outright pathetic. Jumping into swirling vertiginous abysses of eschatological delusions particularly on Twitter, many seem to believe or pretend to believe that the erratic behavior of Bing’s “Sydney,” which at times even resembled bizarre emotional breakdowns until Microsoft pulled the plug, is evidence for internal experiences and the impending rise of self-aware AI.
But since the alliance between OpenAI and Microsoft added (a version of) this LLM to (a version of) Bing, people have been encountering weirder issues. As Mark Frauenfelder pointed out a couple of days ago at BoingBoing, “Bing is having bizarre emotional breakdowns and there’s a subreddit with examples.” One question about these interactions is where the training data came from, since such systems just spin out word sequences that their training estimates to be probable.
After some excerpts from OpenAI’s own page on their training model, he concludes:
So an army of low-paid “AI trainers” created training conversations, and also evaluated such conversations comparatively—which apparently generated enough sad stuff to fuel those “bizarre emotional breakdowns.”
A second question is what this all means, in practical terms. Most of us (anyhow me) have seen this stuff as somewhere between pathetic and ridiculous, but C. M. [Corey McMillan] pointed out to me that there might be really bad effects on naive and psychologically vulnerable people.
As evidenced by classical research as well as Pac-Man’s ghosts, humans are more than eager to anthropomorphize robots’ programmed behavioral patterns as “shy,” “curious,” “aggressive,” and so on. That an equivalent to this would be true for programmed communication patterns shouldn’t come as a surprise.
However, for those who join the I Want to Believe train deliberately, it doesn’t seem to have anything to do with a lack of technical knowledge or “intelligence” in general, whatever that is. Not counting those who seize this as a juicy consulting career opportunity, the purported advent of self-aware machines is a dizzyingly large wishful thinking buffet that offers either delicacies or indigestibles for a broad range of sensibilities.
On a final note for today, this doesn’t mean that technical knowledge of how ChatGPT works is purely optional, useless, or snobbish. Adding to the growing number of sources already out there, e.g., Stephen Wolfram last week published a 19,000-word essay on ChatGPT that keeps a reasonable balance between being in-depth and accessible. And even if you don’t agree with his hypotheses or predictions on human and computational language, that’s where all this stuff becomes really interesting. Instead of chasing Pac-Man’s ghosts and seeing faces in toast, we should be thrilled about what we can learn and will learn from LLM research—from move 37 to Sydney and beyond—about decision processes, creativity, language, and other deep aspects of the human condition.
As I wrote more than eight (woah!) years ago in the About section, my secret level/side blog just drafts is one part news ticker with commentary on everything related to games, and one part research-adjacent blog posts about game-based learning and ethics. Discussing current AI models fits that agenda pretty well.
What’s more, I started preparing course materials and practical exercises around AI/LLM/GPT models for the upcoming term in April. These will be balanced topic cocktails for second and sixth term students, revolving around creative assistance (game design, art, coding, and writing), development support (production process and project management), and social ramifications (potentials, risks, economic viability, sustainability, equity/fairness, acceptance, workplace integration, and so on).
Thus, on top of my regular posts, linked list items, or essays, these roundups will serve as food for thought in general, and as a repository for upcoming discussions with my students as well.
This one’s a bit older. It has held up well, but most of the arguments are familiar by now. One aspect, however, is worth exploring. From an interview with Emily M. Bender, one of the coauthors of the paper over which Timnit Gebru was forced out of Google:
“The Star Trek fantasy—where you have this all-knowing computer that you can ask questions and it just gives you the answer—is not what we can provide and not what we need,” says Bender[.] It isn’t just that today’s technology is not up to the job, she believes. “I think there is something wrong with the vision,” she says. “It is infantilizing to say that the way we get information is to ask an expert and have them just give it to us.”
Since the British Guardian began some time ago to excel at publishing disgustingly transphobic opinion pieces, I have all but stopped linking to it. But this one adds an intriguing metaphor to the preceding point of view:
In his 1991 book Consciousness Explained, the cognitive scientist Daniel Dennett describes the juvenile sea squirt, which wanders through the sea looking for a “suitable rock or hunk of coral to … make its home for life.” On finding one, the sea squirt no longer needs its brain and eats it. Humanity is unlikely to adopt such culinary habits but there is a worrying metaphorical parallel. The concern is that in the profit-driven competition to insert artificial intelligence into our daily lives, humans are dumbing themselves down by becoming overly reliant on “intelligent” machines—and eroding the practices on which their comprehension depends.
The operative term here is “practices,” mind. That’s the important thing.
Across the board, the feedback ChatGPT writes for these basic prompts isn’t especially helpful. It uses numerous cliches, it doesn’t include examples, and it isn’t especially actionable. […] Given this, it’s borderline amazing how little it takes for ChatGPT to start baking gendered assumptions into this otherwise highly generic feedback.
[O]ne important difference: feedback written for female employees was simply longer—about 15% longer than feedback written for male employees or feedback written in response to prompts with gender-neutral pronouns. In most cases, the extra words added critical feedback [while the feedback written for a male employee] is unilaterally positive.
To be expected; sexism and racism and similar nasty stuff is always baked into historical data. Keeping any AI trained on historical data from developing racist-aunt-or-uncle opinions, with the potential to ruin a lot more than merely your Thanksgiving family dinner, will remain one of the biggest challenges in AI.
Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
Also, Google’s Bard presentation yesterday (and some Bing shenanigans) gave me second thoughts from a different, but related perspective. Not because things went a bit sideways there; rather, it occurred to me that chatbots obscure both their sources’ origins and the selection process a lot more than conventional search engines already do, which might transform search engines in the long run into portaled successors of the AOL internet. Sure, people can still use conventional search, but we all know how things are done at Google. If more and more people switch to chatbot search and conventional search begins to deliver fewer and fewer ads, resources might be cut, and conventional search might even be dropped for good someday (viz. Google Reader, Feedburner, Wave, Inbox, Rockmelt, Web & Realtime APIs, Site Search, Map Maker, Spaces, Picasa, Orkut, Google+, and so on).
Imagine what it would look like if ChatGPT were a lossless algorithm. If that were the case, it would always answer questions by providing a verbatim quote from a relevant Web page. We would probably regard the software as only a slight improvement over a conventional search engine, and be less impressed by it. The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.
Go and read the whole thing. It’s chock-full of insights and interesting thoughts.
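To make the lossless/lossy distinction from the quoted piece concrete, here’s a minimal sketch in plain Python with the standard-library zlib module; the toy “lossy” scheme at the end is my own illustration, not anything the article proposes:

```python
import zlib

text = b"ChatGPT retains much of the information on the Web, but only as an approximation."

# Lossless compression: the exact sequence of bytes is recoverable.
compressed = zlib.compress(text)
assert zlib.decompress(compressed) == text  # verbatim round-trip, bit for bit

# A toy "lossy" scheme: keep only every other word and paper over the gaps.
# The gist survives; the exact wording (the exact bits) does not.
words = text.decode().split()
approximation = " ... ".join(words[::2])
print(approximation)
```

The verbatim round-trip is what a search engine’s cache gives you; the approximation, presented fluently, is closer to what a chatbot gives you.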
Today, figuratively speaking, these technologies will be implemented in our design tools and graphics editors and search engines and word processors and translation/transformation apps and game engines and coding environments and digital audio workstations and video production editors and business communication platforms and diagnostic tools and statistical analysis software in everything everywhere all at once fashion with the possible exception of our singularly immobile educational systems, and we will work with them without batting an eye once the novelty value’s worn off. And by tomorrow, what were once miracles will have become children’s toys.
It’s a 14-minute read, according to medium.com. It started out as a post for this blog, but grew to a size more conveniently digestible over there. Along with all the things mentioned in the quote and in the headline, I also touch upon AlphaGo’s successor, cotton candy, social and economic dynamics, and late-night visits to the refrigerator.
Now that Ron DeSantis is engineering the birth of the Fourth Reich in Florida in preparation for his nomination for the 2024 presidential election, it is hard not to see the similarities between his all-out war against cultural and educational institutions and the coordinated totalitarian Gleichschaltung perpetrated by the German Nazi party between 1933 and 1935. Yes, you read that right—the Nazis established total control over Germany’s institutions and organizations within the span of two years.
For cultural Gleichschaltung, you can use two major strategies: you can ban, or you can co-opt. Co-optation takes more effort, but in contrast to bans it is often invisible and can serve as a tool for indoctrination.
Here’s a rather subtle example: a 1943 performance of Robert Schumann’s Das Paradies und die Peri Op.50, a secular oratorio for soloists, choir, and orchestra completed in 1843. When you take a look at the program—see slides for page 1 and page 2, which I found in a boxed vinyl recording I once bought at a flea market in Germany—and happen to know the libretto, you will immediately see what the Nazis did there.
The libretto is about the Peri, a creature from Persian mythology, who’s been expelled from paradise and tries to regain entrance by bringing a gift that is “most dear to heaven” (here’s a full summary). First, she captures the last drop of blood of a young freedom fighter, which fails. Then, she catches the last breath of a girl who sacrifices herself for her plague-ridden lover; that too fails. Finally, she brings the tear of a criminal, shed in remorse at the sight of a praying child, which finally opens the gates of heaven.
Now look at that program again—the girl who sacrifices herself comes first, which fails; then comes the tear of remorse, which also fails; then the drop of blood of a young freedom fighter—»Denn heilig ist das Blut, / Für die Freiheit verspritzt vom Heldenmut« (“For sacred is the blood, shed for freedom by heroic courage”), which opens the gates of heaven.
That’s how you can twist, easily and with a few strokes of the pen, a piece of your cultural heritage into supporting your fascist agenda.
If you don’t live under a rock, you might have noticed a few remarkable—but not altogether unpredictable—advances in natural language processing and reinforcement learning, most prominently text-to-image models and ChatGPT. And now, every concerned pundit comes out of the woodwork, decrying how terrible these developments are for our educational system and for our starving artists.
Is this deluge of AI-generated images and texts terrible? Of course it is. But don’t let them tell you that this is the problem. It’s only the symptom of a problem.
Let’s start with all those people suddenly feeling deeply concerned about the death of the college essay. Education, if you think about it, should do three things: make children and students curious about as many subjects as possible; give them the tools to develop interests around these subjects; and facilitate the acquisition of skills, knowledge, and understanding along these interests. To these ends, virtually every new technology would be useful one way or another. Our educational systems’ priority, however, is feeding children and students standardizable packets of information—a lot of them with very short best-before dates stamped on them—for evaluation needs and immediate workplace fitness. Just think of it: the world wide web became accessible for general use around 1994! During all that time, almost thirty years, the bulk of written and oral exams hasn’t adapted to integrate the internet but has been meticulously kept isolated from it. Nor, for that matter, has the underlying system of keeping information expensive and disinformation free changed, an infrastructure into which AI-generated nonsense can now be fed with abandon. And all this gatekeeping for what? When there’s a potential majority in any given country to elect or keep electing fascists into high office, the survival of the college essay probably isn’t the most pressing thing on our plate with regard to education.
Then, the exploitation of artists. Could these fucking techbros have trained their fucking models on work that artists consented to, or on work in the public domain? That’s what they should’ve done and of course didn’t, but please spare me your punditry tears. While it is thoroughly reprehensible, it’s only possible because at that intersection where the tech and creative industries meet, a towering exploitation machine has stood all along—co-opting or replacing or straightaway ripping off the work of artists and freelancers, and of practically everybody who doesn’t own an army of copyright lawyers, the moment their work becomes even marginally successful.
AI will advance, and everything it can do will be done. Nexus-6 models aren’t right around the corner, but they’re an excellent metaphor. We could try and legislate all those leash-evading applications to death, chasing after them, always a few steps behind and a few years too late, trying to prevent new ways of exploitation while reinforcing old ways of exploitation. Or we could try and change our educational and creative economies in ways that make these applications actually useful and welcome, for educators and artists in particular and humanity and this planet in general.
Ferrari and some of the other high-end car manufacturers still use clay and carving knives. It’s a very small portion of the gaming industry that works that way, and some of these people are my favourite people in the world to fight with—they’re the most beautiful and pure, brilliant people. They’re also some of the biggest fucking idiots.
For the rest of the conversation, Riccitiello builds his giant strawman of developers who “don’t care about what their player thinks” and equates this strawman with everyone who doesn’t embrace Unity’s publishing model that is driven by, let’s call it by its name, advertising and addiction.
One of the anecdotes with which he fleshes out his strawman:
I’ve seen great games fail because they tuned their compulsion loop to two minutes when it should have been an hour.
That’s not just strawman-nonsense on so many levels; it also tells you everything about the mindset behind it. I’m well aware that “compulsion loop” has become an industry term in the mobile games sphere that has replaced “gameplay loop,” the term we still use when we want to make games that players can enjoy. (Ferraris, according to Riccitiello.)
Just to refresh your memory: while the gameplay loop or core loop* consists of a sequence of activities, or sets of activities, that the player engages in again and again during play and that defines the mechanical aspect of the playing experience, the compulsion loop is a behaviorally constructed, dopamine-dependent, addiction-susceptible, near-perpetual anticipation–avoidance–reward loop, an extrinsic motivation package that keeps players playing to maximize their exposure to advertising, their willingness to spend money on in-game purchases, or both.
That’s what Unity’s business model is about, industry term or no.
——————-
* To maximize confusion, there’s also the game loop, which is the piece of code that updates and renders the game from game state to game state.
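For anyone who hasn’t met that third loop before, here’s a minimal, engine-agnostic sketch of the idea—plain Python, with hypothetical update and render callables standing in for a real engine:

```python
import time

def run_game_loop(update, render, fps=60):
    """Bare-bones fixed-timestep game loop: advance the state, draw it, repeat."""
    dt = 1.0 / fps
    state = {"running": True, "ticks": 0}
    while state["running"]:                # update() may flip this flag to quit
        frame_start = time.perf_counter()
        update(state, dt)                  # move the game from one game state to the next
        render(state)                      # draw the current game state
        # sleep away whatever remains of this frame's time budget
        elapsed = time.perf_counter() - frame_start
        time.sleep(max(0.0, dt - elapsed))
```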
Layoffs have afflicted Unity’s offices across the globe. […] On Blind, the anonymous messaging board commonly used by employees in the tech industry, Unity staffers say that roughly 300 or 400 people have been let go, and that layoffs are still ongoing. […]
Unity has been a “shit show” lately, one person familiar with the situation, who requested anonymity for fear of reprisal, told Kotaku. Attrition. Mismanagement. Strategic pivots at a rapid, unpredictable rate.
Two weeks prior, apparently, CEO John Riccitiello had lied in an all-hands meeting, claiming that no one would be laid off.
What’s more, there has been a flurry of acquisitions lately, most recently digital effects studio Weta for $1.62b and Parsec for $320m, while investment into creative ventures has all but dried up. The only creative team that had been working internally on a game was fired as well.
This project that Unity debuted this year, aimed at improving users’ knowledge of Unity, improve tooling, level up creator skills, that was fun and inspiring, that a lot of people were looking forward to? Everyone in that team picture has been fired. https://unity.com/demos/gigaya
I always had my reasons to distrust Unity deeply, and indicators for a smoldering fire under the hood have been there for a very long time. So if its C-suite now is officially a coterie of lying weasels, maybe we should rewind to about a year ago and take their protestation in this matter with the tanker truck of salt it deserved all along.
The Swedish Embracer Group, which has lately been gobbling up developer studios and IPs like people eat popcorn at the movie theater, is building an archive “for every game ever made,” according to this YouTube video and this page on their website:
Imagine a place where all physical video games, consoles and accessories are gathered at the same place. And think about how much that could mean for games’ culture and enabling video games research. This journey has just been started and we are at an early stage. But already now, we have a large collection to take care of at the Embracer Games Archive’s premises in Karlstad, Sweden. A team of experts has been recruited and will start building the foundation for the archive.
Frankly, I don’t see the point. A “secret vault” with the “long-term ambition” to exhibit parts of the archive “locally and through satellite exhibitions at other locations” so people can—what—look at boxes?
I’m not saying collecting colorful boxes is a bad thing. I like colorful boxes! But any preservation effort whose primary goal isn’t to restore and archive these games’ source code, clean up their fucked-up license entanglement train wrecks, and make them playable in open-source emulators that aren’t shot down by predatory copyright policies upheld and lobbied for by the games industry and its assorted associations is neither true preservation nor progress.
Imagine all the paintings in the Louvre were not behind glass but inside boxes, and all you could see were descriptions written on these boxes of what these paintings depict and who created them when.
Judge John C. Coughenour let part of the case move forward in the U.S. District Court for the Western District of Washington, saying it’s plausible Valve exploits its market dominance to threaten and retaliate against developers that sell games for less through other retailers or platforms.
The company “allegedly enforces this regime through a combination of written and unwritten rules” imposing its own conditions on how even “non-Steam-enabled games are sold and priced,” Coughenour wrote. “These allegations are sufficient to plausibly allege unlawful conduct.”
The consolidated dispute is one of several legal challenges to the standard 30% commission taken by leading sales and app distribution platforms across Silicon Valley.
About time.
Steam is an exploitation machine that doesn’t invest the shitloads of money it extracts back into its platform to make it less exploitative and a better experience, particularly for indie developers. Instead, Steam has always had a habit of passing everything it should do itself on to its community, which, besides being unpaid work, is among the most patriarchal things I can think of. Also, remember for whom they lowered their cut when people began to complain? They lowered it for the big players who brought in over $10 million in sales. In other words, only for those who have the muscle to push against them.
I remember well when this cartoon was popular ten years ago, when people regarded Valve as “Your Cat—Loyal, friendly, and the internet loves him” (scroll all the way down). Yes, that’s a very nice picture, one that I didn’t quite buy even at the time. Today, if you catch a glimpse of Steam’s real image behind the curtain, it looks more like a picture of Dorian Gray’s cat.
Square Enix, the day before yesterday, in their Press Release (PDF):
[We] today signed a share transfer agreement with Sweden-based Embracer Group AB concerning the divestiture of select overseas studios and IP. The company’s primary assets to be divested in the transaction are group subsidiaries such as Crystal Dynamics, Eidos Interactive, and IP such as Tomb Raider, Deus Ex, Thief, and Legacy of Kain.
[corporate jargon] In addition, the transaction enables the launch of new businesses by moving forward with investments in fields including blockchain, AI, and the cloud. [more corporate jargon]
Well, I’m not an analyst. But if you ask me, selling off some of your jewels in a fire sale*, not because they made a loss but, purportedly, because the profits they made were “below expectations,” to a publishing group that already owns more than a hundred studios and a kaboodle of other media companies, with the expressed intention of investing in cryptoshit, then I’d say there are some invisible pipelines under the hood that will funnel staggering amounts of money over time into some select investors’ fathomless pockets.
——————-
*Compare the estimated price tag of $300 million for all these studios and IPs to this year’s other buying sprees—Sony’s acquisition of Bungie/Destiny for $3.6 billion; Take-Two’s acquisition of Zynga/FarmVille for $12.7 billion; or Microsoft’s acquisition of Activision Blizzard for just under $70 billion. Plus, Embracer’s own recent acquisitions of Asmodee Games for $3.1 billion and Gearbox for $1.3 billion. Compared to these, what Embracer paid for all these studios and IPs is the equivalent of the tip money you put aside for pizza and package deliveries.
One of the topics this secret level focuses on is game-based learning, and sometimes I also post rants about school systems as such. But there’s also higher education—and the university system in the U.S., which has been eroded, dismantled, assaulted, and hijacked for and by monetary and political interests for quite a while. Recently, these attacks have intensified, and even academic tenure is now in the crosshairs. Tenure certainly has its flaws and can be exploited, but it’s there for a reason.
SMBC, my favorite web comic together with XKCD, regularly and notoriously makes fun of every academic discipline—natural sciences, social sciences, math, you name it—but it always felt to me that the jokes targeting the humanities were a bit less playful and a bit more deprecating.
Thus, I was blown away by this jumbo-sized SMBC comic the other day, with its lovingly crafted metaphor of liberal education as an “old dank pub”—so much so that I can’t help but link to it here for posterity, and to make you cry too.
Don’t forget to click on the red button at the bottom for a bonus panel, and read a few other comics as well while you’re there!
Week before last, I mentioned in my weekly newsletter that People Make Games had released a documentary on YouTube about their investigations into emotional abuse across three different, highly prestigious indie studios. Several aspects of it made this even more depressing than the regular toxic-big-studio news that we have more or less come to expect.
One of these studios was Funomena, founded in 2013 by Robin Hunicke and Martin Middleton. Yesterday, Chris Bratt from People Make Games reported on Twitter that Funomena will be closed:
I’m absolutely gutted to report that Funomena is set to be closed by the end of this month, with all contractors already having been laid off as of last Wednesday.
This is an extremely sad end to the studio’s story and I hope everyone affected is able to land on their feet.
This announcement has caught many employees by surprise, who now find themselves looking for other work, with their last paycheck coming this Friday.
Then there’s Funomena’s official statement, also on Twitter, which paves the way for laying the blame on a new funding round that didn’t materialize. Finally, there seem to be voices that blame the studio’s closure on the release of the documentary, but (former) Funomena employees beg to differ.
In my first post in this series, I wrote about in-game photography for memorable moments, and about the detrimental effects photos can have on our memories by highlighting these recorded moments over all other moments that might be of equal, or even higher, biographical value for us. And I wrote about how digital photography might alleviate this effect. It enables us not only to record an unprecedented number of moments of our lives, but to record them at almost any given point in time. Our phone cameras are always with us, ready to record.
Originally, I planned to proceed to aspects of player memory and game-based learning, but I’m still processing these topics. Then, last week, an article in the New Yorker by Kyle Chayka, “Have iPhone Cameras Become Too Smart?,” triggered another artillery exchange in the digital photo quality wars that is worth mentioning in the context of these posts.
Chayka’s argument boils down to this: in contrast to the iPhone 7 camera, later models—iPhone 11–13—digitally manipulate shots so “aggressively and unsolicited” that they often don’t look “natural” but weird, or overprocessed and “over-real,” with glaring editing errors on top.
Backlash was swift, naturally. John Nack (h/t Daring Fireball) put up a gallery on Twitter that juxtaposes iPhone 7 and iPhone 12 shots of the same objects at the same locations. And John Gruber argued that the problem “is not that iPhone cameras have gotten too smart. It’s that they haven’t gotten smart enough.”
It doesn’t help that Chayka’s article is not supported by evidence, like, well, photos. Not of editing glitches (we all know how those look), but examples of iPhone 7 shots that look better, more natural, or more interesting than corresponding shots from an iPhone 11 or later. For an article that makes such a deep, sweeping argument about digital photography, one would expect some examples to go with it. Perhaps it’s just me.
Also, it doesn’t help that the article’s arguments aren’t well structured and throw wildly different things into the mix, particularly the hypothesis (or assumption) that modern digital photography has a “destabilizing effect on the status of the camera and the photographer” (with reference to Benjamin, of course). That iPhones create “a shallow copy of photographic technique that undermines the impact of the original” and “mimics artistry without ever getting there”—now that’s a tune we’ve heard before.
First of all, for the overwhelming majority of iPhone users, iPhone photography is perfectly fine. And later iPhone models serve their purposes a lot better than the camera and software package of the iPhone 7 did.
Then, if the iPhone’s editing process is too aggressive for what you want to achieve, which is indeed a concern for many professional photographers and artists, you can and should switch to third-party photo apps, not least Halide, which allows you to shoot RAW and, from the iPhone 12 Pro models on, Apple’s ProRAW as well. (Chayka even mentions Halide, but somehow that doesn’t lead anywhere.) And it goes without saying that later iPhones will provide you with better RAW or ProRAW data to work with than an iPhone 7.
Seen in this light, the argument that digital photography is not really “professional photography” and is lacking in “artistry” is at least partly based on the hidden, quite bizarre, and most likely unexamined assumption that the “shot” that came out of the camera is what makes or breaks a professional photo or a work of art. Just as with photos from non-digital cameras, the editing process is a substantial part of it, all their differences notwithstanding.
Adding everything up, Chayka’s article is not well structured; it doesn’t provide evidence for its arguments; and it contains hidden assumptions that are at best dubious and at worst outright untenable.
And I’m not even a fan of iPhone photography! Early on over at Glass, I followed a good number of accounts who shot with iPhone 11+ models. Photographers, that is, who are very good and know what they’re doing. Some are a blast, and I keep enjoying them very much. But with time, I felt the majority of photos from pure iPhone accounts were just not interesting enough to stick around. At the same time, I began to follow more and more photographers who shoot film, and not just professionals. (Most of my older photos on Glass are shot with an iPhone 5s, so please ignore these curious sounds of breaking glass that you hear from the back of the house.)
Thus, there are two different, but perfectly compatible takeaways.
On one side, following Gruber, iPhone cameras haven’t gotten smart enough yet for the overwhelming majority of iPhone users who aren’t professional photographers or artists. On the other side, newer and smarter iPhone cameras keep providing professional photographers and artists with more, better, and richer “raw” data to work with.
Bonus recommendation: if you want to dive into the nitty-gritty details of photo processing software on the iPhone 13 Pro, this post by Halide designer Sebastiaan de With on “iPhone 13 Pro: The Edge of Intelligent Photography” is what you want to read.
Last week, The New Yorker featured an interview with FromSoftware’s game director Miyazaki Hidetaka, conducted by Simon Parkin.
Mostly, it’s about difficulty, but also about writing. For Elden Ring, as you might know, Miyazaki collaborated with George R. R. Martin. But it’s not at all your regular “let’s hire a writer/screenwriter for the story” approach:
Miyazaki placed some key restraints on Martin’s contributions. Namely, Martin was to write the game’s backstory, not its actual script. Elden Ring takes place in a world known as the Lands Between. Martin provided snatches of text about its setting, its characters, and its mythology, which includes the destruction of the titular ring and the dispersal of its shards, known as the Great Runes. Miyazaki could then explore the repercussions of that history in the story that the player experiences directly. “In our games, the story must always serve the player experience,” he said. “If [Martin] had written the game’s story, I would have worried that we might have to drift from that. I wanted him to be able to write freely and not to feel restrained by some obscure mechanic that might have to change in development.”
In Ludotronics, my game design book, I wrote about how a game’s setting, location/environment, backstory, and lore can be crafted with audiovisual, kinesthetic, and mythological means to create the game’s world narrative. How Miyazaki approached this for Elden Ring would make a terrific example for an updated edition.
Also:
There’s an irony in Martin—an author known for his intricate, clockwork plots—working with Miyazaki, whose games are defined by their narrative obfuscation. In Dark Souls, a crucial plot detail is more likely to be found in the description of an item in your inventory than in dialogue. It’s a technique Miyazaki employs to spark players’ imaginations[.]
For many reasons, I think it’s an excellent approach. (I also think that Miyazaki is very polite.)
The people of Ukraine are under attack. As game developers we want to create new worlds, not to destroy the one we have. That’s why we’ve banded together to present this charity bundle to help Ukrainians survive this ordeal and thrive after the war ends.
Over 700 creators have joined in support to donate their work.
We kept the minimum low, but we highly urge you to pay above the minimum if you can afford to do so. All proceeds will be split between the charities 50/50.
Only paid products were allowed into the bundle this time, DRM-free and download-ready, no external Steam Keys or any other bullshit. Proceeds will be split evenly between the International Medical Corps and the Ukrainian organization Voices of Children.
The yield from itch.io’s Bundle for Racial Justice and Equality two years ago was phenomenal.
For reference, Activision Blizzard’s valuation by market cap was $51 billion right before Microsoft announced their plan to buy Actiblizz for $69 billion. On that scale, Ubisoft look tiny with a market cap of $6-7-ish billion. Microsoft could accidentally buy Ubisoft by misclicking on Amazon and not even realise until Yves Guillemot was dumped on their doorstep in a cardboard box.
Being open to buy-outs seems to be the fashionable thing to do nowadays if you’re a game publisher and your company’s rocked by scandals around sexism, racism, misogyny, and toxic work conditions in general. (Paradox, for now, being an exception.)
What it doesn’t do, of course, is help. After the better part of a year, none of the demands from the “A Better Ubisoft” initiative have been met. And with regard to Activision Blizzard, well. Last Tuesday, this absolute gem of an anti-union presentation [slide | slide] popped up from ReedSmith, whose lawyers—you can’t make this up—represent Activision in the NLRB hearing on the publisher’s union-busting activities. (That presentation’s been hastily deleted, of course, but the Wayback Machine has a long memory.)
America’s Army: Proving Grounds, a game used as a recruitment tool by the United States government, is shutting down its servers on March 5 after existing in various iterations for 20 years. After that date, the game will be delisted on Steam and removed from the PSN store. Offline matches and private servers will work, but the game will no longer track stats or provide online matches.
Once an avid player, I haven’t played America’s Army for years, but it sure feels like the end of an era.
Of course, it’s one of those “old school” shooters from the late 1990s and 2000s where you can fire up your own server or play the game offline with friends over LAN forever, like I still do from time to time with the original 1999 Unreal Tournament.
But I guess you’ll no longer be court-martialed and sent up the river (i.e., banned from online play for a week or so) for friendly fire, disobeying orders, or being an asshat in general.
But the cleverest bit about Wordle is its social media presence. The best thing about Wordle is *the graphic design of the shareable Wordle chart*. There’s a huge amount of information—and drama—packed into that little graph.
Every game of Wordle is a particular little arc of decisions, attempts, and failures.
But each little posted box is *a neat synopsis of somebody else’s arc of action, failure, choice, and success*.
Now, if you’re so inclined, you can turn your lovely little graphs into lovely little buildings:
Wordle2Townscaper is meant to convert Wordle tweets into Townscaper houses using yellow and green building blocks. You can download the tweet contents, parse pasted tweet contents, or manually edit [the grid] of 6 rows and 5 columns. Optionally, you can also choose whether wrong guesses should be left blank on Townscaper or filled with white blocks. Ground floor is always needed because you can’t change its color.
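If you’d like to tinker with that idea yourself, here’s a minimal, hypothetical sketch—the function name and emoji mapping are my own assumptions, not taken from Wordle2Townscaper—that parses a pasted Wordle share grid into rows of hits, near-misses, and misses:

```python
def parse_wordle_share(share_text):
    """Parse a pasted Wordle share grid into a matrix of 'hit'/'close'/'miss'."""
    mapping = {"🟩": "hit", "🟨": "close", "⬛": "miss", "⬜": "miss"}
    grid = []
    for line in share_text.splitlines():
        row = [mapping[ch] for ch in line if ch in mapping]
        if row:                      # skip the header line and blank lines
            grid.append(row)
    return grid                      # up to 6 rows, 5 cells each

example = """Wordle 212 4/6

⬛🟨⬛⬛⬛
🟩⬛⬛🟨⬛
🟩🟩🟩⬛⬛
🟩🟩🟩🟩🟩"""
print(parse_wordle_share(example))
```

From a grid like that, mapping each cell to a yellow, green, or white block is all that’s left to do.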
Here’s a brief interview on BBC with Josh Wardle, the creator of the Wordle game (it starts at around 1:24):
I like the idea of doing the opposite of that—what about a game that deliberately doesn’t want much of your attention? Wordle is very simple and you can play it in three minutes, and that is all you get.
There are also no ads and I am not doing anything with your data, and that is also quite deliberate.
There’s a lot to love about Wordle—that you can play it only once per day, or that you can share your result on social media in a clever, spoiler-free way (the word you have to guess on any given day is the same for all players).
On the other hand, will it last? Ian Bogost, in particular, made some astute remarks regarding Wordle’s rules and its life cycle to that effect.
The interview will be taken down after four weeks, but this BBC news item has some quotes. What I didn’t know, as mentioned in this article, is that Josh Wardle was also the creator of Reddit’s The Button, an “experimental game” that ticks all the right boxes for being a social experiment.
Update January 13: Here’s a terrific Twitter thread by philosopher C. Thi Nguyen (@add_hawk) on Wordle’s graphic social communication, and another thread by Steven Cravotta with a great story on how Wordle impacted his own game on the app store, and what he’s going to do with it.
In Levine’s interpretation, auteurism has meant discarding months of work, much to his staff’s dismay. During development of BioShock Infinite at his previous studio, Levine said he “probably cut two games worth of stuff,” according to a 2012 interview with the site AusGamers. The final months of work on that game demanded extensive overtime, prompting managers to meet informally with some employees’ spouses to apologize.
Ghost Story employees spent weeks or months building components of the new game, only for Levine to scrap them. Levine’s tastes occasionally changed after playing a hot indie release, such as the side-scrolling action game Dead Cells or the comic book-inspired shooter Void Bastards, and he insisted some features be overhauled to emulate those games. Former staff say the constant changes were demoralizing and felt like a hindrance to their careers.
Those who worked with Levine say his mercurial demeanor caused strife. Some who sparred with Levine mysteriously stopped appearing in the office, former staff say. When asked, managers typically described the person as a bad match and said they had been let go, say five people who worked there. Others simply quit. The studio’s top producer resigned in 2017 following clashes with Levine.
We are playing entirely within a neural network’s representation of Grand Theft Auto V. We’ve seen AI play within an environment, but here the AI is the environment.
Nifty.
To collect training data for their neural network, they first created a rules-based AI and let twelve instances of it drive around in the same GTA V game instance on the same stretch of road.
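Purely as a hypothetical illustration of that data-collection step—none of these names or details come from the project itself—recording frame/action pairs while a scripted driver plays might look roughly like this:

```python
import random

def rules_based_driver(frame):
    """Stand-in for the scripted AI: pick a steering action from simple rules."""
    return random.choice(["left", "straight", "right"])

def collect_training_data(capture_frame, apply_action, steps=10_000):
    """Record (frame, action, next_frame) triples while the scripted AI drives."""
    dataset = []
    frame = capture_frame()
    for _ in range(steps):
        action = rules_based_driver(frame)
        apply_action(action)          # send the chosen action to the game
        next_frame = capture_frame()  # grab the resulting screen image
        dataset.append((frame, action, next_frame))
        frame = next_frame
    return dataset

# e.g., with stand-in stubs:
# data = collect_training_data(lambda: "frame", lambda action: None, steps=3)
```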
And there’s more:
This model learned to model GTA’s modeled physics. But could it just as well have learned, maybe, some basic real-life physics?
Finally got around to listening to Ken Levine’s 2014 GDC talk on “Narrative Lego” and I’m kind of…underwhelmed. It’s all very tropey, it’s all rather old-hattish, good people have tried, and the “yeah but let’s get the system off the ground first” drill is precisely how we bake old problems into new technologies.
As far as I know, nothing much has happened since, except for the studio’s rebranding to “Ghost Story Games” and some adumbrations about being inspired by the Nemesis AI System* and doing things differently than Telltale Games. And while Ghost Story Games’s catch phrase “Radical Recognition” is certainly catchy, I’m not holding my breath.
* Which, in an egregious dick move by Warner Bros., has been patented in the meantime.
Great podcast chat to listen to between Robin Hunicke and Kimberly Voll. Docks right into the mixed graduate/undergraduate course on “Social & Ethical Dimensions of Player Behavior and Game Interactions” that we wrapped up two weeks ago.
We’ve really come a long way. Once, as related by Kimberly Voll, it was like this:
One of my defining moments prior to Riot was sitting in a meeting with someone discussing a survey that had just gone on, a survey of players, and they then really said, we can throw away the results from all the female players, because that clearly wasn’t their demographic.
This podcast delivers a barrage of experiences, current research, and interesting approaches to digest, from the differentiation between imaginative space and social space in MMORPGs, to the effects of competition on player behavior, to the absence in digital games of the social constraints that promote and enforce behavior like sportsmanship in non-digital games.
This is the bleakest assessment of teaching I’ve read in a very long time. I don’t think it’s wholly unwarranted—my own assessment of school-level teaching in particular is equally grim, as I’ve written about lately here and here. But I’m way more optimistic with respect to university-level teaching.
[T]here is the value problem. Is the information I am sharing and asking students to critically evaluate, really important? Is this stuff they need to know? I often kid myself that it is. I will claim that a subject as apparently dry and boring as contract law is intrinsically fascinating because it raises important questions about trust, freedom, reciprocity, economic value and so on. […]
I could (and frequently do) argue that students are learning the capacity for critical and self-reflective awareness as result of my teaching […] The claim is often made that critical thinking skills are valuable from a social perspective: people with the capacity for critical thought are more discerning consumers of information, better problem solvers, better citizens and so on. But I don’t know how true this is.
To begin with, I switched the background color from its original daringfireball-flavored grayish-blue to a more maritime-flavored grayish-green. (It’s called “Dark Slate Gray,” but never mind.) If it’s still grayish-blue, hit CMD-R on Mac or CTRL-R on PC a few times to reload the cached style sheet.
Then, I installed a lightbox plugin and added a “slide” icon to the link class; it will display the post’s Open Graph/Twitter Card image in an overlay if you click on it. (That’s the image you see when the post appears on social media.) I might throw in a few other pictures as well.
Finally, I remembered that I created this side blog back in 2014 not merely for research-adjacent blog posts about game-based learning and game-related ethics; I also created it to link to, and to briefly comment on, external content related to games in general. (Whereby, in linked-list fashion, the post’s headline links to the source, not to the post.) This will recommence soon.
Last week, I wrapped up two undergraduate-level game design courses with final workshop sessions, and I wanted them to be an enjoyable experience for everyone.
If there’s one thing teaching should accomplish, it’s this: show students a field, its tools, and its biggest challenges in inspiring ways and help them find and solve challenges which a) they find interesting and b) are manageable at their current level of expertise in terms of skill, knowledge, understanding, and attitude. With that, you provide everything that’s needed: autonomy, mastery, purpose, and also—as solving interesting problems almost always demands collaboration—relatedness.
When I was a student, working together existed mostly as group work or team papers, and I hated both. So I always try to set up my workshop sessions in ways even my freshman self would appreciate. As you might have guessed already, the learning theory I’m trying to follow, to these ends and in general, is constructivism, not cognitivism.
Cognitivism is more or less what you experienced in school. It’s about communicating simplified and standardized knowledge to students in the most efficient and effective manner; the students are supposed to memorize this knowledge in a first step, and then, in a second step, memorize the use of that knowledge by solving simplified and standardized problems. Sounds familiar, right? Cognitivism is all about memory and mental states and knowledge transfer. It’s not at all about creating curiosity, facilitating exploration, or providing purpose. Cognitivism doesn’t foster and stimulate mastery goals and personal growth (getting really good at something); it focuses instead on performance goals (getting better grades, and getting better grades than others). There’s no autonomy to speak of, only strict curricula and nailed-down syllabi and standardized testing. And there’s no relatedness to speak of either, because there’s no collaboration or cocreation beyond the aforementioned occasional group work or team paper. Because real collaboration and cocreation would mess up standardized testing and individual grading.
Constructivist learning theory, in contrast, is about everything cognitivism ignores: interests, exploration, autonomy, mastery goals, purpose, and relatedness through collaborative and cocreative problem-solving. Certainly, constructivism has its own tough challenges to navigate, but that’s no different from any other learning theory, like cognitivism or behaviorism. According to constructivism, students learn through experiences—along which they create internal representations of the things in the world, and attach meaning to these internal representations. Thus, it’s not about learning facts and procedures but developing concepts and contexts—by exploring a field’s authentic problem space, picking a challenge, and then exploring the field’s current knowledge space and its tool boxes to solve that challenge. The critical role of the teacher is to provide assistance and turn unknown unknowns into known unknowns, so that students can develop a better understanding of the scope, the scale, and the difficulties involved. Without the latter, learning and problem-solving can easily become a frustrating experience instead of an enjoyable one.
And that’s what I mean by enjoyable. In practice, in my final workshop sessions, everybody can pick a problem they’re burning to solve, gather like-minded fellow students, and try to solve that problem together in a breakout room by applying the knowledge and the tools we’ve explored and collected along the course. Or, alternatively, they can stay in the “main track” and tackle a challenge whose rough outlines I set up in advance, moderated by me. In the end, every group presents their findings and shares their most interesting insights.
That’s a setup even my younger self would have appreciated! Everyone can, but by no means has to, shoulder the burden of responsibility by adopting a challenge. Everyone can, but by no means has to, win over other students for a challenge, or join an individual group. Everyone can decide to stay in the main track, which is perfectly fine.
The problem with cognitivism is not that it doesn’t want to die, it’s that it hasn’t noticed it’s already dead. Just think about it: almost everybody can tell, or knows somebody who can tell, an uplifting story about that one outstanding teacher who inspired them despite everything else going down in school. But the very foundation of our educational system is that it’s supposed to work independently of any specific individual, of any specific outstanding teaching performance.
Should we ever design a framework that makes our educational system truly work on the system level, which may or may not happen in the future, cognitivism certainly won’t be it. Cognitivism is a shambling zombie that, over time, has gradually slipped into its second career as a political fetish.
Memory is inseparable from learning, and learning is inseparable from memory. They’re two sides of the same coin, if you think about it. Against this backdrop, what role does personal photography play, i.e., photos that are not created for artistic reasons but to record moments we deem memorable? Certainly, there’s a lot to learn from personal photographs in terms of historical knowledge, but that’s not something we expect from our own personal photographs. We don’t expect to learn from them; we merely expect to be reminded of knowledge we already have, relying on the convenience of a kind of memory refresh machine.
But I suspect there’s more to it, and that we can use personal photography for the learning environment of video games. But beware: these are just wayside thoughts I had while working on a paper; they don’t amount to anything resembling a systematic or scientific inquiry. I’m not even entirely sure which direction Part II of this post is going to take!
In this first part, to lay the groundwork, I want to highlight some aspects related to in-game photography, and some aspects related to how personal photography is associated with personal memory.
As to in-game photography, the option to take in-game screenshots has been around for a while. A bit more recent, though, is the option to let the player character take photographs with their in-game smartphone, which the player can then keep, collect into albums, and post on social media as if they had taken these photographs themselves.
In-game photography for “memorable moments” is a clever tool for bringing the player and the player character closer together. This is particularly valuable in games like, e.g., Uncharted 4: A Thief’s End and The Lost Legacy, where the player characters—Nathan Drake and Chloe Frazer, respectively—live their own lives, have their own goals, and make their own choices and decisions, all utterly unaffected by player input. (The difference between choices and decisions is discussed at length in my book Ludotronics.)
Then, there are games that use in-game photography for documenting the state of the world, as, e.g., in Umurangi Generation (a brilliant game that’s still a Steam exclusive, sadly).
Certainly, these two aspects—memorable moments and documentation—are intimately related. And there’s a third aspect involved: individual biography. By recording memorable moments and events, photographs illustrate the story that we tell ourselves about ourselves. (Mind, as John Barth put it in On With the Story, that “The story of our life is not our life, it is our story.”) This story, in turn, does a lot of heavy lifting toward convincing ourselves that we have a coherent past, that we’ve been the person we are right now all along, only more “developed” alongside new knowledge and experiences over time.
But for all these things that photography does for us, it used to come with a trade-off. Photographs don’t merely focus our memories; they narrow them down, even constrict them, to those moments we chose to illustrate. It’s these illustrated moments that we not merely recognize when we see them, but that we are most likely to freely recall, collectively and individually, in the absence of these illustrations—because we also remember the photographs from these events, not just the events. But, precisely because of this, remembering photographs often takes the right of way over remembering the events themselves. Thus, for moments that we did not choose to illustrate that way, we depend more and more on recognition instead of recall. Instead of remembering them freely, they need to be triggered: through other kinds of “illustrations” like oral or written texts for both collective and individual recognition; and melodies, flavors, and scents for individual recognition. Yet as we come to depend more and more on recognition instead of free recall, more and more of these unillustrated biographical moments are lost in time.
As a biographical note from my late teens/early twenties, that was one of two reasons why I gave up photography, only a few years after buying a decent camera with money I’d earned during summer break. I’ve always had an excellent memory, but I began to notice that my photographs pulled me toward the memories they illustrated, like irresistible little magnets, and that they made it hard for me to remember events not illustrated by my photographs. (The second reason was that all my photos were crap and didn’t get better, no matter how hard I tried.)
But this trade-off might no longer exist. Sure enough, I picked up taking photos again the moment I could do so excessively. Thanks to digital photography, every worthwhile moment can now be illustrated. (Plus, I think I got slightly better at taking photos, but you can judge for yourself.)
These aspects, now, I want to analyze in the context of the original question. What do these aspects imply, and how could they be put to use, with relation to player memory, player/player character bonding, and game-based learning?
One thing I do is consulting in the political arena with research, reports, and recommendations to combat racism, misogyny, LGBTQ+ hate, and antisemitism in video games, gaming communities, and on gaming platforms. And, specifically, the instrumentalization of such tendencies by the far-right.
The case can be made that #GamerGate, the neofascist/white-supremacist utilization and amplification of male resentment against women in general and women in gaming in particular, was a test run that eventually paved the way for full-blown Trumpism in the U.S. In Germany, luckily, #GamerGate was never able to establish a comparably powerful foothold among its gaming communities. But, as we all know, Germany has a massive Nazi problem of its own, and lawmakers want to make sure that these people don’t radicalize players and recruit even more Nazis on game-related channels and platforms.
Alas, efforts and measures that make sense are wanting.
There are two major reasons for that. On the one hand, the German legislative body is way more influenced by the authoritarian power fantasies of its executive branch than it should be for a healthy democracy. On the other hand, it has a hard time differentiating between substantially different types of challenges that call for different types of actions and different types of actors.
For this post, let’s put aside the executive branch’s power fantasies—banning end-to-end encryption, for starters—and focus on highlighting the different types of challenges instead.
The first challenge is specific far-right communication channels like Telegram or Gab, where extremists meet, plan, and coordinate their activities, from illegal to lethal. Here, the primary actor should be the executive branch, i.e., law enforcement. It’s certainly not a simple task, but there are many powerful traditional and digital investigation methods law enforcement agencies have at their disposal—if they just stopped complaining about how difficult everything is without unlimited authoritarian snooping power, and got to work instead.
The second challenge is mainstream social media platforms like YouTube and Facebook in particular, where far-right talking points—wrapped up in ready-to-eat bundles of racism, misogyny, LGBTQ+ hate, and antisemitism—are prominently injected into the public discourse. Here, the primary actor should be the legislative branch, i.e., lawmakers. They should pass laws and policies with severe fines and damage claim options. These should make it impossible for platform holders to earn money by vacuuming up the personal data of entire populations and then letting third parties turn the nozzle around and target these populations algorithmically with radicalizing content. Basically, these should be the same kind of laws and fines that make it unattractive for companies to release toxic waste into our water systems.
The third challenge is specific gaming platforms and media channels like Twitch, Discord, Steam, in-game chat channels, and so on, with their traditionally high amplitude of racism and misogyny. The more normalized these behaviors become among players, the more vulnerable players become to being nudged toward ever more extremist ideas. Here, the primary actors should be educational, informational, and cultural institutions, which should promote and rehearse non-discriminatory behavior for individuals and groups, in cooperation with platform holders and game-related media.
Of course, the primary actors for all three challenges could and should coordinate, and they should set up programs with clear goals and timeframes. That way, outcomes could be monitored and evaluated and the programs and goals revised if need be.
Alas, this is Germany! So what’s likely to continue instead, in response to these three challenges, is a never-ending stream of efforts to make end-to-end encryption illegal; make life hard in terms of data protection for everybody except Facebook, Google, or Amazon; and sprinkle gaming platform holders from time to time with complaints about some random dude’s swastika in their profile picture.
2020 went by in a daze. For the first time since 2015, I didn’t publish anything, neither a paper nor a book, and I practically stopped blogging. I did write, but I’m months behind my self-set schedule. It wasn’t just the Corona crisis—it was the developments in Hong Kong, the U.S., and Israel too that put me into a negative mental state that continually drained my energy.
Now, things have improved in the U.S., but the situation in Hong Kong becomes more terrifying by the minute, and the most alarming news keeps coming from Israel. Moreover, I live in Germany—where politicians, propagandists, mercenary scientists, and dangerous alliances of Covid-19 denialists alike torpedo every sensible solution that real scientists and a handful of public figures come up with to fight the virus and keep people safe.
Among the most disastrous measures, in every respect, is the premature opening of schools. This has no rational explanation. Schools in Germany are notoriously underfunded and have no digital infrastructure to speak of. Teachers are bogged down by administrative work. The integration of technologies in the classroom very rarely exceeds the use of calculators and overhead projectors. And, countless political statements of intent notwithstanding, no one ever really gave a shit, and nothing ever changed.
Now, during times of plague, education is suddenly the most terribly precious thing, and to send children back into crowded classrooms is more important than all the people this might kill in the process, or damage for life.
Of course, there are economic reasons to get children out of the way as fast as political decorum allows, going hand in hand with the ministerial refusal to make working from home mandatory where possible. Dying for the GDP is something we all understand.
But, considering the tremendous scope of suffering inflicted by Covid-19, that’s too rational an explanation for the consistently irrational demeanor and decision-making, where the state-level secretaries of education push toward opening the schools, strengthening the next pandemic wave each time like clockwork. What’s really going on, and what took me forever to understand, is that “school” is not, or is no longer, structured like a place of education & learning. Instead, school is structured like a fetish, forever pointing toward education as a lost referent that can no longer be retrieved.
As such, it is hyper-resistant to any kind of change or reform; to scientific and technological progress; to social and psychological and pedagogical insights.
As such, it wastes twelve or thirteen years of everyone’s life on memorizing swiftly perishable facts instead of teaching curiosity; focuses on tests and grades instead of fostering skill, creativity, and understanding; insists on following the curriculum instead of inspiring students with the love, and thirst, for knowledge and for lifelong learning.
As such, it demands that everyone suffer. And now, true to its nature as a fetish, it demands the willingness to sacrifice your loved ones in times of plague as well.
Today, on the last day of Hanukkah, we will do three things: wrap everything up, briefly; talk about the three missing elements period style, game type, and game loop; and draft a task list with items that would be needed to advance our conceptual sketch toward something pitchable, i.e., a game treatment.
During the fourth, fifth, and sixth day of Hanukkah, we sketched a number of design decisions for the Interactivity, Plurimediality, and Narrativity territories.
Today, we will engage the fourth and final design territory of the Ludotronics paradigm, Architectonics.
Instead of merely covering “story” or “narrative,” Architectonics denotes the design and arrangement of a game’s dramatic structure in terms of both its narrative structure and its game mechanics and rule dynamics.
Yesterday’s design territory, Plurimediality, was about “compelling aesthetics,” seen from the viewpoint of functional aesthetics and working toward a game’s consistent look-and-feel for a holistic user experience.
Narrativity, our design territory for today, is about “emotional appeal,” seen from the viewpoint of narrative qualities or properties that work toward conveying specific meaning in a specific dramatic unit for a memorable gameplay moment.
For the latter, the Ludotronics paradigm works with four content domains: visual, auditory, kinesthetic, and mythological. Obviously, to be able to plan specific narrative qualities or properties for memorable gameplay moments, we need to know a lot more about our Hanukkah game than we know at the moment. All we can do right now is sketch a few general principles that we can apply in each domain.
While yesterday’s Interactivity territory with its game mechanics, rule dynamics, and player interactions has the “just-right amount of challenge” at its motivational core, Plurimediality is associated with “compelling aesthetics.”
Which, in a game, comprises not only graphics, sound, music, writing, voice acting, and the game’s look-and-feel in general, but also usability in its many forms including player controls. That’s because aesthetics and usability are two sides of the same coin, united by function. Accordingly, Plurimediality integrates design thinking from the perspective of functional aesthetics (informed by the Ludology dimension) and the perspective of aesthetic experience (informed by the Cinematology dimension). One can’t be great without the other, and every element in turn must connect with the theme.
To recap our prospective Hanukkah game’s concept so far, we’re considering a game with intergenerational cooperative gameplay for Jewish and non-Jewish children and Jewish parents/relatives with a core experience in the ludological dimension, the general theme of “hope,” and a strong focus on the motifs “anticipation,” “ambition,” and “trust” in the Interactivity design territory.
So what’s the aesthetic experience of our Hanukkah game going to be? To answer this question, tentatively, we will look at the game’s style and sound.
To recapitulate our insights from the first three days of Hanukkah, our Hanukkah game’s theme will be “hope”; its core will fall into the ludological dimension; it will have a complex primary and secondary target audience of Jewish and non-Jewish children and Jewish parents and relatives; and its USP will be intergenerational cooperative gameplay.
Today, we will explore the first design territory for our Hanukkah game, Interactivity. Within the Ludotronics paradigm, Interactivity is informed by game mechanical aspects, i.e., the mechanics and rules of the game, and ludological aspects, i.e., how players interact with the game and with other players. (The other three design territories besides Interactivity, to be visited in the following days, are Plurimediality, Narrativity, and Architectonics.)
Yesterday, on the first day of Hanukkah, we sounded out the historical context of the Hanukkah story, thereby eliminating a historical combat or strategy game. Now we will turn to the meaning or meanings that Hanukkah has acquired over time, from the Talmud until today, to find our game’s theme and its core experience in one of the four dimensions, i.e., Game Mechanics, Ludology, Cinematology, or Narratology.
Usually, everything starts with a game idea. From there, you can proceed, step by step, maybe along the Ludotronics paradigm, until you have a pitchable concept—no matter whether it’s a publisher pitch, an in-house pitch, or a war cry to assemble a crackerjack team for your indie title. Now what if someone—a customer, a publisher—approaches you not with an idea, but with a topic?
Along the eight days of Hanukkah, of which today is the first, let’s sketch a game concept about Hanukkah as an exercise.
Game-based learning comes in many different shapes and styles that encompass everything from dedicated educational content in serious games to training simulations to communicating knowledge about historical eras or events and questions of ethnicity or gender in AAA action-adventure games. Yet, as always, well-meant isn’t necessarily well-done. In the case of AAA and high-quality independent games, it is often the lack of “native informants” during the development process that turns good intentions into public meltdowns.
“Native informant” is a term from post-colonial discourse, particularly as developed by Gayatri Chakravorty Spivak. It indicates the voice of the “other” which always runs the risk of being overwritten by, or coopted and absorbed into, a dominant public discourse. Even the term itself always runs this risk.
There are three topics of the “other” I’d like to touch upon in three consecutive posts: gender, ethnicity, and madness. For this post on gender, let’s look at some well-known examples.
With a native informant, the original portrayal of Hainly Abrams wouldn’t have failed as abysmally as it did in Mass Effect: Andromeda. BioWare’s reaction was laudable, certainly—they reached out to the transgender community and made the appropriate changes. But why didn’t they reach out in the first place? Who, on the team, was comfortable with writing the “other” without listening to their voices? Another egregious example was RimWorld’s early gender algorithms—where, in contrast, the developers’ reaction was anything but laudable. (I might be going out on a limb here, but I don’t think OkCupid data qualifies as a native informant.)
Calling in native informants as consultants is exceptionally useful in more than one respect.
First, obviously, they prevent your team from making outrageous mistakes. In the case of Mass Effect: Andromeda, it was a particular mistake that could be fixed with a patch. But when a character’s actions or even whole chunks of the plot are based on faulty premises, that’s not easily fixed at all. Enter Assassin’s Creed: Odyssey’s DLC “Legacy of the First Blade: Shadow Heritage,” where Cassandra was “hetconned,” i.e., forced into a heterosexual relationship to become a parent.
Then, why would you want to stop at merely not getting it wrong? Native informants will provide you with involving details that will turn cliché attitudes into motivated actions & emotions and transform your cardboard characters into engaging personalities.
Finally, hiring native informants as consultants adds diversity to your team, enriches your game, and broadens your reach beyond your mainstream audience. Toward people, that is, who would gladly buy your game if they see themselves represented in it—in the contexts of visibility, of acceptance, and of role models as drivers of empowerment.
Among the great advantages of game-based learning, at least in well-designed games, is that players are “tested” exclusively on skills, knowledge, understanding, or attitude that they have learned, or should have learned, while playing the game. Certainly, there are exceptions. Games can be badly designed. Or problems of familiarity might arise, as discussed in Ludotronics, when conventions of a certain class of games (misleadingly called “genre”) raise the Skill Threshold for players who are not familiar with them and might also interfere later in test situations. Most of the time, though, players are indeed tested on what the game has taught them, and well-designed “term finals” like level bosses do not simply test the player’s proficiency, but push them even further.
This rhymes beautifully with the rule “you should not grade what you don’t teach” and its extension “if you do teach it, grade it only after you have taught it.” While this sounds like a no-brainer, there are areas in general education with a strong tendency toward breaking this rule, grammar being a well-known example. And then there’s the problem of bad or insufficient teaching. When students, through no fault of their own, have failed to acquire a critical skill or piece of knowledge demanded by the advanced task at hand, how do you grade them? Is it okay, metaphorically speaking, to send students on bicycles to a car race and then punish them for poor lap times? But if you don’t grade them on the basis of what they should have learned, doesn’t that mean lowering the standards? As for the students, they can’t just up and leave badly designed education the way they can stop playing a badly designed game.
Another aspect, already mentioned, is that tests in games not only test what has been taught, but are designed to push player proficiency even further. This is possible, and possible only, because the player is allowed to fail, and generally also allowed to fail as often as necessary. Such tests, moreover, can become very sophisticated, very well-designed. In Sekiro: Shadows Die Twice, you might be able to take down a boss with what you’ve already learned instead of what you’re supposed to be learning while fighting that boss, but you’ll be punished for it later when you desperately need the skill you had been invited to learn. You can read all about this example in Patrick Klepek’s “15 Hours In, Sekiro Gave Me a Midterm Exam That Exposed My Whole Ass,” which is recommended reading.
The second reverse question I love to discuss in my lecture on media psychology/media didactics for game designers asks not what games can bring to the classroom, but what the classroom brings to the table compared to games. Again, it’s about non-specialized public Gymnasien in Germany, so the results are not expected to be particularly invigorating.
What about motor skills, dexterity, agility, hand-eye coordination, reaction times, and so on? What about endurance, persistence, ambition (not in terms of grades), or patience? Here, the classroom has little to offer. Possible contributions are expected to come from physical education, certainly. But—with the exception of specialized boarding schools, or Sportgymnasien—physical education is almost universally despised by students for a whole raft of reasons, and it is not renowned for advancing any of the qualities enumerated above in systematic ways.
What about »Kombinationsfähigkeit« in terms of logical thinking, deduction, and reasoning? What about strategy, tactics, anticipatory thinking (which actions will trigger which events), algorithmic thinking (what will happen in what order or sequence), and similar? Again, the classroom in non-vocational or non-specialized schools has little to offer here, if anything at all.
Finally, what about creativity, ingenuity, resourcefulness, imagination, improvisation, and similar? Some, the lucky ones, have fond memories of art lessons that fostered creativity. But even then: on this palette, creativity is just one color among others. Music lessons have potential too, but—barring specialized schools again—it’s rare to hear students reminisce lovingly about music lessons that systematically fostered any of these qualities.
Now, the curious thing is that serious games and game-based learning projects have a tendency to try and compete precisely with what the classroom does bring to the table for the cognitive domain, notably in its traditional knowledge silos that we call subjects. This, not to mince words, is fairly useless. Serious games, in careful studies against control groups, almost never beat the classroom significantly in terms of learning events, learning time, depth of knowledge, or even knowledge retention—but they come with a long time to market, a stiff price tag, and tend to burn content like a wildfire. Instead, game-based learning should focus on the psychomotor domain, the affective domain, those parts of the cognitive domain that the classroom notoriously neglects, and rigorously unsiloed knowledge from every domain. And if all that can’t be integrated into the classroom because the classroom can’t change or adapt and integrate games and new technologies in general, then let’s build our own classroom, maybe as a massively multilearner online game-based learning environment. The century’s still young.
In my lecture on media psychology/media didactics for game designers, reverse-questioning how school lessons and school curricula relate to technology and games counts among my favorite exercises. For example, instead of asking what technology in general and games in particular can do for the classroom, we ask which technological advances schools have actually adopted since the middle of the twentieth century.
Now, as I am teaching in Germany, and most of my students come from non-specialized public Gymnasien, you can probably guess where this is going. With the rare exception of interactive whiteboard use (roughly one student out of fifteen), the two technological advances properly integrated into regular classroom use in seven decades are [check notes] the pocket calculator and [check notes again] the overhead projector.
Historically, all technological advances that have been proposed for classroom use in Germany, including pocket calculators and TV/VCR sets and computers and cell phones and tablets, and so on, were viciously opposed by teachers, parents, administrators, and politicians alike with an inexhaustible reservoir of claims ranging from the decline of educational integrity (»Kulturverfall«) to wi-fi radiation panic (»Strahlenesoterik«). Today, the copy & paste function especially is viewed as a creeping horror that must never be allowed to make it past the hallowed doorstep of higher education.
Add to that an often desolate financial situation. There are cases where schools can’t afford a decent wi-fi infrastructure, desktop or mobile hardware, software licenses, or, especially, teacher training. But all these problems could be surmounted in principle if it weren’t for the one titanic challenge: the curriculum. Cognitivism has brought us this far, and we certainly shouldn’t abandon it. But it should become part of something new, a curriculum based on what and how we should teach and what and how we should learn in the 21st century. This requires motivational models that are up to the task; we can start with Connectivism and with models like flow and self-determination, both of which feed the Ludotronics motivational model, and advance from there. Technological, social, and other kinds of advances should become deeply integrated into this new curriculum. To turn our information society into a true knowledge society, we must leave the era of age cohorts, classes, repeated years, and silo teaching behind and embrace learning and teaching as a thrilling and, before all else, shared quest of lifelong exploration.
For my very first set of university courses, to navigate monster schedules and sleep deprivation, I employed the time-honored strategy of putting everything that seemed remotely relevant on a load of slides that would bring down an eighteen-wheeler, and then winging it from there. Eventually, of course, one has to find the time to rework and restructure and make every single lecture shine.
Game levels, basically, are very much like lectures. For each, you should ask yourself three basic questions:
What will the players (or students) have learned by the end of the level (or lecture)?
How can you make that an enjoyable experience?
How can achieving their learning goals feel rewarding to them?
Here, to keep it brief, let’s focus on the first question, the learning outcome, which you should structure in ways that make sense. The Ludotronics paradigm applies a substantially modified KSA model with four ingredients: skill (repeatable, observable performance); knowledge (recognizable or recallable facts & procedures); understanding (grasping complex processes & applying knowledge in new and different contexts); and attitude (adapted behavior courtesy of freshly acquired skills, knowledge, or understanding).
You don’t have to stuff all four into every level or lecture, far from it. But you need to keep an eye on all four so you can motivate and inspire through variety toward a dazzling holistic payoff. Along these lines, you can then create fitting experiences and rewards. Motivation to play equals motivation to learn, after all! And the quality of a level or a lecture has nothing to do with the number of assets or the number of slides; it has everything to do with a joyful and rewarding learning experience.
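To make that bookkeeping concrete, here’s a minimal sketch in Python of how one might tag each level’s or lecture’s learning outcomes with the four ingredients and spot which ones a sequence neglects. All names are hypothetical; this is an illustration of the idea, not anything from the Ludotronics book.

```python
# A minimal sketch (hypothetical names, not from the Ludotronics book) of tagging
# learning outcomes per level or lecture with the four KSA ingredients and
# checking for variety across the whole sequence.
from dataclasses import dataclass
from enum import Enum, auto

class Ingredient(Enum):
    SKILL = auto()          # repeatable, observable performance
    KNOWLEDGE = auto()      # recognizable or recallable facts & procedures
    UNDERSTANDING = auto()  # applying knowledge in new and different contexts
    ATTITUDE = auto()       # adapted behavior from what was newly acquired

@dataclass
class Outcome:
    description: str
    ingredient: Ingredient

@dataclass
class Level:  # works just as well for a lecture
    title: str
    outcomes: list[Outcome]

def missing_ingredients(levels: list[Level]) -> set[Ingredient]:
    """Return the ingredients that no level in the sequence touches at all."""
    covered = {outcome.ingredient for level in levels for outcome in level.outcomes}
    return set(Ingredient) - covered

course = [
    Level("Dreidel basics", [Outcome("spin and read the dreidel", Ingredient.SKILL)]),
    Level("Why eight nights?", [Outcome("retell the story of the oil", Ingredient.KNOWLEDGE)]),
]
# Prints UNDERSTANDING and ATTITUDE: neither has an outcome yet, so plan later levels accordingly.
print(missing_ingredients(course))
```

The same structure works for a lecture series: swap levels for sessions, and the variety check stays identical.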
Ludotronics: A Comprehensive Game Design Methodology From First Ideas to Spectacular Pitches and Proposals
It’s a conceptually complete paradigm and design methodology for intermediate and advanced game designers, from coming up with a raw idea for a game to greenlighting a refined version of that idea for development.
Enjoy!
Note: For three years, from March 2019 through April 2022, a high-quality e-book version of Ludotronics was on sale at DriveThruRPG. After that, I pulled this edition from sale, as it will be succeeded by a print edition scheduled for 2023.
This is my fourth paper, certainly not my last, but the last paper I will prepublish before my book comes out. (For those of you who haven’t followed along or have forgotten all about it, three years ago I committed to writing one academic paper per year during book research.)
Link barrage first.
My initial paper from 2015 on dialogic speech in video games is here. My second paper from 2016 on sensory design for video games is here. My third paper from last year about emotional design for video games is here. Finally, my book website ludotronics.net is here. The book will be released later this year (or in early 2019, at the latest). I decided to publish the electronic book with the help of DriveThruRPG. It’s a site I love, conditions are decent, and Amazon is not an option (re: format, royalties).
Passage is structure, not story. It’s not about challenges or plot points, it’s about rules and mood. As a passage, it should serve as an introduction when the game is played for the first time, and from then on, well-chosen details of it should serve as a reminder every single time until it is finished.
A brief essay on passages in game design that I wrote for my university’s news room page.
Hooray, my third paper! As I wrote here two years ago, I committed myself to writing one paper each year while I’m researching and writing my book on game design methodologies. Which has a full title now: The Ludotronics Game Design Methodology: From First Ideas to Spectacular Pitches and Proposals. Which, if all goes well, I will publish next year.
The general principle to put players in control of their flow channels is to design the game’s challenge structure in a way that concurrently escalates risk, relief, and reward, and not just over time, but “stacked” at any given gameplay moment.
A brief essay on “flow” in game design that I wrote for my university’s news room page.
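As a back-of-the-envelope illustration of that principle, here is a toy sketch: every beat carries risk, relief, and reward at once, and all three ramp up together rather than taking turns. The names and numbers are hypothetical and only stand in for whatever a real challenge structure would use.

```python
# A toy illustration (hypothetical names and numbers) of "stacked" escalation:
# each beat carries risk, relief, and reward concurrently, and all three
# escalate together over the course of a level rather than one after another.
from dataclasses import dataclass

@dataclass
class Beat:
    risk: float    # e.g., damage potential or chance of failure
    relief: float  # e.g., health drops, safe spots, breathing space
    reward: float  # e.g., loot quality, ability unlocks, story payoff

def stacked_escalation(beats: int, ramp: float = 0.25) -> list[Beat]:
    """Every beat stacks all three elements; the ramp raises them in lockstep."""
    return [Beat(risk=1 + i * ramp, relief=1 + i * ramp, reward=1 + i * ramp)
            for i in range(beats)]

for number, beat in enumerate(stacked_escalation(4), start=1):
    print(f"beat {number}: risk={beat.risk:.2f} relief={beat.relief:.2f} reward={beat.reward:.2f}")
```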
As I wrote last year, I committed myself to generating one academic paper per year from the research I’m doing for my book on game design methodologies, Ludotronics. (The website ludotronics.net is up, but there’s not much to see yet. The book won’t be released before 2018, and that’s when the full website will come alive too.)
Early this year, I began conducting research into game design methodologies for a book I’m going to write. Its title, or at least part of its title, will be Ludotronics. Trademark’s filed, and the domain is up at ludotronics.net. Yet, it’s only a placeholder page right now, and will be for a very long time. But when it’s done, the whole book will also be freely available on that website under a Creative Commons license.
Taking my research plan and my other three lives into account, writing this book will take me, by my estimate, about three to four years. So if all goes well, that’s a release date in 2018, probably late 2018.
But also …
I committed myself to generating one academic paper per year from the research I’m doing for my book. This is the first paper (PDF, direct link): “A Functional Model for Dialogic Speech in Video Game Design”*. For a number of reasons, I decided not to pitch these papers to academic journals before my book research is complete, maybe not even until after the book release. (I know, nothing’s ever complete in science.) So take these academic papers as what they are, as electronic preprints. Enjoy!
*This paper has now been uploaded to researchgate.net with some minor typo corrections.
Recently, I asked about the possible effects of “one-size-fits-all educational methodology with predetermined curricula and standardized testing” and, especially, “conditioned learning of siloed educational subjects detached from personal experience in large classes solely determined by year of birth.” That, of course, was purely a rhetorical question, as these effects are clearly visible to everybody. Rarely do schools instill in us a deep and abiding love for learning and for the subjects taught, the historical events envisioned, the works of literature read, the math problems solved. Rarely do we fondly remember our school days and sentimentalize them into nostalgic yearnings for a lost pleasure. In my 2013 inaugural lecture “Knowledge Attacks!: Storyfication, Gamification, and Futurification for Learning Experiences and Experiential Learning in the 21st Century,” which, alas, I still haven’t managed to put online, I developed two sample scenarios to break down educational silos in our schools.
The first sample scenario took off from the topic of “calculus,” typically siloed and restricted to “math” courses. Why don’t we confront students instead with the exact same problems Newton and Leibniz faced and connect the invention of calculus with learning content from its historical context: Newton’s Principia and the Great Plague of London; the English civil war and the Bill of Rights; the baby and toddler years of the Scientific Method and the principles of rational inquiry; the Great Fire of London and modern city planning; the formation of the United Kingdom and colonial power struggles; the journalist, author, and spy Daniel Defoe and Protestant work ethic; the transition from Baroque to Rococo in painting and music; the Newtonian telescope and Newton’s laws of motion; the dissociation of physics and metaphysics. Interim summary: math, physics, philosophy, religion, English language, history, geography, music, art, literature, astronomy. Oh, and sports: the evolution of cricket during this time in post-restoration England as the origin of professional team sport.
Or “democracy,” aspects of which are usually siloed in history courses and/or a variety of elective courses like “politics” or “citizenship.” What if students were given the task, perhaps in a time-travel setup, to convince the Athenian assembly in 482 B.C.E. via political maneuvering to distribute the new-found silver seam’s wealth among the Athenian citizens instead of following Themistocles’s proposal to build a naval fleet? So that the Battle of Salamis would never happen, the Greek city states would be swallowed by the Persian empire, neither the Roman republic nor large parts of the world would become hellenized, and the European Renaissance as we know it would never come to be? The potential in terms of learning content: the dynamics of democracy and political maneuvering; the history of classical antiquity; general economics, trade, and the economics of coins and currencies; the structure and ramifications of democratic systems built on slave economies; Greek language; rhetoric; comedy and tragedy; myth; Herodotus and the beginnings and nature of historical writing; geometry; the turn from pre-Socratic to Socratic philosophy; sculpture and architecture; ship building; astronomy; geography; the Olympic Games. Strategy, probably—especially naval strategy if the plan to change history fails: Salamis from the point of view of Themistocles on the side of the Greeks and from the point of view of Artemisia—the skillful and clear-sighted commander of a contingent of allied forces in Xerxes’s fleet—on the side of the Persians. Infinite possibilities.
Now, what’s being introduced in Finland right now as “cross-subject topics” and “phenomenon-based teaching” looks quite similar in principle to my developing concept of “scenario learning.” As the Independent’s headline puts it, “Subjects Scrapped and Replaced with ‘Topics’ as Country Reforms Its Education System”:
Finland is about to embark on one of the most radical education reform programmes ever undertaken by a nation state—scrapping traditional “teaching by subject” in favour of “teaching by topic.” […] Subject-specific lessons—an hour of history in the morning, an hour of geography in the afternoon—are already being phased out for 16-year-olds in the city’s upper schools. They are being replaced by what the Finns call “phenomenon” teaching—or teaching by topic. For instance, a teenager studying a vocational course might take “cafeteria services” lessons, which would include elements of maths, languages (to help serve foreign customers), writing skills and communication skills.
More academic pupils would be taught cross-subject topics such as the European Union—which would merge elements of economics, history (of the countries involved), languages and geography.
This sounds exciting already, but there’s more:
There are other changes too, not least to the traditional format that sees rows of pupils sitting passively in front of their teacher, listening to lessons or waiting to be questioned. Instead there will be a more collaborative approach, with pupils working in smaller groups to solve problems while improving their communication skills.
Many teachers, of course, aren’t exactly thrilled. But a “co-teaching” approach to lesson planning with input from more than one subject specialist has also been introduced; participating teachers receive a “small top-up in salary,” and, most importantly, “about 70 per cent of the city’s high school teachers have now been trained in adopting the new approach.”
And even that’s not the end of it. A game-based learning approach, upon which my scenario learning concept is largely built, will also be introduced to Finland’s schools (emphases mine):
Meanwhile, the pre-school sector is also embracing change through an innovative project, the Playful Learning Centre, which is engaged in discussions with the computer games industry about how it could help introduce a more “playful” learning approach to younger children.
“We would like to make Finland the leading country in terms of playful solutions to children’s learning,” said Olavi Mentanen, director of the PLC project.
Finally, from the case studies:
We come across children playing chess in a corridor and a game being played whereby children rush around the corridors collecting information about different parts of Africa. Ms. Jaatinen describes what is going on as “joyful learning.” She wants more collaboration and communication between pupils to allow them to develop their creative thinking skills.
What I feel now is a powerful urge to immediately move to Finland.
The first #ECGBL2014 presentation I attended was “Experimenting on How to Create a Sustainable Gamified Learning Design That Supports Adult Students When Learning Through Designing Learning Games” (Source) by Charlotte Lærke Weitze, PhD Fellow, Department of Education, Learning and Philosophy, Aalborg University Copenhagen, Denmark.
Weitze’s paper relates to a double challenge so crucial for designing game-based learning solutions that you’ll see me coming back to it on this blog time and again. One side of this challenge is students burning through content way faster than educators and game designers are able to produce it; you just can’t keep up. The other side is the challenge of practicing complex, non-siloed learning content within game-based learning environments that are incompatible with practices such as rote learning, flash cards, repeat training, or standardized testing.
In other words, it’s non-educational games’ “replay value” challenge on screaming steroids. Among the most promising solutions is mapping the “learning by teaching” approach to designing content, i.e., having students design fresh game content as a learning-by-teaching exercise. Obviously, both the effectiveness and the efficiency of this approach depend on the number of students that are part of the system, where output will take off exponentially only after a certain threshold has been reached, both numerically and by way of network effects.
Of course there are pitfalls. One obvious pitfall is that students—with the exception of game design students, of course—are students, not game designers. And while, under more reasonable approaches, they don’t have to design fully functional games by themselves with game mechanics, rules, and everything, it’s still hard to design entertaining, i.e., “playable” content for games without at least some game design knowledge from various fields—prominently balancing, guidance strategies for player actions, or interactive storytelling. This was one of the reasons why, regrettably, Weitze’s experiment, where students also had to build the content with software, fell short.
As for the experimental setup (which employs the terminology of James Paul Gee’s more interaction-/discourse-oriented differentiation between little “g” games and big “G” Games), students design and build little “g” games for a digital platform within the framework of a “gamified learning experience,” the big “G” Game, which includes embedding learning goals and evaluating learning success. The little “g” games in this case are built for other students around cross-disciplinary learning content from the fields of history, religion, and social studies in a three-step process consisting of “concept development, introduction and experiments with the digital game design software (GameSalad), and digital game design” (4). Moreover, this whole development process (within the big “G” Game) had to be designed in such a way as to be motivating and engaging for the students on the one hand, and to yield evaluable data as to its motivational impact on individual learning successes on the other.
Experiences from this experiment, unsurprisingly, were described in the presentation as “mixed”—which is academic parlance for “did not fucking work as planned at all.” The problems with this setup are of course manifold and among these, “focus” is the elephant in the room.
There are far too many layers, especially for an experiment: the gamification of game design processes for learning purposes (the big “G” Game) and the design thereof with in-built learning goals and evaluation strategies; below that, then, the game design processes for the little “g” games including, again, learning goals and evaluation strategies. In other words, the students were supposed to be learning inside the big “G” Game by designing little “g” games in groups with which other students in turn would be able to “learn from playing the games and thus gain knowledge, skills and competence while playing” (3)—all that for cross-disciplinary content and executed in a competitive manner (the big “G” Game) by 17 students and three teachers, none of whom had sufficient game design experience to start with, and under severe time constraints of three workshop sessions of four hours each.
Besides “focus,” the second elephant in the room is “ambition,” which the paper acknowledges as such:
In the current experiment the overall game continued over the course of three four‐hour‐long workshops. Though this was a long time to spend on an experiment, curriculum‐wise, for the upper‐secondary students, it is very little time when both teachers and students are novices in game‐design. (3)
Plus:
This is an ambitious goal, since a good learning‐game‐play is difficult to achieve even for trained learning game designers and instructors. (3)
And the nail in the coffin, again unsurprisingly:
At this point [the second workshop] they were asked to start considering how to create their game concept in a digital version. These tasks were overwhelming and off‐putting for some of the students to such a degree that they almost refused to continue. This was a big change in their motivation to continue in the big Game and thereby the students learning process was hindered as well. (7)
Finally, what also generated palpable problems during this experiment was the competitive nature of the big “G” Game. While the paper goes to some lengths to defend this setup (competition between groups vs. collaboration within groups), I don’t find this approach to game-based learning convincing. Indeed I think that for game-based learning this approach—competition on the macro-level, collaboration on the micro- or group-level—has it upside down: that’s what we already have, everywhere. Progress, in contrast, would be to collaborate on the macro-level with stimulating, non-vital competitive elements on the micro- or group level.
Game-based learning can provide us with the tools to learn and create collaboratively, and to teach us to learn and create collaboratively, for sustained lifelong learning experiences. Competition can and should be involved, but as a stimulating part of a much greater experience, not the other way round. The other way round—where collaboration has to give way to competition as soon as things threaten to become important—is exactly what game-based learning has the potential to overturn and transcend in the long run.
Last year’s ECGBL 2014 (8th European Conference for Game-Based Learning), October 9–10, in Berlin, which included the satellite conference SGDA 2014 (5th International Conference on Serious Games Development & Applications), was located at Campus Wilhelminenhof in Berlin Oberschöneweide and hosted by the University of Applied Sciences for Engineering and Economics HTW Berlin. The conference was opened by Dr.-Ing. Carsten Busch, program chair and professor of media economics/media information technology, followed by an introduction to the HTW Berlin by Dr.-Ing. Helen Leemhuis, faculty dean and professor of engineering management.
Then, the keynote. Oh well.
How to put this politely. To be sure, there are occasions and circumstances where it is a good idea to engage with stakeholders outside academia by inviting industry representatives to academic conferences as keynote speakers, but in this case it rather wasn’t.
The invited speaker was Dr. jur. Maximilian Schenk, formerly “Director Operations and member of the management team” of the German VZ network (which has gone down in history for, among other things, setting the bar for future copy-&-paste operations spectacularly high by copying Facebook literally wholesale, down to style sheets and code files named fbook.css or poke.php), at present managing director of the BIU (Bundesverband interaktive Unterhaltungssoftware / German Trade Association of Interactive Entertainment Software). He addressed his audience of highly qualified postgraduate, postdoc, and tenured veteran researchers from the fields of game-based learning and serious games across a wide range of disciplines verbatim with:
You are the specialists so I won’t go into your terrain, so instead I will tell you something about the fundamentals of serious games that you have to understand to know what making serious games is all about.
During the stupefied silence that followed, Maximilian Schenk acquainted the audience with the BIU and its sundry activities, explained how the traditionally bad image of gaming in Germany, including serious games, was changing as it had been found out that “games make people smarter” (evidence: Spiegel), pontificated about “games as a medium” from “tent fires, maybe 10,000 years from now” to today’s video games, and ended his thankfully brief keynote with an enthusiastic barrage of growth forecasts relating to game-based learning/serious games industries whose outrageously optimistic numbers were inversely proportional to the amount of actual evidence corroborating them.
While appreciated in general, the keynote’s briefness had rather obviously taken the organizers by surprise, and Carsten Busch jumped in to introduce, with all the little tell-tale signs of hurried improvisation, the Swedish Condom08 gamification project. This presentation—its general drift into inappropriately didactic terrain notwithstanding (“What did you learn?”; “What technologies were used?”)—turned out to be enjoyable and stimulating.
And so the conference began. Follow-up posts are in the pipeline.
Something has to change. I have 3 games to get out the door–Revolution 60 PC, which is almost done, Cupcake Crisis and Project: Gogo. My team needs me leading them, not fighting Gamergate.
I’m not sure what the answer is. I might start a Patreon for an assistant to help with all these Gamergate tasks. I might just start doing less. But, I do know I’m going to lead GSX more this week and fight Gamergate less.
I got into the game industry to make games. And it’s time for me to get back to it.
If I were being honest—I’m more than a little resentful. The vast majority of our male-dominated games press wrote a single piece condemning Gamergate and has been radio silent ever since. The publishers are silent, the console makers are silent. And so, Anita, Zoe, Randi and myself are out here doing the majority of the work, while everyone whines about wanting it to be over.
Meanwhile, the rest of the industry is doing what they do best, which is nothing.
#GamerGate isn’t over. Actually, GamerGate might never be over. Or at least won’t be for a very long time. And it can’t, because institutionalized misogyny—very much like its sibling institutionalized racism, the consequences of which are as visible and as lethal as ever—is, indeed, about institutions and institutionalized practices. GamerGate is not about “issues.” Neither legitimate issues nor pseudo issues—instead, the whole GamerGate edifice is built upon maintaining systemic privilege within our predominant power structures, structures that are completely transparent and therefore invisible to those who enjoy these privileges, but starkly reflected and often painfully visible to those who don’t.
This asymmetry, in turn, engenders very different flavors of “ideology,” which complicates things considerably.
Everything that’s enmeshed in these transparent, invisible power structures will be perceived by those endowed with privilege as “natural”—race relations, gender relations, how we are educated, how we work, how we love, how we fight, how we die. In contrast, every voice that tries to question or change these invisible power structures—toward visibility, respect, equal opportunity, whatever it is—will be instantly “marked” and highly visible and therefore inevitably be perceived as “ideological” against an otherwise “natural” background, much like a cloud on a spotless, blue sky.
This perceived naturalness, however, is a kind of “deep ideology”—I’m loosely following Michel Foucault here, and Judith Butler—which is glaringly obvious as such for the non-privileged and disenfranchised. And this “deep ideology” in turn—now loosely following Niklas Luhmann here (I told you it’ll complicate things!)—serves recursively as both cause and effect with respect to the underlying power structures in that autopoietic system (i.e., a system where the functional product of its cooperating parts is the organization that creates these parts) we call “society.”
So if you enjoy white, straight, male, cis, able-bodied, or mentally unimpaired privilege, you need to develop a “systemic awareness” which lowers the transparency and makes the invisible visible, and then we all might be able to actually talk. It’s certainly not easy to develop this awareness all by yourself from scratch, but you don’t need to—there are people who walk amongst you who, if you ask in good faith, are sincerely willing to help! So especially if you’re a straight white dude who prefers to remain ignorant about the relative privileges you enjoy, and there is always a point where remaining ignorant becomes an active decision indeed, pardon me, but henceforth your name will be known as Douchebag.
Let’s translate all this into gaming terms. John Scalzi once put it like this: for the privileged, life is like playing an MMORPG on the lowest difficulty setting and remaining blissfully unaware of it.
Still too abstract? Here then, let me help you out with this one:
That’s why GamerGate is not an “issue” we can “solve”—it’s a systemic problem we’re confronted with every day on countless occasions. Systemic problems take many forms, in the games industry and in society at large. Whether, in the case of misogyny, it’s death, rape, and terror threats against female developers and critics and cosplayers and every female voice that’s raised in traditionally male-dominated contexts, or whether, in the case of racism, it’s the constant terror you experience as a person of color even if you’re not getting shot six times with your hands up and unarmed or strangled to death on camera by law enforcement officers who act with impunity: systemic problems have many faces, and they’re usually much uglier than you think they are.
And that brings me to Cultural Criticism, the very purpose of which is to lower the system’s transparency, to denaturalize what appears natural, to make the invisible visible, to expose structural privilege. What Anita Sarkeesian does is cultural criticism, and the mind-boggling amount of threats and invective directed at her from within the “gaming community,” enriched with massive amounts of hate-filled misogyny and a generous helping of antisemitism, was—and is—terrifying. If you’ve followed her Twitter mentions—or those of Leigh Alexander or Brianna Wu—you couldn’t but stare in utter disbelief at the misogynist hordes that popped out of the ground as if spawned from dragon’s teeth. And for all of us who self-identified as male “gamers” up to this point in time, in a way this also meant that, suddenly, these misogynist hordes were us.
But not only “us gamers”—from early on, I noticed many familiar names from the “atheist community” joining GamerGate’s ranks—actually, the same assholes who want to keep atheism “pure” without contaminating it with social issues, who hurl Social Justice Warrior at you as an insult, and who call cultural critics pointing at their straight white male privileges “bullies.” Sound familiar? Exactly.
And it’s the same familiar idiots who accuse you of misandry—“How does being a misandrist make you better than a misogynist?” blah blah—while a) taking ironic misandry literally (taking things literally where you shouldn’t is a well-oiled rhetorical ploy) and b) pretending to not see that all this is not at all about individual likes and dislikes, but about a huge systemic imbalance where misogyny is baked right into the distribution and availability of resources while misandry is not. In society, “misandry” does not wield any perceptible power, and it’s a non-issue if there ever was one. But, as a reminder, misogyny isn’t an “issue” either—it’s part of the systemic problem I outlined above, and these kinds of problems are not solvable without changing the underlying structure, sometimes wholesale.
Now Ethics in Journalism, in contrast, is indeed an “issue.” And that’s why GamerGate supporters are profoundly uninterested in it at the end of the day. Does anybody really have to point out the great number of problems endemic to video game journalism? I think not. These problems have been bugging us for years and years, and not a week goes by without something happening that reminds us. Since when, then, are Ethics in Video Game Journalism’s biggest problems female indie developers working or not working on Call of Duty clones and/or having a video game journalist boyfriend who never wrote a single line about their games, or Kickstarter-backed female critics, or Patreon-supported female artists?
Are these really our problems in video game journalism?
Witch King: "No man can kill me." Eowyn: "I am no man." [Stabs him] Witch King: [Shrieks] "GAMERGATE IS ABOUT JOURNALISTIC ETHICS!" [Dies]
Witch King: “No man can kill me.”
Eowyn: “I am no man.” [Stabs him]
Witch King: [Shrieks] “GAMERGATE IS ABOUT JOURNALISTIC ETHICS!” [Dies] @sween 4:08 PM – 20 Oct 2014
But! Wouldn’t it be possible, however remotely, that GamerGate—which, to remind you, was kicked from 4chan, of all places; whose merry members laughed it off when confronted with their endemic combination of death threats and doxxing because “nobody’s been killed yet!”; and whose leading figures are really quite endearing characters—could be rescued by funneling it into the collecting tank of its own pretense, into a true consumer movement?
No. Here’s why:
The entire movement is like dragging a net through sewage, picking up every vile sexist/homophobic consumer. It’s unsalvageable. @Spacekatgal 4:21 PM – 15 Nov 2014
GamerGate is about as salvageable as the MRA camp or those who take their political and aesthetic inspirations from tinfoil agendas and Stürmer illustrations. Talking of which: it was breathtaking to see how GamerGate acolytes shamelessly deployed Nazi imagery, especially against Anita Sarkeesian, while at the same time insulting feminists as “femiNazis” on a regular basis, all without the slightest cognitive dissonance. But there’s more—and this is a feeling I truly share with Elizabeth Simins: despite many Holocaust comparisons floating around, it was GamerGate’s illustrated self-image (in several variations) of intrepidly resisting the “media conspiracy” tanks on Tiananmen Square that really knocked me off my feet.
Finally, are there “two sides” to this story? Well, yes—in the very sense that there are “two sides” to the story when it comes to the dangers of tobacco use or to climate change. Exactly like in the latter cases, GamerGate’s spin is a true “manufactroversy,” a manufactured controversy.
Which this tweet illustrates best:
Should women be allowed to create and play video games without fear of being murdered in real life? Let’s hear both sides of the story. @Hello_Tailor 5:12 AM – 15 Oct 2014
And yes, of course that tweet is sarcastic, bitterly so. But at that point things had advanced to a state where at least some people were confused enough to read and react to it literally.
Which raises the question: if #GamerGate isn’t over, might never be over, or at least won’t be for a very long time because it isn’t about “issues” but about invisible systemic privilege, what should we be doing next?
Well—I think we should keep our defenses up, collectively protect vulnerable colleagues and friends, and strike back swiftly and decisively if need be, but we should not let this toxic masculinity-fueled GamerGate nonsense eat up our time and energy any longer. Instead, let’s put our time and energy into more valuable endeavors that might, in the long run, make a difference.
For us, actually, now is a better time than ever for developing great games that can make a difference.
The game “Actually, It’s About Ethics in Journalism!” pits the player against hordes of badly coded Turing bots set up to propagate a fictitious media conspiracy.
The challenge is how long the player can either argue against, or ignore, an unrelenting onslaught of non-sequiturs and escalating threats and invective before her brain melts or she is driven from home.
This game has been released on Twitter, and you can play it for free!
It’s still in beta so here’s your invite: sign up with the key phrase #GamerGate and you’re good to go. Please note: especially if you are a woman, you might not be able to sign out again. Ever.
After attending the European Conference on Game-Based Learning in Berlin last week, I was looking forward to writing about exciting papers and newly-won friends (and a few scientific lemons, to be sure). But with what’s still going on, unabated, under the false flag of the #GamerGate hashtag, business as usual isn’t an option. Chattering away unperturbed about how games make you smarter, drive innovation, are great educational tools, have a bright economic future, and whatnot, while ignoring the disgusting reality and real-life consequences of a spectacularly vicious misogynist attack on our co-gamers makes you nothing less than an accomplice—just ask your history teacher, or possibly your (grand-)parents, how these mechanisms work. And I’m especially looking at you, Intel. [Annotation: Intel later made up for it big time.]
And lo and behold, it isn’t just the “gaming community” where this kind of hate cancer has erupted in our midst lately, so I can take two paragraphs from PZ’s “Sunday Sacrilege” post and apply them here by changing four words, marked in italics:
There’s also a really low bar set here. Valuing diversity—the idea that the gaming community should be equally welcoming to all races and sexes—and valuing equality—that everyone in that community should have the same status—are such basic ideas that it’s shocking that anyone could regard their promotion as a sign of a corrupting conspiracy by Social Justice Warriors. Who the fuck would argue with those ideas? Virtually no one. Definitely no one that we would want to accommodate in the gaming community.
Demanding that part of the responsibility of being a gamer should also mean being a decent human being who wants to build functional, useful communities doesn’t sound like a particularly onerous expectation to me. Of course, what that also means is that the games industry needs to broaden its goals to serve a larger proportion of the population.
So that’s what we have to demand and defend now? In the one-and-twenty? This is just incredible.
I expect everybody from the scientific community to take a stand against #GamerGate, to not look away. I expect important gaming sites and gaming news sites to resist getting bullied into silence, and I most certainly expect the big players in the video game industry to set course toward diversity and equality, both for their companies and for their products. Because, as I already quoted in a previous post, if you’re a big player in the games industry you are very probably, and that applies to #GamerGate as well, part of the problem: “When your leadership isn’t gender-balanced, it’s tough to have a balanced customer base.”
* I’d like to thank Elizabeth Simins for providing this article’s headline for free.
In most video games most of the time, non-player characters are the meat in the player character’s power fantasy sandwich. It might taste great and satisfyingly fire up the player’s neural tastebuds, but there’s something fundamentally wrong with it: what should be virtual human beings are pure objects instead, designed as a comprehensive toolbox for emotional and cognitive manipulation. “Good” NPCs suffer horrible fates to provide the player character with anger and motivation, “bad” NPCs suffer horrible fates to provide the player character with entertainment and moral exemption.
Of course, this is not at all native to video games. It’s rampant in every other medium, including literature—from lowly entertainment thrillers the likes of Peter Benchley’s Jaws, skillfully dissected by Wayne C. Booth in The Company We Keep: An Ethics of Fiction, to the lofty heights of highbrow prose.
When Sindbad tells the tale of his next-to-last voyage to his guests in Barth’s The Last Voyage of Somebody the Sailor, the narrator paraphrases Sindbad’s exposition in a rather revealing way: “He makes it to shore, as always, this time with a handful of others, whose next job in his story is to die and leave him the sole survivor. (11)” A treatment which confirms the old adage that it is always a good thing to be the protagonist in one’s story—a motif Barth also plays on in the embedded “Story of Jaydā the Jewel of Cairo” or in his second novel The End of the Road. Barth, all things considered, does not seem particularly perturbed by casting aside his supporting cast in general and his castaways in particular, an assumption supported by his opinions on the subject articulated in his essay “A Body of Words” from Further Fridays[.]
Killing off or otherwise utilizing purely fictional characters for the sake of the story or the sake of one’s argument might, after all, not be a completely innocent endeavor.
How we handle NPCs in games is at the core of Austin Walker’s terrific post “Real Human Beings: Shadow of Mordor, Watch Dogs and the New NPC” at Paste magazine. But it goes much deeper than that. Walker’s post touches on how a “new generation” of NPCs, a promising possible remedy, instead exacerbates the problem through mainstream story patterns; on the seemingly ineradicable use of binary oppositions that still tailor stories to be experienced from the “natural” default perspective of the White Western Male; and on the absurd, disheartening lopsidedness of the player character’s agency arsenal:
I can’t touch anyone.
This has been bugging me since I started playing Watch Dogs. When I see the man playing trumpet at the park, I can’t tip him. When I hear that someone’s father has cancer, I can’t transfer money into their account—though I can drain their already meager savings further.
And now, these crying people, I can’t hug them. Not that I should—not that Aiden Pearce should be in this space at all. But I am, and I want to hug them. I want that so much more than the ability to do harm, but it’s all I can do.
But what about educational games? Surely, these must be different! After all, NPCs in educational games are rarely designed to be killed but to be talked to, to be helped out, to be cooperated with. Yet, I would argue that most educational games suffer from the exact same problem—when virtual people are means to an end, it doesn’t matter whether it’s an admirable end or a reprehensible end. As long as these virtual people’s only function is to provide students with a learning environment and behavioral incentives, educational games are only superficially different from Shadow of Mordor or Watch Dogs.
Surely, though, the “cultural” perspective is vastly improved in educational games? Well, we’ve come a long way, but—no, not necessarily. Case in point: The Radix Endeavor, a STEM MMO in development from MIT’s Education Arcade and Filament Games, supported by the Bill & Melinda Gates Foundation. The players enter the land, or island, of “Ysola” and become part of an underground movement to secretly conduct scientific research, which is forbidden, to free Ysola from the tyranny of the “evil Obfuscati.”
The key to making a game engaging to students is a strong narrative. “What’s important is to take that engaging narrative and that incentive system and put some stakes into the world to keep it feeling like an engaging environment and a place that students really want to be,” said Susannah Gordon-Messer, education content manager for MIT’s Education Arcade. In Radix, the player’s task is to help citizens of a fictional earth-like world gain knowledge about math and science, a privilege denied by the land’s rulers.
You can certainly see where this is going.
And indeed, while the player characters might or might not be native inhabitants of Ysola (the descriptions aren’t too clear on this point), it’s easy to figure out from the marketing shtick above and—unmistakably, no prior deconstruction experience required—from the Radix Trailer voice-over descriptions such as “interact with Ysola natives and see how you can help them on their quest to knowledge” (00:28), “make use of every resource to help the natives better understand their world” (01:04), or “You are not alone on this endeavor! Join with many other players to help Ysola natives!” (01:40) that the game stumbles flailingly into the familiar trap of a cultural perspective where nobody understands and solves the problems of an exotic people better than the Western visitors.
To wind it up, a tentative forecast from Austin Walker’s post—you should go and read it now in full—of how NPCs could be designed instead:
In a recent episode of the podcast Three Moves Ahead, guest Chris Remo opines about how Jordan Mechner’s The Last Express communicated the lives of its NPCs, who went about their own schedules, had their own conversations, and paid little attention to the player’s motivations. He says that the game gave “glimpses of other people’s interior lives without regard for how they may relate to the player’s.”
This is a beautiful thing that we often forget games can do.
This week, I’m off to the ECGBL 2014 in Berlin, including the SGDA 2014 satellite conference.
Not sure if I will have time for blogging but if not, bear with me—there’ll be no shortage of posts about conference papers soon.
If anyone reading this blog also happens to attend this year’s ECGBL and wants to connect, check betweendrafts.com/hello-world for every possible channel to get in touch.
The negative motivational potential of programming textbooks and tutorials is second only to that of how we teach math. And while visual “learning by doing” systems like Khan Academy’s Computer Programming online course seem like progress, they’re sugarcoating the problem instead of providing a solution.
Bret Victor, two years ago, in his eye-opening essay “Learnable Programming”:
We often think of a programming environment or language in terms of its features—this one “has code folding”, that one “has type inference”. This is like thinking about a book in terms of its words—this book has a “fortuitous”, that one has a “munificent”. What matters is not individual words, but how the words together convey a message.
Everything else—a must-read—follows from there toward building a mental model, a concept based on an interesting premise: “A programming system has two parts. The programming ‘environment’ is the part that’s installed on the computer. The programming ‘language’ is the part that’s installed in the programmer’s head.” The inspiration to build on this premise, as Victor remarks, came from Will Wright’s thoughts on interactive design in “Sims, BattleBots, Cellular Automata God and Go: A Conversation with Will Wright” by Celia Pearce:
So what we’re trying to do as designers is build up these mental models in the player. The computer is just an incremental step, an intermediate model to the model in the player’s head. The player has to be able to bootstrap themselves into understanding that model. You’ve got this elaborate system with thousands of variables, and you can’t just dump it on the user or else they’re totally lost. So we usually try to think in terms of, what’s a simpler metaphor that somebody can approach this with? What’s the simplest mental model that you can walk up to one of these games and start playing it, and at least understand the basics? Now it might be the wrong model, but it still has to bootstrap into your learning process. So for most of our games, there’s some overt metaphor that allows you to approach the simulation. (Game Studies: The International Journal of Computer Game Research Vol.2 Issue 1, July 2002)
That, of course, holds important implications for game-based learning design—not just for teaching programming, but that’s a particularly obvious example. In game-based learning design, bootstrapping must take place both at the system level and the content level: the GBL model must be designed in such a way that you “can walk up to it and start playing,” and the same thing applies to how the learning content is designed so that the player “can walk up to it and start learning.” (A fine example of Hayden White’s The Content of the Form principle at work, incidentally.)
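To make the bootstrapping idea a bit more concrete, here’s a deliberately tiny sketch in Python—my own illustration, not Bret Victor’s environment or any existing tool: an environment a learner can “walk up to and start playing” would expose the program’s state after every single step, so the mental model gets built by watching, not by decoding.

```python
# Toy illustration only (not Victor's system, not a real teaching tool):
# run a sequence of small steps and show the program's state after each one,
# so the learner can watch the mental model being built.

def trace(steps):
    """Execute (description, action) pairs and print the state after each step."""
    state = {"apples": 0}
    for description, action in steps:
        action(state)
        print(f"{description:<28} -> {state}")

trace([
    ("start with an empty basket", lambda s: s.update(apples=0)),
    ("pick three apples",          lambda s: s.update(apples=s["apples"] + 3)),
    ("give one away",              lambda s: s.update(apples=s["apples"] - 1)),
])
```

The code itself is beside the point; what matters is that the state never goes invisible, which is the “overt metaphor” Wright talks about, applied to programming.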
As of now, there is no shortage of games that try to teach programming to kids, but just browsing the blurbs opens jar after jar crawling with inadequacies, to put it mildly. Games aimed at teens tend to float belly up the next time you check (yes, I’m looking at you, CodeHero), or remain perpetually promising but unfinished like Dre’s Detox. And you won’t find anything remotely suitable for adults or seniors.
Obviously, if we managed to create games that teach how to code along the lines imagined by Bret Victor, we’d create new generations of coders who could put to good use not only the principles they’ve learned, but the principles with which they’ve learned what they’ve learned, to create great game-based learning designs and experiences for the future.
Jebus, but I hate that poor excuse for an apology. It happens all the time; someone says something stupid and wrong, and instead of saying, “I was wrong, I’m sorry and will try to change,” they say, “I’m sorry you were offended by my remarks”—suddenly, the problem lies not in the error of the speaker but in the sensitivity of the listener.
That’s not an apology. It’s a transparent attempt to shift the blame onto everyone but the person who made the mistake.
The only thing this “apology” demonstrates is that Intel’s PR department is run by spineless weasels.
Whenever you watch the rare event of big chunks of money flying in the direction of serious GBL development like wild geese in winter, you can bet your tenure on it that it’ll be all about STEM. But education isn’t just Science, Technology, Engineering, and Math—it’s about a zillion other things too! And that includes Social Sciences and the Humanities.
The humanities, despite being ridiculously underfunded, are doing fine in principle. And if we care at all about what kind of society, political system, historical self-image, or perspective on justice should shape our future, or what capacities for critical and self-critical thinking, introspection, and general knowledge we want future generations to have at their disposal to make informed decisions about anything and everything, then we should be as deeply interested in bringing the humanities and social sciences into game-based learning as we already are with respect to STEM.
Because if you aren’t interested in such things or can’t motivate yourself to take them seriously, then you’d better prepare for a near-future society best represented right now by FOX News and talk radio programs, comment sections of online newspapers, and tech dudebro social media wankfests.
That said, there are at least two fatal mistakes the humanities must avoid at all costs: neither should they put themselves on the defensive about their own self-worth, nor should they position themselves conveniently in the “training” camp.
Jeffrey T. Nealon in Post-Postmodernism: Or, The Cultural Logic of Just-in-Time Capitalism (Stanford: Stanford UP, 2012):
The other obvious way to articulate the humanities’ future value is to play up the commitment to communication skills that one sees throughout the humanities. For example, Cathy Davidson writes in the Associated Departments of English Bulletin (2000), “If we spend too much of our energy lamenting the decline in the number of positions for our doctoral students, … we are giving up the single most compelling argument we have for our existence”: the fact that we “teach sophisticated techniques for reading, writing, and sorting information into a coherent argument.” “Reading, writing, evaluating and organizing information have probably never been more central to everyday life,” Davidson points out, so—by analogy—the humanities have never been so central to the curriculum and the society at large. This seems a compelling enough line of reasoning—and donors, politicians, students, and administrators love anything that smacks of a training program.
But, precisely because of that fact, I think there’s reason to be suspicious of teaching critical-thinking skills as the humanities’ primary reason for being. The last thing you want to be in the new economy is an anachronism, but the second-to-last thing you want to be is the “training” wing of an organization. And not because training is unnecessary or old line, far from it; rather, you want to avoid becoming a training facility because training is as outsourceable as the day is long: English department “writing” courses, along with many other introductory skills courses throughout the humanities, are already taught on a mass scale through distance education, bypassing the bricks-and-mortar university’s (not-for-profit) futures altogether, and becoming a funding stream for distance ed’s (for-profit) virtual futures. Tying our future exclusively to skills training is tantamount to admitting that the humanities are a series of service departments—confirming our future status as corporate trainers. And, given the fact that student writing and communication skills are second only to the weather as a perennial source of complaint among those who employ our graduates, I don’t think we want to wager our futures solely on that. (187–88)
Everybody who’s involved in a rare GBL game pitch for the humanities or social sciences that travels down this road should turn the wheel hard and fast in a different direction—or get out and run.
Cas Prince over at Puppyblog about the declining value of the indie game customer (slightly densified):
Back in the early 2000s, games would sell for about $20. Of course, 99% of the time, when things didn’t work it was just because the customer had shitty OEM drivers. So what would happen was we spent a not insignificant proportion of our time—time which we could have spent making new games and thus actually earning a living—fixing customers’ computers. So we jokingly used to say that we sold you a game for a dollar and then $19 of support.
Then Steam came (and to a lesser extent, Big Fish Games). Within 5 short years, the value of an independent game plummeted from about $20 to approximately $1, with very few exceptions.
Then came the Humble Bundle and all its little imitators.
It was another cataclysmically disruptive event, so soon on the heels of the last. Suddenly you’ve got a massive problem on your hands. You’ve sold 40,000 games! But you’ve only made enough money to survive full-time for two weeks because you’re selling them for 10 cents each. And several hundred new customers suddenly want their computers fixed for free. And when the dust from all the bundles has settled you’re left with a market expectation of games now that means you can only sell them for a dollar. That’s how much we sell our games for. One dollar. They’re meant to be $10, but nobody buys them at $10. They buy them when a 90% discount coupon lands in their Steam inventory. We survive only by the grace of 90% coupon drops, which are of course entirely under Valve’s control. It doesn’t matter how much marketing we do now, because Valve control our drip feed.
A long, rambling, and eminently realistic post that everybody interested in gaming culture and indie games should go and read from A to Z.
At ProfHacker, Anastasia Salter has collected five recommendations for critical readings on games and learning. A quick check of my own personal library (and memory) reveals that of these I’ve read only two, namely James Paul Gee’s What Video Games Have to Teach Us About Learning and Literacy and Jesper Juul’s The Art of Failure.
But except for Gee’s, the publication dates are all in the neighborhood of 2013/2014, and my backlog is disheartening anyway.
From Ian Bogost’s talk at the 2013 Games for Change conference:
When people talk about “changing the world with games,” in addition to checking for your wallet, perhaps you should also check to see if there are any games involved in those world changing games[.] The dirty truth about most of these serious games, the one that nobody wants to talk about in public, is they’re not really that concerned about being games. This is mostly because games are hip, they make appealing peaks in your grant application, they offer new terrain, undiscovered country, they give us new reasons to pursue existing programs in order to keep them running.
Maybe what we want are not “serious” games, but earnest games. Games that aren’t just instrumental or opportunistic in their intentions.
According to a 2011 metastudy by Traci Sitzmann in Personnel Psychology, declarative and procedural knowledge and retention were observed to be higher in groups taught with computer-based simulation games than in groups taught without, and even self-efficacy was observed to be substantially higher—surprisingly high, I might say. But that isn’t the whole story.
Common knowledge, and often among the main rationales for developing computer-based simulation games, is that wrapping entertainment around course materials will boost motivation. Motivation, hopefully, for learning new skills and not merely for playing the simulation game.
But do we know for sure that this works?
Two key simulation game theories propose that the primary benefit of using simulation games in training is their motivational potential. Thus, it is ironic that a dearth of research has compared posttraining motivation for trainees taught with simulation games to a comparison group. A number of studies have compared changes in motivation and other affective outcomes from pre- to posttraining for trainees taught with simulation games, but this research design suffers from numerous internal validity threats, including history, selection, and maturation. Also, the use of pre-to-post comparisons may result in an upward bias in effect sizes, leading researchers to overestimate the effect of simulation games on motivational processes.
Sounds bad enough. But there’s more! In a corporate environment, motivation is intimately linked to work-motivation (think of it as a special case of transfer of learning), which, it turns out, hasn’t been tested so far in any meaningful manner at all:
However, the instructional benefits of simulation games would be maximized if trainees were also motivated to utilize the knowledge and skills taught in simulation games on the job. Confirming that simulation games enhance work-related motivation is a critical area for future research.
And there’s something else. How well declarative and procedural knowledge, retention, and self-efficacy are raised depends, according to this meta-analysis, on several factors. The best results were observed when work-related competencies were actively rather than passively learned during game play, when the game could be played as often as desired, and when the simulation game was embedded in an instructional program rather than used as a stand-alone device.
Lots of implications there. And ample opportunity to turn your corporate simulation game into a veritable shit sandwich: when the game is merely the digital version of your textbooks, training handbooks, or field guides; when the replay value is low; and when you think you can cut down on your programs, trainers, and field exercises.
In other words: a good simulation game will cost you, and you can’t recover these costs by cutting down on your training environment. Instead, a simulation game is a substantial investment in your internal market, and you better make sure to get the right team on board so that motivation will translate into training success and training success into work-motivation.
Paper cited: Sitzmann, Traci. “A Meta-Analytic Examination of the Instructional Effectiveness of Computer-Based Simulation Games.” Personnel Psychology. Vol.64, Issue 2 (Summer 2011). 489–528.
When younger learners study natural science, their body movements with external perceptions can positively contribute to knowledge construction during the period of performing simulated exercises. The way of using keyboard/mouse for simulated exercises is capable of conveying procedural information to learners. However, it only reproduces physical experimental procedures on a computer. […]
If environmental factors, namely bodily states and situated actions, were well-designed as external information, the additional input can further help learners to better grasp the concepts through meaningful and educational body participation.
Exciting research. Add to that the implications of Damasio’s somatic marker hypothesis and the general question of the vanishing of movement and physicality from learning processes, an as yet underresearched psychological (or even philosophical, think Peripatetics) observable.
This is a direction game-based learning research should pursue with some financial muscle, so to speak.
How serious games are developed has changed quite a bit since Gunter et al.’s paper “A Case for a Formal Design Paradigm for Serious Games” (link to PDF) from 2006, but that doesn’t invalidate its point of departure in principle:
We are witnessing a mad rush to pour educational content into games or to use games in the classroom in an inappropriate manner and in an ad hoc manner in hopes that players are motivated to learn simply because the content is housed inside a game.
While this paper is neither a rigorously written research study nor exactly informed by deep knowledge about the psychology of learning (all three authors have their backgrounds in the technology of learning), and the concluding “method for creating designed choices” falls flat on its face because the paper regrettably fails to define “choice” in this context, we can still extract its basic idea, strip off its naïve linearity, and expand on it.
In brief:
The basic design process for educational games should occur within a three-dimensional space whose three conceptual axes are: Game Mechanics, Dramatic Structure, and the Psychology of Learning. Simply trying to “map” these parameters onto each other in a largely linear fashion, an approach that, among other things, quickly loses sight of participatory elements and player agency, will run into problems and lead to bad games. And the best way to build such a matrix for a given objective is to create a collaborative team of top-notch professionals from all three areas, i. e., game design, narrative design, and the psychology of learning and motivation.
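As a minimal sketch of what such a non-linear design matrix might look like in practice—my own illustration, not the paper’s method and not any existing tool—imagine that no design decision counts as finished until all three axes have weighed in on it:

```python
# Hypothetical sketch: a design decision must be justified along all three
# conceptual axes before it counts as done; there is no linear "mapping"
# from one axis onto another.

from dataclasses import dataclass, field

AXES = ("game mechanics", "dramatic structure", "psychology of learning")

@dataclass
class DesignDecision:
    description: str
    rationale: dict = field(default_factory=dict)  # one entry per axis

    def is_complete(self) -> bool:
        return all(axis in self.rationale for axis in AXES)

decision = DesignDecision("replaying a failed negotiation with higher stakes")
decision.rationale["game mechanics"] = "replay loop with persistent world state"
decision.rationale["dramatic structure"] = "the second attempt raises the stakes"
print(decision.is_complete())  # False until the psychology of learning signs off
```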
Paper cited: Gunter, Glenda A., Robert F. Kenny, & Erik Henry Vick. “A Case for a Formal Design Paradigm for Serious Games.” The Journal of the International Digital Media and Arts Association. Vol.3 No.1 (2006). 1-19.
While serious games have been embraced by educators in and out of the classroom, many questions remain. What are the possible effects of digital gaming, connectivity and multitasking for younger learners, whose bodies and brains are still maturing?
Let me rephrase this just a bit:
While 20th century-style classroom learning has been embraced by educators all over the world, many questions remain. What are the possible effects of one-size-fits-all educational methodology with predetermined curricula and standardized testing, conditioned learning of siloed educational subjects detached from personal experience, and large class sizes solely determined by year of birth, for younger learners whose bodies and brains are still maturing?
What this comes down to is this. With their defensive positions reflected by arguments as well as study designs, game-based learning proponents often paint themselves into a corner. You just can’t conclusively identify (let alone “prove”) the effects and effect sizes of a particular teaching method for all times, ages, and contexts. Moreover, it’s proponents of that archaic industrial processing of learning and learners that we, somewhat misleadingly, call our “modern educational system” who should scramble to legitimize their adherence to outdated structures and methods, not the other way round.
Another thing that’s screwed, of course, is that of the twenty studies on game-based learning listed by the research roundup mentioned above, only three are freely available — “Video Game–Based Learning: An Emerging Paradigm for Instruction” (Link); “Gamification in a Social Learning Environment” (Link to PDF); “A Meta-Analytic Examination of the Instructional Effectiveness of Computer-Based Simulation Games” (Link). And of the other seventeen articles’ ten sources overall, even the excellently equipped university and state library I’m privileged to enjoy research access to subscribes, again, to only three.
Commencing Operation Play, a call-to-arms for all believers in the positive impact of game-based learning! From September 15th–19th, we’re celebrating educators that utilize game-based learning in their classrooms and the benefits games can have on student engagement and understanding. We’ve partnered with some of the most powerful forces in the industry to build a hub of teacher resources for adding game-based learning to your classroom curriculum.
On board for digital games are, among others, MIT’s Education Arcade and Institute of Play’s GlassLab.
With the Mojang buy, Microsoft will have an automatic presence in two hot and growing areas of importance in K-12 schools: STEM education, and game-based learning. It could choose to:
Maintain the licensing and direct support relationship for TeacherGaming’s MinecraftEdu,
Distribute Minecraft directly to schools as a Microsoft Education initiative (perhaps also buying TeacherGaming), or
Let education-specific efforts wither as it pursues world domination in mass market video games.
Early indications are somewhat promising, if not yet specific.
The Bill & Melinda Gates Foundation’s activities notwithstanding, Microsoft’s past in edu is checkered, to say the least. While Microsoft’s new CEO Satya Nadella indirectly confirmed that Ballmer’s departure marked the end of Microsoft’s platform-centric “domination” strategy, it will take time until we know whether that’s just marketing lingo or a real change of heart.
Remember the time when education was one of Apple’s rare strongholds and Microsoft proposed to settle a class-action lawsuit by paying out $1.1 billion “in Microsoft software to needy schools”?
A circulating story about a grey parrot performing unexpectedly well in a setup of the classic 1972 Stanford Marshmallow Experiment reminded me of a related study from 2012, “Rational Snacking: Young Children’s Decision-Making on the Marshmallow Task Is Moderated by Beliefs About Environmental Reliability” (paywall). The takeaways from the latter: the original test’s setup didn’t control for trust (and probably other factors as well), and self-control’s role in personal delay-of-gratification success or failure is at least “moderated” by the perceived reliability of the environment, i. e., trust. (About the parrot, alas, I have nothing useful to say.)
One can be reasonably sure that both factors, i.e., trust and self-control, are indeed highly correlated in real-world settings. From there we can assume another confounding factor pertaining to both the original and the follow-up study that hasn’t been controlled for, namely, social background. Reliability isn’t just a matter of character, but also a matter of circumstances: people can be unreliable and untrustworthy not because that’s who they are, but because their personal environment and circumstances force them to be. For a taste of how this works, play the terrific Papers, Please.
In fact, your unreliability and untrustworthiness in real-world settings can be a function of naked necessity, and the more precarious your personal circumstances are, the less you can afford to be reliable and trustworthy. It can start with the promise of a birthday present you couldn’t keep because the refrigerator broke down and had to be repaired, or the promised visit to the zoo or your daughter’s all-important afternoon baseball game you missed because, in the end, you didn’t dare leave work early even though you had asked for permission.
Now think again about the results of the original Marshmallow Experiment’s follow-up studies:
In follow-up studies, Mischel found unexpected correlations between the results of the marshmallow test and the success of the children many years later. The first follow-up study, in 1988, showed that “preschool children who delayed gratification longer in the self-imposed delay paradigm, were described more than 10 years later by their parents as adolescents who were significantly more competent.”
What actually needs to be done is to factor in the reliability of the participants’ real-life environments, which, stunningly, hasn’t been controlled for throughout these studies.
Now, learning to trust or not to trust is a bit more complicated than “learning whom to trust or not” as an educational objective. “Trust” has a context, often a dynamic one. If children don’t understand this, they will have no clue what to do or what’s happening, e. g., when it comes to intermittent trustworthiness, where circumstances compel a basically trustworthy person to be trustworthy most of the time, but not all of the time. And we know from Behaviorism that intermittent rewards (i. e., reinforcements) trigger certain classes of hormones that are not only highly addictive, but change behavior in unexpected and overwhelmingly socially negative ways.
When “Trust” as an educational objective has to take dynamically changing contexts into account, many of the game mechanics used in educational games are not even close to being useful, and that applies to corporate training as well. But how can we create dynamic contexts?
Well, Storification! A dramatic structure, turning points, ever higher stakes, unforeseen predicaments, and a “story world” where every action has consequences, though not necessarily the expected ones. Obviously, in an interactive, participative game where the player can expect to have agency, it’s neither possible nor desirable to “script” all these elements beforehand. As of now, the single best way to experience such situations, and to experience them in perfect safety, is still the old-fashioned pen-and-paper roleplaying game: a true, authentic “co-op” game through and through.
And that’s exactly what we’re going to have to create in the field of game-based learning: a plot-driven, context-rich co-op game that dynamically evolves through interactions between players who enjoy true agency. Which would be almost too easy with Strong AI as a “game master,” but as long as we can’t have that, we must work with what we have. Which, actually, is quite a lot—with MMORPG mechanics and mixed AI/Tutor systems leading the way.
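To illustrate just the trust mechanic, here’s a minimal, purely hypothetical sketch: an NPC whose reliability isn’t a fixed character flag but a function of the pressure their circumstances put on them, which is precisely the kind of dynamically changing context a pre-scripted quiz can’t deliver.

```python
# Hypothetical sketch: reliability as a function of circumstances, not character.
# Under enough pressure, even a fundamentally trustworthy NPC breaks promises,
# so the player experiences intermittent trustworthiness in a dynamic context.

import random

class NPC:
    def __init__(self, name, disposition=0.9):
        self.name = name
        self.disposition = disposition  # how reliable the character "is"
        self.pressure = 0.0             # broken fridge, overtime, debt...

    def keeps_promise(self):
        effective_reliability = max(0.0, self.disposition - self.pressure)
        return random.random() < effective_reliability

mara = NPC("Mara")
for week, pressure in enumerate([0.0, 0.1, 0.5, 0.8], start=1):
    mara.pressure = pressure
    print(f"week {week}: promise kept -> {mara.keeps_promise()}")
```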
Digital games are becoming a more regular part of the classroom, according to the nearly 700 teachers who responded to the survey.
Of those teachers who use games in the classroom (513 respondents), the majority of respondents (55%) use games in the classroom at least once a week and another quarter have kids play games at least once a month.
The GLPC survey found that a majority of teachers still use desktop computers to play games (72%) and a sizable group (41%) is using interactive whiteboards. But still, tablets have quickly grown to equal the whiteboard usage.
That’s a lot, actually, but I think it’s reasonable to expect a slight selection bias here, i. e., that teachers who use digital games in their classrooms are a bit more likely to respond to this survey than those who don’t.
Add to that another survey, quoted on GLPC’s website:
This growth of mobile technology was also highlighted in a new survey from the technology and education firm Amplify. That survey found that of those not using tablets 67 percent plan to invest in them in the next 1–2 years.
Again, quite a lot. Yet “interactive” and “mobile” don’t necessarily translate into “collaborative,” and I wonder whether tablets are particularly suited for collaborative game-based learning (which playing games in the classroom was all about in the first place).
Also, I wonder how those numbers would compare to a similar survey in Germany—oh wait, I don’t.
About two months or so ago, I threw a few remarks about LEGO’s new “Female Scientists Research Institute” into that Black Hole commonly known as Facebook, raining on the then-ongoing “LEGO finally gets it!” parade by reminding everybody that this set was not a regular product but a) fansourced as a winner of the annual “Idea” competition and b) a limited edition.
And so it goes. Shortly after the launch, from the New York Times:
Within days of its appearance early this month, the Research Institute—a paleontologist, an astronomer and a chemist—sold out on Lego’s website and will not be available at major retailers, including Target and Walmart. Toys “R” Us did carry the line, but according to associates reached by telephone at two of its New York stores, it sold out at those locations as well.
Lego said the set was manufactured as a limited edition, meaning it was not mass-produced.
So there’s that.
And the problem is…well, take one guess. Avivah Wittenberg-Cox over at Harvard Business Review (emphasis mine):
Why did it take until 2014 for the world’s second-largest toy maker to offer girls (and their toy-buying parents) products they might actually want? (After all, even Barbie has been an astronaut since 1965.)
Perhaps it has something to do with the profile of LEGO’s management team, comprised almost entirely of men. The three-person board of the privately-held company is all men, led by CEO Jørgen Vig Knudstorp. The 21-person corporate management team has 20 men and one woman—and she’s in an internally-facing staff role, not connected to the customer base or product development. When your leadership isn’t gender-balanced, it’s tough to have a balanced customer base. The new “Research Institute” range was proposed by geoscientist Ellen Kooijman on one of the company’s crowd-sourcing sites. But it begs the question, is there really no one inside the company who might have come up with the radical idea of having women scientists feature in a 21st century toy company’s line? […]
Don’t hold your breath, though. Despite its first-day sold-out success, LEGO has decided not to continue the Research Institute line. It was only a “limited edition.” So girls, back to the pool. The guys in this boardroom don’t seem to want to give you any ideas… let alone seats at the table.
Read the whole piece—LEGO should be deeply ashamed. But the exact same problem haunts the videogame industry, and the cultural expressions that attach themselves to it; under the protective cultural umbrella of predominantly male C-level execs, we’re not only stuck with equivalents of “limited editions” in the videogame market, but also with that howling mob of male gamers descending on everything that’s not sufficiently catering to their dicks.
[Ernest W.] Adams mentioned “stealth learning” as a very effective way to convey a specific message in a serious game. He said Lufthansa has a game called Virtual Pilot that challenges gamers to fly to the designated city with increasingly fewer aids. They say “Land at city X,” and all you have to go on is a map of the region showing red dots (cities) within country boundaries, and you must choose the right city to proceed. Success then removes the dots representing the cities, and you must guess where the city in question is, and you’re awarded more points the closer your chosen spot is to the actual location. The final level removes country boundaries as well, stretching your memory and knowledge to the maximum.
While a fun game in its own right, what you don’t realise as you play is that you now know what cities Lufthansa flies to as the game doesn’t show cities the airline doesn’t service. Sneaky!
Seriously?
I have the greatest respect for Ernest Adams, so I believe he mentioned this game as an example of the underlying mechanics in principle, not of its quality as a serious game in general. Where to begin: advergames as serious games? learning a brand’s flight destinations as an educational objective? flight destinations that—give me a sec—we can check out anytime, anywhere on our phones courtesy of Google Search or Lufthansa’s own nifty app? the lack of an incentive system to retain the geographical knowledge gained (except for use in repeat games)? knowledge, moreover, that you need to succeed in the first place? the lack of any game mechanism that makes this knowledge relevant to the player beyond earning points toward a finite total?
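For the record, the scoring mechanic as described boils down to something like the following sketch—my guess at its general shape, certainly not Lufthansa’s actual implementation: the closer your guess lands to the real city, the more points you earn.

```python
# Hypothetical sketch of distance-based scoring: max_points for a perfect guess,
# decaying linearly to zero at falloff_km from the actual city.

import math

def score(guess, actual, max_points=100, falloff_km=500):
    """Award more points the closer the guessed spot is to the actual location."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*guess, *actual))
    # great-circle distance via the haversine formula, Earth radius ~6371 km
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_km = 2 * 6371 * math.asin(math.sqrt(a))
    return max(0, round(max_points * (1 - distance_km / falloff_km)))

# Guessing a spot a few kilometers off Vienna scores close to the maximum.
print(score(guess=(48.20, 16.40), actual=(48.2082, 16.3738)))
```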
just drafts is one part news ticker with commentary and one part real-time research about game-based learning and game-based education, game design, and game-related media ethics. Other game-related topics (storytelling, branding & marketing, copyleft wars, gadgetry, and so on) you’ll find at my flagship blog between drafts in various categories.
News ticker posts will feature a quote like this one from an external source, only longer.
To that I will add a commentary, preferably not much longer than the quote. The post’s headline will link not to the post itself as expected, but—indicated by an arrow at the end of the post title—directly to the source of the quote (a principle I picked up from John Gruber’s site and fell in love with over the years along with #4a525a). A permalink for the post itself will be provided at the end of the post; see below, in blue. Research post titles, in contrast, will have no arrow and behave like good, old-fashioned blog posts.