
Takahashi Keijiro / 高橋啓治郎 released some nifty proof-of-concept code: a ChatGPT-powered shader generator (Twitter | GitHub) and a natural-language prompt interface for editing Unity projects (Twitter | GitHub).

Takahashi with regard to the latter:

Is it practical?
Definitely no! I created this proof-of-concept and proved that it doesn’t work yet. It works nicely in some cases and fails very poorly in others. I got several ideas from those successes and failures, which is this project’s main aim.

Can I install this to my project?
This is just a proof-of-concept project, so there is no standard way to install it in other projects. If you want to try it with your project anyway, you can simply copy the Assets/Editor directory to your project.
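Under the hood, the loop in both experiments is conceptually tiny: send a natural-language request to the model, get code back, and hand that code to the engine. Here’s a minimal sketch of that loop in Python, purely illustrative, assuming the openai package and an API key in the environment (Takahashi’s actual implementations are C# running inside the Unity editor, so every name and path below is made up):

```python
# Illustrative sketch of the "prompt in, shader out" loop. Not Takahashi's
# code, which is C# inside the Unity editor.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
import os
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Write a Unity HLSL fragment shader that renders a scrolling checkerboard."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Reply with shader code only, no commentary."},
        {"role": "user", "content": prompt},
    ],
)

# Drop the generated code where the engine can pick it up (hypothetical path).
os.makedirs("Assets/Generated", exist_ok=True)
with open("Assets/Generated/Checkerboard.shader", "w") as f:
    f.write(response.choices[0].message.content)
```

That the loop is this short is exactly why the failures are interesting: all the hard parts, telling good output from bad and recovering from the bad, sit outside it.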

As I wrote in the final paragraph of my essay at Medium.com on Artificial Intelligence, ChatGPT, and Transformational Change:

[The] dynamics to watch out for [will happen] in tractable fields with reasonably defined rule sets and fact sets, approximately traceable causes and effects, and reasonably unambiguous victory/output conditions. Buried beneath the prevailing delusions and all the toys and the tumult, they won’t be easy to spot.

I didn’t have any concrete applications in mind there, but Takahashi’s experiments are certainly part of what I meant. Now, while I don’t use Unity myself, and I’m certainly not going to for a variety of reasons, I’m confident that whatever clever coders eventually get to work in a major commercial game engine will sooner or later find its way into open source engines like Godot.

There are obstacles, of course, with licensing and processing power prominent among them. The training process for models like ChatGPT is prohibitively expensive; licensing costs will accordingly be high; and you don’t want stuff that doesn’t run under a free software license in your open source engine in the first place.

But not all is lost! You absolutely do not need vastly oversized behemoths like ChatGPT for tasks like this. In principle, anybody can create a large language model, and the technologies required are well known, well documented, and open source.

There’s BLOOM, for starters, a transformer-based large language model “created by over 1000 AI researchers to provide a free large language model for everyone who wants to try.” And while training such models still costs a lot even at smaller scales (access to Google’s cloud-based TPUs, Tensor Processing Units designed for high volumes of low-precision computing, isn’t exactly cheap), prices for older and less efficient hardware will come down eventually. Here’s an example of how fast these things develop: according to David Silver et al.’s 2017 paper in Nature, AlphaGo Zero defeated its predecessor AlphaGo Lee (the version that beat Lee Sedol, distributed over a whole bunch of computers with 48 TPUs) by 100:0 using a single computer with just four TPUs.
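To make that concrete: the BLOOM family also includes much smaller checkpoints published on the Hugging Face Hub, and those already run on a single consumer GPU, or even a CPU if you’re patient. A quick sketch using the open source transformers library; the model choice and generation settings are illustrative, not a recommendation:

```python
# Sketch: running a small open BLOOM checkpoint locally with Hugging Face
# transformers. bigscience/bloom-560m is a 560M-parameter sibling of the
# full 176B-parameter BLOOM, small enough for modest hardware.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")

result = generator(
    "// GLSL fragment shader that tints the whole screen red\n",
    max_new_tokens=80,   # keep the completion short
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative setting, tune to taste
)
print(result[0]["generated_text"])
```

Don’t expect ChatGPT-grade shaders from a model this size, but it demonstrates the point: the whole stack, weights included, is free and runs on hardware you might already own.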

All that’s pretty exciting. The trick is to jump off the runaway A(G)I hype train before it becomes impossible to do so without breaking your neck, and to start exploring and playing around with this stuff in your domain of expertise, or for whatever catches your interest, in imaginative ways.
