
Steven J. Vaughan-Nichols for The Register on the first signs that AI results are starting to go downhill:

In an AI model collapse, AI systems, which are trained on their own outputs, gradually lose accuracy, diversity, and reliability. […]

Model collapse is the result of three different factors. The first is error accumulation, in which each model generation inherits and amplifies flaws from previous versions, causing outputs to drift from original data patterns. Next, there is the loss of tail data: In this, rare events are erased from training data, and eventually, entire concepts are blurred. Finally, feedback loops reinforce narrow patterns, creating repetitive text or biased recommendations.
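
The loss-of-tail-data mechanism is easy to reproduce in a toy. Here's a minimal sketch (mine, not the article's): a categorical "language model" over a fixed vocabulary, refit by maximum likelihood on its own samples each generation. The vocabulary size, sample count, and Dirichlet skew are all invented for illustration:

```python
# Toy model collapse: each generation estimates token frequencies from the
# previous generation's output, then samples the next training set from
# that estimate. All numbers here are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)

V, N = 1000, 5000                       # vocabulary size, samples per generation
p = rng.dirichlet(np.full(V, 0.1))      # skewed "human" distribution: many rare tokens

for gen in range(31):
    if gen % 5 == 0:
        print(f"gen {gen:2d}: distinct tokens = {np.count_nonzero(p)}")
    counts = rng.multinomial(N, p)      # the model generates the next training set
    p = counts / N                      # the next model is fit to those outputs
```

Diversity here is strictly non-increasing: a rare token that draws zero samples once has probability zero forever and can never be sampled again, which is the absorbing-state version of "entire concepts are blurred."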

That’s a problem the industry brought on itself, deliberately, numerous warnings notwithstanding. Can it be mitigated? Perhaps, but not with the tools at hand right now:

Some researchers argue that collapse can be mitigated by mixing synthetic data with fresh human-generated content. What a cute idea. Where is that human-generated content going to come from?
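
For what it's worth, the mixing idea does work in the same toy, but only under the assumption that the well of fresh human data never runs dry, which is precisely the assumption in question. The 20 percent mixing ratio below is my arbitrary choice:

```python
# The same toy with the quoted mitigation bolted on: blend a fraction of
# fresh samples from the original "human" distribution into every generation.
import numpy as np

rng = np.random.default_rng(0)

V, N, human_frac = 1000, 5000, 0.2         # human_frac is an assumed mixing ratio
p_human = rng.dirichlet(np.full(V, 0.1))   # assumes the human source never runs dry
p = p_human.copy()

for gen in range(31):
    if gen % 5 == 0:
        print(f"gen {gen:2d}: distinct tokens = {np.count_nonzero(p)}")
    synth = rng.multinomial(int(N * (1 - human_frac)), p)   # model's own outputs
    fresh = rng.multinomial(int(N * human_frac), p_human)   # new human data
    p = (synth + fresh) / N                                 # refit on the blend
```

With fresh human samples arriving every generation, extinct tokens can come back, and diversity settles around an equilibrium instead of ratcheting down. Set human_frac to zero and you're back to the first sketch.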

If only we could put generative AI to work on the tasks it’s actually good at, where the opportunities for humankind really would be revolutionary. But that would interfere with the tech bro billionaires’ con game and the fail-upward brunchlord CEOs’ fever dreams of eventually getting rid of the very concept of salary.

permalink