From “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity” by Joel Becker et al. (emphases mine):
Despite widespread adoption, the impact of AI tools on software development in the wild remains understudied. We conduct a randomized controlled trial (RCT) to understand how AI tools at the February–June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early-2025 AI tools. When AI tools are allowed, developers primarily use Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%—AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 20 properties of our setting that a priori could contribute to the observed slowdown effect—for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design.
This paper’s results shouldn’t be overinterpreted as covering every coding skill level in every project in every industry, and I’m also generally a fan of checking reproducibility before sounding the fanfares. The thing is that, exactly as the authors state, the impact of AI tools in the wild remains understudied.
Which would, for a matter as crucially important as this one, be bad enough on its own. What makes it worse is the methodical flooding of news spaces and online platforms with pseudoscientific nonsense, egregiously unscientific “in-house studies,” comically unmoored predictions and promises, and the torrents of CEO-Said-a-Thing stenography from gullible tech tourists like Kevin Roose, or, as Karl Bode colorfully puts it, a U.S. tech press that “evolved into an extension of tech product marketing under the ad engagement journalism model, until the point it basically became either mindless consumerist gadget porn or MBAbro entrepreneurial fan fiction.” And that doesn’t even touch on the current administration’s systematic dismantling of academic funding and research institutions across the board.
Plus, of course, the basket full of problems I addressed in previous posts about mounting vulnerabilities, skill decay, and the tech industry’s Boschian vision of no longer having to pay wages in a world ruled by entrepreneurial monarchs.*
________________
* Which, curiously, I seem to have kind of predicted when I started out on my Voidpunk sf-horror project ten years ago.