
Following OpenAI Into the Rabbit Hole

“Oh my fur and whiskers! I’m late, I’m late, I’m late!”

OpenAI, December 2015:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

OpenAI, April 2018:

We will actively cooperate with other research and policy institutions; we seek to create a global community working together to address AGI’s global challenges. We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.

OpenAI, February 2023:

We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society).

OpenAI, March 2023:

Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this [GPT-4 Technical Report] contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

Everything’s fine. Tech bros funded by sociopaths, fantasizing about AGI while rushing an untested, high-impact technology to market without oversight, a technology with the potential to affect almost everyone in every industry. What could possibly go wrong?