The real doomsday scenario

US President Joe Biden recently signed an executive order regulating artificial intelligence (AI), which was met with mixed reactions. A common criticism is that the order is highly prescriptive – it assumes a lot about the future direction of AI, even though that direction is a huge unknown.

Among the most critical was tech writer Ben Thompson, who reminded us what happens when people think they’re smart enough to predict the future:

“The issue with Windows Mobile was, first and foremost, Gates himself: in his view of the world the Windows-based PC was the center of a user’s computing life, and the phone a satellite; small wonder that Windows Mobile looked and operated like a shrunken-down version of Windows: there was a Start button, and Windows Mobile 2003, the first version to have the “Windows Mobile” name, even had the same Sonoma Valley wallpaper as Windows XP.

To be like Gates and Microsoft is to do the opposite [of genuine innovation]: to think that you know the future; to assume you know what technologies and applications are coming; to proscribe what people will do or not do ahead of time. It is a mindset that does not accelerate innovation, but rather attenuates it.”

Thompson thinks Biden’s order is making the same mistakes with AI as Bill Gates did when trying to crack mobile:

“The government is Bill Gates, imagining what might be possible, when it ought to be Steve Jobs, humble enough to know it cannot predict the future.

In short, this Executive Order is a lot like Gates’ approach to mobile: rooted in the past, yet arrogant about an unknowable future; proscriptive instead of adaptive; and, worst of all, trivially influenced by motivated reasoning best understood as some of the most cynical attempts at regulatory capture the tech industry has ever seen.”

From an economist’s point of view, I’m most worried about how the order’s prescriptive approach flows seamlessly into regulatory capture: the current market leaders in AI, such as OpenAI’s Sam Altman, have been crying wolf over safety concerns all year in a very transparent attempt to get the government to raise costs for their rivals, some of which may not even exist yet. If they succeed, we (consumers) may be ‘locked in’ to an inferior outcome – progress will slow, competition will dry up, and the incumbents will be able to extract rents from us forevermore.

The good news for those who actually care about safety rather than their monopoly rents is that open-source AI will continue to be developed – even if it’s effectively outlawed in the land of the free – and it’s critical for safety:

“First, open models enable a tremendous amount of (badly needed) safety research, which requires full access to model weights (ideally with training data). API access is insufficient…

Second, open models offer transparency and auditability. Much of the Internet is based on open-source software (Linux, Apache, MySQL) and as a result is more secure…

Third, open models can of course be misused. But it’s far better for society to strengthen its ability to defend against misuse (before the stakes get higher), rather than be blindsighted [sic] in case of a future model leak or new vulnerability…

Finally, will foundation models become so powerful that they pose catastrophic risks? No one truly knows (though everyone seems to have an opinion). But if it is the case, I’d say: let’s not build it at all.”

On the last point, new research from Google suggests that large language models – the current trajectory of AI – are incapable of “generalising beyond their pretraining data”. That would mean that no matter how much computing power and data these models are trained on, they remain fundamentally “‘dumb’ statistical machines that are thrown off by simple out-of-distribution variations”. In other words, AI is no more of a risk to humanity than a Google search.

I never thought I would utter these words, but I agree with Facebook owner Meta that the AI scare is nothing but a good old-fashioned case of rent-seeking:

“Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment. They are the ones who are attempting to perform a regulatory capture of the AI industry.

If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI.

The vast majority of our academic colleagues are massively in favor of open AI R&D. Very few believe in the doomsday scenarios you have promoted.”

The government asking leading AI companies how to regulate their products is akin to asking Greenpeace how to regulate nuclear power: you know what the answer will be before you walk in the door, and it won’t be the right one. Biden (and his advisors) have messed up.

Meta, to its credit, has been open-sourcing its AI model, LLaMA. I just hope that at least a few governments remain sensible enough, for long enough, for the ‘good guys’ to win out in the end – otherwise this promising tech may effectively die before it ever really gets going.
