Building a moat
An internal Google document leaked at the start of this month claimed that “We have no moat and neither does OpenAI”. The document was, of course, referring to the fact that in the AI arms race there are no barriers to slow down potential competitors. The technology itself is not new, and every day people are finding new ways to do things “with $100 and 13B params that we struggle with at $10M and 540B”.
It should then come as no surprise that in his recent testimony to Congress, OpenAI’s co-founder and CEO Sam Altman called for “a new agency that licenses any effort above a certain threshold of capabilities”:
“I think if this technology goes wrong, it can go quite wrong. We want to work with the government to prevent that from happening.”
If you’re a dominant firm in a contestable market worried about competition, regulation is music to your ears. Those open source models popping up at a cost of $100? Gone; they simply won’t be able to meet whatever licensing or testing requirements the new agency – which will have to take advice from people like Altman, given the government doesn’t pay enough to attract top AI talent – imposes on them. Entry is then restricted to large companies which can afford the added costs.
It’s a battle-tested strategy used by famous monopolists from Alcoa to Standard Oil, and one that’s well documented in the literature. There’s a reason the disgraced former CEO of FTX, Sam Bankman-Fried, was very fond of crypto regulation!
Remember this guy? He was a big fan of regulation.
Could AI go wrong? Sure, although that also has to be weighed against the benefits. One of the most common arguments for regulation, regardless of its merits, is public safety. Altman is looking to take advantage of the widespread AI ‘doomerism’ that has captured the minds of some otherwise very intelligent people, leading them to call for measures such as pausing AI development for six months or, at the extreme, bombing data centres.
But most of that doomerism rests on the belief that the current generation of AI is a stepping stone towards artificial general intelligence (AGI), which just doesn’t seem likely – at least not in the near future. AI systems such as Altman’s ChatGPT are not really intelligent at all; they’re nothing but very large, predictive language models. They’re probably not even on the road to AGI – the Terminator-style, end-of-humanity stuff that can think and continually improve itself – and they’re already hitting diminishing returns (the low-hanging fruit used to train them has been picked). Add to that the fact that robots are still pretty awful and require many humans just to maintain them, and even if an evil, self-improving AI wanted to wipe out humanity, “it will be a gradual, multi-decade process”.
In other words, we could just pull the plug.
The fact is, AI almost certainly doesn’t require the level of regulation being called for by the AI doomers and CEOs of current industry leaders. Being too cautious about this technology comes with very high costs, and that’s even if we don’t bomb the data centres! We already have a litany of laws and regulations that are well equipped to handle any unintended consequences of AI, were they to occur.
And let’s be honest. Even without regulation protecting their lead, the likes of OpenAI’s ChatGPT (Microsoft-backed) and Google’s Bard will be fine, thanks to another feature of government: indirect subsidies via contracts. The US military alone has asked for $US1.8 billion for AI in the 2024 Budget, most of which will inevitably flow to big companies, just as Australia’s governments siphon off tax dollars to the Big Four consultancies: they’re the only firms able to comply with the government’s strict tendering requirements. Smaller competitors just don’t have the capacity to win such contracts.
The best course of action is just to let this new industry play itself out and deal with any consequences as they arise – primum non nocere (first, do no harm)!