
iPhones and AI: What's next?

Apple has joined the AI race by adding ChatGPT to every iPhone, bringing privacy and security concerns to the forefront; how should governments regulate this rapidly evolving technology without stifling innovation?
[Image: AI hand with the scales of justice. Image by the ABC.]

Artificial intelligence (AI) is still all the rage. Last week Apple finally joined the party, announcing that iPhone users would soon have access to ChatGPT through its Siri digital assistant, and that it plans to roll out a bunch of new AI features later this year.

It's going to be interesting to see how Apple deals with the inevitable privacy and security issues that flow from this link-up with OpenAI. Elon Musk, who has long-standing beef with OpenAI and is building a competitor, went as far as threatening to ban iPhones from his companies:

"If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.

And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage."

Apple assures us that "requests are not stored by OpenAI, and users' IP addresses are obscured", but given OpenAI's track record on respecting rules and intellectual property, a healthy amount of scepticism is certainly warranted.

But beyond "consumer beware", the link-up raises questions about how this industry should be regulated. The Australian government has made moves in this space, recently signing the Seoul Declaration and joining the Hiroshima AI Process Friends Group, which will work towards "safe and responsible AI". It has also worked with Singapore's government to "test both country's AI ethics principles", and of course it has its own interim response on "safe and responsible AI", with a temporary expert group working until the end of this month to "advise us on testing, transparency, and accountability measures for AI in legitimate, but high-risk settings to ensure our AI systems are safe".

The interim report probably gives the best indication as to the approach the government will take, but a lot was left unanswered. With the 'expert group' wrapping up at the end of this month, I'm sure a more concrete proposal for regulating AI will be forthcoming. Until then, here's what I think.

What AI can, can't, and may never do

Before regulating something, it's important to know more about what it is, exactly, we're regulating. And I think we need to be careful about listening too closely to people in the industry about what AI can do (including its potential), as doing so might lead regulators astray. Those in the AI industry tend to be excellent salespeople, full of hype about their own products, and many truly believe they're on the path to making humans obsolete within years. Their sales pitches can be so convincing that, during a recent visit to Silicon Valley, they managed to get a relatively well-known Chicago-trained monetary economist, Scott Sumner, to write that all communism needed to work was for everyone to be as smart as the people working on AI:

"I'm probably giving you the idea that the Bay Area tech people are a bunch of weirdos. Nothing could be further from the truth. In general, I found them to be smarter, more rational, and even nicer than the average human being. If everyone in the world were like these people, even communism might have worked."

Yeah, nah. What Sumner observed is collective delusion, reinforced by the fact that these people are largely sheltered from the rest of the world while inside their Silicon Valley bubble. Large language models (LLMs) aren't even on the road to artificial general intelligence (AGI); it's actually astonishing how easily a group of very smart people can veer wildly off course in their thinking – a warning to all technocracy fans out there!
