Is Scarlett Johansson Going to Take Over the World?

I’ve been thinking a lot lately about the dangers of AI, especially how deepfake scams could convince people to put money into investment schemes endorsed by fake celebrities.

When I was discussing this recently, a member of the audience asked me whether regulators are prepared for this wave of disruption, and I said ‘no’. Regulators can only regulate what they can see. They see Microsoft-backed OpenAI and Google’s Gemini, and they must know that this is a game-changer, so what are they doing about it?

Well, in March, the EU approved the AI Act, a set of far-reaching regulations seen as something of a landmark. After all, the EU is pretty stringent with Big Tech, especially after years of tax avoidance and questionable data use policies, so what does this new regulation mean?

The Act should be in force this month and sets rules for high-impact, general-purpose AI models and high-risk AI systems, which will have to comply with specific transparency obligations and EU copyright laws.

That’s a good start, as many of these developments depend upon scraping content from people who spend a lot of time creating it. For example, Sony Music, the largest music publisher in the world and home to artists like Beyoncé and Adele, has contacted Google, Microsoft, OpenAI and more than 700 other tech firms to determine whether they have used its songs to develop artificial intelligence systems. The publisher wants to stop them training, developing or making money from AI using Sony songs without permission, the BBC reports.

The critical question: if AI is only as good as the content it can learn from, how do you protect the content creators? The EU AI Act tries to address this but does it go far enough?

Possibly, as the EU’s rules take a risk-based approach: the riskier the system, the tougher the requirements, with outright bans on the AI tools deemed to carry the most threat. This means that providers of high-risk AI must conduct risk assessments and ensure their products comply with the law before they are made available to the public.

But possibly not, as there is no reference to specific banking, payments or financial services rules. The Act is geared more towards rules on government identification of citizens using biometrics and restricting their adoption.

It will be interesting to see whether other countries follow the EU’s approach, but the Act also misses some other issues. For example, Wired has just reported that OpenAI’s entire team working on the long-term risks of AI has left or been disbanded.

Maybe that’s not surprising, as the company has had some internal battles between its founders, leading to the ousting of CEO Sam Altman by the board late last year, only for him to be reinstated five days later. The fact that the team was led by Ilya Sutskever, OpenAI’s chief scientist and one of the company’s co-founders, who also happened to be one of the board members who ousted Sam, maybe gives you a feel for what’s going on.

The other co-lead of the team was former DeepMind researcher Jan Leike, and Jan is not going quietly, posting a thread on X on Friday explaining that his decision came from a disagreement over the company’s priorities and the resources his team was being allocated.

“I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point,” Leike wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

Nevertheless, there were several big AI announcements in the last week or so.

OpenAI launched a new conversational AI system called GPT-4o (the ‘o’ stands for ‘omni’), which allows much more human-like discussion with a voice that sounds incredibly like Scarlett Johansson in the film ‘Her’ (2013). Coincidence? Maybe not. It’s the favourite sci-fi movie of OpenAI CEO Sam Altman, who posted on X last week a message that simply read: “her.”

Meanwhile, Google made a big range of AI announcements of its own last week at its developer event. As TechCrunch reports, the company made its case to developers, and to some extent consumers, as to why its bets on AI are ahead of rivals’. It unveiled a revamped AI-powered search engine, an AI model with an expanded context window of 2 million tokens, AI helpers across its suite of Workspace apps such as Gmail, Drive and Docs, tools to integrate its AI into developers’ apps, and even a future vision for AI, codenamed Project Astra, which can respond to sight, sound, voice and text combined.

Finally, JP Morgan Chase announced IndexGPT at the start of the month. The service delivers thematic investment baskets created with the assistance of OpenAI’s GPT-4 model. Thematic investing focuses on emerging trends rather than traditional industry sectors or company fundamentals, and it has gained popularity over the last decade. That said, it lost some ground in recent years due to poor performance and higher interest rates. IndexGPT aims to reignite interest in thematic investing by providing a more accurate and efficient approach.

While each advance on its own was promising, the onslaught of AI news was overwhelming, as is the speed and rate of change in this space. In other words, watch this space ... and don’t just watch it, but focus upon it like a lion focuses upon a zebra in the Kruger Park. If you don’t catch it, you don’t eat.

Related: The Growing Deep Fake Scam Crisis