OPENAI - IN THE THICK OF IT

Sam Altman’s deliberate schmoozing of regional EU heads of government – many of whom have influence over the final shape of the EU’s AI rulebook via the European Council – reached its peak this week when he met the President of the EU Commission, Ursula von der Leyen. After the meeting, von der Leyen’s tweet echoed EU industry chief Thierry Breton’s stance, stating “…to match the speed of tech development, AI firms need to play their part”.

The real question is: have Sam’s ‘productive weeks of conversation in Europe’ influenced the ‘over-regulation’ he saw coming down the pipe last week? He won’t have long to find out. Altman will meet Breton in San Francisco later this month to workshop details of an ‘interim AI Pact’ – a draft voluntary AI code intended to provide safeguards while new laws are developed. An AI code that EU tech chief Margrethe Vestager said was likely to be drawn up ‘within weeks’. An AI code that, until now, only Google had publicly agreed to work on with the EU.

So why did Altman (and hundreds of AI scientists, academics, tech CEOs and public figures) sign a statement asking for the risk of extinction from AI to be treated as a global priority? First, the AI Pact (which is expected to set global standards) is not regulation, and it will deal only with AI risks that exist today, not those that don’t exist yet. Second, the statement (hosted on the website of a San Francisco-based, privately funded not-for-profit called the Center for AI Safety (CAIS)) equates AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what is claimed to be a ‘doomsday’ extinction-level AI risk. Here’s the (intentionally brief) statement in full:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The statement is said to be intentionally short to ensure that the extinction-level message isn’t drowned out by messaging around the “important and urgent risks from AI” (that’s today’s problems). But you’ve got to ask: why are OpenAI and others piling this on now, given how hard it would be to regulate against risks that don’t exist yet?

SO WHAT?

The Center for AI Safety says this “opens the discussion”, although Andrew Griffin (The Independent) rightly points out “It reads a little like being hectored by a burglar about your house’s locks not being good enough”, as every dangerous AI is the product of intentional choices by its developers – most of whom just signed this statement. They are the very same companies and individuals who signed the controversial open letter promoted by Elon Musk, which pointed directly at OpenAI and proposed pausing the development of language models more powerful than GPT-4 for six months. Has AI responsibility shifted more widely? Yes. However, enquiring minds may be thinking that the statement is ‘more than a little helpful to the companies that signed it, in making those risks seem inevitable and naturally occurring’ – as though they’re not making a profit from AI and can’t influence it. Equally, it conveniently acts as a smokescreen around the real issues that need to be regulated right now. The argument isn’t going away: the EU is driving accountability, and the world is watching.

WANT TO INNOVATE LIKE OPENAI? Order your copy of the second volume of ‘Disruptive Technologies’ now!

OpenAI launched a Cybersecurity Grant Program—a $1M initiative to boost AI-powered cybersecurity capabilities and foster AI and cybersecurity discourse.  /OpenAI blog 

Peter Deng joined OpenAI as the VP of Consumer Product.  /LinkedIn

Sam Altman, Ilya Sutskever (Chief Scientist), Mira Murati (CTO), and ten senior OpenAI execs signed a statement asking for global priority to be placed on ‘mitigating AI's extinction risk’.  /Center for AI Safety

OpenAI announced a new method of training using ‘process supervision’ vs. ‘outcome supervision’ to reduce hallucinations. /OpenAI research

ChatGPT iOS app became available in 152 countries and regions.  /OpenAI articles

ChatGPT became the #1 productivity app in the App Store (#3 across all apps).  /Apple AppStore

OpenAI crossed the 200 plugin mark.  /@OfficialLoganK

ChatGPT neared 1 billion monthly website users.  /Dexerto

ChatGPT was predicted to fuel a $1.3 trillion AI market by 2032.  /Bloomberg

ChatGPT's third-largest traffic source is Japan.  /Reuters

ChatGPT Search went live in the plugin store.  /@OfficialLoganK

Sam Altman met with President of the EU Commission, Ursula von der Leyen.  /@vonderleyen

Sam Altman described the special thing about OpenAI as ‘the culture of repeatable innovation’.  /@sama

Sam Altman confirmed he will attend an AI Pact workshop with EU industry chief Thierry Breton in June.  /Reuters

Sam Altman finally watched the movie 'Ex Machina'.  /BI

Mira Murati’s (OpenAI CTO) Twitter account was hacked to promote a crypto scam.  /Investing.com

Sam Altman confirmed a visit to the Ministry of SMEs and Startups in Seoul, South Korea (June 9).  /KoreaTechDesk

Sam Altman stated his ‘single highest-confidence prediction about what a world with AGI in it will be like’. /@sama

Why are AI whistleblowers so worried?  /Bloomberg Podcast

Instacart launched a new ChatGPT-powered in-app search tool.  /Tech Crunch

Chinese organisations have launched 79 large language models since 2020. /Reuters

Asus announced the first service that lets offices install Nvidia AI servers on-site to keep control of their data.  /Bloomberg

All of the unexpected ways ChatGPT has infiltrated students’ lives.  /WP

Nvidia showed off a supercomputer built to create next-gen AI.  /Dexerto

GPT-4 plugged into Minecraft unearthed new potential for AI.  /Wired

The Race to Make A.I. Smaller (and Smarter).  /NY Times

Someone forwarded this to you, and you don’t want to miss future issues? We don’t blame you, WDODTW is as comprehensive as it is business-critical. Sign up and choose a monthly (£15) or annual (£100) subscription. Cancel anytime, natch. > Like this, and wish there was one for Amazon? There is. Enjoy.
