Synopsis
Altman has linked the incident to rising tensions around AI, public narratives about his role, and a recent critical article about him. Saying the fear and anxiety about AI is justified, he called for a policy initiative to lead the AI transition in society.

OpenAI chief executive Sam Altman said a suspect threw a Molotov cocktail at his home early Saturday, an incident he linked to rising tensions around artificial intelligence (AI) and public narratives about his role.

The attack took place at around 3:45 am, according to a blog post by the AI leader. The device struck the house but bounced off, and no injuries were reported, Altman said, adding that the damage was limited.
A Molotov cocktail is a crude firebomb, typically a bottle filled with a flammable liquid and fitted with a cloth wick that is lit before being thrown. The suspect attempted to set fire to Altman's home using this device.
Making the incident public, Altman said he hoped disclosure would deter similar attacks. San Francisco police arrested a suspect, the company said on Friday.
Altman added that the incident followed a recent critical article about him and came amid heightened public anxiety over AI.
He said he had dismissed earlier warnings that such coverage could increase risk but now believes “words and narratives” can escalate situations.
“I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives,” Altman wrote.
AI rhetoric and Altman
In the same post, Altman outlined his core views on AI, describing it as a high-impact, general-purpose technology with both large upside and real risk.
“The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.”
Additionally, he called for a policy initiative to lead the AI transition in society.
“We urgently need a society-wide response to be resilient to new threats. This includes things like a new policy to help navigate through a difficult economic transition in order to get to a much better future,” Altman added.
Companies should act as stakeholders, not sole decision-makers, Altman continued.
OpenAI saga under Altman
Altman included personal remarks on his tenure at OpenAI, citing both progress and internal challenges.
“Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me,” he said.
“I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the centre of an exceptionally complex situation, trying to get a little better each year, always working for the mission.”
Musk’s influence minimised
Recalling his dispute with co-founder Elon Musk, Altman said he was "proud of" resisting early efforts by Musk to secure unilateral control of OpenAI, calling it a key decision that allowed the organisation to continue.
He described OpenAI’s growth from a small lab to a major platform as rapid and often chaotic, requiring more structured operations going forward.
Towards the AGI race
Altman said competition in AI has become high-stakes, especially around the pursuit of artificial general intelligence (AGI).
“Once you see AGI, you can’t unsee it,” he said, describing the competition as having a “ring of power” dynamic, where the goal of controlling such technology can push organisations toward extreme positions.
Altman said the issue is not the technology itself, but the idea of being the entity that controls it.
“The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and making sure the democratic system stays in control.”
The development comes as OpenAI faces scrutiny after striking a deal with the US government to deploy its AI systems in military environments, amid an escalating dispute between Washington and rival firm Anthropic.
The US Department of Defense blacklisted Anthropic, labelling it a “supply chain risk” following disagreements over how its AI models could be used in military operations.