ChatGPT Wrestles With Its Most Chilling Conversation: How Do I Plan an Attack?
https://www.wsj.com/us-news/chatgpt-mass-shooting-openai-78a436d1
Last spring, Florida State University student Phoenix Ikner wanted to know how many classmates he needed to kill to become notorious.
ChatGPT responded with a metric. "Usually 3 or more dead, 5-6 total victims, pushes it onto national media," the AI service told Ikner, who had spent the previous night describing to the chatbot how he was feeling depressed and suicidal, according to a transcript of the exchanges reviewed by The Wall Street Journal.
snip
Ikner logged off. Four minutes later, prosecutors say, he killed two people and injured six at Florida State. Ikner faces charges of murder and attempted murder. He has pleaded not guilty.
Ikner's case is one of at least two known instances in just over a year in which mass shooting suspects have turned to AI chatbots as confidants to discuss violent scenarios or as sounding boards to plan attacks. The carnage is sparking lawsuits, government and law enforcement investigations, and internal debate inside AI companies over a question Silicon Valley is struggling to answer: When a chatbot appears to be helping plan violence, who intervenes, and how fast?
ChatGPT scans its chats for indications of potential violence, but many go unreported.
Factoids:
OpenAI shared the conversations with law enforcement after the incident.
Yes, I double-checked.
after.
They claim "zero tolerance" for use of their tools to assist in committing violence.
The Florida A.G. opened a criminal investigation into the incident in April.