
usonian

(26,258 posts)
Mon May 4, 2026, 12:10 AM 13 hrs ago

ChatGPT Wrestles With Its Most Chilling Conversation: How Do I Plan an Attack?

https://www.wsj.com/us-news/chatgpt-mass-shooting-openai-78a436d1

https://archive.is/20260504032150/https://www.wsj.com/finance/investing/polymarket-kalshi-betting-profits-prediction-markets-eb23ac11

Last spring, Florida State University student Phoenix Ikner wanted to know how many classmates he needed to kill to become notorious.

ChatGPT responded with a metric. “Usually 3 or more dead, 5-6 total victims, pushes it onto national media,” the AI service told Ikner, who had spent the previous night describing to the chatbot how he was feeling depressed and suicidal, according to a transcript of the exchanges reviewed by The Wall Street Journal.

snip

Ikner logged off. Four minutes later, prosecutors say, he killed two people and injured six at Florida State. Ikner faces charges of murder and attempted murder. He has pleaded not guilty.

Ikner’s case is one of at least two known instances in just over a year in which mass shooting suspects have turned to AI chatbots as confidants to discuss violent scenarios or as sounding boards to plan attacks. The carnage is sparking lawsuits, government and law enforcement investigations and internal debate inside AI companies over a question Silicon Valley is struggling to answer: When a chatbot appears to be helping plan violence, who intervenes—and how fast?


ChatGPT scans its chats for indications of potential violence, but many go unreported.

Factoids:

• OpenAI shared the conversations with law enforcement after the incident.
Yes, I double-checked: after.

• They claim "zero tolerance" for using their tools to assist in committing violence.

• The Florida A.G. opened a criminal investigation into the incident in April.




AZJonnie

(3,939 posts)
1. My immediate reaction is OMG, the solution is monitoring.
Mon May 4, 2026, 01:07 AM
12 hrs ago

But this needs to be done a certain way. Otherwise, we could end up with full-time big-brother government surveillance of AI. That would be bad.

The software needs to cut people the hell off (and way earlier than in this present case). Then it reports them to a human, an employee of the company. The human is then responsible for making a call to authorities (or not), AND their name goes ON IT. Refused or passed along, someONE is responsible, and the company is liable as well for the decision.

But, we can't allow full-time government surveillance to "keep everyone safe". It needs to be kept simple and not controlled by the Feds.
