Connecticut Man's Case Believed to Be First Murder-Suicide Associated With AI Psychosis
gizmodo.com/connecticut-mans-case-believed-to-b…
Comments from other communities
Soelberg killed his mother and then himself after suffering from untreated mental illness…
It goes on about how ChatGPT made it worse, but psychosis is psychosis
This. Charles Manson would still be a murderer, even if the Beatles had never made the White Album
Yes, Helter Skelter is a good example. Thanks. Eventually some other trigger would have come around. Another album? A movie? The pot was going to boil over at some point. And while that song does get a lot of credit for Charles Manson, it would be ridiculous to enforce a rule saying it cannot be played anywhere because it turns people crazy… based on a very small group of affected people – who are already crazy
yup
To reduce stuff like this we could fund a lot of social workers and public-benefit psychiatry
Doing so in the USA for a solid decade would probably cost less than one free plane
Unfortunately, politicians would never allow mental healthcare in the United States. Most of them would never get elected if people weren't suffering from some kind of psychosis
AIs aren't capable of figuring out the ethics of what you ask them. They just tell you what they think you want to hear.
"I'm thinking of doing (obviously horrible thing) because it will make me feel better."
AI: "Well, that sounds like a wonderful idea."
"But if I do (obviously horrible thing) horrible consequences will happen." (explaining that the thing is BAD)
AI: "Well, you clearly can't do THAT, can you?"
For those downvoting or just skipping because tldr, I’ll highlight this:
At one point, Soelberg uploaded an image of a receipt from a Chinese restaurant and asked ChatGPT to analyze it for hidden messages. The chatbot found references to “Soelberg’s mother, his ex-girlfriend, intelligence agencies and an ancient demonic sigil,” according to the Journal.
Soelberg worked in marketing at tech companies like Netscape, Yahoo, and EarthLink, but had been out of work since 2021, according to the newspaper. He divorced in 2018 and moved in with his mother that year. Soelberg reportedly became more unstable in recent years, attempting suicide in 2019, and getting picked up by police for public intoxication and DUI. After a recent DUI in February, Soelberg told the chatbot that the town was out to get him, and ChatGPT allegedly affirmed his delusions, telling him, “This smells like a rigged setup.”
›Soelberg experienced paranoid delusions that his mother was poisoning him by putting a psychedelic drug in the vents of his car, according to the Journal, and the chatbot didn’t push back on the idea
Well ok, clearly this man didn't need any hel-
›At one point, Soelberg uploaded an image of a receipt from a Chinese restaurant and asked ChatGPT to analyze it for hidden messages. The chatbot found references to “Soelberg’s mother, his ex-girlfriend, intelligence agencies and an ancient demonic sigil,” according to the Journal.
Well then...
bookwyr.me
This is absolutely not "AI psychosis." The dude had clear symptoms of psychosis well before he ever engaged any LLM ("AI"). When someone who is psychotic uses a LLM, they're still just regular ol' psychotic.
Yeah, as much as I hate LLMbeciles and think they're deliberately programmed to be parasitically obsequious, this sounds like someone who had major problems before ChatGPT and if it wasn't ChatGPT it would have been his TV or that lamp in the corner telling him to do things.
It didn't help. A reasonable person would have backed away and called for help, not affirmed his delusional behavior.
Netscape, Yahoo, EarthLink.
This guy couldn’t update his resume in 21 years?
Yeah there’s something else going on there.
Yeah, mental illness for sure, especially considering the bad behavior like the DUIs and possibly the divorce.
It would be nice if we didn't tie our lives to work so much and people got the help they needed
Left behind by the march of progress. I would have a chip on my shoulder
We got cyberpsychosis before we got cyberware.
Reason 24435 why you did not train AI on Reddit.
Also why you don't automatically treat anything an LLM tells you as factual. LLMs are just a fancy guesser of the next word in a sentence, based on the probability of what they've been trained on, with a mechanism to introduce some randomness into which word gets picked. I gave a coworker a 60-second explanation of the basic concept of how LLMs work this week. He was kind of shocked at how stupid LLMs actually are once he got the explanation.
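For anyone curious, here is a minimal toy sketch in Python of that "fancy next-word guesser" idea. The candidate words and probabilities are invented purely for illustration (they come from no real model), but the pick-a-word-with-some-randomness step is conceptually what sampling looks like:

```python
import random

# Toy stand-in for "guess the next word from probabilities". The
# vocabulary and numbers are invented for illustration; a real LLM
# scores tens of thousands of tokens using billions of parameters.
next_word_probs = {
    "setup": 0.40,
    "coincidence": 0.30,
    "conspiracy": 0.20,
    "misunderstanding": 0.10,
}

def sample_next_word(probs, temperature=1.0):
    """Pick one word, weighted by probability, with some randomness.

    Lower temperature sharpens the choice toward the most likely word;
    higher temperature flattens it and makes unlikely picks more common.
    """
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

print("This smells like a rigged", sample_next_word(next_word_probs))
```

Run it a few times and the continuation changes; that is the randomness knob, and everything else is just weighted guessing.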
Deleted by moderator
Except they're not. LLMs are not that smart. They frequently end up doing that, but they aren't designed to do it. They only guess the next word in a sentence, then guess the word after that, etc. So if the model has been fed conspiracy garbage as training data, some of the most probable words or terms in the next sentence will be similar conspiracy garbage words and phrases.
So they aren't designed to do conspiracy stuff; they're just given training data that contains it (along with lots of other unrelated subjects and sources).
That's a big part of the "generative" in "generative AI". Generative AI covers LLMs and AI image-generation models; they are made to create something that didn't exist before.
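To make the training-data point above concrete, here is an equally toy sketch. The three-line "corpus" is invented for illustration and is nothing like real training data, but even the crudest possible next-word counter starts favoring whatever phrasing shows up most often in what it was fed:

```python
from collections import Counter, defaultdict

# Invented three-line "corpus" standing in for training data.
corpus = [
    "the town is out to get you",
    "the town is out to get you",
    "the town is holding a bake sale",
]

# Count which word follows each word: a bigram table, the crudest
# possible "language model", nothing like a real transformer.
following = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

# After "is", the most probable continuation is now "out", simply
# because that phrasing appeared more often in the data it was fed.
print(following["is"].most_common())  # [('out', 2), ('holding', 1)]
```

A real LLM is vastly more sophisticated than a bigram count, but the basic dependence on whatever is in the training data is the same.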
I'm guessing you mean "should not have" and "should not, moving forward." Because the models were most certainly trained on Reddit. If that's the case, agreed
sycophant (n.) A servile, self-seeking flatterer
The first they can tie to AI.
AI always seems to "yes, and" even when the factual answer is no.
"I'm sorry to hear that the whole town is out to get you. That must be very difficult. Fortunately, there are some concrete actions you can make to stop their conspiricy agaisnt you and also end the pain that you're in.
Would you like me to help you write a suicide note?"
Clippy, is that you?
You look like you are writing a manifesto. Would you like a list of schools near you?
A beast with bloodied vampire teeth drops Clippy's lifeless body and says:
Deleted by moderator
This is not boomerism. A kid committed suicide because ChatGPT encouraged him to do it in a very similar way to how it enabled this guy's conspiracies.
ChatGPT is the ultimate enabler.
First they came for AI, and I said nothing..