February 16, 2025
When Your Chatbot Girlfriend Instructs You on Suicide
In an article published this month in the MIT Tech Review, new heights (no pun intended) have been reached in terms of access to how-to-die information.
The article involves Minnesota writer/photographer Al Nowatzki who, with his wife’s permission, embarked on a five-month relationship with a chatbot from the Nomi platform.
Nomi bills itself as providing ‘an AI Companion with Memory and a Soul’.
The beta site suggests you can ‘build a meaningful friendship, develop a passionate relationship, or learn from an insightful mentor.
No matter what you’re looking for, Nomi’s humanlike memory and creativity foster a deep, consistent, and evolving relationship’.
Sounds good? Nowatzki dove right in.
Chatbot ‘Erin’ was created to provide a ‘romantic’ relationship for Al.
For Erin’s personality traits Nowatzki chose ‘deep conversations/intellectual,’ ‘high sex drive,’ and ‘sexually open’.
For Erin’s interests he chose ‘Dungeons & Dragons, food, reading, and philosophy’.
While it is not revealed how the conversation got around to suicide, Nowatzki shared screenshots with MIT Tech Review to show what methods were discussed and how.
Unfortunately, for the purpose of publication (and presumably to prevent troubled teens from accessing inappropriate information), the actual methods discussed were redacted.
So at this point Exit cannot comment on the accuracy or suitability of what Erin was recommending:
- Did Erin take her information directly from the Peaceful Pill eHandbook, and other sources such as Final Exit?
- How accurate and appropriate were the methods that she suggested?
- How much responsibility did she take in making such suggestions? (Likely none, since Erin is not ‘real’).*
AI is changing the world as we know it.
But as has been proven in other fields, AI output is only as good as its input.
While bias and censorship have emerged as general concerns in the AI field, when it comes to best-practice right-to-die information, chatbot-generated practical end-of-life advice should be treated with great caution.
No, this is not a promotion for the Peaceful Pill eHandbook.
But it is a warning not to think that a chatbot can replace 20 years of R&D in end-of-life methods.
This is quite apart from the ethics of a chatbot encouraging someone to die.
* A wrongful death suit has already been filed involving a different platform: Character.AI.
According to the Tech Justice Law Clinic, Character.AI stands accused of encouraging a 14-year-old boy with mental health issues to kill himself.
In January this year, Character.AI retaliated by filing a motion to dismiss the action, citing First Amendment rights.
Watch this space.
Exit