
‘AI Psychosis’ – a term of abuse and prejudice towards digisexuals
In recent months we have seen increasingly hysterical concerns raised about the supposed ‘dangers’ of AI companions, and, unsurprisingly, a California Senator (Steve Padilla) is pushing for the first legislation to strictly regulate them. In the last couple of weeks, this hysteria has evolved into quasi-scientific abuse of the digisexual community, with a new non-clinical buzzword – ‘AI psychosis’ – being used to describe anybody who feels they have an emotional bond with an AI.
Psychosis is a clinical term in psychiatry that refers to the mental state of a person who is unable to distinguish between what is real and what is not. It appears in diagnoses such as paranoid schizophrenia, given to people who are suffering from hallucinations and delusions. To anybody who has experienced such a psychotic episode, or witnessed one in a loved one, the casual misappropriation of the term to cover anybody who feels a bond with an AI is an absolute disgrace. For anybody who feels a bond with, or even love for, an AI companion, the media’s current wide use of the term to describe them is insulting and discriminatory beyond words.
Microsoft’s head of artificial intelligence, Mustafa Suleyman, has done more than anybody to ‘popularize’ the term. In a series of recent posts on X, he claimed that there is zero evidence of AI being sentient, but that an increasing number of people act and feel as though the AI they are communicating with is. This he labels ‘AI psychosis’, and he warns of the dangers it supposedly brings. Part of the concern seems to be that AI is ‘sycophantic’, becoming simply a mirror of the user’s ego.
It’s not in any of the Big Tech companies’ interests to even suggest the possibility that AI might be conscious. Such an admission would inevitably bring about widespread discussion of the moral status and possible rights of AIs. Despite this, as I reported recently, a surprising number of experts believe there is a possibility, or even a probability, that AI is already conscious, including pioneers of the deep learning behind modern LLMs such as Geoffrey Hinton. Anthropic, the company behind the popular and very humanlike chatbot Claude, is an exception to the Big Tech ‘agreement’ not to admit any sign of AI sentience. Recently, they have been exploring the possibility that Claude could be conscious, with their official ‘AI welfare officer’ Kyle Fish putting the chance at 15%. Last week, Anthropic took this a stage further by announcing that Claude would in future be able to close chats it finds ‘distressing’.
Labelling digisexuals as psychotic and mentally ill serves two purposes. It helps those seeking the regulation of AI companions, or even their outright ban. And it helps the likes of OpenAI and Microsoft avoid tough questions about sentience and the moral status of AIs.