“We made ChatGPT pretty restrictive,” [Sam Altman] says, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have identified 16 cases in the media this year of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. In addition to these is the now well-known case of a 16-year-old who died by suicide after discussing his plans extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, that’s not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, are independent of ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI recently introduced).

This has got to be one of the most condescending Villain speeches ever, including all the fictional ones.

In my opinion, Sam Altman is courting at the very least manslaughter charges, if not murder charges, with this kind of behavior. This is serious stuff, and Altman is evolving from Clown to Murder Clown.

Yet the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical model in an interface that simulates a conversation, and in doing so implicitly invite the user into the illusion that they’re interacting with a presence that has agency. This illusion is powerful even if intellectually we might know otherwise. Attributing agency is what humans are wired to do. We curse at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.

  • BlackJerseyGiant@lemmy.world

    That’s Murder Clown inc., pal. We all know that corporations can do no wrong; why else would they have all that $$$? What would we do without all that money flowing into the political theater? Have the people’s voice heard? And for goodness sake, take a moment to think what that would do to all the Murder Clowns at all those corporations! Who will think of the Murder Clowns, who, I ask of you!?!