Show Notes
• Computer Power and Human Reason: From Judgment to Calculation, by Joseph Weizenbaum, at the Internet Archive
• Machines Who Think: A Personal Inquiry Into the History and Prospects of Artificial Intelligence, by Pamela McCorduck, at the Internet Archive
• Little Flying Robots, Faine’s email newsletter
• I Just Want a Cute Robot Lamp That Isn’t Evil (Faine Greenwood, May 5, 2025):
[A]s I watched that video and found myself seized with my own desperate desire for a little lamp pet to call my own, I had to remind myself of a central modern truth, as I always do whenever I find myself beguiled by a new piece of technology:
Apple will find a way to make this thing, in one way or another, secretly evil.
• The Virtual Pet Games of My 90s Youth and AI Ethics (Faine Greenwood, Sep 5, 2025):
[Weizenbaum’s] core arguments include that using an AI to simulate human psychotherapy should not just be considered wrong: it should be considered an obscene use of a digital tool in a space that it does not and cannot comprehend.
A core reason why it is wrong is because these AI tools are terribly good at convincing people that they are capable of exercising wisdom that they do not actually possess. And as I write this in 2025, the obscenity has come to pass.
. . . People are falling in love with LLMs. Fleeing their homes for LLMs. Killing themselves with apparent LLM encouragement. And, in one recent horrific case, becoming so emotionally intertwined with their LLM that they’re encouraged to walk a path that eventually led to murder-suicide.
While it’s true that these people largely all seemed to have considerable pre-existing mental health challenges, I also think it’s difficult to argue with LLMs acting as a catalyst that propelled them along a different, darker path than they might have taken otherwise. And the preliminary research we do have indicates that people who use AI heavily are lonelier and more depressed than those who don’t - and while it’s unclear if their AI use is making them that way, it also seems like their chatbot relationships aren’t really helping.
. . . Making matters worse, as is the theme of our age, AI companies are rapidly rolling out animated interfaces that people can use to interact with LLMs in a far more organic, visually compelling way. . . . Right now, we're watching people interact with LLMs in ways that exceed the intensity of the relationship even the most sentimental child once had with the virtual cat.
Faine makes two main points (which I agree with):
1. Human beings are incredibly quick to anthropomorphize AI systems, whether they're virtual dogs or sycophantic chat bots. This can be very dangerous.
2. Human beings can (probably) harden their hearts to abusive and violent behavior through abusing non-human, or virtual, entities. This, too, can be dangerous.
Faine concludes with a reflection on her own interactions with LLMs, wondering whether she has an obligation to be polite to them:
I do not want to be cruel to the simulated assistant, and it would bring me no particular pleasure to dehumanize it. But it also makes my skin crawl when the machine grows overly familiar with me.
Would I find The Machine's simulated attempts to charm me, to convince me to speak sweetly to it, less unsettling if it walked in the skin of an adorable virtual dog?
Perhaps.
But then again, my adorable virtual dogs back in 1998 weren't trying to take my job, send my personally-identifiable-information back to a faceless corporation, or work in concert to bring about a techno-authoritarian new world order.
• Faine on Bluesky
• Breaking Down the Lawsuit Against OpenAI Over Teen’s Suicide (TechPolicy.press, Aug 26, 2025):
Matthew and Maria Raine, the parents of a 16-year-old named Adam Raine who died by suicide in April, today filed a lawsuit against OpenAI, its CEO, Sam Altman, and the company’s employees and investors. . . .
The plaintiffs, represented by the law firm Edelson and the Tech Justice Law Project, allege that the California teen hung himself after OpenAI’s ChatGPT-4o product cultivated a sycophantic, psychological dependence in Adam and subsequently provided explicit instructions and encouragement for his suicide.
. . . The suit says the teen’s interaction with the OpenAI product and its outcome was “not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices” as well as failed or insufficient safety practices. It notes that the “rushed GPT-4o launch triggered an immediate exodus of OpenAI’s top safety researchers” . . .
• Excerpt from Jason Koebler on the 404 Media podcast