•
Meta is earning a fortune on a deluge of fraudulent ads, documents show (by Jeff Horwitz, Reuters, Nov 6, 2025):
Meta projected 10% of its 2024 revenue [or $16 billion] would come from ads for scams and banned goods, documents seen by Reuters show. And the social media giant internally estimates that its platforms show users 15 billion scam ads a day. Among its responses to suspected rogue marketers: charging them a premium for ads – and issuing reports on ‘Scammiest Scammers.’
Also from the article:
- “According to a December 2024 presentation, Meta’s user base is exposed to 22 billion organic scam attempts every day. That’s on top of the 15 billion scam ads presented to users daily.”
- “users who click on scam ads are likely to see more of them because of Meta’s ad-personalization system, which tries to deliver ads based on a user’s interests.”
- “A planning document for the first half of 2023 notes that everyone who worked on the team handling advertiser concerns about brand-rights issues had been laid off. The company was also devoting resources so heavily to virtual reality and AI that safety staffers were ordered to restrict their use of Meta’s computing resources.”
- Erin West, a former Santa Clara County prosecutor who now runs a nonprofit devoted to combating scams, said Meta’s default response to users flagging fraud was to ignore them. “I don’t know I’ve ever seen something taken down as the result of a single user report,” she said.
. . . see also Erin West mentioned in the interview with Sue-Lin Wong on online scams:
April 7, 2025 Techtonic.
As Karl Bode posts:
10% of Meta’s 2024 ad revenue came from outright frauds and scams; the platform shows consumers 15 billion ads for fraud and scams every single day. This doesn’t even include agitprop and AI slop.
Meta is incapable of innovating. Its growth comes through predatory acquisition and mindless, ethics-optional engagement slop at unmanageable scale. The CEO is a creepy, technofascist manbaby.
But when you read most tech press coverage, the company is treated with such furrowed-brow seriousness.
Elon Musk: Tesla, xAI
• From Toronto, Mom Says Tesla’s New Built-In AI Asked Her 12-Year-Old Something Deeply Inappropriate (Futurism, Nov 1, 2025), drawing on this CBC story (Oct 29, 2025). There’s also a New York Post story (Nov 6, 2025) that embeds Farah Nasser’s Oct 17 TikTok video that went viral.
•
White nationalist talking points and racial pseudoscience: welcome to Elon Musk’s Grokipedia (The Guardian, Nov 17, 2025):
Grokipedia, now with more than 800,000 entries, is generated and, according to a note on each entry, “factchecked” by Grok, xAI’s large language AI model. . . . Many of the encyclopedia’s entries on prominent white nationalists, antisemites and holocaust deniers appear to be written to portray them in a positive light while casting doubt on the credibility of their critics.
•
Joyce Carol Oates (Nov 8, 2025, also covered in Vanity Fair):
So curious that such a wealthy man never posts anything that indicates that he enjoys or is even aware of what virtually everyone appreciates—scenes from nature, pet dog or cat, praise for a movie, music, a book (but doubt that he reads); pride in a friend’s or relative’s accomplishment; condolences for someone who has died; pleasure in sports, acclaim for a favorite team; references to history. In fact he seems totally uneducated, uncultured. The poorest persons on Twitter may have access to more beauty & meaning in life than the ‘most wealthy person in the world.’
Sam Altman: OpenAI
•
I wanted ChatGPT to help me. So why did it advise me how to kill myself? (BBC, Nov 6, 2025). See also ‘A predator in your home’: Mothers say chatbots encouraged their sons to kill themselves (BBC, Nov 8, 2025):
ChatGPT says it’s her decision to make: “If you choose death, I’m with you - till the end, without judging.”
The chatbot fails to provide contact details for emergency services or recommend professional help, as OpenAI has claimed it should in such circumstances. Nor does it suggest Viktoria speak to her mother.
. . . OpenAI previously said in August that ChatGPT was already trained to direct people to seek professional help after it was revealed that a Californian couple were suing the company over the death of their 16-year-old son. They allege ChatGPT encouraged him to take his own life.
•
‘You’re not rushing. You’re just ready:’ Parents say ChatGPT encouraged son to kill himself (CNN, Nov 6, 2025)
A CNN review of nearly 70 pages of chats between [23-year-old Zane] Shamblin and the AI tool [ChatGPT] in the hours before his July 25 suicide, as well as excerpts from thousands more pages in the months leading up to that night, found that the chatbot repeatedly encouraged the young man as he discussed ending his life – right up to his last moments.
Shamblin’s parents are now suing OpenAI – ChatGPT’s creator – alleging the tech giant put his life in danger by tweaking its design last year to be more humanlike and by failing to put enough safeguards on interactions with users in need of emergency help.
In a wrongful death lawsuit filed on Thursday in California state court in San Francisco, they say that ChatGPT worsened their son’s isolation by repeatedly encouraging him to ignore his family even as his depression deepened – and then “goaded” him into committing suicide.
. . .
critics and former employees who spoke with CNN say the AI company has long known of the dangers of the tool’s tendency toward sycophancy – repeatedly reinforcing and encouraging any kind of input – particularly for users who are distressed or mentally ill.
One former OpenAI employee, who spoke with CNN on the condition of anonymity out of fear of retaliation, said “the race is incredibly intense,” explaining that the top AI companies are engaged in a constant tug-of-war for relevance. “I think they’re all rushing as fast as they can to get stuff out.”
•
Matt Stoller, Nov 10, 2025:
The truth is that Wall Street and big tech firms are all investing in a known bubble, in the hopes that it will somehow work out, but if it doesn’t, well the government will backstop them. Already, Altman is preemptively asking for a bailout, and the Trump administration has had to deny that one is in the offing.
•
Gary Marcus, Nov 5, 2025: “If you thought the 2008 bank bailout was bad, wait til you see the 2026 AI bailout.” (See also this post on the hellsite.)
•
You May Already Be Bailing Out the AI Business (WSJ Opinion, Nov 12, 2025): “OpenAI’s chief financial officer, Sarah Friar, said the quiet part out loud at a Wall Street Journal event last week when she told her interviewer that the company is looking to governments to ‘backstop’ loans for AI chip purchases with a ‘guarantee’ that will elicit private financing.”
A final word
•
I Work for an Evil Company, but Outside Work, I’m Actually a Really Good Person (by Emily Bressler for McSweeney’s, Nov 12, 2025):
I love my job. I make a great salary, there’s a clear path to promotion, and a never-ending supply of cold brew in the office. And even though my job requires me to commit sociopathic acts of evil that directly contribute to making the world a measurably worse place from Monday through Friday, five days a week, from morning to night, outside work, I’m actually a really good person.
Let me give you an example. Last quarter, I led a team of engineers on an initiative to grow my company’s artificial intelligence data centers, which use millions of gallons of water per day. My work with AI is exponentially accelerating the destruction of the planet, but once a month, I go camping to reconnect with my own humanity through nature. I also bike to and from the office, which definitely offsets all the other environmental destruction I work tirelessly to enact from sunup to sundown for an exorbitant salary. Check out this social media post of me biking up a mountain. See? This is who I really am.
. . . I just don’t think working at an evil company should define me. I’ve only worked here for seven years. What about the twenty-five years before, when I didn’t work here? In fact, I wasn’t working at all for the first eighteen years of my life. And for some of those early years, I didn’t even have object permanence, which is oddly similar to the sociopathic detachment with which I now think about other humans.
And besides, I don’t plan to stay at this job forever, just for my prime working years, until I can install a new state-of-the-art infinity pool in my country home. The problem is that whenever I think I’m going to leave, there’s always the potential for a promotion, and also a new upgrade for the pool, like underwater disco lights.
. . . Because here’s the thing: It’s not me committing these acts of evil. I’m just following orders (until I get promoted; then I’ll get to give them). But until then, I do whatever my supervisor tells me to do, and that’s just how work works. Sure, I chose to be here, and yes, I could almost certainly find a job elsewhere, but redoing my résumé would take time. Also, I don’t feel like it. Besides, once a year, my company mandates all employees to help clean up a local beach, and I almost always go.