Pledge now! Support WFMU during our 2026 marathon. For a pledge of $75 or more (or $10/month or more), you can get . . .
The Techtonic 2026 marathon premium: Introducing “Music from Techtonic”
on cassette (also available as download) – the only premium that lets you experience the show’s signature existential dread in a physical, rewindable format. Side A features the official soundtrack by Kirk Pearson, meticulously crafted on homemade electronic instruments that fortunately haven’t achieved sentience . . .
yet. We’ve also rounded up a year’s worth of those eclectic outro tracks, perfect for maintaining capitalistic ennui while doing the dishes or avoiding your emails. It will provide joy, perspective, and proof to the world that you’re tech-savvy enough to still own a functioning cassette deck in 2026.
Tracks include four compositions by Kirk Pearson, created for Techtonic:
- Theme from Techtonic (theme song at the beginning of the show)
- Modem of Home (at start of interview)
- Biomagnification (at midpoint of interview)
- Haptic Workshop (after interview)
• AIs can’t stop recommending nuclear strikes in war game simulations (New Scientist, Feb 25, 2026): “Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases.”
--> Anthropic pushed back (Feb 26, 2026) against the Department of Defense, saying it didn’t want its AI used for “mass domestic surveillance” or “fully autonomous weapons”. (See more.)
--> The current occupant then ordered US agencies to stop using Anthropic technology in clash over AI safety (AP, Feb 27, 2026)
--> And Sam Altman announced that OpenAI had won the $200 million Pentagon contract (NYT, Feb 27, 2026). As Gary Marcus points out (Feb 28, 2026), “the whole thing was a scam.”
--> As Chanda Prescod-Weinstein posted (Feb 28, 2026):
“the Department of *War* displayed a deep respect for safety” according to the guy who is being sued eleventy times because his product tells children to kill themselves.
• Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life (Guardian, Feb 28, 2026): “Kate Fox says Joe Ceccanti was the ‘most hopeful person’ before he started spending 12 hours a day with a chatbot.”
It’s difficult to grasp the scale of the problem, but OpenAI itself estimates that more than a million people every week show suicidal intent when chatting with ChatGPT.
• ‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies (Guardian, Feb 26, 2026):
Study finds ChatGPT Health did not recommend a hospital visit when medically necessary in more than half of cases . . .
In one of the simulations, eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care.
• Video: ChatGPT Says She’s a Certified Genius (Sarah Cooper, Feb 24, 2026)
• Video: AI as vomit factory (dietrichstogner, 2025)
• Video embedded here: Sam Altman saying it takes 20 years to “train a human”