Stop anthropomorphizing lines of code.
Elon Musk promised that his social media company X would be “the everything app,” but these days “everything” seems to only include slop, fascist propaganda, and abuse. Increasingly, the social media site has been awash in vulgar and non-consensual sexual images that users are creating with X’s built-in AI tool, Grok. As The Guardian’s Nick Robins-Early wrote:
Many users on X have prompted Grok to generate sexualized, nonconsensual AI-altered versions of images in recent days, in some cases removing people’s clothing without their consent. Musk on Thursday reposted an AI photo of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend.
And as 404 Media uncovered, the abuse this software is enabling is likely far worse than it appears and is in many ways merely the latest escalation of an online creep problem that’s as old as the internet.
It’s horrendous, from top to bottom, especially for women who are being aggressively targeted by X users just for existing online.
The writer Ketan Joshi picked up on a strange pattern of language and usage in the media coverage of this scandal. Joshi posted a thread on Bluesky gathering examples “of major media outlets falsely anthropomorphising the ‘Grok’ chatbot program and in doing so, actively and directly removing responsibility and accountability from individual people working at X who created a child pornography generator.” The example headlines and articles Joshi found include phrases like “Grok apologizes,” “Grok says,” or “Elon Musk’s AI chatbot blames.” The articles go further in some cases, giving the software agency by quoting it as “writing,” “saying,” and “posting.”
The problem here, as Joshi wrote, is that this framing shifts responsibility away from the people who are using and platforming this software. Implying that the chatbot and image generation program itself is accountable allows people to hide from their own culpability in the bot’s shadow.
This has been a trend in how AI is discussed for a while. The media’s language and framing are often overly deferential to the tech industry’s own marketing hype—imagine blaming a toaster for a burned slice of multigrain just because a salesman assured you about the Bread Safe Smart Sensor™ technology. This tendency to assume that these programs are as capable as we’re being told isn’t unique to AI—think of “smart bombs”—but the trend in usage doesn’t seem to be getting any better.
The word “artificial” in AI is accurate, though. These programs are not natural, they’re human-made artifices conceived, created, and maintained by people. Allowing creators, engineers, and executives to evade accountability for their decisions, just because we imagine that the toasters they made are awake, will only degrade the internet further.
I think 2026 will be the nadir of social media. Without changes, these online platforms will be squeezed into more horrible and unpleasant forms by the pressures of AI maximalists, extractive data miners, and fascistic supporters of a “clicktatorship” who care above all else about creating and curating displays of made-for-TV violence. A better internet is not impossible, though. We can name the people behind these problems, and we can do something about it.
The viral warning that “a computer can never be held accountable,” from a 1979 IBM training document, has never been more resonant. The problem with Grok and similar programs isn’t that they’ve escaped containment like Skynet; the problem is more akin to an owner who has let their aggressive dog off its leash.
People who live in a society with you and me are putting these tools to malicious uses. They are people who take time in their day to craft and share abusive images of kids and strangers, and who delight in the pain those images cause. They are people who post slop of themselves next to cry-laughing emojis, desperate to be the funny one for once. They are people who blew off meeting up with friends so they could stay up late into the night to program these tools, who got bored and zoned out in long meetings to discuss implementing this software, and who are right now ignoring texts about why they’re letting the platforms they’re responsible for flood with filth.
None of this is the toaster’s doing. We shouldn’t let the marketers and their apologists shield those who are really responsible from their time in the spotlight.
James Folta
James Folta is a writer and the managing editor of Points in Case. He co-writes the weekly Newsletter of Humorous Writing. More at www.jamesfolta.com or at jfolta[at]lithub[dot]com.