EU vs X: Grok's Explicit-Image Mess Has Officially Crossed the Line

The EU has now come for X, and this time it's not about politics, misinformation, or free-speech arguments.
It's about porn. Specifically, the sexually explicit images that can be created with Grok, the AI built into Elon Musk's platform, and whether some of them were used to make "digital stripping" content.
This is the kind of thing that makes your stomach clench when you read it, because it's not abstract harm. It is targeted, personal, and in some cases potentially illegal.
And the tone matters, too. This is not a polite inquiry. This is the EU saying, "Enough is enough."
Regulators are concerned about how quickly this kind of content spreads online and the fact that once a graphic image like this is out, it won't go away.
The damage is done, even if the platform slows its spread, even if the account gets suspended.
Now here's the kicker. People always seem surprised when AI is used for the worst things. But, I mean, let's face it – are we really surprised?
You unleash a fun image tool on millions of people, and the internet does what it always does: it takes the shiny new toy and immediately looks for ways to hurt someone with it.
That's why this investigation is not just about "the EU is angry at a chatbot." It's happening under the Digital Services Act, which requires major platforms to behave like responsible adults.
Under the DSA, X is supposed to show that it took a reasonable approach to risk assessment and put adequate safeguards in place. Not after the damage. Before.
X has reportedly taken some steps in response, such as tightening certain features and restricting access (for example, putting some image-generating functions behind a paywall).
That's … something, I guess. But if you're the one whose photo was altered and shared, it probably doesn't feel like a win. It's like locking the front door after your house has been burglarized.
And here's another unpleasant truth: Platforms today don't just host content. They amplify it. They recommend it. They push it into feeds.
That's why the EU isn't just looking at the explicit images Grok produced; it's asking whether X's own systems made that content travel faster and further than it should have.
The scary thing is that this is about to become the new normal.
AI-generated imagery isn't going anywhere. If anything, it's only getting better, faster, cheaper, and more convincing.
Which means the total harm will keep growing. Today it's Grok. Tomorrow it's another model, another platform, another crop of victims.
And it's not just celebrities anymore; it's classmates, coworkers, exes, and random women on the internet who posted a selfie in 2011 and never imagined it could be used like this.
This is why the EU investigation matters. And not just because it's fun to watch a tech giant sweat (although, OK, there's that).
It matters because this is one of the first high-profile tests of whether governments can force platforms to treat AI-driven harm as a real emergency and not just a side quest.
And if X fails this test? Expect regulators to get more aggressive across the board, because the next platform in their crosshairs may not get as many chances.
