UK Regulator Keeps X Under Pressure

The UK will not let this one go. Even as other inquiries slip quietly into bureaucratic limbo, this one stands.

The British online safety watchdog said on Thursday it would continue its investigation into X over the distribution of AI-generated intimate images – despite the platform's insistence that it is cracking down on harmful content.

At the center of the conflict are deepfake images – often sexual, often faked without consent – which have proliferated on X. The regulator's concern is far from abstract.

With these images, a reputation can be ruined in minutes – and once they are out, keeping them out of public view is a nearly impossible task.

Officials say they need to know whether X's safeguards actually prevent this, or whether they only respond once the damage is done.

And that's a fair question. We have heard the promises before. Growing unease about AI as an on-demand image generator has prompted similar scrutiny elsewhere, from probes into Musk's Grok chatbot to Japan's recent launch of an investigation into the same category of image-generation risks.

What's striking – perhaps ironic – is that X's owner, Elon Musk, has long styled himself a champion of free speech.

But regulators don't treat freedom of speech as an abstraction; they have to deal with harms.

When AI produces fake pornographic images of real people – overwhelmingly women – this is no longer a philosophical debate; it's a public safety issue.

Meanwhile, countries outside the UK are already acting on that premise.

Malaysia, for example, recently cut off access to Grok entirely after explicit AI-generated images began to circulate, a move that sent shockwaves through the tech community.

The UK investigation comes at a time when regulators are generally flexing their muscles around AI governance.

Europe, for its part, is pushing further, with sweeping legislation aimed at holding platforms accountable for how AI systems are used and governed.

The direction of travel seems clear when you see the EU's landmark AI rules being held up as a template for the rest of the world.

Here's my hot take, for whatever it's worth. This isn't about X alone. It's about whether tech companies can keep claiming our trust while deploying tools that can be misused at scale.

The UK regulator appears to be saying, politely but firmly, “Show us it works – or we'll keep looking.”

And honestly, that feels overdue. Deepfakes are no longer a future threat. They are here, they are ugly, and regulators are finally starting to act like it.
