
TikTok lets users dial down AI-generated videos

TikTok is stirring things up: the app has just introduced a feature that lets users decide how much AI-generated content appears in their feed – and if you're anything like me, that's big news.

The move was revealed during TikTok's Trust and Safety summit in Dublin, with the company saying more than 1.3 billion uploaded videos have already been labeled as AI-generated.

Users will soon see a new "Manage topics" option under content preferences that allows them to narrow (or expand) the AI content they see.
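TikTok hasn't published how the "Manage topics" control works internally, but the idea of letting a user preference dial AI content up or down can be sketched roughly like this – all names, the data model, and the weighting formula below are assumptions for illustration:

```python
# Hypothetical sketch of a preference-weighted feed filter.
# TikTok's actual ranking pipeline is not public; this only
# illustrates the concept of a user-controlled AI dial.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    score: float          # base ranking score from the recommender
    ai_generated: bool    # set via creator self-labels or metadata

def rank_feed(videos, ai_preference: float):
    """Re-rank a feed. ai_preference in [0, 1]:
    0.0 suppresses AI content entirely, 1.0 applies no penalty."""
    def adjusted(v: Video) -> float:
        return v.score * (ai_preference if v.ai_generated else 1.0)
    return sorted(videos, key=adjusted, reverse=True)

feed = [
    Video("cat compilation", 0.90, ai_generated=False),
    Video("synthetic travel vlog", 0.95, ai_generated=True),
]

# With the dial turned low, the human-made clip wins despite
# the synthetic clip's higher base score.
print([v.title for v in rank_feed(feed, ai_preference=0.2)])
# → ['cat compilation', 'synthetic travel vlog']
```

The design choice worth noting: a multiplicative weight like this dials content down rather than hard-filtering it, which matches the article's framing of "narrow (or expand)" rather than an on/off switch.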

Here's where it gets interesting: the move signals a growing sense that people may actually want less algorithmic, AI-generated content, not more.

We have written a lot about how social media platforms push us down algorithmic rabbit holes – sometimes it feels like everyone in the world with a smartphone is chasing the same rabbit.

TikTok is now giving us a small ladder to climb out. It comes amid growing concern about what happens when feeds fill up with "AI slop" – content generated quickly and at low quality.

TikTok is also taking steps toward label transparency, adding much-needed labels that mark AI-generated videos as artificial – covering not just work produced with its own tools, but also content carrying provenance metadata from the wider C2PA effort.

The goal: let viewers know what they're watching without digging through metadata.

In my opinion, it's a good move – but it also points to a bigger issue: as AI-generated content floods social media feeds and gets ever closer to passing as human-made, the lines blur.

That puts more than truth on the line – trust, mental health, and our sense of agency are at stake too.

Are recommendation systems still working for us, or increasingly against us?

Researchers warn that filter bubbles and engagement-driven ranking amplify the familiar, not the new.

What I want to know next: how granular will this control actually be?

Does dialing down AI videos meaningfully change the experience, or is it little more than cosmetic?

And what happens to small creators – the humans – as ranking and monetization models shift around them?

If platforms are going to empower viewers, they also bear responsibility for protecting the creators who make the content.

In other words: the new slider is part of a shift toward feeds where the user, not just an algorithm, shapes the experience.

It won't solve everything – moderation issues, engagement traps, the dopamine loop – but it gives us more control over what we see.

And in feeds that so often feel automatic, that's no small thing.
