YouTube is just as wary of the rise of AI slop as you, and that's why more AI-generated content is coming to the platform in the near future. In a lengthy blog post outlining YouTube's 2026 plans, CEO Neal Mohan said the company will continue to embrace this new "creative frontier" by soon allowing its creators to throw together Shorts using their AI-generated likeness.
Mohan didn't elaborate further about how this feature will work when it launches, but acknowledged the "critical" issue of deepfakes currently polluting the web, and reaffirmed his company's support for new legislation such as the NO FAKES Act. YouTube also allows its own creators to protect themselves against unauthorized use of their likeness using a detection feature that scans newly uploaded videos for matches.
Other fresh AI features referenced in the post (note: in no way slop) include the currently-in-beta no-code Playables platform, which lets you make games using Gemini 3 with a single text prompt, as well as new music creation tools. At the same time, Mohan said YouTube is building on its existing systems designed to combat spam, clickbait and "low quality AI content." He added that an average of six million daily viewers watched more than 10 minutes of AI-autodubbed content in December, despite the issues that rival platforms have had with similar features.
Mohan didn't say when w
You know things are messed up when a Big Tech company fights accusations of union-busting by insisting the firings were merely AI-driven layoffs. That's where things stand after a group of fired TikTok moderators in the UK filed a legal claim with an employment tribunal. The Guardian reported on Friday that around 400 TikTok content moderators who were unionizing were laid off before Christmas.
The workers were sacked a week before a vote was scheduled to establish a collective bargaining unit. The moderators said they wanted better protection against the personal toll of processing traumatic content at a high speed. They accused TikTok of unfair dismissal and violating UK trade union laws.
"Content moderators have the most dangerous job on the internet," John Chadfield, the national officer for tech workers at the Communication Workers Union (CWU), said in a statement to The Guardian. "They are exposed to child sexual abuse material, executions, war and drug use. Their job is to make sure this content doesn't reach TikTok's 30 million monthly users. It is high pressure and low paid. They wanted input into their workflows and more say over how they kept the platform safe. They said they were being asked to do too much with too few resources."
TikTok denied that the firings were union-busting, calling the accusations "baseless." Instead, the company claimed the layoffs were part of a restructuring plan amid its adoption of AI for content moderation. The company said 91 percent of transgressive content is now removed automatically.
The company first announced a restructuring exercise in August, just as hundre