In a landmark case, a jury found Meta and YouTube liable for creating addictive products. Ryan Mac explains the outcome and what it could mean for tech companies going forward.
RELATED ARTICLES
A pair of verdicts held social media companies accountable for harming young users, highlighting a growing backlash as Congress struggles to pass legislation.
Regulators in Brussels accused the social media platform of maintaining a weak age-verification system and steering younger users toward inappropriate experiences.
English Wikipedia has banned the use of generative AI when writing or rewriting articles. The platform says it came to this decision because using AI to whip up copy "often violates several of Wikipedia's core content policies."
There are a couple of minor exceptions. Editors can use large language models (LLMs) to refine their own writing, but only if the copy is checked for accuracy. The policy states that this is because LLMs "can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited."
Editors can also use LLMs to assist with language translation. However, they must be fluent enough in both languages to catch errors. Once again, the information must be checked for inaccuracies.
"My genuine hope is that this can spark a broader change. Empower communities on other platforms, and see this become a grassroots movement of users deciding whether AI should be welcome in their communities, and to what extent," Wikipedia administrator Chaotic Enby wrote. The administrator also called the policy a "pushback against enshittification and the forceful push of AI by so many companies in these last few years."
One thing worth noting: Wikipedia is not a monolith. Each Wikipedia site has its own independent rules and editing teams. Some may decide to embrace LLMs, while others may go even further. Spanish Wikipedia, for instance, has fully banned the use of LLMs.
Meta didn't consult its Oversight Board last year when it announced sweeping policy changes to content moderation and a rollback of third-party fact checking in the United States in favor of Community Notes. But the company did ask the board for advice on how to expand the crowd-sourced fact checks to other countries.
Now the Oversight Board is publishing its advice to Meta. In a 15,000-word policy advisory opinion, the group urged Meta to be cautious with an international rollout, warning that an expansion of the program could "pose significant human rights risks and contribute to tangible harms" if safeguards are not put in place.
The board, notably, was asked to weigh in on a fairly narrow set of questions, including how it should evaluate whether to withhold the feature in certain countries. Meta "respectfully" asked the Oversight Board to avoid "general" critiques about the system, which it has said is modeled after X.
In its opinion, the Oversight Board said that Community Notes "could enhance users' freedom of expression and improve online discourse" with enough safeguards. But it recommended Meta withhold the feature in countries with "high polarization," as well as countries in the midst of a crisis or "protracted conflict." The board also said that Meta should avoid countries with a history of organized disinformation networks, because the notes may be more easily manipulated in such places, and countries with "linguistic complexity" that Meta may be ill-equipped to understand.
Depending on how you interpret that advice, quite a few countries could be excluded, though the board stopped short of making country-specific recommendations. Still, it raises questions about how closely Meta will follow the suggested guidelines. For example, the United