|
Google is adding a memory import feature to Gemini, making it easier to switch to Gemini from another AI service: users can bring over memories, context, and chat history from other AI apps.
|
|
English Wikipedia has banned the use of generative AI when writing or rewriting articles. The platform says it came to this decision because using AI to whip up copy "often violates several of Wikipedia's core content policies."
There are a couple of minor exceptions. Editors can use large language models (LLMs) to refine their own writing, but only if the copy is checked for accuracy. The policy states that this is because LLMs "can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited."
Editors can also use LLMs to assist with language translation. However, they must be fluent enough in both languages to catch errors. Once again, the information must be checked for inaccuracies.
"My genuine hope is that this can spark a broader change. Empower communities on other platforms, and see this become a grassroots movement of users deciding whether AI should be welcome in their communities, and to what extent," Wikipedia administrator Chaotic Enby wrote. The administrator also called the policy a "pushback against enshittification and the forceful push of AI by so many companies in these last few years."
One thing worth noting: Wikipedia is not a monolith. Each language edition has its own independent rules and editing teams. Some may decide to embrace LLMs, while others may go even further than English Wikipedia has. Spanish Wikipedia, for instance, has fully banned the use of LLMs.
|
|
Meta didn't consult its Oversight Board last year when it announced sweeping policy changes to content moderation and a rollback of third-party fact checking in the United States in favor of Community Notes. But the company did ask the board for advice on how to expand the crowd-sourced fact checks to other countries.
Now the Oversight Board is publishing its advice to Meta. In a 15,000-word policy advisory opinion, the group urged Meta to be cautious with an international rollout, warning that an expansion of the program could "pose significant human rights risks and contribute to tangible harms" if safeguards are not put in place.
The board, notably, was asked to weigh in on a fairly narrow set of questions, including how Meta should evaluate whether to withhold the feature in certain countries. Meta "respectfully" asked the Oversight Board to avoid "general" critiques about the system, which it has said is modeled after X.
In its opinion, the Oversight Board said that Community Notes "could enhance users' freedom of expression and improve online discourse" with enough safeguards. But it recommended Meta withhold the feature in countries with "high polarization," as well as countries in the midst of a crisis or "protracted conflict." The board also said that Meta should avoid countries with a history of organized disinformation networks, because the notes may be more easily manipulated in such places, and countries with "linguistic complexity" that Meta may be ill-equipped to understand.
Depending on how you interpret that advice, it could exclude quite a few countries, though the board stopped short of making country-specific recommendations. Still, it raises questions about how closely Meta will follow the suggested guidelines.
|
|
Low-quality, mass-produced AI songs have been flooding music streaming platforms like Spotify for a couple of years now. That's annoying, but relatively easy for fans to avoid. For artists, though, it creates real problems: there's so much slop coming in that some of it gets falsely attributed to actual musicians on these platforms.
That messes with brand identity and audience retention, so Spotify is testing a new tool to help real artists exercise more control over their profiles. The platform's Artist Profile Protection feature lets musicians review releases before they go live and become associated with their profiles.
This should prevent AI slop from creeping in, since the actual artist will have final say when 100 new songs show up out of the blue that sort of sound like them, but with all of that pesky soul removed. The feature is in beta right now, and if an artist denies a track, it won't be associated with their profile, won't contribute to their stats and won't show up in user recommendations. It looks like a simple and potentially effective solution to an ongoing problem.
"Music has been landing on the wrong artist pages across streaming services, and the rise of easy-to-produce AI tracks has made the problem worse," Spotify wrote in a blog post. "We know how frust
|
|
OpenAI said today that it is ending support for its Sora AI video app just six months after it launched.
|
|