|
The court has granted Anthropic's request for a preliminary injunction, preventing the government from banning its products for federal use and from formally labeling it as a "supply chain risk," at least for now. If you'll recall, things turned sour between the company and the Trump administration when Anthropic refused to change the terms of its contract that would allow the government to use its technology for mass surveillance and the development of autonomous weapons.
In response to Anthropic's refusal, the president ordered federal agencies to stop using Claude and the company's other services. The Defense Department also officially labeled it a supply chain risk, a designation typically reserved for entities based in US adversaries like China that threaten national security. In addition, Defense Secretary Pete Hegseth warned companies that if they want to work with the government, they must sever ties with Anthropic. The AI company challenged the designation in court, calling it unlawful and in violation of its rights to free speech and due process. It also asked the court to pause the ban while the lawsuit is ongoing.
In a court filing, the Defense Department said giving Anthropic continued access to its warfighting infrastructure would "
|
|
Pick up some excellent smart trackers for just $15 each and never lose your things again. But don't delay: AirTag deals tend to sell out during Amazon sales.
|
|
Google is adding a new memory import feature to Gemini that lets users bring their memories, context and chat history over from other AI apps, making it easier for customers to switch to Gemini from another AI service.
|
|
OpenAI has "indefinitely" abandoned plans to release an erotic chatbot for adults following concerns from employees and investors, the company confirmed to The Financial Times. Plans for such a feature, first announced in October 2025 for release in December last year, had already been delayed while the company debated whether to release it at all. It's the second app OpenAI has decided to shelve this week, after announcing on Tuesday that it was shutting down its Sora video generator.
The adult-oriented chatbot, reportedly called "Citron mode," is now on hold with no planned release date. The company reportedly had difficulty retraining models that previously avoided erotic content, as well as filtering out illegal content like bestiality or incest, two people familiar with the matter told the FT.
OpenAI said that it wanted to conduct long-term research on the effects of erotic chats and user attachment to AI, adding that there was not yet enough "empirical evidence" on the subject. The company also said it wanted to focus on its core productivity tools like coding assistants and drop "side quests" like Sora and the erotic chatbot.
The idea for adult features came after OpenAI announced that it would add parental controls and automatic age detection features for ChatGPT. CEO Sam Altman said back in October that the company had always been careful about such issues over concerns around unhealthy AI attachments, but felt comfortable that it could "safely relax the restrictions in most ca
|
|
The European Union has opened a formal investigation into whether Snapchat has breached Digital Services Act (DSA) regulations regarding the safeguarding of children using its app.
Regulators say that the company, whose audience demographic has always skewed young, may not be doing enough to protect minors from grooming and "recruitment for criminal purposes." The EU is also looking into whether Snapchat's younger users are too easily accessing information on how to buy illegal drugs and age-restricted products.
Brussels argues that while Snapchat requires users to be at least 13 years of age to sign up for an account, its self-declaration age assurance system may not be an adequate means of ensuring those younger than the minimum age can't engage with the platform. The European Commission also says the current measures fail to assess whether users are younger than 17 years old, which it says is necessary for an "age-appropriate experience." It also alleges that adults are able to exploit the current system to lie about their own age and impersonate minors.
Investigators believe that the app itself doesn't allow other users to report accounts they suspect are being used by people younger than the minimum age requirement. Moreover, they argue that reporting illegal content found on the app is not easy enough, and that Snapchat may not be informing its users about "possibilities for redress."
Other issues being looked at by the European Commission include child and teen accounts being recommended to other users by Snapchat's Find Friends feature and insufficient guidance on available account safety features.
The investigators are now in the process of gath
|
|
AMD just revealed the Ryzen 9950X3D2 Dual Edition desktop processor, which is a beastly follow-up to last year's 9950X3D. This is the company's first desktop processor where both chiplets have been equipped with AMD's proprietary 3D V-Cache technology, which seems like a boon for gamers. Each chiplet includes 104MB of cache, offering an incredible 208MB total on-chip cache.
"208MB of cache means more game data, more assets and more working data sitting right next to the CPU cores," AMD Senior VP Jack Huynh explained in an announcement video.
Just like last year's release, the 9950X3D2 features a 16-core processor based on the Zen 5 architecture. This new release has increased to a 200W TDP, compared to the 170W TDP of the original. This could indicate an increase in speed and performance, but with more heat output.
AMD says the chip will be great for both gaming and for creative workloads, like compiling game engines, running AI models and rendering 3D objects. The company says it can deliver a five to 10 percent performance boost whe
|
|
Apple plans to allow third-party AI chatbots to integrate with Siri in iOS 27, reports Bloomberg. Apple already has a partnership with OpenAI that lets Siri hand questions off to ChatGPT, but Apple will expand that integration to other companies like Google and Anthropic.
|
|
Meta didn't consult its Oversight Board last year when it announced sweeping policy changes to content moderation and a rollback of third-party fact checking in the United States in favor of Community Notes. But the company did ask the board for advice on how to expand the crowd-sourced fact checks to other countries.
Now the Oversight Board is publishing its advice to Meta. In a 15,000-word policy advisory opinion, the group urged Meta to be cautious with an international rollout, warning that an expansion of the program could "pose significant human rights risks and contribute to tangible harms" if safeguards are not put in place.
The board, notably, was asked to weigh in on a fairly narrow set of questions, including how it should evaluate whether to withhold the feature in certain countries. Meta "respectfully" asked the Oversight Board to avoid "general" critiques about the system, which it has said is modeled after X.
In its opinion, the Oversight Board said that Community Notes "could enhance users' freedom of expression and improve online discourse" with enough safeguards. But it recommended Meta withhold the feature in countries with "high polarization," as well as countries in the midst of a crisis or "protracted conflict." The board also said that Meta should avoid countries with a history of organized disinformation networks, because the notes may be more easily manipulated in such places, and countries with "linguistic complexity" that Meta may be ill-equipped to understand.
Depending on how you interpret that advice, that could exclude quite a few countries, though the board stopped short of making country-specific recommendations. Still, it raises questions about how closely Meta will follow the suggested guidelines. For example, the United
|
|
Low-quality, mass-produced AI songs have been flooding music streaming platforms like Spotify for a couple of years now. This is annoying, but relatively easy for fans to avoid. However, it leads to real problems for artists. There's so much slop coming in that some gets falsely attributed to actual musicians on these platforms.
This messes with brand identity and audience retention, so Spotify is testing a new tool to help real artists exercise more control over their profiles. The platform's Artist Profile Protection feature lets musicians review releases before they go live and become associated with their profiles.
This should prevent AI slop from creeping in, as the actual artist will have final say when 100 new songs show up out of the blue that sort of sound like them but with all of that pesky soul removed. It's in beta right now and if an artist denies a track, it won't be associated with their profile, won't contribute to stats and won't show up in user recommendations. This looks to be a simple and potentially effective solution to an ongoing problem.
"Music has been landing on the wrong artist pages across streaming services, and the rise of easy-to-produce AI tracks has made the problem worse," Spotify wrote in a blog post. "We know how frust
|
|
OpenAI today said that it is ending support for its Sora AI video app just six months after its initial launch.
|
|