|
Having Perplexity's AI models on devices from the world's biggest phone-maker puts the company under a brighter spotlight.
|
|
Two stories about the Claude maker Anthropic broke on Tuesday that, taken together, arguably paint a chilling picture. First, US Defense Secretary Pete Hegseth is reportedly pressuring Anthropic to relax its AI safeguards and give the military unrestricted access to its Claude AI chatbot. Second, on the same day the Hegseth news broke, the company dropped its centerpiece safety pledge.
On Tuesday, Anthropic said it was modifying its Responsible Scaling Policy (RSP) to lower its safety guardrails. Until now, the company's core pledge had been to stop training new AI models unless specific safety guarantees could be met in advance. This policy, which set hard tripwires to halt development, was a big part of Anthropic's pitch to businesses and consumers.
"Two and a half years later, our honest assessment is that some parts of this theory of change have played out as we hoped, but others have not," Anthropic wrote. Now, its updated policy approaches safety relatively, rather than with strict red lines.
|
|
I went hands-on with Samsung's newest base and Plus model phones. The Galaxy S26 has a larger screen and a bigger battery, while the S26 Plus is a lot like the S25 Plus.
|
|
Defense Secretary Pete Hegseth has reportedly given Anthropic until Friday to drop certain guardrails for military use, according to Axios. The outlet also reported that CEO Dario Amodei met with Hegseth yesterday as the Pentagon ratcheted up pressure on the AI company to give in to its demands.
The maker of Claude has reportedly been given an ultimatum: either yield to the government's demands and remove limits for certain military applications, or potentially be forced to tailor its AI model to the government's needs under the Defense Production Act.
Anthropic, for its part, has said that while it was willing to adopt certain policies for the Pentagon, it would not allow its model to be used for mass surveillance of Americans or for the development of autonomous weapons.
Claude is currently the only AI model employed in some of the government's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a defense official told Axios.
The Pentagon is reportedly ramping up conversations with OpenAI and Google about using their models for classified work. ChatGPT and Gemini are already approved for unclassified government use.
|
|