No kidding - this is a real thing, from a real manufacturer, meant for real-world use. The Pit Bull 1.0 comes from KNK Karts, a seasoned Indian go-kart maker. And boy does it look like fun.
Austria is the latest country to prepare a social media ban for its children, though it's drawing the line differently than others by covering anyone under 14. In a press release, the Austrian government said it has introduced a comprehensive catalogue of measures meant to shield minors from the harms of social media, with an official bill to be introduced by the end of June.
Andreas Babler, a vice chancellor and leader of the Social Democratic Party of Austria, said the government's efforts would include the new age restriction, improved media literacy and clear rules for social media platforms. Austrian lawmakers didn't detail what the upcoming rules would be, but the country is likely to follow in the footsteps of many others that have implemented or are pursuing similar bans. While Australia was the first to implement a social media ban for anyone under 16, other European countries like Spain and the
We're in the middle of Amazon's "Big Spring Sale," which includes deals and offers on everything from Apple devices to clothes, kitchen electronics, furniture, and much more. The new event is set to run through March 31, so you'll have a few days of discounts to shop, with new markdowns appearing every day.
Anthropic has begun previewing "auto mode" inside of Claude Code. The company describes the new feature as a middle path between the app's default behavior, which sees Claude request approval for every file write and bash command, and the "--dangerously-skip-permissions" flag some coders use to make the chatbot function more autonomously.
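To make the contrast concrete, here is a sketch of how those modes look from the command line. The `--dangerously-skip-permissions` flag is a real Claude Code option; the final invocation showing how auto mode might be enabled is an assumption, since Anthropic hasn't published the exact syntax here.

```shell
# Default behavior: Claude Code prompts for approval before
# every file write and bash command it wants to run.
claude

# The escape hatch mentioned above: skips every permission
# prompt entirely, with nothing guarding risky actions.
claude --dangerously-skip-permissions

# Hypothetical: how the new middle path might be toggled once
# it rolls out broadly. The value "auto" is an assumption, not
# a documented option; check `claude --help` for actual syntax.
claude --permission-mode auto
```

This is a config-style fragment rather than a runnable program, since it depends on the proprietary `claude` CLI being installed.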
With auto mode enabled, a classifier system guides Claude, giving it permission to carry out actions it deems safe, while redirecting the chatbot to take a different approach when it determines Claude might do something risky. In designing the system, Anthropic's goal was to reduce the likelihood of Claude carrying out mass file deletions, extracting sensitive data or executing malicious code.
Of course, no system is perfect, and Anthropic warns as such. "The classifier may still allow some risky actions: for example, if user intent is ambiguous, or if Claude doesn't have enough context about your environment to know an action might create additional risk," the company writes.
Anthropic doesn't mention a specific incident as inspiration for auto mode, but the recent 13-hour AWS outage, which Amazon suffered after one of the company's AI tools reportedly deleted a hosting environment, was probably front of mind for the company. Amazon blamed that specific incident on human error, saying the staffer involved had "broader permissions than expected."
Team plan users can preview auto mode starting today, with the feature set to roll out to Enterprise and API users in the coming days.
This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-releases-safer-claude-code-auto-mode-to-avoid-mass