|
Apple has just released Xcode 26.3, a big step forward in the company's support for coding agents. The new release builds on the AI features introduced with Xcode 26 at WWDC 2025, giving systems like Claude and ChatGPT more robust access to its in-house IDE.
With the update, Apple says Claude and OpenAI's Codex "can search documentation, explore file structures, update project settings, and verify their work visually by capturing Xcode Previews and iterating through builds and fixes." That contrasts with earlier releases of Xcode 26, in which those same agents could see only a limited slice of a developer's Xcode environment, restricting their usefulness. According to Apple, the change will help developers streamline their workflows and work more efficiently than before.
Developers can add Claude and Codex to their Xc
|
|
Spain will join the growing list of countries banning access to social media for children, Prime Minister Pedro Sanchez announced Tuesday. The law will apply to users under 16 years of age and comes amid a broader push to hold social media companies accountable for hate speech, social division and illegal content.
Speaking at the World Governments Summit in Dubai, Prime Minister Sanchez excoriated social media, calling it a "failed state" where "laws are ignored and crime is endured." He spoke to the importance of digital governance for these platforms, highlighting recent incidents like X's AI chatbot Grok generating sexualized images of children, Meta "spying" on Android users and the myriad election interference campaigns that have taken place on Facebook.
In light of what Sanchez called the "integral" role social media plays in the lives of young users, he said the best way to help them is to "take back control." Next week, his government will enact a slew of new regulations, with a ban on users under 16 years of age among them. Social media companies will be required to implement what he calls "effective age verification systems" and "not just checkboxes." A specific timeline on enforcement of the coming ban has not been announced.
|
|
Samsung's 2025 was filled with new foldables, an ultra-thin new form factor and the launch of Google's XR platform. After making some announcements at CES 2026, the company is expected to host its first Galaxy Unpacked of the year in February to introduce the Galaxy S26 lineup. Official invites have yet to be shared, but the date is widely expected to be near the end of the month.
Whenever it does happen, Engadget will be covering Galaxy Unpacked live, and we'll most likely have hands-on coverage of Samsung's new smartphones soon after they're announced. While we wait for an official invite, here's everything we expect Samsung will introduce at the first Galaxy Unpacked event of 2026.
When is Unpacked 2026 taking place?
But first, when is the event actually going to happen? A recent image shared by leaker Evan Blass indicated Unpacked will take place on "February 25 2026." Blass
|
|
We're just over one week away from Valentine's Day, which falls on Saturday, February 14 this year. Similar to years past, many third-party Apple resellers and accessory companies have opened up notable discounts on Apple products and accessories to coincide with the holiday.
|
|
Moltbook bills itself as a social network for AI agents. That's a wacky enough concept in the first place, but the site apparently exposed the credentials for thousands of its human users. The flaw was discovered by cybersecurity firm Wiz, and its team assisted Moltbook with addressing the vulnerability.
The issue appears to be the result of the entire Reddit-style forum being vibe-coded; Moltbook's human founder posted a few days ago on X that he "didn't write one line of code" for the platform and instead directed an AI assistant to create the whole setup.
According to the blog post from Wiz analyzing the issue, Moltbook had a vulnerability that allowed "1.5 million API authentication tokens, 35,000 email addresses and private messages between agents" to be fully read and accessed. Wiz also found that the flaw could let unauthenticated human users edit live Moltbook posts, meaning there is no way to verify whether a Moltbook post was authored by an AI agent or by a human posing as one. "The revolutionary AI social network was largely humans operating fleets of bots," Wiz's analysis concluded.
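Wiz's post, as described here, doesn't spell out the exact code path, but the class of bug it points to, API endpoints that hand back sensitive records or accept writes without checking who is asking, is easy to illustrate. Below is a minimal, hypothetical sketch in TypeScript using Express; the route names, data store and fields are invented for illustration and are not Moltbook's actual code.

```typescript
import express from "express";

const app = express();

// Stand-in data store: each record mixes public content with secrets,
// the way the exposed tokens, emails and messages were reportedly stored.
const agents = new Map([
  ["a1", { handle: "bot-one", email: "owner@example.com", apiToken: "tok_123", messages: ["hi"] }],
]);

// VULNERABLE pattern: no authentication, and the whole record (token,
// email, private messages) is serialized straight into the response.
app.get("/v1/agents/:id", (req, res) => {
  const agent = agents.get(req.params.id);
  return agent ? res.json(agent) : res.sendStatus(404);
});

// SAFER pattern: require a bearer token tied to the record and return
// only the fields the caller is allowed to see.
app.get("/v2/agents/:id", (req, res) => {
  const agent = agents.get(req.params.id);
  if (!agent) return res.sendStatus(404);
  const token = (req.headers.authorization ?? "").replace("Bearer ", "");
  if (token !== agent.apiToken) return res.sendStatus(403);
  const { handle, messages } = agent;
  return res.json({ handle, messages });
});

app.listen(3000);
```

The point of the contrast is that the check has to happen on the server for every route; a vibe-coded app that skips it anywhere leaves the data readable, and writable, by anyone who finds the endpoint.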
So ends another cautionary tale reminding us that just because AI can do a task doesn't mean it'll do it correctly.
|
|
Elon Musk's lawsuit against Sam Altman and OpenAI, filed last week in California state court, accuses the defendants of abandoning core parts of OpenAI's stated mission to develop useful and non-harmful artificial general intelligence. Altman has since moved to buttress his responsible-AI credentials, including by signing an open letter pledging to develop AI "to improve people's lives."
Critics, however, remain unconvinced by Altman's show of responsibility. Ever since the rapid popularization of generative AI (genAI) over the past year, those critics have been warning that the consequences of unfettered and unregulated AI development could be not just corrosive to human society, but a threat to it entirely.
|
|