|
Smart home devices or apps acting up? I've been there. Here's what I do to get things working right.
|
|
Commentary: When we assign emotional intelligence to an entity where none exists, we start trusting AI in ways it was never meant to be trusted.
|
|
Apple today released iOS 26.2, the second major update to the iOS 26 operating system that came out in September. iOS 26.2 comes a little over a month after iOS 26.1 launched. iOS 26.2 is compatible with the iPhone 11 series and later, as well as the second-generation iPhone SE.
|
When the bubble pops, you can't say there weren't signs.
|
|
I'll send you the best deals under $50, so you won't miss any perfect holiday gifts when they're on sale.
|
|
Apple released iOS 26.2 on December 12, the latest version of iOS 26. It isn't the biggest update, but it brings quite a few helpful new features to your iPhone.
|
|
Last week, Netflix surprised us all when it announced plans for an $82.7 billion acquisition of Warner Bros., a move that would fundamentally reshape the world of streaming video and Hollywood. But Paramount isn't giving up on WB — this week it launched a $108 billion hostile takeover effort. In this episode, we discuss why everyone is fighting for WB, and why Netflix may be the best worst option for the storied movie studio.
|
|
Ever since reporting earlier this year on how easy it is to trick an agentic browser, I've been following the intersections between modern AI and old-school scams. Now, there's a new convergence on the horizon: hackers are apparently using AI prompts to seed Google search results with dangerous commands. When unsuspecting users execute them, these commands give the hackers the access they need to install malware.
The warning comes by way of a recent report from detection-and-response firm Huntress. Here's how it works. First, the threat actor has a conversation with an AI assistant about a common search term, during which they prompt the AI to suggest pasting a certain command into a computer's terminal. They make the chat publicly visible and pay to boost it on Google. From then on, whenever someone searches for the term, the malicious instructions will show up high on the first page of results.
Huntress ran tests on both ChatGPT and Grok after discovering that a Mac-targeting data exfiltration attack called AMOS had originated from a simple Google search. The user of the infected device had searched "clear disk space on Mac," clicked a sponsored ChatGPT link and — lacking the training to see that the advice was hostile — executed the command. This let the attackers install the AMOS malware. The testers discovered that both chatbots replicated the attack vector.
As Huntress points out, the evil genius of this attack is that it bypasses almost all the traditional red flags we've been taught to look for. The victim doesn't have to download a file, install a suspicious executable or even click a shady link. The only things they have to trust are a result near the top of a Google search and the word of an AI chatbot.
|
|