|
Ever since reporting earlier this year on how easy it is to trick an agentic browser, I've been following the intersection of modern AI and old-school scams. Now there's a new convergence on the horizon: hackers are apparently using AI prompts to seed Google search results with dangerous commands. When unwitting users execute those commands, they hand the hackers the access they need to install malware.
The warning comes by way of a recent report from detection-and-response firm Huntress. Here's how it works. First, the threat actor has a conversation with an AI assistant about a common search term, during which they prompt the AI to suggest pasting a certain command into a computer's terminal. They make the chat publicly visible and pay to boost it on Google. From then on, whenever someone searches for the term, the malicious instructions will show up high on the first page of results.
Huntress ran tests on both ChatGPT and Grok after discovering that a Mac-targeting data exfiltration attack called AMOS had originated from a simple Google search. The user of the infected device had searched "clear disk space on Mac," clicked a sponsored ChatGPT link and — lacking the training to see that the advice was hostile — executed the command. This let the attackers install the AMOS malware. The testers discovered that both chatbots replicated the attack vector.
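To make the mechanics concrete, here is a minimal, defender-side sketch in Python: a few regular expressions that flag the general shape of "paste this into Terminal" one-liners. The patterns and the sample command are illustrative assumptions on my part, not Huntress's detection rules or the actual AMOS payload.

```python
import re

# Illustrative patterns only: assumptions about the general shape of
# copy-paste attack commands, not Huntress's rules or the real AMOS one-liner.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s+[^|;]*\|\s*(?:ba|z)?sh"),  # remote script piped straight into a shell
    re.compile(r"base64\s+(?:-d|--decode)"),       # encoded payload unpacked just before it runs
    re.compile(r"osascript\s+-e"),                 # inline AppleScript, common in macOS stealers
]

def looks_hostile(command: str) -> bool:
    """Return True if a copied 'fix-it' command matches a known-risky shape."""
    return any(p.search(command) for p in SUSPICIOUS_PATTERNS)

if __name__ == "__main__":
    # Defanged stand-in for the kind of command a poisoned result might offer.
    pasted = 'echo "bm90IHJlYWwgbWFsd2FyZQ==" | base64 --decode | bash'
    print(looks_hostile(pasted))  # True: decodes an opaque payload and executes it
```

The specific regexes matter less than the observation they encode: the hostile step is a plain string typed into a terminal, which no download scanner or browser warning ever inspects.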
As Huntress points out, the evil genius of this attack is that it bypasses almost all the traditional red flags we've been taught to look for. The victim doesn't have to download a file, install a suspicious executable or even click a shady link. The only things they have to trust are Google's search results and the AI chatbot itself.
|
Apple today released a new update for Safari Technology Preview, the experimental browser that was first introduced in March 2016. Apple designed Safari Technology Preview to allow users to test features that are planned for future release versions of the Safari browser.
|
A UK research team based at Durham University has identified an exploit that could allow attackers to figure out what you type on your MacBook Pro — based on the sound each keyboard tap makes.
These kinds of attacks, known as acoustic side-channel attacks (ASCAs), aren't particularly new. The researchers cite work dating back to the 1950s on using acoustics to identify what people write. They also note that the first paper describing such an attack was written for the US National Security Agency (NSA) in 1972, prompting speculation that such attacks may already be in use.
"(The) governmental origin of AS- CAs creates speculation that such an attack may already be possible on modern devices, but remains classified," the researchers wrote.
|