At I/O 2024, Google's teaser for Project Astra gave us a glimpse of where AI assistants are headed. It's a multi-modal feature that combines the smarts of Gemini with the kind of image recognition abilities you get in Google Lens, plus powerful natural language responses. But while the promo video was slick, after trying it out in person, it's clear there's a long way to go before something like Astra lands on your phone. So here are three takeaways from our first experience with Google's next-gen AI.
Sam's take:
Currently, most people interact with digital assistants using their voice, so Astra's multi-modality (using sight and sound in addition to text and speech to communicate with an AI) is relatively novel. In theory, it allows computer-based entities to work and behave more like a real assistant or agent - which was one of Google's big buzzwords for the show - instead of something more robotic that simply responds to spoken commands.
Photo by Sam Rutherford/Engadget
In our demo, we had the option of asking Astra to tell a story based on some objects we placed in front of the camera, after which it told us a lovely tale about a dinosaur and its trusty baguette trying to escape an ominous red light. It was fun, the story was cute, and the AI worked about as well as you would expect. But at the sam