
I checked out the new Gemini and Project Astra; here’s why I believe they represent the future.
We are rapidly entering an era where AI becomes truly beneficial, and central to this is Project Astra, Google's new universal AI agent designed to assist with everyday tasks. Companies like Oppo, Honor, Motorola, and Tecno have introduced innovative ways for AI to support daily life, but Astra's multimodal approach is crucial for the future of artificial intelligence.
The concept is straightforward: just point your phone camera at an object and engage in a live conversation with Google Gemini, where you can ask questions and receive suggestions based on its observations.
The underlying technology is complex, and as expected, features are being rolled out gradually. Two initial features are finally ready, and ahead of their launch later this month, I had the opportunity to preview them along with other announcements regarding Gemini. What I witnessed is the future of AI, and it’s incredibly exciting:
Astra features: Gemini Live Video and screen sharing
Nirave Gondhia / Digital Trends
A significant update to Gemini is the new Gemini Live, which has enhanced visual capabilities thanks to Project Astra. It makes perfect sense that Astra's features will contribute to the next generation of Gemini Live in various ways.
If you've been looking forward to an AI that can help you perceive your surroundings, the new video-sharing functionality will be transformative. The demonstration involved queries related to a pottery business, with Gemini Live effectively identifying colors, shapes, and context without requiring multiple prompts.
As shown in the video above, it’s truly exhilarating, and the possibilities seem limitless. I’m not sure whether it could walk an inexperienced driver through changing a tire or diagnosing a minor engine issue, but it opens the door to seeking fashion advice, exploring health questions, or getting real-time translations while traveling.
Certainly, there’s also a professional application for this, as the new Gemini Live facilitates screen sharing. This will enable users to share their screens, pose questions, and have Gemini guide them through the process. I can envision this being especially beneficial for complex tasks such as filing paperwork, mastering advanced subjects, or completing financial and tax forms.
These aren’t the only enhancements to this new agentic AI, as Google has introduced additional Gemini-powered features across its ecosystem.
Gemini Live can now read files, documents, and images
In addition to screen sharing, Google unveiled Gemini Live's capability to read and comprehend a diverse range of images, files, and documents, expanding its core functionality well beyond the camera feed.
This capability could be particularly advantageous for students, as Google showcased a scenario where a student might utilize it. Imagine a textbook page discussing DNA; Gemini Live can delve deeper into the topic, search for supplementary relevant information, and even create a rhyme to assist with remembering key facts.
With these features, Gemini Live is set to reach new heights and may even pave the way for a next-generation Google Glass sooner rather than later. The demonstration was conducted using the Gemini app on the Galaxy S25 Ultra, so it should be accessible to all Gemini Advanced users.
New features for Google Home: Gemini Routines
This demonstration was designed to showcase how Gemini AI is transforming the smart home experience. In many ways, Gemini will help realize the long-anticipated concept of an autonomous smart home.
The demo illustrated a familiar scenario involving missing cookies. For those with kids, a partner who loves sweets, or even a crafty pet, the integration of the new Google Home and Gemini will help identify the culprit.
The demonstration showed how Gemini could sift through Nest Cam footage, pinpoint the exact moment the cookies went missing, and analyze the situation. With a simple prompt asking who took the missing cookies, Gemini can also set up a new routine to run automatically the next time the offender is spotted on that camera. I'm eager to explore routines further, especially with more intricate prompts and tasks.
The future of AI is Google Gemini
I'm impressed with Google's Gemini rollout, particularly concerning its smartphone initiatives. The extensive distribution across hundreds of millions of Android devices and collaborations with various phone manufacturers to develop new features are pivotal to increasing user engagement and feature expansion.
Google is effectively acting as a facilitator, harmonizing the different ideas and requirements from phone manufacturers as part of its feature development strategy. There may come a time when certain features are exclusive to specific phone brands, but for now, it's wonderful that all Gemini users can test and experience these innovations.
That being said, these features are primarily available to Gemini Advanced subscribers. As anticipated, the video and screen-sharing capabilities in Gemini Live are exclusive to Gemini Advanced users, though it’s unclear whether any of the other features will be accessible without a subscription. If you haven't purchased a subscription yet, now could be a worthwhile time to consider it. Furthermore, if you're in the market for a new phone, you can still get one year of the Google One AI Premium plan included with select devices.