I'm thrilled about the leak of Google’s Gemini Intelligence, but I really hope they change that name.
Gemini Intelligence appears to be Google's Android counterpart to Apple Intelligence.
While Google is helping Apple enhance its AI capabilities, the tech giant may have taken quite a liking to the name Apple Intelligence. A recent leak posted by Mysticleaks on Telegram shows “Gemini Intelligence” appearing in Google’s software, running on what looks like a Pixel smartphone.
For the moment, it's wise to approach the leak skeptically until further evidence emerges. However, if the video is authentic, Google might be gearing up to introduce this feature with the Pixel 11 series, which is anticipated to launch around August 2026.
Is Google genuinely calling it Gemini Intelligence?
The irony here is almost overwhelming. Apple Intelligence represents Apple's significant endeavor to enhance Siri, making it smarter, more personalized, and genuinely useful in the age of AI. Yet Apple has established a multi-year partnership with Google to power next-generation Siri functionality with Gemini models. That means Google may be simultaneously advancing Apple Intelligence and debuting Gemini Intelligence. It could be a stroke of branding genius, or simply poor marketing.
Google is already expanding Gemini's Personal Intelligence features, which allow the AI to interface with applications such as Gmail, Google Photos, YouTube, and Search, enabling it to respond to inquiries based on the user's context. Instead of seeking help from a generic chatbot, users can request information relevant to their emails, photos, saved details, and activities across Google services.
Why would the Pixel 11 be a fitting choice?
Pixel devices have long served as Google's experimentation platform for AI features, such as call screening and AI-driven photo editing tools. If “Gemini Intelligence” is indeed legitimate, the Pixel 11 would be the ideal device to launch it, offering a deeply integrated, phone-level AI experience. We can only hope that the name undergoes some reconsideration, assuming, of course, there's a name to revise at all.
---
A rice grain-sized sensor could provide robots with a delicate touch, preventing them from causing damage.
Robots are incredibly accurate, but they often struggle with gentleness. A machine capable of assembling a car with near-perfect precision can still apply excessive force in sensitive environments, such as inside a human eye or during intricate surgeries. Researchers at Shanghai Jiao Tong University are working on a novel force sensor to help robots “feel” the objects they touch more precisely.

The sensor is tiny, about the size of a grain of rice at just 1.7 millimeters wide, making it suitable for advanced surgical tools. It is particularly intriguing because it doesn't depend on conventional electronics; instead, it uses light to measure force from all directions, including pressure, sliding, and twisting.

The mechanism works by placing a soft material at the end of an optical fiber that deforms slightly upon contact with an object. This minuscule change alters how light travels through the sensor, and the changed light patterns are carried through optical fibers to a camera, which records them as an image. The researchers then use a machine learning model to analyze these light patterns and convert them into precise force readings. Essentially, the system learns to “interpret” touch through light alone, eliminating the need for numerous wires or individual sensors in such a compact space.
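The calibrate-then-decode idea behind the sensor is easy to sketch. The team's actual pipeline uses real camera frames and a trained machine-learning model, neither of which is public here, so the toy version below is only an illustration under stated assumptions: the light-pattern image is taken to vary linearly with the applied force, and plain least squares stands in for the learned model. All names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real optics: each "frame" is a flattened 16x16
# light-pattern image, assumed here to depend linearly on the 3-axis
# force applied to the soft tip (a simplification of the real physics).
n_pixels = 16 * 16
true_map = rng.normal(size=(n_pixels, 3))  # hidden image-from-force model

def capture_frame(force_xyz):
    """Simulate the camera image produced by a given force vector."""
    return true_map @ force_xyz + rng.normal(scale=0.01, size=n_pixels)

# Calibration: record frames at known forces, then fit the inverse
# mapping (image -> force) by least squares, playing the role of the
# paper's machine-learning model.
calib_forces = rng.uniform(-1, 1, size=(200, 3))
calib_frames = np.stack([capture_frame(f) for f in calib_forces])
inverse_map, *_ = np.linalg.lstsq(calib_frames, calib_forces, rcond=None)

# Inference: decode the force behind a new, unseen frame.
test_force = np.array([0.3, -0.5, 0.8])
estimate = capture_frame(test_force) @ inverse_map
print(estimate)
```

The point of the sketch is the workflow, not the model: record images under known loads, learn the image-to-force mapping, then read force from images alone, which is exactly why no per-axis wiring is needed at the tip.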
---
Meta’s employees are struggling to adapt to AI integration.
If you want to see what happens when a major tech company attempts to impose an AI-driven future on its staff, look no further than Meta today. The organization that built its success on extensive user data is now turning that focus inward, much to employees' displeasure.

Last month, Meta discreetly notified tens of thousands of its U.S. employees that their corporate laptops would start monitoring their keystrokes, mouse movements, clicks, and screen activity. The data is meant to teach Meta's AI models how people actually use computers. The response was swift: within hours, internal discussion threads filled with anger, confusion, and a wide array of emoji reactions that clearly conveyed employees' sentiments. When an engineering manager asked about opting out, Meta's chief technology officer, Andrew Bosworth, gave a blunt answer: there was no opting out, at least not on a company laptop.

This comes from a company now tying AI tool usage to performance reviews, running mandatory "AI Transformation Weeks" to retrain staff, and building internal dashboards that gamify AI token consumption, a metric so closely watched that some employees began developing AI agents to oversee their other AI agents, creating a self-consuming feedback loop.
---
Sci-fi accurately predicted gadgets, but not the overall experience.
While sci-fi literature has often envisioned consumer technology remarkably well, reality continues to deliver a useful yet compromised version of those dreams.
Recently, while waiting for an Uber, I encountered a GPS malfunction that seemed to enjoy misleading me. The car was close by, I was in proximity, yet we both found ourselves ensnared in that modern hassle of incorrect pins, slow maneu
