Google's latest Gemma 3 AI models are quick, efficient, and designed for mobile devices.

Google’s AI initiatives are closely associated with Gemini, now an essential component of its popular software and hardware products. Additionally, the company has made various open-source AI models available under the Gemma label for over a year.

Today, Google announced its third-generation open-source AI models, showcasing impressive capabilities. The Gemma 3 models are available in four configurations: 1 billion, 4 billion, 12 billion, and 27 billion parameters, and they are designed to operate on devices ranging from smartphones to powerful workstations.

Google asserts that Gemma 3, tailored for mobile devices, is the world’s best single-accelerator model, capable of running on a single GPU or TPU without needing a complete cluster. In practice, this means a Gemma 3 model can run on the TPU inside the Pixel smartphone’s Tensor chip, in the same manner as the Gemini Nano model runs locally.

One of the primary advantages of Gemma 3 over the Gemini AI models is its open-source nature, allowing developers to customize it and integrate it into mobile apps and desktop software as needed. Another significant benefit is language coverage: Gemma 3 supports more than 35 languages out of the box, with pre-trained support for over 140.

Similar to the latest Gemini 2.0 series, Gemma 3 is multi-modal: it can comprehend text, images, and videos. In terms of performance, Gemma 3 reportedly outperforms other popular AI models, including the open-source DeepSeek V3, OpenAI’s reasoning-capable o3-mini, and Meta’s Llama-405B variant.

Gemma 3 offers a context window of 128,000 tokens, adequate for processing the entirety of a 200-page book. In contrast, Google’s Gemini 2.0 Flash Lite model has a context window of one million tokens. Generally, an average English word is approximately equal to 1.3 tokens in the context of AI models.
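As a quick sanity check on those numbers, here is a sketch of the arithmetic. The words-per-page figure is an assumption (a typical printed page runs a few hundred words); the 1.3 tokens-per-word ratio is the article's rule of thumb, not exact tokenizer output.

```python
WORDS_PER_PAGE = 300      # assumption: a typical printed page
TOKENS_PER_WORD = 1.3     # rough rule of thumb from the article
CONTEXT_WINDOW = 128_000  # Gemma 3's context window, in tokens

def estimated_tokens(pages: int, words_per_page: int = WORDS_PER_PAGE) -> int:
    """Estimate a document's token count from its page count."""
    return round(pages * words_per_page * TOKENS_PER_WORD)

book_tokens = estimated_tokens(200)  # 78,000 tokens, comfortably inside the window
```

By this estimate a 200-page book needs roughly 78,000 tokens, which is why the 128,000-token window is enough to hold it whole.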

Gemma 3 can also handle function calling and structured output, enabling it to interact with external datasets and perform automated agent tasks, similar to Gemini’s seamless integration across platforms like Gmail and Docs.
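To make the function-calling idea concrete, here is a minimal sketch of the pattern most chat-model runtimes use: the model is shown a tool schema and, instead of free text, replies with structured output naming the function and its JSON arguments. Everything here is illustrative; the `get_weather` function, its parameters, and the reply shape are hypothetical, so consult the documentation of whichever Gemma runtime you actually use.

```python
import json

# Hypothetical tool definition, in the common JSON-Schema style.
get_weather_tool = {
    "name": "get_weather",  # hypothetical function name
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

# Structured output: instead of prose, the model returns the function
# name plus JSON arguments, which application code then executes.
model_reply = '{"name": "get_weather", "arguments": {"city": "Oslo"}}'
call = json.loads(model_reply)

if call["name"] == get_weather_tool["name"]:
    city = call["arguments"]["city"]  # the value your code acts on
```

The point of the structured reply is that it is machine-parseable: the application, not the model, performs the actual lookup against the external dataset.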

Google's latest open-source AI models can be deployed locally or through the company’s cloud services, including the Vertex AI suite. The Gemma 3 models are now accessible via Google AI Studio, as well as platforms such as Hugging Face, Ollama, and Kaggle.

Gemma 3 aligns with a broader industry trend in which companies develop large language models (LLMs), such as Gemini, while also releasing small language models (SLMs). Microsoft employs a similar approach with its open-source Phi series.

Models such as Gemma and Phi are highly resource-efficient, making them ideal for smartphones. Furthermore, their lower latency makes them particularly suitable for mobile applications.

