Google launches Gemma 4 AI models for data centres and smartphones: What CEO Sundar Pichai and Demis Hassabis have to say




Google has launched Gemma 4, the latest and most capable version of its open-source AI model family. The company says the new models mark a significant step forward in making frontier-level artificial intelligence (AI) accessible to developers everywhere: from powerful data centre workstations to the smartphone in your pocket. Google claims that since it released the first generation of Gemma, developers have downloaded it more than 400 million times, spawning a community ecosystem of over 100,000 model variants built on top of Google's foundation.

Google CEO Sundar Pichai said the models pack an "incredible amount of intelligence per parameter", while Google DeepMind chief executive Demis Hassabis called Gemma 4 the "best open models in the world for their respective sizes."

"Gemma 4 is here, and it's packing an incredible amount of intelligence per parameter," said Pichai, sharing a post by Hassabis, who wrote: "Excited to launch Gemma 4: the best open models in the world for their respective sizes. Available in 4 sizes that can be fine-tuned for your specific task: 31B dense for great raw performance, 26B MoE for low latency, and effective 2B & 4B for edge device use – happy building!"

Gemma 4 is available now under an Apache 2.0 licence, meaning developers can use, modify and build on it freely.

Four Gemma 4 models, one goal

Google is releasing Gemma 4 in four sizes, designed to cover everything from mobile devices to high-end developer machines:

  • E2B (Effective 2 Billion parameters) — Built for phones and IoT devices
  • E4B (Effective 4 Billion parameters) — Also optimised for edge and mobile use
  • 26B Mixture of Experts (MoE) — A mid-range powerhouse
  • 31B Dense — The flagship, currently ranked #3 among all open AI models in the world on the industry-standard Arena AI leaderboard

That last ranking is particularly striking: the 31B model is said to have outperformed competitors 20 times its size.

What Gemma 4 can do

Google says Gemma 4 moves well beyond basic question-and-answer chat. Key capabilities include:

  • Advanced reasoning: The model can handle multi-step planning and complex logic, with improvements in mathematics and instruction-following tasks.
  • Agentic workflows: Gemma 4 natively supports function-calling, structured data output and system instructions, allowing developers to build AI agents that interact with external tools, APIs and services autonomously.
  • Code generation: Developers can run Gemma 4 entirely offline on a local machine, turning a standard workstation into a private AI coding assistant.
  • Vision and audio: All four models can process images and video natively. The two smaller edge models also support audio input for speech recognition.
  • Long context windows: The edge models can process up to 128,000 tokens in a single prompt, while the larger models stretch to 256,000 tokens.
  • 140+ languages: Gemma 4 has been trained natively across more than 140 languages, making it one of the most globally inclusive open models available.
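To make the "agentic workflows" point concrete: function-calling generally means the model emits a structured (often JSON) description of a tool invocation, which the developer's own code then executes. The sketch below is illustrative only — the `get_weather` tool, the dispatcher, and the exact JSON shape are assumptions for the example, not Gemma's documented interface:

```python
import json

# Hypothetical local tool the agent is allowed to call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stubbed result for illustration

TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and invoke the matching function."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]          # look up the requested tool
    return fn(**call["arguments"])    # run it with the model-supplied arguments

# Structured output a function-calling model might produce:
raw = '{"name": "get_weather", "arguments": {"city": "London"}}'
print(dispatch_tool_call(raw))  # Sunny in London
```

The application, not the model, performs the side effect: the model only proposes the call, and the dispatcher decides whether and how to execute it.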

Gemma models for smartphones

Perhaps the best aspect of Gemma 4 is just how small Google has managed to make it while keeping it powerful. According to Google, the E2B and E4B models were built from the ground up in close collaboration with Google's Pixel team, Qualcomm Technologies and MediaTek — the companies behind the chips that power billions of Android devices worldwide. The result is a model that runs completely offline, with near-zero latency, on everyday devices including phones, Raspberry Pi boards and Nvidia Jetson hardware.

Google describes Gemma 4 as being built from the same world-class research and technology that powers Gemini 3, its flagship proprietary model.



