Google Says New Gemma 3 Is Ideal for Mobile, Edge Devices
August 18, 2025
Google has introduced a new ultra-light model called Gemma 3 270M, which it says is ideal for smartphones and other on-device use cases. The open model is power-efficient and small enough to run locally without an internet connection, as Google demonstrated in internal tests on a Pixel 9 Pro SoC. With just 270 million parameters, Gemma 3 270M is a fraction of the size of flagship LLMs, which typically have billions of parameters. While Google’s new model was not made for complex conversational use, it is “designed from the ground up for task-specific fine-tuning with strong instruction-following.”
The model is “capable of handling complex, domain-specific tasks and can be quickly fine-tuned in mere minutes to fit an enterprise or indie developer’s needs,” VentureBeat reports, adding that it “can also run directly in a user’s web browser, on a Raspberry Pi,” and even “in a toaster,” an example of its ability to run on simple hardware.
Gemma 3 270M has text structuring capabilities baked in, according to a blog post from Google, which is releasing the new model in two versions: a pre-trained “checkpoint” and an instruction-tuned model. Developers with technical skills can take the pre-trained checkpoint and fine-tune it for specific use cases, leveraging its general knowledge to adapt it to specialized tasks, such as a niche chatbot or processing domain-specific data.
The instruction-tuned version has been optimized to interpret and respond to common instructions, and it can be used out of the box with minimal setup for building applications on mobile and edge devices.
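In practice, “minimal setup” largely means feeding the instruction-tuned checkpoint input in the chat format it was trained on. As a rough sketch (the helper name is ours; the turn markers follow the Gemma family’s published chat convention):

```python
def format_gemma_prompt(user_message: str) -> str:
    """Wrap a single user message in Gemma's chat-turn markers.

    Gemma instruction-tuned models delimit each turn with
    <start_of_turn>/<end_of_turn> tokens; the trailing "model" header
    cues the model to begin its reply.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_prompt("Summarize this support ticket in one sentence.")
print(prompt)
```

Higher-level tooling (e.g. a chat template in an inference library) typically applies this formatting automatically, but it is worth knowing what the model expects when wiring it up by hand on a device.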
Google calls Gemma 3 270M “the perfect starting point for creating a fleet of small, specialized models, each an expert at its own task,” but says it’s “not just for enterprise,” citing “creative applications” like a Bedtime Story Generator web app.
SiliconANGLE compares the new model’s performance with rival small models on various benchmarks, suggesting the results “look fairly impressive” but concluding “it may not be best in class,” and mentioning Liquid AI’s LFM2-350M, released last month, as a strong contender.
Ars Technica explores the definitional gap between the mostly open Gemma models and strict “open source,” noting that the Gemma models and their weights are free and not bound by a commercial licensing agreement, though their use is still subject to basic terms of use.
For those who want to run it locally, the Gemma 3 270M models can be downloaded from Hugging Face, Ollama, Kaggle, LM Studio, or Docker. Alternatively, the model can be accessed in the cloud via Google’s turnkey environment Vertex AI. (The Google blog post provides additional resources for running and fine-tuning the model.)
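For a quick local test via the Ollama route, the workflow looks roughly like this (the exact model tag is an assumption; check Ollama’s model library for the current name):

```shell
# Pull the weights to local storage (tag assumed; verify in Ollama's library)
ollama pull gemma3:270m

# Run a one-off prompt entirely on-device, no internet connection required
ollama run gemma3:270m "Write a one-line bedtime story about a robot."
```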