Gemma 4 - Our Most Intelligent Open AI Model

Gemma 4

About Gemma 4

Google's most intelligent open AI model, built from Gemini 3 research and technology to maximize intelligence-per-parameter.

Gemma 4 represents a new generation of open AI models built from Gemini 3 research, delivering frontier-level intelligence with exceptional compute efficiency. It is available in four sizes (E2B, E4B, 26B, 31B) to cover deployment needs from edge devices to personal computers.

Model Sizes

Choose the right model size for your deployment scenario

E2B & E4B (Edge)

Gemma 4 E2B and E4B variants deliver maximum compute and memory efficiency, bringing a new level of intelligence to mobile and IoT devices. They can run completely offline with near-zero latency on phones, Raspberry Pi, and Jetson Nano.

E2B
E4B
  • Gemma 4 audio and vision support for real-time edge processing
  • Near-zero latency, completely offline
  • Optimized for mobile and IoT devices

26B & 31B (Frontier)

Gemma 4 26B and 31B models deliver unprecedented intelligence-per-parameter, bringing frontier intelligence to IDEs, coding assistants, and agentic workflows. Both are optimized to run on consumer GPUs.

26B
31B
  • Gemma 4 advanced reasoning for IDEs and coding assistants
  • Agentic workflows and function calling
  • Optimized for consumer GPUs (RTX 3090, etc.)

Core Features

Agentic

Gemma 4 builds autonomous agents that plan, navigate apps, and complete tasks with native function calling support
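
Function calling typically works by declaring tools as JSON schemas, letting the model emit a structured call, and dispatching that call in application code. A minimal sketch of the dispatch side of that loop (the schema shape and the `get_weather` tool are illustrative assumptions, not a Gemma-specific API):

```python
import json

# Illustrative tool declaration -- the exact schema a serving stack expects
# may differ; this mirrors the common JSON-schema style for tool calling.
TOOLS = {
    "get_weather": {
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
}

def get_weather(city: str) -> str:
    # Stub standing in for a real weather API call.
    return f"Sunny in {city}"

def dispatch(model_output: str) -> str:
    """Parse a model-emitted tool call such as
    {"tool": "get_weather", "arguments": {"city": "Paris"}}
    and execute the matching Python function."""
    call = json.loads(model_output)
    name, args = call["tool"], call.get("arguments", {})
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return globals()[name](**args)

print(dispatch('{"tool": "get_weather", "arguments": {"city": "Paris"}}'))
# Sunny in Paris
```

In a real agent the returned string would be fed back to the model as a tool result, letting it plan the next step.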

Multimodal

Gemma 4 delivers strong audio and visual understanding for rich multimodal applications

140+ Languages

Gemma 4 creates multilingual experiences that understand cultural context beyond translation

Fine-tuning

Fine-tune Gemma 4 for specific tasks using preferred frameworks
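
One popular parameter-efficient approach is LoRA, which freezes the base weight matrix and learns a small low-rank update instead. A minimal NumPy sketch of the idea (layer sizes and rank chosen purely for illustration):

```python
import numpy as np

d, k, r = 1024, 1024, 8          # layer dims and LoRA rank (illustrative)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # B starts at zero: update begins as a no-op
alpha = 16.0                             # LoRA scaling hyperparameter

def adapted_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, but it is never
    # materialized; the low-rank path is applied as a separate term.
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

# Trainable parameters: r*(d+k) instead of d*k -- a 64x reduction here.
print((d * k) // (r * (d + k)))  # 64
```

Only `A` and `B` are updated during training, which is why LoRA fine-tuning fits on a single consumer GPU.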

Efficient

Run Gemma 4 on your own hardware for efficient development and deployment

128K Context

Gemma 4 supports a context window of up to 128K tokens for handling long documents and complex conversations
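
In practice you often want to check whether a document will fit in the window before sending it. A rough pre-check using the common ~4-characters-per-token heuristic (the exact count depends on the real tokenizer, so this is only an approximation):

```python
def fits_in_context(text: str, context_tokens: int = 128_000,
                    chars_per_token: float = 4.0,
                    reserved_for_output: int = 4_000) -> bool:
    """Estimate token usage from character count and leave headroom
    for the model's reply. Use the model's actual tokenizer when an
    exact count matters."""
    est_tokens = len(text) / chars_per_token
    return est_tokens <= context_tokens - reserved_for_output

print(fits_in_context("word " * 50_000))  # ~62.5K estimated tokens -> True
```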

Application Scenarios

Competitive Coding

LiveCodeBench 80%

Agentic Workflows

τ2-bench 86.4%

Visual Understanding

MMMU Pro 76.9%

Mathematical Reasoning

AIME 2026 89.2%

Multilingual Tasks

140+ languages

Gemma 4 excels in advanced reasoning, code generation, competitive programming, mathematical reasoning (AIME 2026: 89.2%), and scientific knowledge (GPQA Diamond: 84.3%). The model shows remarkable improvement in agentic tool use (τ2-bench: 86.4%) compared to previous generations.

Performance Benchmarks

Industry-leading performance across multiple benchmarks as of April 2026

General Intelligence

Arena AI (text)
1452
MMMLU Multilingual Q&A
85.2%
MMMU Pro Multimodal
76.9%

Advanced Reasoning

AIME 2026 Mathematics
89.2%
GPQA Diamond Scientific
84.3%

Code Generation

LiveCodeBench v6
80.0%
τ2-bench Agentic Tool Use
86.4%

Benchmark data from the Gemma 4 31B IT (thinking) model. All metrics as of April 2026.

Gemma 4 vs Gemma 3

Significant improvements across all benchmarks
| Benchmark | Gemma 4 31B | Gemma 3 27B | Improvement |
| --- | --- | --- | --- |
| Arena AI (text) | 1452 | 1365 | +87 |
| MMMLU | 85.2% | 67.6% | +17.6% |
| AIME 2026 Math | 89.2% | 20.8% | +68.4% |
| LiveCodeBench | 80.0% | 29.1% | +50.9% |
| τ2-bench Agentic | 86.4% | 6.6% | +79.8% |

Core Advantages

01
Industry-Leading Intelligence

Gemma 4 achieves unprecedented intelligence-per-parameter, delivering frontier-level capabilities in a compact model. The 31B model ranks at the top of AI benchmarks, outperforming models in its class.

02
Compute Efficiency

Gemma 4 delivers maximum intelligence with minimum compute. Built from Gemini 3 research to optimize every parameter for best-in-class performance. Significant improvement in intelligence-per-parameter ratio.

03
Edge Deployment

Gemma 4 E2B and E4B variants are designed for mobile and IoT devices, enabling near-zero latency AI experiences completely offline. Audio and vision support for real-time edge processing.

04
Enterprise Security

Gemma 4 models undergo security protocols as rigorous as those for our proprietary models, giving enterprises and sovereign organizations a trusted, transparent foundation that meets the highest security standards.

Technical Specifications

Model Architecture

The Gemma 4 architecture is built from Gemini 3 research and technology. It features an optimized Transformer architecture with improvements in attention mechanisms and training stability.

Training Methodology

Gemma 4 is trained on large-scale datasets and aligned with reinforcement learning from human feedback (RLHF), with safety training and responsible AI principles incorporated throughout the development process.

Context Window

Gemma 4 supports a context window of up to 128K tokens, with extensions up to 256K for select variants. This enables processing of long documents, codebases, and extended conversations.

Safety & Responsibility

Gemma 4 undergoes comprehensive safety evaluation including child safety, content safety, and representational harm. Implements multi-layered safety protection mechanisms for reliable deployment.

Real-World Application Cases

Educational Tutoring

Gemma 4 provides personalized learning support with advanced reasoning capabilities. Excels in explaining complex mathematical and scientific concepts with step-by-step guidance.

Software Development

Gemma 4 is a powerful coding assistant for IDE integration. Achieves 80% on LiveCodeBench competitive coding problems, providing high-quality code suggestions and debugging assistance.

Enterprise Applications

Gemma 4 enables autonomous agents for business process automation. Native function calling support enables seamless integration with enterprise systems and tools.

Research Assistant

Gemma 4 supports researchers with literature review, hypothesis validation, and data analysis. Achieves 84.3% on GPQA Diamond graduate-level scientific questions.

Deployment Options

Hugging Face

Download official model weights

huggingface.co/collections/google/gemma-4

Ollama

Run locally with Ollama

ollama.com/library/gemma4

Vertex AI

Deploy at scale on Google Cloud

cloud.google.com/vertex-ai

Integration Ecosystem

Supported Frameworks

JAX
Keras
PyTorch
Hugging Face

Platforms

Google AI Studio
Kaggle
LM Studio
Docker

Edge & Mobile

Gemma.cpp
LiteRT-LM
MediaPipe
MLX

Quick Start

Get started with Gemma 4 right away. Try in Google AI Studio or download to run locally.

  1. Try Gemma 4 31B in Google AI Studio - no setup required
  2. Download from Hugging Face or Ollama for local deployment
  3. Run locally on your GPU or deploy to cloud infrastructure
  4. Fine-tune for your specific use case using LoRA or full fine-tuning
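
Once a local runtime such as Ollama is up, step 3 usually amounts to posting a chat request to its HTTP API. A sketch of building the JSON body for Ollama's `/api/chat` endpoint (the `gemma4` model tag is an assumption; check the library page for the published tags):

```python
import json

def build_chat_request(model: str, prompt: str) -> str:
    """Build the JSON body for a POST to Ollama's /api/chat endpoint,
    e.g. requests.post("http://localhost:11434/api/chat", data=body)."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete response instead of a token stream
    })

body = build_chat_request("gemma4", "Explain LoRA in one sentence.")
print(json.loads(body)["model"])  # gemma4
```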