Google Gemini 3 Flash Launch Shakes Up AI With Speed and Coding Power

Google Brings Gemini 3 Flash to the Spotlight

Google has released Gemini 3 Flash, the newest model in its Gemini lineup, with a focus on speed, efficiency, and lower computing costs. The model joins Gemini 3 Pro and Gemini 3 Deep Think. Google says it delivers faster responses while maintaining strong reasoning abilities.

The launch shows Google’s intent to move quickly in the AI race. The new AI model targets users who want reliable performance without heavy resource usage.

Gemini 3 Flash Prioritises Speed

Google designed Gemini 3 Flash to reduce response time across tasks. The company claims the model runs three times faster than Gemini 2.5 Pro. It also consumes nearly 30 percent fewer tokens on average.

This efficiency helps developers scale projects while controlling costs. Google positions the model as fast, practical, and suitable for daily AI workloads.

Coding Performance Gets a Boost

Google says the new model outperforms Gemini 3 Pro in several coding tasks. The model handles debugging, code generation, and logical reasoning with improved speed.

Internal evaluations show strong results across multiple benchmarks. Google highlights its performance in reasoning, academic evaluation, and software testing tasks.

Benchmark Scores Shared by Google

Gemini 3 Flash scored 90.4 percent on the GPQA Diamond benchmark. The test uses graduate-level science questions to probe deep reasoning and domain knowledge.

The model achieved 33.7 percent on Humanity’s Last Exam without tools. It also scored 81.2 percent on MMMU Pro and 78 percent on SWE-bench Verified. These scores indicate stable performance across complex scenarios.

Where Gemini 3 Flash Is Available

Google is rolling out Gemini 3 Flash globally through the Gemini app and website. Users can also access it through AI Mode in Google Search.

Developers can use the model via the Gemini API in Google AI Studio, Gemini CLI, and Antigravity. Enterprises can deploy it using Vertex AI and Gemini Enterprise.
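For developers taking the API route, a minimal call looks like the sketch below, written with the google-genai Python SDK. The model identifier "gemini-3-flash" is an assumption for illustration; check Google AI Studio for the exact string published for Gemini 3 Flash.

```python
# Minimal sketch of calling Gemini 3 Flash through the Gemini API
# using the google-genai Python SDK (pip install google-genai).
# NOTE: "gemini-3-flash" is an assumed model ID; confirm the exact
# identifier in Google AI Studio before using it.
from google import genai

client = genai.Client()  # reads the GEMINI_API_KEY environment variable

response = client.models.generate_content(
    model="gemini-3-flash",
    contents="Summarise the trade-offs between speed and reasoning depth.",
)
print(response.text)
```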

Gemini 3 Flash Pricing Explained

Google has priced the AI model to stay competitive. Input tokens cost $0.50 per million, and output tokens cost $3 per million. Audio input is priced at $1 per million tokens.

The model costs more than its predecessor, Gemini 2.5 Flash, but remains cheaper than Gemini 3 Pro. Google presents it as a balanced option for speed and cost control.
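As a rough illustration of what those rates mean in practice, the sketch below applies the per-million-token prices quoted above to a hypothetical workload. The request count and token averages are made up for the example; only the rates come from Google's published pricing.

```python
# Back-of-the-envelope cost estimate using the rates quoted above.
INPUT_PRICE_PER_M = 0.50   # USD per 1M text input tokens
OUTPUT_PRICE_PER_M = 3.00  # USD per 1M output tokens
AUDIO_PRICE_PER_M = 1.00   # USD per 1M audio input tokens

def estimate_cost(input_tokens: int, output_tokens: int, audio_tokens: int = 0) -> float:
    """Return the estimated USD cost for a given token volume."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
        + audio_tokens / 1_000_000 * AUDIO_PRICE_PER_M
    )

# Hypothetical workload: 100,000 requests averaging 800 input
# and 300 output tokens each.
print(f"${estimate_cost(100_000 * 800, 100_000 * 300):.2f}")  # -> $130.00
```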
