Google

Tech giant behind Gemini models with multimodal capabilities spanning text, code, and visuals.

13 Featured Models
10 Featured Text Models
3 Featured Image Models

Key Highlights

Formed by merging DeepMind (founded 2010, acquired 2014) and Google Brain (founded 2011).

Led by DeepMind co-founder Demis Hassabis.

Pioneered Deep Reinforcement Learning (AlphaGo beat Go world champion; AlphaStar mastered StarCraft II).

Developed AlphaFold, dramatically advancing protein structure prediction (Nobel Prize 2024).

Invented the Transformer architecture, underpinning modern LLMs like BERT, LaMDA, PaLM, and Gemini.

Created the Gemini family (Nano, Pro, Ultra, Flash, 2.5) as natively multimodal models.

Released Gemma open-weight models derived from Gemini research.

Developed WaveNet/WaveRNN for realistic text-to-speech (used in Google Assistant).

Created AI for coding (AlphaCode), math (AlphaGeometry, AlphaProof), algorithm discovery (AlphaDev), robotics (RoboCat, SIMA), and more.

Applies AI to Google products (Search, Ads, Android, Cloud TPUs, data center efficiency).

Gemini 2.5 Flash Preview 05-20

May 2025

Gemini 2.5 Flash May 20th Checkpoint is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater accuracy and nuanced context handling. Note: This model is available in two variants: thinking and non-thinking. The output pricing varies significantly depending on whether the thinking capability is active. If you select the standard variant (without the ":thinking" suffix), the model will explicitly avoid generating thinking tokens. To utilize the thinking capability and receive thinking tokens, you must choose the ":thinking" variant, which will then incur the higher thinking-output pricing. Additionally, Gemini 2.5 Flash is configurable through the "max tokens for reasoning" parameter.

conversation · reasoning · code-generation · analysis
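
As a rough illustration, the sketch below calls the standard (non-thinking) variant through an OpenAI-compatible chat completions endpoint. The base URL, model ID string, and API-key environment variable are assumptions rather than values taken from this page; consult your provider's documentation for the exact identifiers.

```python
# Minimal sketch: calling the standard (non-thinking) variant via an
# OpenAI-compatible chat completions endpoint. The base URL, model ID,
# and env var name below are assumptions -- adjust to your provider.
import os
import requests

BASE_URL = "https://openrouter.ai/api/v1"           # assumed provider endpoint
MODEL_ID = "google/gemini-2.5-flash-preview-05-20"  # assumed model ID string

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json={
        "model": MODEL_ID,  # no ":thinking" suffix -> no thinking tokens generated
        "messages": [
            {"role": "user", "content": "Summarize the transformer architecture in two sentences."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```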

Gemini 2.5 Flash Preview 05-20 (thinking)

May 2025

Thinking-enabled variant of the Gemini 2.5 Flash May 20th Checkpoint described above. Selecting the ":thinking" suffix makes the model generate thinking tokens, which are billed at the higher thinking-output rate; the reasoning budget can be capped with the "max tokens for reasoning" parameter.

conversation · reasoning · code-generation · analysis
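
As a companion sketch, the example below selects the ":thinking" variant and caps its reasoning budget. The "reasoning" request field is an assumption about how the "max tokens for reasoning" parameter is exposed; the exact field name varies by provider, so verify it before use.

```python
# Minimal sketch: the ":thinking" variant with a capped reasoning budget.
# The "reasoning" request field is an assumption about how the
# "max tokens for reasoning" parameter is exposed -- verify with your provider.
import os
import requests

BASE_URL = "https://openrouter.ai/api/v1"                    # assumed provider endpoint
MODEL_ID = "google/gemini-2.5-flash-preview-05-20:thinking"  # ":thinking" suffix enables thinking tokens

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json={
        "model": MODEL_ID,
        "messages": [
            {"role": "user", "content": "Prove that the sum of two even integers is even."}
        ],
        "reasoning": {"max_tokens": 1024},  # assumed name for the reasoning-budget parameter
    },
    timeout=60,
)
response.raise_for_status()
# Thinking tokens, when generated, are billed at the higher thinking-output rate.
print(response.json()["choices"][0]["message"]["content"])
```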

Gemma 3n 4B

May 2025

Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices such as phones, laptops, and tablets. It accepts multimodal inputs, including text, images, and audio, enabling tasks such as text generation, speech recognition, translation, and image analysis. Innovations like Per-Layer Embedding (PLE) caching and the MatFormer architecture let the model selectively activate parameters based on the task and device capabilities, significantly reducing memory usage and computational load at runtime. Trained on over 140 languages and offering a flexible 32K-token context window, Gemma 3n is well suited for privacy-focused, offline-capable applications and on-device AI solutions.

conversation · analysis · translation · reasoning
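
For a sense of how an open-weight model like this is typically run locally, here is a minimal text-only sketch using the Hugging Face transformers pipeline. The model ID and the assumption that the checkpoint works with the plain text-generation pipeline are not taken from this page; multimodal (image or audio) inputs require the model's dedicated processor classes instead.

```python
# Minimal sketch: text-only local inference with Hugging Face transformers.
# The model ID below is an assumption; multimodal (image/audio) use needs the
# model's dedicated processor and is not shown here.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3n-E4B-it",  # assumed Hugging Face model ID
    torch_dtype=torch.bfloat16,      # reduce memory footprint on supported hardware
    device_map="auto",               # requires the accelerate package
)

messages = [
    {"role": "user", "content": "Translate 'good morning' into French, Spanish, and Japanese."}
]

# Chat-format input returns the full conversation; the last message is the reply.
output = generator(messages, max_new_tokens=128)
print(output[0]["generated_text"][-1]["content"])
```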