
Google announces new Gemini AI updates, including the new, lightweight Gemini 1.5 Flash model


In just over a year, Google has launched new large language models (LLMs), first under the Google Bard name before finally settling on the Gemini branding. Today, at the Google I/O 2024 developer conference, the company announced several new updates for its Gemini series of AI models.

In a blog post, Google announced an all-new model, Gemini 1.5 Flash. As the name suggests, it is a lightweight LLM that's designed to work quickly. Google stated:

1.5 Flash excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more. This is because it’s been trained by 1.5 Pro through a process called “distillation,” where the most essential knowledge and skills from a larger model are transferred to a smaller, more efficient model.

The new model is currently available in preview form and will be generally available sometime in June.
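For developers who want to experiment with the preview, a minimal sketch of calling the new model through Google's google-generativeai Python SDK might look like the following. The "gemini-1.5-flash-latest" model name, the API key placeholder, and the sample text are assumptions for illustration, not details confirmed in Google's announcement.

import google.generativeai as genai

# Configure the SDK with a Gemini API key (placeholder value, not a real key).
genai.configure(api_key="YOUR_API_KEY")

# Assumed preview identifier for the lightweight Gemini 1.5 Flash model.
model = genai.GenerativeModel("gemini-1.5-flash-latest")

# Illustrative input text; summarization is one of the use cases Google highlights.
notes = (
    "Team agreed to ship the beta on Friday. "
    "QA flagged two login bugs. "
    "Marketing wants a launch blog post by Thursday."
)

response = model.generate_content(
    "Summarize the following meeting notes in three bullet points:\n" + notes
)
print(response.text)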

The current Gemini 1.5 Pro model is getting some updates as well. Google stated:

1.5 Pro can now follow increasingly complex and nuanced instructions, including ones that specify product-level behavior involving role, format and style. We’ve improved control over the model’s responses for specific use cases, like crafting the persona and response style of a chat agent or automating workflows through multiple function calls. And we’ve enabled users to steer model behavior by setting system instructions.
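As a rough illustration of the system instructions feature Google describes above, a chat persona could be set up along these lines with the same Python SDK. The model name and the instruction text here are illustrative assumptions rather than Google's own example.

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# The system instruction steers the model's persona and response style,
# which is the kind of control Google says it has improved in 1.5 Pro.
support_bot = genai.GenerativeModel(
    model_name="gemini-1.5-pro-latest",  # assumed model identifier
    system_instruction="You are a friendly support agent. Keep answers under two sentences.",
)

chat = support_bot.start_chat()
reply = chat.send_message("How do I reset my password?")
print(reply.text)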

Both the Flash and Pro models come with a 1 million token context window. However, Google also announced today that it is testing a 2 million token context window for the Pro model. Developers who want to try that version can sign up for the waitlist.

Google also revealed its new Gemma 2 open-source LLM today. In a separate blog post, Google said the model will launch in June:

Developers and researchers have requested a bigger model that’s still in a size that’s easy to use. The new Gemma 27B model should do that: it outperforms some models that are more than twice its size and will run efficiently on GPUs or a single TPU host in Vertex AI.

Finally, Gemini Nano, the company's model for on-device AI work, will now be able to understand image, sound, and spoken language inputs in addition to text prompts on the company's Pixel devices.
