
Google has announced that it is making its latest and most advanced Gemini AI model, version 2.5 Pro, available to all users of the Gemini app free of charge. This experimental model, previously only accessible to Gemini Advanced subscribers, is now being rolled out to the general public starting this Saturday.
Gemini 2.5 Pro was originally launched by Google earlier this week as the company's "smartest AI model" to date, with more sophisticated reasoning ability than previous versions. The updated model supports a range of features including app and browser extensions, file uploads, and integration with Google's Canvas collaboration tool.
Access to Gemini 2.5 Pro was initially limited to Gemini Advanced subscribers, who pay a $19.99 monthly fee in the United States. Google has now opened the experimental version to everyone using Gemini, a move the company says is intended to "get our most intelligent model into more people's hands asap."
Gemini 2.5 Pro is taking off 🚀🚀🚀

The team is sprinting, TPUs are running hot, and we want to get our most intelligent model into more people's hands asap.

Which is why we decided to roll out Gemini 2.5 Pro (experimental) to all Gemini users, beginning today.

Try it at no… https://t.co/eqCJwwVhXJ

— Google Gemini App (@GeminiApp) March 29, 2025
The new Gemini 2.5 Pro model is already live on the Gemini website and will be rolling out to the Android and iOS mobile apps in the coming days. Recent app updates have also improved the user experience by making it easier to select, and remember, which Gemini model is in use.
According to the announcement, Gemini 2.5 Pro (experimental) currently leads the LMArena leaderboard, and Google noted it is also working to further enhance the model's coding capabilities.
Gemini 2.5 Pro also has a 1 million token context window, enabling it to process large data sets and maintain context over extended interactions. Google plans to expand this to a 2 million token window in the future.