Google Reimagines Search with AI Mode
Google says the release version of Gemini 2.5 Flash is better at reasoning, coding, and multimodality, while using 20–30 percent fewer tokens than the preview version. The model is now live in Vertex AI, AI Studio, and the Gemini app, and will become the default model in early June.
CNET: Google Announces AR Glasses, More Gemini in Chrome, 3D Conferencing and Tons More at Google I/O. From its new Project Aura XR glasses to Chrome's wants-to-be-more-helpful AI mode, Gemini Live, and the new Flow generative video tool, Google puts AI everywhere.
Follow live updates from Google I/O 2025. Get the latest developer news from the annual conference, where Google is expected to reveal more about its AI tool Gemini.
On Tuesday at Google I/O 2025, the company announced Deep Think, an “enhanced” reasoning mode for its flagship Gemini 2.5 Pro model. Deep Think allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks.
At its I/O developer conference today, Google announced two new ways to access its AI-powered “Live” mode, which lets users search for and ask about anything they can point their camera at. The feature will arrive in Google Search as part of its expanded AI Mode and is also coming to the Gemini app on iOS.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
Gemini AI and other assistants can now scour the video footage we keep in our apps. Here's why, what they're learning, and how they may be able to help you.
For now, Meet can only translate between English and Spanish, but Google plans to add support for Italian, German, and Portuguese in the “coming weeks.” The feature is rolling out to subscribers now, and Google will also test it with Workspace users later this year.