Gemini, Google
Top News
Google’s Gemini Diffusion demo didn’t get much airtime at I/O, but its blazing speed—and potential for coding—has AI insiders speculating about a shift in the model wars.
Google on Tuesday revealed new Android development tools, a new mobile AI architecture, and an expanded developer community. The announcements accompanied the unveiling of an AI Mode for Google Search at the Google I/O keynote in Mountain View, California.
Just don’t confuse Deep Think with DeepMind or Astra with Aura.
Google says the release version of Gemini 2.5 Flash is better at reasoning, coding, and multimodality than the preview version, while using 20–30 percent fewer tokens. The model is now live in Vertex AI, AI Studio, and the Gemini app, and it will become the default in early June.
Google’s AI models are learning to reason, wield agency, and build virtual models of the real world. The company’s AI lead, Demis Hassabis, says all this—and more—will be needed for true AGI.
Volvo, the Swedish carmaker, said on Wednesday that it is now the lead development partner for Google's Android Automotive software.
As Google is fond of pointing out, Android XR is its first new OS developed in the "Gemini era." The platform is designed to run on a range of glasses and headsets that make extensive use of Gemini, but only two experiences were on display at I/O: a headset from Samsung known as Project Moohan and a pair of prototype smart glasses.
Gemini and other AI models can now scour the video footage we keep in our apps. Here's why, what they're learning, and how they may be able to help you.
The major AI firms all announced support for the Model Context Protocol (MCP) this week; here's what that means for ChatGPT, Gemini, and Copilot.
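
To make the announcement concrete, here is a minimal sketch of what "MCP support" looks like from the developer side, using the official MCP Python SDK (the mcp package). The server name and the add tool are illustrative examples, not drawn from any of this week's announcements.

from mcp.server.fastmcp import FastMCP

# Illustrative MCP server exposing a single tool; any MCP-capable
# assistant (ChatGPT, Gemini, Copilot) could discover and call it.
mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default

An assistant with MCP support connects to a server like this, lists the tools it offers, and invokes them with structured arguments, which is what makes a single integration work across all three chatbots.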