    In 2026, Google Is Focused on Making AI Actually Useful

    Google spent a lot of 2025 adding arrows to its quiver as it built out its artificial intelligence, Gemini. Now, it’s focusing on how best to aim those arrows across its wide array of hardware and Android software to help people find truly helpful ways to use AI.

    Google’s Gemini had a banner year in 2025. It propelled itself into the creative AI space with industry-leading models such as Veo 3 for video generation and Nano Banana for image editing. AI Mode introduced agentic capabilities that let the AI do the searching for you. And Gemini 3 showed off the company’s most advanced large language model yet, sparking concern among OpenAI and other competitors.

    This year, Google is hoping to take the progress it has made developing these blockbuster models and abilities to bring them into devices, whether that’s Android smartphones, Chromebooks, smart glasses or even TVs. The ultimate goal is to home in on AI’s practical uses. Sameer Samat, president of Android Ecosystem at Google, calls it “AI utility.”

    “AI utility is really how I think about the way that an ordinary consumer would experience this technology and say, ‘Wow, that is really powerful,’” Samat told CNET in an interview at CES 2026. “It is something that either makes me really happy to own this product or something I want to switch to.”

    Read more: Best of CES 2026: 22 Winners Awarded by CNET Group

    Building practical uses for AI is certainly not a new idea for Google. In 2024, it introduced Circle to Search on Android, which does exactly what the name says: You draw a circle on your phone’s screen around something you want to know more about, and it uses visual intelligence to analyze it, run a Google search and bring up additional information. AI-powered improvements to spam prevention meant Android users reported significantly fewer spam messages (58% fewer) than iPhone users, according to Google’s own research. Most recently, it added hands-free abilities to chat with Gemini while you’re using Google Maps, helping you find nearby parking or restaurants.

    Android devices have seen the addition of a lot of AI, but the idea of AI utility isn’t limited to phones and computers. Google has steadily been adding Gemini to TVs, for example, beginning with recommendations for what to watch.

    In January, the company announced it was expanding the AI integrations on TVs. Deep dives can create custom multimedia presentations in less than 2 minutes on any topic you want. You can chat with your TV about anything, as you would a chatbot. AI-powered photo editing, similar to the remix tools in Google Photos, is also making its way to the big screens. And if you want, you can also make AI images and videos from scratch using Google’s popular models.

    Introducing these chatbot-like search and media abilities is less about pushing people to create AI images on their TV and more about meeting people where they are. If you love displaying your family photos on your TV, as a screensaver, you can take advantage of AI-powered editing tools to put your own custom spin on them. It’s all in an effort to make watching TV a more engaging, less passive activity, Google said in a live demo at CES.

    Another way to introduce more utilitarian AI tools is by building out agentic AI or AI agents. This genre of generative AI is built to handle tasks independently, with no human supervision required, like ordering food delivery or running code. Right now, we’re «on the cusp of agents being able to accomplish real tasks for us,» Samat said. Building out this tech, beyond desktop and mobile applications, will be key.

    «Some of the greatest need for this kind of functionality will come from other form factors, which perhaps have smaller screens, no screens at all, or where they need to be hands-free,» Samat said. That could be in the software inside vehicles, self-driving and otherwise, but it could also be inside smart glasses, which Google has previously said it sees as integral to the AI evolution.

    Google’s focus on utility reflects a growing trend and marks a move into the next phase of AI development. If chatbots are AI’s equivalent of AOL, an early gateway to a much bigger internet, then personalized, agentic AI tools are the new, well, Google.

    AI is no longer a novelty. In 2026, all of us — the people building AI and the ones using it — should be invested in finding concrete, productive ways to integrate AI. While you may enjoy using Nano Banana, you’re also going to want your Android’s AI to make your life easier.

    “We think that this technology can move people from AI curiosity to AI utility, and the feeling that Android devices are helpful, fun and delightful,” said Samat.
