
Pros
- Higher-quality responses
- Accurate, with greatly reduced hallucinations
- Connection to internet and to other Google services
- Fast image generation
- 2TB cloud storage
Cons
- Can still make logical errors
- Coding assist can make repeated mistakes
- Failed certain obscure requests
If the Gemini twins in Greek mythology are meant to guide sailors, then the name Google gave to its AI chatbot finally matches its demigodly ambitions. Compared to when I tested it last year, Gemini has seen tremendous improvements in accuracy and usefulness.
While the free version of Gemini is highly capable and good for most use cases, the paid version I’m reviewing here brings a more powerful AI model that can handle more complex requests with greater “reasoning” abilities. I’ve found its responses to be more informative and more nuanced. Where 2.5 Flash aims to be a light model that can output answers quickly at little cost, 2.5 Pro takes extra processing time to give better outputs.
At $20 per month, Gemini Pro is worth the upgrade for people looking to accomplish more-complex research and coding tasks, or who want a deeper AI model to communicate with. Considering that a Gemini Pro subscription also comes with 2TB of cloud storage, along with some video generation tools, it could easily become a justifiable expense for some people.
At the same time, Gemini isn’t all-knowing. It still can make some logical mistakes, because AI chatbots don’t truly understand the world around them. They’re highly complex autocomplete tools and, as a result, can get information wrong because they don’t experience the world like we do.
Compared with ChatGPT Plus, Gemini Pro can still lag behind in some scenarios, despite its numerous improvements and further integrations with other Google services, such as Search and Maps.
What Google ultimately delivers is a highly capable AI chatbot that can handle a wide range of challenges. From coding to research, Gemini can tackle pretty much anything thrown at it. Both ChatGPT Plus, specifically its “reasoning” o3 model, and Gemini Pro offer tremendous functionality, and choosing between the two comes down to very specific use cases.
How CNET reviews AI models
Last year, I treated my reviews of AI chatbots as if I were reviewing any other tech product at CNET, running a series of comparison tests to see which came out on top. Though that’s a handy way to test camera quality between the iPhone 16 Pro and the Samsung Galaxy S25 Ultra, it’s a little less useful when reviewing AI chatbots.
Because AI chatbots are machines that can do practically anything, performing side-by-side A-B testing isn’t indicative of how most people interact with AI. Imagine if I were reviewing Google Search and Bing. In this scenario, it would make sense to do comparative searches and record my results. But AI chatbots don’t work like traditional online search, which indexes sites to pull up the most relevant results. AIs give novel answers to every question asked, even if it’s the same question. It makes standard comparative testing less of a reflection of real-world use.
This year, I’m opting for a more experiential approach. It’s less scientific, sure, but I feel it gives a better sense of the vibe each AI chatbot brings. And considering that Google, OpenAI, Anthropic and others want to give their AI chatbots a bit of a personality, unpacking that vibe is core to evaluating the AI experience.
Research and accuracy
Compared with 2024, Gemini Pro is leagues more accurate this year. It seldom makes up facts or links to nonexistent YouTube videos, as the previous version did during my tests last year. Google has also done a much better job of integrating information gathered via Search to pull up the most relevant sourcing.
To test how Gemini could help me research current events, I asked the chatbot to analyze talking points from the recent New York mayoral primary. It did an excellent job of pulling together facts with proper sourcing, including Radio Free Europe, PBS, official government sources and, in some instances, YouTube videos from major news channels. Because Google owns YouTube, Gemini can link directly to any of the site’s vast trove of videos. That gives Google an edge over the companies behind other AI engines: by default, YouTube blocks creators’ videos from being used for AI training. Other AI companies have crawled YouTube to train their models anyway, a practice that violated YouTube’s terms of service.
Something I’ve found particularly handy is using Gemini as a sounding board for some of my crazier ideas. I own a wide selection of video game consoles, all plugged into my television with multiple HDMI switches and power strips. I asked Gemini if it would be possible to make a superlong HDMI switch that could fit more than 20 devices. Gemini explained that creating a circuit board that could handle 4K, HDR and high refresh rates across multiple inputs would be extremely challenging and beyond the scope of a DIY project.
When I asked Gemini to create a schematic of what this project might look like, it attempted to do so with ASCII characters but ultimately failed.
At least Gemini is real with me.
Despite Gemini’s accuracy, it doesn’t understand the world around it. I’ve recently been trying to cut more sugar out of my diet (apologies to my local bubble tea shop), and I’ve been wanting to make a basic milk tea using monkfruit sweetener and nondairy creamer. I asked Gemini to create a healthy milk tea recipe for me, but it didn’t work out so well.
Gemini suggested I make a tea base with one cup of water and two bags of black tea. For sweetness, Gemini said to add only 1 tablespoon of monkfruit sweetener. That, plus three-quarters of a cup of milk and 1 to 2 tablespoons of creamer, would create that ideal low-calorie milk tea.
The result was a chunky mess. Following these exact measurements, I found that the drink wasn’t anywhere near sweet enough and that the creamer-to-liquid ratio was all off, leading to clumps in the final product. Gemini has never sipped bubble tea, so it makes sense that it doesn’t understand what 2 tablespoons of nondairy creamer would do in only 14 ounces of liquid.
Gemini’s ‘vibe coding’ blew me away
I’m not a coder. Back in college, I took an introductory Python course and after much struggle managed to get a C. But with AI chatbots, you don’t need to be a coding wiz, or even know how to display “Hello World” at all. With Gemini Pro, I was able to make a fully functioning Chrome extension with virtually no experience (I didn’t test coding with Gemini Free).
Vibe coding is a term that essentially means to code by talking it out. You express to an AI what you’re hoping to accomplish, and through your back-and-forth conversation, the AI will generate the code and help you get a program up and running.
I vibe coded with Gemini, and not only was the experience fascinating and a lot of fun, but I also found it just as impactful as when I used ChatGPT for the first time in late 2023.
(Gemini isn’t the only AI chatbot with coding assistance. All the other major AI chatbots tout their coding capabilities. But Gemini Pro is the first I’ve meaningfully tested for that purpose.)
I asked Gemini to build me a tool that could scan my articles and add appropriate links to other CNET articles. In my conversation with Gemini to build and test the tool, I explained in plain language any issues, and Gemini would come up with a solution and fix the code.
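To give a sense of what that kind of tool involves, here’s a rough sketch of the sort of link-inserting logic a vibe-coded extension might contain. The function name, link map and URL below are my own hypothetical illustration, not the actual code Gemini generated for me:

```javascript
// Hypothetical sketch: given a map of phrases to CNET URLs, wrap the first
// occurrence of each phrase in the article HTML with an anchor tag.
// All names and URLs here are illustrative.
function addLinks(html, linkMap) {
  let result = html;
  for (const [phrase, url] of Object.entries(linkMap)) {
    // Escape regex metacharacters so the phrase is matched literally.
    const escaped = phrase.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    // No "g" flag: link only the first occurrence of each phrase.
    // (A real extension would walk the DOM instead of regex-replacing HTML.)
    result = result.replace(new RegExp(escaped), `<a href="${url}">${phrase}</a>`);
  }
  return result;
}

// Example usage:
const article = "We compared the iPhone 16 Pro against last year's model.";
const links = { "iPhone 16 Pro": "https://www.cnet.com/iphone-16-pro-review/" };
console.log(addLinks(article, links));
// -> We compared the <a href="https://www.cnet.com/iphone-16-pro-review/">iPhone 16 Pro</a> against last year's model.
```

Even a toy version like this shows why the back-and-forth matters: edge cases such as phrases inside existing links or headlines are exactly the kind of bug you end up describing to the AI in plain language.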
It wasn’t a perfect experience, though. There were instances when Gemini would generate an updated piece of code only to leave out a feature that was in the prior version. I’d then ask Gemini where that feature went, and it would apologize and fix the code.
Interestingly, when we’d hit roadblocks and I’d suggest that maybe the feature I was envisioning was simply too difficult to implement, Gemini would push back. It would say the feature was still totally doable and would generate revised code. This back-and-forth would go on until Gemini got it right.
The larger computer science job market is currently going through an upheaval as Big Tech executives continue laying off thousands of workers while boasting about how much coding AI is doing. After using Gemini to code, I understand why students are worried.
Regardless, coding with Gemini has changed my understanding of the power of AI chatbots, and I plan to vibe code more as part of the testing I do for reviews.
Gemini Pro is (surprisingly) worse than ChatGPT for shopping
Searching for any product on Google Search leads to an obnoxious mix of product carousels and sponsored listings, drowning out all other reviews and articles (including CNET’s). Considering how much Google invests in monetizing online shopping, it’s surprising how quaint the product research is on Gemini by comparison. ChatGPT has a far more robust shopping experience.
Gemini is an excellent tool for basic product research. When I asked it to compare various models of Panasonic micro four-thirds cameras, for example, Gemini could pull up models that met my specifications and could tabulate their features in a handy list when asked. It could add more products to that list as I continued to fall down the camera rabbit hole.
At the same time, unlike ChatGPT, Gemini doesn’t provide links to stores, and it doesn’t incorporate images. Product research on Gemini required me to keep a separate Google Search window open just so I could see current pricing and what various camera models looked like, side by side. Gemini can also sometimes get product details wrong, though ChatGPT occasionally linked to incorrect products, too.
Shopping is one of those instances where Gemini needs to act less like an AI chatbot and more like Google Search. I looked for a piece of furniture to hold my record player and store my vinyl, and Gemini gave me a guide to what to look for when shopping but didn’t actually recommend any products. Jumping over to ChatGPT was an entirely different experience. There, it was like working with a sales associate at a furniture store, going through the various options to find something that fit my needs.
Image generation: At least it’s better than Gemini Free
I didn’t extensively test image generation with Gemini Pro, but I found it more than adequate for basic tasks. (CNET will have a separate hands-on of Google’s various image and video generation tools.) At the very least, when compared with Gemini Free, Gemini Pro did a better job of following my intent when creating images.
As in my Gemini Free review, I wanted to create a nostalgic image that evoked the feeling of playing a Game Boy on a late-night drive in the back seat of a car. Gemini Pro got it on the first go.
My prompt that generated the image above: “I want to create an image. One that evokes a feeling of nostalgia. The image is that of a boy playing his Game Boy in the back of his parents’ car on a long road trip at night. Because the screen isn’t backlit, he’s sitting near the window, trying to catch whatever passing light he can to see where to go. This image should use cool colors accented by the warmth of the light entering in. Feature anime-style artwork with a slight Western design. Should feel hand-drawn with intricately detailed linework. Analog VHS distortion. ’90s cartoon aesthetic.”
My experience with Gemini Free image generation was much more frustrating. The model simply didn’t understand world logic and would often place the boy in the front seat facing backward, or with surrounding vehicles driving in the wrong direction. Eventually, I gave up.
Redemption
Google’s done it. After Bard’s dismal launch and a bumpy rebrand as Gemini, the latest build of the company’s AI chatbot can compete with the best from OpenAI. This time around, Gemini brings with it greater accuracy, collaborative capability, coding power and image generation to make an overall compelling product.
The Gemini 2.5 Pro model is simply better than the free Gemini Flash model. Answers have more nuance and density, and features like the image generator work considerably better. It’s a tad annoying that Gemini sometimes defaults to the Flash model even for Pro subscribers; I suspect it does so when traffic is high and Google is trying to lessen the load. It’s easy enough to switch back, but you have to notice the change first.
Compared with ChatGPT’s o3 model, in particular, Gemini 2.5 Pro is faster while maintaining comparable answer quality. Google also says it has a 1 million token context window, which would dwarf what’s been reported regarding ChatGPT. Being able to pull in data from other Google services gives Gemini another edge.
Gemini isn’t perfect, however. It can still stumble with some types of queries, and using it for shopping is lackluster. Despite my qualms, I found myself increasingly reliant on Gemini, moving myself further away from Google Search.
AI is slowly turning Google from a search company into an answers company. Last year, Gemini’s answers were too often wrong to be worth recommending. This time, I have much more confidence in them. Of course, if I ever publish anything incorrect, the responsibility will fall on me.
Ultimately, Gemini Pro acts as a professional and handy know-it-all assistant. It doesn’t have the attitude of Claude or the controversy of Grok. Instead, it’s there to help even when you’re ready to give up. It’s that assertiveness that makes Gemini Pro a standout AI product.