How to use Gemini 3 pro effectively

Google has released its latest LLM: Gemini 3. The model was long awaited and widely discussed before its release. In this article, I will cover my first experience with the model and how it differs from other LLMs.

The goal of this article is to share my first impressions of Gemini 3, highlighting what works well and what doesn't. I will cover my experience using Gemini 3 both in the console and while coding with it.

This figure highlights the main contents of this article: my first experiences with Gemini 3, both in the Gemini console and from code, along with what I like and dislike about the model. Image by ChatGPT.

Why you should use Gemini 3

In my opinion, Gemini 2.5 Pro was already the best LLM for discussion before the release of Gemini 3. The only area where I believed another LLM was better was coding, where I preferred Claude Sonnet 4.5 Thinking.

The reasons I believe Gemini 2.5 Pro is the best LLM for non-coding tasks are:

  • Ability to accurately obtain relevant information
  • Low number of hallucinations
  • Its willingness to disagree with me

I believe the last point is the most important. Some people want warm LLMs that feel pleasant to talk to; however, I would argue that, as a problem-solver, you should look for the opposite:

You want an LLM that is straight to the point and willing to tell you when you are wrong

My experience is that Gemini 2.5 Pro was the best in this regard, compared to other LLMs such as GPT-5, Grok 4, and Claude Sonnet 4.5.

Considering that Google, in my opinion, already had the best LLM out there, the release of the new Gemini model is very interesting, and something I started exploring right after the release.


It is worth pointing out that Google has released Gemini 3 Pro, but it has not yet released smaller alternatives, although it is natural to expect such models soon.

I am not affiliated with or sponsored by Google for this article.

Gemini 3 in the console

I first started testing Gemini 3 Pro in the console. The first thing that struck me was that it was slow compared to Gemini 2.5 Pro. However, this is not a problem, as I value intelligence more than speed, up to a certain extent of course. And although Gemini 3 Pro is slower, I'm definitely not saying it is slow.

Another point I noticed is that Gemini 3 creates or uses a lot of images in its explanations. For example, when discussing EPC certificates with Gemini, the model produced the image below:

This is an image Gemini 3 Pro produced when answering my questions about EPC certificates. Image by Gemini 3 Pro.

I also noticed that it would sometimes produce images, even when I didn't explicitly ask for them. Image generation in the Gemini console is surprisingly fast.


I was particularly impressed with Gemini 3's abilities when analyzing my first research paper on diffusion models, talking with Gemini to understand the paper. The model was genuinely good at reading the paper, including its text, images, and figures; however, this is a strength the other frontier models share. What impressed me most was discussing the models themselves with Gemini 3 while trying to understand them.

I had misunderstood the paper, thinking we were discussing conditional models, when in fact we were looking at unconditional ones. Note that I was discussing this before knowing about the terms conditional and unconditional generation.

Gemini 3 then called out that I had misunderstood the concepts, understood the real intent behind my question, and helped me a lot in deepening my understanding of diffusion models.

This image highlights a good interaction with Gemini 3 Pro, where the model understood that I had misread the paper and called it out. Being able to call out misunderstandings like this is an important feature of LLMs, in my opinion. Image from Gemini.

I also took some of my old queries, which I had previously run in the Gemini console with Gemini 2.5 Pro, and ran the same queries again, this time using Gemini 3 Pro. These were mostly broad questions, though not particularly difficult ones.

The answers I received were very similar, although I noticed Gemini 3 was better at telling me things I didn't know, and at revealing topics or areas I (or Gemini 2.5 Pro) had never thought about before. For example, I was discussing how I write articles and what I could do to improve, and I found Gemini 3 better at giving feedback and coming up with creative ways to improve my writing.


So, to summarize, Gemini 3 in the console is:

  • A little slow
  • Smart, and gives good explanations
  • Good at revealing things I never thought about, which is very helpful when you're problem-solving
  • Willing to disagree with you and to call out ambiguities, qualities I believe are really important in an LLM assistant

Coding with Gemini 3

After working with Gemini 3 in the console, I started coding with it in Cursor. My overall experience is that it is definitely a good model, although I still prefer Claude Sonnet 4.5 as my main coding model. The main reason is that Gemini 3 tends to come up with more complex solutions and is a slower model. However, Gemini 3 is a quite different model that may be better for coding use cases other than mine. I mostly develop coding infrastructure around AI agents and CDK stacks.

I tried Gemini 3 coding in two main ways:

  • Making the game shown in an X post, from a screenshot of the game
  • Working on agentic infrastructure

First, I tried to make the game from the X post. On the first attempt, the model created the Pygame setup and all the scenes, but it forgot to include the sprites (the art), the bar on the left, and so on. Basically, it made a very minimalistic version of the game.

I then quickly wrote the following prompt:

Make it look properly like this game  with the design and everything. Use

NOTE: When coding, you should usually be far more specific in your prompts than my prompt above. I used this prompt because I was new to the game and wanted to see Gemini 3 Pro's ability to build a game from scratch.

After the prompt above, it made a working game, where visitors walk around, I can buy pipes and different machines, and the game actually works as expected. Very impressive!


I continued coding with Gemini 3, but this time on a production-grade codebase. My overall conclusion is that Gemini 3 Pro generally gets the job done, although I often had a slightly worse experience than when using Claude Sonnet 4.5. In addition, Claude Sonnet 4.5 is slightly faster, which makes it my default model of choice for coding. However, I would consider Gemini 3 Pro my second-choice coding model.

I also think the best model depends on what type of coding you are doing. In some cases speed is very important; for other coding tasks another model might be better, so you should try the models yourself and see what works for you. The price of using these models has dropped rapidly, and you can easily revert any change they make, which makes it very cheap to test different models.

It's also worth mentioning that Google has released a new agentic IDE called Antigravity, although I haven't tried it yet.

Final impressions

My overall impression of Gemini 3 is good, and my LLM usage stack now looks like this:

  • Claude 4.5 Sonnet Thinking for coding
  • GPT-5 when I need quick answers to simple questions (the ChatGPT app works well opened with a shortcut)
  • GPT-5 when making images
  • Gemini 3 when I want thorough answers or long discussions with an LLM on a topic, typically to learn new topics, discuss software development, or similar

Gemini 3 pricing per million tokens looks like the following (as of November 19, 2025, from the Gemini Developer API documentation):

  • If the prompt is 200K input tokens or fewer:
    • Input tokens: 2 USD
    • Output tokens: 12 USD
  • If the prompt is more than 200K input tokens:
    • Input tokens: 4 USD
    • Output tokens: 18 USD
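To make the tiered pricing concrete, here is a minimal sketch of a cost estimator based on the rates listed above. The helper name and the example token counts are my own; the assumption is that the tier is selected by the prompt (input) size, with prompts above 200K input tokens billed at the higher rates.

```python
# USD per million tokens, from the rates listed above (November 19, 2025).
TIERS = {
    "small": {"input": 2.0, "output": 12.0},   # prompts <= 200K input tokens
    "large": {"input": 4.0, "output": 18.0},   # prompts >  200K input tokens
}

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request from its token counts.

    Picks the pricing tier from the input size, then converts the
    per-million-token rates into a per-request total.
    """
    tier = TIERS["small"] if input_tokens <= 200_000 else TIERS["large"]
    return (input_tokens * tier["input"] + output_tokens * tier["output"]) / 1_000_000

# Example: a 10K-token prompt with a 2K-token answer.
print(f"{estimate_cost_usd(10_000, 2_000):.4f} USD")  # 0.0440 USD
```

As the example shows, typical requests cost fractions of a cent, which is why trying the model on your own queries is cheap.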

In conclusion, I have a good impression of Gemini 3, and I highly recommend checking it out.

👉 Find me in the community:

💻 My webinar on visual language models

📩 Subscribe to my newsletter

🧑💻 Get in touch

🔗 LinkedIn

🐦 X / Twitter

✍️ Medium
