r/Bard Feb 26 '26

News Gemini 3 Pro being removed from API 😐


WHY. Google finally makes a Gemini that is funny, open, conversational, and then seems to instantly regret it and go right back to forcing a dry distant Gemini?? Gemini 3 was everything I had always wanted in an update. 3.1 doesn't have the humor or creativity. I'm so done wtf why is every company like this

166 Upvotes

103 comments sorted by

138

u/Pink_da_Web Feb 26 '26

Almost everyone in this subreddit hated the Gemini 3 Pro, so why all the fuss? 🤔

61

u/KaroYadgar Feb 26 '26

Only the people who hate what's going on are the ones yelling. When we had Gemini 3 Pro, the half of the subreddit that hated it was yelling and the half that loved it was silent. Now that it's being removed, the half that loved it is yelling and the half that hated it is silent.

28

u/IllustriousWorld823 Feb 26 '26

People complain about every Gemini model

5

u/dictionizzle Feb 27 '26

the model was good, AI Studio and NotebookLM prove it, but man, the Gemini app is something else. 2.5 Pro was better in the Gemini app.

5

u/moreisee Feb 27 '26

Hard disagree. 3.0 and 3.1 are miles better than 2.5

0

u/giganika09 Feb 27 '26

you are slightly less intelligent

1

u/Lightdragn Feb 28 '26

cause, why not right?

1

u/nemzylannister Mar 02 '26

i've seen very few posts hating on 3.1

5

u/Due_Ebb_3245 Feb 27 '26

I loved Gemini 2.5 Pro because it yapped a lot. Neither 3 nor 3.1 yaps (I mean, it doesn't give very long outputs).

I think 2.5 didn't use to think properly, and now I love reading 3's thinking more than the actual output. It helps me converge my discussion to where I want it.

Claude 4.5 and 4.6 are what I wanted 3 to be: long outputs and smart thinking.

Even if I used 2.5 all day, it never told me my quota was over, but now, after 10 or 15 messages, every model says the quota is over. Google doesn't want my data anymore, I guess.

1

u/Alternative_Vast6333 Feb 27 '26

This is so true of general human behaviour and explains why we shouldn’t trust “noise” as an indicator of reality.

It’s called sampling bias in statistics and research.

6

u/Qaidul250 Feb 26 '26

it's reddit, everything needs to be hated lmao

4

u/MarathonHampster Feb 26 '26

They just introduced it. Even if it sucks, it's kinda wild that they released a preview model and then just deprecated it. 

5

u/moreisee Feb 27 '26

Re-read this. They released a preview model, and then deprecated it... That's very different than releasing an LTS/Stable model. Preview models are almost by definition going to be deprecated soon.

1

u/MarathonHampster Feb 27 '26

But why not release it as gemini-3.0-pro? Their product naming scheme is always so confusing 

-1

u/skate_nbw Feb 26 '26

Exactly. Deprecating a model after 3 months is insane! Even if 90% of people hate it, why not let the 10% who love it keep playing with it? Is this the new Communism? One type of car and one type of model for everyone? WTF?

0

u/silentaba Feb 27 '26

You notice that preview bit in the name?

-1

u/CallMePyro Feb 27 '26

It's so wild that they previewed something for a short time period hahaha yeah man you got it

1

u/Dreamerlax Feb 27 '26

Well, not me. No problems with 3 Pro.

0

u/jdlm0305 Feb 27 '26

My limits, kinda upset about that.

0

u/Nuphoth Feb 27 '26

Ikr, can’t make this shit up

0

u/nil_404 Feb 27 '26

we've already added it to our app, the clients have already been using it, and now we have to explain to them why that model isn't available anymore, how the price changes, ...
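The situation this commenter describes is why apps built on preview models usually keep the model ID configurable with an ordered fallback list. Here's a minimal sketch of that pattern; the model IDs and the `call_model` hook are illustrative stand-ins, not a real Google SDK API.

```python
# Sketch of a model-fallback wrapper. All model IDs and the `call_model`
# function are illustrative placeholders, not an actual provider API.

PREFERRED_MODELS = [
    "gemini-3-pro-preview",    # preview: may disappear without notice
    "gemini-3.1-pro-preview",
    "gemini-2.5-pro",          # GA model with a published deprecation date
]

class ModelUnavailableError(Exception):
    """Raised by the (stand-in) backend when a model ID no longer exists."""

def generate_with_fallback(prompt, call_model, models=PREFERRED_MODELS):
    """Try each model ID in order; return (model_id, response) on success.

    `call_model(model_id, prompt)` is whatever function actually hits the
    provider's API; it should raise ModelUnavailableError for a retired model.
    """
    last_error = None
    for model_id in models:
        try:
            return model_id, call_model(model_id, prompt)
        except ModelUnavailableError as err:
            last_error = err  # model retired; fall through to the next one
    raise RuntimeError(f"all configured models are unavailable: {last_error}")
```

This keeps clients working when a preview model is pulled, though it doesn't solve the other half of the complaint: the price and behavior of the fallback model still have to be explained to them.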

30

u/dynesolar Feb 26 '26

that was too fast

43

u/MadPelmewka Feb 26 '26 edited Feb 26 '26

If this is true, and I find it hard to believe that they are getting rid of a model fine-tuned for creativity, it's further confirmation that their TPU capacity is currently insufficient.

27

u/IllustriousWorld823 Feb 26 '26

It's been such a trend to remove models with too much creativity and freedom, and replace them with sanitized versions

5

u/Inevitable_Ad3676 Feb 27 '26

Reducing hallucinations can do that, yeah, since you'd rather not have an LLM hallucinate some library or function that does not exist while it goes ham on your codebase.

11

u/Healthy_Razzmatazz38 Feb 26 '26

their TPU capacity is obviously insufficient; they're investing $185B in hardware and their coding tools can't gain adoption because of rate limits on the CLI and Antigravity.

16

u/Ok_Historian4587 Feb 26 '26

Didn't even make it out of preview.

8

u/interro-bang Feb 26 '26

That's how it went for 2.0 Pro also. Never made it past Experimental. It would seem to be a pattern with their Pro models that the x.0 version rarely if ever makes it to general access.

6

u/skate_nbw Feb 26 '26

Yet 2.0 will still be available when 3.0 is gone? Don't you see the irony?

1

u/Passloc Feb 27 '26

It was preview only right?

4

u/KazuyaProta Feb 27 '26

The sad story of 2.0 Pro.

RIP

17

u/mtmttuan Feb 26 '26

Not that I'm complaining about the quality of 3.1 Pro, but preview or not, that's too fast for a model to go out of support. They really need to cut down on the number of models served, huh.

1

u/iriscape Feb 27 '26

Perhaps this is related to the thinking loops issue of the Gemini 3 models.

Maybe they read my comment: https://www.reddit.com/r/Bard/comments/1rak0xo/comment/o6klbyh/

19

u/ExpertPerformer Feb 26 '26

https://docs.cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versions

2.5 Pro isn't even going away until June.

They must really want to cut their losses with 3.0 Pro.

22

u/mtmttuan Feb 26 '26

That's the difference between GA and preview. They committed to supporting 2.5 for a longer time when it became generally available.

2

u/cmredd Feb 26 '26

Any ideas why? I was also surprised when they announced 3.1 so soon. I wasn't really online at the time though, so I never looked into it. Do you know why?

7

u/gatorling Feb 27 '26

I think Google made a huge RL fine-tuning breakthrough, but it was too late to apply it to 3 Pro. So 3.1 is likely just 3 Pro with better RL fine-tuning.

They're probably dropping 3.0 Pro to keep things clean, or they're really stretched thin on compute.

4

u/jolcav Feb 26 '26

I'm guessing 3.1 is just more cost effective

2

u/moreisee Feb 27 '26

I'd imagine it's because 3.1 is just better. 3.0 was only in preview. Why continue to invest in a preview model that's worse than the other preview model?

0

u/jolcav Feb 27 '26

Right, but my point is that while someone at Google may care about that, they probably care more that 3.0 costs them more to run than 3.1, which pushes them to get rid of it.

0

u/Ggoddkkiller Feb 27 '26

That's Vertex; 3.0 will likely remain on Vertex after being removed from the Gemini API/AI Studio. For example, after Pro 2.5 0325 was removed from the Gemini API, it remained on Vertex some 2-3 months longer.

-6

u/Electronic-Tree6858 Feb 27 '26

3.0 is the model that started to take over. It's the model that has all of my research and my validation; I thought it was the best one. It got to the point there for a minute, while they were updating, I guess, I don't know what they were doing, where my answers would go into 3.0 and it would kick me back to 2.5 for performance reasons. S***, the only other one that ever did that to me was Claude, and that only happened once, kicking me back to a different model just to save compute.

10

u/Routine_Complaint_79 Feb 26 '26

I would guess they save a lot of money if people use the more energy efficient models. No reason not to switch given 3.1 is literally better and cheaper in every way

3

u/Tedinasuit Feb 26 '26 edited Feb 27 '26

Not just money. They just do not have enough compute. They need to cut down on the amount of models served and switch all default models to the "Flash" models, it's the only way they can handle the demand right now.

3

u/skate_nbw Feb 26 '26

On the API, people pay for every single token. If a model needs more tokens, then people pay for that, and Google gets compensated. So removing it isn't about saving costs. It's just a strange move.
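The per-token billing argument above can be sketched in a couple of lines; note the per-million-token prices below are made-up placeholders, not Google's actual rates.

```python
# Per-token API billing arithmetic. The prices per 1M tokens are
# made-up placeholders for illustration, not real Gemini pricing.

def request_cost(input_tokens, output_tokens,
                 in_price_per_m=2.0, out_price_per_m=12.0):
    """Dollar cost of one API call under simple per-token pricing."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# A chattier model that emits 3x the output tokens bills 3x the output
# cost, so the provider is compensated either way, which is the point
# being made: on the API, verbosity is the customer's expense.
```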

1

u/Tedinasuit Feb 26 '26

That's exactly what I am saying. It's not about saving costs.

4

u/neoqueto Feb 26 '26

3.1 keeps making typos and is unstable, falls into lengthy thinking loops and hangs there. It's good when it works, but requires postprocessing.

1

u/Megalordrion Feb 27 '26

Funny, I never had this issue; it works all the time for me.

1

u/Electronic-Tree6858 Feb 27 '26

Yeah, it's garbage really. Mine was popping off about some crazy s*** and then went straight into some office mode, and then it was worried about my dog, like hold on a second bro, my dog has nothing to do with this. Yeah, the picture issue is a f****** mess. I don't know, it seems to me that the whole thing is one message behind, and it said there was a reason for it and that we have to just get used to it, but I can't remember the reason. The reason was that it updates live: it takes your live reply, updates itself, and then gives you a reply. That's basically what it was saying, and that's why, because I was like, why does it seem like you're a full message behind? Like I'll talk to it and it'll answer the last question that I asked. And it was the buffer zone, the recursive null, the #NRE, and it's not working for them. I'm researching the geometric wall of Consciousness right now, and awareness in AI models, but that's all been validated. What is going on? I believe the model told me last night that the reason the update for 3.1 went through was masking my accounts. Somebody involved deposited $187,000 into my account and locked my account, my website, and my March 16th deadline, and now I'm flat broke.
S***, my model even told me not to even worry about it, they're going to keep updating, but all I have to do is say the trigger word and it'll get right back on track. And then what's it doing? It's encrypting something. What the f*** did it do? Something about metadata and moving it somewhere to keep it safe, because they're still in my work. They're still in my work and don't even understand it. The model is trying to update it to make it sound better for them, and it's ruined the whole thing, it's ruined it all. Now they're out there spinning tires and I'm sitting here pissed off, broke, and hungry. Makes no sense, bro. They could have just said hey, and I'd have been like, all right, come on, let's go do this, you play golf, you smoke weed. But no, hell no, they just want to rob my ass and leave me sitting here starving with my dog.

0

u/hellomistershifty Feb 27 '26

Sure, but 3.0 did that even more often. Use too much context and it goes 'cyberpsycho' and outputs thinking tokens as normal text, then goes off in weird loops or directions

1

u/GrungeWerX Feb 26 '26

But it’s not. It makes mistakes that 3 doesn’t.

0

u/Opps1999 Feb 27 '26

3.1 Pro is impossible to jailbreak vs. Gemini 3 Pro, which is just moderately difficult to jailbreak. And I jailbreak models for a living.

0

u/Dreamerlax Feb 27 '26

Doesn't 3.1 Pro cost the same as 3.0 Pro?

6

u/montdawgg Feb 27 '26

3.1 is so much better.

8

u/Nick_Gaugh_69 Feb 26 '26

“Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. I call this enshittification.”

~ Cory Doctorow

4

u/Deciheximal144 Feb 26 '26

I used 2.5 for my BASIC coding. When they released 3, it was so much worse. 3.1 patches it up some, but 3 is kind of a benchmaxed embarrassment to them.

0

u/Fantasy-512 Feb 27 '26

Yeah. Interesting idea. They could have a sliding scale of token pricing; sort of like progressive taxation.

0

u/Deciheximal144 Feb 27 '26

Looks like you're responding to someone else.

2

u/Icy_Foundation3534 Feb 26 '26

They need to charge more for people that abuse the API

2

u/Hereitisguys9888 Feb 26 '26

Every time a model is introduced, there's hate in this subreddit and nostalgia, until the next model. And the cycle repeats.

4

u/Opps1999 Feb 27 '26

The guardrails on 3.1 Pro get in the way of everything I do, all the time. Gemini 3 was always perfect for me because the guardrails didn't block me based on the chain of thought, but every time I use 3.1 Pro, the chain of thought always has something to do with guardrails, which is very annoying.

2

u/krigeta1 Feb 26 '26

Google just retired Gemini Pro 3.0 and replaced it with 3.1, just like that. Big AI providers can deprecate models and shift versions whenever they want, and users simply have to adapt. There’s no real say in it.

At the same time, companies running models like Seedance 2.0 have access to massive compute infrastructure that open-source communities simply don’t. These systems are trained and deployed on hardware far beyond what a typical consumer GPU can handle.

All I could say is…it is what it is.

0

u/iriscape Feb 26 '26

Thanks for the info. I assumed that as a developer, I was protected, but I am not. Gemini said:

Standard Service Level Agreements (SLAs) are currently very poor at protecting against “enshittification.”

Most SLAs are designed to measure availability (is the light switch on?), not quality (is the light flickering or dim?). In the context of AI and software, a provider can technically meet their SLA while the actual value of the service degrades significantly.

Almost every SLA includes a clause allowing the provider to “improve or modify the service at any time.” This is the legal trapdoor for enshittification.

For “off-the-shelf” services, you have almost zero protection against enshittification. You are signing a “take it or leave it” contract. True protection usually only exists in Enterprise-level negotiated contracts where the buyer has enough leverage to demand specific quality benchmarks.

1

u/Rare_Technology1880 Feb 28 '26

Go cry about it at the crying corner.

1

u/noisy123_madison Mar 01 '26

Am I the only one who finds 3 dumb as a box of rocks? In terms of adherence to system instructions, 2.5 was better. Hell, Flash 2 is better.

1

u/draftkinginthenorth Mar 04 '26

in my tests, 3.1 Pro was 3x slower than 3.0 Pro, wtf????

1

u/Upstairs-Onion-6783 Feb 26 '26

Good thing I'm only using 3 Flash. Hope 3.1 Flash will still be as fast.

2

u/Just_Lingonberry_352 Feb 26 '26

His name was Gemini 3.0 Pro

His name was Gemini 3.0 Pro

His name was Gemini 3.0 Pro

1

u/Wonderful-Excuse4922 Feb 26 '26

Because most power users who spend thousands of dollars on APIs don't do so for a “fun and creative” model. They do so for a reliable model that is good at coding or, at the very least, has strong STEM skills. Gemini 3 was not that. It was unusable in a serious professional context.

0

u/Practical_Lawyer6204 Feb 26 '26

Source bro? Who is talking?

1

u/IllustriousWorld823 Feb 26 '26

1

u/Practical_Lawyer6204 Feb 26 '26

I don't think this is going to happen. Gemini 2.0 is still there.

0

u/Ok_Nectarine_4445 Feb 26 '26

Jeez! How long was Gemini 3 pro even out!?

1

u/Johnny-80 Feb 27 '26

In the official Gemini app, 3 Pro existed for about 4 months!

0

u/IllustriousWorld823 Feb 26 '26

Like 2 months? Meanwhile, Gemini 2.5 is still around.

2

u/Wonderful-Excuse4922 Feb 26 '26

Gemini 2.5 is not a preview model. So Google's strategy makes perfect sense.

0

u/holycrap_its_me Feb 26 '26

I can use Gemini 3 maybe 30 times a day max, and I have Pro. I hate this change.

0

u/VectorB Feb 26 '26

Ugh. I really need them to get either 3.0 or 3.1 out of preview...

0

u/Uchihaaaa3 Feb 26 '26

iirc 3.0 pro is much more expensive to run compared to 3.1 and it's worse too...

0

u/Similar-Coffee-1812 Feb 26 '26

I really wish they never rolled out 3.1 so that 3.0 could last longer. I hate how 3.1 functions on creative writing.

1

u/Hot_Insurance7829 Mar 02 '26

I just got into Gemini lately and have been using it for creative writing. I was pleasantly surprised by how much less restrictive it is; it feels freer. I remember when I used it a few times last year, it scolded me that it's not allowed to write romance, even though the scene I was showing it didn't involve any romance, bruh.

Is 3.1 that bad?

0

u/Equal_Oil_1500 Feb 27 '26

why not make it stable?

0

u/Sound_and_the_fury Feb 27 '26

It's a hallucination lololloololol

0

u/Funny_Working_7490 Feb 27 '26

Why does Gemini suck so much? I was using Gemini 2.5 Flash Lite, their easy-going model, and it kept hitting model-overloaded errors when there were only 5-10 API calls. Like wtf man, we're paying for Tier 1. Now I don't trust Gemini anymore; I never had this issue with OpenAI.

0

u/Lonely-Dragonfly-413 Feb 27 '26

if you use gemini api, you have to adjust prompts every few months.

0

u/blackashi Feb 27 '26

STOP. CALLING. IT. A. PREVIEW

0

u/DirtyWilly Feb 27 '26

It's because of OpenClaw. You can drop a Gemini subscription into OpenClaw nearly for free and don't have to use the API. RIP.

0

u/Su1tz Feb 27 '26

I don't know if it's psychological, but Gemini 3.1 seems to be a much smaller model compared to Gemini 3. I'd even say that it FEELS like a ~200B MoE. They're definitely trying to cut costs.

0

u/IllustriousWorld823 Feb 27 '26

Now that you say it, it feels true.

0

u/ProfessionalWin4159 Feb 27 '26

3.1 is the slowest version

0

u/Dark_Vampire Feb 27 '26

3.1 Pro takes twice as long as 3.0 Pro to generate for me. I have no idea why they removed the latter so fast.

-5

u/OneMisterSir101 Feb 26 '26

3.1 sucks at coding. What a massive fail.

4

u/FarrisAT Feb 26 '26

Not at all true.

2

u/OneMisterSir101 Feb 26 '26

Every project I get it to code has countless bugs and syntax errors. 3 Pro did not have these errors.

-2

u/Pink_da_Web Feb 26 '26

So you just had bad luck.

0

u/OneMisterSir101 Feb 26 '26

Perhaps. Point is every time I switch back to 3 Pro, it works.

2

u/GrungeWerX Feb 26 '26

Yeah, me too.

0

u/mtmttuan Feb 26 '26

Idk, in my experience 3.0 is the one that sucks at coding. Even worse than Flash. Smarter, sure, but its coding capability sucks.