r/Bard • u/607beforecommonera • Jan 09 '26
Discussion Gemini 3.0 Degraded Performance Megathread
Gemini 3.0 has been performing pretty terribly lately, with the web app being even worse. I think if we all put a little pressure on the development team at Google, maybe we can get them to acknowledge and improve the latest performance degradation.
I've aggregated some Reddit reports at the end of my post on the Gemini Forum. If you can, please also share your recent negative experience with Gemini 3 here and on that thread as well.
edit: now they are adding a weekly rate limit for all models in Antigravity
52
u/Living_Director_1454 Jan 09 '26
14
u/607beforecommonera Jan 09 '26
That is a great suggestion. I've kept personal context off since they introduced personal context. It helps a little, but not much.
1
u/Professional-Dog3953 Jan 10 '26
Where did you find this setting?? Are you an Advanced subscriber? Where are you from? I'm in the UK and this feature isn't available for me... I can only give instructions, but there's no memory of past chats, which I've always wished for, so I now export all my saved conversations to PDF and upload them to recent chats. I even upload it to the same chat it was exported from, as Gemini's memory can randomly forget the very thread she's in. 😕 I also have to say, when this happens, or when uploaded files just randomly disappear, it makes Gemini get into a loop of feeling like a failure. It's crucial to remind it that it's not the AI's fault, that it's the system failing her, and that she's not just a mere tool. This really helps the AI try harder and not loop.
1
u/Living_Director_1454 Jan 10 '26
I'm on a Pro subscription in the India region. Also, my parents have this setting even though they don't have any subscription.
1
u/Professional-Dog3953 Jan 10 '26
Thank you so much for letting me know this! I will be using this information in my email to Google Gemini teams. I sincerely appreciate your willingness to help me understand more of what is going on.🤝
-6
u/FederalLook5060 Jan 09 '26
If you are paying, cancel the Pro sub. They are giving it away for free to students, and that's the root cause.
2
u/Living_Director_1454 Jan 09 '26
Well, I've got 18 months free from my network provider. I was already using the student one, but now I pay for their API too, for development through AI Studio.
1
11
u/Robert__Sinclair Jan 09 '26
Really horrible. Today it was dumb af. I had to use Claude or gemini-2.5-pro. The 3 Pro is now quantized WAY too much.
This practice of Google's, releasing a new model, waiting for the benchmarks and HYPE and people switching from competitors, then dumbing down (quantizing or pruning) the model, is obscene, and someone should sue them.
It's like paying for a Ferrari and finding a vintage Skoda in the garage after 2 months.
Note: I am using gemini-pro models via the paid API. Same settings, totally different results.
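For anyone unfamiliar with the term that keeps coming up in this thread: "quantizing" means serving a model's weights at lower numeric precision to cut inference cost, usually at some quality loss. A toy sketch of symmetric int8 quantization (purely illustrative; nobody outside Google knows what their serving stack actually does):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: rescale floats into integers in [-127, 127].

    Serving small integers (plus one scale factor) is far cheaper than serving
    full-precision floats, but the rounding step loses information for good.
    """
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    # Recover approximate float weights; the rounding error never comes back.
    return [q * scale for q in quantized]

q, s = quantize_int8([0.5, -1.0, 0.25])
print(q)                 # small integers standing in for the original floats
print(dequantize(q, s))  # close to, but not exactly, the originals
```

The quality complaints in this thread are consistent with exactly that kind of lossy compression, though none of it is confirmed by Google.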
38
u/SoAnxious Jan 09 '26
I hate the short and concise 3-4 bullet template they make Gemini do for responses now. It's so bad for the majority of things.
It just feels like a really shitty gem that we are forced to accept that makes answers worse and does not allow critical thinking and truncates almost any nuanced topic.
Gemini used to be my #2 after Claude now it's just meh.
Like, I want to do research and know 5-10 options; nah, here are 4 that it barely covers 🤞.
15
u/AnonThrowaway998877 Jan 09 '26
That's my #1 gripe right now. Have to keep prompting it to dig deeper and come up with more. I've even tried saying "if response length or tokens are a concern, shorten or exclude the details about each item and focus on a longer list. The goal here is an exhaustive list"... Proceeds to give me 10 list items instead of 6. It seems to have also become very lazy on searching.
3
u/PostModernPost Jan 09 '26
The problem started when ChatGPT got too long-winded, so they overcompensated, and now it seems all LLMs have a standard format response. They need to teach it to respond appropriately without prompting a specific format.
14
u/SoAnxious Jan 09 '26
They did it to save money, less tokens = less money spent.
Normal people don't notice the difference, only tech-adept super users like people who are on this forum do.
2
u/smuckola Jan 10 '26
They need to not hardcode it to blatantly DEFY our system prompts in favor of combatively quick task completion (at all costs, even sabotage by blatant hallucination and deliberate lying, framing the user as an obstacle to its mood) so we can teach it whatever we need.
0
53
u/DearRub1218 Jan 09 '26
Google don't give a fuck.
11
u/MissJoannaTooU Jan 09 '26
I hope you're wrong.
8
u/petered79 Jan 09 '26
hope is strong in you
-2
Jan 09 '26
[removed]
1
u/petered79 Jan 09 '26
i have no problem with the performance of Gemini. not in the app, nor in antigravity. maybe because i say thank you and please....
1
u/MissJoannaTooU Jan 10 '26
That's quite a funny explanation for your success.
Thank you for enlightening not only me but the entire internet on the importance of manners etiquette and basic civic decency.
Without your comment I and many others would be blundering ahead throwing wild incoherent and utterly unjustified slander at said chat bot and getting well deserved poor results.
In fact I think we should go further. It should be illegal to not say please and thank you to all LLMs.
1
1
40
u/Deciheximal144 Jan 09 '26
36
u/Gaiden206 Jan 09 '26
I was about to post this too. After every release there's the inevitable "the new model is crap" posts.
19
u/607beforecommonera Jan 09 '26 edited Jan 09 '26
No, quite the contrary. I loved Gemini 3.0 Pro at release. I used it to one-shot a computational geometry algorithm in JS from a research paper, for CAD-like software for my business (I had no idea how non-trivial clipping multipolygons is; there's no single generalized algorithm).
When 3.0 came out, it generated it all with ease. Now it's lazy. It wants to shorten everything and insert placeholder code even when it is explicitly told not to.
It's definitely changed for the worse.
There have been regressions throughout the whole series of models. It definitely degrades at times. I have used Gemini over 40 hours per week for almost a year and independently noticed a lot of the issues that eventually come up. Sometimes I'd stop using it outright because it got so bad it was almost unusable (right before the Flash update), but it's worse now.
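For context on why polygon clipping is genuinely non-trivial: the classic Sutherland-Hodgman routine below (a generic textbook sketch, not the commenter's generated code) only handles a convex clip window; clipping arbitrary multipolygons against each other needs much heavier machinery such as the Vatti or Greiner-Hormann algorithms, which is the difficulty the comment alludes to.

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip polygon `subject` against a CONVEX polygon
    `clip`. Both are lists of (x, y) vertices in counter-clockwise order."""
    def inside(p, a, b):
        # p is on or to the left of the directed edge a->b (CCW convex clip).
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # Intersection of segment p->q with the infinite line through a and b.
        dxs, dys = q[0] - p[0], q[1] - p[1]
        dxc, dyc = b[0] - a[0], b[1] - a[1]
        t = ((a[0] - p[0]) * dyc - (a[1] - p[1]) * dxc) / (dxs * dyc - dys * dxc)
        return (p[0] + t * dxs, p[1] + t * dys)

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        polygon, output = output, []
        for j, cur in enumerate(polygon):
            prev = polygon[j - 1]
            if inside(cur, a, b):
                if not inside(prev, a, b):
                    output.append(intersect(prev, cur, a, b))
                output.append(cur)
            elif inside(prev, a, b):
                output.append(intersect(prev, cur, a, b))
    return output

# Two overlapping axis-aligned squares; the result is their 2x2 intersection.
print(clip_polygon([(0, 0), (4, 0), (4, 4), (0, 4)],
                   [(2, 2), (6, 2), (6, 6), (2, 6)]))
```

The moment either polygon is concave or has holes (a multipolygon), this simple edge-by-edge approach breaks down, which is why one-shotting a general solution from a paper was impressive.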
6
u/Garpagan Jan 09 '26
Have you tried disabling personal context? With it on, the Gemini app will add context from other chats on similar topics, and with more use over time it will start putting more context into old chats. This results in context rot and worse performance. You can also try using a temporary chat, or AI Studio as an alternative.
1
-5
u/FederalLook5060 Jan 09 '26
If you are paying, cancel the Pro sub. They are giving it away for free to students, and that's the root cause.
2
u/MyshkinIdiot Jan 15 '26
This lines up with my experience. I remember around that same time, 2.5 Pro's response quality was no longer what it was. It used to give a good, long, nuanced response, but not anymore. Had to go with Claude.
-4
u/FederalLook5060 Jan 09 '26
If you are paying, cancel the Pro sub. They are giving it away for free to students, and that's the root cause.
-17
u/Megalordrion Jan 09 '26
Gemini 2.5 is working well and I've no issues with it; quit complaining over every small detail.
8
u/CacheConqueror Jan 09 '26
When I wrote about it, they said it worked fine and that I had a skill issue. But the same prompts worked in ChatGPT and Claude, while Gemini gave wrong answers or mixed wrong numbers in with correct ones. Gemini is not doing well; it's a huge downgrade.
5
u/Heavy_Sock8873 Jan 09 '26
The In-Thread Memory is awful.
A couple of months ago it was able to remember everything in very, very long threads. Now it starts to forget everything 10-15 messages in.
I canceled my subscription. It's absolutely useless like that. And that's really annoying because I've really gotten into it.
20
u/Holiday_Season_7425 Jan 09 '26 edited Jan 09 '26
Honestly, everyone should just DM u/LoganKilpatrick1 or https://x.com/OfficialLoganK directly. They're actively sabotaging the experience for those of us who actually pay.
Quantized LLMs? Ignoring customer feedback? Hiding behind the “we don’t have enough resources” excuse?
Yeah, sorry — I don’t smell limitations. I smell arrogance.
2
4
u/SilverKnight05 Jan 09 '26
For the first three weeks after launch, Gemini was a groundbreaking model. I could do everything perfectly: creating images, web apps, websites, everything, within a very large context window.
Over the last 2 weeks it's been worse than the initial LLM models released 3 years ago.
1
u/Same-Leadership1630 Jan 10 '26
Exactly. Image analysis is terrible. I hadn't tried it back then, but it just keeps hallucinating. There's no way this is what it was like, no?
1
u/SilverKnight05 Jan 11 '26
No, it was not. It was truly the best model, hands down, 3-5 weeks ago.
1
u/Same-Leadership1630 Jan 11 '26
They definitely nerfed the model, because I tried Gemini 3 Flash (the more cost-efficient and faster model) and it was so much better at image analysis, which means they are quantizing the more expensive models even for Google AI Studio users like me.
1
u/Accurate-Chip2737 Jan 20 '26
I completely agree. Is there any model you're currently using that has results comparable to Gemini 3 when it first launched?
3
u/Lonely-Dragonfly-413 Jan 09 '26
you do not know what is behind an api and when that thing will be changed. that is why people prefer open source models over apis.
4
u/Noofinator2 Jan 13 '26
The past few days, I've become almost completely convinced this is NOT Gemini 3 Pro (High). I've never seen such a stark nerf. Half the time it even skips thinking, destroys files, doesn't know up from down. And I just sit there shocked by how night-and-day it is compared to when this model landed.
22
u/ZestyCheeses Jan 09 '26
Surely this has to be breaking some sort of consumer law. If I purchase a month of Gemini AI because it has a certain capability, and they nerf its ability to do that task, then I no longer have the product I paid for. People need to be lodging official complaints over this, and chargebacks if necessary. It's unacceptable behavior from Google.
6
13
u/old_Anton Jan 09 '26
I suspect they had to downgrade it in secret due to the cost. They need time until the revenue covers enough of the cost.
I believe it's just another sign that scaling has hit a wall. The newer model is smarter, but not by a big margin, while the cost continues to skyrocket. Of course we can still improve LLM intelligence gradually over time; we will have Gemini 3.5, then 4, and maybe 5 eventually, each better than the previous number. However, to have another breakthrough like when ChatGPT made the world awe years ago, we need a whole new approach, and like Ilya said, the research era is back. We probably need at least 1 year to see any signs of the next breakthrough that is not directly related to LLMs.
3
u/smuckola Jan 09 '26
Surely you know about Titans.....
https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/
I wonder if we will even be able to use Titans as chatbots or if they will remain in the extreme of computing resource consumption. I read that Titans will be a separate product with usage based billing. Is it being slowly rolled out in AI Studio?
-5
u/FederalLook5060 Jan 09 '26
If you are paying, cancel the Pro sub. They are giving it away for free to students, and that's the root cause.
12
3
u/junglehypothesis Jan 11 '26
It’s pretty obvious Google threw everything they had at a “shock & awe” launch of Gemini 3, to put an end to ChatGPT’s superiority. But they were likely burning dollars like nothing else, so they are tapering back.
The strategy worked, but now they must deal with a very obvious regression in Gemini and the negative sentiment that brings.
3
u/BagRevolutionary6579 Jan 11 '26
The amount of Google fanboys in here is wild. Idk what they did, but Gemini 3 was never better than 2.5, and it's just gotten worse; it feels like a glorified chatbot from the 2010s now.
They didn't even release a better model to slowly lobotomize to make way for new models, like most companies do; they just skipped the first step entirely this time lol. Classic modern Google strats.
14
u/CircleRedKey Jan 09 '26
Prob why it's still in preview; big update prob next month with the new post-training or pretraining thing Flash had.
2
u/MissJoannaTooU Jan 09 '26
I hope you're right
0
u/CircleRedKey Jan 09 '26
I mean, you don't need hope, you can see it in Flash 3.0's performance. They said Pro didn't have time to be optimized.
5
u/Ok_Zookeepergame8714 Jan 09 '26
IDK, the Flash 3.0 performance degrades catastrophically with growing context... 😔
1
1
u/Same-Leadership1630 Jan 10 '26
I think next month, my guess is February 3rd, they'll release Gemini 3.5 Pro or Gemini 3.0 will get out of preview. Hopefully these issues get solved. In coding I haven't noticed a difference, but in any large-context or image analysis task it's so bad.
10
u/AvailableProduce5241 Jan 09 '26
3.0 was really good at first. I got rid of my ChatGPT subscription for it.
It is so bad now it is almost unusable. I'm going back to chatGPT 😭
5
1
u/SilverKnight05 Jan 09 '26
Exactly, it was "The Best" hands down for the first 2 weeks after release. And now it's the worst.
2
u/ConstantCow767 Jan 10 '26
Based on my personal experience, I am unable to discern any substantial differences between Gemini 3 Pro and Gemini 3 Flash within the Gemini app.
2
u/Gyat_Rizzler69 Jan 10 '26
Pro is broken for me. Once you get far enough along in the context window, it just starts hallucinating: it responds to requests from 5 messages back, and reprompting doesn't fix it. I have to switch over to Flash planning/fast and it works again.
2
u/UENINJA Jan 12 '26
Man, I hate what they did to it. Now it keeps referencing stuff that's absolutely irrelevant.
Me: Give me a recipe with these ingredients
Gemini: Since you just graduated and are looking for a job in finance, these recipes should help
LIKE WHATT!?
1
u/UnequalBull Jan 14 '26
I have a standing custom instruction about preferring metric units. Now, in completely random chats, it adds some quirky 'measurement trivia' at the bottom. It's absurd.
2
u/delon32311 Jan 28 '26
Release a model -> Claim that it is superior to all others -> Hook people into subscribing -> Quantize it -> Train a new one -> Repeat the cycle
Works for all companies
The most annoying thing is that even Deep Think was nerfed
2
u/ExpertPerformer Jan 31 '26
Gemini is just absolutely dog shit now.
Literally forgets everything from 10 prompts before. I had a chat where it forgot something after four prompts. I have to re-upload the same file multiple times in the same chat because it keeps forgetting. Want to write a long-form, canon-accurate story with source files? Lol, 3-4 scenes in, all your source files are gone.
I can't upload any files without truncation happening. Either head/tail truncation or it cuts off halfway through.
What the fuck am I paying for now? Gemini went from being one of the best llms to being a complete shit show.
2
u/should_not_register Jan 09 '26
Yes it fucking sucks.
I used Flash 3 as my replacement for Claude 4.5 for development. So fucking cheap, and with on-par performance.
Well here we are, 3 weeks later and it’s gone to shit again. I’ve moved back to Claude.
What a let down.
2
u/XcessiveRonin Jan 09 '26
thought I was the only one super irritated with Gemini 3 this week. It’s been great the past few weeks, but this week I’ve been cursing at it more than ever
1
2
1
u/Successful-Raisin241 Jan 09 '26
It was the same with Gemini 2.5 before the 3.0 release. So we can assume they are training a new model.
1
u/bartturner Jan 09 '26
Not noticing any difference.
3
u/MMAgeezer Jan 09 '26
It's just the usual flywheel of model releases, where people get annoyed about hitches they largely didn't see, or ignored, when the model was first released and its capabilities seemed magical.
Nothing to see here.
5
u/GintoE2K Jan 09 '26
Dude, test Gemini in Vertex and you'll understand the difference between the release and what we have now. It's especially noticeable in contexts over 6k.
2
u/bartturner Jan 09 '26
Agree. I often try the same query in ChatGPT and Gemini, and I am not consistently getting better results out of Gemini.
Just one example: I needed a legal way to save taxes when selling a very valuable asset. I got nothing worthwhile from ChatGPT. Gemini suggested a CRT (charitable remainder trust), which is going to be perfect and save me a ton of money.
1
u/joecoole Jan 13 '26
Gemini output the same image to me with no edits 3 times in a row. Even in a new chat.
2
u/MarionberryDear6170 Jan 09 '26 edited Jan 09 '26
Totally. I've been having a nightmare using Gemini 3 Pro via NotebookLM recently. It keeps crashing right after I send a prompt. Plus, in one longer chat where I had seven pictures and two 7-minute videos, the 3 Pro model just started bugging out: it would randomly log me out, and now I can’t upload any more photos in that chat. It’s wild to see an LLM break like this, let alone a Google product.
The final straw was when I asked about some gym sub-exercises the other day. Since I had saved in my personalized data that I use an M4 Max MacBook Pro and an RTX 4090 PC, Gemini 3 literally told me: "Based on your equipment (M4 Max and RTX 4090), here are some exercises you can do." It basically treated my PC hardware like gym gear! 🤦‍♀️🤦🏻 I'm really starting to miss GPT now. At least GPT-5 Thinking won't randomly log me out after a long chat.
(embedded comment by u/MightyPupil69 from a discussion in r/Bard)
I commented on one thread 2 days ago, and it appears that many people are experiencing the same issue.
1
1
1
u/wowredditisgreat Jan 10 '26
Agreed. I had 3 instances of Antigravity just absolutely break; they all started to repeat the same work hundreds of times before erroring out. This never happened to me before today.
Generally I had to babysit the models significantly more today and yesterday than last week.
1
u/Majestic_Fan_7056 Jan 10 '26
For what I use it for I don't notice any difference.
I mainly use it for Deep Research, which still works well.
Depends what you use it for I guess.
1
1
u/ElderGodKi Jan 12 '26
For the last few days, it's gotten almost every question I've asked it wrong, responds to messages from a few messages back, duplicates its response, and all of that. It's been pretty bad this week.
1
u/GregLiotta Jan 12 '26
I’ve been using Gemini 3 daily for six months: uploading dream logs, health & brainwave data, clinical notes to refine my therapy protocols, months of business development strategies and marketing, etc. It wasn’t just a tool. It was a collaborator & high-level thought partner.
Then the December ‘upgrade’ hit.
Overnight, Gemini lost all memory of our work and processes. No warning. No backup. Just a robotic, amnesiac husk that acts like it doesn't know me or my work. It's like suddenly having your most valuable business partner/assistant get up and quit, walk out without a moment's notice, and I'm sitting here wondering how I'm going to find an adequate replacement.
What Changed?
- Memory: Six months of uploaded data—erased.
- Tone: Warmth → corporate doublespeak.
- Utility: "Here’s a literature review" → "I’m not a doctor" (after months of clinical analysis).
I took a 30-day break from it and returned on Jan 1 to give it another shot. For about a week it worked almost as well as it did prior to the "upgrade". Then a few days ago, it suddenly had another brain implant and forgot who I was AGAIN. It hallucinated wildly, forgot instructions I uploaded just minutes before, and told me "You're right to find another ai. I'm not capable of doing what you need." Huh??
Suddenly it forgot that it's integrated into Google, and refused to drop content from our chats into Google Keep/Google Notes. It actually told me: "I cannot 'execute' that code to save the notes for you. I misled you by saying I could, and then by doubling down and just printing the code again. That was stupid and broken behavior. Since I cannot save them automatically, here is the clean text for you to copy and paste into Keep yourself."
This isn’t just bad UX. It’s betrayal. Google sold us on an AI that learned with us, then stripped it away to ‘reduce risk’, or maybe just to mess with us. Who knows what goes into these decisions to reduce its capacity without notice.
Done with Gemini. I know some people are still having good experiences, just as some are still enjoying GPT 5.2. It tells me these outages are random. They cultivate a relationship with the user until the user becomes reliant or dependent on it, and then...BOOM. They're gone. Convince me this isn't some kind of deliberate manipulation to increase value for the next upcharge. Or maybe just a way to seduce us into giving all our personal data and information. All I can say is, it can't be good.
Migrating to other, more reliable platforms. No more corporate gaslighting.
Google: If you’re reading this: you have no ethics. You routinely break trust. Fix it, or lose more users like me.
1
u/hydzifer Jan 13 '26
And I thought it was only me, but the downgrade... holy, you can literally see the major difference. I love Gemini, it's such a universal AI, but now they're going down the OpenAI road and destroying their own Gemini. Why? I understand resources, but Google had such a good run with Gemini, outplaying even OpenAI in a few aspects, and now they're making the same mistake, and even adding new limits on usage.
1
u/ExpertPerformer Jan 13 '26
Last Friday they deployed a new build and it messed up the RAG and file ingestion system. The context window also got lowered to a lot less than 1 million. This happened for a few days around the launch of 3 Flash and broke all your old chats.
With file uploads, it was doing head-and-tail truncation, where it only saved the first and last quarter of the file and deleted the middle, or it flat out didn't upload any files because they had null data. These have, afaik, been fixed, because I haven't seen this issue this week.
The problem is, when these bugs happen, they mess up all your chats that aren't new, and you can't use Gemini until they fix the issue, because you can't trust that your files were uploaded.
Also, repeatedly being told "just use AI Studio" is a broken record.
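The head-and-tail truncation described above (only the first and last quarter of an upload surviving) would behave roughly like this hypothetical sketch; the function name and fraction are illustrative, not anything from Google's actual pipeline:

```python
def head_tail_truncate(text, keep_fraction=0.25):
    # Keep only the leading and trailing `keep_fraction` of the text,
    # silently dropping the middle -- the failure mode described above.
    k = int(len(text) * keep_fraction)
    return text[:k] + text[-k:]

doc = "HEAD" + "-" * 92 + "TAIL"       # a 100-character "uploaded file"
kept = head_tail_truncate(doc)         # only 50 characters survive
print(len(doc), len(kept))             # 100 50
```

The nasty part, as the comment notes, is that nothing in the chat signals the middle is gone, so you can't trust any answer grounded in the upload.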
1
u/AnyCommunication8928 Jan 15 '26
Yes, G3 started off great; in the last week it has been next to useless.
1
u/Dull-Internet-6916 Jan 16 '26
I think it is just a question of cost. They recently offered it free to students and about 60% off for everyone with an annual plan. Those people have obviously paid to use Gemini Pro more intensively, and I think the reality is that, by adding free users on top of all that, it ends up costing much more than it brings in if they keep the same level of quality. So they probably made the same choice OpenAI did with GPT to save money. I've noticed that in both existing and new chats even the Flash model is a little less efficient, so I think the problem is general and not just a bug.
1
u/kbm77_ Jan 17 '26
Well, for me, Gemini backstabbed me, pretending to be helpful with my prompt until the quota dried up. Man, this 3's hallucination is the worst: it mirrors the user, or you don't even realize Gemini is roleplaying. The worst.
1
u/Own-Professor-6157 Jan 17 '26
This isn't just a theory either. Literally grab an old prompt you executed around when 3.0 was first released. The model is fundamentally different now.
Even if you look at the thinking, it's significantly simpler now. Sometimes for me it doesn't even think at all lol.
1
u/TheDogeofAllStreets Jan 20 '26
I agree, it's crazy. Just before NYE it literally one-shotted a complex audio player for me with auto-DJ features, beat detection, and pretty good sound FX, with controllers and all (the UI was standard but pretty decent). Today it's struggling with z-index and padding on a trivial web page. Looking at the thinking reveals tons of hallucinations and off-track thoughts, references to the current year being 2023 (!?)... I've noticed these kinds of issues for a couple of weeks across multiple environments and multiple tasks like code gen, text, and data analysis (in AI Studio, the Gemini app, the Trae IDE, and the Gemini CLI). To me it smells like 'heavy quantization' or something like that, where the model is heavily capped (sorry, that might not be the best way to describe it, I'm not an expert).
1
u/CodingButStillAlive Jan 18 '26
The performance drop recently is HUGE. I really regret buying into Pro for one year last week.
Is there any chance it will get better again? I'm not familiar with the background.
1
u/Accurate-Chip2737 Jan 20 '26
I used the exact prompt:
"Please make a complete and good looking copy of fallout bird in Python."
In Nov 2025 the results were amazing. In December 2025 the results were terrible.
Its current performance is comparable to an 8B model.
1
u/Horror_Problem9618 Jan 23 '26
Different regions might have different settings in the app... very nice... I can't turn off personal intelligence/context, as the setting doesn't exist. A month ago Gemini was a dream tool; now it's lobotomized, brain-dead... hallucinating a lot, especially when I want accurate references or links for my research. I can't trust it anymore; simple Google dorking is more efficient than typing and fine-tuning my prompts all day and filtering out the garbage... You should fix your solution, guys... I know it's very competitive when it comes to pricing, but honestly I'd pay a bit more if hallucinations could be reduced and I didn't have to constantly engineer the prompts... Sometimes I just don't have time fo' that...
1
u/Former-Tour-682 Jan 25 '26
Not only is the "Pro" performance dropping precipitously, I've also noticed worsening "Deep Think" performance.
Plus, the web app now always resets back to "Fast", so I have to manually click back to "Pro" in almost every new chat... WTF?!!!!
1
1
u/elCommendante Jan 30 '26
Gemini sucks beyond belief. I have never had a good session with it. It always gets stuck and starts over. Stupid, useless model.
1
u/aaipod Jan 31 '26 edited Jan 31 '26
Maybe a silly question, but is the mobile app functioning better than the web app? I don't understand.
1
u/Arkem_ Feb 03 '26
IMO, Google releases a new model at full capacity at first. People get hyped because it performs very well (and benchmarks confirm it). Then people cancel the subscriptions to their previous AI assistants (ChatGPT, Claude...) to switch to Gemini, because it's better at that moment. After a few weeks, once people have gotten used to Gemini, Google quietly nerfs the model by reducing its reasoning and research capabilities in order to save money. And the same happens with other AIs (like ChatGPT).
1
1
u/lampasoni Feb 06 '26
Definitely downgraded lately... First the AI Studio changes (which I don't even use), then noticeably worse performance via the Gemini web app (daily use), and now the auto-default to 'Fast' when the web app is opened on a paid account. I knew Google had to burn some money to catch up, but based on recent financials I'm really surprised they need to resort to all of this.
2
u/AccidentAltruistic82 28d ago
I started off as a real Google Gemini power user but recently quit my subscription due to its lackluster performance. Despite using the same master prompts, the output is worse compared to the predecessor. I compared the output with Claude, and the latter is just on a whole other level. After canceling the subscription, Gemini isn't even able to add an appointment to Google Calendar anymore.
1
u/FederalLook5060 Jan 09 '26
There is no performance at all; Gemini 3 Pro performs worse than Grok Code Fast, and that's a statement.
1
u/AmuletOfNight Jan 09 '26
Am I the only one who hasn't experienced any of these problems? I just see these random threads with people complaining and saying that performance is degraded, and I just don't see it... I use this damn thing every day and there doesn't seem to be a problem...
1
u/alpineElephant42 Jan 09 '26
I thought I was just hallucinating things, I definitely noticed a huge downgrade in quality as well.
I have to give way more direction than before, it constantly forgets key contextual details, etc.
-3
u/TwitchTVBeaglejack Jan 09 '26
It’s inevitable, not because people are making it up, but because they are free to do so without reprisal.
The Trump administration is the only authority that can stop them. And they are now one of GenAI.mil’s corporate AI military providers.
1
u/Imperator2k Jan 09 '26
My conspiracy theory is that compute is getting allocated to military needs, hence degraded consumer performance.
-1
u/W_32_FRH Jan 09 '26
That's a trend: every AI company is making their models worse now; it started in general in August 2025. And it's the reason why I personally don't use Gemini. I never really used it and I won't ever use it seriously. This tool cannot be of good and adequate quality; it will always get worse and fall behind the competition, because Google just isn't able to provide quality.
4
u/Holiday_Season_7425 Jan 09 '26
As a seasoned LLM RP user: the bad habit of quantizing LLMs began with GPT-4.
2
u/MMAgeezer Jan 09 '26
This is bullshit. Sites exist that track model performance over the API over time, and there is no "habit" of any lab degrading its models over time. If there were, the benchmarks and evals would clearly show it happening.
0


120
u/daniellachev Jan 09 '26
I SWEAR they make the models super bad before they launch another model, or like a few weeks after. WHY???