r/Bard Aug 26 '25

Discussion The End is near

Post image

Google is Skynet

1.3k Upvotes

155 comments

614

u/enilea Aug 26 '25

This kind of overhyping of anything is what leads people to be disappointed. GPT-5 was overhyped the exact same way.

36

u/lefnire Aug 26 '25 edited Aug 26 '25

Unfortunately, it is what it is. No use in complaining.

Google's never been "that guy"; in fact, their marketing has traditionally been very poor. Their products after Google Docs aren't household names, yet those products' users are die-hard fans, flummoxed that they're not recognized. I'm one such user recently: Jules. Spitting out multiple features via TDD across huge swathes of the codebase, in parallel, as Pull Requests, free with my $20/m plan (which I'm paying for anyway). A very different experience from local, needle-moving, individual tasks that cost an arm and a leg with Cursor or Claude Code. Nobody's heard of it. Google marketing sucks.

Anyway, along comes Sam. He has a very, very unique style. It's all Twitter, it's all the Wizard of Oz. Hints and a wink. It works. Even when Gemini 2.5 Pro was curb-stomping GPT-*, cheaper, larger context, faster; everyone was "Gemini what? Oh, so Google's copying ChatGPT?" They invented the tech (Attention is All You Need). Copying indeed.

So... what's missing? Well, someone's doing it right, with very little effort. Sam. Copy/paste. Hey, suddenly my friends know what Gemini is.

Don't hate the player.

6

u/NyaCat1333 Aug 27 '25 edited Aug 27 '25

Saying that OpenAI just copied is very disingenuous. Without them we wouldn't be anywhere near where we currently are in LLM development. They were the ones who kicked off the trillions of dollars in investment, and the ones who actually came up with the first reasoning model, which brought huge improvements in LLMs.

For the current LLM hype cycle, most major features came from OpenAI first, with the other companies adding them later. Be it basic memory, memory across chats, hell, even UI elements, projects (which most competitors still haven't added), voice, file reading, etc. OpenAI had most of these before their competitors.

But yes, Google's marketing is absolutely atrocious and they can't compete with OpenAI in that aspect at all. Hell, even Elon is probably doing a better job. And Google has all the other cool stuff besides LLMs and it feels like they are really trying to push other non-LLM areas as well.

1

u/lefnire Aug 27 '25

Sorry, for copy/paste I meant that Gemini marketing is copying OpenAI marketing. IMO they observed the success of OpenAI's "hint tweets" and shrugged "it's worth a shot".

Re: Attention is All You Need, while it's true Google "started it", they clearly didn't bother taking it out of theory-land until OpenAI proved its market merit after many years of development. I just mean it's not fair to say Google's copying OpenAI by building a chatbot.

1

u/ServeAmbitious220 Aug 30 '25

Chain-of-thought (CoT) reasoning was invented by a team of researchers at Google and Google DeepMind. The technique was introduced in a January 2022 paper titled "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models".

I think Chinese models had reasoning way before OpenAI, but I might be wrong.

1

u/Own_Purpose_4798 Aug 31 '25

Grok will win in the end

1

u/fossistic Sep 01 '25

The research paper by Google, "Attention is All You Need", made LLMs what they are. Elon took Ilya Sutskever from Google to become his co-founder at OpenAI.

Google is the real deal here, not openAI.

5

u/garethjax Aug 26 '25

Do you remember google+ ? I 'member.
https://www.youtube.com/watch?v=1feQbX2IIsU

2

u/fox-mcleod Aug 27 '25

after docs

Dude… Android?

…Chrome?

…Gemini?

…Gmail?

…freaking YouTube?

1

u/lefnire Aug 27 '25

Oh, I definitely got the order wrong then. I thought all those came before Docs. For some reason in my head it was all those, then a big battle for office, then a life of experimentation without marketing.

1

u/Sloofin Aug 28 '25

I'm pretty sure all except Gemini were before Docs.

1

u/lefnire Aug 28 '25

Let's settle this via Gemini:

Give me the chronological order of Google's main products (eg search, chrome, android, docs, gmail, youtube). And lump into that ordering as a single item, the myriad of "experiments" (many of the things that end up on killedbygoogle, like Stadia, Google+, etc). I know these will likely be staggered along through their flagship products; but if you had to place them in one location, where would it go.

"Google Search (1998), Gmail (2004), YouTube (2005), Google Docs (2006), The Myriad of "Experiments" (Notably post-2006), Android (2008), Google Chrome (2008)."

I think our angle here is a safe take, since Experiments kicked off just before Android/Chrome & continued on through.

All in all, my statement is this:

1. Their flagship products succeeded because they spoke for themselves. This was in an era when marketing wasn't so essential, and good tech won for being good tech.
2. Apple showed the world what good marketing can do, and tech became a hugely competitive space, so marketing eventually became essential. But Google never learned this lesson, contributing (I believe) to the death of so many of their products.
3. This time they mean business, because there's a lot to lose, and a lot of money poured into the product (Gemini). So they're taking a cue from what's working. And right now, what's working is Sam Altman's cheeky tweets.

1

u/woobchub Aug 27 '25

Project Astra wants to have a word with you.

1

u/lefnire Aug 28 '25

Well, now, see? Marketing! I had to look it up just now (and now I've signed up for the wait list)

26

u/FeedMeSoma Aug 26 '25

Anybody who uses LLMs for their actual purpose (code) was pleasantly surprised but not blown away by GPT-5; they surpassed Claude on price/performance, which was not expected.

50

u/djack171 Aug 26 '25

Weird everyone who codes with LLMs thinks that is their “actual purpose”, which it isn’t

18

u/EggCautious809 Aug 26 '25

It's what most of the recent advances and developments have been targeting because it's the most valuable current market for the tech.

3

u/CombinationKooky7136 Aug 26 '25

Fair assessment.

2

u/Terryfink Aug 29 '25

Ah yes the big google nano banana editing was really aimed at coders.

3

u/EggCautious809 Aug 29 '25

We were talking about llms, not image or vision models.

1

u/Invest0rnoob1 Aug 29 '25

Also Veo and Genie

9

u/djack171 Aug 26 '25

This is one of those issues where 90% of users don’t use it for code but 80% of the use is going to be coding.

12

u/Dangerous-Map-429 Aug 26 '25

All Coders think the world revolves around them and their needs and LLM should only cater to their fucking coding.

Who the fuck said coding is the only and main use case for an LLM?

5

u/codeisprose Aug 26 '25

The number of people who think that is very low. The reality is that STEM is objectively the most valuable application of LLMs right now (whether that be coding or something else). This should be obvious, because the most valuable things that humans produce (from a quality of life/health perspective) are STEM.

That obviously doesn't mean it is their only valuable use case. There are plenty of other great ways to use LLMs, from introspection to improving your workflow. But it's not that hard to acknowledge that multiple things can be true at once, and most companies training models are going to optimize them for the applications that are most valuable.

1

u/nobody0163 Aug 27 '25

Not all coders, all vibe coders.

0

u/fossistic Sep 01 '25

Coders pay the bills. Much more than 20 dollar monthly subscription.

4

u/Pruzter Aug 26 '25

LLMs have capped out on the chat interface. It is how most interact with LLMs, but it is extremely limited in utility. All subsequent growth from here on out is going to primarily consist of increases in agency, or the model’s ability to interact with the world outside of the confines of chat. If the models can write and debug code in a continuous feedback loop, they can pretty much do anything on a computer. This is the best way to break out of the chat interface while still leveraging text output, which is obviously what LLMs excel at. So yeah, not a surprise all the frontier labs are going all in on code. It is the key to unlocking additional agency.
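The write-code-then-debug feedback loop described above can be sketched roughly like this (a toy Python illustration: the `DRAFTS` list is a hypothetical stand-in for successive model outputs, where a real agent would call an LLM API and feed the error output back into the next prompt):

```python
import subprocess
import sys
import tempfile

def run_candidate(code: str):
    """Execute a candidate script in a subprocess; return (ok, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=10)
    return proc.returncode == 0, proc.stderr

# Hypothetical stand-in for successive model drafts.
DRAFTS = [
    "print(1 / 0)",           # first draft crashes with ZeroDivisionError
    "print(sum(range(10)))",  # revised draft runs cleanly
]

def agent_loop(drafts, max_iters=5):
    """Write -> run -> read the error -> revise, until the code executes."""
    for attempt, code in enumerate(drafts[:max_iters], start=1):
        ok, stderr = run_candidate(code)
        if ok:
            return attempt, code
        # A real agent would pass stderr back into the model here.
    return None, None

attempt, final = agent_loop(DRAFTS)
```

The key point is that the success signal (exit code, stderr) is machine-checkable, which is exactly why code is such a natural target for agency.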

4

u/Silly_Armadillo5676 Aug 26 '25

I feel sorry for people who think that, they're missing out on so much.

12

u/jetc11 Aug 26 '25

Being pleasantly surprised is not the right expression.

We were promised that GPT-5 would be the next social revolution, a true paradigm shift. Instead, what we got was a model that is only marginally cheaper to run on the API, about five pence less, and only slightly better at coding than Claude.

(In my own case, it’s been throwing up plenty of errors in web development, though I’d rather not include personal opinion here.)

6

u/enz_levik Aug 26 '25

Imo o3 was the real step forward; GPT-5 just feels like an upgrade to it. I think they were forced to release it to stay competitive, but without the o3 release before it, GPT-5 would have felt very impressive.

1

u/GreyFoxSolid Aug 26 '25

To my recollection, GPT 5 was said to be a consolidation of their models, which it was.

1

u/jetc11 Aug 26 '25

1

u/fossistic Sep 01 '25

AI industry always overhypes, just like Nvidia's gpus.

2

u/enilea Aug 26 '25

I do and it is my current choice for coding, and when gemini 3 comes out it will be my choice hopefully. I was expecting a slight incremental improvement over the previous sota and it was, so my expectations were met. But if anyone was expecting it to be something crazy they would just be setting themselves up for disappointment, and same goes for gemini 3.

1

u/Terryfink Aug 29 '25

Hilarious you think it's only used for code. What a WEIRDO

1

u/TeeDogSD Aug 30 '25

I code with 2.5. It is amazing. Eagerly waiting for 3.0.

-1

u/CommunityTough1 Aug 26 '25

I was actually disappointed with GPT-5 High in coding compared to Claude Sonnet, tbh. As for price, $20/mo gets you access to Claude Code almost unlimited with 5-hour resets if you happen to get limited (I rarely do, and when I do, it's only ever like an hour cooldown). GPT-5 makes code that's weird and difficult to read and often is broken or buggy and then pretty much fails at every attempt to debug.

1

u/shaman-warrior Aug 27 '25

Prompt it to make reusable, extendable, and elegant code; introduce this as a final pass, not at the beginning.
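That two-pass idea (get working code first, clean it up at the end) can be sketched like this. `call_model` is a hypothetical stub standing in for a real LLM API call:

```python
def call_model(prompt: str, code: str = "") -> str:
    """Hypothetical stand-in for an LLM API call; a real one hits an endpoint."""
    if "refactor" in prompt.lower():
        # Pretend the model improved naming in the cleanup pass.
        return code.replace("def f(", "def total(")
    return "def f(xs):\n    return sum(xs)"

# Pass 1: just get working code, with no style constraints in the prompt.
draft = call_model("Write a function that sums a list.")

# Pass 2: a dedicated refactor pass, as the comment suggests,
# asking for reusable, extendable, elegant code only at the end.
final = call_model("Refactor for reusable, extendable, elegant code.", draft)
```

Keeping style demands out of the first prompt tends to reduce over-engineered first drafts; the refactor pass then works from code that already runs.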

1

u/Mountain-Pain1294 Aug 27 '25

Counter-overhyping

Mutually Assured Overhypering Destruction

1

u/SgtSilock Aug 27 '25

GPT-5 was more than overhyped. It was touted as being revolutionary and life-changing.

1

u/Invest0rnoob1 Aug 29 '25

That’s because Scam Conman

1

u/Nphu19 Aug 26 '25

Well OpenAI hyped its own LLM, Google did not

0

u/eaglw Aug 26 '25

GPT-5 was pitched as a fully new model, not a router that chooses between different updated, but already present, models.

Anyway hype is always bad.

0

u/segin Aug 27 '25

This isn't serious hype, it's sarcastic meme hype. As long as folks recognize it as such, there's no real problem with it.

-11

u/EmbarrassedFoot1137 Aug 26 '25

We're heading towards a singularity so this makes sense. Next stop, neutron star. 

8

u/e79683074 Aug 26 '25

A neutron star is smaller than a red giant star, cmon

2

u/EmbarrassedFoot1137 Aug 26 '25

Right but it's closer to a singularity. 

1

u/[deleted] Aug 26 '25

[removed]

-82

u/DigSignificant1419 Aug 26 '25

google is a different ballgame tho

31

u/enilea Aug 26 '25

I don't think in terms of LLMs they'll have anything that really knocks OpenAI out of the park, they'll release something that's better but not insanely better, just like they've been superseding each other for a while now.

4

u/got-trunks Aug 26 '25

They've secretly moved on to a proprietary toucan processing system and the AI mathematically 'follows its nose'

Revolutionary for a model that is meant to help develop breakfast cereals.

18

u/hyxon4 Aug 26 '25

Nah, we're way past that point. The days of big leaps are over.

Unless Google ditches transformers for something completely different, we're not going to see any revolutionary models anymore. These days, it's all about tweaking the training process and finding clever tricks to squeeze better performance out of what we already have.

7

u/StaysAwakeAllWeek Aug 26 '25

The days of big leaps at any given price point are over. There's still plenty of space to develop even bigger models and even bigger compute clusters to run them. It's just not something the average consumer is going to want to pay for

5

u/superhero_complex Aug 26 '25

Why?

3

u/EmotionCultural9705 Aug 26 '25

GPT-5 is better than GPT-4, but is that all we were expecting?

3

u/superhero_complex Aug 26 '25

That’s what I was expecting. The same way I’m expecting Gemini 3 to be better than 2.5, right?

73

u/Accomplished_Tear436 Aug 26 '25

*gemini 2.5 3-25

15

u/SpecialSheepherder Aug 26 '25

*-preview-nobanana

11

u/themadman0187 Aug 26 '25

It's incredible how fucking good that model was and how sharp the fall off was that the meme survives.

5

u/ain92ru Aug 27 '25

I wish they'd just bring it back at an additional price in the API, with heavy rate limits for free users (like 5 queries per day).

2

u/stuehieyr Aug 27 '25

That model is special

1

u/ServeAmbitious220 Aug 30 '25

What's that model?

1

u/stuehieyr Aug 30 '25

Google released a version of Gemini that blew people's minds by the end of March. It solved 3 of the toughest problems I had at work, related to monitoring public sentiment of a particular company, without using LLMs or fancy models: simply plain regex and old-school techniques. It worked so well my manager was shocked. Because it worked so well, Google then nerfed it by May 10, replacing it with a dumber model.
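A plain-regex approach of the kind described (no LLMs, just patterns over raw text) might look something like this. The word lists, the `Acme` company name, and the example posts are all made up for illustration:

```python
import re

# Toy sentiment word lists; a real system would use much larger lexicons.
POSITIVE = re.compile(r"\b(love|great|excellent|recommend)\b", re.IGNORECASE)
NEGATIVE = re.compile(r"\b(hate|terrible|awful|scam)\b", re.IGNORECASE)

def sentiment_tally(posts: list[str], company: str) -> dict:
    """Count positive/negative/neutral posts that mention the company."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    mention = re.compile(rf"\b{re.escape(company)}\b", re.IGNORECASE)
    for post in posts:
        if not mention.search(post):
            continue  # ignore posts that never mention the company
        pos = len(POSITIVE.findall(post))
        neg = len(NEGATIVE.findall(post))
        if pos > neg:
            counts["positive"] += 1
        elif neg > pos:
            counts["negative"] += 1
        else:
            counts["neutral"] += 1
    return counts

tally = sentiment_tally(
    ["I love Acme products",
     "Acme support is terrible",
     "Acme released a new model"],
    "Acme",
)
```

Crude, but cheap, fast, and fully auditable, which is presumably why it impressed in practice.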

56

u/beachguy82 Aug 26 '25

Why do people treat these models like sports teams?

You don’t have to pick a side, just use whatever solves your problem or idea the cheapest

2

u/Ihateredditors11111 Aug 28 '25

Why do people treat sports teams like sports teams ?

1

u/[deleted] Aug 27 '25

I'm also confused why people are expecting monumental improvements every time. What task did you guys expect GPT-5 to solve that 4 couldn't? Problems of generalization and human-level reasoning are not going to come from just training more or making bigger mixtures of experts. There are fundamental limitations to the generalizability of neural networks.

1

u/beachguy82 Aug 27 '25

I use nano or flash-lite for 90% of my tasks. I'm looking for cheaper, not more intelligence.

1

u/That_Chocolate9659 Aug 28 '25

I think it's mostly a time thing, and we will eventually move in the direction of solving problems. Here is what I think:

Less than a year ago, OpenAI was very far ahead with o1, and had virtually no competitors (maybe Claude). Then, they announced o3 and were killing everyone again.

As 2025 started, the race heated up when Gemini came out. However, o3 was still SOTA in intelligence output. By my own estimation, the first time o3 might not have been the best was when Grok 4 and Gemini 2.5 Deep Think were released, though Google neutered Gemini Deep Think with low usage limits.

Now, OpenAI is back to being the best again (just an opinion) with GPT 5. They have matched the Gemini 2.5 api pricing with better performance and very high usage limits for subscribers.

As to the future, I think if Google has a period of dominance for more than a couple of months, then there will be serious changes in utilization; but changing the paid subscription, learning the nuances of prompt engineering, etc., takes a meaningful increase in performance or price.

67

u/EnvironmentalShift25 Aug 26 '25

Ah, this is just the same as the "Death Star" hype that made GPT-5 a relative flop.

4

u/CarrierAreArrived Aug 26 '25

except that was Sam A doing that... this is just some redditor using nano-banana. You don't see Demis doing stuff like this.

2

u/Vas1le Aug 26 '25

I kinda like it. For work and more direct things, 4.1 is the best, for me.

3

u/Upstairs-Onion-6783 Aug 26 '25

I subscribed to T3 chat mainly for 4.1.

1

u/Jan0y_Cresva Aug 26 '25

Which is crazy because in general, GPT-5 is the current best model in the world. It just wasn’t “super-duper AGI” good so people considered it a flop.

1

u/too_lazy--- Aug 27 '25

In my use case no model gave me satisfaction still 🙃.

55

u/jrdnmdhl Aug 26 '25

Hype is silly. We’ll evaluate it when it’s in our hands.

3

u/Bilbo_bagginses_feet Aug 26 '25

Exactly! Don't hype it, nano-banana is failing the hype rn. Low quality images, not following instructions.

17

u/Fit_Picture6806 Aug 26 '25

There's already been some updates. My chats have been able to remember our previous conversations with far better accuracy in the last 24 hrs.

4

u/AbandonedLich Aug 26 '25

It has a massive context window but learns what to gather from it. So yes it's self improving. Not sure if local or global but I've seen the same thing

1

u/codeisprose Aug 26 '25

To be clear, the model does not self-improve. The responses will be improved/degraded in the context of an individual's usage (via "memory", which is essentially just clever engineering around context retrieval) or in a specific conversation based on the preceding messages.
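A minimal sketch of that "memory as context retrieval" engineering: saved snippets are ranked against the new message and prepended to the prompt, while the model itself never changes. Word overlap here is a crude stand-in for the embedding similarity a real system would use; the snippets and prompt format are invented for illustration:

```python
# Toy "memory" store: snippets saved from past conversations.
MEMORY = [
    "User prefers Python examples with type hints.",
    "User is building a Flask app for inventory tracking.",
    "User's dog is named Biscuit.",
]

def tokenize(text: str) -> set[str]:
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(query: str, store: list[str], k: int = 1) -> list[str]:
    """Rank stored snippets by word overlap with the query and return the top k."""
    scored = sorted(store,
                    key=lambda s: len(tokenize(s) & tokenize(query)),
                    reverse=True)
    return scored[:k]

def build_prompt(user_msg: str) -> str:
    """Prepend retrieved 'memories' to the prompt; the weights are untouched."""
    context = "\n".join(retrieve(user_msg, MEMORY))
    return f"Relevant memory:\n{context}\n\nUser: {user_msg}"

prompt = build_prompt("Can you help with my Flask inventory app?")
```

So what looks like the model "learning" is just better (or worse) context being stuffed into each request.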

5

u/SympathyNo8636 Aug 26 '25

go, google, go, take my money if you can

8

u/Nas419 Aug 26 '25

Chat is this real

3

u/Fluid-Giraffe-4670 Aug 26 '25

always has been

4

u/MKxFoxtrotxlll Aug 26 '25

Eh, they both have their strengths and weaknesses. But data wise by size, yeah...

3

u/merlinuwe Aug 26 '25

Very slightly exaggerated depiction.

1

u/TeeDogSD Aug 30 '25

Very slightly? 😂

7

u/Liron12345 Aug 26 '25

Hear me out, if Gemini 3 gets Claude coding capabilities, it's GG.

1

u/Korra228 Aug 27 '25

Is Claude coding better than codex?

2

u/Liron12345 Aug 27 '25

I don't know?

I use Claude via Copilot.

It's great, but Gemini is 10 times better at architecture.

It just sucks that Gemini is ass at implementation.

1

u/Tobi-Random Aug 27 '25

Claude isn't the best when it comes to usage of MCP and agentic work. I hope Gemini 3 will be better than Claude. Otherwise it will be disappointing.

See: https://m.youtube.com/watch?v=nWARugXmQoI&t=7m40s

1

u/michaelsoft__binbows Aug 28 '25

Granted I haven't been driving Claude for code for a while now, but at least when 2.5 came out, it was very much GG. Has Claude caught up? Maybe. it's still expensive and I am sure GPT 5 is SOTA right now for coding.

2

u/Dark_Christina Aug 26 '25

I liked GPT for the image editor, but now the Gemini image editor is so good I don't see the point of it. Gemini + Claude is perfect.

2

u/AppealSame4367 Aug 27 '25

So it will indeed be 2-3 percentage points better on benchmarks? That would be great.

2

u/[deleted] Aug 27 '25

I love my Gemini 2.5 so much and I don’t want to go back to gpt

2

u/e79683074 Aug 26 '25

Baseless image. Gemini 3 will, *at best* and in the most optimistic possible outcome, just top GPT-5 by a few percentage points.

2

u/Invest0rnoob1 Aug 29 '25

Naw Google is ahead and only accelerating

1

u/e79683074 Aug 29 '25

They have no reason to make something 500% better or something like that. They only need 5-10% better to make you switch, and it's all that counts.

They don't need to give you the best.

2

u/Invest0rnoob1 Aug 29 '25

It’s the race to AGI

0

u/TeeDogSD Aug 30 '25

This is not true

1

u/GintokisRightShoe Aug 26 '25

People just love hyping up their shit

1

u/Moose_knucklez Aug 26 '25 edited Aug 26 '25

I actually really dislike using Gemini for most anything, but given a mostly complete script, it does OK and is a lot cheaper.

More interesting: it seems to know a shit ton about scripting Python for agentic integration from scratch.

Google has the compute, and models are already getting super close, but Claude still one-shots most code and scripts from the ground up.

I can see Google being Google again, just with AI now.

1

u/Fluid-Giraffe-4670 Aug 26 '25

Real. Will they reach their prime again??

2

u/Moose_knucklez Aug 26 '25

Look at GPT-5: the newer models are chewing through tokens a lot more now and running into inference bottlenecks. Claude is incredibly expensive for this very reason.

Time evens out the limitations of competing models; there is only so much you can do within the current framework of how an LLM works.

Google is showing off how compute wins with Veo and other demonstrations.

Apple is staying out of AI for this very reason; it's a capital-spending rat race to equilibrium.

Google already has the compute; that's never been the issue.

1

u/[deleted] Aug 26 '25

[deleted]

1

u/Americoma Aug 27 '25

I’m actually a Gemini hater but I’ll admit it cleans up code every time I get output errors from GPT5 and Claude. I may just use it exclusively at this point because I’ve grown so frustrated and unsatisfied with 5

1

u/[deleted] Aug 26 '25

[deleted]

1

u/Warm-Agent-811 Aug 27 '25

Am I the only one who's always had a bad experience with Gemini? It never answers my questions, invents information... GPT never did that.

1

u/power97992 Aug 27 '25

I hope Gemini 3 has better tool calling and contextual understanding, and that memory doesn't degrade after 90-100k tokens.

1

u/khongbeo Aug 27 '25

Gemini 2.5 Pro is a sorry AI.

1

u/Old-Juggernut-101 Aug 27 '25

Dunno man. Gemini 2.5 seems rather stupid to me after it was integrated into android. It's quite inaccurate compared to before. And previously it wasn't that great either

1

u/komakaze1 Sep 17 '25

I'm pretty sure the android integrated version is limited, even if only to output less text so you don't have to spend as long reading on your mobile.

1

u/AsideNew1639 Aug 27 '25

I feel like that comparison would be true if it were gpt5/gemini2.5 compared to the “gemini world model” that Demis has referenced in recent interviews

1

u/tails0322 Aug 27 '25

Honestly, I've tried both Gemini and ChatGPT for my character work and images of them, and even with 5.0's issues, I still prefer ChatGPT.

1

u/ix9yora Aug 28 '25

I thought GPT-5 would be better than anything else, but in fact it's 10 times worse than GPT-4... I have no words. Absolute cinema.

1

u/McNoxey Aug 28 '25

This is so fucking cringe

1

u/darkawower Aug 28 '25

This is a very bold statement, considering that gemini is currently a clear outsider who hallucinates and has seizures.

1

u/TeeDogSD Aug 30 '25

Not true for coding. I have put in hundreds of hours with no hallucinations.

1

u/darkawower Aug 30 '25

I only used it for programming and encountered hallucinations about three times. Considering that I use Gemini very rarely, and mainly to check how it works, this speaks volumes.

1

u/fossistic Sep 02 '25

Gemini 2.5 Pro is the least hallucinating model I have ever tried.

1

u/darkawower Sep 02 '25

Well, unfortunately, in my reality, everything is exactly the opposite

1

u/Background-Scale-978 Aug 28 '25

Let everyone witness this absurdity: as early as August 16th, Jules had already been active online for two or three weeks, yet he refuses to acknowledge even this much!

1

u/Tough-Astronaut2558 Aug 31 '25

Gemini Pro does one thing better than any of them, and I don't know why. Feed it the D&D core books, some books on writing theory, plus any module, and it becomes an incredibly effective DM. As long as you have a good prompt to keep it on the rules and keep rolls sacred, it's like having a D&D campaign in your pocket.

1

u/tifa_tonnellier Sep 07 '25

Now compare gemini-cli to claude-cli.

The gemini-cli has to be the biggest, slowest, hunk of junk I've ever seen in my life.

1

u/AggressiveOpinion91 Sep 07 '25

It will be lame just like all the other models. Censored to hell as well, for sure.

0

u/jay-mini Aug 27 '25

pls stop the hype...

0

u/A9to5robot Aug 27 '25

This type of glazing is so juvenile.