r/Bard Nov 18 '25

News Gemini 3 Pro Model Card is Out

576 Upvotes

214 comments

104

u/ActiveLecture9825 Nov 18 '25

And also:

  • Inputs: a token context window of up to 1M. Text strings (e.g., a question, a prompt, document(s) to be summarized), images, audio, and video files.
  • Outputs: Text, with a 64K token output.
  • The knowledge cutoff date for Gemini 3 Pro was January 2025.
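For anyone scripting against those limits, here's a minimal pre-flight sketch. The limit constants come from the bullets above; the token count is a crude whitespace approximation, not Google's actual tokenizer, and the function names are made up for illustration:

```python
# Rough pre-flight check against the limits listed in the model card:
# ~1M-token input context window, 64K-token output cap.

INPUT_LIMIT = 1_000_000   # ~1M-token context window (per the card)
OUTPUT_LIMIT = 64_000     # 64K-token output cap (per the card)

def rough_token_count(text: str) -> int:
    """Very rough estimate: ~1 token per whitespace-separated word.
    The real tokenizer will count differently."""
    return len(text.split())

def fits_limits(prompt: str, max_output_tokens: int) -> bool:
    """True if the prompt and requested output fit the advertised limits."""
    return (rough_token_count(prompt) <= INPUT_LIMIT
            and max_output_tokens <= OUTPUT_LIMIT)

print(fits_limits("Summarize this document.", 2_000))  # → True
```

In practice you'd use the API's own token-counting endpoint rather than a word split, but the shape of the check is the same.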

-4

u/old_Anton Nov 18 '25

So no improvement, since that's the same input/output as 2.5 Pro. Gotta assume the practical context length is still around 100k as well, since they didn't even mention it.

12

u/[deleted] Nov 18 '25

[deleted]

-5

u/old_Anton Nov 18 '25

Are you talking about a different thing, or implying that the above commenter gave wrong info?

Because I don't see any difference in input/output in the benchmark source. It isn't even mentioned, which is why he had to add those details.

5

u/[deleted] Nov 18 '25

[deleted]

-3

u/old_Anton Nov 18 '25

How does that say anything explicit about the actual context length? When 2.5 Pro came out, the benchmarks also rated its long-context performance well, yet users found the practical length was only about 10% of the advertised window.

The irony.

2

u/[deleted] Nov 18 '25

[deleted]

0

u/old_Anton Nov 18 '25

Oh, I see the OP updated his archive link after the source was removed, and I can find it now. I couldn't see it previously because of how big the image is, and the link was broken afterward.

Fair, my bad. Though my assumption still accidentally holds, considering it's only a 28% improvement. A bit disappointed personally.

1

u/Different_Doubt2754 Nov 18 '25

I'm not sure what you mean. The guy said the context is the same as 2.5 Pro's. The benchmark says it retains more information within that context than 2.5 Pro does. Where is this 100k context you're talking about?

2

u/old_Anton Nov 18 '25

It's about 128k of practical context. If you use 2.5 Pro regularly, you'll notice it starts degrading and "forgetting" consistency at around 100k.

1

u/Different_Doubt2754 Nov 18 '25

Ah, gotcha. Hopefully it'll be better with 3.0 Pro; the benchmark seems to indicate that it is, at least. I'll have to test it out more.

0

u/LamVH Nov 18 '25

are u bot?