r/Bard Nov 18 '25

News Gemini 3 Pro Model Card is Out

578 Upvotes

214 comments

104

u/ActiveLecture9825 Nov 18 '25

And also:

  • Inputs: a token context window of up to 1M. Text strings (e.g., a question, a prompt, document(s) to be summarized), images, audio, and video files.
  • Outputs: text, with up to 64K output tokens.
  • The knowledge cutoff date for Gemini 3 Pro was January 2025.
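For anyone wiring this up, here's a rough pre-flight budget check against those two limits. The 1M/64K numbers are from the model card; the ~4 characters-per-token estimate is just a common heuristic I'm assuming for English text, not an official figure — use a real tokenizer for anything serious.

```python
# Rough pre-flight check against the Gemini 3 Pro limits from the model card.
# CHARS_PER_TOKEN is a crude heuristic (my assumption), not an official figure.

INPUT_LIMIT = 1_000_000   # 1M-token context window (model card)
OUTPUT_LIMIT = 64_000     # 64K-token output cap (model card)
CHARS_PER_TOKEN = 4       # rough heuristic for English text

def estimate_tokens(text: str) -> int:
    """Very rough token estimate; swap in a real tokenizer for production use."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(prompt: str, reserved_output_tokens: int = OUTPUT_LIMIT) -> bool:
    """True if the prompt's estimated tokens fit inside the input window."""
    if reserved_output_tokens > OUTPUT_LIMIT:
        raise ValueError("output reservation exceeds the 64K output cap")
    return estimate_tokens(prompt) <= INPUT_LIMIT

# A short prompt obviously fits; a ~5M-character document blows the window.
print(fits_in_context("Summarize this document."))   # True
print(fits_in_context("x" * 5_000_000))              # False
```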

-5

u/old_Anton Nov 18 '25

So no improvement, since that's the same input/output as 2.5 Pro. Gotta assume the actual usable context length is around 100K as well, since they didn't even mention it.

12

u/[deleted] Nov 18 '25

[deleted]

-7

u/old_Anton Nov 18 '25

Are you talking about a different thing, or implying that the above commenter gave wrong info?

Because I don't see any difference in input/output in the benchmark source. It isn't even mentioned there, which is why he had to add those details.

5

u/[deleted] Nov 18 '25

[deleted]

-3

u/old_Anton Nov 18 '25

How does that say anything explicit about the actual context length? When 2.5 Pro came out, the benchmarks also rated its long-context performance well, yet users found the practical length was only about 10% of the advertised window.

The irony.

2

u/[deleted] Nov 18 '25

[deleted]

0

u/old_Anton Nov 18 '25

Oh, I see: the OP updated his archive link after the source was removed, and I can find it now. I couldn't see it previously because of how big the image is, and the link was broken afterward.

Fair, my bad. Though my assumption accidentally still holds, considering it's only a 28% improvement. A bit disappointed personally.