Nvidia’s long-awaited RTX 50 series GPUs made their debut at CES 2025, and as expected, Nvidia spent more time talking about AI than about the GPUs themselves. Once you filter out all the AI fluff, though, it does seem like you’re still left with some powerful new cards.
The significant feature upgrades coming to the RTX 50 series include DLSS 4, Nvidia Reflex 2 (essential, I believe, for DLSS 4 to feel right), and RTX Neural Shaders. I’ll go over these in more detail in a bit. Raw performance has gone up as well, but Nvidia hasn’t released any hard numbers on that front, so I can’t say by how much.
Nvidia announced four cards in total:
- RTX 5090 with 21,760 cores and 32GB VRAM for Rs 2,14,000
- RTX 5080 with 10,752 cores and 16GB VRAM for Rs 1,07,000
- RTX 5070 Ti with 8,960 cores and 16GB VRAM for Rs 80,000
- RTX 5070 with 6,144 cores and 12GB VRAM for Rs 59,000
Notably, the “entry-level” RTX 5060 series is nowhere in sight at this time.
Judging by the specs, the GPUs all have about 20-30% more cores (Tensor, RT, Shader, etc.) across the board and an upgrade to much faster GDDR7 memory. The biggest update is in the AI department, where we see a new architecture for processing AI workloads.
In effect, Nvidia is claiming a roughly 2x increase in AI performance over previous gen parts, and with DLSS 4, 2-4x “better” performance in gaming. “Better” is in quotes because it comes with a million caveats, but we’ll ignore them till reviews are out.
Why gaming GPUs need AI
When Nvidia introduced real-time ray tracing in 2018, the performance hit was so bad that even the most powerful GPUs at the time struggled to deliver playable frame-rates. To combat this, Nvidia introduced DLSS or Deep Learning Super Sampling. DLSS worked by rendering the game at a lower resolution and then upscaling each rendered frame using AI. As an aside, CES demos show the RTX 5090 – a Rs 2,14,000 GPU – struggling to hit 30 fps without some form of AI magic.
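As a rough mental model of the render-low-then-upscale idea (a toy sketch only — actual DLSS runs a trained neural network fed with motion vectors and frame history, not the nearest-neighbour stand-in below):

```python
# Toy sketch of the DLSS idea: render at a low resolution, then upscale
# each frame for display. Real DLSS uses a trained neural network plus
# motion vectors; a simple nearest-neighbour upscale stands in for it here.

def render_low_res(width, height):
    """Pretend renderer: returns a small frame as a 2D grid of 'pixels'."""
    return [[(x, y) for x in range(width)] for y in range(height)]

def upscale(frame, factor):
    """Nearest-neighbour upscale: each low-res pixel becomes a factor x factor block."""
    return [
        [frame[y // factor][x // factor]
         for x in range(len(frame[0]) * factor)]
        for y in range(len(frame) * factor)
    ]

low = render_low_res(960, 540)   # render at a quarter of the pixel count...
high = upscale(low, 2)           # ...then upscale to 1920x1080 for display
print(len(high[0]), len(high))   # 1920 1080
```

The GPU only pays the rendering cost of the 960x540 frame, which is why the technique buys so much performance.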
Four years later in 2022, Nvidia released the RTX 40 series GPUs with support for DLSS 3 frame-generation. To oversimplify how this works, DLSS 3 would look at an existing frame, predict when the next frame would render and what it would look like, and then generate a fake frame between the existing frame and the predicted one. It’s as unnecessarily complicated as it sounds, and it led to some predictable issues: these generated frames are literally imaginary, and their accuracy depends heavily on the player making slow and predictable movements.
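The in-between frame can be pictured like this (a deliberately crude sketch — DLSS 3 uses motion vectors and an optical-flow accelerator, not the plain per-pixel blend below):

```python
# Toy sketch of single-frame generation: produce an in-between frame
# from two known frames. DLSS 3 uses motion vectors and an AI model;
# plain linear interpolation stands in for that here.

def generate_between(frame_a, frame_b, t=0.5):
    """Blend two frames pixel-by-pixel; t=0.5 lands halfway between them."""
    return [
        [a * (1 - t) + b * t for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

frame_1 = [[0, 0], [0, 0]]       # brightness values in two frames
frame_2 = [[10, 10], [10, 10]]
fake = generate_between(frame_1, frame_2)
print(fake)                      # [[5.0, 5.0], [5.0, 5.0]]
```

The sketch also hints at the failure mode: if the second frame is a *prediction* and the player does something unexpected, the blended result is wrong for every pixel that moved.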
It’s easy enough to predict movement when you’re driving in a straight line, say, but swerve suddenly and all the predictions are off. It’s even worse in the case of shooting games, where gamers tend to be even less predictable. Fine detail like animated grass, moving text in the background, or even background NPCs looked like flickery mush.
DLSS 3 has plenty of issues, ranging from high input latency (the time it takes for a player action to render on the screen) to poor frame-pacing (inconsistent timing between frames) and nasty ghosting (smeared textures and artefacts around moving objects). To Nvidia’s credit, they at least did a better job than AMD, which remains a generation or two behind on the technology front.
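The latency problem falls out of simple arithmetic: the game only samples your input when it renders a *real* frame, so responsiveness tracks the rendered frame rate, not the displayed one. The numbers below are illustrative, not measured:

```python
# Why frame generation raises displayed fps without improving feel:
# input latency is tied to the rendered frame rate, not the displayed one.
# Figures here are illustrative assumptions, not benchmark data.

def frame_gen_stats(rendered_fps, generated_per_real):
    displayed_fps = rendered_fps * (1 + generated_per_real)
    input_latency_ms = 1000 / rendered_fps   # one real-frame interval, simplified
    return displayed_fps, input_latency_ms

# DLSS 3-style: one generated frame per real frame
print(frame_gen_stats(30, 1))   # 60 fps shown, but ~33 ms between input samples

# Native 60 fps for comparison: same fps shown, roughly half the lag
print(frame_gen_stats(60, 0))
```

Both scenarios display 60 fps, but only the native one *feels* like 60 fps.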
DLSS 3 wasn’t, and still isn’t, very good, but that never stopped Nvidia from claiming a 50-100% boost in performance with the feature enabled. History appears to be repeating itself with the RTX 50 and DLSS 4 announcements.
RTX 50 series Blackwell and DLSS 4
RTX Blackwell was dubbed the “engine of AI” by Nvidia CEO Jensen Huang. Nvidia’s CES press release quotes him as saying “fusing AI-driven neural rendering and ray tracing, Blackwell is the most significant computer graphics innovation since we introduced programmable shading 25 years ago.”
Why the focus on AI, and not, say, raster or ray-tracing performance? To put it simply, Nvidia believes that it’s faster to generate a frame with AI than it is to render it classically (i.e. with raw computing power), hence their doubling down on AI.
Nvidia replaced the CNN-based AI model (convolutional neural network) powering DLSS and other AI features on previous-gen RTX cards with a newer transformer-based one. Again, the differences are too complex to get into here; suffice it to say that a transformer model analyses more data, faster. In the case of DLSS, this means the new AI model is better and quicker at making predictions and generating fake frames.
Where DLSS 3 generates one fake frame between a real and predicted one, DLSS 4 generates up to 3 fake frames in the same interval, resulting in a theoretical uplift of 200-400% or more. One demo saw an uplift of 10,000% with all the AI fakery enabled. Hence Nvidia’s claim that a Rs 59,000 card can deliver the same performance as a Rs 2,00,000+ card from last-gen.
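The headline multipliers are just this ratio of displayed to rendered frames (back-of-the-envelope figures, not Nvidia benchmark data):

```python
# DLSS 4 multi frame generation, back-of-the-envelope: up to three
# generated frames per real frame means the displayed frame rate can be
# up to 4x the rendered one. Figures below are illustrative only.

def displayed_fps(rendered_fps, generated_per_real):
    return rendered_fps * (1 + generated_per_real)

base = 28                            # hypothetical rendered frame rate
shown = displayed_fps(base, 3)       # DLSS 4: three fake frames per real one
print(shown)                         # 112
print(f"{shown / base:.0%} of native")  # 400% — the top of Nvidia's 2-4x claim
```

Only one in four of those 112 frames ever touched the game simulation; the rest are inferred.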
As the subtitle aptly puts it, bah humbug!
DLSS 4 doesn’t resolve the issues plaguing DLSS 3, it only mitigates them with a faster AI model, better upscaling, and a new feature called Reflex 2 (more on that in a bit). Admittedly, these mitigations do work surprisingly well in Nvidia’s demos, but bear in mind that we’re looking at cherry-picked samples in highly optimised scenarios. Real-world performance remains to be seen, and factors like input latency must be experienced first hand to appreciate the lag that frame-generation introduces.
Why Nvidia Reflex 2 is just as important as DLSS 4
Nvidia Reflex 2 takes Nvidia’s original Reflex technology and supercharges it. Reflex reduced PC latency – again, this is the time it takes for a player action like a mouse movement to be reflected on screen – by better synchronising the CPU and GPU. Reflex 2 improves this sync by ensuring that the CPU and GPU stay in step at all times and that the GPU has the most up-to-date data on the state of the game.
Note that this is done via an SDK that the game engine must support, so it won’t work on all games.
Reflex 2 also works with DLSS 4 to synchronise player input with the fake frames being generated. When movement is detected, these fake frames are then ‘warped’ slightly just before they’re rendered on your display to match the new camera angle. In effect, the generated frames from DLSS 4 now appear, visually, to be more accurately following player actions.
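The warp step is easiest to picture as nudging finished pixels by the latest camera delta (a toy sketch — the real technique is a perspective reprojection with AI inpainting of the newly exposed pixels, not the one-axis shift and flat fill below):

```python
# Toy sketch of the Reflex 2 frame-warp idea: shift an already-generated
# frame by the latest camera movement just before display. Real warping
# is a full reprojection with inpainting; a simple horizontal pixel
# shift with a constant fill stands in for it here.

def warp(frame, shift_x, fill=0):
    """Shift every row right by shift_x pixels, filling exposed pixels."""
    w = len(frame[0])
    return [([fill] * shift_x + row)[:w] for row in frame]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
print(warp(frame, 1))   # [[0, 1, 2, 3], [0, 5, 6, 7]]
```

Because the shift uses input sampled *after* the frame was generated, the image on screen lags your mouse less than the raw generated frame would.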
Couple this with improvements to ray reconstruction and DLSS upscaling, and, in theory, your games now look better and run faster than ever before.
There were a bunch of other announcements around AI-generated/optimised textures, facial animations, better NPC intelligence, and more, but we’ll look at those in another article.
What does this mean for gaming?
Honestly, I don’t know yet, but I am cautiously optimistic about the future of graphics.
I’m thinking we’ll see about a 30-40% uplift over the RTX 40 series in terms of raw performance, which is meaningful enough by itself. I also like the improvements I’m seeing to the DLSS upscaling models and Reflex, both of which will, I think, have the most significant impact on visuals and perceived performance. DLSS 4 I’m not sure about. I didn’t care much for DLSS 3 frame generation when it launched, and I’m not yet sure if DLSS 4 will change my mind on fake frames.
Regardless, there’s more than enough to like here and I’ll be placing an order for a shiny new RTX 50 series GPU the moment it launches in India, which should be around 30 January.