AI-Upscaled 240p games have never looked this good

This post was originally published by Antony Terence at Medium [AI]

NVIDIA’s DLSS 2.0 tech makes gaming even more accessible

Control takes advantage of NVIDIA’s DLSS 2.0 to deliver phenomenal visuals. Source: Remedy.

Picture this: an impossibly polished next-gen game running on an excuse of a gaming rig with all the bells and whistles intact. At remarkable frame rates, no less. It sounds like a pipe dream, given that most run-of-the-mill hardware just isn’t good enough for these games, but NVIDIA’s focus on the high end ended up handing PC peasants a boon in the process.

While raytracing caters to those who want the best from their machines, the AI-accelerating Tensor cores in NVIDIA’s RTX graphics card lineup have another trick up their sleeve. One that has the potential to change how low-spec gamers experience videogames.

In short, play at 540p (or lower) while the upscaling algorithms let you experience the game at a pristine 1080p or an even higher resolution. While it isn’t exactly the holy grail of rendering, it comes pretty darn close.
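A quick back-of-the-envelope sketch shows why that jump matters: 1080p has four times the pixels of 540p, so rendering at 540p leaves the GPU shading only a quarter of the final image. (The function name below is my own; the resolutions are the ones discussed here.)

```python
# Pixel counts behind a DLSS-style upscale from 540p to 1080p.
def pixel_count(width: int, height: int) -> int:
    return width * height

low = pixel_count(960, 540)      # 540p render target: 518,400 pixels
high = pixel_count(1920, 1080)   # 1080p output: 2,073,600 pixels

# The GPU only shades a quarter of the pixels it displays.
print(f"{high // low}x fewer pixels rendered")
```

That 4x reduction in shaded pixels is where the headline performance gains come from; the upscaler's job is to fill in the other three quarters convincingly.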

Control’s protagonist looks stunning even at 720p thanks to DLSS 2.0. Source: Digital Foundry.

Miracles happen

NVIDIA has made major strides in AI rendering, enabling gamers to play at higher resolutions without the massive spikes in computational demand that normally accompany resolution bumps. Breaking down the technical aspects of this engineering marvel would no doubt require a master’s degree in the field. But the benefits are clear: games look better and run better. Technical wizardry at its best.

Since the RTX 20 series launched in September 2018, NVIDIA’s upscaling algorithm has come a long way. The new cards featured Tensor cores tailored for workstation workloads, and they also provided big gains when it came to raytracing in videogames. But NVIDIA was just getting started.

It also showcased its DLSS (Deep Learning Super Sampling) tech, which effectively let you apply image upscaling in videogames, all in real-time. It used deep learning to upscale a lower-resolution frame to a higher resolution, letting neural networks do the heavy lifting instead of rendering every pixel traditionally. This in turn resulted in big performance gains: with less computation done by the video card, you could push graphics settings higher without your frame rate taking a hit. Here’s how they did it.

DLSS 2.0 shakes things up. Source: NVIDIA.

A promise fulfilled

NVIDIA made DLSS sound like the next big thing in tech and it certainly was promising. It was effectively an anti-aliasing technique that aspired to replace traditional edge-smoothing tech like FXAA (fast approximate anti-aliasing). Unfortunately, as with most tech in its infancy, DLSS 1.0 wasn’t the silver bullet NVIDIA made it out to be. Only a few games took advantage of the feature and it was no one-click solution.

Implementations in Battlefield V and Metro Exodus produced a blurry mess, hurting clarity and texture detail. Despite running the games through NVIDIA’s AI supercomputers hundreds of thousands of times to train the network, things didn’t go as planned. NVIDIA’s bold promise fell flat, going from a must-have to an experience caked in Vaseline. They promised that more games would adopt the technology, but developers found it difficult to implement and the quality was nowhere near what was expected from games that were visual showcases without the DLSS sauce.

Fortunately, NVIDIA decided to double down on DLSS and over a year later, DLSS 2.0 hit the scene with improvements that would let it live up to the hype. With superior quality, better scaling across resolutions, and training that no longer needed to be done on a case-by-case basis, it looked like NVIDIA had a winner on its hands.

DLSS 2.0 greatly improved upon the original AI upscaling technology. Source: Wccftech.

You only live twice

DLSS 2.0 lets games reconstruct all kinds of complex details, at times surpassing native images, with a Performance mode enabling the 4x resolution jump from 540p to 1080p you see in the example above. The blurriness that plagued DLSS 1.0 is no longer an issue, though its successor does come at a slightly higher performance cost. Instead of relying on images alone, DLSS 2.0 also feeds low-resolution motion vectors into its deep neural network to understand how objects move from frame to frame, letting it estimate (with startling accuracy) what the next frame will look like. Add to that the fact that this neural model doesn’t need to be trained per game and there’s no telling where this nascent technology will go from here.
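The core idea of combining motion vectors with accumulated history can be sketched as a toy in a few lines of Python. This is a drastic simplification under assumptions of my own (a 1-D image, nearest-neighbour upsampling, a fixed blend weight); the real DLSS 2.0 network learns how to combine samples rather than using a hand-tuned blend.

```python
# Toy 1-D sketch of motion-vector reprojection, the mechanism behind
# temporal upscalers. Names and the blend weight are illustrative.

def reproject(prev_frame, motion):
    """Shift last frame's pixels along per-pixel motion vectors."""
    out = [0.0] * len(prev_frame)
    for i, v in enumerate(motion):
        src = i - v                      # where this pixel came from
        if 0 <= src < len(prev_frame):
            out[i] = prev_frame[src]
    return out

def accumulate(current_lowres, prev_frame, motion, alpha=0.1):
    """Blend a new half-res sample into the reprojected history."""
    history = reproject(prev_frame, motion)
    # naive nearest-neighbour upsample of the half-res current frame
    upsampled = [current_lowres[i // 2] for i in range(len(history))]
    return [alpha * u + (1 - alpha) * h for u, h in zip(upsampled, history)]

prev = [1.0, 1.0, 0.0, 0.0]   # last frame, full resolution
motion = [0, 0, 1, 1]         # right half of the scene shifted one pixel
lowres = [1.0, 0.0]           # this frame, rendered at half resolution
print(accumulate(lowres, prev, motion))
```

Because most of each output pixel comes from reprojected history rather than freshly rendered pixels, the upscaler recovers detail the low-resolution frame never contained, which is why good motion vectors matter so much to the result.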

DLSS 2.0 is easier to implement in games at the quality that gamers have come to expect, but right now only a handful of titles support it. Control, MechWarrior 5, Deliver Us The Moon, and Wolfenstein: Youngblood have all witnessed solid improvements in performance without compromising on their visuals. With more titles on the horizon, adoption of the tech is picking up. As with raytracing, NVIDIA’s push just might make it a staple in most next-gen titles. And with a cross-platform alternative in the works at AMD, these gains could show up on AMD PCs, the PlayStation 5, and the new Xbox consoles soon.

At the moment, only RTX cards can take advantage of this feature. The $300 RTX 2060 represents a pretty steep entry point for most consumers but it’s only a matter of time before AMD and even Intel begin to ship cheaper hardware with their Tensor core equivalents. A watered-down graphics card with a lower Tensor core count could work wonders while upscaling from low resolutions, making next-gen graphical advancements accessible to the masses. All in all, it’s an exciting time to be a gamer.

Spread the word

