Nvidia is intent on relaunching one of its workhorses: DLSS. The abbreviation stands for Deep Learning Super Sampling, that is, supersampling via artificial neural networks.

You have to start from afar to understand why this technology came about. One of the limits of video games is the performance obtainable on given hardware. In the PC world it is possible, by adjusting the graphics settings, to obtain many different experiences: I can play at 30 FPS by favoring graphical detail, or lower every setting to play at 120 FPS. Budget has always been the limit: a €200 GPU certainly does not have the capacity of a €1000 GPU. The task of a GPU is to decide the color of the pixels that make up the final image. The fewer the pixels, the faster it can complete the calculation, and therefore the greater the performance.
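As a back-of-the-envelope illustration (a simplified model I am assuming here, in which per-frame shading work scales linearly with pixel count), compare the pixels per frame at two common resolutions:

```python
# Rough first-order model: per-frame shading work scales with pixel count.
# Resolutions are given as (width, height) pairs.
resolutions = {"1080p": (1920, 1080), "4K": (3840, 2160)}

pixels = {name: w * h for name, (w, h) in resolutions.items()}

print(pixels["1080p"])                  # 2073600
print(pixels["4K"])                     # 8294400
print(pixels["4K"] / pixels["1080p"])   # 4.0
```

Under this rough model, a 4K frame costs four times as much shading work as a 1080p frame, which is why dropping the render resolution raises the framerate so dramatically.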


Yet touching the resolution seems to be one of the biggest taboos in the world of PC gaming, while in the console world it is done daily. This is mainly because, when playing at a short distance from the screen, the loss of detail incurred by stretching a lower-resolution image across a native-resolution monitor is quite evident. So in general it is much better to play at minimum settings, but always at your monitor's native resolution, for maximum visual clarity.

In recent years, with the spread of 4K and the push towards Ray Tracing (real-time lighting computed by tracing rays, as close to reality as it is computationally heavy), many new upscaling techniques have been devised to reconstruct a high-resolution image from a lower-resolution one, with increasingly efficient and effective algorithms: temporal reconstruction techniques such as those used in Rainbow Six Siege and Watch Dogs 2, or the checkerboard rendering used on PlayStation 4 with excellent results. These techniques still suffer from many artifacts and a perceptible loss of quality, but they are good compromises for winning back performance.

Less is more.

In recent years, numerous image reconstruction and upscaling techniques based on deep learning algorithms have been developed, with remarkable results, superior to those of any pre-existing program or human work. These were calculations performed offline rather than in real time, and therefore useful for restoring old films, works destroyed or corrupted by time, or low-resolution photo collections from technologically humbler eras. Nvidia saw an opportunity here: if it managed to execute the neural algorithm in a time comparable to that of generating a frame, it could apply the same approach to video games in real time.

So here is one of the reasons why the Turing architecture is so dense with compute units. FP32 units for classic graphics calculations; INT32 units to process the many effects that do not require floating-point precision, without interrupting the main pipeline; RT cores to speed up the calculation of ray intersections with the game world's geometry for realistic lighting. And finally, the protagonists of this technology: the Tensor Cores, compute units optimized to process matrices, the algebraic structures at the heart of deep learning.
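For illustration, the kind of operation Tensor Cores are built to accelerate is dense matrix multiplication. A toy pure-Python version (obviously nothing like the hardware implementation, just the underlying math) looks like this:

```python
# Dense matrix multiplication: the fundamental operation of neural-network
# inference, and the workload Tensor Cores accelerate in hardware.
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

A real network performs millions of these multiply-accumulate operations per frame, which is why dedicated hardware matters so much for running the algorithm in real time.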

DLSS 2.0 Training
The images reconstructed by the network are tested against native 16K-resolution images to identify where the algorithm errs and let it improve autonomously

So, at the end of 2018, DLSS 1.0 made its appearance. And it was certainly not all roses; on the contrary, the technology was particularly immature. Nvidia's approach was particularly "image-centric": the technology worked well when performing a simple upscale on static, or rather deterministic, images, but much less so with the moving, dynamic ones of a video game. Each title needed its own specific neural-network training. The Tensor Cores were very slow to execute the algorithm, which made the feature unavailable on certain configurations: the risk would have been a decrease in performance instead of an increase. The quality of the final images was passable only when targeting 4K, while trying to use it at Full HD was suicide. Despite promises of improvements through further training of the network, little changed.

Always treasure your mistakes

Nvidia therefore decided to rethink the technology, making it something worth using and, above all, much easier to adopt. DLSS 2.0 is a huge step forward, a completely new way of reconstructing the image compared to its first iteration. We now have a generic algorithm, no longer built ad hoc for each single game, and therefore applicable to all software and all resolutions. The algorithm has also been sped up: according to Nvidia, it now runs at twice the speed of before, completely removing any configuration restrictions. To replace the information once produced by per-game dedicated networks, Nvidia has integrated motion vector information describing movement within the frame. In this way, after the first generated image, temporal feedback kicks in, and from the second frame onwards a temporally stable image is created.
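The core idea of temporal feedback can be sketched very simply. This is a hypothetical, heavily simplified model (not Nvidia's actual algorithm, which reprojects history with motion vectors and weighs samples through the neural network): each new frame is blended into an accumulated history, so the image stabilises over successive frames.

```python
# Minimal sketch of temporal accumulation, the principle behind temporal
# feedback. Frames are flattened lists of pixel values for simplicity.
def accumulate(history, current, alpha=0.1):
    """Blend the new frame into the accumulated history.
    alpha = weight of the new sample; smaller alpha = more temporal stability."""
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

# Toy example: a signal whose true value is 1.0, sampled with deterministic
# "noise" alternating +0.2 / -0.2, converges toward the true value.
history = [0.0] * 4
for i in range(100):
    noisy = [1.0 + (0.2 if i % 2 == 0 else -0.2)] * 4
    history = accumulate(history, noisy)

print([round(v, 1) for v in history])  # [1.0, 1.0, 1.0, 1.0]
```

Each individual frame is noisy, but the accumulated result is stable; this is the same intuition behind the TAA techniques discussed next.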

This method is the basis of Temporal Anti-Aliasing techniques, known as TAA, whose task is to smooth the jagged edges of images. So one could say, greatly simplifying the work done by Nvidia's engineers, that DLSS 2.0 combines a high-quality upscaler with a TAA filter, with a result that takes the strengths of both worlds and not their weaknesses. The algorithm is also much more flexible and offers the player three quality levels: Quality, Balanced and Performance. Performance operates at 50% of the final resolution on each axis (a 4× upscale in pixel count), Balanced at 57%, and Quality at 66%.
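In concrete terms, those percentages translate into internal render resolutions like the following. This is a sketch based purely on the figures above; the exact resolutions Nvidia uses per mode may be rounded differently.

```python
# The three DLSS 2.0 quality modes and their internal render scales,
# per the percentages quoted in the text (fraction of the output
# resolution on each axis).
MODES = {"Quality": 0.66, "Balanced": 0.57, "Performance": 0.50}

def internal_resolution(width, height, mode):
    """Return the internal render resolution for a given output size."""
    scale = MODES[mode]
    return round(width * scale), round(height * scale)

# For a 4K (3840x2160) output:
for mode in MODES:
    w, h = internal_resolution(3840, 2160, mode)
    print(f"{mode}: {w}x{h}")
# Quality: 2534x1426
# Balanced: 2189x1231
# Performance: 1920x1080
```

Performance mode at 4K renders 1920×1080 pixels, one quarter of the output's pixel count, which is where the "4× upscale" figure comes from.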

DLSS 2.0 by Nvidia, official presentation and technical deep dive.
Not only better performance, but better graphical quality, in many details even superior to native resolution!

New games supported, and hopefully more to come

It is not just marketing with nothing tangible behind it. Nvidia has worked to lighten the burden on developers implementing this technology. It does, however, require access from the game engine to the motion vector information mentioned above (fortunately very common today, given the widespread use of TAA), so it is still not something that can operate directly from the drivers on any existing software. I myself got to try DLSS 2.0 in Deliver Us The Moon, obtaining performance I never expected: playing at 2560×1080 with everything at ultra, RTX effects included, at a framerate above 60 FPS on an RTX 2060.

Control, the beautiful Remedy Entertainment shooter, had already given us a taste of DLSS 2.0 with a preliminary variant executed on the normal FP32 cores; with the next expansion, it will receive the real DLSS 2.0. News that makes me inclined to do a second run. And as proof that there is no need to turn on RTX effects to benefit from the performance increase DLSS provides, Mechwarrior 5 will also be equipped with this technology.

I don't know about you, but I am greatly appreciating this shift towards technologies that seek to improve performance without sacrificing overall graphical quality, capable of making our graphics cards more versatile and longer-lasting. Variable Rate Shading will be part of the future, given its adoption on both the next consoles and on Turing and subsequent cards, but I am convinced that this new incarnation of DLSS is truly revolutionary, an ace in the hole worth betting on. If it were up to me, I would have it implemented in every PC game from now until the end of time.