DLDSR, DSR, DLSS - wtf?

If you’ve ever been to Hawaii, the local place names can get really confusing. It’s because the Hawaiian alphabet only has 13 letters, and 5 of those are vowels, so everywhere sort of sounds the same.

Nvidia seems to have a similar issue, where they only like to use the letters D, S and L in various random combinations to describe pretty much everything they release. Kukae, Nvidia, very kukae :slight_smile:. The recent addition of DLDSR has brought the confusion up to comedy levels.

Anyway, here’s a quick summary as I see it. I’m not sure it will help, but I’ve written it now.

Downscaling

DSR = Dynamic Super Resolution. This is where you get the game to render at a higher resolution than your display supports natively, so you can set your game to output 4K but play it on a 1440p monitor. This is called a ‘downscaling’ technique. It’s an image quality improvement tech rather than a performance improvement, often used for older games on more powerful cards, meaning you’re trading framerate for a nicer-looking image. AMD has its equivalent, Virtual Super Resolution (VSR). DSR and VSR work on all cards.
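If it helps to see the basic idea in code, here’s a minimal sketch, assuming an integer scale factor and a plain box filter (DSR’s real filter is smarter and adjustable via the ‘smoothness’ setting):

```python
# A minimal sketch of the downscaling idea: render more pixels than the
# screen has, then average groups of them down to native size.
import numpy as np

def downscale_box(rendered, factor):
    """Average factor x factor blocks of the rendered frame down to native size."""
    h, w, c = rendered.shape
    return rendered.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

render_4k = np.random.rand(2160, 3840, 3)   # stand-in for a 4K rendered frame
native = downscale_box(render_4k, 2)        # shown on a 1080p panel
print(native.shape)                         # (1080, 1920, 3)
```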

DLDSR = Deep Learning Dynamic Super Resolution (whew). This is similar to DSR, in that it’s a ‘downscaler’ for when you want to improve image quality. Like DSR, you set the game to render at a virtual higher resolution and then DLDSR downscales it to your native device resolution. The ‘deep learning’ bit is a model that runs on the AI Tensor cores of RTX cards to improve which details it keeps when downscaling. This doesn’t improve framerates, it just tries to make things look nicer when you have the headroom to do it.

Upscaling

NIS = Nvidia Image Scaling. This is where the game renders at a lower resolution and you want to show it on a higher-resolution native display, so you set your game to output 1080p and see it upscaled on a 4K monitor or in VR. On the AMD side there’s FSR (FidelityFX Super Resolution). This is a performance improvement tech rather than an image quality one: it lets you play games at your display’s resolution with a faster frame rate. NIS and FSR work on all cards.
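Conceptually, spatial upscalers like NIS and the original FSR boil down to ‘scale up, then sharpen’. A very rough sketch of that idea (the real algorithms use much smarter filters):

```python
# A rough sketch of the 'upscale then sharpen' shape of a spatial upscaler.
# This is nothing like the real NIS/FSR filters, just the general idea.
import numpy as np

def upscale_nearest(frame, factor):
    """Nearest-neighbour upscale: repeat each pixel `factor` times in x and y."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

def sharpen(frame, amount=0.5):
    """Crude unsharp mask: push each pixel away from its local average."""
    blur = (np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0) +
            np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1)) / 4.0
    return np.clip(frame + amount * (frame - blur), 0.0, 1.0)

low_res = np.random.rand(1080, 1920, 3)           # game renders at 1080p
on_screen = sharpen(upscale_nearest(low_res, 2))  # displayed on a 4K panel
print(on_screen.shape)                            # (2160, 3840, 3)
```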

DLSS = Deep Learning Super Sampling. This is another ‘upscaling’ tech, similar to NIS/FSR, but it uses the AI Tensor core hardware of the Nvidia RTX cards that have it. Machine learning has been used to generate a model of how best to fill in the missing info when upscaling. DLSS uses a series of frames to predict what to fill in, so the game needs to co-operate by handing over that buffer info in the right way. It’s not that the game needs to be submitted to Nvidia for AI model training or anything fancy like that; it just needs to provide runtime info in a certain way via the API. Again, this is a performance improvement tech.
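To give a feel for what ‘provide runtime info via the API’ means, here’s a hedged sketch of the sort of per-frame data a game typically hands a temporal upscaler like DLSS. The field names are mine for illustration, not the actual DLSS SDK:

```python
# A rough sketch of the per-frame inputs a temporal upscaler needs from the game.
# Field names are made up; this is not the real DLSS API.
from dataclasses import dataclass
from typing import Tuple
import numpy as np

@dataclass
class UpscalerFrameInput:
    color: np.ndarray             # the low-res rendered frame, e.g. (1080, 1920, 3)
    depth: np.ndarray             # per-pixel depth at the same resolution
    motion_vectors: np.ndarray    # per-pixel screen-space motion since the last frame
    jitter: Tuple[float, float]   # the sub-pixel camera offset used this frame
    target_size: Tuple[int, int]  # native output resolution, e.g. (2160, 3840)
```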

Do These Work for All Games?

Pretty much, yes. DLSS does need to be enabled per game, but the others work with ‘virtual resolutions’, so you can trick pretty much anything into upscaling or downscaling. It’s nearly always best when the game does these algorithms itself (like incorporating NIS as part of its render pipeline), because then the game engine can take advantage of what it knows about the frame - by the time the finished image gets to the driver, it’s a bit late, or rather it could be more efficient and look better.
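A little sketch of why in-engine is usually better, with made-up function names: if the engine upscales before it draws the HUD, the text and UI stay native-sharp, whereas a driver-level scaler only ever sees the finished low-res frame.

```python
# Hypothetical pipeline ordering, made-up function names. The point is only
# where the upscale step sits relative to the UI pass.
def in_engine_frame(render_3d, upscale, draw_ui):
    scene = upscale(render_3d())   # only the expensive 3D pass runs at low res
    return draw_ui(scene)          # HUD/UI is drawn afterwards at native res

def driver_level_frame(render_3d, upscale, draw_ui):
    frame = draw_ui(render_3d())   # UI gets baked into the low-res frame...
    return upscale(frame)          # ...so it gets upscaled (and softened) too
```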

Do These Work For VR?

Normally, no. 2D games have a firm definition of what the final output resolution is, while in VR that’s a bit more fluid and depends on whether the title uses Oculus, OpenVR, OpenXR etc., which all express it differently. Things like FSR and NIS have been used as ‘application layers’ in the VR compositor (the thing that brings everything together to render to the headset), but it would be better if the VR title incorporated them in its own render pipeline.

Is This Just To Make The Nvidia AI Cores Do Something So People Feel Better About $2000?

Pretty much. It’s sort of a ‘feature war’ with AMD, and often the results come down to arguing over the number of angels on a texture. With the acronym assault going on, I think perhaps Nvidia are just trying to get people to use GeForce Experience to set their graphics settings - as in ‘I have no idea what these mean, let Nvidia do it for me’.


Very informative and appreciate the time you put into it FF!

The ‘Deep Learning’ Nvidia features - are those supported by DCS or, like you said, do they need DCS to co-operate? Just curious.



Thank you SO MUCH! Was esp. wondering about VR.


No, but it is pretty much the definition of a proprietary extension, and they’ve always said they try to keep away from those. In Russia, RTX offs you, etc.

Because it works with frame prediction (over 3 frames, I believe) and a now-common trained AI model (that almost certainly used few or no flight sim games as part of its corpus), it also might not work that well. It’s a bit like motion reprojection in VR: when it gets it wrong, it’s almost worse than not providing a ‘tweener’ bit of inferred info at all.

Something like No Man’s Sky works pretty well in 2D and in VR with DLSS, so it can be done but it takes a bit of game dev effort to make it happen.


Thank you!


As an addendum, now that we can all witness the firepower of this fully ARMED and OPERATIONAL battle station set of acronyms, here’s what I use day to day:

DCS VR - I use a NIS upscaler with the OpenVR FSR (bad name really, as it supports NIS too). Info here: OpenVR fholger VR Perf Kit (FSR) - #77 by fearlessfrog

MSFS VR - I use a NIS upscaler with the OpenXR API Layer too. Info here: Nvidia Image Scaler - #29 by fearlessfrog

I’m running a 3080 Ti, but those two above let me put the amp to 11 in VR. For 2D on the above, you can just use the Nvidia / AMD driver support to upscale with the virtual resolutions - they work well.

For stuff like Cyberpunk, Red Dead Redemption 2, No Man’s Sky etc., I use a 4K monitor, so I tend to peck around with the DLSS options for those. Gives a good framerate boost.


Thanks for the effort to simplify complex concepts @fearlessfrog. Great stuff!


Agree, good writeup. Thanks, @fearlessfrog !


I switched on DLDSR on the off chance the card might actually get to use a Tensor core before it dies.


I’m hip. My new “RTX”, as I use it, is more like an “rTX”. I thought the ‘Tensor Core’ thing was being used? (in non-raytracing sims). Or is it I that’s being used? Anyway, thanks for the insight.

The reply was tongue-in-cheek - I’m not really certain of the benefits it provides in Falcon without finding something that can actually show that.


Understood. However, you seem to be close to the mark. From the horse’s mouth:

Tensor Cores

Tensor Cores are physical cores that are dedicated to complex computations involved in tasks such as machine learning and AI. Tensor Cores enable mixed-precision computing, dynamically adapting calculations to accelerate throughput while preserving accuracy. These cores have been specifically designed to help with these complex workloads in order to make these computations more efficient, as well as to relieve the main CUDA cores of the card of the extra burden.

In consumer cards such as the gaming-focused GeForce series of cards based on the Turing or Ampere architecture…

the Tensor Cores do not specifically have a rendering job.

These cores do not render frames or help in general performance numbers like the normal CUDA cores or the RT Cores might do. The presence of Tensor Cores in these cards does serve a purpose. These cores handle the bulk of the processing power behind the excellent Deep Learning Super Sampling or DLSS feature of Nvidia. We will explore DLSS in a minute, but first, we have to identify which cards actually possess Tensor Cores in the first place.

[Emphasis mine]

Here’s a good article on what a tensor math function is, why it applies to graphics, plus how these cores do their stuff:


Now, if they’d just said it was matrix maths I [might] have grokked it sooner - ‘hardware matrices’, I guess. I haven’t read it all yet… :slight_smile:


Yep, a decent analogy for those of us old enough to remember is that they’re sort of like math co-processors - specialized for matrix math. I’m still mad the Intel 486SX was just a DX with the math co-processor disabled! :wink:
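To make the co-processor analogy concrete, here’s a toy example of the ‘mixed precision’ trick the article mentions - not how the hardware actually does it, just the idea: keep the inputs small (FP16) but accumulate the result in full 32-bit floats.

```python
# A toy illustration of mixed-precision matrix multiply-accumulate:
# half-precision inputs, full-precision accumulation. Tensor cores do this
# kind of operation in hardware; numpy here is just for the idea.
import numpy as np

a = np.random.rand(4, 4).astype(np.float16)   # FP16 inputs
b = np.random.rand(4, 4).astype(np.float16)

result = a.astype(np.float32) @ b.astype(np.float32)   # accumulate in FP32
print(result.dtype)   # float32
```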

The other thing of note is that when they talk about ‘AI Deep Learning’ and all that jazz, they mean the ‘run-time’ model is what the Tensor cores use. AI and machine learning is ‘asymmetric’ computing: in plain speaking, it takes a lot of number crunching to make a decent model (like hundreds of GPUs), but once you have one you just need good tensor cores to run it. The model of a neural network is essentially a weighted prediction algorithm, where given inputs A and B it gives you the best likelihood of C. The DLSS model has been ‘trained’ (as in, we now have a run-time model) on many frames of actual games over thousands of computing hours, and the Tensor cores on your GPU are now being used to ‘predict’ the best way to upscale (what colour/brightness should this pixel be, given these neighbouring pixels, etc.). I’m now gloriously off-topic. :slight_smile:
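As a toy illustration of the ‘weighted prediction’ point (nothing like the real DLSS model, just the shape of it): training produces a set of weights ahead of time, and at run time you only have to multiply your inputs by those weights to get a prediction.

```python
# Toy run-time 'inference': pretend training already gave us these weights,
# and we use them to guess a missing pixel from its four known neighbours.
# Values and weights are made up purely for illustration.
import numpy as np

trained_weights = np.array([0.3, 0.3, 0.2, 0.2])     # stand-in for a trained model
neighbour_pixels = np.array([0.80, 0.75, 0.60, 0.65])

predicted_pixel = float(trained_weights @ neighbour_pixels)  # cheap at run time
print(round(predicted_pixel, 3))   # 0.715
```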


One might think those ‘Tensor’ cores, when not being used for DL-such-n-such, would help push more triangles. Maybe they do…

To prove how big a geek I am, I actually get this:

Have done a lot of ‘light reading’ on AI, starting back in the ‘fuzzy logic’ days. I’ve been trying to figure out a way to cram some of this into my lil creation. ’Til I realize it’s overkill.

The vertices, and ultimately triangles and textures, are processed by a more general processor on the GPU. These are fully Turing-complete, in that they are essentially full computing units in their own right. Rather than being given a buffer to draw (like the older pre-DX8 cards), they effectively run ‘code’ to produce the output. When you hear about ‘shaders’, these are small specialized programs written in a high-level shader language.
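If ‘shaders are small programs’ sounds abstract, here’s a toy version of the idea in Python rather than HLSL: a pixel shader is basically a function the GPU runs for every pixel of the frame.

```python
# Toy 'pixel shader': a function applied once per pixel (real shaders are
# written in HLSL/GLSL and run massively in parallel on the GPU).
import numpy as np

def pixel_shader(color):
    """Tint the pixel slightly towards red, just as a stand-in effect."""
    r, g, b = color
    return (min(r * 1.2, 1.0), g, b)

framebuffer = np.random.rand(4, 4, 3)   # a tiny stand-in frame
shaded = np.array([[pixel_shader(px) for px in row] for row in framebuffer])
print(shaded.shape)   # (4, 4, 3)
```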

Tensor cores are just one-trick ponies - just incredibly fast ponies at what they do. :horse:

I’m amazed at how many tris these things push compared to my first go at it back in the late 80s; the cockpit model now is orders of magnitude bigger than my first entire scene graph, for an open-world game. Crazy big numbers now. I recall, but never want to do again, writing line-rendering routines in assembler! Ick.


I’d moved on about that time, but from what I read up on early shaders, yeah, they kind of reminded me of asm - a paradigm shift in some ways. Gosh, it’s been a long time.