The latest Nvidia driver is harping on about DLSS again, so I thought I’d take a look. Here’s the announcement and details:
So for No Man’s Sky they are claiming about double the framerate, and, unusually for DLSS, the VR mode will support it too. I tend to get about 55 FPS with a 2080 in VR on the ‘High’ preset and it looks pretty good, so the idea of getting a solid 90 is welcome. I sort of doubt it, so I went to take a look, only to find out that the NMS patch isn’t out yet - the driver is here but the game doesn’t offer the option. I’ll post something here when it does:
So I retried Metro Exodus with DLSS and ray tracing, and it does look really nice. I get a solid 60 FPS with G-SYNC on a 4K monitor, and the detail is great. The image is clear, and the ray tracing does look better for reflections and the like. Here are some quick 4K grabs
(The game itself is pretty good, but a tunnel shooter where you aren’t in tunnels anymore plays a bit more like the old Stalker games than Metro.) Here are the FPS improvements for Exodus using DLSS plus ray tracing (a rare example of getting better graphics AND better FPS together):
One question that always comes up is ‘When can I get double framerates in my flight sim?’. I’m no expert, but here’s my thinking:
MSFS and Asobo won’t use Nvidia DLSS because they are committed to a DX12 build for the upcoming Xbox console release. The hardware inside those consoles is a custom AMD chipset, so they’ll target DX12 as a common layer, and AMD’s equivalent upscaling tech is called FidelityFX. It’s the same basic idea: render fewer pixels and reconstruct the rest. So MSFS will get something like DLSS, just a lot later and under a different name, and it will run on both AMD and Nvidia GPUs.
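To put rough numbers on the ‘double the framerate’ claims: these upscalers shade far fewer pixels and reconstruct the rest. Here’s a back-of-envelope sketch in Python; the per-axis render scales are the published DLSS 2.x mode ratios, but everything else (function names, the linear cost assumption) is illustrative only.

```python
# Back-of-envelope: pixel work at DLSS internal resolutions vs native 4K.
# The per-axis render scales are the published DLSS 2.x mode ratios.
MODES = {"Quality": 0.667, "Balanced": 0.58, "Performance": 0.5}

def internal_pixels(out_w, out_h, scale):
    """Pixels actually shaded per frame at a given per-axis render scale."""
    return int(out_w * scale) * int(out_h * scale)

out_w, out_h = 3840, 2160          # 4K output
native = out_w * out_h
for mode, s in MODES.items():
    frac = internal_pixels(out_w, out_h, s) / native
    print(f"{mode}: {frac:.0%} of native shading work")
```

Performance mode shades a quarter of the pixels, which is where ‘double the framerate’ comes from - though CPU cost and fixed per-frame work don’t shrink, so real gains are smaller than the raw pixel ratio suggests.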
DCS has their ‘We don’t use proprietary tech’ line, which always makes me smile. Sort of like how I shout at the TV when the hockey is on; it’s cute that I think I make a difference or that anyone cares.
One technical reason why DCS would struggle to use DLSS 2.1 in VR is the way their graphics engine works. Most games use something called ‘Forward Rendering’, where the graphics pipeline is a straightforward Vertex → Geometry → Lighting/Fragment set-up. DCS from about 2.5 uses Deferred Rendering, where geometry is written to intermediate buffers first and the lighting is then rendered in multiple passes over them. This lets them do shadows and global lighting effects (the sun etc.) for lots of light sources very cheaply, and DCS has a lot of light sources, so it’s a big win in 2D and at higher resolutions. The downside is that things like MSAA don’t play nicely with deferred lighting, and a temporal upscaler like DLSS needs to be wired into the middle of the pipeline (it wants jittered samples, depth and motion vectors, and has to run after lighting but before post-processing) rather than bolted on at the end. It’s not impossible, it’s just work: most games can add DLSS fairly easily, but for DCS it would be harder. The fact that DCS isn’t even a big enough game to have its own driver profile (it still matches the old ‘Black Shark’ dcs.exe entry) or GeForce/Radeon settings support is also not a great sign. I have no idea why ED don’t embrace that, given the money their customers often spend on hardware. (Also worth pointing out: for DLSS 2.1 the game maker doesn’t need to submit their source or do per-title training; the network is generalised, trained once by Nvidia from sample output.)
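To illustrate the plumbing problem, here’s a toy sketch of where a temporal upscaler has to sit in the two pipeline styles. This is not DCS or real engine code; the stage names and light-pass structure are purely illustrative.

```python
# Toy model of frame-stage ordering (illustrative names, not engine code).

def forward_frame():
    # Forward rendering: geometry and lighting happen in one shading pass,
    # so a final-image upscaler slots in near the end fairly cleanly.
    return ["vertex", "geometry", "shade+light", "upscale", "post-fx"]

def deferred_frame(num_lights):
    # Deferred rendering: geometry is written to a G-buffer first, then
    # lighting runs as per-light passes over that buffer. A temporal
    # upscaler like DLSS needs depth and motion vectors from the geometry
    # stage and must run after lighting but before post-effects, so it has
    # to be wired into the middle of the pipeline, not appended at the end.
    passes = ["vertex", "geometry->G-buffer"]
    passes += [f"light-pass {i}" for i in range(num_lights)]
    passes += ["upscale", "post-fx"]
    return passes

print(deferred_frame(3))
```

The point of the sketch: in both styles the upscaler sits at a fixed spot relative to lighting and post-processing, but in the deferred case everything downstream of the G-buffer has to cooperate, which is the ‘it’s just work’ part.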
DCS also renders VR in two passes, one per eye, while most VR-focused games can do it in a single pass, so there are lots of historical reasons why it struggles in VR. Let’s hope the upcoming DCS Vulkan rendering engine changes flip a few things, although I can see it being a lot of work on an old engine, so don’t get your hopes up too soon.
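The two-pass point matters because CPU-side submission cost scales with the number of scene passes. A crude cost model makes the ratio obvious; the draw-call count and per-call cost below are made-up numbers, there purely to show the two-to-one relationship.

```python
# Crude stereo-rendering cost model (hypothetical numbers, not measurements).

def stereo_cost_us(draw_calls, per_call_cpu_us, single_pass):
    # Two-pass stereo: the whole scene is culled and submitted once per eye.
    # Single-pass stereo: one submission, the GPU fills both eye views.
    submissions = 1 if single_pass else 2
    return draw_calls * per_call_cpu_us * submissions

calls, cost = 5000, 4  # hypothetical: 5000 draws at 4 µs of CPU work each
print(stereo_cost_us(calls, cost, single_pass=False))  # 40000 µs/frame
print(stereo_cost_us(calls, cost, single_pass=True))   # 20000 µs/frame
```

In reality some per-eye work is unavoidable either way, but doubling the driver and culling overhead is a big part of why a heavy 2D scene gets so much heavier in VR.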