RX 6900XT vs RTX 3080: DCS World benchmarks

So, just like @Poneybirds one and a half years ago, I ended up with 2 graphics cards of the same caliber. This time not because of the absolute cost of the cards, but because quickly falling prices forced me to reconsider my RX 6900XT purchase while I am still within the return period.

Contenders

  • Nvidia RTX 3080 ASUS TUF Gaming OC LHR V2, bought at €850

  • AMD RX 6900 XT PowerColor Red Devil Ultimate, bought at €1080

Question

Is the RX 6900 XT Ultimate worth about 200 euros more?

Hypothesis

The 6900 XT may be a higher-tier card than the 3080, but the Nvidia RTX 3080 and higher have previously shown better performance in DCS VR at high res (Reverb) than their AMD rivals of the same tier.
Going in, I would expect similar performance from both, with no significant difference.
I did a quick VR flight with the 3080 after taking the measurements but before looking at the results (to stay as unbiased as possible), flying the same mission that I flew yesterday with the 6900XT, and it felt similar enough.

Setup

CPU: Ryzen 7 5800X3D
MoBo: Gigabyte B550 Aorus Pro Rev 1.0
RAM: 32GB G.Skill 4000CL18 clocked to 3600CL18
SSD (with OS and DCS): Samsung 970 Evo 1TB NVMe M.2
PSU: Corsair RM850x
Case: Fractal Design Meshify C with lots of Noctua fans
OS: Windows 10
DCS version: 2.7.14.242228

Test

I ran 2 tests: one in VR, and one on a monitor with the resolution set to 4K (I only own a 1440p monitor but felt that would not stress the cards enough, so I did some acronym magic (DSR on the Nvidia, VSR on the AMD) to get DCS World to render at 4K with both cards).
I did not touch the controls during the runs, of course, and left the VR headset sitting still on the shelf during the tests.

I tried to find a “standard” GPU benchmark for DCS, but could only find a .trk that was no longer recognized, and Gryz’ CPU-focused benchmark, which, as we saw in Poneybirds’ test, is not ideal for measuring differences between GPUs.
So I picked the Normandy WW2 Airfield Attack mission for the Mirage 2000C. It has a lot of clouds up close, a view of a relatively performance-intensive map, and some AI units, which will not shoot you down if you let go of the controls to benchmark for a minute.

These are the settings I used for the 4K benchmark: (mirrors were on by the way)

And these are the settings I used in VR:

I used FRAPS to record 60 seconds of frametimes, starting from a fixed point in the mission each time.
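For anyone who wants to reproduce the aggregation in the tables below, this is roughly the idea (a minimal sketch, not my exact tooling; it assumes FRAPS’ frametimes CSV with a cumulative “Time (ms)” column, and the file name is made up):

```python
import pandas as pd

# Hypothetical file name; FRAPS writes one "<run> frametimes.csv" per capture,
# with a cumulative "Time (ms)" timestamp for every rendered frame.
log = pd.read_csv("RX_6900XT_MAX_PERF frametimes.csv", skipinitialspace=True)
t_ms = log["Time (ms)"]

# Average fps over the whole 60 s capture.
duration_s = (t_ms.iloc[-1] - t_ms.iloc[0]) / 1000.0
avg_fps = (len(t_ms) - 1) / duration_s

# Per-second frame counts give integer min/max fps figures.
buckets = (t_ms // 1000).value_counts().sort_index()
per_second = buckets.iloc[1:-1]  # drop the partial first and last second

print(f"Min {per_second.min()}  Max {per_second.max()}  Avg {avg_fps:.3f}")
```

Whether this exactly matches FRAPS’ own MinMaxAvg summary is not guaranteed, but it is the same kind of per-second aggregation.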

Results

Since the Red Devil Ultimate uses a binned chip and is made for overclocking, I decided to test a full manual, noisy, max-performance overclock, as well as the currently preferred quiet mode.
Why test the overclocked setting if it is noisy? If I keep the card, I might make it quiet by removing the shroud and fans and strapping some Noctua 120 mm fans to it. That should be easier than it was on the Gigabyte 1080 Ti, since I should be able to leave the heatsink attached on this one. Then it will have plenty of cooling at whisper-quiet noise levels. Thus, the MAX_PERF overclocked setting is the most relevant for determining whether the 6900XT is worth the premium in the long run.
The Silent setting is not the actual silent-BIOS default but just the OC BIOS with a 1% underclock, a 7% undervolt, and a much lower fan curve. It does keep the full 300 W power limit of the normal BIOS (the actual silent BIOS lowers both the fan curve and the power limit).

First, some simple aggregate results in terms of framerate.

Higher is better

4K_HIGH_M2000C_Normandy_Airfield_attack_benchmark_start_300_knots
name                            Min    Max      Avg
----------------------------  -----  -----  -------
RTX_3080 - stock_low_latency     88    100   94.433
RX_6900XT - MAX_PERF             97    108  102.783
RX_6900XT - Silent               91    102   97.083


VR_M2000C_Normandy_Airfield_attack_benchmark_start_290_knots
name                            Min    Max     Avg
----------------------------  -----  -----  ------
RTX_3080 - stock_low_latency     68     81  74.383
RX_6900XT - MAX_PERF             72     88  79.333

Low (left) is better than high (right)

Then, the full frametime plots for both scenarios.
These are histograms, showing how often each frametime (the time it takes to render a frame, i.e. 1/fps) appeared.
Having more area (area = number of frames) on the left side is good, but especially having less area on the right side is good, because that means fewer slow frames.
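The plots themselves are just frametime histograms built from the same logs; a minimal matplotlib sketch of the idea (file names hypothetical):

```python
import matplotlib.pyplot as plt
import pandas as pd

runs = {  # hypothetical file names, one FRAPS frametimes log per card/setting
    "RTX_3080 - stock_low_latency": "rtx3080_frametimes.csv",
    "RX_6900XT - MAX_PERF": "rx6900xt_maxperf_frametimes.csv",
}

for label, path in runs.items():
    t_ms = pd.read_csv(path, skipinitialspace=True)["Time (ms)"]
    frame_ms = t_ms.diff().dropna()  # per-frame render time in ms
    plt.hist(frame_ms, bins=100, alpha=0.5, label=label)

plt.xlabel("frametime (ms)")  # left = fast frames = good
plt.ylabel("number of frames")
plt.legend()
plt.show()
```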

VR_M2000C_Normandy_Airfield_attack_benchmark_start_290_knots

Discussion

This is weird. Why does the RTX 3080 have 2 peaks with a dip in the middle? In VR, perhaps it makes sense that it switches between 45 and 60 or 60 and 90 Hz (I use OpenComposite). But in the flat-screen benchmark, I have no idea where this comes from. I thought maybe I had some OpenXR .dll in there, and cleared out the shaders to be sure, and ran it again. Still the 2 peaks though:
^This was visible in an earlier version of the 4K plot because the Nvidia Control Panel pre-renders frames by default. See post #1 for the fix and the old graphs.

The 2 peaks in VR are still present, but they make sense: around 11 ms (90 fps) and 22 ms (45 fps). In fact, I am surprised the Radeon card could report such a wide range of frametimes to FRAPS in VR; the Nvidia doesn’t do such granular reporting, it appears. So comparing the plots is not that useful, but we can still look at the aggregate fps numbers:
The RTX 3080 is about 5 fps slower on average (roughly 6%), but the difference in minimum fps is smaller than in maximum fps. Which is nice, because the low end is where we need the performance most.
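Worked out from the VR table above: (79.333 − 74.383) / 79.333 ≈ 6% on average, while the minimum differs by 72 − 68 = 4 fps and the maximum by 88 − 81 = 7 fps.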

Conclusion

In 4K, the RTX 3080’s stock performance lands just below the 6900XT in silent mode, with the overclocked mode roughly 8% ahead of the 3080.
In VR, the twin-peak thing (see discussion) means it loses out to the 6900XT.

Regardless, this difference is small enough that switching to the RTX 3080 is worth it to me. The ASUS TUF RTX 3080 is smaller, so I can put 3 case fans in the front of the case again, and its GPU fans have a more pleasant sound; it might even be quieter on the stock fan curve, even though its total board power is almost 10% higher than the RX 6900XT’s. But mostly, a 5% performance difference is not worth a 20% price increase to me at this point.

Rationality

The hardest part is that I personally prefer to have an AMD card. I like the company better: AMD has always done a lot of open-source work and used open standards, while Nvidia locked out its competitor (Nvidia Hairworks, G-SYNC vs FreeSync) and even managed to lock an entire sector (deep learning) to its brand. The letters “GEFORCE” are akin to the Galactic Empire logo for me (but uglier). On top of that, the 6800XT and 6900XT cards are more energy-efficient this generation, and the card ‘matches’ my AMD CPU. That makes it hard to return the card I already bought and replace it with an Nvidia.

But if I had not bought the 6900XT yet, I would never pay a 200 euro premium to get an AMD card that performs 5% better. And the choice I have now, to return it or not, is actually the same choice, even though it doesn’t feel like it.


I found it! Setting Low Latency Mode to Ultra disables pre-rendering that 1 frame:
(screenshot of the Nvidia Control Panel setting)

And it removed the “2 peaks” shape from the frametime plot (green line is Low Latency Mode set to Ultra):


Original post updated


never say never :wink: you just have to pay if you want that 5% more… diminishing returns…

…you can always return both, buy a card for 600 gold, and play on lower settings or in sims that are not that demanding, easy :smile:


Well yeah, of course.
But 5% is actually a maximum, since the Radeon was manually overclocked and the Nvidia wasn’t. If I squeeze max performance out of the Nvidia, that gap could easily halve.

But yes, you could of course always get a cheaper card with a lower cost per fps.

Price per watt may make up the difference over time.


test incoming, right? :slight_smile:

For heavy users, it could.
AMD is more efficient this generation, in general.

But for me, that won’t fly. I am lucky to fly a few hours in VR once every few days. As an absolute upper limit, let’s say I fly 2 hours every day; I don’t know exactly what I pay either, but Google says electricity in NL is currently between €0.50 and €0.70 per kWh, and let’s say the power difference is 50 W: it can’t be more than that, really.
Then even if all those high estimates are true and I keep the card for 2 years, that is still only about €50.
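Rough math with those high-end numbers: 50 W × 2 h/day × 730 days ≈ 73 kWh, and 73 kWh × €0.70/kWh ≈ €51.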

Now I love AMD, but with video cards, price and availability matter most to me, especially in this segment, where performance differences are around 10% but price differences can be 100%. It’s the same reason I had a reference 6800XT for the past year.

Custom-board 6800XT prices have come down too since I bought the Nvidia, but not below the price I paid for this RTX 3080, which is still better in DCS VR specifically.

Still:


Yes, of course, but it will take some time before I have confidence in a stable overclock.
I just figured out an efficient undervolt: 1875 MHz at 850 mV was almost stable but crashed while rendering fresh shaders for Persian Gulf; 1860 MHz at 850 mV has been stable for a few days now.
Next I will try to find the upper limit of maximum performance, pushing until a flat-screen benchmark crashes, and then find a stable point from there.

With regard to GPGPU, AMD is still playing catch-up, but things are starting to move. PyTorch has started shipping ROCm-accelerated Python packages. Hardware support for ROCm is still mainly limited to compute-focused cards, but they are ever so slowly expanding it. Hopefully they can manage to ship future generations supporting it out of the box.
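For anyone curious whether a ROCm build of PyTorch actually sees their Radeon: on the ROCm wheels the CUDA API is backed by HIP, so a quick sanity check looks roughly like this (assuming you installed the ROCm build from pytorch.org):

```python
import torch

print(torch.__version__)          # ROCm builds carry a "+rocm"-style version tag
print(torch.version.hip)          # HIP/ROCm version string; None on CUDA builds
print(torch.cuda.is_available())  # True if a supported AMD GPU is detected
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the Radeon's device name
```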


Ryzen and the 6000 series should have the wide-open DMA of SAM as well.


Also valid for Ryzen 3000 series CPUs and RX 5000 series GPUs in combination with a B550 or X570 chipset: https://www.amd.com/en/technologies/smart-access-memory. I completely missed it, so when I recently realized I had that option available, I immediately enabled it. :grinning:
