So, just like @Poneybirds one and a half years ago, I ended up with two graphics cards of the same caliber. This time not because of the absolute cost of the cards, but because quickly falling prices forced me to reconsider my RX 6900 XT purchase while I am still within the return period.
Contenders
- Nvidia RTX 3080 ASUS TUF Gaming OC LHR V2, bought at €850
- AMD RX 6900 XT PowerColor Red Devil Ultimate, bought at €1080
Question
Is the RX 6900 XT Ultimate worth about 200 euros more?
Hypothesis
The 6900 XT may be a higher-tier card than the 3080, but the Nvidia RTX 3080 and higher have previously shown better performance in DCS VR at high res (Reverb) than their AMD rivals of the same tier.
Initially, I would expect similar performance from both with no significant difference.
I did a quick VR flight with the 3080 after taking the measurements but before looking at the results (to stay as unbiased as possible), flying the same mission that I flew yesterday with the 6900 XT, and it felt similar enough.
Setup
CPU: Ryzen 7 5800X3D
MoBo: Gigabyte B550 Aorus Pro Rev 1.0
RAM: 32GB (8x8GB) G.Skill 4000CL18 clocked to 3600CL18
SSD (with OS and DCS): Samsung 970 Evo 1TB NVMe M.2
PSU: Corsair RM850x
Case: Fractal Design Meshify C with lots of Noctua fans
OS: Windows 10
DCS version: 2.7.14.242228
Test
I ran two tests: one in VR, and one on a monitor with the resolution set to 4K (I only own a 1440p display, but felt that would not stress the cards enough, so I did some acronym magic to get DCS World to render at 4K on both cards).
I did not touch the controls of course, and the VR headset stayed put on the shelf during the tests, so head movement could not influence the results.
I tried to find a “standard” GPU benchmark for DCS, but could only find a .trk file that is no longer recognized, and Gryz’ CPU-focused benchmark, which, as we could see in Poneybirds’ test, is not ideal for measuring differences between GPUs.
So I picked the Normandy WW2 Airfield Attack mission for the Mirage 2000C. It has a lot of clouds up close, a view over a relatively performance-intensive map, and some AI units, which will not shoot you down if you let go of the controls to benchmark for a minute.
These are the settings I used for the 4K benchmark (mirrors were on, by the way):
And these are the settings I used in VR:
I used FRAPS to record 60 seconds of frametimes, starting from a fixed point in the mission each time.
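As a side note, the FRAPS frametimes log can be reduced to per-second Min/Max/Avg fps numbers like the ones below with a few lines of Python. This is only a minimal sketch of the idea, not necessarily the exact processing used for these results, and it assumes the CSV holds a frame index plus a cumulative timestamp in milliseconds:

```python
# frametime_stats.py - a minimal sketch, assuming a FRAPS "frametimes" CSV with
# two columns: frame index and a cumulative timestamp in milliseconds.
import csv
import sys

def load_timestamps(path):
    """Return the list of per-frame timestamps (ms) from a FRAPS frametimes CSV."""
    times = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the "Frame, Time (ms)" header row
        for row in reader:
            if len(row) >= 2 and row[1].strip():
                times.append(float(row[1]))
    return times

def per_second_fps(times):
    """Bucket frames into whole seconds and count the frames in each bucket."""
    buckets = {}
    for t in times:
        second = int(t // 1000)
        buckets[second] = buckets.get(second, 0) + 1
    seconds = sorted(buckets)[:-1]  # drop the last, usually partial, second
    return [buckets[s] for s in seconds]

if __name__ == "__main__":
    fps = per_second_fps(load_timestamps(sys.argv[1]))
    print(f"Min {min(fps)}  Max {max(fps)}  Avg {sum(fps) / len(fps):.3f}")
```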
Results
Since the Red Devil Ultimate uses a binned chip and is made for overclocking, I decided to test both a fully manual, noisy, maximum-performance overclock and the quiet mode I currently prefer.
Why test the overclocked setting if it is noisy? If I keep the card, I might make it quiet by removing the shroud and fans and strapping some Noctua 120 mm fans to it. That should be easier than it was on the Gigabyte 1080 Ti, since I should be able to leave the heatsink attached on this one. Then it will have plenty of cooling at whisper-quiet acoustic levels. Thus, the MAX_PERF overclocked setting is the most relevant one for determining whether the 6900 XT is worth the premium in the long run.
The Silent setting is not the actual silent-BIOS default, but simply the OC BIOS with a 1% underclock, a 7% undervolt, and a much lower fan curve. It does keep the full 300 W power limit of the normal BIOS. (The real silent BIOS lowers both the fan curve and the power limit.)
First, some simple aggregate results in terms of framerate.
Higher is better
4K_HIGH_M2000C_Normandy_Airfield_attack_benchmark_start_300_knots
name Min Max Avg
---------------------------- ----- ----- -------
RTX_3080 - stock_low_latency 88 100 94.433
RX_6900XT - MAX_PERF 97 108 102.783
RX_6900XT - Silent 91 102 97.083
VR_M2000C_Normandy_Airfield_attack_benchmark_start_290_knots
name Min Max Avg
---------------------------- ----- ----- ------
RTX_3080 - stock_low_latency 68 81 74.383
RX_6900XT - MAX_PERF 72 88 79.333
Then, the full frametime plots for both scenarios.
Low (left) is better than high (right)
This is a histogram, showing how often each frametime (the time it takes to render a frame, 1/fps) occurred.
Having more area (area = number of frames) on the left side is good, but having less area on the right side is even more important, because that means fewer slow frames.
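In case anyone wants to recreate such a plot, here is a rough sketch; it reuses load_timestamps() from the snippet above, and the bin count and labels are my own arbitrary choices rather than what was used for the actual plots:

```python
# frametime_histogram.py - a rough illustration of the plotting step.
import sys
import matplotlib.pyplot as plt
from frametime_stats import load_timestamps  # from the earlier sketch

times = load_timestamps(sys.argv[1])
# frametime of a frame = timestamp difference to the previous frame
frametimes = [b - a for a, b in zip(times, times[1:])]

plt.hist(frametimes, bins=200)  # bin count is an arbitrary choice
plt.xlabel("frametime (ms)")
plt.ylabel("number of frames")
plt.title("Frametime distribution (more area on the left is better)")
plt.show()
```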
Discussion
This is weird. Why does the RTX 3080 have 2 peaks with a dip in the middle? In VR, perhaps it makes sense that it switches between 45 and 60 or 60 and 90 Hz (I use OpenComposite). But in the flat-screen benchmark, I have no idea where this comes from. I thought maybe I had some OpenXR .dll in there, and cleared out the shaders to be sure, and ran it again. Still the 2 peaks though:
^ This was visible in an earlier version of the 4K plot and was caused by Nvidia Control Panel pre-rendering frames by default. See post #1 for the fix and the old graphs.
The two peaks in VR are still present, but they make sense: one around 11 ms (90 fps) and one around 22 ms (45 fps). In fact, I am surprised the Radeon card reports such a wide range of frametimes to FRAPS in VR; the Nvidia card does not appear to report with that granularity. So comparing the plots directly is not that useful, but we can still look at the aggregate fps numbers:
The RTX 3080 is about 5 fps slower on average (79.3 vs 74.4, roughly 6%), but the gap in minimum fps is smaller than the gap in maximum fps. That is nice, because the low end is where we need the performance most.
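For anyone who wants to put a number on the twin peaks instead of eyeballing the histogram, counting how many VR frames land near each reprojection bucket is enough. This is just a sketch reusing load_timestamps() from earlier; the 16 ms split point and the file name are my own assumptions, not something from the original test:

```python
# Rough sketch: classify VR frames as "fast" (~11 ms / 90 fps) or "slow" (~22 ms / 45 fps).
# The 16 ms threshold is an arbitrary midpoint between the two peaks.
from frametime_stats import load_timestamps

times = load_timestamps("vr_run frametimes.csv")  # hypothetical file name
frametimes = [b - a for a, b in zip(times, times[1:])]

fast = sum(1 for ft in frametimes if ft < 16.0)
slow = len(frametimes) - fast
print(f"~90 fps frames: {fast} ({100 * fast / len(frametimes):.1f}%)")
print(f"~45 fps frames: {slow} ({100 * slow / len(frametimes):.1f}%)")
```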
Conclusion
In 4K, the RTX 3080 at stock trails the 6900 XT by about 3% in silent mode and about 8% in overclocked mode.
In VR, the twin-peak behavior (see Discussion) means it loses out to the 6900 XT.
Regardless, this difference is small enough that going with the cheaper card is worth it to me. The ASUS TUF RTX 3080 is smaller, so I can put three case fans in the front of the case again, and its GPU fans have a more pleasant sound: it might even be quieter on the stock fan curve, even though its total board power is almost 10% higher than the RX 6900 XT’s. But mostly, a 5% performance difference is not worth a 20% price increase to me at this point.
Rationality
The hardest part is that I personally prefer to have an AMD card. I like the company better: AMD has always done a lot of open-source work and used open standards, while Nvidia locked out its competitor (Hairworks, G-SYNC vs FreeSync) and even managed to lock an entire sector (deep learning) to its brand. The letters “GEFORCE” are akin to the Galactic Empire logo for me (but uglier). On top of that, the 6800 XT and 6900 XT are the more energy-efficient cards this generation, and the 6900 XT ‘matches’ my AMD CPU. All of this makes it hard to return the card I already bought and replace it with an Nvidia one.
But if I had not bought the 6900 XT yet, I would never pay a 200 euro premium to get an AMD card that performs 5% better. And the choice I have now, to return it or not, is actually the same choice, even though it does not feel like it.