Why Unreal Engine 5.3 is a BIG Deal

Anybody else see a potential flightsim from this tech, one day?

6 Likes

Will nanite become the tech that solves spotting distance issues in flightsims…?

I don’t think so. It’s somewhat related to LODing, but it’s more an issue of how to map objects that appear at subpixel size to the camera onto an actual pixel.
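For a rough sense of the numbers, here’s a back-of-the-envelope sketch; wingspan, range, FOV and resolution are all just assumed figures:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi       = std::acos(-1.0);
    const double span_m   = 12.0;          // assumed fighter wingspan
    const double range_m  = 7.0 * 1852.0;  // 7 NM in metres
    const double fov_deg  = 60.0;          // assumed horizontal FOV
    const double width_px = 3840.0;        // 4K horizontal resolution

    // Angular size of the target vs. the angular size of one pixel.
    double target_deg = 2.0 * std::atan(span_m / (2.0 * range_m)) * 180.0 / pi;
    double pixel_deg  = fov_deg / width_px;

    std::printf("target %.4f deg, pixel %.4f deg -> %.2f px\n",
                target_deg, pixel_deg, target_deg / pixel_deg);
    // Prints roughly: target 0.0530 deg, pixel 0.0156 deg -> 3.39 px.
    // A few times further out the whole aircraft drops below one pixel,
    // and the renderer has to decide what, if anything, to put there.
    return 0;
}
```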

2 Likes

That’s my understanding as well. You can’t render something smaller than the minimum render size, which would be a pixel, right? I think the only way to address it is some kind of smart scaling à la BMS, or the dot method that the ED team is experimenting with for DCS. Both have their drawbacks and advantages.
(speaking in basics about a topic so far above my head I have to crane my neck)
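If it helps to make the smart-scaling idea concrete, here’s a minimal sketch. The actual BMS algorithm isn’t documented here, so the threshold and behaviour are pure assumptions:

```cpp
// If a target's projected size falls below a floor of a few pixels, inflate
// the model so it stays visible instead of vanishing subpixel.
// min_px is an invented tuning knob, not a value taken from BMS.
double smartScaleFactor(double projected_px, double min_px = 3.0) {
    if (projected_px >= min_px)
        return 1.0;                   // big enough, render as-is
    return min_px / projected_px;     // inflate world-space size to the floor
}
```

One of the drawbacks is visible right in the sketch: near the floor, a distant contact renders larger than it really is.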

1 Like

You can supersample, but that is very costly if you do it for the entire frame.
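The cost is easy to quantify, since the sample count grows with the square of the per-axis factor:

```cpp
// Shaded samples per frame under n-times-per-axis supersampling.
long long ssaaSamples(long long w, long long h, long long n) {
    return w * n * h * n;
}
// ssaaSamples(3840, 2160, 1) ==  8294400   (native 4K)
// ssaaSamples(3840, 2160, 2) == 33177600   (4x the shading work)
```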

1 Like

No, it probably won’t solve that issue.
But I’d say that if an aircraft-sized model can’t fill a pixel on the modern high-res screens we’ve got today, you’re not supposed to be able to see it anyway. :smile:
The probability of a pilot spotting an object within his or her central vision is very low, even at 7 NM.
The big problem, as I see it, is how the aircraft model behaves close in, at distances where you are supposed to be able to see and track an aircraft.
Maybe Nanite tech will help cure aircraft details popping in and out of the model…? Sometimes it looks like the aircraft is changing course, when it’s just the LOD switching.
And maybe it could help with targets popping in and out against the scenery?
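For what it’s worth, the “changing course” illusion falls straight out of how classic discrete LODs are selected; a toy version with invented thresholds:

```cpp
// A hard threshold on projected size means the silhouette jumps between
// meshes at each boundary, which can read as the aircraft changing course.
int selectLod(double projected_px) {
    if (projected_px > 200.0) return 0;  // full-detail mesh
    if (projected_px > 40.0)  return 1;  // reduced mesh
    if (projected_px > 8.0)   return 2;  // coarse mesh
    return 3;                            // impostor / dot
}
```

Nanite’s claim to fame is refining per-cluster rather than per-model, which is why it could plausibly remove exactly this kind of popping.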

3 Likes

It’s really a bit more complicated than that, because features like the wings can easily have subpixel thickness when the aircraft as a whole is already well above pixel size. And it’s usually not a good idea to tailor software exclusively to certain hardware when you can’t be sure that users will have, as in your example, screens above a certain pixel density.
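The same projection arithmetic makes the point concrete: at a range where the whole airframe is comfortably super-pixel, a thin feature can already be deep subpixel (sizes invented for illustration):

```cpp
#include <cmath>

// Projected width in pixels of a feature of size_m metres at range_m metres,
// assuming a 60 degree FOV across 3840 horizontal pixels.
double projectedPx(double size_m, double range_m) {
    const double pi = std::acos(-1.0);
    double ang_deg = 2.0 * std::atan(size_m / (2.0 * range_m)) * 180.0 / pi;
    return ang_deg / (60.0 / 3840.0);
}
// projectedPx(10.0, 5000.0) -> ~7.3 px   (the wingspan: clearly visible)
// projectedPx(0.2,  5000.0) -> ~0.15 px  (the wing edge-on: subpixel, flickery)
```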

Where LOD switching issues are involved, yes, that seems exactly like what Nanite was developed for.

1 Like

It is indeed more complicated.
In real life you have to deal with the constraints of the human visual lobe and focus distances that just aren’t possible to replicate on a 2D screen, not even with stereoscopic VR. That limitation probably is what it is, I’m afraid. It will always have to be approximated in flightsims, to some extent. But I think it has improved with higher resolution screens. It certainly looks better on a 2K screen compared to VGA, which was the point I tried to make, not that developers should lock in on a certain resolution. :blush:
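The 2K-versus-VGA point can be put in rough numbers. 20/20 vision resolves about one arcminute, i.e. around 60 pixels per degree; assuming the screen spans a 60 degree FOV:

```cpp
// Pixels per degree for a given horizontal resolution and field of view.
double ppd(double width_px, double fov_deg) { return width_px / fov_deg; }
// ppd(640,  60.0) -> ~10.7  (VGA: an order of magnitude below the eye)
// ppd(2560, 60.0) -> ~42.7  (2K-wide: much closer to the eye's ~60 ppd)
```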

1 Like

Both very good points!

Though MSFS already does so much that I had almost given up on ever seeing, I still hunger for more: ray tracing, real smoke effects subject to wind, cloth simulation…

Yeah, mostly eye candy, but…

1 Like

I don’t know if this would ever be a possibility, but my thinking is this: the next step we need is dynamic resolution. Yeah, we technically have that already, but only for the ENTIRE picture. We have depth of field, but that only pretends to do it.

I’m talking about replicating the structure of the retina. We see in great detail within a narrow cone in front of us, while acuity drops off toward our peripheral vision to what would be damned low res on a PC.

So by using some kind of eye tracker, we render at, say, 8K in a radius around wherever you’re looking, and drop it to below 1080p toward the edges. Of course, the big problem is making sure it can move that high-res focus as quickly as you can move your eyes, so when you look toward the edge the detail there increases almost instantly, while it decreases where you’ve just looked away.
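In code, the core of that idea is just a falloff curve around the gaze point; the radii and scales here are placeholders, not from any shipping implementation:

```cpp
// Resolution scale as a function of angular distance from the tracked gaze.
double resolutionScale(double eccentricity_deg) {
    if (eccentricity_deg < 10.0) return 1.0;   // fovea: full ("8K") detail
    if (eccentricity_deg < 30.0) return 0.5;   // near periphery: half res
    return 0.25;                               // far periphery: quarter res
}
```

Modern GPUs expose roughly this as variable rate shading, which is what eye-tracked foveated rendering in VR headsets drives.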

Naturally this would make screenshots/videos/streaming worthless, as you’d be stuck seeing only what the recorder was looking at, to a far more severe degree than TrackIR or VR now.

But frankly, when you crank up the resolution and texture detail, it doesn’t need to apply to 100% of every object on the screen; what I’m looking at should be prioritized. My feeling is that while this would let the GPU work more intelligently where it’s needed, it would certainly raise the demands on the CPU, which has to figure out which parts of the screen need higher res and which need lower while textures stream up and down from the drive to the GPU.

This is how foveated rendering works in VR, I think? But I guess it would be possible for 2D screens as well.

4 Likes

Just started to toy around with Unreal Engine 5.4.3 and it’s quite a treat. Imported the CVA-31 Bon Homme Richard mesh as FBX, and while it looks somewhat ugly, it’s a start. Will see how far I get with all those model and Blueprint options, but it feels more like modding++ than the dreadful C++ experience of using a game engine…
Imported the free CVN-76 Ronald Reagan carrier and it looks great. Now I have to figure out why I fall through the deck.
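My guess is that the imported mesh simply has no collision. If I remember the UE5 settings right (worth double-checking in your engine version), the quick fix is in the Static Mesh Editor: set Collision Complexity to “Use Complex Collision As Simple”, or the equivalent in C++:

```cpp
#include "Engine/StaticMesh.h"
#include "PhysicsEngine/BodySetup.h"

// Make the mesh trace against its render triangles, so walkable surfaces
// like a carrier deck actually block the pawn. Editor-time sketch only.
void UseComplexAsSimple(UStaticMesh* Mesh)
{
    if (UBodySetup* Body = Mesh->GetBodySetup())
    {
        Body->CollisionTraceFlag = ECollisionTraceFlag::CTF_UseComplexAsSimple;
        Mesh->MarkPackageDirty();  // so the change is saved with the asset
    }
}
```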

Have any of you taken some steps in UE5, and what are your experiences?

2 Likes