Does DCS have a Visibility Problem?

Easily compensated for with a slight head movement, don’t you think? And made up for by the lack of a bulky helmet and visor. Oh, and no canopy to bang against in that bulky helmet.

If you’re able to rotate your neck more than 90 degrees, you might want to see a doctor about it :smile: Of course there are other techniques you can use that I’ve either heard of or described, but none of them account for the G stresses, the back pain from years of chronic exertion under G, or, I’m assuming, the indigestion from a rapidly eaten, probably poorly nutritious meal before dashing to the flight line. Or the emotional malaise of being separated from family and home, or the fun of an adrenaline rush. VR simulates all that, right?

All of that aside, while I’m glad you’re able to spot a non-maneuvering strategic transport at 30 miles, I’m not sure what that has to do with my issues with VR in a BFM situation. Valid or invalid as those issues might be, they have little to nothing to do with DCS’ rendering system; those encounters happen well within the region where the model is rendered.

1 Like

Meh, my two cents on this whole issue (and I do this for a living) is that spotting is mostly mental and can be trained. I have zero issues with spotting and visibility in DCS on a 1920x1080 screen (no comment on higher resolutions, since I have not tried them). If I have a radar lock on something and a TD container, I can spot targets at the ranges, and where and how, I would expect. When I don’t, I pick them up at ranges and against backgrounds at a rate that makes sense to me.

Even when I messed about with VR, I had no issues with a WVR fight. There were definitely issues with spotting at longer ranges (nearly impossible), but tracking and following a fighter once engaged was about what I expected and was used to.

What concerns me about changing visibility and spotting is that it removes tactics. A huge part of dogfighting is maintaining the visual: you can’t fight what you can’t see, and offensively you try to exploit that wherever and whenever you can. Getting low against a cluttered background, pointing the jet to get as skinny as possible, and making an unobserved entry into a fight are the goal of just about every intercept we run.

Defensively, if you can lose yourself and make the bandit go no-joy, you have an opportunity to exploit that and either reverse the fight or separate. It should not be easy, because if you make it too easy then these opportunities and tactics are taken away. And they are 100% realistic.

If you have trouble spotting things in a visual engagement, what you need to do is start visualizing the engagements more. I can talk you through techniques for where to look in BFM and visual engagements to maintain the tally (hint: if the bandit was somewhere and headed in a certain direction 5 seconds ago when you saw him, it is very likely he is still there).

This is all separate from the need for AI to match human spotting abilities; I’m 100% on board with that. Again, if they instantly spot you at a certain range, it removes tactics that you could otherwise try to perform.

15 Likes

This is going to be my epitaph.

Yes please!
Tell us everything you know (and are allowed to) please!

I can only say that, regarding the AI, that’s a huge frickin’ can of worms that would get real complex and convoluted real fast. There are certain ways I could think of to get something reasonably accurate for a given aspect, but only for small numbers of aircraft. There are a lot of numbers to punch together in a very dynamic situation, and the end result may not be too different from now.

Theoretically, if you could take an aircraft’s skin, crunch a few numbers regarding the colors on its overall surfaces (clearly defined top and bottom, etc.), then pipe that through whether it’s against the sky or ground, then the color of the background in comparison to the skin color, pipe in the speed, relative angle and motion, and… I’ve lost my train of thought. There are a lot of variables in there, which is why, when I tried to simulate a Longbow FCR, I chickened out and just did a simple probability variable for a given target type against the overall distance to it. Trying to go to more extremes would have been utterly pointless, as the FCR was only a small part of the whole aircraft.

AI, for better or worse, is much the same. It’d probably be easier and more realistic to simply introduce randomized variables based on the AI skill level to give it a chance as to whether or not it will detect, lose sight, etc. But once again, even that costs processing power. If it isn’t already done, I do imagine one could get a fairly realistic representation using these randomized variables, since we’re dealing with a pretty chaotic environment as it is.
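To make the randomized-variables idea concrete, here’s a toy sketch: on every update, an AI rolls against detect and lose-sight chances tied to its skill level. The skill names and probabilities below are entirely invented for illustration; nothing here reflects how DCS actually does it.

```python
import random

# Invented per-skill probabilities for a single check; not DCS values.
SKILL = {
    "rookie":  {"detect": 0.05, "lose_sight": 0.20},
    "veteran": {"detect": 0.15, "lose_sight": 0.08},
    "ace":     {"detect": 0.30, "lose_sight": 0.02},
}

def visual_check(skill, has_tally, rng=random.random):
    """One update tick: returns True if the AI has (or keeps) the visual."""
    p = SKILL[skill]
    if has_tally:
        return rng() >= p["lose_sight"]  # small chance of losing sight
    return rng() < p["detect"]           # small chance of picking the target up
```

The appeal is exactly the cheapness mentioned above: one random draw and a comparison per unit per tick, no geometry required.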

4K doesn’t seem so crazy now, does it? :wink: I think a 60" 4K screen from 2.5 feet away gives something like a real-life 90-degree FOV!
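For what it’s worth, a quick sanity check of that estimate, assuming a flat 16:9 panel viewed head-on:

```python
import math

def horizontal_fov(diagonal_in, distance_in, aspect=(16, 9)):
    """Horizontal field of view, in degrees, of a flat screen viewed head-on."""
    w, h = aspect
    width = diagonal_in * w / math.hypot(w, h)  # 60" diagonal -> ~52.3" wide
    return 2 * math.degrees(math.atan(width / 2 / distance_in))

print(round(horizontal_fov(60, 30)))  # 60" screen at 2.5 ft -> 82 degrees
```

So roughly 82 degrees horizontally; in the ballpark of the “like 90” eyeball, if a touch under.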

1 Like

Well, sort of… I feel rather cut off from the outside world when in my goggles. The feeling I get when the wife sneaks up on me and suddenly shouts at me for not doing the dishes/dinner/shopping can be described as emotional malaise… or even stronger words. :wink:

1 Like

I get sleepy. Every. Time. Zzzzzzz…

I think that’s because you’re in a virtual cockpit, and we’re mostly tired when we fly. Sort of a Pavlovian response.

3 Likes

A simple AI like the one you described is not tough on the CPU.
The geometrical stuff takes a few cycles, yes, but even if you run it every 300 ms for all units within 30 miles of the player, that’s nothing compared to what is already done.
The problem is that that probably won’t be sufficient.
I have done a bit of programming in that area as well, and the problem often is structuring tasks and building priority lists.
Man, if only ED were a bit more open to suggestions, I’d love to contribute a few snippets here and there, things I learned the hard way.

Btw, that could be one of the things that might benefit from multithreading. It is an asynchronous task anyway.

It could even be possible to have a parser create ad hoc lookup tables for visibility during the mission creation/compile phase*, so that they become part of the data that gets read while the mission loads.
A handful of bytes, really.

*After all, it’s in the mission creation phase that things like time of day, weather and liveries get chosen, and those are mostly the variables that affect visibility…
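A rough sketch of that bake-at-compile-time idea: precompute a visibility factor per (livery, time of day, weather) combination when the mission is created, so the running sim only does a table lookup. Every name and number below is invented for illustration.

```python
import itertools

# Invented mission parameters; the real sets would come from the mission file.
LIVERIES = ["grey", "desert", "forest"]
TIMES_OF_DAY = ["dawn", "noon", "dusk", "night"]
WEATHER = ["clear", "overcast", "fog"]

def visibility_factor(livery, time_of_day, weather):
    # Stand-in for whatever expensive per-combination computation you'd
    # actually run at compile time (skin color analysis, lighting, etc.).
    base = {"grey": 0.9, "desert": 0.7, "forest": 0.5}[livery]
    light = {"dawn": 0.6, "noon": 1.0, "dusk": 0.5, "night": 0.2}[time_of_day]
    wx = {"clear": 1.0, "overcast": 0.7, "fog": 0.4}[weather]
    return base * light * wx

# Baked once at mission creation, then loaded alongside the mission data:
TABLE = {combo: visibility_factor(*combo)
         for combo in itertools.product(LIVERIES, TIMES_OF_DAY, WEATHER)}
# 3 x 4 x 3 = 36 entries; a handful of bytes, as noted above.
```

At runtime a dictionary lookup replaces the whole computation, which is the entire point of baking it into the mission file.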

1 Like

The real question is, what is being used in DCS currently? Dollars to donuts it’s the same AI they’ve had since LOMAC.

At least derived from it; it certainly looks like it.

Is there a consensus as to which is the best resolution for spotting aircraft at a distance?

640x480, I bet. Or lower. Lower res = larger dots = easier to see.

1 Like

So, I guess that I’m more or less Tiger bait at 3440x1440 :slight_smile:

4 Likes

I know this is getting a little off topic, but I think the AI is starting to get a little criticism it may not deserve. This is my understanding of how AI detection works:

The process starts with a line-of-sight check. Terrain, objects, trees and overcast block line of sight and always prevent detection.

From there, each unit type has a baseline detection distance. For example, ‘fighters’ have a baseline of 7,500 meters.

That 7,500 meter distance is then adjusted up or down for:

  • actual size of the aircraft
  • illumination level
  • fog setting
  • angular motion in relation to detector
  • unit AI level

That distance is then reduced based on what background it is being viewed against:

  • air = 100%
  • road = 80%
  • runway = 80%
  • water = 75%
  • land = 60%
  • forest = 30%

An additional check is made if the aircraft is a light source. If so, a baseline distance of 10,000 meters is applied and that distance is adjusted for the light’s intensity.

Another check is made when a unit fires a weapon. That distance varies depending on weapon type, for example a missile shot is detectable at 5,000 meters.

After all that is done, the AI’s field of view and ability to scan the area is simulated. This is done by applying a ‘mean time to detect’, or average time the AI will take to detect a target within that final calculated distance, based on the off-boresight angle.

0 degrees = 10 sec
180 degrees = 180 sec

Let’s say, based on all this, the calculated detection distance is 4,329 meters, an aircraft is inside that distance at 45 degrees off the detecting aircraft’s nose, and line of sight is not blocked. At some point in the next 45 seconds, that aircraft will be detected. Of course, the detection distance and the average detection time are constantly updated based on all the factors listed above.

There is room for improvement in all things but, you have to admit, that is a pretty solid system.
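For anyone curious, the steps above can be sketched roughly like this. The baseline distance and background factors are taken straight from the post; the multiplicative modifier interface, the linear interpolation of the mean time to detect, and the per-tick roll are my assumptions about how such a system might be wired together, not a claim about ED’s actual code.

```python
import random

# Background reduction factors, as listed in the post.
BACKGROUND_FACTOR = {
    "air": 1.00, "road": 0.80, "runway": 0.80,
    "water": 0.75, "land": 0.60, "forest": 0.30,
}

def detection_distance(baseline_m, background, size=1.0, illumination=1.0,
                       fog=1.0, angular_motion=1.0, skill=1.0):
    """Baseline distance (e.g. 7500 m for 'fighters') scaled by the adjustment
    modifiers, then reduced by the background the target is seen against.
    Treating the adjustments as multipliers is an assumption on my part."""
    d = baseline_m * size * illumination * fog * angular_motion * skill
    return d * BACKGROUND_FACTOR[background]

def mean_time_to_detect(off_boresight_deg):
    """10 s at 0 degrees, 180 s at 180 degrees; linear in between (assumed)."""
    return 10 + (180 - 10) * off_boresight_deg / 180

def detect_roll(off_boresight_deg, dt=1.0, rng=random.random):
    """One scan update for a target already inside the calculated distance,
    with line of sight clear: succeed with probability dt / mean_time."""
    return rng() < dt / mean_time_to_detect(off_boresight_deg)
```

With a success chance of dt / mean_time per tick, the expected time until detection works out to the mean time, which is what a “mean time to detect” should do.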

8 Likes

Cancel the Christen Eagle…we need a BD5 with guns…

2 Likes

Great breakdown of what is happening “behind the curtain”. I don’t dogfight much… or at least I haven’t much, so I don’t have a lot of experience to form an opinion. I tend to fly helos and ground-pounding modules and try to let my CAP AI tussle with their intercept AI…


1 Like