Thought this would get lost in the NATO thread, so posted it in the F-16 subforum.
It’s already scary that they’re using drones to fight!
It’s even scarier that they’ll use AI to fly the drones!
Hope there will be some AI-weapons non-proliferation agreement!
Hope the pilot can beat the AI with cunning and intuition!
Thinking more about it…
Hope the research can give us better AI!
We deserve it.
Yeah but it’s DARPA…what have they done since they invented the Internet?
Seems like this will be another case of “big defense industry and gov’t doesn’t understand how modern tech works.”
Early stages in a controlled environment, but here we go…
The event so far was interesting to watch. The AI doesn’t seem to mind the negative G and tends to win by being carefree with its existence - aggression wins etc.
The fight starts here - https://youtu.be/NzdhIA2S35w?t=16732
Hmm. But doesn’t the AI know, always, where the human is? Seems in the RW this would require some serious sensor ability. “Let me at em!”
“Perfect state information,” they stated - it always knows where the other object is, and presumably that includes speed, AoA, etc., since its only “eyes” are direct access to the object parameters!
It would need something similar to EODAS in the real world I guess.
Interesting that their future vision is a manned jet where the AI does the flying and BFM and pilot is a passenger who handles the cognitive thinking side.
I guess an early step would be to ‘recommend’ things, as in man-in-the-loop stuff. The winning AI was running on a laptop, so we’re not talking supercomputer stuff. It’s a huge (huge) leap from this to ‘AI fighter pilot’ though, as this was just a game with very artificial rules. Some of the rules (like keeping it under 9G) help the human, but the sensor omniscience is nice for the AI - not that AIs can’t be good at detecting visual imagery and motion (think how well an AI could keep a ‘head on a swivel’ across 360 degrees).
Just like fly-by-wire means the pilot isn’t actually moving the elevator but telling a computer their intent, I guess some combat ‘hints’ could use an AI like this for fighting. It’s still the human fighting; the AI is just a tool to augment. Just as Airbus has flight laws, maybe one day there’ll be ‘fight laws’ that ignore inputs that will get you killed in WVR. I imagine the whizzbang stuff in F-35s and F-22s is already pretty ‘augmented’ for BVR regardless of learning AI models.
The difference between this and a missile expert-system decision process built into an AIM-120 is that the AI model went through 100,000+ fights beforehand to ‘train’ the model. So effectively, every time it does something statistically wrong, it corrects and learns. AI models are split between a training part and an evaluation part (where it runs), so you can throw computing resources at the training side (racks of GPUs etc.) but the evaluation run-time is small and efficient (like how an iPhone recognizes a face locally).
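To make that train/evaluate split concrete, here’s a toy sketch (pure Python, made-up problem and numbers, nothing to do with the actual DARPA setup): the expensive part is the training loop that corrects after every error; the thing that ‘runs’ afterwards is just a few learned numbers and a dot product.

```python
import math, random

random.seed(1)
# Toy "fights": classify points as above/below the line y = x (label 1 if above).
data = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
labels = [1.0 if y > x else 0.0 for x, y in data]

w = [0.0, 0.0]   # the entire "model" is these three numbers
b = 0.0
lr = 0.5

def predict(point, w, b):
    # Evaluation side: tiny and fast - a dot product and a sigmoid.
    z = w[0] * point[0] + w[1] * point[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Training side: many passes over the data, nudging the weights
# after every statistically wrong answer. This is where the compute goes.
for _ in range(200):
    for (px, py), label in zip(data, labels):
        p = predict((px, py), w, b)
        err = p - label            # gradient of the log-loss w.r.t. z
        w[0] -= lr * err * px
        w[1] -= lr * err * py
        b -= lr * err

acc = sum((predict(pt, w, b) > 0.5) == (lb > 0.5)
          for pt, lb in zip(data, labels)) / len(data)
print("training accuracy:", acc)
```

After training, `predict` is cheap enough to run anywhere - which is the same reason the winning AI could fly on a laptop while the training presumably burned through far more hardware.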
I don’t particularly want a world of AI drones with the ability to independently fight, but then again, both the good guys and the bad guys have access to all this stuff as the tech advances. In some aspects it’s inevitable to some degree, and the ability to adapt will dictate who does well.
At the end of the day this progression is necessary.
They (the DARPA program manager, an ex-F-16/F-22 pilot) were at one point talking about how, back in the day, some had an idea to send the cavalry horses to the front line by train - that way they would be fresh when they arrived and run rings around the tanks they were facing…anyway.
This setup is quite disrespectful to someone who is doing his service, tbh, and I would not be surprised if the tables were turned if the fight were IRL - if machine learning could even counter the peripheral vision of a human pilot. But regarding the displayed BFM competency and tactics, the verdict from the BFM-pro armchair pilots over at benchmarksims is pretty harsh, and a clear indication that he’s out of his element. The most striking example is the lead turn at 600 kts; with real-life visual reference instead of a VR headset, I don’t believe a professional fighter pilot would ever do that if his life depended on it.
It’s a “dumbed down” setup to make the machine learning problem manageable. They were pretty upfront about this, though.
I think we need to remember that “machine learning” at this stage is essentially just pattern-matching arithmetic. Powerful when done right, but it’s not really autonomous; it builds from known data and extrapolates from that. It’s been applied for years (for example, the AH-64’s FCS uses Kalman filters to calculate where to aim the weapons) and really isn’t anything new in the defense industry. In essence, this is what we can do now, under current systems and within those limitations.
Was this whole thing a game changer? No, I don’t think so; this looks more like a “Congress gib money plz kthxbye” from the DOD/DARPA/mil industry. Perhaps for good reason, given that it is the future, but having experienced the DOD/gov’ts inner workings, more likely for ridiculous reasons. There’s a reason you see a lot more advances in these regards at places like Google, SpaceX, etc. as opposed to Boeing, LockMart, and so on; the people good at working on these kinds of things aren’t exactly chomping at the bit to go work in mil-defense.
For me, most of all is we don’t know what Banger’s background is. By that, I don’t mean his flying record, but how involved in computers is he? Has he or does he play a lot of games? Does he have a grasp of how simulation AI works in more crude/primitive forms? Without knowing any of this, I’m not willing to say that this was really much more than a DARPA advertisement.
As has been proven time and time again, you bring out some fancy new technology and call it unbreakable in the lab, then in the field it falls apart in 5 seconds because PVT Chucklenuts and SPC Schmuckatelli immersed it in a vat of solvent to clean it faster because they didn’t want to scrub it properly. All too often, the people building and making this tech forget about the people actually using it, and they especially forget that the people using it aren’t exactly the cream of the crop.
This reminded me of something…
WAY back in the early 90’s I think it was, I knew a guy, who knew a guy, that let me stroll through an Air Traffic “Future world”. He was a consultant helping them evaluate the system - it was what I know loosely as an “Expert System”. The AI was going to control everything. They’d been working on it for years then. Conceptually it’s all pretty simple, but!
The guy I chatted with told me that one day they (the ‘humans’) asked if they could script in some thunderstorms during an 80% scenario (traffic at 80% of max capacity). The gist of it was: the AI couldn’t adapt, at all. I speculate that a big reason is the humans in the aircraft weren’t, well, AI themselves; TWA123 wants to go left around a cell, and Eastern 456 wants to go right. We humans adapt.
So, they shut it (the AI) down and, here’s the interesting part, the humans had a real hard time dealing with the scenario now because they’d been, essentially, observers for weeks at a time - their skills had gotten rusty. Now, replace all this with BFM/ACM and yeah, they need to think about that.
While I agree with the rest of your post, I disagree with this. It just isn’t possible to solve this kind of complex optimization problem with a Kalman filter. The Kalman filter was developed to solve a problem that this AI did not have, namely dealing with noisy and/or incomplete data. The Kalman filter uses a state-space model (e.g. equations of motion) and a statistical noise model (plus sensor fusion if multiple sensors are available) to give you an optimal estimate of (classically) where something is and where it is going (though it can be applied to other problems where a state-space model helps). It cannot directly give you steering input. It also would not be able to conserve energy (i.e. trade position for energy to maneuver better at a later point).
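For the curious, here’s roughly what that means in miniature - a heavily simplified 1-D constant-velocity Kalman filter, all numbers invented for illustration. Note what it does: it estimates position and velocity from noisy measurements. There is no maneuver decision anywhere in it.

```python
import random

dt = 1.0
x = [0.0, 0.0]                       # state estimate: [position, velocity]
P = [[100.0, 0.0], [0.0, 100.0]]     # state covariance (very uncertain at start)
q = 0.01                             # process noise
r = 4.0                              # measurement noise variance (sensor std dev = 2)

def kalman_step(x, P, z):
    # Predict: propagate the state with the constant-velocity motion model.
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update: we only measure position (H = [1, 0]).
    s = Pp[0][0] + r                 # innovation variance
    k = [Pp[0][0] / s, Pp[1][0] / s] # Kalman gain
    y = z - xp[0]                    # innovation (measurement minus prediction)
    xn = [xp[0] + k[0] * y, xp[1] + k[1] * y]
    Pn = [[(1 - k[0]) * Pp[0][0], (1 - k[0]) * Pp[0][1]],
          [Pp[1][0] - k[1] * Pp[0][0], Pp[1][1] - k[1] * Pp[0][1]]]
    return xn, Pn

random.seed(0)
true_pos, true_vel = 0.0, 1.0
for _ in range(200):
    true_pos += true_vel * dt
    z = true_pos + random.gauss(0.0, 2.0)   # noisy position measurement
    x, P = kalman_step(x, P, z)

print("estimated velocity:", round(x[1], 2))  # converges toward the true 1.0
```

The output of the filter is a belief about the world, not an action; you still need something else entirely (a controller, an expert system, or a learned policy like AlphaDogfight’s) to decide what to do with that belief.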
It’s true that it is not autonomous because this thing in its present form couldn’t even bring the plane back to base. Like a dog chasing cars it would go after any plane it finds and kill it or be killed by it until it runs out of fuel. But what it does is still way, way more complex than a Kalman Filter.
Yep, just to +1 @sobek there. A Kalman filter is not really like a recurrent or convolutional neural network, where the layers change and the parameters are learnt rather than fixed. While none of this stuff is (or perhaps ever will be) Buck Rogers, there have been decent advances in the last 20 years, mainly through the commoditization of parallel computing and newer back-propagation algorithms that work on these parallel units. A good and approachable resource if you’re interested in learning about the area is here - http://neuralnetworksanddeeplearning.com/
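To show the “parameters are learnt, not fixed” point in miniature, here’s a toy two-layer network trained by back-propagation on XOR - the classic problem no single linear rule (and no fixed filter) can represent. Purely illustrative code; the sizes, seed, and learning rate are arbitrary.

```python
import math, random

random.seed(42)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# 2 inputs -> 2 hidden units -> 1 output; all weights start random.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 1.0

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x):
    h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    o = sig(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in xor)

before = loss()
for _ in range(5000):
    for x, t in xor:
        h, o = forward(x)
        # Back-propagate: output delta first, then hidden deltas through old w2.
        do = (o - t) * o * (1 - o)
        dh = [do * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w2[j] -= lr * do * h[j]
            w1[j][0] -= lr * dh[j] * x[0]
            w1[j][1] -= lr * dh[j] * x[1]
            b1[j] -= lr * dh[j]
        b2 -= lr * do
after = loss()
print("loss before/after training:", round(before, 3), round(after, 3))
```

Nothing here was hand-designed to solve XOR; the gradient updates moved the weights until the network represented it. A Kalman filter’s gains adapt too, but its model structure is fixed by the designer - that’s the distinction @sobek was making.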
Another way to think of these games is really as scenarios for what happens if the bad guys get there first - as in, it’s not just a case of wanting to replace USAF warm bodies; it’s more about understanding how to fight against a threat like that. Swarms of cheap drones with specialized AI could be a thing one day, so it’s good to ‘threat model’ it like crazy. Competitions like this help with that (a bit - although I suspect what we see in public is not even state of the art…)
I look at it all as taking known data and extrapolating from that; I’m sorry to say I’m a bit dumb when it comes to arithmetic and matters associated with such. But one thing I can say is I’m tired of everyone and their brother frothing about “AI” and “Skynet is coming!” when it’s really far from that nefarious and isn’t so much “learning” as it is doing fancy stuff with pattern matching. From my dumb, uneducated perspective, of course.
There is a huge amount of bullshit in the whole area of AI, and if you know about Kalman filters then you aren’t even a little bit dumb.