Isn’t that just about where it is now?
God I hope not!
Right, same here. And many (most? all?) of the people in that ‘dispatch’ role will have no real-world experience from a pilot (“aviator!”) perspective. They’ll only know what they’re trained to know, and if you have the same burden of training and experience, all you’ve done is change where the pilot sits.
Lol no.
At least not at my company, or any that I’ve worked for.
Mostly, I was speaking about the automated systems that are in use today: autopilot, etc.
Oh, I see. No, no more than cruise control and lane assist automate the task of driving and turn the driver into a dispatcher.
No matter how often everyone screams on high that fully automated AI is the perfect solution I will NEVER be a convert to this idea. At this point, I am too old and too set in my ways to EVER trust a fully automated AI only vehicle of any kind.
Wheels
Personally I can’t see it happening for quite a while.
Exactly. It isn’t a technology issue (heck, I would argue that the tech is already here) it is a people issue and because it is a people issue, it is a political issue.
IMHO it would be political suicide to legislate to allow this, because of the backlash that would occur when the inevitable happens with a fully automated transport system for people, or with a military system given full kill-chain authority without a person in the loop.
There is zero appetite for this kind of risk among the current generation of politicians in just about every democracy. Until the current millennial generation starts getting elected and we have future generations who are a lot more comfortable with this concept, I just can’t see it happening.
You would then need a dog in the cockpit as well… How does it go? The pilot is there to feed the dog, and the dog is there to bite the pilot if he tries to touch anything.
All I can say is that if the Autopilot on my Tesla is anything to go by, I won’t be trusting an AI pilot anytime soon. There are too many variables. One thing the human brain excels at is improvising: making a best guess from incomplete data, thinking outside the box when a wildcard problem presents itself. Going back to the Sully scenario, a computer might very well have turned the airplane back to La Guardia, but in doing so it would have endangered countless people on the ground and flown over a city where, if it didn’t make it back to the airport, there would have been almost zero chance of anyone surviving. I’m of course biased, but I don’t see myself being replaced by a robot before I retire. Hopefully I am right!
Nailed it with the Sully scenario.
I recently retired from an organisation that looked at AI as the solution to all of our ‘big data’ problems. Once I started researching how AI (in its current form) actually works, especially when factoring in (potentially deliberate) poisoning of the datasets used to ‘train’ them, I realised it wasn’t the panacea we were looking for. Sure, it could reduce, but not eliminate, the human workload.
IMHO we are years, if not decades (if ever) away from the Hollywood vision of AI.
Well said. The point isn’t so much that the AI can or cannot do what it’s trained to do perfectly, but rather who trains the AI, and do they do a perfect job?
Abnormal vibration, a spike in fuel usage due to increased drag, perhaps abnormal control surface deflection in straight flight, etc.
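Just to make that concrete, here’s a rough Python sketch of how indicators like those could be flagged: a simple z-score check against a rolling baseline. The class name, window size, and thresholds are all invented for illustration, not taken from any real avionics system.

```python
# Hypothetical sketch: flag telemetry anomalies (vibration, fuel flow,
# control-surface deflection) by comparing each new sample against a
# rolling baseline of recent "normal" readings.

from collections import deque
from statistics import mean, stdev

class AnomalyMonitor:
    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)  # recent baseline samples
        self.threshold = threshold          # z-score alarm limit (sigmas)

    def update(self, sample):
        """Return True if the sample deviates beyond threshold sigmas."""
        if len(self.window) >= 10:          # need a baseline first
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(sample - mu) / sigma > self.threshold:
                return True                 # anomalous: keep out of baseline
        self.window.append(sample)
        return False

monitor = AnomalyMonitor()
for v in [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0]:
    monitor.update(v)          # establish a fuel-flow baseline
print(monitor.update(5.0))     # True: spike well outside normal scatter
print(monitor.update(1.02))    # False: within normal scatter
```

The point of the rolling window is that the baseline adapts to slow, legitimate drift (weight burn-off, altitude changes) while still tripping on a sudden excursion.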
To all of you comparing autonomous cars to planes flying autonomously in controlled airspace: you are missing the point. There are no sidewalks with kids and elderly pedestrians next to a runway. The problems for autonomously controlled planes are completely different.
The number of RAs (TCAS resolution advisories) I have had to respond to over the years, due to conflicts with VFR traffic while I was IFR, is enough to suggest otherwise. Controlled airspace isn’t as sterile and predictable as you suggest.
I didn’t suggest it is sterile, but you don’t have to react in a split second to objects appearing and intersecting with your trajectory like in road traffic. The amount of clutter an autonomous car has to deal with, compared to a plane, is insane, which makes classifying obstacles much harder. Tracking airborne obstacles is a solved problem.
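On the “solved problem” point: tracking a radar- or ADS-B-visible target is textbook state estimation. Here’s a minimal 1-D constant-velocity Kalman filter in Python as a sketch of the idea; every number is made up for illustration, and real traffic-tracking systems are of course far more involved.

```python
# Minimal 1-D constant-velocity Kalman filter: fuse noisy position fixes
# of an airborne target into a smooth position/velocity estimate.

import numpy as np

dt = 1.0                                  # seconds between measurements
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition (pos, vel)
H = np.array([[1.0, 0.0]])                # we only measure position
Q = np.eye(2) * 0.01                      # process noise
R = np.array([[25.0]])                    # measurement noise (~5 m sigma)

x = np.array([[0.0], [0.0]])              # initial state: pos 0, vel 0
P = np.eye(2) * 100.0                     # large initial uncertainty

# Target actually moves at ~50 m/s; these are noisy position fixes.
for z in [52.0, 97.0, 151.0, 203.0, 249.0]:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = np.array([[z]]) - H @ x           # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(f"estimated velocity: {x[1, 0]:.1f} m/s")  # converges toward ~50
```

Even though the filter never measures velocity directly, a few position fixes are enough for it to infer the target’s speed, which is why projecting airborne trajectories is considered tractable.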
Considering the sensor suite required to replace two humans, it will have to be quite advanced. Knowing how often technology fails on aircraft, compared to how rarely the human does, I predict dispatch reliability will be a problem. This will increase cost to the point where human pilots, who get cheaper every year, will be the preferred option. And cost dictates safety, unfortunately…
Maybe?
We’ll see.
I love the fact that no stone is left unturned in the search for ways to enhance safety, even though I may not see its application immediately.
Things aren’t automatically hard for machines just because they are hard for humans and vice versa.
The human body has undergone several million years of evolution up to this point. Designing machines that are as good as humans at doing things that humans are good at is incredibly hard. The human body, however, hasn’t evolved to pilot airplanes.
Once you step out of the constraints of what the human organism has primarily evolved to do, tasks that seem impossible merely become hard. That’s exactly what @Freak meant with one of his previous posts.
There is no doubt that a machine can react faster than any human. It is how it reacts that is potentially a problem.
To go back to what @PaulRix highlighted with his Sully scenario, sometimes the best solution to a problem requires out-of-the-box thinking, and Machine Learning (I personally hate the term AI as it is currently generally used) by definition can’t think outside the box.
When things go tits up, and they will, who does the machine ‘choose’ to save? The passengers in the vehicle? The people on the ground, such as kids and elderly pedestrians?
When a human makes that decision, we then have someone to blame or absolve.
When a machine makes the decision, whom do we blame? The machine? The software engineers who programmed it? The company that made it? The politicians who legislated to allow autonomous vehicles? This is a can of worms that our current legal systems and society aren’t equipped to handle, which is why I stand by my initial statement that completely autonomous vehicles of any kind are a long way off.
I am really loving this thread. It is great to have a robust debate about where technology is taking us.
Ditto, it’s moronic but unfortunately has found its way into colloquial language.
How is that any more complex than the decision a human pilot has to make? Ideally it would choose the outcome where the fewest people are killed. Making that part of a cost function is admittedly complicated, but not impossible.
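For what it’s worth, here’s a toy Python sketch of how “fewest expected casualties” could enter a cost function. Every option name, probability, and casualty figure below is invented purely for illustration; it is a thought experiment, not a real decision model.

```python
# Toy decision sketch: score each emergency option by expected fatalities
# and pick the minimum. All numbers are invented for illustration only.

options = [
    # (name, probability of losing the airframe, expected deaths if lost)
    ("return_to_LaGuardia", 0.40, 205),  # 155 aboard + ~50 on the ground
    ("ditch_in_Hudson",     1.00, 8),    # airframe lost, ditching survivable
]

def expected_cost(option):
    """Expected fatalities = P(loss) * deaths given loss."""
    _name, p_loss, deaths_if_lost = option
    return p_loss * deaths_if_lost

best = min(options, key=expected_cost)
print(best[0])  # ditch_in_Hudson: 1.0 * 8 = 8 beats 0.4 * 205 = 82
```

The hard part, as noted above, isn’t the arithmetic; it’s producing honest probabilities and casualty estimates in real time, which is where the real engineering (and ethical) difficulty lives.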
This is no different from problems we face already. As it is right now, when a plane crashes, the pilot is not always responsible. Sometimes it is the airline, sometimes it is the plane manufacturer, sometimes it’s the bean-counter executives, and sometimes it is force majeure. I don’t see how having a pilot or not substantially changes this. At a hypothetical point in time when autonomous planes are safer than those piloted by humans, the bigger ethical dilemma would be not using them just because there’s one less person to put blame on.
The way this is handled in a democracy is that an investigation will determine whether all parties involved have acted to the best of their ability as is dictated by law.