Will a robot steal either your or your child's pilot job?

Technically, this is all possible today. As for the actual implementation, how robots come to govern transit in our lives, well, that’s a bit further off.

Obviously, automation is already a massive factor in any modern vehicle: information is hidden from the user, and computers decide how to act on the inputs. This is true for cars, trains and airplanes alike.

People are really the limiting factor these days. Harbours are largely automated and controlled by robots; mooring ships have a lot of cameras and computers controlling the docking. The same goes for aircraft in limited visibility: they happily land themselves until the pilot is ready to turn onto the taxiway. Trains have speed limits and safety distances governed by signals, sometimes dynamic (shifting control blocks for high-speed travel), sometimes static (the good ol’ block signal), and timetables are configured by a computer with only a human giving the final OK.

Oh, and don’t forget the whole traffic grid for cars: the lights, dynamic lane closures and openings, and the re-routing of traffic. That’s all automated already.

So yeah, all the blocks are in place, there’s no major technological hurdle to be overcome, the biggest issue is humans :wink:

Airlines also love having a very convenient scapegoat in the event of a Boeing going lawndart somewhere in Montana.

Being able to blame a single flight crew or even single pilot is preferable to blaming a gigantic and expensive aircraft management system that controls all flights.

Also, if something does go wrong and an airliner does go down, how many other aircraft in the air are SOL as well?

1 Like

Autonomy depends on a solid GPS constellation and a stable network within which the robots coordinate their moves. Human pilots (and human drivers) need neither. A big reason we still maintain U-2s is that they can effectively gather data when the fancy stuff has been taken out by your adversary and you are left with a dude with a map. Pilots are expensive. And they are needy. Robots are cheap, but they are even needier. They want a stable stream of information from on-board sensors and the network. If something happens to that stream, they are dependent on all the AI learning and human programming available up to that moment. To misquote Han, I’ve seen some strange stuff up there. Not every storm shows up on radar, and not every mountain wave is accurately forecast.

The big promoters of autonomy enjoy pointing out the worst in humanity as evidence of a societal imperative to keep humans away from anything complicated. I think the brain is way undervalued. And that brain really needs to be in the vehicle, not in some dark cavern with screens. It needs to see and hear and feel and FEAR.

I saw an account recently of a situation in the ’80s that brought us as close to nuclear war between the USA and USSR as we had ever been up to that point, arguably excepting the Cuban Missile Crisis. The only thing that prevented it was the decision by a Soviet officer to disobey protocol, which required a launch, because he sensed that something was amiss. I can imagine that there would be those who so fear the human finger on the button that they would prefer to see flawed human emotion taken out of the loop. That we are all still here is an indication that human-controlled systems work… mostly.

When pilots do eventually get sent home it will be partially because of our success. We have been so effective at keeping the fatality rate down to a rounding error that the public already thinks we do nothing. The airplane I fly isn’t any more automated today than were versions built in the early eighties. Technologies like GPWS and TCAS have helped tremendously. But what has saved most lives has been improved communication and interaction between flight crews.

7 Likes

Great post, and I agree, the adoption of CRM has had a huge impact on flight safety. I still think that, in the grand scheme of things, the crew is not terribly expensive, even though on the surface the costs might seem large.

Mother of all necros, here we are 5 years later.

Just saw this vid and was reminded of this thread.

4 Likes

It’s cool. Maybe…probably, this will be the way of the world in 20 years. But I see it as two systems:

(1) AI. Unable to work efficiently in a human-controlled environment without another human at a terminal somewhere (a human here or a human there: if there is a human, what’s the point?). Weather is more than just radar: mountain wave, wind gradients, ridge sink and clear air turbulence, to name a few, are not easily managed by AI. Airborne medical emergencies are not easily recognized and managed by AI. Passengers behaving badly are not easily managed by AI. I can think of 20 more. You might be able to think of 100 more. The takeaway is that AI is challenged working within a human environment with human customers aboard. The advantage of AI (so long as there is no human at a terminal) is that it is impervious to terrorism. (Actually, is it though?) It doesn’t need to rest. It doesn’t need to be paid. (Well, Jeff Bezos might disagree.) It is far more precise. It does not require recurrent training. It doesn’t “have a bad day”. It doesn’t drink, snort or pop painkillers. It doesn’t commute or call in sick. So, yeah, it’s a good system and will only get better.

What gets ignored, though, is that (2) pilots (human flight) are also a system. And even though they are legacy, they are not stagnant. Pilots are improving too. This is why we see far fewer fatal accidents today than we did 20+ years ago. Technology played a role, but mostly the trend is due to a change in training and pilot culture. Pilots are slow but clever. They react comfortably and, usually, appropriately to the less knowable weather I mentioned above. Pilots are slow and occasionally inept. They do drink… sleep poorly… consume mind-altering chemicals… get divorced and so on. But they react to stress and never-before-seen challenges in clever ways. Sully is the obvious example. The nerds are agog because they smell fortune just over the horizon. Those fortunes blind them to the risks, or gag them from admitting the risks. In the video a “pilot” executive talks about how safe he will feel in this system because pilots all know the terror (I forget the word he used) of flight when things go wrong. But things go wrong whether a pilot is at the helm or not. When the merde hits the fan, there is not a pilot on this planet who’d rather be a passenger to an automated system instead of a passenger in a human system or, better, the controlling human himself. Any “pilot” who says otherwise either isn’t really a pilot, or shouldn’t be one.

So there are two systems. Both work. One has limits which history has exposed and that we are all familiar with. The other has limitations we can only imagine. It comes down to money and safety. I believe that AI aviation will always be cheaper and less safe. The nerds promoting AI systems will say that they are cheaper and safer. Time will tell. I can only say that I have seen a lot in my 35 years behind the yoke. And I have made every imaginable human error. But I recovered, or my human partner recovered, and the people in the back were none the wiser. Alongside all those errors were hundreds of decisions and inputs that increased safety.

To those who say Eric (smokin’ hole) is old and too technologically ignorant to understand this brave new world, I say, “You are certainly right!” But let me ask this (at least of those who are parents): would you trust a robust and “proven” AI system to feed, care for, bathe and educate your children for a week with zero parental contact? Even forgetting the mental anguish and just talking physical well-being: if no, why not? Could it be because you intuitively see the pitfalls almost immediately? If yes, great! But what flawed system exactly is that AI parent replacing? The metaphor may seem a stretch, but that is truly how I see robot aviation. It’s a non-solution to a problem that only exists for the people paying the bills. And safety is never the bill-payer’s primary concern.

5 Likes

Why use autopilot when you can fly the plane yourself? Why use GPS when inertial navigation was enough to get from A to B? Because it made aviation safer, and make no mistake, at some point in the not-so-distant future, AI will be much safer at operating airplanes than humans are*. Does that mean that humans did a bad job piloting airplanes? Certainly not.

*in controlled airspace, mind you

IIRC they could have made it back to Teterboro had they known immediately that, yes, this was indeed a dual flameout and, no, none of those engines would relight. That plane probably had instrumentation enough to determine that both engines were goners; the problem is, a human brain could not deal with the amount of information provided by said instrumentation. It would take a human hours if not days to process. A sufficiently trained AI could have made that decision in seconds or even less. You can’t rationalize away that non-human actors have tools at their disposal that transcend human capability by such an amount that it would give them much more time and, effectively, safety margin. AI doesn’t have to deal with stress; it just acts. We’re not there yet, but we’re close.
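To put a number on that speed argument, here’s a toy glide-feasibility check of the sort such a system could run the instant both engines quit. Every figure in it is invented for illustration (the glide ratio, the distances, the margin); none of it is real A320 performance data or the actual US1549 geometry:

```python
# Toy glide-feasibility check. All numbers are invented for illustration;
# a real system would use certified performance data, winds, energy state,
# turn radii, and so on.

FT_PER_NM = 6076.12  # feet per nautical mile

def glide_range_nm(altitude_ft: float, glide_ratio: float) -> float:
    """Still-air glide distance from a given altitude."""
    return altitude_ft * glide_ratio / FT_PER_NM

def reachable(dist_nm: float, altitude_ft: float,
              glide_ratio: float, margin: float = 0.8) -> bool:
    """Is the field within glide range, with a safety margin held back?"""
    return dist_nm <= glide_range_nm(altitude_ft, glide_ratio) * margin

# Hypothetical scenario: dual flameout at 3,000 ft with assumed distances.
altitude = 3000.0   # ft, assumed
glide = 17.0        # assumed clean-configuration lift-to-drag ratio
options = {"TEB": 9.0, "LGA": 7.0, "Hudson": 1.5}  # nm, all invented

for name, dist in options.items():
    print(f"{name}: reachable = {reachable(dist, altitude, glide)}")
```

The made-up numbers aren’t the point; the point is that a machine can re-run this evaluation against every candidate field several times a second while a human crew is still confirming what failed.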

Nerds vs jocks, really? This probably hits close to home for you, I understand that, but you can do better.

1 Like

Thing is, what would the AI base such a decision on? How do you train that AI to analyze what other types of damage were sustained as well? Damage it may not see. Did the birds dent the cowlings? What’s the actual glide performance of a decade-old airliner? A dual engine flameout is one thing. Consider another ‘black swan’ accident, Qantas 32, where numerous systems were damaged. How do you precondition AI to deal with that?
Somewhere in that code, there will probably be a few lines about saving lives. When will the AI draw the line and say the best scenario for the fewest casualties is crashing this aircraft in an unpopulated area? How much risk would a human software engineer allow the AI to take? My point is, humans are still controlling the AI by setting its limits. You kind of just move the decisions from the cockpit to a cubicle, where someone must consider every conceivable option before it occurs. Thinking of artificial intelligence as an entity that makes cold decisions based on facts only makes my skin crawl. You know a human pilot, on board the aircraft, will fight the problem like his or her life depended on it, because it does. Who would rely on an AI that may prioritize someone else’s life over the passengers’, because it’s the only option that guarantees the least amount of casualties?
And thinking of it that way, maybe AI would’ve made the same decision as Sully and Skiles did. Too many unknown factors; the Hudson is where we risk the fewest lives.
Autopilots, GPS, ACAS, EGPWS etc. are really good at what they do, until they’re not. They all have their limitations.
I suspect any AI system will have to be made by humans, and as such have inherent limitations too…
Those are my thoughts on the matter, at least. Use with discretion :wink:

3 Likes

As do humans. Remember those 737 MAXs that crashed because of a “feature” that Boeing built in to save airlines the trouble of sending their pilots into type conversion? You can’t have your cake and eat it.

1 Like

Exactly. That goes for anything created by humans too :slight_smile:

A prime example of bad engineering decisions being made for commercial reasons. :+1:

2 Likes

@sobek, I won’t pick a fight but NYC is my back yard. TEB is just a couple of miles away. I stood on the Hudson suffering from a cold and watched Sully’s Airbus float down the river*. I’ve done that departure out of LGA dozens of times. Making TEB was an impossibility.

*Two nights later my buddy and I were at the Weehawken ferry terminal bar and helped make sure that the captain of the first ferry to meet the ’bus never bought a beer. At one point we were chatting with him as his image splashed across CNN on the TV above the bar.

3 Likes

It either has sensors that inform it of the state of the airplane or it will react to the manifestation of the damage.

If it has no experience with the exact type of damage, it will resort to the same techniques a human uses: calculating the glide performance on the fly. But it will do so very accurately, while continuously evaluating all its options.

Remarkably similar to a human: you tell it what the expected behavior is, and then you run simulations until it has figured out what to do. Training is a matter of giving it experience with enough varying parameters, not telling it exactly what to do and when.

#nothowAIworks

The software engineer doesn’t tell the AI “if A then B”. He implements a cost function and trains the AI until it has learned to minimize that cost function under a wide range of conditions.
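For anyone curious what that looks like in practice, here’s a minimal sketch: a toy one-dimensional control problem, nothing resembling real avionics. The cost function penalizes leftover error and aggressive control, and a crude random-search loop “trains” a single gain to minimize it over randomized starting conditions. Note that nowhere does the code say “if A then B”; the behavior falls out of minimizing the cost:

```python
# Minimal "define a cost function, train until it's minimized" sketch.
# A toy one-dimensional controller, invented purely for illustration.
import random

def cost(k: float, trials: int = 50) -> float:
    """Average cost of gain k over randomized initial errors."""
    total = 0.0
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)  # random starting error
        for _ in range(20):              # simulate 20 control steps
            x -= k * x                   # controller acts, state responds
            total += x * x               # penalize remaining error
        total += 0.1 * k * k             # penalize overly aggressive control
    return total / trials

# "Training": random search keeps whatever gain lowers the average cost.
best_k = 0.0
best_c = cost(best_k)
for _ in range(200):
    candidate = best_k + random.gauss(0.0, 0.1)
    c = cost(candidate)
    if c < best_c:
        best_k, best_c = candidate, c

print(f"learned gain k = {best_k:.3f}, average cost = {best_c:.2f}")
```

A real system swaps the random search for gradient descent and the single gain for millions of network weights, but the principle is the same: the engineer specifies what “good” costs, not what to do.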

Listen guys, we are simply at a standoff here.

Naturally you guys don’t want to be educated about what pilots can and can’t do by a non-pilot. I sure as hell don’t need to be educated about what AI can and can’t do, having spent almost 8 years of my life working as a nerd data scientist.

My bad for bringing it up again.

1 Like

Don’t feel bad about bringing up the debate or restarting it. I find it very interesting.

I’ve been a professional pilot for 22 years.
I know a thing or two about flight safety too… @smokinhole has been at it even longer.
I’ve seen several engineering solutions that have improved safety, and some that have reduced it. I’ve seen human pilots become so dependent upon some engineering solutions that they can’t cope when those solutions stop functioning.
So I’m sorry if I won’t readily accept what I see as yet another engineering solution. The tech is interesting, but I doubt it will become the be-all, end-all solution to flight safety. In any case, it won’t happen before I retire. And if AI actually ends my career as a pilot, I can always go back to working as an engineer… :wink:

3 Likes


The problem is that an ‘AI’ is only as good as the people responsible for designing it, programming it, training and testing it.

A certain aircraft power plant manufacturer is having trouble building FADEC fuel controllers that don’t have software-related rollbacks.

A few years ago, one of the aircraft I flew had an engine rollback because a software engineer took the shortcut of determining whether the airplane was in the air or on the ground with a simple IF condition, one that boiled down to IF at takeoff thrust for more than 17 seconds THEN airborne, otherwise on ground, instead of tying into one of the squat switches.

That setup was certified, and it worked just fine for many years, until a pilot took off light out of a satellite airport with a low level-off and pulled the power back to maintain less than 200 kts below the Class B. One engine was in airborne mode; the other was a half second short of the threshold, decided it was still on the ground, and rolled back to ground idle when the anti-ice was selected on.
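For illustration, here’s my own guess at what that shortcut might have looked like; a hedged reconstruction, not the actual FADEC code, with the latched “airborne” state assumed from how the failure presented:

```python
# Hypothetical reconstruction of a timer-based air/ground shortcut.
# Invented for illustration; not the actual FADEC source.

THRESHOLD_S = 17.0  # seconds at takeoff thrust before "airborne" is assumed

class EngineFadec:
    def __init__(self) -> None:
        self.time_at_takeoff_thrust = 0.0
        self.airborne = False  # assumed latched once the threshold is crossed

    def tick(self, dt: float, at_takeoff_thrust: bool) -> None:
        if at_takeoff_thrust:
            self.time_at_takeoff_thrust += dt
            if self.time_at_takeoff_thrust > THRESHOLD_S:
                self.airborne = True  # the shortcut: a timer, not a squat switch

    def select_anti_ice(self) -> str:
        # Ground logic "protects" the engine by commanding ground idle.
        return "normal thrust" if self.airborne else "ROLLBACK to ground idle"

# Light takeoff, low level-off: the power comes back a half second too soon
# for one engine's timer (durations invented to match the anecdote).
left, right = EngineFadec(), EngineFadec()
left.tick(17.2, True)    # crossed 17 s at takeoff thrust -> "airborne"
right.tick(16.8, True)   # 0.4 s short -> still thinks it's on the ground

print("Left: ", left.select_anti_ice())   # Left:  normal thrust
print("Right:", right.select_anti_ice())  # Right: ROLLBACK to ground idle
```

Each controller only sees its own timer, so two engines on the same airplane can disagree about whether it’s flying, which is exactly the split a squat switch would have prevented.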

My point is that any system, whether it’s mechanical, hardware, software, an AI, whatever, is only as good as the people who design and produce it. In this case we’re simply shifting the burden from a trained, hopefully experienced professional pilot to a group of software engineers. That may or may not ultimately prove to be a mistake, but either way, let’s not kid ourselves that we can create something approaching perfection by taking the ‘man’ out of the loop. Instead, we’ve created a new loop with a bunch of different ‘men’ in it.

2 Likes

Your insights are fascinating, thank you for sharing. Reading them fills me with both excitement for the future of AI and deep terror and dread for the inevitable errors and unforeseen consequences along the way, much as Michael Crichton felt for most of his life.

That’s often the price of exciting technological progress, as evidenced throughout history.
Take, for instance, the era when Pan Am began operating the 707.

Serious question regarding a self-piloted aircraft:

Will it be able to hear and see? Will it exercise caution based on what it hears and sees, driven by a gut feeling that something is wrong?

I’m thinking about handling situations like structural damage or uncontained engine failures.

2 Likes

Which is exactly why the new system presented here is built around a new kind of operator. It reduces the need for human operators without replacing them. To be clear, as they indicate in the video, this is a very simple procedure-based control algorithm. Not A.I.

Many people in robotics believe this is either an important step in automation or the safest setup imaginable: humans augmented by robots, not replaced.

As @sobek says, machines are just a lot more efficient at doing things once they have learned them. There is a risk that one might do something completely wrong (due to design or test errors made by humans), so human intervention is necessary. For now, at least. For how long, impossible to tell, IMO.

I have no serious experience in AI (one MSc course and a 6-month internship on neural networks and robotics) or actual flying, but I just wanted to point that out. The discussion seems to have drifted away from the video towards full replacement of human pilots again, while the video is about a single operator checking on multiple autonomous vehicles.

2 Likes

I am going to answer your question with another: Are sensors that function like human eyes and ears the best sensors for those purposes?

Well said. As someone who spent quite a bit of my career flying Citations and King Airs single-pilot, I understand completely how automation can augment, supplement and share a pilot’s workload. And I’m all for it, to a degree. An ‘AI’ co-pilot that is basically an R2D2, obviously subordinate to an actual on-board pilot, would be a great thing for aviation, especially for single-pilot ops.

However, I’ve also had to push the red button and downgrade a level of automation. For another example, the Collins FMCs in the Pro Line 21s were out in the fleet for ten years before they figured out why they were turning the wrong direction when joining an approach.

I don’t know, but I think that right now they’re pretty great. I know of a King Air pilot who neglected to check one of his fuel caps and was alerted not by his fuel gauge, but by the geyser of fuel coming out of the fuel port right after takeoff. In flight school a student made the same mistake, except he left the oil cap off on his Cessna, and was made aware when the oil started to creep out of the back of the cowling onto the windscreen, long before any oil pressure gauge would have shown there was a problem.

How does an AI recognize an engine cowling coming apart, or a bird strike? I’m honestly curious what the answer is, or could be.

2 Likes

My fear is that it will be the human who plays the role of R2D2. The automated system does much of the decision-making and all of the flying, while the human serves to provide continuous flight following and planning, basically the same role that a dispatcher serves now at air carriers.

1 Like