Will a robot steal either your or your child's pilot job?

IMHO it is infinitely more complex. Machines are faster, stronger and less subject to fatigue, but they can only do what they are programmed to do, i.e. they can’t ‘think’ as well as we do. Which is why I like the Sully scenario. Imagine the meeting to decide the programming parameters for the exact circumstances he faced. It would have been by the book (turn back to LaGuardia). Even if somebody had said “how about ditching in the Hudson”, the majority response would have been “That’s crazy talk”.

Those passengers are lucky they had Sully as the pilot. I reckon at least 90% of pilots would have done it by the book. But 100% of machines would have.

Agreed, at the end of the day, in the case of an autonomous vehicle it ‘legally’ shouldn’t be any different. But it will take a trial, and AFAIK this is untested for a completely autonomous vehicle.

100% agree. I’m not saying it will never happen, just that we aren’t there yet and don’t think as a society we will be for some time yet.

I’m sorry but I don’t really agree with that. In an ideal world yes, in reality??? Money talks and it will be some poor schmuck who will wear the blame and the root problem will continue.

Call me a cynic, but in my country we have a saying: ‘Politicians will never ask for a Royal Commission unless they already know what the outcome will be’.

OK, I will try to explain a bit more what I meant by my question about human senses being optimal sensors.

@sobek already summarized a lot of what I thought, but I want to explain what I meant.

For human eyes: they sure help us see a lot of things we would miss otherwise, but there are many things our eyes can’t see. We can’t see other aircraft when they’re in clouds, we can’t see the temperature of most things unless they’re near melting…
We can’t even see on half the globe at any given time (night).

I think the infrared wavelengths are a lot more sensible to use for an airplane’s “eyes”. They don’t face the constraints I mention above. The disadvantage is worse resolution, but for detection at range we have plenty of other solutions, such as radar.

For ears: there is not much to hear from outside the airplane. We mainly hear vibrations of the airplane itself. It would probably be useful to have a few vibration sensors at different places to be able to localize vibrations. But this only tells you where a problem is, not what it is.

For integrity sensing, perhaps a network of sensing lines across the panels is best suited. It’s quite simple and shows an open contact immediately when a panel breaks, comes loose, or is struck by an object.

It might not even be necessary to know that something hit the right engine. Yes, that is what humans see and respond to, but if you are a very fast thinker, everything you care about at that moment is already instrumented: the engine parameters will quickly tell the story of its demise, the flight instruments will tell you how much corrective input is needed, the fuel and oil gauges will show almost instantly if something is leaking (at least they will if you look very, very carefully, which is trivial for a machine), and there are already sensors to tell you whether the gear extends properly or not, because we can’t see the gear either.

Seeing is only useful to a pilot because that is what human brains are really good at. If we had evolved to respond very accurately and quickly to electrical signals, we would plug the pilot into the airplane’s sensor suite and there would be no windows. But that is exactly what machines are good at, compared to us.
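To make the sensing-line idea a bit more concrete, here’s a minimal sketch (Python, purely illustrative): each panel carries a continuity loop, and an open circuit immediately names the panel that broke or took a strike. The panel names and the read_continuity() interface are made up.

```python
# Toy sketch of panel-integrity sensing via continuity loops.
# Panel names and read_continuity() are hypothetical placeholders.

PANELS = ["left wing LE", "right wing LE", "right engine cowl", "tail cone"]

def read_continuity(panel):
    # Stand-in for an actual continuity measurement on that panel's loop.
    # Here we simulate a strike that opened the right engine cowl's loop.
    return panel != "right engine cowl"

def broken_panels():
    """Return every panel whose sensing loop reads open (broken, loose or struck)."""
    return [p for p in PANELS if not read_continuity(p)]

print(broken_panels())  # ['right engine cowl']
```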

Specifically for the oil/fuel leak that the pilot could only spot as a geyser: I think the machine should easily be able to detect the tiny, unexpected pressure drop that the human can’t see. It could perhaps also localize an abnormal vibration, or see the geyser, but deducing what is happening from those sensors is much harder than just sensing a small, unexpected pressure change.
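As a rough illustration of that point, something like the sketch below would compare recent readings against a longer-term baseline and flag a tiny, sustained sag. All the numbers and the sample format are invented; a real system would filter, cross-check other parameters and use certified sensors.

```python
# Minimal sketch of leak detection from a small, sustained pressure drop.
# Thresholds, units and the sample source are hypothetical.

def detect_leak(pressure_samples, window=50, drop_threshold=0.02):
    """Flag a leak if the recent average sits persistently below the long-term baseline.

    pressure_samples: readings in bar, oldest first.
    drop_threshold: fractional drop considered abnormal (2% here, picked arbitrarily).
    """
    if len(pressure_samples) < 2 * window:
        return False  # not enough data to establish a baseline

    baseline = sum(pressure_samples[:-window]) / (len(pressure_samples) - window)
    recent = sum(pressure_samples[-window:]) / window

    # A drop far too small and slow for a human scanning the gauge to notice.
    return (baseline - recent) / baseline > drop_threshold

# Example: steady 5.00 bar oil pressure, then a slow ~4% sag over the last 100 samples.
samples = [5.0] * 200 + [5.0 - 0.002 * i for i in range(100)]
print(detect_leak(samples))  # True
```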

Sorry, very long rant. Anyway, I hope this can help explain some of what @sobek means by the difference in difficulty of tasks for humans and machines.

1 Like

All true.
What I was trying to get across in my last post is the cost of maintaining such a system. The technical downtime of airframes already costs a fortune. A sick pilot is easier to replace, and cheaper too. This is what I think will be the highest hurdle: economy.

2 Likes

That is indeed what it will all come down to.
I think currently, airplanes already have an incredible number of sensors and systems that need to be maintained. Many of those can be removed as well. Think of controls, displays, buttons… cupholders :stuck_out_tongue:

If the systems necessary for removing the pilot from the plane (initially not from the loop, as in the video, with a multi-vehicle operator) can be added without too much net additional maintenance, it may not be that hard to break even against the reduced cost of fewer pilots.

1 Like

Don’t you dare go there! :wink:

2 Likes

There are some lines on the London Underground that are now “automatic”, where all the driver does is press a button to open the doors… and be there for emergencies… same with the Docklands Light Railway; there is an “operator” (they call them operators so they don’t have to pay train driver wages😀) on those with some controls… but I think that is as far as automation will ever get… you would always need a human in the loop (in my humble opinion), but now the question is: does that human actually have to be in the train/aircraft? Or can they be flown/driven from a drone-like station… imagine, if you would, a cubicle with a pilot sat in there and all he does is take off and land different planes all day… :smiley::smiley::smiley:

1 Like

There is no “book” for situations like this. Pilots are not nearly so micro-managed. Free thinking is part of our training. I just helped beta test the next CQ (continuing qual) sim period for my airline that will go live for the rest of the pilot group in April. The emergency is not complicated but terrain is an issue. The crew may opt to return to the departure airport but doing so would involve flying an unpressurized airplane at over 10000 feet to clear terrain and then an overweight landing in marginal winter IFR conditions. The crew can also press ahead and land at LAAA, LBBB, LCCC, etc. Fuel jettisoning may also be considered. In the scenario there is potential for great decisions and less great decisions. But so long as the crew lands safely while applying solid CRM and TEM skills, even the less great choices will pass the scenario. The “book” only asks that they follow SOP and that the Captain uses all available resources when arriving at a solution.

As to comparing the operating environment of cars versus planes, very true. However, while planes don’t have road construction and snow-covered highways to contend with, they do have clouds, GA, birds and, now, drones. Often we visually avoid little puffies that experience tells us will pack a punch. The cloud may be cute and certainly not yet generating rain, but it is clearly convective and flying through means someone could get hurt. We make those sorts of low level decisions several times a flight. We work with the crew to have the service completed and them in their jumpseats before hitting the worst of the bumps. Collaboration is a strength of the human system that must be considered when deciding which course (automated or human) is safest going forward.

Another roadblock is one of trust. Humans expect human error. The terror of being an occupant of a car wreck with a flawed human driver is much different than the terror of an automated one with no wheel. “2001” was terrifying for a reason.

2 Likes

They do, and they fail.
I would think that autonomous aircraft will have to have even more of them, both to replace the human and to provide redundancy.

The way automation has taken over many tasks has indeed made the pilot the problem. You may have a smart system, but since it mostly functions well, the pilot becomes dependent upon it and won’t be able to step up to the task when the smart system fails. There are many pitfalls…

2 Likes

@Troll nailed it. THIS is how automation will take over in aviation, by making the human so dumb and dependent that he/she becomes more hazard than help.

3 Likes

And, to elaborate a bit on what I called software engineers setting the limits, or cost optimisation, as @sobek corrected me.

Take the rather simple case of total loss of propulsive power.
It’s a single engine aircraft with one passenger.

Suddenly the engine quits. Say the AI identifies it as a breakdown and determines that the engine can’t be restarted.

The options are few, to keep it simple so even I can understand :wink:

  1. Crash. Cost equals one aircraft and one passenger.
  2. Off-field landing. Cost between one aircraft and the cost of option 1.
  3. Landing on a road. Cost between 0 and the cost of option 1, plus 5 cars and their occupants.
  4. Turning back to the airfield. Cost between 0 and the cost of option 1, plus 30 bystanders and 2 houses.

The only option the software engineer can be 100% certain the machine can accomplish is option 1. Options 2-4 depend on uncertain factors, which makes the outcome somewhere between a best and a worst case scenario.

Somewhere along the development of the system, someone must make the decisions on the probability of the outcome of the options and the value of the cost parameters. Sure, the machine can probably do some of this on the fly, so to speak, but even a machine can’t have all the information it needs to calculate a perfect outcome.

Say the machine reasons that it’s safe to at least try option 2, as there is a reasonable chance of the outcome cost being less than option 1.
Option 4 is out, since the machine quickly deduces that it won’t make it back.
Will it try option 3…? Well, there is a chance of a better outcome than option 2, but there is also a chance of a much worse outcome.
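Just to make the “someone has to pick the numbers” point concrete, here is a toy expected-cost comparison of the four options. Every probability and cost weight is invented for illustration; choosing them is exactly the human judgement I’m talking about.

```python
# Hypothetical expected-cost comparison for the four options above.
# Probabilities and cost weights are pure invention ("damage units");
# picking them is the human decision being described.

options = {
    "1: crash":             [(1.00, 100)],               # aircraft + passenger lost
    "2: off-field landing": [(0.70, 10), (0.30, 100)],   # usually survivable, sometimes as bad as a crash
    "3: road landing":      [(0.60, 0),  (0.40, 250)],   # maybe perfect, maybe cars and occupants too
    "4: turn back":         [(0.10, 0),  (0.90, 400)],   # rarely makes it, else bystanders and houses
}

for name, outcomes in options.items():
    expected = sum(p * cost for p, cost in outcomes)
    print(f"{name:22s} expected cost = {expected:6.1f}")

# With these made-up numbers the machine picks option 2 (expected cost 37),
# even though option 3 offers the best possible outcome for the passenger.
```

Change any of those made-up weights and the “right” answer changes with it, which is the whole point.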

And here’s where I suspect we all assume that a cold, calculating machine will say “screw the passenger, I won’t risk killing 5 other people” and go for option 2.
If I were the passenger, I’d say screw the 5 people in their cars! I want to roll the dice and take my chances on there being no cars, because my life is more important to me than five other people’s. It may be selfish, but that’s how people usually reason.
I fear that a machine will go for the easier solution, because of a lack of predictive data.
I’d want a pilot driving me who shares the same values as I do. Someone who will fight for me, even if the odds are bad.
This is why I wouldn’t want to fly with pilots with a hero complex either.

But, in any case, there will have to be humans who train/teach/program the machine in how to value life and property, won’t there? The legal ramifications here are not trivial either.

The philosophical aspects of this are very interesting and will be a huge debate.

5 Likes

I believe that the machine will be given “rights” that will allow it to make these decisions without fear of massive liability. “Machine rights” will, of course, simply be cover for corporate protection. But as the machines get better at providing therapy, writing three-act plays in the style of Pushkin and caring for our aging parents so that we don’t have to, we humans will be marketed into buying in on “machine rights” as the most open-minded and humane relationship with our AI partners (masters?). I hope to be dead and dust when all this happens. I further hope that my daughter will find this paradigm one that brings her happiness in a world in which her mind is no longer dominant.

2 Likes

We are a self-eradicating species, aren’t we? :smile:

2 Likes

Reading what you described gave me a mental image of every future battle-against-the-robots scene in Terminator.

1 Like

While the legal aspects are in some regards challenging, we are not talking about autonomous entities here in the sense in which we regard ourselves as autonomous.

Therefore lamenting about humans no longer being dominant is about the same as being mad at a car because it drives faster than you can run.

1 Like

It is true that statistical models are never perfect, but they can be good enough. The machine is trained to choose the outcome that minimizes the cost function over a stupendous number of simulations. During the training, it will try a lot of different approaches, so it will “know”, statistically, what yields the best outcome. Running its margins too thin would yield more catastrophic outcomes and thus increase the cost function. It all hinges on the simulation being complex enough that the machine doesn’t draw conclusions that do not hold in reality. This has to be asserted through rigorous testing.
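To sketch what that can look like in code (a deliberately silly toy, with an invented simulate() standing in for a real, far richer simulation): thin margins occasionally end in catastrophe, excessive conservatism carries its own cost, and “training” simply keeps whatever margin gives the lowest mean cost.

```python
import random

# Toy illustration of training by minimizing a cost function over many simulations.
# simulate() and all numbers are invented stand-ins for a real simulation environment.

def simulate(margin):
    """Cost of one simulated emergency, given how much safety margin the policy keeps."""
    if random.random() < max(0.0, 0.3 - margin):  # thinner margin -> more disasters
        return 100.0                              # catastrophic outcome
    return 10.0 * margin                          # otherwise cost grows with conservatism

def mean_cost(margin, runs=10_000):
    return sum(simulate(margin) for _ in range(runs)) / runs

# Crude "training": evaluate candidate margins, keep the one with the lowest mean cost.
candidates = [0.0, 0.1, 0.2, 0.3, 0.4]
best = min(candidates, key=mean_cost)
print(best, mean_cost(best))  # ~0.3 comes out cheapest with these made-up numbers
```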

There are never only 4 options. To the machine, there are as many options as there are points in (discretized) space. Depending on the current parameters (thrust, energy state, drag, weight, etc.), the machine decides which of the points in space are likely to yield the optimal outcome. It can generate such an evaluation at a very high frequency. If at any point in time a parameter changes, the next evaluation will encompass that and may favor other points in space, or not, once again depending on what is optimal given the current state of information available.
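And, equally crudely, a sketch of the “points in discretized space” idea: each evaluation cycle, score every reachable grid point and aim at the cheapest one. The grid, the reachability check and the cost terms are all invented for illustration.

```python
# Toy sketch: pick the reachable point in a coarse grid that minimizes a cost estimate.
# Re-run every cycle; if thrust, winds or energy state change, the answer may change too.

def distance(point, state):
    return ((point[0] - state["x"])**2 + (point[1] - state["y"])**2) ** 0.5

def reachable(point, state):
    """Very crude glide-range check: can current energy be traded for this point?"""
    return distance(point, state) <= state["altitude"] * state["glide_ratio"]

def cost(point, state):
    """Lower is better: heavily penalize points not flagged as safe, mildly penalize distance."""
    hazard = 0.0 if point in state["safe_points"] else 1000.0
    return hazard + 0.01 * distance(point, state)

def best_aim_point(grid, state):
    candidates = [p for p in grid if reachable(p, state)]
    return min(candidates, key=lambda p: cost(p, state)) if candidates else None

state = {"x": 0.0, "y": 0.0, "altitude": 1000.0, "glide_ratio": 10.0,
         "safe_points": {(5000.0, 0.0)}}
grid = [(x * 1000.0, y * 1000.0) for x in range(-10, 11) for y in range(-10, 11)]
print(best_aim_point(grid, state))  # (5000.0, 0.0): the safe field within glide range
```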

2 Likes

A colleague at work and I had a similar conversation about self-driving cars… and we both came to the same conclusion: after 30+ years of driving, neither of us would be able to allow a car to park itself with either of us in it… probably partly a control issue on my part, and also something to do with having computers as a hobby and knowing how often they can have a “moment of instability” :grinning:

2 Likes

Yes, but the best outcome for whom?
The point I was trying to make was that a machine will probably treat everybody the same, or give every life the same value. And it can be argued that this is for the greater good. However, people tend to value their own life over others’ and want somebody in the driver’s seat who wants to come home to their family too. It’s purely a psychological effect, but one I think will be hard to overcome.

No, I kept the number of options low, to illustrate my point.

And I still can’t find a way around the fact that a human, or group of humans, will need to decide how the machine shall value its options.
I mean, we’re pretty far from building a machine that is an all-knowing, infallible deity, if you know what I mean.

2 Likes

That’s a disturbing train of thought, Troll. There are very good reasons why democratic societies do not, at least in theory, value the life of one person above that of another.

You mean it’s better if a pair of pilots have to do it in the heat of the moment? I’m not sure I can follow.

1 Like

I promise, I am not trying to revive an uncomfortable argument among virtual friends. This one has run its course about as far as we can take it. But we just got a Roomba! And I’ll be damned if my job as a downstairs* vacuum operator isn’t ended permanently. Thresholds, shoes, grates, chairs, THE CAT… nothing fazes it.

*A stairclimbing version would be awesome! (And frightening as hell)

(Oops…just saw that this thread was already revived!)

4 Likes

Disturbing or not, if we truly valued other people’s lives as much as we value our own, there would not be wars or murders. I wish it were so, but we would all lose a lot of sleep thinking about all the sick and starving people in this world if we actually considered their well-being to be as important as our own. In fact, we’d be completely stressed out. But humans don’t value other human lives like they value their own, I’m sorry to say. We are just like other predators, in that way.

A machine would not distinguish between passengers onboard and humans on the ground, would it? How should developers teach this machine to take risks with people’s lives? If you tell it to save the passengers at all costs, it would land in a town square full of people, wouldn’t it? Nobody wants to be the guy who taught it to do that. The machine would have to be taught to minimize damage, which could mean ditching in a lake, if that means it’s only risking the passengers’ lives.
This is why people generally feel safer when there’s a human controlling the vehicle that they are travelling in, and when that human suffers the same fate as the passengers.

I know this is very abstract and taken to the extreme. The point I’m trying to make is that people generally mistrust machines to do the right thing by them.

There’s no way around the fact that humans will have to teach the machine the values they want it to have, i.e. there will have to be human involvement in the development and training of the machine.
I seriously doubt that these humans will be able to predict every scenario that the fast-calculating machine can come up with. There will be solutions that nobody thought of, for good or bad. This will have to be regulated in some way.

So, you end up with a machine doing what humans have taught it to do, by trying to predict what can happen, just like the two pilots who are trained to procedurally cope with the same situations, in addition to using their intellect and experience.
Sure, if the human machine devs do their job well, the machine may very well be able to outperform the pilots, but getting passengers to trust it is another matter.
And you will have to cope with the demand for redundancy that the aviation business is loaded with.

Anyway, @sobek, we view this from different standpoints. These are just my thoughts, based on my schooling and experience. Your views are different. That’s perfectly normal. I find it very interesting to hear what other people think about a lot of subjects. I’m not trying to convince anybody to see things my way and I don’t care if they do or don’t.
Also known as my 2 cents :wink:
Take it for what it’s worth.
IMO it’s not disturbing in any way, shape or form.

4 Likes