Ten years or so ago, back when I was in this weird phase where I found The Economist to be entertaining, I read an article about Europe’s new AI guidelines. Europe’s was the first government to state how it expected these sticky moral subjects to be handled. That’s about all I remember, other than that The Economist, being The Economist, seemed way more concerned about liability than about the societal impact of robots becoming drivers, pilots, coders, doctors, and lawyers.
If I truly cared about this subject, I would probably seek out that original European document, because educated stakeholders were likely involved in its creation. But I don’t care enough to overcome my natural human laziness. I will continue to fly and drive and write and occasionally code, because I enjoy those things and they will likely remain legal for the rest of my life. My 16-year-old daughter absolutely hates driving and is agnostic about flying, so she is properly primed for the future that I fear. Maybe her generation will expect the robot in charge to always sacrifice her and her fellow travelers rather than risk the lives of others. On the other hand, maybe she will be rich enough to have a Titanium Level moral RF chip installed, and her robot will legally protect her at the expense of all the poor suckers out there just rolling the dice.
In the meantime, I’ve got a robot vacuum and a robot coffee maker (Spinn!). Life is good. Everyone else can get stuffed!