What prevents Asimov's robots from locking all humans in padded cells for the humans' protection?











66 votes, 10 favorites














  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.




Since Asimov's robots are already shown as not possessing "human common sense" when applying the laws to their actions, law 1 pretty much forces robots to lock up humans in matrix-style containers, or possibly put them in cryostasis. If they don't, they're allowing humans to come to harm in the future (human accidentally bites his tongue/stubs his toe/gets cancer/whatever) through inaction. Human arguments to the contrary are to be ignored as conflicting with law 1.



Where am I wrong? Are the Laws not what actually guides the robots, instead being something simplified for the robot user manual booklet or something?






























  • 30




    sort of the premise of the "I, Robot" film w/ Will Smith
    – NKCampbell
    Nov 15 at 16:07






  • 4




    Remember also that the three rules are not complete (ignore the existence of the 0th law momentarily); they're a plot device. If the three rules actually worked there wouldn't be any stories. Also, @NKCampbell the movie is crap and the presentation of the 0th law is awful. If you actually examine the events you'll see that the robots hurt people "because...uh...EXPLOSIONS", not via justified use of the 0th law. If you go back and read...I think it was Robots and Empire, the 0th law killed the robot that tried to act on it. His belief allowed him to act and slowed his shutdown, but he still died.
    – Draco18s
    Nov 15 at 18:18








  • 2




    This is an active area of research in 2018 now that robots are becoming more intelligent and it's not simple. 3 principles for creating safer AI | Stuart Russell youtu.be/EBK-a94IFHY
    – chasly from UK
    Nov 15 at 23:43






  • 4




    I won't promote this as an answer since I can't give references, but there is an early short story where one of the many different representations of Multivac runs the whole world economy and other governmental decisions. It starts to make sub-optimal decisions specifically to stop humans from relying on it because it realises that the reliance is weakening the human race.
    – Alchymist
    Nov 16 at 12:56






  • 1




    There were billions of billions of humans in the galaxy, and just a few thousand robots. It would take a long time for those robots to lock up every human in a private padded cell. Not to mention the time it takes to build the padded cells. And that locking up humans against their consent is considered harm.
    – RichS
    Nov 19 at 7:46
























isaac-asimov laws-of-robotics






edited Nov 15 at 13:36 by TheLethalCarrot

asked Nov 15 at 13:33 by budgiebeaks








11 Answers























116 votes













The robots in Asimov's works generally don't have the 'mental' sophistication needed to look ahead for abstract harm in the manner you suggest. For them, the 'inaction' clause must mean the robot cannot allow imminent harm - i.e. they must act to prevent harm when they see the harm about to happen. Such events generally don't occur as humans go about their daily lives, so by and large robots would let humans carry on (while serving them, of course).



By the time robots become sophisticated enough to forecast possible harm in the manner you suggest, they have also become sophisticated enough to understand that the restraints you suggest themselves constitute a kind of harm, so the 'action' clause here would counteract the 'inaction' clause [here the 'action' clause would be stronger, as it involves actions actually to be taken, contrasted with merely possible harms that need not occur]. They also would understand that things like biting one's own tongue are inherently unavoidable so they wouldn't try to prevent such harm (though of course it would 'pain' them when it actually happens). By the time we get to Daneel and his 'Zeroth Law' robots, they additionally understand that restraining all individual human beings constitutes harm to humanity; this, incidentally, is why robots eventually disappear - they come to realize that having humanity rely on them is itself harmful, so the best they can do is let humanity manage its own fate [at least overtly].

























  • 11




    It has been a while since I read it, but at the end of I, Robot, don't people realize that the computers/robots that control the world are setting a plan in motion to revert humanity back to primitive technology in order to protect them from themselves?
    – BlackThorn
    Nov 15 at 16:29






  • 4




    I upvoted. I also think this answer might be improved by reference to the story Galley Slave. In it, Dr. Calvin states that the robot Easy is not capable of abstract reasoning regarding the consequences of ideas published in a textbook. Certainly this is only one robot, but it's an explicit example of something that's only implied by the stories for other US Robots robots.
    – Dranon
    Nov 15 at 16:30






  • 5




    @BlackThorn - correct, in that the machines have essentially and surreptitiously taken over the world in order to protect humanity
    – NKCampbell
    Nov 15 at 19:01








  • 6




    @PMar's excellent answer refers to R. Daneel Olivaw and the 'Zeroth Law'; I'd suggest that Daneel's discovery of the 'Zeroth Law' was in response to his realization, through the course of the novels The Robots of Dawn and Robots and Empire, that there was a danger that over-reliance on robots would mean humanity would, metaphorically, be kept in padded rooms.
    – bgvaughan
    Nov 16 at 0:09






  • 6




    @bgvaughan Minor nitpick - Giskard discovered the Zeroth Law. He couldn't integrate it into his own mind and died, but he integrated it into Daneel's before he died. Giskard managing to stop the antagonist in spite of orders is the crux of the book's climax, in fact.
    – Graham
    Nov 16 at 1:31


















22 votes













Since the "merger" of the Robot universe and the Foundation universe reveals that robots manipulated and dominated human history for thousands of years, in a very real sense the galaxy is their padded room and most of Asimov's works in this "unified universe" take place inside that padded room.



We just can't see the walls.

























  • 3




    This is the correct answer. In the Asimov stories, the view is that it isn't good for us to know we are being controlled as we need a sense of free will, so the super intelligent robots work in the shadows.
    – axsvl77
    Nov 16 at 18:36






  • 1




    Keep in mind that some of that future history isn't original Asimov, it was written after his death.
    – Harry Johnston
    Nov 17 at 0:52






  • 2




    @HarryJohnston That's what the Robots want you to think.
    – David Tonhofer
    Nov 18 at 12:11






  • 2




    @David That sounds like the SCP Foundation universe, not the Asimov Foundation universe. Unless...gasp! It all fits!
    – tbrookside
    Nov 18 at 13:56






  • 2




    @RichS: By being very, very good at manipulation. They basically got humans to build their own walls, then set them to repairing the walls, all while convincing them that the walls were in fact just the ebb and flow of history.
    – Joe Bloggs
    Nov 19 at 9:55


















9 votes













My understanding of it was that the typical Three Laws robot interpreted the First Law to mean "Nor, through inaction, allow a human being to come to harm right here and now when the robot is within sight of him and can tell what's obviously about to happen if the robot does not choose to interfere." That's different from locking up the human today just because it is theoretically possible that he might suffer some sort of accidental injury or infection or other misfortune tomorrow. (Or at some much later date.)



To put it another way: Near as I can recall, on those occasions when we saw a robot refuse to comply with an order to go away and leave the human alone to do whatever he was currently doing, that usually meant the Second Law was being subordinated to the First Law because of the robot's perception of immediate danger to a fragile human body. But if such immediate danger was not present, then the Second Law required the robot to turn around and go away whenever instructed to do so. The solid fact of "The Second Law applies to this order I am receiving right now" overrode anything so abstract as "But if I leave today, a First Law problem involving physical harm might arise tomorrow . . . or the day after . . . or at some later date . . . who knows?"



So if some robot tried to lock everyone up for their own good, the Second Law could be invoked by ordering the robot to forget the whole silly idea.























  • 2




    "Little Lost Robot" involved highly speculative harm ("you might accidentally stay here too long") overriding the Second Law.
    – Kevin
    Nov 15 at 16:41












  • @Kevin I remember the story -- the missing robot only had the first part of the First Law in his positronic brain -- but I don't remember the exact bit you briefly referred to. Could you be more specific? (I do remember the way he convinces a lot of other robots that the First Law does not require them to commit suicide in a futile effort to protect the life of a man who seems to be threatened by a falling weight. I saw the robot's point -- self-destruction would simply mean the other robots were breaking the Third Law without enforcing the First in the process.)
    – Lorendiac
    Nov 16 at 2:02












  • The original purpose of the robots was to operate alongside humans in an area subject to low-level radiation that might harm the humans after prolonged exposure but would destroy the robots almost immediately. If the robots could have trusted the humans to look after themselves, then there would have been no need to modify the First Law in the first place.
    – Kevin
    Nov 16 at 2:09










  • @Kevin Ah. I didn't recall the exact rationale for why a few robots had been built that way in the first place. I've now refreshed my memory of the first part of the story. It looks like ordinary First Law robots only panicked on those occasions when gamma rays were deliberately being generated near a human body. A threat "here and now," as I said in my answer. It looks like those robots didn't do anything about such abstract possibilities as "after I leave the room, some silly human might start generating gamma rays with that equipment, and this could gradually impair his health."
    – Lorendiac
    Nov 16 at 2:21










  • I think CASA may be staffed by robots. They think the best way to prevent aeronautical harm is to prevent people from ever getting off the ground. I'm not being snarky, they've come right out and said so.
    – Peter Wone
    Nov 16 at 4:04


















3 votes













Zeroth law.



A robot may not harm humanity, or, by inaction, allow humanity to come to harm



If all the humans are locked in padded cells there isn't much procreation taking place. Ergo, the human race dies out.



Contrary to some other answers, I believe robot thinking is sophisticated enough to deal with future harm, just not to deal with hypothetical harm. A human may bite his tongue, stub his toe, whatever but it's not definite harm so doesn't require action to prevent it from happening.



Don't forget that - on realisation that the race would (not might) stagnate following the initial colonisation of the solar system and subsequent politics - they nuked (or allowed to be nuked) the planet.



























  • The sophistication of robot thinking depends a great deal on whether you're talking about very early robots like Robbie, somewhat more advanced ones like the Nestors, or the far more advanced (thousands of years later!) Giskard and Daneel. And of course there are variations even within the same time period, depending on the purpose for which the robot is intended. The Zeroth Law kicks in only at the far upper end of that spectrum.
    – Harry Johnston
    Nov 17 at 0:59




















3 votes














Where am I wrong? Are the Laws not what actually guides the robots, instead being something simplified for the robot user manual booklet or something?




If you read I, Robot, the stories are generally about times when there is a conflict in interpretations of the three laws. Most commonly the humans think that the robots' interpretations are the wrong ones.



The three laws are a descriptive summary of how robot brains are built in Asimov's universe. In actuality, they are much more complex than that. Robots have to weigh one possibility against another. Sometimes they pick the best result. Sometimes they don't.



I am going to cite links. If you follow a link, it will spoil the story for you, as I reveal the climactic plot point. If you do not follow the link, you might later read the story without immediately recognizing it, since I jump to the end. Or if your memory's good enough, maybe you'll remember. So continue reading and follow the spoiler links at your own risk.




For example, in one story (spoiled), a robot faced a situation where following a command (law 2) would have resulted in its destruction (law 3) and the deaths of humans (law 1). So the robot got stuck. It would go to the edge of the danger zone, then it realized it would be destroyed before being able to complete its mission which would have doomed the humans. Then it would retreat before realizing that it needed to go the other direction to complete its mission and save the humans. It kept going back and forth because it couldn't figure out how to complete its mission. After the humans figured out the problem, they revised the instructions.




In several stories, robots go insane because they couldn't figure out how to save humans from death.




In one (spoiled), the robot handed an angry human its arm, which the human then used to beat another human to death. The robot stopped working, as it realized after the fact that it had contributed to the death of a human being.



































    2 votes













    Since you don't specify that you are requesting an "in universe" reason ... I think it's important to remember that the three laws are just a story device. Asimov (wisely) is quite vague about how they are implemented, as he is about many technical details. And "I should just lock all the humans in a padded cell for their safety" would result in a rather limited storyline.



    Now, in universe, there are many "judgment calls" inherent in applying the three laws (and in fact the inherent ambiguities often result in important plot elements for the stories). The robots apparently have to appeal to their own programming instead of an external authority to resolve these ambiguities.



    But I think we have to logically assume that the more obvious judgment calls (like, say, should I just lock all the humans in a padded cell immediately for their safety?) were already addressed in development and testing of the robots, or they never would have been put in general use or production at all.



    In other words, the designers of the robots, in addition to addressing whatever other bugs they had to address (e.g. hmm, if the human is dead it can't suffer), would have simply programmed safeguards against that sort of result.























    • 3




      Actually there's a set of short stories in which Asimov specifically showed, in great detail, what happens to a robot whose three laws had been modified or weighted against each other in different scenarios. I don't think it's right to call it a story device; it's pretty baked in to the core of his series.
      – C Bauer
      Nov 15 at 20:27










      @CBauer almost all of the conflict stems from, or is somehow (significantly) related to, the three laws and how they are misinterpreted, creatively interpreted, too literally interpreted, have their interpretation tampered with, and so on by robots. Since they're the driving factor behind the story...or plot, if you will...they are a plot device. Don't confuse "plot device" with "a contrived plot device" - you can have a well thought out, well crafted, entirely internally consistent reason for the plot to progress. The three laws of robotics are an oft-cited example.
      – vlaz
      Nov 16 at 7:51










    • @vlaz Fair point, I guess I was considering the phrasing to be a bit dismissive which is why I commented. Thanks for the info!
      – C Bauer
      Nov 16 at 14:34


















    1 vote













    Kind of a frame challenge - is locking a human in a padded cell preventing them from harm? If you're going to allow the abstract possibility of future harm as motivation for a robot to use the 1st law to lock humans up, it should be noted that taking away a human's freedom generally causes them harm to some extent in the form of psychological damage - and the mental state of humans has been considered by robots as eligible for 1st law protection in at least some of Asimov's stories.


































      1 vote













      The Three Laws of Robotics require an analysis of all the laws to fully answer this question:



      First Law - A robot may not injure a human being or, through inaction, allow a human being to come to harm.



      Second Law - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.



      Third Law - A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.



      If a man is walking down the street, he is not coming to harm, so the robot should not act in such a manner that assumes he may be hurt. Should a robot attempt to pick him up and place him in a box, the man can say, "I order you to stand back," and by the Second Law, the robot would be required to obey.



      Isaac Asimov was the first to develop the Three Laws of Robotics in 1942 as part of his story "Runaround". Asimov said that he was tired of the common themes of robot stories of the time, which drew from the Frankenstein notion of "the created destroy the creator". He mused that if people created such creatures, they would naturally instill in them laws or ideals that would prevent them from harming humans.



      The overall idea of the First Law is that the robot cannot directly, or indirectly (through inaction), harm a human. It makes no assertion about possible future events, only what is directly calculable in the next few moments. From a hardware/software perspective, it is often very expensive to calculate things with MANY variables for more than a few cycles into the future. A robot cannot process all of the possibilities that could happen to the man in the next hour, let alone day, week, or year. It could, however, see a piano falling from a few stories above the man and quickly calculate the vector required to save him in the next few moments.
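The contrast above can be made concrete with a small sketch. This is not from Asimov; the function names and numbers are invented for illustration, assuming a crude model where each second offers a handful of possible events and a falling object obeys simple free-fall kinematics:

```python
def futures_to_evaluate(branching_factor, horizon):
    """World-states to examine if each time step offers
    `branching_factor` possible events, looked ahead `horizon` steps."""
    return branching_factor ** horizon


def can_intercept(drop_height_m, human_dist_m, robot_speed_ms, g=9.81):
    """Imminent-hazard check: can the robot reach the human before a
    falling object lands?  Free-fall time to impact: t = sqrt(2h / g)."""
    time_to_impact = (2 * drop_height_m / g) ** 0.5
    return human_dist_m / robot_speed_ms <= time_to_impact


# Looking just one minute ahead, with only 3 possible events per second,
# already yields an astronomical number of futures (3**60, about 4.2e28)...
print(futures_to_evaluate(3, 60))
# ...while the "piano is falling right now" check is a single formula.
print(can_intercept(drop_height_m=12.0, human_dist_m=5.0, robot_speed_ms=4.0))
```

The exponential blow-up in the first function is why speculative "he might get hurt someday" reasoning is out of reach, while the second check is cheap enough to run every moment.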



      As an added investigation, read "Liar!", another short story by Asimov which discusses this question from an emotional perspective. (Both of these stories can be found in the book "I, Robot" by Asimov, I believe.)




































        0 votes













        Simply put, the definition of 'harm'.



        Protection from physical harm can cause other kinds of harm. In some cases, protection from some form of harm can actually increase the likelihood of that type of harm in the future.



        For example, protection from emotional harm can leave a person incapable of dealing with trivial challenges without severe emotional harm, which can propagate to actual mental harm, which can further propagate into harm to one's general health, which obviously compromises one's physical safety.



        In the end, for a robot to be able to make determinations with regard to intervention in the full spectrum of human events, it must be capable of making a non-deterministic estimate of probable outcomes of a range of potential actions (including inaction), and be able to make not only objective determinations of probability and severity of harm, but also estimates of the subjective PERCEPTION of various types of harm. It must be able to do this continuously in real time as well.



        Because of the complexity of the problem, the simplest way to mitigate it is to restrict the problem domain by restricting the capabilities and responsibilities of the robot.



        If a robot is designed to control the opening and closing of a sliding door, software can be defined which can make very reliable estimates of the potential outcomes of its actions because its actions are limited to either opening the door, or closing the door.



        However, if our doorman robot is watching and listening to everything, and trying to parse everything going on around it, it may not be able to reliably determine whether it should open or close the door, given the totality of the situation. For example, if a couple are in an argument, and one of them gets up to storm out of the room, should the robot open the door, or would it be best to keep them in the room to solve their dispute? Is this person a danger to themselves or others if they leave? Will the other one be a danger to them if they stay? How will all of this affect their relationship? Will opening the door cause social harm because of the appearance of the person attempting to leave, compared with the social norms and apparent prejudices of those on the other side of the door who would witness the event?



        You can further restrict the problem domain by restricting the inputs. So now our robo-doorman can only perceive that a person is approaching the door, and can determine the point at which if the door is not opened, the person is likely to come to physical harm, based on their velocity and the properties of the door. Sure, the robot may not be very much help in saving a relationship, but it will predictably be able to keep you from walking into the doors like William Shatner in a Star Trek blooper.
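        The restricted problem domain described above can be sketched as a single decision function. The names and thresholds here are made up purely for illustration:

```python
# Hypothetical doorman robot with a deliberately tiny problem domain:
# the only inputs are distance and approach speed, and the only output
# is whether to start opening the door now.

def should_open(distance_m: float, speed_m_s: float,
                door_open_time_s: float = 1.5) -> bool:
    """Open if the person would otherwise reach the door before it
    can finish opening (i.e. walk into it, Shatner-style)."""
    if speed_m_s <= 0:              # standing still or walking away
        return False
    time_to_door = distance_m / speed_m_s
    return time_to_door <= door_open_time_s

print(should_open(3.0, 1.0))  # False: still 3 s away, no need yet
print(should_open(1.0, 1.0))  # True: 1 s out, the door needs 1.5 s
```

        With inputs this narrow, every possible action can be enumerated and verified in advance, which is exactly the trade-off being described: reliability purchased by giving up any ability to reason about relationships, prejudices, or anything else outside the domain.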



        All of this means that a robot must either approach or exceed our capacity for what we call 'thought', or it must be limited to the extent that its shortcomings are outweighed by its strengths. If neither is possible, then the task is probably better left to a human.


































          up vote
          0
          down vote













          A) Programmers prevent or allow.

          B) Semantics and definitions, even philosophy.

          C) Nothing.



          C)

          Any unattended (read: uninfluenced) self-learning system, even one with a few hardcoded, unalterable instructions, could devolve into something unforeseen and dangerous in some way or another.



          The fact that we humans haven't overcome this (see our world of extremes in bliss and suffering, as it exists today and throughout history) makes me wary of AI that is not monitored in its learning and possible self-programming.

          At the very least, it needs a failsafe installed for immediate shutdown or human intervention.

          But even if learning and action are monitored or influenced, there very quickly comes a point where no human can oversee or understand every line of code, or every behavioural pattern matrix or tensor field.



          Also, hacking, malfunctions, and viruses could always circumvent safeguards, or crash or stall the system, rendering even failsafes useless.



          B)

          There will need to be crystal clear definitions of what is meant by what instruction in order to overcome ambiguities.



          Again, we haven't mastered this ourselves, so there is little hope of doing it for an AI - an intellect so foreign, once it reaches its true potential, that we can't even fathom its thought processes, let alone its ethics.



          However if the instructions are sufficiently black and white, reducing any shade of grey into a yes or no, there at least might be a way.



          For instance, the AI could determine that humans require freedom to thrive, or even to exist or be happy.
          Thus taking freedom away would harm mankind and be an invalid choice.



          Should nobody have thought of "telling" the AI about this natural urge for freedom, and should there be no instance where it could have been observed by the AI (or it was dismissed as irrelevant), the AI could very well decide to take the action you propose.



          As you see it comes down to human nature and the biological, mental and social intricacies that are sometimes so abstract and interwoven that centuries, even millennia of philosophy and science fall short in understanding them.



          A)

          A collection of hardcoded conditions to test against might sufficiently mitigate the risk of death or harm to humans or society.



          The first step would be NOT to put weapons into AI-driven machinery that may roam freely. (Yeah, because we'd never do that.)

          Another would be NOT to hand over every vital control system to AI without manual override possibilities. (But it is sooo convenient.)

          Yet another would be to keep the AIs simple in their range of actions and fields of expertise, making it easier to predict most, even all, possible actions and to set a proper framework (at least according to the then-current ideology, societal norms, laws, ethical codes, etc. - oh no, this falls apart as well as soon as these change).



          There are many more, and the more cases we think of, the more we re-enter B) and C), as these are really all the same problem... it feels like a recursive loop...





          So in dealing with highly evolved AI, you essentially either create tools that are somewhat intelligent (possibly sentient) and effectively enslaved (under the control of humans), or you do what we (mostly) do with humans: let them learn from their surroundings, then let them roam free, hope for the best, and trust that their impulse toward self-preservation keeps them from going on rampages out of fear of retaliation... well, we see how well that works for us, so good luck either way...





























          • This seems to me to be discussing AI safety issues in general rather than in Asimov's specific fictional world?
            – Harry Johnston
            Nov 17 at 19:40










          • Isn't that the whole point of Asimov's world: that the laws he established don't work to prevent AI from becoming sentient, rebellious, even violent? At the very least he was pondering (among other things) the dilemmas that occur when comparing human ethos and instincts with decisions made by electronic difference engines, and whether emotions can arise from synthetic beings.
            – DigitalBlade969
            Nov 18 at 2:19










          • Well, certainly not the whole point. At first, at least, it was quite the opposite: Asimov said that one of the motivations for his early robot stories - and the Three Laws in particular - was a dislike of both the "robot as threat" and the "robot as pathos" tropes. He did explore more varied themes later on, though it seems to me more for storytelling purposes than as any sort of Aesop.
            – Harry Johnston
            Nov 18 at 7:23










          • ... your answer could be improved by linking your points more explicitly to both the question as asked and to specific named Asimov stories and/or quotations, I would be interested to see what you come up with.
            – Harry Johnston
            Nov 18 at 7:23


















          up vote
          0
          down vote













          The answer can be found in the first two laws of robotics.



          First Law - A robot may not injure a human being or, through inaction, allow a human being to come to harm.



          Second Law - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.



          If the robots locked any human being in a padded cell, the human being would simply order the robots to release him or her. If the robots refused, that human being need only point out that they are, or will be, injured psychologically by their incarceration.



          The Zeroth Law of robotics works at the level of the entire human species: the robots can act collectively, through large-scale manipulation of events, institutions, and populations, to ensure humans are kept safe as a species.


























            protected by Rand al'Thor Nov 19 at 12:41

















            11 Answers
            11














            up vote
            116
            down vote













            The robots in Asimov's works generally don't have the 'mental' sophistication needed to look ahead for abstract harm in the manner you suggest. For them, the 'inaction' clause must mean the robot cannot allow imminent harm - i.e. they must act to prevent harm when they see it about to happen. Such events generally don't occur as humans go about their daily lives, so by and large robots let humans carry on (while serving them, of course).



            By the time robots become sophisticated enough to forecast possible harm in the manner you suggest, they have also become sophisticated enough to understand that the restraints you suggest themselves constitute a kind of harm, so the 'action' clause here would counteract the 'inaction' clause [here the 'action' clause would be stronger, as it involves actions actually to be taken, contrasted with merely possible harms that need not occur]. They also would understand that things like biting one's own tongue are inherently unavoidable so they wouldn't try to prevent such harm (though of course it would 'pain' them when it actually happens). By the time we get to Daneel and his 'Zeroth Law' robots, they additionally understand that restraining all individual human beings constitutes harm to humanity; this, incidentally, is why robots eventually disappear - they come to realize that having humanity rely on them is itself harmful, so the best they can do is let humanity manage its own fate [at least overtly].

























            • 11




              It has been a while since I read it, but at the end of I, Robot, don't people realize that the computers/robots that control the world are setting a plan in motion to revert humanity back to primitive technology in order to protect them from themselves?
              – BlackThorn
              Nov 15 at 16:29






            • 4




              I upvoted. I also think this answer might be improved by reference to the story Galley Slave. In it, Dr. Calvin states that the robot Easy is not capable of abstract reasoning regarding the consequences of ideas published in a textbook. Certainly this is only one robot, but it's an explicit example of something that's only implied by the stories for other US Robots robots.
              – Dranon
              Nov 15 at 16:30






            • 5




              @BlackThorn - correct, in that the machines have essentially and surreptitiously taken over the world in order to protect humanity
              – NKCampbell
              Nov 15 at 19:01








            • 6




              @PMar's excellent answer refers to R. Daneel Olivaw and the 'Zeroth Law'; I'd suggest that Daneel's discovery of the 'Zeroth Law' was in response to his realization, through the course of the novels The Robots of Dawn and Robots and Empire, that there was a danger that over-reliance on robots would mean humanity would, metaphorically, be kept in padded rooms.
              – bgvaughan
              Nov 16 at 0:09






            • 6




              @bgvaughan Minor nitpick - Giskard discovered the Zeroth Law. He couldn't integrate it into his own mind and died, but he integrated it into Daneel's before he died. Giskard managing to stop the antagonist in spite of orders is the crux of the book's climax, in fact.
              – Graham
              Nov 16 at 1:31
















            edited Nov 15 at 16:06 by Mike Scott

            answered Nov 15 at 14:03 by PMar




















            up vote
            22
            down vote













            Since the "merger" of the Robot universe and the Foundation universe reveals that robots manipulated and dominated human history for thousands of years, in a very real sense the galaxy is their padded room and most of Asimov's works in this "unified universe" take place inside that padded room.



            We just can't see the walls.

























            • 3




              This is the correct answer. In the Asimov stories, the view is that it isn't good for us to know we are being controlled as we need a sense of free will, so the super intelligent robots work in the shadows.
              – axsvl77
              Nov 16 at 18:36






            • 1




              Keep in mind that some of that future history isn't original Asimov, it was written after his death.
              – Harry Johnston
              Nov 17 at 0:52






            • 2




              @HarryJohnston That's what the Robots want you to think.
              – David Tonhofer
              Nov 18 at 12:11






            • 2




              @David That sounds like the SCP Foundation universe, not the Asimov Foundation universe. Unless...gasp! It all fits!
              – tbrookside
              Nov 18 at 13:56






            • 2




              @RichS: By being very, very good at manipulation. They basically got humans to build their own walls, then set them to repairing the walls, all while convincing them that the walls were in fact just the ebb and flow of history.
              – Joe Bloggs
              Nov 19 at 9:55























            edited Nov 16 at 19:15

























            answered Nov 16 at 12:19









            tbrookside

            917212




            917212








            • 3




              This is the correct answer. In the Asimov stories, the view is that it isn't good for us to know we are being controlled as we need a sense of free will, so the super intelligent robots work in the shadows.
              – axsvl77
              Nov 16 at 18:36






            • 1




              Keep in mind that some of that future history isn't original Asimov, it was written after his death.
              – Harry Johnston
              Nov 17 at 0:52






            • 2




              @HarryJohnston That's what the Robots want you to think.
              – David Tonhofer
              Nov 18 at 12:11






            • 2




              @David That sounds like the SCP Foundation universe, not the Asimov Foundation universe. Unless...gasp! It all fits!
              – tbrookside
              Nov 18 at 13:56






            • 2




              @RichS: By being very, very good at manipulation. They basically got humans to build their own walls, then set them to repairing the walls, all while convincing them that the walls were in fact just the ebb and flow of history.
              – Joe Bloggs
              Nov 19 at 9:55
























            up vote
            9
            down vote













            My understanding of it was that the typical Three Laws robot interpreted the First Law to mean "Nor, through inaction, allow a human being to come to harm right here and now when the robot is within sight of him and can tell what's obviously about to happen if the robot does not choose to interfere." That's different from locking up the human today just because it is theoretically possible that he might suffer some sort of accidental injury or infection or other misfortune tomorrow. (Or at some much later date.)



            To put it another way: Near as I can recall, on those occasions when we saw a robot refuse to comply with an order to go away and leave the human alone to do whatever he was currently doing, that usually meant the Second Law was being subordinated to the First Law because of the robot's perception of immediate danger to a fragile human body. But if such immediate danger was not present, then the Second Law required the robot to turn around and go away whenever instructed to do so. The solid fact of "The Second Law applies to this order I am receiving right now" overrode anything so abstract as "But if I leave today, a First Law problem involving physical harm might arise tomorrow . . . or the day after . . . or at some later date . . . who knows?"



            So if some robot tried to lock everyone up for their own good, the Second Law could be invoked by ordering the robot to forget the whole silly idea.
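            The precedence described above (an immediate, concrete Second Law order outweighing speculative future First Law harm) can be caricatured as a simple weighting scheme. This is purely an illustrative sketch, not anything Asimov specifies: the numeric weights and the `certainty` factor are invented for the example.

```python
# Toy sketch of Three Laws precedence: each stimulus produces a "drive"
# proportional to its law's weight and to how concrete/immediate it is.
# (Invented model -- Asimov never gives a mechanism; weights are arbitrary.)

LAW_WEIGHT = {1: 3.0, 2: 2.0, 3: 1.0}  # First Law strongest

def drive(law, certainty):
    # certainty in [0, 1]: 1.0 = happening right here and now,
    # near 0 = abstract "might happen someday" speculation.
    return LAW_WEIGHT[law] * certainty

order_now = drive(2, certainty=1.0)     # direct order: "go away"
future_harm = drive(1, certainty=0.1)   # "a human might stub a toe tomorrow"
assert order_now > future_harm          # robot obeys the order and leaves

imminent_harm = drive(1, certainty=1.0)  # danger in front of the robot
assert imminent_harm > order_now         # First Law overrides the order
```

            Under such a scheme, "lock everyone up just in case" never wins: the speculative harm's drive is too small to override a concrete Second Law order to stop.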






            answered Nov 15 at 13:53









            Lorendiac

            • 2




              "Little Lost Robot" involved highly speculative harm ("you might accidentally stay here too long") overriding the Second Law.
              – Kevin
              Nov 15 at 16:41












            • @Kevin I remember the story -- the missing robot only had the first part of the First Law in his positronic brain -- but I don't remember the exact bit you briefly referred to. Could you be more specific? (I do remember the way he convinces a lot of other robots that the First Law does not require them to commit suicide in a futile effort to protect the life of a man who seems to be threatened by a falling weight. I saw the robot's point -- self-destruction would simply mean the other robots were breaking the Third Law without enforcing the First in the process.)
              – Lorendiac
              Nov 16 at 2:02












            • The original purpose of the robots was to operate alongside humans in an area subject to low-level radiation that might harm the humans after prolonged exposure but would destroy the robots almost immediately. If the robots could have trusted the humans to look after themselves, then there would have been no need to modify the First Law in the first place.
              – Kevin
              Nov 16 at 2:09










            • @Kevin Ah. I didn't recall the exact rationale for why a few robots had been built that way in the first place. I've now refreshed my memory of the first part of the story. It looks like ordinary First Law robots only panicked on those occasions when gamma rays were deliberately being generated near a human body. A threat "here and now," as I said in my answer. It looks like those robots didn't do anything about such abstract possibilities as "after I leave the room, some silly human might start generating gamma rays with that equipment, and this could gradually impair his health."
              – Lorendiac
              Nov 16 at 2:21










            • I think CASA may be staffed by robots. They think the best way to prevent aeronautical harm is to prevent people from ever getting off the ground. I'm not being snarky, they've come right out and said so.
              – Peter Wone
              Nov 16 at 4:04
























            up vote
            3
            down vote













            Zeroth law.



            A robot may not harm humanity, or, by inaction, allow humanity to come to harm



            If all the humans are locked in padded cells there isn't much procreation taking place. Ergo, the human race dies out.



            Contrary to some other answers, I believe robot thinking is sophisticated enough to deal with future harm, just not with hypothetical harm. A human may bite his tongue, stub his toe, or whatever, but since that is not definite harm it doesn't require action to prevent it from happening.



            Don't forget that, on realising that the race would (not might) stagnate following the initial colonisation of the solar system and the subsequent politics, the robots nuked the planet (or allowed it to be nuked).






            answered Nov 16 at 8:40









            mcalex

            • The sophistication of robot thinking depends a great deal whether you're talking about very early robots like Robbie, somewhat more advanced ones like the Nestors, or the far more advanced (thousands of years later!) Giskard and Daneel. And of course there are variations even within the same time period, depending on the purpose for which the robot is intended. The Zeroth Law kicks in only at the far upper end of that spectrum.
              – Harry Johnston
              Nov 17 at 0:59





















            up vote
            3
            down vote














            Where am I wrong? Are the Laws not what actually guides the robots, instead being something simplified for the robot user manual booklet or something?




            If you read I, Robot, the stories are generally about times when there is a conflict in interpretations of the three laws. Most commonly the humans think that the robots' interpretations are the wrong ones.



            The three laws are a descriptive summary of how robot brains are built in Asimov's universe. In actuality, they are much more complex than that. Robots have to weigh one possibility against another. Sometimes they pick the best result. Sometimes they don't.



I am going to cite links that reveal climactic plot points. If you follow a link, it will spoil the story for you; if you do not, you might later read the story without immediately recognizing it. Or if your memory is good enough, maybe you'll remember. So continue reading and follow the spoiler links at your own risk.




For example, in one story (spoiled), a robot faced a situation where following a command (law 2) would have resulted in its destruction (law 3) and the deaths of humans (law 1). So the robot got stuck: it would go to the edge of the danger zone, then realize it would be destroyed before it could complete its mission, which would have doomed the humans. Then it would retreat, before realizing that it needed to head back in to complete its mission and save the humans. It kept going back and forth because it couldn't figure out how to complete its mission. After the humans figured out the problem, they revised the instructions.
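That deadlock can be pictured as two opposing "potentials" that balance at a fixed distance from the goal: a weakly given order pulls the robot in, while a heightened self-preservation drive pushes it away. The sketch below is purely illustrative, not anything Asimov specifies; the weights, the danger model, and the convergence loop are all invented for the example.

```python
# Hypothetical sketch of the "Runaround" equilibrium. A casually given
# order (Law 2) exerts a constant pull toward the goal; an intensified
# self-preservation drive (Law 3) pushes back harder the closer the
# robot gets to the danger zone. The robot settles where they cancel.

def law2_pull(order_strength=1.0):
    """Attraction toward the goal; constant regardless of distance."""
    return order_strength

def law3_push(distance, danger_radius=5.0, self_preservation=3.0):
    """Repulsion that grows linearly as the robot nears the danger zone."""
    if distance >= danger_radius:
        return 0.0
    return self_preservation * (1.0 - distance / danger_radius)

def equilibrium(order_strength=1.0):
    """Iterate the tug-of-war until the distance stops changing."""
    d = 10.0
    for _ in range(10_000):
        net = law2_pull(order_strength) - law3_push(d)
        d = max(d - 0.1 * net, 0.0)  # move toward the goal if pull wins
    return d

# A casually given order stalls the robot outside the pool; a strong,
# urgent order (higher order_strength) drives it all the way in.
print(round(equilibrium(order_strength=1.0), 2))
print(equilibrium(order_strength=5.0))
```

With the invented weights, the weak order leaves the robot circling at a fixed radius, which is exactly Speedy's behavior in the story; raising the order's urgency collapses the standoff.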




            In several stories, robots go insane because they couldn't figure out how to save humans from death.




In one (spoiled), the robot handed an angry human its arm, which the human then used to beat another human to death. The robot stopped working when it realized, after the fact, that it had contributed to the death of a human being.







answered Nov 17 at 23:54 by Brythan






















                    up vote
                    2
                    down vote













                    Since you don't specify that you are requesting an "in universe" reason ... I think it's important to remember that the three laws are just a story device. Asimov (wisely) is quite vague about how they are implemented, as he is about many technical details. And "I should just lock all the humans in a padded cell for their safety" would result in a rather limited storyline.



                    Now, in universe, there are many "judgment calls" inherent in applying the three laws (and in fact the inherent ambiguities often result in important plot elements for the stories). The robots apparently have to appeal to their own programming instead of an external authority to resolve these ambiguities.



                    But I think we have to logically assume that the more obvious judgment calls (like, say, should I just lock all the humans in a padded cell immediately for their safety?) were already addressed in development and testing of the robots, or they never would have been put in general use or production at all.



                    In other words, the designers of the robots, in addition to addressing whatever other bugs they had to address (e.g. hmm, if the human is dead it can't suffer), would have simply programmed safeguards against that sort of result.






                    • 3




                      Actually there's a set of short stories in which Asimov specifically showed, in great detail, what happens to a robot whose three laws had been modified or weighted against eachother in different scenarios. I don't think it's right to call it a story device, it's pretty baked in to the core of his series.
                      – C Bauer
                      Nov 15 at 20:27










• @CBauer almost all of the conflict stems from, or is significantly related to, the three laws and how they are misinterpreted, creatively interpreted, too literally interpreted, or have their interpretation tampered with by robots. Since they are the driving factor behind the story, or plot if you will, they are a plot device. Don't confuse "plot device" with "a contrived plot device": you can have a well-thought-out, well-crafted, entirely internally consistent reason for the plot to progress, and the three laws of robotics are an oft-cited example of that.
                      – vlaz
                      Nov 16 at 7:51










                    • @vlaz Fair point, I guess I was considering the phrasing to be a bit dismissive which is why I commented. Thanks for the info!
                      – C Bauer
                      Nov 16 at 14:34















answered Nov 15 at 18:40 by GHolmes


















                    up vote
                    1
                    down vote













Kind of a frame challenge: is locking a human in a padded cell really protecting them from harm? If you're going to allow the abstract possibility of future harm as motivation for a robot to use the 1st law to lock humans up, it should be noted that taking away a human's freedom generally causes them harm to some extent, in the form of psychological damage, and the mental state of humans has been considered by robots as eligible for 1st law protection in at least some of Asimov's stories.






answered Nov 15 at 17:46 by Cubic






















                            up vote
                            1
                            down vote













The Three Laws of Robotics require an analysis of all the laws to fully answer this question:



                            First Law - A robot may not injure a human being or, through inaction, allow a human being to come to harm.



                            Second Law - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.



                            Third Law - A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.



                            If a man is walking down the street, he is not coming to harm, so the robot should not act in such a manner that assumes he may be hurt. Should a robot attempt to pick him up and place him in a box, the man can say, "I order you to stand back," and by the Second Law, the robot would be required to obey.
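That override can be sketched as a toy priority resolver. Asimov never specifies an implementation, and everything below (the function, the severity weighting, the scenario) is invented purely for illustration: each candidate action is tagged with the law it would violate, and the robot picks the action whose worst violation is least severe, with the First Law weighted heaviest.

```python
# Toy model of Three Laws priority, not anything from the books.
# violated_law is 1, 2, 3, or None (no violation); lower-numbered
# laws always dominate higher-numbered ones.

def choose(options):
    """options: list of (action_name, violated_law) pairs."""
    def severity(violated_law):
        if violated_law is None:
            return 0
        return 4 - violated_law  # law 1 -> 3, law 2 -> 2, law 3 -> 1
    # Pick the action with the least severe violation.
    return min(options, key=lambda opt: severity(opt[1]))[0]

# Boxing up a healthy pedestrian defies his order to stand back
# (a Second Law violation); leaving him alone violates nothing.
print(choose([("confine the man", 2), ("stand back", None)]))
```

The same ranking explains why a human order ("stand back") cannot be trumped by anything short of a genuine First Law threat.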



                            Isaac Asimov was the first to develop the Three Laws of Robotics in 1942 as part of his story "Runaround". Asimov said that he was tired of the common themes of robot stories of the time, which drew from the Frankenstein notion of "the created destroy the creator". He mused that if people created such creatures, they would naturally instill in them laws or ideals that would prevent them from harming humans.



The overall idea of the First Law is that the robot cannot directly, or indirectly (through inaction), harm a human. It makes no assertion about possible future events, only what is directly calculable in the next few moments. From a hardware/software perspective, it is often very expensive to calculate outcomes involving many variables more than a few cycles into the future. A robot cannot process all of the possibilities that could happen to the said man in the next hour, let alone the next day, week, or year. It could, however, see a piano falling from a few stories above the man and quickly calculate the vector required to save him in the next few moments.
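The intractability is easy to make concrete. If the world presents roughly b relevant events per time step, exhaustive lookahead over t steps means b**t branches to evaluate; the particular numbers below are invented, but the exponential shape is the point.

```python
# Illustrative only: why "compute every possible future" fails fast.
# b = relevant events per time step, t = steps of lookahead; an
# exhaustive lookahead must evaluate b**t distinct event sequences.

def branches(b, t):
    """Count the event sequences an exhaustive lookahead must cover."""
    return b ** t

# A piano already falling is a one-step, near-certain prediction:
print(branches(10, 1))   # 10 sequences: tractable
# "Anything that might harm him this hour" is not:
print(branches(10, 60))  # 10**60 sequences: hopeless
```

This is why the First Law can only bind a robot to harms it can actually foresee, not to every padded-cell-justifying hypothetical.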



                            As an added investigation, read "Liar!", another short story by Asimov which discusses this question from an emotional perspective. (Both of these stories can be found in the book "I, Robot" by Asimov, I believe.)






                            share|improve this answer



























                              up vote
                              1
                              down vote













                              The Three Laws of Robotics require an analysis of all the laws to full answer this question:



                              First Law - A robot may not injure a human being or, through inaction, allow a human being to come to harm.



                              Second Law - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.



                              Third Law - A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.



                              If a man is walking down the street, he is not coming to harm, so the robot should not act in such a manner that assumes he may be hurt. Should a robot attempt to pick him up and place him in a box, the man can say, "I order you to stand back," and by the Second Law, the robot would be required to obey.



                              Isaac Asimov was the first to develop the Three Laws of Robotics in 1942 as part of his story "Runaround". Asimov said that he was tired of the common themes of robot stories of the time, which drew from the Frankenstein notion of "the created destroy the creator". He mused that if people created such creatures, they would naturally instill in them laws or ideals that would prevent them from harming humans.



                              The overall idea of the First Law is that the robot cannot directly, or indirectly (through inaction), harm a human. It makes no assertion about possible future events, only what is directly calculable in the next few moments. From a hard/software perspective, it is often very expensive to calculate things with MANY variables for more than a few cycles in the future. A robot cannot process all of the possibilities that could happen to the said man in the next hour, let alone day, week, or year. It could, however, see a piano falling from a few stories above the man and quickly calculate the vector required to save the man in the next few moments.



                              As an added investigation, read "Liar!", another short story by Asimov which discusses this question from an emotional perspective. (Both of these stories can be found in the book "I, Robot" by Asimov, I believe.)






                              share|improve this answer

























                                up vote
                                1
                                down vote










                                up vote
                                1
                                down vote









                                The Three Laws of Robotics require an analysis of all the laws to full answer this question:



                                First Law - A robot may not injure a human being or, through inaction, allow a human being to come to harm.



                                Second Law - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.



                                Third Law - A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.



                                If a man is walking down the street, he is not coming to harm, so the robot should not act in such a manner that assumes he may be hurt. Should a robot attempt to pick him up and place him in a box, the man can say, "I order you to stand back," and by the Second Law, the robot would be required to obey.



                                Isaac Asimov was the first to develop the Three Laws of Robotics in 1942 as part of his story "Runaround". Asimov said that he was tired of the common themes of robot stories of the time, which drew from the Frankenstein notion of "the created destroy the creator". He mused that if people created such creatures, they would naturally instill in them laws or ideals that would prevent them from harming humans.



                                The overall idea of the First Law is that the robot cannot directly, or indirectly (through inaction), harm a human. It makes no assertion about possible future events, only what is directly calculable in the next few moments. From a hardware/software perspective, it is often very expensive to calculate outcomes involving MANY variables for more than a few cycles into the future. A robot cannot process all of the possibilities that could happen to the man in the next hour, let alone day, week, or year. It could, however, see a piano falling from a few stories above the man and quickly calculate the vector required to save him in the next few moments.
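To see why "the next few moments" is the practical limit, consider how the space of possible futures grows with the prediction horizon. A minimal sketch (the branching factor per step is an assumed, illustrative number):

```python
# Exponential growth of the prediction space with horizon length.
# With even a modest branching factor b (possible events per step),
# there are b ** horizon candidate futures; long horizons are intractable.

def futures_to_evaluate(branching_factor, horizon):
    return branching_factor ** horizon

# A few moments ahead (the falling piano): cheap.
print(futures_to_evaluate(10, 3))                  # 1000
# An hour ahead at one decision point per second: hopeless.
print(futures_to_evaluate(10, 3600) > 10 ** 3000)  # True
```

So a robot that only has to honour the First Law over a short horizon has a tractable problem; one that must guard against toe-stubbing a week from now does not.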



                                As an added investigation, read "Liar!", another short story by Asimov which discusses this question from an emotional perspective. (Both of these stories can be found in the book "I, Robot" by Asimov, I believe.)






                                edited Nov 17 at 20:42

























                                answered Nov 17 at 18:24









                                JSBach

                                114


























                                    up vote
                                    0
                                    down vote













                                    Simply put, the definition of 'harm'.



                                    Protection from physical harm can cause other kinds of harm. In some cases, protection from some form of harm can actually increase the likelihood of that type of harm in the future.



                                    For example, protection from emotional harm can leave a person incapable of dealing with trivial challenges without severe emotional harm, which can propagate to actual mental harm, which can further propagate into harm to one's general health, which obviously compromises one's physical safety.



                                    In the end, for a robot to be able to make determinations with regard to intervention in the full spectrum of human events, it must be capable of making a non-deterministic estimate of probable outcomes of a range of potential actions (including inaction), and be able to make not only objective determinations of probability and severity of harm, but also estimates of the subjective PERCEPTION of various types of harm. It must be able to do this continuously in real time as well.



                                    Because of the complexity of the problem, the simplest way to mitigate it is to restrict the problem domain by restricting the capabilities and responsibilities of the robot.



                                    If a robot is designed to control the opening and closing of a sliding door, software can be defined which can make very reliable estimates of the potential outcomes of its actions because its actions are limited to either opening the door, or closing the door.



                                    However, if our doorman robot is watching and listening to everything, and trying to parse everything going on around it, it may not be able to reliably determine whether it should open or close the door, given the totality of the situation. For example, if a couple are in an argument, and one of them gets up to storm out of the room, should the robot open the door, or would it be best to keep them in the room to solve their dispute? Is this person a danger to themselves or others if they leave? Will the other one be a danger to them if they stay? How will all of this affect their relationship? Will opening the door cause social harm because of the appearance of the person attempting to leave compared with the social norms and apparent prejudices of those on the other side of the door who would witness the event?



                                    You can further restrict the problem domain by restricting the inputs. So now our robo-doorman can only perceive that a person is approaching the door, and can determine the point at which if the door is not opened, the person is likely to come to physical harm, based on their velocity and the properties of the door. Sure, the robot may not be very much help in saving a relationship, but it will predictably be able to keep you from walking into the doors like William Shatner in a Star Trek blooper.
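With inputs restricted that far, the doorman's entire decision procedure fits in a few lines. A hedged sketch - the timings and names here are made up for illustration, not taken from any real door controller:

```python
# Restricted-domain door controller: the only inputs are the person's
# distance to the door (metres) and approach speed (m/s), plus the
# door's known opening time. Open exactly when not opening would risk
# the person walking into the door.

DOOR_OPEN_TIME_S = 1.5   # assumed time for the door to slide fully open
SAFETY_MARGIN_S = 0.5    # assumed extra margin before impact

def should_open(distance_m, speed_mps):
    if speed_mps <= 0:            # standing still or walking away
        return False
    time_to_door = distance_m / speed_mps
    return time_to_door <= DOOR_OPEN_TIME_S + SAFETY_MARGIN_S

print(should_open(distance_m=10.0, speed_mps=1.4))  # False: plenty of time
print(should_open(distance_m=2.0, speed_mps=1.4))   # True: ~1.4 s to impact
```

Nothing in this controller can reason about the couple's argument - and that is precisely why its behaviour is predictable and reliable.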



                                    All of this means that a robot must either be able to approach or exceed our capacity for what we call 'thought', or it must be limited to the extent that its shortcomings are outweighed by its strengths. If neither is possible, then the task is probably better left to a human.






                                        answered Nov 16 at 0:50









                                        Mitch Carroll

                                        1


























                                            up vote
                                            0
                                            down vote













                                            A) Programmers prevent or allow.

                                            B) Semantics and definitions, even philosophy.

                                            C) Nothing.



                                            C)

                                            Any unattended (read: uninfluenced) self-learning system, even with a few hardcoded, unalterable instructions, could devolve into something unforeseen and dangerous in some way or another.



                                            The fact that we humans haven't overcome this (see our world of extremes in bliss and suffering, today and throughout history) makes me wary of AI that is not monitored in its learning and possible self-programming.

                                            At the very least it needs to have a failsafe installed for immediate shutdown or human interference.

                                            But even if learning and action are monitored or influenced, very quickly there comes a point where no human is able to oversee or understand every line of code or behavioural pattern matrix or tensor field.

                                            Also, hacking, malfunctions and viruses could always circumvent safeguards, or crash or stall the system, thus rendering even failsafes useless.



                                            B)

                                            There will need to be crystal clear definitions of what is meant by what instruction in order to overcome ambiguities.



                                            Again we haven't mastered this, so there is no hope for AI - an intellect so foreign once it reached its true potential that we can't even fathom its thought processes, let alone ethics.



                                            However if the instructions are sufficiently black and white, reducing any shade of grey into a yes or no, there at least might be a way.



                                            For instance, the AI could determine that humans require freedom to thrive, even to exist or be happy.
                                            Thus taking it away would harm mankind and be an invalid choice.



                                            Should nobody have thought of "telling" the AI about this natural urge to freedom, and should there have been no instance where the AI could observe it (or it dismissed the observation as irrelevant), it could very well decide to take the action you propose.

                                            As you can see, it comes down to human nature and the biological, mental and social intricacies that are sometimes so abstract and interwoven that centuries, even millennia, of philosophy and science fall short in understanding them.



                                            A)

                                            A collection of hardcoded conditions to test against might sufficiently mitigate the risk of death or harm to humans or society.



                                            The first step would be NOT to put weapons into AI-driven machinery that may roam freely. (Yeah, because we'd never do that.)

                                            Another would be NOT to hand over every vital control system to AI without manual override possibilities. (But it is sooo convenient.)

                                            Yet another would be to keep the AI simple in its range of actions and fields of expertise, making it easier to predict most, even all, possible actions and to set a proper framework (at least according to the then-current ideology, societal norms, laws, ethical codes, etc. - oh no, this already falls apart as soon as these change).



                                            There are many more and the more cases we think of, the more we enter B) and C) again as these are actually all the very same problem...feels like a recursive loop...
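Those hardcoded conditions amount to a fixed guard layer that vetoes proposed actions before any learned behaviour runs. A rough sketch - the individual checks below are invented for illustration, not any real safety standard:

```python
# Hardcoded pre-action guards: every proposed action must pass a fixed,
# non-learnable checklist before the AI may execute it. The learned
# policy never gets to see, let alone rewrite, these checks.

GUARDS = [
    # no weapons on free-roaming units
    lambda a: not a.get("weaponized", False),
    # a human manual override must always remain available
    lambda a: a.get("manual_override", True),
    # keep the AI to a narrow, whitelisted field of expertise
    lambda a: a.get("domain") in {"doors", "cleaning"},
]

def vetted(action):
    """Allow an action only if every hardcoded guard approves it."""
    return all(guard(action) for guard in GUARDS)

print(vetted({"domain": "doors"}))                      # True
print(vetted({"domain": "doors", "weaponized": True}))  # False
print(vetted({"domain": "finance"}))                    # False
```

Of course, as the answer notes, deciding what belongs on that checklist is exactly the B) and C) problem all over again.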





                                            So in dealing with highly evolved AI, essentially you either create tools that are somewhat intelligent (possibly sentient) and effectively enslaved (under the control of humans), or you do what we (mostly) do with humans: let them learn from their surroundings, then roam free and hope for the best, trusting that their impulse to self-preservation keeps them from going on rampages out of fear of retaliation... well, we see how well that works for us... so good luck either way...






                                            • This seems to me to be discussing AI safety issues in general rather than in Asimov's specific fictional world?
                                              – Harry Johnston
                                              Nov 17 at 19:40










                                            • Isn't that the whole point of Asimov's world - that the laws he established don't work to prevent AI from becoming sentient, rebellious, even violent? At the very least he was pondering (among other things) dilemmas that occur when comparing human ethos and instincts with decisions made by electronic difference engines, and whether emotions can arise from synthetic beings.
                                              – DigitalBlade969
                                              Nov 18 at 2:19










                                            • Well, certainly not the whole point. At first, at least, it was quite the opposite: Asimov said that one of the motivations for his early robot stories - and the Three Laws in particular - was a dislike of both the "robot as threat" and the "robot as pathos" tropes. He did explore more varied themes later on, though it seems to me more for storytelling purposes than as any sort of Aesop.
                                              – Harry Johnston
                                              Nov 18 at 7:23










                                            • ... your answer could be improved by linking your points more explicitly to both the question as asked and to specific named Asimov stories and/or quotations, I would be interested to see what you come up with.
                                              – Harry Johnston
                                              Nov 18 at 7:23















                                            up vote
                                            0
                                            down vote













                                            A) Programmers prevent or allow.

                                            B) semantics and definitions, even philosophy

                                            C) nothing



                                            C)

                                            Any unattended(read, uninfluenced) self learning system, even with a few hardcoded, unalterable instructions could devolve into something unforseen and dangerous in some way or another.



                                            The fact that we humans haven't overcome this (see our world of extremes in bliss and suffering as it exists today and history) makes me wary of AI that is not monitored in its learning and possible self programming.

                                            At the very least it needs to have a failsafe installed for immediate shut down or human interference.

                                            But even if learning and action is monitored or influenced very quickly there is a point where no human is able to oversee or understand every line of code or behavioural pattern matrix or tensor field.



                                            Also, hacking,malfunctions and viruses could always circumvent anything or crash or stall the systemt, thus rendering even failsafes useless.



                                            B)

                                            There will need to be crystal clear definitions of what is meant by what instruction in order to overcome ambiguities.



                                            Again we haven't mastered this, so there is no hope for AI - an intellect so foreign once it reached its true potential that we can't even fathom its thought processes, let alone ethics.



                                            However if the instructions are sufficiently black and white, reducing any shade of grey into a yes or no, there at least might be a way.



                                            For instance the AI could determine that humans require freedom to thrive, even exist or be happy.
                                            Thus taking it away would harm mankind and be an invalid choice.



                                            Should nobody have thought of "telling" the AI this natural urge to freedom and there was no instance where it could have been observed by the AI (or dismissed as irrelevant), it could very well decide to take the action you propose.



                                            As you see it comes down to human nature and the biological, mental and social intricacies that are sometimes so abstract and interwoven that centuries, even millennia of philosophy and science fall short in understanding them.



                                            A)

                                            A collection of hardcoded conditions to test against might sufficiently mitigate the risk of death or harm to humans or society.



                                            First step would be NOT to put weapons into AI driven machinery that may roam freely.(Yeah, because we'd never do that)

                                            Another would be NOT to hand over every vital control system to AI without manual override possibilities.(but it is sooo convenient)

                                            Yet another would be to keep the AI simple in their range of actions and fields of expertise, making it easier to predict most, even all possible actions and setting a proper framework (at least according to the then current ideology, societal norm, law, ethical codex etc. - oh no this already falls apart as well as soon as these change).



                                            There are many more and the more cases we think of, the more we enter B) and C) again as these are actually all the very same problem...feels like a recursive loop...





A) Programmers prevent or allow it.

B) Semantics and definitions, even philosophy.

C) Nothing.



C)

Any unattended (read: uninfluenced) self-learning system, even one with a few hard-coded, unalterable instructions, could devolve into something unforeseen and dangerous in one way or another.



The fact that we humans haven't overcome this ourselves (see our world of extremes in bliss and suffering, today and throughout history) makes me wary of any AI whose learning and possible self-programming are not monitored.

At the very least it needs a failsafe installed for immediate shutdown or human intervention.

But even if learning and action are monitored or influenced, there very quickly comes a point where no human is able to oversee or understand every line of code, behavioural pattern matrix, or tensor field.



Also, hacking, malfunctions, and viruses could always circumvent anything, or crash or stall the system, rendering even failsafes useless.



B)

There would need to be crystal-clear definitions of what is meant by each instruction in order to overcome ambiguities.



Again, we haven't mastered this ourselves, so there is little hope of doing it for an AI - an intellect so foreign, once it reached its true potential, that we couldn't even fathom its thought processes, let alone its ethics.



However, if the instructions are sufficiently black and white, reducing every shade of grey to a yes or no, there might at least be a way.



For instance, the AI could determine that humans require freedom to thrive, or even to exist or be happy.
Taking freedom away would therefore harm mankind and be an invalid choice.



Should nobody have thought of "telling" the AI about this natural urge for freedom, and should there have been no instance where the AI could have observed it (or should it have dismissed the observation as irrelevant), it could very well decide to take the action you propose.



As you can see, it comes down to human nature and the biological, mental, and social intricacies that are sometimes so abstract and interwoven that centuries, even millennia, of philosophy and science fall short of understanding them.



A)

A collection of hard-coded conditions to test against might sufficiently mitigate the risk of death or harm to humans or society.



A first step would be NOT to put weapons into AI-driven machinery that may roam freely. (Yeah, because we'd never do that.)

Another would be NOT to hand over every vital control system to an AI without a manual override. (But it is sooo convenient.)

Yet another would be to keep AIs simple in their range of actions and fields of expertise, making it easier to predict most or even all possible actions and to set a proper framework (at least according to the then-current ideology, societal norms, laws, ethical codes, etc. - oh no, this falls apart as soon as those change).



There are many more, and the more cases we think of, the more we re-enter B) and C), as these are actually all the very same problem... it feels like a recursive loop.
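As a toy illustration of A) - and of how quickly B)'s ambiguity problem creeps back in - a hard-coded guard layer might look like the sketch below. Every name here is invented for illustration; this is not from Asimov or any real system.

```python
# Toy sketch of a hard-coded guard layer over a robot's proposed actions.
# All names here are hypothetical, invented purely for this illustration.

FORBIDDEN = {"carries_weapon", "overrides_manual_control", "restricts_human_freedom"}

def approve(action_properties):
    """Reject any proposed action that matches a forbidden condition.

    The catch (problem B): someone must first define, unambiguously,
    what counts as e.g. "restricts_human_freedom" - which is exactly
    the part we have never managed to pin down even for ourselves.
    """
    return FORBIDDEN.isdisjoint(action_properties)

print(approve({"moves_to_kitchen"}))                       # allowed
print(approve({"locks_door", "restricts_human_freedom"}))  # blocked
```

The guard is only as good as the labels fed into it - deciding whether "lock the human in a padded cell" carries the property `restricts_human_freedom` is the whole problem, restated.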





So in dealing with highly evolved AI, you essentially either create tools that are somewhat intelligent (possibly sentient) and effectively enslaved (under human control), or you do what we (mostly) do with humans: let them learn from their surroundings, let them roam free, and hope for the best - hope that their impulse toward self-preservation keeps them from going on rampages, out of fear of retaliation... well, we see how well that works for us... so good luck either way.



























answered Nov 17 at 8:18 by DigitalBlade969 (edited Nov 17 at 8:57 by TheLethalCarrot)


• This seems to me to be discussing AI safety issues in general rather than Asimov's specific fictional world?
  – Harry Johnston
  Nov 17 at 19:40

• Isn't that the whole point of Asimov's world - that the laws he established don't work to prevent AI from becoming sentient, rebellious, even violent? At the very least he was pondering (among other things) the dilemmas that occur when comparing human ethos and instincts with decisions made by electronic difference engines, and whether emotions can arise in synthetic beings.
  – DigitalBlade969
  Nov 18 at 2:19

• Well, certainly not the whole point. At first, at least, it was quite the opposite: Asimov said that one of the motivations for his early robot stories - and the Three Laws in particular - was a dislike of both the "robot as threat" and the "robot as pathos" tropes. He did explore more varied themes later on, though it seems to me more for storytelling purposes than as any sort of Aesop.
  – Harry Johnston
  Nov 18 at 7:23

• ... your answer could be improved by linking your points more explicitly to both the question as asked and to specific named Asimov stories and/or quotations. I would be interested to see what you come up with.
  – Harry Johnston
  Nov 18 at 7:23









































The answer can be found in the first two Laws of Robotics.

First Law - A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

If the robots locked any human being in a padded cell, the human being would simply order the robots to release him or her. If the robots refused, the human being need only point out that they are being, or will be, injured psychologically by their incarceration.

The Zeroth Law of Robotics works on the level of the entire human species, where the robots can act collectively through large-scale manipulation of events, institutions, and populations to ensure humans are kept safe as a species.
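The First/Second Law priority described above can be sketched as a small decision rule. This is only an illustrative toy, with all names invented; it is not Asimov's positronic logic, which weighs competing potentials rather than booleans.

```python
# Toy sketch of First Law dominating Second Law, as described above.
# All names are hypothetical, invented purely for this illustration.

def robot_decision(order, harms_human_if_obeyed, harms_human_if_refused):
    # First Law dominates: never pick the branch that harms a human.
    if harms_human_if_obeyed and not harms_human_if_refused:
        return "refuse: " + order
    # Otherwise the Second Law applies: obey the human's order.
    return "obey: " + order

# Keeping someone locked up harms them psychologically (First Law),
# so refusing the order "release me" is not an option:
print(robot_decision("release me",
                     harms_human_if_obeyed=False,
                     harms_human_if_refused=True))  # obey: release me
```

The moment both branches harm someone, this rule gives no guidance - which is precisely the kind of dilemma Asimov's stories exploit.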

































answered Nov 19 at 12:17 by a4android

















                                                    protected by Rand al'Thor Nov 19 at 12:41





