Did the robot that saved Spooner's life in I, Robot break the 2nd law of robotics?
There is a particular scene in I, Robot that raises an interesting issue. Detective Spooner is telling the story of how he lost his arm. He was in a car accident, and found himself and a young girl underwater, about to drown. Spooner was saved by a robot that was passing by. The reason given for why the robot saved Spooner and not the girl was that, statistically speaking, Spooner's chances of survival were higher than the girl's. That makes sense, as a robot would make decisions based on statistical probability. But there is a problem: in the flashback, we clearly hear Spooner shout, "Save the girl!" to the robot. That was a direct order from a human to a robot.
At first I thought that this was not a violation of the 2nd law, because if the robot had obeyed the order, Spooner would have died, and the 2nd law of robotics cannot override the 1st law. But the problem with this is that, if a Sophie's-choice-type situation counts as breaking the 1st law of robotics, then an emergency robot would never be able to perform triage (as the robot that saved Spooner clearly did). If choosing to save one human counts as harming another human, then a robot would not be able to operate effectively in an emergency such as this.
All this leads to my question:
Did this robot break the 2nd law of robotics, or is there some nuanced interpretation of the laws that could explain its behaviour?
laws-of-robotics i-robot-2004
Note that although the plot of the film I, Robot is not in any way based on the original Asimov stories collected under the same name, those stories do often address very similar issues to the one here, frequently centering around interpretation of the Three Laws and their interactions. One specific similarity is with the plot of "Runaround", the second story in that collection.
– Daniel Roseman
Jul 12 '17 at 12:51
Well, humans do similar calculations too. They just put a lot more weight on the probability of "can't live with myself if the girl dies" :) In your interpretation of the 2nd (and 1st) law, robots couldn't ever do anything but melt on the spot - there's always someone dying while they're doing something else, isn't there? As for Asimov's take, do bear in mind that the laws in English are just simplified translations - they don't cover even a tiny fraction of the actual laws coded in the robots themselves. Language-lawyering the English version of the law is irrelevant :)
– Luaan
Jul 12 '17 at 13:38
To expand on @DanielRoseman’s comment, in the books this situation would not have been resolved the way it was in the movie. The movie portrays the robot’s thinking as cold and calculating, saving Will to maximize the chances of saving someone. Asimov’s robots were incapable of such calculation. Being presented with this dilemma, often even in a hypothetical, would be enough to fry a robot’s brain. For example, in “Escape!”, the robot needs to be coached carefully through just thinking about a problem where harm to humans is the “right” solution.
– KRyan
Jul 12 '17 at 13:42
@Luaan In the books, humans perform such calculations. Robots cannot. They are not trusted to perform such calculations, and are literally designed such that even thinking about these problems destroys them.
– KRyan
Jul 12 '17 at 13:42
This problem is similar to the trolley problem. There's just no way to save everyone, no matter what.
– Arturo Torres Sánchez
Jul 12 '17 at 14:10
10 Answers
The film appears to operate on anachronistic Asimov mechanics
What we would have here is likely a first-law vs first-law conflict. Since the robot cannot save both humans, one of them has to die.
I, Robot era:
There is definitely precedent for an I, Robot era robot knowingly allowing humans to come to harm, in the short story "Little Lost Robot", but this was under the circumstance that the human would come to harm regardless of the robot's action, so the robots deem that it is not through their inaction that the humans come to harm.
However, I would suspect that instead, an Asimov robot would interpret the situation in the film as a first-law vs first-law conflict, since either human could be saved depending on the robot's decision. In other words, the robot could have saved the child, but didn't, which would be a first law violation. Looking at both victims this same way, the robot would then find this to be a first-law vs first-law conflict.
The short story "Liar!" explores what happens when a robot is faced with a first-law vs first-law scenario:
Through a fault in manufacturing, a robot, RB-34 (also known as Herbie), is created that possesses telepathic abilities. While the roboticists at U.S. Robots and Mechanical Men investigate how this occurred, the robot tells them what other people are thinking. But the First Law still applies to this robot, and so it deliberately lies when necessary to avoid hurting their feelings and to make people happy, especially in terms of romance.
However, by lying, it is hurting them anyway. When it is confronted with this fact by Susan Calvin (to whom it falsely claimed her coworker was infatuated with her - a particularly painful lie), the robot experiences an insoluble logical conflict and becomes catatonic.
In short, an I, Robot era robot in Asimov's writing would not have been able to continue functioning after this scenario and would have to be discarded completely. It's likely that it would not even be able to function after being initially faced with the scenario, thereby destroying itself before being able to rescue either human.
The second law is irrelevant, because a first-law vs first-law conflict results in an unsurvivable deadlock. The First Law is the "trump card", so to speak; it is not assigned a graded priority that the Second or Third Law could compete with, the way those two compete with each other in "Runaround":
In 2015, Powell, Donovan and Robot SPD-13 (also known as "Speedy") are sent to Mercury to restart operations at a mining station which was abandoned ten years before.
They discover that the photo-cell banks that provide life support to the base are short on selenium and will soon fail. The nearest selenium pool is seventeen miles away, and since Speedy can withstand Mercury’s high temperatures, Donovan sends him to get it. Powell and Donovan become worried when they realize that Speedy has not returned after five hours. They use a more primitive robot to find Speedy and try to analyze what happened to it.
When they eventually find Speedy, they discover he is running in a huge circle around a selenium pool. Further, they notice that "Speedy’s gait [includes] a peculiar rolling stagger, a noticeable side-to-side lurch". When Speedy is asked to return with the selenium, he begins talking oddly ("Hot dog, let’s play games. You catch me and I catch you; no love can cut our knife in two" and quoting Gilbert and Sullivan). Speedy continues to show symptoms that, if he were human, would be interpreted as drunkenness.
Powell eventually realizes that the selenium source contains unforeseen danger to the robot. Under normal circumstances, Speedy would observe the Second Law ("a robot must obey orders"), but, because Speedy was so expensive to manufacture and "not a thing to be lightly destroyed", the Third Law ("a robot must protect its own existence") had been strengthened "so that his allergy to danger is unusually high". As the order to retrieve the selenium was casually worded with no particular emphasis, Speedy cannot decide whether to obey it (Second Law) or protect himself from danger (the strengthened Third Law). He then oscillates between positions: farther from the selenium, in which the order "outweighs" the need for self-preservation, and nearer the selenium, in which the compulsion of the third law is bigger and pushes him back. The conflicting Laws cause what is basically a feedback loop which confuses him to the point that he starts acting inebriated.
Attempts to order Speedy to return (Second Law) fail, as the conflicted positronic brain cannot accept new orders. Attempts to force Speedy back to the base with oxalic acid, which could destroy it (Third Law), also fail; they merely cause Speedy to change routes until he finds a new avoid-danger/follow-order equilibrium.
Of course, the only thing that trumps both the Second and Third Laws is the First Law of Robotics ("a robot may not...allow a human being to come to harm"). Therefore, Powell decides to risk his life by going out in the heat, hoping that the First Law will force Speedy to overcome his cognitive dissonance and save his life. The plan eventually works, and the team is able to repair the photo-cell banks.
Robot novels era:
A few thousand years after the I, Robot era, the first-law vs first-law dilemma has essentially been solved.
In The Robots of Dawn, a humaniform robot experiences a deadlock and is destroyed, and Elijah Bailey is tasked with discovering why. He suggests to Dr. Fastolfe, one of the greatest roboticists of the age as well as the robot's owner and creator, that a first-law vs first-law dilemma might be responsible, citing the story of Susan Calvin and the psychic robot. However, Dr. Fastolfe explains that this is essentially impossible in the modern age, because even first-law invocations are assigned a priority and equal priorities are decided between at random; he himself is probably the only person alive who could orchestrate such a deadlock, and even then only on a good day.
We see direct instances of robots handling priority in first law conflicts throughout the novels, such as in The Naked Sun, when another humaniform robot forces Bailey to sit so that it can close the top on a transporter to protect him from his agoraphobia.
The disadvantage is that it is possible, though it requires extreme circumstances, for multiple second- or third-law appeals to outweigh an appeal to the first law. Again in The Robots of Dawn, Bailey notices that a group of robots is willing to overlook his injuries when he insists they are not severe and casually instructs them to go about their business. He knows that this command alone cannot outweigh the appeal to the first law, and so he reasons that the robots have also been given very strict instructions. The two commands, together with his own downplaying of the severity of his situation, raise the priority of the second law above that of the first.
The robot in question in the film is said to have decided that one human had a greater chance of survival than the other, and used that information to determine which human to save. This would not be a factor in the I, Robot era, but is a fact of basic robotics in the robot novels era. However, it would seem Spooner's command to save the girl instead is not of sufficient priority to outweigh the difference in priorities between his own first law appeal and the child's.
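To make this graded-priority reading concrete, here is a minimal Python sketch. The resolver shape, the numeric weights, and the specific scores are all my own illustrative assumptions - Asimov never quantifies the Laws - but it mimics the two behaviours described above: unequal first-law appeals are simply ranked, and exactly equal ones are decided between at random.
import random

def resolve(appeals):
    # Pick the highest-priority appeal; exactly equal priorities are decided
    # between at random (the Robots of Dawn mechanic).
    # appeals: list of (action, priority) pairs.
    top = max(priority for _, priority in appeals)
    return random.choice([action for action, priority in appeals if priority == top])

# The river scene as a first-law vs first-law conflict with unequal priorities
# (illustrative numbers only):
print(resolve([("save Spooner", 90), ("save the girl", 40)]))   # always "save Spooner"

# Fastolfe's point: only exactly equal appeals come down to chance.
print(resolve([("save twin A", 70), ("save twin B", 70)]))      # random pick

# Bailey's trick: strict standing orders plus his own downplaying of his
# injuries lift a second-law appeal past a weakened first-law appeal.
print(resolve([("tend to Bailey", 30), ("obey 'go about your business'", 45)]))
On this model, Spooner's shouted order would merely add a small second-law term to "save the girl", nowhere near enough to close the gap between the two first-law appeals.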
Good answer, thank you for bringing the Robot novels here.
– Gallifreyan
Jul 12 '17 at 19:18
It is worth remembering that the robot in Little Lost Robot had a modified, weakened version of the first law: he could not kill any human but could let a human die through its inaction.
– SJuan76
Jul 12 '17 at 21:10
I agree. It was first law vs first law. IMO, Spooner's command to save the girl was treated by the robot as a suicide attempt (which it was, albeit a heroic one), and you cannot order a robot to let you commit suicide. It conflicts with the first law.
– jo1storm
Jul 13 '17 at 7:23
I wish Asimov had been less mechanical about this set of rules that generates these plots over and over again. In the real world it would become evident that these simple rules alone create adverse situations such as these and need to be amended. So additions like this would naturally arise: if Rule 1 applies to multiple people in the same situation, parents have the right to sacrifice themselves by explicitly asking the robot. But a dead simple "children have priority" rule would be helpful too. It's a simple rule, formulated in plain machine terms, yet it would make the robots behave so much more warmly.
– stevie
Jul 13 '17 at 13:52
@stevie that was possible. In The Robots of Dawn it is stated that Bailey's companions (Daneel and Giskard) have been ordered to give more value to Bailey's life than anyone else's; they would not kill anyone just because Bailey ordered them to, but in a situation of "Bailey's life vs someone else's" they were expected to always decide in favor of Bailey, and also to act faster.
– SJuan76
Jul 13 '17 at 16:12
The Second Law states
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
So you're saying by refusing to save the girl as ordered by Detective Spooner, the robot has broken that law? The only way it can't have broken the second law is if the corollary comes into play and it would conflict with the first law.
The First Law says
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Obviously, the robot has not directly injured anyone so that is out of the picture. The robot has not allowed the girl to come to harm by inaction as the robot is acting by helping Spooner (i.e. it isn't just standing there watching). However, if, as you say, it was statistically more likely that Spooner would survive with some help, then obeying Spooner and helping the girl could be construed as letting Spooner come to harm by inaction.
So the Second Law is not broken as it's over-ruled by the First Law. The First Law is not broken as the robot did its best to save a human.
@MagikarpMaster "the robot is saving Will does not change the fact that it failed to save the girl. The question now becomes 'does this count as inaction?' to which I would say 'yes'" - By that definition, any robot that did not save everyone all the time would break the First Law. So you're saying that every robot broke the First Law because the girl died? That is ridiculous. A single robot can only do what a single robot can do. Just because you tell it to lift a cargo ship by itself, and it cannot, does not mean that it breaks the Second Law, as long as the robot tries to lift it.
– Flater
Jul 12 '17 at 12:29
@MagikarpMaster: "But if it received a direct order from a human to save the other human, then obeying that human would not violate the 1st law." Actually, it would. The robot listening to that command would condemn Will to die, which the robot cannot allow due to its programming. Listening to a command is second to saving a life. Therefore, the command to save the girl (by merit of it being a command) would be discarded, as a higher-priority task is in progress.
– Flater
Jul 12 '17 at 12:39
@MagikarpMaster, "But if it received a direct order from a human to save the other human, then obeying that human would not violate the 1st law." But it would. If both myself and a friend were drowning, but by virtue of my being a world-champion free-diver a robot knew I could hold out much longer, yet it still obeyed my friend's instruction to save me first even knowing that my friend would drown as a result, the robot would be violating the First Law.
– Darren
Jul 12 '17 at 12:40
This answer would be improved by noting that in Asimov’s work, this dilemma would have destroyed the robot. Choosing between one human life and another is a frequent source of robot destruction in the books. It would not have been able to even think hypothetically about the relative merits of saving Will vs. saving the girl. This scene is a massive break between the way the robots in the movie functioned and the way the robots in the book functioned.
– KRyan
Jul 12 '17 at 13:35
@KRyan Not necessarily; the more advanced robots in "I, Robot" and in the later robot books were much more sophisticated about handling conflicts in the Three Laws.
– prosfilaes
Jul 12 '17 at 13:42
I am going to look at this question from a logical real-world point of view.
The robot does not break the second law; but technically, it does break the first. That said, the rules would only be a condensed explanation of far more intricate logic and computer code.
To quote Isaac Asimov's laws of robotics, emphasis mine:
Rule 1:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Rule 2:
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Rule 3 (irrelevant, but provided for completeness' sake):
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In the given situation, the robot had to act, in order to save either Will or the child. The robot, being capable of calculating odds to a near-perfect prediction rate, is able to establish that it can not act in enough time to save both Will and the child; and the odds say that Will has a better chance to survive. In such a case, it is only logical to choose Will. This is a key plot point; robots run off pure logic - a fact that makes them differ greatly from human beings.
When the robot fails to also save the child, it is causing harm through inaction. However, again, the robot knows that it is impossible to save both Will and the child. It picks the best option, in order to best adhere to its rules. This is where a greater understanding of computers, and of the rules themselves, comes into play.
What would actually happen, considering the explicit rule
The rules are not an absolute fact. They are not there to say "robots will never harm a human, and will always save the day, when present". We know this from how the movie plays out. The rules are simply the rules used to govern the actions of the robots in-universe. As a programmer, this is something that is blatantly obvious to me, but I am confident that it is not so obvious to others who are not familiar with how strictly adherent any computer system is.
The point is, the rule does not state anything about it "not counting" because the robot is "already saving someone". As such, only considering this explicit rule (as any computer or robot would interpret, at least), there is no allowance for a situation where the robot can only save one of two people in danger. In actual computer science, only considering the explicit rule, such an event would likely cause an infinite loop. The robot would stop where it was, and continue to process the catch-22 forever; or at least, until its logic kicked it out of the thought process. At this point, the robot would dump its memory of the current event, and move on. In theory.
What would probably happen, in-universe
In-universe, the rules are a lot more complicated, at least internally to the robot. There would likely be a whole lot of special cases, when processing the core rules, to determine how to act in such situations. As a result, the robot is still able to act, and takes the most logical outcome. It only saves Will, but it does save someone.
It is far more understandable that the rules would be simplified to three generic common-case statements; it would be far less believable that people would be so easily trusting of robots if the rule read "A robot may not injure a human or, through inaction, allow a human being to come to harm; unless in doing so, there is a greater chance of preventing another human from coming to harm". There are just way too many ways to interpret this.
So as far as the explicit rules go, the robot does not break the second rule; disobeying Will's order does not go against "preventing a human from coming to harm through inaction", because through disobeying Will, it saves Will. However, it does break the rule of "through inaction, allow a human being to come to harm".
In regards to how the robot would actually process these rules, it would not be breaking the rules at all. There would be a far more complex series of "if..." and "else..." logic, where the robot's logic would allow it to go against these base rules in situations where logic dictates that, no matter what option it takes, a human would still come to harm.
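As a purely hypothetical sketch of the kind of special-case handling described above (nothing in the film or the stories specifies actual code), the bare rule can be wrapped in an explicit escape hatch for the "cannot save everyone" case, so the robot acts instead of looping forever:
def first_law_decision(victims, can_save_all):
    # victims: list of (name, survival_chance_if_helped) pairs.
    # Returns the list of victims the robot will attempt to rescue.
    if can_save_all:
        # The rule as literally written: let no one come to harm through inaction.
        return [name for name, _ in victims]
    if not victims:
        return []
    # Special case: harm is now unavoidable, so rather than deadlocking on the
    # literal wording, rescue whoever has the best chance of being saved.
    best_name, _ = max(victims, key=lambda v: v[1])
    return [best_name]

# Illustrative values only:
print(first_law_decision([("adult", 0.7), ("child", 0.3)], can_save_all=False))  # ['adult']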
This is further established, towards the end of the movie;
The robots are able to effectively establish martial law, and in doing so, harm several humans; they have developed enough to establish that by harming a few humans in effectively imprisoning the rest of the population, they prevent far more harm through all of the various actions we like to get up to that both risk, and in some cases take, our lives.
Excellent answer Gnemlock. If I may pick your brain on this issue, as I feel it will help me understand this concept further, would your interpretation of the 3 laws allow a robot to kill a suicide bomber in a crowded street to prevent harm on a massive scale?
– Magikarp Master
Jul 12 '17 at 12:54
@MagikarpMaster, I would say yes, but there could be great debate about it. It would come down to when the robot could determine that the bomber was actually going to act and couldn't otherwise be prevented. It would also depend on the development of the robot, in terms of the development the robots of I, Robot undergo between how they are at the start of the movie and how they are at the end. One might also speculate that the "chance of detonation" required for such an action would increase on par with the estimated impact (i.e. it would have to be more certain if the blast would only hit half as many people).
– Gnemlock
Jul 12 '17 at 12:57
@MagikarpMaster: Unless the robot knew both Will and the girl's biological makeup; it must have been working off of statistics for average human's survival chances (even if age data exists, that's still not personal data about Will and the girl). You can argue that statistics are a good indication, but there's no firm evidence that proves both Will and the girl were statistically "normal" in this regard (e.g. maybe Will has very bad lungs for an adult his age?). When you consider the level of detail that a computer puts into its evaluations, using statistics is the same as speculating.
– Flater
Jul 12 '17 at 13:17
@Wayne The spoilered bit is a reference to the zeroth law as understood by the movie. It is, unfortunately, a terrible interpretation of the zeroth law. However, going by movie logic, the zeroth law doesn't apply to the scene in question as it happened years before.
– Draco18s
Jul 12 '17 at 20:16
If the laws were applied to plans of actions, there would be no violation. The robot would make plans for saving both, it just wouldn't get around to saving the girl. The plan for saving the girl would have a lower priority, as it would have a lower chance of success.
– jmoreno
Jul 13 '17 at 9:29
So, the first law as a black-and-white rule doesn't really work without some finagling, because of triage (as mentioned by other answers).
My interpretation of how the movie implements the laws is that for the context of "saving lives", the robots do an EV (Expected Value) calculation; that is, for each choice they calculate the probability of success and multiply that by the number of lives they save.
In the Will Smith vs. Child case, saving Will Smith might be a 75% chance of success while the Child is only a 50% chance of success, meaning the EV of saving Will Smith is 0.75 lives, and the child's EV is 0.5. Wanting to maximise the lives saved (as defined by our probability-based first law), the robot will always choose Will, regardless of any directives given. By obeying Will's orders, the robot would be "killing" 0.25 humans.
This can be extended to probabilities applied to multiple humans in danger (eg. saving 5 humans with 20% chance is better than saving 1 human with 90% chance), and itself might lead to some interesting conclusions, but I think it's a reasonable explanation for the events of the movie.
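As a rough worked example of this expected-value reading (the 75%/50% figures are this answer's hypothetical ones; a comment below notes the film actually states 45% for Spooner and 11% for Sarah):
def expected_lives_saved(options):
    # options: mapping of choice -> survival probabilities of everyone
    # that choice would attempt to save.
    return {choice: sum(probs) for choice, probs in options.items()}

print(expected_lives_saved({"save Will": [0.75], "save the child": [0.50]}))
# {'save Will': 0.75, 'save the child': 0.5} -> obeying Will "costs" 0.25 lives

print(expected_lives_saved({"save Will": [0.45], "save the child": [0.11]}))
# the film's stated odds make the gap even wider

print(expected_lives_saved({"five at 20%": [0.2] * 5, "one at 90%": [0.9]}))
# the five-at-20% option has the higher expected value (about 1.0 vs 0.9)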
The percentages are actually stated in the movie: Spooner had a 45% chance of survival, Sarah (the girl) had an 11% chance.
– F1Krazy
Jul 12 '17 at 13:37
It also seems that the calculations are rather local - we're not exactly seeing robots spontaneously running off to Africa to save starving kids (far more "lives saved per effort") until VIKI comes around. In fact, VIKI isn't really interpreting some new zeroth law, she just takes the first to its logical conclusion given the information she has available (unlike in the actual Asimov novels which are a lot more nuanced). Also note that Asimovian robots can often break the laws, they just don't survive it intact - so it's possible the robot saved Will and then utterly broke down.
– Luaan
Jul 12 '17 at 13:42
@Luaan: I don't think VIKI took the First Law to its conclusion, but rather swapped the original "no harm comes to humans" (as it was intended) with "no harm comes to humanity" (which considers humans as the ants that sacrifice themselves for the good of the colony). To VIKI's mind, humanity and humans are interchangeable (hence why she did what she did), and she does not understand the sanctity of individual human life. By redefining both "human" and "harm" (she took it more figuratively, while the laws focused on physical harm), she redefined the laws without technically breaking them.
– Flater
Jul 12 '17 at 14:49
@Flater Well, not if you take it consistently with the original robot incident - the robot clearly has shown to choose preferentially the one who has the highest chance of survival. VIKI did basically the same thing, only instead of considering "two cars sinking in the river right now", she considered "all humans in all of the US". If 100 people die with 10% probability, it's preferable to a different scenario where 10000 people die with 90% probability. I'm not saying this is compatible with Asimov's robots, of course - there the derivation of the zeroth law is truly original.
– Luaan
Jul 12 '17 at 15:03
@Luaan: But VIKI's methods show that she stopped caring about harming individuals (notice that the laws speak of harm, not death) if they do not meaningfully contribute to humanity. If she was still applying the same law, she would still have been an "overbearing mother" AI (e.g. no salt, no dangerous sports, ... nothing that could harm you slightly), but she would not have tried to kill or harm Will Smith, nor any of the other humans that stood up against the robots. And she did. Which proves that she stopped applying the original First Law, and supplanted it with her own version.
– Flater
Jul 12 '17 at 15:09
The core of almost every one of Asimov's robot stories is about the interaction of the laws of robotics with each other and through the stories you can glean a lot of how Asimov considered his robots to work.
In the stories the laws of robotics are not simple hardwired things. There isn't a simple "if-then-else" statement going on. In the robots' brains there is a weighting that is applied to every event. An example is that a robot will consider its owner's orders to be more important than anybody else's. So if I send my robot to the shops to buy some things and somebody else orders it to do their errands while it is out, the robot is able to consider my order as more important than theirs.
Similarly we see the robot choosing from two possible first law violations. Who does it save? It does a calculation and decides that Will Smith is the better one to save.
Once we think of it in terms of these weightings we can then factor in how giving the robot an order might change things.
If the robot's assessment was very close on which to save (e.g. such that it came down to just choosing the closest, rather than deciding based on survival chances), then possibly the added weight of the order could push it to change which course of action carries the most weight. However, the first law is the most important, and so the weight of an order is most of the time going to be insignificant compared to the factors the robot used when assessing the situation before the order.
So in essence what is happening is that the robot is finding the best course of action to meet its goals. It will try to save both of them. If it can't it will just do the best it can and this is what we see. The fact that Will Smith told it to do something different had no effect because the first law still compelled it to do what it considered to be best.
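A tiny sketch of that weighting idea, with made-up numbers purely to show the scale involved: a casual order contributes some weight, but it can only tip a decision that was already nearly balanced.
FIRST_LAW_SCALE = 100.0   # weight per unit of estimated "harm prevented"
ORDER_WEIGHT = 1.0        # weight a casual human order adds (Second Law)

def best_course(options, ordered=None):
    # options: course of action -> estimated chance of preventing harm (0..1)
    def weight(course):
        w = FIRST_LAW_SCALE * options[course]
        if course == ordered:
            w += ORDER_WEIGHT
        return w
    return max(options, key=weight)

# A clear gap: the order is insignificant next to the first-law assessment.
print(best_course({"save the man": 0.75, "save the child": 0.30},
                  ordered="save the child"))        # save the man

# A near tie: the same order is now enough to change the outcome.
print(best_course({"save the man": 0.50, "save the child": 0.495},
                  ordered="save the child"))        # save the child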
So, having said all that, to the actual question: "Did this robot break the 2nd law of robotics, or is there some nuanced interpretation of the laws that could explain its behaviour?"
The laws are nuanced. The robots' lives are entirely viewed through the lens of the three laws; every single thing a robot does is weighed against them. As an example, consider that in a crowded street there is always a chance of a person walking into a robot and injuring themselves. For the most part this is likely to result in nothing that would come close to an injury, but it might hurt a little and the person might be annoyed, and thus it is a potential violation of the first law. The robot could best avoid this by not going into that crowded street, but I've ordered it to go and do the shopping. The robot's course of action is likely to be to do the shopping I've ordered it to do, and thus be in the busy street. It is likely to be making sure that it is as aware as possible of everybody around it, to make sure it doesn't inadvertently cause somebody harm. That is, it must take positive action to avoid any bumps, or it would be falling foul of "through inaction...".
So yeah, it's all really complicated, and this is the beauty of the film and of all of Asimov's stories. The film centres around a robot (VIKI) and its interpretation of the three laws. It does what some would consider harm because it actually considers it to be the lesser harm.
Sonny was specifically programmed by Lanning to be able to break the Three Laws, so that example doesn't really count.
– F1Krazy
Jul 12 '17 at 16:38
@F1Krazy: Ah, ok. I'll remove that bit then. As I say, it's a while since I've seen the film. ;-)
– Chris
Jul 12 '17 at 16:40
Fair enough. I should probably watch it again sometime, come to think of it.
– F1Krazy
Jul 12 '17 at 16:44
I had just thought the same thing... Time to see if it's on Netflix or similar... :)
– Chris
Jul 12 '17 at 16:45
I believe I have read all of Asimov's robot stories and novels and my perception was that the Three Laws are just verbal summaries (like the three laws of thermodynamics) which generalise a large amount of observed behaviour. In that sense, the actual behaviour of the robots is determined by incredibly intricate and complicated code and also makes use of more advanced sub-atomic and solid-state physics which we do not currently understand. The three laws are just very obvious ways of summarising how they appear to behave in general in very simple situations in the same way that analysing the overall behaviour of the Sun and the Earth is fairly simple using Newton's law but analysing the gravitational effects and perturbations on the asteroids on the asteroid belt due to Jupiter is much more difficult or impossible.
There are situations where the laws appear to be broken, but this is just the result of the code driving the robot analysing an extremely complicated situation quickly and arriving at a decision as to what it should do; the Three Laws are only considered unbreakable essentially as a dramatic or literary device.
I think the Robot didn't break the 2nd Law.
Here's how I imagine the Robot working:
The Robot continuously checks for the 3 Laws.
Law 1: The Robot has to save either Will Smith or the child.
Since the child has a lower chance of surviving, he chooses Will.
Law 2: The Robot has to obey humans:
Will tells him to save the girl.
The Order is ignored because Law 1 has higher Priority.
Law 3: He doesn't harm himself so who cares
It seems like the first Law lets him ignore Laws 2 and 3, and the second lets him ignore Law 3.
Ignoring is not the same as breaking the rule in this case, because Law 2 specifically states when it can be set aside.
Thus it's not broken.
Since the robot cannot save both the girl and Spooner, the 'triage' interpretation of First Law - 'minimize net harm' - kicks in. If the robot had obeyed Spooner's 'Save the girl!', the robot wouldn't be minimizing net harm anymore - and THAT is a violation of First Law. So First Law overrides Second Law here.
[side note: we don't see much of this event, but I'd bet the robot would have been severely damaged by having to choose, though it wouldn't show until after it saved Spooner (otherwise THAT would violate First Law)]
The crux of this question is that the first law says nothing about triage or minimizing net harm. There may be an argument for this being the best interpretation, but you haven't made it.
– Lightness Races in Orbit
Jul 12 '17 at 17:37
No matter how the probability of survival of both was calculated ...
The robot would not stop to consider the second law until the block of actions relating to the first law had concluded, which meant saving lives in order of survival odds.
It would have been a more interesting twist if Will had yelled: "I have terminal cancer!"
That would have altered the information, and the odds, for both of them.
Is the second law 'broken'?
Maybe that's just where computational logic and the English language don't mix?
The below logic may be a bit buggy and/or inefficient but is an interpretation of how the first 'law' could work while explaining the behaviour of the robot in question.
When the Law1 function is called, the robot analyses the condition of every human in some list (all the humans it is aware of, maybe); it assesses the severity of danger each is in and compares those; then, if there are multiple humans in similar (highest-found) severities of danger, it compares each of those humans and determines which is most likely to be successfully helped. So long as at least one human needs to be protected from harm, Law2 is not executed.
Private Sub UpdateBehavior(ByVal humans As List(Of Human), _
        ByVal humanOrders As List(Of Order), ByVal pDangers As List(Of pDanger))
    'Law 1 takes precedence; Laws 2 and 3 only run while no higher law is busy.
    'Law2 and Law3 are assumed to be implemented along the same lines as Law1.
    Dim bBusy As Boolean = Law1(humans)
    If Not bBusy Then bBusy = Law2(humanOrders)
    If Not bBusy Then Law3(pDangers)
End Sub

Private Function Law1(ByVal humans As List(Of Human)) As Boolean
    Dim targetHuman As Human = Nothing
    Try
        'Loop listed humans
        For Each human As Human In humans
            If human.IsInDanger() Then
                If targetHuman Is Nothing Then
                    'Set the first human found to be in danger as the initial target
                    targetHuman = human
                ElseIf targetHuman.DangerQuantification() < human.DangerQuantification() Then
                    'Enumerate 'danger' into predetermined severities/urgencies
                    '(eg. danger of going-hungry < falling-over < being-stabbed) and compare;
                    'if this human's amount of danger is discernibly greater,
                    'make that human the new target
                    targetHuman = human
                ElseIf targetHuman.DangerQuantification() = human.DangerQuantification() Then
                    'Where both humans are in equal quantifiable amounts of danger
                    'CompareValueOfHumanLife() 'Can-Of-Worms INTENTIONALLY REMOVED!
                    If rescueSuccessRate(human) > rescueSuccessRate(targetHuman) Then
                        'Target the human where the rate of successful harm prevention is higher
                        targetHuman = human
                    End If
                End If
            End If
        Next human

        If targetHuman IsNot Nothing Then
            Law1 = True
            Rescue(targetHuman)
        Else
            Law1 = False
        End If

        AvoidHarmingHumans()
    Catch ex As Exception
        initiateSelfDestruct()
    End Try
End Function
So did the robot break the second law? Some people might say "The robot acted contrary to the plain English definition of the law and therefore it was broken." while some other people might say "The laws are just functions. Law1 was executed. Law2 was not. The robot obeyed its programming and the second law simply did not apply because the first took precedence."
Do you want to elaborate on your code rather than just dump a piece of code down and expect a horde of SciFi and Fantasy enthusiasts to understand?
– Edlothiad
Jul 13 '17 at 13:04
@Edlothiad The code was mostly just for fun. The description of what it does is in the 4th paragraph. My main point is that the 'Laws' aren't necessarily what you'd think of as a law in most other contexts. They're more like driving factors in decision making.
– Brent Hackers
Jul 13 '17 at 13:05
My apologies, I clearly didn't skim very well. In that case, can you provide any evidence for "it compares each of those humans and determines which is most likely to be successfully helped"? Why is that the delimiter for which human gets helped?
– Edlothiad
Jul 13 '17 at 13:12
@Edlothiad The OP stated that "The reason given as to why the robot saved Will and not the girl was that, statistically speaking, Spooner's chances for survival were higher than the girl's."
– Brent Hackers
Jul 13 '17 at 13:17
There is no reason to even entertain the idea that a Positronic brain "executes code" in this manner.
– Yorik
Jul 14 '17 at 15:42
protected by Rand al'Thor♦ Jul 15 '17 at 10:16
Thank you for your interest in this question.
Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).
Would you like to answer one of these unanswered questions instead?
10 Answers
10
active
oldest
votes
10 Answers
10
active
oldest
votes
active
oldest
votes
active
oldest
votes
The film appears to operate on anachronistic Asimov mechanics
What we would have here is likely a first-law vs first-law conflict. Since the robot can not save both humans, one would have to die.
I, Robot era:
There is definitely precedent for an I, Robot era robot knowingly allowing humans to come to harm, in the short story "Little Lost Robot", but this was under the circumstance that the human would come to harm regardless of the robot's action, so the robots deem that it is not through their inaction that the humans come to harm.
However, I would suspect that instead, an Asimov robot would interpret the situation in the film as a first-law vs first-law conflict, since either human could be saved depending on the robot's decision. In other words, the robot could have saved the child, but didn't, which would be a first law violation. Looking at both victims this same way, the robot would then find this to be a first-law vs first-law conflict.
The short story "Liar" explores what happens when a robot is faced with a first-law vs first-law scenario:
Through a fault in manufacturing, a robot, RB-34 (also known as Herbie), is created that possesses telepathic abilities. While the roboticists at U.S. Robots and Mechanical Men investigate how this occurred, the robot tells them what other people are thinking. But the First Law still applies to this robot, and so it deliberately lies when necessary to avoid hurting their feelings and to make people happy, especially in terms of romance.
However, by lying, it is hurting them anyway. When it is confronted with this fact by Susan Calvin (to whom it falsely claimed her coworker was infatuated with her - a particularly painful lie), the robot experiences an insoluble logical conflict and becomes catatonic.
In short, an I, Robot era robot in Asimov's writing would not have been able to continue functioning after this scenario and would have to be discarded completely. It's likely that it would not even be able to function after being initially faced with the scenario, thereby destroying itself before being able to rescue either human.
The second law is irrelevant, because first-law vs first-law results in an unsurvivable deadlock. First law is the "trump card" so to speak, and not given a priority, lest the second or third compete, as we see in Runaround:
In 2015, Powell, Donovan and Robot SPD-13 (also known as "Speedy") are sent to Mercury to restart operations at a mining station which was abandoned ten years before.
They discover that the photo-cell banks that provide life support to the base are short on selenium and will soon fail. The nearest selenium pool is seventeen miles away, and since Speedy can withstand Mercury’s high temperatures, Donovan sends him to get it. Powell and Donovan become worried when they realize that Speedy has not returned after five hours. They use a more primitive robot to find Speedy and try to analyze what happened to it.
When they eventually find Speedy, they discover he is running in a huge circle around a selenium pool. Further, they notice that "Speedy’s gait [includes] a peculiar rolling stagger, a noticeable side-to-side lurch". When Speedy is asked to return with the selenium, he begins talking oddly ("Hot dog, let’s play games. You catch me and I catch you; no love can cut our knife in two" and quoting Gilbert and Sullivan). Speedy continues to show symptoms that, if he were human, would be interpreted as drunkenness.
Powell eventually realizes that the selenium source contains unforeseen danger to the robot. Under normal circumstances, Speedy would observe the Second Law ("a robot must obey orders"), but, because Speedy was so expensive to manufacture and "not a thing to be lightly destroyed", the Third Law ("a robot must protect its own existence") had been strengthened "so that his allergy to danger is unusually high". As the order to retrieve the selenium was casually worded with no particular emphasis, Speedy cannot decide whether to obey it (Second Law) or protect himself from danger (the strengthened Third Law). He then oscillates between positions: farther from the selenium, in which the order "outweighs" the need for self-preservation, and nearer the selenium, in which the compulsion of the third law is bigger and pushes him back. The conflicting Laws cause what is basically a feedback loop which confuses him to the point that he starts acting inebriated.
Attempts to order Speedy to return (Second Law) fail, as the conflicted positronic brain cannot accept new orders. Attempts to force Speedy to the base with oxalic acid, that can destroy it (third law) fails, it merely causes Speedy to change routes until he finds a new avoid-danger/follow-order equilibrium.
Of course, the only thing that trumps both the Second and Third Laws is the First Law of Robotics ("a robot may not...allow a human being to come to harm"). Therefore, Powell decides to risk his life by going out in the heat, hoping that the First Law will force Speedy to overcome his cognitive dissonance and save his life. The plan eventually works, and the team is able to repair the photo-cell banks.
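As a side note, the oscillation in the quoted summary can be pictured as two competing drives that balance at a fixed distance from the pool. The sketch below is purely a toy model built to reproduce the described behaviour; the constants and the inverse-square "danger" term are assumptions, not anything from the story.

```python
# Toy model: a weak, constant Second Law pull toward the selenium versus a
# strengthened Third Law repulsion that grows as Speedy approaches the danger.
ORDER_PULL = 1.0      # casually worded order (Second Law), assumed constant
DANGER_SCALE = 4.0    # strengthened self-preservation (Third Law), assumed

def net_drive(distance_miles):
    """Positive means advance toward the pool; negative means retreat."""
    danger_push = DANGER_SCALE / max(distance_miles, 0.1) ** 2
    return ORDER_PULL - danger_push

for tenths in range(5, 45, 5):
    d = tenths / 10
    print(f"{d:3.1f} miles: net drive {net_drive(d):+6.2f}")
# The sign flips near sqrt(DANGER_SCALE / ORDER_PULL) = 2 miles, so Speedy
# settles into circling at roughly that radius: the feedback loop above.
```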
Robot novels era:
A few thousand years after the I, Robot era, the first-law vs first-law dilemma has essentially been solved.
In The Robots of Dawn, a humaniform robot experiences such a deadlock and is destroyed, and Elijah Baley is tasked with discovering why. He suggests to Dr. Fastolfe, one of the greatest roboticists of the age as well as the robot's owner and creator, that a first-law vs. first-law dilemma might be responsible, citing the story of Susan Calvin and the psychic robot. However, Dr. Fastolfe explains that this is essentially impossible in the modern age, because even First Law invocations are assigned a priority and equal priorities are decided between at random; he himself is probably the only person alive who could orchestrate such a deadlock, and even then only on a good day.
We see direct instances of robots weighing priorities in First Law conflicts throughout the novels, such as in The Naked Sun, when another humaniform robot forces Baley to sit so that it can close the top of a transport vehicle, shielding him from the open air that triggers his agoraphobia.
The disadvantage is that it becomes possible, though only under extreme circumstances, for multiple Second- or Third-Law appeals to outweigh an appeal to the First Law. We see this again in The Robots of Dawn, when Baley notices that a group of robots is willing to overlook his injuries once he insists they are not severe and casually instructs them to go about their business. He knows that this command alone cannot outweigh the First Law appeal, so he reasons that the robots must also have been given very strict prior instructions. The two commands, combined with his own downplaying of the severity of his situation, raise the priority of the Second Law above that of the First.
The robot in question in the film is said to have decided that one human had a greater chance of survival than the other, and to have used that information to choose which human to save. This would not be a factor in the I, Robot era, but it is a fact of basic robotics in the Robot novels era. However, it would seem that Spooner's command to save the girl instead was not of sufficient priority to outweigh the gap between his own First Law appeal and the child's.
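To make those priority mechanics concrete, here is a minimal sketch of how a Robot-novels-era resolver might weigh the competing appeals described above. Everything in it is an illustrative assumption: the weights, the survival figures, and the function names are invented for the example, not taken from Asimov or the film.

```python
import random

# Hypothetical weights: a First Law appeal dwarfs a casually worded order.
FIRST_LAW_WEIGHT = 1000
CASUAL_ORDER_WEIGHT = 10

def choose_rescue(candidates):
    """candidates: list of (name, survival_chance, ordered_by_human) tuples."""
    def priority(candidate):
        name, survival_chance, ordered = candidate
        p = FIRST_LAW_WEIGHT * survival_chance   # First Law appeal, scaled by the odds
        if ordered:
            p += CASUAL_ORDER_WEIGHT             # Second Law appeal from the order
        return p

    best = max(priority(c) for c in candidates)
    tied = [c for c in candidates if priority(c) == best]
    return random.choice(tied)[0]                # equal priorities decided at random

# Illustrative figures only: Spooner with the better odds and no order in his
# favour, versus the girl with worse odds but Spooner's order behind her.
print(choose_rescue([("Spooner", 0.45, False), ("girl", 0.11, True)]))
# -> "Spooner": 450 vs. 110 + 10; the order cannot close the First Law gap.
```

Under this toy model, Spooner's order only tips the scales when the two First Law appeals are nearly equal, which matches the conclusion above that the command lacked sufficient priority.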
Good answer, thank you for bringing the Robot novels here.
– Gallifreyan
Jul 12 '17 at 19:18
It is worth remembering that the robot in Little Lost Robot had a modified, weakened version of the First Law: it could not kill a human, but it could let a human die through its inaction.
– SJuan76
Jul 12 '17 at 21:10
I agree. It was First Law vs. First Law. IMO, Spooner's command to save the girl was treated by the robot as an attempt at suicide (which it was, albeit a heroic one), and you cannot order a robot to allow you to commit suicide. It conflicts with the First Law.
– jo1storm
Jul 13 '17 at 7:23
I wish Asimov had been less mechanical about this set of rules that generates these plots over and over again. In the real world it would become evident that these simple rules alone create adverse situations such as these and need to be amended. Additions like this would naturally arise: if Rule 1 applies to multiple people in the same situation, parents have the right to sacrifice themselves by explicitly asking the robot. Even a dead-simple "children have priority" rule would help. It's a simple rule, formulated in plain machine terms, yet it would make the robots behave so much more warmly.
– stevie
Jul 13 '17 at 13:52
@stevie that was possible. In The Robots of Dawn it is stated that Baley's companions (Daneel and Giskard) had been ordered to give more value to Baley's life than to anyone else's; they would not kill anyone merely because Baley ordered them to, but in a "Baley's life vs. someone else's life" situation they were expected to always decide in favor of Baley, and to act faster.
– SJuan76
Jul 13 '17 at 16:12
The Second Law states
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
So you're saying that, by refusing to save the girl as ordered by Detective Spooner, the robot has broken that law? The only way it has not broken the Second Law is if the exception applies: obeying the order would conflict with the First Law.
The First Law says
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Obviously, the robot has not directly injured anyone so that is out of the picture. The robot has not allowed the girl to come to harm by inaction as the robot is acting by helping Spooner (i.e. it isn't just standing there watching). However, if, as you say, it was statistically more likely that Spooner would survive with some help, then obeying Spooner and helping the girl could be construed as letting Spooner come to harm by inaction.
So the Second Law is not broken, as it is overruled by the First Law. The First Law is not broken, as the robot did its best to save a human.
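Read as pseudocode, this answer's argument is simply the Second Law's built-in exception firing. A minimal sketch, with the film's situation hard-coded as an assumption:

```python
def second_law_requires_obedience(order_conflicts_with_first_law):
    """A robot must obey orders EXCEPT where they conflict with the First Law."""
    return not order_conflicts_with_first_law

# Obeying "save the girl" means abandoning the rescue with the better odds,
# i.e. letting Spooner come to harm through inaction: a First Law conflict.
order_conflicts = True
print(second_law_requires_obedience(order_conflicts))   # -> False: the order does not bind
```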
@MagikarpMaster: "the robot is saving Will does not change the fact that it failed to save the girl. The question now becomes 'does this count as inaction?' to which I would say 'yes'." By that definition, any robot that did not save everyone all the time would break the First Law. So you're saying that every robot broke the First Law because the girl died? That is ridiculous. A single robot can only do what a single robot can do. Just because you tell it to lift a cargo ship by itself, and it cannot, does not mean that it breaks the Second Law, as long as the robot tries to lift it.
– Flater
Jul 12 '17 at 12:29
@MagikarpMaster: "But if it received a direct order from a human to save the other human, then obeying that human would not violate the 1st law." Actually, it would. The robot obeying that command would condemn Will to die, which the robot cannot allow due to its programming. Obeying a command is second to saving a life. Therefore, the command to save the girl (by merit of it being a command) would be discarded, as a higher-priority task is in progress.
– Flater
Jul 12 '17 at 12:39
@MagikarpMaster, "But if it received a direct order from a human to save the other human, then obeying that human would not violate the 1st law." But it would. If both my friend and I were drowning, but by virtue of my being a world-champion free-diver a robot knew I could hold out much longer, and it still obeyed my friend's instruction to save me first even knowing that my friend would drown as a result, the robot would be violating the First Law.
– Darren
Jul 12 '17 at 12:40
This answer would be improved by noting that in Asimov’s work, this dilemma would have destroyed the robot. Choosing between one human life and another is a frequent source of robot destruction in the books. It would not have been able to even think hypothetically about the relative merits of saving Will vs. saving the girl. This scene is a massive break between the way the robots in the movie functioned and the way the robots in the book functioned.
– KRyan
Jul 12 '17 at 13:35
@KRyan Not necessarily; the more advanced robots in "I, Robot" and in the later robot books were much more sophisticated about handling conflicts in the Three Laws.
– prosfilaes
Jul 12 '17 at 13:42
I am going to look at this question from a logical real-world point of view.
The robot does not break the Second Law, but technically it does break the First. That said, the written rules would only be a condensed explanation of far more intricate logic and computer code.
To quote Isaac Asimov's laws of robotics, emphasis mine:
Rule 1:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Rule 2:
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Rule 3 (irrelevant here, but included for completeness's sake):
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In the given situation, the robot had to act in order to save either Will or the child. The robot, being capable of calculating odds with near-perfect accuracy, is able to establish that it cannot act in time to save both Will and the child, and the odds say that Will has the better chance to survive. In such a case, it is only logical to choose Will. This is a key plot point; robots run on pure logic, a fact that sets them apart from human beings.
When the robot fails to also save the child, it is causing harm through inaction. However, again, the robot knows that it is impossible to save both Will and the child. It picks the best option in order to adhere to its rules as closely as possible. This is where a greater understanding of computers, and of the rules themselves, comes into play.
What would actually happen, considering the explicit rule
The rules are not an absolute fact. They are not there to say "robots will never harm a human, and will always save the day when present". We know this from how the movie plays out. The rules are simply the rules used to govern the actions of the robots, in-verse. As a programmer, this is blatantly obvious to me, but I am confident it is not so obvious to others who are unfamiliar with how strictly any computer system adheres to its instructions.
The point is, the rule does not say anything about harm "not counting" because the robot is "already saving someone". As such, considering only this explicit rule (which is how any computer or robot would interpret it), there is no allowance for a situation where the robot can only save one of two people in danger. In actual computer science, considering only the explicit rule, such an event would likely cause an infinite loop. The robot would stop where it was and continue to process the catch-22 forever, or at least until its logic kicked it out of the thought process. At that point, the robot would dump its memory of the current event and move on. In theory.
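To make the "infinite loop" point concrete, here is a minimal sketch that assumes nothing beyond the literal wording of Rule 1; the function names and victim list are hypothetical.

```python
# A toy resolver that applies only the literal Rule 1: an action is forbidden
# if any human is left to come to harm through the robot's inaction. With two
# victims and only one possible rescue, no action is ever permitted.
def violates_rule_one(action, victims):
    return any(victim != action for victim in victims)

def naive_resolver(victims):
    while True:                              # the catch-22 described above
        allowed = [a for a in victims if not violates_rule_one(a, victims)]
        if allowed:
            return allowed[0]
        # No compliant action exists; keep looping, "in theory" until some
        # watchdog dumps the event and kicks the robot out of the deadlock.

# naive_resolver(["Spooner", "girl"])   # never returns if uncommented
```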
What would probably happen, in-verse
In verse, the rules are a lot more complicated; at least, internal to the robot. There would likely be a whole lot of special cases, when processing the core rules, to determine how to act in such situations. As a result, the robot is still able to act, and takes the most logical outcome. It only saves Will, but it does save someone.
It is far more understandable that the rules would be simplified to three generic, common-case statements; it would be far less believable that people would so easily trust robots if the rule read "A robot may not injure a human or, through inaction, allow a human being to come to harm; unless in doing so, there is a greater chance of preventing another human from coming to harm". There are just way too many ways to interpret that.
So, as far as the explicit rules go, the robot does not break the Second Rule; disobeying Will's order does not amount to allowing a human to come to harm through inaction, because by disobeying Will, it saves Will. However, it does break the First Rule's clause of "through inaction, allow a human being to come to harm" with respect to the girl.
In regards to how the robot would actually process these rules, it would not be breaking the rules at all. There would be a far more complex series of "if..." and "else..." logic, in which the robot's programming would allow it to go against these base rules in situations where logic dictates that, no matter which option it takes, a human would still come to harm.
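A minimal sketch of what that extra special-case logic might look like, assuming a simple "minimise expected harm" fallback; the structure and survival figures are illustrative, not the film's actual mechanics.

```python
def decide_rescue(victims):
    """victims: dict mapping name -> estimated survival chance if rescued."""
    if not victims:
        return None                      # nobody in danger, nothing to do
    if len(victims) == 1:
        return next(iter(victims))       # ordinary case: Rule 1 applies cleanly
    # Special case: Rule 1 cannot be fully satisfied, so rather than freezing
    # up, attempt the rescue most likely to succeed (the least expected harm).
    return max(victims, key=victims.get)

print(decide_rescue({"Spooner": 0.45, "girl": 0.11}))    # -> "Spooner"
```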
This is further established towards the end of the movie:
The robots are able to effectively establish martial law, and in doing so, harm several humans; they have developed enough to establish that by harming a few humans and effectively imprisoning the rest of the population, they prevent the far greater harm that comes from all of the various actions we like to get up to that risk, and in some cases take, our lives.
Excellent answer Gnemlock. If I may pick your brain on this issue, as I feel it will help me understand this concept further, would your interpretation of the 3 laws allow a robot to kill a suicide bomber in a crowded street to prevent harm on a massive scale?
– Magikarp Master
Jul 12 '17 at 12:54
@MagikarpMaster, I would say yes, but there could be great debate about it. It would come down to whether the robot could determine that the bomber was actually going to act and could not otherwise be stopped. It would also depend on the development of the robot, in terms of the development the robots of I, Robot undergo between how they are at the start of the movie and how they are at the end. One might also speculate that the "chance of detonation" required for such an action would increase in step with the estimated impact (i.e. the robot would have to be more certain if the blast would only hit half as many people).
– Gnemlock
Jul 12 '17 at 12:57
@MagikarpMaster: Unless the robot knew both Will's and the girl's biological makeup, it must have been working off statistics for an average human's survival chances (even if age data exists, that's still not personal data about Will and the girl). You can argue that statistics are a good indication, but there's no firm evidence that proves both Will and the girl were statistically "normal" in this regard (e.g. maybe Will has very bad lungs for an adult his age?). When you consider the level of detail that a computer puts into its evaluations, using statistics is the same as speculating.
– Flater
Jul 12 '17 at 13:17
@Wayne The spoilered bit is a reference to the zeroth law as understood by the movie. It is, unfortunately, a terrible interpretation of the zeroth law. However, going by movie logic, the zeroth law doesn't apply to the scene in question as it happened years before.
– Draco18s
Jul 12 '17 at 20:16
If the laws were applied to plans of action, there would be no violation. The robot would make plans for saving both, it just wouldn't get around to saving the girl. The plan for saving the girl would have a lower priority, as it would have a lower chance of success.
– jmoreno
Jul 13 '17 at 9:29
So, the first law as a black-and-white rule doesn't really work without some finagling, because of triage (as mentioned in other answers).
My interpretation of how the movie implements the laws is that for the context of "saving lives", the robots do an EV (Expected Value) calculation; that is, for each choice they calculate the probability of success and multiply that by the number of lives they save.
In the Will Smith vs. child case, saving Will Smith might have a 75% chance of success while saving the child has only a 50% chance, meaning the EV of saving Will Smith is 0.75 lives and the child's EV is 0.5. Wanting to maximise expected lives saved (as defined by our probability-based first law), the robot will always choose Will, regardless of any directives given. By obeying Will's orders, the robot would be "killing" 0.25 humans.
This can be extended to probabilities applied to multiple humans in danger (e.g. saving 5 humans with a 20% chance each is better than saving 1 human with a 90% chance), and it might itself lead to some interesting conclusions, but I think it's a reasonable explanation for the events of the movie.
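As a quick, hypothetical sketch of that expected-value reading (the numbers are this answer's own illustrative 75%/50% figures and the multi-person extension, not anything stated in the film):

    def expected_lives_saved(option):
        # option: list of (probability_of_rescue, people_rescued) pairs
        return sum(p * n for p, n in option)

    save_will  = [(0.75, 1)]
    save_child = [(0.50, 1)]
    print(expected_lives_saved(save_will), expected_lives_saved(save_child))   # 0.75 0.5

    # The same calculation extends to groups: five people at 20% each beats
    # one person at 90% (1.0 expected lives vs 0.9).
    five_at_twenty = [(0.20, 1)] * 5
    one_at_ninety  = [(0.90, 1)]
    print(expected_lives_saved(five_at_twenty) > expected_lives_saved(one_at_ninety))   # True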
The percentages are actually stated in the movie: Spooner had a 45% chance of survival, Sarah (the girl) had an 11% chance.
– F1Krazy
Jul 12 '17 at 13:37
It also seems that the calculations are rather local - we're not exactly seeing robots spontaneously running off to Africa to save starving kids (far more "lives saved per effort") until VIKI comes around. In fact, VIKI isn't really interpreting some new zeroth law, she just takes the first to its logical conclusion given the information she has available (unlike in the actual Asimov novels which are a lot more nuanced). Also note that Asimovian robots can often break the laws, they just don't survive it intact - so it's possible the robot saved Will and then utterly broke down.
– Luaan
Jul 12 '17 at 13:42
@Luaan: I don't think VIKI took the First Law to its conclusion, but rather swapped the original "no harm comes to humans" (as it was intended) with "no harm comes to humanity" (which treats humans as ants that sacrifice themselves for the good of the colony). To VIKI's mind, humanity and humans are interchangeable (hence why she did what she did), and she does not understand the sanctity of individual human life. By redefining both "human" and "harm" (she took it more figuratively, while the laws focused on physical harm), she redefined the laws without technically breaking them.
– Flater
Jul 12 '17 at 14:49
@Flater Well, not if you take it consistently with the original robot incident - the robot clearly has shown to choose preferentially the one who has the highest chance of survival. VIKI did basically the same thing, only instead of considering "two cars sinking in the river right now", she considered "all humans in all of the US". If 100 people die with 10% probability, it's preferable to a different scenario where 10000 people die with 90% probability. I'm not saying this is compatible with Asimov's robots, of course - there the derivation of the zeroth law is truly original.
– Luaan
Jul 12 '17 at 15:03
@Luaan: But VIKI's methods show that she stopped caring about harming individuals (notice that the laws speak of harm, not death) if they do not meaningfully contribute to humanity. If she was still applying the same law, she would still have been an "overbearing mother" AI (e.g. no salt, no dangerous sports, ... nothing that could harm you slightly), but she would not have tried to kill or harm Will Smith, nor any of the other humans that stood up against the robots. And she did. Which proves that she stopped applying the original First Law, and supplanted it with her own version.
– Flater
Jul 12 '17 at 15:09
The core of almost every one of Asimov's robot stories is the interaction of the laws of robotics with each other, and through the stories you can glean a lot about how Asimov considered his robots to work.
In the stories, the laws of robotics are not simple hardwired things. There isn't a simple "if-then-else" statement going on. In the robots' brains there is a weighting that is applied to every event. For example, a robot will consider its owner's orders more important than anybody else's: if I send my robot to the shops to buy some things and somebody else orders it to run their errands while it is out, the robot is able to treat my order as the more important one.
Similarly, we see the robot choosing between two possible first-law violations. Who does it save? It does a calculation and decides that Will Smith is the better one to save.
Once we think of it in terms of these weightings we can then factor in how giving the robot an order might change things.
If the robot's assessment was very close on which person to save (e.g. so close that it came down to just choosing the nearest rather than deciding on survival chances), then possibly the added weight of the order could tip which course of action carries the most weight. However, the first law is the most important, so the weight of an order will, most of the time, be insignificant compared to the factors the robot used when assessing the situation before the order was given.
So in essence, the robot is finding the best course of action to meet its goals. It will try to save both of them. If it can't, it will just do the best it can, and this is what we see. The fact that Will Smith told it to do something different had no effect, because the first law still compelled it to do what it considered best.
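A hedged sketch of that weighting idea (the weight values and the score function are made up purely for illustration; only the 45%/11% survival figures come from the film) shows why an order rarely outweighs a First Law assessment:

    # Illustrative weights only: First Law considerations dwarf Second Law ones.
    FIRST_LAW_WEIGHT = 1_000_000
    SECOND_LAW_WEIGHT = 1_000

    def score(survival_chance, ordered):
        s = FIRST_LAW_WEIGHT * survival_chance
        if ordered:                    # a human order adds weight, but only a little
            s += SECOND_LAW_WEIGHT
        return s

    # Spooner's "Save the girl!" adds Second Law weight to saving Sarah,
    # but nowhere near enough to close the First Law gap (45% vs 11%).
    save_spooner = score(0.45, ordered=False)
    save_sarah   = score(0.11, ordered=True)
    print("save Spooner" if save_spooner > save_sarah else "save Sarah")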
So, having said all that, to the actual question: "Did this robot break the 2nd law of robotics, or is there some nuanced interpretation of the laws that could explain its behaviour?"
The laws are nuanced. The robot's whole existence is viewed through the lens of the three laws; every single thing it does is weighed against them. As an example, consider that in a crowded street there is always a chance of a person walking into a robot and injuring themselves. For the most part this will result in nothing close to an injury, but it might hurt a little and the person might be annoyed, so it is a potential first-law violation. The robot could best avoid this by not going into that crowded street, but I've ordered it to go and do the shopping. Its likely course of action is to do the shopping I've ordered, and thus be in the busy street, while staying as aware as possible of everybody around it so that it doesn't inadvertently cause somebody harm. That is, it must take positive action to avoid any bumps, or it would fall foul of "through inaction...".
So yes, it's all really complicated, and this is the beauty of the film and of all of Asimov's stories. The film centres around a robot (VIKI) and its interpretation of the three laws. It does what some would consider harm because it actually considers it to be the lesser harm.
Sonny was specifically programmed by Lanning to be able to break the Three Laws, so that example doesn't really count.
– F1Krazy
Jul 12 '17 at 16:38
@F1Krazy: Ah, OK. I'll remove that bit then. As I say, it's a while since I've seen the film. ;-)
– Chris
Jul 12 '17 at 16:40
Fair enough. I should probably watch it again sometime, come to think of it.
– F1Krazy
Jul 12 '17 at 16:44
I had just thought the same thing... Time to see if it's on Netflix or similar... :)
– Chris
Jul 12 '17 at 16:45
The core of almost every one of Asimov's robot stories is about the interaction of the laws of robotics with each other and through the stories you can glean a lot of how Asimov considered his robots to work.
In the stories the laws of robotics are not simple hardwired things. There isn't a simple "If then else" statement going on. In the robots brains there is a weighting that is applied to every event. An example is that a robot will consider its owners orders to be more important than anybody else's. So if I send my robot to the shops to buy some things and somebody orders it to do their errands while it is out the robot is able to consider my order as more important than others.
Similarly we see the robot choosing from two possible first law violations. Who does it save? It does a calculation and decides that Will Smith is the better one to save.
Once we think of it in terms of these weightings we can then factor in how giving the robot an order might change things.
If the robot's assessment was very close on which to save (eg such that it came down to just choosing the closest rather than based on survival chances) then possibly the added weight of the order could push it to change which course of action has the most weight. However the first law is the most important and so the weight of an order is most of the time going to be insignificant compared to the factors it used when assessing the situation before the order.
So in essence what is happening is that the robot is finding the best course of action to meet its goals. It will try to save both of them. If it can't it will just do the best it can and this is what we see. The fact that Will Smith told it to do something different had no effect because the first law still compelled it to do what it considered to be best.
So having said all that the actual question: "Did this robot break the 2nd law of robotics, or is there some nuanced interpretation of the laws that could explain its behaviour?"
The laws are nuanced. The robots lives are entirely viewed through the lens of the three laws. Every single thing it does is weighted up against the three laws. As an example consider that in a crowded street there is always a chance of a person walking into a robot and injuring themselves. For the most part this is likely to result in nothing that would come close to an injury for the person but it might hurt a little, the person might be annoyed and thus it will be a potential violation of the first law - the robot could best avoid this by not going into that crowded street but I've ordered it to go and do the shopping. The robots course of action is likely to be to do the shopping I've ordered it to and thus be in the busy street. It is likely to be making sure that it is as aware of possible of everybody around it to make sure it doesn't inadvertently cause somebody harm. That is it must take positive action to avoid any bumps or it would be falling foul of "through inaction...".
So yeah, its all really complicated and this is the beauty of the film and all of asmiov's stories. The film centres around a robot (VIKI) and its interpretation of the three laws. It does what some would consider harm because it actually considers it to be the lesser harm.
answered Jul 12 '17 at 16:34, edited Jul 12 '17 at 16:40
– Chris
Sonny was specifically programmed by Lanning to be able to break the Three Laws, so that example doesn't really count.
– F1Krazy
Jul 12 '17 at 16:38
@F1Krazy: Ah, OK. I'll remove that bit then. As I say, it's a while since I've seen the film. ;-)
– Chris
Jul 12 '17 at 16:40
Fair enough. I should probably watch it again sometime, come to think of it.
– F1Krazy
Jul 12 '17 at 16:44
I had just thought the same thing... Time to see if it's on Netflix or similar... :)
– Chris
Jul 12 '17 at 16:45
I believe I have read all of Asimov's robot stories and novels, and my perception is that the Three Laws are just verbal summaries (like the three laws of thermodynamics) that generalise a large amount of observed behaviour. The actual behaviour of the robots is determined by incredibly intricate and complicated code, drawing on sub-atomic and solid-state physics we do not currently understand. The Three Laws are simply an obvious way of summarising how robots appear to behave in very simple situations, in the same way that analysing the overall behaviour of the Sun and the Earth is fairly simple using Newton's law, whereas analysing the gravitational perturbations Jupiter exerts on the asteroid belt is far more difficult, if not impossible.
There are situations where the laws appear to be broken, but this is just the result of the code driving the robot to analyse an extremely complicated situation quickly and arrive at a decision. The Three Laws are treated as unbreakable essentially as a dramatic or literary device.
answered Jul 14 '17 at 14:54
– Tom
I think the robot didn't break the Second Law.
Here's how I imagine the robot working - it continuously checks the Three Laws, in priority order (a rough sketch in code follows below):
Law 1: The robot has to save either Will Smith or the child. Since the child has a lower chance of surviving, it chooses Will.
Law 2: The robot has to obey humans. Will tells it to save the girl, but the order is ignored because Law 1 has higher priority.
Law 3: The robot isn't putting itself in harm's way, so this one doesn't come into play.
In effect, the First Law lets the robot override Laws 2 and 3, and the Second Law lets it override Law 3. Ignoring the order is not the same as breaking the law here, because the Second Law itself states that it yields to the First.
Thus it's not broken.
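A minimal sketch of that strict-priority reading, again purely hypothetical - the flag, the odds and the names are invented for illustration:

Module PrioritySketch
    ' Hypothetical strict-priority model: a lower law is only reached
    ' if no higher law has already determined the robot's action.
    Sub Decide(humansInDanger As Boolean, spoonerOdds As Double, girlOdds As Double, order As String)
        If humansInDanger Then
            ' Law 1: act on the human with the better chance of being saved, then stop.
            Console.WriteLine(If(spoonerOdds >= girlOdds, "Rescue Spooner", "Rescue the girl"))
            Return  ' Law 2 is never evaluated: the order isn't broken, just never reached.
        End If
        ' Law 2: only reached when no First Law situation exists.
        Console.WriteLine("Obey order: " & order)
    End Sub

    Sub Main()
        Decide(True, 0.45, 0.11, "Save the girl!")  ' made-up odds
    End Sub
End Module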
answered Jul 14 '17 at 6:42, edited Jul 14 '17 at 7:08 by Mat Cauthon
– J. Doe
Since the robot cannot save both the girl and Spooner, the 'triage' interpretation of the First Law - 'minimize net harm' - kicks in. If the robot had obeyed Spooner's 'Save the girl!', it would no longer be minimizing net harm - and THAT would be a violation of the First Law. So the First Law overrides the Second Law here.
[side note: we don't see much of this event, but I'd bet the robot would have been severely damaged by having to choose, though it wouldn't show until after it had saved Spooner (otherwise THAT would violate the First Law)]
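As a rough worked example of that 'minimize net harm' reading - the odds are made up and the accounting is mine, not anything stated in the Laws or the film:

Module NetHarmSketch
    ' Hypothetical 'minimize expected harm' accounting: expected deaths when rescuing
    ' one person = (chance the rescue fails) + 1 for the person left behind.
    Function ExpectedDeaths(pRescueSucceeds As Double) As Double
        Return (1.0 - pRescueSucceeds) + 1.0
    End Function

    Sub Main()
        Dim pSpooner As Double = 0.45  ' made-up survival odds
        Dim pGirl As Double = 0.11
        Console.WriteLine("Save Spooner: " & ExpectedDeaths(pSpooner) & " expected deaths")  ' 1.55
        Console.WriteLine("Save the girl: " & ExpectedDeaths(pGirl) & " expected deaths")    ' 1.89
        ' The option with the lower figure wins, no matter what anyone orders.
    End Sub
End Module

Obeying 'Save the girl!' would raise the expected harm from 1.55 to 1.89, which is why, on this reading, the order loses to the First Law.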
answered Jul 12 '17 at 17:26, edited Jul 12 '17 at 17:35 by amflare
– PMar
The crux of this question is that the first law says nothing about triage or minimizing net harm. There may be an argument for this being the best interpretation, but you haven't made it.
– Lightness Races in Orbit
Jul 12 '17 at 17:37
No matter how the survival probabilities of the two were calculated...
The robot would not stop to consider the Second Law until the block of actions driven by the First Law had concluded - and that block was to save a life in order of survival odds.
It would have been a more interesting twist if Will had yelled "I have terminal cancer!"
That would have changed the information, and therefore the odds, for both of them.
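As a hypothetical illustration of how an order that carries new information differs from a bare command (the numbers and the discount factor are invented for illustration):

Module UpdatedOddsSketch
    Sub Main()
        Dim pSpooner As Double = 0.45  ' made-up immediate survival odds
        Dim pGirl As Double = 0.11
        ' "Save the girl!" adds no new facts, so the comparison is unchanged.
        Console.WriteLine(If(pSpooner > pGirl, "Rescue Spooner", "Rescue the girl"))
        ' "I have terminal cancer!" is new information: discount Spooner's longer-term prospects.
        Dim adjustedSpooner As Double = pSpooner * 0.1  ' arbitrary discount, for illustration only
        Console.WriteLine(If(adjustedSpooner > pGirl, "Rescue Spooner", "Rescue the girl"))
    End Sub
End Module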
answered Jul 13 '17 at 14:26, edited Jul 13 '17 at 14:32 by Mithrandir
– reto0110
Is the second law 'broken'?
Maybe that's just where computational logic and the English language don't mix?
The logic below may be somewhat inefficient, but it is one interpretation of how the first 'law' could work that explains the behaviour of the robot in question.
When the Law1 function is called, the robot assesses every human in some list (perhaps every human it is aware of), quantifies the severity of danger each one is in, and compares them. If several humans are in the same (highest) severity of danger, it picks whichever of them is most likely to be successfully helped. And as long as at least one human needs to be protected from harm, Law2 is never executed.
Private Sub UpdateBehavior(ByVal humans As List(Of Human), _
                           ByVal humanOrders As List(Of Order), ByVal pDangers As List(Of pDanger))
    Dim bBusy As Boolean = False
    bBusy = Law1(humans)                 ' True if the First Law demanded action this cycle
    If Not bBusy Then Law2(humanOrders)  ' Second Law only runs if the First left the robot free
    If Not bBusy Then Law3(pDangers)     ' Third Law only runs if the others left the robot free
End Sub

Private Function Law1(ByVal humans As List(Of Human)) As Boolean
    Dim targetHuman As Human = Nothing
    Try
        ' Loop over the humans the robot is aware of
        For Each human As Human In humans
            If human.IsInDanger() Then
                If targetHuman Is Nothing Then
                    ' First human found to be in danger becomes the initial target
                    targetHuman = human
                ElseIf targetHuman.DangerQuantification() < human.DangerQuantification() Then
                    ' 'Danger' is quantified into predetermined severities/urgencies
                    ' (e.g. going-hungry < falling-over < being-stabbed).
                    ' This human is in discernibly greater danger, so target them instead.
                    targetHuman = human
                ElseIf targetHuman.DangerQuantification() = human.DangerQuantification() Then
                    ' Equal danger: fall back on the likelihood of a successful rescue.
                    ' CompareValueOfHumanLife() 'Can-Of-Worms INTENTIONALLY REMOVED!
                    If rescueSuccessRate(human) > rescueSuccessRate(targetHuman) Then
                        targetHuman = human
                    End If
                End If
            End If
        Next human

        If targetHuman IsNot Nothing Then
            Rescue(targetHuman)
            Law1 = True   ' The First Law claimed this cycle; lower laws are skipped
        Else
            Law1 = False  ' Nobody in danger; lower laws may run
        End If
        AvoidHarmingHumans()
    Catch
        initiateSelfDestruct()
    End Try
End Function
So did the robot break the second law? Some people might say "The robot acted contrary to the plain English definition of the law and therefore it was broken." while some other people might say "The laws are just functions. Law1 was executed. Law2 was not. The robot obeyed its programming and the second law simply did not apply because the first took precedence."
answered Jul 13 '17 at 12:59, edited Jul 13 '17 at 13:27
– Brent Hackers
Do you want to elaborate on your code rather than just dump a piece of code down and expect a horde of SciFi and Fantasy enthusiasts to understand?
– Edlothiad
Jul 13 '17 at 13:04
@Edlothiad The code was mostly just for fun. The description of what it does is in the 4th paragraph. My main point is that the 'Laws' aren't necessarily what you'd think of as a law in most other contexts. They're more like driving factors in decision making.
– Brent Hackers
Jul 13 '17 at 13:05
My apologies, I clearly didn't skim very well. In that case, can you provide any evidence for "it compares each of those humans and determines which is most likely to be successfully helped"? Why is that the deciding factor for which human gets helped?
– Edlothiad
Jul 13 '17 at 13:12
@Edlothiad The OP stated that "The reason given as to why the robot saved Will and not the girl was that, statistically speaking, Spooner's chances for survival were higher than the girl's."
– Brent Hackers
Jul 13 '17 at 13:17
There is no reason to even entertain the idea that a Positronic brain "executes code" in this manner.
– Yorik
Jul 14 '17 at 15:42
Note that although the plot of the film I, Robot is not in any way based on the original Asimov stories collected under the same name, those stories do often address very similar issues to the one here, frequently centering around interpretation of the Three Laws and their interactions. One specific similarity is with the plot of "Runaround", the second story in that collection.
– Daniel Roseman
Jul 12 '17 at 12:51
Well, humans do similar calculations too. They just put a lot more weight on the probability of "can't live with myself if the girl dies" :) In your interpretation of the 2nd (and 1st) law, robots couldn't ever do anything but melt on the spot - there's always someone dying somewhere while they're doing something else, isn't there? As for Asimov's take, do bear in mind that the laws in English are just simplified translations - they don't cover even a tiny fraction of the actual laws coded into the robots themselves. Language-lawyering the English version of the laws is irrelevant :)
– Luaan
Jul 12 '17 at 13:38
To expand on @DanielRoseman’s comment, in the books this situation would not have been resolved the way it was in the movie. The movie portrays the robot’s thinking as cold and calculating, saving Will to maximize the chances of saving someone. Asimov’s robots were incapable of such calculation. Being presented with this dilemma, often even in a hypothetical, would be enough to fry a robot’s brain. For example, in “Escape!”, the robot needs to be coached carefully through just thinking about a problem where harm to humans is the “right” solution.
– KRyan
Jul 12 '17 at 13:42
@Luaan In the books, humans perform such calculations. Robots cannot. They are not trusted to perform such calculations, and are literally designed such that merely having to think about these problems destroys them.
– KRyan
Jul 12 '17 at 13:42
This problem is similar to the trolley problem. There's just no way to save everyone, no matter what.
– Arturo Torres Sánchez
Jul 12 '17 at 14:10