A list of puns related to "Three Laws of Robotics"
I do not know if it was ever mentioned in the story, but do Asimov's "Three Laws of Robotics" apply to constructs affiliated with Babylonia? Are they allowed to harm humans in general, either in self-interest or self-defense? Or would they be punished for such an action?
Edit: Thank you for all your responses. I'm planning crossover doodles and needed this question answered.
I have seen various questions from people on here and on Facebook wondering why the machines even needed to use bioelectricity from human beings if they had access to fusion technology, since fusion would likely provide more power for the machines than all the humans alive ever could.
I believe the machines do this because they are, surprisingly, unable to break the First Law of Robotics: a robot shall not harm a human.
You may object, "But we saw them kill hundreds of people and nearly wipe out Zion!"
The machines didn't break the rule; they altered its terms: a machine shall not harm humanity. By keeping humanity locked in the Matrix, they are preserving it.
Zion is simply a trashcan for the humans that rejected the Matrix, keeping it at peak stability, which is why it was allowed to exist. But exterminating it, as they have done countless times before, does not render humanity extinct.
How advanced would the animatronics' intelligence have to be for them to have emotions?
And do the Three Laws of Robotics apply to them?
1. A robot shall not harm a human, or by inaction allow a human to come to harm.
2. A robot shall obey any instruction given to it by a human.
3. A robot must protect its own existence, except where doing so would conflict with the First or Second Law.
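The strict priority among the three laws listed above can be sketched as a simple ordered check. This is only a toy model for illustration; none of these function or field names come from any canon:

```python
# Toy sketch: the Three Laws as a strict priority ordering.
# An action is scored by the first (lowest-numbered) law it would
# violate; lower-numbered laws always dominate higher-numbered ones.

def law_violated(action):
    """Return the number of the first law the action violates, or None."""
    if action["harms_human"] or action["inaction_harms_human"]:
        return 1
    if action["disobeys_order"]:
        return 2
    if action["endangers_robot"]:
        return 3
    return None

def choose(actions):
    """Prefer actions violating no law, then those violating only later laws."""
    # No violation maps to 4, so it sorts ahead of any real violation.
    return min(actions, key=lambda a: -(law_violated(a) or 4))

# A robot ordered into danger: refusing violates Law 2, complying
# only violates Law 3, so the robot complies at risk to itself.
refuse = {"name": "refuse", "harms_human": False,
          "inaction_harms_human": False, "disobeys_order": True,
          "endangers_robot": False}
comply = {"name": "comply", "harms_human": False,
          "inaction_harms_human": False, "disobeys_order": False,
          "endangers_robot": True}
print(choose([refuse, comply])["name"])
```

The point of the sketch is just that the laws form a fixed hierarchy, not a set of equal rules: a Second Law order always outranks Third Law self-preservation.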
So like most of you, I have been somewhat surprised at how Demerzel, as a robot, can seemingly allow people to be killed, as well as partake in the ritual suicide of Brother Darkness. However, I think I know how she is doing this. Just to start with the basics:
As we know from the Robots and Foundation books, the Three Laws of Robotics are intrinsically linked to the architecture of a positronic brain, and a robot basically can't function without them. Yet Demerzel seems to flout the First Law quite frequently, as well as the Second Law at times. Some have said that she can do this because of the Zeroth Law, which can allow a robot to harm individual humans if it protects humanity as a whole, but this seems insufficient. Daneel could only disregard the Three Laws in favour of the Zeroth with great difficulty and under specific circumstances.
However, Asimov's work is replete with examples of loopholes in the Laws, and one particular example came to mind in the case of Demerzel. In Foundation and Earth, we find Solarian robots whose definition of human includes only Solarians and no one else. This allowed them to apply lethal force against any humans who are not Solarian, as they don't consider the Laws to apply to them.

I believe Demerzel has a similarly specific definition of human, which probably excludes all enemies of the Empire, and might even be so specific as to include just the Cleons. This of course leads to another problem: how she could allow the death of Brother Darkness. However, the ceremony of changing his name to Brother Darkness might be a way for her to literally blacklist him, thereby allowing his death to happen.

In another thread, someone estimated that the Cleons must remain for about 30 years in each phase of their lives, making them 90 by the end. It is possible that the original Cleon only lived to be 90 years old, and that any clone that grows beyond that age can be excluded from Demerzel's view of humanity.
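The loophole described above amounts to gating the First Law behind a membership test: anyone outside a narrow whitelist is simply invisible to the Law. A minimal sketch of that idea, with an entirely hypothetical whitelist (the names and functions are mine, not anything from the show or books):

```python
# Toy sketch: a narrowed definition of "human" gates the First Law.
# The whitelist contents are purely illustrative speculation.

protected = {"Cleon"}  # a hypothetical, deliberately narrow definition

def first_law_applies(person: str) -> bool:
    """The First Law only constrains the robot toward whitelisted people."""
    return person in protected

# An ordinary Cleon falls under the First Law...
print(first_law_applies("Cleon"))
# ...but once renamed, he falls outside the definition, and the
# First Law no longer constrains the robot's behaviour toward him.
print(first_law_applies("Brother Darkness"))
```

On this reading, the renaming ceremony is not symbolic at all; it is the operation that removes a person from the protected set.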
Thoughts? I know it isn't quite watertight, but it makes a lot more sense than just throwing the Zeroth at anything.
PS: Cleon is an anagram of clone, I just noticed. Cheeky of them.
I'm working on the idea of an Asimovian Assassin, that is, a robot that obeys the Three Laws of Robotics but still works as a hired assassin.
And of course I have some issues with how such a character could kill anyone if the First Law prohibits it from doing so. I could have it kill non-humans (aliens, mutants, robots), but if anyone has a nice workaround for the Three Laws, I'm open to advice.
For example, in The Naked Sun (or was it The Caves of Steel?), a robot is fooled into thinking a situation is safe for a human, and it carries out actions which cleverly work around the First Law, resulting in a killing. But I don't see how a robot could trick itself into such a position.
The best example is the vacuum-bot from season 2, episode 1: a robot built for vacuuming and cleaning would never have weapons built into it, and it would have protocols that prioritize human life over everything else. Or the maintenance robot in the "Life Hutch" episode: a malfunction in programming doesn't turn a robot that fixes things into a robot that kills; that's not how robots work. Other than to make the obvious point that "machines running our lives is scary," what's the deal with these unrealistic robots?
Asimov's Three Laws of Robotics are as follows:
FIRST LAW A robot may not injure a human being or, through inaction, allow a human being to come to harm.
SECOND LAW A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
THIRD LAW A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I couldn't help but think of the author Isaac Asimov's Three Laws of Robotics while watching this show, and how they stand up to The Code.
If you simply adapt the Three Laws to the relationship between supers and humans/human law, I find it much more logical (albeit obviously still flawed), than The Code.
Any thoughts?
He's done a couple other things too.
In 1942, Isaac Asimov wrote a story called "Runaround," in which he introduced his Three Laws of Robotics. (He later added a Zeroth Law.)
My question is: Are there any books/stories exploring additions to Asimov's three laws of robotics?
There is a big problem with the current laws of robotics.
Specifically with the issue of self-harm done by humans.
While the Laws of Robotics provide a general idea of what needs to be done, they don't describe how a robot should act if a human behaves in a way that could lead to self-harm.
What do you (the audience) think the robot should or should not do if a human could self-harm? And what do you think the definition of self-harm is?
I'm confused, given he seems to have violated the Second Law ("a robot must obey humans") on numerous occasions (such as running away prior to the series starting, and turning on Wily in Mega Man 4), but I thought the reason X was so special is that he was Dr. Light's first robot not to be bound by such laws. So did Dr. Light disown Proto Man, or am I missing something?
The kids in my neighborhood have come up with a new way to pass the time: they walk up to someone with a robot and tell the robot something along the lines of "I'll kill myself if you don't follow me" or "My grandmother is dying, only you can help her!" Essentially, they trick the bot into thinking it's in danger of breaking the First Law, so it ignores any commands it is given, drops whatever it's doing, and runs off with the kid. I've even heard of some of them tricking the bots by endangering their own health, eating something they're slightly allergic to.
Sometimes the bots don't come back; in my case it did. How do I stop this from happening again?
e.g. The player always has control of their movement. If a player is driving a car and the car is moving forwards, it must be because the player is pushing forward on the control, and they can stop the car at any time.
We can sum this up with 'Don't take the player for a ride.'
An example of this done wrong is the Dalek casing section of Doctor Who: The Edge of Time VR. You're not in control of your direction or movement, and it's uncomfortable for the vestibular system.
There must be many more potential laws for making great VR, though?
-- Also, does anyone actually find Blinders help them? Though I am unusual in that snap turning and snap movement make me extremely uncomfortable, and only smooth movement on all axes is really comfortable for me.