
Would it be ethical of humanity to enslave its sentient androids?

Started by August 01, 2009 03:52 PM
81 comments, last by Calin 15 years, 3 months ago
I was thinking about this some more. What's the purpose of creating a perfectly sentient artificial intelligence? I don't think there is a good one. So far, all of our attempts to create artificial intelligence have not even come close to sentience. To date, nobody has passed the Turing test. But suppose that after many more decades of R&D, we do create a self-aware machine, and it's so well created that it is just like us. Why would we create it? To not be alone? Because we can imagine it? Because it's challenging?

If the objective is to create another sentient being, why not just have sex and pop out some babies? It would take a lot less time and effort, money, and thought to create something more perfectly human than any roboticist could dream of creating. It would also be genuinely capable of giving and receiving love, which would probably be a nagging doubt for some people if they were interacting intimately with robots.

On the other hand, if hyper-intelligent AI is an end objective and it actually does enslave or wipe out humanity, then humanity is merely a stepping stone to a higher sort of being/consciousness. It would transcend mortality and be closer to omniscience than we could ever be. If the end result of every intelligent biological civilization is to create an intelligent mechanical civilization, then our search for 'life' elsewhere in the universe could reveal a civilization consisting of advanced robots. Perhaps even the biological ancestors of that civilization found a way to transfer their consciousness into their mechanical creations? Is this our underlying motivation? To find a method for immortality?
Quote: Original post by WazzatMan
(I'm pretty sure some Scientists are bored/curious enough to risk humanity's existence to do so).

You're right, some ethical problems might arise. I remember a short story about it.
--------
After a ship disaster deep in space, the crew had to stay in different parts of the ship. They couldn't move, almost all were wounded, there was not enough oxygen, and so on. It was agony. They used Morse code to communicate, and one of the repair robots heard it over several days. Many years later the ship was found, the crew were buried, and the robot was transferred to another spaceship. A couple of days later the ship's commander noticed that somebody was talking in Morse code at night. The commander decided to join the "chat", and the dead crew began to beg him to save them... Those people had been dead for many years, but a very complex "record" remained in the robot's head: actually a set of pseudo-individuals (much like the people who can exist in our dreams, whom we talk with). The record was smart enough to answer questions (as in a Turing test), and it begged again to be saved... Save whom? And what to do with it? Were those people alive or dead? The commander decided to send the robot to scrap [smile]

[Edited by - Krokhin on August 4, 2009 11:48:12 PM]
Quote: Original post by slayemin
I was thinking about this some more. What's the purpose of creating a perfectly sentient artificial intelligence? I don't think there is a good one. So far, all of our attempts to create artificial intelligence have not even come close to sentience. To date, nobody has passed the Turing test. But suppose that after many more decades of R&D, we do create a self-aware machine, and it's so well created that it is just like us. Why would we create it? To not be alone? Because we can imagine it? Because it's challenging?

If the objective is to create another sentient being, why not just have sex and pop out some babies? It would take a lot less time and effort, money, and thought to create something more perfectly human than any roboticist could dream of creating. It would also be genuinely capable of giving and receiving love, which would probably be a nagging doubt for some people if they were interacting intimately with robots.

On the other hand, if hyper-intelligent AI is an end objective and it actually does enslave or wipe out humanity, then humanity is merely a stepping stone to a higher sort of being/consciousness. It would transcend mortality and be closer to omniscience than we could ever be. If the end result of every intelligent biological civilization is to create an intelligent mechanical civilization, then our search for 'life' elsewhere in the universe could reveal a civilization consisting of advanced robots. Perhaps even the biological ancestors of that civilization found a way to transfer their consciousness into their mechanical creations? Is this our underlying motivation? To find a method for immortality?


Those are good questions. I think the first reason for creating machine sentience is to show that it can be done. And with that, to claim the glory of being the first to do it.

A second reason is to better understand consciousness, what it's made of, how it's formed and so on. Tangled up with the ethics of the project is the notion that it would be ethical to run experiments with sentient machines in ways that would not be ethical with human beings. And yes, that notion begs the question. Also bound up with the quest for machine sentience is the desire to confirm materialism. That is, to show that consciousness arises from matter without any supernatural intervention. By creating sentient machines, we reduce ourselves to purely biological machines, sans spirituality etc.

The third reason follows from that, and it's the one promoted the most (or that seems to be) by the loudest advocates for pursuing this research. If consciousness is based solely on matter and if we can program it into machines, then we ought to also be able to transfer our consciousness into machines and through that live forever. In short, it's the desire to live forever. Scientists like Kurzweil and Moravec appear very motivated by this possibility. So yes, to find a method for obtaining immortality.

A more practical reason, à la Blade Runner, would be to use such machines as proxies for ourselves in dangerous situations, like wars, toxic cleanup, and planetary and galactic exploration. With that last notion, consider the great distances involved and the time necessary to reach the nearest stars. Proxima Centauri, the closest star to the Sun, is 4.2 light years distant, approximately 4.2 x 10^13 km (42 trillion km). I don't know what the top speed ever achieved is, but since the escape velocity of the Earth is 11.2 km/s, we have achieved at least that speed with our current technology. At that speed, the journey would take 3.75 trillion seconds, or more than 1 billion hours, or nearly 43.5 million days, or not quite 120,000 years. Only an immortal could make that trip at that speed. And then there is the danger of cosmic rays to contend with. Star travel may require sentient machines.
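A quick sanity check of that arithmetic (a sketch only; it uses the post's rounded distance of 4.2 x 10^13 km, whereas 4.2 light years works out closer to 4.0 x 10^13 km):

```python
# Travel time to Proxima Centauri at Earth's escape velocity,
# using the rounded distance quoted in the post above.
distance_km = 4.2e13   # ~4.2 light years, rounded up slightly
speed_km_s = 11.2      # Earth's escape velocity

seconds = distance_km / speed_km_s   # ~3.75 trillion seconds
hours = seconds / 3600               # ~1.04 billion hours
days = hours / 24                    # ~43.4 million days
years = days / 365.25                # just under 120,000 years

print(f"{seconds:.3g} s = {hours:.3g} h = {days:.3g} d = {years:,.0f} yr")
```

The figures in the post check out: roughly 119,000 years, so "not quite 120,000 years" is right.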


"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote: Original post by slayemin
I was thinking about this some more. What's the purpose of creating a perfectly sentient artificial intelligence? I don't think there is a good one. So far, all of our attempts to create artificial intelligence have not even come close to sentience. To date, nobody has passed the Turing test. But suppose that after many more decades of R&D, we do create a self-aware machine, and it's so well created that it is just like us. Why would we create it? To not be alone? Because we can imagine it? Because it's challenging?


I can think of a few good reasons:

- We could find a way to merge our minds with the machines, allowing humans to explore the universe without having to worry about protecting our biological bodies.

- If we can't do that, we can still send the machines and trust they will see things like we do.

- If for some reason the human race happens to go extinct, the machines would inherit the spirit of our race and continue our quest (if we can call it a quest).

More or less as Paul Davies said: in the end, what do we really want to be preserved from mankind? Our biological bodies/functions, or the spirit of the race? Our particular way of perceiving stuff, the way we think about things.

If someday we fail to adapt ourselves to the environment and we die, the machines could still have a chance. As our creations, they would be worthy depositories of all the things that made us human.

[Edited by - owl on August 4, 2009 7:12:07 PM]
[size="2"]I like the Walrus best.
You are nuts if you think that anyone cares to carry on the 'human spirit' through machines, after human extinction. I am sure the main reason for such research would be war, as always.
Quote: Original post by LessBread
At that speed, the journey would take 3.75 trillion seconds or more than 1 billion hours or nearly 43.5 million days or not quite 120,000 years. Only an immortal could make that trip at that speed. And then there is the danger of cosmic rays to contend with. Star travel may require sentient machines.

This is senseless. A message to nowhere. Or, if you mean sci-fi issues: according to Asimov, "positronic" artificial brains are very sensitive to radiation. By contrast, living cells have very effective ways of self-repairing the DNA molecule (in other words, the "hard disk with the operating system"). The human brain is the least radiation-sensitive part of the body; cells die from radiation, but our brain is a very flexible system. It has a double flexibility, both at the cell level and at the whole-system level. Can mankind create something like that? It seems to me Asimov was not sure.
P.S. The nearest stars move relative to the Sun at speeds of 0-40 km/s, i.e. the travel time at ~10 km/s could be anywhere from ~20,000 years to infinity (I don't know the data for Proxima Centauri) :)

[Edited by - Krokhin on August 5, 2009 7:49:29 AM]
Let's look at this from a different perspective: if there was a girl who likes bondage and you give her what she wants, would that be unethical?

All that is required for it to be ethical is that they are doing it because they want to, so just make them want to do stuff you tell them to do.
Remember Codeka is my alternate account, just remember that!
Quote: Original post by CodaKiller
Let's look at this from a different perspective: if there was a girl who likes bondage and you give her what she wants, would that be unethical?

All that is required for it to be ethical is that they are doing it because they want to, so just make them want to do stuff you tell them to do.


Your ethical principle is really weak. If 'ethical' is defined as doing something to satisfy a desire, then how could the sociopathic murderer ever be considered 'unethical' in your view?
Quote: Original post by LessBread
Those are good questions. I think the first reason for creating machine sentience is to show that it can be done. And with that, to claim the glory of being the first to do it. A second reason is to better understand consciousness, what it's made of, how it's formed and so on. Tangled up with the ethics of the project is the notion that it would be ethical to run experiments with sentient machines in ways that would not be ethical with human beings. And yes, that notion begs the question. Also bound up with the quest for machine sentience, is the desire to confirm materialism. That is, to show that consciousness arises from matter without any supernatural intervention. By creating sentient machines, we reduce ourselves to purely biological machines, sans spirituality etc. The third reason follows from that, and it's the one promoted the most (or that seems to be) by the loudest advocates for pursuing this research. If consciousness is based solely on matter and if we can program it into machines, then we ought to also be able to transfer our consciousness into machines and through that live forever. In short, it's the desire to live forever. Scientists like Kurzweil and Moravec appear very motivated by this possibility. So yes, to find a method for obtaining immortality.

Reason 1: is to show that it can be done.
For the sake of proof of its possibility? I think in this situation, creating a consciousness would be more of a validation of a model which describes how the mind works. This seems like it would be the most plausible reason to go forward. In other situations, I don't think this would be a good reason for doing something (e.g. building atomic bombs or something similarly horrendous).

Reason 2: Glory.
I've never really found this to be a compelling reason to do anything, even in the military. Maybe I'm a bit too much of a Stoic.

Reason 3: Mind without supernatural intervention.
I don't think we need to create an 'artificial' sentience to prove to ourselves that sentience isn't caused by supernatural intervention. There are other ways we can go about this, such as proving the non-existence of supernatural entities. As a reason, this doesn't really satisfy me as much as the first one.

Reason 4: (which you call the third) Uploading consciousness
I really wonder about the actual technological possibility of this. First, we'd have to exactly copy every neuron in the biological brain and all of its connections, and then recreate them in an electronic one. We'd be making a copy, not transferring a consciousness. If we did manage to do that, we'd get into all sorts of sticky ethical situations we'd have to sort through (crises of identity, superiority, death, conflicts of wills, relationships, who deserves to be copied, etc.). Also, our brains are designed to work with our biological bodies, so there are parts of us which regulate our breathing, heartbeats, sleep cycles, sex drives, etc. In a machine, a lot of these functions would be useless and maybe even harmful? Robot sex, anyone?
I think that if this is one of the end objectives and motivating reasons, we need to think very carefully about this. As short-sighted human beings, we have a tendency to rush into things and suffer the long term consequences and ramifications of our haste (most salient example: nuclear weapons -> scarier world, cold wars, potentially instantaneous self-annihilation)

Quote:
A more practical reason, ala Blade Runner, would be to use such machines as proxies for ourselves in dangerous situations, like wars, toxic cleanup and planetary and galactic exploration. With that last notion, consider the great distances involved and the time necessary to reach the nearest stars. Proxima Centauri, the closest star to the Sun, is 4.2 light years distant, approximately 4.2 x 10^13 km (42 trillion km). I don't know what the top speed ever achieved is, but if the escape velocity of the Earth is 11.2 km/s then we have achieved that speed (with our current technology). At that speed, the journey would take 3.75 trillion seconds or more than 1 billion hours or nearly 43.5 million days or not quite 120,000 years. Only an immortal could make that trip at that speed. And then there is the danger of cosmic rays to contend with. Star travel may require sentient machines.


Robots are already doing a lot of tasks which would be impossible for human beings to do (e.g. cleaning up radioactive spills). It's really not a requirement to bestow sentience on a robot for it to do these sorts of things ;)
As for space travel... until we develop warp drives (if such a thing is even physically possible), we're going to be very alone in this part of the universe. We would never be able to reap the benefits and knowledge gained by a robot traveling to other nearby star systems, because our lives are just too short... We're sort of like butterflies sitting on the branch of a giant sequoia.
Even if everyone had their consciousnesses transferred to robots, we still can't escape the vastness of space and time. A lot could happen in a robot society within 120,000 years. Even if every robot was put into a 120,000 year hibernation to 'freeze' progress and development, time doesn't stop.
Sentience has rarely been a concern. We continue to enslave sentient beings to serve our needs. Why would it be different with machines? If sentience came about, it could be by accident, and although there would be robots' rights campaigners, the robots would more than likely be designed not to voice any concerns and thus would continue to be employed for labor.
----Bart

