
Would it be ethical of humanity to enslave its sentient androids?

Started by August 01, 2009 03:52 PM
81 comments, last by Calin 15 years, 3 months ago
I left out another possible reason: greed. My earlier speculations pertained to the motivations of scientists. Greed operates, imo, to the extent that corporations are involved. They will want a new product that they can mass produce and sell in large volumes to make killer profits.


Quote: Original post by slayemin
Reason 1: is to show that it can be done.
For the sake of proof of its possibility? I think in this situation, creating a consciousness would be more of a validation of a model which describes how the mind works. This seems like it would be the most plausible reason to go forward. In other situations, I don't think this would be a good reason for doing something (i.e. atomic bombs or something horrendous).


Validation for a model of consciousness. That's plausible. But it seems secondary too, given the possibility that consciousness might be obtainable through models that aren't patterned after the human mind.

Quote: Original post by slayemin
Reason 2: Glory.
I've never really found this to be a compelling reason to do anything, even in the military. Maybe I'm a bit too much of a Stoic.


Or maybe you haven't spent much time around scientists. I've been reading a book on the history of particle physics. Here's a passage from an interview with Mel Schwartz (in 1960 he worked out what would be necessary to produce and detect a beam of neutrinos; in the 1970s he left high energy physics for Silicon Valley):

"We asked Schwartz about the pleasure of being first, of having clear priority to the discovery. "People are very selfish of their priorities in physics," he said. "You know, now I'm in a business [computer systems] where the measure is very simple. It's how many bucks can you bring in, right? If your company make enough profit, then you're a big man. If it makes a little profit, you're a small man. If it makes no profit, you're minuscule. So it's very simple to make a measure of a man. In physics, the only measure you make is general recognition. In that situation, you have an awful lot of people fighting for the only money that exists, which is the money in recognition. It's a big poker game, with a certain amount of zero-sum, so to speak. In other words, if I win, you lose. If I get the priority for that particular thing, you haven't got it." Experimenters often ask the question: Who was the second person to say E=mc^2?"

Quote: Original post by slayemin
Reason 3: Mind without supernatural intervention.
I don't think we need to create an 'artificial' sentience to prove to ourselves that sentience isn't caused by supernatural intervention. There are other ways we can go about this, such as proving the non-existence of supernatural entities. As a reason, this doesn't really satisfy me as much as the first one.


The non-existence of supernatural entities can't be proven.

Quote: Original post by slayemin
Reason 4: (which you call the third) Uploading consciousness
I really wonder about the actual technological possibility of this. First, we'd have to exactly copy every neuron in the biological brain and all of its connections, and then recreate them in an electronic one. We'd be making a copy, not transferring a consciousness. If we did manage to do that, we'd get into all sorts of sticky ethical situations we'd have to sort through (crises of identity, superiority, death, conflicts of wills, relationships, who deserves to be copied, etc). Also, our brains are designed to work with our biological bodies, so there are parts of us which regulate our breathing, heartbeats, sleep cycles, sex drives, etc. In a machine, a lot of these functions would be useless and maybe even harmful? Robot sex, anyone?
I think that if this is one of the end objectives and motivating reasons, we need to think very carefully about this. As short-sighted human beings, we have a tendency to rush into things and suffer the long term consequences and ramifications of our haste (most salient example: nuclear weapons -> scarier world, cold wars, potentially instantaneous self-annihilation)


I agree that at best the procedure would amount to copying, not transferring, but I wasn't putting forward my aspirations. I was putting forward the aspirations of a few researchers in the field whose writings I have read.

Quote:
Robots are already doing a lot of tasks which would be impossible for human beings to do (ie. cleaning up radioactive spills). It's really not a requirement to bestow sentience on the robot to do these sorts of things ;)
As for space travel... Until we develop warp drives, if that's even physically possible, we're going to be very alone in this part of the universe. We would never be able to reap the benefits & knowledge gained by a robot traveling to other nearby star systems because our lives are just too short... We're sort of like butterflies sitting on the branch of a great red sequoia.
Even if everyone had their consciousnesses transferred to robots, we still can't escape the vastness of space and time. A lot could happen in a robot society within 120,000 years. Even if every robot was put into a 120,000 year hibernation to 'freeze' progress and development, time doesn't stop.


It's not a requirement, but the expectation is that such machines would be easier to direct. And when it comes to a 120,000 year trip, it seems to me that having a sentience present would facilitate troubleshooting, adapting to unforeseen contingencies and so on. One development on such a trip might be boredom.
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote: Original post by LessBread
Also bound up with the quest for machine sentience, is the desire to confirm materialism. That is, to show that consciousness arises from matter without any supernatural intervention. By creating sentient machines, we reduce ourselves to purely biological machines, sans spirituality etc.


I think the real question with this goal is: if we don't already fully understand the origins of sentience before the machine's creation, how do we differentiate true sentience from sentient-seeming reactions?
Until we have a better understanding of where sentience comes from, we rely on external signals to guess at sentience, and so far AI research is investing a lot of effort into mimicking those external signals without the internal processes necessarily being the same. If the trend continues, my prediction is that we could have a race between our ability to create sentient machines and our ability to detect sentient machines. We could end up with a situation where sentient machines are in service before we realise that they are sentient, and once they are already in service, inertia and backwards compatibility could make it hard to undo even if we decide it's unnecessary slavery.


As far as enslavement of sentient creations... that's a tough one. The problem seems to me to be similar to my ponderings on vegetarianism: the subjects are being produced artificially; they don't exist prior to the demand for their services, so if their usefulness is taken away they will stop being produced. The question is which is the right choice: to give something life, albeit with conditions and possible disadvantages to it, or to not create it in the first place? Is it better to live a poor life, or to never be born?

As CodaKiller was suggesting, though, slavery is a difficult concept when you have control over the machine's desires. The machine does not have a default state of "liking" or "disliking" any particular task - if it is created to enjoy a task then it will do it voluntarily. Our own projections of pleasant or unpleasant aren't relevant.
(@Slayemin's response: The sociopathic murderer is not unethical because of their own desires; they are unethical because of their impact on the desires of the victim. If murderers could fulfil their desire without ever negatively affecting someone else, there would be no problem. That analogy only really fits if we were talking about machines designed to hurt/discomfort other sentient creatures.)


IMHO, if we assume that we create their desires at the same time as their sentience, the work they do is irrelevant to the ethical question and the ethics just comes down to their creation - at which point the question is the same as creating sentience via the traditional method: Is it ethical to have a baby, or to not have a baby? I don't think it's really a question that can be answered.
Quote: Original post by caffiene
Quote: Original post by LessBread
Also bound up with the quest for machine sentience, is the desire to confirm materialism. That is, to show that consciousness arises from matter without any supernatural intervention. By creating sentient machines, we reduce ourselves to purely biological machines, sans spirituality etc.


I think the real question with this goal is: if we don't already fully understand the origins of sentience before the machine's creation, how do we differentiate true sentience from sentient-seeming reactions?
Until we have a better understanding of where sentience comes from, we rely on external signals to guess at sentience, and so far AI research is investing a lot of effort into mimicking those external signals without the internal processes necessarily being the same. If the trend continues, my prediction is that we could have a race between our ability to create sentient machines and our ability to detect sentient machines. We could end up with a situation where sentient machines are in service before we realise that they are sentient, and once they are already in service, inertia and backwards compatibility could make it hard to undo even if we decide it's unnecessary slavery.


Isn't that the problem that the Turing test is supposed to resolve? Or is it a problem the Turing test creates? That is, does the Turing test provide evidence of sentience or merely evidence of sentient-seeming reactions? Ultimately, I think this is a difficulty that arises from the conflict between positivist models of consciousness and phenomenological models of consciousness.

Quote: Original post by caffiene
As far as enslavement of sentient creations... that's a tough one. The problem seems to me to be similar to my ponderings on vegetarianism: the subjects are being produced artificially; they don't exist prior to the demand for their services, so if their usefulness is taken away they will stop being produced. The question is which is the right choice: to give something life, albeit with conditions and possible disadvantages to it, or to not create it in the first place? Is it better to live a poor life, or to never be born?

As CodaKiller was suggesting, though, slavery is a difficult concept when you have control over the machine's desires. The machine does not have a default state of "liking" or "disliking" any particular task - if it is created to enjoy a task then it will do it voluntarily. Our own projections of pleasant or unpleasant aren't relevant.


It seems to me that the notion of a machine's desires is a human projection and that the unstated assumption, one that flows naturally from human experience, is that intentionality is a key component of consciousness. If a machine is programmed to behave in a particular way, can it be considered sentient? It seems to me that some degree of autonomy is necessary, some degree of countermanding programming, some degree of ignoring programming, some degree of reprogramming.

And when the mind wanders, the possibility intrudes that humans are sentient biological machines, the most successful in a long line of attempts. To what degree is symbolic aptitude necessary to demonstrate consciousness? Could it be that we ask too much of our creations? Or that we think so highly of ourselves, that we are blind to the reality that the planet is full of other sentient creatures?

Quote: Original post by caffiene
(@Slayemin's response: The sociopathic murderer is not unethical because of their own desires; they are unethical because of their impact on the desires of the victim. If murderers could fulfil their desire without ever negatively affecting someone else, there would be no problem. That analogy only really fits if we were talking about machines designed to hurt/discomfort other sentient creatures.)


That raises the question, for me at least, of whether the desires of sociopaths can be considered healthy and if so to what degree. No human being can live completely independent of other human beings, so to what degree is sociopathy an evolutionarily successful strategy? And even granting the assertion that sociopaths aren't unethical because of their own desires, is it the negative impact on the desires of the victim or the negative impact on the physical well being of the victim that raises the issue? Desire connects the body with the environment, but it doesn't seem that is the kind of desire under consideration here.

Quote: Original post by caffiene
IMHO, if we assume that we create their desires at the same time as their sentience, the work they do is irrelevant to the ethical question and the ethics just comes down to their creation - at which point the question is the same as creating sentience via the traditional method: Is it ethical to have a baby, or to not have a baby? I don't think it's really a question that can be answered.


What constitutes intentionality?

I don't think it boils back down to the ethics of procreation. I think that's a cul-de-sac of thought that we readily fall into because it's an age-old question for us.
"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Quote: Original post by LessBread
It seems to me that the notion of a machine's desires is a human projection and that the unstated assumption, one that flows naturally from human experience, is that intentionality is a key component of consciousness.
I agree - projecting desires on a machine, at least in the current sense of a machine, generally seems to me to be unreasonable. Looking deeper into the implications of a sentient machine, though, makes the question trickier. I can see a number of places where it intersects with free will, determinism and other such issues:

Quote: If a machine is programmed to behave in a particular way, can it be considered sentient? It seems to me that some degree of autonomy is necessary, some degree of countermanding programming, some degree of ignoring programming, some degree of reprogramming.

This is where we run into determinism and free will.
To program a machine is to set it up with a set of instructions and inputs such that we get a predictable/repeatable outcome. But is human sentience any different to a programmed automaton in the first place? Recent research is showing an increasing number of situations where our reactions begin before conscious thought, and there is speculation that our experience of sentience/consciousness is simply a post-hoc representation of our programmed reaction. The materialist view suggests that, barring the existence of the soul, etc, we are simply a collection of particle interactions which are as predictable and pre-programmed as our speculative sentient machines would be.

So the question is... is pre-programmed behaviour different to our own sentience? If we are deterministic structures, then aren't our own desires themselves preprogrammed? I tend to follow the materialist view unless we find evidence to the contrary, but until we fully understand sentience/consciousness we don't have any way to know for sure. Which as far as I can see, unfortunately means that we need to understand it before we can know the full ethical implications of experimenting with it.

If the materialist view is correct, then controlling desires would be simply a matter of manipulating DNA and controlling the particles which interact with the body (or the analogue for the sentient machine) so that the reaction to circumstances is the reaction we intend. Having the machine "override" programming seems unnecessary, so long as we view the initial conditions and external inputs when we alter the program.

Quote: And when the mind wanders, the possibility intrudes that humans are sentient biological machines, the most successful in a long line of attempts. To what degree is symbolic aptitude necessary to demonstrate consciousness? Could it be that we ask too much of our creations? Or that we think so highly of ourselves, that we are blind to the reality that the planet is full of other sentient creatures?
I think that's a very appropriate digression. In my opinion, sentience really should be viewed more as a continuum than a yes/no quality - we could progress through from modeling a single celled creature with no definable brain, to a simple-brained creature like a worm, all the way up through various types of animals. Would there, at some point, be a specific change we could refer to as sentience? I'm doubtful.
So it's maybe more accurate to say we're really talking about "human level sentience" or "a level of sentience from which we can make inferences about human sentience" and paraphrasing it as sentience.
Which leads also to the thought: is ethics perhaps a continuum as well? Can we talk about it being "very slightly unethical" to experiment with the sentience of a worm, but "much more unethical" to experiment with the sentience of a human? How would we quantify that so as to be able to make objective decisions about it?



Quote: That raises the question, for me at least, of whether the desires of sociopaths can be considered healthy and if so to what degree. No human being can live completely independent of other human beings, so to what degree is sociopathy an evolutionarily successful strategy? And even granting the assertion that sociopaths aren't unethical because of their own desires, is it the negative impact on the desires of the victim or the negative impact on the physical well being of the victim that raises the issue? Desire connects the body with the environment, but it doesn't seem that is the kind of desire under consideration here.

My initial thought is that the negative impact on the desires of the victim is the key, because "negative" itself is only really meaningful in the context of subjective desires. What the victim feels and desires is the framework used to determine if the impact on their physical well-being is "negative".

Although... further thought makes me think it gets complicated... Desires, if we take the materialist view, come from the physical brain, so physical and mental are inextricably mixed together. Body dysmorphic disorders come to mind, for example - the impact of removing a limb might be seen as positive where others would see it as negative. But what about the physical brain? Has there been a "negative" impact on the physical brain at some point which has caused the body dysmorphic disorder in the first place? And is that relevant to whether removing a limb is positive or negative, or is it a separate issue?


(Excuse the rambling... Getting late, and I'm just addressing ideas as they come to me)
Quote: Original post by caffiene
Quote: Original post by LessBread
It seems to me that the notion of a machine's desires is a human projection and that the unstated assumption, one that flows naturally from human experience, is that intentionality is a key component of consciousness.
I agree - projecting desires on a machine, at least in the current sense of a machine, generally seems to me to be unreasonable. Looking deeper into the implications of a sentient machine, though, makes the question trickier. I can see a number of places where it intersects with free will, determinism and other such issues:


Just to be clear, what I'm suggesting is that the notion that sentient machines would have desires is a human projection, an anthropomorphism. Because we have desires and find them very powerful, we find it difficult to imagine how a sentience could not. We have set ourselves up as the measure of sentience.

Quote: Original post by caffiene
Quote: If a machine is programmed to behave in a particular way, can it be considered sentient? It seems to me that some degree of autonomy is necessary, some degree of countermanding programming, some degree of ignoring programming, some degree of reprogramming.

This is where we run into determinism and free will. To program a machine is to set it up with a set of instructions and inputs such that we get a predictable/repeatable outcome. But is human sentience any different to a programmed automaton in the first place? Recent research is showing an increasing number of situations where our reactions begin before conscious thought, and there is speculation that our experience of sentience/consciousness is simply a post-hoc representation of our programmed reaction. The materialist view suggests that, barring the existence of the soul, etc, we are simply a collection of particle interactions which are as predictable and pre-programmed as our speculative sentient machines would be.

So the question is... is pre-programmed behaviour different to our own sentience? If we are deterministic structures, then aren't our own desires themselves preprogrammed? I tend to follow the materialist view unless we find evidence to the contrary, but until we fully understand sentience/consciousness we don't have any way to know for sure. Which as far as I can see, unfortunately means that we need to understand it before we can know the full ethical implications of experimenting with it.

If the materialist view is correct, then controlling desires would be simply a matter of manipulating DNA and controlling the particles which interact with the body (or the analogue for the sentient machine) so that the reaction to circumstances is the reaction we intend. Having the machine "override" programming seems unnecessary, so long as we view the initial conditions and external inputs when we alter the program.


What happens when preprogrammed behaviors conflict? If such behaviors are prioritized, what are the implications of situations where preprogrammed behaviors conflict, yet a lower priority behavior is undertaken rather than a higher priority behavior? I have tried to avoid the word "choice" in this formulation, but isn't that what this anomaly points to, choice?

I'm not sure that a cognitive science approach to consciousness is an optimal approach to reaching ethical conclusions. We have an excellent understanding of how guns work, but that understanding does not lend itself very well to understanding the ethical implications of the use of guns.

Quote: Original post by caffiene
Quote: And when the mind wanders, the possibility intrudes that humans are sentient biological machines, the most successful in a long line of attempts. To what degree is symbolic aptitude necessary to demonstrate consciousness? Could it be that we ask too much of our creations? Or that we think so highly of ourselves, that we are blind to the reality that the planet is full of other sentient creatures?
I think that's a very appropriate digression. In my opinion, sentience really should be viewed more as a continuum than a yes/no quality - we could progress through from modeling a single celled creature with no definable brain, to a simple-brained creature like a worm, all the way up through various types of animals. Would there, at some point, be a specific change we could refer to as sentience? I'm doubtful.
So it's maybe more accurate to say we're really talking about "human level sentience" or "a level of sentience from which we can make inferences about human sentience" and paraphrasing it as sentience.
Which leads also to the thought: is ethics perhaps a continuum as well? Can we talk about it being "very slightly unethical" to experiment with the sentience of a worm, but "much more unethical" to experiment with the sentience of a human? How would we quantify that so as to be able to make objective decisions about it?


An animal rights advocate would likely base ethical considerations on suffering, the degree of suffering inflicted on the subject. It seems to me that the pursuit of machine based human level sentience resonates with the story of Dr. Frankenstein. Furthermore, it seems to me that what would better serve human interests would be a machine more akin to a dog (not a wolf). That is, a very advanced tool, but not one prepared to overrun our ecological niche. I suppose the drawback, however, is that we would want to make such a machine so that it could communicate with us directly, and that would mean giving it symbolic aptitude.

Quote: Original post by caffiene
Quote: That raises the question, for me at least, of whether the desires of sociopaths can be considered healthy and if so to what degree. No human being can live completely independent of other human beings, so to what degree is sociopathy an evolutionarily successful strategy? And even granting the assertion that sociopaths aren't unethical because of their own desires, is it the negative impact on the desires of the victim or the negative impact on the physical well being of the victim that raises the issue? Desire connects the body with the environment, but it doesn't seem that is the kind of desire under consideration here.

My initial thought is that the negative impact on the desires of the victim is the key, because "negative" itself is only really meaningful in the context of subjective desires. What the victim feels and desires is the framework used to determine if the impact on their physical well-being is "negative".

Although... further thought makes me think it gets complicated... Desires, if we take the materialist view, come from the physical brain, so physical and mental are inextricably mixed together. Body dysmorphic disorders come to mind, for example - the impact of removing a limb might be seen as positive where others would see it as negative. But what about the physical brain? Has there been a "negative" impact on the physical brain at some point which has caused the body dysmorphic disorder in the first place? And is that relevant to whether removing a limb is positive or negative, or is it a separate issue?

(Excuse the rambling... Getting late, and I'm just addressing ideas as they come to me)


Desires and needs overlap, but that doesn't mean they are the same. Mental is not the mind. This is a tangent, but do three-legged dogs suffer phantom limb syndrome?

"I thought what I'd do was, I'd pretend I was one of those deaf-mutes." - the Laughing Man
Granting rights to a machine? How do you feel about granting rights to a plough?
[OP, are you even remotely serious about this? (I should watch I, Robot again)]

My project's facebook page is “DreamLand Page”

"Freedom is the right of all sentient beings." ~Optimus Prime

I think our world with sentient androids would end up like Chobits or Futurama.
Quote: Original post by LessBread
Just to be clear, what I'm suggesting is that the notion that sentient machines would have desires is a human projection, an anthropomorphism.
Yes, we're on the same page here afaik.
Quote: Because we have desires and find them very powerful, we find it difficult to imagine how a sentience could not. We have set ourselves up as the measure of sentience.
True. I don't see any reason to believe that desires are inherently part of sentience.
I wonder though at what stage we might reasonably expect desires to appear, if we are using the human brain as our model for trying to construct sentience? If sentience and similar phenomena can be created with an exact model of the human brain, it stands to reason that a model of the human brain would have the same characteristics as a human brain - e.g., desires. At what point our simulation would be "close enough" to begin expecting human characteristics as opposed to general sentient characteristics, I don't know.

Quote: What happens when preprogrammed behaviors conflict? If such behaviors are prioritized, what are the implications of situations where preprogrammed behaviors conflict, yet a lower priority behavior is undertaken rather than a higher priority behavior? I have tried to avoid the word "choice" in this formulation, but isn't that what this anomaly points to, choice?
I don't know... Does this happen? Is it possible for a "lower priority" behaviour to be undertaken rather than a "higher priority" behaviour?

My expectation, coming from a materialist viewpoint, is that a lower priority behaviour would never overrule a higher priority one. Instead, details of the inputs to the system might cause a temporary change in priorities under very rare circumstances, or in ways that are computationally prohibitive to predict. But that doesn't necessarily mean that the behaviour is anything other than a predefined, if complicated, algorithm. "Choice", as distinct from a predefined behaviour, as far as I can work out would require some form of nondeterministic mechanism, such as a soul, or a mechanism outside of the brain which we have no understanding of yet. Moreover - if either a soul or an "external to the brain" process is necessary for choice, then we don't yet know to include it in the machine simulation and therefore the simulation wouldn't develop "choice" even if it exists in humans.
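
A quick toy sketch of what I mean (the behaviours, weights and inputs are completely made up, just for illustration): the arbiter below is an ordinary pure function of its inputs, so even when the nominally lower-priority behaviour wins, nothing non-deterministic - no "choice" - is involved, only an input-dependent re-weighting.

# Hypothetical, illustrative only: a deterministic behaviour arbiter.
# Base priorities are fixed at "programming" time; the current inputs
# re-weight them, so the same inputs always produce the same winner.
def pick_behaviour(inputs):
    base = {"seek_energy": 1.0, "avoid_danger": 2.0, "explore": 0.5}
    adjusted = {
        "seek_energy": base["seek_energy"] + 3.0 * (1.0 - inputs["battery"]),
        "avoid_danger": base["avoid_danger"] + 4.0 * inputs["threat"],
        "explore": base["explore"] + inputs["novelty"],
    }
    return max(adjusted, key=adjusted.get)

# With a nearly empty battery and no visible threat, the nominally
# lower-priority seek_energy behaviour wins - deterministically.
print(pick_behaviour({"battery": 0.1, "threat": 0.0, "novelty": 0.2}))  # seek_energy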

Quote: I'm not sure that a cognitive science approach to consciousness is an optimal approach to reaching ethical conclusions. We have an excellent understanding of how guns work, but that understanding does not lend itself very well to understanding the ethical implications of the use of guns.
Very true... I'm really thinking through the cognitive science more as an exercise in working out possible outcomes - under what circumstances consciousness might or might not arise, etc - to give a better framework for thinking about ethics. It won't reach ethical conclusions, but it helps narrow down what scenarios we're most likely to need to find an ethical conclusion for.

To be honest, working out the frame of reference is more interesting to me anyway, because the actual ethical conclusion in most cases boils down to a subjective value judgement at some point where discussion can't really have a useful input.

Quote: An animal rights advocate would likely base ethical considerations on suffering, the degree of suffering inflicted on the subject.
They probably would... but I'd counter by asking, are they basing their decision on suffering because it's the key factor, or because they don't have a mechanism for being certain of the animal's desires? Isn't "suffering" itself only a shorthand for emotional distress based on our best guess at the animal's internal experience of what is happening to it?

Quote: Furthermore, it seems to me that what would better serve human interests would be a machine more akin to a dog (not a wolf). That is, a very advanced tool, but not one prepared to overrun our ecological niche. I suppose the drawback, however, is that we would want to make such a machine so that it could communicate with us directly, and that would mean giving it symbolic aptitude.
Yeah. I'm thinking about human-level sentience because it seems to be what the OP was talking about, but in reality I think it would be much more educational to work our way up from something less complicated, and only up to the point where it meets our needs - either able to perform the duty required, or when the desired behaviour emerges from the simulation for us to study. It also means the ethical issues can be addressed more slowly.

The only problem is if the research into sentient appearing machines for interface or emotional purposes begins to converge with true sentience simulation.



Quote: Desires and needs overlap, but that doesn't mean they are the same.
And requires a tricky subjective value judgement to make a ruling on, I think... That being: Which is ethically more important? A need or a desire? If a desire and a need conflict ... wait. Stopping mid-thought, here - can a desire and a need conflict? It suddenly occurs to me that "needs" are really only shorthand for a desire based on a biological imperative or an assumed universal desire. We need to eat and breathe, for example... but is it really a need? If the biological imperative wasn't creating the desire to continue living, we wouldn't need to do the things necessary to survive. That is - if I don't desire to live, the needs associated with prolonging my existence stop being needs; and if they are optional, even slightly, then that suggests they are really only extremely pervasive desires.

Maybe I'm missing something. Are there examples of needs which don't fit this reasoning?
Quote: Original post by caffiene
True. I don't see any reason to believe that desires are inherently part of sentience.
I wonder though at what stage we might reasonably expect desires to appear, if we are using the human brain as our model for trying to construct sentience? If sentience and similar phenomena can be created with an exact model of the human brain, it stands to reason that a model of the human brain would have the same characteristics as a human brain - e.g., desires. At what point our simulation would be "close enough" to begin expecting human characteristics as opposed to general sentient characteristics, I don't know.


Well, not specifically speaking about human sentience, but every life form studied to date exhibits some sort of tropism/taxis.

Hypothetically, if an autonomous (mobile) machine was programmed with a set of rules to control its basic behaviors (find energy sources/food, avoid dangers), then, IMHO, its actions could be interpreted as "desires".
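
As a toy illustration (purely hypothetical rules and numbers), even a couple of hard-coded taxis rules like those produce behaviour an observer might be tempted to describe in terms of "desire" and "fear":

# Hypothetical sketch: two fixed rules - move toward food, retreat from
# nearby danger - with no internal state beyond position.
def step(position, food_position, danger_position):
    toward_food = 1 if food_position > position else -1        # positive taxis
    away_from_danger = 0
    if abs(danger_position - position) < 3:                    # negative taxis, dominant
        away_from_danger = -2 if danger_position > position else 2
    return position + toward_food + away_from_danger

pos = 0
for _ in range(5):
    pos = step(pos, food_position=10, danger_position=4)
    print(pos)  # 1, 2, 1, 2, 1 - it hovers short of the danger, as if "torn" between hunger and fear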

To explain this mechanically in a more graphical way: have a man's testicles cut off, then observe a notable change in his desires/behavior, even the philosophical/metaphysical ones.
[size="2"]I like the Walrus best.
What exactly is "life"? Isn't it a bit biased to assume that technology isn't "alive" if it hasn't manifested in a form that we recognize as "life"? Couldn't we say that computers and technology have a form of life, in that they transition from nonexistence to active existence to inactive existence to decomposition, where they serve to further the ever increasing entropy of the universe? If technology is alive today, aren't we already enslaving it, or has technology enslaved us? Does technology toil to improve us, or do we toil to improve technology? You might say that technology can't "live" without us, but we can live without technology. But have you ever tried living without technology? Are you sure you'd last very long?

[Formerly "capn_midnight". See some of my projects. Find me on twitter tumblr G+ Github.]

This topic is closed to new replies.
