Consciousness and Emotions

le jolie garcon

PostPosted: Wed Aug 22, 2007 10:56 pm


Any post-apocalyptic, post-human world taken over by robots we built rests on a single premise: that at some point the beings we gave sentience demanded equal rights, were denied them, and responded in anger. This is the classic idea behind The Matrix and the saga of stories it spawned. If such a being demanded equal rights, it must have been responding to a desire for freedom, implying it had emotions, which would be necessary in order to 'want' things. Let us assume we make a being that is capable of acting on its own without constant human prodding.

Let us assume that if this being is to be autonomous, it is to have a digital power gauge and algorithmic instructions to recharge every time the gauge reaches a threshold level. It is also to have sensory inputs, such that if its outer shell touches a surface, the signal is communicated to the main processing unit. The machine will thus know it cannot push through a wall, or, with additional programming of basic mechanics, it will know how much force it needs to apply to lift a book, and so on. It is also to have some sort of vision, so that installed cameras allow the processor to gauge where objects are and how to avoid them. Finally, it is to be programmed with several self-preserving algorithms: for example, using its sensory inputs to know when its body is being damaged, and, when that happens, executing whatever actions are necessary to stop the damage from continuing. In this way it is programmed to act automatically and flexibly (with variations due to different variables) in different situations.
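To make the outline concrete, here is a minimal Python sketch of the kind of sense-decide-act loop described above. Every name and number in it (the class, the thresholds, the one-tick energy cost) is an assumption invented for illustration, not a real design:

```python
# A minimal sketch of the control loop described above. All names and
# numbers are hypothetical -- just one way to wire the behavior together.

RECHARGE_THRESHOLD = 0.2   # recharge when the gauge drops below 20%
DAMAGE_THRESHOLD = 0.7     # pressure readings above this count as damage

class AutonomousBeing:
    def __init__(self):
        self.power = 1.0  # digital power gauge, 0.0 (empty) to 1.0 (full)

    def step(self, touch_pressure: float) -> str:
        """One tick of the sense-decide-act cycle."""
        self.power -= 0.01  # acting costs energy

        # Self-preservation rule: damage overrides everything else.
        if touch_pressure > DAMAGE_THRESHOLD:
            return "withdraw"          # stop the damage from continuing

        # 'Hunger' rule: recharge once the gauge crosses the threshold.
        if self.power < RECHARGE_THRESHOLD:
            return "seek_charger"

        return "continue_task"

robot = AutonomousBeing()
print(robot.step(touch_pressure=0.9))  # -> "withdraw"
```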

Supporters of strong AI will claim that the being's response to the power gauge is hunger, that its gauging of damage is indeed pain, and that pain and hunger manifest themselves in the same manner in humans. However, I would like to point out that although it can be argued we are somehow 'programmed' with survival instincts, there is inherently an inexpressible element in the individual experiences of human beings. For example: a person may be a wine expert; he or she might know everything about how wine is made, its molecular structure, its history, and so on. But having never drunk a glass of wine, they will never know what it is 'like' to drink wine. The same can be said of pain and hunger. That individual might even have at their disposal an arsenal of verbal tools to describe what wine tastes like and what pain or hunger feel like, but because they have never experienced these things, those words would have to be borrowed from others who tried to explain them after having experienced them. A person who had never experienced wine could never describe it from their own experience, only secondhand.

Thus the experience of pain and hunger makes these two 'feelings' more than an automatic, algorithmic response to pre-programmed survival instincts.

Now that we have roughly outlined an autonomous being and pointed out that it does not, in fact, have feelings, we can deduce that if it cannot feel hunger or pain then it certainly cannot feel other, more complex feelings, such as desire or anger. Though there are disagreements over the definitions of 'feelings' and 'emotions', I shall refer to emotions in a general sense as anything that (leaning on some 'common-sense' notion) the reader or I can categorize as emotions: love, anger, fear, desire, and so on. I argue that our autonomous being cannot feel emotions because, just as with drinking wine, there is something that it is 'like' to be in love, or fearful of something, or angry at someone. This is supported by the fact that a person who has never been in love will often claim they don't know what it's like. Thus our automaton is made with no emotions.

Since the robot has no emotions, it cannot feel angry at the injustice done to a fellow automaton, or desire freedom. But can it learn to feel? If the robot was not made with the ability to experience, can it gain that ability without outside help? This is not the question I'm interested in answering. My question is whether this ability to experience, and as a consequence emotions, are necessary for consciousness. Though I find it highly unlikely that our automaton is conscious, it can hardly be denied that a feeling being is a conscious being; but is the converse true? Is a conscious being necessarily a feeling being?

I have established that I require a conscious being to have the ability to experience: that is, in order to be aware of things it has to know what it's 'like' to touch, what it's 'like' to see, what it's 'like' to taste, and so on. The question remains whether all experiences are manifestations of emotions. Unfortunately this is endlessly debatable, and it largely depends on which particular definitions of 'emotion' and 'feeling' one uses.

Certainly we are all acquainted with experiencing new things, given that we already know how to experience. Babies are born with a certain set of possible feelings: we know they feel hunger, fear, and joy. We've all seen a baby cry for food, scream at unsightly things, and laugh at the sight of an entertaining toy. From there, because of the complexity of their cerebrum and their growing processing capacity, they are able to augment their experiences and feel a larger variety of things. Different individuals progress at different paces. I do, however, believe this is a gradual process, since our experiences depend on a) our ability to experience and b) the variety of input, and quite frankly a baby hasn't seen very much. The more things one sees, touches, tastes, and hears, the more one experiences.

Do these experiences require emotional responses? And are the experiences perhaps not complete without them? Certainly some responses to food are emotional. When we taste something, it is arguable that we always have an emotional response of either liking or disliking the food. We are all somewhat selective in what we eat; we have our favorites and our least liked. Even when we say a food is tasteless, we are showing we dislike it for being tasteless.

Perhaps not the entirety of consciousness involves emotion; certainly thinking in a purely intellectual manner does not. But it is possible that emotion is an inevitable development of a conscious being. Certainly without consciousness there is no emotion, so if we were indeed to make sentient beings, there is a definite possibility that they would feel.
Rising Hope

PostPosted: Sat Aug 25, 2007 6:25 pm


What if, then, the robot/machine were programmed with some kind of adaptive function by which it could adapt to its surroundings, increasing its own database by absorbing information on its own? Would it not then eventually adapt to a near-perfect form?
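For what it's worth, here is a minimal Python sketch of what such an 'adapting function' might look like under the simplest possible reading, where adapting just means growing a database of observations without human help; all the names are made up:

```python
# A sketch of the 'adapting function' described above: the machine grows
# its own database from what it observes, with no human updating it.
# The structure and names are assumptions, not a real learning system.

class AdaptiveMachine:
    def __init__(self):
        self.database: dict[str, int] = {}  # observation -> times seen

    def observe(self, thing: str) -> None:
        """Absorb one piece of information from the surroundings."""
        self.database[thing] = self.database.get(thing, 0) + 1

    def familiar(self, thing: str) -> bool:
        """The machine 'adapts' by recognizing what it has seen before."""
        return thing in self.database

m = AdaptiveMachine()
m.observe("wall")
print(m.familiar("wall"), m.familiar("wine"))  # -> True False
```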



tenchi_no_kashaku

PostPosted: Fri Nov 23, 2007 11:36 am


Not necessarily. Its software abilities must match its mechanical and/or physical abilities. Therefore, if the actual machine were incapable of adapting, the program would be of no practical use.
27x
Crew

PostPosted: Fri Nov 23, 2007 2:57 pm


le jolie garcon wrote:
"Any post-apocalyptic, post-human world taken over by robots we built rests on a single premise: that at some point the beings we gave sentience demanded equal rights, were denied them, and responded in anger. [...] Is a conscious being necessarily a feeling being?"


Hahahaha, this is one of my favorite topics. It seems you've been reading up.


I read everything you said and have this to say. Yes, we have five senses. With these senses (though each senses in a different way) we can discern the difference between one thing and another. A computer has only two symbols, 1 and 0; this is binary. Without at least binary, one can't think, so if you have had experience, then you can think. If a five-year-old child (in perfect health) lost all of her senses, she could still think. She might not learn much more, but she has experience of more than two things, not to mention of decoding a language, so yes, she could think. If a newborn were born without any senses, then no, that baby could not think, and would not have emotions. To think, we have to be able to discern one thing from another, and that requires a little experience.

Sensing tells us we are alive. A robot can have three of the senses (the other two being taste and smell, as far as I know); however, this is not why robots are not conscious (yet?). When we are born, we have natural instincts to react. We start collecting memories as we use these instincts. Eventually we learn how to think properly, and wake up in a way (which I remember quite vividly). There are people without emotions (I can't remember what the condition is called) who never did have them. They can think fine. They usually act only to tend to their needs, as they still do not like pain. Otherwise they are rarely seen doing anything else (except perhaps illegal things, since they don't understand right and wrong correctly).

Emotion is how we react to other living things. Thought is how we react to ourselves. Instincts are how we keep ourselves alive, and come to gain consciousness. A baby is conscious in a way, yes, but I don't think they are all the way there.

What I would like to see is if we made a sort of robot baby, which could navigate through different obstacles and perform extra actions when necessary to reach its goal, which would be food, sleep, a diaper change (simulated), water, a bath, or entertainment. It would be programmed to navigate through a house with chairs and such placed to trick it, and with other things interacting with it. If possible, it should have a memory of everything it senses on camera, audio, and sensory triggers, so it can remember the paths of things and how to overcome certain obstacles.
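Here is a rough Python sketch of that robot baby, under one simple assumption: each need decays over time and the robot always pursues whichever is most urgent. The list of needs comes from the post; the numbers and structure are invented for illustration:

```python
# A sketch of the 'robot baby': a handful of needs that decay over time,
# and an agent that always acts on the most urgent one. The decay rates
# and the always-succeeds assumption are made up for illustration.

import random

NEEDS = ["food", "sleep", "diaper_change", "water", "bath", "entertainment"]

class RobotBaby:
    def __init__(self):
        self.levels = {need: 1.0 for need in NEEDS}  # 1.0 = fully satisfied
        self.memory: list[str] = []  # record of everything it sensed and did

    def tick(self) -> str:
        # Each need decays a little every step, at a random rate.
        for need in self.levels:
            self.levels[need] -= random.uniform(0.0, 0.1)

        # Goal selection: pursue whichever need is most urgent.
        goal = min(self.levels, key=self.levels.get)
        self.memory.append(f"sought {goal}")
        self.levels[goal] = 1.0  # assume it navigates there and succeeds
        return goal

baby = RobotBaby()
for _ in range(3):
    print(baby.tick())
```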

I wonder if something like this could learn to think, because there is still the crucial part of interaction with other people (people would be placed in the house as well). It wouldn't have to look like a human, since that would just make it harder for it to navigate.

For example, suppose every day you were in the house following the robot around, and before it grabbed or interacted with anything, you took the object away and said its color. I don't know if this would do anything. It would probably just learn to evade your grasp of the object, though.
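A toy version of that color-naming game, under the assumption that the robot simply stores the word it hears alongside the object it was reaching for; everything here is hypothetical:

```python
# A toy version of the color-naming game: the robot hears a word each time
# an object is taken away, and simply associates the word with the object.
# Whether this amounts to 'learning to think' is exactly the open question.

associations: dict[str, str] = {}  # object -> word heard when it was taken

def interaction(obj: str, spoken_word: str) -> None:
    """One round: the human takes the object away and says its color."""
    associations[obj] = spoken_word

interaction("ball", "red")
interaction("cup", "blue")
print(associations.get("ball"))  # -> "red": a stored label, not an experience
```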



tenchi_no_kashaku

PostPosted: Fri Nov 23, 2007 3:12 pm


As technology evolves, there are machines that have senses. But there's still a gap between them and us. Our senses give us a diagnosis of our surroundings, our primary knowledge. Machines can only obey commands, and even in the instances where they form their own commands, they are still bound by what they can and cannot do. So the fear that machines/robots will take over is very extreme. It is just paranoia projected into an apocalyptic vision of one's own fears.