Could a Robot be Conscious?
Brian King says only if some specific conditions are met.

Will robots always be just machines with nothing going on inside, or could they become conscious things with an inner life? If they developed some kind of inner world, it would seem to be like killing them if we scrapped them. Disposal of our machines would become a moral issue.

There are at least three connected major aspects of consciousness to be considered when we ask whether a robot could be conscious. First, could it be self-conscious (as in ‘self-aware’)? Second, could it have emotions and feelings? And third, could it think consciously – that is, have insight and understanding in its arguments and thoughts? The question is therefore not so much whether robots could simulate human behaviour, which we know they can do to increasing degrees, but whether they could actually experience things, as humans do. This of course leads to a big problem – how would we ever know?

In the Turing Test, a machine hidden from view is asked questions by a human, and if that person thinks the answers indicate he’s talking to another person, then the conclusion is that the machine thinks. But that is an ‘imitation game’, as seminal computer scientist Alan Turing (1912-1954) himself called it, and it does not show that a machine has self-awareness. Clearly there is a difference between programming something to give output like a human, and being conscious of what is being computed. We ascribe conscious behaviour to other humans not because we have access to their consciousness (we don’t), but because other people are analogous to us. Not only do they act and speak like us; importantly, they are made of the same kind of stuff.