Posted by Karen Haslam 30 July 2014
Thinking robots: The philosophy of artificial intelligence and evolving technology
When I studied philosophy at university, many years ago now, one of my favourite modules was on the philosophy of artificial intelligence. Twenty years later, artificial intelligence is starting to become part of our everyday lives, from Siri in the iPhone, to the Nest thermostat that learns the temperatures that suit you. The computers are starting to think for themselves.
It all takes me back to those classes and the philosophical questions about whether a machine could ever be considered to be conscious in the same way as a human, or where the distinction between man and machine lies. If your brain were transferred to a robot and you continued to live, would you still be human?
Mind and body
The brain often comes up in this kind of debate because we tend to think of that as being where our thoughts live, and therefore the home of our mind, whatever that is. Most would identify the mind as conscious and unconscious thought made up of all our experiences, and a certain degree of innate knowledge and a few pre-programmed reactions. Each one of us reacts in a unique way to any given situation because we are influenced by our experiences.
The ‘mind’ of a machine, on the other hand, is programmed to react in certain ways to certain situations. If A, then B. These reactions are based on the way that the programmer wants the machine to react in a given situation. The difference between us and the machine is that we have the free will to make choices that aren’t determined by a programmer, but by ourselves, based on our experiences.
But what if, just like the Siris and the Nests of this world, the machine is able to learn? Then its mind is made up of more than what it was taught by the programmer, and something else is influencing it.
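The distinction can be sketched in a few lines of code. This is a minimal illustration, not how Nest actually works - all the class and method names here are hypothetical. The first thermostat reacts only in the ways its programmer decided; the second nudges its target towards whatever temperatures its user keeps choosing, so its behaviour comes to depend on its own history.

```python
class FixedThermostat:
    """Pure 'if A then B': the programmer decides every reaction."""
    def target(self, hour):
        # Hard-coded schedule chosen in advance by the programmer.
        return 21.0 if 7 <= hour <= 22 else 16.0


class LearningThermostat:
    """Adapts its target from the temperatures the user actually sets."""
    def __init__(self, initial=21.0):
        self.target_temp = initial

    def observe(self, user_setting):
        # Move the target halfway towards what the user chose: the
        # device's behaviour now reflects its history, not just its code.
        self.target_temp += 0.5 * (user_setting - self.target_temp)

    def target(self):
        return self.target_temp


fixed = FixedThermostat()
learner = LearningThermostat()
for setting in [19.0, 19.0, 18.5]:  # the user keeps turning it down
    learner.observe(setting)

print(fixed.target(9))      # always 21.0, whatever the user does
print(learner.target())     # has drifted towards the user's choices: 19.0
```

Two learning thermostats running the same code but living with different users end up with different targets - which is all the article means by each Nest evolving differently.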
I’m not suggesting that Nest is going to decide to freeze us out of our homes on a whim (although Google might) just that each Nest will evolve differently because it will have a different set of influences. So does that mean it is developing a mind of its own?
Before you panic and turn off the thermostat: Nest and Siri are really quite limited in what they can learn about you - Google and Apple, the companies behind those products, on the other hand, can learn loads... But Apple and Google are essentially human, so we won’t bother ourselves with them here.
Speaking of Google, should we be cautious about our friends who start wearing Google Glass? Surely that’s the first step on a journey to being a fully fledged cyborg. Next thing you know they will be buying themselves a bionic arm.
At what point does a person become a cyborg? How much of you would need to remain organic in order for you to be classed as human? How many artificial legs and arms are acceptable if you already have a pacemaker, contact lenses, hearing aid, and so on?
The question here is what makes us human, and that’s a minefield, because as soon as you say that your mind and memories are what make you human, you encounter ethical issues about people with amnesia and people who have suffered brain damage.
Maybe you aren’t defined by your body anyway. Some religions refer to the spirit or soul, and accept that it can exist outside the body after death. Others might think that you can live on in what you leave behind, be it through your kids or your Facebook posts.
That’s the premise of many a sci-fi story, the idea that a computer could act just like us if it was able to learn our behaviour from our Facebook posts and emails.
I’m not completely averse to the idea that a computer program could take elements of my personality from things I’ve written, and that one day, after I’m long gone, I might be resurrected as a computer program. Perhaps I should be careful what I wish for.
The robots are coming
The robots will probably become sentient beings before we all become cyborgs anyway. Futurologist and inventor Ray Kurzweil thinks that in 15 years computers will trump human intelligence. They will not only be able to do what we humans do - they will be able to do it better. A related benchmark is the Turing Test - the test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human - and Kurzweil predicts that machines will pass it by 2029.
Kurzweil just happens to work for Google, by the way.
Read about this subject: Humans’ reign on Earth is over: a computer has passed the Turing test