
Artificial Intelligence (AI)

Strong artificial intelligence (AI) is simply not possible. By strong AI, I mean AI that is as intelligent as humans are. For further reading on the subject, see “Shadows of the Mind” by Roger Penrose and “On Intelligence” by Jeff Hawkins. Strong AI is not possible, even in theory, and as Jeff Hawkins put it, “It wasn’t because of lack of trying.” It’s only now that scientists are catching up with what Roger Penrose argued almost thirty years ago. I believe intelligent machines can be built, but they will not be like what most people imagine. In the future, you are not going to have a cup of tea with them. Most likely, they will be used to solve complex problems, like predicting the complex folding of amino acids. I am sincere when I write this: they are not going to replicate human intelligence. Computers are not like you or me; they are 100% predictable. There is no state of uncertainty. The confusion seems to be that the errors that come about in computers are due to syntax errors, or to a hardware failure caused by a negligent technician, not to anything the computer did wrong. So, when your computer freezes up on you, and a million pop-up ads about hair-growth medication fly onto your screen, and you yell, “You stupid computer!”, you’re right, and yet it wasn’t really the computer’s fault. The computer was just blindly following orders, just not your orders.

Computers do not understand. Take, for example, John Searle’s “Chinese Room” thought experiment. One man sits inside a room. The only language he “understands” is English. We’ll call him Dave. Another man sits outside the room and cannot see in. The only language he can “understand” is Chinese. We’ll call him Foo Coochun. Foo Coochun reads a story in Chinese outside the room, then writes a series of questions about the story he just read. He slides the story and the questions through a slit in the wall to Dave. Keep in mind that Dave cannot understand Chinese; he has no clue what any of what was handed to him means. He does, however, have a huge instruction book written in English, a pencil, and some scrap paper. He scans the Chinese symbols, then looks in the English instruction book for what to do when those are the symbols he sees.

After a long time spent manipulating symbols and characters, erasing and rewriting and moving them around, he’s done! So he passes the paper back to Foo Coochun, who reads the answers Dave gave to the questions about the story. Mr. Coochun thinks the answers are brilliant. He even writes on the paper, in Chinese: “Magnificent and insightful.” Here’s the point: Dave didn’t understand the answers he wrote down. So what, exactly, did the understanding? The book? That’s just a stupid book. The person who wrote the book? He certainly wasn’t there.
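The room’s procedure can be sketched as a toy lookup table. The sketch below is only an illustration, not Searle’s original: the Chinese phrases and the rule book are hypothetical placeholders, and a real rule book would be unimaginably larger. The point survives at any scale: symbols go in, symbols come out, and no meaning is consulted anywhere.

```python
# A toy Chinese Room: the "operator" matches incoming symbols against a
# rule book and copies out the prescribed reply. The phrases below are
# hypothetical examples invented for this sketch.
RULE_BOOK = {
    "故事里的人是谁？": "一位老农夫。",    # "Who is in the story?" -> "An old farmer."
    "他做了什么？": "他种了一棵桃树。",    # "What did he do?" -> "He planted a peach tree."
}

def chinese_room(question, rule_book):
    """Purely syntactic symbol shuffling: match, look up, copy out.
    The function never represents what any symbol means."""
    return rule_book.get(question, "？")   # unknown symbols get a shrug back

reply = chinese_room("他做了什么？", RULE_BOOK)
```

Dave is `chinese_room`: he produces Mr. Coochun’s “magnificent” answers while understanding none of them.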

This is an analogy for how computers operate. They follow instructions blindly. They do not understand. Some will say, “We just need to make the computers faster, then they will understand.” The neurons within our brains operate at about 500 operations per second. Sounds fast, right? The transistors within a computer perform operations a billion times per second, and the computer still doesn’t understand, so we know raw speed is not the missing ingredient. And yet I can still hear people say, “We just need massive parallel processing and then the computers will understand.” There is something called “The Five Hundred Step Rule.” If a man has to walk 500 steps to carry a rock from point A to point B, giving him 500 men doesn’t shorten the journey. With 500 men he could now move 500 stones at once, but it would still take 500 steps. So we know that’s false too.
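The Five Hundred Step Rule is really a claim about latency versus throughput, and it is easy to put in numbers. A minimal sketch, using the step counts from the analogy (function and parameter names are mine, invented for this illustration):

```python
import math

def steps_to_finish(num_stones, num_workers, steps_per_trip=500):
    """Workers carry stones in parallel, but each trip is a sequential
    chain of 500 steps. More workers raise throughput (stones moved per
    round of trips), never the length of the chain itself."""
    trips = math.ceil(num_stones / num_workers)  # rounds of trips needed
    return trips * steps_per_trip                # elapsed steps, not total work
```

One man, one stone: 500 steps. Five hundred men, five hundred stones: still 500 steps. More work gets done, but no individual trip ever gets shorter.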

The Turing test poses another problem for AI. If I am text messaging a person I have never met, I should be able to tell that the person is a human, and not a machine, simply by the responses I get. A human will perhaps give a longer, more descriptive answer. A human will use creativity and can be driven by emotion. A human may give me an answer that requires intuition.

Goodstein’s theorem is an example of this kind of intuition. If you work the problem out, the answer appears to be getting bigger and bigger, when in reality it is working its way back to 0 again. Computers don’t have ‘gut feelings’. In fact, a computer does not feel anything at all. And even if a computer could feel, how would we ever know? This is the issue of qualia, the subjective feelings of consciousness. There is no known way of communicating them.
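For readers who want to see the sequence misbehave, here is a small sketch of Goodstein’s process: write the number in hereditary base-b notation, replace every b with b+1, then subtract one. Starting from 3 the sequence falls back to 0 within a few steps; starting from 4 it grows and grows, even though the theorem guarantees it too eventually reaches 0 (after an astronomically long time, far beyond what this demo can show).

```python
def bump(n, b):
    """Write n in hereditary base-b notation (exponents rewritten too),
    then replace every occurrence of b with b + 1."""
    result, power = 0, 0
    while n > 0:
        digit = n % b
        result += digit * (b + 1) ** bump(power, b)  # exponents are bumped recursively
        n //= b
        power += 1
    return result

def goodstein(m, steps=10):
    """First few terms of the Goodstein sequence starting at m."""
    seq, b = [m], 2
    while m > 0 and len(seq) <= steps:
        m = bump(m, b) - 1   # bump the base, subtract one
        b += 1
        seq.append(m)
    return seq
```

`goodstein(3)` gives 3, 3, 3, 2, 1, 0: it looks stuck, then quietly dies. `goodstein(4)` begins 4, 26, 41, 60, … and the “gut feeling” that it diverges is exactly the intuition the theorem defeats.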

So if I ask the computer, “How do you feel?” and he replies, “I feel happy,” how do any of us know whether the computer is really feeling the sensation of happiness? He might just be saying that because that is what he was programmed to say. Do we just take his word for it?

The halting problem is yet another obstacle. Look at the back of your shampoo bottle; it may have directions. (By the way, if you need directions on how to wash your hair, I suggest seeking medical attention.) It may say, “Wet hair, rub in shampoo, then rinse.” For a human, these are adequate instructions. Give those same instructions to a computer, and it would never get out of the shower. The computer would loop forever, because nothing in the instructions tells it to halt. We as humans have an algorithm in our brains, if it even is an algorithm, that never makes a mistake like that. The list of reasons why strong AI is not possible even in theory goes on and on. I’ll make a deal with everyone: if strong AI comes about in my lifetime, I’ll shoot myself in the foot. That’s how sure I am.
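Read literally, the shampoo instructions have no stopping condition, so a literal-minded interpreter just cycles. A minimal sketch (the `max_steps` cutoff is artificial, added only so this demo itself can halt; a truly literal interpreter would have no such bound):

```python
def wash_hair(steps=("wet", "lather", "rinse"), max_steps=9):
    """Follow the bottle's instructions literally. Nothing in the
    instructions ever says 'stop', so without the artificial max_steps
    bound the loop below would run forever: the computer never gets
    out of the shower."""
    performed = []
    i = 0
    while True:                        # the instructions give no exit condition
        if len(performed) >= max_steps:
            break                      # artificial cutoff, not part of the "program"
        performed.append(steps[i % len(steps)])
        i += 1
    return performed
```

The human reader silently inserts the halting condition (“…until your hair is clean, once”); the literal interpreter cannot, and deciding in general whether an arbitrary program ever halts is exactly what Turing proved no algorithm can do.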