December 17, 2012 / neurograce

Thoughts on Thinking: What is and is not necessary for intelligence

Thought is a concept frequently relegated to the Stewart-esque realm of “I know it when I see it.” To think isn’t merely to churn through data and spit out an output for each input; we instinctively believe there is more behind it than that. When a friend pauses and stares off for a minute before answering a question, we allow them that time to think, having some notion of the process they’re going through. But we are not so empathetic when we see the colorful spinning wheel or turning hourglass after asking something of our computers.

The lack of a hard definition, however, makes studying thought a confusing endeavor. We would ideally like to break it down into component parts: basic computations that could conceivably be implemented by groups of neurons. We could then approach each independently, assuming that the phenomenon is equivalent to the sum of its parts. But the reductionist approach, while helpful for getting at the mechanisms of those individual computations, seems to strip the thought process of something essential, and is likely to lead to less than satisfying results.

Some of the great thinkers of cognition came together recently at Rockefeller University both to celebrate the 100th birthday of Alan Turing and to discuss this notion of what makes thought thought, how we can study it, and how (or if) we can replicate it. The talks were wide-reaching and the crowd represented a variety of viewpoints, but common themes did emerge. One of the most striking was the requirement of novelty. That is, true intelligence requires the ability to answer novel questions based on currently known information. Recalling a fact is not intelligent. Recalling a fact and integrating it into a new line of reasoning built purely from other recalled facts is. The production of this new line of reasoning makes it possible to answer a new kind of question. And, as John Hopfield pointed out, the number of novel avenues of reasoning that can be explored, and of new questions an intelligent being can answer, grows exponentially with the amount of information known.
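One loose way to see Hopfield's combinatorial point: if a new line of reasoning corresponds to combining some subset of known facts, then n facts admit 2^n − 1 non-empty combinations. The mapping of "facts" to subsets below is purely illustrative, not a model of reasoning:

```python
from itertools import combinations

def count_fact_combinations(n_facts):
    """Count every non-empty subset of n facts that could, in principle,
    be combined into a new line of reasoning: 2^n - 1 of them."""
    return sum(
        len(list(combinations(range(n_facts), k)))
        for k in range(1, n_facts + 1)
    )

for n in (5, 10, 20):
    print(n, count_fact_combinations(n))  # 31, 1023, 1048575
```

Each new fact doubles the space of possible combinations, which is the exponential growth the point gestures at.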

This kind of extension of given information into new dimensions requires mental exploration. Intelligent beings seem to be able to walk down different paths and get a sense of which is the right one to follow. They can build hypothetical worlds and predict what will happen in them. They can abstract situations away to the highest level and use that to find deep similarities between superficially disparate things. Josh Tenenbaum used the animation from a 1944 psychological experiment (Heider and Simmel’s film of moving shapes) to demonstrate just how instinctively we deviate from merely processing the information given to us and start adding new layers of understanding.

The fact that most (decent) people come away from that video with a sense of ill-will towards a shape shows how wild our mental exploration routinely gets. But what is it that allows us to make these new connections and forge novel cognitive paths? And is it possible to design entities that do it as well?

Currently, much of AI research is focused on creating artificial neural networks that can extract meaningful information from a small amount of input and use it to categorize future input of a similar kind. For example, by discovering commonalities across the shapes of letters in handwriting samples, a computer can identify the letters in a new handwriting sample and successfully turn it into text, even though it has never “seen” that exact sample before. It is important to note, however, that while the computer is answering something novel in the sense that it is telling us the text of a new handwriting sample, the question itself is not novel. Identifying letters is exactly what the computer was trained to do and exactly what it does; nothing new there. Many people would argue that these kinds of networks, no matter how complex, could never end up creating the intelligence we experience in ourselves. Pattern recognition simply doesn’t capture enough of what intelligence requires. There is no deeper understanding of why the patterns exist. There are no abstract connections between concepts. There is no possibility for novel exploration.
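The handwriting example can be made concrete with a deliberately tiny sketch. Real systems use trained multilayer networks on large datasets; here a nearest-centroid classifier over two invented "stroke features" stands in for the same idea: extract commonalities from labeled samples, then categorize a new sample of the same kind. The features and labels are made up for illustration:

```python
# Toy "letter" classifier: learn one prototype (centroid) per letter from
# labeled feature vectors, then label a new sample by the nearest prototype.
# The 2-D "stroke features" here are invented purely for illustration.

def train_centroids(samples):
    """samples: list of (label, feature_vector) pairs."""
    sums, counts = {}, {}
    for label, vec in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, x in enumerate(vec):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, vec):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], vec))
    return min(centroids, key=dist)

training = [
    ("a", [0.2, 0.9]), ("a", [0.3, 1.0]),  # roundish, closed strokes
    ("l", [0.9, 0.1]), ("l", [1.0, 0.2]),  # tall, straight strokes
]
model = train_centroids(training)
print(classify(model, [0.25, 0.95]))  # -> "a"
```

The classifier handles any new sample drawn from the same feature space, but it can only ever answer the one question it was built for, which is exactly the limitation the paragraph above describes.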

But what if we made it more complicated by, say, training networks to do pattern recognition on patterns themselves? Or combined these networks with other AI technology better suited to learning causal relationships, such as Bayesian inference? That combinatorial approach is what helped IBM’s Watson become a Jeopardy! whiz. Watson took the frequently ambiguous and complex clues as textual input and displayed amazing language-analysis skills in coming up with the correct response. He was drawing connections between the words in the clue and concepts external to it. An incredibly impressive feat, but we still don’t say that Watson thought about the clues or understood the answers. Watson just did. He did as he was programmed to do. That gut response suggests we believe that automation and intelligence must be mutually exclusive.
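The Bayesian inference mentioned here can be sketched in a few lines: start with a prior over candidate answers, multiply in the likelihood of the observed evidence under each candidate, and renormalize. The candidates, priors, and likelihoods below are invented; this illustrates the update rule itself, not Watson's actual architecture:

```python
def bayes_update(prior, likelihoods):
    """prior: {hypothesis: P(h)}; likelihoods: {hypothesis: P(evidence | h)}.
    Returns the posterior P(h | evidence) via Bayes' rule."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Invented clue-answering example: two candidate responses, and one piece
# of textual evidence that fits "Turing" far better than "Babbage".
prior = {"Turing": 0.5, "Babbage": 0.5}
evidence_fit = {"Turing": 0.9, "Babbage": 0.1}

posterior = bayes_update(prior, evidence_fit)
print(max(posterior, key=posterior.get))  # -> "Turing"
```

Chaining updates like this over many clue features gives a ranked list of candidate answers, which is the flavor of evidence-weighing the paragraph gestures at.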

But such a conviction is worrisome. It means that the bar we measure intelligence by inches up any time a machine comes close to reaching it. Somehow, in the process of understanding thought, we seem to discredit any thought process we can understand. Certainly, Watson wouldn’t pass the Turing test. He had a limited skill set and behavioral repertoire. He couldn’t tell a personal story, or laugh with the rest of us when he occasionally came up with an absurd response to a clue. But he was very capable in his realm, and the idea that he was not at all “thinking” seems in some way unfair. Perhaps not unfair to Watson, but unfair to our level of progress in understanding thinking. The idea that a robot could someday be built that by all appearances perfectly replicates human cognition can be unsettling. We don’t want to think of ourselves as merely carbon-based computing machines. But we shouldn’t let hubris stand in the way of progress. We may miss the opportunity to explain thought because we are too afraid of explaining it away.


One Comment

  1. Jeff / Jun 25 2016 9:11 pm

    I really like your post! It was so informational and articulately-worded. I guess it really depends on the disposition of the thinker, in regard to whether or not hubris is something that is not worthy to be in the way of progress. For in being able to replicate thought, surely much use could come in more complex machines that could do more complex things, but also has our technologically-advanced era really brought human beings much good? Excuse me for the potentially unpopular/impractical opinion, and I’m not saying technology is “bad”, as flint and stones are “technology”, but technology has made the (relatively wealthy, but less so as time presses on) human species rather sedentary as opposed to their naturally hunter-gatherer existences, and we will never postpone the inevitable – death – no matter how long we are able to prolong life with medicine, etc. Therefore, humans believing they are special in one manner or another, in this case, having the capability to think, is something that provides sweet relief to the journey toward death. 🙂
