Confirmation bias as a tool of perception

I've been trying to figure out where to go next with my study of perception. One concept I'm exploring is the idea that our expectations enhance our ability to recognize patterns.

I recently found a brilliant illustration of this from researcher Matt Davis, who studies how humans process language. Try out the following audio samples. Listen to the first one several times. It's a "vocoded" version of the plain English recording that follows. Can you tell what's being said?

Vocoded version.

Click here to open this WAV file

Give up? Now listen to the plain English version once and then listen to the vocoded version again.

Clear English version.

Click here to open this WAV file

Davis refers to this a-ha effect as "pop-out":

    Perhaps the clearest case of pop-out occurs if you listen to a vocoded sentence before and immediately after you hear the same sentence in clear form. It is likely that the vocoded sentence will sound a lot clearer when you know the identity of that sentence.

To me, this is a wonderful example of confirmation bias. Once you have an expectation of what to look for in the data, you quickly find it.

How does this relate to perception? I believe that recognizing patterns in real-world data involves not only the data causing simple pattern matching to occur (bottom-up), but also higher-level expectations prompting the lower levels to search for expected patterns (top-down). To help illustrate this, consider how you might engineer a specific task of perception: detecting a straight line in a picture. If you're familiar with machine vision, you'll know this is an age-old problem that has been fairly well solved using some good algorithms. Still, it's not trivial. Consider the following illustration of a picture of a building and some of the steps leading up to our thought experiment:

The first three steps we'll take are pretty conventional ones. First, we get our source image. Second, we apply a filter that looks at each pixel to see if it strongly contrasts with its neighbors. Our output is represented by a grayscale image, with black pixels representing strong contrasts in the source image. In our third step, we "threshold" our contrast image so each pixel goes either to black or white; no shades of gray.
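Steps two and three can be sketched in a few lines of Python. This is only a rough illustration, assuming the source is a grayscale NumPy array; the function names (`contrast_image`, `threshold`) are my own, not from any particular vision library:

```python
import numpy as np

def contrast_image(src):
    """Step 2: for each pixel, measure how strongly it contrasts
    with its right and down neighbors (a crude gradient filter)."""
    src = src.astype(float)
    # Absolute differences with the next pixel, padded back to shape.
    dx = np.abs(np.diff(src, axis=1, append=src[:, -1:]))
    dy = np.abs(np.diff(src, axis=0, append=src[-1:, :]))
    return np.maximum(dx, dy)

def threshold(contrast, cutoff):
    """Step 3: force every pixel to black (True) or white (False)."""
    return contrast >= cutoff

# A tiny 4x4 "image" with a bright vertical edge down the middle.
src = np.array([[0, 0, 9, 9]] * 4)
edges = threshold(contrast_image(src), 5)
# Only the column where dark meets bright survives the threshold.
```

A real implementation would use a proper gradient operator (Sobel or similar), but the shape of the computation is the same: contrast first, then a hard black-or-white decision.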

Here's where our line detection begins. We start by making a list of all clusters of neighboring black pixels that have, say, 10 or more pixels touching one another. Next, we filter these by seeing which have a large number of pixels roughly fitting a line function. We end up with a bunch of small line segments. Traditionally, we could stop here, but we don't have to. We could pick any of these segments and extend it in either direction to see how far it can go while still finding black pixels that roughly fit the line function. We might even tolerate a gap of a white pixel or two as we continue extending. And we might try variations of the line function that fit better as the segment gets longer, in order to further refine it. But then uncertainty kicks in, and we conservatively stop stretching out when we no longer see black pixels.
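The segment-growing idea, with its tolerance for small gaps, can be sketched like this. Again, a toy illustration, not an established algorithm; `extend_segment` and its parameters are names I've made up for the purpose:

```python
def extend_segment(black, start, direction, max_gap=2):
    """Walk outward from `start` along `direction` through a binary
    (thresholded) image, collecting black pixels. Tolerate up to
    `max_gap` consecutive white pixels before giving up."""
    y, x = start
    dy, dx = direction
    points, gap = [], 0
    while 0 <= y < len(black) and 0 <= x < len(black[0]):
        if black[y][x]:
            points.append((y, x))
            gap = 0
        else:
            gap += 1
            if gap > max_gap:
                break  # uncertainty kicks in: stop conservatively
        y, x = y + dy, x + dx
    return points

# A horizontal run of black pixels with a one-pixel gap at x=3.
black = [[1, 1, 1, 0, 1, 1, 0, 0, 0, 0]]
seg = extend_segment(black, (0, 0), (0, 1), max_gap=2)
# The single gap is bridged; the long white run at the end stops us.
```

Note the asymmetry: one or two misses are forgiven, but a sustained absence of black pixels ends the search. That conservatism is exactly what the gray-pixel evidence in the next step relaxes.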

Here's where confirmation bias can help. Once we have a bunch of high-certainty line segments to work with, we now have expectations set about where lines form. So maybe we take our line segments back to the grayscale version of the contrast image. To my thinking, the gray pixels that got thresholded to white earlier still contain useful information. In fact, each gray pixel along the hypothesized line provides "evidence" that the line continues onward; that the "hypothesis" is "valid". It doesn't even matter that there may be lots of other gray, or even black, pixels just outside the hypothesized line. They neither add to nor detract from the hypothesis. Only the "positive confirmation" of gray pixels adds weight to the hypothesis that the line extends further than we could tell from the black pixels in the thresholded version. Naturally, as the line extends out, we may get to a point where most of the pixels are white or nearly so. Then we stop extending our line.
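Here's one way the gray-evidence step might look in code. This is a sketch under my own assumptions: contrast values are floats in [0, 1], the black/white threshold was 0.5, and `extend_with_evidence` accepts anything whose recent average support stays above a much lower bar:

```python
def extend_with_evidence(gray, start, direction, min_evidence=0.3, window=3):
    """Extend a hypothesized line through the grayscale contrast image,
    treating faint (sub-threshold) pixels as weak positive evidence.
    Only pixels *on* the line count; everything off the line is ignored.
    Stop when average support over the last `window` steps fades."""
    y, x = start
    dy, dx = direction
    recent, length = [], 0
    while 0 <= y < len(gray) and 0 <= x < len(gray[0]):
        recent = (recent + [gray[y][x]])[-window:]
        if sum(recent) / len(recent) < min_evidence:
            break  # the evidence has faded; stop extending
        length += 1
        y, x = y + dy, x + dx
    return length

# Contrast values along a candidate line. With a 0.5 threshold, only
# the first two pixels were "black" -- but the faint 0.4s and 0.35
# keep the hypothesis alive well past that point.
gray = [[0.9, 0.9, 0.4, 0.4, 0.35, 0.05, 0.0, 0.0]]
length = extend_with_evidence(gray, (0, 0), (0, 1))
```

In this toy case the thresholded image alone supports a line only two pixels long, while the gray evidence extends it to five before the support collapses. That's the confirmation bias at work: weak signals that would never survive thresholding on their own still count, because the hypothesis tells us where to look.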

I love this example. It shows how we can start with the source data "suggesting" certain known patterns (here, lines) and that a higher-level model can then set expectations about bigger patterns that are not immediately visible (longer lines) and use otherwise "weak evidence" (light gray pixels) as additional confirmation that such patterns are indeed found. To me, this is a wonderful illustration of inductive reasoning at work. The dark pixels may give strong, deductive proof of the existence of lines in the source data, but the light pixels that fit the extended line functions give weaker, inductive evidence of the same.

I don't mean to suggest that perception is now solved. This example works because I've predefined a model of an "object"; here, a line. I could extend the example to search for ellipses, rectangles, and so on. But having to predefine these primitive object types seems to miss the point that we are quite capable of discovering these and much more sophisticated models for ourselves. There's no real learning in my example; only refinement. Still, I like that this illustrates how confirmation bias -- something of a dirty phrase in the worlds of science and politics -- probably plays a central role in the nature of perception.
