Mind Hacks™: Tips & Tools for Using Your Brain

Authors: Tom Stafford, Matt Webb

Stop Memory-Buffer Overrun While Reading
The length of a sentence isn’t what makes it hard to understand — it’s how long you have
to wait for a phrase to be completed.

When you’re reading a sentence, you don’t understand it word by word, but rather phrase
by phrase. Phrases are groups of words that can be bundled together, and they’re related by
the rules of grammar. A noun phrase will include nouns and adjectives, and a verb phrase
will include a verb and, usually, an object, for example. These phrases are the building blocks of
language, and we naturally chunk sentences into phrase blocks just as we chunk visual images
into objects.

What this means is that we don’t treat every word individually as we hear it; we treat
words as parts of phrases and have a buffer (a very short-term memory) that stores the words
as they come in, until they can be allocated to a phrase. Sentences become cumbersome not if
they’re long, but if they overrun the buffer required to parse them, and that depends on how
long the individual phrases are.

In Action

Read the following sentence to yourself:

  • While Bob ate an apple was in the basket.

Did you have to read it a couple of times to get the meaning? It’s grammatically
correct, but the comma has been left out to emphasize the problem with the
sentence.

As you read about Bob, you add the words to an internal buffer to make up a phrase. On
first reading, it looks as if the whole first half of the sentence, “While Bob ate an
apple,” is going to be your first self-contained phrase — but you’re being led down the
garden path. The sentence is constructed to dupe
you. After the first phrase, you mentally add a comma and read the rest of the
sentence...only to find out it makes no sense. Then you have to think about where the
phrase boundary falls (aha, the comma is after “ate,” not “apple”!) and read the sentence
again to reparse it. Note that you have to read again to break it into different phrases;
you can’t just juggle the words around in your head.

Now try reading these sentences, which all have the same meaning and increase in
complexity:

  • The cat caught the spider that caught the fly the old lady swallowed.
  • The fly swallowed by the old lady was caught by the spider caught by the
    cat.
  • The fly the spider the cat caught caught was swallowed by the old lady.

The first two sentences are hard to understand, but make some kind of sense. The last
sentence is merely rearranged but makes no natural sense at all. (This is all assuming it
makes some sort of sense for an old lady to be swallowing cats in the first place, which
is patently absurd, but it turns out she swallowed a goat too, not to mention a horse, so
we’ll let the cat pass without additional comment.)

How It Works

Human languages have the special property of being recombinant. This means a sentence
isn’t woven like a scarf, where if you want to add more detail you have to add it at the
end. Sentences are more like Lego. The phrases can be broken up and combined with other
sentences or popped open in the middle and more bricks added.

Have a look at these rather unimaginative examples:

  • This sentence is an example.
  • This boring sentence is a simple example.
  • This long, boring sentence is a simple example of sentence structure.

The way sentences are understood is that they’re parsed into phrases. One type of
phrase is a noun phrase, here the subject of the sentence. In “This sentence is an example,” the
noun phrase is “this sentence.” For the second, it’s “this boring sentence.”

Once a noun phrase is fully assembled, it can be packaged up and properly understood
by the rest of the brain. During the time you’re reading the sentence, however, the words
sit in your verbal working memory — a kind of short-term buffer — until the phrase is
finished.
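That buffer-then-package step can be caricatured in a few lines of code. This is a toy sketch, not a model of real parsing: the words, their part-of-speech labels, and the noun-closes-the-phrase rule are all invented for the demo.

```python
# Toy phrase buffer: hold words until the noun arrives, then hand off the
# finished noun phrase in one package. Lexicon is hand-made for the demo.
LEXICON = {"this": "DET", "long": "ADJ", "boring": "ADJ",
           "simple": "ADJ", "sentence": "NOUN", "example": "NOUN"}

def chunk_noun_phrase(words):
    """Buffer words until a noun closes the phrase; return the whole chunk."""
    buffer = []
    for word in words:
        buffer.append(word)
        if LEXICON.get(word) == "NOUN":
            return buffer  # phrase complete: package it up and flush the buffer
    return None            # ran out of words: the phrase never completed

print(chunk_noun_phrase("this sentence".split()))
# ['this', 'sentence']
print(chunk_noun_phrase("this long boring sentence".split()))
# ['this', 'long', 'boring', 'sentence']
```

However many modifiers arrive, the listener gets one package, not three separate words to integrate.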

There’s an analogy here with visual processing. It’s easier to understand the world
in chunks — hence the Gestalt Grouping Principles [Grasp the Gestalt]. With language,
which arrives serially rather than in parallel like vision, you can’t be sure what the
chunks are until the end of the phrase, so you have to hold it unchunked in working
memory until you know where the phrase ends.

— M.W.

Verb phrases work the same way. When your brain sees “is,” it knows there’s a verb
phrase starting and holds the subsequent words in memory until that phrase has been closed
off (with the word “example,” in the first sentence in the previous list). Similarly, the
last part of the final sentence, “of sentence structure,” is a prepositional phrase, so
it’s also self-contained. Phrase boundaries make sentences much easier to understand.
Rather than the subject of the third example sentence being three times more complex than
that of the first (it’s three words, “long, boring sentence,” versus one, “sentence”), it can be
understood as the same object, but with modifiers.

It’s easier to see this if you look at the tree diagrams shown in Figure 4-4. A
sentence takes on a treelike structure, for these simple examples, in which phrases are
smaller trees within it. To understand a whole phrase, its individual tree has to join up.
These sentences are all easy to understand because they’re composed of very small trees
that are completed quickly.

We don’t use just grammatical rules to break sentences into chunks. One of the reasons
the sentence about Bob was hard to understand is that, after seeing “Bob ate,” you expect
to learn about what Bob ate. When you read “the apple,” it’s exactly
what you expect to see, so you’re happy to assume it’s part of the same phrase. To find
phrase boundaries, we check individual word meaning and the likelihood of word order,
continually revise the meaning of the sentence, and so on, all while the buffer is
growing. But holding words in memory until phrases complete has its own problems, even
apart from sentences that deliberately confuse you, which is where the old lady comes
in.

Figure 4-4. How the example sentences form trees of phrases

The first two remarks on the old lady’s culinary habits require only one phrase to
be held in the buffer at a time. Think about which phrases are left incomplete at any
given word. There’s no uncertainty over what any given “caught” or “by” refers to: it’s
always resolved immediately. For instance, your brain read “The cat” (in the first sentence) and
immediately said, “did what?” Fortunately the answer is the very next phrase: “caught the
spider.” “OK,” says your brain, and pops that phrase out of working memory and gets on
with figuring out the rest of the sentence.

The last example about the old lady is completely different. By the time your brain
gets to the words “the cat,” three questions are left hanging. What about the cat? What
about the spider? What about the fly? Those questions are answered in quick succession:
the fly the old lady swallowed; the spider that caught the fly, and so on.

But because all of these questions are of the same type, the same kind of phrase, they
clash in verbal working memory, and that’s the limit on sentence comprehension.
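This bookkeeping can be caricatured with a stack: each noun phrase pushes an unanswered “did what?” question, and each verb answers the most recent one. The N/V strings below are hand-made idealizations (a right-branching phrasing where each question is answered at once, versus the center-embedded third sentence), not the output of a real parser.

```python
def max_pending(tokens):
    """Count how many phrases are left hanging at the worst moment.
    'N' pushes a who-did-what question; 'V' answers the most recent one."""
    pending, worst = 0, 0
    for t in tokens:
        if t == "N":
            pending += 1
            worst = max(worst, pending)
        else:
            pending -= 1
    return worst

# Right-branching: every noun phrase is resolved by what comes next.
print(max_pending("NVNVNV"))   # 1 question open at a time
# Center-embedded: "The fly the spider the cat caught caught was swallowed..."
print(max_pending("NNNVVV"))   # 3 identical questions stack up
```

Three same-typed questions in the buffer at once is what makes the last sentence collapse.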

In Real Life

A characteristic of good speeches (or anything passed down in an oral
tradition) is that they minimize the amount of working memory, or buffer, required to
understand them. This doesn’t matter so much for written text, in which you can skip back
and read the sentence again to figure it out. But you have only one chance to hear and
comprehend the spoken word, so you’d better get it right the first time around. That’s why
speeches written down always look so simple.

That doesn’t mean you can ignore the buffer size for written language. If you want to
make what you say, and what you write, easier to understand, consider the order in which
you are giving information in a sentence. See if you can group together the elements that
go together so as to reduce demand on the reader’s concentration. More people will get to
the end of your prose with the energy to think about what you’ve said or do what you
ask.

See Also
  • Caplan, D., & Waters, G. (1998). “Verbal Working Memory and Sentence
    Comprehension” (http://cogprints.ecs.soton.ac.uk/archive/00000623).
  • Pinker, S. (2000). The Language Instinct: The New Science of Language and
    Mind. London: Penguin Books Ltd. Pinker discusses parse trees and working
    memory extensively.
Robust Processing Using Parallelism
Neural networks process in parallel rather than serially. This means that as the
processing of different aspects proceeds, already-processed aspects can quickly be used
to disambiguate the processing of the others.

Neural networks are massively parallel computers. Compare this to your PC, which is a
serial computer. Yeah, sure, it can emulate a parallel processor, but only because it is
really quick. However quickly it does things, though, it does them only one at a time.

Neural processing is glacial by comparison. A neuron in the visual cortex is unlikely to
fire more than once every 5 milliseconds, even at its maximum activation. Auditory cells
have higher firing rates, but even they have an absolute minimum gap of 2 ms between
sending signals. This means that for actions that take 0.5 to 1 second — such as noticing
a ball coming toward you and catching it (and many of the things cognitive psychologists
test) — there are a maximum of 100 consecutive computations the brain can do in this time.
This is the so-called 100-step rule.1
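The back-of-the-envelope arithmetic behind that rule, using the figures quoted above:

```python
# Back-of-envelope arithmetic for the "100-step rule".
ms_per_spike = 5   # minimum interval between spikes for a visual neuron, in ms
task_ms = 500      # a fast perceive-and-act task: about half a second

serial_steps = task_ms // ms_per_spike
print(serial_steps)  # at most 100 consecutive neural computations
```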

The reason your brain doesn’t run like a PC with a 0.0001 MHz processor is
that the average neuron connects to between 1,000 and 10,000 other neurons. Information
is routed, and routed back, between multiple interconnected neural modules, all in parallel.
This allows the slow speed of each neuron to be overcome, and also makes it natural, and
necessary, that all aspects of a computational job be processed simultaneously, rather than
in stages.

Any decision you make or perception you have (because what your brain decides to provide
you with as a coherent experience is a kind of decision too) is made up of the contributions
of many processing modules, all running simultaneously. There’s no time for them to run
sequentially, so they all have to be able to run with raw data and whatever else they can
get hold of at the time, rather than waiting for the output of other modules.

In Action

A good example of simultaneous processing is in understanding language. As you hear or
read, you use the context of what is being said, the possible meanings of the individual
words, the syntax of the sentences, and the sound of each word — or, in reading, how its
letters look — to figure out what is being said.

Consider this sentence: “For breakfast I had bacon and ****.” You don’t need to
know the last word to understand it, and you can make a good guess at the last
word.

Can you tell the meaning of “Buy v!agra” if I email it to you? Of course you can; you
don’t need to have the correct letter in the second word to know what it is (if it doesn’t
get stopped by your spam filters first, that is).
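One crude way to mimic that tolerance: score every word in a small vocabulary by how many letter positions agree with the garbled input, and let the best fit win. The vocabulary and the scoring rule here are invented for illustration; your brain brings far richer constraints to bear.

```python
def best_match(garbled, vocabulary):
    """Pick the vocabulary word agreeing with the most letter positions."""
    def score(candidate):
        if len(candidate) != len(garbled):
            return -1  # wrong length: not even a candidate
        return sum(a == b for a, b in zip(garbled, candidate))
    return max(vocabulary, key=score)

print(best_match("v!agra", ["plasma", "viagra", "violin"]))  # viagra
```

Five of six positions match “viagra,” so the wrong character in position two simply doesn’t matter.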

How It Works

The different contributions — the different clues you use in reading — inform one another,
to fill in missing information and correct mismatched information. This is one of the
reasons typos can be hard to spot in text (particularly your own, in which the
contribution of your understanding of the text autocorrects, in your mind, the typos
before you notice them), but it’s also why you’re able to have conversations in loud bars.
The parallel processing of different aspects of the input provides robustness to errors
and incompleteness and allows information from different processes to interactively
disambiguate each other.

Do you remember the email circular that went around
(http://www.mrc-cbu.cam.ac.uk/people/matt.davis/home.html) saying that you can write your
sentences with the internal letters rearranged and still be understood just as well?
Apparently, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt
tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl
mses and you can sitll raed it wouthit a porbelm.
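The circular’s trick is easy to reproduce; a throwaway sketch:

```python
import random

def scramble(word):
    """Shuffle a word's internal letters; the first and last stay put."""
    if len(word) <= 3:
        return word  # nothing internal to shuffle
    inner = list(word[1:-1])
    random.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

sentence = "the internal letters can be a total mess and you can still read it"
print(" ".join(scramble(word) for word in sentence.split()))
```

Run it a few times: the output stays readable, which is exactly the circular’s claim.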

It’s not true, of course. You understand such scrambled sentences only nearly as well
as unscrambled ones. We can figure out what the sentence is in this context because of
the high redundancy of the information we’re given. We know the sentence makes sense, so
that constrains the range of possible words that can be in it, just as the syntax does:
the rules of grammar mean only some words are allowed in some positions. The word-length
information is also there, as are the letters in the word. The only missing thing is the
position information for the internal letters. And compensating for that is an easy bit
of processing for your massively parallel, multiple constraint–satisfying language
faculty.

Perhaps the reason it seems surprising that we can read scrambled sentences is that
a computer faced with the same problem would be utterly confused. Computers have to have
each word fit exactly to their template for that word. No exact match, no understanding.
OK, so Google can suggest correct spellings for you, but type in “i am cufosned” and
it’s stuck, whereas a human could take a guess (they face off in Figure 4-5).

Figure 4-5. Google and my friend William go head to head

This same kind of process works in vision. You have areas of visual cortex responsible
for processing different elements. Some provide color information, some information on
motion or depth or orientation. The interconnections between them mean that when you look
at a scene they all start working and cooperatively figure out what the best fit to the
incoming data is. When a fit is found, your perception snaps to it and you realize what
you’re looking at. This massive parallelism and interactivity mean that it can be
misleading to label individual regions as “the bit that does X”; truly, no bit of the
brain ever operates without every other bit of the brain operating simultaneously, and
outside of that environment single brain regions wouldn’t work at all.

End Note
  1. Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their
     properties. Cognitive Science, 6, 205–254
     (http://cognitrn.psych.indiana.edu/rgoldsto/cogsci/Feldman.pdf).
