Chapter 7: Language & Cognition — Master Introductory Psychology

Language & Cognition

How we acquire and use language, form concepts, solve problems, and make decisions — and the surprising ways thinking goes wrong.

📖 17 sections ⌛ ~35 min read 🔑 35 key terms ✎ 10 review questions

How Do We Acquire Language?

In 1957, B. F. Skinner published Verbal Behavior, an explanation of language acquisition according to behaviorist principles of reinforcement and punishment. Skinner believed that language acquisition could be understood in the same way that lever pressing and disc pecking were understood. Children were reinforced for proper language use and ignored or punished for improper use, and the accumulation of all these consequences eventually led to the full development of their linguistic capabilities.

Noam Chomsky, a young linguist at MIT, was not convinced by Skinner's explanations, and in 1959 he wrote a scathing review arguing that Skinner's claims about language acquisition were unsupported speculation. While Chomsky's review was not without critics of its own, this fiery attack on an icon of behaviorism has been credited with helping to spark the cognitive revolution that followed. Beginning in the 1960s, psychologists shifted their focus away from observable behaviors and onto internal mental processes.

In considering internal processes in language acquisition, Chomsky argued that humans must have a "Language Acquisition Device" (LAD). This isn't necessarily a brain structure or particular gene sequence, but rather a term for some device, system, or network in the human brain that allows language ability to emerge as long as there is sufficient input. Chomsky claimed that humans have this LAD and other animals don't, which explains why nearly all human children develop linguistic abilities almost effortlessly, while even years of intensive training fail to condition complex language use in animals.

✎  Quick check — Section 1
What is a phoneme?

The Building Blocks of Language

In order to better understand critiques of the behaviorist approach, we need to know a little more about linguistics, the study of language.

All spoken languages are made up of individual sounds, though the sounds used in each language may vary. Some sounds (like the rolled Rs in Spanish or clicking sounds in Xhosa) may be somewhat unique, while other sounds may be shared among many languages. These small units of sounds that are used in a language are called phonemes. Examples of phonemes in English would be sounds like the f sound in fat or the c sound in cat. English uses about 40 phonemes in total, while other languages may have twice this amount, or may manage to get by with just over a dozen.

While phonemes like f and c don't mean anything on their own, language also contains small units that are meaningful. These meaningful units are referred to as morphemes. (You can remember that morpheme starts with m, and morphemes have meaning.) Morphemes can be as small as one letter, like a, or they can be short words that cannot be broken down into smaller meaningful parts, like bat. Morphemes also include suffixes and prefixes that affect meaning, like pre-, post-, -ism, or even just an -s at the end of a word to make it plural. So we might have one word composed of multiple morphemes, like bats (two morphemes: bat and -s) or batman (two morphemes: bat and man). Just as a limited number of phonemes can create thousands of morphemes, these morphemes can be combined to create many more thousands of different words (as evidenced by the 616,500+ word-forms you can find in the Oxford English Dictionary).

When phonemes and morphemes are combined to create words, certain rules apply. Each language has different rules for how its phonemes and morphemes can be combined and how these combinations are pronounced. These rules are known as phonological rules. Violating the phonological rules of a particular language results in an accent, which may be characteristic of a particular region. For example, a regional Boston accent, often demonstrated with the sentence "Park the car in Harvard Yard", is known for dropping r sounds when they follow vowels, resulting in something like "Pahk the cah in Hahvahd Yahd". In this case, a Bostonian is combining phonemes in a manner that doesn't follow the phonological rules of standard English.

Accents that arise when learning a new language may occur because people are carrying over phonological rules from their native language, or because they have difficulty recognizing or producing the combinations of phonemes of the new language. For example, native Mandarin speakers may have trouble pronouncing English words that end in an L sound, because Mandarin doesn't use this phoneme in this way (L sounds are always followed by a vowel, as in la, le, li, lu). If you've seen the show An Idiot Abroad, now you can better understand the difficulty that Karl Pilkington had with Chinese speakers always calling him "Karla".

Now that we have phonemes and morphemes and rules for combining them to make words, we need to have a set of rules for how to combine these words into meaningful phrases and sentences. These are syntactical rules. These aren't the nit-picking grammatical rules from your English class, but the more general rules for the order of words and using tenses to communicate meaning.

✎  Quick check — Section 2
Chomsky's concept of a language acquisition device suggests:

Universal Grammar

Chomsky proposed a theory of "universal grammar" which suggests that the overall brain mechanisms for understanding and processing human language are the same for any language, and therefore must be an innate human characteristic. Since children everywhere seem to automatically learn the language around them, regardless of what that language is, it seems that an understanding of the general rules of how language works will naturally arise in any child exposed to language. In this respect, language seems different from the acquisition of other concepts like mathematical understanding. For example, an intuitive understanding of grammar will emerge naturally starting around age 2 in just about everyone given language exposure. We don't see similarly rapid and near-universal understanding in other areas, suggesting a unique biological predisposition for concepts of grammar.

Many people misunderstand the concept of universal grammar. It's not saying that all languages share the same grammatical rules. It's saying that human languages all follow similar types of rules in their overall structure because all human languages are created and used by human brains, and those brains innately conceive of language working in the same way.

So while languages may differ in where they place subjects and objects, whether they conjugate verbs or not, or how they structure the future tense, these are just minor details compared to the overall structure of how language works. The concept of universal grammar points out that although languages differ in the details, they all have ways of indicating the concepts of nouns, verbs, and adjectives because these are the structures necessary for human brains to communicate linguistically. Unlike other animals, we seem to be born prepared for language, and this innate predisposition allows us to learn language rapidly.

✎  Quick check — Section 3
What is the Sapir-Whorf hypothesis?

Language Development

The rate of vocabulary growth in children is truly phenomenal. Between the ages of 1 and 5, children will acquire a vocabulary of thousands of words, at an average rate of around 10 words per day; far too quickly for reinforcement to condition each word individually. Part of this explosion of vocabulary growth comes from fast-mapping: the ability to connect a word with a meaning after only a single exposure. So when Mom is in the grocery store and says "this is an apple", the child is able to connect these new sounds, "apple", with this new object. Without any further reinforcement, the child may see an apple a week later and correctly name it.
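A quick back-of-the-envelope check of that rate (assuming, as a simplification, a perfectly steady 10 words per day from age 1 to age 5) shows just how large the resulting vocabulary would be:

```python
# Rough estimate of vocabulary acquired between ages 1 and 5,
# assuming a simplified steady rate of 10 new words per day.
years = 4            # from age 1 to age 5
words_per_day = 10   # average rate cited in the text

total_words = years * 365 * words_per_day
print(total_words)   # prints 14600
```

Even this crude estimate lands in the "thousands of words" range, far more than could plausibly be shaped one word at a time by reinforcement.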

Language development milestones — from babbling through single words, two-word utterances, and telegraphic speech to full grammatical language.

Children also develop an understanding of syntax quite rapidly. Children begin babbling at around 4 months old, and by the age of about 1 year children generally begin speaking single words, usually nouns. This is actually quite impressive because it demonstrates that they have extracted the concept of nouns from all the other words and sounds they have heard.

Around age 2, children begin stringing words together into short sentences of just two words, known as telegraphic speech. These mini-sentences leave out function words like the, a, and an, and include only the most important words for communicating (like a telegraph message). This indicates that children are already able to identify which words really matter and which are just grammatical filler. These two-word sentences show further evidence of grammatical understanding because they tend to follow the syntax of the child's native language.

Young English speakers, for instance, will say things like “want cookie” rather than “cookie want”, following the general rule of placing an object after a verb in English. They will also say things like “big dog”, placing the adjective before the noun, while their Spanish-speaking counterparts are saying “perro grande”, reversing the order to follow the syntax of Spanish. Even these tiny fragments reveal that children are already picking up the rules of the language around them.

Around age 4 or 5, children begin making a new kind of grammatical error, and these errors actually provide evidence that children are learning the rules of language rather than simply imitating others. At this age, children make errors such as saying "She hitted him" or "He runned to school". These errors reveal two important things. The first is that children have acquired an understanding of how the language works without necessarily having explicit grammar instruction. This overgeneralization of a rule reveals that children actually understand the "-ed makes the past tense" rule, even though they are misapplying it.

Secondly, overgeneralization (also sometimes referred to as overregularization) supports the idea that language isn’t learned by conditioning because these new errors have not been reinforced in the past (or even heard, since Mom and Dad never say things like “goed” or “bringed”).

Another well-known demonstration that children are discovering the rules of language is known as The Wug Test, created by Jean Berko Gleason. In this test, children are introduced to a novel word by being shown a picture or toy called a “wug”. Then when another wug is presented, the children complete the sentence “Here are two ____”. The fact that young children can correctly complete the sentence with “wugs” means that they understand the concept of pluralization of nouns, even though they haven't been explicitly taught this rule and they've never heard anyone else say “wugs”. Overgeneralization and the Wug Test demonstrate that a child is actively working out the rules of language, not simply parroting back phrases. This process of working out the rules seems to occur naturally with exposure to language.
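The rule the Wug Test reveals can be sketched as a one-line function. This is of course a toy model, not a claim about how children actually compute plurals; real English pluralization has exceptions (children, mice) that this naive rule gets wrong, which is exactly the kind of overgeneralization described above:

```python
# A toy model of the pluralization rule children demonstrate in the Wug Test:
# apply "add -s" to any noun, even a novel one they have never heard before.
def pluralize(noun):
    return noun + "s"

# Works for the novel word "wug" with no prior exposure...
print("Here are two " + pluralize("wug") + ".")   # prints "Here are two wugs."

# ...and overgeneralizes on irregular nouns, just as children do.
print(pluralize("mouse"))                         # prints "mouses", not "mice"
```

The point of the sketch is that a single general rule, applied everywhere, reproduces both the children's success with "wugs" and their characteristic errors.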

There is, however, a clock ticking on this natural process of figuring out the grammatical rules of a language. In order for this to occur, it seems that a child must have exposure to a language before some time around age 7. Without exposure before this time, a critical period has passed and the development of full fluency in language is no longer possible.

This can be seen in the tragic cases of children deprived of language exposure for many years. Genie was a girl who was kept in isolation by her father, who didn't speak to her at all, from when she was 20 months old until she was rescued at the age of 13. Despite intensive efforts to teach Genie language, she was never able to develop a high level of competency. A similar case involved a girl named Isabelle who was kept in silent isolation until she was 6 years old. Following her rescue, however, Isabelle was able to make a full recovery and in fact within 1 year of instruction her language abilities were within a normal range.

It's important to remember that this critical period refers to the ability to acquire language and grammar in one's first language, not a second (or third or fourth) language. It's not saying that a particular language can't be learned after age 7. But you must have learned your first language by this point in order for your brain to develop the ability to process the rules of any language.

It seems that if children have had sufficient exposure to any language, this equips their minds to understand the concept of grammatical rules, which in turn allows them to learn other languages in the future, even if the specific grammatical rules of the next language are completely different. While it is true that a younger starting age for a second language is associated with attaining higher levels of fluency, it is still possible for people to achieve high levels of proficiency and fluency in another language even if they begin learning at a later age.

There are some people, however, for whom naturally understanding the rules of grammar just doesn't seem to happen. These are people with genetic dysphasia, a rare condition in which a person is unable to learn the rules of grammar in his or her native tongue. Despite being able to learn complex ideas and use advanced vocabulary, these people are seemingly unable to work out the rules of grammar and repeatedly make errors. This suggests that language is a specialized module, distinct from other types of learning and cognitive skills. In this way, the fact that people with genetic dysphasia can't seem to learn grammar provides further evidence that, for most people, acquiring language is an innate and near-automatic process.

✎  Quick check — Section 4
A prototype is best described as:

The Importance of the Social Environment

The evidence for critical periods and the importance of exposure to language at a young age both suggest that the social environment is a crucial component of the language acquisition process. With this in mind, we shouldn't adopt a strictly nativist approach, but should instead adopt an interactionist approach to consider how our genetic predisposition for language interacts with our environment to allow for the complete development of linguistic fluency.

One of the most startling cases of how our social environment allows language to develop comes from Nicaragua. Following the revolution of 1979, the new government in Nicaragua created schools for the deaf in the capital city of Managua. For the first time, deaf children from remote areas all over the country were brought together and given instruction in sign language. Previously, many deaf children had been deprived of a full language environment in their hometowns and had only learned to communicate with family and friends via simple gestures. Without a formal sign language in place and deprived of a social environment which would allow them to communicate with others, these children never developed full linguistic fluency and concepts of syntax. With the new school, however, the hope was that deaf children could be taught an existing sign language, learn vocational skills, and become more independent members of society. Hopes for these lofty goals were questioned, however, as initial attempts to teach a formal sign language failed.

What happened instead was even more exciting. By placing all these young children together in the same environment, with a strong desire to communicate with one another, a new language began to emerge. While these gestures were initially dismissed as mere miming and mimicry, this turned out to be more than just playground slang. It was a fully functional language that naturally emerged, complete with all the syntactical rules, tenses, and other grammatical features you would find in any other language. This case shows us that the human mind has a natural drive to communicate and create structured language. So while you may think that grammatical rules were created to punish you in English class, the truth is that grammar is a naturally-emerging phenomenon that allows us to communicate more fully with others.

✎  Quick check — Section 5
Functional fixedness is an obstacle to problem solving because it:

Does Language Influence How You Think?

One puzzling question that arises when thinking about language is how language and thought are related. When we see deaf adults from Nicaragua who were too late for the development of full language abilities, we may wonder just what their thoughts are like. Our own thoughts are so intertwined with our language that it may seem impossible to separate the two. For most of us, it often seems that all the thoughts we have occur in the form of words and we may wonder how those words influence our ways of thinking about the world.

The notion that our native language influences how we perceive the world is known as the linguistic relativity hypothesis. It is also referred to as the Sapir-Whorf hypothesis, based on the work of Benjamin Whorf and his mentor Edward Sapir. Whorf proposed that, due to the grammatical structures of their language, speakers of the Hopi language conceptualized time differently than English speakers.

Whorf's views (published in the late 1930s and early 1940s) were heavily criticized throughout the 1960s (likely due to the dominance of the view of universal grammar) but have recently been re-examined, and some psychologists are reconsidering the possible role of language in how we think about the world. While few would suggest that our language determines our thought, researchers have studied how the language we speak may influence our perception and memory for colors, objects, and time, though results have been mixed.

We also see the question of how language and thought are related in the form of popular articles listing “untranslatable” foreign words, often suggesting that speakers of these languages conceptualize the world in ways that other speakers don't. We may wonder, however, if this is actually the case, since while these terms cannot be reduced down to single words in other languages, they can most certainly be described (or else the article couldn't be written). This would suggest that even if we don't speak the language we can still have the same types of thought. Just because English doesn't have a single word for schadenfreude doesn't mean that I can't conceptualize (or even experience) taking joy in the suffering of others.

One of the problems with analyzing how language influences thought is that language is such an integral part of culture. We may readily admit that culture influences how we think, but it can be difficult or impossible to determine exactly how language fits into this interaction. One way of attempting to separate language from culture is to look at people who live in similar cultures but speak different languages. For instance, I could look to a country like Switzerland and find native speakers of French, German, Italian, and Romansh who, apart from language, arguably have rather similar cultural environments.

In addition to considering how language influences thought between individuals, we can consider how language may influence thought within individuals. In bilingual individuals who speak both languages fluently, perhaps their thinking varies when using one language or another. Several studies have investigated this, with some surprising results. Research by Michael Ross, Elaine Xun, and Anne Wilson in 2002 found that bilingual Chinese/English speakers described their personality traits quite differently depending on the language they used for the description. Similarly, Nairán Ramírez-Esparza and colleagues published a study in 2006 looking at bilingual Americans and Mexicans and found variation in their personality scores for extraversion, agreeableness, and conscientiousness depending on whether they were assessed in English or Spanish.

While the linguistic relativity hypothesis considers thought across languages, we may also wonder how language influences thought within a particular language. Does Orwellian “newspeak” really influence how people think about important issues? Does the fact that Beijing media refers to the city's pollution as 雾霾 wu mai (literally “fog-haze”) rather than using the general term for pollution 污染 wu ran (literally “dirty contamination”) influence how the city's residents perceive their environment? As we've already seen in memory experiments by Elizabeth Loftus, and as we'll see in framing studies by Amos Tversky and Daniel Kahneman in the next section, it does seem to be the case that particular wording has the potential to influence thought and decision-making, though the extent of this influence is still up for debate.

✎  Quick check — Section 6
The availability heuristic leads us to judge the probability of events based on:

Concepts and Categories

Language provides us with ways of organizing thought, and one way that we see this organization is the creation of concepts and categories. These are mental groupings of objects or ideas which are similar. Rather than creating a completely new word for every possible shape of a chair, we can use the generic word “chair” both to represent a particular chair, and the concept of a chair in general. The question is, how do our brains go about this organizational process? When we walk into IKEA, how do we immediately recognize that a stimulus we've never seen before is in fact a chair?

If we try to come up with the rules our brains are using to decide on “chair-ness”, we find that it's no simple task. Must it have 4 legs? Well, some chairs don't have legs at all but have solid bases instead and we recognize those just the same. A soft or hard surface? Again, both seem acceptable, so that doesn't help us to create a rule. Sharp or rounded corners? None of these rules seem to work in helping us recognize chairs, and yet, we can almost always recognize any chair immediately upon seeing it. So if we aren't following specific rules, how are we doing this?

What seems to be happening is that we have some guidelines rather than rules. These guidelines are the features that are common to chairs, and when we think of a chair, we generally think of these characteristics. We seem to think of an idea of a chair that has most of these features, but we also recognize that there are exceptions and variations. In considering a new object, these guidelines come to mind, but we don't use them as hard-and-fast rules for making a judgment.

To demonstrate this, take a moment and quickly doodle a picture of a bird.

Do so now, before looking ahead ; )

Now, if my magical powers of intuition are correct, you may have drawn something like this:

Chances are that you didn't draw a penguin, or an ostrich, or a peacock. While all of these are fine examples of birds, they probably aren't the first birds that came to mind. What probably came to mind is a "most typical" version of a bird. This most typical or best example is known as a prototype. A prototype has many or all of the characteristic features that we associate with a particular category.

So for birds, your prototype is probably small, feathered, winged, able to fly, and has twig-like legs and a small beak. If, when walking in a park, you encounter a new stimulus that might be a bird, you mentally compare it to your prototype to see how well it fits this category. A bird that is a close match to your prototype would probably be recognized more quickly than one that differs drastically. The same would be true for recognizing chairs. You have a mental prototype of a chair with all the “most chair-like” features, and you compare new possible chairs to this mental image.

In attempting to explain category recognition, exemplar theory considers that we aren't limited to thinking just about prototypes, and we may also mentally compare a stimulus to other examples in memory (though this may take us a little longer). For chairs, this may allow you to recognize a soft lumpy mass as a chair, because you have a memory of beanbag chairs as a less-common but still acceptable type of chair. Or if you were to encounter a large, flightless animal, you could still consider the possibility that it is a bird, even though several features don't match your prototypical bird.
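The idea of comparing a stimulus to a prototype can be sketched as simple feature overlap. This is only an illustration: the feature set, the example animals, and the overlap score are assumptions chosen for clarity, not a real cognitive model:

```python
# Toy prototype matching: score a new stimulus by the fraction of
# prototypical bird features it shares with the stored prototype.
BIRD_PROTOTYPE = {"feathers", "wings", "flies", "small", "beak"}

def prototype_match(stimulus_features):
    """Return the fraction of prototype features present in the stimulus."""
    shared = BIRD_PROTOTYPE & stimulus_features
    return len(shared) / len(BIRD_PROTOTYPE)

robin = {"feathers", "wings", "flies", "small", "beak"}
penguin = {"feathers", "wings", "beak", "swims", "large"}

print(prototype_match(robin))    # prints 1.0 (close match: recognized quickly)
print(prototype_match(penguin))  # prints 0.6 (partial match: slower, may need exemplars)
```

A high score corresponds to fast recognition of typical birds like the robin, while a penguin's lower score mirrors the slower judgment that exemplar theory says may fall back on remembered examples.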

✎  Quick check — Section 7
The framing effect demonstrates that:

How Do We Solve Problems?

Our minds do quite a lot of problem solving on a regular basis, but how do we go about solving problems? For some problems, it turns out, we don't actually have to do much solving. If I were to ask you to solve 3 + 4, chances are that you don't actually need to solve it. Instead, you simply need to remember the answer that you figured out long ago. But when you were younger and still learning addition, you may have had to actually work out the solution, perhaps by counting on your fingers.

So our memory of solutions that have worked in the past can help us to solve problems that appear again, but the downside to this approach is that it makes us vulnerable to applying a solution that worked in the past but won't necessarily work in the present. This tendency to think only of what has worked before is known as our mental set. As Abraham Maslow wrote: "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail" (Maslow, 1966, p. 15). Perhaps, like me, you've wasted a great deal of time trying old solutions on new problems because you were unable to think of the problems from a new perspective.

There's also a tendency to view tools as only being useful in the ways that we usually use them. This is known as functional fixedness. Let's say that you want to hang a picture on your wall, but when you check your toolbox you realize you lent your hammer to a friend. You might not immediately realize that you could probably drive in the nail using the side of the wrench sitting in front of you, because of the tendency to think of wrenches as being used for wrenching things, not for hammering them.

I'm not suggesting that functional fixedness is always a bad thing. In fact, this tendency to concentrate on the normal functions of objects and tools is probably a good thing. Sure you could also use your cell phone to drive in that nail, but in the process you'd probably ruin its ability to perform the functions that it was specifically designed for. Nevertheless, we should be aware of this tendency and consider when novel uses for tools might be beneficial.

A related concept here is the difference between convergent and divergent thinking. In convergent thinking, we move towards one single solution to a problem. So if you're solving an algebra equation, often there is only one solution and each step should move you towards that single solution. In our hammer example, convergent thinking answers the question “how do I hammer in a nail?” by pointing towards one solution: a hammer.

Divergent thinking, on the other hand, is for situations with many possible solutions, and one solution may not be at all related to the other solutions. For example, if I asked you to imagine all the possible uses of a hammer, you may think of using it as a paperweight, and a moment later you may think of using it to get an object that's just out-of-reach, even though these solutions don't have much in common.

Obstacles to Problem Solving

As we begin considering solutions to problems, we should be aware of our tendency toward overconfidence. We tend to think that our views and judgments are correct even when they aren't, and we can be more confident than correct. For instance, when attempting to spell difficult words, participants who felt 100% sure they were correct were actually correct only about 80% of the time.
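Expressed numerically, the spelling result amounts to a 20-point calibration gap between confidence and accuracy (a minimal sketch; the 80% figure is the approximate value cited above):

```python
# Overconfidence as a calibration gap: stated confidence minus actual accuracy.
confidence = 1.00   # participants felt 100% sure of their spellings
accuracy = 0.80     # they were actually correct about 80% of the time

overconfidence_gap = confidence - accuracy
print(f"Calibration gap: {overconfidence_gap:.0%}")   # prints "Calibration gap: 20%"
```

A well-calibrated judge would show a gap near zero: feeling 80% sure and being right 80% of the time.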

Of course, even now that we know about overconfidence, we might feel that this error only applies to other people, not us. If you believe that you're never overconfident, then you are probably both wrong and overconfident. This tendency to think that errors and biases are just other people's problems relates to another type of bias. If you ask people to estimate how their own traits and abilities compare to others' (attractiveness, IQ, driving ability, and so on), what you will find is that most people believe themselves to be above average. This is, of course, a statistical impossibility, but that doesn't always factor into people's self-assessments. This tendency to feel above average is known as illusory superiority or the Lake Wobegon Effect, named after the fictional town of Lake Wobegon from Garrison Keillor's A Prairie Home Companion, a place "where all the women are strong, all the men are good-looking, and all the children are above average."

The Lake Wobegon Effect may actually be part of our "psychological immune system". Perhaps the tendency to exaggerate estimates of our own positive traits helps to keep us motivated and upbeat, and, when combined with a sense of overconfidence, makes it easier to make decisions and more likely that we will take action toward our goals.

Belief Bias

It's not surprising that our beliefs can cloud our judgment, but we may not realize just how easily our pre-existing beliefs can disrupt our ability to think logically. Research by Jonathan Evans, Julie Barston, and Paul Pollard found that previously existing beliefs influenced subjects' ability to determine whether statements were logically valid or invalid. It seems that we have difficulty separating our beliefs from our analysis of logic and we may not be aware of this influence.

Not only are we unaware of how our beliefs can influence our reasoning, our beliefs are resistant to change. This is known as belief perseverance. Even when we are confronted with contradictory evidence, it generally does little to change our beliefs. Lee Ross, Mark Lepper, and Michael Hubbard (1975) gave participants false feedback on their performance (to create a belief about their abilities), then revealed that the feedback was unrelated to their actual performance (contradictory evidence). They then asked participants to assess their own performance and found that the ratings still matched the false feedback they had previously received. Participants who had been told they performed well stuck to the belief that they were good at the task, even though they knew the feedback hadn't been accurate.

I'd like to offer a personal example that many students will relate to and illustrates several of the ideas above. Just as students tend to be overconfident in how quickly they can complete assignments, I was overconfident in how quickly I could write this book. Each chapter took me much longer than I had expected. Nevertheless, as I began a new chapter, I once again believed that I would be able to finish rapidly, ignoring the contradictory evidence and maintaining my belief.

This tendency to cling to our existing beliefs can be dangerous when making important decisions like determining the innocence or guilt of criminal defendants. While we may hope to uphold the notion that everyone is innocent until proven guilty, once a person has been viewed as a prime suspect it may be difficult for judges, jurors, or witnesses to overcome an initial belief of guilt.

If you're looking for a way to avoid this potential trap, research by Charles Lord and colleagues may be able to help. Their research suggests that considering opposite findings may have a stronger corrective effect than simply trying to be unbiased. When evaluating studies on the death penalty, some participants were told to “ask yourself at each step whether you would have made the same high or low evaluations had exactly the same study produced results on the other side of the issue.” Other participants were simply told to be as unbiased as possible. The “consider the opposite” participants showed less bias in their critical evaluations of the studies and their attitudes toward the death penalty did not become as polarized.

Problem Solving Approaches

When it comes to solving problems, there are different approaches that we can take. One way to solve a problem is to use an algorithmalgorithmA methodical, step-by-step problem-solving procedure that guarantees a solution if one exists.. An algorithm is a step-by-step procedure that guarantees you will reach a solution. Imagine that I'm getting ready to go out, but I can't find my apartment keys. I know they must be somewhere in my apartment, because I previously used them to get inside. An algorithm which would guarantee finding the keys would be to start in one corner of my apartment and systematically move through every square inch of the apartment until the keys were found.

Even though this approach would guarantee finding my keys, it's probably not the approach I would actually use. Instead of using an algorithm, I would probably use a heuristicheuristicA mental shortcut that speeds up decision-making but can lead to systematic errors.. A heuristic is a mental shortcut that doesn't guarantee a solution, but works most of the time. In searching for my keys, a heuristic I might use is to look where I last remember having them. Now they may not actually be where I last remember having them, but it's still a good place to start, rather than beginning in a corner of the room and slowly moving inch-by-inch through the apartment.
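The contrast between the two search strategies can be sketched in a few lines of code. The apartment layout and location names below are purely hypothetical, invented for illustration:

```python
# A sketch contrasting an exhaustive algorithm with a heuristic search.
# The locations and the keys' hiding spot are made-up examples.

apartment = ["corner shelf", "couch cushions", "kitchen counter",
             "coat pocket", "bathroom sink", "desk drawer"]
keys_at = "coat pocket"

def algorithm_search(locations, target):
    """Systematic, exhaustive search: guaranteed to find the keys
    if they're anywhere in the list, but checks every spot in order."""
    checks = 0
    for spot in locations:
        checks += 1
        if spot == target:
            return spot, checks
    return None, checks

def heuristic_search(locations, target, last_remembered):
    """Mental-shortcut search: check the most likely spot first.
    Fast when the hunch is right, but offers no guarantee by itself."""
    ordered = [last_remembered] + [s for s in locations if s != last_remembered]
    return algorithm_search(ordered, target)

print(algorithm_search(apartment, keys_at))                 # ('coat pocket', 4)
print(heuristic_search(apartment, keys_at, "coat pocket"))  # ('coat pocket', 1)
```

When the memory-based hunch is correct, the heuristic finds the keys on the first check; when it's wrong, the fallback to systematic search still succeeds eventually, which mirrors why our minds accept the trade-off.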

Heuristics and Decision-Making

It turns out that our minds love using heuristics. We're generally willing to accept the trade-off of occasionally being wrong in order to get the benefit of being fast. Amos Tversky and Daniel Kahneman spent decades uncovering the heuristics we use by creating ingenious demonstrations of how these heuristics can lead us astray. Here are just a few of the questions that Tversky and Kahneman have posed to participants, along with descriptions of the heuristics they reveal. These studies show us the consistent pattern of errors we make, which can be likened to the way that visual illusions consistently fool our perception. Dan Ariely refers to these as “decision illusions”, and unlike visual illusions, it can be difficult for us to realize when we are being fooled.

The Availability Heuristic

In their first study of the availability heuristicavailability heuristicJudging the probability of events based on how easily examples come to mind., Tversky and Kahneman asked participants to estimate whether English had more words that started with the letter k or more words that had k as the third letter. What do you think?

If you're like most participants in their study, you might guess that there are more words that start with k than words that have k as the third letter, when in fact this is not the case. The reason you might have misjudged is that you were probably able to bring to mind more examples of words that start with k. We don't usually bring words to mind based on their third letter, so naturally it was harder to think of those examples. Thinking of words that start with k probably seemed much easier, and as a result you may have felt that these words are more common. The availability heuristic says that we tend to estimate the frequency of events based on how easily we can come up with examples, or how “available” they are to our mind.

This tendency to assume that things which are more easily recalled are more frequent can be seen when we attempt to estimate the likelihood of events like terrorist attacks or plane crashes. Examples of these unfortunate events tend to spring to mind rather easily, and this may cause us to assume that these events occur more frequently than they actually do.

On the other hand, events and dangers which actually are more frequent may not come to mind quite so readily. So parents may fear letting their children walk to school because of highly-memorable news stories of child abductions, even though the risks posed by choking, drowning, or car accidents are far greater. Often it seems that the things we fear might kill us are not the things that are actually likely to kill us. As comedian Norm Macdonald noted, you spend time thinking “gee, I hope a terrorist doesn't attack and kill me”, when it's far more likely that your own heart is what will attack and kill you.

That Sounds Like A ….

In another study, Tversky and Kahneman asked participants to read randomly selected assessments from 100 interviews with 70 engineers and 30 lawyers. Participants were then asked to guess the likelihood that the person in question was an engineer or a lawyer. So I might randomly pull out an assessment and tell you that Adam was described as particularly outgoing, interested in politics, and displayed skill in argument. Then you would guess the odds that Adam was a lawyer or an engineer. What do you think? Here's another example from one of Tversky and Kahneman's descriptions:

“Dick is a 30 year old man. He is married with no children. A man of high ability and high motivation, he promises to be quite successful in his field. He is well-liked by his colleagues.” (Tversky and Kahneman, 1973)

What do you think about Adam and Dick? What do you think the odds are that Dick is an engineer?

You probably immediately thought of Adam as a lawyer but found Dick a little harder to pin down. Most people in the study figured there was an equal chance that Dick was an engineer or a lawyer, and so they estimated the odds of him being an engineer at around 50%. If you thought the same way, you relied on the Representativeness HeuristicRepresentativeness HeuristicJudging the probability of events by how closely they match a prototype, ignoring base-rate information. – estimating likelihood by using information from prototypes and ignoring more useful information about base rates. In this case you tried to match these profiles to your prototypes of engineers and lawyers, and in doing so, you ignored information that was more relevant. If 70 of the 100 interviewees were engineers and the assessments were randomly selected, there is a 70% chance that any selected profile belongs to an engineer, regardless of personality traits. So if you were betting on accuracy in this task, the smart money would always be on engineer.
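The base-rate logic can be made concrete with a little arithmetic. The 70/30 counts come from the study description; the accuracy comparison simply follows from random selection:

```python
# Base rates in the engineers-and-lawyers task (counts from the study).
engineers, lawyers = 70, 30
total = engineers + lawyers

# With random selection and no other information, the base rate alone
# gives the probability that a drawn profile belongs to an engineer.
p_engineer = engineers / total
print(p_engineer)  # 0.7

# A bettor who always says "engineer" is right 70% of the time.
always_engineer_accuracy = p_engineer

# Guessing 50/50 (as participants effectively did for Dick) is right
# only half the time on average, whichever profession is drawn.
coin_flip_accuracy = 0.5 * p_engineer + 0.5 * (1 - p_engineer)
```

The gap between the two strategies is exactly why ignoring base rates is costly: the prototype-matching guess throws away the most predictive piece of information in the task.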

The Framing Effect

Tversky and Kahneman also investigated how the framing of questions can influence our decision-making by giving two groups of participants the same decision to make, but with a slight change in wording. Participants were asked to imagine the outbreak of a rare virus in a village of 600 people. They were then asked to adopt a program to combat this threat.

One group of participants was told that Program A would save 200 lives, while Program B had a 1/3 chance of saving everyone, and a 2/3 chance of saving no one.

The other group was told that with Program A, 400 people would die, while Program B had a 1/3 chance no one would die and a 2/3 chance everyone would die.

As you can see, the choices are mathematically identical for both groups; the difference is how the options are framed: lives saved vs. deaths. When the choices were framed as “saving”, Program A was popular, but when those same numbers were framed as how many would die, the riskier Program B was widely adopted. Tversky and Kahneman proposed that when we consider benefits we like certainty, but when we consider losses we're more likely to take risks.
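It's easy to verify that the two framings describe identical outcomes by computing expected lives saved; the 600-person village and the program figures all come from the scenario above:

```python
# Expected outcomes of the virus scenario under both framings.
population = 600

# "Saving" frame
program_a_saved = 200
program_b_saved = population / 3          # 1/3 chance all 600 saved, 2/3 chance none

# "Dying" frame (400 deaths = 200 saved)
program_a_deaths = 400
program_b_expected_deaths = 2 * population / 3  # 2/3 chance all 600 die

print(program_a_saved, program_b_saved)    # 200 200.0
print(population - program_a_deaths,
      population - program_b_expected_deaths)  # 200 200.0
```

Every option works out to an expected 200 survivors; only the wording changes, which is what makes the preference reversal so striking.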

We see the use of framing in everyday life as well. Medications and interventions emphasize their 70% effectiveness rate, rather than frame it as a 30% failure rate. Customers find it more appealing when gas stations offer a “cash discount” rather than a “credit card fee” and government agencies may find more popular support for programs that aim to “help the needy” rather than “provide welfare”.

How we define options can have an effect on the decisions that are made and which choices seem preferable. Eric Johnson and Daniel Goldstein analyzed data on organ donation in countries across Europe and found that the major factor in how many people were willing to participate in the organ donation program was whether the sign-up form was an automatic opt-in or opt-out. If the default option was joining, membership in the program was high, nearly 100% in many countries, but when not joining was the default, membership rates were low. We may not like the realization that we are so easily influenced by default options and the choices that form-designers make for us, but the evidence certainly points us toward that conclusion.

The Sunk-Cost Fallacy

The final decision-making error that we'll address may be familiar to you already and is known as the sunk-cost fallacysunk-cost fallacyThe tendency to continue investing in something because of past costs, even when future losses outweigh benefits.. This is the tendency to make present decisions based on previous investments. These previous investments may no longer have any actual bearing on the decision being made, but we seem to have a hard time letting them go. So gamblers and stock speculators may make increasingly risky bets in attempts to make up for past losses even though those past losses don't have any effect on their odds of winning the present bets.

Perhaps this shouldn't always be considered a fallacy, as there are certainly times when our previous investments rightly influence our present decisions, but we should consider how we frame past events and how they may become part of our decision-making. When we are tempted to abandon a relationship or quit a job on a whim, we can be glad that our previous investments constrain our decisions and help us keep a cool head.

Part of the explanation for the sunk-cost fallacy is that we seem to have a tendency to keep a sort of mental accounting of costs that we've incurred for specific circumstances. Tversky and Kahneman (1981) demonstrated how different frames of our past experiences can influence our decisions by asking participants to consider the scenario below.

“Imagine that you have decided to see a play where admission is $10 per ticket. As you enter the theater you discover that you have lost a $10 bill. Would you still pay $10 for a ticket for the play?”

When given this question, 88% of participants said yes they would purchase the ticket. Other participants saw this version of events:

“Imagine that you have decided to see a play and paid the admission price of $10 per ticket. As you enter the theater you discover that you have lost the ticket. The seat was not marked and the ticket cannot be recovered. Would you pay $10 for another ticket?”

In this second case, only 46% of participants were willing to purchase the ticket. In both cases, the loss of $10 is identical. The difference is that in the second case people feel that they have already invested $10 in the play, so they are unwilling to pay twice for the same show. Tversky and Kahneman suggested that rather than a purely rational single mental account of gains and losses, we tend to keep separate mental accounts. In the second case, the mental “theater-cost account” is already at -$10, so purchasing a new ticket would put the account at -$20, which is deemed too expensive for the show. In the first case, however, the lost $10 was not considered as part of the theater cost, and as a result did not come to bear on the decision.
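The mental-accounting arithmetic can be laid out explicitly. The dollar figures come from the scenario; treating them as named "accounts" is just an illustrative way to model Tversky and Kahneman's suggestion:

```python
# Mental accounting in the lost-bill vs. lost-ticket scenario.
ticket_price = 10

# Scenario 1: a $10 bill is lost. The "theater account" is untouched,
# so buying a ticket leaves it at only -$10 — acceptable to most people.
theater_account_lost_bill = -ticket_price

# Scenario 2: the ticket itself is lost. The theater account already
# holds -$10, so rebuying pushes it to -$20 — "too expensive" for the show.
theater_account_lost_ticket = -ticket_price - ticket_price

# A single rational account sees the identical total loss either way:
# $10 gone plus $10 for the ticket actually used to enter.
total_loss_either_way = 10 + ticket_price
```

The totals are the same in both scenarios; only the account the loss is booked against differs, which tracks the 88% vs. 46% split in willingness to buy.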

Is There a Silver Lining?

If all this discussion of our biases and errors is upsetting, it shouldn't be. Gerd Gigerenzer has suggested that rather than thinking of these heuristics as biases, we should consider them as tools. We should take a moment to consider just how amazing our information-processing abilities are. Our capabilities are truly astonishing, and the very fact that our brains can contemplate their own shortcomings should be enough to inspire hope. The better we understand our failures and biases, the better able we will be to create systems that will help us to overcome them. And for all the failures that heuristics bring, we shouldn't ignore just how helpful they are.

There are even situations in which having little information can actually be more helpful in guiding us to the right answer. Daniel Goldstein and Gerd Gigerenzer asked American and German participants to estimate whether San Diego or San Antonio had a higher population. 62% of Americans were able to correctly identify San Diego as more populous. Interestingly, 100% of the German participants answered correctly. Were these Germans more knowledgeable about the population of American cities? Nope. These Germans reported that they had never heard of San Antonio, but were at least familiar with the name of San Diego. Without any information about the population of San Antonio, they relied on the assumption that the city they had heard of was probably a larger one, and as a result they were guided to the correct answer. This recognition heuristic shows how our minds are able to make decisions based on very little information. Rather than being crippled by indecision, our minds gladly take shortcuts whenever possible and some of these shortcuts turn out to be rather effective.

It also seems that we can create these shortcuts for our decision-making with extensive practice. This allows us to effectively automate decisions and conserve valuable mental energy. Our constant search for mental shortcuts combined with extensive experience can create expertise that allows people to make lightning-fast decisions while still maintaining high levels of accuracy. Chess masters can quickly size up the correct move with just a brief glance at a game board, baseball players can tell a good pitch from a bad one and decide to swing in an instant, and chicken-sexers can identify male or female chicks based on ambiguous genitals in less than a second, without even knowing exactly how they do it.

So while heuristics do cause us to make errors in judgment, we shouldn't ignore their usefulness. These shortcuts help make the world more manageable and allow us to make thousands of decisions each day, and for the most part, the decisions we make are good ones. Instead of being overwhelmed by the constant influx of irrelevant information, we somehow manage to move along, and even with these flaws, the history of human existence has been a story of increasing knowledge and unceasing improvement. So in our quest for greater cognitive efficiency we may occasionally stumble, but we shouldn't ignore the times when these shortcuts allow us to leap.

Chapter Summary

Key takeaways — Chapter 7
  • With exposure, language seems to emerge naturally in children, suggesting humans have a biological predisposition for language acquisition.
  • Telegraphic speech, errors of overgeneralization and The Wug Test all indicate that young children begin to understand and apply grammatical rules without explicit instruction.
  • Acquisition of one's first language depends on early exposure and a critical period for proper language development means that exposure must happen before some time around age 7.
  • While language doesn't strictly determine thought, there is some evidence, in line with the linguistic relativity hypothesis, for ways that language may shape thought and perception.
  • We are all subject to a number of biases which can distort our thinking including overconfidence, the Wobegon Effect, and belief perseverance.
  • Rather than always using algorithms to solve problems, our brains seem to prefer using heuristics: mental shortcuts which allow us to make decisions quickly, though they also increase our risk of making errors.

Review Questions

Chapter 7 — Language & Cognition