Chapter 4: Sensation & Perception — Master Introductory Psychology

Sensation & Perception

How physical stimuli become conscious experience — from the physics of light and sound to the psychology of how the brain interprets what the senses deliver.


Stimulation and Interpretation

Sensation refers to the actual stimulation of the sensory organs. Light hitting your eyes, sound waves vibrating your eardrums, things touching your skin, or chemical molecules finding their way into your nose and mouth are all sensory experiences that can be sent to your brain. The process of converting these physical signals in the world into neural activity in the brain is known as transduction.

The Necker Cube — an ambiguous figure demonstrating that perception actively constructs what we see. The same image can be perceived as oriented in two different ways.

Perception refers to the interpretation and organization of the information that is coming in from the sense organs. Most of the time, this processing occurs so quickly and seamlessly that we tend to forget that it is a separate step from sensation. It seems that we immediately recognize objects in our visual field (that’s a coffee mug, that’s a pen, etc.) or identify the sounds around us (those are footsteps, that’s a bird chirping, etc.), but this perception of the sensory information is actually a complex process that is also influenced by our experience with the world. On rare occasions we may be reminded of this separation between sensation and perception when we can’t quite make out what we’re looking at or what we’re hearing. When this occurs, our problem is not in sensing the stimulus (we know it’s there), but in perceiving it (we don’t know how to correctly interpret it).

One of the simplest ways to demonstrate this separation of sensation and perception is to find a stimulus that is intentionally ambiguous, such as the “Necker Cube” shown above.

Which panel of the cube is closest to you? There are 2 possible answers. You may perceive the panel starting at the bottom left corner as closest, or you may perceive the panel starting at the top right corner as closest. In fact, you can probably switch which appears closer at will. When we mentally flip our view of the cube, the sensation isn’t changing at all, but our interpretation and organization of that sensation (our perception) is changing.


Psychophysics

My apologies to those of you hoping to read about physics gone mad; the truth is that psychophysics is actually the study of our sensitivity to stimuli of different strengths. This is generally done by attempting to measure different types of thresholds.

Absolute threshold refers to the minimum amount of stimulation necessary for a stimulus to be detected. For example, the absolute threshold for vision would be the dimmest light that we could possibly see, while for hearing it would be the softest tone that we’re able to hear.

Measuring absolute threshold is difficult because it’s often hard to know whether a participant is actually detecting a stimulus or just guessing. As we approach the minimal intensity, even participants themselves have a hard time knowing whether they really saw or heard something. To account for this, psychophysicists (who may or may not be mad scientists) generally define the absolute threshold as the point at which accuracy for detecting a stimulus reaches 50%. Once a participant is able to achieve 50% accuracy, it can be assumed that the participant is actually detecting the stimulus.
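The 50% rule can be put into a tiny sketch. The detection rates below are made-up illustration data, not real measurements:

```python
# Estimating an absolute threshold as the lowest intensity that a
# participant detects on at least 50% of trials.
# The intensities and detection rates here are invented for illustration.

detection_rates = {  # intensity -> proportion of "yes, I heard it" responses
    1: 0.05, 2: 0.20, 3: 0.45, 4: 0.55, 5: 0.80, 6: 0.95,
}

def absolute_threshold(rates):
    """Lowest intensity whose detection rate reaches 50%."""
    detected = [i for i, p in sorted(rates.items()) if p >= 0.50]
    return detected[0] if detected else None

print(absolute_threshold(detection_rates))  # 4
```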

In addition to knowing about the minimum intensity necessary for perception, psychophysics also investigates how well we can differentiate between two stimuli that are both above the absolute threshold. In other words, how much change is necessary (in the brightness of a light, the volume of a sound, etc.) for us to detect that a change has occurred? While many psychological concepts have wonderfully creative names, this is not one of them. The minimum amount of change that we can actually detect is referred to as the just-noticeable difference (JND).

In attempting to measure just-noticeable differences, we run into the following question: Why is it that people can tell the difference between the brightness of 1 versus 2 candles, but not the difference between 100 and 101 candles? Similarly, when I’m at the gym, I can easily feel the difference between a 1 lb weight and a 2 lb weight (I’m really hitting those biceps hard), but I can’t detect the difference between 100 lbs and 101 lbs. The reason for this is that just-noticeable differences aren’t constant amounts of change; they are constant proportions of change. This concept is known as Weber’s Law (Ernst Weber was German, so it’s pronounced Vay-ber, not Web-er).

For example, in the case of detecting changes in weight, people generally need a change of about 2% in order to detect the difference. This explains why an envelope with 2 sheets of paper inside feels noticeably heavier than an envelope with only one sheet, but a 500-page textbook feels the same as a textbook with 505 pages.
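Weber’s Law fits in a few lines of code. The 2% Weber fraction for weight is the approximate value from the text, used here purely for illustration:

```python
# Weber's Law: the just-noticeable difference (JND) is a constant
# proportion of the baseline intensity, not a constant amount.
# k is the Weber fraction; ~0.02 (2%) is the rough value for weight.

def jnd(intensity, k=0.02):
    """Smallest detectable change at a given baseline intensity."""
    return k * intensity

def change_is_noticeable(baseline, new, k=0.02):
    """True if the change from baseline meets or exceeds the JND."""
    return abs(new - baseline) >= jnd(baseline, k)

# 1 sheet vs 2 sheets: a 100% change -- easily noticed.
print(change_is_noticeable(1, 2))      # True
# 500 pages vs 505 pages: a 1% change -- below the 2% JND.
print(change_is_noticeable(500, 505))  # False
```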


Signal Detection Theory

I mentioned above that a 50% accuracy rate is needed to assume that a participant is actually detecting a stimulus. With this in mind, we should consider that our ability to detect stimuli is not as predictable and dependable as we might like. In fact, our senses are constantly dealing with the problem of “noise”, both internal and external, which clouds our perception. The term “noise” doesn’t just refer to sounds, but to anything that might interfere with our ability to detect a stimulus.

If you’ve had a hearing test, you’ve experienced an excellent demonstration of signal detection theory. Imagine your hearing test is just about to begin. Even in a soundproof room, there may be some unintentional sounds affecting your ability to hear the tones being tested. From vibrations in the floor to the movement of air molecules hitting your eardrums, there really is no “perfectly quiet”. Even if these potential distractions could be completely eliminated, you may then become aware of the sound of your breathing, your heart beating, or even the movement of blood in your body, in addition to the noise of your internal thoughts, imagined sounds, and spontaneous neural activity that occurs quite regularly.

Now, let’s say the tester is going to either present a stimulus tone (yes) or not (no), and you’re going to respond that you detected the stimulus (yes) or not (no). This gives us a matrix of 4 possible scenarios: a hit (tone presented, you say yes), a miss (tone presented, you say no), a false alarm (no tone, you say yes), and a correct rejection (no tone, you say no).

When a tone is loud and clear, you may not have any doubts, but what do you do when you’re unsure? Was that a tone being played or not? Should you err on the side of saying yes? If so, you’ll probably get more hits, but you’ll also end up getting more false alarms. Similarly, if you tend towards saying no in times of uncertainty, you’ll end up with more correct rejections but also more misses.

Your tendency towards saying yes or saying no when you aren’t sure is known as your response criterion, and it may range from liberal (unsure – say yes) to conservative (unsure – say no). Now imagine that the person taking the test after you experiences exactly the same auditory stimulation that you experienced for every single tone (in other words, the two of you have ears which function identically well). Even though your hearing is technically identical, if this person has a different response criterion, results on the hearing test will not be the same.
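A short sketch can show how the same “ears” with different response criteria produce different outcome tallies. The trials and evidence values below are invented toy numbers with a simple threshold rule, not data from a real hearing test:

```python
# The four signal-detection outcomes, and how shifting the response
# criterion trades hits against false alarms.
# Evidence values and the threshold rule are illustrative assumptions.

def classify(tone_present, said_yes):
    """Label one trial with its signal-detection outcome."""
    if tone_present:
        return "hit" if said_yes else "miss"
    return "false alarm" if said_yes else "correct rejection"

def run_test(trials, criterion):
    """Each trial is (tone_present, evidence); the listener says
    yes whenever the sensory evidence exceeds the criterion."""
    counts = {"hit": 0, "miss": 0, "false alarm": 0, "correct rejection": 0}
    for present, evidence in trials:
        counts[classify(present, evidence > criterion)] += 1
    return counts

# Identical "ears" (same evidence on every trial), different criteria:
trials = [(True, 0.9), (True, 0.6), (True, 0.4),
          (False, 0.5), (False, 0.3), (False, 0.1)]
print(run_test(trials, criterion=0.35))  # liberal: more hits, but a false alarm
print(run_test(trials, criterion=0.55))  # conservative: no false alarms, but a miss
```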

Signal detection theory is not just for hearing tests or attempts to measure absolute thresholds. If you consider just how often we must make decisions without complete information, you’ll see that signal detection theory has numerous applications to our daily lives. Is an enemy nation hiding dangerous weapons? Pre-emptive strike or detente? Is that an innocent and injured civilian or an enemy combatant? Send a drone or a first aid kit? Is that person reaching for a weapon or a wallet? Unfortunately, poor decisions in these scenarios can carry tragic consequences and determining how liberal or conservative our response criteria should be is no simple task.

Or consider a doctor who notices an unusual spot on an otherwise routine X-ray; could it be a tumor? How liberal or conservative should the judgment be? Recommend expensive and traumatic preventive surgery that may not be necessary? Or risk allowing the tumor (if that's indeed what it is) to grow or spread? These are not easy choices.

Less seriously, but perhaps still important in terms of major life decisions, imagine that you are an attractive woman being approached repeatedly by men asking for dates. Which potential suitors should you agree to see? If you decide to default to saying No to a first date, you may inadvertently turn down your Prince Charming. Be too liberal saying Yes, however, and you’ll find yourself stuck on many boring dates that turn out to be false alarms. (In case you’re wondering, my attempts at charm are generally met with “correct rejections” – keep up those excellent signal detection skills ladies!)

Now that we've had an overview of sensation and perception, we'll spend some time looking at the details of our senses in order to better understand the process of transduction for each. And we'll start with the one that we practice using about 16 hours a day nearly every day of our lives.


Vision

In order to understand how vision works, we need to begin at the eye, examining the structures and how they function.

Cross-section of the human eye showing the cornea, lens, retina, fovea, optic nerve, and blind spot.

The outer surface of the eye is covered by a transparent layer called the cornea. This layer helps to focus light coming into the eye, and it also serves to protect the eye. The cornea is especially touch-sensitive, and stimulation of the cornea triggers a reflexive closing of the eyelid. This high level of sensitivity also explains why a tiny speck of dirt in your eye can feel like a boulder.

The visual pathway — information from each visual field crosses at the optic chiasm and is processed by the opposite hemisphere.

Behind the cornea, we reach the iris. This is the part of the eye that appears colored (brown, blue, hazel, etc.). The iris is able to change shape, modifying the size of the pupil, the dark spot in the center of our eyes. The pupil is actually just an opening in the iris, which allows light to enter the eye.

The reason that the pupil appears black is that light can enter the eye, but once inside the eye the light is absorbed, meaning that very little light comes back out of the eye. An exception to this that you’re probably familiar with is the “red-eye” effect that can occur in photographs. What’s happening is that light from the camera flash is being reflected off the interior surface of the eye and coming back out the pupil. This happens fast enough that the pupil doesn't have time to change size and the camera catches this brief reflected light in the photograph. Because this light is reflecting off the blood-rich interior surface of the eye (known as the fundus), the light appears red. As you've probably seen, having another light before the actual flash for the photograph reduces the occurrence of red-eye, because the first light gives the iris time to change the size of the pupil, reducing the amount of light that is reflected back by the flash.

Now, let's get back to the steps of vision. After light passes through the pupil, it hits the lens, a disc-like structure surrounded by muscle that focuses the light. The lens directs light to the retina, the back surface of the eye. Changing the shape of the lens changes how it directs the light reaching the retina, allowing us to focus on objects at different distances; this focusing adjustment of the lens is known as accommodation. Together with the adjustable iris mentioned above and the fact that our two eyes rotate at slightly different angles to focus on the same point (known as convergence), accommodation gives us the ability to see at a variety of distances with fairly high acuity.

As we age, the lens has a tendency to become stiff, reducing our ability to adjust its focus, resulting in impairments in our vision. This is why you may see your parents or grandparents moving a book or paper a certain distance from their faces in order to read it. Because they cannot adjust the lens to bring the text into focus, they are forced to move the object until it reaches the distance where the lens is already focused. The lens may also become cloudy or develop spots (cataracts) which may obstruct or otherwise impair vision. The marvels of modern medicine now mean that your lens can be surgically replaced and a quick YouTube search (for the not-too-squeamish) can show you exactly what that process looks like.

The retina contains about 125 million photoreceptors: specialized cells which are able to respond to stimulation by light. We have two main types of photoreceptors, rods and cones, named after their shapes. Rods are more sensitive to light than cones, but are only able to detect light and dark. Cones are able to detect color and fine detail, though they are less sensitive to low levels of light. This explains why colors appear rather gray in very dim lighting. Dim light is sufficient to stimulate our rods but not our cones, resulting in vision that is gray and not as detailed. In normal light conditions, however, both our rods and cones are stimulated, allowing us to see vivid colors and fine detail.

The area of the retina with the greatest density of cones is the fovea (in fact the fovea is composed entirely of cones), a bowl-shaped dent in the retina which provides our highest level of visual acuity. When you look closely at something, you are directing its image onto the fovea (via accommodation), allowing you to see it most clearly.

You’ll notice in the image of the eye structure that there is a point where all of the blood vessels and other nerves exit the eye. This means that there is a spot on the retina that lacks rods and cones, and is therefore a blindspot. We don’t notice the blindspot in each eye for two reasons. The first is that we have two eyes (most of us anyway, sorry pirates) and our two blindspots are in different locations, each eye covering for the other. Even so, you’ll notice that if you close one eye, you still can’t see your blindspot. This is because the brain automatically “fills in” the blindspot based on whatever information is stimulating the cells around it.

If we look closely at the retina, we'll see more than just rods and cones. In fact, before light stimulates the rods and cones it must first pass through retinal ganglion cells and bipolar cells. Rods and cones are grouped together into areas known as receptive fields, where many rods and cones connect to several bipolar cells. Several of these bipolar cells then connect to a single retinal ganglion cell (RGC), whose axon joins the optic nerve carrying messages out of the eye to the brain. You may be wondering: if a group of rods or cones all connect to a single cell, how can that single cell send different types of messages from many photoreceptors to the brain?

As we learned in chapter 3, the only method that single neurons have for conveying different messages is to change their firing rate. As it happens, a single RGC fires at different rates based on the pattern of light on its receptive field, in a process known as lateral inhibition. Some receptive fields are known as “on-center”, meaning that the retinal ganglion cell responds most intensely when light is in the center of the receptive field, surrounded by darkness. Other retinal ganglion cells are classified as “off-center” because they respond most intensely to patterns of dark spots surrounded by light. In both cases, these cells fire differently than they would in all-light or all-dark situations.

When we see areas of contrast, such as black lines on a whiteboard, this contrast is emphasized, because some of the receptive fields along that border will be partially lit and partially dark, causing those retinal ganglion cells to fire at different rates. This signals that something more interesting than simply light/dark is happening and this exaggerates the line, helping us to see greater contrast.

This can be demonstrated in a simple illusion known as Mach bands: a row of bars, each a uniform shade of gray, arranged from darkest on the left to lightest on the right. Although every bar is a solid color, each individual bar appears slightly lighter on its left side and darker on its right side. This occurs because lateral inhibition causes greater activity in the RGCs along each edge where the shade changes, emphasizing the contrast so it stands out. As a result, the lighter gray to the right of each edge appears lighter than it actually is, while the darker gray to its left appears darker than it actually is.
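The edge-enhancing effect can be sketched with a toy one-dimensional "retina" in which each cell subtracts a fraction of its neighbours' activity. The inhibition weight is an arbitrary illustrative value, not a physiological measurement:

```python
# A minimal sketch of lateral inhibition: each cell's output is its own
# input minus a fraction of its neighbours' inputs. At a step edge this
# exaggerates the contrast -- the Mach band effect.

def lateral_inhibition(intensities, inhibition=0.2):
    out = []
    for i, center in enumerate(intensities):
        left = intensities[i - 1] if i > 0 else center
        right = intensities[i + 1] if i < len(intensities) - 1 else center
        out.append(center - inhibition * (left + right))
    return out

# A dark bar (0.3) meeting a lighter bar (0.7):
signal = [0.3, 0.3, 0.3, 0.7, 0.7, 0.7]
response = lateral_inhibition(signal)
# At the border, the dark-side cell responds less than its neighbours
# (looks darker) and the light-side cell responds more (looks lighter):
print([round(r, 2) for r in response])
```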

Lateral inhibition shows us that some of the process of perception is actually happening before messages even get to the brain. It happens in the retina itself, and no matter how much we know about this illusion, the process of lateral inhibition, or the workings of receptive fields, we will still experience the effect.


Color Vision and Colorblindness

As mentioned, rods respond to light and dark, while cones allow us to see in color. Color isn't actually in light, but rather light comes in different wavelengths and our brains are able to perceive these differing wavelengths as different colors. Long wavelengths correspond to red, medium to green, and short to blue.

The Young-Helmholtz trichromatic theory holds that we have 3 different types of cones which respond to different wavelengths of light. These three cone types are L-cones, M-cones, and S-cones, with greater sensitivity to long (red), medium (green), and short (blue) wavelengths respectively. Each cone type actually responds to a fairly large portion of the visible spectrum; the ranges overlap, but each cone type responds most intensely to certain wavelengths. The combination and comparison of different levels of stimulation across all three cone types allows us to see an estimated 10 million possible colors. Trichromatic theory also allows us to explain colorblindness, which results when a person has a defect in one (or more than one) type of cone. Most people with colorblindness do see in color; they just can't see all the hues that people with three functioning cone types can see.
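The idea that a wavelength is coded by the pattern of activity across three overlapping cone types can be sketched numerically. The peak sensitivities (~560, ~530, and ~420 nm) are rough textbook values, and the bell-curve shapes and widths are simplifying assumptions:

```python
# Trichromatic coding sketch: three cone types with overlapping,
# bell-shaped sensitivity curves. A wavelength is identified by the
# pattern of responses across all three cones, not by any one cone.
import math

PEAKS = {"L": 560, "M": 530, "S": 420}  # approximate peak wavelengths in nm

def cone_responses(wavelength, width=60.0):
    """Illustrative Gaussian response of each cone type to a wavelength."""
    return {cone: round(math.exp(-((wavelength - peak) / width) ** 2), 2)
            for cone, peak in PEAKS.items()}

print(cone_responses(650))  # long wavelength: L responds most -> seen as red
print(cone_responses(470))  # short wavelength: S responds most -> seen as blue
```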

The opponent process theory of color proposes that perception of color depends on comparisons between colors which work in pairs to inhibit one another. These opposing pairs are red/green, blue/yellow, and light/dark. When one half of a pair is stimulated, the other half is inhibited. This may explain why we never describe colors as reddish-green or bluish-yellow: we don't perceive these antagonistic pairs mixing in that way. We can also demonstrate this opponent process with a color afterimage. If you stare at a red circle without moving your eyes for an extended period of time (30 seconds or so) and then look at a plain white background, you'll see an afterimage of a green circle.

What's happening is that when you stare at the red circle long enough, the “red” signal to the brain is decreased. When you switch to a white background the white light stimulates red and green equally, but since the red signal has been decreased, the green stimulation seems relatively stronger and you perceive green.


Hearing

In order to understand how we can get from physical sound waves to neurons firing in the brain, let’s take a look at the major structures of the ear and what they do.

The auditory pathway — sound vibrations travel from the outer ear through the ossicles into the cochlea, where hair cells transduce them into neural signals sent to the auditory cortex.

The pinna or auricle is the visible outer ear. It basically acts as a funnel, helping to direct sound waves into the auditory canal. At the end of the auditory canal is the eardrum (or tympanic membrane if you want to sound fancy). The eardrum vibrates in response to sound waves. This vibration of the eardrum causes movements of the ossicles (Latin for “little bones” – they are the smallest bones in your body) in the middle ear. The three ossicles are (in order) the hammer (malleus), the anvil (incus), and the stirrup (stapes).

The vibrations of the stirrup are passed on to the cochlea (Latin for “snail”) in the inner ear. The cochlea is a spiral structure filled with fluid. Inside this fluid sits the basilar membrane. The basilar membrane contains tiny hair cells (topped with hair-like stereocilia) which move in response to the vibration of fluid in the cochlea. When moved, these hair cells trigger the firing of neurons. Different frequencies of sound trigger hair cells at different locations along the basilar membrane, allowing different pitches to trigger different neurons (this is known as place code). Messages from the cochlea are carried to the thalamus via the auditory nerve. From the thalamus, the messages are relayed to the primary auditory cortex (A1) in the temporal lobes for processing.


The Vestibular System

One section of the inner ear doesn’t have to do with hearing at all. This is the vestibular system, which is used to help us balance and orient ourselves. The vestibular system sits above the cochlea and consists of two vestibular sacs (the utricle and saccule) and three semi-circular canals. These structures are filled with fluid which moves in response to head movements, triggering hair cells which help give us information about the body’s position in space.

If you’ve ever spun around too quickly or played “dizzy bat”, you’ve experienced what happens when the vestibular system is disrupted. By spinning in circles quickly you cause the fluid in the vestibular system to move. When you stop suddenly, the momentum generated causes the fluid to continue moving, creating a state of disorientation as other body senses indicate that you aren’t moving. Similarly, astronauts often experience “space sickness” due to the disruptions of the vestibular system caused by a lack of gravity (and for which they wear anti-nausea skin patches – vomiting into your space suit is not a good idea).


Touch

Exploring the world by touching and grasping (known as haptic perception) is one of our most important senses, and the sense organ responsible, our skin, is our body's largest organ. The skin is more than just a fleshy bag that separates our innards from the outer world; it's a remarkable organ that constantly transmits several types of information to the brain.

Skin pain receptors — two types of pain fibers carry fast/sharp and slow/burning pain signals through the spinal cord to the brain.

Our skin contains a variety of receptors which allow us to detect different properties of the physical world around us. Mechanoreceptors allow us to detect pressure, texture, pattern, and vibration; thermoreceptors detect temperature; and nociceptors detect pain.

As we learned when looking at the brain, our sense of touch is contralateral, meaning that the left hemisphere of the brain processes information from the right side of the body, while the right hemisphere processes signals from the left side of the body. When learning about the somatosensory cortex in the parietal lobe (which processes touch messages from the body), we also learned that greater sensitivity requires greater amounts of brain cortex. The representation of our hands and face in the cortex is considerably larger than that of our torso, legs, or arms.


Pain

As just about all of us are acutely aware, our skin (along with other parts of the body) has the ability to sense pain, though there are actually rare cases of congenital insensitivity to pain. While you might think this would grant you amazing superhero status, the truth is that a life without pain is far from ideal.

Referred pain — pain felt in a location distant from its source. Heart attack pain is often felt in the left arm because sensory nerves from both areas converge in the spinal cord.

Pain provides crucial messages to quickly change our behavior or escape harmful situations. People who are unable to detect pain are constantly at risk of doing serious harm to their bodies without even being aware of it. Noticing that your skin is burning on a hot stove, recognizing that you’re stretching a muscle a little too far, or feeling the dull throbbing of an infection are all important cues to change your behavior or seek medical assistance to prevent further damage. Unfortunately people with a congenital insensitivity to pain miss these cues, and as a result their pain-free lives tend to be rather short as injuries and damage accumulate.

Our experience of pain travels along two different types of pain fibers. Messages about fast, intense, sharp pain are carried on A-delta fibers, while longer-lasting, dull, throbbing pain is carried on C-fibers. A good mnemonic for remembering which pain types travel on which fibers is to imagine yourself being stabbed by the sharp point of the letter A (resulting in sudden, sharp, and intense pain). You can also imagine a bunch of Cs radiating out of your body to represent a dull, sore, throbbing pain. In addition to traveling to the somatosensory cortex (via the thalamus), pain messages also travel to the limbic system (the hypothalamus, hippocampus, and amygdala), which isn’t surprising if you recall that these are important areas for emotion, motivation, and memory. Pain, emotion, and memory are closely linked, such as when we learn to fear objects or situations that have previously caused us pain.

It’s not only our skin that contains pain receptors. We also have receptors that can signal pain from our bones, our muscles, and our internal organs. In some areas, receptors from both internal and external (meaning skin) sources converge on the same spinal cord nerves. This can cause an experience of referred pain, where internal pain signals feel like they are coming from elsewhere in the body. A well-known example of this occurs when a person is suffering from a heart-attack, as a common symptom is feeling sharp shooting pains in the left arm, even though the actual source of the pain is inside the chest.

Smell

Like taste, smell (or olfaction) is a chemical sense, because our sensation comes from direct interaction with chemical molecules. In this case, odorant molecules from whatever we are smelling waft up our nasal passages, where they bind with special receptors called olfactory receptor neurons (ORNs). The ORNs pass these messages on to the glomeruli of the olfactory bulb inside the skull. From the olfactory bulb, these messages are passed directly to areas of the temporal lobes, bypassing the thalamus (the relay station all other senses pass through). It may be the case that the strong ties between scent and emotional memories are due to these direct connections with the brain and their proximity to the limbic system (which processes emotions and encodes memories).

The olfactory system — odorant molecules bind to receptors in the nasal epithelium, sending signals through the olfactory bulb to the limbic system and cortex.

Humans have a total of about 10 million olfactory receptor neurons, which come in about 350 different types. The stimulation of different combinations of these ORN types allows us to detect a wide variety of possible smells. The unique pattern of stimulation that a particular combination of chemicals causes is experienced as a unique scent.
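This combinatorial coding can be sketched as a lookup from patterns of activated receptor types to scents. The receptor names and patterns below are invented for illustration; real odor coding involves hundreds of receptor types and graded responses:

```python
# Combinatorial olfactory coding, as a toy: each odour activates a
# characteristic subset of receptor types, and the scent is identified
# from the overall pattern rather than from any single receptor.
# Receptor labels ("OR1" etc.) and patterns are hypothetical.

KNOWN_SCENTS = {
    frozenset({"OR1", "OR7", "OR22"}): "apple pie",
    frozenset({"OR3", "OR7"}): "coffee",
    frozenset({"OR1", "OR3", "OR22"}): "cut grass",
}

def identify(activated_receptors):
    """Match the activation pattern against known scent patterns."""
    return KNOWN_SCENTS.get(frozenset(activated_receptors), "unfamiliar smell")

print(identify({"OR1", "OR7", "OR22"}))  # apple pie
print(identify({"OR7", "OR3"}))          # coffee
```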

While it may be rather pleasant to realize that the scent of a warm apple pie means molecules are actually coming off that pie and going inside your body, it’s decidedly unpleasant to consider what this means when you encounter a particularly foul restroom.

You may also note from the illustration that it’s possible for odorant molecules to reach the ORNs through a back entrance via the throat (provided the mouth is open). Food in our mouths sends odorant molecules up the back of the throat to the ORNs, especially as we chew, making smell and taste two very closely linked senses. If you’re curious to try tasting without smelling, pinching the nose works fairly well, as it blocks entrance from the nasal passages and also stops air flow up from the back of the throat, preventing much of a food’s smell from being detected. I don’t recommend trying this in a posh restaurant, as you probably don’t want to offend a chef who sees you pinching your nose while eating. And of course for foods like stinky tofu, durian, or Camembert cheese, the powerful smell of the food may just be the most enjoyable part. There are some people (like one of my sisters) who lack the ability to detect scent at all; a condition known as anosmia.

Taste Perception

As you’ve probably noticed, your tongue is covered with small bumps, which are known as papillae. There are 4 different kinds of papillae on your tongue, all named for their appearance: fungiform (mushroom-shaped), filiform (thread-shaped), circumvallate (ringed by a wall-like trench), and foliate (leaf-shaped).

Taste buds on the tongue — microvilli in taste pores interact with food molecules to trigger gustatory signals sent to the brain.

With the exception of filiform papillae (which serve mechanical functions), papillae are lined with taste buds, which detect taste by interacting with the tastant molecules in food. In total, humans have around 5,000–10,000 taste buds. You have more taste buds when you are young, and by age 20 you will have lost around 50% of them. Molecules from food interact with hair-like projections called microvilli, which extend from gustatory cells inside the taste bud through openings called taste pores, and this interaction triggers activity in those gustatory cells.

We also have some taste buds on the sides and roof of the mouth (the palate), as well as some parts of the throat. This explains why we can also detect taste in these areas, though not as intensely as on our tongues.

If you ever learned about different taste areas being mapped out on your tongue, I regret to inform you that what you were told is a complete and utter lie! There is no “salty area” or “sweet area” of the tongue (though unfortunately this is still taught in many places). So while this did make for some fun coloring assignments in primary school, the bitter truth is that each taste bud contains receptor cells for all tastes, so each taste is distributed throughout the mouth, not localized. This error comes from the misinterpretation of research showing small differences in sensitivity to certain tastes in some areas – or maybe people were trying to make the topic more exciting, since the confusion seems to have originated with a Harvard psychologist fittingly named Edwin Boring. So clear your mind of that myth while we consider the types of tastes that humans can sense.

When it comes to detecting taste, different receptors respond to different molecules. Some gustatory cells will respond to certain molecules but not others, meaning they are specialized for different “tastes”. There are 5 main tastes: salty, sweet, bitter, sour, and umami (Japanese for “savory” – one your elementary school science class may have been missing). Each of these tastes is associated with different types of molecules. The different combinations of interactions with these 5 receptor types (in addition to information from scent receptors in the nose) give us all the varied tastes that we can experience.

We also use other nerves in our mouths to tell us about the food we’re eating, so our experience of taste is not limited to what happens at the taste buds. Other qualities like spiciness (pungency or piquancy), temperature, texture, and astringency play crucial roles in our perception of the foods we taste. A great deal of our taste perception also happens outside the mouth (and nose), as we “eat with our eyes”. The colors of foods can influence our experience of their taste. When I was a kid, a box of popsicles often included a “mystery flavor” – colored white – that was difficult to pin down. Without the reference point of color (purple = grape, red = berry, yellow = banana, etc.), it’s easy to make mistakes in identifying flavors. In addition, our beliefs and expectations about food can also influence our perceptions. In season 3 of Penn & Teller's Bullshit!, the comedy/magic duo demonstrated this by duping restaurant customers into eating cheap canned tomatoes, instant potatoes, and meat from microwave dinners. These were all presented as gourmet specialties, leading customers to believe, and perceive, that they were really eating “the best” in taste and quality.

Mingling the Senses

Now that we've looked at each of the senses in detail, we have to remember that they aren't so neatly boxed in real life. We've seen how we “eat with our eyes”, but this concept applies to a mingling of our other senses as well. As a number of perceptual illusions can demonstrate, what we hear influences what we see (and vice versa), what we hear can influence our experience of taste, and what we see can even influence what we feel. Our overall sensory experience of the world is a rich tapestry which synthesizes information from all of our senses. This is true for all of us, but there is also a group of people for whom this is especially apparent. These people are synesthetes, or people with synesthesiasynesthesiaA perceptual phenomenon where stimulation of one sense automatically triggers an experience in another sense.. Synesthesia is a more dramatic mingling of the senses, in which one sense automatically stimulates another, such as sounds causing colorful visions or even triggering tastes. One of the most common types of synesthesia is the experience of seeing certain shapes as colored, even when they are not. For example, a synesthete may see letters or numbers as always having particular colors, like seeing As as red, or 6s as green. This isn't a delusion, as the synesthete knows that the letter or number is printed in black ink, but still has the visual experience of color.

Synesthesia illustration — a person with grapheme-color synesthesia perceives letters and numbers as having distinct colors.

You may ask how we could ever know if a synesthete were lying about this experience. Couldn't someone just claim that an A appears red, leaving us no way to get inside their mind to know if this was true? For a long time synesthesia was regarded this way, as if synesthetes were delusional or making things up.

Fortunately, some clever perceptual psychologists came up with ways of testing the experience of synesthetes. They wondered whether, when shown an array of letters, synesthetes might recognize patterns more quickly than non-synesthetes. Look at the image here and see how quickly you can find all the Ps mixed in with Qs.

In this case minding your Ps and Qs can be a difficult task, but if you were able to see the letters as having different colors (grapheme-color synesthesia), it would be considerably easier. In fact, this is what happens when some synesthetes are asked to do this task, and researchers have found evidence that they are able to recognize the patterns more quickly. This demonstrates that synesthetes don't just make things up or have overactive imaginations; their perception of the world really differs from that of other people. On the following page you'll see a simulation of how this task might appear for a synesthete who sees Ps and Qs as different colors.

Gestalt Laws

Early perceptual researchers focused on identifying the laws of perception that dictate how we organize and interpret information. The emphasis of gestalt psychologygestalt psychologyA school of psychology emphasizing that we perceive wholes rather than collections of separate parts. is often summarized as “the whole is different from the sum of the parts”, meaning that in order to understand perception, we shouldn't think in terms of isolated pieces, but in terms of how those pieces come together. The following Gestalt laws were identified as predictable patterns for how our brains tend to perceive the world.

Gestalt laws of perceptual organization — proximity, similarity, continuity, and closure showing how the brain groups visual elements into wholes.

ClosureClosureThe Gestalt principle that we mentally fill in gaps to perceive complete, whole objects. – Parts of a recognizable image are put together and the mind “closes off” or fills in any gaps.

Proximity – Objects which are closer together are more likely to be perceived as a group.

SimilaritySimilarityThe Gestalt principle that similar objects are grouped together perceptually. – Objects that are more similar are more likely to be perceived as a group.

ContinuityContinuityThe Gestalt principle that we tend to perceive smooth, continuous patterns rather than discontinuous ones. – Objects that make up a continuous form are perceived as a group.

Simplicity – Simple explanations are preferred over equally plausible but more complex explanations.

Common Fate – Objects which move in unison are more likely to be perceived as a group.

While these Gestalt laws are most frequently demonstrated visually, remember that they apply to all types of perception. For example, we tend to group similar pitches together, while pitches that are far apart can be perceived as coming from separate sources. A great example of this can be heard when a single instrument jumps quickly back and forth between high and low pitches, creating the auditory illusion that two separate instruments are playing. In J.B. Arban's arrangement of The Carnival of Venice, the trumpet soloist in the final variation on the theme moves quickly between low notes carrying the familiar melody and trilled higher notes providing an accompaniment. Played properly, the solo sounds like two trumpet players playing together.

Perceptual Constancy

To consider perceptual constancyperceptual constancyThe tendency to perceive objects as stable in size, shape, and color despite changes in their retinal image., let's imagine a rather routine occurrence. You are standing in a hallway somewhere, and a friend walks down the hall toward you, then begins having a conversation with you. While this may seem trivial, your brain's ability to correctly interpret what is going on is actually quite impressive. In order to do so, your brain must figure out that even though many things are changing, they are actually remaining the same.

Size Constancy – When you see your friend walking toward you, you don't perceive that she is growing rapidly in size even though the image on your retina is indeed growing larger and larger. Instead you are able to determine that your friend is simply moving closer to you.

Brightness Constancy – As your friend is walking toward you, she passes underneath a light fixture on the ceiling. As she approaches, then passes this light source, the hue and brightness of her face change, shadows appear in new places, and light is reflected off different surfaces. Yet, you don't feel that your friend's complexion is changing in any way.

Shape Constancy – Now, as you're talking to your friend and she nods her head, its shape on your retina changes drastically. But here again, you don't perceive that her head is actually morphing into new shapes. Instead, your brain perceives it as maintaining a single, normal shape.
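The retinal changes that size constancy compensates for can be made concrete with a little geometry: an object of height h at distance d subtends a visual angle of 2·atan(h/2d). Here is a quick Python sketch (the friend's height of 1.7 m and the viewing distances are assumed values for illustration):

```python
import math

def visual_angle_deg(height_m: float, distance_m: float) -> float:
    """Visual angle (in degrees) subtended by an object of a given
    height viewed at a given distance: 2 * atan(h / 2d)."""
    return math.degrees(2 * math.atan(height_m / (2 * distance_m)))

# A 1.7 m tall friend seen at 10 m, then at 5 m: halving the distance
# roughly doubles the size of the image on the retina.
far = visual_angle_deg(1.7, 10)
near = visual_angle_deg(1.7, 5)
print(f"at 10 m: {far:.1f} deg, at 5 m: {near:.1f} deg (ratio {near / far:.2f})")
```

Even though the retinal image roughly doubles as your friend closes half the distance, size constancy means you perceive one normal-sized person approaching, not a rapidly growing one.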

Depth Cues

The surface of the retina is 2D, so how are we able to experience a 3D world that has depth? This experience of depth is created from a combination of cues. Monocular cuesMonocular cuesDepth cues that work with only one eye — includes linear perspective, interposition, and texture gradient. are cues that each eye uses individually, meaning that the appearance of depth from each of these cues still occurs even if you close one eye.

Monocular depth cues — linear perspective, texture gradient, relative size, and interposition allow depth perception from two-dimensional images.

Linear perspective – Parallel lines appear to converge as they get farther away from us, and as any artist knows, this can be used to give the impression of depth.

Relative size – We also judge how far away things are from us based on their familiar size. If I see a small image of a person on my retina, I will probably conclude that this is a normal-sized person who is far away from me, rather than an extremely small person who is close to me.

Texture gradient – Texture gradient relies on the fact that our visual acuity is generally better for objects that are closer to us. As things get farther away, their textures appear smoother.

Interposition – Objects block our view of the objects behind them, and this aids us in knowing which objects are closer. If our view of an object is partially blocked, the object must be farther away from us than whatever is blocking it.

Shading – How light falls on objects can also help us to determine their size as well as their distances from us. This is something that you unconsciously use each time you correctly judge the height of a step based on the shadow that it casts.

Binocular Cues

While the monocular cues above all work with only one eye, having two eyes improves our depth perceptiondepth perceptionThe ability to perceive objects in three dimensions and judge their distance. considerably (sorry again, pirates). You can test this yourself by playing catch with a friend, then trying again with one eye closed. You'll probably find this more difficult, because even if you can still see the object clearly, it becomes harder to judge its distance from you and the speed of its approach.

Binocular disparity – We have two eyes in slightly different locations, so the brain is actually receiving two slightly different views of the world and, rather amazingly, combining these images into one coherent whole. If you hold a pencil at arm's length and focus on it with one eye, then switch eyes, you'll notice that the image of the pencil shifts back and forth a bit. Now hold the pencil closer to your face and repeat the process. Now it appears to shift even more. When objects are close to us, our two eyes see different versions of the object, but when objects are far away, the views are more similar. The brain takes this into account as it combines the two viewpoints into one, improving our ability to judge depth. 3D glasses take advantage of this by showing each eye a slightly different version of a movie (which used to be accomplished with red/blue tints but now uses polarized lenses), allowing the brain to combine these two versions into one that appears to have depth.

Convergence – Using the same pencil viewing task above, as you move the pencil closer to your face, muscles around your eyes are moving your eyeballs to different angles to focus on it. When you focus on something far away, both eyes are positioned to look relatively straight forward. As an object gets closer, your eyes must turn farther and farther inward to focus on it, and this also provides you with information about how far away the object is.
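The convergence cue can be sketched with simple trigonometry: when both eyes fixate a point, the angle between the two lines of sight is 2·atan((IPD/2)/d), where IPD is the distance between the eyes. The interpupillary distance of 6.3 cm and the two viewing distances below are assumed values for illustration:

```python
import math

IPD = 0.063  # assumed interpupillary distance in meters (~6.3 cm)

def convergence_angle_deg(distance_m: float) -> float:
    """Angle (in degrees) between the two eyes' lines of sight
    when both fixate a point at the given distance."""
    return math.degrees(2 * math.atan((IPD / 2) / distance_m))

# A pencil at roughly arm's length vs. close to the face: the eyes must
# turn inward far more for the near target, and the brain can read the
# degree of convergence as a distance signal.
for d in (0.60, 0.15):
    print(f"fixating at {d:.2f} m -> eyes converge by {convergence_angle_deg(d):.1f} deg")
```

The sharp rise in convergence angle at near distances is also why convergence is mainly useful as a depth cue for objects close to the face; for distant objects the eyes are nearly parallel regardless of distance.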

Motion-based Cues

We also use the motion of objects across our retina to judge depth. For example, if you look out a window on a train passing through the countryside, you'll notice that trees along the tracks seem to move past the window very quickly. The trees in the distance, however, take much longer to travel across the window. The mountain even farther away seems to move at a crawl. This difference in how quickly objects appear to move also gives us clues about depth and is known as motion parallax.

While motion parallax tells us about distances for objects moving perpendicular to our line of sight, optic flow tells us about objects moving toward or away from us. Imagine you are looking out the front windshield of a car speeding down the highway. As an object gets closer, its image grows on your retinas and the speed of this enlargement helps to tell you how far the object is from you and how quickly it is moving towards you, adding to a sense of depth. As we move forward, we can see that objects at similar distances will “grow” on our retinas at similar rates. Closer objects will all grow more quickly, while distant objects will all grow more slowly.
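The motion parallax effect from the train example can be roughed out numerically: for an object passing directly abeam, its angular speed across the retina is approximately v/d radians per second. The train speed of 30 m/s and the three distances below are assumed values for illustration:

```python
import math

def angular_speed_deg_per_s(speed_m_s: float, distance_m: float) -> float:
    """Approximate angular speed (degrees/s) of an object passing
    directly abeam at a given distance: v / d radians per second."""
    return math.degrees(speed_m_s / distance_m)

# Assumed scenario: a train at 30 m/s (~108 km/h) passing scenery at
# different distances. Trackside trees sweep across the window far
# faster than distant trees or a far-off mountain.
for d in (10, 100, 1000):
    print(f"object at {d:>4} m -> {angular_speed_deg_per_s(30, d):7.2f} deg/s")
```

The hundredfold difference in apparent speed between a trackside tree and a distant mountain is exactly the gradient of motion the visual system exploits as a depth cue.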

Culture and Perception

While it often seems like perception just happens, remember that it's actually the result of a great deal of experience. We must learn how to organize and interpret the information coming into our senses. We may take for granted the effortlessness of vision because we forget just how many years we spent learning how to make sense of the messages we get from our eyes. This can be seen dramatically in cases where congenitally blind people are able to see, thanks to modern surgical techniques. When the surgery is complete, the process of seeing has only just begun.

The moon illusion — the moon appears larger near the horizon than overhead, demonstrating how context influences size perception.

These patients must learn to see, and this is actually a slow and difficult process. Their brains have no experience interpreting visual messages and must establish millions of neuronal connections in order to organize the onslaught of visual information. Things we don't give a second thought, such as correctly judging the size and distance of numerous objects as we walk across a room, must be slowly and painstakingly learned by the newly-sighted. The same learning process is true for our other senses, as experience has taught us how to identify the timbre of a piano or a violin, recognize the voice of a friend, or identify an object by touch.

The fact that we must learn how to perceive also opens the possibility that our environmental experiences shape our perception of reality. For instance, our susceptibility to the famous Müller-Lyer illusion above (which you have probably seen before) has been shown to be influenced by our cultural background. Research by Marshall Segall and colleagues found that European and American subjects experienced the illusion of the top horizontal line appearing longer more often than subjects from other cultures (the horizontal segments are identical). They hypothesized that differing amounts of experience viewing 2D pictures combined with differences in exposure to straight lines and right angles influenced participants' susceptibility to the illusion. This idea that living in an environment filled with straight lines and right angles affects our perceptual inferences is known as the carpentered-world hypothesis. While researchers still disagree on interpretations of the Müller-Lyer illusion and the carpentered-world hypothesis, there is general agreement that culture shapes our perceptual experience of the world.

Chapter Summary

Key takeaways — Chapter 4
  • Psychophysics is the study of thresholds for detecting stimuli or changes in stimuli. Signal Detection Theory considers how our response criterion influences our detection of stimuli.
  • Light waves are focused onto the retina, where photoreceptors convert them into neural activity that is processed in the occipital lobes of the brain.
  • Sound waves vibrate the eardrum and ossicles in the ears, and the subsequent movement of hair cells in the cochlea converts these vibrations into neural signals that are processed in the temporal lobes.
  • The skin contains several types of receptors which respond to stimuli including vibration, pressure, temperature, and pain, and these messages are processed in the somatosensory cortex in the parietal lobe. Messages from pain fibers are also sent to the limbic system, a key area for processing emotion and memory.
  • Smell and taste both involve the interaction of chemical molecules with special receptors; olfactory receptor neurons in the nose and gustatory cells in taste buds on the tongue.
  • All our senses are combined in the brain, meaning that senses influence one another. This is seen more dramatically in synesthetes who experience uncontrollable cross-overs among senses.
  • Perception follows some general rules (Gestalt Laws) for how we process sensory information. Correctly interpreting stimuli requires practice, and as a result, perception involves learning from experience.

Review Questions
