
PSYCHOLOGY NINTH EDITION IN MODULES


PSYCHOLOGY NINTH EDITION IN MODULES David G. Myers Hope College Holland, Michigan

WORTH PUBLISHERS

Senior Publisher: Catherine Woods
Senior Acquisitions Editor: Kevin Feyen
Executive Marketing Manager: Katherine Nurre
Development Editors: Trish Morgan, Christine Brune, Nancy Fleming
Media Editor: Sharon Prevost
Photo Editor: Bianca Moscatelli
Photo Researcher: Donna Ranieri
Art Director: Babs Reingold
Interior Designer: Lissi Sigillo
Layout Designers: Paul Lacy and Lee Ann McKevitt
Cover Designer: Lyndall Culbertson
Associate Managing Editor: Tracey Kuehn
Project Editor: Dana Kasowitz
Illustration Coordinator: Bill Page
Illustrations: TSI Graphics, Keith Kasnot
Production Manager: Sarah Segal
Composition: TSI Graphics
Printing and Binding: RR Donnelley
Cover Painting: Illustration of Tree Losing Leaves in Autumn © Andrew Judd

Library of Congress Control Number: 2009931721
ISBN-13: 978-1-4292-1638-8
ISBN-10: 1-4292-1638-7

© 2010, 2007, 2004 by Worth Publishers
All rights reserved.
Printed in the United States of America
First printing 2009

All royalties from the sale of this book are assigned to the David and Carol Myers Foundation, which exists to receive and distribute funds to other charitable organizations.

Worth Publishers
41 Madison Avenue
New York, NY 10010
www.worthpublishers.com

FOR BETTY PROBERT, with gratitude for 27 years of loyal friendship and superb editorial support in creating this book and its teaching supplements


About the Author David Myers received his psychology Ph.D. from the University of Iowa. He has spent his career at Hope College, Michigan, where he has taught dozens of introductory psychology sections. Hope College students have invited him to be their commencement speaker and voted him “outstanding professor.” Myers’ scientific articles have, with support from National Science Foundation grants, appeared in more than two dozen scientific periodicals, including Science, American Scientist, Psychological Science, and the American Psychologist. In addition to his scholarly writing and his textbooks for introductory and social psychology, he also digests psychological science for the general public. His writings have appeared in four dozen magazines, from Today’s Education to Scientific American. He also has authored five general audience books, including The Pursuit of Happiness and Intuition: Its Powers and Perils. David Myers has chaired his city’s Human Relations Commission, helped found a thriving assistance center for families in poverty, and spoken to hundreds of college and community groups. Drawing on his experience, he also has written articles and a book (A Quiet World) about hearing loss, and he is advocating a transformation in American assistive listening technology (see hearingloop.org). He bikes to work year-round and plays daily pickup basketball. David and Carol Myers have raised two sons and a daughter.

Brief Contents

Preface ix

Introduction to the History and Science of Psychology . . . 3
MODULE 1 The Story of Psychology 2
MODULE 2 Thinking Critically With Psychological Science 14
MODULE 3 Research Strategies: How Psychologists Ask and Answer Questions 25

The Biology of Mind . . . 45
MODULE 4 Neural and Hormonal Systems 46
MODULE 5 Tools of Discovery and Older Brain Structures 58
MODULE 6 The Cerebral Cortex and Our Divided Brain 67

Consciousness and the Two-Track Mind . . . 83
MODULE 7 The Brain and Consciousness 85
MODULE 8 Sleep and Dreams 91
MODULE 9 Hypnosis 108
MODULE 10 Drugs and Consciousness 113

Nature, Nurture, and Human Diversity . . . 131
MODULE 11 Behavior Genetics and Evolutionary Psychology 132
MODULE 12 Environmental Influences on Behavior 148

Developing Through the Life Span . . . 169
MODULE 13 Prenatal Development and the Newborn 170
MODULE 14 Infancy and Childhood 174
MODULE 15 Adolescence 195
MODULE 16 Adulthood, and Reflections on Developmental Issues 206

Sensation and Perception . . . 225
MODULE 17 Introduction to Sensation and Perception 226
MODULE 18 Vision 233
MODULE 19 Hearing 243
MODULE 20 Other Senses 251
MODULE 21 Perceptual Organization 262
MODULE 22 Perceptual Interpretation 272

Learning . . . 289
MODULE 23 Classical Conditioning 290
MODULE 24 Operant Conditioning 301
MODULE 25 Learning by Observation 315

Memory . . . 323
MODULE 26 Introduction to Memory 324
MODULE 27 Encoding: Getting Information In 327
MODULE 28 Storage: Retaining Information 336
MODULE 29 Retrieval: Getting Information Out 346
MODULE 30 Forgetting, Memory Construction, and Improving Memory 351

Thinking, Language, and Intelligence . . . 369
MODULE 31 Thinking 370
MODULE 32 Language and Thought 384
MODULE 33 Introduction to Intelligence 404
MODULE 34 Assessing Intelligence 416
MODULE 35 Genetic and Environmental Influences on Intelligence 428

Motivation and Work . . . 443
MODULE 36 Introduction to Motivation 444
MODULE 37 Hunger 448
MODULE 38 Sexual Motivation and the Need to Belong 466
MODULE 39 Motivation at Work 484

Emotions, Stress, and Health . . . 499
MODULE 40 Introduction to Emotion 500
MODULE 41 Expressed Emotion 510
MODULE 42 Experienced Emotion 517
MODULE 43 Stress and Health 530
MODULE 44 Promoting Health 542

Personality . . . 557
MODULE 45 The Psychoanalytic Perspective 558
MODULE 46 The Humanistic Perspective 570
MODULE 47 Contemporary Research on Personality 574

Psychological Disorders . . . 597
MODULE 48 Introduction to Psychological Disorders 599
MODULE 49 Anxiety Disorders 610
MODULE 50 Dissociative, Personality, and Somatoform Disorders 618
MODULE 51 Mood Disorders 625
MODULE 52 Schizophrenia 637

Therapy . . . 645
MODULE 53 The Psychological Therapies 646
MODULE 54 Evaluating Psychotherapies 660
MODULE 55 The Biomedical Therapies 671

Social Psychology . . . 681
MODULE 56 Social Thinking 682
MODULE 57 Social Influence 690
MODULE 58 Antisocial Relations 703
MODULE 59 Prosocial Relations 719

Appendix A: Careers in Psychology A-1
Appendix B: Answers to Test Yourself Questions B-1

Contents

Preface ix

Introduction to the History and Science of Psychology . . . 3

module 1 The Story of Psychology 2
  What Is Psychology? 2
  Contemporary Psychology 6
  CLOSE-UP: Tips for Studying Psychology 12

module 2 Thinking Critically With Psychological Science 14
  The Need for Psychological Science 14
  Frequently Asked Questions About Psychology 19

module 3 Research Strategies: How Psychologists Ask and Answer Questions 25
  How Do Psychologists Ask and Answer Questions? 25
  Statistical Reasoning in Everyday Life 37

The Biology of Mind . . . 45

module 4 Neural and Hormonal Systems 46
  Neural Communication 46
  The Nervous System 52
  The Endocrine System 56

module 5 Tools of Discovery and Older Brain Structures 58
  The Tools of Discovery: Having Our Head Examined 58
  Older Brain Structures 60

module 6 The Cerebral Cortex and Our Divided Brain 67
  The Cerebral Cortex 67
  Our Divided Brain 74
  Right-Left Differences in the Intact Brain 77

Consciousness and the Two-Track Mind . . . 83

module 7 The Brain and Consciousness 85
  Cognitive Neuroscience 85
  Dual Processing 86

module 8 Sleep and Dreams 91
  Biological Rhythms and Sleep 91
  Why Do We Sleep? 96
  Sleep Disorders 100
  Dreams 102

module 9 Hypnosis 108
  Facts and Falsehoods 108
  Explaining the Hypnotized State 110

module 10 Drugs and Consciousness 113
  Dependence and Addiction 113
  Psychoactive Drugs 115
  CLOSE-UP: Near-Death Experiences 123
  Influences on Drug Use 125

Nature, Nurture, and Human Diversity . . . 131

module 11 Behavior Genetics and Evolutionary Psychology 132
  Behavior Genetics: Predicting Individual Differences 132
  Evolutionary Psychology: Understanding Human Nature 141

module 12 Environmental Influences on Behavior 148
  Parents and Peers 148
  Cultural Influences 151
  Gender Development 157
  Reflections on Nature and Nurture 164

Developing Through the Life Span . . . 169

module 13 Prenatal Development and the Newborn 170
  Conception 170
  Prenatal Development 170
  The Competent Newborn 172

module 14 Infancy and Childhood 174
  Physical Development 174
  Cognitive Development 176
  CLOSE-UP: Autism and “Mind-Blindness” 182
  Social Development 185

module 15 Adolescence 195
  Physical Development 195
  Cognitive Development 198
  Social Development 200
  Emerging Adulthood 204

module 16 Adulthood, and Reflections on Developmental Issues 206
  Physical Development 206
  Cognitive Development 212
  Social Development 215
  Reflections on Two Major Developmental Issues 221

Sensation and Perception . . . 225

module 17 Introduction to Sensation and Perception 226
  Thresholds 227
  Sensory Adaptation 230

module 18 Vision 233
  The Stimulus Input: Light Energy 233
  The Eye 234
  Visual Information Processing 237
  Color Vision 240

module 19 Hearing 243
  The Stimulus Input: Sound Waves 243
  The Ear 244
  Hearing Loss and Deaf Culture 247
  CLOSE-UP: Living in a Silent World 249

module 20 Other Senses 251
  Touch 251
  Pain 253
  Taste 257
  Smell 259

module 21 Perceptual Organization 262
  Form Perception 262
  Depth Perception 264
  Motion Perception 267
  Perceptual Constancy 267

module 22 Perceptual Interpretation 272
  Sensory Deprivation and Restored Vision 272
  Perceptual Adaptation 273
  Perceptual Set 274
  Perception and the Human Factor 279
  Is There Extrasensory Perception? 281

Learning . . . 289

module 23 Classical Conditioning 290
  Pavlov’s Experiments 290
  Extending Pavlov’s Understanding 295
  Pavlov’s Legacy 298
  CLOSE-UP: Trauma as Classical Conditioning 299

module 24 Operant Conditioning 301
  Skinner’s Experiments 301
  Extending Skinner’s Understanding 308
  Skinner’s Legacy 310
  CLOSE-UP: Training Our Partners 312
  Contrasting Classical and Operant Conditioning 313

module 25 Learning by Observation 315
  Mirrors in the Brain 316
  Bandura’s Experiments 317
  Applications of Observational Learning 318

Memory . . . 323

module 26 Introduction to Memory 324
  The Phenomenon of Memory 324
  Studying Memory: Information-Processing Models 325

module 27 Encoding: Getting Information In 327
  How We Encode 327
  What We Encode 330

module 28 Storage: Retaining Information 336
  Sensory Memory 336
  Working/Short-Term Memory 337
  Long-Term Memory 338
  Storing Memories in the Brain 339

module 29 Retrieval: Getting Information Out 346
  Retrieval Cues 347

module 30 Forgetting, Memory Construction, and Improving Memory 351
  Forgetting 351
  CLOSE-UP: Retrieving Passwords 355
  Memory Construction 357
  Improving Memory 365

Thinking, Language, and Intelligence . . . 369

module 31 Thinking 370
  Concepts 370
  Solving Problems 371
  Making Decisions and Forming Judgments 374
  THINKING CRITICALLY ABOUT: The Fear Factor—Do We Fear the Right Things? 378

module 32 Language and Thought 384
  Language Structure 385
  Language Development 386
  The Brain and Language 391
  Thinking and Language 393
  Animal Thinking and Language 397
  CLOSE-UP: Talking Hands 400

module 33 Introduction to Intelligence 404
  Is Intelligence One General Ability or Several Specific Abilities? 405
  Intelligence and Creativity 409
  Emotional Intelligence 411
  Is Intelligence Neurologically Measurable? 412

module 34 Assessing Intelligence 416
  The Origins of Intelligence Testing 416
  Modern Tests of Mental Abilities 418
  Principles of Test Construction 420
  The Dynamics of Intelligence 422

module 35 Genetic and Environmental Influences on Intelligence 428
  Twin and Adoption Studies 428
  Heritability 430
  Environmental Influences 430
  Group Differences in Intelligence Test Scores 432
  The Question of Bias 438

Motivation and Work . . . 443

module 36 Introduction to Motivation 444
  Instincts and Evolutionary Psychology 444
  Drives and Incentives 445
  Optimum Arousal 445
  A Hierarchy of Motives 446

module 37 Hunger 448
  The Physiology of Hunger 448
  The Psychology of Hunger 451
  Obesity and Weight Control 456
  CLOSE-UP: Waist Management 463

module 38 Sexual Motivation and the Need to Belong 466
  The Physiology of Sex 466
  The Psychology of Sex 468
  Adolescent Sexuality 470
  Sexual Orientation 472
  Sex and Human Values 479
  The Need to Belong 479

module 39 Motivation at Work 484
  CLOSE-UP: I/O Psychology at Work 485
  Personnel Psychology 486
  CLOSE-UP: Discovering Your Strengths 487
  Organizational Psychology: Motivating Achievement 490
  CLOSE-UP: Doing Well While Doing Good: “The Great Experiment” 492

Emotions, Stress, and Health . . . 499

module 40 Introduction to Emotion 500
  Theories of Emotion 500
  Embodied Emotion 501
  THINKING CRITICALLY ABOUT: Lie Detection 504

module 41 Expressed Emotion 510
  Detecting Emotion 510
  Gender, Emotion, and Nonverbal Behavior 511
  Culture and Emotional Expression 513
  The Effects of Facial Expressions 515

module 42 Experienced Emotion 517
  Fear 518
  Anger 520
  Happiness 521
  CLOSE-UP: How to Be Happier 527

module 43 Stress and Health 530
  Stress and Illness 530
  Stress and the Heart 535
  Stress and Susceptibility to Disease 537

module 44 Promoting Health 542
  Coping With Stress 542
  CLOSE-UP: Pets Are Friends, Too 546
  Managing Stress 547
  THINKING CRITICALLY ABOUT: Complementary and Alternative Medicine 550
  CLOSE-UP: The Relaxation Response 551

Personality . . . 557

module 45 The Psychoanalytic Perspective 558
  Exploring the Unconscious 558
  The Neo-Freudian and Psychodynamic Theorists 562
  Assessing Unconscious Processes 563
  Evaluating the Psychoanalytic Perspective 565

module 46 The Humanistic Perspective 570
  Abraham Maslow’s Self-Actualizing Person 570
  Carl Rogers’ Person-Centered Perspective 570
  Assessing the Self 571
  Evaluating the Humanistic Perspective 572

module 47 Contemporary Research on Personality 574
  The Trait Perspective 574
  THINKING CRITICALLY ABOUT: How to Be a “Successful” Astrologer or Palm Reader 578
  The Social-Cognitive Perspective 583
  CLOSE-UP: Toward a More Positive Psychology 588
  Exploring the Self 590

Psychological Disorders . . . 597

module 48 Introduction to Psychological Disorders 599
  Defining Psychological Disorders 599
  THINKING CRITICALLY ABOUT: ADHD—Normal High Energy or Genuine Disorder? 600
  Understanding Psychological Disorders 601
  Classifying Psychological Disorders 602
  CLOSE-UP: The “un-DSM”: A Diagnostic Manual of Human Strengths 604
  Labeling Psychological Disorders 605
  THINKING CRITICALLY ABOUT: Insanity and Responsibility 606
  Rates of Psychological Disorders 606

module 49 Anxiety Disorders 610
  Generalized Anxiety Disorder 610
  Panic Disorder 611
  Phobias 611
  Obsessive-Compulsive Disorder 612
  Post-Traumatic Stress Disorder 612
  Understanding Anxiety Disorders 614

module 50 Dissociative, Personality, and Somatoform Disorders 618
  Dissociative Disorders 618
  Personality Disorders 620
  Somatoform Disorders 622

module 51 Mood Disorders 625
  Major Depressive Disorder 625
  Bipolar Disorder 626
  Understanding Mood Disorders 627
  CLOSE-UP: Suicide 630

module 52 Schizophrenia 637
  Symptoms of Schizophrenia 637
  Onset and Development of Schizophrenia 638
  Understanding Schizophrenia 639

Therapy . . . 645

module 53 The Psychological Therapies 646
  Psychoanalysis 646
  Humanistic Therapies 649
  Behavior Therapies 650
  Cognitive Therapies 654
  Group and Family Therapies 657

module 54 Evaluating Psychotherapies 660
  Is Psychotherapy Effective? 660
  THINKING CRITICALLY ABOUT: “Regressing” From Unusual to Usual 662
  The Relative Effectiveness of Different Therapies 663
  Evaluating Alternative Therapies 665
  Commonalities Among Psychotherapies 666
  CLOSE-UP: A Consumer’s Guide to Psychotherapists 667
  Culture and Values in Psychotherapy 668
  CLOSE-UP: Preventing Psychological Disorders 669

module 55 The Biomedical Therapies 671
  Drug Therapies 671
  Brain Stimulation 675
  Psychosurgery 677
  Therapeutic Life-Style Change 678

Social Psychology . . . 681

module 56 Social Thinking 682
  Attributing Behavior to Persons or to Situations 682
  Attitudes and Actions 684
  CLOSE-UP: Abu Ghraib Prison: An “Atrocity-Producing Situation”? 687

module 57 Social Influence 690
  Conformity and Obedience 690
  Group Influence 697
  The Power of Individuals 701

module 58 Antisocial Relations 703
  Prejudice 703
  CLOSE-UP: Automatic Prejudice 705
  Aggression 709
  CLOSE-UP: Parallels Between Smoking Effects and Media Violence Effects 716

module 59 Prosocial Relations 719
  Attraction 719
  CLOSE-UP: Online Matchmaking and Speed Dating 720
  Altruism 726
  Peacemaking 729

Appendix A: Careers in Psychology A-1
  Preparing for a Career in Psychology A-1
  The Bachelor’s Degree A-1
  Postgraduate Degrees A-3
  Subfields of Psychology A-4
  Preparing Early for Graduate Study in Psychology A-9
  For More Information A-10

Appendix B: Answers to Test Yourself Questions B-1

Glossary G-1
References R-1
Name Index NI-1
Subject Index SI-1

Preface

With each new edition, I’ve found myself traveling a familiar path. When it is first published, I am relieved after many months of intense effort, and I am thrilled—sure that it is my best effort yet. But before long, as new research appears, as thoughtful instructors and students begin writing with suggestions for improvement, and as commissioned reviews and survey results start coming in, I have second thoughts about the current edition’s seeming perfection. As my storage cubbies begin fattening with new materials, my eagerness for the next edition grows. By the time the new edition is ready to come out, I grimace when reminded of people using the old edition, which once seemed so perfect! This new Psychology, Ninth Edition in Modules is no exception—it is so much improved over the previous work! I am delighted to offer the following changes:

- some 1300 new research citations representing the most exciting and important new discoveries in our field,
- organizational changes based on changes in the field (for example, the heavily revised Consciousness unit, which now follows The Biology of Mind unit and is titled Consciousness and the Two-Track Mind to reflect the dual processing and cognitive neuroscience themes),
- fine-tuned writing, with countless small and large improvements in the way concepts are presented, supported by the input and creative ideas of hundreds of contributing instructors and students, and of my longtime editors,
- a sharp new art program and new pedagogy that teaches more effectively,
- continually improving coverage of cultural and gender diversity issues, and
- 44 fewer pages.

I find myself fascinated by today’s psychology, with its studies of the neuroscience of our moods and memories, the reach of our adaptive unconscious, and the shaping power of the social and cultural context. Psychological science is increasingly attuned to the relative effects of nature and nurture, to gender and cultural diversity, to our conscious and unconscious information processing, and to the biology that underlies our behavior. (See TABLES 1 and 2 below.) I am grateful for the privilege of assisting with the teaching of this mind-expanding discipline to so many students, in so many countries, through so many different languages. To be entrusted with discerning and communicating psychology’s insights is both an exciting honor and a great responsibility. The thousands of instructors and millions of students across the globe who have studied this book have contributed immensely to its development. Much of this has occurred spontaneously, through correspondence and conversations. For this edition, we also formally involved over 300 researchers and teaching psychologists, along with many students. This input was part of our effort to gather accurate and up-to-date information about the field of psychology and the content, pedagogy, and supplements needs of instructors and students in the introductory course. We look forward to continuing feedback as we strive, over future editions, to create an ever better book and supplements package.


TABLE 1 Evolutionary Psychology and Behavior Genetics

Evolutionary psychology. In addition to the coverage found in Modules 11–12, the evolutionary perspective is covered on the following pages: Aging, p. 208; Anxiety disorders, pp. 615–616; Attraction, p. 720; Biological predispositions: in learning, pp. 295–298; in operant conditioning, pp. 309–310; Brainstem, p. 61; Consciousness, p. 85; Darwin, Charles, pp. 3, 6–7, 416; Depression, p. 678; and light exposure therapy, p. 666; Emotional expression, p. 514; effects of facial expressions, p. 515; Emotion-detecting ability, p. 433; Evolutionary perspective, defined, p. 9; Exercise, p. 548; Fear, pp. 378–379; function of, p. 519; Feature detection, p. 238; Hearing, p. 243; Hunger and taste preference, p. 452; Instincts, pp. 444–445; Intelligence, pp. 405, 416, 435–438; Language, pp. 384, 388–389; Love, p. 217; Math and spatial ability, p. 434; Mating preferences, pp. 145–146; Menopause, p. 207; Need to belong, p. 480; Obesity, p. 456; Overconfidence, p. 377; Perceptual adaptation, pp. 274–275; Puberty, onset of, pp. 204–205; Sensation, pp. 226–227; Sensory adaptation, pp. 230–231; Sexual orientation, p. 476; Sexuality, pp. 145–146, 466; Signal detection theory, pp. 227–228; Sleep, pp. 92, 99; Smell, p. 261; Taste, p. 257.

Behavior genetics. In addition to the coverage found in Modules 11–12, behavior genetics is covered on the following pages: Abuse, intergenerational transmission of, p. 319; Aggression, p. 710; Depth perception, p. 264; Drives and incentives, p. 445; Drug dependence, p. 126; Drug use, pp. 125–126; Eating disorders, p. 454; Fear, pp. 519–520; Happiness, pp. 528–529; Hunger and taste preference, p. 452; Intelligence, pp. 428–430; brain size, p. 412; Down syndrome, p. 425; Language, pp. 388–389; Learning, pp. 295–297, 309–310; Memory, pp. 345, 346; Motor development, p. 175; Obesity and weight control, p. 460; Perception, p. 272; Personality traits, pp. 576–580; Psychological disorders: ADHD, p. 600; anxiety disorders, p. 616; biopsychosocial approach, p. 602; mood disorders, p. 629; personality disorders, pp. 621–622; schizophrenia, pp. 641–642; Romantic love, p. 217; Sexual orientation, pp. 475–476; Sexuality, p. 466; Smell, pp. 259–260; Stress: AIDS, pp. 539–540; benefits of exercise, p. 548; cancer, p. 540; personality and illness, pp. 535–537; psychoneuroimmunology, pp. 537–539; Traits, pp. 576, 577–580.

TABLE 2 Neuroscience

In addition to the coverage found in Modules 4–6, neuroscience can be found on the following pages: ADHD and the brain, p. 600; Aggression, pp. 710–711; Aging: physical exercise and the brain, pp. 210–211; Animal language, p. 397; Antisocial personality disorder, p. 621; Autism, pp. 182–183; Biofeedback, p. 549; Brain activity and: aging, pp. 210, 352; dementia and Alzheimer’s, pp. 211–212, 346; disease, p. 248; dreams, pp. 102–105; emotion, pp. 197, 346–348, 503–504, 507–508, 513; sleep, pp. 91–96; Brain development: adolescence, p. 197; experience and, pp. 148–149; infancy and childhood, p. 174; sexual differentiation in utero, p. 161; Cognitive neuroscience, pp. 5, 85; Drug dependence, p. 126; Emotion and cognition, pp. 506–508; Emotional intelligence and brain damage, p. 411; ESP and fMRI testing, p. 284; Fear-learning, p. 616; Fetal alcohol syndrome and brain abnormalities, p. 171; Hallucinations and: hallucinogens, pp. 122–124; near-death experiences, p. 123; sleep, p. 105; Hormones and: abuse, p. 190; development, pp. 160–161, 195–197; emotion, pp. 500–501; gender, pp. 160–161; memory, pp. 346–348; sex, p. 207; sexual behavior, pp. 467–468; stress, pp. 502, 532, 538, 544, 546; weight control, pp. 449–451; Hunger, pp. 449–451; Insight, pp. 371–372; Intelligence, pp. 412–414; and creativity, p. 409; and twins, p. 428; Language, pp. 388–393; and statistical learning, p. 390; and thinking in images, pp. 395–396; Light-exposure therapy: brain scans, p. 666; Limbic system and fear, pp. 519–520; Meditation, p. 552; Memory: physical storage of, pp. 345–346; implicit/explicit memories, pp. 348–351; sleep, pp. 99–100, 105–106; Mirror neurons, pp. 316–317; Neuroscience perspective, defined, p. 9; Neurostimulation therapy: deep-brain stimulation, p. 677; magnetic stimulation, pp. 676–677; Neurotransmitters and: anxiety disorders, pp. 616, 672; biomedical therapy: depression, pp. 632, 673–674; ECT, pp. 675–676; schizophrenia, pp. 639, 672; child abuse, p. 190; cognitive-behavior therapy for obsessive-compulsive disorder, pp. 656–657; curare, p. 50; depression, pp. 630–632; drugs, pp. 113, 115–122; exercise, pp. 548–549; narcolepsy, p. 101; schizophrenia, pp. 639–640, 642; Optimum arousal: rewards, p. 446; Orgasm, pp. 466–467; Pain, pp. 254–255; phantom limb pain, pp. 255–257; Parallel vs. serial processing, pp. 239–240; Perception: brain damage and, pp. 238, 239; color vision, pp. 240–242; feature detection, p. 238; transduction, p. 233; visual information processing, pp. 233, 235–238; Perceptual organization, pp. 262–270; Personality and brain-imaging, p. 576; PET scans and obsessive-compulsive disorder, p. 678; Post-traumatic stress disorder and the limbic system, p. 613; Prejudice (automatic) and the amygdala, p. 705; Psychosurgery: lobotomy, pp. 677–678; Schizophrenia and brain abnormalities, pp. 640, 642; Sensation: body position and movement, p. 253; deafness, pp. 248–249; hearing, pp. 243–245, 247; sensory adaptation, p. 231; smell, pp. 259–261; taste, p. 258; touch, p. 252; Sexual orientation, pp. 475–477; Sleep: hypnotized brain and, p. 111; memory and, pp. 99–100; recuperation during, p. 99; Smell and emotion, pp. 262–263; Unconscious mind, p. 566.


Why a Modular Book?

This 59-module text has been a longtime wish come true for me. It breaks out of the box by restructuring the material into a buffet of (a) short, digestible chapters (called modules) that (b) can be selected and assigned in any order. Have we not all heard the familiar student complaint: “The chapters are too long!” A text’s typical 30- to 50-page chapters cannot be read in a single sitting before the eyes grow weary and the mind wanders. So why not parse the material into readable units? Ask your students whether they would prefer a 700-page book to be organized as fourteen 50-page chapters or as fifty 14-page chapters. You may be surprised at their overwhelming support for shorter chapters. Indeed, students digest material better when they process it in smaller chunks—as “spaced” rather than massed practice.

I have equally often heard from instructors bemoaning the fact that they “just can’t get to everything” in the book. Sometimes instructors want to cover certain sections in a traditional, long chapter but not others. For example, in the typical Consciousness chapter, someone may want to cover Sleep and Hypnosis but not Drugs. In Psychology, Ninth Edition in Modules, instructors could easily choose to cover Module 8, Sleep and Dreams, and Module 9, Hypnosis, but not Module 10, Drugs and Consciousness.

How Is This Different From Psychology, Ninth Edition?

The primary differences between this book and my Psychology, Ninth Edition text are organization and module independence.

Organization

This book really IS Psychology, Ninth Edition—just in a different format. So, this modular version contains all the updated research and innovative new coverage from Psychology, Ninth Edition. A very few sections have moved around to accommodate the modular structure. For example, Rates of Psychological Disorders is a separate section at the end of the Psychological Disorders chapter in Psychology, Ninth Edition, but it is covered in the first of the Psychological Disorders modules in this modular version.

The Modules Are Independent

Each module in this book is self-standing rather than dependent upon the others for understanding. Cross references to other parts of the book have been replaced with brief explanations. In some cases, illustrations or key terms are repeated to avoid possible confusion. No assumptions are made about what students have read prior to each module. This independence gives instructors ultimate flexibility in deciding which modules to use, and in what order. Connections among psychology’s subfields and findings are still made—they are just made in a way that does not assume knowledge of other parts of the book.

What Continues, and What’s New Since Psychology, Eighth Edition in Modules?

Throughout its nine editions, my overall vision for Psychology has not wavered: to merge rigorous science with a broad human perspective in a book that engages both mind and heart. My aim has been to create a state-of-the-art introduction to psychology, written with sensitivity to students’ needs and interests. I aspire to help students understand and appreciate the wonder of important phenomena in their lives. I also want to convey the inquisitive spirit with which psychologists do psychology. The study of psychology, I believe, enhances our abilities to restrain intuition with critical thinking, judgmentalism with compassion, and illusion with understanding.

Believing with Thoreau that “Anything living is easily and naturally expressed in popular language,” I seek to communicate psychology’s scholarship with crisp narrative and vivid storytelling. Writing as a solo author, I hope to tell psychology’s story in a way that is warmly personal as well as rigorously scientific. I love to reflect on connections between psychology and other realms, such as literature, philosophy, history, sports, religion, politics, and popular culture. And I love to provoke thought, to play with words, and to laugh.

Eight Guiding Principles

Despite all the exciting changes, this new edition does retain its predecessors’ voice, as well as much of the content and organization. It also retains the goals—the guiding principles—that have animated the previous eight editions:

1. To exemplify the process of inquiry. I strive to show students not just the outcome of research, but how the research process works. Throughout, the book tries to excite the reader’s curiosity. It invites readers to imagine themselves as participants in classic experiments. Several modules introduce research stories as mysteries that progressively unravel as one clue after another falls into place. (See, for example, the historical story of research on the brain’s processing of language on pages 388–390.)

2. To teach critical thinking. By presenting research as intellectual detective work, I exemplify an inquiring, analytical mindset. Whether students are studying development, cognition, or statistics, they will become involved in, and see the rewards of, critical reasoning. Moreover, they will discover how an empirical approach can help them evaluate competing ideas and claims for highly publicized phenomena—ranging from subliminal persuasion, ESP, and alternative therapies, to astrology, hypnotic regression, and repressed and recovered memories.

3. To put facts in the service of concepts. My intention is not to fill students’ intellectual file drawers with facts, but to reveal psychology’s major concepts—to teach students how to think, and to offer psychological ideas worth thinking about. In each module I place emphasis on those concepts I hope students will carry with them long after they complete the course. Always, I try to follow Albert Einstein’s dictum that “Everything should be made as simple as possible, but not simpler.” Test Yourself questions at the end of each module reinforce the take-home message from that section.

4. To be as up-to-date as possible. Few things dampen students’ interest as quickly as the sense that they are reading stale news. While retaining psychology’s classic studies and concepts, I also present the discipline’s most important recent developments. More than 600 references in this edition are dated 2007 or 2008.

5. To integrate principles and applications. Throughout—by means of anecdotes, case histories, and the posing of hypothetical situations—I relate the findings of basic research to their applications and implications. Where psychology can illuminate pressing human issues—be they racism and sexism, health and happiness, or violence and war—I have not hesitated to shine its light. Ask Yourself questions at the end of each module encourage students to apply the concepts to their own lives, to help make the material more meaningful and memorable.

6. To enhance comprehension by providing continuity. Because the book has a single author, many significant issues—such as cognitive neuroscience, dual processing, cultural and gender diversity, behavior genetics, the bold thinking of intellectual pioneers, human rationality and irrationality, and empathy for and understanding of troubled lives—weave throughout the whole book, and students hear a consistent voice. “The uniformity of a work,” observed Edward Gibbon, “denotes the hand of a single artist.”

7. To reinforce learning at every step. Everyday examples and rhetorical questions encourage students to process the material actively. Concepts are presented and then frequently applied to reinforce learning. Learning objective questions, self-tests, a marginal glossary, and end-of-module key terms lists help students master important concepts and terminology.

PREFACE

8. To convey respect for human unity and diversity Throughout the book, readers will see evidence of our human kinship—our shared biological heritage, our common mechanisms of seeing and learning, hungering and feeling, loving and hating. They will also better understand the dimensions of our diversity—our individual diversity in development and aptitudes, temperament and personality, and disorder and health; and our cultural diversity in attitudes and expressive styles, child-rearing and care for older people, and life priorities.

Continually Improving Cultural and Gender Diversity Coverage

This edition presents an even more thoroughly cross-cultural perspective on psychology (TABLE 3)—reflected in research findings, and text and photo examples. Coverage of the psychology of women and men is thoroughly integrated (see TABLE 4 on the next page).

TABLE 3 Culture and Multicultural Experience

From Module 1 to Module 59, coverage of culture and multicultural experience can be found on the following pages: Aggression, p. 713 Aging population, p. 208 AIDS, pp. 381, 539–540 Anger, p. 520 Animal research ethics, p. 21 Attraction: love and marriage, p. 725 speed-dating, p. 720 Attractiveness, pp. 145–146, 731 Attribution, political effects of, p. 684 Behavioral effects of culture, pp. 183–189 Body ideal, pp. 454–455 Categorization, p. 370 Complementary/alternative medicine, p. 550 Conformity, pp. 690–691, 693 Corporal punishment practices, p. 307 Cultural norms, pp. 152–153, 162, 164–165 Culture: context effects, p. 277 definition, pp. 151–152 and the self, pp. 154–155 shock, pp. 153, 534, 586 Deaf culture, pp. 73, 77, 247–249, 387, 388, 390, 399–400 Development: adolescence, p. 195 attachment, pp. 188, 191 child-rearing, p. 156 cognitive development, p. 185 moral development, p. 199 similarities, pp. 156–157 social development, p. 188 Drugs: psychological effects of, pp. 114, 117 use of, p. 127

Eating disorders: Western culture and, p. 139 Emotion: emotion-detecting ability, pp. 510–511 experiencing, pp. 517, 520 expressing, pp. 512, 513–515 Enemy perceptions, p. 731 Fear, p. 379 Flow, p. 484 Flynn effect, pp. 420–421 Fundamental attribution error, p. 682 Gender: roles, pp. 162–163 social power, p. 158 Grief, expressing, p. 221 Happiness, p. 528 Hindsight bias, p. 15 History of psychology, pp. 2–6 Homosexuality, views on, p. 27 Human diversity/kinship, pp. 19–20, 151–153 Identity, forming a social, p. 201 Individualism/collectivism, pp. 154–155 Intelligence, pp. 404, 416, 418, 420–421, 433–434, 435–438 bias, pp. 438–439 nutrition and, p. 436 Language, pp. 152, 384–387, 388, 392–395 monolingual/bilingual, p. 395 Leaving the nest, pp. 204–205 Life satisfaction, pp. 524–526 Life-expectancy, p. 208 Life-span and well-being, p. 219 Loop systems, pp. 280–281 Management styles, pp. 495–496 Marriage, p. 217 Mating preferences, pp. 145–146

Meditation, p. 552 Memory, encoding, pp. 333, 352 Menopause, p. 207 Mental illness rate, p. 607 Molecular genetics: “missing women,” p. 141 Motivation: hierarchy of needs, p. 447 Need to belong, pp. 479–481 Neurotransmitters: curare, p. 50 Obesity, pp. 456–457, 460–462 Obesity guidance/counseling, p. 457 Observational learning: attachment and television viewing, p. 191 television and aggression, p. 319 Optimism and health, p. 544 Organ donation, p. 382 Pace of life, pp. 29, 153 Pain, perception of, p. 256 Parapsychology, p. 281 Parent and peer relationships, pp. 202–203 Participative management, p. 496 Peacemaking: conciliation, p. 734 contact, p. 731 cooperation, pp. 732–733 Peer influence, p. 151 Personal space, p. 153 Personality, p. 584 Prejudice prototypes, p. 371 Prejudice, pp. 23, 36, 703–709 Psychoanalysis, p. 648 Psychological disorders: antisocial personality disorder, p. 622 cultural norms, p. 599 depression, pp. 628, 634

dissociative personality disorder, p. 619 eating disorders, pp. 454–455, 602 rates of, p. 597 schizophrenia, pp. 602, 640–641 somatoform, p. 622 suicide, p. 630 susto, p. 602 taijin-kyofusho, p. 602 Psychotherapy: culture and values in, pp. 668–669 EMDR training, p. 665 Puberty and adult independence, pp. 204–205 Self-esteem, p. 528 Self-serving bias, pp. 592–593, 594 Sex drive, p. 144 Sexual orientation, pp. 472–473 Similarities, pp. 142–143 Social clock, p. 216 Social loafing, p. 698 Social-cultural perspective, p. 9 Spirituality: Israeli kibbutz communities, pp. 552–554 Stress: adjusting to a new culture, p. 534 racism and, p. 535 Taste preferences, p. 452 Teen sexuality, pp. 470–472 Testing bias, pp. 439–440 Theory of mind: internalizing language, p. 184 Weight control, p. 453 See also Modules 56–59, Social Psychology, pp. 681–734


TABLE 4 The Psychology of Men and Women

Coverage of the psychology of men and women can be found on the following pages: ADHD, p. 600 Adulthood: physical changes, pp. 206–207 Aggression, pp. 711–715 pornography, pp. 714–715 rape, pp. 712, 714–715 Alcohol: addiction and, p. 116 use, p. 115 sexual aggression, p. 117 Altruism: help-receiving, p. 728 Antisocial personality disorder, p. 620 Attraction, pp. 720–722 Autism, p. 182 Behavioral effects of gender, p. 20 Biological predispositions, and the color red, pp. 296–297 Biological sex/gender, pp. 160–161 Bipolar disorder, p. 627 Body image, pp. 454–455 Classical conditioning and trauma/rape, p. 299 Color vision, p. 241 Conformity: obedience, p. 694 Dating, p. 720 Depression, pp. 625, 627, 632–633 Dream content, p. 103 Drug use: biological influences, p. 126 methamphetamines, p. 118 psychological/social-cultural influences, p. 126

Eating disorders, pp. 453–455 Emotion-detecting ability, pp. 433, 511–513 Empty nest, p. 218 Father care, pp. 188, 472 Freud’s views: evaluating, p. 565 identification/gender identity, pp. 560–561 Oedipus/Electra complexes, p. 560 penis envy, p. 562 Gender: and anxiety, p. 610 and child-rearing, pp. 163, 454 development, pp. 157–164 prejudice, pp. 703–704 roles, pp. 162–163 similarities/differences, pp. 157–160 Gendered brain, pp. 160–161, 469–470, 478 Generic pronoun “he,” p. 394 Grief, p. 220 Group polarization, p. 699 Happiness, p. 529 Hormones and: aggression, p. 711 sexual behavior, pp. 467–468 sexual development, pp. 160–161, 195–197 testosterone-replacement therapy, p. 468

Intelligence, pp. 432–435 bias, p. 439 low extreme, p. 425 Leadership: transformational, p. 495 Life expectancy, p. 208 Losing weight, p. 462 Marriage, pp. 217–218, 545 Maturation, pp. 195–197 Menarche, p. 196 Menopause, p. 207 Midlife crisis, p. 216 Molecular genetics: “missing women,” p. 141 Obesity: genetic factors, p. 460 guidance/counseling, p. 457 health risks, p. 457 ingested calories, p. 461 weight discrimination, pp. 457–458 Observational learning: sexually violent media, p. 321 TV’s influence, p. 319 Pornography, p. 469 Post-traumatic stress disorder: development of, p. 614 Prejudice, pp. 371, 703–706 Psychological disorders, rates of, p. 607 Rape, p. 709 Religiosity and: life expectancy, pp. 552–553

REM sleep, arousal in, pp. 94–95 Romantic love, pp. 724–726 Savant syndrome, p. 406 Schizophrenia, pp. 638–639 Sense of smell, p. 260 Sexual abuse, p. 143 Sexual attraction, pp. 144–146 Sexual disorders, p. 467 Sexual fantasies, p. 470 Sexual orientation, pp. 472–479 Sexuality, pp. 466–472 adolescent, pp. 470–472 evolutionary explanation, pp. 143–146 external stimuli, p. 469 Sleep, p. 97 Stereotyping, p. 277 Stress: and depression, p. 537 and heart disease, pp. 535–536 and HIV, p. 539 and the immune system, p. 538 and health and sexual abuse, p. 546 response, p. 532 Suicide, pp. 630–631 Women in psychology, p. 4

In addition, I am working to offer a world-based psychology for our worldwide student readership. Thus, I continually search the world for research findings and text and photo examples, conscious that readers may be in Melbourne, Sheffield, Vancouver, or Nairobi. North American and European examples come easily, given that I reside in the United States, maintain contact with friends and colleagues in Canada, subscribe to several European periodicals, and live periodically in the U.K. This edition, for example, offers 61 explicit Canadian and 151 British examples, and 72 mentions of Australia and New Zealand. We are all citizens of a shrinking world, thanks to increased migration and the global economy. Thus, American students, too, benefit from information and examples that internationalize their world-consciousness. And if psychology seeks to explain human behavior (not just American or Canadian or Australian behavior), the broader the scope of studies presented, the more accurate is our picture of this world’s people. My aim is to expose all students to the world beyond their own culture, and I continue to welcome input and suggestions from all readers. Discussion of the relevance of cultural and gender diversity begins on the first page of the first module and continues throughout the text. Modules 11 and 12 provide focused coverage, encouraging students to appreciate cultural and gender differences and commonalities, and to consider the interplay of nature and nurture.


Emphasis on the Biological-Psychological-Social/Cultural Levels of Analysis Approach in Psychology

Psychology, Ninth Edition in Modules explores the biological, psychological, and social-cultural influences on our behavior. A significant section in Module 1 introduces the levels-of-analysis approach, setting the stage for discussion in other modules, and levels-of-analysis figures in several modules help students understand concepts in the biopsychosocial context.

Increasing Sensitivity to the Clinical Perspective

With helpful guidance from clinical psychologist colleagues, I have become more mindful of the clinical angle on various concepts within psychology, which has sensitized and improved the Personality, Psychological Disorders, and Therapy units, among others. For example, I cover problem-focused and emotion-focused coping strategies in Module 44, Promoting Health, and Module 34, Assessing Intelligence, describes how psychologists use intelligence tests in clinical settings.

Strong Critical Thinking Coverage

I aim to introduce students to critical thinking throughout the book. New learning objective questions at the beginning of main sections, and Review sections at the end of each module, encourage critical reading to glean an understanding of important concepts. This ninth edition also includes the following opportunities for students to learn or practice their critical thinking skills.

• Module 2, Thinking Critically With Psychological Science, introduces students to psychology's research methods, emphasizing the fallacies of our everyday intuition and common sense and, thus, the need for psychological science. Critical thinking is introduced as a key term in this module (p. 18).
• The Statistical Reasoning discussion in Module 3 encourages students to "focus on thinking smarter by applying simple statistical principles to everyday reasoning" (pp. 37–41).
• "Thinking Critically About . . ." boxes are found throughout the book, modeling for students a critical approach to some key issues in psychology. For example, see the updated box "Thinking Critically About: The Fear Factor—Do We Fear the Right Things?" (pp. 378–379).
• Detective-style stories throughout the narrative get students thinking critically about psychology's key research questions.
• "Apply this" and "Think about it"-style discussions keep students active in their study of each module.
• Critical examinations of pop psychology spark interest and provide important lessons in thinking critically about everyday topics.

See TABLE 5 on the next page for a complete list of this text’s coverage of critical thinking topics and Thinking Critically About boxes.

Stellar Teaching and Learning Resources

Our supplements and media have been celebrated for their quality, abundance, and connectivity. The package available for Psychology, Ninth Edition in Modules raises the bar even higher with PsychPortal, which includes an interactive eBook, a suite of interactive components, the powerful Online Study Center, the Student Video Tool Kit for Introductory Psychology, and the Scientific American News Feed. See page xxv for details.

APA Learning Goals and Outcomes for Psychology Majors

In March 2002, an American Psychological Association (APA) Task Force created a set of Learning Goals and Outcomes for students graduating with psychology majors from four-year schools (www.apa.org/ed/pcue/).


TABLE 5 Critical Thinking and Research Emphasis

Critical thinking coverage, and in-depth stories of psychology’s scientific research process, can be found on the following pages: Thinking Critically About . . . boxes: The Fear Factor—Do We Fear the Right Things?, pp. 378–379 Lie Detection, pp. 504–505 Complementary and Alternative Medicine, p. 550 How to Be a “Successful” Astrologer or Palm Reader, pp. 578–579 ADHD—Normal High Energy or Genuine Disorder?, p. 600 Insanity and Responsibility, p. 606 “Regressing” from Unusual to Usual, p. 662 Critical Examinations of Pop Psychology: The need for psychological science, p. 14 Perceiving order in random events, pp. 33–34 Do we use only 10 percent of our brains?, p. 72 Can hypnosis enhance recall? Coerce action? Be therapeutic? Alleviate pain?, pp. 109–110 Has the concept of “addiction” been stretched too far?, pp. 114–115

Near-death experiences, p. 123 Critiquing the evolutionary perspective, p. 146 How much credit (or blame) do parents deserve?, p. 150 Is there extrasensory perception?, pp. 281–283 How valid is the Rorschach test?, pp. 564–565 Is repression a myth?, p. 566 Is Freud credible?, pp. 565–568 Is psychotherapy effective?, pp. 660–664 Evaluating alternative therapies, pp. 665–666 Do video games teach or release violence?, pp. 715–717 Thinking Critically with Psychological Science: The limits of intuition and common sense, pp. 14–15 The scientific attitude, pp. 17–19 “Critical thinking” introduced as a key term, p. 18 The scientific method, pp. 25–26 Correlation and causation, pp. 31–32

Illusory correlation, pp. 32–33 Exploring cause and effect, p. 34 Random assignment, pp. 34–35 Independent and dependent variables, pp. 35–36 Statistical reasoning, pp. 37–41 Describing data, pp. 37–40 Making inferences, pp. 40–41 Scientific Detective Stories: Is breast milk better than formula?, pp. 34–36 Our divided brains, pp. 74–77 Why do we sleep?, pp. 96–100 Why do we dream?, pp. 104–106 Is hypnosis an extension of normal consciousness or an altered state?, pp. 110–112 Twin and adoption studies, pp. 133–137 How a child’s mind develops, pp. 176–185 Aging and intelligence, pp. 213–215 Parallel processing, pp. 239–240 How do we see in color?, pp. 240–242

How do we store memories in our brains?, pp. 345–351 How are memories constructed?, pp. 357–365 Do animals exhibit language?, pp. 399–402 Why do we feel hunger?, pp. 448–451 What determines sexual orientation?, pp. 472–479 The pursuit of happiness: Who is happy, and why?, pp. 521–529 Why—and in whom—does stress contribute to heart disease?, pp. 535–537 How and why is social support linked with health?, pp. 545–547 Self-esteem versus self-serving bias, pp. 592–594 What causes mood disorders?, pp. 627–635 Do prenatal viral infections increase risk of schizophrenia?, pp. 640–641 Is psychotherapy effective?, pp. 660–663 Why do people fail to help in emergencies?, pp. 726–728

Psychology departments in many schools have since used these goals and outcomes to help them establish their own benchmarks. Some instructors are eager to know whether a given text for the introductory course helps students get a good start at achieving these goals. Psychology, Ninth Edition in Modules will work nicely to help you begin to address these goals in your department. See www.worthpublishers.com/myers for a detailed guide to how Psychology, Ninth Edition in Modules corresponds to the APA Learning Goals and Outcomes.

A Thoroughly Considered Pedagogical Program

This edition includes the following study aids.

• Numbered Questions establish learning objectives for each significant section of text (around 3 to 7 per module) and direct student reading.
• Review sections, found at the end of each module, repeat the numbered objective questions and address them with a narrative summary followed by page-referenced Terms and Concepts to Remember.
• The module-ending Review sections also include Ask Yourself questions, which encourage students to apply new concepts to their own experiences, and Test Yourself questions (with answers in an appendix) that assess student mastery and encourage big-picture thinking.

Thoroughly Updated

Despite the overarching continuity, there is change on every page. There are updates everywhere and some 1300 new references—comprising nearly 30 percent of the bibliography! Psychology as a field is moving, and this new edition reflects much of that exciting progress.


Streamlined Coverage

My teaching colleagues have asked for a somewhat shorter length to help the book better fit the course. I worked judiciously to reduce the length, often by removing repetitive research examples (it is sometimes very hard to choose among all the great options!) and with lean, clean rewriting. The result is a text that is about 44 pages shorter.

Consciousness and the Two-Track Mind

New Module 7, The Brain and Consciousness, contains coverage of cognitive neuroscience and dual processing, establishing both more firmly as key ideas in psychology. To help students make the connection to neuroscience in The Biology of Mind unit (Modules 4 through 6), the Consciousness modules now follow (Modules 7 through 10). Module 7 previews the new evidence of the enormity of our automatic, out-of-sight information processing, including our implicit memories and attitudes.

Exciting New Art Program

We worked carefully with talented artists to create all new anatomical and "people" art throughout the text. The result is pedagogically more effective and visually more appealing.

Innovative Multimedia Supplements Package

Psychology, Ninth Edition in Modules boasts an impressive array of electronic and print supplements. For more information about any of these titles, visit Worth Publishers' online catalog at www.worthpublishers.com.

PsychPortal

[FIGURE 1: PsychPortal opening page]

Integrating the best online material that Worth has to offer, PsychPortal is an innovative learning space that combines a powerful quizzing engine with unparalleled media resources (see FIGURE 1). PsychPortal conveniently offers all the functionality you need to support your online or hybrid course, yet it is flexible, customizable, and simple enough to enhance your traditional course. The following interactive learning materials contained within PsychPortal make it truly unique:

• An interactive eBook allows students to highlight, bookmark, and make their own notes just as they would with a printed textbook.
• Tom Ludwig's (Hope College) suite of interactive media—PsychSim 5.0, PsychInquiry, and the new Concepts in Action—brings key concepts to life.
• The Online Study Center combines PsychPortal's powerful assessment engine with Worth's unparalleled collection of interactive study resources. Based on their quiz results, students receive Personalized Study Plans that direct them to sections in the book and also to simulations, animations, links, and tutorials that will help them succeed in mastering the concepts. Instructors can access reports indicating their students' strengths and weaknesses (based on class quiz results) and browse suggestions for helpful presentation materials (from Worth's renowned videos and demonstrations) to focus their teaching efforts accordingly.
• The Student Video Tool Kit for Introductory Psychology includes more than 110 engaging video modules that instructors can easily assign, assess, and customize for their students (FIGURE 2). Videos cover classic experiments, current news footage, and cutting-edge research, all of which are sure to spark discussion and encourage critical thinking.
• Scientific American News Feed highlights current behavioral research.

Additional Student Media

[FIGURE 2: Sample of our Student Video Tool Kit]

• Book Companion Site
• Worth eBook for Psychology, Ninth Edition in Modules
• The Online Study Center
• Psych2Go (audio downloads for study and review)


• PsychSim 5.0 (on CD-ROM)
• Student Video Tool Kit for Introductory Psychology (online and on CD-ROM)

Course Management

• Enhanced Course Management Solutions

Assessment

• Printed Test Bank, Volumes 1 and 2
• Diploma Computerized Test Bank
• i•Clicker Radio Frequency Classroom Response System

Presentation

• ActivePsych: Classroom Activities Project and Video Teaching Modules (including Worth's Digital Media Archive, Second Edition, and Scientific American Frontiers Video Collection, Third Edition)
• Instructor's Resource CD-ROM
• Worth's Image and Lecture Gallery at www.worthpublishers.com/ilg

Video and DVD

• Instructor Video Tool Kit for Introductory Psychology
• Digital Media Archive, Second Edition (available within ActivePsych and on closed-captioned DVD)
• Scientific American Frontiers Video Collection, Third Edition (available within ActivePsych and on closed-captioned DVD)
• Worth Digital Media Archive
• Scientific American Frontiers Video Collection, Second Edition
• The Mind Video Teaching Modules, Second Edition
• The Brain Video Teaching Modules, Second Edition
• Psychology: The Human Experience Teaching Modules
• Moving Images: Exploring Psychology Through Film
• The Many Faces of Psychology Video

Print Resources

• Instructor's Resources and Lecture Guides
• Instructor's Media Guide for Introductory Psychology
• Study Guide
• Pursuing Human Strengths: A Positive Psychology Guide
• Critical Thinking Companion, Second Edition

Scientific American Resources

• Scientific American Mind
• Scientific American Reader to Accompany Myers
• Improving the Mind and Brain: A Scientific American Special Issue
• Scientific American Explores the Hidden Mind: A Collector's Edition

In Appreciation

If it is true that "whoever walks with the wise becomes wise," then I am wiser for all the wisdom and advice received from my colleagues. Aided by over a thousand consultants and reviewers over the last two decades, this has become a better, more accurate book than one author alone (this author, at least) could write. As my editors and I keep reminding ourselves, all of us together are smarter than any one of us.


My indebtedness continues to each of the teacher-scholars whose influence I have acknowledged in previous editions, to the innumerable researchers who have been so willing to share their time and talent to help me accurately report their research, and to the 191 instructors who took the time to respond to our early information-gathering survey. I also appreciated having detailed input from three of Rick Maddigan's (Memorial University) students—Charles Collier, Alex Penney, and Megan Freake. My gratitude extends to the colleagues who contributed criticism, corrections, and creative ideas related to the content, pedagogy, and format of this new edition and its supplements package. For their expertise and encouragement, and the gifts of their time to the teaching of psychology, I thank the reviewers and consultants listed below.

Richard Alexander, Muskegon Community College

Clara Cheng, American University

Carol Anderson, Bellevue Community College

Jennifer Cina, Barnard College

Aaron Ashly, Weber State University

Virgil Davis, Ashland Community and Technical College

John Baker, University of Wisconsin, Stevens Point

Joyce C. Day, Naugatuck Valley Community College

Dave Baskind, Delta College

Dawn Delaney, Madison Area Technical College

Beth Lanes Battinelli, Union County College

G. William Domhoff, University of California, Santa Cruz

Alan Beauchamp, Northern Michigan University

Darlene Earley-Hereford, Southern Union State Community College, Opelika

Brooke Bennett, Florida State University

Kimberly Fairchild, Rutgers University, Livingston

Sylvia Beyer, University of Wisconsin, Parkside

Pam Fergus, Inver Hills Community College

Patricia Bishop, Cleveland State Community College

Christopher J. Ferguson, Texas A&M International University

James Bodle, College of Mount Saint Joseph

Faith Florer, New York University

Linda Bradford, Community College of Aurora

Jocelyn Folk, Kent State University

Steve Brasel, Moody Bible Institute

Patricia Foster, Austin Community College, Northridge

June Breninger, Cascade College

Lauren Fowler, Weber State University

Tom Brothen, University of Minnesota

Daniel J. Fox, Sam Houston State University

Eric L. Bruns, Campbellsville University

Ron Friedman, Rochester University

David Campell, Humboldt State University

Stan Friedman, Southwest Texas State University

LeeAnn Cardaciotto, La Salle University

Sandra Geer, Northeastern University

Jill Carlivati, George Washington University

Sandra Gibbs, Muskegon Community College

Kenneth Carter, Oxford College

Bryan Gibson, Central Michigan University

Lorelei Carvajal, Triton College

Carl Granrud, University of Northern Colorado

Sarah Caverly, George Mason University

Laura Gruntmeir, Redlands Community College


R. Mark Hamilton, Chippewa Valley Technical College

Antoinette Miller, Clayton State University

Lora Harpster, Salt Lake Community College

Robin Morgan, Indiana University, Southeast

Susan Harris-Mitchell, College of DuPage

Jeffrey Nicholas, Bridgewater State College

Lesley Hathorn, University of Nevada, Las Vegas

Dan Patanella, John Jay College of Criminal Justice, CUNY

Paul Hillock, Algonquin College

Shirley Pavone, Sacred Heart University

Herman Huber, College of Saint Elizabeth

Andrew Peck, Penn State University

Linda Jackson, Michigan State University

Tom Peterson, Grand View College

Andrew Johnson, Park University

Brady Phelps, South Dakota State University

Deanna Julka, University of Portland

Michelle Pilati, Rio Hondo College

Regina Kakhnovets, Alfred University

Ron Ponsford, North Nazarene University

Paul Kasenow, Henderson Community College

Diane Quartarolo, Sierra College

Teresa King, Bridgewater State College

Sharon Rief, Logan View High School, and Northeast Community College

Kristina Klassen, North Idaho College

Chris Koch, George Fox University

Daniel Kretchman, University of Rhode Island, Providence

Jean Kubek, New York City College of Technology, CUNY

Priya Lalvani, William Patterson University

Claudia Lampman, University of Alaska, Anchorage

Deb LeBlanc, Bay Mills Community College

Don Lucas, Northwest Vista College

Angelina MacKewn, University of Tennessee, Martin

Marion Mason, Bloomsburg University of Pennsylvania

Sal Massa, Marist College

Christopher May, Carroll College

Paul Mazeroff, McDaniel College

Donna McEwen, Friends University

Brian Meier, Gettysburg College

Michelle Merwin, University of Tennessee, Martin

Dinah Meyer, Muskingum College

Alan Roberts, Indiana University, Bloomington

June Rosenberg, Lyndon State College

Nicole Rossi, Augusta State University

Wade Rowatt, Baylor University

Michelle Ryder, Ashland University

Patrick Saxe, SUNY, New Paltz

Sherry Schnake, Saint Mary-of-the-Woods College

Cindy Selby, California State University, Chico

Dennis Shaffer, Ohio State University

Mark Sibicky, Marietta College

Randy Simonson, College of Southern Idaho

David B. Simpson, Valparaiso College

David D. Simpson, Carroll College

Jeff Skowronek, University of Tampa

Todd Smith, Lake Superior State University

Bettina Spencer, Saint Mary's College


O’Ann Steere, College of DuPage

Barbara Van Horn, Indian River Community College

Barry Stennett, Gainesville State College

Michael Verro, Champlain College

Bruce Stevenson, North Island College

Craig Vickio, Bowling Green State University

Colleen Stevenson, Muskingum College

Denise Vinograde, LaGuardia Community College, CUNY

Jaine Strauss, Macalester College

Joan Warmbold, Oakton Community College

Cynthia Symons, Houghton College

Eric Weiser, Curry College

Rachelle Tannenbaum, Anne Arundel Community College

Diane Wille, Indiana University Southeast

Sarah Ting, Cerritos College

Paul Young, Houghton College

At Worth Publishers a host of people played key roles in creating this new edition. Although the information gathering is never ending, the formal planning began as the author-publisher team gathered for a two-day retreat in June 2007. This happy and creative gathering included John Brink, Martin Bolt, Thomas Ludwig, Richard Straub, and me from the author team, along with my assistants Kathryn Brownson and Sara Neevel. We were joined by Worth Publishers executives Tom Scotty, Elizabeth Widdicombe, and Catherine Woods; editors Christine Brune, Kevin Feyen, Nancy Fleming, Tracey Kuehn, Betty Probert, and Peter Twickler; artistic director Babs Reingold; and sales and marketing colleagues Kate Nurre, Tom Kling, Guy Geraghty, Sandy Manly, Amy Shefferd, Rich Rosenlof, and Brendan Baruth. The input and brainstorming during this meeting of minds gave birth, among other things, to the new pedagogy in this edition, and to new Module 7, The Brain and Consciousness. Christine Brune, chief editor for the last seven editions, is a wonder worker. She offers just the right mix of encouragement, gentle admonition, attention to detail, and passion for excellence. An author could not ask for more. Development editor Nancy Fleming is one of those rare editors who is gifted both at “thinking big” while also applying her sensitive, graceful, line-by-line touches. Editor Trish Morgan has repeatedly amazed me with her wide-ranging knowledge, meticulous focus, and deft editing. Senior Psychology Acquisitions Editor Kevin Feyen has become a valued team leader, thanks to his dedication, creativity, and sensitivity. Publisher Catherine Woods helped construct and execute the plan for this text and its supplements. Catherine was also a trusted sounding board as we faced a seemingly unending series of discrete decisions along the way. Sharon Prevost coordinated production of the huge supplements package for this edition. 
Betty Probert efficiently edited and produced the print supplements and, in the process, also helped fine-tune the whole book. Lorraine Klimowich, with help from Greg Bennetts and Emily Ernst, provided invaluable support in commissioning and organizing the multitude of reviews, mailing information to professors, and handling numerous other daily tasks related to the book’s development and production. Lee McKevitt did a splendid job of laying out each page. Bianca Moscatelli and Donna Ranieri worked together to locate the myriad photos. Associate Managing Editor Tracey Kuehn and Project Editor Dana Kasowitz displayed tireless tenacity, commitment, and impressive organization in leading Worth’s gifted artistic production team and coordinating editorial input throughout the production process. Production Manager Sarah Segal masterfully kept the book to its tight schedule, and Babs Reingold skillfully directed creation of the beautiful new design and art program. Production Manager Stacey Alexander, along with supplements production editor Jenny Chiu, did their usual excellent work of producing the many supplements.

To achieve our goal of supporting the teaching of psychology, this teaching package must not only be authored, reviewed, edited, and produced, but also made available to teachers of psychology. For their exceptional success in doing that, our author team is grateful to Worth Publishers’ professional sales and marketing team. We are especially grateful to Executive Marketing Manager Kate Nurre, Marketing Manager Amy Shefferd, and National Psychology and Economics Consultant Tom Kling for their tireless efforts to inform our teaching colleagues of our efforts to assist their teaching, and for the joy of working with them. At Hope College, the supporting team members for this edition included Kathryn Brownson, who researched countless bits of information and proofed hundreds of pages. Kathryn has become a knowledgeable and sensitive adviser on many matters, and Sara Neevel has become our high-tech manuscript developer, par excellence. Kathryn Brownson updated, with page citations, all the cross-referenced Preface tables. Again, I gratefully acknowledge the influence and editing assistance of my writing coach, poet Jack Ridl, whose influence resides in the voice you will be hearing in the pages that follow. He, more than anyone, cultivated my delight in dancing with the language, and taught me to approach writing as a craft that shades into art. After hearing countless people say that this book’s supplements have taken their teaching to a new level, I reflect on how fortunate I am to be a part of a team in which everyone has produced on-time work marked by the highest professional standards. For their remarkable talents, their long-term dedication, and their friendship, I thank Martin Bolt (Instructor’s Manual), John Brink (Test Bank), Thomas Ludwig (PsychPortal), and Richard Straub (Study Guide).
Finally, my gratitude extends to the many students and instructors who have written to offer suggestions, or just an encouraging word. It is for them, and those about to begin their study of psychology, that I have done my best to introduce the field I love. The day this book went to press was the day I started gathering information and ideas for the tenth edition. Your input will again influence how this book continues to evolve. So, please, do share your thoughts.

Hope College Holland, Michigan 49422-9000 USA


Introduction to the History and Science of Psychology

modules
1 The Story of Psychology
2 Thinking Critically With Psychological Science
3 Research Strategies: How Psychologists Ask and Answer Questions

Harvard astronomer Owen Gingerich (2006) reports that there are more than 100 billion galaxies. Just one of these, our own relative speck of a galaxy, has some 200 billion stars, many of which, like our Sun-star, are circled by planets. On the scale of outer space, we are less than a single grain of sand on all the oceans’ beaches, and our lifetime but a relative nanosecond. Yet there is nothing more awe-inspiring and absorbing than our own inner space. Our brain, adds Gingerich, “is by far the most complex physical object known to us in the entire cosmos” (p. 29). Our consciousness—mind somehow arising from matter—remains a profound mystery. Our thinking, emotions, and actions (and their interplay with others’ thinking, emotions, and actions) fascinate us. Outer space staggers us with its enormity, but inner space enthralls us. Enter psychological science. For people whose exposure to psychology comes from popular books, magazines, TV, and the Internet, psychologists analyze personality, offer counseling, and dispense child-rearing advice. Do they? Yes, and much more. Consider some of psychology’s questions that from time to time you may wonder about:

䉴 Have you ever found yourself reacting to something as one of your biological parents would—perhaps in a way you vowed you never would—and then wondered how much of your personality you inherited? To what extent are person-to-person differences in personality predisposed by our genes? To what extent by the home and community environments?
䉴 Have you ever worried about how to act among people of a different culture, race, or gender? In what ways are we alike as members of the human family? How do we differ?
䉴 Have you ever awakened from a nightmare and, with a wave of relief, wondered why you had such a crazy dream? How often, and why, do we dream?
䉴 Have you ever played peekaboo with a 6-month-old and wondered why the baby finds the game so delightful? The infant reacts as though, when you momentarily move behind a door, you actually disappear—only to reappear later out of thin air. What do babies actually perceive and think?
䉴 Have you ever wondered what leads to school and work success? Are some people just born smarter? Does sheer intelligence explain why some people get richer, think more creatively, or relate more sensitively?
䉴 Have you ever become depressed or anxious and wondered whether you’ll ever feel “normal”? What triggers our bad moods—and our good ones?

Such questions provide grist for psychology’s mill, because psychology is a science that seeks to answer all sorts of questions about us all—how and why we think, feel, and act as we do. In Module 1, we trace psychology’s roots and survey its scope. In Module 2, we consider how psychological science can help you to think critically in everyday life and to understand some dangers in relying too heavily on intuition and common sense. In Module 3, we survey psychology’s methods—how psychologists ask and answer questions.

“I have made a ceaseless effort not to ridicule, not to bewail, not to scorn human actions, but to understand them.”
Benedict Spinoza, A Political Treatise, 1677

A smile is a smile the world around Throughout this book, you will see examples not only of our cultural and gender diversity but also of the similarities that define our shared human nature. People in different cultures vary in when and how often they smile, but a naturally happy smile means the same thing anywhere in the world.


What Is Psychology? Contemporary Psychology

module 1 The Story of Psychology

䉴|| What Is Psychology?

Psychology’s Roots

Once upon a time, on a planet in this neighborhood of the universe, there came to be people. Soon thereafter, these creatures became intensely interested in themselves and in one another: “Who are we? What produces our thoughts? Our feelings? Our actions? And how are we to understand and manage those around us?”

Psychological Science Is Born

|| To assist your active learning, I will periodically offer learning objectives. These will be framed as questions that you can answer as you read on. ||

To be human is to be curious about ourselves and the world around us. Before 300 B.C., the Greek naturalist and philosopher Aristotle theorized about learning and memory, motivation and emotion, perception and personality. Today we chuckle at some of his guesses, like his suggestion that a meal makes us sleepy by causing gas and heat to collect around the source of our personality, the heart. But credit Aristotle with asking the right questions. Philosophers’ thinking about thinking continued until the birth of psychology as we know it, on a December day in 1879, in a small, third-floor room at Germany’s University of Leipzig. There, two young men were helping an austere, middle-aged professor, Wilhelm Wundt, create an experimental apparatus. Their machine measured the time lag between people’s hearing a ball hit a platform and their pressing a telegraph key (Hunt, 1993). Curiously, people responded in about one-tenth of a second when asked to press the key as soon as the sound occurred—and in about two-tenths of a second when asked to press the key as soon as they were consciously aware of perceiving the sound. (To be aware of one’s awareness takes a little longer.) Wundt was seeking to measure “atoms of the mind”—the fastest and simplest mental processes. Thus began what many consider psychology’s first experiment, launching the first psychological laboratory, staffed by Wundt and psychology’s first graduate students. Before long, this new science of psychology became organized into different branches, or schools of thought, each promoted by pioneering thinkers. These early schools included structuralism and functionalism, described here, and three schools described in other modules: Gestalt psychology, behaviorism, and psychoanalysis.

Wilhelm Wundt Wundt (far left) established the first psychology laboratory at the University of Leipzig, Germany.

1-1 When and how did psychological science begin?

|| Information sources are cited in parentheses, with name and date. Every citation can be found in the endof-book References, with complete documentation that follows American Psychological Association style. ||

|| Throughout the text, important concepts are boldfaced. As you study, you can find these terms with their definitions in a nearby margin and in the Glossary at the end of the book. ||


Thinking About the Mind’s Structure

Soon after receiving his Ph.D. in 1892, Wundt’s student Edward Bradford Titchener joined the Cornell University faculty and introduced structuralism. As physicists and chemists discerned the structure of matter, so Titchener aimed to discover the structural elements of mind. His method was to engage people in self-reflective introspection (looking inward), training them to report elements of their experience as they looked at a rose, listened to a metronome, smelled a scent, or tasted a substance. What were their immediate sensations, their images, their feelings? And how did these relate to one another? Titchener shared with the English essayist C. S. Lewis the view that “there is one thing, and only one in the whole universe which we know more about than we could learn from external observation.” That one thing, Lewis said, is ourselves. “We have, so to speak, inside information” (1960, pp. 18–19). Alas, introspection required smart, verbal people. It also proved somewhat unreliable, its results varying from person to person and experience to experience. Moreover, we often just don’t know why we feel what we feel and do what we do. Recent studies indicate that people’s recollections frequently err. So do their self-reports about what, for example, has caused them to help or hurt another (Myers, 2002). As introspection waned, so did structuralism.

Edward Bradford Titchener Titchener used introspection to search for the mind’s structural elements.

“You don’t know your own mind.” Jonathan Swift, Polite Conversation, 1738

Thinking About the Mind’s Functions

Unlike those hoping to assemble the structure of mind from simple elements—which was rather like trying to understand a car by examining its disconnected parts—philosopher-psychologist William James thought it more fruitful to consider the evolved functions of our thoughts and feelings. Smelling is what the nose does; thinking is what the brain does. But why do the nose and brain do these things? Under the influence of evolutionary theorist Charles Darwin, James assumed that thinking, like smelling, developed because it was adaptive—it contributed to our ancestors’ survival. Consciousness serves a function. It enables us to consider our past, adjust to our present circumstances, and plan our future. As a functionalist, James encouraged explorations of down-to-earth emotions, memories, willpower, habits, and moment-to-moment streams of consciousness. James’ greatest legacy, however, came less from his laboratory than from his Harvard teaching and his writing. When not plagued by ill health and depression, James was an impish, outgoing, and joyous man, who once recalled that “the first lecture on psychology I ever heard was the first I ever gave.” During one of his wise-cracking lectures, a student interrupted and asked him to get serious (Hunt, 1993). He was reportedly one of the first American professors to solicit end-of-course student evaluations of his teaching. He loved his students, his family, and the world of ideas, but he tired of painstaking chores such as proofreading. “Send me no proofs!” he once told an editor. “I will return them unopened and never speak to you again” (Hunt, 1993, p. 145).

structuralism an early school of psychology that used introspection to explore the structural elements of the human mind.

functionalism a school of psychology that focused on how our mental and behavioral processes function—how they enable us to adapt, survive, and flourish.


James displayed the same spunk in 1890, when—over the objections of Harvard’s president—he admitted Mary Calkins into his graduate seminar (Scarborough & Furumoto, 1987). (In those years women lacked even the right to vote.) When Calkins joined, the other students (all men) dropped out. So James tutored her alone. Later, she finished all the requirements for a Harvard Ph.D., outscoring all the male students on the qualifying exams. Alas, Harvard denied her the degree she had earned, offering her instead a degree from Radcliffe College, its undergraduate sister school for women. Calkins resisted the unequal treatment and refused the degree. (More than a century later, psychologists and psychology students were lobbying Harvard to posthumously award the Ph.D. she earned [Feminist Psychologist, 2002].) Calkins nevertheless went on to become a distinguished memory researcher and the American Psychological Association’s (APA’s) first female president in 1905. When Harvard denied Calkins the claim to being psychology’s first female Ph.D., that honor fell to Margaret Floy Washburn, who later wrote an influential book, The Animal Mind, and became the second female APA president in 1921. Although Washburn’s thesis was the first foreign study Wundt published in his journal, her gender meant she was barred from joining the organization of experimental psychologists founded by Titchener, her own graduate adviser (Johnson, 1997). (What a different world from the recent past—1996 to 2009—when women claimed two-thirds or more of new psychology Ph.D.s and were 6 of the 13 elected presidents of the science-oriented Association for Psychological Science. In Canada and Europe, too, most recent psychology doctorates have been earned by women.) James’ influence reached even further through his dozens of well-received articles, which moved the publisher Henry Holt to offer a contract for a textbook of the new science of psychology.
James agreed and began work in 1878, with an apology for requesting two years to finish his writing. The text proved an unexpected chore and actually took him 12 years. (Why am I not surprised?) More than a century later, people still read the resulting Principles of Psychology and marvel at the brilliance and elegance with which James introduced psychology to the educated public.


William James and Mary Whiton Calkins James, legendary teacher-writer, mentored Calkins, who became a pioneering memory researcher and the first woman to be president of the American Psychological Association.

Margaret Floy Washburn The first woman to receive a psychology Ph.D., Washburn synthesized animal behavior research in The Animal Mind.

Psychological Science Develops

1-2 How did psychology continue to develop from the 1920s through today?

The young science of psychology developed from the more established fields of philosophy and biology. Wundt was both a philosopher and a physiologist. James was an American philosopher. Ivan Pavlov, who pioneered the study of learning, was a Russian physiologist. Sigmund Freud, who developed an influential theory of personality, was an Austrian physician. Jean Piaget, the last century’s most influential observer of children, was a Swiss biologist. This list of pioneering psychologists—“Magellans of the mind,” as Morton Hunt (1993) has called them—illustrates psychology’s origins in many disciplines and countries. The rest of the story of psychology—the subject of this book—develops at many levels. With activities ranging from the study of nerve cell activity to the study of international conflicts, psychology is not easily defined.


Sigmund Freud The controversial ideas of this famed personality theorist and therapist have influenced humanity’s self-understanding.

In psychology’s early days, Wundt and Titchener focused on inner sensations, images, and feelings. James, too, engaged in introspective examination of the stream of consciousness and of emotion. Freud emphasized the ways emotional responses to childhood experiences and our unconscious thought processes affect our behavior. Thus, until the 1920s, psychology was defined as “the science of mental life.” From the 1920s into the 1960s, American psychologists, initially led by flamboyant and provocative John B. Watson and later by the equally provocative B. F. Skinner, dismissed introspection and redefined psychology as “the scientific study of observable behavior.” After all, said these behaviorists, science is rooted in observation. You cannot observe a sensation, a feeling, or a thought, but you can observe and record people’s behavior as they respond to different situations. Humanistic psychology rebelled against Freudian psychology and behaviorism. Pioneers Carl Rogers and Abraham Maslow found behaviorism’s focus on learned behaviors too mechanistic. Rather than focusing on the meaning of early childhood memories, as a psychoanalyst might, the humanistic psychologists emphasized the importance of current environmental influences on our growth potential, and the importance of having our needs for love and acceptance satisfied. In the 1960s, another movement emerged as psychology began to recapture its initial interest in mental processes. This cognitive revolution supported ideas developed by earlier psychologists, such as the importance of how our mind processes and retains information. But cognitive psychology and more recently cognitive neuroscience (the study of brain activity linked with mental activity) have expanded upon those ideas to explore scientifically the ways we perceive, process, and remember information. This approach has been especially beneficial in helping to develop new ways to understand and treat disorders such as depression.

To encompass psychology’s concern with observable behavior and with inner thoughts and feelings, today we define psychology as the science of behavior and mental processes. Let’s unpack this definition. Behavior is anything an organism does—any action we can observe and record. Yelling, smiling, blinking, sweating, talking, and questionnaire marking are all observable behaviors. Mental processes are the internal, subjective experiences we infer from behavior—sensations, perceptions, dreams, thoughts, beliefs, and feelings. The key word in psychology’s definition is science. Psychology, as I will emphasize throughout this book, is less a set of findings than a way of asking and answering questions. My aim, then, is not merely to report results but also to show you how psychologists play their game. You will see how researchers evaluate conflicting opinions and ideas. And you will learn how all of us, whether scientists or simply curious people, can think smarter when describing and explaining the events of our lives.

behaviorism the view that psychology (1) should be an objective science that (2) studies behavior without reference to mental processes. Most research psychologists today agree with (1) but not with (2).

humanistic psychology historically significant perspective that emphasized the growth potential of healthy people.

cognitive neuroscience the interdisciplinary study of the brain activity linked with cognition (including perception, thinking, memory, and language).

John B. Watson and Rosalie Rayner Working with Rayner, Watson championed psychology as the science of behavior and demonstrated conditioned responses on a baby who became famous as “Little Albert.”

B. F. Skinner A leading behaviorist, Skinner rejected introspection and studied how consequences shape behavior.

䉴|| Contemporary Psychology

Like its pioneers, today’s psychologists are citizens of many lands. The International Union of Psychological Science has 69 member nations, from Albania to Zimbabwe. Nearly everywhere, membership in psychological societies is mushrooming—from 4183 American Psychological Association members and affiliates in 1945 to nearly 150,000 today, with similarly rapid growth in the British Psychological Society (from 1100 to 45,000). In China, the first university psychology department began in 1978; in 2008 there were 200 (Tversky, 2008). Worldwide, some 500,000 people have been trained as psychologists, and 130,000 of them belong to European psychological organizations (Tikkanen, 2001). Moreover, thanks to international publications, joint meetings, and the Internet, collaboration and communication cross borders now more than ever. “We are moving rapidly toward a single world of psychological science,” reported Robert Bjork (2000). Psychology is growing and it is globalizing. Across the world, psychologists are debating enduring issues, viewing behavior from the differing perspectives offered by the subfields in which they teach, work, and do research.

Psychology’s Biggest Question

1-3 What is psychology’s historic big issue?

psychology the science of behavior and mental processes.

nature-nurture issue the longstanding controversy over the relative contributions that genes and experience make to the development of psychological traits and behaviors. Today’s science sees traits and behaviors arising from the interaction of nature and nurture.

natural selection the principle that, among the range of inherited trait variations, those contributing to reproduction and survival will most likely be passed on to succeeding generations.

During its short history, psychology has wrestled with some issues that will reappear throughout this book. The biggest and most persistent is the nature-nurture issue— the controversy over the relative contributions of biology and experience. The origins of this debate are ancient. Do our human traits develop through experience, or are we born with them? The Greek philosopher Plato (428–348 B.C.) assumed that character and intelligence are largely inherited and that certain ideas are inborn. Aristotle (384–322 B.C.) countered that there is nothing in the mind that does not first come in from the external world through the senses. In the 1600s, European philosophers rekindled the debate. John Locke rejected the notion of inborn ideas, suggesting that the mind is a blank sheet on which experience writes. René Descartes disagreed, believing that some ideas are innate. Two centuries later, Descartes’ views gained support from a curious naturalist. In 1831, an indifferent student but ardent collector of beetles, mollusks, and shells set sail on what was to prove a historic round-the-world journey. The 22-year-old voyager was Charles Darwin, and for some time afterward, he pondered the incredible


Charles Darwin Darwin argued that natural selection shapes behaviors as well as bodies.

species variation he had encountered, including tortoises on one island that differed from those on other islands of the region. Darwin’s 1859 On the Origin of Species explained this diversity of life by proposing the evolutionary process of natural selection: From among chance variations, nature selects the traits that best enable an organism to survive and reproduce in a particular environment. Darwin’s principle of natural selection—“the single best idea anyone has ever had,” said philosopher Daniel Dennett (1996)—is still with us nearly 150 years later as an organizing principle of biology. Evolution also has become an important principle for twenty-first-century psychology. This would surely have pleased Darwin, for he believed his theory explained not only animal structures (such as a polar bear’s white coat) but also animal behaviors (such as the emotional expressions associated with human lust and rage). The nature-nurture debate weaves a thread from the ancient Greeks’ time to our own. Today’s psychologists explore the issue by asking, for example:

䉴 How are we humans alike (because of our common biology and evolutionary history) and diverse (because of our differing environments)?
䉴 Are gender differences biologically predisposed or socially constructed?
䉴 Is children’s grammar mostly innate or formed by experience?
䉴 How are differences in intelligence and personality influenced by heredity and by environment?
䉴 Are sexual behaviors more “pushed” by inner biology or “pulled” by external incentives?
䉴 Should we treat psychological disorders—depression, for example—as disorders of the brain, disorders of thought, or both?


A nature-made nature-nurture experiment Because identical twins have the same genes, they are ideal participants in studies designed to shed light on hereditary and environmental influences on intelligence, personality, and other traits. Studies of identical and fraternal twins provide a rich array of findings that underscore the importance of both nature and nurture.


levels of analysis the differing complementary views, from biological to psychological to social-cultural, for analyzing any given phenomenon.

biopsychosocial approach an integrated approach that incorporates biological, psychological, and social-cultural levels of analysis.

Such debates continue. Yet over and over again we will see that in contemporary science the nature-nurture tension dissolves: Nurture works on what nature endows. Our species is biologically endowed with an enormous capacity to learn and adapt. Moreover, every psychological event (every thought, every emotion) is simultaneously a biological event. Thus depression can be both a brain disorder and a thought disorder.

Psychology’s Three Main Levels of Analysis

1-4 What are psychology’s levels of analysis and related perspectives?

Each of us is a complex system that is part of a larger social system. But each of us is also composed of smaller systems, such as our nervous system and body organs, which are composed of still smaller systems—cells, molecules, and atoms. These tiered systems suggest different levels of analysis, which offer complementary outlooks. It’s like explaining why grizzly bears hibernate. Is it because hibernation helped their ancestors to survive and reproduce? Because their inner physiology drives them to do so? Because cold environments hinder food gathering during winter? Such perspectives are complementary because “everything is related to everything else” (Brewer, 1996). Together, different levels of analysis form an integrated biopsychosocial approach, which considers the influences of biological, psychological, and social-cultural factors (FIGURE 1.1). Each level provides a valuable vantage point for looking at behavior, yet each by itself is incomplete. Like different academic disciplines, psychology’s varied perspectives ask different questions and have their own limits. One perspective may stress the biological, psychological, or social-cultural level more than another, but the different perspectives described in TABLE 1.1 complement one another. Consider, for example, how they shed light on anger.

䉴 Someone working from a neuroscience perspective might study brain circuits that cause us to be “red in the face” and “hot under the collar.”
䉴 Someone working from the evolutionary perspective might analyze how anger facilitated the survival of our ancestors’ genes.
䉴 Someone working from the behavior genetics perspective might study how heredity and experience influence our individual differences in temperament.

FIGURE 1.1 Biopsychosocial approach This integrated viewpoint incorporates various levels of analysis and offers a more complete picture of any given behavior or mental process.

Biological influences: • natural selection of adaptive traits • genetic predispositions responding to environment • brain mechanisms • hormonal influences

Psychological influences: • learned fears and other learned expectations • emotional responses • cognitive processing and perceptual interpretations

Behavior or mental process

Social-cultural influences: • presence of others • cultural, societal, and family expectations • peer and other group influences • compelling models (such as in the media)


TABLE 1.1 Psychology’s Current Perspectives

Perspective | Focus | Sample Questions
Neuroscience | How the body and brain enable emotions, memories, and sensory experiences | How are messages transmitted within the body? How is blood chemistry linked with moods and motives?
Evolutionary | How the natural selection of traits promoted the survival of genes | How does evolution influence behavior tendencies?
Behavior genetics | How much our genes and our environment influence our individual differences | To what extent are psychological traits such as intelligence, personality, sexual orientation, and vulnerability to depression attributable to our genes? To our environment?
Psychodynamic | How behavior springs from unconscious drives and conflicts | How can someone’s personality traits and disorders be explained in terms of sexual and aggressive drives or as the disguised effects of unfulfilled wishes and childhood traumas?
Behavioral | How we learn observable responses | How do we learn to fear particular objects or situations? What is the most effective way to alter our behavior, say, to lose weight or stop smoking?
Cognitive | How we encode, process, store, and retrieve information | How do we use information in remembering? Reasoning? Solving problems?
Social-cultural | How behavior and thinking vary across situations and cultures | How are we humans alike as members of one human family? As products of different environmental contexts, how do we differ?

䉴 Someone working from the psychodynamic perspective might view an outburst as an outlet for unconscious hostility.
䉴 Someone working from the behavioral perspective might attempt to determine which external stimuli trigger angry responses or aggressive acts.
䉴 Someone working from the cognitive perspective might study how our interpretation of a situation affects our anger and how our anger affects our thinking.
䉴 Someone working from the social-cultural perspective might explore how expressions of anger vary across cultural contexts.

Views of anger How would each of psychology’s levels of analysis explain what’s going on here?

The point to remember: Like two-dimensional views of a three-dimensional object, each of psychology’s perspectives is helpful. But each by itself fails to reveal the whole picture. So bear in mind psychology’s limits. Don’t expect it to answer the ultimate questions, such as those posed by Russian novelist Leo Tolstoy (1904): “Why should I live? Why should I do anything? Is there in life any purpose which the inevitable death that awaits me does not undo and destroy?” Instead, expect that psychology will help you understand why people think, feel, and act as they do. Then you should find the study of psychology fascinating and useful.

“I’m a social scientist, Michael. That means I can’t explain electricity or anything like that, but if you ever want to know about people I’m your man.”

Psychology’s Subfields

1-5 What are psychology’s main subfields?

Picturing a chemist at work, you probably envision a white-coated scientist surrounded by glassware and high-tech equipment. Picture a psychologist at work and you would be right to envision

䉴 a white-coated scientist probing a rat’s brain.
䉴 an intelligence researcher measuring how quickly an infant shows boredom by looking away from a familiar picture.


basic research pure science that aims to increase the scientific knowledge base.

applied research scientific study that aims to solve practical problems.

counseling psychology a branch of psychology that assists people with problems in living (often related to school, work, or marriage) and in achieving greater well-being.

clinical psychology a branch of psychology that studies, assesses, and treats people with psychological disorders.

psychiatry a branch of medicine dealing with psychological disorders; practiced by physicians who sometimes provide medical (for example, drug) treatments as well as psychological therapy.

䉴 an executive evaluating a new “healthy life-styles” training program for employees.
䉴 someone at a computer keyboard analyzing data on whether adopted teens’ temperaments more closely resemble those of their adoptive parents or their biological parents.
䉴 a therapist listening carefully to a client’s depressed thoughts.
䉴 a traveler visiting another culture and collecting data on variations in human values and behaviors.
䉴 a teacher or writer sharing the joy of psychology with others.

The cluster of subfields we call psychology has less unity than most other sciences. But there is a payoff: Psychology is a meeting ground for different disciplines. “Psychology is a hub scientific discipline,” said Association for Psychological Science president John Cacioppo (2007). Thus, it’s a perfect home for those with wide-ranging interests. In their diverse activities, from biological experimentation to cultural comparisons, the tribe of psychology is united by a common quest: describing and explaining behavior and the mind underlying it.

Some psychologists conduct basic research that builds psychology’s knowledge base. In the pages that follow we will meet a wide variety of such researchers, including

䉴 biological psychologists exploring the links between brain and mind.
䉴 developmental psychologists studying our changing abilities from womb to tomb.
䉴 cognitive psychologists experimenting with how we perceive, think, and solve problems.
䉴 personality psychologists investigating our persistent traits.
䉴 social psychologists exploring how we view and affect one another.

I see you! A biological psychologist might view this child’s delighted response as evidence of brain maturation. A cognitive psychologist might see it as a demonstration of the baby’s growing knowledge of his surroundings. For a cross-cultural psychologist, the role of grandparents in different societies might be the issue of interest. As you will see throughout this book, these and other perspectives offer complementary views of behavior.

These psychologists also may conduct applied research that tackles practical problems. So do other psychologists, including industrial-organizational psychologists, who use psychology’s concepts and methods in the workplace to help organizations and companies select and train employees, boost morale and productivity, design products, and implement systems.

Although most psychology textbooks focus on psychological science, psychology is also a helping profession devoted to such practical issues as how to have a happy marriage, how to overcome anxiety or depression, and how to raise thriving children. As a science, psychology at its best bases such interventions on evidence of effectiveness. Counseling psychologists help people to cope with challenges and crises (including academic, vocational, and marital issues) and to improve their personal and social functioning. Clinical psychologists assess and treat mental, emotional, and behavior disorders (APA, 2003). Both counseling and clinical psychologists administer and interpret tests, provide counseling and therapy, and sometimes conduct basic and applied research. By contrast, psychiatrists, who also often provide psychotherapy, are medical doctors licensed to prescribe drugs and otherwise treat physical causes of psychological disorders. (Some clinical psychologists are lobbying for a similar right to prescribe mental-health-related drugs, and in 2002 and 2004 New Mexico and Louisiana became the first states to grant that right to specially trained and licensed psychologists.)

With perspectives ranging from the biological to the social, and with settings from the laboratory to the clinic, psychology relates to many fields, ranging from mathematics to biology to sociology to philosophy. And more and more, psychology’s methods and findings aid other disciplines. Psychologists teach in medical schools, law schools, and theological seminaries, and they work in hospitals, factories, and corporate offices. They engage in interdisciplinary studies, such as psychohistory (the psychological analysis of historical characters), psycholinguistics (the study of language and thinking), and psychoceramics (the study of crackpots).1

Psychology also influences modern culture. Knowledge transforms us. Learning about the solar system and the germ theory of disease alters the way people think and act. Learning psychology’s findings also changes people: They less often judge psychological disorders as moral failings, treatable by punishment and ostracism. They less often regard and treat women as men’s mental inferiors. They less often view and rear children as ignorant, willful beasts in need of taming. “In each case,” notes Morton Hunt (1990, p. 206), “knowledge has modified attitudes, and, through them, behavior.” Once aware of psychology’s well-researched ideas—about how body and mind connect, how a child’s mind grows, how we construct our perceptions, how we remember (and misremember) our experiences, how people across the world differ (and are alike)—your mind may never again be quite the same.

1 Confession: I wrote the last part of this sentence on April Fools’ Day.

Psychology: A science and a profession Psychologists experiment with, observe, test, and treat behavior. Here we see psychologists testing a child, measuring emotion-related physiology, and doing face-to-face therapy.

|| Want to learn more? See Appendix A, Careers in Psychology, at the end of this book for more information about psychology’s subfields and to learn about the many interesting options available to those with bachelor’s, master’s, and doctoral degrees in psychology. ||

“Once expanded to the dimensions of a larger idea, [the mind] never returns to its original size.” Oliver Wendell Holmes, 1809–1894


CLOSE-UP

Tips for Studying Psychology

1-6 How can psychological principles help you as a student?

The investment you are making in studying psychology should enrich your life and enlarge your vision. Although many of life’s significant questions are beyond psychology, some very important ones are illuminated by even a first psychology course. Through painstaking research, psychologists have gained insights into brain and mind, dreams and memories, depression and joy. Even the unanswered questions can enrich us, by renewing our sense of mystery about “things too wonderful” for us yet to understand. Your study of psychology can also help teach you how to ask and answer important questions—how to think critically as you evaluate competing ideas and claims.

Having your life enriched and your vision enlarged (and getting a decent grade) requires effective study. To master information you must actively process it. Your mind is not like your stomach, something to be filled passively; it is more like a muscle that grows stronger with exercise. Countless experiments reveal that people learn and remember best when they put material in their own words, rehearse it, and then review and rehearse it again.

The SQ3R study method incorporates these principles (Robinson, 1970). SQ3R is an acronym for its five steps: Survey, Question, Read, Rehearse, Review.

To study a module, first survey, taking a bird’s-eye view. Scan the headings, and notice how the module is organized.

As you prepare to read each section, use its heading or learning objective to form your own question to answer. For this section, you might have asked, “How can I most effectively and efficiently master the information in this book?”

Then read, actively searching for the answer. Usually a single module—sometimes just a main section within a module—will be as much as you can absorb without tiring. Read actively and critically. Ask questions. Make notes. Consider implications: How does what you’ve read relate to your own life? Does it support or challenge your assumptions? How convincing is the evidence?

Having read a section, rehearse in your own words what you read. Test yourself by trying to answer your question, rehearsing what you can recall, then glancing back over what you can’t recall.

Finally, review: Read over any notes you have taken, again with an eye on the module’s organization, and quickly review the whole module.

Survey, question, read, rehearse, review. I have organized this book’s modules to facilitate your use of the SQ3R study system. Each module begins with an outline that aids your survey. Headings and learning objective questions suggest issues and concepts you should consider as you read. The material is organized into sections of readable length. The Test Yourself and Ask Yourself questions at the end of each module help you rehearse what you know. The module Review also provides answers to the learning objective questions, and the list of key terms helps you check your mastery of important concepts. Survey, question, read . . .

Five additional study tips may further boost your learning:

Distribute your study time. One of psychology’s oldest findings is that spaced practice promotes better retention than massed practice. You’ll remember material better if you space your time over several study periods—perhaps one hour a day, six days a week—rather than cram it into one long study blitz. For example, rather than trying to read an entire module in a single sitting, read just one main section and then turn to something else. Spacing your study sessions requires a disciplined approach to managing your time. (Richard O. Straub explains time management in the helpful Study Guide that accompanies this text.)

Learn to think critically. Whether you are reading or in class, note people’s assumptions and values. What perspective or bias underlies an argument? Evaluate evidence. Is it anecdotal? Correlational? Experimental? Assess conclusions. Are there alternative explanations?

In class, listen actively. Listen for the main ideas and sub-ideas of a lecture. Write them down. Ask questions during and after class. In class, as in your private study, process the information actively and you will understand and retain it better. As psychologist William James urged a century ago, “No reception without reaction, no impression without . . . expression.”

Overlearn. Psychology tells us that overlearning improves retention. We are prone to overestimating how much we know. You may understand a module as you read it, but by devoting extra study time to testing yourself and reviewing what you think you know, you will retain your new knowledge long into the future.

Be a smart test-taker. If a test contains both multiple-choice questions and an essay question, turn first to the essay. Read the question carefully, noting exactly what the instructor is asking. On the back of a page, pencil in a list of points you’d like to make and then organize them. Before writing, put aside the essay and work through the multiple-choice questions. (As you do so, your mind may continue to mull over the essay question. Sometimes the learning objective questions will bring pertinent thoughts to mind.) Then reread the essay question, rethink your answer, and start writing. When you finish, proofread your answer to eliminate spelling and grammatical errors that make you look less competent than you are. When reading multiple-choice questions, don’t confuse yourself by trying to imagine how each choice might be the right one. Instead, try to answer each question as if it were a fill-in-the-blank question. First cover the answers and form a sentence in your mind, recalling what you know to complete the sentence. Then read the answers on the test and find the alternative that best matches your own answer.

While exploring psychology, you will learn much more than effective study techniques. Psychology deepens our appreciation for how we humans perceive, think, feel, and act. By so doing it can indeed enrich our lives and enlarge our vision. Through this book I hope to help guide you toward that end. As educator Charles Eliot said a century ago: “Books are the quietest and most constant of friends, and the most patient of teachers.”

SQ3R a study method incorporating five steps: Survey, Question, Read, Rehearse, Review.


Review The Story of Psychology

1-1 When and how did psychological science begin?
Psychological science had its modern beginning with the first psychological laboratory, founded in 1879 by German philosopher and physiologist Wilhelm Wundt, and with the later work of other scholars from several disciplines and many countries.

1-2 How did psychology continue to develop from the 1920s through today?
Having begun as a “science of mental life,” psychology evolved in the 1920s into the “scientific study of observable behavior.” After rediscovering the mind, psychology since the 1960s has been widely defined as the science of behavior and mental processes.

1-3 What is psychology’s historic big issue?
Psychology’s biggest and most enduring issue concerns the relative contributions and interplay between the influences of nature (genes) and nurture (all other influences, from conception to death). Today’s science emphasizes the interaction of genes and experiences in specific environments.

1-4 What are psychology’s levels of analysis and related perspectives?
The biopsychosocial approach integrates information from the biological, psychological, and social-cultural levels of analysis. Psychologists study human behaviors and mental processes from many different perspectives (including the neuroscientific, evolutionary, behavior genetics, psychodynamic, behavioral, cognitive, and social-cultural perspectives).

1-5 What are psychology’s main subfields?
Psychology’s subfields encompass basic research (often done by biological, developmental, cognitive, personality, and social psychologists), applied research (sometimes conducted by industrial-organizational psychologists), and clinical science and applications (the work of counseling psychologists and clinical psychologists). Clinical psychologists study, assess, and treat (with psychotherapy) people with psychological disorders. Psychiatrists also study, assess, and treat people with disorders, but as medical doctors, they may prescribe drugs in addition to psychotherapy.

1-6 How can psychological principles help you as a student?
Research has shown that learning and memory are enhanced by active study. The SQ3R study method—survey, question, read, rehearse, and review—applies the principles derived from this research.

Terms and Concepts to Remember
structuralism, p. 2; functionalism, p. 3; behaviorism, p. 5; humanistic psychology, p. 5; cognitive neuroscience, p. 5; psychology, p. 6; nature-nurture issue, p. 6; natural selection, p. 7; levels of analysis, p. 8; biopsychosocial approach, p. 8; basic research, p. 10; applied research, p. 10; counseling psychology, p. 10; clinical psychology, p. 10; psychiatry, p. 10; SQ3R, p. 12

Test Yourself*
1. What event defined the founding of scientific psychology?
2. What are psychology’s major levels of analysis?

Ask Yourself**
1. How do you think psychology might change as more people from non-Western countries contribute their ideas to the field?
2. When you signed up for this course, what did you think psychology would be all about?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

*The Test Yourself questions offer you a handy self-test on the material you have just read. Answers to these questions can be found in Appendix B at the end of the book. **The Ask Yourself questions will help you reflect on key issues and connect them to your own life. Making these issues personally meaningful will make them memorable.

The Need for Psychological Science
Frequently Asked Questions About Psychology

module 2 Thinking Critically With Psychological Science

Hoping to satisfy their curiosity about people and to remedy their own woes, millions turn to “psychology.” They listen to talk-radio counseling, read articles on psychic powers, attend stop-smoking hypnosis seminars, and absorb self-help Web sites and books on the meaning of dreams, the path to ecstatic love, and the roots of personal happiness.

Others, intrigued by claims of psychological truth, wonder: Do mothers and infants bond in the first hours after birth? Should we trust childhood sexual abuse memories that get “recovered” in adulthood—and prosecute the alleged predators? Are first-born children more driven to achieve? Does psychotherapy heal?

In working with such questions, how can we separate uninformed opinions from examined conclusions? How can we best use psychology to understand why people think, feel, and act as they do?

The Need for Psychological Science

2-1 Why are the answers that flow from the scientific approach more reliable than those based on intuition and common sense?


The limits of intuition Personnel interviewers tend to be overconfident of their gut feelings about job applicants. Their confidence stems partly from their recalling cases where their favorable impression proved right, and partly from their ignorance about rejected applicants who succeeded elsewhere.


Some people suppose that psychology merely documents and dresses in jargon what people already know: “So what else is new—you get paid for using fancy methods to prove what my grandmother knew?”

Others place their faith in human intuition: “Buried deep within each and every one of us, there is an instinctive, heart-felt awareness that provides—if we allow it to—the most reliable guide,” offered Prince Charles (2000). “I know there’s no evidence that shows the death penalty has a deterrent effect,” George W. Bush (1999) reportedly said as Texas governor, “but I just feel in my gut it must be true.” “I’m a gut player. I rely on my instincts,” said the former president in explaining to Bob Woodward (2002) his decision to launch the Iraq war.

Prince Charles and former President Bush have much company. A long list of pop psychology books encourage us toward “intuitive managing,” “intuitive trading,” “intuitive healing,” and much more. Today’s psychological science does document a vast intuitive mind. As we will see, our thinking, memory, and attitudes operate on two levels, conscious and unconscious, with the larger part operating automatically, off-screen. Like jumbo jets, we fly mostly on autopilot.

So, are we smart to listen to the whispers of our inner wisdom, to simply trust “the force within”? Or should we more often be subjecting our intuitive hunches to skeptical scrutiny?

This much seems certain. Intuition is important, but we often underestimate its perils. My geographical intuition tells me that Reno is east of Los Angeles, that Rome is south of New York, that Atlanta is east of Detroit. But I am wrong, wrong, and wrong. Experiments have found people greatly overestimating their lie detection accuracy, their eyewitness recollections, their interviewee assessments, their risk predictions, and their stock-picking talents. “The first principle,” said Richard Feynman (1997), “is that you must not fool yourself—and you are the easiest person to fool.”


Indeed, observed Madeleine L’Engle, “The naked intellect is an extraordinarily inaccurate instrument” (1972). Two phenomena—hindsight bias and judgmental overconfidence—illustrate why we cannot rely solely on intuition and common sense.

“He who trusts in his own heart is a fool.” Proverbs 28:26

Did We Know It All Along? Hindsight Bias

“Life is lived forwards, but understood backwards.” Philosopher Søren Kierkegaard, 1813–1855

hindsight bias the tendency to believe, after learning an outcome, that one would have foreseen it. (Also known as the I-knew-it-all-along phenomenon.)

“Anything seems commonplace, once explained.” Dr. Watson to Sherlock Holmes

Hindsight bias After the 2007 Virginia Tech massacre of 32 people, it seemed obvious that school officials should have locked down the school (despite its having the population of a small city) after the first two people were murdered. With 20/20 hindsight, everything seems obvious.


How easy it is to seem astute when drawing the bull’s eye after the arrow has struck. After the first New York World Trade Center tower was hit on September 11, 2001, commentators said people in the second tower should have immediately evacuated. (It became obvious only later that the strike was not an accident.) After the U.S. occupation of Iraq led to a bloody civil war rather than a peaceful democracy, commentators saw the result as inevitable. Before the invasion was launched, these results seemed anything but obvious: In voting to allow the Iraq invasion, most U.S. senators did not anticipate the chaos that would seem so predictable in hindsight.

Finding that something has happened makes it seem inevitable, a tendency we call hindsight bias (also known as the I-knew-it-all-along phenomenon). This phenomenon is easy to demonstrate: Give half the members of a group some purported psychological finding, and give the other half an opposite result. Tell the first group, “Psychologists have found that separation weakens romantic attraction. As the saying goes, ‘Out of sight, out of mind.’” Ask them to imagine why this might be true. Most people can, and nearly all will then regard this true finding as unsurprising. Tell the second group the opposite, “Psychologists have found that separation strengthens romantic attraction. As the saying goes, ‘Absence makes the heart grow fonder.’” People given this untrue result can also easily imagine it, and they overwhelmingly see it as unsurprising common sense. Obviously, when both a supposed finding and its opposite seem like common sense, there is a problem.

Such errors in our recollections and explanations show why we need psychological research. Just asking people how and why they felt or acted as they did can sometimes be misleading—not because common sense is usually wrong, but because common sense more easily describes what has happened than what will happen. As physicist Niels Bohr reportedly said, “Prediction is very difficult, especially about the future.”

Hindsight bias is widespread. Some 100 studies have observed it in various countries and among both children and adults (Blank et al., 2007). Nevertheless, Grandma’s intuition is often right. As Yogi Berra once said, “You can observe a lot by watching.” (We have Berra to thank for other gems, such as “Nobody ever comes here—it’s too crowded,” and “If the people don’t want to come out to the ballpark, nobody’s gonna stop ’em.”) Because we’re all behavior watchers, it would be surprising if many of psychology’s findings had not been foreseen. Many people believe that love breeds happiness, and they are right, according to researchers who have found we have a “deep need to belong” to others. Indeed, note Daniel Gilbert, Brett Pelham, and Douglas Krull (2003), “good ideas in psychology usually have an oddly familiar quality, and the moment we encounter them we feel certain that we once came close to thinking the same thing ourselves and simply failed to write it down.” Good ideas are like good inventions; once created, they seem obvious. (Why did it take so long for someone to invent suitcases on wheels and Post-it Notes?)

But sometimes Grandma’s intuition, informed by countless casual observations, has it wrong. Research has overturned popular ideas—that familiarity breeds contempt, that dreams predict the future, and that emotional reactions coincide with menstrual phase. (See also TABLE 2.1.) We will also see how it has surprised us with discoveries about how the brain’s chemical messengers control our moods and memories, about other animals’ abilities, and about the effects of stress on our capacity to fight disease.

TABLE 2.1 True or False?

Psychological research has either confirmed or refuted each of these statements (adapted, in part, from Furnham et al., 2003). Can you predict which of these popular ideas have been confirmed and which refuted? (Check your answers at the bottom of this table.)

1. If you want to teach a habit that persists, reward the desired behavior every time, not just intermittently.
2. Patients whose brains are surgically split down the middle survive and function much as they did before the surgery.
3. Traumatic experiences, such as sexual abuse or surviving the Holocaust, are typically “repressed” from memory.
4. Most abused children do not become abusive adults.
5. Most infants recognize their own reflection in a mirror by the end of their first year.
6. Adopted siblings usually do not develop similar personalities, even though they are reared by the same parents.
7. Fears of harmless objects, such as flowers, are just as easy to acquire as fears of potentially dangerous objects, such as snakes.
8. Lie detection tests often lie.
9. Most of us use only about 10 percent of our brains.
10. The brain remains active during sleep.

Answers: 1. F, 2. T, 3. F, 4. T, 5. F, 6. T, 7. F, 8. T, 9. F, 10. T.

Overconfidence

|| Fun anagram solutions from Wordsmith.org: Elvis = lives Dormitory = dirty room Slot machines = cash lost in ’em ||

“We don’t like their sound. Groups of guitars are on their way out.” Decca Records, in turning down a recording contract with the Beatles in 1962

“Computers in the future may weigh no more than 1.5 tons.” Popular Mechanics, 1949

We humans tend to be overconfident. We tend to think we know more than we do. Asked how sure we are of our answers to factual questions (Is Boston north or south of Paris?), we tend to be more confident than correct.1

Or consider these three anagrams, which Richard Goranson (1978) asked people to unscramble:

WREAT → WATER
ETRYN → ENTRY
GRABE → BARGE

About how many seconds do you think it would have taken you to unscramble each of these? Once people know the answer, hindsight makes it seem obvious—so much so that they become overconfident. They think they would have seen the solution in only 10 seconds or so, when in reality the average problem solver spends 3 minutes, as you also might, given a similar anagram without the solution: OCHSA.2

Are we any better at predicting our social behavior? To find out, Robert Vallone and his associates (1990) had students predict at the beginning of the school year whether they would drop a course, vote in an upcoming election, call their parents more than twice a month, and so forth. On average, the students felt 84 percent confident in making these self-predictions. Later quizzes about their actual behavior showed their predictions were only 71 percent correct. Even when students were 100 percent sure of themselves, their self-predictions erred 15 percent of the time.

1 Boston is south of Paris.

2 Solution to anagram: CHAOS.
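The arithmetic behind findings like Vallone’s is simple: overconfidence is the gap between people’s average stated confidence and their actual accuracy. A minimal sketch of that calculation, using hypothetical numbers invented to echo the reported pattern (roughly 84 percent confidence versus 71 percent accuracy), not the study’s actual data:

```python
# Each entry pairs a student's stated confidence in a self-prediction
# with whether the prediction later proved correct. (Hypothetical data.)
predictions = [
    (1.00, True), (0.90, True), (0.85, False), (0.80, True),
    (0.80, False), (0.75, True), (0.75, False), (0.85, True),
]

# Mean stated confidence across all predictions.
mean_confidence = sum(conf for conf, _ in predictions) / len(predictions)

# Fraction of predictions that actually came true.
accuracy = sum(1 for _, correct in predictions if correct) / len(predictions)

# A positive gap means confidence exceeded accuracy: overconfidence.
calibration_gap = mean_confidence - accuracy

print(f"mean confidence: {mean_confidence:.2f}")
print(f"accuracy:        {accuracy:.2f}")
print(f"gap:             {calibration_gap:+.2f}")
```

With these made-up numbers the gap comes out positive, mirroring the 84-versus-71 pattern described above; a well-calibrated judge would show a gap near zero.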


It’s not just collegians. Ohio State University psychologist Philip Tetlock (1998, 2005) has collected more than 27,000 expert predictions of world events, such as the future of South Africa or whether Quebec would separate from Canada. His repeated finding: These predictions, which experts made with 80 percent confidence on average, were right less than 40 percent of the time. Nevertheless, even those who erred maintained their confidence by noting they were “almost right.” “The Québécois separatists almost won the secessionist referendum.” The point to remember: Hindsight bias and overconfidence often lead us to overestimate our intuition. But scientific inquiry can help us sift reality from illusion.

“They couldn’t hit an elephant at this distance.” General John Sedgwick, just before being killed during a U.S. Civil War battle, 1864

“The telephone may be appropriate for our American cousins, but not here, because we have an adequate supply of messenger boys.” British expert group evaluating the invention of the telephone

The Scientific Attitude

2-2 What are three main components of the scientific attitude?

Do you see an aura around my head? Yes, indeed. Can you still see the aura if I put this magazine in front of my face? Of course. Then if I were to step behind a wall barely taller than I am, you could determine my location from the aura visible above my head, right?

Randi has told me that no aura-seer has agreed to take this simple test. When subjected to such scrutiny, crazy-sounding ideas sometimes find support. During the 1700s, scientists scoffed at the notion that meteorites had extraterrestrial origins. When two Yale scientists dared to deviate from the conventional opinion, Thomas Jefferson jeered, “Gentlemen, I would rather believe that those two Yankee Professors would lie than to believe that stones fell from heaven.” Sometimes scientific inquiry turns jeers into cheers. More often, science becomes society’s garbage disposal by sending crazy-sounding ideas to the waste heap, atop previous claims of perpetual motion machines, miracle cancer cures, and out-of-body travels into centuries past. Today’s “truths” sometimes become tomorrow’s fallacies. To sift reality from fantasy, sense from nonsense, therefore requires a scientific attitude: being skeptical but not cynical, open but not gullible. “To believe with certainty,” says a Polish proverb, “we must begin by doubting.” As scientists, psychologists approach the world of behavior with a curious skepticism, persistently asking two questions: What do you mean? How do you know? When ideas compete, skeptical testing can reveal which ones best match the facts. Do parental behaviors determine children’s sexual orientation? Can astrologers predict your future based on the position of the planets at your birth? As we will see, putting such claims to the test has led psychological scientists to doubt them.

“The scientist . . . must be free to ask any question, to doubt any assertion, to seek for any evidence, to correct any errors.” Physicist J. Robert Oppenheimer, Life, October 10, 1949

“A skeptic is one who is willing to question any truth claim, asking for clarity in definition, consistency in logic, and adequacy of evidence.” Philosopher Paul Kurtz, The Skeptical Inquirer, 1994

The Amazing Randi The magician James Randi exemplifies skepticism. He has tested and debunked a variety of psychic phenomena. (Courtesy of the James Randi Education Foundation)

Underlying all science is, first, a hard-headed curiosity, a passion to explore and understand without misleading or being misled. Some questions (Is there life after death?) are beyond science. To answer them in any way requires a leap of faith. With many other ideas (Can some people demonstrate ESP?), the proof is in the pudding. No matter how sensible or crazy an idea sounds, the critical thinker’s question is Does it work? When put to the test, can its predictions be confirmed?

This scientific approach has a long history. As ancient a figure as Moses used such an approach. How do you evaluate a self-proclaimed prophet? His answer: Put the prophet to the test. If the predicted event “does not take place or prove true,” then so much the worse for the prophet (Deuteronomy 18:22). By letting the facts speak for themselves, Moses was using what we now call an empirical approach. Magician James Randi uses this approach when testing those claiming to see auras around people’s bodies, as in the exchange quoted earlier.

MODULE 2 Thinking Critically With Psychological Science

Non Sequitur cartoon (reprinted by permission of Universal Press Syndicate; © 1997 Wiley)

“My deeply held belief is that if a god anything like the traditional sort exists, our curiosity and intelligence are provided by such a god. We would be unappreciative of those gifts . . . if we suppressed our passion to explore the universe and ourselves.” Carl Sagan, Broca’s Brain, 1979

Putting a scientific attitude into practice requires not only skepticism but also humility—an awareness of our own vulnerability to error and an openness to surprises and new perspectives. In the last analysis, what matters is not my opinion or yours, but the truths nature reveals in response to our questioning. If people or other animals don’t behave as our ideas predict, then so much the worse for our ideas. This humble attitude was expressed in one of psychology’s early mottos: “The rat is always right.”

Historians of science tell us that these three attitudes—curiosity, skepticism, and humility—helped make modern science possible. Many of its founders, including Copernicus and Newton, were people whose religious convictions made them humble before nature and skeptical of mere human authority (Hooykaas, 1972; Merton, 1938).

Some deeply religious people today may view science, including psychological science, as a threat. Yet, notes sociologist Rodney Stark (2003a,b), the scientific revolution was led mostly by deeply religious people acting on the idea that “in order to love and honor God, it is necessary to fully appreciate the wonders of his handiwork.”

Of course, scientists, like anyone else, can have big egos and may cling to their preconceptions. We all view nature through the spectacles of our preconceived ideas. Nevertheless, the ideal that unifies psychologists with all scientists is the curious, skeptical, humble scrutiny of competing ideas. As a community, scientists check and recheck one another’s findings and conclusions.

Critical Thinking

“The real purpose of the scientific method is to make sure Nature hasn’t misled you into thinking you know something you don’t actually know.” Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance, 1974

The scientific attitude prepares us to think smarter. Smart thinking, called critical thinking, examines assumptions, discerns hidden values, evaluates evidence, and assesses conclusions. Whether reading a news report or listening to a conversation, critical thinkers ask questions. Like scientists, they wonder, How do they know that? What is this person’s agenda? Is the conclusion based on anecdote and gut feelings, or on evidence? Does the evidence justify a cause-effect conclusion? What alternative explanations are possible?

Has psychology’s critical inquiry been open to surprising findings? The answer is plainly yes. Believe it or not . . .

• massive losses of brain tissue early in life may have minimal long-term effects.
• within days, newborns can recognize their mother’s odor and voice.
• brain damage can leave a person able to learn new skills yet unaware of such learning.
• diverse groups—men and women, old and young, rich and middle class, those with disabilities and without—report roughly comparable levels of personal happiness.
• electroconvulsive therapy (delivering an electric shock to the brain) is often a very effective treatment for severe depression.

critical thinking thinking that does not blindly accept arguments and conclusions. Rather, it examines assumptions, discerns hidden values, evaluates evidence, and assesses conclusions.


And has critical inquiry convincingly debunked popular presumptions? The answer is again yes. The evidence indicates that . . .

• sleepwalkers are not acting out their dreams.
• our past experiences are not all recorded verbatim in our brains; with brain stimulation or hypnosis, one cannot simply “hit the replay button” and relive long-buried or repressed memories.
• most people do not suffer from unrealistically low self-esteem, and high self-esteem is not all good.
• opposites do not generally attract.

In each of these instances and more, what has been learned is not what is widely believed.

Frequently Asked Questions About Psychology

You are now prepared to think critically about psychological matters. Yet, even knowing this much, you may still be approaching psychology with a mixture of curiosity and apprehension. So before we plunge in, let’s entertain some frequently asked questions.

2-3 Can laboratory experiments illuminate everyday life?

When you see or hear about psychological research, do you ever wonder whether people’s behavior in the lab will predict their behavior in real life? For example, does detecting the blink of a faint red light in a dark room have anything useful to say about flying a plane at night? After viewing a violent, sexually explicit film, does an aroused man’s increased willingness to push buttons that he thinks will electrically shock a woman really say anything about whether violent pornography makes a man more likely to abuse a woman?

Before you answer, consider: The experimenter intends the laboratory environment to be a simplified reality—one that simulates and controls important features of everyday life. Just as a wind tunnel lets airplane designers re-create airflow forces under controlled conditions, a laboratory experiment lets psychologists re-create psychological forces under controlled conditions. An experiment’s purpose is not to re-create the exact behaviors of everyday life but to test theoretical principles (Mook, 1983). In aggression studies, deciding whether to push a button that delivers a shock may not be the same as slapping someone in the face, but the principle is the same.

It is the resulting principles—not the specific findings—that help explain everyday behaviors. When psychologists apply laboratory research on aggression to actual violence, they are applying theoretical principles of aggressive behavior, principles they have refined through many experiments. Similarly, it is the principles of the visual system, developed from experiments in artificial settings (such as looking at red lights in the dark), that we apply to more complex behaviors such as night flying. And many investigations show that principles derived in the laboratory do typically generalize to the everyday world (Anderson et al., 1999).
The point to remember: Psychologists’ concerns lie less with particular behaviors than with the general principles that help explain many behaviors.

2-4 Does behavior depend on one’s culture and gender?

What can psychological studies done in one time and place, often with White Europeans or North Americans, really tell us about people in general? As we will see time and again, culture—shared ideas and behaviors that one generation passes on to the next—matters. Our culture shapes our behavior. It influences our standards of promptness and frankness, our attitudes toward premarital sex and varying body shapes, our tendency to be casual or formal, our willingness to make eye contact, our conversational distance, and much, much more. Being aware of such differences, we can restrain our assumptions that others will think and act as we do. Given the growing mixing and clashing of cultures, our need for such awareness is urgent.

It is also true, however, that our shared biological heritage unites us as a universal human family. The same underlying processes guide people everywhere:

culture the enduring behaviors, ideas, attitudes, and traditions shared by a group of people and transmitted from one generation to the next.

• People diagnosed with dyslexia, a reading disorder, exhibit the same brain malfunction whether they are Italian, French, or British (Paulesu et al., 2001).
• Variation in languages may impede communication across cultures. Yet all languages share deep principles of grammar, and people from opposite hemispheres can communicate with a smile or a frown.
• People in different cultures vary in feelings of loneliness. But across cultures, loneliness is magnified by shyness, low self-esteem, and being unmarried (Jones et al., 1985; Rokach et al., 2002).

We are each in certain respects like all others, like some others, and like no other. Studying people of all races and cultures helps us discern our similarities and our differences, our human kinship and our diversity.

You will see throughout this book that gender matters, too. Researchers report gender differences in what we dream, in how we express and detect emotions, and in our risk for alcohol dependence, depression, and eating disorders. Gender differences fascinate us, and studying them is potentially beneficial. For example, many researchers believe that women carry on conversations more readily to build relationships, while men talk more to give information and advice (Tannen, 1990). Knowing this difference can help us prevent conflicts and misunderstandings in everyday relationships.

But again, psychologically as well as biologically, women and men are overwhelmingly similar. Whether female or male, we learn to walk at about the same age. We experience the same sensations of light and sound. We feel the same pangs of hunger, desire, and fear. We exhibit similar overall intelligence and well-being.

The point to remember: Even when specific attitudes and behaviors vary by gender or across cultures, as they often do, the underlying processes are much the same.

A cultured greeting Because culture shapes people’s understanding of social behavior, actions that seem ordinary to us may seem quite odd to visitors from far away. Yet underlying these differences are powerful similarities. Supporters of newly elected leaders everywhere typically greet them with pleased deference, though not necessarily with bows and folded hands, as in India. Here influential and popular politician Sonia Gandhi greets some of her constituents shortly after her election. (Ami Vitale/Getty Images)

“All people are the same; only their habits differ.” Confucius, 551–479 B.C.

2-5 Why do psychologists study animals, and is it ethical to experiment on animals?

“Rats are very similar to humans except that they are not stupid enough to purchase lottery tickets.” Dave Barry, July 2, 2002

Many psychologists study animals because they find them fascinating. They want to understand how different species learn, think, and behave. Psychologists also study animals to learn about people, by doing experiments permissible only with animals. Human physiology resembles that of many other animals. We humans are not like animals; we are animals.

Animal experiments have therefore led to treatments for human diseases—insulin for diabetes, vaccines to prevent polio and rabies, transplants to replace defective organs. Likewise, the same processes by which humans see, exhibit emotion, and become obese are present in rats and monkeys. To discover more about the basics of human learning, researchers even study sea slugs.

To understand how a combustion engine works, you would do better to study a lawn mower’s engine than a Mercedes’. Like Mercedes engines, human nervous systems are complex. But the simplicity of the sea slug’s nervous system is precisely what makes it so revealing of the neural mechanisms of learning.


If we share important similarities with other animals, then should we not respect them? “We cannot defend our scientific work with animals on the basis of the similarities between them and ourselves and then defend it morally on the basis of differences,” noted Roger Ulrich (1991).

The animal protection movement protests the use of animals in psychological, biological, and medical research. Researchers remind us that the animals used worldwide each year in research are but a fraction of 1 percent of the billions of animals killed annually for food. And yearly, for every dog or cat used in an experiment and cared for under humane regulations, 50 others are killed in humane animal shelters (Goodwin & Morrison, 1999).

Some animal protection organizations want to replace experiments on animals with naturalistic observation. Many animal researchers respond that this is not a question of good versus evil but of compassion for animals versus compassion for people. How many of us would have attacked Louis Pasteur’s experiments with rabies, which caused some dogs to suffer but led to a vaccine that spared millions of people (and dogs) from agonizing death? And would we really wish to have deprived ourselves of the animal research that led to effective methods of training children with mental disorders; of understanding aging; and of relieving fears and depression?

The answers to such questions vary by culture. In Gallup surveys in Canada and the United States, about 60 percent of adults deem medical testing on animals “morally acceptable.” In Britain, only 37 percent do (Mason, 2003).

Out of this heated debate, two issues emerge. The basic one is whether it is right to place the well-being of humans above that of animals. In experiments on stress and cancer, is it right that mice get tumors in the hope that people might not? Should some monkeys be exposed to an HIV-like virus in the search for an AIDS vaccine? Is our use and consumption of other animals as natural as the behavior of carnivorous hawks, cats, and whales? Defenders of research on animals argue that anyone who has eaten a hamburger, worn leather shoes, tolerated hunting and fishing, or supported the extermination of crop-destroying or plague-carrying pests has already agreed that, yes, it is sometimes permissible to sacrifice animals for the sake of human well-being.

Scott Plous (1993) notes, however, that our compassion for animals varies, as does our compassion for people—based on their perceived similarity to us. We feel more attraction, give more help, and act less aggressively toward similar others. Likewise, we value animals according to their perceived kinship with us. Thus, primates and companion pets get top priority. (Western people raise or trap mink and foxes for their fur, but not dogs or cats.) Other mammals occupy the second rung on the privilege ladder, followed by birds, fish, and reptiles on the third rung, with insects at the bottom. In deciding which animals have rights, we each draw our own cut-off line somewhere across the animal kingdom.

If we give human life first priority, the second issue is the priority we give to the well-being of animals in research. What safeguards should protect them? Most researchers today feel ethically obligated to enhance the well-being of captive animals and protect them from needless suffering. In one survey of animal researchers, 98 percent or more supported government regulations protecting primates, dogs, and cats, and 74 percent supported regulations providing for the humane care of rats and mice (Plous & Herzog, 2000). Many professional associations and funding agencies already have such guidelines. For example, British Psychological Society guidelines call for housing animals under reasonably natural living conditions, with companions for social animals (Lea, 2000). American Psychological Association (2002) guidelines mandate ensuring the “comfort, health, and humane treatment” of animals and minimizing “infection, illness, and pain of animal subjects.” Humane care also leads to more effective science, because pain and stress would distort the animals’ behavior during experiments.

“I believe that to prevent, cripple, or needlessly complicate the research that can relieve animal and human suffering is profoundly inhuman, cruel, and immoral.” Psychologist Neal Miller, 1983

“Please do not forget those of us who suffer from incurable diseases or disabilities who hope for a cure through research that requires the use of animals.” Psychologist Dennis Feeney, 1987

“The righteous know the needs of their animals.” Proverbs 12:10


Animals have themselves benefited from animal research. One Ohio team of research psychologists measured stress hormone levels in samples of millions of dogs brought each year to animal shelters. They devised handling and stroking methods to reduce stress and ease the dogs’ transition to adoptive homes (Tuber et al., 1999). In New York, formerly listless and idle Bronx Zoo animals now stave off boredom by working for their supper, as they would in the wild (Stewart, 2002). Other studies have helped improve care and management in animals’ natural habitats.

By revealing our behavioral kinship with animals and the remarkable intelligence of chimpanzees, gorillas, and other animals, experiments have also led to increased empathy and protection for them. At its best, a psychology concerned for humans and sensitive to animals serves the welfare of both.

Animal research benefiting animals Thanks partly to research on the benefits of novelty, control, and stimulation, these gorillas are enjoying an improved quality of life in New York’s Bronx Zoo.


“The greatness of a nation can be judged by the way its animals are treated.” Mahatma Gandhi, 1869–1948

2-6 Is it ethical to experiment on people?

If the image of researchers delivering supposed electric shocks troubles you, you may be relieved to know that in most psychological studies, especially those with human participants, blinking lights, flashing words, and pleasant social interactions are more common. Occasionally, though, researchers do temporarily stress or deceive people, but only when they believe it is essential to a justifiable end, such as understanding and controlling violent behavior or studying mood swings. Such experiments wouldn’t work if the participants knew all there was to know about the experiment beforehand. Wanting to be helpful, the participants might try to confirm the researcher’s predictions.

Ethical principles developed by the American Psychological Association (1992), by the British Psychological Society (1993), and by psychologists internationally (Pettifor, 2004), urge investigators to (1) obtain the informed consent of potential participants, (2) protect them from harm and discomfort, (3) treat information about individual participants confidentially, and (4) fully explain the research afterward. Moreover, most universities today screen research proposals through an ethics committee that safeguards the well-being of every participant.

The ideal is for a researcher to be sufficiently informative and considerate that participants will leave feeling at least as good about themselves as when they came in. Better yet, they should be repaid by having learned something. If treated respectfully, most participants enjoy or accept their engagement (Epley & Huff, 1998; Kimmel, 1998). Indeed, say psychology’s defenders, professors provoke much greater anxiety by giving and returning course exams than do researchers in the typical experiment.

Much research occurs outside of university laboratories, in places where there may be no ethics committees. For example, retail stores routinely survey people, photograph their purchasing behavior, track their buying patterns, and test the effectiveness of advertising. Curiously, such research attracts less attention than the scientific research done to advance human understanding.


2-7 Is psychology free of value judgments?

Psychology is definitely not value-free. Values affect what we study, how we study it, and how we interpret results. Researchers’ values influence their choice of topics. Should we study worker productivity or worker morale? Sex discrimination or gender differences? Conformity or independence?

Values can also color “the facts.” As we noted earlier, our preconceptions can bias our observations and interpretations; sometimes we see what we want or expect to see (FIGURE 2.1). Even the words we use to describe something can reflect our values. Are the sex acts that an individual does not practice “perversions” or “sexual variations”? Both in and out of psychology, labels describe and labels evaluate: The same holds true in everyday speech. One person’s “rigidity” is another’s “consistency.” One person’s “faith” is another’s “fanaticism.” Our labeling someone as “firm” or “stubborn,” “careful” or “picky,” “discreet” or “secretive” reveals our feelings.

Popular applications of psychology also contain hidden values. If you defer to “professional” guidance about how to live—how to raise children, how to achieve self-fulfillment, what to do with sexual feelings, how to get ahead at work—you are accepting value-laden advice. A science of behavior and mental processes can certainly help us reach our goals, but it cannot decide what those goals should be.

If some people see psychology as merely common sense, others have a different concern—that it is becoming dangerously powerful. Is it an accident that astronomy is the oldest science and psychology the youngest? To some people, exploring the external universe seems far safer than exploring our own inner universe. Might psychology, they ask, be used to manipulate people? Knowledge, like all power, can be used for good or evil. Nuclear power has been used to light up cities—and to demolish them. Persuasive power has been used to educate people—and to deceive them.

Although psychology does indeed have the power to deceive, its purpose is to enlighten. Every day, psychologists are exploring ways to enhance learning, creativity, and compassion. Psychology speaks to many of our world’s great problems—war, overpopulation, prejudice, family crises, crime—all of which involve attitudes and behaviors. Psychology also speaks to our deepest longings—for nourishment, for love, for happiness. Psychology cannot address all of life’s great questions, but it speaks to some mighty important ones.

FIGURE 2.1 What do you see? People interpret ambiguous information to fit their preconceptions. Did you see a duck or a rabbit? Before showing some friends this image, ask them if they can see the duck lying on its back (or the bunny in the grass). (From Shepard, 1990.) (© Roger Shepard)

“It is doubtless impossible to approach any human problem with a mind free from bias.” Simone de Beauvoir, The Second Sex, 1953

Psychology speaks In making its historic 1954 school desegregation decision, the U.S. Supreme Court cited the expert testimony and research of psychologists Kenneth Clark and Mamie Phipps Clark (1947). The Clarks reported that, when given a choice between Black and White dolls, most African-American children chose the White doll, which seemingly indicated internalized anti-Black prejudice. (Office of Public Affairs at Columbia University)


Review Thinking Critically With Psychological Science

2-1 Why are the answers that flow from the scientific approach more reliable than those based on intuition and common sense?
Although common sense often serves us well, we are prone to hindsight bias (also called the “I-knew-it-all-along phenomenon”), the tendency to believe, after learning an outcome, that we would have foreseen it. We also are routinely overconfident of our judgments, thanks partly to our bias to seek information that confirms them. Although limited by the testable questions it can address, scientific inquiry can help us sift reality from illusion and restrain the biases of our unaided intuition.

2-2 What are three main components of the scientific attitude?
The three components of the scientific attitude are (1) a curious eagerness to (2) skeptically scrutinize competing ideas and (3) an open-minded humility before nature. This attitude carries into everyday life as critical thinking, which examines assumptions, discerns hidden values, evaluates evidence, and assesses outcomes. Putting ideas, even crazy-sounding ideas, to the test helps us winnow sense from nonsense.

2-3 Can laboratory experiments illuminate everyday life?
By intentionally creating a controlled, artificial environment in the lab, researchers aim to test theoretical principles. These general principles help explain everyday behaviors.

2-4 Does behavior depend on one’s culture and gender?
Attitudes and behaviors vary across cultures, but the underlying principles vary much less because of our human kinship. Although gender differences tend to capture attention, it is important to remember our greater gender similarities.

2-5 Why do psychologists study animals, and is it ethical to experiment on animals?
Some psychologists are primarily interested in animal behavior. Others study animals to better understand the physiological and psychological processes shared by humans. Under ethical and legal guidelines, animals used in experiments rarely experience pain. Nevertheless, animal rights groups raise an important issue: Even if it leads to the relief of human suffering, is an animal’s temporary suffering justified?

2-6 Is it ethical to experiment on people?
Researchers may temporarily stress or deceive people in order to learn something important. Professional ethical standards provide guidelines concerning the treatment of both human and animal participants.

2-7 Is psychology free of value judgments?
Psychologists’ values influence their choice of research topics, their theories and observations, their labels for behavior, and their professional advice. Applications of psychology’s principles have been used mainly in the service of humanity.

Terms and Concepts to Remember
hindsight bias, p. 15
critical thinking, p. 18
culture, p. 19

Test Yourself 1. What is the scientific attitude, and why is it important for critical thinking?

2. How are human and animal research subjects protected? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. How might critical thinking help us assess someone’s interpretations of people’s dreams or their claims to communicate with the dead?

2. Were any of the Frequently Asked Questions your questions? Do you have other questions or concerns about psychology?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 3 Research Strategies: How Psychologists Ask and Answer Questions

How Do Psychologists Ask and Answer Questions?
Statistical Reasoning in Everyday Life

How Do Psychologists Ask and Answer Questions?

Psychologists arm their scientific attitude with the scientific method. Psychological science evaluates competing ideas with careful observation and rigorous analysis. In its attempt to describe and explain human nature, it welcomes hunches and plausible-sounding theories. And it puts them to the test. If a theory works—if the data support its predictions—so much the better for that theory. If the predictions fail, the theory will be revised or rejected.

The Scientific Method

3-1 How do theories advance psychological science?

In everyday conversation, we often use theory to mean “mere hunch.” In science, however, theory is linked with observation. A scientific theory explains through an integrated set of principles that organizes observations and predicts behaviors or events. By organizing isolated facts, a theory simplifies. There are too many facts about behavior to remember them all. By linking facts and bridging them to deeper principles, a theory offers a useful summary. As we connect the observed dots, a coherent picture emerges.

A good theory of depression, for example, helps us organize countless depression-related observations into a short list of principles. Imagine that we observe over and over that people with depression describe their past, present, and future in gloomy terms. We might therefore theorize that at the heart of depression lies low self-esteem. So far so good: Our self-esteem principle neatly summarizes a long list of facts about people with depression.

Yet no matter how reasonable a theory may sound—and low self-esteem seems a reasonable explanation of depression—we must put it to the test. A good theory produces testable predictions, called hypotheses. By enabling us to test and to reject or revise the theory, such predictions give direction to research. They specify what results would support the theory and what results would disconfirm it. To test our self-esteem theory of depression, we might assess people’s self-esteem by having them respond to statements such as “I have good ideas” and “I am fun to be with.” Then we could see whether, as we hypothesized, people who report poorer self-images also score higher on a depression scale (FIGURE 3.1).

In testing our theory, we should be aware that it can bias subjective observations. Having theorized that depression springs from low self-esteem, we may see what we expect. We may perceive depressed people’s neutral comments as self-disparaging. The urge to see what we expect is an ever-present temptation, in the laboratory and outside of it. According to the bipartisan U.S. Senate Select Committee on Intelligence (2004), preconceived expectations that Iraq had weapons of mass destruction

theory an explanation using an integrated set of principles that organizes observations and predicts behaviors or events.

hypothesis a testable prediction, often implied by a theory.


FIGURE 3.1 The scientific method A self-correcting process for asking questions and observing nature’s answers.

The cycle in the figure: (1) Theories (example: Low self-esteem feeds depression) lead to (2) Hypotheses (example: People with low self-esteem will score higher on a depression scale), which lead to (3) Research and observations (example: Administer tests of self-esteem and depression; see if a low score on one predicts a high score on the other), which in turn confirm, reject, or revise the theories.

Good theories explain by (1) organizing and linking observed facts and (2) implying hypotheses that offer testable predictions and, sometimes, practical applications.

operational definition a statement of the procedures (operations) used to define research variables. For example, human intelligence may be operationally defined as what an intelligence test measures.

replication repeating the essence of a research study, usually with different participants in different situations, to see whether the basic finding extends to other participants and circumstances.

case study an observation technique in which one person is studied in depth in the hope of revealing universal principles.

survey a technique for ascertaining the self-reported attitudes or behaviors of a particular group, usually by questioning a representative, random sample of the group.

led intelligence analysts to wrongly interpret ambiguous observations as confirming that theory, and this theory-driven conclusion then led to the preemptive U.S. invasion of Iraq.

As a check on their biases, psychologists report their research with precise operational definitions of procedures and concepts. Hunger, for example, might be defined as "hours without eating," generosity as "money contributed." Such carefully worded statements should allow others to replicate (repeat) the original observations. If other researchers re-create a study with different participants and materials and get similar results, then our confidence in the finding's reliability grows. The first study of hindsight bias aroused psychologists' curiosity. Now, after many successful replications with differing people and questions, we feel sure of the phenomenon's power.

In the end, our theory will be useful if it (1) effectively organizes a range of self-reports and observations, and (2) implies clear predictions that anyone can use to check the theory or to derive practical applications. (If we boost people's self-esteem, will their depression lift?) Eventually, our research will probably lead to a revised theory that better organizes and predicts what we know about depression.

As we will see next, we can test our hypotheses and refine our theories using descriptive methods (which describe behaviors, often using case studies, surveys, or naturalistic observations), correlational methods (which associate different factors), and experimental methods (which manipulate factors to discover their effects). To think critically about popular psychology claims, we need to recognize these methods and know what conclusions they allow.

Description

3-2 How do psychologists observe and describe behavior?

The starting point of any science is description. In everyday life, all of us observe and describe people, often drawing conclusions about why they behave as they do. Professional psychologists do much the same, though more objectively and systematically.



The Case Study

Among the oldest research methods, the case study examines one individual in depth in hopes of revealing things true of us all. Some examples: Much of our early knowledge about the brain came from case studies of individuals who suffered a particular impairment after damage to a certain brain region. Jean Piaget taught us about children's thinking after carefully observing and questioning only a few children. Studies of only a few chimpanzees have revealed their capacity for understanding and language.

Intensive case studies are sometimes very revealing. Case studies often suggest directions for further study, and they show us what can happen. But individual cases may mislead us if the individual being studied is atypical. Unrepresentative information can lead to mistaken judgments and false conclusions. Indeed, anytime a researcher mentions a finding ("Smokers die younger: 95 percent of men over 85 are nonsmokers") someone is sure to offer a contradictory anecdote ("Well, I have an uncle who smoked two packs a day and lived to be 89"). Dramatic stories and personal experiences (even psychological case examples) command our attention, and they are easily remembered. Which of the following do you find more memorable? (1) "In one study of 1300 dream reports concerning a kidnapped child, only 5 percent correctly envisioned the child as dead (Murray & Wheeler, 1937)." (2) "I know a man who dreamed his sister was in a car accident, and two days later she died in a head-on collision!" Numbers can be numbing, but the plural of anecdote is not evidence. As psychologist Gordon Allport (1954, p. 9) said, "Given a thimbleful of [dramatic] facts we rush to make generalizations as large as a tub."

The point to remember: Individual cases can suggest fruitful ideas. What's true of all of us can be glimpsed in any one of us. But to discern the general truths that cover individual cases, we must answer questions with other research methods.

The case of the conversational chimpanzee In case studies of chimpanzees, psychologists have asked whether language is uniquely human. Here Nim Chimpsky signs hug as his trainer, psychologist Herbert Terrace, shows him the puppet Ernie. But is Nim really using language?

"'Well my dear,' said Miss Marple, 'human nature is very much the same everywhere, and of course, one has opportunities of observing it at closer quarters in a village.'" Agatha Christie, The Tuesday Club Murders, 1933

The Survey

The survey method looks at many cases in less depth. A survey asks people to report their behavior or opinions. Questions about everything from sexual practices to political opinions are put to the public. Harris and Gallup polls have revealed that 72 percent of Americans think there is too much TV violence, 89 percent favor equal job opportunities for homosexual people, 89 percent are facing high stress, and 96 percent would like to change something about their appearance. In Britain, seven in ten 18- to 29-year-olds support gay marriage; among those over 50, about the same percentage oppose it (a generation gap found in many Western countries). But asking questions is tricky, and the answers often depend on the ways questions are worded and respondents are chosen.

Wording Effects

Even subtle changes in the order or wording of questions can have major effects. Should cigarette ads or pornography be allowed on television? People are much more likely to approve "not allowing" such things than "forbidding" or "censoring" them. In one national survey, only 27 percent of Americans approved of "government censorship" of media sex and violence, though 66 percent approved of "more restrictions on what is shown on television" (Lacayo, 1995). People are similarly much more approving of "aid to the needy" than of "welfare," of "affirmative action" than of "preferential treatment," and of "revenue enhancers" than of "taxes." Because wording is such a delicate matter, critical thinkers will reflect on how the phrasing of a question might affect people's expressed opinions.

Random Sampling

We can describe human experience by drawing on memorable anecdotes and personal experience. But for an accurate picture of a whole population's attitudes and experience, there's only one game in town—the representative sample.


|| With very large samples, estimates become quite reliable. E is estimated to represent 12.7 percent of the letters in written English. E, in fact, is 12.3 percent of the 925,141 letters in Melville’s Moby Dick, 12.4 percent of the 586,747 letters in Dickens’ A Tale of Two Cities, and 12.1 percent of the 3,901,021 letters in 12 of Mark Twain’s works (Chance News, 1997). ||


We can extend this point to everyday thinking, as we generalize from samples we observe, especially vivid cases. Given (a) a statistical summary of a professor’s student evaluations and (b) the vivid comments of two irate students, an administrator’s impression of the professor may be influenced as much by the two unhappy students as by the many favorable evaluations in the statistical summary. The temptation to generalize from a few vivid but unrepresentative cases is nearly irresistible. The point to remember: The best basis for generalizing is from a representative sample of cases. So how do you obtain a representative sample—say, of the students at your college or university? How could you choose a group that would represent the total student population, the whole group you want to study and describe? Typically, you would choose a random sample, in which every person in the entire group has an equal chance of participating. This means you would not send each student a questionnaire. (The conscientious people who return it would not be a random sample.) Rather, you might number the names in the general student listing and then use a random number generator to pick the participants for your survey. Large representative samples are better than small ones, but a small representative sample of 100 is better than an unrepresentative sample of 500. Political pollsters sample voters in national election surveys just this way. Using only 1500 randomly sampled people, drawn from all areas of a country, they can provide a remarkably accurate snapshot of the nation’s opinions. Without random sampling, large samples—including call-in phone samples and TV or Web site polls—often merely give misleading results. The point to remember: Before accepting survey findings, think critically: Consider the sample. You cannot compensate for an unrepresentative sample by simply adding more people.
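The sampling procedure described above (number the names in the student listing, then let a random number generator pick the participants) can be sketched in a few lines of Python. The roster here is hypothetical; the point is only that every member of the population gets an equal chance of inclusion.

```python
import random

def draw_random_sample(roster, n, seed=None):
    """Return n people chosen so that every member of the roster
    has an equal chance of inclusion -- a random sample."""
    rng = random.Random(seed)
    return rng.sample(roster, n)

# Hypothetical student listing: 10,000 enrolled students.
roster = [f"student-{i}" for i in range(10_000)]

# A representative sample of 100 beats an unrepresentative sample of 500.
sample = draw_random_sample(roster, 100, seed=42)

print(len(sample))       # 100
print(len(set(sample)))  # 100 -- no one is picked twice
```

Note what this is not: mailing a questionnaire to everyone and keeping whoever mails it back. The conscientious returners select themselves, so the result would no longer be a random sample.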

Naturalistic Observation A third descriptive method records behavior in natural environments. These naturalistic observations range from watching chimpanzee societies in the jungle, to unobtrusively videotaping (and later systematically analyzing) parent-child interactions in different cultures, to recording racial differences in students’ self-seating patterns in the lunchroom at school. Like the case study and survey methods, naturalistic observation does not explain behavior. It describes it. Nevertheless, descriptions can be revealing. We once thought, for example, that only humans use tools. Then naturalistic observation revealed that chimpanzees sometimes insert a stick in a termite mound and withdraw it, eating the stick’s load of termites. Such unobtrusive naturalistic observations paved the way for later studies of animal thinking, language, and emotion, which further expanded our understanding of our fellow animals. “Observations, made in the natural habitat, helped to show that the societies and behavior of animals are far more complex than previously supposed,” notes chimpanzee observer Jane Goodall (1998). For example, chimpanzees and baboons have been observed using deception. Psychologists Andrew Whiten and Richard Byrne (1988) repeatedly saw one young baboon pretending to have been attacked by another as a tactic to get its mother to drive the other baboon away from its food. Moreover, the more developed a primate species’ brain, the more likely it is that the animals will display deceptive behaviors (Byrne & Corp, 2004). Naturalistic observations also illuminate human behavior. Here are three findings you might enjoy.

• A funny finding. We humans laugh 30 times more often in social situations than in solitary situations. (Have you noticed how seldom you laugh when alone?) As we laugh, 17 muscles contort our mouth and squeeze our eyes, and we emit a series of 75-millisecond vowellike sounds that are spaced about one-fifth of a second apart (Provine, 2001).

• Sounding out students. What, really, are introductory psychology students saying and doing during their everyday lives? To find out, Matthias Mehl and James Pennebaker (2003) equipped 52 such students from the University of Texas with electronically activated belt-worn tape recorders. For up to four days, the recorders captured 30 seconds of the students' waking hours every 12.5 minutes, thus enabling the researchers to eavesdrop on more than 10,000 half-minute life slices by the end of the study. On what percentage of the slices do you suppose they found the students talking with someone? What percentage captured the students at a computer keyboard? The answers: 28 and 9 percent. (What percentage of your waking hours are spent in these activities?)

• Culture, climate, and the pace of life. Naturalistic observation also enabled Robert Levine and Ara Norenzayan (1999) to compare the pace of life in 31 countries. (Their operational definition of pace of life included walking speed, the speed with which postal clerks completed a simple request, and the accuracy of public clocks.) Their conclusion: Life is fastest paced in Japan and Western Europe, and slower paced in economically less-developed countries. People in colder climates also tend to live at a faster pace (and are more prone to die from heart disease).

A natural observer Chimpanzee researcher Frans de Waal (2005) reports that "I am a born observer. . . . When picking a seat in a restaurant I want to face as many tables as possible. I enjoy following the social dynamics—love, tension, boredom, antipathy—around me based on body language, which I consider more informative than the spoken word. Since keeping track of others is something I do automatically, becoming a fly on the wall of an ape colony came naturally to me."

An EAR for naturalistic observation Psychologists Matthias Mehl and James Pennebaker have used Electronically Activated Recorders (EAR) to sample naturally occurring slices of daily life.

Naturalistic observation offers interesting snapshots of everyday life, but it does so without controlling for all the factors that may influence behavior. It's one thing to observe the pace of life in various places, but another to understand what makes some people walk faster than others. Yet naturalistic observation, like surveys, can provide data for correlational research, which we consider next.
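The scale of the Mehl and Pennebaker recordings follows from simple arithmetic on the sampling schedule. The 16 waking hours per day used below is an assumption for illustration, not a figure reported by the study.

```python
# One 30-second slice captured every 12.5 minutes.
SLICE_INTERVAL_MIN = 12.5

# Assumed: about 16 waking hours per day (not stated in the study).
waking_minutes_per_day = 16 * 60

slices_per_day = waking_minutes_per_day / SLICE_INTERVAL_MIN  # 76.8
slices_per_student = slices_per_day * 4                       # up to four days
total = slices_per_student * 52                               # 52 students

print(round(slices_per_day, 1))  # 76.8 slices per student per day
print(round(total))              # roughly 16,000 possible slices --
                                 # consistent with "more than 10,000"
```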

Correlation

3-3 What are positive and negative correlations, and why do they enable prediction but not cause-effect explanation?

Describing behavior is a first step toward predicting it. Surveys and naturalistic observations often show us that one trait or behavior is related to another. In such cases, we say the two correlate. A statistical measure (the correlation coefficient) helps us figure how closely two things vary together, and thus how well either one predicts the other. Knowing how much aptitude test scores correlate with school success tells us how well the scores predict school success.

population all the cases in a group being studied, from which samples may be drawn. (Note: Except for national studies, this does not refer to a country’s whole population.)

random sample a sample that fairly represents a population because each member has an equal chance of inclusion.

naturalistic observation observing and recording behavior in naturally occurring situations without trying to manipulate and control the situation.

correlation a measure of the extent to which two factors vary together, and thus of how well either factor predicts the other.

correlation coefficient a statistical index of the relationship between two things (from −1 to +1).



FIGURE 3.2 Scatterplots, showing patterns of correlation Correlations can range from +1.00 (scores on one measure increase in direct proportion to scores on another) to –1.00 (scores on one measure decrease precisely as scores rise on the other). The three panels show a perfect positive correlation (+1.00), no relationship (0.00), and a perfect negative correlation (–1.00).

TABLE 3.1 Height and Temperament of 20 Men

Person   Height in Inches   Temperament
1        80                 75
2        63                 66
3        61                 60
4        79                 90
5        74                 60
6        69                 42
7        62                 42
8        75                 60
9        77                 81
10       60                 39
11       64                 48
12       76                 69
13       71                 72
14       66                 57
15       73                 63
16       70                 75
17       63                 30
18       71                 57
19       68                 84
20       70                 39

Throughout this book we will often ask how strongly two things are related: For example, how closely related are the personality scores of identical twins? How well do intelligence test scores predict achievement? How closely is stress related to disease?

FIGURE 3.2 contains three scatterplots, illustrating the range of possible correlations from a perfect positive to a perfect negative. (Perfect correlations rarely occur in the "real world.") Each dot in a scatterplot represents the scattered values of two variables. A correlation is positive if two sets of scores, such as height and weight, tend to rise or fall together. A correlation is negative if two sets of scores relate inversely, one set going up as the other goes down. Tooth brushing and decay correlate negatively: As brushing goes up from zero, tooth decay goes down. Saying that a correlation is "negative" says nothing about its strength or weakness. A weak correlation, indicating little relationship, has a coefficient near zero.

Here are four news reports of correlational research, some derived from surveys or natural observations. Can you spot which are reporting positive correlations, which negative? (Check your answers in the footnote below.)

1. The more young children watch TV, the less they read (Kaiser, 2003).
2. The more sexual content teens see on TV, the more likely they are to have sex (Collins et al., 2004).
3. The longer children are breast-fed, the greater their later academic achievement (Horwood & Fergusson, 1998).
4. The more often adolescents eat breakfast, the lower their body mass (Timlin et al., 2008).1

Statistics can help us see what the naked eye sometimes misses. To demonstrate this for yourself, try an imaginary project. Wondering if tall men are more or less easygoing, you collect two sets of scores: men's heights and men's temperaments.
You measure the heights of 20 men, and you have someone else independently assess their temperaments (from zero for extremely calm to 100 for highly reactive). With all the relevant data right in front of you (TABLE 3.1), can you tell whether there is (1) a positive correlation between height and reactive temperament, (2) very little or no correlation, or (3) a negative correlation?

Comparing the columns in Table 3.1, most people detect very little relationship between height and temperament. In fact, the correlation in this imaginary example is moderately positive, +0.63, as we can see if we display the data as a scatterplot. In FIGURE 3.3, moving from left to right, the upward, oval-shaped slope of the cluster of points shows that our two imaginary sets of scores (height and reactivity) tend to rise together.

If we fail to see a relationship when data are presented as systematically as in Table 3.1, how much less likely are we to notice them in everyday life? To see what is right in front of us, we sometimes need statistical illumination. We can easily see

1 Answers to correlation questions: 1. negative, 2. positive, 3. positive, 4. negative.
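The +0.63 figure claimed for the Table 3.1 data can be checked directly. Below is a minimal Pearson correlation coefficient in plain Python; the helper name `pearson_r` is mine, not the book's.

```python
import math

# The height and temperament scores from Table 3.1.
heights = [80, 63, 61, 79, 74, 69, 62, 75, 77, 60,
           64, 76, 71, 66, 73, 70, 63, 71, 68, 70]
temperaments = [75, 66, 60, 90, 60, 42, 42, 60, 81, 39,
                48, 69, 72, 57, 63, 75, 30, 57, 84, 39]

def pearson_r(xs, ys):
    """Correlation coefficient: the co-variation of two score sets,
    scaled so the result always falls between -1.00 and +1.00."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r(heights, temperaments), 2))  # 0.63
```

A positive result confirms what the scatterplot shows: the two sets of scores tend to rise together, though far from perfectly.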



FIGURE 3.3 Scatterplot for height and temperament This display of data from 20 imagined people (each represented by a data point) reveals an upward slope, indicating a positive correlation. The considerable scatter of the data indicates the correlation is much lower than +1.0. (Vertical axis: temperament scores, 25 to 95; horizontal axis: height in inches, 55 to 85.)

scatterplots a graphed cluster of dots, each of which represents the values of two variables. The slope of the points suggests the direction of the relationship between the two variables. The amount of scatter suggests the strength of the correlation (little scatter indicates high correlation).

evidence of gender discrimination when given statistically summarized information about job level, seniority, performance, gender, and salary. But we often see no discrimination when the same information dribbles in, case by case (Twiss et al., 1989). The point to remember: A correlation coefficient helps us see the world more clearly by revealing the extent to which two things relate.

Correlation and Causation

Correlations help us predict. Low self-esteem correlates with (and therefore predicts) depression. (This correlation might be indicated by a correlation coefficient, or just by a finding that people who score on the lower half of a self-esteem scale have an elevated depression rate.) So, does low self-esteem cause depression? If, based on the correlational evidence, you assume that it does, you have much company. A nearly irresistible thinking error is assuming that an association, sometimes presented as a correlation coefficient, proves causation. But no matter how strong the relationship, it does not prove anything! As options 2 and 3 in FIGURE 3.4 on the next page show, we'd get the same negative correlation between low self-esteem and depression if depression caused people to be down on themselves, or if some third factor—such as heredity or brain chemistry—caused both low self-esteem and depression. Among men, for example, length of marriage correlates positively with hair loss—because both are associated with a third factor, age.

This point is so important—so basic to thinking smarter with psychology—that it merits one more example, from a survey of over 12,000 adolescents. The study found that the more teens feel loved by their parents, the less likely they are to behave in unhealthy ways—having early sex, smoking, abusing alcohol and drugs, exhibiting violence (Resnick et al., 1997). "Adults have a powerful effect on their children's behavior right through the high school years," gushed an Associated Press (AP) story reporting the finding. But this correlation comes with no built-in cause-effect arrow. Said differently (turn the volume up here), association does not prove causation.2 Thus, the AP could as well have reported, "Well-behaved teens feel their parents' love and approval; out-of-bounds teens more often think their parents are disapproving jerks."

Correlation need not mean causation Length of marriage correlates with hair loss in men. Does this mean that marriage causes men to lose their hair (or that balding men make better husbands)? In this case, as in many others, a third factor obviously explains the correlation: Golden anniversaries and baldness both accompany aging.

2 Because many associations are stated as correlations, the famously worded principle is "Correlation does not prove causation." That's true, but it's also true of associations verified by other nonexperimental statistics (Hatfield et al., 2006).



FIGURE 3.4 Three possible cause-effect relationships People low in self-esteem are more likely to report depression than are those high in self-esteem. One possible explanation of this negative correlation is that a bad self-image causes depressed feelings. But, as the diagram indicates, other cause-effect relationships are possible: (1) low self-esteem could cause depression; or (2) depression could cause low self-esteem; or (3) a third factor, such as distressing events or a biological predisposition, could cause both low self-esteem and depression.

The point to remember: Correlation indicates the possibility of a cause-effect relationship, but it does not prove causation. Knowing that two events are associated need not tell us anything about causation. Remember this principle and you will be wiser as you read and hear news of scientific studies.

|| A study reported in the British Medical Journal found that youths who identify with the goth subculture attempt, more often than other young people, to harm or kill themselves (Young et al., 2006). Can you imagine multiple possible explanations for this association? ||

Illusory Correlations

3-4 What are illusory correlations?

Correlation coefficients make visible the relationships we might otherwise miss. They also restrain our "seeing" relationships that actually do not exist. A perceived but nonexistent correlation is an illusory correlation. When we believe there is a relationship between two things, we are likely to notice and recall instances that confirm our belief (Trolier & Hamilton, 1986). Because we are sensitive to dramatic or unusual events, we are especially likely to notice and remember the occurrence of two such events in sequence—say, a premonition of an unlikely phone call followed by the call. When the call does not follow the premonition, we are less likely to note and remember the nonevent. Illusory correlations help explain many superstitious beliefs, such as the presumption that infertile couples who adopt become more likely to conceive (Gilovich, 1991). Couples who conceive after adopting capture our attention. We're less likely to notice those who adopt and never conceive, or those who conceive without adopting. In other words, illusory correlations occur when we over-rely on the top left cell of FIGURE 3.5, ignoring equally essential information in the other cells.

FIGURE 3.5 Illusory correlation in everyday life Many people believe infertile couples become more likely to conceive a child after adopting a baby. This belief arises from their attention being drawn to such cases. The many couples who adopt without conceiving or conceive without adopting grab less attention. To determine whether there actually is a correlation between adoption and conception, we need data from all four cells in this figure. (From Gilovich, 1991.)

               Conceive                 Do not conceive
Adopt          confirming evidence      disconfirming evidence
Do not adopt   disconfirming evidence   confirming evidence
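Figure 3.5's point, that judging a correlation requires all four cells, can be made concrete with numbers. The counts below are invented purely for illustration; they are not data from Gilovich's work.

```python
# Hypothetical counts for the four cells of Figure 3.5.
adopt_conceive       = 8     # the vivid, attention-grabbing cell
adopt_no_conceive    = 92
no_adopt_conceive    = 80
no_adopt_no_conceive = 920

# Conception rate among adopters vs. non-adopters.
p_given_adopt    = adopt_conceive / (adopt_conceive + adopt_no_conceive)
p_given_no_adopt = no_adopt_conceive / (no_adopt_conceive + no_adopt_no_conceive)

# The rate is 8 percent either way: no real correlation, even though
# the top-left "confirming" cell is what we notice and remember.
print(p_given_adopt, p_given_no_adopt)  # 0.08 0.08
```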


|| A New York Times writer reported a massive survey showing that “adolescents whose parents smoked were 50 percent more likely than children of nonsmokers to report having had sex.” He concluded (would you agree?) that the survey indicated a causal effect—that “to reduce the chances that their children will become sexually active at an early age” parents might “quit smoking” (O’Neil, 2002). ||



Such illusory thinking helps explain why for so many years people believed (and many still do) that sugar makes children hyperactive, that getting chilled and wet causes people to catch a cold, and that changes in the weather trigger arthritis pain. We are, it seems, prone to perceiving patterns, whether they’re there or not. The point to remember: When we notice random coincidences, we may forget that they are random and instead see them as correlated. Thus, we can easily deceive ourselves by seeing what is not there.

illusory correlation the perception of a relationship where none exists.

Perceiving Order in Random Events

In our natural eagerness to make sense of our world—what poet Wallace Stevens called our "rage for order"—we look for order even in random data. And we usually find it, because—here's a curious fact of life—random sequences often don't look random. Consider a random coin flip: If someone flipped a coin six times, which of the following sequences of heads (H) and tails (T) would be most likely: HHHTTT or HTTHTH or HHHHHH? Daniel Kahneman and Amos Tversky (1972) found that most people believe HTTHTH would be the most likely random sequence. Actually, all three are equally likely (or, you might say, equally unlikely). A bridge or poker hand of 10 through ace, all of hearts, would seem extraordinary; actually, it would be no more or less likely than any other specific hand of cards (FIGURE 3.6).

FIGURE 3.6 Two random sequences Your chances of being dealt either of these hands are precisely the same: 1 in 2,598,960. Bizarre-looking, perhaps. But actually no more unlikely than any other number sequence.

In actual random sequences, patterns and streaks (such as repeating digits) occur more often than people expect. To demonstrate this phenomenon for myself (as you can do), I flipped a coin 51 times, with these results:

Tosses 1-10:  H T T T H H H T T T
Tosses 11-20: T H H T T H T T H H
Tosses 21-30: T T H T T T H T H T
Tosses 31-40: T T T T T H T T H T
Tosses 41-50: H H H H T H H T T T
Toss 51:      T

Looking over the sequence, patterns jump out: Tosses 10 to 22 provided an almost perfect pattern of pairs of tails followed by pairs of heads. On tosses 30 to 38 I had a "cold hand," with only one head in eight tosses. But my fortunes immediately reversed with a "hot hand"—seven heads out of the next nine tosses. Similar streaks happen, about as often as one would expect in random sequences, in basketball shooting, baseball hitting, and mutual fund stock pickers' selections (Gilovich et al., 1985; Malkiel, 1989, 1995; Myers, 2002). These sequences often don't look random, and so get overinterpreted ("When you're hot, you're hot!"). What explains these streaky patterns? Was I exercising some sort of paranormal control over my coin? Did I snap out of my tails funk and get in a heads groove? No such explanations are needed, for these are the sorts of streaks found in any random data. Comparing each toss to the next, 24 of the 50 comparisons yielded a changed result—just the sort of near 50-50 result we expect from coin tossing. Despite seeming patterns, the outcome of one toss gives no clue to the outcome of the next.
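The equal likelihood of HHHTTT, HTTHTH, HHHHHH, and the all-hearts hand is just counting, which a few lines of Python can confirm:

```python
import math

# Any specific sequence of six fair coin flips has probability (1/2)**6.
# This is the same for HHHTTT, HTTHTH, and HHHHHH alike.
p_specific_six_flips = (1 / 2) ** 6
print(p_specific_six_flips)  # 0.015625, i.e., 1 in 64

# Any specific five-card hand: 1 chance in "52 choose 5" -- so the
# 10-through-ace of hearts is exactly as likely as any other hand.
hands = math.comb(52, 5)
print(hands)  # 2598960
```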


|| On March 11, 1998, Utah’s Ernie and Lynn Carey gained three new grandchildren when three of their daughters gave birth—on the same day (Los Angeles Times, 1998). ||

“The really unusual day would be one where nothing unusual happens.” Statistician Persi Diaconis, 2002

However, some happenings seem so extraordinary that we struggle to conceive an ordinary, chance-related explanation (as applies to our coin tosses). In such cases, statisticians often are less mystified. When Evelyn Marie Adams won the New Jersey lottery twice, newspapers reported the odds of her feat as 1 in 17 trillion. Bizarre? Actually, 1 in 17 trillion are indeed the odds that a given person who buys a single ticket for two New Jersey lotteries will win both times. But statisticians Stephen Samuels and George McCabe (1989) reported that, given the millions of people who buy U.S. state lottery tickets, it was "practically a sure thing" that someday, somewhere, someone would hit a state jackpot twice. Indeed, said fellow statisticians Persi Diaconis and Frederick Mosteller (1989), "with a large enough sample, any outrageous thing is likely to happen." An event that happens to but 1 in 1 billion people every day occurs about six times a day, 2000 times a year.
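Diaconis and Mosteller's "large enough sample" point is plain multiplication. The rough world-population figure below is an assumption of about 6 billion (consistent with the book's "about six times a day"):

```python
population = 6_000_000_000  # assumed rough world population

# An event that happens to 1 in 1 billion people each day:
events_per_day = population / 1_000_000_000
events_per_year = events_per_day * 365

print(events_per_day)    # 6.0 -- about six times a day
print(events_per_year)   # 2190.0 -- on the order of 2000 times a year
```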

Given enough random events, something weird will happen Angelo and Maria Gallina were the beneficiaries of one of those extraordinary chance events when they won two California lottery games on the same day.


Experimentation

3-5 How do experiments, powered by random assignment, clarify cause and effect?

experiment a research method in which an investigator manipulates one or more factors (independent variables) to observe the effect on some behavior or mental process (the dependent variable). By random assignment of participants, the experimenter aims to control other relevant factors.

random assignment assigning participants to experimental and control groups by chance, thus minimizing preexisting differences between those assigned to the different groups.

Happy are they, remarked the Roman poet Virgil, “who have been able to perceive the causes of things.” To isolate cause and effect, psychologists can statistically control for other factors. For example, researchers have found that breast-fed infants grow up with somewhat higher intelligence scores than do infants bottle-fed with cow’s milk (Angelsen et al., 2001; Mortensen et al., 2002; Quinn et al., 2001). They have also found that breast-fed British babies have been more likely than their bottle-fed counterparts to eventually move into a higher social class (Martin et al., 2007). But the “breast is best” intelligence effect shrinks when researchers compare breast-fed and bottle-fed children from the same families (Der et al., 2006). So, does this mean that smarter mothers (who in modern countries more often breast-feed) have smarter children? Or, as some researchers believe, do the nutrients of mother’s milk contribute to brain development? To help answer this question, researchers have “controlled for” (statistically removed differences in) certain other factors, such as maternal age, education, and income. And they have found that in infant nutrition, mother’s milk correlates modestly but positively with later intelligence.

Correlational research cannot control for all possible factors. But researchers can isolate cause and effect with an experiment. Experiments enable a researcher to focus on the possible effects of one or more factors by (1) manipulating the factors of interest and (2) holding constant (“controlling”) other factors. With parental permission, a British research team randomly assigned 424 hospital preterm infants either to standard infant formula feedings or to donated breast milk feedings (Lucas et al., 1992). On intelligence tests taken at age 8, the children nourished with breast milk had significantly higher intelligence scores than their formula-fed counterparts.

Random Assignment No single experiment is conclusive, of course. But by randomly assigning infants to one feeding group or the other, researchers were able to hold constant all factors except nutrition. This eliminated alternative explanations and supported the conclusion that breast is indeed best for developing intelligence (at least for preterm infants).
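Random assignment itself is mechanically simple: let chance, not anyone’s choice, decide who goes in which group. A minimal sketch of the idea (not the Lucas team’s actual procedure; the participant IDs and group labels are hypothetical):

```python
import random

def randomly_assign(participants, seed=None):
    """Split participants into two groups purely by chance."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)   # chance alone decides group membership
    half = len(pool) // 2
    return {"experimental": pool[:half], "control": pool[half:]}

# For example, 424 infants (IDs here are stand-ins for real participants)
groups = randomly_assign(range(424), seed=1)
print(len(groups["experimental"]), len(groups["control"]))   # 212 212
```

Because each participant is equally likely to land in either group, preexisting differences (age, health, family background) tend to balance out across the two groups.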


If a behavior (such as test performance) changes when we vary an experimental factor (such as infant nutrition), then we infer the factor is having an effect. The point to remember: Unlike correlational studies, which uncover naturally occurring relationships, an experiment manipulates a factor to determine its effect.

Consider, too, how we might assess a therapeutic intervention. Our tendency to seek new remedies when we are ill or emotionally down can produce misleading testimonies. If three days into a cold we start taking vitamin C tablets and find our cold symptoms lessening, we may credit the pills rather than the cold naturally subsiding. If, after nearly failing the first exam, we listen to a “peak learning” subliminal CD and then improve on the next exam, we may credit the CD rather than conclude that our performance has returned to our average. In the 1700s, blood-letting seemed effective. Sometimes people improved after the treatment; when they didn’t, the practitioner inferred the disease was just too advanced to be reversed. (We now know, of course, that blood-letting is usually a bad treatment.) So, whether or not a remedy is truly effective, enthusiastic users will probably endorse it. To find out whether it actually is effective, we must experiment.

And that is precisely how investigators evaluate new drug treatments and new methods of psychological therapy. The participants in these studies are randomly assigned to the research groups and are often blind (uninformed) about what treatment, if any, they are receiving. One group receives a treatment (such as medication or other therapy). The other group receives a pseudotreatment—an inert placebo (perhaps a pill with no drug in it). If the study is using a double-blind procedure, neither the participants nor the research assistants collecting the data will know which group is receiving the treatment.
In such studies, researchers can check a treatment’s actual effects apart from the participants’ belief in its healing powers and the staff’s enthusiasm for its potential. Just thinking you are getting a treatment can boost your spirits, relax your body, and relieve your symptoms. This placebo effect is well documented in reducing pain, depression, and anxiety (Kirsch & Sapirstein, 1998). And the more expensive the placebo, the more “real” it seems to us—a fake pill that costs US$2.50 works better than one costing 10 cents (Waber et al., 2008).

To know how effective a therapy really is, researchers must control for a possible placebo effect. The double-blind procedure is one way to create an experimental group, in which people receive the treatment, and a contrasting control group that does not receive the treatment. By randomly assigning people to these conditions, researchers can be fairly certain the two groups are otherwise identical. Random assignment roughly equalizes the two groups in age, attitudes, and every other characteristic. With random assignment, as occurred with the infants in the breast milk experiment, we also can conclude that any later differences between people in the experimental and control groups will usually be the result of the treatment.

Independent and Dependent Variables Here is an even more potent example: The drug Viagra was approved for use after 21 clinical trials. One trial was an experiment in which researchers randomly assigned 329 men with erectile dysfunction to either an experimental group (Viagra takers) or a control group (placebo takers). It was a double-blind procedure—neither the men nor the person who gave them the pills knew which drug they were receiving. The result: At peak doses, 69 percent of Viagra-assisted attempts at intercourse were successful, compared with 22 percent for men receiving the placebo (Goldstein et al., 1998). Viagra worked. This simple experiment manipulated just one factor: the drug dosage (none versus peak dose). We call this experimental factor the independent variable because we can vary it independently of other factors, such as the men’s age, weight, and personality (which random assignment should control). Experiments examine the effect of one or more independent variables on some measurable behavior, called the



“If I don’t think it’s going to work, will it still work?”

double-blind procedure an experimental procedure in which both the research participants and the research staff are ignorant (blind) about whether the research participants have received the treatment or a placebo. Commonly used in drug-evaluation studies.

placebo [pluh-SEE-bo; Latin for “I shall please”] effect experimental results caused by expectations alone; any effect on behavior caused by the administration of an inert substance or condition, which the recipient assumes is an active agent.

experimental group in an experiment, the group that is exposed to the treatment, that is, to one version of the independent variable.

control group in an experiment, the group that is not exposed to the treatment; contrasts with the experimental group and serves as a comparison for evaluating the effect of the treatment.

independent variable the experimental factor that is manipulated; the variable whose effect is being studied.



FIGURE 3.7 Experimentation To discern causation, psychologists may randomly assign some participants to an experimental group, others to a control group. Measuring the dependent variable (intelligence score in later childhood) will determine the effect of the independent variable (type of milk). [The diagram shows random assignment (controlling for other variables such as parental intelligence and environment) dividing participants into an experimental group and a control group.]

Note the distinction between random sampling (discussed earlier in relation to surveys) and random assignment in experiments (depicted in Figure 3.7). Random sampling helps us generalize to a larger population. Random assignment controls extraneous influences, which helps us infer cause and effect.

dependent variable because it can vary depending on what takes place during the experiment. Both variables are given precise operational definitions, which specify the procedures that manipulate the independent variable (the precise drug dosage and timing in this study) or measure the dependent variable (the questions that assessed the men’s responses). These definitions answer the “What do you mean?” question with a level of precision that enables others to repeat the study. (See FIGURE 3.7 for the breast milk experiment’s design.)

Let’s pause to check your understanding using a simple psychology experiment: To test the effect of perceived ethnicity on the availability of a rental house, Adrian Carpusor and William Loges (2006) sent identically worded e-mail inquiries to 1115 Los Angeles-area landlords. The researchers varied the ethnic connotation of the sender’s name and tracked the percentage of positive replies (invitations to view the apartment in person). “Patrick McDougall,” “Said Al-Rahman,” and “Tyrell Jackson” received, respectively, 89 percent, 66 percent, and 56 percent invitations. In this experiment, what was the independent variable? The dependent variable?3

Experiments can also help us evaluate social programs. Do early childhood education programs boost impoverished children’s chances for success? What are the effects of different anti-smoking campaigns? Do school sex-education programs reduce teen pregnancies? To answer such questions, we can experiment: If an intervention is welcomed but resources are scarce, we could use a lottery to randomly assign some people (or regions) to experience the new program and others to a control condition. If later the two groups differ, the intervention’s effect will be confirmed (Passell, 1993).

Let’s recap. A variable is anything that can vary (infant nutrition, intelligence, TV exposure—anything within the bounds of what is feasible and ethical). Experiments aim to manipulate an independent variable, measure the dependent variable, and control all other variables. An experiment has at least two different groups: an experimental group and a comparison or control group. Random assignment works to equate the groups before any treatment effects. In this way, an experiment tests the effect of at least one independent variable (what we manipulate) on at least one dependent variable (the outcome we measure). TABLE 3.2 compares the features of psychology’s research methods.

3 The independent variable, which the researchers manipulated, was the ethnicity-related names. The dependent variable, which they measured, was the positive response rate.

TABLE 3.2 Comparing Research Methods

Descriptive
Basic purpose: To observe and record behavior
How conducted: Do case studies, surveys, or naturalistic observations
What is manipulated: Nothing
Weaknesses: No control of variables; single cases may be misleading

Correlational
Basic purpose: To detect naturally occurring relationships; to assess how well one variable predicts another
How conducted: Compute statistical association, sometimes among survey responses
What is manipulated: Nothing
Weaknesses: Does not specify cause and effect

Experimental
Basic purpose: To explore cause and effect
How conducted: Manipulate one or more factors; use random assignment
What is manipulated: The independent variable(s)
Weaknesses: Sometimes not feasible; results may not generalize to other contexts; not ethical to manipulate certain variables



Statistical Reasoning in Everyday Life

In descriptive, correlational, and experimental research, statistics are tools that help us see and interpret what the unaided eye might miss. But statistical understanding benefits more than just researchers. To be an educated person today is to be able to apply simple statistical principles to everyday reasoning. One needn’t memorize complicated formulas to think more clearly and critically about data. Off-the-top-of-the-head estimates often misread reality and then mislead the public. Someone throws out a big, round number. Others echo it, and before long the big, round number becomes public misinformation. A few examples:

• Ten percent of people are lesbians or gay men. Or is it 2 to 3 percent, as suggested by various national surveys?
• We ordinarily use but 10 percent of our brain. Or is it closer to 100 percent?
• The human brain has 100 billion nerve cells. Or is it more like 40 billion, as suggested by extrapolation from sample counts?

“Figures can be misleading—so I’ve written a song which I think expresses the real story of the firm’s performance this quarter.”

The point to remember: Doubt big, round, undocumented numbers. Rather than swallowing top-of-the-head estimates, focus on thinking smarter by applying simple statistical principles to everyday reasoning.

dependent variable the outcome factor; the variable that may change in response to manipulations of the independent variable.

Describing Data 3-6 How can we describe data with measures of central tendency and variation? Once researchers have gathered their data, they must organize them in some meaningful way. One way to do this is to convert the data into a simple bar graph, as in FIGURE 3.8, which displays a distribution of different brands of trucks still on the road after a decade. When reading statistical graphs such as this, take care. It’s easy to design a graph to make a difference look big (FIGURE 3.8a) or small (FIGURE 3.8b). The secret lies in how you label the vertical scale (the Y-axis). The point to remember: Think smart. When viewing figures in magazines and on television, read the scale labels and note their range.

FIGURE 3.8 Read the scale labels An American truck manufacturer offered graph (a)—with actual brand names included—to suggest the much greater durability of its trucks. Note, however, how the apparent difference shrinks as the vertical scale changes (graph b). [Both graphs plot the percentage of each brand (Our brand, Brand X, Brand Y, Brand Z) still functioning after 10 years; graph (a)’s vertical scale runs only from 95 to 100 percent, while graph (b)’s spans 10 to 100 percent.]



Measures of Central Tendency The next step is to summarize the data using some measure of central tendency, a single score that represents a whole set of scores. The simplest measure is the mode, the most frequently occurring score or scores. The most commonly reported is the mean, or arithmetic average—the total sum of all the scores divided by the number of scores. On a divided highway, the median is the middle. So, too, with data: The median is the midpoint—the 50th percentile. If you arrange all the scores in order from the highest to the lowest, half will be above the median and half will be below it.
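Python’s statistics module computes all three measures directly. A minimal sketch, with hypothetical test scores:

```python
from statistics import mean, median, mode

scores = [72, 74, 77, 79, 82, 84, 85, 87, 87]   # hypothetical test scores

print(mode(scores))             # 87 — the most frequently occurring score
print(round(mean(scores), 1))   # 80.8 — the arithmetic average
print(median(scores))           # 82 — the midpoint: half above, half below
```

Note that the three measures need not agree; the next paragraph explains when and why they diverge.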

FIGURE 3.9 A skewed distribution This graphic representation of the distribution of a village’s incomes illustrates the three measures of central tendency—mode, median, and mean. Note how just a few high incomes make the mean—the fulcrum point that balances the incomes above and below—deceptively high. [The horizontal axis shows income per family in thousands of dollars, one family per data point; most incomes cluster between 30 and 100, while a few outliers reach 180, 950, and 1420. The mode falls lowest, the median in the middle, and the mean highest.]

The average person has one ovary and one testicle.

Measures of central tendency neatly summarize data. But consider what happens to the mean when a distribution is lopsided or skewed. With income data, for example, the mode, median, and mean often tell very different stories (FIGURE 3.9). This happens because the mean is biased by a few extreme scores. When Microsoft cofounder Bill Gates sits down in an intimate café, its average (mean) customer instantly becomes a billionaire. But the customer’s median wealth remains unchanged. Understanding this, you can see how a British newspaper could accurately run the headline “Income for 62% Is Below Average” (Waterhouse, 1993). Because the bottom half of British income earners receive only a quarter of the national income cake, most British people, like most people everywhere, make less than the mean. In the United States, Republicans have tended to tout the economy’s solid growth since 2000 using average income; Democrats have lamented the economy’s lackluster growth using median income (Paulos, 2006). Mean and median tell different true stories. The point to remember: Always note which measure of central tendency is reported. Then, if it is a mean, consider whether a few atypical scores could be distorting it.
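The café example is easy to reproduce. A sketch with made-up wealth figures (all numbers hypothetical):

```python
from statistics import mean, median

# Hypothetical net worth of five café customers, in dollars
cafe_wealth = [40_000, 55_000, 70_000, 90_000, 120_000]

with_gates = cafe_wealth + [50_000_000_000]     # a billionaire sits down

print(mean(cafe_wealth), median(cafe_wealth))   # mean 75,000; median 70,000
print(f"{mean(with_gates):,.0f}")               # the mean leaps past 8 billion
print(median(with_gates))                       # the median barely moves: 80,000
```

One extreme score drags the mean by billions while nudging the median by only $10,000, which is exactly why skewed data call for the median.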

Measures of Variation Knowing the value of an appropriate measure of central tendency can tell us a great deal. But the single number omits other information. It helps to know something about the amount of variation in the data—how similar or diverse the scores are. Averages derived from scores with low variability are more reliable than averages based on scores with high variability. Consider a basketball player who scored between 13 and 17 points in each of her first 10 games in a season. Knowing this, we would be more confident that she would score near 15 points in her next game than if her scores had varied from 5 to 25 points.



The range of scores—the gap between the lowest and highest scores—provides only a crude estimate of variation because a couple of extreme scores in an otherwise uniform group, such as the $950,000 and $1,420,000 incomes in Figure 3.9, will create a deceptively large range. The more useful standard for measuring how much scores deviate from one another is the standard deviation. It better gauges whether scores are packed together or dispersed, because it uses information from each score (TABLE 3.3). The computation assembles information about how much individual scores differ from the mean. If your college or university attracts students of a certain ability level, their intelligence scores will have a relatively small standard deviation compared with the more diverse community population outside your school.

You can grasp the meaning of the standard deviation if you consider how scores tend to be distributed in nature. Large numbers of data—heights, weights, intelligence scores, grades (though not incomes)—often form a symmetrical, bell-shaped distribution. Most cases fall near the mean, and fewer cases fall near either extreme. This bell-shaped distribution is so typical that we call the curve it forms the normal curve.

“The poor are getting poorer, but with the rich getting richer it all averages out in the long run.”

mode the most frequently occurring score(s) in a distribution.

mean the arithmetic average of a distribution, obtained by adding the scores and then dividing by the number of scores.

median the middle score in a distribution; half the scores are above it and half are below it.

range the difference between the highest and lowest scores in a distribution.

standard deviation a computed measure of how much scores vary around the mean score.

normal curve (normal distribution) a symmetrical, bell-shaped curve that describes the distribution of many types of data; most scores fall near the mean (68 percent fall within one standard deviation of it) and fewer and fewer near the extremes.

TABLE 3.3 Standard Deviation Is Much More Informative Than Mean Alone

Note that the test scores in Class A and Class B have the same mean (80), but very different standard deviations, which tell us more about how the students in each class are really faring.

Test Scores in Class A
Score   Deviation from the Mean   Squared Deviation
 72             −8                       64
 74             −6                       36
 77             −3                        9
 79             −1                        1
 82             +2                        4
 84             +4                       16
 85             +5                       25
 87             +7                       49
Total = 640; Mean = 640 ÷ 8 = 80; Sum of (deviations)² = 204
Standard deviation = √(Sum of (deviations)² ÷ Number of scores) = √(204 ÷ 8) = 5.0

Test Scores in Class B
Score   Deviation from the Mean   Squared Deviation
 60            −20                      400
 60            −20                      400
 70            −10                      100
 70            −10                      100
 90            +10                      100
 90            +10                      100
100            +20                      400
100            +20                      400
Total = 640; Mean = 640 ÷ 8 = 80; Sum of (deviations)² = 2000
Standard deviation = √(Sum of (deviations)² ÷ Number of scores) = √(2000 ÷ 8) = 15.8
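The arithmetic in TABLE 3.3 can be checked directly. Python’s statistics.pstdev uses the same population formula as the table (divide the summed squared deviations by the number of scores, then take the square root):

```python
from statistics import mean, pstdev

class_a = [72, 74, 77, 79, 82, 84, 85, 87]
class_b = [60, 60, 70, 70, 90, 90, 100, 100]

print(mean(class_a), mean(class_b))   # both classes average 80
print(round(pstdev(class_a), 1))      # 5.0 — Class A's scores huddle near 80
print(round(pstdev(class_b), 1))      # 15.8 — Class B's scores are far more spread out
```

Identical means, very different stories: only the standard deviation reveals that Class B mixes struggling and excelling students.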

FIGURE 3.10 The normal curve Scores on aptitude tests tend to form a normal, or bell-shaped, curve. For example, the Wechsler Adult Intelligence Scale calls the average score 100.



[The normal curve: 68 percent of people score within 15 points above or below 100, and about 95 percent of all people fall within 30 points of 100. Along the Wechsler intelligence score axis (55 to 145, marked in steps of 15), the successive segments contain roughly 0.1, 2, 13.5, 34, 34, 13.5, 2, and 0.1 percent of scores.]

As FIGURE 3.10 shows, a useful property of the normal curve is that roughly 68 percent of the cases fall within one standard deviation on either side of the mean. About 95 percent of cases fall within two standard deviations. Thus, about 68 percent of people taking an intelligence test will score within ±15 points of 100. About 95 percent will score within ±30 points.
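The 68 and 95 percent figures can be verified with Python’s statistics.NormalDist (available since Python 3.8):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # Wechsler scale: mean 100, SD 15

within_1_sd = iq.cdf(115) - iq.cdf(85)   # proportion between 85 and 115
within_2_sd = iq.cdf(130) - iq.cdf(70)   # proportion between 70 and 130

print(round(within_1_sd * 100))   # about 68 percent
print(round(within_2_sd * 100))   # about 95 percent
```

These proportions hold for any normal curve, whatever its mean and standard deviation, which is what makes the standard deviation such a useful yardstick.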

Making Inferences 3-7 What principles can guide our making generalizations from samples and deciding whether differences are significant? Data are “noisy.” The average score in one group (breast-fed babies) could conceivably differ from the average score in another group (formula-fed babies) not because of any real difference but merely because of chance fluctuations in the people sampled. How confidently, then, can we infer that an observed difference accurately estimates the true difference? For guidance, we can ask how reliable and significant the differences are.

When Is an Observed Difference Reliable? In deciding when it is safe to generalize from a sample, we should keep three principles in mind. 1. Representative samples are better than biased samples. The best basis for generalizing is not from the exceptional and memorable cases one finds at the extremes but from a representative sample of cases. Research never randomly samples the whole human population. Thus, it pays to keep in mind what population a study has sampled. 2. Less-variable observations are more reliable than those that are more variable. As we noted in the example of the basketball player whose game-togame points were consistent, an average is more reliable when it comes from scores with low variability. 3. More cases are better than fewer. An eager prospective student visits two university campuses, each for a day. At the first, the student randomly attends two classes and discovers both instructors to be witty and engaging. At the next campus, the two sampled instructors seem dull and uninspiring.



Returning home, the student (discounting the small sample size of only two teachers at each institution) tells friends about the “great teachers” at the first school, and the “bores” at the second. Again, we know it but we ignore it: Averages based on many cases are more reliable (less variable) than averages based on only a few cases. The point to remember: Don’t be overly impressed by a few anecdotes. Generalizations based on a few unrepresentative cases are unreliable.
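The third principle can be demonstrated by simulation. A sketch that draws hypothetical instructor ratings from one population, then compares averages based on 2 versus 25 sampled cases (all numbers invented for illustration):

```python
import random
from statistics import mean, pstdev

rng = random.Random(42)
# A hypothetical population of instructor ratings (mean ~70, SD ~10)
population = [rng.gauss(70, 10) for _ in range(10_000)]

def spread_of_averages(sample_size, trials=1000):
    """How much do averages of repeated samples of this size vary?"""
    return pstdev(mean(rng.sample(population, sample_size)) for _ in range(trials))

spread_few = spread_of_averages(2)     # averages of 2 cases swing widely
spread_many = spread_of_averages(25)   # averages of 25 cases hug the true mean

print(round(spread_few, 1), round(spread_many, 1))
```

Averages of two sampled instructors scatter several times more widely than averages of twenty-five, which is precisely why two classroom visits are a flimsy basis for judging a whole faculty.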

When Is a Difference Significant?


Statistical tests also help us determine whether differences are meaningful. Here is the underlying logic: When averages from two samples are each reliable measures of their respective populations (as when each is based on many observations that have small variability), then their difference is likely to be reliable as well. (Example: The less the variability in women’s and in men’s aggression scores, the more confidence we would have that any observed gender difference is reliable.) And when the difference between the sample averages is large, we have even more confidence that the difference between them reflects a real difference in their populations. In short, when the sample averages are reliable, and when the difference between them is relatively large, we say the difference has statistical significance. This means that the observed difference is probably not due to chance variation between the samples. In judging statistical significance, psychologists are conservative. They are like juries who must presume innocence until guilt is proven. For most psychologists, proof beyond a reasonable doubt means not making much of a finding unless the odds of its occurring by chance are less than 5 percent (an arbitrary criterion).
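The 5 percent logic can be made concrete with a simple permutation test (my illustration; the text itself does not prescribe this particular test): shuffle the group labels many times and count how often chance alone produces a difference as large as the one observed. A sketch with hypothetical aggression scores:

```python
import random
from statistics import mean

group_1 = [14, 15, 16, 17, 18, 19, 20, 21]   # hypothetical aggression scores
group_2 = [10, 11, 12, 13, 14, 15, 16, 17]
observed = mean(group_1) - mean(group_2)      # 4.0

rng = random.Random(0)
combined = group_1 + group_2
extreme = 0
trials = 10_000
for _ in range(trials):
    rng.shuffle(combined)                     # reassign "group" labels by chance
    diff = mean(combined[:8]) - mean(combined[8:])
    if abs(diff) >= observed:
        extreme += 1

p_value = extreme / trials
print(p_value)   # well under .05, so we call the difference statistically significant
```

Chance shufflings almost never reproduce a gap this large, so by the conventional 5 percent criterion we would judge the observed difference reliable rather than a fluke.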

When reading about research, you should remember that, given large enough or homogeneous enough samples, a difference between them may be “statistically significant” yet have little practical significance. For example, comparisons of intelligence test scores among hundreds of thousands of first-born and later-born individuals indicate a highly significant tendency for first-born individuals to have higher average scores than their later-born siblings (Kristensen & Bjerkedal, 2007; Zajonc & Markus, 1975). But because the scores differ by only one to three points, the difference has little practical importance. Such findings have caused some psychologists to advocate alternatives to significance testing (Hunter, 1997). Better, they say, to use other ways to express a finding’s effect size—its magnitude and reliability.

The point to remember: Statistical significance indicates the likelihood that a result will happen by chance. But this does not say anything about the importance of the result.

statistical significance a statistical statement of how likely it is that an obtained result occurred by chance.



Review Research Strategies: How Psychologists Ask and Answer Questions

3-1 How do theories advance psychological science?
Psychological theories organize observations and imply predictive hypotheses. After constructing precise operational definitions of their procedures, researchers test their hypotheses, validate and refine the theory, and, sometimes, suggest practical applications. If other researchers can replicate the study with similar results, we can then place greater confidence in the conclusion.

3-2 How do psychologists observe and describe behavior?
Psychologists observe and describe behavior using individual case studies, surveys among random samples of a population, and naturalistic observations. In generalizing from observations, remember: Representative samples are a better guide than vivid anecdotes.

3-3 What are positive and negative correlations, and why do they enable prediction but not cause-effect explanation?
Scatterplots help us to see correlations. A positive correlation (ranging from 0 to +1.00) indicates the extent to which two factors rise together. In a negative correlation (ranging from 0 to −1.00), one item rises as the other falls. An association (sometimes stated as a correlation coefficient) indicates the possibility of a cause-effect relationship, but it does not prove the direction of the influence, or whether an underlying third factor may explain the correlation.

3-4 What are illusory correlations?
Illusory correlations are random events that we notice and falsely assume are related. Patterns or sequences occur naturally in sets of random data, but we tend to interpret these patterns as meaningful connections, perhaps in an attempt to make sense of the world around us.

3-5 How do experiments, powered by random assignment, clarify cause and effect?
To discover cause-effect relationships, psychologists conduct experiments, manipulating one or more factors of interest and controlling other factors. Random assignment minimizes preexisting differences between the experimental group (exposed to the treatment) and the control group (given a placebo or different version of the treatment). The independent variable is the factor you manipulate to study its effect. The dependent variable is the factor you measure to discover any changes that occur in response to these manipulations. Studies may use a double-blind procedure to avoid the placebo effect and researcher’s bias.

3-6 How can we describe data with measures of central tendency and variation?
Three measures of central tendency are the median (the middle score in a group of data), the mode (the most frequently occurring score), and the mean (the arithmetic average). Measures of variation tell us how similar or diverse data are. A range describes the gap between the highest and lowest scores. The more useful measure, the standard deviation, states how much scores vary around the mean, or average, score. The normal curve is a bell-shaped curve that describes the distribution of many types of data.

3-7 What principles can guide our making generalizations from samples and deciding whether differences are significant?
Three principles are worth remembering: (1) Representative samples are better than biased samples. (2) Less-variable observations are more reliable than those that are more variable. (3) More cases are better than fewer. When averages from two samples are each reliable measures of their own populations, and the difference between them is relatively large, we can assume that the result is statistically significant—that it did not occur by chance alone.


Terms and Concepts to Remember

theory, p. 25
hypothesis, p. 25
operational definition, p. 26
replication, p. 26
case study, p. 27
survey, p. 27
population, p. 28
random sample, p. 28
naturalistic observation, p. 28
correlation, p. 29
correlation coefficient, p. 29
scatterplots, p. 30
illusory correlation, p. 32
experiment, p. 34
random assignment, p. 34
double-blind procedure, p. 35
placebo effect, p. 35
experimental group, p. 35
control group, p. 35
independent variable, p. 35
dependent variable, p. 36
mode, p. 38
mean, p. 38
median, p. 38
range, p. 39
standard deviation, p. 39
normal curve, p. 39
statistical significance, p. 41

Ask Yourself
1. If you were to become a research psychologist, what questions would you like to explore with experiments?
2. Find a graph in a popular magazine ad. How does the advertiser use (or abuse) statistics to make a point?

Test Yourself
1. Why, when testing a new drug to control blood pressure, would we learn more about its effectiveness from giving it to half of the participants in a group of 1000 than to all 1000 participants?
2. Consider a question posed by Christopher Jepson, David Krantz, and Richard Nisbett (1983) to University of Michigan introductory psychology students: The registrar’s office at the University of Michigan has found that usually about 100 students in Arts and Sciences have perfect marks at the end of their first term at the University. However, only about 10 to 15 students graduate with perfect marks. What do you think is the most likely explanation for the fact that there are more perfect marks after one term than at graduation?

(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers


The Biology of Mind

modules
4 Neural and Hormonal Systems
5 Tools of Discovery and Older Brain Structures
6 The Cerebral Cortex and Our Divided Brain

No principle is more central to today’s psychology, or to this book, than this: Everything psychological is simultaneously biological. Your every idea, every mood, every urge is a biological happening. You love, laugh, and cry with your body. Without your body—your genes, your brain, your appearance—you are, indeed, nobody. Although we find it convenient to talk separately of biological and psychological influences on behavior, we need to remember: To think, feel, or act without a body would be like running without legs.

Today’s science is riveted on our body’s most amazing parts—the brain, its component neural systems, and their genetic instructions. The brain’s ultimate challenge? To understand itself. How does the brain organize and communicate with itself? How do heredity and experience together wire the brain? How does the brain process the information we need to shoot a basketball? To delight in a guitarist’s notes? To remember our first kiss?

Our understanding of how the brain gives birth to the mind has come a long way. The ancient Greek philosopher Plato correctly located the mind in the spherical head—his idea of the perfect form. His student, Aristotle, believed the mind was in the heart, which pumps warmth and vitality to the body. The heart remains our symbol for love, but science has long since overtaken philosophy on this issue. It’s your brain, not your heart, that falls in love.

We have come far since the early 1800s, when the German physician Franz Gall invented phrenology, a popular but ill-fated theory that claimed bumps on the skull could reveal our mental abilities and our character traits. At one point, Britain had 29 phrenological societies, and phrenologists traveled North America giving skull readings (Hunt, 1993). Using a false name, humorist Mark Twain put one famous phrenologist to the test. “He found a cavity [and] startled me by saying that that cavity represented the total absence of the sense of humor!” Three months later, Twain sat for a second reading, this time identifying himself. Now “the cavity was gone, and in its place was . . . the loftiest bump of humor he had ever encountered in his life-long experience!” (Lopez, 2002). Phrenology did, however, correctly focus attention on the idea that various brain regions have particular functions.

You and I enjoy a privilege Gall did not have. We are living in a time when discoveries about the interplay of our biology and our behavior and mental processes are occurring at an exhilarating pace. Within little more than the last century, as we will see in Modules 4 through 6, researchers seeking to understand the biology of the mind have discovered that

䉴 the body is composed of cells (Module 4).
䉴 among these are nerve cells that conduct electricity and “talk” to one another by sending chemical messages across a tiny gap that separates them (Module 4).
䉴 specific brain systems serve specific functions—though not the functions Gall supposed (Module 5).
䉴 we integrate information processed in these different brain systems to construct our experience of sights and sounds, meanings and memories, pain and passion (Module 6).
䉴 our adaptive brain is wired by our experience (Module 6).

A wrongheaded theory Despite initial acceptance of Franz Gall’s speculations, bumps on the skull tell us nothing about the brain’s underlying functions. Nevertheless, some of Gall’s assumptions have held true. Different parts of the brain do control different aspects of behavior.


module 4
Neural and Hormonal Systems

Neural Communication
The Nervous System
The Endocrine System

䉴|| Neural Communication

“If I were a college student today, I don’t think I could resist going into neuroscience.” Novelist Tom Wolfe, 2004

biological psychology a branch of psychology concerned with the links between biology and behavior. (Some biological psychologists call themselves behavioral neuroscientists, neuropsychologists, behavior geneticists, physiological psychologists, or biopsychologists.) neuron a nerve cell; the basic building block of the nervous system. sensory neurons neurons that carry incoming information from the sensory receptors to the brain and spinal cord.

motor neurons neurons that carry outgoing information from the brain and spinal cord to the muscles and glands. interneurons neurons within the brain and spinal cord that communicate internally and intervene between the sensory inputs and motor outputs. dendrite the bushy, branching extensions of a neuron that receive messages and conduct impulses toward the cell body.

axon the extension of a neuron, ending in branching terminal fibers, through which messages pass to other neurons or to muscles or glands. myelin [MY-uh-lin] sheath a layer of fatty tissue segmentally encasing the fibers of many neurons; enables vastly greater transmission speed of neural impulses as the impulse hops from one node to the next.

action potential a neural impulse; a brief electrical charge that travels down an axon.


By studying the links between biological activity and psychological events, biological psychologists continue to expand our understanding of sleep and dreams, depression and schizophrenia, hunger and sex, stress and disease.

We have also realized that we are each a system composed of subsystems that are in turn composed of even smaller subsystems. Tiny cells organize to form such body organs as the stomach, heart, and brain. These organs in turn form larger systems for digestion, circulation, and information processing. And those systems are part of an even larger system—the individual, who in turn is a part of a family, culture, and community. Thus, we are biopsychosocial systems, and to understand our behavior, we need to study how these biological, psychological, and social-cultural systems work and interact.

In this book we start small and build from the bottom up—from nerve cells up to the brain, and to the environmental and cultural influences that interact with our biology. We will also work from the top down, as we consider how our thinking and emotions influence our brain and our health.

At all levels, psychologists examine how we process information—how we take in information; how we organize, interpret, and store it; and how we use it. The body’s information system handling all these tasks is built from billions of interconnected cells called neurons. To fathom our thoughts and actions, memories and moods, we must first understand how neurons work and communicate.

For scientists, it is a happy fact of nature that the information systems of humans and other animals operate similarly—so similarly, in fact, that you could not distinguish between small samples of brain tissue from a human and a monkey. This similarity allows researchers to study relatively simple animals, such as squids and sea slugs, to discover how our neural systems operate. It allows them to study other mammals’ brains to understand the organization of our own.
Cars differ, but all have engines, accelerators, steering wheels, and brakes. A Martian could study any one of them and grasp the operating principles. Likewise, animals differ, yet their nervous systems operate similarly. Though the human brain is more complex than a rat’s, both follow the same principles.

Neurons

4-1 What are neurons, and how do they transmit information?

Our body’s neural information system is complexity built from simplicity. Its building blocks are neurons, or nerve cells. Sensory neurons carry messages from the body’s tissues and sensory organs inward to the brain and spinal cord, for processing. The brain and spinal cord then send instructions out to the body’s tissues via the motor neurons. Between the sensory input and motor output, information is processed in the brain’s internal communication system via its interneurons. Our complexity resides mostly in our interneuron systems. Our nervous system has a few million sensory neurons, a few million motor neurons, and billions and billions of interneurons.


Dendrites (receive messages from other cells)

Terminal branches of axon (form junctions with other cells)

䉴 FIGURE 4.1 A motor neuron

Axon (passes messages away from the cell body to other neurons, muscles, or glands)

Cell body (the cell’s lifesupport center)

Neural impulse (action potential) (electrical signal traveling down the axon)

Myelin sheath (covers the axon of some neurons and helps speed neural impulses)

All are variations on the same theme (FIGURE 4.1). Each consists of a cell body and its branching fibers. The bushy dendrite fibers receive information and conduct it toward the cell body. From there, the cell’s axon passes the message along to other neurons or to muscles or glands. Axons speak. Dendrites listen. Unlike the short dendrites, axons are sometimes very long, projecting several feet through the body. A motor neuron carrying orders to a leg muscle, for example, has a cell body and axon roughly on the scale of a basketball attached to a rope 4 miles long.

Much as home electrical wire is insulated, so a layer of fatty tissue, called the myelin sheath, insulates the axons of some neurons and helps speed their impulses. As myelin is laid down up to about age 25, neural efficiency, judgment, and self-control grow (Fields, 2008). If the myelin sheath degenerates, multiple sclerosis results: Communication to muscles slows, with eventual loss of muscle control.

Depending on the type of fiber, a neural impulse travels at speeds ranging from a sluggish 2 miles per hour to a breakneck 200 or more miles per hour. But even this top speed is 3 million times slower than that of electricity through a wire. We measure brain activity in milliseconds (thousandths of a second) and computer activity in nanoseconds (billionths of a second). Thus, unlike the nearly instantaneous reactions of a high-speed computer, your reaction to a sudden event, such as a child darting in front of your car, may take a quarter-second or more. Your brain is vastly more complex than a computer, but slower at executing simple responses.

Neurons transmit messages when stimulated by signals from our senses or when triggered by chemical signals from neighboring neurons. At such times, a neuron fires an impulse, called the action potential—a brief electrical charge that travels down its axon. Neurons, like batteries, generate electricity from chemical events.
The chemistry-to-electricity process involves the exchange of ions, electrically charged atoms. The fluid interior of a resting axon has an excess of negatively charged ions, while the fluid outside the axon membrane has more positively charged ions. This positive-outside/negative-inside state is called the resting potential. Like a tightly guarded facility, the axon’s surface is very selective about what it allows in. We say the axon’s surface is selectively permeable. For example, a resting axon has gates that block positive sodium ions. When a neuron fires, however, the security parameters change: The first bit of the axon opens its gates, rather like sewer covers flipping open, and the positively charged

“I sing the body electric.” Walt Whitman, “Children of Adam” (1855)

48

MOD U LE 4 Neural and Hormonal Systems

FIGURE 4.2 Action potential

Cell body end of axon
Direction of neural impulse: toward axon terminals

1. Neuron stimulation causes a brief change in electrical charge. If strong enough, this produces depolarization and an action potential.

2. This depolarization produces another action potential a little farther along the axon. Gates in this neighboring area now open, and charged sodium atoms rush in. Meanwhile, a pump in the cell membrane (the sodium/potassium pump) transports the sodium ions back out of the cell.

3. As the action potential continues speedily down the axon, the first section has now completely recharged.

“What one neuron tells another neuron is simply how much it is excited.” Francis Crick, The Astonishing Hypothesis, 1994

threshold the level of stimulation required to trigger a neural impulse.

sodium ions flood through the membrane (FIGURE 4.2). This depolarizes that section of the axon, causing the axon’s next channel to open, and then the next, like dominoes falling, each one tripping the next. During a resting pause (the refractory period, rather like a camera flash pausing to recharge), the neuron pumps the positively charged sodium ions back outside. Then it can fire again. (In myelinated neurons, as in Figure 4.1, the action potential speeds up by hopping from one myelin “sausage” to the next.)

The mind boggles when imagining this electrochemical process repeating up to 100 or even 1000 times a second. But this is just the first of many astonishments. Each neuron is itself a miniature decision-making device performing complex calculations as it receives signals from hundreds, even thousands, of other neurons. Most of these signals are excitatory, somewhat like pushing a neuron’s accelerator. Others are inhibitory, more like pushing its brake. If excitatory signals minus inhibitory signals exceed a minimum intensity, or threshold, the combined signals trigger an action potential. (Think of it this way: If the excitatory party animals outvote the inhibitory party poopers, the party’s on.) The action potential then travels down the axon, which branches into junctions with hundreds or thousands of other neurons and with the body’s muscles and glands.

Increasing the level of stimulation above the threshold, however, will not increase the neural impulse’s intensity. The neuron’s reaction is an all-or-none response: Like guns, neurons either fire or they don’t. How then do we detect the intensity of a stimulus? How do we distinguish a gentle touch from a big hug? A strong stimulus—a slap rather than a tap—can trigger more neurons to fire, and to fire more often. But it does not affect the action potential’s strength or speed. Squeezing a trigger harder won’t make a bullet go faster.
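The threshold arithmetic in this passage (excitatory signals minus inhibitory signals must exceed a minimum intensity, and the resulting impulse is all-or-none) can be caricatured in a few lines of Python. The function name, numbers, and threshold here are invented for illustration; this is a cartoon of the “party animals versus party poopers” vote, not a physiological model:

```python
def neuron_fires(excitatory, inhibitory, threshold=10):
    """All-or-none: an action potential occurs only if net excitation
    exceeds the threshold. (A deliberate caricature; real neurons sum
    graded potentials over both time and space.)"""
    net = sum(excitatory) - sum(inhibitory)
    return net > threshold

# Excitation outvotes inhibition: the neuron fires.
print(neuron_fires([6, 7], [2]))      # net 11 exceeds threshold 10

# A bit more inhibition and the same inputs fall short: no impulse.
print(neuron_fires([6, 7], [4]))      # net 9 does not exceed 10

# A far stronger stimulus yields the same True, not a "bigger" impulse.
print(neuron_fires([60, 70], [2]))
```

Stimulus intensity instead shows up as more neurons firing, and firing more often; this single-neuron sketch deliberately leaves that out.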


How Neurons Communicate

synapse [SIN-aps] the junction between the axon tip of the sending neuron and the dendrite or cell body of the receiving neuron. The tiny gap at this junction is called the synaptic gap or synaptic cleft.

4-2 How do nerve cells communicate with other nerve cells?

neurotransmitters chemical messengers that cross the synaptic gaps between neurons. When released by the sending neuron, neurotransmitters travel across the synapse and bind to receptor sites on the receiving neuron, thereby influencing whether that neuron will generate a neural impulse.

“All information processing in the brain involves neurons ‘talking to’ each other at synapses.” Neuroscientist Solomon H. Snyder (1984)

FIGURE 4.3 How neurons communicate

Neurons interweave so intricately that even with a microscope you would have trouble seeing where one neuron ends and another begins. Scientists once believed that the axon of one cell fused with the dendrites of another in an uninterrupted fabric. Then British physiologist Sir Charles Sherrington (1857–1952) noticed that neural impulses were taking an unexpectedly long time to travel a neural pathway. Inferring that there must be a brief interruption in the transmission, Sherrington called the meeting point between neurons a synapse. We now know that the axon terminal of one neuron is in fact separated from the receiving neuron by a synaptic gap (or synaptic cleft) less than a millionth of an inch wide. Spanish anatomist Santiago Ramón y Cajal (1852–1934) marveled at these near-unions of neurons, calling them “protoplasmic kisses.” “Like elegant ladies airkissing so as not to muss their makeup, dendrites and axons don’t quite touch,” notes poet Diane Ackerman (2004). How do the neurons execute this protoplasmic kiss, sending information across the tiny synaptic gap? The answer is one of the important scientific discoveries of our age. When an action potential reaches the knoblike terminals at an axon’s end, it triggers the release of chemical messengers, called neurotransmitters (FIGURE 4.3).

1. Electrical impulses (action potentials) travel down a neuron’s axon until reaching a tiny junction known as a synapse. Sending neuron

Action potential

Receiving neuron

Synapse

Sending neuron Action potential

Synaptic gap

Receptor sites on receiving neuron

Reuptake

Axon terminal

Neurotransmitter

2. When an action potential reaches an axon terminal, it stimulates the release of neurotransmitter molecules. These molecules cross the synaptic gap and bind to receptor sites on the receiving neuron. This allows electrically charged atoms to enter the receiving neuron and excite or inhibit a new action potential.

3. The sending neuron normally reabsorbs excess neurotransmitter molecules, a process called reuptake.


Within 1/10,000th of a second, the neurotransmitter molecules cross the synaptic gap and bind to receptor sites on the receiving neuron—as precisely as a key fits a lock. For an instant, the neurotransmitter unlocks tiny channels at the receiving site, and electrically charged atoms flow in, exciting or inhibiting the receiving neuron’s readiness to fire. Then, in a process called reuptake, the sending neuron reabsorbs the excess neurotransmitters.

How Neurotransmitters Influence Us

4-3 How do neurotransmitters influence behavior, and how do drugs and other chemicals affect neurotransmission?

FIGURE 4.4 Neurotransmitter pathways Each of the brain’s differing chemical messengers has designated pathways where it operates, as shown here for the neurotransmitters serotonin and dopamine (Carter, 1998).

Both photos from Mapping the Mind, Rita Carter, © 1989 University of California Press

“When it comes to the brain, if you want to see the action, follow the neurotransmitters.” Neuroscientist Floyd Bloom (1993)

In their quest to understand neural communication, researchers have discovered dozens of different neurotransmitters and almost as many new questions: Are certain neurotransmitters found only in specific places? How do they affect our moods, memories, and mental abilities? Can we boost or diminish these effects through drugs or diet? In other modules we examine neurotransmitter influences on depression and euphoria, hunger and thinking, addictions and therapy. For now, let’s glimpse how neurotransmitters influence our motions and our emotions.

A particular pathway in the brain may use only one or two neurotransmitters (FIGURE 4.4), and particular neurotransmitters may have particular effects on behavior and emotions. (TABLE 4.1 offers examples.) Acetylcholine (ACh) is one of the best-understood neurotransmitters. In addition to its role in learning and memory, ACh is the messenger at every junction between a motor neuron and skeletal muscle. When ACh is released to our muscle cell receptors, the muscle contracts. If ACh transmission is blocked, as happens during some kinds of anesthesia, the muscles cannot contract and we are paralyzed.

Candace Pert and Solomon Snyder (1973) made an exciting discovery about neurotransmitters when they attached a radioactive tracer to morphine, showing where it was taken up in an animal’s brain. The morphine, an opiate drug that elevates mood and eases pain, bound to receptors in areas linked with mood and pain sensations. But why would the brain have these “opiate receptors”? Why would it have a chemical lock, unless it also had a natural key to open it? Researchers soon confirmed that the brain does indeed produce its own naturally occurring opiates. Our body releases several types of neurotransmitter molecules similar to morphine in response to pain and vigorous exercise. These endorphins (short

Serotonin pathways

Dopamine pathways


TABLE 4.1 Some Neurotransmitters and Their Functions

Acetylcholine (ACh)
  Function: Enables muscle action, learning, and memory.
  Examples of malfunctions: With Alzheimer’s disease, ACh-producing neurons deteriorate.

Dopamine
  Function: Influences movement, learning, attention, and emotion.
  Examples of malfunctions: Excess dopamine receptor activity is linked to schizophrenia. Starved of dopamine, the brain produces the tremors and decreased mobility of Parkinson’s disease.

Serotonin
  Function: Affects mood, hunger, sleep, and arousal.
  Examples of malfunctions: Undersupply linked to depression. Prozac and some other antidepressant drugs raise serotonin levels.

Norepinephrine
  Function: Helps control alertness and arousal.
  Examples of malfunctions: Undersupply can depress mood.

GABA (gamma-aminobutyric acid)
  Function: A major inhibitory neurotransmitter.
  Examples of malfunctions: Undersupply linked to seizures, tremors, and insomnia.

Glutamate
  Function: A major excitatory neurotransmitter; involved in memory.
  Examples of malfunctions: Oversupply can overstimulate the brain, producing migraines or seizures (which is why some people avoid MSG, monosodium glutamate, in food).

reuptake a neurotransmitter’s reabsorption by the sending neuron.

endorphins [en-DOR-fins] “morphine within”—natural, opiatelike neurotransmitters linked to pain control and to pleasure.

for endogenous [produced within] morphine), as we now call them, help explain good feelings such as the “runner’s high,” the painkilling effects of acupuncture, and the indifference to pain in some severely injured people. But once again, new knowledge led to new questions.

How Drugs and Other Chemicals Alter Neurotransmission

If indeed the endorphins lessen pain and boost mood, why not flood the brain with artificial opiates, thereby intensifying the brain’s own “feel-good” chemistry? One problem is that when flooded with opiate drugs such as heroin and morphine, the brain may stop producing its own natural opiates. When the drug is withdrawn, the brain may then be deprived of any form of opiate, causing intense discomfort. For suppressing the body’s own neurotransmitter production, nature charges a price.

Drugs and other chemicals affect brain chemistry at synapses, often by either amplifying or blocking a neurotransmitter’s activity. An agonist molecule may be similar enough to a neurotransmitter to mimic its effects (FIGURE 4.5b on the next page) or it may block the neurotransmitter’s reuptake. Some opiate drugs, for example, produce a temporary “high” by amplifying normal sensations of arousal or pleasure. Not so pleasant are the effects of black widow spider venom, which floods synapses with ACh. The result? Violent muscle contractions, convulsions, and possible death.

Antagonists block a neurotransmitter’s functioning. Botulin, a poison that can form in improperly canned food, causes paralysis by blocking ACh release. (Small injections of botulin—Botox—smooth wrinkles by paralyzing the underlying facial muscles.) Other antagonists are enough like the natural neurotransmitter to occupy its receptor site and block its effect, as in Figure 4.5c, but are not similar enough to stimulate the receptor (rather like foreign coins that fit into, but won’t operate, a soda or candy machine). Curare, a poison certain South American Indians have applied to hunting-dart tips, occupies and blocks ACh receptor sites, leaving the neurotransmitter unable to affect the muscles. Struck by one of these darts, an animal becomes paralyzed.
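The lock-and-key contrast in this section, where an agonist fits the receptor and activates it while an antagonist fits well enough to occupy the site but not to activate it, can be sketched as a toy lookup. Everything here (the function, the molecule categories, the returned labels) is invented for illustration:

```python
def receptor_response(molecule):
    """Toy lock-and-key: does a molecule fit the receptor site,
    and if it fits, does it also activate the receptor?"""
    outcomes = {
        "neurotransmitter": ("binds", "activates"),
        "agonist": ("binds", "activates"),   # mimic, like morphine at endorphin receptors
        "antagonist": ("binds", "blocks"),   # occupier, like curare at ACh receptors
    }
    # Anything else is the foreign coin that doesn't fit the machine at all.
    return outcomes.get(molecule, ("no fit", None))

print(receptor_response("agonist"))     # ('binds', 'activates')
print(receptor_response("antagonist"))  # ('binds', 'blocks')
```

The practical difference: an antagonist produces no signal itself, yet by occupying the site it also prevents the real neurotransmitter from acting, which is how curare leaves muscles paralyzed.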

Physician Lewis Thomas, on the endorphins: “There it is, a biologically universal act of mercy. I cannot explain it, except to say that I would have put it in had I been around at the very beginning, sitting as a member of a planning committee.” The Youngest Science, 1983


FIGURE 4.5 Agonists and antagonists (Diagram labels: sending neuron, vesicles containing neurotransmitters, action potential, synaptic gap, neurotransmitter molecule, receptor sites on receiving neuron, receiving cell membrane.)

(a) Neurotransmitters carry a message from a sending neuron across a synapse to receptor sites on a receiving neuron. This neurotransmitter molecule fits the receptor site on the receiving neuron, much as a key fits a lock.

(b) Agonist mimics neurotransmitter. This agonist molecule excites. It is similar enough in structure to the neurotransmitter molecule to mimic its effects on the receiving neuron. Morphine, for instance, mimics the action of endorphins.

(c) Antagonist blocks neurotransmitter. This antagonist molecule inhibits. It has a structure similar enough to the neurotransmitter to occupy its receptor site and block its action, but not similar enough to stimulate the receptor. Curare poisoning paralyzes its victims by blocking ACh receptors involved in muscle movement.

䉴|| The Nervous System

4-4 What are the functions of the nervous system’s main divisions?

nervous system the body’s speedy, electrochemical communication network, consisting of all the nerve cells of the peripheral and central nervous systems.

central nervous system (CNS) the brain and spinal cord.

peripheral nervous system (PNS) the sensory and motor neurons that connect the central nervous system (CNS) to the rest of the body.

nerves bundled axons that form neural “cables” connecting the central nervous system with muscles, glands, and sense organs.

To live is to take in information from the world and the body’s tissues, to make decisions, and to send back information and orders to the body’s tissues. All this happens thanks to our body’s speedy electrochemical communications network, our nervous system (FIGURE 4.6). The brain and spinal cord form the central nervous system (CNS), which communicates with the body’s sensory receptors, muscles, and glands via the peripheral nervous system (PNS). Neurons are the nervous system’s building blocks. PNS information travels through axons that are bundled into the electrical cables we know as nerves. The optic nerve, for example, bundles a million axon fibers into a single cable carrying the messages each eye sends to the brain (Mason & Kandel, 1991). As noted earlier, information travels in the nervous system through sensory neurons, motor neurons, and interneurons.

somatic nervous system the division of the peripheral nervous system that controls the body’s skeletal muscles. Also called the skeletal nervous system.

The Peripheral Nervous System

Our peripheral nervous system has two components—somatic and autonomic. Our somatic nervous system enables voluntary control of our skeletal muscles. As you


䉴 FIGURE 4.6 The functional divisions of the human nervous system

Nervous system
  Central (brain and spinal cord)
  Peripheral
    Somatic (controls voluntary movements of skeletal muscles)
    Autonomic (controls self-regulated action of internal organs and glands)
      Sympathetic (arousing)
      Parasympathetic (calming)

reach the bottom of this page, your somatic nervous system will report to your brain the current state of your skeletal muscles and carry instructions back, triggering your hand to turn the page.

Our autonomic nervous system controls our glands and the muscles of our internal organs, influencing such functions as glandular activity, heartbeat, and digestion. Like an automatic pilot, this system may be consciously overridden, but usually it operates on its own (autonomously).

The autonomic nervous system serves two important, basic functions (FIGURE 4.7 on the next page). The sympathetic nervous system arouses and expends energy. If something alarms, enrages, or challenges you, your sympathetic system will accelerate your heartbeat, raise your blood pressure, slow your digestion, raise your blood sugar, and cool you with perspiration, making you alert and ready for action. When the stress subsides, your parasympathetic nervous system produces opposite effects. It conserves energy as it calms you by decreasing your heartbeat, lowering your blood sugar, and so forth. In everyday situations, the sympathetic and parasympathetic nervous systems work together to keep you in a steady internal state.

autonomic [aw-tuh-NAHM-ik] nervous system the part of the peripheral nervous system that controls the glands and the muscles of the internal organs (such as the heart). Its sympathetic division arouses; its parasympathetic division calms.

sympathetic nervous system the division of the autonomic nervous system that arouses the body, mobilizing its energy in stressful situations. parasympathetic nervous system the division of the autonomic nervous system that calms the body, conserving its energy.

The Central Nervous System

From the simplicity of neurons “talking” to other neurons arises the complexity of the central nervous system’s brain and spinal cord. It is the brain that enables our humanity—our thinking, feeling, and acting. Tens of billions of neurons, each communicating with thousands of other neurons, yield an ever-changing wiring diagram that dwarfs a powerful computer. With some 40 billion neurons, each having roughly 10,000 contacts with other neurons, we end up with perhaps 400 trillion synapses—places where neurons meet and greet their neighbors (de Courten-Myers, 2005). A grain-of-sand–sized speck of your brain contains some 100,000 neurons and one billion “talking” synapses (Ramachandran & Blakeslee, 1998).

The brain’s neurons cluster into work groups called neural networks. To understand why, Stephen Kosslyn and Olivier Koenig (1992, p. 12) invite us to “think about why cities exist; why don’t people distribute themselves more evenly across the countryside?”

“The body is made up of millions and millions of crumbs.” © Tom Swick

Stephen Colbert: “How does the brain work? Five words or less.” Steven Pinker: “Brain cells fire in patterns.” The Colbert Report, February 8, 2007


FIGURE 4.7 The dual functions of the autonomic nervous system The autonomic nervous system controls the more autonomous (or self-regulating) internal functions. Its sympathetic division arouses and expends energy. Its parasympathetic division calms and conserves energy, allowing routine maintenance activity. For example, sympathetic stimulation accelerates heartbeat, whereas parasympathetic stimulation slows it.


SYMPATHETIC NERVOUS SYSTEM (arousing): dilates pupil; accelerates heartbeat; inhibits digestion; stimulates glucose release by liver; stimulates secretion of epinephrine, norepinephrine; relaxes bladder; stimulates ejaculation in male.

PARASYMPATHETIC NERVOUS SYSTEM (calming): contracts pupil; slows heartbeat; stimulates digestion; stimulates gallbladder; contracts bladder; allows blood flow to sex organs.

(Organs shown in the figure: brain, spinal cord, heart, stomach, pancreas, liver, adrenal gland, kidney.)

reflex a simple, automatic response to a sensory stimulus, such as the knee-jerk response.

Like people networking with people, neurons network with nearby neurons with which they can have short, fast connections. As in FIGURE 4.8, the cells in each layer of a neural network connect with various cells in the next layer. Learning occurs as feedback strengthens connections. Learning to play the violin, for example, builds neural connections. Neurons that fire together wire together. The spinal cord is an information highway connecting the peripheral nervous system to the brain. Ascending neural fibers send up sensory information, and descending fibers send back motor-control information. The neural pathways governing our reflexes, our automatic responses to stimuli, illustrate the spinal cord’s work. A simple spinal reflex pathway is composed of a single sensory neuron and a single motor neuron. These often communicate through an interneuron. The knee-jerk response, for example, involves one such simple pathway. A headless warm body could do it. Another such pathway enables the pain reflex (FIGURE 4.9). When your finger touches a flame, neural activity excited by the heat travels via sensory neurons to interneurons in your spinal cord. These interneurons respond by activating motor neurons leading to the muscles in your arm. Because the simple pain reflex pathway runs through the spinal cord and right back out, your hand jerks away from the


FIGURE 4.8 A simplified neural network: learning to play the violin Inputs (lessons, practice, master classes, music camps, time spent with musical friends) feed into networks of neurons in the brain; the brain learns by modifying certain connections in response to feedback (specific skills develop), yielding outputs (beautiful music!).

Neurons in the brain connect with one another to form networks with nearby neurons. Encoded in these networks of interrelating neurons is your own enduring identity (as a musician, an athlete, a devoted friend)—your sense of self that extends across the years.

candle’s flame before your brain receives and responds to the information that causes you to feel pain. That’s why it feels as if your hand jerks away not by your choice, but on its own. Information travels to and from the brain by way of the spinal cord. Were the top of your spinal cord severed, you would not feel pain from your body below. Nor would you feel pleasure. With your brain literally out of touch with your body, you would lose all sensation and voluntary movement in body regions with sensory and motor connections to the spinal cord below its point of injury. You would exhibit the knee-jerk reflex without feeling the tap. When the brain center keeping the brakes on erections is severed, men paralyzed below the waist may be capable of an erection (a simple reflex) if their genitals are stimulated (Goldstein, 2000). Females similarly paralyzed may respond with vaginal lubrication. But, depending on where and how completely the spinal cord is severed, they may be genitally unresponsive to erotic images and have no genital feeling (Kennedy & Over, 1990; Sipski & Alexander, 1999). To produce bodily pain or pleasure, the sensory information must reach the brain.
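The earlier principle that "neurons that fire together wire together" can be illustrated with a toy Hebbian-learning sketch. This is a minimal illustration, not anything from the text: the layer sizes, learning rate, and activity patterns are made-up assumptions.

```python
# Toy Hebbian learning: connections between co-active units strengthen.
# All sizes and values below are illustrative assumptions.

n_in, n_out = 4, 3          # a tiny two-layer network
rate = 0.1                  # assumed learning rate
# weights[i][j] = strength of the connection from input i to output j
weights = [[0.0] * n_out for _ in range(n_in)]

def hebbian_step(pre, post):
    """Strengthen each connection in proportion to joint activity."""
    for i in range(n_in):
        for j in range(n_out):
            weights[i][j] += rate * pre[i] * post[j]

# Repeated "practice": the same input pattern co-occurs with the same
# output pattern many times, so their shared connections grow.
pre = [1, 1, 0, 0]
post = [1, 0, 0]
for _ in range(50):
    hebbian_step(pre, post)

print(round(weights[0][0], 2))  # co-active pair: connection has grown
print(round(weights[2][0], 2))  # never-active input: connection unchanged
```

After 50 pairings the co-active connection has grown to 5.0, while every connection involving an inactive unit remains at 0, mirroring how feedback-strengthened pathways come to encode a practiced skill.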

“If the nervous system be cut off between the brain and other parts, the experiences of those other parts are nonexistent for the mind. The eye is blind, the ear deaf, the hand insensible and motionless.” William James, Principles of Psychology, 1890

FIGURE 4.9 A simple reflex
1. In this simple hand-withdrawal reflex, information is carried from skin receptors along a sensory neuron (incoming information) to the spinal cord (shown by the red arrow). From here it is passed via interneurons to motor neurons (outgoing information) that lead to muscles in the hand and arm (blue arrow).
2. Because this reflex involves only the spinal cord, the hand jerks away from the candle flame even before information about the event has reached the brain, causing the experience of pain.


endocrine [EN-duh-krin] system the body’s “slow” chemical communication system; a set of glands that secrete hormones into the bloodstream.

hormones chemical messengers that are manufactured by the endocrine glands, travel through the bloodstream, and affect other tissues.

adrenal [ah-DREEN-el] glands a pair of endocrine glands that sit just above the kidneys and secrete hormones (epinephrine and norepinephrine) that help arouse the body in times of stress.

pituitary gland the endocrine system’s most influential gland. Under the influence of the hypothalamus, the pituitary regulates growth and controls other endocrine glands.

FIGURE 4.10 The endocrine system Glands shown: hypothalamus (brain region controlling the pituitary gland); pituitary gland (secretes many different hormones, some of which affect other glands); thyroid gland (affects metabolism, among other things); parathyroids (help regulate the level of calcium in the blood); adrenal glands (inner part helps trigger the “fight-or-flight” response); pancreas (regulates the level of sugar in the blood); testis (secretes male sex hormones); ovary (secretes female sex hormones).

The Endocrine System

4-5 How does the endocrine system—the body’s slower information system—transmit its messages?

So far we have focused on the body’s speedy electrochemical information system. Interconnected with your nervous system is a second communication system, the endocrine system (FIGURE 4.10). The endocrine system’s glands secrete another form of chemical messengers, hormones, which travel through the bloodstream and affect other tissues, including the brain. When they act on the brain, they influence our interest in sex, food, and aggression.

Some hormones are chemically identical to neurotransmitters (those chemical messengers that diffuse across a synapse and excite or inhibit an adjacent neuron). The endocrine system and nervous system are therefore close relatives: Both produce molecules that act on receptors elsewhere. Like many relatives, they also differ. The speedy nervous system zips messages from eyes to brain to hand in a fraction of a second. Endocrine messages trudge along in the bloodstream, taking several seconds or more to travel from the gland to the target tissue. If the nervous system’s communication delivers messages rather like e-mail, the endocrine system is the body’s snail mail. But slow and steady sometimes wins the race. Endocrine messages tend to outlast the effects of neural messages. That helps explain why upset feelings may linger, sometimes beyond our thinking about what upset us. It takes time for us to “simmer down.”

In a moment of danger, for example, the autonomic nervous system orders the adrenal glands on top of the kidneys to release epinephrine and norepinephrine (also called adrenaline and noradrenaline). These hormones increase heart rate, blood pressure, and blood sugar, providing us with a surge of energy. When the emergency passes, the hormones—and the feelings of excitement—linger a while.

The endocrine system’s hormones influence many aspects of our lives—growth, reproduction, metabolism, mood—working with our nervous system to keep everything in balance while we respond to stress, exertion, and our own thoughts.

The most influential endocrine gland is the pituitary gland, a pea-sized structure located in the core of the brain, where it is controlled by an adjacent brain area, the hypothalamus. The pituitary releases hormones that influence growth, and its secretions also influence the release of hormones by other endocrine glands. The pituitary, then, is a sort of master gland (whose own master is the hypothalamus). For example, under the brain’s influence, the pituitary triggers your sex glands to release sex hormones. These in turn influence your brain and behavior.

This feedback system (brain → pituitary → other glands → hormones → brain) reveals the intimate connection of the nervous and endocrine systems. The nervous system directs endocrine secretions, which then affect the nervous system. Conducting and coordinating this whole electrochemical orchestra is that maestro we call the brain.


Review: Neural and Hormonal Systems

4-1 What are neurons, and how do they transmit information?
Neurons are the elementary components of the nervous system, the body’s speedy electrochemical information system. Sensory neurons carry incoming information from sense receptors to the brain and spinal cord, and motor neurons carry information from the brain and spinal cord out to the muscles and glands. Interneurons communicate within the brain and spinal cord and between sensory and motor neurons. A neuron sends signals through its axon and receives signals through its branching dendrites. If the combined signals are strong enough, the neuron fires, transmitting an electrical impulse (the action potential) down its axon by means of a chemistry-to-electricity process. The neuron’s reaction is an all-or-none process.

4-2 How do nerve cells communicate with other nerve cells?
When action potentials reach the end of an axon (the axon terminals), they stimulate the release of neurotransmitters. These chemical messengers carry a message from the sending neuron across a synapse to receptor sites on a receiving neuron. The sending neuron, in a process called reuptake, then normally absorbs the excess neurotransmitter molecules in the synaptic gap. The receiving neuron, if the signals from that neuron and others are strong enough, generates its own action potential and relays the message to other cells.

4-3 How do neurotransmitters influence behavior, and how do drugs and other chemicals affect neurotransmission?
Each neurotransmitter travels a designated path in the brain and has a particular effect on behavior and emotions. Acetylcholine affects muscle action, learning, and memory. Endorphins are natural opiates released in response to pain and exercise. Drugs and other chemicals affect communication at the synapse. Agonists excite by mimicking particular neurotransmitters or by blocking their reuptake. Antagonists inhibit a particular neurotransmitter’s release or block its effect.

4-4 What are the functions of the nervous system’s main divisions?
One major division of the nervous system is the central nervous system (CNS), the brain and spinal cord. The other is the peripheral nervous system (PNS), which connects the CNS to the rest of the body by means of nerves. The peripheral nervous system has two main divisions. The somatic nervous system enables voluntary control of the skeletal muscles. The autonomic nervous system, through its sympathetic and parasympathetic divisions, controls involuntary muscles and glands. Neurons cluster into working networks.

4-5 How does the endocrine system—the body’s slower information system—transmit its messages?
The endocrine system is a set of glands that secrete hormones into the bloodstream, where they travel through the body and affect other tissues, including the brain. The endocrine system’s master gland, the pituitary, influences hormone release by other glands. In an intricate feedback system, the brain’s hypothalamus influences the pituitary gland, which influences other glands, which release hormones, which in turn influence the brain.

Terms and Concepts to Remember
biological psychology, p. 46; neuron, p. 46; sensory neurons, p. 46; motor neurons, p. 46; interneurons, p. 46; dendrite, p. 47; axon, p. 47; myelin [MY-uh-lin] sheath, p. 47; action potential, p. 47; threshold, p. 48; synapse [SIN-aps], p. 49; neurotransmitters, p. 49; reuptake, p. 50; endorphins [en-DOR-fins], p. 50; nervous system, p. 52; central nervous system (CNS), p. 52; peripheral nervous system (PNS), p. 52; nerves, p. 52; somatic nervous system, p. 52; autonomic [aw-tuh-NAHM-ik] nervous system, p. 53; sympathetic nervous system, p. 53; parasympathetic nervous system, p. 53; reflex, p. 54; endocrine [EN-duh-krin] system, p. 56; hormones, p. 56; adrenal [ah-DREEN-el] glands, p. 56; pituitary gland, p. 56

Test Yourself
1. How do neurons communicate with one another?
2. How does information flow through your nervous system as you pick up a fork? Can you summarize this process?
3. Why is the pituitary gland called the “master gland”?
(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. Can you recall a time when the endorphin response may have protected you from feeling extreme pain?

2. Does our nervous system’s design—with its synaptic gaps that chemical messenger molecules cross in an imperceptibly brief instant—surprise you? Would you have designed yourself differently?

3. Can you remember feeling an extended period of discomfort after some particularly stressful event? How long did those feelings last?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 5

The Tools of Discovery: Having Our Head Examined

Older Brain Structures

“You’re certainly a lot less fun since the operation.” (© The New Yorker Collection, 1992, Gahan Wilson, from cartoonbank.com. All rights reserved.)

“I am a brain, Watson. The rest of me is a mere appendix.” Sherlock Holmes, in Arthur Conan Doyle’s “The Adventure of the Mazarin Stone”

Tools of Discovery and Older Brain Structures

In a jar on a display shelf in Cornell University’s psychology department resides the well-preserved brain of Edward Bradford Titchener, a great turn-of-the-century experimental psychologist and proponent of the study of consciousness. Imagine yourself gazing at that wrinkled mass of grayish tissue, wondering if in any sense Titchener is still there.1

You might answer that, without the living whir of electrochemical activity, there could be nothing of Titchener in his preserved brain. Consider then an experiment about which the inquisitive Titchener himself might have daydreamed. Imagine that just moments before his death, someone removed Titchener’s brain from his body and kept it alive by floating it in a tank of cerebral fluid while feeding it enriched blood. Would Titchener still be in there? Further imagine that someone then transplanted the still-living brain into the body of a person with severe brain damage. To whose home should the recovered patient return?

That we can imagine such questions illustrates how convinced we are that we live “somewhere north of the neck” (Fodor, 1999). And for good reason: The brain enables the mind—seeing, hearing, smelling, feeling, remembering, thinking, speaking, dreaming. The brain is what poet Diane Ackerman (2004, p. 3) calls “that shiny mound of being . . . that dream factory . . . that huddle of neurons calling all the plays . . . that fickle pleasuredrome.” Moreover, it is the brain that self-reflectively analyzes the brain. When we’re thinking about our brain, we’re thinking with our brain—by firing countless millions of synapses and releasing billions of neurotransmitter molecules.

The effect of hormones on experiences such as love reminds us that we would not be of the same mind if we were a bodiless brain. Brain + body = mind. Nevertheless, say neuroscientists, the mind is what the brain does.
If all your organs were transplanted, you would still be much the same person, unless, as psychologist Jonathan Haidt has said, one of those organs was the brain. But precisely where and how are the mind’s functions tied to the brain? Let’s first see how scientists explore such questions.

The Tools of Discovery: Having Our Head Examined

5-1 How do neuroscientists study the brain’s connections to behavior and mind?

For centuries, we had no tools high-powered yet gentle enough to explore the living human brain. Clinical observations revealed some brain-mind connections. Physicians noted, for example, that damage to one side of the brain often caused numbness or paralysis on the body’s opposite side, suggesting that the body’s right side is wired to the brain’s left side, and vice versa. Others noticed that damage to the back of the brain disrupted vision, and that damage to the left-front part of the brain produced speech difficulties. Gradually, these early explorers were mapping the brain.

1. Carl Sagan’s Broca’s Brain (1979) inspired this question.


Banking brains Francine Benes, director of McLean Hospital’s Brain Bank, sees the collection as a valuable database. (Tom Landers/Boston Globe)

Now, within a lifetime, the whole brain-mapping process has changed. The known universe’s most amazing organ is being probed and mapped by a new generation of neural cartographers. Whether in the interests of science or medicine, they can selectively lesion (destroy) tiny clusters of normal or defective brain cells, leaving the surrounding tissue unharmed. Such studies have revealed, for example, that damage to one area of the hypothalamus in a rat’s brain reduces eating, causing the rat to starve unless force-fed. Damage in another area produces overeating. Today’s scientists can also electrically, chemically, or magnetically stimulate various parts of the brain and note the effects; snoop on the messages of individual neurons and eavesdrop on the chatter of billions of neurons; and see color representations of the brain’s energy-consuming activity. These techniques for peering into the thinking, feeling brain are doing for psychology what the microscope did for biology and the telescope did for astronomy. Let’s look at a few of them and see how neuroscientists study the working brain.

Recording the Brain’s Electrical Activity

Right now, your mental activity is giving off telltale electrical, metabolic, and magnetic signals that would enable neuroscientists to observe your brain at work. The tips of modern microelectrodes are so small they can detect the electrical pulse in a single neuron. For example, we can now detect exactly where the information goes in a cat’s brain when someone strokes its whisker. Electrical activity in the brain’s billions of neurons sweeps in regular waves across its surface. An electroencephalogram (EEG) is an amplified read-out of such waves. Studying an EEG of the brain’s activity is like studying a car engine by listening to its hum. By presenting a stimulus repeatedly and having a computer filter out brain activity unrelated to the stimulus, one can identify the electrical wave evoked by the stimulus (FIGURE 5.1).
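The filtering step just described is, at heart, trial averaging: activity unrelated to the stimulus averages toward zero across presentations, while the stimulus-locked wave survives. A small sketch of the idea, with a made-up waveform and noise level:

```python
# Averaging repeated presentations cancels background activity and
# leaves the evoked wave. The waveform and noise level are invented.
import random

random.seed(0)
evoked = [0.0, 1.0, 3.0, 1.0, 0.0]   # hypothetical stimulus-locked wave

def record_trial():
    # One recorded trial = evoked wave + unrelated background activity.
    return [v + random.gauss(0.0, 2.0) for v in evoked]

def average_trials(n):
    total = [0.0] * len(evoked)
    for _ in range(n):
        for t, v in enumerate(record_trial()):
            total[t] += v
    return [v / n for v in total]

# A single trial is dominated by noise; the average of many is not.
estimate = average_trials(2000)
peak = max(range(len(estimate)), key=lambda t: estimate[t])
print(peak)  # index of the peak of the recovered wave
```

With 2000 trials the background noise in the average shrinks by a factor of about 45 (the square root of the trial count), so the recovered wave peaks where the true evoked wave does.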

FIGURE 5.1 An electroencephalograph providing amplified tracings of waves of electrical activity in the brain Here it is displaying the brain activity of this 4-year-old, who has epilepsy. (AJ Photo/Photo Researchers, Inc.)

lesion [LEE-zhuhn] tissue destruction. A brain lesion is a naturally or experimentally caused destruction of brain tissue.

electroencephalogram (EEG) an amplified recording of the waves of electrical activity that sweep across the brain’s surface. These waves are measured by electrodes placed on the scalp.


Neuroimaging Techniques

Courtesy of Brookhaven National Laboratories

“You must look into people, as well as at them,” advised Lord Chesterfield in a 1746 letter to his son. Newer windows into the brain give us that Superman-like ability to see inside the living brain. One such tool, the PET (positron emission tomography) scan (FIGURE 5.2), depicts brain activity by showing each brain area’s consumption of its chemical fuel, the sugar glucose. Active neurons are glucose hogs. After a person receives temporarily radioactive glucose, the PET scan detects where this “food for thought” goes by locating the radioactivity. Rather like weather radar showing rain activity, PET scan “hot spots” show which brain areas are most active as the person performs mathematical calculations, looks at images of faces, or daydreams.

In MRI (magnetic resonance imaging) brain scans, the head is put in a strong magnetic field, which aligns the spinning atoms of brain molecules. Then a radio wave pulse momentarily disorients the atoms. When the atoms return to their normal spin, they release signals that provide a detailed picture of the brain’s soft tissues. (MRI scans are also used to scan other body parts.) MRI scans have revealed a larger-than-average neural area in the left hemisphere of musicians who display perfect pitch (Schlaug et al., 1995). They have also revealed enlarged fluid-filled brain areas in some patients who have schizophrenia, a disabling psychological disorder (FIGURE 5.3).

A special application of MRI—fMRI (functional MRI)—can reveal the brain’s functioning as well as its structure. Where the brain is especially active, blood goes. By comparing MRI scans taken less than a second apart, researchers can watch the brain “light up” (with increased oxygen-laden bloodflow) as a person performs different mental functions. As the person looks at a scene, for example, the fMRI machine detects blood rushing to the back of the brain, which processes visual information.
Such snapshots of the brain’s changing activity provide new insights into how the brain divides its labor. To be learning about the neurosciences now is like studying world geography while Magellan was exploring the seas. This truly is the golden age of brain science.

FIGURE 5.2 The PET scan To obtain a PET scan, researchers inject volunteers with a low and harmless dose of a short-lived radioactive sugar. Detectors around the person’s head pick up the release of gamma rays from the sugar, which has concentrated in active brain areas. A computer then processes and translates these signals into a map of the brain at work.

FIGURE 5.3 MRI scan of a healthy individual (left) and a person with schizophrenia (right) Note the enlarged fluid-filled brain region in the image on the right. (Both photos from Daniel Weinberger, M.D., CBDB, NIMH)

PET (positron emission tomography) scan a visual display of brain activity that detects where a radioactive form of glucose goes while the brain performs a given task.

MRI (magnetic resonance imaging) a technique that uses magnetic fields and radio waves to produce computer-generated images of soft tissue. MRI scans show brain anatomy.

fMRI (functional MRI) a technique for revealing bloodflow and, therefore, brain activity by comparing successive MRI scans. fMRI scans show brain function.

Older Brain Structures

5-2 What are the functions of important lower-level brain structures?

If you could open the skull and look inside, the first thing you might note is the brain’s size. In dinosaurs, the brain represents 1/100,000th of the body’s weight; in whales, 1/10,000th; in elephants, 1/600th; in humans, 1/45th. It looks as though a principle is emerging. But read on. In mice, the brain is 1/40th of the body’s weight, and in marmosets, 1/25th. So there are exceptions to the rule that the ratio of brain to body weight provides a clue to a species’ intelligence.

Better indicators of an animal’s capacities come from its brain structures. In primitive animals, such as sharks, a not-so-complex brain primarily regulates basic survival functions: breathing, resting, and feeding. In lower mammals, such as rodents, a more complex brain enables emotion and greater memory. In advanced mammals, such as humans, a brain that processes more information enables foresight as well.


This increasing complexity arises from new brain systems built on top of the old, much as the Earth’s landscape covers the old with the new. Digging down, one discovers the fossil remnants of the past—brainstem components performing for us much as they did for our distant ancestors. Let’s start with the brain’s basement and work up to the newer systems.

brainstem the oldest part and central core of the brain, beginning where the spinal cord swells as it enters the skull; the brainstem is responsible for automatic survival functions.

medulla [muh-DUL-uh] the base of the brainstem; controls heartbeat and breathing.

The Brainstem

The brain’s oldest and innermost region is the brainstem. It begins where the spinal cord swells slightly after entering the skull. This slight swelling is the medulla (FIGURE 5.4). Here lie the controls for your heartbeat and breathing. Just above the medulla sits the pons, which helps coordinate movements. If a cat’s brainstem is severed from the rest of the brain above it, the animal will still breathe and live—and even run, climb, and groom (Klemm, 1990). But cut off from the brain’s higher regions, it won’t purposefully run or climb to get food.

The brainstem is a crossover point, where most nerves to and from each side of the brain connect with the body’s opposite side. This peculiar cross-wiring is but one of the brain’s many surprises.

Inside the brainstem, between your ears, lies the reticular (“netlike”) formation, a finger-shaped network of neurons that extends from the spinal cord right up to the thalamus. As the spinal cord’s sensory input travels up to the thalamus, some of it travels through the reticular formation, which filters incoming stimuli and relays important information to other areas of the brain. In 1949, Giuseppe Moruzzi and Horace Magoun discovered that electrically stimulating the reticular formation of a sleeping cat almost instantly produced an awake, alert animal. When Magoun severed a cat’s reticular formation from higher brain regions, without damaging the nearby sensory pathways, the effect was equally dramatic: The cat lapsed into a coma from which it never awakened. Magoun could clap his hands by the cat’s ear, even pinch it; still, no response. The conclusion? The reticular formation is involved in arousal.

reticular formation a nerve network in the brainstem that plays an important role in controlling arousal.

FIGURE 5.4 The brainstem and thalamus The brainstem, including the pons and medulla, is an extension of the spinal cord. The thalamus is attached to the top of the brainstem. The reticular formation passes through both structures.


The Thalamus

Sitting at the top of the brainstem is the thalamus (Figure 5.4). This joined pair of egg-shaped structures acts as the brain’s sensory switchboard. It receives information from all the senses except smell and routes it to the higher brain regions that deal with seeing, hearing, tasting, and touching. Think of the thalamus as being to sensory input what London is to England’s trains: a hub through which traffic passes en route to various destinations. The thalamus also receives some of the higher brain’s replies, which it then directs to the medulla and to the cerebellum.

FIGURE 5.5 The brain’s organ of agility Hanging at the back of the brain, the cerebellum coordinates our voluntary movements, as when David Beckham directs the ball precisely.

Lluis Gene/AFP/Getty Images

The Cerebellum


“Consciousness is a small part of what the brain does.” Neuroscientist Joseph LeDoux, in “Mastery of Emotions,” 2006

Extending from the rear of the brainstem is the baseball-sized cerebellum, meaning “little brain,” which is what its two wrinkled halves resemble (FIGURE 5.5). The cerebellum enables one type of nonverbal learning and memory. It helps us judge time, modulate our emotions, and discriminate sounds and textures (Bower & Parsons, 2003). It also coordinates voluntary movement. When soccer great David Beckham fires the ball into the net with a perfectly timed kick, give his cerebellum some credit. If you injured your cerebellum, you would have difficulty walking, keeping your balance, or shaking hands. Your movements would be jerky and exaggerated. Under alcohol’s influence on the cerebellum, walking may lack coordination, as many a driver has learned after being pulled over and given a roadside test. Note: These older brain functions all occur without any conscious effort. This illustrates another of our recurring themes: Our brain processes most information outside of our awareness. We are aware of the results of our brain’s labor (say, our current visual experience) but not of how we construct the visual image. Likewise, whether we are asleep or awake, our brainstem manages its life-sustaining functions, freeing our newer brain regions to think, talk, dream, or savor a memory.

The Limbic System

FIGURE 5.6 The limbic system This neural system sits between the brain’s older parts and its cerebral hemispheres. The limbic system’s hypothalamus controls the nearby pituitary gland.

At the border (“limbus”) between the brain’s older parts and the cerebral hemispheres—the two halves of the brain—is the limbic system (FIGURE 5.6). One limbic system component, the hippocampus, processes memory. (If animals or humans lose their hippocampus to surgery or injury, they become unable to process new memories of facts and episodes.) For now, let’s look at the limbic system’s links to emotions (such as fear and anger) and to basic motives (such as those for food and sex).

The Amygdala

In the limbic system, two lima bean–sized neural clusters, the amygdala, influence aggression and fear (FIGURE 5.7). In 1939, psychologist Heinrich Klüver and neurosurgeon Paul Bucy surgically lesioned the part of a rhesus monkey’s brain that included the amygdala. The result? The normally ill-tempered monkey turned into the most mellow of creatures. Poke it, pinch it, do virtually anything that normally would trigger a ferocious response, and still the animal remained placid. In later studies with other wild animals, including the lynx, wolverine, and wild rat, researchers noted the same effect.

What then might happen if we electrically stimulated the amygdala in a normally placid domestic animal, such as a cat? Do so in one spot and the cat prepares to attack, hissing with its back arched, its pupils dilated, its hair on end. Move the electrode only slightly within the amygdala, cage the cat with a small mouse, and now it cowers in terror.

These experiments confirm the amygdala’s role in rage and fear, including the perception of these emotions and the processing of emotional memories (Anderson & Phelps, 2000; Poremba & Gabriel, 2001). Still, we must be careful. The brain is not neatly organized into structures that correspond to our categories of behavior. Aggressive and fearful behavior involves neural activity in many brain levels. Even within the limbic system, stimulating structures other than the amygdala can evoke such behavior. If you charge your car’s dead battery, you can activate the engine. Yet the battery is merely one link in an integrated system that makes a car go.

FIGURE 5.7 The amygdala (Moonrunner Design Ltd., UK)

Aggression as a brain state Back arched and fur fluffed, this fierce cat is ready to attack. Electrical stimulation of a cat’s amygdala provokes reactions such as the one shown here, suggesting its role in emotions like rage. Which division of the autonomic nervous system is activated by such stimulation? (Answer: The cat would be aroused via its sympathetic nervous system.) (Frank Siteman/Stock, Boston)

thalamus [THAL-uh-muss] the brain’s sensory switchboard, located on top of the brainstem; it directs messages to the sensory receiving areas in the cortex and transmits replies to the cerebellum and medulla.

cerebellum [sehr-uh-BELL-um] the “little brain” at the rear of the brainstem; functions include processing sensory input and coordinating movement output and balance.

limbic system neural system (including the hippocampus, amygdala, and hypothalamus) located below the cerebral hemispheres; associated with emotions and drives.

hypothalamus [hi-po-THAL-uh-muss] a neural structure lying below (hypo) the thalamus; it directs several maintenance activities (eating, drinking, body temperature), helps govern the endocrine system via the pituitary gland, and is linked to emotion and reward.

The Hypothalamus

Just below (hypo) the thalamus is the hypothalamus (FIGURE 5.8), an important link in the chain of command governing bodily maintenance. Some neural clusters in the hypothalamus influence hunger; others regulate thirst, body temperature, and sexual behavior. The hypothalamus both monitors blood chemistry and takes orders from other parts of the brain. For example, thinking about sex (in your brain’s cerebral cortex) can stimulate your hypothalamus to secrete hormones. These hormones in turn trigger the adjacent “master gland,” the pituitary (see Figure 5.6), to influence hormones released by other glands. (Note the interplay between the nervous and endocrine systems: The brain influences the endocrine system, which in turn influences the brain.)

amygdala [uh-MIG-duh-la] two lima bean–sized neural clusters in the limbic system; linked to emotion.


FIGURE 5.8 The hypothalamus This small but important structure, colored yellow/orange in this MRI scan photograph, helps keep the body’s internal environment in a steady state. (ISM/Phototake)


FIGURE 5.9 Rat with an implanted electrode With an electrode implanted in a reward center of its hypothalamus, the rat readily crosses an electrified grid, accepting the painful shocks, to press a pedal that sends electrical impulses to that center.

"If you were designing a robot vehicle to walk into the future and survive, . . . you'd wire it up so that behavior that ensured the survival of the self or the species— like sex and eating—would be naturally reinforcing." —Candace Pert (1986)

A remarkable discovery about the hypothalamus illustrates how progress in science often occurs—when curious, open-minded investigators make an unexpected observation. Two young McGill University neuropsychologists, James Olds and Peter Milner (1954), were trying to implant an electrode in a rat's reticular formation when they made a magnificent mistake: They incorrectly placed the electrode in what they later discovered was a region of the rat's hypothalamus (Olds, 1975). Curiously, as if seeking more stimulation, the rat kept returning to the location where it had been stimulated by this misplaced electrode. On discovering their mistake, Olds and Milner alertly realized they had stumbled upon a brain center that provides a pleasurable reward.

In a meticulous series of experiments, Olds (1958) went on to locate other "pleasure centers," as he called them. (What the rats actually experience only they know, and they aren't telling. Rather than attribute human feelings to rats, today's scientists refer to reward centers, not "pleasure centers.") When allowed to press pedals to trigger their own stimulation in these areas, rats would sometimes do so at a feverish pace—up to 7000 times per hour—until they dropped from exhaustion. Moreover, to get this stimulation, they would even cross an electrified floor that a starving rat would not cross to reach food (FIGURE 5.9).

Similar reward centers in or near the hypothalamus were later discovered in many other species, including goldfish, dolphins, and monkeys. In fact, animal research has revealed both a general reward system that triggers the release of the neurotransmitter dopamine, and specific centers associated with the pleasures of eating, drinking, and sex. Animals, it seems, come equipped with built-in systems that reward activities essential to survival.

Experimenters have found new ways of using limbic stimulation to control animals' actions.
By using brain stimulation to reward rats for turning left or right, Sanjiv Talwar and his colleagues (2002) trained previously caged rats to navigate natural environments (FIGURE 5.10). By pressing buttons on a laptop, the researchers




FIGURE 5.10 Ratbot on a pleasure cruise When stimulated by remote control, this rat could be guided to navigate across a field and even up a tree.

FIGURE 5.11 Brain structures and their functions

can direct a rat—which carries a receiver, power source, and video camera on a backpack—to turn on cue, climb trees, scurry along branches, and turn around and come back down. Their work suggests future applications in search-and-rescue operations.

Do we humans also have limbic centers for pleasure? Indeed we do. To calm violent patients, one neurosurgeon implanted electrodes in such areas. Stimulated patients reported mild pleasure; however, unlike Olds' rats, they were not driven to a frenzy (Deutsch, 1972; Hooper & Teresi, 1986).

Some researchers believe that addictive disorders, such as alcohol dependence, drug abuse, and binge eating, may stem from a reward deficiency syndrome—a genetically disposed deficiency in the natural brain systems for pleasure and well-being that leads people to crave whatever provides that missing pleasure or relieves negative feelings (Blum et al., 1996). FIGURE 5.11 locates the brain areas discussed in this module.

Thalamus: relays messages between lower brain centers and cerebral cortex
Hypothalamus: controls maintenance functions such as eating; helps govern endocrine system; linked to emotion and reward
Pituitary: master endocrine gland
Amygdala: linked to emotion
Hippocampus: linked to memory
Reticular formation: helps control arousal
Medulla: controls heartbeat and breathing
Cerebellum: coordinates voluntary movement and balance and supports memories of such
Spinal cord: pathway for neural fibers traveling to and from brain; controls simple reflexes
(The figure also outlines the limbic system and the brainstem as groupings of these structures.)


Review Tools of Discovery and Older Brain Structures

5-1 How do neuroscientists study the brain's connections to behavior and mind?
Clinical observations and lesioning reveal the general effects of brain damage. MRI scans now reveal brain structures, and EEG, PET, and fMRI (functional MRI) recordings reveal brain activity.

5-2 What are the functions of important lower-level brain structures?
The brainstem is the oldest part of the brain and is responsible for automatic survival functions. Its components are the medulla (which controls heartbeat and breathing), the pons (which helps coordinate movements), and the reticular formation (which affects arousal). The thalamus, the brain's sensory switchboard, sits above the brainstem. The cerebellum, attached to the rear of the brainstem, coordinates muscle movement and helps process sensory information. The limbic system is linked to emotions, memory, and drives. Its neural centers include the amygdala (involved in responses of aggression and fear) and the hypothalamus (involved in various bodily maintenance functions, pleasurable rewards, and the control of the hormonal system). The hypothalamus controls the pituitary (the "master gland") by stimulating it to trigger the release of hormones. The hippocampus processes memory.

Terms and Concepts to Remember
lesion [LEE-zhuhn], p. 59
electroencephalogram (EEG), p. 59
PET (positron emission tomography) scan, p. 60
MRI (magnetic resonance imaging), p. 60
fMRI (functional MRI), p. 60
brainstem, p. 61
medulla [muh-DUL-uh], p. 61
reticular formation, p. 61
thalamus [THAL-uh-muss], p. 62
cerebellum [sehr-uh-BELL-um], p. 62
limbic system, p. 62
amygdala [uh-MIG-duh-la], p. 62
hypothalamus [hi-po-THAL-uh-muss], p. 63

Test Yourself 1. Within what brain region would damage be most likely to disrupt your ability to skip rope? Your ability to sense tastes or sounds? In what brain region would damage perhaps leave you in a coma? Without the very breath and heartbeat of life? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. How do you feel about the idea of using selective lesioning or electrical stimulation to reduce aggression in those convicted of violent acts? How about for reducing appetite in those who are overweight?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 6 The Cerebral Cortex and Our Divided Brain

The Cerebral Cortex
Our Divided Brain
Right-Left Differences in the Intact Brain

䉴|| The Cerebral Cortex
6-1 What functions are served by the various cerebral cortex regions?
Older brain networks sustain basic life functions and enable memory, emotions, and basic drives. Newer neural networks within the cerebrum—the two large hemispheres that contribute 85 percent of the brain's weight—form specialized work teams that enable our perceiving, thinking, and speaking. Covering those hemispheres, like bark on a tree, is the cerebral cortex, a thin surface layer of interconnected neural cells. It is your brain's thinking crown, your body's ultimate control and information-processing center.

As we move up the ladder of animal life, the cerebral cortex expands, tight genetic controls relax, and the organism's adaptability increases. Frogs and other amphibians with a small cortex operate extensively on preprogrammed genetic instructions. The larger cortex of mammals offers increased capacities for learning and thinking, enabling them to be more adaptable. What makes us distinctively human mostly arises from the complex functions of our cerebral cortex.

|| The people who first dissected and labeled the brain used the language of scholars—Latin and Greek. Their words are actually attempts at graphic description: For example, cortex means “bark,” cerebellum is “little brain,” and thalamus is “inner chamber.” ||

Structure of the Cortex
If you opened a human skull, exposing the brain, you would see a wrinkled organ, shaped somewhat like the meat of an oversized walnut. Without these wrinkles, a flattened cerebral cortex would require triple the area—roughly that of a very large pizza. The brain's ballooning left and right hemispheres are filled mainly with axons connecting the cortex to the brain's other regions. The cerebral cortex—that thin surface layer—contains some 20 to 23 billion nerve cells and 300 trillion synaptic connections (de Courten-Myers, 2005). Being human takes a lot of nerve.

Supporting these billions of nerve cells are nine times as many spidery glial cells ("glue cells"). Neurons are like queen bees; on their own they cannot feed or sheathe themselves. Glial cells are worker bees. They provide nutrients and insulating myelin, guide neural connections, and mop up ions and neurotransmitters. Glia may also play a role in learning and thinking. By "chatting" with neurons they may participate in information transmission and memory (Miller, 2005). Moving up the ladder of animal life, the proportion of glia to neurons increases. A recent postmortem analysis of Einstein's brain did not find more or larger-than-usual neurons, but it did reveal a much greater concentration of glial cells than found in an average Albert's head (Fields, 2004).

Stepping back to consider the whole cortex, each hemisphere is divided into four lobes, geographic subdivisions separated by prominent fissures, or folds (FIGURE 6.1). Starting at the front of your brain and moving over the top, there are the frontal lobes (behind your forehead), the parietal lobes (at the top and to the rear), and the occipital lobes (at the back of your head). Reversing direction and moving forward, just above your ears, you find the temporal lobes. Each of the four lobes carries out many functions, and many functions require the interplay of several lobes.

cerebral [seh-REE-bruhl] cortex the intricate fabric of interconnected neural cells covering the cerebral hemispheres; the body's ultimate control and information-processing center.
glial cells (glia) cells in the nervous system that support, nourish, and protect neurons.
frontal lobes portion of the cerebral cortex lying just behind the forehead; involved in speaking and muscle movements and in making plans and judgments.
parietal [puh-RYE-uh-tuhl] lobes portion of the cerebral cortex lying at the top of the head and toward the rear; receives sensory input for touch and body position.
occipital [ahk-SIP-uh-tuhl] lobes portion of the cerebral cortex lying at the back of the head; includes areas that receive information from the visual fields.

temporal lobes portion of the cerebral cortex lying roughly above the ears; includes the auditory areas, each receiving information primarily from the opposite ear.


FIGURE 6.1 The cortex and its basic subdivisions


The brain has left and right hemispheres, each with a frontal lobe, a parietal lobe, a temporal lobe, and an occipital lobe.

Functions of the Cortex More than a century ago, autopsies of people who had been partially paralyzed or speechless revealed damaged cortical areas. But this rather crude evidence did not convince researchers that specific parts of the cortex perform specific complex functions. After all, if control of speech and movement were diffused across the cortex, damage to almost any area might produce the same effect. A television with its power cord cut would go dead, but we would be fooling ourselves if we thought we had “localized” the picture in the cord.

Motor Functions || Demonstration: Try moving your right hand in a circular motion, as if polishing a table. Now start your right foot doing the same motion synchronized with the hand. Now reverse the foot motion (but not the hand). Tough, huh? But easier if you try moving the left foot opposite to the right hand. The left and right limbs are controlled by opposite sides of the brain. So their opposed activities interfere less with each other. ||

motor cortex an area at the rear of the frontal lobes that controls voluntary movements.

Scientists had better luck in localizing simpler brain functions. For example, in 1870, when German physicians Gustav Fritsch and Eduard Hitzig applied mild electrical stimulation to parts of a dog's cortex, they made an important discovery: They could make parts of its body move. The effects were selective: Stimulation caused movement only when applied to an arch-shaped region at the back of the frontal lobe, running roughly ear-to-ear across the top of the brain. Moreover, stimulating parts of this region in the left or right hemisphere caused movements of specific body parts on the opposite side of the body. Fritsch and Hitzig had discovered what is now called the motor cortex (FIGURE 6.2).

Mapping the Motor Cortex
Lucky for brain surgeons and their patients, the brain has no sensory receptors. Knowing this, Otfrid Foerster and Wilder Penfield were able to map the motor cortex in hundreds of wide-awake patients by stimulating different cortical areas and observing the body's responses. They discovered that body areas requiring precise control, such as the fingers and mouth, occupied the greatest amount of cortical space.

Spanish neuroscientist José Delgado repeatedly demonstrated the mechanics of motor behavior. In one human patient, he stimulated a spot on the left motor cortex that triggered the right hand to make a fist. Asked to keep the fingers open during the next stimulation, the patient, whose fingers closed despite his best efforts, remarked, "I guess, Doctor, that your electricity is stronger than my will" (Delgado, 1969, p. 114).


Input: Sensory cortex (Left hemisphere section receives input from the body's right side)
Output: Motor cortex (Left hemisphere section controls the body's right side)
FIGURE 6.2 Left hemisphere tissue devoted to each body part in the motor cortex and the sensory cortex As you can see from this classic though inexact representation, the amount of cortex devoted to a body part is not proportional to that part's size. Rather, the brain devotes more tissue to sensitive areas and to areas requiring precise control. Thus, the fingers have a greater representation in the cortex than does the upper arm.

More recently, scientists have been able to predict a monkey's arm motion a tenth of a second before it moves—by repeatedly measuring motor cortex activity preceding specific arm movements (Gibbs, 1996). Such findings, some researchers believe, have opened the door to a new generation of prosthetics (artificial body part replacements).

Neural Prosthetics
By similarly eavesdropping on the brain, could we enable someone—perhaps a paralyzed person—to move a robotic limb or command a cursor to write an e-mail or surf the Net? To find out, Brown University brain researchers implanted 100 tiny recording electrodes in the motor cortexes of three monkeys (Nicolelis & Chapin, 2002; Serruya et al., 2002). As the monkeys used a joystick to move a cursor to follow a moving red target (to gain rewards), the researchers matched the brain signals with the arm movements. Then they programmed a computer to monitor the signals and operate the joystick without the monkey's help. When a monkey merely thought about a move, the mind-reading computer moved the cursor with nearly the same proficiency as had the reward-seeking monkey. In a follow-up experiment, two monkeys were trained to control a robot arm that could reach for and grab food (Velliste et al., 2008).

Research has also recorded messages not from the motor neurons that directly control a monkey's arm, but from a brain area involved in planning and intention (Musallam et al., 2004). While the monkeys awaited a cue that told them to reach toward a spot (to get a juice reward) that had flashed on a screen in one of up to eight locations, a computer program recorded activity in this planning-intention brain area. By matching this neural brain activity to the monkey's subsequent


Mind over matter Guided by a tiny, 100-electrode brain implant, monkeys learned to control a mechanical arm that can grab snacks and put them in their mouth (Velliste et al., 2008). Implantable electrode grids are not yet permanently effective, but such research raises hopes that people with paralyzed limbs may someday be able to use their own brain signals to control computers and robotic prosthetic limbs.


pointing, the mind-reading researchers could now program a cursor to move in response to the monkey's thinking. Monkey think, computer do.

If this technique works with motor brain areas, why not use it to capture the words a person can think but cannot say (for example, after a stroke)? Cal Tech neuroscientist Richard Andersen (2004, 2005) speculates that researchers could implant electrodes in speech areas, "ask a patient to think of different words and observe how the cells fire in different ways. So you build up your database, and then when the patient thinks of the word, you compare the signals with your database, and you can predict the words they're thinking. Then you take this output and connect it to a speech synthesizer. This would be identical to what we're doing for motor control."

In 2004, the U.S. Food and Drug Administration approved the first clinical trial of neural prosthetics with paralyzed humans (Pollack, 2004, 2006). The first patient, a paralyzed 25-year-old man, was able to mentally control a television, draw shapes on a computer screen, and play video games—all thanks to an aspirin-sized chip with 100 microelectrodes recording activity in his motor cortex (Hochberg et al., 2006).
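The decoding recipe these studies describe has three steps: record many units' firing rates alongside the observed movements, fit a mapping from rates to movement, then drive the cursor from brain activity alone. The toy sketch below illustrates that recipe with simulated data and a plain least-squares decoder; it is not the actual method or code of the cited studies, and every quantity (unit count, noise level, tuning model) is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Simulated "training session": firing rates of 100 motor-cortex units,
# recorded while the joystick produces known 2-D cursor velocities.
# (All numbers here are invented for illustration.)
n_units, n_trials = 100, 500
tuning = rng.normal(size=(n_units, 2))     # each unit's hidden velocity tuning
velocity = rng.normal(size=(n_trials, 2))  # observed cursor velocities
rates = velocity @ tuning.T + 0.1 * rng.normal(size=(n_trials, n_units))

# "Matching brain signals with arm movements": least-squares fit of a
# linear decoder that maps firing rates back onto cursor velocity.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# "Monkey think, computer do": decode intended moves from activity alone.
intended = rng.normal(size=(10, 2))        # ten intended cursor moves
thought_rates = intended @ tuning.T        # activity those moves would evoke
decoded = thought_rates @ decoder

# With many training trials and modest noise, the decoded moves track
# the intended moves closely.
print("mean decoding error:", float(np.mean(np.abs(decoded - intended))))
```

Real decoders are more elaborate (spike binning, Kalman filtering, recalibration), but the matching-then-predicting structure is the same.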

Sensory Functions

sensory cortex area at the front of the parietal lobes that registers and processes body touch and movement sensations.

association areas areas of the cerebral cortex that are not involved in primary motor or sensory functions; rather, they are involved in higher mental functions such as learning, remembering, thinking, and speaking.

If the motor cortex sends messages out to the body, where does the cortex receive the incoming messages? Penfield also identified the cortical area that specializes in receiving information from the skin senses and from the movement of body parts. This area at the front of the parietal lobes, parallel to and just behind the motor cortex, we now call the sensory cortex (Figure 6.2). Stimulate a point on the top of this band of tissue and a person may report being touched on the shoulder; stimulate some point on the side and the person may feel something on the face. The more sensitive the body region, the larger the sensory cortex area devoted to it (Figure 6.2). Your supersensitive lips project to a larger brain area than do your toes, which is one reason we kiss with our lips rather than touch toes. Rats have a large area of the brain devoted to their whisker sensations, and owls to their hearing sensations.



FIGURE 6.4 The visual cortex and auditory cortex The visual cortex of the occipital lobes at the rear of your brain receives input from your eyes. The auditory cortex, in your temporal lobes—above your ears—receives information from your ears.

Scientists have identified additional areas where the cortex receives input from senses other than touch. At this moment, you are receiving visual information in the visual cortex in your occipital lobes, at the very back of your brain (FIGURES 6.3 and 6.4). A bad enough bash there would make you blind. Stimulated there, you might see flashes of light or dashes of color. (In a sense, we do have eyes in the back of our head!) From your occipital lobes, visual information goes to other areas that specialize in tasks such as identifying words, detecting emotions, and recognizing faces. Any sound you now hear is processed by your auditory cortex in your temporal lobes (Figure 6.4). (If you think of your clenched fist as your brain, and hold it in front of you, your thumb would roughly correspond to one of your temporal lobes.) Most of this auditory information travels a circuitous route from one ear to the auditory receiving area above your opposite ear. If stimulated there, you might hear a sound. MRI scans of people with schizophrenia reveal active auditory areas in the temporal lobes during auditory hallucinations (Lennox et al., 1999). Even the phantom ringing sound experienced by people with hearing loss is—if heard in one ear—associated with activity in the temporal lobe on the brain’s opposite side (Muhlnickel, 1998).

FIGURE 6.3 New technology shows the brain in action This fMRI (functional MRI) scan shows the visual cortex in the occipital lobes activated (color representation of increased bloodflow) as a research participant looks at a photo. When the person stops looking, the region instantly calms down.


Association Areas So far, we have pointed out small areas of the cortex that either receive sensory input or direct muscular output. In humans, that leaves a full three-fourths of the thin, wrinkled layer, the cerebral cortex, uncommitted to sensory or muscular activity. What, then, goes on in this vast region of the brain? Neurons in these association areas (the peach-colored areas in FIGURE 6.5) integrate information. They link sensory inputs with stored memories—a very important part of thinking.

FIGURE 6.5 Areas of the cortex in four mammals More intelligent animals have increased "uncommitted" or association areas of the cortex. These vast areas of the brain are responsible for integrating and acting on information received and processed by sensory areas. (The figure compares motor, sensory, and association areas in rat, cat, chimpanzee, and human brains.)


Electrically probing the association areas doesn't trigger any observable response. So, unlike the sensory and motor areas, association area functions cannot be neatly mapped. Their silence has led to what Donald McBurney (1996, p. 44) calls "one of the hardiest weeds in the garden of psychology": the claim that we ordinarily use only 10 percent of our brains. (If true, wouldn't this imply a 90 percent chance that a bullet to your brain would land in an unused area?) Surgically lesioned animals and brain-damaged humans bear witness that association areas are not dormant. Rather, these areas interpret, integrate, and act on information processed by the sensory areas.

Association areas are found in all four lobes. In the frontal lobes, they enable judgment, planning, and processing of new memories. People with damaged frontal lobes may have intact memories, high scores on intelligence tests, and great cake-baking skills. Yet they would not be able to plan ahead to begin baking a cake for a birthday party (Huey et al., 2006).

Frontal lobe damage also can alter personality, removing a person's inhibitions. Consider the classic case of railroad worker Phineas Gage. One afternoon in 1848, Gage, then 25 years old, was packing gunpowder into a rock with a tamping iron. A spark ignited the gunpowder, shooting the rod up through his left cheek and out the top of his skull, leaving his frontal lobes massively damaged (FIGURE 6.6). To everyone's amazement, he was immediately able to sit up and speak, and after the wound healed he returned to work. But the affable, soft-spoken Phineas Gage was now irritable, profane, and dishonest. Although his mental abilities and memories were intact, his personality was not. This person, said his friends, was "no longer Gage." He eventually lost his job and ended up earning his living as a fairground exhibit. With his frontal lobes ruptured, Gage's moral compass had disconnected from his behavior.
Similar impairments to moral judgment have appeared in more recent studies of people with damaged frontal lobes. Not only may they become less inhibited (without the frontal lobe brakes on their impulses), but their moral judgments seem unrestrained by normal emotions. Would you advocate pushing someone in front of a runaway boxcar to save five others? Most people do not, but those with damage to a brain area behind the eyes often do (Koenigs et al., 2007).

Association areas also perform other mental functions. In the parietal lobes, parts of which were large and unusually shaped in Einstein's normal-weight brain, they enable mathematical and spatial reasoning (Witelson et al., 1999). An area on the underside of the right temporal lobe enables us to recognize faces. If a stroke or head injury destroyed this area of your brain, you would still be able to describe facial features and to recognize someone's gender and approximate age, yet be strangely unable to identify the person as, say, Jack Black, or even your grandmother.

Nevertheless, we should be wary of using pictures of brain "hot spots" to create a new phrenology that locates complex functions in precise brain areas (Uttal, 2001). Complex mental functions don't reside in any one place. There is no one spot in a rat's small association cortex that, when damaged, will obliterate its ability to learn or remember a maze. Memory, language, and attention result from the synchronized activity among distinct brain areas (Knight, 2007).

FIGURE 6.6 Phineas Gage reconsidered Using measurements of his skull (which was kept as a medical record) and modern neuroimaging techniques, researcher Hanna Damasio and her colleagues (1994) have reconstructed the probable path of the rod through Gage’s brain.


The Brain’s Plasticity 6-2 To what extent can a damaged brain reorganize itself?


Our brains are sculpted not only by our genes but also by our experiences. MRI scans show that well-practiced pianists have a larger-than-usual auditory cortex area that encodes piano sounds (Bavelier et al., 2000; Pantev et al., 1998). Other modules focus more on how experience molds the brain, but for now, let's turn to evidence from studies of the brain's plasticity, its ability to modify itself after some types of damage.

Unlike cut skin, severed neurons usually do not regenerate (if your spinal cord were severed, you would probably be permanently paralyzed). And some very specific brain functions seem preassigned to particular areas. One newborn who suffered damage to the facial recognition areas on both temporal lobes never regained a normal ability to recognize faces (Farah et al., 2000). But there is good news: Some neural tissue can reorganize in response to damage. It happens within all of us, as the brain repairs itself after little mishaps. Our brains are most plastic when we are young children (Kolb, 1989; see also FIGURE 6.7).

Constraint-induced therapy aims to rewire brains by restraining a fully functioning limb and forcing use of the "bad hand" or the uncooperative leg. Gradually, the therapy reprograms the brain, improving the dexterity of a brain-damaged child or even an adult stroke victim (Taub, 2004). One stroke victim, a surgeon in his fifties, was put to work cleaning tables, with his good arm and hand restrained. Slowly, the bad arm recovered its skills. As the damaged brain functions migrated to other brain regions, he gradually learned to write again and even to play tennis (Doidge, 2007).

The brain's plasticity is good news for those blind or deaf. Blindness or deafness makes unused brain areas available for other uses (Amedi et al., 2005).
If a blind person uses one finger to read Braille, the brain area dedicated to that finger expands as the sense of touch invades the visual cortex that normally helps people see (Barinaga, 1992a; Sadato et al., 1996). Temporarily "knock out" the visual cortex with magnetic stimulation, and a lifelong-blind person will make more errors on a language task (Amedi et al., 2004). In Deaf people whose native language is sign, the temporal lobe area normally dedicated to hearing waits in vain for stimulation. Finally, it looks for other signals to process, such as those from the visual system. That helps explain why some studies find that Deaf people have enhanced peripheral vision (Bosworth & Dobkins, 1999).

Plasticity is especially evident after serious damage. If a slow-growing left hemisphere tumor disrupts language, the right hemisphere may compensate (Thiel et al., 2006). Lose a finger and the sensory cortex that received its input will begin to receive input from the adjacent fingers, which then become more sensitive (Fox, 1984).

Lost fingers also feature in another mysterious phenomenon. As Figure 6.2 shows, the hand is between the sensory cortex's face and arm regions. When stroking the arm of someone whose hand had been amputated, V. S. Ramachandran found the person felt the sensations not only on the area stroked but also on the nonexistent ("phantom") fingers. Sensory fibers that terminate on adjacent areas had invaded the brain area vacated by the hand.

plasticity the brain’s ability to change, especially during childhood, by reorganizing after damage or by building new pathways based on experience.

FIGURE 6.7 Brain plasticity If surgery or an injury destroys one part of a child's brain or, as in the case of this 6-year-old, even an entire hemisphere (removed to eliminate seizures), the brain will compensate by putting other areas to work. One Johns Hopkins medical team reflected on the child hemispherectomies they had performed. Although use of the opposite hand is compromised, they reported being "awed" by how well children retain their memory, personality, and humor after removal of either brain hemisphere (Vining et al., 1997). The younger the child, the greater the chance that the remaining hemisphere can take over the functions of the one that was surgically removed (Choi, 2008).


neurogenesis the formation of new neurons.

corpus callosum [KOR-pus kah-LOWsum] the large band of neural fibers connecting the two brain hemispheres and carrying messages between them.

Note, too, that the toes region is adjacent to the genitals. So what do you suppose was the sexual intercourse experience of another Ramachandran patient whose lower leg had been amputated? "I actually experience my orgasm in my foot. And there it's much bigger than it used to be because it's no longer just confined to my genitals" (Ramachandran & Blakeslee, 1998, p. 36).

Although brain modification often takes the form of reorganization, evidence suggests that, contrary to long-held belief, adult mice and humans can also generate new brain cells (Jessberger et al., 2008). Monkey brains illustrate neurogenesis by forming thousands of new neurons each day. These baby neurons originate deep in the brain and may then migrate elsewhere and form connections with neighboring neurons (Gould, 2007).

Master stem cells that can develop into any type of brain cell have also been discovered in the human embryo. If mass-produced in a lab and injected into a damaged brain, might neural stem cells turn themselves into replacements for lost brain cells? Might we someday be able to rebuild damaged brains, much as we reseed damaged lawns? Might new drugs spur the production of new nerve cells? Stay tuned. Today's biotech companies are hard at work on such possibilities (Gage, 2003). In the meantime, we can all benefit from other natural promoters of neurogenesis, such as exercise, sleep, and nonstressful but stimulating environments (Iso et al., 2007; Pereira et al., 2007; Stranahan et al., 2006).

Our Divided Brain

6-3 What do split brains reveal about the functions of our two brain hemispheres?

This large band of neural fibers connects the two brain hemispheres. To photograph the half brain shown at left, a surgeon separated the hemispheres by cutting through the corpus callosum and lower brain regions. In the view on the right, brain tissue has been cut back to expose the corpus callosum and bundles of fibers coming out from it.


Splitting the Brain

In 1961, two Los Angeles neurosurgeons, Philip Vogel and Joseph Bogen, speculated that major epileptic seizures were caused by an amplification of abnormal brain activity bouncing back and forth between the two cerebral hemispheres. If so, they wondered, could they put an end to this biological tennis game by severing the corpus callosum (FIGURE 6.8), the wide band of axon fibers connecting the two hemispheres and carrying messages between them? Vogel and Bogen knew that psychologists Roger Sperry, Ronald Myers, and Michael Gazzaniga had divided the brains of cats and monkeys in this manner, with no serious ill effects. So the surgeons operated. The result? The seizures were all but eliminated. Moreover, the

FIGURE 6.8 The corpus callosum

For more than a century, clinical evidence has shown that the brain’s two sides serve differing functions. This hemispheric specialization (or lateralization) is apparent after brain damage. Accidents, strokes, and tumors in the left hemisphere can impair reading, writing, speaking, arithmetic reasoning, and understanding. Similar lesions in the right hemisphere seldom have such dramatic effects. By 1960, many interpreted these differences as evidence that the left hemisphere is the “dominant” or “major” hemisphere, and its silent companion to the right is the “subordinate” or “minor” hemisphere. Then researchers found that the “minor” right hemisphere was not so limited after all. The story of this discovery is a fascinating chapter in psychology’s history.


patients with these split brains were surprisingly normal, their personality and intellect hardly affected. Waking from surgery, one even joked that he had a “splitting headache” (Gazzaniga, 1967).

Sperry and Gazzaniga’s studies of people with split brains provide a key to understanding the two hemispheres’ complementary functions. As FIGURE 6.9 explains, the peculiar nature of our visual wiring enabled the researchers to send information to a patient’s left or right hemisphere. As the person stared at a spot, they flashed a stimulus to its right or left. They could do this with you, too, but in your intact brain, the hemisphere receiving the information would instantly pass the news to its partner across the valley. Not so in patients who had undergone split-brain surgery. The phone cables responsible for transmitting messages from one hemisphere to the other—the corpus callosum—had been severed. This enabled the researchers to quiz each hemisphere separately.

In an early experiment, Gazzaniga (1967) asked these patients to stare at a dot as he flashed HE·ART on a screen (FIGURE 6.10 on the next page). Thus, HE appeared in their left visual field (which transmits to the right hemisphere) and ART in the right field (which transmits to the left hemisphere). When he then asked what they had seen, the patients said they had seen ART. But when asked to point to the word, they were startled when their left hand (controlled by the right hemisphere) pointed to HE. Given an opportunity to express itself, each hemisphere reported what it had seen. The right hemisphere (controlling the left hand) intuitively knew what it could not verbally report. When a picture of a spoon was flashed to their right hemisphere, the patients could not say what they had viewed.
But when asked to identify what they had viewed by feeling an assortment of hidden objects with their left hand, they readily selected the spoon. If the experimenter said, “Right!” the patient might reply, “What? Right? How could I possibly pick out the right object when I don’t know what I saw?” It is, of course, the left hemisphere doing the talking here, bewildered by what the nonverbal right hemisphere knows. A few people who have had split-brain surgery have been for a time bothered by the unruly independence of their left hand, which might unbutton a shirt while the right hand buttoned it, or put grocery store items back on the shelf after the right hand put them in the cart. It was as if each hemisphere was thinking “I’ve half a mind to wear my green (blue) shirt today.” Indeed, said Sperry (1964), split-brain surgery leaves people “with two separate minds.” With a split brain, both hemispheres can comprehend and follow an instruction to copy—simultaneously—different figures with the

FIGURE 6.9 The information highway from eye to brain Information from the left half of your field of vision goes to your right hemisphere, and information from the right half of your visual field goes to your left hemisphere, which usually controls speech. (Note, however, that each eye receives sensory information from both the right and left visual fields.) Data received by either hemisphere are quickly transmitted to the other across the corpus callosum. In a person with a severed corpus callosum, this information sharing does not take place.

split brain a condition resulting from surgery that isolates the brain’s two hemispheres by cutting the fibers (mainly those of the corpus callosum) connecting them.

[FIGURE 6.10 panels: (a) “Look at the dot.” Two words separated by a dot are momentarily projected. (b) “What word did you see?” (c) “Point with your left hand to the word you saw.”]

left and right hands (Franz et al., 2000; see also FIGURE 6.11). (Reading these reports, I fantasize a person enjoying a solitary game of “rock, paper, scissors”—left versus right hand.) When the “two minds” are at odds, the left hemisphere does mental gymnastics to rationalize reactions it does not understand. If a patient follows an order sent to the


FIGURE 6.11 Try this! Joe, who has had split-brain surgery, can simultaneously draw two different shapes.

FIGURE 6.10 Testing the divided brain When an experimenter flashes the word HEART across the visual field, a woman with a split brain reports seeing the portion of the word transmitted to her left hemisphere. However, if asked to indicate with her left hand what she saw, she points to the portion of the word transmitted to her right hemisphere. (From Gazzaniga, 1983.)


Right-Left Differences in the Intact Brain

So, what about the 99.99+ percent of us with undivided brains? Does each of our hemispheres also perform distinct functions? Several different types of studies indicate they do. When a person performs a perceptual task, for example, brain waves, blood flow, and glucose consumption reveal increased activity in the right hemisphere. When the person speaks or calculates, activity increases in the left hemisphere.

A dramatic demonstration of hemispheric specialization happens before some types of brain surgery. To check the location of language centers, the surgeon injects a sedative into the neck artery feeding blood to the left hemisphere. Before the injection, the patient is lying down, arms in the air, chatting with the doctor. You can probably predict what happens when the drug flows into the artery going to the left hemisphere: Within seconds, the person’s right arm falls limp. The patient also usually becomes speechless until the drug wears off. When the drug enters the artery to the right hemisphere, the left arm falls limp, but the person can still speak.

Which hemisphere would you suppose enables sign language among deaf people? The right, because of its visual-spatial superiority? Or the left, because it typically processes language? Studies reveal that, just as hearing people usually use the left hemisphere to process speech, deaf people use the left hemisphere to process sign language (Corina et al., 1992; Hickok et al., 2001). A stroke in the left hemisphere will disrupt a deaf person’s signing, much as it would disrupt a hearing person’s speaking. The same brain area is similarly involved in both spoken and signed speech production (Corina, 1998). To the brain, language is language, whether spoken or signed.

Although the left hemisphere is adept at making quick, literal interpretations of language, the right hemisphere excels in making inferences (Beeman & Chiarello, 1998;

|| Question: If we flashed a red light to the right hemisphere of a person with a split brain and flashed a green light to the left hemisphere, would each observe its own color? Would the person be aware that the colors differ? What would the person verbally report seeing? (Answers below.) Answers: Yes. No. Green. ||

right hemisphere (“Walk”), a strange thing happens. Unaware of the order, the left hemisphere doesn’t know why the patient begins walking. Yet, when asked why, the patient doesn’t say “I don’t know.” Instead, the interpretive left hemisphere improvises—“I’m going into the house to get a Coke.” Thus, Michael Gazzaniga (1988), who considers these patients “the most fascinating people on earth,” concludes that the conscious left hemisphere is an “interpreter” or press agent that instantly constructs theories to explain our behavior.

These studies reveal that the left hemisphere is more active when a person deliberates over decisions (Rogers, 2003). When the rational left brain is active, people more often discount disagreeable information (Drake, 1993). The right hemisphere understands simple requests, easily perceives objects, and is more engaged when quick, intuitive responses are needed. The right side of the brain also surpasses the left at copying drawings and at recognizing faces. The right hemisphere is skilled at perceiving emotion and at portraying emotions through the more expressive left side of the face (FIGURE 6.12). Right-hemisphere damage therefore more greatly disrupts emotion processing and social conduct (Tranel et al., 2002).

Most of the body’s paired organs—kidneys, lungs, breasts—perform identical functions, providing a backup system should one side fail. Not so the brain’s two halves, which can simultaneously carry out different functions with minimal duplication of effort. The result is a biologically odd but smart couple, each seemingly with a mind of its own.

FIGURE 6.12 Which one is happier? Look at the center of one face, then the other. Does one appear happier? Most people say the right face does. Some researchers think this is because the right hemisphere, which is skilled in emotion processing, receives information from the left half of each face (when looking at its center).


Bowden & Beeman, 1998; Mason & Just, 2004). Primed with the flashed word foot, the left hemisphere will be especially quick to recognize the closely associated word heel. But if primed with foot, cry, and glass, the right hemisphere will more quickly recognize another word distantly related to all three (cut). And if given an insightlike problem—“What word goes with boot, summer, and ground?”—the right hemisphere more quickly than the left recognizes the solution—camp. As one patient explained after a right-hemisphere stroke, “I understand words, but I’m missing the subtleties.”

The right hemisphere also helps us modulate our speech to make meaning clear—as when we ask “What’s that in the road ahead?” instead of “What’s that in the road, a head?” (Heller, 1990).

The right hemisphere also seems to help orchestrate our sense of self. People who suffer partial paralysis will sometimes obstinately deny their impairment—strangely claiming they can move a paralyzed limb—if the damage is to the right hemisphere (Berti et al., 2005). With right brain damage, some patients have difficulty perceiving who other people are in relation to themselves, as in the case of a man who saw medical caretakers as family (Feinberg & Keenan, 2005). Others fail to recognize themselves in a mirror, or assign ownership of a limb to someone else (“that’s my husband’s arm”). The power of the right brain appeared in an experiment in which people with normal brains viewed a series of images that progressively morphed from the face of a co-worker into their own face. As people recognized themselves, parts of their right brain displayed sudden activity. But when magnetic stimulation disrupted their normal right-brain activity, they had difficulty recognizing themselves in the morphed photos (Uddin et al., 2005, 2006).

Simply looking at the two hemispheres, so alike to the naked eye, who would suppose they contribute uniquely to the harmony of the whole?
Yet a variety of observations—of people with split brains and people with normal brains—converge beautifully, leaving little doubt that we have unified brains with specialized parts.

Brain Organization and Handedness

6-4 How does handedness relate to brain organization?

Nearly 90 percent of us are primarily right-handed (Leask & Beaton, 2007; Medland et al., 2004; Peters et al., 2006). Some 10 percent of us (somewhat more among males, somewhat less among females) are left-handed. (A few people write with their


The rarest of baseball players: an ambidextrous pitcher Using a glove with two thumbs, Creighton University pitcher Pat Venditte, shown here in a 2008 game, pitched to right-handed batters with his right hand, then switched to face left-handed batters with his left hand. After one switch-hitter switched sides of the plate, Venditte switched pitching arms, which triggered the batter to switch again, and so on. The umpires ultimately ended the comedy routine by applying a little-known rule: A pitcher must declare which arm he will use before throwing his first pitch to a batter (Schwarz, 2007).

|| Most people also kick with their right foot, look through a microscope with their right eye, and (had you noticed?) kiss the right way—with their head tilted right (Güntürkün, 2003). ||


right hand and throw a ball with their left, or vice versa.) Almost all right-handers (96 percent) process speech primarily in the left hemisphere, which tends to be the slightly larger hemisphere (Hopkins, 2006). Left-handers are more diverse. Seven in ten process speech in the left hemisphere, as right-handers do. The rest either process language in the right hemisphere or use both hemispheres.
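Taken together, these figures imply that the large majority of people, whatever their handedness, process speech primarily in the left hemisphere. A rough back-of-the-envelope check (treating the percentages above as exact, which they are not):

```python
# Approximate figures from the text: ~90% of people are right-handed,
# and 96% of right-handers process speech primarily in the left
# hemisphere; ~10% are left-handed, of whom about 7 in 10 do the same.
right_handed = 0.90
left_handed = 0.10

left_lateralized = right_handed * 0.96 + left_handed * 0.70
print(f"{left_lateralized:.1%} of people process speech mainly on the left")
# → 93.4% of people process speech mainly on the left
```

So only about 1 person in 15 relies on the right hemisphere (or both hemispheres) for language.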

Is Handedness Inherited?

Judging from prehistoric human cave drawings, tools, and hand and arm bones, this veer to the right occurred long ago (Corballis, 1989; Steele, 2000). Right-handedness prevails in all human cultures. Moreover, it appears prior to culture’s impact. Ultrasound observations of fetal thumb-sucking reveal that more than 9 in 10 fetuses suck the right hand’s thumb (Hepper et al., 1990, 2004). This bias for the right hand is unique to humans and to the primates most like us: chimpanzees and bonobos (Hopkins, 2006). Other primates are more evenly divided between lefties and righties.

Observing 150 human babies during the first two days after their birth, George Michel (1981) found that two-thirds consistently preferred to lie with their heads turned to the right. When he again studied a sample of these babies at age 5 months, almost all of the “head-right” babies reached for things with their right hands, and almost all of the “head-left” babies reached with their left hands. Such findings, along with the universal prevalence of right-handers, indicate that either genes or some prenatal factors influence handedness.

So, Is It All Right to Be Left-Handed?

Judging by our everyday conversation, left-handedness is not all right. To be “coming out of left field” is hardly better than to be “gauche” (derived from the French word for “left”). On the other hand, right-handedness is “right on,” which any “righteous” “right-hand man” “in his right mind” usually is.

Left-handers are more numerous than usual among those with reading disabilities, allergies, and migraine headaches (Geschwind & Behan, 1984). But in Iran, where students report which hand they write with when taking the university entrance exam, lefties outperform righties in all subjects (Noroozian et al., 2003). Left-handedness is also more common among musicians, mathematicians, professional baseball and cricket players, architects, and artists, including such luminaries as Michelangelo, Leonardo da Vinci, and Picasso.1 Although left-handers must tolerate elbow jostling at the dinner table, right-handed desks, and awkward scissors, the pros and cons of being a lefty seem roughly equal.

***

We have glimpsed the truth of an overriding principle: Everything psychological is simultaneously biological. We have focused on how our thoughts, feelings, and actions arise from our specialized yet integrated brain. Elsewhere in this text, we further explore the significance of the biological revolution in psychology.

Today’s neuroscience has come a long way, yet what is unknown still dwarfs what is known. We can describe the brain. We can learn the functions of its parts. We can study how the parts communicate. But how do we get mind out of meat? How does the electrochemical whir in a hunk of tissue the size of a head of lettuce give rise to elation, a creative idea, or that memory of Grandmother?

1 Strategic factors explain the higher-than-normal percentage of lefties in sports. For example, it helps a soccer team to have left-footed players on the left side of the field (Wood & Aggleton, 1989). In golf, however, no left-hander won the Masters tournament until Canadian Mike Weir did so in 2003.

|| Evidence that challenges a genetic explanation of handedness: Handedness is one of but a few traits that genetically identical twins are not especially likely to share (Halpern & Coren, 1990). ||


Much as gas and air can give rise to something different—fire—so also, believed Roger Sperry, does the complex human brain give rise to something different: consciousness. The mind, he argued, emerges from the brain’s dance of ions, yet is not reducible to it. Cells cannot be fully explained by the actions of atoms, nor minds by the activity of cells. Psychology is rooted in biology, which is rooted in chemistry, which is rooted in physics. Yet psychology is more than applied physics. As Jerome Kagan (1998) reminds us, the meaning of the Gettysburg Address is not reducible to neural activity. Sexual love is more than blood flooding to the genitals. Morality and responsibility become possible when we understand the mind as a “holistic system,” said Sperry (1992) (FIGURE 6.13). We are not mere jabbering robots.

FIGURE 6.13 Mind and brain as holistic system In Roger Sperry’s view, the brain creates and controls the emergent mind, which in turn influences the brain. (Think vividly about biting into a lemon and you may salivate.)


The mind seeking to understand the brain—that is indeed among the ultimate scientific challenges. And so it will always be. To paraphrase cosmologist John Barrow, a brain simple enough to be understood is too simple to produce a mind able to understand it.


Review The Cerebral Cortex and Our Divided Brain

6-1 What functions are served by the various cerebral cortex regions?
In each hemisphere the cerebral cortex has four lobes, the frontal, parietal, occipital, and temporal. Each lobe performs many functions and interacts with other areas of the cortex. The motor cortex controls voluntary movements. The sensory cortex registers and processes body sensations. Body parts requiring precise control (in the motor cortex) or those that are especially sensitive (in the sensory cortex) occupy the greatest amount of space. Most of the brain’s cortex—the major portion of each of the four lobes—is devoted to uncommitted association areas, which integrate information involved in learning, remembering, thinking, and other higher-level functions.

6-2 To what extent can a damaged brain reorganize itself?
If one hemisphere is damaged early in life, the other will pick up many of its functions. This plasticity diminishes later in life. Some brain areas are capable of neurogenesis (forming new neurons).

6-3 What do split brains reveal about the functions of our two brain hemispheres?
Split-brain research (experiments on people with a severed corpus callosum) has confirmed that in most people, the left hemisphere is the more verbal, and that the right hemisphere excels in visual perception and the recognition of emotion. Studies of healthy people with intact brains confirm that each hemisphere makes unique contributions to the integrated functioning of the brain.

6-4 How does handedness relate to brain organization?
About 10 percent of us are left-handed. Almost all right-handers process speech in the left hemisphere, as do more than half of all left-handers.

Terms and Concepts to Remember cerebral [seh-REE-bruhl] cortex, p. 67 glial cells (glia), p. 67 frontal lobes, p. 67 parietal [puh-RYE-uh-tuhl] lobes, p. 67 occipital [ahk-SIP-uh-tuhl] lobes, p. 67 temporal lobes, p. 67

motor cortex, p. 68 sensory cortex, p. 70 association areas, p. 71 plasticity, p. 73 neurogenesis, p. 74 corpus callosum [KOR-pus kah-LOW-sum], p. 74 split brain, p. 75

Test Yourself 1. What has split-brain research taught us about the intact brain?

2. What is the role of the corpus callosum in our brain? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. Now that you have learned more about how our brains enable our minds, how has this affected your view of human nature?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Consciousness and the Two-Track Mind

Consciousness can be a funny thing. It offers us weird experiences, as when entering sleep or leaving a dream, and sometimes it leaves us wondering who is really in control.

After putting me under the influence of nitrous oxide, my dentist tells me to turn my head to the left. My conscious mind resists: “No way,” I silently say. “You can’t boss me around!” Whereupon my robotic head, ignoring my conscious mind, turns obligingly under the dentist’s control. During my noontime pickup basketball games, I am sometimes mildly irritated as my body passes the ball while my conscious mind is saying, “No, stop, you fool! Peter is going to intercept!” Alas, my body completes the pass on its own.

Other times, notes psychologist Daniel Wegner (2002) in The Illusion of Conscious Will, people believe their consciousness is controlling their actions when it isn’t. In one experiment, people co-controlled a computer mouse with a partner (who was actually the experimenter’s accomplice). Even when the partner caused the mouse to stop on a predetermined square, the participants perceived that they had caused it to stop there.

And then there are those times when consciousness seems to split. Reading Green Eggs and Ham to one of my preschoolers for the umpteenth time, my obliging mouth could say the words while my mind wandered elsewhere. And if someone drops by my office while I’m typing this sentence, it’s not a problem. My fingers can complete it as I strike up a conversation.

Was my drug-induced dental experience akin to people’s experiences with other psychoactive drugs (mood- and perception-altering substances)? Was my automatic obedience to my dentist like people’s responses to a hypnotist? Or does a split in consciousness, like those that we have when our mind goes elsewhere while reading or typing, explain people’s behavior while under hypnosis? And during sleep, when and why do those weird dream experiences occur? But first questions first: What is consciousness?
Every science has concepts so fundamental they are nearly impossible to define. Biologists agree on what is alive but not on precisely what life is. In physics, matter and energy elude simple definition. To psychologists, consciousness is similarly a fundamental yet slippery concept.

At its beginning, psychology was “the description and explanation of states of consciousness” (Ladd, 1887). But during the first half of the twentieth century, the difficulty of scientifically studying consciousness led many psychologists—including those in the emerging school of behaviorism—to turn to direct observations of behavior. By the 1960s, psychology had nearly lost consciousness and was defining itself as “the science of behavior.” Consciousness was likened to a car’s speedometer: “It doesn’t make the car go, it just reflects what’s happening” (Seligman, 1991, p. 24).

After 1960, mental concepts began to reemerge. Advances in neuroscience made it possible to relate brain activity to sleeping, dreaming, and other mental states. Researchers began studying consciousness altered by hypnosis and drugs. Psychologists of all persuasions were affirming the importance of cognition, or mental processes. Psychology was regaining consciousness.

modules 7 The Brain and Consciousness

8 Sleep and Dreams

9 Hypnosis

10 Drugs and Consciousness

“Neither [psychologist] Steve Pinker nor I can explain human subjective consciousness. . . . We don’t understand it.” Evolutionary biologist Richard Dawkins (1999)

“Psychology must discard all reference to consciousness.” Behaviorist John B. Watson (1913)

83


FIGURE 1 States of consciousness In addition to normal, waking awareness, consciousness comes to us in altered states, including daydreaming, sleeping, meditating, and drug-induced hallucinating.

Some states occur spontaneously

Daydreaming

Drowsiness

Dreaming

Some are physiologically induced

Hallucinations

Orgasm

Food or oxygen starvation

Some are psychologically induced

Sensory deprivation

Hypnosis

Meditation

Over the course of a day, a week, a month, we flit between various states of consciousness, from waking to sleeping and various other altered states (FIGURE 1). Module 7 focuses on the brain and consciousness in light of today’s cognitive neuroscience and dual processing. Module 8 offers an in-depth look at sleep and dreams. Module 9 discusses various beliefs about hypnosis, including research support for some of the claims related to hypnosis. Module 10 details dependence, addiction, and the three main categories of psychoactive drugs.


module 7 The Brain and Consciousness

Cognitive Neuroscience
Dual Processing

7-1 What is the “dual processing” being revealed by today’s cognitive neuroscience?

For most psychologists today, consciousness is our awareness of ourselves and our environment. Our spotlight of awareness allows us to assemble information from many sources as we reflect on our past and plan for our future. And it focuses our attention when we learn a complex concept or behavior—say, driving a car—making us aware of the car and the traffic. With practice, driving no longer requires our undivided attention, freeing us to focus our attention on other things.

In today’s science, one of the most hotly pursued research quests is to understand the biology of consciousness. Evolutionary psychologists speculate that consciousness must offer a reproductive advantage (Barash, 2006). Perhaps consciousness helps us act in our long-term interests (by considering consequences) rather than merely seeking short-term pleasure and avoiding pain. Or perhaps consciousness promotes our survival by anticipating how we seem to others and helping us read their minds. (“He looks really angry! I’d better run!”) Even so, that leaves us with the so-called “hard problem”: How do brain cells jabbering to one another create our awareness of the taste of a taco, the pain of a toothache, the feeling of fright?

consciousness our awareness of ourselves and our environment.
cognitive neuroscience the interdisciplinary study of the brain activity linked with cognition (including perception, thinking, memory, and language).

FIGURE 7.1 Evidence of awareness?


Scientists assume, in the words of neuroscientist Marvin Minsky (1986, p. 287), that “the mind is what the brain does.” We just don’t know how it does it. Even with all the world’s chemicals, computer chips, and energy, we still don’t have a clue how to make a conscious robot.

Yet today’s cognitive neuroscience—the interdisciplinary study of the brain activity linked with our mental processes—is taking the first small step by relating specific brain states to conscious experiences. We know, for example, that the upper brainstem contributes to consciousness because some children born without a cerebral cortex exhibit signs of consciousness (Merker, 2007). Another stunning demonstration of some level of consciousness appeared in brain scans of a noncommunicative patient—a 23-year-old woman who had been in a car accident and showed no outward signs of conscious awareness (Owen et al., 2006). When researchers asked her to imagine playing tennis or moving around her home, fMRI scans revealed brain activity like that of healthy volunteers. As she imagined playing tennis, for example, an area of her brain controlling arm and leg movements became active (FIGURE 7.1). Even in a motionless body, the researchers concluded, the brain—and the mind—may still be active.

However, most cognitive neuroscientists are exploring and mapping the conscious functions of the cortex. Based on your cortical activation patterns, they can now, in some limited ways, read your mind. They can, for example, tell which of 10 similar objects (hammer, drill, and so forth) you are viewing (Shinkareva et al., 2008).

Despite such advances, much disagreement remains. One research group theorizes that conscious experiences arise from specific neuron circuits firing in a specific manner. Another sees conscious experiences as produced by the synchronized activity of the whole brain (Koch & Greenfield, 2007). How the brain produces the mind remains a mystery.

When asked to imagine playing tennis or navigating her home, a vegetative patient’s brain (top) exhibited activity similar to a healthy person’s brain (bottom). Although the case may be an exception, researchers wonder if such fMRI scans might enable a “conversation” with unresponsive patients, by instructing them, for example, to answer yes to a question by imagining playing tennis, and no by imagining walking around their home.

Cognitive Neuroscience


Dual Processing

dual processing the principle that information is often simultaneously processed on separate conscious and unconscious tracks.

Many cognitive neuroscience discoveries tell us of a particular brain region that becomes active with a particular conscious experience. Such findings strike many people as interesting but not mind-blowing. (If everything psychological is simultaneously biological, then our ideas, emotions, and spirituality must all, somehow, be embodied.)

What is mind-blowing to many of us is the growing evidence that we have, so to speak, two minds, each supported by its own neural equipment. At any moment, you and I are aware of little more than what’s on the screen of our consciousness. But one of the grand ideas of recent cognitive neuroscience is that much of our brain work occurs off stage, out of sight. For example, fascinating studies of split-brain patients have revealed a conscious “left brain” and a more intuitive “right brain.” Other modules explore our hidden mind at work in research on unconscious priming, on conscious (explicit) and unconscious (implicit) memories, on conscious versus automatic prejudices, and on the out-of-sight processing that enables sudden insights and creative moments.

Perception, memory, thinking, language, and attitudes all operate on two levels—a conscious, deliberate “high road” and an unconscious, automatic “low road.” Today’s researchers call this dual processing. We know more than we know we know.

The Two-Track Mind

FIGURE 7.2 The hollow face illusion What you see (an illusory protruding face from a reverse mask, as in the box at upper right) may differ from what you do (reach for a speck on the face inside the mask). (Adapted from Milner, A. D., & Goodale, M. A. (2006). The Visual Brain in Action, 2nd Edition. Oxford University Press.)

A scientific story illustrates the mind’s two levels. Sometimes science-aided critical thinking confirms widely held beliefs. But sometimes, as this story illustrates, science is stranger than science fiction.

During my sojourns at Scotland’s University of St. Andrews, I came to know cognitive neuroscientists Melvyn Goodale and David Milner (2004, 2006). A local woman, whom they call D. F., was overcome by carbon monoxide one day while showering. The resulting brain damage left her unable to recognize and discriminate objects visually. Yet she was only partly blind, for she would act as if she could see. Asked to slip a postcard into a vertical or horizontal mail slot, she could do so without error. Although unable to report the width of a block in front of her, she could grasp it with just the right finger-thumb distance.

How could this be? Don’t we have one visual system? Goodale and Milner knew from animal research that the eye sends information simultaneously to different brain areas, which have different tasks. Sure enough, a scan of D. F.’s brain activity revealed normal activity in the area concerned with reaching for and grasping objects, but damage in the area concerned with consciously recognizing objects. So, would the reverse damage lead to the opposite symptoms? Indeed, there are a few such patients—who can see and recognize objects but have difficulty pointing toward or grasping them.

How strangely intricate is this thing we call vision, conclude Goodale and Milner in their aptly titled book, Sight Unseen. We may think of our vision as one system that controls our visually guided actions, but it is actually a dual-processing system. A visual perception track enables us “to create the mental furniture that allows us to think about the world”—to recognize things and to plan future actions. A visual action track guides our moment-to-moment actions. On rare occasions, the two conflict.
Shown the hollow face illusion, people will mistakenly perceive the inside of a mask as a protruding face (FIGURE 7.2). Yet they will unhesitatingly and accurately reach into the inverted mask to flick off a buglike target stuck on the face. What their conscious mind doesn’t know, their hand does.

This big idea—that much of our everyday thinking, feeling, and acting operates outside our conscious awareness—“is a difficult one for people to accept,” report New York University psychologists John Bargh and Tanya Chartrand (1999). We are understandably biased to believe that our own intentions and deliberate choices rule our lives. But in the mind’s downstairs, there is much, much more to being human.

So, consciousness, though enabling us to exert voluntary control and to communicate our mental states to others, is but the tip of the information-processing iceberg. Beneath the surface, unconscious information processing occurs simultaneously on many parallel tracks. When we look at a bird flying, we are consciously aware of the result of our cognitive processing (“It’s a hummingbird!”) but not of our subprocessing of the bird’s color, form, movement, distance, and identity.

Today’s neuroscientists are identifying neural activity that precedes consciousness. In some provocative experiments, Benjamin Libet (1985, 2004) observed that when you move your wrist at will, you consciously experience the decision to move it about 0.2 seconds before the actual movement. No surprise there. But your brain waves jump about 0.35 seconds ahead of your conscious perception of your decision (FIGURE 7.3)! Thus, before you know it, your brain seems headed toward your decision to move your wrist. Likewise, if asked to press a button when you feel a tap, you can respond in 1/10th of a second—less time than it takes to become conscious that you have responded (Wegner, 2002). In a follow-up experiment, fMRI brain scans enabled researchers to predict—with 60 percent accuracy and up to 7 seconds ahead—participants’ decisions to press a button with their left or right finger (Soon et al., 2008). The startling conclusion: Consciousness sometimes arrives late to the decision-making party.

All of this unconscious information processing occurs simultaneously on multiple parallel tracks. Traveling by car on a familiar route, your hands and feet do the driving while your mind rehearses your upcoming day.
Running on automatic pilot allows your consciousness—your mind’s CEO—to monitor the whole system and deal with new challenges, while many assistants automatically take care of routine business. Serial conscious processing, though slower than parallel processing, is skilled at solving new problems, which require our focused attention. Try this: If you are right-handed, you can move your right foot in a smooth counterclockwise circle, and you can write the number 3 repeatedly with your right hand—but probably not at the same time. (If you are musically inclined, try something equally difficult: Tap a steady beat three times with your left hand while tapping four times with your right hand.) Both tasks require conscious attention, which can be in only one place at a time. If time is nature’s way of keeping everything from happening at once, then consciousness is nature’s way of keeping us from thinking and doing everything at once.
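The timing in Libet’s wrist-flexing findings is easy to lose track of, because the two offsets are measured from different reference points. A minimal sketch of the arithmetic, using only the two values quoted above (0.2 seconds and 0.35 seconds) and treating the wrist movement itself as time zero:

```python
# Timeline implied by the Libet findings quoted above, with the wrist
# movement at time zero and earlier events as negative offsets (in seconds).
movement = 0.0
conscious_decision = movement - 0.2              # decision felt ~0.2 s before moving
readiness_potential = conscious_decision - 0.35  # brain waves jump ~0.35 s earlier still

# Measurable brain activity therefore precedes the movement by about 0.55 s,
# and precedes the conscious decision by about 0.35 s.
print(round(readiness_potential, 2))  # -0.55
```

In other words, the “readiness potential” leads the movement by roughly half a second, and leads awareness of the decision by about a third of a second.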


FIGURE 7.3 Is the brain ahead of the mind? In this study, volunteers watched a computer clock sweep through a full revolution every 2.56 seconds. They noted the time at which they decided to move their wrist. About one-third of a second before that decision, their brain wave activity jumped, indicating a readiness potential to move. Watching a slow-motion replay, the researchers were able to predict when a person was about to decide to move (following which, the wrist did move) (Libet, 1985, 2004).

Selective Attention

7-2 How much information do we consciously attend to at once?

Through selective attention, your conscious awareness focuses, like a flashlight beam, on only a very limited aspect of all that you experience. By one estimate, your five senses take in 11,000,000 bits of information per second, of which you consciously process about 40 (Wilson, 2002). Yet your mind’s unconscious track intuitively makes great use of the other 10,999,960 bits. Until reading this sentence, for example, you have been unaware that your shoes are pressing against your feet or that your nose is in your line of vision. Now, suddenly, your attentional spotlight shifts. Your feet feel encased, your nose stubbornly intrudes on the page before you. While attending to these words, you’ve also been blocking from awareness information coming from your peripheral vision. But you can change that. As you stare at the X below, notice what surrounds the book (the edges of the page, your desktop, and so forth).

selective attention the focusing of conscious awareness on a particular stimulus.


X

inattentional blindness failing to see visible objects when our attention is directed elsewhere.

change blindness failing to notice changes in the environment.

Another example of selective attention, the cocktail party effect, is your ability to attend to only one voice among many. (Let another voice speak your name and your cognitive radar, operating on the mind’s other track, will instantly bring that voice into consciousness.) This focused listening comes at a cost. Imagine hearing two conversations over a headset, one in each ear, and being asked to repeat the message in your left ear while it is spoken. When paying attention to what is being said in your left ear, you won’t perceive what is said in your right. Asked later what language your right ear heard, you may draw a blank (though you could report the speaker’s gender and loudness).

Selective Attention and Accidents


Talk on the phone while driving and your selective attention will shift back and forth from the road to the phone. But when a demanding situation requires your full attention, you’ll probably stop talking. This process of switching attentional gears, especially when shifting to complex tasks, can entail a slight and sometimes fatal delay in coping (Rubenstein et al., 2001). The U.S. National Highway Traffic Safety Administration (2006) estimates that almost 80 percent of vehicle crashes involve driver distraction. In University of Utah driving-simulation experiments, students conversing on cellphones were slower to detect and respond to traffic signals, billboards, and other cars (Strayer & Johnston, 2001; Strayer et al., 2003).

Because attention is selective, attending to a phone call (or a GPS navigation system or a DVD player) causes inattention to other things. Thus, when Suzanne McEvoy and her University of Sydney colleagues (2005, 2007) analyzed phone records for the moments before a car crash, they found that cellphone users (even with hands-free sets) were four times more at risk. Having a passenger increased risk only 1.6 times. This difference in risk also appeared in an experiment that asked drivers to pull off at a freeway rest stop 8 miles ahead. Of drivers conversing with a passenger, 88 percent did so. Of those talking on a cellphone, 50 percent drove on by (Strayer & Drews, 2007). Even hands-free cellphone talking is more distracting than a conversation with passengers, who can see the driving demands and pause the conversation.

Driven to distraction In driving-simulation experiments, people whose attention is diverted by cellphone conversation make more driving errors.

Walking while talking can also pose dangers, as one naturalistic observation of Ohio State University pedestrians found (Nasar et al., 2008). Half the people on cellphones and only a quarter without this distraction exhibited unsafe road crossing, such as by crossing when a car was approaching.

Selective Inattention At the level of conscious awareness, we are “blind” to all but a tiny sliver of the immense array of visual stimuli constantly before us. Ulric Neisser (1979) and Robert Becklen and Daniel Cervone (1983) demonstrated this dramatically by showing people


FIGURE 7.4 Gorillas in our midst When attending to one task (counting basketball passes by one of the three-person teams) about half the viewers displayed inattentional blindness by failing to notice a clearly visible gorilla passing through. (Daniel Simons, University of Illinois)

|| Magicians exploit our change blindness by selectively riveting our attention on one hand’s dramatic act with inattention to the change accomplished by the other hand. ||

FIGURE 7.5 Change blindness While a man (white hair) provides directions to a construction worker, two experimenters rudely pass between them carrying a door. During this interruption, the original worker switches places with another person wearing different colored clothing. Most people, focused on their direction giving, do not notice the switch.

© 1998 Psychonomic Society, Inc. Image provided courtesy of Daniel J. Simons.

a one-minute video in which images of three black-shirted men tossing a basketball were superimposed over the images of three white-shirted players. The viewers’ supposed task was to press a key every time a black-shirted player passed the ball. Most focused their attention so completely on the game that they failed to notice a young woman carrying an umbrella saunter across the screen midway through the video. When researchers replayed the video, viewers were astonished to see her. With their attention directed elsewhere, they exhibited inattentional blindness (Mack & Rock, 2000). In a recent repeat of the experiment, smart-aleck researchers Daniel Simons and Christopher Chabris (1999) sent a gorilla-suited assistant through the swirl of players (FIGURE 7.4). During its 5- to 9-second cameo appearance, the gorilla paused to thump its chest. Still, half the conscientious pass-counting participants failed to see it.

In other experiments, people have also exhibited a blindness to change. After a brief visual interruption, a big Coke bottle may disappear, a railing may rise, clothing color may change, but, more often than not, viewers won’t notice (Rensink et al., 1997; Simons, 1996; Simons & Ambinder, 2005). This form of inattentional blindness is called change blindness. It has occurred among people giving directions to a construction worker who, unnoticed by two-thirds of them, is replaced by another construction worker (FIGURE 7.5). Out of sight, out of mind. Change deafness can also occur. In one experiment, 40 percent of people focused on repeating a list of sometimes challenging words failed to notice a change in the person speaking (Vitevitch, 2003).

An equally astonishing form of inattention is the choice blindness discovered by a Swedish research team. Petter Johansson and his colleagues (2005) showed 120 volunteers two female faces for 2 to 5 or more seconds and asked them which face was more attractive.
The researchers then put the photos face down and handed viewers the one they had chosen, inviting them to explain their choice. But on 3 of 15 occasions, the tricky researchers used sleight-of-hand to switch the photos—showing viewers the face they had not chosen. Not only did people seldom notice the deception (on only 13 percent of the switches), they readily explained why they preferred the face they had actually rejected. “I chose her because she smiled,” said one person (after picking the solemn-faced one). Asked later whether they would notice such a


FIGURE 7.6 The pop-out phenomenon

switch in a “hypothetical experiment,” 84 percent insisted they would. They exhibited a blindness the researchers call (can you see the twinkle in their eyes?) choice-blindness blindness.

Some stimuli, however, are so powerful, so strikingly distinct, that we experience pop-out, as with the only smiling face in FIGURE 7.6. We don’t choose to attend to these stimuli; they draw our eye and demand our attention.

Our selective attention extends even into our sleep, when we are oblivious to most but not all of what is happening around us. We may feel “dead to the world,” but we are not.

Review The Brain and Consciousness

7-1 What is the “dual processing” being revealed by today’s cognitive neuroscience?

Cognitive neuroscientists and others studying the brain mechanisms underlying consciousness and cognition have discovered a two-track human mind, with each track supported by its own neural processing. This dual processing affects our perception, memory, and attitudes at an explicit, conscious level and at an implicit, unconscious level.

7-2 How much information do we consciously attend to at once?

We selectively attend to, and process, a very limited aspect of incoming information, blocking out most, often shifting the spotlight of our attention from one thing to another. The limits of our attention contribute to car and pedestrian accidents. We even display inattentional blindness to events and changes in our visual world.

Terms and Concepts to Remember
consciousness, p. 85
cognitive neuroscience, p. 85
dual processing, p. 86
selective attention, p. 87
inattentional blindness, p. 89
change blindness, p. 89

Test Yourself
1. What are the mind’s two tracks, as revealed by studies of “dual processing”? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself
1. Can you recall a recent time when, your attention focused on one thing, you were oblivious to something else (perhaps to pain, to someone’s approach, or to background music)?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

MODULE 8 Sleep and Dreams

Sleep—the irresistible tempter to whom we inevitably succumb. Sleep—the equalizer of presidents and peasants. Sleep—sweet, renewing, mysterious sleep. Even when you are deeply asleep, your perceptual window is actually not completely shut. You move around on your bed, but you manage not to fall out. The occasional roar of passing vehicles may leave your deep sleep undisturbed, but a cry from a baby’s nursery quickly interrupts it. So does the sound of your name. EEG recordings confirm that the brain’s auditory cortex responds to sound stimuli even during sleep (Kutas, 1990). And when we are asleep, as when we are awake, we process most information outside our conscious awareness.

Many of sleep’s mysteries are now being solved as some people sleep, attached to recording devices, while others observe. By recording brain waves and muscle movements, and by observing and occasionally waking sleepers, researchers are glimpsing things that a thousand years of common sense never told us. Perhaps you can anticipate some of their discoveries. Are the following statements true or false?

1. When people dream of performing some activity, their limbs often move in concert with the dream.
2. Older adults sleep more than young adults.
3. Sleepwalkers are acting out their dreams.
4. Sleep experts recommend treating insomnia with an occasional sleeping pill.
5. Some people dream every night; others seldom dream.

All these statements (adapted from Palladino & Carducci, 1983) are false. To see why, read on.

Biological Rhythms and Sleep
Why Do We Sleep?
Sleep Disorders
Dreams

“I love to sleep. Do you? Isn’t it great? It really is the best of both worlds. You get to be alive and unconscious.” Comedian Rita Rudner, 1993

|| Dolphins, porpoises, and whales sleep with one side of their brain at a time (Miller et al., 2008). ||

Biological Rhythms and Sleep

8-1 How do our biological rhythms influence our daily functioning and our sleep and dreams?

Like the ocean, life has its rhythmic tides. Over varying time periods, our bodies fluctuate, and with them, our minds. Let’s look more closely at two of those biological rhythms—our 24-hour biological clock and our 90-minute sleep cycle.

Circadian Rhythm

The rhythm of the day parallels the rhythm of life—from our waking at a new day’s birth to our nightly return to what Shakespeare called “death’s counterfeit.” Our bodies roughly synchronize with the 24-hour cycle of day and night through a biological clock called the circadian rhythm (from the Latin circa, “about,” and diem, “day”). Body temperature rises as morning approaches, peaks during the day, dips for a time in early afternoon (when many people take siestas), and then begins to drop again before we go to sleep. Thinking is sharpest and memory most accurate when we are at our daily peak in circadian arousal. Pulling an all-nighter, we may feel groggiest about 4:00 A.M., and then we get a second wind after our normal wakeup time arrives.

Bright light in the morning tweaks the circadian clock by activating light-sensitive retinal proteins. These proteins control the circadian clock by triggering signals to the

circadian [ser-KAY-dee-an] rhythm the biological clock; regular bodily rhythms (for example, of temperature and wakefulness) that occur on a 24-hour cycle.


FIGURE 8.1 The biological clock Light striking the retina signals the suprachiasmatic nucleus (SCN) to suppress the pineal gland’s production of the sleep hormone melatonin. At night, the SCN quiets down, allowing the pineal gland to release melatonin into the bloodstream. (Diagram labels: pineal gland; suprachiasmatic nucleus; light; melatonin production suppressed; melatonin produced; blood flow.)

|| At about age 20 (slightly earlier for women), we begin to shift from being evening-energized “owls” to being morning-loving “larks” (Roenneberg et al., 2004). Most 20-year-olds are owls, with performance improving across the day (May & Hasher, 1998). Most older adults are larks, with performance declining as the day wears on. Retirement homes are typically quiet by mid-evening; in university dorms, the day is far from over. ||

|| If our natural circadian rhythm were attuned to a 23-hour cycle, would we instead need to discipline ourselves to stay up later at night and sleep in longer in the morning? ||

brain’s suprachiasmatic nucleus (SCN)—a pair of grain-of-rice-sized, 20,000-cell clusters in the hypothalamus (Foster, 2004). The SCN does its job in part by causing the brain’s pineal gland to decrease its production of the sleep-inducing hormone melatonin in the morning or increase it in the evening (FIGURE 8.1).

Bright light at night helps delay sleep, thus resetting our biological clock when we stay up late and sleep in on weekends (Oren & Terman, 1998). Sleep often eludes those who sleep till noon on Sunday and then go to bed just 11 hours later in preparation for the new workweek. They are like New Yorkers whose biology is on California time. But what about North Americans who fly to Europe, and who need to be up when their circadian rhythm cries “Sleep!”? Studies in the laboratory and with shift workers find that bright light—spending the next day outdoors—helps reset the biological clock (Czeisler et al., 1986, 1989; Eastman et al., 1995).

Curiously—given that our ancestors’ body clocks were attuned to the rising and setting sun of the 24-hour day—many of today’s young adults adopt something closer to a 25-hour day, by staying up too late to get 8 hours of sleep. For this, we can thank (or blame) Thomas Edison, inventor of the light bulb. Being bathed in light disrupts our 24-hour biological clock (Czeisler et al., 1999; Dement, 1999). This helps explain why, until our later years, we must discipline ourselves to go to bed and force ourselves to get up. Most animals, too, when placed under unnatural constant illumination will exceed a 24-hour day. Artificial light delays sleep.

Sleep Stages

8-2 What is the biological rhythm of our sleep?

REM sleep rapid eye movement sleep, a recurring sleep stage during which vivid dreams commonly occur. Also known as paradoxical sleep, because the muscles are relaxed (except for minor twitches) but other body systems are active.

alpha waves the relatively slow brain waves of a relaxed, awake state.

As sleep overtakes us and different parts of our brain’s cortex stop communicating, consciousness fades (Massimini et al., 2005). But our still-active sleeping brain does not emit a constant dial tone, because sleep has its own biological rhythm. About every 90 minutes, we pass through a cycle of five distinct sleep stages.

This elementary fact apparently was unknown until 8-year-old Armond Aserinsky went to bed one night in 1952. His father, Eugene, a University of Chicago graduate student, needed to test an electroencephalograph he had been repairing that day (Aserinsky, 1988; Seligman & Yellen, 1987). Placing electrodes near Armond’s eyes to record the rolling eye movements then believed to occur during sleep, Aserinsky watched the machine go wild, tracing deep zigzags on the graph paper. Could the machine still be broken? As the night proceeded and the activity periodically recurred, Aserinsky finally realized that the fast, jerky eye movements were accompanied by energetic brain activity. Awakened during one such episode, Armond reported having a dream. Aserinsky had discovered what we now know as REM sleep (rapid eye movement sleep).

To find out if similar cycles occur during adult sleep, Nathaniel Kleitman (1960) and Aserinsky pioneered procedures that have now been used with thousands of volunteers.


FIGURE 8.2 Measuring sleep activity Sleep researchers measure brain-wave activity, eye movements, and muscle tension by electrodes that pick up weak electrical signals from the brain, eye, and facial muscles. (Trace labels: EEG (brain waves); left and right eye movements; EMG (muscle tension). Photo: Hank Morgan/Rainbow. From Dement, 1978.)

To appreciate their methods and findings, imagine yourself in their lab. As the hour grows late, you feel sleepy and you yawn in response to reduced brain metabolism. (Yawning, which can be socially contagious, stretches your neck muscles and increases your heart rate, which increases your alertness [Moorcroft, 2003].) When you are ready for bed, the researcher tapes electrodes to your scalp (to detect your brain waves), just outside the corners of your eyes (to detect eye movements), and on your chin (to detect muscle tension) (FIGURE 8.2). Other devices allow the researcher to record your heart rate, your respiration rate, and your genital arousal.

When you are in bed with your eyes closed, the researcher in the next room sees on the EEG the relatively slow alpha waves of your awake but relaxed state (FIGURE 8.3). As you adapt to all this equipment, you grow tired and, in an unremembered moment,

FIGURE 8.3 Brain waves and sleep stages The regular alpha waves of an awake, relaxed state are quite different from the slower, larger delta waves of deep Stage 4 sleep. Although the rapid REM sleep waves resemble the near-waking Stage 1 sleep waves, the body is more aroused during REM sleep than during Stage 1 sleep. (Traces, top to bottom: awake, relaxed (alpha waves); Stage 1 sleep; Stage 2 sleep, with spindle (burst of activity); Stage 3 sleep; Stage 4 sleep (delta waves); REM sleep (eye movement phase). From Dement, 1978.)

FIGURE 8.4 The moment of sleep We seem unaware of the moment we fall into sleep, but someone eavesdropping on our brain waves could tell. (From Dement, 1999.)

|| To catch your own hypnagogic experiences, you might use the “Repeat Snooze” alarm found on some alarm clocks. ||


slip into sleep. The transition is marked by the slowed breathing and the irregular brain waves of Stage 1 (FIGURE 8.4).

In one of his 15,000 research participants, William Dement (1999) observed the moment the brain’s perceptual window to the outside world slammed shut. Dement asked this sleep-deprived young man, lying on his back with eyelids taped open, to press a button every time a strobe light flashed in his eyes (about every 6 seconds). After a few minutes the young man missed one. Asked why, he said, “Because there was no flash.” But there was a flash. He missed it because (as his brain activity revealed) he had fallen asleep for 2 seconds. Unaware that he had done so, he had missed not only the flash 6 inches from his nose but also the abrupt moment of his entry into sleep.

During this brief Stage 1 sleep you may experience fantastic images, resembling hallucinations—sensory experiences that occur without a sensory stimulus. You may have a sensation of falling (at which moment your body may suddenly jerk) or of floating weightlessly. Such hypnagogic sensations may later be incorporated into memories. People who claim to have been abducted by aliens—often shortly after getting into bed—commonly recall being floated off or pinned down on their beds (Clancy, 2005).

You then relax more deeply and begin about 20 minutes of Stage 2 sleep, characterized by the periodic appearance of sleep spindles—bursts of rapid, rhythmic brain-wave activity (see Figure 8.3). Although you could still be awakened without too much difficulty, you are now clearly asleep. Sleeptalking—usually garbled or nonsensical—can occur during Stage 2 or any other sleep stage (Mahowald & Ettinger, 1990).

Then for the next few minutes you go through the transitional Stage 3 to the deep sleep of Stage 4. First in Stage 3, and increasingly in Stage 4, your brain emits large, slow delta waves. These two slow-wave stages last for about 30 minutes, during which you would be hard to awaken.
Curiously, it is at the end of the deep sleep of Stage 4 that children may wet the bed or begin sleepwalking. About 20 percent of 3- to 12-year-olds have at least one episode of sleepwalking, usually lasting 2 to 10 minutes; some 5 percent have repeated episodes (Giles et al., 1994).

REM Sleep

sleep periodic, natural, reversible loss of consciousness—as distinct from unconsciousness resulting from a coma, general anesthesia, or hibernation. (Adapted from Dement, 1999.)
hallucinations false sensory experiences, such as seeing something in the absence of an external visual stimulus.

delta waves the large, slow brain waves associated with deep sleep.

About an hour after you first fall asleep, a strange thing happens. Rather than continuing in deep slumber, you ascend from your initial sleep dive. Returning through Stage 3 and Stage 2 (where you spend about half your night), you enter the most intriguing sleep phase—REM sleep (FIGURE 8.5). For about 10 minutes, your brain waves become rapid and saw-toothed, more like those of the nearly awake Stage 1 sleep. But unlike Stage 1 sleep, during REM sleep your heart rate rises, your breathing becomes rapid and irregular, and every half-minute or so your eyes dart around in a momentary burst of activity behind closed lids. Because anyone watching a sleeper’s eyes can notice these REM bursts, it is amazing that science was ignorant of REM sleep until 1952.

Except during very scary dreams, your genitals become aroused during REM sleep, and you have an erection or increased vaginal lubrication and clitoral engorgement, regardless of whether the dream’s content is sexual (Karacan et al., 1966). Men’s common “morning erection” stems from the night’s last REM period, often just before waking. In young men, sleep-related erections outlast REM periods, lasting 30 to 45 minutes on average (Karacan et al., 1983; Schiavi & Schreiner-Engel, 1988). A typical 25-year-old man therefore has an erection during nearly half his night’s sleep,


FIGURE 8.5 The stages in a typical night’s sleep Most people pass through the five-stage sleep cycle (graph a) several times, with the periods of Stage 4 sleep and then Stage 3 sleep diminishing and REM sleep periods increasing in duration. Graph b plots this increasing REM sleep and decreasing deep sleep based on data from 30 young adults. (Graph a: sleep stages—awake, 1 through 4, REM—across hours asleep; REM periods increase as night progresses, while Stage 4 occurs early in the night. Graph b: minutes of Stage 4 and REM sleep across the 1st through 8th hours asleep, showing increasing REM and decreasing Stage 4. From Cartwright, 1978; Webb, 1992.)

a 65-year-old man for one-quarter. Many men troubled by erectile dysfunction (impotence) have sleep-related erections, suggesting the problem is not between their legs.

Although your brain’s motor cortex is active during REM sleep, your brainstem blocks its messages, leaving muscles relaxed—so relaxed that, except for an occasional finger, toe, or facial twitch, you are essentially paralyzed. Moreover, you cannot easily be awakened. Thus, REM sleep is sometimes called paradoxical sleep, with the body internally aroused and externally calm.

More intriguing than the paradoxical nature of REM sleep is what the rapid eye movements announce: the beginning of a dream. Even those who claim they never dream will, more than 80 percent of the time, recall a dream after being awakened during REM sleep. Unlike the fleeting images of Stage 1 sleep (“I was thinking about my exam today,” or “I was trying to borrow something from someone”), REM sleep dreams are often emotional, usually storylike, and more richly hallucinatory:

My husband and I were at some friends’ house, but our friends weren’t there. Their TV had been left on, but otherwise it was very quiet. After we wandered around for a while, their dogs finally noticed us and barked and growled loudly, with bared teeth.

Some sleep deeply, some not The fluctuating sleep cycle enables safe sleep for these soldiers on the battlefield. One benefit of communal sleeping is that someone will probably be awake or easily roused in the event of a threat. (AP Photo/David Guttenfelder)

© 1994 by Sidney Harris.


“Boy are my eyes tired! I had REM sleep all night long.”

|| Horses, which spend 92 percent of each day standing and can sleep standing, must lie down for REM sleep (Morrison, 2003). ||


|| People rarely snore during dreams. When REM starts, snoring stops. ||

MODULE 8 Sleep and Dreams

The sleep cycle repeats itself about every 90 minutes. As the night wears on, deep Stage 4 sleep gets progressively briefer and then disappears. The REM and Stage 2 sleep periods get longer (see Figure 8.5b). By morning, 20 to 25 percent of our average night’s sleep—some 100 minutes—has been REM sleep. Thirty-seven percent of people report rarely or never having dreams “that you can remember the next morning” (Moore, 2004). Unknown to those people, they spend about 600 hours a year experiencing some 1500 dreams, or more than 100,000 dreams over a typical lifetime—dreams swallowed by the night but never acted out, thanks to REM’s protective paralysis.

䉴|| Why Do We Sleep?

The idea that “everyone needs 8 hours of sleep” is untrue. Newborns spend nearly two-thirds of their day asleep, most adults no more than one-third. Age-related differences in average sleeping time are rivaled by the differences among individuals at any age. Some people thrive with fewer than 6 hours per night; others regularly rack up 9 hours or more. Such sleep patterns may be genetically influenced. When Wilse Webb and Scott Campbell (1983) checked the pattern and duration of sleep among fraternal and identical twins, only the identical twins were strikingly similar.

Sleep patterns are also culturally influenced. In the United States and Canada, for example, adults average just over 8 hours per night (Hurst, 2008; Robinson & Martin, 2007). (The weeknight sleep of many students and workers falls short of this average [NSF, 2008].) North Americans are nevertheless sleeping less than their counterparts a century ago. Thanks to modern light bulbs, shift work, and social diversions, those who would have gone to bed at 9:00 P.M. are now up until 11:00 P.M. or later. Thomas Edison (1948, pp. 52, 178) was pleased to accept credit for this, believing that less sleep meant more productive time and greater opportunities:

When I went through Switzerland in a motor-car, so that I could visit little towns and villages, I noted the effect of artificial light on the inhabitants. Where water power and electric light had been developed, everyone seemed normally intelligent. When these appliances did not exist, and the natives went to bed with the chickens, staying there till daylight, they were far less intelligent.

|| In 1989, Michael Doucette was named America’s Safest Driving Teen. In 1990, while driving home from college, he fell asleep at the wheel and collided with an oncoming car, killing both himself and the other driver. Michael’s driving instructor later acknowledged never having mentioned sleep deprivation and drowsy driving (Dement, 1999). ||

Allowed to sleep unhindered, most adults will sleep at least 9 hours a night, reports Stanley Coren (1996). With that much sleep, we awake refreshed, sustain better moods, and perform more efficient and accurate work. Compare that with a succession of 5-hour nights, when we accumulate a sleep debt that cannot be paid off by one long marathon sleep. “The brain keeps an accurate count of sleep debt for at least two weeks,” says William Dement (1999, p. 64). With our body yearning for sleep, we will begin to feel terrible. Trying to stay awake, we will eventually lose. In the tiredness battle, sleep always wins. Obviously, then, we need sleep. Sleep commands roughly one-third of our lives— some 25 years, on average. But why? It seems an easy question to answer: Just keep people awake for several days and note how they deteriorate. If you were a volunteer in such an experiment, how do you think it would affect your body and mind? You would, of course, become terribly drowsy—especially during the hours when your biological clock programs you to sleep. But could the lack of sleep physically damage you? Would it noticeably alter your biochemistry or body organs? Would you become emotionally disturbed? Mentally disoriented?


The Effects of Sleep Loss

8-3 How does sleep loss affect us?

Good news! Psychologists have discovered a treatment that strengthens memory, increases concentration, boosts mood, moderates hunger and obesity, fortifies the disease-fighting immune system, and lessens the risk of fatal accidents. Even better news: The treatment feels good, it can be self-administered, the supplies are limitless, and it’s available free! If you are a typical university-age student, often going to bed near 2:00 A.M. and dragged out of bed six hours later by the dreaded alarm, the treatment is simple: Each night just add an hour to your sleep.

The U.S. Navy and the National Institutes of Health have demonstrated the benefits of unrestricted sleep in experiments in which volunteers spent 14 hours daily in bed for at least a week. For the first few days, the volunteers averaged 12 hours of sleep a day or more, apparently paying off a sleep debt that averaged 25 to 30 hours. That accomplished, they then settled back to 7.5 to 9 hours nightly and, with no sleep debt, felt energized and happier (Dement, 1999). In one Gallup survey (Mason, 2005), 63 percent of adults who reported getting the sleep they need also reported being “very satisfied” with their personal life (as did only 36 percent of those needing more sleep). When Daniel Kahneman and his colleagues (2004) invited 909 working women to report on their daily moods, they were struck by what mattered little, such as money (so long as they were not battling poverty). And they were struck by what mattered a lot—less time pressure at work and a good night’s sleep.

Unfortunately, many of us are suffering from patterns that not only leave us sleepy but also thwart our having an energized feeling of well-being (Mikulincer et al., 1989). Teens who typically need 8 or 9 hours of sleep now average less than 7 hours—nearly 2 hours less each night than did their counterparts of 80 years ago (Holden, 1993; Maas, 1999). In one survey, 28 percent of high school students acknowledged falling asleep in class at least once a week (Sleep Foundation, 2006). When the going gets boring, the students start snoring. Even when awake, students often function below their peak. And they know it: Four in five American teens and three in five 18- to 29-year-olds wish they could get more sleep on weekdays (Mason, 2003, 2005). Yet that teen who staggers glumly out of bed in response to an unwelcome alarm, yawns through morning classes, and feels half-depressed much of the day may be energized at 11 P.M. and mindless of the next day’s looming sleepiness (Carskadon, 2002).

Sleep researcher William Dement (1997) reports that at Stanford University, 80 percent of students are “dangerously sleep deprived. . . . Sleep deprivation [entails] difficulty studying, diminished productivity, tendency to make mistakes, irritability, fatigue.” A large sleep debt “makes you stupid,” says Dement (1999, p. 231). It can also make you fatter. Sleep deprivation increases the hunger-arousing hormone ghrelin and decreases its hunger-suppressing partner, leptin. It also increases the stress hormone cortisol, which stimulates the body to make fat. Sure enough, children and adults who sleep less than normal are fatter than those who sleep more (Chen et al., 2008; Knutson et al., 2007; Schoenborn & Adams, 2008). And experimental sleep deprivation of adults increases appetite and eating (Nixon et al., 2008; Patel et al., 2006; Spiegel et al., 2004; Van Cauter et al., 2007). This may help explain

Reuters/China Daily (China)

䉴 Sleepless and suffering These fatigued, sleep-deprived earthquake rescue workers in China may experience a depressed immune system, impaired concentration, and greater vulnerability to accidents.

|| In a 2001 Gallup poll, 61 percent of men, but only 47 percent of women, said they got enough sleep. ||

“Tiger Woods said that one of the best things about his choice to leave Stanford for the professional golf circuit was that he could now get enough sleep.” Stanford sleep researcher William Dement, 1997

|| To test whether you are one of the many sleep-deprived students, see Table 8.1 on the next page. ||


TABLE 8.1

Cornell University psychologist James Maas reports that most students suffer the consequences of sleeping less than they should. To see if you are in that group, answer true or false to each of the following questions:

1. I need an alarm clock in order to wake up at the appropriate time.
2. It’s a struggle for me to get out of bed in the morning.
3. Weekday mornings I hit the snooze bar several times to get more sleep.
4. I feel tired, irritable, and stressed out during the week.
5. I have trouble concentrating and remembering.
6. I feel slow with critical thinking, problem solving, and being creative.
7. I often fall asleep watching TV.
8. I often fall asleep in boring meetings or lectures or in warm rooms.
9. I often fall asleep after heavy meals or after a low dose of alcohol.
10. I often fall asleep while relaxing after dinner.
11. I often fall asleep within five minutes of getting into bed.
12. I often feel drowsy while driving.
13. I often sleep extra hours on weekend mornings.
14. I often need a nap to get through the day.
15. I have dark circles around my eyes.

If you answered “true” to three or more items, you probably are not getting enough sleep. To determine your sleep needs, Maas recommends that you “go to bed 15 minutes earlier than usual every night for the next week—and continue this practice by adding 15 more minutes each week—until you wake without an alarm clock and feel alert all day.” (Quiz reprinted with permission from James B. Maas, Power sleep: The revolutionary program that prepares your mind and body for peak performance [New York: HarperCollins, 1999].)

the common weight gain among sleep-deprived students (although a review of 11 studies reveals that the mythical “freshman 15” is, on average, closer to a “first-year 4” [Hull et al., 2007]). In addition to making us more vulnerable to obesity, sleep deprivation can suppress immune cells that fight off viral infections and cancer (Motivala & Irwin, 2007). This may help explain why people who sleep 7 to 8 hours a night tend to outlive those who are chronically sleep deprived, and why older adults who have no difficulty falling or staying asleep tend to live longer than their sleep-deprived agemates (Dement, 1999; Dew et al., 2003). When infections do set in, we typically sleep more, boosting our immune cells. Chronic sleep debt also alters metabolic and hormonal functioning in ways that mimic aging and are conducive to hypertension and memory impairment (Spiegel et al., 1999; Taheri, 2004). Other effects include irritability, slowed performance, and impaired creativity, concentration, and communication (Harrison & Horne, 2000). Reaction times slow and errors increase on visual tasks similar to those involved in airport baggage screening, performing surgery, and reading X-rays (Horowitz et al., 2003). Sleep deprivation can be devastating for driving, piloting, and equipment operating. Driver fatigue contributes to an estimated 20 percent of American traffic accidents (Brody, 2002) and to some 30 percent of Australian highway deaths (Maas, 1999). Consider the timing of the 1989 Exxon Valdez oil spill; Union Carbide’s 1984 Bhopal, India, disaster; and the 1979 Three Mile Island and 1986 Chernobyl nuclear accidents—all occurred after midnight, when operators in charge were likely to be


Less sleep, more accidents / More sleep, fewer accidents

䉴 FIGURE 8.6 Canadian traffic accidents On the Monday after the spring time change, when people lose one hour of sleep, accidents increased as compared with the Monday before. In the fall, traffic accidents normally increase because of greater snow, ice, and darkness, but they diminished after the time change, when people gained an hour of sleep. (Adapted from Coren, 1996.)

drowsiest and unresponsive to signals that require an alert response. When sleepy frontal lobes confront an unexpected situation, misfortune often results. Stanley Coren capitalized on what is, for many North Americans, a semi-annual sleep-manipulation experiment—the “spring forward” to “daylight savings” time and “fall backward” to “standard” time. Searching millions of records, Coren found that in both Canada and the United States, accidents increase immediately after the time change that shortens sleep (FIGURE 8.6). But let’s put all this positively: To manage your life with enough sleep to awaken naturally and well rested is to be more alert, productive, happy, healthy, and safe.


Sleep Theories 8-4 What is sleep’s function? So, nature charges us for our sleep debt. But why do we have this need for sleep? We have very few answers, but sleep may have evolved for five reasons: First, sleep protects. When darkness precluded our distant ancestors’ hunting and food gathering and made travel treacherous, they were better off asleep in a cave, out of harm’s way. Those who didn’t try to navigate around rocks and cliffs at night were more likely to leave descendants. This fits a broader principle: A species’ sleep pattern tends to suit its ecological niche. Animals with the most need to graze and the least ability to hide tend to sleep less. Elephants and horses sleep 3 to 4 hours a day, gorillas 12 hours, and cats 14 hours. For bats and eastern chipmunks, both of which sleep 20 hours, to live is hardly more than to eat and to sleep (Moorcroft, 2003). (Would you rather be like a giraffe and sleep 2 hours a day or a bat and sleep 20?) Second, sleep helps us recuperate. It helps restore and repair brain tissue. Bats and other animals with high waking metabolism burn a lot of calories, producing a lot of free radicals, molecules that are toxic to neurons. Sleeping a lot gives resting neurons time to repair themselves, while allowing unused connections to weaken (Siegel, 2003; Vyazovski et al., 2008). Think of it this way: When consciousness leaves your house, brain construction workers come in for a makeover. But sleep is not just for keeping us safe and for repairing our brain. New research reveals that sleep is for making memories—for restoring and rebuilding our fading

“Sleep faster, we need the pillows.” Yiddish proverb

“Corduroy pillows make headlines.” Anonymous


insomnia recurring problems in falling or staying asleep.

narcolepsy a sleep disorder characterized by uncontrollable sleep attacks. The sufferer may lapse directly into REM sleep, often at inopportune times.

sleep apnea a sleep disorder characterized by temporary cessations of breathing during sleep and repeated momentary awakenings.

memories of the day’s experiences. People trained to perform tasks recall them better after a night’s sleep, or even after a short nap, than after several hours awake (Walker & Stickgold, 2006). And in both humans and rats, neural activity during slow-wave sleep reenacts and promotes recall of prior novel experiences (Peigneux et al., 2004; Ribeiro et al., 2004). In one experiment, people were exposed to the scent of roses while learning the locations of various picture cards. When reexposed to the scent during slow-wave sleep, their memory scratch-pad—the hippocampus—was reactivated, and they remembered the picture placements with almost perfect accuracy the next day (Rasch et al., 2007). Sleep also feeds creative thinking. On occasion, dreams have inspired noteworthy literary, artistic, and scientific achievements, such as the dream that clued chemist August Kekulé to the structure of benzene (Ross, 2006). More commonplace is the boost that a complete night’s sleep gives to our thinking and learning. After working on a task, then sleeping on it, people solve problems more insightfully than do those who stay awake (Wagner et al., 2004). They can also, after sleep, better discern connections among different novel pieces of information (Ellenbogen et al., 2007). Even 15-month-olds, if retested after a nap, better recall relationships among novel words (Gómez et al., 2006). To think smart and see connections, it often pays to sleep on it. Finally, sleep may play a role in the growth process. During deep sleep, the pituitary gland releases a growth hormone. As we age, we release less of this hormone and spend less time in deep sleep (Pekkanen, 1982). Such discoveries are beginning to solve the ongoing riddle of sleep.

䉴|| Sleep Disorders

“The lion and the lamb shall lie down together, but the lamb will not be very sleepy.”

Mark Parisi/offthemark.com

Woody Allen, in the movie Love and Death, 1975

“Sleep is like love or happiness. If you pursue it too ardently it will elude you.” Wilse Webb, Sleep: The Gentle Tyrant, 1992 (p. 170)

8-5 What are the major sleep disorders? No matter what their normal need for sleep, 1 in 10 adults, and 1 in 4 older adults, complain of insomnia—not an occasional inability to sleep when anxious or excited, but persistent problems in falling or staying asleep (Irwin & others, 2006). From middle age on, sleep is seldom uninterrupted. Being occasionally awakened becomes the norm, not something to fret over or treat with medication. And some people do fret unnecessarily about their sleep (Coren, 1996). In laboratory studies, insomnia complainers do sleep less than others, but they typically overestimate—by about double—how long it takes them to fall asleep. They also underestimate by nearly half how long they actually have slept. Even if we have been awake only an hour or two, we may think we have had very little sleep because it’s the waking part we remember. The most common quick fixes for true insomnia—sleeping pills and alcohol—can aggravate the problem, reducing REM sleep and leaving the person with next-day blahs. Relying on sleeping pills—sales of which soared 60 percent from 2000 to 2006 (Saul, 2007)—the person may need increasing doses to get an effect. Then, when the drug is discontinued, the insomnia can worsen. Scientists are searching for natural chemicals that are abundant during sleep, hoping they might be synthesized as a sleep aid without side effects. In the meantime, sleep experts offer other natural alternatives:

䉴 Exercise regularly but not in the late evening. (Late afternoon is best.)
䉴 Avoid all caffeine after early afternoon, and avoid rich foods before bedtime. Instead, try a glass of milk, which provides raw materials for the manufacture of serotonin, a neurotransmitter that facilitates sleep.
䉴 Relax before bedtime, using dimmer light.


Dwayne Newton/PhotoEdit

䉴 Stress robs sleep Urban police officers, especially those under stress, report poorer sleep quality and less sleep than average (Neylan et al., 2002).

䉴 Sleep on a regular schedule (rise at the same time even after a restless night) and avoid naps. Sticking to a schedule boosts daytime alertness, too, as shown in an experiment in which University of Arizona students averaged 7.5 hours of sleep a night on either a varying or consistent schedule (Manber et al., 1996).
䉴 Hide the clock face so you aren’t tempted to check it repeatedly.
䉴 Reassure yourself that a temporary loss of sleep causes no great harm.
䉴 Realize that for any stressed organism, being vigilant is natural and adaptive. A personal conflict during the day often means a fitful sleep that night (Åkerstedt et al., 2007; Brisette & Cohen, 2002). Managing your stress levels will enable more restful sleeping.
䉴 If all else fails, settle for less sleep, either going to bed later or getting up earlier.

“In 1757 Benjamin Franklin gave us the axiom, ‘Early to bed, early to rise, makes a man healthy, wealthy, and wise.’ It would be more accurate to say ‘consistently to bed and consistently to rise . . . ’”

James B. Maas, Power Sleep, 1999

Rarer but also more troublesome than insomnia are the sleep disorders narcolepsy, sleep apnea, night terrors, and sleepwalking. Narcolepsy (from narco, “numbness,” and lepsy, “seizure”) sufferers experience periodic, overwhelming sleepiness. Attacks usually last less than 5 minutes but sometimes occur at the most inopportune times, perhaps just after taking a terrific swing at a softball or when laughing loudly, shouting angrily, or having sex (Dement, 1978, 1999). In severe cases, the person may collapse directly into a brief period of REM sleep, with its accompanying loss of muscular tension. People with narcolepsy—1 in 2000 of us, estimates the Stanford University Center for Narcolepsy (2002)—must therefore live with extra caution. As a traffic menace, “snoozing is second only to boozing,” says the American Sleep Disorders Association, and those with narcolepsy are especially at risk (Aldrich, 1989).

At the twentieth century’s end, researchers discovered a gene causing narcolepsy in dogs (Lin et al., 1999; Taheri, 2004). Genes help sculpt the brain, and neuroscientists are searching the brain for abnormalities linked with narcolepsy. One team of researchers discovered a relative absence of a hypothalamic neural center that produces orexin (also called hypocretin), a neurotransmitter linked to alertness (Taheri et al., 2002; Thannickal et al., 2000). (That discovery has led to the clinical testing of a new sleeping pill that works by blocking orexin’s arousing activity.) Narcolepsy, it is now clear, is a brain disease; it is not just “in your mind.” And this gives hope that narcolepsy might be effectively relieved by a drug that mimics the missing orexin and can sneak through the blood-brain barrier (Fujiki et al., 2003; Siegel, 2000). In the meantime, physicians are prescribing other drugs to relieve narcolepsy’s sleepiness in humans.

Archivo Iconografico, S.A./Corbis

䉴 Did Brahms need his own lullabies? Cranky, overweight, and nap-prone, Johannes Brahms exhibited common symptoms of sleep apnea (Margolis, 2000).

Sleep apnea also puts millions of people at increased risk of traffic accidents (Teran-Santos et al., 1999). Although 1 in 20 of us has this disorder, it was unknown


before modern sleep research. Apnea means “with no breath,” and people with this condition intermittently stop breathing during sleep. After an airless minute or so, decreased blood oxygen arouses them and they wake up enough to snort in air for a few seconds, in a process that repeats hundreds of times each night, depriving them of slow-wave sleep. Apart from complaints of sleepiness and irritability during the day—and their mate’s complaints about their loud “snoring”—apnea sufferers are often unaware of their disorder. The next morning, they have no recall of these episodes and may just report feeling fatigued and depressed (Peppard et al., 2006).

Sleep apnea is associated with obesity, and as the number of obese people in the United States has increased, so has this disorder, particularly among overweight men, including some football players (Keller, 2007). Anyone who snores at night, feels tired during the day, and possibly has high blood pressure as well (increasing the risk of a stroke or heart attack) should be checked for apnea (Dement, 1999). A physician may prescribe a masklike device with an air pump that keeps the sleeper’s airway open and breathing regular. If one doesn’t mind looking a little goofy in the dark (imagine a snorkeler at a slumber party), the treatment can effectively relieve both the apnea and the associated depressed energy and mood.

Unlike sleep apnea, night terrors target mostly children, who may sit up or walk around, talk incoherently, experience a doubling of heart and breathing rates, and appear terrified (Hartmann, 1981). They seldom wake up fully during an episode and recall little or nothing the next morning—at most, a fleeting, frightening image. Night terrors are not nightmares (which, like other dreams, typically occur during early morning REM sleep); night terrors usually occur during the first few hours of Stage 4.
Children also are most prone to sleepwalking—another Stage 4 sleep disorder—and to sleeptalking, conditions that run in families. Finnish twin studies reveal that occasional childhood sleepwalking occurs for about one-third of those with a sleepwalking fraternal twin and half of those with a sleepwalking identical twin. The same is true for sleeptalking (Hublin et al., 1997, 1998). Sleepwalking is usually harmless and unrecalled the next morning. Sleepwalkers typically return to bed on their own or are guided there by a family member. Young children, who have the deepest and lengthiest Stage 4 sleep, are the most likely to experience both night terrors and sleepwalking. As we grow older and deep Stage 4 sleep diminishes, so do night terrors and sleepwalking. After being sleep deprived, people sleep more deeply, which increases any tendency to sleepwalk (Zadra et al., 2008).

night terrors a sleep disorder characterized by high arousal and an appearance of being terrified; unlike nightmares, night terrors occur during Stage 4 sleep, within two or three hours of falling asleep, and are seldom remembered.

dream a sequence of images, emotions, and thoughts passing through a sleeping person’s mind. Dreams are notable for their hallucinatory imagery, discontinuities, and incongruities, and for the dreamer’s delusional acceptance of the content and later difficulties remembering it.

manifest content according to Freud, the remembered story line of a dream (as distinct from its latent, or hidden, content).

䉴|| Dreams

8-6 What do we dream?

Now playing at an inner theater near you: the premiere showing of a sleeping person’s vivid dream. This never-before-seen mental movie features captivating characters wrapped in a plot so original and unlikely, yet so intricate and so seemingly real, that the viewer later marvels at its creation. Waking from a troubling dream, wrenched by its emotions, who among us has not wondered about this weird state of consciousness? How can our brain so creatively, colorfully, and completely construct this alternative, conscious world? In the shadowland between our dreaming and waking consciousness, we may even wonder for a moment which is real.

Discovering the link between REM sleep and dreaming opened a new era in dream research. Instead of relying on someone’s hazy recall hours or days after having a dream, researchers could catch dreams as they happened. They could awaken people during or within 3 minutes after a REM sleep period and hear a vivid account.


What We Dream

REM dreams—“hallucinations of the sleeping mind” (Loftus & Ketcham, 1994, p. 67)—are vivid, emotional, and bizarre. They are unlike daydreams, which tend to involve the familiar details of our life—perhaps picturing ourselves explaining to an instructor why a paper will be late, or replaying in our minds personal encounters we relish or regret. The dreams of REM sleep are so vivid we may confuse them with reality. Awakening from a nightmare, a 4-year-old may be sure there is a bear in the house.

We spend six years of our life in dreams, many of which are anything but sweet. For both women and men, 8 in 10 dreams are marked by at least one negative event or emotion (Domhoff, 2007). People commonly dream of repeatedly failing in an attempt to do something; of being attacked, pursued, or rejected; or of experiencing misfortune (Hall et al., 1982). Dreams with sexual imagery occur less often than you might think. In one study, only 1 dream in 10 among young men and 1 in 30 among young women had sexual overtones (Domhoff, 1996). More commonly, the story line of our dreams—what Sigmund Freud called their manifest content—incorporates traces of previous days’ nonsexual experiences and preoccupations (De Koninck, 2000):

“I do not believe that I am now dreaming, but I cannot prove that I am not.”

Philosopher Bertrand Russell (1872–1970)

“For what one has dwelt on by day, these things are seen in visions of the night.”

Menander of Athens (342–292 B.C.), Fragments

|| Would you suppose that people dream if blind from birth? Studies of blind people in France, Hungary, Egypt, and the United States all found them dreaming of using their nonvisual senses—hearing, touching, smelling, tasting (Buquet, 1988; Taha, 1972; Vekassy, 1977). ||

䉴 After suffering a trauma, people commonly report nightmares (Levin & Nielsen, 2007). One sample of Americans who were recording their dreams during September 2001 reported an increase in threatening dreams following the 9/11 attack (Propper et al., 2007).
䉴 After playing the computer game Tetris for seven hours and then being awakened repeatedly during their first hour of sleep, 3 in 4 people reported experiencing images of the game’s falling blocks (Stickgold et al., 2000).
䉴 People in hunter-gatherer societies often dream of animals; urban Japanese rarely do (Mestel, 1997).
䉴 Compared with nonmusicians, musicians report twice as many dreams of music (Uga et al., 2006).

Sensory stimuli in our sleeping environment may also intrude. A particular odor or the telephone’s ringing may be instantly and ingeniously woven into the dream story. In a classic experiment, William Dement and Edward Wolpert (1958) lightly sprayed cold water on dreamers’ faces. Compared with sleepers who did not get the cold-water treatment, these people were more likely to dream about a waterfall, a leaky roof, or even about being sprayed by someone. Even while in REM sleep, focused on internal stimuli, we maintain some awareness of changes in our external environment.

|| A popular sleep myth: If you dream you are falling and hit the ground (or if you dream of dying), you die. (Unfortunately, those who could confirm these ideas are not around to do so. Some people, however, have had such dreams and are alive to report them.) ||

© 2001 Mariam Henley

MAXINE


“Follow your dreams, except for that one where you’re naked at work.”


So, could we learn a foreign language by hearing it played while we sleep? If only it were so easy. While sleeping we can learn to associate a sound with a mild electric shock (and to react to the sound accordingly). But we do not remember recorded information played while we are soundly asleep (Eich, 1990; Wyatt & Bootzin, 1994). In fact, anything that happens during the 5 minutes just before we fall asleep is typically lost from memory (Roth et al., 1988). This explains why sleep apnea patients, who repeatedly awaken with a gasp and then immediately fall back to sleep, do not recall the episodes. It also explains why dreams that momentarily awaken us are mostly forgotten by morning. To remember a dream, get up and stay awake for a few minutes.

Attributed to Henny Youngman

Why We Dream

8-7 What is the function of dreams?

“When people interpret [a dream] as if it were meaningful and then sell those interpretations, it’s quackery.” Sleep researcher J. Allan Hobson (1995)

latent content according to Freud, the underlying meaning of a dream (as distinct from its manifest content).

Dream theorists have proposed several explanations of why we dream, including these:

To satisfy our own wishes. In 1900, in his landmark book The Interpretation of Dreams, Freud offered what he thought was “the most valuable of all the discoveries it has been my good fortune to make”: Dreams provide a psychic safety valve that discharges otherwise unacceptable feelings. According to Freud, a dream’s manifest (apparent) content is a censored, symbolic version of its latent content, which consists of unconscious drives and wishes that would be threatening if expressed directly. Although most dreams have no overt sexual imagery, Freud nevertheless believed that most adult dreams can be “traced back by analysis to erotic wishes.” Thus, a gun might be a disguised representation of a penis. Freud considered dreams the key to understanding our inner conflicts.

However, his critics say it is time to wake up from Freud’s dream theory, which is a scientific nightmare. Based on the accumulated science, “there is no reason to believe any of Freud’s specific claims about dreams and their purposes,” notes dream researcher William Domhoff (2003). Some contend that even if dreams are symbolic, they could be interpreted any way one wished. Others maintain that dreams hide nothing. A dream about a gun is a dream about a gun. Legend has it that even Freud, who loved to smoke cigars, acknowledged that “sometimes, a cigar is just a cigar.” Freud’s wish-fulfillment theory of dreams has in large part given way to other theories.

To file away memories. Researchers who see dreams as information processing believe that dreams may help sift, sort, and fix the day’s experiences in our memory. As we noted earlier, people tested the next day generally improve on a learned task after a night of memory consolidation.
Even after two nights of recovery sleep, those who have been deprived of both slow-wave and REM sleep don’t do as well as those who sleep undisturbed on their new learning (Stickgold et al., 2000, 2001). People who hear unusual phrases or learn to find hidden visual images before bedtime remember less the next morning if awakened every time they begin REM sleep than they do if awakened during other sleep stages (Empson & Clarke, 1970; Karni & Sagi, 1994).

Brain scans confirm the link between REM sleep and memory. The brain regions that buzz as rats learn to navigate a maze, or as people learn to perform a visual-discrimination task, buzz again during later REM sleep (Louie & Wilson, 2001; Maquet, 2001). So precise are these activity patterns that scientists can tell where in the maze the rat would be if awake.

Some researchers are unpersuaded by these studies (Siegel, 2001; Vertes & Siegel, 2005). They note that memory consolidation may occur independent of dreaming, including during non-REM sleep. But this much seems true: A night of solid sleep (and


Sleep and Dreams MODULE 8

|| Rapid eye movements also stir the liquid behind the cornea; this delivers fresh oxygen to corneal cells, preventing their suffocation. ||

|| Question: Does eating spicy foods cause one to dream more? Answer: Any food that causes you to awaken more increases your chance of recalling a dream (Moorcroft, 2003). ||

FIGURE 8.7 Sleep across the life span As we age, our sleep patterns change. During our first few months, we spend progressively less time in REM sleep. During our first 20 years, we spend progressively less time asleep. (Adapted from Snyder & Scott, 1972.)

dreaming) has an important place in our lives. To sleep, perchance to remember. This is important news for students, many of whom, researcher Robert Stickgold (2000) believes, suffer from a kind of sleep bulimia—binge-sleeping on the weekend. “If you don’t get good sleep and enough sleep after you learn new stuff, you won’t integrate it effectively into your memories,” he warns. That helps explain why secondary students with high grades average 25 minutes more sleep a night and go to bed 40 minutes earlier than their lower-achieving classmates (Wolfson & Carskadon, 1998).

To develop and preserve neural pathways. Some researchers speculate that dreams may also serve a physiological function. Perhaps the brain activity associated with REM sleep provides the sleeping brain with periodic stimulation. This theory makes developmental sense. We know that stimulating experiences develop and preserve the brain’s neural pathways. Infants, whose neural networks are fast developing, spend much of their abundant sleep time in REM sleep (FIGURE 8.7).

To make sense of neural static. Other theories propose that dreams erupt from neural activity spreading upward from the brainstem (Antrobus, 1991; Hobson, 2003, 2004). According to one version—the activation-synthesis theory—this neural activity is random, and dreams are the brain’s attempt to make sense of it. Much as a neurosurgeon can produce hallucinations by stimulating different parts of a patient’s cortex, so can stimulation originating within the brain. These internal stimuli activate brain areas that process visual images, but not the visual cortex area, which receives raw input from the eyes. As Freud might have expected, PET scans of sleeping people also reveal increased activity in the emotion-related limbic system (in the amygdala) during REM sleep. In contrast, frontal lobe regions responsible for inhibition and logical thinking seem to idle, which may explain why our dreams are less inhibited than we are (Maquet et al., 1996). Add the limbic system’s emotional tone to the brain’s visual bursts and—voilà!—we dream. Damage either the limbic system or the visual centers active during dreaming, and dreaming itself may be impaired (Domhoff, 2003).

To reflect cognitive development. Some dream researchers dispute both the Freudian and activation-synthesis theories, preferring instead to see dreams as part of brain maturation and cognitive development (Domhoff, 2003; Foulkes, 1999). For

[FIGURE 8.7 graph: average daily sleep (hours, 0–24) plotted against age, from 1–15 days to 90 years, divided into waking time, REM sleep, and non-REM sleep across infancy, childhood, adolescence, and adulthood and old age; a marked drop in REM occurs during infancy. Photo: Tom Prettyman/PhotoEdit, Inc.]


TABLE 8.2 Dream Theories

Theory: Freud’s wish-fulfillment
Explanation: Dreams provide a “psychic safety valve”—expressing otherwise unacceptable feelings; contain manifest (remembered) content and a deeper layer of latent content—a hidden meaning.
Critical considerations: Lacks any scientific support; dreams may be interpreted in many different ways.

Theory: Information-processing
Explanation: Dreams help us sort out the day’s events and consolidate our memories.
Critical considerations: But why do we sometimes dream about things we have not experienced?

Theory: Physiological function
Explanation: Regular brain stimulation from REM sleep may help develop and preserve neural pathways.
Critical considerations: This may be true, but it does not explain why we experience meaningful dreams.

Theory: Activation-synthesis
Explanation: REM sleep triggers neural activity that evokes random visual memories, which our sleeping brain weaves into stories.
Critical considerations: The individual’s brain is weaving the stories, which still tells us something about the dreamer.

Theory: Cognitive development
Explanation: Dream content reflects dreamers’ cognitive development—their knowledge and understanding.
Critical considerations: Does not address the neuroscience of dreams.

example, prior to age 9, children’s dreams seem more like a slide show and less like an active story in which the dreamer is an actor. Dreams overlap with waking cognition and feature coherent speech. They draw on our concepts and knowledge. TABLE 8.2 compares major dream theories.

Although sleep researchers debate dreams’ function—and some are skeptical that dreams serve any function—there is one thing they agree on: We need REM sleep. Deprived of it by repeatedly being awakened, people return more and more quickly to the REM stage after falling back to sleep. When finally allowed to sleep undisturbed, they literally sleep like babies—with increased REM sleep, a phenomenon called REM rebound. Withdrawing REM-suppressing sleeping medications also increases REM sleep, but with accompanying nightmares. Most other mammals also experience REM rebound, suggesting that the causes and functions of REM sleep are deeply biological. That REM sleep occurs in mammals—and not in animals such as fish, whose behavior is less influenced by learning—also fits the information-processing theory of dreams.

So does this mean that because dreams serve physiological functions and extend normal cognition, they are psychologically meaningless? Not necessarily. Every psychologically meaningful experience involves an active brain. We are once again reminded of a basic principle: Biological and psychological explanations of behavior are partners, not competitors. Dreams may be akin to abstract art—open to more than one meaningful interpretation.

REM rebound the tendency for REM sleep to increase following REM sleep deprivation (created by repeated awakenings during REM sleep).



Review Sleep and Dreams

8-1 How do our biological rhythms influence our daily functioning and our sleep and dreams? Our internal biological rhythms create periodic physiological fluctuations. The circadian rhythm’s 24-hour cycle regulates our daily schedule of sleeping and waking, in part in response to light on the retina, triggering alterations in the level of sleep-inducing melatonin. Shifts in schedules can reset our biological clock.

8-2 What is the biological rhythm of our sleep? We cycle through five sleep stages in about 90 minutes. Leaving the alpha waves of the awake, relaxed stage, we descend into transitional Stage 1 sleep, often with the sensation of falling or floating. Stage 2 sleep (in which we spend the most time) follows about 20 minutes later, with its characteristic sleep spindles. Then follow Stages 3 and 4, together lasting about 30 minutes, with large, slow delta waves. Reversing course, we retrace our path, but with one difference: About an hour after falling asleep, we begin periods of REM (rapid eye movement) sleep. Most dreaming occurs in this fifth stage (also known as paradoxical sleep) of internal arousal but outward paralysis. During a normal night’s sleep, periods of Stages 3 and 4 sleep shorten and REM sleep lengthens.

8-3 How does sleep loss affect us? Sleep deprivation causes fatigue and impairs concentration, creativity, and communication. It also can lead to obesity, hypertension, a suppressed immune system, irritability, and slowed performance (with greater vulnerability to accidents).

8-4 What is sleep’s function? Sleep may have played a protective role in human evolution by keeping people safe during potentially dangerous periods. Sleep also gives the brain time to heal, as it restores and repairs damaged neurons. During sleep, we restore and rebuild memories of the day’s experiences. A good night’s sleep promotes creative problem-solving the next day. Finally, sleep encourages growth; the pituitary gland secretes a growth hormone in Stage 4 sleep.

8-5 What are the major sleep disorders? The disorders of sleep include insomnia (recurring wakefulness), narcolepsy (sudden uncontrollable sleepiness or lapsing into REM sleep), sleep apnea (the stopping of breathing while asleep), night terrors (high arousal and the appearance of being terrified), sleepwalking, and sleeptalking. Sleep apnea mainly targets overweight men. Children are most prone to night terrors, sleepwalking, and sleeptalking.

8-6 What do we dream? We usually dream of ordinary events and everyday experiences, most involving some anxiety or misfortune. Fewer than 10 percent of dreams (and fewer among women) have any sexual content. Most dreams occur during REM sleep; those that happen during non-REM sleep tend to be vague, fleeting images.

8-7 What is the function of dreams? There are five major views of the function of dreams. (1) Freudian: to provide a safety valve, with manifest content (or story line) acting as a censored version of latent content (some underlying meaning that gratifies our unconscious wishes). (2) The information-processing perspective: to sort out the day’s experiences and fix them in memory. (3) Brain stimulation: to preserve neural pathways in the brain. (4) The activation-synthesis explanation: to make sense of the neural static our brain tries to weave into a story line. (5) The brain-maturation/cognitive-development perspective: Dreams represent the dreamer’s level of development, knowledge, and understanding. Most sleep theorists agree that REM sleep and its associated dreams serve an important function, as shown by the REM rebound that occurs following REM deprivation.

Terms and Concepts to Remember

circadian [ser-KAY-dee-an] rhythm, p. 91
REM sleep, p. 92
alpha waves, p. 93
sleep, p. 94
hallucinations, p. 94
delta waves, p. 94
insomnia, p. 100
narcolepsy, p. 101
sleep apnea, p. 101
night terrors, p. 102
dream, p. 103
manifest content, p. 103
latent content, p. 104
REM rebound, p. 106

Test Yourself 1. Are you getting enough sleep? What might you ask yourself to answer this question? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. In some countries, such as Britain, the school day for teenagers runs from about 9:00 A.M. to 4:00 P.M. In other countries, such as the United States, the teen school day often runs from 8:00 A.M. to 3:00 P.M. or even 7:30 A.M. to 2:30 P.M. Early to rise isn’t making kids wise, say critics—it’s making them sleepy. For optimal alertness and well-being, teens need 8 to 9 hours of sleep a night. So, should early-start schools move to a later start time, even if it requires buying more buses or switching start times with elementary schools? Or is this impractical, and would it do little to remedy the tired-teen problem?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 9

Facts and Falsehoods
Explaining the Hypnotized State

Hypnosis 9-1 What is hypnosis, and what powers does a hypnotist have over a hypnotized subject? Imagine you are about to be hypnotized. The hypnotist invites you to sit back, fix your gaze on a spot high on the wall, and relax. In a quiet voice the hypnotist suggests, “Your eyes are growing tired. . . . Your eyelids are becoming heavy . . . now heavier and heavier. . . . They are beginning to close. . . . You are becoming more deeply relaxed. . . . Your breathing is now deep and regular. . . . Your muscles are becoming more and more relaxed. Your whole body is beginning to feel like lead.” After a few minutes of this hypnotic induction, you may experience hypnosis. When the hypnotist suggests, “Your eyelids are shutting so tight that you cannot open them even if you try,” it may indeed seem beyond your control to open your eyelids. Told to forget the number 6, you may be puzzled when you count 11 fingers on your hands. Invited to smell a sensuous perfume that is actually ammonia, you may linger delightedly over its pungent odor. Told that you cannot see a certain object, such as a chair, you may indeed report that it is not there, although you manage to avoid the chair when walking around. But is hypnosis really an altered state of consciousness? Let’s start with some agreed-upon facts.

Facts and Falsehoods

Those who study hypnosis have agreed that its power resides not in the hypnotist but in the subject’s openness to suggestion (Bowers, 1984). Hypnotists have no magical mind-control power; they merely engage people’s ability to focus on certain images or behaviors. But how open to suggestions are we?

Can Anyone Experience Hypnosis?

hypnosis a social interaction in which one person (the hypnotist) suggests to another (the subject) that certain perceptions, feelings, thoughts, or behaviors will spontaneously occur. posthypnotic suggestion a suggestion, made during a hypnosis session, to be carried out after the subject is no longer hypnotized; used by some clinicians to help control undesired symptoms and behaviors.


To some extent, we are all open to suggestion. When people stand upright with their eyes closed and are told that they are swaying back and forth, most will indeed sway a little. In fact, postural sway is one of the items assessed on the Stanford Hypnotic Susceptibility Scale. People who respond to such suggestions without hypnosis are the same people who respond with hypnosis (Kirsch & Braffman, 2001). After giving a brief hypnotic induction, a hypnotist suggests a series of experiences ranging from easy (your outstretched arms will move together) to difficult (with eyes open, you will see a nonexistent person). Highly hypnotizable people—say, the 20 percent who can carry out a suggestion not to smell or react to a bottle of ammonia held under their nose—are those who easily become deeply absorbed in imaginative activities (Barnier & McConkey, 2004; Silva & Kirsch, 1992). Typically, they have rich fantasy lives and become absorbed in the imaginary events of a novel or movie. (Perhaps you can recall being riveted by a movie into a trancelike state, oblivious to the people or noise surrounding you.) Many researchers refer to hypnotic “susceptibility” as hypnotic ability—the ability to focus attention totally on a task, to become imaginatively absorbed in it, to entertain fanciful possibilities. Indeed, anyone who can turn attention inward and imagine is able to experience some degree of hypnosis—because that’s what hypnosis is. And virtually anyone will experience hypnotic responsiveness if led to expect it. Imagine being asked to stare at a



high spot and then hearing that “your eyes are growing tired . . . your eyelids are becoming heavy.” With such strain, anyone’s eyes would get tired. (Try looking up for 30 seconds.) But you likely would attribute your heavy eyelids to the hypnotist’s abilities and then become more open to other suggestions.

Can Hypnosis Enhance Recall of Forgotten Events? Can hypnotic procedures enable people to recall kindergarten classmates? To retrieve forgotten or suppressed details of a crime? Should testimony obtained under hypnosis be admissible in court? Most people wrongly believe that our experiences are all “in there,” recorded in our brain and available for recall if only we can break through our own defenses (Loftus, 1980). In one community survey, 3 in 4 people agreed with the inaccurate statement that hypnosis enables people to “recover accurate memories as far back as birth” (Johnson & Hauck, 1999). But 60 years of research disputes such claims of age regression—the supposed ability to relive childhood experiences. Age-regressed people act as they believe children would, but they typically miss the mark by outperforming real children of the specified age (Silverman & Retzlaff, 1986). They may, for example, feel childlike and print much as they know a 6-year-old would. But they sometimes do so with perfect spelling and typically without any change in their adult brain waves, reflexes, and perceptions. “Hypnotically refreshed” memories combine fact with fiction. Without either person being aware of what is going on, a hypnotist’s hints—“Did you hear loud noises?”—can plant ideas that become the subject’s pseudomemory. Thus, American, Australian, and British courts generally ban testimony from witnesses who have been hypnotized (Druckman & Bjork, 1994; Gibson, 1995; McConkey, 1995). Other striking examples of memories created under hypnosis come from the thousands of people who since 1980 have reported being abducted by UFOs. Most such reports have come from people who are predisposed to believe in aliens, are highly hypnotizable, and have undergone hypnosis (Newman & Baumeister, 1996; Nickell, 1996).

“Hypnosis is not a psychological truth serum and to regard it as such has been a source of considerable mischief.” Researcher Kenneth Bowers (1987)

Can Hypnosis Force People to Act Against Their Will?

Researchers have induced hypnotized people to perform an apparently dangerous act: plunging one hand briefly into fuming “acid,” then throwing the “acid” in a researcher’s face (Orne & Evans, 1965). Interviewed a day later, these people exhibited no memory of their acts and emphatically denied they would ever follow such orders. Had hypnosis given the hypnotist a special power to control others against their will? To find out, researchers Martin Orne and Frederick Evans unleashed that enemy of so many illusory beliefs—the control group. Orne asked other individuals to pretend they were hypnotized. Laboratory assistants, unaware that those in the experiment’s control group had not been hypnotized, treated both groups the same. The result? All the unhypnotized participants (perhaps believing that the laboratory context assured safety) performed the same acts as those who were hypnotized.

Such studies illustrate a principle that social psychologist Stanley Milgram (1974) demonstrated: An authoritative person in a legitimate context can induce people—hypnotized or not—to perform some unlikely acts. Hypnosis researcher Nicholas Spanos (1982) put it directly: “The overt behaviors of hypnotic subjects are well within normal limits.”

Can Hypnosis Be Therapeutic? Hypnotherapists try to help patients harness their own healing powers (Baker, 1987). Posthypnotic suggestions have helped alleviate headaches, asthma, and stressrelated skin disorders. One woman, who for more than 20 years suffered from open

“It wasn’t what I expected. But facts are facts, and if one is proved to be wrong, one must just be humble about it and start again.” Agatha Christie’s Miss Marple



sores all over her body, was asked to imagine herself swimming in shimmering, sunlit liquids that would cleanse her skin, and to experience her skin as smooth and unblemished. Within three months her sores had disappeared (Bowers, 1984). In one statistical digest of 18 studies, the average client whose therapy was supplemented with hypnosis showed greater improvement than 70 percent of other therapy patients (Kirsch et al., 1995, 1996). Hypnosis seemed especially helpful for the treatment of obesity. However, drug, alcohol, and smoking addictions have not responded well to hypnosis (Nash, 2001). In controlled studies, hypnosis speeds the disappearance of warts, but so do the same positive suggestions given without hypnosis (Spanos, 1991, 1996).

Can Hypnosis Alleviate Pain? Yes, hypnosis can relieve pain (Druckman & Bjork, 1994; Patterson, 2004). When unhypnotized people put their arm in an ice bath, they feel intense pain within 25 seconds. When hypnotized people do the same after being given suggestions to feel no pain, they indeed report feeling little pain. As some dentists know, even light hypnosis can reduce fear, thus reducing hypersensitivity to pain. Nearly 10 percent of us can become so deeply hypnotized that we can even undergo major surgery without anesthesia. Half of us can gain at least some pain relief from hypnosis. In surgical experiments, hypnotized patients have required less medication, recovered sooner, and left the hospital earlier than unhypnotized controls, thanks to the inhibition of pain-related brain activity (Askay & Patterson, 2007; Spiegel, 2007). The surgical use of hypnosis has flourished in Europe, where one Belgian medical team has performed more than 5000 surgeries with a combination of hypnosis, local anesthesia, and a mild sedative (Song, 2006).

Explaining the Hypnotized State

9-2 Is hypnosis an extension of normal consciousness or an altered state? We have seen that hypnosis involves heightened suggestibility. We have also seen that hypnotic procedures do not endow a person with special powers. But they can sometimes help people overcome stress-related ailments and cope with pain. So, just what is hypnosis?

Hypnosis as a Social Phenomenon Some researchers believe that hypnotic phenomena reflect the workings of normal consciousness and the power of social influence (Lynn et al., 1990; Spanos & Coe, 1992). They point out how powerfully our interpretations and attentional spotlight influence our ordinary perceptions. Does this mean that people are consciously faking hypnosis? No—like actors caught up in their roles, subjects begin to feel and behave in ways appropriate for “good hypnotic subjects.” The more they like and trust the hypnotist, the more they allow that person to direct their attention and fantasies (Gfeller et al., 1987). “The hypnotist’s ideas become the subject’s thoughts,” explained Theodore Barber (2000), “and the subject’s thoughts produce the hypnotic experiences and behaviors.” If told to scratch their ear later when they hear the word psychology, subjects will likely do so only if they think the experiment is still under way (and scratching is therefore expected). If an experimenter eliminates their motivation for acting hypnotized—by stating that hypnosis reveals their “gullibility”—subjects become unresponsive. Based on such findings, advocates of the social influence theory contend that hypnotic phenomena—like the behaviors associated with other supposed altered states,



such as dissociative identity disorder (“multiple personalities”) and spirit or demon possession—are an extension of everyday social behavior, not something unique to hypnosis (Spanos, 1994, 1996).

dissociation a split in consciousness, which allows some thoughts and behaviors to occur simultaneously with others.

Hypnosis as Divided Consciousness

Most hypnosis researchers grant that normal social and cognitive processes play a part in hypnosis, but they nevertheless believe hypnosis is more than inducing someone to play the role of “good subject.” For one thing, hypnotized subjects will sometimes carry out suggested behaviors on cue, even when they believe no one is watching (Perugini et al., 1998). Moreover, distinctive brain activity accompanies hypnosis. When deeply hypnotized people in one experiment were asked to imagine a color, areas of their brain lit up as if they were really seeing the color. Mere imagination had become—to the hypnotized person’s brain—a compelling hallucination (Kosslyn et al., 2000). Another experiment invited hypnotizable or nonhypnotizable people to say the color of letters—an easy task that slows if, say, green letters form the conflicting word RED (Raz et al., 2005). When given a suggestion to focus on the color and to perceive the letters as irrelevant gibberish, easily hypnotized people became much less slowed by the word-color conflict. (Brain areas that decode words and detect conflict remained inactive.)

These results would not have surprised famed researcher Ernest Hilgard (1986, 1992), who believed hypnosis involves not only social influence but also a special state of dissociation—a split between different levels of consciousness. Hilgard viewed hypnotic dissociation as a vivid form of everyday mind splits—similar to doodling while listening to a lecture or keying in the end of a sentence while starting a conversation. Hilgard felt that when, for example, hypnotized people lower their arm into an ice bath, as in FIGURE 9.1, hypnosis dissociates the sensation of the pain stimulus (of which the subjects are still aware) from the emotional suffering that defines their experience of pain. The ice water therefore feels cold—very cold—but not painful.
Hypnotic pain relief may also result from another form of dual processing—selective attention—as when an injured athlete, caught up in the competition, feels little or no pain until the game ends. Support for this view comes from PET scans showing that hypnosis reduces brain activity in a region that processes painful stimuli, but not in the sensory cortex, which receives the raw sensory input (Rainville et al., 1997). Hypnosis does not block sensory input, but it may block our attention to those stimuli.

“The total possible consciousness may be split into parts which co-exist but mutually ignore each other.” William James, Principles of Psychology, 1890

FIGURE 9.1 Dissociation or role-playing? This hypnotized woman tested by Ernest Hilgard exhibited no pain when her arm was placed in an ice bath. But asked to press a key if some part of her felt the pain, she did so. To Hilgard, this was evidence of dissociation, or divided consciousness. Proponents of social influence theory, however, maintain that people responding this way are caught up in playing the role of “good subject.” (Photo: Courtesy of News and Publications Service, Stanford University)

[Figure diagram: Attention is diverted from a painful ice bath. How? Divided-consciousness theory: Hypnosis has caused a split in awareness. Social influence theory: The subject is so caught up in the hypnotized role that she ignores the cold.]



Although the divided-consciousness theory of hypnosis is controversial, this much seems clear: There is, without doubt, much more to thinking and acting than we are conscious of. Our information processing, which starts with selective attention, is divided into simultaneous conscious and nonconscious realms. In hypnosis as in life, much of our behavior occurs on autopilot. We have two-track minds.

Yet, there is also little doubt that social influences do play an important role in hypnosis. So, might the two views—social influence and divided consciousness—be bridged? Researchers John Kihlstrom and Kevin McConkey (1990) believe there is no contradiction between the two approaches, which are converging toward a unified account of hypnosis. Hypnosis, they suggest, is an extension both of normal principles of social influence and of everyday dissociations between our conscious awareness and our automatic behaviors. Hypnosis researchers are moving beyond the “hypnosis is social influence” versus “hypnosis is divided consciousness” debate (Killeen & Nash, 2003; Woody & McConkey, 2003). They are instead exploring how brain activity, attention, and social influences interact to affect hypnotic phenomena (FIGURE 9.2).

FIGURE 9.2 Levels of analysis for hypnosis Using a biopsychosocial approach, researchers explore hypnosis from complementary perspectives.
Biological influences: • distinctive brain activity • unconscious information processing • dissociation between normal sensations and conscious awareness
Psychological influences: • focused attention • expectations • heightened suggestibility
Social-cultural influences: • presence of an authoritative person in legitimate context • role-playing “good subject”

Review Hypnosis

9-1 What is hypnosis, and what powers does a hypnotist have over a hypnotized subject? Hypnosis is a social interaction in which one person suggests to another that certain perceptions, feelings, thoughts, or behaviors will spontaneously occur. Hypnotized people are no more vulnerable to acting against their will than unhypnotized people are, and hypnosis does not enhance recall of forgotten events (it may even evoke false memories). Hypnotized people, like unhypnotized people, may perform unlikely acts when told to do so by an authoritative person. Posthypnotic suggestions have helped people harness their own healing powers but have not been very effective in treating addiction. Hypnosis can help relieve pain.

9-2 Is hypnosis an extension of normal consciousness or an altered state? Many psychologists believe that hypnosis is a form of normal social influence and that hypnotized people act out the role of “good subject.” Other psychologists view hypnosis as a dissociation—a split between normal sensations and conscious awareness. A unified account of hypnosis melds these two views and studies how brain activity, attention, and social influences interact in hypnosis.

Terms and Concepts to Remember

hypnosis, p. 108
posthypnotic suggestion, p. 109
dissociation, p. 111

Test Yourself 1. When is the use of hypnosis potentially harmful, and when can hypnosis be used to help? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. You’ve read about two examples of dissociated consciousness: talking while typing, and doodling while listening to a lecture. Can you think of another example that you have experienced?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 10 Drugs and Consciousness

Dependence and Addiction
Psychoactive Drugs
Influences on Drug Use

Dependence and Addiction

© 1992 by Sidney Harris.

There is little dispute that some drugs alter consciousness. Psychoactive drugs are chemicals that change perceptions and moods through their actions at the neural synapses. Let’s imagine a day in the life of a legal-drug user. It begins with a wake-up latte. By midday, several cigarettes have calmed frazzled nerves before an appointment at the plastic surgeon’s office for wrinkle-smoothing Botox injections. A diet pill before dinner helps stem the appetite, and its stimulating effects can later be partially offset with a glass of wine and two Tylenol PMs. And if performance needs enhancing, there are beta blockers for onstage performers, Viagra for middle-aged men, hormone-delivering “libido patches” for middle-aged women, and Adderall for students hoping to focus their concentration. Before drifting off into REM-depressed sleep, our hypothetical drug user is dismayed by news reports of pill-sharing, pill-popping college students and of celebrity deaths (Anna Nicole Smith, Heath Ledger) attributed to accidental overdoses of lethal drug combinations.

“Just tell me where you kids got the idea to take so many drugs.”

10-1 What are tolerance, dependence, and addiction, and what are some common misconceptions about addiction? Why might a person who rarely drinks alcohol get tipsy on one can of beer, but an experienced drinker show few effects until the second six-pack? Continued use of alcohol and other psychoactive drugs produces tolerance. As the user’s brain adapts its chemistry to offset the drug effect (a process called neuroadaptation), the user requires larger and larger doses to experience the same effect (FIGURE 10.1). Despite the connotations of alcohol “tolerance,” the brain, heart, and liver of a person addicted to alcohol suffer damage from the excessive alcohol being “tolerated.” Users who stop taking psychoactive drugs may experience the undesirable side effects of withdrawal. As the body responds to the drug’s absence, the user may feel physical pain and intense cravings, indicating physical dependence. People can also

FIGURE 10.1 Drug tolerance In most cases, with repeated exposure to a psychoactive drug, the drug’s effect lessens. Thus, it takes bigger doses to get the desired effect. (The graph plots drug effect against drug dose: a big effect at first exposure; after repeated exposure, more drug is needed to produce the same effect.)

psychoactive drug a chemical substance that alters perceptions and moods.

tolerance the diminishing effect with regular use of the same dose of a drug, requiring the user to take larger and larger doses before experiencing the drug’s effect.

withdrawal the discomfort and distress that follow discontinuing the use of an addictive drug.

physical dependence a physiological need for a drug, marked by unpleasant withdrawal symptoms when the drug is discontinued.


develop psychological dependence, particularly for stress-relieving drugs. Such drugs, although not physically addictive, can become an important part of the user’s life, often as a way of relieving negative emotions. With either physical or psychological dependence, the user’s primary focus may be obtaining and using the drug.

psychological dependence a psychological need to use a drug, such as to relieve negative emotions.

addiction compulsive drug craving and use, despite adverse consequences.

Misconceptions About Addiction An addiction is a compulsive craving for a substance despite adverse consequences and often with physical symptoms such as aches, nausea, and distress following sudden withdrawal. Worldwide, reports the World Health Organization (2008), 90 million people suffer from such problems related to alcohol and other drugs. In recent pop psychology, the supposedly irresistible seduction of addiction has been extended to cover many behaviors formerly considered bad habits or even sins. Has the concept been stretched too far? Are addictions as irresistible as commonly believed? Many drug researchers regard the following three common beliefs about addiction as myths.

|| The odds of getting hooked after trying various drugs:
Marijuana: 9 percent
Alcohol: 15 percent
Heroin: 23 percent
Tobacco: 32 percent
Source: National Academy of Sciences, Institute of Medicine (Brody, 2003). ||

“About 70 percent of Americans have tried illicit drugs, but . . . only a few percent have done so in the last month. . . . Past age 35, the casual use of illegal drugs virtually ceases.” Having sampled the pleasures and their aftereffects, “most people eventually walk away.” Neuropsychologist Michael Gazzaniga (1997)

Myth 1. Addictive drugs quickly corrupt; for example, morphine taken to control pain is powerfully addictive and often leads to heroin abuse. People given morphine to control pain rarely develop the cravings of the addict who uses morphine as a mood-altering drug (Melzack, 1990). But some people—perhaps 10 percent—do indeed have a hard time using a psychoactive drug in moderation or stopping altogether. Even so, controlled, occasional users of drugs such as alcohol and marijuana far outnumber those addicted to these substances (Gazzaniga, 1988; Siegel, 1990). “Even for a very addictive drug like cocaine, only 15 to 16 percent of people become addicted within 10 years of first use,” report Terry Robinson and Kent Berridge (2003). Much the same is true for rats, only some of which become compulsively addicted to cocaine (Deroche-Garmonet et al., 2004).

Myth 2. Addictions cannot be overcome voluntarily; therapy is required. Addictions can be powerful, and some addicts do benefit from treatment programs. Alcoholics Anonymous, for example, has supported many people in overcoming their alcohol dependence. But the recovery rates of treated and untreated groups differ less than one might suppose. Helpful as therapy or group support may be, people often recover on their own. Moreover, viewing addiction as a disease, as diabetes is a disease, can undermine self-confidence and the will to change cravings that, without treatment, “one cannot fight.” And that, critics say, would be unfortunate, for many people do voluntarily stop using addictive drugs, without treatment. Most of America’s 41 million ex-smokers kicked the habit on their own, usually after prior failed efforts or treatments.

Myth 3. We can extend the concept of addiction to cover not just drug dependencies, but a whole spectrum of repetitive, pleasure-seeking behaviors. We can, and we have, but should we?
The addiction-as-disease-needing-treatment idea has been suggested for a host of driven behaviors, including too much eating, shopping, exercise, sex, gambling, and work. Initially, we may use the term metaphorically (“I’m a science fiction addict”), but if we begin taking the metaphor as reality, addiction can become an all-purpose excuse. Those who embezzle to feed their “gambling addiction,” surf the Web half the night to satisfy their “Internet addiction,” or abuse or betray to indulge their “sex addiction” can then explain away their behavior as an illness. Sometimes, though, behaviors such as gambling, playing video games, or surfing the Internet do become compulsive and dysfunctional, much like abusive drug taking (Griffiths, 2001; Hoeft et al., 2008). Some Internet users, for example, do display an apparent inability to resist logging on, and staying on, even when this excessive use


impairs their work and relationships (Ko et al., 2005). So, there may be justification for stretching the addiction concept to cover certain social behaviors. Debates over the addiction-as-disease model continue.

depressants drugs (such as alcohol, barbiturates, and opiates) that reduce neural activity and slow body functions.

Psychoactive Drugs The three major categories of psychoactive drugs—depressants, stimulants, and hallucinogens—all do their work at the brain’s synapses. They stimulate, inhibit, or mimic the activity of the brain’s own chemical messengers, the neurotransmitters. Our culturally influenced expectations also play a role in the way these drugs affect us (Ward, 1994). If one culture assumes that a particular drug produces euphoria (or aggression or sexual arousal) and another does not, each culture may find its expectations fulfilled.


Depressants 10-2 What are depressants, and what are their effects? Depressants are drugs such as alcohol, barbiturates (tranquilizers), and opiates that calm neural activity and slow body functions.

Alcohol True or false? In large amounts, alcohol is a depressant; in small amounts, it is a stimulant. False. Low doses of alcohol may, indeed, enliven a drinker, but they do so by slowing brain activity that controls judgment and inhibitions. Alcohol lowers our inhibitions, slows neural processing, disrupts memory formation, and reduces self-awareness.


Disinhibition Alcohol is an equal-opportunity drug: It increases harmful tendencies— as when angered people become aggressive after drinking. And it increases helpful tendencies—as when tipsy restaurant patrons leave extravagant tips (M. Lynn, 1988). The urges you would feel if sober are the ones you will more likely act upon when intoxicated. Slowed Neural Processing Low doses of alcohol relax the drinker by slowing sympathetic nervous system activity. In larger doses, alcohol can become a staggering problem: Reactions slow, speech slurs, skilled performance deteriorates. Paired with sleep



Dangerous disinhibition Alcohol consumption leads to feelings of invincibility, which become especially dangerous behind the wheel of a car, such as this one totaled by a teenage drunk driver. This Colorado University Alcohol Awareness Week exhibit prompted many students to post their own anti-drinking pledges (white flags).


deprivation, alcohol is a potent sedative. (Although either sleep deprivation or drinking can put a driver at risk, their combination is deadlier yet.) These physical effects, combined with lowered inhibitions, contribute to alcohol’s worst consequences—the several hundred thousand lives claimed worldwide each year in alcohol-related accidents and violent crime. Car accidents occur despite most drinkers’ belief (when sober) that driving under the influence of alcohol is wrong and despite their insisting that they would not do so. Yet, as blood-alcohol levels rise and moral judgments falter, people’s qualms about drinking and driving lessen. Virtually all will drive home from a bar, even if given a breathalyzer test and told they are intoxicated (Denton & Krebs, 1990; MacDonald et al., 1995).

Memory Disruption Alcohol also disrupts the processing of recent experiences into long-term memories. Thus, heavy drinkers may not recall people they met the night before or what they said or did while intoxicated. These blackouts result partly from the way alcohol suppresses REM sleep, which helps fix the day’s experiences into permanent memories. The effects of heavy drinking on the brain and cognition can be long-term. In rats, at a development period corresponding to human adolescence, binge-drinking diminishes the genesis of nerve cells, impairs the growth of synaptic connections, and contributes to nerve cell death (Crews et al., 2006, 2007). MRI scans show another way prolonged and excessive drinking can affect cognition (FIGURE 10.2). It can shrink the brain, especially in women, who have less of a stomach enzyme that digests alcohol (Wuethrich, 2001). Girls and young women can also become addicted to alcohol more quickly than boys and young men do, and they are at risk for lung, brain, and liver damage at lower consumption levels (CASA, 2003).

FIGURE 10.2 Alcohol dependence shrinks the brain MRI scans show brain shrinkage in women with alcohol dependence (left) compared with women in a control group (right).

Reduced Self-Awareness and Self-Control Alcohol not only impairs judgment and memory, it also reduces self-awareness (Hull et al., 1986). This may help explain why people who want to suppress their awareness of failures or shortcomings are more likely to drink than are those who feel good about themselves. Losing a business deal, a game, or a romantic partner sometimes elicits a drinking binge. Excess drinking is especially common when people with low self-esteem experience pain in a romantic relationship (DeHart et al., 2008). By focusing attention on the immediate situation and away from any future consequences, alcohol also lessens impulse control (Steele & Josephs, 1990). In surveys of rapists, more than half acknowledge drinking before committing their offense (Seto & Barbaree, 1995). Expectancy Effects As with other psychoactive drugs, alcohol’s behavioral effects stem not only from its alteration of brain chemistry but also from the user’s expectations. When people believe that alcohol affects social behavior in certain ways, and believe, rightly or wrongly, that they have been drinking alcohol, they will behave accordingly (Leigh, 1989). David Abrams and Terence Wilson (1983) demonstrated this in a now-classic experiment. They gave Rutgers University men who volunteered for a study on “alcohol and sexual stimulation” either an alcoholic or a nonalcoholic drink. (Both had strong tastes that masked any alcohol.) In each group, half the participants thought they were drinking alcohol and half thought they were not. After watching an erotic movie clip, the men who thought they had consumed alcohol were more likely to report having strong sexual fantasies and feeling guilt-free. Being able to attribute their sexual responses to alcohol released their inhibitions—whether they actually had drunk alcohol or not. If, as commonly believed, liquor is the quicker pick-her-upper, the effect lies partly in that powerful sex organ, the mind. 
Alcohol + Sex = The Perfect Storm Alcohol’s effects on self-control and social expectations often converge in sexual situations. More than 600 studies have explored


the link between drinking and risky sex, with “the overwhelming majority” finding the two correlated (Cooper, 2006). But of course correlations do not come with causal arrows attached. In this case, three factors appear to influence the correlation.

1. Underlying “third variables,” such as sensation-seeking and peer influences, simultaneously push people toward both drinking and risky sex.
2. The desire for sex leads people to drink and to get their partners to drink. Sexually coercive college men, for example, may lower their dates’ sexual inhibitions by getting them to drink (Abbey, 1991; Mosher & Anderson, 1986).
3. Drinking disinhibits, and when sexually aroused, men become more disposed to sexual aggression, and men and women more disposed to casual sex (Davis et al., 2006; Grello et al., 2006).

University women under alcohol’s influence find an attractive but sexually promiscuous man a more appealing potential date than they do when sober. It seems, surmise Sheila Murphy and her colleagues (1998), “that when people have been drinking, the restraining forces of reason may weaken and yield under the pressure of their desires.”

|| A University of Illinois campus survey showed that before sexual assaults, 80 percent of the male assailants and 70 percent of the female victims had been drinking (Camper, 1990). Another survey of 89,874 American collegians found alcohol or drugs involved in 79 percent of unwanted sexual intercourse experiences (Presley et al., 1997). ||

Barbiturates The barbiturate drugs, or tranquilizers, mimic the effects of alcohol. Because they depress nervous system activity, barbiturates such as Nembutal, Seconal, and Amytal are sometimes prescribed to induce sleep or reduce anxiety. In larger doses, they can lead to impaired memory and judgment or even death. If combined with alcohol—as sometimes happens when people take a sleeping pill after an evening of heavy drinking—the total depressive effect on body functions can be lethal.

Opiates The opiates—opium and its derivatives, morphine and heroin—also depress neural functioning. Pupils constrict, breathing slows, and lethargy sets in, as blissful pleasure replaces pain and anxiety. But for this short-term pleasure the user may pay a long-term price: a gnawing craving for another fix, a need for progressively larger doses, and the extreme discomfort of withdrawal. When repeatedly flooded with an artificial opiate, the brain eventually stops producing its own opiates, the endorphins. If the artificial opiate is then withdrawn, the brain lacks the normal level of these painkilling neurotransmitters. Those who cannot or choose not to tolerate this state may pay an ultimate price—death by overdose.

Stimulants 10-3 What are stimulants, and what are their effects? Stimulants such as caffeine and nicotine temporarily excite neural activity and arouse body functions. People use these substances to stay awake, lose weight, or boost mood or athletic performance. This category of drugs also includes amphetamines, and the even more powerful cocaine, Ecstasy, and methamphetamine (“speed”), which is chemically related to its parent drug, amphetamine (NIDA, 2002, 2005). All strong stimulants increase heart and breathing rates and cause pupils to dilate, appetite to diminish (because blood sugar increases), and energy and self-confidence to rise. And, as with other drugs, the benefits of stimulants come with a price. These substances can be addictive and may induce an aftermath crash into fatigue, headaches, irritability, and depression (Silverman et al., 1992).

Methamphetamine Methamphetamine has even greater effects, which can include eight hours or so of heightened energy and euphoria. The drug triggers the release of the neurotransmitter dopamine, which stimulates brain cells that enhance energy and mood. In response

barbiturates drugs that depress the activity of the central nervous system, reducing anxiety but impairing memory and judgment.

opiates opium and its derivatives, such as morphine and heroin; they depress neural activity, temporarily lessening pain and anxiety.

stimulants drugs (such as caffeine, nicotine, and the more powerful amphetamines, cocaine, and Ecstasy) that excite neural activity and speed up body functions.

amphetamines drugs that stimulate neural activity, causing speeded-up body functions and associated energy and mood changes.

methamphetamine a powerfully addictive drug that stimulates the central nervous system, with speeded-up body functions and associated energy and mood changes; over time, appears to reduce baseline dopamine levels.


to a typical amphetamine dose, men show a higher rate of dopamine release than do women, which helps explain their higher addiction rate (Munro et al., 2006). Over time, methamphetamine may reduce baseline dopamine levels, leaving the user with permanently depressed functioning. This drug is highly addictive, and its possible aftereffects include irritability, insomnia, hypertension, seizures, social isolation, depression, and occasional violent outbursts (Homer et al., 2008). The British government now classifies crystal meth, the highly addictive crystallized form of methamphetamine, alongside cocaine and heroin as one of the most dangerous drugs (BBC, 2006).

Dramatic drug-induced decline This woman’s methamphetamine addiction led to obvious physical changes. Her decline is evident in these two photos, taken at age 36 (left) and, after four years of addiction, at age 40 (right).


Caffeine Caffeine, the world’s most widely consumed psychoactive substance, can now be found not only in coffee, tea, and soda but also in fruit juices, mints, energy drinks, bars, and gels—and even in soap. Coffees and teas vary in their caffeine content, with a cup of drip coffee surprisingly having more caffeine than a shot of espresso, and teas having less. A mild dose of caffeine typically lasts three or four hours, which—if taken in the evening—may be long enough to impair sleep. Like other drugs, caffeine used regularly and in heavy doses produces tolerance: Its stimulating effects lessen. And discontinuing heavy caffeine intake often produces withdrawal symptoms, including fatigue and headache.

Nicotine

“There is an overwhelming medical and scientific consensus that cigarette smoking causes lung cancer, heart disease, emphysema, and other serious diseases in smokers. Smokers are far more likely to develop serious diseases, like lung cancer, than nonsmokers.” Philip Morris Companies Inc., 1999

|| Smoke a cigarette and nature will charge you 12 minutes—ironically, just about the length of time you spend smoking it (Discover, 1996). ||

Imagine that cigarettes were harmless—except, once in every 25,000 packs, an occasional innocent-looking one is filled with dynamite instead of tobacco. Not such a bad risk of having your head blown off. But with 250 million packs a day consumed worldwide, we could expect more than 10,000 gruesome daily deaths (more than three times the 9/11 fatalities each and every day)—surely enough to have cigarettes banned everywhere.1 The lost lives from these dynamite-loaded cigarettes approximate those from today’s actual cigarettes. Each year throughout the world, tobacco kills nearly 5.4 million of its 1.3 billion customers, reports the World Health Organization (WHO). (Imagine the outrage if terrorists took down an equivalent of 25 loaded jumbo jets today, let alone tomorrow and every day thereafter.) And by 2030, annual deaths will increase to 8 million, according to WHO predictions. That means that 1 billion (say that number slowly) twenty-first-century people may be killed by tobacco (WHO, 2008). A teen-to-the-grave smoker has a 50 percent chance of dying from the habit, and the death is often agonizing and premature, as the Philip Morris company acknowledged in 2001. Responding to Czech Republic complaints about the health-care costs of tobacco, Philip Morris reassured the Czechs that there was actually a net “healthcare cost savings due to early mortality” and the resulting savings on pensions and elderly housing (Herbert, 2001).

1This analogy, adapted here with world-based numbers, was suggested by mathematician Sam Saunders, as reported by K. C. Cole (1998).


Nic-A-Teen “A cigarette in the hands of a Hollywood star on screen is a gun aimed at a 12- or 14-year-old.” Screenwriter Joe Eszterhas, 2002


Eliminating smoking would increase life expectancy more than any other preventive measure. Why, then, do so many people smoke? Smoking usually begins during early adolescence. (If you are in college or university, and if by now the cigarette manufacturers haven’t attracted your business, they almost surely never will.) Adolescents, self-conscious and often thinking the world is watching their every move, are vulnerable to smoking’s allure. They may first light up to imitate glamorous celebrities, or to project a mature image, or to get the social reward of being accepted by other smokers (Cin et al., 2007; Tickle et al., 2006). Mindful of these tendencies, cigarette companies have effectively modeled smoking with themes that appeal to youths: sophistication, independence, adventure-seeking, social approval. Typically, teens who start smoking also have friends who smoke, who suggest its pleasures, and who offer them cigarettes (Eiser, 1985; Evans et al., 1988; Rose et al., 1999). Among teens whose parents and best friends are nonsmokers, the smoking rate is close to zero (Moss et al., 1992; also see FIGURE 10.3). Those addicted to nicotine find it very hard to quit because tobacco products are as powerfully and quickly addictive as heroin and cocaine. As with other addictions, a smoker becomes dependent; each year fewer than one of every seven smokers who want to quit will do so. Smokers also develop tolerance, eventually needing larger and larger doses to get the same effect. Quitting causes nicotine-withdrawal symptoms, including craving, insomnia, anxiety, and irritability. Even attempts to quit within the first weeks of smoking often fail as nicotine cravings set in (DiFranza, 2008). And all it takes to relieve this aversive state is a cigarette—a portable nicotine dispenser. Nicotine, like other addictive drugs, is not only compulsive and mood-altering, it is also reinforcing. 
Smoking delivers its hit of nicotine within 7 seconds, triggering the release of epinephrine and norepinephrine, which in turn diminish appetite and boost alertness and mental efficiency (FIGURE 10.4 on the next page). At the same time, nicotine stimulates the central nervous system to release neurotransmitters that calm anxiety and reduce sensitivity to pain. For example, nicotine stimulates the release of dopamine and (like heroin and morphine) opioids (Nowak, 1994; Scott et al., 2004). These rewards keep people smoking even when they wish they could stop— indeed, even when they know they are committing slow-motion suicide (Saad, 2002). An informative exception: Brain-injured patients who have lost a prune-size frontal lobe region called the insula—an area that lights up when people crave drugs— are able to give up cigarettes instantly (Naqvi et al., 2007).

|| Asked “If you had to do it all over again, would you start smoking?” more than 85 percent of adult smokers answer No (Slovic et al., 2002). ||

FIGURE 10.3 Peer influence Kids don’t smoke if their friends don’t (Philip Morris, 2003). The graph shows the percentage of 11- to 17-year-olds who smoked a cigarette at least once in the past 30 days, by whether all/most, some, or none of their friends smoke. A correlation-causation question: Does the close link between teen smoking and friends’ smoking reflect peer influence? Teens seeking similar friends? Or both?

Humorist Dave Barry (1995) recalling why he smoked his first cigarette the summer he turned 15: “Arguments against smoking: ‘It’s a repulsive addiction that slowly but surely turns you into a gasping, gray-skinned, tumor-ridden invalid, hacking up brownish gobs of toxic waste from your one remaining lung.’ Arguments for smoking: ‘Other teenagers are doing it.’ Case closed! Let’s light up!”

FIGURE 10.4 Where there’s smoke . . . : The physiological effects of nicotine Nicotine reaches the brain within 7 seconds, twice as fast as intravenous heroin. Within minutes, the amount in the blood soars.
1. Arouses the brain to a state of increased alertness
2. Increases heart rate and blood pressure
3. At high levels, relaxes muscles and triggers the release of neurotransmitters that may reduce stress
4. Reduces circulation to extremities
5. Suppresses appetite for carbohydrates

“To cease smoking is the easiest thing I ever did; I ought to know because I’ve done it a thousand times.” Mark Twain, 1835–1910

Nevertheless, half of all Americans who have ever smoked have quit, and 81 percent of those who haven’t yet quit wish to (Jones, 2007). For those who endure, the acute craving and withdrawal symptoms gradually dissipate over the ensuing six months (Ward et al., 1997). These nonsmokers may live not only healthier but also happier. Smoking correlates with higher rates of depression, chronic disabilities, and divorce (Doherty & Doherty, 1998; Vita et al., 1998). Healthy living seems to add both years to life and life to years.

Cocaine

|| The recipe for Coca-Cola originally included an extract of the coca plant, creating a cocaine tonic for tired older people. Between 1896 and 1905, Coke was indeed “the real thing.” ||

“Cocaine makes you a new man. And the first thing that new man wants is more cocaine.” Comedian George Carlin (1937–2008)

Cocaine use offers a fast track from euphoria to crash. When sniffed (“snorted”), and especially when injected or smoked (“free-based”), cocaine enters the bloodstream quickly. The result: a “rush” of euphoria that depletes the brain’s supply of the neurotransmitters dopamine, serotonin, and norepinephrine (FIGURE 10.5). Within 15 to 30 minutes, a crash of agitated depression follows as the drug’s effect wears off. In national surveys, 4 percent of U.S. high school seniors and 5 percent of British 18- to 24-year-olds reported having tried cocaine during the past year (Home Office, 2003; Johnston et al., 2009). Nearly half of the drug-using seniors had smoked crack, a crystallized form of cocaine. This faster-working, potent form of the drug produces a briefer but more intense high, a more intense crash, and a craving for more, which wanes after several hours only to return several days later (Gawin, 1991). Cocaine-addicted monkeys have pressed levers more than 12,000 times to gain one cocaine injection (Siegel, 1990). Many regular cocaine users—animal and human—do become addicted. In situations that trigger aggression, ingesting cocaine may heighten reactions. Caged rats fight when given foot shocks, and they fight even more when given cocaine and foot shocks. Likewise, humans ingesting high doses of cocaine in laboratory experiments impose higher shock levels on a presumed opponent than do those receiving a placebo (Licata et al., 1993). Cocaine use may also lead to emotional disturbances, suspiciousness, convulsions, cardiac arrest, or respiratory failure.


FIGURE 10.5 Cocaine euphoria and crash (a) Neurotransmitters carry a message from a sending neuron across a synapse to receptor sites on a receiving neuron. (b) The sending neuron normally reabsorbs excess neurotransmitter molecules, a process called reuptake. (c) By binding to the sites that normally reabsorb neurotransmitter molecules, cocaine blocks reuptake of dopamine, norepinephrine, and serotonin (Ray & Ksir, 1990). The extra neurotransmitter molecules therefore remain in the synapse, intensifying their normal mood-altering effects and producing a euphoric rush. When the cocaine level drops, the absence of these neurotransmitters produces a crash.

As with all psychoactive drugs, cocaine’s psychological effects depend not only on the dosage and form consumed but also on the situation and the user’s expectations and personality. Given a placebo, cocaine users who think they are taking cocaine often have a cocainelike experience (Van Dyke & Byck, 1982).

Ecstasy Ecstasy, a street name for MDMA (methylenedioxymethamphetamine), is both a stimulant and a mild hallucinogen. As an amphetamine derivative, it triggers dopamine release. But its major effect is releasing stored serotonin and blocking its reabsorption, thus prolonging serotonin’s feel-good flood (Braun, 2001). About a half-hour after taking an Ecstasy pill, users enter a three- to four-hour period of feelings of emotional elevation and, given a social context, connectedness with those around them (“I love everyone”).

The hug drug MDMA, known as Ecstasy, produces a euphoric high and feelings of intimacy. But repeated use destroys serotonin-producing neurons and may permanently deflate mood and impair memory.

Ecstasy (MDMA) a synthetic stimulant and mild hallucinogen. Produces euphoria and social intimacy, but with short-term health risks and longer-term harm to serotonin-producing neurons and to mood and cognition.


During the late 1990s, Ecstasy’s popularity soared as a “club drug” taken at night clubs and all-night raves (Landry, 2002). There are, however, reasons not to be ecstatic about Ecstasy. One is its dehydrating effect, which—when combined with prolonged dancing—can lead to severe overheating, increased blood pressure, and death. Another is that long-term, repeated leaching of brain serotonin can damage serotonin-producing neurons, leading to decreased output and increased risk of permanently depressed mood (Croft et al., 2001; McCann et al., 2001; Roiser et al., 2005). Ecstasy also suppresses the disease-fighting immune system, impairs memory and other cognitive functions, and disrupts sleep by interfering with serotonin’s control of the circadian clock (Laws & Kokkalis, 2007; Pacifici et al., 2001; Schilt et al., 2007). Ecstasy delights for the night but dispirits the morrow.

Hallucinogens 10-4 What are hallucinogens, and what are their effects? Hallucinogens distort perceptions and evoke sensory images in the absence of sensory input (which is why these drugs are also called psychedelics, meaning “mind-manifesting”). Some, such as LSD and MDMA (Ecstasy), are synthetic. Others, including the mild hallucinogen marijuana, are natural substances.

LSD In 1943, Albert Hofmann reported perceiving “an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors” (Siegel, 1984). Hofmann, a chemist, created—and on one Friday afternoon in April 1943 accidentally ingested—LSD (lysergic acid diethylamide). The result reminded him of a childhood mystical experience that had left him longing for another glimpse of “a miraculous, powerful, unfathomable reality” (Smith, 2006). LSD and other powerful hallucinogens are chemically similar to (and therefore block the actions of) a subtype of the neurotransmitter serotonin (Jacobs, 1987). The emotions of an LSD trip vary from euphoria to detachment to panic. The user’s current mood and expectations color the emotional experience, but the perceptual distortions and hallucinations have some commonalities. Psychologist Ronald Siegel (1982) reports that whether you provoke your brain to hallucinate by drugs, loss of oxygen, or extreme sensory deprivation, “it will hallucinate in basically the same way.” The experience typically begins with simple geometric forms, such as a lattice, a cobweb, or a spiral. The next phase consists of more meaningful images; some may be superimposed on a tunnel or funnel, others may be replays of past emotional experiences. As the hallucination peaks, people frequently feel separated from their body and experience dreamlike scenes so real that they may become panic-stricken or harm themselves. (Some researchers have pointed out similarities between these experiences and the near-death phenomenon. See Close-Up: Near-Death Experiences.)

Marijuana

hallucinogens psychedelic (“mind-manifesting”) drugs, such as LSD, that distort perceptions and evoke sensory images in the absence of sensory input.

LSD a powerful hallucinogenic drug; also known as acid (lysergic acid diethylamide).

THC the major active ingredient in marijuana; triggers a variety of effects, including mild hallucinations.

Marijuana consists of the leaves and flowers of the hemp plant, which for 5000 years has been cultivated for its fiber. Whether smoked or eaten, marijuana’s major active ingredient, THC (delta-9-tetrahydrocannabinol), produces a mix of effects. (Smoking gets the drug into the brain in about 7 seconds, producing a greater effect than does eating the drug, which causes its peak concentration to be reached at a slower, unpredictable rate.) Like alcohol, marijuana relaxes, disinhibits, and may produce a euphoric high. But marijuana is also a mild hallucinogen, amplifying sensitivity to colors, sounds, tastes, and smells. And unlike alcohol, which the body eliminates within hours, THC and its by-products linger in the body for a month or more. Thus, contrary to the usual tolerance phenomenon, regular users may achieve a high with smaller amounts of the drug than occasional users would need to get the same effect.



CLOSE-UP

Near-Death Experiences

10-5 What are near-death experiences, and what is the controversy over their explanation?

A man . . . hears himself pronounced dead by his doctor. He begins to hear an uncomfortable noise, a loud ringing or buzzing, and at the same time feels himself moving very rapidly through a long dark tunnel. After this, he suddenly finds himself outside of his own physical body . . . and sees his own body from a distance, as though he is a spectator. . . . Soon other things begin to happen. Others come to meet and to help him. He glimpses the spirits of relatives and friends who have already died, and a loving, warm spirit of a kind he has never encountered before—a being of light—appears before him. . . . He is overwhelmed by intense feelings of joy, love, and peace. Despite his attitude, though, he somehow reunites with his physical body and lives. (Moody, 1976, pp. 23, 24.)

“My entire vision for the future passed before my eyes.”

This is a composite description of a near-death experience. In studies of those who have come close to death through cardiac arrest or other physical traumas, 12 to 40 percent recalled a near-death experience (Gallup, 1982; Ring, 1980; Schnaper, 1980; Van Lommel et al., 2001).

Did the description of the near-death experience sound familiar? The parallels with Ronald Siegel’s (1977) descriptions of the typical hallucinogenic experience are striking: replay of old memories, out-of-body sensations, and visions of tunnels or funnels and bright lights or beings of light (FIGURE 10.6). After being resuscitated from apparent death—with no breathing or pulse for more than 30 seconds—many children, too, offer near-death recollections (Morse, 1994). And worldwide, people near death have sometimes reported visions of another world, though the content of that vision often depends on the culture (Kellehear, 1996).

Patients who have experienced temporal lobe seizures have reported profound mystical experiences, sometimes similar to those of near-death experiences. When researchers stimulated the crucial temporal lobe area of one such patient, she reported a sensation of “floating” near the ceiling and seeing herself, from above, lying in bed (Blanke et al., 2002, 2004). Solitary sailors and polar explorers have had out-of-body sensations while enduring monotony, isolation, and cold (Suedfeld & Mocellin, 1987).

Oxygen deprivation can produce such hallucinations, complete with tunnel vision (Woerlee, 2004, 2005). As oxygen deprivation turns off the brain’s inhibitory cells, neural activity increases in the visual cortex (Blackmore, 1991, 1993). In the oxygen-starved brain, the result is a growing patch of light, which looks much like what you would see as you moved through a tunnel. The near-death experience, argued Siegel (1980), is best understood as “hallucinatory activity of the brain.”

Some near-death investigators object. People who have experienced both hallucinations and the near-death phenomenon typically deny their similarity. Moreover, a near-death experience may change people in ways that a drug trip does not. Those who have been “embraced by the light” may become kinder, more spiritual, more believing in life after death. And they tend to handle stress well, often by addressing a stressful situation directly rather than becoming traumatized (Britton & Bootzin, 2004). Skeptics reply that these effects stem from the death-related context of the experience.

***

The debates over the significance of near-death experiences are an aspect of a wider debate over dreams, fantasy, hypnotic states, and drug-induced hallucinations. In all these cases, science informs our wondering about human consciousness and human nature. Although there remain questions that it cannot answer, science nevertheless helps fashion our image of who we are—of our human potentials and our human limits.

䉴 FIGURE 10.6 Near-death vision or hallucination? Psychologist Ronald Siegel (1977) reported that people under the influence of hallucinogenic drugs often see “a bright light in the center of the field of vision. . . . The location of this point of light create[s] a tunnel-like perspective.”

near-death experience an altered state of consciousness reported after a close brush with death (such as through cardiac arrest); often similar to drug-induced hallucinations.



A user’s experience can vary with the situation. If the person feels anxious or depressed, using marijuana may intensify these feelings. And studies controlling for other drug use and personal traits have found that the more one uses marijuana, the greater one’s risk of anxiety, depression, or possibly schizophrenia (Hall, 2006; Murray et al., 2007; Patton et al., 2002). Daily use bodes a worse outcome than infrequent use.

The National Academy of Sciences (1982, 1999) and National Institute on Drug Abuse (2004) have identified other marijuana consequences. Like alcohol, marijuana impairs the motor coordination, perceptual skills, and reaction time necessary for safely operating an automobile or other machine. “THC causes animals to misjudge events,” reported Ronald Siegel (1990, p. 163). “Pigeons wait too long to respond to buzzers or lights that tell them food is available for brief periods; and rats turn the wrong way in mazes.” Marijuana also disrupts memory formation and interferes with immediate recall of information learned only a few minutes before. Such cognitive effects outlast the period of smoking (Messinis et al., 2007). Prenatal exposure through maternal marijuana use also impairs brain development (Berghuis et al., 2007; Huizink & Mulder, 2006). Heavy adult use for over 20 years is associated with a shrinkage of brain areas that process memories and emotions (Yücel et al., 2008).

Scientists have shed light on marijuana’s cognitive, mood, and motor effects with the discovery of concentrations of THC-sensitive receptors in the brain’s frontal lobes, limbic system, and motor cortex (Iversen, 2000). As the 1970s discovery of receptors for morphine put researchers on the trail of morphinelike neurotransmitters (the endorphins), so the recent discovery of cannabinoid receptors has led to a successful hunt for naturally occurring THC-like molecules that bind with cannabinoid receptors. These molecules may naturally control pain.
If so, this may help explain why marijuana can be therapeutic for those who suffer the pain, nausea, and severe weight loss associated with AIDS (Watson et al., 2000). Such uses have motivated legislation in some states to make the drug legally available for medical purposes. To avoid the toxicity of marijuana smoke—which, like cigarette smoke, can cause cancer, lung damage, and pregnancy complications—the Institute of Medicine recommends medical inhalers to deliver the THC.

***

Despite their differences, the psychoactive drugs summarized in TABLE 10.1 share a common feature: They trigger negative aftereffects that offset their immediate positive effects and grow stronger with repetition. And that helps explain both tolerance and withdrawal. As the opposing, negative aftereffects grow stronger, it takes larger and larger doses to produce the desired high (tolerance), causing the aftereffects to worsen in the drug’s absence (withdrawal). This in turn creates a need to switch off the withdrawal symptoms by taking yet more of the drug.

TABLE 10.1 A Guide to Selected Psychoactive Drugs

Drug | Type | Pleasurable Effects | Adverse Effects
Alcohol | Depressant | Initial high followed by relaxation and disinhibition | Depression, memory loss, organ damage, impaired reactions
Heroin | Depressant | Rush of euphoria, relief from pain | Depressed physiology, agonizing withdrawal
Caffeine | Stimulant | Increased alertness and wakefulness | Anxiety, restlessness, and insomnia in high doses; uncomfortable withdrawal
Methamphetamine | Stimulant | Euphoria, alertness, energy | Irritability, insomnia, hypertension, seizures
Cocaine | Stimulant | Rush of euphoria, confidence, energy | Cardiovascular stress, suspiciousness, depressive crash
Nicotine | Stimulant | Arousal and relaxation, sense of well-being | Heart disease, cancer
Ecstasy (MDMA) | Stimulant; mild hallucinogen | Emotional elevation, disinhibition | Dehydration, overheating, depressed mood, impaired cognitive and immune functioning
Marijuana | Mild hallucinogen | Enhanced sensation, relief of pain, distortion of time, relaxation | Impaired learning and memory, increased risk of psychological disorders, lung damage from smoke

“How strange would appear to be this thing that men call pleasure! And how curiously it is related to what is thought to be its opposite, pain! . . . Wherever the one is found, the other follows up behind.” Plato, Phaedo, fourth century B.C.

䉴|| Influences on Drug Use 10-6 Why do some people become regular users of consciousness-altering drugs? Drug use by North American youth increased during the 1970s. Then, with increased drug education and a more realistic and deglamorized media depiction of taking drugs, drug use declined sharply. After the early 1990s, the cultural antidrug voice softened, and drugs for a time were again glamorized in some music and films. Consider these marijuana-related trends:

䉴 In the University of Michigan’s annual survey of 15,000 U.S. high school seniors, the proportion who believe there is “great risk” in regular marijuana use rose from 35 percent in 1978 to 79 percent in 1991, then retreated to 52 percent in 2008 (Johnston et al., 2009).

䉴 After peaking in 1978, marijuana use by U.S. high school seniors declined through 1992, then rose, but has recently been tapering off (FIGURE 10.7). Among Canadian 15- to 24-year-olds, 23 percent report using marijuana monthly, weekly, or daily (Health Canada, 2007).

For some adolescents, occasional drug use represents thrill seeking. Why, though, do other adolescents become regular drug users? In search of answers, researchers have engaged biological, psychological, and social-cultural levels of analysis.

Biological Influences Some people may be biologically vulnerable to particular drugs. For example, evidence accumulates that heredity influences some aspects of alcohol abuse problems, especially those appearing by early adulthood (Crabbe, 2002):

䉴 Adopted individuals are more susceptible to alcohol dependence if one or both biological parents have a history of it.

䉴 FIGURE 10.7 Trends in drug use The percentage of U.S. high school seniors who report having used alcohol, marijuana, or cocaine during the past 30 days declined from the late 1970s to 1992, when it partially rebounded for a few years. (From Johnston et al., 2009.)



|| Warning signs of alcohol dependence • Drinking binges • Regretting things done or said when drunk • Feeling low or guilty after drinking • Failing to honor a resolve to drink less • Drinking to alleviate depression or anxiety • Avoiding family or friends when drinking ||

䉴 Having an identical rather than fraternal twin with alcohol dependence puts one at increased risk for alcohol problems (Kendler et al., 2002). (In marijuana use also, identical twins more closely resemble one another than do fraternal twins.)

䉴 Boys who at age 6 are excitable, impulsive, and fearless (genetically influenced traits) are more likely as teens to smoke, drink, and use other drugs (Masse & Tremblay, 1997).

䉴 Researchers have bred rats and mice that prefer alcoholic drinks to water. One such strain has reduced levels of the brain chemical NPY. Mice engineered to overproduce NPY are very sensitive to alcohol’s sedating effect and drink little (Thiele et al., 1998).

䉴 Researchers have identified genes that are more common among people and animals predisposed to alcohol dependency, and they are seeking genes that contribute to tobacco addiction (NIH, 2006; Nurnberger & Bierut, 2007).

These culprit genes seemingly produce deficiencies in the brain’s natural dopamine reward system, which is impacted by addictive drugs. When repeated, the drugs trigger dopamine-produced pleasure but also disrupt normal dopamine balance. Studies of how drugs reprogram the brain’s reward systems raise hopes for antiaddiction drugs that might block or blunt the effects of alcohol and other drugs (Miller, 2008; Wilson & Kuhn, 2005).

Psychological and Social-Cultural Influences

Psychological and social-cultural influences also contribute to drug use (FIGURE 10.8). In their studies of youth and young adults, Michael Newcomb and L. L. Harlow (1986) found that one psychological factor is the feeling that one’s life is meaningless and directionless, a common feeling among school dropouts who subsist without job skills, without privilege, and with little hope. When young unmarried adults leave home, alcohol and other drug use increases; when they marry and have children, it decreases (Bachman et al., 1997).

Heavy users of alcohol, marijuana, and cocaine often display other psychological influences. Many have experienced significant stress or failure and are depressed. Females with a history of depression, eating disorders, or sexual or physical abuse are at risk for substance addiction, as are those undergoing school or neighborhood transitions (CASA, 2003; Logan et al., 2002). Monkeys, too, develop a taste for alcohol when stressed by permanent separation from their mother at birth (Small, 2002). By temporarily dulling the pain of self-awareness, alcohol may offer a way to avoid coping with depression, anger, anxiety, or insomnia. The relief may be temporary, but behavior is often controlled more by its immediate consequences than by its later ones.

FIGURE 10.8 Levels of analysis for drug use The biopsychosocial approach enables researchers to investigate drug use from complementary perspectives.
• Biological influences: genetic predispositions; variations in neurotransmitter systems
• Psychological influences: lacking sense of purpose; significant stress; psychological disorders, such as depression
• Social-cultural influences: urban environment; cultural attitude toward drug use; peer influences

Especially for teenagers, drug use also has social roots. Most teen drinking is done for social reasons, not as a way to cope with problems (Kuntsche et al., 2005). Social influence also appears in the differing rates of drug use across cultural and ethnic groups. For example, a 2003 survey of 100,000 teens in 35 European countries found that marijuana use in the prior 30 days ranged from zero to 1 percent in Romania and Sweden to 20 to 22 percent in Britain, Switzerland, and France (ESPAD, 2003). Independent U.S. government studies of drug use in households nationwide and among high schoolers in all regions reveal that African-American teens have sharply lower rates of drinking, smoking, and cocaine use (Johnston et al., 2007). Alcohol and other drug addiction rates have also been extremely low in the United States among Orthodox Jews, Mormons, the Amish, and Mennonites (Trimble, 1994).

Relatively drug-free small towns and rural areas tend to constrain any genetic predisposition to drug use, report Lisa Legrand and her colleagues (2005). For those whose genetic predispositions nudge them toward substance use, “cities offer more opportunities” and less supervision.

Whether in cities or rural areas, peers influence attitudes about drugs. They also throw the parties and provide the drugs. If an adolescent’s friends use drugs, the odds are that he or she will, too. If the friends do not, the opportunity may not even arise. Teens who come from happy families, who do not begin drinking before age 14, and who do well in school tend not to use drugs, largely because they rarely associate with those who do (Bachman et al., 2007; Hingson et al., 2006; Oetting & Beauvais, 1987, 1990).
Peer influence, however, is not just a matter of what friends do and say but also of what adolescents believe friends are doing and favoring. In one survey of sixth graders in 22 U.S. states, 14 percent believed their friends had smoked marijuana, though only 4 percent acknowledged doing so (Wren, 1999). University students are not immune to such misperceptions: Drinking dominates social occasions partly because students overestimate their fellow students’ enthusiasm for alcohol and underestimate their views of its risks (Prentice & Miller, 1993; Self, 1994) (TABLE 10.2). People whose beginning use was influenced by their peers are more likely to stop using drugs when friends stop or the social network changes (Kandel & Raveis, 1989). One study that followed 12,000 adults over 32 years found that smokers tend to quit in clusters (Christakis & Fowler, 2008). Within a social network, the odds of a person’s quitting increased when a spouse, friend, or co-worker stopped smoking. Similarly, most soldiers who became drug-addicted while in Vietnam ceased their drug use after returning home (Robins et al., 1974).

TABLE 10.2 Facts About “Higher” Education

• College and university students drink more alcohol than their nonstudent peers and exhibit 2.5 times the general population’s rate of substance abuse.
• Fraternity and sorority members report nearly twice the binge drinking rate of nonmembers.
• Since 1993, campus smoking rates have declined, alcohol use has been steady, and abuse of prescription opioids, stimulants, tranquilizers, and sedatives has increased, as has marijuana use.
Source: NCASA, 2007.

|| In the real world, alcohol accounts for one-sixth or less of beverage use. In TV land, drinking alcohol occurs more often than the combined drinking of coffee, tea, soft drinks, and water (Gerbner, 1990). ||

|| Culture and alcohol Percentage drinking weekly or more: United States 30%; Canada 40%; Britain 58% (Gallup Poll, from Moore, 2006) ||



As always with correlations, the traffic between friends’ drug use and our own may be two-way: Our friends influence us. Social networks matter. But we also select as friends those who share our likes and dislikes. What do the findings on drug use suggest for drug prevention and treatment programs? Three channels of influence seem possible:

䉴 Educate young people about the long-term costs of a drug’s temporary pleasures.

䉴 Help young people find other ways to boost their self-esteem and purpose in life.

䉴 Attempt to modify peer associations or to “inoculate” youths against peer pressures by training them in refusal skills.

People rarely abuse drugs if they understand the physical and psychological costs, feel good about themselves and the direction their lives are taking, and are in a peer group that disapproves of using drugs. These educational, psychological, and social factors may help explain why 42 percent of U.S. high school dropouts, but only 15 percent of college graduates, smoke (Ladd, 1998).



Review Drugs and Consciousness

10-1 What are tolerance, dependence, and addiction, and what are some common misconceptions about addiction?
Psychoactive drugs alter perceptions and moods. Their continued use produces tolerance (requiring larger doses to achieve the same effect) and may lead to physical or psychological dependence. Addiction is compulsive drug craving and use. Three common misconceptions about addiction are that (1) addictive drugs quickly corrupt; (2) therapy is always required to overcome addiction; and (3) the concept of addiction can meaningfully be extended beyond chemical dependence to a wide range of other behaviors.

10-2 What are depressants, and what are their effects?
Depressants, such as alcohol, barbiturates, and the opiates, dampen neural activity and slow body functions. Alcohol tends to disinhibit—it increases the likelihood that we will act on our impulses, whether harmful or helpful. Alcohol also slows nervous system activity and impairs judgment, disrupts memory processes by suppressing REM sleep, and reduces self-awareness. User expectations strongly influence alcohol’s behavioral effects.

10-3 What are stimulants, and what are their effects?
Stimulants—caffeine, nicotine, the amphetamines, cocaine, and Ecstasy—excite neural activity and speed up body functions. All are highly addictive. Nicotine’s effects make smoking a difficult habit to kick, but the percentage of Americans who smoke is nevertheless decreasing. Continued use of methamphetamine may permanently reduce dopamine production. Cocaine gives users a 15- to 30-minute high, followed by a crash. Its risks include cardiovascular stress and suspiciousness. Ecstasy is a combined stimulant and mild hallucinogen that produces a euphoric high and feelings of intimacy. Its users risk immune system suppression, permanent damage to mood and memory, and (if taken during physical activity) dehydration and escalating body temperatures.
10-4 What are hallucinogens, and what are their effects?
Hallucinogens—such as LSD and marijuana—distort perceptions and evoke hallucinations—sensory images in the absence of sensory input. The user’s mood and expectations influence the effects of LSD, but common experiences are hallucinations and emotions varying from euphoria to panic. Marijuana’s main ingredient, THC, may trigger feelings of disinhibition, euphoria, relaxation, relief from pain, and intense sensitivity to sensory stimuli. It may also increase feelings of depression or anxiety, impair motor coordination and reaction time, disrupt memory formation, and damage lung tissue (because of the inhaled smoke).

10-5 What are near-death experiences, and what is the controversy over their explanation?
Many people who have survived a brush with death, such as through cardiac arrest, report near-death experiences. These sometimes involve out-of-body sensations and seeing or traveling toward a bright light. Some researchers believe that such experiences closely parallel reports of hallucinations and may be products of a brain under stress. Others reject this analysis.

10-6 Why do some people become regular users of consciousness-altering drugs?
Psychological factors (such as stress, depression, and hopelessness) and social factors (such as peer pressure) combine to lead many people to experiment with—and sometimes become dependent on—drugs. Cultural and ethnic groups have differing rates of drug use. Some people may be biologically more likely to become dependent on drugs such as alcohol. Each type of influence—biological, psychological, and social-cultural—offers a possible path for drug prevention and treatment programs.

Terms and Concepts to Remember
psychoactive drug, p. 113; tolerance, p. 113; withdrawal, p. 113; physical dependence, p. 113; psychological dependence, p. 114; addiction, p. 114; depressants, p. 115; barbiturates, p. 117; opiates, p. 117; stimulants, p. 117; amphetamines, p. 117; methamphetamine, p. 117; Ecstasy (MDMA), p. 121; hallucinogens, p. 122; LSD, p. 122; THC, p. 122; near-death experience, p. 123

Test Yourself 1. In what ways are near-death experiences similar to drug-induced hallucinations?

2. A U.S. government survey of 27,616 current or former alcohol drinkers found that 40 percent of those who began drinking before age 15 grew dependent on alcohol. The same was true of only 10 percent of those who first imbibed at ages 21 or 22 (Grant & Dawson, 1998). What possible explanations might there be for this correlation between early use and later abuse? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. Does your understanding of mind-brain science and your personal philosophy or faith incline you toward acceptance or denial of the “near-death experience”?

2. Drinking dominates university parties when students overestimate other students’ enthusiasm for alcohol. Do you think such misperceptions exist on your campus? How might you find out?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Nature, Nurture, and Human Diversity

What makes you you? In important ways, we are each unique. We look different. We sound different. We have varying personalities, interests, and cultural and family backgrounds.

We are also the leaves of one tree. Our human family shares not only a common biological heritage—cut us and we bleed—but also common behavioral tendencies. Our shared brain architecture predisposes us to sense the world, develop language, and feel hunger through identical mechanisms. Whether we live in the Arctic or the tropics, we prefer sweet tastes to sour. We divide the color spectrum into similar colors. And we feel drawn to behaviors that produce and protect offspring.

Our kinship appears in our social behaviors as well. Whether named Wong, Nkomo, Smith, or Gonzales, we start fearing strangers at about eight months, and as adults we prefer the company of those with attitudes and attributes similar to our own. Coming from different parts of the globe, we know how to read one another’s smiles and frowns. As members of one species, we affiliate, conform, return favors, punish offenses, organize hierarchies of status, and grieve a child’s death. A visitor from outer space could drop in anywhere and find humans dancing and feasting, singing and worshiping, playing sports and games, laughing and crying, living in families and forming groups. Taken together, such universal behaviors define our human nature.

What causes our striking diversity, and also our shared human nature? How much are human differences shaped by our differing genes? And how much by our environment—by every external influence, from maternal nutrition while in the womb to social support while nearing the tomb? To what extent are we formed by our upbringing? By our culture? By our current circumstances? By people’s reactions to our genetic dispositions? Modules 11 and 12 tell something of the complex story of how our genes (nature) and environments (nurture) define us.

modules

11 Behavior Genetics and Evolutionary Psychology

12 Environmental Influences on Behavior


䉴 The nurture of nature Parents everywhere wonder: Will my baby grow up to be peaceful or aggressive? Homely or attractive? Successful or struggling at every step? What comes built in, and what is nurtured—and how? Research reveals that nature and nurture together shape our development—every step of the way.


module 11

Behavior Genetics: Predicting Individual Differences Evolutionary Psychology: Understanding Human Nature

Behavior Genetics and Evolutionary Psychology

䉴|| Behavior Genetics: Predicting Individual Differences

11-1 What are genes, and how do behavior geneticists explain our individual differences?

If Jaden Agassi, son of tennis stars Andre Agassi and Steffi Graf, grows up to be a tennis star, should we attribute his superior talent to his Grand Slam genes? To his growing up in a tennis-rich environment? To high expectations? Such questions intrigue behavior geneticists, who study our differences and weigh the effects and interplay of heredity and environment.

Genes: Our Codes for Life

“Thanks for almost everything, Dad.”

“Your DNA and mine are 99.9 percent the same. . . . At the DNA level, we are clearly all part of one big worldwide family.” Francis Collins, Human Genome Project director, 2007

“We share half our genes with the banana.” Evolutionary biologist Robert May, president of Britain’s Royal Society, 2001


Behind the story of our body and of our brain—surely the most awesome thing on our little planet—is the heredity that interacts with our experience to create both our universal human nature and our individual and social diversity.

Barely more than a century ago, few would have guessed that every cell nucleus in your body contains the genetic master code for your entire body. It’s as if every room in the Empire State Building had a book containing the architect’s plans for the entire structure. The plans for your own book of life run to 46 chapters—23 donated by your mother (from her egg) and 23 by your father (from his sperm). Each of these 46 chapters, called a chromosome, is composed of a coiled chain of the molecule DNA (deoxyribonucleic acid). Genes, small segments of the giant DNA molecules, form the words of those chapters (FIGURE 11.1). All told, you have 30,000 or so gene words.

Genes can be either active (expressed) or inactive. Environmental events “turn on” genes, rather like hot water enabling a tea bag to express its flavor. When turned on, genes provide the code for creating protein molecules, the building blocks of physical development.

Genetically speaking, every other human is close to being your identical twin. Human genome researchers have discovered the common sequence within human DNA. It is this shared genetic profile that makes us humans, rather than chimpanzees or tulips. Actually, we aren’t all that different from our chimpanzee cousins; with them we share about 96 percent of our DNA sequence (Mikkelsen et al., 2005). At “functionally important” DNA sites, reports one molecular genetics team, the human-chimpanzee DNA similarity is 99.4 percent (Wildman et al., 2003). Yet that wee difference matters. Despite some remarkable abilities, chimpanzees grunt. Shakespeare intricately wove some 24,000 words to form his literary masterpieces. And small differences matter among chimpanzees, too.
Two species, common chimpanzees and bonobos, differ by much less than 1 percent of their genomes, yet they display markedly differing behaviors. Chimpanzees are aggressive and male-dominated. Bonobos are peaceable and female led. Geneticists and psychologists are interested in the occasional variations found at particular gene sites in human DNA. Slight person-to-person variations from the common pattern give clues to our uniqueness—why one person has a disease that another does not, why one person is short and another tall, why one is outgoing and another shy.


MODULE 11 Behavior Genetics and Evolutionary Psychology

FIGURE 11.1 The human building blocks 䉴 The nucleus of every human cell contains chromosomes, each of which is made up of two strands of DNA connected in a double helix. (Figure labels: Cell, Nucleus, Chromosome, DNA, Gene.)

Most of our traits are influenced by many genes. How tall you are, for example, reflects the size of your face, vertebrae, leg bones, and so forth—each of which may be influenced by different genes interacting with your environment. Complex traits such as intelligence, happiness, and aggressiveness are similarly influenced by groups of genes. Thus our genetic predispositions—our genetically influenced traits— help explain both our shared human nature and our human diversity.

Twin and Adoption Studies

To scientifically tease apart the influences of environment and heredity, behavior geneticists would need to design two types of experiments. The first would control the home environment while varying heredity. The second would control heredity while varying the home environment. Such experiments with human infants would be unethical, but happily for our purposes, nature has done this work for us.


䉴 Fraternal Olsen twins The actresses Mary-Kate and Ashley Olsen have often been mistaken for identical twins. As babies and then preschoolers, they even played the same young character (trading places when one would tire or get fussy) in the late 1980s and early 1990s TV show Full House. But they are actually fraternal twins, having formed from two separate eggs, so they share no more genes than any other sibling pair.

behavior genetics the study of the relative power and limits of genetic and environmental influences on behavior.

environment every nongenetic influence, from prenatal nutrition to the people and things around us.

chromosomes threadlike structures made of DNA molecules that contain the genes. DNA (deoxyribonucleic acid) a complex molecule containing the genetic information that makes up the chromosomes.

genes the biochemical units of heredity that make up the chromosomes; a segment of DNA capable of synthesizing a protein. genome the complete instructions for making an organism, consisting of all the genetic material in that organism’s chromosomes.


Identical Versus Fraternal Twins

Identical twins, who develop from a single fertilized egg that splits in two, are genetically identical (FIGURE 11.2). They are nature’s own human clones—indeed, clones who share not only the same genes but the same conception, uterus, birth date, and usually the same cultural history. Two slight qualifications:

䉴 Although identical twins have the same genes, they don’t always have the same number of copies of those genes. That may help explain why one twin may be more at risk for certain illnesses (Bruder et al., 2008).

䉴 Most identical twins share a placenta during prenatal development, but one of every three sets has two separate placentas. One twin’s placenta may provide slightly better nourishment, which may contribute to identical twin differences (Davis et al., 1995; Phelps et al., 1997; Sokoll et al., 1995).

FIGURE 11.2 Same fertilized egg, same genes; different eggs, different genes 䉴 Identical twins develop from a single fertilized egg, fraternal twins from two. Identical twins are always the same sex; fraternal twins may be the same sex or opposite sexes.

“It will inevitably be revealed that there are strong genetic components associated with more aspects of what we attribute to human existence including personality subtypes, language capabilities, mechanical abilities, intelligence, sexual activities and preferences, intuitive thinking, quality of memory, willpower, temperament, athletic abilities, etc.” Genomics researcher J. Craig Venter, 2006

Fraternal twins develop from separate fertilized eggs. They share a fetal environment, but they are genetically no more similar than ordinary brothers and sisters. Shared genes can translate into shared experiences. A person whose identical twin has Alzheimer’s disease, for example, has a 60 percent risk of getting the disease; if the affected twin is fraternal, the risk is only 30 percent (Plomin et al., 1997).

Are identical twins, being genetic clones of one another, also behaviorally more similar than fraternal twins? Studies of thousands of twin pairs in Sweden, Finland, and Australia provide a consistent answer: On both extraversion (outgoingness) and neuroticism (emotional instability), identical twins are much more similar than fraternal twins.

If genes influence traits such as emotional instability, might they also influence the social effects of such traits? To find out, Matt McGue and David Lykken (1992) studied divorce rates among 1500 same-sex, middle-aged twin pairs. Their result: If you have a fraternal twin who has divorced, the odds of your divorcing go up 1.6 times (compared with having a not-divorced twin). If you have an identical twin who has divorced, the odds of your divorcing go up 5.5 times. From such data, McGue and Lykken estimate that people’s differing divorce risks are about 50 percent attributable to genetic factors.

When John Loehlin and Robert Nichols (1976) gave a battery of questionnaires to 850 U.S. twin pairs, identical twins, more than fraternal twins, also reported being treated alike. So, did their experience rather than their genes account for their similarity? No, said Loehlin and Nichols; identical twins whose parents treated them alike were not psychologically more alike than identical twins who were treated less similarly. In explaining individual differences, genes matter.

More twins Curiously, twinning rates vary by race. The rate among Caucasians is roughly twice that of Asians and half that of Africans. In Africa and Asia, most twins are identical. In Western countries, most twins are fraternal, and fraternal twins are increasing with the use of fertility drugs (Hall, 2003; Steinhauer, 1999).


Separated Twins

Imagine the following science fiction experiment: A mad scientist decides to separate identical twins at birth, then rear them in differing environments. Better yet, consider a true story:

On a chilly February morning in 1979, some time after divorcing his first wife, Linda, Jim Lewis awoke in his modest home next to his second wife, Betty. Determined that this marriage would work, Jim made a habit of leaving love notes to Betty around the house. As he lay in bed he thought about others he had loved, including his son, James Alan, and his faithful dog, Toy. Jim was looking forward to spending part of the day in his basement woodworking shop, where he had put in many happy hours building furniture, picture frames, and other items, including a white bench now circling a tree in his front yard. Jim also liked to spend free time driving his Chevy, watching stock-car racing, and drinking Miller Lite beer. Jim was basically healthy, except for occasional half-day migraine headaches and blood pressure that was a little high, perhaps related to his chain-smoking habit. He had become overweight a while back but had shed some of the pounds. Having undergone a vasectomy, he was done having children.

What was extraordinary about Jim Lewis, however, was that at that same moment (I am not making this up) there existed another man—also named Jim—for whom all these things (right down to the dog’s name) were also true.1 This other Jim—Jim Springer—just happened, 38 years earlier, to have been his womb-mate. Thirty-seven days after their birth, these genetically identical twins were separated, adopted by blue-collar families, and reared with no contact or knowledge of each other’s whereabouts until the day Jim Lewis received a call from his genetic clone (who, having been told he had a twin, set out to find him).
One month later, the brothers became the first twin pair tested by University of Minnesota psychologist Thomas Bouchard and his colleagues, beginning a study of separated twins that extends to the present (Holden, 1980a,b; Wright, 1998). Given tests measuring their personality, intelligence, heart rate, and brain waves, the Jim twins—despite 38 years of separation—were virtually as alike as the same person tested twice. Their voice intonations and inflections were so similar that, hearing a playback of an earlier interview, Jim Springer guessed “That’s me.” Wrong—it was his brother.

|| Sweden has the world’s largest national twin registry—140,000 living and dead twin pairs—which form part of a massive registry of 600,000 twins currently being sampled in the world’s largest twin study (Wheelwright, 2004; www.genomeutwin.org). ||

|| Twins Lorraine and Levinia Christmas, driving to deliver Christmas presents to each other near Flitcham, England, collided (Shepherd, 1997). ||

|| Bouchard’s famous twin research was, appropriately enough, conducted in Minneapolis, the “Twin City” (with St. Paul), and home to the Minnesota Twins baseball team. ||

1 Actually, this description of the two Jims errs in one respect: Jim Lewis named his son James Alan. Jim Springer named his James Allan.

䉴 Identical twins are two people Identical twins Jim Lewis and Jim Springer were separated shortly after birth and raised in different homes without awareness of each other. Research has shown remarkable similarities in the life choices of separated identical twins, lending support to the idea that genes influence personality.

identical twins twins who develop from a single fertilized egg that splits in two, creating two genetically identical organisms.

fraternal twins twins who develop from separate fertilized eggs. They are genetically no closer than brothers and sisters, but they share a fetal environment.


“In some domains it looks as though our identical twins reared apart are . . . just as similar as identical twins reared together. Now that’s an amazing finding and I can assure you none of us would have expected that degree of similarity.” Thomas Bouchard (1981)

|| Coincidences are not unique to twins. Patricia Kern of Colorado was born March 13, 1941, and named Patricia Ann Campbell. Patricia DiBiasi of Oregon also was born March 13, 1941, and named Patricia Ann Campbell. Both had fathers named Robert, worked as bookkeepers, and at the time of this comparison had children ages 21 and 19. Both studied cosmetology, enjoyed oil painting as a hobby, and married military men, within 11 days of each other. They are not genetically related (from an AP report, May 2, 1983). ||

“We carry to our graves the essence of the zygote that was first us.” Mary Pipher, Seeking Peace: Chronicles of the Worst Buddhist in the World, 2009


Identical twins Oskar Stohr and Jack Yufe presented equally striking similarities. One was raised by his grandmother in Germany as a Catholic and a Nazi, while the other was raised by his father in the Caribbean as a Jew. Nevertheless, they shared traits and habits galore. They liked spicy foods and sweet liqueurs, fell asleep in front of the television, flushed the toilet before using it, stored rubber bands on their wrists, and dipped buttered toast in their coffee. Stohr was domineering toward women and yelled at his wife, as did Yufe before he and his wife separated. Both married women named Dorothy Jane Scheckelburger. Okay, the last item is a joke. But as Judith Rich Harris (2006) notes, it is hardly weirder than some other reported similarities.

Aided by publicity in magazine and newspaper stories, Bouchard and his colleagues (1990; DiLalla et al., 1996; Segal, 1999) located and studied 80 pairs of identical twins reared apart. They continued to find similarities not only of tastes and physical attributes but also of personality, abilities, attitudes, interests, and even fears. In Sweden, Nancy Pedersen and her co-workers (1988) identified 99 separated identical twin pairs and more than 200 separated fraternal twin pairs. Compared with equivalent samples of identical twins reared together, the separated identical twins had somewhat less identical personalities (characteristic patterns of thinking, feeling, and acting). Still, separated twins were more alike if genetically identical than if fraternal. And separation shortly after birth (rather than, say, at age 8) did not amplify their personality differences.

Stories of startling twin similarities do not impress Bouchard’s critics, who remind us that “the plural of anecdote is not data.” They contend that if any two strangers were to spend hours comparing their behaviors and life histories, they would probably discover many coincidental similarities.
If researchers created a control group of biologically unrelated pairs of the same age, sex, and ethnicity, who had not grown up together but who were as similar to one another in economic and cultural background as are many of the separated twin pairs, wouldn’t these pairs also exhibit striking similarities (Joseph, 2001)? Bouchard replies that separated fraternal twins do not exhibit similarities comparable to those of separated identical twins. Twin researcher Nancy Segal (2000) notes that virtual twins—same-age, biologically unrelated siblings—are also much more dissimilar. Even the more impressive data from personality assessments are clouded by the reunion of many of the separated twins some years before they were tested. Moreover, identical twins share an appearance, and the responses it evokes, and adoption agencies tend to place separated twins in similar homes. Despite these criticisms, the striking twin-study results helped shift scientific thinking toward a greater appreciation of genetic influences.

Biological Versus Adoptive Relatives

For behavior geneticists, nature’s second type of real-life experiment—adoption—creates two groups: genetic relatives (biological parents and siblings) and environmental relatives (adoptive parents and siblings). For any given trait, we can therefore ask whether adopted children are more like their biological parents, who contributed their genes, or their adoptive parents, who contribute a home environment. While sharing that home environment, do adopted siblings also come to share traits?

The stunning finding from studies of hundreds of adoptive families is that people who grow up together, whether biologically related or not, do not much resemble one another in personality (McGue & Bouchard, 1998; Plomin et al., 1998; Rowe, 1990). In traits such as extraversion and agreeableness, adoptees are more similar to their biological parents than to their caregiving adoptive parents. The finding is important enough to bear repeating: The environment shared by a family’s children has virtually no discernible impact on their personalities. Two adopted children reared in the same home are no more likely to share personality traits with each other than with the child down the block. Heredity shapes other primates’ personalities, too. Macaque monkeys raised by foster mothers exhibit social behaviors that resemble their biological, rather than foster, mothers (Maestripieri, 2003).

䉴 Nature or nurture or both? When talent runs in families, as with the Williams sisters, how do heredity and environment together do their work?

Add all this to the similarity of identical twins, whether they grow up together or apart, and the effect of a shared rearing environment seems shockingly modest. What we have here is perhaps “the most important puzzle in the history of psychology,” contends Steven Pinker (2002): Why are children in the same family so different? Why does shared family environment have so little effect on children’s personalities? Is it because each sibling experiences unique peer influences and life events? Because sibling relationships ricochet off each other, amplifying their differences? Because siblings—despite sharing half their genes—have very different combinations of genes and may evoke very different kinds of parenting? Such questions fuel behavior geneticists’ curiosity.

The minimal shared-environment effect does not, however, mean that adoptive parenting is a fruitless venture. The genetic leash may limit the family environment’s influence on personality, but parents do influence their children’s attitudes, values, manners, faith, and politics (Reifman & Cleveland, 2007). A pair of adopted children or identical twins will, especially during adolescence, have more similar religious beliefs if reared together (Kelley & De Graaf, 1997; Koenig et al., 2005; Rohan & Zanna, 1996). Parenting matters!

Moreover, in adoptive homes, child neglect and abuse and even parental divorce are rare. (Adoptive parents are carefully screened; natural parents are not.) So it is not surprising that, despite a somewhat greater risk of psychological disorder, most adopted children thrive, especially when adopted as infants (Loehlin et al., 2007; van IJzendoorn & Juffer, 2006; Wierzbicki, 1993).
Seven in eight report feeling strongly attached to one or both adoptive parents. As children of self-giving parents, they grow up to be more self-giving and altruistic than average (Sharma et al., 1998). Many score higher than their biological parents on intelligence tests, and most grow into happier and more stable adults. In one Swedish study, infant adoptees grew up with fewer problems than were experienced by children whose biological mothers had initially registered them for adoption but then decided to raise the children themselves (Bohman & Sigvardsson, 1990). Regardless of personality differences between parents and their adoptees, children benefit from adoption.

“Mom may be holding a full house while Dad has a straight flush, yet when Junior gets a random half of each of their cards his poker hand may be a loser.” David Lykken (2001)

|| The greater uniformity of adoptive homes—mostly healthy, nurturing homes—helps explain the lack of striking differences when comparing child outcomes of different adoptive homes (Stoolmiller, 1999). ||

Temperament and Heredity

As most parents will tell you after having their second child, babies differ even before gulping their first breath. Consider one quickly apparent aspect of personality. Infants’ temperaments are their emotional excitability—whether reactive, intense, and fidgety, or easygoing, quiet, and placid. From the first weeks of life, difficult babies are more irritable, intense, and unpredictable. Easy babies are cheerful, relaxed, and predictable in feeding and sleeping. Slow-to-warm-up infants tend to resist or withdraw from new people and situations (Chess & Thomas, 1987; Thomas & Chess, 1977).

temperament a person’s characteristic emotional reactivity and intensity.


Temperament differences tend to persist. Consider:

䉴 The most emotionally reactive newborns tend also to be the most reactive 9-month-olds (Wilson & Matheny, 1986; Worobey & Blajda, 1989).

䉴 Exceptionally inhibited and fearful 2-year-olds often are still relatively shy as 8-year-olds; about half will become introverted adolescents (Kagan et al., 1992, 1994).

䉴 The most emotionally intense preschoolers tend to be relatively intense young adults (Larsen & Diener, 1987). In one study of more than 900 New Zealanders, emotionally reactive and impulsive 3-year-olds developed into somewhat more impulsive, aggressive, and conflict-prone 21-year-olds (Caspi, 2000).

“Oh, he’s cute, all right, but he’s got the temperament of a car alarm.”

Heredity predisposes temperament differences (Rothbart, 2007). As we have seen, identical twins have more similar personalities, including temperament, than do fraternal twins. Physiological tests reveal that anxious, inhibited infants have high and variable heart rates and a reactive nervous system, and that they become more physiologically aroused when facing new or strange situations (Kagan & Snidman, 2004). One form of a gene that regulates the neurotransmitter serotonin predisposes a fearful temperament and, in combination with unsupportive caregiving, an inhibited child (Fox et al., 2007). Such evidence adds to the emerging conclusion that our biologically rooted temperament helps form our enduring personality (McCrae et al., 2000, 2007; Rothbart et al., 2000).

Heritability

11-2 What is heritability, and how does it relate to individuals and groups?


Using twin and adoption studies, behavior geneticists can mathematically estimate the heritability of a trait—the extent to which variation among individuals can be attributed to their differing genes. If the heritability of intelligence is, say, 50 percent, this does not mean that your intelligence is 50 percent genetic. (If the heritability of height is 90 percent, this does not mean that a 60-inch-tall woman can credit her genes for 54 inches and her environment for the other 6 inches.) Rather, it means that genetic influence explains 50 percent of the observed variation among people.

This point is so often misunderstood that I repeat: We can never say what percentage of an individual’s personality or intelligence is inherited. It makes no sense to say that your personality is due x percent to your heredity and y percent to your environment. Heritability refers instead to the extent to which differences among people are attributable to genes.

Even this conclusion must be qualified, because heritability can vary from study to study. Consider humorist Mark Twain’s (1835–1910) proposal to raise boys in barrels to age 12, feeding them through a hole. If we were to follow his suggestion, the boys would all emerge with lower-than-normal intelligence scores at age 12; yet, given their equal environments, their test score differences could be explained only by their heredity. In this case, heritability—differences due to genes—would be near 100 percent.

As environments become more similar, heredity as a source of differences necessarily becomes more important. If all schools were of uniform quality, all families equally loving, and all neighborhoods equally healthy, then heritability would increase (because differences due to environment would decrease). At the other extreme, if all people had similar heredities but were raised in drastically different environments (some in barrels, some in luxury homes), heritability would be much lower.
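The arithmetic behind such estimates can be sketched with a classic back-of-the-envelope method that this module does not detail: Falconer’s formula, which doubles the difference between identical-twin and fraternal-twin correlations for a trait. The sketch below is illustrative only; the correlation values are hypothetical, not data from the studies cited here.

```python
# A minimal sketch of Falconer's formula for estimating heritability:
#   h^2 = 2 * (r_identical - r_fraternal)
# The correlations below are hypothetical values chosen for illustration.

def falconer_heritability(r_identical: float, r_fraternal: float) -> float:
    """Estimate a trait's heritability from twin-pair correlations."""
    h2 = 2 * (r_identical - r_fraternal)
    return max(0.0, min(1.0, h2))  # clamp to the meaningful 0-1 range

# Hypothetical trait correlations across many twin pairs:
r_mz = 0.50  # identical (monozygotic) twins
r_dz = 0.25  # fraternal (dizygotic) twins
print(falconer_heritability(r_mz, r_dz))  # prints 0.5
```

Note how the estimate behaves as the chapter describes: if environments were made perfectly uniform (the barrel thought experiment), identical twins would resemble each other far more than fraternal twins, the correlation gap would widen, and the estimated heritability would rise toward 1.0.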

Group Differences

“The title of my science project is ‘My Little Brother: Nature or Nurture.’”

If genetic influences help explain individual diversity in traits such as aggressiveness, can the same be said of group differences between men and women, or between people of different races? Not necessarily. Individual differences in height and weight, for example, are highly heritable; yet nutritional rather than genetic influences explain why, as a group, today’s adults are taller and heavier than those of a century ago. The two groups differ, but not because human genes have changed in a mere century’s eyeblink of time.

As with height and weight, so with personality and intelligence scores: Heritable individual differences need not imply heritable group differences. If some individuals are genetically disposed to be more aggressive than others, that needn’t explain why some groups are more aggressive than others. Putting people in a new social context can change their aggressiveness. Today’s peaceful Scandinavians carry many genes inherited from their Viking warrior ancestors.

heritability the proportion of variation among individuals that we can attribute to genes. The heritability of a trait may vary, depending on the range of populations and environments studied.

interaction the interplay that occurs when the effect of one factor (such as environment) depends on another factor (such as heredity).

Nature and Nurture

Among our similarities, the most important—the behavioral hallmark of our species—is our enormous adaptive capacity. Some human traits, such as having two eyes, develop the same in virtually every environment. But other traits are expressed only in particular environments. Go barefoot for a summer and you will develop toughened, callused feet—a biological adaptation to friction. Meanwhile, your shod neighbor will remain a tenderfoot. The difference between the two of you is, of course, an effect of environment. But it is also the product of a biological mechanism—adaptation. Our shared biology enables our developed diversity (Buss, 1991).

An analogy may help: Genes and environment—nature and nurture—work together like two hands clapping. Genes not only code for particular proteins, they also respond to environments. An African butterfly that is green in summer turns brown in fall, thanks to a temperature-controlled genetic switch. The genes that produce brown in one situation produce green in another. Thus, genes are self-regulating. Rather than acting as blueprints that lead to the same result no matter the context, genes react. People with identical genes but differing experiences therefore have similar though not identical minds. One twin may fall in love with someone quite different from the co-twin’s love.

At least one known gene will, in response to major life stresses, code for a protein that controls a neurotransmitter involved in depression. By itself, the gene doesn’t cause depression, but it is part of the recipe. Likewise, the research finding that breast feeding boosts later intelligence turns out to be true only for the 90 percent of infants with a gene that assists in breaking down fatty acids present in human milk (Caspi et al., 2007). Studies of 1037 New Zealand adults and 2232 English 12- and 13-year-olds found no breast-feeding boost among those not carrying the gene. As so often happens, nature and nurture work together.
Thus, asking whether your personality is more a product of your genes or your environment is like asking whether the area of a field is more the result of its length or its width. We could, however, ask whether the differing areas of various fields are more the result of differences in their length or their width, and also whether personto-person personality differences are influenced more by nature or nurture. Human differences result from both genetic and environmental influences. Thus, eating disorders are genetically influenced: Some individuals are more at risk than others. But culture also bends the twig, for eating disorders are primarily a contemporary Western cultural phenomenon.

Gene-Environment Interaction

To say that genes and experience are both important is true. But more precisely, they interact. Imagine two babies, one genetically predisposed to be attractive, sociable, and easygoing, the other less so. Assume further that the first baby attracts more affectionate and stimulating care than the second and so develops into a warmer and more outgoing person. As the two children grow older, the more naturally outgoing child more often seeks activities and friends that encourage further social confidence.

“Men’s natures are alike; it is their habits that carry them far apart.” Confucius, Analects, 500 B.C.

“Heredity deals the cards; environment plays the hand.” Psychologist Charles L. Brewer (1990)


Gene-environment interaction People respond differently to Rowan Atkinson (shown at left playing Mr. Bean) than to his fellow actor Zac Efron, right.


What has caused their resulting personality differences? Neither heredity nor experience dances alone. Environments trigger gene activity. (Scientists are now exploring environmental influences on when particular genes generate proteins.) The other partner in the dance—our genetically influenced traits—also evokes significant responses in others. Thus, a child’s impulsivity and aggression may evoke an angry response from a teacher who otherwise reacts warmly to the child’s model classmates. Parents, too, may treat their own children differently; one child elicits punishment, another does not. In such cases, the child’s nature and the parents’ nurture interact. Neither operates apart from the other. Gene and scene dance together.

Evocative interactions may help explain why identical twins reared in different families recall their parents’ warmth as remarkably similar—almost as similar as if they had had the same parents (Plomin et al., 1988, 1991, 1994). Fraternal twins have more differing recollections of their early family life—even if reared in the same family! “Children experience us as different parents, depending on their own qualities,” noted Sandra Scarr (1990). Moreover, as we grow older we also select environments well suited to our natures.

So, from conception onward, we are the product of a cascade of interactions between our genetic predispositions and our surrounding environments. Our genes affect how people react to and influence us. Biological appearances have social consequences. So, forget nature versus nurture; think nature via nurture.

The New Frontier: Molecular Genetics

11-3 What is the promise of molecular genetics research?

Behavior geneticists have progressed beyond asking, “Do genes influence behavior?” The new frontier of behavior-genetics research draws on “bottom-up” molecular genetics as it seeks to identify specific genes influencing behavior.

As we have already seen, most human traits are influenced by teams of genes. For example, twin and adoption studies tell us that heredity influences body weight, but there is no single “obesity gene.” More likely, some genes influence how quickly the stomach tells the brain, “I’m full.” Others might dictate how much fuel the muscles need, how many calories are burned off by fidgeting, and how efficiently the body converts extra calories into fat (Vogel, 1999). The goal of molecular behavior genetics is to find some of the many genes that influence normal human traits, such as body weight, sexual orientation, and extraversion, and also to explore the mechanisms that control gene expression (Tsankova et al., 2007).


Genetic tests can now reveal at-risk populations for at least a dozen diseases. The search continues in labs worldwide, where molecular geneticists are teaming with psychologists to pinpoint genes that put people at risk for such genetically influenced disorders as learning disabilities, depression, schizophrenia, and alcohol dependence. Worldwide research efforts are under way to sleuth the genes that make people vulnerable to the emotional swings of bipolar disorder, formerly known as manic-depressive disorder.

To tease out the implicated genes, molecular behavior geneticists seek links between certain genes or chromosome segments and specific disorders. First, they find families that have had the disorder across several generations. Then they draw blood or take cheek swabs from both affected and unaffected family members and examine their DNA, looking for differences. “The most powerful potential for DNA,” note Robert Plomin and John Crabbe (2000), “is to predict risk so that steps can be taken to prevent problems before they happen.” Aided by inexpensive DNA-scanning techniques, medical personnel are becoming able to give would-be parents a readout on how their fetus’ genes differ from the normal pattern and what this might mean.

With this benefit come risks. Might labeling a fetus, for example, “at risk for a learning disorder” lead to discrimination? Prenatal screening poses ethical dilemmas. In China and India, where boys are highly valued, testing for an offspring’s sex has enabled selective abortions resulting in millions—yes, millions—of “missing women.” Assuming it were possible, should prospective parents take their eggs and sperm to a genetics lab for screening before combining them to produce an embryo? Should we enable parents to screen their fertilized eggs for health—and for brains or beauty? Progress is a double-edged sword, raising both hopeful possibilities and difficult problems.
By selecting out certain traits, we may deprive ourselves of future Handels and van Goghs, Churchills and Lincolns, Tolstoys and Dickinsons—troubled people all.

© The New Yorker Collection, 1999, Nick Downes from cartoonbank.com. All rights reserved.

Behavior Genetics and Evolutionary Psychology MODULE 11

“I thought that sperm-bank donors remained anonymous.”

Evolutionary Psychology: Understanding Human Nature

11-4 How do evolutionary psychologists use natural selection to explain behavior tendencies?

Behavior geneticists explore the genetic and environmental roots of human differences. Evolutionary psychologists instead focus mostly on what makes us so much alike as humans. They use Darwin’s principle of natural selection to understand the roots of behavior and mental processes. Richard Dawkins (2007) calls natural selection “arguably the most momentous idea ever to occur to a human mind.” The idea, simplified, is this:

• Organisms’ varied offspring compete for survival.

• Certain biological and behavioral variations increase their reproductive and survival chances in their environment.

• Offspring that survive are more likely to pass their genes to ensuing generations.

• Thus, over time, population characteristics may change.

To see these principles at work, let’s consider a straightforward example in foxes.

Natural Selection and Adaptation

A fox is a wild and wary animal. If you capture a fox and try to befriend it, be careful. Stick your hand in the cage and, if the timid fox cannot flee, it may make a snack of your fingers. Dmitry Belyaev, of the Russian Academy of Science’s Institute of Cytology and Genetics, wondered how our human ancestors had domesticated dogs from their equally wild wolf forebears. Might he, within a comparatively short stretch of time, accomplish a similar feat by transforming the fearful fox into a friendly fox?

To find out, Belyaev set to work with 30 male and 100 female foxes. From their offspring he selected and mated the tamest 5 percent of males and 20 percent of females. (He measured tameness by the foxes’ responses to attempts to feed, handle, and stroke them.) Over more than 30 generations of foxes, Belyaev and his successor, Lyudmila Trut, repeated that simple procedure. Forty years and 45,000 foxes later, they had a new breed of foxes that, in Trut’s (1999) words, are “docile, eager to please, and unmistakably domesticated. . . . Before our eyes, ‘the Beast’ has turned into ‘beauty,’ as the aggressive behavior of our herd’s wild [ancestors] entirely disappeared.” So friendly and eager for human contact are they, so inclined to whimper to attract attention and to lick people like affectionate dogs, that the cash-strapped institute seized on a way to raise funds—marketing its foxes to people as house pets.

When certain traits are selected—by conferring a reproductive advantage to an individual or a species—those traits, over time, will prevail. Dog breeders, as Robert Plomin and his colleagues (1997) remind us, have given us sheepdogs that herd, retrievers that retrieve, trackers that track, and pointers that point. Psychologists, too, have bred dogs, mice, and rats whose genes predispose them to be serene or reactive, quick learners or slow.

Does natural selection also explain our human tendencies? Nature has indeed selected advantageous variations from among the mutations (random errors in gene replication) and from the new gene combinations produced at each human conception. But the tight genetic leash that predisposes a dog’s retrieving, a cat’s pouncing, or an ant’s nest building is looser on humans.

molecular genetics the subfield of biology that studies the molecular structure and function of genes.

evolutionary psychology the study of the evolution of behavior and the mind, using principles of natural selection.

natural selection the principle that, among the range of inherited trait variations, those that lead to increased reproduction and survival will most likely be passed on to succeeding generations.

From wary to winsome More than 40 years into the fox-breeding experiment, most of the offspring are devoted, affectionate, and capable of forming strong bonds with people. (Photo: L. N. Trut, American Scientist (1999) 87: 160–169)
The genes selected during our ancestral history provide more than a long leash; they endow us with a great capacity to learn and therefore to adapt to life in varied environments, from the tundra to the jungle. Genes and experience together wire the brain. Our adaptive flexibility in responding to different environments contributes to our fitness—our ability to survive and reproduce.
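Belyaev and Trut’s procedure is, at heart, a simple selection loop: score each animal on a trait, breed only from the top scorers, and repeat. A minimal sketch of that logic in Python, with made-up numbers (the normally distributed “tameness” scores and the inheritance rule are illustrative assumptions, not data from the actual experiment):

```python
import random

def breed_generation(population, top_fraction=0.2, litter_size=4, trait_spread=0.5):
    """Keep the tamest fraction of the population and breed from them.
    Offspring scores scatter around the surviving parents' mean tameness."""
    keep = max(2, int(len(population) * top_fraction))
    survivors = sorted(population, reverse=True)[:keep]
    parent_mean = sum(survivors) / len(survivors)
    return [random.gauss(parent_mean, trait_spread)
            for _ in range(len(survivors) * litter_size)]

random.seed(1)
# Founding foxes: tameness scores scattered around 0 (arbitrary units).
population = [random.gauss(0, 1) for _ in range(130)]

for generation in range(30):
    population = breed_generation(population)

mean_tameness = sum(population) / len(population)
# After 30 generations of breeding only the tamest ~20 percent, the
# population mean has drifted well above the founders' mean of ~0.
print(round(mean_tameness, 2))
```

Run with different seeds, the mean tameness ratchets upward generation by generation; the same logic, applied to real foxes, is what turned “the Beast” into “beauty.”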

Evolutionary Success Helps Explain Similarities

mutation a random error in gene replication that leads to a change.

Although human differences grab our attention, our deep similarities also demand explanation. And in the big picture, our lives are remarkably alike. Visit the international arrivals area at Amsterdam’s Schiphol Airport, a world hub where arriving passengers meet their excited loved ones. There you will see the same delighted joy in the faces of Indonesian grandmothers, Chinese children, and homecoming Dutch. Evolutionary psychologist Steven Pinker (2002, p. 73) believes it is no wonder that our emotions, drives, and reasoning “have a common logic across cultures.” Our shared human traits “were shaped by natural selection acting over the course of human evolution.”

Our behavioral and biological similarities arise from our shared human genome. No more than 5 percent of the genetic differences among humans arise from population group differences. Some 95 percent of genetic variation exists within populations (Rosenberg et al., 2002). The typical genetic difference between two Icelandic villagers or between two Kenyans is much greater than the average difference between the two groups. Thus, noted geneticist Richard Lewontin (1982), if after a worldwide catastrophe only Icelanders or Kenyans survived, the human species would suffer only “a trivial reduction” in its genetic diversity.

And how did we develop this shared human genome? At the dawn of human history, our ancestors faced certain questions: Who is my ally, who my foe? What food should I eat? With whom should I mate? Some individuals answered those questions more successfully than others. For example, some women’s experience of nausea in the critical first three months of pregnancy predisposes them to avoid certain bitter, strongly flavored, and novel foods. Avoiding such foods has survival value, since they are the very foods most often toxic to embryonic development (Schmitt & Pilcher, 2004). Early humans disposed to eat nourishing rather than poisonous food survived to contribute their genes to later generations. Those who deemed leopards “nice to pet” often did not. Similarly successful were those whose mating helped them produce and nurture offspring. Over generations, the genes of individuals not so disposed tended to be lost from the human gene pool. As genes contributing to success continued to be selected, behavioral tendencies and thinking and learning capacities emerged that prepared our Stone Age ancestors to survive, reproduce, and send their genes into the future.
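Lewontin’s point is, at bottom, a statement about variance components: total variation splits into variation within groups plus variation between group averages. A toy numerical sketch, with invented trait scores chosen only to echo the roughly 95/5 within/between split Rosenberg and colleagues report (these are hypothetical numbers, not genetic data):

```python
import random

random.seed(0)

# Two hypothetical populations scored on some trait.
# Within-group spread (sd = 10) dwarfs the between-group mean gap (2).
group_a = [random.gauss(100, 10) for _ in range(5000)]
group_b = [random.gauss(102, 10) for _ in range(5000)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

everyone = group_a + group_b
total_var = variance(everyone)
grand_mean = sum(everyone) / len(everyone)

# Between-group variance: how far each group's mean sits from the grand mean.
between_var = (
    len(group_a) * (sum(group_a) / len(group_a) - grand_mean) ** 2
    + len(group_b) * (sum(group_b) / len(group_b) - grand_mean) ** 2
) / len(everyone)

share_within = 1 - between_var / total_var
# With these made-up numbers, roughly 99 percent of the variance lies
# within the groups, not between them.
print(round(share_within, 2))
```

When individual differences within each group are large relative to the gap between group averages, almost all the diversity survives even if one group vanishes, which is exactly Lewontin’s “trivial reduction” argument.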

Outdated Tendencies

As inheritors of this prehistoric genetic legacy, we are predisposed to behave in ways that promoted our ancestors’ surviving and reproducing. We love the taste of sweets and fats, which once were hard to come by but which prepared our ancestors to survive famines. With famine now rare in Western cultures, and sweets and fats beckoning us from store shelves, fast-food outlets, and vending machines, obesity has become a growing problem. Our natural dispositions, rooted deep in history, are mismatched with today’s junk-food environment (Colarelli & Dettman, 2003). We are, in some ways, biologically prepared for a world that no longer exists.

|| Despite high infant mortality and rampant disease in past millennia, not one of your countless ancestors died childless. ||

Evolutionary Psychology Today

Charles Darwin’s theory of evolution has been an organizing principle for biology for a long time. Jared Diamond (2001) notes that “virtually no contemporary scientists believe that Darwin was basically wrong.” Today, Darwin’s theory lives on in “the second Darwinian revolution”: the application of evolutionary principles to psychology. In concluding On the Origin of Species, Darwin (1859, p. 346) anticipated this, foreseeing “open fields for far more important researches. Psychology will be based on a new foundation.” Evolutionary psychologists have addressed questions such as these:

• Why do infants start to fear strangers about the time they become mobile?

• Why are biological fathers so much less likely than unrelated boyfriends to abuse and murder the children with whom they share a home?

• Why do so many more people have phobias about spiders, snakes, and heights than about more dangerous threats, such as guns and electricity?

• Why do humans share some universal moral ideas?

• How are men and women alike? How and why do men’s and women’s sexuality differ?

To see how evolutionary psychologists think and reason, let’s pause now to explore that last question.

An Evolutionary Explanation of Human Sexuality

11-5 How might an evolutionary psychologist explain gender differences in mating preferences?

Having faced many similar challenges throughout history, men and women have adapted in similar ways. Whether male or female, we eat the same foods, avoid the same predators, and perceive, learn, and remember similarly. It is only in those domains where we have faced differing adaptive challenges—most obviously in behaviors related to reproduction—that we differ, say evolutionary psychologists.

|| Those who are troubled by an apparent conflict between scientific and religious accounts of human origins may find it helpful to recall that different perspectives of life can be complementary. For example, the scientific account attempts to tell us when and how; religious creation stories usually aim to tell about an ultimate who and why. As Galileo explained to the Grand Duchess Christina, “The Bible teaches how to go to heaven, not how the heavens go.” ||


The New Yorker Collection, 2003, Michael Crawford from cartoonbank.com. All rights reserved.

Gender Differences in Sexuality

“Not tonight, hon, I have a concussion.”

Differ we do, report psychologists Roy Baumeister, Kathleen Catanese, and Kathleen Vohs (2001). They invite us to consider whether women or men have the stronger sex drive. Who desires more frequent sex, thinks more about sex, masturbates more often, initiates more sex, and sacrifices more to gain sex? The answers, they report, are men, men, men, men, and men. For example, in one BBC survey of more than 200,000 people in 53 nations, men everywhere more strongly agreed that “I have a strong sex drive” and “It doesn’t take much to get me sexually excited” (Lippa, 2008). Indeed, “with few exceptions anywhere in the world,” report cross-cultural psychologist Marshall Segall and his colleagues (1990, p. 244), “males are more likely than females to initiate sexual activity.” This is among the largest of gender differences in sexuality (Regan & Atkins, 2007). Consider:

• In a survey of 289,452 entering U.S. college students, 58 percent of men but only 34 percent of women agreed that “if two people really like each other, it’s all right for them to have sex even if they’ve known each other for a very short time” (Pryor et al., 2005). “I can imagine myself being comfortable and enjoying ‘casual’ sex with different partners,” agreed 48 percent of men and 12 percent of women in a survey of 4901 Australians (Bailey et al., 2000).

• In another survey of 3432 U.S. 18- to 59-year-olds, 48 percent of the women but only 25 percent of the men cited affection as a reason for first intercourse. And how often do they think about sex? “Every day” or “Several times a day,” acknowledged 19 percent of the women and 54 percent of the men (Laumann et al., 1994). Ditto for the sexual thoughts of Canadians: “Several times a day,” agreed 11 percent of women and 46 percent of men (Fischtein et al., 2007).

• In surveys, gay men (like straight men) report more interest in uncommitted sex, more responsiveness to visual sexual stimuli, and more concern with their partner’s physical attractiveness than do lesbian women (Bailey et al., 1994; Doyle, 2005; Schmitt, 2007).

“It’s not that gay men are oversexed; they are simply men whose male desires bounce off other male desires rather than off female desires.” Steven Pinker, How the Mind Works, 1997

gender in psychology, the biologically and socially influenced characteristics by which people define male and female.

Gender differences in attitudes extend to differences in behavior. Gay male couples report having sex more often than do lesbian couples (Peplau & Fingerhut, 2007). And in the first year of Vermont’s same-sex civil unions, men were only one-third of those electing this legal partnership (Rothblum, 2007). Casual, impulsive sex is most frequent among males with traditional masculine attitudes (Pleck et al., 1993).

Russell Clark and Elaine Hatfield (1989, 2003) observed this striking gender difference in 1978 when they sent some average-looking student research assistants strolling across the Florida State University quadrangle. Spotting an attractive person of the other sex, a researcher would approach and say, “I have been noticing you around campus and I find you to be very attractive. Would you go to bed with me tonight?” The women all declined, some obviously irritated (“What’s wrong with you, creep? Leave me alone!”). But 75 percent of the men readily agreed, often replying with comments such as “Why do we have to wait until tonight?” (All were then truthfully told this was just an experiment.)

Somewhat astonished by their result, Clark and Hatfield repeated their study in 1982 and twice more during the late 1980s, a high-risk AIDS time in the United States (Clark, 1990). Each time, virtually no women, but half or more of the men, agreed to go to bed with a stranger.

Men also have a lower threshold for perceiving warm responses as a sexual come-on. In study after study, men more often than women attribute a woman’s friendliness to sexual interest (Abbey, 1987; Johnson et al., 1991). Misattributing women’s cordiality as a come-on helps explain—but does not excuse—men’s greater sexual assertiveness (Kolivas & Gross, 2007). The unfortunate results can range from sexual harassment to date rape.


Natural Selection and Mating Preferences

As biologists use natural selection to explain the mating behaviors of many species, so evolutionary psychologists use natural selection to explain a worldwide human sexuality difference: Women’s approach to sex is usually more relational, and men’s more recreational (Schmitt, 2005, 2007). The explanation goes like this: While a woman usually incubates and nurses one infant at a time, a male can spread his genes through other females. Our natural yearnings are our genes’ way of reproducing themselves. In our ancestral history, women most often sent their genes into the future by pairing wisely, men by pairing widely. “Humans are living fossils—collections of mechanisms produced by prior selection pressures,” says evolutionary psychologist David Buss (1995).

And what do heterosexual men and women find attractive in the other sex? Some aspects of attractiveness cross place and time. Men in a wide range of cultures, from Australia to Zambia (FIGURE 11.3), judge women as more attractive if they have a youthful appearance (Buss, 1994). Evolutionary psychologists say that men who were drawn to healthy, fertile-appearing women—women with smooth skin and a youthful shape suggesting many childbearing years to come—stood a better chance of sending their genes into the future. And sure enough, men feel most attracted to women whose waists are (or are surgically altered to be) roughly a third narrower than their hips—a sign of future fertility.

Moreover, just as evolutionary psychology predicts, men are most attracted to women who, in the ancestral past (when ovulation began later than today), were at ages associated with peak fertility. Thus teen boys are most excited by a woman several years older than themselves, report Douglas Kenrick and his colleagues (in press). Mid-twenties men prefer women around their own age. And older men prefer younger women. This pattern, they report, consistently appears across European singles ads, Indian marital ads, and marriage records from North and South America, Africa, and the Philippines (Singh, 1993; Singh & Randall, 2007).

Women, in turn, prefer stick-around dads over likely cads. They are attracted to men who seem mature, dominant, bold, and affluent (Singh, 1995). They prefer mates with the potential for long-term mating and investment in their joint offspring (Gangestad & Simpson, 2000). Such attributes, say the evolutionary psychologists, connote a capacity to support and protect (Buss, 1996, 2000; Geary, 1998). In one experiment, women skillfully discerned which men most liked looking at baby pictures, and they rated those men higher as potential long-term mates (Roney et al., 2006).

© The New Yorker Collection, Matthew Diffee from cartoonbank.com. All rights reserved.

“What about you, Walter—how do you feel about same-age marriage?”

FIGURE 11.3 Worldwide mating preferences In a wide range of cultures studied (indicated by the red dots), men more than women preferred physical features suggesting youth and health—and reproductive potential. Women more than men preferred mates with resources and social status. Researchers credit (or blame) natural selection (Buss, 1994).


© The New Yorker Collection, 1999, Robert Mankoff from cartoonbank.com. All rights reserved.

There is a principle at work here, say evolutionary psychologists: Nature selects behaviors that increase the likelihood of sending one’s genes into the future. As mobile gene machines, we are designed to prefer whatever worked for our ancestors in their environments. They were predisposed to act in ways that would leave grandchildren— had they not been, we wouldn’t be here. And as carriers of their genetic legacy, we are similarly predisposed.

Critiquing the Evolutionary Perspective

11-6 What are the key criticisms of evolutionary psychology?

“I had a nice time, Steve. Would you like to come in, settle down, and raise a family?”

Without disputing nature’s selection of traits that enhance gene survival, critics see problems with evolutionary psychology. It often, they say, starts with an effect (such as the gender sexuality difference) and works backward to propose an explanation. So let’s imagine a different observation and reason backward. If men were uniformly loyal to their mates, might we not reason that the children of these committed, supportive fathers would more often survive to perpetuate their genes? Might not men also be better off bonded to one woman—both to increase their odds of impregnation and to keep her from the advances of competing men? Might not a ritualized bond—a marriage—also spare women from chronic male harassment? Such suggestions are, in fact, evolutionary explanations for why humans tend to pair off monogamously. One can hardly lose at hindsight explanation, which is, said paleontologist Stephen Jay Gould (1997), mere “speculation [and] guesswork in the cocktail party mode.”

Some also worry about the social consequences of evolutionary psychology. Does it suggest a genetic determinism that strikes at the heart of progressive efforts to remake society (Rose, 1999)? Does it undercut moral responsibility? Could it be used to rationalize “high-status men marrying a series of young, fertile women” (Looy, 2001)?

Much of who we are is not hard-wired, agree evolutionary psychologists. What’s considered attractive does vary somewhat with time and place. The voluptuous Marilyn Monroe ideal of the 1950s has been replaced by the twenty-first-century leaner, yet still curvy, athletic image. Moreover, cultural expectations can bend the genders. If socialized to value lifelong commitment, men may sexually bond with one partner; if socialized to accept casual sex, women may willingly have sex with many partners. Social expectations also shape gender differences in mate preferences.
Show Alice Eagly and Wendy Wood (1999; Wood & Eagly, 2002, 2007) a culture with gender inequality—where men are providers and women are homemakers—and they will show you a culture where men strongly desire youth and domestic skill in their potential mates, and where women seek status and earning potential in their mates. Show Eagly and Wood a culture with gender equality, and they will show you a culture with smaller gender differences in mate preferences.

Evolutionary psychologists reassure us that the sexes, having faced similar adaptive problems, are far more alike than different. They stress that humans have a great capacity for learning and social progress. (We come equipped to adapt and survive, whether living in igloos or tree houses.) They point to the coherence and explanatory power of evolutionary principles, especially those offering testable predictions (for example, that we will favor others to the extent that they share our genes or can later reciprocate our favors). And they remind us that the study of how we came to be need not dictate how we ought to be. Understanding our propensities sometimes helps us overcome them.


Review

Behavior Genetics and Evolutionary Psychology

11-1 What are genes, and how do behavior geneticists explain our individual differences?

Chromosomes are coils of DNA containing gene segments that, when “turned on” (expressed), code for the proteins that form our body’s building blocks. Most human traits are influenced by many genes acting together. Behavior geneticists seek to quantify genetic and environmental influences on our traits. Studies of identical twins, fraternal twins, and adoptive families help specify the influence of genetic nature and of environmental nurture, and the interaction between them (meaning that the effect of each depends on the other). The stability of temperament suggests a genetic predisposition.

11-2 What is heritability, and how does it relate to individuals and groups?

Heritability describes the extent to which variation among members of a group can be attributed to genes. Heritable individual differences in traits such as height or intelligence need not explain group differences. Genes mostly explain why some are taller than others, but not why people today are taller than a century ago.

11-3 What is the promise of molecular genetics research?

Molecular geneticists study the molecular structure and function of genes. Psychologists and molecular geneticists are cooperating to identify specific genes—or more often, teams of genes—that put people at risk for disorders.

11-4 How do evolutionary psychologists use natural selection to explain behavior tendencies?

Evolutionary psychologists seek to understand how natural selection has shaped our traits and behavior tendencies. The principle of natural selection states that variations increasing the odds of reproducing and surviving are most likely to be passed on to future generations. Some variations arise from mutations (random errors in gene replication), others from new gene combinations at conception. Charles Darwin, whose theory of evolution has for a long time been an organizing principle in biology, anticipated the contemporary application of evolutionary principles in psychology.

11-5 How might an evolutionary psychologist explain gender differences in mating preferences?

Men more than women approve of casual sex, think about sex, and misinterpret friendliness as sexual interest. Women more than men cite affection as a reason for first intercourse and have a relational view of sexual activity. Applying principles of natural selection, evolutionary psychologists reason that men’s attraction to multiple healthy, fertile-appearing partners increases their chances of spreading their genes widely. Because women incubate and nurse babies, they increase their own and their children’s chances of survival by searching for mates with the resources and the potential for long-term investment in their joint offspring.

11-6 What are the key criticisms of evolutionary psychology?

Critics argue that evolutionary psychologists start with an effect and work backward to an explanation, that the evolutionary perspective gives too little emphasis to social influences, and that the evolutionary viewpoint absolves people from taking responsibility for their sexual behavior. Evolutionary psychologists respond that understanding our predispositions can help us overcome them. They also cite the value of testable predictions based on evolutionary principles, as well as the coherence and explanatory power of those principles.

Terms and Concepts to Remember
behavior genetics, p. 132
environment, p. 132
chromosomes, p. 132
DNA (deoxyribonucleic acid), p. 132
genes, p. 132
genome, p. 132
identical twins, p. 134
fraternal twins, p. 134
temperament, p. 137
heritability, p. 138
interaction, p. 139
molecular genetics, p. 140
evolutionary psychology, p. 141
natural selection, p. 141
mutation, p. 142
gender, p. 144

Test Yourself

1. What is heritability?

2. What are the three main criticisms of the evolutionary explanation of human sexuality?

(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself

1. Would you want genetic tests on your unborn offspring? What would you do if you knew your child would be destined for hemophilia? A learning disability? A high risk of depression? Do you think society would benefit or lose if such embryos were aborted?

2. Whose reasoning do you find most persuasive—that of evolutionary psychologists or their critics? Why?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 12

Parents and Peers
Cultural Influences
Gender Development
Reflections on Nature and Nurture

Environmental Influences on Behavior

Parents and Peers

12-1 To what extent are our lives shaped by early stimulation, by parents, and by peers?

Our genes, as expressed in specific environments, influence our developmental differences. We are not “blank slates,” note Douglas Kenrick and his colleagues (in press). We are more like coloring books, with certain lines predisposed and experience filling in our picture. We are formed by nature and nurture. But what are the most influential components of our nurture? How do our early experiences, our family and peer relationships, and all our other experiences guide our development and contribute to our diversity?

Parents and Early Experiences

The formative nurture that conspires with nature begins at conception, with the prenatal environment in the womb, as embryos receive differing nutrition and varying levels of exposure to toxic agents. Nurture then continues outside the womb, where our early experiences foster brain development.

Experience and Brain Development


FIGURE 12.1 Experience affects brain development Mark Rosenzweig and David Krech raised rats either alone in an environment without playthings, or with other rats in an environment enriched with playthings changed daily. In 14 of 16 repetitions of this basic experiment, rats in the enriched environment developed significantly more cerebral cortex (relative to the rest of the brain’s tissue) than did those in the impoverished environment.

(FIGURE 12.1 panels contrast an impoverished environment and its rat brain cell with an enriched environment and its rat brain cell.)

(From “Brain changes in response to experience” by M. R. Rosenzweig, E. L. Bennett, and M. C. Diamond. Copyright © 1972 Scientific American, Inc. All rights reserved.)

Our genes dictate our overall brain architecture, but experience fills in the details, developing neural connections and preparing our brain for thought and language and other later experiences. So how do early experiences leave their “marks” in the brain? Mark Rosenzweig and David Krech opened a window on that process when they raised some young rats in solitary confinement and others in a communal playground. When they later analyzed the rats’ brains, those who died with the most toys had won. The rats living in the enriched environment, which simulated a natural environment, usually developed a heavier and thicker brain cortex (FIGURE 12.1). Rosenzweig was so surprised by this discovery that he repeated the experiment several times before publishing his findings (Renner & Rosenzweig, 1987; Rosenzweig, 1984). So great are the effects that, shown brief video clips of rats, you could tell from their activity and curiosity whether their environment had been impoverished or enriched (Renner & Renner, 1993). Bryan Kolb and Ian Whishaw (1998) noted extraordinary changes after 60 days in the enriched environment: the rats’ brain weights increased 7 to 10 percent and the number of synapses mushroomed by about 20 percent.

Such results have motivated improvements in environments for laboratory, farm, and zoo animals—and for children in institutions. Stimulation by touch or massage also benefits infant rats and premature babies (Field et al., 2007). “Handled” infants of both species develop faster neurologically and gain weight more rapidly. By giving preemies massage therapy, neonatal intensive care units now help them to go home sooner (Field et al., 2006).

Both nature and nurture sculpt our synapses. After brain maturation provides us with an abundance of neural connections, our experiences trigger a pruning process. Sights and smells, touches and tugs activate connections and strengthen them. Unused neural pathways weaken and degenerate. Similar to pathways through a forest, popular paths are broadened and less-traveled paths gradually disappear. The result by puberty is a massive loss of unemployed connections.

Here at the juncture of nurture and nature is the biological reality of early childhood learning. During early childhood—while excess connections are still on call—youngsters can most easily master such skills as the grammar and accent of another language. Lacking any exposure to language before adolescence, a person will never master any language. Likewise, lacking visual experience during the early years, people whose vision is restored by cataract removal never achieve normal perceptions. The brain cells normally assigned to vision have died or been diverted to other uses. For us to have optimum brain development, normal stimulation during the early years is critical. The maturing brain’s rule: Use it or lose it.

Stringing the circuits Young string musicians who started playing before age 12 have larger and more complex neural circuits controlling the note-making left-hand fingers than do string musicians whose training started later (Elbert et al., 1995).

Both photos courtesy of Avi Karni and Leslie Ungerleider, National Institute of Mental Health

Courtesy of C. Brune
The brain’s development does not, however, end with childhood. Our neural tissue is ever changing. If a monkey is trained to push a lever with a finger several thousand times a day, the brain tissue controlling that finger will change to reflect the experience. Human brains work similarly (FIGURE 12.2). Whether learning to keyboard or skateboard, we perform with increasing skill as our brain incorporates the learning.

“Genes and experiences are just two ways of doing the same thing—wiring synapses.” Joseph LeDoux, The Synaptic Self, 2002

FIGURE 12.2 A trained brain A well-learned finger-tapping task activates more motor cortex neurons (orange area, right) than were active in the same brain before training (left). (From Karni et al., 1998.)


MODULE 12 Environmental Influences on Behavior

How Much Credit (or Blame) Do Parents Deserve?


Even among chimpanzees, when one infant is hurt by another, the victim’s mother will often attack the offender’s mother (Goodall, 1968).

“So I blame you for everything—whose fault is that?”

“If you want to blame your parents for your own adult problems, you are entitled to blame the genes they gave you, but you are not entitled—by any facts I know—to blame the way they treated you. . . . We are not prisoners of our past.” Martin Seligman, What You Can Change and What You Can’t, 1994

In procreation, a woman and a man shuffle their gene decks and deal a life-forming hand to their child-to-be, who is then subjected to countless influences beyond their control. Parents, nonetheless, feel enormous satisfaction in their children’s successes, and feel guilt or shame over their failures. They beam over the child who wins an award. They wonder where they went wrong with the child who is repeatedly called into the principal’s office. Freudian psychiatry and psychology have been among the sources of such ideas, by blaming problems from asthma to schizophrenia on “bad mothering.” Society reinforces such parent-blaming: Believing that parents shape their offspring as a potter molds clay, people readily praise parents for their children’s virtues and blame them for their children’s vices. Popular culture endlessly proclaims the psychological harm toxic parents inflict on their fragile children. No wonder that it can seem risky to have and raise children. But do parents really produce future adults with an inner wounded child by being (take your pick from the toxic-parent lists) overbearing—or uninvolved? Pushy—or ineffectual? Overprotective—or distant? Are children really so easily wounded? If so, should we then blame our parents for our failings, and ourselves for our children’s failings? Or does all the talk of wounding fragile children through normal parental mistakes trivialize the brutality of real abuse? Peter Neubauer and Alexander Neubauer (1990, pp. 20–21) illustrate how, with hindsight, we may inappropriately credit or blame our parents: Identical twin men, now age 30, were separated at birth and raised in different countries by their respective adoptive parents. Both kept their lives neat— neat to the point of pathology. Their clothes were preened, appointments met precisely on time, hands scrubbed regularly to a raw, red color. When the first was asked why he felt the need to be so clean, his answer was plain. “My mother. 
When I was growing up she always kept the house perfectly ordered. She insisted on every little thing returned to its proper place, the clocks—we had dozens of clocks—each set to the same noonday chime. She insisted on this, you see. I learned from her. What else could I do?” The man’s identical twin, just as much a perfectionist with soap and water, explained his own behavior this way: “The reason is quite simple. I’m reacting to my mother, who was an absolute slob.” Parents do matter. The power of parenting to shape our differences is clearest at the extremes—the abused who become abusive, the neglected who become neglectful, the loved but firmly handled children who become self-confident and socially competent. The power of the family environment also frequently shows up in children’s political attitudes, religious beliefs, and personal manners. And it appears in the remarkable academic and vocational successes of children of the refugee “boat people” fleeing Vietnam and Cambodia—successes attributed to close-knit, supportive, even demanding families (Caplan et al., 1992). Yet in personality measures, shared environmental influences—including, as we have seen, the home influences siblings share—typically account for less than 10 percent of children’s differences. In the words of behavior geneticists Robert Plomin and Denise Daniels (1987), “Two children in the same family [are on average] as different from one another as are pairs of children selected randomly from the population.” To developmental psychologist Sandra Scarr (1993), this implies that “parents should be given less credit for kids who turn out great and blamed less for kids who don’t.” Knowing children are not easily sculpted by parental nurture, perhaps parents can relax a bit more and love their children for who they are.


Peer Influence

As children mature, what other experiences do the work of nurturing? At all ages, but especially during childhood and adolescence, we seek to fit in with groups and are subject to group influences. Consider the power of peers (Harris, 1998, 2000):

• Preschoolers who disdain a certain food often will eat that food if put at a table with a group of children who like it.

• Children who hear English spoken with one accent at home and another in the neighborhood and at school will invariably adopt the accent of their peers, not their parents. Accents (and slang) reflect culture, “and children get their culture from their peers,” notes Harris (2007).

• Teens who start smoking typically have friends who model smoking, suggest its pleasures, and offer cigarettes (J. S. Rose et al., 1999; R. J. Rose et al., 2003). Part of this peer similarity may result from a selection effect, as kids seek out peers with similar attitudes and interests. Those who smoke (or don’t) may select as friends those who also smoke (or don’t).

“Men resemble the times more than they resemble their fathers.” Ancient Arab proverb

Peer power As we develop, we play, mate, and partner with peers. No wonder children and youths are so sensitive and responsive to peer influences.

Howard Gardner (1998) concludes that parents and peers are complementary:


Parents are more important when it comes to education, discipline, responsibility, orderliness, charitableness, and ways of interacting with authority figures. Peers are more important for learning cooperation, for finding the road to popularity, for inventing styles of interaction among people of the same age. Youngsters may find their peers more interesting, but they will look to their parents when contemplating their own futures. Moreover, parents [often] choose the neighborhoods and schools that supply the peers.

As Gardner points out, parents can influence the culture that shapes the peer group, by helping to select their children’s neighborhood and schools. And because neighborhood influences matter, parents may want to become involved in intervention programs for youth that aim at a whole school or neighborhood. If the vapors of a toxic climate are seeping into a child’s life, that climate—not just the child—needs reforming. Even so, peers are but one medium of cultural influence.

“It takes a village to raise a child.” African proverb

Cultural Influences

12-2 How do cultural norms affect our behavior?

Compared with the narrow path taken by flies, fish, and foxes, the road along which environment drives us is wider. The mark of our species—nature’s great gift to us—is our ability to learn and adapt. We come equipped with a huge cerebral hard drive ready to receive many gigabytes of cultural software. Culture is the behaviors, ideas, attitudes, values, and traditions shared by a group of people and transmitted from one generation to the next (Brislin, 1988).

culture the enduring behaviors, ideas, attitudes, values, and traditions shared by a group of people and transmitted from one generation to the next.

Human nature, notes Roy Baumeister (2005), seems designed for culture. We are social animals, but more. Wolves are social animals; they live and hunt in packs. Ants are incessantly social, never alone. But “culture is a better way of being social,” notes Baumeister. Wolves function pretty much as they did 10,000 years ago. You and I enjoy things unknown to most of our century-ago ancestors, including electricity, indoor plumbing, antibiotics, and the Internet. Culture works. Primates exhibit the rudiments of culture, with local customs of tool use, grooming, and courtship. Younger chimpanzees and macaque monkeys sometimes invent customs—potato washing, in one famous example—and pass them on to their peers and offspring. But human culture does more. It supports our species’ survival and reproduction by enabling social and economic systems that give us an edge. Thanks to our mastery of language, we humans enjoy the preservation of innovation. Within the span of this day, I have, thanks to my culture, made good use of Post-it Notes, Google, and a single-shot skinny latte. On a grander scale, we have culture’s accumulated knowledge to thank for the last century’s 30-year extension of the average life expectancy in most countries where this book is being read. Moreover, culture enables an efficient division of labor. Although one lucky person gets his name on this book’s cover, the product actually results from the coordination and commitment of a team of women and men, no one of whom could produce it alone. Across cultures, we differ in our language, our monetary systems, our sports, which fork—if any—we eat with, even which side of the road we drive on. But beneath these differences is our great similarity—our capacity for culture. Culture provides the shared and transmitted customs and beliefs that enable us to communicate, to exchange money for things, to play, to eat, and to drive with agreed-upon rules and without crashing into one another.
This shared capacity for culture enables our striking group differences. Human nature manifests human diversity. If we all lived in homogeneous ethnic groups in separate regions of the world, as some people still do, cultural diversity would be less relevant. In Japan, almost 99 percent of the country’s 127 million people are of Japanese descent. Internal cultural differences are therefore minimal compared with those found in Los Angeles, where the public schools recently taught 82 different languages, or in Toronto or Vancouver, where minorities are one-third of the population and many are immigrants (as are 13.4 percent of all Canadians and 23 percent of Australians) (Axiss, 2007; Statistics Canada, 2002). I am ever mindful that the readers of this book are culturally diverse. You and your ancestors reach from Australia to Africa and from Singapore to Sweden.

Variation Across Cultures

norm an understood rule for accepted and expected behavior. Norms prescribe “proper” behavior.

We see our adaptability in cultural variations among our beliefs and our values, in how we raise our children and bury our dead, and in what we wear (or whether we wear anything at all). Riding along with a unified culture is like biking with the wind: As it carries us along, we hardly notice it is there. When we try riding against the wind we feel its force. Face to face with a different culture, we become aware of the cultural winds. Visiting Europe, most North Americans notice the smaller cars, the left-handed use of the fork, the uninhibited attire on the beaches. Stationed in Iraq, Afghanistan, and Kuwait, American and European soldiers alike realized how liberal their home cultures were. Arriving in North America, visitors from Japan and India struggle to understand why so many people wear their dirty street shoes in the house. Each cultural group evolves its own norms—rules for accepted and expected behavior. Many South Asians, for example, use only the right hand’s fingers for eating. The British have a norm for orderly waiting in line. Sometimes social expectations seem oppressive: “Why should it matter how I dress?” Yet, norms grease the social machinery and free us from self-preoccupation. Knowing when to clap or bow, which


Cultures differ Behavior seen as appropriate in one culture may violate the norms of another group. In Arab societies, but not in Western cultures, men often greet one another with a kiss.


fork to pick up first at a dinner party, and what sorts of gestures and compliments are appropriate—whether to greet people by shaking hands or kissing each cheek, for example— we can relax and enjoy one another without fear of embarrassment or insult. When cultures collide, their differing norms often befuddle. For example, if someone invades our personal space— the portable buffer zone we like to maintain around our bodies—we feel uncomfortable. Scandinavians, North Americans, and the British have traditionally preferred more personal space than do Latin Americans, Arabs, and the French (Sommer, 1969). At a social gathering, a Mexican seeking a comfortable conversation distance may end up walking around a room with a backpedaling Canadian. (You can experience this at a party by playing Space Invader as you talk with someone.) To the Canadian, the Mexican may seem intrusive; to the Mexican, the Canadian may seem standoffish. Cultures also vary in their expressiveness. Those with roots in northern European culture have perceived people from Mediterranean cultures as warm and charming but inefficient. The Mediterraneans, in turn, have seen northern Europeans as efficient but cold and preoccupied with punctuality (Triandis, 1981). Cultures vary in their pace of life, too. People from time-conscious Japan—where bank clocks keep exact time, pedestrians walk briskly, and postal clerks fill requests speedily—may find themselves growing impatient when visiting Indonesia, where clocks keep less accurate time and the pace of life is more leisurely (Levine & Norenzayan, 1999). In adjusting to their host countries, the first wave of U.S. Peace Corps volunteers reported that two of their greatest culture shocks, after the language differences, were the differing pace of life and the people’s differing sense of punctuality (Spradley & Phillips, 1972).

Variation Over Time

Consider, too, how rapidly cultures may change over time. English poet Geoffrey Chaucer (1342–1400) is separated from a modern Briton by only 20 generations, but the two would converse with great difficulty. In the thin slice of history since 1960, most Western cultures have changed with remarkable speed. Middle-class people fly to places they once only read about, e-mail those they once snail-mailed, and work in air-conditioned comfort where they once sweltered. They enjoy the convenience of online shopping, anywhere-anytime electronic communication, and—enriched by doubled per-person real income—eating out more than twice as often as did their parents back in the culture of 1960. With greater economic independence, today’s women are more likely to marry for love and less likely to endure abusive relationships out of economic need. Many minority groups enjoy expanded human rights. But some changes seem not so wonderfully positive. Had you fallen asleep in the United States in 1960 and awakened today, you would open your eyes to a culture with more divorce, delinquency, and depression. You would also find North Americans—like their counterparts in Britain, Australia, and New Zealand—spending more hours at work, fewer hours sleeping, and fewer hours with friends and family (Frank, 1999; Putnam, 2000). Whether we love or loathe these changes, we cannot fail to be impressed by their breathtaking speed. And we cannot explain them by changes in the human gene pool, which evolves far too slowly to account for high-speed cultural transformations. Cultures vary. Cultures change. And cultures shape our lives.

personal space the buffer zone we like to maintain around our bodies.



Culture and the Self

12-3 How do individualist and collectivist cultural influences affect people?

Cultures vary in the extent to which they give priority to the nurturing and expression of personal identity or group identity. To grasp the difference, imagine that someone were to rip away your social connections, making you a solitary refugee in a foreign land. How much of your identity would remain intact? The answer would depend in large part on whether you give greater priority to the independent self that marks individualism or to the interdependent self that marks collectivism. If as our solitary traveler you pride yourself on your individualism, a great deal of your identity would remain intact—the very core of your being, the sense of “me,” the awareness of your personal convictions and values. Individualists (often people from North America, Western Europe, Australia, or New Zealand) give relatively greater priority to personal goals and define their identity mostly in terms of personal attributes (Schimmack et al., 2005). They strive for personal control and individual achievement. In American culture, with its relatively big “I” and small “we,” 85 percent of people say it is possible “to pretty much be who you want to be” (Sampson, 2000). Individualists share the human need to belong. They join groups. But they are less focused on group harmony and doing their duty to the group (Brewer & Chen, 2007). Being more self-contained, individualists also move in and out of social groups more easily. They feel relatively free to switch places of worship, leave one job for another, or even leave their extended families and migrate to a new place. Marriage is often for as long as they both shall love. If set adrift in a foreign land as a collectivist, you might experience a greater loss of identity. Cut off from family, groups, and loyal friends, you would lose the connections that have defined who you are.
In a collectivist culture, group identifications provide a sense of belonging, a set of values, a network of caring individuals, an assurance of security. In return, collectivists have deeper, more stable attachments to their groups, often their family, clan, or company. In Korea, for example, people place less value on expressing a consistent, unique self-concept, and more on tradition and shared practices (Choi & Choi, 2002). Valuing communal solidarity, people in collectivist cultures place a premium on preserving group spirit and making sure others never lose face. What people say reflects not only what they feel (their inner attitudes) but what they presume others

collectivism giving priority to goals of one’s group (often one’s extended family or work group) and defining one’s identity accordingly.


individualism giving priority to one’s own goals over group goals and defining one’s identity in terms of personal attributes rather than group identifications.

Uniform requirements People in individualist Western cultures sometimes see traditional Japanese culture as confining. But from the Japanese perspective, the same tradition expresses a “serenity that comes to people who know exactly what to expect from each other” (Weisz et al., 1984).


feel (Kashima et al., 1992). Avoiding direct confrontation, blunt honesty, and uncomfortable topics, people often defer to others’ wishes and display a polite, self-effacing humility (Markus & Kitayama, 1991). In new groups, they may be shy and more easily embarrassed than their individualist counterparts (Singelis et al., 1995, 1999). Compared with Westerners, people in Japanese and Chinese cultures, for example, exhibit greater shyness toward strangers and greater concern for social harmony and loyalty (Bond, 1988; Cheek & Melchior, 1990; Triandis, 1994). Elders and superiors receive respect, and duty to family may trump personal career preferences. When the priority is “we,” not “me,” that individualized latte—“decaf, single shot, skinny, extra hot”—that feels so good to a North American in a coffee shop might sound more like a selfish demand in Seoul (Kim & Markus, 1999). To be sure, there is diversity within cultures. Even in the most individualistic countries, some people manifest collectivist values. And there are regional differences within cultures, such as the spirit of individualism in Japan’s “northern frontier” island of Hokkaido (Kitayama et al., 2006). But in general, people (especially men) in competitive, individualist cultures have more personal freedom, are less geographically bound to their families, enjoy more privacy, and take more pride in personal achievements (TABLE 12.1). During the 2000 and 2002 Olympic games, U.S. gold medal winners and the U.S. media covering them attributed the achievements mostly to the athletes themselves (Markus et al., 2006). “I think I just stayed focused,” explained swimming gold medalist Misty Hyman. “It was time to show the world what I could do.
I am just glad I was able to do it.” Japan’s gold medalist in the women’s marathon, Naoko Takahashi, had a different explanation: “Here is the best coach in the world, the best manager in the world, and all of the people who support me—all of these things were getting together and became a gold medal.” Even when describing friends, Westerners tend to use trait-describing adjectives (“she is helpful”), whereas East Asians more often use verbs that describe behaviors in context (“she helps her friends”) (Maass et al., 2006). Individualism’s benefits can come at the cost of more loneliness, more divorce, more homicide, and more stress-related disease (Popenoe, 1993; Triandis et al., 1988). People in individualist cultures demand more romance and personal fulfillment in marriage, subjecting the relationship to more pressure (Dion & Dion, 1993). In one survey, “keeping romance alive” was rated as important to a good marriage by 78 percent of U.S. women but only 29 percent of Japanese women (American Enterprise, 1992). In China, love songs often express enduring commitment and friendship (Rothbaum & Tsang, 1998). As one song put it, “We will be together from now on . . . I will never change from now to forever.”


Interdependence This young man is helping a fellow student who became trapped in the rubble that was their school after a devastating earthquake shook China in 2008. By identifying strongly with family and other groups, Chinese people tend to have a collectivist sense of “we” and an accompanying support network of care, which may have helped them struggle through the aftermath of this disaster.

“One needs to cultivate the spirit of sacrificing the little me to achieve the benefits of the big me.” Chinese saying

TABLE 12.1 Value Contrasts Between Individualism and Collectivism

Concept | Individualism | Collectivism
Self | Independent (identity from individual traits) | Interdependent (identity from belonging)
Life task | Discover and express one’s uniqueness | Maintain connections, fit in, perform role
What matters | Me—personal achievement and fulfillment; rights and liberties; self-esteem | Us—group goals and solidarity; social responsibilities and relationships; family duty
Coping method | Change reality | Accommodate to reality
Morality | Defined by individuals (self-based) | Defined by social networks (duty-based)
Relationships | Many, often temporary or casual; confrontation acceptable | Few, close and enduring; harmony valued
Attributing behavior | Behavior reflects one’s personality and attitudes | Behavior reflects social norms and roles

Sources: Adapted from Thomas Schoeneman (1994) and Harry Triandis (1994).



Culture and Child-Rearing

Cultures vary In Scotland’s Orkney Islands’ town of Stromness, social trust has enabled parents to park their toddlers outside of shops.


Parental involvement promotes development Parents in every culture facilitate their children’s discovery of their world, but cultures differ in what they deem important. Asian cultures place more emphasis on school and hard work than do North American cultures. This may help explain why Japanese and Taiwanese children get higher scores on mathematics achievement tests.

Child-rearing practices reflect cultural values that vary across time and place. Do you prefer children who are independent or children who comply? If you live in a Westernized culture, the odds are you prefer independence. “You are responsible for yourself,” Western families and schools tell their children. “Follow your conscience. Be true to yourself. Discover your gifts. Think through your personal needs.” A half-century and more ago, Western cultural values placed greater priority on obedience, respect, and sensitivity to others (Alwin, 1990; Remley, 1988). “Be true to your traditions,” parents then taught their children. “Be loyal to your heritage and country. Show respect toward your parents and other superiors.” Cultures can change. Many Asians and Africans live in cultures that value emotional closeness. Rather than being given their own bedrooms and entrusted to day care, infants and toddlers may sleep with their mothers and spend their days close to a family member (Morelli et al., 1992; Whiting & Edwards, 1988). These cultures encourage a strong sense of family self—a feeling that what shames the child shames the family, and what brings honor to the family brings honor to the self. Children across place and time have thrived under various child-rearing systems. Upper-class British parents traditionally handed off routine caregiving to nannies, then sent their children off to boarding school at about age 10. These children generally grew up to be pillars of British society, just like their parents and their boardingschool peers. In the African Gusii society, babies nurse freely but spend most of the day on their mother’s back—with lots of body contact but little face-to-face and language interaction. When the mother becomes pregnant, the toddler is weaned and handed over to someone else, often an older sibling. 
Westerners may wonder about the negative effects of this lack of verbal interaction, but then the African Gusii would in turn wonder about Western mothers pushing their babies around in strollers and leaving them in playpens and car seats (Small, 1997). Such diversity in child-rearing cautions us against presuming that our culture’s way is the only way to rear children successfully.

Developmental Similarities Across Groups

Mindful of how others differ from us, we often fail to notice the similarities predisposed by our shared biology. One 49-country study revealed that nation-to-nation differences in personality traits such as conscientiousness and extraversion are smaller than most people suppose (Terracciano et al., 2006). Australians see themselves as outgoing, German-speaking Swiss see themselves as conscientious, and Canadians see themselves as agreeable. Actually, these national stereotypes exaggerate differences that, although real, are modest. Compared with the person-to-person differences within groups, the differences between groups are small. Regardless of our culture, we humans are more alike than different. We share the same life cycle. We speak to our infants in similar ways and respond similarly to their coos and cries (Bornstein et al., 1992a,b). All over the world, the children of warm and supportive parents feel better about themselves and are less hostile than are the children of punitive and rejecting parents (Rohner, 1986; Scott et al., 1991). Even differences within a culture, such as those sometimes attributed to race, are often easily explained by an interaction between our biology and our culture. David Rowe and his colleagues (1994, 1995) illustrate this with an analogy: Black men tend to have higher blood pressure than White men. Suppose that (1) in both groups salt consumption correlates with blood pressure, and (2) salt consumption is higher among Black men than among White men. The blood pressure “race difference” might then actually be, at least partly, a diet difference—a cultural preference for certain foods.


And that, say Rowe and his colleagues, parallels psychological findings. Although Latino, Asian, Black, White, and Native Americans differ in school achievement and delinquency, the differences are “no more than skin deep.” To the extent that family structure, peer influences, and parental education predict behavior in one of these ethnic groups, they do so for the others as well. So as members of different ethnic and cultural groups, we may differ in surface ways, but as members of one species we seem subject to the same psychological forces. Our languages vary, yet they reflect universal principles of grammar. Our tastes vary, yet they reflect common principles of hunger. Our social behaviors vary, yet they reflect pervasive principles of human influence. Cross-cultural research can help us appreciate both our cultural diversity and our human likeness.

“When [someone] has discovered why men in Bond Street wear black hats he will at the same moment have discovered why men in Timbuctoo wear red feathers.” G. K. Chesterton, Heretics, 1905

Gender Development

Cognitive psychologists have demonstrated that we humans share an irresistible urge to organize our worlds into simple categories. Among the ways we classify people—as tall or short, fat or slim, smart or dull—one stands out: At your birth, everyone wanted to know, “Boy or girl?” Our biological sex in turn helps define our gender, the biological and social characteristics by which people define male or female. In considering how nature and nurture together create social diversity, gender is the prime case example. Let’s recap one of psychology’s main themes—that nature and nurture together create our differences and commonalities—by considering gender variations.

Gender Similarities and Differences

Having faced similar adaptive challenges, we are in most ways alike. Men and women are not from different planets—Mars and Venus—but from the same planet Earth. Tell me whether you are male or female and you give me virtually no clues to your vocabulary, intelligence, and happiness, or to the mechanisms by which you see, hear, learn, and remember. Your “opposite” sex is, in reality, your very similar sex. And should we be surprised? Among your 46 chromosomes, 45 are unisex. But males and females also differ, and differences command attention. Some much talked-about differences are actually quite modest, as Janet Hyde (2005) illustrated by graphically representing the gender difference in self-esteem scores, across many studies (FIGURE 12.3). Some differences are more striking. Compared with the average man, the average woman enters puberty two years Number of people sooner, lives five years longer, carries 70 percent more fat, has 40 percent less muscle, and is 5 inches shorter. Other gender differences appear throughout this book. Women can become sexually re-aroused immediately after org*sm. They smell fainter odors, express emotions more freely, and are offered help more often. They are doubly vulnerable to depression and anxiety, and their risk of developing eating disorders is 10 times greater. But then men are some 4 times more likely to commit suicide or suffer alcohol dependence. They are far more often diagnosed with autism, color-blindness, attention-deficit hyperactivity disorder (as Lower scores children), and antisocial personality disorder (as adults). Choose your gender and pick your vulnerability.

FIGURE 12.3 Much ado about a small difference Janet Hyde (2005) shows us two normal distributions that differ by the approximate magnitude (0.21 standard deviations) of the gender difference in self-esteem, averaged over all available samples. Moreover, though we can identify gender differences, the variation among individual women and among individual men greatly exceeds the difference between the average woman and man.
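The figure’s statistical point can be checked with a quick calculation. The sketch below is my own illustration, not from the text: it takes Hyde’s reported 0.21-standard-deviation difference and computes how much two normal distributions separated by that amount overlap, and how often a randomly chosen member of the higher-scoring group outscores a randomly chosen member of the other.

```python
import math

def norm_cdf(x):
    # Standard normal cumulative distribution, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d = 0.21  # Hyde's (2005) gender difference in self-esteem, in standard deviations

# Overlap coefficient: the shared area under two unit-variance normal
# curves whose means differ by d
overlap = 2.0 * norm_cdf(-d / 2.0)

# Probability that a random draw from the higher-scoring distribution
# exceeds a random draw from the lower-scoring one
superiority = norm_cdf(d / math.sqrt(2.0))

print(f"Distribution overlap: {overlap:.0%}")           # about 92%
print(f"Chance higher group scores more: {superiority:.0%}")  # about 56%
```

In other words, roughly nine-tenths of the two distributions coincide, and knowing someone’s sex improves a guess about relative self-esteem only slightly over a coin flip—the sense in which the difference is much ado about little.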

12-4 What are some ways in which males and females tend to be alike and to differ?


MODULE 12 Environmental Influences on Behavior

How much does biology bend the genders? What portion of our differences are socially constructed—by the gender roles our culture assigns us, and by how we are socialized as children? To answer those questions, let’s look more closely at some average gender differences in aggression, social power, and social connectedness.

Gender and Aggression

In surveys, men admit to more aggression than do women, and experiments confirm that men tend to behave more aggressively, such as by administering what they believe are more painful electric shocks (Bettencourt & Kernahan, 1997). The aggression gender gap pertains to physical aggression (such as hitting) rather than verbal, relational aggression (such as excluding someone). The gender gap in physical aggression appears in everyday life at various ages and in various cultures, especially those with gender inequality (Archer, 2004, 2006). In dating relationships, violent acts (such as slaps and thrown objects) are often mutual (Straus, 2008). Violent crime rates more strikingly illustrate the gender difference. The male-to-female arrest ratio for murder, for example, is 10 to 1 in the United States and almost 7 to 1 in Canada (FBI, 2007; Statistics Canada, 2007). Throughout the world, hunting, fighting, and warring are primarily men’s activities (Wood & Eagly, 2002, 2007). Men also express more support for war. The Iraq war, for example, has consistently been supported more by American men than by American women (Newport et al., 2007).

Gender and Social Power

Women’s 2008 representation in national parliaments ranged from 9% in the Arab States to 17% in the United States and 24% in Canada to 41% in Scandinavia (IPU, 2008).

Around the world, from Nigeria to New Zealand, people have perceived men as more dominant, forceful, and independent, women as more deferential, nurturant, and affiliative (Williams & Best, 1990). Indeed, in most societies men are socially dominant. When groups form, whether as juries or companies, leadership tends to go to males (Colarelli et al., 2006). Men worldwide place more importance on power and achievement (Schwartz & Rubel, 2005). As leaders, men tend to be more directive, even autocratic; women tend to be more democratic, more welcoming of subordinates’ participation in decision making (Eagly & Carli, 2007; van Engen & Willemsen, 2004). When people interact, men are more likely to utter opinions, women to express support (Aries, 1987; Wood, 1987). These differences carry into everyday behavior, where men are more likely to act as powerful people often do—talking assertively, interrupting, initiating touches, staring more, and smiling less (Hall, 1987; Leaper & Ayres, 2007; Major et al., 1990). Such behaviors help sustain social power inequities. When political leaders are elected, they usually are men, who held 82 percent of the seats in the world’s governing parliaments in 2008 (IPU, 2008). When salaries are paid, those in traditionally male occupations receive more.

Gender and Social Connectedness

aggression: physical or verbal behavior intended to hurt someone.

To Carol Gilligan and her colleagues (1982, 1990), the “normal” struggle to create a separate identity describes Western individualist males more than relationship-oriented females. Gilligan believes females tend to differ from males both in being less concerned with viewing themselves as separate individuals and in being more concerned with “making connections.” These gender differences in connectedness surface early in children’s play, and they continue with age. Boys typically play in large groups with an activity focus and little intimate discussion (Rose & Rudolph, 2006). Girls usually play in smaller groups, often with one friend. Their play is less competitive than boys’ and more imitative of social relationships. Both in play and other settings, females are more open and responsive to feedback than are males (Maccoby, 1990; Roberts, 1991). Asked


Every man for himself, or tend and befriend? Gender differences in the way we interact with others begin to appear at a very young age.

difficult questions—“Do you have any idea why the sky is blue?” “Do you have any idea why shorter people live longer?”—men are more likely than women to hazard answers rather than admit they don’t know, a phenomenon Traci Giuliano and her colleagues (1998a,b) call the male answer syndrome.

Females are more interdependent than males. As teens, girls spend more time with friends and less time alone (Wong & Csikszentmihalyi, 1991). As late adolescents, they spend more time on social-networking Internet sites (Pryor et al., 2007). As adults, women take more pleasure in talking face-to-face, and they tend to use conversation more to explore relationships. Men enjoy doing activities side-by-side, and they tend to use conversation to communicate solutions (Tannen, 1990; Wright, 1989). The communication difference is apparent even in student e-mails, from which people in one New Zealand study could correctly guess the author’s gender two-thirds of the time (Thomson & Murachver, 2001). These gender differences are sometimes reflected in patterns of phone communication. In France, women make 63 percent of phone calls and, when talking to a woman, stay connected longer (7.2 minutes) than men do when talking to other men (4.6 minutes) (Smoreda & Licoppe, 2000).

So, does this confirm the idea that women are just more talkative? To check that presumption, Matthias Mehl and his colleagues (2007) counted the number of words 396 college students spoke in the course of an average day. (How many words would you guess you speak a day?) They found that talkativeness varied enormously—by 45,000 words between their most and least talkative participants. But contrary to stereotypes of jabbering women, both men and women averaged about 16,000 words daily.

Women worldwide orient their interests and vocation more to people and less to things (Lippa, 2005, 2006, 2008).
In the workplace, they often are less driven by money and status and are more likely to choose reduced work hours (Pinker, 2008). In the home, they provide most of the care to the very young and the very old. Women also purchase 85 percent of greeting cards (Time, 1997). Women’s emphasis on caring helps explain another interesting finding: Although 69 percent of people say they have a close relationship with their father, 90 percent feel close to their mother (Hugick, 1989). When wanting understanding and someone with whom to share worries and hurts, both men and women usually turn to women, and both report their friendships with women to be more intimate, enjoyable, and nurturing (Rubin, 1985; Sapadin, 1988). Bonds and feelings of support are even stronger among women than among men (Rossi & Rossi, 1993). Women’s ties—as mothers, daughters, sisters, aunts, and

Question: Why does it take 200 million sperm to fertilize one egg? Answer: Because they won’t stop for directions.


“In the long years liker must they grow; The man be more of woman, she of man.” Alfred Lord Tennyson, The Princess, 1847


grandmothers—bind families together. As friends, women talk more often and more openly (Berndt, 1992; Dindia & Allen, 1992). And when they themselves must cope with stress, women more than men turn to others for support—they tend and befriend (Tamres et al., 2002; Taylor, 2002). As empowered people generally do, men value freedom and self-reliance, which helps explain why men of all ages, worldwide, are less religious and pray less (Benson, 1992; Stark, 2002). Men also dominate the ranks of professional skeptics. All 10 winners and 14 runners-up on the Skeptical Inquirer list of outstanding twentieth-century rationalist skeptics were men. In the Science and the Paranormal section of the 2007 Prometheus Books catalog (from the leading publisher of skepticism), one can find 94 male and 4 female authors. In one Skeptics Society survey, nearly 4 in 5 respondents were men (Shermer, 1999). Women, it appears, are more open to spirituality (and are far more likely to author books on spirituality than on skepticism). Gender differences in power, connectedness, and other traits peak in late adolescence and early adulthood—the very years most commonly studied (also the years of dating and mating). As teenagers, girls become progressively less assertive and more flirtatious; boys become more domineering and unexpressive. But by age 50, these differences have diminished. Men become more empathic and less domineering and women, especially if working, become more assertive and self-confident (Kasen et al., 2006; Maccoby, 1998).

The Nature of Gender


12-5 How do nature and nurture together form our gender?

What explains our gender diversity? Is biology destiny? Are we shaped by our cultures? A biopsychosocial view suggests it is both, thanks to the interplay among our biological dispositions, our developmental experiences, and our current situations (Wood & Eagly, 2002, 2007). In domains where men and women have faced similar challenges—regulating heat with sweat, developing tastes that nourish, growing calluses where the skin meets friction—the sexes are similar. Even when describing the ideal mate, both men and women put traits such as “kind,” “honest,” and “intelligent” at the top of their lists. But in domains pertinent to mating, evolutionary psychologists contend, guys act like guys whether they are elephants or elephant seals, rural peasants or corporate presidents. Such gender differences may be influenced genetically, by our differing sex chromosomes, and physiologically, by our differing concentrations of sex hormones.

Males and females are variations on a single form. Seven weeks after conception, you were anatomically indistinguishable from someone of the other sex. Then your genes activated your biological sex, which was determined by your twenty-third pair of chromosomes, the two sex chromosomes. From your mother, you received an X chromosome. From your father, you received the one chromosome out of 46 that is not unisex—either another X chromosome, making you a girl, or a Y chromosome, making you a boy. The Y chromosome includes a single gene that throws a master switch triggering the testes to develop and produce the principal male hormone, testosterone. Females also have testosterone, but less of it. The male’s greater output of testosterone starts the development of external male sex organs at about the seventh week. Another key period for sexual differentiation falls during the fourth and fifth prenatal months, when sex hormones bathe the fetal brain and influence its wiring.
Different patterns for males and females develop under the influence of the male’s greater testosterone and the female’s ovarian hormones (Hines,


X chromosome: the sex chromosome found in both men and women. Females have two X chromosomes; males have one. An X chromosome from each parent produces a female child.

Y chromosome: the sex chromosome found only in males. When paired with an X chromosome from the mother, it produces a male child.

testosterone: the most important of the male sex hormones. Both males and females have it, but the additional testosterone in males stimulates the growth of the male sex organs in the fetus and the development of the male sex characteristics during puberty.

“Genes, by themselves, are like seeds dropped onto pavement: powerless to produce anything.” Primatologist Frans B. M. de Waal (1999)


2004; Udry, 2000). Recent research confirms male-female differences during development in brain areas with abundant sex hormone receptors (Cahill, 2005). In adulthood, parts of the frontal lobes, an area involved in verbal fluency, are reportedly thicker in women. Part of the parietal cortex, a key area for space perception, is thicker in men. Other studies report gender differences in the hippocampus, the amygdala, and the volume of brain gray matter (the neural bodies) versus white matter (the axons and dendrites). Given sex hormones’ influence on development, what do you suppose happens when glandular malfunction or hormone injections expose a female embryo to excess testosterone? These genetically female infants are born with masculine-appearing genitals, which can either be accepted or altered surgically. Until puberty, such females tend to act in more aggressive “tomboyish” ways than do most girls, and they dress and play in ways more typical of boys than of girls (Berenbaum & Hines, 1992; Ehrhardt, 1987). Given a choice of toys, they (like boys) are more likely to play with cars and guns than with dolls and crayons. Some develop into lesbians, but most— like nearly all girls with traditionally feminine interests—become heterosexual. Moreover, the hormones do not reverse their gender identity; they view themselves as girls, not boys (Berenbaum & Bailey, 2003). Is the tomboyish behavior of these girls due to the prenatal hormones? If so, may we conclude that biological sex differences produce behavioral gender differences? Vervet monkeys seem to suggest one answer. Male vervets, like most little boys, will spend more time playing with “masculine” toys such as trucks, and female vervets, like most little girls, will choose “feminine” toys such as dolls (Hines, 2004). 
Moreover, experiments with many species, from rats to monkeys, confirm that female embryos given male hormones will later exhibit a typically masculine appearance and more aggressive behavior (Hines & Green, 1991). But a more complex picture emerges when we consider social influences. Girls who were prenatally exposed to excess testosterone frequently look masculine and are known to be “different,” so perhaps people also treat them more like boys. Thus, the effect of early exposure to sex hormones is both direct, in the girl’s biological appearance, and indirect, in the influence of social experiences that shape her. Like a sculptor’s two hands shaping a lump of clay, nature and nurture work together. Further evidence of biology’s influence on gender development comes from studies of genetic males who, despite normal male hormones and testes, are born without penises or with very small ones. In one study of 14 who underwent early sexreassignment surgery (which is now controversial) and were raised as girls, 6 later declared themselves as males, 5 were living as females, and 3 had an unclear sexual identity (Reiner & Gearhart, 2004). In one famous case, the parents of a Canadian boy who lost his penis to a botched circumcision followed advice to raise him as a girl rather than as a damaged boy. Alas, “Brenda” Reimer was not like most other girls. “She” didn’t like dolls. She tore her dresses with rough-and-tumble play. At puberty she wanted no part of kissing boys. Finally, Brenda’s parents explained what had happened, whereupon this young person immediately rejected the assigned female identity, got a haircut, and chose a male name, David. He ended up marrying a woman, becoming a stepfather, and, sadly, later committing suicide (Colapinto, 2000). “Sex matters,” concludes the National Academy of Sciences (2001). In combination with the environment, sex-related genes and physiology “result in behavioral and cognitive differences between males and females.”

The Nurture of Gender

Although biologically influenced, gender is also socially constructed. What biology initiates, culture accentuates.

“Sex brought us together, but gender drove us apart.”

role: a set of expectations (norms) about a social position, defining how those in the position ought to behave.

gender role: a set of expected behaviors for males or for females.

gender identity: our sense of being male or female.

gender typing: the acquisition of a traditional masculine or feminine role.

social learning theory: the theory that we learn social behavior by observing and imitating and by being rewarded or punished.

Gender Roles

Sex indeed matters. But from a biopsychosocial perspective, culture and the immediate situation matter, too. Culture, as we noted earlier, is everything shared by a group and transmitted across generations. We can see culture’s shaping power in the social expectations that guide men’s and women’s behavior. In psychology, as in the theater, a role refers to a cluster of prescribed actions—the behaviors we expect of those who occupy a particular social position. One set of norms defines our culture’s gender roles—our expectations about the way men and women should behave. In the United States 30 years ago, it was standard for men to initiate dates, drive the car, and pick up the check, and for women to decorate the home, buy and care for the children’s clothes, and select the wedding gifts. Gender roles exist outside the home, too. Compared with employed women, employed men in the United States spend about an hour and a half more on the job each day and about one hour less on household activities and caregiving (Amato et al., 2007; Bureau of Labor Statistics, 2004; Fisher et al., 2006). I do not have to tell you which parent, about 90 percent of the time in two-parent U.S. families, has stayed home with a sick child, arranged for the baby-sitter, or called the doctor (Maccoby, 1995). In Australia, women devote 54 percent more time to unpaid household work and 71 percent more time to child care than do men (Trewin, 2001). Gender roles can smooth social relations, saving awkward decisions about who does the laundry this week and who mows the lawn. But they often do so at a cost: If we deviate from such conventions, we may feel anxious.

Do gender roles reflect what is biologically natural for men and women? Or do cultures construct them? Gender-role diversity over time and space indicates that culture has a big influence. Nomadic societies of food-gathering people have only a minimal division of labor by sex. Boys and girls receive much the same upbringing.
In agricultural societies, where women work in the fields close to home, and men roam more freely herding livestock, children typically socialize into more distinct gender roles (Segall et al., 1990; Van Leeuwen, 1978). Among industrialized countries, gender roles and attitudes vary widely (UNICEF, 2006). Australia and the Scandinavian countries offer the greatest gender equity, Middle Eastern and North African countries the least (Social Watch, 2006). And consider: Would you say life is more satisfying when both spouses work for pay and share child care? If so, you would agree with most people in 41 of 44 countries, according to a Pew Global Attitudes survey (2003). Even so, the culture-to-culture differences were huge, ranging from Egypt, where people disagreed 2 to 1, to Vietnam, where people agreed 11 to 1.

The gendered tsunami In Sri Lanka, Indonesia, and India, the gendered division of labor helps explain the excess of female deaths from the 2004 tsunami. In some villages, 80 percent of those killed were women, who were mostly at home while the men were more likely to be at sea fishing or doing out-of-the-home chores (Oxfam, 2005).




Attitudes about gender roles also vary over time. At the opening of the twentieth century, only one country—New Zealand—granted women the right to vote (Briscoe, 1997). By the late 1960s and early 1970s, with the flick of an apron, the number of U.S. college women hoping to be full-time homemakers had plunged. And in the three decades after 1976, the percentage of women in medical, law, and psychology programs roughly doubled (FIGURE 12.4).

Gender ideas vary not only across cultures and over time, but also across generations. When families emigrate from Asia to Canada and the United States, their children tend to grow up with peers from a new culture. Many immigrant children, especially girls, feel torn between the competing sets of gender-role norms presented by peers and parents (Dion & Dion, 2001).

FIGURE 12.4 Women and the professions Law, medicine, and psychology have been attracting more and more women. (Data from professional associations reported by A. Cynkar, 2007.)

Gender and Child-Rearing

As society assigns each of us to a gender, the social category of male or female, the inevitable result is our strong gender identity, our sense of being male or female. To varying extents, we also become gender typed. That is, some boys more than others exhibit traditionally masculine traits and interests, and some girls more than others become distinctly feminine.

Social learning theory assumes that children learn gender-linked behaviors by observing and imitating and by being rewarded or punished. “Nicole, you’re such a good mommy to your dolls”; “Big boys don’t cry, Alex.” But parental modeling and rewarding of male-female differences aren’t enough to explain gender typing (Lytton & Romney, 1991). In fact, even when their families discourage traditional gender typing, children usually organize themselves into “boy worlds” and “girl worlds,” each guided by rules for what boys and girls do.

Cognition (thinking) also matters. In your own childhood, as you struggled to comprehend the world, you—like other children—formed schemas, or concepts that helped you make sense of your world. One of these was a schema for your own gender (Bem, 1987, 1993). Your gender schema then became a lens through which you viewed your experiences.

Social learning shapes gender schemas. Before age 1, children begin to discriminate male and female voices and faces (Martin et al., 2002). After age 2, language forces children to begin organizing their worlds on the basis of gender. English, for example, uses the pronouns he and she; other languages classify objects as masculine (“le train”) or feminine (“la table”). Young children are “gender detectives,” explain Carol Lynn Martin and Diane Ruble (2004). Once they grasp that two sorts of people exist—and that they are of one sort—they search for clues about gender, and they find them in language, dress, toys, and songs. Girls, they may decide, are the ones with long hair.
Having divided the human world in half, 3-year-olds will then like their own sex better and seek out their own kind for play. And having compared themselves with their concept of gender, they will adjust their behavior accordingly (“I am male—thus, masculine, strong, aggressive,” or “I am female—therefore, feminine, sweet, and helpful”). The rigidity of boy-girl stereotypes peaks at about age 5 or 6. If the new neighbor is a boy, a 6-year-old girl may just assume he cannot share her interests. For young children, gender looms large.



“How is it gendered?”


Reflections on Nature and Nurture


“There are trivial truths and great truths,” reflected the physicist Niels Bohr on some of the paradoxes of modern science. “The opposite of a trivial truth is plainly false. The opposite of a great truth is also true.” It appears true that our ancestral history helped form us as a species. Where there is variation, natural selection, and heredity, there will be evolution. The unique gene combination created when our mother’s egg engulfed our father’s sperm predisposed both our shared humanity and our individual differences. This is a great truth about human nature. Genes form us. But it also is true that our experiences form us. In our families and in our peer relationships, we learn ways of thinking and acting. Differences initiated by our nature may be amplified by our nurture. If their genes and hormones predispose males to be more physically aggressive than females, culture may magnify this gender difference through norms that encourage males to be macho and females to be the kinder, gentler sex. If men are encouraged toward roles that demand physical power, and women toward more nurturing roles, each may then exhibit the actions expected of those who fill such roles and find themselves shaped accordingly. Roles remake their players. Presidents in time become more presidential, servants more servile. Gender roles similarly shape us. But gender roles are converging. Brute strength has become increasingly irrelevant to power and status (think Bill Gates and Oprah Winfrey). Thus both women and men are now seen as “fully capable of effectively carrying out organizational roles at all levels,” note Wendy Wood and Alice Eagly (2002). And as women’s employment in formerly male occupations has increased, gender differences in traditional masculinity or femininity and in what one seeks in a mate have diminished (Twenge, 1997). As the roles we play change over time, we change with them.

Culture matters As this exhibit at San Diego’s Museum of Man illustrates, children learn their culture. A baby’s foot can step into any culture.

***

If nature and nurture jointly form us, are we “nothing but” the product of nature and nurture? Are we rigidly determined? We are the product of nature and nurture (FIGURE 12.5), but we are also an open system. Genes are all-pervasive but not all-powerful; people may defy their genetic

FIGURE 12.5 The biopsychosocial approach to development

Biological influences: • Shared human genome • Individual genetic variations • Prenatal environment • Sex-related genes, hormones, and physiology

Psychological influences: • Gene-environment interaction • Neurological effect of early experiences • Responses evoked by our own temperament, gender, etc. • Beliefs, feelings, and expectations

Social-cultural influences: • Parental influences • Peer influences • Cultural individualism or collectivism • Cultural gender norms

Together, these three sets of influences shape individual development.


bent to reproduce, by electing celibacy. Culture, too, is all-pervasive but not all-powerful; people may defy peer pressures and do the opposite of the expected. To excuse our failings by blaming our nature and nurture is what philosopher-novelist Jean-Paul Sartre called “bad faith”—attributing responsibility for one’s fate to bad genes or bad influences. In reality, we are both the creatures and the creators of our worlds. We are—it is a great truth—the products of our genes and environments. Nevertheless (another great truth) the stream of causation that shapes the future runs through our present choices. Our decisions today design our environments tomorrow. Mind matters. The human environment is not like the weather—something that just happens. We are its architects. Our hopes, goals, and expectations influence our future. And that is what enables cultures to vary and to change so quickly.

***

I know from my mail and from public opinion surveys that some readers feel troubled by the naturalism and evolutionism of contemporary science. Readers from other nations bear with me, but in the United States there is a wide gulf between scientific and lay thinking about evolution. “The idea that human minds are the product of evolution is . . . unassailable fact,” declared a 2007 editorial in Nature, a leading science magazine. That sentiment concurs with a 2006 statement of “evidence-based facts” about evolution jointly issued by the national science academies of 66 nations (IAP, 2006). In The Language of God, Human Genome Project director Francis Collins (2006, pp. 141, 146), a self-described evangelical Christian, compiles the “utterly compelling” evidence that leads him to conclude that Darwin’s big idea is “unquestionably correct.” Yet a 2007 Gallup survey reports that half of U.S. adults do not believe in evolution’s role in “how human beings came to exist on Earth” (Newport, 2007).
Many of those who dispute the scientific story worry that a science of behavior (and evolutionary science in particular) will destroy our sense of the beauty, mystery, and spiritual significance of the human creature. For those concerned, I offer some reassuring thoughts.

When Isaac Newton explained the rainbow in terms of light of differing wavelengths, the poet Keats feared that Newton had destroyed the rainbow’s mysterious beauty. Yet, notes Richard Dawkins (1998) in Unweaving the Rainbow, Newton’s analysis led to an even deeper mystery—Einstein’s theory of special relativity. Moreover, nothing about Newton’s optics need diminish our appreciation for the dramatic elegance of a rainbow arching across a brightening sky.

When Galileo assembled evidence that the Earth revolved around the Sun, not vice versa, he did not offer irrefutable proof for his theory. Rather, he offered a coherent explanation for a variety of observations, such as the changing shadows cast by the Moon’s mountains. His explanation eventually won the day because it described and explained things in a way that made sense, that hung together. Darwin’s theory of evolution likewise is a coherent view of natural history. It offers an organizing principle that unifies various observations.

Francis Collins is not the only person of faith to find the scientific idea of human origins congenial with his spirituality. In the fifth century, St. Augustine (quoted by Wilford, 1999) wrote, “The universe was brought into being in a less than fully formed state, but was gifted with the capacity to transform itself from unformed matter into a truly marvelous array of structures and life forms.” Some 1600 years later, Pope John Paul II in 1996 welcomed a science-religion dialogue, finding it noteworthy that evolutionary theory “has been progressively accepted by researchers, following a series of discoveries in various fields of knowledge.” Meanwhile, many people of science are awestruck at the emerging understanding of the universe and the human creature. It boggles the mind—the entire universe

“Let’s hope that it’s not true; but if it is true, let’s hope that it doesn’t become widely known.” Lady Ashley, commenting on Darwin’s theory

“Is it not stirring to understand how the world actually works—that white light is made of colors, that color measures light waves, that transparent air reflects light . . . ? It does no harm to the romance of the sunset to know a little about it.” Carl Sagan, Skies of Other Worlds, 1988


“The causes of life’s history [cannot] resolve the riddle of life’s meaning.” Stephen Jay Gould, Rocks of Ages: Science and Religion in the Fullness of Life, 1999


popping out of a point some 14 billion years ago, and instantly inflating to cosmological size. Had the energy of this Big Bang been the tiniest bit less, the universe would have collapsed back on itself. Had it been the tiniest bit more, the result would have been a soup too thin to support life. Astronomer Sir Martin Rees has described Just Six Numbers (1999), any one of which, if changed ever so slightly, would produce a cosmos in which life could not exist. Had gravity been a tad bit stronger or weaker, or had the weight of a carbon proton been a wee bit different, our universe just wouldn’t have worked.

What caused this almost-too-good-to-be-true, finely tuned universe? Why is there something rather than nothing? How did it come to be, in the words of Harvard-Smithsonian astrophysicist Owen Gingerich (1999), “so extraordinarily right, that it seemed the universe had been expressly designed to produce intelligent, sentient beings”? Is there a benevolent superintelligence behind it all? Have there instead been an infinite number of universes born and we just happen to be the lucky inhabitants of one that, by chance, was exquisitely fine-tuned to give birth to us? Or does that idea violate Occam’s razor, the principle that we should prefer the simplest of competing explanations? On such matters, a humble, awed, scientific silence is appropriate, suggested philosopher Ludwig Wittgenstein: “Whereof one cannot speak, thereof one must be silent.”

Rather than fearing science, we can welcome its enlarging our understanding and awakening our sense of awe. In The Fragile Species, Lewis Thomas (1992) described his utter amazement that the Earth in time gave rise to bacteria and eventually to Bach’s Mass in B Minor. In a short 4 billion years, life on Earth has come from nothing to structures as complex as a 6-billion-unit strand of DNA and the incomprehensible intricacy of the human brain.
Atoms no different from those in a rock somehow formed dynamic entities that became conscious. Nature, says cosmologist Paul Davies (2007), seems cunningly and ingeniously devised to produce extraordinary, self-replicating, information-processing systems—us. Although we appear to have been created from dust, over eons of time, the end result is a priceless creature, one rich with potential beyond our imagining.


MODULE 12 Environmental Influences on Behavior

Review Environmental Influences on Behavior

12-1 To what extent are our lives shaped by early stimulation, by parents, and by peers? During maturation, a child’s brain changes as neural connections increase in areas associated with stimulating activity, and unused synapses degenerate. Parents influence their children in areas such as manners and political and religious beliefs, but not in other areas, such as personality. Language and other behaviors are shaped by peer groups, as children adjust to fit in. By choosing their children’s neighborhoods and schools, parents can exert some influence over peer group culture.

12-2 How do cultural norms affect our behavior? Cultural norms are rules for accepted and expected behaviors, ideas, attitudes, and values. Across places and over time, cultures differ in their norms. Despite such cultural variations, we humans share many common forces that influence behavior.

12-3 How do individualist and collectivist cultural influences affect people? Cultures based on self-reliant individualism, like those of most of the United States, Canada, Australia, and Western Europe, value personal independence and individual achievement. Identity is defined in terms of self-esteem, personal goals and attributes, and personal rights and liberties. Cultures based on socially connected collectivism, like those of many parts of Asia and Africa, value interdependence, tradition, and harmony, and they define identity in terms of group goals, commitments, and belonging to one’s group. Within any culture, the degree of individualism or collectivism varies from person to person.

12-4 What are some ways in which males and females tend to be alike and to differ? Human males and females are more alike than different, thanks to their similar genetic makeup. Regardless of our gender, we see, hear, learn, and remember similarly. Males and females do differ in body fat, muscle, height, age of onset of puberty, and life expectancy; in vulnerability to certain disorders; and in aggression, social power, and social connectedness.

12-5 How do nature and nurture together form our gender? Biological sex is determined by the twenty-third pair of chromosomes, to which the mother contributes an X chromosome and the father either an X chromosome (producing a female) or a Y chromosome (producing a male). A Y chromosome triggers additional testosterone release and male sex organs. Gender refers to the characteristics, whether biologically or socially influenced, by which people define male and female. Sex-related genes and hormones influence gender differences in behavior, possibly by influencing brain development. We also learn gender roles, which vary across place and time. Social learning theory proposes that we learn gender identity as we learn other things—through reinforcement, punishment, and observation.

Terms and Concepts to Remember

culture, p. 151
norm, p. 152
personal space, p. 153
individualism, p. 154
collectivism, p. 154
aggression, p. 158
X chromosome, p. 160
Y chromosome, p. 160
testosterone, p. 160
role, p. 162
gender role, p. 162
gender identity, p. 163
gender typing, p. 163
social learning theory, p. 163

Test Yourself

1. To predict whether a teenager smokes, ask how many of the teen’s friends smoke. One explanation for this correlation is peer influence. What’s another?
2. How do individualist and collectivist cultures differ?
3. What are gender roles, and what do their variations tell us about our human capacity for learning and adaptation?
4. How does the biopsychosocial approach explain our individual development?

(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself

1. To what extent, and in what ways, have your peers and your parents helped shape who you are?
2. What concept best describes you—collectivist or individualist? Do you fit completely in either category, or are you sometimes a collectivist and sometimes an individualist?
3. Do you consider yourself strongly gender typed or not strongly gender typed? What factors do you think have contributed to your feelings of masculinity or femininity?
4. How have your heredity and your environment influenced who you are today? Can you recall an important time when you determined your own fate in a way that was at odds with pressure you felt from either your heredity or your environment?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Developing Through the Life Span

modules
13 Prenatal Development and the Newborn
14 Infancy and Childhood
15 Adolescence
16 Adulthood, and Reflections on Developmental Issues

As we journey through life—from womb to tomb—when and how do we develop? Virtually all of us began walking around age 1 and talking by age 2. As children, we engaged in social play in preparation for life’s work. As adults, we all smile and cry, love and loathe, and occasionally ponder the fact that someday we will die. As reflected in these modules, developmental psychology examines how people are continually developing—physically, cognitively, and socially—from infancy through old age. Much of its research centers on three major issues:

1. Nature/nurture: How do genetic inheritance (our nature) and experience (the nurture we receive) influence our development?
2. Continuity/stages: Is development a gradual, continuous process like riding an escalator, or does it proceed through a sequence of separate stages, like climbing rungs on a ladder?
3. Stability/change: Do our early personality traits persist through life, or do we become different persons as we age?

In other modules, we engage the nature/nurture issue. At the end of the modules in this part, we reflect on the continuity and stability issues throughout the life span.

“Nature is all that a man brings with him into the world; nurture is every influence that affects him after his birth.” Francis Galton, English Men of Science, 1874

developmental psychology a branch of psychology that studies physical, cognitive, and social change throughout the life span.

169

module 13 Prenatal Development and the Newborn

Conception
Prenatal Development
The Competent Newborn

13-1 How does life develop before birth?

Conception

FIGURE 13.1 Life is sexually transmitted (a) Sperm cells surround an ovum. (b) As one sperm penetrates the egg’s jellylike outer coating, a series of chemical events begins that will cause sperm and egg to fuse into a single cell. If all goes well, that cell will subdivide again and again to emerge 9 months later as a 100-trillion-cell human being.


Nothing is more natural than a species reproducing itself. Yet nothing is more wondrous. With humans, the process starts when a woman’s ovary releases a mature egg—a cell roughly the size of the period at the end of this sentence. The woman was born with all the immature eggs she would ever have, although only 1 in 5000 will ever mature and be released. A man, in contrast, begins producing sperm cells at puberty. For the rest of his life, 24 hours a day, he will be a nonstop sperm factory, although the rate of production—in the beginning more than 1000 sperm during the second it takes to read this phrase—will slow with age.

Like space voyagers approaching a huge planet, the 200 million or more deposited sperm begin their race upstream, approaching a cell 85,000 times their own size. The relatively few reaching the egg release digestive enzymes that eat away its protective coating (FIGURE 13.1). As soon as one sperm begins to penetrate and is welcomed in, the egg’s surface blocks out the others. Before half a day elapses, the egg nucleus and the sperm nucleus fuse. The two have become one.

Consider it your most fortunate of moments. Among 200 million sperm, the one needed to make you, in combination with that one particular egg, won the race.

Prenatal Development

Fewer than half of all fertilized eggs, called zygotes, survive beyond the first 2 weeks (Grobstein, 1979; Hall, 2004). But for you and me, good fortune prevailed. One cell became 2, then 4—each just like the first—until this cell division produced a zygote of some 100 cells within the first week. Then the cells began to differentiate—to specialize in structure and function. How identical cells do this—as if one decides “I’ll become a brain, you become intestines!”—is a puzzle that scientists are just beginning to solve.



First known photo of Michael Phelps (If the playful cartoonist were to convey literal truth, a second arrow would also point to the egg that contributed the other half of Michael Phelps’ genes.)

zygote the fertilized egg; it enters a 2-week period of rapid cell division and develops into an embryo.

embryo the developing human organism from about 2 weeks after fertilization through the second month.

fetus the developing human organism from 9 weeks after conception to birth.

teratogens agents, such as chemicals and viruses, that can reach the embryo or fetus during prenatal development and cause harm.

|| Prenatal development
zygote: conception to 2 weeks
embryo: 2 weeks through 8 weeks
fetus: 9 weeks to birth ||

FIGURE 13.2 Prenatal development (a) The embryo grows and develops rapidly. At 40 days, the spine is visible and the arms and legs are beginning to grow. (b) Five days later, the inch-long embryo’s proportions have begun to change. The rest of the body is now bigger than the head, and the arms and legs have grown noticeably. (c) By the end of the second month, when the fetal period begins, facial features, hands, and feet have formed. (d) As the fetus enters the fourth month, its 3 ounces could fit in the palm of your hand.


About 10 days after conception, the zygote attaches to the mother’s uterine wall, beginning approximately 37 weeks of the closest human relationship. The zygote’s inner cells become the embryo (FIGURE 13.2a). Over the next 6 weeks, organs begin to form and function. The heart begins to beat. By 9 weeks after conception, the embryo looks unmistakably human (FIGURE 13.2c). It is now a fetus (Latin for “offspring” or “young one”). During the sixth month, organs such as the stomach have developed enough to allow a prematurely born fetus a chance of survival. At this point, the fetus is also responsive to sound (Hepper, 2005). Microphone readings taken inside the uterus have revealed that the fetus is exposed to the sound of its mother’s muffled voice (Ecklund-Flores, 1992). Immediately after birth, when newborns emerge from living 38 or so weeks underwater, they prefer this voice to another woman’s or to their father’s voice (Busnel et al., 1992; DeCasper et al., 1984, 1986, 1994).

At each prenatal stage, genetic and environmental factors affect our development. The placenta, which formed as the zygote’s outer cells attached to the uterine wall, transfers nutrients and oxygen from mother to fetus. The placenta also screens out many potentially harmful substances. But some substances slip by, including teratogens, which are harmful agents such as viruses and drugs. If the mother carries the HIV virus, her baby may also. If she is a heroin addict, her baby will be born a heroin addict. A pregnant woman never smokes alone; she and her fetus both experience reduced blood oxygen and a shot of nicotine. If she is a heavy smoker, her fetus may receive fewer nutrients and be born underweight and at risk for various problems (Pringle et al., 2005).

There is no known safe amount of alcohol during pregnancy. Alcohol enters the woman’s bloodstream—and her fetus’s—and depresses activity in both their central nervous systems.
“You shall conceive and bear a son. So then drink no wine or strong drink.” Judges 13:7

A pregnant mother’s alcohol use may prime her offspring to like alcohol. Teens whose mothers drank when pregnant are at risk for heavy drinking and alcohol dependence. In experiments, when pregnant rats drink alcohol, their young offspring later display a liking for alcohol’s odor (Youngentob et al., 2007). Even light drinking can affect the fetal brain (Braun, 1996; Ikonomidou et al., 2000), and persistent heavy drinking will put the fetus at risk for birth defects and intellectual disability. For about 1 in 800 infants, the effects are visible as fetal alcohol syndrome (FAS), marked by a small, misproportioned head and lifelong brain abnormalities (May & Gossage, 2001).

The Competent Newborn

13-2 What are some newborn abilities, and how do researchers explore infants’ mental abilities?

“I felt like a man trapped in a woman’s body. Then I was born.” Comedian Chris Bliss

fetal alcohol syndrome (FAS) physical and cognitive abnormalities in children caused by a pregnant woman’s heavy drinking. In severe cases, symptoms include noticeable facial misproportions.

habituation decreasing responsiveness with repeated stimulation. As infants gain familiarity with repeated exposure to a visual stimulus, their interest wanes and they look away sooner.


Prepared to feed and eat Animals are predisposed to respond to their offspring’s cries for nourishment.

Having survived prenatal hazards, we as newborns came equipped with automatic responses ideally suited for our survival. We withdrew our limbs to escape pain. If a cloth over our face interfered with our breathing, we turned our head from side to side and swiped at it.

New parents are often in awe of the coordinated sequence of reflexes by which their baby gets food. When something touches their cheek, babies turn toward that touch, open their mouth, and vigorously root for a nipple. Finding one, they automatically close on it and begin sucking—which itself requires a coordinated sequence of reflexive tonguing, swallowing, and breathing. Failing to find satisfaction, the hungry baby may cry—a behavior parents find highly unpleasant and very rewarding to relieve.

The pioneering American psychologist William James presumed that the newborn experiences a “blooming, buzzing confusion.” Until the 1960s, few people disagreed. It was said that, apart from a blur of meaningless light and dark shades, newborns could not see. But then scientists discovered that babies can tell you a lot—if you know how to ask. To ask, you must capitalize on what babies can do—gaze, suck, turn their heads. So, equipped with eye-tracking machines and pacifiers wired to electronic gear, researchers set out to answer parents’ age-old questions: What can my baby see, hear, smell, and think?

One technique developmental researchers use to answer such questions is a simple form of learning called habituation—a decrease in responding with repeated stimulation. A novel stimulus gets attention when first presented. But the more often the stimulus is presented, the weaker the response becomes. This seeming boredom with familiar stimuli gives us a way to ask infants what they see and remember.

Janine Spencer, Paul Quinn, and their colleagues (1997; Quinn, 2002) used a novelty-preference procedure to ask 4-month-olds how they recognize cats and dogs.
The researchers first showed the infants a series of images of cats or dogs. Which of the two animals in FIGURE 13.3 do you think the infants would find more novel (measured in looking time) after seeing a series of cats? It was the hybrid animal with the dog’s head (or with a cat’s head, if they had previously viewed a series of dogs). This suggests that infants, like adults, focus first on the face, not the body.

FIGURE 13.3 Quick—which is the cat? Researchers used cat-dog hybrid images such as these to test how infants categorize animals.

Indeed, we are born preferring sights and sounds that facilitate social responsiveness. As newborns, we turn our heads in the direction of human voices. We gaze longer at a drawing of a facelike image (FIGURE 13.4) than at a bull’s-eye pattern; yet we gaze more at a bull’s-eye pattern—which has contrasts much like those of the human eye—than at a solid disk (Fantz, 1961). We prefer to look at objects 8 to 12 inches away. Wonder of wonders, that just happens to be the approximate distance between a nursing infant’s eyes and its mother’s (Maurer & Maurer, 1988).

Within days after birth, our brain’s neural networks were stamped with the smell of our mother’s body. Thus, a week-old nursing baby, placed between a gauze pad from its mother’s bra and one from another nursing mother, will usually turn toward the smell of its own mother’s pad (MacFarlane, 1978). At 3 weeks, if given a pacifier that sometimes turns on recordings of its mother’s voice and sometimes that of a female stranger, an infant will suck more vigorously when it hears its now-familiar mother’s voice (Mills & Melhuish, 1974). So not only could we as young infants see what we needed to see, and smell and hear well, we were already using our sensory equipment to learn.

FIGURE 13.4 Newborns’ preference for faces When shown these two stimuli with the same elements, Italian newborns spent nearly twice as many seconds looking at the facelike image (Johnson & Morton, 1991). Canadian newborns—average age 53 minutes in one study—display the same apparently inborn preference to look toward faces (Mondloch et al., 1999).

Review Prenatal Development and the Newborn

13-1 How does life develop before birth? Developmental psychologists study physical, mental, and social changes throughout the life span. The life cycle begins at conception, when one sperm cell unites with an egg to form a zygote. Attached to the uterine wall, the developing embryo’s body organs begin to form and function. By 9 weeks, the fetus is recognizably human. Teratogens are potentially harmful agents that can pass through the placental screen and harm the developing embryo or fetus, as happens with fetal alcohol syndrome.

13-2 What are some newborn abilities, and how do researchers explore infants’ mental abilities? Newborns are born with sensory equipment and reflexes that facilitate their survival and their social interactions with adults. For example, they quickly learn to discriminate their mother’s smell and sound. Researchers use techniques that test habituation, such as the novelty-preference procedure, to explore infants’ abilities.

Terms and Concepts to Remember

developmental psychology, p. 169
zygote, p. 170
embryo, p. 171
fetus, p. 171
teratogens, p. 171
fetal alcohol syndrome (FAS), p. 172
habituation, p. 172

Test Yourself 1. Your friend—a regular drinker—hopes to become pregnant soon and has stopped drinking. Why is this a good idea? What negative effects might alcohol consumed during pregnancy have on a developing fetus? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. Are you surprised by the news of infants’ competencies? Or did you “know it all along”?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Physical Development
Cognitive Development
Social Development

“It is a rare privilege to watch the birth, growth, and first feeble struggles of a living human mind.” Annie Sullivan, in Helen Keller’s The Story of My Life, 1903

module 14 Infancy and Childhood

During infancy, a baby grows from newborn to toddler, and during childhood from toddler to teenager. We all traveled this path, developing physically, cognitively, and socially. From infancy on, brain and mind—neural hardware and cognitive software—develop together.

Physical Development

14-1 During infancy and childhood, how do the brain and motor skills develop?

Brain Development

In your mother’s womb, your developing brain formed nerve cells at the explosive rate of nearly one-quarter million per minute. The developing brain cortex actually overproduces neurons, with the number peaking at 28 weeks and then subsiding to a stable 23 billion or so at birth (Rabinowicz et al., 1996, 1999; de Courten-Myers, 2002). On the day you were born, you had most of the brain cells you would ever have. However, your nervous system was immature: After birth, the branching neural networks that eventually enabled you to walk, talk, and remember had a wild growth spurt (FIGURE 14.1).

From ages 3 to 6, the most rapid growth was in your frontal lobes, which enable rational planning. This helps explain why preschoolers display a rapidly developing ability to control their attention and behavior (Garon et al., 2008). The association areas—those linked with thinking, memory, and language—are the last cortical areas to develop. As they do, mental abilities surge (Chugani & Phelps, 1986; Thatcher et al., 1987). Fiber pathways supporting language and agility proliferate into puberty, after which a pruning process shuts down excess connections and strengthens others (Paus et al., 1999; Thompson et al., 2000).

As a flower unfolds in accord with its genetic instructions, so do we, in the orderly sequence of biological growth processes called maturation. Maturation decrees many of our commonalities—from standing before walking, to using nouns before adjectives. Severe deprivation or abuse can retard development, and ample parental experiences of talking and reading will help sculpt neural connections. Yet the genetic growth tendencies are inborn. Maturation sets the basic course of development; experience adjusts it.

maturation biological growth processes that enable orderly changes in behavior, relatively uninfluenced by experience.


FIGURE 14.1 Drawings of human cerebral cortex sections (at birth, 3 months, and 15 months) In humans, the brain is immature at birth. As the child matures, the neural networks grow increasingly more complex.


“This is the path to adulthood. You’re here.”

|| In the eight years following the 1994 launch of a U.S. “Back to Sleep” educational campaign, the number of infants sleeping on their stomachs dropped from 70 to 11 percent—and SIDS (Sudden Infant Death Syndrome) deaths fell by half (Braiker, 2005). ||


Motor Development

The developing brain enables physical coordination. As an infant’s muscles and nervous system mature, more complicated skills emerge. With occasional exceptions, the sequence of physical (motor) development is universal. Babies roll over before they sit unsupported, and they usually creep on all fours before they walk (FIGURE 14.2). These behaviors reflect not imitation but a maturing nervous system; blind children, too, crawl before they walk.

There are, however, individual differences in timing. In the United States, for example, 25 percent of all babies walk by age 11 months, 50 percent within a week after their first birthday, and 90 percent by age 15 months (Frankenburg et al., 1992). The recommended infant back-to-sleep position (putting babies to sleep on their backs to reduce the risk of a smothering crib death) has been associated with somewhat later crawling but not with later walking (Davis et al., 1998; Lipsitt, 2003).

Genes play a major role in motor development. Identical twins typically begin sitting up and walking on nearly the same day (Wilson, 1979). Maturation—including the rapid development of the cerebellum at the back of the brain—creates our readiness to learn walking at about age 1. Experience before that time has a limited effect. This is true for other physical skills, including bowel and bladder control. Before necessary muscular and neural maturation, don’t expect pleading or punishment to produce successful toilet training.

Maturation and Infant Memory

FIGURE 14.2 Triumphant toddlers Sit, crawl, walk, run—the sequence of these motor development milestones is the same the world around, though babies reach them at varying ages.

|| Can you recall your first day of preschool (or your third birthday party)? ||

Our earliest memories seldom predate our third birthday. We see this infantile amnesia in the memories of some preschoolers who experienced an emergency fire evacuation caused by a burning popcorn maker. Seven years later, they were able to recall the alarm and what caused it—if they were 4 to 5 years old at the time. Those experiencing the event as 3-year-olds could not remember the cause and usually misrecalled being already outside when the alarm sounded (Pillemer, 1995). Other studies confirm that the average age of earliest conscious memory is 3.5 years (Bauer, 2002). By 4 to 5 years, childhood amnesia is giving way to remembered experiences (Bruce et al., 2000). But even into adolescence, the brain areas underlying memory, such as the hippocampus and frontal lobes, continue to mature (Bauer, 2007).

Although we consciously recall little from before age 4, our memory was processing information during those early years. In 1965, while finishing her doctoral work, Carolyn Rovee-Collier observed an infant memory. She was also a new mom, whose colicky 2-month-old, Benjamin, could be calmed by moving a crib mobile. Weary of bonking the mobile, she strung a cloth ribbon connecting the mobile to Benjamin’s foot. Soon, he was kicking his foot to move the mobile. Thinking about her unintended home experiment, Rovee-Collier realized that, contrary to popular opinion at that time, babies are capable of learning. To know for sure that little Benjamin wasn’t just a whiz kid, Rovee-Collier had to repeat the experiment with other infants (Rovee-Collier, 1989, 1999). Sure enough, they, too, soon kicked more when linked to a mobile, both on the day of the experiment and the day after. They had learned the link between moving legs and moving mobile. If, however, she hitched them to a different mobile the next day, the infants showed no learning. Their actions indicated that they remembered the original mobile and recognized the difference. Moreover, when tethered to a familiar mobile a month later, they remembered the association and again began kicking (FIGURE 14.3).

Evidence of early processing also appeared in a study in which 10-year-olds were shown photos of preschoolers and asked to spot their former classmates. Although they consciously recognized only 1 in 5 of their onetime compatriots, their physiological responses (measured as skin perspiration) were greater to their former classmates, whether or not they consciously recognized them (Newcombe et al., 2000). What the conscious mind does not know and cannot express in words, the nervous system somehow remembers.


FIGURE 14.3 Infant at work Babies only 3 months old can learn that kicking moves a mobile, and they can retain that learning for a month. (From Rovee-Collier, 1989, 1997.)

Cognitive Development

14-2 From the perspective of Piaget and of today’s researchers, how does a child’s mind develop?

“Who knows the thoughts of a child?” Poet Nora Perry

“Childhood has its own way of seeing, thinking, and feeling, and there is nothing more foolish than the attempt to put ours in its place.” Philosopher Jean-Jacques Rousseau, 1798

Jean Piaget (1896–1980) “If we examine the intellectual development of the individual or of the whole of humanity, we shall find that the human spirit goes through a certain number of stages, each different from the other” (1930).

Cognition refers to all the mental activities associated with thinking, knowing, remembering, and communicating. Somewhere on your precarious journey “from egghood to personhood” (Broks, 2007), you became conscious. When was that, and how did your mind unfold from there?

Developmental psychologist Jean Piaget (pronounced Pee-ah-ZHAY) spent his life searching for the answers to such questions. His interest began in 1920, when he was in Paris developing questions for children’s intelligence tests. While administering the tests, Piaget became intrigued by children’s wrong answers, which, he noted, were often strikingly similar among children of a given age. Where others saw childish mistakes, Piaget saw intelligence at work. A half-century spent with children convinced Piaget that a child’s mind is not a miniature model of an adult’s. Thanks partly to his work, we now understand that children reason differently, in “wildly illogical ways about problems whose solutions are self-evident to adults” (Brainerd, 1996).

cognition all the mental activities associated with thinking, knowing, remembering, and communicating.

schema a concept or framework that organizes and interprets information.

assimilation interpreting our new experience in terms of our existing schemas.

accommodation adapting our current understandings (schemas) to incorporate new information.


FIGURE 14.4 Scale errors Psychologists Judy DeLoache, David Uttal, and Karl Rosengren (2004) report that 18- to 30-month-old children may fail to take the size of an object into account when trying to perform impossible actions with it. At left, a 21-month-old attempts to slide down a miniature slide. At right, a 24-month-old opens the door to a miniature car and tries to step inside.

FIGURE 14.5 An impossible object Look carefully at the “devil’s tuning fork” below. Now look away—no, better first study it some more—and then look away and draw it. . . . Not so easy, is it? Because this tuning fork is an impossible object, you have no schema for such an image.

Piaget’s studies led him to believe that a child’s mind develops through a series of stages, in an upward march from the newborn’s simple reflexes to the adult’s abstract reasoning power. Thus, an 8-year-old can comprehend things a toddler cannot, such as the analogy that “getting an idea is like having a light turn on in your head,” or that a miniature slide is too small for sliding, and a miniature car is much too small to get into (FIGURE 14.4). But our adult minds likewise engage in reasoning uncomprehended by 8-year-olds.

Piaget’s core idea is that the driving force behind our intellectual progression is an unceasing struggle to make sense of our experiences: “Children are active thinkers, constantly trying to construct more advanced understandings of the world” (Siegler & Ellis, 1996). To this end, the maturing brain builds schemas, concepts or mental molds into which we pour our experiences (FIGURE 14.5). By adulthood we have built countless schemas, ranging from cats and dogs to our concept of love.

To explain how we use and adjust our schemas, Piaget proposed two more concepts. First, we assimilate new experiences—we interpret them in terms of our current understandings (schemas). Having a simple schema for cow, for example, a toddler may call all four-legged animals cows. But as we interact with the world, we also adjust, or accommodate, our schemas to incorporate information provided by new experiences. Thus, the child soon learns that the original cow schema is too broad and accommodates by refining the category (FIGURE 14.6).

FIGURE 14.6 Pouring experience into mental molds We use our existing schemas to assimilate new experiences. But sometimes we need to accommodate (adjust) our schemas to include new experiences.

Two-year-old Gabriella has learned the schema for cow from her picture books.

Gabriella sees a moose and calls it a "cow." She is trying to assimilate this new animal into an existing schema. Her mother tells her, "No, it's a moose."

Gabriella accommodates her schema for large, shaggy animals and continues to modify that schema to include "mommy moose," "baby moose," and so forth.


sensorimotor stage in Piaget’s theory, the stage (from birth to about 2 years of age) during which infants know the world mostly in terms of their sensory impressions and motor activities.

object permanence the awareness that things continue to exist even when not perceived.

MODULE 14 Infancy and Childhood

Piaget believed that as children construct their understandings while interacting with the world, they experience spurts of change, followed by greater stability as they move from one cognitive plateau to the next. He viewed these plateaus as forming stages. Let’s consider Piaget’s stages now, in the light of current thinking.

Piaget’s Theory and Current Thinking Piaget proposed that children progress through four stages of cognitive development, each with distinctive characteristics that permit specific kinds of thinking (TABLE 14.1).


TABLE 14.1 Piaget’s Stages of Cognitive Development

Typical Age Range: Birth to nearly 2 years
Description of Stage: Sensorimotor. Experiencing the world through senses and actions (looking, hearing, touching, mouthing, and grasping)
Developmental Phenomena: Object permanence; stranger anxiety

Typical Age Range: 2 to about 6 or 7 years
Description of Stage: Preoperational. Representing things with words and images; using intuitive rather than logical reasoning
Developmental Phenomena: Pretend play; egocentrism

Typical Age Range: About 7 to 11 years
Description of Stage: Concrete operational. Thinking logically about concrete events; grasping concrete analogies and performing arithmetical operations
Developmental Phenomena: Conservation; mathematical transformations

Typical Age Range: About 12 through adulthood
Description of Stage: Formal operational. Abstract reasoning
Developmental Phenomena: Abstract logic; potential for mature moral reasoning

Sensorimotor Stage


FIGURE 14.7 Object permanence Infants younger than 6 months seldom understand that things continue to exist when they are out of sight. But for this infant, out of sight is definitely not out of mind.

In the sensorimotor stage, from birth to nearly age 2, babies take in the world through their senses and actions—through looking, hearing, touching, mouthing, and grasping. Very young babies seem to live in the present: Out of sight is out of mind. In one test, Piaget showed an infant an appealing toy and then flopped his beret over it. Before the age of 6 months, the infant acted as if it ceased to exist. Young infants lack object permanence—the awareness that objects continue to exist when not perceived (FIGURE 14.7). By 8 months, infants begin exhibiting memory for things no longer


seen. If you hide a toy, the infant will momentarily look for it. Within another month or two, the infant will look for it even after being restrained for several seconds. But does object permanence in fact blossom at 8 months, much as tulips blossom in spring? Today’s researchers see development as more continuous than Piaget did, and they believe object permanence unfolds gradually. Even young infants will at least momentarily look for a toy where they saw it hidden a second before (Wang et al., 2004). Researchers believe Piaget and his followers underestimated young children’s competence. Consider some simple experiments that demonstrate baby logic:

• Like adults staring in disbelief at a magic trick (the “Whoa!” look), infants look longer at an unexpected and unfamiliar scene of a car seeming to pass through a solid object, a ball stopping in midair, or an object violating object permanence by magically disappearing (Baillargeon, 1995, 2008; Wellman & Gelman, 1992). In another clever experiment, Sarah Shuwairi and her colleagues (2007) exposed 4-month-olds to a picture of a cube (FIGURE 14.8) with one small area covered. After the infants had habituated to this image, they stared longer when shown an impossible rather than a possible version of the cube. Babies, it seems, have a more intuitive grasp of simple laws of physics than Piaget realized.

• Babies also have a head for numbers. Karen Wynn (1992, 2000) showed 5-month-olds one or two objects. Then she hid the objects behind a screen, and visibly removed or added one (FIGURE 14.9). When she lifted the screen, the infants sometimes did a double take, staring longer when shown a wrong number of objects. But were they just responding to a greater or smaller mass of objects, rather than a change in number (Feigenson et al., 2002)? Later experiments showed that babies’ number sense extends to larger numbers and such things as drumbeats and motions (McCrink & Wynn, 2004; Spelke & Kinzler, 2007; Wynn et al., 2002). If accustomed to a Daffy Duck puppet jumping three times on stage, they show surprise if it jumps only twice.

Clearly, infants are smarter than Piaget appreciated. Even as babies, we had a lot on our minds.

FIGURE 14.8 Infants can discriminate between possible and impossible objects After habituating to the stimulus on the left, 4-month-olds stared longer if shown the impossible version of the cube—where one of the back vertical bars crosses over a front horizontal bar (Shuwairi et al., 2007).

FIGURE 14.9 Baby math Shown a numerically impossible outcome, 5-month-old infants stare longer. (From Wynn, 1992.) The procedure: (1) Objects placed in case. (2) Screen comes up. (3) Empty hand enters. (4) One object removed. (5) Screen drops, revealing either a possible outcome (1 object) or an impossible outcome (2 objects).


Preoperational Stage

Piaget believed that until about age 6 or 7, children are in a preoperational stage—too young to perform mental operations. For a 5-year-old, the milk that seems “too much” in a tall, narrow glass may become an acceptable amount if poured into a short, wide glass. Focusing only on the height dimension, this child cannot perform the operation of mentally pouring the milk back, because she lacks the concept of conservation—the principle that quantity remains the same despite changes in shape (FIGURE 14.10).

|| Question: If most 2½-year-olds do not understand how miniature toys can symbolize real objects, should anatomically correct dolls be used when questioning such children about alleged physical or sexual abuse? Judy DeLoache (1995) reports that “very young children do not find it natural or easy to use a doll as a representation of themselves.” ||

preoperational stage in Piaget’s theory, the stage (from 2 to about 6 or 7 years of age) during which a child learns to use language but does not yet comprehend the mental operations of concrete logic.

conservation the principle (which Piaget believed to be a part of concrete operational reasoning) that properties such as mass, volume, and number remain the same despite changes in the forms of objects.

egocentrism in Piaget’s theory, the preoperational child’s difficulty taking another’s point of view.

theory of mind people’s ideas about their own and others’ mental states— about their feelings, perceptions, and thoughts, and the behaviors these might predict.


Piaget did not view the stage transitions as abrupt. Even so, symbolic thinking appears at an earlier age than he supposed. Judy DeLoache (1987) discovered this when she showed children a model of a room and hid a model toy in it (a miniature stuffed dog behind a miniature couch). The 2½-year-olds easily remembered where to find the miniature toy, but they could not use the model to locate an actual stuffed dog behind a couch in a real room. Three-year-olds—only 6 months older—usually went right to the actual stuffed animal in the real room, showing they could think of the model as a symbol for the room. Piaget probably would have been surprised.

Egocentrism

Piaget contended that preschool children are egocentric: They have difficulty perceiving things from another’s point of view. Asked to “show Mommy your picture,” 2-year-old Gabriella holds the picture up facing her own eyes. Three-year-old Gray makes himself “invisible” by putting his hands over his eyes, assuming that if he can’t see his grandparents, they can’t see him. Children’s conversations also reveal their egocentrism, as one young boy demonstrated (Phillips, 1969, p. 61):


FIGURE 14.10 Piaget’s test of conservation This preoperational child does not yet understand the principle of conservation of substance. When the milk is poured into a tall, narrow glass, it suddenly seems like “more” than when it was in the shorter, wider glass. In another year or so, she will understand that the volume stays the same.

“Do you have a brother?” “Yes.” “What’s his name?” “Jim.” “Does Jim have a brother?” “No.”

Like Gabriella, TV-watching preschoolers who block your view of the TV assume that you see what they see. They simply have not yet developed the ability to take another’s viewpoint. Even as adults, we often overestimate the extent to which others share our opinions and perspectives, as when we assume that something will be clear to others if it is clear to us, or that e-mail recipients will “hear” our “just kidding” intent (Epley et al., 2004; Kruger et al., 2005). Children, however, are even more susceptible to this curse of knowledge.

“It’s too late, Roger—they’ve seen us.” Roger has not outgrown his early childhood egocentrism.


Theory of Mind

When Little Red Riding Hood realizes her “grandmother” is really a wolf, she swiftly revises her ideas about the creature’s intentions and races away. Preschoolers, although still egocentric, develop this ability to infer others’ mental states when they begin forming a theory of mind (a term coined by psychologists David Premack and Guy Woodruff to describe chimpanzees’ seeming ability to read intentions). As children’s ability to take another’s perspective develops, they seek to understand what made a playmate angry, when a sibling will share, and what might make a parent buy a toy. And they begin to tease, empathize, and persuade.

Between about 3½ and 4½, children worldwide come to realize that others may hold false beliefs (Callaghan et al., 2005; Sabbagh et al., 2006). Jennifer Jenkins and Janet Astington (1996) showed Toronto children a Band-Aids box and asked them what was inside. Expecting Band-Aids, the children were surprised to discover that the box actually contained pencils. Asked what a child who had never seen the box would think was inside, 3-year-olds typically answered “pencils.” By age 4 to 5, the children’s theory of mind had leapt forward, and they anticipated their friends’ false belief that the box would hold Band-Aids.

In a follow-up experiment, children see a doll named Sally leaving her ball in a red cupboard (FIGURE 14.11). Another doll, Anne, then moves the ball to a blue cupboard. Researchers then pose a question: When Sally returns, where will she look for the ball? Children with autism (see Close-Up: Autism, below) have difficulty

|| Use your finger to trace a capital E on your forehead. When Adam Galinsky and his colleagues (2006) invited people to do that, they were more egocentric—less likely to draw it from the perspective of someone looking at them—if they were first made to feel powerful. Other studies confirm that feeling powerful reduces people’s sensitivity to how others see, think, and feel. ||

FIGURE 14.11 Testing children’s theory of mind This simple problem illustrates how researchers explore children’s presumptions about others’ mental states. (Inspired by Baron-Cohen et al., 1985.) The panels: This is Sally. This is Anne. Sally puts her ball in the red cupboard. Sally goes away. Anne moves the ball to the blue cupboard. Where will Sally look for her ball?

Family Circus cartoon: “Don’t you remember, Grandma? You were in it with me.”


CLOSE-UP: Autism and “Mind-Blindness”

Autism This speech-language pathologist is helping a boy with autism learn to form sounds and words. Autism, which afflicts four boys for every girl, is marked by deficient social communication and difficulty in grasping others’ states of mind.

Diagnoses of autism, a disorder marked by communication deficiencies and repetitive behaviors, have been increasing. Once believed to affect 1 in 2500 children, autism or a related disorder now afflicts an estimated 1 in 150 American children and, in Britain’s London area, 1 in 86 children (Baird et al., 2006; CDC, 2007; Lilienfeld & Arkowitz, 2007). Some people have attributed the modern “autism epidemic” to small amounts of mercury in childhood vaccines, leading nearly 5000 parents of children with autism to file a 2007 lawsuit against the U.S. government. But the mercury-laden ingredient was removed from vaccines in 2001, and autism rates have reportedly not dropped since then (Normand & Dallery, 2007; Schechter & Grether, 2008). Moreover, the increase in autism diagnoses has been offset by a decrease in the number of children considered “cognitively disabled” or “learning disabled,” which suggests a relabeling of children’s disorders (Gernsbacher et al., 2005; Grinker, 2007; Shattuck, 2006). We do know that the underlying source of autism’s symptoms seems to be poor

communication among brain regions that normally work together to let us take another’s viewpoint. This effect appears to result from an unknown number of autism-related genes interacting with the environment in as yet poorly understood ways (Blakeslee, 2005; Wickelgren, 2005). People with autism are therefore said to have an impaired theory of mind (Rajendran & Mitchell, 2007). They have difficulty inferring others’ thoughts and feelings. They do not appreciate that playmates and parents might view things differently. Mindreading that most find intuitive (Is that face conveying a happy smile, a self-satisfied smirk, or a contemptuous sneer?) is difficult for those with autism. Most children learn that another child’s pouting mouth signals sadness, and that twinkling eyes mean happiness or mischief. A child with autism fails to understand these signals (Frith & Frith, 2001). To encompass the variations in autism, today’s researchers refer to autism spectrum disorder. One variation in this spectrum is Asperger syndrome, a “high-functioning” form of autism. Asperger syndrome is marked by normal intelligence, often accompanied by exceptional skill or talent in

a specific area, but deficient social and communication skills (and thus an inability to form normal peer relationships). Psychologist Simon Baron-Cohen (2008) proposes that autism, which afflicts four boys for every girl, represents an “extreme male brain.” Girls are naturally predisposed to be “empathizers,” he contends. They are better at reading facial expressions and gestures—a challenging task for those with autism. And, although the sexes overlap, boys are, he believes, better “systemizers”: They understand things according to rules or laws, as in mathematical and mechanical systems. “If two ‘systemizers’ have a child, this will increase the risk of the child having autism,” Baron-Cohen theorizes. And because of assortative mating—people’s tendency to seek spouses who share their interests—two systemizers will indeed often mate. “I do not discount environmental factors,” he notes. “I’m just saying, don’t forget about biology.” Biology’s influence appears in studies of identical twins. If one twin is diagnosed with autism, the chances are 70 percent that the identical co-twin will be as well (Sebat et al., 2007). The younger sibling of a child with autism also is at a heightened risk of 15 percent or so (Sutcliffe, 2008). Random genetic mutations in sperm-producing cells may also play a role. As men age, these mutations become more frequent, which may help explain why an over-40 man has a much higher risk of fathering a child with autism than does a man under 30 (Reichenberg et al., 2007). Genetic influences appear to do their damage by altering brain synapses (Crawley, 2007; Garber, 2007). Biology’s role in autism also appears in studies comparing the brain’s functioning in those with and without autism. People without autism often yawn after seeing others yawn. And as they view and imitate


(a) Emotion-conveying faces grafted onto toy trains

another’s smiling or frowning, they feel something of what the other is feeling, thanks to their brain’s mirror neurons. Not so among those with autism, who are less imitative and whose brain areas involved in mirroring others’ actions are much less active (Dapretto et al., 2006; Perra et al., 2008; Senju et al., 2007). For example, when people with autism watch another person’s hand movements, their brain displays less than normal mirroring activity (Oberman & Ramachandran, 2007; Théoret et al., 2005). Such discoveries have launched explorations of treatments that might alleviate some of autism’s symptoms by triggering mirror neuron activity (Ramachandran & Oberman, 2006). For example, seeking to “systemize empathy,” Baron-Cohen and his Cambridge University colleagues (2007; Golan et al., 2007) collaborated with Britain’s National Autistic Society and a film production company. Knowing that television shows with vehicles have been most popular for kids with autism, they created a series of animations that graft emotion-conveying faces onto toy tram, train, and tractor characters in a pretend boy’s bedroom (FIGURE 14.12). After the boy leaves for school, the characters come to life and have experiences that lead them to display various emotions (which I predict you would enjoy viewing at www.thetransporters.com). The children expressed a surprising ability to generalize what they had learned to a new, real context. By the end of the intervention, their previously deficient ability to recognize emotions on real faces now equaled that of children without autism.

(b) Matching new scenes and faces: “The neighbor’s dog has bitten people before. He is barking at Louise.” Point to the face that shows how Louise is feeling. Accuracy scores across two trials showed that after the intervention, children with autism became better able to identify which facial emotion matches the context.

FIGURE 14.12 Transported into a world of emotion (a) A research team at Cambridge University’s Autism Research Centre introduced children with autism to emotions experienced and displayed by toy vehicles. (b) After four weeks of viewing animations, the children displayed a markedly increased ability to recognize emotions in human as well as the toy faces.

autism a disorder that appears in childhood and is marked by deficient communication, social interaction, and understanding of others’ states of mind.


understanding that Sally’s state of mind differs from their own—that Sally, not knowing the ball has been moved, will return to the red cupboard. They also have difficulty reflecting on their own mental states. They are, for example, less likely to use the personal pronouns I and me. Deaf children who have hearing parents and minimal communication opportunities have similar difficulty inferring others’ states of mind (Peterson & Siegal, 1999).

Our abilities to perform mental operations, to think symbolically, and to take another’s perspective are not absent in the preoperational stage and then miraculously present in later stages. Rather, these abilities begin to show up early and continue to develop gradually (Wellman et al., 2001). For example, we are able to appreciate others’ perceptions and feelings before we can appreciate others’ beliefs (Saxe & Powell, 2006).

By age 7, children become increasingly capable of thinking in words and of using words to work out solutions to problems. They do this, noted the Russian psychologist Lev Vygotsky (1896–1934), by internalizing their culture’s language and relying on inner speech. Parents who say “No, no!” when pulling a child’s hand away from a cake are giving the child a self-control tool. When later needing to resist temptation, the child may likewise say “No, no!” Second-graders who mutter to themselves while doing math problems grasp third-grade math better the following year (Berk, 1994). Whether out loud or inaudibly, talking to themselves helps children control their behavior and emotions and master new skills.

Concrete Operational Stage

By about 6 or 7 years of age, said Piaget, children enter the concrete operational stage. Given concrete materials, they begin to grasp conservation. Understanding that change in form does not mean change in quantity, they can mentally pour milk back and forth between glasses of different shapes. They also enjoy jokes that allow them to use this new understanding:

Mr. Jones went into a restaurant and ordered a whole pizza for his dinner. When the waiter asked if he wanted it cut into 6 or 8 pieces, Mr. Jones said, “Oh, you’d better make it 6, I could never eat 8 pieces!” (McGhee, 1976)

Piaget believed that during the concrete operational stage, children fully gain the mental ability to comprehend mathematical transformations and conservation. When my daughter, Laura, was 6, I was astonished at her inability to reverse simple arithmetic. Asked, “What is 8 plus 4?” she required 5 seconds to compute “12,” and another 5 seconds to then compute 12 minus 4. By age 8, she could answer a reversed question instantly.

concrete operational stage in Piaget’s theory, the stage of cognitive development (from about 6 or 7 to 11 years of age) during which children gain the mental operations that enable them to think logically about concrete events.

formal operational stage in Piaget’s theory, the stage of cognitive development (normally beginning about age 12) during which people begin to think logically about abstract concepts.

stranger anxiety the fear of strangers that infants commonly display, beginning by about 8 months of age.

Formal Operational Stage

By age 12, our reasoning expands from the purely concrete (involving actual experience) to encompass abstract thinking (involving imagined realities and symbols). As children approach adolescence, said Piaget, many become capable of solving hypothetical propositions and deducing consequences: If this, then that. Systematic reasoning, what Piaget called formal operational thinking, is now within their grasp.

Although full-blown logic and reasoning await adolescence, the rudiments of formal operational thinking begin earlier than Piaget realized. Consider this simple problem:

If John is in school, then Mary is in school. John is in school. What can you say about Mary?

Formal operational thinkers have no trouble answering correctly. But neither do most 7-year-olds (Suppes, 1982).


Reflecting on Piaget’s Theory

What remains of Piaget’s ideas about the child’s mind? Plenty—enough to merit his being singled out by Time magazine as one of the twentieth century’s 20 most influential scientists and thinkers and rated in a survey of British psychologists as the greatest psychologist of that century (Psychologist, 2003). Piaget identified significant cognitive milestones and stimulated worldwide interest in how the mind develops. His emphasis was less on the ages at which children typically reach specific milestones than on their sequence. Studies around the globe, from aboriginal Australia to Algeria to North America, have confirmed that human cognition unfolds basically in the sequence Piaget described (Lourenco & Machado, 1996; Segall et al., 1990).

“Assessing the impact of Piaget on developmental psychology is like assessing the impact of Shakespeare on English literature.” (Developmental psychologist Harry Beilin, 1992)

However, today’s researchers see development as more continuous than did Piaget. By detecting the beginnings of each type of thinking at earlier ages, they have revealed conceptual abilities Piaget missed. Moreover, they see formal logic as a smaller part of cognition than he did. Piaget would not be surprised that today, as part of our own cognitive development, we are adapting his ideas to accommodate new findings.

Piaget’s emphasis on how the child’s mind grows through interaction with the physical environment is complemented by Vygotsky’s emphasis on how the child’s mind grows through interaction with the social environment. If Piaget’s child was a young scientist, Vygotsky’s was a young apprentice. By mentoring children and giving them new words, parents and others provide a temporary scaffold from which children can step to higher levels of thinking (Renninger & Granott, 2005). Language, an important ingredient of social mentoring, provides the building blocks for thinking, noted Vygotsky (who was born the same year as Piaget, but died prematurely of tuberculosis).

Lev Vygotsky (1896–1934) Vygotsky, a Russian developmental psychologist, pictured here with his daughter, studied how a child’s mind feeds on the language of social interaction.

Implications for Parents and Teachers

Future parents and teachers, remember: Young children are incapable of adult logic. Preschoolers who stand in the way or ignore negatively phrased instructions simply have not learned to take another’s viewpoint. What seems simple and obvious to us—getting off a teeter-totter will cause a friend on the other end to crash—may be incomprehensible to a 3-year-old. Also remember that children are not passive receptacles waiting to be filled with knowledge. Better to build on what they already know, engaging them in concrete demonstrations and stimulating them to think for themselves. And, finally, accept children’s cognitive immaturity as adaptive. It is nature’s strategy for keeping children close to protective adults and providing time for learning and socialization (Bjorklund & Green, 1992).

Stranger anxiety A newly emerging ability to evaluate people as unfamiliar and possibly threatening helps protect babies 8 months and older.

Social Development

From birth, babies in all cultures are social creatures, developing an intense bond with their caregivers. Infants come to prefer familiar faces and voices, then to coo and gurgle when given their mother’s or father’s attention. Soon after object permanence emerges and children become mobile, a curious thing happens. At about 8 months, they develop stranger anxiety. They may greet strangers by crying and reaching for familiar caregivers. “No! Don’t leave me!” their distress seems to say. At about this age, children have schemas for familiar faces; when they cannot assimilate the new face into these remembered schemas, they become distressed (Kagan, 1984). Once again, we see an important principle: The brain, mind, and social-emotional behavior develop together.


14-3 How do parent-infant attachment bonds form?

attachment an emotional tie with another person; shown in young children by their seeking closeness to the caregiver and showing distress on separation.

critical period an optimal period shortly after birth when an organism’s exposure to certain stimuli or experiences produces proper development.

imprinting the process by which certain animals form attachments during a critical period very early in life.

Origins of Attachment

By 12 months, infants typically cling tightly to a parent when they are frightened or expect separation. Reunited after being separated, they shower the parent with smiles and hugs. No social behavior is more striking than this intense and mutual infant-parent bond. This attachment bond is a powerful survival impulse that keeps infants close to their caregivers. Infants become attached to those—typically their parents—who are comfortable and familiar. For many years, developmental psychologists reasoned that infants became attached to those who satisfied their need for nourishment. It made sense. But an accidental finding overturned this explanation.

Body Contact

During the 1950s, University of Wisconsin psychologists Harry Harlow and Margaret Harlow bred monkeys for their learning studies. To equalize the infant monkeys’ experiences and to isolate any disease, they separated them from their mothers shortly after birth and raised them in sanitary individual cages, which included a cheesecloth baby blanket (Harlow et al., 1971). Then came a surprise: When their blankets were taken to be laundered, the monkeys became distressed.

The Harlows recognized that this intense attachment to the blanket contradicted the idea that attachment derives from an association with nourishment. But how could they show this more convincingly? To pit the drawing power of a food source against the contact comfort of the blanket, they created two artificial mothers. One was a bare wire cylinder with a wooden head and an attached feeding bottle, the other a cylinder wrapped with terry cloth. When raised with both, the monkeys overwhelmingly preferred the comfy cloth mother (FIGURE 14.13). Like human infants clinging to their mothers, the monkeys would cling to their cloth mothers when anxious. When venturing into the environment, they used her as a secure base, as if attached to her by an invisible elastic band that stretched only so far before pulling them back. Researchers soon learned that other qualities—rocking, warmth, and feeding—made the cloth mother even more appealing.

Human infants, too, become attached to parents who are soft and warm and who rock, feed, and pat. Much parent-infant emotional communication occurs via touch (Hertenstein et al., 2006), which can be either soothing (snuggles) or arousing (tickles). Human attachment also consists of one person providing another with a safe haven when distressed and a secure base from which to explore. As we mature, our

FIGURE 14.13 The Harlows’ mothers

Psychologists Harry Harlow and Margaret Harlow raised monkeys with two artificial mothers—one a bare wire cylinder with a wooden head and an attached feeding bottle, the other a cylinder with no bottle but covered with foam rubber and wrapped with terry cloth. The Harlows’ discovery surprised many psychologists: The infants much preferred contact with the comfortable cloth mother, even while feeding from the nourishing mother.





Attachment Differences

14-4 How have psychologists studied attachment differences, and what have they learned?

What accounts for children’s attachment differences? Placed in a strange situation (usually a laboratory playroom), about 60 percent of infants display secure attachment. In their mother’s presence they play comfortably, happily exploring their new environment. When she leaves, they are distressed; when she returns, they seek contact with her. Other infants avoid attachment or show insecure attachment. They are less likely to explore their surroundings; they may even cling to their mother. When she leaves, they either cry loudly and remain upset or seem indifferent to her departure and return (Ainsworth, 1973, 1989; Kagan, 1995; van IJzendoorn & Kroonenberg, 1988).

Mary Ainsworth (1979), who designed the strange situation experiments, studied attachment differences by observing mother-infant pairs at home during their first six months. Later she observed the 1-year-old infants in a strange situation without their mothers. Sensitive, responsive mothers—those who noticed what their babies were doing and responded appropriately—had infants who exhibited secure attachment. Insensitive, unresponsive mothers—mothers who attended to their babies when they felt like doing so but ignored them at other times—had infants who often became insecurely attached. The Harlows’ monkey studies, with unresponsive artificial mothers, produced even more striking effects. When put in strange situations without their artificial mothers, the deprived infants were terrified (FIGURE 14.14).


Attachment When French pilot Christian Moullec took off in his microlight plane, his imprinted geese, which he had raised since their hatching, followed closely.

FIGURE 14.14 Social deprivation and fear Monkeys raised with artificial mothers were terror-stricken when placed in strange situations without their surrogate mothers. (Today’s climate of greater respect for animal welfare prevents such primate studies.)

Contact is one key to attachment. Another is familiarity. In many animals, attachments based on familiarity likewise form during a critical period—an optimal period when certain events must take place to facilitate proper development (Bornstein, 1989). For goslings, ducklings, or chicks, that period falls in the hours shortly after hatching, when the first moving object they see is normally their mother. From then on, the young fowl follow her, and her alone.

Konrad Lorenz (1937) explored this rigid attachment process, called imprinting. He wondered: What would ducklings do if he was the first moving creature they observed? What they did was follow him around: Everywhere that Konrad went, the ducks were sure to go. Further tests revealed that although baby birds imprint best to their own species, they also will imprint to a variety of moving objects—an animal of another species, a box on wheels, a bouncing ball (Colombo, 1982; Johnson, 1992). And, once formed, this attachment is difficult to reverse.

Children—unlike ducklings—do not imprint. However, they do become attached to what they’ve known. Mere exposure to people and things fosters fondness. Children like to reread the same books, rewatch the same movies, reenact family traditions. They prefer to eat familiar foods, live in the same familiar neighborhood, attend school with the same old friends. Familiarity is a safety signal. Familiarity breeds content.


|| Lee Kirkpatrick (1999) reports that for some people a perceived relationship with God functions as do other attachments, by providing a secure base for exploration and a safe haven when threatened. ||

With age, the secure base and safe haven shift—from parents to peers and partners (Cassidy & Shaver, 1999). But at all ages we are social creatures. We gain strength when someone offers, by words and actions, a safe haven: “I will be here. I am interested in you. Come what may, I will actively support you” (Crowell & Waters, 1994).



Follow-up studies have confirmed that sensitive mothers—and fathers—tend to have securely attached infants (De Wolff & van IJzendoorn, 1997). But what explains the correlation? Is attachment style the result of parenting? Or is attachment style the result of genetically influenced temperament—a person’s characteristic emotional reactivity and intensity? Shortly after birth, some babies are noticeably difficult—irritable, intense, and unpredictable. Others are easy—cheerful, relaxed, and feeding and sleeping on predictable schedules (Chess & Thomas, 1987). By neglecting such inborn differences, chides Judith Harris (1998), the parenting studies are like “comparing foxhounds reared in kennels with poodles reared in apartments.”

So, to separate nature and nurture, Dutch researcher Dymphna van den Boom (1990, 1995) varied parenting while controlling temperament. (Pause and think: If you were the researcher, how might you have done this?) Van den Boom’s solution was to randomly assign one hundred 6- to 9-month-old temperamentally difficult infants to either an experimental condition, in which mothers received personal training in sensitive responding, or to a control condition in which they did not. At 12 months of age, 68 percent of the experimental-condition infants were rated securely attached, as were only 28 percent of the control-condition infants. Other studies have also found that intervention programs can increase parental sensitivity and, to a lesser extent, infant attachment security (Bakermans-Kranenburg et al., 2003; Van Zeijl et al., 2006).

As these examples indicate, researchers have more often studied mother care than father care. Infants who lack a caring mother are said to suffer “maternal deprivation”; those lacking a father’s care merely experience “father absence.” This reflects a wider attitude in which “fathering a child” has meant impregnating, and “mothering” has meant nurturing. But fathers are more than just mobile sperm banks.
Across nearly 100 studies worldwide, a father’s love and acceptance have been comparable to a mother’s love in predicting their offspring’s health and well-being (Rohner & Veneziano, 2001). In one mammoth British study following 7259 children from birth to adulthood, those whose fathers were most involved in parenting (through outings, reading to them, and taking an interest in their education) tended to achieve more in school, even after controlling for many other factors, such as parental education and family wealth (Flouri & Buchanan, 2004). Whether children live with one parent or two, are cared for at home or in a daycare center, live in North America, Guatemala, or the Kalahari Desert, their anxiety over separation from parents peaks at around 13 months, then gradually declines (FIGURE 14.15). Does this mean our need for and love of others also fades away? Hardly. Our capacity for love grows, and our pleasure in touching and holding those we love never ceases. The power of early attachment does nonetheless gradually relax,

FIGURE 14.15 Infants’ distress over separation from parents In an experiment, groups of infants were left by their mothers in an unfamiliar room. In both groups, the percentage who cried when the mother left peaked at about 13 months. Whether the infant had experienced day care made little difference. (From Kagan, 1976.)

▸ Fantastic father Among the Aka people of Central Africa, fathers form an especially close bond with their infants, even suckling the babies with their own nipples when hunger makes the child impatient for Mother’s return. According to anthropologist Barry Hewlett (1991), fathers in this culture are holding or within reach of their babies 47 percent of the time.


[Graph: percentage of infants who cried when their mothers left, plotted against age in months (3½ to 29 months), for day-care and home groups; both curves peak near 13 months.]


allowing us to move out into a wider range of situations, communicate with strangers more freely, and stay emotionally attached to loved ones despite distance. Developmental theorist Erik Erikson (1902–1994), working in collaboration with his wife, Joan Erikson, said that securely attached children approach life with a sense of basic trust—a sense that the world is predictable and reliable. He attributed basic trust not to environment or inborn temperament, but to early parenting. He theorized that infants blessed with sensitive, loving caregivers form a lifelong attitude of trust rather than fear. Although debate continues, many researchers now believe that our early attachments form the foundation for our adult relationships and our comfort with affection and intimacy (Birnbaum et al., 2006; Fraley, 2002). Adult styles of romantic love do tend to exhibit secure, trusting attachment; insecure, anxious attachment; or the avoidance of attachment (Feeney & Noller, 1990; Shaver & Mikulincer, 2007; Rholes & Simpson, 2004). Moreover, these adult attachment styles in turn affect relationships with our children, as avoidant people find parenting more stressful and unsatisfying (Rholes et al., 2006). Attachment style is also associated with motivation, note Andrew Elliot and Harry Reis (2003). Securely attached people exhibit less fear of failure and a greater drive to achieve.

“Out of the conflict between trust and mistrust, the infant develops hope, which is the earliest form of what gradually becomes faith in adults.” Erik Erikson (1983)

Deprivation of Attachment

14-5 Do parental neglect, family disruption, or day care affect children’s attachments?

If secure attachment nurtures social competence, what happens when circumstances prevent a child from forming attachments? In all of psychology, there is no sadder research literature. Babies reared in institutions without the stimulation and attention of a regular caregiver, or locked away at home under conditions of abuse or extreme neglect, are often withdrawn, frightened, even speechless. Those abandoned in Romanian orphanages during the 1980s looked “frighteningly like [the Harlows’] monkeys” (Carlson, 1995). If institutionalized more than 8 months, they often bore lasting emotional scars (Chisholm, 1998; Malinosky-Rummell & Hansen, 1993; Rutter et al., 1998).

The Harlows’ monkeys bore similar scars if reared in total isolation, without even an artificial mother. As adults, when placed with other monkeys their age, they either cowered in fright or lashed out in aggression. When they reached sexual maturity, most were incapable of mating. If artificially impregnated, females often were neglectful, abusive, even murderous toward their first-born. A recent experiment with primates confirms the abuse-breeds-abuse phenomenon. Whether reared by biological or adoptive mothers, 9 of 16 females who were abused by their mothers became abusive parents, as did no female reared by a nonabusive mother (Maestripieri, 2005).

In humans, too, the unloved sometimes become the unloving. Most abusive parents—and many condemned murderers—report having been neglected or battered as children (Kempe & Kempe, 1978; Lewis et al., 1988). But does this mean that today’s victim is predictably tomorrow’s victimizer? No. Though most abusers were indeed abused, most abused children do not later become violent criminals or abusive parents.
Most children growing up under adversity (as did the surviving children of the Holocaust) are resilient; they become normal adults (Helmreich, 1992; Masten, 2001). But others, especially those who experience no sharp break from their abusive past, don’t bounce back so readily. Some 30 percent of people who have been abused do abuse their children—a rate lower than that found in the primate study, but four

“What is learned in the cradle, lasts to the grave.” French proverb

basic trust according to Erik Erikson, a sense that the world is predictable and trustworthy; said to be formed during infancy by appropriate experiences with responsive caregivers.


times the U.S. national rate of child abuse (Dumont et al., 2007; Kaufman & Zigler, 1987; Widom, 1989a,b).

Extreme early trauma seems to leave footprints on the brain. If repeatedly threatened and attacked while young, normally placid golden hamsters grow up to be cowards when caged with same-sized hamsters, or bullies when caged with weaker ones (Ferris, 1996). Such animals show changes in the brain chemical serotonin, which calms aggressive impulses. A similarly sluggish serotonin response has been found in abused children who become aggressive teens and adults. “Stress can set off a ripple of hormonal changes that permanently wire a child’s brain to cope with a malevolent world,” concludes abuse researcher Martin Teicher (2002).

Such findings help explain why young children terrorized through physical abuse or wartime atrocities (being beaten, witnessing torture, and living in constant fear) may suffer other lasting wounds—often nightmares, depression, and an adolescence troubled by substance abuse, binge eating, or aggression (Kendall-Tackett et al., 1993, 2004; Polusny & Follette, 1995; Trickett & McBride-Chang, 1995). Child sexual abuse, especially if severe and prolonged, places children at increased risk for health problems, psychological disorders, substance abuse, and criminality (Freyd et al., 2005; Tyler, 2002). Abuse victims are at considerable risk for depression if they carry a gene variation that spurs stress-hormone production (Bradley et al., 2008). As we will see again and again, behavior and emotion arise from a particular environment interacting with particular genes.

Disruption of Attachment

What happens to an infant when attachment is disrupted? Separated from their families, infants—both monkeys and humans—become upset and, before long, withdrawn and even despairing (Bowlby, 1973; Mineka & Suomi, 1978). Fearing that the stress of separation might cause lasting damage (and when in doubt, acting to protect parents’ rights), courts are usually reluctant to remove children from their homes.

If placed in a more positive and stable environment, most infants recover from the separation distress. In studies of adopted children, Leon Yarrow and his co-workers (1973) found that when children between 6 and 16 months of age were removed from their foster mothers, they initially had difficulties eating, sleeping, and relating to their new mothers. But when these children were studied at age 10, little visible effect remained. Thus, they fared no worse than children placed before the age of 6 months (with little accompanying distress). Likewise, socially deprived but adequately nourished Romanian orphans who were adopted into a loving home during infancy or early childhood usually progressed rapidly, especially in their cognitive development. If removed and adopted after age 2, however, they were at risk for attachment problems. Foster care that prevents attachment by moving a child through a series of foster families can be very disruptive. So can repeated and prolonged removals from a mother.

We adults also suffer when our attachment bonds are severed. Whether through death or separation, a break produces a predictable sequence. Agitated preoccupation with the lost partner is followed by deep sadness and, eventually, the beginnings of emotional detachment and a return to normal living (Hazan & Shaver, 1994). Newly separated couples who have long ago ceased feeling affection are sometimes surprised at their desire to be near the former partner. Deep and longstanding attachments seldom break quickly. Detaching is a process, not an event.

Does Day Care Affect Attachment?

In the mid-twentieth century, when Mom-at-home was the social norm, researchers asked, “Is day care bad for children? Does it disrupt children’s attachments to their


parents?” For the high-quality day-care programs usually studied, the answer was no. In Mother Care/Other Care, developmental psychologist Sandra Scarr (1986) explained that children are “biologically sturdy individuals . . . who can thrive in a wide variety of life situations.” Scarr spoke for many developmental psychologists, whose research has uncovered no major impact of maternal employment on children’s development (Erel et al., 2000; Goldberg et al., 2008).

Research then shifted to the effects of differing quality of day care on different types and ages of children. Scarr (1997) explained: Around the world, “high-quality child care consists of warm, supportive interactions with adults in a safe, healthy, and stimulating environment. . . . Poor care is boring and unresponsive to children’s needs.” Newer research not only confirms that day-care quality matters, but also finds that family poverty often consigns children to lower-quality day care, as well as more family instability and turmoil, more authoritarian parenting (imposing strict rules and demanding obedience), more time in front of the television, and less access to books (Love et al., 2003; Evans, 2004).

One ongoing study in 10 American cities has followed 1100 children since the age of 1 month. The researchers found that at ages 4½ to 6, those children who had spent the most time in day care had slightly advanced thinking and language skills. They also had an increased rate of aggressiveness and defiance (NICHD, 2002, 2003, 2006). To developmental psychologist Eleanor Maccoby (2003), the positive correlation between increased rate of problem behaviors and time spent in child care suggests “some risk for some children spending extended time in some day-care settings as they’re now organized.” But the child’s temperament, the parents’ sensitivity, and the family’s economic and educational level mattered more than time spent in day care.
To be a day-care researcher and “to follow the data” can be controversial, notes researcher Jay Belsky (2003). Both opponents and advocates of day care have strong feelings. “As a result,” says Belsky, “the scientist who is willing to report unpopular results is all too frequently blamed for generating them.” Just as weather forecasters can report rain but love sunshine, so scientists aim to reveal and report the way things are, even when they wish it were otherwise.

Children’s ability to thrive under varied types of responsive caregiving should not surprise us, given cultural variations in attachment patterns. Westernized attachment features one or two caregivers and their offspring. In other cultures, such as the Efe of Zaire, multiple caregivers are the norm (Field, 1996; Whaley et al., 2002). Even before the mother holds her newborn, the baby is passed among several women. In the weeks to come, the infant will be constantly held (and fed) by other women. The result is strong multiple attachments. As an African proverb says, “It takes a village to raise a child.”

There is little disagreement that the many preschool children left alone for part of their parents’ working hours deserve better. So do the children who merely exist for 9 hours a day in minimally equipped, understaffed centers. What all children need is a consistent, warm relationship with people whom they can learn to trust. The importance of such relationships extends beyond the preschool years, as Finnish psychologist Lea Pulkkinen (2006) observed in her career-long study of 285 individuals tracked from age 8 to 42. Her observation that adult monitoring of children was associated with favorable outcomes led her to undertake, with support from Finland’s parliament, a nationwide program of adult-supervised activities for all first and second graders (Pulkkinen, 2004; Rose, 2004).



An example of high-quality day care Research has shown that young children thrive socially and intellectually in safe, stimulating environments with a ratio of one caregiver for every three or four children.


self-concept our understanding and evaluation of who we are.

Self-Concept

14-6 How do children’s self-concepts develop, and how are children’s traits related to parenting styles?

Infancy’s major social achievement is attachment. Childhood’s major social achievement is a positive sense of self. By the end of childhood, at about age 12, most children have developed a self-concept—an understanding and assessment of who they are. Parents often wonder when and how this sense of self develops. “Is my baby girl aware of herself—does she know she is a person distinct from everyone else?” Of course we cannot ask the baby directly, but we can again capitalize on what she can do—letting her behavior provide clues to the beginnings of her self-awareness.

In 1877, biologist Charles Darwin offered one idea: Self-awareness begins when we recognize ourselves in a mirror. By this indicator, self-recognition emerges gradually over about a year, starting in roughly the sixth month as the child reaches toward the mirror to touch her image as if it were another child (Courage & Howe, 2002; Damon & Hart, 1982, 1988, 1992). But how can we know when the child recognizes that the girl in the mirror is indeed herself, not just an agreeable playmate?

In a simple variation of the mirror procedure, researchers sneakily dabbed rouge on children’s noses before placing them in front of the mirror. At about 15 to 18 months, children will begin to touch their own noses when they see the red spot in the mirror (Butterworth, 1992; Gallup & Suarez, 1986). Apparently, 18-month-olds have a schema of how their face should look, and they wonder, “What is that spot doing on my face?”

Beginning with this simple self-recognition, the child’s self-concept gradually strengthens. By school age, children start to describe themselves in terms of their gender, group memberships, and psychological traits, and they compare themselves with other children (Newman & Ruble, 1988; Stipek, 1992). They come to see themselves as good and skillful in some ways but not others.
They form a concept of which traits, ideally, they would like to have. By age 8 or 10, their self-image is quite stable.

As adolescents and adults, will our self-esteem be lower if we have experienced adoption? That’s what Dutch researchers Femmie Juffer and Marinus van IJzendoorn (2007) predicted, given that some adopted children will have suffered early neglect or abuse, will know that their biological parents gave them up, and will often look different from their adoptive parents. To check their presumption, they mined data from 88 studies comparing the self-esteem scores of 10,977 adoptees and 33,862 nonadoptees. To their surprise, they found “no difference in self-esteem.” This was true even for transracial and international adoptees. Many adoptees face challenges, the researchers acknowledge, but “supported by the large investment of adoptive families” they display resilience.

Children’s views of themselves affect their actions. Children who form a positive self-concept are more confident, independent, optimistic, assertive, and sociable (Maccoby, 1980). This then raises an important question: How can parents encourage a positive yet realistic self-concept?


Self-awareness Mirror images fascinate infants from the age of about 6 months. Only at about 18 months, however, does the child recognize that the image in the mirror is “me.”

Self-aware animals After prolonged exposure to mirrors, several species—chimpanzees, orangutans, gorillas, dolphins, elephants, and magpies—have similarly demonstrated self-recognition of their mirror image (Gallup, 1970; Reiss & Marino, 2001; Prior et al., 2008). In an experiment by Joshua Plotnik and colleagues (2006), an Asian elephant, when facing a mirror, repeatedly used her trunk to touch an “X” painted above her eye (but not a similar mark above the other eye that was visible only under black light).



Parenting Styles

Some parents spank, some reason. Some are strict, some are lax. Some show little affection, some liberally hug and kiss. Do such differences in parenting styles affect children? The most heavily researched aspect of parenting has been how, and to what extent, parents seek to control their children. Investigators have identified three parenting styles:

1. Authoritarian parents impose rules and expect obedience: “Don’t interrupt.” “Keep your room clean.” “Don’t stay out late or you’ll be grounded.” “Why? Because I said so.”
2. Permissive parents submit to their children’s desires. They make few demands and use little punishment.
3. Authoritative parents are both demanding and responsive. They exert control by setting rules and enforcing them, but they also explain the reasons for rules. And, especially with older children, they encourage open discussion when making the rules and allow exceptions.

Too hard, too soft, and just right, these styles have been called. Studies by Stanley Coopersmith (1967), Diana Baumrind (1996), and John Buri and others (1988) reveal that children with the highest self-esteem, self-reliance, and social competence usually have warm, concerned, authoritative parents. (Those with authoritarian parents tend to have less social skill and self-esteem, and those with permissive parents tend to be more aggressive and immature.)

The participants in most studies have been middle-class White families, and some critics suggest that effective parenting may vary by culture. Yet studies with families of other races and in more than 200 cultures worldwide confirm the social and academic correlates of loving and authoritative parenting (Rohner & Veneziano, 2001; Sorkhabi, 2005; Steinberg & Morris, 2001). And the effects are stronger when children are embedded in authoritative communities with connected adults who model a good life (Commission on Children at Risk, 2003).
A word of caution: The association between certain parenting styles (being firm but open) and certain childhood outcomes (social competence) is correlational. Correlation is not causation. Here are two possible alternative explanations for this parenting-competence link. (Can you imagine others?)

▸ Children’s traits may influence parenting more than vice versa. Parental warmth and control vary somewhat from child to child, even in the same family (Holden & Miller, 1999). So perhaps socially mature, agreeable, easygoing children evoke greater trust and warmth from their parents, and less competent and less cooperative children elicit less. Twin studies support this possibility (Kendler, 1996).

▸ Some underlying third factor may be at work. Perhaps, for example, competent parents and their competent children share genes that predispose social competence. Twin studies also support this possibility (South et al., 2008).

Parents struggling with conflicting advice and with the stresses of child-rearing should remember that all advice reflects the advice-giver’s values. For those who prize unquestioning obedience from a child, an authoritarian style may have the desired effect. For those who value children’s sociability and self-reliance, authoritative firm-but-open parenting is advisable.

The investment in raising a child buys many years not only of joy and love but of worry and irritation. Yet for most people who become parents, a child is one’s biological and social legacy—one’s personal investment in the human future. Remind young adults of their mortality and they will express increased desire for children (Wisman & Goldenberg, 2005). To paraphrase psychiatrist Carl Jung, we reach backward into our parents and forward into our children, and through their children into a future we will never see, but about which we must therefore care.

“You are the bows from which your children as living arrows are sent forth.” Kahlil Gibran, The Prophet, 1923


Review Infancy and Childhood

14-1 During infancy and childhood, how do the brain and motor skills develop?
The brain’s nerve cells are sculpted by heredity and experience; their interconnections multiply rapidly after birth. Our complex motor skills—sitting, standing, walking—develop in a predictable sequence whose timing is a function of individual maturation and culture. We lose conscious memories of experiences from before about age 3½, in part because major areas of the brain have not yet matured.

14-2 From the perspective of Piaget and of today’s researchers, how does a child’s mind develop?
Piaget proposed that through assimilation and accommodation, children actively construct and modify their understanding of the world. They form schemas that help them organize their experiences. Progressing from the simplicity of the sensorimotor stage of the first two years, in which they develop object permanence, children move to more complex ways of thinking. In the preoperational stage they develop a theory of mind (absent in children with autism), but they are egocentric and unable to perform simple logical operations. At about age 6 or 7, they enter the concrete operational stage and can perform concrete operations, such as those required to comprehend the principle of conservation. By about age 12, children enter the formal operational stage and can reason systematically. Research supports the sequence Piaget proposed for the unfolding of human cognition, but it also shows that young children are more capable, and their development more continuous, than he believed.

14-3 How do parent-infant attachment bonds form?
At about 8 months, infants separated from their caregivers display stranger anxiety. Infants form attachments not simply because parents gratify biological needs but, more important, because they are comfortable, familiar, and responsive.
Ducks and other animals have a more rigid attachment process, called imprinting, that occurs during a critical period. Neglect or abuse can disrupt the attachment process. Infants’ differing attachment styles reflect both their individual temperament and the responsiveness of their parents and child-care providers.

14-4 How have psychologists studied attachment differences, and what have they learned?
Attachment has been studied in strange situation experiments, which show that some children are securely attached and others are insecurely attached. Sensitive, responsive parents tend to have securely attached children. Adult relationships seem to reflect the attachment styles of early childhood, lending support to Erikson’s idea that basic trust is formed in infancy by our experiences with responsive caregivers.

14-5 Do parental neglect, family disruption, or day care affect children’s attachments?
Children are very resilient. But those who are moved repeatedly, severely neglected by their parents, or otherwise prevented from forming attachments by age 2 may be at risk for attachment problems. Quality day care, with responsive

adults interacting with children in a safe and stimulating environment, does not appear to harm children’s thinking and language skills. Some studies have linked extensive time in day care with increased aggressiveness and defiance, but other factors—the child’s temperament, the parents’ sensitivity, and the family’s economic and educational levels and culture—also matter.

14-6 How do children’s self-concepts develop, and how are children’s traits related to parenting styles?
Self-concept, a sense of one’s identity and personal worth, emerges gradually. At 15 to 18 months, children recognize themselves in a mirror. By school age, they can describe many of their own traits, and by age 8 to 10 their self-image is stable. Parenting styles—authoritarian, permissive, and authoritative—reflect varying degrees of control. Children with high self-esteem tend to have authoritative parents and to be self-reliant and socially competent, but the direction of cause and effect in this relationship is not clear.

Terms and Concepts to Remember
maturation, p. 174
cognition, p. 176
schema, p. 177
assimilation, p. 177
accommodation, p. 177
sensorimotor stage, p. 178
object permanence, p. 178
preoperational stage, p. 180
conservation, p. 180
egocentrism, p. 180
theory of mind, p. 181
autism, p. 182
concrete operational stage, p. 184
formal operational stage, p. 184
stranger anxiety, p. 185
attachment, p. 186
critical period, p. 187
imprinting, p. 187
basic trust, p. 189
self-concept, p. 192

Test Yourself

1. Use Piaget’s first three stages of cognitive development to explain why young children are not just miniature adults in the way they think.

(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself

1. Can you recall a time when you misheard some song lyrics because you assimilated them into your own schema? (For hundreds of examples of this phenomenon, visit www.kissthisguy.com.)

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 15 Adolescence

Physical Development
Cognitive Development
Social Development
Emerging Adulthood

Many psychologists once believed that childhood sets our traits. Today’s developmental psychologists see development as lifelong. At a five-year high school reunion, former soul mates may be surprised at their divergence; a decade later, they may have trouble sustaining a conversation. As the life-span perspective emerged, psychologists began to look at how maturation and experience shape us not only in infancy and childhood, but also in adolescence and beyond.

Adolescence—the years spent morphing from child to adult—starts with the physical beginnings of sexual maturity and ends with the social achievement of independent adult status (which means that in some cultures, where teens are self-supporting, adolescence hardly exists). In industrialized countries, what are the teen years like? In Leo Tolstoy’s Anna Karenina, the teen years were “that blissful time when childhood is just coming to an end, and out of that vast circle, happy and gay, a path takes shape.” But another teenager, Anne Frank, writing in her diary while hiding from the Nazis, described tumultuous teen emotions:

My treatment varies so much. One day Anne is so sensible and is allowed to know everything; and the next day I hear that Anne is just a silly little goat who doesn’t know anything at all and imagines that she’s learned a wonderful lot from books. . . . Oh, so many things bubble up inside me as I lie in bed, having to put up with people I’m fed up with, who always misinterpret my intentions.

G. Stanley Hall (1904), one of the first psychologists to describe adolescence, believed that this tension between biological maturity and social dependence creates a period of “storm and stress.” Indeed, after age 30, many who grow up in independence-fostering Western cultures look back on their teenage years as a time they would not want to relive, a time when their peers’ social approval was imperative, their sense of direction in life was in flux, and their feeling of alienation from their parents was deepest (Arnett, 1999; Macfarlane, 1964). But for many, adolescence is a time of vitality without the cares of adulthood, a time of rewarding friendships, of heightened idealism and a growing sense of life’s exciting possibilities.

|| How will you look back on your life 10 years from now? Are you making choices that someday you will recollect with satisfaction? ||

adolescence the transition period from childhood to adulthood, extending from puberty to independence.

puberty the period of sexual maturation, during which a person becomes capable of reproducing.

primary sex characteristics the body structures (ovaries, testes, and external genitalia) that make sexual reproduction possible.

secondary sex characteristics nonreproductive sexual characteristics, such as female breasts and hips, male voice quality, and body hair.

Physical Development

15-1 What physical changes mark adolescence?

Adolescence begins with puberty, the time when we mature sexually. Puberty follows a surge of hormones, which may intensify moods and which trigger a two-year period of rapid physical development, usually beginning at about age 11 in girls and at about age 13 in boys. About the time of puberty, boys' growth propels them to greater height than their female counterparts (FIGURE 15.1). During this growth spurt, the primary sex characteristics—the reproductive organs and external genitalia—develop dramatically. So do secondary sex characteristics, the nonreproductive traits such as breasts and hips in girls, facial hair and deepened voice in boys,


FIGURE 15.1 Height differences (height in centimeters, by age in years) Throughout childhood, boys and girls are similar in height. At puberty, girls surge ahead briefly, but then boys overtake them at about age 14. (Data from Tanner, 1978.) Recent studies suggest that sexual development and growth spurts are beginning somewhat earlier than was the case a half-century ago (Herman-Giddens et al., 2001).
|| Menarche appears to occur a few months earlier, on average, for girls who have experienced stresses related to father absence or sexual abuse (Vigil et al., 2005; Zabin et al., 2005). ||

pubic and underarm hair in both sexes (FIGURE 15.2). A year or two before puberty, however, boys and girls often feel the first stirrings of attraction toward those of the other (or their own) sex (McClintock & Herdt, 1996). In girls, puberty starts with breast development, which now often begins by age 10 (Brody, 1999). But puberty's landmarks are the first ejaculation in boys, usually by about age 14, and the first menstrual period in girls, usually within a year of age 12½ (Anderson et al., 2003). The first menstrual period, called menarche (meh-NAR-key), is a memorable event. Nearly all adult women recall it and remember experiencing a mixture of feelings—pride, excitement, embarrassment, and apprehension (Greif & Ulman, 1982; Woods et al., 1983). Girls who have been prepared for

FIGURE 15.2 Body changes at puberty At about age 11 in girls and age 13 in boys, a surge of hormones triggers a variety of physical changes.

In both sexes, the pituitary gland releases hormones that stimulate the adrenal glands, the ovaries (in girls), and the testes (in boys) to release hormones that in turn stimulate: in girls, breast development, underarm and pubic hair growth, enlargement of the uterus, and the beginning of menstruation; in boys, facial, underarm, and pubic hair growth, larynx enlargement, growth of the penis and testes, and the beginning of ejaculation.

menarche [meh-NAR-key] the first menstrual period.

© The New Yorker Collection 2006 Barbara Smaller from cartoonbank.com. All rights reserved.

menarche usually experience it as a positive life transition. Most men similarly recall their first ejaculation (spermarche), which usually occurs as a nocturnal emission (Fuller & Downs, 1990).

Just as in the earlier life stages, the sequence of physical changes in puberty (for example, breast buds and visible pubic hair before menarche) is far more predictable than their timing. Some girls start their growth spurt at 9, some boys as late as age 16. Though such variations have little effect on height at maturity, they may have psychological consequences. For boys, early maturation pays dividends: Being stronger and more athletic during their early teen years, they tend to be more popular, self-assured, and independent, though also more at risk for alcohol use, delinquency, and premature sexual activity (Lynne et al., 2007; Steinberg & Morris, 2001). For girls, early maturation can be stressful (Mendle et al., 2007). If a young girl's body is out of sync with her own emotional maturity and her friends' physical development and experiences, she may begin associating with older adolescents or may suffer teasing or sexual harassment. It is not only when we mature that counts, but how people react to our genetically influenced physical development. Remember: Heredity and environment interact.

An adolescent's brain is also a work in progress. Until puberty, brain cells increase their connections, like trees growing more roots and branches. Then, during adolescence, comes a selective pruning of unused neurons and connections (Blakemore, 2008). What we don't use, we lose. It's rather like traffic engineers reducing congestion by eliminating certain streets and constructing new beltways that move traffic more efficiently.

As teens mature, their frontal lobes also continue to develop. The growth of myelin, the fatty tissue that forms around axons and speeds neurotransmission, enables better communication with other brain regions (Kuhn, 2006; Silveri et al., 2006).
These developments bring improved judgment, impulse control, and the ability to plan for the long term. Frontal lobe maturation lags the emotional limbic system. Puberty's hormonal surge and limbic system development help explain teens' occasional impulsiveness, risky behaviors, and emotional storms—slamming doors and turning up the music (Casey et al., 2008). No wonder younger teens (whose unfinished frontal lobes aren't yet fully equipped for making long-term plans and curbing impulses) so often succumb to the lure of smoking, which most adult smokers could tell them they will later regret. Teens actually don't underestimate the risks of smoking—or driving fast or unprotected sex—they just, when reasoning from their gut, weigh the benefits more heavily (Reyna & Farley, 2006; Steinberg, 2007).

So, when Junior drives recklessly and academically self-destructs, should his parents reassure themselves that "he can't help it; his frontal cortex isn't yet fully grown"? They can at least take hope: The brain with which Junior begins his teens differs from the brain with which he will end his teens. Unless he slows his brain development with heavy drinking—leaving him prone to impulsivity and addiction—his frontal lobes will continue maturing until about age 25 (Beckman, 2004; Crews et al., 2007).

In 2004, the American Psychological Association joined seven other medical and mental health associations in filing U.S. Supreme Court briefs, arguing against the death penalty for 16- and 17-year-olds. The briefs documented the teen brain's immaturity "in areas that bear upon adolescent decision-making." Teens are "less guilty by reason of adolescence," suggested psychologist Laurence Steinberg and law professor Elizabeth Scott (2003). In 2005, by a 5-to-4 margin, the Court concurred, declaring juvenile death penalties unconstitutional.

“Young man, go to your room and stay there until your cerebral cortex matures.”

“If a gun is put in the control of the prefrontal cortex of a hurt and vengeful 15-year-old, and it is pointed at a human target, it will very likely go off.” National Institutes of Health brain scientist Daniel R. Weinberger, “A Brain Too Young for Good Judgment,” 2001


“When the pilot told us to brace and grab our ankles, the first thing that went through my mind was that we must all look pretty stupid.” Jeremiah Rawlings, age 12, after a 1989 DC-10 crash in Sioux City, Iowa

Cognitive Development

15-2 How did Piaget, Kohlberg, and later researchers describe adolescent cognitive and moral development?

As young teenagers become capable of thinking about their thinking, and of thinking about other people's thinking, they begin imagining what other people are thinking about them. (Adolescents might worry less if they understood their peers' similar preoccupation.) As their cognitive abilities mature, many begin to think about what is ideally possible and compare that with the imperfect reality of their society, their parents, and even themselves.

Developing Reasoning Power

During the early teen years, reasoning is often self-focused. Adolescents may think their private experiences are unique, something parents just could not understand: "But, Mom, you don't really know how it feels to be in love" (Elkind, 1978). Gradually, though, most achieve the intellectual summit Jean Piaget called formal operations, and they become more capable of abstract reasoning. Adolescents ponder and debate human nature, good and evil, truth and justice. Having left behind the concrete images of early childhood, they may now seek a deeper conception of God and existence (Elkind, 1970; Worthington, 1989). The ability to reason hypothetically and deduce consequences also enables them to detect inconsistencies in others' reasoning and to spot hypocrisy. This can lead to heated debates with parents and silent vows never to lose sight of their own ideals (Peterson et al., 1986).

"Ben is in his first year of high school, and he's questioning all the right things." (Drawing by Koren; © 1992 The New Yorker Magazine, Inc.)

Demonstrating their reasoning ability Although on opposite sides of the Iraq war debate, these teens are demonstrating their ability to think logically about abstract topics. According to Piaget, they are in the final cognitive stage, formal operations. (AP/Wide World Photos; William Thomas Cain/Getty Images)

Developing Morality

"It is a delightful harmony when doing and saying go together." Michel Eyquem de Montaigne (1533–1592)

Two crucial tasks of childhood and adolescence are discerning right from wrong and developing character—the psychological muscles for controlling impulses. Much of our morality is rooted in gut-level reactions, for which the mind seeks rationalization (Haidt, 2006). Often, reason justifies passions such as disgust or liking. Yet to be a moral person is to think morally and act accordingly. Piaget (1932) believed that children's moral judgments build on their cognitive development. Agreeing with Piaget, Lawrence Kohlberg (1981, 1984) sought to describe the development of moral reasoning, the thinking that occurs as we consider right and wrong. Kohlberg posed moral dilemmas (for example, whether a person should steal medicine to save a loved one's life) and asked children, adolescents, and adults if the action was right or wrong. He then analyzed their answers for evidence of stages of moral thinking.


Moral reasoning, New Orleans Hurricane Katrina victims were faced with a moral dilemma: Should they steal household necessities? Their reasoning likely reflected different levels of moral thinking, even if they behaved similarly. (AP Photo/Eric Gray)

Kohlberg's findings led him to believe that as we develop intellectually, we pass through three basic levels of moral thinking:

• Preconventional morality Before age 9, most children's morality focuses on self-interest: They obey rules either to avoid punishment or to gain concrete rewards.

• Conventional morality By early adolescence, morality focuses on caring for others and on upholding laws and social rules, simply because they are the laws and rules.

• Postconventional morality With the abstract reasoning of formal operational thought, people may reach a third moral level. Actions are judged "right" because they flow from people's rights or from self-defined, basic ethical principles.

Kohlberg claimed these levels form a moral ladder. As with all stage theories, the sequence is unvarying. We begin on the bottom rung and ascend to varying heights. Research confirms that children in various cultures progress from Kohlberg's preconventional level into his conventional level (Gibbs et al., 2007). The postconventional level is more controversial. It appears mostly in the European and North American educated middle class, which prizes individualism—giving priority to one's own goals rather than to group goals (Eckensberger, 1994; Miller & Bersoff, 1995). Critics therefore contend that Kohlberg's theory is biased against the moral reasoning of members of collectivist societies such as China and India. Moreover, people's thinking about real-world moral choices also engages their emotions, and moral feelings don't easily fit into Kohlberg's neat stages (Krebs & Denton, 2005).

Moral Feeling The mind makes moral judgments as it makes aesthetic judgments—quickly and automatically. We feel disgust when seeing people engaged in degrading or subhuman acts, and we feel elevation—a tingly, warm, glowing feeling in the chest—when seeing people display exceptional generosity, compassion, or courage. One woman recalled driving through her snowy neighborhood with three young men as they passed “an elderly woman with a shovel in her driveway. I did not think much of it, when one of the guys in the back asked the driver to let him off there. . . . When I saw him jump out of the back seat and approach the lady, my mouth dropped in shock as I realized that he was offering to shovel her walk for her.” Witnessing this unexpected goodness triggered elevation: “I felt like jumping out of the car and hugging this guy. I felt like singing and running, or skipping and laughing. I felt like saying nice things about people” (Haidt, 2000). In Jonathan Haidt’s (2002, 2007, 2008) social intuitionist account of morality, moral feelings precede moral reasoning. “Could human morality really be run by the moral

“I am a bit suspicious of any theory that says that the highest moral stage is one in which people talk like college professors.” James Q. Wilson, The Moral Sense, 1993

Drawing by Vietor; © 1987 The New Yorker Magazine, Inc.


“This might not be ethical. Is that a problem for anybody?”


emotions," he wonders, "while moral reasoning struts about pretending to be in control?" Indeed, he surmises, "moral judgment involves quick gut feelings, or affectively laden intuitions, which then trigger moral reasoning." Moral reasoning—our mind's press secretary—aims to convince ourselves and others of what we intuitively feel.

The social intuitionist explanation of morality finds support from a study of moral paradoxes. Imagine seeing a runaway trolley headed for five people. All will certainly be killed unless you throw a switch that diverts the trolley onto another track, where it will kill one person. Should you throw the switch? Most say yes. Kill one, save five. Now imagine the same dilemma, except that your opportunity to save the five requires you to push a large stranger onto the tracks, where he will die as his body stops the trolley. Kill one, save five? The logic is the same, but most say no. Seeking to understand why, a Princeton research team led by Joshua Greene (2001) used brain imaging to spy on people's neural responses as they contemplated such dilemmas. Only when given the body-pushing type of moral dilemma did their brain's emotion areas light up. Despite the identical logic, the personal dilemma engaged emotions that altered moral judgment. Moral judgment is more than thinking; it is also gut-level feeling.

The gut feelings that drive our moral judgments turn out to be widely shared. To neuroscientist Marc Hauser (2006) this suggests that humans are hard-wired for moral feelings. Faced with moral choices, people across the world, with similar evolved brains, display similar moral intuitions. For example, is it acceptable to kill a healthy man who walks into a hospital that has five dying patients who could be saved by harvesting his organs? Most people say no. We all seem to unconsciously assume that harm caused by an action is worse than harm caused by failing to act (Cushman et al., 2006).
With damage to a brain area that underlies emotions, however, people apply more coldly calculating reasoning to moral dilemmas (Koenigs et al., 2007).

Moral Action

On humanity’s need to delay gratification in response to global climate change: “The benefits of strong early action considerably outweigh the costs.” The Economics of Climate Change, UK Government Economic Service, 2007

Our moral thinking and feeling surely affect our moral talk. But sometimes talk is cheap and emotions are fleeting. Morality involves doing the right thing, and what we do also depends on social influences. As political theorist Hannah Arendt (1963) observed, many Nazi concentration camp guards during World War II were ordinary “moral” people who were corrupted by a powerfully evil situation. Nevertheless, as our thinking matures, our behavior also becomes less selfish and more caring (Krebs & Van Hesteren, 1994; Miller et al., 1996). Today’s character education programs therefore tend to focus both on moral issues and on doing the right thing. They teach children empathy for others’ feelings, and also the self-discipline needed to restrain one’s own impulses—to delay small gratifications now to enable bigger rewards later. Those who do learn to delay gratification become more socially responsible, academically successful, and productive (Funder & Block, 1989; Mischel et al., 1988, 1989). In service-learning programs, teens tutor, clean up their neighborhoods, and assist older people, and their sense of competence and desire to serve increase at the same time that their school absenteeism and drop-out rates diminish (Andersen, 1998; Piliavin, 2003). Moral action feeds moral attitudes.

Social Development

15-3 What are the social tasks and challenges of adolescence?

Theorist Erik Erikson (1963) contended that each stage of life has its own psychosocial task, a crisis that needs resolution. Young children wrestle with issues of trust, then autonomy (independence), then initiative (TABLE 15.1). School-age children strive for competence, feeling able and productive. The adolescent's task, said Erikson, is to synthesize past, present, and future possibilities into a clearer sense of self. Adolescents wonder, "Who am I as an individual? What do I want to do with my life? What values should I live by? What do I believe in?" Erikson called this quest the adolescent's search for identity.

As sometimes happens in psychology, Erikson's interests were bred by his own life experience. As the son of a Jewish mother and a Danish Gentile father, Erikson was "doubly an outsider," reports Morton Hunt (1993, p. 391). He was "scorned as a Jew in school but mocked as a Gentile in the synagogue because of his blond hair and blue eyes." Such episodes fueled his interest in the adolescent struggle for identity.

TABLE 15.1 Erikson's Stages of Psychosocial Development

Stage (approximate age) | Issue | Description of Task
Infancy (to 1 year) | Trust vs. mistrust | If needs are dependably met, infants develop a sense of basic trust.
Toddlerhood (1 to 3 years) | Autonomy vs. shame and doubt | Toddlers learn to exercise their will and do things for themselves, or they doubt their abilities.
Preschool (3 to 6 years) | Initiative vs. guilt | Preschoolers learn to initiate tasks and carry out plans, or they feel guilty about their efforts to be independent.
Elementary school (6 years to puberty) | Industry vs. inferiority | Children learn the pleasure of applying themselves to tasks, or they feel inferior.
Adolescence (teen years into 20s) | Identity vs. role confusion | Teenagers work at refining a sense of self by testing roles and then integrating them to form a single identity, or they become confused about who they are.
Young adulthood (20s to early 40s) | Intimacy vs. isolation | Young adults struggle to form close relationships and to gain the capacity for intimate love, or they feel socially isolated.
Middle adulthood (40s to 60s) | Generativity vs. stagnation | In middle age, people discover a sense of contributing to the world, usually through family and work, or they may feel a lack of purpose.
Late adulthood (late 60s and up) | Integrity vs. despair | Reflecting on his or her life, an older adult may feel a sense of satisfaction or failure.

identity our sense of self; according to Erikson, the adolescent’s task is to solidify a sense of self by testing and integrating various roles.

social identity the “we” aspect of our self-concept; the part of our answer to “Who am I?” that comes from our group memberships.

Forming an Identity

To refine their sense of identity, adolescents in individualistic cultures usually try out different "selves" in different situations. They may act out one self at home, another with friends, and still another at school or on Facebook. If two situations overlap—as when a teenager brings home friends—the discomfort can be considerable. The teen asks, "Which self should I be? Which is the real me?" The resolution is a self-definition that unifies the various selves into a consistent and comfortable sense of who one is—an identity.

For both adolescents and adults, group identities often form around how we differ from those around us. When living in Britain, I became conscious of my Americanness. When spending time with my daughter in Africa, I become conscious of my minority (White) race. When surrounded by women, I am mindful of my gender identity. For international students, for those of a minority ethnic group, for people with a disability, for those on a team, a social identity often forms around their distinctiveness.

“Self-consciousness, the recognition of a creature by itself as a ‘self,’ [cannot] exist except in contrast with an ‘other,’ a something which is not the self.” C. S. Lewis, The Problem of Pain, 1940


Matthias Clamer/Getty Images

Leland Bobbe/Getty Images


Who shall I be today? By varying the way they look, adolescents try out different “selves.” Although we eventually form a consistent and stable sense of identity, the self we present may change with the situation.

But not always. Erikson noticed that some adolescents forge their identity early, simply by adopting their parents' values and expectations. (Traditional, less individualistic cultures inform adolescents about who they are, rather than encouraging them to decide on their own.) Other adolescents may adopt an identity defined in opposition to parents but in conformity with a particular peer group—jocks, preppies, geeks, goths.

Most young people do develop a sense of contentment with their lives. When American teens were asked whether a series of statements described them, 81 percent said yes to "I would choose my life the way it is right now." But others never quite seem to find themselves: The other 19 percent agreed with "I wish I were somebody else." In response to another question, 28 percent agreed that "I often wonder why I exist" (Lyons, 2004). Reflecting on their existence, 75 percent of American collegians say they "discuss religion/spirituality" with friends, "pray," and agree that "we are all spiritual beings" and "search for meaning/purpose in life" (Astin et al., 2004; Bryant & Astin, 2008). This would not surprise Stanford psychologist William Damon and his colleagues (2003), who contend that a key task of adolescent development is to achieve a purpose—a desire to accomplish something personally meaningful that makes a difference to the world beyond oneself.

The late teen years, when many people in industrialized countries begin attending college or working full time, provide new opportunities for trying out possible roles. Many college seniors have achieved a clearer identity and a more positive self-concept than they had as first-year students (Waterman, 1988). In several nationwide studies, researchers have given young Americans tests of self-esteem.
(Sample item: "I am able to do things as well as most other people.") During the early to mid-teen years, self-esteem falls and, for girls, depression scores often increase, but then self-image rebounds during the late teens and twenties (Robins et al., 2002; Twenge & Campbell, 2001; Twenge & Nolen-Hoeksema, 2002).

Identity also becomes more personalized. Daniel Hart (1988) asked American youths of various ages to imagine a machine that would clone (a) what you think and feel, (b) your appearance, or (c) your relationships with friends and family. When he then asked which clone would be "closest to being you?" three-fourths of the seventh-graders chose (c), the clone with the same social network. In contrast, three-fourths of the ninth-graders chose (a), the one with their individual thoughts and feelings.

Erikson contended that the adolescent identity stage is followed in young adulthood by a developing capacity for intimacy. With a clear and comfortable sense of who you are, said Erikson, you are ready to form emotionally close relationships. Such relationships are, for most of us, a source of great pleasure. When Mihaly Csikszentmihalyi (pronounced chick-SENT-me-hi) and Jeremy Hunter (2003) used a beeper to sample the daily experiences of American teens, they found them unhappiest when alone and happiest when with friends. As Aristotle long ago recognized, we humans are "the social animal."

© David Sipress

"She says she's someone from your past who gave birth to you, and raised you, and sacrificed everything so you could have whatever you wanted."

Parent and Peer Relationships

As adolescents in Western cultures seek to form their own identities, they begin to pull away from their parents (Shanahan et al., 2007). The preschooler who can't be close enough to her mother, who loves to touch and cling to her, becomes the 14-year-old who wouldn't be caught dead holding hands with Mom. The transition occurs gradually (FIGURE 15.3). By adolescence, arguments occur more often, usually


FIGURE 15.3 The changing parent-child relationship Interviews from a large, national study of Canadian families reveal that the typically close, warm relationships between parents and preschoolers loosen as children become older. (Percentage with positive, warm interaction with parents, by age of child: 2 to 4, 5 to 8, and 9 to 11 years. Data from Statistics Canada, 1999.)

intimacy in Erikson's theory, the ability to form close, loving relationships; a primary developmental task in late adolescence and early adulthood.

“I love u guys.” Emily Keyes’ final text message to her parents before dying in a Colorado school shooting, 2006

© The New Yorker Collection, 2001, Barbara Smaller from cartoonbank.com. All rights reserved.

over mundane things—household chores, bedtime, homework (Tesser et al., 1989). Parent-child conflict during the transition to adolescence tends to be greater with first-born than with second-born children (Shanahan et al., 2007). For a minority of parents and their adolescents, differences lead to real splits and great stress (Steinberg & Morris, 2001). But most disagreements are at the level of harmless bickering. And most adolescents—6000 of them in 10 countries, from Australia to Bangladesh to Turkey—say they like their parents (Offer et al., 1988). "We usually get along but . . . ," adolescents often report (Galambos, 1992; Steinberg, 1987).

Positive parent-teen relations and positive peer relations often go hand-in-hand. High school girls who have the most affectionate relationships with their mothers tend also to enjoy the most intimate friendships with girlfriends (Gold & Yanof, 1985). And teens who feel close to their parents tend to be healthy and happy and to do well in school (Resnick et al., 1997). Of course, we can state this correlation the other way: Misbehaving teens are more likely to have tense relationships with parents and other adults.

Adolescence is typically a time of diminishing parental influence and growing peer influence. Asked in a survey if they had "ever had a serious talk" with their child about illegal drugs, 85 percent of American parents answered yes. But if the parents had indeed given this earnest advice, many teens apparently had tuned it out: Only 45 percent could recall such a talk (Morin & Brossard, 1997).

Heredity does much of the heavy lifting in forming individual differences in temperament and personality, and parent and peer influences do much of the rest. Most teens are herd animals. They talk, dress, and act more like their peers than their parents. What their friends are, they often become, and what "everybody's doing," they often do.
In teen calls to hotline counseling services, peer relationships are the most discussed topic (Boehm et al., 1999). For those who feel excluded, the pain is acute. “The social atmosphere in most high schools is poisonously clique-driven and exclusionary,” observed social psychologist Elliot Aronson (2001). Most excluded “students suffer in silence. . . . A small number act out in violent ways against their classmates.” Those who withdraw are vulnerable to loneliness, low self-esteem, and depression (Steinberg & Morris, 2001). Peer approval matters.

“How was my day? How was my day? Must you micromanage my life?”


© 2002, Margaret Shulock. Reprinted with special permission of King Features Syndicate.

Teens see their parents as having more influence in other areas—for example, in shaping their religious faith and in thinking about college and career choices (Emerging Trends, 1997). A Gallup Youth Survey reveals that most share their parents' political views (Lyons, 2005).

Emerging Adulthood

15-4 What is emerging adulthood?

emerging adulthood for some people in modern cultures, a period from the late teens to early twenties, bridging the gap between adolescent dependence and full independence and responsible adulthood.

FIGURE 15.4 The transition to adulthood is being stretched from both ends In the 1890s, the average interval between a woman's first menstrual period and marriage, which typically marked a transition to adulthood, was about 7 years; in industrialized countries today it is about 12 years (Guttmacher, 1994, 2000). Although many adults are unmarried, later marriage combines with prolonged education and earlier menarche to help stretch out the transition to adulthood.

Nine times out of ten, it’s all about peer pressure.

In young adulthood, emotional ties with parents loosen further. During their early twenties, many people still lean heavily on their parents. But by the late twenties, most feel more comfortably independent and better able to empathize with parents as fellow adults (Frank, 1988; White, 1983).

This graduation from adolescence to adulthood is now taking longer. In the Western world, adolescence now roughly corresponds to the teen years. At earlier times, and still today in other parts of the world, this slice of the life span has been much smaller (Baumeister & Tice, 1986). Shortly after sexual maturity, such societies bestowed adult responsibilities and status on the young person, often marking the event with an elaborate initiation—a public rite of passage. With society's blessing, the new adult would then work, marry, and have children.

When schooling became compulsory in many Western countries, independence began occurring later. In industrialized cultures from Europe to Australia, adolescents are now taking more time to finish college, leave the nest, and establish careers. In the United States, for example, the average age at first marriage varies by ethnic group but has increased more than 4 years since 1960 (to 27 for men, 25 for women).

While cultural traditions were changing, Western adolescents were also beginning to develop earlier. Today's earlier sexual maturity is related both to increased body fat (which can support pregnancy and nursing) and to weakened parent-child bonds, including absent fathers (Ellis, 2004). Together, delayed independence and earlier sexual maturity have widened the once-brief interlude between biological maturity and social independence (FIGURE 15.4). Especially for those still in school, the time from 18 to the mid-twenties is an increasingly not-yet-settled phase of life, which some now call emerging adulthood

[FIGURE 15.4: In 1890, women averaged a 7.2-year interval from menarche (first period) to marriage; in 1995, a 12.5-year interval.]

(Arnett, 2006, 2007; Reitzle, 2006). Unlike some other cultures with an abrupt transition to adulthood, Westerners typically ease their way into their new status. Those who leave home for college, for example, are separated from parents and, more than ever before, managing their time and priorities. Yet they may remain dependent on their parents’ financial and emotional support and may return home for holidays. For many others, their parents’ home may be the only affordable place to live. No longer adolescents, these emerging adults have not yet assumed full adult responsibilities and independence, and they feel “in between.” But adulthood emerges gradually, and often with diminishing bouts of depression or anger and increased self-esteem (Galambos et al., 2006).


Review Adolescence

15-1 What physical changes mark adolescence?
Adolescence is the transition period between puberty and social independence. During these years, both primary and secondary sex characteristics develop dramatically. Boys seem to benefit from “early” maturation, girls from “late” maturation. The brain’s frontal lobes mature during adolescence and the early twenties, enabling improved judgment, impulse control, and long-term planning.

15-2 How did Piaget, Kohlberg, and later researchers describe adolescent cognitive and moral development?
Piaget theorized that adolescents develop a capacity for formal operations and that this development is the foundation for moral judgment. Kohlberg proposed a stage theory of moral reasoning, from a preconventional morality of self-interest, to a conventional morality concerned with upholding laws and social rules, to (in some people) a postconventional morality of universal ethical principles. Kohlberg’s critics note that morality lies in actions and emotions as well as thinking, and that his postconventional level represents morality from the perspective of individualist, middle-class males.

15-3 What are the social tasks and challenges of adolescence?
Erikson theorized that a chief task of adolescence is solidifying one’s sense of self—one’s identity. This often means “trying on” a number of different roles. During adolescence, parental influence diminishes and peer influence increases.

15-4 What is emerging adulthood?
The transition from adolescence to adulthood is now taking longer. Emerging adulthood is the period from age 18 to the mid-twenties, when many young people are not yet fully independent. But critics note that this stage is found mostly in today’s Western cultures.

Terms and Concepts to Remember

adolescence, p. 195
puberty, p. 195
primary sex characteristics, p. 195
secondary sex characteristics, p. 195
menarche [meh-NAR-key], p. 196
identity, p. 201
social identity, p. 201
intimacy, p. 202
emerging adulthood, p. 204

Test Yourself 1. How has the transition from childhood to adulthood changed in Western cultures in the last century or so? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. What are the most positive and negative things you remember about your own adolescence? And who do you credit or blame more—your parents or your peers?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Physical Development
Cognitive Development
Social Development
Reflections on Two Major Developmental Issues

module 16 Adulthood, and Reflections on Developmental Issues

At one time, psychologists viewed the center-of-life years between adolescence and old age as one long plateau. No longer. Those who follow the unfolding of people’s adult lives now believe our development continues.

Adult abilities vary widely Eighty-seven-year-olds: don’t try this. In 2002, George Blair became the world’s oldest barefoot water skier, 18 days after his eighty-seventh birthday.

“I am still learning.” Michelangelo, 1560, at age 85

|| How old does a person have to be before you think of him or her as old? The average 18- to 29-year-old says 67. The average person 60 and over says 76 (Yankelovich, 1995). ||

It is more difficult to generalize about adulthood stages than about life’s early years. If you know that James is a 1-year-old and Jamal is a 10-year-old, you could say a great deal about each child. Not so with adults who differ by a similar number of years. The boss may be 30 or 60; the marathon runner may be 20 or 50; the 19-year-old may be a parent who supports a child or a child who receives an allowance. Yet our life courses are in some ways similar. Physically, cognitively, and especially socially, we are at age 50 different from our 25-year-old selves.

䉴|| Physical Development

16-1 What physical changes occur during middle and late adulthood?


Our physical abilities—muscular strength, reaction time, sensory keenness, and cardiac output—all crest by the mid-twenties. Like the declining daylight after the summer solstice, the decline of physical prowess begins imperceptibly. Athletes are often the first to notice. World-class sprinters and swimmers peak by their early twenties. Women—who mature earlier than men—also peak earlier. But most of us—especially those of us whose daily lives do not require top physical performance—hardly perceive the early signs of decline.

Physical Changes in Middle Adulthood

Middle-aged (post-40) athletes know all too well that physical decline gradually accelerates (FIGURE 16.1). As a 65-year-old who plays basketball, I now find myself occasionally wondering whether my team really needs me to run for that loose ball. But



even diminished vigor is sufficient for normal activities. Moreover, during early and middle adulthood, physical vigor has less to do with age than with a person’s health and exercise habits. Many of today’s physically fit 50-year-olds run 4 miles with ease, while sedentary 25-year-olds find themselves huffing and puffing up two flights of stairs.

Aging also brings a gradual decline in fertility. For a 35- to 39-year-old woman, the chances of getting pregnant after a single act of intercourse are only half those of a woman 19 to 26 (Dunson et al., 2002). A woman’s foremost biological sign of aging, the onset of menopause, ends her menstrual cycles, usually within a few years of age 50. Her expectations and attitudes will influence the emotional impact of this event. Does she see it as a sign that she is losing her femininity and growing old? Or does she view it as liberation from menstrual periods and fears of pregnancy? As is often the case, our expectations influence our perceptions. Data from Africa support an evolutionary theory of menopause: Infants with a living maternal grandmother—typically a caring and invested family member without young children of her own—have had a greater chance of survival (Shanley et al., 2007).

Men experience no equivalent to menopause—no cessation of fertility, no sharp drop in sex hormones. They do experience a gradual decline in sperm count, testosterone level, and speed of erection and ejaculation. Some may also experience distress related to their perception of declining virility and physical capacities. But most age without such problems. In a national survey of Canadians age 40 to 64, only 3 in 10 rated their sex life as less enjoyable than during their twenties (Wright, 2006). After middle age, most men and women remain capable of satisfying sexual activity.
In another survey by the National Council on Aging, 39 percent of people over 60 expressed satisfaction with the amount of sex they were having and 39 percent said they wished for sex more frequently (Leary, 1998). And in an American Association of Retired Persons sexuality survey, it was not until age 75 or older that most women and nearly half of men reported little sexual desire (DeLamater & Sill, 2005).

FIGURE 16.1 Gradually accelerating decline An analysis of aging and batting averages of all twentieth-century major league baseball players revealed a gradual but accelerating decline in players’ later years (Schall & Smith, 2000). The career performance record of the great Willie Mays is illustrative.

“If the truth were known, we’d have to diagnose [older women] as having P.M.F.—Post-Menstrual Freedom.” Social psychologist Jacqueline Goodchilds (1987)

“The things that stop you having sex with age are exactly the same as those that stop you riding a bicycle (bad health, thinking it looks silly, no bicycle).” Alex Comfort, The Joy of Sex, 2002

Physical Changes in Later Life

Is old age “more to be feared than death” (Juvenal, Satires)? Or is life “most delightful when it is on the downward slope” (Seneca, Epistulae ad Lucilium)? What is it like to grow old? To gauge your own understanding, take the following true/false quiz:

1. Older people become more susceptible to short-term illnesses.
2. During old age many of the brain’s neurons die.
3. If they live to be 90 or older, most people eventually become senile.

menopause the time of natural cessation of menstruation; also refers to the biological changes a woman experiences as her ability to reproduce declines.


4. Recognition memory—the ability to identify things previously experienced—declines with age.
5. Life satisfaction peaks in the fifties and then gradually declines after age 65.

Life Expectancy

The above statements—all false—are among the misconceptions about aging exploded by recent research. Worldwide, life expectancy at birth increased from 49 years in 1950 to 67 in 2004—and to 80 and beyond in some developed countries (PRB, 2004; Sivard, 1996). This increasing life expectancy (humanity’s greatest achievement, say some) combines with decreasing birthrates to make older adults a bigger and bigger population segment, which provides an increasing demand for cruise ships, hearing aids, retirement villages, and nursing homes. By 2050, about 35 percent of Europe’s population likely will be over age 60 (Fernández-Ballesteros & Caprara, 2003). Clearly, countries that have depended on children to care for the aged are destined for a “demographic tsunami.” Russia and Western Europe are also headed for depopulation—from 146 million to 104 million people in Russia by 2050, projects the United Nations (Brooks, 2005). “When an entire continent, healthier, wealthier, and more secure than ever before, fails to create the human future in the most elemental sense—by creating the next generation—something very serious is afoot,” states George Weigel (2005).

Life expectancy differs for males and females; males are more prone to dying. Although 126 male embryos begin life for every 100 females who do so, the sex ratio is down to 105 males for every 100 females at birth (Strickland, 1992). During the first year, male infants’ death rates exceed females’ by one-fourth. Women outlive men by 4 years worldwide and by 5 to 6 years in Canada, the United States, and Australia. (Rather than marrying a man older than themselves, 20-year-old women who want a husband who shares their life expectancy should wait for the 15-year-old boys to mature.) By age 100, females outnumber males 5 to 1. But few of us live to 100.
Even if no one died before age 50, and cancer, heart disease, and infectious illness were eliminated, average life expectancy would still increase only to about 85 or a few years beyond (Barinaga, 1991). The body ages. Its cells stop reproducing. It becomes frail. It becomes vulnerable to tiny insults—hot weather, a fall, a mild infection—that at age 20 would have been trivial. With age (especially when accentuated by smoking, obesity, or stress), people’s chromosome tips, called telomeres, wear down, much as the tip of a shoelace frays. As these protective tips shorten, aging cells may die without being replaced with perfect genetic replicas (Blackburn et al., 2007; Valdes et al., 2005; Zhang et al., 2007). Why do we eventually wear out? Why don’t we, like the bristlecone pine trees, rockfish, and some social insect queens, grow older without withering? One theory, proposed by evolutionary biologists, speculates that the answer relates to our survival as a species: We pass on our genes most successfully when we raise our young and then stop consuming resources. Once we’ve fulfilled our gene-reproducing and nurturing task, there are no natural selection pressures against genes that cause degeneration in later life (Olshansky et al., 1993; Sapolsky & Finch, 1991). The human spirit also affects life expectancy. For example, chronic anger and depression increase our risk of ill health and premature death. Researchers have even observed


World record for longevity? French woman Jeanne Calment, the oldest human in history with authenticated age, died in 1998 at age 122. At age 100, she was still riding a bike. At age 114, she became the oldest film actor ever, by portraying herself in Vincent and Me.


FIGURE 16.2 Postponing a date with the grim reaper? The total number of daily U.S. deaths from 1987 to 2002 increased on the days following Christmas. To researchers Mitsuru Shimizu and Brett Pelham (2008), this adds to the growing evidence of a death-deferral phenomenon. [Graph: daily U.S. deaths, December 23 through December 27, on a scale from 79,000 to 86,000.]

an intriguing death-deferral phenomenon. For example, Mitsuru Shimizu and Brett Pelham (2008) report that, in one recent 15-year period, 2000 to 3000 more Americans died on the two days after Christmas than on Christmas and the two days before (FIGURE 16.2). And the death rate increases when people reach their birthdays, as it did for those who survived to the milestone first day of the new millennium.

Sensory Abilities

“For some reason, possibly to save ink, the restaurants had started printing their menus in letters the height of bacteria.” Dave Barry, Dave Barry Turns Fifty, 1998

FIGURE 16.3 The aging senses Sight, smell, and hearing all are less acute among those over age 70. (From Doty et al., 1984.)

Although physical decline begins in early adulthood, we are not usually acutely aware of it until later life. Visual sharpness diminishes, and distance perception and adaptation to changes in light level are less acute. Muscle strength, reaction time, and stamina also diminish noticeably, as do vision, the sense of smell, and hearing (FIGURE 16.3). In later life, the stairs get steeper, the print gets smaller, and other people seem to mumble more. In Wales, teens’ loitering around a convenience store has been discouraged by a device that emits an aversive high-pitched sound that almost no one over 30 can hear (Lyall, 2005). Some students use that pitch to their advantage with cellphone ringtones that their instructors cannot hear (Vitello, 2006).

[FIGURE 16.3 graphs: proportion of normal (20/20) vision when identifying letters on an eye chart, percent correct when identifying smells, and percent correct when identifying spoken words, each plotted against age from 10 to 90.]

With age, the eye’s pupil shrinks and its lens becomes less transparent, reducing the amount of light reaching the retina. In fact, a 65-year-old retina receives only about one-third as much light as its 20-year-old counterpart (Kline & Schieber, 1985). Thus, to see as well as a 20-year-old when reading or driving, a 65-year-old needs three times as much light—a reason for buying cars with untinted windshields. This also explains why older people sometimes ask younger people, “Don’t you need better light for reading?”

|| Most stairway falls taken by older people occur on the top step, precisely where the person typically descends from a window-lit hallway into the darker stairwell (Fozard & Popkin, 1978). Our knowledge of aging could be used to design environments that would reduce such accidents (National Research Council, 1990). ||

Health

For those growing older, there is both bad and good news about health. The bad news: The body’s disease-fighting immune system weakens, making older people more susceptible to life-threatening ailments such as cancer and pneumonia. The good news: Thanks partly to a lifetime’s accumulation of antibodies, those over 65 suffer fewer short-term ailments, such as common flu and cold viruses. They are, for example, half as likely as 20-year-olds and one-fifth as likely as preschoolers to suffer upper respiratory flu each year (National Center for Health Statistics, 1990). This helps explain why older workers have lower absenteeism rates (Rhodes, 1983).

Aging levies a tax on the brain by slowing our neural processing. Up to the teen years, we process information with greater and greater speed (Fry & Hale, 1996; Kail, 1991). But compared with teens and young adults, older people take a bit more time to react, to solve perceptual puzzles, even to remember names (Bashore et al., 1997; Verhaeghen & Salthouse, 1997). The lag is greatest on complex tasks (Cerella, 1985; Poon, 1987). At video games, most 70-year-olds are no match for a 20-year-old. And, as FIGURE 16.4 indicates, fatal accident rates per mile driven increase sharply after age 75. By age 85, they exceed the 16-year-old level. Nevertheless, because older people drive less, they account for fewer than 10 percent of crashes (Coughlin et al., 2004).

FIGURE 16.4 Age and driver fatalities Slowing reactions contribute to increased accident risks among those 75 and older, and their greater fragility increases their risk of death when accidents happen (NHTSA, 2000). Would you favor driver exams based on performance, not age, to screen out those whose slow reactions or sensory impairments indicate accident risk? [Graph: fatal accidents per 10,000 drivers and per 100 million miles, by age group from 16–19 to 75 and over; the fatal accident rate jumps after age 65, especially when measured per miles driven.]

Brain regions important to memory begin to atrophy during aging (Schacter, 1996). In young adulthood, a small, gradual net loss of brain cells begins, contributing by age 80 to a brain-weight reduction of 5 percent or so. Late-maturing frontal lobes help account for teen impulsivity. Late in life, atrophy of the inhibition-controlling frontal lobes seemingly explains older people’s occasional blunt questions (“Have you put on weight?”) and frank comments (von Hippel, 2007).

In addition to enhancing muscles, bones, and energy and helping to prevent obesity and heart disease, exercising the body feeds the brain and helps compensate for cell loss (Coleman & Flood, 1986). Physical exercise stimulates brain cell development and neural connections, thanks perhaps to increased oxygen and nutrient flow (Kempermann et al., 1998; Pereira et al., 2007). That may explain why active older adults tend to be mentally quick older adults, and why, across 20 studies, sedentary older adults randomly assigned to aerobic exercise programs have exhibited enhanced


memory and sharpened judgment (Colcombe & Kramer, 2003; Colcombe et al., 2004; Weuve et al., 2004). Exercise also promotes neurogenesis (the birth of new nerve cells) in the hippocampus, a brain region important for memory (Pereira et al., 2007). And exercise helps maintain the telomeres protecting the ends of chromosomes (Cherkas et al., 2008). We are more likely to rust from disuse than to wear out from overuse.

Dementia and Alzheimer’s Disease

Some adults do, unfortunately, suffer a substantial loss of brain cells. Up to age 95, the incidence of mental disintegration doubles roughly every 5 years (FIGURE 16.5). A series of small strokes, a brain tumor, or alcohol dependence can progressively damage the brain, causing that mental erosion we call dementia. So, too, can the feared brain ailment, Alzheimer’s disease, which strikes 3 percent of the world’s population by age 75. Alzheimer’s symptoms are not normal aging. (Occasionally forgetting where you laid the car keys is no cause for alarm; forgetting how to get home may suggest Alzheimer’s.)

Alzheimer’s destroys even the brightest of minds. First memory deteriorates, then reasoning. Robert Sayre (1979) recalls his father shouting at his afflicted mother to “think harder,” while his mother, confused, embarrassed, on the verge of tears, randomly searched the house for lost objects. A diminishing sense of smell is associated with the pathology that foretells Alzheimer’s (Wilson et al., 2007). As the disease runs its course, after 5 to 20 years, the person becomes emotionally flat, then disoriented and disinhibited, then incontinent, and finally mentally vacant—a sort of living death, a mere body stripped of its humanity.

Underlying the symptoms of Alzheimer’s is a loss of brain cells and deterioration of neurons that produce the neurotransmitter acetylcholine. Deprived of this vital chemical messenger, memory and thinking suffer. An autopsy reveals two telltale abnormalities in these acetylcholine-producing neurons: shriveled protein filaments in the cell body, and plaques (globs of degenerating tissue) at the tips of neuron branches. In one line of research, scientists are working to develop drugs that will block proteins from aggregating into plaques or that will lower the levels of the culprit protein, much as cholesterol-lowering drugs help prevent heart disease (Grady, 2007; Wolfe, 2006).

“We’re keeping people alive so they can live long enough to get Alzheimer’s disease.” Steve McConnell, Alzheimer’s Association Vice President, 2007

FIGURE 16.5 Incidence of dementia (mental disintegration) by age Risk of dementia due to Alzheimer’s disease or a series of strokes doubles about every 5 years in later life. (From Jorm et al., 1987, based on 22 studies in industrial nations.) [Graph: percentage with dementia, from 0 to 40 percent, by age group from 60–64 to 90–95.]


FIGURE 16.6 Predicting Alzheimer’s disease During a memory test, MRI scans of the brains of people at risk for Alzheimer’s (left) revealed more intense activity (yellow, followed by orange and red) when compared with normal brains (right). As brain scans and genetic tests make it possible to identify those likely to suffer Alzheimer’s, would you want to be tested? At what age?

Researchers are gaining insights into the chemical, neural, and genetic roots of Alzheimer’s (Gatz, 2007; Rogaeva et al., 2007). In people at risk for this disease, brain scans (FIGURE 16.6) reveal—before symptoms appear—the telltale degeneration of critical brain cells and diminished activity in brain areas affected by Alzheimer’s (Apostolova et al., 2006; Johnson et al., 2006; Wu & Small, 2006). When memorizing words, these at-risk people also show diffuse brain activity, as if more exertion were required to achieve the same performance (Bookheimer et al., 2000).

Physically active, nonobese people are less at risk for Alzheimer’s (Abbott et al., 2004; Gustafson et al., 2003; Marx, 2005). So, too, are those with an active, challenged mind—often the mind of an educated, active reader (Wilson & Bennett, 2003). As with muscles, so with the brain: Those who use it, less often lose it.

䉴|| Cognitive Development

16-2 How do memory and intelligence change with age?

Among the most intriguing developmental psychology questions is whether adult cognitive abilities, such as memory, intelligence, and creativity, parallel the gradually accelerating decline of physical abilities.

Aging and Memory

As we age, we remember some things well. Looking back in later life, people asked to recall the one or two most important events over the last half-century tend to name events from their teens or twenties (Conway et al., 2005; Rubin et al., 1998). Whatever people experience around this time of life—the Iraq war, the events of 9/11, the civil rights movement, World War II—becomes pivotal (Pillemer, 1998; Schuman & Scott, 1989). Our teens and twenties are a time of so many memorable “firsts”—first date, first job, first going to college or university, first meeting your parents-in-law.

Early adulthood is indeed a peak time for some types of learning and remembering. In one experiment, Thomas Crook and Robin West (1990) invited 1205 people to learn some names. Fourteen videotaped people said their names, using a common format: “Hi, I’m Larry.” Then the same individuals reappeared and said, for example, “I’m from Philadelphia”—thus providing visual and voice cues for remembering their name. As FIGURE 16.7 shows, everyone remembered more names after a second and third replay of the introductions, but younger adults consistently surpassed older adults. Perhaps it is not surprising, then, that nearly two-thirds of people over age 40 say their memory is worse than it was 10 years ago (KRC, 2001).

But consider another experiment (Schonfield & Robertson, 1966), in which adults of various ages learned a

|| If you are within five years of 20, what experiences from your last year will you likely never forget? (This is the time of your life you may best remember when you are 50.) ||

FIGURE 16.7 Tests of recall Recalling new names introduced once, twice, or three times is easier for younger adults than for older ones. (Data from Crook & West, 1990.)


list of 24 words. Without giving any clues, the researchers then asked some to recall as many words as they could from the list, and others simply to recognize words, using multiple-choice questions. Although younger adults had better recall, no age-related memory decline appeared on the recognition tests (FIGURE 16.8). So, how well older people remember depends: Are they being asked simply to recognize what they have tried to memorize (minimal decline) or to recall it without clues (greater decline)?

Prospective memory (“Remember to . . .”) remains strong when events help trigger memories, as when walking by a convenience store triggers a “Pick up milk!” memory. Time-based tasks (“Remember the 3:00 P.M. meeting”) prove somewhat more challenging for older people. Habitual tasks, such as remembering to take medications three times daily, can be especially challenging (Einstein et al., 1990, 1995, 1998). Teens and young adults surpass both young children and 70-year-olds at remembering to do something (Zimmerman & Meier, 2006). To minimize problems associated with declining prospective memory, older adults rely more on time management and on using reminder cues, such as notes to themselves (Henry et al., 2004).

Those who study our capacity to learn and remember are aware of one other important complication: Right through our later years, we continue to diverge. Younger adults differ in their abilities to learn and remember, but 70-year-olds differ much more. “Differences between the most and least able 70-year-olds become much greater than between the most and least able 50-year-olds,” reports Oxford researcher Patrick Rabbitt (2006). Some 70-year-olds perform below nearly all 20-year-olds; other 70-year-olds match or outdo the average 20-year-old.

But no matter how quick or slow we are, remembering seems also to depend on the type of information we are trying to retrieve.
If the information is meaningless—nonsense syllables or unimportant events—then the older we are, the more errors we are likely to make. If the information is meaningful, older people’s rich web of existing knowledge will help them to catch it, though they may take longer than younger adults to produce the words and things they know (Burke & Shafto, 2004). (Quick-thinking game show winners are usually younger to middle-aged adults.) Older people’s capacity to learn and remember skills also declines less than their verbal recall (Graf, 1990; Labouvie-Vief & Schell, 1982; Perlmutter, 1983).

FIGURE 16.8 Recall and recognition in adulthood In this experiment, the ability to recall new information declined during early and middle adulthood, but the ability to recognize new information did not. (From Schonfield & Robertson, 1966.)

Aging and Intelligence

What happens to our broader intellectual powers as we age? Do they gradually decline, as does our ability to recall new material? Or do they remain constant, as does our ability to recognize meaningful material? The quest for answers to these questions makes an interesting research story, one that illustrates psychology’s self-correcting process (Woodruff-Pak, 1989). This research developed in phases.

Phase I: Cross-Sectional Evidence for Intellectual Decline

In cross-sectional studies, researchers at one point in time test and compare people of various ages. When giving intelligence tests to representative samples of people, researchers consistently find that older adults give fewer correct answers than do younger adults. David Wechsler (1972), creator of the most widely used adult intelligence test, therefore concluded that “the decline of mental ability with age is part of the general [aging] process of the organism as a whole.”

For a long time, this rather dismal view of mental decline went unchallenged. Many corporations established mandatory retirement policies, assuming the companies would benefit by replacing aging workers with younger, presumably more capable, employees. As everyone “knows,” you can’t teach an old dog new tricks.

cross-sectional study a study in which people of different ages are compared with one another.


Phase II: Longitudinal Evidence for Intellectual Stability

After colleges began giving intelligence tests to entering students about 1920, several psychologists saw their chance to study intelligence longitudinally—retesting the same people over a period of years. What they expected to find was a decrease in intelligence after about age 30 (Schaie & Geiwitz, 1982). What they actually found was a surprise: Until late in life, intelligence remained stable (FIGURE 16.9). On some tests, it even increased.

How then are we to account for the cross-sectional findings? In retrospect, researchers saw the problem. When cross-sectional studies compared 70-year-olds and 30-year-olds, they compared people not only of two different ages but of two different eras. They compared generally less-educated people (born, say, in the early 1900s) with better-educated people (born after 1950), people raised in large families with people raised in smaller families, people growing up in less affluent families with people raised in more affluent families.

According to this more optimistic view, the myth that intelligence sharply declines with age was laid to rest. At age 70, John Rock developed the birth control pill. At age 78, Grandma Moses took up painting, and she was still painting after age 100. At age 81—and 17 years from the end of his college football coaching career—Amos Alonzo Stagg was named coach of the year. At age 89, architect Frank Lloyd Wright designed New York City’s Guggenheim Museum. As everyone “knows,” given good health you’re never too old to learn.

FIGURE 16.9 Cross-sectional versus longitudinal testing of intelligence at various ages In this test of one type of verbal intelligence (inductive reasoning), the cross-sectional method produced declining scores with age (suggesting decline), while the longitudinal method (in which the same people were retested over a period of years) produced a slight rise in scores well into adulthood (suggesting more stability). The graph plots reasoning ability scores against ages 25 to 81. (Adapted from Schaie, 1994.)
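The cohort confound described above can be sketched in a short simulation. This is purely illustrative and not from the text: every number below (the base ability of 50, the size of the cohort boost, the test year) is a hypothetical assumption chosen only to mimic the pattern in Figure 16.9, where ability is actually stable with age but later-born cohorts score higher.

```python
# Illustrative simulation: how a cohort effect can make stable intelligence
# look like age-related decline in cross-sectional data. All numbers are
# hypothetical assumptions, not data from the studies cited in the text.

def cross_sectional_scores(test_year=1990):
    """Compare people of different ages, all tested in the same year."""
    scores = {}
    for age in range(25, 82, 7):  # ages 25, 32, ..., 81, as in Figure 16.9
        birth_year = test_year - age
        # Hypothetical cohort effect: later-born cohorts score higher
        # (more education, smaller families, greater affluence).
        cohort_boost = (birth_year - 1900) * 0.2
        true_ability = 50.0  # assume ability is actually stable with age
        scores[age] = true_ability + cohort_boost
    return scores

def longitudinal_scores(birth_year=1900):
    """Retest the same cohort as it ages; its cohort boost never changes."""
    cohort_boost = (birth_year - 1900) * 0.2
    return {age: 50.0 + cohort_boost for age in range(25, 82, 7)}

cross = cross_sectional_scores()
longi = longitudinal_scores()

# Cross-sectional snapshot: older people score lower only because they
# were born earlier, not because they declined.
assert cross[25] > cross[81]
# Longitudinal tracking of one cohort: scores stay flat across ages.
assert longi[25] == longi[81]
```

The point of the sketch is that both methods measure the same underlying (stable) ability; only the cross-sectional comparison mixes age differences with cohort differences.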

|| Like older people, older gorillas process information more slowly (Anderson et al., 2005). ||

“In youth we learn, in age we understand.” Marie von Ebner-Eschenbach, Aphorisms, 1883

Phase III: It All Depends

With “everyone knowing” two different and opposing facts about age and intelligence, something was clearly wrong. As it turns out, longitudinal studies have their own potential pitfalls. Those who survive to the end of longitudinal studies may be bright, healthy people whose intelligence is least likely to decline. (Perhaps people who died younger and were removed from the study had declining intelligence.) Adjusting for the loss of participants, as did a study following more than 2000 people over age 75 in Cambridge, England, reveals a steeper intelligence decline. This is especially so as people age after 85 (Brayne et al., 1999).

Research is further complicated by the finding that intelligence is not a single trait, but rather several distinct abilities. Intelligence tests that assess speed of thinking may place older adults at a disadvantage because of their slower neural mechanisms for processing information. Meeting old friends on the street, names rise to the mind’s surface more slowly—“like air bubbles in molasses,” said David Lykken (1999). But slower processing need not mean less intelligent. When given tests that assess general vocabulary, knowledge, and ability to integrate information, older adults generally fare well (Craik, 1986). Older Canadians surpass younger Canadians at answering questions such as, “Which province was once called New Caledonia?” And in four studies in which players were given 15 minutes to fill in words in New York Times crossword puzzles, the highest average performance was achieved by adults in their fifties, sixties, and seventies (FIGURE 16.10).

German researcher Paul Baltes and his colleagues (1993, 1994, 1999) developed “wisdom” tests that assess “expert knowledge about life in general and good judgment and advice about how to conduct oneself in the face of complex, uncertain circumstances.” Their results suggest that older adults more than hold their own on these tests, too.
Thus, despite 30-year-olds’ quick-thinking smarts, we usually select older-than-thirties people to be president of the company, the college, or the country. Age is sage. To paraphrase one 60-year-old, “Forty years ago I had a great memory, but I was a fool.”


FIGURE 16.10 Word power grows with age In four studies summarized by Timothy Salthouse (2004), older crossword puzzle players excelled when given 15 minutes with a New York Times puzzle. (The graph plots the number of words correctly completed against age, from 20 to 80.)

So the answers to our age-and-intelligence questions depend on what we assess and how we assess it. Crystallized intelligence—our accumulated knowledge as reflected in vocabulary and analogies tests—increases up to old age. Fluid intelligence—our ability to reason speedily and abstractly, as when solving novel logic problems— decreases slowly up to age 75 or so, then more rapidly, especially after age 85 (Cattell, 1963; Horn, 1982). We can see this pattern in the intelligence scores of a national sample of adults (Kaufman et al., 1989). After adjustments for education, verbal scores (reflecting crystallized intelligence) held relatively steady from ages 20 to 74. Nonverbal, puzzle-solving intelligence declined. With age we lose and we win. We lose recall memory and processing speed, but we gain vocabulary and knowledge (Park et al., 2002). Our decisions also become less distorted by negative emotions such as anxiety, depression, and anger (Blanchard-Fields, 2007; Carstensen & Mikels, 2005). These cognitive differences help explain why mathematicians and scientists produce much of their most creative work during their late twenties or early thirties, whereas those in literature, history, and philosophy tend to produce their best work in their forties, fifties, and beyond, after accumulating more knowledge (Simonton, 1988, 1990). For example, poets (who depend on fluid intelligence) reach their peak output earlier than prose authors (who need a deeper knowledge reservoir), a finding observed in every major literary tradition, for both living and dead languages. Despite age-related cognitive changes, studies in several countries indicate that age is only a modest predictor of abilities such as memory and intelligence. Mental ability more strongly correlates with proximity to death. Tell me whether someone is 70, 80, or 90, and you haven’t told me much about the person’s mental sharpness. 
But if you tell me whether someone is 8 months or 8 years from death, regardless of age, you’ll give me a better clue to the person’s mental ability. Especially in the last three or four years of life, cognitive decline typically accelerates (Wilson et al., 2007). Researchers call this near-death drop terminal decline (Backman & MacDonald, 2006).

longitudinal study research in which the same people are restudied and retested over a long period.

crystallized intelligence our accumulated knowledge and verbal skills; tends to increase with age.

fluid intelligence our ability to reason speedily and abstractly; tends to decrease during late adulthood.

䉴|| Social Development

16-3 What themes and influences mark our social journey from early adulthood to death?

Many differences between younger and older adults are created by significant life events. A new job means new relationships, new expectations, and new demands. Marriage brings the joy of intimacy and the stress of merging your life with another’s. The birth of a child introduces responsibilities and alters your life focus. The death of a loved one creates an irreplaceable loss. Do these adult life events shape a sequence of life changes?

Adulthood’s Ages and Stages


As people enter their forties, they undergo a transition to middle adulthood, a time when they realize that life will soon be mostly behind instead of ahead of them. Some psychologists have argued that for many the midlife transition is a crisis, a time of great struggle, regret, or even feeling struck down by life. The popular image of the midlife crisis is an early-forties man who forsakes his family for a younger girlfriend and a hot sports car. But the fact—reported by large samples of people—is that unhappiness, job dissatisfaction, marital dissatisfaction, divorce, anxiety, and suicide do not surge during the early forties (Hunter & Sundel, 1989; Mroczek & Kolarz, 1998). Divorce, for example, is most common among those in their twenties, suicide among those in their seventies and eighties. One study of emotional instability in nearly 10,000 men and women found “not the slightest evidence” that distress peaks anywhere in the midlife age range (FIGURE 16.11). For the 1 in 4 adults who do report experiencing a life crisis, the trigger is not age, but a major event, such as illness, divorce, or job loss (Lachman, 2004).

Life events trigger transitions to new life stages at varying ages. The social clock—the definition of “the right time” to leave home, get a job, marry, have children, and retire—varies from era to era and culture to culture. In Western Europe, fewer than 10 percent of men over 65 remain in the work force, as do 16 percent in the United States, 36 percent in Japan, and 69 percent in Mexico (Davies et al., 1991). And the once-rigid sequence for many Western women—of student to worker to wife to at-home mom to worker again—has loosened. Contemporary women occupy these roles in any order or all at once. The social clock still ticks, but people feel freer about being out of sync with it.

“The important events of a person’s life are the products of chains of highly improbable occurrences.” Joseph Traub, “Traub’s Law,” 2003
Even chance events can have lasting significance because they often deflect us down one road rather than another (Bandura, 1982). Romantic attraction, for example, is often influenced by chance encounters. Albert Bandura (2005) recalls the ironic true story of a book editor who came to one of Bandura’s lectures on the “Psychology of Chance Encounters and Life Paths”—and ended up marrying the woman who happened to sit next to him. The sequence that led to my authoring this book (which was not my idea) began with my being seated near, and getting to know, a distinguished colleague at an international conference. Thus, chance events, including romantic encounters, can change our lives. Consider one study of identical twins, who tend to make similar choices of friends, clothes, vacations, jobs, and so on. So, if your identical twin became engaged to someone, wouldn’t you (being in so many ways the same as your twin) expect to also feel attracted to this person? Surprisingly, only half the identical twins recalled really liking their co-twin’s selection, and only 5 percent said, “I could have fallen for my twin’s partner.” Researchers David Lykken and Auke Tellegen (1993) surmise that romantic love is rather like ducklings’ imprinting: Given repeated exposure to someone after childhood, you may form a bond (infatuation) with almost any available person who has a roughly similar background and level of attractiveness and who reciprocates your affections.

FIGURE 16.11 Early forties midlife crises? Among 10,000 people responding to a national health survey, there was no early forties increase in emotional instability (“neuroticism”) scores; the graph shows flat female and male curves from ages 33 to 54. (From McCrae & Costa, 1990.)


Adulthood’s Commitments

Two basic aspects of our lives dominate adulthood. Erik Erikson called them intimacy (forming close relationships) and generativity (being productive and supporting future generations). Researchers have chosen various terms—affiliation and achievement, attachment and productivity, commitment and competence. Sigmund Freud (1935) put it most simply: The healthy adult, he said, is one who can love and work.

“One can live magnificently in this world if one knows how to work and how to love.” Leo Tolstoy, 1856

Love

We typically flirt, fall in love, and commit—one person at a time. “Pair-bonding is a trademark of the human animal,” observed anthropologist Helen Fisher (1993). From an evolutionary perspective, relatively monogamous pairing makes sense: Parents who cooperated to nurture their children to maturity were more likely to have their genes passed along to posterity than were parents who didn’t.

Adult bonds of love are most satisfying and enduring when marked by a similarity of interests and values, a sharing of emotional and material support, and intimate self-disclosure. Couples who seal their love with commitment—via (in one Vermont study) marriage for heterosexual couples and civil unions for homosexual couples—more often endure (Balsam et al., 2008). Marriage bonds are especially likely to last when couples marry after age 20 and are well educated. Compared with their counterparts of 40 years ago, people in Western countries are better educated and marrying later. Yet, ironically, they are nearly twice as likely to divorce. (Both Canada and the United States now have about one divorce for every two marriages [Bureau of the Census, 2007], and in Europe, divorce is only slightly less common.) The divorce rate partly reflects women’s lessened economic dependence and men and women’s rising expectations. We now hope not only for an enduring bond, but also for a mate who is a wage earner, caregiver, intimate friend, and warm and responsive lover.

Might test-driving life together in a “trial marriage” minimize divorce risk? In a 2001 Gallup survey of American twenty-somethings, 62 percent thought it would (Whitehead & Popenoe, 2001). In reality, in Europe, Canada, and the United States, those who cohabit before marriage have had higher rates of divorce and marital dysfunction than those who did not cohabit (Dush et al., 2003; Popenoe & Whitehead, 2002). The risk appears greatest for cohabiting prior to engagement (Kline et al., 2004).
Two factors help explain why American children born to cohabiting parents are about five times more likely to experience their parents’ separation than are children born to married parents (Osborne et al., 2007). First, cohabiters tend to be initially less committed to the ideal of enduring marriage. Second, they become even less marriage-supporting while cohabiting.

Nonetheless, the institution of marriage endures. Worldwide, reports the United Nations, 9 in 10 heterosexual adults marry. And marriage is a predictor of happiness, health, sexual satisfaction, and income. National Opinion Research Center surveys of more than 40,000 Americans since 1972 reveal that 40 percent of married adults, though only 23 percent of unmarried adults, have reported being “very happy.” Lesbian couples, too, report greater well-being than those who are alone (Peplau & Fingerhut, 2007; Wayment & Peplau, 1995). Moreover, neighborhoods with high marriage rates typically have low rates of social pathologies such as crime, delinquency, and emotional disorders among children (Myers & Scanzoni, 2005).

Marriages that last are not always devoid of conflict. Some couples fight but also shower one another with affection. Other couples never raise their voices yet also seldom praise one another or nuzzle. Both styles can last. After observing the interactions of 2000 couples, John Gottman (1994) reported one indicator of marital success: at least a five-to-one ratio of positive to negative interactions. Stable marriages provide five times more instances of smiling, touching, complimenting, and laughing than of sarcasm, criticism, and insults. So, if you want to predict which newlyweds will stay together, don’t pay attention to how passionately they are in love. The couples who make it are more often those who refrain from putting down their partners. To prevent a cancerous negativity, successful couples learn to fight fair (to state feelings without insulting) and to steer conflict away from chaos with comments like “I know it’s not your fault” or “I’ll just be quiet for a moment and listen.”

Often, love bears children. For most people, this most enduring of life changes is a happy event. “I feel an overwhelming love for my children unlike anything I feel for anyone else,” said 93 percent of American mothers in a national survey (Erickson & Aird, 2005). Many fathers feel the same. A few weeks after the birth of my first child I was suddenly struck by a realization: “So this is how my parents felt about me!”

When children begin to absorb time, money, and emotional energy, satisfaction with the marriage itself may decline. This is especially likely among employed women who, more than they expected, carry the traditional burden of doing the chores at home. Putting effort into creating an equitable relationship can thus pay double dividends: a more satisfying marriage, which breeds better parent-child relations (Erel & Burman, 1995).

Although love bears children, children eventually leave home. This departure is a significant and sometimes difficult event. For most people, however, an empty nest is a happy place (Adelmann et al., 1989; Glenn, 1975). Compared with middle-aged women with children still at home, those living in an empty nest report greater happiness and greater enjoyment of their marriage. Many parents experience a “postlaunch honeymoon,” especially if they maintain close relationships with their children (White & Edwards, 1990). As Daniel Gilbert (2006) has said, “The only known symptom of ‘empty nest syndrome’ is increased smiling.”

Love Intimacy, attachment, commitment—love by whatever name—is central to healthy and happy adulthood.

|| What do you think? Does marriage correlate with happiness because marital support and intimacy breed happiness, because happy people more often marry and stay married, or both? ||

social clock the culturally preferred timing of social events such as marriage, parenthood, and retirement.

|| If you have left home, did your parents suffer the “empty nest syndrome”—a feeling of distress focusing on a loss of purpose and relationship? Did they mourn the lost joy of listening for you in the wee hours of Saturday morning? Or did they seem to discover a new freedom, relaxation, and (if still married) renewed satisfaction with their own relationship? ||

Work

For many adults, the answer to “Who are you?” depends a great deal on the answer to “What do you do?” For women and men, choosing a career path is difficult, especially in today’s changing work environment. During the first two years of college or university, few students can predict their later careers. Most shift from their initially intended majors, many find their postcollege employment in fields not directly related to their majors, and most will change careers (Rothstein, 1980).

In the end, happiness is about having work that fits your interests and provides you with a sense of competence and accomplishment. It is having a close, supportive companion who cheers your accomplishments (Gable et al., 2006). And for some, it includes having children who love you and whom you love and feel proud of.


Job satisfaction and life satisfaction Work can provide us with a sense of identity and competence and opportunities for accomplishment. Perhaps this is why challenging and interesting occupations enhance people’s happiness.


Well-Being Across the Life Span

To live is to grow older. This moment marks the oldest you have ever been and the youngest you will henceforth be. That means we all can look back with satisfaction or regret, and forward with hope or dread. When asked what they would have done differently if they could relive their lives, people’s most common answer is “Taken my education more seriously and worked harder at it” (Kinnier & Metha, 1989; Roese & Summerville, 2005). Other regrets—“I should have told my father I loved him,” “I regret that I never went to Europe”—also focus less on mistakes made than on the things one failed to do (Gilovich & Medvec, 1995).

From the teens to midlife, people typically experience a strengthening sense of identity, confidence, and self-esteem (Miner-Rubino et al., 2004; Robins & Trzesniewski, 2005). In later life, challenges arise: Income shrinks, work is often taken away, the body deteriorates, recall fades, energy wanes, family members and friends die or move away, and the great enemy, death, looms ever closer. Small wonder that most presume that happiness declines in later life (Lacey et al., 2006). But the over-65 years are not notably unhappy, as Ronald Inglehart (1990) discovered when he amassed interviews conducted during the 1980s with representative samples of nearly 170,000 people in 16 nations (FIGURE 16.12). Newer surveys of some 2 million people worldwide confirm that happiness is slightly higher among both young and older adults than among those middle-aged. Moreover, national studies in both Britain and Australia reveal that the risk of depression tapers off in later life (Blanchflower & Oswald, 2008; Troller et al., 2007). If anything, positive feelings grow after midlife and negative feelings subside (Charles et al., 2001; Mroczek, 2001). Consider:

“I hope I die before I get old,” sang rock star Pete Townshend—when he was 20.

䉴 Older adults increasingly use words that convey positive emotions (Pennebaker & Stone, 2003).

䉴 Older adults attend less and less to negative information. For example, they are slower than younger adults to perceive negative faces (Carstensen & Mikels, 2005).

䉴 The amygdala, a neural processing center for emotions, shows diminishing activity in older adults in response to negative events, but it maintains its responsiveness to positive events (Mather et al., 2004; Williams et al., 2006).

䉴 Brain-wave reactions to negative images diminish with age (Kisley et al., 2007).

“At twenty we worry about what others think of us. At forty we don’t care what others think of us. At sixty we discover they haven’t been thinking about us at all.” Anonymous

FIGURE 16.12 Age and life satisfaction With the tasks of early adulthood behind them, many older adults have more time to pursue personal interests. No wonder their satisfaction with life remains high, and may even rise if they are healthy and active. As this graph, based on surveys of 170,000 people in 16 countries, shows, age differences in life satisfaction are small. (The graph plots the percentage “satisfied” with life as a whole across age groups from 15–24 to 65+.) (Data from Inglehart, 1990.)


Moreover, at all ages, the bad feelings we associate with negative events fade faster than do the good feelings we associate with positive events (Walker et al., 2003). This contributes to most older people’s sense that life, on balance, has been mostly good. Given that growing older is an outcome of living (an outcome nearly all of us prefer to early dying), the positivity of later life is comforting. More and more people flourish into later life, thanks to biological, psychological, and social-cultural influences (FIGURE 16.13).

FIGURE 16.13 Biopsychosocial influences on successful aging Numerous biological, psychological, and social-cultural factors affect the way we age. With the right genes, we have a good chance of aging successfully if we maintain a positive outlook and stay mentally and physically active as well as connected to family and friends in the community. Biological influences include no genes predisposing dementia or other diseases, and appropriate nutrition; psychological influences include an optimistic outlook and a physically and mentally active life style; social-cultural influences include support from family and friends, meaningful activities, cultural respect for aging, and safe living conditions.

The resilience of well-being across the life span obscures some interesting age-related emotional differences. Although life satisfaction does not decline with age, it often wanes in the terminal decline phase, as death approaches (Gerstorf et al., 2008). Also, as the years go by, feelings mellow (Costa et al., 1987; Diener et al., 1986). Highs become less high, lows less low. Thus, although we are less often depressed, and our average feeling level tends to remain stable, with age we also find ourselves less often feeling excited, intensely proud, and on top of the world. Compliments provoke less elation and criticisms less despair, as both become merely additional feedback atop a mountain of accumulated praise and blame.

“The best thing about being 100 is no peer pressure.” Lewis W. Kuester, 2005, on turning 100

Psychologists Mihaly Csikszentmihalyi and Reed Larson (1984) mapped people’s emotional terrain by periodically signaling them with electronic beepers to report their current activities and feelings. They found that teenagers typically come down from elation or up from gloom in less than an hour. Adult moods are less extreme but more enduring. For most people, old age offers less intense joy but greater contentment and increased spirituality, especially for those who remain socially engaged (Harlow & Cantor, 1996; Wink & Dillon, 2002). As we age, life becomes less an emotional roller coaster.

Death and Dying

“Love—why, I’ll tell you what love is: It’s you at 75 and her at 71, each of you listening for the other’s step in the next room, each afraid that a sudden silence, a sudden cry, could mean a lifetime’s talk is over.” Brian Moore, The Luck of Ginger Coffey, 1960

Most of us will suffer and cope with the deaths of relatives and friends. Usually, the most difficult separation is from a spouse—a loss suffered by five times more women than men. When, as usually happens, death comes at an expected late-life time, the grieving may be relatively short-lived. (FIGURE 16.14 shows the typical emotions before and after a spouse’s death.) But even 20 years after losing a spouse, people still talk about the long-lost partner once a month on average (Carnelley et al., 2006). Grief is especially severe when the death of a loved one comes suddenly and before its expected time on the social clock. The sudden illness that claims a 45-year-old life partner or the accidental death of a child may trigger a year or more of memory-laden mourning that eventually subsides to a mild depression (Lehman et al., 1987). For some, however, the loss is unbearable. One study, following more than 1 million Danes over the last half of the twentieth century, found that more than 17,000 people had suffered the death of a child under 18. In the five years following that death, 3 percent of them had a first psychiatric hospitalization. This rate was 67 percent higher than the rate recorded for parents who had not lost a child (Li et al., 2005).


FIGURE 16.14 Life satisfaction before, during the year of, and after a spouse’s death Richard Lucas and his collaborators (2003) examined longitudinal annual surveys of more than 30,000 Germans. The researchers identified 513 married people who experienced the death of a spouse and did not remarry. They found that life satisfaction began to dip during the prewidowhood year, dropped significantly during the year of the spouse’s death, and then eventually rebounded to nearly the earlier level. (The graph plots life satisfaction, on a scale from about 5.4 to 7.4, across the four years before through eight years after the year of the spouse’s death.) (Source: Richard Lucas.)


Even so, the normal range of reactions to a loved one’s death is wider than most suppose. Some cultures encourage public weeping and wailing; others hide grief. Within any culture, individuals differ. Given similar losses, some people grieve hard and long, others are more resilient (Ott et al., 2007). Contrary to popular misconceptions, however,

䉴 terminally ill and bereaved people do not go through identical predictable stages, such as denial before anger (Nolen-Hoeksema & Larson, 1999). A Yale study following 233 bereaved individuals through time did, however, find that yearning for the loved one reached a high point four months after the loss, with anger peaking, on average, about a month later (Maciejewski et al., 2007).

䉴 those who express the strongest grief immediately do not purge their grief more quickly (Bonanno & Kaltman, 1999; Wortman & Silver, 1989).

䉴 bereavement therapy and self-help groups offer support, but there is similar healing power in the passing of time and the support of friends—and also in giving support and help to others (Brown et al., 2008). Grieving spouses who talk often with others or who receive grief counseling adjust about as well as those who grieve more privately (Bonanno, 2001, 2004; Genevro, 2003; Stroebe et al., 2001, 2002, 2005).

We can be grateful for the waning of death-denying attitudes. Facing death with dignity and openness helps people complete the life cycle with a sense of life’s meaningfulness and unity—the sense that their existence has been good and that life and death are parts of an ongoing cycle. Although death may be unwelcome, life itself can be affirmed even at death. This is especially so for people who review their lives not with despair but with what Erik Erikson called a sense of integrity—a feeling that one’s life has been meaningful and worthwhile.

䉴|| Reflections on Two Major Developmental Issues

Any survey of developmental psychology must consider three pervasive issues. The first—how development is steered by genes and by experience—recurs throughout this text. But let’s stop now and consider the second issue, whether development is a gradual, continuous process or a series of discrete stages, and the third, whether development is characterized more by stability over time or by change.

“Donald is such a fatalist—he’s convinced he’s going to grow old and die.”

“Consider, friend, as you pass by, as you are now, so once was I. As I am now, you too shall be. Prepare, therefore, to follow me.” Scottish tombstone epitaph


Continuity and Stages


Do adults differ from infants as a giant redwood differs from its seedling—a difference created by gradual, cumulative growth? Or do they differ as a butterfly differs from a caterpillar—a difference of distinct stages? Generally speaking, researchers who emphasize experience and learning see development as a slow, continuous shaping process. Those who emphasize biological maturation tend to see development as a sequence of genetically predisposed stages or steps: Although progress through the various stages may be quick or slow, everyone passes through the stages in the same order.

Are there clear-cut stages of psychological development, as there are physical stages such as walking before running? The stage theories of Jean Piaget on cognitive development, Lawrence Kohlberg on moral development, and Erik Erikson on psychosocial development propose that such stages do exist (FIGURE 16.15). But some research casts doubt on the idea that life proceeds through neatly defined, age-linked stages. Young children have some abilities Piaget attributed to later stages. Kohlberg’s work reflected a worldview characteristic of educated males in individualistic cultures and emphasized thinking over acting. Adult life does not progress through the fixed, predictable series of steps Erikson envisioned.

Nevertheless, the stage concept remains useful. The human brain does experience growth spurts during childhood and puberty that correspond roughly to Piaget’s stages (Thatcher et al., 1987). And stage theories contribute a developmental perspective on the whole life span, by suggesting how people of one age think and act differently when they arrive at a later age.

FIGURE 16.15 Comparing the stage theories (With thanks to Dr. Sandra Gibbs, Muskegon Community College, for inspiring this illustration.) The chart aligns the stages of the life cycle, from birth through age 14 and on to death, across three theories: Jean Piaget’s cognitive stages (sensorimotor, preoperational, concrete operational, formal operational), Lawrence Kohlberg’s moral stages (preconventional morality, conventional morality, and possibly postconventional morality), and Erik Erikson’s psychosocial stages (basic trust, autonomy, initiative, competence, identity, intimacy, generativity, integrity).


Stability and Change

As adults grow older, there is continuity of self.

This leads us to the final developmental issue: Over time, are people’s personalities consistent, or do they change? If reunited with a long-lost grade school friend, would you instantly recognize that “it’s the same old Andy”? Or does a person befriended during one period of life seem like a different person at a later period? (That was the experience of one male friend of mine who failed to recognize a former classmate at his 40-year college reunion. The aghast classmate to whom he spoke was his long-ago ex-wife.)

Researchers who have followed lives through time have found evidence for both stability and change. There is continuity to personality and yet, happily for troubled children and adolescents, life is a process of becoming: The struggles of the present may be laying a foundation for a happier tomorrow. More specifically, researchers generally agree on the following points:

1. The first two years of life provide a poor basis for predicting a person’s eventual traits (Kagan et al., 1978, 1998). Older children and adolescents also change.


Adulthood, and Reflections on Developmental Issues MODULE 16

Although delinquent children have elevated rates of later work problems, substance abuse, and crime, many confused and troubled children have blossomed into mature, successful adults (Moffitt et al., 2002; Roberts et al., 2001; Thomas & Chess, 1986).

2. As people grow older, personality gradually stabilizes (Hampson & Goldberg, 2006; Johnson et al., 2005; Terracciano et al., 2006). Some characteristics, such as temperament, are more stable than others, such as social attitudes (Moss & Susman, 1980). When a research team led by Avshalom Caspi (2003) studied 1000 New Zealanders from age 3 to 26, they were struck by the consistency of temperament and emotionality across time.

3. In some ways, we all change with age. Most shy, fearful toddlers begin opening up by age 4, and most people become more self-disciplined, stable, agreeable, and self-confident in the years after adolescence (McCrae & Costa, 1994; Roberts et al., 2003, 2006, 2008). Many irresponsible 18-year-olds have matured into 40-year-old business or cultural leaders. (If you are the former, you aren’t done yet.) Such changes can occur without changing a person’s position relative to others of the same age. The hard-driving young adult may mellow by later life, yet still be a relatively hard-driving senior citizen.

Finally, we should remember that life requires both stability and change. Stability enables us to depend on others, provides our identity, and motivates our concern for the healthy development of children. Change motivates our concerns about present influences, sustains our hope for a brighter future, and lets us adapt and grow with experience.

“As at 7, so at 70.” Jewish proverb

“At 70, I would say the advantage is that you take life more calmly. You know that ‘this, too, shall pass!’” Eleanor Roosevelt, 1954

Review Adulthood, and Reflections on Developmental Issues

16-1 What physical changes occur during middle and late adulthood?
Muscular strength, reaction time, sensory abilities, and cardiac output begin to decline in the late twenties and continue throughout middle and late adulthood. Around age 50, menopause ends women’s period of fertility but usually does not trigger psychological problems or interfere with a satisfying sex life. Men do not undergo a similar sharp drop in hormone levels or fertility.

16-2 How do memory and intelligence change with age?
As the years pass, recall begins to decline, especially for meaningless information, but recognition memory remains strong. Cross-sectional and longitudinal studies have shown that fluid intelligence declines in later life but crystallized intelligence does not.

16-3 What themes and influences mark our social journey from early adulthood to death?
Adults do not progress through an orderly sequence of age-related social stages. More important are life events, and the loosening of the strict dictates of the social clock—the culturally preferred timing of social events. The dominant themes of adulthood are love and work, which Erikson called intimacy and generativity. Life satisfaction tends to remain high across the life span.

Terms and Concepts to Remember
menopause, p. 207
cross-sectional study, p. 213
longitudinal study, p. 214
crystallized intelligence, p. 215
fluid intelligence, p. 215
social clock, p. 216

Test Yourself 1. Research has shown that living together before marriage predicts an increased likelihood of future divorce. Can you imagine two possible explanations for this correlation?

2. What findings in psychology support the concept of stages in development and the idea of stability in personality across the life span? What findings challenge these ideas? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. As you reflect on your last few years—formative years if you are a young adult—what do you most regret? What do you feel best about?

2. Are you the same person you were as a preschooler? A 10-year-old? A mid-teen? How are you different? How are you the same?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Sensation and Perception

modules

“I have perfect vision,” explains my colleague, Heather Sellers, an acclaimed writer and writing teacher. Her vision may be fine, but there is a problem with her perception. She cannot recognize faces. In her memoir, Face First, Sellers (2010) tells of awkward moments resulting from her lifelong prosopagnosia—face blindness. In college, on a date at the Spaghetti Station, I returned from the bathroom and plunked myself down in the wrong booth, facing the wrong man. I remained unaware he was not my date even as my date (a stranger to me) accosted Wrong Booth Guy, and then stormed out of the Station. I can’t distinguish actors in movies and on television. I do not recognize myself in photos or video. I can’t recognize my step-sons in the soccer pick-up line; I failed to determine which husband was mine at a party, in the mall, at the market.

Her inability to recognize acquaintances means that people sometimes perceive her as snobby or aloof. “Why did you walk past me?” someone might later ask. Similar to those of us with hearing loss who fake hearing during trite social conversation, Sellers sometimes fakes recognition. She often smiles at people she passes, in case she knows them. Or she pretends to know the person with whom she is talking. (To avoid the stress associated with such perception failures, people with serious hearing loss or with prosopagnosia often shy away from busy social situations.) But there is an upside: When encountering someone who previously irritated her, she typically won’t feel ill will, because she doesn’t recognize the person. This curious mix of “perfect vision” and face blindness illustrates the distinction between sensation and perception. When Sellers looks at a friend, her sensation is normal: Her sensory receptors detect the same information yours would, and they transmit that information to her brain. And her perception—the organization and interpretation of sensory information that enables her to consciously recognize objects—is almost normal. Thus, she may recognize people from their hair, their gait, their voice, or their particular physique, just not their face. She can see the elements of their face—the nose, the eyes, and the chin—and yet, at a party, “[I introduce myself] to my colleague Gloria THREE TIMES.” Her experience is much like the struggle you or I would have trying to recognize a specific penguin in a group of waddling penguins. Thanks to an area on the underside of your brain’s right hemisphere, you can recognize a human face (but not a penguin’s) in one-seventh of a second. As soon as you detect a face, you recognize it (Jacques & Rossion, 2006). How do you do it? Twenty-four hours a day, all kinds of stimuli from the outside world bombard your body. Meanwhile, in a silent, cushioned, inner world, your brain floats in utter darkness. 
By itself, it sees nothing. It hears nothing. It feels nothing. So, how does the world out there get in? To phrase the question scientifically: How do we construct our representations of the external world? How do a campfire’s flicker, crackle, and smoky scent activate neural connections? And how, from this living neurochemistry, do we create our conscious experience of the fire’s motion and temperature, its aroma and beauty? In search of answers to such questions, let’s look more closely at what psychologists have learned about how we sense and perceive the world around us. Module 17 outlines the basic principles of sensation and perception. Modules 18, 19, and 20 examine vision, hearing, and other senses. And Modules 21 and 22 explore perceptual organization and perceptual interpretation.

17 Introduction to Sensation and Perception

18 Vision

19 Hearing

20 Other Senses

21 Perceptual Organization

22 Perceptual Interpretation


module 17

Thresholds
Sensory Adaptation

Introduction to Sensation and Perception

17-1 What are sensation and perception? What do we mean by bottom-up processing and top-down processing?

In our everyday experiences, sensation and perception blend into one continuous process. Here, we slow down that process to study its parts. We start with the sensory receptors and work up to higher levels of processing. Psychologists refer to sensory analysis that starts at the entry level as bottom-up processing. But our minds also interpret what our senses detect. We construct perceptions drawing both on sensations coming bottom-up to the brain and on our experience and expectations, which psychologists call top-down processing. For example, as our brain deciphers the information in FIGURE 17.1, bottom-up processing enables our sensory systems to detect the lines, angles, and colors that form the horses, rider, and surroundings. Using top-down processing we consider the painting’s title, notice the apprehensive expressions, and then direct our attention to aspects of the painting that will give those observations meaning. Nature’s sensory gifts suit each recipient’s needs. They enable each organism to obtain essential information. Consider:

• A frog, which feeds on flying insects, has eyes with receptor cells that fire only in response to small, dark, moving objects. A frog could starve to death knee-deep in motionless flies. But let one zoom by and the frog’s “bug detector” cells snap awake.

• A male silkworm moth has receptors so sensitive to the female sex-attractant odor that a single female need release only a billionth of an ounce per second to attract every male silkworm moth within a mile. That is why there continue to be silkworms.



FIGURE 17.1 What’s going on here? Our sensory and perceptual processes work together to help us sort out the complex images, including the hidden faces, in this Bev Doolittle painting, The Forest Has Eyes.


Introduction to Sensation and Perception MODULE 17

• We are similarly equipped to detect the important features of our environment. Our ears are most sensitive to sound frequencies that include human voice consonants and a baby’s cry.

We begin our exploration of our sensory gifts with a question that cuts across all our sensory systems: What stimuli cross our threshold for conscious awareness?

Thresholds

17-2 What are the absolute and difference thresholds, and do stimuli below the absolute threshold have any influence?

We exist in a sea of energy. At this moment, you and I are being struck by X-rays and radio waves, ultraviolet and infrared light, and sound waves of very high and very low frequencies. To all of these we are blind and deaf. Other animals detect a world that lies beyond human experience (Hughes, 1999). Migrating birds stay on course aided by an internal magnetic compass. Bats and dolphins locate prey with sonar (bouncing echoing sound off objects). On a cloudy day, bees navigate by detecting polarized light from an invisible (to us) sun. The shades on our own senses are open just a crack, allowing us only a restricted awareness of this vast sea of energy. Let’s see what psychophysics has discovered about the physical energy we can detect and its effect on our psychological experience.

sensation the process by which our sensory receptors and nervous system receive and represent stimulus energies from our environment.

perception the process of organizing and interpreting sensory information, enabling us to recognize meaningful objects and events.

bottom-up processing analysis that begins with the sensory receptors and works up to the brain’s integration of sensory information. top-down processing information processing guided by higher-level mental processes, as when we construct perceptions drawing on our experience and expectations.

psychophysics the study of relationships between the physical characteristics of stimuli, such as their intensity, and our psychological experience of them.

absolute threshold the minimum stimulation needed to detect a particular stimulus 50 percent of the time.

Absolute Thresholds

To some kinds of stimuli we are exquisitely sensitive. Standing atop a mountain on an utterly dark, clear night, most of us could see a candle flame atop another mountain 30 miles away. We could feel the wing of a bee falling on our cheek. We could smell a single drop of perfume in a three-room apartment (Galanter, 1962). Our awareness of these faint stimuli illustrates our absolute thresholds—the minimum stimulation necessary to detect a particular light, sound, pressure, taste, or odor 50 percent of the time. To test your absolute threshold for sounds, a hearing specialist would expose each of your ears to varying sound levels. For each tone, the test would define where half the time you correctly detect the sound and half the time you do not. For each of your senses, that 50-50 recognition point defines your absolute threshold. Absolute thresholds may vary with age. Sensitivity to high-pitched sounds declines with normal aging, leaving older ears in need of louder sound to hear a high-pitched cellphone ring. That fact of life has been exploited by some students wanting a ringtone their instructors are unlikely to hear, and by some Welsh shopkeepers broadcasting annoying sounds that help disperse loitering teens without repelling older adults.

signal detection theory a theory predicting how and when we detect the presence of a faint stimulus (signal) amid background stimulation (noise). Assumes there is no single absolute threshold and that detection depends partly on a person’s experience, expectations, motivation, and alertness.
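The 50-50 recognition rule lends itself to a quick sketch. The short Python fragment below is a hypothetical illustration only: the tone intensities and detection rates are invented, and a real hearing test would measure them for one listener’s ear.

```python
# Hypothetical sketch: finding the 50-50 point from detection data.
# The tone intensities (in decibels) and detection rates are invented.

def absolute_threshold(levels_to_rates):
    """Lowest intensity detected at least 50 percent of the time.

    Psychophysics defines the absolute threshold as the intensity a
    person detects on half of the trials, so scan from faint to strong.
    """
    for level in sorted(levels_to_rates):
        if levels_to_rates[level] >= 0.5:
            return level
    return None  # never reached the 50 percent point in the tested range

# Detection rates for tones of increasing intensity:
rates = {10: 0.05, 20: 0.20, 30: 0.55, 40: 0.90, 50: 1.00}
print(absolute_threshold(rates))  # prints 30
```

For this invented listener, the 50-50 point, and thus the absolute threshold, falls at the 30-decibel tone.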

Signal Detection

Detecting a weak stimulus, or signal, depends not only on the signal’s strength (such as a hearing-test tone) but also on our psychological state—our experience, expectations, motivation, and alertness. Signal detection theory predicts when we will detect weak signals (measured as our ratio of “hits” to “false alarms”). Signal detection theorists seek to understand why people respond differently to the same stimuli, and why the same person’s reactions vary as circumstances change. Exhausted parents will notice the faintest whimper from a newborn’s cradle while failing to notice louder, unimportant sounds. In a horror-filled wartime situation, failure to detect an intruder could be fatal. Mindful of many comrades’ deaths, soldiers and police in Iraq probably became more likely to notice—and fire at—an almost imperceptible noise. With such heightened

|| Try out this old riddle on a couple of friends. “You’re driving a bus with 12 passengers. At your first stop, 6 passengers get off. At the second stop, 3 get off. At the third stop, 2 more get off but 3 new people get on. What color are the bus driver’s eyes?” Do your friends detect the signal—who is the bus driver?—amid the accompanying noise? ||



Signal detection How soon would you notice the radar blips of an approaching object? Fairly quickly if (1) you expect an attack, (2) it is important that you detect it, and (3) you are alert.


responsiveness come more false alarms, as when the U.S. military fired on an approaching car that was rushing an Italian journalist to freedom, killing the Italian intelligence officer who had rescued her. In peacetime, when survival is not threatened, the same soldiers would require a stronger signal before sensing danger. Signal detection can also have life-or-death consequences when people are responsible for watching an airport scanner for weapons, monitoring patients from an intensive-care nursing station, or detecting radar blips. Studies have shown, for example, that people’s ability to catch a faint signal diminishes after about 30 minutes. But this diminishing response depends on the task, on the time of day, and even on whether the participants periodically exercise (Warm & Dember, 1986). To help motivate airport baggage screeners, the U.S. Transportation Security Administration periodically adds images of guns, knives, and other threatening objects into bag X-rays. When the signal is detected, the system congratulates the screener and the image disappears (Winerman, 2006). Experience matters, too. In one experiment, 10 hours of action video game playing—scanning for and instantly responding to any intrusion—increased novice players’ signal detection skills (Green & Bavelier, 2003).
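The ratio of hits to false alarms can be made concrete. Researchers commonly summarize detection performance with the sensitivity statistic d′ (“d-prime”), which is not named in this module; the sketch below, with invented rates, shows how a vigilant observer differs from one who is merely guessing.

```python
from statistics import NormalDist

# Hypothetical sketch: d' summarizes sensitivity from the hit rate
# (signal trials correctly detected) and the false-alarm rate
# (noise trials wrongly flagged). The rates below are invented.

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity: z(hit rate) minus z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # converts a proportion to a z-score
    return z(hit_rate) - z(false_alarm_rate)

# A vigilant observer: many hits, few false alarms -> high sensitivity.
vigilant = d_prime(hit_rate=0.90, false_alarm_rate=0.10)
# A guessing observer: hits no more common than false alarms -> zero.
guesser = d_prime(hit_rate=0.60, false_alarm_rate=0.60)
print(round(vigilant, 2), round(guesser, 2))  # prints 2.56 0.0
```

The same hit rate can reflect very different response styles: a trigger-happy observer also racks up false alarms, which is exactly what d′ penalizes.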

Subliminal Stimulation

Hoping to penetrate our unconscious, entrepreneurs offer recordings that supposedly speak directly to our brains to help us lose weight, stop smoking, or improve our memories. Masked by soothing ocean sounds, unheard messages (“I am thin,” “Smoke tastes bad,” or “I do well on tests. I have total recall of information”) will, they say, influence our behavior. Such claims make two assumptions: (1) We can unconsciously sense subliminal (literally, “below threshold”) stimuli, and (2) without our awareness, these stimuli have extraordinary suggestive powers. Can we? Do they? Can we sense stimuli below our absolute thresholds? In one sense, the answer is clearly yes. Remember that an “absolute” threshold is merely the point at which we detect a stimulus half the time (FIGURE 17.2). At or slightly below this threshold, we will still detect the stimulus some of the time.

FIGURE 17.2 Absolute threshold What subtle differences can a person detect among these coffee samples? When stimuli are detectable less than 50 percent of the time, they are “subliminal.” Absolute threshold is the intensity at which we can detect a stimulus half the time. [Graph: the percentage of correct detections (25 to 100 percent) rises as stimulus intensity increases from low to medium; intensities detected less than 50 percent of the time are subliminal, and the 50 percent point marks the absolute threshold.]



subliminal below one’s absolute threshold for conscious awareness.

priming the activation, often unconsciously, of certain associations, thus predisposing one’s perception, memory, or response.

“The heart has its reasons which reason does not know.” Pascal, Pensées, 1670

Subliminal persuasion? Although subliminally presented stimuli can subtly influence people, experiments discount attempts at subliminal advertising and self-improvement. (The playful message here is not actually subliminal—because you can easily perceive it.)


Can we be affected by stimuli so weak as to be unnoticed? Under certain conditions, the answer is yes. An invisible image or word can briefly prime your response to a later question. In a typical experiment, the image or word is quickly flashed, then replaced by a masking stimulus that interrupts the brain’s processing before conscious perception. For example, one experiment subliminally flashed either emotionally positive scenes (kittens, a romantic couple) or negative scenes (a werewolf, a dead body) an instant before participants viewed slides of people (Krosnick et al., 1992). The participants consciously perceived either scene as only a flash of light. Yet the people somehow looked nicer if their image immediately followed unperceived kittens rather than an unperceived werewolf. Another experiment exposed people to subliminal pleasant, neutral, or unpleasant odors (Li et al., 2007). Despite having no awareness of the odors, the participants rated a neutral-expression face as more likable after exposure to pleasant rather than unpleasant smells. This experiment illustrates an intriguing phenomenon: Sometimes we feel what we do not know and cannot describe. An imperceptibly brief stimulus often triggers a weak response that can be detected by brain scanning (Blankenburg et al., 2003; Haynes & Rees, 2005, 2006). The conclusion (turn up the volume here): Much of our information processing occurs automatically, out of sight, off the radar screen of our conscious mind. But does the fact of subliminal sensation verify entrepreneurial claims of subliminal persuasion? Can advertisers really manipulate us with “hidden persuasion”? The near-consensus among researchers is no. Their verdict is similar to that of astronomers who say of astrologers, yes, they are right that stars and planets are out there; but no, the celestial bodies don’t directly affect us. The laboratory research reveals a subtle, fleeting effect.
Priming thirsty people with the subliminal word thirst might therefore, for a brief interval, make a thirst-quenching beverage ad more persuasive (Strahan et al., 2002). Likewise, priming thirsty people with Lipton Ice Tea may increase their choosing the primed brand (Karremans et al., 2006). But the subliminal-message hucksters claim something different: a powerful, enduring effect on behavior. To test whether commercial subliminal recordings have an effect beyond that of a placebo (the effect of one’s belief in them), Anthony Greenwald and his colleagues (1991, 1992) randomly assigned university students to listen daily for five weeks to commercial subliminal messages claiming to improve either self-esteem or memory. But the researchers played a practical joke and switched half of the labels. Some students thought they were receiving affirmations of self-esteem when they actually were hearing the memory enhancement message. Others got the self-esteem message but thought their memory was being recharged. Were the recordings effective? Students’ scores on tests for both self-esteem and memory, taken before and after the five weeks, revealed no effects. And yet, those who thought they had heard a memory recording believed their memories had improved. A similar result occurred for those who thought they had heard a self-esteem recording. The recordings had no effects, yet the students perceived themselves as receiving the benefits they expected. When reading this research, one hears echoes of the testimonies that ooze from the mail-order catalogs. Some customers, having bought what is not supposed to be heard (and having indeed not heard it!) offer testimonials like, “I really know that your tapes were invaluable in reprogramming my mind.” Over a decade, Greenwald conducted 16 double-blind experiments evaluating subliminal self-help tapes. His results were uniform: Not one had any therapeutic effect (Greenwald, 1992).
His conclusion: “Subliminal procedures offer little or nothing of value to the marketing practitioner” (Pratkanis & Greenwald, 1988).


The difference threshold In this computer-generated copy of the Twenty-third Psalm, each line of the typeface changes imperceptibly. How many lines are required for you to experience a just noticeable difference?


difference threshold the minimum difference between two stimuli required for detection 50 percent of the time. We experience the difference threshold as a just noticeable difference (or jnd).

Weber’s law the principle that, to be perceived as different, two stimuli must differ by a constant minimum percentage (rather than a constant amount).

sensory adaptation diminished sensitivity as a consequence of constant stimulation.

Difference Thresholds

To function effectively, we need absolute thresholds low enough to allow us to detect important sights, sounds, textures, tastes, and smells. We also need to detect small differences among stimuli. A musician must detect minute discrepancies in an instrument’s tuning. Parents must detect the sound of their own child’s voice amid other children’s voices. Even after living two years in Scotland, sheep baas all sound alike to my ears. But not to those of ewes, which I have observed streaking, after shearing, directly to the baa of their lamb amid the chorus of other distressed lambs. The difference threshold, also called the just noticeable difference (jnd), is the minimum difference a person (or sheep) can detect between any two stimuli half the time. That detectable difference increases with the size of the stimulus. Thus, if you add 1 ounce to a 10-ounce weight, you will detect the difference; add 1 ounce to a 100-ounce weight and you probably will not. More than a century ago, Ernst Weber noted something so simple and so widely applicable that we still refer to it as Weber’s law: For their difference to be perceptible, two stimuli must differ by a constant proportion—not a constant amount. The exact proportion varies, depending on the stimulus. For the average person to perceive their differences, two lights must differ in intensity by 8 percent. Two objects must differ in weight by 2 percent. And two tones must differ in frequency by only 0.3 percent (Teghtsoonian, 1971).
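Weber’s law lends itself to a quick sketch. The hypothetical Python fragment below uses the proportions just cited (8 percent for lights, 2 percent for weights, 0.3 percent for tones) to reproduce the 10-ounce versus 100-ounce example:

```python
# Hypothetical sketch of Weber's law: the jnd is a constant proportion
# of the stimulus intensity, with the proportion depending on the
# stimulus (fractions taken from the text).

WEBER_FRACTIONS = {"light": 0.08, "weight": 0.02, "tone": 0.003}

def just_noticeable_difference(intensity, stimulus):
    """Minimum change that is detectable: a constant proportion of intensity."""
    return WEBER_FRACTIONS[stimulus] * intensity

def is_noticeable(intensity, change, stimulus):
    """Is a change large enough to be detected half the time?"""
    return change >= just_noticeable_difference(intensity, stimulus)

# The text's example: 1 ounce added to 10 ounces is noticeable
# (jnd = 0.2 oz), but 1 ounce added to 100 ounces is not (jnd = 2 oz).
print(is_noticeable(10, 1, "weight"), is_noticeable(100, 1, "weight"))
```

The same absolute change (1 ounce) is noticeable or not depending on the intensity it is added to, which is the whole point of the law.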

Sensory Adaptation

17-3 What is the function of sensory adaptation?

“We need above all to know about changes; no one wants or needs to be reminded 16 hours a day that his shoes are on.” Neuroscientist David Hubel (1979)

“My suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose.” J. B. S. Haldane, Possible Worlds, 1927

Entering your neighbors’ living room, you smell a musty odor. You wonder how they can stand it, but within minutes you no longer notice it. Sensory adaptation—our diminishing sensitivity to an unchanging stimulus—has come to your rescue. (To experience this phenomenon, move your watch up your wrist an inch: You will feel it—but only for a few moments.) After constant exposure to a stimulus, our nerve cells fire less frequently. Why, then, if we stare at an object without flinching, does it not vanish from sight? Because, unnoticed by us, our eyes are always moving, flitting from one spot to another enough to guarantee that stimulation on the eyes’ receptors continually changes (FIGURE 17.3). What if we actually could stop our eyes from moving? Would sights seem to vanish, as odors do? To find out, psychologists have devised ingenious instruments for maintaining a constant image on the eye’s inner surface. Imagine that we have fitted a volunteer, Mary, with one of these instruments—a miniature projector mounted on a contact lens (FIGURE 17.4a). When Mary’s eye moves, the image from the projector moves as well. So everywhere that Mary looks, the scene is sure to go. If we project the profile of a face through such an instrument, what will Mary see? At first, she will see the complete profile. But within a few seconds, as her sensory system begins to fatigue, things will get weird. Bit by bit, the image will vanish, only later to reappear and then disappear—in recognizable fragments or as a whole (FIGURE 17.4b).



FIGURE 17.3 The jumpy eye University of Edinburgh psychologist John Henderson (2007) illustrates how a person’s gaze jumps from one spot to another every third of a second or so. Eye-tracking equipment shows how a typical person views a photograph of Edinburgh’s Princes Street Gardens. Circles represent fixations, and the numbers indicate the time of fixation in milliseconds (300 milliseconds = three-tenths of a second).

Although sensory adaptation reduces our sensitivity, it offers an important benefit: freedom to focus on informative changes in our environment without being distracted by the constant chatter of uninformative background stimulation. Our sensory receptors are alert to novelty; bore them with repetition and they free our attention for more important things. Stinky or heavily perfumed people don’t notice their odor because, like you and me, they adapt to what’s constant and detect only change. This reinforces a fundamental lesson: We perceive the world not exactly as it is, but as it is useful for us to perceive it. Our sensitivity to changing stimulation helps explain television’s attention-grabbing power. Cuts, edits, zooms, pans, sudden noises—all demand attention, even from TV researchers: During interesting conversations, notes Percy Tannenbaum (2002), “I cannot for the life of me stop from periodically glancing over to the screen.” Sensory thresholds and adaptation are only two of the commonalities shared by the senses. All our senses receive sensory stimulation, transform it into neural information, and deliver that information to the brain.

FIGURE 17.4 Sensory adaptation: now you see it, now you don’t! (a) A projector mounted on a contact lens makes the projected image move with the eye. (b) Initially, the person sees the stabilized image, but soon she sees fragments fading and reappearing. (From “Stabilized images on the retina,” by R. M. Pritchard. Copyright © 1961 Scientific American, Inc. All rights reserved.)



Review Introduction to Sensation and Perception

17-1 What are sensation and perception? What do we mean by bottom-up processing and top-down processing?
Sensation is the process by which our sensory receptors and nervous system receive and represent stimulus energies from our environment. Perception is the process of organizing and interpreting this information. Although we view sensation and perception separately to analyze and discuss them, they are actually parts of one continuous process. Bottom-up processing is sensory analysis that begins at the entry level, with information flowing from the sensory receptors to the brain. Top-down processing is analysis that begins with the brain and flows down, filtering information through our experience and expectations to produce perceptions.

17-2 What are the absolute and difference thresholds, and do stimuli below the absolute threshold have any influence?
Our absolute threshold for any stimulus is the minimum stimulation necessary for us to be consciously aware of it 50 percent of the time. Signal detection theory demonstrates that individual absolute thresholds vary, depending on the strength of the signal and also on our experience, expectations, motivation, and alertness. Our difference threshold (also called just noticeable difference, or jnd) is the minimum difference we can discern between two stimuli 50 percent of the time. Priming shows that we can process some information from stimuli below our absolute threshold for conscious awareness. But the effect is too fleeting to enable people to exploit us with subliminal messages. Weber’s law states that two stimuli must differ by a constant proportion to be perceived as different.

17-3 What is the function of sensory adaptation? Sensory adaptation (our diminished sensitivity to constant or routine odors, sounds, and touches) focuses our attention on informative changes in our environment.

Terms and Concepts to Remember
sensation, p. 226
perception, p. 226
bottom-up processing, p. 226
top-down processing, p. 226
psychophysics, p. 227
absolute threshold, p. 227
signal detection theory, p. 227
subliminal, p. 228
priming, p. 229
difference threshold, p. 230
Weber’s law, p. 230
sensory adaptation, p. 230

Test Yourself 1. What is the rough distinction between sensation and perception? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. What types of sensory adaptation have you experienced in the last 24 hours?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 18

The Stimulus Input: Light Energy

Vision

The Eye
Visual Information Processing

18-1 What is the energy that we see as visible light?

One of nature’s great wonders is neither bizarre nor remote, but commonplace: How does our material body construct our conscious visual experience? How do we transform particles of light energy into colorful sights? Part of this genius is our ability to convert one sort of energy to another. Our eyes, for example, receive light energy and transduce (transform) it into neural messages that our brain then processes into what we consciously see. How does such a taken-for-granted yet remarkable thing happen?

䉴|| The Stimulus Input: Light Energy

Color Vision

transduction conversion of one form of energy into another. In sensation, the transforming of stimulus energies, such as sights, sounds, and smells, into neural impulses our brains can interpret.

Scientifically speaking, what strikes our eyes is not color but pulses of electromagnetic energy that our visual system perceives as color. What we see as visible light is but a thin slice of the whole spectrum of electromagnetic energy. As FIGURE 18.1 illustrates, this electromagnetic spectrum ranges from imperceptibly short waves of gamma rays, to the narrow band we see as visible light, to the long waves of radio transmission and AC circuits. Other organisms are sensitive to differing portions of the spectrum. Bees, for instance, cannot see red but can see ultraviolet light.

FIGURE 18.1 The spectrum of electromagnetic energy This spectrum ranges from gamma rays as short as the diameter of an atom to radio waves over a mile long. The narrow band of wavelengths visible to the human eye (shown enlarged, roughly 400 to 700 nanometers) extends from the shorter waves of blue-violet light to the longer waves of red light.

FIGURE 18.2 The physical properties of waves (a) Waves vary in wavelength (the distance between successive peaks). Frequency, the number of complete wavelengths that can pass a point in a given time, depends on the wavelength. The shorter the wavelength, the higher the frequency. (b) Waves also vary in amplitude (the height from peak to trough). Wave amplitude determines the intensity of colors.

wavelength the distance from the peak of one light or sound wave to the peak of the next. Electromagnetic wavelengths vary from the short blips of cosmic rays to the long pulses of radio transmission.

hue the dimension of color that is determined by the wavelength of light; what we know as the color names blue, green, and so forth. intensity the amount of energy in a light or sound wave, which we perceive as brightness or loudness, as determined by the wave’s amplitude.

pupil the adjustable opening in the center of the eye through which light enters.

iris a ring of muscle tissue that forms the colored portion of the eye around the pupil and controls the size of the pupil opening. lens the transparent structure behind the pupil that changes shape to help focus images on the retina. retina the light-sensitive inner surface of the eye, containing the receptor rods and cones plus layers of neurons that begin the processing of visual information.

accommodation the process by which the eye’s lens changes shape to focus near or far objects on the retina.

FIGURE 18.3 The eye Light rays reflected from the candle pass through the cornea, pupil, and lens. The curvature and thickness of the lens change to bring either nearby or distant objects into focus on the retina. Rays from the top of the candle strike the bottom of the retina and those from the left side of the candle strike the right side of the retina. The candle's retinal image is thus upside-down and reversed. Labeled parts: cornea, pupil, iris, lens, retina, fovea (point of central focus), blind spot, and optic nerve to the brain's visual cortex.

(a) Short wavelength = high frequency (bluish colors); long wavelength = low frequency (reddish colors). (b) Great amplitude (bright colors); small amplitude (dull colors).

Two physical characteristics of light help determine our sensory experience of it. Light's wavelength—the distance from one wave peak to the next (FIGURE 18.2a)—determines its hue (the color we experience, such as blue or green). Intensity, the amount of energy in light waves (determined by a wave's amplitude, or height), influences brightness (FIGURE 18.2b). To understand how we transform physical energy into color and meaning, we first need to understand vision's window, the eye.
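The inverse relation between wavelength and frequency can be made concrete with a few lines of Python. This is an illustrative sketch using the standard physical relation (frequency = speed of light / wavelength); the function name is mine, not from the text.

```python
# Sketch of the wavelength-frequency relation for light:
# frequency = speed of light / wavelength, so shorter waves
# have higher frequencies.

SPEED_OF_LIGHT = 299_792_458  # meters per second

def frequency_hz(wavelength_nm):
    """Frequency of light with the given wavelength (in nanometers)."""
    return SPEED_OF_LIGHT / (wavelength_nm * 1e-9)

# Blue-violet light (~400 nm) oscillates faster than red light (~700 nm):
assert frequency_hz(400) > frequency_hz(700)
```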

The Eye

18-2 How does the eye transform light energy into neural messages? Light enters the eye through the cornea, which protects the eye and bends light to provide focus (FIGURE 18.3). The light then passes through the pupil, a small adjustable opening surrounded by the iris, a colored muscle that adjusts light intake. The iris dilates or constricts in response to light intensity and even to inner emotions. (When we're feeling amorous, our telltale dilated pupils and dark eyes subtly signal our interest.) Each iris is so distinctive that an iris-scanning machine could confirm your identity. Behind the pupil is a lens that focuses incoming light rays into an image on the retina, a multilayered tissue on the eyeball's sensitive inner surface. The lens focuses the rays by changing its curvature in a process called accommodation.

For centuries, scientists knew that when an image of a candle passes through a small opening, it casts an inverted mirror image on a dark wall behind. If the retina receives this sort of upside-down image, as in Figure 18.3, how can we see the world right side up? The ever-curious Leonardo da Vinci had an idea: Perhaps the eye's watery fluids bend the light rays, reinverting the image to the upright position as it reaches the retina. But then in 1604, the astronomer and optics expert Johannes Kepler showed that the retina does receive upside-down images of the world (Crombie, 1964). And how could we understand such a world? "I leave it," said the befuddled Kepler, "to natural philosophers." Eventually, the answer became clear: The retina doesn't "see" a whole image. Rather, its millions of receptor cells convert particles of light energy into neural impulses and forward those to the brain. There, the impulses are reassembled into a perceived, upright-seeming image.

rods retinal receptors that detect black, white, and gray; necessary for peripheral and twilight vision, when cones don’t respond. cones retinal receptor cells that are concentrated near the center of the retina and that function in daylight or in welllit conditions. The cones detect fine detail and give rise to color sensations.

optic nerve the nerve that carries neural impulses from the eye to the brain.

blind spot the point at which the optic nerve leaves the eye, creating a "blind" spot because no receptor cells are located there.

The Retina

If you could follow a single light-energy particle into your eye, you would first make your way through the retina's outer layer of cells to its buried receptor cells, the rods and cones (FIGURE 18.4). There, you would see the light energy trigger chemical changes that would spark neural signals, activating neighboring bipolar cells. The bipolar cells in turn would activate the neighboring ganglion cells. Following the particle's path, you would see axons from this network of ganglion cells converging, like the strands of a rope, to form the optic nerve that carries information to your brain (where the thalamus will receive and distribute the information). The optic nerve can send nearly 1 million messages at once through its nearly 1 million ganglion fibers. (The auditory nerve, which enables hearing, carries much less information through its mere 30,000 fibers.) Where the optic nerve leaves the eye there are no receptor cells—creating a blind spot (FIGURE 18.5). Close one eye and you won't see a black hole on your TV screen, however. Without seeking your approval, your brain fills in the hole.

FIGURE 18.4 The retina's reaction to light (1) Light entering the eye triggers a photochemical reaction in rods and cones at the back of the retina. (2) The chemical reaction in turn activates bipolar cells. (3) Bipolar cells then activate the ganglion cells, the axons of which converge to form the optic nerve. This nerve transmits information to the visual cortex (via the thalamus) in the brain.

FIGURE 18.5 The blind spot There are no receptor cells where the optic nerve leaves the eye (see Figure 18.4). This creates a blind spot in your vision. To demonstrate, close your left eye, look at the spot, and move the page to a distance from your face (about a foot) at which the car disappears. The blind spot does not normally impair your vision, because your eyes are moving and because one eye catches what the other misses.

Rods and cones differ in their geography and in the tasks they handle (TABLE 18.1). Cones cluster in and around the fovea, the retina's area of central focus (see Figure 18.3). Many cones have their own hotline to the brain—bipolar cells that help relay the cone's individual message to the visual cortex, which devotes a large area to input from the fovea. These direct connections preserve the cones' precise information, making them better able to detect fine detail. Rods have no such hotline; they share bipolar cells with other rods, sending combined messages. To experience this difference in sensitivity to details, pick a word in this sentence and stare directly at it, focusing its image on the cones in your fovea. Notice that words a few inches off to the side appear blurred? Their image strikes the more peripheral region of your retina, where rods predominate. The next time you are driving or biking, note, too, that you can detect a car in your peripheral vision well before perceiving its details.

Cones also enable you to perceive color. In dim light they become ineffectual, so you see no colors. Rods, which enable black-and-white vision, remain sensitive in dim light, and several rods will funnel their faint energy output onto a single bipolar cell. Thus, cones and rods each provide a special sensitivity—cones to detail and color, and rods to faint light. When you enter a darkened theater or turn off the light at night, your pupils dilate to allow more light to reach your retina. It typically takes 20 minutes or more before your eyes fully adapt. You can demonstrate dark adaptation by closing or covering one eye for up to 20 minutes. Then make the light in the room not quite bright enough to read this book with your open eye. Now open the dark-adapted eye and read (easily). This period of dark adaptation parallels the average natural twilight transition between the sun's setting and darkness.

TABLE 18.1 Receptors in the Human Eye: Rod-Shaped Rods and Cone-Shaped Cones

                              Cones        Rods
Number                        6 million    120 million
Location in retina            Center       Periphery
Sensitivity in dim light      Low          High
Color sensitivity             High         Low
Detail sensitivity            High         Low

Some nocturnal animals, such as toads, mice, rats, and bats, have retinas made up almost entirely of rods, allowing them to function well in dim light. These creatures probably have very poor color vision. Knowing just this much about the eye, can you imagine why a cat sees so much better at night than you do?1

fovea the central focal point in the retina, around which the eye’s cones cluster.

Visual Information Processing

18-3 How does the brain process visual information? Visual information percolates through progressively more abstract levels. At the entry level, the retina processes information before routing it via the thalamus to the brain's cortex. The retina's neural layers—which are actually brain tissue that migrates to the eye during early fetal development—don't just pass along electrical impulses; they also help to encode and analyze the sensory information. The third neural layer in a frog's eye, for example, contains the "bug detector" cells that fire only in response to moving flylike stimuli. After processing by your retina's nearly 130 million receptor rods and cones, information travels to your bipolar cells, then to your million or so ganglion cells, through their axons making up the optic nerve, to your brain. Any given retinal area relays its information to a corresponding location in the visual cortex, in the occipital lobe at the back of your brain (FIGURE 18.6). The same sensitivity that enables retinal cells to fire messages can lead them to misfire as well. Turn your eyes to the left, close them, and then gently rub the right side of your right eyelid with your fingertip. Note the patch of light to the left, moving as your finger moves. Why do you see light? Why at the left?

1. There are at least two reasons: (1) A cat's pupils can open much wider than yours, letting in more light; (2) a cat has a higher proportion of light-sensitive rods (Moser, 1987). But there is a trade-off: With fewer cones, a cat sees neither details nor color as well as you do.

FIGURE 18.6 Pathway from the eyes to the visual cortex Ganglion axons forming the optic nerve run to the thalamus, where they synapse with neurons that run to the visual cortex.

Your retinal cells are so responsive that even pressure triggers them. But your brain interprets their firing as light. Moreover, it interprets the light as coming from the left—the normal direction of light that activates the right side of the retina.

Feature Detection

Well-developed supercells In this 2007 World Cup match, Brazil's Marta instantly processed visual information about the positions and movements of Australia's defenders and goalie (Melissa Barbieri) and somehow managed to get the ball around them all and into the net.

FIGURE 18.7 The telltale brain Looking at faces, houses, and chairs activates different brain areas in this right-facing brain.

Nobel prize winners David Hubel and Torsten Wiesel (1979) demonstrated that neurons in the occipital lobe's visual cortex receive information from individual ganglion cells in the retina. These feature detector cells derive their name from their ability to respond to a scene's specific features—to particular edges, lines, angles, and movements. Feature detectors in the visual cortex pass such information to other cortical areas where teams of cells (supercell clusters) respond to more complex patterns. One temporal lobe area just behind your right ear, for example, enables you to perceive faces. If this region were damaged, you might recognize other forms and objects, but not familiar faces. Functional MRI (fMRI) scans show other brain areas lighting up when people view other object categories (Downing et al., 2001). Damage in these areas blocks other perceptions while sparing face recognition. Amazingly specific combinations of activity may appear (FIGURE 18.7). "We can tell if a person is looking at a shoe, a chair, or a face, based on the pattern of their brain activity," notes researcher James Haxby (2001). Psychologist David Perrett and his colleagues (1988, 1992, 1994) reported that for biologically important objects and events, monkey brains (and surely ours as well) have a "vast visual encyclopedia" distributed as cells that specialize in responding to one type of stimulus—such as a specific gaze, head angle, posture, or body movement. Other supercell clusters integrate this information and fire only when the cues collectively indicate the direction of someone's attention and approach. This instant analysis, which aided our ancestors' survival, also helps a soccer goalie anticipate the direction of an impending kick, and a driver anticipate a pedestrian's next movement.


Parallel Processing

Unlike most computers, which do step-by-step serial processing, our brain engages in parallel processing: doing many things at once. The brain divides a visual scene into subdimensions, such as color, movement, form, and depth (FIGURE 18.8), and works on each aspect simultaneously (Livingstone & Hubel, 1988). We then construct our perceptions by integrating the separate but parallel work of these different visual teams. To recognize a face, for example, the brain integrates information that the retina projects to several visual cortex areas, compares it to stored information, and enables you to recognize the image as, say, your grandmother. The whole process of facial recognition requires tremendous brain power—30 percent of the cortex (10 times the brain area devoted to hearing). If researchers temporarily disrupt the brain's face-processing areas with magnetic pulses, people are unable to recognize faces. They will, however, be able to recognize houses; the brain's face-perception process differs from its object-perception process (McKone et al., 2007; Pitcher et al., 2007). Destroying or disabling the neural workstation for other visual subtasks produces different peculiar results, as happened to "Mrs. M." (Hoffman, 1998). Since a stroke damaged areas near the rear of both sides of her brain, she can no longer perceive movement. People in a room seem "suddenly here or there but I have not seen them moving." Pouring tea into a cup is a challenge because the fluid appears frozen—she cannot perceive it rising in the cup. Others with stroke or surgery damage to their brain's visual cortex have experienced blindsight, a localized area of blindness in part of their field of vision (Weiskrantz, 1986). Shown a series of sticks in the blind field, they report seeing nothing. Yet when asked to guess whether the sticks are vertical or horizontal, their visual intuition typically offers the correct response.
When told, “You got them all right,” they are astounded. There is, it seems, a second “mind”—a parallel processing system—operating unseen. Our separate visual systems for perception and action illustrate dual processing—the two-track mind. It’s not just brain-injured people who have two visual information systems, as Jennifer Boyer and her colleagues (2005) showed in studies of people without such injuries. Using magnetic pulses to shut down the brain’s primary visual cortex area, the researchers showed these temporarily disabled people a horizontal or vertical line, or a red or green dot. Although they reported seeing nothing, the participants were right 75 percent of the time in guessing the line orientation and 81 percent right in guessing the dot color. A scientific understanding of visual information processing leaves many neuropsychologists awestruck. As Roger Sperry (1985) observed, the “insights of science give added, not lessened, reasons for awe, respect, and reverence.” Think about it: As you look at someone, visual information is transduced and sent to your brain as millions of neural impulses, then constructed into its component features, and finally, in

feature detectors nerve cells in the brain that respond to specific features of the stimulus, such as shape, angle, or movement. parallel processing the processing of many aspects of a problem simultaneously; the brain’s natural mode of information processing for many functions, including vision. Contrasts with the stepby-step (serial) processing of most computers and of conscious problem solving.

FIGURE 18.8 Parallel processing Studies of patients with brain damage suggest that the brain delegates the work of processing color, motion, form, and depth to different areas. After taking a scene apart, how does the brain integrate these subdimensions into the perceived image? The answer to this question is the Holy Grail of vision research.

FIGURE 18.9 A simplified summary of visual information processing Scene → (1) Retinal processing: receptor rods and cones → bipolar cells → ganglion cells; (2) Feature detection: the brain's detector cells respond to specific features—edges, lines, and angles; (3) Parallel processing: brain cell teams process combined information about color, movement, form, and depth; (4) Recognition: the brain interprets the constructed image based on information from stored images.

“I am . . . wonderfully made.” King David, Psalm 139:14

some as-yet mysterious way, composed into a meaningful image, which you compare with previously stored images and recognize: “That’s Sara!” Likewise, as you read this page, the printed squiggles are transmitted by reflected light rays onto your retina, which triggers a process that sends formless nerve impulses to several areas of your brain, which integrates the information and decodes meaning, thus completing the transfer of information across time and space from my mind to your mind. The whole process (FIGURE 18.9) is more complex than taking apart a car, piece by piece, transporting it to a different location, then having specialized workers reconstruct it. That all of this happens instantly, effortlessly, and continuously is indeed awesome.

Color Vision

18-4 What theories help us understand color vision?

“Only mind has sight and hearing; all things else are deaf and blind.” Epicharmus, Fragments, 550 B.C.

We talk as though objects possess color: “A tomato is red.” Perhaps you have pondered the old question, “If a tree falls in the forest and no one hears it, does it make a sound?” We can ask the same of color: If no one sees the tomato, is it red? The answer is no. First, the tomato is everything but red, because it rejects (reflects) the long wavelengths of red. Second, the tomato’s color is our mental construction. As Isaac Newton (1704) noted, “The [light] rays are not colored.” Color, like all aspects of vision, resides not in the object but in the theater of our brains, as evidenced by our dreaming in color. In the study of vision, one of the most basic and intriguing mysteries is how we see the world in color. How, from the light energy striking the retina, does the brain manufacture our experience of color—and of such a multitude of colors? Our difference threshold for colors is so low that we can discriminate some 7 million different color variations (Geldard, 1972).


At least most of us can. For about 1 person in 50, vision is color deficient—and that person is usually male, because the defect is genetically sex-linked. To understand why some people's vision is color deficient, it will help to first understand how normal color vision works. Modern detective work on the mystery of color vision began in the nineteenth century when Hermann von Helmholtz built on the insights of an English physicist, Thomas Young. Knowing that any color can be created by combining the light waves of three primary colors—red, green, and blue—Young and von Helmholtz inferred that the eye must have three corresponding types of color receptors. Years later, researchers measured the response of various cones to different color stimuli and confirmed the Young-Helmholtz trichromatic (three-color) theory, which implies that the cones do their color magic in teams of three. Indeed, the retina has three types of color receptors, each especially sensitive to one of three colors. And those colors are, in fact, red, green, and blue. When we stimulate combinations of these cones, we see other colors. For example, there are no receptors especially sensitive to yellow. Yet when both red-sensitive and green-sensitive cones are stimulated, we see yellow. Most people with color-deficient vision are not actually "colorblind." They simply lack functioning red- or green-sensitive cones, or sometimes both. Their vision—perhaps unknown to them, because their lifelong vision seems normal—is monochromatic (one-color) or dichromatic (two-color) instead of trichromatic, making it impossible to distinguish the red and green in FIGURE 18.10 (Boynton, 1979). Dogs, too, lack receptors for the wavelengths of red, giving them only limited, dichromatic color vision (Neitz et al., 1989). But trichromatic theory cannot solve all parts of the color vision mystery, as Ewald Hering soon noted. For example, we see yellow when mixing red and green light.
But how is it that those blind to red and green can often still see yellow? And why does yellow appear to be a pure color and not a mixture of red and green, the way purple is of red and blue? Hering, a physiologist, found a clue in the well-known occurrence of afterimages. When you stare at a green square for a while and then look at a white sheet of paper, you see red, green's opponent color. Stare at a yellow square and you will later see its opponent color, blue, on the white paper (as in the flag demonstration in FIGURE 18.11). Hering surmised that there must be two additional color processes, one responsible for red-versus-green perception, and one for blue-versus-yellow.

Young-Helmholtz trichromatic (three-color) theory the theory that the retina contains three different color receptors—one most sensitive to red, one to green, one to blue—which, when stimulated in combination, can produce the perception of any color.

FIGURE 18.10 Color-deficient vision People who suffer red-green deficiency have trouble perceiving the number within the design.

FIGURE 18.11 Afterimage effect Stare at the center of the flag for a minute and then shift your eyes to the dot in the white space beside it. What do you see? (After tiring your neural response to black, green, and yellow, you should see their opponent colors.) Stare at a white wall and note how the size of the flag grows with the projection distance!


opponent-process theory the theory that opposing retinal processes (redgreen, yellow-blue, white-black) enable color vision. For example, some cells are stimulated by green and inhibited by red; others are stimulated by red and inhibited by green.

A century later, researchers confirmed Hering's opponent-process theory. As visual information leaves the receptor cells, we analyze it in terms of three sets of opponent colors: red-green, yellow-blue, and white-black. In the retina and in the thalamus (where impulses from the retina are relayed en route to the visual cortex), some neurons are turned "on" by red but turned "off" by green. Others are turned on by green but off by red (DeValois & DeValois, 1975). Opponent processes explain afterimages, such as in the flag demonstration, in which we tire our green response by staring at green. When we then stare at white (which contains all colors, including red), only the red part of the green-red pairing will fire normally. The present solution to the mystery of color vision is therefore roughly this: Color processing occurs in two stages. The retina's red, green, and blue cones respond in varying degrees to different color stimuli, as the Young-Helmholtz trichromatic theory suggested. Their signals are then processed by the nervous system's opponent-process cells, en route to the visual cortex.
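The two-stage account of color processing can be sketched in a few lines of Python. This is a schematic illustration only; the channel formulas below are simplified assumptions chosen to mirror the logic of the theory, not the actual neural computation.

```python
# Schematic two-stage sketch of color processing: trichromatic cone
# responses (stage 1) are recoded into opponent channels (stage 2).
# The formulas are illustrative simplifications, not physiology.

def opponent_channels(red, green, blue):
    red_green = red - green                 # + is reddish, - is greenish
    yellow_blue = (red + green) / 2 - blue  # + is yellowish, - is bluish
    white_black = (red + green + blue) / 3  # overall brightness
    return red_green, yellow_blue, white_black

# Equal red and green cone activity (with little blue) yields a strong
# "yellow" signal and no net red-green signal—consistent with yellow
# appearing as a pure color rather than a reddish-greenish mixture:
rg, yb, _ = opponent_channels(1.0, 1.0, 0.0)
print(rg, yb)  # 0.0 1.0
```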

Review

Vision

18-1 What is the energy that we see as visible light? Each sense receives stimulation, transforms (transduces) it into neural signals, and sends these neural messages to the brain. In vision, the signals consist of light-energy particles from a thin slice of the broad spectrum of electromagnetic energy. The hue we perceive in a light depends on its wavelength, and its brightness depends on its intensity.

18-2 How does the eye transform light energy into neural messages? After entering the eye and being focused by a lens, light-energy particles strike the eye's inner surface, the retina. The retina's light-sensitive rods and color-sensitive cones convert the light energy into neural impulses which, after processing by bipolar and ganglion cells, travel through the optic nerve to the brain.

18-3 How does the brain process visual information? Impulses travel along the optic nerve, to the thalamus, and on to the visual cortex. In the visual cortex, feature detectors respond to specific features of the visual stimulus. Higher-level supercells integrate this pool of data for processing in other cortical areas. Parallel processing in the brain handles many aspects of a problem simultaneously, and separate neural teams work on visual subtasks (color, movement, depth, and form). Other neural teams integrate the results, comparing them with stored information, and enabling perceptions.

18-4 What theories help us understand color vision? The Young-Helmholtz trichromatic (three-color) theory proposed that the retina contains three types of color receptors. Contemporary research has found three types of cones, each most sensitive to the wavelengths of one of the three primary colors of light (red, green, or blue). Hering's opponent-process theory proposed three additional color processes (red-versus-green, blue-versus-yellow, black-versus-white). Contemporary research has confirmed that, en route to the brain, neurons in the retina and the thalamus code the color-related information from the cones into pairs of opponent colors. These two theories, and the research supporting them, show that color processing occurs in two stages.

Terms and Concepts to Remember transduction, p. 233 wavelength, p. 234 hue, p. 234 intensity, p. 234 pupil, p. 234 iris, p. 234 lens, p. 234 retina, p. 234 accommodation, p. 234 rods, p. 235

cones, p. 235 optic nerve, p. 235 blind spot, p. 235 fovea, p. 236 feature detectors, p. 238 parallel processing, p. 239 Young-Helmholtz trichromatic (three-color) theory, p. 241 opponent-process theory, p. 242

Test Yourself 1. What is the rapid sequence of events that occurs when you see and recognize someone you know? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. If you were forced to give up one sense, which would it be? Why?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 19

Hearing

The Stimulus Input: Sound Waves
The Ear
Hearing Loss and Deaf Culture

Among the mysterious but amazing aspects of our ordinary experience is the process by which we transduce air pressure waves into neural messages the brain interprets as a meaningful symphony of sound. Like our other senses, our audition, or hearing, is highly adaptive. We hear a wide range of sounds, but we hear best those sounds with frequencies in a range corresponding to that of the human voice. We also are acutely sensitive to faint sounds, an obvious boon for our ancestors' survival when hunting or being hunted, or for detecting a child's whimper. (If our ears were much more sensitive, we would hear a constant hiss from the movement of air molecules.) We are also remarkably attuned to variations in sounds. We easily detect differences among thousands of human voices: Answering the phone, we recognize a friend calling from the moment she says "Hi." A fraction of a second after such events stimulate receptors in the ear, millions of neurons have simultaneously coordinated in extracting the essential features, comparing them with past experience, and identifying the stimulus (Freeman, 1991). Let's start by considering the fundamental question: How do we do it?

The Stimulus Input: Sound Waves

19-1 What are the characteristics of air pressure waves that we hear as sound? Draw a bow across a violin, and the resulting stimulus energy is sound waves—jostling molecules of air, each bumping into the next, like a shove transmitted through a concert hall's crowded exit tunnel. The resulting waves of compressed and expanded air are like the ripples on a pond circling out from where a stone has been tossed. As we swim in our ocean of moving air molecules, our ears detect these brief air pressure changes. Exposed to a loud, low bass sound—perhaps from a bass guitar or a cello—we can also feel the vibration, and we hear by both air and bone conduction.

The sounds of music A violin's short, fast waves create a high pitch, a cello's longer, slower waves a lower pitch. Differences in the waves' height, or amplitude, also create differing degrees of loudness.

audition the sense or act of hearing.

MODULE 19 Hearing

FIGURE 19.1 The physical properties of waves (a) Waves vary in wavelength, the distance between successive peaks. Frequency, the number of complete wavelengths that can pass a point in a given time, depends on the wavelength. The shorter the wavelength, the higher the frequency. Short wavelength = high frequency (high-pitched sounds); long wavelength = low frequency (low-pitched sounds). (b) Waves also vary in amplitude, the height from peak to trough. Wave amplitude determines the intensity of sounds: great amplitude (loud sounds); small amplitude (soft sounds).

frequency the number of complete wavelengths that pass a point in a given time (for example, per second).

pitch a tone’s experienced highness or lowness; depends on frequency.

middle ear the chamber between the eardrum and cochlea containing three tiny bones (hammer, anvil, and stirrup) that concentrate the vibrations of the eardrum on the cochlea’s oval window.

cochlea [KOHK-lee-uh] a coiled, bony, fluid-filled tube in the inner ear through which sound waves trigger nerve impulses.

inner ear the innermost part of the ear, containing the cochlea, semicircular canals, and vestibular sacs.

Be kind to your inner ear’s hair cells When vibrating in response to sound, the hair cells shown here lining the cochlea produce an electrical signal.

The ears then transform the vibrating air into nerve impulses, which our brain decodes as sounds. The strength, or amplitude, of sound waves (FIGURE 19.1) determines their loudness. Waves also vary in length, and therefore in frequency. Their frequency determines the pitch we experience: Long waves have low frequency—and low pitch. Short waves have high frequency—and high pitch. A violin produces much shorter, faster sound waves than does a cello. We measure sounds in decibels. The absolute threshold for hearing is arbitrarily defined as zero decibels. Every 10 decibels correspond to a tenfold increase in sound intensity. Thus, normal conversation (60 decibels) is 10,000 times more intense than a 20-decibel whisper. And a temporarily tolerable 100-decibel passing subway train is 10 billion times more intense than the faintest detectable sound.
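The decibel arithmetic here can be checked with a few lines of code (an illustrative sketch; the function name is mine, not the text’s): a difference of d decibels corresponds to an intensity ratio of 10 raised to the power d/10.

```python
# Decibels are logarithmic: every 10 dB marks a tenfold increase
# in sound intensity over the 0-dB threshold of hearing.
def intensity_ratio(db_a, db_b=0.0):
    """How many times more intense a db_a sound is than a db_b sound."""
    return 10 ** ((db_a - db_b) / 10)

print(intensity_ratio(60, 20))  # conversation vs. whisper: 10000.0
print(intensity_ratio(100))     # subway train vs. threshold: 10000000000.0 (10 billion)
```

The two calls reproduce the text’s figures: a 40-dB gap is a 10,000-fold intensity difference, and 100 dB is 10 billion times the faintest detectable sound.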

The Ear

19-2 How does the ear transform sound energy into neural messages?

To hear, we must somehow convert sound waves into neural activity. The human ear accomplishes this feat through an intricate mechanical chain reaction (FIGURE 19.2). First, the visible outer ear channels the sound waves through the auditory canal to the eardrum, a tight membrane that vibrates with the waves. The middle ear then transmits the eardrum’s vibrations through a piston made of three tiny bones (the hammer, anvil, and stirrup) to the cochlea, a snail-shaped tube in the inner ear. The incoming vibrations cause the cochlea’s membrane (the oval window) to vibrate, jostling the fluid that fills the tube. This motion causes ripples in the basilar membrane, bending the hair cells lining its surface, not unlike the wind bending a wheat field. Hair cell movement triggers impulses in the adjacent nerve cells, whose axons converge to form the auditory nerve, which sends neural messages (via the thalamus) to the temporal lobe’s auditory cortex. From vibrating air to moving piston to fluid waves to electrical impulses to the brain: Voilà! We hear.

My vote for the most intriguing part of the hearing process is the hair cells. A Howard Hughes Medical Institute (2008)


[FIGURE 19.2 diagram labels: (a) outer ear (auditory canal, eardrum), middle ear (hammer, anvil, stirrup), inner ear (oval window, cochlea, semicircular canals, auditory nerve), and the auditory cortex of the temporal lobe; (b) enlargement of the middle and inner ear, with the cochlea partially uncoiled, showing sound waves, the stirrup at the oval window, motion of fluid in the cochlea, protruding hair cells, and nerve fibers to the auditory nerve.]

report on these “quivering bundles that let us hear” marvels at their “extreme sensitivity and extreme speed.” A cochlea has 16,000 of them, which sounds like a lot until we compare that with an eye’s 130 million or so photoreceptors. But consider their responsiveness. Deflect the tiny bundles of cilia on the tip of a hair cell by the width of an atom—the equivalent of displacing the top of the Eiffel Tower by half an inch— and the alert hair cell, thanks to a special protein at its tip, triggers a neural response (Corey et al., 2004). Damage to hair cells accounts for most hearing loss. They have been likened to shag carpet fibers. Walk around on them and they will spring back with a quick vacuuming. But leave a heavy piece of furniture on them for a long time and they may never rebound. As a general rule, if we cannot talk over a noise, it is potentially harmful, especially if prolonged and repeated (Roesser, 1998). Such experiences are common when sound exceeds 100 decibels, as happens in venues from frenzied sports arenas to bagpipe bands to iPods playing near maximum volume (FIGURE 19.3 on the next page). Ringing of the ears after exposure to loud machinery or music indicates that we have been bad to our unhappy hair cells. As pain alerts us to possible bodily harm, ringing of the ears alerts us to possible hearing damage. It is hearing’s equivalent of bleeding. Teen boys more than teen girls or adults blast themselves with loud volumes for long periods (Zogby, 2006). Males’ greater noise exposure may help explain why men’s hearing tends to be less acute than women’s. But male or female, those who spend many hours in a loud nightclub, behind a power mower, or above a jackhammer should wear earplugs. “Condoms or, safer yet, abstinence,” say sex educators. “Earplugs or walk away,” say hearing educators.

FIGURE 19.2 Hear here: How we transform sound waves into nerve impulses that our brain interprets (a) The outer ear funnels sound waves to the eardrum. The bones of the middle ear amplify and relay the eardrum’s vibrations through the oval window into the fluid-filled cochlea. (b) As shown in this detail of the middle and inner ear, the resulting pressure changes in the cochlear fluid cause the basilar membrane to ripple, bending the hair cells on the surface. Hair cell movements trigger impulses at the base of the nerve cells, whose fibers converge to form the auditory nerve, which sends neural messages to the thalamus and on to the auditory cortex.


FIGURE 19.3 The intensity of some common sounds At close range, the thunder that follows lightning has 120-decibel intensity.


Decibels:
140: Rock band (amplified) at close range
120: Loud thunder
110: Jet plane at 500 feet
100: Subway train at 20 feet
(Prolonged exposure above 85 decibels produces hearing loss)
70: Busy street corner
60: Normal conversation
20: Whisper
0: Threshold of hearing

Perceiving Loudness

So, how do we detect loudness? It is not, as I would have guessed, from the intensity of a hair cell’s response. Rather, a soft, pure tone activates only the few hair cells attuned to its frequency. Given louder sounds, its neighbor hair cells also respond. Thus, the brain can interpret loudness from the number of activated hair cells. If a hair cell loses sensitivity to soft sounds, it may still respond to loud sounds. This helps explain another surprise: Really loud sounds may seem loud both to people with hearing loss and to those with normal hearing. As a person with hearing loss, I used to wonder when exposed to really loud music what it must sound like to people with normal hearing. Now I realize it can sound much the same; where we differ is in our sensation of soft sounds. This is why we hard-of-hearing people do not want all sounds (loud and soft) amplified. We like sound compressed—which means harder-to-hear sounds are amplified more than loud sounds (a feature of today’s digital hearing aids).
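The compression idea can be sketched numerically (a hypothetical toy model, not how any particular hearing aid works): map input levels so that loud sounds pass through nearly unchanged while soft sounds receive more gain.

```python
# Toy model of hearing-aid compression: soft sounds get more gain
# than loud ones, so quiet speech becomes audible without loud
# sounds becoming painful.
def compress(input_db, ratio=2.0, anchor_db=100.0):
    """Leave an anchor_db sound unchanged; amplify quieter sounds,
    shrinking level differences by the compression ratio."""
    return anchor_db - (anchor_db - input_db) / ratio

for level in (20, 40, 60, 80, 100):
    gain = compress(level) - level
    print(f"{level} dB in -> {compress(level):.0f} dB out (gain {gain:.0f} dB)")
```

In this sketch a 20-dB input receives 40 dB of gain while a 100-dB input receives none, which matches the text’s description of amplifying harder-to-hear sounds more than loud ones.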

Perceiving Pitch

19-3 What theories help us understand pitch perception?

place theory in hearing, the theory that links the pitch we hear with the place where the cochlea’s membrane is stimulated.

frequency theory in hearing, the theory that the rate of nerve impulses traveling up the auditory nerve matches the frequency of a tone, thus enabling us to sense its pitch.

conduction hearing loss hearing loss caused by damage to the mechanical system that conducts sound waves to the cochlea.

How do we know whether a sound is the high-frequency, high-pitched chirp of a bird or the low-frequency, low-pitched roar of a truck? Current thinking on how we discriminate pitch combines two theories. Hermann von Helmholtz’s place theory presumes that we hear different pitches because different sound waves trigger activity at different places along the cochlea’s basilar membrane. Thus, the brain determines a sound’s pitch by recognizing the specific place (on the membrane) that is generating the neural signal. When Nobel laureate-to-be Georg von Békésy (1957) cut holes in the cochleas of guinea pigs and human cadavers and looked inside with a microscope, he discovered that the cochlea vibrated, rather like a shaken bedsheet, in response to sound. High frequencies produced large vibrations near the beginning of the cochlea’s membrane, low frequencies near the end.


But there is a problem with place theory. It can explain how we hear high-pitched sounds, but not how we hear low-pitched sounds, because the neural signals generated by low-pitched sounds are not so neatly localized on the basilar membrane. Frequency theory suggests an alternative explanation: The brain reads pitch by monitoring the frequency of neural impulses traveling up the auditory nerve. The whole basilar membrane vibrates with the incoming sound wave, triggering neural impulses to the brain at the same rate as the sound wave. If the sound wave has a frequency of 100 waves per second, then 100 pulses per second travel up the auditory nerve. Frequency theory can explain how we perceive low-pitched sounds. But it, too, is problematic: An individual neuron cannot fire faster than 1000 times per second. How, then, can we sense sounds with frequencies above 1000 waves per second (roughly the upper third of a piano keyboard)? Enter the volley principle: Like soldiers who alternate firing so that some can shoot while others reload, neural cells can alternate firing. By firing in rapid succession, they can achieve a combined frequency above 1000 waves per second. Thus, place theory best explains how we sense high pitches, frequency theory best explains how we sense low pitches, and some combination of place and frequency seems to handle the pitches in the intermediate range.
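The volley principle lends itself to a small numerical sketch (a hypothetical illustration; the 1000-spikes-per-second cap comes from the text, the function names are mine):

```python
# Hypothetical sketch of the volley principle: neurons capped at
# 1000 spikes per second take turns firing, and their interleaved
# volleys jointly track a frequency no single neuron could match.
MAX_RATE = 1000  # maximum spikes per second for one neuron

def neurons_needed(sound_hz):
    """Smallest team of alternating neurons whose combined volleys
    can fire once per cycle of a sound_hz tone (ceiling division)."""
    return -(-sound_hz // MAX_RATE)

def spike_times(sound_hz, n_neurons, cycles):
    """Assign each wave cycle to a neuron round-robin; each neuron's
    own firing rate then stays at or below MAX_RATE."""
    period = 1 / sound_hz
    schedule = [[] for _ in range(n_neurons)]
    for cycle in range(cycles):
        schedule[cycle % n_neurons].append(cycle * period)
    return schedule

n = neurons_needed(3000)            # a 3000-Hz tone needs 3 neurons
volleys = spike_times(3000, n, 6)
# Each neuron fires on every third cycle (1000 spikes per second),
# yet together the team fires 3000 times per second.
```

The round-robin schedule is the soldiers-reloading analogy in miniature: any one neuron fires at its 1000-per-second limit, while the group as a whole matches the 3000-Hz wave.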

Locating Sounds

19-4 How do we locate sounds?

Why don’t we have one big ear—perhaps above our one nose? The better to hear you, as the wolf said to Red Riding Hood. As the placement of our eyes allows us to sense visual depth, so the placement of our two ears allows us to enjoy stereophonic (“three-dimensional”) hearing. Two ears are better than one for at least two reasons: If a car to the right honks, your right ear receives a more intense sound, and it receives sound slightly sooner than your left ear (FIGURE 19.4). Because sound travels 750 miles per hour and our ears are but 6 inches apart, the intensity difference and the time lag are extremely small. However, our supersensitive auditory system can detect such minute differences (Brown & Deffenbacher, 1979; Middlebrooks & Green, 1991). A just noticeable difference in the direction of two sound sources corresponds to a time difference of just 0.000027 second! To simulate what the ears experience with sound from varying locations, audio software can emit sound from two stereo speakers with varying time delays and intensity. The result: We may perceive a bee buzzing loudly in one ear, then flying around the room and returning to buzz near the other ear (Harvey, 2002). So how well do you suppose we do at locating a sound that is equidistant from our two ears, such as those that come from directly ahead, behind, overhead, or beneath us? Not very well. Why? Because such sounds strike the two ears simultaneously. Sit with closed eyes while a friend snaps fingers around your head. You will easily point to the sound when it comes from either side, but you will likely make some mistakes when it comes from directly ahead, behind, above, or below. That is why, when trying to pinpoint a sound, you cock your head, so that your two ears will receive slightly different messages.

Hearing Loss and Deaf Culture

19-5 What are the common causes of hearing loss, and why does controversy surround cochlear implants?

The ear’s intricate and delicate structure makes it vulnerable to damage. Problems with the mechanical system that conducts sound waves to the cochlea cause conduction hearing loss. If the eardrum is punctured or if the tiny bones of the middle ear lose their ability to vibrate, the ear’s ability to conduct vibrations diminishes.

FIGURE 19.4 How we locate sounds Sound waves strike one ear sooner and more intensely than the other. From this information, our nimble brain computes the sound’s location. As you might therefore expect, people who lose all hearing in one ear often have difficulty locating sounds.
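The time lag in Figure 19.4 can be verified with a quick calculation (an illustrative sketch using the text’s figures of 750 miles per hour and 6 inches):

```python
# Back-of-the-envelope interaural time difference (ITD), using the
# text's figures: sound at 750 mph, ears about 6 inches apart.
SPEED_OF_SOUND = 750 * 5280 / 3600   # mph -> feet per second (1100 ft/s)
EAR_SEPARATION = 0.5                 # 6 inches, expressed in feet

# Largest possible lag: sound arriving from directly beside the head
# travels the full ear-to-ear distance before reaching the far ear.
max_itd = EAR_SEPARATION / SPEED_OF_SOUND   # about 0.000455 second

print(f"Maximum arrival-time difference: {max_itd:.6f} s")

# The just noticeable difference quoted in the text is far smaller:
jnd = 0.000027
print(f"The JND is about 1/{max_itd / jnd:.0f} of the maximum lag")
```

Even the largest possible lag is under half a millisecond, and the auditory system resolves differences roughly seventeen times finer than that.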

“By placing my hand on a person’s lips and throat, I gain an idea of many specific vibrations, and interpret them: a boy’s chuckle, a man’s ‘Whew!’ of surprise, the ‘Hem!’ of annoyance or perplexity, the moan of pain, a scream, a whisper, a rasp, a sob, a choke, and a gasp.” Helen Keller, 1908

Hardware for hearing An X-ray image shows a cochlear implant’s array of wires leading to 12 stimulation sites on the auditory nerve.

cochlear implant a device for converting sounds into electrical signals and stimulating the auditory nerve through electrodes threaded into the cochlea.

|| Experiments are also under way to restore vision—with a bionic retina (a 2-millimeter-diameter microchip with photoreceptors that stimulate damaged retinal cells), and with a video camera and computer that stimulate the visual cortex. In test trials, both devices have enabled blind people to gain partial sight (Boahen, 2005; Steenhuysen, 2002). ||

|| Deaf culture advocates prefer capitalizing “Deaf” when referring to people with deafness, and to the Deaf community in general. In referring to children without hearing, “deaf” is usually lowercased because young children have not yet had the opportunity to make an informed decision about whether they are a part of the Deaf community. I have followed this style throughout my text. ||

Wolfgang Gstottner. (2004) American Scientist, Vol. 92, Number 5. (p. 437)

sensorineural hearing loss hearing loss caused by damage to the cochlea’s receptor cells or to the auditory nerves; also called nerve deafness.


Damage to the cochlea’s hair cell receptors or their associated nerves can cause the more common sensorineural hearing loss (or nerve deafness). Occasionally, disease causes sensorineural hearing loss, but more often the culprits are biological changes linked with heredity, aging, and prolonged exposure to ear-splitting noise or music. (See Close-Up: Living in a Silent World.) For now, the only way to restore hearing for people with nerve deafness is a sort of bionic ear—a cochlear implant. This electronic device translates sounds into electrical signals that, wired into the cochlea’s nerves, convey some information about sound to the brain. The implant helps children become proficient in oral communication (especially if they receive it as preschoolers or even before age 1) (Dettman et al., 2007; Schorr et al., 2005). The latest cochlear implants also can help restore hearing for most adults (though not for those whose adult brain never learned to process sound during childhood). By 2003, some 60,000 people worldwide had cochlear implants, and millions more were potential candidates (Gates & Miyamoto, 2003). The use of cochlear implants is hotly debated. On the one side are the hearing parents of more than 90 percent of all deaf children. Most of these parents want their children to experience their world of sound and talk. If an implant is to be effective, they cannot delay the decision until their child reaches the age of consent. On the other side are Deaf culture advocates, who object to using the implants on children who were deaf prelingually—before developing language. The National Association of the Deaf, for example, argues that deafness is not a disability because native signers are not linguistically disabled. In his 1960 book Sign Language Structure, Gallaudet University linguist William Stokoe showed what even native signers had not fully understood: Sign is a complete language with its own grammar, syntax, and meanings. 
Deaf culture advocates sometimes further contend that deafness could as well be considered “vision enhancement” as “hearing impairment.” People who lose one channel of sensation do seem to compensate with a slight enhancement of their other sensory abilities (Backman & Dixon, 1992; Levy & Langer, 1992). Some examples:

• Blind musicians (think Stevie Wonder) are more likely than sighted ones to develop perfect pitch (Hamilton, 2000).
• With one ear plugged, blind people are also more accurate than sighted people at locating a sound source (Gougoux et al., 2005; Lessard et al., 1998).
• Close your eyes and with your hands indicate the width of a one-dozen egg carton. Blind individuals, report University of Otago researchers, can do this more accurately than sighted people (Smith et al., 2005).


CLOSE-UP

The world’s 500 million people who live with hearing loss are a diverse group (Phonak, 2007). Some are profoundly deaf; others have limited hearing. Some were deaf prelingually; others have known the hearing world. Some sign and identify with the language-based Deaf culture; more, especially those who lost their hearing postlingually, are “oral” and converse with the hearing world by reading lips or reading written notes. Still others move between the two cultures. Despite its many variations, living without hearing poses challenges. When older people with hearing loss must expend effort to hear words, they have less remaining cognitive capacity available to remember and comprehend them (Wingfield et al., 2005). In several studies, people with hearing loss, especially if not wearing hearing aids, have also reported feeling sadder, being less socially engaged, and more often experiencing others’ irritation (Chisolm et al., 2007; Fellinger et al., 2007; National Council on Aging, 1999). Children who grow up around other Deaf people more often identify with Deaf culture and feel positive self-esteem. If raised in a signing household, whether by Deaf or hearing parents, they also express higher self-esteem and feel more accepted (Bat-Chava, 1993, 1994). Separated from a supportive community, Deaf people face many challenges (Braden, 1994). Unable to communicate

in customary ways, speaking and signing playmates may struggle to coordinate their play. And because academic subjects are rooted in spoken languages, signing students’ school achievement may suffer. Adolescents may feel socially excluded, with a resulting low self-confidence. Even adults whose hearing becomes impaired later in life may experience a sort of shyness. “It’s almost universal among the deaf to want to cause hearing people as little fuss as possible,” observed Henry Kisor (1990, p. 244), a Chicago newspaper editor and columnist who lost his hearing at age 3. “We can be self-effacing and diffident to the point of invisibility. Sometimes this tendency can be crippling. I must fight it all the time.” Helen Keller, both blind and deaf, noted, “Blindness cuts people off from things. Deafness cuts people off from people.” I understand. My mother, with whom we communicated by writing notes on an erasable “magic pad,” spent her last dozen years in a silent world, largely withdrawn from the stress and strain of trying to interact with people outside a small circle of family and old friends. With my own hearing declining on a trajectory toward hers, I find myself sitting front and center at plays and meetings, seeking quiet corners in restaurants, and asking my wife to make necessary calls to friends whose accents differ from ours. I do benefit from cool technology that, at the press of a button, can transform my hearing aids into

• People who have been deaf from birth exhibit enhanced attention to their peripheral vision (Bavelier et al., 2006). Their auditory cortex, starved for sensory input, remains largely intact but becomes responsive to touch and to visual input (Emmorey et al., 2003; Finney et al., 2001; Penhune et al., 2003). Close your eyes and immediately you, too, will notice your attention being drawn to your other senses. In one experiment, people who had spent 90 minutes sitting quietly blindfolded became more accurate in their location of sounds (Lewald, 2007). When kissing, lovers minimize distraction and increase their touch sensitivity by closing their eyes.


Living in a Silent World

Signs of success Deaf participants in a spelling bee offer applause to a contestant.

in-the-ear loudspeakers for the broadcast of phone, TV, and public address system sound (see www.hearingloop.org). Yet I still experience frustration when, with or without hearing aids, I can’t hear the joke everyone else is guffawing over; when, after repeated tries, I just can’t catch that exasperated person’s question and can’t fake my way around it; when family members give up and say, “Oh, never mind” after trying three times to tell me something unimportant. As she aged, my mother came to feel that seeking social interaction was simply not worth the effort. However, I share newspaper columnist Kisor’s belief that communication is worth the effort (p. 246): “So, . . . I will grit my teeth and plunge ahead.” To reach out, to connect, to communicate with others, even across a chasm of silence, is to affirm our humanity as social creatures.


Review Hearing

19-1 What are the characteristics of air pressure waves that we hear as sound?
Sound waves are bands of compressed and expanded air. Our ears detect these changes in air pressure and transform them into neural impulses, which the brain decodes as sound. Sound waves vary in frequency, which we experience as differing pitch, and amplitude, which we perceive as differing loudness.

19-2 How does the ear transform sound energy into neural messages?
The outer ear is the visible portion of the ear. The middle ear is the chamber between the eardrum and cochlea. The inner ear consists of the cochlea, semicircular canals, and vestibular sacs. Through a mechanical chain of events, sound waves traveling through the auditory canal cause tiny vibrations in the eardrum. The bones of the middle ear amplify the vibrations and relay them to the fluid-filled cochlea. Rippling of the basilar membrane, caused by pressure changes in the cochlear fluid, causes movement of the tiny hair cells, triggering neural messages to be sent (via the thalamus) to the auditory cortex in the brain.

19-3 What theories help us understand pitch perception?
Place theory proposes that our brain interprets a particular pitch by decoding the place where a sound wave stimulates the cochlea’s basilar membrane. Frequency theory proposes that the brain deciphers the frequency of the pulses traveling to the brain. Place theory explains how we hear high-pitched sounds, but it cannot explain how we hear low-pitched sounds. Frequency theory explains how we hear low-pitched sounds, but it cannot explain how we hear high-pitched sounds. Some combination of the two helps explain how we hear sounds in the middle range.

19-4 How do we locate sounds?
Sound waves strike one ear sooner and more intensely than the other. The brain analyzes the minute differences in the sounds received by the two ears and computes the sound’s source.

19-5 What are the common causes of hearing loss, and why does controversy surround cochlear implants?
Conduction hearing loss results from damage to the mechanical system that transmits sound waves to the cochlea. Sensorineural hearing loss (or nerve deafness) results from damage to the cochlea’s hair cells or their associated nerves. Diseases and accidents can cause hearing loss, but age-related disorders and prolonged exposure to loud noises are more common causes. Artificial cochlear implants can restore hearing for some people, but members of the Deaf culture movement believe cochlear implants are unnecessary for people who have been Deaf from birth and who can speak their own language, sign.

Terms and Concepts to Remember

audition, p. 243; frequency, p. 244; pitch, p. 244; middle ear, p. 244; cochlea [KOHK-lee-uh], p. 244; inner ear, p. 244; place theory, p. 246; frequency theory, p. 247; conduction hearing loss, p. 247; sensorineural hearing loss, p. 248; cochlear implant, p. 248

Test Yourself 1. What are the basic steps in transforming sound waves into perceived sound? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. If you are a hearing person, imagine that you had been born deaf. Do you think you would want to receive a cochlear implant? Does it surprise you that most lifelong Deaf adults do not desire implants for themselves or their children?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 20 Other Senses

Although our brains give seeing and hearing priority in the allocation of cortical tissue, extraordinary happenings occur within our four other senses—our senses of touch, body position and movement, taste, and smell. Sharks and dogs rely on their extraordinary sense of smell, aided by large brain areas devoted to this system. Without our own senses of touch, body position and movement, taste, and smell, we humans would also be seriously handicapped, and our capacities for enjoying the world would be devastatingly diminished.

Touch Pain Taste Smell

Touch

Although not the first sense to come to mind, touch could be our priority sense. Right from the start, touch is essential to our development. Infant rats deprived of their mother’s grooming produce less growth hormone and have a lower metabolic rate—a good way to keep alive until the mother returns, but a reaction that stunts growth if prolonged. Infant monkeys allowed to see, hear, and smell—but not touch—their mother become desperately unhappy; those separated by a screen with holes that allow touching are much less miserable. Premature human babies gain weight faster and go home sooner if they are stimulated by hand massage. As lovers, we yearn to touch—to kiss, to stroke, to snuggle. And even strangers, touching only their forearms and separated by a curtain, can communicate anger, fear, disgust, love, gratitude, and sympathy at well above chance levels (Hertenstein et al., 2006). Humorist Dave Barry may be right to jest that your skin “keeps people from seeing the inside of your body, which is repulsive, and it prevents your organs from falling onto the ground.” But skin does much more. Our “sense of touch” is actually a mix of distinct senses, with different types of specialized nerve endings within the skin. Touching various spots on the skin with a soft hair, a warm or cool wire, and the point of a pin reveals that some spots are especially sensitive to pressure, others to warmth, others to cold, still others to pain. Surprisingly, there is no simple relationship between what we feel at a given spot and the type of specialized nerve ending found there. Only pressure has identifiable receptors. Other skin sensations are variations of the basic four (pressure, warmth, cold, and pain):

• Stroking adjacent pressure spots creates a tickle.
• Repeated gentle stroking of a pain spot creates an itching sensation.
• Touching adjacent cold and pressure spots triggers a sense of wetness, which you can experience by touching dry, cold metal.
• Stimulating nearby cold and warm spots produces the sensation of hot (FIGURE 20.1).


20-1 How do we sense touch and sense our body’s position and movement? How do we experience pain?

The precious sense of touch As William James wrote in his Principles of Psychology (1890), “Touch is both the alpha and omega of affection.”

FIGURE 20.1 Warm + cold = hot When ice-cold water passes through one coil and comfortably warm water through another, we perceive the combined sensation as burning hot.

MODULE 20 Other Senses

vestibular sense the sense of body movement and position, including the sense of balance.


The intricate vestibular sense These Cirque du Soleil performers can thank their inner ears for the information that enables their brains to monitor their bodies’ position so expertly.

FIGURE 20.2 The rubber-hand illusion When Dublin researcher Deirdre Desmond simultaneously touches a volunteer’s real and fake hands, the volunteer feels as though the seen fake hand is her own.

kinesthesis [kin-ehs-THEE-sehs] the system for sensing the position and movement of individual body parts.

Touch sensations involve more than tactile stimulation, however. A self-produced tickle produces less somatosensory cortex activation than the same tickle would from something or someone else (Blakemore et al., 1998). (The brain is wise enough to be most sensitive to unexpected stimulation.) This top-down influence on touch sensation also appears in the rubber-hand illusion. Imagine yourself looking at a realistic rubber hand while your own hand is hidden (FIGURE 20.2). If an experimenter simultaneously touches your fake and real hands, you likely will perceive the rubber hand as your own and sense it being touched. Even just “stroking” the fake hand with a laser light produces, for most people, an illusory sensation of warmth or touch in their unseen real hand (Durgin et al., 2007). Touch is not only a bottom-up property of your senses but also a top-down product of your brain and your expectations. Important sensors in your joints, tendons, bones, and ears, as well as your skin sensors enable your kinesthesis— your sense of the position and movement of your body parts. By closing your eyes or plugging your ears you can momentarily imagine being without sight or sound. But what would it be like to live without touch or kinesthesis— without, therefore, being able to sense the positions of your limbs when you wake during the night? Ian Waterman of Hampshire, England, knows. In 1972, at age 19, Waterman contracted a rare viral infection that destroyed the nerves that enabled his sense of light touch and of body position and movement. People with this condition report feeling disembodied, as though their body is dead, not real, not theirs (Sacks, 1985). With prolonged practice, Waterman has learned to walk and eat—by visually focusing on his limbs and directing them accordingly. But if the lights go out, he crumples to the floor (Azar, 1998). Even for the rest of us, vision interacts with kinesthesis. Stand with your right heel in front of your left toes. Easy. 
Now close your eyes and you will probably wobble. A companion vestibular sense monitors your head’s (and thus your body’s) position and movement. The biological gyroscopes for this sense of equilibrium are in your inner ear. The semicircular canals, which look like a three-dimensional pretzel (FIGURE 20.3), and the vestibular sacs, which connect the canals with the cochlea, contain fluid that moves when your head rotates or tilts. This movement stimulates hairlike receptors, which send messages to the cerebellum at the back of the brain, thus enabling you to sense your body position and to maintain your balance. If you twirl around and then come to an abrupt halt, neither the fluid in your semicircular canals nor your kinesthetic receptors will immediately return to their neutral state. The dizzy aftereffect fools your brain with the sensation that you’re still spinning. This illustrates a principle that underlies perceptual illusions: Mechanisms that normally give us an accurate experience of the world can, under special conditions, fool us. Understanding how we get fooled provides clues to how our perceptual system works.

Pain

Be thankful for occasional pain. Pain is your body’s way of telling you something has gone wrong. Drawing your attention to a burn, a break, or a sprain, pain orders you to change your behavior—“Stay off that turned ankle!” The rare people born without the ability to feel pain may experience severe injury or even die before early adulthood. Without the discomfort that makes us occasionally shift position, their joints fail from excess strain, and without the warnings of pain, the effects of unchecked infections and injuries accumulate (Neese, 1991). More numerous are those who live with chronic pain, which is rather like an alarm that won’t shut off. The suffering of such people, and of those with persistent or recurring backaches, arthritis, headaches, and cancer-related pain, prompts two questions: What is pain? How might we control it?


A pain-free, problematic life Ashlyn Blocker (right), shown here with her mother and sister, has a rare genetic disorder. She feels neither pain nor extreme hot and cold. She must frequently be checked for accidentally self-inflicted injuries that she herself cannot feel. “Some people would say [that feeling no pain is] a good thing,” says her mother. “But no, it’s not. Pain’s there for a reason. It lets your body know something’s wrong and it needs to be fixed. I’d give anything for her to feel pain” (quoted by Bynum, 2004).

FIGURE 20.3 Semicircular canals in the inner ear


MODULE 20 Other Senses

Understanding Pain


Our pain experiences vary widely, depending on our physiology, our experiences and attention, and our surrounding culture (Gatchel et al., 2007). Thus, our feelings of pain combine both bottom-up sensations and top-down processes.

Biological Influences

Playing with pain In a 2008 NBA championship series game, Boston Celtic star Paul Pierce screamed in pain after an opposing player stepped on his right foot, causing his knee to twist and pop. After being carried off the court, he came back and played through the pain, which reclaimed his attention after the game’s end.

“When belly with bad pains doth swell, It matters naught what else goes well.” Sadi, The Gulistan, 1258

The pain system, unlike vision, is not located in a simple neural cord running from a sensing device to a definable area in the brain. Moreover, there is no one type of stimulus that triggers pain (as light triggers vision). Instead, there are different nociceptors—sensory receptors that detect hurtful temperatures, pressure, or chemicals (FIGURE 20.4). Although no theory of pain explains all available findings, Ronald Melzack and biologist Patrick Wall’s (1965, 1983) classic gate-control theory provides a useful model. The spinal cord contains small nerve fibers that conduct most pain signals, and larger fibers that conduct most other sensory signals. Melzack and Wall theorized that the spinal cord contains a neurological “gate.” When tissue is injured, the small fibers activate and open the gate, and you feel pain. Large-fiber activity closes the gate, blocking pain signals and preventing them from reaching the brain. Thus, one way to treat chronic pain is to stimulate (by massage, by electric stimulation, or by acupuncture) “gate-closing” activity in the large neural fibers (Wall, 2000). Rubbing the area around your stubbed toe will create competing stimulation that will block some pain messages.
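The gate-control logic described above can be caricatured in a few lines of code. The Python sketch below is only a qualitative illustration of the theory's claim; the function name and numeric activity levels are invented for the example, and it is in no way a physiological model:

```python
def pain_signal(small_fiber, large_fiber, brain_downward=0.0):
    # The "gate" passes a pain signal only to the extent that small-fiber
    # (pain) activity exceeds the combined gate-closing signals from
    # large fibers and from brain-to-spinal-cord messages.
    return max(0.0, small_fiber - large_fiber - brain_downward)

print(pain_signal(0.8, 0.1))        # injury alone: strong signal reaches the brain
print(pain_signal(0.8, 0.5))        # rubbing the area boosts large-fiber activity
print(pain_signal(0.8, 0.1, 0.6))   # distraction/endorphins close the gate from above
```

The point of the sketch is simply that the same injury (the same small-fiber input) can produce very different pain signals depending on competing activity, which is why rubbing a stubbed toe helps.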

FIGURE 20.4 The pain circuit Sensory receptors (nociceptors) respond to potentially damaging stimuli by sending an impulse to the spinal cord, which passes the message to the brain, which interprets the signal as pain. (Figure labels: Tissue injury; Cell body of nociceptor; Pain impulse; Nerve cell; Cross-section of the spinal cord; Projection to brain.)


But pain is not merely a physical phenomenon of injured nerves sending impulses to the brain—like pulling on a rope to ring a bell. Melzack and Wall noted that brain-to-spinal-cord messages can also close the gate, helping to explain some striking influences on pain. When we are distracted from pain (a psychological influence) and soothed by the release of endorphins, our natural painkillers (a biological influence), our experience of pain may be greatly diminished. Sports injuries may go unnoticed until the after-game shower. People who carry a gene that boosts the availability of endorphins are less bothered by pain, and their brain is less responsive to pain (Zubieta et al., 2003). Others, who carry a mutated gene that disrupts pain circuit neurotransmission, may be unable to experience pain (Cox et al., 2006). Such discoveries may point the way toward new pain medications that mimic these genetic effects. The brain can also create pain, as it does in people’s experiences of phantom limb sensations, when it misinterprets the spontaneous central nervous system activity that occurs in the absence of normal sensory input. As the dreamer may see with eyes closed, so some 7 in 10 amputees may feel pain or movement in nonexistent limbs, notes psychologist Melzack (1992, 2005). (An amputee may also try to step off a bed onto a phantom limb or to lift a cup with a phantom hand.) Even those born without a limb sometimes perceive sensations from the absent arm or leg. The brain, Melzack (1998) surmises, comes prepared to anticipate “that it will be getting information from a body that has limbs.” A similar phenomenon occurs with other senses. People with hearing loss often experience the sound of silence: phantom sounds—a ringing-in-the-ears sensation known as tinnitus. Those who lose vision to glaucoma, cataracts, diabetes, or macular degeneration may experience phantom sights—nonthreatening hallucinations (Ramachandran & Blakeslee, 1998).
Some with nerve damage have had taste phantoms, such as ice water seeming sickeningly sweet (Goode, 1999). Others have experienced phantom smells, such as nonexistent rotten food. The point to remember: We feel, see, hear, taste, and smell with our brain, which can sense even without functioning senses.

Psychological Influences

The psychological effects of distraction are clear in the stories of athletes who, focused on winning, play through the pain. Carrie Armel and Vilayanur Ramachandran (2003) cleverly illustrated psychological influences on pain with another version of the rubber-hand illusion. They bent a finger slightly backward on the unseen hands of 16 volunteers, while simultaneously “hurting” (severely bending) a finger on a visible fake rubber hand. The volunteers felt as if their real finger were being bent, and they responded with increased skin perspiration. We also seem to edit our memories of pain, which often differ from the pain we actually experienced. In experiments, and after medical procedures, people overlook a pain’s duration. Their memory snapshots instead record two factors: First, people tend to record pain’s peak moment, which can lead them to recall variable pain, with peaks, as worse (Stone et al., 2005). Second, they register how much pain they felt at the end, as Daniel Kahneman and his co-researchers (1993) discovered when they asked people to immerse one hand in painfully cold water for 60 seconds, and then the other hand in the same painfully cold water for 60 seconds followed by a slightly less painful 30 seconds more. Which of these experiences would you expect to recall as most painful? Curiously, when asked which trial they would prefer to repeat, most preferred the longer trial, with more net pain—but less pain at the end. A physician used this principle with patients undergoing colon exams—lengthening the discomfort by a minute, but lessening its intensity (Kahneman, 1999). Although the extended milder discomfort added to their net pain experience, patients experiencing this taper-down treatment later recalled the exam as less painful than did those whose pain ended abruptly.
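The peak-and-end pattern in Kahneman's cold-water experiment can be sketched with simple arithmetic. In the Python snippet below, the numeric pain ratings and the simple "average of peak and final moment" memory rule are assumptions chosen for illustration, not data or a formal model from the study:

```python
def peak_end_memory(ratings):
    # Predicted remembered pain: the average of the worst moment
    # and the final moment, ignoring total duration.
    return (max(ratings) + ratings[-1]) / 2

short_trial = [6, 7, 7]          # 60 seconds of painfully cold water (hypothetical ratings)
long_trial  = [6, 7, 7, 4, 3]    # the same 60 seconds plus 30 milder seconds

print(sum(short_trial), peak_end_memory(short_trial))
print(sum(long_trial), peak_end_memory(long_trial))
```

Under this toy rule the longer trial accumulates more total pain yet is predicted to be remembered as less painful, matching participants' preference for repeating the longer trial.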

gate-control theory the theory that the spinal cord contains a neurological “gate” that blocks pain signals or allows them to pass on to the brain. The “gate” is opened by the activity of pain signals traveling up small nerve fibers and is closed by activity in larger fibers or by information coming from the brain.


Social-Cultural Influences

FIGURE 20.5 Biopsychosocial approach to pain Our experience of pain is much more than neural messages sent to the brain. (Figure labels: Biological influences: activity in spinal cord’s large and small fibers; genetic differences in endorphin production; the brain’s interpretation of CNS activity. Psychological influences: attention to pain; learning based on experience; expectations. Social-cultural influences: presence of others; empathy for others’ pain; cultural expectations. All three feed into the personal experience of pain.)


Seeking relief This acupuncturist is attempting to help this woman gain relief from back pain by using needles on points of the patient’s hand.

Our perception of pain also varies with our social situation and our cultural traditions. We tend to perceive more pain when others also seem to be experiencing pain (Symbaluk et al., 1997). This may help explain other apparent social aspects of pain, as when pockets of Australian keyboard operators during the mid-1980s suffered outbreaks of severe pain during typing or other repetitive work—without any discernible physical abnormalities (Gawande, 1998). Sometimes the pain in sprain is mainly in the brain—literally. When feeling empathy for another’s pain, a person’s own brain activity may partly mirror that of the other’s brain in pain (Singer et al., 2004). Thus, our perception of pain is a biopsychosocial phenomenon (FIGURE 20.5). Viewing pain this way can help us better understand how to cope with pain and treat it.

Controlling Pain

If pain is where body meets mind—if it is both a physical and a psychological phenomenon—then it should be treatable both physically and psychologically. Depending on the type of symptoms, pain control clinics select one or more therapies from a list that includes drugs, surgery, acupuncture, electrical stimulation, massage, exercise, hypnosis, relaxation training, and thought distraction. Even an inert placebo can help, by dampening the brain’s attention and responses to painful experiences—mimicking analgesic drugs (Wager, 2005). After being injected in the jaw with a stinging saltwater solution, men in one experiment were given a placebo that was said to relieve pain. They immediately felt better, a result associated with activity in a brain area that releases natural pain-killing opiates (Scott et al., 2007; Zubieta et al., 2005). Being given fake pain-killing chemicals caused the brain to dispense real ones. “Believing becomes reality,” noted one commentator (Thernstrom, 2006), as “the mind unites with the body.” Another experiment pitted two placebos—fake pills and pretend acupuncture—against each other (Kaptchuk et al., 2006). People with persistent arm pain (270 of them) received either sham acupuncture (with trick needles that retracted without puncturing the skin) or blue cornstarch pills that looked like pills often prescribed for strain injury. A fourth of those receiving the nonexistent needle pricks and 31 percent of those receiving the pills complained of side effects, such as painful skin or dry mouth and fatigue. After two months, both groups were reporting less pain, with the fake acupuncture group reporting the greater pain drop. Distracting people with pleasant images (“Think of a warm, comfortable environment”) or drawing their attention away from the painful stimulation (“Count backward by 3’s”) is an especially effective way to increase pain tolerance (Fernandez & Turk, 1989; McCaul & Malott, 1984).
A well-trained nurse may distract needle-shy patients by chatting with them and asking them to look away when inserting the needle. For burn victims receiving excruciating wound care, an even more effective distraction comes from immersion in a computer-generated 3-D world, like the snow scene in FIGURE 20.6. Functional MRI (fMRI) scans reveal that playing in the virtual reality reduces the brain’s pain-related activity (Hoffman, 2004). Because pain is in the brain, diverting the brain’s attention may bring relief.

(FIGURE 20.6 panels: No distraction / Distraction.)

Taste

20-2 How do we experience taste?

Like touch, our sense of taste involves several basic sensations. Taste’s sensations were once thought to be sweet, sour, salty, and bitter, with all others stemming from mixtures of these four (McBurney & Gent, 1979). Then, as investigators searched for specialized nerve fibers for the four taste sensations, they encountered a receptor for what we now know is a fifth—the savory meaty taste of umami, best experienced as the flavor enhancer monosodium glutamate. Tastes exist for more than our pleasure (see TABLE 20.1). Pleasureful tastes attracted our ancestors to energy- or protein-rich foods that enabled their survival. Aversive tastes deterred them from new foods that might be toxic. We see the inheritance of this biological wisdom in today’s 2- to 6-year-olds, who are typically fussy eaters, especially when offered new meats or bitter-tasting vegetables, such as spinach and Brussels sprouts (Cooke et al., 2003). Meat and plant toxins were both potentially dangerous sources of food poisoning for our ancestors, especially for children. Given repeated small tastes of disliked new foods, children will, however, typically begin to accept them (Wardle et al., 2003). Taste is a chemical sense. Inside each little bump on the top and sides of your tongue are 200 or more taste buds, each containing a pore that catches food chemicals. Into each taste bud pore, 50 to 100 taste receptor cells project antennalike hairs that sense food molecules. Some receptors respond mostly to sweet-tasting molecules, others to salty-, sour-, umami-, or bitter-tasting ones. It doesn’t take much to trigger a response that alerts your brain’s temporal lobe. If a stream of water is pumped across your tongue, the addition of a concentrated salty or sweet taste for but one-tenth of a second will get your attention (Kelling & Halpern, 1983).
When a friend asks for “just a taste” of your soft drink, you can squeeze off the straw after a mere fraction of a second. Taste receptors reproduce themselves every week or two, so if you burn your tongue with hot food it hardly matters. However, as you grow older, the number of taste buds decreases, as does taste sensitivity (Cowart, 1981). (No wonder adults enjoy strong-tasting foods that children resist.) Smoking and alcohol use accelerate these declines.

FIGURE 20.6 Virtual-reality pain control For burn victims undergoing painful skin repair, an escape into virtual reality can powerfully distract attention, thus reducing pain and the brain’s response to painful stimulation. The MRI scans above illustrate a lowered pain response when the patient is distracted.

“Pain is increased by attending to it.” Charles Darwin, Expression of Emotions in Man and Animals, 1872

TABLE 20.1 The Survival Functions of Basic Tastes

Taste   | Indicates
Sweet   | Energy source
Salty   | Sodium essential to physiological processes
Sour    | Potentially toxic acid
Bitter  | Potential poisons
Umami   | Proteins to grow and repair tissue

(Adapted from Cowart, 2005.)


sensory interaction the principle that one sense may influence another, as when the smell of food influences its taste.


Those who lose their sense of taste report that food tastes like “straw” and is hard to swallow (Cowart, 2005). Essential as taste buds are, there’s more to taste than meets the tongue. As with other senses, your expectations influence your brain’s response. When people are forewarned that an unpleasant taste is coming, their brain responds more actively to negative tastes, which they rate as very unpleasant. When led to believe that the same taste will be merely mildly unpleasant, the brain region that responds to aversive tastes is less active, and the participants rate the taste as less unpleasant (Nitschke et al., 2006). Likewise, being told that a wine costs $90 rather than its real $10 price makes an inexpensive wine taste better and triggers more activity in a brain area that responds to pleasant experiences (Plassmann et al., 2008). As happens with the pain placebo effect, the brain’s thinking frontal lobes offer information that other brain regions act upon.

Sensory Interaction


FIGURE 20.7 Sensory interaction When a hard-of-hearing listener sees an animated face forming the words being spoken at the other end of a phone line, the words become easier to understand (Knight, 2004).

Taste also illustrates another curious phenomenon. Hold your nose, close your eyes, and have someone feed you various foods. A slice of apple may be indistinguishable from a chunk of raw potato; a piece of steak may taste like cardboard; without their smells, a cup of cold coffee may be hard to distinguish from a glass of red wine. To savor a taste, we normally breathe the aroma through our nose—which is why eating is not much fun when you have a bad cold. Smell can also change our perception of taste: A drink’s strawberry odor enhances our perception of its sweetness. This is sensory interaction at work—the principle that one sense may influence another. Smell plus texture plus taste equals flavor. Sensory interaction similarly influences what we hear. If I (as a person with hearing loss) watch a video with simultaneous captioning, I have no trouble hearing the words I am seeing (and may therefore think I don’t need the captioning). If I then turn off the captioning, I suddenly realize I need it (FIGURE 20.7). But what do you suppose happens if we see a speaker saying one syllable while hearing another? Surprise: We may perceive a third syllable that blends both inputs. Seeing the mouth movements for ga while hearing ba we may perceive da—a phenomenon known as the McGurk effect, after its discoverers, psychologist Harry McGurk and his assistant John MacDonald (1976). Much the same is true with vision and touch. A weak flicker of light that we have trouble perceiving becomes more visible when accompanied by a short burst of sound (Kayser, 2007). In detecting events, the brain can combine simultaneous visual and touch signals, thanks to neurons projecting from the somatosensory cortex back to the visual cortex (Macaluso et al., 2000). So, the senses interact: Seeing, hearing, touching, tasting, and smelling are not totally separate channels. In interpreting the world, the brain blends their inputs. 
In a few select individuals, the senses become joined in a phenomenon called synaesthesia, where one sort of sensation (such as hearing sound) produces another (such as seeing color). Thus, hearing music or seeing a specific number may activate color-sensitive cortex regions and trigger a sensation of color (Brang et al., 2008; Hubbard et al., 2005). Seeing the number 3 may evoke a taste sensation (Ward, 2003). For many people, an odor, perhaps of mint or chocolate, may evoke a sensation of taste (Stevenson & Tomiczek, 2007).


Smell

20-3 How do we experience smell?

|| Impress your friends with your new word for the day: People unable to see are said to experience blindness. People unable to hear experience deafness. People unable to smell experience anosmia. ||

FIGURE 20.8 The sense of smell If you are to smell a flower, airborne molecules of its fragrance must reach receptors at the top of your nose. Sniffing swirls air up to the receptors, enhancing the aroma. The receptor cells send messages to the brain’s olfactory bulb, and then onward to the temporal lobe’s primary smell cortex and to the parts of the limbic system involved in memory and emotion.

(FIGURE 20.8 labels: 1. Odorants bind to receptors (air with odorant molecules; odorant receptor; odor molecules). 2. Olfactory receptor cells are activated and send electric signals (receptor cells in olfactory membrane; bone). 3. The signals are relayed via converged axons (olfactory nerve; olfactory bulb). 4. The signals are transmitted to higher regions of the brain.)

“There could be a stack of truck tires burning in the living room, and I wouldn’t necessarily smell it. Whereas my wife can detect a lone spoiled grape two houses away.” Dave Barry, 2005

|| Humans have 10 to 20 million olfactory receptors. A bloodhound has some 200 million (Herz, 2001). ||

“The smell and taste of things bears unfaltering, in the tiny and almost impalpable drop of their essence, the vast structure of recollection.” French novelist Marcel Proust, in Remembrance of Things Past (1913), describing how the aroma and flavor of a morsel of cake soaked in tea resurrected long-forgotten memories of the old family house.

Inhale, exhale. Inhale, exhale. Breaths come in pairs—except at two moments: birth and death. Between those two moments, you will daily inhale and exhale nearly 20,000 breaths of life-sustaining air, bathing your nostrils in a stream of scent-laden molecules. The resulting experiences of smell (olfaction) are strikingly intimate: You inhale something of whatever or whoever it is you smell. Like taste, smell is a chemical sense. We smell something when molecules of a substance carried in the air reach a tiny cluster of 5 million or more receptor cells at the top of each nasal cavity (FIGURE 20.8). These olfactory receptor cells, waving like sea anemones on a reef, respond selectively—to the aroma of a cake baking, to a wisp of smoke, to a friend’s fragrance. Instantly, they alert the brain through their axon fibers. Even nursing infants and their mothers have a literal chemistry to their relationship. They quickly learn to recognize each other’s scents (McCarthy, 1986). Aided by smell, a mother fur seal returning to a beach crowded with pups will find her own. Our own sense of smell is less impressive than the acuteness of our seeing and hearing. Looking out across a garden, we see its forms and colors in exquisite detail and hear a variety of birds singing, yet we smell little of it without sticking our nose into the blossoms. Odor molecules come in many shapes and sizes—so many, in fact, that it takes many different receptors to detect them. A large family of genes designs the 350 or so receptor proteins that recognize particular odor molecules (Miller, 2004). Richard Axel and Linda Buck (1991) discovered (in work for which they received a 2004 Nobel prize) that these receptor proteins are embedded on the surface of nasal cavity neurons. As a key slips into a lock, so odor molecules slip into these receptors. Yet we don’t seem to have a distinct receptor for each detectable odor. This suggests that some odors trigger a combination of receptors, in patterns that are interpreted by the olfactory cortex. As the English alphabet’s 26 letters can combine to form many words, so odor molecules bind to different receptor arrays, producing the 10,000 odors we can detect (Malnic et al., 1999). It is the combinations of olfactory receptors, which activate different neuron patterns, that allow us to distinguish between the aromas of fresh-brewed and hours-old coffee (Zou et al., 2005). The ability to identify scents peaks in early adulthood and gradually declines thereafter, with women’s smelling ability tending to surpass men’s (FIGURE 20.9). Despite our skill at discriminating scents, we aren’t very good at describing them. Words more readily portray the sound of coffee brewing than its aroma. Compared with how we experience and remember sights and sounds, smells are almost primitive and certainly harder to describe and recall (Richardson & Zucco, 1989; Zucco, 2003). As any dog or cat with a good nose could tell us, we each have our own identifiable chemical signature. (One noteworthy exception: A dog will follow the tracks of one identical twin as though they had been made by the other [Thomas, 1974].) Animals that have many times more olfactory receptors than we do also use their sense of smell to communicate and to navigate. Long before the shark can see its prey, or the moth its mate, odors direct their way. Migrating salmon follow faint olfactory cues back to their home stream.
If exposed in a hatchery to one of two odorant chemicals, they will, when returning two years later, seek whichever stream near their release site is spiked with the familiar smell (Barinaga, 1999). For humans, too, the attractiveness of smells depends on learned associations (Herz, 2001). Babies are not born with a built-in preference for the smell of their mother’s breast; as they nurse, their preference builds. After a good experience becomes associated with a particular scent, people come to like that scent, which helps explain why people in the United States tend to like the smell of wintergreen (which they associate with candy and gum) more than do those in Great Britain (where it often is associated with medicine). In another example of odors evoking unpleasant emotions, Rachel Herz and her colleagues (2004) frustrated Brown University students with a rigged computer game in a scented room. Later, if exposed to the same odor while working on a verbal task, the students’ frustration was rekindled and they gave up sooner than others exposed to a different odor or no odor.
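A little combinatorics makes the letters-into-words analogy for odor coding concrete. In the illustrative Python snippet below, treating each of the roughly 350 receptor types as simply active or inactive in a fixed-size pattern is a simplifying assumption (real olfactory coding is graded and far more complex), but it shows how combinations vastly outnumber receptor types:

```python
from math import comb

receptor_types = 350   # roughly 350 odor receptor proteins, per the text

# Distinct patterns available if an odor activated exactly 3 of the
# 350 receptor types (an all-or-none, fixed-size simplification):
patterns = comb(receptor_types, 3)
print(patterns)   # far more than the ~10,000 detectable odors
```

Even this crude three-receptor scheme yields millions of possible patterns, so combinatorial coding easily covers the roughly 10,000 odors humans can detect.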

FIGURE 20.9 Age, sex, and sense of smell Among the 1.2 million people who responded to a National Geographic scratch-and-sniff survey, women and younger adults most successfully identified six sample odors (from Wysocki & Gilbert, 1989). Smokers and people with Alzheimer’s, Parkinson’s, or alcohol dependence typically experience a diminished sense of smell (Doty, 2001). (Graph: number of correct answers by age group, 10–19 through 90–99, plotted separately for women and men; women and young adults have the best sense of smell.)


Though it’s difficult to recall odors by name, we have a remarkable capacity to recognize long-forgotten odors and their associated memories (Engen, 1987; Schab, 1991). The smell of the sea, the scent of a perfume, or an aroma of a favorite relative’s kitchen can bring to mind a happy time. It’s a phenomenon understood by the British travel agent chain Lunn Poly. To evoke memories of lounging on sunny, warm beaches, the company once piped the aroma of coconut suntan oil into its shops (Fracassini, 2000). Our brain’s circuitry helps explain this power to evoke feelings and memories (FIGURE 20.10). A hotline runs between the brain area receiving information from the nose and the brain’s ancient limbic centers associated with memory and emotion. Smell is primitive. Eons before the elaborate analytical areas of our cerebral cortex had fully evolved, our mammalian ancestors sniffed for food—and for predators.

(FIGURE 20.10 labels: processes smell, near memory area; processes taste.)

FIGURE 20.10 The olfactory brain Information from the taste buds (yellow arrow) travels to an area of the temporal lobe not far from where the brain receives olfactory information, which interacts with taste. The brain’s circuitry for smell (red arrow) also connects with areas involved in memory storage, which helps explain why a smell can trigger a memory explosion.

Review Other Senses

20-1 How do we sense touch and sense our body’s position and movement? How do we experience pain?

Our sense of touch is actually several senses—pressure, warmth, cold, and pain—that combine to produce other sensations, such as “hot.” Through kinesthesis, we sense the position and movement of body parts. We monitor the body’s position and maintain our balance with our vestibular sense. Pain is an alarm system that draws our attention to some physical problem. One theory of pain is that a “gate” in the spinal cord either opens to permit pain signals traveling up small nerve fibers to reach the brain, or closes to prevent their passage. The biopsychosocial approach views pain as the sum of three sets of forces: biological influences, such as nerve fibers sending messages to the brain; psychological influences, such as our expectations; and social-cultural influences, such as the presence of others. Treatments to control pain often combine physiological and psychological elements.

20-2 How do we experience taste?

Taste, a chemical sense, is a composite of five basic sensations—sweet, sour, salty, bitter, and umami—and of the aromas that interact with information from the taste receptor cells of the taste buds. The influence of smell on our sense of taste is an example of sensory interaction, the ability of one sense to influence another.

20-3 How do we experience smell?

There are no basic sensations for smell. Smell is a chemical sense. Some 5 million olfactory receptor cells, with their approximately 350 different receptor proteins, recognize individual odor molecules. The receptor cells send messages to the brain’s olfactory bulb, then to the temporal lobe and to parts of the limbic system. Odors can spontaneously evoke memories and feelings, due in part to the close connections between brain areas that process smell and memory.

Terms and Concepts to Remember
kinesthesis [kin-ehs-THEE-sehs]
vestibular sense
gate-control theory
sensory interaction

Test Yourself
1. What are the similarities among our senses of touch (including the vestibular sense and kinesthesis), taste, and smell? What are the differences?
(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself
1. Can you recall a time when, with your attention focused on some activity, you felt no pain from a wound or injury?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Form Perception
Depth Perception
Motion Perception
Perceptual Constancy

FIGURE 21.1 A Necker cube What do you see: circles with white lines, or a cube? If you stare at the cube, you may notice that it reverses location, moving the tiny X in the center from the front edge to the back. At times, the cube may seem to float in front of the page, with circles behind it; other times the circles may become holes in the page through which the cube appears, as though it were floating behind the page. There is far more to perception than meets the eye. (From Bradley et al., 1976.)

module 21 Perceptual Organization

21-1 How did the Gestalt psychologists understand perceptual organization?

Barring a disability, we all sense sights and sounds, touch and movement, tastes and smells. But how do we perceive? How do we see not just shapes and colors, but a rose in bloom, a loved one’s face, a beautiful sunset? How do we hear not just a mix of pitches and rhythms, but a child’s cry of pain, the hum of distant traffic, a symphony? In short, how do we organize and interpret our sensations so that they become meaningful perceptions? Early in the twentieth century, a group of German psychologists noticed that when given a cluster of sensations, people tend to organize them into a gestalt, a German word meaning a “form” or a “whole.” For example, look at the Necker cube in FIGURE 21.1. Note that the individual elements of the figure are really nothing but eight blue circles, each containing three converging white lines. When we view them all together, however, we see a whole, a cube. The Gestalt psychologists, who had wide-ranging interests, were fond of saying that in perception the whole may exceed the sum of its parts. Combine sodium, a corrosive metal, with chlorine, a poisonous gas, and something very different emerges—table salt. Likewise, a unique perceived form emerges from a stimulus’ components (Rock & Palmer, 1990). Over the years, the Gestalt psychologists provided compelling demonstrations and described principles by which we organize our sensations into perceptions. As you read further about these principles, keep in mind the fundamental truth they illustrate: Our brain does more than register information about the world. Perception is not just opening a shutter and letting a picture print itself on the brain. We constantly filter sensory information and infer perceptions in ways that make sense to us. Mind matters.

Form Perception

gestalt an organized whole. Gestalt psychologists emphasized our tendency to integrate pieces of information into meaningful wholes.

figure-ground the organization of the visual field into objects (the figures) that stand out from their surroundings (the ground).

grouping the perceptual tendency to organize stimuli into coherent groups.


21-2 How do figure-ground and grouping principles contribute to our perceptions?

Imagine designing a video/computer system that, like your eye/brain system, can recognize faces at a glance. What abilities would it need?

Figure and Ground

To start with, the system would need to recognize faces as distinct from their backgrounds. Likewise, our first perceptual task is to perceive any object (the figure) as distinct from its surroundings (the ground). Among the voices you hear at a party, the one you attend to becomes the figure; all others, part of the ground. As you read, the words are the figure; the white paper, the ground. In FIGURE 21.2, the figure-ground relationship continually reverses—but always we organize the stimulus into a figure seen against a ground. Such reversible figure-and-ground illustrations demonstrate again that the same stimulus can trigger more than one perception.

FIGURE 21.2 Reversible figure and ground (Time Saving Suggestion, © 2003 Roger Shepard.)

Grouping

Having discriminated figure from ground, we (and our video/computer system) now have to organize the figure into a meaningful form. Some basic features of a scene—such as color, movement, and light/dark contrast—we process instantly and automatically (Treisman, 1987). To bring order and form to these basic sensations, our minds follow certain rules for grouping stimuli together. These rules, identified by the Gestalt psychologists and applied even by infants, illustrate the idea that the perceived whole differs from the sum of its parts (Quinn et al., 2002; Rock & Palmer, 1990):

Proximity We group nearby figures together. In Figure 21.3, we see three sets of two lines, not six separate lines.

Similarity We group similar figures together. We see the triangles and circles as vertical columns of similar shapes, not as horizontal rows of dissimilar shapes.

Continuity We perceive smooth, continuous patterns rather than discontinuous ones. The pattern in the lower-left corner of Figure 21.3 could be a series of alternating semicircles, but we perceive it as two continuous lines—one wavy, one straight.

Connectedness Because they are uniform and linked, we perceive each set of two dots and the line between them as a single unit.

Closure We fill in gaps to create a complete, whole object. Thus we assume that the circles in Figure 21.3 are complete but partially blocked by the (illusory) triangle. Add nothing more than little line segments that close off the circles and now your brain stops constructing a triangle.

FIGURE 21.3 Organizing stimuli into groups We could perceive the stimuli shown here in many ways, yet people everywhere see them similarly. The Gestalt psychologists believed this shows that the brain follows rules to order sensory information into wholes.

depth perception the ability to see objects in three dimensions, although the images that strike the retina are two-dimensional; allows us to judge distance.

visual cliff a laboratory device for testing depth perception in infants and young animals.

binocular cues depth cues, such as retinal disparity, that depend on the use of two eyes.

retinal disparity a binocular cue for perceiving depth: By comparing images from the retinas in the two eyes, the brain computes distance—the greater the disparity (difference) between the two images, the closer the object.

monocular cues depth cues, such as interposition and linear perspective, available to either eye alone.

FIGURE 21.4 Grouping principles What's the secret to this impossible doghouse? You probably perceive this doghouse as a gestalt—a whole (though impossible) structure. Actually, your brain imposes this sense of wholeness on the picture. As Figure 21.9 shows, Gestalt grouping principles such as closure and continuity are at work here. (Photo by Walter Wick. Reprinted from GAMES Magazine. © 1983 PCS Games Limited Partnership.)

Such principles usually help us construct reality. Sometimes, however, they lead us astray, as when we look at the doghouse in FIGURE 21.4.

Depth Perception

21-3 How do we see the world in three dimensions?


FIGURE 21.5 Visual cliff Eleanor Gibson and Richard Walk devised this miniature cliff with a glass-covered drop-off to determine whether crawling infants and newborn animals can perceive depth. Even when coaxed, infants are reluctant to venture onto the glass over the cliff.

From the two-dimensional images falling on our retinas, we somehow organize three-dimensional perceptions. Depth perception, seeing objects in three dimensions, enables us to estimate their distance from us. At a glance, we estimate the distance of an oncoming car or the height of a house.

This ability is partly innate. Eleanor Gibson and Richard Walk (1960) discovered this using a miniature cliff with a drop-off covered by sturdy glass. Gibson's inspiration for these experiments occurred while she was picnicking on the rim of the Grand Canyon. She wondered: Would a toddler peering over the rim perceive the dangerous drop-off and draw back?

Back in their Cornell University laboratory, Gibson and Walk placed 6- to 14-month-old infants on the edge of a safe canyon—a visual cliff (FIGURE 21.5). When the infants' mothers then coaxed them to crawl out onto the glass, most refused to do so, indicating that they could perceive depth.

Crawling infants come to the lab after lots of learning. Yet newborn animals with virtually no visual experience—including young kittens, a day-old goat, and newly hatched chicks—respond similarly. To Gibson and Walk, this suggested that mobile newborn animals come prepared to perceive depth. Each species, by the time it is mobile, has the perceptual abilities it needs. But if biological maturation predisposes our wariness of heights, experience amplifies it. Infants' wariness increases with their experiences of crawling, no matter when they begin to crawl (Campos et al., 1992). And judging from what they will reach for, 7-month-olds use the cast shadow of a toy to perceive its distance, while 5-month-olds don't (Yonas & Granrud, 2006). This suggests that in human infants, depth perception grows with age.

How do we do it? How do we transform two differing two-dimensional retinal images into a single three-dimensional perception?
The process begins with depth cues, some that depend on the use of two eyes, and others that are available to each eye separately.



Binocular Cues

Try this: With both eyes open, hold two pens or pencils in front of you and touch their tips together. Now do so with one eye closed. With one eye, the task becomes noticeably more difficult, demonstrating the importance of binocular cues in judging the distance of nearby objects.

Two eyes are better than one. Because our eyes are about 2½ inches apart, our retinas receive slightly different images of the world. When the brain compares these two images, the difference between them—their retinal disparity—provides one important binocular cue to the relative distance of different objects. When you hold your fingers directly in front of your nose, your retinas receive quite different views. (You can see this if you close one eye and then the other, or create a finger sausage as in FIGURE 21.6.) At a greater distance—say, when you hold your fingers at arm's length—the disparity is smaller.

The creators of three-dimensional (3-D) movies simulate or exaggerate retinal disparity by photographing a scene with two cameras placed a few inches apart (a feature we might want to build into our seeing computer). When we view the movie through spectacles that allow the left eye to see the image from the left camera and the right eye the image from the right camera, the 3-D effect mimics or exaggerates normal retinal disparity. Similarly, twin cameras in airplanes can take photos of terrain for integration into 3-D maps.

FIGURE 21.6 The floating finger sausage Hold your two index fingers about 5 inches in front of your eyes, with their tips a half-inch apart. Now look beyond them and note the weird result. Move your fingers out farther and the retinal disparity—and the finger sausage—will shrink.

Monocular Cues


How do we judge whether a person is 10 or 100 meters away? In both cases, retinal disparity while looking straight ahead is slight. At such distances, we depend on monocular cues (available to each eye separately).

Monocular cues also influence our everyday perceptions. Is the St. Louis Gateway Arch (FIGURE 21.7)—the world's largest human-made illusion—taller than it is wide? Or wider than it is tall? To most of us, it appears taller. Actually, its height and width are equal. Relative height is a possible contributor to this unexplained horizontal-vertical illusion—our perceiving vertical dimensions as longer than identical horizontal dimensions. No wonder people (even experienced bartenders) pour less juice when given a tall, thin glass rather than a short, wide glass (Wansink & van Ittersum, 2003, 2005).

FIGURE 21.7 The St. Louis Gateway Arch Which is greater: its height or its width?

Another monocular depth cue, the light-and-shadow effect, may have contributed to several accidents when the steps of our new campus fieldhouse were unfortunately painted black on the step's edge (making it seem farther away) and bright silver on the flat surface of the step below (making it seem closer). The seeming result was the misperception of no step-down, and (for some) sprained ankles and backs. FIGURE 21.8 illustrates relative height, light and shadow, and other monocular cues.

FIGURE 21.8 Monocular depth cues

Relative size If we assume two objects are similar in size, most people perceive the one that casts the smaller retinal image as farther away.

Relative height We perceive objects higher in our field of vision as farther away. Because we perceive the lower part of a figure-ground illustration as closer, we perceive it as figure (Vecera et al., 2002). Invert the illustration and the black becomes ground, like a night sky.

Light and shadow Nearby objects reflect more light to our eyes. Thus, given two identical objects, the dimmer one seems farther away. Shading, too, produces a sense of depth consistent with our assumption that light comes from above. Invert the illustration and the hollow in the bottom row becomes a hill.

Relative motion As we move, objects that are actually stable may appear to move. If while riding on a bus you fix your gaze on some object—say, a house—the objects beyond the fixation point appear to move with you; objects in front of the fixation point appear to move backward. The farther those objects are from the fixation point, the faster they seem to move.

Linear perspective Parallel lines, such as railroad tracks, appear to converge with distance. The more they converge, the greater their perceived distance.

Interposition If one object partially blocks our view of another, we perceive it as closer. The depth cues provided by interposition make Magritte's The Blank Signature an impossible scene.



Motion Perception

21-4 How do we perceive motion?

Imagine that you could perceive the world as having color, form, and depth but that you could not see motion. Not only would you be unable to bike or drive, you would have trouble writing, eating, and walking.

Normally your brain computes motion based partly on its assumption that shrinking objects are retreating (not getting smaller) and enlarging objects are approaching. But you are imperfect at motion perception. Large objects, such as trains, appear to move more slowly than smaller objects, such as cars moving at the same speed. (Perhaps at an airport you've noticed that jumbo jets seem to land more slowly than little jets.)

To catch a fly ball, softball or cricket players (unlike drivers) want to achieve a collision—with the ball that's flying their way. To accomplish that, they follow an unconscious rule—one they can't explain but know intuitively: Run to keep the ball at a constantly increasing angle of gaze (McBeath et al., 1995). A dog catching a Frisbee does the same (Shaffer et al., 2004).

The brain will also perceive continuous movement in a rapid series of slightly varying images (a phenomenon called stroboscopic movement). As film animation artists know well, you can create this illusion by flashing 24 still pictures a second. The motion we then see in popular action adventures is not in the film, which merely presents a superfast slide show. The motion is constructed in our heads.

Marquees and holiday lights create another illusion of movement using the phi phenomenon. When two adjacent stationary lights blink on and off in quick succession, we perceive a single light moving back and forth between them. Lighted signs exploit the phi phenomenon with a succession of lights that creates the impression of, say, a moving arrow.

All of these illusions reinforce a fundamental lesson: Perception is not merely a projection of the world onto our brain. Rather, sensations are disassembled into information bits that the brain then reassembles into its own functional model of the external world. Our brain constructs our perceptions.

phi phenomenon an illusion of movement created when two or more adjacent lights blink on and off in quick succession.

FIGURE 21.9 The solution Another view of the impossible doghouse in Figure 21.4 reveals the secrets of this illusion. From the photo angle in Figure 21.4, the grouping principle of closure leads us to perceive the boards as continuous. (Photo by Walter Wick. Reprinted from GAMES Magazine. © 1983 PCS Games Limited Partnership.)

Perceptual Constancy

21-5 How do perceptual constancies help us organize our sensations into meaningful perceptions?

So far, we have noted that our video/computer system must first perceive objects as we do—as having a distinct form, location, and perhaps motion. Its next task is to recognize objects without being deceived by changes in their shape, size, brightness, or color—an ability we call perceptual constancy. Regardless of our viewing angle, distance, and illumination, this top-down process lets us identify people and things in less time than it takes to draw a breath. This human perceptual feat, which has intrigued researchers for decades, provides a monumental challenge for our perceiving computer.

perceptual constancy perceiving objects as unchanging (having consistent shapes, size, lightness, and color) even as illumination and retinal images change.

Shape and Size Constancies

Sometimes an object whose actual shape cannot change seems to change shape with the angle of our view (FIGURE 21.10). More often, thanks to shape constancy, we perceive the form of familiar objects, such as the door in FIGURE 21.11, as constant even while our retinal image of it changes.

FIGURE 21.10 Perceiving shape Do the tops of these tables have different dimensions? They appear to. But—believe it or not—they are identical. (Measure and see.) With both tables, we adjust our perceptions relative to our viewing angle. (Shepard's tables, © 2003 Roger Shepard.)

FIGURE 21.11 Shape constancy A door casts an increasingly trapezoidal image on our retinas as it opens, yet we still perceive it as rectangular.

Thanks to size constancy, we perceive objects as having a constant size, even while our distance from them varies. We assume a car is large enough to carry people, even when we see its tiny image from two blocks away. This illustrates the close connection between perceived distance and perceived size. Perceiving an object's distance gives us cues to its size. Likewise, knowing its general size—that the object is a car—provides us with cues to its distance.

It is a marvel how effortlessly size perception occurs. Given an object's perceived distance and the size of its image on our retinas, we instantly and unconsciously infer the object's size. Although the monsters in FIGURE 21.12a cast the same retinal images, the linear perspective tells our brain that the monster in pursuit is farther away. We therefore perceive it as larger.

This interplay between perceived size and perceived distance helps explain several well-known illusions. For example, can you imagine why the Moon looks up to 50 percent larger when near the horizon than when high in the sky? For at least 22 centuries, scholars have debated this question (Hershenson, 1989). One reason for the Moon illusion is that cues to objects' distances make the horizon Moon—like the distant monster in Figure 21.12a and the distant bar in the Ponzo illusion in Figure 21.12b—appear farther away and therefore larger than the Moon high in the night sky (Kaufman & Kaufman, 2000). Take away these distance cues—by looking at the horizon Moon (or each monster or each bar) through a paper tube—and the object immediately shrinks.

FIGURE 21.12 The interplay between perceived size and distance (a) The monocular cues for distance (such as linear perspective and relative height) make the pursuing monster look larger than the pursued. It isn't. (b) This visual trick, called the Ponzo illusion, is based on the same principle as the fleeing monsters. The two red bars cast identical-size images on our retinas. But experience tells us that a more distant object can create the same-size image as a nearer one only if it is actually larger. As a result, we perceive the bar that seems farther away as larger. (From Shepard, 1990)

Size-distance relationships also explain why in FIGURE 21.13 the two same-age girls seem so different in size. As the diagram reveals, the girls are actually about the same size, but the room is distorted. Viewed with one eye through a peephole, its trapezoidal walls produce the same images as those of a normal rectangular room viewed with both eyes. Presented with the camera's one-eyed view, the brain makes the reasonable assumption that the room is normal and each girl is therefore the same distance from us. And given the different sizes of their images on the retina, our brain ends up calculating that the girls are very different in size.

Our occasional misperceptions reveal the workings of our normally effective perceptual processes. The perceived relationship between distance and size is usually valid. But under special circumstances it can lead us astray—as when helping to create the Moon illusion and the Ames illusion.

FIGURE 21.13 The illusion of the shrinking and growing girls This distorted room, designed by Adelbert Ames, appears to have a normal rectangular shape when viewed through a peephole with one eye. The girl in the right corner appears disproportionately large because we judge her size based on the false assumption that she is the same distance away as the girl in the far corner.

Lightness Constancy

White paper reflects 90 percent of the light falling on it; black paper, only 10 percent. In sunlight, a black paper may reflect 100 times more light than does a white paper viewed indoors, but it still looks black (McBurney & Collings, 1984). This illustrates lightness constancy (also called brightness constancy); we perceive an object as having a constant lightness even while its illumination varies. Perceived lightness depends on relative luminance—the amount of light an object reflects relative to its surroundings (FIGURE 21.14). If you view sunlit black paper through a narrow tube so nothing else is visible, it may look gray, because in bright sunshine it reflects a fair amount of light. View it without the tube and it is again black, because it reflects much less light than the objects around it.

FIGURE 21.14 Relative luminance Squares A and B are identical in color, believe it or not. (If you don't believe me, photocopy the illustration, cut out the squares, and compare.) But we perceive B as lighter, thanks to its surrounding context. (Courtesy Edward Adelson)

Color Constancy

As light changes, a red apple in a fruit bowl retains its redness. This happens because our experience of color depends on something more than the wavelength information received by the cones in our retina. That something more is the surrounding context. If you view only part of a red apple, its color will seem to change as the light changes. But if you see the whole apple as one item in a bowl of fresh fruits, its color will remain roughly constant as the lighting and wavelengths shift—a phenomenon known as color constancy. Dorothea Jameson (1985) noted that a chip colored blue under indoor lighting matches the wavelengths reflected by a gold chip in sunlight. Yet bring a bluebird indoors and it won't look like a goldfinch. Likewise, a green leaf hanging from a brown branch may, when the illumination changes, reflect the same light energy that formerly came from the brown branch. Yet to us the leaf stays greenish and the branch stays brownish. Put on yellow-tinted ski goggles and the snow, after a second, looks as white as before.

color constancy perceiving familiar objects as having consistent color, even if changing illumination alters the wavelengths reflected by the object.



FIGURE 21.15 Color depends on context Believe it or not, these three blue disks are identical in color. (Courtesy R. Beau Lotto, University College London)

Though we take color constancy for granted, the phenomenon is truly remarkable. It demonstrates that our experience of color comes not just from the object—the color is not in the isolated leaf—but from everything around it as well. You and I see color thanks to our brains' computations of the light reflected by any object relative to its surrounding objects. But only if we grew up with normal light, it seems. Monkeys raised under a restricted range of wavelengths later have great difficulty recognizing the same color when illumination varies (Sugita, 2004).

In a context that does not vary, we maintain color constancy. But what if we change the context? Because the brain computes the color of an object relative to its context, the perceived color changes (as is dramatically apparent in FIGURE 21.15). This principle—that we perceive objects not in isolation but in their environmental context—matters to artists, interior decorators, and clothing designers. Our perception of the color of a wall or of a streak of paint on a canvas is determined not just by the paint in the can but by the surrounding colors. The take-home lesson: Comparisons govern perceptions.

"From there to here, from here to there, funny things are everywhere." Dr. Seuss, One Fish, Two Fish, Red Fish, Blue Fish, 1960

***

Form perception, depth perception, motion perception, and perceptual constancy illuminate how we organize our visual experiences. Perceptual organization applies to other senses, too. It explains why we perceive a clock's steady tick not as a tick-tick-tick but as grouped sounds, say, TICK-tick, TICK-tick. Listening to an unfamiliar language, we have trouble hearing where one word stops and the next one begins. Listening to our own language, we automatically hear distinct words. This, too, reflects perceptual organization. But it is more, for we even organize a string of letters—THEDOGATEMEAT—into words that make an intelligible phrase, more likely "The dog ate meat" than "The do gate me at" (McBurney & Collings, 1984). This process involves not only the organization we've been discussing, but also interpretation—discerning meaning in what we perceive.



Review Perceptual Organization

21-1 How did the Gestalt psychologists understand perceptual organization?
Gestalt psychologists searched for rules by which the brain organizes fragments of sensory data into gestalts (from the German word for "whole"), or meaningful forms. In pointing out that the whole is more than the sum of its parts, they noted that we filter sensory information and infer perceptions in ways that make sense to us.

21-2 How do figure-ground and grouping principles contribute to our perceptions?
To recognize an object, we must first perceive it (see it as a figure) as distinct from its surroundings (the ground). We bring order and form to stimuli by organizing them into meaningful groups, following the rules of proximity, similarity, continuity, connectedness, and closure.

21-3 How do we see the world in three dimensions?
Depth perception is our ability to see objects in three dimensions and judge distance. The visual cliff and other research demonstrate that many species perceive the world in three dimensions at, or very soon after, birth. Binocular cues, such as retinal disparity, are depth cues that rely on information from both eyes. Monocular cues (such as relative size, interposition, relative height, relative motion, linear perspective, and light and shadow) let us judge depth using information transmitted by only one eye.

21-4 How do we perceive motion?
As objects move, we assume that shrinking objects are retreating and enlarging objects are approaching. But sometimes we miscalculate. A quick succession of images on the retina can create an illusion of movement, as in stroboscopic movement or the phi phenomenon.

21-5 How do perceptual constancies help us organize our sensations into meaningful perceptions?
Perceptual constancy enables us to perceive objects as stable despite the changing images they cast on our retinas. Shape constancy is our ability to perceive familiar objects (such as an opening door) as unchanging in shape. Size constancy is perceiving objects as unchanging in size despite their changing retinal images. Knowing an object's size gives us clues to its distance; knowing its distance gives clues about its size, but we sometimes misread monocular distance cues and reach the wrong conclusions, as in the Moon illusion. Lightness (or brightness) constancy is our ability to perceive an object as having a constant lightness even when its illumination—the light cast upon it—changes. The brain perceives lightness relative to surrounding objects. Color constancy is our ability to perceive consistent color in objects, even though the lighting and wavelengths shift. Our brain constructs our experience of an object's color through comparisons with other surrounding objects.

Terms and Concepts to Remember
gestalt, p. 262
figure-ground, p. 263
grouping, p. 263
depth perception, p. 264
visual cliff, p. 264
binocular cues, p. 265
retinal disparity, p. 265
monocular cues, p. 265
phi phenomenon, p. 267
perceptual constancy, p. 267
color constancy, p. 269

Test Yourself
1. What do we mean when we say that, in perception, the whole is greater than the sum of its parts? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself
1. Try drawing a realistic depiction of the scene from your window. How many monocular cues will you use in your drawing?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 22 Perceptual Interpretation

Sensory Deprivation and Restored Vision
Perceptual Adaptation
Perceptual Set
Perception and the Human Factor
Is There Extrasensory Perception?

"Let us then suppose the mind to be, as we say, white paper void of all characters, without any ideas: How comes it to be furnished? . . . To this I answer, in one word, from EXPERIENCE." John Locke, An Essay Concerning Human Understanding, 1690

Learning to see At age 3, Mike May lost his vision in an explosion. On March 7, 2000, after a new cornea restored vision to his right eye, he got his first look at his wife and children. Alas, although signals were reaching his long-dormant visual cortex, it lacked the experience to interpret them. Faces, apart from features such as hair, were not recognizable. Expressions eluded him. Yet he can see an object in motion and is gradually learning to navigate his world and to marvel at such things as dust floating in sunlight (Abrams, 2002). (Allison Aliano Photography)

Philosophers have debated whether our perceptual abilities should be credited to our nature or our nurture. To what extent do we learn to perceive? German philosopher Immanuel Kant (1724–1804) maintained that knowledge comes from our inborn ways of organizing sensory experiences. Indeed, we come equipped to process sensory information. But British philosopher John Locke (1632–1704) argued that through our experiences we also learn to perceive the world. Indeed, we learn to link an object's distance with its size. So, just how important is experience? How radically does it shape our perceptual interpretations?

Sensory Deprivation and Restored Vision

22-1 What does research on sensory restriction and restored vision reveal about the effects of experience?

Writing to John Locke, William Molyneux wondered whether "a man born blind, and now adult, taught by his touch to distinguish between a cube and a sphere" could, if made to see, visually distinguish the two. Locke's answer was no, because the man would never have learned to see the difference.

Molyneux' hypothetical case has since been put to the test with a few dozen adults who, though blind from birth, have gained sight (Gregory, 1978; von Senden, 1932). Most had been born with cataracts—clouded lenses that allowed them to see only diffused light, rather as you or I might see a diffuse fog through a Ping-Pong ball sliced in half. After cataract surgery, the patients could distinguish figure from ground and could sense colors—suggesting that these aspects of perception are innate. But much as Locke supposed, they often could not visually recognize objects that were familiar by touch.

Experience also influences our perception of faces. You and I perceive and recognize individual faces as a whole. Show us the same top half of a face paired with two different bottom halves (as in FIGURE 22.1), and the identical top halves will seem different. People deprived of visual experience during childhood surpass the rest of us at recognizing that the top halves are the same, because they didn't learn to process faces as a whole (Le Grand et al., 2004). One 43-year-old man whose sight was recently restored after 40 years of blindness could associate people with distinct features ("Mary's the one with red hair"). But he could not instantly recognize a face. He also lacked perceptual constancy: As people walked away from him they seemed to be shrinking in size (Bower, 2003). Vision, such cases make clear, is partly an acquired sense.
FIGURE 22.1 Perceiving composite faces To most people, the top halves of these two faces in the top row, created by Richard Le Grand and his colleagues (2004), look different. Actually, they are the same, though paired with two different lower face halves. People deprived of visual experience early in life have more difficulty perceiving whole faces, which ironically enables their superiority at recognizing that the top halves of these faces are identical. (Courtesy of Richard Le Grand)

Seeking to gain more control than is provided by clinical cases, researchers have conducted Molyneux' imaginary experiment with infant kittens and monkeys. In one experiment, they outfitted them with goggles through which the animals could see only diffuse, unpatterned light (Wiesel, 1982). After infancy, when their goggles were removed, these animals exhibited perceptual limitations much like those of humans born with cataracts. They could distinguish color and brightness, but not the form of a circle from that of a square. Their eyes had not degenerated; their retinas still relayed signals to their visual cortex. But lacking stimulation, the cortical cells had not developed normal connections. Thus, the animals remained functionally blind to shape. Experience guides, sustains, and maintains the brain's neural organization.

In both humans and animals, a similar period of sensory restriction does no permanent harm if it occurs later in life. Cover the eye of an animal for several months during adulthood, and its vision will be unaffected after the eye patch is removed. Remove cataracts that develop after early childhood, and a human, too, will enjoy normal vision. The effects of visual experiences during infancy in cats, monkeys, and humans suggest there is a critical period shortly after birth—an optimal time when certain events must take place—for normal sensory and perceptual development. Likewise, cochlear implants given to congenitally deaf kittens and human infants seem to trigger an "awakening" of the pertinent brain area (Klinke et al., 1999; Sirenteanu, 1999). Nurture sculpts what nature has endowed.

Experiments on perceptual limitations and advantages produced by early sensory deprivation provide a partial answer to the enduring question about experience: Does the effect of early experience last a lifetime? For some aspects of visual and auditory perception, the answer is clearly yes: "Use it soon or lose it." We retain the imprint of early sensory experiences far into the future.

Perceptual Adaptation

22-2 How adaptable is our ability to perceive?

Given a new pair of glasses, we may feel slightly disoriented, even dizzy. Within a day or two, we adjust. Our perceptual adaptation to changed visual input makes the world seem normal again. But imagine a far more dramatic new pair of glasses—one that shifts the apparent location of objects 40 degrees to the left. When you first put them on and toss a ball to a friend, it sails off to the left. Walking forward to shake hands with the person, you veer to the left.

Could you adapt to this distorted world? Chicks cannot. When fitted with such lenses, they continue to peck where food grains seem to be (Hess, 1956; Rossi, 1968). But we humans adapt to distorting lenses quickly. Within a few minutes your throws would again be accurate, your stride on target. Remove the lenses and you would experience an aftereffect: At first your throws would err in the opposite direction, sailing off to the right; but again, within minutes you would readapt.


perceptual adaptation: in vision, the ability to adjust to an artificially displaced or even inverted visual field.



Perceptual adaptation “Oops, missed,” thinks researcher Hubert Dolezal as he views the world through inverting goggles. Yet, believe it or not, kittens, monkeys, and humans can adapt to an inverted world.


Indeed, given an even more radical pair of glasses—one that literally turns the world upside down—you could still adapt. Psychologist George Stratton (1896) experienced this when he invented, and for eight days wore, optical headgear that flipped left to right and up to down, making him the first person to experience a right-side-up retinal image while standing upright. The ground was up, the sky was down.

At first, Stratton felt disoriented. When he wanted to walk, he found himself searching for his feet, which were now "up." Eating was nearly impossible. He became nauseated and depressed. But Stratton persisted, and by the eighth day he could comfortably reach for something in the right direction and walk without bumping into things. When he finally removed the headgear, he readapted quickly.

Later experiments replicated Stratton's experience (Dolezal, 1982; Kohler, 1962). After a period of adjustment, people wearing the optical gear have even been able to ride a motorcycle, ski the Alps, and fly an airplane. Did they adjust by perceptually converting their strange worlds to "normal" views? No. Actually, the world around them still seemed above their heads or on the wrong side. But by actively moving about in these topsy-turvy worlds, they adapted to the context and learned to coordinate their movements.

Perceptual Set

22-3 How do our expectations, contexts, and emotions influence our perceptions?

"The temptation to form premature theories upon insufficient data is the bane of our profession." Sherlock Holmes, in Arthur Conan Doyle's The Valley of Fear, 1914

When shown the phrase "Mary had a a little lamb," many people perceive what they expect, and miss the repeated word. Did you?

FIGURE 22.2 Perceptual set Show a friend either the left or right image. Then show the center image and ask, “What do you see?” Whether your friend reports seeing a saxophonist or a woman’s face will likely depend on which of the other two drawings was viewed first. In each of those images, the meaning is clear, and it will establish perceptual expectations.

As everyone knows, to see is to believe. As we less fully appreciate, to believe is to see. Our experiences, assumptions, and expectations may give us a perceptual set, or mental predisposition, that greatly influences (top-down) what we perceive. People perceive an adult-child pair as looking more alike when told they are parent and child (Bressan & Dal Martello, 2002). And consider: Is the image in the center picture of FIGURE 22.2 a man playing a saxophone or a woman's face? What we see in such a drawing can be influenced by first looking at either of the two unambiguous versions (Boring, 1930). Once we have formed a wrong idea about reality, we have more difficulty seeing the truth.

Everyday examples of perceptual set abound. In 1972, a British newspaper published genuine, unretouched photographs of a "monster" in Scotland's Loch Ness—"the most amazing pictures ever taken," stated the paper. If this information creates in you the same perceptual set it did in most of the paper's readers, you, too, will see the monster in the photo reproduced in FIGURE 22.3a. But when Steuart Campbell (1986) approached the photos with a different perceptual set, he saw a curved tree trunk—as had others the day the photo was shot. With this different perceptual set, you may now notice that the object is floating motionless, without any rippling water or wake around it—hardly what we would expect of a lively monster.

perceptual set: a mental predisposition to perceive one thing and not another.



FIGURE 22.3 Believing is seeing What do you perceive in these photos? (a) Is this Nessie, the Loch Ness monster, or a log? (b) Are these flying saucers or clouds? We often perceive what we expect to see.


FIGURE 22.4 Recognizing faces When briefly flashed, a caricature of Arnold Schwarzenegger was more accurately recognized than Schwarzenegger himself. Ditto for other familiar male faces.


Perceptual set can similarly influence what we hear. Consider the kindly airline pilot who, on a takeoff run, looked over at his depressed co-pilot and said, "Cheer up." The co-pilot heard the usual "Gear up" and promptly raised the wheels—before they left the ground (Reason & Mycielska, 1982).

Perceptual set also influenced some bar patrons invited to sample free beer (Lee et al., 2006). When researchers added a few drops of vinegar to a brand-name beer, the tasters preferred it—unless they had been told they were drinking vinegar-laced beer and thus expected, and usually experienced, a worse taste. Perceptual set also influences preschool children's taste preferences. By a 6 to 1 margin in one experiment, they judged french fries as tasting better when served in a McDonald's bag rather than a plain white bag (Robinson et al., 2007). Clearly, much of what we perceive comes not just from the world "out there" but also from what's behind our eyes and between our ears.

What determines our perceptual set? Through experience we form concepts, or schemas, that organize and interpret unfamiliar information. Our preexisting schemas for male saxophonists and women's faces, for monsters and tree trunks, for clouds and UFOs, all influence how we interpret ambiguous sensations with top-down processing. Our schemas for faces prime us to see facial patterns even in random configurations, such as the Moon's landscape, clouds, rocks, or cinnamon buns.

Kieran Lee, Graham Byatt, and Gillian Rhodes (2000) demonstrated how we recognize people by facial features that cartoonists can caricature. For but a fraction of a second they showed University of Western Australia students three versions of familiar faces—the actual face, a computer-created caricature that accentuated the differences between this face and the average face, and an "anticaricature" that muted the distinctive features.
As FIGURE 22.4 shows, the students more accurately recognized the caricatured faces than the actual ones. A caricatured Arnold Schwarzenegger is more recognizably Schwarzenegger than Schwarzenegger himself!



[Bar graph for FIGURE 22.4: percentage of students correctly recognizing the face (roughly 50 to 66 percent), shown for anticaricature, actual, and caricature versions of each face.]



Context Effects

A given stimulus may trigger radically different perceptions, partly because of our differing set, but also because of the immediate context. Some examples:

• Imagine hearing a noise interrupted by the words "eel is on the wagon." Likely, you would actually perceive the first word as wheel. Given "eel is on the orange," you would hear peel. This curious phenomenon, discovered by Richard Warren, suggests that the brain can work backward in time to allow a later stimulus to determine how we perceive an earlier one. The context creates an expectation that, top-down, influences our perception as we match our bottom-up signal against it (Grossberg, 1995).

• Is the "magician's cabinet" in FIGURE 22.5 sitting on the floor or hanging from the ceiling? How we perceive it depends on the context defined by the rabbits.

• How tall is the shorter player in FIGURE 22.6?

FIGURE 22.5 Context effects: the magician's cabinet Is the box in the far left frame lying on the floor or hanging from the ceiling? What about the one on the far right? In each case, the context defined by the inquisitive rabbits guides our perceptions. (From Shepard, 1990.)

Even hearing sad rather than happy music can predispose people to perceive a sad meaning in spoken homophonic words—mourning rather than morning, die rather than dye, pain rather than pane (Halberstadt et al., 1995).


FIGURE 22.6 Big and “little” The “little guy” shown here is actually a 6’9” former Hope College basketball center who towers over me. But he seemed like a short player when matched in a semi-pro game against the world’s tallest basketball player, 7'9'' Sun Ming Ming from China.



Culture and context effects What is above the woman's head? In one study, nearly all the East Africans who were questioned said the woman was balancing a metal box or can on her head and that the family was sitting under a tree. Westerners, for whom corners and boxlike architecture are more common, were more likely to perceive the family as being indoors, with the woman sitting under a window. (Adapted from Gregory & Gombrich, 1973.)

The effects of perceptual set and context show how experience helps us construct perception. In everyday life, for example, stereotypes about gender (another instance of perceptual set) can color perception. Without the obvious cues of pink or blue, people will struggle over whether to call the new baby “he” or “she.” But told an infant is “David,” people (especially children) may perceive “him” as bigger and stronger than if the same infant is called “Diana” (Stern & Karraker, 1989). Some differences, it seems, exist merely in the eyes of their beholders.

"We hear and apprehend only what we already half know." Henry David Thoreau, Journal, 1860

Given a perceptual set—“this is a girl”—people see a more feminine baby.

Emotion and Motivation

Perceptions are influenced, top-down, not only by our expectations and by the context, but also by our emotions. Dennis Proffitt (2006a,b) and others have demonstrated this with clever experiments showing that

• walking destinations look farther away to those who have been fatigued by prior exercise.

• a hill looks steeper to those wearing a heavy backpack or just exposed to sad, heavy classical music rather than light, bouncy music.

• a target seems farther away to those throwing a heavy rather than a light object at it.

Even a softball appears bigger when you are hitting well, observed Jessica Witt and Proffitt (2005), after asking players to choose a circle the size of the ball they had just hit well or poorly.

"When you're hitting the ball, it comes at you looking like a grapefruit. When you're not, it looks like a blackeyed pea." Former major league baseball player George Scott

Motives also matter. In Cornell University experiments, students viewed ambiguous figures, such as the horse/seal in FIGURE 22.7. If rewards were linked with seeing one category of stimulus (such as a farm animal rather than a sea animal), then, after just a one-second exposure to the drawing, viewers tended instantly to perceive an example of their hoped-for category (Balcetis & Dunning, 2006). (To confirm the participants' honesty in reporting their perceptions, the researchers in one experiment redefined the to-be-rewarded perception after the viewing. Still, people reported perceiving a stimulus from their originally hoped-for category.)

"Have you ever noticed that anyone driving slower than you is an idiot, and anyone going faster is a maniac?" George Carlin, George Carlin on Campus, 1984

Emotions color our social perceptions, too. Spouses who feel loved and appreciated perceive less threat in stressful marital events—"He's just having a bad day" (Murray et al., 2003). Professional referees, if told a soccer team has a history of aggressive behavior, will assign more penalty cards after watching videotaped fouls (Jones et al., 2002). Lee Ross invites us to recall our own perceptions in different contexts: "Ever notice that when you're driving you hate pedestrians, the way they saunter through the crosswalk, almost daring you to hit them, but when you're walking you hate drivers?" (Jaffe, 2004).

To return to the question "Is perception innate or learned?" we can answer: It's both. The river of perception is fed by sensation, cognition, and emotion. And that is why we need multiple levels of analysis (FIGURE 22.8). "Simple" perceptions are the brain's creative products.

FIGURE 22.7 Ambiguous horse/seal figure If motivated to perceive farm animals, about 7 in 10 people immediately perceived a horse. If motivated to perceive a sea animal, about 7 in 10 perceived a seal. (From "Ambiguity of form: Old and new" by G. H. Fisher, 1968, Perception and Psychophysics, 4, 189–192. Copyright 1968 by Psychonomic Society, Inc.)

FIGURE 22.8 Perception is a biopsychosocial phenomenon Psychologists study how we perceive with different levels of analysis, from the biological to the social-cultural.


Biological influences:
• sensory analysis
• unlearned visual phenomena
• critical period for sensory development

Psychological influences:
• selective attention
• learned schemas
• Gestalt principles
• context effects
• perceptual set

Social-cultural influences:
• cultural assumptions and expectations

Perception: our version of reality



Perception and the Human Factor

human factors psychology: a branch of psychology that explores how people and machines interact and how machines and physical environments can be made safe and easy to use.

22-4 How do human factors psychologists work to create user-friendly machines and work settings?

The Ride On Carry On foldable chair attachment, "designed by a flight attendant mom," enables a small suitcase to double as a stroller.


Designs sometimes neglect the human factor. Psychologist Donald Norman, an MIT alumnus with a Ph.D., bemoaned the complexity of assembling his new high-definition TV, receiver, speakers, digital recorder, DVD player, VCR, and seven remotes into a usable home theater system: "I was VP of Advanced Technology at Apple. I can program dozens of computers in dozens of languages. I understand television, really, I do. . . . It doesn't matter: I am overwhelmed."

How much easier life might be if engineers would routinely work with human factors psychologists to test their designs and instructions on real people. Human factors psychologists help to design appliances, machines, and work settings that fit our natural perceptions and inclinations. ATMs are internally more complex than VCRs ever were, yet, thanks to human factors psychologists working with engineers, ATMs are easier to operate. TiVo has solved the TV recording problem with a simple select-and-click menu system ("record that one"). Apple has similarly engineered easy usability with the iPod and iPhone. Norman (2001) hosts a Web site (www.jnd.org) that illustrates good designs that fit people (see FIGURE 22.9).

Human factors psychologists also work at designing safe and efficient environments. An ideal kitchen layout, researchers have found, stores needed items close to their usage point and near eye level. It locates work areas to enable doing tasks in order, such as with a refrigerator, stove, and sink in a triangle. It creates counters that enable hands to work at or slightly below elbow height (Boehm-Davis, 2005).

Understanding human factors can do more than enable us to design for reduced frustration; it can help prevent accidents and avoid disaster (Boehm-Davis, 2005). Two-thirds of commercial air accidents, for example, have been caused by human error (Nickerson, 1998).
After beginning commercial flights in the late 1960s, the Boeing 727 was involved in several landing accidents caused by pilot error. Psychologist Conrad Kraft (1978) noted a common setting for these accidents: All took place at night, and all involved landing short of the runway after crossing a dark stretch of water or unilluminated ground. Kraft reasoned that, beyond the runway, city lights would project a larger retinal image if on a rising terrain. This would make the ground seem farther away than it was. By re-creating these conditions in flight simulations, Kraft discovered that pilots were deceived into thinking they were flying higher than their actual altitudes (FIGURE 22.10 on the next page). Aided by Kraft’s finding, the airlines began requiring the co-pilot to monitor the altimeter—calling out altitudes during the descent—and the accidents diminished.


The Oxo measuring cup allows the user to see the quantity from above.

The Chatsford Tea Pot comes with a built-in strainer.

FIGURE 22.9 Designing products that fit people Human factors psychologist Donald Norman offers these and other examples of effectively designed new products.


FIGURE 22.10 The human factor in accidents Lacking distance cues when approaching a runway from over a dark surface, pilots simulating a night landing tended to fly too low. (From Kraft, 1978.)


[Graph for FIGURE 22.10: Altitude (thousands of feet, 0 to 10) plotted against distance from runway (20 to 2 miles). The pilot's perceived descent path runs above the actual descent path, so altitude looks higher than it actually is.]

Later Boeing psychologists worked on other human factors problems (Murray, 1998): How should airlines best train and manage mechanics to reduce the maintenance errors that underlie about 50 percent of flight delays and 15 percent of accidents? What illumination and typeface would make on-screen flight data easiest to read? How could warning messages be most effectively worded—as an action statement ("Pull Up") rather than a problem statement ("Ground Proximity")?

In studying human factors issues, psychologists' most powerful tool is theory-aided research. If an organization wonders what sort of Web design (Emphasizing content? Speed? Graphics?) would most effectively draw in visitors and entice them to return, the psychologist will want to test responses to several alternatives. If NASA (National Aeronautics and Space Administration) wonders what sort of spacecraft design would best facilitate sleeping, work, and morale, their human factors psychologists will want to test the alternatives (FIGURE 22.11).

Consider, finally, the available assistive listening technologies in various theaters, auditoriums, and places of worship. One technology, commonly available in the United States, requires a headset attached to a pocket-size receiver that detects infrared or FM signals from the room's sound system. The well-meaning people who design, purchase, and install these systems correctly understand that the technology puts sound directly into the user's ears. Alas, few people with hearing loss undergo the hassle and embarrassment of locating, requesting, wearing, and returning a conspicuous headset. Most such units therefore sit in closets. Britain, the Scandinavian countries, and Australia have instead installed loop systems (see www.hearingloop.org) that broadcast customized sound directly through a person's own hearing aid. When suitably equipped, a hearing aid can be transformed by a discreet touch of a switch into an in-the-ear loudspeaker. Offered convenient, inconspicuous, personalized sound, many more people elect to use assistive listening.

The human factor in safe landings Advanced cockpit design and rehearsed emergency procedures aided pilot Chesley "Sully" Sullenberger, a U.S. Air Force Academy graduate who studied psychology and human factors. In January 2009, Sullenberger's instantaneous decisions safely guided his disabled airliner onto New York City's Hudson River, where all 155 of the passengers and crew were safely evacuated.

FIGURE 22.11 How not to go mad while going to Mars Future astronauts headed to Mars will be confined in conditions of monotony, stress, and weightlessness for months on end. To help design and evaluate a workable human environment, such as for this Transit Habitation (Transhab) Module, NASA engages human factors psychologists (Weed, 2001; Wichman, 1992).

extrasensory perception (ESP): the controversial claim that perception can occur apart from sensory input; includes telepathy, clairvoyance, and precognition.

parapsychology: the study of paranormal phenomena, including ESP and psychokinesis.

Is There Extrasensory Perception?

22-5 What are the claims of ESP, and what have most research psychologists concluded after putting these claims to the test?

Can we perceive only what we sense? Or, as nearly half of Americans believe, are we capable of extrasensory perception (ESP) without sensory input (AP, 2007; Moore, 2005)? Are there indeed people—any people—who can read minds, see through walls, or foretell the future?

There would also be some people, notes Michael Shermer (1999), who would have no need for caller ID, who would never lose at "rock, paper, scissors," and for whom we could never have a surprise party.

Five British universities have parapsychology units staffed by Ph.D. graduates of Edinburgh University's parapsychology program (Turpin, 2005). Sweden's Lund University, the Netherlands' Utrecht University, and Australia's University of Adelaide also have added faculty chairs or research units for parapsychology. Parapsychologists in such places do experiments that search for possible ESP and other paranormal phenomena. But other research psychologists and scientists—including 96 percent of the scientists in the U.S. National Academy of Sciences—are skeptical that such phenomena exist (McConnell, 1991).

If ESP is real, we would need to overturn the scientific understanding that we are creatures whose minds are tied to our physical brains and whose perceptual experiences of the world are built of sensations. Sometimes new evidence does overturn our scientific preconceptions. Science, as we will see throughout this book, offers us various surprises—about the extent of the unconscious mind, about the effects of emotions on health, about what heals and what doesn't, and much more. Before we evaluate claims of ESP, let's review them.

Claims of ESP


Claims of paranormal phenomena (“psi”) include astrological predictions, psychic healing, communication with the dead, and out-of-body experiences. But the most testable and (for a perception discussion) most relevant claims are for three varieties of ESP:

• Telepathy, or mind-to-mind communication—one person sending thoughts to another or perceiving another's thoughts.

• Clairvoyance, or perceiving remote events, such as sensing that a friend's house is on fire.

• Precognition, or perceiving future events, such as a political leader's death or a sporting event's outcome.

FIGURE 22.12 Parapsychological concepts [Diagram: Purported paranormal phenomena ("psi") divide into ESP (extrasensory perception)—telepathy, clairvoyance, and precognition—and PK (psychokinesis).]

"A person who talks a lot is sometimes right." Spanish proverb

Closely linked with these are claims of psychokinesis (PK), or “mind over matter,” such as levitating a table or influencing the roll of a die (FIGURE 22.12). (The claim is illustrated by the wry request, “Will all those who believe in psychokinesis please raise my hand?”)

Premonitions or Pretensions?

Can psychics see into the future? Although one might wish for a psychic stock forecaster, the tallied forecasts of "leading psychics" reveal meager accuracy. No greedy—or charitable—psychic has been able to predict the outcome of a lottery jackpot, or to make billions on the stock market. During the 1990s, tabloid psychics were all wrong in predicting surprising events. (Madonna did not become a gospel singer, the Statue of Liberty did not lose both its arms in a terrorist blast, Queen Elizabeth did not abdicate her throne to enter a convent.) And the new-century psychics missed the big-news events, such as the horror of 9/11. (Where were the psychics on 9/10 when we needed them? Why, despite a $50 million reward offered, could none of them help locate Osama bin Laden after 9/11?) Gene Emery (2004), who has tracked annual psychic forecasts for 26 years, reports that almost never have unusual predictions come true, and virtually never have psychics anticipated any of the year's headline events.

Analyses of psychic visions offered to police departments reveal that these, too, are no more accurate than guesses made by others (Reiser, 1982). Psychics working with the police do, however, generate hundreds of predictions. This increases the odds of an occasional correct guess, which psychics can then report to the media. Moreover, vague predictions can later be interpreted ("retrofitted") to match events that provide a perceptual set for "understanding" them. Nostradamus, a sixteenth-century French psychic, explained in an unguarded moment that his ambiguous prophecies "could not possibly be understood till they were interpreted after the event and by it."



Police departments are wise to all this. When Jane Ayers Sweat and Mark Durm (1993) asked the police departments of America's 50 largest cities whether they ever used psychics, 65 percent said they never had. Of those that had, not one had found it helpful.

Are the spontaneous "visions" of everyday people any more accurate? Consider our dreams. Do they foretell the future, as people often believe? Or do they only seem to do so because we are more likely to recall or reconstruct dreams that appear to have come true? Two Harvard psychologists (Murray & Wheeler, 1937) tested the prophetic power of dreams after aviator Charles Lindbergh's baby son was kidnapped and murdered in 1932, but before the body was discovered. When the researchers invited the public to report their dreams about the child, 1300 visionaries submitted dream reports. How many accurately envisioned the child dead? Five percent. And how many also correctly anticipated the body's location—buried among trees? Only 4 of the 1300. Although this number was surely no better than chance, to those 4 dreamers the accuracy of their apparent precognitions must have seemed uncanny.

Throughout the day, each of us imagines many events. Given the billions of events in the world each day, and given enough days, some stunning coincidences are sure to occur. By one careful estimate, chance alone would predict that more than a thousand times a day someone on Earth will think of someone and then within the ensuing five minutes will learn of the person's death (Charpak & Broch, 2004). With enough time and people, the improbable becomes inevitable.

That was the experience of comics writer John Byrne (2003). Six months after his Spider-Man story about a New York blackout appeared, New York suffered a massive blackout. A subsequent Spider-Man storyline involved a major earthquake in Japan.
“And again,” he recalled, “the real thing happened in the month the issue hit the stands.” Later, when working on a Superman comic book, he “had the Man of Steel fly to the rescue when disaster beset the NASA space shuttle. The Challenger tragedy happened almost immediately thereafter” (with time for the issue to be redrawn). “Most recently, and chilling, came when I was writing and drawing Wonder Woman and did a story in which the title character was killed as a prelude to her becoming a goddess.” The issue cover “was done as a newspaper front page, with the headline ‘Princess Diana Dies.’ (Diana is Wonder Woman’s real name.) That issue went on sale on a Thursday. The following Saturday . . . I don’t have to tell you, do I?”
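The "more than a thousand times a day" estimate can be checked with a back-of-envelope calculation. Here is a minimal sketch in Python; every input number below is an illustrative assumption of our own, chosen conservatively, not a figure taken from Charpak and Broch:

```python
# Back-of-envelope check of a Charpak & Broch style coincidence estimate.
# All inputs are illustrative assumptions, not data from the text.

WORLD_POPULATION = 6.5e9       # approximate mid-2000s world population
DEATHS_LEARNED_PER_YEAR = 1    # deaths of a thought-about acquaintance a person learns of yearly
THOUGHTS_PER_DAY = 0.1         # how often that acquaintance crosses one's mind, per day
WINDOW_MINUTES = 5             # the "premonition" window before hearing the news
MINUTES_PER_DAY = 24 * 60

# If thoughts occur at random moments, the chance a given acquaintance
# crossed your mind in the 5 minutes before you learn of the death:
p_coincidence = THOUGHTS_PER_DAY * WINDOW_MINUTES / MINUTES_PER_DAY

# Expected eerie coincidences per day, worldwide:
per_day = WORLD_POPULATION * DEATHS_LEARNED_PER_YEAR * p_coincidence / 365
print(round(per_day))  # prints 6183 — comfortably "more than a thousand times a day"
```

Even with these deliberately modest assumptions, chance alone produces several thousand such "premonitions" worldwide every day, which is the point of the estimate: with enough time and people, the improbable becomes inevitable.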

Putting ESP to Experimental Test In the past, there have been all kinds of strange ideas—that bumps on the head reveal character traits, that bloodletting is a cure-all, that each sperm cell contains a miniature person. Faced with such claims—or with claims of mind-reading or out-of-body travel or communication with the dead—how can we separate bizarre ideas from those that sound bizarre but are true? At the heart of science is a simple answer: Test them to see if they work. If they do, so much the better for the ideas. If they don’t, so much the better for our skepticism. This scientific attitude has led both believers and skeptics to agree that what parapsychology needs is a reproducible phenomenon and a theory to explain it. Parapsychologist Rhea White (1998) spoke for many in saying that “the image of parapsychology that comes to my mind, based on nearly 44 years in the field, is that of a small airplane [that] has been perpetually taxiing down the runway of the Empirical Science Airport since 1882 . . . its movement punctuated occasionally by lifting a few feet off the ground only to bump back down on the tarmac once again. It has never taken off for any sustained flight.” Seeking a reproducible phenomenon, how might we test ESP claims in a controlled experiment? An experiment differs from a staged demonstration. In the laboratory, the experimenter controls what the “psychic” sees and hears. On stage, the

“At the heart of science is an essential tension between two seemingly contradictory attitudes—an openness to new ideas, no matter how bizarre or counterintuitive they may be, and the most ruthless skeptical scrutiny of all ideas, old and new.” Carl Sagan (1987)

MODULE 22 Perceptual Interpretation

Courtesy of Claire Cole

Testing psychic powers in the British population Hertfordshire University psychologist Richard Wiseman created a “mind machine” to see if people can influence or predict a coin toss. Using a touch-sensitive screen, visitors to festivals around the country were given four attempts to call heads or tails. Using a random-number generator, a computer then decided the outcome. When the experiment concluded in January 2000, nearly 28,000 people had predicted 110,972 tosses—with 49.8 percent correct.
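As a quick arithmetic check (ours, not Wiseman’s analysis), a one-proportion z-test shows that 49.8 percent correct over 110,972 tosses is statistically indistinguishable from coin-flipping chance:

```python
import math

def z_vs_chance(hits: int, n: int, p0: float = 0.5) -> float:
    """Normal-approximation z statistic for an observed hit rate vs. chance p0."""
    p_hat = hits / n
    se = math.sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

n = 110_972
hits = round(0.498 * n)  # the report gives the rate, not the raw count
z = z_vs_chance(hits, n)
# |z| is about 1.33, below the 1.96 cutoff for significance at the .05 level:
# 49.8 percent of 110,972 guesses is well within ordinary chance variation.
```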


“A psychic is an actor playing the role of a psychic.” Psychologist-magician Daryl Bem (1984)

The “Bizarro” cartoon by Dan Piraro is reprinted by permission of Chronicle Features.

Which supposed psychic ability does Psychic Pizza claim?

“People’s desire to believe in the paranormal is stronger than all the evidence that it does not exist.” Susan Blackmore, “Blackmore’s first law,” 2004

psychic controls what the audience sees and hears. Time and again, skeptics note, so-called psychics have exploited unquestioning audiences with mind-blowing performances in which they appeared to communicate with the spirits of the dead, read minds, or levitate objects—only to have it revealed that their acts were nothing more than the illusions of stage magicians.

The search for a valid and reliable test of ESP has resulted in thousands of experiments. Some 380 of them have assessed people’s efforts to influence computer-generated random sequences of ones and zeros. In some small experiments, the tally of the desired number has exceeded chance by 1 or 2 percent, an effect that disappears when larger experiments are added to the mix (Bösch et al., 2006a,b; Radin et al., 2006; Wilson & Shadish, 2006). Another set of experiments has invited “senders” to telepathically transmit one of four visual images to “receivers” deprived of sensation in a nearby chamber (Bem & Honorton, 1994). The result? A reported 32 percent accurate response rate, surpassing the chance rate of 25 percent. But follow-up studies have (depending on who was summarizing the results) failed to replicate the phenomenon or produced mixed results (Bem et al., 2001; Milton & Wiseman, 2002; Storm, 2000, 2003).

If ESP nevertheless exists, might it subtly register in the brain? To find out, Harvard researchers Samuel Moulton and Stephen Kosslyn (2008) had a sender try to send one of two pictures telepathically to a receiver lying in an fMRI machine. In these pairs (mostly couples, friends, or twins), the receivers guessed the picture’s content correctly at the level of chance (50.0 percent). Moreover, their brains responded no differently when later viewing the actual pictures “sent” by ESP.
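The arithmetic behind that vanishing 1-to-2-percent edge is easy to verify (a sketch of the sampling logic, not part of the original studies): the standard error of a hit rate around 50 percent shrinks with the square root of the number of trials, so small studies routinely drift a point or two from chance while large pooled data sets cannot.

```python
import math

def se_pct(n_trials: int) -> float:
    """Standard error (in percentage points) of an observed hit rate
    around 50% across n_trials independent binary trials."""
    return 100 * math.sqrt(0.25 / n_trials)

# A 1,000-trial study wobbles ~1.6 points around 50% by chance alone,
# so a "1 or 2 percent" edge there proves nothing. In a pooled
# 1,000,000-trial data set, the chance wobble is only ~0.05 points.
small_study = se_pct(1_000)
large_pool = se_pct(1_000_000)
```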
“These findings,” concluded the researchers, “are the strongest evidence yet obtained against the existence of paranormal mental phenomena.” From 1998 to 2010, one skeptic, magician James Randi, offered $1 million “to anyone who proves a genuine psychic power under proper observing conditions” (Randi, 1999, 2008). French, Australian, and Indian groups have made parallel offers of up to 200,000 euros to anyone with demonstrable paranormal abilities (CFI, 2003). Large as these sums are, the scientific seal of approval would be worth far more to anyone whose claims could be authenticated. To refute those who say there is no ESP, one need only produce a single person who can demonstrate a single, reproducible ESP phenomenon. (To refute those who say pigs can’t talk would take but one talking pig.) So far, no such person has emerged. Randi’s offer was publicized for years, and dozens of people were tested, sometimes under the scrutiny of an independent panel of judges. Still, nothing.

***

To feel awe and to gain a deep reverence for life, we need look no further than our own perceptual system and its capacity for organizing formless nerve impulses into colorful sights, vivid sounds, and evocative smells. As Shakespeare’s Hamlet recognized, “There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy.” Within our ordinary sensory and perceptual experiences lies much that is truly extraordinary—surely much more than has so far been dreamt of in our psychology.

The Quigmans by Buddy Hickerson; © 1990, Los Angeles Times Syndicate. Reprinted with permission.


“So, how does the mind work? I don’t know. You don’t know. Pinker doesn’t know. And, I rather suspect, such is the current state of the art, that if God were to tell us, we wouldn’t understand.” Jerry Fodor, “Reply to Steven Pinker,” 2005

Review Perceptual Interpretation

22-1 What does research on sensory restriction and restored vision reveal about the effects of experience? People who were born blind but gained sight after surgery lack the experience to recognize shapes, forms, and complete faces. Animals whose visual input was severely restricted suffer enduring visual handicaps when their visual exposure returns to normal. There is a critical period for some aspects of sensory and perceptual development. Without early stimulation, the brain’s neural organization does not develop normally.

22-2 How adaptable is our ability to perceive? Perceptual adaptation is evident when people are given glasses that shift the world slightly to the left or right, or even turn it upside down. People are initially disoriented, but they manage to adapt to their new context.

22-3 How do our expectations, contexts, and emotions influence our perceptions? Perceptual set is a mental predisposition that functions as a lens through which we perceive the world. Our learned concepts (schemas) prime us to organize and interpret ambiguous stimuli in certain ways. The surrounding context helps create expectations that guide our perceptions. Emotional context can color our interpretations of other people’s behaviors, as well as our own.

22-4 How do human factors psychologists work to create user-friendly machines and work settings? Human factors psychologists contribute to human safety and improved design by encouraging developers and designers to consider human perceptual abilities, to avoid the curse of knowledge, and to test users to reveal perception-based problems.

22-5 What are the claims of ESP, and what have most research psychologists concluded after putting these claims to the test? The three most testable forms of extrasensory perception (ESP) are telepathy (mind-to-mind communication), clairvoyance (perceiving remote events), and precognition (perceiving future events). Most research psychologists’ skepticism focuses on two points. First, to believe in ESP, you must believe the brain is capable of perceiving without sensory input. Second, psychologists and parapsychologists have been unable to replicate (reproduce) ESP phenomena under controlled conditions.

Terms and Concepts to Remember
perceptual adaptation, p. 273
perceptual set, p. 274
human factors psychology, p. 279
extrasensory perception (ESP), p. 281
parapsychology, p. 281

Test Yourself 1. What type of evidence shows that, indeed, “there is more to perception than meets the senses”?

2. In the sports channel cartoon above, what psychic ability is being claimed? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. Can you recall a time when your expectations have predisposed how you perceived a person (or group of people)?

2. Have you ever had what felt like an ESP experience? Can you think of an explanation other than ESP for that experience?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Learning

modules
23 Classical Conditioning
24 Operant Conditioning
25 Learning by Observation

When a chinook salmon first emerges from its egg in a stream’s gravel bed, its genes provide most of the behavioral instructions it needs for life. It knows instinctively how and where to swim, what to eat, and how to protect itself. Following a built-in plan, the young salmon soon begins its trek to the sea. After some four years in the ocean, the mature salmon returns to its birthplace. It navigates hundreds of miles to the mouth of its home river and then, guided by the scent of its home stream, begins an upstream odyssey to its ancestral spawning ground. Once there, the salmon seeks out the best temperature, gravel, and water flow for breeding. It then mates and, its life mission accomplished, dies.

Unlike salmon, we are not born with a genetic plan for life. Much of what we do we learn from experience. Although we struggle to find the life direction a salmon is born with, our learning gives us more flexibility. We can learn how to build grass huts or snow shelters, submarines or space stations, and thereby adjust to almost any environment. Indeed, nature’s most important gift to us may be our adaptability—our capacity to learn new behaviors that help us cope with changing circumstances.

Learning breeds hope. What is learnable we can potentially teach—a fact that encourages parents, educators, coaches, and animal trainers. What has been learned we can potentially change by new learning—an assumption that underlies counseling, psychotherapy, and rehabilitation programs. No matter how unhappy, unsuccessful, or unloving we are, that need not be the end of our story.

No topic is closer to the heart of psychology than learning, a relatively permanent behavior change due to experience. Psychologists study the learning of visual perceptions, of a drug’s expected effect, of gender roles. They also consider how learning shapes our thought and language, our motivations and emotions, our personalities and attitudes.

Modules 23 through 25 examine three types of learning: classical conditioning, operant conditioning, and observational learning.

“Learning is the eye of the mind.” Thomas Drake, Bibliotheca Scholastica Instructissima, 1633

How Do We Learn?

More than 200 years ago, philosophers such as John Locke and David Hume echoed Aristotle’s conclusion from 2000 years earlier: We learn by association. Our minds naturally connect events that occur in sequence. Suppose you see and smell freshly baked bread, eat some, and find it satisfying. The next time you see and smell fresh bread, that experience will lead you to expect that eating it will once again be satisfying. So, too, with sounds. If you associate a sound with a frightening consequence, hearing the sound alone may trigger your fear. As one 4-year-old exclaimed after watching a TV character get mugged, “If I had heard that music, I wouldn’t have gone around the corner!” (Wells, 1981).

Learned associations also feed our habitual behaviors (Wood & Neal, 2007). As we repeat behaviors in a given context—the sleeping posture we associate with bed, our walking routes on campus, our eating popcorn in a movie theater—the behaviors become associated with the contexts. Our next experience of the context then automatically triggers the habitual response. Such associations can make it hard to kick a

© 1984 by Sidney Harris, American Scientist Magazine.

23-1 What are some basic forms of learning?

“Actually, sex just isn’t that important to me.”


Jouanneau Thomas/CORBIS SYGMA


Nature without appropriate nurture Keiko—the killer whale of Free Willy fame—had all the right genes for being dropped right back into his Icelandic home waters. But lacking life experience, he required caregivers to his life’s end in a Norwegian fjord.

|| Most of us would be unable to name the order of the songs on our favorite CD or playlist. Yet, hearing the end of one piece cues (by association) an anticipation of the next. Likewise, when singing your national anthem, you associate the end of each line with the beginning of the next. (Pick a line out of the middle and notice how much harder it is to recall the previous line.) ||

smoking habit; when back in the smoking context, the urge to light up can be powerful (Siegel, 2005).

Other animals also learn by association. Disturbed by a squirt of water, the sea slug Aplysia protectively withdraws its gill. If the squirts continue, as happens naturally in choppy water, the withdrawal response diminishes. (The slug’s response habituates.) But if the sea slug repeatedly receives an electric shock just after being squirted, its withdrawal response to the squirt instead grows stronger. The animal relates the squirt to the impending shock. Complex animals can learn to relate their own behavior to its outcomes. Seals in an aquarium will repeat behaviors, such as slapping and barking, that prompt people to toss them a herring. By linking two events that occur close together, both the sea slug and the seals exhibit associative learning. The sea slug associates the squirt with an impending shock; the seal associates slapping and barking with a herring treat. Each animal has learned something important to its survival: predicting the immediate future.

The significance of an animal’s learning is illustrated by the challenges captive-bred animals face when introduced to the wild. After being bred and raised in captivity, 11 Mexican gray wolves—extinct in the United States since 1977—were released in Arizona’s Apache National Forest in 1998. Eight months later, a lone survivor was recaptured. The pen-reared wolves had learned how to hunt—and to move 100 feet away from people—but had not learned to run from a human with a gun. Their story is not unusual. Twentieth-century records document 145 reintroductions of 115 species. Of those, only 11 percent produced self-sustaining populations in the wild. Successful adaptation requires both nature (the needed genetic predispositions) and nurture (a history of appropriate learning).

Conditioning is the process of learning associations.
In classical conditioning, the topic of Module 23, we learn to associate two stimuli and thus to anticipate events. We learn that a flash of lightning signals an impending crack of thunder, so when lightning flashes nearby, we start to brace ourselves (FIGURE 1). In operant conditioning, Module 24’s focus, we learn to associate a response (our behavior) and its consequence and thus to repeat acts followed by good results (FIGURE 2) and avoid acts followed by bad results. To simplify, we will explore these two types of associative learning separately in Modules 23 and 24. Often, though, they occur together, as on one Japanese cattle ranch, where the clever rancher outfitted his herd with electronic pagers, which he

FIGURE 1 Classical conditioning Two related events: Stimulus 1 (lightning) is followed by Stimulus 2 (thunder). Result after repetition: when we see lightning, we wince, anticipating thunder.

FIGURE 2 Operant conditioning (a) Response: balancing a ball. (b) Consequence: receiving food. (c) Behavior strengthened.

calls from his cellphone. After a week of training, the animals learn to associate two stimuli—the beep on their pager and the arrival of food (classical conditioning). But they also learn to associate their hustling to the food trough with the pleasure of eating (operant conditioning). The concept of association by conditioning provokes questions: What principles influence the learning and the loss of associations? How can these principles be applied? And what really are the associations: Does the beep on a steer’s pager evoke a mental representation of food, to which the steer responds by coming to the trough? Or does it make little sense to explain conditioned associations in terms of cognition? These questions are among the many being studied by researchers exploring how the brain stores and retrieves learning. Conditioning is not the only form of learning. As Module 25 explains, we also learn from others’ experiences through observational learning. Chimpanzees, too, may learn behaviors merely by watching others perform them. If one sees another solve a puzzle and gain a food reward, the observer may perform the trick more quickly. By conditioning and by observation we humans learn and adapt to our environments. We learn to expect and prepare for significant events such as food or pain (classical conditioning). We also learn to repeat acts that bring good results and to avoid acts that bring bad results (operant conditioning). By watching others we learn new behaviors (observational learning). And through language, we also learn things we have neither experienced nor observed.

Pavlov’s Experiments
Extending Pavlov’s Understanding
Pavlov’s Legacy

module 23 Classical Conditioning

23-2 What is classical conditioning, and how did Pavlov’s work influence behaviorism?

associative learning learning that certain events occur together. The events may be two stimuli (as in classical conditioning) or a response and its consequences (as in operant conditioning).

classical conditioning a type of learning in which one learns to link two or more stimuli and anticipate events.

learning a relatively permanent change in an organism’s behavior due to experience.

behaviorism the view that psychology (1) should be an objective science that (2) studies behavior without reference to mental processes. Most research psychologists today agree with (1) but not with (2).

Although associative learning had long generated philosophical discussion, only in the early twentieth century did psychology’s most famous research verify it. For many people, the name Ivan Pavlov (1849–1936) rings a bell. His experiments are classics, and the phenomenon he explored we justly call classical conditioning. Pavlov’s work also laid the foundation for many of psychologist John B. Watson’s ideas. In searching for laws underlying learning, Watson (1913) urged his colleagues to discard reference to inner thoughts, feelings, and motives. The science of psychology should instead study how organisms respond to stimuli in their environments, said Watson: “Its theoretical goal is the prediction and control of behavior. Introspection forms no essential part of its methods.” Simply said, psychology should be an objective science based on observable behavior. This view, which influenced North American psychology during the first half of the twentieth century, Watson called behaviorism. Watson and Pavlov shared both a disdain for “mentalistic” concepts (such as consciousness) and a belief that the basic laws of learning were the same for all animals—whether dogs or humans. Few researchers today propose that psychology should ignore mental processes, but most now agree that classical conditioning is a basic form of learning by which all organisms adapt to their environment.

Pavlov’s Experiments

23-3 How does a neutral stimulus become a conditioned stimulus?

Sovfoto

Ivan Pavlov “Experimental investigation . . . should lay a solid foundation for a future true science of psychology” (1927).


Pavlov was driven by a lifelong passion for research. After setting aside his initial plan to follow his father into the Russian Orthodox priesthood, Pavlov received a medical degree at age 33 and spent the next two decades studying the digestive system. This work earned him Russia’s first Nobel prize in 1904. But it was his novel experiments on learning, to which he devoted the last three decades of his life, that earned this feisty scientist his place in history.

Pavlov’s new direction came when his creative mind seized on an incidental observation. Without fail, putting food in a dog’s mouth caused the animal to salivate. Moreover, the dog began salivating not only to the taste of the food, but also to the mere sight of the food, or the food dish, or the person delivering the food, or even the sound of that person’s approaching footsteps. At first, Pavlov considered these “psychic secretions” an annoyance—until he realized they pointed to a simple but important form of learning.

Pavlov and his assistants tried to imagine what the dog was thinking and feeling as it drooled in anticipation of the food. This only led them into fruitless debates. So, to explore the phenomenon more objectively, they experimented. To eliminate other possible influences, they isolated the dog in a small room, secured it in a harness, and attached a device to divert its saliva to a measuring instrument. From the next room, they presented food—first by sliding in a food bowl, later by blowing meat powder into the dog’s mouth at a precise moment. They then paired various neutral events—something the dog could see or hear but didn’t associate with food—with food in the dog’s mouth. If a sight or sound regularly signaled the arrival of food, would the dog learn the link? If so, would it begin salivating in anticipation of the food?



BEFORE CONDITIONING
US (food in mouth) → UR (salivation). An unconditioned stimulus (US) produces an unconditioned response (UR).
Neutral stimulus (tone) → no salivation. A neutral stimulus produces no salivation response.

DURING CONDITIONING
Neutral stimulus (tone) + US (food in mouth) → UR (salivation). The unconditioned stimulus is repeatedly presented just after the neutral stimulus. The unconditioned stimulus continues to produce an unconditioned response.

AFTER CONDITIONING
CS (tone) → CR (salivation). The neutral stimulus alone now produces a conditioned response (CR), thereby becoming a conditioned stimulus (CS).
The answers proved to be yes and yes. Just before placing food in the dog’s mouth to produce salivation, Pavlov sounded a tone. After several pairings of tone and food, the dog, anticipating the meat powder, began salivating to the tone alone. In later experiments, a buzzer, a light, a touch on the leg, even the sight of a circle set off the drooling.1 (This procedure works with people, too. When hungry young Londoners viewed abstract figures before smelling peanut butter or vanilla, their brains soon were responding in anticipation to the abstract images alone [Gottfried et al., 2003]). Because salivation in response to food in the mouth was unlearned, Pavlov called it an unconditioned response (UR). Food in the mouth automatically, unconditionally, triggers a dog’s salivary reflex (FIGURE 23.1). Thus, Pavlov called the food stimulus an unconditioned stimulus (US). Salivation in response to the tone was conditional upon the dog’s learning the association between the tone and the food. Today we call this learned response the conditioned response (CR). The previously neutral (in this context) tone stimulus that now triggered the conditional salivation we call the conditioned stimulus (CS). Distinguishing these two kinds of stimuli and responses is easy: Conditioned = learned; unconditioned = unlearned.

PEANUTS reprinted by permission of United Feature Syndicate, Inc.

PEANUTS

FIGURE 23.1 Pavlov’s classic experiment Pavlov presented a neutral stimulus (a tone) just before an unconditioned stimulus (food in mouth). The neutral stimulus then became a conditioned stimulus, producing a conditioned response.

unconditioned response (UR) in classical conditioning, the unlearned, naturally occurring response to the unconditioned stimulus (US), such as salivation when food is in the mouth.

unconditioned stimulus (US) in classical conditioning, a stimulus that unconditionally—naturally and automatically—triggers a response.

conditioned response (CR) in classical conditioning, the learned response to a previously neutral (but now conditioned) stimulus (CS).

conditioned stimulus (CS) in classical conditioning, an originally irrelevant stimulus that, after association with an unconditioned stimulus (US), comes to trigger a conditioned response.

1 The “buzzer” (English translation) was perhaps Pavlov’s supposed bell—a small electric bell (Tully, 2003).



Let’s check your understanding with a second example. An experimenter sounds a tone just before delivering an air puff to your blinking eye. After several repetitions, you blink to the tone alone. What is the US? The UR? The CS? The CR?2 If Pavlov’s demonstration of associative learning was so simple, what did he do for the next three decades? What discoveries did his research factory publish in his 532 papers on salivary conditioning (Windholz, 1997)? He and his associates explored five major conditioning processes: acquisition, extinction, spontaneous recovery, generalization, and discrimination.

Acquisition

23-4 In classical conditioning, what are the processes of acquisition, extinction, spontaneous recovery, generalization, and discrimination?

|| Check yourself: If the aroma of cake baking sets your mouth to watering, what is the US? The CS? The CR? (Answers below.)
Remember:
US = Unconditioned Stimulus
UR = Unconditioned Response
CS = Conditioned Stimulus
CR = Conditioned Response ||
The cake (and its taste) are the US. The associated aroma is the CS. Salivation to the aroma is the CR.

acquisition in classical conditioning, the initial stage, when one links a neutral stimulus and an unconditioned stimulus so that the neutral stimulus begins triggering the conditioned response. In operant conditioning, the strengthening of a reinforced response.

higher-order conditioning a procedure in which the conditioned stimulus in one conditioning experience is paired with a new neutral stimulus, creating a second (often weaker) conditioned stimulus. For example, an animal that has learned that a tone predicts food might then learn that a light predicts the tone and begin responding to the light alone. (Also called second-order conditioning.)

To understand the acquisition, or initial learning, of the stimulus-response relationship, Pavlov and his associates had to confront the question of timing: How much time should elapse between presenting the neutral stimulus (the tone, the light, the touch) and the unconditioned stimulus? In most cases, not much—half a second usually works well.

What do you suppose would happen if the food (US) appeared before the tone (CS) rather than after? Would conditioning occur? Not likely. With but a few exceptions, conditioning doesn’t happen when the CS follows the US. Remember, classical conditioning is biologically adaptive because it helps humans and other animals prepare for good or bad events. To Pavlov’s dogs, the tone (CS) signaled an important biological event—the arrival of food (US). To deer in the forest, the snapping of a twig (CS) may signal a predator’s approach (US). If the good or bad event had already occurred, the CS would not likely signal anything significant.

Michael Domjan (1992, 1994, 2005) showed how a CS can signal another important biological event, by conditioning the sexual arousal of male Japanese quail. Just before presenting an approachable female, the researchers turned on a red light. Over time, as the red light continued to herald the female’s arrival, the light caused the male quail to become excited. They developed a preference for their cage’s red-light district, and when a female appeared, they mated with her more quickly and released more semen and sperm (Matthews et al., 2007). All in all, the quail’s capacity for classical conditioning gives it a reproductive edge. Again we see the larger lesson: Conditioning helps an animal survive and reproduce—by responding to cues that help it gain food, avoid dangers, locate mates, and produce offspring (Hollis, 1997).
In humans, too, objects, smells, and sights associated with sexual pleasure—even a geometric figure in one experiment—can become conditioned stimuli for sexual arousal (Byrne, 1982). Psychologist Michael Tirrell (1990) recalls: “My first girlfriend loved onions, so I came to associate onion breath with kissing. Before long, onion breath sent tingles up and down my spine. Oh what a feeling!” (FIGURE 23.2)

Through higher-order conditioning, a new neutral stimulus can become a new conditioned stimulus. All that’s required is for it to become associated with a previously conditioned stimulus. If a tone regularly signals food and produces salivation, then a light that becomes associated with the tone may also begin to trigger salivation. Although this higher-order conditioning (also called second-order conditioning) tends to be weaker than first-stage conditioning, it influences our everyday lives. Imagine that something makes us very afraid (perhaps a pit bull dog associated with a previous dog bite). If something else, such as the sound of a barking dog, brings to mind that pit bull, the bark alone may make us feel a little afraid.

2 US = air puff; UR = blink to air puff; CS = tone after procedure; CR = blink to tone



FIGURE 23.2 An unexpected CS Onion breath (CS) does not usually produce sexual arousal. But when repeatedly paired with a passionate kiss (US), which by itself produces sexual arousal (UR), it can become a CS and do just that (CR: sexual arousal).

Associations can influence attitudes (De Houwer et al., 2001; Park et al., 2007). As Andy Field (2006) showed British children novel cartoon characters alongside either ice cream (Yum!) or Brussels sprouts (Yuk!), the children came to like best the ice-cream–associated characters. Michael Olson and Russell Fazio (2001) classically conditioned adults’ attitudes, using little-known Pokémon characters. The participants, playing the role of a security guard monitoring a video screen, viewed a stream of words, images, and Pokémon characters. Their task, they were told, was to respond to one target Pokémon character by pressing a button. Unnoticed by the participants, when two other Pokémon characters appeared on the screen, one was consistently associated with various positive words and images (such as awesome or a hot fudge sundae); the other appeared with negative words and images (such as awful or a cockroach). Without any conscious memory for the pairings, the participants formed more gut-level positive attitudes for the characters associated with the positive stimuli. Follow-up studies indicate that conditioned likes and dislikes are even stronger when people notice and are aware of the associations they have learned (De Houwer et al., 2005a,b; Pleyers et al., 2007). Cognition matters.

Extinction and Spontaneous Recovery

After conditioning, what happens if the CS occurs repeatedly without the US? Will the CS continue to elicit the CR? Pavlov discovered that when he sounded the tone again and again without presenting food, the dogs salivated less and less. Their declining salivation illustrates extinction, the diminished responding that occurs when the CS (tone) no longer signals an impending US (food). Pavlov found, however, that if he allowed several hours to elapse before sounding the tone again, the salivation to the tone would reappear spontaneously (FIGURE 23.3). This spontaneous recovery—the reappearance of a (weakened) CR after a pause—suggested to Pavlov that extinction was suppressing the CR rather than eliminating it.

extinction the diminishing of a conditioned response; occurs in classical conditioning when an unconditioned stimulus (US) does not follow a conditioned stimulus (CS); occurs in operant conditioning when a response is no longer reinforced.

spontaneous recovery the reappearance, after a pause, of an extinguished conditioned response.

FIGURE 23.3 Idealized curve of acquisition, extinction, and spontaneous recovery The rising curve shows that the CR rapidly grows stronger as the CS and US are repeatedly paired (acquisition), then weakens as the CS is presented alone (extinction). After a pause, the CR reappears (spontaneous recovery).




MODULE 23 Classical Conditioning

After breaking up with his fire-breathing heartthrob, Tirrell also experienced extinction and spontaneous recovery. He recalls that “the smell of onion breath (CS), no longer paired with the kissing (US), lost its ability to shiver my timbers. Occasionally, though, after not sensing the aroma for a long while, smelling onion breath awakens a small version of the emotional response I once felt.”



Generalization Pavlov and his students noticed that a dog conditioned to the sound of one tone also responded somewhat to the sound of a different tone that had never been paired with food. Likewise, a dog conditioned to salivate when rubbed would also drool a bit when scratched (Windholz, 1989) or when touched on a different body part (FIGURE 23.4). This tendency to respond to stimuli similar to the CS is called generalization.

FIGURE 23.4 Generalization

Pavlov demonstrated generalization by attaching miniature vibrators to various parts of a dog’s body. After conditioning salivation to stimulation of the thigh, he stimulated other areas. The closer a stimulated spot was to the dog’s thigh, the stronger the conditioned response. (From Pavlov, 1927.)

Stimulus generalization “I don’t care if she’s a tape dispenser. I love her.” (© The New Yorker Collection, 1998, Sam Gross, from cartoonbank.com. All rights reserved.)

FIGURE 23.4 (chart) Drops of saliva (0–60) by part of body stimulated (hind paw, pelvis, thigh, trunk, shoulder, foreleg, front paw), with the strongest responses from areas nearest the thigh.

FIGURE 23.5 Child abuse leaves tracks in the brain Seth Pollak (University of Wisconsin–Madison) reports that abused children’s sensitized brains react more strongly to angry faces. This generalized anxiety response may help explain why child abuse puts children at greater risk of psychological disorder. (© UW–Madison News & Public Affairs. Photo by Jeff Miller)
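The falling-off of responding with distance from the conditioned spot can be sketched as a generalization gradient. The exponential form, the decay constant, and the distances below are assumptions chosen purely for illustration; Pavlov reported the pattern, not this formula.

```python
# Illustrative generalization gradient: CR strength declines with the
# distance between a test stimulus and the original CS. The exponential
# decay and its rate are assumed for this sketch, not taken from Pavlov.
import math

def generalized_cr(peak, distance, decay=0.5):
    """CR strength for a stimulus `distance` units away from the CS."""
    return peak * math.exp(-decay * distance)

# Hypothetical distances (arbitrary units) from the conditioned site,
# echoing Figure 23.4's thigh-to-hind-paw ordering.
sites = {"thigh": 0, "pelvis": 1, "trunk": 2,
         "shoulder": 3, "foreleg": 4, "hind paw": 5}
saliva = {site: 60 * generalized_cr(1.0, d) for site, d in sites.items()}
```

The farther a stimulated spot is from the thigh, the smaller the modeled salivary response, matching the qualitative shape of Pavlov's data.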

Generalization can be adaptive, as when toddlers taught to fear moving cars also become afraid of moving trucks and motorcycles. So automatic is generalization that one Argentine writer who underwent torture still recoils with fear when he sees black shoes—his first glimpse of his torturers as they approached his cell. Generalization of anxiety reactions has been demonstrated in laboratory studies comparing abused with nonabused children (FIGURE 23.5). When shown an angry face on a computer screen, abused children exhibit dramatically stronger and longer-lasting brain-wave responses than nonabused children (Pollak et al., 1998). Because of generalization, stimuli similar to naturally disgusting or appealing objects will, by association, evoke some disgust or liking. Normally desirable foods, such as fudge, are unappealing when shaped to resemble dog feces (Rozin et al., 1986). Adults with childlike facial features (round face, large forehead, small chin, large eyes) are perceived as having childlike warmth, submissiveness, and naiveté (Berry & McArthur, 1986). In both cases, people’s emotional reactions to one stimulus generalize to similar stimuli.


Discrimination Pavlov’s dogs also learned to respond to the sound of a particular tone and not to other tones. Discrimination is the learned ability to distinguish between a conditioned stimulus (which predicts the US) and other irrelevant stimuli. Being able to recognize differences is adaptive. Slightly different stimuli can be followed by vastly different consequences. Confronted by a pit bull, your heart may race; confronted by a golden retriever, it probably will not.

generalization the tendency, once a response has been conditioned, for stimuli similar to the conditioned stimulus to elicit similar responses. discrimination in classical conditioning, the learned ability to distinguish between a conditioned stimulus and stimuli that do not signal an unconditioned stimulus.

Extending Pavlov’s Understanding

23-5 Do cognitive processes and biological constraints affect classical conditioning? In their dismissal of “mentalistic” concepts such as consciousness, Pavlov and Watson underestimated the importance of cognitive processes (thoughts, perceptions, expectations) and biological constraints on an organism’s learning capacity.

Cognitive Processes The early behaviorists believed that rats’ and dogs’ learned behaviors could be reduced to mindless mechanisms, so there was no need to consider cognition. But Robert Rescorla and Allan Wagner (1972) showed that an animal can learn the predictability of an event. If a shock always is preceded by a tone, and then may also be preceded by a light that accompanies the tone, a rat will react with fear to the tone but not to the light. Although the light is always followed by the shock, it adds no new information; the tone is a better predictor. The more predictable the association, the stronger the conditioned response. It’s as if the animal learns an expectancy, an awareness of how likely it is that the US will occur. Such experiments help explain why classical conditioning treatments that ignore cognition often have limited success. For example, people receiving therapy for alcohol dependency may be given alcohol spiked with a nauseating drug. Will they then associate alcohol with sickness? If classical conditioning were merely a matter of “stamping in” stimulus associations, we might hope so, and to some extent this does occur. However, the awareness that the nausea is induced by the drug, not the alcohol, often weakens the association between drinking alcohol and feeling sick. So, even in classical conditioning, it is (especially with humans) not simply the CS–US association but also the thought that counts.
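Rescorla and Wagner later formalized this expectancy idea in a simple error-correction rule: learning is driven by how surprising the US is. The sketch below applies that rule to the tone-and-light experiment described above; the learning rate and trial counts are illustrative assumptions, not values from their studies.

```python
# A minimal Rescorla–Wagner sketch of the blocking result described
# above. On each trial, every CS present shares an update proportional
# to the prediction error (how surprising the US is). Parameters are
# assumed for illustration.

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: list of sets of CS names; each trial ends with the US."""
    V = {}                                    # associative strength per CS
    for present in trials:
        prediction = sum(V.get(cs, 0.0) for cs in present)
        error = lam - prediction              # surprise left to explain
        for cs in present:
            V[cs] = V.get(cs, 0.0) + alpha * error
    return V

# Phase 1: the tone alone predicts the shock.
# Phase 2: the light accompanies the already-predictive tone.
V = rescorla_wagner([{"tone"}] * 20 + [{"tone", "light"}] * 20)
# By phase 2 the tone already predicts the US, so the light adds no new
# information and gains almost no associative strength: the rat reacts
# with fear to the tone but not to the light.
```

The model captures the passage's point exactly: the more predictable the association, the larger the accumulated strength, and a redundant stimulus learns almost nothing.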

Biological Predispositions Ever since Charles Darwin, scientists have assumed that all animals share a common evolutionary history and thus commonalities in their makeup and functioning. Pavlov and Watson, for example, believed the basic laws of learning were essentially similar in all animals. So it should make little difference whether one studied pigeons or people. Moreover, it seemed that any natural response could be conditioned to any neutral stimulus. As learning researcher Gregory Kimble proclaimed in 1956, “Just about any activity of which the organism is capable can be conditioned and . . . these responses can be conditioned to any stimulus that the organism can perceive” (p. 195). Twenty-five years later, Kimble (1981) humbly acknowledged that “half a thousand” scientific reports had proven him wrong. More than the early behaviorists realized, an animal’s capacity for conditioning is constrained by its biology. Each species’ predispositions prepare it to learn the associations that enhance its survival. Environments are not the whole story.

“All brains are, in essence, anticipation machines.” Daniel C. Dennett, Consciousness Explained, 1991



John Garcia As the laboring son of California farmworkers, Garcia attended school only in the off-season during his early childhood years. After entering junior college in his late twenties, and earning his Ph.D. in his late forties, he received the American Psychological Association’s Distinguished Scientific Contribution Award “for his highly original, pioneering research in conditioning and learning.” He was also elected to the National Academy of Sciences. (Courtesy of John Garcia)


Taste aversion If you became violently ill after eating mussels, you probably would have a hard time eating them again. Their smell and taste would have become a CS for nausea. This learning occurs readily because our biology prepares us to learn taste aversions to toxic foods. (Colin Young-Wolff/PhotoEdit Inc.)


John Garcia was among those who challenged the prevailing idea that all associations can be learned equally well. While researching the effects of radiation on laboratory animals, Garcia and Robert Koelling (1966) noticed that rats began to avoid drinking water from the plastic bottles in radiation chambers. Could classical conditioning be the culprit? Might the rats have linked the plastic-tasting water (a CS) to the sickness (UR) triggered by the radiation (US)? To test their hunch, Garcia and Koelling gave the rats a particular taste, sight, or sound (CS) and later also gave them radiation or drugs (US) that led to nausea and vomiting (UR). Two startling findings emerged: First, even if sickened as late as several hours after tasting a particular novel flavor, the rats thereafter avoided that flavor. This appeared to violate the notion that for conditioning to occur, the US must immediately follow the CS. Second, the sickened rats developed aversions to tastes but not to sights or sounds. This contradicted the behaviorists’ idea that any perceivable stimulus could serve as a CS. But it made adaptive sense, because for rats the easiest way to identify tainted food is to taste it. (If sickened after sampling a new food, they thereafter avoid the food—which makes it difficult to eradicate a population of “bait-shy” rats by poisoning.) Humans, too, seem biologically prepared to learn some associations rather than others. If you become violently ill four hours after eating contaminated mussels, you will probably develop an aversion to the taste of mussels but not to the sight of the associated restaurant, its plates, the people you were with, or the music you heard there. In contrast, birds, which hunt by sight, appear biologically primed to develop aversions to the sight of tainted food (Nicolaus et al., 1983). Organisms are predisposed to learn associations that help them adapt. 
Remember those Japanese quail that were conditioned to get excited by a red light that signaled a receptive female’s arrival? Michael Domjan and his colleagues (2004) report that such conditioning is even speedier, stronger, and more durable when the CS is ecologically relevant—something similar to stimuli associated with sexual activity in the natural environment, such as the stuffed head of a female quail. In the real world, observes Domjan (2005), conditioned stimuli have a natural association with the unconditioned stimuli they predict. This may help explain why we humans seem to be naturally disposed to learn associations between the color red and women’s sexuality, note Andrew Elliot and Daniela Niesta (2008). Female primates display red when nearing ovulation. In human females, enhanced bloodflow produces the red blush of flirtation and sexual excitation. Does the frequent pairing of red and sex—with Valentine’s hearts, red-light districts, and red lipstick—naturally enhance men’s attraction to women? Elliot and Niesta’s experiments consistently suggest that, without men’s awareness, it does (FIGURE 23.6).

Garcia’s early findings on taste aversion were met with an onslaught of criticism. As the German philosopher Arthur Schopenhauer (1788–1860) once said, important ideas are first ridiculed, then attacked, and finally taken for granted. In Garcia’s case, the leading journals refused to publish his work. The findings were impossible, said some critics. But, as often happens in science, Garcia and Koelling’s taste-aversion research is now basic textbook material. It is also a good example of experiments that began with the discomfort of some laboratory animals and ended by enhancing the welfare of many others. In another conditioned taste-aversion study, coyotes and wolves that were tempted into eating sheep carcasses laced with a sickening poison developed an aversion to sheep meat (Gustavson et al., 1974, 1976).
Two wolves later penned with a live sheep seemed actually to fear it. The study not only saved the sheep from their predators, but also saved the sheep-shunning coyotes and wolves from angry ranchers and farmers who had wanted to destroy them. Later applications of

Garcia and Koelling’s findings have prevented baboons from raiding African gardens, raccoons from attacking chickens, and ravens and crows from feeding on crane eggs—all while preserving predators who occupy an important ecological niche (Garcia & Gustavson, 1997).

FIGURE 23.6 Romantic red In a series of experiments that controlled for other factors (such as the brightness of the image), men (but not women) found women more attractive and sexually desirable when framed in red (Elliot & Niesta, 2008). (Courtesy of Kathryn Brownson, Hope College)

All these cases support Darwin’s principle that natural selection favors traits that aid survival. Our ancestors who readily learned taste aversions were unlikely to eat the same toxic food again and were more likely to survive and leave descendants. Nausea, like anxiety, pain, and other bad feelings, serves a good purpose. Like a low-oil warning on a car dashboard, each alerts the body to a threat (Neese, 1991).

“All animals are on a voyage through time, navigating toward futures that promote their survival and away from futures that threaten it. Pleasure and pain are the stars by which they steer.” Psychologists Daniel T. Gilbert and Timothy D. Wilson, “Prospection: Experiencing the Future,” 2007

The discovery of biological constraints affirms the value of different levels of analysis, including the biological and cognitive (FIGURE 23.7), when we seek to understand phenomena such as learning. And once again, we see an important principle at work: Learning enables animals to adapt to their environments. Responding to stimuli that announce significant events, such as food or pain, is adaptive. So is a genetic predisposition to associate a CS with a US that follows predictably and immediately: Causes often immediately precede effects.

“Once bitten, twice shy.” G. F. Northall, Folk-Phrases, 1894

Often, but not always, as we saw in the taste-aversion findings. Adaptation also sheds light on this exception. The ability to discern that effect need not follow cause immediately—that poisoned food can cause sickness quite a while after it has been eaten—gives animals an adaptive advantage. Occasionally, however, our predispositions trick us. When chemotherapy triggers nausea and vomiting more than an hour following treatment, cancer patients may over time develop classically conditioned nausea (and sometimes anxiety) to the sights, sounds, and smells associated with the clinic (FIGURE 23.8) (Hall, 1997). Merely returning to the clinic’s waiting room or seeing the nurses can provoke these conditioned feelings (Burish & Carey, 1986; Davey, 1992). Under normal circumstances, such revulsion to sickening stimuli would be adaptive.

FIGURE 23.7 Biopsychosocial influences on learning Today’s learning theorists recognize that our learning results not only from environmental experiences, but also from cognitive and biological influences. Biological influences: genetic predispositions, unconditioned responses, adaptive responses. Psychological influences: previous experiences, predictability of associations, generalization, discrimination. Social-cultural influences: culturally learned preferences; motivation, affected by the presence of others. All three feed into learning.


FIGURE 23.8 Nausea conditioning in cancer patients Before conditioning, the drug (US) produces nausea (UR), and the waiting room (CS) produces no nausea. During conditioning, the waiting room (CS) is paired with the drug (US), which produces nausea (UR). After conditioning, the waiting room (CS) alone produces nausea (CR).

Pavlov’s Legacy

23-6 Why is Pavlov’s work important?

“[Psychology’s] factual and theoretical developments in this century—which have changed the study of mind and behavior as radically as genetics changed the study of heredity—have all been the product of objective analysis—that is to say, behavioristic analysis.” Psychologist Donald Hebb (1980)

John B. Watson Watson (1924) admitted to “going beyond my facts” when offering his famous boast: “Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select— doctor, lawyer, artist, merchant-chief, and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.”

What, then, remains of Pavlov’s ideas? A great deal. Most psychologists agree that classical conditioning is a basic form of learning. Judged by today’s knowledge of cognitive processes and biological predispositions, Pavlov’s ideas were incomplete. But if we see further than Pavlov did, it is because we stand on his shoulders. Why does Pavlov’s work remain so important? If he had merely taught us that old dogs can learn new tricks, his experiments would long ago have been forgotten. Why should we care that dogs can be conditioned to salivate at the sound of a tone? The importance lies first in this finding: Many other responses to many other stimuli can be classically conditioned in many other organisms—in fact, in every species tested, from earthworms to fish to dogs to monkeys to people (Schwartz, 1984). Thus, classical conditioning is one way that virtually all organisms learn to adapt to their environment. Second, Pavlov showed us how a process such as learning can be studied objectively. He was proud that his methods involved virtually no subjective judgments or guesses about what went on in a dog’s mind. The salivary response is a behavior measurable in cubic centimeters of saliva. Pavlov’s success therefore suggested a scientific model for how the young discipline of psychology might proceed—by isolating the basic building blocks of complex behaviors and studying them with objective laboratory procedures.

Applications of Classical Conditioning

23-7 What have been some applications of classical conditioning? In countless areas of psychology, including consciousness, motivation, emotion, health, psychological disorders, and therapy, Pavlov’s principles of classical conditioning are now being used to improve human health and well-being. Two examples:

• Former drug users often feel a craving when they are again in the drug-using context—with people or in places they associate with previous highs. Thus, drug counselors advise addicts to steer clear of people and settings that may trigger these cravings (Siegel, 2005).

• Classical conditioning even works on the body’s disease-fighting immune system. When a particular taste accompanies a drug that influences immune responses, the taste by itself may come to produce an immune response (Ader & Cohen, 1985).

Pavlov’s work also provided a basis for John Watson’s (1913) idea that human emotions and behaviors, though biologically influenced, are mainly a bundle of conditioned responses. Working with an 11-month-old named Albert, Watson and Rosalie Rayner (1920; Harris, 1979) showed how specific fears might be conditioned. Like


|| In Watson and Rayner’s experiment, what was the US? The UR? The CS? The CR? (Answers below.) || The US was the loud noise; the UR was the startled fear response; the CS was the rat; the CR was fear.

most infants, “Little Albert” feared loud noises but not white rats. Watson and Rayner presented a white rat and, as Little Albert reached to touch it, struck a hammer against a steel bar just behind his head. After seven repeats of seeing the rat and hearing the frightening noise, Albert burst into tears at the mere sight of the rat (an ethically troublesome study by today’s standards). What is more, five days later Albert showed generalization of his conditioned response by reacting with fear to a rabbit, a dog, and a sealskin coat, but not to dissimilar objects such as toys. Although Little Albert’s fate is unknown, Watson’s is not. After losing his professorship at Johns Hopkins University over an affair with Rayner (whom he later married), he became the J. Walter Thompson advertising agency’s resident psychologist. There he used his knowledge of associative learning to conceive many successful campaigns, including one for Maxwell House that helped make the “coffee break” an American custom (Hunt, 1993). Some psychologists, noting that Albert’s fear wasn’t learned quickly, had difficulty repeating Watson and Rayner’s findings with other children. Nevertheless, Little Albert’s case has had legendary significance for many psychologists. Some have wondered if each of us might not be a walking repository of conditioned emotions (see Close-Up: Trauma as Classical Conditioning). Might extinction procedures or even new conditioning help us change our unwanted responses to emotion-arousing stimuli? One patient, who for 30 years had feared going into an elevator alone, did just that. Following his therapist’s advice, he forced himself to enter 20 elevators a day. Within 10 days, his fear had nearly vanished (Ellis & Becker, 1982). This dramatic turnaround is but one example of how psychologists use behavioral techniques to treat emotional disorders and promote personal growth.

CLOSE-UP

Trauma as Classical Conditioning “A burnt child dreads the fire,” says a medieval proverb. Experiments with dogs reveal that, indeed, if a painful stimulus is sufficiently powerful, a single event is sometimes enough to traumatize the animal when it again faces the situation. The human counterparts to these experiments can be tragic, as illustrated by one woman’s experience of being attacked and raped, and conditioned to a period of fear. Her fear (CR) was most powerfully associated with particular locations and people (CS), but it generalized to other places and people. Note, too, how her traumatic experience robbed her of the normally relaxing associations with such stimuli as home and bed. Four months ago I was raped. In the middle of the night I awoke to the sound of someone outside my bedroom. Thinking my housemate was coming home, I called

out her name. Someone began walking slowly toward me, and then I realized. I screamed and fought, but there were two of them. One held my legs, while the other put a hand over my mouth and a knife to my throat and said, “Shut up, bitch, or we’ll kill you.” Never have I been so terrified and helpless. They both raped me, one brutally. As they then searched my room for money and valuables, my housemate came home. They brought her into my room, raped her, and left us both tied up on my bed. We never slept another night in that apartment. We were too terrified. Still, when I go to bed at night—always with the bedroom light left on—the memory of them entering my room repeats itself endlessly. I was an independent person who had lived alone or with other women for four years; now I can’t even think about spending a night alone. When I drive by our old apartment, or when I have to go

into an empty house, my heart pounds and I sweat. I am afraid of strangers, especially men, and the more they resemble my attackers the more I fear them. My housemate shares many of my fears and is frightened when entering our new apartment. I’m afraid to stay in the same town, I’m afraid it will happen again, I’m afraid to go to bed. I dread falling asleep.

Eleven years later this woman could report—as do many trauma victims (Gluhoski & Wortman, 1996)—that her conditioned fears had mostly extinguished: The frequency and intensity of my fears have subsided. Still, I remain cautious about personal safety and occasionally have nightmares about my experience. But more important is my renewed ability to laugh, love, and trust—both old friends and new. Life is once again joyful. I have survived. (From personal correspondence, with permission.)


Review Classical Conditioning

23-1 What are some basic forms of learning? Learning is a relatively permanent change in an organism’s behavior due to experience. In associative learning, we learn to associate two stimuli (as in classical conditioning) or a response and its consequences (as in operant conditioning). In observational learning, we learn by watching others’ experiences and examples.

23-2 What is classical conditioning, and how did Pavlov’s work influence behaviorism? Classical conditioning is a type of learning in which an organism comes to associate stimuli. Pavlov’s work on classical conditioning laid the foundation for behaviorism, the view that psychology should be an objective science that studies behavior without reference to mental processes.

23-3 How does a neutral stimulus become a conditioned stimulus? In classical conditioning, a UR is an event that occurs naturally (such as salivation) in response to some stimulus. A US is something that naturally and automatically (without learning) triggers the unlearned response (as food in the mouth triggers salivation). A CS is a previously irrelevant stimulus (such as a bell) that, through learning, comes to be associated with some unlearned response (salivating). A CR is the learned response (salivating) to the originally irrelevant but now conditioned stimulus.

23-4 In classical conditioning, what are the processes of acquisition, extinction, spontaneous recovery, generalization, and discrimination? In classical conditioning, acquisition is associating a CS with the US. Acquisition occurs most readily when a CS is presented just before (ideally, about a half-second before) a US, preparing the organism for the upcoming event. This finding supports the view that classical conditioning is biologically adaptive. Extinction is diminished responding when the CS no longer signals an impending US. Spontaneous recovery is the appearance of a formerly extinguished response, following a rest period. Generalization is the tendency to respond to stimuli that are similar to a CS. Discrimination is the learned ability to distinguish between a CS and other irrelevant stimuli.

23-5 Do cognitive processes and biological constraints affect classical conditioning? The behaviorists’ optimism that in any species, any response can be conditioned to any stimulus has been tempered. Conditioning principles, we now know, are cognitively and biologically constrained. In classical conditioning, animals learn when to expect a US, and they may be aware of the link between stimuli and responses. Moreover, because of biological predispositions, learning some associations is easier than learning others. Learning is adaptive: Each species learns behaviors that aid its survival.

23-6 Why is Pavlov’s work important? Pavlov taught us that significant psychological phenomena can be studied objectively, and that classical conditioning is a basic form of learning that applies to all species. Later research modified this finding somewhat by showing that in many species cognition and biological predispositions place some limits on conditioning.

23-7 What have been some applications of classical conditioning? Classical conditioning techniques are used in treatment programs for those recovering from drug abuse and to condition more appropriate responses in therapy for emotional disorders. The body’s immune system also appears to respond to classical conditioning.

Terms and Concepts to Remember

associative learning, p. 290
classical conditioning, p. 290
learning, p. 290
behaviorism, p. 290
unconditioned response (UR), p. 291
unconditioned stimulus (US), p. 291
conditioned response (CR), p. 291
conditioned stimulus (CS), p. 291
acquisition, p. 292
higher-order conditioning, p. 292
extinction, p. 293
spontaneous recovery, p. 293
generalization, p. 294
discrimination, p. 295

Test Yourself 1. As we develop, we learn cues that lead us to expect and prepare for good and bad events. We learn to repeat behaviors that bring rewards. And we watch others and learn. What do psychologists call these three types of learning?

2. In slasher movies, sexually arousing images of women are sometimes paired with violence against women. Based on classical conditioning principles, what might be an effect of this pairing? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. Can you remember some example from your childhood of learning through classical conditioning—perhaps salivating at the sound or smell of some delicious food cooking in your family kitchen? Can you remember an example of operant conditioning, when you repeated (or decided not to repeat) a behavior because you liked (or hated) its consequences? Can you recall watching someone else perform some act and later repeating or avoiding that act?

2. How have your emotions or behaviors been classically conditioned?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 24

Operant Conditioning

Skinner’s Experiments
Extending Skinner’s Understanding
Skinner’s Legacy

24-1 What is operant conditioning, and how does it differ from classical conditioning? It’s one thing to classically condition a dog to salivate at the sound of a tone, or a child to fear moving cars. To teach an elephant to walk on its hind legs or a child to say please, we must turn to another type of associative learning—operant conditioning. Classical conditioning and operant conditioning are both forms of associative learning, yet their difference is straightforward:

• Classical conditioning forms associations between stimuli (a CS and the US it signals). It also involves respondent behavior—actions that are automatic responses to a stimulus (such as salivating in response to meat powder and later in response to a tone).

• In operant conditioning, organisms associate their own actions with consequences. Actions followed by reinforcers increase; those followed by punishers decrease. Behavior that operates on the environment to produce rewarding or punishing stimuli is called operant behavior.

We can therefore distinguish classical from operant conditioning by asking: Is the organism learning associations between events it does not control (classical conditioning)? Or is it learning associations between its behavior and resulting events (operant conditioning)?

Skinner’s Experiments

associative learning learning that certain events occur together. The events may be two stimuli (as in classical conditioning) or a response and its consequences (as in operant conditioning). respondent behavior behavior that occurs as an automatic response to some stimulus.

operant conditioning a type of learning in which behavior is strengthened if followed by a reinforcer or diminished if followed by a punisher. operant behavior behavior that operates on the environment, producing consequences.

law of effect Thorndike’s principle that behaviors followed by favorable consequences become more likely, and that behaviors followed by unfavorable consequences become less likely.

FIGURE 24.1 Cat in a puzzle box Thorndike (1898) used a fish reward to entice cats to find their way out of a puzzle box (right) through a series of maneuvers. The cats’ performance tended to improve with successive trials (left), illustrating Thorndike’s law of effect. (Adapted from Thorndike, 1898.)

B. F. Skinner (1904–1990) was a college English major and an aspiring writer who, seeking a new direction, entered graduate school in psychology. He went on to become modern behaviorism’s most influential and controversial figure. Skinner’s work elaborated what psychologist Edward L. Thorndike (1874–1949) called the law of effect: Rewarded behavior is likely to recur (FIGURE 24.1). Using Thorndike’s law of effect as a starting point, Skinner developed a behavioral technology that revealed principles of behavior control. These principles also enabled him to teach pigeons such unpigeonlike behaviors as walking in a figure 8, playing Ping-Pong, and keeping a missile on course by pecking at a screen target.

Contrasting Classical and Operant Conditioning

FIGURE 24.1 (chart) Time required to escape (seconds, 0–240) declined across successive trials in the puzzle box. (Yale University Library)

301


MODULE 24 Operant Conditioning

FIGURE 24.2 A Skinner box Inside the box, the rat presses a bar for a food reward. Outside, a measuring device (not shown here) records the animal’s accumulated responses. (Diagram labels: light, bar, speaker, water/food dispenser.)

For his pioneering studies, Skinner designed an operant chamber, popularly known as a Skinner box (FIGURE 24.2). The box has a bar or key that an animal presses or pecks to release a reward of food or water, and a device that records these responses. Operant conditioning experiments have done far more than teach us how to pull habits out of a rat. They have explored the precise conditions that foster efficient and enduring learning.

Shaping Behavior



A discriminating creature University of Windsor psychologist Dale Woodyard uses a food reward to train this manatee to discriminate between objects of different shapes, colors, and sizes. Manatees remember such responses for a year or more.

Shaping rats to save lives A Gambian giant pouched rat, having been shaped to sniff out land mines, receives a bite of banana after successfully locating a mine during training in Mozambique.

In his experiments, Skinner used shaping, a procedure in which reinforcers, such as food, gradually guide an animal’s actions toward a desired behavior. Imagine that you wanted to condition a hungry rat to press a bar. First, you would watch how the animal naturally behaves, so that you could build on its existing behaviors. You might give the rat a food reward each time it approaches the bar. Once the rat is approaching regularly, you would require it to move closer before rewarding it, then closer still. Finally, you would require it to touch the bar before you gave it the food. With this method of successive approximations, you reward responses that are ever-closer to the final desired behavior, and you ignore all other responses. By making rewards contingent on desired behaviors, researchers and animal trainers gradually shape complex behaviors.

Shaping can also help us understand what nonverbal organisms perceive. Can a dog distinguish red and green? Can a baby hear the difference between lower- and higher-pitched tones? If we can shape them to respond to one stimulus and not to another, then we know they can perceive the difference. Such experiments have even shown that some animals can form concepts. If an experimenter reinforces a pigeon for pecking after seeing a human face, but not after seeing other images, the pigeon learns to recognize human faces (Herrnstein & Loveland, 1964). In this experiment, a face is a discriminative stimulus; like a green traffic light, it signals that a response will be reinforced.
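The method of successive approximations is, in effect, a loop in which the trainer’s criterion tightens each time it is met. The toy simulation below is purely illustrative (the number line, the pull toward the bar, and all parameter values are invented for this sketch, not drawn from the text): a randomly wandering “rat” is rewarded whenever it comes within the current criterion distance of the bar, each reward nudges behavior toward the bar, and the criterion then demands a closer approximation.

```python
import random

def shape_bar_press(trials=2000, seed=42):
    """Toy model of shaping by successive approximations (illustrative only).

    A 'rat' wanders along a number line; the bar sits at position 10.
    Whenever the rat comes within the current criterion distance of the bar,
    it is reinforced, which (a) draws its behavior toward the bar and
    (b) lets the trainer tighten the criterion for the next reward.
    """
    rng = random.Random(seed)
    bar = 10.0           # location of the bar (invented for this sketch)
    pos = 0.0            # the rat starts far from the bar
    criterion = 8.0      # at first, reward anything within 8 units
    presses = 0
    for _ in range(trials):
        pos += rng.uniform(-1.0, 1.0)              # exploratory movement
        if abs(bar - pos) <= criterion:            # close enough for the current criterion?
            pos += 0.3 * (bar - pos)               # reinforcement draws behavior toward the bar
            criterion = max(0.3, criterion - 0.5)  # require an ever-closer approximation
        if abs(bar - pos) < 0.5:                   # touching the bar counts as a press
            presses += 1
    return criterion, presses

final_criterion, presses = shape_bar_press()
```

With these toy parameters the criterion ratchets down from 8 units to its floor while the rat ends up pressing the bar repeatedly; reward too loosely, or never tighten the requirement, and the behavior never converges on the bar, which mirrors the advice to reinforce only ever-closer responses and ignore all others.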



HI AND LOIS

Billy: Could you tie my shoes?
Father: (Continues reading paper.)
Billy: Dad, I need my shoes tied.
Father: Uh, yeah, just a minute.
Billy: DAAAAD! TIE MY SHOES!
Father: How many times have I told you not to whine? Now, which shoe do we do first?

Billy’s whining is reinforced, because he gets something desirable—his dad’s attention. Dad’s response is reinforced because it gets rid of something aversive—Billy’s whining.

Or consider a teacher who pastes gold stars on a wall chart after the names of children scoring 100 percent on spelling tests. As everyone can then see, some children consistently do perfect work. The others, who take the same test and may have worked harder than the academic all-stars, get no rewards. The teacher would be better advised to apply the principles of operant conditioning—to reinforce all spellers for gradual improvements (successive approximations toward perfect spelling of words they find challenging).

Types of Reinforcers

24-2 What are the basic types of reinforcers?

People often refer rather loosely to the power of “rewards.” This idea gains a more precise meaning in Skinner’s concept of a reinforcer: any event that strengthens (increases the frequency of) a preceding response. A reinforcer may be a tangible reward, such as food or money. It may be praise or attention—even being yelled at, for a child hungry for attention. Or it may be an activity—borrowing the family car after doing the dishes, or taking a break after an hour of study.

Although anything that serves to increase behavior is a reinforcer, reinforcers vary with circumstances. What’s reinforcing to one person (rock concert tickets) may not be to another. What’s reinforcing in one situation (food when hungry) may not be in another.

Up to now, we’ve really been discussing positive reinforcement, which strengthens a response by presenting a typically pleasurable stimulus after a response. But there are two basic kinds of reinforcement (TABLE 24.1 on the next page). Negative reinforcement strengthens a response by reducing or removing something undesirable or unpleasant, as when an organism escapes an aversive situation. Taking aspirin may

learning a relatively permanent change in an organism’s behavior due to experience.

shaping an operant conditioning procedure in which reinforcers guide behavior toward closer and closer approximations of the desired behavior.
reinforcer in operant conditioning, any event that strengthens the behavior it follows.
positive reinforcement increasing behaviors by presenting positive stimuli, such as food. A positive reinforcer is any stimulus that, when presented after a response, strengthens the response.
negative reinforcement increasing behaviors by stopping or reducing negative stimuli, such as shock. A negative reinforcer is any stimulus that, when removed after a response, strengthens the response. (Note: negative reinforcement is not punishment.)

Positive reinforcement A heat lamp positively reinforces this Taronga Zoo meerkat’s behavior during a cold snap in Sydney, Australia.



operant chamber in operant conditioning research, a chamber (also known as a Skinner box) containing a bar or key that an animal can manipulate to obtain a food or water reinforcer; attached devices record the animal’s rate of bar pressing or key pecking.

After being trained to discriminate among flowers, people, cars, and chairs, pigeons can usually identify the category in which a new pictured object belongs (Bhatt et al., 1988; Wasserman, 1993). They have even been trained to discriminate between Bach’s music and Stravinsky’s (Porter & Neuringer, 1984).

In everyday life, we continually reward and shape others’ behavior, said Skinner, but we often do so unintentionally. Billy’s whining, for example, annoys his mystified parents, but look how they typically deal with Billy:



TABLE 24.1 Ways to Increase Behavior

Operant Conditioning Term | Description | Examples
Positive reinforcement | Add a desirable stimulus | Getting a hug; receiving a paycheck
Negative reinforcement | Remove an aversive stimulus | Fastening seatbelt to turn off beeping

|| Remember whining Billy? In that example, whose behavior was positively reinforced and whose was negatively reinforced? (Answer below.) ||

relieve your headache, and pushing the snooze button will silence your annoying alarm. These welcome results (end of pain, end of alarm) provide negative reinforcement and increase the odds that you will repeat these behaviors. For drug addicts, the negative reinforcement of ending withdrawal pangs can be a compelling reason to resume using (Baker et al., 2004).

Note that contrary to popular usage, negative reinforcement is not punishment. (Advice: Repeat the last five words in your mind, because this is one of psychology’s most often misunderstood concepts.) Rather, negative reinforcement removes a punishing (aversive) event.

Sometimes negative and positive reinforcement coincide. Imagine a worried student who, after goofing off and getting a bad exam grade, studies harder for the next exam. This increased effort may be negatively reinforced by reduced anxiety, and positively reinforced by a better grade. Whether it works by reducing something aversive, or by giving something desirable, reinforcement is any consequence that strengthens behavior.

Primary and Conditioned Reinforcers

Primary reinforcers—getting food when hungry or having a painful headache go away—are unlearned. They are innately satisfying. Conditioned reinforcers, also called secondary reinforcers, get their power through learned association with primary reinforcers. If a rat in a Skinner box learns that a light reliably signals that food is coming, the rat will work to turn on the light. The light has become a conditioned reinforcer associated with food. Our lives are filled with conditioned reinforcers—money, good grades, a pleasant tone of voice—each of which has been linked with more basic rewards.

If money is a conditioned reinforcer—if people’s desire for money is derived from their desire for food—then hunger should also make people more money-hungry, reasoned one European research team (Briers et al., 2006). Indeed, in their experiments, people were less likely to donate to charity when food deprived, and less likely to share money with fellow participants when in a room with hunger-arousing aromas.

Billy’s whining was positively reinforced, because Billy got something desirable—his father’s attention. His dad’s response to the whining (doing what Billy wanted) was negatively reinforced, because it got rid of Billy’s annoying whining.

Immediate and Delayed Reinforcers

“Oh, not bad. The light comes on, I press the bar, they write me a check. How about you?”

Let’s return to the imaginary shaping experiment in which you were conditioning a rat to press a bar. Before performing this “wanted” behavior, the hungry rat will engage in a sequence of “unwanted” behaviors—scratching, sniffing, and moving around. If you present food immediately after any one of these behaviors, the rat will likely repeat that rewarded behavior. But what if the rat presses the bar while you are distracted, and you delay giving the reinforcer? If the delay lasts longer than 30 seconds, the rat will not learn to press the bar. You will have reinforced other incidental behaviors—more sniffing and moving—that intervened after the bar press.

Unlike rats, humans do respond to delayed reinforcers: the paycheck at the end of the week, the good grade at the end of the semester, the trophy at the end of the season. Indeed, to function effectively we must learn to delay gratification. In laboratory testing, some 4-year-olds show this ability. In choosing a candy, they prefer having a big reward tomorrow to munching on a small one right now. Learning to control our impulses in order to achieve more valued rewards is a big step toward maturity (Logue, 1998a,b). No wonder children who make such choices tend to become socially competent and high-achieving adults (Mischel et al., 1989).



Reinforcement Schedules

24-3 How do different reinforcement schedules affect behavior?

So far, most of our examples have assumed continuous reinforcement: reinforcing the desired response every time it occurs. Under such conditions, learning occurs rapidly, which makes continuous reinforcement preferable until a behavior is mastered. But extinction also occurs rapidly. When reinforcement stops—when we stop delivering food after the rat presses the bar—the behavior soon stops. If a normally dependable candy machine fails to deliver a chocolate bar twice in a row, we stop putting money into it (although a week later we may exhibit spontaneous recovery by trying again).

Real life rarely provides continuous reinforcement. Salespeople do not make a sale with every pitch, nor do anglers get a bite with every cast. But they persist because their efforts have occasionally been rewarded. This persistence is typical with partial (intermittent) reinforcement schedules, in which responses are sometimes reinforced, sometimes not. Although initial learning is slower, intermittent reinforcement produces greater resistance to extinction than is found with continuous reinforcement. Imagine a pigeon that has learned to peck a key to obtain food. When the experimenter gradually phases out the delivery of food until it occurs only rarely and unpredictably, pigeons may peck 150,000 times without a reward (Skinner, 1953). Slot machines reward gamblers in much the same way—occasionally and unpredictably. And like pigeons, slot players keep trying, time and time again. With intermittent reinforcement, hope springs eternal.

Lesson for parents: Partial reinforcement also works with children. Occasionally giving in to children’s tantrums for the sake of peace and quiet intermittently reinforces the tantrums. This is the very best procedure for making a behavior persist.

Skinner (1961) and his collaborators compared four schedules of partial reinforcement. Some are rigidly fixed, some unpredictably variable.

Fixed-ratio schedules reinforce behavior after a set number of responses. Just as coffee shops reward us with a free drink after every 10 purchased, laboratory animals may be reinforced on a fixed ratio of, say, one reinforcer for every 30 responses. Once conditioned, the animal will pause only briefly after a reinforcer and will then return to a high rate of responding (FIGURE 24.3 on the next page).

Variable-ratio schedules provide reinforcers after an unpredictable number of responses. This is what slot-machine players and fly-casting anglers experience—unpredictable reinforcement—and what makes gambling and fly fishing so hard to extinguish even when both are getting nothing for something. Like the fixed-ratio schedule, the variable-ratio schedule produces high rates of responding, because reinforcers increase as the number of responses increases.

Fixed-interval schedules reinforce the first response after a fixed time period. Like people checking more frequently for the mail as the delivery time approaches, or checking to see if the Jell-O has set, pigeons on a fixed-interval schedule peck a key more frequently as the anticipated time for reward draws near, producing a choppy stop-start pattern (see Figure 24.3) rather than a steady rate of response.

primary reinforcer an innately reinforcing stimulus, such as one that satisfies a biological need.

conditioned reinforcer a stimulus that gains its reinforcing power through its association with a primary reinforcer; also known as a secondary reinforcer.
continuous reinforcement reinforcing the desired response every time it occurs.

partial (intermittent) reinforcement reinforcing a response only part of the time; results in slower acquisition of a response but much greater resistance to extinction than does continuous reinforcement.

fixed-ratio schedule in operant conditioning, a reinforcement schedule that reinforces a response only after a specified number of responses.
variable-ratio schedule in operant conditioning, a reinforcement schedule that reinforces a response after an unpredictable number of responses.
fixed-interval schedule in operant conditioning, a reinforcement schedule that reinforces a response only after a specified time has elapsed.

“The charm of fishing is that it is the pursuit of what is elusive but attainable, a perpetual series of occasions for hope.” Scottish author John Buchan (1875–1940)

|| Door-to-door salespeople are reinforced by which schedule? People checking the oven to see if the cookies are done are on which schedule? Airline frequent-flyer programs that offer a free flight after every 25,000 miles of travel use which reinforcement schedule? (Answers below.) ||

Door-to-door salespeople are reinforced on a variable-ratio schedule (after varying numbers of rings). Cookie checkers are reinforced on a fixed-interval schedule. Frequent-flyer programs use a fixed-ratio schedule.

But to our detriment, small but immediate consequences (the enjoyment of watching late-night TV, for example) are sometimes more alluring than big but delayed consequences (tomorrow’s sluggishness). For many teens, the immediate gratification of risky, unprotected sex in passionate moments prevails over the delayed gratifications of safe sex or saved sex (Loewenstein & Furstenberg, 1991). And for too many of us, the immediate rewards of today’s gas-guzzling vehicles, air travel, and air conditioning have prevailed over the bigger future consequences of global climate change, rising seas, and extreme weather.

FIGURE 24.3 Intermittent reinforcement schedules Skinner’s laboratory pigeons produced these response patterns to each of four reinforcement schedules. (Reinforcers are indicated by diagonal marks.) For people, as for pigeons, reinforcement linked to number of responses (a ratio schedule) produces a higher response rate than reinforcement linked to amount of time elapsed (an interval schedule). But the predictability of the reward also matters. An unpredictable (variable) schedule produces more consistent responding than does a predictable (fixed) schedule.



[Figure 24.3 graph: cumulative number of responses over time (minutes) under fixed-ratio, variable-ratio, fixed-interval, and variable-interval schedules, with diagonal marks indicating reinforcers. The fixed-interval curve shows rapid responding near the time for reinforcement; the variable-interval curve shows steady responding.]

Variable-interval schedules reinforce the first response after varying time intervals. Like the “You’ve got mail” that finally rewards persistence in rechecking for email, variable-interval schedules tend to produce slow, steady responding. This makes sense, because there is no knowing when the waiting will be over (TABLE 24.2).

TABLE 24.2 Schedules of Reinforcement

 | Fixed | Variable
Ratio | Every so many: reinforcement after every nth behavior, such as buy 10 coffees, get 1 free, or pay per product unit produced | After an unpredictable number: reinforcement after a random number of behaviors, as when playing slot machines or fly-casting
Interval | Every so often: reinforcement for behavior after a fixed time, such as Tuesday discount prices | Unpredictably often: reinforcement for behavior after a random amount of time, as in checking for e-mail
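Each of the four schedules is, at bottom, a simple decision rule for whether a given response earns a reinforcer. The sketch below is a hypothetical illustration (the class names, parameters, and the toy pecking loop are invented for this example, not taken from the text): each schedule is modeled as an object that answers one question per response, namely whether that response is reinforced.

```python
import random

class FixedRatio:
    """Reinforce every nth response (e.g., a free drink after every 10 purchases)."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True            # reinforcer delivered
        return False

class VariableRatio:
    """Reinforce after an unpredictable number of responses (mean of about n)."""
    def __init__(self, n, rng=random):
        self.n, self.rng = n, rng
        self._new_requirement()
    def _new_requirement(self):
        self.required = self.rng.randint(1, 2 * self.n - 1)  # unpredictable count
        self.count = 0
    def respond(self):
        self.count += 1
        if self.count >= self.required:
            self._new_requirement()
            return True
        return False

class FixedInterval:
    """Reinforce the first response after a fixed time has elapsed."""
    def __init__(self, seconds):
        self.seconds, self.last = seconds, 0.0
    def respond(self, now):
        if now - self.last >= self.seconds:
            self.last = now
            return True
        return False

class VariableInterval:
    """Reinforce the first response after an unpredictable delay (mean of about seconds)."""
    def __init__(self, seconds, rng=random):
        self.seconds, self.rng, self.last = seconds, rng, 0.0
        self.wait = self.rng.uniform(0, 2 * seconds)
    def respond(self, now):
        if now - self.last >= self.wait:
            self.last = now
            self.wait = self.rng.uniform(0, 2 * self.seconds)
            return True
        return False

# A pigeon pecking on a fixed-ratio-3 schedule: every third peck pays off.
fr = FixedRatio(3)
print([fr.respond() for _ in range(6)])   # [False, False, True, False, False, True]
```

Ratio schedules pay off per response, so faster responding brings reinforcers sooner; interval schedules pay off per elapsed time, which is one way to see why ratio schedules sustain the higher response rates shown in Figure 24.3.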

Animal behaviors differ, yet Skinner (1956) contended that the reinforcement principles of operant conditioning are universal. It matters little, he said, what response, what reinforcer, or what species you use. The effect of a given reinforcement schedule is pretty much the same: “Pigeon, rat, monkey, which is which? It doesn’t matter. . . . Behavior shows astonishingly similar properties.”

Punishment

24-4 How does punishment affect behavior?

variable-interval schedule in operant conditioning, a reinforcement schedule that reinforces a response at unpredictable time intervals.
punishment an event that decreases the behavior that it follows.

Reinforcement increases a behavior; punishment does the opposite. A punisher is any consequence that decreases the frequency of a preceding behavior (TABLE 24.3). Swift and sure punishers can powerfully restrain unwanted behavior. The rat that is shocked after touching a forbidden object and the child who loses a treat after running into the street will learn not to repeat the behavior. Some punishments, though unintentional, are nevertheless quite effective: A dog that has learned to come running at the sound of an electric can opener will stop coming if its owner starts running the machine to attract the dog and banish it to the basement.


TABLE 24.3 Ways to Decrease Behavior

Type of Punisher | Description | Possible Examples
Positive punishment | Administer an aversive stimulus | Spanking; a parking ticket
Negative punishment | Withdraw a desirable stimulus | Time-out from privileges (such as time with friends); revoked driver’s license

Sureness and swiftness are also marks of effective criminal punishment, note John Darley and Adam Alter (in press). Studies show that criminal behavior, much of it impulsive, is not deterred by the threat of severe sentences. Thus, when Arizona introduced an exceptionally harsh sentence for first-time drunk drivers, it did not affect the drunk-driving rate. But when Kansas City started patrolling a high crime area to increase the sureness and swiftness of punishment, crime dropped dramatically.

So, how should we interpret the punishment studies in relation to parenting practices? Many psychologists and supporters of nonviolent parenting note four drawbacks of physically punishing children (Gershoff, 2002; Marshall, 2002).

1. Punished behavior is suppressed, not forgotten. This suppression, though temporary, may (negatively) reinforce parents’ punishing behavior. The child swears, the parent swats, the parent hears no more swearing and feels the punishment successfully stopped the behavior. No wonder spanking is a hit with so many U.S. parents of 3- and 4-year-olds—more than 9 in 10 of whom acknowledge spanking their children (Kazdin & Benjet, 2003).

2. Punishment teaches discrimination. Was the punishment effective in putting an end to the swearing? Or did the child simply learn that it’s not okay to swear around the house, but it is okay to swear elsewhere?

3. Punishment can teach fear. The child may associate fear not only with the undesirable behavior but also with the person who delivered the punishment or the place it occurred. Thus, children may learn to fear a punishing teacher and try to avoid school. For such reasons, most European countries now ban hitting children in schools and child-care institutions (Leach, 1993, 1994). Eleven countries, including those in Scandinavia, further outlaw hitting by parents, giving children the same legal protection given to spouses (EPOCH, 2000).

4. Physical punishment may increase aggressiveness by modeling aggression as a way to cope with problems. We know that many aggressive delinquents and abusive parents come from abusive families (Straus & Gelles, 1980; Straus et al., 1997).

But some researchers note a problem with studies that find that spanked children are at increased risk for aggression (and depression and low self-esteem). Well, yes, they say, just as people who have undergone psychotherapy are more likely to suffer depression—because they had preexisting problems that triggered the treatments (Larzelere, 2000, 2004). Which is the chicken and which is the egg? The correlations don’t hand us an answer. If one adjusts for preexisting antisocial behavior, then an occasional single swat or two to misbehaving 2- to 6-year-olds looks more effective (Baumrind et al., 2002; Larzelere & Kuhn, 2005). That is especially so if the swat is used only as a backup when milder disciplinary tactics (such as a time-out, removing them from reinforcing surroundings) fail, and when the swat is combined with a generous dose of reasoning and reinforcing.

Remember: Punishment tells you what not to do; reinforcement tells you what to do. This dual approach can be effective. When children with self-destructive behaviors bite themselves or bang their heads, they may be mildly punished (say, with




cognitive map a mental representation of the layout of one’s environment. For example, after exploring a maze, rats act as if they have learned a cognitive map of it.

latent learning learning that occurs but is not apparent until there is an incentive to demonstrate it.
intrinsic motivation a desire to perform a behavior effectively for its own sake.
extrinsic motivation a desire to perform a behavior to receive promised rewards or avoid threatened punishment.

a squirt of water in the face), but they may also be rewarded (with positive attention and food) when they behave well. In high school classrooms, teachers can give feedback on papers by saying, “No, but try this . . .” and “Yes, that’s it!” Such responses reduce unwanted behavior while reinforcing more desirable alternatives.

Parents of delinquent youth are often unaware of how to achieve desirable behaviors without screaming or hitting their children (Patterson et al., 1982). Training programs can help reframe contingencies from dire threats to positive incentives—turning “You clean up your room this minute or no dinner!” to “You’re welcome at the dinner table after you get your room cleaned up.” When you stop to think about it, many threats of punishment are just as forceful, and perhaps more effective, if rephrased positively. Thus, “If you don’t get your homework done, there’ll be no car” would better be phrased as . . .

What punishment often teaches, said Skinner, is how to avoid it. Most psychologists now favor an emphasis on reinforcement: Notice people doing something right and affirm them for it.

Extending Skinner’s Understanding

24-5 Do cognitive processes and biological constraints affect operant conditioning?

Skinner granted the existence of private thought processes and the biological underpinnings of behavior. Nevertheless, many psychologists criticized him for discounting the importance of these influences.

Cognition and Operant Conditioning

Latent learning Animals, like people, can learn from experience, with or without reinforcement. After exploring a maze for 10 days, rats received a food reward at the end of the maze. They quickly demonstrated their prior learning of the maze—by immediately completing it as quickly as (and even faster than) rats that had been reinforced for running the maze. (From Tolman & Honzik, 1930.)


|| For more information on animal behavior, see books by (I am not making this up) Robin Fox and Lionel Tiger. ||

A mere eight days before dying of leukemia, Skinner (1990) stood before the American Psychological Association convention for one final critique of “cognitive science,” which he viewed as a throwback to early twentieth-century introspectionism. Skinner died resisting the growing belief that cognitive processes—thoughts, perceptions, expectations—have a necessary place in the science of psychology and even in our understanding of conditioning. (He regarded thoughts and emotions as behaviors that follow the same laws as other behaviors.)

Yet we have seen several hints that cognitive processes might be at work in operant learning. For example, animals on a fixed-interval reinforcement schedule respond more and more frequently as the time approaches when a response will produce a reinforcer. Although a strict behaviorist would object to talk of “expectations,” the animals behave as if they expected that repeating the response would soon produce the reward.

Evidence of cognitive processes has also come from studying rats in mazes. Rats exploring a maze, with no obvious reward, are like people sightseeing in a new town. They seem to develop a cognitive map, a mental representation of the maze. When an experimenter then places food in the maze’s goal box, the rats very soon run the maze as quickly as rats that have been reinforced with food for running the maze. During their explorations, the rats have seemingly experienced latent learning—learning that becomes apparent only when




there is some incentive to demonstrate it. Children, too, may learn from watching a parent but demonstrate the learning only much later, as needed.

The point to remember: There is more to learning than associating a response with a consequence; there is also cognition. Psychologists have presented some striking evidence of animals’ cognitive abilities in solving problems and in using aspects of language.

Intrinsic Motivation

“Bathroom? Sure, it’s just down the hall to the left, jog right, left, another left, straight past two more lefts, then right, and it’s at the end of the third corridor on your right.”

Tiger Woods’ intrinsic motivation “I remember a daily ritual that we had: I would call Pop at work to ask if I could practice with him. He would always pause a second or two, keeping me in suspense, but he’d always say yes. . . . In his own way, he was teaching me initiative. You see, he never pushed me to play” (quoted in USA Weekend, 1997). Woods (shown here being consoled by his caddy, Steve Williams) reacted with strong emotion to his first tournament win after his father’s death.

The cognitive perspective has also led to an important qualification concerning the power of rewards: Promising people a reward for a task they already enjoy can backfire. Many think that offering tangible rewards will boost anyone’s interest in an activity. Actually, in experiments, children promised a payoff for playing with an interesting puzzle or toy later play with the toy less than do their unpaid counterparts (Deci et al., 1999; Tang & Hall, 1995). It is as if the children think, “If I have to be bribed into doing this, it must not be worth doing for its own sake.”

Excessive rewards can undermine intrinsic motivation—the desire to perform a behavior effectively and for its own sake. Extrinsic motivation is the desire to behave in certain ways to receive external rewards or avoid threatened punishment. To sense the difference, think about your experience in this course. Are you feeling pressured to finish this reading before a deadline? Worried about your grade? Eager for rewards that depend on your doing well? If yes, then you are extrinsically motivated (as, to some extent, almost all students must be). Are you also finding the course material interesting? Does learning it make you feel more competent? If there were no grade at stake, might you be curious enough to want to learn the material for its own sake? If yes, intrinsic motivation also fuels your efforts.

Intrinsically motivated people work and play in search of enjoyment, interest, self-expression, or challenge. Youth sports coaches who aim to promote enduring interest in an activity, not just to pressure players into winning, should focus on the intrinsic joy of playing and of reaching one’s potential, note motivation researchers Edward Deci and Richard Ryan (1985, 1992, 2002). Giving people choices also enhances their intrinsic motivation (Patall et al., 2008).

Nevertheless, rewards can be effective if used neither to bribe nor to control but to signal a job well done (Boggiano et al., 1985). “Most improved player” awards, for example, can boost feelings of competence and increase enjoyment of a sport. Rightly administered, rewards can raise performance and spark creativity (Eisenberger & Rhoades, 2001; Henderlong & Lepper, 2002). And extrinsic rewards (such as the admissions scholarships and jobs that often follow good grades) are here to stay.

Biological Predispositions

As with classical conditioning, an animal’s natural predispositions constrain its capacity for operant conditioning. Using food as a reinforcer, you can easily condition a hamster to dig or to rear up because these actions are among the animal’s natural food-searching behaviors. But you won’t be so successful if you use food as a reinforcer to shape other hamster behaviors, such as face washing, that aren’t normally associated with food or hunger (Shettleworth, 1973). Similarly, you could easily teach pigeons to flap their wings to avoid being shocked, and to peck to obtain food, because fleeing with their wings and eating with their beaks are natural pigeon behaviors. However, they would have a hard time learning to peck to avoid a shock, or to flap their wings to obtain food (Foree & LoLordo, 1973). The principle: Biological constraints predispose organisms to learn associations that are naturally adaptive. After witnessing the power of operant technology, Skinner’s students Keller Breland and Marian Breland (1961; Bailey & Gillaspy, 2005) began training dogs, cats, chickens,

310

Saota/Gamma Liaison/Getty Images

Natural athletes Animals can most easily learn and retain behaviors that draw on their biological predispositions, such as cats’ inborn tendency to leap high and land on their feet.

MOD U LE 2 4 Operant Conditioning

parakeets, turkeys, pigs, ducks, and hamsters, and they eventually left their graduate studies to form an animal training company. Over the ensuing 47 years they trained over 15,000 animals from 140 species for movies, traveling shows, corporations, amusement parks, and the government. They also trained animal trainers, including Sea World’s first director of training. At first, the Brelands presumed that operant principles would work on almost any response an animal could make. But along the way, they confronted the constraints of biological predispositions. In one act, pigs trained to pick up large wooden “dollars” and deposit them in a piggy bank began to drift back to their natural ways. They would drop the coin, push it with their snouts as pigs are prone to do, pick it up again, and then repeat the sequence—delaying their food reinforcer. This instinctive drift occurred as the animals reverted to their biologically predisposed patterns.

“Never try to teach a pig to sing. It wastes your time and annoys the pig.” Mark Twain (1835–1910)

Skinner’s Legacy

B. F. Skinner “I am sometimes asked, ‘Do you think of yourself as you think of the organisms you study?’ The answer is yes. So far as I know, my behavior at any given moment has been nothing more than the product of my genetic endowment, my personal history, and the current setting” (1983).

B. F. Skinner was one of the most controversial intellectual figures of the late twentieth century. He stirred a hornet’s nest with his outspoken beliefs. He repeatedly insisted that external influences (not internal thoughts and feelings) shape behavior. And he urged people to use operant principles to influence others’ behavior at school, work, and home. Knowing that behavior is shaped by its results, he said we should use rewards to evoke more desirable behavior. Skinner’s critics objected, saying that he dehumanized people by neglecting their personal freedom and by seeking to control their actions. Skinner’s reply: External consequences already haphazardly control people’s behavior. Why not administer those consequences toward human betterment? Wouldn’t reinforcers be more humane than the punishments used in homes, schools, and prisons? And if it is humbling to think that our history has shaped us, doesn’t this very idea also give us hope that we can shape our future?

Applications of Operant Conditioning

24-6 How might operant conditioning principles be applied at school, in sports, at work, and at home?


Psychologists are applying operant conditioning principles to help people with a variety of challenges, from moderating high blood pressure to gaining social skills. Reinforcement technologies are also at work in schools, sports, workplaces, and homes (Flora, 2004).

At School A generation ago, Skinner and others worked toward a day when teaching machines and textbooks would shape learning in small steps, immediately reinforcing correct responses. Such machines and texts, they said, would revolutionize education and free teachers to focus on each student’s special needs.


Stand in Skinner’s shoes for a moment and imagine two math teachers, each with a class of students ranging from whiz kids to slow learners. Teacher A gives the whole class the same lesson, knowing that the bright kids will breeze through the math concepts, and the slower ones will be frustrated and fail. With so many different children, how could one teacher guide them individually? Teacher B, faced with a similar class, paces the material according to each student’s rate of learning and provides prompt feedback, with positive reinforcement, to both the slow and the fast learners. Thinking as Skinner did, how might you achieve the individualized instruction of Teacher B?

Computers were Skinner’s final hope. “Good instruction demands two things,” he said. “Students must be told immediately whether what they do is right or wrong and, when right, they must be directed to the step to be taken next.” Thus, the computer could be Teacher B—pacing math drills to the student’s rate of learning, quizzing the student to find gaps in understanding, giving immediate feedback, and keeping flawless records. To the end of his life, Skinner (1986, 1988, 1989) believed his ideal was achievable. Although the predicted education revolution has not occurred, today’s interactive student software, Web-based learning, and online testing bring us closer than ever before to achieving his ideal.



Computer-assisted learning Computers have helped realize Skinner’s goal of individually paced instruction with immediate feedback.

In Sports Reinforcement principles can enhance athletic performance as well. Again, the key is to shape behavior, by first reinforcing small successes and then gradually increasing the challenge. Thomas Simek and Richard O’Brien (1981, 1988) applied these principles to teaching golf and baseball by starting with easily reinforced responses. Golf students learn putting by starting with very short putts. As they build mastery, they eventually step back farther and farther. Likewise, novice batters begin with half swings at an oversized ball pitched from 10 feet away, giving them the immediate pleasure of smacking the ball. As the hitters’ confidence builds with their success and they achieve mastery at each level, the pitcher gradually moves back—to 15, then 22, 30, and 40.5 feet—and eventually introduces a standard baseball. Compared with children taught by conventional methods, those trained by this behavioral method show, in both testing and game situations, faster skill improvement.

At Work Skinner’s ideas have also shown up in the workplace. Knowing that reinforcers influence productivity, many organizations have invited employees to share the risks and rewards of company ownership. Others focus on reinforcing a job well done. Rewards are most likely to increase productivity if the desired performance has been well-defined and is achievable. The message for managers? Reward specific, achievable behaviors, not vaguely defined “merit.” Even criticism triggers the least resentment and the greatest performance boost when it is specific and considerate (Baron, 1988). Operant conditioning also reminds us that reinforcement should be immediate. IBM legend Thomas Watson understood. When he observed an achievement, he wrote the employee a check on the spot (Peters & Waterman, 1982). But rewards need not be material, or lavish. An effective manager may simply walk the floor and sincerely affirm people for good work, or write notes of appreciation for a completed project. As Skinner said, “How much richer would the whole world be if the reinforcers in daily life were more effectively contingent on productive work?”




At Home

“I wrote another five hundred words. Can I have another cookie?”

As we have seen, parents can apply operant conditioning practices. Parent-training researchers remind us that parents who say “Get ready for bed” but cave in to protests or defiance reinforce whining and arguing (Wierson & Forehand, 1994). Exasperated, they may then yell or gesture menacingly. When the child, now frightened, obeys, that in turn reinforces the parents’ angry behavior. Over time, a destructive parent-child relationship develops.

To disrupt this cycle, parents should remember the basic rule of shaping: Notice people doing something right and affirm them for it. Give children attention and other reinforcers when they are behaving well (Wierson & Forehand, 1994). Target a specific behavior, reward it, and watch it increase. When children misbehave or are defiant, don’t yell at them or hit them. Simply explain the misbehavior and give them a time-out.

Finally, we can use operant conditioning in our own lives (see Close-Up: Training Our Partners). To reinforce your own desired behaviors and extinguish the undesired ones, psychologists suggest taking these steps:

1. State your goal—to stop smoking, eat less, or study or exercise more—in measurable terms, and announce it. You might, for example, aim to boost your study time by an hour a day and share that goal with some close friends.

2. Monitor how often you engage in your desired behavior. You might log your current study time, noting under what conditions you do and don’t study. (When I began writing textbooks, I logged how I spent my time each day and was amazed to discover how much time I was wasting.)

3. Reinforce the desired behavior. To increase your study time, give yourself a reward (a snack or some activity you enjoy) only after you finish your extra hour of study. Agree with your friends that you will join them for weekend activities only if you have met your realistic weekly studying goal.

4. Reduce the rewards gradually. As your new behaviors become more habitual, give yourself a mental pat on the back instead of a cookie.

CLOSE-UP

Training Our Partners
By Amy Sutherland

For a book I was writing about a school for exotic animal trainers, I started commuting from Maine to California, where I spent my days watching students do the seemingly impossible: teaching hyenas to pirouette on command, cougars to offer their paws for a nail clipping, and baboons to skateboard. I listened, rapt, as professional trainers explained how they taught dolphins to flip and elephants to paint. Eventually it hit me that the same techniques might work on that stubborn but lovable species, the American husband.

The central lesson I learned from exotic animal trainers is that I should reward behavior I like and ignore behavior I don’t. After all, you don’t get a sea lion to balance a ball on the end of its nose by nagging. The same goes for the American husband.

Back in Maine, I began thanking Scott if he threw one dirty shirt into the hamper. If he threw in two, I’d kiss him. Meanwhile, I would step over any soiled clothes on the floor without one sharp word, though I did sometimes kick them under the bed. But as he basked in my appreciation, the piles became smaller. I was using what trainers call “approximations,” rewarding the small steps toward learning a whole new behavior. . . .

Once I started thinking this way, I couldn’t stop. At the school in California, I’d be scribbling notes on how to walk an emu or have a wolf accept you as a pack member, but I’d be thinking, “I can’t wait to try this on Scott. . . .”

After two years of exotic animal training, my marriage is far smoother, my husband much easier to love. I used to take his faults personally; his dirty clothes on the floor were an affront, a symbol of how he didn’t care enough about me. But thinking of my husband as an exotic species gave me the distance I needed to consider our differences more objectively.

Excerpted with permission from Sutherland, A. (2006, June 25). What Shamu taught me about a happy marriage. New York Times.


Contrasting Classical and Operant Conditioning

Both classical and operant conditioning are forms of associative learning, and both involve acquisition, extinction, spontaneous recovery, generalization, and discrimination. The similarities are sufficient to make some researchers wonder if a single stimulus-response learning process might explain them both (Donahoe & Vegas, 2004). Their procedural difference is this: Through classical (Pavlovian) conditioning, an organism associates different stimuli that it does not control and responds automatically (respondent behaviors) (TABLE 24.4). Through operant conditioning, an organism associates its operant behaviors—those that act on its environment to produce rewarding or punishing stimuli—with their consequences. Cognitive processes and biological predispositions influence both classical and operant conditioning.

“O! This learning, what a thing it is.” William Shakespeare, The Taming of the Shrew, 1597

TABLE 24.4 Comparison of Classical and Operant Conditioning

Basic idea
  Classical conditioning: Organism learns associations between events it doesn’t control.
  Operant conditioning: Organism learns associations between its behavior and resulting events.

Response
  Classical conditioning: Involuntary, automatic.
  Operant conditioning: Voluntary, operates on environment.

Acquisition
  Classical conditioning: Associating events; CS announces US.
  Operant conditioning: Associating response with a consequence (reinforcer or punisher).

Extinction
  Classical conditioning: CR decreases when CS is repeatedly presented alone.
  Operant conditioning: Responding decreases when reinforcement stops.

Spontaneous recovery
  Classical conditioning: The reappearance, after a rest period, of an extinguished CR.
  Operant conditioning: The reappearance, after a rest period, of an extinguished response.

Generalization
  Classical conditioning: The tendency to respond to stimuli similar to the CS.
  Operant conditioning: Organism’s response to similar stimuli is also reinforced.

Discrimination
  Classical conditioning: The learned ability to distinguish between a CS and other stimuli that do not signal a US.
  Operant conditioning: Organism learns that certain responses, but not others, will be reinforced.

Cognitive processes
  Classical conditioning: Organisms develop expectation that CS signals the arrival of US.
  Operant conditioning: Organisms develop expectation that a response will be reinforced or punished; they also exhibit latent learning, without reinforcement.

Biological predispositions
  Classical conditioning: Natural predispositions constrain what stimuli and responses can easily be associated.
  Operant conditioning: Organisms best learn behaviors similar to their natural behaviors; unnatural behaviors instinctively drift back toward natural ones.



Review

Operant Conditioning

24-1 What is operant conditioning, and how does it differ from classical conditioning?

In operant conditioning, an organism learns associations between its own behavior and resulting events; this form of conditioning involves operant behavior (behavior that operates on the environment, producing consequences). In classical conditioning, the organism forms associations between stimuli—events it does not control; this form of conditioning involves respondent behavior (automatic responses to some stimulus). Expanding on Edward Thorndike’s law of effect, B. F. Skinner and others found that the behavior of rats or pigeons placed in an operant chamber (Skinner box) can be shaped by using reinforcers to guide closer and closer approximations of the desired behavior.

24-2 What are the basic types of reinforcers?

Positive reinforcement adds something desirable to increase the frequency of a behavior. Negative reinforcement removes something undesirable to increase the frequency of a behavior. Primary reinforcers (such as receiving food when hungry or having nausea end during an illness) are innately satisfying—no learning is required. Conditioned (or secondary) reinforcers (such as cash) are satisfying because we have learned to associate them with more basic rewards (such as the food or medicine we buy with them). Immediate reinforcers (such as unprotected sex) offer immediate payback; delayed reinforcers (such as a weekly paycheck) require the ability to delay gratification.

24-3 How do different reinforcement schedules affect behavior?

In continuous reinforcement (reinforcing desired responses every time they occur), learning is rapid, but so is extinction if rewards cease. In partial (intermittent) reinforcement, initial learning is slower, but the behavior is much more resistant to extinction. Fixed-ratio schedules offer rewards after a set number of responses; variable-ratio schedules, after an unpredictable number. Fixed-interval schedules offer rewards after set time periods; variable-interval schedules, after unpredictable time periods.

24-4 How does punishment affect behavior?

Punishment attempts to decrease the frequency of a behavior (such as a child’s disobedience) by administering an undesirable consequence (such as spanking) or withdrawing something desirable (such as taking away a favorite toy). Undesirable side effects can include suppressing rather than changing unwanted behaviors, teaching aggression, creating fear, encouraging discrimination (so that the undesirable behavior appears when the punisher is not present), and fostering depression and feelings of helplessness.

24-5 Do cognitive processes and biological constraints affect operant conditioning?

Skinner underestimated the limits that cognitive and biological constraints place on conditioning. Research on cognitive mapping and latent learning demonstrates the importance of cognitive processes in learning. Excessive rewards can undermine intrinsic motivation. Training that attempts to override biological constraints will probably not endure because the animals will revert to their predisposed patterns.

24-6 How might operant conditioning principles be applied at school, in sports, at work, and at home?

At school, teachers can use shaping techniques to guide students’ behaviors, and they can use interactive software and Web sites to provide immediate feedback. In sports, coaches can build players’ skills and self-confidence by rewarding small improvements. At work, managers can boost productivity and morale by rewarding well-defined and achievable behaviors. At home, parents can reward behaviors they consider desirable, but not those that are undesirable. We can shape our own behaviors by stating our goals, monitoring the frequency of desired behaviors, reinforcing desired behaviors, and cutting back on incentives as behaviors become habitual.

Terms and Concepts to Remember

associative learning, p. 301; respondent behavior, p. 301; operant conditioning, p. 301; operant behavior, p. 301; law of effect, p. 301; operant chamber, p. 302; learning, p. 302; shaping, p. 302; reinforcer, p. 303; positive reinforcement, p. 303; negative reinforcement, p. 303; primary reinforcer, p. 304; conditioned reinforcer, p. 304; continuous reinforcement, p. 305; partial (intermittent) reinforcement, p. 305; fixed-ratio schedule, p. 305; variable-ratio schedule, p. 305; fixed-interval schedule, p. 305; variable-interval schedule, p. 306; punishment, p. 306; cognitive map, p. 308; latent learning, p. 308; intrinsic motivation, p. 309; extrinsic motivation, p. 309

Test Yourself

1. Positive reinforcement, negative reinforcement, positive punishment, and negative punishment are tricky concepts for many students. Can you fit the right term in the three boxes in this table? I’ll do the first one (positive reinforcement) for you.

Type of Stimulus: Desired (for example, a compliment)
  Give It: Positive reinforcement
  Take It Away: ________

Type of Stimulus: Undesired/aversive (for example, an insult)
  Give It: ________
  Take It Away: ________

(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. Can you recall a time when a teacher, coach, family member, or employer helped you learn something by shaping your behavior in little steps until you achieved your goal?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 25

Learning by Observation

Mirrors in the Brain
Bandura’s Experiments
Applications of Observational Learning

25-1 What is observational learning, and how is it enabled by mirror neurons?

learning a relatively permanent change in an organism’s behavior due to experience.

observational learning learning by observing others.

modeling the process of observing and imitating a specific behavior.


From drooling dogs, running rats, and pecking pigeons we have learned much about the basic processes of learning. But conditioning principles don’t tell us the whole story. Higher animals, especially humans, can learn without direct experience, through observational learning, by observing and imitating others. A child who sees his sister burn her fingers on a hot stove learns not to touch it. And a monkey watching another selecting certain pictures to gain treats learns to imitate that behavior (FIGURE 25.1). We learn all kinds of specific behaviors by observing and imitating models, a process called modeling. Lord Chesterfield (1694–1773) had the idea: “We are, in truth, more than half what we are by imitation.”


We can glimpse the roots of observational learning in other species. Rats, pigeons, crows, and gorillas all observe others and learn (Byrne & Russon, 1998; Dugatkin, 2002). So do monkeys. Rhesus macaque monkeys rarely make up quickly after a fight—unless they grow up with forgiving older macaques. Then, more often than not, their fights, too, are quickly followed by reconciliation (de Waal & Johanowicz, 1993). Monkey see, monkey do. Chimpanzees learn all sorts of foraging and tool use behaviors by observation, which then are transmitted across generations within their local culture (Hopper et al., 2008; Whiten et al., 2007). Imitation is all the more striking in humans. Our catch-phrases, hem lengths, ceremonies, foods, traditions, vices, and fads all spread by one person copying another. Even as 2½-year-olds, when many of our mental abilities were near those of chimpanzees, we considerably surpassed chimps at social tasks such as imitating another’s solution to a problem (Herrmann et al., 2007).

FIGURE 25.1 Cognitive imitation When Monkey A (left) sees Monkey B touch four pictures on a display screen in a certain order to gain a banana, Monkey A learns to imitate that order, even when shown a different configuration (Subiaul et al., 2004).

Children see, children do? Children who often experience physical punishment tend to display more aggression.


“Children need models more than they need critics.” Joseph Joubert, Pensées, 1842

Mirrors in the Brain

On a hot summer day in 1991 in Parma, Italy, a lab monkey awaited its researchers’ return from lunch. The researchers had implanted wires next to its motor cortex, in a frontal lobe brain region that enabled the monkey to plan and enact movements. When the monkey moved a peanut into its mouth, for example, the monitoring device would buzz. That day, as one of the researchers reentered the lab, ice cream cone in hand, the monkey stared at him. As he raised the cone to lick it, the monkey’s monitor again buzzed—as if the motionless monkey had itself moved (Blakeslee, 2006; Iacoboni, 2008).

Having earlier observed the same weird result when the monkey watched humans or other monkeys move peanuts to their mouths, the flabbergasted researchers, led by Giacomo Rizzolatti (2002, 2006), eventually surmised that they had stumbled onto a previously unknown type of neuron: mirror neurons, whose activity provides a neural basis for imitation and observational learning. When a monkey grasps, holds, or tears something, these neurons fire. And they likewise fire when the monkey observes another doing so. When one monkey sees, these neurons mirror what another monkey does.

It’s not just monkey business. Imitation shapes even very young humans’ behavior. Shortly after birth, a baby may imitate an adult who sticks out his tongue. By 8 to 16 months, infants imitate various novel gestures (Jones, 2007). By age 12 months, they begin looking where an adult is looking (Brooks & Meltzoff, 2005). And by age 14 months (FIGURE 25.2), children imitate acts modeled on TV (Meltzoff, 1988; Meltzoff & Moore, 1989, 1997). Children see, children do.

PET scans of different brain areas reveal that humans, like monkeys, have a mirror neuron system that supports empathy and imitation (Iacoboni, 2008). As we observe another’s action, our brain generates an inner simulation, enabling us to experience the other’s experience within ourselves.
Mirror neurons help give rise to children’s empathy and to their ability to infer another’s mental state, an ability known as theory of mind. People with autism, a developmental disorder, display reduced imitative yawning and mirror neuron activity—“broken mirrors,” some have said (Ramachandran & Oberman, 2006; Senju et al., 2007; Williams et al., 2006).

For most of us, however, our mirror neurons make emotions contagious. We grasp others’ states of mind—often feeling what they feel—by mental simulation. We find it harder to frown when viewing a smile than when viewing a frown (Dimberg et al., 2000, 2002). We find ourselves yawning after observing another’s yawn, laughing when others laugh. When watching movies, a scorpion crawling up someone’s leg makes us tighten up; observing a passionate kiss, we may notice our own lips puckering. Seeing a loved one’s pain, our faces mirror their emotion. But as FIGURE 25.3 shows, so do our brains. In this fMRI scan, the pain imagined by an empathic romantic partner has triggered some of the same brain activity experienced by the loved one actually having the pain (Singer et al., 2004). Even fiction reading may trigger such activity, as we mentally simulate the experiences described (Mar & Oatley, 2008). The bottom line: Our brain’s mirror neurons underlie our intensely social nature.

mirror neurons frontal lobe neurons that fire when performing certain actions or when observing another doing so. The brain’s mirroring of another’s action may enable imitation and empathy.

FIGURE 25.2 Learning from observation This 14-month-old boy in Andrew Meltzoff’s laboratory is imitating behavior he has seen on TV. In the top photo the infant leans forward and carefully watches the adult pull apart a toy. In the middle photo he has been given the toy. In the bottom photo he pulls the toy apart, imitating what he has seen the adult do.

Meltzoff, A. N. (1988). Imitation of televised models by infants. Child Development, 59, 1221–1229. Photos courtesy of A. N. Meltzoff and M. Hanuk.


FIGURE 25.3 Experienced and imagined pain in the brain Brain activity related to actual pain (left) is mirrored in the brain of an observing loved one (right). Empathy in the brain shows up in emotional brain areas, but not in the somatosensory cortex, which receives the physical pain input.


Bandura’s Experiments


Picture this scene from a famous experiment by Albert Bandura, the pioneering researcher of observational learning (Bandura et al., 1961). A preschool child works on a drawing. An adult in another part of the room is building with Tinkertoys. As the child watches, the adult gets up and for nearly 10 minutes pounds, kicks, and throws around the room a large inflated Bobo doll, yelling, “Sock him in the nose. . . . Hit him down. . . . Kick him.” The child is then taken to another room filled with appealing toys. Soon the experimenter returns and tells the child she has decided to save these good toys “for the other children.” She takes the now-frustrated child to a third adjacent room containing a few toys, including a Bobo doll. Left alone, what does the child do? Compared with children not exposed to the adult model, those who viewed the model’s actions were much more likely to lash out at the doll. Apparently, observing the aggressive outburst lowered their inhibitions. But something more was also at work, for the children imitated the very acts they had observed and used the very words they had heard (FIGURE 25.4 on the next page).

Albert Bandura “The Bobo doll follows me wherever I go. The photographs are published in every introductory psychology text and virtually every undergraduate takes introductory psychology. I recently checked into a Washington hotel. The clerk at the desk asked, ‘Aren’t you the psychologist who did the Bobo doll experiment?’ I answered, ‘I am afraid that will be my legacy.’ He replied, ‘That deserves an upgrade. I will put you in a suite in the quiet part of the hotel’” (2005).




FIGURE 25.4 The famous Bobo doll experiment Notice how the children’s actions directly imitate the adult’s.

What determines whether we will imitate a model? Bandura believes part of the answer is reinforcements and punishments—those received by the model as well as by the imitator. By watching, we learn to anticipate a behavior’s consequences in situations like those we are observing. We are especially likely to imitate people we perceive as similar to ourselves, as successful, or as admirable.

Applications of Observational Learning

The big news from Bandura’s studies is that we look and we learn. Models—in one’s family or neighborhood, or on TV—may have effects, good or bad. Many business organizations effectively use behavior modeling to train communications, sales, and customer service skills (Taylor et al., 2005). Trainees gain skills faster when they not only are told the needed skills but also are able to observe the skills being modeled effectively by experienced workers (or actors simulating them).

Prosocial Effects


A model grandma This boy is learning to cook by observing his grandmother. As the sixteenth-century proverb states, “Example is better than precept.”

25-2 What is the impact of prosocial modeling and of antisocial modeling? The good news is that prosocial (positive, helpful) models can have prosocial effects. To encourage children to read, read to them and surround them with books and people who read. To increase the odds that your children will practice your religion, worship and attend religious activities with them. People who exemplify nonviolent, helpful behavior can prompt similar behavior in others. India’s Mahatma Gandhi and America’s Martin Luther King, Jr., both drew on the power of modeling, making nonviolent action a powerful force for social change in both countries. Parents are also powerful models. European Christians who risked their lives to rescue Jews from the Nazis usually had a close relationship with at least one parent who modeled a strong moral or humanitarian concern; this was also true for U.S. civil rights activists in the 1960s (London, 1970; Oliner & Oliner, 1988). The observational learning of morality begins early. Socially responsive toddlers who readily imitate their parents tend to become preschoolers with a strong internalized conscience (Forman et al., 2004).



Models are most effective when their actions and words are consistent. Sometimes, however, models say one thing and do another. Many parents seem to operate according to the principle “Do as I say, not as I do.” Experiments suggest that children learn to do both (Rice & Grusec, 1975; Rushton, 1975). Exposed to a hypocrite, they tend to imitate the hypocrisy by doing what the model did and saying what the model said.

prosocial behavior positive, constructive, helpful behavior. The opposite of antisocial behavior.

Antisocial Effects The bad news is that observational learning may have antisocial effects. This helps us understand why abusive parents might have aggressive children, and why many men who beat their wives had wife-battering fathers (Stith et al., 2000). Critics note that being aggressive could be passed along by parents’ genes. But with monkeys we know it can be environmental. In study after study, young monkeys separated from their mothers and subjected to high levels of aggression grew up to be aggressive themselves (Chamove, 1980). The lessons we learn as children are not easily unlearned as adults, and they are sometimes visited on future generations. TV is a powerful source of observational learning. While watching TV, children may “learn” that bullying is an effective way to control others, that free and easy sex brings pleasure without later misery or disease, or that men should be tough and women gentle. And they have ample time to learn such lessons. During their first 18 years, most children in developed countries spend more time watching TV than they spend in school. In the United States, where 9 in 10 teens watch TV daily, someone who lives to age 75 will have spent 9 years staring at the tube (Gallup, 2002; Kubey & Csikszentmihalyi, 2002). With more than 1 billion TV sets playing in homes worldwide, CNN reaching 150 countries, and MTV broadcasting in 17 languages, television has created a global pop culture (Gundersen, 2001; Lippman, 1992). Television viewers are learning about life from a rather peculiar storyteller, one that reflects the culture’s mythology but not its reality. During the late twentieth century, the average child viewed some 8000 TV murders and 100,000 other acts of violence before finishing elementary school (Huston et al., 1992). If we include cable programming and video rentals, the violence numbers escalate. 
An analysis of more than 3000 network and cable programs aired in the 1996–1997 season revealed that nearly 6 in 10 featured violence, that 74 percent of the violence went unpunished, that 58 percent did not show the victims’ pain, that nearly half the incidents involved “justified” violence, and that nearly half involved an attractive perpetrator. These conditions define the recipe for the violence-viewing effect described in many studies (Donnerstein, 1998).

How much are we affected by repeated exposure to violent programs? Was the judge who in 1993 tried two British 10-year-olds for their murder of a 2-year-old right to suspect that the pair had been influenced by “violent video films”? Were the American media right to think that the teen assassins who killed 13 of their Columbine High School classmates had been influenced by repeated exposure to Natural Born Killers and splatter games such as Doom? To understand whether violence viewing leads to violent behavior, researchers have done some 600 correlational and experimental studies (Anderson & Gentile, 2008; Comstock, 2008; Murray, 2008). Correlational studies do support this link:

▸ In the United States and Canada, homicide rates doubled between 1957 and 1974, just when TV was introduced and spreading. Moreover, census regions with later dates for TV service also had homicide rates that jumped later.
▸ White South Africans were first introduced to TV in 1975. A similar near-doubling of the homicide rate began after 1975 (Centerwall, 1989).
▸ Elementary schoolchildren with heavy exposure to media violence (via TV, videos, and video games) also tend to get into more fights (FIGURE 25.5 on the next page).

|| TV’s greatest effect may stem from what it displaces. Children and adults who spend 4 hours a day watching TV spend 4 fewer hours in active pursuits— talking, studying, playing, reading, or socializing with friends. What would you have done with your extra time if you had never watched TV, and how might you therefore be different? ||

FIGURE 25.5 Media violence viewing predicts future aggressive behavior Douglas Gentile and his colleagues (2004) studied more than 400 third to fifth graders. After controlling for existing differences in hostility and aggression, the researchers reported increased aggression in those heavily exposed to violent television, videos, and video games. [Graph: percentage of students involved in fights at time 2, plotted against media violence exposure at time 1 (low, medium, high), shown separately for girls and boys.]

MODULE 25 Learning by Observation

Violence viewing leads to violent play Research has shown that viewing media violence does lead to increased expression of aggression in the viewers, as with these boys imitating pro wrestlers.


|| Gallup surveys asked American teens (Mazzuca, 2002): “Do you feel there is too much violence in the movies, or not?” 1977: 42 percent said yes. 1999: 23 percent said yes. ||


“Thirty seconds worth of glorification of a soap bar sells soap. Twenty-five minutes worth of glorification of violence sells violence.” U.S. Senator Paul Simon, Remarks to the Communitarian Network, 1993

But remember: correlation does not imply causation. So these studies do not prove that viewing violence causes aggression (Freedman, 1988; McGuire, 1986). Maybe aggressive children prefer violent programs. Maybe abused or neglected children are both more aggressive and more often left in front of the TV. Maybe violent programs simply reflect, rather than affect, violent trends.

To pin down causation, psychologists use experiments. In this case, researchers randomly assigned some viewers to observe violence and others to watch entertaining nonviolence. Does viewing cruelty prepare people, when irritated, to react more cruelly? To some extent, it does. “The consensus among most of the research community,” reported the National Institute of Mental Health (1982), “is that violence on television does lead to aggressive behavior by children and teenagers who watch the programs.” This is especially so when an attractive person commits seemingly justified, realistic violence that goes unpunished and causes no visible pain or harm (Donnerstein, 1998).

The violence-viewing effect seems to stem from at least two factors. One is imitation (Geen & Thomas, 1986). As we noted earlier, children as young as 14 months will imitate acts they observe on TV. As they watch, their mirror neurons simulate the behavior, and after this inner rehearsal they become more likely to act it out. One research team observed a sevenfold increase in violent play immediately after children viewed the “Power Rangers” (Boyatzis et al., 1995). These children, like those we saw earlier in the Bobo doll experiment, often precisely imitated the models’ violent acts, including flying karate kicks. Imitation may also have played a role in the first eight days after the 1999 Columbine High School massacre, when every U.S. state except Vermont had to deal with copycat threats or incidents. Pennsylvania alone had 60 threats of school violence (Cooper, 1999).

Prolonged exposure to violence also desensitizes viewers; they become more indifferent to it when later viewing a brawl, whether on TV or in real life (Rule & Ferguson, 1986). Adult males who spent three evenings watching sexually violent movies became progressively less bothered by the rapes and slashings. Compared with those in a control group, the film watchers later expressed less sympathy for domestic violence victims, and they rated the victims’ injuries as less severe (Mullin & Linz, 1995). Indeed, suggested Edward Donnerstein and his co-researchers (1987), an evil psychologist could hardly imagine a better way to make people indifferent to brutality than to expose them to a graded series of scenes, from fights to killings to the mutilations in slasher movies. Watching cruelty fosters indifference.

[Cartoon: “Don’t you understand? This is life, this is what is happening. We can’t switch to another channel.” © The New Yorker Collection, 2000, J. Day from cartoonbank.com. All rights reserved.]

*** Bandura’s work—like that of Ivan Pavlov, John Watson, B. F. Skinner, and thousands of others who advanced our knowledge of learning principles—illustrates the impact that can result from single-minded devotion to a few well-defined problems and ideas. All of these researchers defined the issues and impressed on us the importance of learning. As their legacy demonstrates, intellectual history is often made by people who risk going to extremes in pushing ideas to their limits (Simonton, 2000).

Review Learning by Observation

25-1 What is observational learning, and how is it enabled by mirror neurons?
In observational learning, we observe and imitate others. Mirror neurons, located in the brain’s frontal lobes, demonstrate a neural basis for observational learning. They fire when we perform certain actions (such as responding to pain or moving our mouth to form words), or when we observe someone else performing those actions.

25-2 What is the impact of prosocial modeling and of antisocial modeling?
Children tend to imitate what a model does and says, whether the behavior being modeled is prosocial (positive, constructive, and helpful) or antisocial. If a model’s actions and words are inconsistent, children may imitate the hypocrisy they observe.

Terms and Concepts to Remember
learning, p. 315
observational learning, p. 315
modeling, p. 315
mirror neurons, p. 316
prosocial behavior, p. 318

Test Yourself 1. Jason’s parents and older friends all smoke, but they advise him not to. Juan’s parents and friends don’t smoke, but they say nothing to deter him from doing so. Will Jason or Juan be more likely to start smoking? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. Who has been a significant role model for you? For whom are you a model?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Memory

modules

Be thankful for memory. We take it for granted, except when it malfunctions. But it is our memory that accounts for time and defines our life. It is our memory that enables us to recognize family, speak our language, find our way home, and locate food and water. It is our memory that enables us to enjoy an experience and then mentally replay and enjoy it again. Our shared memories help bind us together as Irish or Aussies, as Serbs or Albanians. And it is our memories that occasionally pit us against those whose offenses we cannot forget.

In large part, you are what you remember. Without memory, your storehouse of accumulated learning, there would be no savoring of past joys, no guilt or anger over painful recollections. You would instead live in an enduring present, each moment fresh. But each person would be a stranger, every language foreign, every task—dressing, cooking, biking—a new challenge. You would even be a stranger to yourself, lacking that continuous sense of self that extends from your distant past to your momentary present. “If you lose the ability to recall your old memories then you have no life,” suggested memory researcher James McGaugh (2003). “You might as well be a rutabaga or a cabbage.”

To think about memory, we first need a model of how it works. Module 26 introduces a modified version of Richard Atkinson and Richard Shiffrin’s classic and influential three-stage model of memory. Modules 27 through 29 examine sensory memory, short-term/working memory, and long-term memory—thus reviewing how we move information into our memories, retain it, and later retrieve it. Module 30 looks at what happens when our memories fail us (as when we forget information, misremember it, or create false memories). That module concludes with some tips on how you can apply memory researchers’ findings to your own education.

[Cartoon: “Waiter, I’d like to order, unless I’ve eaten, in which case bring me the check.” © The New Yorker Collection, 1992, Robert Mankoff from cartoonbank.com. All rights reserved.]

26 Introduction to Memory

27 Encoding: Getting Information In

28 Storage: Retaining Information

29 Retrieval: Getting Information Out

30 Forgetting, Memory Construction, and Improving Memory


The Phenomenon of Memory
Studying Memory: Information-Processing Models

module 26 Introduction to Memory


▸|| The Phenomenon of Memory

FIGURE 26.1 What is this? People who had, 17 years earlier, seen the complete image (in Figure 26.3 when you turn the page) were more likely to recognize this fragment, even if they had forgotten the earlier experience (Mitchell, 2006).


To a psychologist, memory is learning that has persisted over time, information that has been stored and can be retrieved. Research on memory’s extremes has helped us understand how memory works.

At age 92, my father suffered a small stroke that had but one peculiar effect. His genial personality was intact. He was as mobile as before. He knew us and while poring over family photo albums could reminisce in detail about his past. But he had lost most of his ability to lay down new memories of conversations and everyday episodes. He could not tell me what day of the week it was. Told repeatedly of his brother-in-law’s death, he expressed surprise each time he heard the news.

At the other extreme are people who would be medal winners in a memory Olympics, such as Russian journalist Shereshevskii, or S, who had merely to listen while other reporters scribbled notes (Luria, 1968). Where you and I could parrot back a string of about 7—maybe even 9—digits, S could repeat up to 70, provided they were read about 3 seconds apart in an otherwise silent room. Moreover, he could recall digits or words backward as easily as forward. His accuracy was unerring, even when recalling a list as much as 15 years later, after having memorized hundreds of others. “Yes, yes,” he might recall. “This was a series you gave me once when we were in your apartment. . . . You were sitting at the table and I in the rocking chair. . . . You were wearing a gray suit and you looked at me like this. . . .”

Amazing? Yes, but consider your own pretty staggering capacity for remembering countless voices, sounds, and songs; tastes, smells, and textures; faces, places, and happenings. Imagine viewing more than 2500 slides of faces and places, for only 10 seconds each. Later you see 280 of these slides, paired with others not previously seen. If you are like the participants in this experiment by Ralph Haber (1970), you would recognize 90 percent of those you had seen before.
Or imagine yourself looking at a picture fragment, such as the one in FIGURE 26.1. Also imagine that you had seen the complete picture for a couple of seconds 17 years earlier. When David Mitchell (2006) gave people this experience, they were more likely to identify the previously seen objects than were members of a control group who had not seen the complete drawings. Moreover, like the cicada insect that reemerges every 17 years, the picture memory reappeared even for those who had no conscious recollection of participating in the long-ago experiment! How do we accomplish such memory feats? How can we remember things we have not thought about for years, yet forget the name of someone we met a minute ago? How are memories stored in our brains? Why do some painful memories persist, like unwelcome houseguests, while other memories leave too quickly? How can two people’s memories of the same event be so different? How can we improve our memories? These will be among the questions we consider as we review more than a century of research on memory.


▸|| Studying Memory: Information-Processing Models

memory the persistence of learning over time through the storage and retrieval of information.

encoding the processing of information into the memory system—for example, by extracting meaning.

26-1 How do psychologists describe the human memory system?

A model of how memory works can help us think about how we form and retrieve memories. One model that has often been used is a computer’s information-processing system, which is in some ways similar to human memory. To remember any event, we must get information into our brain (encoding), retain that information (storage), and later get it back out (retrieval). A computer also encodes, stores, and retrieves information. First, it translates input (keystrokes) into an electronic language, much as the brain encodes sensory information into a neural language. The computer permanently stores vast amounts of information on a drive, from which it can later be retrieved.

Like all analogies, the computer model has its limits. Our memories are less literal and more fragile than a computer’s. Moreover, most computers process information speedily but sequentially, even while alternating between tasks. The brain is slower but does many things at once.

Psychologists have proposed several information-processing models of memory. One modern model, connectionism, views memories as emerging from interconnected neural networks. Specific memories arise from particular activation patterns within these networks. In an older but easier-to-picture model, Richard Atkinson and Richard Shiffrin (1968) proposed that we form memories in three stages:

storage the retention of encoded information over time.

retrieval the process of getting information out of memory storage.

sensory memory the immediate, very brief recording of sensory information in the memory system.

short-term memory activated memory that holds a few items briefly, such as the seven digits of a phone number while dialing, before the information is stored or forgotten.

long-term memory the relatively permanent and limitless storehouse of the memory system. Includes knowledge, skills, and experiences.

1. We first record to-be-remembered information as a fleeting sensory memory.
2. From there, we process information into a short-term memory bin, where we encode it through rehearsal.
3. Finally, information moves into long-term memory for later retrieval.
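The computer analogy above can be made concrete. Here is a minimal, purely illustrative sketch of the three stages; the class and method names are invented for this example (they are not from the text), and the seven-item limit is a crude stand-in for short-term memory’s capacity:

```python
# Toy sketch of the Atkinson-Shiffrin three-stage model.
# All names are invented for illustration, not from the text.

class MemorySystem:
    def __init__(self):
        self.sensory = []       # stage 1: fleeting sensory register
        self.short_term = []    # stage 2: limited-capacity store
        self.long_term = set()  # stage 3: relatively permanent store

    def sense(self, stimulus):
        """Record a fleeting sensory memory."""
        self.sensory.append(stimulus)

    def attend(self, stimulus):
        """Attended sensory items enter short-term memory."""
        if stimulus in self.sensory:
            self.short_term.append(stimulus)
            if len(self.short_term) > 7:  # crude capacity limit
                self.short_term.pop(0)    # oldest item is displaced

    def rehearse(self, stimulus):
        """Rehearsed short-term items are encoded into long-term memory."""
        if stimulus in self.short_term:
            self.long_term.add(stimulus)

    def retrieve(self, cue):
        """Retrieval: get information back out of storage."""
        return cue in self.long_term
```

In this sketch, an item that is sensed but never attended to or rehearsed never reaches long-term storage, mirroring the model’s claim that unattended, unrehearsed information is quickly lost.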

FIGURE 26.2 A modified three-stage processing model of memory Atkinson and Shiffrin’s classic three-step model helps us to think about how memories are processed, but today’s researchers recognize other ways long-term memories form. For example, some information slips into long-term memory via a “back door,” without our consciously attending to it. And so much active processing occurs in the short-term memory stage that many now prefer the term working memory.

Although historically important and helpfully simple, this three-step process is limited and fallible. In this text, we use a modified version of the three-stage processing model of memory (FIGURE 26.2). This updated model accommodates two important new concepts:

▸ Some information skips Atkinson and Shiffrin’s first two stages and is processed directly and automatically into long-term memory, without our conscious awareness.

[Diagram for Figure 26.2: external events provide sensory input to sensory memory; attention to important or novel information moves it into working/short-term memory; encoding moves it into long-term memory, from which it can later be retrieved; unconscious processing also routes some input directly into long-term memory. Sensory memory registers incoming information, allowing your brain to capture for a moment a sea of faces. We pay attention to and encode important or novel stimuli—in this case an angry face in the crowd. If we stare at the face long enough (rehearsal), or if we’re sufficiently disturbed by it (it’s deemed “important”), we will encode it for long-term storage, and we may, an hour later, be able to call up an image of the face.]


▸ Working memory, a newer understanding of Atkinson and Shiffrin’s second stage, concentrates on the active processing of information in this intermediate stage. Because we cannot possibly focus on all the information bombarding our senses at once, we shine the flashlight beam of our attention on certain incoming stimuli—often those that are novel or important. We process these incoming stimuli, along with information we retrieve from long-term memory, in temporary working memory. Working memory associates new and old information and solves problems (Baddeley, 2001, 2002; Engle, 2002).

FIGURE 26.3 Now you know People who had seen this complete image were, 17 years later, more likely to recognize the fragment in Figure 26.1.

working memory a newer understanding of short-term memory that focuses on conscious, active processing of incoming auditory and visual-spatial information, and of information retrieved from long-term memory.

People’s working memory capacity differs. Imagine being shown a letter of the alphabet, then asked a simple question, then being shown another letter, followed by another question, and so on. Those who can juggle the most mental balls—who can remember the most letters despite the interruptions—tend in everyday life to exhibit high intelligence and to better maintain their focus on tasks (Kane et al., 2007; Unsworth & Engle, 2007). When beeped to report in at various times, they are less likely than others to report that their mind was wandering from their current activity.

Review Introduction to Memory

26-1 How do psychologists describe the human memory system?
Memory is the persistence of learning over time. The Atkinson-Shiffrin classic three-stage memory model (encoding, storage, and retrieval) suggests that we (1) register fleeting sensory memories, some of which are (2) processed into conscious short-term memories, a tiny fraction of which are (3) encoded for long-term memory and, possibly, later retrieval. Contemporary memory researchers note that we also register some information automatically, bypassing the first two stages. And they prefer the term working memory (rather than short-term memory) to emphasize the active processing in the second stage.

Test Yourself 1. Memory includes (in alphabetical order) long-term memory, sensory memory, and working/short-term memory. What’s the correct order of these three memory stages? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. How have you used the three parts of your memory system (encoding, storage, and retrieval) in learning something new today?

Terms and Concepts to Remember
memory, p. 324
encoding, p. 325
storage, p. 325
retrieval, p. 325
sensory memory, p. 325
short-term memory, p. 325
long-term memory, p. 325
working memory, p. 326

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 27

How We Encode
What We Encode

Encoding: Getting Information In

27-1 What information do we encode automatically? What information do we encode effortfully, and how does the distribution of practice influence retention?

▸|| How We Encode

FIGURE 27.1 A modified three-stage processing model of memory Here we will focus on the encoding part of memory processing.

We will use a modified version of Richard Atkinson and Richard Shiffrin’s classic three-stage processing model of memory (1968) (FIGURE 27.1). We process some external stimuli consciously in our sensory memory, while other external events are processed beneath the radar of our conscious efforts. The events we notice and attend to are encoded, or processed in our working memory (which Atkinson and Shiffrin referred to as our short-term memory). Further processing and rehearsing encodes important parts of the event into our long-term memory, from which the information may later be retrieved. Here we will focus on the encoding part of that process. You encode some information, such as the route you walked to your last class, with great ease, freeing your memory system to focus on less familiar events. But to retain novel information, such as a friend’s new cellphone number, you need to pay attention and try hard.

[Diagram for Figure 27.1: external events → sensory input → sensory memory → (attention to important or novel information) → working/short-term memory → (encoding) → long-term memory, with retrieval back out of long-term memory; unconscious processing routes some input directly into long-term memory.]

Automatic Processing

Thanks to your brain’s capacity for simultaneous activity (for parallel processing), an enormous amount of multitasking goes on without your conscious attention. For example, without conscious effort you automatically process information about

▸ space. While studying, you often encode the place on a page where certain material appears; later, when struggling to recall that information, you may visualize its location.
▸ time. While going about your day, you unintentionally note the sequence of the day’s events. Later, when you realize you’ve left your coat somewhere, you can recreate that sequence and retrace your steps.
▸ frequency. You effortlessly keep track of how many times things happen, thus enabling you to realize “this is the third time I’ve run into her today.”

automatic processing unconscious encoding of incidental information, such as space, time, and frequency, and of well-learned information, such as word meanings.


▸ well-learned information. For example, when you see words in your native language, perhaps on the side of a delivery truck, you cannot help but register their meanings. At such times, automatic processing is so effortless that it is difficult to shut it off.

|| Later in this module, I’ll ask you to recall this sentence: The angry rioter threw the rock at the window. ||

Deciphering words was not always so easy. When you first learned to read, you sounded out individual letters to figure out what words they made. With effort, you plodded slowly through a mere 20 to 50 words on a page. Reading, like some other forms of processing, initially requires attention and effort, but with experience and practice becomes automatic. Imagine now learning to read reversed sentences like this: .citamotua emoceb nac gnissecorp luftroffE At first, this requires effort, but after enough practice, you would also perform this task much more automatically. We develop many skills in this way. We learn to drive, to text messages, to speak a new language first with great effort, then more automatically.
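The reversed sentence above can also be decoded mechanically rather than effortfully. As a tiny illustrative sketch (not part of the text), a one-line string reversal recovers it:

```python
# Reversing the character order recovers the original sentence.
reversed_text = ".citamotua emoceb nac gnissecorp luftroffE"
print(reversed_text[::-1])  # Effortful processing can become automatic.
```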

Effortful Processing

We encode and retain vast amounts of information automatically, but we remember other types of information, such as this module’s concepts, only with effort and attention (FIGURE 27.2). Effortful processing often produces durable and accessible memories. When learning novel information such as names, we can boost our memory through rehearsal, or conscious repetition. The pioneering researcher of verbal memory, German philosopher Hermann Ebbinghaus (1850–1909), showed this after becoming impatient with philosophical speculations about memory. Ebbinghaus decided he would scientifically study his own learning and forgetting of novel verbal materials.

FIGURE 27.2 Automatic versus effortful processing Some information, such as where you ate dinner yesterday, you process automatically. Other information, such as this module’s concepts, requires effort to encode and remember.


effortful processing encoding that requires attention and conscious effort.

spacing effect the tendency for distributed study or practice to yield better long-term retention than is achieved through massed study or practice.

rehearsal the conscious repetition of information, either to maintain it in consciousness or to encode it for storage.


To create novel verbal material for his learning experiments, Ebbinghaus formed a list of all possible nonsense syllables by sandwiching one vowel between two consonants. He then randomly selected a sample of the syllables, practiced them, and tested himself. To get a feel for his experiments, rapidly read aloud, eight times over, the following list (from Baddeley, 1982). Then try to recall the items: JIH, BAZ, FUB, YOX, SUJ, XIR, DAX, LEQ, VUM, PID, KEL, WAV, TUV, ZOF, GEK, HIW.

The day after learning such a list, Ebbinghaus could recall few of the syllables. But were they entirely forgotten? As FIGURE 27.3 portrays, the more frequently he repeated the list aloud on day 1, the fewer repetitions he required to relearn the list on day 2. Here, then, was a simple beginning principle: The amount remembered depends on the time spent learning. Even after we learn material, additional rehearsal (overlearning) increases retention. The point to remember: For novel verbal information, practice—effortful processing—does indeed make perfect.

Later research revealed more about how to lay down enduring memories. To paraphrase Ebbinghaus (1885), those who learn quickly also forget quickly. We retain information better when our rehearsal is distributed over time (as when learning classmates’ names), a phenomenon called the spacing effect. More than 300 experiments over the last century consistently reveal the benefits of spacing learning times (Cepeda et al., 2006). Massed practice (cramming) can produce speedy short-term learning and feelings of confidence. But distributed study time produces better long-term recall. After studying long enough to master the material, further study becomes inefficient, note Doug Rohrer and Harold Pashler (2007). Better to spend that extra reviewing time later—a day later if you need to remember something 10 days hence, or a month later if you need to remember something 6 months hence.
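Ebbinghaus’ recipe for study materials, every consonant-vowel-consonant string, with a random sample drawn for each list, is easy to reproduce. The sketch below is only illustrative: the English letter sets are an assumption (Ebbinghaus worked in German and excluded syllables that formed real words), and the list size of 16 simply matches the sample list above.

```python
import itertools
import random

# Generate all consonant-vowel-consonant (CVC) nonsense syllables,
# then randomly sample a study list, roughly as Ebbinghaus did.
VOWELS = "AEIOU"
CONSONANTS = "BCDFGHJKLMNPQRSTVWXYZ"

syllables = ["".join(s) for s in itertools.product(CONSONANTS, VOWELS, CONSONANTS)]
study_list = random.sample(syllables, 16)  # e.g., JIH, BAZ, FUB, ...
print(len(syllables))  # 21 * 5 * 21 = 2205 possible syllables
```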
FIGURE 27.3 Ebbinghaus’ retention curve Ebbinghaus found that the more times he practiced a list of nonsense syllables on day 1, the fewer repetitions he required to relearn it on day 2. Said simply, the more time we spend learning novel information, the more we retain. (From Baddeley, 1982.) [Graph: time in minutes taken to relearn the list on day 2 declines as the number of repetitions on day 1 rises from 8 to 64; as rehearsal increases, relearning time decreases.]

“He should test his memory by reciting the verses.” Abdur-Rahman Abdul Khaliq, “Memorizing the Quran”

“The mind is slow in unlearning what it has been long in learning.” Roman philosopher Seneca (4 B.C.–A.D. 65)

FIGURE 27.4 The serial position effect Immediately after Australian Prime Minister Kevin Rudd introduces this long line of officials to Afghan President Hamid Karzai, President Karzai will probably recall the names of the last few people best. But later Karzai may recall the first few people best. (From Craik & Watkins, 1973.) [Graph: percentage of words recalled by position of word in list; immediate recall is best for the last items (recency effect), while later recall is good only for the first items (primacy effect).]

In a 9-year experiment, Harry Bahrick and three of his family members (1993) practiced foreign language word translations for a given number of times, at intervals ranging from 14 to 56 days. Their consistent finding: The longer the space between practice sessions, the better their retention up to 5 years later. The practical implication? Spreading out learning—over a semester or a year, rather than over a shorter term—should help you not only on comprehensive final exams, but also in retaining the information for a lifetime.

Repeated quizzing of previously studied material also helps, a phenomenon that Henry Roediger and Jeffrey Karpicke (2006) call the testing effect, adding, “Testing is a powerful means of improving learning, not just assessing it.” In one of their studies, students recalled the meaning of 40 previously learned Swahili words much better if tested repeatedly than if they spent the same time restudying the words (Karpicke & Roediger, 2008). So here is another point to remember: Spaced study and self-assessment beat cramming.

Another phenomenon, the serial position effect, further illustrates the benefits of rehearsal. As an everyday parallel, imagine it’s your first day in a new job, and your manager is introducing co-workers. As you meet each one, you repeat (rehearse) all their names, starting from the beginning. By the time you meet the last person, you will have spent more time rehearsing the earlier names than the later ones; thus, the next day you will probably more easily recall the earlier names. Also, learning the first few names may interfere with your learning the later ones.

Experimenters have demonstrated the serial position effect by showing people a list of items (words, names, dates, even odors) and then immediately asking them to recall the items in any order (Reed, 2000). As people struggle to recall the list, they often remember the last and first items better than they do those in the middle (FIGURE 27.4). Perhaps because the last items are still in working memory, people briefly recall them especially quickly and well (a recency effect). But after a delay—after they shift their attention from the last items—their recall is best for the first items (a primacy effect). Sometimes, however, rehearsal is not enough to store new information for later recall (Craik & Watkins, 1973; Greene, 1987).
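The rehearsal arithmetic behind the primacy effect in the introductions example can be made explicit. This is a hypothetical count for illustration, not data from the text: if you re-rehearse the whole list from the start after each new name, the earliest names accumulate far more rehearsals than the latest ones.

```python
def rehearsal_counts(n_names):
    """Return rehearsals per name if, after meeting name k, you
    rehearse names 1..k once each: name i is rehearsed once for
    every name met at or after position i."""
    return [n_names - i for i in range(n_names)]

counts = rehearsal_counts(12)
print(counts[0], counts[-1])  # first name: 12 rehearsals; last name: 1
```

With 12 introductions, the first name is rehearsed 12 times and the last only once, which helps explain why the earlier names are easier to recall the next day.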
To understand why this happens, we need to know more about how we encode information for processing into long-term memory.


▸|| What We Encode

27-2 What effortful processing methods aid in forming memories?

Processing our sensory input is like sorting through e-mail. Some items we instantly discard. Others we open, read, and retain. We process information by encoding its meaning, encoding its image, or mentally organizing it.

Encoding: Getting Information In MODULE 27

Levels of Processing

serial position effect our tendency to recall best the last and first items in a list.

visual encoding the encoding of picture images.

acoustic encoding the encoding of sound, especially the sound of words.

semantic encoding the encoding of meaning, including the meaning of words.

When processing verbal information for storage, we usually encode its meaning, associating it with what we already know or imagine. Whether we hear eye-screem as “ice cream” or “I scream” depends on how the context and our experience guide us to interpret and encode the sounds. (Remember, our working memories interact with our long-term memories.) Can you repeat the sentence about the rioter that I gave you at this module’s beginning? (“The angry rioter threw . . .”) Perhaps, like those in an experiment by William Brewer (1977), you recalled the rioter sentence by the meaning you encoded when you read it (for example, “The angry rioter threw the rock through the window”) and not as it was written (“The angry rioter threw the rock at the window”). Referring to such recall, Gordon Bower and Daniel Morrow (1990) liken our minds to theater directors who, given a raw script, imagine a finished stage production. Asked later what we heard or read, we recall not the literal text but what we encoded. Thus, studying for an exam, you may remember your lecture notes rather than the lecture itself.

What kind of encoding do you think yields the best memory of verbal information? Visual encoding of images? Acoustic encoding of sounds? Semantic encoding of meaning? Each of these levels of processing has its own brain system (Poldrack & Wagner, 2004). And each can help. For example, acoustic encoding enhances the memorability and seeming truth of rhyming aphorisms. “What sobriety conceals, alcohol reveals” seems more accurate than “what sobriety conceals, alcohol unmasks” (McGlone & Tofighbakhsh, 2000). Attorney Johnnie Cochran’s celebrated plea to O. J. Simpson’s jury—“If the glove doesn’t fit, you must acquit”—was more easily remembered than it would have been had Cochran said, “If the glove doesn’t fit, you must find him not guilty!”

To compare visual, acoustic, and semantic encoding, Fergus Craik and Endel Tulving (1975) flashed a word at people. Then they asked a question that required the viewers to process the word at one of three levels: (1) visually (the appearance of the letters), (2) acoustically (the sound of the word), or (3) semantically (the meaning of the word). To experience the task yourself, rapidly answer the following questions:

Sample Questions to Elicit Processing (the word flashed is shown in parentheses; answer Yes or No):
1. Is the word in capital letters? (CHAIR)
2. Does the word rhyme with train? (brain)
3. Would the word fit in this sentence? “The girl put the ______ on the table.” (gun)

Which type of processing would best prepare you to recognize the words at a later time? In Craik and Tulving’s experiment, the deeper, semantic processing—question 3—yielded much better memory than the “shallow processing” elicited by question 2 and especially by question 1 (FIGURE 27.5).

But given too raw a script, we have trouble creating a mental model. Put yourself in the place of the students whom John Bransford and Marcia Johnson (1972) asked to remember the following recorded passage:

The procedure is actually quite simple. First you arrange things into different groups. Of course, one pile may be sufficient depending on how much there is to do. . . . After the procedure is completed one arranges the materials into different groups again. Then they can be put into their appropriate places. Eventually they will be used once more and the whole cycle will then have to be repeated. However, that is part of life.

|| How many Fs are in the following sentence? FINISHED FILES ARE THE RESULTS OF YEARS OF SCIENTIFIC STUDY COMBINED WITH THE EXPERIENCE OF YEARS. (Answer below.) || Partly because your initial processing of the letters was primarily acoustic rather than visual, you probably missed some of the six Fs, especially those that sound like a V rather than an F.


FIGURE 27.5 Levels of processing

Processing a word deeply—by its meaning (semantic encoding)—produces better recognition of it at a later time than does shallow processing by attending to its appearance or sound. (From Craik & Tulving, 1975.)


[FIGURE 27.5 bar graph: percentage who later recognized the word (10–100%), by type of encoding: semantic (type of . . . ), acoustic (rhymes with . . . ), visual (written in capitals?).]

When the students heard the paragraph you have just read, without a meaningful context, they remembered little of it. When told the paragraph described washing clothes (something meaningful to them), they remembered much more of it—as you probably could now after rereading it.

Processing a word deeply—by its meaning (semantic encoding)—produces better recognition later than does shallow processing, such as attending to its appearance (visual encoding) or sound (acoustic encoding) (Craik & Tulving, 1975). Such research suggests the benefits of rephrasing what we read and hear into meaningful terms. People often ask actors how they learn “all those lines.” They do it by first coming to understand the flow of meaning, report psychologist-actor team Helga Noice and Tony Noice (2006). “One actor divided a half-page of dialogue into three [intentions]: ‘to flatter,’ ‘to draw him out,’ and ‘to allay his fears.’” With this meaningful sequence in mind, the actor more easily remembers the lines.

From his experiments on himself, Ebbinghaus estimated that, compared with learning nonsense material, learning meaningful material required one-tenth the effort. As memory researcher Wayne Wickelgren (1977, p. 346) noted, “The time you spend thinking about material you are reading and relating it to previously stored material is about the most useful thing you can do in learning any new subject matter.” The point to remember: The amount remembered depends both on the time spent learning and on your making it meaningful.

We have especially good recall for information we can meaningfully relate to ourselves. Asked how well certain adjectives describe someone else, we often forget them; asked how well the adjectives describe us, we—especially those from individualistic Western cultures—remember the words well. This phenomenon is called the self-reference effect (Symons & Johnson, 1997; Wagar & Cohen, 2003).
So, you will profit from taking time to find personal meaning in what you are studying. Information deemed “relevant to me” is processed more deeply and remains more accessible.

Visual Encoding

Why is it that we struggle to remember formulas, definitions, and dates, yet we can easily remember where we were yesterday, who was with us, where we sat, and what we wore? One difference is the greater ease of remembering mental pictures. Our earliest memories—probably of something that happened at age 3 or 4—involve visual imagery. We more easily remember concrete words, which lend themselves to visual


mental images, than we do abstract, low-imagery words. (When I quiz you later, which three of these words—typewriter, void, cigarette, inherent, fire, process—will you most likely recall?) If you still recall the rock-throwing rioter sentence, it is probably not only because of the meaning you encoded but also because the sentence lent itself to a visual image. Memory for concrete nouns, such as “cigarette,” is aided by encoding them both semantically and visually (Marschark et al., 1987; Paivio, 1986). Two codes are better than one.

Thanks to this durability of vivid images, our memory of an experience is often colored by its best or worst moment—the best moment of pleasure or joy, and the worst moment of pain or frustration (Fredrickson & Kahneman, 1993). Recalling the high points while forgetting the mundane may explain the phenomenon of rosy retrospection (Mitchell et al., 1997): People tend to recall events such as a camping holiday more positively than they judged them at the time. The muggy heat and long lines of that visit to Disney World fade in the glow of vivid surroundings, food, and rides.

Imagery is at the heart of many mnemonic (nih-MON-ik) devices (so named after the Greek word for “memory”). Ancient Greek scholars and orators developed mnemonics to help them retrieve lengthy memorized passages and speeches. Some modern mnemonic devices rely on both acoustic and visual codes. For example, the peg-word system requires you to memorize a jingle: “One is a bun; two is a shoe; three is a tree; four is a door; five is a hive; six is sticks; seven is heaven; eight is a gate; nine is swine; ten is a hen.” Without much effort, you will soon be able to count by peg-words instead of numbers: bun, shoe, tree . . . and then to visually associate the peg-words with to-be-remembered items. Now you are ready to challenge anyone to give you a grocery list to remember. Carrots? Stick them into the imaginary bun. Milk? Fill the shoe with it. Paper towels? Drape them over the tree branch. Think bun, shoe, tree and you see their associated images: carrots, milk, paper towels. With few errors (Bugelski et al., 1968), you will be able to recall the items in any order and to name any given item.

Memory whizzes understand the power of such systems. A study of star performers in the World Memory Championships showed them not to have exceptional intelligence, but rather to be superior at using spatial mnemonic strategies (Maguire et al., 2003).
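At bottom, the peg-word system is an ordered pairing of pegs to items; the imagery does the memorizing, but the pairing itself is mechanical. A toy sketch (the function name and code are illustrative, not part of the text):

```python
# Peg words from the jingle quoted above: "one is a bun; two is a shoe; ..."
PEG_WORDS = ["bun", "shoe", "tree", "door", "hive",
             "sticks", "heaven", "gate", "swine", "hen"]

def peg_associations(items):
    """Pair each to-be-remembered item with a peg word, in list order.

    The mnemonic itself works through vivid imagery (carrots stuck into an
    imaginary bun); this sketch shows only the one-to-one pairing that the
    imagery rides on.
    """
    return dict(zip(PEG_WORDS, items))

groceries = ["carrots", "milk", "paper towels"]
pegs = peg_associations(groceries)
print(pegs["bun"])   # carrots
print(pegs["shoe"])  # milk
print(pegs["tree"])  # paper towels
```

Because each peg is tied to a number in the jingle, any item can be retrieved out of order: ask for peg three (tree) and the paper towels come with it.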

imagery mental pictures; a powerful aid to effortful processing, especially when combined with semantic encoding.

mnemonics [nih-MON-iks] memory aids, especially those techniques that use vivid imagery and organizational devices.

chunking organizing items into familiar, manageable units; often occurs automatically.

Organizing Information for Encoding

Mnemonic devices can also help organize material for our later retrieval. When Bransford and Johnson’s laundry paragraph became meaningful, we could mentally organize its sentences into a sequence. We process information more easily when we can organize it into meaningful units or structures.

Chunking

Glance for a few seconds at row 1 of FIGURE 27.6, then look away and try to reproduce what you saw. Impossible, yes? But you can easily reproduce the second row, which is no less complex. Similarly, you will probably find row 4 much easier to remember than row 3, although both contain the same letters. And you could remember the sixth cluster more easily than the fifth, although both contain the same words. As these units demonstrate, we more easily recall information when we can organize it into familiar, manageable chunks.

FIGURE 27.6 Effects of chunking on memory When we organize information into meaningful units, such as letters, words, and phrases, we recall it more easily. (From Hintzman, 1978.)

Chunking occurs so naturally that we take it for granted. If you are a native English speaker, you can reproduce perfectly the 150 or so line segments that make up the words in the three phrases of item 6 in Figure 27.6. It would astonish someone unfamiliar with the language. I am similarly awed at the ability of someone literate in Chinese to glance at FIGURE 27.7 and then to reproduce all of the strokes; or of a chess master who, after a 5-second look at the board during a game, can recall the exact positions of most of the pieces (Chase & Simon, 1973); or of a varsity basketball player who, given a 4-second glance at a basketball play, can recall the positions of the players (Allard & Burnett, 1985). We all remember information best when we can organize it into personally meaningful arrangements.

FIGURE 27.7 An example of chunking—for those who read Chinese After looking at these characters, can you reproduce them exactly? If so, you are literate in Chinese.

Chunking can also be used as a mnemonic technique to recall unfamiliar material. Want to remember the colors of the rainbow in order of wavelength? Think of the mnemonic ROY G. BIV (red, orange, yellow, green, blue, indigo, violet). Need to recall the names of North America’s five Great Lakes? Just remember HOMES (Huron, Ontario, Michigan, Erie, Superior). In each case, we chunk information into a more familiar form by creating a word (called an acronym) from the first letters of the to-be-remembered items.

|| In the discussion of encoding imagery, I gave you six words and told you I would quiz you about them later. How many of these words can you now recall? Of these, how many are high-imagery words? How many are low-imagery? (You can check your list against the six inverted words below.) ||

Typewriter, void, cigarette, inherent, fire, process

Hierarchies

When people develop expertise in an area, they process information not only in chunks but also in hierarchies composed of a few broad concepts divided and subdivided into narrower concepts and facts. This module, for example, aims not only to teach you the elementary facts of memory but also to help you organize these facts around broad principles, such as encoding; subprinciples, such as automatic and effortful processing; and still more specific concepts, such as meaning, imagery, and organization (FIGURE 27.8).

FIGURE 27.8 Organization benefits memory When we organize words or concepts into hierarchical groups, as illustrated here with concepts in this module, we remember them better than when we see them presented randomly.

[FIGURE 27.8 diagram: encoding (automatic or effortful) branches into meaning, imagery, and organization; organization subdivides into chunks and hierarchies.]

Organizing knowledge in hierarchies helps us retrieve information efficiently. Gordon Bower and his colleagues (1969) demonstrated this by presenting words either randomly or grouped into categories. When the words were organized into groups, recall was two to three times better. Such results show the benefits of organizing what you study—of paying special attention to module outlines, headings, preview questions, summaries, and self-test questions. If you can master a module’s concepts within their overall organization, your recall should be effective at test time. Taking lecture and text notes in outline format—a type of hierarchical organization—may also prove helpful.
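The first-letter acronym trick described earlier (ROY G. BIV for the rainbow colors, HOMES for the Great Lakes) is mechanical enough to sketch in a few lines of code; the helper function below is illustrative, not from the text:

```python
def acronym(words):
    """Chunk a list into a more familiar form: a word built from first letters."""
    return "".join(word[0].upper() for word in words)

# The two examples given in the module:
rainbow = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]
great_lakes = ["Huron", "Ontario", "Michigan", "Erie", "Superior"]

print(acronym(rainbow))      # ROYGBIV
print(acronym(great_lakes))  # HOMES
```

The psychological work, of course, runs the other way: the single familiar chunk (HOMES) serves as a retrieval cue that unpacks back into the five lake names.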


Review

Encoding: Getting Information In

27-1 What information do we encode automatically? What information do we encode effortfully, and how does the distribution of practice influence retention?

Automatic processing happens unconsciously, as we absorb information (space, time, frequency, well-learned material) in our environment. Effortful processing (of meaning, imagery, organization) requires conscious attention and deliberate effort. The spacing effect is our tendency to retain information more easily if we practice it repeatedly (spaced study) than if we practice it in one long session (massed practice, or cramming). The serial position effect is our tendency to recall the first item (the primacy effect) and the last item (the recency effect) in a long list more easily than we recall the intervening items.

27-2 What effortful processing methods aid in forming memories?

Visual encoding (of images) and acoustic encoding (of sounds) engage shallower processing than semantic encoding (of meaning). We process verbal information best when we make it relevant to ourselves (the self-reference effect). Encoding imagery, as when using some mnemonic devices, also supports memory, because vivid images are memorable. Chunking and hierarchies help organize information for easier retrieval.

Terms and Concepts to Remember

automatic processing, p. 327
effortful processing, p. 328
rehearsal, p. 328
spacing effect, p. 329
serial position effect, p. 330
visual encoding, p. 331
acoustic encoding, p. 331
semantic encoding, p. 331
imagery, p. 332
mnemonics [nih-MON-iks], p. 333
chunking, p. 333

Test Yourself

1. What would be the most effective strategy to learn and retain a list of names of key historical figures for a week? For a year?

(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself

1. Can you think of three ways to employ the principles in this module to improve your own learning and retention of important ideas?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 28

Storage: Retaining Information

Sensory Memory
Working/Short-Term Memory
Long-Term Memory
Storing Memories in the Brain

At the heart of memory is storage. If you later recall something you experienced, you must, somehow, have stored and retrieved it. Anything stored in long-term memory lies dormant, waiting to be reconstructed by a cue. What is our memory storage capacity? Let’s start with the first memory store noted in the three-stage processing model outlined in FIGURE 28.1 (Atkinson & Shiffrin, 1968)—our fleeting sensory memory.

[FIGURE 28.1 diagram: external events → sensory input → sensory memory → (attention to important or novel information) → working/short-term memory, which exchanges information with long-term memory through encoding and retrieving; unconscious processing also encodes some information directly into long-term memory.]

Sensory Memory

28-1 What is sensory memory?

How much of this page could you sense and recall with less exposure than a lightning flash? Researcher George Sperling (1960) asked people to do something similar when he showed them, for only one-twentieth of a second, three rows of three letters each (FIGURE 28.2). After the nine letters disappeared, people could recall only about half of them.

Was it because they had insufficient time to glimpse them? No, Sperling cleverly demonstrated that people actually could see and recall all the letters, but only momentarily. Rather than ask them to recall all nine letters at once, Sperling sounded a high, medium, or low tone immediately after flashing the nine letters. This cue directed participants to report only the letters of the top, middle, or bottom row, respectively. Now they rarely missed a letter, showing that all nine letters were momentarily available for recall.

Sperling’s experiment revealed that we have a fleeting photographic memory called iconic memory. For a few tenths of a second, our eyes register an exact representation

FIGURE 28.2 Momentary photographic memory When George Sperling flashed a group of letters similar to this for one-twentieth of a second, people could recall only about half the letters. But when signaled to recall a particular row immediately after the letters had disappeared, they could do so with near-perfect accuracy.

K Z R
Q B T
S G N

FIGURE 28.1 A modified three-stage processing model of memory Here we will focus on the storage aspects of our memory processing: sensory memory, working/short-term memory, and long-term memory.


of a scene and we can recall any part of it in amazing detail. But if Sperling delayed the tone signal by more than half a second, the image faded and participants again recalled only about half the letters. Our visual screen clears quickly, as new images are superimposed over old ones.

We also have an impeccable, though fleeting, memory for auditory stimuli, called echoic memory (Cowan, 1988; Lu et al., 1992). Picture yourself in conversation, as your attention veers to the TV. If your mildly irked companion tests your attention by asking, “What did I just say?” you can recover the last few words from your mind’s echo chamber. Auditory echoes tend to linger for 3 or 4 seconds.

Experiments on echoic and iconic memory have helped us understand the initial recording of sensory information in the memory system.

iconic memory a momentary sensory memory of visual stimuli; a photographic or picture-image memory lasting no more than a few tenths of a second.

echoic memory a momentary sensory memory of auditory stimuli; if attention is elsewhere, sounds and words can still be recalled within 3 or 4 seconds.

Working/Short-Term Memory

28-2 What are the duration and capacity of short-term and of long-term memory?

Among the vast amounts of information registered by our sensory memory, we illuminate some with our attentional flashlight. We also retrieve information from long-term storage for “on-screen” display. But unless our working memory meaningfully encodes or rehearses that information, it quickly disappears from our short-term store. During your finger’s trip from phone book to phone, a telephone number may evaporate.

To find out how quickly a short-term memory will disappear, Lloyd Peterson and Margaret Peterson (1959) asked people to remember three-consonant groups, such as CHJ. To prevent rehearsal, the researchers asked them, for example, to start at 100 and count aloud backward by threes. After 3 seconds, people recalled the letters only about half the time; after 12 seconds, they seldom recalled them at all (FIGURE 28.3). Without active processing, short-term memories have a limited life.

Short-term memory is limited not only in duration but also in capacity, typically storing about seven bits of information (give or take two). George Miller (1956) enshrined this recall capacity as the Magical Number Seven, plus or minus two. Not surprisingly, when some phone companies began requiring all callers to dial a

FIGURE 28.3 Short-term memory decay Unless rehearsed, verbal information may be quickly forgotten. (From Peterson & Peterson, 1959; see also Brown, 1958.) [Graph: percentage who recalled consonants (0–90%) against time in seconds between presentation of consonants and recall request (3–18 seconds, no rehearsal allowed), showing rapid decay with no rehearsal.]

|| The Magical Number Seven has become psychology’s contribution to an intriguing list of magic sevens—the seven wonders of the world, the seven seas, the seven deadly sins, the seven primary colors, the seven musical scale notes, the seven days of the week—seven magical sevens. ||


three-digit area code in addition to a seven-digit number, many people reported trouble retaining the just-looked-up number.

Our short-term recall is slightly better for random digits (as in a phone number) than for random letters, which may have similar sounds. It is slightly better for what we hear than for what we see. Both children and adults have short-term recall for roughly as many words as they can speak in 2 seconds (Cowan, 1994; Hulme & Tordoff, 1989). Compared with spoken English words, signs in American Sign Language take longer to articulate. And sure enough, short-term memory can hold fewer signs than spoken words (Wilson & Emmorey, 2006).

Without rehearsal, most of us actually retain in short-term memory only about four information chunks (for example, letters meaningfully grouped as BBC, FBI, KGB, CIA) (Cowan, 2001; Jonides et al., 2008). Suppressing rehearsal by saying the, the, the while hearing random digits also reduces memory to about four items. The basic principle: At any given moment, we can consciously process only a very limited amount of information.
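The regrouping just described, in which twelve unrelated letters become four familiar three-letter chunks that fit within our roughly four-slot capacity, can be sketched as follows (the helper function is illustrative, not from the text):

```python
def chunk(sequence, size):
    """Split a sequence into consecutive fixed-size groups."""
    return [sequence[i:i + size] for i in range(0, len(sequence), size)]

# Twelve unrelated letters would overwhelm short-term memory...
letters = "BBCFBIKGBCIA"

# ...but regrouped as four meaningful units, they occupy about four chunks.
print(chunk(letters, 3))  # ['BBC', 'FBI', 'KGB', 'CIA']
```

The code only cuts the string; what makes the trick work psychologically is that each three-letter group is already a meaningful, well-learned unit.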

Long-Term Memory

In Arthur Conan Doyle’s A Study in Scarlet, Sherlock Holmes offers a popular theory of memory capacity:


I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. . . . It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it, there comes a time when for every addition of knowledge you forget something that you knew before.

Clark’s nutcracker Among animals, one contender for champion memorist would be a mere birdbrain—the Clark’s Nutcracker—which during winter and early spring can locate up to 6000 caches of pine seeds it had previously buried (Shettleworth, 1993).

Contrary to Holmes’ belief, our capacity for storing long-term memories is essentially limitless. Our brains are not like attics, which once filled can store more items only if we discard old ones. The point is vividly illustrated by those who have performed phenomenal memory feats (TABLE 28.1). Consider the 1990s tests of psychologist Rajan Mahadevan’s memory.

TABLE 28.1 World Memory Championship Records

From world memory competitions, here are some current records, as of 2008:

Speed cards (shortest time to memorize a shuffled pack of 52 playing cards): 26 seconds
One-hour cards (most cards memorized in one hour; 52 points for every pack correct, 26 points if 1 mistake): 1404 points
Speed numbers (most random digits memorized in 5 minutes): 396 digits
Names and faces (most first and last names memorized in 15 minutes after being shown cards with faces; 1 point for every correctly spelled first or last name, 1/2 point for every phonetically correct but incorrectly spelled name): 181 points
Binary digits (most binary digits, e.g., 101101, memorized in 30 minutes when presented in rows of 30 digits): 4140 digits

Sources: www.usamemoriad.com and www.worldmemorychampionship.com


Given a block of 10 digits from the first 30,000 or so digits of pi, Rajan, after a few moments of mental searching for the string, would pick up the series from there, firing numbers like a machine gun (Delaney et al., 1999; Thompson et al., 1993). He could also repeat 50 random digits—backward. It is not a genetic gift, he said; anyone could learn to do it. But given the genetic influence on so many human traits, and knowing that Rajan’s father memorized Shakespeare’s complete works, one wonders. We are reminded that many psychological phenomena, including memory capacity, can be studied by means of different levels of analysis, including the biological.

|| Pi in the sky: In 2006, Japan’s Akira Haraguchi reportedly recited the first 100,000 digits of pi, topping the world record by over 30,000 digits (Associated Press, 2006). ||

Storing Memories in the Brain

28-3 How does the brain store our memories?

I marveled at my aging mother-in-law, a retired pianist and organist. At age 88 her blind eyes could no longer read music. But let her sit at a keyboard and she would flawlessly play any of hundreds of hymns, including ones she had not thought of for 20 years. Where did her brain store those thousands of sequenced notes?

For a time, some surgeons and memory researchers believed that flashbacks triggered by brain stimulation during surgery indicated that our whole past, not just well-practiced music, is “in there,” in complete detail, just waiting to be relived. But when Elizabeth Loftus and Geoffrey Loftus (1980) analyzed the vivid “memories” triggered by brain stimulation, they found that the seeming flashbacks appeared to have been invented, not relived.

Psychologist Karl Lashley (1950) further demonstrated that memories do not reside in single, specific spots. He trained rats to find their way out of a maze, then cut out pieces of their cortexes and retested their memory. Amazingly, no matter which small brain section he removed, the rats retained at least a partial memory of how to navigate the maze. So, despite the brain’s vast storage capacity, we do not store information as libraries store their books, in discrete, precise locations.

Synaptic Changes

Aplysia The California sea slug, which neuroscientist Eric Kandel studied for 45 years, has increased our understanding of the neural basis of learning.


Looking for clues to the brain’s storage system, contemporary memory researchers have searched for a memory trace. Although the brain represents a memory in distributed groups of neurons, those nerve cells must communicate through their synapses (Tsien, 2007). Thus, the quest to understand the physical basis of memory—for how information becomes incarnated in matter—has sparked study of the synaptic meeting places where neurons communicate with one another via their neurotransmitter messengers.

We know that experience does modify the brain’s neural networks; given increased activity in a particular pathway, neural interconnections form or strengthen. Eric Kandel and James Schwartz (1982) observed such changes in the sending neurons of a simple animal, the California sea slug, Aplysia. Its mere 20,000 or so nerve cells are unusually large and accessible, enabling the researchers to observe synaptic changes during learning. The sea slug can be classically conditioned (with electric shock) to reflexively withdraw its gills when squirted with water, much as a shell-shocked soldier jumps at the sound of a snapping twig. By observing the slug’s neural connections before and after conditioning, Kandel and Schwartz pinpointed changes. When learning occurs, the slug releases more of the neurotransmitter serotonin at certain synapses. These synapses then become more efficient at transmitting signals.

Increased synaptic efficiency makes for more efficient neural circuits. In experiments, rapidly stimulating certain memory-circuit connections has increased their sensitivity for hours or even weeks to come. The sending neuron now needs less prompting to release its neurotransmitter, and the receiving neuron’s receptor sites


FIGURE 28.4 Doubled receptor sites Electron microscope images show just one receptor site (gray) reaching toward a sending neuron before long-term potentiation (left) and two sites after LTP (right). A doubling of the receptor sites means that the receiving neuron has increased sensitivity for detecting the presence of the neurotransmitter molecules that may be released by the sending neuron. (From Toni et al., 1999.)


“The biology of the mind will be as scientifically important to this [new] century as the biology of the gene [was] to the twentieth century.”
Eric Kandel, acceptance remarks for the 2000 Nobel prize

may increase (FIGURE 28.4). This prolonged strengthening of potential neural firing, called long-term potentiation (LTP), provides a neural basis for learning and remembering associations (Lynch, 2002; Whitlock et al., 2006). Several lines of evidence confirm that LTP is a physical basis for memory:

• Drugs that block LTP interfere with learning (Lynch & Staubli, 1991).
• Mutant mice engineered to lack an enzyme needed for LTP can’t learn their way out of a maze (Silva et al., 1992).
• Rats given a drug that enhances LTP will learn a maze with half the usual number of mistakes (Service, 1994).
• Injecting rats with a chemical that blocks the preservation of LTP erases recent learning (Pastalkova et al., 2006).

|| Although ECT for depression disrupts memory for recent experiences, it leaves most memories intact. ||

Some memory-biology explorers have helped found pharmaceutical companies that are competing to develop and test memory-boosting drugs. Their target market includes millions of people with Alzheimer’s disease, millions more with mild cognitive impairment that often becomes Alzheimer’s, and countless millions who would love to turn back the clock on age-related memory decline. From expanding memories perhaps will come bulging profits.

One approach is developing drugs that boost production of the protein CREB, which can switch genes off or on. Genes code the production of protein molecules. With repeated neural firing, a nerve cell’s genes produce synapse-strengthening proteins, enabling LTP (Fields, 2005). Boosting CREB production might lead to increased production of proteins that help reshape synapses and consolidate a short-term memory into a long-term memory. Sea slugs, mice, and fruit flies with enhanced CREB production have displayed enhanced memories.

Another approach is developing drugs that boost glutamate, a neurotransmitter that enhances synaptic communication (LTP). It remains to be seen whether such drugs can boost memory without nasty side effects and without cluttering our minds with trivia best forgotten. In the meantime, one effective, safe, and free memory enhancer is already available on college campuses: study followed by adequate sleep!

After long-term potentiation has occurred, passing an electric current through the brain won’t disrupt old memories. But the current will wipe out very recent memories. Such is the experience both of laboratory animals and of depressed people given electroconvulsive therapy (ECT). A blow to the head can do the same. Football players and boxers momentarily knocked unconscious typically have no memory of events just before the knock-out (Yarnell & Lynch, 1970). Their working memory had no time to consolidate the information into long-term memory before the lights went out.

Stress Hormones and Memory

Researchers interested in the biology of the mind have also looked closely at the influence of emotions and stress hormones on memory. When we are excited or stressed, emotion-triggered stress hormones make more glucose energy available to fuel brain activity, signaling the brain that something important has happened. Moreover, the amygdala, two emotion-processing clusters in the limbic system, boosts activity and available proteins in the brain’s memory-forming areas (Buchanan, 2007; Kensinger, 2007). The result? Arousal can sear certain events into the brain, while disrupting memory for neutral events around the same time (Birnbaum et al., 2004; Brewin et al., 2007).

“Stronger emotional experiences make for stronger, more reliable memories,” says James McGaugh (1994, 2003). After traumatic experiences—a wartime ambush, a house fire, a rape—vivid recollections of the horrific event may intrude again and again. It is as if they were burned in. This makes adaptive sense: Memory serves to predict the future and to alert us to potential dangers.

Conversely, weaker emotion means weaker memories. People given a drug that blocks the effects of stress hormones will later have more trouble remembering the details of an upsetting story (Cahill, 1994). That connection is appreciated by those working to develop drugs that, when taken after a traumatic experience, might blunt intrusive memories. In one experiment, victims of car accidents, rapes, and other traumas received either one such drug, propranolol, or a placebo for 10 days following their horrific event. When tested three months later, half the placebo group but none of the drug-treated group showed signs of stress disorder (Pitman et al., 2002, 2005).

Emotion-triggered hormonal changes help explain why we long remember exciting or shocking events, such as our first kiss or our whereabouts when learning of a friend’s death. In a 2006 Pew survey, 95 percent of American adults said they could recall exactly where they were or what they were doing when they first heard the news of the 9/11 attacks. This perceived clarity of memories of surprising, significant events leads some psychologists to call them flashbulb memories.

Storage: Retaining Information MODULE 28

long-term potentiation (LTP) an increase in a synapse’s firing potential after brief, rapid stimulation. Believed to be a neural basis for learning and memory.

flashbulb memory a clear memory of an emotionally significant moment or event.

䉴 Severe stress sears in memories Significantly stressful events, such as the disastrous 2007 California wildfires, may be an indelible part of the memories of those who experienced them. (Photo: Spencer Platt/Getty Images)
It’s as if the brain commands, “Capture this!” The people who experienced a 1989 San Francisco earthquake did just that. A year and a half later, they had perfect recall of where they had been and what they were doing (verified by their recorded thoughts within a day or two of the quake). Others’ memories for the circumstances under which they merely heard about the quake were more prone to errors (Neisser et al., 1991; Palmer et al., 1991). Flashbulb memories that people relive, rehearse, and discuss may also come to err (Talarico et al., 2003). Although our flashbulb memories are noteworthy for their vividness and the confidence with which we recall them, misinformation can seep into them (Talarico & Rubin, 2007).

There are other limits to stress-enhanced remembering. When prolonged—as in sustained abuse or combat—stress can act like acid, corroding neural connections and shrinking the brain area (the hippocampus) that is vital for laying down memories.

|| If you suffered a traumatic experience, would you want to take a drug to blunt that memory? ||

|| Which is more important—your experiences or your memories of them? ||


THE FAR SIDE © 1993 FARWORKS INC./Dist. by UNIVERSAL PRESS SYNDICATE. Reprinted with permission. All rights reserved.

Moreover, when sudden stress hormones are flowing, older memories may be blocked. It is true for stressed rats trying to find their way to a hidden target (de Quervain et al., 1998). And it is true for those of us whose mind has gone blank while speaking in public.

Storing Implicit and Explicit Memories

amnesia the loss of memory.

implicit memory retention independent of conscious recollection. (Also called nondeclarative memory.)

explicit memory memory of facts and experiences that one can consciously know and “declare.” (Also called declarative memory.)

hippocampus a neural center that is located in the limbic system; helps process explicit memories for storage.

“The most important patient in the history of brain science.” So said the New York Times of H. M., revealed at his death in 2008 to be Henry Molaison. H. M. died at age 82 in a Connecticut nursing home.

http://www.nytimes.com/2008/12/05/us/05hm.html.

More facts of nature: All forest animals, to this very day, remember exactly where they were and what they were doing when they heard that Bambi’s mother had been shot.

A memory-to-be enters the cortex through the senses, then wends its way into the brain’s depths. Precisely where it goes depends on the type of information, as dramatically illustrated by those who, as in the case of my father mentioned earlier, suffer from a type of amnesia in which they are unable to form new memories.

The most famous case, a patient known to every neuroscientist as H. M., experienced in 1953 the necessary surgical removal of a brain area involved in laying down new conscious memories of facts and experiences. The brain tissue loss left his older memories intact. But converting new experiences to long-term storage was another matter. For example, over practice sessions H. M. became skilled at what is for anyone an initially difficult task: tracing the mirrored outline of a star. Yet, having no memory of doing the task, he remarked to researcher Brenda Milner, after many practice trials, that “this was easier than I thought it would be” (Carey, 2009). “I’ve known H. M. since 1962, and he still doesn’t know who I am,” noted his longtime researcher Suzanne Corkin (Adelson, 2005).

Neurologist Oliver Sacks (1985, pp. 26–27) described another such patient, Jimmie, who had brain damage. Jimmie had no memories—thus, no sense of elapsed time—beyond his injury in 1945. Asked in 1975 to name the U.S. President, he replied, “FDR’s dead. Truman’s at the helm.” When Jimmie gave his age as 19, Sacks set a mirror before him: “Look in the mirror and tell me what you see. Is that a 19-year-old looking out from the mirror?” Jimmie turned ashen, gripped the chair, cursed, then became frantic: “What’s going on? What’s happened to me? Is this a nightmare? Am I crazy? Is this a joke?” When his attention was diverted to some children playing baseball, his panic ended, the dreadful mirror forgotten.

Sacks showed Jimmie a photo from National Geographic. “What is this?” he asked.

“It’s the Moon,” Jimmie replied.

“No, it’s not,” Sacks answered. “It’s a picture of the Earth taken from the Moon.”

“Doc, you’re kidding? Someone would’ve had to get a camera up there!”

“Naturally.”

“Hell! You’re joking—how the hell would you do that?”

Jimmie’s wonder was that of a bright young man from 60 years ago reacting with amazement to his travel back to the future.

Careful testing of these unique people reveals something even stranger: Although incapable of recalling new facts or anything they have done recently, Jimmie and others with similar conditions can learn. Shown hard-to-find figures in pictures (in the Where’s Waldo? series), they can quickly spot them again later. They can find their way to the bathroom, though without being able to tell you where it is. They can learn to read mirror-image writing or do a jigsaw puzzle, and they have even been taught complicated job skills (Schacter, 1992, 1996; Xu & Corkin, 2001). And they can be classically conditioned. However, they do all these things with no awareness of having learned them.

These amnesia victims are in some ways like people with brain damage who cannot consciously recognize faces but whose physiological responses to familiar faces reveal an implicit (unconscious) recognition. Their behaviors challenge the idea that memory is a single, unified, conscious system. Instead, we seem to have two memory systems operating in tandem (FIGURE 28.5). Whatever has destroyed conscious recall in these individuals with amnesia has not destroyed their unconscious capacity for learning. They can learn how to do something—called implicit memory (nondeclarative memory). But they may not know and declare that they know—called explicit memory (declarative memory). Having read a story once, they will read it faster a second time, showing implicit memory. But there will be no explicit memory, for they cannot recall having seen the story before. If repeatedly shown the word perfume, they will not recall having seen it. But if asked the first word that comes to mind in response to the letters per, they say perfume, readily displaying their learning. Using such tasks, even Alzheimer’s patients, whose explicit memories for people and events are lost, display an ability to form new implicit memories (Lustig & Buckner, 2004).

FIGURE 28.5 Memory subsystems We process and store our explicit and implicit memories separately. Thus, one may lose explicit memory (becoming amnesic), yet display implicit memory for material one cannot consciously recall. Types of long-term memories:
- Explicit (declarative), with conscious recall; processed in the hippocampus: facts (general knowledge) and personally experienced events.
- Implicit (nondeclarative), without conscious recall; processed by other brain areas, including the cerebellum: skills (motor and cognitive) and classical conditioning.

The Hippocampus

These remarkable stories provoke us to wonder: Do our explicit and implicit memory systems involve separate brain regions? Brain scans, such as PET scans of people recalling words (Squire, 1992), and autopsies of people who had amnesia reveal that new explicit memories of names, images, and events are laid down via the hippocampus, a temporal lobe neural center that also forms part of the brain’s limbic system (FIGURE 28.6; Anderson et al., 2007). Damage to the hippocampus therefore disrupts some types of memory. Chickadees and other birds can store food in hundreds of places and return to these unmarked caches months later, but not if their hippocampus has been removed (Kamil & Cheng, 2001; Sherry & Vaccarino, 1989).

Like the cortex, the hippocampus is lateralized. (You have two of them, one just above each ear and about an inch and a half straight in.) Damage to one or the other seems to produce different results. With left-hippocampus damage, people have trouble remembering verbal information, but they have no trouble recalling visual designs and locations. With right-hippocampus damage, the problem is reversed (Schacter, 1996).

|| The two-track memory system reinforces an important principle of the way the brain handles information via parallel processing: Mental feats such as vision, thinking, and memory may seem to be single abilities, but they are not. Rather, we split information into different components for separate and simultaneous processing. ||


New research also pinpoints the functions of subregions of the hippocampus. One part is active as people learn to associate names with faces (Zeineh et al., 2003). Another part is active as memory whizzes engage in spatial mnemonics (Maguire et al., 2003b). The rear area, which processes spatial memory, also grows bigger the longer a London cabbie has been navigating the maze of city streets (Maguire et al., 2003a).

The hippocampus is active during slow-wave sleep, as memories are processed and filed for later retrieval. The greater the hippocampus activity during sleep after a training experience, the better the next day’s memory (Peigneux et al., 2004). But those memories are not permanently stored in the hippocampus. Instead, it seems to act as a loading dock where the brain registers and temporarily holds the elements of a remembered episode—its smell, feel, sound, and location. Then, like older files shifted to a basement storeroom, memories migrate for storage elsewhere. Removing the hippocampus 3 hours after rats learn the location of some tasty new food disrupts this process and prevents long-term memory formation; removal 48 hours later does not (Tse et al., 2007).

Sleep supports this memory consolidation. During sleep, our hippocampus and brain cortex display simultaneous activity rhythms, as if they were having a dialogue (Euston et al., 2007; Mehta, 2007). Researchers suspect that the brain is replaying the day’s experiences as it transfers them to the cortex for long-term storage. Once stored, our mental encores of these past experiences activate various parts of the frontal and temporal lobes (Fink et al., 1996; Gabrieli et al., 1996; Markowitsch, 1995). Recalling a telephone number and holding it in working memory, for example, would activate a region of the left frontal cortex; calling up a party scene would more likely activate a region of the right hemisphere.

FIGURE 28.6 The hippocampus Explicit memories for facts and episodes are processed in the hippocampus and fed to other brain regions for storage.

The Cerebellum

Although your hippocampus is a temporary processing site for your explicit memories, you could lose it and still lay down memories for skills and conditioned associations. Joseph LeDoux (1996) recounts the story of a brain-damaged patient whose amnesia left her unable to recognize her physician as, each day, he shook her hand and introduced himself. One day, after reaching for his hand, she yanked hers back, for the physician had pricked her with a tack in his palm. The next time he returned to introduce himself, she refused to shake his hand but couldn’t explain why. Having been classically conditioned, she just wouldn’t do it.

The cerebellum, the brain region extending out from the rear of the brainstem, plays a key role in forming and storing the implicit memories created by classical conditioning. With a damaged cerebellum, people cannot develop certain conditioned reflexes, such as associating a tone with an impending puff of air—and thus do not blink in anticipation of the puff (Daum & Schugens, 1996; Green & Woodruff-Pak, 2000). By methodically disrupting the function of different pathways in the cortex and cerebellum of rabbits, researchers have shown that rabbits, too, fail to learn a conditioned eyeblink response when the cerebellum is temporarily deactivated (Krupa et al., 1993; Steinmetz, 1999). Implicit memory formation needs the cerebellum.

Our dual explicit-implicit memory system helps explain infantile amnesia: The implicit reactions and skills we learned during infancy reach far into our future, yet as adults we recall nothing (explicitly) of our first three years. Children’s explicit memories have a seeming half-life. In one study, events experienced and discussed with one’s mother at age 3 were 60 percent remembered at age 7 but only 34 percent remembered at age 9 (Bauer et al., 2007). As adults, our conscious memory of our first three years is blank, partly because we index so much of our explicit memory by words that nonspeaking children have not learned, but also because the hippocampus is one of the last brain structures to mature.

䉴 The cerebellum The cerebellum plays an important part in our forming and storing of implicit memories.

Review Storage: Retaining Information

28-1 What is sensory memory?
As information enters the memory system through our senses, we briefly register and store visual images via iconic memory, in which picture images last no more than a few tenths of a second. We register and store sounds via echoic memory, where echoes of auditory stimuli may linger as long as 3 or 4 seconds.

28-2 What are the duration and capacity of short-term and of long-term memory?
At any given time, we can focus on and process only about seven items of information (either new or retrieved from our memory store). Without rehearsal, information disappears from short-term memory within seconds. Our capacity for storing information permanently in long-term memory is essentially unlimited.

28-3 How does the brain store our memories?
Researchers are exploring memory-related changes within and between single neurons. Long-term potentiation (LTP) appears to be the neural basis of learning and memory. Stress triggers hormonal changes that arouse brain areas and can produce indelible memories. We are particularly likely to remember vivid events that form flashbulb memories. We have two memory systems. Explicit (declarative) memories of general knowledge, facts, and experiences are processed by the hippocampus. Implicit (nondeclarative) memories of skills and conditioned responses are processed by other parts of the brain, including the cerebellum.

Terms and Concepts to Remember iconic memory, p. 336 echoic memory, p. 337 long-term potentiation (LTP), p. 340 flashbulb memory, p. 341

amnesia, p. 342 implicit memory, p. 343 explicit memory, p. 343 hippocampus, p. 343

Test Yourself 1. Your friend tells you that her father experienced brain damage in an accident. She wonders if psychology can explain why he can still play checkers very well but has a hard time holding a sensible conversation. What can you tell her? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. Can you name an instance in which stress has helped you remember something, and another instance in which stress has interfered with remembering something?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 29

Retrieval: Getting Information Out

29-1 How do we get information out of memory?

|| Later in this module, I’ll ask you to recall this sentence: The fish attacked the swimmer. ||

To remember an event requires more than getting it in (encoding) and retaining it (storage). To most people, memory is recall, the ability to retrieve information not in conscious awareness. To a psychologist, memory is any sign that something learned has been retained. So recognizing or more quickly relearning information also indicates memory (FIGURE 29.1).

FIGURE 29.1 A modified three-stage processing model of memory External events → sensory input → sensory memory → (attention to important or novel information) → working/short-term memory → (encoding) → long-term memory, with retrieval returning information from long-term memory to working memory; some information also reaches long-term memory through unconscious processing. Here we will focus on the retrieval aspects of our memory processing.

recall a measure of memory in which the person must retrieve information learned earlier, as on a fill-in-the-blank test.

recognition a measure of memory in which the person need only identify items previously learned, as on a multiple-choice test.

relearning a measure of memory that assesses the amount of time saved when learning material for a second time.

䉴 Remembering things past Even if Oprah Winfrey and Brad Pitt had not become famous, their high school classmates would most likely still recognize their yearbook photos. (Both photos: Spanky’s Yearbook Archive)

Long after you cannot recall most of the people in your high school graduating class, you may still be able to recognize their yearbook pictures from a photographic lineup and pick their names from a list of names. Harry Bahrick and his colleagues (1975) reported that people who had graduated 25 years earlier could not recall many of their old classmates, but they could recognize 90 percent of their pictures and names. If you are like most students, you, too, could likely recognize more names of the Seven Dwarfs than you could recall (Miserandino, 1991).

Our recognition memory is impressively quick and vast. “Is your friend wearing a new or old outfit?” “Old.” “Is this five-second movie clip from a film you’ve ever seen?” “Yes.” “Have you ever seen this person before—this minor variation on the same old human features (two eyes, one nose, and so on)?” “No.” Before the mouth can form our answer to any of millions of such questions, the mind knows, and knows that it knows.

Our speed at relearning also reveals memory. If you once learned something and then forgot it, you probably will relearn it more quickly your second time around. When you study for a final exam or resurrect a language used in early childhood, the relearning is easier. Tests of recognition and of time spent relearning confirm the point: We remember more than we can recall.


Retrieval Cues

Imagine a spider suspended in the middle of her web, held up by the many strands extending outward from her in all directions to different points (perhaps a window sill, a tree branch, a leaf on a shrub). If you were to trace a pathway to the spider, you would first need to create a path from one of these anchor points and then follow the strand down into the web. The process of retrieving a memory follows a similar principle, because memories are held in storage by a web of associations, each piece of information interconnected with others. When you encode into memory a target piece of information, such as the name of the person sitting next to you in class, you associate with it other bits of information about your surroundings, mood, seating position, and so on. These bits can serve as retrieval cues, anchor points you can use to access the target information when you want to retrieve it later. The more retrieval cues you have, the better your chances of finding a route to the suspended memory.

Mnemonic devices (memory aids that use vivid images or organizational devices) provide us with handy retrieval cues. But the best retrieval cues come from associations we form at the time we encode a memory. Tastes, smells, and sights often evoke our recall of associated episodes. To call up visual cues when trying to recall something, we may mentally place ourselves in the original context. After losing his sight, John Hull (1990, p. 174) described his difficulty recalling such details: “I knew I had been somewhere, and had done particular things with certain people, but where? I could not put the conversations . . . into a context. There was no background, no features against which to identify the place. Normally, the memories of people you have spoken to during the day are stored in frames which include the background.” The features Hull was mourning are the strands we activate to retrieve a specific memory from its web of associations.

Philosopher-psychologist William James referred to this process, which we call priming, as the “wakening of associations.” Often our associations are activated, or primed, without our awareness. As FIGURE 29.2 indicates, seeing or hearing the word rabbit primes associations with hare, even though we may not recall having seen or heard rabbit. Priming is often “memoryless memory”—invisible memory without explicit remembering. If, walking down a hallway, you see a poster of a missing child, you will then unconsciously be primed to interpret an ambiguous adult-child interaction as a possible kidnapping (James, 1986). Although you don’t consciously remember the poster, it predisposes your interpretation. Meeting someone who reminds us of someone we’ve previously met can awaken our associated feelings about that earlier person, which may transfer into the new context (Andersen & Saribay, 2005; Lewicki, 1985). Even subliminal stimuli can briefly prime responses to later stimuli.

FIGURE 29.2 Priming—awakening associations After seeing or hearing rabbit, we are later more likely to spell the spoken word hair/hare as h-a-r-e. The spreading of associations unconsciously activates related associations. This phenomenon is called priming. (Adapted from Bower, 1986.) Seeing or hearing the word rabbit → activates the concept → primes spelling the spoken word hair/hare as h-a-r-e.

priming the activation, often unconsciously, of particular associations in memory.

déjà vu that eerie sense that “I’ve experienced this before.” Cues from the current situation may subconsciously trigger retrieval of an earlier experience.

mood-congruent memory the tendency to recall experiences that are consistent with one’s current good or bad mood.

|| Multiple-choice questions test our a. recall. b. recognition. c. relearning. Fill-in-the-blank questions test our ________. (Answers below.) || Multiple-choice questions test recognition. Fill-in-the-blank questions test recall.

|| Ask a friend two rapid-fire questions: (a) How do you pronounce the word spelled by the letters s-h-o-p? (b) What do you do when you come to a green light? If your friend answers “stop” to the second question, you have demonstrated priming. ||

© The New Yorker Collection, 1993, Michael Maslin from cartoonbank.com. All rights reserved. “Let me refresh your memory. It was the night before Christmas and all through the house not a creature was stirring until you landed a sled, drawn by reindeer, on the plaintiff’s home, causing extensive damage to the roof and chimney.”
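For readers who think in code, the web-of-associations account of retrieval cues reads like a small spreading-activation algorithm, and it can be caricatured in a few lines. This is only an illustrative sketch, not a model from the text: the class name, the node names, the association strengths, and the decay parameter are all invented here.

```python
# Toy spreading-activation model of memory retrieval.
# All names, strengths, and the decay factor are hypothetical.

from collections import defaultdict

class AssociationWeb:
    def __init__(self):
        # node -> {neighbor: association strength}
        self.edges = defaultdict(dict)

    def associate(self, a, b, strength=1.0):
        # Associations are bidirectional strands in the "web."
        self.edges[a][b] = strength
        self.edges[b][a] = strength

    def retrieve(self, cues, decay=0.5, steps=2):
        """Spread activation outward from the retrieval cues for a few
        steps; more (and stronger) cue-to-target strands yield higher
        activation on the target."""
        activation = defaultdict(float)
        frontier = {cue: 1.0 for cue in cues}
        for _ in range(steps):
            next_frontier = defaultdict(float)
            for node, act in frontier.items():
                for neighbor, strength in self.edges[node].items():
                    next_frontier[neighbor] += act * strength * decay
            for node, act in next_frontier.items():
                activation[node] += act
            frontier = next_frontier
        # The most activated non-cue node is the "remembered" item.
        candidates = {n: a for n, a in activation.items() if n not in cues}
        return max(candidates, key=candidates.get) if candidates else None

web = AssociationWeb()
# Encoding a classmate's (hypothetical) name alongside bits of context:
web.associate("back row seat", "Jess", 0.9)
web.associate("coffee smell", "Jess", 0.6)
web.associate("coffee smell", "cafe", 0.8)

print(web.retrieve({"back row seat", "coffee smell"}))  # "Jess"
```

With both context cues active, activation converges on the target; with fewer strands in place, retrieval is weaker, mirroring the point that more retrieval cues give more routes to the suspended memory.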

Context Effects

29-2 How do external contexts and internal emotions influence memory retrieval?

FIGURE 29.3 The effects of context on memory Words heard underwater are best recalled underwater; words heard on land are best recalled on land. The percentage of words recalled was greater when the learning and testing contexts were the same (water/water, land/land) than when they differed (water/land, land/water). (Adapted from Godden & Baddeley, 1975.)

Putting yourself back in the context where you experienced something can prime your memory retrieval. Duncan Godden and Alan Baddeley (1975) discovered this when they had scuba divers listen to a list of words in two different settings, either 10 feet underwater or sitting on the beach. As FIGURE 29.3 illustrates, the divers recalled more words when they were retested in the same place.

You may have experienced similar context effects. Consider this scenario: While taking notes from this book, you realize you need to sharpen your pencil. You get up and walk downstairs, but then you cannot remember why. After returning to your desk it hits you: “I wanted to sharpen this pencil!” What happens to create this frustrating experience? In one context (desk, reading psychology), you realize your pencil needs sharpening. When you go downstairs into a different context, you have few cues to lead you back to that thought. When you are once again at your desk, you are back in the context in which you encoded the thought “This pencil is dull.”

In several experiments, Carolyn Rovee-Collier (1993) found that a familiar context can activate memories even in 3-month-olds. After infants learned that kicking a crib mobile would make it move (via a connecting ribbon from the ankle), the infants kicked more when tested again in the same crib with the same bumper than when in a different context.

Sometimes, being in a context similar to one we’ve been in before may trigger the experience of déjà vu (French for “already seen”). Two-thirds of us have experienced this fleeting, eerie sense that “I’ve been in this exact situation before,” but it happens most commonly to well-educated, imaginative young adults, especially when tired or stressed (Brown, 2003, 2004; McAneny, 1996). Some wonder, “How could I recognize a situation I’m experiencing for the first time?” Others may think of reincarnation (“I must have experienced this in a previous life”) or precognition (“I viewed this scene in my mind before experiencing it”). Posing the question differently (“Why do I feel as though I recognize this situation?”), we can see how our memory system might produce déjà vu (Alcock, 1981). The current situation may be loaded with cues that unconsciously retrieve an earlier, similar experience. (We take in and retain vast amounts of information while hardly noticing and often forgetting where it came from.) Thus, if in a similar context you see a stranger who looks and walks like one of your friends, the similarity may give rise to an eerie feeling of recognition. Having awakened a shadow of that earlier experience, you may think, “I’ve seen that person in this situation before.”

Or perhaps, suggests James Lampinen (2002), a situation seems familiar when moderately similar to several events. Imagine you briefly encounter my dad, my brothers, my sister, my children, and a few weeks later meet me. You might think, “I’ve been with this guy before.” Although no one in my family looks or acts just like me (lucky them), their looks and gestures are somewhat like mine and I might form a “global match” to what you had experienced.

Yet another theory, among more than 50 proposed, attributes déjà vu to our dual processing. Recall that we assemble our perceptions from information processing that occurs simultaneously on multiple tracks. If there’s a slight neural hiccup and one track’s signal is delayed, perhaps it feels like a repeat of the earlier one, creating an illusion that we are now reexperiencing something (Brown, 2004b).


“Do you ever get that strange feeling of vujà dé? Not déjà vu; vujà dé. It’s the distinct sense that, somehow, something just happened that has never happened before. Nothing seems familiar. And then suddenly the feeling is gone. Vujà dé.” George Carlin (1937–2008), in Funny Times, December 2001

Associated words, events, and contexts are not the only retrieval cues. Events in the past may have aroused a specific emotion that later primes us to recall its associated events. Cognitive psychologist Gordon Bower (1983) explained it this way: “An emotion is like a library room into which we place memory records. We best retrieve those records by returning to that emotional room.” What we learn in one state—be it drunk or sober—may be more easily recalled when we are again in that state, a subtle phenomenon called state-dependent memory. What people learn when drunk they don’t recall well in any state (alcohol disrupts storage). But they recall it slightly better when again drunk. Someone who hides money when drunk may forget the location until drunk again. Our mood states provide an example of memory’s state dependence. Emotions that accompany good or bad events become retrieval cues (Fiedler et al., 2001). Thus, our memories are somewhat mood-congruent. If you’ve had a bad evening—your date never showed, your Toledo Mud Hens hat disappeared, your TV went out 10 minutes before the end of a mystery—your gloomy mood may facilitate recalling other bad times. Being depressed sours memories by priming negative associations, which we then use to explain our current mood. If put in a buoyant mood—whether under hypnosis or just by the day’s events (a World Cup soccer victory for the German participants in one study)—people recall the world through rose-colored glasses (DeSteno et al., 2000; Forgas et al., 1984; Schwarz et al., 1987). They judge themselves competent and effective, other people benevolent, happy events more likely. Knowing this mood-memory connection, we should not be surprised that in some studies currently depressed people recall their parents as rejecting, punitive, and guiltpromoting, whereas formerly depressed people describe their parents much as do those who have never suffered depression (Lewinsohn & Rosenbaum, 1987; Lewis,

©The New Yorker Collection, 2005 David Sipress from cartoonbank.com. All rights reserved.

Moods and Memories

“I can’t remember what we’re arguing about, either. Let’s keep yelling, and maybe it will come back to us.”

“When a feeling was there, they felt as if it would never go; when it was gone, they felt as if it had never been; when it returned, they felt as if it had never gone.” George MacDonald, What’s Mine’s Mine, 1886


MODULE 29 Retrieval: Getting Information Out

|| Moods influence not only our memories but also how we interpret other people’s behavior. In a bad mood we read someone’s look as a glare and feel even worse; in a good mood we encode the same look as interest and feel even better. Passions exaggerate. ||

|| Do you remember the gist of the sentence I asked you to remember at the beginning of this module? If not, does the word shark help? Experiments show that shark more readily cues the image you stored than does the sentence’s actual word, fish (Anderson et al., 1976). ||

1992). Similarly, adolescents’ ratings of parental warmth in one week give little clue to how they will rate their parents six weeks later (Bornstein et al., 1991). When teens are down, their parents seem inhuman; as their mood brightens, their parents morph from devils into angels. You and I may nod our heads knowingly. Yet, in a good or bad mood, we persist in attributing to reality our own changing judgments and memories. Our mood’s effect on retrieval helps explain why our moods persist. When happy, we recall happy events and therefore see the world as a happy place, which helps prolong our good mood. When depressed, we recall sad events, which darkens our interpretations of current events. For those of us with a predisposition to depression, this process can help maintain a vicious, dark cycle.

Review

Retrieval: Getting Information Out

29-1 How do we get information out of memory?
Recall is the ability to retrieve information not in conscious awareness; a fill-in-the-blank question tests recall. Recognition is the ability to identify items previously learned; a multiple-choice question tests recognition. Relearning is the ability to master previously stored information more quickly than you originally learned it. Retrieval cues catch our attention and tweak our web of associations, helping to move target information into conscious awareness. Priming is the process of activating associations (often unconsciously).

29-2 How do external contexts and internal emotions influence memory retrieval?
The context in which we originally experienced an event or encoded a thought can flood our memories with retrieval cues, leading us to the target memory. In a different but similar context, such cues may trick us into retrieving a memory, a feeling known as déjà vu. Specific emotions can prime us to retrieve memories consistent with that state. Mood-congruent memory, for example, primes us to interpret others’ behavior in ways consistent with our current emotions.

Terms and Concepts to Remember
recall, p. 346
recognition, p. 346
relearning, p. 346
priming, p. 347
déjà vu, p. 348
mood-congruent memory, p. 349

Test Yourself 1. You have just watched a movie that includes a chocolate factory. After the chocolate factory is out of mind, you nevertheless feel a strange urge for a chocolate bar. How do you explain this in terms of priming? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. What sort of mood have you been in lately? How has your mood colored your memories, perceptions, and expectations?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 30 Forgetting, Memory Construction, and Improving Memory

Forgetting
Memory Construction
Improving Memory

Forgetting

30-1 Why do we forget?

Three sins of forgetting:

䉴 Absent-mindedness—inattention to details leads to encoding failure (our mind is elsewhere as we lay down the car keys).

“Amnesia seeps into the crevices of our brains, and amnesia heals.” Joyce Carol Oates, “Words Fail, Memory Blurs, Life Wins,” 2001

Robert Hanashiro, USA Today

Amid all the applause for memory—all the efforts to understand it, all the books on how to improve it—have any voices been heard in praise of forgetting? William James (1890, p. 680) was such a voice: “If we remembered everything, we should on most occasions be as ill off as if we remembered nothing.” To discard the clutter of useless or out-of-date information—where we parked the car yesterday, a friend’s old phone number, restaurant orders already cooked and served—is surely a blessing.

The Russian memory whiz Shereshevskii accumulated a junk heap of memories that haunted him. They dominated his consciousness. He had difficulty thinking abstractly—generalizing, organizing, evaluating. After reading a story, he could recite it but would struggle to summarize its gist.

A more recent case of a life overtaken by memory is “A. J.,” whose experience has been studied and verified by a University of California at Irvine research team (Parker et al., 2006). A. J., who has identified herself as Jill Price, describes her memory as “like a running movie that never stops. It’s like a split screen. I’ll be talking to someone and seeing something else. . . . Whenever I see a date flash on the television (or anywhere for that matter) I automatically go back to that day and remember where I was, what I was doing, what day it fell on, and on and on and on and on. It is nonstop, uncontrollable, and totally exhausting.”

A good memory is helpful, but so is the ability to forget. If a memory-enhancing pill becomes available, it had better not be too effective. More often, however, our memory dismays and frustrates us. Memories are quirky. My own memory can easily call up such episodes as that wonderful first kiss with the woman I love or trivial facts like the air mileage from London to Detroit. Then it abandons me when I discover I have failed to encode, store, or retrieve my new colleague’s name or where I left my sunglasses.
Memory researcher Daniel Schacter (1999) enumerates seven ways our memories fail us—the seven sins of memory, he calls them:

The woman who can’t forget “A. J.” in real life is Jill Price, who, with writer Bart Davis, told her story in a 2008 published memoir. Price remembers every day of her life since age 14 with detailed clarity, including both the joys and the unforgotten hurts.

|| Cellist Yo-Yo Ma forgot his 266-yearold, $2.5 million cello in a New York taxi. (He later recovered it.) ||

䉴 Transience—storage decay over time (after we part ways with former classmates, unused information fades).

䉴 Blocking—inaccessibility of stored information (seeing an actor in an old movie, we feel the name on the tip of our tongue but experience retrieval failure—we cannot get it out).

Three sins of distortion:

䉴 Misattribution—confusing the source of information (putting words in someone else’s mouth or remembering a dream as an actual happening).

䉴 Suggestibility—the lingering effects of misinformation (a leading question—“Did Mr. Jones touch your private parts?”—later becomes a young child’s false memory).

351


©The New Yorker Collection, 2007 Robert Leighton from cartoonbank.com. All rights reserved.

䉴 Bias—belief-colored recollections (current feelings toward a friend may color our recalled initial feelings).

One sin of intrusion:

䉴 Persistence—unwanted memories (being haunted by images of a sexual assault).

Let’s first consider the sins of forgetting, then those of distortion and intrusion.

Encoding Failure

Much of what we sense we never notice, and what we fail to encode, we will never remember (FIGURE 30.1). Age can affect encoding efficiency. The brain areas that jump into action when young adults encode new information are less responsive in older adults. This slower encoding helps explain age-related memory decline (Grady et al., 1995).

“Oh, is that today?”

FIGURE 30.1 Forgetting as encoding failure We cannot remember what we have not encoded.

[Diagram: External events → Sensory memory → (Attention) → Working/short-term memory → (Encoding) → Long-term memory. Encoding failure leads to forgetting.]

FIGURE 30.2 Test your memory Which one of these pennies is the real thing? (If you live outside the United States, try drawing one of your own country’s coins.) (From Nickerson & Adams, 1979.) Answer below.

“Each of us finds that in [our] own life every moment of time is completely filled. [We are] bombarded every second by sensations, emotions, thoughts . . . ninetenths of which [we] must simply ignore. The past [is] a roaring cataract of billions upon billions of such moments: Any one of them too complex to grasp in its entirety, and the aggregate beyond all imagination. . . . At every tick of the clock, in every inhabited part of the world, an unimaginable richness and variety of ‘history’ falls off the world into total oblivion.”

But no matter how young we are, we selectively attend to few of the myriad sights and sounds continually bombarding us. Consider this example: If you live in North America, Britain, or Australia, you have looked at thousands of pennies in your lifetime. You can surely recall their color and size, but can you recall what the side with the head looks like? If not, let’s make the memory test easier: If you are familiar with U.S. coins, can you recognize the real thing in FIGURE 30.2? Most people cannot (Nickerson & Adams, 1979). Of the eight critical features (Lincoln’s head, date, “In God we trust,” and so on), the average person spontaneously remembers only three. Likewise, few British people can draw from memory the one-pence coin (Richardson, 1993). The details of these coins are not very meaningful—nor are they essential for distinguishing them from other coins—and few of us have made the effort to encode them. We encode some information—where we had dinner yesterday—automatically; other types of information—like the concepts in this module—require effortful processing. Without effort, many memories never form.

English novelist-critic C. S. Lewis (1967)

[Figure 30.2 shows six versions of the penny, labeled (a) through (f).]

The first penny (a) is the real penny.


[Graph for Figure 30.3: percentage of list retained when relearning, plotted over the 1 to 30 days since learning the list. Retention drops, then levels off.]

Hermann Ebbinghaus (1850–1909) Bettmann/Corbis

Storage Decay

FIGURE 30.4 The forgetting curve for Spanish learned in school Compared with people just completing a Spanish course, those 3 years out of the course remembered much less. Compared with the 3-year group, however, those who studied Spanish even longer ago did not forget much more. (Adapted from Bahrick, 1984.)

Even after encoding something well, we sometimes later forget it. To study the durability of stored memories, German philosopher Hermann Ebbinghaus (1885) learned lists of nonsense syllables (for example, DAH) and measured how much he retained when relearning each list, from 20 minutes to 30 days later. The result, confirmed by later experiments, was his famous forgetting curve: The course of forgetting is initially rapid, then levels off with time (FIGURE 30.3; Wixted & Ebbesen, 1991). One such experiment was Harry Bahrick’s (1984) study of the forgetting curve for Spanish vocabulary learned in school. Compared with those just completing a high school or college Spanish course, people 3 years out of school had forgotten much of what they had learned (FIGURE 30.4). However, what people remembered then, they still remembered 25 and more years later. Their forgetting had leveled off. One explanation for these forgetting curves is a gradual fading of the physical memory trace. Cognitive neuroscientists are getting closer to solving the mystery of

FIGURE 30.3 Ebbinghaus’ forgetting curve After learning lists of nonsense syllables, Ebbinghaus studied how much he retained up to 30 days later. He found that memory for novel information fades quickly, then levels out. (Adapted from Ebbinghaus, 1885.)

[Graph for Figure 30.4: percentage of original vocabulary retained, plotted over the 1 to 49½ years after completion of the Spanish course. Retention drops, then levels off.]

Andrew Holbrooke/Corbis


the physical storage of memory and are increasing our understanding of how memory storage could decay. But memories fade for other reasons, including the accumulation of learning that disrupts our retrieval.
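The qualitative shape of these forgetting curves—steep early loss that flattens toward a floor—can be mimicked with a simple decay model. This is an illustrative sketch only: the exponential form, the 20 percent retention floor, and the time constant are assumptions chosen to mirror the curves’ shape, not formulas from Ebbinghaus or Bahrick.

```python
import math

def retention(t, floor=0.2, s=2.0):
    """Toy model of a forgetting curve: rapid early decay
    that levels off at a floor. All parameters are illustrative:
    t     -- time since learning (days, or years for Bahrick-style data)
    floor -- fraction still retained once forgetting levels off
    s     -- time constant controlling how fast early forgetting is
    """
    return floor + (1 - floor) * math.exp(-t / s)

# Early losses are steep; later losses are tiny. For example,
# retention(1) - retention(2) dwarfs retention(20) - retention(30).
```

A single decaying exponential with a floor captures both findings described above: most forgetting happens soon after learning, and whatever survives the first interval tends to persist.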

Retrieval Failure

|| Deaf persons fluent in sign language experience a parallel “tip of the fingers” phenomenon (Thompson et al., 2005). ||

FIGURE 30.5 Retrieval failure We store in long-term memory what’s important to us or what we’ve rehearsed. But sometimes even stored information cannot be accessed, which leads to forgetting.

We have seen that forgotten events are like books you can’t find in your campus library—some because they were never acquired (not encoded), others because they were discarded (stored memories decay). But there is a third possibility: The book may be there but inaccessible because we don’t have enough information to look it up and retrieve it. How frustrating when we know information is “in there,” but we cannot get it out (FIGURE 30.5), as when a name lies poised on the tip of our tongue, waiting to be retrieved. Given retrieval cues (“It begins with an M”), we may easily retrieve the elusive memory. Retrieval problems contribute to the occasional memory failures of older adults, who more frequently are frustrated by tip-of-the-tongue forgetting (Abrams, 2008). Often, forgetting is not memories discarded but memories unretrieved.

[Diagram: External events → Sensory memory → (Attention) → Working/short-term memory → (Encoding) → Long-term memory → (Retrieval). Retrieval failure leads to forgetting.]

Interference

proactive interference the disruptive effect of prior learning on the recall of new information.

retroactive interference the disruptive effect of new learning on the recall of old information.

Learning some items may interfere with retrieving others, especially when the items are similar. If someone gives you a phone number, you may be able to recall it later. But if two more people give you their numbers, each successive number will be more difficult to recall. Likewise, if you buy a new combination lock, your memory of the old one may interfere. Such proactive (forward-acting) interference occurs when something you learned earlier disrupts your recall of something you experience later. As you collect more and more information, your mental attic never fills, but it certainly gets cluttered.

The ability to tune out the clutter helps us focus, as one experiment demonstrated. Given the task of remembering certain new word pairs from among a list (“ATTIC-dust,” “ATTIC-junk,” and so forth), some people were better at forgetting the irrelevant pairs (as verified by diminished activity in a pertinent brain area). And it’s those people who best focused on and recalled the to-be-remembered pairs (Kuhl et al., 2007). Sometimes forgetting is adaptive.

Retroactive (backward-acting) interference occurs when new information makes it harder to recall something you learned earlier. It is rather like a second stone tossed in a pond, disrupting the waves rippling out from a first. (See Close-Up: Retrieving Passwords.) Information presented in the hour before sleep is protected from retroactive interference because the opportunity for interfering events is minimized.

Researchers John Jenkins and Karl Dallenbach (1924) discovered this in a now-classic experiment. Day after day, two people each learned some nonsense syllables, then tried to recall them after up to eight hours of being awake or asleep at night. As FIGURE 30.6 shows, forgetting occurred more rapidly after being awake and involved with other activities. The


investigators surmised that “forgetting is not so much a matter of the decay of old impressions and associations as it is a matter of interference, inhibition, or obliteration of the old by the new” (1924, p. 612). Later experiments have confirmed the benefits of sleep and found that the hour before a night’s sleep is indeed a good time

FIGURE 30.6 Retroactive interference More forgetting occurred when a person stayed awake and experienced other new material. (From Jenkins & Dallenbach, 1924.)

[Graph: percentage of syllables recalled over the 1 to 8 hours elapsed after learning. The “after sleep” curve remains higher than the “after remaining awake” curve—without interfering events, recall is better.]

CLOSE-UP

Retrieving Passwords

There’s something that you need lots of, and that your grandparents at your age didn’t: passwords. To log into your e-mail, retrieve your voice mail, draw cash from a machine, access your phone card, use the copy machine, or persuade the keypad to open the building door, you need to remember your password. A typical student faces eight demands for passwords, report Alan Brown and his colleagues (2004). With so many passwords needed, what’s a person to do? As FIGURE 30.7 illustrates, we are plagued by proactive interference from irrelevant old information and retroactive interference from other newly learned information. Memory researcher Henry Roediger takes a simple approach to storing all the important phone, PIN, and code numbers in his life: “I have a sheet in my shirt pocket with all the numbers I need,” says Roediger (2001), adding that he can’t mentally store them all, so why bother? Other strategies may help those who do not want to lose their PINs in the wash. First, duplicate. The average student uses

four different passwords to meet those eight needs. Second, harness retrieval cues. Surveys in Britain and the United States reveal that about half of our passwords harness a familiar name or date. Others often involve familiar phone or identification numbers. Third, in online banking or other situations where security is essential, use a mix of letters and numbers, advise Brown and his colleagues. After composing such a password, rehearse it, then rehearse it a day later, and continue rehearsing at increasing intervals. In such ways, long-term memories will form and be retrievable at the cash and copy machines.

[FIGURE 30.7 Proactive and retroactive interference. Proactive interference: a familiar old address interferes with recall of a friend’s new college e-mail address (nfleming@????). Retroactive interference: after learning a password for a bank debit card (my99money), one can no longer recall the earlier ATM card password (my . . . ???).]
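The Close-Up’s rehearsal advice—rehearse once, rehearse again a day later, then continue at increasing intervals—can be sketched as a simple schedule generator. The doubling of intervals below is an illustrative assumption on my part; the text says only that the intervals should increase.

```python
# A minimal sketch of spaced rehearsal with growing gaps.
# The doubling schedule (1, 2, 4, 8, ... days) is an assumption,
# not a schedule specified in the Close-Up box.
from datetime import date, timedelta

def rehearsal_dates(start, n_rehearsals=5, first_gap_days=1):
    """Return dates for n rehearsals, doubling the gap each time."""
    dates, gap, current = [], first_gap_days, start
    for _ in range(n_rehearsals):
        current += timedelta(days=gap)
        dates.append(current)
        gap *= 2  # each gap is twice the previous one
    return dates

schedule = rehearsal_dates(date(2024, 1, 1))
# rehearsals fall 1, 2, 4, 8, and 16 days after each previous one
```

Any growing sequence of gaps would serve the same purpose; doubling is merely an easy way to ensure each rehearsal is spaced further out than the last.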


repression in psychoanalytic theory, the basic defense mechanism that banishes from consciousness anxiety-arousing thoughts, feelings, and memories.

to commit information to memory (Benson & Feinberg, 1977; Fowler et al., 1973; Nesca & Koulack, 1994). But not the seconds just before sleep; information presented then is seldom remembered (Wyatt & Bootzin, 1994). Nor is recorded information played during sleep, although the ears do register it (Wood et al., 1992). Interference is an important cause of forgetting, and it may explain why ads viewed during attention-grabbing violent or sexual TV programs are so forgettable (Bushman & Bonacci, 2002). But we should not overstate the point. Sometimes old information can facilitate our learning of new information. Knowing Latin may help us to learn French—a phenomenon called positive transfer. It is when old and new information compete with each other that interference occurs.

Motivated Forgetting

FIGURE 30.8 When do we forget? Forgetting can occur at any memory stage. As we process information, we filter, alter, or lose much of it.

Sensory memory The senses momentarily register amazing detail.

Working/short-term memory A few items are both noticed and encoded.

Long-term storage Some items are altered or lost.

Retrieval from long-term memory Depending on interference, retrieval cues, moods, and motives, some things get retrieved, some don’t.

To remember our past is often to revise it. Years ago, the huge cookie jar in our kitchen was jammed with freshly baked chocolate chip cookies. Still more were cooling across racks on the counter. Twenty-four hours later, not a crumb was left. Who had taken them? During that time, my wife, three children, and I were the only people in the house. So while memories were still fresh, I conducted a little memory test. Andy acknowledged wolfing down as many as 20. Peter admitted eating 15. Laura guessed she had stuffed her then-6-year-old body with 15 cookies. My wife, Carol, recalled eating 6, and I remembered consuming 15 and taking 18 more to the office. We sheepishly accepted responsibility for 89 cookies. Still, we had not come close; there had been 160.

This would not have surprised Michael Ross and his colleagues (1981), who time and again showed that people unknowingly revise their own histories. One group of people, told the benefits of frequent tooth-brushing, then recalled (more than others did) having frequently brushed their teeth in the preceding two weeks.

Even memory researcher Ralph Haber has found his own memory sometimes unreliable. In one instance, his recollection was distorted by his motivation to see himself as boldly leaving home despite being loved by a mother who wanted him nearby. Thus, he recalled choosing to leave the University of Michigan to go to graduate school at Stanford. In his memory, he “leapt for joy” when the Stanford admission came, and he enthusiastically prepared to head west. Twenty-five years later he visited Michigan for his mother’s eightieth birthday. Reading the letters he had sent her over the years, he was startled to discover himself explaining his decision to stay at Michigan, until yielding to his mother’s passionate plea that he accept the Stanford offer. Sometimes, note Carol Tavris and Elliot Aronson (2007) in recording this story, memory is an “unreliable, self-serving historian” (pp. 6, 79).
Why do our memories fail us? Why did my family and I not remember the number of cookies each of us had eaten? As FIGURE 30.8 reminds us, we automatically encode sensory information in amazing detail. So was it an encoding problem? Or a storage problem—might our memories of cookies, like Ebbinghaus’ memory of nonsense syllables, have vanished almost as fast as the cookies themselves? Or was the information still intact but irretrievable because it would be embarrassing to remember?1 Sigmund Freud might have argued that our memory systems selfcensored this information. He proposed that we repress painful memories to protect our self-concept and to minimize anxiety. But the submerged memory will linger, he believed, to be retrieved by some later cue or during therapy. Here is a sample case. A woman had an intense, unexplained fear of running water. One day her aunt whispered, “I have

1 One of my cookie-scarfing sons, on reading this in his father’s textbook years later, confessed he had fibbed “a little.”


never told.” Like relighting a blown-out candle, these words cued the woman’s memory of an incident when, as a disobedient young child, she had wandered away from a family picnic and become trapped under a waterfall—until being rescued by her aunt, who promised not to tell her parents (Kihlstrom, 1990). Repression was central to Freud’s psychology and became part of psychology’s lore. Most everyone—including 9 in 10 university students—believes that “memories for painful experiences are sometimes pushed into unconsciousness” (Brown et al., 1996). Therapists often assume it. Yet increasing numbers of memory researchers think repression rarely, if ever, occurs. People’s efforts to intentionally forget neutral material often succeed, but not when the to-be-forgotten material is emotional (Payne & Corrigan, 2007). Thus, we may have intrusive memories of the very traumatic experiences we would most like to forget.

Memory Construction

30-2 How do misinformation, imagination, and source amnesia influence our memory construction? How real-seeming are false memories?

Picture yourself having this experience: You go to a fancy restaurant for dinner. You are seated at a table with a white tablecloth. You study the menu. You tell the server you want prime rib, medium rare, a baked potato with sour cream, and a salad with blue cheese dressing. You also order some red wine from the wine list. A few minutes later the server returns with your salad. Later the rest of the meal arrives. You enjoy it all, except the prime rib is a bit overdone.

Were I immediately to quiz you on this paragraph (adapted from Hyde, 1983), you could surely retrieve considerable detail. For example, without looking back, answer the following questions:

1. What kind of salad dressing did you order?
2. Was the tablecloth red checked?
3. What did you order to drink?
4. Did the server give you a menu?

You were probably able to recall exactly what you ordered, and maybe even the tablecloth color. But did the server give you a menu? Not in the paragraph given. Nevertheless, many answer yes. We often construct our memories as we encode them, and we may also alter our memories as we withdraw them from our memory bank. Like scientists who infer a dinosaur’s appearance from its remains, we infer our past from stored information plus what we later imagined, expected, saw, and heard. We don’t just retrieve memories, we reweave them, notes Daniel Gilbert (2006, p. 79): “Information acquired after an event alters memory of the event.”

Misinformation and Imagination Effects

In more than 200 experiments, involving more than 20,000 people, Elizabeth Loftus has shown how eyewitnesses similarly reconstruct their memories when later questioned. In one classic experiment, Loftus and John Palmer showed a film of a traffic accident and then quizzed people about what they had seen (Loftus & Palmer, 1974). Those asked, “How fast were the cars going when they smashed into each other?” gave higher speed estimates than those asked, “How fast were the cars going when they hit each other?” A week later, the researchers asked both groups if they recalled seeing any broken glass. Those who had heard smashed were more than twice as likely to report seeing glass fragments (FIGURE 30.9 on the next page). In fact, the film showed no broken glass.

“Memory isn’t like reading a book; it’s more like writing a book from fragmentary notes.” Psychologist John F. Kihlstrom (1994)


FIGURE 30.9 Memory construction When people who had seen the film of a car accident were later asked a leading question, they recalled a more serious accident than they had witnessed. (From Loftus, 1979.)

[Diagram: the leading question “About how fast were the cars going when they smashed into each other?” combines with the depiction of the actual accident to yield a constructed memory of a more serious accident.]

“Memory is insubstantial. Things keep replacing it. Your batch of snapshots will both fix and ruin your memory. . . . You can’t remember anything from your trip except the wretched collection of snapshots.” Annie Dillard, “To Fashion a Text,” 1988

DOONESBURY © 1994 G. B. Trudeau. Reprinted with permission of UNIVERSAL PRESS SYNDICATE.

In many follow-up experiments around the world, people have witnessed an event, received or not received misleading information about it, and then taken a memory test. The repeated result is a misinformation effect: After exposure to subtle misinformation, many people misremember. They have misrecalled a yield sign as a stop sign, hammers as screwdrivers, Coke cans as peanut cans, Vogue magazine as Mademoiselle, “Dr. Henderson” as “Dr. Davidson,” breakfast cereal as eggs, and a clean-shaven man as a man with a mustache (Loftus et al., 1992).

One experiment showed people digitally altered photos depicting themselves (pasted image from a childhood family album) taking a hot air balloon ride. After seeing this three times over two weeks, half the participants “remembered” the nonexistent experience, often in rich detail (Wade et al., 2002). The human mind comes with built-in Photoshopping software.

So unwitting is the misinformation effect that we may later find it nearly impossible to discriminate between our memories of real and suggested events (Schooler et al., 1986). Perhaps you can recall recounting an experience, and filling in memory gaps with plausible guesses and assumptions. We all do it, and after more retellings, we may recall the guessed details—now absorbed into our memories—as if we had actually observed them (Roediger et al., 1993). Others’ vivid retelling of an event may also implant false memories.

Even repeatedly imagining nonexistent actions and events can create false memories. Students who repeatedly imagined simple acts such as breaking a toothpick or


picking up a stapler later experienced this imagination inflation; they were more likely than others to think they had actually done such things during the experiment’s first phase (Goff & Roediger, 1998; Seamon et al., 2006). Similarly, one in four American and British university students asked to imagine certain childhood events, such as breaking a window with their hand or having a skin sample removed from a finger, later recalled the imagined event as something that had really happened (Garry et al., 1996; Mazzoni & Memon, 2003).

Imagination inflation occurs partly because visualizing something and actually perceiving it activate similar brain areas (Gonsalves et al., 2004). Imagined events later seem more familiar, and familiar things seem more real. Thus, the more vividly we can imagine things, the more likely we are to inflate them into memories (Loftus, 2001; Porter et al., 2000). People who believe aliens transported them to spaceships for medical exams tend to have powerful imaginations and, in memory tests, to be more susceptible to false memories (Clancy, 2005). Those who believe they have recovered memories of childhood sexual abuse likewise tend to have vivid imaginations and to score high on false memory tests (Clancy et al., 2000; McNally, 2003).

To see how far the mind’s search for a fact will go in creating a fiction, Richard Wiseman and his University of Hertfordshire colleagues (1999) staged eight seances, each attended by 25 curious people. During each session, the medium—actually a professional actor and magician—urged everyone to concentrate on the moving table. Although it never moved, he suggested it had: “That’s good. Lift the table up. That’s good. Keep concentrating. Keep the table in the air.” When questioned two weeks later, 1 in 3 participants recalled actually having seen the table levitate. Psychologists are not immune to memory construction.
Famed child psychologist Jean Piaget was startled as an adult to learn that a vivid, detailed memory from his childhood—a nursemaid’s thwarting his kidnapping—was utterly false. Piaget apparently constructed the memory from the many retellings of the story he had heard (which his nursemaid, after undergoing a religious conversion, later confessed had never happened).

misinformation effect incorporating misleading information into one’s memory of an event.

source amnesia attributing to the wrong source an event we have experienced, heard about, read about, or imagined. (Also called source misattribution.) Source amnesia, along with the misinformation effect, is at the heart of many false memories.

“It isn’t so astonishing, the number of things I can remember, as the number of things I can remember that aren’t so.” Mark Twain (1835–1910)

Source Amnesia Piaget remembered, but attributed his memory to the wrong sources (to his own experience rather than to his nursemaid’s stories). Among the frailest parts of a memory is its source. Thus, we may recognize someone but have no idea where we have seen the person. We may dream an event and later be unsure whether it really happened. We may hear something and later recall seeing it (Henkel et al., 2000). In all these cases of source amnesia (also called source misattribution), we retain the memory of the event, but not of the context in which we acquired it. Debra Poole and Stephen Lindsay (1995, 2001, 2002) demonstrated source amnesia among preschoolers. They had the children interact with “Mr. Science,” who engaged them in activities such as blowing up a balloon with baking soda and vinegar. Three months later, on three successive days, their parents read them a story describing some things the children had experienced with Mr. Science and some they had not. When a new interviewer asked what Mr. Science had done with them—“Did Mr. Science have a machine with ropes to pull?”—4 in 10 children spontaneously recalled him doing things that had happened only in the story.

Discerning True and False Memories Because memory is reconstruction as well as reproduction, we can’t be sure whether a memory is real by how real it feels. Much as perceptual illusions may seem like real perceptions, unreal memories feel like real memories.

|| Authors and songwriters sometimes suffer source amnesia. They think an idea came from their own creative imagination, when in fact they are unintentionally plagiarizing something they earlier read or heard. ||

U.S. Senator John McCain on the Iraq war—2007 (on MSNBC): “When I voted to support this war I knew it was probably going to be long and hard and tough.” 2002 (on Larry King): “I believe that the operation will be relatively short [and] that the success will be fairly easy.”

FIGURE 30.10 Our assumptions alter our perceptual memories Researchers showed people faces with computer-blended expressions, such as the angry/happy face in (a), then asked them to explain why the person was either angry or happy. Those asked to explain an “angry” expression later (when sliding a bar on a morphing movie to identify the earlier-seen face) remembered an angrier face, such as the one shown in (b).

Indeed, note today’s researchers, memories are akin to perceptions—perceptions of the past (Koriat et al., 2000). And as Jamin Halberstadt and Paul Niedenthal (2001) showed, people’s initial interpretations influence their perceptual memories. They invited New Zealand university students to view morphed faces that expressed a mix of emotions, such as happiness and anger (FIGURE 30.10a), and to imagine and explain “why this person is feeling angry [or happy].” A half-hour later, the researchers asked the students to view a computer video showing a morphed transition from the angry to happy face, and to slide a bar to change the face’s expression until it matched the expression they had seen earlier. Students who had explained anger (“This woman is angry because her best friend has cheated on her with her boyfriend”) recalled the face as angrier (Figure 30.10b) than did those who had explained happiness (“This woman is very happy that everyone remembered her birthday”). So could we judge a memory’s reality by its persistence? Again, the answer is no. Memory researchers Charles Brainerd and Valerie Reyna (Brainerd et al., 1995, 1998, 2002) note that memories we derive from experience have more detail than memories we derive from imagination. Memories of imagined experiences are more restricted to the gist of the supposed event—the associated meanings and feelings. Because gist memories are durable, children’s false memories sometimes outlast their true memories, especially as children mature and become better able to process the gist (Brainerd & Poole, 1997). Thus, therapists or investigators who ask for the gist rather than the details run a greater risk of eliciting false memories. False memories created by suggested misinformation and misattributed sources may feel as real as true memories and may be very persistent. Imagine that I were to read aloud a list of words such as candy, sugar, honey, and taste. 
Later, I ask you to recognize the presented words from a larger list. If you are at all like the people tested by Henry Roediger and Kathleen McDermott (1995), you would err three out of four times—by falsely remembering a nonpresented similar word, such as sweet. We more easily remember the gist than the words themselves. In experiments on eyewitness testimony, researchers have repeatedly found that the most confident and consistent eyewitnesses are the most persuasive; however, they often are not the most accurate. Eyewitnesses, whether right or wrong, express roughly similar self-assurance (Bothwell et al., 1987; Cutler & Penrod, 1989; Wells & Murray, 1984). Memory construction helps explain why 79 percent of 200 convicts exonerated by later DNA testing had been misjudged based on faulty eyewitness identification (Garrett, 2008). It explains why “hypnotically refreshed” memories of crimes so easily incorporate errors, some of which originate with the hypnotist’s leading questions (“Did you hear loud noises?”). It explains why dating partners who fall in love overestimate their first impressions of one another (“It was love at first sight”), while



MODULE 30 Forgetting, Memory Construction, and Improving Memory


those who break up underestimate their earlier liking (“We never really clicked”) (McFarland & Ross, 1987). And it explains why people asked how they felt 10 years ago about marijuana or gender issues recall attitudes closer to their current views than to the views they had actually reported a decade earlier (Markus, 1986). How people feel today seems to be how they have always felt. What people know today seems to be what they have always known (Mazzoni & Vannucci, 2007; this is our tendency to experience hindsight bias). One research team interviewed 73 ninth-grade boys and then reinterviewed them 35 years later. When asked to recall how they had reported their attitudes, activities, and experiences, most men recalled statements that matched their actual prior responses at a rate no better than chance. Only 1 in 3 now remembered receiving physical punishment, though as ninth-graders 82 percent had said they had (Offer et al., 2000). As George Vaillant (1977, p. 197) noted after following adult lives through time, “It is all too common for caterpillars to become butterflies and then to maintain that in their youth they had been little butterflies. Maturation makes liars of us all.” Australian psychologist Donald Thompson found his own work on memory distortion ironically haunting him when authorities brought him in for questioning about a rape. Although he was a near-perfect match to the victim’s memory of the rapist, he had an airtight alibi. Just before the rape occurred, Thompson was being interviewed on live television. He could not possibly have made it to the crime scene. Then it came to light that the victim had been watching the interview—ironically about face recognition—and had experienced source amnesia, confusing her memories of Thompson with those of the rapist (Schacter, 1996). 
Recognizing that the misinformation effect can occur as police and attorneys ask questions framed by their own understandings of an event, Ronald Fisher, Edward Geiselman, and their colleagues (1987, 1992) have trained police interviewers to ask less suggestive, more effective questions. To activate retrieval cues, the detective first asks witnesses to visualize the scene—the weather, time of day, lighting, sounds, smells, positions of objects, and their mood. Then the witness tells in detail, and without interruption, every point recalled, no matter how trivial. Only then does the detective ask evocative follow-up questions: “Was there anything unusual about the person’s appearance or clothing?” When this cognitive interview technique is used, accurate recall increases (Wells et al., 2006).

Children’s Eyewitness Recall If memories can be sincere, yet sincerely wrong, might children’s recollections of sexual abuse be prone to error? Stephen Ceci (1993) thinks “it would be truly awful to ever lose sight of the enormity of child abuse.” Yet, as we have seen, interviewers who ask leading questions can plant false memories. Ceci and Maggie Bruck’s (1993, 1995) studies of children’s memories have sensitized them to children’s suggestibility. For example, they asked 3-year-olds to show on anatomically correct dolls where a pediatrician had touched them. Fifty-five percent of the children who had not received genital examinations pointed to either genital or anal areas. And when the researchers used suggestive interviewing techniques, they found that most preschoolers and many older children could be induced to report false events, such as seeing a thief steal food in their day-care center (Bruck & Ceci, 1999, 2004). In another experiment, preschoolers merely overheard an erroneous remark that a magician’s missing rabbit had gotten loose in their classroom. Later, when the children were suggestively questioned, 78 percent of them recalled actually seeing the rabbit (Principe et al., 2006). In one study, Ceci and Bruck had a child choose a card from a deck of possible happenings and an adult then read from the card. For example, “Think real hard, and


|| In experiments with adults, suggestive questions (“In fresh water, do snakes swim upside down for about half the time?”) are often misremembered as statements (Pandelaere & Dewitte, 2006). ||


tell me if this ever happened to you. Can you remember going to the hospital with a mousetrap on your finger?” After 10 weekly interviews, with the same adult repeatedly asking children to think about several real and fictitious events, a new adult asked the same question. The stunning result: 58 percent of preschoolers produced false (often vivid) stories regarding one or more events they had never experienced, as this little boy did (Ceci et al., 1994): My brother Colin was trying to get Blowtorch [an action figure] from me, and I wouldn’t let him take it from me, so he pushed me into the wood pile where the mousetrap was. And then my finger got caught in it. And then we went to the hospital, and my mommy, daddy, and Colin drove me there, to the hospital in our van, because it was far away. And the doctor put a bandage on this finger.

“[The] research leads me to worry about the possibility of false allegations. It is not a tribute to one’s scientific integrity to walk down the middle of the road if the data are more to one side.” Stephen Ceci (1993)

|| Like children (whose frontal lobes have not fully matured), older adults— especially those whose frontal lobe functioning has declined—are more susceptible than young adults to false memories. This makes older adults more vulnerable to scams, as when a repairperson overcharges by falsely claiming, “I told you it would cost x, and you agreed to pay” (Jacoby et al., 2005; Jacoby & Rhodes, 2006; Roediger & Geraci, 2007; Roediger & McDaniel, 2007). ||

Given such detailed stories, professional psychologists who specialize in interviewing children were often fooled. They could not reliably separate real memories from false ones. Nor could the children themselves. The above child, reminded that his parents had told him several times that the mousetrap incident never happened—that he had imagined it—protested, “But it really did happen. I remember it!” Does this mean that children can never be accurate eyewitnesses? No. If questioned about their experiences in neutral words they understand, children often accurately recall what happened and who did it (Goodman, 2006; Howe, 1997; Pipe, 1996). When interviewers use less suggestive, more effective techniques, even 4- to 5-year-old children produce more accurate recall (Holliday & Albon, 2004; Pipe et al., 2004). Children are especially accurate when they have not talked with involved adults prior to the interview and when their disclosure is made in a first interview with a neutral person who asks nonleading questions.

Repressed or Constructed Memories of Abuse? 30-3 What is the controversy related to claims of repressed and recovered memories? There are two tragedies related to adult recollections of child abuse. One is trauma survivors being disbelieved when telling their secret. The other is innocent people being falsely accused. What, then, shall we say about clinicians who have guided people in “recovering” memories of childhood abuse? Are they triggering false memories that damage innocent adults? Or are they uncovering the truth? In one American survey, the average therapist estimated that 11 percent of the population—some 34 million people—have repressed memories of childhood sexual abuse (Kamena, 1998). In another survey, of British and American doctoral-level therapists, 7 in 10 said they had used techniques such as hypnosis or drugs to help clients recover suspected repressed memories of childhood sexual abuse (Poole et al., 1995). Some have reasoned with patients that “people who’ve been abused often have your symptoms, so you probably were abused. Let’s see if, aided by hypnosis or drugs, or helped to dig back and visualize your trauma, you can recover it.” As we might expect from the research on source amnesia and the misinformation effect, patients exposed to such techniques may form an image of a threatening person. With further visualization, the image grows more vivid, leaving the patient stunned, angry, and ready to confront or sue the equally stunned and devastated parent, relative, or clergy member, who then vigorously denies the accusation. After 32 therapy sessions, one woman recalled her father abusing her when she was 15 months old. Without questioning the professionalism of most therapists, critics have charged that clinicians who use “memory work” techniques such as “guided imagery,” hypnosis, and


dream analysis to recover memories “are nothing more than merchants of mental chaos, and, in fact, constitute a blight on the entire field of psychotherapy” (Loftus et al., 1995). “Thousands of families were cruelly ripped apart,” with “previously loving adult daughters” suddenly accusing fathers, noted Martin Gardner (2006) in his commentary on North America’s “greatest mental health scandal.” Irate clinicians countered that those who dispute recovered memories of abuse add to abused people’s trauma and play into the hands of child molesters. In an effort to find a sensible common ground that might resolve this ideological battle—psychology’s “memory war”—study panels have been convened and public statements made by the American Medical, American Psychological, and American Psychiatric Associations; the Australian Psychological Society; the British Psychological Society; and the Canadian Psychiatric Association. Those committed to protecting abused children and those committed to protecting wrongly accused adults agree on the following:

䉴 Sexual abuse happens. And it happens more often than we once supposed. There is no characteristic “survivor syndrome” (Kendall-Tackett et al., 1993). However, sexual abuse is a traumatic betrayal that can leave its victims predisposed to problems ranging from sexual dysfunction to depression (Freyd et al., 2007).

䉴 Injustice happens. Some innocent people have been falsely convicted. And some guilty people have evaded responsibility by casting doubt on their truth-telling accusers.

䉴 Forgetting happens. Many of those actually abused were either very young when abused or may not have understood the meaning of their experience—circumstances under which forgetting is common. Forgetting isolated past events, both negative and positive, is an ordinary part of everyday life.

䉴 Recovered memories are commonplace. Cued by a remark or an experience, we recover memories of long-forgotten events, both pleasant and unpleasant. What is debated is whether the unconscious mind sometimes forcibly represses painful experiences and, if so, whether these can be retrieved by certain therapist-aided techniques. (Memories that surface naturally are more likely to be corroborated than are therapist-assisted recollections [Geraerts et al., 2007].)

䉴 Memories of things happening before age 3 are unreliable. People do not reliably recall happenings of any sort from their first three years—a phenomenon called infantile amnesia. Most psychologists—including most clinical and counseling psychologists—therefore doubt “recovered” memories of abuse during infancy (Gore-Felton et al., 2000; Knapp & VandeCreek, 2000). The older a child’s age when suffering sexual abuse, and the more severe it was, the more likely it is to be remembered (Goodman et al., 2003).

䉴 Memories “recovered” under hypnosis or the influence of drugs are especially unreliable. “Age-regressed” hypnotized subjects incorporate suggestions into their memories, even memories of “past lives.”

䉴 Memories, whether real or false, can be emotionally upsetting. Both the accuser and the accused may suffer when what was born of mere suggestion becomes, like an actual trauma, a stinging memory that drives bodily stress (McNally, 2003, 2007). People knocked unconscious in unremembered accidents have later developed stress disorders after being haunted by memories they constructed from photos, news reports, and friends’ accounts (Bryant, 2001).

“When memories are ‘recovered’ after long periods of amnesia, particularly when extraordinary means were used to secure the recovery of memory, there is a high probability that the memories are false.” Royal College of Psychiatrists Working Group on Reported Recovered Memories of Child Sexual Abuse (Brandon et al., 1998)

|| Although scorned by some trauma therapists, Loftus has been elected president of the science-oriented Association for Psychological Science, awarded psychology’s biggest prize ($200,000), and elected to the U.S. National Academy of Sciences and the Royal Society of Edinburgh. ||

To more closely approximate therapist-aided recall, Elizabeth Loftus and her colleagues (1996) have experimentally implanted false memories of childhood traumas. In one study, she had a trusted family member recall for a teenager three real childhood experiences and a false one—a vivid account of the child’s being lost for an extended time in a shopping mall at age 5 until being rescued by an older person. Two days later, one participant, Chris, said, “That day I was so scared that I would never see my family again.” Two days after that, he began to visualize the flannel shirt, bald head, and glasses of the old man who supposedly had found him. Told the story was made up, Chris was incredulous: “I thought I remembered being lost . . . and looking around for the guys. I do remember that, and then crying, and Mom coming up and saying, ‘Where were you? Don’t you . . . ever do that again.’” In other experiments, a third of participants have become wrongly convinced that they almost drowned as a child, and about half were led to falsely recall an awful experience, such as a vicious animal attack (Heaps & Nash, 2001; Porter et al., 1999). Such is the memory construction process by which people can recall being abducted by aliens, victimized by a satanic cult, molested in a crib, or living a past life. Thousands of reasonable, normally functioning human beings, notes Loftus, “speak in terror-stricken voices about their experience aboard flying saucers. They remember, clearly and vividly, being abducted by aliens” (Loftus & Ketcham, 1994, p. 66). Loftus knows firsthand the phenomenon she studies. At a family reunion, an uncle told her that at age 14 she found her mother’s drowned body. Shocked, she denied it. But the uncle was adamant, and over the next three days she began to wonder if she had a repressed memory. “Maybe that’s why I’m so obsessed with this topic.” As the now-upset Loftus pondered her uncle’s suggestion, she “recovered” an image of her mother lying in the pool, face down, and of herself finding the body. “I started putting everything into place. Maybe that’s why I’m such a workaholic.
Maybe that’s why I’m so emotional when I think about her even though she died in 1959.” Then her brother called. Their uncle now remembered what other relatives also confirmed. Aunt Pearl, not Loftus, had found the body (Loftus & Ketcham, 1994; Monaghan, 1992). Loftus also knows firsthand the reality of sexual abuse. A male baby-sitter molested her when she was 6 years old. She has not forgotten. And that makes her wary of those whom she sees as trivializing real abuse by suggesting uncorroborated traumatic experiences, then accepting them uncritically as fact. The enemies of the truly victimized are not only those who prey and those who deny, she says, but those whose


Elizabeth Loftus “The research findings for which I am being honored now generated a level of hostility and opposition I could never have foreseen. People wrote threatening letters, warning me that my reputation and even my safety were in jeopardy if I continued along these lines. At some universities, armed guards were provided to accompany me during speeches.” Elizabeth Loftus, on receiving the Association for Psychological Science’s William James Fellow Award, 2001


writings and allegations “are bound to lead to an increased likelihood that society in general will disbelieve the genuine cases of childhood sexual abuse that truly deserve our sustained attention” (Loftus, 1993). So, does repression ever occur? Psychologists continue to have heated debates on this topic, which is the cornerstone of Freud’s theory and a recurring theme in much popular psychology. But this much now appears certain: The most common response to a traumatic experience (witnessing a parent’s murder, experiencing the horrors of a Nazi death camp, being terrorized by a hijacker or a rapist, escaping the collapsing World Trade Center towers, surviving an Asian tsunami) is not banishment of the experience into the unconscious. Rather, such experiences are typically etched on the mind as vivid, persistent, haunting memories (Porter & Peace, 2007). Playwright Eugene O’Neill understood. As one of the characters in his Strange Interlude (1928) exclaimed, “The devil! . . . What beastly incidents our memories insist on cherishing!”

“Horror sears memory, leaving . . . the consuming memories of atrocity.” Robert Kraft, Memory Perceived: Recalling the Holocaust, 2002

䉴|| Improving Memory 30-4 How can an understanding of memory contribute to more effective study techniques? Now and then we are dismayed at our forgetfulness—at our embarrassing inability to recall someone’s name, at forgetting to bring up a point in conversation, at not bringing along something important, at finding ourselves standing in a room unable to recall why we are there (Herrmann, 1982). Is there anything we can do to minimize such memory misdeeds? Much as biology benefits medicine and botany benefits agriculture, so can the psychology of memory benefit education. Here for easy reference is a summary of concrete suggestions for improving memory. The SQ3R—Survey, Question, Read, Rehearse, Review—study technique used in this book incorporates several of these strategies. Study repeatedly. To master material, use distributed (spaced) practice. To learn a concept, provide yourself with many separate study sessions: Take advantage of life’s little intervals—riding on the bus, walking across campus, waiting for class to start. To memorize specific facts or figures, suggests Thomas Landauer (2001), “rehearse the name or number you are trying to memorize, wait a few seconds, rehearse again, wait a little longer, rehearse again, then wait longer still and rehearse yet again. The waits should be as long as possible without losing the information.” New memories are weak; exercise them and


䉴 Thinking and memory Most of what we know is not the result of efforts to memorize. We learn because we’re curious and because we spend time thinking about our experiences. Actively thinking as we read, by rehearsing and relating ideas, yields the best retention.

“I have discovered that it is of some use when you lie in bed at night and gaze into the darkness to repeat in your mind the things you have been studying. Not only does it help the understanding, but also the memory.” Leonardo da Vinci (1452–1519)


“Knit each new thing on to some acquisition already there.” William James, Principles of Psychology, 1890


they will strengthen. Speed-reading (skimming) complex material—with minimal rehearsal—yields little retention. Rehearsal and critical reflection help more. It pays to study actively.

Make the material meaningful. To build a network of retrieval cues, take text and class notes in your own words. (Mindlessly repeating someone else’s words is relatively ineffective.) To apply the concepts to your own life, form images, understand and organize information, relate the material to what you already know or have experienced, and put it in your own words. Increase retrieval cues by forming associations. Without such cues, you may find yourself stuck when a question uses phrasing different from the rote forms you memorized.

Activate retrieval cues. Mentally re-create the situation and the mood in which your original learning occurred. Return to the same location. Jog your memory by allowing one thought to cue the next.

Use mnemonic devices. Make up a story to associate items with memorable images or jingles. Vivid images or words in familiar rhymes can act as pegs on which you can “hang” items. Chunk information into acronyms by creating a word from the first letters of the to-be-remembered items. Create rhythmic rhymes (“i before e, except after c”).

Minimize interference. Study before sleeping. Do not schedule back-to-back study times for topics that are likely to interfere with each other, such as Spanish and French.

Sleep more. During sleep, the brain organizes and consolidates information for long-term memory. Sleep deprivation disrupts this process.

Test your own knowledge, both to rehearse it and to help determine what you do not yet know. Don’t be lulled into overconfidence by your ability to recognize information. Test your recall using the preview questions. Outline sections on a blank page. Define the terms and concepts listed at each module’s end before turning back to their definitions.
Take practice tests; the study guides that accompany many texts, including this one, are a good source for such tests.


Review Forgetting, Memory Construction, and Improving Memory 30-1 Why do we forget? We may fail to encode information for entry into our memory system. Memories may fade after storage—rapidly at first, and then leveling off, a trend known as the forgetting curve. We may experience retrieval failure, when old and new material compete, when we don’t have adequate retrieval cues, or possibly, in rare instances, because of motivated forgetting, or repression. In proactive interference, something learned in the past interferes with our ability to recall something recently learned. In retroactive interference, something recently learned interferes with something learned in the past. 30-2 How do misinformation, imagination, and source amnesia influence our memory construction? How real-seeming are false memories? If children or adults are subtly exposed to misinformation after an event, or if they repeatedly imagine and rehearse an event that never occurred, they may incorporate misleading details into their memory of what actually happened. When we reassemble a memory during retrieval, we may successfully retrieve something we have heard, read or imagined, but attribute it to the wrong source (source amnesia). False memories feel like true memories and are equally durable. Constructed memories are usually limited to the gist of the event. 30-3 What is the controversy related to claims of repressed and recovered memories? This controversy between memory researchers and some well-meaning therapists is related to whether most memories of early childhood abuse are repressed and can be recovered by means of leading questions and/or hypnosis during therapy. Psychologists now tend to agree that: (1) Abuse happens, and can leave lasting scars. (2) Some innocent people have been falsely convicted of abuse that never happened, and some true abusers have used the controversy over recovered memories to avoid punishment. 
(3) Forgetting isolated past events, good or bad, is an ordinary part of life. (4) Recovering good and bad memories, triggered by some memory cue, is commonplace. (5) Infantile amnesia—the inability to recall memories from the first three years of life—makes recovery of very early childhood memories unlikely. (6) Memories obtained under the influence of hypnosis or drugs or therapy are unreliable. (7) Both real and false memories cause stress and suffering.

30-4 How can an understanding of memory contribute to more effective study techniques? Research on memory suggests concrete strategies for improving memory. These include studying repeatedly, making material personally meaningful, activating retrieval cues, using mnemonic devices, minimizing interference, getting adequate sleep, and self-testing.

Terms and Concepts to Remember
proactive interference, p. 354
retroactive interference, p. 354
repression, p. 356
misinformation effect, p. 358
source amnesia, p. 359

Test Yourself 1. Can you offer an example of proactive interference? 2. Source amnesia occurs when we attribute to the wrong source an event we have experienced, heard about, read about, or imagined. What might life be like if we remembered all our waking experiences as well as all our dreams?

3. What are the recommended memory strategies you just read about? (One advised rehearsing to-be-remembered material. What were the others?) (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. Most people, especially as they grow older, wish for a better memory. Is that true of you? Or do you more often wish you could better discard old memories?

2. How would you feel about being an impartial jury member in a trial of a parent accused of sexual abuse based on a recovered memory, or of a therapist being sued for creating a false memory of abuse?

3. Which of the study and memory strategies suggested in this module will work best for you?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Thinking, Language, and Intelligence

Throughout history, we humans have both bemoaned our foolishness and celebrated our wisdom. The poet T. S. Eliot was struck by “the hollow men . . . Headpiece filled with straw.” But Shakespeare’s Hamlet extolled the human species as “noble in reason! . . . infinite in faculties! . . . in apprehension how like a god!” Throughout this text, we likewise marvel at both our abilities and our errors. We study the human brain—3 pounds of wet tissue the size of a small cabbage, yet containing circuitry more complex than the planet’s telephone networks. We marvel at the competence of newborns. We relish our sensory system, which disassembles visual stimuli into millions of nerve impulses, distributes them for parallel processing, and then reassembles them into colorful perceptions. We ponder our memory’s seemingly limitless capacity and the ease with which our two-track mind processes information, consciously and unconsciously. Little wonder that our species has had the collective genius to invent the camera, the car, and the computer; to unlock the atom and crack the genetic code; to travel out to space and into the oceans’ depths.

Yet we also see that our species is kin to the other animals. We are influenced by the same principles that produce learning in rats and pigeons. As one pundit said, echoing Pavlov, “How like a dog!” We note that we assimilate reality into our preconceptions and succumb to perceptual illusions. We see how easily we deceive ourselves about pseudopsychic claims, hypnotic feats, and false memories.

In Modules 31 and 32, we encounter further instances of these two images of the human condition—the rational and the irrational. We will consider how our cognitive system uses all the information it has received, perceived, stored, and retrieved. We will look at our flair for language and consider how and why it develops. And we will reflect on how deserving we are of our name, Homo sapiens—wise human.
In Modules 33 through 35, we focus on an ongoing debate about intelligence, in which psychologists and others pick sides on two major questions: (1) Does each of us have an inborn general mental capacity (intelligence), and (2) can we quantify this capacity as a meaningful number?

modules

31 Thinking

32 Language and Thought

33 Introduction to Intelligence

34 Assessing Intelligence

35 Genetic and Environmental Influences on Intelligence


module 31

Concepts
Solving Problems
Making Decisions and Forming Judgments

“The average newspaper boy in Pittsburgh knows more about the universe than did Galileo, Aristotle, Leonardo, or any of those other guys who were so smart they only needed one name.” Daniel Gilbert, Stumbling on Happiness, 2006

“Attention, everyone! I’d like to introduce the newest member of our family.”


A bird and a . . . ? It takes a bit longer to conceptualize a penguin as a bird because it doesn’t match our prototype of a small, feathered, flying creature.


Thinking

Thinking, or cognition, refers to all the mental activities associated with thinking, knowing, remembering, and communicating. Cognitive psychologists study these activities, including the logical and sometimes illogical ways in which we create concepts, solve problems, make decisions, and form judgments.

Concepts

31-1 What are the functions of concepts?

To think about the countless events, objects, and people in our world, we simplify things. We form concepts—mental groupings of similar objects, events, ideas, and people. The concept chair includes many items—a baby’s high chair, a reclining chair, a dentist’s chair—all of which are for sitting. Chairs vary, but it is their common features that define the concept of chair.

Imagine life without concepts. We would need a different name for every object and idea. We could not ask a child to “throw the ball” because there would be no concept of ball (or throw). Instead of saying, “They were angry,” we would have to describe expressions, intensities, and words. Such concepts as ball and anger give us much information with little cognitive effort.

To further simplify things, we organize concepts into category hierarchies. Cab drivers organize their cities into geographical sectors, which subdivide into neighborhoods, and again into blocks. Once our categories exist, we use them efficiently. Shown a bird, car, or food, people need no more time to identify an item’s category than to perceive that something is there. “As soon as you know it is there, you know what it is,” report Kalanit Grill-Spector and Nancy Kanwisher (2005).

We form some concepts by definition. Told that a triangle has three sides, we thereafter classify all three-sided geometric forms as triangles. More often, however, we form our concepts by developing prototypes—a mental image or best example that incorporates all the features we associate with a category (Rosch, 1978). The more closely something matches our prototype of a concept, the more readily we recognize it as an example of the concept. A robin and a penguin both satisfy our definition of bird: a two-footed animal that has wings and feathers and hatches from an egg.
Yet people agree more quickly that “a robin is a bird” than that “a penguin is a bird.” For most of us, the robin is the birdier bird; it more closely resembles our bird prototype. Once we place an item in a category, our memory of it later shifts toward the category prototype. Olivier Corneille and his colleagues (2004) found memory shifts after showing Belgian students ethnically mixed faces. For example, when shown a face that was a blend of 70 percent of the features of a Caucasian person and 30 percent of an Asian person, people categorized the face as Caucasian and later recalled having seen a more prototypically Caucasian person (Corneille et al., 2004). (They were more likely to recall an 80 percent Caucasian face than the 70 percent Caucasian they actually saw.) If shown a 70 percent Asian face, they later recalled a more prototypically Asian face (FIGURE 31.1). A follow-up study found the phenomenon with gender as well. Those shown 70 percent male faces categorized them as male (no surprise there), and then later misrecalled them as even more prototypically male (Huart et al., 2005).



Thinking MODULE 31

[Figure 31.1 images: a row of blended faces ranging from 90% Caucasian (CA), through a 50/50 blend, to 90% Asian (AS), in 10 percent steps.]

Move away from our prototypes, and category boundaries may blur. Is a tomato a fruit? Is a 17-year-old female a girl or a woman? Is a whale a fish or a mammal? Because this marine animal fails to match our prototype, we are slower to recognize it as a mammal. Similarly, we are slow to perceive an illness when our symptoms don’t fit one of our disease prototypes (Bishop, 1991). People whose heart attack symptoms (shortness of breath, exhaustion, a dull weight in the chest) don’t match their prototype of a heart attack (sharp chest pain) may not seek help. And when discrimination doesn’t fit our prejudice prototypes—of White against Black, male against female, young against old—we often fail to notice it. People more easily detect male prejudice against females than female against males or female against females (Inman & Baron, 1996; Marti et al., 2000). So, concepts, like other mental shortcuts we will encounter, speed and guide our thinking. But they don’t always make us wise.

FIGURE 31.1 Face categorization influences recollection For example, shown a face that was 70 percent Caucasian, people tended to classify the person as Caucasian and to recollect the face as more Caucasian than it was. (From Corneille et al., 2004.)

Solving Problems

31-2 What strategies assist our problem solving, and what obstacles hinder it?

One tribute to our rationality is our problem-solving skill in coping with novel situations. What’s the best route around this traffic jam? How shall we handle a friend’s criticism? How can we get in the house without our keys?

Some problems we solve through trial and error. Thomas Edison tried thousands of lightbulb filaments before stumbling upon one that worked. For other problems, we use algorithms, step-by-step procedures that guarantee a solution. But step-by-step algorithms can be laborious and exasperating. For example, to find another word using all the letters in SPLOYOCHYG, we could try each letter in each position, but we would need to generate and examine the 907,200 resulting permutations. In such cases, we often resort to simpler strategies called heuristics (see photo on next page). Thus, we might reduce the number of options in our SPLOYOCHYG example by excluding rare letter combinations, such as two Y’s together. By using heuristics and then applying trial and error, we may hit upon the answer (which you can see by turning the page).

Sometimes, the problem-solving strategy seems to be no strategy at all. We puzzle over a problem, and suddenly, the pieces fall together as we perceive the solution in a sudden flash of insight. Ten-year-old Johnny Appleton displayed insight in solving a problem that had stumped construction workers: how to rescue a young robin that had fallen into a narrow 30-inch-deep hole in a cement-block wall. Johnny’s solution: Slowly pour in sand, giving the bird enough time to keep its feet on top of the constantly rising sand (Ruchlis, 1990).

Teams of researchers have identified brain activity associated with sudden flashes of insight (Jung-Beeman et al., 2004; Sandkühler & Bhattacharya, 2008).
cognition the mental activities associated with thinking, knowing, remembering, and communicating.

concept a mental grouping of similar objects, events, ideas, or people.

prototype a mental image or best example of a category. Matching new items to a prototype provides a quick and easy method for sorting items into categories (as when comparing feathered creatures to a prototypical bird, such as a robin).

algorithm a methodical, logical rule or procedure that guarantees solving a particular problem. Contrasts with the usually speedier—but also more error-prone—use of heuristics.

heuristic a simple thinking strategy that often allows us to make judgments and solve problems efficiently; usually speedier but also more error-prone than algorithms.

insight a sudden and often novel realization of the solution to a problem; it contrasts with strategy-based solutions.

They gave people a problem: Think of a word that will form a compound word or phrase with each of three words in a set (such as pine, crab, and sauce), and press a button to sound a bell when you know the answer. (If you need a hint: The word is a fruit.1) All the while, the researchers mapped the problem solver’s brain activity, using fMRIs (functional MRIs) or EEGs. In the first experiment, about half the solutions were by a sudden Aha! insight, which typically was preceded by frontal lobe activity involved in focusing attention and was accompanied by a burst of activity in the right temporal lobe, just above the ear (FIGURE 31.2).

As you perhaps experienced in solving the pine-crab-sauce problem, insight often pops into the mind with striking suddenness, with no prior sense that one is “getting warmer” or feeling close to the answer (Knoblich & Oellinger, 2006; Metcalfe, 1986). When the “Eureka moment” hits us, we feel a sense of satisfaction, a feeling of happiness. The joy of a joke may similarly lie in our sudden comprehension of an unexpected ending or a double meaning. See for yourself, with these two jokes rated funniest (among 2 million ratings of 40,000 submitted jokes) in an Internet humor study co-sponsored by Richard Wiseman (2002) and the British Association for the Advancement of Science. First, the runner-up:

Heuristic searching To search for guava juice, you could search every supermarket aisle (an algorithm), or check the bottled beverage, natural foods, and produce sections (heuristics). The heuristics approach is often speedier, but an algorithmic search guarantees you will find it eventually.
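The 907,200 figure cited for SPLOYOCHYG can be checked with a short Python sketch (an illustration added here, not part of the text). Because the letters O and Y each appear twice, the number of distinct orderings of the ten letters is 10!/(2! × 2!):

```python
from itertools import permutations
from math import factorial

word = "SPLOYOCHYG"

# Brute-force "algorithm": generate every ordering of the ten letters,
# letting a set collapse the duplicates caused by the repeated O's and Y's.
distinct = {"".join(p) for p in permutations(word)}

# Closed-form count: 10! orderings, divided by 2! for each repeated letter.
expected = factorial(10) // (factorial(2) * factorial(2))

print(len(distinct), expected)  # 907200 907200
```

A heuristic, by contrast, would prune this space before searching it, for example by never placing the two Y’s next to each other.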


FIGURE 31.2 The Aha! moment A burst of right temporal lobe activity accompanies insight solutions to word problems (Jung-Beeman et al., 2004).

Sherlock Holmes and Dr. Watson are going camping. They pitch their tent under the stars and go to sleep. Sometime in the middle of the night Holmes wakes Watson up.

Holmes: “Watson, look up at the stars, and tell me what you deduce.”

Watson: “I see millions of stars and even if a few of those have planets, it’s quite likely there are some planets like Earth, and if there are a few planets like Earth out there, there might also be life. What does it tell you, Holmes?”

Holmes: “Watson, you idiot, somebody has stolen our tent!”

And drum roll, please, for the winner: A couple of New Jersey hunters are out in the woods when one of them falls to the ground. He doesn’t seem to be breathing, his eyes are rolled back in his head. The other guy whips out his cellphone and calls the emergency services. He gasps to the operator: “My friend is dead! What can I do?” The operator, in a calm, soothing voice says: “Just take it easy. I can help. First, let’s make sure he’s dead.” There is a silence, then a shot is heard. The guy’s voice comes back on the line: “OK, now what?”

Obstacles to Problem Solving

Inventive as we can be in solving problems, the correct answer may elude us. Two cognitive tendencies—confirmation bias and fixation—often lead us astray.

(Answer to SPLOYOCHYG anagram on previous page: PSYCHOLOGY.)

1. The word is apple: pineapple, crabapple, applesauce.


confirmation bias a tendency to search for information that supports our preconceptions and to ignore or distort contradictory evidence.

fixation the inability to see a problem from a new perspective, by employing a different mental set.

mental set a tendency to approach a problem in one particular way, often a way that has been successful in the past.

functional fixedness the tendency to think of things only in terms of their usual functions; an impediment to problem solving.

“The human understanding, when any proposition has been once laid down . . . forces everything else to add fresh support and confirmation.” Francis Bacon, Novum Organum, 1620

Confirmation Bias

We seek evidence verifying our ideas more eagerly than we seek evidence that might refute them (Klayman & Ha, 1987; Skov & Sherman, 1986). This tendency, known as confirmation bias, is a major obstacle to problem solving. Peter Wason (1960) demonstrated the confirmation bias by giving British university students the three-number sequence 2-4-6 and asking them to guess the rule he had used to devise the series. (The rule was simple: any three ascending numbers.) Before submitting answers, students generated their own sets of three numbers, and Wason told them whether their sets conformed to his rule. Once they felt certain they had the rule, they were to announce it. The result? Seldom right but never in doubt. Most of Wason’s students formed a wrong idea (“Maybe it’s counting by twos”) and then searched only for evidence confirming the wrong rule (by testing 6-8-10, 100-102-104, and so forth). “Ordinary people,” said Wason (1981), “evade facts, become inconsistent, or systematically defend themselves against the threat of new information relevant to the issue.”

The results are sometimes momentous. The United States launched its war against Iraq on the assumption that Saddam Hussein possessed weapons of mass destruction (WMD) that posed an immediate threat. When that assumption turned out to be false, confirmation bias was one of the flaws in the judgment process identified by the bipartisan U.S. Senate Select Committee on Intelligence (2004). Administration analysts “had a tendency to accept information which supported [their presumptions] . . . more readily than information which contradicted” them. Sources denying such weapons were deemed “either lying or not knowledgeable about Iraq’s problems, while those sources who reported ongoing WMD activities were seen as having provided valuable information.”

Fixation

Once we incorrectly represent a problem, it’s hard to restructure how we approach it. If the solution to the matchstick problem in FIGURE 31.3 eludes you, you may be experiencing fixation—the inability to see a problem from a fresh perspective. (See the solution in FIGURE 31.5 on the next page.)

Two examples of fixation are mental set and functional fixedness. As a perceptual set predisposes what we perceive, a mental set predisposes how we think. Mental set refers to our tendency to approach a problem with the mind-set of what has worked for us previously. Indeed, solutions that worked in the past often do work on new problems. Consider: Given the sequence O-T-T-F-?-?-?, what are the final three letters? Most people have difficulty recognizing that the three final letters are F(ive), S(ix), and S(even). But solving this problem may make the next one easier: Given the sequence J-F-M-A-?-?-?, what are the final three letters? (If you don’t get this one, ask yourself what month it is.) Sometimes, however, a mental set based on what worked in the past precludes our finding a new solution to a new problem. Our mental set from our past experiences with matchsticks predisposes our arranging them in two dimensions.

Another type of fixation—our tendency to think of only the familiar functions for objects, without imagining alternative uses—goes by the awkward but appropriate label functional fixedness. A person may ransack the house for a screwdriver when a coin would have turned the screw. As an example, try the candle-mounting problem in FIGURE 31.4. Did you experience functional fixedness? If so, see FIGURE 31.6 (on the next page). Perceiving and relating familiar things in new ways is part of creativity.

FIGURE 31.3 The matchstick problem How would you arrange six matches to form four equilateral triangles? (From “Problem Solving” by M. Scheerer. Copyright © 1963 by Scientific American, Inc. All rights reserved.)

FIGURE 31.4 The candle-mounting problem Using these materials, how would you mount the candle on a bulletin board? (From Duncker, 1945.)

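The logic of Wason’s 2-4-6 task is easy to sketch in code (a hypothetical illustration; the rule and the sample tests come from the text). Every sequence the wrong “counting by twos” hypothesis generates also satisfies the true rule, any three ascending numbers, so confirming tests can never expose the error; only a test the hypothesis predicts should fail can do that.

```python
def true_rule(a, b, c):
    # Wason's actual rule: any three ascending numbers
    return a < b < c

def wrong_hypothesis(a, b, c):
    # A student's typical wrong guess: counting up by twos
    return b - a == 2 and c - b == 2

# Confirming tests from the text: each fits the wrong hypothesis...
confirming_tests = [(6, 8, 10), (100, 102, 104)]
# ...and each also conforms to the true rule, so every test "confirms" the wrong guess.
print(all(true_rule(*t) for t in confirming_tests))   # True

# A disconfirming test, one the hypothesis says should fail, reveals the gap.
print(true_rule(1, 2, 4), wrong_hypothesis(1, 2, 4))  # True False
```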

FIGURE 31.5 Solution to the matchstick problem To solve this problem, you must view it from a new perspective, breaking the fixation of limiting solutions to two dimensions. (From “Problem Solving” by M. Scheerer. Copyright © 1963 by Scientific American, Inc. All rights reserved.)

FIGURE 31.6 Solution to the candle-mounting problem Solving this problem requires recognizing that a matchbox can have other functions besides holding matches. (In the solution image, thumbtacks pushed through the empty matchbox fasten it to the bulletin board.) (From Duncker, 1945.)

Making Decisions and Forming Judgments

31-3 How do heuristics, overconfidence, and belief perseverance influence our decisions and judgments?

When making each day’s hundreds of judgments and decisions (Is it worth the bother to take an umbrella? Can I trust this person? Should I shoot the basketball or pass to the player who’s hot?), we seldom take the time and effort to reason systematically. We just follow our intuition. After interviewing policymakers in government, business, and education, social psychologist Irving Janis (1986) concluded that they “often do not use a reflective problem-solving approach. How do they usually arrive at their decisions? If you ask, they are likely to tell you . . . they do it mostly by the seat of their pants.”

Using and Misusing Heuristics

When we need to act quickly, those mental shortcuts we call heuristics often do help us overcome analysis paralysis. Thanks to our mind’s automatic information processing, intuitive judgments are instantaneous. But the price we sometimes pay for this efficiency—quick but bad judgments—can be costly. Research by cognitive psychologists Amos Tversky and Daniel Kahneman (1974) on the representativeness and availability heuristics showed how these generally helpful shortcuts can lead even the smartest people into dumb decisions. (Their joint work on decision making received the 2002 Nobel prize, although sadly, only Kahneman was alive to receive the honor.)

The Representativeness Heuristic

To judge the likelihood of things in terms of how well they represent particular prototypes is to use the representativeness heuristic. To illustrate, consider:

representativeness heuristic judging the likelihood of things in terms of how well they seem to represent, or match, particular prototypes; may lead us to ignore other relevant information.

“In creating these problems, we didn’t set out to fool people. All our problems fooled us, too.” Amos Tversky (1985)

availability heuristic estimating the likelihood of events based on their availability in memory; if instances come readily to mind (perhaps because of their vividness), we presume such events are common.

“Intuitive thinking [is] fine most of the time. . . . But sometimes that habit of mind gets us in trouble.” Daniel Kahneman (2005)

A stranger tells you about a person who is short, slim, and likes to read poetry, and then asks you to guess whether this person is more likely to be a professor of classics at an Ivy League university or a truck driver (adapted from Nisbett & Ross, 1980). Which would be the better guess?

Did you answer “professor”? Many people do, because the description seems more representative of Ivy League scholars than of truck drivers. The representativeness heuristic enabled you to make a snap judgment. But it also led you to ignore other relevant information. When I help people think through this question, the conversation goes something like this:

Question: First, let’s figure out how many professors fit the description. How many Ivy League universities do you suppose there are?
Answer: Oh, about 10, I suppose.
Question: How many classics professors would you guess there are at each?
Answer: Maybe 4.
Question: Okay, that’s 40 Ivy League classics professors. What fraction of these are short and slim?
Answer: Let’s say half.
Question: And, of these 20, how many like to read poetry?
Answer: I’d say half—10 professors.
Question: Okay, now let’s figure how many truck drivers fit the description. How many truck drivers do you suppose there are?
Answer: Maybe 400,000.
Question: What fraction are short and slim?
Answer: Not many—perhaps 1 in 8.
Question: Of these 50,000, what percentage like to read poetry?
Answer: Truck drivers who like poetry? Maybe 1 in 100—oh, oh, I get it—that leaves 500 short, slim, poetry-reading truck drivers.
Comment: Yup. So, even if we accept your stereotype that the description is more representative of classics professors than of truck drivers, the odds are 50 to 1 that this person is a truck driver.
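The base-rate arithmetic in this dialogue takes only a few lines to check (a sketch using the figures given in the text):

```python
# Professors matching the description (figures from the dialogue)
professors = 10 * 4                    # ~10 Ivy League schools x ~4 classics professors each
professors_match = professors // 2 // 2  # half are short and slim; half of those read poetry

# Truck drivers matching the description
drivers = 400_000
drivers_match = drivers // 8 // 100      # 1 in 8 short and slim; 1 in 100 of those read poetry

print(professors_match, drivers_match)            # 10 500
print(drivers_match // professors_match)          # 50 -> odds of 50 to 1 for "truck driver"
```

Even granting the stereotype, the far larger base rate of truck drivers swamps the resemblance to a professor.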


The representativeness heuristic influences many of our daily decisions. To judge the likelihood of something, we intuitively compare it with our mental representation of that category—of, say, what truck drivers are like. If the two match, that fact usually overrides other considerations of statistics or logic.

The Availability Heuristic

The availability heuristic operates when we base our judgments on how mentally available information is. Anything that enables information to “pop into mind” quickly and with little effort—its recency, vividness, or distinctiveness—can increase its perceived availability, making it seem commonplace. Casinos entice us to gamble by signaling even small wins with bells and lights—making them vividly memorable—while keeping big losses soundlessly invisible. And if someone from a particular ethnic group commits a terrorist act, our readily available memory of the dramatic event may shape our impression of the whole group. When statistical reality is pitted against a single vivid case, the memorable case often wins.

The mass killing of civilians may seem on the increase of late, thanks to memorably available terrorism and genocide. Actually, such horrors have declined sharply since the late 1980s (Pinker, 2007; U.S. Department of State, 2004). Even during 9/11’s horrific year, terrorist acts claimed comparatively few lives, note risk researchers (see FIGURE 31.7). Yet in 2007, a poll showed “terrorism” was Americans’ top priority for Congress and the President, and that responding to global climate change—which some scientists regard as a future “Armageddon in slow motion”—was one of the lowest priorities (Pew, 2007).

Emotion-laden images of terror exacerbate our fears of terrorism by harnessing the availability heuristic, notes political scientist Cass Sunstein (2007). We fear flying because we play in our heads a tape of 9/11 or some other air disaster. We fear letting our children walk to school because we play in our heads tapes of abducted and brutalized children. We fear swimming in ocean waters because we replay Jaws in our heads. And so, thanks to these readily available images, we come to fear extremely rare events. Meanwhile, the lack of comparably available images of global climate change leaves most people little concerned. (For more on the power of vivid cases, turn the page to see Thinking Critically About: The Fear Factor.)

We reason emotionally and neglect probabilities, points out psychologist Paul Slovic (2007). We overfeel and underthink. In one experiment, Deborah Small, George Loewenstein, and Slovic (2007) found that donations to a starving 7-year-old child were greater when her image was not accompanied by statistical information about the millions of needy African children like her. “If I look at the mass I will never act,” Mother Teresa reportedly said. “If I look at the one, I will.”

“The problem is I can’t tell the difference between a deeply wise, intuitive nudge from the Universe and one of my own bone-headed ideas!”

overconfidence the tendency to be more confident than correct—to overestimate the accuracy of our beliefs and judgments.

belief perseverance clinging to one’s initial conceptions after the basis on which they were formed has been discredited.

FIGURE 31.7 Risk of death from various causes in the United States, 2001 (Data assembled from various government sources by Randall Marshall et al., 2007.) [Bar graph. Vertical axis: risk of death, 0 to 0.00018. Horizontal axis: cause of death. Auto accidents: 1 in 6029; Suicide: 1 in 9310; Homicide: 1 in 25,123; Pedestrian: 1 in 46,960; Accidental choking: 1 in 94,371; Terrorist attack: 1 in 97,927.]

“Don’t believe everything you think.” Bumper sticker

“The human understanding is most excited by that which strikes and enters the mind at once and suddenly, and by which the imagination is immediately filled and inflated. It then begins almost imperceptibly to conceive and suppose that everything is similar to the few objects which have taken possession of the mind.” Francis Bacon, Novum Organum, 1620

Overconfidence

Our use of intuitive heuristics when forming judgments, our eagerness to confirm the beliefs we already hold, and our knack for explaining away failures combine to create overconfidence, a tendency to overestimate the accuracy of our knowledge and judgments. Across various tasks, people overestimate what their performance was, is, or will be (Metcalfe, 1998). People are also more confident than correct when answering such questions as, “Is absinthe a liqueur or a precious stone?” (It’s a licorice-flavored liqueur.) On questions where only 60 percent of people answer correctly, respondents typically feel 75 percent confident. Even those who feel 100 percent certain err about 15 percent of the time (Fischhoff et al., 1977).

Overconfidence plagues decisions outside the laboratory, too. It was an overconfident Lyndon Johnson who waged war with North Vietnam and an overconfident George W. Bush who marched into Iraq to eliminate supposed weapons of mass destruction. On a smaller scale, overconfidence drives stockbrokers and investment managers to market their ability to outperform stock market averages, despite overwhelming evidence to the contrary (Malkiel, 2004). A purchase of stock X, recommended by a broker who judges this to be the time to buy, is usually balanced by a sale made by someone who judges this to be the time to sell. Despite their confidence, buyer and seller cannot both be right.

Students are routinely overconfident about how quickly they can do assignments and write papers, typically expecting to finish ahead of schedule (Buehler et al., 1994). In fact, the projects generally take about twice the number of days predicted. Despite our painful underestimates, we remain overly confident of our next prediction. Moreover, anticipating how much we will accomplish, we then overestimate our future free time (Zauberman & Lynch, 2005). Believing we will have more free time next month than we do today, we happily accept invitations, only to discover we’re just as busy when the day rolls around.

Failing to appreciate our potential for error can have serious consequences, but overconfidence does have adaptive value. People who err on the side of overconfidence live more happily, find it easier to make tough decisions, and seem more credible than those who lack self-confidence (Baumeister, 1989; Taylor, 1989). Moreover, given prompt and clear feedback—as weather forecasters receive after each day’s predictions—we can learn to be more realistic about the accuracy of our judgments (Fischhoff, 1982). The wisdom to know when we know a thing and when we do not is born of experience.

Predict your own behavior When will you finish reading this module?

“When you know a thing, to hold that you know it; and when you do not know a thing, to allow that you do not know it; this is knowledge.” Confucius (551–479 B.C.), Analects

“As we know, There are known knowns. There are things we know we know. We also know There are known unknowns. That is to say We know there are some things We do not know. But there are also unknown unknowns, The ones we don’t know We don’t know.” Donald Rumsfeld, U.S. Department of Defense news briefing, 2002

The Belief Perseverance Phenomenon

Our readiness to fear the wrong things and to be overconfident in our judgments is startling. Equally startling is our tendency to cling to our beliefs in the face of contrary evidence. Belief perseverance often fuels social conflict, as it did in one study of people with opposing views of capital punishment (Lord et al., 1979). Those on both sides studied two supposedly new research findings, one supporting and the other refuting the claim that the death penalty deters crime. Each side was more impressed by the study supporting its own beliefs, and each readily disputed the other study. Thus, showing the pro– and anti–capital-punishment groups the same mixed evidence actually increased their disagreement.

If you want to rein in the belief perseverance phenomenon, a simple remedy exists: Consider the opposite. When Charles Lord and his colleagues (1984) repeated the capital-punishment study, they asked some participants to be “as objective and unbiased as possible.” The plea did nothing to reduce biased evaluations of evidence. They asked another group to consider “whether you would have made the same high or low evaluations had exactly the same study produced results on the other side of the issue.” Having imagined and pondered opposite findings, these people became much less biased in their evaluations of the evidence.

The more we come to appreciate why our beliefs might be true, the more tightly we cling to them. Once people have explained to themselves why they believe a child is “gifted” or “learning disabled,” or why candidate X or Y will be a better commander-in-chief, or why company Z is a stock worth owning, they tend to ignore evidence undermining that belief. Prejudice persists. Once beliefs form and get justified, it takes more compelling evidence to change them than it did to create them.


“I’m happy to say that my final judgment of a case is almost always consistent with my prejudgment of the case.”


Thinking Critically About

The Fear Factor—Do We Fear the Right Things? “Most people reason dramatically, not quantitatively,” said Oliver Wendell Holmes. After 9/11, many people feared flying more than driving. (In a 2006 Gallup survey, only 40 percent reported being “not afraid at all” to fly.) Yet Americans were—mile for mile—230 times more likely to die in an automobile crash than on a commercial flight in the months between 2003 and 2005 (National Safety Council, 2008). In a late-2001 essay, I calculated that if—because of 9/11—we flew 20 percent less and instead drove

half those unflown miles, about 800 more people would die in traffic accidents in the year after 9/11 (Myers, 2001). In checking this estimate against actual accident data (why didn’t I think of that?), German psychologist Gerd Gigerenzer (2004) found that the last three months of 2001 did indeed produce significantly more U.S. traffic fatalities than the three-month average in the previous five years (FIGURE 31.8). Long after 9/11, the dead terrorists were still killing Americans. As air travel gradually

FIGURE 31.8 Still killing Americans Images of 9/11 etched a sharper image in our minds than did the millions of fatality-free flights on U.S. airlines during 2002 and after. Such dramatic events, being readily available to memory, shape our perceptions of risk. In the three months after 9/11, those faulty perceptions led more people to travel, and some to die, by car. (Adapted from Gigerenzer, 2004.) [The graph plots monthly U.S. traffic fatalities for 2001 against the 1996–2000 monthly average; October through December 2001 show 353 excess fatalities.]

recovered during 2002 through 2005, U.S. commercial flights carried nearly 2.5 billion passengers, with no deaths on a major airline big jet (McMurray, 2006; Miller, 2005). Meanwhile, 172,000 Americans died in traffic accidents. For most people, the most dangerous aspect of airline flying is the drive to the airport.

Why do we fear the wrong things? Why do we judge terrorism to be a greater risk than accidents—which kill nearly as many per week in just the United States as did terrorism (2527 deaths worldwide) in all of the 1990s (Johnson, 2001)? Even with the horror of 9/11, more Americans in 2001 died of food poisoning (which scares few) than of terrorism (which scares many). Psychological science has identified four influences on our intuitions about risk. Together they explain why we sometimes fret over remote possibilities while ignoring much higher probabilities.

First, we fear what our ancestral history has prepared us to fear. Human emotions were road tested in the Stone Age. Our old brain prepares us to fear yesterday's risks: snakes, lizards, and spiders (which combined now kill relatively few in developed countries). And it prepares us to fear confinement and heights, and therefore flying.

Second, we fear what we cannot control. Driving we control, flying we do not.

Third, we fear what is immediate. Threats related to flying are mostly telescoped into the moments of takeoff and landing, while the dangers of driving are diffused across many moments to come, each trivially dangerous. Similarly, many smokers (whose habit shortens their lives, on average, by about five years) fret openly before flying (which, averaged across people, shortens life by one day). Smoking's toxicity kills in the distant future.

Fourth, we fear what is most readily available in memory. Powerful, available memories—like the image of United Flight 175 slicing into the World Trade Center—serve as our measuring rods as we intuitively judge risks. Thousands of safe car trips have extinguished our anxieties about driving.

Vivid events also distort our comprehension of risks and probable outcomes. We comprehend disasters that have killed people dramatically, in bunches. But we fear too little those threats that will claim lives undramatically, one by one, and in the distant future. As Bill Gates has noted, each year a half-million children worldwide—the equivalent of four 747s full of children every day—die quietly, one by one, from rotavirus, and we hear nothing of it (Glass, 2004). Dramatic outcomes make us gasp; probabilities we hardly grasp.

Dramatic deaths in bunches breed concern and fear The memorable South Asian tsunami that killed some 300,000 people stirred an outpouring of concern and new tsunami-warning technology. Meanwhile, a "silent tsunami" of poverty-related malaria was killing about that many of the world's children every couple months, noted Jeffrey Sachs, the head of a United Nations project aiming to cut extreme poverty in half by 2015 (Dugger, 2005).

Nevertheless, we must "learn to protect ourselves and our families against future terrorist attacks," warns a U.S. Department of Homeland Security ad that appeared periodically in my local newspaper, advising us to buy and store the food supplies, duct tape, and battery-powered radios we'll need if "there's a terrorist attack on your city." With 4 in 10 Americans being at least somewhat worried "that you or someone in your family will become a victim of terrorism," the "Be afraid!" message—be afraid not just of a terrorist attack on somebody somewhere, but of one on you and your place—has been heard (Carroll, 2005).

The point to remember: It is perfectly normal to fear purposeful violence from those who hate us. When terrorists strike again, we will all recoil in horror. But smart thinkers will remember this: Check your fears against the facts and resist those who serve their own purposes by cultivating a culture of fear. By so doing, we can take away the terrorists' most omnipresent weapon: exaggerated fear.

"Fearful people are more dependent, more easily manipulated and controlled, more susceptible to deceptively simple, strong, tough measures and hard-line postures." Media researcher George Gerbner to U.S. Congressional Subcommittee on Communications, 1981

The Perils and Powers of Intuition

31-4 How do smart thinkers use intuition?

intuition an effortless, immediate, automatic feeling or thought, as contrasted with explicit, conscious reasoning.

We have seen how our irrational thinking can plague our efforts to solve problems, make wise decisions, form valid judgments, and reason logically. Intuition also feeds our gut fears and prejudices. Moreover, these perils of intuition appear even when people are offered extra pay for thinking smart, even when they are asked to justify their answers, and even when they are expert physicians or clinicians (Shafir & LeBoeuf, 2002). From this you might conclude that our heads are indeed filled with straw.

But we must not abandon hope for human rationality. Today's cognitive scientists are also revealing intuition's powers, as you can see throughout this book (TABLE 31.1). For the most part, our cognition's instant, intuitive reactions enable us to react quickly and usually adaptively. They do so thanks, first, to our fast and frugal


TABLE 31.1 Intuition's Perils and Powers

Intuition's Dozen Deadly Sins
• Hindsight bias—looking back on events, we falsely surmise that we knew it all along.
• Illusory correlation—intuitively perceiving a relationship where none exists.
• Memory construction—influenced by our present moods and by misinformation, we may form false memories.
• Representativeness and availability heuristics—fast and frugal heuristics become quick and dirty when leading us into illogical and incorrect judgments.
• Overconfidence—our intuitive assessments of our own knowledge are often more confident than correct.
• Belief perseverance and confirmation bias—thanks partly to our preference for confirming information, beliefs are often resilient, even after their foundation is discredited.
• Framing—judgments flip-flop, depending on how the same issue or information is posed.
• Interviewer illusion—inflated confidence in one's discernment based on interview alone.
• Mispredicting our own feelings—we often mispredict the intensity and duration of our emotions.
• Self-serving bias—in various ways, we exhibit inflated self-assessments.
• Fundamental attribution error—overly attributing others' behavior to their dispositions by discounting unnoticed situational forces.
• Mispredicting our own behavior—our intuitive self-predictions often go astray.

Evidence of Intuition's Powers
• Blindsight—brain-damaged persons' "sight unseen" as their bodies react to things and faces not consciously recognized.
• Right-brain thinking—split-brain persons displaying knowledge they cannot verbalize.
• Infants' intuitive learning—of language and physics.
• Moral intuition—quick gut feelings that precede moral reasoning.
• Divided attention and priming—unattended information processed by the mind's downstairs radar watchers.
• Everyday perception—the instant parallel processing and integration of complex information streams.
• Automatic processing—the cognitive autopilot that guides us through most of life.
• Implicit memory—remembering how to do something without knowing that one knows.
• Heuristics—those fast and frugal mental shortcuts that normally serve us well enough.
• Intuitive expertise—phenomena of unconscious learning, expert learning, and physical genius.
• Creativity—the sometimes-spontaneous appearance of novel and valuable ideas.
• Social and emotional intelligence—the intuitive know-how to comprehend and manage ourselves in social situations and to perceive and express emotions.
• The wisdom of the body—when instant responses are needed, the brain's emotional pathways bypass the cortex; hunches sometimes precede rational understanding.
• Thin slices—detecting traits from mere seconds of behavior.
• Dual attitude system—as we have two ways of knowing (unconscious and conscious) and two ways of remembering (implicit and explicit), we also have gut-level and rational attitude responses.

heuristics that enable us, for example, to intuitively assume that fuzzy-looking objects are far away, which they usually are (except on foggy mornings). Our learned associations also spawn the intuitions of our two-track mind. If a stranger looks like someone who previously harmed or threatened us, we may—without consciously recalling the earlier experience—react warily. (The learned association surfaces as a gut feeling.)

In showing how everyday heuristics usually make us smart (and only sometimes make us dumb), Gerd Gigerenzer (2004, 2007) asked both American and German university students, "Which city has more inhabitants: San Diego or San Antonio?" After thinking a moment, 62 percent of the Americans guessed right: San Diego. But German students, many of whom had not heard of San Antonio (apologies to our Texas friends), used a fast and frugal intuitive heuristic: Pick the one you recognize. With less knowledge but an adaptive heuristic, 100 percent of the German respondents answered correctly.

University of Amsterdam psychologist Ap Dijksterhuis and his colleagues (2006a,b) discovered the surprising powers of unconscious intuition in experiments that showed people complex information about potential apartments (or roommates or art posters). They invited some participants to state their immediate preference after reading a dozen pieces of information about each of four apartments. A second group, given several minutes to analyze the information, tended to make slightly smarter decisions. But wisest of all, in study after study, was a third group,


whose attention was distracted for a time. This enabled their minds to process the complex information unconsciously and to arrive at a more satisfying result. Faced with complex decisions involving many factors, the best advice may indeed be to take our time—to "sleep on it"—and to await the intuitive result of our unconscious processing.

Intuition is huge. More than we realize, thinking occurs off-screen, with the results occasionally displayed on-screen. Intuition is adaptive. It feeds our expertise, our creativity, our love, and our spirituality. And intuition, smart intuition, is born of experience. Chess masters can look at a board and intuitively know the right move. Playing "blitz chess," where every move is made after barely more than a glance, they display a hardly diminished skill (Burns, 2004). Experienced chicken sexers can tell you a chick's sex at a glance, yet cannot tell you how they do it. In each case, the immediate insight describes acquired, speedy expertise that feels like instant intuition. Experienced nurses, firefighters, art critics, car mechanics, hockey players, and you, for anything in which you develop a deep and special knowledge, learn to size up many a situation in an eyeblink. Intuition is recognition, observed Nobel laureate psychologist-economist Herbert Simon (2001). It is analysis "frozen into habit."

So, intuition—fast, automatic, unreasoned feeling and thought—harvests our experience and guides our lives. Intuition is powerful, often wise, but sometimes perilous, and especially so when we overfeel and underthink, as we do when judging risks. Today's psychological science enhances our appreciation for intuition. But it also reminds us to check our intuitions against reality. Our two-track mind makes sweet harmony as smart, critical thinking listens to the creative whispers of our vast unseen mind, and builds upon it by evaluating evidence, testing conclusions, and planning for the future.
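Gigerenzer's "pick the one you recognize" rule, described earlier in this module, lends itself to a one-function sketch. This is an illustrative rendering, not code from any study; the function name and the recognition data are invented for the example:

```python
# A sketch (not from the text) of the "fast and frugal" recognition
# heuristic: when you recognize only one of two options, infer that the
# recognized one scores higher on the criterion (here, city population).

def recognition_heuristic(option_a, option_b, recognized):
    """Return the option guessed to be larger, or None if recognition
    alone cannot decide (both or neither option is recognized)."""
    if option_a in recognized and option_b not in recognized:
        return option_a
    if option_b in recognized and option_a not in recognized:
        return option_b
    return None  # recognize both or neither: fall back on other knowledge

# A German student who has heard of San Diego but not San Antonio:
print(recognition_heuristic("San Diego", "San Antonio", {"San Diego"}))
# prints "San Diego"
```

Note that the heuristic only helps when knowledge is partial: an American who recognizes both cities gets no guidance from it, which is why the better-informed group scored worse in Gigerenzer's study.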


Chick sexing When acquired expertise becomes an automatic habit, as it is for experienced chick sexers, it feels like intuition. At a glance, they just know.

The Effects of Framing

31-5 What is framing?

framing the way an issue is posed; how an issue is framed can significantly affect decisions and judgments.

A further test of rationality is whether the same issue, presented in two different but logically equivalent ways, will elicit the same answer. For example, one surgeon tells someone that 10 percent of people die while undergoing a particular surgery. Another tells someone that 90 percent survive. The information is the same. The effect is not. To both patients and physicians, the risk seems greater to those who hear that 10 percent will die (Marteau, 1989; McNeil et al., 1988; Rothman & Salovey, 1997).

The effects of framing, the way we present an issue, are sometimes striking. Nine in 10 college students rate a condom as effective if it has a supposed "95 percent success rate" in stopping HIV, the virus that causes AIDS; only 4 in 10 think it successful given a "5 percent failure rate" (Linville et al., 1992). And people express more surprise when a 1-in-20 event happens than when an equivalent 10-in-200 event happens (Denes-Raj et al., 1995). To scare people, frame risks as numbers, not percentages. People told that a chemical exposure is projected to kill 10 of every 10 million people (imagine 10 dead people!) feel more frightened than if told the fatality risk is an infinitesimal .000001 (Kraus et al., 1992).

Consider how the framing effect influences political and business decisions. Politicians know to frame their position on public assistance as "aid to the needy" if they are for it and "welfare" if not. Merchants mark up their "regular prices" to appear to offer huge savings on "sale prices." A $100 coat marked down from $150 by Store X can seem like a better deal than the same coat priced regularly at $100 by Store Y (Urbany et al., 1988). And ground beef described as "75 percent lean" seems much more appealing than beef that is "25 percent fat" (Levin & Gaeth, 1988; Sanford et al., 2002). Likewise, a price difference between gas purchased by credit card versus cash feels better if framed as a "cash discount" rather than a "credit card fee."

Framing research also finds a powerful application in the definition of options, which can be posed in ways that nudge people toward better decisions (Thaler & Sunstein, 2008).
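The framing pairs above are numerically interchangeable, which is the whole point: only the wording differs. A minimal arithmetic check (illustrative, not from the text) makes the equivalences explicit:

```python
# Illustrative check: each pair below is the same probability expressed
# in two frames that people nonetheless react to differently.
pairs = [
    (0.10, 1 - 0.90),             # "10 percent die" vs. "90 percent survive"
    (1 / 20, 10 / 200),           # a 1-in-20 event vs. a 10-in-200 event
    (10 / 10_000_000, 0.000001),  # "10 of every 10 million" vs. ".000001"
]
for a, b in pairs:
    # allow for floating-point rounding when comparing
    assert abs(a - b) < 1e-12
print("all pairs equivalent")  # prints "all pairs equivalent"
```

If reason ruled alone, each member of a pair would trigger the same judgment; the experiments cited above show it does not.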

|| What time is it now? When I asked you (in the section on overconfidence) to estimate how quickly you would finish this module, did you underestimate or overestimate? ||

• Preferred portion size depends on framing. If a restaurant offers a regular and an alternative "small-sized" menu option, most people will elect the larger option. If the restaurant makes the small portion the default option and relabels the larger option as "supersized," more people will choose the smaller portion (Schwartz, 2007).

• Why choosing to be an organ donor depends on where you live. In many European countries as well as North America, people can decide whether they want to be organ donors when renewing their driver's license. In countries where the default option is yes, but people can opt out, nearly 100 percent agree to be donors. In the United States, Canada, Britain, and Germany, where the default option is no but people can "opt in," only about 1 in 4 agree to be donors (Johnson & Goldstein, 2003).

• How to help employees decide to save for their retirement. A 2006 U.S. pension law recognized the huge effect of framing options. Previously, employees who wanted to defer part of their compensation to a 401(k) retirement plan typically had to elect to lower their take-home pay, which most people are averse to doing. Now companies are being encouraged to enroll their employees automatically but to allow them to opt out (thereby raising their take-home pay). In both plans, the choice was the employee's. But under the "opt-out" rather than "opt-in" option, enrollments soared from 49 to 86 percent (Madrian & Shea, 2001).

The point to remember: Those who understand the power of framing can use it to influence our decisions.

***

Returning to our debate about how deserving we are of our name Homo sapiens, let's pause to issue an interim report card. On decision making and judgment, our error-prone species might rate a C+. On problem solving, where humans are inventive yet vulnerable to fixation, we would probably receive a better mark, perhaps a B. On cognitive efficiency, our fallible but quick heuristics earn us an A.


Review Thinking

31-1 What are the functions of concepts?

Cognition is a term covering all the mental activities associated with thinking, knowing, remembering, and communicating. We use concepts, mental groupings of similar objects, events, ideas, or people, to simplify and order the world around us. In creating hierarchies, we subdivide these categories into smaller and more detailed units. We form some concepts, such as triangles, by definition (three-sided objects), but we form most around prototypes, or best examples of a category.

31-2 What strategies assist our problem solving, and what obstacles hinder it?

An algorithm is a time-consuming but thorough set of rules or procedures (such as a step-by-step description for evacuating a building during a fire) that guarantees a solution to a problem. A heuristic is a simpler thinking strategy (such as running for an exit if you smell smoke) that may allow us to solve problems quickly, but sometimes leads us to incorrect solutions. Insight is not a strategy-based solution, but rather a sudden flash of inspiration that solves a problem. Obstacles to successful problem solving include the confirmation bias, which predisposes us to verify rather than challenge our hypotheses, and fixation, such as mental set and functional fixedness, which may prevent us from taking the fresh perspective that would let us solve the problem.

31-3 How do heuristics, overconfidence, and belief perseverance influence our decisions and judgments?

The representativeness heuristic leads us to judge the likelihood of things in terms of how they represent our prototype for a group of items. The availability heuristic leads us to judge the likelihood of things based on how readily they come to mind, which often leads us to fear the wrong things. We are often more confident than correct. Once we have formed a belief and explained it, the explanation may linger in our minds even if the belief gets discredited—the result is belief perseverance. A remedy for belief perseverance is to consider how we might have explained an opposite result.

31-4 How do smart thinkers use intuition?

Although it sometimes leads us astray, human intuition—effortless, immediate, automatic feeling or thought—can give us instant help when we need it. Experts in a field grow adept at making quick, shrewd judgments. Smart thinkers will welcome their intuitions but check them against available evidence.

31-5 What is framing?

Framing is the way a question or statement is worded. Subtle wording differences can dramatically alter our responses.

Terms and Concepts to Remember

cognition, p. 370
concept, p. 370
prototype, p. 370
algorithm, p. 371
heuristic, p. 371
insight, p. 371
confirmation bias, p. 373
fixation, p. 373
mental set, p. 373
functional fixedness, p. 373
representativeness heuristic, p. 374
availability heuristic, p. 375
overconfidence, p. 376
belief perseverance, p. 377
intuition, p. 378
framing, p. 381

Test Yourself

1. The availability heuristic is a quick-and-easy but sometimes misleading guide to judging reality. What is the availability heuristic? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself

1. People's perceptions of risk, often biased by vivid images from movies or the news, are surprisingly unrelated to actual risks. (People may hide in the basement during thunderstorms but fail to buckle their seat belts in the car.) What are the things you fear? Are some of those fears out of proportion to statistical risk? Are you failing, in other areas of your life, to take reasonable precautions?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Language Structure
Language Development
The Brain and Language
Thinking and Language
Animal Thinking and Language

module 32 Language and Thought

The most tangible indication of our thinking power is language—our spoken, written, or signed words and the ways we combine them as we think and communicate. Humans have long and proudly proclaimed that language sets us above all other animals. "When we study human language," asserted linguist Noam Chomsky (1972), "we are approaching what some might call the 'human essence,' the qualities of mind that are, so far as we know, unique [to humans]." To cognitive scientist Steven Pinker (1990), language is "the jewel in the crown of cognition."

Imagine an alien species that could pass thoughts from one head to another merely by pulsating air molecules in the space between them. Perhaps these weird creatures could inhabit a future Spielberg movie? Actually, we are those creatures! When we speak, our brain and voice box conjure up air pressure waves that we send banging against another's eardrum—enabling us to transfer thoughts from our brain into theirs. As Pinker (1998) notes, we sometimes sit for hours "listening to other people make noise as they exhale, because those hisses and squeaks contain information." And thanks to all those funny sounds created in our heads from the air pressure waves we send out, adds Bernard Guerin (2003), we get people's attention, we get them to do things, and we maintain relationships. Depending on how you vibrate the air after opening your mouth, you may get slapped or kissed.

But language is more than vibrating air. As I create this paragraph, my fingers on a keyboard generate electronic binary numbers that get translated into squiggles of dried carbon pressed onto stretched wood pulp on the page in front of you. When transmitted by reflected light rays into your retina, the printed squiggles trigger formless nerve impulses that project to several areas of your brain, which integrates the information, compares it to stored information, and decodes meaning.
Thanks to language, we have transferred meaning from one mind to another. Whether spoken, written, or signed, language enables us not only to communicate but to transmit civilization’s accumulated knowledge across generations. Monkeys mostly know what they see. Thanks to language, we know much that we’ve never seen.

Language transmits culture Words and grammar differ from culture to culture. But in every society, language allows people to transmit their accumulated knowledge from one generation to the next. Here, a group of Ivory Coast boys listens as an elder retells a tribal legend.

language our spoken, written, or signed words and the ways we combine them to communicate meaning.



Language Structure

32-1 What are the structural components of a language?

Consider how we might go about inventing a language. For a spoken language, we would need three building blocks.

Phonemes

First, we would need a set of basic sounds, which linguists call phonemes. To say bat we utter the phonemes b, a, and t. Chat also has three phonemes—ch, a, and t. Linguists surveying nearly 500 languages have identified 869 different phonemes in human speech (Holt, 2002; Maddieson, 1984). No one language uses all of them. English uses about 40; other languages, anywhere from half to more than twice that many.

Within a language, changes in phonemes produce changes in meaning. In English, varying the vowel sound between b and t creates 12 different meanings: bait, bat, beat/beet, bet, bit, bite, boat, boot, bought, bout, and but (Fromkin & Rodman, 1983). Generally, though, consonant phonemes carry more information than do vowel phonemes. The treth ef thes stetement shed be evedent frem thes bref demenstretien.

People who grow up learning one set of phonemes usually have difficulty pronouncing those of another language. The native English speaker may smile at the native German speaker's difficulties with the th sound, which can make this sound like dis. But the German speaker smiles back at the problems English speakers have rolling the German r or pronouncing the breathy ch in ich, the German word for I.

Sign language also has phonemelike building blocks defined by hand shapes and movements. Like speakers, native signers of one of the 200+ sign languages may have difficulty with the phonemes of another. Chinese native signers who come to America and learn sign usually sign with an accent, notes researcher Ursula Bellugi (1994).

phoneme in language, the smallest distinctive sound unit. morpheme in a language, the smallest unit that carries meaning; may be a word or a part of a word (such as a prefix). grammar in a language, a system of rules that enables us to communicate with and understand others.

semantics the set of rules by which we derive meaning from morphemes, words, and sentences in a given language; also, the study of meaning.

syntax the rules for combining words into grammatically sensible sentences in a given language.

Morphemes

But sounds alone do not make a language. The second building block is the morpheme, the smallest unit of language that carries meaning. In English, a few morphemes are also phonemes—the personal pronoun I and the article a, for instance. But most morphemes are combinations of two or more phonemes. Some, like bat, are words, but others are only parts of words. Morphemes include prefixes and suffixes, such as the pre- in preview or the -ed that shows past tense.

|| How many morphemes are in the word cats? How many phonemes? (Answer below.) Two morphemes—cat and s, and four phonemes—c, a, t, and s. ||

Grammar

Finally, our new language must have a grammar, a system of rules (semantics and syntax) in a given language that enables us to communicate with and understand others. Semantics is the set of rules we use to derive meaning from morphemes, words, and even sentences. In English, a semantic rule tells us that adding -ed to laugh means that it happened in the past. Syntax refers to the rules we use to order words into sentences. One rule of English syntax says that adjectives usually come before nouns, so we say white house. Spanish adjectives usually reverse this order, as in casa blanca. The English rules of syntax allow the sentence They are hunting dogs. Given the context, semantics will tell us whether it refers to dogs that seek animals or people who seek dogs.

In all 6000 human languages, the grammar is intricately complex. "There are 'Stone Age' societies, but they do not have 'Stone Age' languages" (Pinker, 1995). Contrary to the illusion that less-educated people speak ungrammatically, they simply speak a different dialect. To a linguist, "ain't got none" is grammatically equal to "doesn't have any." (It has the same syntax.)

|| Slightly more than half the world’s 6000 languages are spoken by fewer than 10,000 people. And slightly more than half the world’s population speaks one of the top 20 languages (Gibbs, 2002). ||


“Let me get this straight now. Is what you want to build a jean factory or a gene factory?”


Note, however, that language becomes increasingly complex as you move from one level to the next. In English, for example, the relatively small number of 40 or so phonemes can be combined to form more than 100,000 morphemes, which alone or in combination produce the 616,500 word forms in the Oxford English Dictionary (including 290,500 main entries such as meat and 326,000 subentries such as meat eater). We can then use these words to create an infinite number of sentences, most of which (like this one) are original. Like life itself constructed from the genetic code's simple alphabet, language is complexity built of simplicity. I know that you can know why I worry that you think this sentence is starting to get too complex, but that complexity—and our capacity to communicate and comprehend it—is what distinguishes human language capacity (Hauser et al., 2002).
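The "complexity built of simplicity" claim is combinatorial at heart. A hypothetical back-of-envelope sketch (the phoneme count and sequence length are illustrative choices, not figures from the text) shows how quickly a 40-phoneme inventory outgrows the roughly 100,000 morphemes English actually uses:

```python
# Hypothetical illustration: counting all possible phoneme strings of
# length 1 through max_length from a fixed inventory. Even short strings
# vastly outnumber the ~100,000 morphemes English actually employs.
PHONEME_COUNT = 40  # roughly English's inventory

def possible_sequences(inventory, max_length):
    """Count every phoneme string of length 1 through max_length."""
    return sum(inventory ** k for k in range(1, max_length + 1))

print(possible_sequences(PHONEME_COUNT, 5))  # prints 105025640
```

Real languages use only a sliver of this space, since phonotactic rules forbid most sequences, but the arithmetic makes clear why a small sound inventory imposes no ceiling on vocabulary.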

Language Development

|| Although you probably know between 60,000 and 80,000 words, you use only 150 words for about half of what you say. ||

Make a quick guess: How many words did you learn during the years between your first birthday and your high school graduation? The answer is about 60,000 (Bloom, 2000; McMurray, 2007). That averages (after age 1) to nearly 3500 words each year, or nearly 10 each day! How you did it—how the 3500 words a year you learned could so far outnumber the roughly 200 words a year that your schoolteachers consciously taught you—is one of the great human wonders.

Before you were able to add 2 + 2, you were creating your own original and grammatically appropriate sentences. Most of us would have trouble stating our language's rules for ordering words to form sentences. Yet as preschoolers, you comprehended and spoke with a facility that puts to shame your fellow college students now struggling to learn a foreign language. We humans have an astonishing facility for language. With remarkable efficiency, we selectively sample tens of thousands of words in memory, effortlessly assemble them with near-perfect syntax, and spew them out at a rate of three words (with a dozen or so phonemes) a second (Vigliocco & Hartsuiker, 2002). Seldom do we form sentences in our minds before speaking them. Rather, they organize themselves on the fly as we speak.

And while doing all this, we also adapt our utterances to our social and cultural context, following rules for speaking (How far apart should we stand?) and listening (Is it OK to interrupt?). Given how many ways there are to mess up, it's amazing that we can master this social dance. So, when and how does it happen?
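The per-year and per-day figures above follow directly from the 60,000-word estimate. A quick arithmetic check (the 17-year span, from age 1 to roughly age 18, is the assumption the text's average rests on):

```python
# Checking the chapter's estimate: ~60,000 words learned between the
# first birthday and high school graduation, a span of about 17 years.
words_learned = 60_000
years = 17

per_year = words_learned / years
per_day = per_year / 365
print(round(per_year), round(per_day, 1))  # prints "3529 9.7"
```

So "nearly 3500 words each year, or nearly 10 each day" is exactly what the division yields.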

When Do We Learn Language?

32-2 What are the milestones in language development?

Receptive Language

Children's language development moves from simplicity to complexity. Infants start without language (in fantis means "not speaking"). Yet by 4 months of age, babies can discriminate speech sounds (Stager & Werker, 1997). They can also read lips: They prefer to look at a face that matches a sound, so we know they can recognize that ah comes from wide open lips and ee from a mouth with corners pulled back (Kuhl & Meltzoff, 1982). This period marks the beginning of the development of babies' receptive language, their ability to comprehend speech.

At seven months and beyond, babies grow in their power to do what you and I find difficult when listening to an unfamiliar language: segmenting spoken sounds into individual words. Moreover, their adeptness at this task, as judged by their listening patterns, predicts their language abilities at ages 2 and 5 (Newman et al., 2006).


Productive Language

Babies' productive language, their ability to produce words, matures after their receptive language. Around 4 months of age, babies enter the babbling stage, in which they spontaneously utter a variety of sounds, such as ah-goo. Babbling is not an imitation of adult speech, for it includes sounds from various languages, even those not spoken in the household. From this early babbling, a listener could not identify an infant as being, say, French, Korean, or Ethiopian. Deaf infants who observe their Deaf parents signing begin to babble more with their hands (Petitto & Marentette, 1991). Before nurture molds our speech, nature enables a wide range of possible sounds. Many of these natural babbling sounds are consonant-vowel pairs formed by simply bunching the tongue in the front of the mouth (da-da, na-na, ta-ta) or by opening and closing the lips (ma-ma), both of which babies do naturally for feeding (MacNeilage & Davis, 2000).

By the time infants are about 10 months old, their babbling has changed so that a trained ear can identify the language of the household (de Boysson-Bardies et al., 1989). Sounds and intonations outside that language begin to disappear. Without exposure to other languages, babies become functionally deaf to speech sounds outside their native language (Pallier et al., 2001). This explains why adults who speak only English cannot discriminate certain Japanese sounds within speech, and why Japanese adults with no training in English cannot distinguish between the English r and l. Thus, la-la-ra-ra may, to a Japanese-speaking adult, sound like the same syllable repeated. A Japanese-speaking person told that the train station is "just after the next light" may wonder, "The next what? After the street veering right, or farther down, after the traffic light?"

Around their first birthday (the exact age varies from child to child), most children enter the one-word stage.
They have already learned that sounds carry meanings, and if repeatedly trained to associate, say, fish with a picture of a fish, 1-year-olds will look at a fish when a researcher says “Fish, fish! Look at the fish!” (Schafer, 2005). Not surprisingly, they now begin to use sounds—usually only one barely recognizable syllable, such as ma or da—to communicate meaning. But family members quickly learn to understand, and gradually the infant’s language conforms more to the family’s language. At this one-word stage, an inflected word may equal a sentence. “Doggy!” may mean “Look at the dog out there!” At about 18 months, children’s word learning explodes from about a word per week to a word per day. By their second birthday, most have entered the two-word stage. They start uttering two-word sentences (TABLE 32.1) in telegraphic speech: Like the old-fashioned telegrams (TERMS ACCEPTED. SEND MONEY), this early form of speech contains mostly nouns and verbs (Want juice). Also like telegrams, it follows rules of syntax; the words are in a sensible order. English-speaking children typically place adjectives before nouns—big doggy rather than doggy big.

TABLE 32.1 Summary of Language Development

Month (approximate)   Stage
 4                    Babbles many speech sounds.
10                    Babbling resembles household language.
12                    One-word stage.
24                    Two-word, telegraphic speech.
24+                   Language develops rapidly into complete sentences.

babbling stage beginning at about 4 months, the stage of speech development in which the infant spontaneously utters various sounds at first unrelated to the household language.

one-word stage the stage in speech development, from about age 1 to 2, during which a child speaks mostly in single words.

two-word stage beginning about age 2, the stage in speech development during which a child speaks mostly two-word statements.

telegraphic speech early speech stage in which a child speaks like a telegram— “go car”—using mostly nouns and verbs.

Productive Language

“Got idea. Talk better. Combine words. Make sentences.” (Cartoon © 1994 by Sidney Harris)


Once children move out of the two-word stage, they quickly begin uttering longer phrases (Fromkin & Rodman, 1983). If they get a late start on learning a particular language, for example after receiving a cochlear implant or being an international adoptee, their language development still proceeds through the same sequence, although usually at a faster pace (Ertmer et al., 2007; Snedeker et al., 2007). By early elementary school, children understand complex sentences and begin to enjoy the humor conveyed by double meanings: “You never starve in the desert because of all the sand-which-is there.”

Explaining Language Development

32-3 How do we learn language? Attempts to explain how we acquire language have sparked a spirited intellectual controversy. The nature-nurture debate surfaces again and, here as elsewhere, appreciation for innate predisposition and the nature-nurture interaction has grown.

Skinner: Operant Learning Behaviorist B. F. Skinner (1957) believed we can explain language development with familiar learning principles, such as association (of the sights of things with the sounds of words); imitation (of the words and syntax modeled by others); and reinforcement (with smiles and hugs when the child says something right). Thus, Skinner (1985) argued, babies learn to talk in many of the same ways that animals learn to peck keys and press bars: “Verbal behavior evidently came into existence when, through a critical step in the evolution of the human species, the vocal musculature became susceptible to operant conditioning.” And it’s not just humans. Song-learning birds also acquire their “language” aided by imitation (Haesler, 2007).

Chomsky: Inborn Universal Grammar Linguist Noam Chomsky (1959, 1987) has likened Skinner’s ideas to filling a bottle with water. But developing language is not just being “filled up” with the right kinds of experiences, Chomsky insisted. Children acquire untaught words and grammar at a rate too extraordinary to be explained solely by learning principles. They generate all sorts of sentences they have never heard, sometimes with novel errors. (No parent teaches the sentence, “I hate you, Daddy.”) Moreover, many of the errors young children make result from overgeneralizing logical grammatical rules, such as adding -ed to form the past tense (de Cuevas, 1990):

Child: My teacher holded the baby rabbits and we petted them.
Mother: Did you say your teacher held the baby rabbits?
Child: Yes.
Mother: Did you say she held them tightly?
Child: No, she holded them loosely.

|| Under Chomsky’s influence, some researchers also infer a “universal moral grammar”—an innate sense of right and wrong—that comes prewired by evolution and gets refined by culture (Hauser, 2006; Mikhail, 2007). ||

Chomsky instead views language development much like “helping a flower to grow in its own way.” Given adequate nurture, language will naturally occur. It just “happens to the child.” And the reason it happens is that we come prewired with a sort of switch box—a language acquisition device. It is as if the switches need to be turned either “on” or “off” for us to understand and produce language. As we hear language, the switches get set for the language we are to learn. Underlying human language, Chomsky says, is a universal grammar: All human languages have the same grammatical building blocks, such as nouns and verbs, subjects and objects, negations and questions. Thus, we readily learn the specific grammar of whatever language we experience, whether spoken or signed (Bavelier et al., 2003). And no matter what that language is, we start speaking mostly in nouns (kitty, da-da) rather than verbs and adjectives (Bornstein et al., 2004). It happens so naturally—as naturally as birds learning to fly—that training hardly helps.


Creating a language Brought together as if on a desert island (actually a school), Nicaragua’s young deaf children over time drew upon sign gestures from home to create their own Nicaraguan Sign Language, complete with words and intricate grammar. Our biological predisposition for language does not create language in a vacuum. But activated by a social context, nature and nurture work creatively together (Osborne, 1999; Sandler et al., 2005; Senghas & Coppola, 2001). (Photo: Susan Meiselas/Magnum Photos)

Many psychologists believe we benefit from both Skinner’s and Chomsky’s views. Children’s genes design complex brain wiring that prepares them to learn language as they interact with their caregivers. Skinner’s emphasis on learning helps explain how infants acquire their language as they interact with others. Chomsky’s emphasis on our built-in readiness to learn grammar rules helps explain why preschoolers acquire language so readily and use grammar so well. Once again, we see biology and experience working together.

Statistical Learning and Critical Periods

A natural talent Human infants come with a remarkable capacity to soak up language. But the particular language they learn will reflect their unique interactions with others. (Photo: Stock Connection Distribution/Alamy)

Human infants display a remarkable ability to learn statistical aspects of human speech. When you or I listen to an unfamiliar language, the syllables all run together. Someone unfamiliar with English might, for example, hear United Nations as “Uneye Tednay Shuns.” Well before our first birthday, our brains were not only discerning word breaks, they were statistically analyzing which syllables, as in “hap-py-ba-by,” most often go together. Jenny Saffran and her colleagues (1996; in press) showed this by exposing 8-month-old infants to a computer voice speaking an unbroken, monotone string of nonsense syllables (bidakupadotigolabubidaku . . .). After just two minutes of exposure, the infants were able to recognize (as indicated by their attention) three-syllable sequences that appeared repeatedly. Other research offers further testimony to infants’ surprising knack for soaking up language. For example, 7-month-old infants can learn simple sentence structures. After repeatedly hearing syllable sequences that follow one rule, such as ga-ti-ga and li-na-li (an ABA pattern), they listen longer to syllables in a different sequence, such as wo-fe-fe (an ABB pattern) rather than wo-fe-wo. Their detecting the difference between the two patterns supports the idea that babies come with a built-in readiness to learn grammatical rules (Marcus et al., 1999). But are we capable of performing this same feat of statistical analysis throughout our life span? Many researchers believe not. Childhood seems to represent a critical (or “sensitive”) period for mastering certain aspects of language (Hernandez & Li, 2007). Deaf children who gain hearing with cochlear implants by age 2 develop better oral speech than do those who receive implants after age 4 (Geers, 2004). And whether children are deaf or hearing, later-than-usual exposure to language (at age 2

“Childhood is the time for language, no doubt about it. Young children, the younger the better, are good at it; it is child’s play. It is a onetime gift to the species.” Lewis Thomas, The Fragile Species, 1992


“Children can learn multiple languages without an accent and with good grammar, if they are exposed to the language before puberty. But after puberty, it’s very difficult to learn a second language so well. Similarly, when I first went to Japan, I was told not even to bother trying to bow, that there were something like a dozen different bows and I was always going to ‘bow with an accent.’” Psychologist Stephen M. Kosslyn, “The World in the Brain,” 2008

FIGURE 32.1 New language learning gets harder with age Young children have a readiness to learn language. Ten years after coming to the United States, Asian immigrants took a grammar test. Although there is no sharply defined critical period for second language learning, those who arrived before age 8 understood American English grammar as well as native speakers did. Those who arrived later did not. The graph plots percentage correct on the grammar test (50 to 100 percent) against age at arrival (native, 3–7, 8–10, 11–15, and 17–39 years); the older the age at immigration, the poorer the mastery of a second language. (From Johnson & Newport, 1991.)

or 3) unleashes their brain’s idle language capacity, producing a rush of language. But children who have not been exposed to either a spoken or a signed language during their early years (by about age 7) gradually lose their ability to master any language. Natively deaf children who learn sign language after age 9 never learn it as well as those who become deaf at age 9 after learning English. They also never learn English as well as other natively deaf children who learned sign in infancy (Mayberry et al., 2002). The striking conclusion: When a young brain does not learn any language, its language-learning capacity never fully develops. After the window for learning language closes, even learning a second language seems more difficult. People who learn a second language as adults usually speak it with the accent of their first. Grammar learning is similarly more difficult. Jacqueline Johnson and Elissa Newport (1991) asked Korean and Chinese immigrants to identify whether each of 276 English sentences (“Yesterday the hunter shoots a deer”) was grammatically correct or incorrect. Some test-takers had arrived in the United States in early childhood, others as adults, but all had been in the country for approximately 10 years. Nevertheless, as FIGURE 32.1 reveals, those who learned their second language early learned it best. The older the age at which one emigrates to a new country, the harder it is to learn its language (Hakuta et al., 2003). The impact of early experiences is also evident in language learning in the 90+ percent of deaf children born to hearing-nonsigning parents. These children typically do not experience language during their early years. Compared with children exposed to sign language from birth, those who learn to sign as teens or adults are like immigrants who learn English after childhood. 
They can master the basic words and learn to order them, but they never become as fluent as native signers in producing and comprehending subtle grammatical differences (Newport, 1990). Moreover, the late learners show less brain activity in right hemisphere regions that are active when native signers read sign language (Newman et al., 2002). As a flower’s growth will be stunted without nourishment, so, too, children will typically become linguistically stunted if isolated from language during the critical period for its acquisition. The altered brain activity in those deprived of early language raises a question: How does the maturing brain normally process language?
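The statistical cue in the Saffran study described above can be sketched computationally: the transitional probability with which one syllable follows another is high inside a nonsense “word” (bi is always followed by da in bidaku) and lower across word boundaries (ku may be followed by the start of any word). The sketch below is an illustration of that idea only, not Saffran et al.’s actual analysis; the three nonsense words come from the text, while the stream length and random seed are invented for the demo.

```python
# Illustrative sketch of the transitional-probability idea behind infant
# statistical learning (not Saffran et al.'s actual procedure).
import random
from collections import Counter

def transitional_probabilities(syllables):
    """P(b follows a) = count(a, b) / count(a), over adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# The three nonsense "words" embedded in the unbroken stream.
words = [["bi", "da", "ku"], ["pa", "do", "ti"], ["go", "la", "bu"]]

random.seed(0)
stream = []
for _ in range(300):                 # unbroken, monotone stream of syllables
    stream.extend(random.choice(words))

tp = transitional_probabilities(stream)

# Within a word, the next syllable is perfectly predictable...
print(tp[("bi", "da")])              # 1.0
# ...but across a word boundary it is far less so (roughly 1 in 3).
print(round(tp[("ku", "pa")], 2))
```

An 8-month-old who tracks something like these statistics can treat high-probability runs (bi-da-ku) as units and low-probability transitions (ku-pa) as likely word boundaries, which is what the infants’ listening patterns suggested.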

No means no—no matter how you say it! Deaf children of Deaf-signing parents and hearing children of hearing parents have much in common. They develop language skills at about the same rate, and they are equally effective at opposing parental wishes and demanding their way.


The Brain and Language

32-4 What brain areas are involved in language processing? We think of speaking and reading, or writing and reading, or singing and speaking as merely different examples of the same general ability—language. But consider this curious finding: Aphasia, an impaired use of language, can result from damage to any one of several cortical areas. Even more curious, some people with aphasia can speak fluently but cannot read (despite good vision), while others can comprehend what they read but cannot speak. Still others can write but not read, read but not write, read numbers but not letters, or sing but not speak. What does this tell us about the mystery of how we use language, and how did researchers solve this mystery?

aphasia impairment of language, usually caused by left hemisphere damage either to Broca’s area (impairing speaking) or to Wernicke’s area (impairing understanding).

Broca’s area controls language expression—an area, usually in the left frontal lobe, that directs the muscle movements involved in speech. Wernicke’s area controls language reception—a brain area involved in language comprehension and expression; usually in the left temporal lobe.

Clue 1 In 1865, French physician Paul Broca reported that after damage to a specific area of the left frontal lobe (later called Broca’s area) a person would struggle to speak words while still being able to sing familiar songs and comprehend speech. Damage to Broca’s area disrupts speaking.

Clue 2 In 1874, German investigator Carl Wernicke discovered that after damage to a specific area of the left temporal lobe (Wernicke’s area) people could speak only meaningless words. Asked to describe a picture that showed two boys stealing cookies behind a woman’s back, one patient responded: “Mother is away her working her work to get her better, but when she’s looking the two boys looking the other part. She’s working another time” (Geschwind, 1979). Damage to Wernicke’s area also disrupts understanding.

Clue 3 A third brain area, the angular gyrus, is involved in reading aloud. It receives visual information from the visual area and recodes it into an auditory form, which Wernicke’s area uses to derive its meaning. Damage to the angular gyrus leaves a person able to speak and understand, but unable to read.

Clue 4 Nerve fibers interconnect these brain areas.

A century after Broca’s and Wernicke’s findings, Norman Geschwind assembled these and other clues into an explanation of how we use language (FIGURES 32.2 and 32.3).

FIGURE 32.2 A simplified model of brain areas involved in language processing: (1) visual cortex (receives written words as visual stimulation); (2) angular gyrus (transforms visual representations into an auditory code); (3) Wernicke’s area (interprets auditory code); (4) Broca’s area (controls speech muscles via the motor cortex); (5) motor cortex (word is pronounced).

FIGURE 32.3 Brain activity when hearing, seeing, and speaking words PET scans such as these detect the activity of different areas of the brain: (a) hearing words (auditory cortex and Wernicke’s area); (b) seeing words (visual cortex and angular gyrus); (c) speaking words (Broca’s area and the motor cortex).

When you read aloud, the words (1) register in the visual area, (2) are relayed to

a second brain area, the angular gyrus, which transforms the words into an auditory code that (3) is received and understood in the nearby Wernicke’s area, and (4) is sent to Broca’s area, which (5) controls the motor cortex as it creates the pronounced word. Depending on which link in this chain is damaged, a different form of aphasia occurs. Today’s neuroscience continues to enrich our understanding of language processing. We now know that more sites are involved than those portrayed in Figure 32.3, and that the “map” can vary from person to person. Moreover, fMRI scans reveal that different neural networks are activated by nouns and verbs, and by one’s native language and a second language learned later in life (Perani & Abutalebi, 2005; Shapiro et al., 2006). For example, adults who learned a second language early in life use the same patch of frontal lobe tissue when recounting an event in either the native or the second language. Those who learned their second tongue after childhood display activity in an adjacent brain area while using their second language (Kim et al., 1997). Still, the big point to remember is this: In processing language, as in other forms of information processing, the brain operates by dividing its mental functions—speaking, perceiving, thinking, remembering—into subfunctions. Your conscious experience of reading this page seems indivisible, but your brain is computing each word’s form, sound, and meaning using different neural networks (Posner & Carr, 1992). We can also see this division of processing in vision. Right now, assuming you have sight, you are experiencing a whole visual scene as if your eyes were video cameras projecting the scene into your brain. Actually, your brain is breaking that scene into specialized subtasks, such as discerning color, depth, movement, and form. 
And in vision as in language, a localized trauma that destroys one of these neural work teams may cause people to lose just one aspect of processing, as when a stroke destroys the ability to perceive movement. In both systems, each specialized neural network, having simultaneously done its own thing, then feeds its information to higher-level networks that combine the atoms of experience and relay them to progressively higher-level association areas, enabling us to recognize a face as “Grandmother.” This helps explain another funny finding. Functional MRI scans show that jokes playing on meaning (“Why don’t sharks bite lawyers? . . . Professional courtesy”) are processed in a different brain area than jokes playing on words (“What kind of lights did Noah use on the ark? . . . Flood lights”) (Goel & Dolan, 2001). Scientists have even been able to predict, from the brain’s response to various concrete nouns (things we experience with our senses), the brain’s response to other concrete nouns (Mitchell et al., 2008). Think about it: What you experience as a continuous, indivisible stream of experience is actually but the visible tip of a subdivided information-processing iceberg, most of which lies beneath the surface of your awareness.


To sum up, the mind’s subsystems are localized in particular brain regions, yet the brain acts as a unified whole. Moving your hand; recognizing faces; perceiving scenes; comprehending language—all depend on specific neural networks. Yet complex functions such as listening, learning, and loving involve the coordination of many brain areas. Together, these two principles—specialization and integration—describe the brain’s functioning.

“It is the way systems interact and have a dynamic interdependence that is—unless one has lost all sense of wonder—quite awe-inspiring.” Simon Conway Morris, “The Boyle Lecture,” 2005

Thinking and Language

32-5 What is the relationship between language and thinking? Thinking and language intricately intertwine. Asking which comes first is one of psychology’s chicken-and-egg questions. Do our ideas come first and we wait for words to name them? Or are our thoughts conceived in words and therefore unthinkable without them?

Language Influences Thinking

|| Before reading on, use a pen or pencil to sketch this idea: “The girl pushes the boy.” || How did you illustrate “the girl pushes the boy”? Anne Maass and Aurore Russo (2003) report that people whose language reads from left to right mostly position the pushing girl on the left. Those who read and write Arabic, a right-to-left language, mostly place her on the right. This spatial bias appears only in those old enough to have learned their culture’s writing system (Dobel et al., 2007).

Linguist Benjamin Lee Whorf contended that language determines the way we think. According to Whorf’s (1956) linguistic determinism hypothesis, different languages impose different conceptions of reality: “Language itself shapes a man’s basic ideas.” The Hopi, Whorf noted, have no past tense for their verbs. Therefore, he contended, a Hopi could not so readily think about the past. To say that language determines the way we think is much too strong. But to those who speak two dissimilar languages, such as English and Japanese, it seems obvious that a person may think differently in different languages (Brown, 1986). Unlike English, which has a rich vocabulary for self-focused emotions such as anger, Japanese has more words for interpersonal emotions such as sympathy (Markus & Kitayama, 1991). Many bilinguals report that they have different senses of self, depending on which language they are using (Matsumoto, 1994). They may even reveal different personality profiles when taking the same test in their two languages (Dinges & Hull, 1992). “Learn a new language and get a new soul,” says a Czech proverb. Michael Ross, Elaine Xun, and Anne Wilson (2002) demonstrated this by inviting China-born, bilingual University of Waterloo students to describe themselves in English or Chinese. English-language versions of self-descriptions fit typical Canadian profiles: Students expressed mostly positive self-statements and moods. Responding in Chinese, students gave typically Chinese self-descriptions: They reported more agreement with Chinese values and roughly equal positive and negative self-statements and moods. Their language use seemed to shape how they thought of themselves. A similar personality change occurs as people shift between the cultural frames associated with English and Spanish. English speakers score higher than Spanish speakers on measures of extraversion, agreeableness, and conscientiousness. But is this a language effect?
Nairán Ramírez-Esparza and her co-workers (2006) wondered. So they had samples of bicultural, bilingual Americans and Mexicans take the tests in each language. Sure enough, when using English they expressed their somewhat more extraverted, agreeable, and conscientious selves (and the differences were not due to how the questionnaires were translated). So our words may not determine what we think, but they do influence our thinking (Hardin & Banaji, 1993; Özgen, 2004). We use our language in forming categories. In Brazil, the isolated Piraha tribespeople have words for the numbers 1 and 2, but numbers above that are simply “many.” Thus if shown 7 nuts in a row, they find it very difficult to lay out the same number from their own pile (Gordon, 2004).

linguistic determinism Whorf’s hypothesis that language determines the way we think.

|| Perceived distances between cities also grow when two cities are in different countries rather than in the same country (Burris & Branscombe, 2005). ||

“All words are pegs to hang ideas on.” Henry Ward Beecher, Proverbs from Plymouth Pulpit, 1887

FIGURE 32.4 Language and perception Emre Özgen (2004) reports that when people view blocks of equally different colors, they perceive those with different names as more different. Thus the “green” and “blue” in contrast A may appear to differ more than the two similarly different blues in contrast B.


Words also influence our thinking about colors. Whether we live in New Mexico, New South Wales, or New Guinea, we see colors much the same, but we use our native language to classify and remember colors (Davidoff, 2004; Roberson et al., 2004, 2005). If that language is English, you might view three colors and call two of them “yellow” and one of them “blue.” Later you would likely see and recall the yellows as being more similar. But if you were a member of Papua New Guinea’s Berinmo tribe, which has words for two different shades of yellow, you would better recall the distinctions between the two yellows. Perceived differences grow when we assign different names to colors. On the color spectrum, blue blends into green—until we draw a dividing line between the portions we call “blue” and “green.” Although equally different on the color spectrum (FIGURE 32.4), two different “blues” (or two different “greens”) that share the same name are harder to distinguish than two items with the different names “blue” and “green” (Özgen, 2004). Given words’ subtle influence on thinking, we do well to choose our words carefully. Does it make any difference whether I write, “A child learns language as he interacts with his caregivers” or “Children learn language as they interact with their caregivers”? Many studies have found that it does. When hearing the generic he (as in “the artist and his work”) people are more likely to picture a male (Henley, 1989; Ng, 1990). If he and his were truly gender-free, we shouldn’t skip a beat when hearing that “man, like other mammals, nurses his young.” To expand language is to expand the ability to think. In young children, thinking develops hand in hand with language (Gopnik & Meltzoff, 1986). Indeed, it is very difficult to think about or conceptualize certain abstract ideas (commitment, freedom, or rhyming) without language! And what is true for preschoolers is true for everyone: It pays to increase your word power. 
That’s why most textbooks, including this one, introduce new words—to teach new ideas and new ways of thinking. And that is why psychologist Steven Pinker (2007) titled his book on language The Stuff of Thought. Increased word power helps explain what McGill University researcher Wallace Lambert (1992; Lambert et al., 1993) calls the bilingual advantage. Bilingual children, who learn to inhibit one language while using the other, are also better able to inhibit their attention to irrelevant information. If asked to say whether a sentence (“Why is the cat barking so loudly?”) is grammatically correct, they can more efficiently focus on the grammar alone (Bialystok, 2001; Carlson & Meltzoff, 2008). Lambert helped devise a Canadian program that immerses English-speaking children in French. (From 1981 to 2001, the number of non-Quebec Canadian children immersed in French rose from 65,000 to 297,000 [Statistics Canada, 2007].) For most of their first three years in school, the English-speaking children are taught entirely in French, and thereafter gradually shift by the end of their schooling to classes mostly in English. Not surprisingly, the children attain a natural French fluency unrivaled by other methods of language teaching. Moreover, compared with similarly capable children in control groups, they do so without detriment to their English fluency, and with increased aptitude scores, creativity, and appreciation for French-Canadian culture (Genesee & Gándara, 1999; Lazaruk, 2007). Whether we are deaf or hearing, minority or majority, language links us to one another. Language also connects us to the past and the future. “To destroy a people, destroy their language,” observed poet Joy Harjo.

A safe sign We have outfielder William Hoy to thank for baseball sign language. The first deaf player to join the major leagues (1892), he invented hand signals for “Strike!” “Safe!” (shown here) and “Yerr Out!” (Pollard, 1992). Such gestures worked so well that referees in all sports now use invented signs, and fans are fluent in sports sign language. (Photo: Jim Cummins/Getty Images)

Thinking in Images When you are alone, do you talk to yourself? Is “thinking” simply conversing with yourself? Without a doubt, words convey ideas. But aren’t there times when ideas precede words? To turn on the cold water in your bathroom, in which direction do you turn the handle? To answer this question, you probably thought not in words but with nondeclarative (procedural) memory—a mental picture of how you do it. Indeed, we often think in images. Artists think in images. So do composers, poets, mathematicians, athletes, and scientists. Albert Einstein reported that he achieved some of his greatest insights through visual images and later put them into words. Pianist Liu Chi Kung showed the value of thinking in images. One year after placing second in the 1958 Tchaikovsky piano competition, Liu was imprisoned during China’s Cultural Revolution. Soon after his release, after seven years without touching a piano, he was back on tour, the critics judging his musicianship better than ever. How did he continue to develop without practice? “I did practice,” said Liu, “every day. I rehearsed every piece I had ever played, note by note, in my mind” (Garfield, 1986). For someone who has learned a skill, such as ballet dancing, even watching the activity will activate the brain’s internal simulation of it, reported one British research team after collecting fMRIs as people watched videos (Calvo-Merino et al., 2004). So, too, will imagining an activity. FIGURE 32.5 shows an fMRI of a person imagining the experience of pain, activating neural networks that are active during actual pain (Grèzes & Decety, 2001). Small wonder, then, that “mental practice has become a standard part of training” for Olympic athletes (Suinn, 1997). One experiment on mental practice and basketball foul shooting tracked the University of Tennessee women’s team over 35 games (Savoy & Beitel, 1996).
During that time, the team’s free-throw shooting increased from approximately 52 percent in games following standard physical practice to some 65 percent after mental practice. Players had repeatedly imagined making foul shots under various conditions, including being “trash-talked” by their opposition. In a dramatic conclusion, Tennessee won the national championship game in overtime, thanks in part to their foul shooting.

|| Many native English speakers, including most Americans, are monolingual. Most humans are bilingual or multilingual. Does monolingualism limit people’s ability to comprehend the thinking of other cultures? ||

Mental rehearsal can also help you achieve an academic goal, as Shelley Taylor and her UCLA colleagues (1998) demonstrated with two groups of introductory psychology students facing a midterm exam one week later. (Scores of other students formed a control group, not engaging in any mental simulation.) The first group was told to spend five minutes each day visualizing themselves scanning the posted grade list, seeing their A, beaming with joy, and feeling proud. This daily outcome simulation had little effect, adding only 2 points to their exam-score average. Another group spent five minutes each day visualizing themselves effectively studying—reading the chapters, going over notes, eliminating distractions, declining an offer to go out. This daily process simulation paid off—this second group began studying sooner, spent more time at it, and beat the others’ average by 8 points. The point to remember: It’s better to spend your fantasy time planning how to get somewhere than to dwell on the imagined destination. Experiments on thinking without language bring up a recurring principle: Much of our information processing occurs outside of consciousness and beyond language. Inside our ever-active brain, many streams of activity flow in parallel, function automatically, are remembered implicitly, and only occasionally surface as conscious words.

***

What, then, should we say about the relationship between thinking and language? As we have seen, language does influence our thinking. But if thinking did not also affect language, there would never be any new words. And new words and new combinations of old words express new ideas. The basketball term slam dunk was coined after the act itself had become fairly common.
So, let us say that thinking affects our language, which then affects our thought (FIGURE 32.6).


A thoughtful art Playing the piano engages thinking without language. In the absence of a piano, mental practice can sustain one’s skill.

FIGURE 32.5 The power of imagination Imagining a physical activity triggers action in the same brain areas that are triggered when actually performing that activity. These fMRIs show a person imagining the experience of pain, which activates some of the same areas in the brain as the actual experience of pain.

MODULE 32 Language and Thought


FIGURE 32.6 The interplay of thought and language The traffic runs both ways between thinking and language. Thinking affects our language, which affects our thought.

Psychological research on thinking and language mirrors the mixed views of our species held by those in fields such as literature and religion. The human mind is simultaneously capable of striking intellectual failures and of striking intellectual power. Misjudgments are common and can have disastrous consequences. So we do well to appreciate our capacity for error. Yet our efficient heuristics—our snap judgment strategies—often serve us well. Moreover, our ingenuity at problem solving and our extraordinary power of language mark humankind as almost “infinite in faculties.”

䉴|| Animal Thinking and Language

32-6 What do we know about animal thinking? Do other animals share our capacity for language?

If in our use of language we humans are, as the psalmist long ago rhapsodized, “little lower than God,” where do other animals fit in the scheme of things? Are they “little lower than human”? Let’s see what the research on animal thinking and language can tell us.

What Do Animals Think?

Animals are smarter than we often realize. A baboon knows everyone’s voices within its 80-member troop (Jolly, 2007). Sheep can recognize and remember individual faces (Morell, 2008). A marmoset can learn from and imitate others. Great apes and even monkeys can form concepts. When monkeys learn to classify cats and dogs, certain frontal lobe neurons in their brains fire in response to new “catlike” images, others to new “doglike” images (Freedman et al., 2001). Even pigeons—mere birdbrains—can sort objects (pictures of cars, cats, chairs, flowers). Shown a picture of a never-before-seen chair, pigeons will reliably peck a key that represents the category “chairs” (Wasserman, 1995).

We also are not the only creatures to display insight, as psychologist Wolfgang Köhler (1925) demonstrated in an experiment with Sultan, a chimpanzee. Köhler placed a piece of fruit and a long stick well beyond Sultan’s reach, and a short stick inside his cage. Spying the short stick, Sultan grabbed it and tried to reach the fruit. After several unsuccessful attempts, Sultan dropped the stick and seemed to survey the situation. Then suddenly, as if thinking “Aha!” he jumped up, seized the short stick again, and used it to pull in the longer stick—which he then used to reach the fruit. This evidence of animal cognition, said Köhler, showed that there is more to learning than conditioning. What is more, apes will even exhibit foresight, by storing a tool that they can use to retrieve food the next day (Mulcahy & Call, 2006).

Chimpanzees, like humans, are shaped by reinforcement when they solve problems. Forest-dwelling chimpanzees have become natural tool users (Boesch-Achermann & Boesch, 1993). They break off a reed or a stick, strip the twigs and leaves, carry it to a termite mound, fish for termites by twisting it just so, and then carefully remove it without scraping off many termites.
They even select different tools for different purposes—a heavy stick to puncture holes, a light, flexible stick for fishing (Sanz et al., 2004). One anthropologist, trying to mimic the chimpanzee’s deft termite fishing, failed miserably.

Some animals also display a surprising numerical ability. Over two decades, Kyoto University researcher Tetsuro Matsuzawa (2007) has studied chimpanzees’ ability to remember and relate numbers. In one experiment, a chimpanzee named Ai taps, in ascending order, numbers randomly displayed on a computer screen (FIGURE 32.7). If four or five of the numbers between 1 and 9 are flashed for no more than a second, and then replaced by white boxes, she does what a human cannot. Remembering the flashed numbers, she again taps the boxes in numerical order.

Until his death in 2007, the grey parrot Alex also displayed a jaw-dropping numerical ability (Pepperberg, 2006). He not only could name and categorize objects, he displayed a comprehension of numbers up to 6. Thus, he could speak the number of objects, add two small clusters of objects and announce the sum, and indicate which of two numbers was greater. And he could answer when shown various groups of objects and asked, for example, “What color four?” (meaning “What’s the color of the objects of which there are four?”).

Researchers have found at least 39 local customs related to chimpanzee tool use, grooming, and courtship (Whiten & Boesch, 2001). One group may slurp ants directly from the stick, while another group plucks them off individually. One group may break nuts with a stone hammer, another with a wooden hammer. Or picture this actual laboratory experiment: Chimpanzee B observes Chimpanzee A as it obtains food, either by sliding or lifting a door. Then B follows the same lifting or sliding procedure. So does Chimpanzee C after observing B, and so forth. Chimp see, chimp do, unto the sixth generation (Bonnie et al., 2007; Horner et al., 2006). To learn such customs, it helps to be a primate with a relatively large cortex (Whiten & van Schaik, 2007).
But the chimpanzee group differences, along with differing dialects and hunting styles, seem not to be genetic. Rather, they are the chimpanzee equivalent of cultural diversity. Like humans, chimpanzees invent behaviors and transmit cultural patterns to their peers and offspring (FIGURE 32.8a). So do orangutans and capuchin monkeys (Dindo et al., 2008; van Schaik et al., 2003). And so do some Australian dolphins (FIGURE 32.8b), which have learned to break off sponges and wear them on their snouts while probing the sea floor for fish (Krützen et al., 2005). Thus animals, and chimpanzees in particular, display remarkable talents. They form concepts, display insight, fashion tools, exhibit numerical abilities, and transmit local

FIGURE 32.7 Chimpanzee bests humans It is adaptive for chimpanzees to be able to monitor lots of information in their natural environment. This might explain how chimpanzee Ai can remember and tap numbers in ascending order, even after they are covered by white boxes.


FIGURE 32.8 Cultural transmission (a) On the western bank of one Ivory Coast river, a youngster watches as its mother uses a stone hammer to open a nut. On the river’s other side, a few miles away, chimpanzees do not follow this custom. (b) This bottlenose dolphin in Shark Bay, Western Australia, is a member of a small group that uses marine sponges as protective gear when probing the sea floor for fish.

cultural behaviors. Chimpanzees and two species of monkeys can even read your intent. They show more interest in a food container that you have intentionally grasped than in one you have flopped your hand onto, as if by accident (Wood et al., 2007). Great apes, dolphins, and elephants have also demonstrated self-awareness (by recognizing themselves in a mirror). And as social creatures, chimpanzees have shown altruism, cooperation, and group aggression. But do they, like humans, exhibit language?

Do Animals Exhibit Language?

Comprehending canine Rico, a border collie with a 200-word vocabulary, can infer that an unfamiliar sound refers to a novel object.


Without doubt, animals communicate. Vervet monkeys have different alarm cries for different predators: a barking call for a leopard, a cough for an eagle, and a chuttering for a snake. Hearing the leopard alarm, other vervets climb the nearest tree. Hearing the eagle alarm, they rush into the bushes. Hearing the snake chutter, they stand up and scan the ground (Byrne, 1991). Whales also communicate, with clicks and wails. Honeybees do a dance that informs other bees of the direction and distance of the food source. And what shall we say of dogs’ ability to understand us? Border collie Rico knows and can fetch 200 items by name. Moreover, reports a team of psychologists at Leipzig’s Max Planck Institute, if he is asked to retrieve a novel toy with a name he has never heard, Rico will pick out the novel item from among a group of familiar items (Kaminski et al., 2004). Hearing that novel word for the second time four weeks later, he as often as not retrieves the object. Such feats show animals’ comprehension and communication. But is this language?

The Case of the Apes

The greatest challenge to our claim to be the only language-using species has come from one of our closest genetic relatives, the chimpanzees. Psychologists Allen Gardner and Beatrix Gardner (1969) aroused enormous scientific and public interest when they taught sign language to the chimpanzee Washoe (c. 1965–2007). After four years, Washoe could use 132 signs; by age 32, 181 signs (Sanz et al., 1998). One New York Times reporter, having learned sign language from his deaf parents, visited Washoe and exclaimed, “Suddenly I realized I was conversing with a member of another species in my native tongue.”

“He says he wants a lawyer.”


CLOSE-UP

Talking Hands

Chimpanzees’ use of sign language builds upon their natural gestured words (such as a hand extended for “I want some”). Human language appears to have evolved from such gestured communications (Corballis, 2002, 2003; Pollick & de Waal, 2007). So, it’s no wonder we talk and think with our hands:

Gestures (pointing at a cup) pave the way for children’s language (saying cup, while simultaneously pointing) (Iverson & Goldin-Meadow, 2005).

Signed language readily develops among Deaf people.

People gesture even when talking on the phone.

Congenitally blind people, like sighted people, gesture (Iverson & Goldin-Meadow, 1998). (And they do so even when they believe their listener is also blind.)

Prohibiting gestures disrupts speech with spatial content, as when people try to describe an apartment’s layout.

Gesturing lightens a speaker’s “cognitive load” (Goldin-Meadow, 2006). People told not to gesture put more effort into communicating with words alone, and are less able to remember recently learned words or numbers.

Gestured communication For hearing people, today’s gestures may be less central to communication than for those who first used hand signals. Yet gestures remain naturally associated with spontaneous speech, especially speech that has spatial content.

|| Gesture researcher Robert Krauss (1998) recalls his grandfather telling of two men walking on a bitter winter day. One chattered away while the second nodded, saying nothing. “Schmuel, why aren’t you saying anything?” the first friend finally wondered. “Because,” replied Schmuel, “I forgot my gloves.” ||

|| Seeing a doll floating in her water, Washoe signed, “Baby in my drink.” ||

Further evidence of gestured “ape language” surfaced during the 1970s (see Close-Up: Talking Hands). Usually apes sign just single words such as “that” or “gimme” (Bowman, 2003). But sometimes they string signs together to form intelligible sentences. Washoe signed, “You me go out, please.” Apes even appear to combine words creatively. Washoe designated a swan as a “water bird.” Koko, a gorilla trained by Francine Patterson (1978), reportedly described a long-nosed Pinocchio doll as an “elephant baby.” Lana, a “talking” chimpanzee that punched a crude computer keyboard that translated her entries into English, wanted her trainer’s orange. She had no word for orange, but she did know her colors and the word for apple, so she improvised: “?Tim give apple which-is orange” (Rumbaugh, 1977).

Granted, these vocabularies and sentences are simple, rather like those of a 2-year-old child (and nothing like your own 60,000 or so words, which you fluidly combine to create a limitless variety of sentences). Yet, as reports of ape language accumulated, it seemed that they might indeed be “little lower than human.” Then, in the late 1970s, fascination with “talking apes” turned toward cynicism: Were the chimps language champs or were the researchers chumps? The ape language researchers were making monkeys of themselves, said the skeptics. Consider:


䉴 Unlike speaking or signing children, who effortlessly soak up dozens of new words a week, apes gain their limited vocabularies only with great difficulty (Wynne, 2004, 2008). Saying that apes can learn language because they can sign words is like saying humans can fly because they can jump.

䉴 Chimpanzees can make signs or push buttons in sequence to get a reward, but pigeons, too, can peck a sequence of keys to get grain (Straub et al., 1979). After training a chimpanzee he named Nim Chimpsky, Herbert Terrace (1979) concluded that much of apes’ signing is nothing more than aping their trainers’ signs and learning that certain arm movements produce rewards.

䉴 Presented with ambiguous information, people, thanks to their perceptual set, tend to see what they want or expect to see. Interpreting chimpanzee signs as language may be little more than the trainers’ wishful thinking, claimed Terrace. (When Washoe signed water bird, she perhaps was separately naming water and bird.)

䉴 “Give orange me give eat orange me eat orange . . .” is a far cry from the exquisite syntax of a 3-year-old (Anderson, 2004; Pinker, 1995). To the child, “you tickle” and “tickle you” communicate different ideas. A chimpanzee, lacking human syntax, might sign the phrases interchangeably.

|| “Although humans make sounds with their mouths and occasionally look at each other, there is no solid evidence that they actually communicate with each other.” ||

|| For chimpanzees as for humans, early life is a critical time for learning language. ||

In science as in politics, controversy can stimulate progress. Further evidence confirms chimpanzees’ abilities to think and communicate. One surprising finding was of Washoe’s training her adopted son in the signs she had learned. After her second infant died, Washoe became withdrawn when told, “Baby dead, baby gone, baby finished.” Two weeks later, caretaker-researcher Roger Fouts (1992, 1997) signed better news: “I have baby for you.” Washoe reacted with instant excitement, hair on end, swaggering and panting while signing over and again, “Baby, my baby.” It took several hours for Washoe and the foster infant, Loulis, to warm to each other, whereupon she broke the ice by signing, “Come baby” and cuddling Loulis. In the months that followed, Loulis picked up 68 signs simply by observing Washoe and three other language-trained chimpanzees. They now sign spontaneously, asking one another to chase, tickle, hug, come, or groom. People who sign are in near-perfect agreement about what the chimpanzees are saying, 90 percent of which pertains to social interaction, reassurance, or play (Fouts & Bodamer, 1987). The chimpanzees are even modestly bilingual; they can translate spoken English words into signs (Shaw, 1989–1990). Even more stunning was the report by Sue Savage-Rumbaugh and her colleagues (1993) of pygmy chimpanzees learning to comprehend syntax in English spoken to them. Kanzi, a pygmy chimpanzee with the seeming grammatical abilities of a human 2-year-old, happened onto language while observing his adoptive mother during language training. Kanzi has behaved intelligently whether asked, “Can you show me the light?” or “Can you bring me the [flash]light?” or “Can you turn the light on?” Kanzi also knows many spoken words, such as snake, bite, and dog. Given stuffed animals and asked—for the first time—to “make the dog bite the snake,” he put the snake to the dog’s mouth.

But is this language? Chimpanzees’ ability to express themselves in American Sign Language (ASL) raises questions about the very nature of language. Here, the trainer is asking, “What is this?” The sign in response is “Baby.” Does the response constitute language?

“[Our] view that [we are] unique from all other forms of animal life is being jarred to the core.” Duane Rumbaugh and Sue Savage-Rumbaugh (1978)


“Chimps do not develop language. But that is no shame on them; humans would surely do no better if trained to hoot and shriek like chimps, to perform the waggle dance of the bee, or any of the other wonderful feats in nature’s talent show.” Steven Pinker (1995)


Without early exposure to speech or word symbols, adults will not gain language competence (Rumbaugh & Savage-Rumbaugh, 1994). The provocative claims that “apes share our capacity for language” and the skeptical counterclaims that “apes no use language” (as Washoe might have put it) have moved psychologists toward a greater appreciation of apes’ remarkable abilities and of our own (Friend, 2004; Rumbaugh & Washburn, 2003). Most now agree that humans alone possess language, if by the term we mean verbal or signed expression of complex grammar. If we mean, more simply, an ability to communicate through a meaningful sequence of symbols, then apes are indeed capable of language. Believing that animals could not think, Descartes and other philosophers argued that they were living robots without any moral rights. Animals, it has been said at one time or another, cannot plan, conceptualize, count, use tools, show compassion, or use language (Thorpe, 1974). Today, we know better. Animal researchers have shown us that primates exhibit insight, show family loyalty, communicate with one another, display altruism, transmit cultural patterns across generations, and comprehend the syntax of human speech. Accepting and working out the moral implications of all this is an unfinished task for our own thinking species.


Review Language and Thought

32-1 What are the structural components of a language?

Phonemes are a language’s basic units of sound. Morphemes are the elementary units of meaning. Grammar—the system of rules that enables us to communicate—includes semantics (rules for deriving meaning) and syntax (rules for ordering words into sentences).

32-2 What are the milestones in language development?

The timing varies from one child to another, but all children follow the same sequence. At about 4 months of age, infants babble, making sounds found in languages from all over the world. By about 10 months, their babbling contains only the sounds found in their household language. Around 12 months of age, children begin to speak in single words. This one-word stage evolves into two-word (telegraphic) utterances before their second birthday, after which they begin speaking in full sentences.

32-3 How do we learn language?

Behaviorist B. F. Skinner proposed that we learn language by the familiar principles of association (of sights of things with sounds of words), imitation (of words and syntax modeled by others), and reinforcement (with smiles and hugs after saying something right). Linguist Noam Chomsky argues that we are born with a language acquisition device that biologically prepares us to learn language and that equips us with a universal grammar, which we use to learn a specific language. Cognitive researchers believe childhood is a critical period for learning spoken and signed language.

32-4 What brain areas are involved in language processing?

When you read aloud, your brain’s visual cortex registers words as visual stimuli, the angular gyrus transforms those visual representations into auditory codes, and Wernicke’s area interprets those codes and sends the message to Broca’s area, which controls the motor cortex as it creates the pronounced words. But we now know that language results from the integration of many specific neural networks performing specialized subtasks in many parts of the brain.

32-5 What is the relationship between language and thinking?

Although Whorf’s linguistic determinism hypothesis suggested that language determines thought, it is more accurate to say that language influences thought. Different languages embody different ways of thinking, and immersion in bilingual education can enhance thinking. We often think in images when we use procedural memory—our unconscious memory system for motor and cognitive skills and classically and operantly conditioned associations. Thinking in images can increase our skills when we mentally practice upcoming events.

32-6 What do we know about animal thinking? Do other animals share our capacity for language?

Both humans and the great apes form concepts, display insight, use and create tools, exhibit numerical abilities, and transmit cultural innovations. A number of chimpanzees have learned to communicate with humans by signing or by pushing buttons wired to a computer, have developed vocabularies of nearly 200 words, have communicated by stringing these words together, and have taught their skills to younger animals. Only humans can master the verbal or signed expression of complex rules of syntax. Nevertheless, primates and other animals demonstrate impressive abilities to think and communicate.

Terms and Concepts to Remember

language, p. 384
phoneme, p. 385
morpheme, p. 385
grammar, p. 385
semantics, p. 385
syntax, p. 385
babbling stage, p. 387
one-word stage, p. 387
two-word stage, p. 387
telegraphic speech, p. 387
aphasia, p. 391
Broca’s area, p. 391
Wernicke’s area, p. 391
linguistic determinism, p. 393

Test Yourself

1. If children are not yet speaking, is there any reason to think they would benefit from parents and other caregivers reading to them?

2. To say that “words are the mother of ideas” assumes the truth of what concept?

3. If your dog barks at a stranger at the front door, does this qualify as language? What if the dog yips in a telltale way to let you know she needs to go out? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself

1. There has been controversy at some universities about allowing fluency in sign language to fulfill a second-language requirement for an undergraduate degree. What is your opinion?

2. Do you use certain words or gestures that only your family or closest circle of friends understand? Can you envision using these words or gestures to construct a language, as the Nicaraguan children did in building their version of sign?

3. Can you think of a time when you felt an animal was communicating with you? How might you put such intuition to a test?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Intelligence and Creativity
Emotional Intelligence
Is Intelligence Neurologically Measurable?

|| New York Times interviewer Deborah Solomon, 2004: “What is your IQ?” Physicist Stephen Hawking: “I have no idea. People who boast about their IQ are losers.” ||

module 33 Introduction to Intelligence

School boards, courts, and scientists debate the use and fairness of tests that attempt to assess people’s mental abilities and assign them a score. Is intelligence testing a constructive way to guide people toward suitable opportunities? Or is it a potent, discriminatory weapon camouflaged as science? First, some basic questions:

䉴 What is intelligence?
䉴 How can we best assess it?
䉴 To what extent does it result from heredity rather than environment?
䉴 What do test score differences among individuals and groups really mean? Should we use such differences to rank people, to admit them to colleges or universities, to hire them?

Psychologists debate: Should we consider intelligence as one aptitude or many? As linked to cognitive speed? As neurologically measurable? Yet, intelligence experts do agree on this: Although people have differing abilities, intelligence is a concept and not a “thing.” When we refer to someone’s “IQ” (short for intelligence quotient) as if it were a fixed and objectively real trait like height, we commit a reasoning error called reification—viewing an abstract, immaterial concept as if it were a concrete thing. To reify is to invent a concept, give it a name, and then convince ourselves that such a thing objectively exists in the world. When someone says, “She has an IQ of 120,” they are reifying IQ; they are imagining IQ to be a thing this person has, rather than a score she once obtained on a particular intelligence test. Better to say, “She scored 120 on the intelligence test.” Intelligence is a socially constructed concept: Cultures deem “intelligent” whatever attributes enable success in those cultures (Sternberg & Kaufman, 1998). In the Amazon rain forest, intelligence may be understanding the medicinal qualities of local plants; in an Ontario high school, it may be superior performance on cognitive tasks. In each context, intelligence is the ability to learn from experience, solve problems, and use knowledge to adapt to new situations. In research studies, intelligence is what intelligence tests measure. Historically, as we will see, that has been the sort of problem solving displayed as “school smarts.”

Hands-on healing The socially constructed concept of intelligence varies from culture to culture. This folk healer in Peru displays his intelligence in his knowledge about his medicinal plants and understanding of the needs of the people he is helping.


intelligence test a method for assessing an individual’s mental aptitudes and comparing them with those of others, using numerical scores.

intelligence mental quality consisting of the ability to learn from experience, solve problems, and use knowledge to adapt to new situations.


䉴|| Is Intelligence One General Ability or Several Specific Abilities?

33-1 What arguments support intelligence as one general mental ability, and what arguments support the idea of multiple distinct abilities? You probably know some people with talents in science, others who excel at the humanities, and still others gifted in athletics, art, music, or dance. You may also know a talented artist who is dumbfounded by the simplest mathematical problems, or a brilliant math student with little aptitude for literary discussion. Are all of these people intelligent? Could you rate their intelligence on a single scale? Or would you need several different scales? Charles Spearman (1863–1945) believed we have one general intelligence (often shortened to g). He granted that people often have special abilities that stand out. Spearman had helped develop factor analysis, a statistical procedure that identifies clusters of related items. He had noted that those who score high in one area, such as verbal intelligence, typically score higher than average in other areas, such as spatial or reasoning ability. Spearman believed a common skill set, the g factor, underlies all of our intelligent behavior, from navigating the sea to excelling in school. This idea of a general mental capacity expressed by a single intelligence score was controversial in Spearman’s day, and it remains so in our own. One of Spearman’s early opponents was L. L. Thurstone (1887–1955). Thurstone gave 56 different tests to people and mathematically identified seven clusters of primary mental abilities (word fluency, verbal comprehension, spatial ability, perceptual speed, numerical ability, inductive reasoning, and memory). Thurstone did not rank people on a single scale of general aptitude. But when other investigators studied the profiles of the people Thurstone had tested, they detected a persistent tendency: Those who excelled in one of the seven clusters generally scored well on the others. So, the investigators concluded, there was still some evidence of a g factor. 
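Spearman’s key observation, that people who score high on one mental test tend to score above average on others, can be illustrated with a small simulation. The sketch below is a hypothetical illustration (the variable names, weights, and sample size are invented for this example, not drawn from any real test data): each simulated person gets one latent general ability plus test-specific noise, and the resulting scores on all three tests correlate positively.

```python
import numpy as np

rng = np.random.default_rng(42)
n_people = 1000

# Each simulated person has one latent "general ability" (g) plus
# independent, test-specific variation on each of three tests.
g = rng.normal(0, 1, n_people)
verbal = 0.7 * g + 0.7 * rng.normal(0, 1, n_people)
spatial = 0.7 * g + 0.7 * rng.normal(0, 1, n_people)
numeric = 0.7 * g + 0.7 * rng.normal(0, 1, n_people)

# Correlate the three test scores with one another
# (columns are variables, rows are people).
scores = np.column_stack([verbal, spatial, numeric])
r = np.corrcoef(scores, rowvar=False)

# Every off-diagonal correlation is positive: people who score
# high on one test tend to score above average on the others.
print(np.round(r, 2))
```

This positive clustering is the pattern that factor analysis summarizes as a single dominant factor, which Spearman interpreted as g; with these invented weights the pairwise correlations come out at roughly .5.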
We might, then, liken mental abilities to physical abilities. Athleticism is not one thing but many. The ability to run fast is distinct from the strength needed for power lifting, which is distinct from the eye-hand coordination required to throw a ball on target. A champion weightlifter rarely has the potential to be a skilled ice skater. Yet there remains some tendency for good things to come packaged together—for running speed and throwing accuracy to correlate, thanks to general athletic ability. So, too, with intelligence. Several distinct abilities tend to cluster together and to correlate enough to define a small general intelligence factor. Satoshi Kanazawa (2004) argues that general intelligence evolved as a form of intelligence that helps people solve novel problems—how to stop a fire from spreading, how to find food during a drought, how to reunite with one’s band on the other side of a flooded river. More common problems—such as how to mate or how to read a stranger’s face or how to find your way back to camp—require a different sort of intelligence. Kanazawa asserts that general intelligence scores do correlate with the ability to solve various novel problems (like those found in academic and many vocational situations) but do not much correlate with individuals’ skills in evolutionarily familiar situations—such as marrying and parenting, forming close friendships, displaying social competence, and navigating without maps.

Theories of Multiple Intelligences

33-2 How do Gardner’s and Sternberg’s theories of multiple intelligences differ?

Since the mid-1980s some psychologists have sought to extend the definition of intelligence beyond Spearman’s and Thurstone’s academic smarts. They acknowledge that people who score well on one sort of cognitive test have some tendency to score well on another. But maybe this occurs not because they express an underlying general intelligence but rather because, over time, different abilities interact and feed one another, rather as a speedy runner’s throwing ability improves after being engaged in sports that develop both running and throwing abilities (van der Maas et al., 2006).

“g is one of the most reliable and valid measures in the behavioral domain . . . and it predicts important social outcomes such as educational and occupational levels far better than any other trait.” Behavior geneticist Robert Plomin (1999)

general intelligence (g) a general intelligence factor that, according to Spearman and others, underlies specific mental abilities and is therefore measured by every task on an intelligence test.

factor analysis a statistical procedure that identifies clusters of related items (called factors) on a test; used to identify different dimensions of performance that underlie a person’s total score.

savant syndrome a condition in which a person otherwise limited in mental ability has an exceptional specific skill, such as in computation or drawing.

Gardner’s Eight Intelligences

Howard Gardner (1983, 2006) views intelligence as multiple abilities that come in packages. Gardner finds evidence for this view in studies of people with diminished or exceptional abilities. Brain damage, for example, may destroy one ability but leave others intact. And consider people with savant syndrome, who often score low on intelligence tests but have an island of brilliance (Treffert & Wallace, 2002). Some have virtually no language ability, yet are able to compute numbers as quickly and accurately as an electronic calculator, or identify almost instantly the day of the week that corresponds to any given date in history, or render incredible works of art or musical performances (Miller, 1999). About 4 in 5 people with savant syndrome are males, and many also have autism, a developmental disorder.

Memory whiz Kim Peek, a savant who does not have autism, was the inspiration for the movie Rain Man. In 8 to 10 seconds, he can read and remember a page, and he has learned 9000 books, including Shakespeare and the Bible, by heart. He learns maps from the front of phone books, and he can provide MapQuest-like travel directions within any major U.S. city. Yet he cannot button his clothes. And he has little capacity for abstract concepts. Asked by his father at a restaurant to “lower your voice,” he slid lower in his chair to lower his voice box. Asked for Lincoln’s Gettysburg Address, he responded, “227 North West Front Street. But he only stayed there one night—he gave the speech the next day” (Treffert & Christensen, 2005).

Using such evidence, Gardner argues that we do not have an intelligence, but rather multiple intelligences. He identifies a total of eight (TABLE 33.1), including the verbal and mathematical aptitudes assessed by standard tests.
Thus, the computer programmer, the poet, the street-smart adolescent who becomes a crafty executive, and the basketball team’s point guard exhibit different kinds of intelligence (Gardner, 1998). He notes, “If a person is strong (or weak) in telling stories, solving mathematical proofs, navigating around unfamiliar terrain, learning an unfamiliar song, mastering a new game that entails dexterity, understanding others, or understanding himself, one simply does not know whether comparable strengths (or weaknesses) will be found in other areas.”

© The Stephen Wiltshire Gallery

Islands of genius: Savant syndrome After a 30-minute helicopter ride and a visit to the top of a skyscraper, British savant artist Stephen Wiltshire began seven days of drawing that reproduced the Tokyo skyline.

Introduction to Intelligence MODULE 33

407

TABLE 33.1 Gardner’s Eight Intelligences (aptitude: exemplar)

1. Linguistic: T. S. Eliot, poet
2. Logical-mathematical: Albert Einstein, scientist
3. Musical: Igor Stravinsky, composer
4. Spatial: Pablo Picasso, artist
5. Bodily-kinesthetic: Martha Graham, dancer
6. Intrapersonal (self): Sigmund Freud, psychiatrist
7. Interpersonal (other people): Mahatma Gandhi, leader
8. Naturalist: Charles Darwin, naturalist

|| Gardner (1998) has also speculated about a ninth possible intelligence—existential intelligence—the ability “to ponder large questions about life, death, existence.” ||

A general intelligence score is therefore like the overall rating of a city—which tells you something but doesn’t give you much specific information about its schools, streets, or nightlife. Wouldn’t it be wonderful if the world were so just, responds intelligence researcher Sandra Scarr (1989). Wouldn’t it be nice if being weak in one area would be compensated by genius in some other area? Alas, the world is not just. General intelligence scores predict performance on various complex tasks, in various jobs, and in varied countries; g matters (Bertua et al., 2005; Gottfredson, 2002a,b, 2003a,b; Rindermann, 2007). In two digests of more than 100 data sets, academic intelligence scores that predicted graduate school success also predicted later job success (Kuncel et al., 2004; Strenze, 2007; see also FIGURE 33.1).

FIGURE 33.1 Smart and rich? Jay Zagorsky (2007) tracked 7403 participants in the U.S. National Longitudinal Survey of Youth across 25 years. As shown in this scatterplot, their intelligence scores correlated +.30, a moderate positive correlation, with their later income. (Axes: intelligence score, 70 to 130; income, $30,000 to $230,000.)

Even so, “success” is not a one-ingredient recipe. High intelligence may help you get into a profession (via the schools and training programs that take you there), but it won’t make you successful once there. The recipe for success combines talent with grit: Those who become highly successful are also conscientious, well-connected, and doggedly energetic. Anders Ericsson (2002, 2007; Ericsson et al., 2007) reports a 10-year rule: A common ingredient of expert performance in chess, dancing, sports, computer programming, music, and medicine is “about 10 years of intense, daily practice.”

Spatial intelligence genius In 1998, World Checkers Champion Ron “Suki” King of Barbados set a new record by simultaneously playing 385 players in 3 hours and 44 minutes. Thus, while his opponents often had hours to plot their game moves, King could devote only about 35 seconds to each game. Yet he still managed to win all 385 games! (Courtesy of Cameras on Wheels)


Sternberg’s Three Intelligences

Robert Sternberg (1985, 1999, 2003) agrees that there is more to success than traditional intelligence. And he agrees with Gardner’s idea of multiple intelligences. But he proposes a triarchic theory of three, not eight, intelligences:

• Analytical (academic problem-solving) intelligence is assessed by intelligence tests, which present well-defined problems having a single right answer. Such tests predict school grades reasonably well and vocational success more modestly.

• Creative intelligence is demonstrated in reacting adaptively to novel situations and generating novel ideas.

• Practical intelligence is required for everyday tasks, which may be ill-defined, with multiple solutions. Managerial success, for example, depends less on academic problem-solving skills than on a shrewd ability to manage oneself, one’s tasks, and other people. Sternberg and Richard Wagner’s (1993, 1995) test of practical managerial intelligence measures skill at writing effective memos, motivating people, delegating tasks and responsibilities, reading people, and promoting one’s own career. Business executives who score relatively high on this test tend to earn high salaries and receive high performance ratings.

Street smarts This child selling candy on the streets of Manaus, Brazil, is developing practical intelligence at a very young age.

David R. Frazier Photolibrary, Inc./Alamy

“You have to be careful, if you’re good at something, to make sure you don’t think you’re good at other things that you aren’t necessarily so good at. . . . Because I’ve been very successful at [software development] people come in and expect that I have wisdom about topics that I don’t.” Bill Gates (1998)

With support from the U.S. College Board (which administers the widely used SAT Reasoning Test to U.S. college and university applicants), Sternberg (2006, 2007) and a team of collaborators have developed new measures of creativity (such as thinking up a caption for an untitled cartoon) and practical thinking (such as figuring out how to move a large bed up a winding staircase). Their initial data indicate that these more comprehensive assessments improve prediction of American students’ first-year college grades, and they do so with reduced ethnic-group differences.

Although Sternberg and Gardner differ on specific points, they agree that multiple abilities can contribute to life success. (Neither candidate in the 2000 U.S. presidential election had scored exceptionally high on college entrance aptitude tests, Sternberg [2000] noted, yet both became influential.) The two theorists also agree that the differing varieties of giftedness add spice to life and challenges for education. Under their influence, many teachers have been trained to appreciate the varieties of ability and to apply multiple intelligence theory in their classrooms.

However we define intelligence (TABLE 33.2), one thing is clear: There’s more to creativity than intelligence test scores.


TABLE 33.2 Comparing Theories of Intelligence

Spearman’s general intelligence (g)
Summary: A basic intelligence predicts our abilities in varied academic areas.
Strengths: Different abilities, such as verbal and spatial, do have some tendency to correlate.
Other considerations: Human abilities are too diverse to be encapsulated by a single general intelligence factor.

Thurstone’s primary mental abilities
Summary: Our intelligence may be broken down into seven factors: word fluency, verbal comprehension, spatial ability, perceptual speed, numerical ability, inductive reasoning, and memory.
Strengths: A single g score is not as informative as scores for seven primary mental abilities.
Other considerations: Even Thurstone’s seven mental abilities show a tendency to cluster, suggesting an underlying g factor.

Gardner’s multiple intelligences
Summary: Our abilities are best classified into eight independent intelligences, which include a broad range of skills beyond traditional school smarts.
Strengths: Intelligence is more than just verbal and mathematical skills. Other abilities are equally important to our human adaptability.
Other considerations: Should all of our abilities be considered intelligences? Shouldn’t some be called less vital talents?

Sternberg’s triarchic theory
Summary: Our intelligence is best classified into three areas that predict real-world success: analytical, creative, and practical.
Strengths: These three facets can be reliably measured.
Other considerations: (1) These three facets may be less independent than Sternberg thought and may actually share an underlying g factor. (2) Additional testing is needed to determine whether these facets can reliably predict success.

Intelligence and Creativity

33-3 What is creativity, and what fosters it?

Pierre de Fermat, a seventeenth-century mischievous genius, challenged mathematicians of his day to match his solutions to various number theory problems. His most famous challenge—Fermat’s last theorem—baffled the greatest mathematical minds, even after a $2 million prize (in today’s dollars) was offered in 1908 to whoever first created a proof.

Princeton mathematician Andrew Wiles had pondered the problem for more than 30 years and had come to the brink of a solution. Then, one morning, out of the blue, the final “incredible revelation” struck him. “It was so indescribably beautiful; it was so simple and so elegant. I couldn’t understand how I’d missed it and I just stared at it in disbelief for 20 minutes. Then during the day I walked around the department, and I’d keep coming back to my desk looking to see if it was still there. It was still there. I couldn’t contain myself, I was so excited. It was the most important moment of my working life” (Singh, 1997, p. 25).

Wiles’ incredible moment illustrates creativity—the ability to produce ideas that are both novel and valuable. Studies suggest that a certain level of aptitude—a score of about 120 on a standard intelligence test—is necessary but not sufficient for creativity. Exceptionally creative architects, mathematicians, scientists, and engineers usually score no higher on intelligence tests than do their less creative peers (MacKinnon & Hall, 1972; Simonton, 2000). So, clearly there is more to creativity than what intelligence tests reveal. Indeed, the two kinds of thinking engage different brain areas. Intelligence tests, which demand a single correct answer, require convergent thinking. Creativity tests (How many uses can you think of for a brick?) require divergent thinking. Injury to the left parietal lobe damages the convergent thinking required for intelligence-test performance and for school success.
Injury to certain areas of the frontal lobes can leave reading, writing, and arithmetic skills intact but destroy imagination (Kolb & Whishaw, 2006).

creativity the ability to produce novel and valuable ideas.


Sternberg and his colleagues have identified five components of creativity (Sternberg, 1988, 2003; Sternberg & Lubart, 1991, 1992):

1. Expertise, a well-developed base of knowledge, furnishes the ideas, images, and phrases we use as mental building blocks. “Chance favors only the prepared mind,” observed Louis Pasteur. The more blocks we have, the more chances we have to combine them in novel ways. Wiles’ well-developed base of knowledge put the needed theorems and methods at his disposal.

2. Imaginative thinking skills provide the ability to see things in novel ways, to recognize patterns, and to make connections. Having mastered a problem’s basic elements, we redefine or explore it in a new way. Copernicus first developed expertise regarding the solar system and its planets, and then creatively defined the system as revolving around the Sun, not the Earth. Wiles’ imaginative solution combined two partial solutions.

3. A venturesome personality seeks new experiences, tolerates ambiguity and risk, and perseveres in overcoming obstacles. Inventor Thomas Edison tried countless substances before finding the right one for his lightbulb filament. Wiles said he labored in near-isolation from the mathematics community partly to stay focused and avoid distraction. Venturing encounters with different cultures also fosters creativity (Leung et al., 2008).

4. Intrinsic motivation is being driven more by interest, satisfaction, and challenge than by external pressures (Amabile & Hennessey, 1992). Creative people focus less on extrinsic motivators—meeting deadlines, impressing people, or making money—than on the pleasure and stimulation of the work itself. Asked how he solved such difficult scientific problems, Isaac Newton reportedly answered, “By thinking about them all the time.” Wiles concurred: “I was so obsessed by this problem that for eight years I was thinking about it all the time—when I woke up in the morning to when I went to sleep at night” (Singh & Riber, 1997).

5. A creative environment sparks, supports, and refines creative ideas. After studying the careers of 2026 prominent scientists and inventors, Dean Keith Simonton (1992) noted that the most eminent among them were mentored, challenged, and supported by their relationships with colleagues. Many have the emotional intelligence needed to network effectively with peers. Even Wiles stood on the shoulders of others and wrestled his problem with the collaboration of a former student. Creativity-fostering environments often support contemplation. After Jonas Salk solved a problem that led to the polio vaccine while in a monastery, he designed the Salk Institute to provide contemplative spaces where scientists could work without interruption (Sternberg, 2006).

“If you would allow me any talent, it’s simply this: I can, for whatever reason, reach down into my own brain, feel around in all the mush, find and extract something from my persona, and then graft it onto an idea.” Cartoonist Gary Larson, The Complete Far Side, 2003

Imaginative thinking Cartoonists often display creativity as they see things in new ways or make unusual connections. Cartoon captions: “For the love of God, is there a doctor in the house?”; “Everyone held up their crackers as David threw the cheese log into the ceiling fan.” (Credits: Reprinted with permission of Paul Soderblom; © 1991 Leigh Rubin, Creator’s Syndicate Inc.; © The New Yorker Collection, 2006, Christopher Weyant from cartoonbank.com. All rights reserved.)


Emotional Intelligence

33-4 What makes up emotional intelligence?

emotional intelligence the ability to perceive, understand, manage, and use emotions.

Also distinct from academic intelligence is social intelligence—the know-how involved in comprehending social situations and managing oneself successfully. The concept was first proposed in 1920 by psychologist Edward Thorndike, who noted, “The best mechanic in a factory may fail as a foreman for lack of social intelligence” (Goleman, 2006, p. 83). Like Thorndike, later psychologists have marveled that high-aptitude people are “not, by a wide margin, more effective . . . in achieving better marriages, in successfully raising their children, and in achieving better mental and physical well-being” (Epstein & Meier, 1989). Others have explored the difficulty that some rationally smart people have in processing and managing social information (Cantor & Kihlstrom, 1987; Weis & Süß, 2007).

This idea is especially significant for an aspect of social intelligence that John Mayer, Peter Salovey, and David Caruso (2002, 2008) have called emotional intelligence. They have developed a test that assesses four emotional intelligence components, which are the abilities to

• perceive emotions (to recognize them in faces, music, and stories).
• understand emotions (to predict them and how they change and blend).
• manage emotions (to know how to express them in varied situations).
• use emotions to enable adaptive or creative thinking.

“You’re wise, but you lack tree smarts.” (© The New Yorker Collection, 1988, Reilly from cartoonbank.com. All Rights Reserved.)

Mindful of popular misuses of their concept, Mayer, Salovey, and Caruso caution against stretching “emotional intelligence” to include varied traits such as self-esteem and optimism, although emotionally intelligent people are self-aware. In both the United States and Germany, those scoring high on managing emotions enjoy higher-quality interactions with friends (Lopes et al., 2004). They avoid being hijacked by overwhelming depression, anxiety, or anger. They can read others’ emotions and know what to say to soothe a grieving friend, encourage a colleague, and manage a conflict. Such findings may help explain why, across 69 studies in many countries, those scoring high in emotional intelligence also exhibit modestly better job performance (Van Rooy & Viswesvaran, 2004; Zeidner et al., 2008). They can delay gratification in pursuit of long-range rewards, rather than being overtaken by immediate impulses. Simply said, they are emotionally in tune with others, and thus they often succeed in career, marriage, and parenting situations where academically smarter (but emotionally less intelligent) people fail (Ciarrochi et al., 2006).

Brain damage reports have provided extreme examples of the results of diminished emotional intelligence in people with high general intelligence. Neuroscientist Antonio Damasio (1994) tells of Elliot, who had a brain tumor removed: “I never saw a tinge of emotion in my many hours of conversation with him, no sadness, no impatience, no frustration.” Shown disturbing pictures of injured people, destroyed communities, and natural disasters, Elliot showed—and realized he felt—no emotion. He knew but he could not feel. Unable to intuitively adjust his behavior in response to others’ feelings, Elliot lost his job. He went bankrupt. His marriage collapsed. He remarried and divorced again. At last report, he was dependent on custodial care from a sibling and a disability check.
Some scholars, however, are concerned that emotional intelligence stretches the concept of intelligence too far. Multiple-intelligence man Howard Gardner (1999) welcomes our stretching the concept into the realms of space, music, and information about ourselves and others. But let us also, he says, respect emotional sensitivity, creativity, and motivation as important but different. Stretch “intelligence” to include everything we prize and it will lose its meaning.

“I worry about [intelligence] definitions that collapse assessments of our cognitive powers with statements about the kind of human beings we favor.” Howard Gardner, “Rethinking the Concept of Intelligence,” 2000


Is Intelligence Neurologically Measurable?

33-5 To what extent is intelligence related to brain anatomy and neural processing speed?

Using today’s neuroscience tools, might we link differences in people’s intelligence test performance to dissimilarities in the heart of smarts—the brain? Might we anticipate a future brain test of intelligence?

Brain Size and Complexity

|| A sperm whale’s brain is about 6 times heavier than your brain. ||

|| Correlations do not indicate cause-effect, but they do tell us whether two things are associated in some way. A correlation of –1.0 represents perfect disagreement between two sets of scores—as one score goes up, the other goes down. A correlation of zero represents no association. The highest correlation, +1.0, represents perfect agreement—as the first score goes up, so does the second. ||
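
The correlation scale described in the margin note can be made concrete with a short computation. Here is a minimal sketch in Python; the `pearson_r` helper and the score lists are invented for illustration and are not taken from any study in this module:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Perfect agreement: as the first score rises, so does the second (r is +1.0).
print(pearson_r([1, 2, 3, 4], [10, 20, 30, 40]))
# Perfect disagreement: as one score rises, the other falls (r is -1.0).
print(pearson_r([1, 2, 3, 4], [40, 30, 20, 10]))
```

An in-between value, like the +.30 between intelligence scores and later income in Figure 33.1, indicates a moderate positive association: higher scores on one measure tend to accompany higher scores on the other, with many exceptions.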

After the brilliant English poet Lord Byron died in 1824, doctors discovered that his brain was a massive 5 pounds, not the normal 3 pounds. Three years later, Beethoven died and his brain was found to have exceptionally numerous and deep convolutions. Such observations set brain scientists off studying the brains of other geniuses at their wits’ end (Burrell, 2005). Do people with big brains have big smarts? Alas, some geniuses had small brains, and some dim-witted criminals had brains like Byron’s.

More recent studies that directly measure brain volume using MRI scans do reveal correlations of about +.33 between brain size (adjusted for body size) and intelligence score (Carey, 2007; McDaniel, 2005). Moreover, as adults age, brain size and nonverbal intelligence test scores fall in concert (Bigler et al., 1995). One review of 37 brain-imaging studies revealed associations between intelligence and brain size and activity in specific areas, especially within the frontal and parietal lobes (Jung & Haier, 2007).

Sandra Witelson would not have been surprised. With the brains of 91 Canadians as a comparison base, Witelson and her colleagues (1999) seized an opportunity to study Einstein’s brain. Although not notably heavier or larger in total size than the typical Canadian’s brain, Einstein’s brain was 15 percent larger in the parietal lobe’s lower region—which just happens to be a center for processing mathematical and spatial information. Certain other areas were a tad smaller than average. With different mental functions competing for the brain’s real estate, these observations may offer a clue to why Einstein, like some other great physicists such as Richard Feynman and Edward Teller, was slow in learning to talk (Pinker, 1999).

If intelligence does modestly correlate with brain size, the cause could be differing genes, nutrition, environmental stimulation, some combination of these, or perhaps something else. We now know that experience alters the brain. Rats raised in a stimulating rather than deprived environment develop thicker, heavier cortexes. And learning leaves detectable traces in the brain’s neural connections. “Intelligence is due to the development of neural connections in response to the environment,” notes University of Sydney psychologist Dennis Garlick (2003). Postmortem brain analyses reveal that highly educated people die with more synapses—17 percent more in one study—than their less-educated counterparts (Orlovskaya et al., 1999). This does not tell us whether people grow synapses with education, or people with more synapses seek more education, or both. But other evidence suggests that highly intelligent people differ in their neural plasticity—their ability during childhood and adolescence to adapt and grow neural connections in response to their environment (Garlick, 2002, 2003).

One study repeatedly scanned the brains of 307 children and teens ages 5 to 19. The surprising result: Kids with average intelligence scores showed modest cortex thickening and thinning—with a peak thickness at age 8, suggesting a short developmental window (Shaw et al., 2006). The most intelligent 7-year-olds had a thinner brain cortex, which progressively thickened to age 11 to 13, before thinning with the natural pruning of unused connections. Agile minds came with agile brains.

Efforts to link brain structure with cognition continue. One research team, led by psychologist Richard Haier (2004; Colom et al., 2006), correlated intelligence scores


from 47 adult volunteers with scans that measured their volume of gray matter (neural cell bodies) and white matter (axons and dendrites) in various brain regions. Higher intelligence scores were linked with more gray matter in areas known to be involved in memory, attention, and language (FIGURE 33.2).

Brain Function

Even if the modest correlations between brain anatomy and intelligence prove reliable, they only begin to explain intelligence differences. Searching for other explanations, neuroscientists are studying the brain’s functioning. As people contemplate a variety of questions like those found on intelligence tests, a frontal lobe area just above the outer edge of the eyebrows becomes especially active—in the left brain for verbal questions, and on both sides for spatial questions (Duncan et al., 2000). Information from various brain areas seems to converge in this spot, suggesting to researcher John Duncan (2000) that it may be a “global workspace for organizing and coordinating information” and that some people may be “blessed with a workspace that functions very, very well.”

Are more intelligent people literally more quick-witted, much as today’s speedier computer chips enable more powerful computing than did their predecessors? On some tasks they seem to be. Earl Hunt (1983) found that verbal intelligence scores are predictable from the speed with which people retrieve information from memory. Those who recognize quickly that sink and wink are different words, or that A and a share the same name, tend to score high in verbal ability. Extremely precocious 12- to 14-year-old college students are especially quick in responding to such tasks (Jensen, 1989). To try to define quick-wittedness, researchers are taking a close look at speed of perception and speed of neural processing of information.

Perceptual Speed

Across many studies, the correlation between intelligence score and the speed of taking in perceptual information tends to be about +.3 to +.5 (Deary & Der, 2005; Sheppard & Vernon, 2008). A typical experiment flashes an incomplete stimulus, as in FIGURE 33.3, then a masking image—another image that overrides the lingering afterimage of the incomplete stimulus. The researcher then asks participants whether the long side appeared on the right or left. How much stimulus inspection time do you think you would need to answer correctly 80 percent of the time? Perhaps .01 second? Or .02 second? Those who perceive very quickly tend to score somewhat higher on intelligence tests, particularly on tests based on perceptual rather than verbal problem solving.

FIGURE 33.3 An inspection time task A stimulus is flashed before being overridden by a masking image. How long would you need to glimpse the stimulus to answer the question, “Long side on left or right?” People who can perceive the stimulus very quickly tend to score somewhat higher on intelligence tests. (Adapted from Deary & Stough, 1996.)

FIGURE 33.2 Gray matter matters A frontal view of the brain shows some of the areas where gray matter is concentrated in people with high intelligence scores, and where g may therefore be concentrated. (From Haier et al., 2004.)


Neurological Speed

Do the quicker processing and perceptions of highly intelligent people reflect greater neural processing speed? Repeated studies have found that their brain waves do register a simple stimulus (such as a flash of light or a beeped tone) more quickly and with greater complexity (Caryl, 1994; Deary & Caryl, 1993; Reed & Jensen, 1992). The evoked brain response also tends to be slightly faster when people with high rather than low intelligence scores perform a simple task, such as pushing a button when an X appears on a screen (McGarry-Roberts et al., 1992).

Neural processing speed on a simple task seems far removed from the untimed responses to complex intelligence test items, such as, “In what way are wool and cotton alike?” As yet, notes intelligence expert Nathan Brody (1992, 2001), we have no firm understanding of why fast reactions on simple tasks should predict intelligence test performance, though he suspects they reflect one’s “core information processing ability.” Philip Vernon (1983) has speculated that “faster cognitive processing may allow more information to be acquired.” Perhaps people who more quickly process information accumulate more information—about wool, cotton, and millions of other things. Or perhaps, as one Australian-Dutch research team has found, processing speed and intelligence may correlate not because one causes the other but because they share an underlying genetic influence (Luciano et al., 2005).

The neurological approach to understanding intelligence (and so many other things in psychology) is currently in its heyday. Will this new research reduce what we now call the g factor to simple measures of underlying brain activity? Or are these efforts totally wrongheaded because what we call intelligence is not a single general trait but several culturally adaptive skills? The controversies surrounding the nature of intelligence are a long way from resolution.


Review Introduction to Intelligence

33-1 What arguments support intelligence as one general mental ability, and what arguments support the idea of multiple distinct abilities?

Factor analysis is a statistical procedure that has revealed some underlying commonalities in different mental abilities. Spearman named this common factor the g factor. Thurstone argued against defining intelligence so narrowly, as just one score. He identified seven different clusters of mental abilities. Yet there remained a tendency for high scorers in one of his clusters to score high in other clusters as well. Our g scores seem most predictive in novel situations and do not much correlate with skills in evolutionarily familiar situations.

33-2 How do Gardner’s and Sternberg’s theories of multiple intelligences differ?

Gardner proposes eight independent intelligences: linguistic, logical-mathematical, musical, spatial, bodily-kinesthetic, intrapersonal, interpersonal, and naturalist. Sternberg’s theory proposes three intelligence domains: analytical (academic problem-solving), creative, and practical. (For more on the single-intelligence/multiple-intelligences debate, see Table 33.2.)

33-3 What is creativity, and what fosters it?

Creativity is the ability to produce novel and valuable ideas. It correlates somewhat with intelligence, but beyond a score of 120, that correlation dwindles. It also correlates with expertise, imaginative thinking skills, a venturesome personality, intrinsic motivation, and the support offered by a creative environment.

33-4 What makes up emotional intelligence?

Emotional intelligence is the ability to perceive, understand, manage, and use emotions. Those with higher emotional intelligence achieve greater personal and professional success. However, critics question whether we stretch the idea of intelligence too far when we apply it to emotions.

33-5 To what extent is intelligence related to brain anatomy and neural processing speed?

Recent studies indicate some correlation (about +.33) between brain size (adjusted for body size) and intelligence score. Highly educated or intelligent people exhibit an above-average volume of synapses and gray matter. People who score high on intelligence tests tend also to have speedy brains that retrieve information and perceive stimuli quickly.

Terms and Concepts to Remember

intelligence test, p. 404
intelligence, p. 404
general intelligence (g), p. 405
factor analysis, p. 405
savant syndrome, p. 406
creativity, p. 409
emotional intelligence, p. 411

Test Yourself

1. Joseph, a Harvard Law School student, has a straight-A average, writes for the Harvard Law Review, and will clerk for a Supreme Court justice next year. His grandmother, Judith, is very proud of him, saying that he is way more intelligent than she ever was. But Joseph is also very proud of Judith: As a young woman, she was imprisoned by the Nazis. When the war ended, she walked out of Germany, contacted an agency helping refugees, and began a new life in the United States as an assistant chef in her cousin’s restaurant. According to the definition of intelligence in this module, is Joseph the only intelligent person in this story? Why or why not?

(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself

1. The modern concept of multiple intelligences (as proposed by Gardner and Sternberg) assumes that the analytical school smarts measured by traditional intelligence tests are important abilities but that other abilities are also important. Different people have different gifts. What are yours?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

The Origins of Intelligence Testing
Modern Tests of Mental Abilities
Principles of Test Construction
The Dynamics of Intelligence

module 34 Assessing Intelligence

How do we assess intelligence? In simple terms, intelligence is whatever intelligence tests measure. So, what are these tests, and what makes a test credible? Answering those questions begins with a look at why psychologists created tests of mental abilities and how they have used those tests.

䉴|| The Origins of Intelligence Testing

34-1 When and why were intelligence tests created?

intelligence test a method for assessing an individual’s mental aptitudes and comparing them with those of others, using numerical scores.

Alfred Binet “Some recent philosophers have given their moral approval to the deplorable verdict that an individual’s intelligence is a fixed quantity, one which cannot be augmented. We must protest and act against this brutal pessimism” (Binet, 1909, p. 141).

Alfred Binet: Predicting School Achievement


Some societies concern themselves with promoting the collective welfare of the family, community, and society. Other societies emphasize individual opportunity. Plato, a pioneer of the individualist tradition, wrote more than 2000 years ago in The Republic that “no two persons are born exactly alike; but each differs from the other in natural endowments, one being suited for one occupation and the other for another.” As heirs to Plato’s individualism, people in Western societies have pondered how and why individuals differ in mental ability. Western attempts to assess such differences began in earnest more than a century ago. The English scientist Francis Galton (1822–1911) had a fascination with measuring human traits. When his cousin Charles Darwin proposed that nature selects successful traits through the survival of the fittest, Galton wondered if it might be possible to measure “natural ability” and to encourage those of high ability to mate with one another. At the 1884 London Exposition, more than 10,000 visitors received his assessment of their “intellectual strengths” based on such things as reaction time, sensory acuity, muscular power, and body proportions. But alas, on these measures, eminent adults and high-achieving students did not outscore those supposedly not so bright. Nor did the measures correlate with each other. Although Galton’s quest for a simple intelligence measure failed, he gave us some statistical techniques that we still use (as well as the phrase “nature and nurture”). And his persistent belief in the inheritance of eminence and genius—reflected in the title of his book Hereditary Genius—illustrates an important lesson from both the history of intelligence research and the history of science: Although science itself strives for objectivity, individual scientists are affected by their own assumptions and attitudes.

The modern intelligence-testing movement began at the turn of the twentieth century, when France passed a law requiring that all children attend school. Some children, including many newcomers to Paris, seemed incapable of benefiting from the regular school curriculum and in need of special classes. But how could the schools objectively identify children with special needs? The French government hesitated to trust teachers’ subjective judgments of children’s learning potential. Academic slowness might merely reflect inadequate prior education. Also, teachers might prejudge children on the basis of their social backgrounds. To minimize bias, France’s minister of public education in 1904 commissioned Alfred Binet (1857–1911) and others to study the problem. Binet and his collaborator, Théodore Simon, began by assuming that all children follow the same course of intellectual development but that some develop more


rapidly. On tests, therefore, a “dull” child should perform as does a typical younger child, and a “bright” child as does a typical older child. Thus, their goal became measuring each child’s mental age, the level of performance typically associated with a certain chronological age. The average 9-year-old, for example, has a mental age of 9. Children with below-average mental ages, such as 9-year-olds who perform at the level of a typical 7-year-old, would struggle with schoolwork considered normal for their age. To measure mental age, Binet and Simon theorized that mental aptitude, like athletic aptitude, is a general capacity that shows up in various ways. After testing a variety of reasoning and problem-solving questions on Binet’s two daughters, and then on “bright” and “backward” Parisian schoolchildren, Binet and Simon identified items that would predict how well French children would handle their schoolwork. Note that Binet and Simon made no assumptions concerning why a particular child was slow, average, or precocious. Binet personally leaned toward an environmental explanation. To raise the capacities of low-scoring children, he recommended “mental orthopedics” that would train them to develop their attention span and self-discipline. He believed his intelligence test did not measure inborn intelligence as a meter stick measures height. Rather, it had a single practical purpose: to identify French schoolchildren needing special attention. Binet hoped his test would be used to improve children’s education, but he also feared it would be used to label children and limit their opportunities (Gould, 1981).

mental age a measure of intelligence test performance devised by Binet; the chronological age that most typically corresponds to a given level of performance. Thus, a child who does as well as the average 8-year-old is said to have a mental age of 8.

Stanford-Binet the widely used American revision (by Terman at Stanford University) of Binet’s original intelligence test.

intelligence quotient (IQ) defined originally as the ratio of mental age (ma) to chronological age (ca) multiplied by 100 (thus, IQ = ma/ca × 100). On contemporary intelligence tests, the average performance for a given age is assigned a score of 100.

“The IQ test was invented to predict academic performance, nothing else. If we wanted something that would predict life success, we’d have to invent another test completely.” Social psychologist Robert Zajonc (1984b)

Lewis Terman: The Innate IQ

Binet’s fears were realized soon after his death in 1911, when others adapted his tests for use as a numerical measure of inherited intelligence. This began when Stanford University professor Lewis Terman (1877–1956) found that the Paris-developed questions and age norms worked poorly with California schoolchildren. Adapting some of Binet’s original items, adding others, and establishing new age norms, Terman extended the upper end of the test’s range from teenagers to “superior adults.” He also gave his revision the name it retains today—the Stanford-Binet. From such tests, German psychologist William Stern derived the famous intelligence quotient, or IQ. The IQ was simply a person’s mental age divided by chronological age and multiplied by 100 to get rid of the decimal point:

IQ = (mental age ÷ chronological age) × 100

Thus, an average child, whose mental and chronological ages are the same, has an IQ of 100. But an 8-year-old who answers questions as would a typical 10-year-old has an IQ of 125. The original IQ formula worked fairly well for children but not for adults. (Should a 40-year-old who does as well on the test as an average 20-year-old be assigned an IQ of only 50?) Most current intelligence tests, including the Stanford-Binet, no longer compute an IQ (though the term IQ still lingers in everyday vocabulary as a shorthand expression for “intelligence test score”). Instead, they represent the test-taker’s performance relative to the average performance of others the same age. This average performance is arbitrarily assigned a score of 100, and about two-thirds of all test-takers fall between 85 and 115. Terman promoted the widespread use of intelligence testing. His motive was to “take account of the inequalities of children in original endowment” by assessing their “vocational fitness.” In sympathy with eugenics—a much-criticized nineteenth-century movement that proposed measuring human traits and using the results to encourage only smart and fit people to reproduce—Terman (1916, pp. 91–92) envisioned that



achievement test a test designed to assess what a person has learned.

aptitude test a test designed to predict a person’s future performance; aptitude is the capacity to learn.

Wechsler Adult Intelligence Scale (WAIS) the most widely used intelligence test; contains verbal and performance (nonverbal) subtests.


the use of intelligence tests would “ultimately result in curtailing the reproduction of feeble-mindedness and in the elimination of an enormous amount of crime, pauperism, and industrial inefficiency” (p. 7). With Terman’s help, the U.S. government developed new tests to evaluate both newly arriving immigrants and World War I army recruits—the world’s first mass administration of an intelligence test. To some psychologists, the results indicated the inferiority of people not sharing their Anglo-Saxon heritage. Such findings were part of the cultural climate that led to a 1924 immigration law that reduced Southern and Eastern European immigration quotas to less than a fifth of those for Northern and Western Europe. Binet probably would have been horrified that his test had been adapted and used to draw such conclusions. Indeed, such sweeping judgments did become an embarrassment to most of those who championed testing. Even Terman came to appreciate that test scores reflected not only people’s innate mental abilities but also their education and their familiarity with the culture assumed by the test. Nevertheless, abuses of the early intelligence tests serve to remind us that science can be value-laden. Behind a screen of scientific objectivity, ideology sometimes lurks.
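Stern’s ratio formula from earlier in this section is simple enough to sketch in a few lines of Python. (The helper `ratio_iq` is a hypothetical name for illustration, not part of any actual test battery.)

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's original ratio IQ: mental age divided by
    chronological age, times 100 to remove the decimal point."""
    return mental_age / chronological_age * 100

# An average child, whose mental and chronological ages match:
print(ratio_iq(9, 9))    # 100.0
# An 8-year-old who performs like a typical 10-year-old:
print(ratio_iq(10, 8))   # 125.0
# The formula's problem with adults: a 40-year-old who performs
# like an average 20-year-old would absurdly score only 50.
print(ratio_iq(20, 40))  # 50.0
```

The last line shows why modern tests abandoned the ratio in favor of comparing each test-taker with others of the same age.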

䉴|| Modern Tests of Mental Abilities

34-2 What’s the difference between aptitude and achievement tests, and how can we develop and evaluate them?

By this point in your life, you’ve faced dozens of ability tests: school tests of basic reading and math skills, course exams, intelligence tests, and driver’s license exams, to name just a few. Psychologists classify such tests as either achievement tests, intended to reflect what you have learned, or aptitude tests, intended to predict your ability to learn a new skill. Exams covering what you have learned in this course are achievement tests. A college entrance exam, which seeks to predict your ability to do college work, is an aptitude test—a “thinly disguised intelligence test,” says Howard Gardner (1999). Indeed, report Meredith Frey and Douglas Detterman (2004), total scores on the U.S. SAT Reasoning Test (formerly called the U.S. Scholastic Aptitude Test) correlated +.82 with general intelligence scores in a national sample of 14- to 21-year-olds (FIGURE 34.1).

FIGURE 34.1 Close cousins: Aptitude and intelligence scores A scatterplot shows the close correlation between intelligence scores and verbal and quantitative SAT scores. (From Frey and Detterman, 2004.)


Actually, the differences between achievement and aptitude tests are not so clear-cut. Your achieved vocabulary influences your score on most aptitude tests. Similarly, your aptitudes for learning and test-taking influence your grades on achievement tests. Most tests, whether labeled achievement or aptitude, assess both ability and its development. Practically speaking, however, achievement tests assess current performance and aptitude tests predict future performance. Psychologist David Wechsler created what is now the most widely used intelligence test, the Wechsler Adult Intelligence Scale (WAIS), with a version for school-age children (the Wechsler Intelligence Scale for Children [WISC]), and another for preschool children. As illustrated in FIGURE 34.2, the WAIS consists of 11 subtests broken into verbal and performance areas. It yields not only an overall intelligence score, as does the Stanford-Binet, but also separate scores for verbal comprehension, perceptual organization, working memory, and processing speed. Striking differences among these scores can provide clues to cognitive strengths or weaknesses that teachers or therapists can build upon. For example, a low verbal comprehension score combined with high scores on other subtests could indicate a reading or language disability. Other comparisons can help a psychologist or psychiatrist establish a rehabilitation plan for a stroke patient. Such uses are possible, of course, only when we can trust the test results.

Matching patterns Block design puzzles test the ability to analyze patterns. Wechsler’s individually administered intelligence test comes in forms suited for adults (WAIS) and children (WISC).

FIGURE 34.2 Sample items from the Wechsler Adult Intelligence Scale (WAIS) subtests (Adapted from Thorndike & Hagen, 1977.)


standardization defining meaningful scores by comparison with the performance of a pretested group.

normal curve the symmetrical bell-shaped curve that describes the distribution of many physical and psychological attributes. Most scores fall near the average, and fewer and fewer scores lie near the extremes.

FIGURE 34.3 The normal curve Scores on aptitude tests tend to form a normal, or bell-shaped, curve around an average score. For the Wechsler scale, for example, the average score is 100.


䉴|| Principles of Test Construction

To be widely accepted, psychological tests must meet three criteria: They must be standardized, reliable, and valid. The Stanford-Binet and Wechsler tests meet these requirements.

Standardization

The number of questions you answer correctly on an intelligence test would tell us almost nothing. To evaluate your performance, we need a basis for comparing it with others’ performance. To enable meaningful comparisons, test-makers first give the test to a representative sample of people. When you later take the test following the same procedures, your score can be compared with the sample’s scores to determine your position relative to others. This process of defining meaningful scores relative to a pretested group is called standardization. Group members’ scores typically are distributed in a bell-shaped pattern that forms the normal curve shown in FIGURE 34.3. No matter what we measure—heights, weights, or mental aptitudes—people’s scores tend to form this roughly symmetrical shape. On an intelligence test, we call the midpoint, the average score, 100. Moving out from the average, toward either extreme, we find fewer and fewer people. For both the Stanford-Binet and the Wechsler tests, a person’s score indicates whether that person’s performance fell above or below the average. As Figure 34.3 shows, a performance higher than all but 2 percent of all scores earns an intelligence score of 130. A performance lower than 98 percent of all scores earns an intelligence score of 70. To keep the average score near 100, the Stanford-Binet and the Wechsler scales are periodically restandardized. If you took the WAIS Third Edition recently, your performance was compared with a standardization sample who took the test during 1996, not to David Wechsler’s initial 1930s sample. If you compared the performance of the most recent standardization sample with that of the 1930s sample, do you suppose you would find rising or declining test performance? Amazingly—given that college entrance aptitude scores were dropping during the 1960s and 1970s—intelligence test performance has been improving.
This worldwide phenomenon is called the Flynn effect, in honor of New Zealand researcher James Flynn (1987, 2007), who first calculated its magnitude. As FIGURE 34.4 indicates, the average person’s intelligence test score 80 years ago was—by today’s standard—only a 76! Such rising performance has been observed in 20 countries, from Canada to rural Australia (Daley et al., 2003). Although the gains have recently reversed in Scandinavia, the historic increase is now widely accepted as an important phenomenon (Sundet et al., 2004; Teasdale & Owen, 2005, 2008).


[Figure 34.3 shows a bell-shaped distribution of Wechsler intelligence scores from 55 to 145: 68 percent of people score within 15 points above or below 100, and about 95 percent of all people fall within 30 points of 100.]
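The percentages in Figure 34.3 follow directly from a normal distribution with a mean of 100 and a standard deviation of 15. A quick check with Python’s standard-library statistics module (an illustrative sketch, not how test publishers actually norm their samples):

```python
from statistics import NormalDist

# Wechsler-style scores: mean 100, standard deviation 15.
wechsler = NormalDist(mu=100, sigma=15)

# Roughly two-thirds of test-takers score between 85 and 115:
within_1sd = wechsler.cdf(115) - wechsler.cdf(85)
print(round(within_1sd * 100, 1))   # 68.3

# About 95 percent fall within 30 points of 100:
within_2sd = wechsler.cdf(130) - wechsler.cdf(70)
print(round(within_2sd * 100, 1))   # 95.4

# A score of 130 beats about 98 percent of all scores:
print(round(wechsler.cdf(130) * 100, 1))  # 97.7
```

The same arithmetic explains why a score of 70 falls below all but about 2 percent of test-takers.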


FIGURE 34.4 Getting smarter? In every country studied, intelligence test performance rose during the twentieth century, as shown here with American Wechsler and Stanford-Binet test performance between 1918 and 1989 (intelligence test scores based on 1996 standards). In Britain, test scores have risen 27 points since 1942. (From Hogan, 1995.) Very recent data indicate this trend may have leveled off or may even be reversing.

The Flynn effect’s cause is a mystery (Neisser, 1997a, 1998). Did it result from greater test sophistication? (But the gains began before testing was widespread.) Better nutrition? As the nutrition explanation would predict, people have gotten not only smarter but taller. Moreover, the increases have been greatest at the lowest economic levels, which have gained the most from improved nutrition (Colom et al., 2005). Or did the Flynn effect stem from more education? More stimulating environments? Less childhood disease? Smaller families and more parental investment? Regardless of what combination of factors explains the rise in intelligence test scores, the phenomenon counters one concern of some hereditarians—that the higher twentieth-century birthrates among those with lower scores would shove human intelligence scores downward (Lynn & Harvey, 2008). Seeking to explain the rising scores, and mindful of global mixing, one scholar has even speculated about the influence of a genetic phenomenon comparable to “hybrid vigor,” which occurs in agriculture when cross-breeding produces corn or livestock superior to the parent plants or animals (Mingroni, 2004, 2007).

Reliability

Knowing where you stand in comparison to a standardization group still won’t tell us much about your intelligence unless the test has reliability—unless it yields dependably consistent scores. To check a test’s reliability, researchers retest people. They may use the same test or they may split the test in half and see whether odd-question scores and even-question scores agree. If the two scores generally agree, or correlate, the test is reliable. The higher the correlation between the test-retest or the split-half scores, the higher the test’s reliability. The tests we have considered so far—the Stanford-Binet, the WAIS, and the WISC—all have reliabilities of about +.9, which is very high. When retested, people’s scores generally match their first score closely.
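Split-half reliability is nothing more than a correlation between two half-scores. A minimal Pearson correlation sketch in Python, using made-up odd-item and even-item scores for illustration (real reliability studies use far larger samples):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical odd-item and even-item scores for five test-takers:
odd_items = [10, 14, 9, 16, 12]
even_items = [11, 13, 9, 15, 12]
print(round(pearson(odd_items, even_items), 2))  # 0.98
```

A correlation near +1 means the two halves rank test-takers almost identically, which is what a reliability of about +.9 amounts to.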

reliability the extent to which a test yields consistent results, as assessed by the consistency of scores on two halves of the test, or on retesting.

validity the extent to which a test measures or predicts what it is supposed to. (See also content validity and predictive validity.)

Validity

High reliability does not ensure a test’s validity—the extent to which the test actually measures or predicts what it promises. If you use an inaccurate tape measure to measure people’s heights, your height report would have high reliability (consistency) but low validity. It is enough for some tests that they have content validity, meaning the test taps the pertinent behavior, or criterion. The road test for a driver’s license has content validity because it samples the tasks a driver routinely faces. Course exams have content validity if they assess one’s mastery of a representative sample of course material. But we expect intelligence tests to have predictive validity: They should predict the criterion of future performance, and to some extent they do.

content validity the extent to which a test samples the behavior that is of interest.

predictive validity the success with which a test predicts the behavior it is designed to predict; it is assessed by computing the correlation between test scores and the criterion behavior. (Also called criterion-related validity.)

FIGURE 34.5 Diminishing predictive power Let’s imagine a correlation between football linemen’s body weight and their success on the field (greater correlation over a broad range of body weights; little correlation within the restricted range). Note how insignificant the relationship becomes when we narrow the range of weight to 280 to 320 pounds. As the range of data under consideration narrows, its predictive power diminishes.

Are general aptitude tests as predictive as they are reliable? As critics are fond of noting, the answer is plainly no. The predictive power of aptitude tests is fairly strong in the early school years, but later it weakens. Academic aptitude test scores are reasonably good predictors of achievement for children ages 6 to 12, where the correlation between intelligence score and school performance is about +.6 (Jensen, 1980). Intelligence scores correlate even more closely with scores on achievement tests—+.81 in one comparison of 70,000 English children’s intelligence scores at age 11 to their academic achievement in national exams at age 16 (Deary et al., 2007). The SAT, used in the United States as a college entrance exam, is less successful in predicting first-year college grades; here, the correlation is less than +.5 (Willingham et al., 1990). By the time we get to the Graduate Record Examination (GRE; an aptitude test similar to the SAT but for those applying to graduate school), the correlation with graduate school performance is an even more modest but still significant +.4 (Kuncel & Hezlett, 2007). Why does the predictive power of aptitude scores diminish as students move up the educational ladder? Consider a parallel situation: Among all American or Canadian football linemen, body weight correlates with success. A 300-pound player tends to overwhelm a 200-pound opponent. But within the narrow 280- to 320-pound range typically found at the professional level, the correlation between weight and success becomes negligible (FIGURE 34.5). The narrower the range of weights, the lower the predictive power of body weight becomes. If an elite university takes only those students who have very high aptitude scores, those scores cannot possibly predict much. This will be true even if the test has excellent predictive validity with a more diverse sample of students. 
So, when we validate a test using a wide range of people but then use it with a restricted range of people, it loses much of its predictive validity.

䉴|| The Dynamics of Intelligence

We now can address some age-old questions about the dynamics of human intelligence—about its stability over the life span, and about the extremes of intelligence.

Stability or Change?

34-3 How stable are intelligence scores over the life span?

If we retested people periodically throughout their lives, would their intelligence scores be stable?


This question has led to a search for indicators of infants’ later intelligence that has left few stones unturned. Unable to talk with infants, developmental psychologists have assessed what they can observe—everything from birth weight, to the relative lengths of different toes, to age of sitting up alone. None of these measures provides any useful prediction of intelligence scores at much later ages (Bell & Waldrop, 1989; Broman, 1989). Perhaps, as developmental psychologist Nancy Bayley reflected in 1949, “we have not yet found the right tests.” Someday, she speculated, we might find “infant behaviors which are characteristic of underlying intellectual functions” and which will predict later intelligence. Some studies have found that infants who quickly grow bored with a picture—who, given a choice, prefer to look at a new one—score higher on tests of brain speed and intelligence up to 21 years later, but the prediction is crude (Fagan et al., 2007; Kavsek, 2004; Tasbihsazan et al., 2003). So, new parents who are wondering about their baby’s intelligence and anxiously comparing their baby to others can relax. Except for extremely impaired or very precocious children, casual observation and intelligence tests before age 3 only modestly predict children’s future aptitudes (Humphreys & Davey, 1988; Tasbihsazan et al., 2003). For example, children who are early talkers—speaking in sentences typical of 3-year-olds by age 20 months—are not especially likely to be reading by age 4½ (Crain-Thoreson & Dale, 1992). (A better predictor of early reading is having parents who have read lots of stories to their child.) Remember that even Albert Einstein was slow in learning to talk (Quasha, 1980). By age 4, however, children’s performance on intelligence tests begins to predict their adolescent and adult scores. Moreover, high-scoring adolescents tend to have been early readers.
One study surveyed the parents of 187 American seventh- and eighth-graders who had taken a college aptitude test as part of a seven-state talent search and had scored considerably higher than most high school seniors. If their parents’ memories can be trusted, more than half of this precocious group of adolescents began reading by age 4 and more than 80 percent were reading by age 5 (Van Tassel-Baska, 1983). Not surprisingly, then, intelligence tests given to 5-year-olds do predict school achievement (Tramontana et al., 1988). After about age 7, intelligence test scores, though certainly not fixed, stabilize (Bloom, 1964). Thus, the consistency of scores over time increases with the age of the child. The remarkable stability of aptitude scores by late adolescence is seen in a U.S. Educational Testing Service study of 23,000 students who took the SAT and then later took the GRE (Angoff, 1988). On either test, verbal scores correlated only modestly with math scores—revealing that these two aptitudes are distinct. Yet scores on the SAT verbal test correlated +.86 with the scores on the GRE verbal tests taken four to five years later. An equally astonishing +.86 correlation occurred between the two math tests. Given the time lapse and differing educational experiences of these 23,000 students, the stability of their aptitude scores is remarkable. Ian Deary and his colleagues (2004) recently set a record for long-term follow-up. Their amazing study was enabled by their country, Scotland, doing something that no nation has done before or since. On Monday morning, June 1, 1932, essentially every child in the country who had been born in 1921—87,498 children at ages 10½ to 11½—was given an intelligence test. The aim was to identify working-class children who would benefit from further education.
Sixty-five years later to the day, Patricia Whalley, the wife of Deary’s co-worker, Lawrence Whalley, discovered the test results on dusty storeroom shelves at the Scottish Council for Research in Education, not far from Deary’s Edinburgh University office. “This will change our lives,” Deary replied when Whalley told him the news. And so it has, with dozens of studies of the stability and the predictive capacity of these early test results. For example, when the intelligence test administered to 11-year-old Scots in 1932 was readministered to 542 survivors as turn-of-the-millennium

“My dear Adele, I am 4 years old and I can read any English book. I can say all the Latin substantives and adjectives and active verbs besides 52 lines of Latin poetry.” Francis Galton, letter to his sister, 1827 || Ironically, SAT and GRE scores correlate better with each other than either does with its intended criterion, school achievement. Thus, their reliability far exceeds their predictive validity. If either test were much affected by coaching, luck, or how one feels on the test day (as so many people believe), such reliability would be impossible. ||


FIGURE 34.6 Intelligence endures When Ian Deary and his colleagues (2004) retested 80-year-old Scots, using an intelligence test they had taken as 11-year-olds, their scores across seven decades correlated +.66.


80-year-olds, the correlation between the two sets of scores—after some 70 years of varied life experiences—was striking (FIGURE 34.6). High-scoring 11-year-olds also were more likely to be living independently as 77-year-olds and were less likely to have suffered late-onset Alzheimer’s disease (Starr et al., 2000; Whalley et al., 2000). Among girls scoring in the highest 25 percent, 70 percent were still alive at age 76—as were only 45 percent of those scoring in the lowest 25 percent (FIGURE 34.7). (World War II prematurely ended the lives of many of the male test-takers.) Another study that followed 93 nuns confirmed that those exhibiting less verbal ability in essays written when entering their convent in their teens were more at risk for Alzheimer’s disease after age 75 (Snowdon et al., 1996).

“Whether you live to collect your old-age pension depends in part on your IQ at age 11.” Ian Deary, “Intelligence, Health, and Death,” 2005

FIGURE 34.7 Living smart Women scoring in the highest 25 percent on the Scottish national intelligence test at age 11 tended to live longer than those who scored in the lowest 25 percent. (From Whalley & Deary, 2001.)


Extremes of Intelligence

34-4 What are the traits of those at the low and high intelligence extremes?

One way to glimpse the validity and significance of any test is to compare people who score at the two extremes of the normal curve. The two groups should differ noticeably, and they do.


The Low Extreme

At one extreme of the normal curve are those whose intelligence test scores fall at 70 or below. To be labeled as having an intellectual disability (formerly referred to as mental retardation), a child must have both a low test score and difficulty adapting to the normal demands of independent living. Only about 1 percent of the population meets both criteria, with males outnumbering females by 50 percent (American Psychiatric Association, 1994). As TABLE 34.1 indicates, most individuals with intellectual disabilities can, with support, live in mainstream society. Intellectual disabilities sometimes have a known physical cause. Down syndrome, for example, is a disorder of varying severity caused by an extra chromosome 21 in the person’s genetic makeup. During the last two centuries, the pendulum of opinion about how best to care for Americans with intellectual disabilities has made a complete swing. Until the mid-nineteenth century, they were cared for at home. Many of those with the most severe disabilities died, but people with less significant challenges often found a place in a farm-based society. Then, residential schools for slow learners were established. By the twentieth century, many of these institutions had become warehouses, providing residents little attention, no privacy, and no hope. Parents often were told to separate themselves permanently from their impaired child before they became attached. In the last half of the twentieth century, the pendulum swung back to normalization—encouraging people to live in their own communities as normally as their functioning permits. Children with mild disabilities are educated in less restrictive environments, and many are integrated, or mainstreamed, into regular classrooms. Most grow up with their own families, then move into a protected living arrangement, such as a group home. The hope, and often the reality, is a happier and more dignified life.
But think about another reason people diagnosed with mild intellectual disabilities—those just below the 70 score on an intelligence test used to draw the line on who has a disability—might be better able to live independently today than many decades ago. Recall that, thanks to the Flynn effect, the tests have been periodically restandardized. When that happens, individuals who scored near 70 suddenly lose about 6 IQ points, and two people with the same ability level could thus be classified differently depending on when they were tested (Kanaya et al., 2003). As the number of people diagnosed with an intellectual disability suddenly jumps, more people become eligible for special education and for Social Security payments for those with an intellectual disability. And in the United States (one of only a few countries with the death penalty), fewer people are eligible for execution—the U.S. Supreme Court ruled in 2002 that the execution of people with intellectual disabilities is “cruel and unusual punishment.” For people near that score of 70, intelligence testing can be a high-stakes competition.

TABLE 34.1 Degrees of Intellectual Disability

Level      Approximate Intelligence Scores   Adaptation to Demands of Life
Mild       50–70        May learn academic skills up to sixth-grade level. Adults may, with assistance, achieve self-supporting social and vocational skills.
Moderate   35–50        May progress to second-grade level academically. Adults may contribute to their own support by laboring in sheltered workshops.
Severe     20–35        May learn to talk and to perform simple work tasks under close supervision but are generally unable to profit from vocational training.
Profound   Below 20     Require constant aid and supervision.

Source: Reprinted with permission from the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, text revision. Copyright 2000 American Psychiatric Association.

intellectual disability (formerly referred to as mental retardation) a condition of limited mental ability, indicated by an intelligence score of 70 or below and difficulty in adapting to the demands of life; varies from mild to profound.

Down syndrome a condition of intellectual disability and associated physical disorders caused by an extra copy of chromosome 21.

The High Extreme

Terman did test two future Nobel laureates in physics, but they failed to score above his gifted sample cutoff (Hulbert, 2005).

“Joining Mensa means that you are a genius. . . . I worried about the arbitrary 132 cutoff point, until I met someone with an IQ of 131 and, honestly, he was a bit slow on the uptake.” Steve Martin, 1997


The extremes of intelligence Sho Yano was playing Mozart by age 4, aced the SAT at age 8, and graduated summa cum laude from Loyola University at age 12, at which age he began combined Ph.D.–M.D. studies at the University of Chicago.

In one famous project begun in 1921, Lewis Terman studied more than 1500 California schoolchildren with IQ scores over 135. Contrary to the popular notion that intellectually gifted children are frequently maladjusted because they are “in a different world” from their nongifted peers, Terman’s high-scoring children, like those in later studies, were healthy, well-adjusted, and unusually successful academically (Lubinski & Benbow, 2006; Stanley, 1997). When restudied over the next seven decades, most people in Terman’s group had attained high levels of education (Austin et al., 2002; Holahan & Sears, 1995). They included many doctors, lawyers, professors, scientists, and writers, but no Nobel prize winners.

In a more recent study, precocious youths who aced the math SAT at age 13—by scoring in the top quarter of 1 percent of their age group—were at age 33 twice as likely to have patents as were those in the bottom quarter of the top 1 percent (Wai et al., 2005). And they were more likely to have earned a Ph.D.—1 in 3, compared with 1 in 5 from the lower part of the top 1 percent. Compared with the math aces, 13-year-olds scoring high on verbal aptitude were more likely to have become humanities professors or written a novel (Park et al., 2007).

These whiz kids remind me of Jean Piaget, who by age 7 was devoting his free time to studying birds, fossils, and machines; who by age 15 was publishing scientific articles on mollusks; and who later went on to become the twentieth century’s most famous developmental psychologist (Hunt, 1993). Children with extraordinary academic gifts are sometimes more isolated, introverted, and in their own worlds (Winner, 2000). But most thrive.
Some critics question many of the assumptions of currently popular “gifted child” programs, such as the belief that only 3 to 5 percent of children are gifted and that it pays to identify and “track” these special few—segregating them in special classes and giving them academic enrichment not available to the other 95 percent. Critics note that tracking by aptitude sometimes creates a self-fulfilling prophecy: Those implicitly labeled “ungifted” may be influenced to become so (Lipsey & Wilson, 1993; Slavin & Braddock, 1993). Denying lower-ability students opportunities for enriched education can widen the achievement gap between ability groups and increase their social isolation from one another (Carnegie, 1989; Stevenson & Lee, 1990). Because minority and low-income youth are more often placed in lower academic groups, tracking can also promote segregation and prejudice—hardly, note critics, a healthy preparation for working and living in a multicultural society.

Critics and proponents of gifted education do, however, agree on this: Children have differing gifts. Some are especially good at math, others at verbal reasoning, others at art, still others at social leadership. Educating children as if all were alike is as naive as assuming that giftedness is something, like blue eyes, that you either have or do not have. One need not hang labels on children to affirm their special talents and to challenge them all at the frontiers of their own ability and understanding. By providing appropriate developmental placement suited to each child’s talents, we can promote both equity and excellence for all (Colangelo et al., 2004; Lubinski & Benbow, 2000; Sternberg & Grigorenko, 2000).



Review: Assessing Intelligence

34-1 When and why were intelligence tests created?
In France in 1904, Alfred Binet started the modern intelligence-testing movement by developing questions that helped predict children’s future progress in the Paris school system. Lewis Terman of Stanford University revised Binet’s work for use in the United States. Terman believed his Stanford-Binet could help guide people toward appropriate opportunities, but more than Binet, he believed intelligence is inherited. During the early part of the twentieth century, intelligence tests were sometimes used to “document” scientists’ assumptions about the innate inferiority of certain ethnic and immigrant groups.

34-2 What’s the difference between aptitude and achievement tests, and how can we develop and evaluate them?
Aptitude tests are designed to predict what you can learn. Achievement tests are designed to assess what you have learned. The WAIS (Wechsler Adult Intelligence Scale), an aptitude test, is the most widely used intelligence test for adults. Such tests must be standardized by giving the test to a representative sample of future test-takers to establish a basis for meaningful score comparisons. The distribution of test scores often forms a normal, bell-shaped curve. Tests must also be reliable, yielding consistent scores (on two halves of the test, or when people are retested). And they must be valid. A valid test measures or predicts what it is supposed to. Content validity is the extent to which a test samples the pertinent behavior (as a driving test measures driving ability). Predictive validity is the extent to which the test predicts a behavior it is designed to predict (aptitude tests have predictive validity if they can predict future achievements).

34-3 How stable are intelligence scores over the life span?
The stability of intelligence test scores increases with age. By age 4, scores fluctuate somewhat but begin to predict adolescent and adult scores. At about age 7, scores become fairly stable and consistent.

34-4 What are the traits of those at the low and high intelligence extremes?
Those with intelligence test scores below 70, the cut-off mark for the diagnosis of an intellectual disability (formerly referred to as mental retardation), vary from near-normal to those requiring constant aid and supervision. Down syndrome is a form of intellectual disability with a physical cause—an extra copy of chromosome 21. High-scoring people, contrary to popular myths, tend to be healthy and well-adjusted, as well as unusually successful academically. Schools sometimes “track” such children, separating them from those with lower scores. Such programs can become self-fulfilling prophecies as children live up to—or down to—others’ perceptions of their ability.

Terms and Concepts to Remember
intelligence test, p. 416; mental age, p. 417; Stanford-Binet, p. 417; intelligence quotient (IQ), p. 417; achievement tests, p. 418; aptitude tests, p. 418; Wechsler Adult Intelligence Scale (WAIS), p. 419; standardization, p. 420; normal curve, p. 420; reliability, p. 421; validity, p. 421; content validity, p. 421; predictive validity, p. 421; intellectual disability, p. 425; Down syndrome, p. 425

Test Yourself
1. What was the purpose of Binet’s pioneering intelligence test?
2. The Smiths have enrolled their 2-year-old son in a special program that promises to assess his IQ and, if he places in the top 5 percent of test-takers, to create a plan that will guarantee his admission to a top university at age 18. Why is this endeavor of questionable value?
(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself
1. Are you working to the potential reflected in your college entrance exam scores? What, other than your aptitude, is affecting your college performance?

2. How do you feel about mainstreaming children of all ability levels in the same classroom? What evidence are you using to support your view?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Twin and Adoption Studies
Heritability
Environmental Influences
Group Differences in Intelligence Test Scores
The Question of Bias

module 35
Genetic and Environmental Influences on Intelligence

35-1 What does evidence reveal about hereditary and environmental influences on intelligence?

Intelligence runs in families. But why? Are our intellectual abilities mostly inherited? Or are they molded by our environment? Few issues arouse such passion or have such serious political implications. Consider: If we mainly inherit our differing mental abilities, and if success reflects those abilities, then people’s socioeconomic standing will correspond to their inborn differences. This could lead to those on top believing their intellectual birthright justifies their social positions. But if mental abilities are primarily nurtured by the environments that raise and inform us, then children from disadvantaged environments can expect to lead disadvantaged lives. In this case, people’s standing will result from their unequal opportunities. For now, as best we can, let’s set aside such political implications and examine the evidence.

Twin and Adoption Studies

Do people who share the same genes also share comparable mental abilities? As you can see from FIGURE 35.1, which summarizes many studies, the answer is clearly yes. In support of the genetic contribution to intelligence, researchers cite three sets of findings:

“There are more studies addressing the genetics of g [general intelligence] than any other human characteristic.” Robert Plomin (1999)

[New Yorker cartoon: “I told my parents that if grades were so important they should have paid for a smarter egg donor.”]


• The intelligence test scores of identical twins reared together are virtually as similar as those of the same person taking the same test twice (Lykken, 1999; Plomin, 2001). (The scores of fraternal twins, who typically share only half their genes, are much less similar.) Likewise, the test scores of identical twins reared separately are similar enough to have led twin researcher Thomas Bouchard (1996a) to estimate that “about 70 percent” of intelligence test score variation “can be attributed to genetic variation.” Other estimates range from 50 to 75 percent (Devlin et al., 1997; Neisser et al., 1996; Plomin, 2003). For simple reaction time tasks that measure processing speed, estimates range from 30 to 50 percent (Beaujean, 2005).

• Brain scans reveal that identical twins have very similar gray matter volume, and that their brains (unlike those of fraternal twins) are virtually the same in areas associated with verbal and spatial intelligence (Thompson et al., 2001).

• Are there genes for genius? Today’s researchers have identified chromosomal regions important to intelligence, and they have pinpointed specific genes that seemingly influence variations in intelligence and learning disabilities (Dick et al., 2007; Plomin & Kovas, 2005; Posthuma & de Geus, 2006). Intelligence appears to be polygenetic, meaning many genes seem to be involved, with each gene accounting for much less than 1 percent of intelligence variations (Butcher et al., 2008).

But other evidence points to the effects of environment. Studies show that adoption enhances the intelligence scores of mistreated or neglected children (van IJzendoorn & Juffer, 2005, 2006). And fraternal twins, who are genetically no more alike than any other siblings—but who are treated more alike because they are the same age—tend to score more alike than other siblings. So if shared environment matters, do children in adoptive families share similar aptitudes?



[FIGURE 35.1 (bar graph): Similarity of intelligence scores (correlation, 0 to 1.00) for identical twins reared together, identical twins reared apart, fraternal twins reared together, siblings reared together, and unrelated individuals reared together. The lower correlation for identical twins reared apart than for those reared together shows some environmental effect; the lower correlations for fraternal twins and other pairs show genetic effects.]

Seeking to disentangle genes and environment, researchers have compared the intelligence test scores of adopted children with those of their adoptive siblings and with those of (a) their biological parents, the providers of their genes, and (b) their adoptive parents, the providers of their home environment. During childhood, the intelligence test scores of adoptive siblings correlate modestly. Over time, adopted children accumulate experience in their differing adoptive families. So would you expect the family environment effect to grow with age and the genetic legacy effect to shrink? If you would, behavior geneticists have a surprise for you. Mental similarities between adopted children and their adoptive families wane with age, until the correlation approaches zero by adulthood (McGue et al., 1993). This is even true of “virtual twins”—same age, biologically unrelated siblings reared together from infancy (Segal et al., 2007). Genetic influences—not environmental ones—become more apparent as we accumulate life experience (Bouchard, 1995, 1996b). Identical twins’ similarities, for example, continue or increase into their eighties (McClearn et al., 1997; Plomin et al., 1997). Similarly, adopted children’s intelligence scores over time become more like those of their biological parents (FIGURE 35.2).

FIGURE 35.1 Intelligence: Nature and nurture The most genetically similar people have the most similar intelligence scores. Remember: 1.0 indicates a perfect correlation; zero indicates no correlation at all. (Data from McGue et al., 1993.)
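To make the caption’s correlation scale concrete, here is a small sketch of how a correlation coefficient is computed from paired scores. The twin scores below are invented for illustration; only the formula, and the interpretation of 1.0 versus 0, follow the text.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation: 1.0 means perfectly similar paired scores,
    0 means no relationship between them."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical intelligence scores for six identical-twin pairs:
twin_a = [95, 110, 102, 88, 120, 105]
twin_b = [97, 108, 100, 90, 118, 107]
print(round(pearson(twin_a, twin_b), 2))  # close to 1.0, as in Figure 35.1
```

Unrelated individuals reared together, by contrast, produce pairs whose scores scatter independently, which is why their correlation in the figure sits near zero.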

FIGURE 35.2 Who do adopted children resemble? As the years went by in their adoptive families, children’s verbal ability scores became modestly more like their biological parents’ scores. (Adapted from Plomin & DeFries, 1998.) [Line graph: child-parent correlation in verbal ability scores, 0.00 to 0.35, from age 3 to age 16, for three groups: children and their birth parents, adopted children and their birth parents, and adopted children and their adoptive parents.]



heritability the proportion of variation among individuals that we can attribute to genes. The heritability of a trait may vary, depending on the range of populations and environments studied.

A check on your understanding of heritability: If environments become more equal, the heritability of intelligence would (a) increase, (b) decrease, or (c) be unchanged. (Answer below.)

Heritability

Given such studies, one might be tempted to make statements about the heritability of intelligence. But be very careful how you use the word heritability. Estimates of the heritability of intelligence—the variation in intelligence test scores attributable to genetic factors—put it at about 50 percent. Does this mean your genes are responsible for 50 percent of your intelligence and your environment for the rest? No. It means we credit heredity with 50 percent of the variation in intelligence among the people being studied. This point is so often misunderstood that I repeat: Heritability never pertains to an individual, only to why people differ from one another.

Heritability (the extent to which differences among people are due to genes) can vary from study to study. Where environments vary widely, as they do among children of less-educated parents, environmental differences are more predictive of intelligence scores (Rowe et al., 1999; Turkheimer et al., 2003). Mark Twain once proposed that boys should be raised in barrels and fed through a hole until they were 12 years old. Given the boys’ equal environments, differences in their individual intelligence test scores at age 12 could be explained only by their heredity. Thus, heritability for their differences would be nearly 100 percent. But if we raise people with similar heredities in drastically different environments (barrels versus advantaged homes), the environment effect will be huge, and heritability will therefore be lower. In a world of clones, heritability would be zero.

Remember, too, that genes and environment work together. If you try out for a basketball team and are just slightly taller and quicker than others, notes James Flynn (2003, 2007), you will more likely be picked, play more, and get more coaching. The same would be true for your separated identical twin—who might, not just for genetic reasons, also come to excel at basketball.
Likewise, if you have a natural aptitude for academics, you will more likely stay in school, read books, and ask questions—all of which will amplify your cognitive brain power. Thanks to such gene-environment interaction, modest genetic advantages can be socially multiplied into big performance advantages. Our genes shape the experiences that shape us.

Heritability—variation explained by genetic influences—will increase as environmental variation decreases.
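Twain’s barrel thought experiment can be sketched numerically. In this illustration (the components and magnitudes are assumed, not measured values), each simulated score is a genetic deviation plus an environmental deviation, and heritability is the genetic share of the group’s total score variance:

```python
import random
import statistics

def heritability(genetic_sd, environment_sd, n=100_000, seed=1):
    """Genetic share of score variance in a simulated group.
    This describes the GROUP's variation, never any one individual."""
    rng = random.Random(seed)
    genes = [rng.gauss(0, genetic_sd) for _ in range(n)]
    envs = [rng.gauss(0, environment_sd) for _ in range(n)]
    scores = [100 + g + e for g, e in zip(genes, envs)]
    return statistics.variance(genes) / statistics.variance(scores)

# Equal environments (everyone raised in a barrel): remaining
# differences are all genetic, so heritability approaches 100 percent.
print(round(heritability(10, 0), 2))   # prints 1.0
# Widely varying environments shrink heritability (here toward 50 percent).
print(round(heritability(10, 10), 2))  # roughly 0.5
```

Running the same function with `genetic_sd` near zero (a world of clones) drives the ratio toward zero, matching the text’s point that heritability depends on the range of heredities and environments studied.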

Environmental Influences

Genes make a difference. Even if we were all raised in the same intellectually stimulating environment, we would have differing aptitudes. But life experiences also matter. Human environments are rarely as impoverished as the dark and barren cages inhabited by deprived rats that develop thinner-than-normal brain cortexes (Rosenzweig et al., 1984). Yet severe deprivation does leave footprints on the brain.


Early Environmental Influences

[New Yorker cartoon: “Selective breeding has given me an aptitude for the law, but I still love fetching a dead duck out of freezing water.”]

We have seen that biology and experience intertwine. Nowhere is this more apparent than in impoverished human environments such as J. McVicker Hunt (1982) observed in a destitute Iranian orphanage. The typical child Hunt observed there could not sit up unassisted at age 2 or walk at age 4. The little care the infants received was not in response to their crying, cooing, or other behaviors, so the children developed little sense of personal control over their environment. They were instead becoming passive “glum lumps.” Extreme deprivation was bludgeoning native intelligence. Aware of both the dramatic effects of early experiences and the impact of early intervention, Hunt began a program of tutored human enrichment. He trained caregivers to play language-fostering games with 11 infants, imitating the babies’ babbling, then engaging them in vocal follow-the-leader, and finally teaching them sounds from the Persian language. The results were dramatic. By 22 months of age, the infants could

name more than 50 objects and body parts and so charmed visitors that most were adopted—an unprecedented success for the orphanage. (Institutionalized Romanian orphans also have benefited cognitively if transferred early to more enriched home care [Nelson et al., 2007].)

Devastating neglect Romanian orphans who had minimal interaction with caregivers, such as this child in the Lagunul Pentro Copii orphanage in 1990, suffered delayed development.

Hunt’s findings are an extreme case of a more general finding: Among the poor, environmental conditions can override genetic differences, depressing cognitive development. Unlike children of affluence, siblings within impoverished families have more similar intelligence scores (Turkheimer et al., 2003). Schools with lots of poverty-level children often have less-qualified teachers, as one study of 1450 Virginia schools found. And even after controlling for poverty, having less-qualified teachers predicted lower achievement scores (Tuerk, 2005). Malnutrition also plays a role. Relieve infant malnutrition with nutritional supplements, and poverty’s effect on physical and cognitive development lessens (Brown & Pollitt, 1996).

Do studies of such early interventions indicate that providing an “enriched” environment can “give your child a superior intellect,” as some popular books claim? Most experts are doubtful (Bruer, 1999). Although malnutrition, sensory deprivation, and social isolation can retard normal brain development, there is no environmental recipe for fast-forwarding a normal infant into a genius. All babies should have normal exposure to sights, sounds, and speech. Beyond that, Sandra Scarr’s (1984) verdict still is widely shared: “Parents who are very concerned about providing special educational lessons for their babies are wasting their time.”

Still, explorations of intelligence promotion continue. One widely publicized but now-discounted finding, dubbed the “Mozart effect,” suggested that listening to classical music boosted cognitive ability. Other research has, however, revealed small but enduring cognitive benefits to either keyboard or vocal music training (Schellenberg, 2005, 2006). The music-training effect appears unexplained by the greater parental income and education of music-trained children; it may result from improved attention focus or abstract thinking ability. Other researchers hold out hope that targeted training of specific abilities (rather like a body builder doing curls to strengthen biceps and situps to strengthen abdominal muscles) might build mental muscles (Kosslyn, 2007).

Schooling and Intelligence

Later in childhood, schooling is one intervention that pays dividends reflected in intelligence scores. Schooling and intelligence interact, and both enhance later income (Ceci & Williams, 1997). Hunt was a strong believer in the ability of education to boost children’s chances for success by developing their cognitive and social skills. Indeed, his 1961 book, Intelligence and Experience, helped launch Project Head Start in 1965. Head Start, a U.S. government-funded preschool program, serves more than 900,000 children, most of whom come from families below the poverty level (Head Start, 2005).

“There is a large body of evidence indicating that there is little if anything to be gained by exposing middle-class children to early education.” Developmental psychologist Edward F. Zigler (1987)

Getting a Head Start Project Head Start offers educational activities designed to increase readiness for schoolwork and expand children’s notions of where school might lead them. Here children in a classroom learn about colors, and children on a field trip prepare for the annual Head Start parade in Boston.

Does it succeed? Researchers study Head Start and other preschool programs such as Sure Start in Britain by comparing children who experience the program with their counterparts who don’t. Quality programs, offering individual attention, increase children’s school readiness, which decreases their likelihood of repeating a grade or being placed in special education. Generally, the aptitude benefits dissipate over time (reminding us that life experience after Head Start matters, too). Psychologist Edward Zigler, the program’s first director, nevertheless believes there are long-term benefits (Ripple & Zigler, 2003; Zigler & Styfco, 2001). High-quality preschool programs can provide at least a small boost to emotional intelligence—creating better attitudes toward learning and reducing school dropouts and criminality (Reynolds et al., 2001).

Genes and experience together weave the intelligence fabric. But what we accomplish with our intelligence depends also on our own beliefs and motivation, reports Carol Dweck (2006, 2007). Those who believe that intelligence is biologically fixed and unchanging tend to focus on proving and defending their identity. Those who instead believe that intelligence is changeable will focus more on learning and growing. Seeing that it pays to have a “growth mindset” rather than a “fixed mindset,” Dweck has developed interventions that effectively teach early teens that the brain is like a muscle that grows stronger with use as neuron connections grow. Indeed, superior achievements in fields from sports to science to music arise from disciplined effort and sustained practice (Ericsson et al., 2007).

“It is our choices . . . that show what we truly are, far more than our abilities.” Professor Dumbledore to Harry Potter in J. K. Rowling’s Harry Potter and the Chamber of Secrets, 1999

Group Differences in Intelligence Test Scores

35-2 How and why do gender and racial groups differ in mental ability scores?

If there were no group differences in aptitude scores, psychologists could politely debate hereditary and environmental influences in their ivory towers. But there are group differences. What are they? And what shall we make of them?

Gender Similarities and Differences

Despite the gender equivalence in intelligence test scores, males are more likely than females to overestimate their own test scores. Both males and females tend to rate their father’s scores higher than their mother’s, their brothers’ scores higher than their sisters’, and their sons’ scores higher than their daughters’ (Furnham, 2001; Furnham et al., 2002a,b, 2004a,b,c).

In science, as in everyday life, differences, not similarities, excite interest. Compared with the anatomical and physiological similarities between men and women, our differences are relatively minor. Yet it is the differences we find exciting. Similarly, in the psychological domain, gender similarities vastly outnumber gender differences. We are all so much alike. In a 1932 testing of all Scottish 11-year-olds, for example, girls’ average intelligence score was 100.6 and boys’ was 100.5 (Deary et al., 2003). On a 2001 to 2003 Cognitive Ability Test administered to 324,000 British 11- and 12-year-olds, boys averaged 99.1 and girls a similar 99.9 (Strand et al., 2006). So far as general intelligence (g) is concerned, boys and girls, men and women, are the same species. Yet, most people find differences more newsworthy. And here they are:

Spelling: Females are better spellers: At the end of high school, only 30 percent of U.S. males spell better than the average female (Lubinski & Benbow, 1992).

Verbal ability: Females excel at verbal fluency and remembering words (Halpern et al., 2007). And, year after year, among nearly 200,000 students taking Germany’s Test for Medical Studies, young women have surpassed men in remembering facts from short medical cases (Stumpf & Jackson, 1994). (My wife, who remembers many of my experiences for me, tells me that if she died I’d be a man without a past.)

Nonverbal memory: Females have an edge in remembering and locating objects (Voyer et al., 2007). In studies of more than 100,000 American adolescents, girls also modestly surpassed boys in memory for picture associations (Hedges & Nowell, 1995).



Sensation: Females are more sensitive to touch, taste, and odor.

Emotion-detecting ability: Females are better emotion detectors. Robert Rosenthal, Judith Hall, and their colleagues (1979; McClure, 2000) discovered this while studying sensitivity to emotional cues (an aspect of emotional intelligence). They showed hundreds of people brief film clips of portions of a person’s emotionally expressive face or body, sometimes with a garbled voice added. For example, after showing a 2-second scene revealing only the face of an upset woman, the researchers asked people to guess whether the woman was criticizing someone for being late or was talking about her divorce. Rosenthal and Hall found that some people, many of them women, are much better emotion detectors than others. Such skills may explain women’s somewhat greater responsiveness in both positive and negative emotional situations. Could this ability also have helped our ancestral mothers read emotions in their infants and would-be lovers, in turn fueling cultural tendencies to encourage women’s empathic skills? Some evolutionary psychologists believe so.

Math and spatial aptitudes: On math tests given to more than 3 million representatively sampled people in 100 independent studies, males and females obtained nearly identical average scores (Hyde et al., 1990, 2008). But again—despite greater diversity within the genders than between them—group differences make the news. In 20 of 21 countries, females displayed an edge in math computation, but males scored higher in math problem solving (Bronner, 1998; Hedges & Nowell, 1995). In Western countries, virtually all math prodigies participating in the International Mathematics Olympiad have been males. (More female math prodigies have, however, reached the top levels in non-Western countries, such as China [Halpern, 1991].) The score differences are sharpest at the extremes.
Among 12- to 14-year-olds scoring extremely high on SAT math, boys have outnumbered girls 13 to 1, and within that precocious group, the boys more often went on to earn a degree in the inorganic sciences and engineering (Benbow et al., 2000). In the United States, males also have an edge in the annual physics and computer science Advanced Placement exams (Stumpf & Stanley, 1998). Men are 99 percent of the world’s chess grandmasters, a difference attributable to the much greater number of boys beginning to play competitive chess. Understanding why boys more than girls enter competitive chess is a challenge for future research (Chabris & Glickman, 2006).

|| In the first 56 years of the college Putnam Mathematical Competition, all of the nearly 300 awardees were men (Arenson, 1997). In 1997, a woman broke the male grip by joining 5 men in the winner’s circle. In 1998, Melanie Wood became the first female member of a U.S. math Olympics team (Shulman, 2000). Her training began at an early age: When mall-shopping with her then-4-year-old daughter, Melanie’s mother would alleviate her child’s boredom by giving her linear equations to solve. ||


Math Olympics champs After outscoring thousands of their U.S. peers, these young people became the U.S. Math Team in 2002 and placed third in the worldwide competition.

MODULE 35 Genetic and Environmental Influences on Intelligence

FIGURE 35.3 The mental rotation test This is a test of spatial abilities: Which two circles contain a configuration of blocks identical to the one in the circle at the left? (From Vandenberg & Kuse, 1978.) (Answer: the first and fourth alternatives.)

|| Among entering American collegians, 22 percent of men and 4 percent of women report having played video/computer games six or more hours a week (Pryor et al., 2006). ||


Nature or nurture? At this 2005 Google Inc.-sponsored computer coding competition, programmers competed for cash prizes and possible jobs. What do you think accounted for the fact that only one of the 100 finalists was female?

The average male edge seems most reliable in spatial ability tests like the one shown in FIGURE 35.3, which involves speedily rotating three-dimensional objects in one's mind (Collins & Kimura, 1997; Halpern, 2000). Exposure to high levels of male sex hormones during the prenatal period does enhance spatial abilities (Berenbaum et al., 1995). So, one recent experiment indicates, does action video game playing (Feng et al., 2007). Spatial skills help when fitting suitcases into a car trunk, playing chess, or doing certain types of geometry problems. From an evolutionary perspective (Geary, 1995, 1996; Halpern et al., 2007), those same skills helped our ancestral fathers track prey and make their way home. The survival of our ancestral mothers may have benefited more from a keen memory for the location of edible plants—a legacy that lives today in women's superior memory for objects and their location. Evolutionary psychologist Steven Pinker (2005) argues that biological as well as social influences appear to affect gender differences in life priorities (women's greater interest in people versus men's in money and things), in risk-taking (with men more reckless), and in math reasoning and spatial abilities. Such differences are, he notes, observed across cultures, stable over time, influenced by prenatal hormones, and observed in genetic boys raised as girls. Other researchers are exploring a brain basis for male-female cognitive differences (Halpern et al., 2007). Elizabeth Spelke (2005), however, urges caution in charting male-female intellectual worlds. It oversimplifies to say that women have more "verbal ability" and men more "math ability." Women excel at verbal fluency, men at verbal analogies. Women excel at rapid math calculations, men at rapid math reasoning. Women excel at remembering objects' spatial positions, men at remembering geometric layouts.
Other critics urge us to remember that social expectations and divergent opportunities shape boys' and girls' interests and abilities (Crawford et al., 1995; Eccles et al., 1990). Gender-equal cultures, such as Sweden and Iceland, exhibit little of the gender math gap found in gender-unequal cultures, such as Turkey and Korea (Guiso et al., 2008). In the United States, the male edge in math problem solving is detectable only after elementary school. Traditionally, math and science have been considered masculine subjects, but as more parents encourage their daughters to develop their abilities in math and science, the gender gap is narrowing (Nowell & Hedges, 1998). In some fields, including psychology, women now earn most of the Ph.D.s. Yet, notes Diane Halpern (2005) with a twinkle in her eye, "no one has asked if men have the innate ability to succeed in those academic disciplines where they are underrepresented."

Greater male variability Finally, intelligence research consistently reports a peculiar tendency for males' mental ability scores to vary more than females' (Halpern et al., 2007). Thus, boys outnumber girls at both the low extreme and the high extreme (Kleinfeld, 1998; Strand et al., 2006; also see FIGURE 35.4). Boys are, therefore, more often found in special education classes. They talk later. They stutter more.

FIGURE 35.4 Gender and variability When nearly 90,000 Scottish 11-year-olds were administered an intelligence test in 1932, the average IQ scores for girls and boys were essentially identical. But as other studies have found, boys were overrepresented at the low and high extremes. (Adapted from Deary et al., 2003.)

Ethnic Similarities and Differences Fueling the group-differences debate are two other disturbing but agreed-upon facts:

• Racial groups differ in their average intelligence test scores.
• High-scoring people (and groups) are more likely to attain high levels of education and income.

A statement by 52 intelligence researchers explained: "The bell curve for Whites is centered roughly around IQ 100; the bell curve for American Blacks roughly around 85; and those for different subgroups of Hispanics roughly midway between those for Whites and Blacks" (Arvey et al., 1994). Comparable results come from other academic aptitude tests. In recent years, the Black-White difference has diminished somewhat, and among children has dropped to 10 points in some studies (Dickens & Flynn, 2006). Yet the test score gap stubbornly persists, and other studies suggest the gap stopped narrowing among those born after 1970 (Murray, 2006, 2007). There are differences among other groups as well. New Zealanders of European descent outscore native Maori New Zealanders. Israeli Jews outscore Israeli Arabs. Most Japanese outscore the stigmatized Japanese minority, the Burakumin. And those who can hear outscore those born deaf (Braden, 1994; Steele, 1990; Zeidner, 1990). Everyone further agrees that such group differences provide little basis for judging individuals. Women outlive men by six years, but knowing someone's sex doesn't tell us with any precision how long that person will live. Even Charles Murray and Richard Herrnstein (1994), whose writings drew attention to Black-White differences, reminded us that "millions of Blacks have higher IQs than the average White." Swedes and Bantus differ in complexion and language. That first factor is genetic, the second environmental. So what about intelligence scores? As we have seen, heredity contributes to individual differences in intelligence. Does that mean it also contributes to group differences?
Some psychologists believe it does, perhaps because of the world’s differing climates and survival challenges (Herrnstein & Murray, 1994; Lynn, 1991, 2001; Rushton & Jensen, 2005, 2006).


|| In prosperous country X everyone eats all they want. In country Y the rich are well fed, but the semistarved poor are often thin. In which country will the heritability of body weight be greater? (Answer: Heritability—differences due to genes—will be greater in country X, where environmental differences in nutrition are minimal.) ||

|| Since 1830, the average Dutch man has grown from 5 feet 5 inches to nearly 6 feet. ||

FIGURE 35.5 Group differences and environmental impact Even if the variation between members within a group reflects genetic differences, the average difference between groups may be wholly due to the environment. Imagine that seeds from the same mixture are sown in two window boxes, one with poor soil and one with fertile soil. Although height differences within each window box will be genetic, the height difference between the two groups will be environmental. (From Lewontin, 1976.)

But we have also seen that group differences in a heritable trait may be entirely environmental, as in our earlier barrel-versus-home-reared boys example. Consider one of nature's experiments: Allow some children to grow up hearing their culture's dominant language, while others, born deaf, do not. Then give both groups an intelligence test rooted in the dominant language, and (no surprise) those with expertise in that language will score highest. Although individual performance differences may be substantially genetic, the group difference is not (FIGURE 35.5). Also consider: If each identical twin were exactly as tall as his or her co-twin, heritability would be 100 percent. Imagine that we then separated some young twins and gave only half of them a nutritious diet, and that the well-nourished twins all grew to be exactly 3 inches taller than their counterparts—an environmental effect comparable to that actually observed in both Britain and America, where adolescents are several inches taller than their counterparts were a half-century ago. What would the heritability of height now be for our well-nourished twins? Still 100 percent, because the variation in height within the group would remain entirely predictable from the heights of their malnourished identical siblings. So even perfect heritability within groups would not eliminate the possibility of a strong environmental impact on the group differences.

Might the racial gap be similarly environmental? Consider: Genetics research reveals that under the skin, the races are remarkably alike (Cavalli-Sforza et al., 1994; Lewontin, 1982). Individual differences within a race are much greater than differences between races. The average genetic difference between two Icelandic villagers or between two Kenyans greatly exceeds the group difference between Icelanders and Kenyans. Moreover, looks can deceive. Light-skinned Europeans and dark-skinned Africans are genetically closer than are dark-skinned Africans and dark-skinned Aboriginal Australians. Race is not a neatly defined biological category. Some scholars argue that there is a reality to race, noting that there are genetic markers for race (the continent of one's ancestry) and that medical risks (such as skin cancer or high blood pressure) vary by race. Behavioral traits may also vary by race. "No runner of Asian or European descent—a majority of the world's population—has broken 10 seconds in the 100-meter dash, but dozens of runners of West African descent have done so," observes psychologist David Rowe (2005). Many social scientists, though, see race primarily as a social construction without well-defined physical boundaries (Helms et al., 2005; Smedley & Smedley, 2005; Sternberg et al., 2005). People with varying ancestry may categorize themselves in the same race. Moreover, with increasingly mixed ancestries, more and more people defy neat racial categorization. (What race is Tiger Woods?)

Asian students outperform North American students on math achievement and aptitude tests. But this difference appears to be a recent phenomenon and may reflect conscientiousness more than competence. Asian students also attend school 30 percent more days per year and spend much more time in and out of school studying math (Geary et al., 1996; Larson & Verma, 1999; Stevenson, 1992). The intelligence test performance of today's better-fed, better-educated, and more test-prepared population exceeds that of the 1930s population—by the same margin that the intelligence test score of the average White today exceeds that of the average Black. No one attributes the generational group difference to genetics.

Nature's own morphing Nature draws no sharp boundaries between races, which blend gradually one into the next around the Earth. Thanks to the human urge to classify, however, people socially define themselves in racial categories, which become catch-all labels for physical features, social identity, and nationality.
White and Black infants have scored equally well on an infant intelligence measure (preference for looking at novel stimuli—a crude predictor of future intelligence scores [Fagan, 1992]). When Blacks and Whites have or receive the same pertinent knowledge, they exhibit similar information-processing skill. "The data support the view that cultural differences in the provision of information may account for racial differences in IQ," report researchers Joseph Fagan and Cynthia Holland (2007).

In different eras, different ethnic groups have experienced golden ages—periods of remarkable achievement. Twenty-five hundred years ago, it was the Greeks and the Egyptians, then the Romans; in the eighth and ninth centuries, genius seemed to reside in the Arab world; 500 years ago it was the Aztec Indians and the peoples of Northern Europe. Today, people marvel at Asians' technological genius. Cultures rise and fall over centuries; genes do not. That fact makes it difficult to attribute a natural superiority to any race.

Moreover, consider the striking results of a national study that looked back over the mental test performances of White and Black young adults after graduation from college. From eighth grade through the early high school years, the average aptitude scores of the White students increased, while those of the Black students decreased—creating a gap that reached its widest point at about the time that high school students take college admissions tests. But during college, the Black students' scores increased "more than four times as much" as those of their White counterparts, thus greatly decreasing the aptitude gap. "It is not surprising," concluded researcher Joel Myerson and his colleagues (1998), "that as Black and White students complete more grades in high school environments that differ in quality, the gap in cognitive test scores widens. At the college level, however, where Black and White students are exposed to educational environments of comparable quality . . . many Blacks are able to make remarkable gains, closing the gap in test scores."

A culture of scholarship The children of the Indochinese refugee families studied by Nathan Caplan, Marcella Choy, and James Whitmore (1992) typically excel in school. On weekday nights after dinner, the family clears the table and begins homework. Family cooperation is valued, and older siblings help younger ones.

"Do not obtain your slaves from Britain, because they are so stupid and so utterly incapable of being taught." Cicero, 106–43 B.C.

The Question of Bias

35-3 Are intelligence tests inappropriately biased? If one assumes that race is a meaningful concept, the debate over race differences in intelligence divides into three camps, note Earl Hunt and Jerry Carlson (2007):

• There are genetically disposed race differences in intelligence.
• There are socially influenced race differences in intelligence.
• There are race differences in test scores, but the tests are inappropriate or biased.

Are intelligence tests biased? The answer depends on which of two very different definitions of bias is used, and on an understanding of stereotypes.

Two Meanings of Bias

“Political equality is a commitment to universal human rights, and to policies that treat people as individuals rather than representatives of groups; it is not an empirical claim that all groups are indistinguishable.” Steven Pinker (2006)

A test may be considered biased if it detects not only innate differences in intelligence but also performance differences caused by cultural experiences. This in fact happened to Eastern European immigrants in the early 1900s. Lacking the experience to answer questions about their new culture, many were classified as feeble-minded. David Wechsler, who entered the United States as a 6-year-old Romanian just before this group, designed the Wechsler Adult Intelligence Scale (WAIS). In this popular sense, intelligence tests are biased. They measure your developed abilities, which reflect, in part, your education and experiences. You may have read examples of intelligence test items that make middle-class assumptions (for example, that a cup goes with a saucer, or, as in one of the sample test items from the WAIS, that people buy insurance to protect the value of their homes and possessions). Do such items bias the test against those who do not use saucers or do not have enough possessions to make the cost of insurance relevant? Could such questions explain racial differences in test performance? If so, are tests a vehicle for discrimination, consigning potentially capable children to dead-end classes and jobs? Defenders of aptitude testing note that racial group differences are at least as great on nonverbal items, such as counting digits backward (Jensen, 1983, 1998). Moreover, they add, blaming the test for a group’s lower scores is like blaming a messenger for bad news. Why blame the tests for exposing unequal experiences and opportunities? If, because of malnutrition, people were to suffer stunted growth, would you


blame the measuring stick that reveals it? If unequal past experiences predict unequal future achievements, a valid aptitude test will detect such inequalities. The second meaning of bias—its scientific meaning—is different. It hinges on a test's validity—on whether it predicts future behavior only for some groups of test-takers. For example, if the U.S. SAT accurately predicted the college achievement of women but not that of men, then the test would be biased. In this statistical meaning of the term, the near-consensus among psychologists (as summarized by the U.S. National Research Council's Committee on Ability Testing and the American Psychological Association's Task Force on Intelligence) is that the major U.S. aptitude tests are not biased (Hunt & Carlson, 2007; Neisser et al., 1996; Wigdor & Garner, 1982). The tests' predictive validity is roughly the same for women and men, for Blacks and Whites, and for rich and poor. If an intelligence test score of 95 predicts slightly below average grades, that rough prediction usually applies equally to both genders and all ethnic and economic groups.

Test-Takers’ Expectations Throughout this text, we have seen that our expectations and attitudes can influence our perceptions and behaviors. Once again, we find this effect in intelligence testing. When Steven Spencer and his colleagues (1997) gave a difficult math test to equally capable men and women, women did not perform as well as men—except when they had been led to expect that women usually do as well as men on the test. Otherwise, the women apparently felt apprehensive, and it affected their performance. With Claude Steele and Joshua Aronson, Spencer (2002) also observed this self-fulfilling stereotype threat when Black students, taking verbal aptitude tests under conditions designed to make them feel threatened, performed at a lower level. Critics note that stereotype threat does not fully account for the Black-White aptitude score difference (Sackett et al., 2004, 2008). But it does help explain why Blacks have scored higher when tested by Blacks than when tested by Whites (Danso & Esses, 2001; Inzlicht & Ben-Zeev, 2000). And it gives us insight into why women have scored higher on math tests when no male test-takers were in the group, and why women's chess play drops sharply when they think they are playing a male rather than female opponent (Maass et al., 2008). Steele (1995, 1997) concluded that telling students they probably won't succeed (as is sometimes implied by remedial "minority support" programs) functions as a stereotype that can erode test and school performance. Over time, such students may detach their self-esteem from academics and look for recognition elsewhere. Indeed, as African-American boys progress from eighth to twelfth grade, they tend to underachieve as the disconnect between their grades and their self-esteem becomes pronounced (Osborne, 1997). One experiment randomly assigned some African-American seventh-graders to write for 15 minutes about their most important values (Cohen et al., 2006).
That simple exercise in self-affirmation had the apparent effect of boosting their semester grade point average by 0.26 in a first experiment and 0.34 in a replication. Minority students in university programs that challenge them to believe in their potential, or to focus on the idea that intelligence is malleable rather than fixed, have likewise earned markedly higher grades and had lower dropout rates (Wilson, 2006). What, then, can we realistically conclude about aptitude tests and bias? The tests do seem biased (appropriately so, some would say) in one sense—sensitivity to performance differences caused by cultural experience. But they are not biased in the scientific sense of making valid statistical predictions for different groups. Bottom line: Are the tests discriminatory? Again, the answer can be yes or no. In one sense, yes, their purpose is to discriminate—to distinguish among individuals. In another sense, no, their purpose is to reduce discrimination by reducing reliance on subjective criteria for school and job placement—who you know, how you dress, or whether you are the "right kind of person." Civil service aptitude tests, for example, were devised to discriminate more fairly and objectively by reducing the political, racial, and ethnic discrimination that preceded their use. Banning aptitude tests would lead those who decide on jobs and admissions to rely more on other considerations, such as their personal opinions. Perhaps, then, our goals for tests of mental abilities should be threefold. First, we should realize the benefits Alfred Binet, the founder of modern-day intelligence testing, foresaw—to enable schools to recognize who might profit most from early intervention. Second, we must remain alert to Binet's fear that intelligence test scores may be misinterpreted as literal measures of a person's worth and potential. And finally, we must remember that the competence that general intelligence tests sample is important; it helps enable success in some life paths. But it reflects only one aspect of personal competence. Our practical intelligence and emotional intelligence matter, too, as do other forms of creativity, talent, and character. The carpenter's spatial ability differs from the programmer's logical ability, which differs from the poet's verbal ability. Because there are many ways of being successful, our differences are variations of human adaptability.

"Math class is tough!" "Teen talk" talking Barbie doll (introduced February 1992, recalled October 1992)

stereotype threat a self-confirming concern that one will be evaluated based on a negative stereotype.

Untestable compassion Intelligence test scores are only one part of the picture of a whole person. They don't measure the abilities, talent, and commitment of, for example, people who devote their lives to helping others.

"Almost all the joyful things of life are outside the measure of IQ tests." Madeleine L'Engle, A Circle of Quiet, 1972


Review Genetic and Environmental Influences on Intelligence

35-1 What does evidence reveal about hereditary and environmental influences on intelligence? Studies of twins, family members, and adoptees together point to a significant hereditary contribution to intelligence scores. The search is under way for genes that together contribute to intelligence. Yet research also provides evidence of environmental influence. The intelligence test scores of fraternal twins raised together are more similar than those of other siblings, and the scores of identical twins raised apart are slightly less similar (though still very highly correlated) than the scores of identical twins raised together. Other studies, of children reared in extremely impoverished, enriched, or culturally different environments, indicate that life experiences can significantly influence intelligence test performance. Heritability of intelligence refers to the extent to which variation in intelligence test scores in a group of people being studied is attributable to genetic factors. Heritability never applies to an individual's intelligence, but only to differences among people.

35-2 How and why do gender and racial groups differ in mental ability scores? Males and females average the same in overall intelligence. There are, however, some small but intriguing gender differences in specific abilities. Girls are better spellers, more verbally fluent, better at locating objects, better at detecting emotions, and more sensitive to touch, taste, and odor. Boys outperform girls at spatial ability and related mathematics, though girls outperform boys in math computation. Boys also outnumber girls at the low and high extremes of mental abilities. Psychologists debate evolutionary, brain-based, and cultural explanations of such gender differences. As a group, Whites score higher than their Hispanic and Black counterparts, though the gap is not as great as it was half a century and more ago. The evidence suggests that environmental differences are largely, perhaps entirely, responsible for these group differences.

35-3 Are intelligence tests inappropriately biased? Aptitude tests aim to predict how well a test-taker will perform in a given situation. So they are necessarily "biased" in the sense that they are sensitive to performance differences caused by cultural experience. But bias can also mean what psychologists commonly mean by the term—that a test predicts less accurately for one group than for another. In this sense of the term, most experts consider the major aptitude tests unbiased. Stereotype threat, a self-confirming concern that one will be evaluated based on a negative stereotype, affects performance on all kinds of tests.

Terms and Concepts to Remember
heritability, p. 430
stereotype threat, p. 439

Test Yourself 1. As society succeeds in creating equality of opportunity, it will also increase the heritability of ability. The heritability of intelligence scores will be greater in a society marked by equal opportunity than in a society of peasants and aristocrats. Why? (Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself 1. How have genetic and environmental influences shaped your intelligence?

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

Motivation and Work

modules
36 Introduction to Motivation
37 Hunger
38 Sexual Motivation and the Need to Belong
39 Motivation at Work

Motivation personified Aron Ralston's motivation to live and belong energized and directed his sacrifice of half of his arm.

"What's my motivation?" the actor asks the director. In our everyday conversation, "What motivated you to do that?" is a way of asking "What caused your behavior?" To psychologists, a motivation is a need or desire that energizes behavior and directs it toward a goal. After an ill-fated Saturday morning in the spring of 2003, experienced mountaineer Aron Ralston understands the extent to which motivation can energize and direct behavior. Having bagged nearly all of Colorado's tallest peaks, many of them solo and in winter, Ralston ventured some solo canyon hiking that seemed so risk-free he didn't bother to tell anyone where he was going. In Utah's narrow Bluejohn Canyon, just 150 yards above his final rappel, he was climbing over an 800-pound rock when disaster struck: It shifted and pinned his right wrist and arm. He was, as the title of his recent book says, caught Between a Rock and a Hard Place. Realizing no one would be rescuing him, Ralston tried with all his might to dislodge the rock. Then, with his dull pocket knife, he tried chipping away at the rock. When that, too, failed, he rigged up ropes to lift the rock. Alas, nothing worked. Hour after hour, then cold night after cold night, he was stuck. By Tuesday, he had run out of food and water. On Wednesday, as thirst and hunger gnawed, he began saving and sipping his own urine. Using his video recorder, he said his good-byes to family and friends, for whom he now felt intense love: "So again love to everyone. Bring love and peace and happiness and beautiful lives into the world in my honor. Thank you. Love you." On Thursday, surprised to find himself still alive, Ralston had a seemingly divine insight into his reproductive future, a vision of a preschool boy being scooped up by a one-armed man. With this inspiration, he summoned his remaining strength and his enormous will to live and, over the next hour, willfully broke his bones and then proceeded to use that dull knife to cut off his arm.
The moment after putting on a tourniquet, chopping the last piece of skin, and breaking free— and before rappelling with his bleeding half-arm down a 65-foot cliff and hiking 5 miles until finding someone—he was, in his own words, “just reeling with this euphoria . . . having been dead and standing in my grave, leaving my last will and testament, etching ‘Rest in peace’ on the wall, all of that, gone and then replaced with having my life again. It was undoubtedly the sweetest moment that I will ever experience” (Ralston, 2004). Ralston’s thirst and hunger, his sense of belonging to others, and his brute will to live and become a father highlight motivation’s energizing and directing power. In Modules 36 through 39, we explore motivation by focusing on four motives—hunger, sex, the need to belong, and achievement at work. Although other identifiable motives exist (including thirst and curiosity), a close look at these four reveals the interplay between nature (the physiological “push”) and nurture (the cognitive and cultural “pulls”). Before considering specific motivations, we’ll take time in Module 36 to see how psychologists have approached the study of motivation.


Instincts and Evolutionary Psychology
Drives and Incentives
Optimum Arousal
A Hierarchy of Motives

module 36 Introduction to Motivation 36-1 From what perspectives do psychologists view motivated behavior?


Psychologists today define motivation as a need or desire that energizes and directs behavior. In their attempt to understand motivated behaviors, psychologists have used four perspectives. Instinct theory (now replaced by the evolutionary perspective) focuses on genetically predisposed behaviors. Drive-reduction theory focuses on how our inner pushes and external pulls interact. Arousal theory focuses on finding the right level of stimulation. And Abraham Maslow’s hierarchy of needs describes how some of our needs take priority over others.

䉴|| Instincts and Evolutionary Psychology “What do you think . . . should we get started on that motivation research or not?”




Same motive, different wiring The more complex the nervous system, the more adaptable the organism. Both the people and the weaver bird satisfy their need for shelter in ways that reflect their inherited capacities. The people’s behavior is flexible; they can learn whatever skills they need to build a house. The bird’s behavior pattern is fixed; it can build only this kind of nest.

Early in the twentieth century, as the influence of Charles Darwin’s evolutionary theory grew, it became fashionable to classify all sorts of behaviors as instincts. If people criticized themselves, it was because of their “self-abasement instinct.” If they boasted, it reflected their “self-assertion instinct.” After scanning 500 books, one sociologist compiled a list of 5759 supposed human instincts! Before long, this fad for naming instincts collapsed under its own weight. Rather than explaining human behaviors, the early instinct theorists were simply naming them. It was like “explaining” a bright child’s low grades by labeling the child an “underachiever.” To name a behavior is not to explain it. To qualify as an instinct, a complex behavior must have a fixed pattern throughout a species and be unlearned (Tinbergen, 1951). Such behaviors are common in other species. Newly hatched ducks and geese form attachments to the first moving object they see. And mature salmon swim hundreds of miles upstream to reach the place they were born, where they will mate and then die. Human behavior, too, exhibits certain unlearned fixed patterns, including infants’ innate reflexes for rooting and sucking. Most psychologists, though, view human behavior as directed both by physiological needs and by psychological wants.



Although instinct theory failed to explain human motives, the underlying assumption that genes predispose species-typical behavior remains as strong as ever. Psychologists may apply this perspective, for example, in explanations of our human similarities, animals’ biological predispositions to learn certain behaviors, and the influence of evolution on our phobias, our helping behaviors, and our romantic attractions.

Drives and Incentives

When the original instinct theory of motivation collapsed, it was replaced by drive-reduction theory—the idea that a physiological need creates an aroused state that drives the organism to reduce the need by, say, eating or drinking. With few exceptions, when a physiological need increases, so does a psychological drive—an aroused, motivated state. The physiological aim of drive reduction is homeostasis—the maintenance of a steady internal state. An example of homeostasis (literally “staying the same”) is the body’s temperature-regulation system, which works like a room thermostat. Both systems operate through feedback loops: Sensors feed room temperature to a control device. If the room temperature cools, the control device switches on the furnace. Likewise, if our body temperature cools, blood vessels constrict to conserve warmth, and we feel driven to put on more clothes or seek a warmer environment (FIGURE 36.1). Not only are we pushed by our “need” to reduce drives, we also are pulled by incentives—positive or negative stimuli that lure or repel us. This is one way our individual learning histories influence our motives. Depending on our learning, the aroma of good food, whether fresh roasted peanuts or toasted ants, can motivate our behavior. So can the sight of those we find attractive or threatening.

Need (e.g., for food, water) → Drive (hunger, thirst) → Drive-reducing behaviors (eating, drinking)

When there is both a need and an incentive, we feel strongly driven. The food-deprived person who smells baking bread feels a strong hunger drive. In the presence of that drive, the baking bread becomes a compelling incentive. For each motive, we can therefore ask, “How is it pushed by our inborn physiological needs and pulled by incentives in the environment?”
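The thermostat analogy describes a negative-feedback loop: a sensor compares the current state with a set point, and a response pushes the state back toward it. As an illustration only (nothing here comes from the text; the function and variable names are invented for the sketch), the cycle can be simulated in a few lines of Python:

```python
# Illustrative sketch of a negative-feedback (homeostatic) loop,
# modeled on the thermostat analogy. Names and numbers are invented.

def regulate(body_temp, set_point=37.0, step=0.5):
    """One feedback cycle: sense the deviation, act to reduce it."""
    if body_temp < set_point:
        return body_temp + step   # "too cold": conserve and generate warmth
    if body_temp > set_point:
        return body_temp - step   # "too warm": shed heat
    return body_temp              # at the set point: no drive, no action

temp = 35.0                       # start below the set point
for _ in range(10):
    temp = regulate(temp)
print(round(temp, 1))             # settles at 37.0
```

Whatever the starting value, repeated cycles move the state toward the set point and then hold it there, which is the “staying the same” that the word homeostasis names.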

Optimum Arousal

We are much more than homeostatic systems, however. Some motivated behaviors actually increase arousal. Well-fed animals will leave their shelter to explore and gain information, seemingly in the absence of any need-based drive. Curiosity drives monkeys to monkey around trying to figure out how to unlock a latch that opens nothing or how to open a window that allows them to see outside their room (Butler, 1954). It drives the 9-month-old infant who investigates every accessible corner of the house. It drives the scientists whose work this text discusses. And it drives explorers and adventurers such as Aron Ralston and George Mallory. Asked why he wanted to climb Mount Everest, Mallory answered, “Because it is there.” Those who, like Mallory and Ralston, enjoy high arousal are most likely to enjoy intense music, novel foods, and risky behaviors (Zuckerman, 1979). So, human motivation aims not to eliminate arousal but to seek optimum levels of arousal. Having all our biological needs satisfied, we feel driven to experience

motivation a need or desire that energizes and directs behavior.
instinct a complex behavior that is rigidly patterned throughout a species and is unlearned.
drive-reduction theory the idea that a physiological need creates an aroused tension state (a drive) that motivates an organism to satisfy the need.
homeostasis a tendency to maintain a balanced or constant internal state; the regulation of any aspect of body chemistry, such as blood glucose, around a particular level.
incentive a positive or negative environmental stimulus that motivates behavior.

FIGURE 36.1 Drive-reduction theory: Drive-reduction motivation arises from homeostasis—an organism’s natural tendency to maintain a steady internal state. Thus, if we are water deprived, our thirst drives us to drink and to restore the body’s normal state.




Driven by curiosity Baby monkeys and young children are fascinated by things they’ve never handled before. Their drive to explore the relatively unfamiliar is one of several motives that do not fill any immediate physiological need.


stimulation and we hunger for information. We are “infovores,” say neuroscientists Irving Biederman and Edward Vessel (2006), after identifying brain mechanisms that reward us for acquiring information. Lacking stimulation, we feel bored and look for a way to increase arousal to some optimum level. However, with too much stimulation comes stress, and we then look for a way to decrease arousal.

A Hierarchy of Motives

Some needs take priority over others. At this moment, with your needs for air and water hopefully satisfied, other motives—such as your desire to achieve—are energizing and directing your behavior. Let your need for water go unsatisfied and your thirst will preoccupy you. Just ask Aron Ralston. Deprived of air, your thirst would disappear. Abraham Maslow (1970) described these priorities as a hierarchy of needs (FIGURE 36.2). At the base of this pyramid are our physiological needs, such as those for food and water. Only if these needs are met are we prompted to meet our need


FIGURE 36.2 Maslow’s hierarchy of needs: Once our lower-level needs are met, we are prompted to satisfy our higher-level needs. (From Maslow, 1970.) For survivors of the disastrous 2007 Bangladeshi flood, such as this man carefully carrying his precious load of clean water, satisfying very basic needs for water, food, and safety becomes top priority. Higher-level needs on Maslow’s hierarchy, such as respect, self-actualization, and meaning, tend to become far less important during such times.

Self-transcendence needs Need to find meaning and identity beyond the self

Self-actualization needs Need to live up to our fullest and unique potential

Esteem needs Need for self-esteem, achievement, competence, and independence; need for recognition and respect from others

Belongingness and love needs Need to love and be loved, to belong and be accepted; need to avoid loneliness and separation

Safety needs Need to feel that the world is organized and predictable; need to feel safe

Physiological needs Need to satisfy hunger and thirst


“Hunger is the most urgent form of poverty.” Alliance to End Hunger, 2002



for safety, and then to satisfy the uniquely human needs to give and receive love and to enjoy self-esteem. Beyond this, said Maslow (1971), lies the need to actualize one’s full potential. Near the end of his life, Maslow proposed that some people also reach a level of self-transcendence. At the self-actualization level, people seek to realize their own potential. At the self-transcendence level, people strive for meaning, purpose, and communion that is beyond the self, that is transpersonal (Koltko-Rivera, 2006). Maslow’s hierarchy is somewhat arbitrary; the order of such needs is not universally fixed. People have starved themselves to make a political statement. Nevertheless, the simple idea that some motives are more compelling than others provides a framework for thinking about motivation. Life-satisfaction surveys in 39 nations support this basic idea (Oishi et al., 1999). In poorer nations that lack easy access to money and the food and shelter it buys, financial satisfaction more strongly predicts feelings of well-being. In wealthy nations, where most are able to meet basic needs, home-life satisfaction is a better predictor. Self-esteem matters most in individualist nations, whose citizens tend to focus more on personal achievements than on family and community identity.
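Maslow’s priority idea, that the lowest unmet need is the one that energizes behavior, can be made concrete with a small sketch. This is only an illustration of the ordering logic (the text itself notes the order is not universally fixed), and the list and function names are invented:

```python
# Illustrative sketch of Maslow's hierarchy as a priority ordering.
# Level names follow Figure 36.2; the code structure is invented.

HIERARCHY = [
    "physiological",            # hunger, thirst
    "safety",                   # an organized, predictable world
    "belongingness and love",
    "esteem",
    "self-actualization",
    "self-transcendence",
]

def active_need(satisfied):
    """Return the lowest level not yet satisfied (None if all are met)."""
    for level in HIERARCHY:
        if level not in satisfied:
            return level
    return None

print(active_need({"physiological"}))   # safety
print(active_need(set(HIERARCHY)))      # None
```

The sketch captures only the textbook claim that lower levels are prepotent; real motivation, as the hunger strikers example shows, does not always follow the list.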

hierarchy of needs Maslow’s pyramid of human needs, beginning at the base with physiological needs that must first be satisfied before higher-level safety needs and then psychological needs become active.

Review: Introduction to Motivation

36-1 From what perspectives do psychologists view motivated behavior?

The instinct/evolutionary perspective explores genetic influences on complex behaviors. Drive-reduction theory explores how physiological needs create aroused tension states (drives) that direct us to satisfy those needs. Arousal theory proposes a motivation for behaviors, such as curiosity-driven behaviors, that do not reduce physiological needs. Maslow’s hierarchy of needs proposes a pyramid of human needs, from basic needs such as hunger and thirst up to higher-level needs such as actualization and transcendence.

Test Yourself

1. While on a long road trip, you suddenly feel very hungry. You see a diner that looks pretty deserted and creepy, but you are really hungry, so you stop anyway. What motivational perspective would most easily explain this behavior, and why?

(Answers to the Test Yourself questions can be found in Appendix B at the end of the book.)

Ask Yourself

1. Consider your own experiences in relation to Maslow’s hierarchy of needs. Have you ever experienced true hunger or thirst that displaced your concern for other, higher-level needs? Do you usually feel safe? Loved? Confident? How often do you feel you are able to address what Maslow called your “self-actualization” needs?

Terms and Concepts to Remember

motivation; instinct; drive-reduction theory; homeostasis; incentive; hierarchy of needs

WEB Multiple-choice self-tests and more may be found at www.worthpublishers.com/myers

module 37

The Physiology of Hunger
The Psychology of Hunger
Obesity and Weight Control

“Nobody wants to kiss when they are hungry.” Dorothea Dix, 1801–1887

“The full person does not understand the needs of the hungry.” Irish proverb

Hunger

A vivid demonstration of the supremacy of physiological needs came from starvation experiences in World War II prison camps. David Mandel (1983), a Nazi concentration camp survivor, recalled how a starving “father and son would fight over a piece of bread. Like dogs.” One father, whose 20-year-old son stole his bread from under his pillow while he slept, went into a deep depression, asking over and over how his son could do such a thing. The next day the father died. “Hunger does something to you that’s hard to describe,” Mandel explained. To learn more about the results of semistarvation, a research team led by physiologist Ancel Keys (1950), the creator of World War II Army K rations, fed 36 male volunteers—all conscientious objectors to the war—just enough to maintain their initial weight. Then, for six months, they cut this food level in half. The effects soon became visible. Without thinking about it, the men began conserving energy; they appeared listless and apathetic. After dropping rapidly, their body weights eventually stabilized at about 25 percent below their starting weights. Especially dramatic were the psychological effects. Consistent with Maslow’s idea of a needs hierarchy, the men became food-obsessed. They talked food. They daydreamed food. They collected recipes, read cookbooks, and feasted their eyes on delectable forbidden foods. Preoccupied with their unfulfilled basic need, they lost interest in sex and social activities. As one participant reported, “If we see a show, the most interesting part of it is contained in scenes where people are eating. I couldn’t laugh at the funniest picture in the world, and love scenes are completely dull.” The semistarved men’s preoccupations illustrate the power of activated motives to hijack our consciousness. When we are hungry, thirsty, fatigued, or sexually aroused, little else may seem to matter.
When we are not, food, water, sleep, or sex just don’t seem like such big things in life, now or ever. In University of Amsterdam studies, Loran Nordgren and his colleagues (2006, 2007) found that people in a motivational “hot” state (from fatigue, hunger, or sexual arousal) become more aware of having had such feelings in the past and more sympathetic to how fatigue, hunger, or sexual arousal might drive others’ behavior. Similarly, if preschool children are made to feel thirsty (by eating salty pretzels), they understandably want water; but unlike children who are not thirsty, they also choose water over pretzels for “tomorrow” (Atance & Meltzoff, 2006). Motives matter mightily. Grocery shop with an empty stomach and you are more likely to think that those jelly-filled doughnuts are just what you’ve always loved and will be wanting tomorrow.

The Physiology of Hunger

37-1 What physiological factors produce hunger?

Keys’ semistarved volunteers felt their hunger in response to a homeostatic system designed to maintain normal body weight and an adequate nutrient supply. But what precisely triggers hunger? Is it the pangs of an empty stomach? That is how it feels. And so it seemed after A. L. Washburn, working with Walter Cannon (Cannon & Washburn, 1912), intentionally swallowed a balloon. When inflated to fill his stomach, the balloon transmitted his stomach contractions to a recording device





(FIGURE 37.1). While his stomach was being monitored, Washburn pressed a key each time he felt hungry. The discovery: Washburn was indeed having stomach contractions whenever he felt hungry. Would hunger persist without stomach pangs? To answer that question, researchers removed some rats’ stomachs and attached their esophagi to their small intestines (Tsang, 1938). Did the rats continue to eat? Indeed they did. Some hunger persists similarly in humans whose ulcerated or cancerous stomachs have been removed. If the pangs of an empty stomach are not the only source of hunger, what else matters?

FIGURE 37.1 Monitoring stomach contractions: Using this procedure, Washburn showed that stomach contractions (transmitted by the stomach balloon) accompany our feelings of hunger (indicated by a key press). Washburn swallows a balloon, which measures stomach contractions, and presses a key each time he feels hungry; the record tracks stomach contractions and hunger pangs over time in minutes. (From Cannon, 1929.)

Body Chemistry and the Brain

People and other animals automatically regulate their caloric intake to prevent energy deficits and maintain a stable body weight. This suggests that somehow, somewhere, the body is keeping tabs on its available resources. One such resource is the blood sugar glucose. Increases in the hormone insulin (secreted by the pancreas) diminish blood glucose, partly by converting it to stored fat. If your blood glucose level drops, you won’t consciously feel this change. But your brain, which is automatically monitoring your blood chemistry and your body’s internal state, will trigger hunger. Signals from your stomach, intestines, and liver (indicating whether glucose is being deposited or withdrawn) all signal your brain to motivate eating or not.

glucose the form of sugar that circulates in the blood and provides the major source of energy for body tissues. When its level is low, we feel hunger.


But how does the brain integrate and respond to these messages? More than a half-century ago, researchers began unraveling this puzzle when they located hunger controls within the hypothalamus, that small but complex neural traffic intersection deep in the brain (FIGURE 37.2). Two distinct hypothalamic centers influence eating. Activity along the sides of the hypothalamus (the lateral hypothalamus) brings on hunger. If electrically stimulated there, well-fed animals begin to eat. (If the area is destroyed, even starving animals have no interest in food.) Recent research helps explain this behavior. When a rat is food-deprived, its blood sugar levels wane and the lateral hypothalamus churns out the hunger-triggering hormone orexin. When given orexin, rats become ravenously hungry (Sakurai et al., 1998). Activity in the second center—the lower mid-hypothalamus (the ventromedial hypothalamus)—depresses hunger. Stimulate this area and an animal will stop eating; destroy it and the animal’s stomach and intestines will process food more rapidly, causing it to become extremely fat (Duggan & Booth, 1986; Hoebel & Teitelbaum, 1966). This discovery helped explain why some patients with tumors near the base of the brain (in what we now realize is the hypothalamus) eat excessively and become very overweight (Miller, 1995). Rats with mid-hypothalamus lesions eat more often, produce more fat, and use less fat for energy, rather like a miser who runs every bit of extra money to the bank and resists taking any out (Pinel, 1993).

FIGURE 37.2 The hypothalamus: The hypothalamus (colored red) performs various body maintenance functions, including control of hunger. Blood vessels supply the hypothalamus, enabling it to respond to our current blood chemistry as well as to incoming neural information about the body’s state.

Evidence for the brain’s control of eating: A lesion near the ventromedial area of the hypothalamus caused this rat’s weight to triple.

FIGURE 37.3 The appetite hormones:
Insulin: Secreted by pancreas; controls blood glucose.
Leptin: Secreted by fat cells; when abundant, causes brain to increase metabolism and decrease hunger.
Orexin: Hunger-triggering hormone secreted by hypothalamus.
Ghrelin: Secreted by empty stomach; sends “I’m hungry” signals to the brain.
Obestatin: Secreted by stomach; sends out “I’m full” signals to the brain.
PYY: Digestive tract hormone; sends “I’m not hungry” signals to the brain.

In addition to producing orexin, the hypothalamus monitors levels of the body’s other appetite hormones (FIGURE 37.3). One of these is ghrelin, a hunger-arousing hormone secreted by an empty stomach. When people with severe obesity undergo bypass surgery that seals off part of the stomach, the remaining stomach then produces much less ghrelin, and their appetite lessens (Lemonick, 2002). Obestatin, a sister hormone to ghrelin, is produced by the same gene, but obestatin sends out a fullness signal that suppresses hunger (Zhang et al., 2005). Other appetite-suppressants include PYY, a hormone secreted by the digestive tract, and leptin, a protein that is secreted by fat cells and acts to diminish the rewarding pleasure of food (Farooqi et al., 2007). Experimental manipulation of appetite hormones has raised hopes for an appetite-reducing medication. Such a nose spray or skin patch might counteract the body’s hunger-producing chemicals, or mimic or increase the levels of hunger-dampening chemicals. The recent ups and downs of excitement over PYY illustrate the intense search for a substance that might someday be a treatment, if not a magic bullet, for obesity. The initial report that PYY suppresses appetite in mice was followed by a skeptical statement from 12 laboratories reporting a big fat disappointment: The PYY finding did not replicate. But a few months later, this was followed by newer studies using different methods that did find at least a temporary appetite-suppressing effect (Gura, 2004). The complex interaction of appetite hormones and brain activity may help explain the body’s apparent predisposition to maintain itself at a particular weight level. When semistarved rats fall below their normal weight, this “weight thermostat” signals the body to restore the lost weight: Hunger increases and energy expenditure decreases. If body weight rises—as happens when rats are force-fed—hunger decreases and energy expenditure increases. This stable weight toward which semistarved and overstuffed rats return is their set point (Keesey & Corbett, 1983). In rats and humans, heredity influences body type and set point.
Our bodies regulate weight through the control of food intake, energy output, and basal metabolic rate—the rate of energy expenditure for maintaining basic body functions when the body is at rest. By the end of their 24 weeks of semistarvation, the men who participated in Keys’ experiment had stabilized at three-quarters of their normal weight, while taking in half of their previous calories. How did they manage this? By reducing their energy expenditure, partly through inactivity but partly because of a 29 percent drop in their basal metabolic rate. Some researchers, however, doubt that our bodies have a preset tendency to maintain optimum weight (Assanand et al., 1998). They point out that slow, sustained changes in body weight can alter one’s set point, and that psychological factors also sometimes drive our feelings of hunger. Given unlimited access to a wide variety of tasty foods, people and other animals tend to overeat and gain weight (Raynor & Epstein, 2001). For all these reasons, some researchers have abandoned the idea of a biologically fixed set point. They prefer the term settling point to indicate the level at which a person’s weight settles in response to caloric intake and expenditure (which are influenced by environment as well as biology).



“Never get a tattoo when you’re drunk and hungry.”

Over the next 40 years you will eat about 20 tons of food. If, during those years, you increase your daily intake by just .01 ounce more than required for your energy needs, you will gain 24 pounds (Martin et al., 1991).

The Psychology of Hunger

37-2 What psychological and cultural factors influence hunger?

Our eagerness to eat is indeed pushed by our physiological state—our body chemistry and hypothalamic activity. Yet there is more to hunger than meets the stomach. This was strikingly apparent when Paul Rozin and his trickster colleagues (1998) tested two patients with amnesia who had no memory for events occurring more than a minute ago. If, 20 minutes after eating a normal lunch, the patients were offered another, both readily consumed it . . . and usually a third meal offered 20 minutes after

set point the point at which an individual’s “weight thermostat” is supposedly set. When the body falls below this weight, an increase in hunger and a lowered metabolic rate may act to restore the lost weight.

basal metabolic rate the body’s resting rate of energy expenditure.


the second was finished. This suggests that part of knowing when to eat is our memory of our last meal. As time passes since we last ate, we anticipate eating again and start feeling hungry. Psychological influences on eating behavior are most striking when the desire to be thin overwhelms normal homeostatic pressures.

Taste Preferences: Biology and Culture

FIGURE 37.4 Hot cultures like hot spices: Countries with hot climates, in which food historically spoiled more quickly, feature recipes with more bacteria-inhibiting spices (Sherman & Flaxman, 2001). India averages nearly 10 spices per meat recipe; Finland, 2 spices.

Body chemistry and environmental factors together influence not only when we feel hungry, but also what we hunger for—our taste preferences. When feeling tense or depressed, do you crave starchy, carbohydrate-laden foods? Carbohydrates help boost levels of the neurotransmitter serotonin, which has calming effects. When stressed, even rats find it extra rewarding to scarf Oreos (Artiga et al., 2007; Boggiano et al., 2005). Our preferences for sweet and salty tastes are genetic and universal. Other taste preferences are conditioned, as when people given highly salted foods develop a liking for excess salt (Beauchamp, 1987), or when people who have been sickened by a food develop an aversion to it. (The frequency of children’s illnesses provides many chances for them to learn food aversions.) Culture affects taste, too. Bedouins enjoy eating the eye of a camel, which most North Americans would find repulsive. But then North Americans and Europeans shun horse, dog, and rat meat, all of which are prized elsewhere. Rats themselves tend to avoid unfamiliar foods (Sclafani, 1995). So do we, especially those that are animal-based. In experiments, people have tried novel fruit drinks or ethnic foods. With repeated exposure, their appreciation for the new taste typically increases; moreover, exposure to one set of novel foods increases our willingness to try another (Pliner, 1982; Pliner et al., 1993). Neophobia (dislike of things unfamiliar) surely was adaptive for our ancestors, protecting them from potentially toxic substances. Other taste preferences are also adaptive. For example, the spices most commonly used in hot-climate recipes—where food, especially meat, spoils more quickly—inhibit the growth of bacteria (FIGURE 37.4, which plots spices per recipe against mean annual temperature in degrees Celsius). Pregnancy-related nausea is another example of adaptive taste preferences. Its associated food aversions peak about the tenth week, when the developing embryo is most vulnerable to toxins.



An acquired taste For Alaska Natives (left), but not for most other North Americans, whale blubber is a tasty treat. For these Campa Indians in Peru (right), roasted ants are similarly delicious. People everywhere learn to enjoy the foods prescribed by their culture.




The Ecology of Eating

To a surprising extent, situations also control our eating. You perhaps have noticed one situational phenomenon, though you likely have underestimated its power: People eat more when eating with others (Herman et al., 2003; Hetherington et al., 2006). The presence of others tends to amplify our natural behavior tendencies (a phenomenon called social facilitation, which helps explain why, after a party or a feast, we may realize that we have overeaten). Another aspect of the ecology of eating, which Andrew Geier and his colleagues (2006) call unit bias, occurs with similar mindlessness. In collaboration with researchers at France’s National Center for Scientific Research, they explored a possible explanation of why French waistlines are smaller than American waistlines. From soda drinks to yogurt sizes, the French offer foods in smaller portion sizes. Does it matter? (One could as well order two small sandwiches as one large one.) To find out, the investigators offered people varieties of free snacks. For example, in the lobby of an apartment house, they laid out either full or half pretzels, big or little Tootsie Rolls, or a big bowl of M&M’s with either a small or large serving scoop. Their consistent result: Offered a supersized standard portion, people put away more calories. Another research team led by Brian Wansink (2006) invited some Americans to help themselves to ice cream. They, too, found a unit bias: Even nutrition experts took 31 percent more when given a big rather than small bowl, and 15 percent more when scooping it with a big scoop rather than a small one. For cultures struggling with rising obesity rates, the principle—that ecology influences eating—implies a practical message: Reduce standard portion sizes, and serve food with smaller bowls, plates, and utensils.

anorexia nervosa an eating disorder in which a person (usually an adolescent female) diets and becomes significantly (15 percent or more) underweight yet, still feeling fat, continues to starve.
bulimia nervosa an eating disorder characterized by episodes of overeating, usually of high-calorie foods, followed by vomiting, laxative use, fasting, or excessive exercise.

Eating Disorders

37-3 How do anorexia nervosa, bulimia nervosa, and binge-eating disorder demonstrate the influence of psychological forces on physiologically motivated behaviors?

Our bodies are naturally disposed to maintain a normal weight, including stored energy reserves for times when food becomes unavailable. Yet sometimes psychological influences overwhelm biological wisdom. This becomes painfully clear in three eating disorders.

- Anorexia nervosa typically begins as a weight-loss diet. People with anorexia—usually adolescents and 3 out of 4 times females—drop significantly (typically 15 percent or more) below normal weight. Yet they feel fat, fear gaining weight, and remain obsessed with losing weight. About half of those with anorexia display a binge-purge-depression cycle.
- Bulimia nervosa may also be triggered by a weight-loss diet, broken by gorging on forbidden foods. Binge-purge eaters—mostly women in their late teens or early twenties—eat the way some people with alcohol dependency drink: in spurts, sometimes influenced by friends who are bingeing (Crandall, 1988). In a cycle of repeating episodes, overeating is followed by compensatory purging (through vomiting or laxative use) or fasting or excessive exercise (Wonderlich et al., 2007). Preoccupied with food (craving sweet and high-fat foods), and fearful of becoming overweight, binge-purge eaters experience bouts of depression and anxiety, most severe during and following binges (Hinz & Williamson, 1987; Johnson et al., 2002). Unlike anorexia, bulimia is marked by weight fluctuations within or above normal ranges, making the condition easy to hide.
- Those who do significant binge eating, followed by remorse—but do not purge, fast, or exercise excessively—are said to have binge-eating disorder.

Dying to be thin: Anorexia was identified and named in the 1870s, when it appeared among affluent adolescent girls (Brumberg, 2000). This 1930s photo illustrates the physical condition (left). Many modern-day celebrities have struggled publicly with eating disorders, as when actress Mary-Kate Olsen (right) was admitted to a rehabilitation clinic in Utah for six weeks in 2004 for treatment of anorexia.

“Thanks, but we don’t eat.”

A national study funded by the U.S. National Institute of Mental Health reports that, at some point during their lifetimes, 0.6 percent of people meet the criteria for anorexia, 1 percent for bulimia, and 2.8 percent for binge-eating disorder (Hudson et al., 2007). So, how can we explain these disorders? Eating disorders do not provide (as some have speculated) a telltale sign of childhood sexual abuse (Smolak & Murnen, 2002; Stice, 2002). The family environment may provide a fertile ground for the growth of eating disorders in other ways, however.

• Mothers of girls with eating disorders tend to focus on their own weight and on their daughters' weight and appearance (Pike & Rodin, 1991).

• Families of bulimia patients have a higher-than-usual incidence of childhood obesity and negative self-evaluation (Jacobi et al., 2004).

• Families of anorexia patients tend to be competitive, high-achieving, and protective (Pate et al., 1992; Yates, 1989, 1990).

Anorexia sufferers often have low self-evaluations, set perfectionist standards, fret about falling short of expectations, and are intensely concerned with how others perceive them (Pieters et al., 2007; Polivy & Herman, 2002; Striegel-Moore et al., 1993, 2007). Some of these factors also predict teen boys' pursuit of unrealistic muscularity (Ricciardelli & McCabe, 2004).

Genetics may influence susceptibility to eating disorders. Identical twins are somewhat more likely than fraternal twins to share the disorder (Fairburn et al., 1999; Kaplan, 2004). In follow-up molecular studies, scientists are searching for culprit genes, which may influence the body's available serotonin and estrogen (Klump & Culbert, 2007).

But these disorders also have cultural and gender components. Body ideals vary across culture and time. In India, women students rate their ideals as close to their actual shape. In much of Africa, where plumpness signals prosperity and thinness can signal poverty, AIDS, and hunger, bigger seems better (Knickmeyer, 2001). Bigger does not seem better in Western cultures, where, according to 222 studies of 141,000 people, the rise in eating disorders over the last 50 years has coincided with a dramatic increase in women's poor body image (Feingold & Mazzella, 1998). In one national survey, nearly half of all U.S. women reported feeling negative about their appearance and preoccupied with being or becoming overweight (Cash & Henry, 1995).
Gender differences in body image have surfaced in several studies. In one study of New Zealand university students and 3500 British bank and university staff, men were more likely to be overweight, and women were more likely to perceive themselves as overweight (Emslie et al., 2001; Miller & Halberstadt, 2005). In another study, at the University of Michigan, men and women donned either a sweater or a swimsuit and completed a math test while alone in a changing room (Fredrickson et al., 1998). For the women, but not the men, wearing the swimsuit triggered self-consciousness and shame that disrupted their math performance. That surely explains why a survey of 52,677 adults found that 16 percent of men and 31 percent of women avoid wearing a swimsuit in public (Frederick et al., 2006). In another, informal survey of 60,000 people, 9 in 10 women said they would rather have a perfect body than a mate with a perfect body; 6 of 10 men preferred the reverse (Lever, 2003).


“Diana remained throughout a very insecure person at heart, almost childlike in her desire to do good for others, so she could release herself from deep feelings of unworthiness, of which her eating disorders were merely a symptom.”
Charles, Ninth Earl of Spencer, eulogizing his sister Princess Diana, 1997



• “Skeletons on parade” A recent article used this headline in criticizing superthin models. Do such models make self-starvation fashionable?

binge-eating disorder significant binge-eating episodes, followed by distress, disgust, or guilt, but without the compensatory purging, fasting, or excessive exercise that marks bulimia nervosa.

“Why do women have such low self-esteem? There are many complex psychological and societal reasons, by which I mean Barbie.”
Dave Barry, 1999

Those most vulnerable to eating disorders are also those (usually women) who most idealize thinness and have the greatest body dissatisfaction (Striegel-Moore & Bulik, 2007). Should it surprise us, then, that when women view real and doctored images of unnaturally thin models and celebrities, they often feel ashamed, depressed, and dissatisfied with their own bodies, the very attitudes that predispose eating disorders (Grabe et al., 2008)? Eric Stice and his colleagues (2001) tested this idea by giving some adolescent girls (but not others) a 15-month subscription to an American teen-fashion magazine. Compared with their counterparts who had not received the magazine, vulnerable girls (defined as those who were already dissatisfied, idealizing thinness, and lacking social support) exhibited increased body dissatisfaction and eating disorder tendencies. But even ultrathin models do not reflect the impossible standard of the classic Barbie doll, who had, when adjusted to a height of 5 feet 7 inches, a 32–16–29 figure (in centimeters, 82–41–73) (Norton et al., 1996).

FIGURE 37.5 Levels of analysis for our hunger motivation Clearly, we are biologically driven to eat, yet psychological and social-cultural factors strongly influence what, when, and how much we eat. The figure shows three sets of influences converging on eating behavior:
• Biological influences: hypothalamic centers in the brain monitoring appetite; appetite hormones; stomach pangs; weight set/settling point; attraction to sweet and salty tastes; adaptive wariness toward novel foods
• Psychological influences: sight and smell of food; variety of foods available; memory of time elapsed since last meal; stress and mood; food unit size
• Social-cultural influences: culturally learned taste preferences; responses to cultural preferences for appearance

It seems clear that the sickness of today's eating disorders lies in part within our weight-obsessed culture, a culture that says, in countless ways, “Fat is bad,” that motivates millions of women to be “always dieting,” and that encourages eating binges by pressuring women to live in a constant state of semistarvation. If cultural learning contributes to eating behavior (FIGURE 37.5), then might prevention programs increase acceptance of one's body? From their review of 66 prevention studies, Stice and his colleagues (2007) answer yes, and especially if the programs are interactive and focused on girls over age 15.



Obesity and Weight Control

37-4 What factors predispose some people to become and remain obese?

Why do some people gain weight while others eat the same amount and seldom add a pound? And why do so few overweight people win the battle of the bulge? Is there weight-loss hope for the 66 percent of Americans who, according to the Centers for Disease Control, are overweight?

Our bodies store fat for good reasons. Fat is an ideal form of stored energy: a high-calorie fuel reserve to carry the body through periods when food is scarce, a common occurrence in the feast-or-famine existence of our prehistoric ancestors. (Think of that spare tire around the middle as an energy storehouse, biology's counterpart to a hiker's waist-borne snack pack.) No wonder that in most developing societies today, as in Europe in earlier centuries, and in fact wherever people face famine, obesity signals affluence and social status (Furnham & Baguma, 1994).

In those parts of the world where food and sweets are now abundantly available, the rule that once served our hungry distant ancestors (When you find energy-rich fat or sugar, eat it!) has become dysfunctional. Pretty much everywhere this book is being read, people have a growing problem. Worldwide, estimates the World Health Organization (WHO) (2007), more than 1 billion people are overweight, and 300 million of them are clinically obese (defined by WHO as a body mass index of 30 or more; see FIGURE 37.6). In the United States, the adult obesity rate has more than doubled in the last 40 years, reaching 34 percent, and child-teen obesity has quadrupled (CDC, 2007; NCHS, 2007). Australia classifies 54 percent of its population as overweight or obese, with Canada close behind at 49 percent, and France at 42 percent (Australian Bureau of Statistics, 2007; Statistics Canada, 2007). In all these and many other countries, rising obesity rates trail the

FIGURE 37.6 Obesity measured as body mass index (BMI) U.S. government guidelines encourage a BMI under 25. The World Health Organization and many countries define obesity as a BMI of 30 or more. The shading in this graph is based on BMI measurements for these heights and weights: the chart plots height from 4'10" to 6'2" (1.5 to 1.9 meters) against weight from 40 to 150 kilograms (84 to 308 pounds), with shaded regions labeled, from lightest to heaviest, Underweight, Healthy, Overweight, Obese, and Morbidly obese. BMI is calculated by using the following formula:

BMI = weight in kg (pounds × .45) ÷ squared height in meters (inches ÷ 39.4)²
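The BMI formula in the Figure 37.6 caption is easy to verify with a few lines of code. The sketch below is our own illustration (the function name and example values are not from the book); it applies the caption's pound-to-kilogram and inch-to-meter conversions:

```python
def bmi(weight_lb, height_in):
    """Body mass index from U.S. units, using the caption's conversions."""
    kilograms = weight_lb * 0.45    # pounds to kilograms
    meters = height_in / 39.4       # inches to meters
    return kilograms / meters ** 2  # BMI = kg / m^2

# A 5'10" (70-inch), 174-pound adult falls just under the U.S. guideline
# cutoff of 25, and well under the WHO obesity cutoff of 30.
print(round(bmi(174, 70), 1))  # 24.8
```

Note that both conversion factors are the rounded values the caption gives; a calculation using exact factors (0.4536 kg per pound, 39.37 inches per meter) would differ slightly.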



FIGURE 37.7 Obesity and mortality Relative risk of death among healthy nonsmokers rises with extremely high or low body mass index. (Data from a 14-year study of 1.05 million Americans; Calle et al., 1999.) The graph plots relative risk of death, on a scale from 0.6 to 2.8, against body mass index.
