Article summaries


The student will post one thread of at least 250 words. The thread must integrate insight from the textbook and 2 other scholarly sources.

Lifespan development is a broad topic, covering multiple stages, theories, and critical issues. As we begin the course, do a search for 2 scholarly journal articles related to lifespan development. Provide a brief description of these articles and how they relate to the topic of lifespan development as well as to the textbook.

Attached are Chapters 1 and 2.

Chapter 1

Organizing Themes in Development

Learning Objectives

Mai is a 56-year-old woman who was born to a poor Vietnamese immigrant family in rural California. When Mai was 5, her parents moved the family to a large city where they eventually succeeded in building a stable business that provided an adequate income. Mai was typically quiet and shy, and she had difficulty making friends in elementary school, often feeling left out of her peers’ activities. Her social life improved in adolescence, but she often felt the need to hide her outstanding academic skills to fit in. In college and medical school, Mai felt more accepted for her intellectual prowess and freer to be herself, but she still ruminated at times about what others thought of her, and was plagued by vague anxieties. By her mid-twenties, cyclical problems with depression and anxiety had become a part of her existence. Her marriage at 33 to a scholarly man provided a haven for her, and life seemed calmer and less frightening. After the birth of a son at age 38, Mai again felt overwhelmed by anxiety. The couple struggled to balance the complex needs of a fragile infant with their own careers, and Mai’s husband found her heavy dependence on his attention and calming influence difficult to accommodate. As their son grew, however, the couple handled the balancing act more skillfully. Now, Mai’s child is starting college. Mai is busy with her work and usually finds her anxiety manageable. She continues to view her husband as the steadying force in her life.

Mai’s story raises a host of questions about the influences that have shaped her life. How important was Mai’s early poverty, her cultural background, and her parents’ immigrant status? What was the source of her early social inhibition? Would things have been different for her if her parents had not been able to eventually provide economic stability? Were Mai’s early difficulties forming social relationships just a “stage,” or were they foundational to her later problems with depression and anxiety? Did stereotype threat (expecting to be judged on the basis of ethnicity or gender) play a role? How unusual is it for a married couple to experience increased conflicts following the birth of a child? If Mai and her husband had divorced, would their child have suffered lasting emotional damage? Is Mai’s intellectual ability likely to change as she continues to age? Are her emotional problems likely to increase or decrease? What factors enable any person to overcome early unfavorable experiences and become a successful, healthy adult? And conversely, why do some people who do well as children experience emotional or behavioral problems as adults? These intriguing questions represent a sampling of the kinds of topics that developmental scientists tackle. Their goal is to understand life span development: human behavioral change from conception to death. “Behavioral” change refers broadly to change in both observable activity (e.g., from crawling to walking) and mental activity (e.g., from disorganized to logical thinking). More specifically, developmental science seeks to

· describe people’s behavioral characteristics at different ages,

· identify how people are likely to respond to life’s experiences at different ages,

· formulate theories that explain how and why we see the typical characteristics and responses that we do,

· understand what factors contribute to developmental differences from one person to another, and

· understand how behavior is influenced by cultural context and by changes in culture across generations.

Using an array of scientific tools designed to obtain objective (unbiased) information, developmentalists make careful observations and measurements, and they test theoretical explanations empirically. The Appendix, A Practitioner’s Guide to the Methods of Developmental Science, provides a guide to these techniques. An understanding of the processes that lead to objective knowledge will help you evaluate new information from any source as you move forward in your career as a practitioner.

Developmental science is not a remote or esoteric body of knowledge. Rather, it has much to offer helping professionals in both their careers and their personal lives. As you study developmental science, you will build a knowledge base of information about age-related behaviors and about causal theories that help organize and make sense of these behaviors. These tools will help you better understand client concerns that are rooted in shared human experience. And when you think about clients’ problems from a developmental perspective, you will increase the range of problem-solving strategies that you can offer. Finally, studying development can facilitate personal growth by providing a foundation for reflecting on your own life.

Reflection and Action

1.1 Explain the role of developmental science (research and theory) in the problem-solving processes of reflective practitioners.

Despite strong support for a comprehensive academic grounding in scientific developmental knowledge for helping professionals (e.g., Van Hesteren & Ivey, 1990), there has been a somewhat uneasy alliance between practitioners, such as mental health professionals, and those with a more empirical bent, such as behavioral scientists. The clinical fields have depended on research from developmental psychology to inform their practice. Yet in the past, overreliance on traditional experimental methodologies sometimes resulted in researchers’ neglect of important issues that could not be studied using these rigorous methods (Hetherington, 1998). Consequently, there was a tendency for clinicians to perceive some behavioral science literature as irrelevant to real-world concerns (Turner, 1986).

Clearly, the gap between science and practice is not unique to the mental health professions. Medicine, education, and law have all struggled with the problems involved in preparing students to grapple with the complex demands of the workplace. Contemporary debate on this issue has led to the development of serious alternative paradigms for the training of practitioners.

One of the most promising of these alternatives for helping professionals is the concept of reflective practice. The idea of “reflectivity” derives from Dewey’s (1933/1998) view of education, which emphasized careful consideration of one’s beliefs and forms of knowledge as a precursor to practice. Donald Schon (1987), a pioneer in the field of reflective practice, describes the problem this way:

In the varied topography of professional practice, there is a high, hard ground overlooking a swamp. On the high ground, manageable problems lend themselves to solution through the application of research-based theory and technique. In the swampy lowland, messy confusing problems defy technical solutions. The irony of this situation is that the problems of the high ground tend to be relatively unimportant to individuals or society at large, however great their technical interest may be, while in the swamp lie the problems of greatest human concern. (p. 3)

The Gap Between Science and Practice

Traditionally, the modern, university-based educational process has been driven by the belief that problems can be solved best by applying objective, technical, or scientific information amassed from laboratory investigations. Implicit in this assumption is that human nature operates according to universal principles that, if known and understood, will enable us to predict behavior.

For example, if I understand the principles of conditioning and reinforcement, I can apply a contingency contract to modify my client’s inappropriate behavior. Postmodern critics have pointed out the many difficulties associated with this approach. Sometimes a “problem” behavior is related to, or maintained by, neurological, systemic, or cultural conditions. Sometimes the very existence of a problem may be a cultural construction. Unless a problem is viewed within its larger context, a problem-solving strategy may prove ineffective.

Most of the situations helpers face are confusing, complex, ill defined, and often unresponsive to the application of a simple, specific set of scientific principles. Thus, the training of helping professionals often involves a “dual curriculum.” The first is more formal and may be presented as a conglomeration of research-based facts, whereas the second, often learned in a practicum, field placement, or first job, covers the curriculum of “what is really done” when working with clients. Unfortunately, some practitioners lose sight of the value of research-based knowledge in this process. The antidote to this dichotomous pedagogy, Schon (1987) and his followers suggest, is reflective practice. This is a creative method of thinking about practice in which the helper masters the knowledge and skills base pertinent to the profession but is encouraged to go beyond rote technical applications to generate new kinds of understanding and strategies of action. Rather than relying solely on objective technical applications to determine ways of operating in a given situation, the reflective practitioner constructs solutions to problems by engaging in personal hypothesis generating and hypothesis testing. Reflective practices are now used across a wide range of helping professions, from counseling and psychology to education to medicine and nursing (Curtis, Elkins, Duran, & Venta, 2016).

How can you use the knowledge of developmental science in a meaningful and reflective way? What place does it have in the process of reflective construction? A consideration of another important line of research, namely, that of characteristics of expert problem solvers, will help us answer this question. Research studies on expert–novice differences in many areas such as teaching, science, and athletics all support the contention that experts have a great store of knowledge and skill in a particular area. Expertise is domain-specific. When compared to novices in any given field, experts possess well-organized and integrated stores of information that they draw on, almost automatically, when faced with novel challenges. Because this knowledge is well practiced, truly a “working body” of information, retrieval is relatively easy (Lewandowsky & Thomas, 2009). Progress in problem solving is closely self-monitored. Problems are analyzed and broken down into smaller units, which can be handled more efficiently.

If we apply this information to the reflective practice model, we can see some connections. One core condition of reflective practice is that practitioners use theory as a “partial lens through which to consider a problem” (Nelson & Neufelt, 1998). Practitioners also use another partial lens: their professional and other life experience. In reflective practice, theory-driven hypotheses about client and system problems are generated and tested for goodness of fit.

A rich supply of problem-solving strategies depends on a deep understanding of and thorough grounding in fundamental knowledge germane to the field. Notice that there is a sequence to reflective practice. Schon (1987), for example, argues against putting the cart before the horse. He states that true reflectivity depends on the ability to “recognize and apply standard rules, facts and operations; then to reason from general rules to problematic cases in ways characteristic of the profession; and only then to develop and test new forms of understanding and action where familiar categories and ways of thinking fail” (p. 40). In other words, background knowledge is important, but it is most useful in a dynamic interaction with contextual applications. The most effective helpers can shift flexibly between the “big picture” that their knowledge base provides and the unique problems and contexts that they confront in practice (Ferreira, Basseches, & Vasco, 2016). A working knowledge of human development supplies the helping professional with a firm base from which to proceed.

Given the relevance of background knowledge to expertise in helping and to reflective practice, we hope we have made a sufficiently convincing case for the study of developmental science. However, it is obvious that students approaching this study are not “blank slates.” You already have many ideas and theories about the ways that people grow and change. These implicit theories have been constructed over time, partly from personal experience, observation, and your own cultural “take” on situations. Dweck and her colleagues have demonstrated that reliably different interpretations of situations can be predicted based on individual differences in people’s implicit beliefs about certain human attributes, such as intelligence or personality (see Dweck, 2006, 2017). Take the case of intelligence. If you happen to hold the implicit belief that a person’s intellectual capacity can change and improve over time, you might be more inclined to take a skill-building approach to some presenting problem involving knowledge or ability. However, if you espouse the belief that a person’s intelligence is fixed and not amenable to incremental improvement, possibly because of genetic inheritance, you might be more likely to encourage a client to cope with and adjust to cognitive limitations. For helping professionals, the implicit theoretical lens that shapes their worldview can have important implications for their clients.

We are often reluctant to give up our personal theories even in the face of evidence that these theories are incorrect (Lewandowsky & Oberauer, 2016; Rousseau & Gunia, 2016). The critical thinking that reflective practice requires can be impaired for many reasons, especially if we are busy and feel overwhelmed by the demands of the moment. The best antidote to misapplication of our personal views is self-monitoring: being aware of what our theories are and recognizing that they are only one of a set of possibilities. (See Chapter 11 for a more extensive discussion of this issue.) Before we discuss some specific beliefs about the nature of development, take a few minutes to consider what you think about the questions posed in Box 1.1.

A Historical Perspective on Developmental Theories

1.2 Identify distinguishing characteristics and core issues of classic theoretical approaches in developmental science, particularly classic stage theories and incremental theories.

Now that you have examined some of your own developmental assumptions, let’s consider the theoretical views that influence developmentalists, with special attention to how these views have evolved through the history of developmental science. Later, we will examine how different theoretical approaches might affect the helping process.

Like you, developmental scientists bring to their studies theoretical assumptions that help to structure their understanding of known facts. These assumptions also guide their research and shape how they interpret new findings. Scientists tend to develop theories that are consistent with their own cultural background and experience; no one operates in a vacuum. A core value of Western scientific method is a pursuit of objectivity, so that scientists are committed to continuously evaluating their theories in light of evidence. As a consequence, scientific theories change over time. Throughout this text, you will be introduced to many developmental theories. Some are broad and sweeping in their coverage of whole areas of development, such as Freud’s theory of personality development (see Chapters 7 and 8) or Piaget’s theory of cognitive development (see Chapters 3, 6, and 9); some are narrower in scope, focusing on a particular issue, such as Vygotsky’s theory of the enculturation of knowledge (see Chapter 3) or Bowlby’s attachment theory (see Chapters 4 and 12). You will see that newer theories usually incorporate empirically verified ideas from older theories.

Scientific theories of human development began to emerge in Europe and America in the 19th century. They had their roots in philosophical inquiry, in the emergence of biological science, and in the growth of mass education that accompanied industrialization. Throughout medieval times in European societies, children and adults of all ages seem to have been viewed and treated in very similar ways (Aries, 1960). Only infants and preschoolers were free of adult responsibilities, although they were not always given the special protections and nurture that they are today. At age 6 or 7, children took on adult roles, doing farmwork or learning a trade, often leaving their families to become apprentices. As the Industrial Revolution advanced, children worked beside adults in mines and factories. People generally seemed “indifferent to children’s special characteristics” (Crain, 2005, p. 2), and there was no real study of children or how they change.

The notion that children only gradually develop the cognitive and personality structures that will characterize them as adults first appeared in the writings of 17th- and 18th-century philosophers, such as John Locke in Great Britain and Jean-Jacques Rousseau in France. In the 19th century, Charles Darwin’s theory of the evolution of species and the growth of biological science helped to foster scholarly interest in children. The assumption grew that a close examination of how children change might help advance our understanding of the human species. Darwin himself introduced an early approach to child study, the “baby biography,” writing a richly detailed account of his young son’s daily changes in language and behavior.

By the 18th and 19th centuries, the Industrial Revolution led to the growth of “middle-class” occupations (e.g., merchandizing) that required an academic education: training in reading, writing, and math. The need to educate large numbers of children sharpened the public’s interest in understanding how children change with age. The first academic departments devoted to child study began to appear on American college campuses in the late 19th and early 20th centuries.

The idea that development continues even in adulthood was a 20th-century concept and a natural outgrowth of the study of children. If children’s mental and behavioral processes change over time, perhaps such processes continue to evolve beyond childhood. Interest in adult development was also piqued by dramatic increases in life expectancy in the 19th and 20th centuries, as well as cultural changes in how people live. Instead of single households combining three or four generations of family members, grandparents and other relatives began to live apart from “nuclear families,” so that understanding the special needs and experiences of each age group took on greater importance.

Most classic developmental theories emerged during the early and middle decades of the 20th century. Contemporary theories integrate ideas from many classic theories, as well as from other disciplines: modern genetics, neuroscience, cognitive science, psycholinguistics, anthropology, and social and cultural psychology. They acknowledge that human development is a complex synthesis of diverse processes at multiple levels of functioning. Because they embrace complexity, contemporary developmental theories can be especially useful to helping professionals (Melchert, 2016).
See the timeline in Figure 1.1 for a graphic summary of some of the key theories and ideas in the history of developmental science.

You can expect that the most up-to-date theories you read about in this text will continue to change in the future, because theoretical ideas evolve as research testing them either supports or does not support them. But theories are also likely to need adjusting because global shifts in immigration patterns, climate, and access to technology and information are likely to modify behavior and perhaps even some of the processes that govern the development of behavior. Developmental theories must accommodate such changes (Jensen, 2012).

Emphasizing Discontinuity: Classic Stage Theories

Some of the most influential early theories of development described human change as occurring in stages. Imagine a girl when she is 4 months old and then again when she is 4 years old. If your sense is that these two versions of the same child are fundamentally different in kind, with different intellectual capacities, different emotional structures, or different ways of perceiving others, you are thinking like a stage theorist.

A stage is a period of time, perhaps several years, during which a person’s activities (at least in one broad domain) have certain characteristics in common. For example, we could say that in language development, the 4-month-old girl is in a preverbal stage: Among other things, her communications share in common the fact that they do not include talking. As a person moves to a different stage, the common characteristics of behavior change. In other words, a person’s activities have similar qualities within stages but different qualities across stages. Also, after long periods of stability, qualitative shifts in behavior seem to happen relatively quickly. For example, the change from not talking to talking seems abrupt or discontinuous. It tends to happen between 12 and 18 months of age, and once it starts, language use seems to advance very rapidly. A 4-year-old is someone who communicates primarily by talking; she is clearly in a verbal stage.

The preverbal to verbal example illustrates two features of stage theories. First, they describe development as qualitative or transformational change, like the emergence of a tree from a seed. At each new stage, new forms of behavioral organization are both different from and more complex than the ones at previous stages. Increasing complexity suggests that development has “directionality.” There is a kind of unfolding or emergence of behavioral organization.

Second, they imply periods of relative stability (within stages) and periods of rapid transition (between stages). Metaphorically, development is a staircase. Each new stage lifts a person to a new plateau for some period of time, and then there is another steep rise to another plateau. There seems to be discontinuity in these changes rather than change being a gradual, incremental process.

One person might progress through a stage more quickly or slowly than another, but the sequence of stages is usually seen as the same across cultures and contexts, that is, universal. Also, despite the emphasis on qualitative discontinuities between stages, stage theorists argue for functional continuities across stages. That is, the same processes drive the shifts from stage to stage, such as brain maturation and social experience.

Sigmund Freud’s theory of personality development began to influence developmental science in the early 1900s and was among the first to include a description of stages (e.g., Freud, 1905/1989, 1949/1969). Freud’s theory no longer takes center stage in the interpretations favored by most helping professionals or by developmental scientists. First, there is little evidence for some of the specific proposals in Freud’s theory (Loevinger, 1976). Second, his theory has been criticized for incorporating the gender biases of early 20th-century Austrian culture. Yet, some of Freud’s broad insights are routinely accepted and incorporated into other theories, such as his emphasis on the importance of early family relationships to infants’ emotional life, his notion that some behavior is unconsciously motivated, and his view that internal conflicts can play a primary role in social functioning. Currently influential theories, like those of Erik Erikson and John Bowlby, incorporated some aspects of Freud’s theories or were developed to contrast with Freud’s ideas. For these reasons, it is important to understand Freud’s theory. Also, his ideas have permeated popular culture, and they influence many of our assumptions about the development of behavior. As you work to make explicit your own implicit assumptions about development, it will help to understand their origins and how well the theories that spawned them stand up in the light of scientific investigation.

Freud’s Personality Theory

Sigmund Freud’s psychoanalytic theory both describes the complex functioning of the adult personality and offers an explanation of the processes and progress of its development throughout childhood. To understand any given stage it helps to understand Freud’s view of the fully developed adult.

Id, Ego, and Superego.   According to Freud, the adult personality functions as if there were actually three personalities, or aspects of personality, all potentially in conflict with one another. The first, the id, is the biological self, the source of all psychic energy. Babies are born with an id; the other two aspects of personality develop later. The id blindly pursues the fulfillment of physical needs or “instincts,” such as the hunger drive and the sex drive. It is irrational, driven by the pleasure principle, that is, by the pursuit of gratification. Its function is to keep the individual, and the species, alive, although Freud also proposed that there are inborn aggressive, destructive instincts served by the id.

The ego begins to develop as cognitive and physical skills emerge. In Freud’s view, some psychic energy is invested in these skills, and a rational, realistic self begins to take shape.

The id still presses for fulfillment of bodily needs, but the rational ego seeks to meet these needs in sensible ways that take into account all aspects of a situation. For example, if you were hungry, and you saw a child with an ice cream cone, your id might press you to grab the cone away from the child—an instance of blind, immediate pleasure seeking. Of course, stealing ice cream from a child could have negative consequences if someone else saw you do it or if the child reported you to authorities. Unlike your id, your ego would operate on the reality principle, drawing on your understanding of the world and of behavioral consequences to devise a more sensible and self-protective approach, such as waiting until you arrive at the ice cream store yourself and paying for an ice cream cone.

The superego is the last of the three aspects of personality to emerge. Psychic energy is invested in this “internalized parent” during the preschool period as children begin to feel guilty if they behave in ways that are inconsistent with parental restrictions. With the superego in place, the ego must now take account not only of instinctual pressures from the id, and of external realities, but also of the superego’s constraints. It must meet the needs of the id without upsetting the superego to avoid the unpleasant anxiety of guilt. In this view, when you choose against stealing a child’s ice cream cone to meet your immediate hunger, your ego is taking account not only of the realistic problems of getting caught but also of the unpleasant feelings that would be generated by the superego.

The Psychosexual Stages.   In Freud’s view, the complexities of the relationships and conflicts that arise among the id, the ego, and the superego are the result of the individual’s experiences during five developmental stages. Freud called these psychosexual stages because he believed that changes in the id and its energy levels initiated each new stage. The term sexual here applies to all biological instincts or drives and their satisfaction, and it can be broadly defined as “sensual.”

For each stage, Freud posited that a disproportionate amount of id energy is invested in drives satisfied through one part of the body. As a result, the pleasure experienced through that body part is especially great during that stage. Children’s experiences satisfying the especially strong needs that emerge at a given stage can influence the development of personality characteristics throughout life. Freud also thought that parents typically play a pivotal role in helping children achieve the satisfaction they need. For example, in the oral stage, corresponding to the first year of life, Freud argued that the mouth is the body part that provides babies with the most pleasure. Eating, drinking, and even nonnutritive sucking are presumably more satisfying than at other times of life. A baby’s experiences with feeding and other parenting behaviors are likely to affect her oral pleasure, and could influence how much energy she invests in seeking oral pleasure in the future. Suppose that a mother in the early 20th century believed the parenting advice of “experts” who claimed that nonnutritive sucking is bad for babies. To prevent her baby from sucking her thumb, the mother might tie the baby’s hands to the sides of the crib at night—a practice recommended by the same experts! Freudian theory would predict that such extreme denial of oral pleasure could cause an oral fixation: The girl might grow up needing oral pleasures more than most adults, perhaps leading to overeating, to being especially talkative, or to being a chain smoker. The grown woman might also exhibit this fixation in more subtle ways, maintaining behaviors or feelings in adulthood that are particularly characteristic of babies, such as crying easily or experiencing overwhelming feelings of helplessness. According to Freud, fixations at any stage could be the result of either denial of a child’s needs, as in this example, or overindulgence of those needs. Specific defense mechanisms, such as “reaction formation” or “repression,” can also be associated with the conflicts that arise at a particular stage.

In Table 1.1, you will find a summary of the basic characteristics of Freud’s five psychosexual stages. Some of these stages will be described in more detail in later chapters. Freud’s stages have many of the properties of critical (or sensitive) periods for personality development. That is, they are time frames during which certain developments must occur or can most fully form. Freud’s third stage, for example, provides an opportunity for sex typing and moral processes to emerge (see Table 1.1). Notice that Freud assumed that much of personality development occurs before age 5, during the first three stages. This is one of the many ideas from Freud’s theory that has made its way into popular culture, even though modern research clearly does not support this position.

By the mid-1900s, two other major stage theories began to significantly impact the progress of developmental science. The first, by Erik Erikson, was focused on personality development, reshaping some of Freud’s ideas. The second, by Jean Piaget, proposed that there are stagelike changes in cognitive processes during childhood and adolescence, especially in rational thinking and problem solving.

Erikson’s Personality Theory

Erik Erikson studied psychoanalytic theory with Anna Freud, Sigmund’s daughter, and later proposed his own theory of personality development (e.g., Erikson, 1950/1963). Like many “neo-Freudians,” Erikson deemphasized the id as the driving force behind all behavior, and he emphasized the more rational processes of the ego. His theory is focused on explaining the psychosocial aspects of behavior: attitudes and feelings toward the self and toward others. Erikson described eight psychosocial stages. The first five correspond to the age periods laid out in Freud’s psychosexual stages, but the last three are adult life stages, reflecting Erikson’s view that personal identity and interpersonal attitudes are continually evolving from birth to death.

The “Eight Stages of Man.”   In each stage, the individual faces a different “crisis” or developmental task (see Chapter 9 for a detailed discussion of Erikson’s concept of crisis). The crisis is initiated, on one hand, by changing characteristics of the person—biological maturation or decline, cognitive changes, advancing (or deteriorating) motor skills—and, on the other hand, by corresponding changes in others’ attitudes, behaviors, and expectations. As in all stage theories, people qualitatively change from stage to stage, and so do the crises or tasks that they confront.

In the first stage, infants must resolve the crisis of trust versus mistrust (see Chapter 4). Infants, in their relative helplessness, are “incorporative.” They “take in” what is offered, including not only nourishment but also stimulation, information, affection, and attention. If infants’ needs for such input are met by responsive caregivers, they will begin to trust others, to feel valued and valuable, and to view the world as a safe place. If caregivers are not consistently responsive, infants will fail to establish basic trust or to feel valuable, carrying mistrust with them into the next stage of development, when the 1- to 3-year-old toddler faces the crisis of autonomy versus shame and doubt. Mistrust in others and self will make it more difficult to successfully achieve a sense of autonomy.

The new stage is initiated by the child’s maturing muscular control and emerging cognitive and language skills. Unlike helpless infants, toddlers can learn not only to control their elimination but also to feed and dress themselves, to express their desires with some precision, and to move around the environment without help. The new capacities bring a strong need to practice and perfect the skills that make children feel in control of their own destinies. Caregivers must be sensitive to the child’s need for independence and yet must exercise enough control to keep the child safe and to help the child learn self-control. Failure to strike the right balance may rob children of feelings of autonomy—a sense that “I can do it myself”—and can promote instead either shame or self-doubt.

These first two stages illustrate features of all of Erikson’s stages (see Table 1.2 for a description of all eight stages).

First, others’ sensitivity and responsiveness to the individual’s needs create a context for positive psychosocial development. Second, attitudes toward self and toward others emerge together. For example, developing trust in others also means valuing (or trusting) the self. Third, every psychosocial crisis or task involves finding the right balance between positive and negative feelings, with the positive outweighing the negative. Finally, the successful resolution of a crisis at one stage helps smooth the way for successful resolutions of future crises. Unsuccessful resolution at an earlier stage may stall progress and make maladaptive behavior more likely.

Erikson’s personality theory is often more appealing to helping professionals than Freud’s theory. Erikson’s emphasis on the psychosocial aspects of personality focuses attention on precisely the issues that helpers feel they are most often called on to address: feelings and attitudes about self and about others. Also, Erikson assumed that the child or adult is an active, self-organizing individual who needs only the right social context to move in a positive direction. Further, Erikson was himself an optimistic therapist who believed that poorly resolved crises could be resolved more adequately in later stages if the right conditions prevailed. Erikson was sensitive to cultural differences in behavioral development. Finally, developmental researchers frequently find Eriksonian interpretations of behavior useful. Studies of attachment, self-concept, self-esteem, and adolescent identity, among other topics addressed in subsequent chapters, have produced results compatible with some of Erikson’s ideas. (See Chapter 4, Box 4.2 for a biographical sketch of Erikson.)

Piaget’s Cognitive Development Theory

In Jean Piaget’s cognitive development theory, we see the influence of 18th-century philosopher Jean-Jacques Rousseau (e.g., 1762/1948), who argued that children’s reasoning and understanding emerge naturally in stages and that parents and educators can help most by allowing children freedom to explore their environments and by giving them learning experiences that are consistent with their level of ability. Similarly, Piaget outlined stages in the development of cognition, especially logical thinking, which he called operational thought (e.g., Inhelder & Piaget, 1955/1958, 1964; Piaget, 1952, 1954). He assumed that normal adults are capable of thinking logically about both concrete and abstract contents but that this capacity evolves in four stages through childhood. Briefly, the first sensorimotor stage, lasting for about two years, is characterized by an absence of representational thought (see Chapter 3). Although babies are busy taking in the sensory world, organizing it on the basis of inborn reflexes or patterns, and then responding to their sensations, Piaget believed that they cannot yet symbolically represent their experiences, and so they cannot really reflect on them. This means that young infants do not form mental images or store memories symbolically, and they do not plan their behavior or act intentionally. These capacities emerge between 18 and 24 months, launching the next stage.

Piaget’s theory is another classic stage model. First, cognitive abilities are qualitatively similar within stages. If we know how a child approaches one kind of task, we should be able to predict her approaches to other kinds of tasks as well. Piaget acknowledged that children might be advanced in one cognitive domain or lag behind in another. For example, an adolescent might show more abstract reasoning about math than about interpersonal matters. He called these within-stage variations décalages. But generally, Piaget expected that a child’s thinking would be organized in similar ways across most domains. Second, even though progress through the stages could move more or less quickly depending on many individual and contextual factors, the stages unfold in an invariant sequence, regardless of context or culture. The simpler patterns of physical or mental activity at one stage become integrated into more complex organizational systems at the next stage (hierarchical integration). Finally, despite the qualitative differences across stages, there are functional similarities or continuities from stage to stage in the ways in which children’s cognitive development proceeds.

According to Piaget, developmental progress depends on children’s active engagement with the environment. This active process, which will be described in more detail in Chapter 3, suggests that children (and adults) build knowledge and understanding in a self-organizing way. They interpret new experiences and information to fit their current ways of understanding even as they make some adjustments to their understanding in the process. Children do not just passively receive information from without and store it “as is.” And knowledge does not just emerge from within as though preformed. Instead, children actively build their knowledge, using both existing knowledge and new information. This is a constructivist view of development.

Piaget’s ideas about cognitive development were first translated into English in the 1960s, and they swept American developmental researchers off their feet. His theory filled the need for an explanation that acknowledged complex qualitative changes in children’s abilities over time, and it launched an era of unprecedented research on all aspects of children’s intellectual functioning that continues today. Although many of the specifics of Piaget’s theory have been challenged by research findings, researchers, educators, and other helping professionals still find the broad outlines of this theory very useful for organizing their thinking about the kinds of understandings that children of different ages can bring to a problem or social situation. Piaget’s theory also inspired some modern views of cognitive change in adulthood. As you will see in Chapter 11, post-Piagetians have proposed additional stages in the development of logical thinking, hypothesizing that the abstract thinking of the adolescent is transformed during adulthood into a more relativistic kind of logical thinking, partly as a function of adults’ practical experience with the complexity of real-world problems.

Emphasizing Continuity: Incremental Change

Unlike stage theories, some theoretical approaches characterize development as a more continuous process. Change tends to be incremental, metaphorically resembling not a staircase but a steadily rising mountainside. Again, picture a 4-month-old girl, and the same girl when she is 4 years old. If you tend to “see” her evolving in small steps from a smiling, attentive infant to a smiling, eager toddler, to a smiling, mischievous preschooler, always noting in your observations threads of sameness as well as differences, your own theoretical assumptions about development may be more compatible with one of these incremental models.

Like stage models, they can be very different in the types and breadth of behaviors they attempt to explain. They also differ in the kinds of processes they assume to underlie psychological change, such as the kinds of processes involved in learning. But they all agree that developmental change is not marked by major, sweeping reorganizations that affect many behaviors at once, as in stage theories. Rather, change is steady and specific to particular behaviors or behavioral domains. Incremental theorists, like stage theorists, tend to see “change for the better” as a key feature of development. So, adding words to your vocabulary over time would be a typical developmental change, but forgetting previously learned information might not. Social learning theory and most information processing theories are among the many incremental models available to explain development.

Learning Theories

Learning theories, in what is called the behaviorist tradition, have a distinguished history in American psychology. They were the most widely accepted class of theories through much of the 20th century, influenced by many thinkers from John B. Watson (e.g., 1913) to B. F. Skinner (e.g., 1938) to Albert Bandura (e.g., 1974). These theories trace their philosophical roots from ancient Greece and the writings of Aristotle through John Locke and the British empiricists of the 17th and 18th centuries. In this philosophical tradition, knowledge and skill are thought to accumulate as the result of each person’s individual experiences. The environment gradually leaves its imprint on one’s behavior and mind, a mind that in infancy is like a blank slate. Locke described several simple processes—association, repetition, imitation, reward, and punishment—by which the environment can have an impact. Many of the processes Locke described were incorporated into behaviorist approaches to development.

Some learning theories explain behavioral change as a function of chains of specific environmental events, such as those that occur in classical conditioning and operant conditioning. In these processes, change in behavior takes place because environmental events (stimuli) are paired with certain behaviors. Let’s begin with classical conditioning, also called respondent conditioning (Vargas, 2009). A respondent is an automatic response to a stimulus. For example, when you hear an unexpected loud noise you will automatically produce a startle response. This stimulus/response association is unconditioned, built into your biological system. But the response can be conditioned to a new, neutral stimulus. Suppose a child calmly watches a dog approach her. At first, sight of the dog is a neutral stimulus. But the dog suddenly barks loudly, causing the child to automatically startle and pull back. Suppose that the next time the child sees the dog, it does not bark. Even so, just the sight of the dog triggers the same response as loud barking would: The child automatically startles and pulls back. The child has learned a new response, because the formerly neutral event (sight of dog) has been paired with an event (loud barking) that automatically causes a startle. Perhaps the startle reaction is also accompanied by feelings of fear. If so, the child has learned to fear this dog and will likely generalize that fear to other, similar dogs. When a neutral event or stimulus is associated with a stimulus that causes an automatic response, the neutral stimulus can become a conditioned stimulus, meaning that it can cause the person to make the same automatic response in the future, called a conditioned response. This is classical conditioning.

Operant conditioning is different. First, a person performs some behavior. The behavior is an operant, any act with potential to lead to consequences in the environment (that is, to “operate” on the environment). Immediately after the operant occurs, there is a “reinforcing event,” or reinforcement, something that is experienced by the person as pleasurable or rewarding. For example, suppose that a young child happens to babble “da” just as a dog appears in the child’s line of sight, and the child’s mother excitedly claps and kisses the child. (The mother has mistakenly assumed that the child has tried to say “dog.”) The mother’s reaction serves as a reinforcement for the child, who will repeat the “da” sound the next time a dog comes into view. In operant conditioning, the child learns to produce a spontaneous behavior or operant (e.g., “da”) in response to a cue (e.g., the appearance of a dog) because the behavior was previously reinforced in that situation. A reinforcement is a consequence of the operant behavior that maintains or increases the likelihood of that behavior when the cue occurs again (Sparzo, 2011). The mother’s approving reaction is an example of a positive reinforcement: Something pleasurable is presented after the operant occurs. There are also rewarding consequences that are called negative reinforcements: An aversive experience stops or is removed after the operant occurs. If your brother releases you from a painful hammer-hold when you yell “Uncle,” you have been negatively reinforced for saying “Uncle” (the operant) in that situation.
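To make the reinforcement logic concrete, here is a minimal sketch in Python of how a cue-operant association might strengthen with reinforcement. This is our own illustrative toy model, not a formal behaviorist account: the Learner class, the 0.1 baseline emission probability, and the update rule are all hypothetical choices made for demonstration.

```python
import random

class Learner:
    """Toy model of operant conditioning: reinforcement after an operant
    in the presence of a cue raises the chance of emitting that operant
    the next time the cue appears. All numbers are illustrative only."""

    def __init__(self):
        self.p_operant = {}  # cue -> probability of emitting the operant

    def respond(self, cue):
        # Emit the operant (e.g., babbling "da") with the current probability.
        p = self.p_operant.setdefault(cue, 0.1)  # low baseline rate
        return random.random() < p

    def reinforce(self, cue):
        # A reinforcing event (e.g., the mother's praise) strengthens the
        # cue-operant association, moving the probability toward 1.
        p = self.p_operant[cue]
        self.p_operant[cue] = p + 0.3 * (1 - p)

child = Learner()
for trial in range(50):
    if child.respond("dog appears"):    # the operant happens to occur...
        child.reinforce("dog appears")  # ...and is immediately reinforced
print(round(child.p_operant["dog appears"], 2))  # typically well above 0.1
```

Note how the behavior must first occur spontaneously before it can be reinforced, which mirrors the description above: the operant precedes the reinforcing event.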

Social learning theories, which have focused specifically on how children acquire personality characteristics and social skills, consider conditioning processes part of the story, but they also emphasize “observational learning,” or modeling. In this kind of learning, one person (the learner) observes another (the model) performing some behavior and, just from close observation, learns to do it too. The observer may or may not imitate the modeled behavior, immediately or in the future, depending on many factors, such as whether the observer expects a reward for the behavior, whether the model is perceived as nurturing or competent, and even whether the observer believes that the performance will meet the observer’s own performance standards. Current versions of social learning theory emphasize many similar cognitive, self-regulated determiners of performance and suggest that they too are often learned from models (e.g., Bandura, 1974, 1999).

Whatever the learning processes that are emphasized in a particular learning theory, the story of development is one in which behaviors or beliefs or feelings change in response to specific experiences, one experience at a time. Broader changes can occur by generalization. If new events are experienced that are very similar to events in the original learning context, the learned behaviors may be extended to these new events. For example, the child who learns to say “da” when a particular dog appears may do the same when other dogs appear, or even in the presence of other four-legged animals. Or a child who observes a model sharing candy with a friend may later share toys with a sibling. But these extensions of learned activities are narrow in scope compared to the sweeping changes hypothesized by stage theorists. While these processes explain changes in discrete behaviors or patterns of behavior, learning theories do not explain developmental reorganizations and adaptations in the ways classic stage theories do.

Information Processing Theories

Since the introduction of computing technologies in the middle of the 20th century, some theorists have likened human cognitive functioning to computer processing of information. Not all information processing theories can be strictly classified as incremental theories, but many can. Like learning theories, these do not hypothesize broad stages, but emphasize incremental changes in narrow domains of behavior or thought. The mind works on information—attending to it, holding it in a temporary store or “working memory,” putting it into long-term storage, using strategies to organize it or to draw conclusions from it, and so on. How the information is processed depends on general characteristics of the human computer, such as how much information can be accessed, or made available for our attention, at one time. These characteristics can change to some degree over time. For example, children’s attentional capacity increases gradually with age. Yet most changes with age are quite specific to particular domains of knowledge, such as changes in the strategies children use to solve certain kinds of problems.
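The computer metaphor can be made concrete with a toy sketch. The following Python illustration of a limited-capacity working memory feeding a long-term store is entirely our own assumption for demonstration purposes (the capacity value, class design, and rehearsal rule are not drawn from the information processing literature):

```python
from collections import deque

class ToyMind:
    """Illustrative information-processing sketch: attended items enter a
    limited-capacity working memory; rehearsed items move to long-term
    storage. Capacity and mechanics are assumptions for demonstration."""

    def __init__(self, capacity=4):
        self.working_memory = deque(maxlen=capacity)  # temporary store
        self.long_term_memory = set()                 # durable store

    def attend(self, item):
        # Attending places an item in working memory; once capacity is
        # exceeded, the oldest item is displaced automatically.
        self.working_memory.append(item)

    def rehearse(self):
        # A simple strategy: rehearsal copies whatever is currently held
        # in working memory into long-term storage.
        self.long_term_memory.update(self.working_memory)

mind = ToyMind(capacity=4)
for word in ["dog", "cat", "sun", "hat", "cup"]:
    mind.attend(word)          # "dog" is displaced when the 5th word arrives
mind.rehearse()
print(mind.long_term_memory)   # contains "cat", "sun", "hat", "cup" only
```

A gradual increase in the capacity parameter would model the age-related growth in attentional capacity described above, while changes to the rehearsal strategy would model the domain-specific strategy changes.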

Furthermore, processing changes are not stagelike; they do not extend beyond the particular situation or problem space in which they occur. For example, Siegler and his colleagues (e.g., Siegler, 1996, 2007; Siegler & Svetina, 2006) describe changes in the ways that children do arithmetic, read, solve problems of various kinds, and perform many other tasks and skills. Siegler analyzes very particular changes in the kinds of strategies that children use when they attempt these tasks. Although there can be similarities across tasks in the ways that strategies change (e.g., they become more automatic with practice, they generalize to similar problems, etc.), usually the specific strategies used in one kind of task fail to apply to another, and changes are not coordinated across tasks. To illustrate, a kindergartner trying to solve an addition problem might use the strategy of “counting from one.” “[T]his typically involves putting up fingers on one hand to represent the first addend, putting up fingers on the other hand to represent the second addend, and then counting the raised fingers on both hands” (Siegler, 1998, p. 93). This strategy is characteristic of early addition efforts, but would play no role in tasks such as reading or spelling. Overall, then, cognitive development in this kind of model is like social development in social learning theories: It results from the accrual of independent changes in many different domains of thought and skill. Development involves change for the better, but it does not lead to major organizational shifts across domains.
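The “counting from one” strategy that Siegler describes is essentially a small algorithm, so it can be sketched directly in code. The following Python function is our own illustration of that count-all procedure (the function name and the list-of-fingers representation are hypothetical conveniences, not part of Siegler’s work):

```python
def count_from_one(addend1, addend2):
    """Sketch of the "counting from one" (count-all) addition strategy:
    raise fingers for each addend, then count all raised fingers."""
    # Fingers on one hand represent the first addend,
    # fingers on the other hand represent the second addend.
    raised_fingers = ["up"] * addend1 + ["up"] * addend2
    # Count the raised fingers one at a time, starting from one.
    total = 0
    for _ in raised_fingers:
        total += 1
    return total

# A kindergartner adding 3 + 2 would count "1, 2, 3, 4, 5."
print(count_from_one(3, 2))  # -> 5
```

As the text notes, this strategy is specific to early addition; nothing in it carries over to reading or spelling, which is exactly the domain-specificity that information processing theorists emphasize.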

Classic Theories and the Major Issues They Raise

Classic theories of development have typically addressed a set of core issues. In our brief review, you have been introduced to just a few of them. Is developmental change qualitative (e.g., stagelike) or quantitative (e.g., incremental)? Are some developments restricted to certain critical periods in the life cycle or are changes in brain and behavior possible at any time given the appropriate opportunities? Are there important continuities across the life span (in characteristics or change processes) or is everything in flux? Are people actively influencing the course and nature of their own development (self-organizing), or are they passive products of other forces? Which is more important in causing developmental change, nature (heredity) or nurture (environment)? Are there universal developmental trajectories, processes, and changes that are the same in all cultures and historical periods, or is development more specific to place and time? Classic theorists usually took a stand on one side or the other of these issues, framing them as “either-or” possibilities. However, taking an extreme position does not fit the data we now have available. Contemporary theorists propose that human development is best described by a synthesis of the extremes. The best answer to all of the questions just posed appears to be “Both.”

Contemporary Multidimensional or Systems Theories: Embracing the Complexity of Development

Throughout this text you will find evidence that development is the result of the relationships among many causal components, interacting in complex ways. Modern developmental theories, which we refer to as multidimensional or systems theories, explain and describe the enormous complexity of interrelated causal processes in development. They generally assume that in all behavioral domains, from cognition to personality, there are layers, or levels, of interacting causes for change: physical/molecular, biological, psychological, social, and cultural. What happens at one level both causes and is caused by what happens at other levels. That is, the relationships among causes are reciprocal or bidirectional processes. For example, increased testosterone levels at puberty (biological change) might help influence a boy to pursue an aggressive sport, like wrestling. The boy’s success at wrestling may cause his status and social dominance to rise among his male friends (social change), and this social change can reciprocally influence his biological functioning. Specifically, it can lead to additional increases in his testosterone levels (Cacioppo & Berntson, 1992).

These theories acknowledge and incorporate many kinds of change: qualitative, transforming changes, both great (stagelike) and small (such as strategy changes within a particular problem-solving domain), as well as continuous, incremental variations that can even be reversible, such as learning and then forgetting new information (e.g., Overton, 1990). This is one example of how contemporary theories integrate features of many classic theories of development.

Think again about a girl who is 4 months old, and then later 4 years old. Do you perceive so many changes that she is transformed into a different sort of creature, and yet, at the same time, do you see enduring qualities that characterize her at both ages? Does your sense of the forces that have changed her include influences such as her family, community, and culture? Do you also recognize that she has played a significant role in her own change and in modifying those other forces? If so, your implicit assumptions about development may be more consistent with multidimensional models than with either stage or incremental theories alone.

Multidimensional theories portray the developing person metaphorically as a vine growing through a thick forest (Kagan, 1994). As it grows, the vine is propelled by its own inner processes, but its path, even its form, is in part created by the forest it inhabits. There is continuous growth, but there are changes in structure too—in its form and direction—as the vine wends its way through the forest. Finally, its presence in the forest changes the forest itself, affecting the growth of the trees and other plants, which reciprocally influence the growth of the vine.

Many multidimensional theories have been proposed, but they are remarkably similar in their fundamental assumptions and characteristics. They differ mainly in which aspects of development they describe in the most detail. They include transactional theory (e.g., Sameroff & Chandler, 1975), relational theory (e.g., Lerner, 1998), dialectical theory (e.g., Sameroff, 2012), bioecological theory (e.g., Bronfenbrenner & Ceci, 1994), bio-social-ecological theory (e.g., Cole & Packer, 2011), epigenetic theory (e.g., Gottlieb, 1992), life course theory (Elder & Shanahan, 2006), life span developmental theory (e.g., Baltes, 1997; Baltes, Lindenberger, & Staudinger, 2006), dynamic systems theory (e.g., Thelen & Smith, 1998), and several others. (See Overton, 2015, for a deeper analysis of the similarities and differences among these theories.) Figure 1.2 provides one illustration of the multiple, interacting forces that these theories identify. Two examples of multidimensional models will help flesh out the typical characteristics of many of these theories.

Bronfenbrenner’s Bioecological Theory

In his bioecological theory, Urie Bronfenbrenner and his colleagues (e.g., Bronfenbrenner & Ceci, 1994; Bronfenbrenner & Morris, 1998, 2006) described all developments—including personality and cognitive change—as a function of proximal processes. These are reciprocal interactions between an “active, evolving biopsychological human organism and the persons, objects and symbols in its immediate external environment” (Bronfenbrenner & Morris, 1998, p. 996). In other words, proximal processes refer to a person’s immediate interactions with people or with the physical environment or with informational sources (such as books or movies). Proximal processes are truly interactive: The organism both influences and is influenced by the immediate environment. These proximal processes are modified by more distal processes. Some of these are within the organism—such as genes. Others are outside the immediate environment—such as features of the educational system or of the broader culture. The quality and effectiveness of the immediate environment—its responsiveness to the individual’s particular needs and characteristics and the opportunities it provides—depend on the larger context. For example, parental monitoring of children’s homework benefits children’s academic performance. But monitoring is more effective if parents are knowledgeable about the child’s work. A parent who insists that his child do her algebra homework may have less effect if the parent cannot be a resource who guides and explains the work. Thus, the parent’s own educational background affects the usefulness of the monitoring (Bronfenbrenner & Ceci, 1994).
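
The homework-monitoring example describes what statisticians call an interaction: the payoff of a proximal process depends on a more distal factor. A toy sketch, with coefficients we invented for illustration (not Bronfenbrenner and Ceci's data), shows the logic.

```python
# Toy sketch of moderation (invented numbers, not empirical estimates):
# the benefit of parental monitoring (a proximal process) depends on the
# parent's own knowledge of the material (a more distal resource).
def achievement_gain(monitoring, parent_knowledge):
    # main effect of monitoring plus a monitoring-by-knowledge interaction
    return 0.1 * monitoring + 0.4 * monitoring * parent_knowledge

print(achievement_gain(monitoring=1.0, parent_knowledge=0.2))  # prints 0.18
print(achievement_gain(monitoring=1.0, parent_knowledge=0.9))  # prints 0.46
```

The same amount of monitoring yields very different gains depending on the distal resource, which is the point of the example.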

An individual’s characteristics also influence the effectiveness of the environment. For example, motivations affect the impact of learning opportunities in a given context. A man interested in gambling may learn to reason in very complex ways about horses and their relative probability of winning at the track, but he may not display such complex reasoning in other contexts (Ceci & Liker, 1986). Other important individual qualities include demand characteristics, behavioral tendencies that often either encourage or discourage certain kinds of reactions from others. A child who is shy and inhibited, a trait that appears to have some biological roots (Kagan & Fox, 2006), may often fail to elicit attention from others, and may receive less support when she needs it, than a child who is open and outgoing (Bell & Chapman, 1986; see also Chapters 4 and 5).

Changes in the organism can be emergent, stagelike, qualitative changes, such as a shift from preoperational to concrete operational thought (see Table 1.3), or they can be more continuous, graded changes, such as shifts in academic interest or involvement in athletics. Both kinds of change are the result of proximal processes, influenced by more distal internal and external causes. Once changes occur, the individual brings new resources to these proximal processes. For example, when a child begins to demonstrate concrete operational thought, she will be given different tasks to do at home or at school than before, and she will learn things from those experiences that she would not have learned earlier. This is a good example of the bidirectionality of proximal processes: Change in the child fosters change in the environment, leading to more change in the child, and so on.

In earlier versions of his theory, Bronfenbrenner characterized in detail the many levels of environment that influence a person’s development.

He referred to the immediate environment, where proximal processes are played out, as the microsystem. Babies interact primarily with family members, but as children get older, other microsystems, such as the school, the neighborhood, or a local playground and its inhabitants, become part of their lives. The microsystems interact with and modify each other. For example, a discussion between a child’s parent and a teacher might change how one or both of them interacts with the child. The full set of relationships among the microsystems is called the mesosystem. The next level of the environment, the exosystem, includes settings that children may not directly interact with but that influence the child nonetheless. For example, a teacher’s family life will influence the teacher and thereby the child. Or a child’s socioeconomic status influences where her family lives, affecting the school the child will attend, and thus affecting the kinds of experiences the child has with teachers. Finally, there is the macrosystem, including the customs and character of the larger culture that help shape the microsystems. For example, cultural attitudes and laws regarding the education of exceptional students influence the operation of a school and therefore a child’s interactions with teachers.

The environment, then, is like “a set of nested structures, each inside the next, like a set of Russian dolls” (Bronfenbrenner, 1979). In newer versions of his theory, Bronfenbrenner gave equal attention to the nested internal levels of the organism. As we have seen, a person brings to proximal processes a set of dispositions, resources (preexisting abilities, experiences, knowledge, and skills), and demand characteristics. These, in turn, are influenced by biological and physical levels of functioning that include the genes. Bronfenbrenner also emphasized, as other multidimensional theorists do, the bidirectional effects of each level on the adjacent levels. For example, proximal psychological processes playing out in the immediate context are both influenced by, and influencing, physiological processes (Bronfenbrenner & Morris, 1998, 2006). Finally, these interactions continue and change across time.
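
The "Russian dolls" image lends itself to a simple data-structure sketch. The Python below is our illustration only; Bronfenbrenner proposed no such formalism, and the example settings are hypothetical.

```python
# Our illustrative nesting of Bronfenbrenner's environmental levels.
# Each level contains the next; the innermost list holds the child's
# immediate settings (microsystems).
environment = {
    "macrosystem": {                      # cultural customs, laws, values
        "exosystem": {                    # settings the child never enters directly
            "mesosystem": {               # relationships among the microsystems
                "microsystems": ["family", "school", "neighborhood"],
            },
        },
    },
}

def show(level, indent=0):
    """Print each nested level, walking inward like opening Russian dolls."""
    for name, inner in level.items():
        print("  " * indent + name)
        if isinstance(inner, dict):
            show(inner, indent + 1)
        else:
            for setting in inner:
                print("  " * (indent + 1) + setting)

show(environment)
```

The nesting captures containment only; remember that in the theory itself, influence also flows bidirectionally between adjacent levels.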

Life Span Developmental Theory

In life span developmental theories, the same developmental processes that produce the transformation of infants into children, and children into adults, are thought to continue throughout adulthood until death. Developmental change is part of what it means to be alive. Adaptation continues from conception to death, with proximal interactions between the organism and the immediate context modified by more distal processes both within the individual and in the environment. Life span theorists like Paul Baltes (e.g., 1997; Baltes, Lindenberger, & Staudinger, 2006) refer to the interacting web of influences on development as the “architecture” of biological and cultural supports. Baltes proposed that successful adaptation is benefited more by biological supports in childhood than in adulthood. Cultural supports are important in childhood, but if not optimal, most children have biological supports (we could think of them as a complex of biological protective factors) that have evolved to optimize development in most environments. For adults, successful adaptation is more heavily dependent on cultural supports or protective factors. “The older individuals are, the more they are in need of culture-based resources (material, social, economic, psychological) to generate and maintain high levels of functioning” (Baltes, Lindenberger, & Staudinger, 1998, p. 1038). We will have more to say about life span developmental theories in Chapter 13.
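
Baltes's claim about the shifting "architecture" can be caricatured with two simple curves. The linear forms below are deliberately toy assumptions of ours, not Baltes's model; they show only the direction of the claimed shift.

```python
# Toy rendering (our assumption, not an empirical model) of the claimed
# shift: biological supports for adaptation are strongest early in life,
# while reliance on culture-based resources grows with age.
def biological_support(age):
    return max(0.0, 1.0 - age / 90)      # declines across the life span

def cultural_reliance(age):
    return min(1.0, 0.2 + age / 90)      # grows across the life span

for age in (5, 25, 45, 65, 85):
    print(f"age {age}: biological={biological_support(age):.2f}, "
          f"cultural={cultural_reliance(age):.2f}")
```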

Applying Theory to Practice

We have described both classic theoretical approaches to development and the more integrative and complex multidimensional theories that contemporary developmentalists favor. Preferring one of these paradigms can influence the way helping professionals assess and interpret client concerns. Let’s consider how various theoretical orientations to development might apply to a counseling situation:

Juliana is a 26-year-old Latina female who was raised in an intact, middle-class family. Her father was a teacher and her mother a housewife who occasionally worked in a neighborhood preschool as a teacher’s aide. Juliana was the second child in the family, which included an older brother and a younger sister. She attended parochial schools from kindergarten through 12th grade, where she worked very hard to achieve average and sometimes above-average grades. During her early years in school, Juliana had reading difficulties and received remedial instruction. At home, her parents stressed the value of education and kept a close watch on the children. The children were well behaved, respectful, and devoted to the family. Most of their spare time was spent with their close relatives, who lived nearby.

Despite Juliana’s interest in dating during high school, her parents did not permit her to spend time with boyfriends. They told her that she needed to concentrate on her schoolwork so that she could be a nurse when she grew up. After graduation, Juliana entered a small local college and enrolled in a program designed to prepare her for a career in nursing. She lived at home and commuted to school on a daily basis. Life proceeded as it had for most of her high school years. Her course work, however, became increasingly more difficult for her. She also felt isolated from most of her classmates, many of whom were working and living on their own. She tried to participate in some of the college’s social events, but without much satisfaction or success. To pass her science courses, Juliana had to spend most of her time studying. By the middle of her academic program, it was clear that she was in danger of failing. She felt frustrated and angry.

At this point, she became romantically involved with Bill, a young White man who worked at the college. She dropped out of school and moved in with him, hoping their relationship would lead to marriage. Her family was shocked and upset with her decision and put pressure on her to come home. Eventually, the relationship with Bill ended, and Juliana, unwilling to return home, moved in with a group of young students who were looking for someone to share the rent. She found a low-wage job, changed her style of dress to look more like the younger students, and quickly became involved in a series of other romantic relationships. Juliana grew increasingly despondent about her inability to maintain a relationship that would lead to marriage and a family. In addition, she felt some distress about not completing her college degree. She enrolled in a night-school program at a local community college to retake her science courses. Once again, she experienced confusion, problems fitting in, and academic difficulty. She went to the college counseling center to ask for help.

Take a minute to think about how you would respond to Juliana. Do any of your views about development enter into your appraisal of her situation? If you tend to be a stage theorist, you might consider Juliana’s problems to be based on Erikson’s crisis of intimacy in early adulthood (see Table 1.2). She does seem to have difficulties with intimacy, and she is just at the age when these issues are supposed to become central to psychosocial development. But a rigid assumption of age–stage correspondence could prevent you from considering other possibilities, such as an unresolved identity crisis.

If you tend to be an incremental theorist, perhaps favoring social learning explanations, you might perceive Juliana’s situation quite differently. You may see Juliana as having problems in her intimate relationships that are similar to her difficulties with school. In both domains she is apparently “delayed,” perhaps because she has had insufficient opportunities to learn social and academic skills or perhaps because she has been reinforced for behaviors that are ineffective in more challenging contexts. Although this may be a useful way of construing Juliana’s dilemma, any stage issues contributing to her distress may be missed. Also, there could be factors in her social environment, such as cultural expectations, that might not be considered.

If you take a more multidimensional approach, as we do, you will try to remain alert to multiple influences, both proximal and distal, on Juliana’s development. The roles of her biological status, her individual capabilities, her stage of development, her earlier experiences, her family, and her culture will all be considered as possible influences and points of intervention. One disadvantage could be that the complexity of the interacting factors is so great that you may find it difficult to sort out the most effective place to begin. Another disadvantage is that macrosystem influences, such as cultural expectations about appropriate roles for women, may be quite resistant to intervention. However, one of the advantages of a multidimensional view is that it does highlight many possible avenues of intervention, and if you can identify one or a few that are amenable to change, you may have a positive influence on Juliana’s future.

Helping professionals with different developmental assumptions would be likely to choose different approaches and strategies in working with Juliana. In a sense, any set of theoretical biases is like a set of blinders. It will focus your attention on some aspects of the situation and reduce the visibility of other aspects. Taking a multidimensional or systems view has the advantage of minimizing the constraints of those blinders. In any case, knowing your own biases can help you avoid the pitfalls of overreliance on one way of viewing development.

A New Look at Three Developmental Issues

In the following sections, we examine three classic developmental issues that have garnered a great deal of attention in recent years. As you read about these issues from the viewpoint of contemporary research, you will begin to see why modern developmental theories take a multidimensional approach. Notice whether any of the new information causes you to reexamine your own assumptions about development.

Nature and Nurture

How did you respond to the first three items of the questionnaire in Box 1.1? Did you say that physical traits are primarily inherited? Did you say that intelligence or personality is inherited? Your opinions on these matters are likely to be influenced by your cultural background. Through most of the last century, North Americans viewed intelligence as mostly hereditary, but Chinese and Japanese tended to disregard the notion of “native ability” and to consider intellectual achievements more as a function of opportunity and hard work (e.g., Stevenson, Chen, & Lee, 1993). Partially influenced by psychological research, North Americans have begun to see intelligence as at least partially dependent on education and opportunity, but they still are likely to see some achievements, such as in mathematics, as mostly dependent on ability. Alternatively, North Americans have traditionally viewed social adjustment as a result of environmental experiences, especially parents’ nurturance and socialization practices, but East Asians typically see these qualities as mostly inherent traits. Developmental researchers acknowledge that both nature and nurture influence most behavioral outcomes, but in the past they have often focused primarily on one or the other, partly because a research enterprise that examines multiple causes at the same time tends to be a massive undertaking. So, based on personal interest, theoretical bias, and practical limitations, developmental researchers have often systematically investigated one kind of cause of behavior, setting aside examination of other causes. Interestingly, what these limited research approaches have accomplished is to establish impressive bodies of evidence, both for the importance of genes and for the importance of the environment!

What theorists and researchers face now is the difficult task of specifying how the two sets of causes work together: Do they have separate effects that “add up,” for example, or do they qualitatively modify each other, creating together, in unique combinations, unique outcomes? Modern multidimensional theories make the latter assumption, and evidence is quickly accumulating to support this view. Heredity and environment are interdependent: The same genes operate differently in different environments, and the same environments are experienced differently by individuals with different genetic characteristics. Developmental outcomes are always a function of interplay between genes and environment, and the operation of one cannot even be described adequately without reference to the other. In Chapter 2 you will learn more about this complex interdependence.

Neuroplasticity and Critical (Sensitive) Periods

Neuroplasticity refers to changes in the brain that occur as a result of some practice or experience. Neurons, the basic cells of the nervous system, become reorganized as a result of such practice, resulting in new learning and memory. Neural changes were once thought to occur primarily in infancy and early childhood. Contemporary neuroscientists recognize that “there is no period when the brain and its functions are static; changes are continuous throughout the lifespan. The nature, extent and the rates of change vary by region and function assessed and are influenced by genetic as well as environmental factors” (Pascual-Leone & Taylor, 2011, p. 183). The realization that our brains continue to change throughout life has revolutionized the way scientists regard the brain. As we have seen, modern multidimensional theories incorporate descriptions of relative life-long plasticity.

The time-related “variation by region and function” noted previously is at the heart of the classic question about critical (sensitive) periods. Although the brain exhibits plasticity throughout life, do some changes, such as first language learning, occur more easily and more effectively at certain ages and stages? Or, is the organism able to develop or learn any new skill at any time with the right opportunities? There is little doubt that there are some behavioral developments that usually take place within a particular period. In many ways, language acquisition is nearly complete by the age of 5 or 6, for example. But is it possible to acquire a language at another point in the life cycle if this usual time is somehow “missed”? Pinker (1994) reviewed several findings that led him to conclude that although language can be learned at other times, it is never learned as well or as effortlessly as it would have been in the critical period from about 1 to 5 years. Consider deaf individuals learning American Sign Language (ASL). ASL is a “real” symbolic language, with a complex grammar. Sometimes, however, American deaf children are not given the opportunity to learn ASL in their early years, often because of a belief that deaf children should learn to read lips and speak English (the “oralist” tradition). As a result, many deaf children simply do not learn any language. When these individuals are introduced to ASL in late childhood or adolescence, they often fail to acquire the same degree of facility with the grammar that children who learn ASL as preschoolers achieve.

If findings like these mean that a sensitive period has been missed, what could be the cause of such time-dependent learning? It is usually assumed that the end of a sensitive period is due to brain changes that make learning more difficult after the change. The environmental conditions that are likely to support the new learning may also be less favorable at certain times. As we have seen, the explanation is likely to be complex. For example, total immersion in a language may be just the right arrangement for a preschooler who knows no other communicative system. Older learners, even deaf children or adults who have learned no formal language early in life, may always filter a new language through previously established communication methods, such as an idiosyncratic set of hand signals. If so, for an older child or adult, total immersion may be less effective than a learning environment that can make correspondences between the new language and the old one. In later chapters, we will examine this issue as it relates to several developments, such as the emergence of sexual identity (Chapter 8) and the formation of bonds between mothers and infants (Chapter 4). In each case, the evidence indicates that time-dependent, region-specific windows of opportunity for rapid neural reorganization exist alongside continuing plasticity.
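
That closing idea, a window of opportunity alongside continuing plasticity, can be pictured as a curve with an early peak and a nonzero floor. The functional form below is entirely our invention, for illustration only, not an empirical model of language learning.

```python
# Toy curve (invented form, not fitted to data): ease of first-language
# learning peaks in the early sensitive period but never drops to zero,
# reflecting lifelong residual plasticity.
import math

def learning_ease(age):
    floor = 0.2                               # continuing plasticity at any age
    window = math.exp(-((age - 3) / 4) ** 2)  # peak roughly at ages 1-5
    return floor + 0.8 * window

for age in (2, 5, 10, 20, 40):
    print(age, round(learning_ease(age), 2))
```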

Universality and Specificity: The Role of Culture

Developmental science is concerned with explaining the nature and characteristics of change. Are developmental changes universal, having the same qualities across ethnic, racial, or socioeconomic status groups, between genders, and from one historical period to another? Or does development depend entirely on the specific group or time within which it occurs? Many classic developmental theories have posited basic similarities in development across different groups and historical periods. Stage theories, Freud’s in particular, often specify invariant sequences in personality or cognitive outcomes that are thought to apply to everyone, regardless of culture, group, or historical time. Yet even classic stage theories incorporate sociocultural influences.

In Erikson’s psychosocial stage theory, for example, all adolescents confront the task of formulating an adult identity. But the nature of that identity will certainly differ across groups. How complex and arduous a struggle the adolescent might face in forming an identity could vary dramatically depending on her context. Erikson’s studies of identity development in two Native American groups, the Sioux in South Dakota and the Yurok on the Pacific coast, and of mainstream White culture revealed different struggles and different outcomes in each (Erikson, 1950/1963).

Some sociocultural theories, which trace their roots to the work of Lev Vygotsky (e.g., 1934, 1978; see Chapter 3), argue that cognitive developments may be qualitatively different in different cultures (e.g., Rogoff, 2003; Sternberg, 2014). For example, in Western cultures classifying objects by functional associations (birds with nests) is a trademark of preschoolers’ sorting behavior. Hierarchically organized taxonomic classification (e.g., collies and dachshunds grouped as kinds of dogs, dogs and birds grouped as animals) is more typical of elementary-school-age children. Piaget regarded taxonomic sorting to be an indicator of the logical thinking that emerges in middle childhood. But in some ethnic groups, such as the African Kpelle tribe, even adults do not sort objects taxonomically. They use functionally based schemes, perhaps because these are more meaningful in their everyday lives, and they would probably consider such schemes more sophisticated than a taxonomic one (Cole, 1998).

Even sub-cultural groups within the same society may show differences in the development of cognitive skills. For example, in most societies, boys perform better than girls on certain spatial tasks. Yet in a study that included low-, middle-, and high-income children in Chicago, low-income girls performed as well on spatial tasks as low-income boys, even though the usual gender difference was found in the other income groups (Levine, Vasilyeva, Lourenco, Newcombe, & Huttenlocher, 2005). Differences in the typical experiences of children in different social classes are the likely reason for these cognitive variations. Bronfenbrenner (1979) explained how culture could influence behavior through proximal processes, the daily give and take with others in one’s social networks that he considered the primary engines of development. Adults in particular are “cultural experts,” and their interactions with children embody cultural expectations and ways of doing things (Oyserman, 2017).

The bulk of social science research has been done on a relatively narrow sample of WEIRD (Western, Educated, Industrialized, Rich, and Democratic) people (Henrich, Heine, & Norenzayan, 2010), and developmental research is no exception (Fernald, 2010). Researchers have become acutely aware of the need to discover how developmental processes play out among other groups both within and outside North America to answer questions about universal versus specific developmental trajectories. To this end, culture, race, and ethnicity now have greater prominence in research than in the past, even though these constructs have proven somewhat difficult to define (Corbie-Smith et al., 2008).

Formerly, differences among racial groups, like Blacks, Whites, and Asians, were considered to be due to heredity, identifiable primarily by skin color, but also by variations in hair, bone structure, or other physiological markers. The assumption was that these superficial markers were indicators of deeper biological differences. But the range of genetic differences within groups is actually equal to or greater than those between racial groups (Richeson & Sommers, 2016). Racial groupings are now usually seen as social constructions, founded on superficial characteristics that change across time and circumstance (Saperstein, Penner, & Light, 2013). For example, Arab Americans were considered White before the terrorist attacks of September 11, 2001, but now are often considered non-White (Perez & Hirschman, 2010). Ethnicity is sometimes used interchangeably with race, although this too is problematic. Shared ancestry, language, a common place of origin, and a sense of belonging to the group are elements commonly used to describe membership in an ethnic group.

Adding to the complexity, culture includes a community’s shared values, rituals, psychological processes, behavioral norms, and practices (Fiske, Kitayama, Markus, & Nisbett, 1998). The concept clearly overlaps in meaning with “ethnicity.” Early studies often represented culture as a kind of “social address” (Bronfenbrenner, 1979) with gender, race, religion, age, language, ethnic heritage, and socioeconomic status as labels signifying aspects of cultural group membership. Think of your own status in relation to the items on this list. Then consider the status of another person you know. How similar to or different from you is this other person? Is there one category that stands out for you when you try to describe her social/cultural address? For someone you consider to be culturally similar to you, do all the labels overlap? Just a little reflection gives you a taste of the dizzying complexity of such distinctions.

Some research demonstrates that shared values might not be the most reliable indicator of culture. For example, a study of values drawn from approximately 169,000 participants from six continents revealed broad agreement in values, contrary to what one might expect. Autonomy, relatedness, and competence were highly ranked across all cultures although some differences were observed for the value of conformity (Fischer & Schwartz, 2011). This finding questions the assumption that cultures are reliably different in their value systems. People in the same cultural group may be too diverse to justify painting with a broad brush. In addition, increasing global migration and trade, as well as worldwide media and information networks, mean that more and more individuals have multiple ethnic/cultural experiences and many may be polycultural in perspective and behavior (Jensen, 2012; Morris, Chiu, & Liu, 2015). Cultural identities can be difficult to pin down. Currently, there is a tendency to move away from static conceptualizations of what constitutes ethnic/cultural group membership and toward more dynamic, process-oriented definitions for these important variables (Brubaker, 2009). In particular, researchers are concerned about disaggregating social class, or socioeconomic status (SES), from race and ethnic/cultural distinctions. Socioeconomic status is based on social standing or power, and is defined by characteristics of the adults in a household, including educational background, income, and occupation. Frequently, variables of race/ethnicity and SES are conflated in research, leading to questionable findings. This conflation comes in at least two forms. First, SES is a characteristic that can directly affect how people perceive an individual’s racial identity. If someone is affluent or highly educated, others may assume that she is White; if she is on welfare, others might assume that she is a member of a minority (e.g., Penner & Saperstein, 2008).

Second, SES and race/ethnicity can be conflated when research findings are attributed to one factor without examining the other. A good example of why disaggregation is important comes from a study of preschool children’s everyday activities in four cultural communities (Black and White in the United States, Luo in Kenya, and European descent in Porto Alegre, Brazil). Tudge and his colleagues (2006) observed everyday behaviors of preschool children, hypothesizing that each culture provides its young with the kinds of opportunities (e.g., school or work-related activities) deemed important for successful participation in their culture. Equal numbers of high and low SES children within each culture were included to study the intersection of culture and class. The Brazilian children engaged in fewer academic activities compared to White and Kenyan groups. Nonetheless, middle-class Brazilian children were involved in more academic lessons than their working-class counterparts. Kenyan children participated in significantly more work-related activities than all other groups. However, the working-class Kenyan children engaged in twice as much work as those from all other groups, including middle-class Kenyan children.
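
The logic of disaggregation can be shown with a toy analysis. The numbers below are invented, not Tudge et al.'s data; the point is only that a mean pooled by culture can hide a class difference inside that culture.

```python
# Toy illustration (invented numbers) of why disaggregating SES from
# culture matters: pooled means can mask large within-culture class gaps.
from statistics import mean

# hours of work-related activity observed per child (hypothetical)
observations = {
    ("Kenya", "middle-class"): [2, 3, 2],
    ("Kenya", "working-class"): [6, 5, 7],
    ("Brazil", "middle-class"): [1, 2, 2],
    ("Brazil", "working-class"): [2, 1, 2],
}

# pooled by culture only: hides the class gap inside Kenya
for culture in ("Kenya", "Brazil"):
    pooled = [x for (c, _), xs in observations.items() if c == culture for x in xs]
    print(culture, "pooled mean:", round(mean(pooled), 2))

# disaggregated by culture AND class: the gap becomes visible
for (culture, ses), xs in observations.items():
    print(culture, ses, "mean:", round(mean(xs), 2))
```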

As complex as the identification of culture is, it is clear that cultural experience is important to development. But does culture have effects only at a superficial level (e.g., learning different behaviors, manners, customs), or are there effects on more fundamental processes, such as information processing and developing brain structures (Fiske, 2009; Kitayama & Park, 2010)? This is precisely the kind of question that experimenters in the field of cultural psychology have taken on (Sternberg, 2014; Wang, 2016).

Consider, for example, the often-cited distinction between the holistic (interdependent) modes of interacting in cultures of the Eastern hemisphere and the analytic (independent) modes in cultures of the Western hemisphere. A body of research now supports the existence of reliable differences beyond just superficial behavioral ones. First, differences have been identified in information processing (attention, understanding cause and effect, memory, and categorization) between people from Eastern and Western cultures (Nisbett & Masuda, 2003). Some analysts have speculated that the historical-cultural antecedents of these processing differences may be ancient ways of viewing the world common to Chinese and Greek societies respectively (Nisbett, Peng, Choi, & Norenzayan, 2001). In turn, each of these ways may have been shaped by the respective economies of those societies (large-scale farming vs. hunting and trading) along with their different physical environments (open plain vs. seaside mountains). Freed from the interdependence required for massive farming and irrigation projects, ancient Greeks, the forebears of Western societies, came to view the world by focusing on central objects. In other words, the ancient Greeks inhabited a world where objects were typically perceived as relatively unchanging and detached from their context. The objects’ features (size, shape, color, etc.) were investigated so as to understand their operating rules and to predict and control their operations. Logic and scientific empiricism are related to this perspective on the world.

Needing to pay attention to the larger context in order to thrive, members of Eastern societies such as China focused more holistically on interrelationships, paying as much attention to the field wherein objects existed as to the objects themselves. Understanding the world from this perspective was more likely to incorporate figure-ground relationships and to hold the dialectic of opposing points of view in balance (Nisbett & Masuda, 2003). It could be that these fundamental differences in Greek and Chinese social organization and cognition, based upon geographical constraints and the exigencies of survival, continued to affect the development of people and societies that followed in their wake.

Any study of cultural differences embodies a fundamental wish to see the world as others do. Imagine that you could literally see what people pay attention to as a way of gaining knowledge about their perspective. Seeing the world through the eyes of people who live in the Eastern and Western regions of the globe could be a fruitful place to start because any differences that evolved from long histories of practice might be more obvious. Recent studies of attention and visual processing that compare these two cultural groups, made possible by the development of a variety of advanced technologies, have indeed proved fruitful.

Results of many studies have demonstrated a greater tendency among Eastern participants to attend to context when compared to Western participants who are more likely to attend to central objects (e.g., in photos of animals in complex environments; Boduroglu, Shah, & Nisbett, 2009; Kitayama, Duffy, Kawamura, & Larsen, 2003; Masuda & Nisbett, 2001, 2006). Eastern research subjects also process groups of items in relationship to each other rather than by category (e.g., linking a cow and grass instead of a cow and chicken; Chiu, 1972; Ji, Zhang, & Nisbett, 2004). They typically make judgments of contextual characteristics more quickly than Westerners, and they engage fewer areas of the brain in the process (Goh et al., 2013). The Eastern emphasis on field also extends to making causal attributions for events. When Westerners were asked to explain the reasons for outcomes in athletic competitions or the causes of criminal events, they emphasized internal traits as causal, whereas Easterners gave more contextualized explanations for outcomes (Choi, Nisbett, & Norenzayan, 1999).

In an ingenious study demonstrating the interrelated effects of culture, development, and neuroplasticity, Goh et al. (2007) showed that what you pay attention to makes a subtle yet enduring difference in your brain over time. Repeated practice results in changes in the brain that become our preferred modes of thought and action. The cultural shaping of visual processing in the brain was explored in groups of young and old North Americans and East Asians from Singapore. Researchers studied the visual ventral cortex, a complex of brain structures responsible for identifying what is being processed visually (Farah, Rabinowitz, & Quinn, 2000). Some parts of this complex process object information and other parts process background information (see Park & Huang, 2010). The inclusion of older individuals in this study allowed researchers to analyze whether sustained cultural experience with analytic (central object) versus holistic (background/context) processing sculpted the brain in unique ways over time. During experimental sessions utilizing an adapted functional magnetic resonance imaging (fMR-A) paradigm (see Chapter 2), which shows which parts of the brain are in use during different tasks, all participants viewed pictures of objects in scenes. As you can see in Figure 1.3, young Westerners’ and Easterners’ brains were similar in where and to what extent they processed objects versus backgrounds. Older participants from both cultures showed reduced processing of objects relative to backgrounds compared to younger participants. But there was an East/West difference in the older participants: The older Asian participants showed much more of a reduction than older Western participants in object processing. These older Asians did not lose their ability to focus on central objects, but they needed to be prompted to do so. For them, holistic processing had become the default mode, suggesting a lifetime cultural habit of attention to context.

A New Look at Three Developmental Issues

In the following sections, we examine three classic developmental issues that have garnered a great deal of attention in recent years. As you read about these issues from the viewpoint of contemporary research, you will begin to see why modern developmental theories take a multidimensional approach. Notice whether any of the new information causes you to reexamine your own assumptions about development.

Nature and Nurture

How did you respond to the first three items of the questionnaire in Box 1.1? Did you say that physical traits are primarily inherited? Did you say that intelligence or personality is inherited? Your opinions on these matters are likely to be influenced by your cultural background. Through most of the last century, North Americans viewed intelligence as mostly hereditary, but Chinese and Japanese tended to disregard the notion of “native ability” and to consider intellectual achievements more as a function of opportunity and hard work (e.g., Stevenson, Chen, & Lee, 1993). Partially influenced by psychological research, North Americans have begun to see intelligence as at least partially dependent on education and opportunity, but they still are likely to see some achievements, such as in mathematics, as mostly dependent on ability. Alternatively, North Americans have traditionally viewed social adjustment as a result of environmental experiences, especially parents’ nurturance and socialization practices, but East Asians typically see these qualities as mostly inherent traits. Developmental researchers acknowledge that both nature and nurture influence most behavioral outcomes, but in the past they have often focused primarily on one or the other, partly because a research enterprise that examines multiple causes at the same time tends to be a massive undertaking. So, based on personal interest, theoretical bias, and practical limitations, developmental researchers have often systematically investigated one kind of cause of behavior, setting aside examination of other causes. Interestingly, what these limited research approaches have accomplished is to establish impressive bodies of evidence, both for the importance of genes and for the importance of the environment!

What theorists and researchers face now is the difficult task of specifying how the two sets of causes work together: Do they have separate effects that “add up,” for example, or do they qualitatively modify each other, creating together, in unique combinations, unique outcomes? Modern multidimensional theories make the latter assumption, and evidence is quickly accumulating to support this view. Heredity and environment are interdependent: The same genes operate differently in different environments, and the same environments are experienced differently by individuals with different genetic characteristics. Developmental outcomes are always a function of interplay between genes and environment, and the operation of one cannot even be described adequately without reference to the other. In Chapter
2 you will learn more about this complex interdependence.

Neuroplasticity and Critical

(Sensitive) Periods

Neuroplasticity refers to changes in the brain that occur as a result of some practice or experience. Neurons, the basic cells of the nervous system, become reorganized as a result of such practice, resulting in new learning and memory. Neural changes were once thought to occur primarily in infancy and early childhood. Contemporary neuroscientists recognize that “there is no period when the brain and its functions are static; changes are continuous throughout the lifespan. The nature, extent and the rates of change vary by region and function assessed and are influenced by genetic as well as environmental factors” (Pascual-Leone & Taylor, 2011, p. 183). The realization that our brains continue to change throughout life has revolutionized the way scientists regard the brain. As we have seen, modern multidisciplinary theories incorporate descriptions of relative life-long plasticity.

The time-related “variation by region and function” noted previously is at the heart of the classic question about critical (sensitive) periods. Although the brain exhibits plasticity throughout life, do some changes, such as first language learning, occur more easily and more effectively at certain ages and stages? Or, is the organism able to develop or learn any new skill at any time with the right opportunities? There is little doubt that there are some behavioral developments that usually take place within a particular period. In many ways, language acquisition is nearly complete by the age of 5 or 6, for example. But is it possible to acquire a language at another point in the life cycle if this usual time is somehow “missed”? Pinker (1994) reviewed several findings that led him to conclude that although language can be learned at other times, it is never learned as well or as effortlessly as it would have been in the critical period from about 1 to 5 years. Consider deaf individuals learning American Sign Language (ASL). ASL is a “real” symbolic language, with a complex grammar. Sometimes, however, American deaf children are not given the opportunity to learn ASL in their early years, often because of a belief that deaf children should learn to read lips and speak English (the “oralist” tradition). As a result, many deaf children simply do not learn any language. When these individuals are introduced to ASL in late childhood or adolescence, they often fail to acquire the same degree of facility with the grammar that children who learn ASL as preschoolers achieve.

If findings like these mean that a sensitive period has been missed, what could be the cause of such time-dependent learning? It is usually assumed that the end of a sensitive period is due to brain changes that make learning more difficult after the change. The environmental conditions that are likely to support the new learning may also be less favorable at certain times. As we have seen, the explanation is likely to be complex. For example, total immersion in a language may be just the right arrangement for a preschooler who knows no other communicative system. Older learners, even deaf children or adults who have learned no formal language early in life, may always filter a new language through previously established communication methods, such as an idiosyncratic set of hand signals. If so, for an older child or adult, total immersion may be less effective than a learning environment that can make correspondences between the new language and the old one. In later chapters, we will examine this issue as it relates to several developments, such as the emergence of sexual identity (
Chapter 8) and the formation of bonds between mothers and infants (
Chapter 4). In each case, the evidence indicates that time-dependent, region-specific windows of opportunity for rapid neural reorganization exist alongside continuing plasticity.

Universality
and Specificity: The Role of Culture

Developmental science is concerned with explaining the nature and characteristics of change. Are developmental changes universal, having the same qualities across ethnic, racial, or socioeconomic status groups, between genders, and from one historical period to another? Or does development depend entirely on the specific group or time within which it occurs? Many classic developmental theories have posited basic similarities in development across different groups and historical periods. Stage theories, like Freud’s theory in particular, often specify invariant sequences in personality or cognitive outcomes that are thought to apply to everyone, regardless of culture, group, or historical time. Yet even classic stage theories incorporate sociocultural influences.

In Erikson’s psychosocial stage theory, for example, all adolescents confront the task of formulating an adult identity. But the nature of that identity will certainly differ across groups. How complex and arduous a struggle the adolescent might face in forming an identity could vary dramatically depending on her context. Erikson’s studies of identity development in two Native American groups, the Sioux in South Dakota and the Yurok on the Pacific coast, and of mainstream White culture revealed different struggles and different outcomes in each (Erikson, 1950/1963).

Some sociocultural theories, which trace their roots to the work of Lev Vygotsky (e.g., 1934, 1978; see
Chapter 3), argue that cognitive developments may be qualitatively different in different cultures (e.g., Rogoff, 2003; Sternberg, 2014). For example, in Western cultures classifying objects by functional associations (birds with nests) is a trademark of preschoolers’ sorting behavior. Hierarchically organized taxonomic classification (e.g., collies and dachshunds grouped as kinds of dogs, dogs and birds grouped as animals) is more typical of elementary-school-age children. Piaget regarded taxonomic sorting to be an indicator of the logical thinking that emerges in middle childhood. But in some ethnic groups, such as the African Kpelle tribe, even adults do not sort objects taxonomically. They use functionally based schemes, perhaps because these are more meaningful in their everyday lives, and they would probably consider such schemes more sophisticated than a taxonomic one (Cole, 1998).

Even sub-cultural groups within the same society may show differences in the development of cognitive skills. For example, in most societies, boys perform better than girls on certain spatial tasks. Yet in a study that included low-, middle-, and high-income children in Chicago, low-income girls performed as well on spatial tasks as low-income boys, even though the usual gender difference was found in the other income groups (Levine, Vasilyeva, Lourenco, Newcombe, & Huttenlocher, 2005). Differences in the typical experiences of children in different social classes are the likely reason for these cognitive variations. Bronfenbrenner (1979) explained how culture could influence behavior through proximal processes, the daily give and take with others in one’s social networks that he considered the primary engines of development. Adults in particular are “cultural experts,” and their interactions with children embody cultural expectations and ways of doing things (Oyserman, 2017).

The bulk of social science research has been done on a relatively narrow sample of WEIRD (Western, Educated, Industrialized, Rich, and Democratic) people (Henrich, Heine, & Norenzayan, 2010), and developmental research is no exception (Fernald, 2010). Researchers have become acutely aware of the need to discover how developmental processes play out among other groups both within and outside North America to answer questions about universal versus specific developmental trajectories. To this end, culture, race, and ethnicity now have greater prominence in research than in the past, even though these constructs have proven somewhat difficult to define (Corbie-Smith et al., 2008).

Formerly, differences among racial groups, like Blacks, Whites, and Asians, were considered to be due to heredity, identifiable primarily by skin color, but also by variations in hair, bone structure, or other physiological markers. The assumption was that these superficial markers were indicators of deeper biological differences. But the range of genetic differences within groups is actually equal to or greater than those between racial groups (Richeson & Sommers, 2016). Racial groupings are now usually seen as social constructions, founded on superficial characteristics that change across time and circumstance (Saperstein, Penner, & Light, 2013). For example, Arab Americans were considered White before the terrorist attacks of September 11, 2001, but now are often considered non-White (Perez & Hirschman, 2010). Ethnicity is sometimes used interchangeably with race, although this too is problematic. Shared ancestry, language, a common place of origin, and a sense of belonging to the group are elements commonly used to describe membership in an ethnic group.

Adding to the complexity, culture includes a community’s shared values, rituals, psychological processes, behavioral norms, and practices (Fiske, Kitayama, Markus, & Nisbett, 1998). The concept clearly overlaps in meaning with “ethnicity.” Early studies often represented culture as a kind of “social address” (Bronfenbrenner, 1979) with gender, race, religion, age, language, ethnic heritage, and socioeconomic status as labels signifying aspects of cultural group membership. Think of your own status in relation to the items on this list. Then consider the status of another person you know. How similar to or different from you is this other person? Is there one category that stands out for you when you try to describe her social/cultural address? For someone you consider to be culturally similar to you, do all the labels overlap? Just a little reflection gives you a taste of the dizzying complexity of such distinctions.

Some research demonstrates that shared values might not be the most reliable indicator of culture. For example, a study of values drawn from approximately 169,000 participants from six continents revealed broad agreement in values, contrary to what one might expect. Autonomy, relatedness, and competence were highly ranked across all cultures although some differences were observed for the value of conformity (Fischer & Schwartz, 2011). This finding questions the assumption that cultures are reliably different in their value systems. People in the same cultural group may be too diverse to justify painting with a broad brush. In addition, increasing global migration and trade, as well as worldwide media and information networks, mean that more and more individuals have multiple ethnic/cultural experiences and many may be polycultural in perspective and behavior (Jensen, 2012; Morris, Chiu, & Liu, 2015). Cultural identities can be difficult to pin down. Currently, there is a tendency to move away from static conceptualizations of what constitutes ethnic/cultural group membership and toward more dynamic, process-oriented definitions for these important variables (Brubaker, 2009). In particular, researchers are concerned about disaggregating social class, or socioeconomic status (SES), from race and ethnic/cultural distinctions. Socioeconomic status is based on social standing or power, and is defined by characteristics of the adults in a household, including educational background, income, and occupation. Frequently, variables of race/ethnicity and SES are conflated in research, leading to questionable findings. This conflation comes in at least two forms. First, SES is a characteristic that can directly affect how people perceive an individual’s racial identity. If someone is affluent or highly educated, others may assume that she is White; if she is on welfare, others might assume that she is a member of a minority (e.g., Penner & Saperstein, 2008).

Second, SES and race/ethnicity can be conflated when research findings are attributed to one factor without examining the other. A good example of why disaggregation is important comes from a study of preschool children’s everyday activities in four cultural communities (Black and White in the United States, Luo in Kenya, and European descent in Porto Allegre, Brazil). Tudge and his colleagues (2006) observed everyday behaviors of preschool children, hypothesizing that each culture provides its young with the kinds of opportunities (e.g., school or work-related activities) deemed important for successful participation in their culture. Equal numbers of high and low SES children within each culture were included to study the intersection of culture and class. The Brazilian children engaged in fewer academic activities compared to White and Kenyan groups. Nonetheless, middle-class Brazilian children were involved in more academic lessons than their working-class counterparts. Kenyan children participated in significantly more work-related activities than all other groups. However, the working-class Kenyan children engaged in twice as much work as those from all other groups, including middle-class Kenyan children.

As complex as the identification of culture is, it is clear that cultural experience is important to development. But does culture have effects only at a superficial level (e.g., learning different behaviors, manners, customs), or are there effects on more fundamental processes, such as information processing and developing brain structures (Fiske, 2009; Kitayama & Park, 2010)? This is precisely the kind of question that experimenters in the field of cultural psychology have taken on (Sternberg, 2014; Wang, 2016). Consider, for example, the often-cited distinction between the holistic (interdependent) modes of interacting in cultures of the Eastern hemisphere and the analytic (independent) modes in cultures of the Western hemisphere. A body of research now supports the existence of reliable differences beyond just superficial behavioral ones. First, differences have been identified in information processing (attention, understanding cause and effect, memory, and categorization) between people from Eastern and Western cultures (Nisbett & Masuda, 2003). Some analysts have speculated that the historical-cultural antecedents of these processing differences may be ancient ways of viewing the world common to Chinese and Greek societies respectively (Nisbett, Peng, Choi, & Norenzayan, 2001). In turn, each of these ways may have been shaped by the respective economies of those societies (large-scale farming vs. hunting and trading) along with their different physical environments (open plain vs. seaside mountains). Freed from the interdependence required for massive farming and irrigation projects, ancient Greeks, the forebears of Western societies, came to view the world by focusing on central objects. In other words, the ancient Greeks inhabited a world where objects were typically perceived as relatively unchanging and detached from their context. The objects’ features (size, shape, color, etc.) were investigated so as to understand their operating rules and to predict and control their operations. Logic and scientific empiricism are related to this perspective on the world.

Needing to pay attention to the larger context in order to thrive, members of Eastern societies such as China focused more holistically on interrelationships, paying as much attention to the field wherein objects existed as to the objects themselves. Understanding the world from this perspective was more likely to incorporate figure-ground relationships and to hold the dialectic of opposing points of view in balance (Nisbett & Masuda, 2003). It could be that these fundamental differences in Greek and Chinese social organization and cognition, based upon geographical constraints and the exigencies of survival, continued to affect the development of people and societies that followed in their wake.

Any study of cultural differences embodies a fundamental wish to see the world as others do. Imagine that you could literally see what people pay attention to as a way of gaining knowledge about their perspective. Seeing the world through the eyes of people who live in the Eastern and Western regions of the globe could be a fruitful place to start because any differences that evolved from long histories of practice might be more obvious. Recent studies of attention and visual processing that compare these two cultural groups, made possible by the development of a variety of advanced technologies, have indeed proved fruitful.

Results of many studies have demonstrated a greater tendency among Eastern participants to attend to context when compared to Western participants who are more likely to attend to central objects (e.g., in photos of animals in complex environments; Boduroglu, Shah, & Nisbett, 2009; Kitayama, Duffy, Kawamura, & Larsen, 2003; Masuda & Nisbett, 2001, 2006). Eastern research subjects also process groups of items in relationship to each other rather than by category (e.g., linking a cow and grass instead of a cow and chicken; Chiu, 1972; Ji, Zhang, & Nisbett, 2004). They typically make judgments of contextual characteristics more quickly than Westerners, and they engage fewer areas of the brain in the process (Goh et al., 2013). The Eastern emphasis on field also extends to making causal attributions for events. When Westerners were asked to explain the reasons for outcomes in athletic competitions or the causes of criminal events, they emphasized internal traits as causal, whereas Easterners gave more contextualized explanations for outcomes (Choi, Nisbett, & Norenzayan, 1999).

In an ingenious study demonstrating the interrelated effects of culture, development, and neuroplasticity, Goh et al. (2007) show that what you pay attention to makes a subtle yet enduring difference in your brain over time. Repeated practice results in changes in the brain that become our preferred modes of thought and action. The cultural shaping of visual processing in the brain was explored in groups of young and old North Americans and East Asians from Singapore. Researchers studied the visual ventral cortex, a complex of brain structures responsible for identifying what is being processed visually (Farah, Rabinowitz, & Quinn, 2000). Some parts of this complex process object information and other parts process background information (see Park & Huang, 2010). The inclusion of older individuals in this study allowed researchers to analyze whether sustained cultural experience with analytic (central object) versus holistic (background/context) processing sculpted the brain in unique ways over time. During experimental sessions utilizing a functional magnetic resonance adaptation (fMR-A) paradigm (see Chapter 2), which shows which parts of the brain are in use during different tasks, all participants viewed pictures of objects in scenes. As you can see in Figure 1.3, young Westerners’ and Easterners’ brains were similar in where and to what extent they processed objects versus backgrounds. Older participants from both cultures showed reduced processing of objects relative to backgrounds compared to younger participants. But there was an East/West difference in the older participants: The older Asian participants showed much more of a reduction than older Western participants in object processing. These older Asians did not lose their ability to focus on central objects, but they needed to be prompted to do so. For them, holistic processing had become the default mode, suggesting a lifetime cultural habit of attention to context.

Focus on Developmental Psychopathology

In several chapters of this text, you will find sections titled Focus on Developmental Psychopathology, which highlight developmental approaches to specific behavioral disorders. These sections emphasize work in the field of developmental psychopathology, offering clinicians a unique perspective on dysfunctional behavior by integrating work from many disciplines, including developmental psychology, clinical psychology, abnormal psychology, biology, and genetics. This field takes a life span perspective on aberrant behavior by assuming that it is an outgrowth of complex but lawful developmental processes (e.g., Rutter & Sroufe, 2000). Unhealthy social, emotional, and behavioral processes, like depression or conduct disorder, emerge in the same way that healthy ones do: as a function of the individual’s attempts to adapt to her environment in interaction with the environment’s actions on the individual. Behaviors or coping strategies that are adaptive in one developmental circumstance can be maladaptive in other concurrent contexts, or they may establish a trajectory that can result in maladaptive outcomes later. “In contrast to the often dichotomous world of mental disorder/nondisorder depicted in psychiatry, a developmental psychopathology perspective recognizes that normality often fades into abnormality, adaptive and maladaptive may take on differing definitions depending on whether one’s time referent is immediate circumstances or long-term development, and processes within the individual can be characterized as having shades or degrees of psychopathology” (Cicchetti & Toth, 2006, p. 498).

Developmental psychopathology is largely guided by multidimensional or systems theories of development (e.g., Cicchetti & Sroufe, 2000; Cicchetti & Toth, 2006; Rutter & Sroufe, 2000; Sameroff, 2000; Sameroff & MacKenzie, 2003). Every individual is seen as an active organism, influenced by multiple levels of internal processes, and continuously adapting to multiple embedded contexts. Increasingly, culture is also viewed as something that one “develops” over time through an intricate interplay of biology and environment. More than a fixed social group membership with only higher-order, macrosystem effects, culture should be viewed as a primary, proximal influence on development, affecting the course of beneficial and harmful outcomes as the person progresses through the life span (Causadias, 2013).

Abnormality results from the same proximal processes that produce more normative patterns: As the individual transacts with the environment, she attempts to meet her needs and to adjust to environmental inputs and demands. She brings both strengths and vulnerabilities to these transactions, and the environment contributes both stressors and supports. Both the individual and the environment are somewhat altered by each transaction, reciprocally influencing each other. That is, the change processes are bidirectional. The individual’s strengths and vulnerabilities as well as environmental stressors and supports are all variables or factors that impact the overall development of the individual. Both healthy and unhealthy outcomes are the result of the interplay of the individual’s characteristics and her experiences across time. “Single factors can be potent in destroying systems . . . a gunshot can destroy a child. But single factors cannot create a child or any other living system” (Sameroff, 2000, p. 37).
The individual’s strengths and the environment’s supports are protective factors, helping to promote healthy outcomes; the individual’s vulnerabilities and the environmental stressors she experiences are risk factors that can interfere with healthy development. Among the individual’s characteristics that may matter are various genetic and other biological factors, temperamental traits, cognitive capacities, social skills, attitudes, beliefs, and so on. Among the environmental factors are socioeconomic status, safety of the neighborhood, quality of the schools, family history and culture, parental nurturing and monitoring, peer attitudes, friendships, marital and community supports, cultural dynamics including racial and ethnic processes, and so on.

You may recognize that many of these risks seem to fit together. As Garbarino (1999) points out, many children “fall victim to the unfortunate synchronicity between the demons inhabiting their own internal world and the corrupting influences of modern American culture” (p. 23). So, not only do risks gain power as they accumulate, but they also operate in clusters that serve as “correlated constraints” (Cairns & Cairns, 1994). In other words, they reinforce each other by their redundancy and work together to shape the developmental trajectory. As a result, although altering one or just a few risk factors can have a positive impact on behavior and outcomes, sometimes such limited changes have little effect because the other related risks maintain the status quo.

Certain risks, as well as certain protections, become more important at different points in development. For example, the protection offered by prosocial peers and the risks associated with exposure to deviant ones are particularly powerful as children approach adolescence but less so in early childhood (Bolger & Patterson, 2003). On the other hand, some protections, such as authoritative parenting, appear to retain their power throughout childhood and adolescence. Some risk factors (e.g., deficits in perspective taking or problems with peer relationships) may be related to the development of multiple disorders, such as conduct disorder or depression.

Work in developmental psychopathology has brought into focus the importance of both mediating and moderating relationships between variables or factors in development. Let’s begin with mediating variables. Suppose that one factor appears to be a cause of some behavioral outcome. For example, when a child experiences early, pervasive poverty, she is at higher risk than other children for developing mental health problems and medical diseases in adulthood, from depression to cardiovascular disease to some cancers (Chen, 2004). Even if children’s economic circumstances improve in later childhood or adulthood, the increased risk of adult problems persists. One mediating variable that links early poverty to later health vulnerability is a compromised immune system. Specifically, poor children are more prone to inflammation. Changes in the functioning of certain genes cause this “pro-inflammatory profile,” which lasts into adulthood and can contribute to poor health, including some mental health problems like depression (Chen, Miller, Kobor, & Cole, 2011).

Moderating variables are those that affect the strength of the relationship between other variables (Baron & Kenny, 1986). They interact with causal factors, altering and sometimes even eliminating their effects on outcome variables. For example, researchers have found that not all adults exposed to early poverty are characterized by a “pro-inflammatory profile” (Chen, Miller, Kobor, & Cole, 2011). Adults who suffered chronic early poverty but who report having a warm, supportive relationship with their mothers in childhood often have normal immune system functioning. Warm mothering appears to be a protective factor that moderates the impact of early poverty, a risk factor. See Figure 1.5 for a graphic illustration of both mediating and moderating factors related to early poverty’s effects.

Recent research in psychopathology has focused on the role of endophenotypes as mediators and moderators. Endophenotypes are biobehavioral processes that can be traced to genes. These processes serve as intermediary links between the actual genes that contribute to disorders and their expressed behavioral manifestations. The “pro-inflammatory profile” that serves as a mediator between early childhood poverty and later mental and physical health problems is an example of an endophenotype, because it has been found to result from epigenetic processes (Chen et al., 2011; see Chapter 2). Lenroot and Giedd (2011) artfully describe endophenotypes as “bridges between molecules and behavior” (p. 429). Study of these intermediary links can help us better understand the processes by which genetic information exerts influence on observable behavior (Gottesman & Gould, 2003).

Because so many interacting factors are involved, there is no such thing as perfect prediction of who will have healthy outcomes, who will not, when problems may arise, and how they will evolve. A rough guideline for prediction is that the more risk factors and the fewer protective factors there are, the more likely an individual is to have adjustment problems. Developmental psychopathology also recognizes two axiomatic principles: multifinality and equifinality (e.g., Cicchetti & Rogosch, 1996). The principle of multifinality is that individual pathways of development may result in a wide range of possible outcomes. For example, children exhibiting conduct-disordered behavior in the elementary school years may, as adults, display one or more of several different disorders, including antisocial personality, depression, substance abuse, and so on. The complementary principle of equifinality specifies that different early developmental pathways can produce similar outcomes. For example, Sroufe (1989) has demonstrated two pathways, one primarily biological and one primarily related to parenting style, that lead to attention-deficit hyperactivity disorder (ADHD). Using these ideas from systems theory allows for the study of multiple subgroups and multiple pathways to disorders. Most important, it allows for a more realistic look at the problems people face (Cicchetti & Toth, 1994).
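To make the statistical idea of moderation concrete, here is a minimal, hypothetical sketch in Python. The variable names, data, and effect sizes are all invented to echo the poverty and warm-mothering example above; they are not the data from the studies cited.

```python
# Hypothetical moderation analysis: does maternal warmth (moderator) change
# the strength of the link between early poverty (risk) and adult
# inflammation (outcome)? All values are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
poverty = rng.integers(0, 2, n)   # 1 = chronic early poverty
warmth = rng.integers(0, 2, n)    # 1 = warm, supportive mothering
# Simulate the pattern described in the text: poverty raises inflammation
# only when warmth is absent (warmth "buffers" the risk).
inflammation = 1.0 * poverty * (1 - warmth) + rng.normal(0, 1, n)

df = pd.DataFrame({"poverty": poverty, "warmth": warmth,
                   "inflammation": inflammation})
# The poverty:warmth interaction term is what captures moderation; a
# significant negative coefficient means warmth weakens poverty's effect.
model = smf.ols("inflammation ~ poverty * warmth", data=df).fit()
print(model.summary().tables[1])
```

The poverty × warmth interaction term is what captures moderation. A mediation analysis, by contrast, would test whether a third variable, such as the pro-inflammatory profile, statistically transmits the effect of poverty to the outcome.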

Here again, biobehavioral research is taking us a step closer to making better predictions about who will be affected by disorders and who will have more healthy outcomes. It may also help us unravel the mystery of multifinality. For example, why do some emotionally abused children become depressed as they age while others develop substance abuse disorders? Research on transdiagnostic risk factors (Nolen-Hoeksema & Watson, 2011), which are conceptually similar to endophenotypes, takes a close look at how risk factors other than the target one (e.g., history of emotional abuse) may moderate the effects of the target factor. Assessment of the moderating impact of different risk factors on each other is aimed at explaining “divergent trajectories”: how different disorders evolve from the same target risk factors. Careful explication of these intervening sequences supports the fundamental goals of these new scientific fields.

Two primary goals of developmental psychopathology are to increase the probability of successfully predicting problematic outcomes and to find ways of preventing them. Developmental psychopathology is therefore closely linked to the field of prevention science, which aims at designing and testing prevention and intervention techniques for promoting healthy development in at-risk groups. Developmental psychopathologists also emphasize the value of studying individuals at the extremes of disordered behavior, for the purpose of enlightening us about how developmental processes work for everyone. Consider one example: Typically developing children eventually form a coherent and relatively realistic notion of self, so that they distinguish the self from others; they differentiate the real from the imagined; they form integrated memories of what they have done and experienced, and so on (see Chapters 5 and 7). Our understanding of when and how a coherent sense of self emerges in normal development has benefited from studies of maltreated children whose sense of self is often disorganized. In one study comparing maltreated with non-maltreated preschoolers, maltreated children showed substantially more dissociative behaviors, such as talking to imaginary playmates, being forgetful or confused about things the child should know, and lying to deny misbehavior even when the evidence is clear (Macfie, Cicchetti, & Toth, 2001). Note that all of these kinds of behaviors are typical of preschoolers sometimes. But finding that non-maltreated preschoolers are less likely to engage in these behaviors than maltreated youngsters helps substantiate two things: First, relationships with caregivers are important to the development of a coherent self-system, and second, typically developing preschoolers are beginning to form a cohesive self-system even though the process is not complete.

The field of developmental psychopathology has several practical implications for clinical practice. First, interventions and treatments need to be developmentally appropriate to be effective. One approach will not fit all situations or age groups. For example, maltreated preschoolers showing signs of excessive dissociative behavior can be helped to form a more coherent self-system if helpers intervene with primary caregivers to increase their positivity, sensitivity, and responsivity (see Chapters 4 and 5); interventions with adults who suffer from dissociative behaviors would require other approaches. Second, periods throughout the life span marked by disequilibrium or disorganization with resultant reorganization may be considered points at which individuals might be most receptive to change. Developmental psychopathologists suggest that at these sensitive periods, interventions may be most effective because the individual can incorporate treatment into newly emerging levels of cognitive, emotional, and behavioral organization. Thus, the issue of timing of interventions is one of great interest to this field. In addition, the wide variety of possible pathways and outcomes involved in the development of psychopathology is an argument for the use of multiple means of intervention and treatment. However, interventions should be carefully considered and based on a thoughtful assessment of a person’s developmental level and quality of adaptation, the contexts that the person must function within, and the availability of external supports. This discipline’s ideas and research findings hold out great promise for helpers.

Case Study

Anna is a 9-year-old third-grade student in a public school on the outskirts of a large industrial city. She is the oldest of three children who live in an apartment with their mother, a 29-year-old White woman recently diagnosed with rheumatoid arthritis. Despite her young age, Anna’s past history is complicated. Anna’s biological father, Walter, is a 37-year-old man who emigrated from Eastern Europe when he was in his early 20s. He married Anna’s mother, Karen, when she was 19 years old. The couple married hastily and had a child, Anna, but Walter abandoned the family shortly after Anna’s birth. Walter and Karen had fought constantly about his problems with alcohol. Karen was particularly upset about Walter’s behavior because her own father, now deceased, had suffered from alcoholism and left her mother without sufficient resources to care for herself.

Alone with a child to support and only a high school degree, Karen went to work in the office of a small family-owned business. There she met Frank, one of the drivers who worked sporadically for the company. They married within a few months of meeting and, within another year, had a son named John. Karen, with Frank’s grudging consent, decided not to tell Anna about her biological father. She reasoned that Anna deserved to believe that Frank, who filled the role of father to both children, was her real parent. Anna was developing normally and seemed to be attached to Frank. But, unknown to Karen, Frank had some problems of his own. He had been incarcerated for theft as a young man and had an inconsistent employment history. The family struggled to stay together through many ups and downs. When Anna was 6, Karen became pregnant again. Frank wanted Karen to have an abortion because he didn’t think the family’s finances could support another child. Karen refused, saying that she would take on another job once the new baby was born. Ultimately, the marriage did not survive the many stresses the couple faced, and Karen and Frank were divorced when Anna was 7.

Karen’s situation at work is tenuous because of her medical condition. Her employer balks at making accommodations for her, and she fears she might be let go. After the divorce, Karen filed for child support, and Frank was directed to pay a certain amount each month for the three children, but Frank was outraged that he should have to pay for Anna’s care because she was not his biological child. During a particularly difficult conversation, Frank told Anna the “truth” that he was not her “real” father. Karen, still unable to deal with this issue, insisted to Anna that Frank was her biological parent. Karen could not bring herself to mention Walter, whose existence had never been mentioned to the children before. Karen desperately needed the money for Anna’s support, especially because she had amassed substantial credit card debt. She felt her only pleasure was watching shopping shows on TV and ordering items for her children.

In school, Anna is struggling to keep up with her peers. Her academic performance is a full grade level behind, and her teachers are concerned. The school Anna attends has high academic standards and pressures for achievement are intense. Anna behaves in immature ways with peers and adults, alternating between excessive shyness and overly affectionate behavior. She does not appear to have any friendships.

Chapter 2

Genetics, Epigenetics, and the Brain: The Fundamentals of Behavioral Development

Leo and George are identical twins. When they were born, George was a half-pound heavier than Leo, but otherwise they were so much alike that most people could not tell them apart. As they grew, friends and family used different tricks to distinguish them. If you were very observant, for instance, you might notice that George had a mole low on his left cheek, almost at the chin line. Leo also had a mole, but it was located just above his left cheekbone. Nonetheless, the boys were similar enough that they occasionally amused themselves by standing in for each other—fooling their third-grade teachers for a whole day of school or switching places with their junior prom dates. They had a shared passion for music and were gifted instrumentalists by their late teens. Yet, the boys also began to diverge more and more, both physically and psychologically. Leo’s hair was somewhat lighter than George’s by early adolescence; George was more agile on the soccer field. Leo began to excel in math and chose astronomy as his major in college. George became a theatre major and hoped for a career as a performer. Throughout their lives, George was more placid than Leo, who was more easily agitated. In early adulthood, Leo was diagnosed with schizophrenia. George suffered for his brother but never developed the disorder himself.

Identical twins will help us tell the story of heredity and environment. They are called “identical” because they carry the same biological inheritance. You may assume that twins’ great similarity is due to their identical heredity—and that their differences must be somehow due to their environments. But what does that really mean? How can environments make a difference in traits such as the location of a mole or the shade of hair color? In fact, the similarities and the differences are the outcome of both heredity and environment. Neither can work alone.

The Nature–Nurture Illusion

Images like this one have been used by Gestalt psychologists to illustrate a perceptual phenomenon known as “figure-ground,” but we introduce it here because it provides a useful model for understanding the nature–nurture debate. No one would dispute that both heredity and environment influence human development, but when we focus on information about one of these contributors, the other seems to fade into the background. Evidence from both sides is compelling, and the helper may be persuaded to attend to one side of the argument to the exclusion of the other. The challenge is to guard against taking such a one-sided perspective, which allows for consideration of only half of the story. The most effective way that we know to avoid this kind of oversimplification is to understand the fundamentals of how gene-environment interactions function. The “take away” message, as you will see, is that genes can do nothing without environmental input—and that environmental effects are shaped by genetic constraints. For helping professionals, learning about these intricate transactions makes clear that there is little value in placing “blame” for problematic outcomes (Fruzzetti, Shenk, & Hoffman, 2005). This knowledge also improves a helper’s ability to fashion therapeutic interventions that are realistic and valid for clients, and to help both clients and the general public to understand the complex interplay of heredity and environment in physical and behavioral outcomes (Dick & Rose, 2002).

Epigenesis and Coaction

Conception and Early Growth

You probably know that the inheritance of traits begins with conception, when a man’s sperm fertilizes a woman’s egg, called an ovum. Fertile women usually release an ovum from one of their ovaries into a fallopian tube during every menstrual cycle. The human ovum is a giant cell with a nucleus containing 23 chromosomes, the physical structures that are the vehicles of inheritance from the mother. The sperm, in contrast, is a tiny cell, but it too carries 23 chromosomes: the father’s contribution to inheritance. The ovum’s nucleus is surrounded by a great deal of cellular material called cytoplasm; the cytoplasm is loaded with a vast array of chemicals. During fertilization, the tiny sperm penetrates the outer membrane of the ovum and makes the long journey through the ovum’s cytoplasm to finally penetrate the nucleus, where the sperm’s outer structure disintegrates. The sperm’s chromosomes become part of the nuclear material in the fertilized ovum, which is called a zygote.

The zygote contains 46 chromosomes, or more accurately, 23 pairs of chromosomes. One member of each pair comes from the mother (ovum) and one from the father (sperm). Twenty-two of these pairs are matched and are called autosomes. In autosome pairs, the two chromosomes look and function alike. The chromosomes of the 23rd pair are called sex chromosomes, because they have an important role to play in sex determination. In female zygotes, the 23rd pair consists of two matched chromosomes, called X chromosomes, but male zygotes have a mismatched pair. They have an X chromosome from their mothers but a much smaller Y chromosome from their fathers. Figure 2.2 presents two karyotypes, one from a male and one from a female. A karyotype displays the actual chromosomes from human body cells as seen under a microscope, arranged in matching pairs and then photographed. Notice the 23rd pair is matched in the female example but not in the male example. (See Chapter 8 for a fuller description of the role of sex chromosomes in human development.)

Chromosomes for a karyotype can be taken from cells anywhere in a person’s body, such as the skin, the liver, or the brain. A duplicate copy of the original set of 46 chromosomes from the zygote exists in nearly every body cell. How did they get there? The chromosomes in a zygote begin to divide within hours after conception, replicating themselves. The duplicate chromosomes pull apart, to opposite sides of the nucleus. The nucleus then divides along with the rest of the cell, producing two new cells, which are essentially identical to the original zygote. This cell division process is called mitosis (see Figure 2.3). Most importantly, mitosis produces two new cells, each of which contains a duplicate set of chromosomes. The new cells quickly divide to produce four cells, the four cells divide to become eight cells, and so on. Each of the new cells also contains some of the cytoplasm of the original fertilized egg. The cell divisions continue in quick succession, and before long there is a cluster of cells, each containing a duplicate set of the original 46 chromosomes. Over a period of about two weeks, the growing organism migrates down the mother’s fallopian tube, into the uterus, and may succeed in implantation, attaching itself to the uterine lining, which makes further growth and development possible. Now it is called an embryo.

Defining Epigenesis and Coaction

If every new cell contains a duplicate set of the chromosomes from the zygote, and chromosomes are the carriers of heredity, then it would seem that every cell would develop in the same way. Yet cells differentiate during prenatal development and become specialized. They develop different structures and functions, depending on their surrounding environments. For example, cells located in the anterior portion of an embryo develop into parts of the head, whereas cells located in the embryo’s lateral portion develop into parts of the back, and so on. Apparently, in different cells, different aspects of heredity are being expressed. The lesson is clear: Something in each cell’s environment must interact with hereditary material to direct the cell’s developmental outcome, making specialization possible.

Biologists have long recognized that cell specialization must mean that hereditary mechanisms are not unilaterally in charge of development. Biologists first used the term epigenesis just to describe the emergence of specialized cells and systems of cells (like the nervous system or the digestive system) from an undifferentiated zygote. It was a term to describe the emergence of different outcomes from the same hereditary material, which all seemed rather mysterious (Francis, 2011). The term has evolved as biology has advanced. Biologists now define epigenesis more specifically as the set of processes by which factors outside of hereditary material itself can influence how hereditary material functions (Charney, 2012). These “factors” are environmental. They include the chemicals in the cytoplasm of the cell (which constitute the immediate environment surrounding the chromosomes), factors in the cells and tissues adjacent to the cell, and factors beyond the body itself, such as heat, light, and even social interaction. The epigenome is the full set of factors, from the cell to the outside world, that controls the expression of hereditary material. “The activity of the genes can be affected through the cytoplasm of the cell by events originating at any other level in the system, including the external environment” (Gottlieb, 2003, p. 7).

But, as you will see, the chemicals in the cytoplasm of the cell are themselves partly determined by the hereditary material in the chromosomes. These chemicals can move beyond a cell to influence adjacent cells and ultimately to influence behavior in the outside environment. Heredity and environment are engaged from the very beginning in an intricate dance, a process called coaction, so that neither one ever causes any outcome on its own. Gottlieb (e.g., 1992, 2003) emphasizes coaction in his epigenetic model of development, a multidimensional theory. He expands the concept of epigenesis, describing it as the emergence of structural and functional properties and competencies as a function of the coaction of hereditary and environmental factors, with these factors having reciprocal effects, “meaning they can influence each other” (Gottlieb, 1992, p. 161). Figure 2.4 gives you a flavor of such reciprocal effects. It will be familiar to you from Chapter 1.

The Cell as the Scene of the Action

Understanding epigenesis starts with the cell. The chromosomes in the nucleus of the cell are made of a remarkable organic chemical called deoxyribonucleic acid or DNA. Long strands of DNA are combined with proteins called histones, wrapped and compacted to make up the chromosomes that we can see under a microscope. Chromosomes can reproduce themselves, because DNA has the extraordinary property of self-replication. Genes are functional units or sections of DNA, and they are often called “coded” sections of DNA. For each member of a pair of chromosomes, the number and location of genes are the same. So genes, like chromosomes, come in matched pairs, half from the mother (ovum) and half from the father (sperm).

You may have read reports in the popular press of genetic “breakthroughs” suggesting that scientists have identified a gene for a trait or condition, such as depression or obesity. These reports are extremely misleading. Genes provide a code that a cell is capable of “reading” and using to help construct a protein, a complex organic chemical, made up of smaller molecules called amino acids. Proteins in many forms and combinations influence physical and psychological characteristics and processes by affecting cell processes.

First, let’s consider the multistage process by which a gene’s code affects protein production. The complexity of this process can be a bit overwhelming to those of us not schooled in biochemistry, so we will only examine it closely enough to get a sense of how genes and environment coact. The DNA code is a long sequence of molecules of four bases (that is, basic chemicals, not acids): adenine, cytosine, guanine, and thymine, identified as A, C, G, and T. In a process called transcription, intertwined strands of DNA separate, and one of the strands acts as a template for the synthesis of a new, single strand of messenger ribonucleic acid or mRNA. In effect, the sequence of bases (the “code”) is replicated in the mRNA. Different sections of a gene’s code can be combined in different ways in the mRNA it produces, so that a single gene can actually result in several different forms of mRNA. In a second step, called translation, the cell “reads” the mRNA code and produces a protoprotein, a substance that with a little tweaking (e.g., folding here, snipping there) can become a protein. Here again, the cell can produce several protein variations from the same protoprotein, a process called alternative splicing (Charney, 2017). Different cell climates (combinations of chemicals) can induce different protein outcomes.

One example will help here. A gene labeled the “POMC” gene is eventually translated into a protoprotein called “proopiomelanocortin” (thus the POMC abbreviation). This protein can be broken up into several different types of proteins. Cells in different parts of the body, with their different chemical workforces, do just that. In one lobe of the pituitary gland (a small gland in the brain), POMC becomes adrenocorticotropic hormone (ACTH), an important substance in the stress reaction of the body, as you will see later in this chapter. In another lobe of the pituitary gland, the cells’ chemical environments convert POMC into an opiate, called β-endorphin. In skin cells, POMC becomes a protein that promotes the production of melanin, a pigment (Francis, 2011; Mountjoy, 2015).

You can see that the chemical environment of the cell affects the production of proteins at several points in the transcription and translation of coded genes. The entire transcription through translation process is referred to as gene expression. Whether or not genes will be expressed, and how often, is influenced by the environment of the cell. Most genes do not function full-time. Also, genes may be turned “on” in some cells and not in others. When a gene is on, transcription occurs and the cell manufactures the coded product or products. To understand how this works, let’s begin by noting that coded genes make up only a small portion (2% to 3%) of the DNA in a human chromosome; the rest is called “intergenic” DNA. How and when a gene’s code will be transcribed is partially regulated by sections of intergenic DNA, sometimes referred to as noncoded genes because they do not code for protein production. They function to either initiate or prevent the gene’s transcription. This process is called gene regulation. All of a person’s coded and noncoded DNA is referred to as his or her genome.
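To make the two steps of gene expression concrete, here is a toy Python sketch. The template-DNA-to-mRNA pairing rules and the handful of codon assignments shown are standard biochemistry, but the sequence is invented, a real codon table has 64 entries, and nothing here models alternative splicing or protein folding.

```python
# A toy sketch of gene expression: transcription, then translation.
DNA_TO_MRNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

# A few real codon assignments; the full genetic code has 64 codons.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GCU": "Ala", "UAA": "STOP"}

def transcribe(template_strand: str) -> str:
    """Transcription: build mRNA complementary to the DNA template."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand)

def translate(mrna: str) -> list[str]:
    """Translation: read the mRNA three bases (one codon) at a time."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("TACAAACGAATT")   # invented template -> "AUGUUUGCUUAA"
print(mrna, translate(mrna))        # ['Met', 'Phe', 'Ala']
```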

Gene Regulation: The Heart of Coaction

What provokes the gene regulation mechanisms to get the transcription process going? Some chemicals in the cells are transcription factors; they bind with the regulatory portions of the DNA, initiating the uncoiling of the strands of DNA at the gene location. This allows mRNA production to begin. Of course, it is more complicated than that. For example, transcription factors cannot bind to the regulatory DNA unless they first bind to another chemical called a receptor. Each kind of transcription factor binds to one or only a few receptors. Some receptors are found on the surface of a cell, binding with transcription factors coming from outside the cell. Other receptors are located inside the cell.

Let’s follow the functioning of one transcription factor to illustrate how gene regulation works. Hormones, like testosterone and estrogen, are transcription factors. We’ll focus on testosterone, produced in larger quantities by males than females. Testosterone is primarily produced in the testes, and then it circulates widely through the body via the blood. Only cells in some parts of the body, such as the skin, skeletal muscles, the testes themselves, and some parts of the brain, have receptors for testosterone. In each of these locations, testosterone binds with a different receptor. As a result, testosterone turns on different genes in different parts of the body. In a skeletal muscle, it triggers protein production that affects the growth of muscle fibers; in the testes, it turns on genes that influence sperm production.

Notice the bidirectionality of the processes we have been describing. Genes in the testes must be turned on by cellular chemicals (transcription factors and receptors) to initiate testosterone production. Then testosterone acts as a transcription factor turning on several different genes in different parts of the body where testosterone-friendly receptors are also produced. The cell’s chemical makeup directs the activity of the genes, and the genes affect the chemical makeup of the cell. What makes all of this much more complex is that many influences beyond the cell moderate these bidirectional processes. For example, winning a competition tends to increase testosterone production in men, whereas losing a competition tends to decrease it. The effects of winning and losing on testosterone are found in athletes and spectators, voters in elections, even stock traders (Carre, Campbell, Lozoya, Goetz, & Welker, 2013). Coaction is everywhere.

How can factors outside of the cell, even outside the organism, influence gene regulation? Here again, the biochemistry of complex sequences of events can be daunting to follow, but let’s consider a few fundamental mechanisms at the cellular level. One epigenetic change that can affect the expression of a gene is methylation, the addition of a methyl group (an organic molecule) to DNA, either to the coded gene or to regulatory DNA. Such methylation makes transcription of the gene more difficult. Heavy methylation may even turn off a gene for good.

Methylation is persistent, and it is passed on when chromosomes duplicate during cell division, although some events can cause demethylation. That is, methyl groups may detach from DNA. In this case, gene transcription is likely to increase. Another class of epigenetic changes affects histones, the proteins that bind with DNA to make up the chromosomes. How tightly bound histones are to DNA affects how likely it is that a coded gene will be transcribed, with looser binding resulting in more transcription. A variety of biochemicals can attach to, or detach from, histones, such as methyl groups (methylation and demethylation) and acetyl groups (acetylation and deacetylation). Each of these can affect how tightly histones and DNA are bound together. Methylation causes tighter binding and reduces gene transcription; demethylation causes looser binding and more transcription. Acetylation loosens the binding, typically increasing gene transcription, and deacetylation tends to tighten the bonds again. Methylation, acetylation, and their reverse processes are common modifications of histones, but there are many others as well, each of them having some effect on the likelihood that a gene will be transcribed. In addition, other cellular components can shut down the impact of a gene, such as short sections of RNA called microRNAs (miRNAs) that attach themselves to mRNAs and block their translation into proteins. These processes are all part of the cell’s repertoire of DNA regulation devices (see Charney, 2012; Grigorenko, Kornilov, & Naumova, 2016).
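Because several direction-of-effect rules appear in quick succession above, a small Python lookup table may help keep them straight. It is only a mnemonic restatement of the text, in our own wording, not a biochemical model.

```python
# Mnemonic summary of the regulation rules described above.
EPIGENETIC_EFFECTS = {
    ("DNA", "methylation"): "transcription harder; heavy methylation may silence the gene",
    ("DNA", "demethylation"): "transcription easier",
    ("histones", "methylation"): "tighter binding to DNA -> less transcription",
    ("histones", "demethylation"): "looser binding to DNA -> more transcription",
    ("histones", "acetylation"): "looser binding to DNA -> more transcription",
    ("histones", "deacetylation"): "tighter binding to DNA -> less transcription",
    ("mRNA", "miRNA binding"): "translation blocked -> less protein",
}

for (target, change), effect in EPIGENETIC_EFFECTS.items():
    print(f"{change} of {target}: {effect}")
```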

When the environment outside the organism alters gene regulation, its effects on the body must eventually influence processes like methylation inside cells, so that certain genes in these cells become either more or less active. Let’s consider one example that demonstrates the impact that the social environment can have on cellular processes in the development of rat pups. There is reason to suspect that somewhat analogous processes may occur in primates as well, including humans. Rat mothers differ in the care they give their pups, specifically, in how much licking and grooming (LG) they do. In a series of studies, Michael Meaney and his colleagues (see Kaffman & Meaney, 2007; Meaney, 2010, for summaries) discovered that variations in mothers’ care during the first postnatal week alter the development of a rat pup’s hippocampus. The hippocampus is a part of the brain with a central role to play in reactions to stress. The offspring of “high LG” mothers grow up to be more mellow—less reactive to stressful events—than the offspring of “low LG” mothers. Of course, these differences could simply indicate that the “high LG” mothers pass on to their offspring genes that influence low stress reactivity. But Meaney and colleagues were able to show that it is actually the mothers’ care that makes the difference. They did a series of cross-fostering studies: They gave the offspring of high LG mothers to low LG mothers to rear, and they gave the offspring of low LG mothers to high LG mothers to rear. Rat pups reared by high LG foster mothers grew up to be more mellow than rat pups reared by low LG foster mothers.

When Meaney and others studied the biochemistry of the rats’ response to stress, they found that pups who receive extra maternal care respond to stress hormones (glucocorticoids) differently than other rats. (See later sections of this chapter for a description of the stress response in mammals, including humans.) Ordinarily, when glucocorticoids are produced, the body has been aroused for immediate action—fight or flight. But the body also launches a recovery from this arousal, reacting to reduce the further production of stress hormones. The hippocampus is the part of the brain that initiates the recovery. In rats that experience high LG as pups, just a tiny quantity of glucocorticoids is sufficient for the hippocampus to trigger a rapid reduction in the production of more stress hormones, resulting in a minimal behavioral response to stress.
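The recovery loop just described, in which the hippocampus detects glucocorticoids and shuts down further production, can be caricatured as simple negative feedback. The Python sketch below uses invented numbers purely to illustrate why more stress-hormone receptors mean a faster return to baseline; it is not a model of the actual endocrinology.

```python
# Toy negative-feedback sketch: receptor_density stands in for the density
# of hippocampal stress-hormone receptors; higher density means the brain
# shuts down further hormone production more quickly.
def stress_response(receptor_density: float, steps: int = 10) -> list[float]:
    hormone = 1.0          # arbitrary initial surge of glucocorticoids
    levels = [hormone]
    for _ in range(steps):
        # Feedback strength scales with receptor density.
        hormone -= receptor_density * hormone
        levels.append(max(hormone, 0.0))
    return levels

print(stress_response(0.6)[:4])  # "high-LG" pup: 1.0, 0.4, 0.16, 0.064
print(stress_response(0.2)[:4])  # "low-LG" pup: 1.0, 0.8, 0.64, 0.512
```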

Now you will see the importance of epigenesis. A mother rat’s external stimulation of her pup causes changes in the regulatory DNA of the pup’s hippocampus. One of the changes is that the affected DNA is demethylated. Because of this demethylation, regulatory DNA turns on a gene that produces a stress hormone receptor in hippocampus cells. With larger amounts of the stress hormone receptor, the hippocampus becomes more sensitive, reacting quickly to small amounts of stress hormone, which makes the rat pup recover quickly from stressful events, which makes it a mellow rat. So maternal rearing, an environmental factor, changes the activity of the rat pup’s DNA by demethylating it, which changes the pup’s brain, affecting its behavioral response to stress. This change is permanent after the first week of life. What is truly remarkable is that this cascade of changes has consequences for the next generation of rat pups. Mellow female rats (who have experienced extra mothering as pups) grow up to be mothers who give their pups extra grooming and licking. And so their pups are also mellow—for life. Researchers are exploring how human infants’ physiological responses to stress may be similarly calibrated by parental closeness and care (Gunnar & Sullivan, 2017; Tang, Reeb-Sutherland, Romeo, & McEwen, 2014; see Chapter 4).

Epigenesis is one reason that identical twins can have the same genotype (type of gene or genes), but not have identical phenotypes (physical and behavioral traits). Their genotypes are exactly the same because they come from a single zygote. Usually after a zygote divides for the first time, the two new cells “stick together” and continue the cell division process, leading to a multi-celled organism. But in identical twinning, after one of the early cell divisions, one or more cells separate from the rest for unknown reasons. The detached cell or cells continue(s) the cell division process. Each of the new clusters of cells that form can develop into a complete organism, producing identical, or monozygotic, twins. Yet, even though they have the same genotypes, their environments may diverge, even prenatally (e.g., Ollikainen et al., 2010). Different experiences throughout their lifetimes can affect the cellular environments of the twins, and these effects can cause differences in how, and how often, some genes are expressed. As a result, as they age twins tend to diverge more and more both physically and behaviorally (Kebir, Chaumette, & Krebs, 2018).

A large, longitudinal study of both monozygotic and dizygotic twins illustrates how variable epigenetic effects can be at the cellular level. Dizygotic twins, often called fraternal twins, are conceived when a mother releases two ova in the same menstrual cycle, and each ovum is fertilized by a separate sperm. Thus, these twins develop from two separate zygotes, and like any two siblings, they share about 50% of their genes on average. Wong, Caspi, and their colleagues (2010) studied a large number of both kinds of twins, taking cell samples when the children were 5 and 10 years old. For each child, the researchers measured the methylation of the regulatory DNA of three genes that affect brain function and behavior. You might expect a great deal of concordance (similarity between members of a pair of twins) in methylation, given that the members of each pair were exactly the same age and were growing up in the same families. Yet, the differences in the twins’ experiences were enough to lead to substantial discordance (differences between members of a pair of twins) in methylation for both monozygotic and dizygotic twins. The investigators also found that gene methylation tended to change for individual children from age 5 to age 10. These changes sometimes involved increased methylation and sometimes involved decreased methylation. Differences between twins, and changes with age within each child, could partly be a result of random processes. But much of this variation is likely to be caused by the impact that differences in life experiences have on the functioning of each child’s cells.

More About Genes

What significance is there to having matched pairs of genes, one from each parent? One important effect is that it increases hereditary diversity. The genes at matching locations on a pair of chromosomes often are identical, with exactly the same code, but they can also be slightly different forms of the same gene, providing somewhat different messages to the cell. These slightly different varieties of genes at the same location or locus on the chromosome are called alleles. For example, Tom has a “widow’s peak,” a distinct point in the hairline at the center of the forehead. He inherited from one parent an allele that would usually result in a widow’s peak, but he inherited an allele that usually results in a straight hairline from the other parent. These two alleles represent Tom’s genotype for hairline shape.

This example illustrates that two alleles of the same gene can have a dominant-recessive relationship, with only the first affecting the phenotype. In this case, the impact of the second gene allele is essentially overpowered by the impact of the first allele, so that the phenotype does not reflect all aspects of the genotype. Tom is a carrier of a recessive gene that could “surface” in the phenotype of one of his offspring. If a child receives two recessive alleles, one from each parent, the child will have the recessive trait. For instance, in Table 2.1, a mother and a father both have a widow’s peak. Each of the parents has one dominant gene allele for a widow’s peak and one recessive allele for a straight hairline, so they are both carriers of the straight hairline trait. On the average, three out of four children born to these parents will inherit at least one widow’s peak allele. Even if they also inherit a straight hairline allele, they are likely to have a widow’s peak. But one child out of four, on average, will inherit two straight hairline alleles, one from each parent. Without a widow’s peak allele to suppress the effects of the straight hairline allele, such a child is likely to have a straight hairline, probably much to the surprise of the parents if they were unaware that they were carriers of the straight hairline trait!

Two different alleles will not necessarily have a dominant-recessive relationship. Sometimes alleles exhibit codominance, producing a blended or additive outcome. For example, Type A blood is the result of one gene allele; Type B blood is the result of a different allele. If a child inherits a Type A allele from one parent and a Type B allele from the other parent, the outcome will be a blend—Type AB blood.
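To check the one-in-four arithmetic from the carrier-by-carrier cross in Table 2.1, here is a minimal Python sketch. The allele letters are our own shorthand: W for the dominant widow’s-peak allele, s for the recessive straight-hairline allele.

```python
# Enumerate the four equally likely allele combinations for a child of two
# carrier parents (each parent passes on one of two alleles at random).
from itertools import product

mother = ["W", "s"]   # carrier: one dominant (W), one recessive (s) allele
father = ["W", "s"]   # also a carrier

offspring = ["".join(sorted(pair)) for pair in product(mother, father)]
print(offspring)  # ['WW', 'Ws', 'Ws', 'ss'] -- each equally likely

# Any genotype with at least one dominant W allele shows a widow's peak.
widows_peak = sum("W" in genotype for genotype in offspring)
print(f"{widows_peak} of {len(offspring)}")  # 3 of 4; only ss is straight
```

On average, then, one child in four receives the ss genotype and the straight hairline, matching the ratio described above.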

Gene alleles at a single gene location can heavily influence some traits, as you have seen with hairline shape and blood type. But nearly all traits are influenced by the protein products of many different gene pairs. These genes may even be located on different chromosomes. Such polygenic effects make the prediction of traits from one generation to another very difficult and suggest that any one pair of gene alleles has a very modest influence on phenotypic outcomes.

Height, skin color, and a host of other physical traits are polygenic, and most genetic influences on intelligence, personality, psychopathology, and behavior appear to be of this kind as well. One large study described 31 genes contributing to the onset of menstruation in girls, and more have been found (Tu et al., 2015). Polygenic determination on such a large scale is typical of gene influences on single traits. Do not lose sight of the importance of epigenesis in any of the gene effects we have been describing. Regulation of genes by the cellular environment, influenced by environments outside the cell, can trump dominance-recessive or codominance relationships between alleles. You will learn about some examples as we consider genetic sources of atypical development.

Atypical Development

Typical prenatal development is an amazing story of orderly and continuous progress from a single fertilized cell to a highly differentiated organism with many interconnected and efficiently functioning systems. The 9-month gestational period spans the period of the zygote (about 2 weeks), from fertilization to implantation; the period of the embryo (from about the 3rd to 8th week), when most of the body’s organ systems and structures are forming; and finally, the period of the fetus (from the 9th week until birth), when the reproductive system forms, gains in body weight occur, and the brain and nervous system continue to develop dramatically. (Figure 2.5 illustrates major developments during the periods of prenatal development.) Typical development depends on the genome to code for the products that the body needs to grow and function normally; and it depends on the environment to provide a normal range of inputs, from nutrients to social interactions, in order for gene expression to be properly timed and regulated. The principle of coaction operates at every level of the developmental drama—with genes and environment in constant communication.

What role does the gene/environment dance play in atypical development? Deviations in either the genome or the environment can push the developing organism off course. In this section you will learn about genetic and chromosomal deviations as well as environmental distortions that can alter development as early as the period of the zygote. But remember: Neither ever works alone. The effects of the genome depend on the environment and vice versa. Watch for indicators of this coaction.

The Influence of Defective Gene Alleles

Recessive, Defective Alleles

In sickle-cell anemia, the red blood cells are abnormally shaped, more like a half moon than the usual, round shape. The abnormal cells are not as efficient as normal cells in carrying oxygen to the tissues. Individuals with this disorder have breathing problems and a host of other difficulties that typically lead to organ malfunctions and, without treatment, to early death. Fortunately, modern treatments can substantially prolong life span. A recessive gene allele causes the malformed blood cells. If one normal gene allele is present, it will be dominant, and the individual will not have sickle-cell anemia. Many hereditary disorders are caused by such recessive, defective alleles, and it is estimated that most people are carriers of three to five such alleles. Yet, most of these illnesses are rare because to develop them an individual has to be unlucky enough to have both parents be carriers of the same defective allele and then to be the one in four (on average) offspring to get the recessive alleles from both parents. Table 2.2 lists some examples of these illnesses. Some recessive, defective genes are more common in certain ethnic or geographic groups than in others. The sickle-cell anemia gene, for example, is most common among people of African descent. For some of these disorders, tests are available that can identify carriers. Prospective parents who have family members with the disorder or who come from groups with a higher than average incidence of the disorder may choose to be tested to help them determine the probable risk for their own offspring. Genetic counselors help screen candidates for such testing, as well as provide information and support to prospective parents, helping them to understand genetic processes and to cope with the choices that confront them—choices about testing, childbearing, and parenting (e.g., Madlensky et al., 2017).

Dominant, Defective Alleles

Some genetic disorders are caused by dominant gene alleles, so that only one defective gene need be present. Someone who has the defective gene will probably have the problem it causes, because the effects of a dominant gene allele overpower the effects of a recessive allele. Suppose that such an illness causes an early death, before puberty. Then, the defective, dominant allele that causes the illness will die with the affected individual because no offspring are produced. When these alleles occur in some future generation, it must be through mutation, a change in the chemical structure of an existing gene. Sometimes mutations occur spontaneously, and sometimes they are due to environmental influences, like exposure to radiation or toxic chemicals (Strachan & Read, 2000). For example, progeria is a fatal disorder that causes rapid aging, so that by late childhood affected individuals are dying of “old age.” Individuals with progeria usually do not survive long enough to reproduce. When the disease occurs, it is caused by a genetic mutation during the embryonic period of prenatal development, so that while it is precipitated by a genetic defect, it does not run in families.

Some disorders caused by dominant, defective alleles do not kill individuals affected by them in childhood, and thus can be passed on from one generation to another. When one parent has the disease, each child has a 50% chance of inheriting the dominant, defective gene from that parent. Some of these disorders are quite mild in their effects, such as farsightedness. Others unleash lethal effects late in life. Among the most famous is Huntington’s disease, which causes the nervous system to deteriorate, usually beginning between 30 and 40 years of age. Symptoms include uncontrolled movements and increasingly disordered psychological functioning, eventually ending in death. In recent years the gene responsible for Huntington’s disease has been identified, and a test is now available that will allow early detection, before symptoms appear. Unfortunately, there is no cure. The offspring of individuals with the disease face a difficult set of dilemmas, including whether to have the test and, if they choose to do so and find they have the gene, how to plan for the future. Again, genetic counselors may play a critical role in this process (Hines, McCarthy Veach, & LeRoy, 2010).

Often, having a dominant defective allele, or two recessive, defective alleles, seems like a bullet in the heart: If you have the defective gene or genes you will develop the associated disorder. Yet epigenetic effects can alter the course of events. The disorder may not develop if epigenetic processes prevent the transcription of defective alleles or the translation of mRNA to a protein. Consider another rodent example. A fat, diabetic yellow mouse, and a slim, healthy brown mouse can actually be identical genetically. Both mice carry a dominant “Agouti” gene allele that causes the problems of the yellow mouse. But in the brown mouse, that troublesome allele is heavily methylated. Dolinoy (2008) demonstrated that this epigenetic change can happen during fetal development if the mother mouse is fed a diet rich in folate, choline, and B12. Such a diet promotes methylation of the Agouti allele, shutting it down.

Research on the role of epigenesis in the expression of defective genes is in its infancy (Heijmans & Mill, 2012; Mazzio & Soliman, 2012). Yet it promises to help solve some medical mysteries, such as why occasionally one monozygotic twin develops a hereditary disease but the other does not, or why some people with the same genetic defect have milder forms of a disease than others (e.g., Ollikainen et al., 2010).

Such outcomes illustrate that coaction is always in play. This is true for behavioral disorders as well. For example, Caspi and his colleagues (2002) studied people with a range of variations in the “MAOA” gene. This gene provides the cell with a template for production of the MAOA enzyme, a protein that metabolizes a number of important brain chemicals called neurotransmitters, like serotonin and dopamine. (You’ll learn more about neurotransmitters later in this chapter.) Its effect is to inactivate these neurotransmitters, a normal process in neurological functioning. Apparently, while these neurotransmitters are critical to normal brain function, too much of them is a problem. Animals become extremely aggressive if the MAOA gene is deleted so that the enzyme cannot be produced. In humans, different alleles of the MAOA gene result in different amounts of MAOA enzyme production. Could alleles that cause low levels of production increase aggression and antisocial behavior in humans? Most research has suggested no such relationship. But Caspi and colleagues hypothesized that child rearing environment might affect how different gene alleles function. Specifically, they hypothesized that early abusive environments might make some MAOA alleles more likely to have negative effects on development. They studied a sample of New Zealand residents who had been followed from birth through age 26. They identified each person’s MAOA alleles and looked at four indicators of antisocial, aggressive behavior, such as convictions for violent crimes in adulthood. Finally, they looked at each person’s child-rearing history. Caspi and colleagues did find a link between gene alleles that result in low levels of MAOA enzyme production and aggression, but only when the individual carrying such an allele had experienced abuse as a child. For people with no history of abuse, variations in the MAOA gene were not related to adult aggressive behavior. This appears to be epigenesis in action. (See Ouellet-Morin et al., 2016 for related studies.)

Polygenic Influences

As with most normal characteristics, inherited disorders are usually related to more than one gene, such that some combination of defective alleles at many chromosomal sites predisposes the individual to the illness. Like all polygenic traits, these disorders run in families, but they cannot be predicted with the precision of disorders caused by genes at a single chromosomal location. Most forms of muscular dystrophy are disorders of this type. Polygenic influences have also been implicated in diabetes, clubfoot, some forms of Alzheimer’s disease, and multiple sclerosis, to name just a few. As we noted earlier, genetic effects on behavioral traits are typically polygenic. This appears to be true for most mental illnesses and behavioral disorders, such as alcoholism, schizophrenia, and clinical depression (e.g., Halldorsdottir & Binder, 2017). For example, the MAOA gene is only one of several that are associated with antisocial behavior (Ouellet-Morin et al., 2016). Genes linked to serious behavioral problems and disorders affect brain function. It will not surprise you to learn that whether and when these genes or their normal variants are expressed also depend on epigenetic modifications that are associated with a person’s experiences at different points in development, and researchers have begun to identify the biochemical processes involved (e.g., Matosin, Halldorsdottir, & Binder, 2018; Shorter & Miller, 2015).

The Influence of Chromosomal Abnormalities

Occasionally, a zygote will form that contains too many chromosomes, or too few, or a piece of a chromosome might be missing. Problems in the production of either the ovum or the sperm typically cause these variations. Such zygotes often do not survive. When they do, the individuals usually have multiple physical or behavioral problems. The causes of chromosomal abnormalities are not well understood. Either the mother or the father could be the source, and ordinarily, the older the parent, the more likely that an ovum or sperm will contain a chromosomal abnormality. Among the most common and well known of these disorders is Down syndrome (also called trisomy 21), caused by an extra copy of chromosome number 21. The extra chromosome in this syndrome usually comes from the ovum, but about 5% of the time it comes from the sperm. Children with Down syndrome experience some degree of intellectual impairment, although educational interventions can have a big impact on the severity of cognitive deficits. In addition, these children are likely to have several distinctive characteristics, such as a flattening of facial features, poor muscle tone, small stature, and heart problems (Marchal et al., 2016). Table 2.3 provides some examples of disorders influenced by chromosomal abnormalities.

Teratogenic Influences

From conception, the environment is an equal partner with genes in human development. What constitutes the earliest environment beyond the cell? It is the mother's womb, of course, but it is also everything outside of the womb—every level of the physical, social, and cultural context. For example, if a mother is stressed by marital conflict, her developing fetus is likely to be influenced by the impact that her distress has on the biochemical environment of the uterus. (For simplicity, we will use the term fetus to refer to the prenatal organism in this section, even though technically it might be a zygote or an embryo.) Even the ancient Greeks, like Hippocrates, who wrote 2,500 years ago, recognized that ingestion of certain drugs, particularly during the early stages of pregnancy, could "weaken" the fetus and cause it to be misshapen. Environmental substances and agents that can harm the developing fetus are called teratogens. The name comes from the Greek and literally means "monstrosity making."

The fetus is sustained by the placenta, an organ that develops from the zygote along with the embryo; it exchanges blood products with the baby through the umbilical cord. The placenta allows nutrients and oxygen from the mother's blood to pass into the baby's blood and allows carbon dioxide and waste to be removed by the mother's blood, but otherwise it usually keeps the two circulatory systems separate. Teratogens can cross the placental barrier from mother to fetus. They include some drugs and other chemicals, certain disease organisms, and radioactivity. The list of known teratogens is quite lengthy, so we have presented in Table 2.4 a summary of the main characteristics of a few of these agents. Consider, for example, alcohol. Alcohol has been called "the most prominent behavioral teratogen in the world" (Warren & Murray, 2013) because its use is common across the globe. "Behavioral" here refers to the fact that the fetus's exposure is a result of the mother's behavior. Conservative estimates are that 2% to 5% of babies born in the United States suffer negative effects from prenatal exposure to alcohol. Worldwide, "it is the leading cause of preventable developmental disabilities" (Hoyme et al., 2016, p. 2).

Physicians have suspected the risks of drinking during pregnancy for centuries, but only in the last few decades has there been broad recognition of those risks (Warren, 2013). Identifying teratogens is difficult, because their effects are variable and unpredictable. Maternal drinking during pregnancy can cause no harm to the fetus, or it can result in one or more of a wide range of problems, called fetal alcohol spectrum disorders (FASD). Most children on this spectrum experience some intellectual or behavioral problems. These can include specific learning disabilities, language delays, or memory problems, or more global and severe cognitive deficits, as well as difficulties with impulse control, hyperactivity, social understanding, and so on (Wilhoit, Scott, & Simecka, 2017). More severe intellectual and behavioral impairments are typically accompanied by gross structural brain anomalies. Prenatal development of the brain and the face are interrelated, so it is not surprising that alcohol exposure is also associated with facial abnormalities (del Campo & Jones, 2017). The most extreme of the disorders is fetal alcohol syndrome (FAS), which is identified by a unique facial configuration with three especially likely characteristics: small eye openings so that the eyes look widely spaced, a smooth philtrum (the ridge between the nose and upper lip), and a thin upper lip. Other likely facial variations are a flattened nasal bridge, a small nose, and unusual ear ridges. Cognitive deficits are often accompanied by a small head and sometimes recurrent seizures. Children with FAS typically show growth retardation, either pre- or postnatally, both in weight and height. Many organ systems can be affected in addition to the central nervous system; problems with the heart, kidneys, and bones are common (see Table 2.4).

Teratogens impact fetal development by modifying both intracellular and intercellular activity in the placenta and in the fetus. Teratogens may sometimes actually cause mutations in coded DNA (Bishop, Witt, & Sloane, 1997). But more often they seem to operate by making epigenetic modifications to DNA and thereby altering gene expression. For example, changes in methylation patterns (both methylation of some genes and demethylation of others) have been found in children with FASD for clusters of genes that are important for neurodevelopment and behavior (e.g., Chater-Diehl, Laufer, & Singh, 2017).

The teratogenic effects of alcohol are so variable, ranging from none at all to FAS, because a whole set of other factors moderates any teratogen’s impact. The unpredictability of teratogenic effects provides a good illustration of the multidimensionality of development. Damaging outcomes may be reduced or enhanced by the timing of prenatal exposure, the mother’s and fetus’s genomes (genetic susceptibility), the amount of exposure (dosage), and the presence or absence of other risks to the fetus.

Timing of Exposure

The likelihood and the extent of teratogenic damage depend on when in development exposure occurs (refer again to Figure 2.5). In the first few months, the structure of major organ systems is formed. Brain structures could show unusual and/or insufficient development if a fetus is exposed to a teratogen like alcohol in the first trimester. If the exposure occurs in the last trimester, obvious structural anomalies are not as likely, but brain and other organ functions are still in jeopardy, so that processes such as learning and behavior regulation, vision, and hearing are still vulnerable. Whereas "there is no safe trimester to drink alcohol" (Hoyme et al., 2016, p. 2), some teratogens seem to be harmful primarily at certain times. For example, thalidomide, a sedative introduced in the 1950s, was widely prescribed to pregnant women for morning sickness. When used in the first trimester, it caused serious limb deformities (Ito, Ando, & Handa, 2011). Before thalidomide was identified as the culprit, it had affected the lives of over 10,000 children around the world.

Genetic Susceptibility

Not all fetuses are equally susceptible to a teratogen’s effects. Both the mother’s and the baby’s genes play a role in sensitivity or resistance to a teratogen. For example, FASD is slightly more prevalent among boys than girls (May et al., 2014), and there is some indication from animal studies that maternal drinking affects males’ social behavior more than females’ (e.g., Rodriguez et al., 2016). For some teratogens, such as nicotine, researchers have identified specific genes, and gene alleles, that can increase or decrease the effects of prenatal exposure (e.g., Price, Grosser, Plomin, & Jaffee, 2010). This is, of course, an illustration of coaction.

Dosage

Larger amounts of a teratogenic agent and longer periods of exposure generally have greater effects than smaller doses and shorter periods. Alcohol's effects are dose-dependent. Mothers who drink more days per week increase their babies' chances of FAS (Gupta, Gupta, & Shirasaka, 2016). Mothers' binge drinking seems to be especially harmful, although no "safe" dose has been found for alcohol (May et al., 2013). Note also that the effects of any amount of alcohol ingestion are always more potent for the fetus than they are for the mother. In other words, the fetus may have crossed a toxic threshold even if the mother experiences few or very mild alcohol-related effects. Consequently, the U.S. Department of Health and Human Services (2015) and the American Academy of Pediatrics (Williams & Smith, 2015) recommend that women refrain from drinking alcohol throughout a pregnancy. "No amount of alcohol intake during pregnancy can be considered safe" (Hoyme et al., 2016, p. 2).

Number of Risk Factors

As you learned in Chapter 1, risk factors are more likely to cause problems the more numerous they are. The developing organism can often correct for the impact of one risk factor, but the greater the number, the less likely it is that such a correction can be made. The negative effects of teratogens can be amplified when the fetus or infant is exposed to more than one. For example, poor maternal nutrition tends to increase the risk of FAS (Keen et al., 2010). Often, pregnant women who drink also smoke. They are also more likely to be poor, so it is fairly common that their babies have been exposed to multiple risks. The teratogenic effects of cocaine were once thought to include congenital malformations until researchers recognized that pregnant women who use cocaine frequently consume other drugs, such as alcohol, tobacco, marijuana, or heroin, and they often have poor nutrition as well. Cocaine users also tend to be poor and to experience more life stress during and after pregnancy than other women. Although prenatal cocaine exposure in the absence of other risk factors can have effects on some aspects of behavior, many of the outcomes once attributed to cocaine alone seem to result from combinations of risk factors (Terplan & Wright, 2011).

Nutritional Influences

Teratogens are problematic because they add something to the ordinary fetal environment, intruding on the developing system and driving it off course. But what happens when contextual factors that belong in the ordinary fetal environment are missing or in short supply? When food sources are lacking in protein or essential vitamins and minerals during prenatal and early postnatal development, an infant’s physical, socioemotional, and intellectual development can be compromised (e.g., Aboud & Yousafzai, 2015), and epigenetic alterations seem to be at the root of these developmental problems (Champagne, 2016).

In a classic intervention study, Rush, Stein, and Susser (1980) provided nutritional supplements to pregnant women whose socioeconomic circumstances indicated that they were likely to experience inadequate diets. At age 1, the babies whose mothers received a protein supplement during pregnancy performed better on measures of play behavior and perceptual habituation (which is correlated with later intelligence) than those whose mothers received a high-calorie liquid or no supplement at all.

Are there longer-term behavioral consequences of inadequate prenatal nutrition? Some research does reveal enduring effects. When the fetus is unable to build adequate stores of iron, for example, the infant is likely to show signs of anemia by 4 to 6 months of age, and even if corrected, a history of anemia has been shown to affect later school performance.

One large longitudinal study demonstrates the many long-term effects that famine can have on the developing fetus. During World War II, people in the western part of the Netherlands experienced a serious food shortage as a result of a food embargo imposed by Germany over the winter of 1944–45. At the end of the war, in 1945, scientists began studying the cohort of babies born to pregnant women who experienced the famine, comparing them either to siblings who were not born during the famine or to another sample of Dutch people who were born in the same period but who were not exposed to the famine (e.g., Lumey & van Poppel, 2013). Among the many long-term consequences of prenatal exposure to the famine are: higher rates of obesity by young adulthood; increased risk of schizophrenia and mood disorders, such as depression; more high blood pressure, coronary artery disease, and type II diabetes by age 50; and the list goes on (see Francis, 2011, for a summary). These long-term consequences appear to result from epigenetic changes at the cellular level. For example, one group of investigators found significant demethylation of a gene that codes for "insulin-like growth factor II" (IGF2) in individuals exposed to the famine very early in gestation, when methylation of this particular gene usually occurs (Heijmans et al., 2008). In another study, methylation and demethylation changes in six genes were identified. The kind of change depended on the gene, the gender of the individual, and the timing of fetal exposure to the famine (Tobi et al., 2009).

It is not surprising that prenatal nutrition has such effects, given what we have learned about the effects of postnatal nutrition on children's functioning. We have known for decades that children who experience severe protein and calorie shortages at any age may develop kwashiorkor, characterized by stunted growth, a protuberant belly, and extreme apathy (Roman, 2013). Therapeutic diets can eliminate the apathy of kwashiorkor, but cognitive impairments often persist. Some research indicates that even much less severe nutritional deficits may have impacts on children's cognitive functioning. An intriguing study of changes in the food supplied to New York City schools provides a strong illustration (Schoenthaler, Doraz, & Wakefield, 1986). In a three-stage process, from 1978 to 1983, many food additives and foods high in sugar (sucrose) were eliminated from school meals, so that children's consumption of "empty calories" was reduced, and, presumably, their intake of foods with a higher nutrient-to-calorie ratio increased. With each stage of this process, average achievement test scores increased in New York City schools, with improvements occurring primarily among the children who were performing worst academically.

Findings such as these suggest that children whose prenatal and postnatal environments are short on protein and other essential nutrients may not achieve the levels of behavioral functioning that they could have with adequate diets. But the long-range impact of early diet, like the effects of teratogens, is altered by the presence or absence of other risk and protective factors. Some studies have found, for example, that if children experience poor early nutrition because of extreme poverty or major events such as war, they are less likely to have cognitive impairments the more well educated their parents are (e.g., Boo, 2016). As with other risk factors, the effects of poor nutrition are lessened by other more benign influences. We note again that it is in combination that risk factors do the most harm. (See Box 2.2 for further discussion of this phenomenon.) One heartening consequence is that when we intervene to reduce one risk factor, such as malnutrition, we may actually reduce the impact of other negative influences on development as well.

The Developing Brain

Now that you have a sense of the genetic and epigenetic processes at work in development, we can begin to examine behavioral change over time. We will first focus on the physical system that underlies behavior: the central nervous system and, especially, the brain. Helping professionals can better understand how their clients think, feel, and learn if they give some attention to the workings of this marvelously complex system. We will guide you through the story of prenatal and immediate postnatal brain development. Then we will examine in depth a key process mediated by the brain: the stress and adaptation system. As you will see throughout this text, stress, and individual differences in the response to stress, are at the core of what helpers must understand about human development.

Early Prenatal Brain Development

When you were just a 2-week-old embryo, your very existence still unknown to your parents, cells from the embryo’s upper surface began to form a sheet that rearranged itself by turning inward and curling into a neural tube. This phenomenon, called neurulation, signaled the beginning of your central nervous system’s development. Once formed, this structure was covered over by another sheet of cells, to become your skin, and was moved inside you so that the rest of your body could develop around it. Around the 25th day of your gestational life, your neural tube began to take on a pronounced curved shape. At the top of your “C-shaped” embryonic self, three distinct bulges appeared, which eventually became your hindbrain, midbrain, and forebrain. (See Figure 2.6.)

Within the primitive neural tube, important events were occurring. Cells from the interior surface of the neural tube reproduced to form neurons, or nerve cells, that would become the building blocks of your brain. From about the 40th day, or 5th week, of gestation, your neurons began to increase at a staggering rate—one quarter of a million per minute for 9 months—to create the 100 billion neurons that make up a baby’s brain at birth. At least half would be destroyed later either because they were unnecessary or were not used. We will have more to say about this loss of neurons later.
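A quick back-of-the-envelope calculation (ours, using the text's approximate figures) shows that the production rate and the total cited above are consistent:

rate_per_minute = 250_000                 # neurons generated per minute
minutes_in_9_months = 60 * 24 * 30 * 9    # about 388,800 minutes
print(f"{rate_per_minute * minutes_in_9_months:,}")
# 97,200,000,000 -- on the order of the 100 billion neurons cited above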

Your neurons began to migrate outward from their place of birth rather like filaments extending from the neural tube to various sites in your still incomplete brain. Supporting cells called glial cells, stretching from the inside of the neural tube to its outside, provided a type of scaffolding for your neurons, guiding them as they ventured out on their way to their final destinations. Those neurons that developed first migrated only a short distance from your neural tube and were destined to become the hindbrain. Those that developed later traveled a little farther and ultimately formed the midbrain. Those that developed last migrated the farthest to populate the cerebral cortex of the forebrain. This development always progressed from the inside out, so that cells traveling the farthest had to migrate through several other already formed layers to reach their proper location. To build the six layers of your cortex, epigenetic processes pushed each neuron toward its ultimate address, moving through the bottom layers that had been already built up before it could get to the outside layer. (See Box 2.1 and Figures 2.7 and 2.8.)

Box 2.1: The Major Structures of the Brain

Multidimensional models of mental health and psychopathology now incorporate genetics and brain processes into their conceptual frameworks. Thus, a working knowledge of the brain and its functioning should be part of a contemporary helper’s toolkit. Consumers of research also need this background to understand studies that increasingly include brain-related measures. Here we present a very short introduction to some important brain areas and describe their related functions. The complex human brain can be partitioned in various ways. One popular way identifies three main areas that track evolutionary history: hindbrain, midbrain, and forebrain. Bear in mind, however, that brain areas are highly interconnected by neural circuitry despite attempts to partition them by structure or function. In general, the more complex, higher-order cognitive functions are served by higher-level structures while lower-level structures control basic functions like respiration and circulation.

Beginning at the most ancient evolutionary level, the hindbrain structures of medulla, pons, cerebellum, and the reticular formation regulate autonomic functions that are outside our conscious control. The medulla contains nuclei that control basic survival functions, such as heart rate, blood pressure, and respiration. Damage to this area of the brain can be fatal. The pons, situated above the medulla, is involved in the regulation of the sleep–wake cycle. Individuals with sleep disturbances can sometimes have abnormal activity in this area. The medulla and the pons are also especially sensitive to an overdose of drugs or alcohol. Drug effects on these structures can cause suffocation and death. The pons transmits nerve impulses to the cerebellum, a structure that looks like a smaller version of the brain itself. The cerebellum is involved in the planning, coordination, and smoothness of complex motor activities such as hitting a tennis ball or dancing, in addition to other sensorimotor functions.

Within the core of the brainstem (medulla, pons, and midbrain) is a bundle of neural tissue called the reticular formation that runs up through the midbrain. This, together with smaller groups of neurons called nuclei, forms the reticular activating system, that part of the brain that alerts the higher structures to “pay attention” to incoming stimuli. This system also filters out the extraneous stimuli that we perceive at any point in time. For example, it is possible for workers who share an office to tune out the speech, music, or general background hum going on around them when they are involved in important telephone conversations. However, they can easily “perk up” and attend if a coworker calls their name. The midbrain also consists of several small structures (superior colliculi, inferior colliculi, and substantia nigra) that are involved in vision, hearing, and consciousness. These parts of the brain receive sensory input from the eyes and ears and are instrumental in controlling eye movement.

The forebrain is the largest part of the brain and includes the cerebrum, thalamus, hypothalamus, and limbic system structures. The thalamus is a primary way station for handling neural communication, something like “information central.” It receives information from the sensory and limbic areas and sends these messages to their appropriate destinations. For example, the thalamus projects visual information, received via the optic nerve, to the occipital lobe of the cortex (discussed later in this box). On both sides of the thalamus are structures called the basal ganglia. These structures, especially the nucleus accumbens, are involved in motivation and approach behavior.

The hypothalamus, situated below the thalamus, is a small but important area that regulates many key bodily functions, such as hunger, thirst, body temperature, and breathing rate. Lesions in areas of the hypothalamus have been found to produce eating abnormalities in animals, including obesity or starvation. It is also important in the regulation of emotional responses, including stress-related responses. The hypothalamus functions as an intermediary, translating the emotional messages received from the cortex and the amygdala into a command to the endocrine system to release stress hormones in preparation for fight or flight. We will discuss the hypothalamus in more detail in the section on the body’s stress systems.

Limbic structures (hippocampus, amygdala, septum, and cingulate cortex) are connected by a system of nerve pathways (limbic system) to the cerebral cortex. Often referred to as the "emotional brain," the limbic system supports social and emotional functioning and works with the frontal lobes of the cortex to help us think and reason. The amygdala rapidly assesses the emotional significance of environmental events, assigns them a threat value, and conveys this information to parts of the brain that regulate neurochemical functions. The structures of the limbic system have direct connections with neurons from the olfactory bulb, which is responsible for our sense of smell. It has been noted that pheromones, a particular kind of hormonal substance secreted by animals and humans, can trigger particular reactions that affect emotional responsiveness below the level of conscious awareness. We will have more to say about the workings of the emotional brain and its ties to several emotional disorders in Chapter 4.

Other limbic structures, notably the hippocampus, are critical for learning and memory formation. The hippocampus is especially important in processing the emotional context of experience and sensitive to the effects of stress. Under prolonged stress, hippocampal neurons shrink and new neurons are not produced. The hippocampus and the amygdala are anatomically connected, and together they regulate the working of the HPA axis (described later in this chapter). In general, the amygdala activates this stress response system while the hippocampus inhibits it (McEwen & Gianaros, 2010).

The most recognizable aspect of the forebrain is the cerebrum, which comprises two thirds of the total mass. A crevice, or fissure, divides the cerebrum into two halves, like the halves of a walnut. Information is transferred between the two halves by a network of fibers comprising the corpus callosum. These halves are referred to as the left and right hemispheres. Research on hemispheric specialization (also called lateralization), pioneered by Sperry (1964), demonstrated that the left hemisphere controls functioning of the right side of the body and vice versa. Language functions such as vocabulary knowledge and speech are usually localized in the left hemisphere, and visual–spatial skills are localized on the right. Recently, this research was introduced to lay readers through a rash of popular books about left brain–right brain differences. Overall, many of these publications have distorted the facts and oversimplified the findings. Generally the hemispheres work together, sharing information via the corpus callosum and cooperating with each other in the execution of most tasks. There is no reliable evidence that underlying modes of thinking, personality traits, or cultural differences can be traced exclusively to hemispheric specialization.

Each hemisphere of the cerebral cortex can be further divided into lobes, or areas of functional specialization (see Figure 2.8). The occipital lobe, located at the back of the head, handles visual information. The temporal lobe, found on the sides of each hemisphere, is responsible for auditory processing. At the top of each hemisphere, behind a fissure called the central sulcus, is the parietal lobe. This area is responsible for the processing of somatosensory information such as touch, temperature, and pain. Finally, the frontal lobe, situated at the top front part of each hemisphere, controls voluntary muscle movements and higher-level cognitive functions.

The prefrontal cortex (PFC) is the part of the frontal lobe that occupies the front or anterior portion. This area is involved in processes like sustained attention, working memory, planning, decision making, and emotion regulation. Generally, the PFC plays a role in regulation and can moderate an overactive amygdala as well as the activity of the body’s stress response system. Another important regulatory pathway involves the anterior cingulate cortex (ACC), a structure in the middle of the brain above the corpus callosum. The ACC mediates cognition and affect.

Impaired connections between the ACC and the amygdala are related to higher levels of anxiety and neuroticism, and lower ACC volume has been found in depressed patients (Kaiser, Andrews-Hanna, Wager, & Pizzagalli, 2015). The size of the various brain regions and the integrity of their circuitry play a role in individuals' cognition, affect, and behavior.

Scientists have discovered that neurons sometimes need to find their destinations (for example, on the part of the cortex specialized for vision) before that part of the cortex develops. It's a little like traveling in outer space. Or as Davis (1997) has suggested, "It's a bit like an arrow reaching the space where the bull's-eye will be before anyone has even set up the target" (pp. 54–55). Certain cells behave like signposts, providing the traveling neurons with way stations as they progress on their journey. Neurons may also respond to the presence of certain chemicals that guide their movements in a particular direction.

By about the fourth month of your prenatal life, your brain's basic structures had formed. Your neurons migrated in an orderly way and clustered with similar cells into distinct sections or regions in your brain, such as in the cerebral cortex or in specific nuclei. The term nucleus here refers to a cluster of cells creating a structure, rather than to the kind of nucleus that is found in a single cell. An example is the nucleus accumbens, part of the basal ganglia in the brain's interior.

As we have seen, one important question concerns just how specialization of cells in different regions of the brain occurs and what directs it. This issue is complex and is the subject of intense investigation (Arlotta & Vanderhaeghen, 2017). However, most available evidence supports the view that cortical differentiation is an epigenetic process, primarily influenced by the kinds of environmental inputs the cortex receives. In other words, the geography of the cortex is not rigidly built in, but responds to activity and experiences by making changes in its structural organization (LaFrenier & MacDonald, 2013). This principle was demonstrated by researchers who transplanted part of the visual cortex of an animal to its parietal lobe (O’Leary & Stanfield, 1989). The transplanted neurons began to process somatosensory rather than visual information. Studies such as these show that the brain is amazingly malleable and demonstrates great neuroplasticity, particularly during early stages of development. In time, however, most cells become specialized for their activity, and it is harder to reverse their operation even though neuroplasticity continues to exist throughout life.

Structure and Function of Neurons

The neurons in your brain are among nature's most fantastic accomplishments. Although neurons come in various sizes and shapes, a typical neuron is composed of a cell body with a long extension, or axon, which is like a cable attached to a piece of electronic equipment. The axon itself can be quite long relative to the size of the cell's other structures because it needs to connect or "wire" various parts of the brain, such as the visual thalamus to the visual cortex. At the end of the axon are the axon terminals, which contain tiny sacs of chemical substances called neurotransmitters. Growing out from the cell body are smaller projections, called dendrites, resembling little branches, which receive messages or transmissions from other neurons. (See Figure 2.9.)

So how do brain cells "talk" to each other? Even though we speak of wiring or connecting, each neuron does not actually make physical contact with other neurons but remains separate. The points of near contact where communication occurs are called synapses. Communication is a process of sending and receiving electrochemical messages. Simply put, when a neuron responds to some excitation, or when it "fires," an electrical impulse, or message, travels down the axon to the axon terminals. The sacs in the axon terminals containing neurotransmitters burst and release their contents into the space between the neurons called the synaptic gap. Over a hundred different neurotransmitters have been identified, and many more are likely to be discovered. Among those that have been widely studied are serotonin, acetylcholine, glutamate, gamma-aminobutyric acid (GABA), epinephrine (adrenaline), norepinephrine (noradrenaline), and dopamine. Some of these are more common in specific parts of the brain than in others. They are literally chemical messengers that stimulate the dendrites, cell body, or axon of a neighboring neuron to either fire (excitation) or stop firing (inhibition). For example, glutamate is an excitatory transmitter that is important for transmission in the retina of the eye; GABA is an inhibitory transmitter that is found throughout the brain. Helpers should know that psychotropic drugs, such as antidepressants and antipsychotics, affect synaptic transmission. They change the availability of a neurotransmitter, either increasing or decreasing it, or they mimic or block a neurotransmitter's effects.

When neurons fire, the speed of the electrical impulse is greater if the axon is wrapped in glial cells forming a white, insulating sheath that facilitates conduction. This phenomenon, called myelination, begins prenatally for neurons in the sensorimotor areas of the brain but happens later in other areas. The term white matter mainly refers to bundles of myelinated axons, while grey matter refers to bundles of cell bodies, dendrites, and unmyelinated axons.
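The summing of excitatory and inhibitory messages described above resembles a simple "integrate-and-fire" model from computational neuroscience. The following toy sketch is our illustration of that idea; the threshold, leak, and input values are arbitrary teaching numbers, not physiological measurements:

THRESHOLD = 1.0   # membrane potential at which the neuron fires
LEAK = 0.9        # fraction of accumulated potential retained per step

def run_neuron(net_inputs):
    # net_inputs: net synaptic input per time step; positive values
    # stand in for excitatory transmitters (e.g., glutamate), negative
    # values for inhibitory ones (e.g., GABA).
    potential, spike_times = 0.0, []
    for t, x in enumerate(net_inputs):
        potential = potential * LEAK + x
        if potential >= THRESHOLD:
            spike_times.append(t)   # the neuron "fires"
            potential = 0.0         # and resets
    return spike_times

print(run_neuron([0.4, 0.4, 0.4, -0.5, 0.2, 0.6, 0.6]))  # -> [2]

Notice that the single inhibitory input at step 3 keeps the neuron below threshold for the rest of the trace: excitation and inhibition are summed, just as the text describes.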

Myelination is a key aspect of brain maturation in childhood and adolescence, but it also continues into adulthood. Generally, the more myelination there is, the more efficient brain functions are. Changes in myelination accompany some kinds of learning and experience throughout life. Researchers have found white matter density changes in specific brain areas when people learn to juggle or practice the piano intensively. Extended social isolation of either infant or adult mice leads to myelin deterioration accompanied by behavioral dysfunctions (see Klingseisen & Lyons, 2017).

Neurons are not “wired together” randomly. Rather, they are joined via their synaptic connections into groups called circuits. Circuits are part of larger organizations of neurons, called systems, such as the visual and olfactory systems. Two main types of neurons populate these systems, projection neurons, which have axons that extend far away from the cell body, and interneurons, which branch out closer to the local area. The intricate neural fireworks described earlier are going on all the time in your brain. Perhaps as you read this chapter, you are also listening to music and drinking a cup of coffee. Or you may be distracted by thoughts of a telephone conversation that you had earlier. The neuronal circuitry in your brain is processing all these stimuli, allowing your experiences to be perceived and comprehended.

Later Prenatal Brain Development

To return to your prenatal life story, your neurons began to fire spontaneously around the fourth month of gestation. This happened despite a lack of sensory input. Even though your eyes were not completely formed, the neurons that would later process visual information began firing as though they were preparing for the work they would do in a few months' time. By the end of the second and beginning of the third trimesters, your sense organs had developed sufficiently to respond to perceptual stimulation from outside your mother's womb. Sounds were heard by 15 weeks. Not only did you learn to recognize the sound of your mother's voice, but you also became familiar with the rhythms and patterns of your native language. In a classic study, DeCasper and colleagues conducted a project in which they directed pregnant women to recite a child's nursery rhyme out loud each day from the 33rd to the 37th week of pregnancy (DeCasper, Lecanuet, Busnel, Granier-Deferre, & Maugeais, 1994). During the 38th week, the women were asked to listen to a recording of either the familiar rhyme or an unfamiliar one while their fetuses' heart rates were being measured. The fetal heart rates dropped for the group who heard the familiar rhyme, signifying attention, but did not change for the group who heard the unfamiliar one. This result suggests that the fetus can attend to and discriminate the cadence of the rhyme. Studies such as this one should not be misinterpreted to mean that the fetus can "learn" as the term is commonly used. No one has suggested that the fetus can understand the poem. However, what this and other similar studies do indicate is an early responsivity to experience that begins to shape the contours of the brain by establishing patterns of neural or synaptic connections.

By your 25th week, you could open and close your eyes. You could see light then rather like the way you see things now when you turn toward a bright light with your eyes closed. At this point in development a fetus turns his head toward a light source, providing his visual system with light stimulation that probably promotes further brain development. Sensory experience has been found to be critical for healthy brain development after birth, and vision provides a good example of this principle. The interplay between neurons and visual experience was documented dramatically in another classic research project by Wiesel and Hubel (1965), who, in a series of experiments with kittens for which they won the Nobel Prize, showed that early visual deprivation has permanent deleterious effects. They sewed shut one of each kitten’s eyes at birth so that only one eye was exposed to visual input. Several weeks later when the eyes were reopened, the kittens were found to be permanently blinded in the eye that had been deprived of stimulation. No amount of intervention or aggressive treatment could repair the damage. The neurons needed visual stimulation to make the proper connections for sight; in the absence of stimulation, the connections were never made, and blindness resulted. The existence of a critical or sensitive period for visual development was established. This research prompted surgeons to remove infant cataracts very shortly after birth instead of waiting several years, so that permanent damage to sight could be avoided.

Sensory systems, such as the auditory and visual systems, influence each other. Their eventual development is a function of their interrelationships (Murray, Lewkowicz, Amedi, & Wallace, 2016). The integration of these systems seems to serve the baby well in making sense of his world. So, for example, when a young infant sees an object he also has some sense of how it feels. Or when a 2-year-old watches lip movements, vocal sounds are easier to decipher.

Postnatal Brain Development

After your birth, your neurons continued to reproduce at a rapid pace, finally slowing down around 12 months of age. For many years it was assumed that neurons do not reproduce after early infancy. Newer research, however, has definitively documented the growth of new neurons throughout the life span in some parts of the brain (see Lux & van Ommen, 2016). These adult neural stem cells (NSCs) are generated in two principal brain areas: the subventricular zone (SVZ), located near the ventricles, and the dentate gyrus, a part of the hippocampus. SVZ neurons migrate to the olfactory bulb, where they appear to maintain its functioning by generating interneurons. New hippocampal neurons appear to integrate into existing networks that involve learning and memory. The location, migration patterns, and the ways adult neural stem cells integrate with existing neural networks are subjects of intense investigation. Available research indicates that increases and decreases in hippocampal neurogenesis during adulthood depend on environmental factors, including enriching stimulation, stress, and physical activity (Charney, 2012).

Brain growth after birth is also the result of the formation of new synapses. The growth spurt in synapses reflects the vast amount of learning that typically occurred for you and for most babies in the early months of postnatal life. Some areas of your developing brain experienced periods of rapid synaptic growth after birth, such as in the visual and auditory cortices, which increased dramatically between 3 and 4 months. In contrast, the synapses in your prefrontal cortex developed more slowly and reached their maximum density around 12 months. As you will see in the following sections, infants make rapid strides in cognitive development at the end of the first year, at about the time when prefrontal synapses have reached their peak density.

The growth of these connections was the product of both internal and external factors. Certain chemical substances within your brain, such as nerve growth factor, were absorbed by the neurons and aided in the production of synapses. Your own prenatal actions, such as turning, sucking your thumb, and kicking, as well as the other sensory stimulation you experienced, such as sound, light, and pressure, all contributed to synaptic development. However, as we noted earlier, the major work of synaptogenesis, the growth of new synapses, took place after birth, when much more sensory stimulation became available.

You arrived in the world with many more neurons than you would ever need. Over the next 12 years or so, through a process known as neural pruning, many neurons would die off and many synaptic connections would be selectively discarded. Some of these neurons migrated incorrectly and failed to make the proper connections, rendering them useless. Some of the synaptic connections were never established or were rarely used, so they ultimately disappeared as well. What counts most after birth is not the sheer number of neurons, but the number and strength of the interconnections. Those branching points of contact that remained to constitute your brain would be a unique reflection of your genetics and epigenetics, the conditions of your prenatal period, the nutrition you received, and your postnatal experience and environment. This rich network of connections is what makes your thinking, feeling brain, and its structure depends heavily upon what happens to you both before and after your birth.

You may be wondering how to account for the simultaneous processes of synaptogenesis and pruning, which seem to be acting at cross-purposes. What is the point of making new synaptic connections if many neurons and connections will just be culled eventually? In a classic analysis, Greenough and Black (1992) argued that synaptic overproduction occurs when it is highly likely that nature will provide the appropriate experience to structure the development of a particular system. For example, many animal species, as well as humans, go through a predictable sequence of activities designed to provide information for the brain to use in the development of vision. These include opening eyes, reaching for and grasping objects, and so on. This type of development depends upon environmental input that is experience-expectant because it is experience that is part of the evolutionary history of the organism and that occurs reliably in most situations. Hence, it is “expected.” Lack of such experience results in developmental abnormalities, as we saw in the kitten experiments performed by Wiesel and Hubel. The timing of this particular kind of experience for nervous system growth is typically very important; that is, there is a critical period for such experience-expectant development. Nature may provide for an overabundance of synapses because it then can select only those that work best, pruning out the rest.

When human infants are neglected or deprived of typical parental caregiving, the cognitive consequences can be severe. One reason may be that ordinary caregiving behaviors—repeated vocalizations, facial expressions, touch—are necessary for experience-expectant development. McLaughlin and colleagues have argued that such caregiving helps direct infants' attention to relevant stimuli in the environment. This facilitates experience-expectant learning of basic associations between vocal sounds and situations, between actions and outcomes, and so on, that other learning depends upon (McLaughlin, Sheridan, & Nelson, 2017).

In contrast to overproducing synapses in anticipation of later experience, some synaptic growth occurs as a direct result of exposure to more individualized kinds of environmental events. This type of neural growth is called experience-dependent. The quality of the synaptic growth "depends" upon variations in environmental opportunities. Stimulating and complex environments promote such growth in rat pups and other mammals (e.g., Kolb, Gibb, & Robinson, 2003). It seems likely that the same is true for infants and children. Imagine what might be the differences in synaptic development between children raised by parents who speak two different languages in their home and children raised by those who speak only one, for example. Experience-dependent processes do not seem to be limited to sensitive periods but can occur throughout the life span. Connections that remain active become stabilized, whereas those that are not used die out. This type of experientially responsive synaptic growth and the concomitant changes in brain structure it induces have been linked to learning and the formation of some kinds of memory. This process fine-tunes the quality of brain structure and function and individualizes the brain to produce the best person–environment fit (Bialystok, 2017).

Clearly, your early experiences played a vital role in the functional and structural development of your brain. Your experiences helped stimulate the duplication of neurons in some parts of your brain, and they prompted synaptogenesis and pruning. These processes contribute to the plasticity of brain development, which can be quite remarkable. Suppose, for example, that you had suffered an injury to your left cerebral cortex during infancy. You can see in Box 2.1 that ordinarily, the left cerebral cortex serves language functions. Yet when the left hemisphere is damaged in infancy, the right hemisphere is very likely to take over language functions. Your language acquisition might have been delayed by an early left hemisphere injury, but by 5 or 6 years of age you would most likely have caught up with other children’s language development. Adult brains also exhibit some plasticity after brain injury, but nothing as dramatic as we see in children (e.g., Gianotti, 2015; Stiles, 2001). Understanding of postnatal brain development has improved dramatically over the last few decades, spurred by modern technologies. In Table 2.5, we describe several of the approaches that are recurrently mentioned in this and later chapters.

The Developing Stress and Adaptation System

Learning about early brain development can help you to understand how development proceeds more generally. You have seen that the intricate interplay of genes and the many layers of environment, from the cell to the outside world, produces an organized brain and nervous system. That system interacts with all other bodily systems and continuously re-organizes in response to experience. Development is a life-long process of adaptation, which is adjustment to change: changing bodily needs as well as new and different opportunities and challenges from the environment. Some changes and the adaptations that we make to them are predictable and repetitious. For example, our bodies’ needs for food and rest change predictably during the course of a day, and we respond by eating and sleeping on relatively consistent schedules. But much of our experience requires adjusting to less predictable events, which can alter our brains and behavior, sometimes temporarily and sometimes more permanently. Developmental science is particularly focused on understanding the process of stress, adaptation responses to challenges, and how stress helps to produce healthy and unhealthy life trajectories.

What Is Stress?

Stress is a word that most of us use often, usually to describe what we feel when life seems challenging or frightening or uncertain. We call our feelings “stress” when it looks like we will be late for class or work, or we discover that there is not enough money to cover the rent, or we find a skin lesion that might be cancerous. Scientists and practitioners also use the term to cover lots of situations, and as a result, it can sometimes be difficult to pin down its meaning. An early researcher, Hans Selye (e.g., 1956), first used the term stress and helped launch its worldwide popularity as well as the breadth of its use. It was Selye who characterized stress as a bodily response to any change or demand (i.e., an adaptation), and a stressor as an event that initiates such a response. Stressors can be noxious or positive; the key is that they induce adaptation.

Modern researchers often make more fine-grained distinctions among types of stress. McEwen and McEwen (2016) summarize these as good stress, tolerable stress, and toxic stress. Good stress is “the efficient, acute activation and efficient turning off of the normal physiological stress response when one faces a challenge” (p. 451)—a challenge such as giving a talk or managing the drive to work through heavy traffic. Tolerable stress is a more chronic physiological response, likely to be triggered by more serious and long-lasting threats such as losing a job, but the chronic response is turned off eventually as the individual finds ways to cope, relying on a range of strengths, such as self-esteem, supportive relationships, and so on. Toxic stress is the same kind of chronic response, but it fails to “turn off”—the individual is unable to cope effectively. Determining whether an experience or event is a stressor is complicated by the fact that what is stressful is, to some degree, in the eye of the beholder. Although most people would agree that certain situations are more traumatic than others, individual differences in what people view as stressful are influenced by prior learning, memories, expectations, and perceptions about one’s ability to cope. Also, whether a potential stressor triggers a stress reaction, and the strength and duration of that reaction, depend on many contextual factors, especially social context. A person’s social standing matters; so does the presence of a loving, supportive caregiver or partner or friend (Rutter, 2016).

The concept of allostasis is helpful in understanding where stress fits into a person’s overall physical and psychological development. Allostasis refers to the regulation of many interacting bodily processes affected by stressors, from the sleep–wake cycle to blood pressure to digestion, as the individual adjusts to experiences. Allostatic load is the accumulation of the effects of multiple stressors; and allostatic overload refers to pathological changes brought on by toxic stress (McEwen & McEwen, 2016).

Some general points about allostasis are important to keep in mind. Adaptations change our brains. The physiological events involved in the stress response alter gene expression and a variety of cellular and intercellular processes. We learn and change, sometimes in ways that are positive, sometimes not. Acute, mild to moderate stressors that are typically associated with good outcomes, such as exercise or giving a speech, are often brain-enhancing. The “good stress” they generate can promote synaptic or neuron growth. When stress is chronic and/or intensely negative, brain changes can make us less resilient (more cognitively rigid), reducing the capacity to learn in the future. Dendrites are lost, synaptic and neuron growth is suppressed, and processes that promote recovery from stress can be impaired. The costs of allostatic load mount up, leading to a host of mental and physical problems (McEwen et al., 2015).

Before we consider the pathological effects of toxic stress, let’s review the typical response we call stress, or “good stress.” It involves a complicated, multilevel set of physiological reactions, governed by many parts of the brain. Because what initiates stress is affected by an individual’s cognitions and context, it is clear that cortical areas such as the prefrontal cortex play a role. Parts of the brain involved in emotional and social processing, especially the amygdala and basal ganglia, are pivotal, as are closely related areas, such as the hippocampus and hypothalamus.

The Architecture of the Stress Response

A stressor is first detected via an interconnected network of sensory areas in the cortex, thalamus, and amygdala, the brain's specialist in threat detection (LeDoux, 2012). The amygdala is involved in virtually all fear conditioning and works to jumpstart multiple stress-related networks peripheral to the central nervous system. Two major peripheral systems subject to this central control are the sympathetic nervous system (SNS) and the hypothalamic-pituitary-adrenal (HPA) axis. The SNS releases important chemicals such as epinephrine (adrenaline) and norepinephrine (noradrenaline) that send a burst of energy to those organs necessary for "fight or flight" (e.g., heart, lungs) while diverting energy from less necessary systems (e.g., growth, reproduction, digestion). Adrenaline is instrumental in causing the well-known effects of arousal, such as racing heart and sweaty palms. Once the threat has passed, the parasympathetic nervous system (PSNS) counteracts the sympathetic system's effects, down-regulating its activity. The heart rate returns to baseline and ordinary bodily functions resume.

The HPA axis is activated when the amygdala stimulates the hypothalamus. The hypothalamus communicates the danger message to the pituitary gland by means of the chemical messenger corticotropin releasing factor (CRF). The message is read by the pituitary as a sign to release adrenocorticotropic hormone (ACTH) into the bloodstream. ACTH then makes its way to the adrenal glands, situated atop the kidneys, which receive the message to release cortisol. Cortisol is a key glucocorticoid hormone produced by humans. Glucocorticoid receptors (GRs) are located in organs and tissues throughout the body and brain. When cortisol and other glucocorticoids bind with these receptors, physiological responses to stressors are triggered. One of these, for example, is a reduction in inflammation, the body’s normal response to a disease pathogen or irritant. Another is an increase in glucose production, increasing energy levels. Cortisol in the bloodstream also travels back to the brain, forming an important feedback loop. Cortisol binds to GRs in the amygdala and the hippocampus, efficiently shutting down the acute stress response and helping the body to return to a normal state (Spencer & Deak, 2016).
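Described this way, the HPA axis is a classic negative feedback loop, and its logic can be sketched in a toy simulation. This is our illustration only; the constants are arbitrary values chosen to show the loop's shape, not physiological measurements:

def simulate_hpa(threat_steps=3, total_steps=12,
                 drive=1.0, feedback=0.6, clearance=0.3):
    # Cortisol release is driven by threat but suppressed by the
    # cortisol already circulating (the GR-mediated feedback loop);
    # a fraction of circulating cortisol clears each time step.
    cortisol, levels = 0.0, []
    for t in range(total_steps):
        threat = drive if t < threat_steps else 0.0
        release = max(0.0, threat - feedback * cortisol)
        cortisol = cortisol * (1 - clearance) + release
        levels.append(round(cortisol, 2))
    return levels

print(simulate_hpa())
# Cortisol rises during the threat, then decays back toward baseline.

Raising threat_steps to equal total_steps mimics a chronic stressor that keeps cortisol elevated throughout, while weakening the feedback parameter lets cortisol climb much higher before clearance brings it down. Both are toy analogues of the inefficient shutdown described in the next section.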

Toxic Stress and Allostatic Overload

You can see now that allostasis engages many regulatory systems, altering physiological functions temporarily as the body deals with an acute stressor. The typical "fight or flight" reaction rights itself quickly so that normal functions like digestion and fighting infection can proceed. It should not be surprising, then, that chronic and intense stress can be toxic, creating allostatic load or overload with the potential to alter many physical and psychological functions, sometimes permanently. For example, cortisol secretion is part of many non–stress-related bodily processes (Spencer & Deak, 2016), and under normal circumstances, cortisol levels in the bloodstream (which can be measured in saliva samples) show a regular daily pattern of morning elevation and afternoon decline. This pattern is often disrupted in chronically stressed individuals. They might show elevated cortisol levels throughout the day or unusual daily variations (e.g., high afternoon levels). Allostatic overload also tends to make the stress response itself less efficient. It might remain activated even when a threat has ended, or it may produce a blunted response to threat (Rutter, 2016).

Allostatic overload also affects the immune system. Chemical messengers of the immune system, called cytokines, are produced during the immune response. These proteins can be either pro-inflammatory or anti-inflammatory in nature. Inflammation, it should be noted, is the body's protective response to infection or injury. When appropriate, inflammation is adaptive; however, when inflammatory processes persist unremittingly, mental and physical diseases can result. During an acute stress response, as you have seen, cortisol reduces the body's immune response. But chronic activation of the stress-response system revamps the immune system, which becomes under-responsive to cortisol so that pro-inflammatory cytokines are then under-regulated. Persistent inflammation increases the risk of health problems over time, such as cardiovascular disease, type 2 diabetes, and rheumatoid arthritis. Along with other changes associated with toxic stress, it is also linked to an increased risk of psychopathology, such as depression and schizophrenia (Danese & Baldwin, 2017).

Toxic stress may do its damage largely by fostering epigenetic alterations that modify gene expression (e.g., Provencal & Binder, 2015; Tyrka, Ridout, & Parade, 2016). For example, children who are maltreated by their caregivers are victims of toxic stress, and they experience many of the short- and long-term effects of allostatic overload that we have described. In a study of low-income, school-aged children, researchers found substantial differences in methylation patterns for maltreated children as compared to nonmaltreated children. Especially noteworthy is that maltreated children showed methylation differences for genes that are linked to risk for many physical and psychiatric disorders (Cicchetti, Hetzel, Rogosch, Handley, & Toth, 2016).

The impact of stressors on the development of the stress response and all of the systems involved in that response begins in utero. In the prenatal period, stress experienced by the mother has teratogenic effects on the developing fetus. Maternal stress increases the risk of a wide range of negative outcomes, from miscarriage to low birth weight to postnatal neural and behavioral dysregulation, such as learning problems and increased anxiety levels. When glucocorticoids such as cortisol are present at typical, daily levels during pregnancy, they play a positive role in the development of a fetus’s organs and tissues, especially in the third trimester. But human and animal studies have shown that when a fetus is overexposed prenatally to a mother’s stress hormones, brain development can be altered, especially in the HPA axis (Provencal & Binder, 2015). Even mild prenatal maternal stress in later pregnancy has been shown in animal studies to increase DNA methylation levels in the frontal cortex and hippocampus; heavy doses of maternal stress dramatically decrease methylation in the same areas. In either case, epigenetic changes are induced in brain areas that are key to normal stress reactivity and to many cognitive functions as well (Bock, Wainstock, Braun, & Segal, 2015). In humans and other mammals, the HPA axis is relatively immature at birth, and the environment can play a major role in its further development. You saw earlier in this chapter that early maternal caregiving can alter the reactivity of a rat pup’s stress response for life. Rat pups who get lots of maternal licking and grooming produce more of the critical receptors in the hippocampus that bind with glucocorticoids, ending acute stress responses quickly. These pups mature into mellow adults (Meaney, 2010). With human babies, researchers have found that more sensitive maternal caregiving in infancy is linked to lower cortisol levels during and after exposure to stressors over the first three years of life (Laurent, Harold, Leve, Shelton, & van Goozen, 2016). This suggests that, just as in rats, the care babies receive early in life can alter the architecture of their stress response. In Box 2.2 and in the Applications section of this chapter, you will learn more about the long-term consequences of prenatal exposure to stressors and of chronic stress early in life. You will also learn how helpers might intervene to mitigate those consequences.

Box 2.2: Do Numbers Matter? Early Adverse Experiences Add Up

No doubt about it, early experience matters. In this chapter, we describe some of the processes that explain how adverse experiences exert harm. This knowledge raises serious concerns for practitioners. Which experiences pose risks for children’s development? Is there a tolerable level of stress for children? Are there certain times during development when risks exert maximal influence? At first glance, these questions seem impossible to answer. Each individual possesses a unique blend of strengths and vulnerabilities, making exact prediction unlikely. Nonetheless, some large-scale epidemiological investigations of these questions can provide useful guidance for clinicians and public health experts.

Several longitudinal investigations have looked at the number of stressful experiences in children’s lives and their subsequent relationship to health or adversity. A Western Australian study (Robinson et al., 2011) followed a group of 2,868 pregnant women from 16 weeks gestation until their children were adolescents. During their pregnancies (at 16 weeks gestation, with an updated assessment at 34 weeks), mothers were asked about the number and types of stressors they experienced. Stressors included financial difficulties, job loss, deaths of relatives, residential moves, marital difficulties, separation, or divorce. The researchers controlled for maternal SES, age, ethnic status, smoking, drinking, education, and history of emotional problems. Child variables, including gestational age, birth weight, and histories of breastfeeding, were also controlled. Assessments of children’s physical, emotional, and behavioral functions were made at ages 1, 2, 3, 5, 8, 10, and 14.

Results showed that offspring of mothers who reported zero to two stressors during pregnancy did not differ significantly on measures of internalizing problems (e.g., depression, low self-esteem) and externalizing problems (e.g., acting out, aggressive, disruptive behaviors). However, children whose mothers experienced three or more stressful events during their pregnancies exhibited significantly higher levels of internalizing and externalizing problems at every assessment period across the prospective study compared to the group with fewer stressors. Externalizing problems were more pronounced in children whose mothers experienced four events compared to children of mothers reporting no stress. If mothers experienced five or more events, higher rates of child internalizing disorders were observed even after controlling for many other possible contributors to depression and anxiety. Maternal stressors experienced at 16 weeks gestation were more strongly related to later problems than those experienced at 34 weeks. Overall, the results of this study support a dose-response relationship between the number of prenatal stressors and later maladaptive outcomes: lower levels of stressors predicted lower symptom levels, and symptoms increased in a stepwise fashion with each additional stressor exposure. It’s important to keep in mind, however, that the nature of some stressors (e.g., low SES, financial difficulty) can exert an ongoing influence on development in addition to having prenatal impact. Conditions of low SES can provide a context for ongoing development in which adverse consequences accumulate (Duncan & Brooks-Gunn, 1997).

Researchers from a large California health consortium took a similar epidemiological approach to the study of the effect of early life stress on subsequent illness. Chronic diseases like cancer, heart disease, and diabetes account for 70% of deaths in the United States (Centers for Disease Control [CDC], 2012). To a large degree, these diseases are related to unhealthy behaviors like smoking, overeating, drinking, lack of exercise, and so on. Because such behaviors are modifiable, efforts to understand the factors underlying unhealthy behaviors can improve public health, reduce mortality, and potentially decrease national health expenditures. Two waves of patient data from San Diego’s Kaiser Permanente Medical System were collected for the Adverse Childhood Experiences (ACE) study (Felitti et al., 1998). Respondents (N = 17,421) reported their early experiences of adversity in the following 10 categories: emotional abuse, physical abuse, sexual abuse, physical neglect, emotional neglect, substance abuse in the home, mental illness of a parent or household member, domestic violence, incarceration of a household member, and parental divorce or separation. Close to two-thirds of the sample reported having experienced at least one of these adverse childhood experiences (see Figure 2.10). Over 20% reported three or more ACEs, and rates of comorbidity were high. The most common adverse experience was substance abuse in the home.

Findings showed a similar pattern to that observed in the Australian study: mental and physical health consequences in adulthood increased in a stepwise fashion proportional to the number of early adverse experiences (Anda et al., 2006). To date, over 60 studies have supported the initial finding that early life experience of adversity confers risk. The higher the dose of early adversity, the greater the risk. Adversity’s effects manifest in the following areas: alcoholism and alcohol abuse, behavioral problems and developmental delays in children, chronic obstructive pulmonary disease, depression, fetal death, financial stress, health-related quality of life, illicit drug use, ischemic heart disease, liver disease, obesity, poor work performance, risk for intimate partner violence, schizophrenia in males, multiple sexual partners, sexually transmitted diseases, smoking, suicide attempts, unintended pregnancies, early initiation of smoking and/or sexual activity, adolescent pregnancy, risk for sexual violence, use of prescription psychotropic medication, poor academic achievement, and premature mortality (the Adverse Childhood Experiences CDC website: www.cdc.gov/violenceprevention/acestudy/journal.html; Larkin, Shields, & Anda, 2012; Vallejos, Cesoni, Farinola, Bertone, & Prokopez, 2016).

Studies suggest that early adverse experiences contribute to the etiology of later illness by altering allostatic regulation and interfering with brain, immune, and endocrine system development (Danese & McEwen, 2012). If this is what is actually happening, you should expect the effect to be present regardless of changing social attitudes about reporting mental illness, medication use, and so on. In other words, the biologically based changes attendant upon adversity would be fundamental. Analyses of ACEs in four patient cohorts born between 1900 and 1978 provided supportive results (Dube, Felitti, Dong, Giles, & Anda, 2003). The strength of the dose-response association between ACEs and health outcomes was observed within each successive cohort, even though the prevalence of risk behaviors varied across decades. For example, changes in attitudes about smoking and variations in smoking behavior occurred over the 78 years included in this analysis. Nevertheless, the relationship between ACEs and smoking remained consistent. This evidence suggests that early adverse experiences contribute to multiple health problems by means of “inherent biological effects of traumatic stressors (ACEs) on the developing nervous systems of children” (Dube et al., 2003, p. 274).

The impact of ACE research has been profound because it connects some dots between the rich literature on early childhood maltreatment and that of adult health and productivity. Many states have begun to add ACE questions to their own assessments of statewide behavioral health (e.g., Austin & Herrick, 2014). However, the original work from Kaiser Permanente involved primarily White participants with college experience and health insurance. Recognition of racial disparities in both health and health care has prompted researchers to expand the list of conventional adversity categories to include experiences that may be more frequently observed in diverse groups. Since poverty has such a powerful impact on development (Evans & Kim, 2013), and because low-SES children tend to report higher numbers of ACEs (Slopen et al., 2016), its inclusion in the list could improve the ability to track outcomes and predict the needs of poor children as they age. In addition, research on factors outside the home, such as community violence and racism, indicates that these, too, have long-term adverse effects.

Using a racially and socioeconomically diverse sample in Philadelphia, Cronholm and his colleagues (2015) tested a “second-generation” measure of ACEs. This measure included the conventional categories but added five additional questions related to witnessing violence in one’s community, having a history of foster care, being bullied, experiencing racism, and living in an unsafe neighborhood. Survey results from a group of 1,784 predominantly African American respondents indicated higher incidences of most conventional ACEs than in the original California sample; the exceptions were sexual abuse, physical neglect, and emotional neglect, which were more frequently reported in the California sample. Notably, 50% of the Philadelphia sample experienced one to two expanded ACEs, and 13% experienced three or more. Approximately one-third reported no experience with expanded ACEs. As you might imagine, there was some overlap between conventional and expanded categories: almost half of the Philadelphia sample had experienced both categories of early adverse events, while about 14% experienced adversities included only in the expanded list. If not for these additional categories, researchers would have missed specific kinds of adversity faced by this lower-income, urban group.

If we are to fully understand the developmental pathways to health and disease for everyone, it is crucial to understand the vulnerability profiles and specific stress exposures that may be related to culture, race, gender, disability, and socioeconomic status. This epidemiological approach provides a good example of early steps in the process of translational research. As the process moves forward, programs and policies can be developed, tested, and implemented to target common and specific risks, ultimately improving quality of life and reducing disease burden for all. With such clear evidence of risk factors and such potential for remediation, prevention and intervention may never have been more important than they are now.

Case Study

Jennifer and Jianshe Li have been married for 10 years. Jennifer is a White, 37-year-old woman who is an associate in a law firm in a medium-sized, Midwestern city. Her husband, Jianshe, a 36-year-old Chinese American man, is a software developer employed by a large locally based company. The couple met while they were in graduate school and married shortly thereafter. They have no children. They own a home in one of the newly developed suburban areas just outside the city. Jennifer was adopted as an infant and maintains close ties with her adoptive parents. She is their only child. There has been no contact between Jennifer and her biological parents, and she has never attempted to learn their names or find out where they live. Jianshe’s parents, two brothers, and one sister live on the U.S. West Coast, and all the family members try to get together for visits several times a year. Jennifer and Jianshe are active in a few local community organizations. They enjoy the company of many friends and often spend what leisure time they have hiking and camping.

The Lis have been unsuccessful in conceiving a child even though they have tried for the past four years. Both husband and wife have undergone testing. Jennifer has had infertility treatment for the past three years but without success. The treatments have been lengthy, expensive, and emotionally stressful. Approximately a year ago, Jennifer began to experience some mild symptoms of dizziness and dimmed vision. At first, she disregarded the symptoms, attributing them to overwork. However, they persisted for several weeks, and she consulted her physician. He thought that they might be a side effect of the medication she had been taking to increase fertility. Jennifer’s treatment protocol was changed, and shortly afterward, much to the couple’s delight, Jennifer became pregnant. Unfortunately, the symptoms she had experienced earlier began to worsen, and she noticed some mild tremors in her arms and legs as well. Jennifer’s physician referred her to a specialist, who tentatively diagnosed a progressive disease of the central nervous system that has a suspected genetic link and is marked by an unpredictable course. The Lis were devastated by the news. They were very concerned about the risks of the pregnancy to Jennifer’s health. They also worried about the possible transmission of the disease to the new baby whom they had wanted for such a long time. In great distress, they sought counseling to help them deal with some of these concerns.
