Human services profession and history
The student will post one thread of at least 300 words. In each thread, students must demonstrate course-related knowledge and support assertions with at least 2 scholarly citations in APA format. Acceptable sources include the class textbook and/or peer-reviewed articles from within the last five years.
Part 1: Chapter one of the course textbook discusses what makes the human service field distinct from other fields, such as social work and counseling psychology. For the first part of your thread, discuss why the human service field is a much-needed profession and share one or two possible careers that you might be interested in pursuing. Remember that the Human Service degree is designed to prepare individuals with the necessary skills to participate in agency-related activities (community care, court, agency, etc.) and does not fulfill licensure requirements. Students planning to apply for state licensure should contact their academic advisor for information on degree programs designed to fulfill state licensure requirements.
Part 2: Based on your study of the Learn materials for this Module: Week, what do you consider the most important historical event in the history of the human services profession? Why? While you are sharing your opinion here, you must demonstrate informed opinion by supporting your points with references to the course textbook.
What is QMHP and how does it impact my career eligibility?
What qualifies as an appropriate degree and as clinical experience?
How do I become QMHP qualified?
In Virginia and other states, mental health service providers require new employees to be QMHP-credentialed. For
graduates with only an undergraduate social science degree, such as psychology or family and child development, this
requirement presents a challenge when applying for jobs in the field. When looking at entry level positions, one will often
see “1 year of experience” as a requirement; this requirement usually refers to QMHP qualification.
Human Services Degrees include social work, psychology, psychiatric rehabilitation, sociology, counseling, vocational
rehabilitation, human services counseling, or other degrees deemed equivalent to those described.
Clinical Experience includes providing direct behavioral mental health services and treatment to individuals. In addition, the supervising clinician must be appropriately credentialed for the field (for example, licensed, QMHP qualified, or holding other relevant certifications).
Sources: www.dbhds.virginia.gov/documents/ol-licensingqmhp-qmrp-qual.pdf & www.dbhds.virginia.gov/
documents/HumanRights/Human%20Services%20and%20Related%20Fields%20Approved%20Degrees.pdf
Further questions can be directed to Virginia’s DBHDS’ Licensure Office: (804) 786-1747
More detailed criteria can be found at the websites listed at the bottom of this page; however, if you have a human services-related undergraduate degree, there is information you need to know before pursuing QMHP. These descriptions are taken from the Virginia Department of Behavioral Health and Developmental Services. If you complete roughly 2,000 hours (i.e., 1 year of full-time work experience) while in your undergraduate program, you will be QMHP-A or QMHP-C qualified upon graduation.
QMHP-E: Qualified Mental Health Professional-Eligible: A person who has obtained at least a bachelor’s degree in a human services field or special education from an accredited college but who does not yet have one year of clinical experience. An individual with QMHP-E status is generally eligible for volunteer or intern opportunities to build hours toward QMHP-A or QMHP-C.
QMHP-A: Qualified Mental Health Professional-Adult: A person in the human services field who is trained and experienced in providing psychiatric or mental health services to adults who have a mental illness. For a student with a bachelor’s degree in a social sciences field, the requirements for QMHP-A include at least a bachelor’s degree in human services or a related field from an accredited college and at least one year of clinical experience providing direct services to individuals with a diagnosis of mental illness.
QMHP-C: Qualified Mental Health Professional-Child: A person in the human services field who is trained and experienced in providing psychiatric or mental health services to children who have a mental illness. For a student with a bachelor’s degree in a social sciences field, the requirements for QMHP-C include at least a bachelor’s degree in a human services field or in special education from an accredited college and at least one year of clinical experience with children and adolescents.
QMHP: Qualified Mental Health Professional
Liberty University Career Center
Educational Requirements and Professional Standards for the Helping Professions
The human services field is generalist and interdisciplinary in nature, and thus includes different professions with varying functions, levels of education, and requirements for state licensure. Understanding the specific requirements for the various careers within the broader human services profession helps human services students better understand the requirements for their careers of interest.
Human Services Educational Standards
The Council for Standards in Human Service Education (CSHSE) was established in 1979 to ensure excellence in human services education at the associate, baccalaureate, and master’s levels by guiding and directing educational programs that offer degrees specifically in human services. The CSHSE developed a set of research-based national standards for curriculum and subject area competencies for human services degree programs at colleges and universities, and it provides guidance and oversight to educational programs during the accreditation process.
The CSHSE requires that the curriculum in a human services program cover the following standard content areas:
· knowledge of the human services field, including the relevant theory, skills, and values of the profession, within the context of the history of the profession;
· the interaction of human systems;
· the range and scope of human services delivery systems;
· information literacy;
· common program planning and evaluation methods;
· appropriate client interventions and strategies;
· the development of students’ skills in interpersonal communication;
· client-related values and attitudes; and
· students’ self-development.
The curriculum must also meet the minimum requirements for field experience in a human services agency, as well as illustrate that students are receiving appropriate supervision within their field placement sites (CSHSE, 2019). The CSHSE is the only organization that accredits human services educational programs; it also offers continuing education opportunities for human services professionals and educators, networking opportunities, an informational website, and various professional publications.
Human Services Professional Certification
In 2010, the CSHSE and the NOHS in collaboration with the Center for Credentialing & Education (CCE) took a significant step toward the continuing professionalization of the human services profession by developing a voluntary professional certification called the Human Services Board Certified Practitioner (HS-BCP). Human services professionals who hold at least an associate degree in human services (or related field) from a regionally accredited college or university and have 350 hours of post-graduate work in the human services field may be qualified to take the HS-BCP exam (pending an evaluation by the CCE).
The implementation of the HS-BCP certification has moved both the discipline and the profession of human services toward increased professional identity and recognition within the broader helping professional fields by verifying human services practitioners’ attainment of relevant education and practice knowledge. Credentials are maintained through a recertification process that requires 60 hours of continuing education every 5 years, including 6 hours of ethics (CCE, n.d.).
Duties and Functions of a Human Services Professional
The NOHS, as the primary professional organization for human services students, educators, and practitioners, provides a range of benefits to members, including opportunities for professional development as well as networking, advocacy of a human services agenda, and the promotion of professional and organizational identity. The NOHS has also been influential in developing the scope and parameters of human services professional functions and competencies, some of which include the following:
· Understanding the nature of human systems, including individuals, groups, organizations, communities, and society, and how each system interacts with others.
· Understanding conditions that promote or limit optimal functioning of human systems.
· Selecting, implementing, and evaluating intervention strategies that promote growth and optimal functioning, and that are consistent with the values of the practitioner, client, agency, and human services profession.
· Developing process skills that enable human services professionals to plan and implement services, including the development of verbal and oral communication skills, interpersonal relationship skills, self-discipline, and time management skills.
These competencies are so important because, in the human services profession, the practitioner is the primary tool used to effect change in people’s lives. Thus, to be effective, they must develop a comprehensive and
generalist skill set that enables them to work with a wide range of clients, with diverse backgrounds, many of whom are experiencing a wide range of challenges, within varying contexts. For instance, imagine that you have a 40-year-old White mother of two young girls as a client. She has recently left a violent relationship and is currently residing in a transitional housing shelter. Now imagine that you have another client who is a 75-year-old Black veteran with an alcohol addiction who is grieving the recent death of his wife. And finally, imagine that you have a client who is a young Native American teen who was living in foster care and recently ran away from home and is now living on the streets, hasn’t attended school in weeks, and is refusing to return home.
Each of these cases will require that you develop the ability to understand and assess these clients through the lenses of their generational cohort, gender, race and ethnicity, socioeconomic status, the systems within which each client is operating (e.g., educational, legal, family, vocational), and how each system interacts with the others. You will also need to develop an understanding of and ability to assess conditions that support or limit functioning, such as histories of trauma and abuse, mental and physical health status, educational and employment backgrounds, prior losses, coping styles, and available resources. You will need to become familiar with a range of intervention strategies, including the ability to evaluate what interventions would be appropriate for each client, and then learn how to engage in an ongoing evaluation of the selected interventions’ effectiveness.
Finally, you will need some additional skills to pull all this off, such as good interpersonal skills that enable you to connect with clients who are likely very different from you, who may be resistant to change, or who are emotionally guarded. You will also need to have excellent writing skills so you can succinctly write process notes and enter them on your agency’s electronic records system using your excellent technical skills. Whew! If you can accomplish all of this, you’ll be a true generalist human services professional!
Of course, you won’t be flying by the seat of your pants and making things up as you go. Rather, you will have access to a set of guiding principles, also called theoretical orientations, that inform your decision making and interactions with clients and client systems. The human services discipline is built on theoretical foundations that reflect the values of the profession. Understanding the underlying assumptions of any theoretical framework is important because such assumptions guide practice decisions about the people we work with and society as a whole. For instance, theoretical orientations and frameworks (also called theoretical models) make assumptions about human nature and what motivates people to behave in certain ways under certain conditions.
We rely on theories every day when coming to conclusions about people and events, and why people behave as they do. So if you have ever expressed an opinion about why people don’t work (they are lazy, or they don’t have sufficient opportunities), or why some people commit crimes (they are evil, or they are socialized during a bad childhood), you are espousing a theory and may not even realize it!
Theoretical Frameworks and Approaches Used in Human Services
Theoretical frameworks can serve as the foundational underpinnings of a profession, reflecting its overarching values and guiding principles (such as human services’ commitment to social justice and a belief in a person’s natural capacity for growth). They can also extend into the clinical realm by outlining the most effective ways to help people become emotionally healthy based on some presumptions about what caused them to become emotionally unhealthy in the first place. For instance, if a practitioner embraces a psychoanalytic perspective that holds to the assumption that early childhood experiences influence adult motivation to behave in certain ways, then counseling sessions will likely focus on the client’s childhood. But if the practitioner embraces a cognitive behavioral approach, which focuses on behavioral reinforcements and thinking patterns, then the focus of counseling will likely be on how the client frames and interprets their life experiences.
All of this information about theoretical frameworks and approaches raises the question of what theories tend to be used the most in the human services discipline—both as theoretical foundations (or underpinnings) for the profession, as well as those that guide practice. When considering the various theories of human behavior and social dynamics, it is important to note that theories can be either descriptive (e.g., describing a range of child behaviors), or prescriptive (e.g., determining which behaviors in children are normative and healthy, and which ones are not). A theory may begin by merely describing certain phenomena related to how people think, feel, and behave, but in time, as the theory develops, it may become more prescriptive in the sense that certain determinations are made by the theorists with regard to what is normative and healthy versus what is maladaptive.
It is also important to remember that culture and history often affect what is considered normative thinking and behavior. For instance, 100 years ago if a woman chose to remain single and not have children so she could focus on her career goals, she likely would have been considered mentally ill. A common criticism of the major theories of human behavior is that they are based on Western cultural values, and thus the behaviors deemed normative and healthy are often culturally prescribed and not necessarily representative or reflective of non-Western cultures. For instance, is it appropriate to apply Freud’s psychoanalytic theory of human behavior, which was developed from his work with high-society women in the Victorian era, to individuals of a Masai tribe in Kenya? What about using a Western-based theory of parenting with parents from an indigenous culture in South America?
Theories of human behavior used in the human services must reflect the values and guiding principles of the profession and also the range of human experiences, which supports the evaluation and assessment of clients in context. Important areas of context include personal characteristics, such as age, race and ethnicity, national origin, sexual orientation, gender and gender identity, geographical region, health status, socioeconomic status, and religion. Context involving social characteristics is important as well, such as the economy, political culture, various laws, the educational system, the health care system, racial oppression, privilege, gender bias, and any other broader social dynamic that may have an impact (even a distant one) on an individual’s life.
The theoretical frameworks and approaches most commonly used within the human services discipline evaluate and assess clients in the context of their various personal and environmental systems, while also considering the transactional relationship between clients and their various systems. Consider this case example:
A woman in her 40s is feeling rather depressed. She spends her first counseling session describing a fear that her children will be killed. She explains how she is so afraid of bullets coming through her walls and windows that she doesn’t allow her children to watch television in the living room. She never allows her children to play outside and worries constantly when they are at school. She admits that she has not slept well in weeks, and she has difficulty feeling anything other than sadness and despair.
Would you consider this woman mentally ill? Paranoid, perhaps? Correctly assessing her mental state does not depend solely on her thinking patterns and behavior, but on the context of those patterns, including her various experiences within her environment. If this woman lived in an extremely safe, gate-guarded community where no crimes had been reported in decades, then an assessment of some form of paranoia might be appropriate. But what if she lived in a high-crime neighborhood, where “drive-by” shootings were a daily occurrence? What if you learned that her neighbor’s children were recently shot and killed while watching television in their living room? What about her economic level, or the relationship among her, her neighborhood, and local law enforcement? What about the relationship between her children and their school? Her thinking and behavioral patterns do not seem as bizarre when considered within the context of the various systems in which she is operating; rather, it appears that she is responding to her various social systems in quite adaptive ways!
Theoretical Frameworks Based on General Systems Theory
General systems theory is a foundational framework used in the human services discipline because it reflects these systemic interactions. General systems theory is based on the premise that various elements in an environment interact with one another, and that this interaction (or transaction) has an impact on all elements or components involved. This presumption has certain implications for the hard sciences such as ecology and physics, but when applied to the social environment, its implications involve the dynamic and interactive relationship between environmental elements (such as one’s family, friends, neighborhood, and gender, as well as broader social elements, such as religion, culture, one’s ethnic background, politics, and the economy) and an individual’s thoughts, attitudes, beliefs, and behavior.
The systems within which we operate influence not just our thoughts, attitudes, beliefs, and behaviors, but our sense of identity as well. Consider how you might respond if someone asked you who you were. You might describe yourself as a female college student who is married, who has two high school-aged children, and who attends church on a regular basis. You might further describe yourself as an active online blogger from a second-generation Italian Catholic family who loves to run. On further questioning you might explain that your parents are older, and you have been attempting to help them find alternate housing that can assist them with their extensive medical needs. You might describe the current problems you’re having with your teenage daughter, who was recently caught with drugs by her high school’s police officer and has been referred to drug court.
Whether you realize it or not, you have shared that you are interacting with the following environmental and social systems: family, friends, neighborhood, social media, Italian American culture, Catholicism, gender, marriage, adolescence, the sports community, the medical community, the school system, and the criminal justice system. Your interactions with each of these systems is influenced by your expectations of these systems and their expectations of you. For instance, what are your expectations of your college professors? Your family? The Catholic Church? And what about what is expected of you as a college student? What is expected of you as a woman? As a wife? As a Catholic? What about the expectations of you as a married woman who is Catholic? What about the expectations of your family within the Italian American Catholic community? As you attempt to focus on your academic studies, do these various systems offer support or added pressure? If you went to counseling, would it be helpful for the practitioner to understand what it means to be a member of a large, Catholic, Italian American family? Would it be helpful for your therapist to understand what it means to be in college when married with teen daughters and aging parents?
The focus on the transactional exchanges between individuals and their social environment is what distinguishes the field of human services from other fields such as psychology and psychiatry (which tend to take a more intrinsic view of clients), although recently systems theory has gained increasing attention in these disciplines as well. Several theoretical frameworks and approaches based on general systems theory have evolved in the last several decades and thus capture this reciprocal relationship between individuals and their social environment and broader social systems, including Bronfenbrenner’s ecological systems theory, the ecosystems perspective, and a practice orientation called the person-in-environment approach. Urie Bronfenbrenner (1979) developed the ecological systems theory, which conceptualizes an individual’s environment as four expanding spheres, each with increasing levels of interaction with the individual. The microsystem includes one’s family; the mesosystem (or mezzosystem) includes elements such as one’s neighborhood and school; the exosystem includes elements such as the government; and the macrosystem includes elements such as one’s broader culture. The primary principle of Bronfenbrenner’s theory is that individuals can best be understood when evaluated in the context of their relationships with the various systems in their lives, and understanding the nature of these reciprocal relationships will aid in understanding the individual holistically.
Similar to Bronfenbrenner’s theory is the ecosystems theory, which conceptualizes an individual’s various environmental systems as overlapping concentric circles, indicating the reciprocal exchange between a person and various environmental systems. Although there is no official recognition of varying levels of systems in ecosystems theory (from micro to macro), the basic concept is very similar to Bronfenbrenner’s theory.
The person-in-environment (PIE) approach is often used as a basic orientation in practice because it encourages practitioners to evaluate individuals within the context of their environment. Clients are evaluated on a micro level (i.e., intra- and interpersonal relationships and family dynamics) and on a macro (or societal) level (i.e., the client is a Black male youth who experiences significant cultural and racial oppression). It is important to note that these theories do not presume that individuals are necessarily aware of the various systems they operate within, even if they are actively interacting with them. In fact, effective human services professionals will help their clients increase their personal awareness of the existence of these systems and how they are currently operating within them (i.e., the nature of reciprocity). It is through this awareness that clients increase their level of empowerment within their environment and consequently in all aspects of their life.
Self-Actualization and Strengths-Based Frameworks
Other theories that can help human services professionals better understand why people behave as they do come from the positive psychology movement, which focuses on people’s strengths rather than viewing people from a pathological perspective. Abraham Maslow (1954) developed a theoretical model focusing on needs motivation, theorizing that people self-actualize naturally, but are motivated to get their most basic physiological needs met first (e.g., food and oxygen) before they are motivated to meet their higher-level needs.
According to Maslow, most people would find it difficult to focus on higher-level needs related to self-esteem if they were starving or had no place to sleep at night. Maslow’s theory suggests that thoughts of self-esteem and self-actualization quickly take a back seat to worries about mere survival. Maslow’s Hierarchy of Needs can help human services professionals recognize a client’s need to prioritize the most pressing needs and can also explain why clients in crisis may appear to resist attempts to help them gain insight into their situations, choosing instead to focus on more basic needs. Many people were criticized during the 2020 global COVID-19 pandemic for hoarding toilet paper even though there were no reports of disruptions in the toilet paper supply chain; the hoarding itself produced shortages that lasted for months. And yet, evaluated through the lens of Maslow’s Hierarchy of Needs, this seemingly irrational behavior may make sense: toilet paper meets a very basic need for many Americans, and hoarding it may have reflected people’s fears that the pandemic would prevent them from getting their basic needs met.
The strengths perspective is another theoretical approach commonly used in the human services field because it encourages the practitioner to recognize and promote a client’s strengths rather than focusing on their deficits. The strengths perspective also presumes a client’s ability to solve their own problems through the development of self-sufficiency skills and self-determination. Although there are several contributors to the strengths perspective, Dennis Saleebey, a social work professor at the University of Kansas, is often credited with developing the strengths-based perspective in social work practice. Saleebey (1996) developed several guiding principles for practitioners that promote client empowerment, including recognizing that all clients
1. have resources available to them, both within themselves and their communities;
2. are members of the community and as such are entitled to respect and dignity;
3. are resilient by nature and have the potential to grow and heal in the face of crisis and adversity;
4. need to be in relationships with others in order to self-actualize; and
5. have the right to their own perception of their problems, even if this perception isn’t held by the practitioner.
Sullivan (1992) was one of the first theorists to apply a strengths-based approach to practice with clients experiencing chronic mental illness, encouraging practitioners to help clients recognize and develop their own personal strengths and abilities. This was a revolutionary approach, since the prevailing approach to working with the chronically mentally ill population was based on a medical model in which clients are viewed as sick and in need of a cure. Sullivan asserted that by redefining the problem and focusing on clients’ existing strengths and abilities rather than on their deficits, treatment goals became more consistent with the goals of early mental health reformers who sought to remove treatment barriers by promoting respectful, compassionate, and comprehensive care of the mentally ill.
In the human services field, using strengths-based approaches is empowering for both the practitioner and the client because we aren’t coming into clients’ lives presuming that they are sick and that we are the experts. Rather, we spend as much time looking for strengths as we do problems. Strengths-based approaches also enable us to partner with our clients in a way that encourages them to take more ownership of their journey toward increased self-sufficiency and more optimal functioning.
Multicultural and Diversity Perspectives
The United States is racially diverse, particularly in the country’s most populous cities, such as New York, Chicago, and Los Angeles. Such diversity offers many advantages, particularly an interesting blend of cultural activities and food. But racial diversity can also lead to conflict, oppression, marginalization, and social injustice, particularly when various cultural traditions conflict with or are misunderstood by the majority culture. Research on why so many White working-class men voted for Donald Trump for president in 2016, when many had previously identified as Democrats, found that many expressed concerns about increasing multiculturalism and worried that they were losing their place in the social hierarchy—a phenomenon called cultural backlash (Cox et al., 2017; Inglehart & Norris, 2016).
Multicultural human services practice emphasizes the importance of seeing people and communities as having unique cultures, ideologies, worldviews, and life experiences (Sue et al., 2015). Human services practitioners strive to understand the unique experiences of their clients and the communities within which they live, particularly clients who are members of historically marginalized groups. From an educational perspective, multicultural human services focuses on cross-cultural training, cultural competence, cultural sensitivity, and ethnic relations, including teaching about the nature and impact of White privilege (Akintayo et al., 2018).
But why do we need to use a multicultural lens? Aren’t we all human? Aren’t we all “one”? I mean, shouldn’t we be striving to be “color blind” and equal? The “we’re all members of the human race” and the “I don’t see color” narratives may sound good on the surface, but they’re actually quite dangerous because they negate the histories of oppression, discrimination, and disenfranchisement of many groups of people in the United States. These narratives are also often based on the premise that everyone in the country has an equal chance for success. The reality is that we do not have a level playing field in the United States. Some subpopulations—racial and ethnic minorities; indigenous people; religious minorities; immigrants; sexual minorities; women, particularly women of color; people with physical and intellectual disabilities; and other disadvantaged populations—have a long history of disparate treatment. Some examples include overt and covert racial discrimination, gender bias, ageism, ableism, disparate criminal justice laws, forced displacement, and environmental injustice.
I have integrated multicultural perspectives and theories throughout this book, using them as lenses to explore the roles and functions of human services professionals and the clients they serve. I’ve also included special sections within each chapter to highlight and more deeply explore social problems and the populations they impact, such as how White privilege (individual and systemic) has impacted diverse populations, or how people of color have figured prominently among leaders in the fight for social justice, reflecting the resiliency and strength of various marginalized populations. I do both—infuse and highlight content—as a way of calling attention to historic and current systemic injustice and the impact these injustices have on diverse populations. My goal is also to highlight the resiliency and many other strengths of historically marginalized people who have far too often been portrayed solely from a deficit perspective.
A Word About Terminology
Words are important because they reflect meaning and intent. For this reason, I have taken a lot of care in the selection of terminology used in this book. For instance, when selecting labels to describe various populations, I have deferred to the preferences of the populations themselves. When exploring lesbian, gay, bisexual, transgender, and queer/questioning (LGBTQ) issues, I’ve used terminology recommended in GLAAD’s Media Reference Guide (GLAAD, 2016). I’ve also consulted surveys of target populations to ensure I am using their preferred terminology, such as a recent poll that found that only 2% of the U.S. Hispanic population preferred the term Latinx (ThinkNow, 2020). Thus, despite the popularity of this label in academic circles, and based on the growing controversy surrounding its use, I will be using either Hispanic or Latina/o depending on use and context.
I use the term people of color when my goal is to use inclusive language in reference to a broad range of racial and ethnic minority populations, but I use more specific terminology when I am referring to a particular group. Throughout the book I use the term Black rather than African American in response to the longtime push by Black scholars and advocates (Tharps, 2014). Additionally, I am following the lead of the Brookings Institution in capitalizing the “B” in Black as an act of racial respect, in response to centuries of White Americans refusing to capitalize any reference to the Black population out of a sense of superiority (Lanham & Liu, 2019).
When referencing indigenous populations, I use the most specific terminology possible. I use the term Native American in reference to broader indigenous populations in the continental United States, Alaska Natives when referencing indigenous populations in the state of Alaska, and Native Hawaiian and/or other Pacific Islanders when referencing indigenous populations in Hawaii, Tonga, the Marshall Islands, and Fiji.
Words and terminology are important and meaningful. They can be used to bring people together or to divide them. My goal is to be specific and inclusive in my choice of words, respecting the wishes of the populations I am writing about when there is a consensus and an authoritative reference. Despite my thoughtful approach, though, there is bound to be some controversy surrounding the words I and other authors use to describe people, communities, and movements, particularly when so many of the people written about in books like this one have been subjected to labels specifically designed to harm them for so many years. It is important to me that the readers of this book know that I have invested time and energy in being respectful in my choice of words, and yet, if I get it wrong, please reach out and let me know.
Conclusion
The human services profession is generalist, meaning that we work with a wide range of people experiencing a wide range of challenges. Human services professionals practice in numerous settings, such as schools, hospitals, advocacy organizations, faith-based agencies, government agencies, hospices, prisons, and police departments, as well as in private practice if they have advanced degrees and an appropriate professional license. The nature of human services interventions is wholly dependent on the specific practice setting delivering the services. In other words, intervention strategies and approaches are contextually driven. For instance, let’s assume you work with children in a school setting and your colleague works with children in a hospice setting. Certainly, there will be some similarities, particularly if the children are in a similar age range, but for the most part your jobs will be quite different, utilizing different skill sets and intervention strategies to deal with significantly different psychosocial issues.
It would be difficult to present an exhaustive list of practice settings due to the broad and often very general nature of the human services profession. Sometimes practice settings target specific social issues (e.g., domestic violence, homelessness, child abuse), sometimes a specific target population is the focus (e.g., older adults, the chronically mentally ill), and sometimes practice settings target a specific area of specialty (e.g., grief and loss, marriage and family). Regardless of how we choose to categorize the various fields within human services, it is imperative that this career be examined and explored contextually in order to accurately capture the nature of the work performed by human services professionals, the range of psychosocial issues experienced among various client populations, and the career opportunities available to human services professionals within each practice setting.
For the purposes of this book, the roles, skills, and functions of human services professionals are explored in the context of particular practice settings, as well as areas of specialization within the generalist human services field—general enough to cover as many functions and settings as possible within the field of human services but narrow enough to be descriptively meaningful. The role of the human services professional is examined by exploring the history of the practice setting, the range of clients served, the psychosocial issues most commonly encountered, the modes of service delivery, the nature of case management, the level of practice (e.g., micro, mezzo, or macro), and the most common generalist intervention strategies used within a particular area. The practice settings explored in this book are child and youth services; aging; mental health; housing; health care and hospice; schools; faith-based agencies and spirituality; violence, victim advocacy, and corrections; and international practice.
Summary
· A working definition of the human services professional is developed that identifies key reasons why people may need to use a human services professional. The nature of the human services profession was explored, providing comparisons and distinctive aspects of the human services profession compared to other helping fields. A range of social problems and individual challenges that may lead to people needing the services of a human services professional is explored. The nature of vulnerability and how social conditions often render some populations more at risk of needing assistance to overcome various challenges are also explored.
· The role of the Council on Standards for Human Service Education (CSHSE) and the National Organization for Human Services (NOHS) is described. The function and purpose of the professional organizations that monitor and support the human services profession, including educational standards, state licensure, and professional certification, are also explored.
· The rationale for the scope and parameters of human services professional functions and competencies is described. The roles, functions, and scope of human services professionals engaging in practice on micro and macro levels are described.
· Key theoretical frameworks used in the human services discipline are applied to real scenarios. The foundational theoretical approaches most often used in the human services discipline, including systems theory, self-actualization, and strengths-based approaches, are explored.
· An introduction to multicultural and diversity perspectives is provided with a particular focus on the importance of multicultural training and education related to cultural competence, cultural sensitivity, and ethnic relations, including teaching about the nature and impact of White privilege.
· Clarification on terminology used in the book provides an important basis for understanding the importance of words and labels, particularly when they apply to at-risk populations that have been subjected to historic and current marginalization and oppression. Identifying the rationale for terminology used in the book reflects efforts taken by the author to be sensitive to and help diminish the stigmatization associated with certain populations and social problems.
Poor Care in Europe
The Feudal System of the Middle Ages
A good place to begin this examination is the Middle Ages, from about the 11th to the 15th centuries, when a sociopolitical system called feudalism prevailed as England’s primary method of caring for the poor. Under this elitist system, privileged and wealthy landowners would parcel off small sections of their land, which would then be worked by peasants (also called serfs). Many policy experts consider feudalism a governmentally imposed form of slavery or servitude because individuals became serfs through economic discrimination (Trattner, 2007).
Serfs were commonly born into serfdom with little hope of ever escaping, and as such they were considered the legal property of their landowner, commonly called a lord. Although lords were required to provide for the care and support of serfs in exchange for farming their land, lords had complete control over their serfs and could sell them or give them away as they deemed fit (Stephenson, 1943; Trattner, 2007). Despite the seeming harshness of this system, it did provide insurance against many of the social hazards associated with being poor, a social condition considered an inescapable part of life, particularly for the lower classes. Many economic and environmental conditions led to the eventual decline of the feudal system from the mid-14th century through its legal abolition in 1660, including several natural disasters that resulted in massive crop failures, the bubonic plague (also called the Black Death), various political dynamics, social unrest, and urbanization due to the development of trade and towns.

Officially, poor relief during the Middle Ages was the responsibility of the Catholic Church, primarily facilitated through the monasteries and local parishes. Catholic bishops administered poor care through the support of mandatory taxes, or compulsory tithing. Poverty was not seen as a sin; in fact, the poor were perceived as a necessary component of society, in that they gave the rich an opportunity to show their grace and goodwill through the giving of alms to the less fortunate. Thus, caring for the poor was perceived as a noble duty that rested on the shoulders of all those who were able-bodied. Almost in the same way that evil was required to highlight good, according to biblical scripture and Catholic theology, poverty was likewise necessary to highlight charity and goodwill as required by God (Duncan & Moore, 2003; Trattner, 2007).

Poor Laws of England: 1350 to 1550
Many economic and environmental conditions led to the eventual phasing out of the feudal system between 1350 and 1550, including health and natural disasters (such as the bubonic plague and massive crop failures). Increased demand for factory wage labor in the cities led to droves of people moving to growing cities to work in factories. Mass urbanization led to freedom from serfdom for the poorest members of English society, but it also generated a vacuum in how poverty was managed, creating the necessity for the development of England’s earliest poor laws (Trattner, 2007).
These gradual shifts in how poverty was managed also led to a shift in how poverty was perceived. During the Middle Ages, poverty was seen as an inescapable condition people were born into. People were either lords or serfs, rich or poor. But after the dismantling of the feudal system, when people were migrating to the cities, the poor were often nameless, faceless strangers living in the city slums or on the streets. They were often from different countries, spoke different languages, ate unfamiliar foods, and behaved in ways that departed from traditional English societal norms. Thus, it became easier to blame the poor for their lot in life (Martin, 2012; Trattner, 2007).
The increasingly impersonal nature of caring for the poor, as well as the complexities of life in cities, ultimately led to the incorporation of punitive measures into poor relief policy to control begging and vagrancy and decrease crime in the cities. Also, during this time period about one-third of the English population was poor, prompting the need for a complete overhaul of the social welfare system (Trattner, 2007). England responded to these changing dynamics and the associated problems by passing several relief laws, called Tudor Poor Laws, between the mid-1500s and 1601. Tudor Poor Laws placed responsibility for dealing with the poor at the local level and reflected a complete intolerance of idleness. Local police scoured the cities in search of beggars and vagrants, and once found, a determination was made between those who could not work—the worthy poor—and those who were able-bodied but refused to work—the unworthy poor (Beier, 2003).
Legislative guidelines typically stipulated that only pregnant women, individuals who were extremely ill, or any person over the age of 60 were considered justifiably poor; thus, they were treated more leniently, including receiving government authorization to beg (typically in the form of a letter of authorization). In some cases, the poor were given other forms of sustenance in addition to authorized begging, such as food and medicine. But, if an able-bodied person was found to be unemployed, they were considered vagrant and were punished in a variety of harsh ways, including whippings, being paraded through the streets naked, being returned to the town of their birth, or incarceration. Repeat offenders were often subjected to having an ear cut off or were even put to death (Chambliss, 2017; Trattner, 2007).
Clearly, there was no sympathy to be had for individuals, male or female, who were deemed capable of working but found themselves without a job or any means of support. Additionally, little consideration was given to social or economic dynamics or what is now referred to as the cycle of poverty. What is even more surprising is that little sympathy was extended even to children, particularly adolescents who were unparented and found begging in the streets. In fact, district officials often took these children into custody, placing them into apprenticeship programs or almshouses and subjecting them to what we would now consider child slavery (Trattner, 2007).

The Elizabethan Poor Laws of 1601
The Tudor Poor Laws were replaced by the Elizabethan Poor Laws of 1601, a set of laws that established a system of poor relief in England and Wales. The Elizabethan Poor Laws of 1601 reflected an organized merging of England’s earlier, sometimes conflicting and erratic, social welfare legislation. The Elizabethan Poor Laws of 1601 formalized many of the driving principles rooted in the Tudor Poor Laws, including the belief that the primary responsibility for provision of the poor resided with one’s family, that poor relief should be handled at the local level, that vagrancy was a criminal offense, and that individuals should not be allowed to move to a new community if unable to provide for themselves financially.
It was quite common for community members to bring charges against others if it could be proven that they had moved into the district within the last 40 days and had no means to support themselves. Such individuals would be charged as vagrants by local officials and returned to their home districts. The underlying notion was that local parishes didn’t mind supporting individuals who had fallen on hard times after years of paying taxes and contributing to society, but they didn’t want to be forced to support strangers who came to their district for the sole purpose of receiving aid. The Elizabethan Poor Laws of 1601 served as the foundation for social welfare legislation in colonial America, and elements of residency requirements can be found in current U.S. welfare policy.
During this time period in England there were generally two types of charitable provision: indoor relief and outdoor relief. Indoor relief was provided for the unworthy poor—those deemed able-bodied but who did not work (vagrants, indigents, and criminals). Indoor relief consisted of mandatory institutionalization in workhouses or poorhouses, where residents were forced to work. Workhouses were designed to be harsh, with the hope that they would serve as a deterrent for those individuals who lacked the skill or desire to work and become self-sufficient. Outdoor relief consisted of money, clothing, food baskets, and medicine, provided in the homes of those who were considered the worthy poor, most often widows, the disabled, and the aged (Jones, 1969; Slack, 1990).
The History of Poor Care During the Colonial Era
Life in colonial America offered tremendous economic opportunities as well as significant hardship related to life on the frontier. Many immigrants were quite poor to begin with, and the long and difficult ocean voyage to the New World often left them unprepared for the rigors of life in America. Thus, even though colonial America offered many opportunities not available in the “Old World,” such as land ownership and numerous vocational opportunities, many of the social ills plaguing new immigrants in their homeland followed them to North America.

Colonial America: 1607 to 1775
English and Scottish colonization of North America began in 1607 in Virginia and continued through most of the 1700s until independence. Because there was no existing infrastructure in the original 13 British colonies, poor relief consisted primarily of mutual kindness, family support, and distant help from England. Self-sufficiency was a must, and life was not easy on the frontier. There was a dramatic increase in the population during the 75 years before independence, increasing from 250,000 settlers in 1700 to an estimated 2.5 million in 1775! And, as the population increased, so did the need for a more formal and organized system of poor care.
Poor Care in the Industrial and Progressive Eras: 1776 to 1920s
After independence in 1776, poor care remained minimal, consisting primarily of free land grants primarily for White settlers, pensions for widows, and aid to disabled veterans. There was very little formal social welfare legislation passed at the state or federal levels until the early 1900s. And even those early laws provided only minimal benefits for some groups of children and the disabled. One of the first federal social welfare efforts was the Civil War Pension Program passed in 1862, which provided aid to Civil War Veterans and their families. Unemployment benefits were offered in most states by about 1929, and a program offering veterans benefits, consisting primarily of medical aid, was instituted after World War I.
The Great Depression, beginning in 1929, marked the first time the federal government recognized the need for a national social welfare system, but the nature of provision from the 1800s through the early 1900s was highly influenced by philosophical and religious belief systems that presumed to explain why poverty and other social ills existed. These ideologies in turn influenced how the leaders of early American society believed poverty should be addressed. Two philosophies that have strongly influenced the development of social welfare policy in the United States, and perceptions of those who are in need, are John Calvin’s Protestant theology, specifically his doctrine of predestination, and philosopher Herbert Spencer’s social Darwinism (explored in the next section).

Calvin’s doctrine of predestination emanated from the Protestant Reformation in the 16th century. Calvin wrote about the nature of God’s control over the world and how this control was exercised, primarily in the form of who God would allow into heaven (the elect) and who he would condemn to hell (the condemned). According to Calvin’s doctrine, a person’s salvation was predestined by God and based solely on God’s grace, not on what people did in their lives (whether they were essentially good or bad). Thus, even though all people were called to faith and repentance, not all people would be allowed into heaven.
Even though many Protestants rejected Calvin’s concept of predestination, including Lutherans and Methodists, Calvin’s doctrine became embedded in early American society in a number of ways. In his book The Protestant Ethic and the Spirit of Capitalism, Max Weber described in detail the vast influence of Calvin’s doctrine on European and American society. According to Weber, Calvin theorized that since everyone deserved to go to hell anyway, that was the lot they should accept, and those who were saved from condemnation were blessed by the grace of God. Human action in an attempt to secure one’s own salvation (through works) was futile, since one’s eternal fate rested not on human goodness but on God’s mysterious desire and will (Weber, 1905/1958). Roman Catholic theology, which previously influenced poor care, recognized the omnipotence of God in matters of salvation, but also acknowledged that people had free will and choice and could elect to walk with God and have everlasting life by following his commandments.
According to Weber, the Calvinists accepted the concept of predestination, but did not accept that there was no way to determine who was saved and who was condemned, since privilege and participation in society were based in large part on separating people into two categories: those who were godly and those who were not. For instance, only God’s faithful were allowed to become members of the church, receive communion, and enjoy other benefits of salvation, including societal respect. Determining that one was condemned to hell, not because of anything that person necessarily did, but because of God’s mysterious determination, became a common form of social exclusion.
In time, particular behaviors and conditions became indicators—or signs—of one’s eternal fate. For instance, hard work (what Weber referred to as the Protestant work ethic) and good moral conduct (the ability to deny worldly pleasures in pursuit of purity) became signs of the elect, since it was believed that God blessed the elect by giving them a vocation, and only the “elect” were granted the ability to be pure (Weber, 1905/1958). In other words, those who could not work for any reason, even through no fault of their own, were perceived in society to be condemned, because they had not been bestowed a vocation.
A “catch-22” with regard to living a pure life was that it was the privileged members of society who determined what was considered “pure.” For instance, church attendance was a requirement of purity, but only members of the elect were permitted to join the church, and the remainder were excluded, which was then used as an indicator that they were not pure, and thus not a member of the elect. Even if the poor and suffering had a voice and could protest the paradoxical reasoning behind the signs, according to Calvin, everyone deserved to be condemned anyway, thus there was simply nothing to complain about (Hudson & Coukos, 2005; Weber, 1905/1958).
The influence of the Protestant work ethic and Calvin’s doctrine of predestination on U.S. society as a whole, and specifically on the poor, was significant, extending well beyond religious communities (Kim, 1977). With hard work, material success, and good moral conduct serving as the best signs of election to salvation, it did not take long for poverty and presumed immoral behavior (remember, it was presumed that only the elect had the spiritual fortitude to behave morally) to become clear indications of one’s condemnation (Chunn & Gavigan, 2004; Gettleman, 1963; Hudson & Coukos, 2005; Kim, 1977; Schram et al., 2008; Tropman, 1986; Weber, 1905/1958).
Early Social Work Movements
Charity Organization Societies: 1870 to 1893
The Charity Organization Society (COS) is often considered one of the forerunners of the modern social services profession and marked one of the first organized efforts within the United States to provide charity to the poor. The COS movement began in England in 1869 in response to increased urbanization and immigration and widespread frustration with the existing welfare system, which consisted primarily of disorganized and chaotic almsgiving. The COS movement was started by Rev. S. Humphreys Gurteen, who believed that it was the duty of good Christians to provide an organized and systematic way of addressing the plight of the poor in a manner that would increase self-sufficiency and personal responsibility. Gurteen and his colleagues strongly believed that giving alms indiscriminately, and without conditions, encouraged fraud and abuse, as well as laziness among those receiving the help.

The first COS was founded in Buffalo, New York, in 1877 and served as a sort of umbrella organization for other charities by assisting in the coordination and oversight of relief services to the poor (Schlabach, 1969). The COS concept of organized and systematic provision quickly spread to large cities across the nation, and by 1890 over 100 cities had at least one COS serving the community (Wahab, 2002). The COS philosophy focused on achieving self-sufficiency and reducing dependence. Therefore, outdoor relief, such as cash assistance, was discouraged because it was considered harmful to the beneficiary, based upon the belief that material relief would encourage dependence and laziness, thus ultimately increasing poverty (Gettleman, 1963; Kusmer, 1973). In this respect, the COS retained the concepts of the worthy and unworthy poor.
The COS practiced what was called scientific charity, which involved intelligent giving, embracing the notion that charity should be natural, not artificial (Gettleman, 1963; Leiby, 1984). Natural giving was both spontaneous and informal and was drawn from the philosophies advanced by Thomas Chalmers, a Scottish political economist and member of the clergy. Chalmers made a distinction between “natural charity” and “artificial charity,” where the former was based on what he called the “four fountains of charity”: (1) people’s willingness to help themselves, (2) the willingness of families to help, (3) the willingness of neighbors to help, and (4) the willingness of wealthy people to contribute to their community. Chalmers believed that “natural charity” was far less likely to involve fraud, whereas “artificial charity,” involving more organized forms of giving by churches and the government, had a far greater likelihood of being abused by both the giver (e.g., politicians) and the beneficiaries.
Based on this ideology, COS leaders were highly suspicious of organized giving; while they believed in the importance of charity, they wanted to root out fraud by coordinating the often haphazard and disorganized giving of alms to the poor, as well as create relationships with those in need (typically single women) to determine the individual cause of their poverty (Gettleman, 1963). According to COS philosophy, poverty was almost always caused by laziness, drinking alcohol, and spending too much money (Rauch, 1975). COS directors employed friendly visitors, an early version of caseworkers, to visit the homes of aid applicants, diagnose the reasons for their poverty, and, if possible, develop a case plan to alleviate their poverty (Rauch, 1975; Trattner, 2007). Because poverty was defined as an individual problem, and because most aid recipients were women, excessive focus was placed on sexual morality, with the goal of modeling appropriate moral behavior (O’Neill, 2016). Since material relief was discouraged, most friendly visitors offered only sympathy, encouragement, and guidance on how to seek employment, with minimal financial assistance (Wahab, 2002).
The COS movement was highly influenced by Calvinism, but also by another sociopolitical ideology called social Darwinism, which involved the application of Charles Darwin’s theory of natural selection to the human social world. Darwin’s theory, developed in the mid-19th century, was based on the belief that environmental competition—a process called natural selection—ensured that only the strongest and most fit organisms would survive (allowing the biologically fragile to perish), thus guaranteeing the successful survival of a species (Darwin, 1859/2009). Social Darwinists applied Darwin’s theory to humans and the social world in an attempt to provide naturalistic explanations for various phenomena in human social life (Weikart, 1998).
One of the most influential social Darwinists was Herbert Spencer, an English sociologist and philosopher who coined the term
“survival of the fittest” (a term often incorrectly attributed to Darwin) in reference to the importance of human competition for resources in securing the survival of what were considered the fittest members of society (Hofstadter, 1992). Spencer was a fierce opponent of any form of government intervention or charity on behalf of the poor and disadvantaged, arguing that such interventions would interfere with the natural order, thus threatening society as a whole. Although Spencer’s theory of social superiority was developed in advance of Darwin’s theory, his followers relied on Darwin’s theory of natural selection for scientific validity of social Darwinism.
The fatalistic nature of the concept of predestination, the Protestant work ethic, and social Darwinism became deeply embedded in U.S. religious and secular culture and were used to justify a laissez-faire approach to charity throughout most of the 19th and 20th centuries (Duncan & Moore, 2003; Hofstadter, 1992). Although the specific tenets of these ideologies may have softened over the years, the significance of hard work, good fortune, material success, and living a socially acceptable life has remained associated with special favor and privilege in life, whereas poverty and disadvantage have remained associated with presumed weak character, laziness, and immoral behavior. Leaving the poor and disadvantaged to their own devices was perceived as nothing more than complying with God’s (or nature’s) grand plan (Duncan & Moore, 2003). Remnants of these doctrines and philosophies can still be seen in contemporary approaches to helping the poor and disadvantaged, and continue to influence the development of legislation in the United States, as well as people’s attitudes about poverty and the poor (Chunn & Gavigan, 2004; Duncan & Moore, 2003; Gettleman, 1963; Hudson & Coukos, 2005; Kim, 1977; Schram et al., 2008; Tropman, 1986).
The social hierarchy espoused by social Darwinists was reflected in the philosophical motivation of COS leaders, often the community’s wealthiest and most religious members, who agreed to provide charity to the poor as long as the poor remembered their proper place in society (Gettleman, 1963). Yet even the deserving poor did not escape the influence of the Protestant work ethic or the fatalism of social Darwinism, both of which were deeply embedded in COS culture. For example, friendly visitors often focused excessively on the sexual behavior of the women they helped. The COS viewed immorality as the primary problem in most slums, believing that the women living in the slums (many of whom were single mothers) were weak and fallen, having succumbed to the charms and sexual advances of male suitors (Wahab, 2002). Friendly visitors would often use the guise of friendship to connect to these women, hoping they could influence them through modeling the value of becoming a good Christian woman. Many COS “friendly visitors” even went so far as to ask neighbors to monitor the younger women in the slums and report back on any male visitors (Wahab, 2002).
The principles of the Protestant work ethic and social Darwinism, with their focus on hard work, self-sufficiency, and natural selection, were clearly reflected in various speeches and writings of COS leaders. Common themes included arguments that even widows would become lazy if too much help was given, and life was made too easy for them. Many COS leaders also argued that providing charity to the unemployed, able-bodied poor was actually immoral since, according to natural selection, this population was destined to perish, and providing them charity only prolonged their suffering and was therefore in neither their nor society’s best interest (Gettleman, 1963). Despite clear indications that the COS movement was influenced by the ideologies of the Protestant work ethic and social Darwinism, Leiby (1984) points out that many of the early COS leaders and volunteers, while Christians and members of society’s upper classes, were committed reformers who perceived charity as a form of much-needed love—a concept that contradicted the social Darwinists’ noninterventionist approach.
Mary Richmond, the general secretary of the Baltimore COS, is an example of a committed reformer. Richmond was a fierce advocate for social justice and social reform and believed that charities could employ good economics and engage in compassionate giving at the same time. Richmond became well known for increasing public awareness of the COS movement and for her fundraising efforts. Richmond’s compassion for the poor was likely due to her own experience with poverty as a child. Richmond was orphaned at the age of two and then later abandoned by her aunt, who left Richmond to fend for herself in New York when she was only 17 years old. Thus, Richmond no doubt understood the social components of poverty, and how factors outside of people’s control could have a devastating impact on their lives. Richmond is credited with contributing to the development of the modern case management model through her conceptualization of
social diagnosis, a process involving friendly visitors assessing clients and their environments.
Social diagnoses enabled the visitor to identify sources of strength and barriers to self-sufficiency (Kusmer, 1973; Richmond, 1917).
Despite the general success of the COS and the contributions the movement made to professionalizing the helping fields, its adherence to deterministic philosophies that negated social factors of poverty while pathologizing the poor deepened the belief that the poor were to blame for their lot in life. In retrospect, one can recognize the naiveté of believing that poverty could be controlled merely through moral behavior. But the country was about to learn a very hard collective lesson during the Depression era—one that immigrants, many ethnic minority groups, and single mothers had known for years—that sometimes conditions exist that are beyond one’s control, creating immovable barriers to economic self-sufficiency.

Jane Addams and the Settlement House Movement: 1889 to 1929
During the same time that the COS “friendly visitors” were addressing poverty in the slums by focusing on personal morality, Jane Addams was confronting poverty in a vastly different way—by focusing on social injustice. Addams was a social justice advocate and a social reformer who started the
settlement house movement in the United States with the opening of the Hull House in Chicago. Addams considered the more religiously oriented COS movement rather heartless because most COS leaders were more concerned with efficiency and controlling fraud than with alleviating poverty (Schneiderhan, 2008). Addams used a relational model of poverty alleviation based on the belief that poverty and disadvantage were caused by problems within society, not idleness and moral deficiency (Lundblad, 1995). Addams advocated for changes within the social structure of society in order to remove barriers to self-sufficiency, which she viewed as an essential component of a democracy (Hamington, 2005; Martin, 2012). In fact, the opening of the Hull House, the first settlement house in the United States, was considered the beginning of one of the most significant social movements in U.S. history. Addams was born in Cedarville, Illinois, in 1860. She was raised in an upper-class home where education and philanthropy were highly valued. Addams greatly admired her father, who encouraged her to pursue an education at a time when most women were destined solely for marriage and motherhood. She graduated from Rockford Female Seminary in 1881, the same year her father died. After her father’s death, Addams entered Woman’s Medical College in Pennsylvania but dropped out because of chronic illness. Addams had become quite passionate about the plight of immigrants in the United States, but due to her poor health and the societal limitations placed on women during that era, she did not believe she had a role in social advocacy.
The United States experienced another significant wave of immigration between 1860 and 1910, with 23 million people emigrating from Europe, including Eastern Europe. Many of these immigrants were from non–English-speaking countries, such as Italy, Poland, Russia, and Serbia, and were very poor. Unable to obtain work in the skilled labor force, many immigrants were forced to work in unsafe urban factories and live in subhuman conditions, crammed together with several other families in tenements. For instance, New York’s Lower East Side had approximately 330,000 inhabitants per square mile (Trattner, 2007). With no labor laws for protection, racial discrimination and a variety of employment abuses were common, including extremely low wages, unsafe working conditions, and child labor. Poor families, particularly non–English-speaking families, had little recourse, and their mere survival depended on their coerced cooperation.
Addams was aware of these conditions because of her father’s political involvement, but she was unsure of how she could help. Despondent about her father’s death and her failure in medical school, as well as her ongoing health problems, Addams and her friend Ellen Gates Starr took an extended trip with friends to Europe where, among other activities, she visited Toynbee Hall settlement house, England’s response to poverty and other social problems. Toynbee Hall served as a neighborhood welfare institution in an urban slum area, where trained settlement house volunteers worked to improve social conditions by providing community services and promoting neighborly cooperation.
The concept of addressing poverty at the neighborhood level through social and economic reform was revolutionary. Rather than monitoring the behavior of the poor through intermittent visits, settlement house workers lived right alongside the immigrant families they endeavored to help. In addition to providing a safe, clean home, settlement houses also provided poor immigrants with comprehensive care such as assistance with food, health care, English language lessons, child care, and general advocacy. The settlement house movement had a mission of no longer distinguishing between the worthy and unworthy poor, and instead recognizing the role that society played in the ongoing plight of the poor—a stance that was a departure from the traditional charity organizations.
Addams and Starr returned home from Europe convinced that it was their duty to do something similar in the United States, and with the donation of a building in Chicago, the Hull House became America’s first settlement house in 1889. Addams and her colleagues lived in the settlement house, in the middle of what was considered a bad neighborhood in Chicago, offering services targeting the underlying causes of poverty such as unfair labor practices, the exploitation of non–English-speaking immigrants, and child labor. The Hull House quickly became the social hub for residents who gathered in the Hull House café, and was also the informal headquarters for many of Addams’ social advocacy efforts, which ranged from advocating for women’s suffrage to advocating for racial equality (e.g., advocating against the extrajudicial lynching of Black men), to child labor laws, to global peace efforts (Knight, 2010). Addams’ influence on American social welfare policy was significant, in that her work represented a shift away from the fatalistic perspectives of social Darwinism and the religious perspectives of Calvin’s Reformed theology. Instead, Addams highlighted the need for social change so that barriers to upward mobility and optimal functioning could be removed (Martin, 2012). Addams and her colleagues were committed to viewing the poor as equal members of society, just as worthy of respect and dignity as anyone else. Addams clearly saw societal conditions and the hardship of immigration as the primary cause of poverty, not necessarily one’s personal moral failing. Social inequality was perceived as the manifestation of exploitation, with social egalitarianism perceived as not just a desirable but an achievable outcome (Lundblad, 1995; Martin, 2012). Addams’ focus on social inequity was reflected in her tireless lobbying for the passage of child labor laws (despite fierce opposition by corporations and conservative politicians).
Addams also advocated on a local and national level for labor laws that would protect the working-class poor, who were often exploited in factories with
sweatshop conditions. She also worked alongside Ida B. Wells, confronting racial inequality in the United States, such as the extrajudicial lynching of Black men (Addams, 1909). Although there are no working settlement houses today, the prevailing concepts espoused by this movement, with its focus on social components of poverty and disadvantage, remain foundational to the human services and social work professions, and also serve as the roots of today’s urban neighborhood centers. Yet, despite the overall success of the settlement house movement and the particular successes of Addams with regard to achieving social reform in a variety of arenas, the threads of moralistic and deterministic philosophies have remained strongly interwoven into American society, and have continued to influence perceptions of the poor and social welfare policy and legislation.
Ida B. Wells and the Fight Against Racial Oppression
The opening vignette is about one of the greatest social reformers in modern history—Ida B. Wells, a Black reformer and social activist whose campaigns against racial oppression and inequity laid the foundation for the civil rights movement of the 1960s. As referenced in the vignette, although legal slavery ended 6 months after her birth, Wells’ life was never free from the crushing effects of severe racial prejudice and discrimination. Her schooling was interrupted when she was orphaned at the age of 16, leaving her responsible for raising her five younger siblings. This experience not only forced her to grow up quickly but also seemed to serve as a springboard for her subsequent advocacy against racial injustice. The newspaper she owned was called
Free Speech, and she used this platform to write about matters of racial oppression and inequity, including the vast amount of socially sanctioned crimes committed against members of the Black community (Hamington, 2005).
The indiscriminate lynching of Black men was prevalent in the South during Wells’ lifetime and was an issue that Wells became quite passionate about. Black men were commonly perceived as a threat on many levels, and there was no protection of their personal, political, or social rights. The Black man’s reputation as an “angry rapist” was endemic in White society, and many speeches were given and articles written by White community members (including clergy) about this allegedly growing problem. For example, an article published in the mainstream newspaper in the South, the
Commercial, entitled “More Rapes More Lynchings,” cites the Black man’s alleged penchant for raping White women, stating:
The generation of Negroes which have grown up since the war have lost in large measure the traditional and wholesome awe of the white race which kept the Negroes in subjection… . There is no longer a restraint upon the brute passion of the Negro… . The facts of the crime appear to appeal more to the Negro’s lustful imagination than the facts of the punishment do to his fears. He sets aside all fear of death in any form when opportunity is found for the gratification of his bestial desires. (Davidson, 2008, p. 154)
Wells wrote extensively on the subject of the “myth of the angry Black man,” and the myth that all Black men raped White women (a common excuse used to justify the lynching of Black men) (Hamington, 2005). She challenged the growing sentiment in White communities that Black men, as a race, were growing more aggressive and “lustful” of White women, which she believed was prompted in part by the increasing number of biracial couples. The response to Wells’ articles was swift and harsh. A group of White men surrounded her newspaper building with the intention of lynching her, but when they could not find her, they burned down her business instead (Davidson, 2008).
Although this act of revenge essentially stopped her newspaper career, what it really did was motivate Wells even further. After the burning down of her business, Wells left the South and moved to Chicago, where she continued to wage a fierce anti-lynching campaign, often coordinating efforts with Jane Addams. She wrote numerous books and articles on racial inequality, challenging socially entrenched notions that all Black men were angry and violent sexual predators (Hamington, 2005). Wells and Addams worked as colleagues, coordinating their social justice advocacy efforts fighting for civil rights. Together, they ran the Chicago Association for the NAACP and worked collectively on a variety of projects, including fighting against racial segregation in schools (Martin, 2012; Wells, 2020).
The New Deal and Great Society Programs
In 1929, the stock market crashed, leading to a series of economic crises unprecedented in the United States. For the first time in modern U.S. history, large segments of the middle-class population lost their jobs and all means of income. Within a very short time, thousands of people who had once enjoyed financial security were suddenly without money, homes, and food. This served as a wake-up call for social reformers, many of whom had abandoned their earlier commitment to social activism during the preceding decades of economic prosperity. In response, many social reformers started pushing President Hoover to develop the country’s first comprehensive system of social welfare on a federal level.
Hoover was resistant, though, fearing that a federal system of social welfare would create dependency and displace the role of private and local charities. Hoover wanted to allow time for the economy to self-correct through the capitalist system and the market economy before intervening on a federal level. Hoover was a strong believer in the power of volunteerism, believing that everyday people could be convinced of the power of helping others, without coercion. He wanted to allow time for people to jump into action and help their neighbors, and for democracy and capitalism to self-correct before intervening with broad entitlement programs (McElvaine, 1993). But much of the country apparently did not agree with this plan. In 1932, Hoover lost his bid for reelection, and Franklin D. Roosevelt was elected as the country’s 32nd president. Roosevelt immediately set about creating changes in federal policy with regard to social welfare, promising dramatic changes, including sweeping reforms in the form of comprehensive poverty alleviation programs.
From 1933 through 1938, Roosevelt instituted a series of legislative reforms and domestic programs collectively referred to as the
New Deal programs. In his first 100 days in office, Roosevelt passed 13 legislative acts, including one that created the Civil Works Administration, which provided over a million temporary jobs to the unemployed; the Federal Emergency Relief Act, which provided direct aid and food to the unemployed (and was replaced by the Works Progress Administration in 1935); and one that created the Civilian Conservation Corps (CCC), which put thousands of young men ages 18 to 25 to work in reforestation and other conservation programs. Yet, as progressive as Roosevelt was, and as compassionate as the country had become toward the poor due to the realization that poverty could strike anyone, racism was still rampant, as illustrated by Roosevelt placing a 10% enrollment limit for Black men in the CCC program (Trattner, 2007).
By far the most famous of all programs in the New Deal were those created in response to the Social Security Act of 1935, which among other things created old-age pensions for all workers, unemployment compensation, Aid to Families with Dependent Children (AFDC), and aid to the blind and disabled. Programs such as the Federal Deposit Insurance Corporation (FDIC), which provided insurance for bank deposits, helped to instill a sense of renewed confidence in the banking system, and the development of the Securities and Exchange Commission (SEC), which regulates the stock market, helped to ensure that a crash similar to the one in 1929 would be unlikely to occur again. In total, Roosevelt created 15 federal programs as a part of the New Deal, some of which remain today, and some of which were dismantled once the crisis of the Great Depression subsided. Although some claim that the New Deal was not good for the country in the long run, it did pull the country out of a severe economic decline, providing relief for millions of Americans who could have literally starved had the federal government not intervened.
The United States recovered from the Great Depression and has since experienced several periods of economic growth and decline, but never any as severe as that which was prompted by the 1929 stock market crash. This is likely because of federal programs such as the FDIC and the creation of the SEC (and similar government agencies). In later times, though, the dismantling of some post-Depression financial regulations would contribute to yet another devastating economic downturn in 2007—perhaps not as severe as the Great Depression, but more serious and long lasting than any other recession experienced in the U.S. post-Depression era, particularly because of its global consequences.
The 1940s remained a time of general recovery and the 1950s was a relatively stable time, both economically and socially. Several laws were passed and agencies created that continued to advance the state of social welfare in the United States, including the creation of the U.S. Department of Health, Education, and Welfare in 1953 and the passage of the U.S. Housing Act of 1954 (Ch. 649, 68 Stat. 590).
The 1960s was a time of civil unrest and increasing rates of poverty, which spawned a resurgence of interest in social problems, including poverty and social injustice, particularly related to many at-risk populations, such as ethnic minority populations, older adults, and the mentally ill. For instance, President John F. Kennedy signed into law the Community Mental Health Centers Act (Pub. L. No. 88-164) on October 31, 1963, which transitioned the U.S. mental health care system from one of institutionalization to a community-based model. Kennedy was assassinated less than a month later, on November 22, 1963, and President Lyndon B. Johnson continued the Kennedy legacy with the introduction of the
Great Society programs—a set of social welfare programs designed to eliminate poverty and racial injustice.
Policy areas within the Great Society programs included civil rights, education, and poverty (later popularly referred to as Johnson’s
War on Poverty). Examples of some of the social welfare legislation and programs included under the umbrella of the Great Society are the Economic Opportunity Act of 1964 (Pub. L. No. 88-452); the Civil Rights Act of 1964 (Pub. L. No. 88-352); the Food Stamp Act of 1964 (Pub. L. No. 88-525); Medicare, Medicaid and the Older Americans Act of 1965 (Pub. L. No. 89-73); the Elementary and Secondary Education Act of 1965 (Pub. L. No. 89-10); the development of the U.S. Department of Housing and Urban Development (HUD); and the Voting Rights Act of 1965 (Pub. L. No. 89-110).
Whether the Great Society and the War on Poverty programs were successful in reducing poverty, racial discrimination, and other social problems continues to be debated to this day. It’s no surprise that conclusions tend to fall along party lines, with many conservatives complaining that Johnson’s social experiment amounted to nothing more than throwing money at oversimplified problems with disastrous results, and liberals arguing just the opposite—that most of the programs had the potential to be successful but were grossly underfunded (Zarefsky, 2005). Some point to racism as the reason why many Great Society programs were ultimately dismantled (Quadagno, 1994), while others point to the Vietnam War as the reason for shifting government (and societal) priorities (Zarefsky, 2005). Regardless, many of the programs remain and represent a time in history when there was increased recognition of structural barriers in society that can keep many people from functioning at their optimal level and achieving economic self-sufficiency.
Social Welfare in Contemporary United States
A Time of Recovery: 1970 to 1990
The 1970s and 1980s were a time of mixed reviews on welfare and welfare reform. There was considerable conservative backlash in response to what was considered a few decades of liberal social welfare legislation and entitlement programs, but despite President Nixon’s opposition to welfare, existing programs continued to grow. The mid-1970s through the 1980s was a boom time economically in the United States, and boom times typically mean that people become less sympathetic toward the plight of the poor. And that’s exactly what happened—there was a resurgence of earlier negative sentiments toward the poor beginning in the mid-1970s and peaking in the 1990s.
This increased negative attitude toward the poor was reflected in several studies and national public opinion surveys that indicated a general belief that the poor were to blame for their lot in life. For instance, a national survey conducted in 1975 found that the majority of those living in the United States attributed poverty to personal failures, such as having a poor work ethic, poor money management skills, a lack of any special talent that might translate into a positive contribution to society, and low personal moral values. When asked to rank several causes of poverty, subjects ranked social forces, such as racism, poor schools, and the lack of sufficient employment opportunities, the lowest of all possible causes of poverty (Feagin, 1975).
Ronald Reagan capitalized on this negative sentiment toward the poor during his 1976 presidential campaign when he based his platform in large part on welfare reform. In several of his speeches Reagan cited the story of the woman from the South Side of Chicago who was finally arrested after committing egregious welfare fraud. He asserted that she had 80 names, 30 addresses, and 12 Social Security cards, claiming that she was also collecting veteran’s benefits on four husbands, none of whom was real. He also alleged that she was getting Social Security payments, Medicaid, and food stamps, and was collecting public assistance under all of her assumed identities (Zucchino, 1999). While Reagan never mentioned the woman’s race, the context of the story as well as the reference to the South Side of Chicago (a primarily Black community) made it clear that he was referring to a Black woman—thus playing on the common stereotype of welfare users (and abusers) as being Black (Krugman, 2007). And with that, the enduring “Myth of the
Welfare Queen” was born.
Journalist David Zucchino (1999) attempted to debunk the myth that women on welfare were lazy and engaged in rampant fraud in his book
The Myth of the Welfare Queen, where he explored the realities of being a mother on welfare. He noted that despite the availability of ample factual information on public assistance utilization showing very low rates of welfare fraud, the image of the Black woman who drove a Cadillac while illegally collecting welfare under numerous false identities was so embedded in American culture that it was impossible to debunk the myth. Krugman (2007) also cites how politicians and media commentators have used the myth of the welfare queen to reduce sympathy for the poor and gain public support for welfare cuts, arguing that while covert, such images clearly play on negative racial stereotypes. They also play on the common belief in the United States that those who receive welfare benefits are poor because they are lazy, promiscuous, and generally immoral.
More recent surveys conducted in the mid-1990s revealed an increase in the tendency to blame the poor for their poverty (Weaver et al., 1995), even though a considerable body of research points to social and structural dynamics as the primary cause of enduring poverty. Examples of structural causes of poverty include a shortage of affordable housing, recent shifts to a technologically based society requiring a significant increase in educational and training requirements, longstanding institutionalized oppression of and discrimination against certain racial and ethnic groups, and a general increase in the complexity of life (Martin, 2012; Wright, 2000).
The general public’s perception of social welfare programs seems to be based in large part on this negative bias against the poor and the stigmas such bias creates. Surveys conducted in the 1980s and 1990s showed support for the general idea of helping the poor, but when asked about specific programs or policies, most respondents became critical of governmental policies, specific welfare programs, and welfare recipients in general. For instance, a 1987 national study found that 74% of those surveyed believed that most welfare recipients were dishonest and collected more benefits than they deserved (Kluegel, 1987).
Welfare Reform and the Emergence of Neoliberal Economic Policies: 1990 to Now
Political discourse in the mid-1990s reflected what is often referred to as economic
neoliberal philosophies—a political movement embraced by most political conservatives, espousing a belief that capitalism and the free market economy were far better solutions to many social conditions, including poverty, than government programs, which were presumed to be inefficient and poorly run. Advocates of neoliberalism pushed for social programs to be privatized based on the belief that getting social welfare out of the hands of the government and into the hands of private enterprise, where market forces could work their magic, would increase efficiency and lower costs. Market theory can be applied to many areas of the economy when there is competition among providers, a reliable workforce, clear goals, and known outcomes. Yet research has consistently shown the limits of neoliberalism, particularly in public services, including social welfare services, due to the complexity of human services issues, unknown outcomes, the need for a highly trained workforce, the lack of competition among providers, and other dynamics that make social welfare services so unique (King, 2007; Nelson, 1992; Van Slyke, 2003).
During the 1994 U.S. Congressional campaign, the Republican Party released a document entitled
The New Contract with America, which included a plan to dramatically reform welfare, and according to its authors, the poor would be reformed as well (Hudson & Coukos, 2005).
The New Contract with America was introduced just a few weeks prior to Clinton’s first mid-term election and was signed by all but two of the Republican members of the House of Representatives, as well as all of the GOP Congressional candidates. In addition to a renewed commitment to smaller government and lower taxes, the contract also pledged a complete overhaul of the welfare system to root out fraud and increase the poor’s commitment to work and self-sufficiency.
Hudson and Coukos (2005) note the similarities between this political movement and the movement 100 years before, asserting that the Protestant work ethic served as the driving force behind both. Take, for instance, the common arguments for welfare reform (policies that reduce and restrict social welfare programs and services), which have often been predicated on the belief that (1) hardship is often the result of laziness; (2) providing assistance will increase laziness (and thus dependence), hence increasing hardship, not decreasing it; and (3) those in need often receive services at the expense of the working population. These arguments were cited during the COS era as reasons why material support was ill-advised.
One of the more stark (and relatively recent) examples of this sentiment was expressed by Rep. John Mica, a congressman from Florida, when he stood on the U.S. House floor, holding a sign that read “Don’t Feed the Alligators” while delivering an impassioned speech in support of welfare reform. During hearings on the state of public welfare in the United States, Rep. Mica compared people on welfare to alligators in Florida, stating that the reason for such signs is because “unnatural feeding” leads to dependency and will cause the animal to lose its natural desire for self-sufficiency. Mica argued that welfare programs have done the same for people, creating subsequent generations of enslavement and servitude (Lindsey, 2004).
While there may be some merit in debating the most effective way of structuring social welfare programs, arguments such as Mica’s negate the complexity of poverty and economic disadvantage, particularly among historically marginalized populations. They also play into longstanding stigmas and negative stereotypes that portray the poor as a homogenous group with different natures and characters than mainstream working society. These types of narratives also reflect the genderized and racialized nature of poverty, contributing to institutionalized gender bias and racism (Seccombe, 2015).
Whether veiled or overt, negative bias, particularly that which is bestowed on female public welfare recipients of color, negates the disparity in social problems experienced by Black women and other women of color (El-Bassel et al., 2009; Martin, 2012; Siegel & Williams, 2003). Negative stereotypes and myths also provide a false picture of welfare recipient demographics by implying that the largest demographic of beneficiaries is Black single women with numerous children, which statistics do not support.

PRWORA of 1996, TANF, and Other Programs for Low-Income Families
A Republican Congress may have initiated welfare reform, but it was signed into law by the Democratic Clinton administration in the form of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996. This bipartisan effort illustrated the wide support for welfare reform as well as for the underlying philosophical beliefs about what causes poverty and what types of poverty alleviation methods are effective.
The social welfare program authorized under PRWORA of 1996 is called the Temporary Assistance for Needy Families (TANF) program, which replaced Aid to Families with Dependent Children (AFDC). TANF is operated at the state level through federal block grants as well as state funding. According to PRWORA, TANF has four primary goals: (1) to provide help to needy families and their children; (2) to promote job preparation, employment, and marriage so that families no longer need to depend on government assistance; (3) to reduce out-of-wedlock births; and (4) to encourage two-parent families.
Initially, TANF listed 12 different categories of acceptable work activities, but in 2008 the federal government provided additional clarity in terms of what activities would count toward TANF’s work requirement in each category. Among the 12 categories, nine are considered “core,” which means they directly count toward the required number of hours per week. Three of the categories are considered “non-core” and count only after the required hours for core activities are met. The nine core work activities include unsubsidized work, subsidized work, work experience, on-the-job training, job searches, job readiness, community service, vocational education, and providing child care to anyone participating in community service. Non-core activities include employment-related education, job skills training, and attendance at a high school or GED program.
TANF benefits include modest cash assistance for basic needs; transitional services focused on self-sufficiency, such as vocational training, rehabilitation, and child care; substance abuse, mental health, and domestic violence screening and referrals; medical assistance through a government-funded health insurance program; and Supplemental Nutrition Assistance Program (SNAP) benefits (formerly called food stamps).
States have considerable latitude in how to meet the four goals of TANF as well as how to deliver the benefits, as long as their programs remain within federal guidelines. Guidelines include time limits, which are not to exceed 60 months of lifetime benefits (in most cases); eligibility restrictions, which bar immigrants who have lived in the United States for less than 5 years as well as all undocumented immigrants; and work requirements of at least 30–35 hours per week for two-parent families and 20 hours per week for single parents with young children. Parents who fail to comply with the work requirement experience sanctions such as the termination of all family benefits. Approved work activities include subsidized and unsubsidized work at a for-profit or not-for-profit organization and can also include on-the-job training and vocational training (not to exceed 12 months). A significant area of concern among social justice advocates is that educational programs, including programs to assist recipients with earning their high school diplomas, are not included in approved work categories.
According to a 2020 U.S. Department of Health and Human Services (DHHS) report, in fiscal year 2019 there were just over 1.9 million families receiving TANF benefits, with a total individual TANF caseload of about 3.8 million individuals (DHHS, 2019). This represents about 400,000 additional recipients compared to 2013 (DHHS, 2015). About half of all caseloads consisted of small female head-of-household families with one or two children, with the other half consisting of child-only cases. With regard to the racial makeup of recipients, in fiscal year 2018, 37.8% were Hispanic, 28.9% were Black, 27.2% were White, 1.9% were Asian, 1.5% were Native American and Alaska Natives, and 0.6% were Native Hawaiian and other Pacific Islanders (DHHS, 2018). Among all families receiving TANF benefits, about 90% received medical assistance, 82% received SNAP benefits, 10.8% received housing assistance, and 6.1% received subsidized child care. These percentages have remained relatively stable in the last 5 years, except SNAP benefits have increased by about 2%. The average monthly cash benefit for TANF recipients (per individual based on a family of three) ranges from $56 (Mississippi) to $308 (Alaska), and 35 states haven’t raised TANF benefits in over a decade (Burnside & Floyd, 2019).
Many social welfare advocates believe that TANF is too punitive in nature because of its strict time limits for lifetime benefits, stringent work requirements, and other punitive measures designed to control the behavior of recipients. Supporters of welfare reform relied on old arguments, citing the need to control welfare fraud and welfare dependency. They also cited a host of other behaviors exhibited by female welfare recipients, including perceived sexual promiscuity and out-of-wedlock childbearing, while focusing very little on the behaviors of the fathers, particularly those who abandon their children (Hudson & Coukos, 2005; Rainford, 2004).
Uluorta (2008) cautions that far too often morality in the United States has been defined in very narrow terms, focusing on select groups of individuals and on very specific behaviors, such as sex and sexuality, marital status, and social standing. (It is interesting to note that rarely do those criticizing the immoral behavior of the poor also frame behaviors such as greed or lacking compassion in moral terms). While individual responsibility is certainly worth achieving, it can also be a code word for philosophies that scapegoat the poor and minimize long-standing social inequalities. Such scapegoating is of great concern to many within the human services fields and others who recognize the wide range of ways that social problems and their causes can be framed, and the danger of focusing too heavily on perceived behavioral flaws of those who are struggling.
The belief that generous social welfare programs will result in increased dependence has deep roots in the United States and has been a powerful influence on social welfare policy development from the country’s inception. But is this true? Is there any evidence to support the contention that generous social welfare programs will increase dependence and decrease self-sufficiency? There are a few ways we can answer this question. The first is to explore empirical research on the effectiveness of TANF, specifically whether the punitive nature of the program has the intended result—to increase compliance and decrease dependence. A 2019 study examined this very dynamic and found that TANF’s structure, particularly its sanctions mechanisms that punish single mothers for noncompliance (not working enough hours, etc.), actually led to increased dependence and lower levels of self-sufficiency (Hamilton et al., 2019). The other method of assessing the effectiveness of TANF is to compare the U.S. social welfare system with systems in other high-income countries.
The Male Breadwinner Model Versus the Nordic Model
There are two primary social welfare models used in high-income countries, such as the United States and European countries: the male breadwinner model and the “all adult worker model.” The former is the traditional model, which assumes that men are the wage earners and women stay home and care for the children, provided for financially through their husbands’ earnings. Western societies have long presumed that this traditional model resulted in stable families (Clark, 2000; Weitzman, 1985). In fact, modern welfare systems have been constructed on the concept of the full engagement of a male workforce, where wages from employment were considered the best form of “welfare provision” (with regard to monthly income, health care benefits, and pensions) (Lewis, 2001), which may be one reason why the United States has been resistant to providing more universal government programs.
The male breadwinner model has been used by many countries to enforce a social structure that was believed to be the foundation of society. AFDC, the program that preceded TANF, was based on a male breadwinner model because it presumed that men were the wage earners of the family and women stayed home to care for the children, and in the absence of a male provider, the government stepped in until the woman remarried (Moffitt et al., 1994).
U.S. family behaviors have changed significantly since the 1950s, resulting in the general breakdown of the traditional family structure. We now have far more fluidity and flexibility in intimate relationships, a large increase in single-person households, as well as an increase in women entering the labor force (Lewis, 2001). Welfare reform in the mid-1990s was fueled by many factors, but a primary one was related to these shifting cultural tides in the United States and a building resentment toward AFDC recipients whom many Americans believed should be working (Murray, 2008).
The other social welfare model that has been adopted in most European countries is the “all adult worker model,” which assumes that all adults, males and females, are equally involved in the labor market and thus all adults are economically independent. Both TANF and the Nordic Model (the social welfare systems in the Nordic countries of Sweden, Iceland, Finland, Denmark, and Norway) are considered all adult worker models, but their design and impact are dramatically different. While TANF is technically considered an all adult worker model, the philosophical basis of the legislation strongly reflects male breadwinner values, which is captured in the legislative definition of poverty as primarily a result of teen out-of-wedlock births, and the legislative goal of marriage promotion.
The TANF program does expect all beneficiaries to work, which is consistent with the all adult worker model, but research shows that while family behavior in the United States has changed considerably since the 1950s, it hasn’t changed as much as the all adult worker model requires to be effective. For instance, women’s behavior in the United States has changed substantially with respect to entering the paid workforce, but most women still only work part-time, and most are in far lower-paying fields. Also, the majority of women in the United States still perform the bulk of unpaid care work, whereas men have not changed significantly in their work-related patterns. They still engage primarily in paid work (and are paid on a much higher scale), and as a whole haven’t significantly increased their involvement in childcare or other unpaid work (Dush et al., 2018; Lewis, 2001).
Thus, while TANF is considered an all adult worker model because it is a welfare-to-work program, it does not match current behaviors in the United States with regard to labor engagement and unpaid work provision. For instance, TANF expects new mothers to enter the labor force rapidly, yet most enter low-wage service sector jobs that offer little opportunity for advancement (Mitchell et al., 2018; Seefeldt, 2017). Also, because TANF is an income-tested program, it tends to stigmatize beneficiaries, blaming poverty on individual circumstances (primarily women and their sexual behavior) rather than structural problems, such as a poor economy, a lack of jobs offering a living wage, racial oppression, domestic violence, and poor educational systems. The U.S. social welfare model in general also discourages parents from leaving the labor market to care for their children, by failing to provide paid paternity leave on a federal level.
The Nordic Model is also an all adult worker model, but the Nordic countries have a strong commitment to universal care entitlements focusing on children and older adults, thus utilization is far less stigmatized (Lewis, 2001). Temporarily exiting the labor market in the Nordic countries for unpaid care work is encouraged. Men are incentivized to temporarily leave their jobs to care for their children by the availability of generous parental leave (about 480 days) that can be split between the parents. Finally, the United States pays a fraction of what Nordic countries pay for family benefits (Ozawa, 2004).
So, what’s the answer to our question then? Which program is more effective in reducing poverty without creating dependence? Surprisingly, there aren’t many comparative studies, but a set of data we can examine are poverty rates among single mothers between the United States and the Nordic countries, to get an idea of the effectiveness of the two models. In 2018 the poverty rate of single mothers in the United States was 35.1% (U.S. Census Bureau, 2018). This is an improvement over 2007 rates when 50% of single mothers in the U.S. lived below the poverty line (Legal Momentum, 2011), but the U.S. poverty rate for single mothers is still far higher at 35.1% than most Nordic countries, which range from 17% in Denmark to 24% in Sweden.
Poverty is highly complex and is influenced far more by structural factors than individual ones. As long as social welfare policy in the United States is fueled by fears of dependency (that a generous safety net will make us all lazy), chances are many people will continue to believe in the myth of the welfare queen and negate the despair many single mothers feel when faced with challenges of rising out of poverty with minimal support and high levels of stigma (Seccombe, 2015).
The Economic Crisis of 2007–2008
After years of an economic boom, the U.S. economy began faltering in about 2007 and devolved into a full-blown recession by 2008, which lasted until about 2009 or 2010. The economic recession of 2007 consisted of a dramatic and lengthy economic downturn not experienced since the Great Depression. The real estate market bubble burst, the stock market crashed, the banking industry seemed to implode, and many people lost their jobs and their houses as a result (Geithner, 2009).
President Obama and the 111th Congress responded to the economic crisis with several policy and legislative actions, including the passage of the American Recovery and Reinvestment Act of 2009 (often referred to as the Stimulus bill [Pub. L. No. 111-5]). This economic stimulus package, worth over $787 billion, included a combination of federal tax cuts, various social welfare provisions, and increases in domestic spending, and was designed to stimulate the economy and assist Americans who were suffering economically.
As a part of the 2009 Recovery Act, Congress allotted $5 billion in emergency funding to assist states with increased TANF caseloads (this funding expired in September 2010). TANF was reauthorized in 2009 and was up for reauthorization in 2015 but experienced several delays. The National Association of Social Workers (NASW) released a statement regarding reauthorization recommending several changes to the TANF program, including the following:
· Increase the floor for TANF benefits to 100% of the federal poverty line. Currently, many states’ benefits are 50% of the federal poverty line, while benefits in several states are only about 30% of the federal poverty line.
· Expand the definitions of employment to include higher education, English and literacy classes, and more expansive vocational training.
· Address common barriers to employment such as physical illness, mental illness, disabilities, substance abuse, domestic violence, and sexual violence.
· Restore benefits for documented immigrants (NASW, 2015).
The stimulus package was considered largely successful and initially had the approval of the majority of Americans (Pew Research Center, 2008). The economy took years to recover, though, and some populations and regions never fully recovered, including many rural communities (Farrigan, 2014). Over time, Americans became increasingly critical of what many now call the “Wall Street Bailout.” When the 2008 elections rolled around, many in the United States were ready for a change, reflected in the election of Democrat Barack Obama.
The Election of the First Black President
The 2008 presidential election was unprecedented in many respects. The United States had its first Black and first female presidential candidates of a major party. Many people who had historically been relatively apathetic about politics were suddenly passionate about this election for a variety of reasons. Growing discontent with the leadership in the preceding 8 years coupled with a lengthy war in the Persian Gulf region and a struggling economy created a climate where significant social change could take root. Barack Obama’s campaign slogans based on hope and change (e.g., “Yes We Can!” and “Change We Can Believe In”) seemed to tap into this growing discontent.
Perhaps one of the most significant federal laws to be passed during the Obama administration was the Patient Protection and Affordable Care Act of 2010 (ACA) (PPACA, 2010). The ACA (more commonly known as Obamacare) was signed into law by President Obama in March 2010 after a fierce public relations battle waged by many Republicans and health insurance companies designed to prevent its passage. The ACA, which took effect incrementally between 2010 and 2014, is a comprehensive health care reform bill. Overall, this legislation is designed to make it easier for individuals and families to obtain quality lower-cost health insurance by having people apply for a policy through a central exchange. One of the goals of the legislation was to make it more difficult for health insurance companies to deny coverage, particularly based on preexisting conditions. The ACA also expands Medicare in a variety of ways, including bolstering community and home-based health care services, and providing incentives for preventative, holistic, and wellness care. With respect to behavioral and mental health care, the ACA provides increased incentives for coordinated care and school-based care, including mental health care and substance abuse treatment. It also includes provisions that require the inclusion of mental health and substance abuse coverage in benefits packages, including prescription drug coverage and wellness and prevention services. Although the Trump administration attempted to weaken the ACA in a variety of ways, it remains an effective piece of legislation as long as states comply with the act’s mandates, including providing oversight for unwarranted price increases.
Political speeches and debates leading up to the 2012 presidential elections revealed the same debate about the causes of poverty and effective poverty alleviation strategies. After a brief display of compassion toward the poor at the height of the 2008 economic crisis, harsh sentiments reflecting historic stigmatization of the poor were strongly espoused, particularly among potential Republican primary candidates who continued their campaign against “big government,” social welfare programs, and civil liberties in general. One 2012 Republican presidential candidate, Newt Gingrich, even went so far as to challenge current child labor laws, calling them “stupid.” In a campaign speech in Iowa in the fall of 2011, Gingrich characterized poor ethnically diverse children living in poor neighborhoods as lazy and having no work ethic. In two different speeches (his initial speech and a subsequent speech where he was asked to clarify his earlier comments), Gingrich suggested that poor children in poor neighborhoods could start work early, perhaps as janitorial staff in their own schools (Dover, 2011). Gingrich’s sentiments completely negated the role of racial oppression and White privilege in the poverty experienced by racial and ethnic minorities in the United States.
President Obama significantly advanced social justice during his presidency, including signing into law the Matthew Shepard and James Byrd, Jr. Hate Crimes Prevention Act of 2009, which extended federal protection to victims of hate crimes based on actual or perceived sexual orientation or gender identity. Obama also signed into law the Fair Sentencing Act of 2010, which addressed the disparity in sentencing laws between powder cocaine and crack cocaine, a disparity impacting primarily people of color. In 2011, the repeal of Don’t Ask, Don’t Tell (DADT) (an official policy of the U.S. government that prohibited the military from discriminating against gay and lesbian military personnel as long as they kept their sexual orientation a secret) took effect, which meant that lesbian, gay, and bisexual Americans could serve openly in the U.S. Armed Services without fear of dismissal.
Other advances in social justice legislation and policy include Obama’s 2012 Executive Order implementing the Deferred Action for Childhood Arrivals (DACA) policy, providing legal protection for undocumented migrant youth who came to the United States as children with their parents, until federal legislation could be passed to provide them a path to citizenship. Obama also advocated in support of marriage equality for same-sex couples in advance of the 2015 Supreme Court case Obergefell v. Hodges, which legalized same-sex marriage throughout the United States. And in 2016, the Obama administration increased the annual refugee threshold to 110,000 to accommodate the resettlement of Syrian refugees, among other highly vulnerable populations.
Some of these advances have since been dismantled by President Trump, including the withdrawal of DACA (an action since reversed by the U.S. Supreme Court) and the consistent lowering of the annual refugee threshold to 18,000 in 2020, the lowest number in the history of the United States Refugee program (U.S. Department of State, 2020). Despite these dramatic shifts in values and policy approaches, President Obama remains a key figure in U.S. history, not only because of his race, but because of his social justice legacy and his overall popularity.
The Tea Party Movement
A powerful conservative social movement that sprang up in 2009 in reaction to the Obama presidency is the American Tea Party Movement, a part of the Christian Right and a fringe part of the Republican base. Tea Party members advocated for smaller government, lower taxes (the name of the group is a reference to the Boston Tea Party), states’ rights, and the literal interpretation of the U.S. Constitution. The Tea Party movement gained a reputation for advocating for very conservative policies that advanced traditional American values such as marriage between a man and a woman, restrictions on abortions, and governance that supported conservative Christian values. For instance, Michele Bachmann, a Tea Party member, former Minnesota congresswoman, and 2012 presidential candidate, asserted in a 2006 speech that religion was supposed to be a part of government, and that the notion of separation of church and state (contained in the First Amendment of the U.S. Constitution) was a myth (Turley, 2011).
The Tea Party has been criticized for many reasons, including being anti-immigrant and racist, accusations the party’s leadership strongly denied. And yet, media coverage of Tea Party rallies frequently highlighted their racially charged tone, such as racial slurs on posters, many of which were directed at former President Obama’s ethnic background. Although “tea partiers” often denied accusations of bias, a study conducted during the height of the movement showed that about 60% of Tea Party opponents believed that the movement had strong racist and homophobic overtones (Gardner & Thompson, 2010). Most members as of 2019 have shed the Tea Party label but remain an influential core of the Republican party as conservative evangelicals.
The Era of Donald Trump
Donald Trump, a reality television star and real estate mogul, was elected president in 2016, and took office the following January, surprising many people in the United States and around the globe. Trump’s election was also a surprise to political pollsters, many of whom predicted Hillary Clinton had a clear path to the White House (Wright & Wright, 2018). The disappointment and shock many Democrats felt in response to Trump’s win was in large part rooted in the contentious nature of the election and the belief that Americans would not vote for someone so mired in scandal and who espoused such controversial rhetoric in speeches and tweets that many believed reflected racism, sexism, and xenophobia (Jacobson, 2017; Tani, 2016). But that’s precisely what happened.
Trump’s policies, particularly his stance on immigration and his “America First” rhetoric, are consistent with right-wing populism, a far-right political ideology that is rooted in nationalism and protectionism (Dunn, 2015; Mudde, 2013; Wodak, 2015). Right-wing populism is by definition anti-immigrant, since immigrants are perceived as a threat to the country’s traditional culture and way of life (Bonikowski, 2017; Ybarra et al., 2016). A wave of right-wing populism, particularly those fueling anti-immigrant social movements, has swept the globe in recent years (Donovan & Redlawsk, 2018), so from a broader perspective Trump’s election is wholly in line with global political trends throughout the second decade of the 21st century.
The dominant narrative in the wake of Trump’s 2016 win was that it was economic anxiety that drove support for Trump—the forgotten White working class, those living in rural communities, the former manufacturing states of the upper Midwest (often called the Rust Belt), and coal country. And yet, recent research has revealed that Trump’s popularity was rooted not as much in economic anxiety, but in cultural anxiety—a profound concern among White voters, particularly working-class men without a college education, that they were losing their cultural status to ethnic minority populations (Mutz, 2018). This dynamic is referred to by researchers as “out-group anxiety,” a response to multicultural changes in the United States, such as increasing acceptance of multiculturalism (Barreto et al., 2011).
In addition to the support of White working-class voters, approximately 80% of White conservative Christians who self-identified as born-again and/or evangelical (Protestants, Catholics, and Mormons) voted for Trump in 2016 (CNN, 2016; Smith & Martinez, 2016). This was also surprising to many Americans, in light of Trump having been married three times, reports of his many extramarital affairs, and his personal admission on an Access Hollywood videotape of his sexual exploits, including grabbing women by their genitals (Fahrenthold, 2016). A 2016 poll by Christianity Today revealed that despite admitting that Trump was difficult to like, the majority of self-described White evangelicals believed that Trump was honest, a good role model, and well qualified to be president (interestingly, the majority of Black evangelicals reported almost the exact opposite sentiments) (Eekhoff-Zylstra & Weber, 2016).
In addition to the unrelenting support of his base, the 2016 election was significantly influenced by social media, particularly propaganda (or “fake news”) disseminated via Facebook and Twitter that swayed many people’s opinions about both Hillary Clinton and Donald Trump (Allcott & Gentzkow, 2017). Concerns about Russian election meddling and e-mail hacking, allegations that the Trump administration cooperated with the Russian government, and the discovery of Russian “troll farms” that used Facebook and Twitter to influence Americans to vote for Trump rather than Clinton remain controversial topics that will likely take years to fully understand. In the meantime, political polarization remains high, with research showing that most Americans now have little contact with people in the opposing political party (Pew Research Center, 2016).
Social media will no doubt continue to play a pivotal role in the political polarization in the United States. Research indicates that when people are exposed to opposing views on sites like Twitter, they become even more entrenched in their political stances (Bail et al., 2018). Social media is also used for good. For instance, advocacy is increasingly occurring online to effect changes in public policy. A 2017 study found that among advocates and advocacy organizations, 70% use Facebook for advocacy purposes and 75% use Twitter. Additionally, over 50% of advocacy organizations now have a professional position designated for social media (Rehr, 2017). Social media is consistently evolving, in both usage and functionality, so it’s important that human services professionals remain up to date in all the ways social media is being used—both positively and negatively.
The election of Donald Trump has significantly changed the policy priorities of the United States, including social welfare policy. Human services professionals and others in the helping fields applauded Trump’s support for criminal justice reform by signing into law the First Step Act—a bipartisan bill that addresses racial disparities in sentencing laws—but remain highly concerned about the Trump administration’s stances on immigration, including the separation of Central American families seeking political asylum, the increase in expedited deportations without hearings, a dramatic reduction in the refugee resettlement threshold, and rollbacks in environmental protections. The NASW’s first statement regarding President Trump, released the day after the 2016 election, encouraged Trump to heal the divisiveness caused by his campaign (NASW, 2016), and yet, according to the Southern Poverty Law Center, hate crimes have risen steadily during the Trump administration (Beirich, 2019).
The NASW has also released statements expressing significant concern about Trump’s various policies, particularly those that impact the most vulnerable and marginalized members in U.S. society. Examples include the NASW statements on the Trump administration’s travel ban on refugees coming from primarily Muslim nations (NASW, 2017b), its policy to separate migrant families at the border (NASW, 2018), and the administration’s economic policies (NASW, 2017a). Trump was also highly criticized for his handling of the 2020 coronavirus pandemic, including concerns that he was slow to respond to the crisis (particularly with regard to testing and quarantining), but the NASW also praised the Trump administration for signing into law two emergency bills: the Families First Coronavirus Response Act (H.R. 6201), which among other things expanded unemployment benefits, emergency family leave, and sick leave for those impacted by the coronavirus, and the Coronavirus Preparedness and Response Supplemental Appropriations Act of 2020 (H.R. 6074), which provided emergency funding for public health agencies and major expansions in the use of telehealth.
Despite the controversy surrounding Trump’s 2016 election and the values and choices of his administration, one positive development in response to his election was a dramatic increase in grassroots social justice advocacy in the form of rallies and protests (e.g., the Women’s March, the March for Science), as well as a significant increase in people of color and women running for political office. Social media played a significant role in these developments. For instance, the Women’s March was organized primarily on Facebook through shared statuses on timelines and in pages and groups, as well as the use of the platform’s event calendar to organize people globally. The number of women who showed up globally to march for gender equality was unprecedented, and this grassroots coordination could not have occurred without the broad and rapid reach of social media. President Trump lost his 2020 re-election bid to Joe Biden, which many social reformers saw as a positive sign. And yet, it’s likely that political and social polarization will continue for quite some time.
Conclusion
The United States is often referred to as a reluctant welfare state because throughout its history a battle has been waged between reformers, who advocate for a compassionate, inclusive, less-stigmatized social safety net, and opposing groups, who advocate for a system with less government involvement, more privatization, and increased work incentives, based on fears that a generous social safety net will decrease incentives for people to work. Currently, it would be more accurate to describe the U.S. social welfare system as a piecemeal welfare-to-work system that focuses more on the behavior of the poor than on the structural causes of poverty that act as barriers to self-sufficiency. But it is also accurate to describe the U.S. social welfare system as ever evolving, as reflected in the passage of emergency stimulus bills in response to the 2020 pandemic. Other changes are on the horizon as well, but they will be highly influenced by political leadership and economic constraints.
Summary
· The ways in which England’s historic system of poor care influenced the development of social welfare policy in the United States are analyzed, tracing aspects of social welfare provision from England’s feudal system in the Middle Ages, to the Elizabethan Poor Laws, to the development of the social welfare system in colonial America.
· Movements and associated philosophical influences in poor care and social reform in early America are compared and contrasted. Various philosophical and religious movements that have influenced perceptions of the poor and social welfare policy, such as Calvin’s Protestant work ethic and the concept of predestination, social Darwinism, and the settlement house movement, are discussed.
· Early leaders in the fight for social justice are explored, including Jane Addams and Ida B. Wells, with a particular focus on how the social justice movement formed the underlying values of the human services profession.
· The ways that the New Deal and Great Society programs alleviated poverty after the Great Depression are discussed, including an exploration of the successes and failures of the post-Depression New Deal programs in alleviating poverty in the United States.
· Current social welfare approaches and programs are summarized and analyzed, including the 1970s recovery, welfare reform and TANF, the 2008 economic crisis, the Obama administration, and the Trump administration, with a particular focus on the impact of social welfare policy and provision on at-risk populations.
Poor Care in Europe
The Feudal System of the Middle Ages
A good place to begin this examination is the Middle Ages, from about the 11th to the 15th centuries, when a sociopolitical system called feudalism prevailed as England’s primary method of caring for the poor. Under this elitist system, privileged and wealthy landowners would parcel off small sections of their land, which would then be worked by peasants (also called serfs). Many policy experts consider feudalism a governmentally imposed form of slavery or servitude because individuals became serfs through economic discrimination (Trattner, 2007).
Serfs were commonly born into serfdom with little hope of ever escaping, and as such they were considered the legal property of their landowner, or what was commonly called a lord. Although lords were required to provide for the care and support of serfs in exchange for farming their land, lords had complete control over their serfs and could sell them or give them away as they deemed fit (Stephenson, 1943; Trattner, 2007). Despite the seeming harshness of this system, it did provide insurance against many of the social hazards associated with being poor, a social condition considered an inescapable part of life, particularly for the lower classes. Many economic and environmental conditions led to the eventual decline of the feudal system from the mid-14th century through its legal abolition in 1660, including several natural disasters that resulted in massive crop failures, the bubonic plague (also called the Black Death), various political dynamics, social unrest, and urbanization due to the development of trade and towns.
Officially, poor relief during the Middle Ages was the responsibility of the Catholic Church, primarily facilitated through the monasteries and local parishes. Catholic bishops administered poor care through the support of mandatory taxes or compulsory tithing. Poverty was not seen as a sin; in fact, the poor were perceived as a necessary component of society, in that they gave the rich an opportunity to show their grace and goodwill through the giving of alms to the less fortunate. Thus, caring for the poor was perceived as a noble duty that rested on the shoulders of all those who were able-bodied. Almost in the same way that evil was required to highlight good, according to biblical scripture and Catholic theology, poverty was likewise necessary to highlight charity and goodwill as required by God (Duncan & Moore, 2003; Trattner, 2007).
Poor Laws of England: 1350 to 1550
Many economic and environmental conditions led to the eventual phasing out of the feudal system between 1350 and 1550, including health and natural disasters (such as the bubonic plague and massive crop failures). Increased demand for factory wage labor in the cities led to droves of people moving to growing cities to work in factories. Mass urbanization led to freedom from serfdom for the poorest members of English society, but it also generated a vacuum in how poverty was managed, creating the necessity for the development of England’s earliest poor laws (Trattner, 2007).
These gradual shifts in how poverty was managed also led to a shift in how poverty was perceived. During the Middle Ages, poverty was seen as an inescapable condition people were born into. People were either lords or serfs, rich or poor. But after the dismantling of the feudal system, when people were migrating to the cities, the poor were often nameless, faceless strangers living in the city slums or on the streets. They were often from different countries and spoke different languages, ate odd foods, and behaved in very different manners than traditional English societal norms. Thus, it became easier to blame the poor for their lot in life (Martin, 2012; Trattner, 2007).
The increasingly impersonal nature of caring for the poor, as well as the complexities of life in cities, ultimately led to the incorporation of punitive measures into poor relief policy to control begging and vagrancy and decrease crime in the cities. Also during this time period, about one-third of the English population was poor, prompting the need for a complete overhaul of the social welfare system (Trattner, 2007). England responded to these changing dynamics and the associated problems by passing several relief laws, called Tudor Poor Laws, between the mid-1500s and 1601. Tudor Poor Laws placed responsibility for dealing with the poor at the local level and reflected a complete intolerance of idleness. Local police scoured the cities in search of beggars and vagrants, and once found, a determination was made between those who could not work—the worthy poor—and those who were able-bodied but refused to work—the unworthy poor (Beier, 2003).
Legislative guidelines typically stipulated that only pregnant women, individuals who were extremely ill, or persons over the age of 60 were considered justifiably poor; thus, they were treated more leniently, including receiving government authorization to beg (typically in the form of a letter of authorization). In some cases, the poor were given other forms of sustenance in addition to authorized begging, such as food and medicine. But if an able-bodied person was found to be unemployed, they were considered vagrant and were punished in a variety of harsh ways, including being whipped, paraded through the streets naked, returned to the town of their birth, or incarcerated. Repeat offenders were often subjected to having an ear cut off or were even put to death (Chambliss, 2017; Trattner, 2007).
Clearly, there was no sympathy to be had for individuals, male or female, who were deemed capable of working but found themselves without a job or any means of support. Additionally, little consideration was given to social or economic dynamics or what is now referred to as the cycle of poverty. What’s even more surprising is that little sympathy was extended even to children, particularly adolescents who were unparented and found begging in the streets. In fact, district officials often took these children into custody, placing them into apprenticeship programs or almshouses and subjecting them to what we would now consider child slavery (Trattner, 2007).
The Elizabethan Poor Laws of 1601
The Tudor Poor Laws were replaced by the Elizabethan Poor Laws of 1601, a set of laws that established a system of poor relief in England and Wales. These laws reflected an organized merging of England’s earlier, sometimes conflicting and erratic, social welfare legislation and formalized many of the driving principles rooted in the Tudor Poor Laws, including the beliefs that the primary responsibility for provision for the poor resided with one’s family, that poor relief should be handled at the local level, that vagrancy was a criminal offense, and that individuals should not be allowed to move to a new community if unable to provide for themselves financially.
It was quite common for community members to bring charges against others if it could be proven that they had moved into the district within the last 40 days and had no means to support themselves. Such individuals would be charged as vagrants by local officials and returned to their home districts. The underlying notion was that local parishes didn’t mind supporting individuals who had fallen on hard times after years of paying taxes and contributing to society, but they didn’t want to be forced to support strangers who came to their district for the sole purpose of receiving aid. The Elizabethan Poor Laws of 1601 served as the foundation for social welfare legislation in colonial America, and elements of residency requirements can be found in current U.S. welfare policy.
During this time period in England there were generally two types of charitable provision: indoor relief and outdoor relief. Indoor relief was provided for the unworthy poor—those deemed able-bodied but who did not work (vagrants, indigents, and criminals)—and consisted of mandatory institutionalization in workhouses or poorhouses, where residents were forced to work. Workhouses were designed to be harsh, in the hope that they would serve as a deterrent for those individuals who lacked the skill or desire to work and become self-sufficient. Outdoor relief consisted of money, clothing, food baskets, and medicine, provided in the homes of those who were considered the worthy poor, most often widows, the disabled, and the aged (Jones, 1969; Slack, 1990).
The History of Poor Care During the Colonial Era
Life in colonial America offered tremendous economic opportunities as well as significant hardship related to life on the frontier. Many immigrants were quite poor to begin with, and the long and difficult ocean voyage to the New World often left them unprepared for the rigors of life in America. Thus, even though colonial America offered many opportunities not available in the “Old World,” such as land ownership and numerous vocational opportunities, many of the social ills plaguing new immigrants in their homeland followed them to North America.
Colonial America: 1607 to 1775
English and Scottish colonization of North America began in 1607 in Virginia and continued through most of the 1700s until independence. Because there was no existing infrastructure in the original 13 British colonies, poor relief consisted primarily of mutual kindness, family support, and distant help from England. Self-sufficiency was a must, and life was not easy on the frontier. There was a dramatic increase in the population during the 75 years before independence, increasing from 250,000 settlers in 1700 to an estimated 2.5 million in 1775! And, as the population increased, so did the need for a more formal and organized system of poor care.
Poor Care in the Industrial and Progressive Eras: 1776 to 1920s
After independence in 1776, poor care remained minimal, consisting primarily of free land grants for White settlers, pensions for widows, and aid to disabled veterans. Very little formal social welfare legislation was passed at the state or federal levels until the early 1900s, and even those early laws provided only minimal benefits for some groups of children and the disabled. One of the first federal social welfare efforts was the Civil War Pension Program, passed in 1862, which provided aid to Civil War veterans and their families. Unemployment benefits were offered in most states by about 1929, and a program offering veterans benefits, consisting primarily of medical aid, was instituted after World War I.
The Great Depression in 1929 marked the first time the federal government recognized the need for a national social welfare system, but the nature of provision in the 1800s through the early 1900s was highly influenced by philosophical and religious belief systems that presumed to explain why poverty and other social ills existed. These ideologies in turn influenced how the leaders of early American society believed poverty should be addressed. Two philosophies that have strongly influenced the development of social welfare policy in the United States, and perceptions of those who are in need, are John Calvin’s Protestant theology, specifically his doctrine of predestination, and philosopher Herbert Spencer’s social Darwinism (explored in the next section).
Calvin’s doctrine of predestination emanated from the Protestant Reformation in the 16th century. Calvin wrote about the nature of God’s control over the world and how this control was exercised, primarily in the form of who God would allow into heaven (the elect) and who he would condemn to hell (the condemned). According to Calvin’s doctrine, a person’s salvation was predestined by God and based solely on God’s grace, not on what people did in their lives (whether they were essentially good or bad). Thus, even though all people were called to faith and repentance, not all people would be allowed into heaven.
Even though many Protestants rejected Calvin’s concept of predestination, including Lutherans and Methodists, Calvin’s doctrine became embedded in early American society in a number of ways. In his book The Protestant Ethic and the Spirit of Capitalism, Max Weber described in detail the vast influence of Calvin’s doctrine on European and American society. According to Weber, Calvin theorized that since everyone deserved to go to hell anyway, that was the lot they should accept, and those who were saved from condemnation were blessed by the grace of God. Human action in an attempt to secure one’s own salvation (through works) was futile, since one’s eternal fate rested not on human goodness but on God’s mysterious desire and will (Weber, 1905/1958). Roman Catholic theology, which previously influenced poor care, recognized the omnipotence of God in matters of salvation, but also acknowledged that people had free will and choice, and could elect to walk with God and have everlasting life by following his commandments.
According to Weber, the Calvinists accepted the concept of predestination, but did not accept that there was no way to determine who was saved and who was condemned, since privilege and participation in society were based in large part on separating people into two categories: those who were godly and those who were not. For instance, only God’s faithful were allowed to become members of the church, receive communion, and enjoy other benefits of salvation, including societal respect. Determining that one was condemned to hell, not because of anything that person necessarily did, but because of God’s mysterious determination, became a common form of social exclusion.
In time, particular behaviors and conditions became indicators—or signs—of one’s eternal fate. For instance, hard work (what Weber referred to as the Protestant work ethic) and good moral conduct (the ability to deny worldly pleasures in pursuit of purity) became signs of the elect, since it was believed that God blessed the elect by giving them a vocation, and only the “elect” were granted the ability to be pure (Weber, 1905/1958). In other words, those who could not work for any reason, even through no fault of their own, were perceived in society to be condemned, because they were not bestowed a vocation.
A “catch-22” with regard to living a pure life was that it was the privileged members of society who determined what was considered “pure.” For instance, church attendance was a requirement of purity, but only members of the elect were permitted to join the church, and the remainder were excluded, which was then used as an indicator that they were not pure, and thus not a member of the elect. Even if the poor and suffering had a voice and could protest the paradoxical reasoning behind the signs, according to Calvin, everyone deserved to be condemned anyway, thus there was simply nothing to complain about (Hudson & Coukos, 2005; Weber, 1905/1958).
The influence of the Protestant work ethic and Calvin’s doctrine of predestination on U.S. society as a whole, and specifically on the poor, was significant, extending well beyond religious communities (Kim, 1977). With hard work, material success, and good moral conduct serving as the best signs of election to salvation, it did not take long for poverty and presumed immoral behavior (remember, it was presumed that only the elect had the spiritual fortitude to behave morally) to become clear indications of one’s condemnation (Chunn & Gavigan, 2004; Gettleman, 1963; Hudson & Coukos, 2005; Kim, 1977; Schram et al., 2008; Tropman, 1986; Weber, 1905/1958).
Early Social Work Movements
Charity Organization Societies: 1870 to 1893
The Charity Organization Society (COS) is often considered one of the forerunners of the modern social services profession and marked one of the first organized efforts within the United States to provide charity to the poor. The COS movement began in England in 1869, in response to increased urbanization and immigration, and to common frustration with the existing welfare system, which consisted primarily of disorganized and chaotic almsgiving. The movement was started by Rev. S. Humphreys Gurteen, who believed that it was the duty of good Christians to provide an organized and systematic way of addressing the plight of the poor in a manner that would increase self-sufficiency and personal responsibility. Gurteen and his colleagues strongly believed that giving alms indiscriminately, and without conditions, encouraged fraud and abuse, as well as laziness among those receiving the help.
The first COS in the United States was founded in Buffalo, New York, in 1877 and served as a sort of umbrella organization for other charities by assisting in the coordination and oversight of relief services to the poor (Schlabach, 1969). The COS concept of organized and systematic provision quickly spread to large cities across the nation, and by 1890 over 100 cities had at least one COS serving the community (Wahab, 2002). The COS philosophy focused on achieving self-sufficiency and reducing dependence. Therefore, outdoor relief, such as cash assistance, was discouraged because it was considered harmful to the beneficiary, based on the belief that material relief would encourage dependence and laziness, thus ultimately increasing poverty (Gettleman, 1963; Kusmer, 1973). In this respect, the COS retained the concepts of the worthy and unworthy poor.
The COS practiced what was called scientific charity, which involved intelligent giving, embracing the notion that charity should be natural, not artificial (Gettleman, 1963; Leiby, 1984). Natural giving was both spontaneous and informal, and was drawn from the philosophies advanced by Thomas Chalmers, a Scottish political economist and member of the clergy. Chalmers made a distinction between “natural charity” and “artificial charity,” where the former was based on what he called the “four fountains of charity”: (1) people’s willingness to help themselves, (2) the willingness of families to help, (3) the willingness of neighbors to help, and (4) the willingness of wealthy people to contribute to their community. Chalmers believed that “natural charity” was far less likely to involve fraud, whereas “artificial charity,” involving more organized forms of giving by churches and the government, had a far greater likelihood of being abused by both the giver (e.g., politicians) and the beneficiaries.
Based on this ideology, COS leaders were highly suspicious of organized giving. While they believed in the importance of charity, they wanted to root out fraud by coordinating the often haphazard and disorganized giving of alms to the poor, as well as create relationships with those in need (typically single women) to determine the individual cause of their poverty (Gettleman, 1963). According to COS philosophy, poverty was almost always caused by laziness, drinking alcohol, and spending too much money (Rauch, 1975). COS directors employed friendly visitors, an early version of caseworkers, to visit the homes of aid applicants, diagnose the reasons for their poverty, and, if possible, develop a case plan to alleviate their poverty (Rauch, 1975; Trattner, 2007). Because poverty was defined as an individual problem, and because most aid recipients were women, excessive focus was placed on sexual morality, with the goal of modeling appropriate moral behavior (O’Neill, 2016). Since material relief was discouraged, most friendly visitors offered only sympathy, encouragement, and guidance on how to seek employment, with minimal financial assistance (Wahab, 2002).
The COS movement was highly influenced by Calvinism, but also by another sociopolitical ideology called social Darwinism, which involved the application of Charles Darwin’s theory of natural selection to the human social world. Darwin’s theory, developed in the mid-19th century, was based on the belief that environmental competition—a process called natural selection—ensured that only the strongest and most fit organisms would survive (allowing the biologically fragile to perish), thus guaranteeing the successful survival of a species (Darwin, 1859/2009). Social Darwinists applied Darwin’s theory to humans and the social world in an attempt to provide naturalistic explanations for various phenomena in human social life (Weikart, 1998).
One of the most influential social Darwinists was Herbert Spencer, an English sociologist and philosopher who coined the term “survival of the fittest” (a term often incorrectly attributed to Darwin) in reference to the importance of human competition for resources in securing the survival of what were considered the fittest members of society (Hofstadter, 1992). Spencer was a fierce opponent of any form of government intervention or charity on behalf of the poor and disadvantaged, arguing that such interventions would interfere with the natural order, thus threatening society as a whole. Although Spencer’s theory of social superiority was developed in advance of Darwin’s theory, his followers relied on Darwin’s theory of natural selection for the scientific validity of social Darwinism.
The fatalism of predestination, the Protestant work ethic, and social Darwinism became deeply embedded in U.S. religious and secular culture and were used to justify a laissez-faire approach to charity throughout most of the 19th and 20th centuries (Duncan & Moore, 2003; Hofstadter, 1992). Although the specific tenets of these ideologies may have softened over the years, hard work, good fortune, material success, and living a socially acceptable life have remained associated with special favor and privilege in life, whereas poverty and disadvantage have remained associated with presumed weak character, laziness, and immoral behavior. Leaving the poor and disadvantaged to their own devices was perceived as nothing more than complying with God’s (or nature’s) grand plan (Duncan & Moore, 2003). Remnants of these doctrines and philosophies can still be seen in contemporary approaches to helping the poor and disadvantaged, and continue to influence the development of legislation in the United States, as well as people’s attitudes about poverty and the poor (Chunn & Gavigan, 2004; Duncan & Moore, 2003; Gettleman, 1963; Hudson & Coukos, 2005; Kim, 1977; Schram et al., 2008; Tropman, 1986).
The social hierarchy espoused by social Darwinists was reflected in the philosophical motivation of COS leaders, often the community’s wealthiest and most religious members, who agreed to provide charity to the poor as long as the poor remembered their proper place in society (Gettleman, 1963). Yet even the deserving poor did not escape the influence of the Protestant work ethic or the fatalism of social Darwinism, both of which were deeply embedded in COS culture. For example, friendly visitors often focused excessively on the sexual behavior of the women they helped. The COS viewed immorality as the primary problem in most slums, believing that the women living in the slums (many of whom were single mothers) were weak and fallen, having succumbed to the charms and sexual advances of male suitors (Wahab, 2002). Friendly visitors would often use the guise of friendship to connect to these women, hoping they could influence them by modeling the value of being a good Christian woman. Many COS “friendly visitors” even went so far as to ask neighbors to monitor the younger women in the slums and report back on any male visitors (Wahab, 2002).
The principles of the Protestant work ethic and social Darwinism, with their focus on hard work, self-sufficiency, and natural selection, were clearly reflected in various speeches and writings of COS leaders. Common themes included arguments that even widows would become lazy if too much help was given, and life was made too easy for them. Many COS leaders also argued that providing charity to the unemployed, able-bodied poor was actually immoral since, according to natural selection, this population was destined to perish, and providing them charity only prolonged their suffering and was therefore in neither their nor society’s best interest (Gettleman, 1963). Despite clear indications that the COS movement was influenced by the ideologies of the Protestant work ethic and social Darwinism, Leiby (1984) points out that many of the early COS leaders and volunteers, while Christians and members of society’s upper classes, were committed reformers who perceived charity as a form of much-needed love—a concept that contradicted the social Darwinists’ noninterventionist approach.
Mary Richmond, the general secretary of the Baltimore COS, is an example of a committed reformer. Richmond was a fierce advocate for social justice and social reform and believed that charities could employ good economics and engage in compassionate giving at the same time. She became well known for increasing public awareness of the COS movement and for her fundraising efforts. Richmond’s compassion for the poor was likely due to her own experience with poverty as a child: she was orphaned at the age of two and later abandoned by her aunt, who left her to fend for herself in New York when she was only 17 years old. Thus, Richmond no doubt understood the social components of poverty and how factors outside of people’s control could have a devastating impact on their lives. Richmond is credited with contributing to the development of the modern case management model through her conceptualization of social diagnosis, a process in which friendly visitors assessed clients and their environments, enabling the visitor to identify sources of strength and barriers to self-sufficiency (Kusmer, 1973; Richmond, 1917).
Despite the general success of the COS and the contributions the movement made to professionalizing the helping fields, its adherence to deterministic philosophies that negated social factors of poverty while pathologizing the poor deepened the belief that the poor were to blame for their lot in life. In retrospect, one can recognize the naiveté of believing that poverty could be controlled merely through moral behavior. But the country was about to learn a very hard collective lesson during the Depression era—one that immigrants, many ethnic minority groups, and single mothers had known for years—that sometimes conditions exist that are beyond one’s control, creating immovable barriers to economic self-sufficiency.
Jane Addams and the Settlement House Movement: 1889 to 1929
During the same time that the COS “friendly visitors” were addressing poverty in the slums by focusing on personal morality, Jane Addams was confronting poverty in a vastly different way—by focusing on social injustice. Addams was a social justice advocate and a social reformer who started the settlement house movement in the United States with the opening of the Hull House in Chicago. Addams considered the more religiously oriented COS movement rather heartless because most COS leaders were more concerned with efficiency and controlling fraud than with alleviating poverty (Schneiderhan, 2008). Addams used a relational model of poverty alleviation based on the belief that poverty and disadvantage were caused by problems within society, not idleness and moral deficiency (Lundblad, 1995). She advocated for changes within the social structure of society in order to remove barriers to self-sufficiency, which she viewed as an essential component of a democracy (Hamington, 2005; Martin, 2012). In fact, the opening of the Hull House, the first settlement house in the United States, was considered the beginning of one of the most significant social movements in U.S. history.
Addams was born in Cedarville, Illinois, in 1860. She was raised in an upper-class home where education and philanthropy were highly valued. Addams greatly admired her father, who encouraged her to pursue an education at a time when most women were expected to pursue only marriage and motherhood. She graduated from Rockford Female Seminary in 1881, the same year her father died. After her father’s death, Addams entered the Woman’s Medical College in Pennsylvania but dropped out because of chronic illness. Addams had become quite passionate about the plight of immigrants in the United States, but due to her poor health and the societal limitations placed on women during that era, she did not believe she had a role in social advocacy.
The United States experienced another significant wave of immigration between 1860 and 1910, with 23 million people emigrating from Europe, including Eastern Europe. Many of these immigrants were from non–English-speaking countries, such as Italy, Poland, Russia, and Serbia, and were very poor. Unable to obtain work in the skilled labor force, many immigrants were forced to work in unsafe urban factories and live in subhuman conditions, crammed together with several other families in tenements. For instance, New York’s Lower East Side had approximately 330,000 inhabitants per square mile (Trattner, 2007). With no labor laws for protection, racial discrimination and a variety of employment abuses were common, including extremely low wages, unsafe working conditions, and child labor. Poor families, particularly non–English-speaking families, had little recourse, and their mere survival depended on their coerced cooperation.
Addams was aware of these conditions because of her father’s political involvement, but she was unsure of how she could help. Despondent about her father’s death and her failure in medical school, as well as her ongoing health problems, Addams and her friend Ellen Gates Starr took an extended trip with friends to Europe where, among other activities, she visited Toynbee Hall settlement house, England’s response to poverty and other social problems. Toynbee Hall served as a neighborhood welfare institution in an urban slum area, where trained settlement house volunteers worked to improve social conditions by providing community services and promoting neighborly cooperation.
The concept of addressing poverty at the neighborhood level through social and economic reform was revolutionary. Rather than monitoring the behavior of the poor through intermittent visits, settlement house workers lived right alongside the immigrant families they endeavored to help. In addition to providing a safe, clean home, settlement houses also provided poor immigrants with comprehensive care such as assistance with food, health care, English language lessons, child care, and general advocacy. The settlement house movement had a mission of no longer distinguishing between the worthy and unworthy poor, and instead recognizing the role that society played in the ongoing plight of the poor—a stance that was a departure from the traditional charity organizations.
Addams and Starr returned home from Europe convinced that it was their duty to do something similar in the United States, and with the donation of a building in Chicago, the Hull House became America’s first settlement house in 1889. Addams and her colleagues lived in the settlement house, in the middle of what was considered a bad neighborhood in Chicago, offering services targeting the underlying causes of poverty such as unfair labor practices, the exploitation of non–English-speaking immigrants, and child labor. The Hull-House quickly became the social hub for residents who gathered in the Hull-House café, and was also the informal headquarters for many of Addams’ social advocacy efforts, which ranged from advocating for women’s suffrage to advocating for racial equality (e.g., advocating against the extrajudicial lynching of Black men), to child labor laws, to global peace efforts to end war (Knight, 2010). Addams’ influence on American social welfare policy was significant, in that her work represented a shift away from the fatalistic perspectives of social Darwinism and the religious perspectives of Calvin’s Reformed theology. Instead, Addams highlighted the need for social change so that barriers to upward mobility and optimal functioning could be removed (Martin, 2012). Addams and her colleagues were committed to viewing the poor as equal members of society, just as worthy of respect and dignity as anyone else. Addams clearly saw societal conditions and the hardship of immigration as the primary cause of poverty, not necessarily one’s personal moral failing. Social inequality was perceived as the manifestation of exploitation, with social egalitarianism perceived as not just a desirable but an achievable outcome (Lundblad, 1995; Martin, 2012). Addams’ focus on social inequity was reflected in her tireless lobbying for the passage of child labor laws (despite fierce opposition by corporations and conservative politicians). 
Addams also advocated on a local and national level for labor laws that would protect the working-class poor, who were often exploited in factories with sweatshop conditions. She also worked alongside Ida B. Wells, confronting racial inequality in the United States, such as the extrajudicial lynching of Black men (Addams, 1909). Although there are no working settlement houses today, the prevailing concepts espoused by this movement, with its focus on social components of poverty and disadvantage, remain foundational to the human services and social work professions, and also serve as the roots of today’s urban neighborhood centers. Yet, despite the overall success of the settlement house movement and the particular successes of Addams with regard to achieving social reform in a variety of arenas, the threads of moralistic and deterministic philosophies have remained strongly interwoven into American society, and have continued to influence perceptions of the poor and social welfare policy and legislation.
Ida B. Wells and the Fight Against Racial Oppression
The opening vignette is about one of the greatest social reformers in modern history—Ida B. Wells, a Black reformer and social activist whose campaigns against racial oppression and inequity laid the foundation for the civil rights movement of the 1960s. As referenced in the vignette, although legal slavery ended 6 months after her birth, Wells’ life was never free from the crushing effects of severe racial prejudice and discrimination. Her schooling was interrupted when she was orphaned at the age of 16, leaving her responsible for raising her five younger siblings. This experience not only forced her to grow up quickly but also seemed to serve as a springboard for her subsequent advocacy against racial injustice. The newspaper she owned was called Free Speech, and she used this platform to write about matters of racial oppression and inequity, including the vast number of socially sanctioned crimes committed against members of the Black community (Hamington, 2005).
The indiscriminate lynching of Black men was prevalent in the South during Wells’ lifetime and was an issue that Wells became quite passionate about. Black men were commonly perceived as a threat on many levels, and there was no protection of their personal, political, or social rights. The Black man’s reputation as an “angry rapist” was endemic in White society, and many speeches were given and articles written by White community members (including clergy) about this allegedly growing problem. For example, an article published in the Commercial, a mainstream Southern newspaper, entitled “More Rapes More Lynchings,” cited the Black man’s alleged penchant for raping White women, stating:
The generation of Negroes which have grown up since the war have lost in large measure the traditional and wholesome awe of the white race which kept the Negroes in subjection… . There is no longer a restraint upon the brute passion of the Negro… . The facts of the crime appear to appeal more to the Negro’s lustful imagination than the facts of the punishment do to his fears. He sets aside all fear of death in any form when opportunity is found for the gratification of his bestial desires. (Davidson, 2008, p. 154)
Wells wrote extensively on the subject of the “myth of the angry Black man,” and the myth that all Black men raped White women (a common excuse used to justify the lynching of Black men) (Hamington, 2005). She challenged the growing sentiment in White communities that Black men, as a race, were growing more aggressive and “lustful” of White women, which she believed was prompted in part by the increasing number of biracial couples. The response to Wells’ articles was swift and harsh. A group of White men surrounded her newspaper building with the intention of lynching her, but when they could not find her, they burned down her business instead (Davidson, 2008).
Although this act of revenge essentially stopped her newspaper career, what it really did was motivate Wells even further. After the burning down of her business, Wells left the South and moved to Chicago, where she continued to wage a fierce anti-lynching campaign, often coordinating efforts with Jane Addams. She wrote numerous books and articles on racial inequality, challenging socially entrenched notions that all Black men were angry and violent sexual predators (Hamington, 2005). Wells and Addams worked as colleagues, coordinating their social justice advocacy efforts fighting for civil rights. Together, they ran the Chicago Association for the NAACP and worked collectively on a variety of projects, including fighting against racial segregation in schools (Martin, 2012; Wells, 2020).
The New Deal and Great Society Programs
In 1929 the stock market crashed, leading to a series of economic crises unprecedented in the United States. For the first time in modern U.S. history, large segments of the middle-class population lost their jobs and all means of income. Within a very short time, thousands of people who had once enjoyed financial security were suddenly without money, homes, and food. This served as a wake-up call for social reformers, many of whom had abandoned their earlier commitment to social activism because of decades of a good economy. In response, many social reformers started pushing President Hoover to develop the country’s first comprehensive system of social welfare on a federal level.
Hoover was resistant, though, fearing that a federal system of social welfare would create dependency and displace the role of private and local charities. Hoover wanted to allow time for the economy to self-correct through the capitalist system and the market economy before intervening on a federal level. Hoover was a strong believer in the power of volunteerism, believing that everyday people could be convinced of the power of helping others, without coercion. He wanted to allow time for people to jump into action and help their neighbors, and for democracy and capitalism to self-correct before intervening with broad entitlement programs (McElvaine, 1993). But much of the country apparently did not agree with this plan. In 1932, Hoover lost his bid for reelection, and Franklin D. Roosevelt was elected as the country’s 32nd president. Roosevelt immediately set about to create changes in federal policy with regard to social welfare, promising sweeping reforms in the form of comprehensive poverty alleviation programs.
From 1933 through 1938, Roosevelt instituted a series of legislative reforms and domestic programs collectively referred to as the New Deal programs. In his first 100 days in office, Roosevelt signed 13 legislative acts, including one that created the Civil Works Administration, which provided over a million temporary jobs to the unemployed; the Federal Emergency Relief Act, which provided direct aid and food to the unemployed (and was replaced by the Works Progress Administration in 1935); and one that created the Civilian Conservation Corps (CCC), which put thousands of young men ages 18 to 25 to work in reforestation and other conservation programs. Yet, as progressive as Roosevelt was, and as compassionate as the country had become toward the poor due to the realization that poverty could strike anyone, racism was still rampant, as illustrated by Roosevelt placing a 10% enrollment limit for Black men in the CCC program (Trattner, 2007).
By far the most famous of all programs in the New Deal were those created in response to the Social Security Act of 1935, which among other things created old-age pensions for all workers, unemployment compensation, Aid to Families with Dependent Children (AFDC), and aid to the blind and disabled. Programs such as the Federal Deposit Insurance Corporation (FDIC), which provided insurance for bank deposits, helped to instill a sense of renewed confidence in the banking system, and the development of the Securities and Exchange Commission (SEC), which regulates the stock market, helped to ensure that a crash similar to the one in 1929 would be unlikely to occur again. In total, Roosevelt created 15 federal programs as a part of the New Deal, some of which remain today, and some of which were dismantled once the crisis of the Great Depression subsided. Although some claim that the New Deal was not good for the country in the long run, it did pull the country out of a severe economic decline, providing relief for millions of Americans who could have literally starved had the federal government not intervened.
The United States recovered from the Great Depression and has since experienced several periods of economic growth and decline, but never any as severe as that which was prompted by the 1929 stock market crash. This is likely because of federal programs such as the FDIC and the creation of the SEC (and similar government agencies). In later times, though, the dismantling of some post-Depression financial regulations would contribute to yet another devastating economic downturn in 2007—perhaps not as severe as the Great Depression, but more serious and long lasting than any other recession experienced in the U.S. post-Depression era, particularly because of its global consequences.
The 1940s remained a time of general recovery and the 1950s was a relatively stable time, both economically and socially. Several laws were passed and agencies created that continued to advance the state of social welfare in the United States, including the creation of the U.S. Department of Health, Education, and Welfare in 1953 and the passage of the U.S. Housing Act of 1954 (Ch. 649, 68 Stat. 590).
The 1960s was a time of civil unrest and increasing rates of poverty, which spawned a resurgence of interest in social problems, including poverty and social injustice, particularly related to many at-risk populations, such as ethnic minority populations, older adults, and the mentally ill. For instance, President John F. Kennedy signed into law the Community Mental Health Centers Act (Pub. L. No. 88-164) on October 31, 1963, which transitioned the U.S. mental health care system from one of institutionalization to a community model. Kennedy was assassinated less than a month later, on November 22, 1963, and President Lyndon B. Johnson continued the Kennedy legacy with the introduction of the Great Society programs—a set of social welfare programs designed to eliminate poverty and racial injustice.
Policy areas within the Great Society programs included civil rights, education, and poverty (later popularly referred to as Johnson’s War on Poverty). Examples of some of the social welfare legislation and programs included under the umbrella of the Great Society are the Economic Opportunity Act of 1964 (Pub. L. No. 88-452); the Civil Rights Act of 1964 (Pub. L. No. 88-352); the Food Stamp Act of 1964 (Pub. L. No. 88-525); Medicare, Medicaid, and the Older Americans Act of 1965 (Pub. L. No. 89-73); the Elementary and Secondary Education Act of 1965 (Pub. L. No. 89-10); the development of the U.S. Department of Housing and Urban Development (HUD); and the Voting Rights Act of 1965 (Pub. L. No. 89-110).
Whether the Great Society and the War on Poverty programs were successful in reducing poverty, racial discrimination, and other social problems continues to be debated to this day. It’s no surprise that conclusions tend to fall along party lines, with many conservatives complaining that Johnson’s social experiment amounted to nothing more than throwing money at oversimplified problems with disastrous results, and liberals decrying just the opposite—that most of the programs had the potential to be successful but were grossly underfunded (Zarefsky, 2005). Some point to racism as the reason why many Great Society programs were ultimately dismantled (Quadagno, 1994), while others point to the Vietnam War as the reason for shifting government (and societal) priorities (Zarefsky, 2005). Regardless, many of the programs remain and represent a time in history when there was increased recognition of structural barriers in society that can keep many people from functioning at their optimal level and achieving economic self-sufficiency.
Social Welfare in Contemporary United States
A Time of Recovery: 1970 to 1990
The 1970s and 1980s were a time of mixed reviews on welfare and welfare reform. There was considerable conservative backlash in response to what was considered a few decades of liberal social welfare legislation and entitlement programs, but despite President Nixon’s opposition to welfare, existing programs continued to grow. The mid-1970s through the 1980s was a boom time economically in the United States, and boom times typically mean that people become less sympathetic toward the plight of the poor. And that’s exactly what happened—there was a resurgence of earlier negative sentiments toward the poor beginning in the mid-1970s and peaking in the 1990s.
This increased negative attitude toward the poor was reflected in several studies and national public opinion surveys that indicated a general belief that the poor were to blame for their lot in life. For instance, a national survey conducted in 1975 found that the majority of those living in the United States attributed poverty to personal failures, such as having a poor work ethic, poor money management skills, a lack of any special talent that might translate into a positive contribution to society, and low personal moral values. When asked to rank several causes of poverty, subjects ranked social forces, such as racism, poor schools, and the lack of sufficient employment opportunities, the lowest of all possible causes of poverty (Feagin, 1975).
Ronald Reagan capitalized on this negative sentiment toward the poor during his 1976 presidential campaign when he based his platform in large part on welfare reform. In several of his speeches Reagan cited the story of the woman from the South Side of Chicago who was finally arrested after committing egregious welfare fraud. He asserted that she had 80 names, 30 addresses, and 12 Social Security cards, claiming that she was also collecting veteran’s benefits on four husbands, none of whom was real. He also alleged that she was getting Social Security payments, Medicaid, and food stamps, and was collecting public assistance under all of her assumed identities (Zucchino, 1999). While Reagan never mentioned the woman’s race, the context of the story as well as the reference to the South Side of Chicago (a primarily Black community) made it clear that he was referring to a Black woman—thus playing on the common stereotype of welfare users (and abusers) as being Black (Krugman, 2007). And with that, the enduring “Myth of the Welfare Queen” was born.
Journalist David Zucchino (1999) attempted to debunk the myth that women on welfare were lazy and engaged in rampant fraud in his book The Myth of the Welfare Queen, where he explored the realities of being a mother on welfare. He noted that despite the availability of ample factual information on public assistance utilization showing very low rates of welfare fraud, the image of the Black woman who drove a Cadillac while illegally collecting welfare under numerous false identities was so embedded in American culture that it was impossible to debunk the myth. Krugman (2007) also cites how politicians and media commentators have used the myth of the welfare queen to reduce sympathy for the poor and gain public support for welfare cuts, arguing that while covert, such images clearly play on negative racial stereotypes. They also play on the common belief in the United States that those who receive welfare benefits are poor because they are lazy, promiscuous, and generally immoral.
More recent surveys conducted in the mid-1990s revealed an increase in the tendency to blame the poor for their poverty (Weaver et al., 1995), even though a considerable body of research points to social and structural dynamics as the primary cause of enduring poverty. Examples of structural causes of poverty include a shortage of affordable housing, recent shifts to a technologically based society requiring a significant increase in educational and training requirements, longstanding institutionalized oppression of and discrimination against certain racial and ethnic groups, and a general increase in the complexity of life (Martin, 2012; Wright, 2000).
The general public’s perception of social welfare programs seems to be based in large part on this negative bias against the poor and the stigmas such bias creates. Surveys conducted in the 1980s and 1990s showed support for the general idea of helping the poor, but when asked about specific programs or policies, most respondents became critical of governmental policies, specific welfare programs, and welfare recipients in general. For instance, a 1987 national study found that 74% of those surveyed believed that most welfare recipients were dishonest and collected more benefits than they deserved (Kluegel, 1987).
Welfare Reform and the Emergence of Neoliberal Economic Policies: 1990 to Now
Political discourse in the mid-1990s reflected what is often referred to as economic neoliberal philosophies—a political movement embraced by most political conservatives, espousing a belief that capitalism and the free market economy were far better solutions to many social conditions, including poverty, than government programs, which were presumed to be inefficient and poorly run. Advocates of neoliberalism pushed for social programs to be privatized based on the belief that getting social welfare out of the hands of the government and into the hands of private enterprise, where market forces could work their magic, would increase efficiency and lower costs. Market theory can be applied to many areas of the economy when there is competition among providers, a reliable workforce, clear goals, and known outcomes. Yet research has consistently shown the limits of neoliberalism, particularly in public services, including social welfare services, due to the complexity of human services issues, unknown outcomes, the need for a highly trained workforce, the lack of competition among providers, and other dynamics that make social welfare services so unique (King, 2007; Nelson, 1992; Van Slyke, 2003).
During the 1994 U.S. Congressional campaign, the Republican Party released a document entitled The New Contract with America, which included a plan to dramatically reform welfare, and according to its authors, the poor would be reformed as well (Hudson & Coukos, 2005).
The New Contract with America was introduced just a few weeks prior to Clinton’s first mid-term election and was signed by all but two of the Republican members of the House of Representatives, as well as all of the GOP Congressional candidates. In addition to a renewed commitment to smaller government and lower taxes, the contract also pledged a complete overhaul of the welfare system to root out fraud and increase the poor’s commitment to work and self-sufficiency.
Hudson and Coukos (2005) note the similarities between this political movement and the movement 100 years before, asserting that the Protestant work ethic served as the driving force behind both. Take, for instance, the common arguments for welfare reform (policies that reduce and restrict social welfare programs and services), which have often been predicated on the belief that (1) hardship is often the result of laziness; (2) providing assistance will increase laziness (and thus dependence), hence increasing hardship, not decreasing it; and (3) those in need often receive services at the expense of the working population. These arguments were cited during the COS era as reasons why material support was ill-advised.
One of the more stark (and relatively recent) examples of this sentiment was expressed by Rep. John Mica, a congressman from Florida, when he stood on the U.S. House floor, holding a sign that read “Don’t Feed the Alligators” while delivering an impassioned speech in support of welfare reform. During hearings on the state of public welfare in the United States, Rep. Mica compared people on welfare to alligators in Florida, stating that the reason for such signs is because “unnatural feeding” leads to dependency and will cause the animal to lose its natural desire for self-sufficiency. Mica argued that welfare programs have done the same for people, creating subsequent generations of enslavement and servitude (Lindsey, 2004).
While there may be some merit in debating the most effective way of structuring social welfare programs, arguments such as Mica’s negate the complexity of poverty and economic disadvantage, particularly among historically marginalized populations. They also play into longstanding stigmas and negative stereotypes that portray the poor as a homogenous group with different natures and characters than mainstream working society. These types of narratives also reflect the genderized and racialized nature of poverty, contributing to institutionalized gender bias and racism (Seccombe, 2015).
Whether veiled or overt, negative bias, particularly that which is bestowed on female public welfare recipients of color, negates the disparity in social problems experienced by Black women and other women of color (El-Bassel et al., 2009; Martin, 2012; Siegel & Williams, 2003). Negative stereotypes and myths also provide a false picture of welfare recipient demographics by implying that the largest demographic of beneficiaries is Black single women with numerous children, which statistics do not support.

PRWORA of 1996, TANF, and Other Programs for Low-Income Families
A Republican Congress may have initiated welfare reform, but it was signed into law by the Democratic Clinton administration in the form of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996. This bipartisan effort illustrated the wide support for welfare reform as well as for the underlying philosophical beliefs about what causes poverty and what types of poverty alleviation methods are effective.
The social welfare program authorized under PRWORA of 1996 is called the Temporary Assistance for Needy Families (TANF) program, which replaced the Aid to Families with Dependent Children (AFDC) program. TANF is operated at the state level through federal block grants as well as state funding. According to PRWORA, TANF has four primary goals: (1) to provide help to needy families and their children; (2) to promote job preparation, employment, and marriage so that families no longer need to depend on government assistance; (3) to reduce out-of-wedlock births; and (4) to encourage two-parent families.
Initially, TANF listed 12 different categories of acceptable work activities, but in 2008 the federal government provided additional clarity in terms of what activities would count toward TANF’s work requirement in each category. Among the 12 categories, nine are considered “core,” which means they directly count toward the required number of hours per week. Three of the categories are considered “non-core” and count only after the required hours for core activities are met. The nine core work activities include unsubsidized work, subsidized work, work experience, on-the-job training, job searches, job readiness, community service, vocational education, and providing child care to anyone participating in community service. Non-core activities include employment-related education, job skills training, and attendance at a high school or GED program.
TANF benefits include modest cash assistance for basic needs; transitional services focused on self-sufficiency, such as vocational training, rehabilitation, and child care; substance abuse, mental health, and domestic violence screening and referrals; medical assistance through a government-funded health insurance program; and Supplemental Nutrition Assistance Program (SNAP) benefits (formerly called food stamps).
States have considerable latitude in how to meet the four goals of TANF as well as how to deliver the benefits, as long as their programs remain within federal guidelines. Guidelines include time limits, which are not to exceed 60 months of lifetime benefits (in most cases); eligibility requirements, which bar all undocumented immigrants as well as immigrants who have lived in the United States for less than 5 years; and work requirements of at least 30–35 hours per week for two-parent families and 20 hours per week for single parents with young children. Parents who fail to comply with the work requirement experience sanctions such as the termination of all family benefits. Approved work activities include subsidized and unsubsidized work at a for-profit or not-for-profit organization and can also include on-the-job training and vocational training (not to exceed 12 months). A significant area of concern among social justice advocates is that educational programs, including programs to assist recipients with earning their high school diplomas, are not included in approved work categories.
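The hour-counting rule described above (core hours count directly toward the weekly requirement, while non-core hours count only after the core requirement is met) can be illustrated with a short sketch. This is a simplified illustration of the logic as summarized in the text, not an official eligibility calculator; the activity names and hour thresholds below are paraphrased assumptions rather than regulatory definitions:

```python
# Illustrative sketch of TANF work-activity counting, per the simplified
# rules described in the text. Not an official eligibility tool.

CORE_ACTIVITIES = {
    "unsubsidized work", "subsidized work", "work experience",
    "on-the-job training", "job search", "job readiness",
    "community service", "vocational education",
    "child care for community service participant",
}
NON_CORE_ACTIVITIES = {
    "employment-related education", "job skills training",
    "high school or GED attendance",
}

def countable_hours(activity_hours: dict, required_core: int) -> int:
    """Core hours count directly; non-core hours count only once the
    required core hours have been met."""
    core = sum(h for a, h in activity_hours.items() if a in CORE_ACTIVITIES)
    non_core = sum(h for a, h in activity_hours.items() if a in NON_CORE_ACTIVITIES)
    if core >= required_core:
        return core + non_core
    return core  # non-core hours do not count until core requirement is met

# Example: a single parent with young children (20-hour weekly requirement)
week = {"community service": 20, "job skills training": 5}
print(countable_hours(week, required_core=20))  # 25
```

Under this sketch, a recipient logging only non-core hours would receive no credit toward the work requirement, which mirrors the concern noted above about educational activities counting only in limited circumstances.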
According to a 2020 U.S. Department of Health and Human Services (DHHS) report, in fiscal year 2019 there were just over 1.9 million families receiving TANF benefits, with a total individual TANF caseload of about 3.8 million individuals (DHHS, 2019). This represents about 400,000 additional recipients compared to 2013 (DHHS, 2015). About half of all caseloads consisted of small female head-of-household families with one or two children, with the other half consisting of child-only cases. With regard to the racial makeup of recipients, in fiscal year 2018, 37.8% were Hispanic, 28.9% were Black, 27.2% were White, 1.9% were Asian, 1.5% were Native American and Alaska Natives, and 0.6% were Native Hawaiian and other Pacific Islanders (DHHS, 2018). Among all families receiving TANF benefits, about 90% received medical assistance, 82% received SNAP benefits, 10.8% received housing assistance, and 6.1% received subsidized child care. These percentages have remained relatively stable in the last 5 years, except SNAP benefits have increased by about 2%. The average monthly cash benefit for TANF recipients (per individual based on a family of three) ranges from $56 (Mississippi) to $308 (Alaska), and 35 states haven’t raised TANF benefits in over a decade (Burnside & Floyd, 2019).
Many social welfare advocates believe that TANF is too punitive in nature because of its strict time limits for lifetime benefits, stringent work requirements, and other punitive measures designed to control the behavior of recipients. Supporters of welfare reform rely on old arguments, citing the need to control welfare fraud and welfare dependency. They also cite a host of other behaviors exhibited by female welfare recipients, including perceived sexual promiscuity and out-of-wedlock childbearing, while focusing very little on the behaviors of the fathers, particularly those who abandon their children (Hudson & Coukos, 2005; Rainford, 2004).
Uluorta (2008) cautions that far too often morality in the United States has been defined in very narrow terms, focusing on select groups of individuals and on very specific behaviors, such as sex and sexuality, marital status, and social standing. (It is interesting to note that rarely do those criticizing the immoral behavior of the poor also frame behaviors such as greed or a lack of compassion in moral terms.) While individual responsibility is certainly worth achieving, it can also be a code word for philosophies that scapegoat the poor and minimize long-standing social inequalities. Such scapegoating is of great concern to many within the human services fields and others who recognize the wide range of ways that social problems and their causes can be framed, and the danger of focusing too heavily on perceived behavioral flaws of those who are struggling.
The belief that generous social welfare programs will result in increased dependence has deep roots in the United States and has been a powerful influence on social welfare policy development from the country’s inception. But is this true? Is there any evidence to support the contention that generous social welfare programs will increase dependence and decrease self-sufficiency? There are a few ways we can answer this question. The first is to explore empirical research on the effectiveness of TANF, specifically whether the punitive nature of the program has the intended result—to increase compliance and decrease dependence. A 2019 study examined this very dynamic and found that TANF’s structure, particularly its sanctions mechanisms that punish single mothers for noncompliance (not working enough hours, etc.), actually led to increased dependence and lower levels of self-sufficiency (Hamilton et al., 2019). The other method of assessing the effectiveness of TANF is to compare the U.S. social welfare system with systems in other high-income countries.
The Male Breadwinner Model Versus the Nordic Model
There are two primary social welfare models used in high-income countries, such as the United States and European countries: the male breadwinner model and the all adult worker model. The former is the traditional model, which assumes that men are the wage earners and women stay home to care for the children, provided for financially through their husbands’ earnings. Western societies have long presumed that this traditional model resulted in stable families (Clark, 2000; Weitzman, 1985). In fact, modern welfare systems have been constructed on the concept of the full engagement of a male workforce, where wages from employment were considered the best form of “welfare provision” (with regard to monthly income, health care benefits, and pensions) (Lewis, 2001), which may be one reason why the United States has been resistant to providing more universal governmental programs.
The male breadwinner model has been used by many countries to enforce a social structure that was believed to be the foundation of society. AFDC, the program that preceded TANF, was based on a male breadwinner model because it presumed that men were the wage earners of the family and women stayed home to care for the children, and in the absence of a male provider, the government stepped in until the woman remarried (Moffitt et al., 1994).
U.S. family behaviors have changed significantly since the 1950s, resulting in the general breakdown of the traditional family structure. We now have far more fluidity and flexibility in intimate relationships, a large increase in single-person households, and an increase in women entering the labor force (Lewis, 2001). Welfare reform in the mid-1990s was fueled by many factors, but a primary one was related to these shifting cultural tides in the United States and a building resentment toward AFDC recipients whom many Americans believed should be working (Murray, 2008).
The other social welfare model that has been adopted in most European countries is the “all adult worker model,” which assumes that all adults, males and females, are equally involved in the labor market and thus all adults are economically independent. Both TANF and the Nordic Model (the social welfare systems in the Nordic countries of Sweden, Iceland, Finland, Denmark, and Norway) are considered all adult worker models, but their design and impact are dramatically different. While TANF is technically considered an all adult worker model, the philosophical basis of the legislation strongly reflects male breadwinner values, which is captured in the legislative definition of poverty as primarily a result of teen out-of-wedlock births, and the legislative goal of marriage promotion.
The TANF program does expect all beneficiaries to work, which is consistent with the all adult worker model, but research shows that while family behavior in the United States has changed considerably since the 1950s, it hasn’t changed as much as the all adult worker model requires to be effective. For instance, women’s behavior in the United States has changed pretty substantially with respect to entering the paid workforce, but most women still only work part-time, and most are in far lower-paying fields. Also, the majority of women in the United States still perform the bulk of unpaid care work, whereas men have not changed significantly in their work-related patterns. They still engage primarily in paid work (and are paid on a much higher scale), and as a whole haven’t significantly increased their involvement in childcare or other unpaid work (Dush et al., 2018; Lewis, 2001).
Thus, while TANF is considered an all adult worker model because it is a welfare-to-work program, it does not match current behaviors in the United States with regard to labor engagement and unpaid work provision. For instance, TANF expects new mothers to enter the labor force rapidly, yet most enter low-wage service sector jobs that offer little opportunity for advancement (Mitchell et al., 2018; Seefeldt, 2017). Also, because TANF is an income-tested program, it tends to stigmatize beneficiaries, blaming poverty on individual circumstances (primarily women and their sexual behavior) rather than structural problems, such as a poor economy, a lack of jobs offering a living wage, racial oppression, domestic violence, and poor educational systems. The U.S. social welfare model in general also discourages parents from leaving the labor market to care for their children, by failing to provide paid paternity leave on a federal level.
The Nordic Model is also an all adult worker model, but the Nordic countries have a strong commitment to universal care entitlements focusing on children and older adults, thus utilization is far less stigmatized (Lewis, 2001). Temporarily exiting the labor market in the Nordic countries for unpaid care work is encouraged. Men are incentivized to temporarily leave their jobs to care for their children by the availability of generous parental leave (about 480 days) that can be split between the parents. Finally, the United States pays a fraction of what Nordic countries pay for family benefits (Ozawa, 2004).
So, what’s the answer to our question then? Which program is more effective in reducing poverty without creating dependence? Surprisingly, there aren’t many comparative studies, but one set of data we can examine is poverty rates among single mothers in the United States and the Nordic countries, to get an idea of the effectiveness of the two models. In 2018 the poverty rate of single mothers in the United States was 35.1% (U.S. Census Bureau, 2018). This is an improvement over 2007, when 50% of single mothers in the U.S. lived below the poverty line (Legal Momentum, 2011), but the U.S. rate remains far higher than in most Nordic countries, where rates range from 17% in Denmark to 24% in Sweden.
Poverty is highly complex and is influenced far more by structural factors than individual ones. As long as social welfare policy in the United States is fueled by fears of dependency (that a generous safety net will make us all lazy), chances are many people will continue to believe in the myth of the welfare queen and negate the despair many single mothers feel when faced with challenges of rising out of poverty with minimal support and high levels of stigma (Seccombe, 2015).
The Economic Crisis of 2007–2008
After years of an economic boom, the U.S. economy began faltering in about 2007 and devolved into a full-blown recession by 2008, which lasted until about 2009 or 2010. The economic recession of 2007 consisted of a dramatic and lengthy economic downturn not experienced since the Great Depression. The real estate market bubble burst, the stock market crashed, the banking industry seemed to implode, and many people lost their jobs and their houses as a result (Geithner, 2009).
President Obama and the 111th Congress responded to the economic crisis with several policy and legislative actions, including the passage of the American Recovery and Reinvestment Act of 2009 (often referred to as the Stimulus bill [Pub. L. No. 111-5]). This economic stimulus package, worth over $787 billion, included a combination of federal tax cuts, various social welfare provisions, and increases in domestic spending, and was designed to stimulate the economy and assist Americans who were suffering economically.
As a part of the 2009 Recovery Act, Congress allotted $5 billion in emergency funding to assist states with increased TANF caseloads (this funding expired in September 2010). TANF was reauthorized in 2009 and was up for reauthorization in 2015 but experienced several delays. The National Association of Social Workers (NASW) released a statement regarding reauthorization recommending several changes to the TANF program, some of which include the following:
· Increase the floor for TANF benefits to 100% of the federal poverty line. Currently, many states’ benefits are 50% of the federal poverty line, while benefits in several states are only about 30% of the federal poverty line.
· Expand the definitions of employment to include higher education, English and literacy classes, and more expansive vocational training.
· Address common barriers to employment such as physical illness, mental illness, disabilities, substance abuse, domestic violence, and sexual violence.
· Restore benefits for documented immigrants (NASW, 2015).
The stimulus package was considered largely successful and initially had the approval of the majority of Americans (Pew Research Center, 2008). The economy took years to recover, though, and some populations and regions never fully recovered, including many rural communities (Farrigan, 2014). Over time, Americans became increasingly critical of what many now call the “Wall Street Bailout.” When the 2008 elections rolled around, many in the United States were ready for a change, reflected in the election of Democrat Barack Obama.
The Election of the First Black President
The 2008 presidential election was unprecedented in many respects. The United States had its first Black and first female presidential candidates of a major party. Many people who had historically been relatively apathetic about politics were suddenly passionate about this election for a variety of reasons. Growing discontent with the leadership in the preceding 8 years coupled with a lengthy war in the Persian Gulf region and a struggling economy created a climate where significant social change could take root. Barack Obama’s campaign slogans based on hope and change (e.g., “Yes We Can!” and “Change We Can Believe In”) seemed to tap into this growing discontent.
Perhaps one of the most significant federal laws passed during the Obama administration was the Patient Protection and Affordable Care Act of 2010 (ACA) (PPACA, 2010). The ACA (or its more commonly used name, Obamacare) was signed into law by President Obama in March 2010 after a fierce public relations battle waged by many Republicans and health insurance companies designed to prevent its passage. The ACA, which took effect incrementally between 2010 and 2014, is a comprehensive health care reform bill. Overall, this legislation is designed to make it easier for individuals and families to obtain quality lower-cost health insurance by having people apply for a policy through a central exchange. One of the goals of the legislation was to make it more difficult for health insurance companies to deny coverage, particularly based on preexisting conditions. The ACA also expands Medicare in a variety of ways, including bolstering community and home-based health care services, and providing incentives for preventive, holistic, and wellness care. With respect to behavioral and mental health care, the ACA provides increased incentives for coordinated care and school-based care, including mental health care and substance abuse treatment. It also includes provisions requiring the inclusion of mental health and substance abuse coverage in benefits packages, including prescription drug coverage and wellness and prevention services. Although the Trump administration attempted to weaken the ACA in a variety of ways, it remains an effective piece of legislation as long as states comply with the act’s mandates, including providing oversight for unwarranted price increases.
Political speeches and debates leading up to the 2012 presidential elections revealed the same debate about the causes of poverty and effective poverty alleviation strategies. After a brief display of compassion toward the poor at the height of the 2008 economic crisis, harsh sentiments reflecting historic stigmatization of the poor were strongly espoused, particularly among potential Republican primary candidates who continued their campaign against “big government,” social welfare programs, and civil liberties in general. One 2012 Republican presidential candidate, Newt Gingrich, even went so far as to challenge current child labor laws, calling them “stupid.” In a campaign speech in Iowa in the fall of 2011, Gingrich characterized poor ethnically diverse children living in poor neighborhoods as lazy and having no work ethic. In two different speeches (his initial speech and a subsequent speech where he was asked to clarify his earlier comments), Gingrich suggested that poor children in poor neighborhoods could start work early, perhaps as janitorial staff in their own schools (Dover, 2011). Gingrich’s sentiments completely negated the role of racial oppression and White privilege in the poverty experienced by racial and ethnic minorities in the United States.
President Obama significantly advanced social justice during his presidency, including signing into law the Matthew Shepard and James Byrd, Jr. Hate Crimes Prevention Act of 2009, which extended federal protection to victims of hate crimes based on actual or perceived sexual orientation or gender identity. Obama also signed into law the Fair Sentencing Act of 2010, which addressed the disparity in sentencing laws between powder cocaine and crack cocaine, a disparity that impacted primarily people of color. In 2011, the repeal of Don’t Ask, Don’t Tell (DADT) (an official policy of the U.S. government that prohibited the military from discriminating against gay and lesbian military personnel as long as they kept their sexual orientation a secret) took effect, which meant that lesbian, gay, and bisexual Americans could serve openly in the U.S. Armed Services without fear of dismissal.
Other advances in social justice legislation and policy include Obama’s 2012 executive action implementing the Deferred Action for Childhood Arrivals (DACA) policy, providing legal protection for undocumented migrant youth who came to the United States as children with their parents, until federal legislation could be passed to provide them a path to citizenship. Obama also advocated in support of marriage equality for same-sex couples in advance of the 2015 Supreme Court case Obergefell v. Hodges, which legalized same-sex marriage throughout the United States. And in 2016, the Obama administration increased the annual refugee threshold to 110,000 to accommodate the resettlement of Syrian refugees, among other highly vulnerable populations.
Some of these advances have since been dismantled by President Trump, including the withdrawal of DACA (an action since reversed by the U.S. Supreme Court) and the consistent lowering of the annual refugee threshold, to 18,000 in 2020, the lowest number in the history of the U.S. refugee program (U.S. Department of State, 2020). Despite these dramatic shifts in values and policy approaches, President Obama remains a key figure in U.S. history, not only because of his race, but because of his social justice legacy and his overall popularity.
The Tea Party Movement
A powerful conservative social movement that sprang up in 2009 in reaction to the Obama presidency is the American Tea Party movement, a part of the Christian Right and a fringe part of the Republican base. Tea Party members advocated for smaller government, lower taxes (the name of the group is a reference to the Boston Tea Party), states’ rights, and the literal interpretation of the U.S. Constitution. The Tea Party movement gained a reputation for advocating for very conservative policies that advanced traditional American values such as marriage between a man and a woman, restrictions on abortions, and governance that supported conservative Christian values. For instance, Michele Bachmann, a Tea Party member, former Minnesota congresswoman, and 2012 presidential candidate, asserted in a 2006 speech that religion was supposed to be a part of government, and that the notion of separation of church and state (contained in the First Amendment of the U.S. Constitution) was a myth (Turley, 2011).
The Tea Party has been criticized for many reasons, including being anti-immigrant and racist, accusations the party’s leadership strongly denied. And yet, media coverage of Tea Party rallies frequently highlighted their racially charged tone, such as racial slurs on posters, many of which were directed at former President Obama’s ethnic background. Although “tea partiers” often denied accusations of bias, a study conducted during the height of the movement showed that about 60% of Tea Party opponents believed that the movement had strong racist and homophobic overtones (Gardner & Thompson, 2010). Most members as of 2019 have shed the Tea Party label but remain an influential core of the Republican party as conservative evangelicals.
The Era of Donald Trump
Donald Trump, a reality television star and real estate mogul, was elected president in 2016, and took office the following January, surprising many people in the United States and around the globe. Trump’s election was also a surprise to political pollsters, many of whom predicted Hillary Clinton had a clear path to the White House (Wright & Wright, 2018). The disappointment and shock many Democrats felt in response to Trump’s win was in large part rooted in the contentious nature of the election and the belief that Americans would not vote for someone so mired in scandal and who espoused such controversial rhetoric in speeches and tweets that many believed reflected racism, sexism, and xenophobia (Jacobson, 2017; Tani, 2016). But that’s precisely what happened.
Trump’s policies, particularly his stance on immigration and his “America First” rhetoric, are consistent with right-wing populism, a far-right political ideology that is rooted in nationalism and protectionism (Dunn, 2015; Mudde, 2013; Wodak, 2015). Right-wing populism is by definition anti-immigrant, since immigrants are perceived as a threat to the country’s traditional culture and way of life (Bonikowski, 2017; Ybarra et al., 2016). A wave of right-wing populism, particularly those fueling anti-immigrant social movements, has swept the globe in recent years (Donovan & Redlawsk, 2018), so from a broader perspective Trump’s election is wholly in line with global political trends throughout the second decade of the 21st century.
The dominant narrative in the wake of Trump’s 2016 win was that it was economic anxiety that drove support for Trump—the forgotten White working class, those living in rural communities, the former manufacturing states of the upper Midwest (often called the Rust Belt), and coal country. And yet, recent research has revealed that Trump’s popularity was rooted not as much in economic anxiety, but in cultural anxiety—a profound concern among White voters, particularly working-class men without a college education, that they were losing their cultural status to ethnic minority populations (Mutz, 2018). This dynamic is referred to by researchers as “out-group anxiety,” a response to multicultural changes in the United States, such as increasing acceptance of multiculturalism (Barreto et al., 2011).
In addition to the support of White working-class voters, approximately 80% of White conservative Christians who self-identified as born-again and/or evangelical (Protestants, Catholics, and Mormons) voted for Trump in 2016 (CNN, 2016; Smith & Martinez, 2016). This was also surprising to many Americans, in light of Trump having been married three times, reports of his many extramarital affairs, and his personal admission on an Access Hollywood videotape of his sexual exploits, including grabbing women by their genitals (Fahrenthold, 2016). A 2016 poll by Christianity Today revealed that despite admitting that Trump was difficult to like, the majority of self-described White evangelicals believed that Trump was honest, a good role model, and well qualified to be president (interestingly, the majority of Black evangelicals reported almost the exact opposite sentiments) (Eekhoff-Zylstra & Weber, 2016).
In addition to the unrelenting support of his base, the 2016 election was significantly influenced by social media, particularly propaganda (or “fake news”) disseminated via Facebook and Twitter that swayed many people’s opinions about both Hillary Clinton and Donald Trump (Allcott & Gentzkow, 2017). Concerns about Russian election meddling and e-mail hacking, allegations that the Trump administration cooperated with the Russian government, and the discovery of Russian “troll farms” that used Facebook and Twitter to influence Americans to vote for Trump rather than Clinton remain controversial topics that will likely take years to fully understand. In the meantime, political polarization remains high, with research showing that most Americans now have little contact with people in the opposing political party (Pew Research Center, 2016).
Social media will no doubt continue to play a pivotal role in the political polarization in the United States. Research indicates that when people are exposed to opposing views on sites like Twitter, they become even more entrenched in their political stances (Bail et al., 2018). Social media is also used for good. For instance, advocacy is increasingly occurring online to affect changes in public policy. A 2017 study found that among advocates and advocacy organizations 70% use Facebook for advocacy purposes and 75% use Twitter. Additionally, over 50% of advocacy organizations now have a professional position designated for social media (Rehr, 2017). Social media is consistently evolving, in both usage and functionality, so it’s important that human services professionals remain up to date in all ways social media is being used—both positively and negatively.
The election of Donald Trump significantly changed the policy priorities of the United States, including social welfare policy. Human services professionals and others in the helping fields applauded Trump’s support for criminal justice reform, reflected in his signing into law the First Step Act—a bipartisan bill that addresses racial disparities in sentencing laws—but remain highly concerned about the Trump administration’s stances on immigration, including the separation of Central American families seeking political asylum, the increase in expedited deportations without hearings, a dramatic reduction in the refugee resettlement threshold, and rollbacks in environmental protections. The NASW’s first statement regarding President Trump, released the day after the 2016 election, encouraged Trump to heal the divisiveness caused by his campaign (NASW, 2016), and yet, according to the Southern Poverty Law Center, hate crimes rose steadily during the Trump administration (Beirich, 2019).
The NASW has also released statements expressing significant concern about Trump’s various policies, particularly those that impact the most vulnerable and marginalized members of U.S. society. Examples include the NASW statements on the Trump administration’s travel ban on refugees coming from primarily Muslim nations (NASW, 2017b), its policy of separating migrant families at the border (NASW, 2018), and the administration’s economic policies (NASW, 2017a). Trump was also highly criticized for his handling of the 2020 coronavirus pandemic, including concerns that he was slow to respond to the crisis (particularly with regard to testing and quarantining), but the NASW praised the Trump administration for signing into law two emergency bills: the Families First Coronavirus Response Act (H.R. 6201), which among other things expanded unemployment benefits, emergency family leave, and sick leave for those impacted by the coronavirus, and the Coronavirus Preparedness and Response Supplemental Appropriations Act of 2020 (H.R. 6074), which provided emergency funding for public health agencies and major expansions in the use of telehealth.
Despite the controversy surrounding Trump’s 2016 election and the values and choices made by his administration, another positive development in response to his election has been the dramatic increase in grassroots social justice advocacy in the form of rallies and protests (e.g., the Women’s March, the March for Science), and a significant increase in people of color and women running for political office. Social media played a significant role in these developments as well. For instance, the Women’s March was organized primarily on Facebook through shared statuses on timelines, and in pages and groups, as well as the use of the platform’s event calendar to organize people globally. The number of women who showed up globally to march for gender equality was unprecedented, and this grassroots coordination could not have occurred without the broad and rapid reach of social media. President Trump lost his 2020 re-election bid to Joe Biden, which many social reformers saw as a positive sign. And yet, it’s likely that political and social polarization will continue for quite some time.
Conclusion
The United States is often referred to as a reluctant welfare state because throughout its history a battle has been waged between reformers, who advocate for a compassionate, inclusive, less-stigmatized social safety net, and opposing groups, who advocate for a system with less government involvement, more privatization, and increased work incentives based on fears that a generous social safety net will decrease incentives for people to work. Currently it would be more accurate to describe the U.S. social welfare system as a piecemeal welfare-to-work system that focuses more on the behavior of the poor than on structural causes of poverty that act as the barriers to self-sufficiency. But it is also accurate to describe the U.S. social welfare system as ever evolving, reflected in the passage of emergency stimulus bills in response to the 2020 pandemic. Other changes are on the horizon as well, but they will be highly influenced by political leadership and economic constraints.
Summary
· The ways in which England’s historic system of poor care influenced the development of social welfare policy in the United States are analyzed, tracing aspects of social welfare provision from England’s feudal system in the Middle Ages, to the Elizabethan Poor Laws, to the development of the social welfare system in Colonial America.
· Movements and associated philosophical influences in poor care and social reform in early America are compared and contrasted. Various philosophical and religious movements that have influenced perceptions of the poor and social welfare policy, such as Calvin’s Protestant work ethic and the concept of predestination, social Darwinism, and the settlement house movement, are discussed.
· Early leaders in the fight for social justice are explored, including Jane Addams and Ida B. Wells, with a particular focus on how the social justice movement formed the underlying values of the human services profession.
· The ways that the New Deal and Great Society programs alleviated poverty after the Great Depression are discussed, exploring the successes and failures of the post–Depression programs on poverty alleviation in the United States.
· A summary and analysis of current social welfare approaches and programs are explored, including the 1970s recovery, welfare reform and TANF, the 2008 economic crisis, the Obama administration, and the Trump administration, with a particular focus on the impact of social welfare policy and provision on at-risk populations.