Evaluation critique


4-6 pages, double-spaced.

Assignment Critique

Select a completed evaluation to critique. The Better Evaluation and American Evaluation Association websites are good starting points for finding an evaluation. It is also useful to do a Google search in an area of interest, such as "evaluations of early childhood programs" or "evaluation of an international education initiative," to find an evaluation. Academic journals may have brief summaries, but you often have to go to the funding organization's or the program organization's website to find the complete evaluation.

Write a critique of this evaluation. Your critique should be 4-6 pages, double-spaced, in size 12 font. It should include the following:

An overview of the program

The purpose of the evaluation, including who wants the evaluation and how the evaluation will be used

Description of the evaluation design, and a logic model describing the program and its outcomes

Description of the data sources and analysis

Summary of evaluation findings

Strengths and weaknesses of the evaluation. Please incorporate information from the AEA Guiding Principles and the UNEG Quality Checklist, as well as information about good, rigorous research design.

Be sure to include references, including the evaluation's executive summary and your textbooks, as well as any other materials you incorporate into your critique.

Critique Rubric (50 points)

Each component below is rated Proficient, Adequate, or Insufficient, with space for comments:

Program Overview

Evaluation Purpose

Evaluation Design

Logic Model

Data sources and analysis

Summary

Strengths and Weaknesses

Incorporation of information from the AEA Guiding Principles, the UNEG Checklist, and other classroom materials

Well organized and readable, with few spelling and grammatical errors; consistent use of a style guide

References included and cited appropriately, consistent with the style guide

Logic Model

Program: _____________________________________________

Goal or Purpose: _____________________________________________

Outcomes

Short-term    Medium-term    Impact

Outputs

Activities

Inputs/Resources

Assumptions

External Factors

UNEG Quality Checklist for
Evaluation Reports

Approved at the UNEG AGM 2010, this quality checklist for evaluation reports
serves as a guideline for UNEG members in the preparation and assessment
of an evaluation report.

Based on the UNEG Norms and Standards for evaluation, this checklist
includes critical indicators for a high-quality evaluation report.

Guidance Document UNEG/G(2010)/2


UNEG Quality Checklist for Evaluation Reports

This checklist is intended to help evaluation managers and evaluators ensure that the final product of the evaluation – the evaluation report – meets the expected quality. It can also be shared as part of the TOR prior to the conduct of the evaluation, or used after the report is finalized to assess its quality.

Evaluation Title:

Commissioning Office:

1. The Report Structure

1.0 The report is well structured, logical, clear and complete.

1.1 Report is logically structured with clarity and coherence (e.g. background and objectives are presented before findings, and findings are presented before conclusions and recommendations).

1.2 The title page and opening pages provide key basic information.

1. Name of the evaluation object

2. Timeframe of the evaluation and date of the report

3. Locations (country, region, etc.) of the evaluation object

4. Names and/or organizations of evaluators

5. Name of the organization commissioning the evaluation

6. Table of contents which also lists Tables, Graphs, Figures and Annexes

7. List of acronyms.

1.3 The Executive Summary is a stand-alone section of 2-3 pages that includes [1]:

1. Overview of the evaluation object

2. Evaluation objectives and intended audience

3. Evaluation methodology

4. Most important findings and conclusions

5. Main recommendations

[1] Executive Summary: Critical elements are listed in UNEG Standards for Evaluation in the UN System (UNEG/FN/Standards[2005]), page 18, Standard 4.2, Number 3.


1.4 Annexes increase the credibility of the evaluation report. They may include, inter alia [2]:

1. TORs

2. List of persons interviewed and sites visited.

3. List of documents consulted

4. More details on the methodology, such as data collection instruments, including details of their reliability and validity

5. Evaluators' biodata and/or justification of team composition

6. Evaluation matrix

7. Results framework

2. Object of Evaluation

2.0 The report presents a clear and full description of the 'object' of the evaluation [3].

2.1 The logic model and/or the expected results chain (inputs, outputs and outcomes) of the object is clearly described.

2.2 The context of key social, political, economic, demographic, and institutional factors that have a direct bearing on the object is described. For example, the partner government's strategies and priorities, international, regional or country development goals, strategies and frameworks, the concerned agency's corporate goals and priorities, as appropriate.

2.3 The scale and complexity of the object of the evaluation are clearly described, for example:

• The number of components, if more than one, and the size of the population each component is intended to serve, either directly or indirectly

• The geographic context and boundaries (such as the region, country, and/or landscape) and challenges where relevant

• The purpose and goal, and organization/management of the object

• The total resources from all sources, including human resources and budget(s) (e.g. concerned agency, partner government and other donor contributions)

[2] Content of Annexes is described in UNEG Standards for Evaluation in the UN System (UNEG/FN/Standards[2005]), page 20, Standard 4.9 and page 23, Standard 4.18.

[3] The "object" of the evaluation is the intervention (outcome, programme, project, group of projects, themes, soft assistance) that is (are) the focus of the evaluation and evaluation results presented in the report.


2.4 The key stakeholders involved in the object implementation, including the implementing agency(s) and partners, other key stakeholders and their roles.

2.5 The report identifies the implementation status of the object, including its phase of implementation and any significant changes (e.g. plans, strategies, logical frameworks) that have occurred over time, and explains the implications of those changes for the evaluation.

3. Evaluation Purpose, Objective(s) and Scope.

3.0 The evaluation’s purpose, objectives and scope are fully explained.

3.1 The purpose of the evaluation is clearly defined, including why the evaluation was needed at that point in time, who needed the information, what information is needed, and how the information will be used.

3.2 The report should provide a clear explanation of the evaluation objectives and scope, including main evaluation questions, and describe and justify what the evaluation did and did not cover.

3.3 The report describes and provides an explanation of the chosen evaluation criteria, performance standards, or other criteria used by the evaluators [4].

3.4 As appropriate, evaluation objectives and scope include questions that address issues of gender and human rights.

4. Evaluation Methodology

4.0 The report presents a transparent description of the methodology applied to the evaluation that clearly explains how the evaluation was specifically designed to address the evaluation criteria, yield answers to the evaluation questions and achieve evaluation purposes.

4.1 The report describes the data collection methods and analysis, the rationale for selecting them, and their limitations. Reference indicators and benchmarks are included where relevant.

4.2 The report describes the data sources, the rationale for their selection, and their limitations. The report includes discussion of how the mix of data sources was used to obtain a diversity of perspectives, ensure data accuracy and overcome data limits.

[4] The most commonly applied evaluation criteria are the following: the five OECD/DAC criteria of relevance, efficiency, effectiveness, impact and sustainability. Each evaluation may have a different focus (not all criteria are addressed in every evaluation). Each agency may wish to add an indicator in this instrument, in order to assess the extent to which each criterion is addressed in the evaluation.


4.3 The report describes the sampling frame – area and population to be represented, rationale for selection, mechanics of selection, numbers selected out of potential subjects, and limitations of the sample.

4.4 The evaluation report gives a complete description of the stakeholder consultation process in the evaluation, including the rationale for selecting the particular level and activities for consultation.

4.5 The methods employed are appropriate for the evaluation and to answer its questions.

4.6 The methods employed are appropriate for analysing gender and rights issues identified in the evaluation scope.

4.7 The report presents evidence that adequate measures were taken to ensure data quality, including evidence supporting the reliability and validity of data collection tools (e.g. interview protocols, observation tools, etc.)

5. Findings

5.0 Findings respond directly to the evaluation criteria and questions detailed in the scope and objectives section of the report and are based on evidence derived from data collection and analysis methods described in the methodology section of the report.

5.1 Reported findings reflect systematic and appropriate analysis and interpretation of the data.

5.2 Reported findings address the evaluation criteria (such as efficiency, effectiveness, sustainability, impact and relevance) and questions defined in the evaluation scope.

5.3 Findings are objectively reported based on the evidence.

5.4 Gaps and limitations in the data and/or unanticipated findings are reported and discussed.

5.5 Reasons for accomplishments and failures, especially continuing constraints, were identified as much as possible.

5.6 Overall findings are presented with clarity, logic, and coherence.

6. Conclusions

6.0 Conclusions present reasonable judgments based on findings and substantiated by evidence, and provide insights pertinent to the object and purpose of the evaluation.

6.1 The conclusions reflect reasonable evaluative judgments relating to key evaluation questions.

6.2 Conclusions are well substantiated by the evidence presented and are logically connected to evaluation findings.


6.3 Stated conclusions provide insights into the identification and/or solutions of important problems or issues pertinent to the prospective decisions and actions of evaluation users.

6.4 Conclusions present strengths and weaknesses of the object (policy, programmes, projects or other intervention) being evaluated, based on the evidence presented and taking due account of the views of a diverse cross-section of stakeholders.

7. Recommendations

7.0 Recommendations are relevant to the object and purposes of the evaluation, are supported by evidence and conclusions, and were developed with the involvement of relevant stakeholders.

7.1 The report describes the process followed in developing the recommendations including consultation with stakeholders.

7.2 Recommendations are firmly based on evidence and conclusions.

7.3 Recommendations are relevant to the object and purposes of the evaluation.

7.4 Recommendations clearly identify the target group for each recommendation.

7.5 Recommendations are clearly stated with priorities for action made clear.

7.6 Recommendations are actionable and reflect an understanding of the commissioning organization and potential constraints to follow-up.

8. Gender and Human Rights

8.0 The report illustrates the extent to which the design and implementation of the object, the assessment of results, and the evaluation process incorporate a gender equality perspective and a human rights based approach.

8.1 The report uses gender sensitive and human rights-based language throughout, including data disaggregated by sex, age, disability, etc.

8.2 The evaluation approach and data collection and analysis methods are gender equality and human rights responsive and appropriate for analyzing the gender equality and human rights issues identified in the scope.

8.3 The report assesses whether the design of the object was based on a sound gender analysis and human rights analysis, whether implementation for results was monitored through gender and human rights frameworks, and the actual results on gender equality and human rights.

8.4 Reported findings, conclusions, recommendations and lessons provide adequate information on gender equality and human rights aspects.

Guiding Principles
The American Evaluation Association’s mission is to improve evaluation practices and methods, increase
evaluation use, promote evaluation as a profession, and support the contribution of evaluation to the
generation of theory and knowledge about effective human action. Evaluation involves assessing the
strengths and weaknesses of programs, policies, personnel, products, and organizations.

Preface to Evaluators’ Ethical Guiding Principles

Purpose of the Guiding Principles: The Guiding Principles
reflect the core values of the American Evaluation Association
(AEA) and are intended as a guide to the professional ethical
conduct of evaluators.

Focus and Interconnection of the Principles: The
five Principles address systematic inquiry, competence,
integrity, respect for people, and common good
and equity. The Principles are interdependent and
interconnected. At times, they may even conflict with one
another. Therefore, evaluators should carefully examine
how they justify professional actions.

Use of Principles: The Principles govern the behavior
of evaluators in all stages of the evaluation from the
initial discussion of focus and purpose, through design,
implementation, reporting, and ultimately the use of the
evaluation.

Communication of Principles: It is primarily the
evaluator’s responsibility to initiate discussion and
clarification of ethical matters with relevant parties to the
evaluation. The Principles can be used to communicate
to clients and other stakeholders what they can expect in
terms of the professional ethical behavior of an evaluator.

Professional Development about Principles:
Evaluators are responsible for undertaking professional
development to learn to engage in sound ethical
reasoning. Evaluators are also encouraged to consult with
colleagues on how best to identify and address ethical
issues.

Structure of the Principles: Each Principle is
accompanied by several sub-statements to amplify the
meaning of the overarching principle and to provide
guidance for its application. These sub-statements do not
include all possible applications of that principle, nor are
they rules that provide the basis for sanctioning violators.
The Principles are distinct from Evaluation Standards
and evaluator competencies.

Evolution of Principles: The Principles are part of an
evolving process of self-examination by the profession in
the context of a rapidly changing world. They have been
periodically revised since their first adoption in 1994.
Once adopted by the membership, they become the
official position of AEA on these matters and supersede
previous versions. It is the policy of AEA to review the
Principles at least every five years, engaging members in
the process. These Principles are not intended to replace
principles supported by other disciplines or associations
in which evaluators participate.

Glossary of Terms
Common Good – the shared benefit for all or most
members of society including equitable opportunities
and outcomes that are achieved through citizenship and
collective action. The common good includes cultural,
social, economic, and political resources as well as natural
resources involving shared materials such as air, water and
a habitable earth.

Contextual Factors – geographic location and conditions;
political, technological, environmental, and social climate;
cultures; economic and historical conditions; language,
customs, local norms, and practices; timing; and other
factors that may influence an evaluation process or its
findings.

Culturally Competent Evaluator – “[an evaluator who]
draws upon a wide range of evaluation theories and
methods to design and carry out an evaluation that is
optimally matched to the context. In constructing a model or
theory of how the evaluand operates, the evaluator reflects
the diverse values and perspectives of key stakeholder
groups.”1

Environment – the surroundings or conditions in which a
being lives or operates; the setting or conditions in which a
particular activity occurs.

Equity – the condition of fair and just opportunities for all
people to participate and thrive in society regardless of
individual or group identity or difference. Striving to achieve
equity includes mitigating historic disadvantage and existing
structural inequalities.

Guiding Principles vs. Evaluation Standards – the Guiding
Principles pertain to the ethical conduct of the evaluator
whereas the Evaluation Standards pertain to the quality of
the evaluation.

People or Groups – those who may be affected by an
evaluation including, but not limited to, those defined
by race, ethnicity, religion, gender, income, status,
health, ability, power, underrepresentation, and/or
disenfranchisement.

Professional Judgment – decisions or conclusions based
on ethical principles and professional standards for evidence
and argumentation in the conduct of an evaluation.

Stakeholders – individuals, groups, or organizations served
by, or with a legitimate interest in, an evaluation including
those who might be affected by an evaluation.

1 American Evaluation Association (2011). Public Statement on Cultural Competence
in Evaluation. Washington DC: Author. p. 3.

AEA Guiding Principles

A: Systematic Inquiry: Evaluators conduct data-based inquiries that are thorough, methodical, and contextually relevant.

A1. Adhere to the highest technical standards appropriate to the methods being used while attending to the evaluation's scale and available resources.

A2. Explore with primary stakeholders the limitations and strengths of the core evaluation questions and the approaches that might be used for answering those questions.

A3. Communicate methods and approaches accurately, and in sufficient detail, to allow others to understand, interpret, and critique the work.

A4. Make clear the limitations of the evaluation and its results.

A5. Discuss in contextually appropriate ways the values, assumptions, theories, methods, results, and analyses that significantly affect the evaluator's interpretation of the findings.

A6. Carefully consider the ethical implications of the use of emerging technologies in evaluation practice.

B: Competence: Evaluators provide skilled professional services to stakeholders.

B1. Ensure that the evaluation team possesses the education, abilities, skills, and experiences required to complete the evaluation competently.

B2. When the most ethical option is to proceed with a commission or request outside the boundaries of the evaluation team's professional preparation and competence, clearly communicate any significant limitations to the evaluation that might result. Make every effort to supplement missing or weak competencies directly or through the assistance of others.

B3. Ensure that the evaluation team collectively possesses or seeks out the competencies necessary to work in the cultural context of the evaluation.

B4. Continually undertake relevant education, training or supervised practice to learn new concepts, techniques, skills, and services necessary for competent evaluation practice. Ongoing professional development might include: formal coursework and workshops, self-study, self- or externally-commissioned evaluations of one's own practice, and working with other evaluators to learn and refine evaluative skills and expertise.

C: Integrity: Evaluators behave with honesty and transparency in order to ensure the integrity of the evaluation.

C1. Communicate truthfully and openly with clients and relevant stakeholders concerning all aspects of the evaluation, including its limitations.

C2. Disclose any conflicts of interest (or appearance of a conflict) prior to accepting an evaluation assignment and manage or mitigate any conflicts during the evaluation.

C3. Record and promptly communicate any changes to the originally negotiated evaluation plans, the rationale for those changes, and the potential impacts on the evaluation's scope and results.

C4. Assess and make explicit the stakeholders', clients', and evaluators' values, perspectives, and interests concerning the conduct and outcome of the evaluation.

C5. Accurately and transparently represent evaluation procedures, data, and findings.

C6. Clearly communicate, justify, and address concerns related to procedures or activities that are likely to produce misleading evaluative information or conclusions. Consult colleagues for suggestions on proper ways to proceed if concerns cannot be resolved, and decline the evaluation when necessary.

C7. Disclose all sources of financial support for an evaluation, and the source of the request for the evaluation.

D: Respect for People: Evaluators honor the dignity, well-being, and self-worth of individuals and acknowledge the influence of culture within and across groups.

D1. Strive to gain an understanding of, and treat fairly, the range of perspectives and interests that individuals and groups bring to the evaluation, including those that are not usually included or are oppositional.

D2. Abide by current professional ethics, standards, and regulations (including informed consent, confidentiality, and prevention of harm) pertaining to evaluation participants.

D3. Strive to maximize the benefits and reduce unnecessary risks or harms for groups and individuals associated with the evaluation.

D4. Ensure that those who contribute data and incur risks do so willingly, and that they have knowledge of and opportunity to obtain benefits of the evaluation.

E: Common Good and Equity: Evaluators strive to contribute to the common good and advancement of an equitable and just society.

E1. Recognize and balance the interests of the client, other stakeholders, and the common good while also protecting the integrity of the evaluation.

E2. Identify and make efforts to address the evaluation's potential threats to the common good, especially when specific stakeholder interests conflict with the goals of a democratic, equitable, and just society.

E3. Identify and make efforts to address the evaluation's potential risks of exacerbating historic disadvantage or inequity.

E4. Promote transparency and active sharing of data and findings with the goal of equitable access to information in forms that respect people and honor promises of confidentiality.

E5. Mitigate the bias and potential power imbalances that can occur as a result of the evaluation's context. Self-assess one's own privilege and positioning within that context.


Evaluating Community-Based Health Improvement Programs

By Carrie E. Fry, Sayeh S. Nikpay, Erika Leslie, and Melinda B. Buntin

Health Affairs 37, no. 1 (2018): 22–29. doi: 10.1377/hlthaff.2017.1125. ©2018 Project HOPE—The People-to-People Health Foundation, Inc.

Carrie E. Fry (carrie_fry@g.harvard.edu) is a doctoral student in health policy at the Harvard Graduate School of Arts and Sciences, in Cambridge, Massachusetts. Sayeh S. Nikpay is an assistant professor in the Department of Health Policy at Vanderbilt University School of Medicine, in Nashville, Tennessee. Erika Leslie is a postdoctoral fellow in the Department of Health Policy at Vanderbilt University School of Medicine. Melinda B. Buntin is a professor in and chair of the Department of Health Policy at Vanderbilt University School of Medicine.

ABSTRACT Increasingly, public and private resources are being dedicated
to community-based health improvement programs. But evaluations of
these programs typically rely on data about process and a pre-post
study design without a comparison community. To better determine
the association between the implementation of community-based
health improvement programs and county-level health outcomes, we
used publicly available data for the period 2002–06 to create a
propensity-weighted set of controls for conducting multiple regression
analyses. We found that the implementation of community-based health
improvement programs was associated with a decrease of less than
0.15 percent in the rate of obesity, an even smaller decrease in the
proportion of people reporting being in poor or fair health, and a
smaller increase in the rate of smoking. None of these changes was
significant. Additionally, program counties tended to have younger
residents and higher rates of poverty and unemployment than
nonprogram counties. These differences could be driving forces behind
program implementation. To better evaluate health improvement
programs, funders should provide guidance and expertise in
measurement, data collection, and analytic strategies at the beginning of
program implementation.

Over the past decade the private and public sectors have made large community-based investments in improving population health. Many of these investments have been made in multisector coalitions that seek to improve specific communitywide health outcomes, such as reductions in obesity or smoking. Through their programs, these coalitions develop consensus on targeted health outcomes, potential metrics, and programs for implementation; align existing resources in community-based organizations; and implement evidence-based interventions to fill programmatic gaps. Despite often substantial financial investment, little is known about the relationship between the implementation of a health improvement program and the subsequent health status of the community.
Previous studies of community-based health improvement programs have found that they are influential in changing individual behavior and health-related community policies1,2 but do not produce significant changes in health outcomes, even after ten years.3–8 Much of the earlier literature that demonstrated positive changes in attributable health outcomes was limited to smaller, health care–oriented interventions, specific racial or ethnic groups, or highly specific health conditions.9–12 A more recent study investigating self-reported public health coalition activity found that greater planning activity was associated with reductions in mortality.13

These previous reports highlight the challenges inherent in evaluating community-based health improvement programs. Communities that implement these programs might not have sufficient resources to collect data or measure health outcomes. Evaluations of these programs typically rely on easily collectible data and pre-post designs without comparison or control communities. And because these evaluations do not adjust for secular trends, it is difficult to link program implementation to changes in health behavior, attitudes, or outcomes. Nevertheless, the economic and human capital investments being made in health improvement programs warrant the use of more rigorous research designs.14

This study used a pre-post design with county-level health status comparisons to evaluate community-based health improvement programs implemented in the period 2007–12. By combining multiple programs into a single analysis, examining changes in specific health outcomes, and using a more rigorous design, this study provides insight into such programs' potential to make positive changes in population health outcomes. Our analysis also demonstrates important threats to the validity of commonly used evaluation designs.

Study Data And Methods

Because many of the communities in our data set implemented programs at the county level, we focused on the association between these programs and county-level health outcomes. We used multiple sources of publicly available data to create an inverse propensity-weighted set of controls for conducting multiple regression analyses.

Data We conducted extensive internet searches for relevant community-based health improvement programs and contacted leaders at national foundations and governmental agencies engaged in population health efforts to identify an initial set of programs to examine. Through snowball-sampled conversations with these leaders and, subsequently, with leaders of the programs, we attempted to define the universe of programs that met our program criteria. (For the programs included in our analysis, see online appendix exhibit 1.)15 We shared this list with major foundations and agencies operating in this area to ensure that we identified all relevant programs, and we iterated our identification strategy based on their feedback.
We then defined the geographical areas (or program sites) covered by each implemented program. Most program sites involved only a single county, or a large metropolitan area within a county, but others encompassed multicounty regions. The majority of the programs were implemented at the county level, and programs serving areas larger than a county could be disaggregated to the county level, which suggested that county-level analysis was most appropriate for this study.
We included communities that implemented a program in the period 2007–12 if their program included multiple sectors, such as private industry, health care organizations, and public health departments; was externally funded; or received guidance, oversight, or technical assistance from a national coordinating agency. These selection criteria intentionally omitted many programs implemented by county or city health departments using federal or state grant money. Identifying programs in a less restrictive way would have introduced greater variability in the kind, intensity, and duration of the programs, which would have decreased the precision of the estimated effects and would have made it difficult to generalize findings to programs with specific characteristics.
We identified four programs implemented at fifty-two sites that collectively encompassed 396 counties (appendix exhibit 1 lists organization names and overall characteristics of the four programs included in our study).15 We classified each site by its foci (sites within programs could have different foci, and sites could also have multiple foci). Sites were classified as focusing either on overall health and well-being (two) or on specific health outcomes—namely, child health (six), tobacco control (twenty-three), diabetes (eight), obesity (thirty-eight), or other health outcomes (nineteen). Additionally, we identified each program's year of implementation, as well as the year of its termination (if applicable).
The outcome variables were county-level health outcomes obtained from the Selected Metropolitan/Micropolitan Area Risk Trends (SMART) data for the period 2002–12 from the Behavioral Risk Factor Surveillance System (BRFSS). BRFSS county-level SMART estimates are derived from metropolitan and micropolitan statistical areas (MMSAs) that have at least 500 respondents in a given year and 19 sample members in each MMSA-level stratification category (such as race, sex, or age groups).16 County-level estimates are weighted by procedures that employ known population demographics produced by the decennial census and American Community Survey.16 Over our study period, an average of 7.36 percent of US counties were included in the SMART data. The units of analysis for our study are county-year dyads.


We linked the SMART data and program data to county-level estimates of poverty and demographic and employment characteristics. Poverty data, including median household income and percentage living in poverty, were obtained from the Small Area Income and Poverty Estimates, produced annually by the Census Bureau.17 County-level age composition was obtained from Surveillance, Epidemiology, and End Results Program data, produced annually by the National Cancer Institute.18 Employment data were obtained from the Local Area Unemployment Statistics program of the Bureau of Labor Statistics.19
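As a rough illustration of this data assembly, the following sketch builds a county-year panel from the public sources named above. The file names and column names (fips, year, year_implemented, and so on) are illustrative assumptions, not the authors' actual variables or code.

```python
# Hypothetical sketch (not the authors' code) of assembling the county-year panel.
import pandas as pd

smart = pd.read_csv("brfss_smart_county.csv")      # county-year outcomes, 2002-12
saipe = pd.read_csv("saipe_poverty.csv")           # median income, % in poverty
seer = pd.read_csv("seer_age_composition.csv")     # county age-group shares
laus = pd.read_csv("laus_unemployment.csv")        # county unemployment rate
programs = pd.read_csv("program_sites.csv")        # fips, year_implemented, focus

panel = (smart
         .merge(saipe, on=["fips", "year"], how="left")
         .merge(seer, on=["fips", "year"], how="left")
         .merge(laus, on=["fips", "year"], how="left")
         .merge(programs[["fips", "year_implemented"]], on="fips", how="left"))

# Treatment indicator: county-year observations at or after program implementation
panel["treated"] = (panel["year"] >= panel["year_implemented"]).astype(int)
```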

Analyses Descriptive statistics of the number and type of community-based programs over time were produced. We then used inverse propensity score treatment weighting to reweight treatment and control counties. Regression analyses were conducted using a difference-in-differences design and an event study.
Our goal was to evaluate the implementation of any program, a tobacco-focused program, and an obesity-focused program. We examined programs that focused on tobacco and obesity separately because of the direct link between the implementation of these programs and changes in specific health outcomes captured in the SMART data. Additionally, we chose to focus on tobacco and obesity programs because of their growth in numbers over the study period. This growth was attributable, in part, to funding provided by the American Recovery and Reinvestment Act of 2009, which required a focus on tobacco control or obesity.
For each type of program, we were interested in the association between implementation and three county-level self-reported health outcomes: whether respondents reported being in poor or fair health, smoking status, and obesity status. We chose overall health because of the potential of any program to improve this outcome, and we chose smoking and obesity status because of our emphasis on tobacco- and obesity-focused programs. Programs that focused on other health priorities, such as diabetes and hypertension, may also improve smoking and obesity status, making the latter two outcomes relevant to a broader set of programs.
Inverse Propensity Score Treatment Weighting We employed inverse propensity score treatment weighting, using changes in pre-implementation covariates to reweight untreated counties to achieve greater balance on observed covariates and create a more appropriate control group.20 We assessed the balance of observed covariates using standardized differences.21 These inverse propensity weights were then used in all subsequent regression analyses. (For details on the methodology, see the appendix.)15
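As a hedged illustration of this weighting step, the sketch below fits a simple logistic propensity model on pre-period county covariates, computes inverse propensity weights, and checks weighted standardized differences. It is a simplified stand-in for the authors' specification (which used changes in pre-implementation covariates); the DataFrame `pre` and its column names are hypothetical.

```python
# Simplified, hypothetical sketch of inverse propensity score treatment weighting.
# `pre` is assumed to hold one row per county with pre-period covariates and a
# 0/1 `treated` flag; the covariate names below are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

covars = ["pct_poverty", "unemp_rate", "share_age_20_39", "share_age_40_64"]
X = pre[covars].to_numpy()
d = pre["treated"].to_numpy()

ps = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]

# ATT-style weights: treated counties weighted 1, controls weighted ps / (1 - ps)
pre["iptw"] = np.where(d == 1, 1.0, ps / (1.0 - ps))

def weighted_std_diff(x, d, w):
    """Weighted standardized difference in means between treated and control."""
    m1 = np.average(x[d == 1], weights=w[d == 1])
    m0 = np.average(x[d == 0], weights=w[d == 0])
    pooled_sd = np.sqrt((x[d == 1].var() + x[d == 0].var()) / 2)
    return (m1 - m0) / pooled_sd

for c in covars:  # balance check after weighting
    print(c, round(weighted_std_diff(pre[c].to_numpy(), d, pre["iptw"].to_numpy()), 3))
```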

Difference-In-Differences Analysis We used difference-in-differences regression analysis to evaluate the association between the implementation of a health improvement program and county-level health outcomes.22 Because some of the counties in our data set were included in both the treatment and control groups, depending on the year of implementation, we also employed a difference-in-differences design in which only counties that did not implement a program during the study period were included in the control set. All regression models included county and year fixed effects. We clustered standard errors at the county level to address autocorrelation.
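A regression of this kind could look roughly like the sketch below: the outcome is regressed on the treatment indicator with county and year fixed effects, the inverse propensity weights are applied, and standard errors are clustered by county. The outcome and covariate names are assumptions, not the authors' Stata code.

```python
# Hedged sketch of a weighted two-way fixed effects difference-in-differences
# model with county-clustered standard errors, using the hypothetical `panel`
# DataFrame built earlier with an `iptw` weight column merged in.
import statsmodels.formula.api as smf

did = smf.wls(
    "pct_fair_poor_health ~ treated + pct_poverty + unemp_rate + C(fips) + C(year)",
    data=panel,
    weights=panel["iptw"],
).fit(cov_type="cluster", cov_kwds={"groups": panel["fips"]})

print(did.params["treated"], did.bse["treated"])  # point estimate and clustered SE
```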
Event Study To examine possible pretreatment trends in the study counties, we also used an "event study" design, which compared annual average outcomes for treated counties in each year leading up to and after the county implemented a health improvement program.23,24 Each of these models also included county and year fixed effects, the same covariates that were included in our difference-in-differences analysis, and clustered standard errors at the county level.
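An event-study variant of the same model can be sketched by replacing the single treatment indicator with dummies for years relative to implementation, omitting the year before implementation as the reference period. Again, the column names are illustrative assumptions rather than the authors' specification.

```python
# Illustrative event-study sketch: lead/lag dummies around the implementation year.
import statsmodels.formula.api as smf

panel["event_time"] = (panel["year"] - panel["year_implemented"]).clip(-5, 5)
panel["event_time"] = panel["event_time"].fillna(-999)  # never-treated counties

lead_lag = []
for k in range(-5, 6):
    if k == -1:                      # reference period: year before implementation
        continue
    name = f"et_m{abs(k)}" if k < 0 else f"et_p{k}"
    panel[name] = (panel["event_time"] == k).astype(int)
    lead_lag.append(name)

formula = "pct_smoking ~ " + " + ".join(lead_lag) + " + C(fips) + C(year)"
event = smf.wls(formula, data=panel, weights=panel["iptw"]).fit(
    cov_type="cluster", cov_kwds={"groups": panel["fips"]}
)
```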
Sensitivity Analyses Communities that implemented a population health improvement program may be intrinsically different from communities that did not. This endogeneity presents a problem in the regression analyses above. One way to mitigate the potential biases attributed to endogeneity is to parse out programs where selection is less of an issue. Our data set included counties that were selected for the Communities Putting Prevention to Work program,25 which was funded by the Centers for Disease Control and Prevention under the American Recovery and Reinvestment Act. Funding for this program was competitive, and communities that received funding had to demonstrate in their applications that they were "shovel ready" (that is, had developed the necessary coalition, infrastructure, or capacity to begin implementing evidence-based programs as soon as funding was obtained).

Additionally, there may be countercyclical effects on health resulting from the Great Recession (2007–09).26 To address this concern, we excluded counties that received a Communities Putting Prevention to Work grant. These programs were implemented as a direct result of the recession, and counties that received these grants may have been more susceptible than other counties were to the countercyclical effects of the economic downturn.
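In code terms, this sensitivity analysis amounts to re-estimating the same models on a restricted sample. A minimal sketch, assuming a hypothetical cppw flag marking Communities Putting Prevention to Work counties:

```python
# Drop counties that received a Communities Putting Prevention to Work (ARRA)
# grant before re-running the models; `cppw` is a hypothetical 0/1 county flag.
non_arra_panel = panel.loc[panel["cppw"] == 0].copy()
```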
All analyses were conducted using Stata, version 15. The Vanderbilt University Institutional Review Board considered this study exempt from review, based on its use of publicly available data.
Limitations This study, like many quasi-experimental studies, had several limitations. First, counties with a health improvement program have economic and demographic characteristics that differ significantly from those of counties without such a program. Because these differences could be related to both the health outcomes of interest and the probability of treatment, our estimates could be biased. However, when we limited our analyses to programs that had received funding through the American Recovery and Reinvestment Act, a group arguably less subject to selection bias than the group of programs that was not competitively selected, we found that programs funded through the act were not associated with significantly different changes in county-level health or smoking status or obesity when compared to programs that had not received funding through the act.
Second, although the list of programs, their foci, and their years of implementation have been validated by the program staff of funders in this area, including large nonprofit organizations and governmental agencies, there is still a possibility that some unpublicized programs were excluded from this analysis. Additionally, we did not measure the intensity (that is, the number of interventions implemented or the number of people reached) of the implemented programs or the amount of financial resources invested. Failure to capture variations in these programs could also mask the true effects of larger, more resourced, or better-administered programs.
Third, programs could have different effects depending on the baseline levels of health conditions or behaviors. For example, we found some evidence to suggest that among counties with higher baseline rates of people who reported poor or fair health, implementation of a health improvement program was associated with significant decreases in the proportion of residents reporting such health. This type of analysis was beyond the scope of this study, but it merits further investigation.
Fourth, while our identification and classification strategy included the stated health outcome foci of these programs, we did not necessarily capture the full range of intended outcomes. For some communities, the intended outcome of the health improvement program could be changes to policies or procedures; for others, the goal could have been improvements in health education and knowledge or changes in health behaviors and outcomes. While all of these policies and programs may eventually lead to changes in health outcomes, such changes might not be the only or best source of measurement for all programs. Despite the validation of our selection criteria and the use of small-area estimates for health outcomes, obtaining adequate data for the evaluation of programs was difficult.
Finally, small-area estimates from the BRFSS SMART data are known to have measurement error, which could result in inflated standard errors. Thus, relying on existing sources of aggregate data would be problematic even for communities that may conduct more rigorous evaluations of their programs in the future. Additional data gathering for evaluation from both implementation and non-implementation counties may be necessary and could prove to be a challenge, in terms of both the quality of the data and the time and resources required. Despite these limitations, this study used the best data and most rigorous methods available to estimate the relationship between health program implementation and county-level health outcomes.

Study Results

The number of health improvement program sites grew substantially over the study period, from fourteen in 2007 to fifty-two in 2012. The number of counties with a health improvement program also grew, from 319 in 2007 to 396 in 2012. Before 2010, most of the programmatic sites were focused on child health or other health priorities. With the start of funding through the American Recovery and Reinvestment Act in 2010, the number of tobacco- and obesity-focused sites grew substantially, from one each in 2007 to twenty-four and thirty-seven, respectively, in 2012 (exhibit 1). While the relative share of programs that focused on hypertension, child health, and other health priorities decreased after 2009, the absolute number of these programs either remained the same or grew.
Before the implementation of any health improvement program (that is, in 2002–06), there were significant differences between counties that did and did not implement a program in the period 2007–12. Compared to non-implementing counties, the counties with a health improvement program had a larger share of young adults (ages 20–39) but a smaller proportion of nonelderly adults (ages 40–64) (exhibit 2). Additionally, counties with a program had significantly higher proportions of their populations living in poverty and higher rates of unemployment. Our inverse propensity treatment reweighting, however, achieved balance among observable covariates in the pre-implementation period (appendix exhibit 3).15

Difference-In-Differences Analysis Using a standard difference-in-differences analysis, we found that the implementation of a health improvement program was associated with a mean reduction of less than 0.06 percentage points in the population that reported being in poor or fair health and a mean reduction of less than 0.15 percentage points in the population that is overweight or obese (exhibit 3). However, neither of these results was significant (α = 0.05).
Reweighting control counties with inverse propensity treatment score weights resulted in a reduction of more than 0.06 percentage points in the proportion of a county's population that was overweight or obese (exhibit 3). However, this reweighting resulted in an increase of more than 0.1 percentage points in the proportion of the population reporting being in poor or fair health. (For full regression output of the difference-in-differences analysis, see appendix exhibits 3 and 4.)15 As was the case with the unweighted difference-in-differences approach, these changes were not significant. In both difference-in-differences analyses, the implementation of a health improvement program was associated with an increase (greater than 0.03 percentage point and 0.05 percentage point, respectively) in the proportion of people who smoked (exhibit 3). Results from our event study analysis were substantively similar to the results from the inverse propensity treatment score weighting analysis. (For results of the event study analysis, see appendix exhibits 5 and 6.)15

Exhibit 1 (figure): Numbers of community-based health improvement programs and their health outcome foci, 2007–12

SOURCE Authors' analysis of selected community-based health improvement program data. NOTES The number of programs is cumulative over time. Programs with more than one focus are counted in each of their foci. Over this period, four programs were implemented at fifty-two sites that collectively contained 396 counties. The first program was implemented in 2007.

Exhibit 2 (table): Sample characteristics of counties in 2002–06, before the implementation of health improvement programs, by implementation status

Characteristic                         Non-implementing counties (n = 695)    Implementing counties (n = 269)
Population ages 0–19 (%)               27.91                                   27.63
Population ages 20–39 (%)              27.50                                   28.66***
Population ages 40–64 (%)              32.69                                   32.01***
Population ages 65 and older (%)       11.90                                   11.69
Population living in poverty (%)       10.96                                   13.34***
Unemployment rate (%)                  4.81                                    5.62***

SOURCE Authors' analysis of selected community health improvement program data and county-level Behavioral Risk Factor Surveillance System (BRFSS) Selected Metropolitan/Micropolitan Area Risk Trends (SMART) data for 2002–06. NOTES Counties with health improvement programs were not included in this analysis if there were fewer than 500 respondents in the BRFSS SMART data. Means were compared using unpooled t-tests of means. ***p < 0.01

The implementation of a Communities Putting Prevention to Work program funded through the American Recovery and Reinvestment Act was associated with an average decrease of 0.05 percentage points in the proportion of the population that reported being in poor or fair health, compared to counties that did not implement a program (exhibit 4). Implementation of a program funded through the act was also associated with a reduction of greater than 0.18 percentage points in county-level rates of obesity or overweight. Similar to the implementation of all programs, the implementation of a program funded by the act was associated with an increase of less than 0.2 percentage points in the proportion of the population that smoked, though these changes were not significant. Restricting our analysis to programs not funded through the act produced results similar to those seen in our analysis of all programs (exhibit 3).
When we restricted the treatment group to programs that focused specifically on tobacco or obesity, we found that the implementation of a tobacco-focused program was associated with a reduction of less than 0.5 percentage points in the population that reported being in poor or fair health (exhibit 4). Additionally, the implementation of an obesity-focused program was associated with a modest and nonsignificant decrease (of 0.22 percentage points) in the proportion of people who reported being in poor or fair health. The implementation of a tobacco program was associated with the largest percentage-point reduction (0.20) in the proportion of the population that smoked, and, similarly, the implementation of an obesity-focused program was associated with the largest reductions (0.40 percentage points) in the proportion of the population that was overweight or obese. However, tobacco-focused programs were associated with an increase of roughly 0.10 percentage points in the proportion of people who were overweight or obese, and the implementation of an obesity-focused program was associated with an increase of less than 0.25 percentage points in the proportion of the population that reported smoking. None of these changes was significant.

Discussion

Our work provides modest evidence for the role of health improvement programs in improving certain health outcomes and also provides insights into the kinds of communities that have engaged in community-based health improvement efforts.

Program implementation was associated with modest reductions in the percentage of the population that reported being in poor or fair health or being overweight or obese, although these differences were not significant. Programs that focused on a specific health outcome (for example, tobacco control and obesity) were associated with greater changes in these outcomes, compared to all health improvement programs. However, it is important to note that programs that focused on obesity saw increases in tobacco use and programs that focused on tobacco control saw increases in obesity rates, which suggests that these programs may focus on one health outcome to the detriment of others.
In the pre-implementation study period, counties that implemented a health improvement program were more economically disadvantaged and had younger populations, compared to control counties. Taken together, these differences could be the impetus behind a community's decision to implement a health improvement program. If this is the case, such programs may improve overall health status, but not to a degree that overcomes other potential measures of social or economic disadvantage—such as educational attainment rates, the predominant industry, or median household income.
Exhibit 3 (figure): County-level changes in selected health outcomes after program implementation, by methodological approach

SOURCE Authors' analysis of selected community-based health improvement program data and Behavioral Risk Factor Surveillance System Selected Metropolitan/Micropolitan Area Risk Trends data for 2002–12. NOTES The error bars indicate 95 percent confidence intervals. Models labeled "ARRA" include only counties that received funding from the American Recovery and Reinvestment Act's Communities Putting Prevention to Work grant. Models labeled "non-ARRA" include counties that did not receive funding from that grant. Standard difference-in-differences models are labeled "OLS" (ordinary least squares). Inverse propensity treatment score weighted models are labeled "IPW." Statistical methods are described in the text, technical appendix, and appendix exhibits 2 and 3 (see note 15 in text).

Until now, most of the evidence supporting multisectoral collaborations for health improvement comes from studies that used a simple pre-post design, comparing people who received the intervention's services before and after its intervention. This study, in contrast, used population health outcomes and employed regression techniques and inverse propensity treatment score weights to construct a control group. The advantage of a controlled design is that it lends support for the association between program implementation and population health outcomes.
For instance, the simple pre-post design used in many of the studies cited above would not have captured declining smoking rates nationally during our study period and could have inadvertently attributed changes in smoking status to program implementation. Additionally, the use of a controlled design allowed us to capture and account for other, non-health-related differences among the communities we examined.
Improving population-level health outcomes is difficult, and it takes time to "move the needle" on health outcomes. For example, a decrease of 0.5–1.0 percentage point in the rate of smoking per year may be the maximum change that a community could expect when implementing comprehensive tobacco control policies and programs. This means that in a community with an adult population of 500,000 and an adult smoking rate of 20 percent, a program would need to change the smoking behavior of 500–1,000 adults in a single year to obtain a decrease of 0.5–1.0 percent. The level and intensity of programming required for this level of change might not be available to many communities, and almost a decade of programmatic implementation and evaluation might be required to produce changes of this magnitude. Thus, five years of post-implementation data (the maximum in our data set) might not provide enough time for changes in health outcomes to be realized, depending on the intensity and specificity of programming. Future research could extend our study period to more recent years, potentially providing the necessary lag time to observe changes in population-level health outcomes.

Conclusion

Retrospective evaluation of collaborative, multisector health improvement initiatives, including the health improvement programs evaluated here, is difficult. A preferable method of summative evaluation is for programs to be engaged in evaluation before, during, and after implementation. However, in many situations, organizations and coalitions that lead, develop, and implement a program have expertise in community outreach and organizing, implementation science, or evidence-based practices, rather than in program evaluation.

Thus, an evaluation team should be employed to provide guidance and expertise in measurement, data collection, and analytic strategies at the beginning of program implementation. Early entry of such a team allows for the identification of control communities, gathering of necessary pre-implementation data, and formative evaluations that lead to a summative evaluation.
However, resources are scarce, and many communities that engage in these efforts require private investment, grants, and public funds to implement their programs. There are often few resources remaining for an evaluation of any kind, much less an evaluation on the scale described here. Grant-making organizations and private-sector entities that invest in the implementation of programs could consider also providing resources to perform a thorough summative evaluation to adequately evaluate their return on investment. In addition, they may want to invest in more robust data collection, not only for evaluation but also to target needs and guide implementation of population health improvement programs more broadly. ▪

Exhibit 4 (figure): County-level changes in selected health outcomes after program implementation, by focus of program

SOURCE Authors' analysis of selected community-based health improvement program data and Behavioral Risk Factor Surveillance System Selected Metropolitan/Micropolitan Area Risk Trends data for 2002–12. NOTES The error bars indicate 95 percent confidence intervals. "Any smoking" includes people who reported smoking daily and people who reported smoking some. "Obese or overweight" includes people with a body mass index of ≥25 to ≤40. Standard errors are clustered at the county (FIPS) level. Programs labeled as "obesity program" or "tobacco program" may also focus on additional health outcomes. All models are inverse propensity treatment weighted. Statistical methods are detailed in the text, technical appendix, and appendix exhibit 3 (see note 15 in text).


An earlier version of this article was presented at the AcademyHealth Annual Research Meeting, New Orleans, Louisiana, June 25, 2017. This work was supported by the Robert Wood Johnson Foundation (Grant No. 77330). The authors thank Oktawia Wojcik and Caroline Young for their thoughtful comments on earlier versions of the article and support of this project more generally.

NOTES

1. Lv J, Liu QM, Ren YJ, He PP, Wang SF, Gao F, et al. A community-based multilevel intervention for smoking, physical activity, and diet: short-term findings from the Community Interventions for Health programme in Hangzhou, China. J Epidemiol Community Health. 2014;68(4):333–9.

2. Driscoll DL, Rupert DJ, Golin CE, McCormack LA, Sheridan SL, Welch BM, et al. Promoting prostate-specific antigen informed decision-making. Evaluating two community-level interventions. Am J Prev Med. 2008;35(2):87–94.

3. Brand T, Pischke CR, Steenbock B, Schoenbach J, Poettgen S, Samkange-Zeeb F, et al. What works in community-based interventions promoting physical activity and healthy eating? A review of reviews. Int J Environ Res Public Health. 2014;11(6):5866–88.

4. Kloek GC, van Lenthe FJ, van Nierop PW, Koelen MA, Mackenbach JP. Impact evaluation of a Dutch community intervention to improve health-related behaviour in deprived neighbourhoods. Health Place. 2006;12(4):665–77.

5. Wolfenden L, Wyse R, Nichols M, Allender S, Millar L, McElduff P. A systematic review and meta-analysis of whole of community interventions to prevent excessive population weight gain. Prev Med. 2014;62:193–200.

6. Kreuter MW, Lezin NA, Young LA. Evaluating community-based collaborative mechanisms: implications for practitioners. Health Promot Pract. 2000;1(1):49–63.

7. Merzel C, D'Afflitti J. Reconsidering community-based health promotion: promise, performance, and potential. Am J Public Health. 2003;93(4):557–74.

8. Roussos ST, Fawcett SB. A review of collaborative partnerships as a strategy for improving community health. Annu Rev Public Health. 2000;21:369–402.

9. Liao Y, Tucker P, Siegel P, Liburd L, Giles WH. Decreasing disparity in cholesterol screening in minority communities—findings from the racial and ethnic approaches to community health 2010. J Epidemiol Community Health. 2010;64(4):292–9.

10. Cruz Y, Hernandez-Lane ME, Cohello JI, Bautista CT. The effectiveness of a community health program in improving diabetes knowledge in the Hispanic population: Salud y Bienestar (Health and Wellness). J Community Health. 2013;38(6):1124–31.

11. Kent L, Morton D, Hurlow T, Rankin P, Hanna A, Diehl H. Long-term effectiveness of the community-based Complete Health Improvement Program (CHIP) lifestyle intervention: a cohort study. BMJ Open. 2013;3(11):e003751.

12. Landon BE, Hicks LS, O'Malley AJ, Lieu TA, Keegan T, McNeil BJ, et al. Improving the management of chronic disease at community health centers. N Engl J Med. 2007;356(9):921–34.

13. Mays GP, Mamaril CB, Timsina LR. Preventable death rates fell where communities expanded population health activities through multisector networks. Health Aff (Millwood). 2016;35(11):2005–13.

14. Wolfenden L, Wiggers J. Strengthening the rigour of population-wide, community-based obesity prevention evaluations. Public Health Nutr. 2014;17(2):407–21.

15. To access the appendix, click on the Details tab of the article online.

16. Centers for Disease Control and Prevention. SMART: BRFSS frequently asked questions (FAQs) [Internet]. Atlanta (GA): CDC; [last updated 2013 Jul 12; cited 2017 Nov 27]. Available from: https://www.cdc.gov/brfss/smart/smart_faq.htm

17. Census Bureau. Small Area Income and Poverty Estimates (SAIPE): data set. Washington (DC): Census Bureau.

18. National Cancer Institute. Surveillance, Epidemiology, and End Results (SEER): data set. Bethesda (MD): National Cancer Institute.

19. Bureau of Labor Statistics. Local Area Unemployment Statistics: database. Washington (DC): BLS.

20. D'Agostino RB Jr. Propensity score methods for bias reduction in the comparison of a treatment to a non-randomized control group. Stat Med. 1998;17(19):2265–81.

21. Austin PC, Stuart EA. Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies. Stat Med. 2015;34(28):3661–79.

22. Ashenfelter O, Card D. Using the longitudinal structure of earnings to estimate the effect of training programs. Rev Econ Stat. 1985;67(4):648–60.

23. Bailey MJ, Goodman-Bacon A. The War on Poverty's experiment in public medicine: community health centers and the mortality of older Americans [Internet]. Cambridge (MA): National Bureau of Economic Research; 2014 Oct [cited 2017 Nov 27]. (NBER Working Paper No. 20653). Available from: http://www.nber.org/papers/w20653.pdf

24. Jacobson LS, LaLonde RJ, Sullivan DG. Earnings losses of displaced workers. Am Econ Rev. 1993;83(4):685–709.

25. Centers for Disease Control and Prevention. Communities Putting Prevention to Work (2010–2012) [Internet]. Atlanta (GA): CDC; [last updated 2017 Mar 7; cited 2017 Nov 27]. Available from: https://www.cdc.gov/nccdphp/dch/programs/communitiesputtingpreventiontowork/index.htm

26. Ruhm CJ. Are recessions good for your health? Q J Econ. 2000;115(2):617–50.

