Artificial intelligence, machine learning and predictive analytics


Transactions of the SDPS:
Journal of Integrated Design and Process Science
DOI 10.3233/JID200002

1092-0617/$27.50© 2020 – Society for Design and Process Science. All rights reserved. Published by IOS Press

Convergence of Artificial Intelligence Research in
Healthcare: Trends and Approaches

Thomas T.H. Wan *

Professor of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, Taiwan
and Professor Emeritus of the Department of Health Management and Informatics, University of Center
Florida, Orlando, USA

Abstract. A value-based strategy relies on the implementation of a patient-centered care system that will directly
benefit patient care outcomes and reduce costs of care. This paper identifies the trends and approaches to artificial
intelligence (AI) research in healthcare. The convergence of multiple disciplines in the conduct of healthcare research
requires partnerships to be established among academic scholars, healthcare practitioners, and industrial experts in
software design and data science. This collaborative work will greatly enhance the formulation of theoretically
relevant frameworks to guide empirical research and application, particularly relevant in the search for causal
mechanisms to reduce costly and avoidable hospital readmissions for chronic conditions. An example of implementing patient-centered care at the community level is presented; it entails the influence of context, design, process, performance, and outcomes on personal and population health, employing AI research and information technology.

Keywords: AI research, context-design-performance-outcomes framework, predictive analytics, shared decision
support, patient-centered care

1. Introduction

The Institute of Medicine (IOM) of the National Academies of Science has estimated that 44,000 to 98,000 Americans die each year due to preventable mistakes in healthcare (Kohn, Corrigan, & Donaldson, 2000). The IOM has doggedly hounded the nation’s health care delivery system because it “…has fallen far short in its ability to translate knowledge into practice and to apply new technology safely and appropriately” (Institute of Medicine, 2001). The IOM (2003) has made continuity of care a primary
goal of its comprehensive call for transforming the quality of care in the United States. In 2006, the
American College of Physicians (ACP) established continuity of care as a central theme for restructuring
or reengineering healthcare. Recent research on life-limited patients receiving patient-centered care management showed a notable 38% reduction in hospital utilization and a 26% reduction in overall costs, with high patient satisfaction (Sweeney, Waranoff, & Halpert, 2007). Thus, it is imperative to establish
scientific evidence in support of the need for adopting healthcare technologies/devices (Reckers-Droog et
al., 2020) and expanding home care monitoring as part of the patient-centric care management technology
(Williams & Wan, 2015). The healthcare system is currently evolving from a provider-centric to a patient-centric care modality.

* Corresponding author. Email: [email protected] Tel: 407-823-3678.


Changes in the ecology of medical care are greatly facilitated by the availability of advanced health technology and informatics (Rav-Marathe et al., 2016), particularly in relation to chronic disease and self-care management. For instance, design and process science plays a pivotal role in reshaping the service delivery system to improve the efficiency and quality of patient care and safety through the adoption of usable information technology tools. Furthermore, the workflow of health services is becoming more standardized and routinized. Important clinical and personal care data are often used to assess the performance of the healthcare system.

Innovative collaboration in establishing academia-industry partnerships for artificial intelligence (AI)
research and development in healthcare is essential to the improvement of quality and efficiency in care
management practice. An evidence-based approach for doing the right thing right in healthcare is the fundamental step to establish performance guidelines and enhance the productivity of the healthcare workforce. In 2019, the Centers for Medicare and Medicaid Services (CMS) launched the AI Health Outcomes Challenge, offering federal grants and contracts to innovators to demonstrate how AI tools, such as deep learning and neural networks, can be used to predict unplanned hospital and skilled nursing facility admissions and adverse events. By partnering with the American Academy of Family Physicians and Arnold Ventures, CMS challenges researchers and practitioners to harness AI solutions to predict health outcomes for potential use in the CMS Innovation Center’s innovative payment and service delivery models.

In order to optimize the effectiveness of care management strategies, we need to pay special attention to human factors in delivering patient-centered care. Professor Barbara Huelat, a renowned healing environment designer, often says that we should include human-centric or patient-centered factors in the design of a system to optimize healthcare delivery systems (Huelat and Wan, 2011). Hence, we should
use information technology to identify and target population subgroups who are most likely to benefit from
the use of innovative techniques. Most importantly, we have to utilize the knowledge-based information
system and technology to guide shared decision making for patient care. Thus, human factors influencing
the quality and efficiency of care can be effectively incorporated into the design and implementation of AI
in healthcare.

A report on the rankings of health for more than 3,000 counties in the U.S. has documented the need for recognizing four categories of predictors of the variability in population health and performance in 2019. The first category is physical environmental and ecological factors,
which account for 10% of the total health variation. The second category is medical care, accounting for
20% of the variation. The third category is health behavioral factors, accounting for 30% of the variation.
The fourth category is related to socio-economic factors or disparities, accounting for 40% of the variation
in county health. So, if one would like to improve health status or reduce health disparities, it is necessary
to pay greater attention to health behavioral and socioeconomic factors that may influence the health and
health care of the population. Naturally, healthy habits and lifestyles are important components of
promoting health and wellbeing for the people. Therefore, to actualize the power of AI or technology-
oriented decision support systems in healthcare we should prioritize healthcare research on identifying the
determinants of personal and population health. Past, current, and future interests in pursuing AI research center on employing machine-learning methods (i.e., classic support vector machines, neural networks, and deep learning) for structured data and natural language processing methods for unstructured health data (Jiang et al., 2017). The opportunities for understanding human emotions and behavioral responses to the care rendered should be thoroughly explored by AI researchers and software developers.
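
The four-category weighting above can be sketched as a simple weighted score. The weights follow the percentages cited in the text; the county subscores, the 0-100 scale, and the function name are hypothetical illustrations, not data from the report.

```python
# Illustrative sketch: combining the four factor categories cited in the
# text into one weighted score. Weights follow the stated percentages;
# the county subscores below are hypothetical 0-100 values.

WEIGHTS = {
    "physical_environment": 0.10,  # 10% of the variation
    "medical_care": 0.20,          # 20%
    "health_behaviors": 0.30,      # 30%
    "socioeconomic": 0.40,         # 40%
}

def weighted_health_score(subscores: dict) -> float:
    """Weighted average of 0-100 factor subscores; weights sum to 1."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

county = {
    "physical_environment": 70.0,
    "medical_care": 60.0,
    "health_behaviors": 50.0,
    "socioeconomic": 40.0,
}
# 0.1*70 + 0.2*60 + 0.3*50 + 0.4*40 = 7 + 12 + 15 + 16 = 50.0
print(weighted_health_score(county))
```

The weighting makes the text's point concrete: because the socioeconomic and behavioral categories carry 70% of the weight, improvements there move the composite score far more than environmental or medical-care changes.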

The use of theoretically informed frameworks to guide machine learning and deep learning explorations in healthcare data is important for generating causal inferences derived from specified and justifiable assumptions in the empirical investigation of healthcare outcomes. The proper design and implementation of an innovative patient-centered care system has to pay attention to the collection of the right kind of clinical and patient-reported data. If the data are not correctly specified or quantified, they will not be used properly no matter how much data you have generated. In other words, data-driven activities will not be fruitful without the determination of their theoretical relevance. It is the integration of inductive and deductive logics in the conduct of scientific inquiry that enables us to develop some form of predictive medicine or precision medicine. The confirmatory nature of data-driven effort could solidify supportive and foundational theories to guide us in designing a more efficacious or efficient delivery system. Hence, we could formulate clinical and administrative decision support products for enhancing patient care.

2. Current Trends in AI Healthcare Research

AI research in healthcare has emerged as a high-growth area of medical enterprise. Attention to practice
standards and self-reported care outcomes in both inpatient and outpatient care settings offers rewarding
benefits for improving the quality of care.

A few trends in AI healthcare and applications are worth noting here. First, the world’s population is aging at a rapid rate. The compression of morbidity and mortality has signified the need to design useful care management strategies for the chronically ill. The call for attention to population health management for polychronic conditions as a systematic approach is timely in response to the potential needs of the aging
population (Wan, 2018). Second, the decline in population growth engenders a significant dilemma for
future economic development and growth as it is manifested in the shortage of labor. The shift of caregiving
responsibilities towards finding formal caregivers to take care of our elderly is a modern phenomenon.
Third, it is very fashionable to advocate the need for delivering patient-centered care, but the substantive meaning of patient-centered care has yet to be fully understood. The three-pronged questions are: 1) What is
patient-centric care? 2) How do we incorporate the principles of considering personal or patient experiences
into the design of AI products for healthcare? 3) What types and generations of information technology are
available for supporting healthcare organizations in solving the delivery problems?

Strategically speaking, we should start our exploratory journey in search of AI solutions by looking for low-hanging fruit. By employing low-tech strategies in the initial phase, we could find out what is known
about the effects of human experience in the healing process. For example, a large hospital in Florida faces
a situation of paying millions of dollars in annual fines as a penalty for having higher readmission rates
than the national average for heart failure and other chronic conditions. The Centers for Medicare and Medicaid Services (CMS) uses the national annual average rate of 15% for heart failure readmissions as a standard. Hospitals with rates above the national average are liable for a penalty averaging a 2 to 5% reduction in reimbursement or payment, depending on the category of clinical diagnosis. Under the threat of reduced revenues, all hospitals are very concerned about how to reduce avoidable
readmissions for chronic conditions. Naturally, a proper care management strategy is to focus on the
determinants of hospital readmission. The literature also suggests that multiple causal factors for
readmissions exist. The relative influence of personal, health provider, and institutional factors on hospital
readmission has yet to be determined (Wan, 2018). Interestingly enough, empirical studies have also
documented that provider characteristics and practice factors (e.g., primary care or clinical integration) may
contribute to the variations in hospital readmissions. However, limited research has focused on patient-centric care modalities and their effects on patient readmission.
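
The penalty logic described above can be illustrated with a short sketch. The 15% benchmark and the 2 to 5% reduction range come from the text; the linear scaling rule, the cutoff, and the function name are assumptions for illustration only, not actual CMS policy.

```python
# Hypothetical illustration of the penalty logic described in the text:
# hospitals whose heart-failure readmission rate exceeds the ~15%
# national average face roughly a 2-5% reduction in reimbursement.
# The linear scaling rule below is an assumption, not CMS policy.

NATIONAL_AVG = 0.15   # national average readmission rate cited in the text
MIN_PENALTY = 0.02    # 2% reimbursement reduction
MAX_PENALTY = 0.05    # 5% reimbursement reduction

def reimbursement_penalty(readmission_rate: float) -> float:
    """Return the fractional payment reduction for a hospital."""
    if readmission_rate <= NATIONAL_AVG:
        return 0.0
    # Assumed linear scaling: 5 percentage points above average hits the cap.
    excess = readmission_rate - NATIONAL_AVG
    penalty = MIN_PENALTY + (MAX_PENALTY - MIN_PENALTY) * min(excess / 0.05, 1.0)
    return round(penalty, 4)

print(reimbursement_penalty(0.14))  # at or below average: no penalty
print(reimbursement_penalty(0.18))  # 3 points above average
```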

In response to the need for conducting a thorough investigation on patient or personal care factors
influencing the variability in hospitalization or re-hospitalization, a systematic analysis was performed
along with meta analysis on the data derived from high-quality published clinical trial studies on heart
failure admissions (Wan et al., 2017). A well-trained group of graduate students conducted the systematic review on personal determinants of heart failure in search of magic bullets for eliminating or reducing the readmission problem. They identified important personal factors affecting patient variations in heart failure readmission and learned that human factors involving patients would help with redesigning or improving care management. Finally, they classified patient-centered factors into an eight-letter acronym, CREATION: Choice (C), Restfulness (R), healing Environment (E), Activity (A), Trust (T), Interpersonal relations (I), Outlook (O), and Nutrition (N). They found that the Choice factor or
self-efficacy has exerted a substantial influence on readmission. When the patient-centered care strategy
focuses on a great deal of individual choice or preferences, heart failure patients will be able to reduce the likelihood of readmission severalfold, compared with an average heart failure patient not practicing self-care. The
conclusion is that higher priorities should be given to delivering patient educational interventions and
raising patient awareness of self-care management, and understanding the interplay among multiple
personal factors such as the knowledge (K), motivation (M), attitude (A), preventive practice (P) and patient
care outcomes (O). Figure 1 is a behavioral change model with the KMAP-O framework for improving
patient adherence levels (Wan et al., 2018). Health practices or preventive activities are directly influenced
by improved knowledge, motivation, and attitude toward self-care via patient care education and, in turn,
positively affect patient care outcomes. Thus, it confirms the validity of adopting a systematic review and meta-analytic approach to the low-hanging fruit for reducing or avoiding hospital readmissions. By searching the current literature and finding potential causal factors relevant to preventing avoidable hospitalization or re-hospitalization, one can then effectively design patient-centered interventions. Because
there are many known multi-tiered approaches involving personal, provider, community, and policy factors,
we should recognize the relative influences of determinants of health behavioral change properly when we
launch a patient-centered care and educational initiative.

Fig. 1. The KMAP-O framework as a patient-centered health education model
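
The KMAP-O chain in Figure 1 can be sketched as a toy linear path model. Only the causal ordering (education improves knowledge, motivation, and attitude; these drive preventive practice; practice improves outcomes) comes from the text; the path coefficients and scores below are hypothetical.

```python
# Toy sketch of the KMAP-O causal chain from Figure 1: knowledge (K),
# motivation (M), and attitude (A) drive preventive practice (P), which
# in turn improves patient care outcomes (O). All coefficients and
# scores are hypothetical, for illustration only.

def kmap_o(knowledge: float, motivation: float, attitude: float) -> dict:
    """Propagate 0-1 scores through an assumed linear KMAP-O path model."""
    practice = 0.4 * knowledge + 0.3 * motivation + 0.3 * attitude
    outcome = 0.8 * practice  # practice positively affects outcomes
    return {"P": round(practice, 3), "O": round(outcome, 3)}

baseline = kmap_o(knowledge=0.3, motivation=0.4, attitude=0.4)
educated = kmap_o(knowledge=0.8, motivation=0.7, attitude=0.7)  # after education
print(baseline, educated)
```

In this sketch, patient education acts only on K, M, and A, yet the outcome score rises because the model routes their influence through practice, mirroring the mediation structure the text describes.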

The fourth trend is related to market competition. Every company in AI design and application is trying
to produce a device that could dominate the regional, national and/or global market. The Society for Design
and Process Science (SDPS) sponsored the 24th International Conference on Navigating Innovative Design and Applications via Automation and Artificial Intelligence at the end of July 2019 in Taichung, Taiwan. This conference exemplified the need for convergence of multiple disciplines in order
to reshape market niches and facilitate collaborations among varying disciplines in their research and
development initiatives. We hope that SDPS colleagues will lead the delivery of AI product design and
process research to enable people to effectively adopt health information and knowledge management tools
to solve healthcare problems such as hospital readmissions. Because the traditional technology-adoption
model is limited in offering insightful ideas about how to improve the efficacy of patient-centered care
modality, it is therefore imperative to search for the underlying reasons for those who do not use IT products
for patient education and communication. Careful attention is needed to fully understand the reasons for
the failure in effective use of health educational products.

The fifth trend relates to looking for ways to achieve multi-criteria optimization. By applying the
KMAP-O model as specified for patient-centered care, we are able to collect the right kind of data with
proven validity in its theoretical formulation of predictive domains of patient-centered care. Eventually, the
data could be warehoused in a defined framework with populated variables in each major domain or
conceptual formulation. The availability of big data enables investigators to employ effective data analytics
to pursue both exploratory and confirmatory analysis of predictors of healthcare outcomes. Thus, we can
maximize the power of knowing and confirming the predictor variables via multi-criteria optimization.
Ultimately, decision support systems could be designed and incorporated into AI devices for improving personal health. Through innovation in design-process-outcome science, we hope that we can handle 80%
of system problems with AI innovations in healthcare. It would be fascinating to see how clinical practice
could be made more efficient and effective by using graphic-user interface (GUI) based decision support
systems or other data visualization techniques in healthcare improvement.

The sixth trend is the increasing prevalence of chronic disease in the population. If you ask adults aged 65 or older, you may find that the average number of chronic conditions they report ranges from 2 to 5. Thus, how to target a high-risk population is a major task for researchers
in population health management. The population health management perspective emerges as a new
enterprise in health care management. By identifying high-risk groups and using AI technology to monitor and collect relevant data for designing and implementing care management interventions, health providers could design and adopt shared decision-making apps for their patients in varying settings such as home-based, community, and/or institutional care (Wan, 2019).

The seventh trend is to learn how to enhance self-care ability. Care for patients discharged from an acute care facility should be coordinated, and patients should be provided with adequate personal care information enabling them to take care of themselves during the post-discharge period. Self-care management plays a very important role in reshaping the patient-first ideology and helping reduce future health care expenditures.

The eighth trend is the adoption and use of varying health information technologies, particularly digital devices, cloud-based mechanisms, and blockchain technologies, to improve the design and process of healthcare delivery. Furthermore, the emerging data science applied to healthcare and enabled
by advanced Internet technologies will greatly speed up data mining and analytics developments. Thus,
researchers and practitioners can clearly understand how care management innovations and interventions
will effectively impact patient care outcomes. The dose-response relationship between medical care
interventions, such as the types and amounts of health education, and outcomes of care could be carefully
delineated from the big-data-to-knowledge approach (National Institutes of Health, 2019). In addition, the cost-efficiency and quality of service delivery systems could be substantially improved when the system is able to achieve more effective coordination and to process medical information or claims in a timely manner. AI via machine learning and optimization is capable of solving healthcare issues and thereby bending the cost and quality curves.

3. AI Healthcare Research: Directions and Strategies

Several directions and strategies for AI research in healthcare are suggested as follows:
First, AI researchers in healthcare should utilize the results from predictive modeling of the determinants of personal health or outcomes. Predictive analytics should not rely on just a single criterion. By identifying a few parameters parsimoniously, we would be able to optimize performance and outcomes. In other
words, the future is to look beyond the scope of design and process that will be directly influenced by the
context or ecology of medical care. We should focus on outcomes and performance as well. This systems
approach to healthcare also refers to the context-design-process-outcomes framework guiding the
development of AI research.
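
The parsimony principle above can be illustrated with a minimal sketch: a logistic risk model using only three predictors. The predictors, weights, and function name are hypothetical, not taken from the paper.

```python
# Minimal sketch of a parsimonious predictive model: a logistic risk
# score built from only three predictors. All predictors and weights
# are hypothetical, chosen for illustration.
import math

def readmission_risk(age: float, prior_admissions: int,
                     self_care_score: float) -> float:
    """Logistic risk model with three parsimonious predictors (illustrative)."""
    z = -2.0 + 0.02 * age + 0.5 * prior_admissions - 1.5 * self_care_score
    return 1.0 / (1.0 + math.exp(-z))  # map the linear score to (0, 1)

low = readmission_risk(age=70, prior_admissions=0, self_care_score=0.9)
high = readmission_risk(age=70, prior_admissions=3, self_care_score=0.2)
print(round(low, 3), round(high, 3))
```

A model this small is easy to validate, explain to clinicians, and embed in a decision support tool, which is the practical payoff of parsimony the text argues for.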

Second, the convergence in systems science needs to employ causal inquiry approaches via the
establishment of theoretical models containing the context-design-process-performance-outcome
components of the healthcare system. This causal framework specifies that under specific contexts, a good
design leads to a good process, good process leads to good performance, and then good performance helps
achieve better patient care outcomes. This is an expanded model of the structure-process-outcome framework specified by Donabedian (1966) for quality improvement.
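
The context-design-process-performance-outcome chain can be sketched as a stage-by-stage propagation of quality scores. The multipliers below are hypothetical; only the ordering of the stages comes from the framework described above.

```python
# Illustrative sketch of the context-design-process-performance-outcome
# chain: under a given context, design quality propagates stage by stage.
# The multipliers are hypothetical, for illustration only.

def cdppo(context: float, design: float) -> dict:
    """Propagate quality scores (0-1) through the assumed causal chain."""
    process = min(1.0, context * design)   # good design leads to good process
    performance = 0.9 * process            # good process leads to good performance
    outcome = 0.9 * performance            # good performance helps achieve outcomes
    return {"process": process, "performance": performance, "outcome": outcome}

print(cdppo(context=1.0, design=0.9))
print(cdppo(context=1.0, design=0.5))  # weaker design degrades every later stage
```

The sketch makes the chain's implication explicit: a deficit introduced at the design stage is carried through every downstream stage, so outcome improvement must start upstream.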

Third, a multi-tiered approach to healing environment design is suggested. Figure 2 displays a complex
causal model of the determinants of health care outcomes. The endpoint is a holistic state of physical and
mental wellbeing achievable through improving the healthcare delivery system and its performance. With
adequate levels of inputs and outputs used in the healthcare system, the patient-centered care modality is
integrated into the design. Evidence-based design in healing environments can exert important positive
effects, including the reduction of stress and risk, improvement of patient safety, reduction of airborne pathogens and hospital-acquired infections, avoidance of patient-transfer-induced errors, and enhancement of staff satisfaction and productivity (Ulrich et al., 2004; Douglas and Douglas, 2005; Huisman et al., 2012). Furthermore, the systematic design has to consider the context or environment in which patient care is affected by cultural, political, social, and physical environmental factors. Appropriate designs and processes of care management or population health management make it possible to maximize or optimize the performance of a healthcare system.

Fig. 2. Holistic well-being affected by input- and output components of the healthcare system and person-

centric experience

Fourth, data science seeks the patterns and causal mechanisms associated with observations (Ertas, Tanik, & Maxwell, 2000). We should effectively guide the development of theoretical foundations that
enable the formulation of best practices in healing environment design. A transdisciplinary approach,
combining micro- and macro-predictor variables, is highly recommended. This will widen the scope of
research activities beyond the engineering or system domains. For instance, the empirical examination of
personal and societal determinants of health should specify the relevance of micro- and macro-level
predictors in a search for their causal influences on personal and population health. The micro-level factors
may include KMAP-O components of health behavioral change, whereas the macro-level factors may
consider the contextual, ecological, and organizational variabilities in the conduct of health services
research. The big-data research in clinical practices could benefit from the integration of a multi-tiered
approach with multi-level modeling and analysis (Wan, 2002). For instance, researchers can populate
relevant micro- and macro-level predictor variables based on the conceptual formulation or model.
Therefore, domain-specific information is organized and integrated into a theoretically sound data system
defined by the investigators (Figure 3). Then, we will be able to tease out the relevance of system
components in designing predictive analytics. Exploratory and confirmatory approaches in data science should not be based on hit-and-miss trials in search of important determinants of health; rather, they should be theoretically guided investigations that identify action plans and directions for interventions. By
considering predictive variables in a causal sequence, one can begin to develop useful predictive models in
healthcare (Figure 3). We can then fully explain what we have gained from the data analysis via predictive modeling. Ultimately, we can design and implement decision support systems for optimizing health care
outcomes, such as reduced hospital readmissions.
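
The integration of micro- and macro-level predictors into a theoretically defined data system (Figure 3) can be sketched as follows. The domain lists echo the text (KMAP-O at the micro level; contextual and organizational variables at the macro level), but the variable names and values are hypothetical.

```python
# Sketch of organizing micro- and macro-level predictors into a single,
# theoretically defined data system (cf. Figure 3). Domain membership
# follows the text; the variable names and values are hypothetical.

PREDICTOR_DOMAINS = {
    "micro": ["knowledge", "motivation", "attitude", "practice"],     # KMAP-O
    "macro": ["county_ses_index", "provider_density", "org_integration"],
}

def build_record(patient: dict, county: dict) -> dict:
    """Populate one analysis record from micro (patient) and macro (county) sources."""
    record = {}
    for var in PREDICTOR_DOMAINS["micro"]:
        record[var] = patient[var]   # person-level measures
    for var in PREDICTOR_DOMAINS["macro"]:
        record[var] = county[var]    # contextual measures attached to the person
    return record

patient = {"knowledge": 0.7, "motivation": 0.6, "attitude": 0.8, "practice": 0.5}
county = {"county_ses_index": 0.4, "provider_density": 1.8, "org_integration": 0.9}
print(build_record(patient, county))
```

Defining the domains before populating them is the point of the theoretically guided approach: each variable enters the data system because a framework calls for it, not because it happened to be available.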

Fig. 3. Micro- and macro-level predictors and integration serving as a theoretical framework to guide the

design of predictive analytics

Fifth, the utilization of Internet-of-Things (IoT) technologies in healthcare enables researchers to connect smart devices and data through the Internet and to identify relevant information for improving healthcare quality (Dauwed and Meri, 2019). In a recent literature review, Naziv et al. (2019) examined varying
sources of publications and workshops and identified concerns such as data connectedness, standardization,
and security and privacy of data compiled by mobile health technologies. These issues are the challenges
encountered by researchers as well as providers.

Sixth, value-based approaches to healthcare management are highlighted in prior research (Wan, 2002;
Shortell et al., 2007; Lee and Wan, 2002; Wan, 2018). For instance, the increased technical efficiency of
hospital care is positively associated with improved quality of care. The relationship between efficiency and quality of care is a complementary rather than a substitutive one. A recent hospital research report
suggests that hospital standardization in the design of an automated care management system facilitates the
effectiveness in targeting high-risk populations through a systematic risk identification (Shettian and Wan,
2018). Similarly, population health management could be enhanced by integrating activities such as risk
identification, utilization, quality, and patient engagement management.

Seventh, longitudinal data and prospective study design are germane to the search for causal factors
influencing care management effectiveness. Because the conventional approach to health data analysis does not observe patient states longitudinally at multiple time points with repeated measures, the static nature of patient care data is unable to reveal trajectory patterns of chronic disease and its complications. Sequential data on patient care status, with both time-varying and time-constant variables, should capture any changes in the panel data system (Wan, 2017). Hence, we can develop meaningful and useful predictive
analytics for identifying determinants of health or illness (Figure 4).

Fig. 4. Panel data needed in predictive analytics
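
The panel-data structure in Figure 4 can be sketched as repeated measures per patient, combining a time-constant variable with a time-varying one. All identifiers and values below are hypothetical.

```python
# Sketch of the panel-data structure described in the text: repeated
# measures per patient, mixing a time-constant variable (sex) with a
# time-varying one (systolic blood pressure). Values are hypothetical.

panel = [
    # (patient_id, wave, sex, systolic_bp)
    ("p1", 1, "F", 148),
    ("p1", 2, "F", 141),
    ("p1", 3, "F", 133),
    ("p2", 1, "M", 152),
    ("p2", 2, "M", 150),
]

def bp_change(records, patient_id):
    """Change in the time-varying measure from the first to the last wave."""
    waves = sorted(r for r in records if r[0] == patient_id)  # orders by wave
    return waves[-1][3] - waves[0][3]

print(bp_change(panel, "p1"))  # a negative value indicates improvement
```

A static, single-wave extract could not compute this within-patient trajectory at all, which is exactly the limitation of conventional cross-sectional health data the text describes.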

4. Implementing Patient-Centered Care Management Technologies for Solving Problems
in the Health Services Delivery System: A Proposal for AI Research

Wellness and preventive care may be improved through proper design and implementation of a patient-centered care management technology (PCCMT). Little is known about how an ideal care management technology can be applied to community-based wellness centers. Research has shown that increased patient-clinician communication is correlated with higher levels of patient satisfaction and improved health outcomes (Breen et al., 2009). The synergism of employing personal health records (PHR) and health information technology (HIT) in wellness centers may play a pivotal role in enhancing collaborative
patient care and increasing patient safety and quality of care. It is also unclear if the PHR, augmented with
a sound education training program, can reduce risks associated with medical errors in ambulatory care,
improve patient-clinician communication, increase continuity of patient-centered care, and generate better
proximal outcomes (patient and provider satisfaction, trust) and distal outcomes (health-related quality of
life and health status).

In implementing the PCCMT, we need to identify barriers and benefits of PCCMT for participants,
providers, wellness centers and the community. To evaluate the beneficial effects of the patient-centric care
management technology (PCCMT) interventions, we propose to adopt the following: 1) Personal Health
Records (PHR), 2) participant health education interventions, and 3) integration of PHR technologies with
care coordination, lifestyle change and nutritional review, and preventive care processes and outcomes
measured by indicators such as improvement of interpersonal continuity of care, patient-provider
communication, patient adherence to prescribed treatment regimen, appropriate use of healthcare resources,
participant satisfaction, adverse drug events detected by pharmacy consultation, health related quality of
life (HRQOL), and health status measures.

Overall improvements in patient safety have been made using health information technologies (HIT) (Bates and Singh, 2018; Bates and Bitton, 2010). However, the integration of electronic health records (EHR) into personal health records (PHR) has not yet been made to benefit the patient directly, particularly in the design of shared clinical decision-making software. Relieving critical symptoms of the larger healthcare system's failure requires a more comprehensive, dynamic intervention. Further protection of patient safety and, ultimately, health system safety requires attention to the broader scope of the root problem. Better management and utilization of informatics must be placed at the heart of patient-centered delivery of care, here called PCCMT. This expanded approach to HIT is known as knowledge management. It is not
enough to collect and control the information and organize it for efficient recall and communication.
Knowledge management combines technology-infused efficiency with timeliness, appropriateness, and
effectiveness of healthcare provision. This proposal illustrates an innovative application of IT-based
knowledge management to improve personal and public health.

4.1. Conceptual formulation of patient-centric care management technology

There is a critical need to conceptualize how patient-centric care modalities can be systematically
formulated and evaluated. It is, therefore, important to explore the components that constitute an ideal patient-centric care management technology. The HIT applications to community-based wellness centers,
using a PHR, have the potential to enhance the continuity of care and the patient-clinician communication.
The expected benefits may include improved patient-provider relationships, enhanced physician knowledge
of the patient status, increased patient adherence, reduced duplication of services and lab orders, improved
patient safety, and fewer missed appointments.

The foundational principles of patient-centric care management rely on the improvement of
interpersonal continuity of care and patient-provider communication. The IOM (2003) has made continuity
of care a primary goal of its comprehensive call for transforming the quality of care in the United States. In
2006, the American College of Physicians (ACP) established continuity of care as a central theme for
restructuring or reengineering healthcare. Recent research on life-limited patients receiving patient-centered
care management showed a notable 38% reduction in hospital utilization and a 26% reduction in overall
costs, with high patient satisfaction (Sweeney, Waranoff, & Halpert, 2007). Thus, it is imperative to
establish scientific evidence in support of the need for expanding the PHR as part of the patient-centric care
management technology.

4.2. Electronic personal health record (PHR)

The electronic personal health record (PHR) is a dynamic, longitudinal listing of up-to-date patient
allergies, clinical care providers, current medications, test results, problem list, living will and power of
attorney, and contact information. The PHR format will utilize a web-based secure vault, with or without a
USB storage drive, and will conform to health record interoperability standards. This comprehensive PHR
avails the patient and their physicians of healthcare information at the point of care. A constantly updated
PHR is expected to improve healthcare performance.
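The record elements listed above can be sketched as a simple data structure (illustrative only; the field names and the `add_result` helper are assumptions, not an interoperability standard):

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal sketch of the longitudinal PHR elements described above.
# Field names are illustrative assumptions, not a standard PHR schema.
@dataclass
class PersonalHealthRecord:
    allergies: list = field(default_factory=list)
    care_providers: list = field(default_factory=list)
    current_medications: list = field(default_factory=list)
    test_results: list = field(default_factory=list)  # (date, test, value) tuples
    problem_list: list = field(default_factory=list)
    living_will_on_file: bool = False
    power_of_attorney_contact: str = ""

    def add_result(self, when, test, value):
        """Append a dated test result, keeping the longitudinal history ordered."""
        self.test_results.append((when, test, value))
        self.test_results.sort(key=lambda r: r[0])  # chronological order

phr = PersonalHealthRecord(allergies=["penicillin"])
phr.add_result(date(2020, 3, 1), "HbA1c", "7.2%")
phr.add_result(date(2020, 1, 15), "HbA1c", "7.8%")
print(phr.test_results[0])  # earliest result first
```

Keeping results sorted by date is one simple way to preserve the "dynamic, longitudinal" character of the record described above.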

4.3. Methodological rigor and measurement of healthcare outcomes

Health services research and evaluation are based on scientific principles of experimentation (Wan,
1995). The measurement issues pertaining to outcomes should be examined and validated, particularly
those related to patient-reported outcomes (Leidy, Beusterien, Sullivan, Richner, & Muni, 2006). The
temporal sequences of outcome-related measures should be clearly ascertained before one can draw any
strong conclusion regarding the effectiveness and efficacy of patient-centric care modalities. The evaluation
of patient-reported outcomes should delineate the causal sequelae of proximal and distal outcomes, using an
experimental design. In addition, the study design should be able to tease out the main effects and
interaction effects of intervention variables on outcome measures. The proposed investigation is capable of
demonstrating how an ideal patient-centric care management technology can be implemented and evaluated
by a rigorous experimental design.
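The distinction between main effects and interaction effects can be made concrete with a small numeric sketch (the 2x2 design and cell means below are invented for illustration):

```python
# Hypothetical 2x2 design: PHR intervention (yes/no) x education session (yes/no).
# Cell values are invented mean satisfaction scores, purely for illustration.
means = {
    ("control", "no_edu"): 60.0,
    ("control", "edu"):    64.0,
    ("phr", "no_edu"):     70.0,
    ("phr", "edu"):        82.0,
}

# Main effect of the PHR: average difference across education levels.
phr_main = ((means[("phr", "no_edu")] + means[("phr", "edu")]) / 2
            - (means[("control", "no_edu")] + means[("control", "edu")]) / 2)

# Main effect of education, averaged across PHR arms.
edu_main = ((means[("control", "edu")] + means[("phr", "edu")]) / 2
            - (means[("control", "no_edu")] + means[("phr", "no_edu")]) / 2)

# Interaction: does education help MORE when the PHR is present?
interaction = ((means[("phr", "edu")] - means[("phr", "no_edu")])
               - (means[("control", "edu")] - means[("control", "no_edu")]))

print(f"PHR main effect: {phr_main}")        # 14.0
print(f"education main effect: {edu_main}")  # 8.0
print(f"interaction: {interaction}")         # 8.0
```

A nonzero interaction, as here, means the intervention effects are not simply additive, which is exactly what the study design must be able to tease out.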

4.4. Evidence-based knowledge and best practices in patient-centered care

Over the past twenty years, concerted efforts have been made to design and implement the concept of
patient-centered care through the use of care management technology. In recent years there has been an
explosion of evidence-based medicine/practice. This is the direct result of several factors: the aging of the
population, rising patient and professional expectations, the proliferation of new information technologies,
the growth of disease management modeling, and the demand for better healing environments (Wan, 2002).
Massive amounts of clinical and administrative data have been gathered. Little has been done, however, to
build the relational databases that can generate information for improving healthcare processes and
outcomes. Such systematic information is needed to build a repository of knowledge for the use of policy
decision makers, providers, administrators, facility designers, researchers, and patients. Evidence-based
knowledge gives users a competitive edge in making policy, clinical, administrative, and constructional
decisions that improve personal and public health (Wan and Connell, 2003). An article appearing in the
Journal of the American Medical Association (Westfall, Mold, & Fagnan, 2007) states that practice-based
research will generate new knowledge and bridge the chasm between recommended care and improved
health. Practice-based research through intervention studies is a needed expansion of the NIH Roadmap
(Meek and Prudino, 2017).

In 2001, the Institute of Medicine recommended that “all healthcare organizations, professional groups,
and private and public purchasers should pursue six major aims; specifically, healthcare should be safe,
effective, patient-centered, timely, efficient, and equitable” (IOM, 2001). Teaching the patient and the
clinician to use a personal health record (PHR) could help achieve several of these aims. A report from the
National Committee on Patient Safety and Health Information Technology identified potential benefits of
PHRs and PHR systems (IOM, 2011). They included: improving patient understanding of health issues,
increasing patient control over access to personal health information, supporting timely and appropriate
preventive services, strengthening communication with providers, and supporting home monitoring for
chronic diseases. PHRs can also support understanding and appropriate use of medications, support
continuity of care across time and providers, avoid duplicate tests, and reduce adverse drug interactions and
allergic reactions (U.S. Department of Health and Human Services, 2006).

Because of the concern about the Medicaid crisis and the lack of coordinated care for vulnerable
populations, increased coordination of PHR and EHR, patient and provider communication, and education
holds promise for greater economic and clinical improvements. Furthermore, it is imperative to integrate
digitalized data gathered from health and social services networks. Thus, coordinated care and continuity
of care for the high-risk patient population can be greatly facilitated (Weil, 2020).

The questions related to outcomes evaluation are grouped into two broad categories: 1) proximal
outcomes—health resource use, patient safety, patient and provider satisfaction; and 2) distal outcomes—
patient-reported outcomes, wellness, and reduction of adverse health events. The participants in the focus
group discussions reached a consensus: a collaborative team should conduct a thorough and scientific
experiment to evaluate the benefits of implementing the PHR.

The American Health Information Management Association (AHIMA) provides free community-based
education programs on the PHR and maintains a public website for education and training on the benefits
of the PHR. AHIMA will partner with and support the PHR and CCMT project and provide initial training
for patients in St. Johns County in the use of the PHR.

4.5. Methodology and research design

A randomized trial design is formulated to investigate the benefits of, and barriers to, implementing
patient-centric care management technologies in wellness centers. A conceptual framework to guide the
research design is presented in Figure 5.
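A minimal sketch of the stratified randomized assignment such a trial implies (the strata, the 1:1 ratio, and the arm labels are assumptions for illustration):

```python
import random

def stratified_assign(participants, seed=42):
    """Randomize participants to the PHR intervention vs control within strata.

    `participants` is a list of (participant_id, stratum) pairs; the strata
    could be age group x gender, per the framework's contextual variables.
    """
    rng = random.Random(seed)
    assignment = {}
    by_stratum = {}
    for pid, stratum in participants:
        by_stratum.setdefault(stratum, []).append(pid)
    for stratum, pids in by_stratum.items():
        rng.shuffle(pids)          # random order within each stratum
        half = len(pids) // 2
        for pid in pids[:half]:
            assignment[pid] = "phr_intervention"
        for pid in pids[half:]:
            assignment[pid] = "control"
    return assignment

# Toy cohort: 40 participants split evenly across two invented strata.
cohort = [(f"P{i:03d}", ("65+", "F") if i % 2 else ("<65", "M")) for i in range(40)]
groups = stratified_assign(cohort)
print(sum(1 for g in groups.values() if g == "phr_intervention"))  # 20
```

Stratifying before randomizing keeps the arms balanced on the contextual variables that the framework treats as confounders.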

Fig. 5. Analytical framework. [Figure: contextual/structural variables (age, gender, ethnicity/race) and care
management technology components (PHR education, pharmacist and physician communication, use of the
PHR) lead to interpersonal continuity of care and patient-provider communication; these in turn influence
health services use (patient visits, duplicate laboratory tests and imaging exams, emergency room visits (>1
per six months), hospitalizations (>1 in previous 12 months)), proximal outcomes (patient and provider
satisfaction), and distal outcomes (health status, patient adherence to treatment regimen, and adverse drug
events detected by physician/pharmacy consultation).]

4.6. Plan to make use of clinical and administrative data to prescribe best-performance practices
based on research evidence

Analysis of clinical and administrative data is planned to determine factors contributing to improved
performance. Analysis will be in terms of improved patient outcomes, patient cost, quality of care, and
patient safety based on measured performance comparing intervention to controls. The results will thereby
serve as a sound evidence-based prescription for patient-centered care management and cost reduction
without consequence to quality of care.

By focusing on elements known to be strengths of wellness centers, PCCMT demonstrates a patient-
centric care plan that recognizes the benefit of revolving services around the individual participant’s needs.
The participant is nestled in the field of their healthcare advocate, a technologically well-connected
Medical-Social Navigator trained to guide them through their healthcare choices and facilitate coordination
(inside and out) of the care advised by the provider team. This advocate, the HIT-equipped Medical-Social
Navigator, is firmly seated between both the participant’s sphere and the realm of the wellness center, where
she/he can coordinate care needs from appointments to group education to childcare referrals. The wellness
center staff and resources are encompassed by the larger community of specialists and other health agencies
(Figure 6).

Fig. 6. PCCMT-based care process: A patient-centric care model

The products of this project include a collaborative program of offering PHRs to participants. This will
facilitate patient-provider communication regarding the current medication profile, healthcare history, and
results of patient-controlled monitoring, as well as interactive patient education projects on mobile devices
for post-discharge self-training.

5. Concluding Remarks

This paper points out the trends and issues pertaining to AI research in healthcare. Transdisciplinary
science plays an important role in facilitating the convergence and standardization of the concepts and
principles of AI research in healthcare. In light of the current development of patient-centered AI
applications, we briefly identify care management issues associated with access, costs, and quality of care
at the population level. The paper also highlights the theoretical and empirical relevance to the design of AI
healthcare applications for self-care management. A value-based strategy relying on the implementation of
patient-centered technologies, as an example, will directly benefit patient care outcomes and reduce costs
of care.

The convergence of multiple disciplines in the conduct of AI healthcare research requires new
partnerships among academic scholars, healthcare practitioners, data scientists, and information
technologists. The collaborative work will greatly enhance the formulation of theoretically relevant
frameworks to guide empirical research and application, which will be particularly relevant in the search
for causal mechanisms to reduce costly and avoidable hospital readmissions for chronic conditions.

AI is changing the world in every area of human life (Lee, 2018). Different types and generations of AI
approaches and applications have been developed and used (Schwartz et al., 1987). The current trend in AI
research will continue as technologies such as predictive analytics, big data-to-knowledge, robotics, and
the Internet of Things (IoT) emerge. If AI functions are appropriately and effectively applied to healthcare,
evidence-based practices could be standardized, further improving the efficiency of health services and
addressing the delivery problems associated with accessibility, costs, and safety/quality. The Society for
Design and Process Science (SDPS) is uniquely positioned to shape coordinated science and research by
encouraging collaboration and the convergence of scientific developments of functional AI products and
decision support systems that enhance the personalized experience and the receipt of high-quality care,
particularly through the implementation of innovative care management technologies applicable to shared
clinical decision-making models, prevention, disease detection, diagnosis, therapeutics, and rehabilitation.
The availability of massive data generated from electronic medical records, coupled with cloud-based and
blockchain databases, will greatly enhance AI research in the future (Hou and Xiao, 2019). Thus, AI
research in healthcare can answer relevant questions about how to optimize limited resources and achieve
competitive health goals in medical and public health practices.


References

Bates, D.W. & Bitton, A. (2010). The future of health information technology in the patient-centered
medical home. Health Affairs 29(4), 614-621.

Bates, D.W. & Singh, H. (2018). Two decades since to err is human: An assessment of progress and
emerging priorities in patient safety. Health Affairs 37 (11), 1736-1743.

Breen, G. M., Wan, T. T. H., Zhang, N. J., Marathe, S. S., Seblega, B. K., & Paek, S. C. (2009). Improving
doctor-patient communication: Examining innovative modalities vis-à-vis effective patient-centric
care management technology. Journal of Medical Systems 33, 155-162.

Dauwed, M. & Meri, A. (2019). IOT service utilization in healthcare. IntechOpen.

Donabedian, A. (1966). Evaluating the quality of medical care. Milbank Memorial Fund Quarterly 44(3),
166-206.

Douglas, C.H. & Douglas, M.R. (2005). Patient-centered improvements in healthcare built environments:
Perspectives and design interventions. Health Expectations 8, 264-276.

Ertas, A., Tanik, M.M., and Maxwell, T. (2000). Transdisciplinary engineering education and research
model. Journal of Integrated Design and Process Science 4(4), 1-11.

Hou, Z.X. & Xiao, Y. (2019). Special issue on big data for IoT cloud computing convergence. Web
Intelligence 17, 101-103. DOI: 10.3233/WEB-190404.

Huelat, B. & Wan, T.T.H. (2011). Healing Environments: What’s the Proof? Alexandria, VA: Medezyn.

Huisman, E.R.C.M. et al. (2012). Healing environment: A review of the impact of physical environmental
factors on users. Building and Environment 58,70-80.

Institute of Medicine. (2001). Crossing the Quality Chasm: A New Health System for the 21st Century.
Washington, DC: National Academy Press; 2001.

Institute of Medicine. (2011). Health IT and Patient Safety. National Academy Press.

Jiang, F., Jiang, Y., Zhi, H., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H. & Wang, Y. (2017). Artificial
intelligence in healthcare: Past, present and future. Stroke and Vascular Neurology, e000101.

Lee, K.S. & Wan, T.T.H. (2002). Effects of hospitals’ structural clinical integration on efficiency and
patient outcome. Health Services Management Research 15, 234-244.

Lee, K.F. (2018). AI Superpowers: China, Silicon Valley and the New World Order. New York: Houghton
Mifflin Harcourt.

Leidy, N.K., Beusterien, K., Sullivan, E., Richner, S.E. & Muni, N.I. (2006). Integrating patient’s
perspective into device evaluation trials. Value in Health 9(6), 394-401.

Meek, S. & Prudino, R. (2017). Practice concepts will become intervention research effective January 2017.
Gerontologist 57(2), 151-152.

Naziv, S., Ali, Y., Ullah, N. & Garcia-Magarino, I. (2019). Internet of things for healthcare using effects of
mobile computing: A systematic literature review. Wireless Communications and Mobile Computing.
DOI: 10.1155/2019/5931315.

Rav-Marathe, K., Wan, T.T.H. & Marathe, S. (2016). The effect of health education on clinical and self-
reported outcomes of diabetes in a medical practice. Journal of Integrated Design and Process Science
20(1), 45-63.

Reckers-Droog, V., Federici, C., Brouwer, W. & Drummond, M. (2020). Challenges with coverage
evidence development schemes for medical devices: A systematic review. Health Policy and
Technology. DOI: 10.1016/j.hlpt.2020.02.006.

Saqla, M. & Saqib, N. (2019). Internet of things technologies for healthcare. IntechOpen.

Schwartz, W.B., Patil, R.S. & Szolovits, P. (1987). Artificial intelligence in medicine. New England
Journal of Medicine 316, 685-688. DOI: 10.1056/nejm198703123161109.

Shettian, K.M. & Wan, T.T.H. (2018). Structural factors influencing the standardization process in acute
care hospitals. Journal of Hospital and Healthcare Administration. JHHA-115 (3rd issue).

Shortell, S.M., Rundall, T.G. & Hsu, J. (2007). Improving patient care by linking evidence-based medicine
and evidence-based management. Journal of the American Medical Association, 298(6), 673-676.

Sweeney, L., Waranoff, J., & Halpert, A. (2007). Patient-centered management of complex patients can
reduce cost without shortening life. American Journal of Managed Care 13(2), 84-92.

Ulrich, R., Quan, X. & Zimring, C. (2004). The Role of the Physical Environment in the Hospital of the
21st Century: A Once-in-a-Lifetime Opportunity.

U.S. Department of Health and Human Services. (2006). Personal Health Records and Personal Health
Record Systems: A Report and Recommendations for the National Committee on Vital and Health Statistics.

Wan, T.T.H. (1995). Analysis and Evaluation of Health Care Systems: An Integrated Approach to
Managerial Decision Making. Baltimore: Health Professions Press.

Wan, T.T.H. (2002). Evidence‐Based Health Care Management: Multivariate Modeling Approaches.
Boston: Kluwer Academic Publishers.

Wan. T.T.H. & Connell, A. (2003). Monitoring the Quality of Health Care: Issues and Scientific
Approaches. Boston: Kluwer Academic Publishers.

Wan, T.T.H., Terry, A., McKee, B., & Kattan, W. (2017). A KMAP‐O framework for care management
research of patients with type 2 diabetes. The World Journal of Diabetes, 8(4), 165-171.

Wan, T.T.H., Terry, A., Cobb, E., McKee, B., Tregerman, R., & Barbaro, S.D.S. (2017). Strategies to
modify the risk of heart failure readmission: A systematic review and meta-analysis. Health Services
Research and Managerial Epidemiology, 4, 1-16.

Wan, T.T.H. (2017). A population health approach to care management interventions and healthcare
artificial intelligence. Journal of Biomedical Research and Practice 1(1), 1-8.

Wan, T.T.H. (2018). Population Health Management for Poly Chronic Conditions: Evidence-Based
Research Approaches. New York: Springer.

Wan, T.T.H. (2019). A clinical decision support system approach to artificial intelligence research for
chronic care management innovations. Proceedings on AI Research, Society for Design and Process
Science, Italy.

Weil, A.R. (2020). Integrating social services and health. Health Affairs 39(4), 551-551.

Westfall, J.M., Mold, J., & Fagnan, L. (2007). Practice-based research—Blue highways on the NIH road
map. Journal of the American Medical Association, 297(4), 403-406.

Williams, C. & Wan, T.T.H. (2015). The influence of remote monitoring on clinical decision making: A
pilot study. Home Health Care Management and Practice. DOI: 10.1177/1084822315604600.

(2019). The 2019 County Health Rankings and Roadmaps. A report presented by RWJF and the
University of Wisconsin Population Health Institute.

Author Biographies

Thomas T.H. Wan, Professor of Health Administration and Medical Informatics, Kaohsiung Medical
University and Professor Emeritus, Department of Health Management and Informatics, University of
Central Florida, is an experienced health services researcher who has contributed to health informatics
research, predictive analytics and modeling, quality assessment and evaluation, and health system design
and analysis. He is the author of 14 books, more than 200 articles, and 28 book chapters in health care
research and design. He serves on the editorial boards of 7 scientific journals in health care. He is the
corresponding author for this article.

Copyright of Journal of Integrated Design & Process Science is the property of IOS Press and
its content may not be copied or emailed to multiple sites or posted to a listserv without the
copyright holder’s express written permission. However, users may print, download, or email
articles for individual use.

Review Vol 395 May 16, 2020 1579

Artificial intelligence and the future of global health
Nina Schwalbe*, Brian Wahl*

Concurrent advances in information technology infrastructure and mobile computing power in many low and
middle-income countries (LMICs) have raised hopes that artificial intelligence (AI) might help to address challenges
unique to the field of global health and accelerate achievement of the health-related sustainable development goals. A
series of fundamental questions have been raised about AI-driven health interventions, and whether the tools,
methods, and protections traditionally used to make ethical and evidence-based decisions about new technologies can
be applied to AI. Deployment of AI has already begun for a broad range of health issues common to LMICs, with
interventions focused primarily on communicable diseases, including tuberculosis and malaria. Types of AI vary, but
most use some form of machine learning or signal processing. Several types of machine learning methods are
frequently used together, as is machine learning with other approaches, most often signal processing. AI-driven
health interventions fit into four categories relevant to global health researchers: (1) diagnosis, (2) patient morbidity
or mortality risk assessment, (3) disease outbreak prediction and surveillance, and (4) health policy and planning.
However, much of the AI-driven intervention research in global health does not describe ethical, regulatory, or
practical considerations required for widespread use or deployment at scale. Despite the field remaining nascent,
AI-driven health interventions could lead to improved health outcomes in LMICs. Although some challenges of
developing and deploying these interventions might not be unique to these settings, the global health community will
need to work quickly to establish guidelines for development, testing, and use, and develop a user-driven research
agenda to facilitate equitable and ethical use.

AI is changing how health services are delivered in many
high-income settings, particularly in specialty care
(eg, radiology and pathology).1–3 This development has
been facilitated by the growing availability of large
datasets and novel analytical methods that rely on such
datasets. Concurrent advances in information technology
(IT) infrastructure and mobile computing power have
raised hopes that AI might also provide opportunities to
address health challenges in LMICs.4 These challenges,
including acute health workforce shortages and weak
public health surveillance systems, undermine global
progress towards achieving the health-related sustainable
development goals (SDGs).5,6 Although not unique to
such countries, these challenges are particularly relevant
given their contribution to morbidity and mortality.7,8

AI-driven health technologies could be used to address
many of these and other system-related challenges.4
For example, in some settings, AI-driven interventions
have supplemented clinical decision making towards
reducing the workload of health workers.9 New developments in AI have also helped to identify disease
outbreaks earlier than traditional approaches, thereby
supporting more timely programme planning and
policy making.10 Although these interventions show promise, there remain several ethical, regulatory, and
practical issues that require guidance before scale-up or widespread deployment in low-income and
middle-income settings.

The global health community, including several large
donor agencies, has increasingly recognised the urgency
of addressing these issues towards ensuring that
populations in low and middle-income settings benefit
from developments in digital health and AI.11 Several
global meetings have taken place since 2015.12–14 For example, in May, 2018, the World Health Assembly
adopted a resolution on digital technologies for universal
health coverage.15 In 2019, the United Nations Secretary
General’s High-Level Panel on Digital Cooperation
recommended that “by 2030, every adult should have
affordable access to digital networks, as well as digitally-
enabled financial and health services, as a means to
make a substantial contribution to achieving the SDGs”.16

Lancet 2020; 395: 1579–86

*Joint first authors

Heilbrunn Department of
Population and Family Health,
Columbia Mailman School of
Public Health, New York, NY,
USA (N Schwalbe MPH); Spark
Street Advisors, New York, NY,
USA (N Schwalbe, B Wahl PhD);
and Department of
International Health, Johns
Hopkins Bloomberg School of
Public Health, Baltimore, MD,
USA (B Wahl)

Correspondence to:
Nina Schwalbe, Columbia
Mailman School of Public Health,
New York, NY 10032, USA
[email protected]

Search strategy and selection criteria

We reviewed PubMed, MEDLINE, and Google Scholar.
This Review included peer-reviewed research articles
published in English between Jan 1, 2010, and Dec 31, 2019.
Relevant articles were identified using search terms that
included low and middle-income country names (appendix
pp 2–7) and “artificial intelligence”, “augmented intelligence”,
“computational intelligence”, and “machine learning”.
The titles and abstracts of identified articles were initially
reviewed by a study reviewer to assess whether the study was
done in a low-income or middle-income country, according
to the World Bank Atlas country classification method, and
focused on health or health system challenges that could be
addressed with artificial intelligence (AI) interventions.
We synthesised key themes and trends, using a previously
described classification for AI-driven health interventions
(ie, expert systems, machine learning, natural language
processing, automated planning and scheduling, and image
and signal processing) and broad categories of health
interventions (ie, diagnosis, risk assessment, disease outbreak
prediction and surveillance, and health policy and planning).
We excluded studies done in LMICs where AI might have been
used to develop a drug or diagnostic, but was not a central
component of the final health tool being studied.


In October, 2019, The Lancet and Financial Times
inaugurated a joint Commission focused on the
convergence of digital health, AI, and universal health
coverage.17 A report from this Commission is expected
in 2021.

In the context of these efforts to achieve the health-
related SDGs and ensure universal health coverage, we
aim to assess current AI research related to health in
LMICs. We identified the types of health issues being
addressed by AI, types of AI used in these interventions
(eg, machine learning, natural language processing,
signal processing), and whether there is sufficient
evidence that such interventions could improve health
outcomes in LMICs. In this Review we aim to highlight
additional research requirements, inform national and
global policy discussions, and support efforts to develop
a research and implementation agenda for AI in global
low-income and middle-income countries.

Current research on AI in LMICs
A full list of studies included in this narrative Review is
provided in the appendix (pp 8–11). AI interventions focus
on a broad range of health issues common to LMICs.
Most AI studies focused on communicable diseases,
including tuberculosis, malaria, dengue, and other
infectious diseases. Other AI studies focused on non-
infectious diseases in children and infants, preterm birth
complications, and malnutrition. Some interventions
aimed to address non-communicable diseases, including
cervical cancer. AI studies in LMICs addressed public
health from a broader perspective, particularly, health
policy and management. These studies include AI
research aimed at improving the performance of health
facilities, improving resource allocation from a systems
perspective, reducing traffic-related injuries, and other
health system issues.

The types of AI deployed in health research in LMICs
are described in the table. Most AI-driven health
interventions used some form of machine learning or
signal processing, or both. Studies often evaluated
the use of machine learning together with other AI
approaches, most often with signal processing. In
addition, several types of machine learning methods
were frequently used together. For example, a common
approach used in machine learning and signal processing
was the use of convolutional neural networks for feature
extraction, and support-vector machines for classification. A few research studies assessed interventions
based on natural language processing, data mining,
expert systems, or advanced planning.
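The pattern described above, convolutional feature extraction followed by support-vector-machine classification, can be sketched in miniature (a toy illustration, not any cited study's pipeline; the fixed gradient kernels stand in for learned CNN filters, and the images are synthetic):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation, numpy only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Fixed gradient kernels stand in for learned convolutional filters.
KERNELS = [np.array([[1.0, -1.0]]),     # responds to vertical edges
           np.array([[1.0], [-1.0]])]   # responds to horizontal edges

def extract_features(img):
    # Convolve with each kernel, then global-average-pool the magnitudes.
    return np.array([np.abs(conv2d(img, k)).mean() for k in KERNELS])

def make_image(label):
    img = rng.normal(0.0, 0.1, (8, 8))
    if label == 0:
        img[:, 4:] += 1.0   # vertical edge
    else:
        img[4:, :] += 1.0   # horizontal edge
    return img

labels = np.array([i % 2 for i in range(60)])
X = np.array([extract_features(make_image(y)) for y in labels])

# The SVM classifies images from the pooled convolutional features.
clf = SVC(kernel="rbf").fit(X[:40], labels[:40])
accuracy = clf.score(X[40:], labels[40:])
print(f"held-out accuracy: {accuracy:.2f}")
```

The division of labor mirrors the studies: the convolutional stage turns raw pixels into a compact feature vector, and the SVM draws the decision boundary in that feature space.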

AI-driven interventions for health
AI-driven health interventions broadly fit into four
categories described in the table. The automation or
support of diagnosis for communicable and non-communicable diseases emerged from studies as one of the
main uses of AI. Signal processing methods are often
used together with machine learning to automate the
diagnosis of communicable diseases. Signal processing
interventions focused specifically on the use of radiological
data for tuberculosis18,23 and drug-resistant tuberculosis,19
ultrasound data for pneumonia,24 microscopy data for
malaria,25–27 and other biological sources of data for
tuberculosis.28–30 Most diagnostic interventions using AI in
LMICs reported either high sensitivity, specificity, or high
accuracy (>85% for all), or non-inferiority to comparator
diagnostic tools. Machine learning aids clinicians in
diagnosing tuberculosis,31 and expert systems are used
for diagnosing tuberculosis32 and malaria.27 Studies
mostly reported high diagnostic sensitivity, specificity, and
accuracy; however, at least one study reported low accuracy
when attempting to identify asymptomatic cases.

AI-driven interventions also focused on the diagnosis
of non-communicable diseases in LMICs, primarily
using signal processing methods for disease detection,
including cervical cancer and pre-cervical cancer using
microscopy,33–36 or data from photos of the cervix called
cervigrams.37 The accuracy has been reported to be
greater than 90%. One study aimed to evaluate a low-
cost, point-of-care oral cancer screening tool using cloud-
based signal processing and reported high sensitivity and
specificity relative to that of an onsite specialist.38

Morbidity and mortality risk assessment is another
area for which AI driven interventions have been
assessed in the global health context. These interventions
are based largely on machine learning classification tools
and typically compare multiple machine learning
approaches with the aim of identifying the optimal
approach to characterise risk. This approach has also
been used at health facilities to predict disease severity in
patients with dengue fever20 and malaria,39 and children
with acute infections.40 Researchers have used this
approach to quantify the risk of tuberculosis treatment
failure41 and assess the risk of cognitive sequelae after
malaria infection in children.42
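The compare-several-classifiers pattern used in these risk-assessment studies can be sketched as follows (the patient features, the severity rule, and the candidate models are invented for illustration; no real dataset is used):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic patient records: age, temperature, platelet count (all invented).
n = 300
X = np.column_stack([
    rng.uniform(1, 80, n),       # age, years
    rng.normal(38.5, 1.0, n),    # temperature, deg C
    rng.normal(150, 40, n),      # platelets, 10^3 per uL
])
# Invented severity rule: high fever plus low platelets raises risk.
score = (X[:, 1] - 38.5) * 1.5 - (X[:, 2] - 150) / 30 + rng.normal(0, 0.5, n)
y = (score > 0).astype(int)

# Compare candidate models with cross-validation and keep the best,
# as the studies describe.
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
cv_scores = {name: cross_val_score(m, X, y, cv=5).mean()
             for name, m in models.items()}
best = max(cv_scores, key=cv_scores.get)
for name, s in cv_scores.items():
    print(f"{name}: mean CV accuracy {s:.2f}")
print("selected:", best)
```

Cross-validating each candidate on the same data is what lets a study claim one approach is "optimal" for characterizing risk rather than merely lucky on a single split.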

Table: Public health functions and associated types of AI

Diagnosis. Types of AI*: expert system; machine learning; natural language processing; signal
processing. Example: researchers applied machine learning and signal processing methods to digital chest
radiographs to identify tuberculosis cases18 and drug-resistant tuberculosis cases19.

Mortality and morbidity risk assessment. Types of AI*: data mining; machine learning; signal processing.
Example: to quantify the risk of dengue fever severity, researchers applied machine learning algorithms to
administrative datasets from a large tertiary care hospital in Thailand20.

Disease outbreak prediction and surveillance. Types of AI*: data mining; machine learning; natural
language processing; signal processing. Example: remote sensing data and machine learning algorithms
were used to characterise and predict the transmission patterns of Zika virus globally21.

Health policy and planning. Types of AI*: expert planning; machine learning. Example: machine learning
models were applied to administrative data from South Africa to predict length of stay among health-care
workers in underserved communities22.

AI=artificial intelligence. *Many types of AI were implemented together.

Review Vol 395 May 16, 2020 1581

Machine learning classification tools were also used
to estimate the risk of non-infectious disease health
outcomes. For example, studies have focused on esti-
mating anaemia risk in children using standardised
household survey data,43 identifying children with the
greatest risk of missing immunisation sessions,44 and
detecting high-risk births using cardiotocography
data.45 A study from Brazil aimed to assess the
behavioural risk classification of sexually active
teenagers.46 The reported accuracy of these tools ranged
from moderate (approximately 65%) to high.

Signal processing and machine learning have also been
used to estimate perinatal risk factors—eg, to automat-
ically estimate gestational age using data from ultrasound
images and other patient variables.47–49 Studies reported
high accuracy (>85%) relative to trained experts and other
standard gestational age estimation techniques.

Researchers are using AI for public health surveil-
lance to predict disease outbreak and evaluate disease
surveillance tools. Researchers have evaluated prediction
models using machine learning algorithms and remote
(ie, data collected by satellite or aircraft sensors) or local
(ie, data measured on site such as rainfall) sensing data to
estimate outbreaks of dengue virus. Although one study
reported high sensitivity and specificity for identifying
dengue outbreaks using a data-driven epidemiological
prediction method,50 other researchers51 found that
machine learning approaches for predicting dengue
outbreaks outperformed approaches based on linear
regression. Researchers have also used remote sensing
data and machine learning methods to predict malaria52,53
and Zika virus21 outbreaks with high reported accuracy.

Another common approach to disease prediction and
surveillance is the use of machine learning and data
mining, together with data from online social media
networks and search engines. One study used this
approach to predict dengue outbreaks54 and other studies
to track and predict influenza outbreaks.55,56 All studies
reported high accuracy compared with observed data.
Social media data and machine learning using artificial
neural networks were also used to improve surveillance
of HIV in China.57
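The search-query surveillance idea can be reduced to a very small sketch: regress reported case counts on a lagged search-volume signal, then use the fitted line to nowcast. The weekly numbers below are fabricated for illustration; the cited studies use real Google or Baidu query volumes and considerably more sophisticated models.

```python
# Hedged sketch of query-based outbreak surveillance: ordinary least squares
# of case counts on last week's search volume. Numbers are fabricated.
search = [10, 12, 18, 25, 40, 55, 48, 30, 20, 15]   # weekly search volume
cases  = [22, 26, 38, 52, 82, 112, 98, 62, 42, 32]  # cases one week later

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

a, b = fit_line(search, cases)

# Nowcast: predict next week's cases from this week's search volume.
this_week_search = 35
predicted_cases = a + b * this_week_search
print(f"fitted model: cases = {a:.1f} + {b:.2f} * search")
print(f"predicted cases next week: {predicted_cases:.0f}")
```

Real deployments must additionally handle seasonality, media-driven search spikes, and drift in query behaviour, which is where the machine learning and data mining methods cited above come in.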

AI-driven health interventions can also be used to
support programme policy and planning. One such
study used data from a health facility in Brazil and an
agent-based simulation model to compare programme
options aimed at increasing the overall efficiency of the
health workforce.58 In another study, researchers used
several government datasets—including health system,
environmental, and financial data—together with
machine learning (ie, artificial neural networks) to
optimise the allocation of health system resources by
geography based on an array of prevalent health
challenges.59 The use of expert planning methods and
household survey data to optimise community health-worker
visit schedules has also been reported in the literature;
however, no results have yet been published.60

Additionally, AI methods aimed at informing pro-
gramme planning efforts within facilities have been
evaluated in low and middle-income settings. Some
examples include forecasting the number of outpatient
visits at an urban hospital61 and the length of
health-worker retention,22 using machine learning methods
and large administrative datasets from health facilities.
In another example, researchers used expert systems
and administrative data to design a system for measuring
the performance of hospital managers.62

Researchers are also using machine learning and data
mining methods to improve road safety in LMICs. In one
study, researchers used street imagery available online
and machine learning to estimate helmet use prev-
alence.63 In another study, a large government dataset of
road injuries and data mining techniques were used to
predict road injury severity.64

Accelerating access to AI
Numerous data are available to show how AI is being
tested to address health challenges relevant to the
achievement of SDGs. Such interventions include disease-
specific applications and those aimed at strengthening
health systems. Many AI health interventions have shown
promising preliminary results, and could soon be used to
augment existing strategies for delivering health services
in LMICs. This is especially true in disease diagnosis,
where AI-powered interventions could be used in countries
with insufficient numbers of health providers, and in risk
assessment, where tools based largely on machine
learning could help to supplement clinical knowledge.9

Although the research identified in this Review
indicates that AI-driven health interventions can help to
address several existing and emerging health challenges,
many issues are not sufficiently described in these
studies and warrant further exploration. These issues
relate to the development of AI-driven health inter-
ventions; how efficacy and effectiveness are assessed
and reported; planning for deployment at scale; and
the ethical, regulatory, and economic standards and
guidelines that will help to protect the interests of
communities in LMICs. Although these issues have
been described elsewhere,4,11,65–67 they have not been
systematically or explicitly addressed in research
published to date. We highlight these areas and suggest a
framework for consideration in future development,
testing, and deployment.

From development to deployment
One of the most important challenges facing AI in
LMICs relates to appropriate development and design.
Although none of the articles we reviewed here have
explained the impetus for project development, there are
most likely multiple reasons that explain why particular
health challenges in LMICs have been targeted by AI
developers. Communicable diseases—including malaria
and tuberculosis—continue to account for a pronounced
burden of disease in LMICs5 and attract substantial
donor funding.68 In addition, the characteristics of some
common health challenges in LMICs are able to be
addressed by AI—eg, the use of ultrasound data to
diagnose respiratory diseases and identify preterm birth
risk factors. The availability and portability of digital
ultrasound units and large datasets that can be used to
train AI algorithms (including in high-income settings),
have contributed to the development and testing of such
interventions in LMICs.

Although interventions such as those identified in this
Review might be beneficial, it is important that the
research agenda and development of interventions is
driven by local needs, health system constraints, and
disease burden rather than availability of data and
funding. A global research agenda for AI interventions
relevant to LMICs would help to ensure that new tools are
developed to respond to population needs. Steps should
also be taken during the development of AI applications
to avoid ethnic, socioeconomic, and gender biases found
in some AI applications.

Another major challenge relates to the comparative
performance of algorithms, including benchmarking
against any current standard of care, and to the continuous
assessment of performance after deployment. Although
processes to enable benchmarking and assessment have
begun, including a collaboration between WHO and the
UN International Telecommunications Union (ITU),12,69
this type of testing will require adequate and representative
datasets from observational and surveillance studies,
electronic medical records, and social media platforms.
Open access to diverse datasets representing different
populations is particularly important, considering that
most AI-driven health interventions from the research
literature we identified are based on machine learning.
Enabling access across borders will require new types of
data sharing protocols and standards on inter-operability
and data labelling. This global movement could be
facilitated by an international collaboration so that data
are rapidly and equitably available for the development
and testing of AI-driven health interventions. Such
collaborations are already being developed in the UK by
initiatives such as the Health Data Research Alliance70
and the Confederation of Laboratories for Artificial
Intelligence Research in Europe.71

Reporting and methodological standards are also
required for AI health interventions in LMICs, particu-
larly those used for diagnostic tools. Although the
epidemiological and statistical methods used in studies
that we identified seem largely appropriate for the
research questions addressed, results were not reported
consistently. For example, some studies assessing diag-
nostic tools provide estimates of sensitivity, specificity,
and overall accuracy—ie, the probability of an individual
being correctly identified by a diagnostic test, which is

mathematically equivalent to a weighted average of the
sensitivity and specificity of the test. However, other
studies provided only a subset of these measurements.
The use of comparators was also inconsistently reported.
The Standards for Reporting of Diagnostic Accuracy
Studies72 provide guidelines for diagnostic assessments
and could be a starting point for standardising
research in AI diagnostics.
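The weighted-average relationship stated above can be verified with a small worked example. The confusion-matrix counts here are invented for illustration:

```python
# Worked example: overall accuracy equals the prevalence-weighted average of
# sensitivity and specificity. Counts are hypothetical.
TP, FN, FP, TN = 90, 10, 40, 360   # invented diagnostic-study counts
total = TP + FN + FP + TN

sensitivity = TP / (TP + FN)       # true positives among diseased
specificity = TN / (TN + FP)       # true negatives among non-diseased
prevalence  = (TP + FN) / total    # proportion of diseased individuals

accuracy_direct   = (TP + TN) / total
accuracy_weighted = prevalence * sensitivity + (1 - prevalence) * specificity

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
print(f"accuracy: direct={accuracy_direct:.2f} weighted={accuracy_weighted:.2f}")
```

This is why reporting only overall accuracy is insufficient: the same accuracy can arise from very different sensitivity and specificity combinations, depending on prevalence.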

None of the reviewed studies described whether
health technology assessments for an AI-driven health
intervention had been done. Standardised methods for
these assessments, including the extent to which these
interventions add value over current standards of care,
are urgently needed. Such methods should show how
well AI tools work outside study settings and highlight
related health system costs, including unintended
clinical, psychological, and social consequences. The
costs associated with false positive and false negative
results are also important to assess.
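One simple way to make such error costs explicit is an expected-cost calculation. The unit costs and prevalence below are assumed for illustration; this is not a method taken from the reviewed studies:

```python
# Sketch: expected per-patient misclassification cost of a screening tool,
# given its sensitivity/specificity and assumed (illustrative) unit costs.
def expected_error_cost(sens, spec, prevalence, cost_fn, cost_fp):
    """Expected cost per screened patient from misclassification alone."""
    fn_rate = prevalence * (1 - sens)          # missed cases
    fp_rate = (1 - prevalence) * (1 - spec)    # false alarms
    return fn_rate * cost_fn + fp_rate * cost_fp

# Two hypothetical tools: one more sensitive, one more specific.
assumed = dict(prevalence=0.05, cost_fn=1000.0, cost_fp=50.0)
print("tool A (sens 0.95, spec 0.80):", expected_error_cost(0.95, 0.80, **assumed))
print("tool B (sens 0.85, spec 0.95):", expected_error_cost(0.85, 0.95, **assumed))
```

Under these assumed costs the more specific tool is cheaper per patient despite missing more cases; with a higher false-negative cost the ranking reverses, which is exactly why these costs need to be assessed explicitly.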

Although many studies reviewed here used statistical
methods that follow classic epidemiology methods,
basing their hypotheses on plausible models of causality,
some new AI-driven health interventions—particularly
those applying machine learning algorithms—identify
disease patterns and associations without a priori
hypotheses. Such approaches hold promise because they
are not necessarily affected by developer-introduced bias.
However, there remains a threat that false associations
could be identified and integrated into new AI-driven
health interventions.

The successful deployment of many AI-driven health
interventions will require investment to strengthen the
underlying health system. In addition to ethical concerns
related to diagnosing disease when treatment is not
available, the effectiveness of new diagnostic tools will
be limited if access to treatment is not expanded for all
patients. Similarly, tools that aim to predict outbreaks
and supplement surveillance would need to be supported
and complemented by robust surveillance systems to
guide an adequate public health emergency response if
an outbreak is accurately predicted.

Given the nascent stage of research on AI health
interventions in LMICs, global standards and guidelines
are needed to inform the development and evaluate
performance of tools in these settings. To support such
efforts, we provide several recommendations for research
and development of AI-driven health interventions in
low and middle-income settings using the AI application
value chain (figure).

Throughout the development and deployment phases,
we propose that researchers consider the principles for
digital development (panel).13 These principles provide
guidance on the best practice for development of digital
health technologies. Although none of the studies
reviewed here explicitly acknowledge digital principles,
we believe that they are helpful for development of
AI-driven health technologies. However, the digital
principles alone are insufficient. Institutional structures
also have an important role to play in the development
and deployment of new health technologies. Such
structures include appropriate regulatory and ethical
frameworks, benchmarking standards, pre-qualification
mechanisms, guidance on clinical and cost-effective
approaches, and frameworks for issues related to data
protection, in particular for children and youth, many of
whom now have a digital presence from birth. The
impact of AI tools on gender issues is another important
consideration and an area in which global guidance is
currently lacking.

AI does not need to be held to a higher standard of
research; however, its unique complexities, including the
requisite use of large datasets and the opaque nature of
some AI algorithms, will require approaches specifically
tailored to interventions and consideration of how efficacy
and effectiveness are assessed. Guidelines, such as those
from the EQUATOR network including the Transparent
Reporting of a Multivariable Prediction Model for
Individual Prognosis or Diagnosis—statement specific to
Machine Learning (TRIPOD-ML), Standard Protocol
Items: Recommendations for Interventional Trials
(SPIRIT)-AI, and Consolidated Standards of Reporting
Trials (CONSORT)-AI, that aim to harmonise termi-
nologies and reporting standards in prediction research,66
might help to guide researchers as they design and assess
AI interventions. Agencies in high-income countries,
including the US Food and Drug Administration, have
begun to develop separate regulatory pathways for
AI-driven health interventions.67 In addition to the UN
ITU benchmarking initiative, WHO has recently created a
new digital health department and released new guidelines
on digital health.73 These efforts can help to provide
valuable insight for LMICs.

Current AI research highlights additional areas for
strengthening standards and guidelines for AI research
in LMICs. Although most AI investigators report neces-
sary approvals by institutional review boards, indicating
that the studies were all done ethically, only a few
described how the research teams addressed issues of
informed consent or ethical research design in tools that
used large datasets and electronic health records.
Reporting on ethical considerations would help future
researchers to address these complex yet essential issues.

Similarly, only a few studies reported on the usability
or acceptability of AI tools from the provider or patients’
perspective, despite acknowledging that usability is
an important factor for AI interventions, particularly
in LMICs. Human-centred design, an approach to
programme and product development frequently cited in
technology literature, considers human factors to ensure
that interactive systems are more usable. Human-centred
design is acknowledged as an important factor
for the development of new technologies in LMICs.65

There was also an absence of randomised clinical
trials (RCTs) identified in the literature. Clinical trials
help to establish clinical efficacy in LMICs. Given the
challenges associated with conducting RCTs for new
health technologies,74 new approaches such as the Idea,
Development, Exploration, Assessment, and Long Term
(IDEAL) follow-up framework75 recommended for the
evaluation of novel surgical practices, could serve to
provide relevant learning. This framework provides
guidance on clinical assessment for surgical inter-
ventions, in the context of challenges that make clinical
trials difficult, including variation in setting, disparities
in quality, and subjective interpretation.

There were only a few references to any type of
implementation research to assess questions related to
adoption or deployment at scale. Assessing implemen-
tation-related factors could help to identify potential

Figure: Recommendations for development of artificial intelligence driven health applications in low and middle-income countries

Research and development:
• Incorporate human-centred design principles into application development
• Ensure equitable access to representative datasets
• Standardise reporting of efficacy and effectiveness
• Build consensus around appropriate statistical and epidemiological methods
• Assess relative benefits over current standard of care
• Develop standards for health technology assessments
• Encourage cost-effectiveness and cost–benefit evaluations
• Conduct implementation and systems-related research
• Do continuous assessments of efficacy and effectiveness

Underpinned by a user-driven research agenda aligned with digital principles, and by statistical, ethical, and regulatory standards.

Panel: Digital principles for artificial intelligence driven interventions in global health

• User-centred design starts with getting to know the people you are designing for by
conversation, observation, and co-creation

• Well designed initiatives and digital tools consider the particular structures and needs
that exist in each country, region, and community

• Achieving a larger scale requires adoption beyond a pilot population and often
necessitates securing funding or partners that take the initiative to new communities
and regions

• Building sustainable programmes, platforms, and digital tools is essential to maintain
user and stakeholder support, and to maximise long-term effect

• When an initiative is data driven, quality information is available to the right people
when they need it, and those people will use data to act

• An open approach to digital development can help to increase collaboration in the
digital development community and avoid duplicating work that has already been done

• Reusing and improving is about taking the work of the global development
community further than any organisation or programme can do alone

• Addressing privacy and security in digital development involves careful
consideration of which data are collected and how data are acquired, used, stored,
and shared

• Being collaborative means sharing information, insights, strategies, and resources
across projects, organisations, and sectors, leading to increased efficiency and effect



unintended consequences at an individual and system
level of AI interventions. Further, there was no
description of the costs related to patients, providers, or
systems. A thorough assessment of these costs is crucial
to inform cost-effectiveness analyses and the potential
for scalability.

Limitations and conclusions
First, relevant articles might have been published before
2010. However, the field of AI, particularly in global
health, is rapidly evolving and any articles that were not
included as a result of being published before 2010 are
unlikely to be representative of this field as it is today. In
addition, our Review included only English-language
articles. Given the prominence of AI research around the
world, excluding articles published in languages other
than English could be a limitation.

As with all reviews, publication bias is another potential
limitation. There are two probable sources of this bias in
AI research. First, studies with null results are less likely
to be published.76 For that reason, AI-driven health
interventions that have not shown statistically significant
results might be under-represented in our literature
Review. Second, investments in AI and health were
forecasted to have reached US$1∙7 billion in 2018,77 and
are increasingly dominated by private equity firms78 and
driven by so-called big tech companies such as Google
and Baidu ventures.79 Given that many interventions are
developed in the private sector for commercial use, some
AI developers might not place a high priority on
publishing the results in academic literature.80

AI is already being developed to address health issues in
LMICs. Current research is addressing a range of health
issues and using various AI-driven health interventions.
The breadth and promising results of these interventions
emphasise the urgency for the global community to act
and create guidance to facilitate deployment of effective
interventions. This point is particularly crucial given the
rapid deployment of AI-driven health interventions
which are being rolled out at scale as part of the severe
acute respiratory syndrome coronavirus 2 (SARS-CoV-2)
pandemic response. In many cases, this roll-out is being
carried out without adequate evidence or appropriate oversight.

In accordance with our recommendations, the global
health community will need to work quickly to: incorporate
aspects of human-centred design into the development
process, including starting from a needs-based rather
than a tool-based approach; ensure rapid and equitable
access to representative datasets; establish global systems
for assessing and reporting efficacy and effectiveness of
AI-driven interventions in global health; develop a
research agenda that includes implementation and
system related questions on the deployment of new
AI-driven interventions; and develop and implement
global regulatory, economic, and ethical standards and
guidelines that safeguard the interests of LMICs. These

recommendations will ensure that AI helps to improve
health in low and middle-income settings and contributes
to the achievement of the SDGs, universal health
coverage, and to the coronavirus disease 2019 (COVID-19) response.

Contributors
NS and BW are joint first authors. NS and BW reviewed the literature
and wrote the manuscript.

Declaration of interests
We declare no competing interests.

Fondation Botnar funded the data collection and supported an initial
synthesis of the literature which provided the basis for this Review.
The funder had no role in study design, data collection, data analysis,
data interpretation, writing of the report, or the decision to submit for
publication. All authors had full access to all the data used in the study
and the corresponding author had final responsibility for the decision to
submit for publication.

1 Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL.
Artificial intelligence in radiology. Nat Rev Cancer 2018; 18: 500–10.

2 Chang HY, Jung CK, Woo JI, et al. Artificial intelligence in
pathology. J Pathol Transl Med 2019; 53: 1–12.

3 Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and
pathologists as information specialists. JAMA 2016; 316: 2353–54.

4 Wahl B, Cossy-Gantner A, Germann S, Schwalbe NR. Artificial
intelligence (AI) and global health: how can AI contribute to
health in resource-poor settings? BMJ Glob Health 2018;
3: e000798.

5 Lozano R, Fullman N, Abate D, et al. Measuring progress from 1990
to 2017 and projecting attainment to 2030 of the health-related
Sustainable Development Goals for 195 countries and territories:
a systematic analysis for the Global Burden of Disease Study 2017.
Lancet 2018; 392: 2091–138.

6 Bartolomeos KK. The case for investing in public health surveillance
in low- and middle-income countries. Afr J Emerg Med 2018; 8: 127–28.

7 Alkire BC, Peters AW, Shrime MG, Meara JG. The economic
consequences of mortality amenable to high-quality health care in
low- and middle-income countries. Health Aff (Millwood) 2018;
37: 988–96.

8 Kruk ME, Gage AD, Joseph NT, Danaei G, García-Saisó S,
Salomon JA. Mortality due to low-quality health systems in the
universal health coverage era: a systematic analysis of amenable
deaths in 137 countries. Lancet 2018; 392: 2203–12.

9 Guo J, Li B. The application of medical artificial intelligence
technology in rural areas of developing countries. Health Equity
2018; 2: 174–81.

10 Lake IR, Colón-González FJ, Barker GC, Morbey RA, Smith GE,
Elliot AJ. Machine learning to refine decision making within a
syndromic surveillance service. BMC Public Health 2019; 19: 559.

11 USAID Center for Innovation and Impact. Artificial intelligence in
global health: defining a collective path forward. 2019. https://www.
webFinal_508.pdf (accessed May 26, 2019).

12 International Telecommunication Union. AI for good global
summit. 2017.
default.aspx (accessed May 28, 2019).

13 Digital Principles. Principles for digital development. 2017. (accessed May 26, 2019).

14 Digital Investment Principles. The principles of donor alignment
for digital health. 2018.
(accessed June 3, 2019).

15 WHO. Seventy-first world health assembly. Digital health. 2018.
(accessed Nov 19, 2019).

16 UN Secretary-General’s High-level Panel on Digital Cooperation.
The age of digital interdependence. 2018.
pdfs/DigitalCooperation-report-for%20web.pdf (accessed
Nov 19, 2019).


17 Kickbusch I, Agrawal A, Jack A, Lee N, Horton R. Governing health
futures 2030: growing up in a digital world-a joint The Lancet and
Financial Times Commission. Lancet 2019; 394: 1309.

18 Lopes UK, Valiati JF. Pre-trained convolutional neural networks as
feature extractors for tuberculosis detection. Comput Biol Med 2017;
89: 135–43.

19 Jaeger S, Juarez-Espinosa OH, Candemir S, et al. Detecting drug-
resistant tuberculosis in chest radiographs. Int J CARS 2018;
13: 1915–25.

20 Phakhounthong K, Chaovalit P, Jittamala P, et al. Predicting the
severity of dengue fever in children on admission based on clinical
features and laboratory indicators: application of classification tree
analysis. BMC Pediatr 2018; 18: 109.

21 Jiang D, Hao M, Ding F, Fu J, Li M. Mapping the transmission risk
of Zika virus using machine learning models. Acta Trop 2018;
185: 391–99.

22 Moyo S, Doan TN, Yun JA, Tshuma N. Application of machine
learning models in predicting length of stay among healthcare
workers in underserved communities in South Africa.
Hum Resour Health 2018; 16: 68.

23 Aguiar FS, Torres RC, Pinto JV, Kritski AL, Seixas JM, Mello FC.
Development of two artificial neural network models to support the
diagnosis of pulmonary tuberculosis in hospitalized patients in
Rio de Janeiro, Brazil. Med Biol Eng Comput 2016; 54: 1751–59.

24 Correa M, Zimic M, Barrientos F, et al. Automatic classification of
pediatric pneumonia based on lung ultrasound pattern recognition.
PLoS One 2018; 13: e0206410.

25 Go T, Kim JH, Byeon H, Lee SJ. Machine learning-based in-line
holographic sensing of unstained malaria-infected red blood cells.
J Biophotonics 2018; 11: e201800101.

26 Torres K, Bachman CM, Delahunt CB, et al. Automated microscopy
for routine malaria diagnosis: a field comparison on Giemsa-
stained blood films in Peru. Malar J 2018; 17: 339.

27 Andrade BB, Reis-Filho A, Barros AM, et al. Towards a precise test
for malaria diagnosis in the Brazilian Amazon: comparison among
field microscopy, a rapid diagnostic test, nested PCR, and a
computational expert system based on artificial neural networks.
Malar J 2010; 9: 117.

28 Khan S, Ullah R, Shahzad S, Anbreen N, Bilal M, Khan A.
Analysis of tuberculosis disease through Raman spectroscopy and
machine learning. Photodiagn Photodyn Ther 2018; 24: 286–91.

29 Mohamed EI, Mohamed MA, Moustafa MH, et al. Qualitative
analysis of biological tuberculosis samples by an electronic nose-
based artificial neural network. Int J Tuberc Lung Dis 2017; 21: 810–17.

30 Kuok CP, Horng MH, Liao YM, Chow NH, Sun YN. An effective and
accurate identification system of Mycobacterium tuberculosis using
convolution neural networks. Microsc Res Tech 2019; 82: 709–19.

31 Elveren E, Yumuşak N. Tuberculosis disease diagnosis using
artificial neural network trained with genetic algorithm. J Med Syst
2011; 35: 329–32.

32 Osamor VC, Azeta AA, Ajulo OO. Tuberculosis-Diagnostic Expert
System: an architecture for translating patients information from
the web for use in tuberculosis diagnosis. Health Informatics J 2014;
20: 275–87.

33 Zhao M, Wu A, Song J, Sun X, Dong N. Automatic screening of
cervical cells using block image processing. Biomed Eng Online
2016; 15: 14.

34 Chankong T, Theera-Umpon N, Auephanwiriyakul S.
Automatic cervical cell segmentation and classification in pap
smears. Comput Methods Programs Biomed 2014; 113: 539–56.

35 Kumar R, Srivastava R, Srivastava S. Detection and classification of
cancer from microscopic biopsy images using clinically significant
and biologically interpretable features. J Med Eng 2015;
2015: 457906.

36 Su J, Xu X, He Y, Song J. Automatic detection of cervical cancer
cells by a two-level cascade classification system.
Anal Cell Pathol (Amst) 2016; 2016: 9535027.

37 Hu L, Bell D, Antani S, et al. An observational study of deep
learning and automated evaluation of cervical images for cancer
screening. J Natl Cancer Inst 2019; 111: 923–32.

38 Uthoff RD, Song B, Sunny S, et al. Point-of-care, smartphone-based,
dual-modality, dual-view, oral cancer screening device with neural
network classification for low-resource communities. PLoS One
2018; 13: e0207493.

39 Johnston IG, Hoffmann T, Greenbury SF, et al. Precision
identification of high-risk phenotypes and progression pathways in
severe malaria without requiring longitudinal data. NPJ Digit Med
2019; 2: 63.

40 Kwizera A, Kissoon N, Musa N, et al. A machine learning-based
triage tool for children with acute infection in a low resource
setting. Pediatr Crit Care Med 2019; 20: e524–30.

41 Hussain OA, Junejo KN. Predicting treatment outcome of drug-
susceptible tuberculosis patients using machine-learning models.
Inform Health Soc Care 2019; 44: 135–51.

42 Veretennikova MA, Sikorskii A, Boivin MJ. Parameters of stochastic
models for electroencephalogram data as biomarkers for child’s
neurodevelopment after cerebral malaria. J Stat Distrib Appl 2018;
5: 8.

43 Meena K, Tayal DK, Gupta V, Fatima A. Using classification
techniques for statistical analysis of Anemia. Artif Intell Med 2019;
94: 138–52.

44 Chandir S, Siddiqi DA, Hussain OA, et al. Using predictive
analytics to identify children at high risk of defaulting from a
routine immunization program: feasibility study.
JMIR Public Health Surveill 2018; 4: e63.


  • Artificial intelligence and the future of global health
    • Introduction
    • Current research on AI in LMICs
    • AI-driven interventions for health
    • Accelerating access to AI
    • From development to deployment
    • Recommendations
    • Limitations and conclusions
    • Acknowledgments
    • References


Endocrinol Metab 2016;31:38-44
pISSN 2093-596X · eISSN 2093-5978


How to Establish Clinical Prediction Models
Yong-ho Lee1, Heejung Bang2, Dae Jung Kim3

1Department of Internal Medicine, Yonsei University College of Medicine, Seoul, Korea; 2Division of Biostatistics, Department
of Public Health Sciences, University of California Davis School of Medicine, Davis, CA, USA; 3Department of Endocrinology
and Metabolism, Ajou University School of Medicine, Suwon, Korea

A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymp-
tomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education.
Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statisti-
cal analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model develop-
ment and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for de-
veloping and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection;
handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods
for developing clinical prediction models with comparable examples from real practice. After model development and vigorous
validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use
in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading
to active applications in real clinical practice.

Keywords: Clinical prediction model; Development; Validation; Clinical usefulness


Hippocrates emphasized prognosis as a principal component of
medicine [1]. Nevertheless, current medical investigation
mostly focuses on etiological and therapeutic research, rather
than prognostic methods such as the development of clinical
prediction models. Numerous studies have investigated wheth-
er a single variable (e.g., biomarkers or novel clinicobiochemi-
cal parameters) can predict or is associated with certain out-

comes, whereas establishing clinical prediction models by in-
corporating multiple variables is rather complicated, as it re-
quires a multi-step and multivariable/multifactorial approach to
design and analysis [1].
Received: 9 January 2016, Revised: 14 January 2016,
Accepted: 27 January 2016
Corresponding authors: Dae Jung Kim
Department of Endocrinology and Metabolism, Ajou University School of
Medicine, 164 World cup-ro, Yeongtong-gu, Suwon 16499, Korea
Tel: +82-31-219-5128, Fax: +82-31-219-4497, E-mail: [email protected]

Yong-ho Lee
Department of Internal Medicine, Yonsei University College of Medicine, 50-1
Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea
Tel: +82-2-2228-1943, Fax: +82-2-393-6884, E-mail: [email protected]

Copyright © 2016 Korean Endocrine Society
This is an Open Access article distributed under the terms of the Creative
Commons Attribution Non-Commercial License (http://creativecommons.org/
licenses/by-nc/4.0/), which permits unrestricted non-commercial use,
distribution, and reproduction in any medium, provided the original work is
properly cited.

Clinical prediction models can inform patients and their physicians or other
healthcare providers of the patient's probability of having or developing a
certain disease and help them with associated decision-making (e.g.,
facilitating patient-doctor communication based on more objective
information). Applying a model to a real world problem can help with detection
or screening in undiagnosed high-risk subjects, which improves
the ability to prevent developing diseases with early interven-
tions. Furthermore, in some instances, certain models can pre-
dict the possibility of having future disease or provide a prog-
nosis for disease (e.g., complication or mortality). This review
will concisely describe how to establish clinical prediction
models, including the principles and processes for conducting
multivariable prognostic studies and developing and validating
clinical prediction models.


In the era of personalized medicine, prediction of prevalent or incident
diseases (diagnosis) and of outcomes for future disease course (prognosis)
has become more important for patient management by health-care personnel.
Clinical prediction models are
used to investigate the relationship between future or unknown
outcomes (endpoints) and baseline health states (starting point)
among people with specific conditions [2]. They generally
combine multiple parameters to provide insight into the relative
impacts of individual predictors in the model. Evidence-based
medicine requires the strongest scientific evidence, including
findings from randomized controlled trials, meta-analyses, and
systematic reviews [3]. Although clinical prediction models are
partly based on evidence-based medicine, the user must also
adopt practicality and an artistic approach to establish clinically
relevant and meaningful models for targeted users.
Models should predict specific events accurately and be relatively simple
and easy to use. If a prediction model provides inaccurate estimates of
future-event occurrences, it will mislead healthcare professionals into
providing insufficient management of patients or resources. On the other
hand, if a model has high predictive power but is difficult to apply (e.g.,
involving complicated calculations or unfamiliar questions/items or units),
time consuming, costly [4], or less relevant (e.g., a European model applied
to Koreans, or an event too far in the future), it will not be commonly used.
For example, a diabetes prediction model developed by Lim et al. [5] has a
relatively high area under the receiver operating characteristic curve
(AUC, 0.77), but its risk score includes blood tests for hemoglobin A1c,
high density lipoprotein cholesterol, and triglyceride, which generally
require a clinician's involvement and could therefore be a major barrier to
use in community settings. When prediction models consist of complicated
mathematical equations [6,7], a web-based application can enhance
implementation (e.g., an online calculator for 10-year and lifetime risk of
atherosclerotic cardiovascular disease [CVD]). Therefore, achieving a
balance between predictability and simplicity is key to a good clinical
prediction model.


There are several reports [1,8-13] and a textbook [14] that de-
tail methods to develop clinical prediction models. Although
there is currently no consensus on the ideal construction meth-
od for prediction models, the Prognosis Research Strategy
(PROGRESS) group has proposed a number of methods to im-
prove the quality and impact of model development [2,15]. Re-
cently, investigators on the Transparent Reporting of a multi-
variable prediction model for Individual Prognosis Or Diagno-
sis (TRIPOD) study have established a checklist of recommen-
dations for reporting on prediction or prognostic models [16].
This review will summarize the analytic process for developing
clinical prediction models into five stages.

Stage 1: preparation for establishing clinical prediction models
The aim of prediction modeling is to develop an accurate and
useful clinical prediction model with multiple variables using
comprehensive datasets. First, we have to articulate several im-
portant research questions that affect database selection and the
approach of model generation. (1) What is the target outcome
(event or disease) to predict (e.g., diabetes, CVD, or fracture)?
(2) Who is the target patient of the model (e.g., general popula-
tion, elderly population ≥65 years or patients with type 2 dia-
betes)? (3) Who is the target user of the prediction model (e.g.,
layperson, doctor or health-related organization)? Depending
on the answers to the above questions, researchers can choose
the proper datasets for the model. The category of target users
will determine the selection and handling process of multiple
variables, which will affect the structure of the clinical prediction
model. For example, if researchers want to make a prediction model for
laypersons, a simple model with a few user-friendly questions, each with
only a few answer categories (e.g., yes vs. no), could be ideal.

Stage 2: dataset selection
The dataset is one of the most important components of the
clinical prediction model—often not under investigators' control—and
ultimately determines its quality and credibility; however, there are no
general rules for assessing the quality of data [9]. Yet, there is no such
thing as a perfect dataset or a perfect model, so it is reasonable to
search for the best-suited dataset. Oftentimes, secondary or administrative
data sources must be utilized because a primary dataset with the study
endpoint and all of the key predictors is not available.
different types of datasets, depending on the purpose of the
prediction model. For example, a model for screening high-risk
individuals with undiagnosed condition/disease can be devel-
oped using cross-sectional cohort data. However, such models
may have relatively low power for predicting future incidence
of disease when different risk factors come into play. Accord-
ingly, longitudinal or prospective cohort datasets should be
used for prediction models for future events (Table 1). Models
for prevalent events are useful for predicting asymptomatic
diseases, such as diabetes or chronic kidney disease, by screen-
ing undiagnosed cases, whereas models for incident events are
useful for predicting the incidence of relatively severe diseases,
such as CVD, stroke, and cancer.
A universal clinical prediction model for disease does not
exist; thus, separate specific models that can individually as-
sess the role of ethnicity, nationality, sex, or age on disease risk
are warranted. For example, the Framingham coronary heart
disease (CHD) risk score is generated by one of the most com-
monly used clinical prediction models; however, it tends to
overestimate CHD risk by approximately 5-fold in Asian popu-
lations [17,18]. This indicates that models derived from one
ethnicity sample may not be directly applied to populations of
other ethnicities. Other specific characteristics of study popula-
tions beside ethnicity (e.g., obesity- or culture-related vari-
ables) could be important.
There is no absolute consensus on the minimal requirement
for dataset sample size. Generally, large representative, contem-

porary datasets that closely reflect the characteristics of their
target population are ideal for modeling and can enhance the
relevance, reproducibility, and generalizability of the model.
Moreover, two types of datasets are generally needed: a devel-
opment dataset and a validation dataset. A clinical prediction
model is first derived from analyses of the development dataset
and its predictive performance should be assessed in different
populations based on the validation dataset. It is highly recom-
mended to use validation datasets from external study popula-
tions or cohorts, whenever available [19,20]; however, if it is
not possible to find appropriate external datasets, an internal
validation dataset can be formed by randomly splitting the orig-
inal cohort into two datasets (if sample size is large) or statisti-
cal techniques such as jackknife or bootstrap resampling (if not)
[21]. The splitting ratio can vary depending on the researchers’
particular goals, but generally, more subjects should be allocat-
ed to the development dataset than to the validation dataset.
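The splitting step described above can be sketched in a few lines of Python; the 70/30 ratio, the fixed random seed, and the synthetic subject IDs below are illustrative assumptions, not recommendations from the text:

```python
import random

def split_dataset(subjects, dev_fraction=0.7, seed=42):
    """Randomly split a cohort into development and validation datasets.

    More subjects are allocated to the development dataset, as the text
    recommends; the 70/30 ratio and fixed seed are illustrative choices.
    """
    rng = random.Random(seed)   # fixed seed for a reproducible split
    shuffled = subjects[:]      # copy so the input list is not mutated
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * dev_fraction)
    return shuffled[:cut], shuffled[cut:]

# Synthetic cohort of 1,000 subject identifiers
cohort = [f"subject_{i}" for i in range(1000)]
development, validation = split_dataset(cohort)
```

For small cohorts, the same idea is replaced by bootstrap or jackknife resampling, as noted above.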

Stage 3: handling variables
Since cohort datasets contain more variables than can reason-
ably be used in a prediction model, evaluation and selection of
the most predictive and sensible predictors should be done.
Generally, inclusion of more than 10 variables/questions may decrease the
efficiency, feasibility, and convenience of prediction models, although an
expert's judgment, which can be somewhat subjective, is required to assess
the need in each situation. Predictors that were previously found to be
significant should normally be considered as candidate variables (e.g.,
family history of diabetes in a diabetes risk score). It should be noted
that not all statistically significant predictors (e.g., P<0.05) need to
be included in the final model; predictor selection must always be guided
by clinical relevance and judgement to exclude nonsensical, less relevant,
or user-unfriendly variables (e.g., socioeconomic status-related) and
possible false-positive associations. Additionally,

Table 1. Characteristics of Different Clinical Prediction Models according to Their Purpose

Characteristic              | Prevalent/concurrent events                                                            | Incident/future events
Data type                   | Cross-sectional data                                                                   | Longitudinal/prospective cohort data
Application                 | Useful for screening undiagnosed cases of asymptomatic diseases (e.g., diabetes, CKD)  | Useful for predicting the incidence of diseases (e.g., CVD, stroke, cancer)
Aim of the model            | Detection                                                                              | Prevention
Simplicity in model and use | More important                                                                         | Less important
Example                     | Korean Diabetes Score [34]                                                             | ACC/AHA ASCVD risk equation [7]

CKD, chronic kidney disease; CVD, cardiovascular disease; ACC/AHA, American College of Cardiology/American Heart Association; ASCVD, atherosclerotic cardiovascular disease.


variables which are highly correlated with others may be ex-
cluded because they contribute little unique information [22].
On the other hand, variables not statistically significant or with
small effect size may still contribute to the model [23]. De-
pending on researcher discretion, different models that analyze
different variables may be developed to target distinct users. For
example, a simple clinical prediction model that does not require
laboratory variables could be designed for laypersons, while a
comprehensive model that does include them could be designed for
healthcare providers [19].
With regard to variable coding, categorical and continuous
variables should be managed differently [8]. For ordered cate-
gorical variables, infrequent categories can be merged and sim-
ilar variables may be combined/grouped. For example, past and
current smoker categories can be merged if numbers of sub-
jects who report being a past or current smoker are relatively
small and variable unification does not alter the statistical sig-
nificance of the model materially. Although continuous param-
eters are usually included in a regression model, assuming lin-
earity, researchers should consider the possibility of non-linear
associations such as J- or U-shaped distributions [24]. Further-
more, the relative effect of a continuous variable is determined
by the measurement scale used in the model [8]. For example,
the impact of fasting glucose levels on the risk of CVD may be
interpreted as having a stronger influence when scaled per 10
mg/dL than per 1 mg/dL.
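The scale-dependence of a continuous predictor's reported effect can be illustrated with a hypothetical logistic coefficient (the β value below is invented for illustration, not taken from any study):

```python
import math

# Hypothetical log-odds increase per 1 mg/dL of fasting glucose
beta_per_mg = 0.02

or_per_1 = math.exp(beta_per_mg)         # odds ratio per 1 mg/dL
or_per_10 = math.exp(beta_per_mg * 10)   # same association, per 10 mg/dL

# The underlying association is identical; only the reporting scale differs,
# so the per-10 odds ratio is exactly the per-1 odds ratio to the 10th power.
```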
Researchers often emphasize the importance of not dichoto-
mizing continuous variables in the initial stage of model devel-
opment because valuable predictive information can be lost
during categorization [24]. However, prediction models (which are not the
same thing as regression models) with continuous parameters may be complex
and hard to use or be understood by
laypersons, because they have to calculate their risk scores by
themselves. A web or computer-based platform is usually re-
quired for the implementation of these models. Otherwise, in a
later phase, researchers may transform the model into a user-
friendly format by categorizing some predictors, if the predic-
tive capacity of the model is retained [8,19,25].
Finally, missing data is a chronic problem in most data analyses. Missing
data can occur for various reasons, including data not collected (e.g., by
design), not available or not applicable, refusal by the respondent,
dropout, or "don't know." To handle this issue, researchers may consider
imputation techniques, dichotomizing the answer into yes versus all
others, or allowing "unknown" as a separate category.
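One of these options, keeping missing responses as an explicit "unknown" category, can be sketched as follows (the variable and category labels are hypothetical):

```python
def recode_smoking(answer):
    """Collapse a survey response into yes / no / unknown.

    Keeping an explicit 'unknown' category is one of the options mentioned
    in the text; the variable and category labels here are hypothetical.
    """
    if answer is None or answer == "don't know":
        return "unknown"          # missing kept as its own category
    return "yes" if answer in ("current", "past") else "no"

responses = ["current", None, "never", "don't know", "past"]
recoded = [recode_smoking(r) for r in responses]
```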

Stage 4: model generation
Although there are no consensus guidelines for choosing vari-
ables and determining structures to develop the final prediction
model, various strategies with statistical tools are available
[8,9]. Regression analyses, including linear, logistic, and Cox
models are widely used depending on the model and its intend-
ed purpose. First, the full model approach is to include all the
candidate variables in the model; the benefit of this approach is
to avoid overfitting and selection bias [9]. However, it can be
impractical to pre-specify all predictors and previously signifi-
cant predictors may not be in a new population/sample. Sec-
ond, a backward elimination approach or stepwise selection
method can be applied to remove a number of insignificant
candidate variables. To check for overfitting of the model,
Akaike information criterion (AIC) [26], an index of model fit-
ting that charges a penalty against larger models, may be useful
[19]. Lower AIC values indicate a better model fit. Some inter-
pret that AIC addresses explanation and Bayesian information
criterion (BIC) addresses prediction, where BIC may be con-
sidered a Bayesian counterpart [27].
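The AIC and BIC penalties can be written out explicitly; a small sketch comparing two hypothetical models (the log-likelihoods, predictor counts, and sample size below are invented for illustration):

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2*lnL (lower is better)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: the penalty k*ln(n) grows with n."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical comparison: a 5-predictor model versus an 8-predictor model
# with a slightly better log-likelihood, both fitted on n = 1000 subjects.
small_model = (aic(-520.0, k=5), bic(-520.0, k=5, n=1000))
large_model = (aic(-518.5, k=8), bic(-518.5, k=8, n=1000))
# Here both criteria penalize the three extra predictors more than the
# modest likelihood gain, favoring the smaller model.
```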
If researchers prefer the algorithmic modeling culture to the data
modeling culture (e.g., formula-based regression) [28], a classification
and regression tree analysis or recursive partitioning could be considered
[28-30].
With regard to determining scores for each predictor in the
generation of simplified models, researchers using expert judg-
ment may create a weighted scoring system by converting β
coefficients [19] or odds ratios [20] from the final model to in-
teger values, while preserving monotonicity and simplicity. For
example, from the logistic regression model built by Lee et al.
[19], β coefficients <0.6, 0.7 to 1.3, 1.4 to 2.0, and >2.1 were
assigned scores of 1, 2, 3, and 4, respectively.
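The conversion from β coefficients to integer scores can be expressed as a simple lookup using the cut-points quoted above; rounding to one decimal is an assumption made here to close the small gaps between the quoted bands, and the predictor names and β values are hypothetical:

```python
def beta_to_points(beta):
    """Map a logistic-regression β coefficient to an integer score of 1-4,
    following the cut-points quoted from Lee et al. [19]. Rounding to one
    decimal is an assumption here, closing the small gaps between the
    quoted bands (e.g., between 0.6 and 0.7)."""
    b = round(beta, 1)
    if b <= 0.6:
        return 1
    if b <= 1.3:
        return 2
    if b <= 2.0:
        return 3
    return 4

# Hypothetical predictors and coefficients, for illustration only
betas = {"age_55_plus": 0.45, "family_history": 0.9,
         "obesity": 1.6, "hypertension": 2.3}
scores = {name: beta_to_points(b) for name, b in betas.items()}
```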

Stage 5: model evaluation and validation (internal/external)
After model generation, researchers should evaluate the predic-
tive power of their proposed model using an independent datas-
et, where truly external dataset is preferred whenever available.
There are several standard performance measures that capture
different aspects: two key components are calibration and dis-
crimination [8,9,31]. Calibration can be assessed by plotting the
observed proportions of events against the predicted probabili-
ties for groups defined by ranges of individual predicted risk
[9,10]. For example, a common method is to categorize 10 risk
groups of equal size (deciles) and then conduct the calibration
process [32]. The most ideal calibration plot would show a 45° line,
which indicates that the observed proportions of events
and predicted probabilities completely overlap over the entire
range of probabilities [9]. However, this is not guaranteed when
external validation is conducted with a different sample. Dis-
crimination is defined as the ability to distinguish events versus
non-events (e.g., dead vs. alive) [8]. The most common dis-
crimination measure is the AUC or, equivalently, concordance
(c)-statistic. The AUC is equal to the probability that, given two
individuals randomly selected—one who will develop an event
and another who will not—the model will assign a higher prob-
ability of an event to the former [10]. A c-statistic value of 0.5
indicates a random chance (i.e., flip of a coin). The usual c-sta-
tistic range for a prediction model is 0.6 to 0.85; this range can
be affected by target-event characteristics (disease) or the study
population. A model with a c-statistic ranging from 0.70 to 0.80
has an adequate power of discrimination; a range of 0.80 to 0.90
is considered excellent. Table 2 shows several common statisti-
cal measures for model evaluation.
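The pairwise definition of the c-statistic given above translates directly into code; the predicted risks below are invented toy values, not results from any study:

```python
from itertools import product

def c_statistic(risks_events, risks_nonevents):
    """Concordance (c) statistic: the probability that a randomly chosen
    subject who develops the event received a higher predicted risk than a
    randomly chosen subject who did not (ties count as one half)."""
    pairs = list(product(risks_events, risks_nonevents))
    concordant = sum(1.0 for e, n in pairs if e > n)
    ties = sum(0.5 for e, n in pairs if e == n)
    return (concordant + ties) / len(pairs)

# Toy predicted risks: the model ranks most, but not all, events higher.
events = [0.9, 0.8, 0.4]
non_events = [0.7, 0.3, 0.2, 0.1]
auc = c_statistic(events, non_events)   # 11 of 12 pairs are concordant
```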
As usual, selection, application and interpretation of any sta-
tistical method and results need great care as virtually all meth-
ods entail assumptions and limited capacity. Let us review
some here. Predictive values depend on the disease prevalence
so direct comparison for different diseases may not be valid.
When sample size is very large, P value can be impressively
small even for a practically meaningless difference. Net reclassification
index and integrated discrimination improvement are known to be non-proper
scoring measures and are vulnerable to miscalibration and overfitting
problems [33]. AUC and R2 are often hard to increase with a new predictor,
even one with a large odds ratio. Despite their similar names, AIC and BIC
address slightly different issues, and the information in BIC can decrease
as sample size increases. The Hosmer-Lemeshow test is highly sensitive
when the sample size is large, which is not an ideal property for a
goodness-of-fit statistic. Calibration plots can easily yield a high
correlation coefficient (>0.9), simply because they are computed for
predicted versus observed values on grouped data (without random
variability). Finally, the AUC also needs caution: a high value (e.g.,
>0.9) may mean excellent discrimination, but it can also reflect
situations where prediction is not so relevant: (1) the task is closer to
diagnosis or early detection than to prediction; (2) cases and non-cases
are fundamentally different with minimal overlap; or (3) the predictors
and endpoints are virtually the same thing (e.g., current blood pressure
vs. future blood pressure).
Despite the long list provided above, we do not consider this discouraging
news for researchers. No method is perfect, and "one size does not fit
all" is also true of statistical methods; thus, blinded or automated
application can be dangerous.
It is crucial to separate internal and external validation and
to conduct the previously mentioned analyses on both datasets
to finalize the research findings (see the following for example
reports [19,20,34]). Internal validation can be done using a ran-
dom subsample or different years from the development dataset
or by conducting bootstrap resampling [22]. This approach can
particularly assess the stability of selected predictors, as well as
prediction quality. Subsequently, external validation should be
performed on an independent dataset from that which was pre-
viously used to develop the model. For example, datasets can
be obtained from populations from other hospitals or centers
(see geographic validation [19]) or a more recently collected
cohort population (temporal validation [34]). This process is
often considered to be a more powerful test for prediction mod-
els than internal validation because it evaluates transportability,
generalizability and true replication, rather than reproducibility
[8]. Poor model performance may occur after use of an external
dataset due to differences in healthcare systems, measurement
methods/definitions of predictors and/or endpoint, subject
characteristics or context (e.g., high vs. low risk).
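The bootstrap approach to internal validation mentioned above can be sketched generically; `fit_and_score`, the toy dataset, and the trivial "model" below are hypothetical stand-ins for a real modeling pipeline:

```python
import random
import statistics

def bootstrap_validation(dataset, fit_and_score, n_boot=200, seed=7):
    """Internal validation by bootstrap: refit on each resample (drawn with
    replacement) and score against the original data, then summarize the
    scores. `fit_and_score` is a stand-in for a real modeling pipeline."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_boot):
        resample = [rng.choice(dataset) for _ in dataset]
        scores.append(fit_and_score(resample, dataset))
    return statistics.mean(scores), statistics.stdev(scores)

# Toy stand-in "model": its score is simply the mean of the training resample.
data = list(range(100))
mean_score, sd_score = bootstrap_validation(
    data, fit_and_score=lambda train, test: statistics.mean(train))
```

The spread of the bootstrap scores gives a sense of the stability of the selected predictors and of prediction quality.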


Table 2. Statistical Measures for Model Evaluation

Sensitivity and specificity
Discrimination (ROC/AUC)
Predictive values: positive, negative
Likelihood ratio: positive, negative
Accuracy: Youden index, Brier score
Number needed to treat or screen
Calibration: calibration plot, Hosmer-Lemeshow test
Model determination: R2
Statistical significance: P value (e.g., likelihood ratio test)
Magnitude of association, e.g., β coefficient, odds ratio
Model quality: AIC/BIC
Net reclassification index and integrated discrimination improvement
Net benefit

ROC, receiver operating characteristic; AUC, area under the curve; AIC,
Akaike information criterion; BIC, Bayesian information criterion.

From patient-centered perspectives, clinical prediction models are useful
for several purposes: to screen high-risk individuals
for asymptomatic disease, to predict future events of disease or
death, and to assist medical decision-making. Herein, we sum-
marized five steps for developing a clinical prediction model.
Prediction models are continuously designed but few have had
their predictive performance validated with an external popula-
tion. Because model development is complex, consultation
with statistical experts can improve the validity and quality of
rigorous prediction model research. After developing the mod-
el, vigorous validation with multiple external datasets and ef-
fective dissemination to interested parties should occur before
using the model in practice [35]. Web or smartphone-based ap-
plications can be good routes for advertisement and delivery of
clinical prediction models to the public. For example, Korean risk models
for diabetes, fatty liver, CVD, and osteoporosis are readily available
online. A simple model may be translated into a one-page checklist for
patients' self-assessment (e.g., provided in the clinic waiting room). We
anticipate that the framework that we provide/summarize,
along with additional assistance from related references or text-
books, will help predictive or prognostic research in endocri-
nology; this will lead to active application of these practices in
real world settings. In light of the personalized- and precision-
medicine era, further research is needed to attain individual-
level predictions, where genetic or novel biomarkers can play
bigger roles, as well as simple generalized predictions which
can further help patient-centered care.


No potential conflict of interest relevant to this article was reported.

This study was supported by a grant from the Korea Healthcare
Technology R&D Project, Ministry of Health and Welfare, Re-
public of Korea (No. HI14C2476). H.B. was partly supported
by the National Center for Advancing Translational Sciences,
National Institutes of Health, through grant UL1 TR 000002.
D.K. was partly supported by a grant of the Korean Health
Technology R&D Project, Ministry of Health and Welfare, Re-
public of Korea (HI13C0715).






Analysing the power of deep learning techniques over the
traditional methods using medicare utilisation and provider data
Varadraj P. Gurupur (a), Shrirang A. Kulkarni (b), Xinliang Liu (a), Usha Desai (c) and Ayan Nasir (d)

(a) Department of Health Management and Informatics, University of Central Florida, Orlando, FL, USA; (b) School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India; (c) Department of Electronics and Communication Engineering, Nitte Mahalinga Adyanthaya Memorial Institute of Technology, Nitte, Udupi, India; (d) UCF School of Medicine, University of Central Florida, Orlando, FL, USA

Deep Learning Technique (DLT) is a sub-branch of Machine Learning (ML) that learns representations of data at multiple levels of abstraction and shows impressive performance on many Artificial Intelligence (AI) tasks. This paper presents a new method for analysing healthcare data using DLT algorithms and the associated mathematical formulations. In this study, we first developed a DLT to programme two types of deep learning neural networks, namely: (a) a two-hidden-layer network, and (b) a three-hidden-layer network. The data was analysed for predictability in both of these networks, and a comparison was also made with simple and multiple Linear Regression (LR). The method is demonstrated on a dataset constructed from the 2014 Medicare Provider Utilization and Payment Data. The results make a stronger case for using DLTs than for traditional techniques such as LR. Furthermore, it was identified that adding more hidden layers to the neural network constructed for the deep learning analysis did not have much impact on predictability for the dataset considered in this study. The experimentation described in this article therefore sets up a case for using DLTs over traditional predictive analytics. The investigators expect that the algorithms described for deep learning are repeatable and can be applied to other types of predictive analysis on healthcare data. The observed results indicate that the accuracy obtained by DLT was 40% higher than that of the traditional multivariate LR analysis.

Received 16 April 2018
Accepted 30 August 2018

KEYWORDS: Deep Learning Technique (DLT); Medicare data; Machine Learning (ML); Linear Regression (LR); Confusion Matrix (CM)


Methods involving Artificial Intelligence (AI), Deep Learning Technique (DLT), and Machine Learning (ML) are slowly but surely being used in medical and health informatics. Traditionally, techniques such as Linear Regression (LR) (Nimon & Oeswald, 2013), Analysis of Variance (ANOVA) (Kim, 2014), and Multivariate Analysis of Variance (MANOVA) (Xu, 2014; Malehi et al., 2015) have been used for predicting outcomes in healthcare. In recent years, however, the methods of analysis applied have been shifting towards the aforementioned computationally stronger techniques. The research delineated in this paper demonstrates the usefulness of DLTs and Confusion Matrix (CM)

CONTACT Usha Desai [email protected] Electronics and Communication Engineering, Nitte Mahalinga
Adyanthaya Memorial Institute of Technology, Nitte, India

2019, VOL. 31, NO. 1, 99–115

© 2018 Informa UK Limited, trading as Taylor & Francis Group

analysis to predict the outcome for a healthcare informatics case study. The core objectives of
this research are as follows:

a) Illustrate the power of DLT (LeCun et al., 2015) by conducting an analysis comparing it with Linear Regression (LR).

b) Advance the science of DLT through mathematical formulations.

c) Analyse whether changes applied to the DLT algorithm affect the predictability involved.

To achieve the aforementioned objectives, the investigators conducted experiments on a dataset constructed from the 2014 Medicare Provider Utilization and Payment Data. This data encompasses information on services provided to Medicare beneficiaries by physical therapists. The 2014 Medicare Provider Utilization and Payment Data provide information on procedures and services delivered to those insured under Medicare by various healthcare professionals. The dataset contains information on utilisation, payment amounts differentiated into the allowed amount and the Medicare payment (Medicare Provider and Utilization Data, Online 2018), and submitted charges, organised and identified by a Medicare-assigned National Provider Identifier. It is important to mention that this data covers only claims for the Medicare fee-for-service population (specifically, 100% final-action physician/supplier Part B non-institutional line items).

In the past, research experiments on Medicare data have been successfully carried out using methods such as LR; the proposed study, however, applies DLT to satisfy the aforementioned core research objectives. Additionally, we have compared the results obtained by DLT and LR, thereby ascertaining the strength and usefulness of this stronger computational technique in analysing Medicare data.

Related work

In recent years, Machine Learning (ML)/Artificial Intelligence (AI) approaches have been widely adopted by researchers to solve a variety of complex problems, in applications such as image processing, signal evaluation, and pattern recognition. For large datasets, however, traditional ML/AI approaches may sometimes produce erroneous results. Hence, in recent years, large volumes of data have been efficiently processed and interpreted using modernised ML in the form of DLT.

DLT can be implemented by means of the Neural Network (NN) approach or the Belief Network (BN) approach. In the literature, NN-based DLTs, such as the Deep NN (DNN) and the Recurrent NN (RNN), are widely used to process medical datasets in order to obtain better accuracy. The results of previous studies also confirm that DLT approaches offer better results in disease recognition, classification, and evaluation. Owing to this superiority, DLT is widely adopted by researchers to evaluate datasets of patient health information. In the proposed work, the aforementioned dataset is evaluated using DLT to develop a health information system applicable to the analysis of public health data.

Suinesiaputra (Suinesiaputra, Gracia, Cowan, & Young, 2015) presented a detailed review of heart disease research using a benchmark cardiovascular image dataset. This work also stresses the necessity of sharing medical data in order to predict cardiovascular disease (CVD) in its early stages (Zhang et al., 2016). In addition, the work of Puppala (Puppala et al., 2015) proposes a novel online evaluation framework for CVD datasets using an approach termed the Methodist Environment for Translational Enhancement and Outcomes Research (METEOR). This framework constructs a data warehouse (METEOR) that links patient datasets with end users such as doctors and researchers. To test the efficiency of the proposed approach, a breast cancer dataset was chosen for evaluation. The results confirm the efficiency of METEOR in data collection, sharing, disease detection, and treatment planning.

It is important to note that Santana (Santana et al., 2012) proposed a tool to evaluate cardiac risk based on patient health information. The developed tool (Santana et al., 2012) collects invasive and non-invasive health information from the patient and provides disease-related information to support the treatment planning process. Snee and McCormick (2004) propose an approach that considers the indispensable elements of the available public health information network to collect and forecast data for disease control and prevention centres; this work clearly presents the software and hardware requirements for linking the patient with the monitoring system. A web-based online examination procedure was proposed by Weitzel, Smith, Deugd, and Yates (2010). In this framework, cloud computing is used to enhance a communal collaborative pattern that allows a physician to employ protocols while accessing, assembling, and visualising patient data through embeddable web applications coined OpenSocial gadgets. This framework supports real-time interaction between the patient and the doctor for purposes of diagnosis and treatment.

The investigators would also like to mention that Zhang (Zhang, Zheng, Lin, Zhang, & Zhou, 2013) proposed a CVD prediction model based on various signals collected using dedicated sensors. This work considers wearable sensors, which collect signals from chosen parts of the human body, together with non-invasive imaging techniques, to identify disease initiation and to develop models supporting the early detection of CVD. Recent work by Zheng (Zheng et al., 2014) also confirms the value of such wearable sensors for the premature detection of disease, exemplifying the use of wireless and wire-based biomedical sensors in association with DLT to collect critical data from internal and external organs of the human body in order to make accurate disease predictions.

DLT is also applied to support the early detection of life-threatening diseases, which aids the reduction of mortality rates. The availability of modern clinical equipment and data-sharing networks has reduced the gap between patients and doctors in identifying a disease, obtaining expert opinions, comparing a patient's critical data with related data in the literature, identifying the severity or stage of the disease, and determining possible treatment procedures. Hence, in recent years, more researchers have been working in health informatics with DLT, proposing efficient data-sharing frameworks, modifying existing health informatics setups, and synthesising wearable health devices that track normal and abnormal body signals to predict disease.

Usually in health informatics, datasets can be large, and the accuracy of disease identification and evaluation relies mainly on the processing approach used to evaluate the healthcare data. The recent work of Ravi et al. (2017) summarises the application of various deep learning approaches to the evaluation of healthcare databases.


Figure 1 presents the flow diagram of the Medicare dataset pre-processing system using the Python simulation tool. The pre-processed data is then subjected to classification using the DLT and LR algorithms. Our research method relies on the use of LR to test two particular outcome variables. We then proceed with the application of DLT and perform the comparison required to satisfy the aforementioned research objectives. We first test a simple prediction model using linear regression to examine the property of homoscedasticity. In the required analysis the investigators consider a simple linear regression model as given in Equation (1):

Y = p + qZ (1)

where Y is the outcome variable, Z is the predictor variable, q is the slope, and p is the intercept. The simulation of the proposed block diagram (Figure 2) was implemented in Python 3.6 using packages such as the pandas, scipy and sklearn modules. The metric considered was R².

R² = 1 − SSre/SSto (2)

R² is the squared correlation coefficient, where SSre is the error sum of squares and SSto is the total corrected sum of squares, as given by Equations (3) and (4), respectively:

SSre = Σi (yi − ŷi)² (3)

SSto = Σi (yi − ȳ)² (4)

In Equations (3) and (4), ȳ is the mean of the observed values, whereas ŷi is the value of yi predicted by the regression structure. The multiple LR model is given by Equation (5):

y = X1n1 + X2n2 + X3n3 + ... + Xpnp + ε (5)

where y is the dependent variable and X1, X2, X3, and so on are the p independent variables with parameters n1, n2, n3, and so on, and ε is the error term. In applying DLT, we first base our premise on a mathematical formulation, followed by its implementation and a discussion of results. Figure 2 presents the stages involved in the development of the proposed DLT Medicare utilisation informatics system.
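As a concrete illustration of Equations (1)-(5), the following sketch (ours, not the authors' code; the synthetic data and variable names are stand-ins) fits a simple and a multiple linear regression with scikit-learn, the package family the study reports using, and computes R² directly from the sums of squares defined above:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Simple LR, Equation (1): Y = p + qZ, with synthetic stand-in data.
Z = rng.uniform(0, 10, size=(200, 1))
Y = 3.0 + 2.0 * Z[:, 0] + rng.normal(0, 0.5, size=200)

model = LinearRegression().fit(Z, Y)
p, q = model.intercept_, model.coef_[0]

# R^2 from Equations (2)-(4): 1 - SSre / SSto.
y_hat = model.predict(Z)
ss_re = np.sum((Y - y_hat) ** 2)       # error sum of squares
ss_to = np.sum((Y - Y.mean()) ** 2)    # total corrected sum of squares
r2 = 1.0 - ss_re / ss_to

# Multiple LR, Equation (5): several predictors, one outcome.
X = rng.uniform(0, 10, size=(200, 3))
y = X @ np.array([1.5, -0.7, 0.3]) + rng.normal(0, 0.5, size=200)
multi = LinearRegression().fit(X, y)
r2_multi = multi.score(X, y)           # sklearn's built-in R^2
```

The manual R² above agrees with `model.score`, which evaluates the same formula.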

Mathematical formulation for DLT algorithm

In this study, the investigators would first like to illustrate the DLT algorithms used for the
proposed Medicare health data informatics system. To specify this in algorithmic form, the
Stochastic Gradient Descent (SGD) algorithm is considered as described in Figure 3. The key part

Figure 1. Flow diagram for pre-processing of the Medicare utilisation dataset: importing libraries, importing the dataset, encoding categorical data, splitting the dataset into train and test sets, and performing feature scaling on the train and test sets.


in this algorithm is the calculation of the partial derivatives ∂Lk/∂wi. If ∂Lk/∂wi is positive, further increasing wi by some small amount will increase the loss Lk for the current example, while decreasing wi will decrease the loss function (Taylor, 1993; Fernandes, Gurupur, et al., 2017). In this study, a small step is taken in the direction that minimises the loss function, as in an efficient deep learning optimisation strategy.
Input: network parameters w, loss function L, training data D, learning rate η > 0

while termination conditions are not met, perform as follows:

    (f, y) ← a training example sampled from D
    L(w) ← L(y, output(w, f))
    w ← w − η · ∂L/∂w

Figure 3. Implementation flow for the Stochastic Gradient Descent (SGD) algorithm.
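A minimal executable rendering of the loop in Figure 3 (our sketch on a toy one-parameter squared-loss model, not the paper's code) looks as follows:

```python
import random

# Synthetic training data for y = 2x; the single weight w should approach 2.
data = [(x, 2.0 * x) for x in range(1, 11)]

def sgd(epochs=300, lr=0.005, seed=0):
    random.seed(seed)
    w = 0.0                            # arbitrarily initialised parameter
    for _ in range(epochs):
        f, y = random.choice(data)     # sample one training example
        y_hat = w * f                  # network output(w, f)
        grad = -2.0 * (y - y_hat) * f  # dL/dw for L = (y - y_hat)^2
        w -= lr * grad                 # small step against the gradient
    return w

w = sgd()
```

Each iteration nudges `w` opposite the sign of the partial derivative, exactly as described in the paragraph above.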

Figure 2. Methodology in the implementation of the proposed Medicare data analyser system: (1) randomly initialise the weights; (2) input the first patient record from the database to the input layer, with each feature of the database associated with one input node; (3) perform forward propagation from left to right, with neurons activated such that the impact of each neuron's activation is limited by the weights, until the predicted result 'y' is obtained; (4) compare the predicted result with the actual result and obtain the error; (5) back-propagate the error from right to left and update the weights according to the calculated weight changes; (6) repeat the previous steps, updating the weights for each observation in the dataset; and (7) redo the process for more epochs.


Backpropagation in a multilayer perceptron

In this work, a simple multilayer perceptron with a standard fully connected feed-forward neural
network layer along with the sum of squared error loss function (Zheng et al., 2014) (Figure 4) is
considered as follows (Zhang et al., 2016):

L(y, ŷ) = Σ_{i=1..N} (yi − ŷi)² (6)

where N is the number of outputs, yi is the ith label, and ŷi = output(w, f) is the network's prediction of yi, given the feature vector f and the current parameters w.

Here the input vector to the current layer is the vector a (of length 4) and σ is the element-wise nonlinearity (an activation function such as tanh or sigmoid); the forward-pass equations for this network are (Zhang et al., 2016) expressed as follows:

zj = bj + Σi wi,j ai (7)

ŷj = σ(zj) (8)

where bj is the bias and wi,j is the weight connecting input i to neuron j, as shown in Figure 5. Given the loss function, the first partial derivative is calculated with respect to the network output ŷj (Taylor, 1993):

∂Lk/∂ŷj = ∂/∂ŷj Σ_{i=1..N} (yi − ŷi)² (9)















Figure 4. Application of Stochastic Gradient Descent deep learning computation.


= ∂/∂ŷj (yj − ŷj)² (10)

= −2(yj − ŷj) (11)

Following the network structure backward, ∂Lk/∂zj, which is a function of ∂Lk/∂ŷj, is computed (Ravi et al., 2017). This depends on the mathematical form of the activation function σk(z) (Taylor, 1993); here the sigmoid activation function is considered:

∂Lk/∂zj = (∂Lk/∂ŷj)(∂ŷj/∂zj) (12)

= σ′k(zj) ∂Lk/∂ŷj (13)

where σk(z) = 1/(1 + e^(−z)) and σ′k(z) = σk(z)(1 − σk(z)).

Next, the chain rule is applied to calculate the partial derivatives with respect to the weights wi,j, given the previously calculated derivatives ∂Lk/∂zj (Fernandes, Gurupur, et al., 2017):

∂Lk/∂wi,j = (∂Lk/∂zj)(∂zj/∂wi,j) (14)

= (∂Lk/∂zj) ∂/∂wi,j (bj + Σi wi,j ai) (15)

= ai (∂Lk/∂zj) (16)

Figure 5. Assigning the weights to the artificial neural network.


Finally, the derivative of the loss function is computed with respect to the input activation ai, which, by the chain rule over all neurons j receiving ai, is given as:

∂Lk/∂ai = Σj (∂Lk/∂zj) ∂/∂ai (bj + Σk wk,j ak) (19)

= Σj (∂Lk/∂zj) wi,j (20)
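To make the derivation concrete, the sketch below (ours; the paper gives no code for this section) implements Equations (6)-(20) for a single fully connected sigmoid layer and checks the analytic weight gradient of Equation (16) against a finite-difference estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One fully connected sigmoid layer: inputs a -> z = b + W^T a -> y_hat.
a = rng.normal(size=4)          # input activations (length 4, as in the text)
W = rng.normal(size=(4, 3))     # W[i, j] connects input i to neuron j
b = rng.normal(size=3)          # biases
y = rng.uniform(size=3)         # labels

def loss(W):
    y_hat = sigmoid(b + W.T @ a)        # Equations (7)-(8)
    return np.sum((y - y_hat) ** 2)     # Equation (6)

# Analytic gradients, Equations (11), (13), (16), (20).
z = b + W.T @ a
y_hat = sigmoid(z)
dL_dyhat = -2.0 * (y - y_hat)             # Eq. (11)
dL_dz = y_hat * (1 - y_hat) * dL_dyhat    # Eq. (13), sigmoid derivative
dL_dW = np.outer(a, dL_dz)                # Eq. (16): a_i * dL/dz_j
dL_da = W @ dL_dz                         # Eq. (20): sum_j (dL/dz_j) w_{i,j}

# Finite-difference check of dL_dW.
eps = 1e-6
num = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        num[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)
```

Agreement between `dL_dW` and `num` confirms that the chain-rule steps above are consistent.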

Outcome variables

To apply Machine Learning (Martis, Lin, Gurupur, & Fernandes, 2017; Fernandes, Chakraborty, Gurupur, & Prabhu, 2016; Fernandes, Gurupur, Sunder, & Kadry, 2017; Rajnikanth, Satapathy, et al., 2017) and Deep Learning (Shabbira, Sharifa, Nisara, Yasmina, & Fernandes, 2017; Khan, Sharif, Yasmin, & Fernandes, 2016; Hempelmann, Sakoglu, Gurupur, & Jampana, 2015; Walpole, Myers, Myers, & Ye, 2012; Kulkarni & Rao, 2009), we obtained the aforementioned dataset with information on 40,000 physical therapists from the 2014 Medicare Provider Utilization and Payment Data. To the dataset we added a new column, termed Result, which contains the value produced by comparing the Total Medicare Standardized Payment Value with its median. The Result column takes two values (0, 1) for the following outcome variables:

Outcome-1 (O1):
Result = 1 {when the Medicare Standardized Payment received by a physical therapist is greater than the median}
Result = 0 {when the Medicare Standardized Payment received by a physical therapist is equal to or less than the median}

Outcome-2 (O2):
Result = 1 {when the Total Medicare Standardized Payment Value is greater than the Median Household Value}
Result = 0 {when the Total Medicare Standardized Payment Value is less than the Median Household Value}

Here we would like to note that for Outcome-2 the investigators have used multiple dependent variables and a single independent variable. For the purposes of the experimentation with DLT we used Spyder v3 on the Ubuntu operating system. The algorithm implemented in the proposed experimentation is illustrated in Figure 6.
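The median-threshold labelling described above can be expressed in a few lines of pandas (the column and variable names here are illustrative, not the dataset's actual field names):

```python
import pandas as pd

# Illustrative stand-in for the Medicare provider table.
df = pd.DataFrame({
    "total_medicare_std_payment": [120.0, 85.5, 300.0, 42.0, 198.0]
})

# Outcome-1 style labelling: Result = 1 when the payment exceeds its
# median, else 0 (values equal to the median fall in the 0 class).
median = df["total_medicare_std_payment"].median()
df["Result"] = (df["total_medicare_std_payment"] > median).astype(int)
```

With the five stand-in payments above, the median is 120.0, so only the 300.0 and 198.0 rows receive Result = 1.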


Results and discussion


The investigators first analysed both of the aforementioned outcome variables using linear regression. To visualise the data, we plotted a scatter plot of the resulting data values; the simulation plot of the distribution of results is depicted in Figure 7. The scatter plot in Figure 7(a) shows signs of non-linearity, and thus the assumption of homoscedasticity was not supported, since homoscedasticity would have required evenly distributed values. This led the investigators to pursue the investigation using a range of independent variables to predict the Total Medicare Standardized Payment Value (the dependent variable) (Diehr et al., 1999). For this purpose the investigators applied a multiple LR model with Total Medicare Standardized Payment Value as the dependent variable. The set of independent variables was derived by stepwise regression, with a default p-value of 0.15 (15%) for eliminating independent variables entering the set. The comparative plot of predicted values and actual values is illustrated in Figure 7(b). Our results achieved an R² of 0.9451, indicating that the explained variance was around 94%. To visualise this further, we plotted the scatterplot illustrated in Figure 7(b) for the multiple LR analysis.

The scatter plot depicted in Figure 7(b) for the multiple LR indicates heteroscedasticity of the data values. Heteroscedasticity has a major impact on regression analysis; its presence can invalidate the significance of the results. We therefore further investigated more accurate modelling of the dependent variable, Total Medicare Standardized Payment Value, using the

dataset = pd.read_csv('dataset.csv')   // import dataset

// Separate independent and dependent values:
// x denotes the independent variables and y the dependent variable

// Convert all dependent data into integer values
ConvertInteger(DependentData)

TestSet[] = dataset (20% randomly selected)
TrainingSet[] = dataset (80% randomly selected)
Standardize(dataset)

// 2-3 hidden layers are created with an output dimension of 13
// and an input dimension of 30
do {
    set(X_train, Y_train, Batch and Epoch values)
    // X_train is the training set of the independent variables (x) and
    // Y_train is the corresponding training set of the dependent variable y
    // The values used are Batch = 32 and Epoch = 100

    Y_predict = classifier.predict(X_Test)
    // The unlabelled observations (X_Test) are 20% of the entire dataset
    // A threshold value of 50% is set for the predicted labels (Y_predict)
} while (Epoch <= 100);

GenerateConfusionMatrix()

Figure 6. Algorithm for implementing the healthcare system using DLT.
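The pipeline of Figure 6 can be sketched with scikit-learn (our approximation, not the authors' implementation: the hidden-layer width of 13, the 80/20 split, batch size 32, 100 epochs, and the default 0.5 threshold are mirrored on synthetic stand-in data with 30 features):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)

# Synthetic stand-in: 30 features, binary label from a median split of a score.
X = rng.normal(size=(1000, 30))
score = X @ rng.normal(size=30)
y = (score > np.median(score)).astype(int)

# 80/20 split and feature scaling, as in Figure 6.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Two hidden layers of 13 neurons each; batch size 32; up to 100 epochs.
clf = MLPClassifier(hidden_layer_sizes=(13, 13), batch_size=32,
                    max_iter=100, random_state=0)
clf.fit(X_train, y_train)

# Predictions use the default 0.5 probability threshold.
y_pred = clf.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
accuracy = (tp + tn) / (tn + fp + fn + tp)
```

A third hidden layer would simply be `hidden_layer_sizes=(13, 13, 13)`, matching the paper's two-layer versus three-layer comparison.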


DLT algorithm. The simulation gave an R² of 0.5159, indicating that only about 52% of the variance was explained.

For the purpose of applying DLT, the system was trained by randomly selecting 32,530 records (80%) and tested using 8,133 records (20%). The above-mentioned analysis methodology was carried out on the dataset described in the introduction section. In addition, the LR model depicted in Figure 7 had a much lower level of accuracy. The conceptual meaning of the Confusion Matrix (CM) for two hidden layers, considering Outcome-1 (O1), is tabulated in Table 1.

The details of the CM illustrated in Table 1 are as follows:

● True Negative (TN) = 4013: predicted outputs correctly classified as 0 per O1 (Result = 0 when the Medicare Standardized Payment received by a physical therapist is equal to or less than its median).

● True Positive (TP) = 4066: predicted outputs correctly classified as 1 per O1 (Result = 1 when the Medicare Standardized Payment received by a physical therapist is greater than its median).

● False Negative (FN) = 28: predicted outputs wrongly classified as 0 (the actual result was 1).

● False Positive (FP) = 26: predicted outputs wrongly classified as 1 (the actual result was 0).

Accordingly, (TN) 4013 + (TP) 4066 = 8079 matched correctly, and (FN) 28 + (FP) 26 = 54 did not match (Table 1). Accuracy = data matched correctly / total data = 8079/8133 = 99.34%. The conceptual meaning of the CM for three hidden layers, considering O1, is tabulated in Table 2.

Here, (TN) 4015 + (TP) 4080 = 8095 matched correctly, and (FN) 14 + (FP) 24 = 38 did not match (Table 2). Accuracy = data matched correctly / total data = 8095/8133 = 99.53%.

The system was again trained by randomly selecting 32,530 records (80%) and tested using 8,133 records (20%). The conceptual meaning of the CM for two hidden layers, considering Outcome-2 (O2), is tabulated in Table 3. Additionally, the data generated for three hidden layers considering O2 is presented in Table 4.

Figure 7. (a) Simple Linear Regression (LR) analysis, (b) Multiple LR analysis.


The CM given in Table 3 represents (TN) 6760 + (TP) 1339 = 8099 matched correctly, and (FN) 7 + (FP) 27 = 34 not matched. Hence, accuracy = data matched correctly / total data = 8099/8133 = 99.58%. Further, the conceptual meaning of the CM for three hidden layers, considering O2, is tabulated in Table 4, in which (TN) 6741 + (TP) 1341 = 8082 matched correctly, whereas (FN) 5 + (FP) 46 = 51 did not match. In this case, accuracy = data matched correctly / total data = 8082/8133 = 99.37%.
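The four accuracy figures can be reproduced directly from the confusion-matrix cells reported in Tables 1-4:

```python
def accuracy(tn, fp, fn, tp):
    """Accuracy in percent: (TP + TN) / total, as used throughout the paper."""
    return 100.0 * (tp + tn) / (tn + fp + fn + tp)

# (tn, fp, fn, tp) taken from Tables 1-4; each test set holds 8,133 records.
o1_two   = accuracy(4013, 26, 28, 4066)   # O1, two hidden layers
o1_three = accuracy(4015, 24, 14, 4080)   # O1, three hidden layers
o2_two   = accuracy(6760, 27,  7, 1339)   # O2, two hidden layers
o2_three = accuracy(6741, 46,  5, 1341)   # O2, three hidden layers
```

Rounding to two decimal places yields 99.34%, 99.53%, 99.58%, and 99.37%, the values summarised in Table 5.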
Table 5 presents a comprehensive summary of the performance achieved for O1 and O2 with the proposed Medicare analysis system. It can be clearly identified that the Deep Learning Technique (DLT) performs automatic feature extraction, which is not possible with Linear Regression (LR): the DLT network can automatically decide which characteristics of the data can be used as indicators to label that data reliably. DLT has recently surpassed conventional Machine Learning (ML) techniques with minimal tuning and human effort.

The key observations of this experiment are as follows: (i) DLT has better accuracy than the LR method for a single set of variables; (ii) the accuracy of DLT increases further (to 99.58%) when multiple dependent variables are considered; (iii) adding an additional

Table 1. Confusion Matrix (CM) for two hidden layers considering Outcome-1 (O1).

                PREDICTED NO    PREDICTED YES
ACTUAL NO       TN = 4013       FP = 26
ACTUAL YES      FN = 28         TP = 4066

Table 2. CM for three hidden layers considering O1.

                PREDICTED NO    PREDICTED YES
ACTUAL NO       TN = 4015       FP = 24
ACTUAL YES      FN = 14         TP = 4080

Table 3. CM for two hidden layers considering O2.

                PREDICTED NO    PREDICTED YES
ACTUAL NO       TN = 6760       FP = 27
ACTUAL YES      FN = 7          TP = 1339

Table 4. CM for three hidden layers considering O2.

                PREDICTED NO    PREDICTED YES
ACTUAL NO       TN = 6741       FP = 46
ACTUAL YES      FN = 5          TP = 1341

Table 5. Summary of accuracy, (TP + TN) / total, obtained for O1 and O2 using two-layer and three-layer models.

Outcome    Model                  Accuracy
O1         Two hidden layers      99.34%
O1         Three hidden layers    99.53%
O2         Two hidden layers      99.58%
O2         Three hidden layers    99.37%


hidden neural network layer for Outcome-2 (O2) did not increase the accuracy (99.37%) of the prediction.

Comparison with techniques used in medical imaging

Zhang et al. (2016) applied a five-layer deep neural network with a Support Vector Machine (SVM) classifier to detect colorectal cancer, achieving a precision of 87.3%, a recall of 85.9%, and an accuracy of 85.9%. However, the method cannot simultaneously detect and classify polyps, and the use of randomly selected backgrounds may increase the False Positive (FP) rate (Zhang et al., 2016). Yu, Chen, Dou, Qin, and Heng (2017) applied a three-dimensional fully connected Convolutional Neural Network (CNN) to offline and online colorectal cancer prevention and diagnosis, obtaining a precision of 88%, a recall of 71%, an F1 score of 79%, and an F2 score of 74%. In the Yu et al. (2017) study, a high inter-class similarity and intra-class variation among colon polyps was observed, which makes it difficult for machine learning algorithms to classify the polyps correctly. Christodoulidis, Anthimopoulos, Ebner, Christe, and Mougiakakou (2017) classified interstitial lung disease using an ensemble multi-source transfer learning method, attaining an F-score of 88.17%; however, the technique carries a high computational cost due to its multilevel feature extraction. Tan, Fujita, et al. (2017b) and Tan, Acharya, Bhandary, Chua, and Sivaprasad (2017) identified diabetic retinopathy by constructing a ten-layer CNN, observing a sensitivity of 87.58% for exudate detection and 71.58% for dark-lesion identification. Akkus, Galimzianova, Hoogi, Rubin, and Erickson (2017) investigated tumour genomic prediction using a two-dimensional CNN and observed a sensitivity of 93%, a specificity of 82%, and an accuracy of 88%. Furthermore, Kumar, Kim, Lyndon, Fulham, and Feng (2017) developed a system for classifying the modality of medical images and achieved an accuracy of 96.59% using an ensemble of fine-tuned CNNs; ensembling CNNs was observed to allow higher-quality features to be extracted. Later, Lekadir et al. (2017) characterised plaque composition by applying a nine-layer CNN, reaching an accuracy of 78.5%, with the ground truth verified by a single physician. Therefore, we can conclude that the Deep Learning Technique (DLT) used in the study delineated in this article achieved a much higher degree of predictive accuracy.
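The F1 and F2 scores quoted above are both instances of the F-beta measure, the weighted harmonic mean of precision and recall (beta = 2 weights recall more heavily than precision). A minimal sketch, with the Yu et al. (2017) figures plugged in for illustration:

```python
# F-beta: weighted harmonic mean of precision and recall.
# F1 weights them equally; F2 weights recall twice as heavily.

def f_beta(precision, recall, beta=1.0):
    """F-beta score from precision and recall."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Yu et al. (2017) report precision 88% and recall 71%; their F1 and
# F2 figures follow directly from the definition:
p, r = 0.88, 0.71
print(f"F1 = {f_beta(p, r, beta=1):.0%}")  # F1 = 79%
print(f"F2 = {f_beta(p, r, beta=2):.0%}")  # F2 = 74%
```

Recomputing the scores this way confirms that the reported F1 (79%) and F2 (74%) are consistent with the reported precision and recall.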

Comparison with techniques used in pervasive sensing

Hannink et al. (2017) developed a system for mobile gait analysis based on a Deep Convolutional Neural Network (DCNN) and reported a precision of 0.13 ± 3.78°; however, the parameter space, such as the number and dimensionality of the kernels, was not explored. Ravì et al. (2017) designed a methodology to recognise human activity using a DNN and achieved an accuracy of 95.8%. That method demonstrates the feasibility of real-time analysis at a comparatively low computational cost. The results obtained by the technique employed in this article far exceed this accuracy as well.

Comparison with techniques used to analyse biomedical signals

The investigators also achieved a higher level of accuracy than published analyses of biomedical signals. Acharya, Oh, et al. (2017) classified arrhythmic heartbeats using a nine-layer DCNN trained on augmented data, achieving an accuracy of 94.03% on the augmented data and 89.3% on the imbalanced data; however, this method requires long training times and specialised hardware. Further, normal and Myocardial Infarction (MI) ECG beats were detected using a CNN, with the investigators reporting an accuracy of 93.53% with noise and 95.22% without noise (Acharya, Fujita, et al., 2017b). Later, using the same CNN architecture, CAD beats were classified with an accuracy of 95.11%, a sensitivity of 91.13%, and a specificity of 95.88% (Acharya, Fujita, Lih, et al., 2017). Studies were also conducted using a CNN model to detect tachycardia beats of five seconds' duration, reporting an accuracy, sensitivity, and specificity of 94.90%, 99.13%, and 81.44%, respectively. However, these techniques have several shortcomings: learning the features is computationally difficult, and the models were trained and tested on limited databases, even though the training process requires a large database.
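Sensitivity and specificity, the metrics quoted for the ECG studies above, are simply the recall of the positive and negative classes. A brief sketch of the definitions, applied for illustration to the three-hidden-layer O2 confusion matrix from Table 4 (not to the cited ECG studies, whose confusion matrices are not given here):

```python
# Sensitivity = TP / (TP + FN): recall on the positive class.
# Specificity = TN / (TN + FP): recall on the negative class.

def sensitivity(tp, fn):
    """Fraction of actual positives correctly identified."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of actual negatives correctly identified."""
    return tn / (tn + fp)

# Table 4 (three hidden layers, O2): TN = 6741, FP = 46, FN = 5, TP = 1341.
print(f"sensitivity = {sensitivity(1341, 5):.2%}")  # sensitivity = 99.63%
print(f"specificity = {specificity(6741, 46):.2%}")  # specificity = 99.32%
```

On the Table 4 matrix, both metrics exceed 99%, consistent with the accuracy comparison drawn in this section.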

Comparison with techniques used in personalised healthcare

Pham, Tran, Phung, and Venkatesh (2017) developed an algorithm for Electronic Medical Records (EMRs) using a deep dynamic memory neural network, achieving an F-score of 79.0% with a confidence interval of (77.2–80.9)%. This system is most suitable for long disease progressions with many incidents, whereas young patients normally have only one or two admissions. Nguyen, Tran, Wickramasinghe, and Venkatesh (2017) also designed an automated tool to predict future risk by constructing a CNN model, for which the AUC measured at 3 months was 0.8 and at 6 months was 81.9%. It was noted that accurate risk estimation is an important step towards personalised care. In the analysis illustrated in this article, however, we have used a secondary dataset to evaluate the effectiveness of DLT methods (Desai, Martis, Nayak, Sarika, & Seshikala, 2015). As mentioned before, this dataset was constructed from the 2014 Medicare Provider Utilization and Payment Data: Physician and Other Supplier Public Use File (Medicare Provider and Utilization Data, Online 2018), which contains information on services provided to beneficiaries by 40,662 physical therapists (Liu et al., 2018).
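The AUC reported by Nguyen et al. (2017) has a simple probabilistic reading: it is the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case. A minimal Mann-Whitney-style sketch (the toy scores are illustrative only, not from the cited study):

```python
# AUC as the probability that a positive case outranks a negative one;
# tied scores count as half a win (Mann-Whitney estimate).

def auc(neg_scores, pos_scores):
    """Estimate AUC by comparing every (negative, positive) pair."""
    pairs = [(n, p) for n in neg_scores for p in pos_scores]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for n, p in pairs)
    return wins / len(pairs)

# Toy example: two negatives, two positives.
print(auc([0.1, 0.4], [0.35, 0.8]))  # 0.75
```

Under this reading, the 3-month AUC of 0.8 means the model ranks a random at-risk patient above a random not-at-risk patient 80% of the time.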


Limitations

The research delineated in this article has the following limitations: (a) the computational techniques used require high-performance computing, for which reason a sample derived using a randomised approach was used; and (b) the Deep Learning Technique has only been tested on the aforementioned 2014 Medicare Provider and Utilization Data and has not yet been evaluated on other data samples.


Conclusion

In this article we have demonstrated the power and accuracy of using DLT over traditional methods (Desai et al., 2015; Desai et al., 2016; Liu, Oetjen, et al., unpublished; Jain, Kumar, & Fernandes, 2017; Bokhari, Sharif, Yasmin, & Fernandes, 2018) for analysing healthcare data; Table 6 provides a detailed comparison supporting this statement. The core contribution of the research delineated in this article is the introduction of new mathematical techniques harnessing DLT. In discussing the results, we also showed that our technique achieved a much higher accuracy than the techniques reported in the available literature on medical imaging, pervasive sensing, biomedical signal analysis, and personalised healthcare, fully illustrating the power of higher computational techniques over traditional methods. Future directions for research on this topic are: (a) application of the deep learning methods addressed in this study to other types of healthcare data (Desai et al., 2015; Naqi, Sharif, Yasmin, & Fernandes, 2018; Desai, Nayak, et al., 2017b; Desai, Nayak, Seshikala, & Martis, 2017; Shah, Chen, Sharif, Yasmin, & Fernandes, 2017; LeCun et al., 2015; Swasthik & Desai, 2017); and (b) further modification of the DLTs considered (Mehrtash et al., 2017) with the purpose of improving them from a computational perspective (Gurupur & Gutierrez, 2016)
(Nasir, Liu, Gurupur, & Qureshi, 2017; Gurupur & Tanik, 2012; Gurupur, Sakoglu, Jain, & Tanik, 2014; Desai et al., 2018). This improvement is needed because a high-performance computing facility is currently required to run the computer programme in the implementation system.

Disclosure statement

No potential conflict of interest was reported by the authors.


Usha Desai

References


Acharya, U. R., Fujita, H., Lih, O. S., Adam, M., Tan, J. H., & Chua, C. K. (2017). Automated detection of coronary artery
disease using different durations of ECG segments with convolutional neural network. Knowledge-Based Systems.

Acharya, U. R., Fujita, H., Lih, O. S., Hagiwara, Y., Tan, J. H., & Adam, M. (2017). Automated detection of arrhythmias
using different intervals of tachycardia ECG segments with convolutional neural network. Information Sciences.

Acharya, U. R., Fujita, H., Oh, S. L., Hagiwara, Y., Tan, J. H., & Adam, M. (2017). Application of deep convolutional neural
network for automated detection of myocardial infarction using ECG signals. Information Sciences. doi:10.1016/j.

Acharya, U. R., Oh, S. L., Hagiwara, Y., Tan, J. H., Adam, M., Gertych, A., & San, T. R. (2017). A deep convolutional neural
network model to classify heartbeats. Computers in Biology and Medicine. doi:10.1016/j.compbiomed.2017.08.022

Akkus, Z., Galimzianova, A., Hoogi, A., Rubin, D. L., & Erickson, B. J. (2017). Deep learning for brain MRI segmentation: State of the art and future directions. Journal of Digital Imaging, 30(4), 449–459. doi:10.1007/s10278-017-9983-4

Bokhari, S. T. F., Sharif, M., Yasmin, M., & Fernandes, S. L. (2018). Fundus image segmentation and feature extraction for
the detection of glaucoma: A new approach. Current Medical Imaging Reviews. doi:10.2174/

Christodoulidis, S., Anthimopoulos, M., Ebner, L., Christe, A., & Mougiakakou, S. (2017). Multisource transfer learning
with convolutional neural networks for lung pattern analysis. IEEE Journal of Biomedical and Health Informatics, 21
(1), 76–84.

Desai U. et al. (2015) Discrete Cosine Transform Features in Automated Classification of Cardiac Arrhythmia Beats. In:
Shetty N., Prasad N., Nalini N. (eds) Emerging Research in Computing, Information, Communication and
Applications. Springer, New Delhi.

Desai, U., Martis, R. J., Acharya, U. R., Nayak, C. G., Seshikala, G., & Shetty, R. K. (2016). Diagnosis of multiclass
tachycardia beats using recurrence quantification analysis and ensemble classifiers. Journal of Mechanics in
Medicine and Biology, 16, 1640005.

Desai, U., Martis, R. J., Nayak, C. G., Sarika, K., & Seshikala, G. (2015). Machine intelligent diagnosis of ECG for arrhythmia
classification using DWT, ICA and SVM techniques, India Conference (INDICON), Proceedings of the annual IEEE India
conference, doi: 10.1109/INDICON.2015.7443220

Desai, U., Martis, R. J., Nayak, C. G., Sheshikala, G., Sarika, K., & Shetty, R. K. (2016). Decision support system for
arrhythmia beats using ECG signals with DCT, DWT and EMD methods: A comparative study. Journal of Mechanics
in Medicine and Biology, 16, 1640012.

Desai, U., Nayak, C. G., & Seshikala, G. An application of EMD technique in detection of tachycardia beats. In
Communication and Signal Processing (ICCSP), 2016 International Conference on 2016 Apr 6 (pp. 1420–1424). IEEE.

Desai, U., Nayak, C. G., & Seshikala, G. An efficient technique for automated diagnosis of cardiac rhythms using
electrocardiogram. In Recent Trends in Electronics, Information & Communication Technology (RTEICT), IEEE
International Conference on 2016 May 20 (pp. 5–8), Bengaluru, India. IEEE. DOI:10.1109/RTEICT.2016.7807770.

Desai, U., Nayak, C. G., & Seshikala, G. (2017). Application of ensemble classifiers in accurate diagnosis of myocardial
ischemia conditions. Progress in Artificial Intelligence, 6(3), 245–253.

Desai, U., Nayak, C. G., Seshikala, G., & Martis, R. J. (2017). Automated diagnosis of coronary artery disease using
pattern recognition approach. Proceedings of the 39th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society (EMBC), pp. 434–437.


Desai, U., Nayak, C.G., Seshikala, G., Martis, R.J., & Fernandes, S.L. (2018). Automated Diagnosis Of Tachycardia Beats. In:
Satapathy S., Bhateja V., Das S. (eds) Smart Computing and Informatics. Smart Innovation, Systems and
Technologies, vol 77. Springer, Singapore. doi:

Diehr, P., Yanez, D., Ash, A., Hornbrook, M., & Lin, D. Y. (1999). Methods for analyzing healthcare utilization and costs.
Annual Review of Public Health, 20, 125–144.

Fernandes, S. L., Chakraborty, B., Gurupur, V. P., & Prabhu, A. (2016). Early skin cancer detection using computer aided
diagnosis techniques. Journal of Integrated Design and Process Science, 20(1), 33–43.

Fernandes, S. L., Gurupur, V. P., Lin, H., & Martis, R. J. (2017). A novel fusion approach for early lung cancer detection
using computer aided diagnosis techniques. Journal of Medical Imaging and Health Informatics, 7(8), 1841–1850.

Fernandes, S. L., Gurupur, V. P., Sunder, N. R., & Kadry, S. (2017). A novel nonintrusive decision support approach for
heart rate measurement. Pattern Recognition Letters, 94(15), 87–95.

Gurupur, V., & Gutierrez, R. (2016). Designing the right framework for healthcare decision support. Journal of
Integrated Design and Process Science, 20, 7–32.

Gurupur, V., Sakoglu, U., Jain, G. P., & Tanik, U. J. (2014). Semantic requirements sharing approach to develop software
systems using concept maps and information entropy: A personal health information system example. Advances in
Engineering Software, 70, 25–35.

Gurupur, V., & Tanik, M. M. (2012). A system for building clinical research applications using semantic web-based
approach. Journal of Medical Systems, 36(1), 53–59.

Hannink, J., Kautz, T., Pasluosta, C. F., Gaßmann, K. G., Klucken, J., & Eskofier, B. M. (2017). Sensor-based gait parameter
extraction with deep convolutional neural networks. IEEE Journal of Biomedical and Health Informatics, 21(1), 85–93.

Hempelmann, C. F., Sakoglu, U., Gurupur, V., & Jampana, S. (2015). An entropy-based evaluation method for knowl-
edge bases of medical information systems. Expert Systems with Applications, 46, 262–273.

Jain, V. K., Kumar, S., & Fernandes, S. L. (2017). Extraction of emotions from multilingual text using intelligent text
processing and computational linguistics. Journal of Computational Science, 21, 316–326.

Khan, M. W., Sharif, M., Yasmin, M., & Fernandes, S. L. (2016). A new approach of cup to disk ratio based glaucoma
detection using fundus images. Journal of Integrated Design and Process Science, 20(1), 77–94.

Kim, H.-Y. (2014). Analysis of Variance (ANOVA) comparing means of more than two groups. Restorative Dentistry and
Endodontics, 39(1), 74–77.

Kulkarni, S. A., & Rao, G. R. (2009). Modeling reinforcement learning algorithms for performance analysis. In
Proceedings of ICAC3ʹ09 of the International Conference on Advances in Computing, Communication and Control
(pp. 35–39), Mumbai, India. doi:10.1145/1523103.1523111.

Kumar, A., Kim, J., Lyndon, D., Fulham, M., & Feng, D. (2017). An ensemble of fine-tuned convolutional neural networks
for medical image classification. IEEE Journal of Biomedical and Health Informatics, 21(1), 31–40.

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444.

Lekadir, K., Galimzianova, A., Betriu, À., del Mar Vila, M., Igual, L., Rubin, D. L., . . . Napel, S. (2017). A convolutional neural network for automatic characterization of plaque composition in carotid ultrasound. IEEE Journal of Biomedical and Health Informatics, 21(1), 48–55.

Liu, X., Oetjen, R. M., Hanney, W. J., Rovito, M., Masaracchio, M., Peterson, R. L., & Dottore, K. (2018). Characteristics of
physical therapists serving medicare fee-for-service beneficiaries (Unpublished manuscript).

Malehi, A. S., Pourmotahari, F., & Angali, K. A. (2015). Statistical models for the analysis of skewed healthcare cost data:
A simulation study. Health Economics Review, 5. doi:10.1186/s13561-015-0045-7

Martis, R. J., Lin, H., Gurupur, V. P., & Fernandes, S. L. (2017). Editorial: Frontiers in development of intelligent
applications for medical imaging processing and computer vision. Computers in Biology and Medicine, 89, 549–550.

Medicare Provider Utilization and Payment Data: Physician and Other Supplier. (2018, February 26). [Online]. Retrieved

Mehrtash, A., Sedghi, A., Ghafoorian, M., Taghipour, M., Tempany, C. M., Wells, W. M., . . . Fedorov, A. (2017).
Classification of clinical significance of MRI prostate findings using 3D convolutional neural networks.
Proceedings of SPIE–the international society for optical engineering, Orlando, Florida, United States. doi: 10.1117/

Naqi, S. M., Sharif, M., Yasmin, M., & Fernandes, S. L. (2018). Lung nodule detection using polygon approximation and
hybrid features from lung CT images. Current Medical Imaging Reviews. doi:10.2174/1573405613666170306114320

Nasir, A., Liu, X., Gurupur, V., & Qureshi, Z. (2017). Disparities in patient record completeness with respect to the health
care utilization project. Health Informatics Journal. doi:10.1177/1460458217716005

Nguyen, P., Tran, T., Wickramasinghe, N., & Venkatesh, S. (2017). Deepr: A convolutional net for medical records. IEEE
Journal of Biomedical and Health Informatics, 21(1), 22–30.

Nimon, K. F., & Oswald, F. L. (2013). Understanding the results of multiple linear regression. Organizational Research
Methods, 16(4), 650–674.

Pham, T., Tran, T., Phung, D., & Venkatesh, S. (2017). Predicting healthcare trajectories from medical records: A deep
learning approach. Journal of Biomedical Informatics, 69, 218–229.


Puppala, M., He, T., Chen, S., Ogunti, R., Yu, X., Li, F., . . . Wong, S. T. C. (2015). METEOR: An enterprise health informatics
environment to support evidence-based medicine. IEEE Transactions on Biomedical Engineering, 62(12), 2776–2786.

Rajinikanth, V., Satapathy, S. C., Fernandes, S. L., & Nachiappan, S. (2017). Entropy based segmentation of tumor from
brain MR images – A study with teaching learning based optimization. Pattern Recognition Letters, 94, 87–95.

Ravì, D., Wong, C., Lo, B., & Yang, G. Z. (2017). A deep learning approach to on-node sensor data analytics for mobile or
wearable devices. IEEE Journal of Biomedical and Health Informatics, 21(1), 56–64.

Ravi, D., Wong, C., Deligianni, F., Berthelot, M., Andreu-Perez, J., Lo, B., & Yang, G.-Z. (2017). Deep learning for health
informatics. IEEE Journal of Biomedical and Health Informatics, 21(1), 4–21.

Santana, D. B., Z´Ocalo, Y. A., Ventura, I. F., Arrosa, J. F. T., Florio, L., Lluberas, R., & Armentano, R. L. (2012). Health
informatics design for assisted diagnosis of subclinical atherosclerosis, structural, and functional arterial age
calculus and patient-specific cardiovascular risk evaluation. IEEE Transactions on Information Technology in
Biomedicine, 16(5), 943–951.

Shabbira, B., Sharifa, M., Nisara, W., Yasmina, M., & Fernandes, S. L. (2017). Automatic cotton wool spots extraction in
retinal images using texture segmentation and Gabor wavelet. Journal of Integrated Design and Process Science, 20
(1), 65–76.

Shah, J. H., Chen, Z., Sharif, M., Yasmin, M., & Fernandes, S. L. (2017). A novel biomechanics based approach for person
re-identification by generating dense color sift salience features. Journal of Mechanics in Medicine and Biology, 17,

Snee, N. L., & McCormick, K. A. (2004). The case for integrating public health informatics networks. IEEE Engineering in
Medicine and Biology Magazine, 23(1), 81-88.

Suinesiaputra, A., Gracia, P. P. M., Cowan, B. R., & Young, A. A. (2015). Big heart data: Advancing health informatics
through data sharing in cardiovascular imaging. IEEE Journal of Biomedical and Health Informatics, 19(4), 1283–1290.

Swasthi, D. U. (2017). Automated detection of cardiac health condition using linear techniques. In Recent Trends in
Electronics, Information & Communication Technology (RTEICT), 2017 2nd IEEE International Conference on 2017 May
19 (pp. 890–894). IEEE.

Tan, J. H., Acharya, U. R., Bhandary, S. V., Chua, K. C., & Sivaprasad, S. (2017a). Segmentation of optic disc, fovea and
retinal vasculature using a single convolutional neural network. Journal of Computational Science. doi:10.1016/j.

Tan, J. H, Fujita, H, Sivaprasad, S, Bhandary, S. V, Rao, A. K, Chua, K. C, & Acharya, U. R. (2017b). Automated
segmentation of exudates, hemorrhages, microaneurysms using single convolutional neural network. In
Information sciences, 420(c) (pp. 66–76).

Taylor, J. G. (Ed.). (1993). Mathematical approaches to neural networks (Vol. 51, 1st ed.). North Holland: Elsevier.

The Centers for Medicare and Medicaid Services, Office of Enterprise Data and Analytics. (2016). Medicare fee-for-service provider utilization & payment data physician and other supplier public use file: A methodological overview. Available from:

Walpole, R. E., Myers, R. H., Myers, S. L., & Ye, K. (2012). Probability and statistics for engineers and scientists (9th ed., pp.
361–363). Boston, USA: Prentice Hall.

Weitzel, M., Smith, A., Deugd, S., & Yates, R. (2010). A web 2.0 model for patient-centered health informatics
applications. Computer, 43(7), 43–50.

Xu, L.-W. (2014). MANOVA for nested designs with unequal cell sizes and unequal cell covariance matrices. Journal of
Applied Mathematics. doi:10.1155/201/649202.2014

Yu, L., Chen, H., Dou, Q., Qin, J., & Heng, P. A. (2017). Integrating online and offline three-dimensional deep learning for
automated polyp detection in colonoscopy videos. IEEE Journal of Biomedical and Health Informatics, 21(1), 65–75.

Zhang, R., Zheng, Y., Mak, T. W., Yu, R., Wong, S. H., Lau, J. Y., & Poon, C. C. (2016). Automatic detection and classification of colorectal polyps by transferring low-level CNN features from nonmedical domain. IEEE Journal of Biomedical and Health Informatics, 21(1), 41–47. doi:10.1109/JBHI.2016.2635662

Zhang, Y.-T., Zheng, Y.-L., Lin, W.-H., Zhang, H.-Y., & Zhou, X.-L. (2013). Challenges and opportunities in cardiovascular
health informatics. IEEE Transactions on Biomedical Engineering, 60(3), 633–642.

Zheng, Y.-L., Ding, X.-R., Poon, C. C. Y., Lo, B. P. L. H., Zhang, X.-L., Zhou, G.-Z., . . . Zhang, Y.-T. (2014). Unobtrusive
sensing and wearable devices for health informatics. IEEE Transactions on Biomedical Engineering, 61(5), 1538–1554.


Copyright of Journal of Experimental & Theoretical Artificial Intelligence is the property of
Taylor & Francis Ltd and its content may not be copied or emailed to multiple sites or posted
to a listserv without the copyright holder’s express written permission. However, users may
print, download, or email articles for individual use.

