

Using AI to Reduce Health Disparities in Diverse Populations
According to a 2023 CDC report, only 13% of Black and 24% of Hispanic/Latino individuals eligible for PrEP were prescribed it, compared to 94% of eligible White individuals. In the UK, Black women are nearly four times more likely to die in childbirth, according to a 2022 Reuters analysis based on NHS data.
These are not isolated numbers — they reflect lived realities that unfold in homes, clinics, and communities every day. Behind every statistic is a story. Access to healthcare has never been equal.
These disparities are not new, and they’re not just about access—they are deeply rooted in systems that were never designed to serve everyone equally. This raises a critical question: Can technology, particularly AI, help rebalance that system?
The answer depends on how we build and apply it. AI has the potential to identify hidden gaps, personalize care, and bring underserved communities into focus. But there’s a catch: AI is only as good as the data it’s trained on. And if that data reflects decades of inequality, the technology may simply reinforce the same patterns we’re trying to fix. A Rutgers University study, for example, found that some healthcare algorithms offered people of color less access to care because they were trained on less accurate, biased data.
So, the opportunity is real — but so is the responsibility. If we want to use AI to reduce disparities, we need to ensure it sees the full picture. This article looks at the small shifts, the emerging tools, and the bigger question: can smart technology lead to fairer care?
Understanding the Roots of Health Disparities

Health disparities aren’t random. They’re the predictable outcome of systems — systems that were designed, built, and funded in ways that often excluded the most vulnerable.
One of the clearest examples lies in geography. A person’s ZIP code often has more influence on their health than their genetic code. Communities in low-income or rural areas frequently lack access to quality clinics, healthy food, clean air, and safe environments for physical activity. These are part of what public health experts call the social determinants of health (SDOH) — non-medical factors like income, housing, education, and employment that shape people’s lives long before they ever see a doctor.
Language and culture also matter. Patients who aren’t fluent in the dominant language often face critical communication barriers. Inadequate translation, cultural misunderstandings, or a lack of representation can lead to delays in care or misdiagnosis. Even when care is technically available, it may be unaffordable or delivered with implicit bias.
For example, a 2019 study published in the Men’s Health Journal found that Black patients were 40% less likely than white patients to receive pain medication in U.S. emergency rooms. Implicit bias, though unintentional, still shapes how seriously symptoms are taken or how much time providers spend with patients.
These systemic issues are amplified when data, the foundation of most health technologies, carries its own bias. For decades, electronic health records (EHRs) have been built primarily around the experiences of white, English-speaking, insured individuals. As a result, AI tools trained on this data often overlook or misrepresent patients from more diverse backgrounds.
Fortunately, new tools are starting to address these blind spots. In India, the Aravind Eye Care System and Google Health deployed an AI model to detect diabetic retinopathy — a leading cause of blindness — in under-resourced clinics. A peer-reviewed study in JAMA Ophthalmology showed that the model improved early detection rates and reduced diagnostic backlogs.
A similar effort is underway in sub-Saharan Africa, where access to trained specialists is often limited by geography and infrastructure. In Kenya, AI-powered ultrasound guidance tools have been introduced to assist nurses in performing pregnancy screenings in rural areas lacking radiologists. These portable devices, enhanced with artificial intelligence, allow for early detection of pregnancy complications, thereby improving maternal and fetal health outcomes.
These examples highlight a key truth: health disparities don't stem from individual choices—they're driven by the systems we build. When those systems are reimagined with equity in mind, technology can become a powerful tool for change.
Where AI Meets the Problem
Artificial intelligence isn’t magic. It’s a tool that learns from data and identifies patterns to support decisions. In healthcare, the key is not what AI can do in theory, but what problems we ask it to solve.
One core area is language. Natural Language Processing (NLP), a branch of AI, is helping clinicians make sense of unstructured data, like doctors’ notes and patient messages that often contain critical but uncoded information about housing instability, food insecurity, or social stressors. By scanning these notes, AI tools can flag patients at risk who might otherwise go unnoticed.
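To make the idea concrete, here is a minimal sketch of how a screening tool might flag social-risk mentions in free-text notes. Production systems use trained NLP models rather than keyword lists; the categories, phrases, and sample note below are invented purely for illustration.

```python
# Minimal sketch: flagging social-risk mentions in free-text clinical notes.
# Real NLP pipelines use trained language models; simple keyword matching is
# used here only to illustrate the concept. All phrases and notes are invented.

SDOH_KEYWORDS = {
    "housing": ["evicted", "homeless", "shelter", "couch surfing"],
    "food": ["food insecure", "skipping meals", "food bank"],
    "transport": ["no ride", "missed bus", "no transportation"],
}

def flag_social_risks(note: str) -> list[str]:
    """Return the social-risk categories mentioned in a clinical note."""
    text = note.lower()
    return [
        category
        for category, phrases in SDOH_KEYWORDS.items()
        if any(phrase in text for phrase in phrases)
    ]

note = "Patient reports skipping meals and was recently evicted."
print(flag_social_risks(note))  # ['housing', 'food']
```

Even this crude version shows the shift in perspective: the signal comes from what patients say, not just what gets coded.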
Another is prediction. Machine learning models are now being trained to forecast which patients are likely to visit the emergency room within 30 days, not just based on medical history, but on missed appointments, skipped medications, or even transportation issues. This allows care teams to intervene earlier and allocate resources more equitably.
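A toy version of such a risk score might look like the sketch below. In practice the weights are learned from historical data (for example, with logistic regression); the weights, features, and threshold here are invented to show how non-medical factors can feed into the prediction.

```python
# Minimal sketch: a toy 30-day ER-visit risk score that mixes medical and
# social features. Real systems learn these weights from data; the values
# and threshold below are invented for illustration only.

def er_risk_score(missed_appointments: int,
                  medication_gaps: int,
                  transport_barrier: bool,
                  prior_er_visits: int) -> float:
    """Combine features into a rough 0-1 risk score (illustrative weights)."""
    score = (0.10 * missed_appointments
             + 0.08 * medication_gaps
             + (0.15 if transport_barrier else 0.0)
             + 0.20 * prior_er_visits)
    return min(score, 1.0)

def needs_outreach(score: float, threshold: float = 0.5) -> bool:
    """Flag patients above the threshold for proactive follow-up."""
    return score >= threshold

risk = er_risk_score(missed_appointments=2, medication_gaps=3,
                     transport_barrier=True, prior_er_visits=1)
print(round(risk, 2), needs_outreach(risk))
```

The point of the sketch is the feature list, not the arithmetic: missed appointments and transport barriers sit alongside clinical history as first-class inputs.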
AI is also enhancing outreach. In rural or underserved areas, chatbots and virtual assistants are being tested to deliver basic health information, send medication reminders, and even provide mental health check-ins in multiple languages. These tools aren’t replacements for human care, but for communities without easy access, they offer a critical bridge.
For example, in Brazil, the public health system used an AI model to map dengue fever outbreaks using weather and mobility data. A 2022 study in Nature Communications showed that this approach enabled earlier mosquito control efforts in high-risk neighborhoods, preventing further spread of disease and reducing the impact on low-income populations.
Another real-world success story comes from Malawi, where AI-based fetal monitoring was introduced in a clinic with limited resources. Within months, rates of stillbirth and neonatal death dropped by over 80%, showing how targeted use of AI can have an outsized impact when deployed in the right context.
But the potential doesn’t stop there. AI is also being used to personalize cancer treatment plans for underserved patients, flag overlooked chronic conditions like diabetes, and even suggest follow-ups that might otherwise be missed. These examples demonstrate that AI, when applied thoughtfully, can surface hidden risks, guide earlier action, and extend care to people long left behind. But its success depends entirely on the choices we make: which problems we prioritize, what data we use, and who we design for.
Conclusion
Artificial intelligence isn’t a cure-all, but it can be part of the solution. When thoughtfully designed and responsibly applied, AI tools offer new ways to identify overlooked risks, reach underserved communities, and support clinical decision-making with more context and sensitivity.
From predicting emergency visits to translating mental health check-ins into multiple languages, AI is already being used to close some of the most persistent care gaps. But none of this happens automatically. Technology is only as good as the people and values behind it.
That’s why collaboration matters. It’s not just about building smarter algorithms, but building them alongside public health experts, clinicians, social workers, and the very communities they’re meant to serve.
Health equity starts with visibility. And the more we see the barriers, the better equipped we are to design technology that breaks them down, not reinforces them.