Four figures made up of multicolor rectangles run forward. The background is binary code.

AI in Action

How public health is harnessing the power of AI, now.

By Gabriel Muller

Data drives public health, but it’s often too much of a good thing. Valuable insights—whether it’s the twists and bends of a protein’s structure, lifesaving clues from community-based surveillance data, or a Fitbit’s read on a person’s subtly changing gait—may be hidden by the sheer volume of information. With the help of artificial intelligence, researchers across the School are unveiling those insights—and using them to improve health—faster than ever.


From Flush to Field 

Nearly half of the treated human excrement in the U.S. is spread on agricultural land as fertilizer. It’s a sustainable solution for managing waste and improving agricultural yields—but using such biosolids is not without risk. Even when treated, human waste can carry trace residues from pharmaceuticals, beauty products, industrial contaminants, and more.


“There could be health risks we’re overlooking,” warns Keeve Nachman, PhD ’06, MHS ’01, an associate professor in Environmental Health and Engineering. With help from AI, he’s spearheading research to delve into what’s lurking in our waste and whether the EPA needs to step up its oversight.

By examining samples from wastewater treatment plants nationwide, Nachman’s team aims to identify chemicals that aren’t currently on the EPA’s radar. When they spot unfamiliar compounds, AI tools can swiftly predict potential health impacts by comparing them to known chemicals in the EPA’s database.

“We can make inferences in hours instead of years,” Nachman says. Given the tens of thousands of untested chemicals present in our daily lives, traditional animal studies, which are both time-consuming and expensive, aren’t always feasible. AI-driven toxicity models offer a promising alternative.
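For readers curious what this kind of inference looks like in practice, here is a minimal sketch in Python of "read-across": scoring an uncharacterized compound by its structural similarity to already-tested chemicals. Everything here is invented for illustration; the compounds, features, and toxicity labels are not drawn from Nachman's data, and real tools use far richer chemical representations.

```python
# Toy "read-across" sketch: infer a novel compound's likely toxicity from
# its structural similarity to already-characterized chemicals.
# All compounds, features, and toxicity labels are illustrative.

def tanimoto(features_a, features_b):
    """Jaccard/Tanimoto similarity between two sets of structural features."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Stand-in for a reference database of characterized chemicals.
KNOWN_CHEMICALS = {
    "compound_A": {"features": {"phenol", "chloro", "ether"}, "toxic": True},
    "compound_B": {"features": {"amide", "methyl"}, "toxic": False},
    "compound_C": {"features": {"phenol", "chloro", "nitro"}, "toxic": True},
}

def predict_toxicity(unknown_features, threshold=0.5):
    """Borrow the toxicity label of the nearest known neighbor, if close enough."""
    best_name, best_sim = None, 0.0
    for name, info in KNOWN_CHEMICALS.items():
        sim = tanimoto(unknown_features, info["features"])
        if sim > best_sim:
            best_name, best_sim = name, sim
    if best_name is not None and best_sim >= threshold:
        return KNOWN_CHEMICALS[best_name]["toxic"], best_name, best_sim
    return None, best_name, best_sim  # too dissimilar to infer anything

flag, neighbor, sim = predict_toxicity({"phenol", "chloro", "hydroxyl"})
print(flag, neighbor, round(sim, 2))  # → True compound_A 0.5
```

The speedup Nachman describes comes from exactly this shift: instead of running an animal study per compound, a model compares each unfamiliar compound against a database of known ones in seconds.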

Once confirmed, preliminary findings can help identify interventions to limit people’s exposure to these chemicals. For example, risk-reducing approaches may include better filtration systems at treatment facilities to remove concerning substances. New protective gear guidelines might be rolled out for farmworkers in biosolids-fertilized fields. And if certain crops are found to absorb more chemicals, new dietary recommendations might advise limiting our intake of foods derived from those crops.

The stakes are high: exposure to these contaminants, including chemicals from medications, cosmetics, food processing, and industry, may lead to health issues over time. 

While animal fertilizers have their own set of challenges, human waste introduces a more varied mix of chemical residues. Nachman is optimistic that his team’s findings could ultimately guide updates to EPA regulations, infrastructure, and farm practices—and reduce public health threats lurking in the post-flush unknown.


Decoding Long COVID 

What if an AI assistant could provide COVID patients with on-demand access to symptom reporting, while helping researchers unravel the mysteries behind this perplexing syndrome? 

Ahmed Hassoon, MD, MPH ’14, is pioneering just that with an innovative study deploying conversational AI using Amazon’s Alexa.


By providing round-the-clock access to a HIPAA-compliant version of the household chatbot, Hassoon’s team enables study participants to easily report symptoms, answer surveys, and schedule follow-ups. “The Alexa bot overcomes the intensive human resources normally needed for such research,” says Hassoon, an assistant scientist in Epidemiology.

But Hassoon also ran into challenges, including older participants’ inexperience with using AI or voice technology, and deployment barriers in rural areas that lack reliable internet access. 

He envisions a future where AI excels at integrating diverse data forms—text, images, and sound—to unveil novel health insights. “Our pilot has underscored the immense promise of AI as a research tool,” he says.

This long COVID initiative is an extension of Hassoon’s 2019 research, in which he used Alexa to motivate cancer survivors to boost their physical activity. In that study, the conversational AI provided tailored encouragement and reminders to promote walking. The results, published in 2021, found that the Alexa-based coaching significantly increased physical activity—participants using the voice assistant had an average increase of over 3,500 steps per day compared to the control group.

Since then, Hassoon has continued to push AI’s capabilities even further. In a current study, his team tasked leading language models with reviewing datasets and materials on diagnostic errors in health care. Shockingly, the AI spit out dozens of novel PhD-level hypotheses about reducing misdiagnosis, many of which Hassoon felt were ripe for testing. “Some were so innovative, I joked they could substitute for graduate work.”

For Hassoon, AI may serve as an invaluable research assistant, but human insight must lead the way. “We need to invest in humans to produce novel knowledge that can feed the AI,” he says. “Only then will AI help us continue to do a better job.”


Illuminating Cell Communication 

Our cells are constantly communicating with each other to coordinate critical functions like growth, division, and threat response. This cellular “talking” relies on messenger proteins to transmit signals from the cell surface to the nucleus. Enzymes called kinases control these signals by adding phosphate tags to messenger proteins. 

Two cells communicating

When kinase signaling goes awry, diseases like cancer can occur. 

To decode this complex communication, Jennifer Kavran, PhD, MS, MPhil, investigates the intricate 3D shapes of these kinases. Their precise shape determines what signals they respond to and which proteins they can modify.

Traditionally, researchers painstakingly determine protein architecture using time-intensive techniques like X-ray crystallography. But in 2021, Google DeepMind released a revolutionary AI system called AlphaFold that rapidly predicts structures from amino acid sequences alone. Kavran, an associate professor in Biochemistry and Molecular Biology, realized AlphaFold could supercharge her work.

She recently used the tool to study a group of kinases that modify messenger proteins by adding phosphate tags that alter their behavior—a process known as phosphorylation. “We wanted to understand how these kinases get switched on and off,” Kavran explains. 

Her team introduced mutations in the kinases to identify which parts were essential for turning them “on” and allowing them to function properly. They then used AlphaFold to instantly model the impact on overall shape, confirming that these mutations did not radically distort or destroy the entire protein structure—just the tiny region involved in activation. 
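One common way to quantify the comparison described above is root-mean-square deviation (RMSD) between matched atom coordinates of two structures. The sketch below is purely illustrative: the coordinates are invented, and real comparisons of AlphaFold models first superpose the two structures (e.g., with a Kabsch alignment) before computing RMSD.

```python
# Toy sanity check: compare two (hypothetical) predicted structures by RMSD
# over matched atom coordinates, assuming they are already superposed.
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length coordinate lists."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Invented coordinates standing in for wild-type and mutant predictions.
wild_type = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.5, 0.0)]
mutant    = [(0.0, 0.1, 0.0), (1.5, 0.0, 0.1), (3.1, 0.5, 0.0)]

# A small RMSD suggests the mutation left the overall fold intact.
print(round(rmsd(wild_type, mutant), 3))  # → 0.1
```

A low overall RMSD, paired with localized changes in the activation region, is the computational signature of "the fold survived; only the switch changed."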

Rather than months of painstaking experiments, AlphaFold provides crucial structural insight almost instantly. Still, Kavran doesn’t rely solely on its predictions. “With any computational model, validation is critical,” she emphasizes. Her group confirmed the AI’s predictions using time-tested (and time-consuming) techniques.

By turbocharging part of their workflow, AlphaFold enables Kavran’s group to invest more resources in interpreting results and developing hypotheses. It also allows non-experts—even fellow biologists who may not have a structural background—to explore relevant protein architecture for their own research. “It gets people excited who normally don’t think mechanistically,” she says.


The Nudge That Saves Lives 

A hand holding two people.

Despite significant tribal-led prevention efforts, the suicide rate among Native American youth is four times the national average. For decades, the White Mountain Apache have partnered with the Johns Hopkins Center for Indigenous Health to address this crisis. Their groundbreaking work, legislation, and community programs led to a nearly 40% reduction in suicides from 2001 to 2012, though suicide continues to be a problem. 

“In small communities, deaths reverberate in ways you don’t always get a sense of in large cities,” says Emily Haroz, PhD ’15, MHS ’11, MA. “It takes a big toll.”

Building on the decades of tribal-university collaboration, Haroz and her team led a new AI pilot study aimed at supporting those at highest ongoing risk. They developed a machine learning model that analyzes five years of community-based surveillance data to identify individuals showing patterns that predict risk of suicidal behavior. 

The model scans for risk factors like past diagnoses, substance abuse, and past attempts, and assigns a quantifiable risk score to each patient. When the model flags high-risk individuals, it alerts Apache caseworkers—tribal members trained in suicide prevention—to conduct culturally appropriate follow-up outreach. With the model active, high-risk individuals were nine times less likely to have subsequent suicidal behaviors. “Any individual trying to do this would take hours,” Haroz explains. “We can process that information in a way that produces an interpretable risk score in seconds.”
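The basic shape of such a system, stripped of the machine learning, can be sketched in a few lines of Python. Everything below is illustrative: the risk factors, weights, and threshold are invented for the example and are not the study's actual model, which is learned from surveillance data rather than hand-set.

```python
# Toy sketch of a surveillance-based risk score: weight a few risk factors,
# sum them, and flag individuals above a threshold for caseworker outreach.
# Factors, weights, and the threshold are illustrative, not the study's model.

RISK_WEIGHTS = {
    "prior_attempt": 3.0,
    "substance_use": 1.5,
    "mood_disorder_dx": 2.0,
    "recent_er_visit": 1.0,
}
FLAG_THRESHOLD = 4.0

def risk_score(record):
    """Sum the weights of each factor present in a person's record."""
    return sum(w for factor, w in RISK_WEIGHTS.items() if record.get(factor))

def flag_for_outreach(records):
    """Return (id, score) pairs at or above threshold, highest risk first."""
    scored = [(pid, risk_score(r)) for pid, r in records.items()]
    return sorted([(pid, s) for pid, s in scored if s >= FLAG_THRESHOLD],
                  key=lambda pair: -pair[1])

records = {
    "p1": {"prior_attempt": True, "substance_use": True},
    "p2": {"recent_er_visit": True},
    "p3": {"mood_disorder_dx": True, "substance_use": True,
           "recent_er_visit": True},
}
print(flag_for_outreach(records))  # → [('p1', 4.5), ('p3', 4.5)]
```

The point of the "nudge" design is visible even here: the output is a ranked list handed to a human caseworker, not an automated decision.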

“It’s not a crystal ball,” stresses Haroz. It offers a mere nudge, a “reminder light” that ultimately defers to the judgment of local caseworkers. 


Wearing Your Health on Your Sleeve  

We’ve come a long way from the step-counting pedometers of yesteryear. Today’s smartwatches, wristbands, and other digital health tools continuously collect massive amounts of health data—often resulting in several terabytes from a single clinical study—and researchers like Vadim Zipunnikov are unlocking deep insights from them using AI and machine learning.

A smartwatch showing a heartbeat.

“These devices provide incredibly rich data,” says Zipunnikov, PhD, MS, who has built models that can detect subtle changes in sleep, gait, activity patterns, and physiological signals, which can be early indicators of disease onset and progression. 

Zipunnikov, an associate professor in Biostatistics, and his colleagues from the Wearable and Implantable Technology group use machine learning to create digital biomarkers of gait. For example, in one project, machine learning was used to analyze wrist accelerometer data collected during walking to create unique “walking fingerprints” for each person. Zipunnikov is excited about the potential for these fingerprints to serve as “digital biomarkers of aging and disease progression.” Detecting changes in an individual’s gait over time could provide an early warning sign of neurological disorders like Parkinson’s disease.
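To give a flavor of what extracting a gait feature from raw accelerometer data involves, here is a deliberately simple sketch in Python. The signal is synthetic, and the lone feature computed (cadence, via zero crossings) is far cruder than the models behind the group's "walking fingerprints"; it is illustrative only.

```python
# Toy sketch: estimate one gait feature (cadence) from a wrist-accelerometer
# trace. The signal here is synthetic; real walking-fingerprint models extract
# many richer features from real recordings.
import math

def make_signal(step_hz, seconds=10, sample_hz=50):
    """Synthetic vertical acceleration: one sine cycle per step."""
    n = int(seconds * sample_hz)
    return [math.sin(2 * math.pi * step_hz * t / sample_hz) for t in range(n)]

def cadence(signal, sample_hz=50):
    """Steps per minute, estimated from upward zero crossings."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a < 0 <= b)
    seconds = len(signal) / sample_hz
    return crossings / seconds * 60

walk = make_signal(step_hz=1.8)  # a person walking ~1.8 steps per second
print(round(cadence(walk)))      # estimated steps per minute, near 108
```

Tracking how features like this drift for one person over months is what turns a wrist sensor into a potential early-warning signal for conditions such as Parkinson's.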

Another initiative uses multisensor bracelets and AI to unobtrusively track the daily lives of multiple sclerosis patients. The algorithms help build profiles of each patient’s sleep quality, walking patterns, mobility changes, and circadian rhythms over time and could detect accelerating disability before the next clinic visit. 

He describes a future where patient-generated data seamlessly integrates with medical care systems. “If we can collect and interpret the data responsibly, it opens up enormous opportunities,” he says. Clinical reports could automatically incorporate years of Apple Watch or Fitbit data to provide doctors with insights into current health. AI models trained on individual patients over time could enable early detection of neurological disorders like Parkinson’s, cardiovascular diseases, and even mental health disorders.

But Zipunnikov stresses that rigorous methodology matters most. Properly understanding the structure of human data before applying predictive models will be key to unlocking the potential of digital health while avoiding pitfalls. “If we do this right, it could truly transform medicine,” he says.