Why AI in public health needs focus, funding, and community voice
Artificial intelligence has become a cornerstone of innovation in medical research, diagnostics, and treatment development. From computer vision used in tumor detection to generative AI accelerating pharmaceutical discovery, investment in AI for healthcare has surged across academic institutions, philanthropic organizations, and venture capital. Yet amid this rapid progress, one area remains under-resourced and under-examined: AI in public health.
Public health focuses on population-level well-being and the social, environmental, and economic conditions that shape it. Housing stability, nutrition, education, transportation, and climate all influence health outcomes as much as clinical interventions. During COVID-19, the world witnessed how critical public health systems are in determining whether medical breakthroughs reach people who need them. A life-saving treatment is only as effective as the systems designed to distribute, communicate, and guide its use.
Despite this, most AI investment continues to concentrate on clinical and biomedical applications. Public health - where challenges are diffuse, structural, and often shaped by inequity - has not benefited from the same level of attention or funding. That gap has meaningful consequences. If AI tools are developed without considering social determinants of health or population-level needs, they risk reinforcing disparities rather than addressing them.
That is why Humane Intelligence, in collaboration with Dr. Rumi Chunara and New York University's Centre for Health Data Science (CHDS), has launched a new working group dedicated to AI in public health. The group aims to serve as an open, collaborative space where practitioners, researchers, technologists, and students can explore the questions that traditional medical-health AI investments have not addressed.
AI in public health spans a broad set of emerging areas. Opportunities include integrating environmental datasets into disease surveillance models, using AI to improve early warnings for climate-related health risks, and developing contextual evaluation frameworks for low- and middle-income countries. These evaluation approaches are especially important when tools are intended for use in environments that differ significantly from the settings in which they were developed. Without context-specific evaluation, AI tools can behave unpredictably, degrade in performance, or cause harm.
Other areas of interest include AI systems that connect people with social services information, such as food access resources or housing programs. Public health practitioners also face capacity constraints: many lack training in AI literacy, data science fundamentals, or evaluation methodologies. Increased investment in workforce training is essential if public health institutions are to meaningfully engage with AI tools.
The working group's second objective is to explore the feasibility of creating an AI Public Health Fund. Such a fund could fill the programmatic and financial gaps that currently prevent meaningful progress. While AI-for-medical-health initiatives benefit from substantial investment, there is little equivalent support for AI projects designed to strengthen population health systems. A dedicated public health fund could support targeted research, pilot programs, and strategic collaboration, particularly in areas where government or philanthropic attention has been limited.
A central principle of this working group is inclusivity. Expertise in AI alone is insufficient; public health is fundamentally interdisciplinary. That means centering voices from epidemiology, environmental health, community engagement, and social science alongside technologists. It also means inviting participation from practitioners in low- and middle-income countries, whose contexts and needs often shape the most critical public health challenges.
Because AI is already being deployed in insurance workflows, clinical decision-making, and health communication platforms, public health practitioners need clear, practical guidance. They need frameworks and tools that help them evaluate what an AI system is doing, how it performs across contexts, and where risks may emerge. Without this kind of grounding, well-intentioned deployments can lead to unanticipated consequences.
The working group will convene before the end of the year for an initial discussion. From there, members will help shape the agenda, identify research topics, and outline potential funding pathways. The group will also serve as an open space for dialogue. Members will be able to raise concerns, explore emerging use cases, and highlight where more research is needed.
This initiative is one of several ways Humane Intelligence is working to support accountable and equitable AI deployment. By creating community-centered spaces for evaluation, building open source tools, and supporting sector-specific working groups, we aim to help organizations use AI more responsibly and effectively.
Public health remains one of the most important and overlooked areas for meaningful AI innovation. Ensuring that AI supports - not undermines - population well-being will require sustained investment, interdisciplinary collaboration, and a commitment to equity. The AI in Public Health Working Group is a step toward building that foundation. We welcome participants with diverse experiences across public health, AI, and data science to join us.