Vilas Dhar is President of the Patrick J. McGovern Foundation, a US-based $1.5 billion philanthropy advancing artificial intelligence for public purposes.
The Foundation is among the largest philanthropic funders of AI, having invested more than $500 million in projects across dozens of countries.
Dhar has helped frame global discussions on AI governance, responsible innovation, and the social impact of technology. He has served as a member of the UN Secretary-General’s High-Level Advisory Body on AI, convened to shape an international framework for the governance of artificial intelligence and to align its development with human rights, sustainable development, and global security.
He also serves as the US Government’s nominated expert to the Global Partnership on AI and advises leading academic centres and initiatives, including the Stanford Institute for Human-Centered AI and MIT Solve.
He holds a Master’s in Public Administration from Harvard Kennedy School and dual bachelor’s degrees in Biomedical Engineering and Computer Science from the University of Illinois Urbana-Champaign.
Dhar spoke to indianexpress.com about ethical AI, the opportunities and challenges it presents, his organisation’s strategic grant-making, and the importance of open dialogue in building responsible AI. Edited excerpts:
Vilas Dhar: The McGovern Foundation was born out of the belief that technology must advance human well-being, not just profits. From its inception, it has focused on seeding, scaling, and sustaining AI and data innovations that serve a public purpose.
Our thinking is that AI will change everything, but we do not need one more AI company. We need a place to build social resilience to the changes AI will bring. So we want an institution that protects society while tech innovation runs rampant.
Our foundation gives about $75 million a year in grants to around 150 projects. Over the years, we have deployed more than $500 million across dozens of countries to support projects tackling challenges in health, climate, digital rights, education, governance, and more.
We do three things. We fund and support nonprofits to use AI for specific public use cases. We are also building a platform for policy engagement so that we can actually support governments and international organisations as they build these systems. And finally, we work on building institutions around AI capacity. The idea is not just building technology capacity, but providing policy and technology support as well.
Vilas Dhar: We are kind of frontline AI developers. And we do two things: we build products, and we provide consulting services to any nonprofit in the world that wants to use AI to further its mission.
We are known in the US as a disruptive philanthropy because we do grant-making, product development, direct deployment, consulting, and policy advisory work. The idea is that the times we live in require a new kind of institution to solve problems.
I see data and AI as part of society’s backbone, as foundational as roads or electricity. If that infrastructure is biased, opaque, or concentrated, it degrades all we build upon it. So our work spans three strands: direct deployment, capacity building (helping nonprofits, governments, and researchers absorb these tools), and governance (norms, auditability, open tools). All of it driven by one question: how do we make these systems accountable to people, not just institutions?
Vilas Dhar: We invest in projects that show how artificial intelligence can serve people, not just institutions.
We focus a lot on AI safety, and so we have partnerships with OpenAI, Anthropic, and other major companies to ensure that we are building safe and responsible frameworks for AI models.
We are also building the largest global data coalition around climate change and how it affects farmers, villagers, and other communities.
Among the various projects is Climate TRACE, which uses satellite imagery and machine learning to produce the world’s first open, verifiable inventory of greenhouse gas emissions. It gives policymakers, journalists, and citizens the data they need to hold governments and industries accountable.
Audere builds smartphone tools that use computer vision to interpret rapid diagnostic tests for malaria, HIV, and COVID-19, improving accuracy and access in communities where laboratory infrastructure is limited.
WattTime applies AI to measure and reduce real-time carbon emissions from the power grid, helping cities and companies source cleaner energy every hour.
The Online News Association’s AI Initiative equips journalists with the skills and frameworks to use AI responsibly, ensuring that new tools strengthen public trust rather than erode it.
And Grant Guardian, an AI platform created within the Foundation, automates nonprofit financial review so philanthropy can move resources faster and more equitably.
Vilas Dhar: The future of data for public good depends on what we choose to make open. Many of the world’s most valuable datasets are locked behind commercial walls, shaped by incentives that serve shareholders rather than societies.
If we want AI to strengthen democracy and improve lives, we must invest in global, noncommercial data assets that anyone can use to build solutions for the public interest. Only philanthropy and governments have both the independence and the responsibility to create them. That belief guides much of our work. One example is Climate TRACE, a global coalition we have supported that uses satellite data and AI to track greenhouse gas emissions across every major sector in near real time. It is the world’s first open, verifiable inventory of emissions.
Vilas Dhar: The concentration of power: a few platforms deciding what data, predictions, or views we can access. The governance gap: weak rules around consent, quality, audits, and recourse. The execution gap: many policies are noble on paper, but real institutions struggle to deploy safe and responsible systems.
Vilas Dhar: We have an American for-profit model of AI. We have a Chinese government-run model of AI. And in many conversations, those are the only two models that are put forward. I’m very invested in the idea that the Indian model of an open stack around AI provides an alternative, a compelling case for how we build AI.
We are investing heavily in open-source AI. How do we invest in talent, data, and compute access? If philanthropy only provides grant funding, it will never fix this problem.
Instead, I think philanthropy and government have to come together to build public capacity around AI. That means investing in building compute access, in creating public data sets, and in making it possible for people to have jobs in AI that are not with a tech company. This could be transformative.
Vilas Dhar: I don’t think there is a distinction between authoritarian and democratic LLMs. Instead, what we have is LLMs and AI tools that are being used to solidify the existing power structure, and a space to build new forms of AI that actually break the inequity of our society.
The first is heavily capitalised, well funded, and being built with government power. The second is undercapitalised, under-resourced, and underdeveloped. That’s where we should be focusing. Now, you might use a different term.
There’s Western AI and maybe Chinese AI or other ways that you want to think about it. I think the big challenge is that there’s no people’s AI yet. And I think if we could invest in building a mechanism for democratic popular participation in AI, you could actually have a third model.
Vilas Dhar: Ethical AI begins with humility. We are building systems that learn from us, yet we still struggle to define what we value, what we protect, and what we refuse to automate. The real challenge is not in writing ethical codes but in creating the social institutions that can enforce them. Technology moves faster than governance, so philanthropy and civil society must build the capacity to test, question, and correct these systems in real time.
Across our grants, I have seen that ethics fails when it is abstract. It succeeds when it is practised: when developers document their choices, when data scientists interrogate bias, and when public institutions demand transparency.
Vilas Dhar: We see it all the time in how AI intersects with social media. AI tools that drive attention-commanding algorithms are considered responsible because they satisfy certain ethical frameworks at the company level.
But when they are actually used in a community, they have a substantial impact on mental health, isolation, and political polarisation.
What would have to happen is transforming the ethics framework from principles into specific engineering guidelines for how we build these tools to centre community consequences. So, for example, in social media, companies might need to restrict access based on age, on social and emotional maturity, or on the context in which people are using the tools.
And for that to happen, the companies have to take on more moral responsibility for the tools that they create. We help that happen in two ways. We can monitor and advise companies to make sure that they are actually building moral and ethical principles into the products they create.
And we can organise consumer and people’s movements to push back on the companies when they build and deploy a tool that actually creates harm.
Vilas Dhar: India represents one of the most dynamic laboratories in the world for understanding how technology can serve humanity. The country’s scale, diversity, and tradition of public innovation make it a testing ground for what responsible AI can achieve.
Our work in India supports organisations that pair deep local expertise with modern data science. One is Khushi Baby, an organisation working in Rajasthan in partnership with the government. With experience in last-mile maternal and neonatal health care, they have built an AI-enabled tool that lets them do what we call population-level health. In doing so, they identified a number of villages with nutritional deficiencies, used AI to determine what those deficiencies were, worked with the government to provide supplements, and helped deliver better health outcomes.
Another is the work that we do with rice farmers to build AI-based tooling that gives them better knowledge of when to plant, when to harvest, and when to sell in the market.
We believe publicly created AI tools are going to be adopted faster in India than anywhere else in the world.
Vilas Dhar: India has a rare window to treat AI as civic infrastructure. If India invests now in open data platforms, interoperable standards, institutional capacity, and rights-based guardrails, it can lead, not follow. But the risk is slipping into opaque, proprietary systems that deepen inequality.
Vilas Dhar: Imagine a system for community health workers: an AI that listens, triages, diagnoses, and tracks, all in local languages, offline-first, bridging traditional and modern medicine.
Or a climate planner that couples neuroscience, data, and behavioural inference, predicting how city systems (energy, transport, water) react under stress and steering them in real time, optimising for equity, resilience, and well-being.
Vilas Dhar: I think we are already in the middle of the greatest transformation of human society. What’s happening is not one more iteration. It’s not like the internet moving us from going to a store to shopping online. This is about a fundamental change in what humanity is capable of. I don’t say this as a tech optimist. I am optimistic that if we have better tools, we can do more. And I think we’re in the middle of that change already. The question is: how are we going to use it?
Vilas Dhar: AI is going to change people’s access to political information in a way that gives them much more power. I think we will see AI influencing political participation in the next five years in a really positive and meaningful way.