TECHNOLOGY | AI Won’t Wait — But Nonprofits Can Choose How They Show Up

publication date: Dec 3, 2025 | author/source: Meenakshi (Meena) Das

Artificial intelligence has arrived at our sector’s doorstep — not with a grand entrance, but with a soft, constant knock. Tools appear in our inboxes, on our grant portals, and inside the software we already use. Whether we have formally adopted AI or not, we are already living in its ecosystem.

The AI Equity Project 2025, which gathered insights from more than 850 nonprofits across North America — 35% of them Canadian — found something remarkable: while 65% of organizations are curious about AI, only 9% feel ready to adopt it responsibly. And yet nearly three-quarters are already experimenting in small ways. AI isn’t waiting for us to “be ready.” The question is how we’ll choose to show up.

From readiness to responsibility

We often treat “readiness” as a finish line — a checklist of infrastructure, budget, and confidence. But what if readiness, in the context of AI, isn’t technical at all? What if it’s ethical?

This year’s data shows that only 15% of nonprofits have any AI policy or guidance, and that even among those familiar with data equity, practice rates are declining — from 46% last year to 36% today. In other words, our knowledge of fairness and bias is growing, but our ability to apply it is shrinking.

That gap is not about ignorance; it’s about capacity. Nonprofits are stretched thin. Equity conversations often stay at the level of intent because the sector lacks time, funding, and shared models for what ethical AI looks like in practice. Yet this is precisely the time when we must slow down enough to design those models together.

The Canadian context: curiosity with caution

Canadian nonprofits in the study expressed more aspirational language in their “dreaming” responses than their U.S. peers. They asked questions like, “How might we center community in our AI use?” rather than “How can we scale faster?”

This matters. It signals an approach rooted in community and care — one that could become a model for AI stewardship globally. But that potential will remain unrealized if our equity conversations stop at aspiration.

Imagine if, instead of each organization drafting a policy in isolation, we invested in “AI Readiness Studios” — shared spaces where small nonprofits could learn together, test low-risk tools, and write equity commitments in community. Or a “Nonprofit AI Equity Open Library” — an open-access Canadian resource hub filled with templates, checklists, and local case studies. These are not luxuries; they’re the scaffolding that allows ethics to keep pace with innovation.

Hope over hype

What stood out during the analysis of this data was not the fear — though it exists — but the hope. Many participants described AI as an opportunity to “lighten administrative load,” “reach communities faster,” and “spark creativity.” They want to use technology in service of people, not as a replacement for their staff in the name of efficiency.

But hope needs structure.

Without guardrails, even well-intentioned experiments can reproduce harm. AI equity isn’t a software upgrade; it’s a cultural practice that lives in the questions we ask, the policies we write, and the patience we extend to learn together.

A call to Canadian nonprofits

If you lead a nonprofit, start here:

  1. Name your stance. Ask your board, “What does responsible AI mean to us?” Draft a one-page statement, even if imperfect.
  2. Map your data. Know what you collect, where it lives, and whose consent guides it. AI readiness begins with data readiness.
  3. Build capacity before capability. Send one staff member to an AI ethics workshop before buying another platform license.
  4. Collaborate publicly. Join sector discussions about governance and shared resources. The future of equity is collective.

AI may be built on code, but its consequences live in community. Canada’s nonprofit sector has a chance to model what care-centered innovation looks like: slow, curious, and inclusive.

AI won’t wait — but that doesn’t mean we have to rush.

We can choose to meet it with the courage to question, the humility to learn, and the wisdom to build a future rooted in equity, not speed.

 

Meena Das is the founder and CEO of Namaste Data, a consulting practice focused on community-centric data and AI equity for nonprofits. She helps social-impact teams design inclusive surveys, build human-centered AI cultures, and turn data into human-centric strategies. A sought-after speaker and writer, Meena leads the multi-year AI Equity Project and partners with organizations across North America to make data and technology more just, joyful, and accessible—especially for marginalized communities and their stories.




