TECHNOLOGY | 4 Things I've Learned from Facilitating “Human-Centric AI” Workshops

publication date: Jun 12, 2024 | author/source: Meena Das

My years of working with data and algorithms in various contexts make me both hopeful and anxious about AI.

I am hopeful and excited when the output of algorithms digesting huge amounts of data is unique and powerful—in far less time than a human would need to create similar insights heuristically.

I am anxious and nervous when the output of those same algorithms digesting vast amounts of data can miss the experiences of specific individuals, erase an entire community from the picture, or harm the well-being and safety of living beings.

Living with AI cannot be just about all the pretty sorting, segmentation, and everyday efficiency improvements AI claims to offer. It must also include the realization that we must continuously work on our messy data and the messy cultures that want to use that AI.

Towards human-centric AI

I am writing this piece as the instructor of the “Towards Human-Centric AI Workshop,” as a “data is for everyone” believer, and as your co-agent in building a progressively better tomorrow, sharing what we need in order to engage with AI. By “engage” I mean collectively, as the nonprofit industry—centering humans in the AI we use, purchase, and exchange. Centering humans in our algorithms is the only way to build a sustainable future with this technology.

Through teaching workshops with nonprofits of various sizes, I have had the privilege of engaging with many nonprofit professionals eager to explore the potential of artificial intelligence within their departments. In my sessions, a lot of the questions about AI start with “What tools do I need?” and “Where can I practically use AI?”

Based on these many conversations, I am sharing four observations on why the sector feels unprepared for AI. And yes, as a leader, you can do something about it.

1. We need a common, shared, and clear understanding of AI in our nonprofit teams.

Participants frequently need clarification about the different types of AI (such as machine learning, natural language processing, and predictive analytics). This confusion extends to the potential benefits and limitations of these technologies and their practical applications in a nonprofit context. With a foundational understanding, nonprofit professionals can envision how AI can align with, and enhance, their mission-driven work. Addressing this knowledge gap supports the ability to make informed decisions about AI implementation.

2. We need intentional organizational readiness for AI – from culture to technology.

Without buy-in from senior leadership and a strategic plan that integrates AI into the organization’s broader goals, AI efforts will likely remain ad-hoc initiatives without a collective, common readiness. This readiness includes technological infrastructure, such as data management systems, but also an inclusive culture in which all people value and understand data-driven decision-making. We need continuous education and training to build data-friendly teams that can talk with each other about scenarios where AI could help, before jumping to a search for ad-hoc AI tools.

3. We need to be friends with our data to understand biases.

Many nonprofit professionals are wary of deploying AI systems that could inadvertently perpetuate or exacerbate existing biases. This concern is particularly acute for organizations working with vulnerable populations, where the stakes are high and the impact of biased algorithms can be especially damaging. We need to create a team-wide space for discussing data biases and ethical guidelines to ensure that AI applications align with equity and inclusion values. This effort—to understand where, what, and how biases can occur—will emphasize the importance of transparency and accountability in AI systems, advocating for processes that allow regular audits and community oversight.
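To make “regular audits” a little more concrete, here is a minimal sketch in Python of one such check: comparing how often different groups in a dataset receive a given outcome. The column names, the made-up data, and the single disparity ratio are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch of a group-disparity check, assuming a pandas DataFrame
# with hypothetical columns "community" (a demographic grouping) and
# "selected" (1 if a model or process flagged the record, 0 otherwise).
import pandas as pd

def disparity_report(df: pd.DataFrame,
                     group_col: str = "community",
                     outcome_col: str = "selected") -> pd.DataFrame:
    """Compare each group's outcome rate against the overall rate."""
    overall_rate = df[outcome_col].mean()
    report = (
        df.groupby(group_col)[outcome_col]
          .agg(records="count", rate="mean")
          .assign(ratio_to_overall=lambda g: g["rate"] / overall_rate)
          .sort_values("ratio_to_overall")
    )
    return report

# Example usage with made-up data:
data = pd.DataFrame({
    "community": ["A", "A", "B", "B", "B", "C", "C", "C"],
    "selected":  [1,   0,   1,   1,   1,   0,   0,   1],
})
print(disparity_report(data))
```

A report like this does not prove bias on its own, but it gives a team something concrete to discuss together and to revisit on a schedule.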

4. We need data quality and accessibility to harness the power of AI.

Data quality and accessibility are recurrent themes in discussions with workshop participants. Nonprofits often need help in collecting, managing, and utilizing data effectively. Many participants describe their data as siloed, incomplete, or inconsistent, which poses significant obstacles to successful AI implementation.

There is also a need for robust data governance practices and for the skills to manage and analyze data. We need committed and consistent effort to assess and improve data practices (from storage to management) so that data quality and accessibility reach a point where AI explorations can start.
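As a small illustration of what assessing data practices could look like in practice, here is a minimal Python sketch of a data-quality snapshot. The file name and column names are hypothetical; the three checks (missing values, inconsistent spellings, duplicate rows) map to the “incomplete or inconsistent” problems participants describe.

```python
# A minimal sketch of a data-quality snapshot, assuming a CSV export of
# constituent records; the file name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("constituents_export.csv")

# 1. Completeness: share of missing values per column.
missing_share = df.isna().mean().sort_values(ascending=False)
print("Share of missing values per column:")
print(missing_share.round(2))

# 2. Consistency: the same category spelled several ways
#    (e.g. "Monthly donor", "monthly  donor", "MONTHLY DONOR").
if "donor_type" in df.columns:
    normalized = (
        df["donor_type"]
          .astype("string")
          .str.strip()
          .str.lower()
          .str.replace(r"\s+", " ", regex=True)
    )
    raw_variants = df["donor_type"].nunique(dropna=True)
    clean_variants = normalized.nunique(dropna=True)
    print(f"donor_type: {raw_variants} raw spellings "
          f"collapse to {clean_variants} after normalization")

# 3. Duplicates: identical rows that may inflate counts.
print(f"Exact duplicate rows: {df.duplicated().sum()}")
```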

Imagine the future

If you ask me one thing you can do to start with AI, I would say, “imagine the future.”

Yes, imagine the future, along with your role in it.

And while you are at it, don’t forget: If we want to imagine a future where AI exists to enable and do good, we don’t need just the experts and believers who simply ask the world to trust AI. We also need people willing to question who, when, and what stands in the way of harm and of the less hopeful paths we risk when we haven’t done our due diligence with this technology.

Meena Das (she/her/hers) is the CEO, consultant, and facilitator at NamasteData. Namaste Data is focused on advancing data equity for nonprofits and social impact agencies. With 17 years of experience in data, Meena specializes in designing and teaching equitable research tools and analyzing engagement. Namaste Data supports nonprofits through consulting and workshops on improving data and AI. The workshops mentioned in this piece are AI Advancement Lab (applications open for the July cohort) and Towards Human-Centric AI.


