Grounded in Values, Exploring AI with Care
Kendra • Dec 3, 2025
Starting Where I Am
Technology has never been second nature to me. I grew up in a home more focused on growing fresh food and cooking from scratch than keeping up with the latest gadgets. I still remember my first real encounter with a computer when our 6th grade class got four IBM machines (and yes, I was introduced to Oregon Trail).
I’ve grown more comfortable with technology over time, enough to stay curious and keep pace. That’s the same mindset I bring as Rotary Charities adapts our work to leverage Artificial Intelligence (AI): not mastery, but curiosity.
Why Share Our Journey
AI can feel intimidating, especially for organizations that don’t see themselves as “tech-centered.” Sharing how we approach AI as an organization isn’t about having all the answers; it’s about showing that responsible AI use looks different for everyone. We know not every organization has the time, resources, or capacity to dive deep into AI or build AI policies.
That’s why we want to make this conversation less overwhelming and rooted in values, not hype or fear. By sharing the questions we’ve asked, the policies we’ve created, and the lessons we’re learning, we hope to offer a helpful starting point for reflection and exploration, wherever you are on your own journey.
Why AI, Why Now?
AI is not a new concept. In fact, it was philanthropy that helped launch the field: back in 1955, the Rockefeller Foundation provided funding to mathematician John McCarthy to explore whether machines could mimic human intelligence. This early investment sparked the research that coined the term “artificial intelligence” and laid the groundwork for what we see today.
Seventy years later, AI isn’t just theory; it’s part of everyday life. At Rotary Charities, we saw this firsthand soon after ChatGPT launched. At first, it was just quiet conversations: colleagues sharing which platforms they were trying out and how. But when we realized almost everyone was already using AI in some form, we knew it was time to take a step back and ask: How do we engage responsibly, together?
Building Our Approach
We began by exploring how other organizations were navigating AI and learning from experts like the Technology Association for Grantmakers and NTEN. Early on, we knew that responsible AI use isn’t just about technology; it takes different perspectives. So we created a small internal AI working group with staff from a range of roles, because responsible use depends on more than technical knowledge.
From there, we developed two key documents:
- Governance Framework – Our big-picture document. It defines purpose, scope, and values, sets direction and accountability, and explains how AI is viewed at the organizational level.
- Acceptable Use Guidelines – Our everyday roadmap. It outlines what’s in bounds and what isn’t, and helps staff know how to apply AI responsibly in their work.
Together, these documents bridge policy and practice and connect our values to action, making sure we use AI thoughtfully and responsibly every day.
What We Considered
In shaping our policies, we had to wrestle with real questions:
- How do we prevent unintended consequences and bias? What does equity look like in the age of AI?
- How do we safeguard data and privacy? How do we ensure transparency, accountability, and security?
- How do we weigh the environmental impacts of increased technology use?
- What parameters do we use to responsibly experiment?
Some answers came quickly. Others required more research and deeper internal conversations. We also recognized that these policies aren’t “one and done.” Our Acceptable Use Guidelines are a living document, meant to evolve as AI technologies and our understanding grow.
How We’re Using AI Today
As we developed our guidelines, we also took stock of how we were already using AI. We’ve found it most useful in three areas:
- Ideation: Workflow guidance, brainstorming, meeting facilitation, and identifying themes and trends from anonymized data to inform strategy.
- Content Generation & Refinement: Summarizing long reports, synthesizing data, or drafting content, with the important step that final products are always reviewed and approved by a staff member.
- Assistive Technology: Supporting administrative and routine tasks to help free up staff time for more relational and strategic work.
These examples aren’t flashy, and that’s intentional. AI doesn’t need to feel overwhelming or futuristic. It can simply be another tool to support the work we’re already doing.
How We’re Not Using AI
Here are key areas where we intentionally choose not to use AI:
- Making decisions about funding, strategy, or partnerships
- Processing sensitive or confidential information
- Automating technical processes where accuracy is critical
- Representing personal or community voices without direct attribution, input, or approval
- Replacing human judgment, particularly in values-driven or relationship-based work
Beyond these policies and practical choices, there’s a deeply human side to adapting to AI that deserves attention.
Honoring the Human Side of Change
While we’ve made many decisions about how we will and won’t use AI, there’s another layer that’s easy to overlook: the impact this shift has on us as people.
For many of us, our work isn’t just a series of tasks. It’s tied to our identity, our expertise, and the skills we’ve spent years, sometimes decades, developing. When a tool suddenly appears that can do in seconds what once required time, care, or creativity, it can bring up real grief or fear.
There’s a natural attachment to the things we do well. These capabilities shape how we see ourselves and our place within our teams and communities. When those change or feel uncertain, it’s normal to wrestle with questions about the future and where we belong. That discomfort doesn’t mean resistance to change or “progress”; it means we deeply value the meaning behind our work.
Recognizing this personal experience is as vital as creating policies or selecting tools, and is a key part of our ongoing learning.
What We’ve Learned So Far
The pace of change is staggering. At a recent training, we learned that until the 1900s, knowledge doubled every 100 years. By 1945, that pace had sped up to every 25 years. Now, knowledge is said to double every 12 hours, and that interval will only keep shrinking.
So the question isn’t if you’ll engage with AI, but how. None of us can slow the pace of change, but we can ground ourselves in what we do control: our values, our practices, and our shared commitments.
For us, that means building guardrails that don’t stifle creativity but help guide responsible innovation. We recognize that unintended consequences, inequities, and risks exist, and we’re committed to actively learning, revisiting, and navigating these challenges as this technology develops.
Reflections From Our Journey
Our AI journey is ongoing, but here are some practices that have been especially valuable for us so far:
- Don’t wait to be an expert. Curiosity is enough to start.
- Bring multiple perspectives to the table. AI isn’t just about tech — it touches equity, privacy, trust, and our humanity.
- Start with your values. Let them guide how you use AI, and build policies that reflect them.
- Keep it dynamic. Your first policy won’t be your last. Expect to revisit and revise as the technology and your understanding evolve.
- Experiment with intention and always keep human review at the center.
As Artificial Intelligence continues to develop at a rapid and increasing pace, grounding ourselves in our values, our practices, and our shared commitments is what keeps us steady.
As you explore AI or other emerging tools, ask yourself: Which of your organization’s values could guide your experimentation? Who could you bring into the conversation to challenge your assumptions? And most importantly, how might you use these tools not just for efficiency, but to strengthen the impact you have on the people and communities you serve?
We’d love to hear your perspective on AI
Whether you’re just starting with AI or using it regularly, let us know how you’re engaging with it and whether your organization has an AI policy in place.
Take 3 minutes to share your experience and help us learn together.
Take the Survey
AI Resources for Nonprofits