Don’t Be Afraid of AI with Afua Bruce

Is AI going to take over the world and make nonprofit work obsolete? Some peeps seem hyped, while others wanna unplug the mainframe, Office Space-style. 💻🚫

Well, never fear! With experience spanning computer engineering, the FBI, and the White House (?!?), my guest Afua Bruce knows her bytes from her bits. Afua decrypts how bias gets baked into algorithms and how it can break communities. But she also shares how nonprofits can responsibly use AI as a force for good and level up their impact. 👩🏿‍💻

Some of the key takeaways:

  • Top AI concerns include the invisibility of marginalized groups and bias that scales, driven by historical training data and the speed and scale of deployment.
  • Everyone can educate themselves on AI by trying tools, asking questions, and knowing when to turn systems off if they don’t serve you.
  • Guidance for using AI ethically includes clarifying your strategy and values first, then evaluating if tools align.
  • AI may risk eroding critical thinking skills, but it also presents opportunities, like predictive analytics that improve services.

Bottom line: We got this, people! 💪 Ask questions, set guardrails, but don’t let fear leave you behind. The tools ain’t perfect but we can influence and improve. As Afua says, “…you still have agency in this process.” ✊

Listen to the full episode to learn more. 🎧

Important Links: 

https://www.linkedin.com/in/afua-bruce/

Get The Book

Episode Transcript

RHEA 00:00

Hey you, it’s Rhea Wong. If you’re listening to Nonprofit Lowdown, I’m pretty sure that you’d love my weekly newsletter. Every Tuesday morning, you get updates on the newest podcast episodes, and then, interspersed, we have fun special invitations for newsletter subscribers only, and fundraising inspo, because I know what it feels like to be in the trenches alone.

On top of that, you get cute dog photos. Best of all, it is free, so what are you waiting for? Head over to rheawong.com now to sign up.

Welcome to Nonprofit Lowdown, I’m your host, Rhea Wong.

Hey, podcast listeners, Rhea Wong with you once again. And so this must be Nonprofit Lowdown. Today I am speaking with my friend Afua Bruce. She is the author of The Tech That Comes Next, as well as the founder and principal of ANB Advisory Group. And today we are talking about AI. It is a hot topic out there, stuff that we need to know about.

But really, the message is: don’t be scared. So Afua, welcome to the show.

AFUA 01:02

Thank you so much for having me. I’m so excited to be here.

RHEA 01:06

I am excited to talk to you. So a little bit of context for folks out there. I was able to go to the Microsoft Global Nonprofit Summit, and Afua was one of the keynote speakers, and she just blew my mind around all of this AI stuff, because I think it’s coming fast. A lot of us are just scrambling, trying to make sense of it all. And what I really appreciated about your comments is that you brought up a lot of things that we don’t necessarily think about, or at least not as much as we should, with respect to technology and AI and ethics and who is designing it.

But before we jump into those details, because those are very juicy, I want to talk a little bit about you. What was the origin of the Afua Bruce story? Like, how did you get started in tech?

AFUA 01:44

That is a great way to ask the question. I’m now like, what is my origin story?

I’m a big Marvel Comics fan. I don’t think I have a villain origin story, but I did grow up always loving technology-related things. I loved playing with remote control cars and playing around with building my own rockets and things like that at my house. So I loved all of that growing up. And when I went to college, I decided to become a computer engineer for that reason.

I started working at IBM, and life was great. I wanted to continue as a software engineer, then went to business school for a bit. And then my career took a turn out of business school: I went to work for the government, then for some nonprofit organizations after that, and ultimately to write the book and to do what I’m doing now.

But the common thread through most of my career has been: how do we do good things with technology, and how do we work well with people?

RHEA 02:40

I just have to pause here. I feel like you’re being a little too modest. Y’all, she used to work for the FBI and in the White House. So when she says she worked for the government, this is serious, y’all.

So I just want to highlight that. And then, just a little detour on the way to talking about AI. I mean, as a woman of color, I would imagine that your computer science classes were not full of folks who looked like you. So I’m sure there were some tough moments.

How did you fare? Because I’m thinking particularly of folks out here who work in predominantly white spaces. And it can wear on the soul a little bit.

AFUA 03:11

Yeah. I think “wear on the soul a little bit” to “wear on the soul a lot” is actually a great way to describe my experience, and I think the experience of many other Black women who become computer engineers or work in the tech field more broadly.

I did my undergraduate education at Purdue University, which is a fine institution and provides a wonderful engineering curriculum. I’m very thankful and really appreciative of the education that I received there. But one thing I do remember is that I was a co-op student. So every other semester I would work at IBM full time, and the other semesters I was on campus full time.

And there was one other Black woman, also in the computer engineering program, who was on the opposite schedule from me. So she was on campus while I was working, and people would forever get us mixed up. I would come back to campus and they would call me her name, or apparently I would get emails while I was at IBM from someone saying, oh, I thought I saw you on campus and you didn’t say hello to me when you walked by.

That wasn’t me. That was the other one, who is also in this co-op program. But where I stand now in my career is that it remains incredibly important to have a diversity of people in the tech field. The way that technology is designed and the way that it works are reflective of who creates it; what sort of biases we have really affects what the technology looks like, how it works, how it literally sees people or does not see people.

And so having a diverse set of perspectives among those who are creating the technology is incredibly important. That’s why we should both continue to push for diversification of the tech space and continue to support the people who are in it.

RHEA 04:55

To use the tech term, let’s double-click on that.

So let’s talk about AI, because I feel like everyone was aware of AI, and then ChatGPT busted out on the scene about a year and a half ago, or let’s call it a year and three months ago. And I think it changed everyone’s perspective about AI. And it feels, especially for my people out in the nonprofit field, like we’re just scrambling to catch up.

So I guess my question to you is, particularly as we think about who creates the tech and what biases they may or may not have: one thing I was thinking about coming out of the summit is that it just feels like the genie’s out of the bottle. Is it too late for us to really push on the need for diverse perspectives and to diversify the designers and people who are creating the tech?

AFUA 05:41

No, I don’t think it is too late at all, even though the coverage of AI and the stories about AI sometimes make it seem that AI has already arrived, that AI has evolved nearly as much as it’s going to evolve, that from now on it is just tweaking the technology that is already out there, that everything is set.

I don’t think that is the case. I think that over the past year or so, even since the public release of ChatGPT, we’ve seen other generative AI models come into play, both text-generating and image-generating, and other generative AI models used in a variety of other situations as well.

And so this is a reason, and space, for other people to be involved in the process. I think also the reality is that technology isn’t used by only a handful of people. And so even if the generative AI tools that are out there today are deployed into the world, when we point out the ways that they inflict harm disproportionately on specific groups, if those tools do that, then this is a call to action.

I think of Meta over the past couple of years just as an example. Several, maybe three or four years ago, I forget the exact number, the Department of Housing and Urban Development brought a case against Meta because the way it had implemented some of its ad-selection capabilities in Facebook violated the Fair Housing Act: essentially, the way advertisers were allowed to select audiences let them restrict housing ads from people based on protected classes such as race, gender, and age.

And in the aftermath of that, over the past couple of years, Meta has had to come up with a number of different iterations, including changing how it programmed its algorithm, changing guidelines for how other algorithms are programmed for other parts of Facebook, looking at what the controls are, and reconsidering how testing is done and who’s involved in that process.

And that predates generative AI, but I think those sorts of tools, and showing that impact, are something that we should continue to push for and continue to see. So, no, to answer your question: it’s definitely not too late to have more people involved in the AI creation process.

RHEA 08:05

It’s funny that you mention Meta, because I was just thinking about that. I think the ethos, at least coming from Silicon Valley, was the “move fast and break things” mentality.

And I think we’re now seeing the ripple effects of what that kind of mentality has been. I guess I’m wondering, how confident or optimistic are you that we’ve learned from our mistakes?

AFUA 08:25

Yeah, unfortunately, I think the move fast and break things ethos has really revealed that when we break things, the things we break are people, the things we break are communities, the things we break are access to livelihoods and to employment and to housing, as in the Meta example.

And on the one hand, racism and sexism and all the isms weren’t invented with generative AI tools; they existed before, and they will probably continue to exist. And so humans have this ability to forget really easily. On the other hand, I do think that we are seeing a stronger groundswell, both of researchers who have access to different media channels and of different funders, in ways that didn’t exist 20 years ago, 50 years ago, and more.

I think we are having more conversations about the ethics of technology in the general public than we have had in the past. That doesn’t mean that everyone is talking about the ethics of technology, but I think that the number and robustness of those conversations are increasing. And I do think that we are seeing government agencies around the world wrestle a little bit more with what it means to provide guidelines, or to interpret existing laws, in the face of new technologies.

That is to say, I am hopeful, because especially as a Black woman, I have to be hopeful that we can improve things. And I am hopeful that humans have learned some lessons and that we can build more responsible and more equitable technology systems.

RHEA 09:58

So I want to dig into that a little bit, because from my experience I’ve seen two distinct camps when it comes to AI. There are the people who are all in and are just going to use it for all the things, and then there are people who stick their head in the sand and say, I don’t want to know anything about it, and hope it goes away.

Both camps feel very extreme to me; I think there’s a middle ground. So if I’m listening to you and I’m like, I hear you, I agree: how can I, as an individual consumer or the head of a nonprofit, really think about ethical, responsible AI?

AFUA 10:29

Yeah. So I’d say a couple of things to that.

One, when you describe the spectrum, you talked about people who are all in and who want to use AI for all things, all the time. That is a choice that some people will make. That is not the choice that I would make, or that I would advocate for anyone to make, but it is a choice that people make.

On the other hand, you mentioned people who have their head in the sand. I will also say that on that other end, among those who aren’t adopting tools, sometimes it is because they see some of the ethical risks with AI, and the way that it’s inherently designed, as too great. And so they’ve decided that their value system says these ethics trade-offs are too much, and we are not going to use AI at all.

Because of the way generative AI tools work, they have to take in a lot of information. Tools such as ChatGPT essentially scrape the web for that information, whether it’s text or images, often without compensating the creators, and then use it to generate new content that is based and rooted in that data.

So I just want to acknowledge that some folks who are saying we shouldn’t use AI at all aren’t saying it because they don’t understand the technology, but because they’re concerned about some of the ethical risks. You are right that I don’t sit on either end of the spectrum; I sit somewhere in the middle, and I think that there are ways to responsibly use AI.

I myself sometimes use generative AI tools to help me with brainstorming. If I’m stuck on a problem and just want to think things through, I might pop a couple of prompts into a tool to do a new form of research, or to have something explained to me in a different voice, or in a different character’s voice, to get my own creative juices flowing.

So when individuals want to think about using AI, I suggest a couple of things. One is to find some tools and play around with them. See what looks good and what doesn’t look good to you. See what feels comfortable and what doesn’t. See the ways that AI can fail, whether it’s by generating an image or video that is not what you expected or were looking for.

I recently put a prompt into an image generator saying, show me an image of a woman at her desk, and all of the images were of white women. That’s not reflective of me. So see where the technology works for you, and see where it doesn’t. A second thing I would suggest, as more and more products are building in AI: turn on the features and see how they work, and whether that helps your life or it doesn’t. Be really clear about what’s useful and what’s not useful there. And then also know that you have the freedom to turn off the AI features if they do not work for you.

Some of the general guidance that I give people as they are looking to adopt new AI tools, and honestly most new technologies in general, is: always know what your end goal is, what actual outcomes you want for the work you are doing. Not what the AI needs to do, but why are we here?

Know what your answer to that is, so you can have a clear understanding of whether you’re getting closer to or further away from your goal. Also know how to turn the technology on and how to turn it off. With any technology, things can go really well or really not so well, and so being able to turn it on and off in relation to how things are working is really important.

RHEA 13:57

Okay, I have a little bit of an out-of-left-field question here, but go with me. (Sure.) And if it’s a ridiculous question, you can tell me. A couple of years ago I was doing this little week-long thing at the MIT Media Lab, which was really fun.

And I was talking to, I think he was an ethicist, but I’m not quite sure. Anyway, he was talking about AI, this was before the explosion of ChatGPT, and about what happens if we can’t turn it off. The singularity. Perhaps I’m watching too much Westworld, but will there come a point where the AI will be able to function without the humans?

And I’m just curious, as someone who thinks a lot about ethics and the future of AI, like is that something you’ve conceived of? Is that something we should be afraid of?

AFUA 14:37

This idea of singularity, of computers running themselves indefinitely without any sort of human-written code or human-written parameters, is, I’d say, a growing debate amongst computer scientists and computer engineers.

I would say I am in the camp that singularity isn’t the thing we need to be worrying about right now, and the data shows that a lot of engineers think so too. Of course, there are some who do think that’s what we need to worry about, but it is not my top concern with AI by any means.

I don’t think we are approaching that quite yet. The other thing is that the conversation around what we do if we reach singularity, again, this idea that computers just run the world all by themselves without any humans, sometimes takes us away from the fact that real harms are happening today. Real impacts of AI, positive and negative, are happening today, and we need to solve for them today.

RHEA 15:41

Let’s jump into that a little bit, because you mentioned breaking communities, breaking humans. What are the concerns that we should be thinking about as we embark on this new AI future?

What are the things that keep you up at night?

AFUA 15:52

I don’t know that we have enough time for the things that keep me up at night, not just AI-related, but sometimes the state of the world in general. We don’t have enough time for all that.

But when it comes to thinking about what can go wrong with AI, I first think about the ability for people to be seen or not seen.

There have been some applications of AI in housing programs, for example, helping to sift through applications and determine who has access, who will come to the front of the line, or whose application will be put to the bottom of the pile when it comes to accessing different housing benefits. And if those systems are biased for whatever reason, perhaps based on race, perhaps based on gender, perhaps based on socioeconomic status, that’s a problem. One, because we shouldn’t be doing that in general, especially in the U.S., where we have the Fair Housing Act.

But two, because of the scale and speed at which AI systems work, problems that could have been small suddenly become very large, very quickly, and impact a much greater percentage of people. So I think that’s an example of something. AI tools ultimately are powered by data, and the data is historical.

And so depending on what you are running your AI systems on, that data could be not great. It could be imperfect. It could be biased. It could over-index, let’s say, on people of color, in ways that would indicate people of color are at higher risk of potential illness, or of potentially committing crimes, or other things, because the data is skewed by oversampling rather than reflecting the actual statistics for that issue.

And the AI systems don’t have any judgment other than the data and the probability weights that coders gave them. So the AI tools take those imperfect data sets and extrapolate at scale. And that’s where you get people not being able to access housing, or people being flagged at a higher rate for committing fraud when actually they just happen to be from lower-income areas.

And so things like that are where some of the bias and risks with AI can appear.
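
To make that mechanism concrete, here is a minimal, purely hypothetical sketch in Python; every number and variable is invented, and it describes no real system. It shows how fraud labels that record where investigators looked, rather than where fraud actually occurred, teach a model to reproduce the skew at scale.

```python
# Hypothetical illustration only: all rates and variables are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
low_income_area = rng.random(n) < 0.5    # half the records come from low-income areas
true_fraud = rng.random(n) < 0.02        # the same 2% true fraud rate everywhere
# Fraud only gets *recorded* if someone investigates, and low-income
# areas were historically investigated four times as often.
investigated = rng.random(n) < np.where(low_income_area, 0.8, 0.2)
recorded_fraud = (true_fraud & investigated).astype(int)

model = LogisticRegression().fit(low_income_area.reshape(-1, 1), recorded_fraud)
risk_low, risk_other = model.predict_proba([[1], [0]])[:, 1]
print(f"Predicted fraud risk, low-income area: {risk_low:.4f}")
print(f"Predicted fraud risk, other areas:     {risk_other:.4f}")
# The model flags low-income residents roughly four times as often even though
# the true fraud rate is identical -- the bias arrived with the training data.
```

The arithmetic is exactly what Afua describes: the model has no judgment beyond the labels it was given, so an oversampled group looks four times as risky, and deployment then applies that error at scale.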

RHEA 18:19

So if I’m listening to you and I’m not a tech person, I’m not a coder, like what can an everyday citizen do to help combat these biases, these concerns that come up with AI?

AFUA 18:31

Yeah, absolutely. So I think the first thing is to educate yourself on what data is and how it’s being used.

And always ask those questions when someone comes to you and says, hey, have I got an AI tool for you. Or, hey, we are from your local government, or we are from a local tech company, or whoever, and we have an AI tool that is going to revolutionize your world. Ask what data they are using.

Ask what controls they have given it. Ask what guardrails they have put into the technology. Especially as an everyday citizen, if you’re doing something with government services, it is certainly your right to be able to ask these questions and to get them answered. When working with tech companies as a consumer, you should be able to raise these questions.

And if you are in a position to evaluate whether or not to use particular tools, whether it’s for yourself, your family, or the organization you run, and you get back an answer that you don’t like or don’t think is strong, you don’t have to turn on those tools. You don’t have to use them. I think when we started, you mentioned how a lot of people in the nonprofit space at this point are just inundated with information and feeling like we have to scramble, like we’re behind the eight ball on this AI thing.

Everyone’s talking about it. Everyone’s here. I see all of these use cases about how to use AI, and all these organizations that have adopted it. I would encourage you to quiet your fears a little bit on that, because the reality is, everyone is figuring this out together. Yes, there are some early adopters who have figured it out already.

Yes, there are some organizations who are finding ways to use it. But what I see more often with the nonprofits that I work with is that people are identifying specific use cases. They’re identifying specific ways to find tools that fit their goals and their organization’s strategy.

They’re also putting in guardrails or guidelines for the organization to say, we’re going to stay human-first. That means even if you use a generative AI tool, you as a staff member are still responsible for everything that comes out. So you should double-check what is produced by the AI tool. If your AI tool is giving you recommendations on what to open, what to close, or what to write, you should check that it’s accurate, that it makes sense, and that it fits with the strategic goals of the organization.

And so I think it’s about reminding people that you still have agency in this process. That level of agency can vary based on the tool and where you sit, but you still have the ability to ask questions about what use looks like and what the guidelines look like.

RHEA 21:15

That’s such a great perspective, because in my mind I’m like, are we just in the Matrix, being harvested by the AI for content?

Can you talk a little bit about some use cases that you’ve seen for AI in the social good space? Just because I know you are very much coming from this background of how to use tech for social good. And look, I think your average executive director is totally overwhelmed, so I think it’d be helpful for you to share some interesting use cases that you’ve seen.

AFUA 21:44

Yeah, absolutely. So when it comes to generative AI, again, those are your ChatGPT-like tools, which generate new content, I do see AI being used by nonprofit organizations for things like brainstorming and figuring out how to summarize some of their original content. I was actually just reading an article this week put out by the Center for Democracy and Technology, which is a nonprofit that works, as its name might imply, on democracy and technology.

They were sharing their internal guidelines for the use of AI, and they said they use it for things like generating information graphics, as well as generating headlines and titles for reports. So for some of these tasks that feel maybe a bit more rote, but that they don’t have the internal staff for, generative tools can help their staff perform those functions more quickly and more efficiently.

And so I think those are some of the ways that we’ve seen it used really well. That’s on the generative AI side. There’s also the more traditional AI side, which is essentially predictions and numbers, predictive analytics. One project I got to work on when I was at a previous organization was with John Jay College, a four-year institution.

We ran a tool off of, I think, two decades of data that they had, to identify students who were at risk of dropping out: students who had completed three-quarters of the credits they needed to graduate, but who were at risk based on an analysis of two decades of information about people in a similar category. The tool then identified potential interventions for the John Jay College staff to take.

And by using that tool, John Jay College credits it with, I think, about 900 additional students graduating over a two-year period.
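
For readers curious what that kind of predictive-analytics workflow looks like in miniature, here is a hypothetical sketch on synthetic data; the actual John Jay College tool’s data, features, and model are not public. The pattern is: train on historical records, score current students, and hand the highest-risk names to human advisors who decide on interventions.

```python
# Hypothetical sketch on synthetic data; not the actual John Jay College tool.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for two decades of historical records:
# columns = [GPA, fraction of required credits completed, terms enrolled]
X = rng.uniform([1.5, 0.50, 4], [4.0, 1.00, 12], size=(5_000, 3))
# Synthetic label: 1 = dropped out, 0 = graduated (lower GPA -> higher risk here)
y = (rng.random(5_000) < 0.6 - 0.15 * X[:, 0]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Score current students near the three-quarters-of-credits mark and surface
# the highest-risk ones -- the model only prioritizes the outreach list;
# humans choose the interventions.
current = rng.uniform([1.5, 0.70, 4], [4.0, 0.80, 12], size=(200, 3))
risk = model.predict_proba(current)[:, 1]
top_20 = np.argsort(risk)[::-1][:20]
print(f"Students to flag for advisor outreach: {top_20.tolist()}")
```

The design choice worth noticing is that the model’s output is a ranked to-do list for staff, not an automated decision, which is the “human first” guardrail Afua returns to later in the conversation.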

RHEA 23:36

That’s really awesome. It’s so funny you mention that, because when I was running my nonprofit, I really wanted this level of predictive AI, because I was like, we have this data. How do we know that wearing purple socks on a Tuesday isn’t going to affect your college graduation rates? We don’t know. And at that point, we lacked the computational power to analyze all of the data and look at the trends. So I think that’s a great use of AI.

AFUA 24:00

Absolutely.

RHEA 24:01

Let’s talk about guidelines.

You mentioned staying human-first. What are some other guidelines, or where might folks look for examples of guidelines that they could put into their organizations? Because again, there’s so much out there, it’s so new, and most EDs are probably not in the tech field or tech-savvy. So where do we begin?

AFUA 24:20

Absolutely. So the first thing that I say to leaders is that the challenge of adopting AI, or choosing not to adopt AI, again, you have agency to do that, is certainly about the technology, but it’s even more so about leadership. And so this means making sure that you as a leader are very clear with your organization about where you’re going strategically and what your organizational values are.

If you have a clear strategic objective, a clear mission statement, and a clear strategy supporting it, it will be easier for people to say whether the tools they want to use are getting you closer or not. If you have clear values, such as: we are a human-first organization, we value transparency, we value accountability, things like that, it’ll be easier for people to decide whether the specific tools you are now considering are in line with those values. Are they getting us closer, or are they taking us further away?

So the first thing I always suggest is making sure that, as leaders, your strategic vision is clear and documented for your staff and your teams. The second thing is more practical: okay, now I have a strategy. People know what the strategy is; they can recite it back to me. We’ve made a pretty report that we’ve shown to our board with our three-year and five-year strategy. Where do we go from there?

People know what the strategy is. They can recite it back to me. We’ve made pretty a pretty report that we’ve showed to our board with our three year and our five year strategy. Where do we go from there? So I think. What I now suggest for folks is because there are more and more nonprofit organizations which are doing this and publicly sharing about their work and their process to check out some of their materials and to reach out to people in your nonprofit communities or to the authors or executive directors from these organizations that are published that they have had a process and what their outcomes have at the Microsoft summit.

I talked about the cyber peace initiative. I talked about coding it forward. I talked about catalyst those 3 organizations, which I think all have public materials on their processes. I mentioned today. The Center for Democracy and Technology. I think all of those are examples. There are other organizations that also have examples, but I’ve talked about those now in speeches and in articles.

So I clearly think that the structure that those organizations have laid out is pretty good.

RHEA 26:42

Excellent. I know we’re running out of time, so I have so many more questions here, but is there a place where folks can go to learn more about emerging tools? Because you said to just test them out, right? I think folks are familiar with ChatGPT, but is there someplace you go? Is there a forum where you keep track? Because it feels like new tools are coming out every second of the day. So how do I keep up with all of the new stuff on the market?

AFUA 27:05

Yeah, absolutely. And honestly, keeping up with all of the new AI tools on the market is a never ending task.

And so some places where I go: one is just following, quite frankly, the AI hashtags, less so on Twitter these days, but certainly on LinkedIn, to see what people are talking about and what people are sharing. I think there are also a number of nonprofit tech communities where people share about tools they hear of and tools they learn about: Microsoft has a community, NTEN has a community, and I think TechSoup puts out a lot of great stuff as well.

So I think those are the other places that I go to look at what’s coming down the pipe.

RHEA 27:51

Okay, last question for me, because we could go all day, but we’ve talked about the concerns and the potential dark sides of AI; what are the potential upsides? I’m thinking particularly around scalability and making up for limited staff capacity. Because the thing that we complain about all the time in the nonprofit world is that we have too much to do and we don’t have enough staff. Can AI help us with this?

AFUA 28:14

Yeah, I think AI can certainly help with this. Again, the use of AI for predictive analytics can be really powerful and can really help you sift through all of the data that your organization has.

And I will say, as a side note, one benefit of this year, the second year of AI that we’re in, is that as people talk about adopting AI, we’ve come back to: we need to be on top of our data, what our data looks like, and having access to it. That’s an important conversation, and I’m glad that it’s coming back and that people are doubling down on it. But I do think predictive analytics is one thing that can really help. I also think that on the generative AI side, you can use an AI tool to help automate or perform some of those more rote tasks, things like formatting, or thinking through report titles, and other things that, for me personally, aren’t my favorite things to do.

Perhaps there are people out there who really like that stuff, and I should partner with them on some of those things. But until then, having generative AI tools perform some of those functions can be really helpful.

RHEA 29:22

Okay, wait, I lie. I actually have one last question. I used to work in education, and with ChatGPT, I think a lot of teachers are wringing their hands about the fact that kids are not going to know how to write, that they’re going to generate their essays via AI. How big of a concern is this? Should we be worried that kids are not going to learn how to write, or how to think?

AFUA 29:42

I would say I have a general concern about people not possessing critical thinking skills, but for me, that predated the release of generative AI.

And so when I think about some of the conversations about the use of generative AI tools in the classroom, I think back to the introduction of the calculator, maybe the graphing calculator, into the classroom. We didn’t stop teaching math. It’s not that now no one knows how to do math. We actually just started doing more complicated math and different types of math, while also recognizing that some people only needed math skills up to a certain level and would be fine, and letting people specialize from there. So I think we’ll see something similar. I’m adjunct faculty at Carnegie Mellon University, and my students are all able to use generative AI tools.

And I will say, to calm the teachers: if you were reading a stack of papers and people had just thrown a prompt into a generative AI tool, it’s really apparent really quickly. The syntax structure looks the same, and the phrasing looks odd, and the same as well. And so it’s really easy to identify when something was created with a generative AI tool.

It’s fairly easy to identify, I should say. But that also means that how teachers teach, I think, may need to change, and that we teach people a little bit more about what it means to creatively interrogate outputs, to evaluate information and what it means to make something your own, and the importance of maintaining your own voice that is distinct from a computer’s voice, your neighbor’s voice, and the general masses.

RHEA 31:22

What are some of your favorite AI tools, as we’re rounding the corner on our conversation today?

AFUA 31:27

Sure. So I will say, the AI tool that I have most enjoyed using: my family did a big game night last year, and I used a generative AI tool to create a logo for the Bruce family game night, which I then put on all the medals that were given to the winners. And so that’s perhaps my favorite use of generative AI to date.

RHEA 31:51

More important question: who won the game night? Because you’ve told me about your equally high-achieving sister, so I imagine it gets quite competitive in the Bruce household.

AFUA 32:00

It does get quite competitive. My youngest sister, the doctor, won. Handily. Oh, no, it was close. I was not in first or second place. It’s fine. They’re fine. We’re all fine. But my youngest sister won. There was a lot of trash talking, and she proudly displayed her medal.

RHEA 32:19

That’s it, it’s a hard-won medal. Because I sense that the Bruce sisters are not going to softball it easy here.

AFUA 32:25

Of course.

RHEA 32:26

Okay, my friend, this has been fantastic. So I think the takeaway for me is: don’t be afraid, but be discerning; ask questions, but also experiment and be willing to dip a toe in and see if these tools serve you or not.

Is that the right takeaway?

AFUA 32:41

That is a wonderful way to sum up the conversation: be discerning about the tools; with great power comes great responsibility. But regardless of your technical fluency, or the technical acumen you think you have, you do have a place in making these decisions.

You do have a place in contributing to the discussion about when and where AI should be used and how.

RHEA 33:05

Love it. Well, Afua, thank you so much. I’m going to make sure to put your info in the show notes if folks want to get in touch with you to learn more. And with that, carry on with AI, but don’t be scared.

AFUA 33:13

Exactly. Thanks so much.

RHEA 33:15

Thank you.

Support this podcast: https://anchor.fm/nonprofitlowdown/support

Host

Rhea Wong

I Help Nonprofit Leaders Raise More Money For Their Causes.
