Humanity + AI with Vilas Dhar

Worried AI will take your job? Fear robots more than global warming? Think AI lacks morals? Well, take a chill pill, my nonprofit friends. 💊

Today’s guest Vilas Dhar provides a refreshing perspective on AI. Rather than some apocalyptic force, he views it as a “human-centric reflection of our society.” Our biases and beliefs shape how it develops.

So who is this optimistic leader guiding us to an ethical AI future? Vilas heads the Patrick J. McGovern Foundation, advancing social good through tech. With experience across business, government and nonprofits, he connects communities to steer AI toward human dignity.

Discover how we can distribute AI more broadly so underserved groups have agency to address their needs. Learn what civil society can do now to democratize participation. Hear Vilas’ global vision for aligning innovators and people.

The robots aren’t coming for us yet, people! 🤖 Tune into this uplifting episode to understand how we shape the ethics and promise of AI. The future remains unwritten – our choices matter.

Important Links:

https://www.linkedin.com/in/vilasdhar

Episode Transcript

RHEA  00:00

Hey you, it’s Rhea Wong. If you’re listening to Nonprofit Lowdown, I’m pretty sure that you’d love my weekly newsletter. Every Tuesday morning, you get updates on the newest podcast episodes, and then, interspersed, we have fun special invitations for newsletter subscribers only, and fundraising inspo, because I know what it feels like to be in the trenches alone.

On top of that, you get cute dog photos. Best of all, it is free, so what are you waiting for? Head over to rheawong.com now to sign up.

Welcome to Nonprofit Lowdown, I’m your host, Rhea Wong.

Hey listeners, it’s Rhea Wong with you with Nonprofit Lowdown. Today, my guest is Vilas Dhar, the president and trustee of the Patrick J. McGovern Foundation. And we are talking today about ethical AI and all the things that you need to know.

So I’ve just been obsessed with AI. I think we all have, ever since ChatGPT was born basically just a little over a year ago. And I think generative AI is on everyone’s minds: how can AI help us? But more importantly, and this is why I think it’s important to talk about this today, what are the ethical considerations of AI, and how do we make sure that we’re using it responsibly?

So, Vilas, welcome to the show.

VILAS 01:13

Thanks, Rhea, so much for that welcome. I’m super excited to be with you today.

RHEA 01:16

I’m excited as well. Before we jump into all things AI, I think it’s always important to start with the origin story. So I know that you had a very interesting upbringing in two different worlds, which may have been the origin story for why you’re dedicated to this particular mission.

So could you tell us a little bit about that?

VILAS 01:33

Sure, Rhea. I’ll tell you, it’s actually probably three different worlds, and I think of it as almost three different expressions of what humanity has faced over the last century and is facing today. One was that I had the great privilege of growing up in central Illinois, right in the middle of cornfields and soybeans, and really being a part of what was happening in the middle of heartland America in the 1980s, where you were seeing this kind of transformation of culture and politics and economies. And even as I got to know all of my friends who lived on farms and knew what that world was, I was spending my summers going back to rural India, where my family was from, and it was a different world.

It was a place where, in my family’s house, there wasn’t running water, there wasn’t power, and the only phone in the entire village was a 15-minute walk away. And so in that contrast between the first two worlds, I saw one country that was developing quickly, that had all of this technology, and another that was so distanced from that.

And then there was a third part: I was very lucky to grow up in a town that hosted one of the national centers for supercomputing, this incredible commitment by the government to foster research and development, not just in the technologies of the day but in the technologies of the future. I remember being super young, eight, nine, ten years old, getting to run through supercomputers, having people who really took an interest in helping me understand what these computers, what these technologies, meant and what we could do with them. So if you take those three different experiences, for me, I was always trying to figure out how they fit together, these three very different expressions of the world.

And I realized a couple of things that have stayed with me through my whole career. On one side was this incredible optimism that I developed about what these tools represented: the capacity to connect to the world in entirely different ways, to go online and meet people from all over the world, to hear different viewpoints, to understand how they might change commerce and jobs and health and all of these things.

And the second realization was that that optimism, that idea of the world that was possible, wasn’t an inevitable outcome, that there were people in the world then, and today, that just don’t have access, that don’t get to participate, that don’t have the agency of being creators of what that digital future looks like.

And in that, I felt a deep sense of frustration, but also a calling, that this is what my life could be about. And that’s really what I’ve spent my career doing: connecting the dots, saying, if we know that we can create a better world through technology and through social systems, then why wouldn’t we be spending all of our effort, all of our time, all of our energy to make sure that we make that happen as quickly as possible?

RHEA 03:57

That’s such a beautiful confluence of different influences. I know you were at the Kennedy School, so I’m going to name-check Marshall Ganz here. Was there a moment for you that really triggered the trajectory of your whole life, the choice that you made in the face of a challenge?

VILAS 04:11

Yeah, so many, Rhea. And that’s the thing, right? I think I often quest for the moment where it all happened. And for me, it was really a sequence of incredible moments of spending time in communities and seeing what happened when just that next little iteration of a technological solution came. I remember, and I’ll give you one of these sequences, just because I hold it so closely.

My grandfather was a very special figure. He was somebody who had a third-grade education and had spent most of his life really doing the kinds of manual labor that you’d expect in a place like rural India in the middle of the 20th century. But he had also, very organically, become a champion of justice, who would do anything that he needed to do for the people in his community.

And because of that, he kind of built this incredible support structure that knitted entire communities together. Now this grandfather of mine, who, as I’ve shared with you, was this kind of bigger-than-life figure with a big mustache and a third-grade education, the thing he loved more than anything else was when I would come back to India and show him whatever the newest piece of technology was that I was fascinated with, right?

So in the early days, it was things like video game systems and a Game Boy, or eventually it became, here’s my laptop, and eventually my cell phone. And the thing that struck me, Rhea, to bring it back to your question: every time, he had such curiosity about it. He would ask me a million questions, and with my enthusiasm, I would tell him all about it. He wanted to know what it did, and how it worked, and how it had come to be.

And inevitably, in every one of those conversations, after we’d done all of that, he would ask the most poignant question, which is: okay, this is great. How does it help the people in my community? The teacher we visited yesterday, the farmer that we walked along in the field with, how is it going to change their lives?

I think in the story of me, the story of my work in the world, that set of questions that came from my grandfather and from the community around him is what shapes my entire worldview, which is: we can do anything we want. We can create these amazing innovations. But until we ground them in a question and a story of how they help the people around us, there’s not really intrinsic value to it just yet.

When we figure out how it transforms a life, how that teacher is better able to educate the students in the classroom and those students are able to express their curiosity and ask new questions to become better owners of knowledge, how that farmer who’s working on a subsistence farm can understand what’s going to happen because of our capacity to predict the weather and to predict what will happen in the markets, so they know that they can get the best price when they take their products to market, then we’re doing something that really compels all of us, that creates excitement and enthusiasm, that gives us a sense of purpose.

RHEA 06:44

It’s funny you mentioned that. I’ve been doing a deep dive on Arthur Brooks lately, and he talks about the pillars of happiness, one of which is having a sense of purpose.

And so, I don’t know quite what the question here is, other than: here in the U.S. we’re facing a presidential campaign, and I think there are two very disparate versions of the future on offer, right? There’s one that’s optimistic, that is altruistic, that says we can build a better tomorrow together.

There’s another, sort of darker view: everything is terrible, bad things are gonna happen, everything is awful. And I guess what I would say is, as I think about AI and how it develops and who funds it, it is inextricably linked with capitalism. And we know that capitalism hasn’t always done a great job of thinking about social good and humanity at its center.

So I’m just curious. I think the question is: as AI is progressing so quickly, literally changing every single second, how do we make sure that we’re injecting these important ethical and purpose-driven questions into its development, when oftentimes it’s the people who are at the fringes who don’t have control over the development of the technology?

VILAS 07:54

Yeah, I have a simple thought for you at the top, which is: I know that these narratives are emerging in the public discourse, but the narratives that I care about, the ones that I feel matter, are the ones that are shaped by actual communities who are willing to step forward and talk about what their lived experience is and what they need and what they care about.

And what I find over and over again is that, whatever the media and the abstractions will tell you about what the visions of the world are, what people are grounded in when you really talk to them and ask them, and where I am, is a sense of shared dignity, of shared welfare, of shared community.

The key word here being “shared”: the idea that we’re in this together and we’re trying to figure it out. And no matter what the irritations of the moment are in a political discourse, or who the emerging figures are that are leading a particular track, we are still all very conscious of the fact that we live on one planet together, and that there are really no stories of individual outcomes that don’t take into account the people around us.

So let me tell you what that means for me when we talk about technology, and I know we’re going to spend a fair bit of our discussion thinking about tech. People always ask me: well, these private companies are out to extract profits, aren’t they just going to damn the future for all of us? And my response is no. Companies aren’t hegemonic, homogenous organizations.

They’re made up of people. And when I talk to engineers and scientists and salespeople and marketing folks and program managers at these companies, I don’t hear something that’s much different from what I hear in the nonprofit sector. I hear people who say: I’m trying to commit my life to doing something that matters.

I’m trying to build a product that people can use. And so the question for me isn’t how you take these two very different worlds, that ethical and responsible community-driven approach and something that maybe is cast as being very far away from that in the private sector, and reconcile them. It’s instead about saying: how do we share common agency among the people who are using these tools and the people who are creating them, the people who are conceptualizing what the product looks like and the people who are conceptualizing what the social and human future looks like?

So that, to me, feels like a very worthwhile space for an effort to say: how do we connect the dots? How do we make sure that if people are speaking in different languages about purpose and impact and profit, we’re able to align around a common and shared sense of values and principles, and then make sure that those are expressed in the decisions that people make and all of their choices?

RHEA 10:14

That’s so beautifully stated. And so do you see your role in the world as linking these two communities together? Because I do think that they’re often in separate rooms, and we’re often in separate spaces. And so how do we come together around a shared dialogue?

VILAS 10:30

Yeah, I think that’s certainly a part of it, and that’s a very tactical way to think about it, to connect different communities. But I think there’s something more foundational that we need to think about, which is that one of the things that has happened is the fragmentation of a sense of shared vision for humanity.

Each kind of group, each sector, has developed its own point of view on it. And that’s not a bad thing. But I think if you really were to zoom out, and you wanted to be very abstract about it, what I would say is: I think the role that all of us can play, those of us who are in these different sectors, is to try to say, we have our competencies and our confidences.

We know what we’re good at and we know what we want to do. How do we align all of those shared actions and our interests in pursuit of a common and shared goal? How do we elaborate what that goal is? How do we make sure that we know what the dimensions of it are? For me, as I talk to communities around the world, I hear shadows of the same goals.

I hear about a world where technology creates incredible prosperity, that it creates real opportunities for people to find economic value, and find jobs of purpose, and find access to basic services and dignity. One where technology is inspiring humans to be more creative, to be more visionary. And I hear different versions of facets of it, and different mechanisms of making it happen.

But as long as we’re aligned around the idea that technology doesn’t define our future, that our choices as humans define that future, then we can step in and say: if we’re thinking about a particular kind of product development, we can ask the question, how does it advance our idea of human dignity?

If we’re thinking about a solution to a fundamental problem, like how climate change is affecting people who live in coastal floodplains, we can ask a question that says: how do we bring technology to bear in a way that creates a solution that helps them? So that sense of a common idea of a technology-enabled but human-centered future, I think, is at the core of our work.

RHEA 12:14

I love that: a technology-enabled, human-centered future. Because, at least the way I sometimes think about AI is to go back to the ’80s, you know, “the medium is the message,” and it feels like riding this dragon that is going forward, and we’re just holding on as it moves.

And yet what I’m hearing you say, which I feel is very encouraging, is that we as humans have the agency to make choices about how we use this technology.

VILAS 12:44

That’s exactly right. I’ll tell you, Rhea, as a child, I always wanted to ride dragons, right? It felt like the most exhilarating and amazing thing.

And let’s use the metaphor, because I think it’s worthwhile, right? This was never a conversation about one exclusively controlling the other, right? It was always about partnership, about thinking about how technology could become a partner that advances human interests. And there are lots of challenges to the metaphor.

And so maybe we’ll let it live there for a second, but let’s talk about AI. You brought it up a few times, and you’re right: in the eighties and the nineties, science fiction was really determining what the future of AI was, more than the science, and we had Hollywood putting out movies about robot futures and dystopian ideas.

I think those took hold in our common consciousness in a way that created a lot of fear, a lot of anxiety, a lot of concern, and I think that’s entirely valid. But that can’t be the only story that dominates how the rest of us, the non-technologists, think about these tools. If we don’t counterbalance that with an understanding of what these tools enable in really positive ways, of the ways that they can support our efforts on so many of these stories that we’ll talk about, then we’re left with just a very narrow view of what AI could be. And that doesn’t serve any of us in a meaningful way.

RHEA 13:54

It’s funny you mentioned this dystopian view. So I had a chance to chat with Afua Bruce a little while ago, and I asked her about the singularity.

And she was like, I think you need to not worry about that; the singularity is not the thing to worry about. So what are the things that we need to be worrying about? Because I think, to your point, AI is embedded in so many things that we’re not even conscious of. It’s making decisions about things like who gets credit and who doesn’t, who qualifies for loans, and who’s most likely to do what, I guess racially profiling people based on data pools that we had no idea about.

Tactically speaking, so many questions here, but to back up: what are the things that we should be aware of that AI is determining right now?

VILAS 14:35

It’s a great question, Rhea. Let me say a couple of things here. So one is Afua is an amazing and visionary leader, and I’m so glad that you had a chance to chat with her.

And I agree with her, right? As we think about what’s happening in the world, and what media is reporting on AI, and where the public discussion is, I’m really concerned about this trend we’ve talked about: that fear has taken hold. And I often see that there’s a set of conversations about the very short-term risks of AI.

And I think these are so valid and so concerning. We should be thinking about algorithmic bias, and we should be thinking about data representation, but we should consider them not as fears of an unknown, but rather as specific, defined challenges that we need to ensure are addressed. Then there are the longer-term existential risks, the other end of the spectrum, right?

And these are sometimes amorphous, and they really prey on, I think, very human tendencies to be concerned about that deep unknown. And I agree with that: I don’t know that that’s what we should be spending a lot of our time on. We should acknowledge it and we should name it. But what about what I call the missing middle?

What about all of the pieces in between the short term and the long term, where we’re proceeding by default? You alluded to this with the idea that AI systems are being deployed almost willy-nilly. I want to be clear: it’s not so much that the AI systems are making decisions, but that humans have given these AI systems the agency to make those choices.

It’s happening, as you alluded to, all over the place. We know of situations where AI systems are being used to make determinations about things like credit, or sometimes to make decisions about even more fundamental things. I think you’ve probably seen there have been news stories about governments trying to use AI systems in the delivery of services and benefits, or even sometimes in criminal sentencing.

The challenge for me is that the missing middle isn’t just about asking where the systems are that have been deployed without our notice, but where we are stepping in as a society to say: these are the parameters and rules we want to put in place; these are the ways we want to invest in building AI systems that actually support positive use and pro-social outcomes.

When we talk about something like an AI system that changes the way we think about credit, there’s a negative version of that, a predatory one that creates vulnerabilities, where a system that’s not transparent to us as humans is making decisions that negatively impact our lives. But what about the positive side of that coin?

What about AI systems that are able to understand risk in a way that says: you know what, for that person who has no banking history, we can use their behaviors and their patterns through their data, after they willfully and knowingly consent, to build a profile that says, this is a worthy person for us to lend money to, and with that money they will go and do something of great impact for themselves, their families, their communities. And that person, who has been left out of our economic system for the last decades, centuries, and millennia, now has an ability to access the world, because this new data is giving us new ways to create trust. There are a hundred examples like this, where I think about the missing middle as the place where civil society and nonprofits should be stepping forward and advocating for the communities that we work with, to say: we should be building technology tools that advance their interests.

We should be investing in shared compute architecture and in the ways that we build these tools. And governments should be thinking about more than just limiting technology companies or regulating against harms. Governments should be thinking about how we shape and create a future where technology capacity is distributed, where we’re investing in public resources and ideas that can actually make the world a better place.

RHEA 17:55

I love that, and it makes me think about one of my questions. So I work with a lot of nonprofits, and many of them are tech-focused and hoping to diversify the pipeline of tech talent.

I’m wondering, for the non-technical person: we know that the tech sector is predominantly white, predominantly male, and has this very specific lens on the world. How do we as a community, or as folks of color, step into, as you say, creating this dialogue and co-creating these rules around what we agree to? And I think the word you used is consent, which I think is really important and probably not talked about enough. Like, how do we create a consent-based infrastructure and architecture, so that we are co-creating this future that we want to live in together, versus leaving it in the hands of faceless technology executives?

VILAS 18:45

Yeah, look, I think we need to reframe some very tightly held assumptions about the world we live in.

I agree with you that the tech sector is deeply unrepresentative. But just because that’s the way it is doesn’t mean that’s the inevitability of the future. The sector is the way it is because, in my view, so many of us stepped back over the last few decades when we were faced with questions about technology.

Rather than stepping into places where we expressed our views and really forced our policymakers to hold these organizations to account, we said, it’s a technology question, and, implicitly or explicitly, we said, we’ll let the tech companies figure it out. It happened around the growth of the internet.

We saw the incredibly toxic ways that social media developed because of that. And the choice we face now is really fundamental. It’s not just about training more diverse talent, although that’s a critical part of it, and I’ll come back to that. It’s also about us all taking a step in and saying: you know what, we’re not going to let this happen again.

And we’re not going to proceed on the assumption that the ways we’ve set things up are the only ways they can be. So let me give you a few examples. The first is this idea that’s so endemic: that technology companies drive innovation, that private-sector organizations are the ones that advance what AI is capable of.

I agree that it’s probably a reality today, but again, it’s not an inevitability. If we were to resource public institutions, our universities and research and development, our civil society organizations and their applications, to build this stuff in the same way that we resource the private sector, we’d have an entirely different way of building technology.

We’d have a grounding in purpose over profit. We’d have a grounding in building use cases that really affect people’s lives, without having to worry about whether there’s a market fit. And this is something that we could do with the resources that are in philanthropy and the amazing talent that’s in the nonprofit sector.

When you talk about nonprofits that are close to tech, it’s incredible to see how organizations are using these tools today. So that’s one. The second is around talent itself. And again, I want to say that even though much of the sector is, as you said, deeply unrepresentative, there are incredible leaders who are working at massive scale to train new talent throughout the world.

There’s an organization called Technovation, which maybe you’ve run across, that has been leading efforts to bring together technology companies, along with philanthropies and nonprofits, to say: how do we train millions of young women and girls to become technologists? And then let’s extend that idea even further.

It’s not just that we need more coders or more people who can create AI. What if we could really invest in building a broad-based digital literacy, the idea that every person on the planet doesn’t need to be somebody who can make AI, but at least needs to know what it’s capable of, and can use their own kind of creativity and innovation to say: now that I’ve learned what these tools are capable of, how might I apply them in my own life?

In our partnership with Technovation, we’ve come across so many of these incredible stories, but I’ll tell you one very quickly. Three very young women, teenagers in Delhi, India, which, as you might know, experiences massive, terrible air pollution every day because of a set of agricultural practices that happen very far away from where they are.

They don’t hold responsibility for it as individuals or even as a community; they’re the victims of a behavior that happens very far away. So these three young women brought together their knowledge of data and AI through training programs, and they said: you know what, we could build an app that actually helps the people in our community.

And they brought together knowledge about geospatial intelligence, about weather prediction, about air quality monitoring, and they built an app so that, just for the people in their neighborhood, when they go out they have real-time, clear data about just how bad the air is, and they can make choices and decisions about how they want to protect themselves.
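(For the technically curious: here is a minimal sketch of the kind of real-time air-quality lookup an app like this might be built around. The endpoint URL, response fields, and advice thresholds below are illustrative assumptions, not the students’ actual implementation.)

```python
# A minimal sketch of a neighborhood air-quality check, loosely inspired by
# the story above. The API endpoint, parameters, response shape, and advice
# thresholds are all illustrative assumptions, not the students' actual app.
import requests

# Hypothetical open air-quality endpoint returning a PM2.5 reading (ug/m3).
API_URL = "https://api.example-air.org/v1/latest"  # placeholder URL

# Illustrative advice bands for PM2.5, loosely modeled on WHO-style guidance.
ADVICE = [
    (15.0, "Air is fine. Enjoy your day."),
    (35.0, "Moderate. Sensitive groups should limit time outdoors."),
    (75.0, "Unhealthy. Consider wearing a mask outside."),
    (float("inf"), "Hazardous. Stay indoors if you can."),
]

def neighborhood_advice(lat: float, lon: float) -> str:
    """Fetch the latest PM2.5 reading near a location and map it to advice."""
    resp = requests.get(API_URL, params={"lat": lat, "lon": lon}, timeout=10)
    resp.raise_for_status()
    pm25 = resp.json()["pm25"]  # assumed response field
    for threshold, message in ADVICE:
        if pm25 <= threshold:
            return f"PM2.5 is {pm25} ug/m3. {message}"
    return "No reading available."

if __name__ == "__main__":
    # Approximate coordinates for central Delhi.
    print(neighborhood_advice(28.61, 77.23))
```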

It’s a powerful story about how anybody in any community, when they just become literate about what these tools allow them to do, can pull together the pieces to build something that empowers themselves and their communities to live in this world. Is it the solution to the underlying problem?

Not yet. But does it give me a lot of hope that when you empower communities to have access to these tools, they do incredible things?

RHEA 22:42

Absolutely. Yes.

As you were talking, what came to me were the words “moral leadership,” because I think part of what empowering people with technology means is that there has to be a corresponding knowledge or commitment or viewpoint that is grounded in a sense of ethics and morals.

And I think we are drowning in information but starving for wisdom. And so how do we think about this? Who do we look to in order to make sure that we are imbuing the technology, and the people who are working on it, with the idea of ethics and putting humans at the center? And really, what is the moral choice, right? Because, especially as our political discourse is becoming much more binary, there are fundamental disagreements about things like human rights.

VILAS 23:30

Rhea, I think about the fact that so much of the conversation these days is about ethical AI. And I think about that term and I find it really hard to even understand. I don’t know how a technology can be ethical. I don’t know how a computer or a calculator or a telephone can be an ethical being.

So I like to reframe it. I like to talk about the fact that the conversation we actually need to have is one about building an ethical society that’s enabled by technology. And who is responsible for that? We can’t ask our technologists to be responsible for the ethics and morals by which we build our human societies.

That’s a shared responsibility and it’s one that we all need to own. So what does that mean? It means that we need to have political conversations. About how we structure decisions about using technology about how we build it about the guidelines that we give our engineers as they go off and build these tools.

And it’s something that we still have a bit of a vacuum around. There are certainly incredible leaders who’ve stepped into the space, and they are now proliferating throughout society. And yet I still think about the people in the community I grew up in, people who know they’re being affected by these tools but don’t yet have the clear pathways by which they exert their voice and their agency and their interests.

Rhea, I think a lot about the fact that a conversation that we sometimes like to put in a box, and say it’s about tech ethics or it’s even about technology, we need to take out of the box and make a core part of our political discourse. We need to make sure that every community, here in the United States and around the world, understands that these tools are inevitable in the sense that they’re going to be created, but they’re not inevitable in what they will look like, and how they’ll act, and how they’ll impact our lives.

I think about the fact that there are going to be really disproportionate and differential effects from these tools. The way that a white-collar worker living in an urban city is going to experience AI-based disruption is going to look very different from how a person of color who’s working in a migrant agricultural community is going to be displaced by, potentially, an autonomous robot that does their job.

We need to have a new discourse, a new social compact that brings everybody together. And I know this is a big idea and maybe not the kind of thing that you expected to talk about on this podcast.

RHEA 25:41

No, listen, I’m here for the big ideas.

VILAS 25:43

But it’s at the core of what we need to do. I think often about these moments in human history where we realized something was happening and we needed to come up with a new way to address it.

I think about the fact that in 1776, a year that’s so important in American history, three things happened that were really quite critical. One, I think, is probably easy to recognize and well known to our listeners: 1776 was the year of the Declaration of Independence, right?

A massive new way of thinking about what it meant to have a democratic country, what it meant to build a new political system. But in the same decade, two other things happened. One was the creation of the first commercially viable steam engine, the single invention that probably launched the industrial age, that let us build these amazing new ways of thinking about the world, that created economic prosperity, though it didn’t necessarily end up creating the same levels of prosperity for every single group, right?

And the third thing that happened, in almost the same year, was the publication of The Wealth of Nations, the treatise that defined what our economic structures looked like for multiple centuries and that is still the baseline for how we think a lot about economic theory. I share this with you, Rhea, because in that historical moment, I saw the convergence of a political way of thinking, an economic model for how we built our society, and a technological model for how we created prosperity.

I look around the world today and I see, as you alluded to, the fragmentation of our political system, the ways that the ideologies that have led for so many centuries maybe don’t seem to apply anymore. I see economic fractures that are happening because of this potential capacity of AI, some positive and some negative.

On the positive side, maybe we have new ways of thinking about supply chains and optimizing manufacturing that reverse some of the harms we’ve done to our planet. On the other, potentially the displacement of workers and the creation of new economic vulnerabilities. And then we have the technology itself, technology that promises so much that we don’t quite yet know where the boundaries of possibility are.

And I wonder, is it time for us to have a new constitutional convention? Is it time for us to have a new public discourse that says, look, we’re at a critical turning point, not just for this country, but for humanity writ large? And do we feel equipped? Have we done the work to make sure that every person on the planet has a literacy about what’s happening and has the agency to express their interests?

Are we at a point where we could actually have a conversation like that? I don’t know. But I’m optimistic, and I’m optimistic to say that even if today is not that day, there are things that we can do that bring about that readiness. There are things that philanthropies can do by resourcing and equipping civil society organizations.

There are things that nonprofits can do when they think about something as specific as an issue that’s close to my heart, like homelessness, or making sure that there’s access to basic food and medicines, or that those who are fleeing political conflict are able to find a sense of safety, of home, of welcome.

Each of these lines of work still leads back to the idea that we’re going into a time of deep transition in the world. And even as we serve the immediate need, we need to be thinking about how we prepare people to step into that public conversation.

RHEA 28:48

Yeah, so much is coming up. In no particular order, there’s this idea that AI is merely a reflection of who we are.

And so if we’re seeing the use of AI in ways that we don’t like, I think the mirror should be reflected back on us: who are we, and what are we doing, and how are we acting in ways that we don’t like? I’m thinking, too, about the fact that historically, religion has played a big role in American society, and certainly globally, and that’s often the anchor that many people have used to develop a sense of ethics and what’s right.

And now, with people being less religious and practicing religion less in their day-to-day lives, where do we learn about ethics and a moral framework? So that’s the second thing. And the third thing is: who do you look to? As you are out in the world spreading this message, who are the people that you really look to as moral leaders, or who are inspiring you with their vision of the world?

VILAS 29:48

Yeah, that’s a bundle of really important questions. I’ll try to take them in sequence. The first is, I think you hit the nail on the head with this idea that it’s so important that we acknowledge that AI isn’t an alien thing. It’s not an inhuman technology. It’s not something that exists in its own vacuum; it didn’t arrive on a meteor from somewhere outside of the planet. AI is, in so many ways, a reflection of a human-centric view of the world, now imbued into a technology. And it’s really important to recognize that, Rhea, because when we sometimes try to take a technology and anthropomorphize it, to make it feel more human just so we can critique it, we should recognize that what we’re really critiquing is our own biases.

I’ll give you a very quick example. In the early days of AI, one of the things that we heard about a lot was how it was being used in hiring systems and recruiting, and it became very clear that the early generation of these algorithms had deep and embedded biases: that they often prioritized men over women, that they prioritized certain racial groups over others.

And this was, I think, the source of a deep sense of consternation for people who were critiquing AI as a tool. But I think we very quickly came to the realization that the critique wasn’t really of AI. It wasn’t of the algorithms. It was of the fact that the algorithms were built and trained on data about how humans had made these decisions for decades.

It always struck me, and it strikes me to this day, that we were so able to find a sense of shared frustration and anger about the tools. Where is our anger and frustration about the fact that our society is built in such a way that, all around us, in every kind of hiring decision that we’ve seen, these biases have been explicit and implicit, that they’ve existed, and we haven’t done anything about them?

So let’s reframe it, from being frustrated at the tool for being biased, to actually saying: hey, this is an amazing, almost Sherlock Holmes-style magnifying glass. If we could use it to investigate the fundamental biases of our society writ large, we could find and root out all of the places where we’re finding discriminatory actions, where we’re finding inequity and injustice.

AI actually becomes a tool, when we train it on human behaviors, that lets us focus on and clarify the things that we should be deeply uncomfortable about, and then do something about them. At that point, AI becomes something more. It becomes this incredible tool, an aperture for us to understand human organizations and systems and behaviors, and it gives us the power to change those things.
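(A toy illustration of that magnifying-glass idea: the sketch below fits a crude model to synthetic, deliberately biased “historical” hiring data, then audits its selection rates by group. Every name and number is invented for illustration; no real system or dataset is implied.)

```python
# A toy sketch of the "magnifying glass" idea: a model fit to biased
# historical hiring decisions reproduces the bias, and a simple selection-rate
# audit makes that bias visible. All data is synthetic and all numbers are
# illustrative; this is not any real hiring system.
import random

random.seed(0)

# Synthetic "historical" decisions: candidates in groups A and B are equally
# qualified, but past human reviewers approved group A far more often.
history = []
for _ in range(5000):
    group = random.choice(["A", "B"])
    skill = random.random()  # true qualification, same distribution per group
    approval_rate = 0.8 if group == "A" else 0.4  # the embedded human bias
    hired = skill > 0.5 and random.random() < approval_rate
    history.append((group, skill, hired))

def fitted_rate(group: str) -> float:
    """A crude 'model': P(hired | group) estimated from the biased history.
    It memorizes the biased base rates instead of judging skill alone."""
    outcomes = [hired for g, _, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# The audit: compare selection rates by group (a disparate-impact style check).
rate_a, rate_b = fitted_rate("A"), fitted_rate("B")
print(f"Selection rate, group A: {rate_a:.2f}")
print(f"Selection rate, group B: {rate_b:.2f}")
print(f"Impact ratio B/A: {rate_b / rate_a:.2f} (well below 1.0 flags bias)")
```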

What an incredible opportunity we have. You spoke a little bit about cultural traditions and wisdom and where people look, and I can tell you that where I look hasn’t really changed. I look to the wisdom of communities. I look to people who tell me about their daily lives. And I think about somebody I met on a recent trip to a global majority country, who shared with me that he was in his 60s, that at the start of his life he had never had a phone in his home, and that he had then gone through this accelerated progression through technology; today he has a smartphone, and he was showing me how he used it.

The use cases were so different from what I’m familiar with when I think about that same kind of profile of somebody in the U.S., right? This was a person who used it to make sure that his kids could have rides on the local commuter system. For the first time in his life, he knew when the buses were coming and the rickshaws were coming, and he could make sure that his kids could get there safely.

He used it to monitor his farm, a tiny little plot of rice, to make sure that there wasn’t wildlife running through it and that it wasn’t being destroyed when he wasn’t paying attention. These were ways it was intersecting with his life in really human, personal, moment-driven ways. I think that’s wisdom too, right?

I think the wisdom of thinking about how you use these tools to live your life in a way that’s more about expressing your own dignity and interests. So there’s plenty of space for cultural traditions, for religion, for faith, and there’s space for the lived experience of communities. And there’s no need to make it all one single source of truth; every community can have their own expression.

When I travel around the world now, meeting with governments in different places, I find some fascinating things come out. In global majority countries, governments want to talk about AI safety and they want to talk about regulation. But when I’m in West Africa, the government wants to talk about the applications of AI to subsistence farming.

When I’m in a South American country, the minister comes to me and says, I’m really deeply concerned about how AI is going to disrupt mining in my country and how we make sure that it disrupts it in a way that’s better for the environment and doesn’t displace tens of millions of workers who rely on this as their sole source of sustenance.

So there’s this idea that we can have a global framework of shared values, shared principles, a shared sense of justice, and then we can have local priorities and local senses of how we want to build these things. Once we have the bedrock of our values in place, it lets us have innovation and this flourishing of both technological and human creation all across the planet.

RHEA 34:31

Ah, I love talking to you; it just feels so optimistic in a time that feels somewhat dark and pessimistic. So, last question for me: if I’m listening to this on the bus and I’m with it, I want the shared vision, but I’m not a technologist. Maybe I’m running a nonprofit. Maybe I’m just a concerned citizen.

What can I do, tactically speaking, to help be part of this conversation?

VILAS 34:57

Yeah, it’s such a good question. And luckily, it has a pretty easy answer. I don’t know, that’s maybe not what you want to hear. I could give you a dissertation on it, but I think there are just two things that are so critical.

The first is for every single one of us to express our personal curiosity on these topics. To go from a world where I think we’re taught often that technology is scary, it’s complex, it’s far away, and to recognize the reality that it’s really none of those things, that with our curiosity, we can go and understand these tools that are built by humans for humans.

That’s the first: to be curious, to go out and engage, to do exactly what I’m sure many of our listeners here are doing, listening to a conversation about a topic that maybe isn’t the first thing that crossed their mind this morning. And the second is to take that curiosity and turn it into agency: to develop our own points of view on these items, to recognize that it’s not somebody else’s decision to make what my future looks like and how AI will impact it.

And I think if we did those two things as a humanity together, if we were curious about and learned about these tools, and we used that to shape points of view that advance our interests, for ourselves, for our communities, for the people we care about, think about what a different conversation we would have in the world today about AI, about technology, but also about democracy, about what it means to be human, about what kind of world we want to create for ourselves and the generations that will come after us.

RHEA 36:17

Okay, I lied. I do have one last question, and it’s completely unrelated. So you travel something like 350 days out of the year, which is just bananas. What are your favorite travel hacks? What do you do? Because I’m assuming that you move over many time zones. What are the tricks of the trade?

VILAS 36:34

It’s a good question. I have to tell you that travel isn’t the outcome, right? I’m not sure that I would advise anybody to live a life like this; it’s got its challenges. But here’s the good part of it: it stems from a commitment that I’ve made personally, and that we’ve made as an institution, to be with communities where they are.

Rather than saying we have a Fifth Avenue skyscraper for our philanthropic institution that people can come to, we say that we will come to you, and not just be with you, but really try to live and understand and hear community wisdom. Now, in order to do that, you asked about travel hacks, and there’s probably a small handful of things.

I think one of them is recognizing that it’s really easy to get sucked into the inconveniences of travel, and I think you have to remember the why all the time, and it’s what lets you step through some of those things and realize, you know what, they’re actually not that big of a deal.

Sometimes a flight’s delayed, or you’re up at 1 a.m. or 3 a.m. or 5 a.m. because you’re doing a podcast in whatever corner of the world you’re in. But those are things that are part of a broader story about connection, and to remember to be inspired by the fact that we live in a world where, for the first time, communities are able to connect in such meaningful ways.

I think that’s a really great and amazing thing that we get to do.

RHEA 37:42

Vilas, thank you so much. Is there anything that I haven’t asked about that you think is important for us to touch on before we say our goodbyes?

VILAS 37:48

No, I just want to thank you, Rhea, for the work that you’ve been doing to support nonprofits and their journeys to help them understand how to use these tools in more effective ways.

But recognizing that, at the end of the day, our organizations exist because they support communities, and our communities are people across the planet. We’re all in this together. I really appreciate that.

RHEA 38:05

Thank you for that. And thank you for all the work that you’re doing, and for being a moral leader, and for being optimistic, and for helping us to think about how we use AI to live in a tech-enabled, human-centered future.

VILAS 38:17

That’s right. Excellent. Thank you so much, Rhea.

RHEA 38:19

Thank you.

Support this podcast: https://anchor.fm/nonprofitlowdown/support

Host

Rhea Wong

I Help Nonprofit Leaders Raise More Money For Their Causes.
