
Become a Good Steward of AI: How Our Use of AI Can Ripple Around the World for Good - Part 1



Photo by Alena Darmel https://www.pexels.com/@a-darmel/

Part 1: Trust, Inclusion, and Preparing Society

In a world where technology evolves at lightning speed, artificial intelligence (AI) has emerged as a transformative force, reshaping how we live, work, and connect. This evolution prompts a deeply human question: What kind of future are we building with AI?

This inquiry isn't confined to data scientists or policymakers; it's a call to all of us, especially leaders, to embrace a new form of responsibility: AI stewardship. This concept transcends mere compliance or innovation; it embodies our values, vision, and the far-reaching impact of our choices—both now and for generations to come.


Acknowledging the Pioneers of AI

Before delving deeper, it's essential to honor the trailblazers whose foundational work has paved the way for today's AI advancements:

Geoffrey Hinton: Often referred to as the "Godfather of AI," Hinton's pioneering research in deep learning and neural networks has significantly influenced the field.

Yoshua Bengio: A professor at the University of Montreal and founder of the Mila - Quebec Artificial Intelligence Institute, Bengio's contributions have been instrumental in advancing machine learning.

Yann LeCun: Serving as the Vice President and Chief AI Scientist at Meta, LeCun has made significant strides in the development of convolutional neural networks.
These three luminaries jointly received the 2018 Turing Award for their groundbreaking work in deep learning.

Why AI Stewardship Now?

As Ginni Rometty (2023) articulates in her Harvard Business Review article, every organization engaging with AI must act not merely as a creator or user of technology but as a guardian of its impact. The stakes are monumental, encompassing privacy, security, equity, and even the integrity of democratic processes.




Building Trust in AI Systems

Rometty reflects on her tenure as CEO of IBM, highlighting the company's proactive stance on trust and transparency:

  • Technology should augment humanity, not supplant it.

  • Data belongs to its creator, and safeguarding it is paramount.

  • Transparency is essential, particularly concerning AI applications.

These principles help shape a trustworthy AI infrastructure and reinforce user confidence.



Inclusion: A Moral and Strategic Imperative

Rometty emphasizes that "the demographics of engineers who create AI play a role in AI’s predictions." Without diverse teams behind the technology, outcomes may perpetuate bias. To develop #HumanCentered #AI, companies must rethink qualifications, hire inclusively, and remove structural barriers.




Preparing Society for an AI-Driven Workforce

Edward Lewis (2024) describes how AI is already reshaping sectors like education, employment, and real estate. Tools like Khan Academy's chatbot or Zillow's Zestimate show promise but also highlight risks when outdated or biased data drives decisions. Two programs I recommend are Section School and SuperHuman AI, both of which offer great certifications to get started. Keep an eye out for the launch of E4C Academy, a two-for-one value with community and courses focused on AI learning and core skill development.


To lead responsibly, we must invest in #LifelongLearning, #Upskilling, and #WorkforceDevelopment so people can adapt and thrive in AI-integrated economies.


Holding Space for Nuance: The Christensen Perspective

In her piece from the Christensen Institute (2024), Ann Christensen warns against overhyping or over-fearing AI. She encourages applying disruption theory, asking:

  • What business models support equitable prosperity?

  • What metrics ensure student and teacher progress in AI-powered education?

Theories matter: they help us forecast disruption and design AI ecosystems with foresight.

Part 2 of Become a Good Steward of AI

Drafted in collaboration with ChatGPT, honoring the voices of today's most influential AI thought leaders. Vision, outline, sources, commentary and stewardship by Isabella Johnston, CEO/Founder of Employers 4 Change.


5 Comments



Ingrid Vicuna
Aug 20, 2025

Thank you for sharing this, Isabella. What stood out to me is how essential trust is in building responsible AI systems. Without transparency and safeguarding data, it’s hard for society to fully embrace the benefits of AI.


Best regards,


Ingrid Vicuna


t.hill101
May 22, 2025

Yes, those who are creating and advancing AI are not just building and dealing with the technology; they are tasked with guiding, protecting, and preserving the purity of it. I believe that as long as we can maintain its purity, we can avoid it becoming a sentient being of catastrophic proportions.


Quin Nguyen
May 15, 2025

I really appreciated how this article frames AI not just as a tool, but as a reflection of our values. It’s easy to get caught up in the hype around tech, but this piece brings us back to the core idea: we need to lead with empathy, equity, and long-term thinking, not just rely on the technology completely.



Rocket Butter 2
May 14, 2025

"Without diverse teams behind the technology, outcomes may perpetuate bias" Very true we must have different people working on Ai.
