
Becoming a Good Steward of AI: How Our Use of AI Can Ripple Around the World for Good - Part II

Responsible Design, Ethical Principles & The Path Forward

Defining Ethical vs. Responsible AI

Sigma AI (2024) distinguishes the two this way: ethical AI is values-based, while responsible AI is execution-based. To truly be responsible stewards, organizations must:

  • Implement ethical guardrails in design

  • Use diverse, unbiased data

  • Include human oversight at ALL stages

Bias is not just a data issue; it's a people issue. Inclusion, context, and culture matter in training AI systems.
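
To make "human oversight at ALL stages" concrete, here is a minimal sketch of a human-in-the-loop gate: the model proposes a decision, and anything below a confidence threshold is routed to a person. All names and the threshold are illustrative assumptions, not any vendor's API.

```python
# Minimal human-in-the-loop gate: the model proposes, a person disposes.
# All names and the threshold are illustrative assumptions, not a real API.

from dataclasses import dataclass

HUMAN_REVIEW_THRESHOLD = 0.90  # model confidence below this routes to a person

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def score_application(features: dict) -> tuple[str, float]:
    """Stand-in for a real model call; returns (label, confidence)."""
    return "approve", 0.72  # a real system would run inference here

def human_review(features: dict, proposed_label: str) -> str:
    """Stand-in for a review queue; a person confirms or overrides."""
    return proposed_label  # in practice this blocks on a reviewer's judgment

def decide(features: dict) -> Decision:
    label, confidence = score_application(features)
    if confidence < HUMAN_REVIEW_THRESHOLD:
        # Low confidence: escalate to human oversight instead of auto-deciding.
        label = human_review(features, label)
        return Decision(label, confidence, decided_by="human")
    return Decision(label, confidence, decided_by="model")

print(decide({"income": 52000}))  # routed to a human because 0.72 < 0.90
```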


The 13 Principles for Responsible AI

The Harvard Business Review (2024) outlines a set of 13 principles designed to create human-centered, transparent, and safe AI systems. Among the most impactful:

  • Conversational Transparency: Ensure users know when they're interacting with AI, what it's capable of, and what it's not.

  • Data Dignity: Allow users to control how their data is collected, used, and shared.

  • Human Oversight: Keep a human in the loop for critical decisions, especially in high-risk domains.

  • Aligned Interests: Design AI systems that align with human values and well-being—not just company metrics.

These principles serve as essential building blocks for trustworthy AI adoption.
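
As one small illustration of the conversational-transparency principle, the sketch below prepends an explicit AI disclosure to the first reply in a session and states a capability limit up front. The message wording and function names are my assumptions, not drawn from HBR.

```python
# Illustrative conversational-transparency wrapper: the first reply in a
# session opens with an explicit AI disclosure and a stated capability limit.

AI_DISCLOSURE = (
    "You are chatting with an AI assistant. It can answer general questions, "
    "but it cannot give legal or medical advice."
)

def wrap_reply(model_reply: str, first_turn: bool) -> str:
    """Prepend the disclosure on the first turn of a conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{model_reply}"
    return model_reply

print(wrap_reply("Happy to help with your question.", first_turn=True))
```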


Global Guardrails: UNESCO’s Framework

UNESCO's Recommendation on AI Ethics (2024) urges nations to ground AI in human rights and sustainability. Key principles include:

  • Proportionality

  • Explainability

  • Bias and safety monitoring

This global standard challenges both private and public sectors to adopt AI frameworks that support equity, trust, and transparency.


DASCA’s Four Pillars of Responsible AI

The Data Science Council of America (2024) proposes a hands-on model for companies:

  • Transparency

  • Accountability

  • Fairness

  • Societal Welfare

Documenting an ethical audit trail across the AI lifecycle promotes accountability and sustainable innovation.
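
Here is a hedged sketch of what one entry in such an audit trail could look like: a JSON line recording the lifecycle stage, model version, a hash of the input, the outcome, and the accountable human reviewer. The schema is my own illustration, not a DASCA-prescribed format.

```python
# Sketch of one ethical audit-trail record, written as a JSON line at each
# lifecycle stage. The schema is my own illustration, not a DASCA format.

import datetime
import hashlib
import json

def audit_entry(stage: str, model_version: str, payload: dict,
                outcome: str, reviewer: str | None = None) -> str:
    """Build one audit record: who/what/when, plus a hash of the input."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "stage": stage,                      # e.g. "training" or "inference"
        "model_version": model_version,      # transparency: which model ran
        "input_hash": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest(),
        "outcome": outcome,                  # fairness reviews can query these
        "human_reviewer": reviewer,          # accountability: who signed off
    }
    return json.dumps(record)

with open("audit.log", "a") as log:
    log.write(audit_entry("inference", "credit-model-1.3",
                          {"income": 52000}, "approve", reviewer="j.doe") + "\n")
```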


What About Entry-Level Humans?

Sam Altman (2025), CEO of OpenAI, predicts that AI will "gradually" replace many software engineering tasks. This isn't doom and gloom; it's a shift in what new grads and self-taught professionals need to learn. My take: it will be vital to know HOW to code, READ code, and CORRECT code, because humans need hands-on coding skills to oversee AI-driven no-code and low-code work (see the sketch after the quote below).

"The obvious tactical thing is just get really good at using AI tools... this is the new version of learning to code."

Where Entry-Level Opportunities Are Emerging:
  • AI operations roles (QA, testing, oversight)

  • AI ethics and compliance

  • Prompt engineering and AI tooling

  • Low-code/no-code orchestration

Skills like critical thinking, ethical foresight, collaboration, and AI fluency will define the next generation of leaders and technologists. To protect ethics in coding, I believe we need to create a certification around responsible AI development practices.


Final Word: Humans Still Shape the Ripple

The ripple effect of responsible AI isn't theoretical. It's happening now, through the tools we build, the decisions we make, and the people we include (or exclude). As Geoffrey Hinton cautioned:

"AI trained by good people will have a bias towards good; AI trained by bad people... will have a bias towards bad."

We shape the future by how we design, deploy, and distribute the benefits of AI. The ripple starts with us.


Be sure to read Part I here. Contact Isabella on LinkedIn.


Drafted in collaboration with ChatGPT, honoring the voices of today's most influential AI thought leaders. Commentary and stewardship by Isabella Johnston, Employers 4 Change.


4 Comments


t.hill101
a day ago

Human guidance and oversight are definitely needed in the advancement of AI technology. Morals and ethics should be incorporated in this process. My concern with this advancement is: if someone who doesn't value morals and ethics creates AI, will these governing principles matter to them?


econtreras8
3 days ago

I never really thought about it like this, but now that I have, I think making AI share human values is one of the most important things we can do to avoid huge problems as AI gets even more advanced. After all, as advanced as AI is, it's still in its baby years.


This was a very interesting read. I had no idea there were ethical and responsible standards set for the use of AI. I'm grateful for the guardrails; it's important that we use this tool respectfully.


AI reflects the values of its creators. How we design, deploy, and distribute AI determines whether its future serves humanity.
