Session: AI, But Make It Responsible: Building Trustworthy & Inclusive Tech

AI is shaping the future, but who is it really working for?

From biased hiring algorithms to AI-generated content that reinforces stereotypes, we’ve seen firsthand how well-intentioned design choices can cause real, unintended harm. The truth is, AI isn’t neutral: it reflects the biases, assumptions, and blind spots of the people who build it.

In this talk, we’ll dive into why AI trust and ethics aren’t just theoretical concerns—they’re urgent, real-world challenges that every developer, product owner, and decision-maker should care about. We’ll explore:

  • The Hidden Biases in AI – How seemingly small choices can lead to exclusion and harm
  • Why Representation in AI Development Matters – Who gets left out when we don’t prioritize inclusivity?
  • How to Build AI That Works for Everyone – Practical ways to approach AI design with intentionality

This isn’t just a talk about problems—it’s about solutions. Whether you’re working directly with AI tools or just integrating them into your workflow, you’ll walk away with concrete strategies to ensure the technology you build is responsible, inclusive, and truly trustworthy. Because the future of AI isn’t just about better models—it’s about better choices.

Presenters: