SXSW 2019

Creative Ways to Solve the Bias Problem in AI

Description:

During the Facebook hearings in Congress, when asked how he would solve the hate speech problem, CEO Mark Zuckerberg replied "artificial intelligence (AI)." But before we let AI solve our problems, we need to solve its biggest problem first: bias. As AI continues to spread, examples of bias have multiplied: from racist facial recognition to the proliferation of sexist language, AI is far from perfect. Before we let the cold, mathematical calculations of AI tackle our difficult questions, we need to rethink its algorithms, how they are written, and even who writes them. Our expert panel from across the nonprofit, industry, and education sectors will discuss ways we can tackle algorithmic bias.


Takeaways

  1. Algorithmic bias is a solvable problem, but it is also a reflection of our society.
  2. If one cause of bias is who writes the code, then solving the problem starts with deciding who gets to write it.
  3. Some critics say there is no bias in artificial intelligence, but our panelists from across the political spectrum agree there is a serious problem.

Speakers


Organizer

Sasha Moss, Federal Affairs Manager, R Street Institute


Meta Information:

  • Event: SXSW
  • Format: Panel
  • Track: Intelligent Future
  • Level: Intermediate

