SXSW 2019
Creative Ways to Solve the Bias Problem in AI
Description:
During the Facebook hearings in Congress, when asked how to solve the hate speech problem, CEO Mark Zuckerberg replied "artificial intelligence (AI)." But before we allow AI to solve our problems, we need to solve its biggest problem first: bias. As AI continues to spread, countless examples of bias have surfaced: from racist facial recognition to the proliferation of sexist language, AI is far from perfect. Before we let the cold, mathematical calculations of AI tackle our most difficult questions, we need to rethink its algorithms, how they are written, and even who writes them. Our expert panel from across the nonprofit, industry, and education sectors will discuss ways we can tackle algorithmic bias.
Takeaways
- Algorithmic bias is a solvable problem, but it is also a reflection of our society.
- If part of the bias problem lies in who writes the code, we also need to ask who should write the solution.
- Some critics claim there is no bias in artificial intelligence, but our panelists, drawn from across the political spectrum, agree it is a serious problem.
Speakers
- Sasha Moss, Federal Affairs Manager & Policy Counsel, R Street Institute
- Sarah Holland, Assembly Cohort, Berkman Klein Center for Internet & Society at Harvard University and MIT Media Lab
- Heather West, Senior Policy Manager, Americas Principal, Mozilla
- Tiffany Li, Attorney and Resident Fellow, Yale Law School’s Information Society Project
Organizer
Sasha Moss, Federal Affairs Manager, R Street Institute