Society has its fair share of problems, and fairness is one of them. We live in an unequal world of haves and have-nots; in fact, one of the biggest determinants of an individual’s success in the US is the ZIP code where they were born. We all know life isn't fair, but society does bear the burden of helping level the playing field where possible, with education, job creation, and opportunities presented to those who need them the most. Enter Artificial Intelligence, bringing with it immense potential to analyze data in ways hitherto impractical. But can machines achieve something that humans can't: fairness? We don't have to look very far to see that machine learning algorithms rely on historical patterns to make future predictions, but what if the data itself is biased? Can we overcome this?
Other Resources / Information
- AI systems don’t have an inherent fairness check when it comes to decisions that affect people’s lives
- Data used to train AI models today is often flawed, with biases embedded in it
- We can design systems that can learn to identify fairness problems and circumvent them
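One concrete example of such a check is demographic parity: comparing how often a model produces a favorable outcome for different groups. The sketch below is a hypothetical illustration (not from the proposal itself), assuming binary 0/1 predictions and two group labels, "A" and "B":

```python
# Hypothetical sketch: demographic parity difference, one simple
# fairness check a system could apply to its own predictions.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups:      list of group labels ("A" or "B"), one per prediction
    """
    rates = {}
    for g in ("A", "B"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return abs(rates["A"] - rates["B"])

# A model approving 75% of group A but only 25% of group B:
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value near 0 suggests the two groups receive favorable outcomes at similar rates; a large gap flags a potential fairness problem worth investigating in the training data.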
- Adnan Khaleel, Technology Strategist, Dell EMC
- Jay Boisseau, AI & HPC Technology Strategist, Dell EMC