Bias in AI: From Education to FinTech
Panelists will discuss how bias is introduced into AI and machine learning models used in education and finance. Misuse of these models can perpetuate inequality in education across groups, and especially subgroups, and can likewise produce inequality in loan offerings, student debt, medical debt, and other financial outcomes. As automation widens access to financial services, that same automation can carry bias through models into new FinTech applications at financial institutions. Whether the input is credit scores or alternative data, institutional and cultural bias built into existing training data can become embedded in models and legitimized as science. Scientists need to ask questions about implicit bias and fairness to create change.
- Biased business questions, data, and research design used to build AI/ML models lead to biased outcomes and systems.
- Be aware that alternative data, such as social media feeds, can also lead to biased models.
- It is necessary to define and evaluate bias and fairness of outcomes across groups and subgroups.
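One way to make the last takeaway concrete is a group-level fairness check. The sketch below computes the demographic parity difference (the gap in approval rates between groups) on made-up loan-approval data; the group names, outcomes, and metric choice are illustrative assumptions, not anything presented by the panel.

```python
# Illustrative sketch: one simple fairness metric, demographic parity
# difference, evaluated across groups. Data below is invented.

def selection_rate(outcomes):
    """Fraction of positive (e.g. approved) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Largest gap in selection rates across groups; 0.0 means parity."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval outcomes (1 = approved) per group.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 3/8 approved
}

gap = demographic_parity_difference(outcomes)
print(f"Selection-rate gap: {gap:.3f}")  # a large gap flags potential bias
```

Demographic parity is only one of many fairness definitions; evaluating subgroups (intersections of groups) the same way often reveals gaps that group-level averages hide.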
Meredith Butterfield, Principal Data Scientist, Valkyrie Intelligence