Existing in today’s world increasingly requires AI literacy.
For their safety, their careers, and for larger human aspirations such as freedom and flourishing, people increasingly need to navigate the AI systems and other advanced technologies that pervade their lives.
We must move beyond ascribing the achievements of AI to magic, and instead provide transparency into, and education about, AI systems.
In our research and design workshops, we’ve learned how we can help teams increase AI literacy by taking a broader view on explanations. This broad view involves:
* improving explanations at the moment AI makes a decision or inference
* adding explanations throughout user journeys
* explaining the impacts of AI beyond tech products: in user manuals, journalism, and K-12 curricula.
Additional Supporting Materials
- Explainability, providing human-understandable reasons and context for decisions made by an AI system, is a key aspect of responsible AI development
- We must design experiences that build AI literacy and help people understand the consequences and impacts of AI
- A broad view of explainability, which takes us well beyond today’s in-the-moment explanations, is the best path for improving the UX of AI products
- Patrick Gage Kelley, Research, Google