AI isn't always explainable. But does that matter? AI can have a "black box" effect: often you can't retrace its steps to understand how a prediction or decision was made. This makes AI implementation difficult, because it demands a high level of trust, and sometimes a leap of faith, in the algorithm. Algorithms can be built to improve explainability, but often at the expense of accuracy, which you may not want to sacrifice. That leaves you with an important decision: do you prioritize accuracy or explainability? Or is there a way to achieve both? Learn how algorithm explainability can affect deployment, as well as the near-term and long-term tradeoffs for your business and culture.
Other Resources / Information
- The audience will better understand the tradeoff between accuracy and explainability in AI, a major problem and topic in the space right now.
- The audience will understand what it takes to deploy AI at scale, across the enterprise.
- The audience will leave understanding how to build algorithms that are both accurate and explainable.
- Steve Meier, Director of Growth, KUNGFU.AI
- Ron Green, Co-Founder & Chief Technology Officer, KUNGFU.AI
- Connor Bibby, Marketing Associate, KUNGFU.AI