SXSW 2025
Experil: The Moral Hazard Beyond Hallucinations
Description:
We have a fundamental fallacy in AI today: that we can simply do better. Hallucinations are indeed improving, but experil remains largely unchecked. Experil is the hazard that is not measured and not defined even in a PERFECT model: the peril external to the model. It isn't a wrong answer; it is the unintended consequences of correct answers.
We will discuss how the relentless pace of AI is creating experil every day. We will examine specific cases and propose new tools to identify, monitor, and mitigate these effects.
Takeaways
- New perspectives on the use of AI models.
- Awareness of errors and unintended consequences even in the best models.
- Examination of statistical frameworks and systems engineering perspectives to improve the application of AI models.
Speakers
- Sean Bauld, Chief Experiment Officer, spxk
- Shree Moorthy, Innovation Catalyst, Solve4M
- Maya Gumennik, Growth Advisor, Stackpoint Ventures
Organizer
Sean Bauld, Chief Experiment Officer, spxk