SXSW 2025

Experil: The Moral Hazard Beyond Hallucinations

Description:

AI today rests on a fundamental fallacy: that we can simply keep doing better. Hallucinations are indeed improving, but experil remains largely unchecked. Experil is the hazard not measured or not defined even in a perfect model: the peril external to the model. It isn't a wrong answer; it is the unintended consequence of a correct answer.

We will discuss how the inherent pace of AI development is creating experil every day. We will examine specific cases and propose new tools to identify, monitor, and mitigate these effects.


Related Media


Takeaways

  1. New perspectives on the use of AI models.
  2. Awareness of errors and unintended consequences even in the best models.
  3. Examination of statistical frameworks and systems-engineering perspectives to improve the application of AI models.

Speakers


Organizer

Sean Bauld, Chief Experiment Officer, spxk


Meta Information:

  • Event: SXSW
  • Format: Panel
  • Track: Artificial Intelligence
  • Level: Beginner

