Ethics in the Time of Artificial Intelligence
Despite the enormous potential for societal good that may result from the development of artificial intelligence, there exists an equally frightening risk of failure, abuse, and bias. Can scientists, ethicists, and policy makers ensure that our future eldercare robots and self-driving cars actually reduce, rather than increase, injuries and fatalities? Can we build powerful, creative AI systems without ushering in a new era of auto-generated fake news, images, and videos? Can we create brain-machine interfaces without risking the privacy of our very thoughts? This panel will draw on the knowledge of a diverse group of AI experts in robotics, neuroscience, and safety in machine learning to address these critical questions about the future of AI.
- Researchers are building remarkable AI systems but find it challenging to prove that those systems will behave safely and correctly.
- There is disagreement about best practices for preventing abuse of AI; we have already witnessed downsides of both closed and open models of research.
- To succeed, ethics and safety must be foundational principles of AI research and development, not an afterthought.
- Scott Niekum, Assistant Professor, UT Austin, Department of Computer Science
- Alex Huth, Assistant Professor, UT Austin, Department of Neuroscience
- Brenna Argall, Associate Professor, Northwestern University, Department of Electrical Engineering
- Elizabeth Lopatto, Deputy Editor, The Verge