SXSW 2023
Will Robots Fight Our Future Conflicts?
Description:
Uncrewed military systems are becoming more sophisticated and widely used. These systems are still largely under human control, but that may not always be the case. It is our responsibility to ensure the next generation of autonomous military systems can be trusted to operate in contested environments. What will conflict look like in a world where military systems can engage without human oversight? How might we develop autonomous systems that minimize risks to bystanders, limit conflict, and can be trusted by the public? How can we program ethical frameworks into these machines to avoid unnecessary conflict and loss of life?
Takeaways
- As humans are increasingly teamed with autonomous military systems, AI decision-making must become more explainable and accountable.
- Responsibility for creating safe, ethical, assured autonomous military systems must be shared between government, industry, and international bodies.
- Autonomous military systems are becoming a reality. We must make every effort to plan for how they will change the nature of conflict and peace.
Speakers
- Cara LaPointe, Co-Director, Johns Hopkins Institute for Assured Autonomy, Johns Hopkins Applied Physics Laboratory
- Jane Pinelis, Chief of AI Assurance, Department of Defense; Johns Hopkins Applied Physics Laboratory
- Dorothy Engelhardt, Director of Unmanned Systems, DASN Ships, US Navy
- Erin Hahn, Group Supervisor, Senior National Security Analyst, Johns Hopkins Applied Physics Laboratory
Organizer
John Courtmanche, Communications Strategist, Johns Hopkins University Applied Physics Laboratory