Poetry in Motion: Sound Culture and Data Mining
Poetry, stories, and speeches aren't just what we read or hear - they're how we make meaning and celebrate history. Hundreds of thousands of spoken-word audio files - including poetry readings, Native American stories, and presidential speeches - remain untapped in archives around the world. These digital artifacts hold our oral traditions, and projects like High Performance Sound Technologies for Analysis and Scholarship (HiPSTAS), based at the University of Texas's School of Information, are building high-performance data mining tools that help us visualize sound culture in ways we never imagined. How will these new audio technologies reshape the way we understand our words and ourselves?
- Our understanding of the spoken word has been limited not only by technology, but also by imagination. With the emergence of new data mining techniques, how will we use visualization software to improve access, preserve culture, and create new teaching tools?
- We classify speech by rhythm and tone - and both evolve over time and differ markedly from speaker to speaker. How can we use the systems developed for computational analysis of music to determine how one storyteller's cadence might influence, or be reflected in, another's?
- Imagine a poet who, while digging through a remote archive, unearths a hidden recording of Robert Frost reading "Stopping by Woods on a Snowy Evening." If she were working with digitized files, what technologies would help her recognize this recording as unique and valuable?
- Recordings of Native American Ojibwe elders known as oshkabewis (“one empowered to translate between the spiritual and mundane worlds”) reveal how these teachers use traditional cultural expressions to infuse their English narratives with spiritual elements. Using data mining software, can we identify changes in tone that reveal nuances such as self-identification, historical eras, and myth-making?
- Outstanding poetry compels us to reexamine our feelings and our worldview. Through technology, can we identify cadences and other traits that evoke our emotions?
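The cadence-comparison question above can be made concrete with a toy sketch. One rhythm feature borrowed from music analysis is the inter-onset interval (IOI): the time between consecutive syllable or stress onsets. In a real pipeline the onset times would come from an onset detector run on digitized recordings; here they are invented by hand, and the function names (`cadence_profile`, `cadence_distance`) are hypothetical illustrations, not part of any HiPSTAS tool.

```python
def inter_onset_intervals(onsets):
    """Durations (seconds) between consecutive onsets."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def cadence_profile(onsets):
    """Summarize a speaker's rhythm as (mean IOI, IOI standard deviation)."""
    iois = inter_onset_intervals(onsets)
    mean = sum(iois) / len(iois)
    variance = sum((x - mean) ** 2 for x in iois) / len(iois)
    return mean, variance ** 0.5

def cadence_distance(onsets_a, onsets_b):
    """Euclidean distance between two cadence profiles:
    0.0 means identical rhythm summaries; larger means more dissimilar."""
    pa, pb = cadence_profile(onsets_a), cadence_profile(onsets_b)
    return ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5

# Hypothetical onset times for two readings of the same poem
reader_a = [0.0, 0.5, 1.0, 1.6, 2.1, 2.7]   # steady, measured delivery
reader_b = [0.0, 0.3, 0.9, 1.1, 1.9, 2.2]   # more syncopated delivery

print(cadence_distance(reader_a, reader_b))
```

A two-number profile is, of course, far too coarse for real scholarship - actual systems compare richer features such as pitch contours and timbral spectra - but it shows the basic move: turn each recording into a numeric rhythm signature, then measure distances between signatures to ask whether one storyteller's cadence echoes another's.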
Tanya Clement, University of Texas at Austin