Through the Looking Glass: Sounds of Data – Part 1

“There is nothing new under the sun.”

Ecclesiastes 1:9

I know this biblical quote has serious implications. But here, I cite it only in the context of what I expected to be true about data. I’ve reached that well-seasoned stage of my data governance career where I continuously run into the same problems and challenges. I still enjoy finding fresh perspectives in presentations, articles, and books about data. Yet, the content is invariably familiar.

That is, until I listened to the December 2022 episode of The Radical AI Podcast, hosted by Jessie ‘Jess’ Smith and Dylan Doyle-Burke. I’ve long been a fan of this thought-provoking podcast and its frank discussions about thorny ethical issues with today’s technology. The title of this episode surprised and captivated me: “Sounds, Sights, Smells, and Senses: Let’s Talk Data with Jordan Wirfs-Brock.”[1] I listened to it several times and followed up with my own conversation with Jordan, an assistant professor at Whitman College whose “research explores how to bring data into our everyday lives as a creative material by developing data representations that are participatory and engage all of our senses, especially sound.”[2] For a musician like me, what could be more intriguing than the idea that sound is data!

As Jordan explained, we process sound as data all the time. We hear the horn of a car and instantly process that information about where the car is before we see it. Or, if you are a New Yorker entering a subway station, it’s the sound of an approaching train that tells you it’s time to run down the stairs as fast as possible. And when your train comes to a stop between stations, your brain translates the sudden silence into the fact you are not going to get to the office on time.

Given my predilection for data governance, it’s not surprising that one of my first thoughts about sound as data was “How do you govern it?” What makes those sounds you hear trustworthy, and what does data quality mean to the human ear? I realized I needed to learn a lot more about this new world of data before I could answer those questions.

Among the topics Jordan discussed in the podcast is sonification. One definition of sonification is “the use of nonspeech audio to convey information. More specifically, sonification is the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation.”[3] This is an intriguing field, and examples include NASA’s sonification project, A Universe of Sound, which turns astronomical data into sound.[4] You can find many more at a site Jordan shared with me.
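Hermann’s definition, the transformation of data relations into perceived relations in sound, can be made concrete in just a few lines of code. The sketch below is my own illustration, not NASA’s method; the data series and the frequency range are invented. It maps each value in a series to a pitch and writes the resulting sine tones to a WAV file, using only Python’s standard library.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # CD-quality sample rate

def value_to_frequency(value, lo, hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value in [lo, hi] to a pitch between two octaves of A."""
    if hi == lo:                      # flat data: use the middle of the range
        return (f_lo + f_hi) / 2
    t = (value - lo) / (hi - lo)
    return f_lo + t * (f_hi - f_lo)

def sine_tone(freq, seconds=0.3, amplitude=0.4):
    """One sine tone as a list of signed 16-bit sample values."""
    n = int(SAMPLE_RATE * seconds)
    return [int(amplitude * 32767 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
            for i in range(n)]

def sonify(data, path="sonification.wav"):
    """Turn a numeric series into a sequence of pitches and save it as a WAV file."""
    lo, hi = min(data), max(data)
    samples = []
    for value in data:
        samples.extend(sine_tone(value_to_frequency(value, lo, hi)))
    with wave.open(path, "wb") as f:
        f.setnchannels(1)             # mono
        f.setsampwidth(2)             # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(struct.pack("<%dh" % len(samples), *samples))

# A made-up series: rising values become rising pitches, falling values falling ones.
sonify([1, 3, 5, 8, 6, 4, 2])
```

Play the resulting file and you hear the shape of the data. Real sonifications like A Universe of Sound use far richer mappings, but the underlying principle is the same.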

When we met, Jordan and I traded favorite podcasts, and one she recommended was Loud Numbers. In each episode, the hosts, Miriam Quick and Duncan Geere, take a data story, sonify it, and play the sonification. The results are fascinating.

In one case, “Tasting Notes,”[5] Miriam and Duncan start with beer tasting scores collected from beer expert Malin Derwinger. Then, they use ten distinctive sounds or musical motifs to signify scores for everything from the scent of fermentation to fizziness to sweetness vs. acidity. The result is ten little musical pieces representing not only the taste of ten beers, but their effect on our senses of smell and sight as well. Based on the comments Malin makes about each beer, the sonifications capture the character of each beer vividly.

In another episode, “The End of the Road,”[6] the story the hosts choose to tell is about the declining insect population, based on data collected over 20 years by a Danish researcher, Anders Pape Møller. Møller counted the number of insects spattered on his windshield over this period, adjusting for factors such as wind speed, rainfall, and the make and model of the car he was driving, to track the shrinking population. Miriam and Duncan use a synth flutter to stand for the number of insects Møller records. Higher sounds depict smaller insects, and lower sounds larger insects. As the population decreases, these sounds become less frequent, eventually falling silent. A synth pad plays a falling melody based on an estimated annual 1.1% decline in the global population of land-based insects. Meanwhile, a bass line recalls the famous Dies Irae chant, used to evoke death and doom by many composers, especially in movie scores. A funeral bell rings out the beginning of each year. The total effect is terrifying.
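To get a feel for the numbers behind “The End of the Road,” here is a back-of-the-envelope sketch of my own, not Miriam and Duncan’s actual mapping. The starting event count is hypothetical; only the 1.1% annual decline and the 20-year span come from the episode. It compounds the decline year over year and derives a shrinking count of sound events, the same logic that makes the synth flutter thin out.

```python
# My own illustration, not the Loud Numbers mapping; START_EVENTS is a
# hypothetical number of "insect" sound events in the first year.

ANNUAL_DECLINE = 0.011   # the estimated 1.1% annual decline cited in the episode
YEARS = 20               # the span of Møller's windshield counts
START_EVENTS = 100       # hypothetical sound events in year one

population = 1.0         # population as a fraction of the year-one level
events_per_year = []
for year in range(YEARS):
    events_per_year.append(round(START_EVENTS * population))
    population *= 1 - ANNUAL_DECLINE

remaining = population   # fraction left after 20 years of compounding decline
print(f"After {YEARS} years, about {remaining:.1%} of the population remains.")
```

Compounded over two decades, a seemingly gentle 1.1% annual decline removes roughly a fifth of the population, which is why the flutter audibly thins out as the piece progresses.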

This is auditory data storytelling, riveting and revealing. It’s hard to imagine any visual portrayal which would better illuminate the data sets described above. Jordan introduced me to a term for this kind of data storytelling, “data visceralization.” Catherine D’Ignazio and Lauren Klein describe data visceralization in their book Data Feminism as “representations of data that the whole body can experience, emotionally as well as physically.”[7]

Sonification is fascinating. Even more so is how the brain reacts to and interprets sound and music as data. For me, this recalls Douglas R. Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid and his thoughts on music and artificial intelligence. In one of the book’s Dialogues, “Prelude…”, Hofstadter has his recurring characters, Achilles, the Tortoise, the Anteater, and the Crab, muse about how to listen to and understand fugues.[8]

A fugue is a complex musical form. Multiple voices take turns playing or singing a short melody, often at the same time, sometimes at different speeds, or even upside down or backwards! The most distinguishing feature of the fugue is the independence of the voices: they each go their own way yet sound harmonious together.[9]

Or, as the Anteater explains:

“Fugues have that interesting property, that each of their voices is a piece of music in itself; and thus, a fugue might be thought of as a collection of several distinct pieces of music, all based on one single theme, and all played simultaneously. And it is up to the listener (or his subconscious) to decide whether it should be perceived as a unit, or as a collection of independent parts, all of which harmonize.”[10]

Hofstadter relates this property of fugues to a major problem of Artificial Intelligence: “How to construct a system which can accept one level of description and produce the other.”[11] Our brains process sounds, especially music, this way all the time. Consider any straightforward popular song. We may not face the fugue listener’s choice between following a single melody and taking in all the voices at once, but there is still the tune, the harmony, and whatever the bass is doing.

I wanted to go deeper into how we process the data conveyed by music. I started listening to another podcast Jordan had recommended, Switched on Pop. The hosts, musicologist Nate Sloan and songwriter Charlie Harding, analyze pop music the same way my music school professors did with classical music. They look at how the composer or songwriter captures the audience’s attention through melody, harmony, instrumentation, and lyrics. The hosts often focus on aspects of the music that we might not be consciously aware of, at least on a casual listen. These may set a mood, or evoke some other song or soundscape, and add to the richness of the experience.

But now my exploration of sound and music as data is too expansive to fit into one column! So, in my next column, I will explain how I discovered a surprising conjunction between two incongruous pieces of music, Taylor Swift’s Midnights album and the great Late Romantic composer Gustav Mahler’s 3rd Symphony:

  • How both share a bass sound palette which paints an evocative picture.
  • How these bass sounds affect the brain.
  • How we can develop the skills to listen to the sound and musical data in different ways, beyond what Hofstadter’s Anteater conceives.

For now, I leave you with one more quote from Ecclesiastes (1:8): “Nor is the ear filled with hearing.”


[1] Wirfs-Brock, J. (guest), December 2022, “Sounds, Sights, Smells, and Senses: Let’s Talk Data with Jordan Wirfs-Brock”. In The Radical AI Podcast. Radical AI LLC. https://www.radicalai.org/data-senses

[2] Ibid.

[3] Hermann, Thomas, “Sonification – A Definition”, sonification.de, https://sonification.de/son/definition/

[4] “A Universe of Sound”, NASA, https://chandra.si.edu/sound/

[5] Geere, D. and Quick, M. (hosts), June 2021, “Tasting Notes”. In Loud Numbers. https://www.loudnumbers.net/podcast

[6] Geere, D. and Quick, M. (hosts), August 2021, “The End of the Road”. In Loud Numbers. https://www.loudnumbers.net/podcast

[7] D’Ignazio, Catherine; Klein, Lauren F. Data Feminism (Strong Ideas) (pp. 84-85). MIT Press. Kindle Edition.

[8] Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid, 1979, Basic Books, pp. 275-284.

[9] One of the Loud Numbers sonifications is a fugue based on Beethoven’s familiar Ode to Joy, in celebration of the “glorious bureaucracy” of the European Union.

[10] Hofstadter, p. 283.

[11] Hofstadter, p. 285.


Randall Gordon

Randall (Randy) Gordon has worked in the financial industry for over twenty years and has spent the last decade in data governance leadership roles. He is passionate about data governance because he believes reliable, trusted data is the foundation of strong decision making, advanced analytics, and innovation. Randy currently is Head of Data Governance at Cross River Bank. Previous employers include Citi, Moody’s, Bank of America, and Merrill Lynch. Randy holds an MS in Management – Financial Services from Rensselaer Polytechnic Institute, and a Bachelor of Music degree from Hartt School of Music, University of Hartford, where he majored in cello performance. In addition to being a columnist for TDAN.com, Randy frequently speaks at industry conferences. The views expressed in Through the Looking Glass are Randy’s own and not those of Cross River Bank.
