Here is my interview with sonification artist Scot Gresham Lancaster. He talks about sonifying plasma physics data, tidal data in Venice, and human movement in the city of Marseille.
Here is his IMERA project description:
Imagine researchers monitoring ongoing experiments, or studying systems that generate extremely large amounts of data, while listening to music of their choice that is discernibly modulated by the real-time information flowing through the network. This research would focus on creating and testing a generalized sound-production engine that can take data streams from a complex network and output a re-rendering of familiar music, chosen by the user to agree with their stylistic, cultural, and personal listening habits. The result would be the ability to generate auditory renderings of these continuous flows that take advantage of auditory perception, cultural memory, and autonomic reaction, yet are perceptibly changed by additions and alterations to the recording that are linked directly to the temporally changing data: the flux of the network being examined. This allows the auditory experience to be moulded by the user to compensate for mood, individual taste, and cultural bias.

The work will be to create and test a utilitarian set of dynamic tools that can take the inputs from any given set of nodes in a data network and map them onto a version of some music or sound environment the user chooses, creating a sonic output that has the character of that recording, with instrumental voices that react to the data and recognizable sonic features perturbed by other information flows. In this way the participant can monitor the state of a complex network in relative cultural and stylistic comfort, yet still have access to the nuance and subtlety that is uniquely available to humans via the perception of time across the ten octaves of human auditory perception. The boundaries between style and culture, between “music” and noise, are vague, dynamic, and depend on the listener’s preferences, history, and current mood.
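The core mapping idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not code from the project: the function names (`normalize`, `flux_to_params`) and the parameter ranges are my own assumptions about how a data reading might be turned into subtle modulation parameters for a chosen recording.

```python
# Hypothetical sketch: map a real-time data reading onto modulation
# parameters for a user-chosen piece of music. The specific ranges
# (tempo within +/-4%, a small timbral "brightness" tilt) are chosen
# so the song stays recognizable while the data remains audible.

def normalize(value, lo, hi):
    """Clamp and scale a raw network reading into [0, 1]."""
    if hi <= lo:
        raise ValueError("hi must exceed lo")
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def flux_to_params(reading, baseline=50.0, span=25.0):
    """Turn one data reading into subtle musical modulations.

    A reading at the baseline leaves the music untouched
    (tempo_ratio 1.0, brightness 0.0); readings above or below
    nudge tempo and timbre within perceptible but gentle limits.
    """
    x = normalize(reading, baseline - span, baseline + span)  # 0..1
    return {
        "tempo_ratio": 0.96 + 0.08 * x,   # 0.96 .. 1.04
        "brightness": -0.2 + 0.4 * x,     # timbre tilt, -0.2 .. +0.2
    }
```

In a real engine these parameters would drive an audio renderer; here they simply show how a continuous data flux could be bounded and mapped so that the "character of the recording" is preserved.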
Jacques Bertin’s groundbreaking work on a perceptual basis for visual data representation excluded musical notation, language, and mathematics because those systems were “bound to the temporal linearity”. These tools would be more generally usable as a means of investigation for data that are temporal in nature, using changes in the flow of an arbitrary network to perturb a familiar auditory context: a favourite song, a string quartet, a special raga, etc. The technique depends on spectral analysis with audio feature extraction, and on creating instruments that use characteristics of the physics of virtual instruments and time-domain spectral manipulation to play back sounds that, when perturbed or modified by changes in the data flow, instantly generate clearly perceptible, subtle changes in timbre and resonance, especially in familiar songs and styles of music. Changes and variation in the “beat” or rhythmic structure of this music or sound environment can also be discerned if it is slightly slowed, sped up, or if the distance between beats is even subtly moved. The “feel” of a given music can be perceptibly sensed, particularly if the song or even the style is familiar and cultural expectations are in place. This generalized model offers a new tool for scientists, engineers, and others to more fully use another sense, hearing, to examine and investigate large complex networks, and even systems of networks, within the context of time perception that is unique to listening to music and sound environments.
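The beat-perturbation idea can also be sketched concretely. Again this is an illustrative assumption, not the project’s implementation: `perturb_beats` and the 20 ms bound are hypothetical, chosen because shifts of that size change the “feel” of a familiar groove without making it fall apart.

```python
# Hypothetical sketch of data-driven beat perturbation: nudge the
# inter-beat timing of a familiar rhythm by an amount scaled from
# the data, small enough to be felt rather than consciously counted.

def perturb_beats(beat_times, data_values, max_shift=0.02):
    """Offset each beat time (in seconds) by a data value in [-1, 1],
    scaled by max_shift. With max_shift=0.02 every shift stays under
    20 ms, near the threshold where listeners sense a changed "feel"
    in a known groove. Values outside [-1, 1] are clamped.
    """
    if len(beat_times) != len(data_values):
        raise ValueError("one data value per beat is expected")
    return [t + max_shift * max(-1.0, min(1.0, v))
            for t, v in zip(beat_times, data_values)]
```

For example, a steady half-second pulse fed three data values (neutral, high, low) comes back with its second beat slightly late and its third slightly early, so the data is heard as a change in rhythmic feel rather than as a new melody.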