biotomusic

From Biological Signals to Music:
The History of the Sonification of Fetal Heart Rate Variability [1]

The Body Electric

Carlo Matteucci recognized, during experiments on pigeon hearts in 1843, that the heart's activity is based on electrical processes.

In 1882, the physiologist Augustus Waller derived the first ECG from his dog Jimmy by dipping its four paws into a conductive silver chloride solution. In 1887, with the help of a capillary electrometer, he recorded the heart's currents for the first time.

The technology was substantially improved in 1903 by Willem Einthoven, who developed the ECG into a useful diagnostic procedure in hospitals.

In 1906, Cremer [2] reported the first successful fetal electrocardiogram (ECG), and in 1908 Hofbauer and Weiss [3] recorded the first phonocardiogram of the fetal heart sounds.

Dr (later Sir) Thomas Lewis [4] corresponded with the Dutch physiologist Willem Einthoven from 1906 concerning Einthoven's invention of electrocardiography, and Lewis pioneered its use in clinical settings. Accordingly, Lewis is considered the "father of clinical cardiac electrophysiology".

In 1913, Lewis recorded fetal heart sounds together with the mother’s electrocardiogram at University College Hospital, London. A twin-string carrier designed that year by W. H. Apthorpe enabled him to obtain these two simultaneous recordings with only one Einthoven galvanometer. Lewis used the carbon microphone system that Einthoven had designed when he first recorded phonocardiograms in 1907.


Listening to the Body Electric 

For twenty-five centuries, Western knowledge has tried to look upon the world. It has failed to understand that the world is not for the beholding. It is for hearing. It is not legible, but audible. - Jacques Attali

Necessity is not always the mother of invention; sometimes inventions themselves mother new inventions. To this end, art and technology have always been strange bedfellows.

Although the stethoscope (invented in 1816 by René Laennec) revealed the acoustic complexity of a hitherto silent world, it wasn’t until the invention of the microphone, the loudspeaker and, later, the wireless valve that the amplification of sounds became possible and this new world became generally accessible.

On October 9, 1876, Alexander Graham Bell and Thomas A. Watson talked by telephone to each other over a two-mile wire stretched between Cambridge and Boston, the first two-way telephone conversation held over outdoor wires.

Bell patented his first electric loudspeaker (capable of reproducing intelligible speech) as part of his telephone in 1876. The Bell Telephone Company was created in 1877, and in 1879, the Bell Company acquired Edison's patents for the carbon microphone from Western Union.  


1876 Bell Centennial Telephone

Already in 1878, Ludimar Hermann [5] published a paper about connecting muscle cells to the newly invented telephone. Not only could the current variations now be heard, but the pitch of the generated sound indicated the frequency of the current. This was the first sonification of bioelectric signals for auditory display.

Amplification of sounds was made possible by the wireless valve. In 1921, Major G. O. Squier of the United States Army constructed a heart transmitter which made the heart sounds audible in a large room and, in addition, transmitted them by wireless. In 1923, Abbott of Purdue University devised a telephone transmitter tuned so that the adult heart sounds could be heard on a loudspeaker in a large room.

Human brainwaves (EEG) were first measured in 1924 by Hans Berger [7]. His results were verified in 1934 by Adrian and Matthews [8], who also attempted to listen to the brainwave signals via an amplified speaker.

The Electrical Amplification of Fetal Heart Sounds [6]

It was not long after the wireless valve was used to amplify the adult heart sounds that an apparatus was devised to magnify the sound of the fetal heart. Falls and Rockwood in the United States seem to have been the first to describe, in 1923, a makeshift amplifier which enabled the fetal heart to be readily counted several feet from the loudspeaker.

In 1930, Hyman developed a machine called a fetal phonocardiograph, which gave tracings as photographic records of the fetal heart sounds after they had been converted into electromagnetic waves by radio amplification. Hyman found some gross irregularities of the fetal heart rhythm (fetal heart rate variability).

Pommerenke and Bishop of the Department of Radiology of the University of Rochester described an amplifier in 1938 which obtained sufficient amplification of the fetal heart sounds to make gramophone records.

De Costa described a photostethoscope in 1938 which gave good amplification of fetal heart sounds and which carried a neon lamp, so that the sounds were translated into flickers of light.

In 1960, Dr. Lee Salk [9] published the results of research indicating that the sound of a mother's heartbeat has a calming effect on a newborn infant.

In a later study of 287 mothers, he found that both right-handed and left-handed women have a strong tendency to cradle their infants near their hearts. Salk theorized that mothers who hold their children near their hearts provide an auditory link that quiets the infants and enhances their growth.

Dr. Salk tested his theory by broadcasting recordings of a normal heartbeat in a nursery. Babies responded by becoming more tranquil than those in a quiet environment. Prolonged exposure during the first four days of life resulted in increased weight gain, his studies showed. By contrast, babies exposed to the sound of a racing heartbeat appeared agitated.

"From the most primitive tribal drumbeats to the symphonies of Mozart and Beethoven," he wrote in a report to the World Federation of Mental Health, "there is a startling similarity to the rhythm of the human heart."

The sound of a healthy heart:

The Sonification of Bioelectric Signals

The use of electrical signals emanating from nerve and muscle (bioelectric signals) to create music came into being in the late 1960's. 

The first instance of the intentional use of bioelectric signals to generate music did not occur until 1965, when Alvin Lucier [10], who had begun working with physicist Edmond Dewan, composed a piece of music using brainwaves as the sole generative source. In that piece, EEG electrodes attached to the performer's scalp detect bursts of alpha waves generated when the performer achieves a meditative, non-visual brain state. These alpha waves are amplified and the resulting electrical signal is used to vibrate percussion instruments distributed around the performance space.



Alvin Lucier: Music For Solo Performer (1965):

In 1966, 10 New York artists worked with 30 engineers and scientists from the world-renowned Bell Telephone Laboratories to create groundbreaking performances that incorporated new technology. 9 Evenings: Theatre and Engineering was the first event in a series of projects that would become known as E.A.T., or Experiments in Art and Technology.


For one of these performances, Grass Field, Alex Hay [11] wanted to pick up body sounds: brain waves, muscle activity and eye movements. Pete Kaminsky, Fred Waldhauer and Cecil Coker built a battery-driven differential amplifier which had a peak gain of 80 dB at low frequencies, from 1/2 Hz to 10 Hz. The whole unit, batteries and all, fit into a 1" x 3" x 5" box, no mean feat in 1966. The signal from the differential amplifier was fed into a voltage-controlled oscillator, then to a transmitter, which sent the sound to the speakers. Electrodes were placed on Hay's head (EEG) and chest (ECG), and all the equipment was attached to a plastic plate fastened on Hay's back. These body sounds were heard through the speakers as Hay carefully laid out 64 numbered pieces of cloth. Here Hay is sitting in front of a television camera; the image of his face is projected on the screen behind him as Robert Rauschenberg picks up numbered cloths.
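The analog chain described here, a differential amplifier driving a voltage-controlled oscillator, is easy to mimic digitally. Below is a minimal Python sketch of the idea; the stand-in body signal, the gain and the audible frequency range are illustrative assumptions, not measurements of the 1966 hardware.

```python
# Digital analogue of the Grass Field chain: a slowly varying "bioelectric"
# voltage drives the frequency of an oscillator (VCO). All parameters here
# are illustrative assumptions.
import numpy as np

SR = 44100                           # audio sample rate (Hz)
t = np.arange(int(SR * 5.0)) / SR    # five seconds of output

# Stand-in for an amplified low-frequency body signal (roughly 0.5-10 Hz).
bio = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 7.0 * t)

# Voltage-controlled oscillator: centre frequency 500 Hz, +/-240 Hz swing.
freq = 500.0 + 300.0 * bio                  # instantaneous frequency (Hz)
phase = 2 * np.pi * np.cumsum(freq) / SR    # integrate frequency to phase
audio = 0.2 * np.sin(phase)                 # buffer ready to write to disk
```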

Alex Hay. Grass Field (1966):

In the late 1960's, Richard Teitelbaum [12], inspired by Lucier's work, used various biological signals, including brain (EEG) and cardiac (ECG) signals, as control sources for electronic synthesizers. In Spacecraft (1967), Teitelbaum used the neuro- and physiological signals of his own body as live (real-time) musical materials, using heartbeat, chest cavity and throat contact microphones as transducers, as well as electrodes for EEG and ECG. The signals picked up by the former were generally transmitted as audio; the latter served mainly as control voltages for the Moog synthesizer. Thus, in addition to the kinds of conscious "musical" gestures input by the others in the quintet, Teitelbaum's channel also carried a loop (or loops) of psychophysical signals from his own autonomic nervous system, modifications of which could be made manually (or automatically) through the Moog, which in turn could also be modified "autonomically".


Richard Teitelbaum: Spacecraft (1967):

Also in the late 1960's, another composer, David Rosenboom [13], began to use EEG signals to generate music. Initially, this took place in 1968-1969 in the laboratory of Les Fehmi, an early biofeedback researcher at the State University of New York at Stony Brook. 

Rosenboom developed an environmental demonstration-participation-performance event entitled Ecology of the Skin in 1970-1971. It involved biofeedback monitoring of brainwaves and heart signals from performers and audience members and their translation into a musical texture, along with synchronous electronic stimulation of visual phosphenes (colored patterns often seen with eyes closed) at cerebral light-show viewing stations for the audience. The electronic setup for this work included the capability of adjusting the degree of brainwave control over sound for each of 10 participants according to a simple statistical measure, the amount of time spent per minute producing alpha waves.


David Rosenboom: Brainwave Music 01:

According to Rosenboom, use was also made of a miniaturized, highly portable, electrocardiogram (ECG) feedback device developed at Rockefeller University by Dr. David Vandercar [14], in which the pitch of a tone triggered by each heartbeat is determined as a result of the inter-beat intervals of the subject. This was the first attempt to sonify heart rate variability. 

Another early experimenter was Manfred Eaton [15], who carried out experiments in music and bioelectric phenomena at the ORCUS Research Center in Kansas City during the 1960s and early 1970s. Eaton described extensive explorations in applying various bioelectrically derived signals, including brain (EEG) and cardiac (ECG) signals, to artistic projects in order to generate complex patterns for music, kinetic arts and television. 

In February of 1971, French composer Pierre Henry [16] gave a concert at the Museum of Modern Art in Paris consisting of an improvised performance of electronic music based on the live manipulation of sound patterns modulated by his own brain wave activity.  The device that made this possible was the “Corticalart,” invented by Roger Lafosse, a researcher and musician who in 1965 founded the Sigma festival of contemporary stage and visual art in the French city of Bordeaux.


 “Attached to the musician’s head, a system of electrodes, comparable to those used in the electroencephalogram, allows the detection of three kinds of electrical signals which convey the characteristic activity of certain zones of the cerebral cortex: alpha waves (states of relaxation, inattention, repose), beta waves (states of alertness, attention, activity, reaction), and “artifacts” caused by the movement of the eyeball.” 

From the album “Mise en musique du Corticalart de Roger Lafosse” (1971):

Nam June Paik’s [17] video A Tribute to John Cage (1973) is an homage to the avant-garde composer John Cage, a major figure in contemporary art and music.


A screenshot from Nam June Paik’s video A Tribute to John Cage, 1976.

John Cage experimenting with a biofeedback device (1976). (Alvin Lucier explains the importance of John Cage during the sequence):

In 1981, the composer/artist/architect Christopher Janney [18] began researching heartbeat monitor systems and modified a wireless telemetry system, equipping it with a custom audio filter which isolated the sound of the heart’s electrical impulses from those of the brain and its surrounding muscles.

In 1982, Janney collaborated with choreographer/dancer Sara Rudner to develop “Heartbeat”, a performance utilizing the customized heart monitor, with the focus on exploring the heart as both a machine for pumping blood and the “seat of the soul.” The result was first performed in 1983 at The Institute of Contemporary Art in Boston.

The dance is a solo piece with a choreographic structure within which improvisation takes place. The dancer wears a wireless device that amplifies and sonifies the natural electrical impulses that stimulate the heart to beat. This forms the basis of the musical score, which is then overlaid with sounds of spoken medical text, jazz scat, and the adagio movement of Samuel Barber’s String Quartet.


Christopher Janney: Heartbeat:


In the late 1980's, two scientists, Benjamin Knapp and Hugh Lusted [19], began working on a human-computer interface called the BioMuse: a complete portable digital signal-processing system designed to provide a real-time interface between the electrical signals of the human body and any computer or MIDI instrument. Introduced onto the market in 1992, the BioMuse was the first commercially available device to sonify biological signals.


In 1995, the ReyLab Heartsongs project [20], which originated from basic research by Ary Goldberger [21] to probe the fractal features common to both music and the complex rhythms of the healthy heart, used actual rhythms of the heart as a template for musical compositions. In biological systems, disease and aging are associated with degradation of these fractal structures and processes. Mapping heart rate time series of healthy and diseased hearts into musical notes can provide a way to begin appreciating the differences in the dynamics of health and disease that can be quantified by sophisticated mathematical calculations.

The Heartsongs project was implemented in 1995 in a hands-on exhibit at the Boston Museum of Science, Music of the Heart [22], which also allowed museum-goers to hold onto bars to record their own electrocardiogram of approximately 25 beats and, in real time, listen to the 'music' the raw data produced.

Music of the Heart exhibit: Raw ECG:

A CD, “Heartsongs: Musical Mappings of the Heartbeat” [22], was also released in 1995. On it, chords and rhythm were added by the composer on top of a melody created from previously recorded and averaged data: “The third step in creating these heartsongs was to convert the time intervals between heartbeats into integers. We used a simple computer program to generate roughly 330 integers per data set. We started with 10,000 recorded heartbeats and then calculated the average of every 300 beats. We averaged the beats to remove very short-term fluctuations caused by movement or breathing.”
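The averaging step quoted above is easy to reproduce. In the Python sketch below, the block size of 300 beats follows the quoted description, while the integer note range and the linear rescaling are illustrative assumptions; the exact mapping used on the CD is not documented here.

```python
# Heartsongs-style preprocessing: average every `block` interbeat intervals,
# then rescale the block means onto an integer (MIDI-style) note range.
import numpy as np

def heartsongs_notes(ibi_ms, block=300, low_note=48, high_note=84):
    n_blocks = len(ibi_ms) // block
    means = ibi_ms[:n_blocks * block].reshape(n_blocks, block).mean(axis=1)
    span = means.max() - means.min()
    scaled = (means - means.min()) / span if span > 0 else np.zeros_like(means)
    return (low_note + scaled * (high_note - low_note)).astype(int)

# Example: 10,000 simulated beats around 800 ms with slow variability.
rng = np.random.default_rng(0)
ibi = 800 + 50 * np.sin(np.linspace(0, 40, 10_000)) + rng.normal(0, 10, 10_000)
print(heartsongs_notes(ibi))    # 33 averaged note numbers
```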

In 1999, Henrik Bettermann [23] applied the compositional rhythm principles of African music to the analysis of cardiac time series. He constructed binary symbolic patterns from the differential 24-h R-R tachogram of healthy subjects on the basis of symbolic dynamics. Together with the African music pattern concept, this allowed a musical interpretation of heart period dynamics, as sketched below.
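A minimal sketch of that symbolization, assuming the common convention that an R-R interval longer than its predecessor (a deceleration) is coded 1 and a shorter one (an acceleration) 0; the word length used to group symbols into rhythmic patterns is likewise an illustrative assumption.

```python
# Binary symbolic dynamics on the differential R-R tachogram.
import numpy as np

def binary_symbols(rr_ms):
    """Sign of successive R-R differences: 1 = deceleration, 0 = acceleration."""
    return (np.diff(rr_ms) > 0).astype(int)

def rhythm_words(symbols, word_len=8):
    """Group the symbol stream into fixed-length binary 'rhythm' patterns."""
    n = len(symbols) // word_len
    return symbols[:n * word_len].reshape(n, word_len)

rr = np.array([812, 830, 825, 840, 838, 851, 845, 860, 870])
print(rhythm_words(binary_symbols(rr)))   # [[1 0 1 0 1 0 1 1]]
```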


Henrik Bettermann: A non-real-time example of a musician’s interpretation of these heart rhythms:

 
Henrik Bettermann: The first melorhythmic interpretation of fetal heart rate dynamics (23 weeks of gestation):

In 2000, Marc Ballora published his doctoral thesis on auditory display and HRV, as well as a poster at ICAD 2000: “Sonification of Heart Rate Variability Data” [24].

In Ballora’s method, heart rate variability (HRV) data sets are saved as separate files and stored as array variables in James McCartney’s SuperCollider software (introduced in 1996). The arrays are iterated simultaneously, with each successive value employed as the source of a musical event. Each interbeat interval is mapped to a pitch, which is then sounded by an oscillator that produces short sine wave sounds ("grains"); a default playback rate of 60 events per second was used. Via an interface, listeners may adjust the relative volume levels among signal processing operations, the playback rate (data points per second) and the region of the file to be played. Thus, users may "zoom" in or out to focus on any dimension(s) of the data.
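The mapping can be paraphrased in a few lines of code. The sketch below renders each interbeat interval as a short, enveloped sine grain at the default rate of 60 events per second; the pitch range and interval bounds are illustrative assumptions, and Ballora's actual implementation ran in SuperCollider, not Python.

```python
# Ballora-style grain synthesis: one short sine grain per interbeat interval,
# pitch derived from the interval (shorter interval -> higher pitch).
import numpy as np

SR = 44100
RATE = 60                       # events (data points) per second
GRAIN = SR // RATE              # samples per grain

def ibi_to_freq(ibi_ms, lo=200.0, hi=1000.0, ibi_lo=500.0, ibi_hi=1200.0):
    x = np.clip((ibi_ms - ibi_lo) / (ibi_hi - ibi_lo), 0.0, 1.0)
    return hi - x * (hi - lo)   # assumed linear interval-to-pitch mapping

def sonify(ibi_series):
    t = np.arange(GRAIN) / SR
    env = np.hanning(GRAIN)     # smooth the edges of each grain
    grains = [env * np.sin(2 * np.pi * ibi_to_freq(ibi) * t)
              for ibi in ibi_series]
    return 0.2 * np.concatenate(grains)

audio = sonify([810, 845, 790, 900, 760])   # five grains, ~83 ms of audio
```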

Marc Ballora: Non-real-time audio example:

In 2001, Erich Berger [25] created A Sophisticated Soirée, a temporary installation space in which sound and visuals are controlled by the heartbeats of visitors, at the Ars Electronica Festival in Linz, Austria.


The 64 participants were each fitted with two disposable stick-on electrodes, which registered the electrical signal of their heartbeat and sent it, via a wireless transmitter unit, to a receiver station. The signals were used to trigger various musical and optical processes directed by computer programs.

Erich Berger: A Sophisticated Soirée:

Berger’s Heart Chamber Orchestra premiered in October 2006 in Trondheim, Norway. A sensor network consists of 12 individual sensors, each fitted onto the body of a musician. A computer receives the heartbeat data; software then analyzes the data and generates, via different algorithms, the real-time musical score for the musicians, the electronic sounds and the computer graphic visualization.

Erich Berger: Heart Chamber Orchestra:

 


Real-time HRV Sonification 

In 2002, Kiyoko Yokoyama [26] used an algorithm to convert heart rate data into real-time pitch and note-interval MIDI data. The effects of real-time HRV audio-biofeedback were thereby analyzed for the first time.
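Yokoyama's actual algorithm is given in [26]; as a rough illustration of the general idea, the sketch below converts each incoming interbeat interval to a MIDI-style note number and duration, with the heart-rate and note ranges chosen arbitrarily.

```python
# Illustrative beat-to-note conversion for real-time HRV audio-biofeedback.
def beat_to_midi(ibi_ms, lo_bpm=50.0, hi_bpm=120.0, lo_note=48, hi_note=84):
    """One heartbeat in, one (note number, duration in seconds) event out."""
    bpm = 60_000.0 / ibi_ms                              # instantaneous rate
    x = min(max((bpm - lo_bpm) / (hi_bpm - lo_bpm), 0.0), 1.0)
    note = round(lo_note + x * (hi_note - lo_note))
    return note, ibi_ms / 1000.0

# A real-time loop would send each event to a MIDI synthesizer as it arrives.
for ibi in (850, 820, 880, 910):
    print(beat_to_midi(ibi))
```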


Kiyoko Yokoyama: real-time HRV audio-biofeedback:

In 2004, Michael Falkner and Dr. Bernd Orzessek started to work with real-time HRV biofeedback as a diagnostic and therapeutic tool. They developed a hardware and software process, called Herzklang, that converted the non-averaged, beat-to-beat time and frequency characteristics of heart rate variability (HRV) into music in real time. In 2006, they published Sonification of Autonomic Rhythms in the Frequency Spectrum of Heart Rate Variability [27].
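Herzklang's own sound mapping is not spelled out here, but the frequency-domain analysis it rests on is standard HRV practice [1]: resample the beat-to-beat series onto an even time grid, estimate its spectrum, and read out the low-frequency (LF, 0.04-0.15 Hz) and high-frequency (HF, 0.15-0.4 Hz) band powers. The Python sketch below shows only that analysis step; how the bands are turned into sound is left as a comment, since it is an assumption.

```python
# Standard HRV spectral analysis: even resampling + Welch band powers.
import numpy as np
from scipy.signal import welch

def band_powers(rr_ms, fs=4.0):
    t = np.cumsum(rr_ms) / 1000.0                  # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)        # even 4 Hz time grid
    rr_even = np.interp(grid, t, rr_ms)            # resampled tachogram
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(grid)))
    lf = (f >= 0.04) & (f < 0.15)
    hf = (f >= 0.15) & (f < 0.40)
    return np.trapz(pxx[lf], f[lf]), np.trapz(pxx[hf], f[hf])

# Example: ~2 minutes of simulated beats with a 0.25 Hz breathing rhythm.
rr = 900 + 40 * np.sin(2 * np.pi * 0.25 * np.arange(150) * 0.9)
print(band_powers(rr))
# A sonification could, for instance, drive one voice's loudness from HF
# (respiratory) power and another's from LF power -- an assumed mapping only.
```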

Falkner & Orzessek: Sonification of the spectral analysis of an ectopic heartbeat:

In 2005, Michael Falkner introduced HeartMusic Therapy to patients at the Paracelsus Clinic in Switzerland.

In HeartMusic Therapy, ECG data are recorded and the heart rate variability (HRV) is calculated. Simply put, HRV is the natural rise and fall of your heart rate in response to your breathing, blood pressure, hormones, stress and even emotions. The greater the rhythmic changes in pulse rate, the healthier the heart and nervous system. HRV is thus reflective and predictive of general health and overall psycho-physiological (mind-body) wellness. Anything that improves your autonomic nervous system’s balance and power, and thus HRV, will also improve immune response and thus your overall health.
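As one concrete reading of "the greater the rhythmic changes in pulse rate", the sketch below computes two standard time-domain HRV measures from the Task Force guidelines [1], SDNN and RMSSD, assuming the input is a series of normal-to-normal (NN) intervals in milliseconds.

```python
# Two standard time-domain HRV measures (Task Force of the ESC/NASPE, 1996).
import numpy as np

def sdnn(nn_ms):
    """Standard deviation of all NN intervals: overall variability."""
    return float(np.std(nn_ms, ddof=1))

def rmssd(nn_ms):
    """Root mean square of successive differences: beat-to-beat variability."""
    return float(np.sqrt(np.mean(np.diff(nn_ms) ** 2)))

nn = np.array([812, 830, 825, 840, 838, 851, 845])
print(sdnn(nn), rmssd(nn))      # higher values indicate greater HRV
```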

By practicing a patient-specific breathing technique under the guidance of a therapist, it is possible to bring the autonomic nervous system into a condition in which it can regulate optimally.

With the HeartMusic software program, these recorded data are then converted into music in real time.

In real time, the heart’s melody reacts to every therapeutic step. This patient-specific, breath-dependent oscillation of the heart rate represents the wave which, when broken down into its individual time and frequency components and further analyzed, forms the basis of the heart’s music to be heard.

If, under the guidance of the therapist, sufficient progress is recognized, the music of the heart is recorded during the exercises and handed to the patient as an audio CD or mp3 file. The patient listens to this personal HeartMusic daily, and the therapeutic process is thus sustained until the following HeartMusic Therapy session. The clearly defined psycho-physiological, therapeutic effects of listening to music are well known; here, this effect is greatly enhanced by the unique source of the music.

HeartMusic Therapy: Healthy HeartMusic:

 

HeartMusic Therapy: Chronic illness:

In 2008, Michael Falkner produced the first non-real-time sonifications of non-invasive, non-averaged, beat-to-beat fetal heart rate variability, and in 2011 the first real-time sonifications, using an extended HeartMusic technology [28].

Michael Falkner: Maternal and fetal heart rate entrainment [29]:

References

[1] Task Force of the European Society of Cardiology and NASPE. Heart rate variability, standards of measurement, physiological interpretation and clinical use. Circulation 1996;93:1043-1065. 

[2] Cremer, M.: Über die direkte Ableitung der Aktionsströme des menschlichen Herzens vom Oesophagus und über das Elektrokardiogramm des Fetus. Münch. Med. Wschr. 53 (1906) 811.

[3] Hofbauer, J., O. Weiss: Photographische Registrierung der foetalen Herztöne. Zbl. Gynaek. 32 (1908) 429.

[4] Hollman, A., Journal of the Royal Society of Medicine, Volume 82, November 1989, 694.

[5] Hermann, L. (1878). Ueber electrophysiologische Verwendung des Telephons. Archiv für die gesamte Physiologie des Menschen und der Tiere 16:504–509.

[6] Gunn & Wood (1952), The Amplification and Recording of Fetal Heart Sounds, Proceedings of the Royal Society of Medicine.

[7] Berger, H., "Über das Elektrenkephalogramm des Menschen", Arch. f. Psychiat., vol. 87, pp. 527-570, 1929.

[8] Adrian, E. and Matthews, B., The Berger Rhythm: Potential Changes from the Occipital Lobes in Man, Brain 57, No. 4, 355-385 (1934).

[9] Salk, L. (1960). The effects of the normal heartbeat sound on the behavior of newborn infant: implications for mental health. World Mental Health, 12, 1-8.

[10] Lucier, A., Music for Solo Performer (1965), for enormously amplified brain waves and percussion, Lovely Music, Ltd. VR 1014, 1982. (http://www.youtube.com/watch?v=bIPU2ynqy2Y)

[11] “Grass Field”, performance presented as part of 9 Evenings: Theatre and Engineering, The 69th Regiment Armory, New York, N.Y., United States, October 13-22, 1966. http://www.fondation-langlois.org/html/e/page.php?NumPage=662

[12] Teitelbaum, R., "In Tune: Some Early Experiments in Biofeedback Music", from Biofeedback and the Arts: Results of Early Experiments, D. Rosenboom, Ed., Aesthetic Research Centre of Canada, Toronto, Canada. (http://inside.bard.edu/teitelbaum/writings/biofeedback.pdf)

[13] Rosenboom, D., ed., Biofeedback and the Arts: Results of Early Experiments, Vancouver: Aesthetic Research Centre of Canada, 1976.

[14] Rosenboom, D., "Homuncular Homophony", in: Rosenboom, D. (ed.), Biofeedback and the Arts: Results of Early Experiments, ARC Publications, Vancouver, 1976.

[15] Eaton, M., Bio-Music: Biological Feedback Experiential Music Systems, Orcus, 1971; republished in 1974 by Something Else Press.

[16] Pierre Henry from the album “Mise en musique du Corticalart de Roger Lafosse” (1971). Online at: 
https://profiles.google.com/106612352451315600458/buzz/d33wsnjz8pA#106612352451315600458/buzz/d33wsnjz8pA

[17] Electronic Arts Intermix: A Tribute to John Cage, Nam June Paik. http://www.eai.org/title.htm?id=2865

[18]  http://janneysound.com/physical-music/heartbeat/

[19] Knapp B. and Lusted H., “A Bioelectric Controller for Computer Music Applications.”, Computer Music Journal, 14(1) pp. 42-47. 1990. 

[20] The Heartsongs project

http://www.bidmc.org/Research/Departments/Medicine/Divisions/InterdisciplinaryMedicineandBiotechnology/ReyLab/Heartsongs.aspx

[21] C-K Peng, S Havlin, HE Stanley and AL Goldberger. Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos  1995;5:82-87.

[22] http://polymer.bu.edu/music/ 

[23] H Bettermann, D Amponsah, D Cysarz and P Van Leeuwen, Musical rhythms in heart period dynamics: a cross-cultural and interdisciplinary approach to cardiac rhythms. Am. J. Physiol. 1999;277:H1762-H1770. (http://www.rhythmen.de/downloads/heartmus.pdf)

[24] Ballora M, Data Analysis through Auditory Display: Applications in Heart Rate Variability. Faculty of Music, McGill University, Montreal, May 2000.

(http://www.personal.psu.edu/meb26/sonification/sonex.html) 

[25] Erich Berger : http://randomseed.org  &  http://90.146.8.18/de/archiv_files/20011/2001_353.pdf

[26] Yokoyama K, Ushida J, Sugiura Y, Mizuno M, Mizuno Y and Takata K, Heart Rate Indication Using Musical Data, IEEE Transactions on Biomedical Engineering 2002;49(7):729-733.

[27] Orzessek B, Falkner M, Sonification of Autonomic Rhythms in the Frequency Spectrum of Heart Rate Variability, Proceedings of the 12th International Conference on Auditory Display, London, UK, June 20-23, 2006 (http://www.dcs.qmul.ac.uk/research/imc/icad2006/proceedings/posters/f7.pdf) 

[28]  Realtime Sonification of Fetal and Maternal Heart Rate: http://herzmusik.ch/heartmusicexamples2.html

[29]  Clayton, M., Sager, R. and Will, U., In time with the music: The concept of entrainment and its significance for ethnomusicology. Available online at:
http://ethnomusicology.osu.edu/EMW/Will/InTimeWithTheMusic.pdf