The Xth Sense (2011–present) is a free and open biophysical technology. It captures sounds from the heart, blood and muscles and uses them to integrate the human body with a digital interactive system for sound and video production. It was originally created in 2011 to investigate exploratory applications of biological sounds, namely muscle sounds, in musical performance and responsive milieux. A year later, in 2012, it was named the “world’s most innovative new musical instrument” and awarded first prize in the Margaret Guthman New Musical Instrument Competition by the Georgia Tech Center for Music Technology (US). Today, the Xth Sense is used by a steadily growing community of creatives, ranging from performing artists and musicians to researchers in physiotherapy and prosthetics, and universities and students in diverse fields.
The XS software and the hardware documentation are freely downloadable online. Tutorials on how to build an Xth Sense instrument, and a blog documenting the research, can be viewed online at http://res.marcodonnarumma.com. Visit the Users Forum to ask questions, show personal projects made with the Xth Sense, and learn what others are doing with the instrument. Our community forum is kindly hosted by Create Digital Music.
The Xth Sense DIY kit is also available for pre-order. The kit consists of a low-cost pack that enables anyone to build, hack and extend the XS wearable sensor.
PLEASE NOTE: While new kits are being designed for purchase worldwide from common resellers, the current version of the kit can be pre-ordered. Please write here if you wish to be added to the waiting list. Be aware that, due to the volume of requests, a reply to your email may unfortunately take some time.
Below you can view a live recording of Ominous, one of the latest music performances for the Xth Sense, recorded live at ICT & Art Connect at Watermans Art Centre, London, 2014.
“Everything we do is music.”
The central principle underpinning the Xth Sense (XS) is not to “interface” the human body with an interactive system, but rather to approach the human body as an actual and complete instrument in itself. Augmented musical instruments and physical computing techniques are generally based on the relation user>controller>system: the performer interacts with a control interface (a physical controller or sensor system) and modifies the results and/or rules of a computing system. This approach can confine, and perhaps overly direct, the kinetic expression of a performer, leaving less room for their physical energy and non-verbal communication. Moreover, since the sonic outcome of such performances is often digitally synthesised, the overall performance can lack “liveness”. The XS transcends the paradigm of the user interface by capturing sonic matter and control data directly from the performer’s body. There is no apparent mediation between body movements and music, because the raw sonic material originates within the fibres of the body, and the sound manipulations are driven by the vibrations of the performer’s muscle tissue.
The XS fosters a new and authentic interaction between humans and machines. By enabling a computer to sense and interact with the sounds produced by muscle tissue, the XS approaches the biological body as a means for computational artistry. During a performance, muscle movements and blood flow produce subcutaneous mechanical oscillations, which are nothing but low-frequency sound waves (mechanomyographic, or MMG, signals). Two microphone sensors capture the sonic matter created by the performer’s limbs and send it to a computer, which develops an understanding of the performer’s kinetic behaviour by *listening* to the friction of her flesh. Specific gestures, force levels and patterns are identified in real time; according to this information, the computer algorithmically manipulates the sound of the flesh and diffuses it through a variety of multi-channel sound systems. The neural and biological signals that drive the performer’s actions become analogous expressive matter, for they emerge as a tangible sound experience.
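The actual XS analysis framework is built in Pure Data, but the kind of processing described above can be illustrated with a minimal Python sketch. Everything here is hypothetical (function names, frame sizes and thresholds are illustrative, not taken from the XS code): it extracts an RMS energy envelope from an MMG-like signal and detects contraction onsets as upward threshold crossings.

```python
import numpy as np

def rms_envelope(signal, frame_size=256, hop=128):
    """Sliding-window RMS envelope of a raw MMG signal."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def detect_contractions(envelope, threshold=0.1):
    """Indices where the envelope crosses the threshold upward,
    i.e. the onsets of muscle contractions."""
    above = envelope > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Synthetic example: 1 s of faint noise with two "contraction" bursts.
# MMG energy sits mostly at low frequencies, so a modest rate suffices.
rng = np.random.default_rng(0)
sr = 4000
t = np.arange(sr) / sr
sig = 0.02 * rng.standard_normal(sr)
sig[800:1600] += 0.5 * np.sin(2 * np.pi * 40 * t[800:1600])   # strong burst
sig[2800:3400] += 0.3 * np.sin(2 * np.pi * 30 * t[2800:3400])  # weaker burst

env = rms_envelope(sig)
print(detect_contractions(env))  # frame indices of the two onsets
```

In the real instrument this role is played by a Pure Data patch running with very low latency, and the detected features drive the live sound processing rather than a print statement.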
The XS can be played as a traditional musical instrument, i.e. analog sounds can be produced and modified by adequately exciting the “chords”, that is, by contracting the muscles; but it can also be used as a gestural controller to drive audio synthesis or sample processing, and both modes can be used simultaneously. The most interesting performance feature of such a system is the possibility of expressively controlling a multi-layered processing of the MMG audio signal simply by exerting different amounts of kinetic energy. For instance, stronger and wider gestures could be analysed and mapped so as to generate sharp, higher resonating frequencies coupled with a very short reverb time, whereas weaker and more confined gestures could be deployed to produce gentle, lower resonances with a longer reverb time.
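The energy-to-processing mapping described above can be sketched as a simple function. This is a hypothetical Python illustration (the XS itself implements its mappings in Pure Data), with made-up parameter names and ranges: stronger gestures map to brighter resonances and shorter reverb, weaker gestures to the opposite.

```python
def map_gesture_to_processing(energy, e_min=0.0, e_max=1.0):
    """Map normalised muscle energy to illustrative DSP parameters:
    stronger gestures -> higher resonant frequency, shorter reverb;
    weaker gestures  -> lower resonance, longer reverb."""
    # Normalise and clamp the measured energy to [0, 1].
    x = min(max((energy - e_min) / (e_max - e_min), 0.0), 1.0)
    resonance_hz = 200.0 + x * (4000.0 - 200.0)  # 200 Hz .. 4 kHz
    reverb_time_s = 6.0 - x * (6.0 - 0.3)        # 6 s .. 0.3 s
    return resonance_hz, reverb_time_s

# A strong contraction yields a bright, dry sound...
print(map_gesture_to_processing(0.9))
# ...while a subtle movement gives a soft, washy one.
print(map_gesture_to_processing(0.1))
```

A linear mapping is the simplest possible choice; in practice a performer might prefer an exponential curve so that small gestures retain fine control at low energies.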
The form and colour of the sonic outcome is continuously shaped in real time with very low latency (measured at 2.5 ms), so the interaction between the perceived sonic force and the spatiality of the gesture is neat, transparent and fully expressive. From the exclusive real-time processing of muscle sounds, through the resampling of pre-recorded sounds, to the audio manipulation of traditional musical instruments, the XS is the first musical instrument of its kind to offer such flexibility at a very low cost and with a free and open technology.
Support and awards
The work was developed at the SLE, Sound Lab Edinburgh – the audio research group at The University of Edinburgh, and was kindly supported by the Edinburgh Hacklab and Dorkbot ALBA. The project was finalized during an Artistic Development Residency at Inspace, Edinburgh. Inspace kindly sponsored the work by providing technical and logistical support, and organizing a public vernissage for the official launch of the project within the artistic research program “Non-Bio Boom”.
The XS technology was awarded the first prize at the Margaret Guthman Musical Instrument Competition (Georgia Tech Center for Music Technology, US, 2012) as the “world’s most innovative new musical instrument”.
The Scottish Arts Council, Creative Scotland, awarded a grant in support of my participation in the Korean Electro Acoustic Community 2011 conference in Seoul, South Korea. The research was endowed twice with a PRE travel grant by the University of Edinburgh.
The use of open source technologies is an integral aspect of the research. The biosensing wearable device was designed and implemented by Marco Donnarumma, with the support of Andrea Donnarumma and Marianna Cozzolino. The Pure Data-based framework for real-time analysis and processing of biological sounds was designed and coded by the author on a Linux machine, with inspiring advice from Martin Parker, Sean Williams, Owen Green, Jaime Oliver, and Andy Farnell.
Hypo Chrysos is the most recent performance work that makes use of the XS. The piece was premiered in December 2011 during the Madatac Festival at CaixaForum, Madrid.
Since its inception in March 2011, the first piece for the XS, titled “Music for Flesh II” (MFII), has toured the USA, South Korea, Mexico, Norway, the UK, Ireland, Italy, Germany and Portugal, and has been presented at several major academic conferences, among which NIME, New Interfaces for Musical Expression (USA); ICMC, the International Computer Music Conference (UK); the Linux Audio Conference (IRL); and the 4th Pure Data Convention (GER).
In May 2011 the system was employed as the central technology in the project Raw/Roar, a two-week artistic residency that involved a team of five dancers and three composers directed by the author. The residency focused on the creation of an intermedia dance piece for enhanced bodies, which was premiered at Dansehallerne, DK. The project was commissioned by the Danish National School of Theatre and Contemporary Dance and supported by The Danish Arts Council and Augustinus Fonden.
In March 2011 the author was commissioned to undertake a new work development residency at Inspace, UK. During the residency the XS was deployed in the implementation of Non-Bio Boom: a Musicircus, a biosensing, participatory sound environment for eight audio channels and multiple users.
Pictures courtesy of Chris Scott.