I’ve been getting excited about the possibilities to be found by combining music and motion detection. I just changed my mind.
I’ve always thought the theremin was a pretty cool idea: a musical instrument that you don’t touch, but control by waving your hands in the air. (For those not in the know, one hand controls the pitch and the other the volume.) I won’t go into detail on the theremin – there’s plenty of information on the web, and a very active musicians’ community behind it. And yes, they’re available for purchase if your tastes run in that direction.
There are a few limitations of the concept as implemented in the theremin, though. For one, the timbre (the sound) of the instrument is fixed – if you want a “theremin sound”, it can do that, but the only way to modify the sound is to feed it into external devices. This was an issue Leon Theremin himself sought to resolve; he invented a “theremin cello” that included some capability to modify the instrument’s timbre. (As a side note, Theremin’s non-musical inventions are perhaps even more interesting than his musical ones. The Wikipedia article on him is a great place to start. [As a side note to the side note, consider the accidental discovery of “The Thing” alongside yesterday’s discussion of The Great Phenol Plot as an accidental success of counter-intelligence work.]) The modern synthesizer is a descendant of the theremin: Robert Moog built a number of theremins in high school, and sold theremin kits; he credits the knowledge he gained as directly leading to the creation of the synthesizer that bears his name. But the theremin cello never caught on the way the theremin did, and a more flexible gesture-controlled instrument remained elusive.
Another limitation of the theremin is that playing it requires very specific, very precise movements. Taken on its own terms, that’s reasonable – certainly most instruments require very precise, specific movements to play. But many people, again including Theremin himself, have been interested in the possibilities of allowing instruments to be controlled in a less finicky fashion. Theremin created the terpsitone, a theremin-like instrument controlled by the movements of a dancer’s body, as well as what Wikipedia describes as “performance locations that could automatically react to dancers’ movements with varied patterns of sound and light.” – which sound to me like early precursors of discos, or at least precursors to modern “sound and light” extravaganzas.
At this point, let’s fast-forward to the present day – or rather, to a day not too far in the past: 4 November 2010. On that date, Microsoft released the Kinect accessory for the Xbox 360 game console. The Kinect is an input device for the Xbox 360 (and later for computers) that allows the user to control the console with full-body gestures. Needless to say, it didn’t take long before musically inclined hackers saw the potential. (I’m greatly simplifying things here – there have been a large number of efforts to integrate music and motion – see, for instance, the MIT Hyperinstrument/Opera of the Future group [parenthetically, the Guitar Hero rhythm games were created by former MIT students who had been part of the Hyperinstrument group].) What was notable about the Kinect was the low price of the hardware; it lowered some of the barriers to entry and meant that experimenters no longer had to build everything from scratch. That hardware standardization led to the creation of a set of open software frameworks for integrating gesture and music, which let creators and performers concentrate on the performance instead of the software.
The Kinect works well for large movements and groups of performers, but it has come in for some criticism for the amount of lag it introduces between the performer’s movement and the triggering of the desired effect. The gesture-controlled music culture is looking ahead to the imminent release of the Leap Motion Controller, which promises not only to reduce lag but also to allow for smaller movements, on the scale of a single finger gesture – shades of the theremin! I presume that a combination of the two devices will be the tool of choice in many cases.
While all this has been going on, there’s been a parallel effort in the realm of musical instrument control via cell phones – one example being the Crossfader – Move & Mix app, which uses an iOS device’s accelerometer to do live mixing of two tracks.
It’s in this latter realm that the dark underbelly of the intersection of music and motion is found. Consider the evil genius of Calvin Harris, as explained by evolver.fm: in order to drive sales of his recent album “18 Months”, Harris released it as an app that will play only while the phone is moving. In other words, if you don’t dance, you don’t listen.
Now, consider today’s concert scene with the mandatory encore, the revving up of the audience with ritualistic phrases such as “Are you ready to rock?”, the use of Autotune and “backing” tracks in live performances to ensure the sound is the same on stage as in the studio, and the rise of the non-apology.
How long will it be before performers begin integrating audience motion sensors into the sound and light systems so that the “full concert experience” is only available to an audience that’s sufficiently “into it”? How long after that will it be before a performer blames a bad review on the audience, saying that in effect it’s their fault the show sucked because they didn’t “help out” enough?
So yes, I’ve changed my mind about the possibilities inherent in the marriage of music and motion detection. With apologies to Emma Goldman, if this particular revolution requires dancing, I’ll be on the other side of the barricades.