NPR’s All Things Considered had an interesting piece last week about the work of Condiment Junkie, a British ‘creative agency specialising in sensory branding’. The firm, which specializes in sound design, has researched whether (and more importantly how) people can tell the difference between hot and cold beverages being poured: knowledge used to make a beer sound colder or, in the case of Twinings, to make the tea sound piping hot in their commercials.
In their research they found that 96% of participants could tell the difference. NPR’s own informal web poll (now closed) found that 80% correctly identified the ‘cold’ audio and 90% the ‘hot’ audio.
The follow-up piece digs briefly into what exactly makes the sounds so identifiable.
Cold water is more viscous, or sticky, than hot water. That’s what makes that high pitched ringing, and it’s what tells your brain ‘This water is cold!’ before you even take a sip.
(Photo Credit – Oh Baby It’s Cold Outside by Daniel Novta under Creative Commons Attribution License)
Fans of Steve Reich will enjoy this HTML5 visualization of ‘Piano Phase’, in which two differently colored sets of dots represent the two players. It is another excellent demonstration of the Web Audio API (along with the HTML5 canvas tag). The piece is one of the most recognizable of the minimalist movement: two pianists play the same short figure at the same time, with the second pianist gradually changing tempo so that the two players drift out of phase. The gradual shift in tempo reveals interesting patterns created by the interplay of the two melodic figures. You can read the score here (Scribd).
The site was created by Alexander Chen, one of the creative directors at Google Creative Lab in New York (and the brains behind the famous Les Paul Google Doodle).
This site is based on the first section from Steve Reich’s 1967 piece Piano Phase. Two pianists repeat the same twelve note sequence, but one gradually speeds up. Here, the musical patterns are visualized by drawing two lines, one following each pianist.
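The phasing process itself is simple arithmetic: one player repeats the same figure slightly faster, so the offset between the two grows linearly and wraps around the length of the figure. A minimal sketch of that idea (the 1% tempo difference here is an assumed illustrative value, not Reich's instruction):

```python
SEQUENCE_LEN = 12   # notes in the repeated figure
TEMPO_A = 1.0       # notes per time unit, player 1
TEMPO_B = 1.01      # player 2 plays 1% faster (assumed value)

def phase_offset(t):
    """Offset, in notes, between the two players at time t."""
    return ((TEMPO_B - TEMPO_A) * t) % SEQUENCE_LEN

# Player 2 is exactly one note ahead after 1 / (TEMPO_B - TEMPO_A)
# time units, and the two players realign after a full cycle of
# SEQUENCE_LEN / (TEMPO_B - TEMPO_A) time units.
one_note_shift = 1 / (TEMPO_B - TEMPO_A)
full_cycle = SEQUENCE_LEN / (TEMPO_B - TEMPO_A)
```

The interesting patterns emerge at the intermediate offsets, where the two copies of the figure interlock in ways neither player is playing alone.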
The Human Harp is a project that uses the movement of a performer, connected to the interface, to create music and ‘play’ the bridge. The interface is a set of modules with strings and sensors that attach to the performer. The sensors capture the rate, length and angle of each string pull, and those movements are translated through Max/MSP into synthesized sounds via granular synthesis. The inspiration came to artist Di Mainstone while looking out over the Brooklyn Bridge and thinking how much it looked – and, through the hum of the cables in the wind, sounded – like a harp.
“As I listened to the hum of the steel suspension cables, the chatter of visitors and the musical ‘clonks’ of their footsteps along the bridge’s wooden walkway, I wondered if these sounds could be recorded, remixed and replayed through a collaborative digital interface? Mirroring the steel suspension cables of the bridge, I decided that this clip-on device could be harp-like, with retractable strings that physically attach the user or Movician’s body to the bridge, literally turning them into a human harp.”
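The project’s actual Max/MSP patch isn’t published, but the core idea of granular synthesis – building a texture from many short, enveloped slices of a source sound – can be sketched in a few lines. A minimal NumPy illustration (the function name and all parameter values are assumptions for the sketch, not the Human Harp’s settings):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def granular(source, grain_ms=50, density=200, duration=2.0, seed=0):
    """Naive granular synthesis: overlap many short, enveloped
    grains drawn from random positions in the source signal."""
    rng = np.random.default_rng(seed)
    grain_len = int(SR * grain_ms / 1000)
    env = np.hanning(grain_len)          # smooth each grain's edges
    out = np.zeros(int(SR * duration))
    for _ in range(int(density * duration)):
        src_start = rng.integers(0, len(source) - grain_len)
        out_start = rng.integers(0, len(out) - grain_len)
        grain = source[src_start:src_start + grain_len] * env
        out[out_start:out_start + grain_len] += grain
    return out / np.max(np.abs(out))     # normalize to [-1, 1]

# e.g. granulate one second of a 220 Hz sine "string" tone
t = np.arange(SR) / SR
cloud = granular(np.sin(2 * np.pi * 220 * t))
```

In the installation, sensor data (pull rate, length, angle) would drive parameters like grain density and playback position in real time; here they are fixed for simplicity.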
Working with ferrets – whose auditory system is a lot like ours – a team of researchers at the University of Maryland has shown how a form of noise reduction happens in the brain: speech and other dynamic signals get boosted while background noise gets dampened, leading to better clarity for the relevant signals.
I’ve been enjoying the videos from ASAP Science on YouTube recently. These are the same guys who created the ‘How Old Are Your Ears?’ video, and they do a great job of explaining relatively complicated topics in simple and entertaining ways. Below is a video that explains some of the audio illusions from this post in their usual style.