There is a quiet revolution underway in audio processing - quiet only in the sense that the developments are mainly aimed at eliminating troublesome noise and echoes. To find out more, we talked to a few of the companies leading the charge.
It may seem counter-intuitive, but as audio headsets get more and more clever, their users are unlikely to hear the improvements. That's because the work is mostly aimed at improving the outbound sound quality via techniques such as noise cancellation - although headset users should see benefits in areas such as voice dialling or speech recognition.
To make those improvements, modern headsets increasingly use multiple microphones, plus powerful digital signal processors (DSPs) to re-work the sounds thus collected.
For example, Blue Ant's Z9 Bluetooth headset has two microphones, plus a DSP which uses the signals to measure the distance to the sound source and thereby triangulate on the mouth, says Taisen Maddern, the company's CEO.
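The core idea behind a two-microphone pick-up like this is that sound from the mouth reaches the two capsules at slightly different times, and that time difference constrains where the source is. A minimal sketch of the principle - not BlueAnt's actual algorithm, and with synthetic signals - estimates the time difference of arrival (TDOA) from the peak of the cross-correlation between the two channels:

```python
import numpy as np

def estimate_tdoa(sig, ref, sample_rate):
    """Estimate how many seconds `sig` lags behind `ref` by locating
    the peak of their cross-correlation (positive => sig arrives later)."""
    corr = np.correlate(sig, ref, mode="full")
    lag = np.argmax(corr) - (len(ref) - 1)  # lag in whole samples
    return lag / sample_rate

# Simulate one sound reaching the second microphone 5 samples later.
rate = 8000
delay = 5
rng = np.random.default_rng(0)
src = rng.standard_normal(1024)
mic_a = src
mic_b = np.concatenate([np.zeros(delay), src[:-delay]])

tdoa = estimate_tdoa(mic_b, mic_a, rate)  # ~0.625 ms
```

A real headset works at much finer, sub-sample resolution and tracks the estimate over time, but the geometry is the same: a known microphone spacing plus a measured delay narrows down the direction of the talker's mouth.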
Maddern notes that as devices such as this are essentially software-driven, they can also be upgraded as new and better algorithms come along. He points for example to the emerging wideband speech profile for Bluetooth, which will allow headsets to support 3G's broader audio spectrum.
He adds that as developers look to make Bluetooth easier to use, voice control is an obvious possibility - but it's one that relies upon a clear audio signal.
"We have a headset coming that will be the first voice-command headset that talks you through the process of pairing," he says. "You can use voice commands to request help, check the battery level, make calls."
It's not just audio that can be collected and made use of, adds Alex Affely, the CTO of Aliph, the company behind the Jawbone headset.
As its name suggests, the Jawbone not only picks up exterior and spoken sound, it also has a third microphone which "taps into your jaw vibration," says Affely.
Wireless has given audio technology a real boost, he reckons. Some of the work behind Jawbone dates back to the early 1990s and the First Gulf War, when it was realised that the wired headsets used by soldiers needed better noise cancellation - but he says it wasn't until Bluetooth came into focus a few years ago that the technology really took off.
As well as noise and echo cancellation, developers are also taking advantage of research into areas such as pattern recognition, says Jennifer Stagnaro, the marketing VP at Audience, which develops voice processing technology for integration into mobile phones.
"We have reverse-engineered the human auditory system, using an optimised DSP chip and software. Most past technologies could cancel stationary noise; we can cancel non-stationary noise," she claims.
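The stationary-noise approach Stagnaro is comparing against is classic spectral subtraction: estimate the noise's magnitude spectrum during a speech pause, then subtract it from each frame. It works only because the noise spectrum stays put. A toy single-frame sketch (illustrative only, not Audience's method) removes a steady hum from a synthetic "voice" tone:

```python
import numpy as np

def spectral_subtract(frame, noise_mag, floor=0.01):
    """Remove an estimated stationary noise spectrum from one frame.
    The spectral floor keeps magnitudes from going negative."""
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(frame))

# Synthetic frame: a "voice" tone plus a steady mains-style hum.
n = 512
t = np.arange(n)
voice = np.sin(2 * np.pi * 40 * t / n)
hum = 0.5 * np.sin(2 * np.pi * 100 * t / n)
noisy = voice + hum

# Estimate the hum's spectrum from a noise-only stretch (a speech pause).
noise_mag = np.abs(np.fft.rfft(hum))

cleaned = spectral_subtract(noisy, noise_mag)
```

Real systems apply this per overlapping frame with smoothed noise estimates. The technique fails on non-stationary noise - a passing car or a second talker - precisely because the noise spectrum measured during the pause no longer matches the noise during speech, which is the gap Audience claims to close.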
Audience's voice processor includes algorithms based on research into auditory scene analysis - a complex technique for taking a mixture of sounds and sorting it into groups, each of which probably shares a single source. Other useful techniques for signal selectivity include beamforming and blind signal separation, she says.
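Beamforming, the simplest of these, delays each microphone's signal so that sound from a chosen direction lines up and adds coherently, while off-axis sound is averaged away. A minimal two-microphone delay-and-sum sketch - integer-sample delays and synthetic signals, not any vendor's implementation:

```python
import numpy as np

def delay_and_sum(mics, delays):
    """Steer an array by delaying each channel so the target direction
    lines up, then averaging. Integer-sample delays; np.roll wraps at
    the edges, which is acceptable for a short synthetic example."""
    aligned = [np.roll(sig, -d) for sig, d in zip(mics, delays)]
    return np.mean(aligned, axis=0)

# The target reaches mic 1 three samples after mic 0; an interfering
# source arrives with the opposite delay pattern.
rng = np.random.default_rng(2)
n = 1024
target = rng.standard_normal(n)
interf = rng.standard_normal(n)

mic0 = target + np.roll(interf, 3)
mic1 = np.roll(target, 3) + interf

out = delay_and_sum([mic0, mic1], delays=[0, 3])
```

Steered this way, the target adds coherently while the interferer's two misaligned copies average down by roughly 3 dB; larger arrays and adaptive weights push the suppression much further. Blind signal separation takes the opposite tack, separating sources from their statistics without knowing the geometry at all.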
One advantage of the DSP-and-algorithm approach is that it can be position-neutral, so it can be used in handsets as well as headsets, she argues.
"The challenge in the handset is bigger," Stagnaro says. "With a headset you can do [bone] conduction pick-up. Ours still uses two microphones, but is position-neutral so it works in speakerphone mode, for example."
But whether the technology's in the headset or the handset, it still only cleans up the outgoing audio, notes Alex Affely.
He adds, provocatively: "What about incoming sound? That's one possibility for the future, it's one of the interesting things out there."