At Sony Ericsson's recent launch of several new headsets, one feature was conspicuous by its absence - the boom microphone.
In the past, a boom mattered if you wanted to use a headset in a noisy environment - putting the mic closer to the mouth made it easier to pick up only what the wearer was saying.
The advent of sophisticated, yet tiny and power-frugal, digital signal processors (DSPs) has changed all that.
These can isolate the frequencies that belong to speech and filter out the background noise, so you can still be heard clearly even when speaking from a fast-moving car, say.
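To make the idea concrete, here is a minimal sketch of that kind of speech-band filtering: a crude FFT-mask band-pass that keeps only the classic 300-3400 Hz telephone speech band. The sample rate, band edges and test tones are illustrative assumptions of mine, not anything Sony Ericsson has published, and a real headset DSP would use far more sophisticated (and cheaper-to-compute) filters.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a short demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each reconstructed sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def bandpass_speech(signal, fs, lo=300.0, hi=3400.0):
    """Zero every frequency bin outside the speech band, then resynthesise."""
    n = len(signal)
    X = dft(signal)
    for k in range(n):
        f = k * fs / n
        f = min(f, fs - f)  # fold the mirrored negative-frequency half
        if not (lo <= f <= hi):
            X[k] = 0
    return idft(X)

fs = 8000                        # telephone-quality sample rate (assumed)
n = 400                          # 50 ms of audio
t = [i / fs for i in range(n)]
voice = [math.sin(2 * math.pi * 1000 * ti) for ti in t]   # stand-in for speech
rumble = [math.sin(2 * math.pi * 60 * ti) for ti in t]    # stand-in for engine hum
noisy = [v + r for v, r in zip(voice, rumble)]
cleaned = bandpass_speech(noisy, fs)
```

After filtering, the 60 Hz "engine" component is gone and the in-band 1 kHz tone survives essentially untouched.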
However, while DSPs draw far less power than they used to, they still draw some. So you can expect a digital headset to have a shorter talk-time than an analogue one, though standby time should be the same.
One reason for these developments, according to a Sony Ericsson staffer, is that the headset engineers have been talking to people with loads of expertise in an overlapping area, namely hearing aid developers.
The only remaining snag is that the mic picks up all speech in the vicinity, not only the wearer's voice.
It strikes me that one way around this (which apparently they're not doing yet) would be to use bone conduction.
We can hear sound in two ways: through the eardrum and through the skull. That's why we can hear a tuning fork pressed to the temple, and it's why our own voices sound strange on tape - we normally hear ourselves both ways, but the recording lacks the bone-conducted component.
So you could have a sensor to pick up the wearer's voice coming through the skull, and then use the DSP to keep only the parts of the mic signal that match it, thereby dropping extraneous speech.
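The crudest version of that idea is to use the bone-conduction signal purely as a voice-activity gate: pass the mic audio only while the skull sensor shows energy. The sketch below is my own illustration under that assumption - the threshold, window length and signals are all made up, and a real implementation would match spectral detail between the two signals rather than just switching the mic on and off.

```python
def gate_by_bone_sensor(mic, bone, threshold=0.1, window=80):
    """Pass mic samples only while the bone sensor shows the wearer speaking.

    Short-term energy of the bone-conduction signal acts as an on/off gate;
    samples arriving while the wearer is silent are muted.
    """
    out = []
    for i in range(len(mic)):
        start = max(0, i - window)
        recent = bone[start:i + 1]
        energy = sum(b * b for b in recent) / len(recent)
        out.append(mic[i] if energy > threshold else 0.0)
    return out

# Hypothetical scenario: the wearer speaks for the first half of the clip,
# while a bystander talks the whole time (so the mic hears speech throughout).
mic = [1.0] * 100                # mic picks up speech for the full duration
bone = [1.0] * 50 + [0.0] * 50   # skull sensor active only while the wearer talks
gated = gate_by_bone_sensor(mic, bone, threshold=0.1, window=10)
```

With these settings the gate stays open for the wearer's half and closes shortly after the bone signal stops, muting the stretch where only the bystander is talking.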
There is probably a reason why this isn't done yet - can anyone tell me what it is, please?