Esteemed scientist and Microsoft Research lead Chris Bishop has launched a set of free machine learning Application Programming Interfaces (APIs) - including one that can tell if a person is happy or sad.
Bishop, who is laboratory director at Microsoft Research Cambridge, announced that developers and businesses can get their hands on tools based on Microsoft products to bring AI capabilities like speech recognition, vision, and language understanding into their apps.
Machine learning, rooted in academia and boosted by the likes of Microsoft decades ago, is having a comeback as advertising and social networks see the potential in recognising users and personalising experiences with products.
Bishop, a renowned AI researcher, said that machine intelligence could rival that of humans within 50 years, despite the fact that research has so far unlocked only around two percent of its potential.
“I believe it has the power to be the most powerful technology ever created... As a society we face enormous challenges and AI could help solve these in the future,” he added during Microsoft's Future Decoded event in London this morning.
Now startups and businesses without the budgets or skills to develop their own machine learning can tap into the APIs and start training the system to recognise traits from photographs of human faces. The tool will then identify those emotions in new pictures it is fed - akin to how Facebook or Google Photos 'auto-tags' images.
The emotion tool can be used to create systems that recognise eight emotional states in human faces - anger, contempt, disgust, fear, happiness, neutrality, sadness and surprise.
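The API returns per-face scores for each of the eight states as JSON. A minimal sketch of how a developer might pick out the dominant emotion from such a response (the sample values below are illustrative, not real API output):

```python
import json

# Illustrative example of the kind of JSON an emotion endpoint might return:
# one entry per detected face, with a score (0..1) per emotional state.
sample_response = """
[
  {
    "faceRectangle": {"left": 68, "top": 97, "width": 64, "height": 64},
    "scores": {
      "anger": 0.01, "contempt": 0.01, "disgust": 0.01, "fear": 0.01,
      "happiness": 0.92, "neutral": 0.02, "sadness": 0.01, "surprise": 0.01
    }
  }
]
"""

def dominant_emotions(response_text):
    """Return the highest-scoring emotion label for each detected face."""
    faces = json.loads(response_text)
    return [max(face["scores"], key=face["scores"].get) for face in faces]

print(dominant_emotions(sample_response))  # ['happiness']
```

A marketer's app could run this over a feed of shopper photos and tally the results to gauge reactions.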
It’s hoped that developers will use these to create apps that let marketers gauge people’s reaction to shop windows, films, or even food.
The open API follows the likes of Hailo, Addison Lee and Hive, who have adopted the "I'll scratch your back" approach of opening free tools and data to the developer community in the hope of gaining innovative product ideas and insight into usage.
Microsoft released the first set of Microsoft Project Oxford tools last spring, drawing interest from well-known Fortune 500 companies and startups, according to the software company.
The emotion tool is available to developers as a public beta beginning today - a limited free trial.
The beta also includes a spell-check tool, which will recognise slang words as well as contextual errors - like the difference between “four” and “for” - thanks to contextual machine learning.
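To see why context matters for a pair like “four” and “for”, here is a toy illustration (not Microsoft's actual method) of choosing between confusable words by how often each follows the preceding word in a tiny corpus of counts:

```python
# Toy bigram counts: how often the second word followed the first in some
# (hypothetical) corpus. A real system would learn these from large text data.
BIGRAM_COUNTS = {
    ("waiting", "for"): 120, ("waiting", "four"): 1,
    ("number", "four"): 80, ("number", "for"): 2,
}

def pick_word(prev_word, candidates):
    """Choose the candidate seen most often after prev_word."""
    return max(candidates, key=lambda w: BIGRAM_COUNTS.get((prev_word, w), 0))

print(pick_word("waiting", ["for", "four"]))  # for
print(pick_word("number", ["for", "four"]))   # four
```

The hypothetical counts here stand in for the statistical context a trained model would bring to bear.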
Microsoft will release public beta versions of several other new tools by the end of the year, including a video tool that automatically edits footage by tracking faces, detecting motion and stabilising shaky shots, as seen in Microsoft Hyperlapse.
Speaker recognition will also be available for free by the end of the year, allowing apps to identify individuals by the intricacies of their voices. This could assist with security measures, Microsoft said.
It will also update its face APIs and add a custom speech-recognition tool for noisy public places - like a shop floor.
Developers can find the APIs at the Microsoft Project Oxford website.