How do you ethically program an autonomous car? What is the best way to train a robot surgeon? Can machines be taught to exhibit aesthetic sensibilities? These are just some of the questions that the AI team at Cambridge Consultants might find themselves grappling with over the course of a normal day.
Founded in 1960, the group now works on over 400 projects every year, spanning sectors such as medical, industrial and defence, and partnering with high-profile clients such as Ocado, BT and Northrop Grumman Park Air Systems.
The group is currently working on a variety of projects, with a particular focus on autonomous vehicles and the medical applications of AI. Its ongoing commercial work is fiercely confidential, but last week Techworld was invited to the company's headquarters in the Cambridge Science Park on the outskirts of the city to learn what the team has been up to. Here's what we saw.
Can AI classify music?
The first project demoed was released just last year but, given the frenzied pace of the world of AI, is already viewed as relatively old. It was created to compare how accurately different types of AI programme could categorise music by genre.
On one side was a deep learning programme trained on vast databases of different types of music; on the other was an older, algorithmic programme.
"In parallel, we have old school, hand-coded, normal algorithm development - the kind of stuff we still pride ourselves on and our customers do," said Monty Barlow, head of AI at Cambridge Consultants.
Instead of learning from musical examples, this programme was fed multiple rules for determining genre, such as: 'syncopation indicates jazz'. At test, pieces neither programme had heard before were played - in the demo Techworld witnessed, one of the team played piano.
As the player segued from classical into ragtime and then jazz, bars jumping on a graph indicated the AI programme’s best guess at which musical genre it was hearing. Experiments demonstrated that of the two programmes, the deep learning competitor was superior at accurately identifying the musical genre.
This is down in part to the deep learning programme's ability to classify stimuli in different contexts, meaning it's broadly applicable to other classes of phenomena, for example, identifying and classifying language.
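The contrast between the two approaches can be made concrete with a deliberately tiny sketch - everything here (the features, the thresholds, the examples) is invented for illustration, and a simple nearest-neighbour lookup stands in for the deep learning side:

```python
# Toy illustration (not Cambridge Consultants' code): a hand-coded rule
# versus a classifier that generalises from labelled examples.
# Features are invented: (syncopation, tempo), each in [0, 1].

def rule_based(features):
    """Hand-coded rule of the 'syncopation indicates jazz' variety."""
    syncopation, tempo = features
    if syncopation > 0.6:
        return "jazz"
    return "classical" if tempo < 0.5 else "ragtime"

def train_nearest_neighbour(examples):
    """'Learned' classifier: memorise labelled examples and predict the
    label of the closest one (a crude stand-in for deep learning)."""
    def predict(features):
        def dist(example):
            feats, _label = example
            return sum((a - b) ** 2 for a, b in zip(feats, features))
        return min(examples, key=dist)[1]
    return predict

examples = [((0.9, 0.7), "jazz"), ((0.1, 0.3), "classical"), ((0.5, 0.8), "ragtime")]
predict = train_nearest_neighbour(examples)

# An unheard piece: heavily syncopated, moderate tempo.
print(predict((0.85, 0.6)))     # → jazz (closest labelled example)
print(rule_based((0.85, 0.6)))  # → jazz (the rule fires directly)
```

The learned classifier needs no rules at all, only labelled data - which is also why, as noted above, the same machinery transfers to other classification problems such as language.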
Can AI play Pac-Man?
In 2016 we witnessed the trouncing of world champion Lee Sedol by Google's AlphaGo computer programme in the ancient board game Go, where some 10^172 possible moves are available. So it may not come as a surprise that Cambridge Consultants were able to develop a deep learning programme that could successfully learn how to play the humble Pac-Man.
What’s interesting is how it did so, and in what ways this might be applicable to other industries - namely autonomous transport.
In this case, the programme was given no guidance at first, meaning that in the early training stages the AI-controlled avatar simply sat still, without realising that moving would help it succeed in the game. From there, it slowly began shifting from side to side, gathering more and more information about the game and honing its performance.
"Slowly it figured out that the white dots - they're good. And it would run around but it didn't understand that it had to avoid ghosts. The next level was essentially understanding that avoiding ghosts is good, and it actually stays that way for quite a long time," said Dominic Kelly, head of AI research at Cambridge Consultants, pointing out that there were a few notable 'plateaus' in the programme's performance before it reached another breakthrough stage of understanding in the game.
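The trial-and-error learning Kelly describes can be sketched in miniature. The demo's actual method isn't specified, so this uses tabular Q-learning - the simplest reinforcement learning setup, with a lookup table where a deep network would sit - in a one-dimensional "corridor" with a single pellet at the far end:

```python
import random

# Toy reinforcement learning sketch (an assumption, not the demo's code):
# an agent in a 5-cell corridor starts with no guidance and discovers,
# purely through trial and error, that the pellet at the far end pays off.

random.seed(0)
N = 5                  # corridor cells 0..4; pellet at cell 4
ACTIONS = (-1, 0, +1)  # left, stay, right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma = 0.5, 0.9

for episode in range(2000):
    s = 0
    for _ in range(30):
        a = random.choice(ACTIONS)       # explore at random
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0  # reward only at the pellet
        # Q-learning update: nudge the estimate towards the reward
        # plus the discounted value of the best follow-up action.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if r:
            break

# The learned greedy policy heads right from every cell - towards the pellet.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)
```

Early in training the Q-table is all zeros - the agent has no reason to prefer moving at all, mirroring the avatar that "simply sat still" - and the values only take shape once random exploration stumbles on the reward.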
In the demo, the programme projected lines down the routes that it was considering, so you could compare in real-time the programme's 'thought process' and try to determine why it ended up selecting one particular route over another.
This capability could also be applied to analysing the simulated decisions that autonomous vehicles make in response to their environment. "Can we freeze this [decision]?" said Kelly. "For an autonomous car, can we get a simulator that would include what it was planning, and can we try to understand what it learned that would lead it to make a bad decision?"
Can AI paint better than the old masters?
Another project demoed could have wide-ranging applications in the creative industries, and prove particularly useful to graphic designers.
The programme works on an electronic sketch pad, where the user enters an outline before allowing the AI to intelligently 'fill in' the picture. "What the system is doing is taking the sketches and turning them into a completed piece of artwork. It's taking a rough outline and filling in the gaps for you, so you don't have to," said the demonstrator.
The programme was fed over 8,000 examples - a drop in the ocean compared with most deep learning datasets - drawn from masterpieces spanning the past six centuries. It then worked through these outlines repeatedly, generating millions of example sketches over the course of 14 hours.
"This is a whole series of neural networks put together. It's based on generative adversarial techniques, and what we're doing here is converting one domain of images into another. We're converting from a simple, low-level data type - the outline of a sketch - and from it creating much more information."
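The adversarial idea behind those networks - a generator trying to fool a discriminator, which in turn learns to tell real data from generated - can be shown at its absolute smallest. This toy (invented for illustration; the real system works on images, not single numbers) pits a one-parameter generator against a logistic discriminator:

```python
import math

# Minimal adversarial-training toy: the "data" is the single value 3.0,
# the generator produces one number g, and a logistic discriminator tries
# to tell real from generated. Scaled up to images, the same game drives
# the sketch-to-artwork system described above.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

real = 3.0
g = 0.0          # the generator's current output
w, b = 0.1, 0.0  # discriminator parameters: D(x) = sigmoid(w*x + b)
lr_d, lr_g = 0.05, 0.05

for step in range(5000):
    # Discriminator step: raise D(real), lower D(g).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * g + b)
    w -= lr_d * ((d_real - 1.0) * real + d_fake * g)
    b -= lr_d * ((d_real - 1.0) + d_fake)
    # Generator step: move g so the discriminator rates it as more 'real'.
    d_fake = sigmoid(w * g + b)
    g += lr_g * (1.0 - d_fake) * w

print(round(g, 2))  # settles close to the real value 3.0
```

Once the generator's output is indistinguishable from the data, the discriminator can no longer separate the two - the equilibrium the full-scale networks are also chasing.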
The obvious application for this system is in visual design. "This might be the future of computer-aided design. It might be that you sketch the outline of your new concept car and are given a fully photorealistic picture right off the bat."
Can AI see better than humans?
The most recent project showcased - earmarked for a formal launch at the GTC conference in Munich next month - focuses on human vision, specifically vision of a scene that is distorted by, for example, smoke or rippled glass.
What humans do to create a clear mental picture in these cases is to draw on memory to fill in the gaps. “We wanted technology to do the same and go that little bit further - can it do better than humans?”
For this project, distorted images, as well as their non-distorted counterparts, were fed into a vast neural net over and over again. Eventually, the programme reached a stage where it could produce the 'perfect' image from seeing the distorted one alone.
At test, it is shown images it has never seen before - for example, correctly removing the distortion to reconstruct a clean image of a chameleon.
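The paired-training idea - fit a model on distorted/clean pairs, then apply it to distorted images it has never seen - can be sketched in a few lines. Everything here is an invented stand-in: the "images" are short lists of pixel values, the "distortion" is a simple dim-and-shift, and a per-pixel linear fit replaces the vast neural net:

```python
import random

# Toy sketch of training on distorted/clean pairs (assumed, not the
# actual system): fit a model mapping distorted input back to clean
# output, then restore an unseen distorted image.

random.seed(1)

def distort(image):
    """Stand-in distortion: dim and shift every pixel."""
    return [0.5 * p + 0.2 for p in image]

# Build a training set of (distorted, clean) pairs.
clean_images = [[random.random() for _ in range(4)] for _ in range(50)]
pairs = [(distort(img), img) for img in clean_images]

# Least-squares fit of clean ≈ m * distorted + k over every pixel.
xs = [p for distorted, _ in pairs for p in distorted]
ys = [p for _, clean in pairs for p in clean]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
k = my - m * mx

def restore(distorted):
    return [m * p + k for p in distorted]

# A distorted image the model has never seen is restored almost exactly.
unseen = [0.9, 0.1, 0.4, 0.7]
print([round(p, 2) for p in restore(distort(unseen))])  # → [0.9, 0.1, 0.4, 0.7]
```

Because the toy distortion is perfectly invertible, the linear fit recovers the clean image exactly; real smoke or rippled glass destroys information, which is why the full system needs a deep network to fill in the gaps from memory-like learned structure.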
There are several real-world applications of this technology that the team foresee. “Imagine you're a firefighter walking into a burning building, there’s smoke everywhere. This technology can help crisp up the image. Maybe you won’t be able to see every fleck, or detail, or spot, or line; but it will improve the image.”
It could also be useful miles below sea level, in submarines where vision is distorted at depth, and in any other case where distorted vision is a problem.
All four of the projects demonstrate how deep learning techniques are enabling AI programmes to become increasingly skilled in areas we once considered the sole preserve of humans. The work being done in Cambridge is certainly making AI more human, but it is still some distance away from a 'general AI', where all of these human-like abilities could be knit together in one system. That being said, the question is no longer whether computers can create beautiful art or music, but whether we want them to.