Technology companies including Google must consider the “unintended consequences” of investing in AI, warns an eminent machine learning professor and founder of multimillion-dollar startup Dato.
The ethics of machine learning are a hot topic. Google, which this morning announced it had invested an undisclosed sum in a top German AI research centre, will set up an ethics board to monitor its AI efforts. It made the promise after buying Cambridge-based machine learning startup DeepMind last year.
It hasn’t revealed who will sit on the board, but the move is indicative of the public mood: tech figureheads such as Stephen Hawking and Elon Musk have claimed AI could “overtake humans” within the next 100 years, and consumers have wised up to tech companies’ privacy and data collection policies.
There is reason to tread carefully now that machine learning is maturing. The combination of Google’s worldwide data pool (the world’s location history, browsing history, personal details and preferences, for example) and sophisticated algorithms could spell the end of privacy if not properly regulated.
Carlos Guestrin, a leader in machine learning research and Amazon professor of machine learning at the University of Washington, tells Techworld: “For any technology you have to think about the implications of your actions.
“Machine learning is very powerful because it can combine different types of data and churn through far more data than a human could, so there is room for a lot of unintended or unexpected consequences,” he adds.
This could introduce bias into decisions on loans, insurance cover and even job applications, and companies like Google “need to think of the unintended consequences of that,” he says.
Carlos Guestrin is the founder of Dato, formerly known as GraphLab
While care must be taken, Hawking’s warnings are “sensationalist,” Guestrin says. Weapons technology and drones pose a greater, more immediate threat, he claims.
“The thing I worry about is war. Today, drone technology is really changing how war is fought. You can imagine how the technology will change things. Yes, it is a long time before an automated soldier but we do need to consider the implications.”
Google’s bottomless piggy bank means it can spend freely on in-house AI teams and proprietary algorithms. It has already spent £400 million on Cambridge AI startup DeepMind, and today invested in a German research centre with an annual budget of €41 million and 450 scientists on hand.
Cash obstacles aside, everyday enterprises struggle to hire even a single developer thanks to the dearth of talent. But with pressure to improve customer service, fraud detection, security and internet-of-things-inspired manufacturing, machine learning features will become a necessity for every business.
Guestrin’s AI startup, which has received $27 million in funding since it transformed from an open source academic project into a commercial product in 2013, hopes to solve this problem by providing machine learning products such as recommendation engines and sensor-reading technology as a service.
Its customers include Pandora for recommendations, Bosch for appliance predictive maintenance, and PayPal for fraud detection.
As CEO of Dato (formerly GraphLab), an open source project turned commercial product, Guestrin is well placed to comment on the growing list of tech firms embracing the open source community. “If you got free coffee every day, it wouldn’t be great coffee,” he explains.
“I think it’s fine as long as people have a choice when they are being upsold. If it is a useful and valuable project on its own, it isn’t a problem.”
“Facebook’s value is their billion registered users. Even if they open-sourced their entire platform it would be very hard for anyone to compete with them. Talent is so scarce in the general computing area you have to do what you can to attract people. It’s totally fair game.”