Today, Google Cloud chief Diane Greene announced the company’s new push in machine learning and artificial intelligence. There’s now a new group in Greene’s division that will unify some of the disparate teams that had previously been doing machine learning work across Google’s cloud. Two women will take charge of the new team: Fei-Fei Li, who was director of Stanford’s Artificial Intelligence Lab, and Jia Li, who was previously head of research at Snap Inc. As Business Insider notes, Jia Li was one of the minds behind the Snapchat feature that lets you attach emoji to real-world objects in your snaps.
The news came at the top of a slew of announcements about the product roadmap for Google’s cloud services and how the company is expanding its use of machine learning, the technique of training large-scale AI systems to improve themselves over time. The announcements were all aimed at showing that Google’s cloud offers more than rented time on a server: it can also provide enterprise customers with services built on Google’s machine learning algorithms, including easier translation, computer vision, and even hiring.
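To give a sense of what “services built on machine learning algorithms” means in practice, here is a minimal Python sketch of calling Google’s Cloud Translation REST API. The API key, input text, and target language are placeholders, not details from the announcement.

```python
# Minimal sketch: translating text via Google's Cloud Translation REST API.
# The API key and strings below are placeholders, not values from the article.
import requests

API_KEY = "YOUR_API_KEY"  # assumed: an API key for a project with the Translation API enabled
ENDPOINT = "https://translation.googleapis.com/language/translate/v2"

def translate(text, target="de"):
    """Return `text` translated into the `target` language."""
    resp = requests.post(
        ENDPOINT,
        params={"key": API_KEY},
        json={"q": text, "target": target},
    )
    resp.raise_for_status()
    return resp.json()["data"]["translations"][0]["translatedText"]

if __name__ == "__main__":
    print(translate("Machine learning makes translation easier."))
```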
For example, Google is talking up how it’s improving the infrastructure behind Google Cloud. Workloads will be able to run more efficiently thanks to the addition of GPUs alongside the CPUs the platform already uses; graphics processors are especially good at training machine learning systems quickly. Google has also added security layers around those GPUs, something it claims isn’t necessarily common on other cloud platforms. So, Google says, there won’t be any data from a previous customer sitting in a GPU’s caches when the next customer spins it up for their own tasks. The GPUs will be available in 2017, the company says.
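For a concrete picture of why cloud GPUs matter for this kind of work, here is a generic TensorFlow sketch (using the 1.x-era API) that pins a large matrix multiplication, the core operation of most ML training, to a GPU device. It is an illustration under those assumptions, not code from Google’s announcement.

```python
# Sketch: pinning a TensorFlow (1.x-style) computation to a GPU device.
# Nothing here is specific to Google's announcement; it only shows that the
# same graph runs unchanged, just faster, when a GPU is attached.
import tensorflow as tf

# Place a large matrix multiplication on the first GPU, if one is available.
with tf.device("/gpu:0"):
    a = tf.random_normal([2048, 2048])
    b = tf.random_normal([2048, 2048])
    product = tf.matmul(a, b)

# allow_soft_placement falls back to the CPU if the instance has no GPU.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    result = sess.run(product)
    print(result.shape)  # (2048, 2048)
```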
Google is making its AI platform better at image recognition and translation
Google is also unifying its Cloud Vision API so the same system will be able to identify logos, landmarks, labels, faces, and text for optical character recognition, making it simpler to implement. These systems will run on “Tensor Processing Units,” new hardware optimized for Google’s TensorFlow platform. Google had previously unveiled the TPUs; the news today is that it’s cutting the price for “large-scale deployments” by 80 percent.
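To show what a unified request might look like, here is a hedged Python sketch that asks the Cloud Vision API for labels, logos, landmarks, faces, and OCR text in a single call. The API key and image path are placeholders, and the feature names follow the publicly documented request format rather than anything specific to today’s announcement.

```python
# Sketch: one Cloud Vision API request covering several detection types at once.
# API key and image path are placeholders; feature names follow the public
# Cloud Vision request format, not code from Google's announcement.
import base64
import requests

API_KEY = "YOUR_API_KEY"   # assumed: an API key for a project with the Vision API enabled
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def annotate(image_path):
    """Request labels, logos, landmarks, faces, and OCR text for one image."""
    with open(image_path, "rb") as f:
        content = base64.b64encode(f.read()).decode("utf-8")

    body = {
        "requests": [{
            "image": {"content": content},
            "features": [
                {"type": "LABEL_DETECTION"},
                {"type": "LOGO_DETECTION"},
                {"type": "LANDMARK_DETECTION"},
                {"type": "FACE_DETECTION"},
                {"type": "TEXT_DETECTION"},  # optical character recognition
            ],
        }]
    }
    resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    return resp.json()["responses"][0]

if __name__ == "__main__":
    print(annotate("storefront.jpg"))  # hypothetical local image file
```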