Google is making it easier than ever to give any app the power of object recognition

Smartphones have fast become the new frontier of artificial intelligence. Algorithms that used to run in the cloud, beaming results down to our devices via the internet, are now being replaced by software that runs directly on phones and tablets. Facebook is doing it, Apple is doing it, and Google is (perhaps) doing it slightly more than anyone else.

The latest example of mobile AI from the Silicon Valley search giant is the release of MobileNets, a set of machine vision neural networks designed to run directly on mobile devices. The networks come in a variety of sizes to fit all sorts of devices (bigger neural nets for more powerful processors) and can be trained to tackle a number of tasks.
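For developers curious what "a variety of sizes" looks like in practice, here is a rough sketch using the MobileNet implementation bundled with TensorFlow's Keras API (an assumption on our part; the article doesn't prescribe a specific toolkit). The width multiplier `alpha` and the input resolution are the knobs that trade accuracy for speed and memory:

```python
# Sketch: building MobileNets of different sizes with TensorFlow's Keras
# implementation (assumed API; not specified in the article).
import tensorflow as tf

# A larger width multiplier (alpha) and input resolution yield a bigger,
# more accurate network; smaller values yield a lighter one for weaker phones.
large_net = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), alpha=1.0, weights="imagenet")
small_net = tf.keras.applications.MobileNet(
    input_shape=(128, 128, 3), alpha=0.25, weights="imagenet")

print(f"large: {large_net.count_params():,} parameters")
print(f"small: {small_net.count_params():,} parameters")
```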

MobileNets can be used to analyze faces, detect common objects, geolocate photos, and perform fine-grained recognition tasks, like identifying different species of dogs. These tools are highly adaptable and could be put to a number of different uses, including powering augmented reality features or creating apps to help the disabled. Google says the performance of each neural net differs from task to task, but overall, its networks either meet or approach recent state-of-the-art results. A minimal classification sketch follows below.
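As a hedged illustration of the classification use case, here's how a pretrained MobileNet might label a photo; the file name `dog.jpg` is hypothetical, and the code assumes TensorFlow's bundled ImageNet weights rather than anything described in the article:

```python
# Sketch: classifying an image with a pretrained MobileNet
# (hypothetical image path; assumed TensorFlow/Keras workflow).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet import preprocess_input, decode_predictions

model = tf.keras.applications.MobileNet(weights="imagenet")

# Load a photo, resize it to the network's 224x224 input, and add a batch axis.
img = tf.keras.utils.load_img("dog.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

# The top predictions include fine-grained ImageNet classes such as dog breeds.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2f}")
```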


Google’s new MobileNets can be trained to complete a number of different tasks. Image: Google

For consumers, this means more mobile apps with AI features as developers start incorporating these tools. Running these sorts of tasks directly on-device has a number of benefits for everyday users, including faster performance, greater convenience (no internet connection required), and better privacy (your data never leaves the device).
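Getting a network like this to run entirely on a phone typically involves converting it to a compact on-device format. A plausible (but assumed, not article-specified) workflow uses TensorFlow Lite, whose converter shrinks the model into a file an app can bundle and run offline:

```python
# Sketch: converting a MobileNet to TensorFlow Lite for fully on-device
# inference (assumed workflow; the article doesn't prescribe a toolchain).
import tensorflow as tf

model = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), alpha=0.5, weights="imagenet")

# The converter produces a compact .tflite file that a mobile app can
# ship and execute without sending any data off the device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_model = converter.convert()

with open("mobilenet.tflite", "wb") as f:
    f.write(tflite_model)
```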