Google has updated its Lookout app, an AI toolkit for people with impaired vision, with two helpful new capabilities: scanning long documents and reading out food labels. Paper forms and similarly shaped products at the store present a challenge for blind folks, and this ought to make things easier.

Food labels, if you think about it, are actually a pretty difficult problem for a computer vision system to solve. They’re designed to be attention-grabbing and distinctive, but not necessarily highly readable or informative. If a sighted person can accidentally buy the wrong kind of peanut butter, what chance does someone who can’t read the label themselves have?

GIF: Google's Lookout app identifying a jar of mustard. Image Credits: Google

The new food label mode, then, is less about reading text and more about recognizing exactly what product the camera is looking at. If the user needs to turn the can or bottle to give the camera a better look, the app will tell them so. It compares what it sees to a database of product images, and when it gets a match it reads off the relevant details: brand, product name, flavor, and so on. If the image match fails, the app can always fall back to scanning the barcode.
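
Google hasn't published the internals, but conceptually this kind of feature resembles a nearest-neighbor lookup against a database of product image embeddings, with the barcode as a fallback. Here's a minimal illustrative sketch in Python; the embedding vectors, product database, and confidence threshold are all hypothetical stand-ins, not Lookout's actual API.

```python
# Illustrative sketch only: product_db and the embeddings it holds are
# hypothetical stand-ins, not part of Lookout or any Google API.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two image-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_product(frame_embedding: np.ndarray, product_db: dict, threshold: float = 0.85):
    """Compare a camera-frame embedding against known product embeddings.

    product_db maps product names to (embedding, metadata) pairs, where
    metadata holds brand, product name, flavor, and similar details.
    """
    best_score, best_meta = 0.0, None
    for name, (embedding, metadata) in product_db.items():
        score = cosine_similarity(frame_embedding, embedding)
        if score > best_score:
            best_score, best_meta = score, metadata
    if best_score >= threshold:
        return best_meta  # confident match: read brand, product, flavor aloud
    return None           # no confident match: ask the user to rotate the item,
                          # or fall back to scanning the barcode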

Document scanning isn’t exactly exciting, but it’s good to have the option built into a general-purpose artificial vision app in a straightforward way. It works as you’d expect: point your phone at the document (the app will help you get the whole thing in view), and the app scans it for your screen reader to read out.
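
Under the hood, that flow is essentially capture, then OCR, then hand the text off to be spoken. A rough sketch of the idea follows, using the open-source pytesseract OCR library; the speak() helper and the example file name are hypothetical placeholders, and none of this is Lookout's actual code.

```python
# Rough sketch of a capture -> OCR -> read-aloud flow. Pillow and pytesseract
# are real open-source libraries; speak() is a hypothetical stand-in for
# handing text to the platform screen reader or a TTS engine.
from PIL import Image
import pytesseract

def scan_document(image_path: str) -> str:
    """Run OCR on a captured document photo and return the extracted text."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image)

def speak(text: str) -> None:
    """Placeholder: a real app would pass this text to the screen reader."""
    print(text)

if __name__ == "__main__":
    text = scan_document("captured_form.jpg")  # hypothetical example file
    speak(text)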

The “quick read” mode that the app debuted with last year, which watches for text in the camera view and reads it out loud, has gotten some speed improvements.

The update brings a few other conveniences to the app, which should run on any Android phone with at least 2GB of RAM running Android 6.0 or higher. It’s also now available in Spanish, German, French, and Italian.
