Google has added dining and translation features to Lens, its image recognition software that can identify plants, animals, landmarks, music, and more.
The new features were announced in May 2019 at Google I/O, the tech company's annual developer conference. Since then, they have been rolled out to Google Lens users on both Android and iOS.
Google Lens can be accessed through other Google apps, such as Assistant and Photos. On some Google Pixel phones, the technology is built directly into the camera.
With Google Lens’ new dining feature, users simply point their camera at a restaurant’s menu, and the program automatically highlights the most popular dishes, along with photos, information, and reviews of each one. The feature can also help users calculate and split the bill after dining; they just point the camera at the receipt.
The translation feature works in much the same way. Users point their camera at text in a foreign language, and the feature recognizes the language, instantly displays a translation on the device, and can even read it aloud. According to Google, the feature supports more than 100 languages, making it especially handy for travelers in unfamiliar cities.
Speaking at Google I/O, Aparna Chennapragada, the general manager of Google’s camera products, said the new features are part of an effort to turn Google Lens into an augmented reality (AR) browser. Through Lens, the company hopes to link useful digital information to the physical world.