Improving portraits by adding light after a picture was taken

An individual photographed as illuminated one-light-at-a-time in the Google Light Stage, a 360° computational illumination system. Credit: Google

Recently, Google introduced Portrait Light, a feature on its Pixel phones that enhances portraits by adding a light source that was not present when the photo was taken. In a new blog post, Google explains how it made this possible.

In their post, engineers at Google Research note that professional photographers learned long ago that the best way to flatter a subject in a portrait is to use secondary flash units that are not attached to the camera. A photographer positions such flashes before shooting, taking into account the direction the subject's face is pointing, the other light available, skin tone and other factors. Google has attempted to capture those factors in its new portrait-enhancing software. The system does not require the phone's user to set up another light source; instead, the software acts as if another light had been there all along and lets the user choose the most flattering configuration for the subject.

The engineers explain that they achieved this feat with two algorithms. The first, which they call automatic directional light placement, places a synthetic light in the scene the way a professional photographer would. The second, called synthetic post-capture relighting, adds and repositions that light after the fact so the result looks realistic and natural.
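To make the two-stage idea concrete, here is a minimal toy sketch of that flow. Google's actual components are learned neural networks; below, a fixed offset from the face direction stands in for light placement and a simple Lambertian brightening over a supplied normal map stands in for the relighting model. All function names, parameters and the shading model are assumptions for illustration only, not Google's implementation.

```python
import numpy as np

def auto_light_placement(face_yaw_deg: float) -> np.ndarray:
    """Toy stand-in for 'automatic directional light placement':
    aim the synthetic key light roughly 45 degrees off the direction
    the face is pointing (a hypothetical heuristic)."""
    angle = np.deg2rad(face_yaw_deg + 45.0)
    return np.array([np.sin(angle), 0.3, np.cos(angle)])

def synthetic_relight(image: np.ndarray, normals: np.ndarray,
                      light_dir: np.ndarray, strength: float = 0.4) -> np.ndarray:
    """Toy stand-in for 'synthetic post-capture relighting': brighten
    pixels whose surface normals face the synthetic light."""
    l = light_dir / np.linalg.norm(light_dir)
    shading = np.clip(normals @ l, 0.0, 1.0)            # (H, W) Lambertian term
    return np.clip(image * (1.0 + strength * shading[..., None]), 0.0, 1.0)

# Toy usage: a flat gray 8x8 "portrait" whose normals all face the camera.
img = np.full((8, 8, 3), 0.5)
nrm = np.zeros((8, 8, 3)); nrm[..., 2] = 1.0
out = synthetic_relight(img, nrm, auto_light_placement(face_yaw_deg=10.0))
print(out.mean() > img.mean())  # True: the synthetic light brightened the face
```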

Left: Example images from an individual's photographed reflectance field, i.e., their appearance in the Light Stage when illuminated one light at a time. Right: The images can be added together to form the appearance of the subject in any novel lighting environment. Credit: Google
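The caption describes standard image-based relighting: because light adds linearly, the one-light-at-a-time (OLAT) photos can be summed, each weighted by how bright the target lighting environment is in that light's direction, to render the subject under new lighting. The sketch below illustrates that weighted sum; the array shapes and variable names are illustrative assumptions, not Google's data format.

```python
import numpy as np

def relight_from_olat(olat_images: np.ndarray, env_weights: np.ndarray) -> np.ndarray:
    """olat_images: (num_lights, H, W, 3) photos, one per Light Stage light.
    env_weights:  (num_lights, 3) RGB intensity of the target environment
                  sampled in each light's direction.
    Returns the subject rendered under the novel lighting environment."""
    # Weighted sum over the light axis: relit = sum_i w_i * OLAT_i
    return np.einsum('lhwc,lc->hwc', olat_images, env_weights)

# Toy data: 331 lights (the count mentioned in the article), a tiny 4x4 image.
olat = np.random.rand(331, 4, 4, 3)
weights = np.random.rand(331, 3) / 331.0
relit = relight_from_olat(olat, weights)
print(relit.shape)  # (4, 4, 3)
```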

Both algorithms rely on deep-learning networks. Google trained the software on available photographs and on hundreds of Light Stage portraits of 70 people, captured with lights placed in 331 locations and cameras at 64 viewpoints. The engineers also drew on well-established principles, such as the best angles for placing lights relative to the features of a person's face.
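For a rough sense of scale, the quick arithmetic below multiplies out the capture setup described above. That every light and viewpoint combination was captured for every subject is an assumption made here for illustration, not something the article states.

```python
# Capture setup figures taken from the article.
num_subjects, num_lights, num_viewpoints = 70, 331, 64

# Assumption: every light/viewpoint pair was captured for each subject.
images_per_subject = num_lights * num_viewpoints   # 21,184 OLAT photos per person
total_images = num_subjects * images_per_subject   # roughly 1.48 million photos
print(images_per_subject, total_images)
```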

The software is built into newer Pixel phones; users of older devices can try it through the Google Photos online service. In either case, users can accept the automatic enhancement or adjust it manually.

More information: ai.googleblog.com/2020/12/port … ancing-portrait.html
Citation: Improving portraits by adding light after a picture was taken (2020, December 14) retrieved 15 December 2020 from https://techxplore.com/news/2020-12-portraits-adding-picture.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

Source: TechXplore