
Here's how Google made Pixel 3's portrait mode even better



  • Google has detailed its recent advances in AI and photography, particularly regarding portrait mode on the Pixel 3.
  • The post explains how Google improved the way its neural network estimates image depth.
  • The result is an enhanced bokeh effect in portrait mode shots.

Google has detailed one of the Pixel 3's headline photography achievements on its AI blog. In a post yesterday, Google discussed how portrait mode improved between the Pixel 2 and the Pixel 3.

Portrait mode is a popular smartphone photography mode that blurs the background of a scene while keeping the foreground subject in focus (sometimes called the bokeh effect). The Pixel 3 and the Google Camera app take advantage of neural networks, machine learning, and GPU hardware to make this effect even better.

In portrait mode on the Pixel 2, the camera captures two versions of the scene from slightly different angles. In these images, the foreground figure (a person, in most portrait shots) appears to shift less than the background does, an effect known as parallax. This discrepancy was used as the basis for estimating the depth of the image, and therefore which areas should be blurred (a rough sketch of the idea follows the image below).

An example of parallax at work in Google's portrait mode. Google Blog
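
To make the depth-from-parallax relationship concrete, here is a minimal, generic sketch in Python with OpenCV. This is not Google's pipeline: in classic two-view stereo (used here for illustration), nearer objects shift more between views, while the Pixel's dual-pixel data measures shift relative to the focal plane instead. The focal length and baseline values are made-up placeholders.

```python
# A minimal sketch, assuming classic two-view stereo rather than Google's
# dual-pixel pipeline. Block matching measures how far each pixel shifts
# between the two views (its disparity), and triangulation converts that
# shift into a depth estimate. focal_px and baseline_mm are placeholders.
import cv2
import numpy as np

def depth_from_parallax(left_gray, right_gray,
                        focal_px=1000.0,   # hypothetical focal length, in pixels
                        baseline_mm=10.0): # hypothetical camera separation
    # StereoBM returns fixed-point disparities scaled by 16.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Depth is inversely proportional to disparity: big shift = close object.
    depth_mm = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth_mm[valid] = focal_px * baseline_mm / disparity[valid]
    return depth_mm
```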

This produced strong results on the Pixel 2, but it wasn't perfect. The two versions of the scene provided only a little depth information, so problems could arise. Most often, the Pixel 2 (and many phones like it) would fail to accurately separate the foreground from the background.

With the Pixel 3's camera, Google was able to fold in additional depth cues to inform this blur effect for better accuracy. As well as parallax, Google used sharpness as a depth indicator (more distant objects are less sharp than closer objects) and the identification of real-world objects in the scene. For example, the camera could recognize a person's face in a scene and work out how near or far it was based on its number of pixels relative to the objects around it. Smart.
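
As a back-of-the-envelope illustration of that semantic cue, a simple pinhole-camera model can turn a face's size in pixels into a distance estimate. Every constant here is an illustrative assumption, not a value from Google's model:

```python
# A toy sketch of the semantic depth cue: an object of known real-world
# size that covers fewer pixels must be farther away. The pinhole model
# says pixel_size = focal_px * real_size / distance, so:
def face_distance_mm(face_height_px,
                     focal_px=1000.0,            # assumed focal length, in pixels
                     real_face_height_mm=240.0): # assumed average face height
    return focal_px * real_face_height_mm / face_height_px

# A face 400 px tall would be estimated at ~600 mm from the camera.
print(face_distance_mm(400))  # 600.0
```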

Google then trained its neural network with these new variables to give it a better understanding (more precisely, a better estimate) of depth in an image.
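
The blog post doesn't spell out the architecture, so the following is only a minimal Keras sketch of the idea: stack several depth cues (say, a disparity map and a local-sharpness map) as input channels and train a small convolutional network to regress a per-pixel depth map. The layer sizes, loss, and shapes are assumptions for illustration.

```python
# A minimal sketch, assuming cue maps are stacked as input channels.
# Google's real model, training data, and loss are not described here.
import tensorflow as tf

def build_depth_net(h=128, w=128, cue_channels=2):
    inputs = tf.keras.Input(shape=(h, w, cue_channels))  # e.g. parallax + sharpness
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    depth = tf.keras.layers.Conv2D(1, 3, padding="same")(x)  # per-pixel depth map
    return tf.keras.Model(inputs, depth)

model = build_depth_net()
model.compile(optimizer="adam", loss="mae")  # L1 is a common depth-regression loss
# model.fit(cue_maps, ground_truth_depth_maps, epochs=...)  # hypothetical data
```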

The Pixel 3's portrait mode rendering bokeh around a skull figurine. Pixel's portrait mode doesn't just require a person. Google Blog

What does all of this mean?

The result is a better-looking portrait mode on the Pixel 3 compared to previous Pixel cameras (and, presumably, many other Android phones) thanks to more accurate background blur. And, yes, that should mean less hair lost to the background blur.
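
For a sense of how a depth map drives the blur, here is a deliberately simple two-layer blend: pixels near an assumed focal plane stay sharp, and blur fades in with distance from it. Real renderers (including Google's) use variable per-pixel blur kernels; the kernel size and distances below are arbitrary assumptions.

```python
# A minimal sketch of depth-aware bokeh, assuming we already have a depth
# map: blend the sharp image toward a blurred copy as estimated depth moves
# away from the focal plane. focus_mm and falloff_mm are made-up values.
import cv2
import numpy as np

def fake_bokeh(image, depth_mm, focus_mm=1500.0, falloff_mm=500.0):
    blurred = cv2.GaussianBlur(image, (31, 31), 0)
    # 0 near the focal plane, ramping up to 1 far from it.
    alpha = np.clip(np.abs(depth_mm - focus_mm) / falloff_mm, 0.0, 1.0)
    alpha = alpha[..., None]  # broadcast the mask over the color channels
    return (image * (1.0 - alpha) + blurred * alpha).astype(image.dtype)
```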

There's an interesting chip-related implication to all of this, too. It takes a lot of power to crunch the data needed to create these photos once they're captured (they're based on full-resolution, multi-megapixel PDAF images); the Pixel 3 handles this fairly well thanks to its combination of TensorFlow Lite and the GPU.
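
For context, this is roughly what invoking a converted model through TensorFlow Lite looks like from Python. The model file name and input tensor are placeholders; on a phone, a GPU delegate (attached through the Android or C++ APIs) is what accelerates this step.

```python
# A minimal TensorFlow Lite inference sketch. "depth_net.tflite" is a
# hypothetical converted model, and the zero tensor is a stand-in input.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="depth_net.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

cues = np.zeros(inp["shape"], dtype=np.float32)   # stand-in cue maps
interpreter.set_tensor(inp["index"], cues)
interpreter.invoke()
depth_map = interpreter.get_tensor(out["index"])  # the estimated depth
```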

In the future, though, better processing efficiency and dedicated neural chips will widen the possibilities, not only for how quickly these photos are delivered, but for what enhancements developers even choose to integrate.

To learn more about the Pixel 3 camera, check out Google's blog post, and give us your thoughts on it in the comments.

