Most people agree the Pixel 2 family has the best cameras on any smartphone right now. The camera hardware itself is great, but most of the magic happens on the software side. For example, the HDR+ feature improves almost any camera when it’s ported to other phones. A newer software feature on the Pixel 2 is “Portrait Mode”: it identifies the subject and blurs the background to create a shallow depth-of-field effect.
The camera uses semantic image segmentation to achieve this. Basically, it assigns every pixel a label such as “person” or “sky,” which lets the camera distinguish a person in the foreground from the sky in the background. Google has released this technology as open source, which means developers can use the same tech in their own apps. Portrait Mode is just one example of what it enables; developers can do even more cool stuff with it.
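The per-pixel labeling idea can be sketched in a few lines. This is a toy illustration, not Google’s model: the class list, the tiny 2×3 “image,” and the scores are made up, and a real network would produce the score tensor itself. The only real mechanics shown are the argmax over class scores and the derivation of a “person” mask.

```python
import numpy as np

# Toy semantic segmentation sketch (illustrative, not the DeepLab model):
# a segmentation network outputs a score per class at every pixel, and the
# predicted label map is the per-pixel argmax over those scores.
CLASSES = ["background", "person", "sky"]  # hypothetical class list

# scores has shape (height, width, num_classes); values here are made up.
scores = np.array([
    [[0.1, 0.8, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]],
    [[0.6, 0.3, 0.1], [0.1, 0.8, 0.1], [0.2, 0.1, 0.7]],
])

label_map = scores.argmax(axis=-1)               # (height, width) class indices
person_mask = label_map == CLASSES.index("person")

print(label_map)     # → [[1 1 2], [0 1 2]]
print(person_mask)   # True where the "person" class won
```

The mask is exactly what a Portrait-Mode-style feature needs: a binary separation of the subject from everything else.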
This release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) backbone architecture [2, 3] for the most accurate results, intended for server-side deployment. As part of this release, we are additionally sharing our TensorFlow model training and evaluation code, as well as models already pre-trained on the PASCAL VOC 2012 and Cityscapes benchmark semantic segmentation tasks.
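To make the Portrait Mode connection concrete, here is a toy sketch of what an app might do with a segmentation mask like the ones such models produce: keep the masked subject sharp and blur the rest. The 4×4 grayscale image, the mask, and the simple box blur are all made up for illustration; a real pipeline would use a proper blur kernel and a mask from the model.

```python
import numpy as np

def box_blur(img):
    """Blur with a 3x3 box filter, averaging over in-bounds neighbors."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

# Made-up 4x4 grayscale image and a pretend "person" mask in the center.
image = np.arange(16, dtype=float).reshape(4, 4)
person = np.zeros((4, 4), dtype=bool)
person[1:3, 1:3] = True

blurred = box_blur(image)
# Sharp subject, blurred background -- the Portrait Mode effect in miniature.
portrait = np.where(person, image, blurred)
```

Pixels inside the mask keep their original values, while background pixels are replaced by their blurred versions.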
Source: Google Research