This week was jam-packed with exciting announcements from Google at the company’s annual I/O developer conference. The annual gathering of developers, journalists, industry members, and Google employees is a great place to talk about the latest products, features, and services from Google. Artificial intelligence and machine learning were big focuses this year, and one of the most exciting applications of machine learning came from the Google Photos team. The new Google Photos features rolling out soon include quick actions to brighten a photo, colorize a subject while leaving the background black and white, share a photo with relevant contacts, archive documents, and save documents as PDF files.
Features like these are why, in my opinion, Photos is the best Google service in recent memory. Google Photos makes it easy for beginners to archive, share, and now even edit photos, so we’re excited to see the features start to roll out (e.g., the Color Pop feature should start showing up today). There was one feature teased at last year’s Google I/O that many of us have waited for but have yet to see go live in the Photos app. The feature, which was said to be “coming very soon,” would allow Photos to automatically remove objects occluding the main subject of a photo. As an example, Google CEO Sundar Pichai showed the company’s object removal algorithm automatically erasing a chain-link fence occluding a child playing baseball.
Since this feature was teased on stage, we haven’t heard any information from Google about when or if it would be rolled out to users. During this year’s Google I/O, we sat down with David Lieb, Product Lead for Google Photos, and Ben Greenwood, Product Manager for Google Photos, to discuss the latest features for Photos. While much of the discussion centered around the new Google Photos Partner Program and the Google Photos Library API (we’ll have more to say on that in the near future), we had the opportunity to ask the team about what happened with the object removal feature.
We were told that the object removal feature teased during the 2017 keynote was a demonstration of Google’s machine learning capabilities. While the technology is certainly available and could be deployed, the team builds its product by prioritizing what’s most important to people. Hence, the Photos team prioritized other applications of machine learning above this feature.
While this answer may not satisfy users who were looking forward to the feature, we now have a clearer picture of what happened to it. Despite the excitement from the tech media, the Photos team’s assessment may have found that object removal wouldn’t matter as much to users as the new machine learning features introduced this week. It’s possible that the Photos team will revisit the idea and eventually roll out the feature, but unfortunately, we don’t have an exact timeframe on when (or if) it’ll be made available.