- While Google’s camera algorithms can do some incredible things, they aren’t that incredible for people of color.
- Algorithms simply aren’t optimized for darker skin tones and certain hairstyles.
- At Google I/O 2021, Google committed to addressing this inequality and discussed the progress it has made so far.
Smartphone cameras can do some incredible things. With the help of software smarts, you can blur a background and focus on a subject at the same time. You can change the color, correct the exposure, and even add motion to static images.
However, one thing camera algorithms haven’t been good at is working well for people of color. Due to inherent racial bias, these algorithms are trained on image databases filled mostly with white subjects. This can lead to PoC feeling excluded from their own photos.
Thankfully, Google is finally acknowledging this. What’s more, it has set itself the goal of optimizing its Google Camera algorithms so that they are more inclusive of PoC. During the Google I/O 2021 keynote, the company revealed its plans and its progress so far in addressing this inequality.
Google Camera Algorithms: A Step Toward More Fairness
Credit: Luka Mlinar / Android Authority
If you use the artificial bokeh effect on your phone (also known as portrait mode), you may find that your hair is blurry around the edges. This is because the camera algorithm has a hard time distinguishing the tiny strands of hair from the background. To fix this, Google and other companies are constantly optimizing their algorithms using machine learning.
However, PoC often get much worse results here, as their hair differs greatly from the hair in the images used to train this machine learning. A similar problem arises with darker skin tones in an image.
See also: Here are the best Android lens add-ons for mobile cameras
To counter this, Google is working with over a dozen photographers and other image experts from around the world who have a range of skin tones, hair types, and cultural backgrounds. These experts capture thousands of images, which are then submitted to Google to diversify the pool of data behind its machine learning algorithms.
It may take a while for Google’s camera algorithms to work equally well for all people, but at least Google admits there is a problem and is working to fix it.