Portrait Mode Delivers ‘Unsatisfactory’ Images for Certain Skin Tones Due to AI Bias, Poor Tuning: Study

newyhub

Portrait mode photography on smartphones might have ‘biases’ that result in lower-quality images of subjects with darker skin tones, according to a study. The portrait mode effect is achieved digitally, using depth-sensing technology to separate the subject from the background of the photo. However, the result can range from impressive to unsatisfactory depending on the scene, the lighting conditions, and even the subject. The study claims that individuals with darker skin tones might receive less desirable output than those with lighter skin tones.
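
The separation step can be illustrated with a minimal sketch: given a per-pixel depth map, pixels closer than a threshold are treated as the subject and kept sharp, while the rest of the frame is blurred. The sketch below uses OpenCV and NumPy with arbitrary threshold and blur values; it illustrates the general technique and is not how any particular phone implements portrait mode.

```python
import cv2
import numpy as np

def fake_portrait(image_bgr: np.ndarray, depth: np.ndarray,
                  subject_max_depth: float = 0.4) -> np.ndarray:
    """image_bgr: HxWx3 uint8 photo; depth: HxW float map normalised to [0, 1],
    where smaller values are closer to the camera. Threshold and blur sizes
    are illustrative, not tuned values from any real device."""
    # Pixels closer than the threshold are treated as the subject.
    subject_mask = (depth < subject_max_depth).astype(np.float32)
    # Soften the mask edge so the subject/background transition is not a hard cut.
    subject_mask = cv2.GaussianBlur(subject_mask, (21, 21), 0)[..., np.newaxis]

    # Blur the whole frame, then composite: sharp subject over blurred background.
    blurred = cv2.GaussianBlur(image_bgr, (51, 51), 0)
    out = (subject_mask * image_bgr.astype(np.float32)
           + (1.0 - subject_mask) * blurred.astype(np.float32))
    return out.astype(np.uint8)
```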

Portrait Images Might Be Affected by AI Training Bias

Smartphone camera testing website DxOMark published the results of a study conducted in mid-2023. The study aimed to measure user preferences for people's pictures taken using portrait photography modes on smartphones. The publication used flagship smartphones that were launched in late 2022 and early 2023. As part of the study, 405 scenes were captured featuring 83 regular consumers who modelled for the photos. The publication also developed a ‘Satisfaction Index’ to convert people’s opinions into a score out of 100.
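
DxOMark has not published the exact formula behind the Satisfaction Index, but converting preference ratings into a score out of 100 can be sketched as follows, assuming (hypothetically) that each viewer rates a photo on a 1–5 scale and the ratings are rescaled and averaged.

```python
def satisfaction_index(ratings, scale_min=1, scale_max=5):
    """Rescale each rating to 0-100 and average the results.
    Hypothetical formula; DxOMark's actual methodology is not public."""
    rescaled = [100 * (r - scale_min) / (scale_max - scale_min) for r in ratings]
    return sum(rescaled) / len(rescaled)

# Example: five viewers rate one portrait scene.
print(round(satisfaction_index([4, 3, 5, 2, 4]), 1))  # -> 65.0
```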

The study found that while portrait mode photos scored lower than expected overall, there was also a disparity between images of individuals with darker skin tones and those with lighter skin tones. For darker skin tones, issues such as over-exposure of the subject, under-exposure of the background, and artificial brightening of the skin tone were noted.

[Image: Image disparity with respect to different skin tones and the Satisfaction Index score. Photo Credit: DxOMark]

Highlighting possible reasons for the issue, the study stated that bias in the training data of artificial intelligence (AI) models could be one cause. Most portrait mode implementations on smartphones rely on AI, including deep learning, for scene detection and semantic segmentation. The study speculates that if the data used to train these models includes a disproportionate number of individuals with lighter skin tones, it could result in inaccuracies when rendering darker skin tones.
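
One way such a skew can be made visible before a model is trained is to tally how each skin-tone group is represented in the labelled dataset. The sketch below is purely illustrative; the grouping labels and the example numbers are assumptions, not figures from the study.

```python
from collections import Counter

def representation_report(labels):
    """labels: per-image skin-tone annotations (e.g. bins from a skin-tone scale).
    Returns the share of the dataset each group represents, making any skew
    toward lighter tones visible before training."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in sorted(counts.items())}

# Hypothetical dataset heavily skewed toward lighter skin tones.
example = ["light"] * 800 + ["medium"] * 150 + ["dark"] * 50
print(representation_report(example))  # {'dark': 0.05, 'light': 0.8, 'medium': 0.15}
```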

Citing a 2018 paper titled Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification by the MIT Media Lab, the study mentioned, “Darker females have the highest error rates for all gender classifiers ranging from 20.8 − 34.7 percent. For Microsoft and IBM classifiers, lighter males are the best-classified group with 0.0 and 0.3 percent error rates respectively. Face++ classifies darker males best with an error rate of 0.7 percent.”

However, AI bias is only one of the two possible reasons the study identifies. The other is poor tuning of the cameras themselves. The publication highlighted that it is not possible to tune a camera so that it renders optimal pictures in every scenario and for every skin tone. Some manufacturers prioritise one type of optimisation over another, which can often lead to these inaccuracies.

The publication added that such studies are important for gauging consumer satisfaction and concerns, and that this feedback can play an important role in improving the technology so that it caters to all demographics.
