What do tech giants really do when they express concern about building fair and unbiased AI? Top companies such as Microsoft and Google, and even the Department of Defense, release value statements declaring their dedication to these goals. But those statements tend to gloss over a fundamental reality: even AI creators with good intentions face inherent trade-offs, where improving one kind of fairness can mean sacrificing another.
Facial Recognition for Police Surveillance
One type of AI bias that has drawn growing attention involves facial recognition systems. These systems are most accurate at identifying white male faces, the kind they have most often been trained on, and markedly worse at recognizing people with darker skin, especially women, with harmful consequences.
To address these failures, some have argued that the systems should be debiased, i.e., trained on a more diverse set of faces. But accurately identifying all kinds of faces is not the only problem. As these systems are increasingly deployed in police surveillance that disproportionately targets people of color, a system that identifies Black faces more accurately can produce more unjust outcomes, not fewer.
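The kind of accuracy gap described above is typically quantified by computing error rates per demographic group and comparing them. Here is a minimal sketch of that comparison; the group labels and counts are invented for illustration, not real benchmark results:

```python
# Hypothetical per-group evaluation of a face-matching model.
# All numbers below are made up to illustrate the measurement, not real data.
results = {
    # group: (correct identifications, total attempts)
    "lighter-skinned men": (980, 1000),
    "lighter-skinned women": (940, 1000),
    "darker-skinned men": (880, 1000),
    "darker-skinned women": (650, 1000),
}

def error_rate(correct, total):
    """Fraction of attempts the model got wrong."""
    return 1 - correct / total

rates = {group: error_rate(c, t) for group, (c, t) in results.items()}
for group, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{group}: {rate:.1%} error rate")

# The disparity is the gap between the best- and worst-served groups.
disparity = max(rates.values()) - min(rates.values())
print(f"accuracy gap between groups: {disparity:.1%}")
```

Note that closing this gap, e.g. by training on more diverse faces, improves one fairness metric while potentially making the surveillance problem described above worse.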
In 2019, the writer Zoe Samudzi argued in the Daily Beast that, in a country where crime prevention already links Blackness with inherent criminality, it is not social progress to make Black people equally visible to technology that will inevitably be weaponized against them.
What If a Text Generator Is Biased Against Certain Groups?
Text-generating AI systems such as GPT-3 have been lauded for their ability to enhance creativity. Researchers train them by feeding the models massive amounts of internet text, from which they learn statistical associations between words until they can readily predict what comes next. However, GPT-3, developed by the lab OpenAI, has a tendency to produce toxic statements about particular groups.
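The core training idea, learning which words tend to follow which, can be sketched with a toy bigram model. This is a drastic simplification (GPT-3 learns far richer patterns over billions of words), but the training signal is the same: text as found on the internet, biases included.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then predict the most frequent successor. Real language
# models generalize far beyond raw counts, but the principle is the
# same: the model reproduces whatever associations the text contains.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("on"))  # "the" follows "on" in every example
```

If the corpus repeatedly pairs a group's name with negative words, the model learns and reproduces that pairing, which is exactly how toxic associations end up in the output.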
This is a clear violation of representational fairness: the toxic output denigrates entire groups of people. Yet efforts to fix it carry their own risk. An overcorrected system may treat any prompt that mentions certain groups as unsafe and refuse to generate any text in response.
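The overcorrection failure mode is easy to see in a deliberately naive keyword filter, sketched below with hypothetical placeholder terms. Real moderation systems are far more sophisticated, but the trade-off is the same: a filter broad enough to catch every toxic mention also silences benign or even celebratory ones.

```python
# A deliberately naive "safety" filter that refuses any prompt
# mentioning a listed group. The terms here are hypothetical
# placeholders, not real identity terms.
BLOCKED_TERMS = {"group_a", "group_b"}

def generate(prompt):
    """Refuse if the prompt mentions any blocked term; otherwise 'generate'."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "[refused]"
    return f"<model output for: {prompt}>"

# The filter blocks hostile and benign prompts alike:
print(generate("write a slur about group_a"))                 # refused
print(generate("write a poem celebrating group_a's history")) # also refused
print(generate("write a poem about mountains"))               # generated
```

The result is a different fairness failure: the groups the filter was meant to protect become the ones the system refuses to talk about at all.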