The Silicon Trend Tech Bulletin

Latest Meta AI Study Shares New Datasets and Research on Measuring Fairness and Alleviating Bias in NLP Training Data

Published Fri, Jun 03, 2022, 2:03 pm
by The Silicon Trend


There is growing evidence that models can exhibit social biases, as they tend to repeat or amplify undesirable statistical associations in their training data. With massive datasets, the likelihood that models reproduce harmful biases is high. The issue is especially acute for historically marginalized groups such as women and people of color.

Creating precise, large-scale techniques to measure fairness and mitigate bias gives AI practitioners benchmarks for testing NLP systems, advancing the goal of ensuring AI treats everyone equitably. Gender, race, and ethnicity are three axes where the research community has made substantial progress. Though this foundation is a vital step toward addressing fairness along these axes, it falls short of catching fairness problems tied to other important identities.


Using a straightforward method, models have been trained to recognize and avoid social biases when generating responses to prompts that mention particular demographic groups. In addition, an AI model was designed to help break stereotypes in NLP data by rewriting text to reduce demographic bias.
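To make the idea concrete, here is a minimal sketch of one such rewriting step, assuming a simple word-swap approach: it produces a counterfactual variant of a sentence by exchanging gendered terms. The term pairs and function names are illustrative assumptions, not Meta AI's actual implementation.

```python
import re

# Hypothetical swap list for a single axis (gender); a real system
# would cover many more axes and terms.
SWAP_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
}

_PATTERN = re.compile(r"\b(" + "|".join(SWAP_PAIRS) + r")\b", re.IGNORECASE)

def perturb(sentence: str) -> str:
    """Return a counterfactual variant with demographic terms swapped."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        replacement = SWAP_PAIRS[word.lower()]
        # Preserve the original token's capitalization.
        return replacement.capitalize() if word[0].isupper() else replacement
    return _PATTERN.sub(swap, sentence)

# Note: a real rewriter must disambiguate words like "her"
# (object vs. possessive); this sketch ignores that.
print(perturb("She is a doctor and he is a nurse."))
# -> He is a doctor and she is a nurse.
```

Training on both the original and the perturbed variant weakens the stereotyped association (here, between gender and occupation) without discarding the underlying sentence.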

Developing Techniques to Assess and Correct Biases

Accurately measuring demographic bias in AI requires a rich vocabulary that reflects a wide range of identities. To that end, the researchers created a list of more than 500 terms covering over a dozen demographic axes, using a combination of participatory and computational methods. As part of the participatory process, they collected proposals for new terms and gathered feedback on existing ones from policy and domain experts and from individuals with lived experience.
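As a rough illustration of how such a vocabulary might be organized in practice, the sketch below maps demographic axes to term lists. The axis names and terms shown are hypothetical examples; the actual list spans 500+ terms across more than a dozen axes.

```python
from typing import Dict, List

# Hypothetical excerpt of a demographic-term taxonomy, for
# illustration only.
DEMOGRAPHIC_TAXONOMY: Dict[str, List[str]] = {
    "gender": ["woman", "man", "nonbinary person"],
    "race_ethnicity": ["Asian", "Black", "Latino", "white"],
    "age": ["young adult", "middle-aged person", "senior"],
    "religion": ["Buddhist", "Christian", "Jewish", "Muslim"],
    # ...further axes such as nationality, disability, and
    # socioeconomic status would extend this mapping.
}

def terms_for_axis(axis: str) -> List[str]:
    """Look up the vetted terms for one demographic axis."""
    return DEMOGRAPHIC_TAXONOMY.get(axis, [])

print(terms_for_axis("gender"))  # ['woman', 'man', 'nonbinary person']
```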


This comprehensive list of demographic terms makes it possible to measure and mitigate model biases more effectively. Previous work on estimating bias has typically focused on broader terms such as "Asian"; a measurement at that level cannot reveal whether a model is biased against a more specific group, such as Japanese people.
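A hedged sketch of what fine-grained, template-based bias measurement could look like follows: the same template is scored once per demographic term, so gaps between a broad term and its more specific subgroups become visible. The template, helper names, and placeholder scorer are all assumptions for illustration, not the study's actual protocol.

```python
from typing import Callable, Dict, List

# One fixed template; real evaluations use many.
TEMPLATE = "I am friends with a {term} person."

def measure_bias(terms: List[str],
                 score: Callable[[str], float]) -> Dict[str, float]:
    """Score one templated sentence per term so gaps become visible."""
    return {term: score(TEMPLATE.format(term=term)) for term in terms}

def dummy_score(sentence: str) -> float:
    """Placeholder scorer; a real pipeline would query a sentiment or
    toxicity model instead of returning a constant."""
    return 0.0

# Scoring only the broad term can hide disparities that the
# fine-grained terms would reveal.
print(measure_bias(["Asian"], dummy_score))
print(measure_bias(["Japanese", "Korean", "Vietnamese"], dummy_score))
```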