The Silicon Trend Tech Bulletin


Public Trust in Artificial Intelligence: University of Tokyo

Published Tue, Jan 18 2022 05:52 am
by The Silicon Trend




As the importance of artificial intelligence (AI) keeps increasing, University of Tokyo researchers have found that public trust in the technology varies significantly depending on the application. Their analysis shows how different ethical scenarios and demographic factors affect this trust. The team also designed an octagonal visual metric that helps AI researchers understand how the public perceives their work.

Many people feel that rapid technological development outpaces the social structures that regulate it and guide its ethics. As AI becomes part of everyday life, people often fear and mistrust its role in modern living. A team led by Prof. Hiromi Yokoyama of the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo set out to quantify people's attitudes toward ethical issues in AI. The team asked two questions in particular: how attitudes change depending on the scenario presented to a respondent, and how the respondent's own demographics change those attitudes.




Octagon Measurements

To measure people's trust in AI, the researchers worked with eight themes common to many AI applications that raise ethical questions: responsibility; safety and security; human control of technology; privacy; fairness and non-discrimination; promotion of human values; transparency and explainability; and accountability. These were termed the octagon measurements, inspired by a 2020 study by Jessica Fjeld and her team at Harvard University.

Based on these criteria, survey respondents were presented with four scenarios involving different uses of AI: AI customer service, crime detection, AI-generated art, and autonomous weapons. Respondents also provided demographic data such as age, gender, education, occupation, and interest in science and technology, allowing the researchers to match characteristics to particular attitudes.
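As a rough illustration of how such a survey might feed the octagon metric, responses for one scenario could be averaged into a score per ethical theme. The eight theme names come from the study as reported here; the function, the rating scale, and the sample data below are hypothetical, not the researchers' actual methodology.

```python
# Hypothetical sketch: average Likert-style ratings (1 = low concern,
# 5 = high concern) per ethical theme for a single AI scenario.
# Theme names follow the article; everything else is illustrative.

THEMES = [
    "responsibility",
    "safety and security",
    "human control of technology",
    "privacy",
    "fairness and non-discrimination",
    "promotion of human values",
    "transparency and explainability",
    "accountability",
]

def octagon_scores(responses):
    """Average each theme's ratings across respondents into one score."""
    scores = {}
    for theme in THEMES:
        ratings = [r[theme] for r in responses if theme in r]
        scores[theme] = sum(ratings) / len(ratings) if ratings else None
    return scores

# Two illustrative respondents rating the same scenario.
sample = [
    {theme: 5 for theme in THEMES},
    {theme: 3 for theme in THEMES},
]
print(octagon_scores(sample)["privacy"])  # 4.0
```

The resulting eight scores could then be plotted on a radar (spider) chart, giving the octagonal shape the visual metric is named for.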




Yokoyama said prior research has shown that risks are perceived more negatively by older people, women, and subject-matter experts. She had expected the survey to show a different pattern given how commonplace the technology has become, but similar trends appeared. Asst. Prof. Tilman Hartwig said, "With a universal scale, researchers, developers, and regulators could better measure the acceptance of specific AI applications or impacts and act accordingly."

One thing the professors noted while creating the scenarios and questionnaire was that several lingering AI topics needed more explanation than they had expected, indicating a wide gap between perception and reality regarding AI.