
東京大学公共政策大学院 | GraSPP / Graduate School of Public Policy | The University of Tokyo

Profs. Naomi Aoki and Kentaro Maeda’s co-authored paper entitled “Explainable AI for government: Does the type of explanation matter to the accuracy, fairness, and trustworthiness of an algorithmic decision as perceived by those who are affected?” was accepted and published in Government Information Quarterly

September 24, 2024

Faculty news, GraSPP Blog, Research

Profs. Naomi Aoki and Kentaro Maeda’s co-authored paper entitled “Explainable AI for government: Does the type of explanation matter to the accuracy, fairness, and trustworthiness of an algorithmic decision as perceived by those who are affected?” was accepted and published by the journal Government Information Quarterly.

The paper is introduced in the GraSPP Blog.

GraSPP Blog | Explainable AI for Government: The Type of Explanation Matters to Public Attitudes Towards Adverse Algorithmic Decisions Imposed by Government

Highlights

– Adverse algorithmic decisions imposed by public authorities are discussed.
– The perceived fairness, accuracy, and trustworthiness of such decisions are examined.
– These attitudes depend on the type of explanation provided by explainable AI.
– The effects of the types of explanations on attitudes differ among decision domains.
– These effects can be taken into account when designing and offering explanations.

Abstract

Amidst concerns over biased and misguided government decisions arrived at through algorithmic treatment, it is important for members of society to be able to perceive that public authorities are making fair, accurate, and trustworthy decisions. Inspired in part by equity and procedural justice theories and by theories of attitudes towards technologies, we posited that the perception of these attributes of decisions is influenced by the type of explanation offered, which can be input-based, group-based, case-based, or counterfactual. We tested our hypotheses with two studies, each of which involved a pre-registered online survey experiment conducted in December 2022. In both studies, the subjects (N = 1200) were officers in high positions at stock companies registered in Japan, who were presented with a scenario consisting of an algorithmic decision made by a public authority: a ministry’s decision to reject a grant application from their company (Study 1) and a tax authority’s decision to select their company for an on-site tax inspection (Study 2). The studies revealed that offering the subjects some type of explanation had a positive effect on their attitude towards a decision, to various extents, although the detailed results of the two studies are not robust. These findings call for a nuanced inquiry, both in research and practice, into how to best design explanations of algorithmic decisions from societal and human-centric perspectives in different decision-making contexts.

Research Team

Naomi Aoki (Graduate School of Public Policy)
Tomohiko Tatsumi (Graduate Schools for Law and Politics / Faculty of Law)
Go Naruse (Graduate Schools for Law and Politics / Faculty of Law)
Kentaro Maeda (Graduate School of Public Policy / Faculty of Law)

Paper Information

Aoki, N., Tatsumi, T., Naruse, G., & Maeda, K. (2024). Explainable AI for government: Does the type of explanation matter to the accuracy, fairness, and trustworthiness of an algorithmic decision as perceived by those who are affected? Government Information Quarterly, 41(4), 101965.
https://doi.org/10.1016/j.giq.2024.101965


Inquiries to

GraSPP Public Relations team

graspp.pr.j(at)gs.mail.u-tokyo.ac.jp