A Replication of “Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision‐Making”
Xuemei Fang, Huayu Zhou, Song Chen
TLDR
A replication of Grimmelikhuijsen's study in a Chinese context reaffirmed its core finding that algorithmic explainability enhances public trust, demonstrating the potential of explainability to foster trust across cultural contexts, and further indicated that accessibility also remains important for fostering trust.
Abstract
With the advancement of artificial intelligence, algorithms are transforming the operations of the public sector. However, a lack of algorithmic transparency may result in issues such as algorithmic bias and accountability challenges, ultimately undermining public trust. Drawing on the principles of replication experiments and procedural justice theory, this study replicated Grimmelikhuijsen's experiment in a Chinese context. The replication reaffirmed Grimmelikhuijsen's core finding that algorithmic explainability enhances public trust, demonstrating its potential to foster trust across cultural contexts. Unlike in the original study, the results indicated that accessibility remains important for fostering trust. The impact of transparency also varied across decision contexts, with greater effects in high‐discretion situations. By replicating Grimmelikhuijsen's study, the current research not only provides new empirical support for procedural justice theory but also offers practical insights into configuring algorithmic transparency in a public administration context.
