Citation

AI algorithm transparency, pipelines for trust not prisms: mitigating general negative attitudes and enhancing trust toward AI

Author:
Park, Keonyoung; Yoon, Ho Young
Publication:
Humanities and Social Sciences Communications
Year:
2025

This study explores artificial intelligence (AI) algorithm transparency as a means of mitigating negative attitudes toward AI and enhancing trust in AI systems and the companies that use them. Given the growing importance of generative AI such as ChatGPT in stakeholder communication, our research aims to understand how transparency can influence trust dynamics. In particular, we propose a shift from a reputation-focused prism model to a knowledge-centric pipeline model of AI trust, emphasizing transparency as a strategic tool to reduce uncertainty and enhance knowledge. To investigate this, we conducted an online experiment using a 2 (AI algorithm transparency: high vs. low) × 2 (issue involvement: high vs. low) between-subjects design. The results indicated that AI algorithm transparency significantly mitigated the negative relationship between a general negative attitude toward AI and trust in the parent company, particularly when issue involvement was high. This suggests that transparency, as both a technical feature and a communicative strategy, serves as an essential signal of trustworthiness and can reduce skepticism even among those predisposed to distrust AI. Our findings extend prior literature by demonstrating that transparency not only fosters understanding but also acts as a signaling mechanism for organizational accountability. This has practical implications for organizations integrating AI, offering a viable strategy for cultivating trust. By highlighting transparency’s role in trust-building, this research underscores its potential to enhance stakeholder confidence in AI systems and to support ethical AI integration across diverse contexts.
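
The abstract describes a moderation pattern: transparency (and, further, issue involvement) weakens the negative link between general attitudes toward AI and trust in the parent company. The sketch below is not the authors' analysis; it is a minimal illustration, on simulated data with hypothetical variable names (neg_attitude, transparency, involvement, trust), of how such a three-way interaction could be tested with an ordinary least squares model.

```python
# Minimal sketch (assumed, not the authors' code) of a moderation analysis
# matching the pattern reported in the abstract: does AI algorithm transparency
# buffer the negative effect of general negative attitudes on trust, especially
# when issue involvement is high?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

df = pd.DataFrame({
    # 2 x 2 between-subjects factors (0 = low, 1 = high)
    "transparency": rng.integers(0, 2, n),
    "involvement": rng.integers(0, 2, n),
    # General negative attitude toward AI (continuous predictor)
    "neg_attitude": rng.normal(0, 1, n),
})

# Simulated outcome: negative attitude lowers trust, but less so when
# transparency is high, and the buffering is strongest under high involvement.
df["trust"] = (
    4.0
    - 0.6 * df["neg_attitude"]
    + 0.4 * df["neg_attitude"] * df["transparency"]
    + 0.3 * df["neg_attitude"] * df["transparency"] * df["involvement"]
    + rng.normal(0, 1, n)
)

# Three-way interaction model; a positive coefficient for
# neg_attitude:transparency:involvement would mirror the reported result.
model = smf.ols("trust ~ neg_attitude * transparency * involvement", data=df).fit()
print(model.summary())
```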