On the afternoon of April 2, the interdisciplinary roundtable "Artificial Intelligence and Social Trust" was held in Room 320 of the College. Participants included Professor Terry Flew of the University of Sydney, Fellow of the Australian Academy of the Humanities; Professor Liao Beishui, Qiushi Distinguished Professor and Director of the Institute of Logic and Cognition at Zhejiang University's School of Philosophy; and Professor Huang Yihui, Chair Professor and Head of the Department of Media and Communication at City University of Hong Kong, who joined online. The discussion was moderated by Professor Hong Yu, Deputy Dean and Tenured Professor of our college.
Professor Huang Yihui posited that trust serves as a critical mechanism driving public acceptance of AI. Higher levels of trust correlate with more positive attitudes toward AI and greater willingness to adopt it. She noted regional disparities in trust levels and ethical evaluations of AI, highlighting that survey respondents from Taiwan, China, exhibited significantly lower trust in AI technologies compared to their mainland counterparts. Understanding these differences, she argued, is essential for formulating culturally adaptive and ethically grounded AI governance policies.
From an interdisciplinary perspective bridging philosophy and computer science, Professor Liao Beishui emphasized challenges such as AI’s lack of explainability, ethical alignment issues, and the impact of cultural differences on ethical principles. He proposed developing explainable ethical AI models through computational logic and dialogue systems to enhance transparency in human-AI collaboration. Advocating for stronger accountability mechanisms via regulatory and technical measures, he called for cross-disciplinary cooperation to mitigate AI’s ethical risks.
Professor Terry Flew introduced a three-tiered framework of mediated trust—macro-level institutional trust, meso-level organizational trust, and micro-level individual trust—underscoring technology’s role in reshaping communication paradigms. Critiquing power imbalances in algorithmic governance (e.g., data monopolies and digital divides), he urged international collaboration to establish transparent and accountable regulatory frameworks. He warned against AI’s potential to exacerbate employment disparities and social inequality, stressing that AI must integrate human values rather than prioritize efficiency alone.
Professor Hong Yu analyzed the sociotechnical system attributes of AI infrastructure, using Hangzhou’s City Brain project as a case study. She emphasized China’s state-led public data governance model and its collective-interest-oriented approach to public trust, noting that Chinese citizens prioritize AI fairness over privacy concerns. She advocated for cross-national comparative studies on data ownership and algorithmic value differences, alongside critical education to deepen public understanding of technology.
Following the keynote remarks, participants engaged in lively discussion on several themes: key observations on social trust; shifts in trust driven by logic, economics, and politics; major threats and opportunities for social trust; and AI's contributions to the public. This dynamic exchange of ideas brought the roundtable to a productive close.
In his closing remarks, Professor Terry Flew described the event as a vibrant dialogue and expressed enthusiasm for future collaborations, stating, "It was a pleasure to participate in this roundtable, and I look forward to further exchanges and discussions ahead."