On April 1, Professor Terry Flew of the University of Sydney, a Fellow of the Australian Academy of the Humanities, delivered a lecture titled "Artificial Intelligence and Communication: New Machines and New Concepts," discussing how AI is reshaping both the practice and the study of communication. The lecture was hosted by Associate Professor Huang Qing, Deputy Director of the International Communication Research Center at the College of Media and International Culture.
Terry Flew began by introducing the concept of trust through the 3I Model (Ideas, Interests, and Institutions) alongside a multi-layered framework of trust. Trust operates at three levels: micro (interpersonal trust), meso (institutional trust), and macro (generalized social trust). These intertwined layers shape social functioning and human relationships. He then elaborated on the social functions of trust: it acts as a social glue, reducing uncertainty and fostering cooperation, while its erosion can lead to societal fragmentation, political polarization, and the spread of misinformation.
So why discuss trust now? Flew argued that the question sits at the intersection of institutions, AI development, and emerging trust challenges. Institutions, including governments, corporations, and other organizations, serve as core vessels of trust, yet crises often arise from institutional failures or abuses of power. In today's smart society, AI further complicates trust dynamics, altering trust structures particularly in automated decision-making and information dissemination: overreliance on AI-generated content may diminish critical thinking; facial recognition technologies exhibit biases; smart city projects such as Alphabet's Sidewalk Toronto trigger privacy concerns; and AI-driven hiring or credit scoring risks hidden discrimination. Re-examining trust is therefore urgent.
From philosophy to sociology, trust has long been a central research theme. Flew systematically reviewed theories from economics, sociology, and other fields, distinguishing trust from confidence: trust requires not only a promise but also belief in the other party's capacity to fulfill it.
From a communication perspective, the media's role in trust remains debated, yet communication itself can bridge trust across levels. In the AI era, new spatiotemporal relationships demand new forms of trust to navigate uncertainty, and delegating trust to AI raises its own questions: what does it mean to trust, or distrust, an algorithm? Flew introduced the concept of mediated trust, shaped by three interconnected forces: institutions, technology, and communication. Here, communication is not merely a context for trust but actively shapes all three of its levels.
Digital transformation has produced new machines, no longer just output-driven tools but learning systems, that are reshaping communication and trust. Human-machine interaction spans three modes: human-to-human (technology-mediated), human-to-machine, and machine-to-machine. Flew stressed the need to trust technology as a mediator in order to achieve human trust, and to trust machines in order to harness their potential. Yet challenges such as bias, privacy, and misinformation persist, demanding clarity about how far we can trust these new machines. He proposed applying the 3I Model to analyze AI's present and future.
During the Q&A, Flew engaged with faculty and students on AI risk stratification, critical thinking, globalization, and trust.
In closing, Huang Qing noted that AI’s rise necessitates redefining human-machine relations and integrating machine intelligence with human wisdom. Trust, she emphasized, is not just a technical issue but a socio-cultural one requiring collective effort.