
At the 4th YVIP International Academic Conference held at Yonsei University in Seoul on Monday, Columbia University Professor Gita Johar argued that both insufficient and excessive trust in AI present significant challenges for its adoption.
Johar outlined several factors contributing to low trust in AI, including privacy concerns, unclear accountability for errors, algorithmic bias, and lack of transparency. She noted that in international surveys, trust in AI use remains below 50% in most countries, with exceptions such as China, India, Thailand, and Nigeria; South Korea shows around 40%.
Conversely, Johar noted a tendency towards excessive trust among actual AI users. She explained that chatbots’ polite responses often make users perceive them as “the friend you always wanted,” leading to unwarranted levels of trust due to these human-like qualities.
The professor warned about over-reliance on AI for sensitive issues such as academic counseling, relationship advice, and mental health guidance. Medical-related consultations account for 15-20% of AI interactions, with higher rates in countries like the U.S. where healthcare access is limited.
Experimental research corroborates these findings. When users share personal and emotional information with AI, such as dreams and feelings, it creates a “relational trust” that increases the likelihood of their following AI-generated advice. The moment users feel understood by AI, their trust levels surge dramatically.
Johar also addressed the impact of deepfakes on corporate trust, pointing to the recurring problem of manipulated videos of CEOs or fabricated claims about company performance spreading online. Misconceptions formed this way are difficult to correct, even with subsequent clarifications. Notably, she said that prior warnings and media literacy education have shown limited effectiveness in combating the problem.
While acknowledging AI’s immense potential benefits, Johar emphasized the growing threats posed by its misuse and argued that both a healthy level of trust and a certain degree of skepticism are necessary. She cautioned that overly hasty regulation could stifle innovation, but that delayed action might lead to irreversible damage, similar to the problems seen with social media. She stressed the importance of early, inclusive discussions on AI regulation involving both citizens and academia.