Tracking shifts in public trust, use, and perceptions of AI
Although previous research has explored AI's impact in the private sector and some academic studies have focused on specific areas, this is the first university-led, nationally representative U.S. survey to measure AI's impact in this level of detail. It examines the influence of AI tools on the future of journalism, upcoming electoral campaigns, and younger generations' adoption of these technologies.
Wave 3 - August 2025
The third wave of the AI Global Public Opinion Tracker confirms and deepens earlier findings, now reinforced by other respected U.S. surveys. The data are gaining consistency, pointing to clear directions for the future of communication, journalism and the use of AI tools in the workplace.
AI awareness is rapidly expanding, and adoption is going mainstream: half of U.S. adults have used AI tools for study or work, and usage in communication tasks is growing quickly. Trust in AI ranks surprisingly high (above media and political parties) but concerns over mis/disinformation and inequality remain strong.
Public opinion is split: many see AI as productive and transformative, while others worry about risks to jobs, democracy and truth in news. Experience with AI matters more than demographics in shaping these views: those who use the tools are more optimistic, while those closer to disruption are more cautious.
For communication and journalism, the implications are direct. AI is becoming a standard tool in content creation and news production, but credibility depends on transparency, fact-checking and audience trust. The next generation of professionals must learn to balance innovation with responsibility, using AI both to enhance creativity and to protect democratic values. Download full results »
1. AI awareness is surging. Nearly half of Americans now say they’ve heard “a lot” or “a great deal” about AI, up from just over a third last year. ChatGPT name recognition is near universal. Younger, better-educated men still lead adoption, but attitudes and direct experience with AI weigh more than demographics in shaping opinions.
2. Communication use is rising fast. AI use for creating communication content jumped from 35% to 42% in eight months, driven by idea generation and summarizing. This growth follows a sharp increase in perceived productivity: 80% of users now say AI tools make them more productive, up 20 percentage points in a year.
3. AI adoption is going mainstream. Half of U.S. adults have used ChatGPT or similar tools for work or study, up from 43% in late 2024. A gender gap remains: men use AI more than women, but the difference is narrowing.
4. ChatGPT holds the lead. It’s still the most used AI assistant (83% of AI users), with Google Gemini and Microsoft Copilot gaining ground. The launch of DeepSeek has not significantly shifted these trends.
5. Trust in AI tools is high. In its first-ever measurement, confidence in AI tools ranked higher than confidence in political parties and media outlets. Similar patterns are emerging globally, according to ongoing UNESCO research. Trust is key: lack of it remains the top reason people avoid using ChatGPT.
6. Public opinion is divided, and shifting. Half see AI’s overall impact as positive, but negative views are up (28%, from 22%), fueled by mis/disinformation fears and the sense that AI content is increasingly indistinguishable from human work.
7. Job fears are evolving. Worries about AI replacing jobs are easing slightly, while more people now expect transformation or creation instead. Direct experience with AI strongly shapes these views.
8. Inequality fears remain high. 57% believe AI will widen the gap between tech-skilled and non-tech workers.
9. Views on regulation remain polarized, with sharp divides over government intervention.
10. Journalism inspires cautious optimism. More people now believe AI can improve news quality, but skepticism is strong - especially among those who can’t tell AI-generated from human-written stories.
11. Mis/disinformation concerns persist. 44% think AI will increase mis/disinformation, and only one-third believe it will reduce it. Linked to this, one in three say AI does more harm than good to democracy.
12. Machine-like AI wins on trust. Nearly half prefer AI that efficiently solves technical problems over AI designed to mimic human conversation and empathy.
For schools of communication and journalism, these findings signal a shift in how future professionals must be trained. AI is no longer a niche tool; it's moving into the mainstream of content creation, news production and everyday communication. With nearly half of Americans now familiar with AI, and usage rapidly increasing, tomorrow's communicators will be expected to work fluently with these tools, both for efficiency and creative advantage.
However, rising adoption comes with complex challenges.
Mis/disinformation fears are growing, and a significant share of the public can no longer tell AI-generated from human-created content. For journalists, this means trust and verification skills will become even more central. Fact-checking, transparency about AI use, and audience education will be essential to maintain credibility.
The data also show that attitudes and hands-on experience with AI shape perceptions more than age, gender or education. Researchers and students who actively experiment with AI will therefore be better positioned to understand its strengths, limitations and societal impacts, and those skills will set them apart in the job market.
Finally, the split between optimism and skepticism around AI in journalism highlights the need for critical literacy: future communicators must be both innovators and watchdogs, embracing AI’s potential while guarding against its risks. The next generation must graduate ready to navigate and lead in this rapidly evolving media landscape.
The University of South Carolina Global Public Opinion Tracker is produced in partnership with UNESCO. The third wave continues to track and analyze public perceptions, adoption patterns and societal impacts of artificial intelligence, building on insights from previous editions. This new release comes at a moment when interest in AI has not only remained strong but has accelerated, as reflected in global and U.S. search trends.
Google Trends data show a steady, sustained increase in searches for “artificial intelligence” and “ChatGPT” worldwide over the past five years, with particularly sharp growth since early 2023. ChatGPT, in particular, has emerged as the clear driver of public attention, surpassing “artificial intelligence” in search interest on several occasions and consistently outpacing other AI-related tools such as Gemini, Microsoft Copilot, and DeepSeek.
In the U.S., the trends reveal an even more striking shift: in 2025, for the first time in the past five years, public interest in ChatGPT surpassed interest in Facebook. This milestone underlines the pace and scale of change in the digital landscape, signaling how generative AI platforms are becoming part of everyday discourse. These dynamics frame the importance of the AI Global Public Opinion Tracker’s third wave: understanding how growing awareness and engagement with AI tools are influencing trust, perceived risks and expectations. With AI moving from a niche technological innovation to a mainstream topic of social, political and economic relevance, the need for longitudinal data and comparative analysis has never been greater. This wave provides an updated snapshot of a fast-evolving ecosystem - one in which AI is no longer just a tool, but a central actor in shaping communication and information flows.
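For readers who want to explore these search-interest patterns themselves, the short sketch below shows one way such a comparison could be reproduced with the unofficial pytrends Python library. This is an illustrative sketch only, not the tool or methodology behind the figures cited in this report.

```python
# Illustrative sketch (assumption): reproduces the kind of Google Trends
# comparison described above using the unofficial pytrends library
# (pip install pytrends). It is not the method used by the Tracker itself.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)

# Google Trends accepts up to five terms per request; "today 5-y" covers
# roughly the past five years of weekly data.
terms = ["artificial intelligence", "ChatGPT", "Facebook"]
pytrends.build_payload(terms, timeframe="today 5-y", geo="US")

# Relative search interest on a 0-100 scale, one column per term.
interest = pytrends.interest_over_time().drop(columns=["isPartial"])
print(interest.tail())

# Weeks in which U.S. search interest in ChatGPT exceeded Facebook.
crossover = interest[interest["ChatGPT"] > interest["Facebook"]]
print(crossover.index.min() if not crossover.empty else "no crossover yet")
```

Because Google Trends reports relative interest on a 0-100 scale within each query, a comparison like this indicates shifts in attention rather than absolute search volumes.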
Volume 2 - Winter 2024
This is a survey focused on measuring the use and perception of artificial intelligence (AI) tools within the United States. The survey explores various aspects of AI, including its impact on news consumption, social media engagement, and professional tasks related to communication. Download full results» (pdf)
Rising Public Interest in AI
Public discourse on AI has surged, with online interest in ChatGPT occasionally matching that of Trump during the election year. However, awareness of AI tools remains mixed: 57 percent of the public is acquainted with them to varying degrees, while one-third is highly familiar, a group made up predominantly of younger, better-educated, higher-income individuals. Gender disparities persist, with men being more familiar with AI tools. While over 40 percent have used such tools for work, study or both, non-users cite distrust as the primary barrier, rather than cost or complexity.
Shifting Popularity Among AI Tools
ChatGPT remains the leading AI tool but has seen competition from Gemini and Copilot. Over the last six months, these alternatives have grown significantly in adoption, collectively surpassing ChatGPT's usage, aided by their recent rebranding and updates.
Adoption of AI Tools for Content Creation
AI tools are used by 35 percent of the population for communication content creation, with significantly higher adoption in technical, business, and communication industries. Half of those in communication-related roles and 75 percent in IT and technical fields report regular use, compared to much lower adoption rates in sectors like manufacturing, agriculture, and transportation.
Decline in Job Security Concerns
Concerns about AI displacing jobs have dropped by 10 percentage points since June 2024, from over half to 42 percent. Among communication professionals, this fear is even lower, at 37 percent.
Mixed Sentiment on AI's Overall Impact
While AI's general impact is perceived positively, public expectations remain mixed. More people express concerns than excitement about AI's future. Nonetheless, AI-driven productivity gains are increasingly acknowledged.
Low Awareness of Ethical Challenges
Only one-third of the public surveyed is aware of ethical concerns related to AI tools. Most expect self-regulation rather than government intervention. However, communication professionals advocate for stronger government oversight.
Mixed Impact on Journalism
AI tools are expected to enhance journalism quality, particularly by educated, high-income, and tech-savvy individuals. But this belief is not widely shared across the public.
Mis/disinformation Fears Persist
Concerns about AI's potential to amplify mis/disinformation remain strong. Optimists believe AI could reduce disinformation, but a sizable portion of highly educated individuals remains apprehensive about its role in online manipulation.
Perception of Increased Disinformation in 2024 Elections
Over 60 percent of our respondents believe online disinformation was more prevalent in the 2024 elections in the U.S. One-third reported encountering AI-driven disinformation, such as deepfakes or bot-generated content, and a large majority suspect AI was used to spread disinformation. Similar trends have been observed in other countries, including Romania's 2024 presidential elections.
Influence on Political Campaigns
AI tools have played a notable role in the U.S. presidential campaign, with 25 percent of our respondents using them at least several times a week to understand political issues.
Polarization and Digital Tools
The U.S. remains deeply polarized, affecting digital tool usage. Republicans tend to rely on diverse social media platforms, while Democrats trust mainstream media and public institutions like universities and the government. This divide influences information sources but does not significantly affect attitudes toward AI.
Social Media Trends Post-Elections
Social media activity has decreased following the elections. By the end of the year, YouTube surpassed Facebook to become the leading platform for news consumption in the U.S., according to the survey results. Both platforms remain dominant, far ahead of others in terms of usage. The survey findings align with trends reported in other studies on media consumption in the United States.
The findings of this study underscore the urgent need to enhance AI literacy among younger generations, particularly as AI tools increasingly shape communication, work processes, and public discourse. A clear implication is the necessity for targeted educational initiatives that promote understanding of AI functionalities and their ethical implications, ensuring that individuals can use these tools effectively and responsibly. Building trust in AI tools must accompany this effort, focusing on improving transparency, highlighting practical benefits, and addressing concerns such as misinformation and privacy risks.
The study also signals the need to address AI regulation, particularly as ethical implications remain insufficiently understood by the broader public. While regulatory approaches differ globally (an external observation, not derived from the study), the findings highlight the importance of exploring frameworks that balance innovation with safeguards to mitigate misuse. This is particularly relevant for communication industries, where adoption rates are high, transforming how professionals operate and interact with information.
Beyond education and regulation, fostering critical thinking skills is essential to equip individuals to discern AI-generated content and identify potential biases or manipulations. Given the persistent fears about AI amplifying mis/disinformation, particularly in elections, media and communication professionals must play a pivotal role in setting standards for ethical AI integration. Furthermore, interdisciplinary collaborations between tech developers, educators, and policymakers can accelerate solutions to address these challenges.
In the context of the changing workforce, where AI tools enhance productivity but also disrupt existing job structures, initiatives to reskill workers can be aligned with efforts to develop new career paths centered on AI competencies. Overall, AI literacy is not just a technical necessity but an opportunity to prepare individuals for a future where human-AI interaction becomes the norm, ensuring trust, equity, and ethical usage.
This research initiative aims to understand the utilization and impact of large language models (LLMs) such as OpenAI's ChatGPT, Google's Gemini (formerly Bard), and other generative AI tools on content creation and communication practices in the United States. Contextual data from Google Trends indicates a consistent rise in public interest in AI technologies and tools like ChatGPT over recent years. Evaluating the societal and professional impact of these tools has become a priority for the College of Information and Communications at the University of South Carolina.
Through a biannual national survey, supplemented by social media listening in future phases, this project examines how individuals and organizations adopt AI for communication, study, and work. Future studies will deepen the understanding of how AI tools are integrated into everyday practices, track evolving trends in adoption across sectors, and analyze their long-term influence on communication strategies and professional activities. The College of Information and Communications remains committed to supporting a more informed, adaptive, and responsible approach to using AI in communication and beyond.
Volume 1 - Summer 2024
The full results are available upon request, and the University of South Carolina will repeat this survey biannually to provide an index on AI’s evolving impact. This survey explores the awareness, usage, and perception of artificial intelligence (AI) tools, specifically focusing on large language models (LLMs) like OpenAI's ChatGPT within the United States' communication landscape. Request full results»
- There is a generational divide in AI knowledge. 31 percent of respondents have little to no awareness of AI. Younger people (18-24) are more aware of AI tools like ChatGPT than older demographics.
- AI Usage in Professional and Academic Contexts: 38 percent of respondents use AI for work or study, with higher usage among younger people, those in the Western U.S., and higher-income groups. Major barriers to AI adoption include a lack of trust (46 percent) and insufficient skills (24 percent). Social media engagement positively correlates with AI usage, while older people and those with less education are less likely to use AI.
- AI tools are perceived to enhance productivity, and ChatGPT is by far the most well-known and widely used AI tool (compared with Google Gemini, Microsoft Copilot, or Claude).
- Significant ethical and privacy concerns exist. Only 27 percent are aware of AI ethical guidelines, and 12 percent report privacy concerns. Ethical concerns are higher among women and highly educated individuals.
- 46 percent believe AI has a positive impact on journalism, while 36 percent view it negatively, particularly due to concerns about misinformation. Trust in the press and universities correlates with positive views on AI’s role in journalism.
- 52 percent of Americans fear job losses due to AI, while 29 percent expect job transformation requiring new skills. Younger people are more optimistic about AI’s role in the job market, while older people are more skeptical.

One aspect of the study is establishing a baseline for the recognition and relevance of various AI assistants, particularly in the communication industry. The findings shed light on how these tools are currently perceived and utilized across professional fields.
The university plans to continuously measure these perceptions over time, identifying trends and shifts in public attitudes toward AI, as well as emerging fears regarding job automation.
In July 2024, the University of South Carolina conducted a large survey testing perceptions of AI usage in different contexts (1,061 respondents, CAWI method, via the Qualtrics platform). This survey was designed by a team of experts led by Dan Sultanescu, Ph.D., USC visiting Fulbright Scholar, and Linwan Wu, Ph.D., associate dean for research at the College of Information and Communications. Contributions were made by Randy Covington, Dana Sultanescu, Ph.D., and Andreea Stancea, Ph.D.
In addition to surveys, the university will employ alternative research methods through its Social Media Insights Lab, such as analyzing AI's dominance in online conversations, to offer comprehensive data to the media and academic researchers.