Artificial Intelligence (AI) is no longer the stuff of science fiction—it’s reshaping our lives, from the way we work to the way we access healthcare, shop, and learn. But as AI’s influence grows, so do questions about trust, ethics, and governance.

How do people around the world really feel about this technological revolution?

A landmark global study by The University of Queensland and KPMG set out to answer this question, surveying over 17,000 people across 17 countries to explore public trust, attitudes, and expectations toward AI.

This groundbreaking research dives into critical issues:

Who do we trust to develop, use, and govern AI?

What are the perceived benefits and risks?

How do people feel about AI in their workplaces, and what do they expect from its regulation?

The report not only reveals stark differences in trust and acceptance across countries, generations, and sectors but also charts how attitudes have evolved over time.

Most importantly, it offers a roadmap for organizations, governments, and societies to foster trust in AI, ensuring its development is both responsible and inclusive. Join us as we unpack the insights that are set to shape the future of AI—and our relationship with it.

A World Divided: Trust in AI Varies Globally

One of the standout findings is the stark divide in how people perceive AI. In Western countries like Australia, the UK, and the US, people are more skeptical, often questioning whether AI’s benefits truly outweigh its risks.

In contrast, emerging economies such as Brazil, India, China, and South Africa show greater optimism and trust in AI. Younger generations, university-educated individuals, and those in managerial roles also tend to embrace AI more readily.

Globally, about three in five people (61%) are ambivalent about or unwilling to trust AI systems.

Trust varies depending on how AI is used—people are more comfortable with AI in healthcare, for instance, than in human resources. While many have faith in AI’s capabilities and potential to help, concerns about safety, security, and fairness remain widespread.  

Key Takeaway: AI trust isn’t one-size-fits-all—it’s shaped by culture, context, and application.  

The Benefits vs. Risks Debate

Here’s the good news: 85% of people globally believe AI will deliver benefits, from improving healthcare to streamlining work. But here’s the catch—only half think those benefits outweigh the risks.

A whopping 73% are worried about risks such as cybersecurity threats (the top global concern), loss of privacy, job loss, manipulation, and even the erosion of human rights.

Countries like India and South Africa are particularly concerned about job loss and deskilling, while Japan worries most about system failures. These concerns highlight the urgent need for safeguards to ensure AI is used responsibly.  

Key Takeaway: People see AI’s potential, but they’re not blind to its pitfalls. Addressing risks is crucial to building trust.  

Who Do We Trust to Steer the AI Ship?

When it comes to developing, using, and governing AI, people place the most confidence in national universities, research institutions, and defense organizations (76–82%).

Governments and commercial organizations, however, face a trust deficit, with a third of people expressing low or no confidence in these entities. This is a red flag, especially as AI use by businesses and governments continues to grow.  

Key Takeaway: Trust in institutions matters. Universities and research bodies are seen as guardians of the public good, while governments and businesses have work to do to earn confidence.  

What Do People Want from AI Governance?

Here’s where the consensus is loud and clear: 97% of people globally view the principles of trustworthy AI—such as transparency, fairness, and accountability—as critical. Most (71%) believe AI regulation is necessary, yet only 39% think current laws and safeguards are adequate. People want independent oversight to ensure AI is safe and ethical, but they’re not convinced the systems in place are up to the task.  

Key Takeaway: The public expects strong governance and oversight to make AI trustworthy, but there’s a gap between expectations and reality.  

AI at Work: A Balancing Act

At work, people are generally comfortable with AI augmenting tasks and informing decisions—55% are on board, as long as humans remain in control. There's a caveat, though: people are wary of AI in human resources and people management. Interestingly, except in China and India, most believe AI will eliminate more jobs than it creates.

Key Takeaway: People want AI to enhance, not replace, human decision-making at work. Job security remains a major concern.  

The Knowledge Gap

While 82% of people have heard of AI, nearly half (49%) are unclear about how and when it's being used.

Even more surprising? 68% of people use common AI applications (think virtual assistants or recommendation algorithms), but 41% don't realize AI is behind them. Despite this, there's a strong appetite to learn more—82% want to deepen their understanding of AI.

Key Takeaway: Awareness is growing, but there’s still a significant knowledge gap. Education is key to empowering people to engage with AI confidently.  

AI’s Impact on Education and Learning

The findings of this study have profound implications for education and learning, an area where AI is already making waves—from personalized learning platforms to AI-driven tutoring systems.

The report’s insights into trust, benefits, risks, and the knowledge gap highlight both opportunities and challenges for AI in education.  

On the positive side, AI has the potential to revolutionize education by tailoring learning experiences to individual needs, automating administrative tasks for educators, and providing access to high-quality resources globally.

The report’s finding that 85% of people see AI’s benefits suggests optimism about its ability to enhance learning outcomes, especially in emerging economies where trust in AI is higher.

For instance, AI could help bridge educational gaps in countries like India and South Africa, where scalable, personalized learning solutions are in high demand.  

However, the report’s concerns about risks—such as privacy, bias, and deskilling—are particularly relevant to education.

Students, parents, and educators may worry about the security of sensitive student data or the potential for AI to perpetuate biases in grading or resource allocation.

The skepticism in Western countries, where only half believe AI’s benefits outweigh its risks, could slow the adoption of AI tools in schools and universities, especially if trust in commercial providers remains low.  

The knowledge gap is another critical issue. If nearly half of people are unclear about how and when AI is being used, that uncertainty could hinder its effective use in education.

Students and educators need to be equipped not only to use AI tools but also to critically evaluate them. The report’s finding that 82% of people want to learn more about AI underscores the urgent need for AI literacy programs in schools and beyond.

This is especially important given the generational divide—younger, university-educated individuals are more trusting of AI, suggesting that education systems have a role to play in shaping attitudes.  

Finally, the preference for human oversight in AI use, as highlighted in the workplace findings, extends to education.

People may be comfortable with AI supporting teachers—such as grading assignments or recommending resources—but less so with AI replacing human judgment in high-stakes decisions like student evaluations or career counseling.  

Key Takeaway: AI in education offers immense potential but must navigate trust, privacy, and equity concerns. Building AI literacy and ensuring human oversight will be critical to its success.  

The Four Pathways to Trust

The study identifies four evidence-based pathways to strengthen trust in AI:  

Institutional Pathway: Strong safeguards, regulations, and confidence in organizations to develop and govern AI. (This is the most influential driver of trust!)  

Motivational Pathway: Highlighting the benefits of AI to inspire optimism.  

Uncertainty Reduction Pathway: Addressing risks and concerns head-on.  

Knowledge Pathway: Improving public understanding of AI and its uses.

Key Takeaway: Trust isn’t accidental—it’s built through deliberate action across these pathways.  

How Have Attitudes Changed Over Time?

Looking at five Western countries (Australia, the UK, the US, Canada, and Germany) since 2020, trust in AI and awareness of its use have increased. However, concerns about inadequate regulation and a lack of confidence in the entities governing AI persist.

Key Takeaway: Progress is being made, but there’s still a long way to go to address public concerns.  

Spotlight on Australia

Australians, much like their counterparts in the UK, Canada, and France, are more fearful than excited about AI.

Less than half trust AI at work, and only a minority believe its benefits outweigh the risks. Trust is higher among younger generations (42% of Gen X and Millennials vs. 25% of older Australians) and the university-educated (42% vs. 27% of those without degrees). Interestingly, Australians and Japanese respondents show less interest in learning about AI than people in other countries.

Key Takeaway: Australia reflects broader Western skepticism, with a clear generational and educational divide.  

Why This Matters—and What’s Next

This study offers a roadmap for businesses, governments, and organizations to build trust in AI.

By focusing on strong governance, addressing risks, highlighting benefits, and improving public understanding, we can ensure AI is developed and used responsibly.

These insights are not just academic—they’re critical for shaping AI policies, standards, and practices worldwide.  

Posted Mar 10, 2025 in Digital Learning
