Report

How does the public feel about Artificial Intelligence (AI)?

The study examined the opinions, awareness, and experiences of AI technologies among UK adults aged 18 and above.

About the study

In November 2024, NatCen conducted a survey on behalf of the Ada Lovelace Institute and the Alan Turing Institute (Ada-Turing) using the probability-based NatCen Opinion Panel. The survey explored public perceptions of artificial intelligence (AI), including experiences of using it, views on its application across sectors such as policing, healthcare, and finance, and opinions on how AI should be regulated and governed.

Recognising that public perceptions of AI vary by application, this study focused on specific use cases to better capture attitudes. Unlike research based on broad definitions or singular examples, this survey allowed respondents to express both benefits and concerns across distinct AI applications.

This study builds on Ada-Turing’s 2022 study, offering richer insights into evolving public attitudes toward AI.

Key Findings

Awareness varies significantly across AI applications

  • Public awareness was high for prominent AI technologies such as driverless cars and facial recognition in policing, known by 93% and 90% of UK adults respectively.
  • In contrast, awareness of less visible applications was much lower. AI used to assess loan eligibility and robotic care assistants were each recognised by 24% of UK adults, while AI used for welfare assessments was the least recognised, at 18%.

Large Language Models (LLMs)

  • Three in five adults (61%) have heard of LLMs, such as ChatGPT, and two in five (40%) have used them.
  • Willingness to use LLMs varied by context and application. For example, amongst all UK adults there was greater acceptance for search and recommendation tasks (67%) than for support with job applications (53%). Acceptance also varied by key socio-demographics such as age and educational level.

AI technologies and concern levels

  • The study assessed the perceptions of benefits and risks of seven AI technologies: driverless cars, mental health chatbots, robotic care assistants, cancer risk assessment, facial recognition in policing, assessing loan repayment risk, and assessing welfare eligibility.
  • Since the 2022/23 data collection, concern about five of the six AI technologies examined in both waves has increased (all except driverless cars). The sharpest rise in concern is around AI used to assess welfare eligibility, increasing from 44% in 2022/23 to 59% in 2024/25.
  • Concerns about facial recognition in policing were higher among Black and Asian individuals compared to the general population (57%, 52%, and 39%, respectively).
  • People from lower socioeconomic backgrounds were less likely to trust AI systems and more likely to feel that AI reinforced existing inequalities.

AI governance and regulations

  • Seven in ten adults (72%) support laws regulating AI use, with many saying their comfort would increase if they could appeal AI decisions to a human (65%) and access information on how decisions are made (61%).
  • 58% of the public believed that an independent regulator should ensure AI is used safely. Preferences for who should hold this responsibility vary by age: younger adults (18-44) lean toward company-led oversight, while those aged 55 and over favour government or independent regulators.
  • When presented with a trade-off between explainability and accuracy of AI technologies, the British public prioritised explainability: understanding how AI-driven decisions are made is considered more important than achieving higher accuracy. Specifically, nearly a third of adults (31%) preferred humans to make and explain decisions, with this preference increasing with age.
  • Many also said they wanted humans, as well as machines, to remain accountable for decisions made using AI, especially in areas such as justice and welfare.

Methodology

Fieldwork period, sample, mode

Fieldwork for this study was conducted using the NatCen Opinion Panel – a random-probability panel of people recruited from high-quality random-probability studies such as the British Social Attitudes survey. The data was collected between 25th October and 24th November 2024 using a sequential mixed-mode design (web and telephone).

Survey flow

The survey began by gauging respondents’ confidence in using digital technologies. After viewing a definition of AI, they answered questions about its use in various scenarios – such as policing, assessing welfare eligibility, predicting the risk of developing cancer or of not repaying a loan, Large Language Models (LLMs), and chatbots. For each scenario, respondents were asked to self-assess their awareness of AI usage, its perceived benefits and concerns, and relevant impacts. The second section explored views on AI governance, decision-making, accountability, and comfort with AI in situations affecting them directly.

Response rate

Out of 5,650 panel members invited to take part, 3,513 completed the survey – yielding a 62% response rate. Among respondents, 3,291 completed the survey online and 222 by phone.
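As a quick arithmetic check, the figures above can be reproduced in a few lines (a minimal sketch; variable names are illustrative, not from the study's own materials):

```python
# Figures reported above: 5,650 invited; 3,291 web + 222 phone completes.
invited = 5_650
completed = 3_291 + 222          # 3,513 total completes
response_rate = completed / invited

print(f"{completed} completes, response rate {response_rate:.0%}")  # 62%
```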

Weighting

The data was weighted to be representative of the UK adult (18+) population.