BSA 42 | Politics and social media

Artificial Intelligence (AI) is not politically neutral. It triggers fundamental concerns about power, equity, and governance. As the UK Government seeks to increase the adoption of AI, it is important to understand public perceptions of these tools, and how those perceptions are shaped by people’s political attitudes. This report analyses how political orientation – measured along left/right and libertarian/authoritarian dimensions – shapes attitudes toward AI technologies and their regulation.
People’s perceptions of the benefits and disadvantages of different AI applications are linked to their political orientations.
Political orientation is related to the specific concerns people have about AI applications, and the particular benefits they perceive.
Support for AI regulation is broadly similar across the political spectrum.
Artificial intelligence (AI) technologies are permeating societies and economies, shaping the way that people interact with each other and with organisations of all kinds, and how they go about their daily lives. There is no single agreed definition of AI. Broadly, however, the term describes the use of computers and digital technology to perform complex tasks commonly thought to require human reasoning and logic. AI systems typically analyse large amounts of data to draw insights or identify patterns in pursuit of specific goals, and can sometimes take actions autonomously, that is, without human direction.
With the advent of OpenAI’s Large Language Model (LLM), ChatGPT, at the end of 2022, followed by a whole range of LLMs (so-called ‘generative AI’) from other companies, people now interact with AI directly. LLMs, such as ChatGPT, Gemini, Claude and Llama, are AI systems trained on text data that can generate natural-language-like responses to inputs or prompts (Council of the European Union, 2023). They can be used to find information or advice and to generate content in the form of images or text, opening up a whole new range of ways in which AI can have societal influence.
As governments around the world begin to develop and implement national AI strategies, it is important that we understand public attitudes toward these technologies. In the UK, the Government’s AI Opportunities Action Plan articulates a vision for developing and deploying AI to enhance public services, drive productivity gains and foster shared economic prosperity (Department of Science, Innovation and Technology, 2025). But whether and how these benefits are realised will depend, in part, on how people respond to these tools.
Consequently, attitudes to AI matter for at least three key reasons. First, understanding public hopes, concerns and experiences of AI technologies can inform the responsible design, development, and deployment of AI, helping AI developers to build the kind of AI-enabled society that people want. Second, achieving governmental aspirations for AI and its benefits for the economy and society will depend on AI being adopted, which in turn will be associated with how people feel about these technologies, and the extent to which they perceive benefits or harbour concerns. Third, as governments contend with regulating rapidly evolving AI technologies, understanding possible ideological differences over AI is important to ensure the viability and acceptability of proposed regulatory frameworks.
AI technologies trigger fundamental political concerns about power, equity, and governance. Some scholars argue that AI threatens both material interests – such as job security and economic stability – and deeper identity-based values by challenging human uniqueness and autonomy (Han et al., 2021). Media narratives often portray AI as creating clear ‘losers’, such as workers displaced by automation and marginalised communities affected by algorithmic bias, while highlighting the ‘winners’ who benefit from technological progress (Fast & Horvitz, 2017). Some commentators have argued that current AI implementations already reflect certain political attitudes more than others, by prioritising values of efficiency and economic growth over those of accountability and human rights (Neff, 2024).
These political dimensions to attitudes to AI are underscored by earlier research, which points to political orientation as a powerful predictor of people’s attitudes towards technological innovation and government regulation. From nuclear power to environmental technologies and genetic engineering, research consistently shows that political beliefs influence how people assess technological risks and benefits (O’Shaughnessy et al., 2023; König et al., 2023). For specific applications of AI, evidence suggests that there is a small, statistically significant relationship between political attitudes and preferences for the regulation and use of autonomous vehicles and AI assistants (Hemesath and Tepe, 2024; König et al., 2023; Mack et al., 2021). Research is also beginning to suggest that support for AI regulation is shaped more by political ideology than by self-interest or perceptions of societal benefits (O’Shaughnessy et al., 2023). Evidence from the United States (US) suggests that political conservatives tend to view AI as risky but oppose regulatory oversight (Castelo & Ward, 2021; O’Shaughnessy et al., 2023). However, there is a gap in understanding how these ideological dimensions shape attitudes to AI in the UK. Most studies on the relationship between political attitudes and attitudes to AI have been undertaken elsewhere, while UK-based surveys of public attitudes to AI (CDEI, 2022; Royal Society & Ipsos MORI, 2017) have largely overlooked political orientation as a potential influence on people’s attitudes.
How specifically might we expect attitudes to AI in the UK to be associated with political attitudes? We focus in this report on two possibilities – that those on the left view AI differently from those on the right, and that libertarians have a different perspective from those of a more authoritarian outlook.
Given the differences in their social priorities, we might expect people on the political left and those on the right to have different views about the main benefits and risks of AI. For instance, because of their focus on economic growth, people holding right-wing views may be more likely to engage with the supposed benefits of speed and efficiency that AI will bring; conversely, those with left-wing views may be more concerned about the risks of unequal impacts or people losing their jobs. Predicting the impact of these perceived benefits and concerns on overall attitudes may therefore be difficult, as it will depend on the strength of people’s opinions on the different elements. However, given that the benefits of AI are typically framed in terms that might appeal to the right, while the risks are framed in ways that might concern the left, might people on the right prove consistently more positive about AI than those on the left?
Meanwhile, people with a libertarian outlook may be more likely to view AI technologies positively, as innovations that could increase market competition and widen human choice, and to advocate freedom for people to make their own choices about how to use AI-enabled tools. At the same time, given their preference for minimal state intervention, we might expect libertarians to be less positive about the use of AI in the public sector, where they might see these technologies as allowing the state to become more heavy-handed and more likely to impinge on people’s personal freedoms. Authoritarians, meanwhile, who are traditionally more conservative in their outlook, may, on the one hand, be more concerned about the potentially socially disruptive nature of technological change, but, on the other, might see opportunities for enhancing the ability of the state to maintain social order. These potentially contrasting opinions again make it difficult to predict the impact on overall perceptions of AI; perhaps much depends on the use to which AI is put. Attitudes towards its use by, say, the police may not be the same as towards its deployment by the health service.
Views on governance and regulation should, in principle, be easier to predict. In general, people with right-wing attitudes tend to support deregulation, allowing the free market to flourish, encouraging competition and reaping economic rewards. Those with left-wing attitudes are more likely to emphasise the need for regulation in order to avoid potentially harmful effects on equality and jobs. However, might some of the concerns that people have about particular uses of AI override these more general stances?
This study aims to fill these gaps in our knowledge by analysing how political orientation is associated with attitudes towards AI and its governance in the UK. To do so, it draws on a new survey of attitudes to AI, undertaken as part of the NatCen Opinion Panel, which, in contrast to previous research, examines this relationship for a range of specific AI applications rather than just for AI in general.
In doing so, it will attempt to answer three distinct research questions. How do individuals with different political orientations perceive the benefits and concerns associated with different applications of AI? To what extent does political orientation shape perceptions of the risks and benefits of AI, such as its potential to increase efficiency, exacerbate discrimination, or replace jobs? And how does political orientation relate to views about AI governance and regulation? The report will conclude by discussing the policy implications of the answers obtained to these questions, and what they might mean for both societal adoption and the government’s future AI plans.
AI is becoming part of everyday life. When designed responsibly and safely, these technologies have the potential to improve people’s lives. However, concerns persist that AI could also exacerbate socioeconomic inequalities that can already be observed in the social and political landscape. To understand people’s perceptions of the costs and benefits of AI, we asked them about eight different AI technologies: facial recognition in policing, determining welfare eligibility, loan repayment risk, detecting the risk of cancer, mental health chatbots, large language models (LLMs), robotic care assistants, and driverless cars. For each application, we asked people whether they thought this particular use of AI would be beneficial, using a scale running from “very” or “fairly” beneficial to “not very” or “not at all” beneficial. We then also asked them about the extent to which they were concerned about this particular use of AI, using a scale running from “very” or “fairly” concerned, to “not very” or “not at all” concerned.
To compare people’s overall perceptions of benefit and concern, we calculated a ‘net benefit’ score for each AI application, by subtracting each respondent’s concern score from their benefit score, with a positive score indicating that perceived benefit outweighs concern, and a negative score suggesting the reverse.
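To make the arithmetic concrete, here is a minimal sketch of how such a net benefit score could be computed, assuming each four-point scale is coded from 0 (“not at all”) to 3 (“very”); the column names and coding scheme are illustrative assumptions, not those of the survey dataset itself.

```python
import pandas as pd

# Assumed coding of the two four-point scales: 0 = "not at all" ... 3 = "very".
BENEFIT_CODES = {"Not at all beneficial": 0, "Not very beneficial": 1,
                 "Fairly beneficial": 2, "Very beneficial": 3}
CONCERN_CODES = {"Not at all concerned": 0, "Not very concerned": 1,
                 "Fairly concerned": 2, "Very concerned": 3}

def net_benefit(df: pd.DataFrame, application: str) -> pd.Series:
    """Per-respondent net benefit score (-3 to +3) for one AI application."""
    benefit = df[f"{application}_benefit"].map(BENEFIT_CODES)
    concern = df[f"{application}_concern"].map(CONCERN_CODES)
    return benefit - concern  # positive: perceived benefit outweighs concern
```

Under this coding, a respondent who finds an application “very beneficial” (3) but is “fairly concerned” (2) receives a net benefit score of +1.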
As shown in Figure 1, for four of the uses of AI asked about, people record, on average, a positive net benefit score – namely assessing risk of cancer from a scan, facial recognition for policing, LLMs, and assessing loan repayment risk. For the other four uses, people are more negative, with concern outweighing perceived benefit: these were robotic care assistants, assessing welfare eligibility, mental health chatbots and driverless cars. The uses of AI perceived as having the greatest net benefit are facial recognition in policing and assessing the risk of cancer from scans, while the least beneficial are perceived to be mental health chatbots and driverless cars.
As we might expect, substantial differences in attitudes towards AI were found among different demographic groups, particularly where they might expect to be differentially affected by these technologies. Notably, 57% of people from a Black minority ethnic group are concerned about facial recognition for policing, compared with 39% of the public as a whole. This difference reflects growing evidence that this use of AI exhibits bias against people from minoritised ethnic groups (e.g. Buolamwini and Gebru, 2018; Leslie, 2020). People from low-income backgrounds are significantly more concerned about all of the uses of AI we asked about, and perceive fewer benefits, than those with higher incomes (see Modhvadia et al., 2025 for further details). These data point to potential inequities in how AI technologies are perceived and experienced by different sociodemographic groups.
But is political orientation associated with people’s views towards different AI technologies? As noted earlier, we suspect that the relationship between political orientation and people’s perceptions of AI will vary depending on the specific AI application being considered. For example, we might expect people with right-wing views to be more likely to support the use of AI for calculating eligibility for welfare payments, on the basis that automated rules may be more likely to be enforced. Those with left-wing views, in contrast, may be more concerned about the risk of inequitable decisions being made.
To understand these relationships, we examine whether political orientation, as measured by two Likert scales that are included as standard on the British Social Attitudes (BSA) survey and have also been asked of all members of the NatCen Opinion Panel, is related to perceptions of the benefits of AI applications. One of these scales identifies whether people are on the left or on the right; the other, whether they are libertarian or authoritarian in outlook. (Further details on the derivation of these scales are available in the Technical Details.) For the purpose of these analyses and those appearing later in the report, we divide respondents, first, into the one-third most ‘left-wing’ and the one-third most ‘right-wing’ and, second, into the one-third most libertarian and the one-third most authoritarian, in each case based on their scores on the relevant scale.
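As a rough sketch of this grouping step (the scale column names, and the assumption that higher scores mean more right-wing or more authoritarian, are hypothetical):

```python
import pandas as pd

def add_thirds(df: pd.DataFrame) -> pd.DataFrame:
    """Split respondents into equal thirds on each political orientation scale."""
    df = df.copy()
    # Assumed columns: higher "leftright" = more right-wing,
    # higher "libauth" = more authoritarian.
    df["leftright_third"] = pd.qcut(df["leftright"], q=3,
                                    labels=["left", "centre", "right"])
    df["libauth_third"] = pd.qcut(df["libauth"], q=3,
                                  labels=["libertarian", "middle", "authoritarian"])
    return df
```

The comparisons reported below then contrast the two outer thirds on each dimension, setting the middle group aside.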
Those with right-wing views are more likely than those with left-wing views to think the benefits of AI outweigh the concerns. Table 1 shows that those with right-wing views have net benefit scores that are consistently higher than those with left-wing views, in all cases except driverless cars. This difference is particularly pronounced for the use of facial recognition for policing and the use of AI to determine welfare eligibility.
People with right-wing views perceive positively some uses of AI that people with left-wing views perceive negatively overall – namely determining loan repayment risk, robotic care assistants, and determining welfare eligibility. Looking at the benefit and concern scores separately suggests that these differences result from the fact that those with left-wing views report higher levels of concern across most technologies, compared with people with right-wing views, while the two groups’ perceptions of benefit are more similar. For example, while 36% and 35% of people with left-wing and right-wing views respectively report mental health chatbots to be beneficial, 68% of people with left-wing views say they are concerned by this use of the technology, compared with 59% of people with right-wing views.
Table 1 Net benefit scores for uses of AI, by left-right political orientation

| AI use | Left | Right | Difference |
|---|---|---|---|
| Cancer risk | 1.3 | 1.4 | +0.1 |
| Facial recognition in policing | 0.8 | 1.5 | +0.7 |
| Large language models | 0.3 | 0.4 | +0.1 |
| Loan repayment risk | -0.1 | 0.4 | +0.5 |
| Robotic care assistants | -0.1 | 0.0 | +0.1 |
| Welfare eligibility | -0.6 | 0.2 | +0.8 |
| Mental health chatbot | -0.7 | -0.4 | +0.3 |
| Driverless cars | -0.7 | -0.7 | 0.0 |
Note: Positive scores indicate perceptions of benefit outweigh concerns while negative scores indicate concerns outweigh benefits. Scores can range from -3 to +3.
Unweighted bases can be found in Appendix Table A.1 of this chapter.
There is less of a consistent difference between the scores of those with libertarian views and those with an authoritarian outlook, with the differences not always running in the same direction. That said, Table 2 shows that people with authoritarian views feel the benefits of AI outweigh their concerns for five uses – facial recognition for policing, assessing risk of cancer, LLMs, assessing loan repayment risk and assessing welfare eligibility. Their net benefit score is particularly high for the use of facial recognition in policing, especially when compared with those with libertarian views. These data align with previous research, which finds that the use of AI for facial recognition in policing is particularly likely to appeal to people with authoritarian views (Peng, 2023). Meanwhile, libertarians have more positive net benefit scores than authoritarians for the majority of private sector AI applications, such as robotic care assistants and driverless cars, perhaps reflecting their view of AI as potentially increasing human choice by widening the range of options for undertaking various tasks.
The difference in attitudes between these two groups is also notable in relation to the use of AI to assess welfare eligibility, where those with libertarian views, unlike those with an authoritarian outlook, feel the concerns around this technology outweigh the potential benefits. This may reflect their worry that the use of AI in the public sector opens the way to more heavy-handed state intervention.
Table 2 Net benefit scores for uses of AI, by libertarian-authoritarian orientation

| AI use | Libertarian | Authoritarian | Difference |
|---|---|---|---|
| Cancer risk | 1.4 | 1.4 | 0.0 |
| Facial recognition in policing | 0.7 | 1.6 | +0.9 |
| Large language models | 0.2 | 0.4 | +0.2 |
| Loan repayment risk | 0.0 | 0.3 | +0.3 |
| Robotic care assistants | 0.1 | -0.2 | -0.3 |
| Welfare eligibility | -0.5 | 0.2 | +0.7 |
| Mental health chatbot | -0.6 | -0.5 | +0.1 |
| Driverless cars | -0.4 | -0.9 | -0.5 |
Note: Positive scores indicate perceptions of benefit outweigh concerns while negative scores indicate concerns outweigh benefits. Scores can range from -3 to +3.
Unweighted bases can be found in Appendix Table A.2 of this chapter.
To better understand the relationship between political orientation and net benefit scores (whether benefits outweigh concerns, or vice versa), we conducted a multivariate analysis (linear regression) to assess the extent to which net benefit scores are associated with political orientation once a number of demographic characteristics have been controlled for – namely ethnicity, digital skills, income, age and education. Previous analysis of these data highlighted that ethnicity, digital skills and income are associated with overall attitudes to AI (Modhvadia et al., 2025). We also anticipated that age and education may be linked: studies suggest older people are more likely to reject new technologies, feeling they are not useful in their personal lives (Zhang, 2023), while those with higher levels of education may have higher levels of digital literacy and openness to new technologies.
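In outline, each of the eight models looks something like the sketch below, which uses statsmodels; all variable names are illustrative stand-ins for the survey measures rather than the dataset’s actual columns.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_net_benefit_model(df: pd.DataFrame):
    """OLS model for one AI application: net benefit score regressed on
    political orientation plus the demographic controls named in the text."""
    return smf.ols(
        "net_benefit ~ leftright + libauth + C(ethnicity)"
        " + has_digital_skills + C(income_band) + C(age_band) + has_degree",
        data=df,
    ).fit()  # .summary() then shows each association net of the controls
```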
The results of our analysis are presented in the appendix (Table A.3). They show that for the majority of uses of AI, political orientation remains significantly associated with perceptions of net benefit, even once the relationships between attitudes to AI and these demographic variables have been controlled for. The net benefit scores of people with more right-wing views are significantly higher for nearly all of our AI applications. The only exception is driverless cars, the application that is most negatively perceived by all of our respondents. The strength of these relationships is, however, relatively low. Similarly, people with authoritarian views have significantly higher net benefit scores for facial recognition in policing, the use of AI in determining welfare benefits, the use of AI in determining loan repayment risk, LLMs and mental health chatbots, even once the relationships with other demographic variables have been controlled for. The only instance where people with authoritarian views have significantly lower net benefit scores, compared with those holding libertarian views, is in relation to driverless cars. However, again, the strength of these relationships is variable. It is strongest for facial recognition in policing and weakest for mental health chatbots. These findings suggest that political orientation is associated with attitudes to AI, even when other demographic differences have been controlled for, but that the magnitude of this association depends on the use to which AI is applied.
In terms of our control variables, ethnicity, digital skills, income and age were found to be associated with how people view each use of AI. Black and Asian people are less likely to perceive facial recognition in policing as beneficial, while they are more likely to see benefits for LLMs and mental health chatbots. Those with higher digital skills are generally more positive about most of the applications of AI, with this association being strongest in the case of robotic care assistants. Having a higher income is related to more positive perceptions of all of the AI uses, while older people (aged 55 years and over) are more positive about the use of AI in health diagnostics (detecting cancer risk) and justice (facial recognition in policing) but are more negative about LLMs and robotic care assistants.
The net benefit scores discussed so far provide a summary measure of the balance of benefit and concern for eight different applications of AI. To understand the reasons for these assessments, in each case we asked respondents to identify from a list the specific benefits and concerns they associate with each AI technology. For example, for facial recognition in policing, we provided the following list of possible benefits:
Make it faster and easier to identify wanted criminals and missing persons
Be more accurate than the police at identifying wanted criminals and missing persons
Be less likely than the police to discriminate against some groups of people in society when identifying criminal suspects
Save money usually spent on human resources
Make personal information more safe and secure
Our list of possible concerns that people might have about the same AI application were as follows:
Cause delays in identifying wanted criminals and missing persons
Be less accurate than the police at identifying wanted criminals and missing persons
Be more likely than the police to discriminate against some groups of people in society
Lead to innocent people being wrongly accused if it makes a mistake
Make it difficult to determine who is responsible if a mistake is made
Gather personal information which could be shared with third parties
Make personal information less safe and secure
Lead to job cuts (for example, for trained police officers and staff)
Cause the police to rely too heavily on it rather than their professional judgements
While each list was tailored to the specific technology being asked about, the benefits and concerns included in each list had common themes (such as efficiency and bias). Respondents were able to select as many options from each list as they felt applied, as well as “something else”, “none of the above” and “don’t know”.
Across all of our respondents, the most commonly selected benefit for each use of AI related to economic efficiency and/or speed of operation, while the most commonly selected concerns were about over-reliance and inaccuracy. For example, in the case of facial recognition technology in policing, 89% feel that faster identification of wanted criminals and missing persons is a potential benefit, while 57% think that over-reliance on this technology is a concern. (Further details of these results are available in Modhvadia et al., 2025.)
But how does political orientation shape these views? We found that people across the political spectrum tend to highlight similar types of benefits and concerns – but that the degree to which they do so varies. The next sections focus on four specific themes: speed (i.e. completing tasks faster than humans), inaccuracy, job displacement, and discrimination. These themes reflect broader concerns about efficiency and fairness – areas where political orientation is especially likely to influence attitudes, as discussed in the Introduction. As before, to analyse these differences, we have divided people into three equally-sized groups along the two ideological dimensions and compare the results for the two groups at each end.
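Computationally, the comparisons in Tables 3 to 10 are simple cross-tabulations: the percentage of each outer third selecting a given item. A minimal sketch, assuming a hypothetical boolean column recording whether a respondent ticked the item:

```python
import pandas as pd

def pct_selecting(df: pd.DataFrame, item: str, group: str) -> pd.Series:
    """Percentage of each political group selecting a benefit or concern.

    `item` is an assumed boolean column (True if the option was ticked);
    `group` is a grouping column such as "leftright_third".
    """
    return (df.groupby(group)[item].mean() * 100).round(0)
```

The published figures are weighted, so a faithful replication would apply the survey weights rather than this unweighted mean.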
We found some support for the theory, set out previously, that those with right-wing views might be more likely to value the economic efficiency that might be delivered by AI. Improving the speed and efficiency of services was more commonly selected as an advantage by those with more right-wing views than those with more left-wing views in the case of determining eligibility for welfare benefits like Universal Credit, and using AI for determining an individual’s risk level for repaying a loan. As shown in Table 3, 55% of those with right-wing views select this benefit for determining welfare eligibility, compared with 49% of those with left-wing views, and 61% select the same benefit for loan repayment risk, compared with 56% of those with left-wing views. However, these differences are small and only apparent in uses of AI that relate to the distribution of financial resources.
Table 3 Percentage seeing benefits related to speed and efficiency, by left-right political orientation

| AI use | Left | Right | Difference |
|---|---|---|---|
| Cancer risk | 85 | 85 | 0 |
| Facial recognition in policing | 87 | 90 | +3 |
| Large language models | 57 | 56 | -1 |
| Loan repayment risk | 56 | 61 | +5 |
| Robotic care assistants | 50 | 48 | -2 |
| Welfare eligibility | 49 | 55 | +6 |
| Mental health chatbot | 52 | 50 | -2 |
| Driverless cars | 35 | 30 | -5 |
| Unweighted base | 1079 | 1078 | |
Differences between those with authoritarian views and those with a libertarian outlook in their beliefs about the potential for AI to improve speed and efficiency are more prominent. As shown in Table 4, those with libertarian views tend to be more likely to see speed and efficiency as key benefits of most AI applications, perhaps seeing possibilities for the opening up of human choice and market competition from AI innovations. For example, 62% of those with libertarian views select this benefit for large language models, compared with only 50% of those with authoritarian views. The only exception to this pattern is in relation to facial recognition for policing, where 91% of those with authoritarian views feel efficiency to be a key benefit, compared with 86% of those with libertarian views. This may be because, as compared with those with libertarian views, those with an authoritarian outlook are more positive about the use of facial recognition in policing irrespective of how it is undertaken. In contrast, the low figure of 25% for those with authoritarian views seeing efficiency gains from driverless cars (compared with 40% of those with libertarian views) may reflect a sense of the possible legal issues and potential chaos that could result from this (as yet untested in a UK setting) AI innovation on Britain’s roads.
Table 4 Percentage seeing benefits related to speed and efficiency, by libertarian-authoritarian orientation

| AI use | Libertarian | Authoritarian | Difference |
|---|---|---|---|
| Cancer risk | 86 | 82 | -4 |
| Facial recognition in policing | 86 | 91 | +5 |
| Large language models | 62 | 50 | -12 |
| Loan repayment risk | 60 | 55 | -5 |
| Robotic care assistants | 54 | 42 | -12 |
| Welfare eligibility | 55 | 52 | -3 |
| Mental health chatbot | 58 | 46 | -12 |
| Driverless cars | 40 | 25 | -15 |
| Unweighted base | 1082 | 1081 | |
As shown in Table 5, those with left-wing views are generally more worried than those with right-wing views about inaccuracy and inequity, although this difference is more pronounced for some uses of AI than for others. Most markedly, 63% of those with left-wing views are concerned that facial recognition in policing could lead to false accusations, whereas only 45% of those with right-wing views express this concern. People with left-wing views are also markedly more worried about inaccuracy in relation to welfare eligibility and loan repayment.
Table 5 Percentage with concerns related to inaccuracy, by left-right political orientation

| AI use | Left | Right | Difference |
|---|---|---|---|
| Cancer risk | 25 | 23 | -2 |
| Facial recognition in policing | 63 | 45 | -18 |
| Loan repayment risk | 30 | 22 | -8 |
| Robotic care assistants | 44 | 41 | -3 |
| Welfare eligibility | 43 | 28 | -15 |
| Mental health chatbot | 51 | 46 | -5 |
| Driverless cars | 46 | 40 | -6 |
| Unweighted base | 1079 | 1078 | |
Note: Inaccuracy concerns were not in the selection list for LLMs
Similarly, Table 6 shows that 23% of those with left-wing views are worried about discriminatory outcomes in the use of AI to determine welfare eligibility, compared with just 8% of those with right-wing views. Even for the application of AI in cancer risk assessment, a use that is consistently positively viewed across those with different political orientations, 27% of those with left-wing views are concerned about the technology being less effective for some groups of society, leading to discrimination in healthcare. The comparable figure is 17% for those with right-wing views.
Table 6 Percentage with concerns related to discriminatory outcomes, by left-right political orientation

| AI use | Left | Right | Difference |
|---|---|---|---|
| Cancer risk | 27 | 17 | -10 |
| Facial recognition in policing | 24 | 9 | -15 |
| Loan repayment risk | 24 | 13 | -11 |
| Robotic care assistants | 27 | 23 | -4 |
| Welfare eligibility | 23 | 8 | -15 |
| Mental health chatbot | 16 | 8 | -8 |
| Driverless cars | 28 | 23 | -5 |
| Unweighted base | 1079 | 1078 | |
Note: Discriminatory concerns were not in the selection list for LLMs
Research suggests that people who hold more authoritarian views are less likely to be concerned about discrimination or fairness (Curtice, 2024), leading us to anticipate that they will also be less concerned about the impact that AI technologies might have on minority groups. Our data support this expectation. As shown in Table 7, for most applications of AI, those with libertarian views appear to be more concerned than those with an authoritarian outlook about discrimination. For example, 25% of those with libertarian views express concern that facial recognition in policing may discriminate against certain groups, compared with 9% of those holding authoritarian views. A similar pattern can be found in attitudes towards the use of AI for detecting the risk of cancer; 29% of those holding libertarian views worry about it leading to health inequalities, compared with 15% of those with authoritarian views.
Table 7 Percentage with concerns related to discriminatory outcomes, by libertarian-authoritarian orientation

| AI use | Libertarian | Authoritarian | Difference |
|---|---|---|---|
| Cancer risk | 29 | 15 | -14 |
| Facial recognition in policing | 25 | 9 | -16 |
| Loan repayment risk | 20 | 14 | -6 |
| Robotic care assistants | 26 | 26 | 0 |
| Welfare eligibility | 18 | 11 | -7 |
| Mental health chatbot | 15 | 9 | -6 |
| Driverless cars | 26 | 26 | 0 |
| Unweighted base | 1082 | 1081 | |
Note: Discriminatory concerns were not in the selection list for LLMs
In contrast, as shown in Table 8, worries about inaccuracy appear to depend much more on the specific application of AI being considered than on people’s libertarian-authoritarian orientation. That said, 61% of those holding libertarian views – but only 47% of authoritarians – are worried about false accusations arising from facial recognition. Meanwhile, 39% of those holding libertarian views are worried that using AI to determine welfare eligibility may be less accurate than relying on professionals, compared with 31% of those holding authoritarian views. However, the inverse pattern is found in the case of robotic care assistants.
Table 8 Percentage with concerns related to inaccuracy, by libertarian-authoritarian orientation

| AI use | Libertarian | Authoritarian | Difference |
|---|---|---|---|
| Cancer risk | 20 | 27 | +7 |
| Facial recognition in policing | 61 | 47 | -14 |
| Loan repayment risk | 24 | 26 | +2 |
| Robotic care assistants | 39 | 47 | +8 |
| Welfare eligibility | 39 | 31 | -8 |
| Mental health chatbot | 51 | 46 | -5 |
| Driverless cars | 39 | 45 | +6 |
| Unweighted base | 1082 | 1081 | |
Note: Inaccuracy concerns were not in the selection list for LLMs
For all the AI applications, those with left-wing views are more concerned than those with right-wing views about potential job losses. This is consistent with existing research, which posits that left-wing individuals are more likely to express concerns about job displacement and increasing social inequality (Curtice, 2024). Table 9 shows that this concern is particularly high for robotic care assistants (where 62% of those on the left are worried about job loss, compared with 44% of those on the right) and driverless cars (where the comparable figures are 60% and 47%).
Table 9 Percentage with concerns related to job loss, by left-right political orientation

| AI use | Left | Right | Difference |
|---|---|---|---|
| Facial recognition in policing | 46 | 37 | -9 |
| Large language models | 48 | 37 | -11 |
| Loan repayment risk | 46 | 37 | -9 |
| Robotic care assistants | 62 | 44 | -18 |
| Welfare eligibility | 50 | 38 | -12 |
| Mental health chatbot | 47 | 32 | -15 |
| Driverless cars | 60 | 47 | -13 |
| Unweighted base | 1079 | 1078 | |
Note: Job loss concern not in selection list for cancer risk detection
Again, as shown in Table 10, the extent to which libertarians differ from authoritarians in their level of concern about job losses depends on the use to which AI is being put. More people with authoritarian views are worried in the case of facial recognition in policing (44%, compared with 38% of those with libertarian views) while more people with libertarian views are worried in relation to general-purpose LLMs (46%, compared with 39% of people with authoritarian views). For other applications of AI, levels of concern about job losses are largely similar, irrespective of whether someone holds authoritarian or libertarian views.
Table 10 Percentage with concerns related to job loss, by libertarian-authoritarian orientation

| AI use | Libertarian | Authoritarian | Difference |
|---|---|---|---|
| Facial recognition in policing | 38 | 44 | +6 |
| Large language models | 46 | 39 | -7 |
| Loan repayment risk | 41 | 46 | +5 |
| Robotic care assistants | 52 | 54 | +2 |
| Welfare eligibility | 44 | 46 | +2 |
| Mental health chatbot | 42 | 41 | -1 |
| Driverless cars | 53 | 55 | +2 |
| Unweighted base | 1082 | 1081 | |
Note: Job loss concern not in selection list for cancer risk detection
Taken together, these findings show that political orientation is linked to particular beliefs about the key advantages and disadvantages of AI. In general, people who are left-wing are more concerned than those with right-wing views about inaccuracy, discrimination and job loss, perhaps reflecting a broader concern they may have that AI technologies exacerbate inequalities in society. People with libertarian views, more so than people with authoritarian views, appear to be concerned about discrimination for most applications of AI, while at the same time showing more optimism about the potential speed and efficiency benefits that might come with these tools.
However, these findings also indicate that people’s attitudes towards AI, and the relationship of those attitudes with political orientation, depend on the particular use to which the technology is put. For instance, the greater popularity of the use of facial recognition in policing among authoritarians translates into greater enthusiasm for the various potential advantages that AI is thought to bring to this task. Similarly, the different attitudes of people with libertarian and authoritarian views towards the efficiency benefits of driverless cars may arise because libertarians’ more positive view of the technology in general – as an AI innovation which opens up new possibilities for human choice, in this case of transport options – leads them to perceive driverless cars as more efficient, while authoritarians’ more negative views lead them to see fewer efficiency gains. Overall, individual buy-in for specific applications of AI is likely to shape assessments of the potential benefits and risks of that application.
We have established, then, that political orientation shapes attitudes towards AI. These patterns, along with the common concerns and benefits that people associate with AI, offer important clues about how different groups might want these technologies to be governed. Previous research has found that people who are left-wing are generally more likely to support greater state intervention in the economy and stricter regulation of AI technologies (König et al., 2023). In contrast, right-wing individuals may oppose regulatory overreach, prioritising market freedom and economic growth achieved through AI-driven innovation. In this final section, we assess how political views relate to attitudes towards AI regulation. We measured preferences for regulation by asking respondents what would make them more comfortable with AI technologies being used, providing them with the following options:
Clear explanations of how AI systems work and make decisions in general
Specific, clear information on how AI systems made a decision about you
More human involvement and control in AI decisions
Clear procedures in place for appealing to a human specialist against a decision made by AI
Assurance that the AI has been deemed acceptable by a government regulator
Laws and regulations that prohibit certain uses of technologies, and guide the use of all AI technologies
People’s personal information is kept safe and secure
The AI technology is regularly evaluated to ensure it does not discriminate against particular groups of people
Respondents were able to select as many options as they liked from the list of measures that could increase their comfort with AI technologies. Overall, a substantial majority of the public – 72% – think that laws and regulations would make them feel more comfortable with AI technologies, up from 62% in 2023 (Modhvadia et al., 2025). This increased demand for regulation is worthy of note, especially given that the UK is yet to introduce a comprehensive legal framework for AI. For this reason, in Tables 11 and 12, we focus on how political orientation relates to people selecting either “laws and regulations” or “assurance that the AI has been deemed acceptable by a government regulator” as measures that would increase their comfort with AI being used.
Support for regulation is consistently high across both the left-right and authoritarian-libertarian dimensions. Table 11 shows that over half of both those holding right-wing and left-wing views feel assurance by a government regulator would make them more comfortable with AI. Even higher proportions of people feel laws and regulations that prohibit certain uses would make them more comfortable with AI: this is the case for 70% of those with right-wing views and 76% of those with left-wing views. Meanwhile, Table 12 shows that tighter regulation is also popular among both libertarians and authoritarians.
Table 11 Measures that would increase comfort with AI, by left-right political orientation

| What would make you more comfortable with AI technologies being used? | Left (%) | Right (%) |
|---|---|---|
| Assurance that the AI has been deemed acceptable by a government regulator | 58 | 55 |
| Laws and regulations that prohibit certain uses of technologies, and guide the use of all AI technologies | 76 | 70 |
| Unweighted base | 1079 | 1078 |
Respondents who did not answer our questions about political orientation, or answered with “don’t know”, are not included in this table
Table 12 Measures that would increase comfort with AI, by libertarian-authoritarian orientation

| What would make you more comfortable with AI technologies being used? | Libertarian (%) | Authoritarian (%) |
|---|---|---|
| Assurance that the AI has been deemed acceptable by a government regulator | 58 | 54 |
| Laws and regulations that prohibit certain uses of technologies, and guide the use of all AI technologies | 77 | 67 |
| Unweighted base | 1079 | 1078 |
Still, people on the right and authoritarians are a little less likely than those on the left and libertarians to say that government assurance and regulation would make them feel more comfortable about AI. To examine whether these small differences remain significant once their associations with other characteristics are controlled for, we conducted a multivariate analysis (logistic regression) with political orientation and key demographic characteristics (ethnicity, digital skills, income, age and education) included as predictors of attitudes to AI regulation. These characteristics were chosen either because we have previously identified them as related to attitudes to AI (ethnicity, income and digital skills; Modhvadia et al., 2025) or because we anticipate they may relate to engagement and preferences around new technologies (age and education). The results of this model are presented in the appendix (Table A.4).
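The logistic model mirrors the linear one sketched earlier; here is a sketch under the same assumptions, with “selected_regulation” a hypothetical 0/1 indicator of choosing the regulation option:

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_regulation_model(df: pd.DataFrame):
    """Logistic regression for one regulation outcome, regressed on political
    orientation plus the demographic controls named in the text."""
    return smf.logit(
        "selected_regulation ~ leftright + libauth + C(ethnicity)"
        " + has_digital_skills + C(income_band) + C(age_band) + has_degree",
        data=df,
    ).fit()
```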
In three out of four instances, this analysis indicates that the differences, though small, are statistically significant. Those on the right are less likely than those on the left to say that either government assurance or regulation would make them feel more comfortable about AI, while authoritarians are less likely than libertarians to say the same of regulation. Other characteristics, in particular having digital skills and a higher household income, appear to relate more strongly to preferences for regulation than does political orientation.
In this report, we have investigated the relationship between political orientation and public perceptions of AI technologies and their regulation. As we expected, the findings reveal a significant correlation between political orientation and the perceived benefits of, and concerns about, a wide range of AI applications. Those with right-wing views are more positive than those with left-wing views about nearly all the uses of AI asked about, a pattern which held true even when people’s demographic characteristics were controlled for. The difference in attitudes between people with left-wing and right-wing views is most pronounced in the case of facial recognition for policing and the use of AI for assessing eligibility for welfare. The greater concern among those with more left-wing views may be occasioned by worries about how these technologies might have a negative impact on equity and fairness: those with left-wing views are more likely to report worries about inaccuracy, discrimination and job losses.
Where people stand on the authoritarian-libertarian dimension is also associated with their attitudes to the uses of AI. Those holding authoritarian views are more positive than those with libertarian views about several applications of AI. Specifically, those with authoritarian views are more likely to perceive facial recognition technologies in policing as beneficial, suggesting they may be more likely to perceive AI surveillance technologies more broadly as beneficial too. This is likely to reflect their preference for security and social order, where AI is viewed as an instrument to enhance these objectives. Conversely, people with libertarian views express heightened concerns regarding the potential for discriminatory outcomes from facial recognition technology, an outlook that aligns with their emphasis on individual autonomy and rights. They are also more likely than people with authoritarian views to have concerns about possible discrimination by other AI applications, such as in their use to predict cancer risk, provide mental health chatbots, and assess both welfare eligibility and the likelihood that someone would repay a loan.
Three of these last four applications (the exception is loan repayment) constitute the examples of the use of AI by the public sector covered by our survey. Our findings suggest that attitudes towards public sector applications, which impact people’s lives and liberty, may be more divisive between people of different political orientations than are applications of AI provided by private sector companies for consumers. Certainly, facial recognition in policing and the use of AI to determine welfare eligibility appear to be two particularly politically salient applications of AI, where there is much debate over fairness, accuracy and equity. In contrast, private sector consumer applications of AI, such as driverless cars (albeit universally regarded negatively) and LLMs (viewed positively), seem to be viewed in a similar fashion irrespective of people’s political orientation.
However, contrary to our expectations, we did not find a strong relationship between political orientation and preference for the regulation of AI. Irrespective of political orientation, we found that seven in 10 people feel laws and regulations would make them more comfortable with AI. And although support for regulation is somewhat lower among those who hold right-wing or authoritarian views, the difference is marginal. Instead, socio-economic factors such as income and digital skills appear to serve as more robust predictors of attitudes to AI regulation.
These findings are important for three key reasons. First, as the UK government seeks to increase the use of AI, describing AI as “a golden opportunity…an opportunity we are determined to seize” (UK Government, 2025), they will need to understand people’s hopes and fears. Our findings offer an understanding of the perceptions of the technology held by different groups, as well as their likelihood of adopting AI applications in the future. They provide policymakers with insight as to how they can encourage public acceptance of AI, and the benefits that they should highlight for their message to resonate with different constituencies. Our results show that people carry with them values and expectations, such as worries about discrimination, which differ across political ideologies.
Second, these findings reiterate the value of studying attitudes towards specific uses of AI technologies. Our data suggest that some applications of AI may be politically divisive – such as facial recognition in policing and the use of AI to determine welfare eligibility – while other uses of AI, such as cancer risk assessment, are met with similar levels of optimism or concern by those with different political orientations. Future research would benefit from working with the public to understand how attitudes towards specific uses of AI affect the considerations that need to be taken into account when deploying AI technologies.
Third, as the government considers options for regulating AI, it will be important to understand where people’s concerns lie, and how opposition to regulation might arise. Our findings show that the public want regulation around AI, and this desire appears to be largely independent of political orientation. As a minimum, it appears that there is public support for the government to deliver on its commitment in the AI Opportunities Action Plan (2025) to “funding regulators to scale up their AI capabilities”.
There are signs that, in the future, considerations like these will become more important in the UK political landscape. In both the US and Europe, AI has become politically salient. In the US, any moves towards AI safety, or AI regulation have become controversial and divide explicitly along political fault-lines. In the European Union (EU), AI regulation has been implemented more comprehensively than anywhere else in the world, setting policymakers in direct confrontation with US firms and, potentially, the US administration. The UK has tried to follow a delicate path between these two extremes, but it seems likely that issues such as digital services taxes, the Online Safety Act and technology regulation more generally will become politically salient in the future. Meanwhile, the public is increasingly using commercial LLMs, which show considerable potential to reshape – and bring US influences to bear upon – specific policy areas. Understanding of the political make-up of the public with respect to the use of AI, AI adoption and AI regulation will become increasingly helpful to politicians as they attempt to navigate this increasingly important and politically contested field.
The research reported here was undertaken as part of Public Voices in AI, a satellite project funded by Responsible AI UK and EPSRC (Grant number: EP/Y009800/1). Public Voices in AI was a collaboration between: the ESRC Digital Good Network @ the University of Sheffield, Elgon Social Research Limited, Ada Lovelace Institute, The Alan Turing Institute and University College London.
The authors would like to acknowledge Octavia Field Reid, Associate Director, Ada Lovelace Institute, for her work reviewing a draft of this report.
Ada Lovelace Institute. (October 2023). What do the public think about AI? https://www.adalovelaceinstitute.org/evidence-review/what-do-the-public-think-about-ai/
Araujo, T., Brosius, A., Goldberg, A. C., Möller, J., & Vreese, C. de. (2023). Humans vs. AI: The Role of Trust, Political Attitudes, and Individual Characteristics on Perceptions About Automated Decision Making Across Europe. International Journal of Communication, 17(0) 6222-6249.
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 18(01), 1-15.
Claudy, M. C., Parkinson, M., & Aquino, K. (2024). Why should innovators care about morality? Political ideology, moral foundations, and the acceptance of technological innovations. Technological Forecasting and Social Change, 203, 1-17. https://doi.org/10.1016/j.techfore.2024.123384
Council of the European Union. (2023). ChatGPT in the Public Sector – Overhyped or Overlooked?
Curtice, J. (2024). One-dimensional or two-dimensional? The changing dividing lines of Britain’s electoral politics. In British Social Attitudes: the 41st report. London: The National Centre for Social Research. https://natcen.ac.uk/publications/bsa-41-one-dimensional-or-two-dimensional
Fast, E., & Horvitz, E. (2017). Long-Term Trends in the Public Perception of Artificial Intelligence. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10635
Gur, T., Hameiri, B., & Maaravi, Y. (2024). Political ideology shapes support for the use of AI in policy-making. Frontiers in Artificial Intelligence, 7, 1-9. https://doi.org/10.3389/frai.2024.1447171
Hemesath, S., & Tepe, M. (2024). Multidimensional preference for technology risk regulation: The role of political beliefs, technology attitudes, and national innovation cultures. Regulation & Governance, 18, 1264-1283. https://doi.org/10.1111/rego.12578
König, P., Wurster, S., & Siewert, M. (2023). Sustainability challenges of artificial intelligence and citizens’ regulatory preferences. Government Information Quarterly, 40, 1-11. https://doi.org/10.1016/j.giq.2023.101863
Leslie, D. (2020). Understanding bias in facial recognition technologies: an explainer. The Alan Turing Institute. https://doi.org/10.5281/zenodo.4050457
Mack, E. A., Miller, S. R., Chang, C. H., Van Fossen, J. A., Cotten, S. R., Savolainen, P. T., & Mann, J. (2021). The politics of new driving technologies: Political ideology and autonomous vehicle adoption. Telematics and Informatics, 61, 101604 https://doi.org/10.1016/j.tele.2021.101604
Modhvadia, R., Sippy, T., Field Reid, O., and Margetts, H. (2025). How do people feel about AI? Ada Lovelace Institute and The Alan Turing Institute. https://attitudestoai.uk/
Neff, G. (2024). Can Democracy Survive AI? Sociologica, 18(3), 137-146. https://doi.org/10.6092/issn.1971-8853/21108
O’Shaughnessy, M. R., Schiff, D. S., Varshney, L. R., Rozell, C. J., & Davenport, M. A. (2023). What governs attitudes toward artificial intelligence adoption and governance? Science and Public Policy, 50(2), 161–176. https://doi.org/10.1093/scipol/scac056
Prabhakaran, V., Mitchell, M., Gebru, T., & Gabriel, I. (2022). A Human Rights-Based Approach to Responsible AI (No. arXiv:2210.02667). arXiv. https://doi.org/10.48550/arXiv.2210.02667
UK Government. (January 2025). AI Opportunities Action Plan. GOV.UK. https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan
UK Government. (March 2025). PM remarks on the fundamental reform of the British State. GOV.UK. https://www.gov.uk/government/speeches/pm-remarks-on-the-fundamental-reform-of-the-british-state-13-march-2025
Wang, S. (2023). Factors related to user perceptions of artificial intelligence (AI)-based content moderation on social media. Computers in Human Behavior, 149, 107971. https://doi.org/10.1016/j.chb.2023.107971
Wen, C.-H. R., & Chen, Y.-N. K. (2024). Understanding public perceptions of revolutionary technology: The role of political ideology, knowledge, and news consumption. Journal of Science Communication, 23(5), 1-18. https://doi.org/10.22323/2.23050207
Yang, S., Krause, N. M., Bao, L., Calice, M. N., Newman, T. P., Scheufele, D. A., Xenos, M. A., & Brossard, D. (2023). In AI We Trust: The Interplay of Media Use, Political Ideology, and Trust in Shaping Emerging AI Attitudes. Journalism & Mass Communication Quarterly. https://doi.org/10.1177/10776990231190868
Yi, A., Goenka, S., & Pandelaere, M. (2024). Partisan Media Sentiment Toward Artificial Intelligence. Social Psychological and Personality Science, 15(6), 682–690. https://doi.org/10.1177/19485506231196817
Zhang, M. (2023). Older people’s attitudes towards emerging technologies: A systematic literature review. Public Understanding of Science, 32(8), 948-968. https://doi.org/10.1177/09636625231171677
Table A.1 Unweighted bases for Table 1

| AI use | Left (N) | Right (N) |
|---|---|---|
| Cancer risk | 987 | 980 |
| Facial recognition in policing | 1,013 | 1,029 |
| Large language models | 846 | 814 |
| Loan repayment risk | 911 | 932 |
| Robotic care assistants | 908 | 894 |
| Welfare eligibility | 896 | 875 |
| Mental health chatbot | 851 | 807 |
| Driverless cars | 991 | 970 |
Table A.2 Unweighted bases for Table 2

| AI use | Libertarian (N) | Authoritarian (N) |
|---|---|---|
| Cancer risk | 2,006 | 981 |
| Facial recognition in policing | 1,029 | 1,034 |
| Large language models | 909 | 779 |
| Loan repayment risk | 926 | 915 |
| Robotic care assistants | 918 | 896 |
| Welfare eligibility | 897 | 884 |
| Mental health chatbot | 873 | 823 |
| Driverless cars | 987 | 973 |
Table A.3 Linear regression of net benefit scores for uses of AI (coefficients, with standard errors in parentheses)

| | Facial recognition for policing | Welfare assessments | Cancer diagnosis | Loan assessments |
|---|---|---|---|---|
| Left-right scale | 0.18*** (0.03) | 0.32*** (0.04) | 0.08* (0.03) | 0.23*** (0.03) |
| Libertarian-authoritarian scale | 0.52** (0.03) | 0.40*** (0.04) | -0.05 (0.03) | 0.21*** (0.04) |
| Ethnicity (ref: neither Black nor Asian) | | | | |
| Asian or Asian British | -0.39** (0.09) | 0.16 (0.12) | -0.22* (0.10) | 0.05 (0.11) |
| Black or Black British | -0.36* (0.16) | -0.20 (0.21) | -0.16 (0.17) | -0.03 (0.19) |
| Digital skills (ref: no basic digital skills) | | | | |
| Respondent has basic digital skills | 0.31*** (0.06) | 0.06 (0.08) | 0.03*** (0.07) | 0.28*** (0.07) |
| Monthly equivalised household income (ref: less than £1,500) | | | | |
| More than £1,500 | 0.24*** (0.05) | 0.35*** (0.07) | 0.27*** (0.05) | 0.16** (0.06) |
| Age (ref: aged 18-34) | | | | |
| Aged 35-54 | 0.02 (0.06) | -0.19* (0.08) | -0.07 (0.07) | 0.09 (0.07) |
| Aged 55+ | 0.16** (0.06) | -0.13 (0.08) | 0.18** (0.07) | 0.12 (0.07) |
| Education (ref: does not have a degree) | | | | |
| Has a degree | -0.11* (0.05) | 0.12 (0.07) | 0.07 (0.05) | 0.05 (0.06) |
| Adjusted R squared | 0.16 | 0.09 | 0.04 | 0.05 |
| Unweighted base | 2,839 | 2,452 | 2,716 | 2,554 |
Table A.3 (continued)

| | Large language models | Mental health chatbots | Robotic care assistants | Driverless cars |
|---|---|---|---|---|
| Left-right scale | 0.11** (0.04) | 0.09* (0.04) | 0.09* (0.04) | 0.06 (0.04) |
| Libertarian-authoritarian scale | 0.15*** (0.04) | 0.10* (0.04) | -0.02 (0.04) | -0.17*** (0.04) |
| Ethnicity (ref: neither Black nor Asian) | | | | |
| Asian or Asian British | 0.28* (0.11) | 0.38** (0.14) | 0.51*** (0.13) | 0.25 (0.13) |
| Black or Black British | 0.69*** (0.19) | 0.47* (0.23) | 0.18 (0.21) | 0.09 (0.22) |
| Digital skills (ref: no basic digital skills) | | | | |
| Respondent has basic digital skills | 0.30*** (0.08) | 0.02 (0.09) | 0.45*** (0.09) | 0.20* (0.09) |
| Monthly equivalised household income (ref: less than £1,500) | | | | |
| More than £1,500 | 0.17** (0.06) | 0.15* (0.08) | 0.23** (0.07) | 0.26*** (0.07) |
| Age (ref: aged 18-34) | | | | |
| Aged 35-54 | 0.02 (0.07) | -0.21* (0.08) | -0.05 (0.08) | 0.16 (0.09) |
| Aged 55+ | -0.24** (0.07) | -0.16 (0.09) | -0.17* (0.08) | -0.16 (0.08) |
| Education (ref: does not have a degree) | | | | |
| Has a degree | -0.02 (0.06) | -0.10 (0.07) | 0.27*** (0.07) | 0.24*** (0.07) |
| Adjusted R squared | 0.04 | 0.01 | 0.05 | 0.04 |
| Unweighted base | 2,310 | 2,315 | 2,505 | 2,717 |
*=significant at 95% level
**=significant at 99% level
***=significant at 99.9% level
Table A.4 Logistic regression of measures that would make people more comfortable with AI (coefficients, with standard errors in parentheses)

| | Assurance that the AI has been deemed acceptable by a government regulator | Laws and regulation that prohibit certain uses of technologies, and guide the use of all AI technologies |
|---|---|---|
| Left-right scale | -0.10* (0.05) | -0.12* (0.05) |
| Libertarian-authoritarian scale | 0.01 (0.05) | -0.21*** (0.06) |
| Ethnicity (ref: neither Black nor Asian) | | |
| Asian or Asian British | 0.29 (0.15) | -0.19 (0.16) |
| Black or Black British | -0.23 (0.26) | -0.02 (0.29) |
| Digital skills (ref: no basic digital skills) | | |
| Respondent has basic digital skills | 0.31** (0.10) | 0.54*** (0.10) |
| Monthly equivalised household income (ref: less than £1,500) | | |
| More than £1,500 | 0.50*** (0.08) | 0.52*** (0.09) |
| Age (ref: aged 18-34) | | |
| Aged 35-54 | 0.08 (0.10) | 0.24* (0.11) |
| Aged 55+ | 0.30** (0.10) | 0.43*** (0.11) |
| Education (ref: does not have a degree) | | |
| Has a degree | 0.29*** (0.08) | 0.22* (0.09) |
| Unweighted base | 2,979 | 2,979 |
*=significant at 95% level
**=significant at 99% level
***=significant at 99.9% level
Clery, E., Curtice, J. and Jessop, C. (eds.) (2025)
British Social Attitudes: The 42nd Report.
London: National Centre for Social Research
© National Centre for Social Research 2025
First published 2025
You may print out, download and save this publication for your non-commercial use. Otherwise, and apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act, 1988, this publication may be reproduced, stored or transmitted in any form, or by any means, only with the prior permission in writing of the publishers, or in the case of reprographic reproduction, in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the National Centre for Social Research.
National Centre for Social Research
35 Northampton Square
London
EC1V 0AX
info@natcen.ac.uk
natcen.ac.uk