Blog

Exploring AI Frontiers: Insights from ESRA 2025

The European Survey Research Association (ESRA) conference was held in July in Utrecht, the Netherlands.

This summer, colleagues from the National Centre for Social Research (NatCen) attended the European Survey Research Association (ESRA) conference, eager to explore the role of Artificial Intelligence (AI) in social research. The conference provided a platform for sharing how AI is reshaping the research landscape.

Large language models (LLMs) are now integral to many research organisations, automating tasks such as feedback analysis and the coding of qualitative data. This streamlines processes and makes it far easier to handle large volumes of qualitative information. At NatCen, LLMs are already enhancing our Rapid Evidence Assessments (REAs) by speeding up the screening process.
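To make the screening step concrete, here is a minimal sketch in Python of how an LLM might be slotted into an evidence-review pipeline. The model call itself is stubbed out, and the prompt wording and helper names are illustrative assumptions, not NatCen's actual implementation.

```python
# Illustrative sketch of LLM-assisted screening for a rapid evidence review.
# The model call is stubbed; in practice `prompt` would be sent to an LLM API.

SCREENING_PROMPT = (
    "You are screening abstracts for a rapid evidence review on {topic}.\n"
    "Reply with exactly one word: INCLUDE or EXCLUDE.\n\n"
    "Abstract: {abstract}"
)

def build_prompt(topic: str, abstract: str) -> str:
    """Fill the screening template for a single abstract."""
    return SCREENING_PROMPT.format(topic=topic, abstract=abstract)

def parse_decision(model_reply: str) -> bool:
    """Map the model's one-word reply onto an include/exclude decision.

    Anything other than a clear INCLUDE is treated as EXCLUDE, so
    ambiguous replies fail safe and can be routed to a human reviewer.
    """
    return model_reply.strip().upper().startswith("INCLUDE")

# Example with a stubbed model reply (a real API call is out of scope here):
prompt = build_prompt("survey nonresponse", "We study attrition in web panels...")
decision = parse_decision("INCLUDE")  # stand-in for the model's reply
```

The fail-safe default matters in practice: abstracts the model cannot classify cleanly fall back to human screening rather than being silently dropped.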

One notable presentation by Gummer et al. (GESIS) explored how respondents use generative AI to answer political knowledge questions in web surveys, and how useful the information obtained from chatbots is when respondents try to answer survey questions. Their findings revealed differences between how respondents use chatbots and how they use search engines, suggesting further research is needed to understand these behaviours and to identify the characteristics of respondents who adopt different chatbot query strategies.

This research aligns with our own use of chatbots. A common topic of discussion among colleagues at NatCen is the language we use in prompts for LLMs. The validity and reliability of chatbot output is directly linked to the quality of the prompts, and there is certainly more work to do here to develop a fit-for-purpose survey and/or qualitative interviewer chatbot. As survey designers, we already have the skills to write good prompts; it is a matter of honing them.
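As an illustration of what a carefully structured prompt can look like, here is a short Python sketch that assembles a few-shot prompt for coding open-text survey responses. The code frame, example responses, and code numbers are all made up for the example; the point is the structure (task statement, code frame, worked examples, then the response to code), not the content.

```python
# Illustrative few-shot prompt assembly for coding open-text survey responses.
# The code frame and examples below are hypothetical.

CODE_FRAME = {
    "1": "Cost of living",
    "2": "Health services",
    "3": "Other",
}

FEW_SHOT = [
    ("Everything is so expensive now", "1"),
    ("Waiting lists at my GP are too long", "2"),
]

def coding_prompt(response: str) -> str:
    """State the task, the code frame, and worked examples, then the response."""
    lines = ["Assign exactly one code to the survey response below.", "", "Code frame:"]
    for code, label in CODE_FRAME.items():
        lines.append(f"  {code} = {label}")
    lines.append("")
    lines.append("Examples:")
    for text, code in FEW_SHOT:
        lines.append(f'  "{text}" -> {code}')
    lines.append("")
    lines.append(f'Response to code: "{response}"')
    lines.append("Answer with the code number only.")
    return "\n".join(lines)
```

Spelling out the code frame and giving worked examples in the prompt is one way of encoding "the rules of good coding" explicitly, rather than leaving the model to infer them.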

Susanne Weber from the Institute of Medical Biometry and Statistics showcased the use of LLMs for qualitative data analysis. Her work with ChatGPT's o3 model demonstrated both the promise and the limitations of AI in survey research. While codebook creation showed good consistency between codes generated across different AI chats, using AI to code the interviews themselves proved challenging, highlighting the need for further refinement.

Roberts et al. (University of Lausanne) have investigated the potential of chatbots to improve the quality and cost efficiency of survey questionnaire design. Preliminary results showed reasonable agreement between chatbot and human coders in identifying problems with survey questions using the Questionnaire Appraisal System. In the next stage of the project, they will use the chatbot to generate and analyse cognitive interview data.

Similarly, Joanne Groves (ONS) presented some interesting takeaways highlighting how useful AI will be for questionnaire design, but also emphasised the importance of the social researcher's role in writing good prompts and critically reviewing outputs. Joanne noted that the AI-generated questionnaire had some serious flaws which would have resulted in poor-quality data being collected. These flaws stemmed from the prompts used, which were not written by a questionnaire design expert. NatCen's questionnaire development and testing experts have been experimenting with AI chatbots to assist with the development of questions. We have found that, when given clear and detailed prompts on what is needed and the rules of good questionnaire design, AI can be a helpful tool that saves researchers time and supports creative thinking and problem solving.

Sturgis et al. (London School of Economics) are assessing whether a chatbot embedded in a web survey questionnaire can enhance the quality of responses to questions on occupation. Early results suggest that the chatbot improves the accuracy of Standard Occupational Classification (SOC) coding on the fly compared with human coding. We believe this will become standard in web surveys within a year or so, but acknowledge that there is still work to do in developing governance and ethical guidance to support the operational use of chatbots in this way.

Kipling Zubevich (Social Research Centre) presented compelling research on using conversational agents as survey tools, testing Boost AI's customer service platform for follow-up interviews with panel respondents. In one example, he proposed having a chatbot conduct survey interviews, which could make the experience feel more interactive and friendly than a web survey. People might even be more willing to share honest answers with a conversational agent than with a survey interviewer. It could also help reach a wider audience, including those who might not usually participate in surveys. However, there are important ethical considerations around the use of conversational agents in social research that must be addressed.

The AI demonstrated sophisticated capabilities, matching responses to code frames and asking probing questions when answers were ambiguous. Key findings revealed reduced social desirability bias on sensitive topics such as drug use and smoking, while 20% of participants showed no mode preference, indicating openness to AI interviewing. The human-like Australian-accented voice proved crucial for acceptance. Though web panel participants still preferred traditional methods, the research highlighted the potential of conversational agents to offer more flexible, interactive survey experiences. While development continues, this technology shows significant promise for expanding respondent choice and improving data quality in social research.

This year’s ESRA conference was a hub of stimulating discussions and camaraderie. As NatCen continues to develop its own use of AI in research, it is clear that this technology holds immense potential for improving efficiencies in data collection and processing, and for improving data quality. However, the role of the skilled survey researcher remains essential to the production of high-quality, ethical data. As researchers, we must learn as much as we can about AI so that we can make informed decisions about when and how to use it, for what purposes, and how to assess the validity and reliability of what it produces. Events such as the ESRA conference help us to do this.