
The instruments used to collect training data for machine learning models have many similarities to web surveys, such as the provision of a stimulus and fixed response options.
Survey methodologists know that item and response option wording and ordering, as well as annotator effects, impact survey data.
Our previous research showed that these effects also occur when collecting annotations for model training.
Our new study builds on those results, exploring how instrument structure and annotator composition impact models trained on the resulting annotations.
Using previously annotated Twitter data on hate speech, we collect annotations with five versions of an annotation instrument, randomly assigning annotators to versions.
We then train ML models on each of the five resulting datasets.
By comparing model performance across the instruments, we aim to understand how instrument structure shapes the models trained on the resulting annotations.
In addition, we expand upon our earlier findings that annotators' demographic characteristics impact the annotations they make. Our results emphasize the importance of careful annotation instrument design.
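The experimental design described above can be sketched in a few lines: partition the annotations by instrument version, train one classifier per version, and compare cross-validated performance. The sketch below is illustrative only, not the authors' code; the toy tweets, labels, version names, and choice of model (TF-IDF plus logistic regression) are all hypothetical stand-ins.

```python
# Illustrative sketch (not the study's actual pipeline): train one
# hate-speech classifier per annotation-instrument version and compare
# cross-validated macro-F1. All data below is a hypothetical toy example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy stand-in for tweets annotated under two instrument versions;
# note the labels differ between versions, mimicking instrument effects.
datasets = {
    "version_A": (["you are awful", "nice day", "I hate you", "lovely"] * 5,
                  [1, 0, 1, 0] * 5),
    "version_B": (["you are awful", "nice day", "I hate you", "lovely"] * 5,
                  [1, 0, 0, 0] * 5),
}

scores = {}
for version, (texts, labels) in datasets.items():
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    # Mean macro-F1 over 5-fold cross-validation for this instrument version.
    scores[version] = cross_val_score(model, texts, labels,
                                      cv=5, scoring="f1_macro").mean()

for version, score in sorted(scores.items()):
    print(f"{version}: macro-F1 = {score:.2f}")
```

Comparing the per-version scores (rather than pooling all annotations) is what isolates the effect of the instrument itself on downstream model quality.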
The event is free to attend; simply register your details to receive a unique Zoom Webinar link.
Please note that you must register for the event using an email address linked to a valid Zoom account.
Attendance at City events is subject to terms and conditions.