Subject: 004 Data processing and computer science; Computational linguistics; Computer-assisted learning; Online survey; User interface; Human-machine communication; Course evaluation
Unleashing the Potential of Conversational Agents for Course Evaluations: Empirical Insights from a Comparison with Web Surveys
Recent advances in Natural Language Processing (NLP) offer the opportunity to design new forms of human-computer interaction with conversational interfaces. However, little is known about how these interfaces change the way users respond in online course evaluations. We explore the effects of conversational agents (CAs) on the response quality of online course evaluations in education, compared with web surveys as the current standard. Past research indicates that web surveys suffer from drawbacks such as poor response quality caused by inattention, survey fatigue, or satisficing behavior. We propose that a conversational interface has a positive effect on response quality through its different mode of interaction. To test our hypotheses, we design an NLP-based CA, deploy it in a field experiment with 176 students across three course formats, and compare it with a web survey as a baseline. The results indicate that participants using the CA showed higher response quality and social presence than those using the web survey. Together with technology acceptance measurements, these findings suggest that using CAs for evaluation is a promising approach to increasing the effectiveness of surveys in general.