Subject: 004 Data processing and computer science; course evaluation; computational linguistics; dialogue system; human-machine communication
A Conversational Agent to Improve Response Quality in Course Evaluations
Recent advances in Natural Language Processing (NLP) offer the opportunity to design new forms of human-computer interaction with conversational interfaces. We hypothesize that these interfaces can interactively engage students and thereby increase the response quality of course evaluations in education compared to the common standard of web surveys. Past research indicates that web surveys come with disadvantages, such as poor response quality caused by inattention, survey fatigue, or satisficing behavior. To test whether conversational interfaces have a positive impact on the level of enjoyment and the response quality, we design an NLP-based conversational agent, deploy it in a field experiment with 127 students in our lecture, and compare it with a web survey as a baseline. Our findings indicate that using conversational agents for evaluations results in higher response quality and greater enjoyment, and is therefore a promising approach to increasing the effectiveness of surveys in general.
Citation: In: ACM (Ed.): Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA '20). Association for Computing Machinery, New York, NY, 2020. ISBN 978-1-4503-6819-3