Academic Paper

Natural Language Processing Algorithms for Divergent Thinking Assessment
Document Type
Conference
Source
2023 IEEE 6th Eurasian Conference on Educational Innovation (ECEI), pp. 198-202, Feb. 2023
Subject
Technological innovation
Computational modeling
Bit error rate
Semantics
Manuals
Transformers
Natural language processing
Divergent Thinking Assessment
Natural Language Processing
Alternative Uses Task
Crowdsourcing
Language
Abstract
Manual assessment of creativity by human raters suffers from unavoidable subjectivity and is costly in time and human resources. To address these issues, this paper explores how natural language processing (NLP) methods can be applied to creativity assessment. In the Alternative Uses Task (AUT), participants were encouraged to generate ideas as quickly as possible within a fixed time. Given design fixation and the limits of working memory, we hypothesized that the similarity of ideas would decrease over the course of the AUT. In the first study, 12 university students completed a paper-and-pencil AUT and generated 376 responses in total. We applied two NLP models, BERT (Bidirectional Encoder Representations from Transformers) and USE (Universal Sentence Encoder), to assess the similarity of responses between individuals. The results did not confirm our hypothesis. One likely reason is that these models were trained on millions of sentences whose structures differ substantially from the short responses participants produced while completing the AUT. Nevertheless, the results did show that BERT and USE capture the semantic information of responses more accurately than Latent Semantic Analysis (LSA), a popular computer-aided model for AUT response assessment. In Study 2, we proposed an algorithm that reanalyzes the 376 responses from Study 1 using word embeddings grounded in crowdsourced responses: 1,690 responses collected from 550 participants who completed an online version of the AUT. The results supported our hypothesis, showing that the similarity of responses decreased as time passed, which indicates that the proposed algorithm alleviates the influence of sentence structure in AUT tasks. The differences between BERT, USE, and the proposed algorithm are discussed in relation to creativity assessment, and implications for future work are explored in depth.
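The core measurement described in the abstract, scoring AUT responses by the pairwise similarity of their embeddings, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `embed` function here is a hypothetical stand-in (a hashed bag-of-words vector) for a real sentence encoder such as BERT or USE, and the cosine-similarity aggregation is the standard approach such studies typically use.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(text, dim=64):
    # Hypothetical stand-in for a sentence encoder (BERT/USE would be
    # used in a real analysis): a hashed bag-of-words vector.
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v

def mean_pairwise_similarity(responses):
    """Mean cosine similarity over all pairs of AUT responses.

    Lower values indicate a more semantically diverse idea set.
    """
    vecs = [embed(r) for r in responses]
    sims = [cosine_similarity(vecs[i], vecs[j])
            for i in range(len(vecs))
            for j in range(i + 1, len(vecs))]
    return sum(sims) / len(sims)
```

Tracking this statistic within successive time windows of a session would show whether response similarity rises or falls over time, which is the quantity the hypothesis concerns.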