Master of Science in Computer Science
Casey Kennington, Ph.D.
Francesca Spezzano, Ph.D.
Tim Anderson, Ph.D.
Transformer-based Language Models (LMs) learn contextual meanings for words from huge amounts of unlabeled text data and show outstanding performance on various Natural Language Processing (NLP) tasks. However, what LMs learn is far from what meaning is for humans, partly because humans differentiate between concrete and abstract words, while language models make no such distinction. Concrete words have a physical representation in the world, such as “chair”, while abstract words denote ideas, such as “democracy”. The process of learning word meanings begins in early childhood, when children acquire their first language by interacting with the physical world through simple referring expressions. They do not need many examples to learn from, and they learn concrete words first, through interaction with their physical environment, and abstract words later; language models, in contrast, are not capable of referring to objects or of learning the concrete aspects of words.
In this thesis, I draw motivation from the way children acquire language and combine concrete representations of certain words into LMs while leveraging their existing training regime. My methodology uses referring expressions to visual objects as a way of linking visual world representations (images) with text: I extract word-level visual embeddings for concrete words from images and word-level contextual embeddings for abstract words from text, then use both to train language models. To enable the model to differentiate between concrete and abstract words, I use a dataset of concreteness ratings for words to determine how the information about each word is applied during training.
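The idea of weighting a word's visual and contextual information by its concreteness can be sketched as a simple embedding interpolation. The function below is a hypothetical illustration, not the thesis's actual implementation: the `combine_embeddings` name, the linear blending scheme, and the 1-to-5 rating scale (as used in common concreteness-norm datasets) are all assumptions for the sake of the example.

```python
import numpy as np

def combine_embeddings(visual_emb: np.ndarray,
                       text_emb: np.ndarray,
                       concreteness: float,
                       max_rating: float = 5.0) -> np.ndarray:
    """Blend a word's visual and contextual embeddings by its
    concreteness rating (hypothetical scheme; ratings assumed
    to lie on a 1-5 scale, 5 = most concrete)."""
    # More concrete words lean on the visual embedding,
    # more abstract words on the contextual one.
    alpha = concreteness / max_rating
    return alpha * visual_emb + (1.0 - alpha) * text_emb

# A highly concrete word takes most of its representation from vision;
# a highly abstract word takes most of it from text.
concrete_word = combine_embeddings(np.ones(4), np.zeros(4), 4.5)
abstract_word = combine_embeddings(np.ones(4), np.zeros(4), 1.5)
```

In this sketch a fully concrete word (rating equal to `max_rating`) would use only its visual embedding, and a word rated 0 would use only its contextual embedding; any monotone weighting function could substitute for the linear one.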
The work presented in this thesis is evaluated on a standard language understanding benchmark by analyzing the effect of the proposed training regime on the language model and comparing its performance with that of traditional language models trained on large text corpora. The results demonstrate that using referring expressions as the input text for training language models yields better performance on some language understanding tasks than using traditional, corpus-based text. However, the results cannot affirm that adding visual knowledge and/or concreteness-distinction knowledge enriches LMs.
Mahdy, Yousra, "Towards Making Transformer-Based Language Models Learn How Children Learn" (2022). Boise State University Theses and Dissertations. 1975.