Browsing by Author "Grissom, Alvin"
- 1-D TV: A Computational Investigation into the Lexical Representation of Black Womanhood in Reality Television News (2021). Habtu, Blien; Payne, Amanda; Grissom, Alvin. It is well-established that when categorizing lexical associations of words in news corpora, women and minorities tend to be associated with negative terms (Manzini et al. 2019). This harm also carries through other forms of media. For instance, Black women on television have historically been depicted as one-dimensional characters, often forced into categories of strict binaries: either extremely educated or dropouts, either ambitious or devoid of enthusiasm, either completely desexualized or hypersexualized, either always compliant or always aggressive (Boylorn 2008). While these depictions are known to cause harm, racism and sexism are not necessarily so overt, and more work is needed to quantify the effects and spread of stereotypes relating to intersections of identities. In this thesis, I use the context of reality television to examine how racial representations in media can influence people's perceptions of Black womanhood. I begin with background information on some of the effects of media consumption and a high-level computational overview of how words can be represented as vectors to quantify prejudicial bias in text representations. Afterwards, I conduct a literature review exploring some of the ways previous researchers (Parthasarthi et al. 2019; Garg et al. 2018) have measured bias in digital media, both in text and over time. Then, to understand more about the complexities of this task, I explore a way in which word embeddings can be generated using the Word2vec algorithm (Mikolov et al. 2013a) and visualized through vector representation tools. I conclude by addressing the challenges of my experiment and suggesting future improvements to this project.
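The bias-measurement idea described in this abstract, quantifying whether a word vector sits closer to negative than to positive attribute terms, can be illustrated with a toy relative-association score. This is a hedged sketch in the spirit of WEAT-style association tests, not the thesis's actual code; the function name `relative_association` and the 2-d vectors are invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

def relative_association(word_vec, positive_attrs, negative_attrs):
    """Mean similarity to positive attribute vectors minus mean similarity
    to negative ones; a negative score indicates the word is embedded
    closer to the negative attribute set."""
    pos = sum(cosine(word_vec, a) for a in positive_attrs) / len(positive_attrs)
    neg = sum(cosine(word_vec, a) for a in negative_attrs) / len(negative_attrs)
    return pos - neg

# Invented 2-d toy vectors purely for illustration: this "word" points
# almost entirely along the negative-attribute direction.
score = relative_association([1.0, 0.1], [[0.0, 1.0]], [[1.0, 0.0]])
```

In practice the vectors would come from a model such as Word2Vec trained on the news-headline corpus, and the attribute sets would be curated lists of evaluative terms.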
- Analyzing the Aeneid and its Translations with Topic Models and Word Embeddings (2022). Langen, Carter; Kuper, Charles; Grissom, Alvin. We review new advances in word embeddings and apply them to cross-lingual literary analysis of Latin and of English translations of Latin. We introduce word embeddings and summarize in detail the developments that allow them to be trained from small data, on morphologically rich languages, and cross-lingually. We also review Latent Dirichlet Allocation and polylingual topic models. We then use these models to analyze Vergil's Aeneid and the John Dryden, John Conington, and Theodor Williams translations into English.
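As a concrete illustration of the Latent Dirichlet Allocation model mentioned in this abstract, here is a minimal collapsed Gibbs sampler. It is a toy sketch with invented hyperparameters and no connection to the thesis's actual pipeline, which presumably uses established topic-modeling libraries; it only shows the core counting-and-resampling loop.

```python
import random

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA over tokenized documents.
    Returns per-token topic assignments, doc-topic counts,
    topic-word counts, and the vocabulary."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * n_topics for _ in docs]        # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]    # topic-word counts
    nk = [0] * n_topics                         # tokens per topic
    z = []                                      # topic of each token
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:                           # random initialization
            t = rng.randrange(n_topics)
            zs.append(t)
            ndk[d][t] += 1; nkw[t][widx[w]] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                     # remove token's current topic
                ndk[d][t] -= 1; nkw[t][widx[w]] -= 1; nk[t] -= 1
                # resample proportionally to (doc-topic) * (topic-word) mass
                weights = [(ndk[d][k] + alpha) * (nkw[k][widx[w]] + beta)
                           / (nk[k] + V * beta) for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][widx[w]] += 1; nk[t] += 1
    return z, ndk, nkw, vocab
```

Polylingual topic models extend this scheme by letting aligned documents in different languages (e.g. a Latin passage and its English translation) share topic proportions while keeping separate topic-word distributions per language.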
- Augmenting Data to Improve Incremental Japanese-English Sentence and Sentence-final Verb Translation (2021). Goldman, Benjamin; Grissom, Alvin. Final verb prediction has been shown to help expedite simultaneous machine translation when translating between languages with different word orders. Prediction allows a system to begin translating earlier because it gains access to information that it would otherwise need to wait for. Specifically, if the system is able to predict the final verb in an SOV sentence, it can start translating much earlier. This thesis examines current prediction mechanisms in neural machine translation models to determine what factors improve predictions between SOV and SVO languages. We first train a neural machine translation model to establish how well it can predict the English verb corresponding to the final Japanese verb. We then train new models on modified data to see its impact on the model's ability to predict the English verb. We found that the model predicts English verbs more accurately as more of the Japanese sentence is revealed. Shuffling the preverb context and adding subsentences to the training data both improved the model's ability to predict the English verb.
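The subsentence-augmentation idea in this abstract, training on partial inputs so the model learns to predict the verb before the sentence is complete, can be sketched as a simple prefix generator. The function name `make_prefix_examples` and the example tokens are invented for illustration; the thesis's actual preprocessing is not shown here.

```python
def make_prefix_examples(src_tokens, tgt_verb, min_len=1):
    """Pair each successively longer prefix of a tokenized source sentence
    with the sentence-final target verb, simulating the partial input
    available at each step of incremental translation."""
    return [(src_tokens[:i], tgt_verb) for i in range(min_len, len(src_tokens) + 1)]

# Toy Japanese sentence ("he read a book"); tokens invented for illustration.
examples = make_prefix_examples(["kare", "wa", "hon", "o", "yonda"], "read")
```

Each resulting (prefix, verb) pair can be added to the training data, so the model sees many truncated contexts paired with the correct English verb.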
- Evaluating the Effect of Training Data on Bias in Generative Adversarial Networks (2023). Trotter, Ryan; Grissom, Alvin. With the rising popularity of generative adversarial networks for facial image generation, it is becoming increasingly important to ensure that these models are not biased. Given the many possible uses of networks that produce incredibly realistic face images, the potential for bias in these networks to cause harm is substantial. While StyleGAN is very effective when trained on FFHQ, the unbalanced nature of this dataset raises concerns. In this paper, we explore how GANs work, past research on bias in GAN image generation, and possible alternatives for reducing bias in these models. We also examine new results on bias in the GAN discriminator, which suggest new research directions for mitigating bias in GANs.
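One common way to quantify the kind of output imbalance this abstract discusses is to run an attribute classifier over a batch of generated faces and compare the predicted-label distribution to a balanced one. The sketch below assumes that step has already happened and only shows the comparison; the function name `label_imbalance` and the use of total-variation distance are illustrative choices, not the thesis's method.

```python
from collections import Counter

def label_imbalance(predicted_labels, label_set):
    """Total-variation distance between the empirical distribution of
    attribute labels predicted for generated samples and a uniform
    distribution over label_set. 0.0 means perfectly balanced output;
    the maximum approaches 1.0 as one label dominates."""
    counts = Counter(predicted_labels)
    n = len(predicted_labels)
    k = len(label_set)
    return 0.5 * sum(abs(counts.get(label, 0) / n - 1 / k) for label in label_set)

balanced = label_imbalance(["a", "b", "a", "b"], ["a", "b"])  # 0.0
skewed = label_imbalance(["a", "a", "a", "a"], ["a", "b"])    # 0.5
```

A caveat worth noting: the attribute classifier itself can be biased, so such measurements are estimates rather than ground truth.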
- Improving Sentence-final Verb Prediction in Japanese using Recurrent Neural Networks and Sentence Shuffling (2021). Su, Amberley (Zhan); Grissom, Alvin. This thesis analyzes different approaches to verb prediction in machine translation, mainly between languages with different grammatical structures. We introduce the importance of verb prediction in translation between Subject-Object-Verb and Subject-Verb-Object languages, and we summarize recent work on using models such as reinforcement learning and regression to classify verbs, and strategies such as sentence rewriting, to achieve better performance in simultaneous translation. We then use different models and approaches on tasks similar to those in previous papers. We first perform a near replication of the model in Grissom II et al. 2016. In addition, to gauge the importance of the final case marker in Japanese final verb prediction, we train the model to perform the classification without the final case marker. We also use an LSTM model and a BiGRU model on the same task from the aforementioned paper and analyze how their results differ from the previous model's. Since sentence structure is relatively free in Japanese, we shuffle the POS tokens and KNP bunsetsu to see whether adding shuffled sentences to the dataset can increase accuracy. Beyond the four-choice multiple-choice task of Grissom II et al. 2016, we also conduct experiments with the LSTM and BiGRU models on multiple-choice tasks with 50 and 100 verbs to assess their predictions over a larger verb set.
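The shuffling augmentation this abstract describes exploits the relatively free order of Japanese preverb constituents: chunks before the verb can be permuted while the sentence-final verb stays in place. A minimal sketch, assuming the sentence has already been segmented into chunks (e.g. KNP bunsetsu) with the verb chunk last; the function name and example chunks are invented.

```python
import random

def shuffle_preverb(chunks, seed=0):
    """Return a copy of a chunked sentence with every chunk before the
    sentence-final verb chunk shuffled. Because Japanese constituent
    order is flexible, the variants remain plausible sentences."""
    rng = random.Random(seed)
    preverb = list(chunks[:-1])
    rng.shuffle(preverb)
    return preverb + [chunks[-1]]

# Invented romanized chunks ("yesterday / with a friend / a movie / watched").
variant = shuffle_preverb(["kinou", "tomodachi-to", "eiga-o", "mita"])
```

Adding such variants to the training data exposes the classifier to more orderings of the same preverb context, all labeled with the same final verb.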
- Inductive Biases in Generative Adversarial Networks (2023). Lin, Yikang; Grissom, Alvin. In unsupervised learning, the generative adversarial network (GAN) is one of the generative models for producing new examples that are not present in the original data. Although GANs achieve impressive results in generating novel examples, the inductive biases of the model can lead to biased results in image generation. This paper provides a summary of two types of GANs, the MLP GAN and StyleGAN, and their corresponding inductive biases. Because analytical study of the effects of GAN inductive biases is difficult, an empirical method for studying the inductive biases is also discussed.
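To make the MLP-versus-StyleGAN contrast concrete: the generator of an MLP GAN is just a stack of dense layers mapping a latent vector to a flat image, so it encodes no spatial inductive bias, whereas StyleGAN's convolutions and progressive structure do. The forward pass below is a minimal sketch with invented layer sizes, not either paper's architecture.

```python
import numpy as np

def mlp_generator(z, w1, b1, w2, b2):
    """Forward pass of a minimal fully connected (MLP) generator mapping
    a latent vector to a flat image vector. Dense layers treat every
    output pixel symmetrically, so the architecture itself imposes no
    spatial structure, unlike convolutional generators such as StyleGAN."""
    h = np.tanh(z @ w1 + b1)
    return np.tanh(h @ w2 + b2)  # values in [-1, 1], reshaped to an image downstream

rng = np.random.default_rng(0)
z = rng.normal(size=8)                          # latent code
img = mlp_generator(z, rng.normal(size=(8, 16)), np.zeros(16),
                    rng.normal(size=(16, 4)), np.zeros(4))
```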
- Learning Hierarchical Structure with LSTMs (2021). Paris, Tomas; Grissom, Alvin. Recurrent neural networks (RNNs) are successful at modeling languages because of their ability to recognize patterns over inputs of undefined length using their internal memory. However, the data kept in their memory decays over time due to a problem called vanishing gradients. Long Short-Term Memory (LSTM) units mitigate this problem with forget gates, which help reserve memory for only important data. This model has thus become very popular in natural language processing (NLP) because of its ability to model context. Compared to earlier models used in NLP, LSTMs excel at language modeling. However, some aspects of their success in the field have surprised researchers. Their apparent ability to model syntax suggests that they use mechanisms of learning which we do not yet fully understand. Research has been done on LSTMs and language syntax in an effort to further the field, yet an exhaustive account of how the inside of an LSTM works and what needs improving has yet to be compiled. Here, we hope to use previous research and some final experiments to provide a clear picture of how LSTMs model hierarchical syntax.
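The forget-gate mechanism this abstract credits with mitigating vanishing gradients can be seen in a single LSTM time step: the new cell state is an additive, gated mix of the old state and a candidate update. A minimal NumPy sketch with the standard gate equations; weight shapes and the gate ordering are conventions chosen here for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,), with gates
    stacked as [input, forget, output, candidate]. The forget gate f
    decides how much of the old cell state c survives, which is what
    lets gradients flow over long spans."""
    H = h.shape[0]
    a = W @ x + U @ h + b
    i = sigmoid(a[0:H])           # input gate
    f = sigmoid(a[H:2 * H])       # forget gate
    o = sigmoid(a[2 * H:3 * H])   # output gate
    g = np.tanh(a[3 * H:4 * H])   # candidate memory
    c_new = f * c + i * g         # additive, gated memory update
    h_new = o * np.tanh(c_new)    # exposed hidden state
    return h_new, c_new
```

When f is near 1 and i near 0, the cell state passes through the step almost unchanged, which is exactly the long-range memory path plain RNNs lack.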
- Representing Negation and Compositionality in Neural Models for NLP (2023). Gihlstorf, Caroline; Grissom, Alvin. I survey some of the difficulties neural machine translation models and large language models face when translating and modeling two linguistic phenomena: negation and semantic compositionality. I describe empirical findings of errors in model outputs, findings from probing models' internal representations of each phenomenon, and current approaches to mitigating these errors in neural machine translation and large language models. I argue that training neural networks to better understand compositionality in language may also increase their ability to model negation. I then conduct an experiment to measure how well large language models are able to infer the focus of negation in a negated sentence from prior context. Results show that the models are not able to infer the focus of negation most of the time.
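An experiment like the one this abstract describes needs prompts that present a context, a negated sentence, and candidate foci for the model to choose among. The helper below is a hypothetical sketch of such prompt construction only; the function name, prompt wording, and example sentences are all invented, and the thesis's actual prompt format is not shown.

```python
def focus_of_negation_prompt(context, negated_sentence, candidates):
    """Build a multiple-choice prompt asking which element of a negated
    sentence the negation targets; querying and scoring a language
    model is left to the caller."""
    options = "\n".join(f"{chr(ord('A') + i)}. {c}"
                        for i, c in enumerate(candidates))
    return (f"Context: {context}\n"
            f"Sentence: {negated_sentence}\n"
            f"Question: Which part of the sentence does the negation target?\n"
            f"{options}\nAnswer:")

prompt = focus_of_negation_prompt(
    "She usually drives to work.",
    "She did not take the bus today.",
    ["She", "take the bus", "today"])
```

Here the prior context makes "take the bus" the natural focus (she did get to work, just not by bus), which is the kind of inference the experiment measures.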
- Single Deletions in Source Sentences Trigger Hallucinations in English-Chinese Machine Translation with Transformers: An Analysis (2021). Shi, Ruikang; Grissom, Alvin. Machine translation is widely used, but all machine translation models can sometimes make mistakes. I review some previous studies on possible errors in machine translation. I then elaborate on how mistyping can cause a severe translation error: hallucination. I conduct experiments to examine the effect of deleting single letters or words on the probability of hallucination. The results show that both kinds of deletion may cause hallucination, with single-word deletion carrying the greater probability. They also show that training the model with more data can decrease the probability of hallucination. Moreover, untranslated proper nouns in the training data lead to inability, a specific type of hallucination.
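The perturbations this abstract studies, single-letter and single-word deletions, are easy to enumerate for a given source sentence. A minimal sketch of the variant generation only; feeding the variants to a translation model and detecting hallucinations are separate steps not shown here, and the function name is invented.

```python
def single_deletions(sentence):
    """Generate perturbed variants of a source sentence: every variant
    with one word removed, and every variant with one non-space
    character removed, for probing translation robustness."""
    words = sentence.split()
    word_dels = [" ".join(words[:i] + words[i + 1:]) for i in range(len(words))]
    char_dels = [sentence[:i] + sentence[i + 1:]
                 for i in range(len(sentence)) if sentence[i] != " "]
    return word_dels, char_dels

word_dels, char_dels = single_deletions("the cat sat")
```

Translating each variant and comparing the output against the translation of the clean sentence then gives an estimate of how often a one-character or one-word slip triggers a hallucinated output.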
- Towards Effective Machine Translation for a Low-Resource Agglutinative Language: Karachay-Balkar (2022). Rice, Enora; Washington, Jonathan; Grissom, Alvin. Neural machine translation (NMT) is often heralded as the most effective approach to machine translation due to its success on language pairs with large parallel corpora. However, neural methods produce less-than-ideal results on low-resource languages when their performance is evaluated using accuracy metrics like the Bilingual Evaluation Understudy (BLEU) score. One alternative to NMT is rule-based machine translation (RBMT), but it too has drawbacks, and little research has compared the two approaches on criteria beyond their respective accuracies. This thesis evaluates RBMT and NMT systems holistically, based on efficacy, ethicality, and utility to low-resource language communities. Using the language Karachay-Balkar as a case study, the latter half of this thesis investigates how two free and open-source machine translation packages, Apertium (rule-based) and JoeyNMT (neural), might support community-driven machine translation development. While neither platform is found to be ideal, this thesis finds that Apertium is more conducive to a community-driven machine translation development process than JoeyNMT when evaluated on the criteria of efficiency, accessibility, ease of deployment, and interpretability.
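The BLEU metric mentioned in this abstract scores a candidate translation by its modified n-gram overlap with a reference, scaled by a brevity penalty. The sketch below is a simplified single-reference version for illustration; production scoring uses tools like sacreBLEU with smoothing and standardized tokenization, which matter especially for the short, morphologically dense sentences of agglutinative languages.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU against a single reference:
    geometric mean of clipped 1..max_n-gram precisions times a
    brevity penalty. Zero overlaps are floored rather than smoothed."""
    c_toks, r_toks = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(c_toks[i:i + n]) for i in range(len(c_toks) - n + 1))
        r_ngrams = Counter(tuple(r_toks[i:i + n]) for i in range(len(r_toks) - n + 1))
        overlap = sum((c_ngrams & r_ngrams).values())   # clipped matches
        total = max(sum(c_ngrams.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(c_toks) > len(r_toks) else math.exp(1 - len(r_toks) / max(len(c_toks), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

Because BLEU only rewards surface n-gram overlap, it can undervalue valid translations of morphologically rich languages, one reason the thesis argues for evaluating systems on criteria beyond accuracy metrics.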