Browsing by Subject "ChatGPT"
- Scene and Unseen: GPT Bias in Script Writing (2024). Crawford, Charlie; Friedler, Sorelle.
  As Large Language Models and generative AI become increasingly prevalent, the question of how to measure bias in these systems becomes ever more crucial. This paper provides background on how these systems came to be, the biases they have been shown to carry, and the strategies researchers and developers can use to monitor those biases. Alongside this, I draw from the literature to understand the different strategies researchers use to define fairness and bias in generative AI, as a way to contextualize their audits. The literature review focuses on OpenAI's publicly available generative pre-trained transformer model, ChatGPT, as a running example of these themes. Following the literature review is an overview of research conducted on OpenAI's content moderation system, the "moderation endpoint". This research took the form of an algorithm audit, using television data as input to the moderation endpoint to determine how frequently texts were flagged as violating OpenAI's content moderation rules. For this input text, we compared real television scripts to scripts we had asked GPT-3.5 and GPT-4 to generate, in order to identify any trends in content moderation. We ultimately found that the moderation endpoint flagged a high proportion of scripts, both GPT- and human-generated, but had a much higher flagging rate for real scripts.