Browsing by Subject "Artificial intelligence"
Now showing 1 - 4 of 4
- Item: An Examination of AI-powered Exchange Traded Fund Returns and Their Exposure to ESG (2024). Millie, Samuel; Arredondo-Chavez, Alberto. This thesis analyzes whether AI-powered exchange-traded funds invest in ESG-responsible companies at a higher rate than traditionally managed ETFs, and whether those same AI-powered funds generate higher excess returns than traditionally managed ETFs. The analysis uses performance and ESG score data from Refinitiv. The results do not provide statistically significant evidence that AI-powered funds outperform, or possess higher ESG scores than, traditional funds. However, the thesis does find significant effects of different ESG scores on indexed performance, alpha, and the Sharpe and Treynor ratios of non-AI-powered funds, and those findings point to differing conclusions about how ESG metrics affect the performance of non-AI-powered funds. (An illustrative sketch of the Sharpe and Treynor ratio calculations appears after this listing.)
- Item"Dropped down halfway" The Flawed Designer and the Failure of the Posthuman in Richard Powers's Galatea 2.2(2021) Behrends, Miranda; McGrane, LauraThis essay explores the role of the humanist designer in the creation of AI in an academic and social posthuman setting. Set on the cusp of the 21st century, Powers's pseudo-autobiographical Galatea 2.2 reimagines the myth of "Pygmalion" during a time when technology has evolved what it means to have a body and be human. Powers's narrator becomes responsible for creating and instructing an AI, and in failing to do so, is presented as a flawed designer who is unable to reconcile the humanist structures that he lives by with the emerging posthuman environment. In his inability to break out of these structures, he does not properly recognize the hybridized existence of the AI he creates, leading to its isolation and eventual failure. I argue that through the representation of the failed humanist designer, the novel allows the reader to think more critically about the possible applications of the posthuman both to literary productions and academic institutions, as well as the possible capacities for human relationships and knowledge creation.
- Item: Model feature importance scores should reflect recourse (2024). Wernerfelt, Anneke; Friedler, Sorelle. Because AI is used to make decisions that affect humans in domains such as loan applications, healthcare, and criminal justice, it is crucial that model deployers can explain those decisions. Existing methods for explaining complex classifiers fall short because their explanations do not offer an affected person a way forward. The popular explanation methods LIME and SHAP measure feature importance, but they do not provide recourse: an actionable step someone can take to change an adverse classification. In the context of loan applications, recourse means recommending actions that a previously denied applicant can take to be granted a loan. This thesis catalogs the shortcomings of state-of-the-art explanation methods when their feature importance scores are turned into actions. We show that manufacturing recourse from LIME's and SHAP's outputs is insufficient and that a new feature ranking system is needed, and we propose a new metric, r_ij, for recommending actions to people who received an adverse label. (A toy illustration of the gap between feature importance and recourse appears after this listing.)
- Item: Scene and Unseen: GPT Bias in Script Writing (2024). Crawford, Charlie; Friedler, Sorelle. As Large Language Models and generative AI become increasingly prevalent, the question of how to measure bias in these systems becomes ever more crucial. This paper covers how these systems came to be, the biases they have been shown to carry, and the strategies researchers and developers can use to monitor those biases. Alongside this, I draw on the literature to examine the different ways researchers define fairness and bias in generative AI as a way to contextualize their audits. The literature review focuses on OpenAI's publicly available generative pre-trained transformer model, ChatGPT, as a running example. Following the literature review is an overview of research conducted on OpenAI's content moderation system, the "moderation endpoint". This research took the form of an algorithm audit, using television scripts as input to the moderation endpoint to determine how frequently texts were flagged as violating OpenAI's content moderation rules. For this input text, we compared real television scripts to scripts we asked GPT-3.5 and GPT-4 to generate in order to identify trends in content moderation. We ultimately found that the moderation endpoint flagged a high proportion of scripts, both GPT-generated and human-written, but had a much higher flagging rate for the real scripts. (An illustrative sketch of a single audit call appears after this listing.)
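For readers unfamiliar with the performance measures named in the first abstract, the sketch below computes a Sharpe ratio and a Treynor ratio from a hypothetical series of monthly fund and benchmark returns. The return values, risk-free rate, and variable names are illustrative assumptions, not figures from the thesis or from Refinitiv.

```python
import numpy as np

# Hypothetical monthly returns for an ETF and its benchmark (illustrative only).
fund = np.array([0.021, -0.004, 0.013, 0.030, -0.012, 0.008])
benchmark = np.array([0.018, -0.002, 0.010, 0.025, -0.010, 0.006])
risk_free = 0.002  # assumed monthly risk-free rate

excess = fund - risk_free

# Sharpe ratio: mean excess return per unit of total volatility.
sharpe = excess.mean() / fund.std(ddof=1)

# Beta of the fund relative to the benchmark (covariance / benchmark variance).
beta = np.cov(fund, benchmark, ddof=1)[0, 1] / np.var(benchmark, ddof=1)

# Treynor ratio: mean excess return per unit of systematic (market) risk.
treynor = excess.mean() / beta

print(f"Sharpe: {sharpe:.3f}, Treynor: {treynor:.3f}, beta: {beta:.2f}")
```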
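The third abstract contrasts feature-importance explanations with recourse. The sketch below is a minimal illustration of that gap under assumed synthetic data, not the thesis's r_ij metric or the authors' code: it trains a toy logistic-regression loan classifier, ranks features for a denied applicant by the magnitude of their contribution to the score, and notes that the top-ranked feature may be one the applicant cannot act on. The feature names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical loan features: "age" is not actionable; "income" and "debt" are.
features = ["age", "income", "debt"]
X = rng.normal(size=(500, 3))
# Synthetic approval label driven mostly by age in this toy example.
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([-1.5, -0.2, 0.3])        # a denied applicant
contributions = model.coef_[0] * applicant      # per-feature contribution to the score

# Ranking by |contribution| mimics a feature-importance explanation...
ranking = sorted(zip(features, contributions), key=lambda t: abs(t[1]), reverse=True)
print("importance ranking:", ranking)
# ...but the top feature here is "age", which the applicant cannot change,
# so the importance ranking alone does not yield recourse.
```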
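The final abstract describes an algorithm audit of OpenAI's moderation endpoint. The sketch below shows, under stated assumptions, what a single audit call per script might look like with the openai Python SDK; the script snippets, the grouping into "real" and "generated", and the flag-rate comparison are illustrative and are not the authors' code or data.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative stand-ins for real and GPT-generated television scripts.
scripts = {
    "real": ["INT. DINER - NIGHT ...", "EXT. PARKING LOT - DAY ..."],
    "generated": ["INT. OFFICE - MORNING ...", "EXT. BEACH - SUNSET ..."],
}

flag_rates = {}
for group, texts in scripts.items():
    flags = []
    for text in texts:
        response = client.moderations.create(input=text)
        flags.append(response.results[0].flagged)  # True if any category is flagged
    flag_rates[group] = sum(flags) / len(flags)

# Comparing flag rates across the two groups is the core of the audit design.
print(flag_rates)
```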