Evaluating the Effect of Training Data on Bias in Generative Adversarial Networks
Haverford College, Department of Computer Science
Dark Archive until 2043-01-01, afterwards Haverford users only
With the rising popularity of generative adversarial networks (GANs) for facial image generation, it is becoming increasingly important to ensure that these models are not biased. Given the many possible uses of networks that produce remarkably realistic face images, the potential for bias in these networks to cause harm is substantial. While StyleGAN is very effective when trained on FFHQ, the demographic imbalance of this dataset raises concerns. In this paper, we explore how GANs work, review past research on bias in GAN image generation, and consider alternatives for reducing bias in these models. We also examine new results on bias in the GAN discriminator, which suggest new research directions for mitigating bias in GANs.