Problem Statement: AI-generated images have become an increasingly ubiquitous part of our lives and have led us to reconsider authorship and authorial intent. The datasets on which AI image models are trained are often categorized imprecisely by crowdworkers.
Project Abstract: This project investigates how social categories and our ideas of authorship influence the way we train AI image models and the way we view their output. I hope to develop an interactive demonstration of how image datasets can perpetuate social categories.
Student: Nathan Zhang
Advisor: Andy Ricci