I was a freshman when I first heard Professor Sarah Kreps speak to Milstein Program students about modern drone warfare policy. Needless to say, we were all impressed. So, when Professor Kreps launched the Tech Policy Lab, I joined alongside my friend and fellow Milstein cohort member Nate Watson, and we set about finding a good research topic. We eventually settled on vaccine misinformation, since it seemed like a tractable problem that was highly relevant to current events.
Nate and I knew that we wanted our research to involve GPT-3, a language prediction model that generates human-sounding text using deep learning. Created by OpenAI, GPT-3 can write full articles, draft tweets, simulate conversations, and more. Although access to GPT-3 is restricted to a select group of organizations, Professor Kreps was able to secure it for us.
Given the extent of GPT-3’s abilities, it was a little overwhelming at first to narrow down our project’s scope. We also wanted to incorporate other state-of-the-art natural language processing (NLP) techniques into our project alongside GPT-3, if possible.
We liked the idea of programming a bot that could detect anti-vax discourse on Twitter and respond appropriately. Natural language processing can be used in all sorts of duplicitous ways, so creating a tool that could potentially enact positive change excited us. Ultimately, we decided on the following question: can GPT-3 effectively participate in online vaccine discourse? This question gives us a framework for building the bot and testing its effectiveness in a formal research setting.
To create the bot, we first use a separate natural language processing model called BERT (Bidirectional Encoder Representations from Transformers) to classify each tweet into one of several categories (pro-vaccine, anti-vaccine, ambiguous, etc.). Then, based on the content of that tweet, we use GPT-3 to generate a response.
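To make that concrete, here is a minimal sketch of the two-stage pipeline in Python. The classifier checkpoint, label name, and prompt wording below are illustrative assumptions rather than our exact setup, and the GPT-3 call uses OpenAI's Completion API as it exists in their Python client.

```python
from typing import Optional

import openai
from transformers import pipeline

openai.api_key = "YOUR_API_KEY"  # placeholder; our access was arranged through the lab

# Stage 1: a stance classifier. "bert-base-uncased" out of the box is NOT a
# stance model -- in practice this would be a checkpoint fine-tuned on
# labeled vaccine tweets.
classifier = pipeline("text-classification", model="bert-base-uncased")


def respond_to_tweet(tweet: str) -> Optional[str]:
    """Classify a tweet and, if it looks anti-vaccine, draft a reply."""
    stance = classifier(tweet)[0]["label"]  # hypothetical label scheme
    if stance != "ANTI_VACCINE":
        return None  # only reply to anti-vaccine tweets

    # Stage 2: ask GPT-3 for a brief, friendly, factual counter-response.
    prompt = (
        "The following tweet contains vaccine misinformation:\n"
        f'"{tweet}"\n'
        "Write a brief, friendly, factual reply:\n"
    )
    completion = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=60,
        temperature=0.7,
    )
    return completion.choices[0].text.strip()
```

Splitting the work this way means the expensive GPT-3 call only happens for tweets the cheaper classifier has already flagged.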
Much of our work so far has been learning how to use these tools. Neither Nate nor I had much experience with NLP or with scraping social media sites before this project. We’ve spoken to Professor Kreps’ co-researchers at ETH Zurich who have used GPT-3 in the past, and taken inspiration from their work. We’ve also spent hours banging our heads against the wall over online tutorials.
As of now, we have successfully scraped a labeled dataset of vaccine-related tweets to train our models and have implemented a specialized version of the BERT model that is fine-tuned for COVID-related tweets. We can now run a basic classifier that, more often than not, correctly detects the degree to which a tweet is pro- or anti-vaccine. We have also learned how to make GPT-3 generate the sorts of responses that we believe are effective counters to common anti-vax arguments.
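For readers curious what that fine-tuning step looks like, here is a rough sketch using HuggingFace's Trainer with COVID-Twitter-BERT, a publicly available BERT variant pretrained on COVID tweets. The CSV filename, column names, and three-way label scheme are assumptions for illustration, not our actual dataset.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL = "digitalepidemiologylab/covid-twitter-bert-v2"  # COVID-tuned BERT

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=3  # e.g., pro-vaccine / anti-vaccine / ambiguous
)

# Hypothetical CSV with "text" and "label" columns from the scraped tweets.
dataset = load_dataset("csv", data_files="labeled_vaccine_tweets.csv")


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")


dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="vax-stance-model", num_train_epochs=3),
    train_dataset=dataset["train"],
)
trainer.train()
```

Starting from a model already adapted to COVID Twitter language, rather than generic BERT, is what lets a relatively small labeled dataset go a long way.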
Over the next couple of months, we look forward to evaluating the effectiveness of these models. We plan to write a set of sample responses to anti-vax tweets ourselves, and use GPT-3 to generate another set of responses. Once we have both sets, we will survey our peers at Cornell on whether they think GPT-3 could pass as human, and whether they think its responses could be effective in changing people’s minds about vaccines.
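One way such survey items could be assembled, sketched below with placeholder data, is to pair each tweet with one human-written and one GPT-3-generated reply and shuffle the items so respondents cannot tell which source they are rating; this is an illustration of the idea, not our final survey instrument.

```python
import random

# Placeholder data standing in for real tweets and responses.
tweets = ["<anti-vax tweet 1>", "<anti-vax tweet 2>"]
human_responses = ["<our handwritten reply 1>", "<our handwritten reply 2>"]
gpt3_responses = ["<GPT-3 reply 1>", "<GPT-3 reply 2>"]

survey_items = []
for tweet, human, machine in zip(tweets, human_responses, gpt3_responses):
    for source, reply in (("human", human), ("gpt3", machine)):
        survey_items.append({"tweet": tweet, "reply": reply, "source": source})

random.shuffle(survey_items)  # blind the ordering; keep "source" for analysis
for item in survey_items:
    print(item["tweet"], "->", item["reply"])
```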
Overall, the Tech Policy Lab has been an incredible experience, and our research is just one of several exciting projects currently underway. I can’t wait to see how the Lab evolves over the next couple of semesters, especially when we can meet in person again.