Keeping Up with the Crises: Buntain Builds Digital Tools to Combat Misinformation
Most people remember where they were on April 15, 2013, the day of the Boston Marathon bombing, which left three people dead and injured hundreds.
Cody Buntain recalls watching the news coverage roll in at Looney’s Pub in College Park, struck by how the televised interview with the Boston Chief of Police directly contradicted the scrolling news ticker at the bottom of the screen.
A second-year computer science graduate student at the time, Buntain says the news ticker was sourced from social media because that information was most readily available.
“This contradiction demonstrated the major need for methods that help us find high-quality, truthful information rapidly, and shaped my decision to study social media during crises,” he says.
Now an assistant professor in the College of Information Studies, Buntain aims to make the online information ecosystem a more informative, higher-quality space by enhancing its resistance to manipulation.
His work focuses mainly on crisis informatics and online political engagement, including online manipulation, social media use in conflict zones such as Ukraine and Afghanistan, and the ways technology mitigates or contributes to social ills online.
One of Buntain’s major research efforts is advancing the understanding of how visual media are used to manipulate online audiences and influence elections. He is also examining how content moderation, both technological and manual, can positively or negatively affect the broader information ecosystem.
After earning his doctorate in computer science in 2016, Buntain continued at UMD as a postdoctoral fellow in the Human-Computer Interaction Lab, where he won a Best Paper Award for developing a tool to identify fake news in Twitter threads.
Although his research originally focused on crisis informatics and how people use social media during disasters, Buntain realized that the problems surrounding information quality during crises are similar to questions that arise during times of social and political unrest.
Consequently, he began working more closely with political scientists, which led to his second postdoctoral fellowship at New York University’s Social Media and Political Participation Lab, now known as the Center for Social Media and Politics.
After a year at NYU and two years as an assistant professor of informatics at the New Jersey Institute of Technology, Buntain was drawn back to his alma mater by its ideal resources for conducting interdisciplinary research. Located just nine miles from the U.S. Capitol, UMD is also one of the nation’s leading universities for computer science research.
More recently, Buntain joined the University of Maryland Institute for Advanced Computer Studies (UMIACS) as an affiliate faculty member. He emphasizes that UMIACS’ collaborative and computational infrastructure will play a major role in supporting his research.
“From leveraging large multi-modal datasets of online behavior to engaging with UMIACS centers and labs like the Computational Linguistics and Information Processing Lab and the UMD Center for Machine Learning—these resources will vastly facilitate and accelerate my work,” he says.
Looking ahead, Buntain’s upcoming research is heavily focused on artificial intelligence (AI). While AI has changed how politicians engage with the public, revealed new societal vulnerabilities, and exacerbated longstanding social ills, these technologies have also empowered new voices and engagement in political processes, he explains.
Buntain was recently invited to teach a new course in the UMD Honors College called “We the Artificial People: How AI Has Reshaped Politics.” The class will encourage critical evaluation of AI’s impact on political behavior and how it has opened new threats like foreign electoral interference, disinformation, and online manipulation through deepfakes and generative language models.
He is also part of the new Institute for Trustworthy AI in Law & Society (TRAILS), a cross-campus collaboration aimed at promoting fairness and trustworthiness in AI by incorporating both users and stakeholders in the development process. Led by UMD, the institute is supported by a $20 million award from the National Science Foundation and the National Institute of Standards and Technology.
—Story by Melissa Brachfeld, UMIACS communications group