Op-Ed: Data performativity and UConn’s COVID-19 dashboard

by Mary Bugbee, PhD student (Anthropology), and Sara Ailshire, PhD student (Anthropology) 

When UConn released its COVID-19 dashboard on Aug. 19, it was a welcome development for students, workers and residents in surrounding communities. There was a pervasive sense of uncertainty both among those who would be returning to or living near campus and among those who would remain online. As we prepared for reopening, many of us thought that the dashboard — presented as a tool to track daily COVID-19 positive cases and other relevant data — could help manage that uncertainty and provide transparency regarding our health and safety. Instead, the dashboard quickly shifted from a tool of transparency and uncertainty management to one that obfuscated data and generated rumor. 

During the early weeks of the dashboard, graduate students like ourselves were engaged in an administrative fight that, we would come to find out, was directly related to the quality and accuracy of the dashboard’s data. We were advocating for those enrolled in GRAD credits, who would be conducting all research and dissertation activities online, to be classified as online students, in order to save hundreds of dollars in fees during a time of increased precarity and decreased academic employment opportunities. Yet the university refused to correctly classify nearly 1,000 graduate students like us as online students, in an effort to preserve $500,000 in fee revenue. This misclassification proved to be a larger issue once the university began COVID-19 testing for commuter students. We are currently living in California and Ohio, and yet we were urged to take a test. There was no clear communication about testing for the nearly 1,000 graduate students in our position — those who were commuters in name only.  

As concerned UConn community members, and as medical anthropologists who have been trained to think critically about health metrics — specifically, how they are produced and what agendas they serve — we were quick to raise this issue with the relevant administrators and staff. We pointed out that counting fully remote students in the commuter testing pool introduces a sampling error: our results reflect conditions in California and Ohio, not in Storrs, so including them would distort the commuter positivity rates on the dashboard. Once it became clear the university would not reclassify us as online students and remedy the issue at its source, we hoped administrators might consider a workaround to preserve the integrity of the testing data. 

We both inquired about exemptions from testing but never received a definitive answer about how to proceed. Some staff said graduate students in our situation could be exempted from the test; others told us we could not; still others recognized the core issue at stake (data accuracy) but advised us to take the test anyway. Multiple administrators did not perceive the sampling error created by their policy to be a problem. We both ended up testing, concerned that we would face penalties for not doing so, but we remained worried about what would happen to our test results.  

We were frustrated. We saw multiple solutions that could have been implemented to protect the data. To start, the university could have sent official communication with guidance on whether, and under what circumstances, we could be exempted from testing, as well as what bureaucratic procedures we needed to follow to ensure we would not receive holds on our accounts. It also could have taken an ex post facto approach by adding a disclaimer about the sampling error to the dashboard. 

Photo courtesy of Reopen UConn’s website.

The dashboard became more to us than a target of criticism. For us, as medical anthropologists in training, it became a site to think through the roles of metrics, rumor and data performativity. These concepts have guided anthropological analyses in contexts as varied as Indonesia after the 2004 tsunami, malaria reporting requirements in Senegal and global statistics on gender-based violence and human trafficking.  

Rumor figures importantly here. When metrics are shared publicly but members of the community recognize them to be inaccurate or insufficient, rumors and speculation can easily proliferate. We began to wonder what the purpose of this dashboard was, after all, if accuracy was not being prioritized. Was it a PR tool meant to generate a false sense of security in the community? Was it an effort to perform transparency while in reality obfuscating accurate data? We noticed how the structure of the dashboard changed over the weeks with little or no explanation: the count of available isolation beds periodically increased; commuter testing from the initial reopening period was retired (and relegated to small print) the week after the commuter positivity rate increased; and other visually minor but high-impact changes were made.  

To top things off, the New York Times has been reporting data that is inconsistent with UConn’s. The university maintains that the difference lies in the Times’ inclusion of UConn Health data in its total, while the university disaggregates the two campuses. Even so, students have pointed out that this explanation does not fully account for the inconsistencies. Accounts of positive cases among the student body that do not appear in the official counts circulate online and off. What can the dashboard do to dispel rumor when its inaccuracy is well known to the community it is meant to serve? 

On Tuesday, Sept. 22, UConn’s public relations team released an article in UConn Today to address “misconceptions” and “questions” about the dashboard. When a dashboard sows confusion and publishes faulty data, it is only a matter of time before its operators must engage in damage control. The article itself is damning, as its very existence demonstrates that the dashboard is failing at what it is supposedly meant to do: provide transparent, accurate public health data to the community. 

We write here because we, like so many others, have been ignored at best and dismissed at worst. We can only conclude that UConn’s priority has never been full transparency about the public health data it collects. Rather, its COVID-19 dashboard has been a performance of transparency, one aimed at preserving the university’s image and reducing skepticism about the safety of reopening. We urge UConn to engage seriously with the concerns of our community and to let the dashboard serve as an accurate, informative venue for communication about campus health and safety.  

References 

Adams, Vincanne. 2016. Metrics: What Counts in Global Health. Durham: Duke University Press. 

Merry, Sally Engle. 2016. The Seductions of Quantification: Measuring Human Rights, Gender Violence, and Sex Trafficking. Chicago Series in Law and Society. Chicago: The University of Chicago Press. 

Samuels, Annemarie. 2015. “Narratives of Uncertainty: The Affective Force of Child-Trafficking Rumors in Postdisaster Aceh, Indonesia.” American Anthropologist 117 (2): 229–41. https://doi.org/10.1111/aman.12226.

Tichenor, Marlee. 2017. “Data Performativity, Performing Health Work: Malaria and Labor in Senegal.” Medical Anthropology 36 (5): 436–48. https://doi.org/10.1080/01459740.2017.1316722.
