Dossier / Should We Be Afraid of the Digital Revolution?

The Historical and Material Roots of Digital Innovation
An interview with Megan Finn


by Jules Naudet, 11 June 2022
with the support of CASBS



The digital world is the result of the accumulation of centuries of scientific and organizational progress. Virtuality is further enabled by the materiality of computers, objects that are themselves the product of economic exchanges and labor.

This publication is part of our partnership with the Center for Advanced Study in the Behavioral Sciences. The full list of our joint publications is available here.

Megan Finn is an Associate Professor at University of Washington’s Information School and holds a 2021-22 Lenore Annenberg and Wallis Annenberg Fellowship in Communication at the Center for Advanced Study in the Behavioral Sciences at Stanford University. She completed her PhD in 2012 at UC Berkeley’s School of Information, and spent two years as a Postdoctoral Researcher at Microsoft Research New England in Cambridge, MA with the Social Media Research Group. She also holds a Master’s degree in Information Management and Systems from UC Berkeley, and a B.S. in Computer Science from the University of Michigan, Ann Arbor LSA Honors College.

Her work examines relations among institutions, infrastructures, and practices in the production, circulation, and use of information. She explores these themes in her book Documenting Aftermath: Information Infrastructures in the Wake of Disasters (MIT Press, October 2018). She is currently working on a collaborative book project about COVID data infrastructure builders with Amelia Acker, Bidisha Chaudhuri, Ryan Ellis, Young Rim Kim, Janaki Srinivasan, and Stacey Wedlake. Together they examine the work of making data about COVID for non-expert publics in India and the USA, drawing on over 75 interviews with builders of eleven dashboards in the two countries.

She also does historical and contemporary empirical studies of responsible computing and data governance. With Amelia Acker, Yubing Tian, and Sarika Sharma, she is working on an NSF-sponsored project about Scientific Data Governance, Preservation and Archiving. They investigate the life of scientific data, specifically in relation to NSF’s requirement for Data Management Plans, with a focus on the relationship between national science policies and different epistemic cultures. She also worked on another NSF-funded project examining the development of ethical practices in the computer security research community with her collaborators Katie Shilton and Quinn DuPont. Her research further engages with a transnational comparative investigation of personal data policies in Seattle, Bangalore, and London with her associates Janaki Srinivasan, Elisa Oreglia, and Justin Petelka.

Throughout her work, she brings together perspectives and approaches from information studies, science and technology studies, and the history of media, information, and communication. Her research engages questions that require historical and contemporary analysis, including: How do changing technological infrastructures, information practices, and technology policies shape one another?

Books & Ideas: The continuing flow of technological innovations in the aftermath of the Internet revolution has progressively transformed the way we navigate the world today. From high-speed flows of information to the over-abundance of content, from cookies to perpetual behavioral monitoring, from online banking to bitcoins, from online work to prospects of an all-encompassing virtual-reality world, it seems that the frames and structures of the world we live in today are undergoing radical transformations. How would you characterize this specific moment of history we are in?

Megan Finn: We seem to be experiencing a moment when decades- and centuries-in-the-making technological innovations are being integrated and transforming everyday practices. We are just starting to realize the impacts. Take, for example, navigation. I rely on different navigation platforms to move through areas that I know (sometimes in counterintuitive ways) in addition to areas that are new to me (sometimes forgoing use of regional maps to orient myself). Screens, algorithms, and data mediate the ways that I move through and experience space.

I like to draw attention to the history of the assemblages of materials, practices, organizations, and regulations that make up “technological innovation” to push back on their sense of inevitability and hopefully open up opportunities to understand that many humans have made our current moment. This moment seems to be one where technological innovations that have been in development for centuries (navigational technologies, for instance, have been a focus of many different societies) meet developments in computing, telecommunications, and numerous other fields, making for new ways to move through and be in the world. The navigation technologies in my smartphone rely on research from several fields (e.g., cartography, civil engineering, urban planning, computer engineering), and, in the United States, several government agencies (e.g., Department of Defense, NASA, National Oceanic and Atmospheric Administration).

Though governments have underwritten the development of many of the navigational technologies that are reshaping our lives, the way that we experience them is through the private sector as customers: we “pay” to use navigational products with our data – data about where we went, how we got there, and who we were with. The extractive logics at the root of consumer products have made the few people who founded a few companies (“Big Tech”) very wealthy. The distribution of resources and experiences around digital technologies gives me pause as I try to come up with generalized descriptions of this moment in history. Questions about moments of cumulative transformation must center on who is seeing the results and how experiences are different. Do transformational innovations feel the same when you are a professional driver and new navigation algorithms and technologies are used to surveil and control your movements?

Though many of our technological innovations have been driven by military and commercial funding or goals, and many people have resisted these frames, perhaps the cacophony of critical questions about technology in this moment in history might open up the possibility for new thinking about how “innovation” is constituted. Approaches to innovation that attend to decolonization, anti-racism, and respect for nature can help profoundly reshape the sociotechnical relations that surround technical innovation.

Books & Ideas: Structural anthropology has classically posited the hypothesis of a homology or a correspondence between, on the one side, the physical built-in world in which we live and, on the other side, the layout of social groups and the ‘forms of classification’ through which we view ourselves and the world. Would you go as far as extending this analogy to the architectural design of our digital structures? To what extent would you say computer systems, the internet, social media, smartphones, etc. transform the way we make sense of the world we live in and transform the way we try to act within it?

Megan Finn: The language of transformation makes me uncomfortable because of all of the ways that technologies are themselves products of our social and regulatory worlds and the ways in which these technologies are folded into old socio-spatial formations. Yet the materiality of technologies, their digital architectures, makes new practices and social formations possible.

Take, for example, the millennia-old institution of higher education. The move-to-online that some workers underwent during COVID meant that many universities experimented with purely virtual learning, which utilized a suite of distance-learning tools and technologies that have been under development for more than a hundred years, and much longer if you count distance learning through the mail without any electronic devices. The move-to-online was envisaged and marketed as learning-as-usual, just online.

The reality was more complicated: some classroom dynamics were easily replicated online while others were not, and a host of new practices were enabled by online learning technologies. The design of Zoom or of “Learning Management Systems” (LMSs) seemed to replicate the dynamics of classrooms, with an instructor in the position of authority (holding all the permissions on an LMS, or acting as “host” on Zoom) and students who could be grouped, assigned, and taught. But where classroom dynamics could not easily be replicated (with pen-and-paper tests, for example), schools experimented with new technologies. Exam surveillance technologies used laptop cameras to observe students taking exams and monitoring software (or malware) to see what students were looking at on their devices. These exam surveillance companies would analyze these data to try to advise instructors about whether students were cheating. In addition to the creepy factor, these technologies could also be racist, because they did not work the same way for students with different skin colors. While many instructors used LMSs to emulate offline practices of distributing, commenting on, and grading assignments, some instructors found useful those LMS features that are new to digital environments, such as automatic grading functions or the ability to surveil how often and for how long students are on course pages. And these LMSs enabled easier oversight of classes by administrators and learning researchers, raising issues of academic freedom and consent.

Some students felt the move to online-only learning made certain aspects of education accessible as they never had been before. Work or care responsibilities made the flexibility of online learning a benefit for some students. And many instructors have made use of their social networks in new ways during COVID, inviting people from all over the world into the virtual classroom in ways that were possible but perhaps unrealized before. Students who didn’t have access to quality technology, stable internet connections, or quiet spaces to concentrate were put at a disadvantage by online learning environments. To the extent that online learning was inaccessible for lack of financial resources, it seemed to impose yet another penalty on students who might already have been struggling in an educational system that seemed to be designed for middle- and upper-class students. And the mental health implications of COVID for students are still being grappled with.

With the move to online learning, whole new organizations and sectors of workers have been integrated into higher education, such as those with technical and educational expertise in online environments. Using LMSs and digital systems to manage assignments has also enabled technology companies, such as plagiarism-detection firms, to amass enormous volumes of student assignments, bringing issues such as intellectual property to the fore.

But at the same time, even with all of our complex computing systems, many students asserted that they wanted to be in a traditional classroom with an instructor and other students. Having said that, the solid walls of the classroom feel more porous than ever to students and instructors. The Internet can be brought into the classroom in support of learning goals, aided by collaborative writing tools, cloud storage, and new research tools. Educators feel they must fight for their students’ attention when students use smartphones in class. And what happens in a closed classroom may appear online, with or without the consent of other students and professors.

Computing systems bear the legacies of our built and social environment – constructions that someone designing a rational and efficient system from the ground up would never have chosen. The massive inequities that permeate the structures of our socio-spatial world are often not just reproduced but sometimes reified by our digital technologies, such as exam surveillance technologies or even online learning itself. At the same time, the mark of computing systems can be seen everywhere in the built and social environments we all pass through every day, from the algorithmically generated navigation through the streets of our cities to the libraries emptied of books to the QR codes in restaurants in place of menus.

Books & Ideas: Does the materiality of the “old” world become obsolete as a consequence of our new ways of experiencing the world? How do you address the fears of those who foresee a danger of going all virtual and of becoming alienated from reality?

Megan Finn: On the one hand, one can argue that reality is online as much as it is offline. Internet discourse in the 1990s and earlier imagined that people could be someone else online, or that online spaces were a tabula rasa. This was exemplified by the cartoon of a dog using a computer and saying, “On the Internet, nobody knows you’re a dog.” While people could remain anonymous in some online spaces, researchers quickly showed that the social structures that existed offline appeared online (though they were also experimented with). Social structures in online and offline worlds are mutually constitutive, making it hard to analytically distinguish what counts as strictly online and what counts as offline.

On the other hand, social media, gaming, and other immersive online spaces have unique characteristics, not least of which is the way the affordances of these technologies tap into deep human needs. Natasha Schüll’s book Addiction by Design describes how everything from the design of casinos to the design of gambling machines to the ways that people pay to gamble helps gamblers stay in a space of flow where the time and concerns of everyday life can disappear. While Schüll’s book describes the specific sociotechnical assemblage around machine gambling, the addictive experiences that she describes are reminiscent of discussions of social media, smartphones, or video games. The designs keeping people immersed in these spaces are not accidental – so-called “dark patterns” and “persuasive design” practices aim at manipulating people into taking certain actions and decreasing users’ autonomy. The more we learn about the plasticity of the human brain, the harder it is to imagine that the materialities of online spaces do not have effects on our physiology.

All of this said, the online world is forever tethered to the offline world through the materials that are required to make and run computing technologies – the conflict minerals that are found in digital devices; the shipping and supply chain logistics that enable the assembly and distribution of the devices that we use; the factories in which they are made; the massive data centers that enable many of these technologies; the energy resources that data centers and our devices rely on. Climate change and pollution make it impossible to ignore the materiality of computing, even as the sensation of being immersed in the online world can seem to transcend everything.

Books & Ideas: Can you tell us how your research helps understand or navigate the consequences of these transformations? What does it tell us about the impact these changes have on our daily lives?

Megan Finn: My book Documenting Aftermath looked at earthquakes in Northern California in 1868, 1906, and 1989, and the production of public information infrastructures after moments of rupture and breakage, as well as the public information infrastructures in place if there was an earthquake today. Though times of disaster certainly do not represent the everyday, they do represent moments when we can sometimes see structures that are ordinarily operating invisibly. Those structures are indeed revealed when they break down or do not work as people expect them to.

Through the careful examination of public information infrastructures in different eras, I attempt to articulate how changes in the materiality of technologies (using the telegraph versus the Internet, for example) impact practices. For example, the materiality of public information infrastructures has certainly influenced the sharing of news. However, the speed of knowing that an earthquake has occurred has not much changed throughout these eras, even for people far away. By this I mean that in 1868, with a working telegraph, people in Boston could “instantly” hear about an earthquake in California. Yet once the telegram was sent to a telegraph office in Boston, it was not instantly news to everyone in Boston. The news would have to be broadcast in some way. After the 1868 and 1906 earthquakes, people across the country crowded around bulletin boards or at telegraph offices to get the latest updates about what had happened. In these moments, there was a huge outpouring of concern about what had happened, and some wanted to get in touch with loved ones however they could. Today, the pervasive availability of news is even more complete; if people have the Internet on their phones, and their phones are available, they might instantly have access to news of an earthquake. Still, the gain in speed seems somewhat marginal. People can find out quickly what is going on, but not so much more quickly than in 1868. Thus, it is not purely the technical speed of the news but also the political, social, and economic forces that organize the materials that determine which acts of distribution and interpretation matter post-disaster.

The materiality of public information infrastructures also shapes the kinds of publics that public information infrastructures can convene. In 1868 and 1906, letters from earthquake survivors were read aloud to interested parties in public spaces or printed in newspapers. Whose letter was read or reprinted, and who heard the content of letters, was embedded in the social practices of the day. People gathered at telegraph offices to hear the latest news of an earthquake. And different newspapers, based on their design, printing equipment, and printing frequency and distribution plan, influenced who read the reprinted letters.

Similarly today, the design of technologies shapes the contours of publics. Tarleton Gillespie argues that new technologies convene novel formulations of publics, or what he calls “calculated publics.” Corporations such as Facebook convene calculated earthquake publics. These calculated earthquake publics are initially determined algorithmically, and Facebook offers little visibility into how the software actually constructs them. Social media platforms also set the terms of how people interact with others after a disaster through design choices that ask for specific types of speech, display certain news, and promote particular interactions.

Beyond the way that the materiality of technology shapes the kinds of publics that convene, it also enables a type of social imaginary: the total archive. The ability to collect many digital sources quickly, afforded by the Internet along with a myriad of storage and algorithmic technologies, allows for the imagining of a kind of omniscience, in what Donna Haraway calls the “God’s eye.” Drones are now being used to collect information about an area that has been affected by a disaster. People with the power to distribute resources use these sociotechnical platforms—as part of the public information infrastructure—to imagine that they can see the whole situation and know best how to act. This has happened as local newspapers – which employed the reporters who would have been on the ground – have gone out of business because they cannot support their work now that advertising revenue is being spent on online advertising.

Books & Ideas: Does the fact that Big Tech companies and States have access to a sort of panopticon create a real threat to democracy? Do you see ways in which these new technologies could rather empower citizens and consolidate democracy?

Megan Finn: Most of us don’t know much about how Big Tech works on the inside, what data Big Tech has about us, or how Big Tech collaborates with different States. Additionally, while the label of Big Tech is useful, I suspect that it masks a variety of practices in different firms that create very different possibilities. And Big Tech firms are not the only private entities gathering data about us – a different class of firms, often referred to as “data brokers,” sells information that is supposedly about us (sometimes wildly inaccurate) to whoever is willing to pay. Clearly more regulation is needed so that citizens can have both an understanding of the workings of Big Tech and data brokers and opportunities for oversight of the data that is being collected and how it is being used.

That is not to say that there are not efforts at regulation. New data protection laws have been passed in the European Union and in parts of the USA such as California, Virginia, and Colorado. We are hopefully entering an exciting period where we are going to start to learn more about how data protection laws can be used by citizens and what the limitations of these new laws are in efforts to regulate Big Tech. In collaborative work that I have undertaken with Justin Petelka, Elisa Oreglia, and Janaki Srinivasan, we have been trying to understand the possibilities new data protection laws afford to users when they exercise data access rights. In our preliminary work, we have found that it is hard to tell what is going on inside companies when individuals make data access requests – and sometimes it seems that companies have the wrong idea about who users are. In our next phase of work, we hope to understand whether more collaborative efforts at exercising data protection rights might enable users to learn more about the workings of Big Tech.

But the full impact of data protection laws has yet to be realized, and the lack of regulation (in the USA) and of understanding of how Big Tech works has created confusion. In the USA, most of the terms of service agreements for Big Tech companies have provisions saying that they will share your data with law enforcement when they are legally required to do so (when there is a warrant). Snowden’s revelations about Prism in 2013 complicated this understanding, particularly for people living outside of the United States. In the Prism program, Big Tech companies such as Google, Apple, and Facebook supposedly shared data about personal communications such as email with the NSA. The lack of oversight of, and mistrust generated by, post-9/11 projects such as Prism have created uncertainty for users. This uncertainty has only been amplified by the charges that Cambridge Analytica used Facebook to manipulate voters in the 2016 election in the USA.

With this uncertainty, the reality of the Big Tech panopticon may be beside the point. If people believe that Big Tech has all the data about us and is sharing it with States, then it might have the panoptic quality that Foucault surmised if people self-censor and change their activities because they believe that they are being watched. To the extent that democracy makes assumptions that citizens are relatively autonomous, we need regulation, transparency, and oversight of Big Tech not just to understand if we are living in a panopticon but also to ensure that citizens understand to what extent they are being watched so they feel free to be the creative citizens that democracy demands.


To quote this article:

Jules Naudet, « The Historical and Material Roots of Digital Innovation. An interview with Megan Finn », Books and Ideas, 11 June 2022. ISSN: 2105-3030. URL: https://booksandideas.net/The-Historical-and-Material-Roots-of-Digital-Innovation

Nota Bene:

If you want to discuss this essay further, you can send a proposal to the editorial team (redaction at laviedesidees.fr). We will get back to you as soon as possible.
