

Dossier: Should We Be Afraid of the Digital Revolution?

The pressing need to harness emerging technologies
Interview with James Guszcza


by Jules Naudet, 10 June 2022
with the support of CASBS



The existence of a digital panopticon in the hands of a small number of democratically unaccountable companies is an affront to human dignity and self-determination. The new technologies, with their risks and benefits, must be harnessed by adequate social technologies.

This publication is part of our partnership with the Center for Advanced Study in the Behavioral Sciences. The full list of our joint publications is available here.

Jim Guszcza holds a PhD in the Philosophy of Science from The University of Chicago and is a former professor at the University of Wisconsin-Madison business school. Guszcza has worked as a data scientist for two decades and is currently the US chief data scientist of Deloitte Consulting and a member of Deloitte’s Advanced Analytics and Modeling practice. The creation of hybrid human-machine systems has been a recurring theme in his work. In recent years, he has applied behavioral nudge techniques to operationalize machine learning algorithms more ethically and effectively. Guszcza was a CASBS fellow in 2020-21 and serves on the scientific advisory board of the Psychology of Technology Institute.

Books & Ideas: The continuing flow of technological innovations in the aftermath of the Internet revolution has progressively transformed the way we navigate the world today. From the high-speed circulation of information to an over-abundance of content, from cookies to perpetual behavioral monitoring, from online banking to bitcoin, from online work to the prospect of an all-encompassing virtual reality world, it seems that the frames and structures of the world we live in are undergoing radical transformations. How would you characterize this specific moment of history we are in?

Jim Guszcza: We live in a technologically fraught era in which ever more aspects of our lives are digitally mediated. I think of this historical moment in terms of forces that are out of balance. Our digital technologies have progressed at an extraordinary clip. But the “social technologies” — laws, policies, institutional arrangements, business models, social norms, educational strategies, design philosophies — needed to shape and constrain them have not adequately kept pace.

This is perhaps due, at least in part, to prevailing ideologies surrounding digital technologies – techno-optimism and technological determinism – that have obscured the need to properly harness them through social innovation. These ideologies have made it difficult for policymakers, technologists, and the public to reason carefully about emerging technologies’ risks, benefits, and role in society. They have furthermore abetted a collective sense of complacency.

Techno-optimism has invited us to identify technological progress with societal progress. Taglines like “exponential technology” and slogans like “Information wants to be free”, “Data is the new oil”, and “AI is the new electricity” all convey useful insights. But when elevated to mantras, they promote the fantasy that technological outputs automatically yield desirable outcomes. In fact, designing the right social practices around technologies is typically no less a challenge than developing the technologies themselves.

We have neglected this truth at our peril. Until recently, it was widely assumed that digitally connecting people, giving them access to unlimited information, and creating new forms of “intelligence” through the combined power of big data and machine learning would naturally result in better connected, better informed, smarter, more productive societies. To be sure, the benefits of these technologies are real and substantial. But their laissez-faire development has led to phenomena that threaten both individual wellbeing and the functioning of democratic societies: addiction, depression, polarization, widespread misinformation, and increased economic inequality.

The techno-optimist ideology has been intertwined with a techno-determinist mindset which regards technology as a quasi-autonomous force to which societies must adapt. This mindset has prevailed until very recently, routinely telegraphed by book titles like The Singularity is Near, The Inevitable, and Rise of the Robots; and slogans like “Software eats the world”, “Move fast and break things” and “You have zero privacy… get over it.” Conceiving of technology as an inevitable, autonomous force short-circuits critical thinking and obscures the need to design technologies – together with the legal and institutional arrangements surrounding them – in ways that align with the needs and values of individuals and societies.

This is not a purely technical challenge, but rather a socio-technical one. And unlike technologies developed in lab environments, socio-technical systems are inherently complex. The introduction of major new technologies can therefore lead to consequences that are impossible to anticipate. We must therefore design our socio-technical systems guided by a sense of intellectual humility and a recognition of the need to test, learn, innovate, and improve in a gradual and iterative fashion. Moving fast and breaking things is a good way to innovate in controlled lab environments where the stakes are low, but perverse in the context of socio-technical complexity.

Reconsidered from this perspective, the “AI is the new electricity” mantra conveys a more subtle meaning. Harnessing electricity in ways that have benefited societies has required surrounding the basic technology with disciplines of human-centered design, engineering, safety inspection, public policy, and regulatory systems. Much the same is needed for technologies built using big data, machine learning, and digital infrastructures. Rather than letting these forces run rampant, we must harness them in societally beneficial ways. Doing so requires letting go of simplistic ideologies and mantras, and cultivating design, development, and deployment strategies that align with human needs and values.

Books & Ideas: Structural anthropology has classically posited the hypothesis of a homology or a correspondence between, on the one side, the built physical world in which we live and, on the other side, the layout of social groups and the ‘forms of classification’ through which we view ourselves and the world. Would you go as far as extending this analogy to the architectural design of our digital structures? To what extent would you say computer systems, the internet, social media, smartphones, etc. transform the way we make sense of the world we live in and the way we try to act within it?

Jim Guszcza: A small, personal anecdote illustrates the power of our digital prostheses to alter the way we perceive, understand, and behave in the world. In Atlanta on a business trip, I hailed a taxi to take me from Downtown to a restaurant in the Midtown neighborhood. This involved nothing more than a simple drive up a major thoroughfare. But the driver – an Atlanta local – took a meandering, circuitous route by mistake. The reason: He was following the instructions of a GPS device whose signals were garbled by the surrounding skyscrapers. This story should not be read as a criticism of GPS devices. To the contrary, such devices might be regarded as a paradigm example of human-centered artificial intelligence. They harness massive amounts of information in ways that enhance human autonomy by helping us overcome our cognitive limitations. In a literal way, they make our world more navigable. Nevertheless, in this instance a quirk of human psychology interacted with the technology in a way that resulted in “artificial stupidity” rather than “artificial intelligence”.

A web search for “Death by GPS” reveals that this story is not a one-off, but rather an instance of a widespread phenomenon: There is a tendency for people to suppress their common sense, background knowledge, and situational awareness in favor of fallible GPS indications – even in remote, dangerous environments. A basic maxim of human-centered design is that while one-off mistakes can be blamed on human error, repeated mistakes should be blamed on poor (or completely neglected) design.

Analogous points can be made about the spread of misinformation in online environments, the anxiety arising from comparing oneself to social media-curated versions of friends and acquaintances, the group polarization that arises from collaboratively filtered news and opinion content, and decision-makers who improperly suppress their faculties of scientific and ethical judgment in favor of algorithmic outputs. In each case, what is needed is embedding the technology within a decision environment that is designed to comport with human psychology.

It is very apt to characterize this missing layer – the human-centered design of digital environments – as a kind of “architecture”. The Greek roots of this word help us understand the missing layer. Software and machine learning engineering are modern types of techne – “crafts” or “skills”. Arche means “chief” or “master”. The master builder – the architect – provides the designs that guide technical development. Just as livable and humane cities are unlikely to be built by teams that understand only materials science and principles of construction, livable and humane digital environments are unlikely to arise from teams that understand only software development and machine learning engineering. In each case what is missing is a kind of design, or “architecture”.

Of course, it would be misleading to suggest that psychologically informed decision architectures are altogether missing in our digital technologies. To the contrary, dark patterns – design elements that manipulate users in ways that make it harder for them to express their preferences and achieve their goals – are today ubiquitous in online environments. The early Silicon Valley “growth hackers” who worked to make digital environments addictive were steeped in theories of “persuasive technology”. But such pathologies should not lead us to conclude that reflecting human psychology in the design of digital technologies is an inherently rogue activity. To the contrary, psychology can and should be harnessed ethically, in ways that enhance (rather than diminish) human autonomy. Achieving this will require more than a naïve call for ethical behavior on the part of large organizations. Digital technologies must be developed and deployed in the context of appropriate regulatory arrangements and business models that align economic incentives with societal needs. Simply calling for ethical development of technology is unlikely to be effective. Ethical development must be incentivized.

Books & Ideas: Does the materiality of the “old” world become obsolete as a consequence of our new ways of experiencing the world? How do you address the fears of those who foresee a danger of going all virtual and of becoming alienated from reality?

Jim Guszcza: Fears of virtual reality eclipsing lived physical reality remind me of the fears that a “singularity” version of AI will somehow emerge, dominating human intelligence and rendering human labor obsolete. In each case, I think the question should be turned on its head: why should we believe that these are anything more than science fiction scenarios?

In the case of AI, headlines are replete with examples of algorithms capable of super-human feats: searching for subtle patterns in massive databases, proving mathematical theorems, beating world chess and Go champions, weighing risk factors in statistically optimal ways. But an equally important headline is usually buried: The tasks which come easiest to humans – recognizing objects or voices, moving around in space, judging human motivations, understanding simple narratives – tend to be hardest to implement in machine form. The philosopher Andy Clark once remarked that humans are “good at frisbee, bad at logic”. AI algorithms are the opposite. They tend to shine at tasks involving the reasoning faculties of human cognition that have evolved most recently. But they have less of a comparative advantage in the perceptual and motor skills that have evolved over millions of years. This is a major reason why it is most sensible to design algorithmic technologies to serve as complements to – not substitutes for – human cognition.

Similarly, it would presumably be highly nontrivial to build virtual reality technologies capable of robustly substituting for the physical reality within which we have evolved over millions of years. Just as AI is best framed as a way of extending – rather than mimicking or replacing – human cognition, VR is best framed as a way of enhancing – not substituting for – our experience of reality.

Books & Ideas: Can you tell us how your research helps understand or navigate the consequences of these transformations? What does it tell us about the impact these changes have on our daily lives?

Jim Guszcza: I am co-leading a project at the Center for Advanced Study in the Behavioral Sciences at Stanford University, generously funded by the Rockefeller Foundation, whose goal is to articulate the need for a new field of AI practice. This field would directly confront some of the challenges discussed above. There exist well-established methodologies that enable machine learning engineers to optimize pre-specified objective functions on convenience samples of data (often at web scale). But machine learning offers neither the scientific tools nor the conceptual resources needed to evaluate which objectives we should optimize, or how to construct samples of training data appropriate to the situation at hand. Such decisions require scientific and ethical judgments, informed by context-specific knowledge. That they tend to fall outside the scope of mainstream AI practice is evidenced by the many stories of algorithmic bias from recent years. We might call this the “first mile problem” of machine learning: Before training algorithms, we must first construct a data sample that adequately registers the world in scientifically and ethically sound ways.
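A minimal sketch of this point, assuming a standard Python/scikit-learn workflow with synthetic, hypothetical data (it is not drawn from Guszcza’s project): the fitting step dutifully optimizes a pre-specified objective on whatever sample it is handed, and nothing in the code can flag whether that sample registers the world the model will actually be applied to.

```python
# Illustrative sketch of the "first mile problem": the optimizer works on
# whatever convenience sample it is given; judging whether that sample is
# adequate happens (or fails to happen) before this code is ever run.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical convenience sample: only cases from a narrow, easy-to-collect
# range of the feature are observed.
x_sample = rng.uniform(0.0, 0.3, size=500).reshape(-1, 1)
y_sample = (x_sample[:, 0] + rng.normal(0.0, 0.05, size=500) > 0.15).astype(int)

# The fitting step minimizes a pre-specified loss (log loss) on this sample.
model = LogisticRegression().fit(x_sample, y_sample)
print("accuracy on the convenience sample:", model.score(x_sample, y_sample))

# The broader population the model is later applied to looks different, and
# nothing in the fit() call could have warned us about the mismatch.
x_world = rng.uniform(0.0, 1.0, size=500).reshape(-1, 1)
y_world = (x_world[:, 0] + rng.normal(0.0, 0.05, size=500) > 0.5).astype(int)
print("accuracy on the broader population:", model.score(x_world, y_world))
```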

The envisioned field would also address the “last mile problem” of machine learning: Our goal is typically not merely to optimize algorithmic outputs, but rather real-world outcomes. This implies the need to embed algorithms within decision environments and workflows that align well with such features of human psychology as bounded cognition, bounded self-control, bounded self-interest, bounded ethicality, and the role of emotion in human reasoning. Machine learning enables us to optimize algorithms; but the real goal is typically to optimize systems of humans working with algorithmic technologies. Doing so requires a scientific foundation that extends beyond machine learning to encompass ethics and the behavioral sciences as well.
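As a minimal sketch of what embedding an algorithm in a decision workflow can mean in practice, consider a hypothetical triage pattern (the names and thresholds below are illustrative assumptions, not taken from the project): the algorithmic score settles only clear-cut cases, ambiguous ones are routed to a human reviewer, and the thresholds themselves encode judgments the model cannot supply.

```python
# Illustrative sketch of the "last mile problem": the object of design is the
# human-plus-algorithm workflow, not the model in isolation.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str      # "approve", "deny", or "refer_to_human"
    rationale: str

def triage(score: float, low: float = 0.2, high: float = 0.8) -> Decision:
    """Route a case using an algorithmic risk score in [0, 1].

    The thresholds are not statistical quantities the model can provide; they
    encode judgments about error costs, reviewer capacity, and which decisions
    ought to be automated at all.
    """
    if score < low:
        return Decision("approve", f"score {score:.2f} is well below the risk threshold")
    if score > high:
        return Decision("deny", f"score {score:.2f} is well above the risk threshold")
    return Decision("refer_to_human", f"score {score:.2f} is ambiguous; human judgment is needed")

for s in (0.05, 0.50, 0.95):
    print(triage(s))
```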

Consider medical decision-making as an example. Independently of the big data and machine learning revolutions, psychologists have long known that even simple algorithms outperform the unaided judgment of expert clinicians on a wide range of decisions. Yet good data scientists also know that algorithms cannot distinguish appearances (the data) from reality (the processes that generated the data), and that they are devoid of the kind of judgment needed to assess whether an indication is appropriate in a specific situation. They therefore cannot replace human experts.

The true challenge therefore goes beyond using machine learning to optimize algorithms. It is necessary to design processes of human-algorithm collaboration in which the relative strengths and limitations of one counterbalance those of the other. In contrast with the well-established field of machine learning engineering, designing such processes is today more of an art than a science. Hence the need, articulated in our proposal, to create a design-led field of human-machine “hybrid intelligence” engineering. We argue that such a field cannot be founded on computational and information sciences alone. It must also integrate principles drawn from ethics and the behavioral sciences. Furthermore, it must incorporate participatory approaches to design to ensure that local knowledge, and the needs and values of multiple stakeholders, are all reflected in the design of the human-machine system.

Books & Ideas: Does the fact that big tech companies and states have access to a sort of panopticon create a real threat to democracy? Do you see ways in which these new technologies could instead empower citizens and consolidate democracy?

Jim Guszcza: The big tech companies’ consolidation of granular behavioral data poses a threat to democracy in at least two ways: through their sheer size, and through their growing ability to mediate political discourse and algorithmically manipulate the dissemination of opinion, information, and misinformation.

Regarding the former issue, digital businesses tend to be relatively easy to scale up: their costs are largely fixed, and the marginal cost of serving an additional user is low. In addition, many of them also enjoy powerful network effects: the more users and developers they attract, the more valuable they become to future users and developers. This results in a virtuous cycle of growth. Big data enables digital companies to magnify these inherent advantages by using machine learning to continually improve and personalize their services.

Such dynamics give rise to enormously wealthy and powerful organizations. Their economic clout affords these dominant, hard-to-displace companies excessive power to crush or buy out smaller competitors, oppress their employees and gig workers, and capture government regulation. In short, the dominance of a small number of highly powerful tech companies moves societies away from the ideal of democracy and towards plutocracy.

Aside from issues relating to size, data-rich big tech companies further threaten democracy by sorting users into like-minded affinity groups, and by training algorithms on granular behavioral data to recommend different news, opinion, and misinformation pieces to different users. Such algorithms often exploit natural human psychological tendencies such as confirmation bias, in-group bias, and the role of emotion in political judgment. These tactics benefit companies by increasing users’ screen time. But they also create serious democracy-threatening negative externalities in the form of large numbers of misinformed, polarized citizens. The January 6, 2021 attack on the United States Capitol vividly illustrates the gravity of the threat.

Beyond the issues of size, power, and polarization, the existence of a “digital panopticon” in the hands of a small number of democratically unaccountable companies is simply an affront to human dignity and self-determination. As it is currently configured, big tech directly undermines some of the bedrock human values that democracy is intended to buttress. On a more hopeful note, the work of Taiwan’s Digital Minister Audrey Tang demonstrates that digital tech and big data can be harnessed in ways that make democracies more, rather than less, democratic. Taiwan’s g0v (“gov-zero”) community of decentralized “civic hackers” has created popular open platforms in which diverse stakeholders can self-organize various uses of data, debate policy issues, and partner with government officials to upgrade public services. This “bottom-up” approach is inherently democratic: it uses digital platforms to enable collective organization, enhance citizens’ agency over their data and technologies, and magnify their ability to guide their government’s operations.

by Jules Naudet, 10 June 2022

To quote this article:

Jules Naudet, « The pressing need to harness emerging technologies. Interview with James Guszcza », Books and Ideas, 10 June 2022. ISSN: 2105-3030. URL: https://booksandideas.net/The-pressing-need-to-harness-emerging-technologies
