How can we make technology care?

Introduction

In The Bleeding Edge, a Netflix documentary about the medical devices industry, the producers examine the conflict between technological innovation and patient safety. The film interviews several women who experienced life-changing complications after receiving permanent birth control implants or vaginal meshes, while conducting a parallel set of interviews with medical professionals and device industry representatives. The film highlights an important conundrum surrounding technology and its increasingly embedded place in contemporary society. Among the themes explored in the documentary, the one that stood out for me was the repeated observation that medical devices were being released onto the market with little research into their long-term effects, essentially pitting innovation against consequences. Getting a product onto the market quickly overrode the need to adequately research side-effects, risks and the likelihood of complications. Barbara Adam and Chris Groves call this the valorisation of speed.

The medical devices industry is one example of emerging technologies being deployed before their impacts are fully known, so how might a similar situation be avoided in other fields? Kate Crawford asks an important question towards the end of her talk on the politics of AI: what kind of world do we want? It’s a question that I’ll keep coming back to throughout this post, because it asks us to evaluate how technology might contribute to that world, and to what extent. So let’s ask the key question of this post: how can we make technology care? To answer this, we should probably begin with outlining the contemporary paradigm used in technology development, and then explore what is meant by ‘care’ and how it might support addressing the social impacts of both present and emerging technologies.

Technology: it just works…

As Sheila Jasanoff succinctly states, modern technology is ‘the application of expert knowledge to achieve practical goals’, a means to an end. What ensues is a curious sort of futurism, wherein the outcome is already known and technology is employed to attain said outcome (what Adam and Groves call a ‘present future’). Within this viewpoint, we are told not only that technological progress is inevitable, but that the technology itself is completely apolitical and free from bias. Any unintended consequences, such as discrimination or violence, are blamed on the human element. This creates a paradigm in which technologies like machine learning are used to ‘solve’ complex social problems. The drawback of such an approach is that, well, the future is essentially unknowable, and while the designers of a technology may have had only one particular ‘end’ in mind, the reality is that uses are as unpredictable as their users. It has also become strikingly apparent that technology is neither free from bias nor a politically innocent entity. As an example, various machine learning systems have been shown to exhibit racial or gender biases through the use of skewed datasets. It is worth noting that these biases are not necessarily conscious, but are instead symptomatic of the overwhelming representation of white males working in technology companies. The problem here is the conflict between the supposed simplicity of technological development and the actual complexity of 21st-century life. This gives rise to a question: how might this be addressed?

Care

Care is described by Joan Tronto as ‘everything we do to maintain, continue and repair our ‘world’ so that we can live in it as well as possible’. At its heart, care concerns itself with acknowledging and working within the complexity of human lives, by weaving together the myriad processes, relationships and objects that shape and define our lives. As is probably already apparent, this conflicts greatly with the ‘practical goals’ approach discussed earlier. Care is messy; it constantly asks us to consider how everything we do affects those around us (and vice versa) and, unlike the predetermined outcomes favoured by technology, it is a continuous process. It can be argued that care is rooted in a collectivist ideology, which puts it at odds with the prevailing (particularly Western) socioeconomic system that favours individualism and autonomy. Because of this, care as a concept has largely been relegated to the private sphere, becoming the work of the marginalised and poorer members of society – who, coincidentally, are also disproportionately affected by technological solutions.

So how can care contribute to creating a world that we want? By employing care’s focus on the interconnected web of complexity. The ‘move fast and break things’ mentality (as Meredith Broussard delightfully writes) that dominates tech industry thinking sees everything as a technical problem with a technical fix. As we’ve already seen, this means potential consequences aren’t examined or acknowledged until after a product is on the market and its effects felt. Care would necessarily slow this process down. It would also draw on the experiences and lives of those who have performed caring activities and incorporate this into ongoing design conversations. Most importantly, care ‘thickens’ our involvement with our creations (as Maria Puig de la Bellacasa describes), establishing what she calls a continuous ‘as well as possible’ relationship. Unintended consequences might be impossible to predict, but care can assist in mitigating them. Of course, there are challenges to incorporating care into technological design.

The challenges

By far the biggest challenge facing a care-oriented design system is convincing entire industries to completely rewrite their principles and motivations. It is somewhat obvious that care, in the form outlined above, doesn’t gel particularly well with our current socioeconomic system. Neoliberalism and capitalism both champion individualism and simplicity, which sits at odds with the complexity-favouring messiness of care. The largest companies in the world are predominantly technology companies, with vast resources and political influence, and as such they dictate the public narrative surrounding technology. A care-based approach would potentially redistribute the benefits of technology away from the specific groups in which they are currently concentrated, and so would likely meet resistance.

How do we care about technology?

This is not an easy topic to discuss in a single blog post. Because it crosses several disciplinary boundaries, care can mean different things to different people. It can be seen as a response to the separation of technology from its social impacts and consequences, or as a criticism of the capitalist imperative towards endless innovation. If we return to Kate Crawford’s question about how we want our world to be, reframing technology design through a care-based perspective requires us to ask how emerging technologies fit into the complex, interconnected web of human life.