Deciding how to decide: Six key questions for reducing AI’s democratic deficit
Artificial intelligence (AI) has a “democratic deficit” — and maybe that shouldn’t be a surprise. As Jonnie Penn and others have argued, AI, in conception and application, has long been bound up with the logic and operations of big business. Today, we find AI put to use in an increasing array of socially significant settings, from sifting through CVs to swerving through traffic, many of which continue to serve these corporate interests. (We also find “AI” the brand put to use in the absence of AI the technology: a recent study suggests that 40% of start-ups who claim to use AI do not in fact do so.) Nor are governments of all stripes lacking interest in the potential power of AI to patrol and cajole the movements and mindsets of citizens.
Yet while as citizens and consumers, many of us still enjoy the notional power to vote with our ballot paper and our purse strings, several features of AI, the datasets upon which it is developed, and the practices through which it is embedded, pose novel challenges to equitable decision-making about this decision-making technology at the level of society. The first set of constraints are technical: it is difficult to make many types of AI systems explain themselves. Without a basic understanding of how a given implementation of AI works, at the level of either individual decisions or system functionality, it is difficult to hold AI and its developers and deployers to account. The second set of constraints are economic. The power of network effects makes it increasingly difficult to exercise true consumer choice: if all my friends are on one social network, or if all my purchasing history is held by a single shopping site, my desire to leave these services for preferable equivalents is counterbalanced by the inconvenience of doing so. This problem was not created by AI, but is certainly exacerbated by its increasing ubiquity, as algorithmic “secret sauce” powers the ranking of items in news feeds, search results, and shopping suggestions. (And this of course presumes that people have any sort of choice, however compromised or problematic; this kind of agency is not enjoyed by those fleeing digitally mediated persecution, or those deprived of any sort of free choice of internet services.)
Upstream of these constraints, however, lie more fundamental questions about the basic legitimacy of using AI in society. Dogged engineering and diligent policymaking may improve algorithmic transparency, or soften network effects — but we may still find ourselves facing larger questions about exactly the sort of societies we want, and how we want to use AI to help us get there. Opening the black box doesn’t remove the risk of Black Mirror-style social control; indeed, in some sense China’s “social credit” system depends on a managed degree of system transparency to marshal behaviour. And while firmer antitrust regulation may be a necessary step to open up, if not break up, today’s monopolistic tech titans, it may not be sufficient; many of the firms under threat today were themselves the indirect beneficiaries of earlier action against Microsoft.
Nor will mere compliance or limited electoral democracy alone enable us to harness the genuine power of AI and related technologies to propel social good. The 37 billion tonne elephant in the room here is climate change. AI could prove an enormous boon in helping deal with the extraordinary coordination complexity that effectively mitigating climate change will likely require. But this in turn will require both enormous investment and political will, both of which rest on securing public consent. In the five and a half hours of presidential debates between Hillary Clinton and Donald Trump in 2016, not a single question addressed climate change — so even if the candidate who hadn’t called global warming a “hoax” had prevailed, she would have struggled to claim a political mandate for the sort of large-scale strategy needed to halt the climate crisis.
All of which obliges us to consider strategies beyond the ballot box to reduce AI’s democratic deficit, and to start reshaping this technology to help us reshape society in desirable ways. Of course, the appealingly simple idea of “democratising AI” masks millennia of debates over what this democratisation should look like, and how it should be pursued and preserved. This risks a kind of Rawlsian recursive loop, since before we know what democracy should look like, we need to know how we should decide what democracy should look like, but before that we need to decide how to decide, and so on. However, the sudden ubiquity of AI in society (not to mention the many other urgent issues that AI may itself help us address) encourages us to break out of this infinite loop and move rather more swiftly and pragmatically. In this spirit, it seems to me that we need to answer six important, practical questions before we can hope to meaningfully reduce AI’s democratic deficit. I introduce these key questions in turn.
What should we ask about AI?
The first and most basic question concerns which features and applications of AI require societal input. This is the realm of definitions and domains. First, we need a sensible definition of AI — or perhaps better yet, an alternative term that better encompasses what AI is and can do. In this sense, a phrase like the one used by the IEEE, “Autonomous and Intelligent Systems”, may preclude some of the more hyperbolic hopes and fears around AI. Second, and more complicated, we will need to know at the outset of any public consultation what it is about the way in which AI is developed and deployed that is of societal significance. Should the focus be on the ethical principles that should govern AI in general, on the application of AI systems in particular settings, or on the very anatomy of an AI system and the resources it draws upon? Some organisations, such as the Ada Lovelace Institute, have adopted a wide remit to research and deliberate on “the impact of AI and data-driven technologies on different groups in society”. Yet not all domains are created equal, and it is clear that in the most pressing areas of (potential) application, such as health, specific action is needed, as demonstrated by Nesta’s recent report on “creating a people-powered future for AI in health”, and in the work of my colleagues at the Turing’s Data Ethics Group on a code of conduct for digital health. Similarly, self-driving cars clearly introduce new social and ethical questions — above and beyond the classic “trolley problem” — and therefore merit close consideration, as promised by “Driverless Futures”, a new collaborative research project led by Turing Fellow Jack Stilgoe.
Nor should we neglect to squarely address the new conceptual challenges that AI poses. A recent synthesis that I co-wrote found that an important ethical principle “new” to AI, compared with other forms of patient-oriented ethics like bioethics, is explicability — that is, the ability to understand how an AI decision was reached, and how to hold accountable those responsible for it. The Information Commissioner’s Office and the Alan Turing Institute are taking this question forward with work to address the technical and ethical complexities of “explainable AI”.
Of course, these are somewhat false dichotomies: we can, and should, adopt both a broad perspective on AI’s capabilities and risks, and a deep focus on the specific challenges that AI poses in particular domains. But this in turn will require a “horses for courses” approach, asking different questions in different ways for different purposes. In other words, another consideration is how we should seek societal perspectives on the impact of AI.
How should we ask about AI?
The sociologist Pierre Bourdieu once argued that “public opinion does not exist”. He cited three basic assumptions about public opinion that, he argued, do not hold: first, that everyone has an opinion; second, that everyone agrees on the question; and third, that everyone’s opinion is of the same value. We will tackle the final allegedly flawed assumption in the following section, but address the first two points here. It is certainly true that presuming at the outset that a given issue is salient to a particular person or sample of people is a fool’s errand. This seems to apply in the case of digital technology, at least when its perceived importance is formally measured against other political questions. This can be hard to measure, since many “issue importance” polls pose a forced choice to respondents, in which technology is not one of the issues asked about — though as John Oliver’s notorious “research” into public perceptions of Edward Snowden in 2015 anecdotally showed, it does seem that public awareness of technology is relatively lacking compared to more traditional concerns.
Of course, this general problem about a lack of public awareness or interest can be avoided by designing public opinion research in a way that prioritises the issues one wishes to address. At a time when various pressing issues jostle for attention, it is not surprising that the time and inclination that an ordinary person has to focus on AI is limited. But by carving out some time and space for focused consultation about the serious issues that AI raises, it would not be surprising to find that, contra Bourdieu, most people do indeed “have an opinion”. Yet this does not resolve the second contention, that “everyone agrees on the question”. Indeed, taking this line of reasoning further, neither should we presume that everyone agrees on how the question is asked.
Above I have referred rather generically to “public opinion research”, but in truth the methods of eliciting this public opinion are disparate, and the choice of method is likely to yield quite different responses. Representative opinion polls are often seen as the default method of eliciting public opinion, in a way that can be standardised, summarised and compared with earlier poll results. But particularly in the case of complex and often opaque digital technologies, opinion polls are but one instrument, and a rather blunt one at that. Many organisations are instead turning to focus groups, round-tables, citizens’ juries, and other immersive methods for eliciting public opinion, including the Royal Society of the Arts, the Wellcome Trust’s work with Future Advocacy, and our own work with the ICO. These exercises deliver not (only) headline numbers, but also more nuanced, sensitive articulation of the key ethical values and social concerns shaping people’s perceptions of how AI technology should or should not function in different contexts.
Finally, a third category of opinion research employs experimental approaches. This includes MIT’s “Moral Machine” project, which recreates the “trolley problem” for self-driving cars, by asking online participants to choose which direction an out-of-control car should swerve, in order to save some fictional hypothetical individuals by mowing down others. Such exercises may prove problematic, both by focusing attention on rather hypothetical questions about self-driving cars at the expense of deeper and more urgent concerns, and because research suggests that responses to hypothetical dilemmas are not necessarily predictive of real-life dilemma behaviour. Nonetheless, experimental approaches are an interesting complement to other qualitative and quantitative methods for creating a rich understanding of public values, such as justice and fairness, as it relates to AI. But whether it is opinion polls, focus groups or simulations, gaining perspectives on AI inevitably involves deciding who, exactly, to speak to — another key question.
Who should we ask about AI?
Any attempt to engage with and understand public perspectives on AI needs to grapple with a rather hard truth up front: that the design and development of AI systems is carried out by a small, clustered coterie of engineers and other experts, most of whom work for corporations. In other words, a tiny fraction of humans are building the systems that are, or soon will be, used by many if not most of the rest of humanity. Nor, crucially, may we assume that this small group is representative of humanity as a whole, whether measured along the dimensions of race, gender, class, religion, sexuality, ability, nationality, and so on. Reducing this aspect of AI’s democratic deficit is likely to involve several complementary responses: diversifying this workforce; debiasing (as far as possible) the datasets that are used to train AI systems; and consulting with “diversely diverse” groups of citizens about their preferences for and responses to AI design and use.
Where we want to speak of what “the public” thinks about AI (or any other issue), the first port of call has tended to be public opinion polls, which strive for overall representativeness. But, notwithstanding the critiques of opinion polling noted above (as well as other methodological challenges like falling response rates), in many contexts it is actually unhelpful to think of “the public” as a single entity. Moreover, while nobody sincerely thinks that “the public” has, or can have, a single opinion (except perhaps those British politicians pushing through Brexit supposedly at the behest of “the will of the people”), the value of the mere exercise of working towards consensus has been debated. While political theorists like John Rawls, Jurgen Habermas and Kwasi Wiredu have, in their own ways, championed the idea that societal consensus, or something like it, is possible and desirable, others such as Amartya Sen, Chantal Mouffe, Nancy Fraser and Catherine Squires have (again, in their own ways) critiqued both the plausibility and the preferability of such an approach. While I don’t hope to resolve these decades of debate here, for present purposes it seems most sensible to again adopt an “a la carte” approach. Having an overall sense of representative public opinion will be of use in some cases, and as Reuben Binns has persuasively argued, algorithmic accountability can be usefully framed in terms of the ideal of expansive “public reason”. But in practice, it will also be important to engage directly with particular groups who are most likely to be affected by a given application of AI. These groups need not be small — consider consulting the millions of people engaged in gig economy work for their perspectives on the algorithmic ranking or routing deployed on their platforms — but they are likely to be salient. 
And in particular, greater attention should be paid to those individuals and groups who have typically been left behind or left worse off by technological change.
In this sense it is therefore heartening to see that the UK’s Centre for Data Ethics and Innovation has recently committed in its ways of working not merely to conduct “public engagement” per se, but more specifically to “ensure the inclusion of marginalised groups and those most affected by technological developments in the debate.” This follows the Ada Lovelace Institute’s stated aim to “convene diverse voices to create a shared understanding of the ethical issues arising from data and AI”. Whilst not everything can be boiled down to a single “debate”, nor is a truly “shared understanding” perhaps strictly possible, the commitment of these organisations to promoting previously underrepresented views and values should be welcomed. Eagle-eyed readers might have noticed, however, that these two examples, as with most of the organisations noted thus far, are based in the UK. The question of “who” we should ask about AI therefore invites questions in turn about “where” and “when” questions should be asked.
Where and when should we ask about AI?
A recent Politico piece claimed, seemingly without irony, that “ethics” would be “Europe’s silver bullet in [the] global AI battle”. While some of the greatest treatises on ethics have emerged in the context of warfare, nonetheless it is a little jarring to read that ethics itself can serve as a “bullet” in a “battle” with implicitly amoral global competitors. More jarring still, though, was the presumption of a European monopoly on technology ethics. While the European Commission has indeed sought to make sustainability and trustworthiness central to its AI strategy, the notion that Europe is implicitly ethical (or worse yet that ethics is implicitly European) is a dangerous one. For one thing, many of the highest-profile statements of principles for ethical AI are global, including the IEEE’s Ethically Aligned Design as well as the Asilomar Principles and the Montreal Declaration. But more importantly, to see Europe as the sole or even chief steward of ethical technology is a mistake. We need instead to work towards not merely an international but also an intercultural ethics of technology, consisting of contributions from the full range of global traditions.
Of course, for the purposes of law and policymaking, the traditional Westphalian notion of bounded sovereignty is still powerful. It is therefore perfectly legitimate for national governments to address their own AI democratic deficit in terms most relevant to their populations. But as Luciano Floridi has argued, to focus on the nation state as the sole or pre-eminent information agent even in a single society is increasingly naive, as multi-agent systems that transcend physical borders become ever more prominent and cross-border problems become more prevalent. (And the farce of applying “technology” to borders directly was recently laid bare by my colleagues at the Turing, in the context of the impasse over the Irish border post-Brexit.)
Moreover, seeing the American and Chinese states as geopolitical foes is to miss most of the people that reside there and the value of their experiences and traditions — whether this is, say, the shocking racism encountered by an African-American woman navigating the web, or the contributions of Confucianism to the ethics of technology. Neither, better yet, should the debate over the ethics of AI be framed as tripartite between the US, China and Europe, a classic global-northern assumption; whether out of genuine inclusivity, or just the narrower interests of geopolitical risk, market share or talent acquisition, policymakers would be wise not to ignore the rapid innovation and increasing skills of other regions, nor to miss the earlier effects of datafication and “legibility” in the service of international development. Nor need the question of “where” we should ask about AI restrict us to geography: as noted above, focusing on overlooked or underrepresented groups, even within the context of a particular place, enables a richer set of perspectives to be gathered and more inclusive, effective responses generated.
The idea of generating “responses” to the impact of AI obliges us to grapple with the further question of when any exercise in public engagement or consultation should occur. It is self-evident, at any rate, that we must avoid an “oil spill” approach to AI, focused entirely on environmental clean-up, community regeneration and legal restitution after the seemingly inevitable damage is done. This suggests instead an “ethics by design” approach, a principle recently adopted by the International Conference of Data Protection and Privacy Commissioners. But in addition to the questions raised above about how and with whose input an ethical AI system should be built, even if “ethics by design” is necessary, it may not be sufficient. As Sylvie Delacroix has argued, the training of algorithmic systems, however ethically, typically involves a “one-stop shop” approach, which can preclude these systems from mimicking the “habit reversal” seen in humans. This raises the risk that an AI system trained in this way will, in Delacroix’s words, “leap morally away”. As my colleagues at the Oxford Internet Institute have argued, explanation of how an AI system works can occur, with some limitations, either ex ante or ex post; we may usefully apply this framing when we consider societal input into the operations of AI systems as well. When we seek people’s perspectives on AI, then, may be not only before, but also during and after, a given system is in place.
Why should we ask at all — and who are “we” to ask, anyway?
Above I have sketched some key considerations around who is asked about the design and implementation of AI; when, where, and how they are asked; and what they are asked about. These are all vital questions, but the question of “why” they should be asked is of even more fundamental importance, which it may be valuable to restate in closing. I have argued that there is a democratic deficit associated with AI systems, resulting from several sets of constraints — technical and economic, but also political and ethical — that emerge from the characteristics of many sorts of AI and the varied social circumstances in which it is deployed. Traditional ballot-box politics and pocket-book economics are struggling to accommodate both the severe risks and dangers of AI systems — not least to vulnerable populations — on one hand, and their potential to enhance social good on the other.
But asking people what they want from AI in ways that move beyond electoral politics and free market economics may offer another, less obvious benefit. While the hype and fears around AI are often greatly overstated, nonetheless the fact that some amount of “smart” decision-making agency is being increasingly ascribed to non-human entities for socially significant decisions presents an opportunity. It obliges us to make more concrete the views and values that should shape our societies. With the rise of the internet, much has been made of the idea that we now reveal our innermost thoughts, feelings and worries online by making them digital, through seemingly anonymous Google searches or half-written-but-unpublished Facebook posts. These new data sources are, it must be said, mostly just quite creepy (not to mention a privacy black hole). But training AI systems to operate equitably may come to work the other way round. By making manifest people’s views and values, in the diverse ways discussed above, it may no longer be that digital technology reveals who we supposedly really are, but instead, that we make technology act more like us. In this sense, then, we should not just aspire for AI to reason publicly, through system transparency and explainability, but also for it to reason public-like. We need to move from black boxes to glass boxes, but we also need to move from Black Mirror to a glass mirror — reflecting not society as it is, but society as we think it should be. Asking people for their perspectives, preferences and priorities for what AI should do is the first essential step in that process.
This is of course aspirational, to put it mildly. AI systems today seem more likely to amplify rather than limit existing inequities and to radicalise rather than reconcile opposing views. The reasons for this are, of course, bound up in structural factors which stretch beyond technology itself but which have shaped its creation and adoption. As noted, not least among these factors is the logic of capitalism — or “surveillance capitalism” as Shoshana Zuboff has prefixed it, a logic which shapes much of everyday digital life. Wresting control of the steering wheel will not be easy, to say the least. But this notion of asking people what they want from AI also carries danger, because it ascribes enormous power to whoever is doing the asking — the question of who is this “we” doing all the asking, anyway? is one that I trust has crossed the reader’s mind already. We are unlikely to satisfactorily resolve the Rawlsian recursive loop I described above, whereby before we can decide how AI should decide, we need to decide to decide how AI should decide, and so on. Instead, we should be more pragmatic, by vesting responsibility (and funding!) for managing this process in equitable public institutions, civil society organisations, and other nonprofits who together may be seen to represent society as a whole and the assemblages within it, especially those who stand to lose the most from AI that is designed and deployed in a negligent or malevolent way. In and of itself, this would mark a stark contrast with the status quo, whereby those who have the most to gain, in profit and power, have by far the most sway.
We should also be content with incomplete, incoherent or even contradictory answers to the questions people are asked — and not only externally contradictory (between people), but also internally contradictory (within people’s own contributions) — and as such, we should explore in turn ways that such disparities can be captured by the functioning of algorithmic systems. Politics is messy. But this messiness is far preferable to the faux-omniscience of the AI of today. In the same way, it is far preferable that AI systems not merely be “trained” on our data, hoovered up with minimal consent, and deployed only with power and profit in mind, but instead be designed and used in ways that respect — better yet, revere — the preferences, perspectives and priorities of people. So let’s start by asking.