29 January 2020 • 42.2 • Philosophy


Can AI Dream of a Better World?

Maya Krishnan

Brian Cantwell Smith
The Promise of Artificial Intelligence: Reckoning and Judgment
The MIT Press
2019
184pp
£20.00


When I moved from Silicon Valley to Oxford five years ago, it was a relief to escape from the alienation that unreflective techno-optimism tends to inspire in those who lack sufficient stock options. At least, that’s what I told myself. In fact, I found significant aesthetic satisfaction in slinking around under the palm trees like a disgruntled wraith, supposing myself one of the last among the dying race of humanists. So, upon returning last month to this land of entrepreneurial hedonism (where Salesforce and Palantir have downtown offices a two-minute walk from shops selling cannabis-infused deodorant and buffalo-milk ice cream), I was concerned that Silicon Valley had changed. The flyers on Stanford campus intimated a new alignment between the humanities and tech. One flyer advertised a seminar series on AI that would address issues like bias and privacy. Another promoted a separate course on “Ethics, Public Policy, and Technological Change” (the tagline: “Stanford Created Silicon Valley. With Great Power Comes Great Responsibility”). A friend told me that she went to a conference on “AI ethics” that included a swag table where attendees could pick up custom-printed Moleskines. The new role for the humanities—supporting technologists’ vision of wielding Great Power and having Great Responsibility—at first raised the concern that California-gothic alienation had been rendered obsolete. How can you adopt a critical tone when you’re being offered free Moleskines? But there was no real cause for concern. Silicon Valley still offers ample opportunity to cultivate a sensibility of mild disaffection.

The new ethos taking hold in Palo Alto and Menlo Park—call it "sunshine normativity"—is built on the implicit promise that, once the technologists have made their Faustian bargains in profit’s name, the humanists will swoop in for a last-act intervention and redeem the soul of the whole enterprise. It is therefore no surprise that ethics and political philosophy, and not metaphysics or epistemology, have become tech’s philosophical darlings. The way that philosophy has been incorporated by Silicon Valley reflects a questionable but prevalent local assumption about the relationship between scientists and technologists on the one hand, and humanists on the other (in Stanford lingo: between "techies" and "fuzzies"). The assumption is that "techies" are supposed to understand the world we live in and the systems we build, while "fuzzies" add a few humanistic niceties for good measure. Now, it might seem strange to see soul-redemption identified as a "nicety." And yet, the care of the soul has started to look like one more form of self-cultivation. The humanities in general, and normative theorising in particular, have become like a Mozart symphony you would hear in the new Bing Concert Hall or a Diebenkorn painting you could see at the likewise new Anderson Collection building—pleasant to have around, good for you in a yoga-and-kale sort of way, and cordoned off from more serious work happening elsewhere. Although I do not doubt that many technologists’ recent interest in the humanities is sincere, and though I admire any philosopher who has figured out how to secure a reasonable paycheck, it seems clear that no small amount of relationship dysfunction threatens the recent shacking-up between philosophy and tech.

A different approach to uniting philosophy and technology is on offer in Brian Cantwell Smith’s recent book, The Promise of Artificial Intelligence: Reckoning and Judgment. Smith’s strategy for applying philosophy to issues in technology in general, and artificial intelligence in particular, has two major components. First, he believes the focus should be on theoretical questions, rather than ethical or political ones, notwithstanding the current popularity of the latter. Correspondingly, he is skeptical of the recent ethical turn, and holds that there must be adequate accounts both of what AI is and of what intelligence requires before a substantial investigation into AI ethics can take place. Thus, one of the book’s central objectives is to characterise intelligence. Here ethics does enter the picture, although in a way that diverges from the strictures of sunshine normativity. For Smith, intelligence requires abilities like "ethical commitment" and the recognition that there are "things that matter". This demanding account of the conditions for possessing intelligence underpins Smith’s provocative contention that neither the techniques called "deep learning" nor the broader category of "second-wave AI" will yield systems that possess "genuine intelligence". Technologists overestimate what recent AI can do because they underestimate how much intelligence requires.

The quasi-existentialist conditions on intelligence point to the second major component of Smith’s approach to the relationship between philosophy and artificial intelligence. This is the choice to use Heideggerian existential phenomenology as the primary resource for understanding what it is to be intelligent. Here Smith participates in an older tradition associated with Hubert Dreyfus and the late John Haugeland (Haugeland is Smith’s major philosophical interlocutor, and the book is dedicated to him). Like Dreyfus and Haugeland, Smith is interested in disentangling Heidegger’s insights from Heidegger’s often daunting vocabulary, and in using those insights to interpret developments in computing. The interpretation of Heidegger on offer also supplements Heideggerian existential phenomenology with ethical content whose origin in the source texts is not obvious (although how much of a divergence this really is from Heidegger’s own vision is a complex issue, and fidelity to the source is less important than the coherence of the final product). But while Smith’s focus on theoretical questions is salutary and his critical remarks about the capacities of recent AI systems are useful, the techno-Heideggerianism that informs his approach is questionable. Ultimately, the Heideggerian framing leads Smith to adopt an account of what intelligence requires that is both implausibly inflated and implicitly anthropocentric. And this problem would remain even if one opted not to supplement Heidegger with ethical content and framed the requirements on intelligence in terms of existential commitment alone (rather than existential commitment and ethical commitment). This book’s most promising contributions are those most easily separable from its particular philosophical lineage.

Smith contextualises his skeptical remarks about current AI by narrating the historical shift from "first-wave" to "second-wave" AI. The story that Smith tells provides a provocative "ontological" explanation for why first-wave AI failed and second-wave AI succeeded. This diagnostic project, which in many of its points echoes Smith’s 1996 book On the Origin of Objects, is largely free-standing relative to Smith’s overarching point that contemporary second-wave AI lacks intelligence. Still, the chapters in which Smith lays out his diagnostic narrative provide another interesting case study concerning how theoretical philosophy might contribute to the tech scene. Philosophy can identify assumptions (for instance, about what counts as knowledge, or what it is to be an object) that coders and scientists do not realise they have implicitly incorporated into their systems. By offering different accounts, philosophers can call those assumptions into question and facilitate the exploration of alternative possibilities. Steve Jobs famously credited LSD with expanding his vision (thereby inspiring headlines such as "Would the iPhone Exist if Steve Jobs Did Not Take LSD?"); perhaps theoretical philosophy will be able to brand itself as functionally equivalent to microdosing.

Smith narrates AI’s evolution as a story about transcending the limitations of inflexible traditional assumptions in order to occupy a richer and wilder world. Within the tale of AI, first-wave AI systems (which became a major research focus in the mid-1950s) represent the pre-enlightened condition. According to Smith, first-wave systems were based on symbolic representations, organised data using propositions, and carried out tasks by performing long sequences of operations. Logical operations on discrete data structures were supposed to mirror a logically structured world containing discrete objects. Although first-wave AI had many successes, the research focus in AI shifted to (and has remained on) very different second-wave systems. Second-wave systems track many more factors, which they do not represent symbolically or by using discrete concept-like structures, and their computations rely on massive parallelisation. These are the systems that underpin recent AI’s uncanny ability to identify which faces appear in photographs and the intrusive accuracy of Amazon’s recommendations based on past purchases. One central aspect of Smith’s explanation for why second-wave AI has been able to perform tasks (such as object-recognition) at which first-wave AI failed is that first-wave AI had an "untenably rigid view of formal ontology". That is, it assumed that "the world comes chopped up into neat, ontologically discrete objects". Second-wave AI has woken up to "the nature of reality", which consists in "a plenum of surpassingly rich differentiation, which intelligent creatures ontologically 'parse' in ways that suit their projects". Strait-laced first-wave AI failed at object recognition because it was wrong about what objects were in the first place.

It’s a bold and intriguing hypothesis. But on this point, Smith moves too fast. There is a big difference—one might say a whole world of difference—between claiming that there aren’t any discrete objects out there, and claiming that it’s hard to figure out (or to design AI systems that can figure out) which discrete objects there are. Moreover, Smith’s discussion regarding the putatively non-discrete nature of reality runs together many different issues. There is the metaphysical question regarding whether objects have sharp boundaries (Smith raises this issue when he asks where one fog ends and another one begins). There is the semantic question regarding whether terms and concepts have sharp boundaries (Smith addresses this in a discussion about whether particular landmasses in a photo reproduced in the book count as islands). And then there is the epistemic question regarding whether representing the world always requires abstracting away from, and thereby disregarding, some details about it. Smith seems to think that the abstraction issue somehow provides the key to the object-boundary issue and concept-boundary issue. All these problems subsequently become lumped together via talk about “richness” and the “ineffable” (e.g. “[s]econd-wave reckoning… is succeeding by recognising the world itself as ineffably dense”).

Here Smith would have been helped by greater attention to what is going on in contemporary philosophy. Vagueness and metaphysical indeterminacy, topics whose subject matter encompasses all three of these issues, have formed an active research area in analytic metaphysics over the past few decades. Philosophers have proposed many strategies that account for putative cases of indeterminacy while remaining consistent with the view that the world contains discrete objects with precise boundaries. Although these strategies (like all proposals in the vagueness and indeterminacy literatures) are controversial, the important point is that there is no quick route from the examples Smith discusses to his sweeping ontological conclusions. Unfortunately, however, Smith’s engagement with this literature is limited to a single dismissive remark about the putative infelicity associated with using the word "vague" to discuss these topics.

Philosophy’s assumption-questioning tendencies are most useful when they are deployed with patience. This is not to say that analytic philosophy is the only philosophical tradition that possesses resources suitable for meticulously assessing AI or technology. The point is rather that it is too easy for philosophy in a public, cross-disciplinary, or non-academic context to turn into a source of grand pronouncements, when in fact the most helpful contribution that philosophy (whether analytic or not) can make is to model precise thinking about complicated issues. Given Smith’s own diagnosis that first-wave AI was based on faulty ontological assumptions, there is additional reason to be careful about the ontological claims that are put in their place. Still, Smith’s analysis provides an excellent example of what philosophy might do in relation to tech, apart from engaging in hand-wringing over driverless cars. There is significant value in his attempt to examine the unnoticed metaphysical and epistemological presuppositions at work in AI.

The more basic project this book undertakes is to explain why even second-wave AI is far from possessing intelligence. An important contribution Smith thereby makes is his sobering assessment of the significance of second-wave AI’s current successes. Given Siri’s ability to answer all of one’s most mundane questions, it might seem that we are surrounded by intelligent machines. But, Smith argues, these systems do not really identify, recommend, or answer at all. Terms like "refer", "recognise", or "categorise" are ones that humans apply to computers on the basis of their own interpretations of the operations the computers perform; they do not describe what the computers are in fact doing. For instance, regarding facial recognition (or "recognition"), AI systems learn to map images of faces to names or labels. Since humans know the referents of those names or labels, they can use the systems to "recognise" the people in the picture. But the AIs themselves do not "recognise" anything.

This negative point—that current AI does not really refer, categorise, and so on—raises the question of what is involved in genuinely performing those tasks. Smith frames his polemic using a contrast between the two concepts that give the book’s subtitle its biblical ring: "reckoning" and "judgment". "Reckoning" is what current second-wave AI does well. It is cognate with what Smith calls "calculative rationality". This capacity is distinct from "judgment", the faculty that Smith believes is required for genuine intelligence:

I use judgment for the normative ideal to which I argue we should hold full-blooded human intelligence—a form of dispassionate deliberative thought, grounded in ethical commitment and responsible action, appropriate to the situation in which it is deployed.

Smith also says that "judgment" requires "deep contextual awareness, and ontological sensitivity", before later in the book equating judgment with "something like phronesis". He further states that lacking judgment amounts to "not fully considering the consequences and failing to uphold the highest principles of justice and humanity and the like". A lot goes into judgment, then—this is a very expansive and eclectic account. It is accordingly doubtful whether these many conjuncts really characterise a unified phenomenon. More importantly, as an account of intelligence, "judgment" incorporates extraneous necessary conditions. After all, a system can demonstrate "contextual awareness" or even "deliberative thought" without upholding the "highest principles of justice and humanity". Aren’t sociopaths able to exhibit contextual awareness and engage in deliberation?

One might respond to this objection by arguing that sociopaths are able to aim at upholding the highest principles of justice and humanity, and that it is the capacity to aim at upholding ideals, rather than actually upholding them, that is required for having intelligence. On this view, sociopaths do not provide a decisive objection to Smith’s account of intelligence-as-judgment. But it nonetheless seems perfectly possible for there to be creatures whose normal mode is to be what humans would call sociopathic. Moreover, science fiction is replete with examples of beings whose normal or default mode is to behave in ways I would categorise as intelligent, but who do not necessarily have ideals, let alone principles of justice and humanity. Maybe I’m wrong to hold that such creatures are possible. But if so, what is needed is a successful argument to show that genuine intelligence and "ethical commitment" cannot come apart, and such an argument is not provided. It is correspondingly unclear why intelligence is as hard to achieve as Smith claims.

Now, one argument that Smith seems to provide goes like this: intelligence requires reference, and reference requires ethical commitment (or at least commitment of some sort or another—this is not always made clear); therefore intelligence requires ethical commitment. But if the link between intelligence and ethical commitment seems shaky, the link between reference and ethical commitment (or any other kind of commitment) is even more so. Smith poses the following question in presenting his proposal about the conditions on reference: "What must be true of a system in order for it to register an object as an object in the world?" His strategy for answering this question appeals heavily to Heideggerian existential phenomenology.

One central Heideggerian resource to which Smith appeals is the notion of the world. Smith takes it as crucial that objects are held as fitting together into one unified reality. What is distinctive about the world is not just its unity. The world also has a distinctive priority. When one’s representations fail to fit the world, it is the representations that are wrong, not the world: "judgment involves a system’s giving priority to the world, over its internal states and representations". Moreover, the system must be able to orient itself toward the world, rather than toward a representation or model of the world. Referring to objects is about recognising them as part of the world, and worldhood has a distinctive structure, the recognition of which presupposes the aforementioned commitments and practices.

While this much seems plausible, matters get dicier when more elaboration is added. Smith explains the point about giving priority to the world by saying that “reasoning systems, and all of intelligence, must be deferential” (emphasis in original). But what is involved in being “deferential” to the world, above and beyond, say, updating one’s representations to fit the world when there is a conflict between the world and one’s representation of it? A lot, it seems. Smith argues that “a system that is deferentially orientated toward the world will go to bat for its references being external, will be existentially committed and engaged”. But I don’t see any justification for moving from the more prosaic gloss provided above to the language of existential commitment and engagement, or to the almost theological language of deference. If there is more to “commitment” and “engagement” than what has been outlined above in relatively simple terms, what is it? Moreover, why is it a condition on reference, let alone intelligence?

Although Smith states that he wants to avoid Heideggerian jargon, and he does avoid invoking Dasein, Geworfenheit, or notions like "equiprimordiality", he retains the Meisterdenker’s tendency to use portentous language as a substitute for clear explanation. For instance, according to Smith, it is not enough that a system maintain a disposition to check for conflicts between its idealised representations and the world—what is needed is rather a "relentless attunement to the fact that registration schemes necessarily impose non-innocent idealisations". An intelligent system must show "thick engagement" and "intentional commitment to the world"; it must "go to bat for what is right", "go to bat for the truth", and "go to bat for the world as world". One problem here is that it is very unclear, to me at least, what these locutions mean. What is involved in an "attunement" being "relentless", apart from being constant or frequent or ongoing? The colloquial substitutions are less dramatic, without being any easier to understand. What does it mean to "go to bat" for truth, rightness, or the world as the world? Smith provides glosses on his demi-Heideggerian language, but one puzzling term is always cashed out using another.

This charge of obscurity may be unfair, insofar as what one counts as "obscure" is often relative to one’s own indoctrination. Accordingly, some readers may find themselves better able to navigate the high existentialist language than I was. My own more fundamental concern is again that Smith builds far too much into the task of referring to an object (which he in turn takes to be a condition on intelligence-as-judgment), or even into the somewhat richer task of registering an object as being part of a world. Or rather, Smith builds a lot into these tasks without offering justification for doing so. He states, "[in] order to stand in a noneffective semantic relation to the distal world and to take there to be an object out there—to take an object as an object, for there to be reference at all—there must be stakes, norms, things that matter". It’s an interesting thesis, but no argument is given for it. Likewise, Smith states that "existential commitment" (whatever that may be) is required to prevent a situation in which "one’s thoughts or representations lose all of their significance". Without an argument for this point, one might suspect equivocation between different senses of "significance"—"significance" as 'having a meaning', and "significance" as 'mattering'. Why should we think that the former requires the latter?

Smith states at the book’s outset that he wants to identify a non-prejudicial account of intelligence that can in principle be met by humans and non-humans alike. This is an important feature for any account of intelligence that might be useful for interpreting recent developments in tech. But when carefully constructed arguments are not provided, it is easy for anthropocentrism to creep in. It is true that things matter to humans, and that humans refer and are intelligent, but by what right do we make mattering into a condition on reference or intelligence? Since humans are the only uncontroversial examples of creatures that can refer and that are intelligent, it is easy to conflate aspects of the particular way in which humans refer or are intelligent with conditions on reference or intelligence as such. While talk of "going to bat for the world" may capture something about the way humans are, it is not clear why any intelligent system—much less any referential system—must be the same way. Reverting to a less ethically infused version of Heidegger’s framework would not resolve this issue. The same problem arises even if one attempts to eschew ethical vocabulary and instead explain "commitment" using non-ethical and purely existentialist terms.

In asking what the preconditions are for genuine intelligence, Smith poses an important question for those involved with contemporary AI. Moreover, his focus on what systems must be like in order to register an object as being in the world might be a good one. But Smith’s proposal that existentially and ethically engaged judgment is the essence of intelligence strikes me as unconvincing, at least on the basis of what is said in this book.

It is also doubtful whether Heideggerian existential phenomenology provides the right framework for making further progress on these issues. Though it is salutary to receive a reminder that it is a sophisticated task to register objects as being in a world, Heidegger himself was exclusively interested in Dasein, a mode of being that he and his readers share (put crudely: that humans share). The philosophy that Heidegger developed was particular to Dasein. The way Smith’s proposal inflates existential engagement into a necessary condition on reference and intelligence reflects the awkwardness inherent in using Heidegger’s work to define intelligence in a way that is neutral between humans and whatever other creatures are out there. Heidegger was not interested in providing a neutral account of worldhood, and Smith’s work does not transcend Heidegger’s local ambitions. While The Promise of Artificial Intelligence poses important questions and models some productive ways for philosophy to engage with AI, taking this book’s promise further may require setting aside the techno-Heideggerian tradition in favour of a fresher approach.

Maya Krishnan is reading for a DPhil in Philosophy at All Souls College. She works on metaphysics, theology, and Kant.