17 November 2017 · Issue 35.5 · Science, Technology


Ripples in Space and Time

Neil Dewar

Harry Collins
Gravity’s Kiss: The detection
of gravitational waves
MIT Press
2017
416pp
£24.95 hbk

The first news wasn’t really news, exactly, but rather the announcement that soon there would be news: the LIGO Scientific Collaboration (LIGO being the Laser Interferometer Gravitational-Wave Observatory) had called a press conference for that Thursday, 11th February 2016. Rumours of a detection had been circulating for a while, and it was widely expected that the news would be positive. A few days later, the hopes were realised: “We did it,” LIGO’s executive director David Reitze announced. “We detected gravitational waves.” The jubilation that greeted the announcement of LIGO’s detection – which glories in the rather unlovely name GW150914 (it stands for the detection date, 14th September 2015) – was followed by a string of scientific prizes for LIGO and its key members: the Special Breakthrough Prize in Physics, the Gruber Prize in Cosmology, and now the Nobel Prize in Physics. (Not to mention continued scientific successes for the collaboration, like the detection last month of a neutron star merger.) The story of this first detection is the subject of Harry Collins’ engaging and enlightening new book.

The story of gravitational waves themselves goes back much further, almost to the dawn of General Relativity itself: they were predicted in an Einstein paper of 1916, just nine months after he presented the final form of his field equations to the Prussian Academy of Sciences. Nowadays, showing that the Einstein field equations admit of wavelike solutions (just as the equations for the electromagnetic field, or for water, do) is a standard part of undergraduate syllabuses. Physically, such solutions describe ripples in spacetime: a spherical shell of local alterations to distances and durations, propagating outwards from its origin at the speed of light. Detectable gravitational waves are produced by violent gravitational events, such as pairs of black holes inspiralling and merging with one another: the GW150914 event was, we believe, the result of a 30-solar-mass black hole merging with a 35-solar-mass black hole, resulting in a black hole of 62 solar masses. The “extra” 3 solar masses were radiated away in the form of gravitational waves. (Mass and energy are relativistically interchangeable, according to E = mc², so that’s more energy than a billion Krakatoas firing once per second for a billion years.)
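For readers who like to see the arithmetic, here is a rough back-of-the-envelope check of that comparison. The constants are rounded, and the Krakatoa yield (taken here as roughly 200 megatons of TNT) is a commonly quoted estimate rather than a precise figure.

```python
# Back-of-the-envelope check of the energy radiated by GW150914,
# using rough round numbers.

M_SUN = 1.989e30               # solar mass, kg
C = 2.998e8                    # speed of light, m/s
KRAKATOA_J = 200e6 * 4.184e9   # ~200 megatons of TNT, at 4.184e9 J per ton
YEAR_S = 3.156e7               # seconds in a year

# Three solar masses converted to energy via E = mc^2
e_radiated = 3 * M_SUN * C**2

# A billion Krakatoas per second, sustained for a billion years
e_comparison = 1e9 * KRAKATOA_J * 1e9 * YEAR_S

print(f"Energy radiated:   {e_radiated:.2e} J")
print(f"Comparison figure: {e_comparison:.2e} J")
print(f"Ratio: {e_radiated / e_comparison:.0f}x")
```

On these rough numbers the radiated energy comes out several orders of magnitude above the comparison figure, so the claim holds with plenty of room to spare.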

Catching sight of such a wave means looking for local changes to the spacetime metric—that is, to how far away things are from one another. To do that, each of LIGO’s two sites (in Louisiana and Washington) uses a laser beam, which is split and sent down two perpendicular tunnels 4 km long, then reflected and recombined. The interference fringes of the laser—the patterns produced by the alignment or misalignment of the waves in the returning component beams—act as a ruler of sorts: changes to the fringes indicate changes in the path lengths that the beams have traversed. So build your lasers, watch the fringes, and see if you can spot the transverse stretching and squeezing that would signal a gravitational wave passing through the beams.
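As a very rough illustration of the principle, the toy Python sketch below models an idealised Michelson interferometer: the power at the output port depends on the phase difference between the recombined beams, so a change in the relative arm length shows up as a change in measured light power. Everything that makes the real instrument work (Fabry–Perot arm cavities, power recycling, seismic isolation, feedback control) is ignored, the function name is my own, and the strain values are illustrative only.

```python
# A toy model of how a Michelson interferometer turns a change in
# relative arm length into a change in measured light power.

import numpy as np

WAVELENGTH = 1064e-9   # wavelength of LIGO's Nd:YAG laser, metres
ARM_LENGTH = 4e3       # arm length, metres

def dark_port_power(delta_l, p_in=1.0):
    """Fractional power at the 'dark' output port for a relative
    arm-length difference delta_l (metres), in an idealised Michelson.
    The light travels down each arm and back, hence the factor of two
    in the phase."""
    phase = 2 * (2 * np.pi / WAVELENGTH) * delta_l
    return p_in * np.sin(phase / 2) ** 2

# A strain h stretches one arm and squeezes the other, so the relative
# length difference is roughly h * ARM_LENGTH.
for h in (0.0, 1e-21, 1e-18):
    dl = h * ARM_LENGTH
    print(f"strain {h:.0e}: delta L = {dl:.2e} m, "
          f"dark-port power fraction = {dark_port_power(dl):.3e}")
```

In this naive model the response near the dark fringe grows only quadratically with the length change, one of many reasons the real detectors use a more sophisticated optical layout and readout; the point is simply that changes in length become changes in light power.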

The challenge, of course, is in quite how closely one needs to watch those fringes. The change in length of an object is proportional to its total length, hence the desire to make the interferometer arms as long as is technically feasible; but even for an arm 4 km in length, the GW150914 signal still only produced a change in length of around 1/10,000th the diameter of a proton. Needless to say, the changes to the fringes produced by such a change in path length are pretty small. As a consequence, much of the art of gravitational-wave detection lies in the separation of the signal from the noise: all the constant background perturbations (whether from external disturbances of various kinds, or from other gravitational waves washing around in the universe). Moreover, building a detector with this kind of sensitivity, and doing the analyses required to process its results or model the physics of the events it’s looking for, requires a vast collaboration. At the time of detection, the LIGO Scientific Collaboration comprised over a thousand scientists spread around the globe.
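The workhorse technique for pulling a known waveform shape out of noisy data is matched filtering, and the sketch below is a minimal, illustrative version of the idea: a crude chirp-like template is buried in Gaussian noise at an amplitude well below the per-sample noise level, and cross-correlating the data against the template still recovers where it was hidden. The waveform, noise level, and offset are all invented for illustration; the real searches use frequency-domain filtering against large banks of relativistic templates, and must also contend with non-Gaussian instrumental glitches.

```python
# Minimal matched-filtering sketch: recover a weak, known-shape signal
# buried in Gaussian noise by cross-correlation.

import numpy as np

rng = np.random.default_rng(0)

FS = 4096                       # sample rate, Hz (invented for this toy)
t = np.arange(0, 1.0, 1 / FS)   # one second of template

def chirp(t, f0=35.0, f1=250.0):
    """A crude stand-in for an inspiral template: a windowed sinusoid
    whose frequency sweeps linearly from f0 to f1 Hz."""
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t ** 2 / t[-1])
    return np.sin(phase) * np.hanning(len(t))

template = chirp(t)

# Two seconds of Gaussian noise, with the template injected at a known
# offset at an amplitude well below the per-sample noise level.
noise_sigma = 5.0
offset = 1500
data = rng.normal(scale=noise_sigma, size=2 * len(t))
data[offset:offset + len(template)] += 2.0 * template

# Slide the template over the data and correlate; normalising by the
# template norm and the noise level gives a rough signal-to-noise ratio
# for each possible start time.
corr = np.correlate(data, template, mode="valid")
snr = corr / (np.sqrt(np.sum(template ** 2)) * noise_sigma)

peak = int(np.argmax(np.abs(snr)))
print(f"loudest correlation at sample {peak} "
      f"(injected at {offset}), SNR ~ {abs(snr[peak]):.1f}")
```

Even though the injected waveform is a fraction of the noise in any single sample, the correlation against the known shape stands out clearly above the background; deciding how loud is loud enough, and which templates and statistical thresholds to trust, is exactly where the expert judgment Collins describes comes in.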

It’s these two aspects of gravitational-wave detection that make it a fitting topic of interest for Collins. On the one hand, the weakness of the signals—and the attendant difficulties of data analysis—makes it a good case study for Collins’ theories on the nature of expertise. As he compellingly argues, the notion that one could find some all-encompassing manual on how to do science, with a full enumeration of what statistical and methodological rules to use, is a chimera (even in the case of a “hard” science such as physics): although such rules certainly exist, the question of which rules are to be applied in which context is rarely something that can be laid out in advance, but is rather a matter of applying (technically informed) judgment. Collins offers an insightful commentary on where this kind of expertise comes from, and the extent to which it can be acquired by a layman (such as himself) “merely” by immersion in the community’s discussions, rather than active participation in the practice.

On the other hand, the generation of knowledge by such a large and heterogeneous network of people will evidently be a process with very rich social dynamics. Accordingly, a second major theme of the book is the ways in which knowledge, including scientific knowledge, is socially mediated. That includes both the extent to which such knowledge is “socially constructed”—of which more in a moment—and the role of social knowledge per se in scientific work. For example, an early chapter deals with discussions within the collaboration about whether GW150914 could have been the result of malicious data manipulation. Collins observes that the rejection of that possibility turned on essentially social hypotheses, analogous to those used for rejecting conspiracy theories in other walks of life: such manipulation would require too many people, co-operating at too high a level, for it to be plausible; the accountability structures are robust enough that this would be caught; the interests and incentives of the relevant people are not such as to make this an attractive course of action to them; and so on. Those “fringe scientists” who reject the gravitational-wave detection do so both because of misgivings about the physics—claiming, in many instances, that relativity theory is wrong or inconsistent—and because of views about the social structures of science, holding that the whole detection enterprise is an exercise in conspiracy. (Indeed, for bonus points, one can combine the two, as witnessed by the long history of claiming relativity theory to be a Jewish conspiracy.)

As Collins’ history makes clear, the work of a scientific collaboration like this depends not just on judgments over matters of physics, but also on judgments about how the social dynamics of a collaboration like LIGO do or should operate, or about how to contextualise people’s contributions by knowledge of their psychology and biography. Within LIGO, and to an even greater extent outside of it, our knowledge of science is (and should be) based upon the testimony of trusted figures rather than verified by first-hand acquaintance with the data; so for most people, including most scientists, that knowledge will require assessing such figures’ credibility.

Where things start to get trickier is the question of whether scientific knowledge might be socially constituted in a stronger sense: not just dependent upon social or psychological hypotheses as well as physical ones, but upon the social structure of the relevant community. Famously, the most radical thinkers in the sociology of scientific knowledge (SSK) claimed that one could explain the development of science by citing sociological factors rather than the truth of the resulting theories, and hence that we had no good reason for regarding such theories as true. The argument here might be thought of as a kind of causal exclusion problem: if the social structure of a scientific community is enough to explain how it winds up saying the things it says, then it looks as though varying what the facts are is neither necessary nor sufficient for changing what results get reported in the scientific journals; it follows that the content of scientific journals will not be reliably correlated with the facts. The reaction from scientists, and from other advocates of scientific objectivity, sparked the Science Wars of the 1990s.

Collins’ own views are more nuanced. The aim of his “methodological[ly] relativist” approach is not to show that science is the result of social factors, but to explore the extent to which science is the result of social factors, by discounting—for the purposes of sociology—explanations invoking the truth of theories. The aim is to consider what happens if one fixes the facts but varies the social structure, as Collins explains (quoting from his previous book, Gravity’s Shadow):

Most of the time, the principle of methodological relativism, when it is applied to facts-in-the-making, needs be seen as no more than a version of a methodological guideline found in every science: concentrate on the explanatory variable. In this case, it implies that the science be “held constant”, as it were. For facts-in-the-making, the science must not be taken to explain itself on pain of circularity and/or the dimming of the sociological gaze.

But even this paints a picture of “scientific truth” and “social factors” as mutually antagonistic causal factors, so that evidence for the latter’s causal efficacy is evidence against the former’s—showing the butler’s innocence by implicating the chambermaid. More plausible is that what the social structure of a science does is establish a certain kind of functional relationship between the facts and the assertions of scientists: if the facts are thus-and-so, then the scientists will end up asserting such-and-such. After all, a community of scientists does not exist in a vacuum, but is continuously interacting with and intervening in the world around it. So the claim that merely knowing the social facts will suffice for predicting the scientists’ behaviour should never have been more plausible than, say, the claim that a boatload of mutineers (with some fixed social structure) will behave identically whether they wash up on Tahiti or St Kilda. What we should expect is that the social structures affect the nature of the correlation between environment and results, but the results themselves depend on the environment—whether the chief mutineer is rewarded with the lion’s share of tropical abundance, or punished for leading his crew somewhere so godforsaken. And, crucially, the strength of the sociological explanation is orthogonal to the strength of the environmental explanation: strengthening one need not mean weakening the other.

If we buy this alternative picture, then the point of well-designed scientific institutions is that they can take collections of people, highly trained but subject to the same kinds of biases and irrationalities as the rest of us, and co-ordinate their activities and incentives in such a way as to reliably produce genuine scientific knowledge. In some ways, this picture resembles the classical-economics vision of unco-ordinated and selfish agents somehow conspiring to generate Pareto-optimal behaviour—except that the agents also take themselves to be working towards the collective goal. In other words, we might think of the social organisation of science as a kind of social technology, not in principle different to the physical technology within the detectors. Certainly, if the social organisation were different then that might lead to different results being promulgated, just as a different detector design might. But we shouldn’t take this dependence on detector design to show that our theories are only true for certain kinds of detectors; likewise, accepting these sociological analyses need not mean that our theories are only true for certain forms of social organisation. A process can be rational or reliable without being the result of blind rule-following, and whilst being part of a social matrix of processes and procedures.

From this perspective, the important insight from SSK is that the dedication of the individual actors is not sufficient to guarantee success: if institutions are badly designed, then they can fail to reliably generate scientific knowledge even if all the individuals involved in the institution value scientific truth and are doing their best to work towards it. This seems like a fairly plausible reading of, for example, the current replication crisis, where a large number of “standard results” (notably in psychology—especially social psychology—and medicine) have turned out not to be replicable. There’s an emerging consensus that this crisis is likely the result of a system that fails to provide sufficient incentives for ensuring that one’s own work is replicable (rather than original or striking), or for replicating the work of others. But note that a certain realism about the scientific phenomena is necessary for this critique to work. If it’s the case that a scientific truth is simply whatever consensus the scientific community has arrived at, then the scientific enterprise becomes by definition immune to the possibility of failure—which even the most ardent cheerleaders for science surely don’t want to claim. Collins certainly doesn’t: later in the book, he gives an acute diagnosis of the problems with science’s willingness “to adopt the iconography of religion and similar revelatory enterprises”, rather than advertising its virtues of honest fallibility.

Besides these issues, Collins also offers thoughtful discussion of the drawbacks of keeping news a secret until full analysis has been completed, and how scientific norms might provide a role model for democratic intercourse. It’s also worth stressing just how enjoyably the book bowls along, enlivened by Collins’ clear affection for the community and his willingness to offer his own, occasionally acerbic, take on what is going on. In keeping with Collins’ methodological relativism, the physics itself is to a large extent absent: this is not a pop-science book, and readers wanting an introduction to the theory behind gravitational waves will be disappointed. But it is hard to think of a better guide to the practice and process of science—to the way in which the rubber of theory meets the road of data.

~

Neil Dewar completed a DPhil in Philosophy in 2016. He is now an Assistant Professor at the Munich Center for Mathematical Philosophy.