UFD017

2016

On the Borderline: Science and Reductionism

In this review-essay, Giuseppe Longo outlines some important reasons for reading Sergio Chibbaro, Lamberto Rondoni, and Angelo Vulpiani’s Reductionism, Emergence and Levels of Reality: The Importance of Being Borderline, and reflects on the damage done by reductionist models of, and in, science.

This book arrives at a most suitable time. A critique of reductionism originating from within physics may be the most effective way to halt and repair the damage caused by the misuse of physicalist reductionism in biology and in other sciences. I strongly recommend this book to all scientifically educated persons and, in particular, to biologists and to physicists studying biological phenomena. The authors, three working physicists, address the complex theoretical dynamics that connect theories and knowledge levels within physics, a science where, I would dare to say, ‘reduction’ never applies.

In short, the book shows the theoretical richness of physics. Theories are proposed, insights are given from different perspectives, and a mere change of level or scale suffices to require a novel theoretical invention. Then, conceptual and technical bridges are proposed, and different forms of unification are suggested or fully constructed. The book shows that unified knowledge is not a metaphysical a priori; the history of the search for unifying theories was marked by hard-won successes. As a representative example, consider the molecularist claim in biology: ‘we, the organisms, are just made out of molecules, aren’t we?’ Indeed, we are, but this is a triviality (‘the limit of truth is not falsity, but insignificance’, observed René Thom), since an organism is a rather strange bunch of molecules. The scientific problem then is: what is an adequate theory for dealing with these peculiar bunches of molecules, the organisms, i.e. with the living state of matter? Then, of course, the problem of relating it to good ‘theories of molecules’ may be soundly posed.

What matters in mathematics is its ‘asymptotic nature’. Mathematics is a limit construction

The first, apparently obvious, observation that I would like to address is the role played by mathematics in the construction of physical theories. What is not obvious is the remark that what matters in mathematics is its ‘asymptotic nature’, as M. Berry stresses in his Foreword. Mathematics has been a limit construction since its beginnings with Euclid’s geometry: its fundamental structure is given by Definition II, ‘the line is a length with no thickness’. There is no such line in the world; it is a limit concept that allowed Euclid to construct a fully general theory of the measurement of surfaces: this line is the border of plane figures, a very difficult notion when generalized, as we know from contemporary mathematics; it is a ‘practice’ of an actual limit, the invention of a zero-thickness structure.

Similarly, Galileo opened the way to the mathematization of physics by proposing the principle of inertia, which is also a non-existing asymptotic limit of all possible movements; by doing so he was able to analyze what affects them: gravitation and friction.

Moreover, and this is crucial for the book’s perspective, as stated by Berry: ‘[U]nderstanding relations between levels must involve the study of limits, that is, mathematical asymptotics […] wave optics “reduces to” geometrical optics when the wave-length is negligibly small, quantum physics “reduces” to classical physics when Planck’s constant can be neglected, etc.’ The book’s analysis largely focuses on more complex limit constructions that ‘are responsible for fundamental phenomena inhabiting the borderlands between theories—phenomena at the forefront of physics research, such as critical phenomena in statistical mechanics, fluid turbulence and the universal statistics of the energy levels of highly excited quantum systems.’
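To give a concrete flavour of what such an asymptotic ‘reduction’ looks like, here is the standard eikonal limit of wave optics, the textbook calculation behind Berry’s first example (my own gloss, not a passage from the book): writing the optical field as a slowly varying amplitude times a rapidly oscillating phase and letting the wavelength go to zero turns the wave equation into the ray equation of geometrical optics.

```latex
% Helmholtz equation for a scalar field in a medium of refractive index n(x):
\[
\nabla^2 \psi + k^2 n^2(x)\,\psi = 0, \qquad k = \tfrac{2\pi}{\lambda}
\]
% Eikonal ansatz (slow amplitude, fast phase) and its leading order in k:
\[
\psi(x) = A(x)\, e^{\,i k S(x)}
\quad\Longrightarrow\quad
k^2 A\,\bigl(n^2(x) - |\nabla S(x)|^2\bigr) + O(k) = 0
\]
% Hence, as the wavelength goes to zero, the eikonal equation of geometrical optics:
\[
|\nabla S(x)|^2 = n^2(x),
\]
% whose characteristics are the light rays: the 'reduction' holds only in this
% singular, asymptotic sense.
```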

The search for reduction of all phenomena to the smallest particles, for example, the myth of high energy physics, is a consequence of our monotheistic religions. However, the search for unity should be accomplished without practicing theology like reductionists do

As the authors say in the preface, quoting a contemporary philosopher, Severino, theology is the fundamental form of reductionism, as it reduces the essence of the world to God. In this sense, the search for reduction of all phenomena to the smallest particles, for example, the myth of high energy physics, is a consequence of our monotheistic religions. Indeed, it seems that Democritus used to say that the more perspectives we have on one phenomenon, the better. Thus, he was pleased by the lively discussions in the Agora with the followers of Parmenides and Heraclitus, who stressed continuity in nature against his atomistic perspective. Indeed, they were polytheists, and each god had his/her own opinion on every matter. I do advocate, instead, the interest of our monotheistic search for unity, rather than for reduction. For example, the ‘unification’ of thermodynamics and particles’ trajectories, or of quantum and relativistic physics, forced physicists, and still forces them, to search for new unifying theories, a major scientific conquest or challenge. The ‘third theory’ aimed at unifying different existing proposals is usually a fantastic asymptotic construction. The unifying theory adds knowledge and tools for knowledge which often lead us far away from naive intuitions of reality. From this perspective I would like to suggest that we should continue working towards the unity of knowledge, which represents one of the few scientific inheritances of our monotheistic background. However, the search for unity should be accomplished without practicing theology like reductionists do. As a long-term goal, once we manage to (asymptotically?) unify molecular dynamics with the behaviour of living cells, we will at long last know more about ontogenesis.

The first chapter of the book is a very pleasant imitation of Galileo’s dialogue: Salviati, Sagredo, and Simplicio discuss the guidelines of the book through a contemporary version of their opposing views. It is very hard to convince Simplicio of the complexity of the levels of description; his objections are always smart ones. For example, Sagredo and Salviati dig into a famous case:

[C]omplex hydrodynamic features can be reproduced by means of cellular automata […] this artificial system does not fulfill some of the fundamental properties of the microscopic dynamics. For example, only microscopic discrete states are allowed: the ‘molecules’ in this system move on a lattice and their velocities assume only a finite number of values. Moreover, rather than strictly deterministic rules, such as those of classical dynamics, the system follows probabilistic rules.

Of course, this parody of reductionism precisely shows how reductionism may be misleading. In this case, classical randomness (that is, determinism in the presence of non-linear dynamics and measurement by intervals in continua) is mimicked by probabilistic rules in discrete state machines. Unfortunately, following Wolfram, some think that this hilarious imitation coincides with the true world, which would then be a dynamics of finite state automata. Simplicio, who acknowledges this physically meaningless ‘mathematical virtuosity’, is more advanced than some of our contemporary ‘computationalists’, for whom the Universe is a big Turing Machine, which is computationally equivalent to a cellular automaton, with some probability values added to both. I will come back to these forms of extreme reductionism and their role in biology.
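The quoted construction can be made tangible with a toy model in the same spirit, though far simpler than the lattice-gas automata the book has in mind: particles on a discrete lattice, a finite set of ‘velocities’, purely probabilistic update rules, and yet a smooth, diffusive macroscopic profile emerging at the coarse-grained level. The following sketch is only my illustration of that idea, not the hydrodynamic construction discussed by the authors.

```python
import random
from collections import Counter

# Toy probabilistic lattice dynamics: N particles on a 1-D ring of L sites.
# Each step, every particle hops one site left or right with probability 1/2
# (discrete positions, two possible 'velocities', stochastic rules).
L, N, STEPS = 200, 10_000, 500
positions = [L // 2] * N          # all particles start at the centre site

for _ in range(STEPS):
    positions = [(x + random.choice((-1, 1))) % L for x in positions]

# Coarse-grained 'macroscopic' observable: particle density per block of sites.
density = Counter(positions)
# Crude ASCII profile: despite the microscopic noise, the coarse-grained
# density is a smooth, bell-shaped (diffusive) profile around the centre.
for site in range(0, L, 10):
    count = sum(density.get(s, 0) for s in range(site, site + 10))
    print(f"sites {site:3d}-{site + 9:3d}: {'#' * (count // 100)}")
```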

There is no junction of two different theories, but the macroscopic case is used as a ‘bootstrap’ for constructing the microscopic/macroscopic bridge

How does this book provide a scientific answer to the follies resulting from the reductionistic stance dominant in many disciplines? It does so by giving a competent account of the existing forms of unification in physics and an introduction to the ongoing, tentative ones, ‘understood through the analysis of the connections between different levels of description or theories’. Boltzmann, of course, is a major reference for this work. Following early ideas by Maxwell, he paved the way for a major and revolutionary ‘unification’, the understanding of thermodynamics in atomistic terms via Statistical Mechanics (SM). This is soundly considered a paradigmatic case by the authors: SM is a new theory that unifies existing approaches by an asymptotic construction, as both the hypothesis of molecular chaos and the thermodynamic integral are mathematical limits. The presentation is just beautiful: it is careful, clear and complete. For example, one fully understands how irreversibility pops out, at the limit, from individually reversible trajectories. A particular emphasis is also given to the analysis of hydrodynamics and meteorology and the related ‘unification’ problems. Hydrodynamics is not fully unified with the atomistic perspective, as thermodynamics is by SM. Typically, the individual behavior of particles, as described by the one-body distribution function, depends on the global or macroscopic hydrodynamic field, which is thus assumed rather than derived from (possibly asymptotic) particle dynamics. Thus, there is no junction of two different theories, but the macroscopic case is used as a ‘bootstrap’ for constructing the microscopic/macroscopic bridge.
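For readers who want the two limits named above spelled out, here they are in their standard textbook form (my own summary, not a quotation from the book): the hypothesis of molecular chaos factorizes the two-particle distribution before a collision, Boltzmann’s H-theorem then extracts irreversibility from reversible microscopic dynamics, and the whole construction acquires its thermodynamic meaning only in the limit of infinitely many particles.

```latex
% Molecular chaos (Stosszahlansatz): pre-collision velocities are uncorrelated,
\[
f_2(\mathbf{v}_1, \mathbf{v}_2, t) \;\approx\; f_1(\mathbf{v}_1, t)\, f_1(\mathbf{v}_2, t)
\]
% Boltzmann's H-functional decreases monotonically along the kinetic equation,
\[
H(t) = \int f(\mathbf{v}, t)\,\ln f(\mathbf{v}, t)\, d\mathbf{v},
\qquad \frac{dH}{dt} \le 0
\]
% and the link with thermodynamic entropy is only exact in the thermodynamic limit:
\[
N \to \infty,\quad V \to \infty,\quad N/V = n \ \text{fixed},
\qquad S = k_B \ln W .
\]
```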

Meteorology is then discussed, and the merits of the work of Richardson, a pioneer of the mathematical approach to this discipline (1922), are briefly mentioned. He was the first to insist on moving from ‘historical accounts’, where predictions were based on data from the past, to numerical solutions of hydrodynamic equations in weather forecasts. This global approach to atmospheric dynamics was not grounded on any sort of reduction, but on a hierarchy of models that ‘were not mere approximations of the original set of equations, obtained from a systematic strategy based on fundamental principles. On the contrary, they were obtained from a subtle mixture of hypotheses, theory and observations’. This method ‘shows that knowledge of the ultimate laws governing the behaviour of the atmosphere, in its tiniest detail, is uninteresting’.

The understanding of irreversibility in thermodynamics and hydrodynamics, and thus in meteorology, requires the peculiar and non-obvious asymptotic constructions presented in the book. As pointed out by the authors, this irreversibility is independent of the presence of deterministic chaos, in spite of Prigogine’s attempt to found both thermodynamics’ and hydrodynamics’ irreversibility on the latter.

The book also presents a synthetic account of the ongoing work relating Quantum Mechanics and Chemistry. Because Chemistry is traditionally presented in classical frames, the authors first clarify that classical mechanics may be obtained as a ‘semi-classical limit’ from quantum dynamics. In short, when Planck’s h goes to 0 and time goes to infinity (in my opinion, this infinity is not sufficiently stressed in the book), classical and quantum trajectories merge (see Thierry Paul’s recent work on this). Of course, this is not a form of unification, but ‘just’ says that the mathematics of the two theories asymptotically merge—which is beautiful, but very different from the asymptotic construction by Boltzmann, for example, which re-introduces physical ‘smoothness’ by assuming that the number of molecules in a unitary volume goes to infinity. Thus, I slightly disagree with the authors’ remark that ‘classical mechanics can be seen as a sort of emergent property of quantum mechanics where the interactions with the environment are modelled using some random variable’. Firstly, classical analysis, since Poincaré, describes the ‘origin’ of randomness in a very different way from quantum randomness, and thus only the mathematics would be unified (there would be no reciprocal conceptual understanding of this fundamental aspect, just a ‘mathematical virtuosity’). Secondly, as the authors soundly acknowledge, ‘Newtonian mechanics is a kind of a priori for quantum theory […] It is in principle impossible […] to formulate the basic concepts of quantum mechanics without using classical mechanics’. Thus, we are far from any sort of ‘emergence’, whatever this word may mean.
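As a reminder of what the ‘semi-classical limit’ amounts to formally, and of why the time limit matters, here is the standard WKB-type computation (again my own gloss, not the book’s): a rapidly oscillating phase turns the Schrödinger equation into the Hamilton-Jacobi equation of classical mechanics as h goes to zero, but for chaotic dynamics the approximation is only guaranteed up to the Ehrenfest time, which diverges only as h vanishes, so the two limits cannot be taken independently.

```latex
% WKB ansatz in the Schrödinger equation:
\[
\psi = a\, e^{\,i S/\hbar}, \qquad
i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\nabla^2 \psi + V\psi
\]
% Leading order as \hbar -> 0: the classical Hamilton-Jacobi equation,
\[
\partial_t S + \frac{|\nabla S|^2}{2m} + V = 0 .
\]
% For chaotic dynamics (Lyapunov exponent \lambda > 0), the semiclassical
% description is reliable only up to the Ehrenfest time,
\[
t_E \sim \frac{1}{\lambda}\,\ln\frac{1}{\hbar},
\]
% so letting time grow must be balanced against letting \hbar shrink.
```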

As for Chemistry, the problems of solving Schrödinger’s equation for more than one electron are closely examined. The authors also refer to the insight of Pauling, who, in the ’30s, ‘produced a clever mix of chemical intuition and quantum mechanics, hardly reducible to mere quantum mechanical calculations’. Thus, physics is facing yet another beautiful challenge: a consistent quantum understanding of chemical phenomena, possibly in a novel unified frame. This, perhaps, could help us be less anti-scientific than we have been in the twentieth century, when we produced about 80,000 artificial molecules through practical knowledge with little theory (physical, organismal, or ecosystemic), and dropped them into the environment, with dramatic consequences, including endocrine disruption and cancer (see the USA Toxic Substances Control Act [TSCA], Sept. 2008).

I would contest the view of science as ‘compression of reality’ by stating instead that data, produced by an active friction with reality, are ‘compressed theories’

I would now like to discuss an issue that appears in a few places in the book: the relation between complexity, compressibility, and unpredictability. The situation is slightly more complex than is indicated by the book, and it challenges a shallow idea that the authors quote, namely that ‘natural laws are compression of empirical data’ (yet, at the end, they draw a very sound conclusion against this idea). The 1966 paper by Martin-Löf (ML) quoted in the book gives an asymptotic notion of randomness, which is, indeed, the only one to be soundly defined in classical discrete manifolds. Martin-Löf also proved that any infinite sequence (of 0s and 1s) contains infinitely many compressible initial segments. Chaitin reconstructed a unity between finite incompressibility and asymptotic randomness, conjectured by Kolmogorov, by defining a restricted class of Turing Machines that generate all semidecidable sets (but not all computable functions) and such that ML infinite random sequences are exactly those whose initial segments are Chaitin incompressible, modulo a constant. However, Chaitin’s definition requires ad hoc constructions in order to be transferred to formalisms other than Turing’s, whenever this is possible. This is in contrast to the beautiful invariance of ML (asymptotic!) randomness and of computability, which are both invariant with respect to any change of (sufficiently expressive) formalism. One can then derive two major consequences. Firstly, if any long enough finite sequence is compressible, including randomly generated ones (!), then where would the law be? Now, any long enough finite sequence is indeed compressible, as shown by Ramsey Theory, a compression made possible by using more than the restricted Chaitin-Turing Machines. For example, by Van der Waerden’s Theorem, any infinite sequence, including ML random ones, contains arbitrarily long ‘monochromatic’ (just 0s or just 1s, say) arithmetic progressions—easy to compress at any finite length.1 Secondly, this confirms the authors’ intuition: time incompressibility is not the same as space incompressibility. Consider a long series of quantum measurements or of coin flips. There is no way to produce a program that would generate the results before time elapses (the series is time-incompressible). Yet once a long enough series of 0s and 1s has been written down, a good data-mining algorithm and a compressor would allow one to produce a program shorter than the sequence, yet capable of generating it; this is due to Ramsey-type regularities, such as Van der Waerden’s. As the authors claim throughout the book with regard to borderline theories, space and time randomness, too, are unified only at the infinite limit. Finite incompressibility does not yield an invariant property of randomness as unpredictability with respect to all theories, or even with respect to all programs within a fully expressive theory of computation. Fortunately, the deep understanding of physics by the authors leads them to a sound conclusion: ‘What is not compressible is the time sequence generated by chaotic systems, and this is due to the non-compressibility of a generic initial condition.’ This remark, on the one hand, soundly refers to time and recalls the insightful analysis of the role of initial and boundary conditions made in chapter 5, where the notion of (space) incompressibility, though, would have benefited from a more detailed analysis. On the other, it allows the authors to depart from the view of science as ‘compression of reality’.
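To make the Ramsey-type point concrete, here is a small, self-contained experiment (mine, not the authors’): even a sequence of fair coin flips always contains fairly long ‘monochromatic’ arithmetic progressions, i.e. regular sub-patterns that a description could exploit; this is the finite shadow of the Van der Waerden phenomenon invoked above.

```python
import random

def longest_monochromatic_ap(bits):
    """Length of the longest arithmetic progression of positions
    on which the sequence is constant (all 0s or all 1s)."""
    n = len(bits)
    best = 1
    for step in range(1, n):
        for start in range(n):
            length = 1
            pos = start + step
            while pos < n and bits[pos] == bits[start]:
                length += 1
                pos += step
            best = max(best, length)
    return best

random.seed(0)
flips = [random.randint(0, 1) for _ in range(1000)]  # genuinely 'random' data
print(longest_monochromatic_ap(flips))
# Typically prints a length around 15-20 for 1000 flips: a perfectly regular
# sub-pattern (constant value, constant step) hiding inside random data.
```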
I would further contest the view of science as ‘compression of reality’ by stating instead that data, produced by an active friction with reality, are ‘compressed theories’. The collection of empirical data requires a perspective, the choice of observables, metrics and measurement instruments, in short a strong theoretical bias. Making it explicit, changing it, comparing it with other scales or forms of knowledge, possibly in search of unity, is the job of science, so beautifully described in the book.

In order to conclude, let’s go back to a paradigmatic case of apparent ‘reduction’. As mentioned above, hydrodynamic equations are not very sensitive to the details of microscopic dynamics. Thus, by mathematical ‘virtuosity’, one may derive a hydrodynamic behaviour from probabilistic rules over discrete state machines, such as cellular automata or, equivalently, (two- or three-dimensional) Turing Machines. Note now that, in discrete manifolds, the dynamics do not depend on the dimensions: they can all be encoded in one dimension at a low cost. Thus we were able to construct the Universal Turing Machine and, subsequently, the operating systems and compilers that are at the heart of modern computing. Of course, this dimensionless approach makes no sense in most physical theories, where phenomena heavily depend on the dimensions of the phase space. Yet those who believe that the Universe is a big cellular automaton or a Turing Machine use these sorts of examples as proof that we have constructed the final machine, the digital computer, as it may encode the instructions written by God to run the Universe, with some scattered probability values. In this frame, the brain and the DNA would then be emergent computations (for a synthesis of the ‘computationalist’ views, see Wolfram’s and Chaitin’s papers in a volume in honor of Turing, edited by B. Cooper in 2012).
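The ‘encoding in one dimension at a low cost’ is easy to exhibit. The following sketch uses the classical Cantor pairing function (a standard construction, chosen here merely as an illustration) to squash two lattice coordinates into a single integer and back again, losslessly; this is exactly the kind of indifference to dimension that has no analogue in theories where the dimension of the phase space carries physical meaning.

```python
from math import isqrt

def pair(x: int, y: int) -> int:
    """Cantor pairing: encode two non-negative integers into one."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple[int, int]:
    """Inverse of the Cantor pairing function."""
    w = (isqrt(8 * z + 1) - 1) // 2   # index of the diagonal containing z
    t = w * (w + 1) // 2              # first code on that diagonal
    y = z - t
    x = w - y
    return x, y

# A 2-D lattice position is faithfully represented by a single integer,
# so the 'dynamics' can be rewritten over one dimension without loss.
for x, y in [(0, 0), (3, 5), (120, 7)]:
    z = pair(x, y)
    assert unpair(z) == (x, y)
    print(f"({x}, {y}) <-> {z}")
```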

W. Gilbert, a leading molecular biologist, predicted in 1992 that we were going to be able to encode the human DNA in a CD-ROM and then say: ‘Here is a human being’

As for biology, following this philosophy of nature, F. Collins, director of the National Human Genome Research Institute, publicly asserted in 2000: ‘We have grasped the traces of our own instruction manual, previously known to God alone’—which brings us back to theology. How does a body fall? It follows the instructions, like a cellular automaton; no probability is required in this case. How does an embryo develop? It follows the instructions in the DNA; however, in this case, some stochasticity is acknowledged. W. Gilbert, a leading molecular biologist, predicted in 1992 that we were going to be able to encode the human DNA in a CD-ROM and then say: ‘Here is a human being, this is me’ (sic!…It should be easy to compress such a CD-ROM, in particular in view of the 95% of ‘junk DNA’ that, for too long, too many molecular biologists claimed we are). The ongoing, heavily-financed project of ‘personalized medicine’ is based on these ideas: you go to a hospital, they decode your DNA and pass it to different departments. In the near future, there will be no need for doctors either: computers will analyze the CD-ROM and fix it, by reprogramming it. Note that the word ‘reprogramming’ has been consistently used in the search for genetic therapies for cancer, ever since Nixon’s War on Cancer, 1971–1976, the latter being the year in which those therapies were expected to arrive. It is now 2016 and we have none, even though, since then, most of the financial support in cancer research has gone towards finding one (see Weinberg’s severe autocritique in Cell 157, March 27, 2014, and an enlightening 2010 interview with C. Venter, the human genome decoder [2001]: ‘We have learned nothing from the genome’).

In biology, physicalist reductionism is based on naive, common sense reference to physics, rather than actual physical theorizing

In conclusion, this broad and original book may greatly help to set science against the reductionist follies that hinder its practice, particularly in crucial domains so immediately relevant to our lives. In biology, physicalist reductionism is based on naive, common sense reference to physics, rather than actual physical theorizing. Similarly, the references to a new observable, ‘information’, and to ‘programming’, are based on the common sense use of these words, while ignoring their scientific meaning and its actual implications (see my own and co-workers’ writings on this, on my web page). Relating these two constructive critiques of reductionism may help in finding new paths, in biology in particular.

[banner image from Bruce Bryson’s A Question of Scale]

  1. See C.H. Calude, G. Longo, ‘The Deluge of Spurious Correlations in Big Data’.