What are 'the laws of biology'?
Without tackling the fundamental philosophy of biological complexity, we might never truly understand how living organisms work.
The Biologist 64(6) p6
Biology is not just applied chemistry or applied physics. The components of living systems obey the laws of physics, but the existence and behaviour of living organisms cannot be deduced from those laws alone. Everything that goes on in living things cannot be explained by low-level details. To understand living things requires reference to higher-order principles of system organisation – indeed, it is the essential fact that they are organisms, that do things, that requires explanation.
Most biologists would generally endorse this view if forced into a philosophical corner – it is, after all, what makes biology a science in itself. However, even among those who fully agree that a systems-level perspective is appropriate in biology, only a small number actively engage with the science of complex systems, either in research or teaching, and fewer still with the underpinning philosophy.
There is good reason for this. The reductionist approach has been productive. We have identified more and more components of individual subsystems, defined their interactions, and elucidated their functions in ever-increasing detail. But has this productivity really led to a deeper understanding of entire systems, or is it just creating an illusion of progress?
If we define progress by an increasing ability to predict and control the behaviour of living systems, then you could argue, for example, that all the new drugs we have developed over the past decades speak to the power of the reductionist approach. However, for every successful new drug, there are hundreds that have fallen at one hurdle or another, usually due to unpredicted system-level effects.
Similarly, although hundreds of genetic variants have been associated with diverse human traits and disorders, the proportion of phenotypic variance collectively explained by such variants remains frustratingly low. We still do not understand the logic relating genotypes to phenotypes, and despite having access to entire genomes, often can make only the fuzziest of predictions.
In neuroscience, we have an unprecedented ability to monitor or even drive neural activity, from the level of single neurons, to neuronal ensembles, to distributed brain systems, but we lack a principled framework to understand what all this activity is doing – what information is being processed, how it is passed from level to level, what computations are being performed and what this all means for the animal's behaviour.
Ignoring the systems perspective when we were dealing with only small amounts of isolated data could perhaps be excused – that perspective wasn't needed and arguably wasn't useful. But the limitations of the reductionist approach have now been exposed. We are drowning in data, with measurements sometimes available for every component of many different kinds of interdependent systems. It is clear that the logic of how it all works will not emerge from our knowledge of each of the components in isolation, nor from simple linear models, nor even from the brute-force approach of machine learning.
To make real progress, we will need a different language, based on a different conceptual footing, with different tools and methods that can be brought to bear. Fortunately, such concepts and tools already exist, derived from cybernetics, information theory, dynamical systems theory, decision-making theory, semiotics, and many other areas[1–3].
By taking what is essentially an engineering and computational perspective, we can simplify our view of the functional architecture of living systems. For example, we can recognise that some set of components interacting in a certain way acts as a filter, or a switch, or a coincidence detector, and so on. And when we put several of them together just so, we make an oscillator or a homeostatic regulator or an evidence accumulator. This provides a way to go beyond simply describing what is happening to actually understanding what the system is doing.
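To make the idea concrete, here is a toy sketch of two such motifs in code – a low-pass filter and a coincidence detector. The functions, parameter values, and inputs are purely illustrative, not drawn from any real biological model; the point is only that a small set of interacting parts can be recognised as performing a definable computation.

```python
def low_pass_filter(signal, alpha=0.2):
    """Exponential moving average: smooths a fluctuating input,
    acting as a simple low-pass filter."""
    out, state = [], 0.0
    for x in signal:
        state = alpha * x + (1 - alpha) * state
        out.append(state)
    return out

def coincidence_detector(inputs_a, inputs_b, threshold=2.0):
    """Fires (True) only when both inputs arrive together,
    i.e. their summed activity crosses a threshold."""
    return [a + b >= threshold for a, b in zip(inputs_a, inputs_b)]

# A steady input is smoothed towards its true level...
smoothed = low_pass_filter([1.0] * 10)
# ...and the detector fires only where the two inputs coincide.
fires = coincidence_detector([1, 0, 1, 1], [0, 1, 1, 0])
# fires -> [False, False, True, False]
```

Wiring such motifs together – a filter feeding a detector feeding a feedback loop – is how one builds up the oscillators and homeostatic regulators mentioned above.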
The nervous system of the nematode Caenorhabditis elegans comprises 302 neurons and its connectome has been fully mapped for more than 30 years. Many experiments have probed the functions of individual neurons or small circuits, but mostly in an isolated fashion, leaving the logic of how they coordinate behaviour unclear. By contrast, the application of control theory to the connectome recently revealed the functional architecture of motor control, making subsequently validated predictions of which neurons in the densely connected network would be dispensable or indispensable for regulated movement[4].
This is the right kind of computational approach to make sense of complex systems and derive knowledge from all the data we are generating. More fundamentally, this kind of systems perspective provides a much-needed philosophical foundation for biology as a science unto itself.
Kevin J Mitchell is associate professor in developmental neurobiology at Trinity College, Dublin. He can be found on Twitter @WiringtheBrain.
1) Wiener, N. Cybernetics. Or Control and Communication in the Animal and the Machine (MIT Press, 1948).
2) von Bertalanffy, L. General System Theory (George Braziller Inc, 1969).
3) Alon, U. An Introduction to Systems Biology (Chapman and Hall/CRC, 2007).
4) Yan, G. et al. Network control principles predict neuron function in the Caenorhabditis elegans connectome. Nature 550, 519–523 (2017).