Redundancy and importance, or “is an oyster baby any different from an aircraft?”

A very frequent statement we hear concerning biological systems can be expressed as follows:

The degree of functional redundancy observed in a subsystem reflects the importance of this subsystem within the system it belongs to.

This idea is very anthropomorphic, and based on what an engineer would consider good design: any critical subsystem should be redundant, so that if one instance fails, the others maintain the system in a functional state. But a biological system has not been designed. It evolved. And the two processes are completely different. If I am an engineer who wants to build a new type of aircraft, I can hardly afford to lose one of them when packed with passengers. Or actually even one with only the test pilot on board. I will therefore make all the critical subsystems redundant, so that the plane at least succeeds in landing before a general failure makes it unflyable.

However, if I am an oyster the situation is quite different. How can I keep my offspring from transmitting any negative deviations I happen to carry? If the critical subsystems are redundant, my offspring will be able to survive the effects of those minor deleterious variations. As a result, they will transmit those variations to their own offspring. Following the neutral theory of molecular evolution (Motoo Kimura), these variations can invade the population by genetic drift. Now, if my species encounters situations where optimal function of all the redundant subsystems is required, it will be wiped out from the surface of the earth. Unlucky. An alternative scenario: the redundancy of my critical subsystems has been kept to a minimum. If any of my offspring experiences a small deviation in one of those subsystems, it will die quickly. But I do not care, because I am an oyster. I spawn millions of eggs. I can afford to lose 90% or more of them.

In fact, it seems very few of the critical subsystems in a cell are redundant. For instance, most of the enzymes producing the energy in the cell are unique. The same goes for the RNA production machinery. If you happen to have a problem in one of those enzymes, you die at a very early age, so you cannot “pollute” the genome of the species. Redundancy appears only for “secondary” subsystems, typically dealing with feeding, signalling etc. Yes, those subsystems are important for the proper life of higher organisms, and probably provide a selective advantage in a stable environment. But they are not crucial for life itself. Furthermore, the diversity observed is rarely true redundancy: it allows the organism to feed on a larger variety of substrates, sense more compounds etc. The redundancy only appears when the system is stretched by the disappearance of other subsystems. Truly redundant systems would most probably just be eliminated (for a more informed discussion of gene duplications and losses, read this recent review).

Continuing on the topic of “importance”, one of the most irritating remarks one can still hear too often from hard-core molecular biologists is, mutatis mutandis:

This gene is not important because the knocked-out mice display no phenotype.

A term has even been coined for this: essentiality.

Gene essentiality is a pretty busy domain of research, and I am as far from being an expert as it is possible to be. So I am not going to discuss it. But I resent the notion that equates “important” with “phenotype immediately apparent”. The problem comes from the way we analyse mutant animals (which is designed this way for very good reasons; that is not the issue here).

Consider the case of a car. What is a car supposed to do? Progress on a flat surface, propelled by its own engine. So we set up an experimental environment with a perfectly flat surface. To eliminate any uncontrolled variables, we place this surface indoors, under constant temperature and illumination. And of course we remove all the other vehicles from the environment. On my right, a control car. On my left, the same car without shock absorbers, without ABS, without any lighting, without a hooter, with all doors but the driver's fused to the frame, and with pure water in the cooling system. Let's start both engines and drive the cars for 50 metres. Noticed the difference? No? Therefore none of the parts we removed was important. Well, how long do you think you will drive the modified car at night on the London Orbital when it's -10 degrees Celsius and the surface is covered in ice? I will tell you: that day, you will find the ABS, the lighting, the anti-freeze etc. damned important. Even essential.

I once worked in a team studying a mouse mutant strain “without phenotype” (*). Until someone (**) decided to study aged mice (what a weird idea) and discovered that the brain degenerated quickly in those animals. Hard to find out when, for practical reasons, one uses only young animals. See: Zoli M, Picciotto MR, Ferrari R, Cocchi D, Changeux JP. Increased neurodegeneration during ageing in mice lacking high-affinity nicotine receptors. EMBO J. 1999 Mar 1;18(5):1235-44.

(*) Well, with a very mild phenotype.


Modelling success stories (2) Monod-Wyman-Changeux 1965

For the second model of this series, I will break my own rule limiting the topic to “systems-biology-like” models, i.e. models that are simulated with a computer to predict the behaviours of systems. However, a fair number of MWC models resulted in the instantiation of kinetic simulations, so I do not feel too bad about this breach. The main reason to include the MWC model here is that I think the work is one of the early examples where a model shed light on biochemical processes and led to a mechanism, rather than merely fitting the results.

The model itself is described in a highly cited paper (5776 times according to Google Scholar on March 14th 2013):

Monod J, Wyman J, Changeux JP. On the nature of allosteric transitions: A plausible model. J Mol Biol 1965, 12: 88-118.

Contrary to the Hodgkin-Huxley model, described earlier in this series, the main body of the work is contained in a single page, the fourth of the paper. The rest of the paper is certainly interesting, and several theses (or even careers) have been devoted to the analysis of a formula or a figure found in the other pages (several papers even focused on the various footnotes, with discussions still going on after 50 years). However, the magic is entirely contained in this fourth page.

Cooperativity of binding had been known for a long time, ever since the work of Christian Bohr (the father of Niels Bohr, the quantum physicist) on the binding of oxygen to hemoglobin. For a historical account see this article, to be published in PLoS Computational Biology and then on Wikipedia. Around 1960, it was discovered that enzymes also exhibited this kind of ultrasensitive behaviour. In particular the multimeric “allosteric” enzymes, where regulators bind to sites sterically distinct from the substrate, displayed positive cooperativity of regulation. At that time, explanations of cooperativity relied either on the Adair-Klotz paradigm, which postulated a progressive increase of affinity as the ligand bound more sites, or on Pauling's, based on a single microscopic affinity plus an energy component coming from subunit interactions. In both cases the mechanism is instructionist, the ligand “instructing” the protein to change its binding-site affinities or its inter-subunit interactions. In addition, the state function (the fraction of active proteins) and the binding function (the fraction of protein bound to the ligand) were taken to be identical (more exactly, there was not even a notion that two different functions existed), something that was later shown to be wrong for these enzymes.

The model developed by Monod and Changeux (Jeffrey Wyman always referred to the paper as “the Monod and Changeux paper”) relied on brutally simple and physically based assumptions:

  1. thermodynamic equilibrium: the proteins whose activities are regulated by the binding of ligands exist in different interconvertible conformations, in thermodynamic equilibrium, even in the absence of ligand. This assumption is opposed to the induced-fit mechanism, whereby the protein exists in one conformation in the absence of ligand and in the other only when bound to it.
  2. different affinities for the two states: the two conformations display different affinities for the ligand. Consequently, the ligand shifts the equilibrium towards the state with the highest affinity (that is, the lowest free energy). This is a selectionist mechanism rather than an instructionist one. The binding of a ligand does not provoke the switch of conformation. Proteins flicker, with or without ligand bound. However, the time spent in any given conformation (or equivalently, the probability of being in that conformation) depends on the presence of ligand.
  3. all monomers of a multimer are in the same conformation: this assumption was, and still is, the most controversial. It is opposed to the notion of sequential transitions, whereby the monomers switch conformation progressively as the ligands bind to them.

The rest followed from simple thermodynamics, explained by the two figures below.

MWC reaction scheme

Reaction scheme showing the binding of ligands to an allosteric dimer. c=KR/KT.

MWC energy diagram

Energy diagram showing the stabilisation effect of successive binding events.
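The algebra on that fourth page can be sketched in a few lines of code. With α the normalised ligand concentration [F]/KR, L the allosteric constant [T0]/[R0], c = KR/KT and n the number of sites, the binding function Ȳ and the state function R̄ follow directly from the equilibrium scheme. The parameter values below (L = 1000, c = 0.01, n = 4) are illustrative defaults of my choosing, not values taken from the paper:

```python
# Minimal numerical sketch of the MWC (1965) equations.
# alpha = [F]/KR, L = [T0]/[R0], c = KR/KT, n = number of binding sites.
# The default parameter values are illustrative, not from the original paper.

def binding_function(alpha, L=1000.0, c=0.01, n=4):
    """Ybar: fraction of binding sites occupied by the ligand."""
    num = L * c * alpha * (1 + c * alpha) ** (n - 1) + alpha * (1 + alpha) ** (n - 1)
    den = L * (1 + c * alpha) ** n + (1 + alpha) ** n
    return num / den

def state_function(alpha, L=1000.0, c=0.01, n=4):
    """Rbar: fraction of proteins in the high-affinity R conformation."""
    return (1 + alpha) ** n / (L * (1 + c * alpha) ** n + (1 + alpha) ** n)

# Without ligand, a fraction 1/(1+L) of the proteins is already in the
# R state -- the conformational equilibrium pre-exists the ligand.
# As alpha grows, binding to the R state pulls the equilibrium over:
# Ybar rises sigmoidally (cooperativity), and Ybar != Rbar.
```

Note that the two functions are distinct for any finite L and c < 1, which is exactly the point the Adair-Klotz treatment could not express: it had no separate state function at all.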


The MWC model has been successfully used to explain the behaviour of many proteins, such as hemoglobin or allosteric enzymes, as mentioned above, but also neurotransmitter receptors, transcription factors, intracellular signalling mediators and scaffolding proteins. For an example of how MWC thinking helps to understand signalling cascades, see our work on calcium signalling in synaptic function (Stefan et al. PNAS 2008, 105: 10768-10773; Stefan et al. PLoS ONE 2012, 7(1): e29406; Li et al. PLoS ONE 2012, 7(9): e43810).

As with every useful theory, the MWC framework has since been refined and extended, for instance to encompass the interactions between several regulators, lattices of monomers etc. I'll finish with a little advertisement for a conference celebrating the 50th anniversary of allosteric regulation.

Is learning handwriting harmful for our kids?

My son is 6, and he is not super-skilled at writing. A decade ago, that would not have been a problem, because you were not supposed to read and write fluently at 6. However, thanks to the teaching methods based on graphemes and phonemes called “phonics”, English children can now start learning how to read at 4 and to write at 5.

By the way, phonics is one example where empirical methods have been proved right by modern investigations in neuroscience. For an excellent account of the neural basis of reading, and also of learning how to read, one can … read the book … Les neurones de la lecture by Stanislas Dehaene. I think the English version is Reading in the Brain.

Anyway, back to my son. He learnt to read quite quickly, and went on to write correctly … on a keyboard. But like some other children of his age, he exhibited two problems. The first was a problem with symmetry. As you'll read in Stan's book, before you learn to read you cannot distinguish between objects horizontally mirrored (as an evolutionary explanation, he points out that while it is important to distinguish a tiger on its paws from a tiger on its back (dead), it is not so important to distinguish the tiger coming from the left from the tiger coming from the right. Just run). So my son will write d for b, p for q, and half of the numbers the wrong way round. Interestingly, the problem is worse in Britain, where one learns to write “like a typewriter”. See the image below.

Top are British hand-written letters, bottom are French hand-written letters. The French ones are not symmetrical. And looking at the letters written in The Gimp by your author you can also understand that there is a certain genetic component to the problem at hand (no pun intended) …

The second problem he experienced is the conjunction of weak muscles and a strong mind. Because of the former, he writes very slowly (by hand) and the result is suboptimal. Because of the latter, he refuses to go on until the writing reaches what he considers acceptable. The result is continuous rewriting of the same bit of text, and running out of time.

Now why is it such a problem? After all, he is only 6. It is a problem because the SATs are based on written work. Yes! For those of you readers who are not living in Britain, children over here have their first written exams at 6. Mini-baccalaureates, if you wish. So the apologetic teacher had a meeting with us, and explained that she knew our son was able to count, to read, to understand stories and to produce some of his own. But she could not document it without written material, and he would fail his SATs. At which point I had to explain that by 20 only my mother could read my handwriting, by 30 not even her, and that nowadays I avoid taking notes because I cannot read my own scribbling. I then felt her despairing a bit, and toying with the idea of contacting social services.

Why do we condition the future of our children on something like handwriting? Who needs to write important documents by hand nowadays? Typewriters were the rule for many decades in administration. Now we have computers for all sorts of forms and declarations. Even the traditionally unreadable prescriptions from doctors are now typed and printed. Handwriting is of course a very useful skill, like riding a bike or swimming. But we do not refuse to assess the progression of children in other disciplines if they cannot ride a bike or swim. The comparison is a bit extreme. But after they leave school, our children will almost never need handwriting again. The only words I have written by hand over the last 5 years are my name and address on forms, and loads of totally unreadable New Year cards. Why don't we assess kids using computers? After all, they have ICT lessons all the time, and my boy has to use the Starz system, where he can play, work and exchange messages with friends (in a safe and easy-to-use environment).

But there is more. If you read Stan's book, you will come across the fact that by learning how to read when you are young, you re-route bundles of nerve fibres that are otherwise essential to orienteering. That is, either you read or you can find your way in a forest. But that is not a problem for us, because 1) not everyone gets lost in a forest or needs to hunt for food, and 2) we have maps and compasses that we can read. So now comes the question: what better usage, or perhaps entirely new usage, of our brain do we hinder by polishing throughout our childhood a skill, handwriting, that will be of no significant use in our adult life? Are we teaching our kids “how to find their way in a forest” to prepare them for a life of “reading road signs and maps”?

Update December 2013: My boy’s teacher allowed him to use a computer for an essay writing competition. He won the competition. Suddenly, he was able to express his creativity rather than spending his time struggling to form letters.