Wednesday, May 17, 2023

Review of “Innate,” by Kevin Mitchell



“Innate,” by the neurogeneticist Kevin Mitchell, explores the case for a genetic and neurodevelopmental origin of individual differences in intelligence and other human character traits. As the title suggests, the book generally leans toward “nature” in the nature vs. nurture debate, and assumes that “innate” implies a genetic origin, although with a more dynamic view of the path from gene to trait than one sees in Robert Plomin’s “Blueprint” or Kathryn Paige Harden’s “Genetic Lottery” (links to my reviews of those books are at the end of this review).

The question of the nature of individuals and how that nature arises has existed, in one form or another, for as long as human civilization, but it took a specific turn in the modern era with the work of Charles Darwin or, more specifically, the work of his half-cousin, Francis Galton, the eugenicist and polymath who applied Darwin’s evolutionary theories to human behavior and intelligence and coined the term “nature versus nurture.”


Galton’s eugenic ideas have inspired quite a bit of misery, and Mitchell rightly condemns them. Nonetheless, he is often complimentary of Galton’s statistical work related to trait heritability, which I find unfortunate. I don’t think one can simplistically separate this work from Galton’s eugenic ideas, which were arguably the driving force behind his math, and which are still embraced by race-oriented “scientists” to this day.


Pigeon-holing behavioral traits into mathematical boxes, so that traits like intelligence, extroversion, and schizophrenia can be assessed in the same way we might assess height, eye color, other obvious physical features, or even milk production in cows, is bizarre on its face. It involves some unimaginative assumptions about the nature and complexity of human beings, ignores ongoing philosophical debates, and reduces individual human nature to the assumption that it must arise from differences in genetics and neurodevelopment.


Mitchell uses the analogy of a robot being programmed to explain his view of the mind, with “computational algorithms of decision-making,” and “neuromodulator circuits …tuned - they work differently in each of us, thus influencing the habitual behavior strategies we each tend to develop.” Mitchell suggests that “brain circuits” develop with some variation in individuals that makes “major contributions to our psychological traits.” None of this is demonstrable; it is the kind of theoretical brain-as-computer understanding you find in his field. Unfortunately, Mitchell largely sells it as a factual representation of the human mind rather than as his theoretical viewpoint, a recurring theme in this book. I think he could use far more qualifiers when presenting his ideas.


I happen to take a much broader view of the human condition, and I could devote the entire review just to this kind of debate (if not an entire book). Perhaps his new book, “Free Agents,” will inspire such a review, but for now I will focus on the scientific claims and evidence that he presents, which I would suggest are rather slim. Perhaps because of this, Mitchell initially resorts to anecdotal proclamations to set the stage for his arguments, appealing to common-sense observations to imply a genetic or genetically influenced neurodevelopmental basis for trait differences:

“…it fits with our common experience that, at some level, people just are the way they are-that they’re just made that way. Certainly, any parent with more than one child will know that they start out different from each other.” 

Mitchell assumes that such anecdotal observations, in and of themselves, are evidence for a genetic basis for human character, but such intuition is only a starting point for a hypothesis that should then hold up to scientific scrutiny.

There is a sense of condescension behind this kind of certitude (at least it feels that way for those of us who might disagree), nowhere more so than in this argument:

“Similarly, growing up in a household with more books in it is correlated with higher IQ - does this mean reading raises your IQ? Well, I’m all for reading, but this correlation more likely reflects the fact that parents with higher IQ tend to have more books in the house and also tend to have children with higher IQ.”

Is that really “more likely”? I can think of several other explanations, but the point of a book like this is to provide evidence, rather than a restatement of the opinion behind the title of the book. It also seems somewhat contradictory to his later claim that the heritability of intelligence increases with age: “The more you learn and understand, the easier it is to learn and understand even more.” Mitchell also embraces the “Flynn Effect,” which I will discuss later, and which seems to cast further doubt on this contention.

In addition to anecdotal evidence, Mitchell makes several arguments by extension, particularly via other species, citing disputed studies of domesticated foxes and assumptions about dog breeds to make a case for genetically-derived individual differences in humans. He takes this a step further, in fact, suggesting that interspecies differences are evidence for individual human differences:  

“The difference between our genomes and those of chimps or tigers or aardvarks are responsible for the differences in our respective natures. INDIVIDUAL DIFFERENCES [emphasis his]. The same can be said for differences within species. There is extensive genetic variation across the individuals in every species.”

I think one can make a distinction between entire chromosomal differences between separate species and the occasional base-pair differences between humans. Furthermore, I would again suggest that more circumspect arguments are in order; for example, adding a qualifier: “PERHAPS [emphasis mine] the same could be said for differences within species.”

Because of the lack of solid findings from genetic studies, Mitchell, like Plomin and Harden, relies on twin and adoption studies to make the two-fold claim that intelligence and behavioral differences are genetic and that environmental factors (parents, teachers, culture, etc.) are far more limited. It’s becoming almost quaint that scientists are still making a case for the genetics of behavioral traits by relying heavily on twin and adoption studies. There are a number of problems with twin studies, beginning with the fact that some of the initial important twin studies have been exposed as fraudulent. More recent studies from the 1990s, such as the well-known “Minnesota Study of Twins Reared Apart,” have been increasingly called into question as well. In any case, the fact that identical twins are often more alike than fraternal twins can be based on many factors and not simply genetics. Moreover, adopted children differ environmentally in many ways from their non-adopted siblings, both in the circumstances of their early life and in the emotional consequences of being adopted, as anyone who has worked clinically with adoptees even well into their adult lives could attest.


It’s not feasible to provide a lengthy critique of twin and adoption studies in this brief review, but since twin studies are being held up as significant evidence for hereditary claims, I think it’s useful to look at an example. For that, I’ll highlight a 2017 Danish twin study of people diagnosed with schizophrenia. Mitchell doesn’t cite this particular study directly, but he claims a concordance rate for schizophrenia of 50% based on twin studies, meaning that if one identical twin is diagnosed with schizophrenia, the other twin would have a 50% likelihood of getting that diagnosis. This is an old and often cited figure that I heard a lot during my psychiatry residency in the 1990s. It is also incorrect. Looking at modern studies, the concordance rate is less than half of that, and it stretches even thinner in well-documented studies, such as the 2017 Danish study, which had the entire nation’s twin registry available. In that study, the concordance rate was only 14%. In an earlier study from Finland, which also has good documentation of twins, the figure was 11%. So the true concordance rate might be in the 10% to 15% range.


If you are unfamiliar with the statistical analyses used in these studies, you might find it surprising that this study reporting 14% concordance claimed a heritability of 79% for schizophrenia. That sounds like a lot, of course. It implies that schizophrenia is mostly a genetic disorder, as Mitchell contends. Yet, even if you have an identical twin brother with schizophrenia, your chances of being diagnosed with schizophrenia are relatively low based on this study.


How is this heritability figure derived? Well, non-identical (fraternal) twins had a 4% concordance. Heritability calculations simply use a formula that takes into account the difference in concordance between identical and fraternal twins. So even though the concordance is low in both cases, they claim a high heritability. Such equations might be useful when assessing the genetics of milk production in cattle, but for a complex disorder like schizophrenia, this is neither useful nor informative, and likely invalid. It also assumes independent evaluation by those making the diagnosis, which is far from accurate. For example, psychiatrists, as part of their training, are told to give weight to a diagnosis of schizophrenia if it is known that the patient has an identical twin with that diagnosis. This will, of course, inflate the concordance rate, especially when (as noted in the previously cited Finnish study) identical twins are three times more likely to be living together than fraternal twins, and so are more likely to have the same doctor or a doctor who is aware of the sibling’s diagnosis. Thus, even the 10% to 15% might be a gross exaggeration of the concordance rate. It would also be useful to know exactly what kinds of patients are being diagnosed with schizophrenia in these studies, as that isn’t as cut and dried as one might think from a clinical perspective, as I’ll discuss later.
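For readers unfamiliar with how a 14% concordance turns into a 79% heritability claim, here is a minimal sketch in Python of the classical Falconer-style decomposition. The caveat is that the Danish study itself used a more elaborate liability-threshold model fitted with statistical software; the code and its interpretation below are my own illustration, not anything taken from that study or from Mitchell’s book.

```python
# A minimal sketch (my own illustration, not the Danish study's actual model)
# of how twin concordances get converted into a heritability estimate.
# Published figures like the 79% come from liability-threshold models; the
# naive Falconer approach below just doubles the MZ-DZ difference.

def falconer_decomposition(r_mz: float, r_dz: float) -> dict:
    """Classical Falconer decomposition from MZ and DZ twin correlations."""
    h2 = 2 * (r_mz - r_dz)   # "heritability" (additive genetic variance)
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # non-shared environment plus measurement error
    return {"h2": h2, "c2": c2, "e2": e2}

# Treating the raw concordances from the example above (14% MZ, 4% DZ)
# naively as correlations yields only ~20% "heritability":
result = falconer_decomposition(0.14, 0.04)
print({k: round(v, 2) for k, v in result.items()})
# {'h2': 0.2, 'c2': -0.06, 'e2': 0.86}

# The much larger published estimate depends on first modeling schizophrenia
# as a threshold on an unobserved "liability" scale, which converts low
# concordances into large latent correlations before a similar decomposition
# is applied -- the modeling assumptions do much of the work.
```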


So even when looking at well-designed twin studies (and many of them are not), one can see that the claims are far less impressive than they might seem from a peek at the abstract or a press release. Moreover, why are we even talking about twin and adoption studies after three decades of actual genetic studies, which, if twin studies were valid, should have confirmed their findings?

The reason is simple: genetic studies have not confirmed the high heritability claims that were expected based on twin studies. It was assumed that we would find the genes involved for various behavioral traits. To date, we have not really been able to definitively link any specific genetic variants to a trait. Studies will claim that some variants are seen more commonly in those with the trait, but it has not been possible to show that they have anything to do with the trait in a way that can be causally mapped out. Also, if you take all the genetic variants that are more common for a trait and “total” how much each would contribute to the trait, you will find that it is a tiny fraction of the claimed heritability from twin studies, and even that is arguably suspect.


It’s also worth noting the history of failed genetic studies. As Mitchell concedes, most of the early genetic studies, referred to as candidate gene studies, did not hold up in the end. These studies would pick one or a few genes that might seem to be logical candidates for a trait (e.g., dopamine-related genes for schizophrenia or serotonin-related genes for depression, based on the medications used to treat those conditions) to determine whether a variant of the gene is more common among those with the trait.

Many of these studies would claim to demonstrate that specific genetic variants were linked to schizophrenia, depression, IQ, various personality traits, etc., and these studies were often highlighted (and exaggerated) in major media outlets, creating an illusion of genetic discoveries for these traits that pervades societal consciousness to this day, to our detriment, in my opinion.


Mitchell is far more accepting of the successor to candidate gene studies: genome-wide association studies (GWAS). These studies have the ability to look at thousands of locations along our chromosomal DNA to find genetic variants that are more commonly correlated with individuals who have a particular trait like “schizophrenia” or “IQ.” Mitchell gives these studies more credence, stating that the larger sample sizes make them more accurate, although one could easily use larger sample sizes for candidate gene studies, which no one proposes. Arguably, finding multiple variants in a study merely compounds the problem of false positive findings. Moreover, GWAS certainly do not eliminate the problem of publication bias, as Mitchell claims, especially when you consider that most of these studies continue to use the same dataset (UK Biobank), and therefore those doing the studies already know what is in the data before designing their studies.


Even if you accept the validity of GWAS, though, you are left with a significant problem: although they flag a large number of variants correlated with a trait, these correlations are far too small to explain the heritability claims of twin studies, often explaining only 1 or 2% of the variance instead of the 60% or more cited in various twin studies. This disparity is referred to as the “missing heritability” problem. Over the past decade, the assumption was that larger sample sizes would find most of this missing heritability, but that hasn’t happened. Mitchell tries to tackle this in a couple of ways. The first is the idea of a “stochastic” neurodevelopmental process that individualizes outcomes even for genetically identical twins.
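To make the size of that gap concrete, here is a trivial back-of-the-envelope sketch. Every number in it is an invented placeholder chosen only to mirror the “1 or 2% versus 60%” contrast described above, not a result from any actual GWAS or twin study.

```python
# A back-of-the-envelope illustration of the "missing heritability" gap.
# The numbers are invented placeholders, not real GWAS results; the point
# is only the arithmetic of summing many tiny contributions and comparing
# the total to a twin-study heritability claim.

twin_heritability = 0.60        # e.g., a typical twin-study claim for IQ

variance_per_hit = 0.0002       # each "significant" variant explains ~0.02%
num_hits = 100                  # a hypothetical number of genome-wide hits

gwas_variance_explained = variance_per_hit * num_hits   # 0.02, i.e., 2%
missing = twin_heritability - gwas_variance_explained

print(f"GWAS hits explain {gwas_variance_explained:.1%} of the variance")
print(f"Twin studies claim {twin_heritability:.0%}")
print(f"'Missing heritability': {missing:.0%} of the variance is unaccounted for")
```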


The idea here, as I understand it, is that there is a certain amount of randomness in the neurodevelopmental process, but there is still a trend toward a particular direction or outcome. Mitchell uses the analogy of a ball rolling down a hill and encountering ruts and valleys that change the ball’s course, but with most balls trending toward a particular endpoint. This analogy seems to capture the same idea as a “quincunx,” a device in which a bead is dropped down a peg board, bouncing hither and yon, but more often ending in the center position, creating a kind of “bell curve” distribution. It is also referred to as a “Galton Board,” named after its inventor, the previously mentioned eugenicist Francis Galton (which is perhaps why Mitchell chose a different analogy).
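The quincunx is easy to simulate, and a quick sketch (my own illustration, not something from the book) shows the intuition the ball-and-hill analogy trades on: purely random left-or-right bounces still pile up into a bell-shaped distribution.

```python
# A quick simulation of the quincunx / Galton board described above: each
# bead bounces left or right at every peg, and the final positions pile up
# into a rough bell curve. This is my own illustration of the analogy, not
# anything taken from Mitchell's book.
import random
from collections import Counter

def drop_bead(rows: int) -> int:
    """Return the final bin: the number of rightward bounces out of `rows`."""
    return sum(random.random() < 0.5 for _ in range(rows))

def galton_board(n_beads: int, rows: int) -> Counter:
    return Counter(drop_bead(rows) for _ in range(n_beads))

ROWS = 12
bins = galton_board(n_beads=10_000, rows=ROWS)
for position in range(ROWS + 1):
    print(f"{position:2d} | {'#' * (bins[position] // 100)}")
# Most beads land near the middle bins and only a few at the extremes:
# random bounces, yet a strongly "trended" overall outcome.
```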



In any case, Mitchell uses his analogy to explain why one identical twin might have a trait while the other does not, while still claiming the influence of genes, since identical twins are more likely to share the trait. This is not definitive in the way it would be for, say, Huntington’s chorea, a debilitating neurological disease caused by a specific single genetic mutation, where if one twin had the disorder the other would have it 100% of the time. The quincunx and similar analogies are also often used to explain chaos theory, specifically deterministic chaos and the “Butterfly Effect,” where small, imperceptible changes in the initial conditions can lead to dramatically different outcomes, and therein lies a problem with the analogy as a model of trait inheritance.


If we lived in a world of genetic clones and needed only to explain why there are differences in behavioral phenotypes between clones, these randomness-producing analogies might be useful. The problem arises when you have to explain significant heritability from one generation to the next using such analogies. If, for example, you take the 14% concordance rate from the Danish schizophrenia study noted earlier, imagine how insignificant the concordance would be for offspring, when you are changing out half the genetic variants (or pegs, or bumps and valleys in the path). There is no reason to assume that you are likely to get more frequent outcomes or phenotypes than you might at random. In fact, you could just as easily get the opposite outcome more frequently in such a scenario, where a person is more likely not to have the trait. Perhaps it could be argued that this is just a limitation of the analogy, but if you can’t provide a mathematically workable analogy to explain both randomness in neurodevelopment and generational genetic heritability, then perhaps you are working with an invalid premise.


Mitchell’s other explanation for the “missing heritability” is rare genetic variants, which he suggests may be contributing to a phenotype without yet being elucidated in a GWAS. A bit convenient, I think, and Harden made a similar argument in “The Genetic Lottery.” The idea is that these variants are more likely to be de novo and to have a larger effect than common variants. There is little actual evidence that rare variants have a significant role. Mitchell appears to be basing the idea on what we see with copy number variants (CNVs): deletions or duplications of chromosomal segments that can cause significant pathology. He reasons that rare variants could have similar pathological effects that correlate with psychological traits such as schizophrenia and IQ (invariably negatively).


Mitchell cites a classic example of this: “…the deletion at 22q11.2 (referring to a specific genomic position on chromosome 22), which is now known to be the cause of what used to be called velo-cardio-facial syndrome.” This CNV is often noted to confer a significantly higher risk for schizophrenia, in addition to other mental and physical disabilities. The idea, then, is that rare variants, not yet identified, might also confer a high risk for schizophrenia, presumably without the other concomitant mental and physical difficulties, since these are generally not noted in those with what you might call “classic” schizophrenia.


As a long-practicing psychiatrist who primarily treats serious mental illnesses like schizophrenia, I think it’s worth delving into why this is a flawed assumption, which requires an understanding of some of the less glamorous aspects of psychiatric treatment. When a person enters the mental health system, treatment with medication, inpatient commitments, or outpatient housing requires a specific diagnosis. Clinicians cannot just give a patient a diagnosis of mental retardation, or what is now called intellectual disability. So when you have such a patient, who often has significant behavioral difficulties, you have to justify why you are giving them medication. By the time they are young adults, they have effectively been “trained” by family and mental health staff to describe “hearing voices” that caused them to act in destructive or self-destructive ways. They will also often be described as “paranoid” about others talking about them behind their back. It doesn’t take much imagination to see that such paranoia has a reality behind it for someone with intellectual disabilities. Generally, then, they are given the diagnosis of schizophrenia or, more commonly, schizoaffective disorder (which is going to be included in genetic studies of the schizophrenia spectrum of disorders) to account for these stated symptoms, as well as the mood swings and temper tantrums commonly associated with these disabilities. At that point, you have a justification for giving antipsychotic medications and mood stabilizers, which really function more as behavioral controls. Similarly, you generally need to justify a psychiatric inpatient admission with a DSM diagnosis (the “Diagnostic and Statistical Manual” used by psychiatrists to assign diagnoses), using a serious mental disorder like schizophrenia.


These individuals do not present in the way that those with more classical schizophrenia do, who hear actual voices speaking to them and have more specific and bizarre paranoid delusions (the CIA, Freemasons, religious deities, etc.), nor do their mood issues resemble true “mania.” So, semantically, you have the same stated symptoms, but little real resemblance in presentation. This has more to do with the limitations of the DSM. I am by no means proud of this aspect of psychiatry and mental health treatment and avoid engaging in it as best I can, but often by the time you see a patient like this, they have been given a diagnosis and have received medications for many years. I would be happy to see it addressed in its own right, but I’m bringing it up here because I think it creates confounding issues for scientists studying schizophrenia and other disorders who might not have significant clinical experience. I assume that Mitchell, like most behavioral geneticists, does not.


In any case, this is why I think the “rare variants” argument is a non-starter. These rare variants would have to cause schizophrenia, without the other mental and physical issues associated with 22q11.2 deletion, Prader-Willi Syndrome or other genetic disorders.  The problem with this is that the other mental disabilities are exactly what leads to their diagnosis of schizophrenia. For that reason, I think it is unlikely that rare variants will fill in any significant portion of the missing heritability and this is really just a bit of wishful thinking.


Mitchell also devotes some time to gender differences, claiming that the wiring of male and female brains differs in some way that would explain behavioral differences between men and women. He invokes some of the usual unfalsifiable “just so stories” of evolutionary psychology (nurturer vs. hunter, beards and large breasts as mate attractors, etc.) to suggest the necessary genetic differentiation. There is, in fact, little real science demonstrating differences between the brains of men and women; much of it is contradictory, and methods such as functional magnetic resonance imaging (fMRI) have been strongly criticized and have not produced consistent results. Moreover, I hope that someday we get past the point of using brain size differences to explain differences in behavior and intelligence, whether that be with an MRI or calipers.


Mitchell also contends that there is a genetic, neurodevelopmental origin for sexual preference, as seen here:

“The vast majority of humans who inherit a Y chromosome are primarily attracted to females, while the vast majority of those who do not are primarily attracted to males. That is an incredibly potent genetic effect.”

As a thought experiment, let me tweak this quote as follows:

“The vast majority of humans who inherit a gene consistent with dark pigment are primarily attracted to others with that same pigment gene, and those without that pigment gene are primarily attracted to others without that pigment gene.”

Is that an incredibly potent genetic effect, as well? I think if you really want to connect genetics to behavior, looking at the obvious on the outside would be far more fruitful than trying to connect it to unspecified brain circuits.


Mitchell doesn’t completely abandon environmental factors in the development of traits. For example, he acknowledges the “Flynn Effect,” which describes how the collective IQ of a nation or culture can increase over time with better education, nutrition, and other opportunities. This effect was exemplified, as Mitchell notes, over a three-decade span from the 1970s to the 2000s in Ireland, where collective Irish IQ scores, which had lagged behind their British counterparts by as much as 15 points, largely equalized over that period. This, of course, is not due to changes in the genetics of Irish people over one or two generations, but to better opportunities for the people there. Mitchell discusses the Flynn Effect in order to make the case that attempts to find IQ differences between groups (which inevitably lead to racist and classist conclusions) are invalid, and I certainly agree with that. However, I would argue that the Flynn Effect points to a problem with most behavioral genetic studies: population stratification, which should call these genetic studies, along with the field of behavioral genetics itself, into question.


Say, for example, you had performed a modern genetic study (GWAS) for IQ in England and Ireland in the 1970s. Because of the IQ disparity at that time between the two countries, you would have found many genetic correlations for lower IQ that are really just markers for Irish ancestry (similar to how a site like Ancestry.com works). If you did such a GWAS now, of course, none of these specific correlations would remain, demonstrating that they are all false positives. Yet, at the time, a GWAS would have identified variants correlated with cognitive function and claimed that these explained genetic differences in cognition, without identifying or explaining how any specific variant affects our IQ. There would have been assumptions about Irish people being genetically inferior to British people. For anyone who harbored beliefs about British superiority, this study would be the kind of thing that would affirm those beliefs and perhaps even help spread those prejudices.
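To see how that would play out mechanically, here is a toy simulation of that hypothetical 1970s study. All of the numbers in it (the allele frequencies, the test-score means, the sample sizes) are made up for the sake of the example; the point is only that a variant with no causal effect can look strongly “associated” with the trait when populations differ for environmental reasons.

```python
# A toy simulation of the population-stratification scenario sketched above,
# entirely my own illustration with made-up numbers. Two populations differ
# in the frequency of a SNP that does nothing, and also differ in mean test
# score for purely environmental reasons. A naive association test
# "discovers" the SNP anyway.
import random
import statistics

random.seed(0)

def simulate_person(pop: str) -> tuple[int, float]:
    # Allele frequency differs by ancestry; the allele has no causal effect.
    freq = 0.7 if pop == "A" else 0.3
    genotype = sum(random.random() < freq for _ in range(2))   # 0, 1, or 2 copies
    # Environmentally driven mean difference between the populations.
    mean_score = 100 if pop == "A" else 92
    score = random.gauss(mean_score, 15)
    return genotype, score

people = [simulate_person("A") for _ in range(5000)] + \
         [simulate_person("B") for _ in range(5000)]

# Naive "GWAS": compare mean scores by genotype, ignoring ancestry.
by_genotype = {g: [s for geno, s in people if geno == g] for g in (0, 1, 2)}
for g in (0, 1, 2):
    print(f"{g} copies: mean score {statistics.mean(by_genotype[g]):.1f}")
# Carriers of the allele that is more common in population A score higher on
# average, so the SNP looks "associated with IQ" -- yet by construction it is
# only a marker of ancestry, i.e., a false positive due to stratification.
```

Real GWAS do attempt to correct for this with ancestry covariates such as principal components, but the correction is only as good as the structure it can detect, which is the nub of the stratification concern.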


May I suggest that we are in a similar situation now? These genetic studies identify even tiny correlations that are likely markers for ancestry, geographic location, cultural and racial divisions, class divisions, and basically anything that stratifies populations. These correlations will then be held up as proof of a genetic basis for intelligence or other traits, without any real proof that they are anything more than false positive results. Despite this, racist individuals use these studies to suggest racial disparities, justifying their prejudices as scientific. No amount of hand-waving from scientists will change this.


Given the unfortunate use of this science to justify and even reinforce racist and classist assumptions, it’s hard to understand what drives this desire to find evidence of genetic determinism. We’ve had 150 years of this Francis Galton fantasy, which has led to some devastating black marks in our history: the eugenics movement, institutionalization and sterilization policies here in the U.S., racist immigration policies, and, of course, the Nazi “master race” atrocities. Books like this oversell the knowledge to date and rarely consider the possibility that there is simply little or nothing to find, which is my contention. Mitchell at least acknowledges some of the limitations of genetic studies, but otherwise this is the latest in an endless line of such books, each with the supposition that the answers will be found at a later date, while deflecting from the misuse of this kind of science.


My review of Robert Plomin’s “Blueprint” can be found here: https://unwashedgenes.blogspot.com/2019/01/copy-of-my-review-of-robert-plomins.html


My review of Kathryn Paige Harden’s “Genetic Lottery” can be found here: https://unwashedgenes.blogspot.com/2021/09/my-review-of-kathryn-paige-hardens.html

