As a clinical psychologist I collaborate with people facing problems that are sometimes practical but also of an emotional nature. Emotions evolved to motivate us, to “motion” us into action — and sometimes those emotions are involuntary, that is to say, not chosen. Some are unwanted. Take shame, for example, one of many social emotions that allow us to navigate relationships.

The emotion of shame doesn’t garner as much research attention as do other painful emotions, but clinically it undergirds many instances of depression and anxiety. My mentor, Albert Ellis, saw shame as a driver of much emotional pain and devised “shame-attacking” exercises to rid us of all shame. He saw shame as a bug, not a feature, of our cognitive software. The approach yielded mixed results, but today we have a better understanding of shame.1

A clue that shame has evolutionary roots is that it feels involuntary.2 Morality and a sense of shame served to secure our place among kin and group. Being held to shared standards of conscientiousness always carried the possibility of falling short of our tribe’s expectations.

Understanding evolution helps us gain an “ancestral awareness” with which to view the modern challenges that face us: mismatches between those natural mechanisms and the novel current environment. As a cognitive behavioral therapy practitioner of many years, I find that this perspective fits into best practices because it aids us in understanding dysfunctional emotions and in adjusting our thinking.

For example, many of us avoid potentially embarrassing adventures (to our long-term detriment) because of the risk of incurring scorn. Clinging to an “appropriate” facade feels natural but can also activate anxiety: “What if I look foolish? That would be horrible.” That’s a common premise entertained (sometimes semi-consciously) by anxious people. I’ve often seen versions of this phenomenon in a clinical setting, with consequent anxiety and depression.

Self-recrimination and depression can follow a self-described “shameful” performance. Anxiety often occurs in anticipation of one. In both cases, shame is at the root: when a possibly shameful event looms in the future, we feel anxiety; when it lies in the past, we feel depression.

We want to survive, mate, contribute to our group, offer resources, and command positive attention. Here, shame may represent a trade-off between those goals, serving as a signal to ourselves and to others that we are trustworthy and well-meaning. An evolutionary perspective can help us keep conscientiousness while challenging crippling shame.4

Shame may have helped us to maintain conscientious behavior, but our current self-messages, such as “I absolutely must be approved and loved by important allies,” may trigger unwanted modern effects such as avoidance and isolation.

Shame seems to have two separate components. One is corrective feedback by which we monitor our social behavior. The other, more troublesome one is putting oneself down as an incompetent person. Huge difference! My job is to help clients keep the first while changing the second. And evolutionary thinking helps us do that.

Today, the entire world seems like our village. Social media can fool us into taking our reputations not just seriously, which is good, but too seriously, which is harmful. You see, for most of human history our reputations determined the entirety of our social options. Our minds still read them that way. But a few screwups today need not sully our relationships in any lasting way. Our lives are long, and our opportunities to connect with new people have never been greater.

Our ancestors did not have electronic social media, iPhones, police, air travel, cities, Tinder, vodka, running water, Starbucks, or delis. This may seem obvious, but its profundity becomes clearer with a few thought experiments. Mismatch insights are revelatory in reframing our modern experiences. Not a cure, but a perspective that helps distinguish emotional features from emotional bugs, and that shows how the experience of shame can reflect both. Evolutionary context and purpose matter, and understanding ancestral mismatch can provide a ready Rosetta Stone.

An ancestral awareness can provide some perspective on our propensity to feel overly ashamed about our errors, either the ones we’ve made, or the ones we’re endlessly committing—in our imaginings only.

References:

  1. Gilbert, P. (2003). Evolution, social roles, and differences in shame and guilt. Social Research: An International Quarterly of the Social Sciences 70, 1205-1230
  2. Gilbert, P., & McGuire, M. T. (1998). Shame, status, and social roles: Psychobiology and evolution. In P. Gilbert & B. Andrews (Eds.), Series in affective science. Shame: Interpersonal behavior, psychopathology, and culture (pp. 99-125). New York, NY, US: Oxford University Press.
  3. Gilbert, P. (2006). Compassionate mind training for people with high shame and self-criticism: overview and pilot study of a group therapy approach. Clinical Psychology & Psychotherapy, 353-379.
  4. Pelusi, N. (2008) No shame on you. Psychology Today: https://www.psychologytoday.com/articles/200801/neanderthink-no-shame-you

The world continues to wait for a conversation between Brett Weinstein and myself on the topic of group selection. It began with Weinstein’s conversation with Richard Dawkins, where he tried and failed to get the great master to admit that religions might be adaptations rather than mind viruses. That led me to look up what Weinstein has written on the subject. There wasn’t much [1], but it was enough for me to tweet that he was invoking group selection. Weinstein didn’t see it that way, so we both started to tweet about having our own conversation. It would be punchy but respectful, in keeping with the ethos of the so-called Intellectual Dark Web.

Alas, the event has yet to materialize so the world continues to wait. Fortunately, a lengthy conversation between Weinstein and Joseph Walker [2] provides all the information needed to justify my original hunch. Brett Weinstein invokes group selection in every way except using the words.

If this phrase sounds familiar to some readers, it is because I have written a similar critique of Richard Wrangham [3], based on his new book The Goodness Paradox. As an old guy (I turn 70 in July), I even critiqued Richard Alexander, Weinstein’s mentor and PhD advisor, way back in 1999 [4]!

Lest you think that I’m trying to nuke the entire establishment of evolutionary thinkers who traffic in ideas such as selfish genes, kin selection, reciprocal altruism, indirect reciprocity, and extended phenotypes, let me be clear about the nature of my complaint. Imagine that you are fluent in two languages—say, English and Spanish–and you enter into conversation with someone who speaks English fluently and Spanish hardly at all. To your surprise, this person tells you that English is a superior language and that much of what is said in Spanish is flat out wrong. What an incredible boor!

That is my complaint against the establishment of evolutionary thinkers who reject group selection in their own minds when they can barely speak its language. Why can’t they become bilingual, for heaven’s sake! I can speak and appreciate what is said in their language–why can’t they do the same in mine?

Actually, the peer-reviewed literature has become more bilingual over the decades, I am happy to report. Here is what two highly respected researchers, Jonathan Birch and Samir Okasha, write in a 2014 article titled “Kin Selection and Its Critics” [5, p. 28].

In earlier debates, biologists tended to regard kin and multilevel selection as rival empirical hypotheses, but many contemporary biologists regard them as ultimately equivalent, on the grounds that gene frequency change can be correctly computed using either approach. Although dissenters from this equivalence claim can be found, the majority of social evolutionists appear to endorse it.

A 2014 survey of anthropologists from PhD-granting departments found that the majority accepted group selection as an important force in human cultural evolution and understood the concept of equivalence described in the passage quoted above [6]. Unfortunately, the majority of social evolutionists contributing to the peer-reviewed literature aren’t very active on the internet. In addition, there are generational effects. The adage “science progresses funeral by funeral” doesn’t hold for everyone, but it does hold for some, who will go to their graves proclaiming that English is superior to Spanish and that things said in Spanish are just plain wrong.

***

Weinstein laments Dawkins’ inflexibility on the subject of religion, but he himself is a seed that has fallen close to the tree of his mentors, Robert Trivers and Richard Alexander, in his portrayal of group selection. At least he is fluent in the language that he knows, so his interview with Walker provides all of the background needed on the core concepts of selfish genes, kin selection, reciprocal altruism, indirect reciprocity, and extended phenotypes, which became established during the second half of the 20th century, along with the stock argument for why group selection doesn’t work. Let’s take these in turn.

What all the core concepts share in common is the appearance of selfishness, or self-interest if you prefer. Everything that evolves boils down to selfish genes. Inclusive fitness is about maximizing the copies of your genes in the bodies of others in addition to yourself. Reciprocity is about helping others in expectation of return benefits to yourself. Extended phenotypes are about your genes reaching beyond your body, such as a beaver dam as the phenotype of a beaver. Indirect reciprocity notes that the return benefits from helping others need not be restricted to the individual that you helped. In all cases, the explanation ends up being all about you.

The stock argument against group selection is that in any group containing both altruistic and selfish individuals, it is inevitable that the latter will replace the former. Hence group selection won’t work. This is what Dawkins said about group selection in The Selfish Gene, and what Weinstein says in his interview with Walker, over four decades later, hasn’t changed a bit. There’s an enduring meme for you!

I have always marvelled at how this argument against group selection could ever have been taken seriously when it doesn’t even acknowledge the solution offered by group selection theory. The reason that altruism can evolve, despite declining in frequency in each and every group also containing selfish individuals, is that groups with a higher frequency of altruists contribute more to the total gene pool than groups with a lower frequency of altruists. What evolves in the total population reflects a balance between the opposing forces of selection among individuals within groups (favoring selfishness) and selection among groups in a multi-group population (favoring altruism). You can’t dismiss the evolution of altruism by focusing only on the negative within-group component!

Allow me to illustrate this point with one of the most influential models of group selection—the haystack model of John Maynard Smith [7]. The advantage of a model is that it makes everything precise. Maynard Smith fancifully imagined a population of mice that lives in haystacks. At a single genetic locus, one allele codes for aggressiveness (a form of selfishness) and its alternate codes for docility (a form of altruism). Each haystack is colonized by a single female fertilized at random by a single male—therefore a sample of four alleles drawn at random from the total population. The population in each haystack grows for a number of generations without any migration between haystacks. Then all of the mice disperse, mate randomly, and colonize a new set of haystacks to repeat the cycle. During the period of time spent in isolation, the selfish gene completely replaces the altruistic gene in all haystacks that were colonized by both alleles. In haystacks that were colonized only by the altruistic allele, however, the mouse population grows larger and contributes more to the total gene pool than the selfish populations.

Unlike the stock dismissal of group selection, the haystack model includes both within-group selection (favoring selfishness) and between-group selection (favoring altruism) in accounting for what evolves in the total population. Given the assumptions of the model, within-group selection is by far the strongest force, so altruism goes extinct not only within each haystack, but in the total population. When the haystack model was published in Nature magazine in 1964, it was regarded as a drop-dead argument against group selection as a whole.

But wait! Looking back, the assumptions of the model are highly unrealistic. In particular, the assumption that altruistic genes are completely replaced by selfish genes within each haystack before dispersal occurs makes within-group selection as strong as it can possibly be. In fairness to Maynard Smith, 1964 was before the advent of desktop computers and many of his assumptions were made to simplify the math. But what if we were to revisit the model with more realistic assumptions? For example, what if we use the same benefit and cost terms that Hamilton used in his model of inclusive fitness? In other words, in every haystack containing both types, altruists deliver benefits (b) to others in their group at a cost (c) to themselves, while selfish individuals accept the benefits without paying any costs. Let’s also make the number of generations spent within each haystack (e.g., 5, 10, 15) a variable of the model. What happens then?

This is the question that I asked in a 1987 article published in the journal Evolution [8]. What happens is that the altruistic gene declines in frequency in each group colonized by both types, but it doesn’t entirely go extinct. Take the extreme case of a single altruistic mutation in a population of selfish genes. The mutant allele finds itself in a haystack with three selfish alleles, or an initial frequency of 25%. Depending upon the values that we assign to the b’s and c’s, its frequency might decline to 19% after 5 generations, 14% after 10 generations, and 9% after 15 generations. That’s the bad news. The good news is that the single haystack containing the altruists might grow 20%, 45%, and 80% larger than the entirely selfish haystacks during the same period. Thanks to this fitness difference at the group level, the frequency of the altruistic gene can increase in the total population, despite decreasing within the haystack. A thorough exploration of the parameter space, made possible by the advent of desktop computing, showed that altruism can robustly evolve in the haystack model, given assumptions that are more reasonable than Maynard Smith’s original model.
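To make the bookkeeping of this two-level balance concrete, here is a minimal simulation sketch of a haystack-style life cycle with Hamilton-style benefit (b) and cost (c) terms. It is an illustrative toy, not the 1987 model: the haploid simplification, the payoff scheme, and all parameter values are assumptions chosen only to show how within-group decline and between-group productivity trade off.

```python
import random

# Illustrative parameters (assumptions for this sketch, not published values).
B, C = 0.5, 0.1          # benefit an altruist adds to group growth; cost it pays
N_HAYSTACKS = 10_000     # haystacks founded each colonization cycle
FOUNDERS = 4             # alleles per haystack (one singly-mated female)
GENS_INSIDE = 10         # generations of selection inside each haystack

def within_group_decline(p, gens):
    """Within a mixed haystack the altruist allele always declines,
    because altruists pay cost C that selfish members avoid."""
    for _ in range(gens):
        w_alt = 1 + B * p - C       # altruists share the benefit and pay the cost
        w_self = 1 + B * p          # selfish members share the benefit, pay nothing
        p = p * w_alt / (p * w_alt + (1 - p) * w_self)
    return p

def group_output(p0, gens):
    """Haystacks founded with more altruists grow larger and export more alleles."""
    return (1 + B * p0) ** gens

def one_cycle(p_global):
    """Found haystacks at random, run within-group selection, then pool dispersers."""
    exported_altruist = exported_total = 0.0
    for _ in range(N_HAYSTACKS):
        founders = sum(random.random() < p_global for _ in range(FOUNDERS))
        p0 = founders / FOUNDERS
        exported = group_output(p0, GENS_INSIDE)
        exported_altruist += exported * within_group_decline(p0, GENS_INSIDE)
        exported_total += exported
    return exported_altruist / exported_total

p = 0.05
for cycle in range(20):
    p = one_cycle(p)
print(f"altruist allele frequency after 20 cycles: {p:.3f}")
# Whether the altruist allele spreads or dwindles depends on B, C, the number
# of founders, and the generations spent inside each haystack -- the "balance
# between levels of selection" described in the text.
```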

This was only one of many computer simulation models, laboratory studies, and field studies demonstrating that between-group selection cannot be dismissed as invariably weak. Instead, the balance between levels of selection must be evaluated on a case-by-case basis. One of the most general and elegant theoretical formulations of multilevel selection is the Price equation, which convinced Hamilton that his theory of inclusive fitness had been invoking group selection all along [9]. None of this is reflected in Weinstein’s perpetuation of the stock argument against group selection, which only takes note of within-group selection.
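For readers who want to see the partition itself, the multilevel form of the Price equation (given here in a standard textbook form that ignores transmission bias; this is my gloss, not a quotation from the cited work) splits total evolutionary change into exactly the two components discussed above:

$$\bar{w}\,\Delta\bar{z} \;=\; \underbrace{\mathrm{Cov}\left(W_k,\, Z_k\right)}_{\text{between-group selection}} \;+\; \underbrace{\mathrm{E}_k\!\left[\mathrm{Cov}\left(w_{kj},\, z_{kj}\right)\right]}_{\text{within-group selection}}$$

Here $z_{kj}$ and $w_{kj}$ are the trait value and fitness of individual $j$ in group $k$, while $Z_k$ and $W_k$ are the group means. An altruistic trait can increase in the whole population ($\Delta\bar{z} > 0$) even when every within-group covariance is negative, provided the between-group covariance is large enough.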

***

Something else that Weinstein gets wrong in his interview with Walker is the claim that group selection models are confined to interactions among non-relatives. I wish I could say otherwise, but this simply reflects ignorance of the literature, including everything that W.D. Hamilton wrote from 1975 onward. If Weinstein were bilingual, he would realize that the coefficient of relationship (r) in Hamilton’s rule translates into an index of genetic variation among groups in a group selection model. When r=0, individuals are randomly distributed into the groups. When r=1, group members are genetically identical and all of the variation is between groups. Throughout the entire range of r values, what evolves in the total population reflects a balance between levels of selection. In all of the classic group selection models, groups are colonized by small numbers of individuals, which are therefore related to each other compared to the total population. In the case of the haystack model, the mice within each haystack start out as full siblings, but this did not prevent Maynard Smith from calling it a group selection model and contrasting it with Hamilton’s model of kin selection. Years were required to reveal that not only is between-group selection a potent evolutionary force, but that it is implicitly invoked by all of the theories that were developed as alternatives.

Perhaps the best way to make this point is for me to present my fantasy version of a conversation with Weinstein.

David Sloan Wilson (DSW): Brett, do you agree with me that evolutionary models are based on relative fitness? It doesn’t matter how well an individual survives and reproduces in absolute terms, only that it does so better than others in its vicinity—right?

Brett Weinstein (BW): Of course!

DSW: Then doesn’t it strike you as curious that all of the models framed in terms of self-interest don’t calculate relative fitness?

BW: What do you mean?

DSW: They all assume that individuals or genes evolve to maximize their absolute fitness. For example, an individual that follows Hamilton’s rule will make more copies of its genes than if it doesn’t follow Hamilton’s rule. The reciprocal altruist gets a larger net benefit than if she doesn’t reciprocate. They all take the form of “what’s my payoff if I behave this way, compared to my payoff if I behave that way”. That’s the maximization of an individual’s (or gene’s) absolute fitness, not its relative fitness compared to others in its vicinity.

BW: But the models show that these behaviors evolve in the total population, compared to those who don’t maximize their absolute fitness. Isn’t that relative fitness?

DSW: Yes, but it’s relative fitness all things considered. By “in the vicinity,” I mean within the groups where the social interactions are taking place. Let’s take the concept of extended phenotypes, for example, which notes that a beaver dam can be regarded as part of the phenotype of a beaver.

BW: Yes—that’s one of Dawkins’ concepts that I like, even though I disagree with him about religion.

DSW: Ok, but now let’s look more closely at a beaver pond. Does it contain only one beaver?

BW: No, typically it contains more than one.

DSW: What if the beavers in a single pond vary in the amount of work they put into building a dam? Which have the highest relative fitness?

BW: Well…..

DSW: Spit it out, Brett! The free-riding beavers have the highest relative fitness! Calling the dam an extended phenotype of beavers doesn’t alter this fact!

BW: But what if the beavers are related?

DSW: That doesn’t matter either! In every pond containing both types, the free-riders will have the relative fitness advantage. Genetic relatedness is important because it clusters the dam-builders and free-riders into different ponds more than if they were distributed at random. So-called kin selection increases variation among groups, and therefore increases the importance of between-group selection compared to within-group selection. Partner choice among genetically unrelated individuals would do the same thing. It’s variation among groups that’s important, not genealogical relatedness per se. Hamilton got that in 1975. Where have you been all these years?

BW: Hey! We of the Intellectual Dark Web like to keep the conversation respectful!

DSW: So do I, but I am cordially telling you that there are academic standards to maintain. Our reading public deserves to know that we are both experts in the topic that we are discussing. But, to lighten up on you a bit, I know that there is a weird dynamic in which some experts aren’t bilingual when it comes to theories of social evolution. They only think in terms of the individualistic perspective, only read that literature, and therefore dismiss group selection and can’t see it in their own models when it is in front of their faces. Here’s an example from Richard Wrangham’s new book The Goodness Paradox, which is just like the beaver example. Instead of beavers in their dams, we have chimps in their communities. The counterpart of dam-building beavers is male chimps that kill or harass members of neighboring territories. The cost of doing this might not be large, but it is still an individual cost, while the benefit of expanding the territory over a period of years goes to the whole community. This is by Wrangham’s own account, but he can’t see the role of between-group selection any more than you can. The same goes for your mentor, Richard Alexander, but you should have the flexibility to learn what has become the consensus in the peer-reviewed literature.

BW: Well, gosh, David! Thanks for enlightening me! I’m really beginning to see it your way now!

Ok, that last line was total fantasy, but this is the conversation that I will attempt to have with Weinstein, if it ever materializes, and I don’t see how he can evade the conclusion. His stock dismissal of group selection—selection against altruists within groups—takes place in every model of social evolution framed in terms of self-interest and can be seen merely by comparing the relative fitness of individuals in the groups where the social interactions are taking place.

***

Much of the conversation between Walker and Weinstein is framed in terms of the study of religion. For Weinstein, it is a no-brainer that religions are adaptations, not mind viruses. They are also “memeplexes”—whole complexes of cultural traits that evolve as a package—not atomistic memes. Weinstein wishes that Dawkins could be flexible enough to get beyond mind viruses. I wish that Weinstein could be flexible enough to realize that possibly—just possibly—the evolution of religions as adaptive memeplexes might require a process of selection among alternative memeplexes and cannot be explained entirely in terms of selection among alternative memes within memeplexes. That should also be a no-brainer.

I also hope, respectfully, that in future conversations with myself or anyone else on the subject, Weinstein displays a grasp of the peer-reviewed literature on religion from an evolutionary perspective. As far as I can tell, he remains mostly within the bubble of the New Atheist community, which for the most part does not conduct serious research on the subject. When was the last time that Richard Dawkins, Daniel Dennett, or Sam Harris contributed to the peer-reviewed literature? How familiar is Weinstein with the likes of Candice Alcorta, Scott Atran, Robert Bellah, Pascal Boyer, Joseph Bulbulia, Michele Gelfand, Joseph Henrich, Dominic Johnson, Ara Norenzayan, Richard Sosis, Peter Turchin, and Harvey Whitehouse in addition to my own scholarly work? Go here [10,11] for recent overviews.

To pick an example central to New Atheist concerns, consider the concept of the resurrection associated with Christianity [what follows is distilled from a chapter of my book The Neighborhood Project titled “The Natural History of the Afterlife”]. When did it arise in human history and what caused its spread, compared to alternative beliefs? Historical scholarship is sufficiently detailed to answer this question [12]. In the Hebrew Bible, death is mentioned about a thousand times. In most cases, people just die and nothing is said about an afterlife of any kind. Judaism is centered around establishing the nation of Israel on earth, not in an afterlife.

In about seventy cases, an afterlife is mentioned in the Hebrew Bible, but it is not the heaven associated with Christianity. Instead it is Sheol, a gloomy place similar to Hades in Greek mythology, where everyone goes regardless of whether they have been good or bad.

Sheol is mentioned in a specific context—when people face the prospect of dying without having achieved much of anything during their time on earth. Their gloomy thoughts about the afterlife mirror their gloomy thoughts about their lives. So much for the concept of heaven as an individual-level adaptation for allaying anxiety about death!

The concept of the resurrection associated with Christianity doesn’t unambiguously appear until the Book of Daniel. Scholarship of the period is so detailed that the Book of Daniel can be dated to the time of the Seleucid persecution of 167-164 B.C.E., which makes it one of the latest texts of the Hebrew Bible. The challenge to Judaism at that time was assimilation: many Jews were attracted to Hellenic culture. According to the Book of Daniel, eternal life would be granted to Jews who maintained their traditional ways and all other Jews would suffer everlasting abhorrence. Belief in the resurrection was arguably a key factor that enabled the Maccabees, the traditionalist faction, to resist the Hellenist monarch Antiochus in a victory that is still celebrated in the festival of Hanukkah. Here is how two eminent scholars of the period put it [12].

The Jewish expectation of a resurrection of the dead is always and inextricably associated with the restoration of the people of Israel; it is not, in the first instance, focused on individual destiny. The question it answers is not the familiar, self-interested one, “Will I have life after death?” but rather a more profound and encompassing one, “Will God honor his promises to his people?”

In other words, the concept of the resurrection originated as a cultural mutation that increased solidarity in the context of between-group competition within Judaism. The belief did not increase the fitness of individuals, compared to other individuals in the same group who did not believe. It increased the fitness of the groups that believed, compared to the groups that didn’t believe.

The religion that formed around Jesus was initially a twig on a branch of this cultural tree that sprouted several centuries previously. The idea of a Messiah who would die and return to signal the end of days was by then thoroughly familiar. The followers of Jesus thought that he was the Messiah, that he had indeed risen three days after his crucifixion, and that the end time had begun. The Jesus sect would probably have remained a footnote in religious history were it not for a key difference that had nothing to do with its otherwise standard (for its time) afterlife beliefs: Gentiles could now become the chosen people. A belief system that previously had been restricted to one ethnic group could now spread throughout the world. The same dynamic would result in the origin and spread of Islam centuries later.

But wait! There’s more! Have you ever wondered why the four gospels of the New Testament are so different from each other, not to speak of the gospels that weren’t included in the New Testament? Was it simply because of faulty memory and the passage of time, which in evolutionary terms would be a form of cultural drift? Not according to the distinguished scholar of religion Elaine Pagels. In The Origin of Satan and other books, she interprets each gospel as a sacred story that became locally adapted to a particular Christian community in the context of its specific challenges. The gospels that made it into the New Testament were the ones that did the best job of creating strong communities.

There is much, much more, but perhaps I have said enough to make my general point: Human history provides a fossil record of cultural evolution so detailed that it puts the biological fossil record to shame. It isn’t necessary to speculate about whether religions are adaptations or..


It is well known to most that physical activity and exercise can exert positive effects upon our health and wellbeing. The fields of epidemiology and exercise physiology have yielded insights into the amount of physical activity we should be performing (with more typically being better), and suggest that the harder the physical activity is, the greater the benefit received, independently of the amount. Many have argued that the benefits received from physical activity and exercise come from our evolutionary history, where we were typically far more active than we are today; that is, our bodies evolved to be active. Some have even gone so far as to offer recommendations for how to achieve ‘evolutionary’ or ‘paleo’ fitness (Cordain et al., 1998; O’Keefe & Cordain, 2004; O’Keefe et al., 2010; 2011).

These recommendations consider ‘what should we do?’ based upon the evolved traits in humans that determine our physical activity capacities and limitations (i.e. ‘what can we do?’), and with respect to emulating the physical activity patterns of extinct or extant hunter-gatherers (i.e. ‘what did we do?’).

But, are narratives regarding evolutionary rationales and recommendations for physical activity and exercise a convenient ‘just so’ story?

Paleo-archaeology has given us considerable insight into the types of activity Homo sapiens are adapted for and what we are capable of. We have numerous features indicating that we evolved to be capable endurance athletes, particularly with respect to bipedal locomotion, and that this conferred several adaptive advantages (Bramble & Lieberman, 2004). Further, although we have lost much of the upper body locomotor specialization of our ancestors (Lovejoy, 2009), studies indicate our ancestors likely had well developed upper body musculature and thus physical capacity (Trinkaus et al., 2002). Lastly, as is evident from the field of exercise physiology, our bodies are highly plastic, with the ability to adapt to the demands placed upon them (Lieberman, 2012).

Clearly, with respect to the question of ‘what can we do?’, Homo sapiens evolved to be physically capable and able to perform quite a repertoire of physical activity patterns. But, this doesn’t necessarily answer the question of ‘what did we do?’ when it comes to the physical activity patterns of our evolutionary past.

We can learn a lot from our close relatives, other extant primate species, who interestingly (though with obvious variation across species) are probably not as physically active as many would expect. They spend the vast majority of their time resting, typically sitting, and relatively little time being what we would consider ‘physically active’ (Rose, 1973). It could be argued that primates are actually quite sedentary.

However, the reconstruction of the physical activity patterns of our human ancestors is what some have termed ‘Bio-archaeology’s Holy Grail’ (Jurmain et al., 2012). Understanding the volume, frequency, intensity of effort, and types/modalities of activities performed by extinct humans is not a simple endeavor. Studies of articular modifications, musculoskeletal stress markers, and skeletal robusticity and geometry, though informative, are certainly not something we can reliably use to answer the question of ‘what should we do?’ in order to enhance our longevity and health in the modern world. Indeed, some research would perhaps suggest we should avoid the types of activities our ancestors likely engaged in (Berger & Trinkaus, 1995).

That last point is worth expanding upon. In our evolutionary past, our physical activity was directed towards, and evolved to enable, things that would maximize our reproductive success: our evolutionary ‘fitness’, though not in the sense that many paleo fitness proponents use the term. Not all ancestral adaptations are good for us, and many involve trade-offs. Nor are novel modern behaviors that were never selected for necessarily bad for us.

Given our limited ability to truly understand the physical activity patterns of our extinct ancestors, studies of extant hunter-gatherers perhaps offer the most valuable insight into what physical activity patterns we should be following. Indeed, they typically are more active on the whole than people in modern industrialised populations (Eaton & Eaton, 2003) and spend at least some of their time performing activities at a more vigorous intensity of effort (Gurven et al., 2013). However, when it comes to the types and modalities of physical activity, these are highly variable and influenced by sexual division of labour, occupation duration, habitat quality, and hunting and logistical mobility (Grove, 2009).

Physical activity recommendations from national and international guidelines have historically been ‘volume-centric’, with a focus upon how much we should be doing. However, it has recently been argued that perhaps we should focus more on how hard the activity we perform is (Steele et al., 2017). We can’t know what our physical activity patterns truly were in our evolutionary past, but this approach perhaps best matches the physical activity patterns of modern hunter-gatherers, whose activity patterns are highly variable, undulating both within and between days. Indeed, the relationships between physical activity levels and physical fitness are poor in many cases (Lightfoot, 2013), making it unclear from an evolutionary perspective whether adaptations drove increased physical activity, or vice versa. Considering the variation in physical activity types and modalities, it is clear that recommendations to engage in any particular one may be folly. Yet it may not even matter, as long as such activities are of a sufficient intensity of effort: recent work suggests the physiological response, and perhaps then the stimulus, differs little between modalities (Steele et al., 2018).

It’s highly likely that some degree of mismatch exists between our modern environment and the physical activity levels we have evolved to perform. Yet, despite the lack of clarity as to exactly what the physical activity patterns of our past were, what kinds of recommendations could we broadly offer that fit with modern understandings of exercise science? Perhaps the following:

  1. Select a modality based upon personal preference (or sporting requirement) whilst considering the potential injury risks associated with it; consider the risk-reward ratio.
  2. Focus upon utilising a high intensity of effort (preferably maximal or near maximal) at least some of the time whilst performing low intensity of effort activity the majority of the time.

In all likelihood, it’s probably as simple as that.

References:

  1. Cordain L, et al. Physical activity, energy expenditure, and fitness: An evolutionary perspective. Int J Sports Med 1998;19:328-335
  2. O’Keefe J, & Cordain L. Cardiovascular disease resulting from a diet and lifestyle at odds with our Paleolithic genome: How to become a 21st-Century hunter-gatherer. Mayo Clin Proc 2004;79:101-108
  3. O’Keefe J, et al. Organic fitness: Physical activity consistent with our hunter-gatherer heritage. Phys Sportsmed 2010;37:11-18
  4. O’Keefe J, et al. Exercise like a hunter-gatherer: A prescription for organic physical fitness. Prog Cardiovasc Dis 2011;53:471-479
  5. Bramble DM, & Lieberman DE. Endurance running and the evolution of Homo. Nature 2004;432:345-352
  6. Lovejoy CO. Reexamining human origins in light of Ardipithecus ramidus. Science 2009;326:e1-8
  7. Trinkaus E, et al. Upper limb versus lower limb loading patterns among Near Eastern Middle Paleolithic hominids. In: Akazawa T, et al. Neanderthals and modern humans in western Asia. Springer: Boston, MA, 2002
  8. Lieberman DE. What are humans adapted for? Presented at the Ancestral Health Symposium 2012 – https://www.youtube.com/watch?v=Txrs-FLz64Y&t
  9. Rose MD. Quadrupedalism in primates. Primates 1973;14:334-357
  10. Jurmain R et al. Chapter 29. Bioarchaeology’s Holy Grail: The reconstruction of activity. In: Grauer AL. A Companion to Paleopathology, Blackwell 2012
  11. Berger TD, & Trinkaus E. Patterns of trauma among the Neandertals. J Archaeol Sci 1995;22:841-852
  12. Eaton SB, & Eaton SB. An evolutionary perspective on human physical activity: implications for health. Comp Biochem Physiol A Mol Integr Physiol 2003;136:153-159
  13. Gurven M et al. Physical activity and modernization among Bolivian Amerindians. PLoS One 2013;8:e55679
  14. Grove M. Hunter-gatherer movement patterns: Causes and constraints. J Anthropol Archaeol 2009;28:222-233
  15. Steele J et al. A higher effort-based paradigm in physical activity and exercise for public health: making the case for a greater emphasis on resistance training. BMC Public Health 2017;17:300
  16. Lightfoot JT. Why control activity? Evolutionary selection pressures affecting the development of physical activity genetic and biological regulation. BioMed Res Int 2013;821678:1-10
  17. Steele J et al. Similar acute physiological responses from effort and duration matched leg press and recumbent cycling tasks. PeerJ 2018;6:e4403

Evolutionary mismatch is now recognized as affecting many aspects of modern life; examples include diet, exercise, light exposure, chronic stress, sleep deprivation, electronic technology, drug abuse, mental health, spousal abuse, climate change, social inequity, politics, and economics1, 2, 3, 4, 5, 6, 7, 8, 9. It also has many implications for Public Health, often resulting in unnecessary suffering and death. The controversy surrounding vaccination is just one such example.

Vaccines have been undeniably successful at preventing many of the most dreaded and fatal diseases and represent one of the most important health discoveries ever made. One has to look no further than the eradication of smallpox, protection against polio, and the dramatically lower rates of many previously common diseases (e.g., measles, mumps, rubella, cholera, yellow fever) to appreciate their importance.

I hypothesize that the current anti-vaccination movement is a result, in part, of the innate cognitive biases inherent in our nervous systems that evolved to deal with problems in a very different premodern world. While such largely unconscious biases worked well for our distant ancestors in their environmental context, they are ill-suited to novel modern circumstances. Two such biases that today result in people making objectively irrational decisions are “discounting the future” and “loss aversion.”

“Discounting the future” makes us more likely to value immediate rewards over future benefits, and was on average beneficial to hunter-gatherers (e.g., it was best to drink and eat while water and food were available). However, today humans are faced with longer lives complicated by rapid cultural evolution. Ignoring Public Health interventions such as vaccinations (or stopping smoking, etc.) that greatly reduce future morbidity and mortality can only be described as irrational.

“Loss aversion” (the tendency to strongly prefer avoiding losses relative to acquiring gains) made sense in the ancestral context because it was better to err on the side of caution when dealing with predators, since a loss might mean death. Today this bias results in irrational decisions with people often avoiding even very minor “risks” (e.g., low probability of allergic reactions or slight inconvenience such as minor pain of an injection) rather than opting for a far greater likely gain (protection from serious disease).
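As a rough numeric illustration of how these two biases can combine to tip a decision, here is a small sketch that applies a hyperbolic discount function and a loss-aversion weight to a vaccination choice. The unit-free “value” numbers, the discount rate, and the delay are invented assumptions for illustration only; just the general shapes of the two biases come from the behavioral economics literature cited below (e.g., references 7 and 8).

```python
# Illustrative sketch only: how "discounting the future" and "loss aversion"
# can shrink the perceived value of vaccination relative to its objective value.
# Every number below is an invented assumption, not an empirical estimate.

def hyperbolic_discount(value, delay_years, k=1.0):
    """Present value under hyperbolic discounting: value / (1 + k * delay)."""
    return value / (1 + k * delay_years)

LOSS_WEIGHT = 2.25    # losses weighted roughly twice as heavily as gains

cost_now = 1.0        # minor pain / inconvenience of an injection, felt immediately
benefit_later = 50.0  # expected value of disease avoided, realized years from now
delay = 10.0          # assumed delay until the benefit is "felt"

objective_net = benefit_later - cost_now
perceived_net = hyperbolic_discount(benefit_later, delay) - LOSS_WEIGHT * cost_now

print(f"objective net benefit: {objective_net:+.1f}")
print(f"perceived net benefit: {perceived_net:+.1f}")
# With these assumptions the perceived benefit shrinks from +49 to about +2.3,
# and a steeper discount rate (larger k) pushes it below zero -- the point at
# which an objectively beneficial choice starts to feel not worth the bother.
```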

Likewise, humans evolved to respond most strongly to immediate threats or clear and present danger (e.g., predator attack) but not so strongly to something more remote that might or might not occur (e.g., infection by a fatal disease). Considering this perspective, vaccines that protect against potentially catastrophic but temporally or spatially remote and unfamiliar illnesses often do not generate a response proportional to their true benefit. In the modern context such biases predispose us to irrational decisions that can result in individuals or their loved ones having higher rates of avoidable disease or death.

Cognitive biases also interact with cultural mismatches in the anti-vaccine movement. Consider the opposition by some religious groups and politicians to vaccination against human papillomavirus (HPV) because of the fear of promiscuity. People can easily fall into the trap of “in-group” versus “out-group” or “us versus them” cognitive bias. Subsequently, once a party or religion or other group “decides” what is “correct,” there are psychological and social consequences (e.g., ostracism, expulsion from the group, loss of reputation) for individuals that go against the group mentality. This can lead to individuals making choices that ultimately cause harm to themselves and others. Thus, affiliation with a religion or political party that advocates ideas harmful to group members becomes an example of cultural mismatch. This is especially tragic in the case of HPV where the vaccine is now known to reduce cervical cancer risk (as well as the risk of throat and mouth cancer) and those not taking the vaccine have a much greater risk of unnecessary suffering and death.

How does one address these types of mismatch? The only approaches likely to succeed are those that consider both our inherent strengths and innate weaknesses, and that apply appropriate techniques to change minds and ultimately policies. Such strategies that have been suggested and sometimes applied in other fields (e.g., Behavioral Economics or in countering climate change denial) include: increasing awareness of our cognitive biases; improving choice architecture (i.e., design ways in which choices are presented to increase the probability of desirable outcomes); framing topics differently (e.g., focusing on what can be gained versus lost); encouraging people to contemplate the question, “What if I’m wrong?”; leveraging relevant social group norms; and minimizing time discounting and present bias (e.g., help people connect with their future selves or descendants)10, 11.

References:

  1. Lloyd, E., Wilson, D.S., & Sober, E. (2011). Evolutionary mismatch and what to do about it: A basic tutorial. Available at: https://evolution-institute.org/wp-content/uploads/2015/08/Mismatch-Sept-24-2011.pdf.
  2. Eaton, S.B., Konner, M., & Shostak, M. (1988). Stone agers in the fast lane: Chronic degenerative diseases in evolutionary perspective. American Journal of Medicine 84(4):739-749.
  3. Lieberman, D.E. (2013). The story of the human body: Evolution, health, and disease. New York: Pantheon Books Inc.
  4. Logan, A.C. & Jacka, F.N. (2014). Nutritional psychiatry research: An emerging discipline and its intersection with global urbanization, environmental challenges and the evolutionary mismatch. Journal of Physiological Anthropology. 33(1):22. doi: 10.1186/1880-6805-33-22.
  5. Pani, L. (2000). Is there an evolutionary mismatch between the normal physiology of the human dopaminergic system and current environmental conditions in industrialized countries? Molecular Psychiatry 5(5):467-475.
  6. Diggs, G.M. (2017). Evolutionary mismatch: Implications far beyond diet and exercise. Journal of Evolution and Health 2(1):3. doi: 10.15310/2334-3591.1057.
  7. Thaler, R.H. & Sunstein, C.R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven: Yale Univ. Press.
  8. Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus & Giroux.
  9. Gifford, R. (2011). The dragons of inaction: Psychological barriers that limit climate change mitigation and adaptation. American Psychologist 66(4):290-302.
  10. van der Linden, S., Maibach, E., & Leiserowitz, A. (2015). Improving public engagement with climate change: Five “best practice” insights from psychological science. Perspectives on Psychological Science 10:758-763.
  11. Ross, L., Arrow, K., Cialdini, R., Diamond-Smith, N., Diamond, J., Dunne, J., Feldman, M., Horn, R., Kennedy, D., Murphy, C., Pirages, D., Smith, K., York, R., & Ehrlich, P. (2016). The climate change challenge and barriers to the exercise of foresight intelligence. BioScience 66(5):363-370.

Tooth decay is the most prevalent disease of the modern age1. Unfortunately, too many people conflate commonality with normality. Tooth decay is not normal. What is different now versus then? Humans (as a genus) evolved over two million years ago in Africa. We ate whatever we could scavenge, kill, or gather. The addition of cereal grains such as wheat, corn, and barley did not begin until about 12,000 years ago. Combined with the current paradigm of food production, the human diet has changed faster than our ability to adapt.

Up until the age of agriculture, almost no one had a cavity2. Tooth enamel is the hardest substance in the human body and preserves well, so the fossil record gives insight into when decay first appeared. For further proof, look no further than the humble toothbrush. While Homo sapiens is a roughly 250,000-year-old experiment in evolution, the toothbrush is only about 3,000-5,000 years old3. Why do you suppose that is the case? The simple answer is that no one needs a toothbrush when eating a species-appropriate diet, because that is what our teeth evolved to eat.

While cooking dates to about 750,000 years ago in Africa, it became ubiquitous about 350,000 years ago4. That’s longer than we have been defined as a species. Cooked meat and marrow helped expand and encephalize the human brain. The Out of Africa theory5, the currently accepted model of human planetary conquest, states that we left Africa and followed the coastline, estuaries, and rivers until we were a firm presence on six of Earth’s seven continents. The long-chain ω-3 fatty acids and abundant iodine in seafood further grew the human brain6.

Whole foods such as game, tubers, greens, fruit in season, and seafood (including sea vegetables) are the foodstuffs that natural selection has equipped us to eat. This is the backbone of a species-appropriate diet for humans. Of the almost 6,000 species of toothed mammals, how many that eat their natural diet get tooth decay? The correct answer is none. Teeth evolved along with the rest of our bodies. While you can find other animals with occasional decay, notably bears and some gorillas7, it is usually because they found a source of human food. This screams at anyone paying attention that tooth decay is a disease of environmental mismatch. Realizing this makes fixing it easy, at least on paper: the solution is simple once you recognize the problem.

The standard American diet is high in starches, sugars, and processed food, and low in natural fats. We also live in a society that values work and busyness, which is why you see so many processed and prepared foods catering to “those busy lifestyles” we are all expected to lead. While my profession has done a great job of getting the public to understand that sugar is bad for teeth, our message has been one of oral hygiene and moderation. We have failed to communicate to the public that starches will also cause decay. My message is simple: explore where we evolved from and what we ate along the way. This will return us to our natural grain-free diet of meat, seafood, and leafy greens. Limit fruit to the time of year it is abundant where you live. This is how we can end tooth decay. This is what is species-appropriate for humans.

I could argue that the current model of dentistry in the United States is self-serving; a cynic may say the profession wants it this way. If a dentist is doing his job properly, he should be trying to put himself out of business. The appropriate metric for this is when you start to see dental schools closing. That actually started happening in the 1980s, but as decay rates have risen, the trend has reversed.

Evolution invites us to look back in order to advance forward. Look to a time when humans did not have decay and emulate that diet. Follow the circadian and circannual rhythms. Humans evolved in the Old World, at the equator8. We enlarged our brains eating meat and marrow from the animals that lived there. We then conquered the planet by walking the intertidal zones and estuaries, exploiting the resources we found along the way until we were on every continent and island. We were always rising with the sun and eating what we found locally. This is what we are hard-wired to do. If we return to it, the reward will be robust health, as well as making the best dentists poor.

Read the full Evolutionary Mismatch series:

  1. Introduction: Evolutionary Mismatch and What To Do About It by David Sloan Wilson
  2. Functional Frivolity: The Evolution and Development of the Human Brain Through Play by Aaron Blaisdell
  3. A Mother’s Mismatch: Why Cancer Has Deep Evolutionary Roots by Amy M. Boddy
  4. It’s Time To See the Light (Another Example of Evolutionary Mismatch) by Dan Pardi
  5. Generating Testable Hypotheses of Evolutionary Mismatch by Sudhindra Rao
  6. (Mis-) Communication in Medicine: A Preventive Way for Doctors to Preserve Effective Communication in Technologically-Evolved Healthcare Environments by Brent C. Pottenger
  7. The Darwinian Causes of Mental Illness by Eirik Garnas
  8. Is Cancer a Disease of Civilization? by Athena Aktipis
  9. The Potential Evolutionary Mismatches of Germicidal Ambient Lighting by Marcel Harmon
  10. Do We Sleep Better Than Our Ancestors? How Natural Selection and Modern Life Have Shaped Human Sleep by Charles Nunn and David Samson
  11. The Future of the Ancestral Health Movement by Hamilton M. Stapell
  12. Humans: Smart Enough to Create Processed Foods, Daft Enough to Eat Them by Ian Spreadbury

References:

  1. https://www.nidcr.nih.gov/research/data-statistics/dental-caries
  2. http://www.businessinsider.com/growing-crops-human-cavities-increase-2016-3
  3. https://www.colgateprofessional.com/patient-education/articles/history-of-toothbrushes-and-toothpastes
  4. Ungar, University of Arkansas, Conversations, March 2012
  5. https://www.sciencedaily.com/releases/2007/05/070509161829.htm
  6. https://www.livestrong.com/article/454495-symptoms-of-excessive-iodine-intake/
  7. http://www.telegraph.co.uk/news/science/science-news/12007929/Bear-who-inspired-Winnie-the-Pooh-had-tooth-decay-because-Christopher-Robin-fed-it-honey.html
  8. https://genographic.nationalgeographic.com/human-journey/

Richard Wrangham’s newest book, The Goodness Paradox, gets a lot right. The central thesis is that we are a self-domesticated species. We have bred ourselves for tameness, in the same way that we have bred our animal companions. The opposite of tameness is reactive aggression, or the tendency to lash out in a social confrontation. But there is another kind of aggression, the cool, calculated kind called proactive, that is also a hallmark of our species. Hence “the strange relationship between virtue and violence in human evolution”, which is the book’s subtitle.

The idea that we are a self-domesticated species has long roots but is experiencing a renaissance based on a now-classic study of silver foxes by the Russian scientists Dmitri Belyaev and Lyudmila Trut. The story is beautifully told in the book How To Tame a Fox and Build a Dog, co-authored by Trut and Lee Alan Dugatkin, who happens to be one of my former PhD students. Not only could tameness in silver foxes be selected in only a few generations, but a whole suite of other behavioral, physical, and life history traits also evolved as byproducts. Moreover, the same package of traits appears to evolve in all domesticated species. Thus, an important secondary theme of Wrangham’s book is that not all products of evolution require separate adaptive explanations, a point stressed by Stephen Jay Gould and Richard Lewontin in their classic “spandrels” article in 1979[1].

The existence of an entire syndrome of traits associated with domestication provides ways to test the hypothesis of self-domestication in humans and other species, such as bonobos vs. chimpanzees and species that inhabit islands compared to their continental ancestors. This allows Wrangham to be quite precise about when human self-domestication evolved. It is a hallmark of our entire species, Homo sapiens, compared to all other species in the Homo genus. Arguably, self-domestication is the reason why our species replaced those other species.

As one of the pre-eminent thinkers about primate and human evolution, Wrangham does an excellent job addressing all four questions that must be asked to fully explain any product of evolution, concerning their function (if any), phylogeny, mechanism, and development[2]. He is also one of the most lucid writers for a general audience. Hence, I warmly recommend The Goodness Paradox to experts and laypeople alike. I learned a lot from it and think that you will also.

But there is one thing that Wrangham gets wrong. He thinks that he can develop his thesis without invoking group selection, when he is invoking group selection in every way except using the words.

Group selection is the evolution of traits based on the differential survival and reproduction of groups in a multi-group population, as opposed to the differential survival and reproduction of individuals within groups. It is famously controversial. Among the scientists cited by Wrangham — such as John Alcock, Richard Alexander, Scott Atran, David Barash, Paul Bingham, Christopher Boehm, Samuel Bowles, Elizabeth Cashdan, Timothy Clutton-Brock, Leda Cosmides, Jerry Coyne, Paul Crook, Martin Daly, Richard Dawkins, Lee Dugatkin, Frans de Waal, Andrew Gardner, Herbert Gintis, Ashley Griffin, Jonathan Haidt, Marc Hauser, Joseph Henrich, Robert Hinde, Dominic Johnson, Martin Nowak, Daijiro Okada, Steve Pinker, Anne Pusey, Matthew Ridley, Robert Sapolsky, Michael Shermer, Elliott Sober, Corina Tarnita, Ian Tattersall, Michael Tomasello, John Tooby, Carel van Schaik, Stuart West, E.O. Wilson, Margo Wilson, and Robert Wright — some rely heavily upon group selection, others reject it, and still others treat group selection as equivalent to other theories of social evolution, the difference being a matter of perspective rather than the invocation of different causal processes[3].

***

Before critiquing Wrangham’s treatment of group selection, it is important to be precise about the definition of terms. The best way to do this is to briefly review what all theories of social evolution share in common.

Consider the evolution of a nonsocial trait such as coloration in desert-living species. Individuals vary in their coloration, those that match their background are more fit, and coloration is partially heritable. The result: individuals that impressively match their background.

Now consider the evolution of a social trait such as docility versus aggression. Aggressive individuals pick fights with others while docile individuals avoid fights. To model the evolution of these alternative traits, we must assign fitness values to them. But, unlike with solitary traits, we cannot do this solely on the basis of each individual’s own properties. The fitness of either type depends upon the other individuals with whom they socially interact. This makes the study of social behavior more complicated than the study of solitary traits.

Any study of social evolution must say something about the structure of social interactions.

This is true for a verbal model, a mathematical or computer simulation model, or an empirical study that aims to understand how social behaviors evolve. However, mathematical and computer simulation models have the virtue of being precise about their assumptions. For example, N-person game theory assumes that individuals socially interact in groups of size N. The fitness of any individual depends upon its trait value (such as aggressive vs. docile) and the trait values of the other members of its group. The simplest N-person game theory models assume a very large number of groups, the random distribution of individuals into groups, and the dissolution of the groups after a single round of social interactions.

Elaborated models consider non-random distributions of individuals, multiple interactions within groups, different patterns of dispersal, and so on. The details depend upon the biological details of the organism being modeled. If the real organisms interact in pairs, then only 2-person game theory will do. If they interact with genetic relatives or with partners chosen on the basis of previous experience, then the assumption of random interactions won’t do. If the groups persist indefinitely and trade a fraction of dispersers every generation, then the assumption of ephemeral groups won’t do. It is the biology of real-world organisms that decides the details of any given model!

No matter what the details, all models of social evolution share the following features.

1) The sets of socially interacting individuals that influence each other’s fitness (the N in N-person game theory) are small compared to the total evolving population. This means that most evolving populations are populations of groups in addition to populations of individuals within groups—such as fish schools, bird flocks, primate troops, and human tribes. Sometimes the groups have discrete boundaries but sometimes they are neighborhoods, such as plants that interact only with their immediate neighbors. The important common denominator is social interactions that are local compared to the size of the total evolving population.

2) Selection among individuals within each group tends to favor traits that would be called disruptively self-serving in human terms, such as aggression compared to docility. In N-person game theory, virtually all of the traits called altruistic or cooperative are selectively disadvantageous within the groups of size N. Even the tit-for-tat strategy of 2-person game theory, which starts out nice and thereafter imitates the previous play of its partner, never beats its partner; it can only draw or lose (a short sketch after this list illustrates the point).

3) If social traits that are variously called altruistic, cooperative, mutualistic, and prosocial cannot evolve by within-group selection, then they require the differential survival and productivity of groups in a multi-group population. As I put it in my 2007 article with E. O. Wilson titled “Rethinking the Theoretical Foundation of Sociobiology”, selfishness beats altruism within groups, altruistic groups beat selfish groups, and everything else is commentary[4]. In 2-person game theory, pairs of altruists do better than mixed pairs, which in turn do better than pairs of selfish individuals. This between-group advantage for altruism can override its within-group disadvantage, especially if the distribution of individuals into groups is more assortative than random.
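As promised under point 2, here is a small sketch (again mine, using the standard prisoner's dilemma payoffs of 5, 3, 1, and 0) showing that tit-for-tat never out-scores whichever partner it is paired with; it can only draw or lose the individual match.

```python
# Standard prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

# Each strategy maps (my_history, their_history) to 'C' or 'D'.
def tit_for_tat(mine, theirs): return theirs[-1] if theirs else 'C'
def always_defect(mine, theirs): return 'D'
def always_cooperate(mine, theirs): return 'C'
def alternator(mine, theirs): return 'CD'[len(mine) % 2]

def play(s1, s2, rounds=100):
    """Score one repeated match between two strategies."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFFS[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

# Tit-for-tat never out-scores its own partner: it draws or loses each match.
for opponent in (always_defect, always_cooperate, alternator, tit_for_tat):
    print(f"vs {opponent.__name__:16s}", play(tit_for_tat, opponent))
```

Against an unconditional defector it loses by a small margin, and against cooperators it only draws; its famous tournament success comes from accumulating good outcomes across many pairings, not from beating any single partner.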

Notice that these three features apply to all models of social evolution, no matter what they are called. Moreover, all models of social evolution must conform to the biological details of the social traits being modeled. Otherwise they will simply arrive at the wrong answer. The definition of groups, and the larger population structure whereby groups form and dissolve, are not arbitrary. They must be tailored to each and every social trait being modeled. This is why I coined the term “trait-group” in my first article on group selection in 1975[5].

Against this background, we can define “individual selection” as “selection among individuals within groups” and “group selection” as “selection among groups in a multi-group population”. These are the definitions that are used in virtually all explicit group selection models. They also capture what Darwin meant when he famously wrote “although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, yet that an advancement in the standard of morality and an increase in the number of well-endowed men will certainly give an immense advantage to one tribe over another[6].”
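With those definitions in hand, both levels of selection can be made explicit in a few lines. The sketch below (my own toy numbers, in the spirit of a trait-group model rather than a reconstruction of any particular published one) forms many groups at random, computes fitnesses within each, and then reports the average within-group change in the frequency of altruism alongside the change in its global frequency once groups contribute offspring in proportion to their productivity.

```python
import random

# Illustrative parameters only (my own toy numbers).
N_GROUPS, N = 2000, 10
b, c, BASE = 15.0, 1.0, 1.0   # benefit shared group-wide, cost to the altruist

def fitnesses(k):
    """Altruist and selfish fitness in a group containing k altruists.

    Each altruist pays c; its benefit b is shared by all N members,
    including itself."""
    w_altruist = BASE - c + b * k / N
    w_selfish = BASE + b * k / N
    return w_altruist, w_selfish

random.seed(0)
groups = [sum(random.random() < 0.5 for _ in range(N)) for _ in range(N_GROUPS)]
p_before = sum(groups) / (N_GROUPS * N)

within_changes, offspring_A, offspring_total = [], 0.0, 0.0
for k in groups:
    w_a, w_s = fitnesses(k)
    group_output = k * w_a + (N - k) * w_s      # productivity of this group
    offspring_A += k * w_a
    offspring_total += group_output
    if 0 < k < N:
        # Inside every mixed group the altruist frequency falls,
        # because w_s exceeds w_a by exactly c.
        within_changes.append(k * w_a / group_output - k / N)

p_after = offspring_A / offspring_total

print(f"average change within mixed groups: {sum(within_changes)/len(within_changes):+.4f}")
print(f"change in the global frequency    : {p_after - p_before:+.4f}")
```

With these numbers, altruists lose ground inside every mixed group yet gain ground in the population as a whole; with a stingier benefit, or a benefit shared only with groupmates other than the donor, the same reversal would require a non-random, assortative distribution of individuals into groups.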

****

Now I am in a position to advance my claim that Wrangham rejects group selection in his own mind but, in developing his thesis, invokes it in every way except using the words. He directly discusses group selection three times in the main text. The first is when he describes a theoretical model of warfare by Jung-Kyoo Choi and Samuel Bowles (location 2410 of the Kindle edition)[7]. The details need not concern us, other than to say that they conform to Darwin’s scenario and the three features of all models of social evolution just listed. Wrangham accepts it as a group selection model but rejects it for not getting the biological details of human warfare in hunter-gatherer groups right. Fair enough. I have myself stressed that the model must match the biology.

The second mention of group selection is in the following passage (3763):

Group selection theory suggests that self-sacrifice by an individual can be favored over evolutionary time if it provides sufficiently large benefits to the individual’s group, which normally means a social breeding unit such as a hunter-gatherer band. Very often, however, the group that benefits from an individual’s generosity is not a social breeding unit. As Robert Graves’s recollection of his school days reminds us, the beneficiaries might be only a subgroup of a given social network. In the group as a whole, moral behavior might benefit some individuals at the expense of others.

Robert Graves’s recollection was of his school days, when he and his friends would never cheat each other but thought nothing of cheating their teachers. Here, Wrangham assumes that group selection models have some fixed definition of groups, rather than groups being defined in reference to each trait. For behavior expressed among school chums, the group of chums is the salient group.

Here is Wrangham’s third mention of group selection (5002):

Group selection is commonly invoked to explain our species’s interest in nonrelatives and our occasional willingness to sacrifice our own interests on behalf of a larger good. Group selection theory, however, has never quite been able to explain how benefits at the group level override those of individuals. The theory that the moral senses evolved to protect individuals from the socially powerful suggests that group selection might be unnecessary for explaining why we are such a group-oriented species. Our deference to the coalitionary powers within our own groups leads to a reduced intensity of competition, enabling groups to thrive.

This passage contains two errors. The first is to suppose that the counterforce to within-group selection is mysterious, nebulous, or necessarily weak. This is certainly not the case for formal mathematical and computer simulation models, where between-group selection is as precisely specified as within-group selection. As we shall see, it isn’t true for verbal models or empirical studies of social behavior either.

The second error is for Wrangham to assume that his own way of thinking described in the second half of the paragraph differs from his description of group selection in the first half. Let’s consider his own account in more detail.

****

Wrangham is not a mathematical or computer simulation modeler, but that doesn’t matter. At the beginning of this essay I stated that any study of social evolution must say something about the structure of social interactions. Formal models have the virtue of being precise about their assumptions, but verbal models based on extensive field experience have the virtue of being realistic—the kind of realism that should be the starting point of the more formal models.

Let’s begin with the behavior of chimpanzees, which Wrangham has studied extensively in the wild. Populations are subdivided into communities of a few dozen individuals. Males remain within their natal groups while females move. Males defend the boundaries of their territories against the males of adjacent communities.

Some behaviors expressed by individuals have an impact on the whole community. However, other behaviors are more limited in their impact, such as dyadic interactions or competing cliques within the community. Thus, the community is not a one-size-fits-all group for chimpanzees, any more than a band is a one-size-fits-all group for hunter-gatherers.

Despite these complexities, some chimpanzee behaviors are easy enough to interpret from a multilevel evolutionary perspective. Take reactive aggression for example. It is far, far more common in chimps than in human groups and clearly benefits the aggressor compared to other members of the same community. It is a form of disruptive selfishness, favored by within-group selection, pure and simple.

While humans are far less reactively aggressive and far more proactively aggressive than chimps, chimps do display proactive aggression to a degree, especially in their behavior directed against members of other communities. Here is one example described by Wrangham (4149).

In a few primate species (such as chimpanzees), infanticide occurs for reasons other than sexual selection. Male chimpanzees who encounter mothers from neighboring communities tend to attack them and can severely wound or kill their small infants. In this case, the protagonists are unlikely to meet again, so there is little chance of the killer’s fathering the female’s next infant. The traditional sexual selection theory, therefore, does not apply. Possibly, the killers benefit by intimidating the female into avoiding the area, leaving more food for the killer’s community. Alternatively, the attackers might gain by killing male infants that would otherwise grow up in the neighboring community to become future opponents. Further observations will eventually test ideas.

Let’s give Wrangham the benefit of the doubt and assume that his interpretation is correct. In the traditional sexual selection theory of infanticide, males that kill infants within their own community father more offspring than males who don’t. That’s a case of disruptive within-group selection pure and simple, which is bad for the group. But harming a female from another community is a different matter. The benefits do not flow to the males inflicting the harm, but to their entire community, in the form of an expanded territory and fewer males in the adjacent community.

The same is true for killing adult males of adjacent territories, as described by Wrangham in this passage:

The attacks cost little for the attackers, but by eliminating rivals they benefit their own community. In Kibale’s Ngogo community, John Mitani and David Watts’s team recorded instances when males killed or fatally wounded eighteen members of neighboring communities during a period of ten years. The Ngogo community then expanded their territory into the area where most of the kills occurred. In Gombe, Anne Pusey and her colleagues have shown that, when the territory occupied by a community increases in size, community members are better fed, breed faster, and survive better. Kill some neighbors, expand the territory, get more food, have more babies—and be safer at the same time, since there are fewer neighbors who might be able to attack you.

If Wrangham is correct in his interpretation, then by his own account he is describing a case of between-group selection. The proactively aggressive behavior provides a benefit for the whole community at a cost to the aggressors. Wrangham makes much of the fact that the individual cost of killing is not large because many are ganging up against one. Still, the cost is probably something, and even if it were zero the proactively aggressive behavior would be neutral with respect to within-group selection. The benefit remains at the group level. The fact that the cost of providing a group-level benefit is low makes group selection more plausible, because it is not strongly opposed by selection within groups.

Notice also that Wrangham is able to describe the group-level benefits as clearly as the individual-level costs. There is nothing mysterious, nebulous, or necessarily weak about expanding the territory of a community over a period of years.

To summarize, even before we get to human self-domestication, Wrangham is explaining reactive aggression as a product of within-group selection and proactive aggression as a product of between-group selection in chimpanzees. In the latter case, he is invoking group selection in every way except using the words.

****

Before proceeding to the human case, it is necessary to return to theoretical models. I have already shown that modeling the evolution of social behaviors is more complicated than modeling the evolution of solitary behaviors. Modeling the evolution of social control mechanisms is more complicated still.

Let’s begin with a standard N-person game theory model of altruism and selfishness. Now let’s introduce a second trait. Some individuals punish selfish members of the group, while others allow selfishness to go unpunished. This creates four combinations of individuals in any given group: selfish punishers (SP), selfish non-punishers (SN), altruistic punishers (AP), and altruistic non-punishers (AN). If there are enough punishers in a group, then selfishness no longer beats altruism. However, non-punishers enjoy the benefits of social control provided by the punishers without paying the cost. We haven’t solved the problem of altruism, but merely relocated it from the originally altruistic trait to the punishment trait. A rich literature exists on this topic using phrases such as “altruistic punishment” and “second-order public goods”. One fascinating result explored by another of my former PhD students, Omar Eldakar, is that of the four combinations of individuals, group selection can result in a mix of selfish punishers and altruistic non-punishers. Altruistic punishers go extinct because they pay a double cost—the cost of being an altruist and the cost of being a punisher. Selfish non-punishers are held at a low frequency by the selfish punishers. It’s as if the benefits of selfishness become a payment for the cost of being a punisher[8]!
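Here is a one-round payoff sketch of that four-type setup (my own illustrative parameters; this is not Eldakar's model and does not reproduce his dynamic result). It shows the two facts the paragraph relies on: with enough punishers present, selfishness no longer beats altruism within the group, yet for either cooperation type the non-punisher out-earns the punisher, which is where the second-order problem comes from.

```python
from itertools import product

# Toy parameters for illustration only (not Eldakar's model).
N, BASE = 10, 10.0
b, c = 4.0, 1.0   # public-good benefit (shared group-wide) and its cost
f, e = 1.0, 0.3   # fine per punisher hitting a selfish member; cost per act of punishing

def payoff(altruist, punisher, others):
    """One-round payoff for a focal individual given the N-1 `others`.

    `others` is a list of (altruist, punisher) tuples.  Every punisher
    fines every selfish member of the group other than itself."""
    n_altruists = sum(a for a, _ in others) + altruist
    n_other_punishers = sum(p for _, p in others)
    n_other_selfish = sum(not a for a, _ in others)
    pay = BASE + b * n_altruists / N        # everyone shares the public good
    if altruist:
        pay -= c                            # first-order cost of contributing
    else:
        pay -= f * n_other_punishers        # fined by every other punisher
    if punisher:
        pay -= e * n_other_selfish          # second-order cost of enforcing
    return pay

# A group with enough punishers that selfishness no longer beats altruism ...
others = [(True, True)] * 5 + [(True, False)] * 3 + [(False, False)]
for a, p in product((True, False), repeat=2):
    who = ('AP', 'AN', 'SP', 'SN')[(not a) * 2 + (not p)]
    print(who, round(payoff(a, p, others), 2))
# ... yet AN out-earns AP and SN out-earns SP: the cost of altruism has
# merely been relocated to the punishment trait.
```

Note also that shrinking the enforcement cost e shrinks the within-group penalty on punishers, which anticipates the point made below about cheap punishment strengthening between-group selection relative to within-group selection.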

Against this background, we can consider the central premise of Wrangham’s book: that self-domestication evolved in our species because individuals who could not control their aggressive impulses were executed. Or, more precisely, execution is a necessary arrow in an entire quiver of social control mechanisms that begins with mild sanctions such as gossip and escalates as needed.

Based on what I have said about social control mechanisms as second-order public goods, it should no longer surprise the reader that the many examples provided by Wrangham invoke group selection in every way except using the words. Here is one example (2251).

Prior to Homo sapiens, Marean suggests, humans lived at low density in small societies, like chimpanzees. Then one population, which he thought might have lived on the southern African coast, developed an ability to gather and hunt so well that their food resources became far more productive. The population naturally grew to the point where there was competition over the food supply, and soon groups were fighting over the best territories. Success in war became imperative. Groups accordingly allied with one another, giving rise to large societies of the type that hunter-gatherers form today. Cooperation among warriors within groups was so vital for winning conflicts that it evolved to become the basis of humans’ exceptional propensity for mutual aid. Sociality became more complex, learning became more vital, and culture became richer.

Could there be a clearer description of group-level selection? It is little different from Darwin’s original speculation or the intent of the Choi and Bowles model of warfare. Wrangham makes much of the fact that, with the advent of language and weaponry, punishing deviant behavior became so effective that groups became “a tyranny of cousins” (2777) and “like a boardroom without a chairman” (2802) in enforcing their norms. He can’t seem to see what every model of social control concludes: the lower the cost of punishment, the stronger between-group selection is relative to within-group selection.

****

The reason that I listed forty-one scientists by name at the beginning of this essay is to stress the magnitude of the problem that I have examined in detail for Wrangham’s book. Every one of those authors discusses the three features shared by all models of social evolution in their writing. They have no choice if they want to be biologically realistic. Some of the authors recognize between-group selection when they see it, including Christopher Boehm, who pioneered the concept of reverse dominance as an important factor in human evolution. Peter Turchin, whose 2015 book Ultrasociety: How 10,000 Years of War Made Humans the Greatest Cooperators on Earth covers much of the same ground as Wrangham’s book, says this: “The central breakthrough in this new field is the theory of cultural multilevel selection.”

Other authors on the list, like Wrangham, manage to describe group selection as a failed concept even when it is staring them in the face. My respect for Wrangham, and my praise for The Goodness Paradox in every other respect, are genuine, but someday historians will look back in wonderment at how otherwise smart people, who were part of the same scientific community, managed to remain so divided in their own minds about the importance of group selection in human evolution.

References:

[1] Gould, S. J., & Lewontin, R. C. (1979). The spandrels of San Marco and the Panglossian paradigm: A critique of the adaptationist programme. Proceedings of the Royal Society of London B, 205(1161), 581–598.


Children are designed, by natural selection, to educate themselves. Their curiosity, playfulness, sociability, and natural drive to emulate their elders were shaped, biologically, to serve the function of education. From a biological or anthropological vantage point, education can be defined as cultural transmission. We are the cultural animal. Our survival depends on the ability of each new generation of individuals to acquire and build upon the skills, knowledge, beliefs, and mores—in short, the culture—of the previous generation. Over all of human history, children who failed to acquire a sufficient amount of the culture would have been at a great survival and reproduction disadvantage. The selection pressure for self-educative instincts was strong.

We see the power of these self-educative instincts when we observe how much children learn before they are old enough for school. Through their own efforts, they learn from scratch their native language and, through language as well as direct observation and exploration, they acquire an enormous amount of knowledge of their physical and social worlds. By the time they start school, they already know a large portion of what they will ever know. Anthropologists who have observed hunter-gatherer cultures have described how these natural drives continue in older children who have no school.1 Children and young teens are free, in such cultures, to play and explore essentially all day long, every day. They play at hunting, gathering, tool making, and all of the other skills that are essential for success in their culture, not because anyone forces or encourages them to, but because they look around and see that these are skills that their culture values, and their instincts lead them to play at what is valued.

Schooling, in contrast to education, is a cultural development. We think of schools as the places where “education” occurs, but, in fact, schools arose to serve a very narrow educational purpose. The first widespread systems of compulsory schools were church-run Protestant schools, beginning in the 17th century.2 Their clearly stated purposes were indoctrination (in Biblical doctrine) and obedience training. This was a time when people believed that obedience to lords and masters and acceptance of Biblical doctrine were essential to earthly survival and heavenly salvation. Over time, as the power of religion declined and that of states increased, these schools were taken over by states. The lessons to be memorized became more secular and the methods of obedience training became more psychological, less corporal, but the basic purposes and structures of schools did not change. Even today, students who do as the teacher directs and memorize what the teacher tells them to memorize will pass, and those who try to create their own path will fail.

The basic structure of standard schooling requires that children’s natural self-educative instincts be suppressed. Curiosity is suppressed, because it would lead all students to go off in different directions and fail to learn the prescribed curriculum, and would create chaos. Play, if it is allowed at all, becomes recess—a break from learning rather than a means of learning. Socializing (sharing of knowledge) becomes cheating, and it, too, would cause chaos. The opportunity to learn by observing those who are farther along is suppressed by segregating children by age. It is no wonder that children have always been unhappy in school and come to see learning as tedious rather than joyful.

My research and that of others has shown how we could resolve this mismatch between children’s educative instincts and schools.3 I have studied education in settings designed for self-directed education. For legal purposes, these settings (such as the Sudbury Valley School) are called schools, but they are almost the opposite of what we usually think of as schools. Children and teens there are not segregated by age and are allowed to play, explore, and pursue their own interests in any way they choose, all day, just as young hunter-gatherers are. The research shows that children in such settings acquire knowledge and skills essential to our culture, including literacy and numeracy, in the same basic ways by which hunter-gatherer children acquired knowledge and skills essential to their culture. The research also reveals that children in such settings develop passionate interests, through their self-directed play and exploration, that often lead directly to successful, enjoyable adult careers. Such schools operate on a per-student budget well below that of our compulsory public schools. The only forces preventing us from resolving this tragic mismatch between education and schooling are ignorance and cultural inertia.

References:

1 Gray, P. (2012). The value of a play-filled childhood in development of the hunter-gatherer individual. In Narvaez, D., Panksepp, J., Schore, A., & Gleason, T. (Eds.), Evolution, early experience and human development: from research to practice and policy, pp 252-370. New York: Oxford University Press. Also: Hewlett, B. S., Fouts, H. N., Boyette, A., & Hewlett, B. L. (2011). Social learning among Congo Basin hunter-gatherers. Philosophical Transactions of the Royal Society B, 366, 1168-1178.

2 Mulhern, J. (1959). A history of education: A social interpretation, 2nd ed. New York: Ronald Press. Also: Melton, J. V. H. (1988). Absolutism and the eighteenth-century origins of compulsory schooling in Prussia and Austria. Cambridge: Cambridge University Press.

3 Gray, P. (2017). Self-directed education—unschooling and democratic schooling. In G. Noblit (Ed.), Oxford research encyclopedia of education. New York: Oxford University Press. Available online at http://education.oxfordre.com/view/10.1093/acrefore/9780190264093.001.0001/acrefore-9780190264093-e-80  Also: Gray, P. (2016). Children’s natural ways of learning still work—even for the three Rs. In D. C. Geary & D. B. Berch (eds), Evolutionary perspectives on child development and education (pp 63-93). Springer. Also: Gray, P. (2013), Free to learn: why unleashing the instinct to play will make our children happier, more self-reliant, and better students for life. Basic Books.


I spend a lot of time thinking about food and health and have a suspicion that we’ve missed a crucial over-arching principle: “food should be alive” – or at least recently so. During our evolution ‘food’ meant anything with nutritional value that we could get our paws on and that tasted okay without acutely poisoning us. Something that’s often overlooked as people argue over optimal foods for health is that living hunter-gatherers and similar populations eat widely varying proportions of fat, protein, and carbohydrate1, but all show a near-absence of non-communicable diseases (NCDs), even with excess food availability2.

One thing that ‘ancestral’ diets did have in common was that they contained almost no milled grains (flours) or refined sugars. These foods are universally eaten now, so nutritional studies rarely test their presence vs. absence. However, the clues are there: low-carbohydrate diets help with obesity and type-two diabetes3, and similar benefits against overweight and diabetes have been reported for higher-carbohydrate diets that exclude flour and sugar4,5. The degree of processing appears to be what matters.

The question is, if flour and sugar are important agents of chronic disease, how can they do this when carbohydrate from root vegetables and fruit does not? After all, if you eat 70% of your calories as unprocessed carbohydrate, like a Kitavan2, this is surely going to produce considerable blood glucose and insulin responses. Perhaps the only place where flour and sugar properly differ from starchy vegetables is during digestion, while still inside the tube of the gut…

At this point, we should touch on some mechanics of obesity gleaned from animal models. Almost all mouse obesity models require the mice to have gut bacteria; mice with a sterile gut won’t overeat. When overeating begins, inflammatory markers are seen in the small bowel (not the colon, despite the colon having manifold more bacteria)6. This inflammation interferes with fullness-sensing nerves7, and similar inflammatory changes then occur in the energy-regulatory areas of the brain, causing the mice to overeat.

If bacterial-driven inflammation in the small bowel is indeed a crucial step in altering the brain’s regulation of body weight, then the role of flour/sugar in overweight starts to look very interesting indeed8. Plant cells store carbohydrates at no more than 20-25% by weight (remember, life is ~80% water). Flours (milled grass seeds) and sugar are far denser, and so potentially a far richer growth medium for microbes. Plant cells differ from flour/sugar in other ways too – they’re not keen on being dinner. They’ve survived hundreds of millions of years of evolutionary conflict with bacteria trying to get at their carbohydrate stores, and likely come armed for a molecular fight. The microbial ecosystem of a healthy small bowel presumably has niches occupied by interdependent species carrying out different tasks upon the semi-digested cells of periodically arriving food. Getting the nutrients before host absorption likely involves much sequential cooperation between species, creating an ecosystem with a complexity that may rival that of a healthy river.

In contrast, a small bowel ecosystem regularly fed a Western diet of flour/sugar could be expected to be very different. Using the river analogy, easy excess nutrients favor the species best able to capitalize, reducing ecosystem diversity much as fertilizer run-off feeds an algal bloom. Pathogens may prosper, and work to breach host defences9. This is all happening right next to the lymphatic tissue where the bulk of the body’s immune cells are to be found. Changes here might explain the pro-inflammatory nature of Western diets, and the improvements in autoimmune conditions many report after eliminating flour and sugar. It should be emphasized that there has so far been relatively little study of small bowel microbes; the reasoning above is why one might expect study here to be highly informative.

We’ve long suspected that natural foods are better for health, but there are now clues that living foods might play a crucial part in a delicately evolved dance between host, microbe, and food that has played out for millions of years. This dance needs all three dancers to be up to speed on the steps. The sudden replacement of one element with a slurry of flour might trigger a cascade of microbial changes that lead to overweight or disease in the susceptible. The theoretical approaches we should adopt to correct this mismatch are already well known to be effective, and in some countries are already the basis of public health advice: “Unprocessed foods are better for health”. Usefully, a diet of only unprocessed foods may even produce remission of some chronic non-communicable diseases.

Read the full Evolutionary Mismatch series:

  1. Introduction: Evolutionary Mismatch and What To Do About It by David Sloan Wilson
  2. Functional Frivolity: The Evolution and Development of the Human Brain Through Play by Aaron Blaisdell
  3. A Mother’s Mismatch: Why Cancer Has Deep Evolutionary Roots by Amy M. Boddy
  4. It’s Time To See the Light (Another Example of Evolutionary Mismatch) by Dan Pardi
  5. Generating Testable Hypotheses of Evolutionary Mismatch by Sudhindra Rao
  6. (Mis-) Communication in Medicine: A Preventive Way for Doctors to Preserve Effective Communication in Technologically-Evolved Healthcare Environments by Brent C. Pottenger
  7. The Darwinian Causes of Mental Illness by Eirik Garnas
  8. Is Cancer a Disease of Civilization? by Athena Aktipis
  9. The Potential Evolutionary Mismatches of Germicidal Ambient Lighting by Marcel Harmon
  10. Do We Sleep Better Than Our Ancestors? How Natural Selection and Modern Life Have Shaped Human Sleep by Charles Nunn and David Samson
  11. The Future of the Ancestral Health Movement by Hamilton M. Stapell
  12. Humans: Smart Enough to Create Processed Foods, Daft Enough to Eat Them by Ian Spreadbury

References:

  1. Strohle A, Hahn A, Sebastian A. Latitude, local ecology, and hunter gatherer dietary acid load: implications from evolutionary ecology. Am J Clin Nutr. 2010;92(4):940–945.
  2. Lindeberg S. Food and Western Disease: Health and Nutrition from an Evolutionary Perspective. Oxford: Wiley-Blackwell; 2010.
  3. Feinman R, R Sundberg et al. Dietary carbohydrate restriction as the first approach in diabetes management: critical review and evidence base. Nutrition. Jan 2015. 31 (1), 1-13.
  4. Lindeberg S, Jonsson T, Granfeldt Y, et al. A Palaeolithic diet improves glucose tolerance more than a Mediterranean-like diet in individuals with ischaemic heart disease. Diabetologia. 2007;50(9):1795–1807.
  5. Gardner CD, Trepanowski JF, Del Gobbo LC, Hauser ME, Rigdon J, Ioannidis JPA, Desai M, King AC. Effect of Low-Fat vs Low-Carbohydrate Diet on 12-Month Weight Loss in Overweight Adults and the Association With Genotype Pattern or Insulin Secretion: The DIETFITS Randomized Clinical Trial. JAMA. 2018;319(7):667–679.
  6. Ding S, Chi MM, Scull BP, et al. High-fat diet: bacteria interactions promote intestinal inflammation which precedes and correlates with obesity and insulin resistance in mouse. PLoS One. 2010;5(8): e12191.
  7. de Lartigue G, Barbier de la Serre C, Espero E, Lee J, Raybould HE. Diet-induced obesity leads to the development of leptin resistance in vagal afferent neurons. Am J Physiol Endocrinol Metab. 2011;301(1):E187–E195.
  8. Spreadbury I. Comparison with ancestral diets suggests dense acellular carbohydrates promote an inflammatory microbiota, and may be the primary dietary cause of leptin resistance and obesity. Diabetes Metab Syndr Obes. 2012;5:175-89.
  9. S. Manfredo Vieira, M. Hiltensperger, V. Kumar, D. Zegarra-Ruiz, C. Dehner, N. Khan, F. R. C. Costa, E. Tiniakou, T. Greiling, W. Ruff, A. Barbieri, C. Kriegel, S. S. Mehta, J. R. Knight, D. Jain, A. L. Goodman, M. A. Kriegel. Translocation of a gut pathobiont drives autoimmunity in mice and humans. Science 09 Mar 2018 : 1156-1161.

The current ancestral health movement is often thought to be on the verge of going mainstream, with many believing this will lead to positive health and financial outcomes for both individuals and society as a whole. However, the transition from a relatively small, highly-devoted group of adherents to a mass following will be far more difficult than commonly assumed. In fact, there are at least three main obstacles to the paleo movement becoming a mass phenomenon.

First, Neolithic foods are tightly woven into the fabric of our culture. More than anything else, grains, legumes, and dairy allowed early populations to expand, and they have sustained increasingly larger populations over the past 10,000 years. It was the invention of agriculture, and the consumption of Neolithic foods, that directly led to the development of civilization, including such things as the division of labor, the accumulation of wealth, greater social hierarchy, and new forms of technology. It is no exaggeration to say that human civilization was literally founded on, and continues to be based on, Neolithic foods. Because of this important and undeniable link, it will be extremely difficult to remove grains, legumes, and dairy from our daily lives.

Second, Neolithic and industrially-processed foods – and simple carbohydrates in particular – appear to be addictive. As a result, giving up grains, legumes, and dairy represents a real physical or physiological challenge. These foods appear to be so potentially addictive for three interrelated reasons. First, they taste good. Just think of the way warm, fresh-baked chocolate chip cookies smell and taste. Whether we like it or not, sweets are highly appealing, and we often crave them. For example, a study from 2006 shows that the main reward and pleasure center in the brain lights up more intensely for foods like chocolate cake and pizza than for blander foods like vegetables.[1] Second, some individuals may also become addicted to Neolithic and processed foods because they tap into a real evolutionary need. Specifically, in the scarce environment of our ancestral past, having a preference for highly sweet and fatty foods had real survival and reproductive advantages.[2] Third, food manufacturers today know all about these “pre-programmed” preferences, and do everything they can to exploit them. Multinational corporations literally spend billions of dollars to make foods hyper-palatable and to keep us coming back for more.[3] The addictive nature of these foods is even more problematic when you consider that the typical American diet consists of 70% Neolithic or industrial foods, which include cereals, dairy products, refined sugars, refined vegetable oils, and alcohol. This is a key point. When people are encouraged to switch to a paleo diet, they are really being asked to change or give up almost three quarters of their current diet, and the very three quarters that is potentially the most addictive.

The third main obstacle facing the ancestral health movement relates to societal values. Specifically, today we see a general sense of entitlement, which commonly privileges transitory “fun” over true mental and physical “flourishing,” or what the ancient Greeks called eudemonia. This sense of entitlement and desire to have fun manifests itself in several ways. First, there is the “I deserve it” syndrome. This is when a friend, co-worker, family member, or perhaps even we ourselves say: “I deserve that cookie.” Likewise, we simply do not like being told that we cannot eat certain foods, especially when those foods have high emotional or cultural significance. When it comes to food choices, we have also been told again and again: “everything in moderation.” But this approach is simply not compatible with the ancestral health model: some foods are better than others, and other foods are to be avoided altogether. Next, there is the issue of instant gratification. We all want things, and we want them now. This feeling is quite understandable, given we live in a world of instant communication, fast food, and video on demand. Finally, individuals – and society as a whole – typically value personal happiness as the primary goal in life. Society tells us all the time to just “be happy!” But, of course, “being happy” is not the only possible goal in life. Other societies at different places and at different times have prioritized a number of other values, including: social justice, artistic creation, the reduction of suffering, athletic performance, the production of knowledge, sexual ecstasy, and eudemonia. In many ways, that final goal appears to be closest to the true objective of the ancestral health movement.

In addition to these three main obstacles, previous research has also suggested that there are two main types of individuals who typically go paleo: those who are sick (and for whom conventional medicine has failed) and those who are seeking performance.[4] The key commonality between both groups is a very high level of intrinsic motivation. These individuals are highly motivated to get healthy or to improve their performance, or some combination of the two. In fact, it could be argued that it takes a “special kind of person” to transition from the Standard American Diet (SAD) to an ancestral health lifestyle because it often requires a significant amount of effort to make and to maintain the switch. More specifically, this “special kind of person” might be described as someone who is self-directed, willing to challenge authority and the conventional wisdom, and who has access to education and resources that allow him or her to explore alternative health paradigms. However, these individuals may very well represent the exception, rather than the norm, in our society today.

My argument here is not intended to stand as a kind of “final judgment.” Instead, we need to question where the ancestral health movement currently is, and to think about where it is going. Many within the movement simply assume that “paleo” will continue to grow and expand. But that growth cannot be taken for granted. As we are often warned in a different context: “Past performance is no guarantee of future results.” Rather than simply making assumptions about the future, the goal should be to identify the most significant challenges that lie ahead. Perhaps then we will be able to develop the most effective strategies to overcome those challenges.

References:

[1] Beaver, J. D., et al. (2006). Individual Differences in Reward Drive Predict Neural Responses to Images of Food. Journal of Neuroscience, 26(19): 5160–5166.

[2] Armelagos G. J. (2014). Brain Evolution, the Determinates of Food Choice, and the Omnivore’s Dilemma. Critical Reviews in Food Science and Nutrition, 54(10): 1330–1341.

[3] Moss, M. (2013). Salt, Sugar, Fat: How the Food Giants Hooked Us. Random House: New York.

[4] Schwartz, D. & Stapell, H. (2013). Modern Cavemen? Stereotypes and Reality of the Ancestral Health Movement. Journal of Evolution and Health, 1(1): Article 3.


Excerpted from This View of Life by David Sloan Wilson. Copyright © 2019 by David Sloan Wilson. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

From the Introduction:
Whatever you think you currently know about evolution, please move it to one side to make room for what I am about to share in the pages of this book. I think you’ll find that my argument doesn’t fall into any current category. Politically it isn’t left, right, or libertarian. It’s not anti-religious and it enables us to think more deeply about religion than ever before. Above all, it moves us in the direction of sustainable living at all scales. Who doesn’t want to improve their personal well-being; their families, neighborhoods, schools, and businesses; their governments and economies; and their stewardship of the natural world? These goals are within reach—but only if we see the world through the lens of the right theory.

To begin, we need to do a bit of clear thinking on what science is. It is commonly portrayed as a contest between theories that are based on a common stock of observations. First we see and then we theorize. Theories that do the best job of explaining the observations are accepted, only to be challenged by another round of theories, and so on, bringing our knowledge of the world closer to reality.

The problem with this view of science is that the common stock of observations is nearly infinite. We cannot possibly attend to everything, so a theory—broadly defined as a way of interpreting the world around us—is required to tell us what to pay attention to and what to ignore. We must theorize to see. A new theory doesn’t just posit a new interpretation of old observations. It opens doors to new observations to which the old theories were blind. 

Albert Einstein understood this point when he wrote, “It is the theory that decides what we can observe.” He was corresponding with his colleague Werner Heisenberg about electron orbits inside atoms. There was no way to directly observe electron orbits at the time, and Heisenberg thought it prudent to theorize on the basis of what can be observed. Einstein understood that theorizing about entities that cannot yet be seen can lead to useful predictions about what can be seen, but which had previously gone unnoticed. 

Charles Darwin experienced the blindness that comes from lack of the right theory as a young man on a fossil hunting expedition with his professor Adam Sedgwick. The valley in Wales that they visited had been scoured by glaciers and therefore had no fossils. The evidence for glaciers lay all around them—the scored rocks, the perched boulders, the lateral and terminal moraines, all typical of a glaciated landscape. Yet Darwin and Sedgwick were blind to the evidence because the theory that vast sheets of ice had once covered much of the northern hemisphere had not yet been proposed. They didn’t know what they should have been looking for. Darwin commented in his autobiography that “a house burnt down by fire did not tell its story more plainly than did this valley. If it had still been filled by a glacier, the phenomena would have been less distinct than they are now.”

Darwin went on to contribute his own eye-opening discoveries with his theory of natural selection. The theory is amazingly simple: 1) Individuals vary; 2) Their differences often have consequences for survival and reproduction; 3) Offspring resemble their parents. Given these three conditions, populations will change over time. Traits that contribute to survival and reproduction will become more common. Individuals will become well adapted to their environments.
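Those three conditions amount to an algorithm, and a toy sketch of my own (not anything from the book) can run it: give a population heritable variation in a trait, let distance from an arbitrary environmental optimum decide who reproduces, and the population mean walks toward that optimum.

```python
import random

# A toy illustration of the three conditions: variation, fitness
# consequences, and heritability are enough to shift a population's
# mean trait toward whatever the environment favors.
random.seed(0)
OPTIMUM, MUTATION = 1.0, 0.05

population = [random.uniform(-1, 1) for _ in range(500)]   # 1) individuals vary
for generation in range(50):
    # 2) differences have consequences: closer to the optimum -> more likely to reproduce
    parents = sorted(population, key=lambda t: abs(t - OPTIMUM))[:len(population) // 2]
    # 3) offspring resemble their parents (with a little fresh variation)
    population = [random.choice(parents) + random.gauss(0, MUTATION)
                  for _ in range(len(population))]

print(round(sum(population) / len(population), 2))   # mean trait ends up near the optimum
```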

The theory of natural selection is so simple and rests upon such firm assumptions that it seems obvious in retrospect. As Thomas Huxley famously remarked upon encountering it for the first time, “How stupid of me not to have thought of that!” Nevertheless, for those who first started to explore the implications, it was as if the scales had fallen from their eyes. Wherever they looked—the fossil record, comparative anatomy, the geographical distribution of species, and the many wonderful contrivances that adapt organisms to their environments—they found confirming evidence. In the contest of theories, the biblical account of creation didn’t stand a chance. By 1973, the geneticist Theodosius Dobzhansky could declare that “nothing in biology makes sense except in the light of evolution.”

My Story

I was a graduate student at Michigan State University in 1973 and my personal experience can help to explain what Dobzhansky meant by his imperious-sounding proclamation. As someone who loved the outdoors and aspired to be a scientist, I decided to become an ecologist so I could study animals in their natural environments. In keeping with the old joke about experts knowing more and more about less and less until they know everything about nothing, my research was focused on the feeding behavior of a tiny aquatic crustacean called a copepod. Even for this esoteric subject, the possibilities were endless. Copepods might select their food in any number of ways and a theory was needed to narrow the possibilities. Evolutionary theory predicts that copepods should feed in ways that enhance their survival and reproduction. This could mean maximizing the amount of energy harvested, feeding in a way that avoids being eaten by predators, or other possibilities that depend upon the details of the environment. No theory leads directly to the right answer. The best that a theory can do is to narrow the field of possibilities. In this case, I predicted that copepods might selectively graze on larger algae rather than harvesting algae without respect to size, which would increase their rate of energy intake. My prediction turned out to be correct and resulted in my first publication in 1973. I had added one small but solid brick to the edifice of scientific knowledge. I couldn’t vouch for Dobzhansky’s claim for all of biology, but I could testify that evolutionary theory had helped to make sense of my little corner.

Later that year I traveled to Costa Rica to attend a course in tropical ecology run by the Organization for Tropical Studies (OTS), a network of field stations run by a consortium of universities that is still going strong after fifty years. It was a life-changing experience. Anyone who loves nature is thrilled by the tropics, but we were seeing all of those wonderful plants and animals through the lens of evolutionary theory—the same lens that informed my esoteric research. I realized that I didn’t need to spend my life studying copepods. I could pick any creature or topic that interested me and quickly start asking intelligent questions based on the logic of evolutionary theory. It was the opposite of the old joke about experts knowing more and more about less and less. Becoming an expert in evolutionary theory was like receiving a passport to the study of all aspects of life.

Ever since, I have used evolutionary theory to study a multitude of creatures and topics. I have also witnessed my field of biology become ever more sophisticated in observational techniques. The modern biologist is like Darwin with superhuman powers of observation. He or she can catalog entire genomes and track the patterns of gene expression (epigenetics); can trace neural pathways inside the brain; can monitor the movement of animals via satellite; can measure climate change in the distant past with a high degree of accuracy; can experiment with evolution in the laboratory using microbes that can be frozen and brought back to life to compare with their own descendants. 

These technological marvels bring the common stock of observations well beyond anything imagined in Darwin’s day. The role of evolutionary theory in making sense of all this information is more important than ever before. Dobzhansky’s 1973 statement that nothing in biology makes sense except in the light of evolution has withstood the test of time. Yet for many people the word “biology” conjures up a different set of associations than words such as “human” or “culture.” To proceed further, we need to expand the scope of what we consider biology.

How About Us?

Darwin was convinced that his theory could explain the length and breadth of humanity, in addition to everything else more typically associated with biology. He observed and kept notes on his children with the same discerning eye that he observed barnacles and orchids. Following upon On the Origin of Species, he developed his thoughts at length in The Descent of Man and other works.

Yet, as evolutionary biology became a branch of science, the study of humanity did not proceed along the same track. The problem was not just the collision with religious belief that remains with us today. Some people who were fully comfortable with a naturalistic conception of the world still had an allergic reaction to evolutionary theory in relation to human affairs. As early as the 1870s, the threat that they perceived was given a name: social Darwinism.

According to most people’s conception of social Darwinism, the haves and have-nots of society are equivalent to the fit and unfit of evolutionary theory. It is nature’s way for the fit to replace the unfit. Interfering with the process would degrade the species and lead to the collapse of society. It is not selfish for the fit to replace the unfit; it is a moral imperative. Policies that flow from this logic include laissez-faire capitalism, withholding welfare from the poor, forced sterilization, and genocide.

After the excesses of the Gilded Age, the eugenics policies enacted in America and Britain, and the genocidal horrors of World War II, the idea of using evolutionary theory to formulate public policy became unthinkable. The stigma carried over to the academic disciplines classified as the social sciences and humanities. While these areas of study developed into sophisticated bodies of knowledge, they largely avoided engaging with evolutionary theory. Most humanist scholars were happy to accept Darwin’s theory for the study of the rest of life, our physical bodies, and a few basic instincts such as to eat and have sex, but insisted that our rich behavioral and cultural diversity operated according to a different set of rules.
