Many training programmes for psychotherapists and counsellors include a mandatory personal therapy component – as well as learning about psychotherapeutic theories and techniques, and practising being a therapist, the trainee must also spend time in therapy themselves, in the role of a client. Indeed, the British Psychological Society’s own Division of Counselling Psychology stipulates that Counselling Psychology trainees must undertake 40 hours of personal therapy as part of obtaining their qualification.
What is it like for trainees to complete their own mandatory therapy? A new meta-synthesis in Counselling and Psychotherapy Research is the first to combine all previously published qualitative findings addressing this question. The trainees’ accounts suggest that the practice offers many benefits, but that it also has “hindering effects” that raise “serious ethical considerations”.
David Murphy and his colleagues at the University of Nottingham conducted a systematic review of the literature and found 16 relevant qualitative studies published up to 2016, involving 139 psychologists, counsellors and psychotherapists in training who had undertaken compulsory personal psychotherapy as part of their course requirements. Most of the studies involved interviews with the trainees about their experiences; the others were based on trainees’ written accounts.
Murphy and his team identified six themes in the trainees’ descriptions. Some were positive. The trainees talked about how therapy had helped their personal and professional development, for example by raising their self-awareness, emotional resilience and confidence in their skills. Personal psychotherapy also offered them a powerful form of experiential learning in which they got to see for themselves how concepts like transference play out in therapy and, of course, experienced first-hand what it is like to be a client. They also learned about “reflexivity” – how to reflect on themselves and the way their own “self material” contributes to the dynamics of therapy.
Another positive theme was therapeutic gains – some trainees saw their personal therapy as a form of “explicit stress management”; they said it helped them work through issues from their past; and also helped them to become their authentic selves, and accept their strengths and weaknesses.
But the remaining themes were more concerning. The first – “Do no harm” – referred to the fact that many trainees spoke of the stress and anguish that the therapy caused them, and the way it affected their personal relationships. In some cases this left them feeling unable to cope with their client work (in which they were the therapist). Another theme – “Justice” – summarises the burden that trainees felt the mandatory therapy imposed on them, in terms of time and expense, and the pressure of being assessed and of their lost autonomy.
Finally, under the theme “Integrity”, the researchers said some trainees talked about how their therapist was unprofessional, yet it was difficult to switch to another; that they felt coerced into therapy; and that the mandatory nature of it prevented them from truly opening up – in fact there was a sense of some trainees simply jumping through hoops in a functional way to complete their course requirement.
Murphy and his team end their paper by calling on regulatory and training institutions to consider the issues raised by their findings. Although the “hindering factors” they identified raise serious ethical issues, they believe it may be possible to address them. “We envisage that programmes that attend to the points raised in this study will provide the best learning opportunities, compared with courses that do not regularly critically reflect upon, assess, and evaluate mandatory psychotherapy within the course.”
Many millions of people around the world have taken the “Implicit Association Test” (IAT) hosted by Harvard University. By measuring the speed of your keyboard responses to different word categories (using keys previously paired with a particular social group), it purports to show how much subconscious or “implicit” prejudice you have towards various groups, such as different ethnicities. You might think that you are a morally good, fair-minded person free from racism, but the chances are your IAT results will reveal that you apparently have racial prejudices that are outside of your awareness.
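For readers curious about the mechanics, here is a minimal sketch of the kind of contrast the IAT relies on – a simplified version of the commonly used D-score (the real scoring algorithm adds error penalties, response trimming and practice blocks; the reaction times below are purely hypothetical):

```python
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT effect: the slower you respond when the key
    pairing conflicts with an association, the larger the score."""
    difference = mean(incompatible_rts) - mean(compatible_rts)
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return difference / pooled_sd

# Hypothetical reaction times in milliseconds for one test-taker
compatible = [650, 700, 620, 680, 710]    # block with stereotype-consistent pairing
incompatible = [820, 790, 860, 800, 840]  # block with stereotype-inconsistent pairing
print(round(iat_d_score(compatible, incompatible), 2))  # ~1.8, a large "implicit bias"
```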
What is it like to receive this news, and what do the public think of the IAT more generally? To find out, a team of researchers, led by Jeffery Yen at the University of Guelph, Ontario, analysed 793 reader comments to seven New York Times articles (op-eds and science stories) about the IAT published between 2008 and 2010. The findings appear in the British Journal of Social Psychology.
Crudely speaking, the readers could be divided into sceptics and believers. Among the former were those who felt the idea of implicit bias was an academic abstraction in a world of “real racism”. “To me the question of whether [unconscious] racism exists is almost irrelevant when 1 in 15 black adults and 1 in 9 black men between 20 and 34 is in jail,” wrote Nick. “It’s a shame so much time is spent pulling apart such … tiny bits of data … There are many, many examples of actual bias … ,” wrote Jonathan.
Others expressed scepticism about their personal test results, and they often pushed back, arguing that the scientists behind the IAT have a political agenda. “We laugh at the religious for blindly following dogma and dismissing ‘science’. There is as much dogma in this test methodology and the conclusions its backers draw from it,” wrote Luke. An alternative, sceptical reaction was sardonic humour: “I am a white male in his mid-30s, yet I’m good. Even subconsciously! Yes!” wrote vkm.
The reaction among believers in the validity and power of the IAT was very different from that of the sceptics, with many embarking on what the researchers called “morally inflected soul searching”. For example, an Asian American reader voiced concern about her (according to the IAT) anti-white prejudice: “Somewhat more troubling to me is not my results, but that I almost feel proud of them, when my sense of right and wrong tells me I shouldn’t be … much as I wouldn’t be of an anti-black one,” wrote Iris.
Others “confessed” to their implicit prejudice while making a virtue of their willingness to own up to their “guilt”: “I’m happy to own my implicit biases and glad to be made conscious of them,” wrote Jennifer. “I am open-minded enough to be introspective and search my soul for bias of which I might have been unaware previously …,” wrote Bob.
As well as advertising their own wokeness, many of the fans of the IAT also criticised the test’s detractors. “Interesting that a number of these posts are angry,” wrote Laura. “Is this the response of defensive people who don’t want to get close enough to the truth of something to acknowledge it may have merit?” Or take John’s comment: “We should each go home and look in the mirror and recognize the ultimate Bad Guy – who ultimately must become the Good Guy to be the solution.”
Yet another kind of reaction among believers in the IAT was to see implicit prejudice as an unavoidable aspect of being human, thereby absolving the test-taker of responsibility. “I do not believe we can ever get rid of racism and sexism from within ourselves. No amount of education on the importance of tolerance and equality can trump our biological instincts,” wrote David.
The context of this research is that the IAT has become perhaps the most famous and widely taken modern psychological test, even though serious concerns have been raised about its reliability (take the test today and again tomorrow and you will probably find your results have changed) and validity (an individual’s score on the IAT does not tell you much, if anything, about how he or she is likely to behave in the real world). Despite these problems, the test and the concept of implicit prejudice now form the basis of compulsory diversity programmes for many employees.
Arguably there is an important ethical discussion to be had about millions of people receiving test feedback of questionable meaning, and about how this might affect them (the current version of the IAT test site features a disclaimer that attempts to offset these concerns). However, anyone hoping for a hard-hitting ethical critique of the IAT will not find it in this paper. Yen and his colleagues write that they have deliberately side-stepped these issues. “Rather, our objective has been to draw out the social implications of the science in relation to the changing context of prejudice discourse.”
They added: “Our analysis provided a first demonstration of how this research and technology have begun to function in lay understandings of prejudice and public discourse”. In this sense, their findings make a novel and useful contribution, even though it is not clear how representative New York Times readers are of the public more generally. Inevitably for research of this kind, there will also have been a large dose of subjectivity in how the researchers parsed the hundreds of online comments.
Yen’s team end on an optimistic note: “Both the idea of implicit bias and the practice of measuring it can … impact on the way people think of themselves, others, and their prejudices. They provide tools for talking about prejudice, for moral-psychological work on the self, for explaining social ills, and for mobilizing others to act in the interests of change.”
The representation of women in STEM fields (science, technology, engineering and maths) is increasing, albeit more slowly than many observers would like. But a focus on this issue has begun throwing up head-scratching anomalies, such as Finland, which has one of the larger gender gaps in STEM occupations despite being one of the more gender-equal societies, and despite its girls outscoring its boys on science literacy. Now a study in Psychological Science has used an international dataset of almost half a million participants to confirm what the researchers call the “STEM gender-equality paradox”: more gender-equal societies have fewer women taking STEM degrees. The research also goes much further, exploring the causes driving these counterintuitive findings.
Gijsbert Stoet at Leeds Beckett University and David Geary at the University of Missouri analysed several large and often publicly available datasets, like the gender inequality measures taken by the World Economic Forum (WEF; based on metrics like women’s earnings, life expectancy and seats in parliament) and UNESCO data on STEM degrees.
The researchers found the percentage of women STEM graduates is higher in countries with more gender inequality. For instance, countries like Tunisia, Albania and Turkey, which come out poorest on the WEF gender equality measures, see women making up 35-40 per cent of STEM graduates, whereas in countries with more gender equality, like Switzerland and Norway, the figure is lower, at around 20 per cent, similar to Finland.
To better understand the STEM gender-equality paradox, Stoet and Geary accessed results from a 2015 OECD educational survey of the science literacy and attitudes to science of 15- to 16-year-old students from 67 countries. Objectively, neither boys nor girls were more scientifically literate overall – girls were better in 19 countries, boys in 22, with no difference in the others.
These survey results suggest it is not girls’ lack of scientific knowledge or negative attitudes toward science that holds them back. It’s possible, however, that girls might match or outperform boys in science lessons in some countries and still be making a rational choice to avoid STEM routes, because they outperform boys even more in other areas (it’s well documented that girls outperform boys at school on many topics, on average).
Using the OECD survey, Stoet and Geary calculated a personal ranking for each student of their relative ability across the three main areas of maths, science and reading. In all but two of the 67 nations, boys were more likely than girls to be personally strongest in science (80 per cent of boys were personally strongest in either maths or science; in contrast, half of girls were personally strongest in reading). Stoet and Geary found that boys’ tendency to be personally strongest in science was most apparent in more gender-equal countries, the very countries where boys go on to pursue more STEM careers. This smooths some of the kinks in the STEM gender-equality paradox: in fairer societies, boys seem to be optimising their future by pursuing science-like activities, whereas girls have other options on the table.
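As an illustration (my reconstruction, not code from the paper), one plausible way to compute such an intra-individual ranking from a student’s standardised scores:

```python
def personal_strengths(scores):
    """Rank one student's subjects by intra-individual strength:
    each standardised score relative to the student's own average."""
    own_average = sum(scores.values()) / len(scores)
    relative = {subject: score - own_average for subject, score in scores.items()}
    return sorted(relative, key=relative.get, reverse=True)

# Hypothetical student: strong across the board, relatively strongest in reading
student = {"maths": 0.9, "science": 1.0, "reading": 1.4}
print(personal_strengths(student))  # ['reading', 'science', 'maths']
```

Note how a student can be objectively strong in science yet still count as personally strongest in reading – the within-person pattern at the heart of the paradox.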
However, questions remain. For instance, why should a motivated and academically talented teenage girl forgo a science route just because her reading ability slightly outstrips her maths and science? After all, reading and writing skills are also beneficial to scientists, from paper writing to fundraising.
Stoet and Geary tried to address this question by focusing on gender differences in motivation, in terms of interest, confidence and enjoyment. In 60 per cent of countries, boys showed more interest in science than girls (in various fields, from disease prevention to energy), and the gender-gap in scientific interest was greater in the most gender-equal countries; boys also expressed more confidence in their science abilities in 39 of 67 countries, especially the gender-equal ones. And even though girls reported enjoying scientific activities more than boys in two-thirds of the countries, boys’ enjoyment was higher in the more gender-equal countries.
Why are boys more enthusiastic about, more interested in, and more often personally strongest at science in more gender-equal societies? The authors suggest that in highly stable countries with strong welfare systems, people can pursue their calling and unlock their personal potential, building their future around their genuine interests and personal strengths. This echoes the finding, popularised in online lectures by psychologist Jordan Peterson, that sex-related personality differences are larger in gender-equal societies – when societies’ social pressures are less tyrannical, individual tendencies can be expressed more freely. In more repressive cultures, by contrast, young people are liable to prioritise pragmatism – food on the table – over self-actualisation, and as STEM jobs tend to be stable and well-paid, that would encourage more female representation. Consistent with this, Stoet and Geary used a United Nations life satisfaction measure as a proxy for cultural stability and found that more women took STEM degrees in countries where life satisfaction is lower – which tended to be the less equal societies.
These findings suggest we need a nuanced approach to the sticky issue of gender and participation in science. Firstly, there is no question from this data that, objectively, young women across the globe are just as capable of tackling scientific subjects as young men. And even after taking into account the gender differences in science attitudes and personal strengths, the researchers calculated that, in a society where women’s rational preferences led directly to their level of STEM participation, we should see women take 34 per cent of STEM degrees, while the actual global average is 28 per cent – so other factors unaddressed in this study are clearly leading women away from science roles. So the study doesn’t suggest that we rest on our laurels and validate the status quo.
It does suggest, however, that we misunderstand the current levels of STEM gender imbalance if we attribute it entirely to social injustice. A substantial cause of the current STEM gender mix may be the product of young men and women making considered, rational choices to leverage their strengths and passions in different ways.
Across the globe, ADHD prevalence is estimated at around 5 per cent. It’s a figure that’s been rising for decades. For example, Sweden saw ADHD diagnoses among 10-year-olds increase more than sevenfold from 1990 to 2007. Similar spikes have been reported from other countries, too, including Taiwan and the US, suggesting this may be a universal phenomenon. In fact, looking at dispensed ADHD medication as a proxy measure of ADHD prevalence, studies from the UK show an even steeper increase.
Does this mean that more people today really have ADHD than in the past? Not necessarily. For example, greater awareness among clinicians, teachers or parents could simply have captured more patients who had previously been “under the radar”. Such a shift in awareness or diagnostic behaviour would inflate the rate of ADHD diagnoses without more people necessarily having clinical ADHD. However, if this is not the true or full explanation, then perhaps ADHD symptoms really have become more frequent or severe over the years. A new study in The Journal of Child Psychology and Psychiatry from Sweden with almost 20,000 participants has now provided a preliminary answer.
The researchers, led by Mina Rydell at Karolinska Institutet, examined data from participants in the Child and Adolescent Twin Study in Sweden (CATSS), an ongoing study of all twins in Sweden that started in 2004 and aims to track their physical and mental health, with various measures taken the year that the children turn nine years of age.
Specifically, the researchers analysed A-TAC (Autism-Tics, ADHD and other Comorbidities Inventory) scores from 19,271 children from 9,673 families recorded between 2004 and 2014. The A-TAC is a telephone-based interview in which parents are quizzed about their kids’ behaviour and mental health, including sub-scales focused on attention deficits and hyperactivity. The questions are about symptoms with no mention of diagnostic categories and the wording has stayed the same over the years. A typical question is “Does he/she have difficulties keeping his/her hands and feet still or can he/she not stay seated?”.
The researchers used the A-TAC scores to classify the proportion of children in different years with diagnostic-level ADHD, subthreshold ADHD or no ADHD. It is important to keep in mind here that instruments like the A-TAC are restricted to assessing the severity of certain symptoms and cannot be used to diagnose children with ADHD (only clinicians and mental health experts can diagnose someone). For example, if a child fell in the diagnostic-level ADHD category, it would mean that the severity of his or her ADHD symptoms would likely result in a diagnosis by a specialist, but this couldn’t be known for sure. The authors tracked the changes in these categories, as well as in mean A-TAC scores, by comparing results from parent interviews conducted in successive periods between 2004 and 2014.
Across the 10-year study period, 2.1 per cent of all participants (n=406) showed diagnostic-level ADHD and 10.7 per cent (n=2,058) showed subthreshold-level ADHD. Interestingly, there was no statistically significant increase in diagnostic-level ADHD prevalence over time; it fluctuated around 2 per cent in most years. On the other hand, the prevalence of subthreshold ADHD increased significantly from 2004 to 2014, when it reached its peak at 14.76 per cent. Mean ADHD scores and inattention/hyperactivity-impulsivity sub-scale scores showed a similar increase from 2004 to 2014.
These symptom changes over time are probably not due to shifts, over the course of the study, in which twin families agreed to take part in the research. The researchers accessed the National Patient Register, and this showed that while participants in the twin study had fewer ADHD diagnoses than non-participants, this difference did not change over the years of the study, suggesting that it was unlikely to explain the results. Perhaps most important, the National Patient Register showed that the prevalence of clinician-diagnosed ADHD increased more than fivefold from 2004 to 2014 – in stark contrast with the twin study, in which diagnostic-level ADHD prevalence showed no similar rise.
So while the diagnosis rates of clinical ADHD increased during the period of the study, the findings from the twin study suggest that only milder forms of ADHD-related symptoms became more frequent across the population during the same years. This suggests that the number of people whose ADHD symptoms are severe enough to merit a diagnosis has actually remained stable, and that other factors are more probably the driving force behind the rise in diagnoses. While speculative, these could be related to changes in awareness among parents, teachers or clinicians; societal or medical norms; or better access to healthcare.
There are several caveats that need to be kept in mind when interpreting these findings. For example, as mentioned, the A-TAC relies on parents’ reports, which might not be the most adequate source of information – after all, a diagnosis of ADHD requires symptoms to cause impairment in at least two different contexts, such as at school and at home, and parents only see one of them. Because only twins were enrolled in CATSS, it is also not clear whether these results apply to non-twin children. A similar argument could be made about the age of the participants, who were all assessed at age nine.
Keeping its limitations in mind, this study highlights an important point by providing an alternative explanation for rising ADHD diagnoses. This demonstrates the effects that shifts in societal, political or medical opinion can have on the “prevalence” of an illness. Considering that more diagnoses are likely to go hand in hand with more (potentially unnecessary) medication, this study provides food for thought to clinical and political decision-makers.
Post written for BPS Research Digest by Helge Hasselmann. Helge studied psychology and clinical neurosciences. Since 2014, he has been a PhD student in medical neurosciences at Charité University Hospital in Berlin, Germany, with a focus on understanding the role of the immune system in major depression.
Finger counting by young kids has traditionally been frowned upon because it’s seen as babyish and a deterrent to using mental calculation. However, a new Swiss study in the Journal of Cognitive Psychology has found that six-year-olds who finger counted performed better at simple addition, especially if they used an efficient finger counting strategy. What’s more, it was the children with higher working memory ability – who you would expect to have less need to use their fingers – who were more inclined to finger count, and to do so in an efficient way. “Our study advocates for the promotion of finger use in arithmetic tasks during the first years of schooling,” said the researchers Justine Dupont-Boime and Catherine Thevenot at the Universities of Geneva and Lausanne.
The 84 child volunteers were recruited from six different Swiss schools where the policy is not to teach finger counting explicitly, but not to discourage it either (except for very simple additions where the sum is less than 10).
The researchers tested the children’s working memory using the backward digit span task, which involves hearing a string of numbers and repeating them back in reverse order. Children with higher working memory can accurately repeat back longer strings.
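For illustration only (this is not material from the study), the scoring rule for a single backward digit span trial is simple to express in code:

```python
def backward_span_correct(presented, response):
    """A trial is passed if the response reproduces the presented
    digits in exactly reverse order."""
    return response == presented[::-1]

# Hypothetical trials with a three-digit string
print(backward_span_correct([3, 8, 2], [2, 8, 3]))  # True: correctly reversed
print(backward_span_correct([3, 8, 2], [3, 8, 2]))  # False: repeated forwards
```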
The researchers also videoed the children discreetly while they performed, one child at a time, simple single-digit additions, some a bit trickier than others because they involved sums larger than 10 (some kids did the addition task before the memory tests, others afterwards). The researchers later coded the videos to see which kids counted on their fingers during the addition task, and which strategy they used.
Fifty-two of the children finger counted, and there was a significant correlation between finger counting and better performance (for the easier and harder sums), and also between finger counting and higher working memory ability. The researchers think kids with poorer working memory struggle to discover finger counting for themselves, even though it would be advantageous if they used the right strategy.
A problem for those kids with lower working memory ability who did finger count is that they tended to use a more laborious strategy, counting out both addends (i.e. the numbers to be added) on their fingers. The children with higher working memory ability favoured a more efficient strategy, using their fingers only to count on from the first addend – for example, for 8+3, the child would use just three fingers to count on from eight. When the kids with lower working memory used the laborious finger strategy, they actually performed worse than if they used no fingers, especially for the harder sums. However, if they used the superior strategy, they did better at addition than those who didn’t use their fingers.
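To make the difference between the two strategies concrete, here is a toy sketch (my illustration, not the researchers’) contrasting the number of counting steps each one requires:

```python
def count_all(a, b):
    """Laborious strategy: count out both addends on the fingers."""
    total, steps = 0, 0
    for addend in (a, b):
        for _ in range(addend):
            total += 1
            steps += 1
    return total, steps

def count_on(a, b):
    """Efficient strategy: start from the first addend and use the
    fingers only for the second one."""
    total, steps = a, 0
    for _ in range(b):
        total += 1
        steps += 1
    return total, steps

print(count_all(8, 3))  # (11, 11): eleven finger counts for 8+3
print(count_on(8, 3))   # (11, 3): just three finger counts
```

Fewer counting steps means less to hold in mind along the way, which may be why the count-on strategy pairs so naturally with higher working memory.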
“Explicitly teaching lower achievers to use the [more efficient finger counting] strategy could be very beneficial for them,” the researchers said, adding that “… repeatedly using fingers to solve arithmetic problems should allow children to progressively abandon this strategy for more mental procedures and, thus, allow children to become more and more performant through practice.”
The new findings build upon a previous study that tested five-year-olds’ addition skills repeatedly over a three-year period and which found that finger counting correlated with superior performance up to, but not beyond, age 8.
It’s that song. Again. The one they play over, and over, and over. It might be your roommate, child, or colleague. The year I shared a flat with my brother, it was Worst Comes To Worst thrice daily. What is it about certain songs that drives some people to listen to them on repeat? A new article in Psychology of Music explores the tunes that just won’t quit.
In the Autumn of 2013, the research team led by Frederick Conrad of the University of Michigan asked 204 men and women, mostly in their 30s or younger, what song they were “listening to most often these days”. Participants mentioned mostly pop and rock songs, but also rap, country, jazz and reggae, with only 11 songs picked by more than one listener (the most frequently mentioned were Get Lucky, Royals, and Blurred Lines, all of which were hits in the year of the survey).
Eighty-six per cent of participants listened to their song at least once a week, and almost half did so daily. Sixty per cent said that they liked to re-listen to this song immediately, with many enjoying a third or even fourth go. Participants reported having high levels of connection with their named song, with higher connection associated with a tendency to close their eyes during listening to devote the fullest attention to it.
Prompted to describe their chosen song’s effect in their own words, the participants’ descriptions suggested the songs fell into three categories. Over two-thirds were happy, energetic songs – “Pumped up! Excited! Ready to dance, sing, and love!”. For these songs, beat and rhythm were important, and almost half of people who were stuck on a happy song also reported tapping their feet, clapping their hands or drumming on the furniture during listens. This is definitely the vibe that was driving my brother’s daily house party!
The other categories were calm and relaxed (“It makes me feel at ease, calm, and helps me to put things into perspective”) and bittersweet (“It makes me feel sad. But not the bad kind of sad, and I like singing with it”). Bittersweet songs were the most likely to produce deep connections, and were also associated with a greater ability to build a “mental model” of the song, as measured by how much of the song participants felt they could replay in their head. (This ability increased with frequency of all song listens, but more so for bittersweet ones.) Bittersweet songs were listened to many more times than the other song types – on average 790 times, vs. 515 for calm songs and 175 for happy songs.
Repeated listening to songs is a bit of a riddle, given previous research that tends to bear out the classic Wundt curve, whereby a pleasurable stimulus becomes more pleasurable with familiarity until hitting a ceiling and dropping off, as happens with songs on heavy radio rotation. But our listeners weren’t being assailed with the songs against their will (only six per cent of the songs were even on the radio during that time); they were deliberately seeking out and returning to them. For some idiosyncratic reason, a particular song speaks to a particular person, and that connection provides an incentive to listen deeply, which can unlock further nuance in lyrical meaning or musical richness. And the emotional payoff is reliable, much like that of a mood-regulating drug, and that reliable payoff can be more important than the hit of something novel.
Why not review your top listens on Spotify, iTunes or Winamp and have a think about what they are giving you: a dose of energy to tackle the day, a tonic of restoring calm, or a companion to join you in walking through contradictory, complex feelings.
A researcher in human intelligence at Utah Valley University has analysed the 29 best-selling introductory psychology textbooks in the US – some written by among the most eminent psychologists alive – and concluded that they present a highly misleading view of the science of intelligence (see full list of books below).
Russell T Warne and his co-authors found that three-quarters of the books contain inaccuracies; that the books give disproportionate coverage to unsupported theories, such as Gardner’s “multiple intelligences”; and that nearly 80 per cent contain logical fallacies in their discussions of the topic.
In terms of topic coverage, over 93 per cent of the books covered Gardner’s multiple intelligences and over 89 per cent covered Sternberg’s triarchic theory of intelligence (both of which challenge the idea of there being a unitary “intelligence” per se), even though neither of these theories is mainstream or well supported by evidence, according to Warne. In contrast, fewer than a quarter of the books covered the most strongly supported contemporary, hierarchical theories, such as Carroll’s three-stratum model and CHC theory, each of which posits the influence of a general intelligence on other cognitive abilities.
To identify factual inaccuracies, Warne’s team used as a benchmark a consensus statement on intelligence research published in 1997 by Linda Gottfredson et al, and a 1996 APA report on the state of the field. Although no longer cutting edge, Warne chose these references because they reflect consensus in the field and because they are old enough for the information in them to have filtered through to non-specialist textbook authors.
The most common inaccuracy (appearing in nearly half the books) was that intelligence tests are biased against particular groups or individuals. This contradicts the 1997 consensus statement which tackles this issue and concludes that “intelligence tests are not culturally biased”.
The 29 best-selling textbooks that were subject to scrutiny for their coverage of intelligence. From Warne et al, 2018
Other common inaccuracies included promotion of the idea that it is not possible to measure intelligence in a meaningful way (in fact, Warne and his colleagues point out that “it is actually easier to measure intelligence than many other psychological constructs”), and claims that intelligence is only relevant in academic settings (in fact, intelligence correlates with many non-academic life outcomes, from life expectancy to risk of dying in a car accident, and is among the strongest predictors of career success).
Among the logical fallacies in the books is what’s known as “Lewontin’s fallacy” – the idea, advanced in six of the books, that because humans share about 99 per cent of the same genes, genes cannot therefore have a role in the differences between individuals or groups. In fact, “slight differences in genotypes among organisms can result in major phenotype differences”, according to Warne and his team. Twelve other fallacies appeared in the books – see full list above – such as giving less scrutiny to politically correct ideas or claiming that intelligence doesn’t exist because it is a collection of abilities (suggesting the textbook authors had failed to understand the principle of g or “general intelligence”).
In terms of questionable accuracy (i.e. errors not covered by the consensus statements), Warne highlights issues around the discussion of the taboo topic of race and IQ; textbook authors overplaying the role of “stereotype threat”; and authors having a tendency to overestimate environmental influences on intelligence (the books largely neglected the work of scholars who study the genetic influences on intelligence, such as the British researchers Ian Deary and Robert Plomin).
Warne and his team admit there is an element of subjectivity in their analysis of the textbooks. However, they tried to mitigate this by using the two consensus publications from the 1990s as a reference point, by being as lenient as possible in their judgments of the books, and by being explicit in how they went about their analysis.
This new analysis helps explain why the public and lay journalists often express scepticism toward intelligence and intelligence testing that is at odds with expert opinion (perhaps best captured by the hackneyed claim that “intelligence tests only measure your ability to take intelligence tests”). Over a million students take introductory psych courses every year in the USA alone (a majority of whom are taking the course as part of a different degree subject), and judging by the content of the most popular introductory psych textbooks in America, it seems likely these students are getting a highly distorted view of the field.
An obvious issue for our domestic readers is that it’s not clear whether the same inaccuracies and biases in the coverage of intelligence research also appear in British and European introductory textbooks. In fact this is a recurring shortcoming in our coverage of investigations into psychology textbooks – there simply doesn’t seem to be the same scrutiny of psychology textbooks here as there is in America.
“Improving the public’s understanding about intelligence starts in psychology’s own backyard with improving the content of undergraduate courses and textbooks,” Warne and his colleagues conclude. And for anyone who would like to know more about intelligence, they recommend Intelligence: All That Matters by Research Digest guest contributor Stuart Ritchie, and Intelligence: A Very Short Introduction by Ian Deary – these books are “not only accurate, but they also have a breezy writing style that makes them easily digestible”.
For most of us, goose-bumps are something that happens outside of our conscious control, either when we’re cold or afraid, or because we’ve been moved by music or poignant art. However, it seems there are a few individuals with a kind of psychophysiological super-power – they can give themselves goose-bumps at will.
For a new study, which they’ve released as a pre-print at PeerJ, a team led by James Heathers at Northeastern University, Boston, created a Facebook group with descriptions of “voluntary piloerection”, to use the technical term, and invited anyone with this ability to complete a comprehensive questionnaire. Thirty-two voluntary goose-bumpers took part. Though the results are preliminary, this is a landmark study considering that voluntary piloerection has not previously been subject to systematic investigation, and that the scientific record contains just three prior case studies over a period of more than a century.
The average age of the individuals who answered the survey was 32 and there were many consistencies in their descriptions of their goose-bump ability, closely matching the few obscure accounts in the existing medical literature. Nearly three quarters said that the voluntary sensation or action began on the back of their head or neck; many described a shiver or tingle down the spine; and ninety per cent said the sensation manifested on their arms, among other body areas. The participants overwhelmingly described their ability as a voluntary act, akin to deciding to move, as opposed to using their imagination or memories to cause the goose-bumps. They saw it as a normal, harmless activity and many were surprised to learn that others cannot do it.
“I tighten a muscle behind my ears … and the goosebumps appear on my back and then travel to my arms”
“I think about goose-bumps, they start to appear, I shudder/shiver, and there they are”
“I simple [sic] think of doing it. I don’t need to have a [sic] emotion involved, in fact I can do it now without feeling any emotion whatsoever”
“I flex a ‘muscle’ in my brain. Sometimes I have to concentrate a little if I’ve been doing it a while”
Most of the participants said that they noticed their ability in adolescence or early adulthood; two said they only realised they had the ability after reading the description on Facebook (leading the researchers to doubt that the ability arises via conditioning).
Although the participants didn’t use emotions to produce their goose-bumps, triggering the sensation was associated with subsequent emotional experience, especially absorption and wonder (similar to involuntary goose-bumps). In fact, nearly three quarters of them said they deliberately triggered goose-bumps while engaged in aesthetic activities, such as listening to music or dancing, and also activities like sex and study, as if to accentuate or facilitate these experiences. Half of them also used their ability to prolong involuntary goose-bumps.
An obvious criticism of the study is that the participants’ accounts must be taken on trust – the researchers didn’t invite these individuals to the lab to study the phenomenon with their own eyes (or if they have, these results aren’t available yet). However, given the consistency of the accounts, Heathers and his colleagues doubt that the participants are fabricating, and they added: “It is unlikely any participant had prior cues or expectations regarding how their ability might be expected to work due its rarity”.
As part of the survey, the participants also completed a personality and emotion questionnaire and the most striking result was that they scored much higher than average on the trait of Openness, and they experienced the state of absorption more often than normal. The personality finding isn’t surprising given that prior research has found that more open individuals are more prone to involuntary goose-bumps; openness is also associated with self-experimentation, which may be one way that the goose-bumpers discovered or developed their skill.
Unfortunately, we still have no idea about the prevalence of the voluntary goose-bump ability because the researchers don’t know how many people viewed their online advertisements for the survey. We do know that they screened 682 first-year psychology students, and none of them had the ability.
It’s going to be exciting to see what more can be discovered about this little known phenomenon, especially how other physiological systems such as heart rate are involved, and how it affects emotional experience.
A growth mindset – believing your capabilities can grow over time – can help us set self-improvement goals, consider mistakes as a step towards mastery, and remain upbeat when facing tribulation. Psychologists are excited by the ways we can help develop such mindsets, particularly towards creativity and intelligence, but some studies have found the impact less impressive than earlier research had suggested. Now researchers are hungry to understand the individual characteristics that might prevent these interventions making an impact on some people.
New research in the British Journal of Social Psychology has investigated one possible candidate – political ideology, specifically a perspective known as “social dominance orientation”. If you are invested in preserving the status quo, perhaps that encourages you to see social relations as inevitable, as “just the way things are” – an essentialist, fixed view of the world that seems to carry over to how you view human capability.
Crystal Hoyt at the University of Richmond and her colleagues asked 300 online participants to rate their agreement with statements about social dominance like “some groups of people are simply inferior to other groups.” Participants then read either an article about intelligence that promoted a growth mindset, or an article arguing for the opposite (both articles used text and graphs to suggest that intelligence was either highly influenced or entirely uninfluenced by the circumstances of our lives, respectively). Finally participants rated their agreement with fixed mindset items like “you can learn new things, but you can’t really change your basic intelligence”.
Overall, participants higher in social dominance orientation had a somewhat more fixed intelligence mindset. Their views also barely budged after reading either the growth or fixed-mindset article. In contrast, and against the researchers’ expectations, the low-dominance participants were particularly influenced by the article that argued for intelligence as a fixed quality (after reading it, they endorsed a fixed mindset as strongly as the high-dominance participants).
Perhaps people who are low in social dominance beliefs are more open to scientific argument, and the fixed-mindset article was the one that presented them with a new take. In any case, the evidence suggests they are not the ones with most to gain from growth interventions, as was expected; instead they are at greater risk of sliding towards a fixed mindset.
On a positive note, rather than suggesting that high-dominance people might be an especially hard nut for these mindset interventions to crack, the findings create a case for giving it a go – these people are no more resistant than anyone else, and may have further to go, and hence more to gain. All in all, this new research shows that individual differences are worth considering if we want to change mindset, but we’re still in early days of understanding how.
The prolific psychology researcher and author Adrian Furnham sourced 56 brain myths from my 2014 book Great Myths of the Brain and 49 child development myths from the 2015 book Great Myths of Child Development by Stephen Hupp and Jeremy Jewell. Furnham presented the myths to 220 online participants (aged 19 to 66), who rated each one on a five-point scale: definitely false, probably false, don’t know, probably true, or definitely true.
There were four brain myths that over 40 per cent of the participants thought were definitely true: that the brain is very well designed; that after head injury, people can forget who they are and not recognise others, but be normal in every other way; that we have five senses; and that our brain cells are joined together forming a huge net of nerves.
Similarly, there were two child development myths that more than 30 per cent of participants thought were definitely true: that all boys have one Y chromosome and all girls do not; and that a woman who is already pregnant can’t get pregnant again. Other strongly endorsed child development myths included the idea that sugar makes kids hyperactive and that dyslexia’s defining feature is letter reversal (65 per cent and 66 per cent believed probably or definitely true, respectively).
There was no evidence that participants’ age, gender or educational background (including specific experience with psychology) made any difference to their endorsement of myths. However, those who rated their own common sense higher endorsed fewer myths, and there was a modest correlation between being more religious and/or more right-wing and endorsing fewer myths (Furnham has no explanation for these “surprising” correlations).
Furnham notes that participants were generally disinclined to answer “don’t know” – on average less than a fifth chose this option. “It could be that people were too embarrassed to admit they did not know when indeed the evidence shows quite clearly that they did not,” he said.
The study has a few issues, such as the lack of detailed information on the participants’ psychology background or their geographical location. There were no “brain facts” interspersed with the myths. Also, Furnham reproduced the wording of the myths verbatim from the books, even though many aren’t easily understood without the accompanying explanatory text from the books (some are also open to different interpretations without contextual explanation).
Nonetheless, Furnham says his findings reveal once more “the ‘shocking truth’ about the widespread acceptance of myths” and he warns that they are “potentially harmful and socially divisive”. He adds: “The question of how to combat these myths and ensure that people are better informed about various areas of psychology remains largely unanswered.”