A key theme was that it felt like having just half the client in therapy

By Alex Fradera

Psychotherapists are devoted to improving people’s psychological health, but sometimes their efforts fail. A new qualitative study in Psychotherapy Research delves into what therapists take away from these unsuccessful experiences.

Andrzej Werbart led the Stockholm University research team that focused on eight therapy cases where the clients – all women under the age of 26 – had experienced no improvement or, in three cases, had deteriorated, based on a comparison of their pre- and post-therapy symptom levels. The clients had received one to two sessions per week of psychoanalytically focused therapy for about two years, to deal with symptoms such as depressed mood, anxiety, or low self-esteem.

The seven therapists responsible for these cases (one had two non-improving clients) were also all women, average age 53, with a range of experience in therapy. Each had had success in leading other clients to improvement, which is typical; the evidence shows even strong therapists have cases that fail.

The therapists took part in interviews at the start and end of treatment using the Private Theories Interview – a way of exploring the therapist’s take on the case, how they are approaching it, and (retrospectively) what could have been handled differently.

Werbart’s team used the grounded theory tradition to look for emerging patterns in the interviews and found a paradoxical picture. On the one hand, the therapists talked of the great first impressions they’d formed of these cases; they had a clear sense of empathy with the clients’ plight, and engaged with their interesting stories or quick wits. They also said they had felt a connection, even admiration; these were special cases, and the therapist was motivated to do right by them. They also reported that the clients seemed to be attracted to the process, at least on the surface, finding it intellectually stimulating.

Yet the therapists also reported that from the very beginning there was a sense the clients were somehow removed. This was the first inkling of what Werbart’s team found to be a key theme, of “having half of the patient in therapy”.

Initially the therapists said they were optimistic that this was a solvable challenge, but as the sessions continued, it became a mire. Whenever the therapists attempted to address what a client was not disclosing, the client would typically pull back – by intellectualising around issues, or holding back revelations until the session was nearly over. Later, some clients became impersonal in manner and treated the therapist as just a part of the furniture, or they cancelled sessions entirely. The therapists described how trying more actively to wrest back control led to “fruitless battles”, and the process terminated with the therapist in an emotional state, overwhelmed by the client’s energy. One therapist reported “I felt I was drawn into some damned depth.”

Werbart’s team speculate that the paradoxical elements – high initial promise and enthusiasm followed by the later sense of distance – may form two parts of a whole, the case of a therapist “one-sidedly [allying] herself with the patient’s more capable and seemingly well-functioning parts.” These clients are clearly sharp, and may have developed shiny, effective defence mechanisms that took the therapist in. Evidence shows that therapists who initially underestimate the degree of the client’s problems are more likely to struggle, and that successful outcomes begin with a good reading of the situation, not sceptical, but not credulous either. Regarding the current cases, the therapists may have been beguiled by their clients’ charisma, and were on the back foot when they belatedly began to dig deeper.

Despite acknowledging the lack of improvement in their clients, the therapists maintained that the therapy they’d offered had been useful – that their clients had grown in self-awareness, laying groundwork for improvement, if only there were more time. While this interpretation can’t be ruled out, the therapists’ insistence on it may reflect a reluctance to acknowledge being taken in by their clients’ defences. This is perhaps because being a therapist – especially in ultimately successful therapy – is associated with unpleasant feelings, as progress means getting into hard and upsetting issues. A rare case that generates strong feelings of engagement, connection and excitement – even in the absence of tangible improvements – may feel like a welcome change, and it may be hard to acknowledge that such situations are not cause for celebration, but for caution.

“It was like having half of the patient in therapy”: Therapists of nonimproved patients looking back on their work

Alex Fradera (@alexfradera) is Staff Writer at BPS Research Digest


By Emma Young

The way parents and teachers praise children is known to influence not only their future performance, but how they feel about the malleability of intelligence. If a child has done well, focusing positive comments on their efforts, actions and strategies (saying, for example, “good job” or “you must have tried really hard”) is preferable to saying “you’re so smart”, in part because process-centred praise is thought to encourage kids to interpret setbacks as opportunities to grow, rather than as threats to their self-concept. In contrast, a kid who’s led to believe she succeeds because she’s “intelligent” may not attempt a difficult challenge, in case she fails.

Now – and somewhat remarkably, given all the praise and growth mindset research conducted on children – a new study, led by Rachael Reavis at Earlham College, Indiana, US, and published in the Journal of Genetic Psychology, claims to be the first to test the effects of different types of praise on how adults feel after failure.

The researchers recruited 156 adults via Amazon’s Mechanical Turk website. After completing a set of six easy visual pattern problems, which they were given up to two minutes to solve, they were all informed: “You did better than the majority of adults!” But then the feedback varied.

About a third were told that, based on the pattern of their results, they had been classified as “in the high intelligence group” (person-focused ability feedback); about a third were told they had been classified as “the kind of person who works hard” (person-focused effort feedback – they were a “hard worker”) and about a third were told they had been classified as “working hard on these questions” (process-focused effort feedback – they had worked hard).

All participants were then given a set of 12 difficult problems (which timed out after 3.5 minutes), and no matter how well they did, all were told that their performance was “worse than most adults”. The researchers were interested in how the earlier feedback would affect the participants’ performance and enjoyment on the tasks, and especially how they would interpret their apparent failure at the final set of difficult problems. 

Based partly on previous findings involving children, the researchers expected that being told you’re a “hard worker” would be the most beneficial kind of praise or feedback, as it implies that the person typically puts in a lot of effort. (And while this form of praise is person-centred, it focuses on behaviour, rather than on intrinsic ability.)

In fact, the type of feedback participants received after the easy task did not affect their performance on the difficult problems, relative to their performance on the first set. Meanwhile, it was the “hard worker” group who said they enjoyed the difficult set of problems the least (the other two groups did not differ from each other on this); they also believed they had been less successful on the tasks than those in the other groups.

Finally, when the participants indicated, on a scale of 0 to 10, to what extent they attributed their poor performance at the final task to lack of effort or to lack of intelligence (as well as to eight other factors that were included to obscure the true purpose of the study), there were no group differences for effort, but as expected, those in the “worked hard” group were significantly less likely to attribute their failure to their level of intelligence than those in the “high intelligence group”. However, against expectations, the “hard worker” group actually blamed their low intelligence just as much as the “high intelligence” participants. 

As the researchers note, “Few of the results demonstrated with children were replicated.”

Why might this be? 

It’s possible that adults believe that telling someone they’re a hard worker is something positive to say when you can’t plausibly say that they’re smart or gifted. When I think back to my own childhood, there were awards at school for “good work” and also for “hard work”, and, among the kids, a “good work” award was seen as being the bigger achievement. In contrast, at my children’s primary school, in the light of the findings on process-focused praise, rewards are focused entirely on effort. However, it’s also standard for a child who typically puts in a lot of effort to be called a “hard worker”. 

For children today, this may perhaps still be beneficial. For adults, who grew up in a different time, “being told one is a ‘hard worker’ may elicit feelings of inadequacy, which undermine positive perceptions of the task,” the researchers write. “Future work should investigate how both children and adults interpret these types of praise.”

Emma Young (@EmmaELYoung) is Staff Writer at BPS Research Digest

Effort as Person-Focused Praise: “Hard Worker” Has Negative Effects for Adults After a Failure


By Alex Fradera

The “Big Society” initiative – launched at the turn of this decade by the incoming British government – was a call for politics to recognise the importance of community and social solidarity. It has since fizzled out, and for a while communitarianism fell out of the political conversation, but it has returned post-Brexit, sometimes with a nationalist or even nativist flavour. The US political scientist Robert Putnam’s research is sometimes recruited into these arguments, as his data suggests that racially and ethnically diverse neighbourhoods have lower levels of trust and social capital, which would seem an obstacle to community-building. But an international team led by Jared Nai at Singapore Management University has published a paper in the Journal of Personality and Social Psychology that suggests that diverse neighbourhoods are in fact more likely to generate prosocial helpful behaviours.

Putnam’s work tallies with a distinguished psychological idea, conflict theory, which suggests that salient distinctions between people in an area heighten a sense of competition between groups over resources. Because race is so visible – neuroscientific studies suggest we perceive it even earlier than gender – the argument goes that racially diverse areas lead people to, as Putnam puts it, hunker down and withdraw. Consistent with this view, Putnam’s data shows that multicultural areas have lower levels of trust – even between people of the same race – and some evidence, though not all, suggests this has a knock-on adverse effect on civic engagement and volunteering.

Nai’s team predicted that, despite this, racially diverse areas would show more, not less, prosocial helping. They drew on contact theory, which suggests that active contact with people from other groups humanises them. In particular, such contact leads people to view their own identity as broader, potentially encompassing all of humanity – this could be a mechanism to encourage prosociality. As most contact theory work involves extended face-to-face interactions, the question was whether mere ambient diversity – simply being around other people who look different – would help.

A first study showed that diverse areas have a more prosocial online buzz. Nai’s team pulled 60 million tweets and identified the usage frequency of words from James Pennebaker’s “prosocial dictionary” that are known to correlate with a desire to help others. Indexing across 200 metropolitan areas, they found that tweets from more racially diverse areas used prosocial language more frequently. But language is an indirect measure, and it may be that people with that style voluntarily move into areas that are more diverse. So Nai’s group looked at international data, as moving country is rarer than moving cities, and at levels of actual (reported) helping. The data came from a 2012 Gallup World Poll that asked “in the past month, have you helped a stranger?”. Across 128 countries, Nai’s team found that more ethnically diverse countries had a greater frequency of yes responses to this question.

Next, Nai’s team wanted to see if the factor that turned diversity into helping was people having a broader sense of identity. They surveyed US participants online using the same helping question as the Gallup poll, and replicated the greater diversity/greater helping correlation across zip codes for around 500 participants (good gender balance, average age 33). They also asked the participants how much they identified with three groups: people in my community, Americans, and all humans everywhere. They found that diverse neighbourhoods were associated with higher identification with all of humanity, and statistical analysis suggested, but could not prove, that this (and only this) form of identification was driving the helping behaviour.

Finally, the researchers looked at help for outsiders during a crisis. Real data from a helping website set up following the Boston marathon showed that more offers of help came from zip codes that were more racially diverse, even after controlling for distance from site and wealth of the zip code. And in an online experiment, 300 participants stated they would be more likely to offer help following a bombing when they had been asked to imagine living in a very diverse neighbourhood – again mediated by a greater sense of connection to humanity. 

All the studies controlled for a range of factors related to area or nation, such as income / national economics, education, urbanisation, and religious diversity. One weakness is that apart from the last study, all this work is correlational. It would be interesting to track neighbourhoods over time to see how changes have an impact dynamically – maybe an area renowned for its diversity that evolved organically over decades would have a different attitude to the idea of “one humanity” than a neighbourhood with no sense of itself as diverse per se and that had changed fairly rapidly due to impersonal market or governmental forces. 

An interesting detail from the international study is that trust scores (available for a subset of nations) were lower in more ethnically diverse countries, in line with the Putnam data. So it seems that a populace can both be less trusting and more willing to help strangers, which is something to puzzle on. But regardless, the new data pushes back against the assertion that diverse neighbourhoods struggle to show communal spirit, and suggests that ambient contact with those superficially different can underscore our common humanity and obligations to one another.

People in more racially diverse neighborhoods are more prosocial

Alex Fradera (@alexfradera) is Staff Writer at BPS Research Digest 


By Christian Jarrett

“Our country doesn’t do many things well, but when it comes to big occasions, no one else comes close,” so claimed an instructor I heard at the gym this week. He might be an expert in physical fitness but it’s doubtful this chap was drawing on any evidence or established knowledge about the UK’s standing on the international league table of pageantry or anything else, and what’s more, he probably didn’t care about his oversight. What he probably did feel is a social pressure to have an opinion on the royal wedding that took place last weekend. To borrow the terminology of US psychologist John Petrocelli, he was probably bullshitting.

“In essence,” Petrocelli explains in his new paper in the Journal of Experimental Social Psychology, “the bullshitter is a relatively careless thinker/communicator and plays fast and loose with ideas and/or information as he bypasses consideration of, or concern for, evidence and established knowledge.”

While pontificating on Britain’s prowess at pomp is pretty harmless, Petrocelli has more serious topics in mind. “Whether they be claims or expressions of opinions about the effects of vaccinations, the causes of success and failure, or political ideation, doing so with little to no concern for evidence or truth is wrong,” he writes.

There are countless psychology studies into lying (which is different from bullshitting because it involves deliberately concealing the truth) and an increasing number into fake news (again, unlike BS, deliberate manipulation is part of it). However, there are virtually none on bullshitting. Now Petrocelli has made a start, identifying several social factors that encourage or deter the practice.

The research began with nearly 600 people on Amazon’s Mechanical Turk survey website reading that a man named Jim had pulled out of running for a seat on the City Council. Participants thought they were taking part in a study of how we ascribe causes to others’ behaviour (the term bullshitting did not appear anywhere in the experiment instructions), and after they read about Jim’s resignation, they were invited to list five possible reasons and any related thoughts on why Jim might have done this – a perfect opportunity for bullshitters to let rip!

Petrocelli varied the precise conditions to see how this affected people’s propensity to bullshit when answering. For starters, he manipulated background knowledge by giving half the participants 13 facts about Jim beforehand, such as that he liked to be admired. Petrocelli also manipulated the social pressure to give an opinion, telling half the participants they didn’t have to list any reasons if they didn’t want to. Finally, Petrocelli manipulated audience knowledge, telling half the participants that their reasons would be scored by judges who knew Jim extremely well.

To measure bullshitting, Petrocelli later asked the participants to score their own reasons, based on how much they had been concerned with genuine evidence and established knowledge; essentially they assessed their own BS levels.

All the factors that Petrocelli manipulated made a difference. Overall, the participants who received no background information on Jim admitted to engaging in more bullshitting. Participants also bullshitted more when they felt more obliged to give an opinion, and when their audience was not knowledgeable about him. These latter two factors (obligation and audience knowledge) interacted, with social obligation being more potent. When feeling obligated to have an opinion, uninformed participants bullshitted a lot even when they knew their audience knew more than they did.

“Anything that an audience may do to enhance the social expectation that one should have or provide an opinion appears to increase the likelihood of the audience receiving bullshit,” Petrocelli said.

Without such pressure, however, the risk of being caught out was a deterrent to BS, and Petrocelli explored this further in a follow-up experiment. Online participants were invited to justify their attitudes on hot-button social issues: affirmative action, nuclear weapons, and capital punishment. Crucially, Petrocelli manipulated who participants thought would be reading their justifications – either he gave participants no information about their audience or he told them a sociology professor with expertise on these issues would be reading their views (and further, that the prof either agreed with their positions, disagreed, or that his own position was concealed).

The participants subsequently rated their own BS levels (i.e. they rated whether they’d been concerned with evidence or established knowledge) and it was the participants not told about a professor, or who thought the professor agreed with them, who especially admitted to more bullshitting. Those participants who knew a professor with opposing views was going to read their arguments admitted to the least bullshitting. Fear of being called out, in other words, appears to be a strong deterrent to spewing BS.

Where does this leave us? It’s a shame there was no objective marker of bullshitting in this research – sure, it makes sense to ask people if they considered any evidence or knowledge, but at least some kind of third-party assessment would have been useful. You might also have your doubts about the realism of these online experiments. Giving inconsequential opinions on a survey website is rather removed from the real life scourge of the office colleague who is forever sharing their questionable wisdom on what you need to do to stay healthy or how the country should be run. Nonetheless, all empirical investigations must start somewhere and Petrocelli said his studies “provide a great deal of information relevant to the social psychology of bullshitting.”

The professor is under no delusions though about how hard it will be to translate his insights into practical anti-BS strategies. While his findings suggest that calling out BS (or the mere threat that it might be called out) can reduce the propensity for bullshitting to take place, Petrocelli notes that doing so “may not necessarily enhance evidence-based communication” and that instead it may just be a “conversation killer”. He concludes that “future research will do well to respond to such questions empirically and determine effective ways of enhancing the concern for evidence and truth.”

Antecedents of bullshitting

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest


By guest blogger Dan Jones

When, in Shakespeare’s Julius Caesar, Mark Antony delivers his funeral oration for his fallen friend, he famously says, “The evil that men do lives after them; the good is oft interred with their bones”.

Antony was talking about how history would remember Caesar, lamenting that doing evil confers greater historical immortality than doing good. But what about literal immortality?

While there’s no room for such a notion in the scientific worldview, belief in an immortal afterlife was common throughout history and continues to this day across many cultures. Formal, codified belief systems like Christianity have a lot to say about the afterlife, including how earthly behaviour determines our eternal fate: the virtuous among us will apparently spend the rest of our spiritual days in paradise, while the wicked are condemned to suffer until the end of time. Yet, according to Christianity and many other formal religions, there’s no suggestion that anyone – good, bad or indifferent – gets more or less immortality, which is taken to be an all-or-nothing affair.

This is not how ordinary people think intuitively about immortality, though. In a series of seven studies published in Personality and Social Psychology Bulletin, Kurt Gray at The University of North Carolina at Chapel Hill, and colleagues, have found that, whether religious or not, people tend to think that those who do good or evil in their earthly lives achieve greater immortality than those who lead more morally neutral lives. What’s more, the virtuous and the wicked are seen to achieve different kinds of immortality.

The new findings complement previous work showing how we see moral character as a defining feature of people, both when they’re alive and when their souls depart. In the new studies, Gray and colleagues extended this, finding that their participants (recruited online via Amazon Mechanical Turk and including atheists and people of different religious faiths) rated historical figures who were extremely good or bad – Martin Luther King or Hitler, for example – as achieving a greater degree of immortality than morally neutral figures, such as Amelia Earhart and Andy Warhol.

Even if both good and evil people are seen to achieve greater immortality than the more morally neutral, might they nonetheless experience different kinds of immortality? For many of the world’s major religions, the answer is clearly yes: perform morally positive acts on earth and you go to The Good Place (Heaven) to enjoy total freedom in a paradisiacal realm, but do evil and you go instead to The Bad Place (Hell) to be tormented for all time.

Casting a wider anthropological net, many smaller, less formal belief systems also posit that good and evil spirits experience immortality in different ways. In particular, it’s common to find the belief that while virtuous spirits enjoy a transcendental freedom, evil spirits are more likely to be trapped or confined in some way, such as the Iroquois belief that they are eternally confined to their homes. Similar ideas crop up in popular culture too. When the evil wizard Voldemort dies in Harry Potter, his soul lives on in magical objects called Horcruxes. But when Obi-Wan Kenobi dies in the Star Wars films, his spirit is able to roam freely through the ethereal realm of the Force.

Gray and his colleagues found their participants held similar intuitive beliefs about the fates of deceased good, bad or neutral historical figures: they were more likely to see good souls as living in a transcendent state, wicked souls as trapped, and neutral souls as slightly less free than good ones but freer than bad ones. 

The reverse inference also held: reading about someone who had recently died and whose spirit had left the earthly realm and moved beyond space and time prompted participants to infer that this must have been a good person, while the converse led them to think the person must have been bad. 

Similarly, participants inferred that spirits inhabiting expansive locations, like hot deserts, arctic tundra or mountaintops, were more benevolent than those living in more confined locations, like a narrow trench, underground cave or tent in the woods, irrespective of how pleasant those locations were deemed.

Such inferences might explain why paranormal events are typically chalked up to malevolent spirits. The researchers asked more participants to imagine being in the house of someone recently deceased and that they felt a strange sensation as their spirit passed by. After reading these stories, people tended to view this spirit as malevolent, as a trapped spirit must be a bad spirit.

In explaining their findings, Gray’s team suggest that seeing good souls as free and transcendent and bad ones as confined and trapped stems in part from a basic desire for justice, with bad souls ending up in a spiritual prison, less able to roam and harm others. Such a desire may also receive a cognitive boost from the fact that notions of good and evil are metaphorically associated with ideas of lightness and airiness, and darkness and constriction, respectively.

The results did not appear to depend on whether participants already held religious beliefs about the afterlife – the same patterns were found regardless of their stated faith or supernatural belief, suggesting that our folk intuitions about immortality tend to overpower any formal belief systems that we claim to subscribe to. “These ways of thinking are very intuitive, and overcoming them takes effort,” says Gray.

One caveat to these studies concerns the fact that the participants were from a Western, educated, industrialised, rich, democratic (WEIRD) society. Psychological insights generated from WEIRD participants do not always generalize to other cultures, and in this case beliefs about reincarnation may have been under-represented. But Gray and colleagues argue that the cross-cultural similarities in the beliefs about the afterlife that inspired the research suggest that the new studies tap into a universal aspect of our psychology.

Gray is currently writing up the results of follow-on studies in which he looked at how the state of someone’s mind at the point of death – say, whether it was at the peak of health or wracked by dementia – affected how participants perceived its prospects for immortality. So while we may not ever be able to achieve literal immortality, at least we may soon know what it takes for others to think we’re immortal.

To be immortal, do good or evil

Image: vintage engraving by Gustave Doré, from Milton’s Paradise Lost – “Hell at last, Yawning, received them whole.”

Post written by Dan Jones (@MultipleDraftz) for the BPS Research Digest. Dan is a freelance writer based in Brighton, UK, whose writing has appeared in The Psychologist, New Scientist, Nature, Science and many other magazines. He blogs at www.philosopherinthemirror.wordpress.com.


Red and blue lines show the ratio of the yearly survival rates for Olympic medallists and Chess grandmasters, respectively, relative to the general population (flat dashed line). Shaded areas show confidence intervals. Via An Tran-Duy et al, 2018

By Christian Jarrett

It’s well established that elite athletes have a longer life expectancy than the general public. A recent review of over 50 studies comprising half a million people estimated the athletic advantage to be between 4 and 8 years, on average. This comes as little surprise. One can easily imagine how the same genetic endowment and training necessary to develop physical prowess in sport might also manifest in physical health. Now for the first time, a study published in PLOS One (open access) shows that athletes of the mind – chess grandmasters – enjoy the same longevity advantage as athletes of the body.

An Tran-Duy at the University of Melbourne and his colleagues obtained data on over 1,200 chess grandmasters, mostly men, from 28 countries in three world regions, including whether or not they survived each successive year after receiving their title, all the way up to the beginning of 2017. From this, the researchers calculated the average yearly survival rates, adjusting for region, age and sex, which allowed them to come up with estimated life expectancies for grandmasters of different ages in different years. They did the same with data for over 15,000 Olympic medallists.

There was no difference in the average life expectancy of the athletes and the chess grandmasters, but both groups showed a sizeable life expectancy advantage compared to the general population. For instance, in 2010, the average life expectancy of a chess grandmaster aged 25 was 6.3 years longer than the average for a 25-year-old member of the public. For a 55-year-old chess grandmaster, life expectancy was 4.5 years longer.

The study can’t tell us anything about why chess grandmasters live longer than the public. It’s possible some of the causes are indirect, such as the grandmasters possibly having higher average IQ (which is itself associated with longevity); elite chess players are also known to take more care of their physical fitness than the general population; and the social and economic benefits of becoming a grandmaster, especially notable in Eastern Europe, may have health benefits. Chess may also have direct health benefits, including via its known effects on the brain – for instance, it reduces risk of dementia.

Tran-Duy and his team begin their paper quoting Isaac Asimov: “In life, unlike chess, the game continues after checkmate”, and they reference this quote in their conclusion. “Not only does the game of life continue after the checkmate,” they write, “but excelling in mind sports like chess means one is likely to play the game for longer.”

Longevity of outstanding sporting achievers: Mind versus muscle

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest


By Alex Fradera

Abraham Maslow was one of the great psychological presences of the twentieth century, and his concept of self-actualisation has entered our vernacular and is addressed in most psychology textbooks. A core concept of humanistic psychology, self-actualisation theory has inspired a range of psychological therapies as well as approaches taken in social work. But a number of myths have crept into our understanding of the theory and the man himself. In a new paper in the Journal of Humanistic Psychology, William Compton of Middle Tennessee State University aims to put the record straight.

Maslow’s most penetrating idea is that we have a hierarchy of needs, proceeding from physiological needs like water or warmth, through safety, love, esteem and then self-actualisation. He argued that lower needs occupy our attention when they are unmet and make it more difficult to fulfil the higher ones – including self-actualisation, which is about becoming the self you always had the potential to be.

Compton first deals with the charge that this work is ascientific. He finds there is a lack of strong evidence showing that individuals transition from one level of the hierarchy to the next, as Maslow claimed. However, research on this point is complicated by the widespread but mistaken belief that Maslow considered needs must be fully satisfied at each level before progressing. In fact, Maslow stated that everyone has unsatisfied needs at every level – who feels safe 100 per cent of the time?

On the other hand, in favour of the idea of progression through the hierarchy is evidence from comparisons of national populations. Cross-cultural research shows that when more people in a population have their basic needs met, a greater proportion also tend to reach self-actualisation, as compared with populations that are preoccupied with scarcities. 

Maslow also claimed that people are more likely to flourish when they hold self-actualising values like spontaneity, positive self-regard, and acceptance of paradoxes. There is supportive data associating these qualities with positive outcomes, including creativity, lower anxiety and a personal locus of control, and also – perhaps more surprisingly – higher instances of peak experiences, higher sexual satisfaction, and less fear of death.

The hierarchy is sometimes presented with another element slotted in: cognition needs, placed just below self-actualisation (as seen in these examples). In fact Maslow opposed this, as he saw cognition as a tool that can serve every need at every level, whether in knowing self-defense techniques that help us feel safe, or in knowing ourselves. For him, it lay outside the hierarchy. Another point often forgotten is that self-actualisation isn’t Maslow’s pinnacle. He broke out another stage for “peakers” – self-actualised individuals who also have peak or mystical experiences.

Compton moves on to address allegations about who and what the theory is for. He disputes the idea that it encourages self-centredness: many of the self-actualisation qualities Maslow emphasised are actually centred on others, like fairness, service, and adherence to a universal framework of values. Moreover, two of Maslow’s favoured reference points when talking about self-actualisation were Alfred Adler’s gemeinschaftsgefühl (the psychological health that follows from caring about others) and the bodhisattva (the Buddhist notion of one who strives for compassion towards others). 

What about the related charge that self-actualisation is elitist, a preoccupation reserved for the privileged? This criticism needs some thinking through. There is a case that Maslow didn’t pay enough attention to how sexism or racism could impede self-actualisation, although his writings did show a vaguer sensitivity to the ways life can deal you a trickier hand. It’s true many of his self-actualised examples are white men, but he also cited figures such as Jane Addams, Frederick Douglass, and Harriet Tubman. And while it may seem that self-actualisation requires plenty of disposable income and leisure time, what Maslow meant by it is bringing your full self to the moment, which includes dedicating yourself to work, how you treat others in daily interactions, and holding yourself to the highest standards. You can do all that from wherever you are standing.

Finally, Compton deals with the references in Maslow’s copious writing to the self-actualised as “more fully human” versus the “less evolved persons” who are lower down the hierarchy – at the very least, this is a case of bad optics. In his defense, Maslow repeatedly emphasised that he did not believe anyone was innately superior, just that some people made more of their potential. Compton argues that some of the criticism around this is motivated by defensiveness: that some people are apparently stung by the claim that someone can work on their personality and thus make it excellent, just as they can become an excellent gymnast or painter. I disagree – I think there is a case that Maslow’s language unhelpfully conjures a sense of individualistic exceptionalism that would probably feel right at home in a TED summit or posthumanist away-day. Not, I think, what he would have wanted.

Clearly, Maslow’s work is not without flaws. But his reframing of psychology to look at upward possibilities, rather than constantly into pathology, sparked a shift that anticipated the positive psychology movement by decades. His ideas deserve to be better understood, so we can use them more effectively to better ourselves, and so they can be developed and built upon by professionals seeking a ladder to help humanity reach greatness.

Self-Actualization Myths: What Did Maslow Really Say?

Alex Fradera (@alexfradera) is Staff Writer at BPS Research Digest 


By Christian Jarrett

Three years ago, in a time before Trump or Brexit or This Is America, someone posted an overexposed photograph of a black and blue striped dress on Tumblr. Soon millions of people had seen it and started arguing about it. The reason? It quickly became apparent that about half of us – more often women and older people – perceive the dress, not as black and blue, but as white and gold.

In a neat example of real life echoing a classic psychology experiment (I’m referring to Asch), #thedress was enough to make you think your friends were gaslighting you – how could it be that you and they were looking at the exact same picture and yet seeing entirely different things?

The squares marked A and B are actually the same shade of grey, via Wikipedia

Of course there are many optical illusions, including others that involve colour (see, for example, the “checker shadow” illusion, pictured right). What was special about #thedress was that it triggered a bimodal split in perceptual experience among the population. Also, many illusions trigger a fluctuating percept, but once someone perceives the dress one way, they usually keep seeing it that way.

Viral hits happen overnight. Science is slow, but it’s catching up. With the passing of the years, numerous studies into #thedress have now been published – 23 according to a new review. Here we present you with a fascinating digest of what’s been discovered so far about the famous frock – researchers have made progress, certainly, yet much remains mysterious, making this a humbling experience for perceptual science.

A big part of how you see the dress has to do with a process called colour constancy

Before any studies had been published, psychologists and vision experts were quick to explain that the #thedress illusion is related to a process called “colour constancy”, whereby your brain takes environmental lighting effects (which are ambiguous in the #thedress photo), and your own past experiences, into account when interpreting the precise wavelengths it believes are being reflected off a surface. The same automatic adjustment process allows us to recognise grass as green whatever the weather or time of day. The process sometimes goes wrong, though, like when the blue top you bought at the clothes store turns out to be black when you get home.

How you see the dress depends partly on what inferences you make about the lighting

If our inferences about the background lighting are relevant, as the initial expert reaction suggested, then manipulating the background illumination in an unambiguous way ought to have a direct effect on viewers’ perceptions of the dress. A team led by Rosa Lafer-Sousa found this to be the case. When they presented the dress against an obviously cool, blue-ish background (see image, left), most people saw it as white and gold, but when they presented it against a warm, yellowish background, most saw it as black and blue.

The researchers Andrey Chetverikov and Ivan Ivanchei later developed this idea by showing that people’s perceptions of the dress are related to their assumptions about the light source in the original photograph. In their survey, people who saw the dress as black and blue were twice as likely as the white and gold group to believe that the dress is illuminated from the front, as if with a flash. Conversely, white and gold perceivers were more likely to believe there was a window behind the dress (implying underexposure).

In the survey, time of day seemed to make a modest difference to these assumptions, but still, the fundamental reasons why we come to make different inferences about the lighting remain largely unknown. In their new review of research into #thedress, published in Archivos de la Sociedad Española de Oftalmología, ophthalmologist Julio González Martín-Moro and his colleagues state that the “dichotomic behaviour of this optical illusion is surprising and leads us to consider two different versions of the software in charge of maintaining chromatic [colour] constancy.”

Your initial impression of the dress is likely to stick

I already mentioned how, whichever camp we are in (white/gold or blue/black), most of us keep seeing the dress that way. This stubbornness of the percept could be because our brains continue to make the same assumptions about the scene, but alternatively it might be what Leila Drissi Daoudi and her colleagues describe as an example of “one-shot learning”.

To test this, they manipulated naive participants’ perception of the dress by using occluders to block out the background – when they used black occluders (see above), this induced the perception of the dress as white and gold in nearly everyone; when they used white occluders, the opposite was true. Crucially, when the researchers took the occluders away, very few participants experienced any change in their perception of the dress’ colours. “There seems to be a one-shot learning mechanism in play, which leads to an imprinting of the color perceived”, the researchers wrote, although they were unable to explain what (without the use of occluders) induces the initial and lasting perception – they tested the location of people’s initial eye movements to the dress, but this made no difference.

The brains of gold/white perceivers are engaged in more interpretive processing

Consistent with the idea that our inferences about the background lighting are important to #thedress illusion, the results from a brain imaging study that was published in 2015 suggested that people who see the dress as white and gold may be engaged in more “top down” interpretive processing, albeit that this processing leads to a misleading perception. The researchers scanned the brains of volunteers while they looked at the dress and found that those who perceived it as white and gold exhibited more neural activity in a raft of brain areas, including in frontal, parietal (near the crown of the head) and temporal (near the ears) regions.

What this brain scan study couldn’t answer is whether the extra neural processing is the cause or consequence of the gold/white perceptual experience of the dress. However, a survey of patients with multiple sclerosis suggested the neural activity may be causal. Researchers in Italy found that with greater progression of the disease, people were more likely to perceive the dress as blue and black. Perhaps, the researchers speculated, this is because greater impairment of colour processing regions in the cortex led to “a less demanding black and blue vision of The Dress in more advanced MS”.

Differences in the eye may also account for the illusion

The way we perceive the dress might not all be about colour constancy and top-down brain processing. A 2016 study found that “front end” factors might also be relevant. Specifically, the researchers found that “macular pigment optical density” was higher in volunteers who saw the dress as white and gold. Macular pigment is found in the retina and a greater density increases the amount of short wavelength light that is absorbed by the eye. “Our results indicate that early-stage optical, retinal and neural factors influence perception of the Dress”, the researchers said. Consistent with this, another study found that white and gold perceivers tended to have smaller pupils, potentially limiting the amount of light that falls on the retina, thus suggesting another early-stage influence upon the perception of the dress.

Past experience seems to be more important than genetics

Last year, researchers in London and Cambridge published the first twin study into how people perceive the dress. By comparing the similarities in perception between identical and non-identical twins, they estimated that around 34 per cent of the variance in people’s perception is due to genetic differences, while 66 per cent is due to environmental factors (including learning and past experience). This was a small study, however, and the findings are tentative. Omar Mahroo and his colleagues concluded that the environmental factors “…may include prior lifetime experience to different spectral and luminance environments, shaping retinal and higher neuronal processing, and the development and evolution of [how we come to name] object colors in various contexts.”

In other words, there are lots of possibilities for how experience shapes our varied perceptions of colour in ambiguous scenes, all of which remain to be studied. We also know nothing about how common visual deficits, such as colour blindness and cataracts, interact with the perception of #thedress.

A lesson in humility

Of all the senses, vision is the one that’s been studied most intensively by psychologists and neuroscientists. Despite this, there are no tests available that would allow even the best experts in the land to predict whether you will see the dress as black and blue or white and gold. And while we have some good ideas about the processes that are involved in #thedress illusion, so much remains mysterious about what leads each of us to view it one way or the other.

At the end of their review, González Martín-Moro and his colleagues come to a stark conclusion: “…the fact is that researchers have failed to come up with a theory that can satisfactorily explain the dichotomic behaviour of the population when viewing said optical illusion.”

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest and he continues to see the dress as white and gold.


By Alex Fradera

Social class may seem different today than it did in the early 20th century. Former British Deputy Prime Minister John Prescott’s comment in 1997 that “we are all middle class now” had a ring of truth, given that most people in the West have access to what once were luxuries, such as running water, in-house entertainment, and eye-catching brands. But this is something of an illusion, according to Cardiff University’s Antony Manstead, who shows in the British Journal of Social Psychology (open access) how class is still written into our psychology, and the implications this has for how we behave and our wellbeing.

First, Manstead presents evidence that socio-economic status (based on income and educational attainment – not identical with class, but an overlapping concept that’s used in much of the relevant research) remains as important to people’s identity as factors like gender or ethnicity. This is truer for wealthier people than for lower earners, suggesting some erosion of a positive working-class identity.

Apart from obvious class markers like clothing, our behaviour is also still a giveaway: lab studies show that working-class people are more likely to employ eye-contact, laughter and head nods when interacting with others, compared to a more disengaged non-verbal style from the middle/upper classes. And class can be read at greater than chance levels even from stimuli as basic as Facebook photographs or seven words of speech. So we feel a certain class, and others can detect that class fairly easily.

What are the psychological consequences of this awareness of social class? Manstead describes how working class people are more likely to consider the world as a mass of forces and risks to contend with and accommodate, which manifests in various ways including their measurably higher levels of vigilance and a heightened threat detection system. This makes sense if you lack some of the resources (e.g. rich parents) that can buffer you from catastrophe, but may also reflect limiting beliefs. Meanwhile, middle/upper-class people are more motivated by internal states and personal goals. They are more solipsistic, on average: the issue is how they should shape the world, not how it pushes back on them. This is indicated by a higher sense of perceived control, and more confidence that good things happen to people due to their choices.

For working-class people, this approach to the world has personal downstream consequences, including a higher degree of fatalism towards serious life outcomes, such as contracting HIV. These attitudes may also contribute to greater illness susceptibility: one study found that people exposed to a cold or flu virus and held in quarantine were more likely to become ill if they considered themselves to be of a lower-status social class, even after controlling for their actual class and levels of wealth.

Meanwhile, the costs of the middle- and upper-class approach seem to be externalised onto society at large. Put bluntly, the research that Manstead amasses suggests that those of a higher socioeconomic class are less empathic, have more favourable attitudes towards greed, and are more likely to lie in negotiations. Merely thinking of yourself as higher up the status ladder leads to more selfishness. One can see how a solipsistic view could lead to these attitudes. In contrast, the contextual, real-world focus of lower-class people leads to empathy and a greater ability to see how external events may be shaping the emotions of other people. In addition, they have a greater degree of interdependent relationships and higher levels of social engagement.

One would think social mobility might bridge this psychological gap between the classes, sliding more socially-minded people into positions of influence. Higher education is meant to be one of the engines of social mobility, but people from lower social classes are more likely to report feeling like a fish out of water at university, or to perceive the opportunities available to them as second-rate. Although it ought to be possible to address these anxieties about the unfamiliar, research suggests that the culture of universities may not be designed with the working classes in mind. University deans and administrators asked to list qualities of their culture tended to endorse words more about independence – the natural state of the solipsistic upper-class person, charting their course into the future – than interdependence, which tends to be a particular priority for first-generation university goers, looking to give back to their community. Manstead notes that these same cultural factors are also likely to be present in many workforces and professions.

So “the system” has a psychological component that helps maintain the status quo. Yet there is an appetite for change: in surveys of nationally representative samples, including the wealthy, people repeatedly express a preference for a more equal society. There is even some evidence that greater equality can flip some of the negative trends described above for the middle and upper classes, leading them to be more, rather than less, generous in some economic tasks (perhaps because they feel less entitlement and sense of threat in a more equal society). And the good news is that studies show that giving people an accurate picture of their socioeconomic standing, or telling them that inequality is rising, makes them more receptive to change.

The psychology of social class: How socioeconomic status impacts thought, feelings, and behaviour

Alex Fradera (@alexfradera) is Staff Writer at BPS Research Digest


By Christian Jarrett

Studies of identical and non-identical twins indicate that our self-esteem is influenced by the genes we inherited from our parents, but also, and perhaps slightly more so, by environmental factors. And according to a new study in the Journal of Personality and Social Psychology, these environmental influences start playing a lasting role very early in life.

Ulrich Orth at the University of Bern has reported evidence that, on average, the higher the quality of a person’s home environment when they were aged between 0 and 6 years – based on warm and responsive parenting; cognitive stimulation; and a safe, organised physical environment – the higher their self-esteem many years later in adulthood.

The data come from nearly 9,000 individuals, born between 1970 and 2001, whose mothers had enrolled in the National Longitudinal Survey of Youth that started in the US in 1979.

Orth analysed the biennial interviews with the mothers that took place in their homes when their children – the participants in this study – were aged 0 to 6. This provided the measure of the quality of the participants’ early childhood home environment in terms of parental warmth and responsiveness, cognitive stimulation and the safety and organisation of the home. Orth also noted the quality of the relationship between mother and father during this period; the presence or not of the father; maternal depression; and family poverty.

Measures of the participants’ self-esteem started when they were aged just 8 and continued biennially until they were 27. The survey researchers used a measure designed for children until the participants were age 14, and thereafter switched to the well-known Rosenberg Self-esteem Scale.

The critical finding is that the quality of the home environment between the ages of 0 and 6 correlated significantly with participants’ self-reported self-esteem in later childhood, and even with their self-esteem into adulthood, although the association weakened over time. “The findings suggest that the home environment is a key factor in early childhood that influences the long-term development of self-esteem”, Orth said.

Other childhood environmental factors besides the quality of the home environment were also associated with later self-esteem. Maternal depression (associated with lower self-esteem) and better quality of parents’ relationship (associated with higher self-esteem) correlated with participants’ self-esteem in later childhood, but these correlations approached zero over the longer-term into adulthood.

In contrast, the families’ poverty (associated with lower self-esteem) and, to a lesser extent, the presence of the father (associated with higher self-esteem) continued to correlate with later self-esteem through to age 27.

When the quality of the home environment was factored out, the association between these other childhood factors (maternal depression, parents’ relationship, poverty, and father’s presence) and participants’ later self-esteem weakened substantially (though it was not entirely eradicated), suggesting that these other factors are tied to later self-esteem largely via their influence on the quality of the home environment.

Orth added a note of caution in relation to the findings for fathers’ presence. The data do not say anything about same-sex parents and children’s later self-esteem because such a family situation was too rare in the survey. It’s possible, he says, that the presence of any second parent – not necessarily a father – would have the same associations with later, higher self-esteem. In any case, the association between father’s presence and later self-esteem, though statistically significant, was only very small (over the long term, the effect sizes for poverty and especially for quality of the home environment were larger).

Why should the early family environment have such enduring associations with later self-esteem? Orth believes it is because early child-parent interactions affect a person’s pre-conscious representations of who they are and their self worth, eventually becoming deeply embodied in their self concept.

Orth says his findings have important practical implications because they suggest that interventions designed to enhance the quality of the early home environment could have lasting benefits for a child’s self-esteem. The way that the quality of the home environment mediated the role of other factors, like poverty, is particularly relevant. This suggests, Orth explains, that “…the negative effects of poverty on children’s self-esteem could be prevented, or at least reduced, by interventions that improve the quality of the home environment in families that are in poverty.”

As with all survey research of this kind, it’s important to remember that causality has not been demonstrated conclusively between the earlier measured factors and later self-esteem – it’s possible unknown factors are at play. Most obviously, a study of this kind cannot account for the role played by genes shared between parents and their children.

Perhaps a deeper question is whether higher self-esteem is a desirable outcome at all. There was a time when many psychologists and social reformers believed increasing the average self-esteem of a community would open the doors to a range of welcome outcomes, from superior mental health to career success. However, we know today that the benefits of greater self-esteem are quite modest, mostly centred on feeling happier and having more initiative, and that excessive self-esteem can even be problematic in some cases, especially if it slides into narcissism.

The family environment in early childhood has a long-term effect on self-esteem: A longitudinal study from birth to age 27 years

Christian Jarrett (@Psych_Writer) is Editor of BPS Research Digest
