Goats Galore (May 2019)


If you live in a drought-ridden, wildfire-prone area on the West Coast, you may see herds of goats chomping on dry grass and overgrown brush. This was initially surprising for many who live in urban areas, but it's become commonplace where I live. Announcements appear on local message boards, and families bring their children.


Goats Goats Goats (June 2017)


Goats are glamorous, and super popular on social media now (e.g. Instagram, more Instagram, and Twitter). Over 41 million people have watched Goats Yelling Like Humans - Super Cut Compilation on YouTube. We all know that goats have complex vocalizations, but very few of us know what they mean.


2019 Reservoir Goats - YouTube



For the health and well-being of livestock, it's advantageous to understand the emotional states conveyed by vocalizations, postures, and other behaviors. A 2015 study measured the acoustic features of different goat calls, along with their associated behavioral and physiological responses. Twenty-two adult goats were put in four situations:
(1) control (neutral)
(2) anticipation of a food reward (positive)
(3) food-related frustration (negative)
(4) social isolation (negative)
Dr. Elodie Briefer and colleagues conducted the study at a goat sanctuary in Kent, UK (Buttercups Sanctuary for Goats). The caprine participants had lived at the sanctuary for at least two years and were fully habituated to humans. Heart rate and respiration were recorded as indicators of arousal, so this dimension of emotion could be considered separately from valence (positive/negative). For conditions #1-3, the goats were tested in pairs (adjacent pens) to avoid the stress of social isolation. They were habituated to the general set-up, to the Frustration and Isolation scenarios, and to the heart rate monitor before the actual experimental sessions, which were run on separate days. Additional details are presented in the first footnote.1





Audio A1. One call produced during a negative situation (food frustration), followed by a call produced during a positive situation (food reward) by the same goat (Briefer et al., 2015).


Behavioral responses during the scenarios were timed and scored; these included tail position, locomotion, rapid head movement, ear orientation, and number of calls. The investigators recorded the calls and produced spectrograms that illustrated the frequencies of the vocal signals.



The call on the left (a) was emitted during food frustration (first call in Audio A1). The call on the right (b) was produced during food reward; it has a lower fundamental frequency (F0) and smaller frequency modulations. Modified from Fig. 2 (Briefer et al., 2015).
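The captioned F0 difference is the kind of acoustic feature the study quantified. As a toy illustration (my own sketch, not the authors' analysis pipeline), the fundamental frequency of a waveform can be estimated by autocorrelation; the synthetic 200 Hz tone below stands in for a real call:

```python
import math

def estimate_f0(samples, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate fundamental frequency (Hz) via the autocorrelation peak."""
    n = len(samples)
    lag_min = int(sample_rate / fmax)          # shortest period considered
    lag_max = min(int(sample_rate / fmin), n - 1)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max):
        # correlation of the signal with itself, shifted by `lag` samples
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# A synthetic 200 Hz "call" sampled at 8 kHz
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(2000)]
print(round(estimate_f0(tone, sr)))  # 200
```

Real call analysis (e.g., measuring frequency modulation over time) would use windowed spectrograms rather than a single whole-signal estimate, but the principle is the same.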


Both food situations (negative and positive) elicited greater arousal in the goats (measured by heart rate) than the neutral control condition and the low-arousal negative condition (social isolation). Behaviorally speaking, arousal and valence had different indicators:
During high arousal situations, goats displayed more head movements, moved more, had their ears pointed forwards more often and to the side less often, and produced more calls. ... In positive situations, as opposed to negative ones, goats had their ears oriented backwards less often and spent more time with the tail up.
Happy goats have their tails up, and do not point their ears backwards. I think I would need a lot more training to identify the range of goat emotions conveyed in my amateur video. At least I know not to stare at them, but next time I should read more about their reactions to human head and body postures.


Do goats show a left or right hemisphere advantage for vocal perception?

Now that the researchers have characterized the valence and arousal communicated by goat calls, another study asked whether goats show a left hemisphere or right hemisphere “preference” for the perception of different calls (Baciadonna et al., 2019). How is this measured, you ask?

Head-Turning in Goats and Babies

The head-turn preference paradigm is widely used in studies of speech perception in infants.

Figure from Prosody cues word order in 7-month-old bilingual infants (Gervain & Werker, 2013).




However, I don't know whether this paradigm is used to assess lateralization of speech perception in babies. In the animal literature, a similar head-orienting response is a standard experimental procedure. For now, we will have to accept the underlying assumption that orienting left or right may be an indicator of a contralateral hemispheric “preference” for that specific vocalization (i.e., orienting to the left side indicates a right hemisphere dominance, and vice versa).
The experimental procedure usually applied to test functional auditory asymmetries in response to vocalizations of conspecifics and heterospecifics is based on a major assumption (Teufel et al. 2007; Siniscalchi et al. 2008). It is assumed that when a sound is perceived simultaneously in both ears, the head orientation to either the left or right side is an indicator of the side of the hemisphere that is primarily involved in the response to the stimulus presented. There is strong evidence that this is the case in humans ... The assumption is also supported by the neuroanatomic evidence of the contralateral connection of the auditory pathways in the mammalian brain (Rogers and Andrew 2002; Ocklenburg et al. 2011).

The experimental set-up to test this in goats is shown below.



A feeding bowl (filled with a tasty mixture of dry pasta and hay) was fixed at the center of the arena opposite to the entrance. The speakers were positioned at a distance of 2 meters from the right and left side of the bowl and were aligned to it. 'X' indicates the position of the Experimenter. Modified from Fig. 2 (Baciadonna et al., 2019).


Four types of vocalizations were played over the speakers: food anticipation, food frustration, isolation, and dog bark (presumably a negative stimulus). Three examples of each vocalization were played, each from a different and unfamiliar goat (or dog).

The various theories of brain lateralization of emotion predicted different results. The right hemisphere model predicts right hemisphere dominance (head turn to the left) for high-arousal emotion regardless of valence (food anticipation, food frustration, dog barks). In contrast, the valence model predicts right hemisphere dominance for processing negative emotions (food frustration, isolation, dog barks), and left hemisphere dominance for positive emotions (food anticipation). The conspecific model predicts left hemisphere dominance for all goat calls (“familiar and non-threatening”) and right hemisphere dominance for dog barks. Finally, a general emotion model predicts right hemisphere dominance for all of the vocalizations, because they're all emotion-laden.
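To keep the four competing models straight, here is my own hedged encoding (a sketch, not a table from Baciadonna et al.) of each model's predicted dominant hemisphere per stimulus type, where "R" means right-hemisphere dominance (head turn to the left), "L" means left-hemisphere dominance, and None means no clear prediction:

```python
# Sketch of the four models' predictions (my encoding, not from the paper).
stimuli = ["food_anticipation", "food_frustration", "isolation", "dog_bark"]

predictions = {
    # Right-hemisphere model: R for high-arousal stimuli, regardless of valence
    "right_hemisphere": {"food_anticipation": "R", "food_frustration": "R",
                         "isolation": None, "dog_bark": "R"},
    # Valence model: R for negative stimuli, L for positive
    "valence": {"food_anticipation": "L", "food_frustration": "R",
                "isolation": "R", "dog_bark": "R"},
    # Conspecific model: L for all goat calls, R for dog barks
    "conspecific": {s: "R" if s == "dog_bark" else "L" for s in stimuli},
    # General emotion model: R for every emotion-laden vocalization
    "general_emotion": {s: "R" for s in stimuli},
}

for model, pred in predictions.items():
    print(f"{model:16s}", [pred[s] for s in stimuli])
```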

The results sort of supported the conspecific model (according to the authors), if we now accept that dog barks are actually “familiar and non-threatening” [if I understand correctly]. The head-orienting response did not differ significantly between the four vocalizations, and there was a slight bias for head orienting to the right (p=.046 vs. chance level), when collapsed across all stimulus types.2
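The marginal p-value invites a peek at the underlying test. As a hedged sketch, an exact two-sided binomial test of right-vs-left head turns against chance (p = 0.5) looks like this; the counts used are hypothetical, NOT the paper's data:

```python
from math import comb

def binomial_two_sided(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of all outcomes
    at most as likely as the observed count k."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    obs = probs[k]
    return sum(q for q in probs if q <= obs + 1e-12)

# e.g., 15 right turns out of 20 trials (hypothetical counts)
print(round(binomial_two_sided(15, 20), 3))  # 0.041
```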

The time to resume feeding after hearing a vocalization (a measure of fear) didn't differ between goat calls and dog barks, so the authors concluded that “goats at our study site may have been habituated to dog barks and that they did not perceive dog barks as a serious threat.” However, if a Siberian Husky breaks free of its owner and runs around a fenced-in rent-a-goat herd, chaos may ensue.


Reservoir Goats see a Siberian Husky - YouTube



Footnotes

1 Methodological details:
“(1) During the control situation, goats were left unmanipulated in a pen with hay (‘Control’). This situation did not elicit any calls, but allowed us to obtain baseline values for physiological and behavioural data. (2) The positive situation was the anticipation of an attractive food reward that the goats had been trained to receive during 3 days of habituation (‘Feeding’). (3) After goats had been tested with the Feeding situation, they were tested with a food frustration situation. This consisted of giving food to only one of the goats in the pair and not to the subject (‘Frustration’). (4) The second negative situation was brief isolation, out of sight from conspecifics behind a hedge. For this situation, goats were tested alone and not in a pair (‘Isolation’).”

2 The replication police will certainly go after such a marginal significance level, but I would like to see them organize a “Many Goats in Many Goat Sanctuaries” replication project.


References

Baciadonna L, Nawroth C, Briefer EF, McElligott AG. (2019). Perceptual lateralization of vocal stimuli in goats. Curr Zool. 65(1):67-74. [PDF]

Briefer EF, Tettamanti F, McElligott AG. (2015). Emotions in goats: mapping physiological, behavioural and vocal profiles. Animal Behaviour 99:131-43. [PDF]

How smart are farm animals? And why should we care?

Short answers: A lot! And because it heavily affects how we treat, and how we should treat, these animals.

[Paper in a nutshell - a thread] https://t.co/pwDrrSmRFr
— Christian Nawroth (@GoatsThatStare) February 18, 2019


I have secretly obtained a large cache of files from Johnson & Johnson, makers of TYLENOL®, the ubiquitous pain relief medication (generic name: acetaminophen in North America, paracetamol elsewhere). The damaging information contained in these documents has been suppressed by the pharmaceutical giant, for reasons that will become obvious in a moment.1

After a massive upload of materials to Wikileaks, it can now be revealed that Tylenol not only...
...but along with the good comes the bad. Acetaminophen (paracetamol) also has ghastly negative effects that tear at the very fabric of society. These OTC tablets...

In a 2018 review of the literature, Ratner and colleagues warned:
“In many ways, the reviewed findings are alarming. Consumers assume that when they take an over-the-counter pain medication, it will relieve their physical symptoms, but they do not anticipate broader psychological effects.”

In the latest installment of this alarmist saga, we learn that acetaminophen blunts positive empathy, i.e. the capacity to appreciate and identify with the positive emotions of others (Mischkowski et al., 2019). I'll discuss those findings another time.

But now, let's evaluate the entire TYLENOL® oeuvre by taking a step back and examining the plausibility of the published claims. To summarize, one of the most common over-the-counter, non-narcotic, non-NSAID pain-relieving medications in existence supposedly alleviates the personal experience of hurt feelings and social pain and heartache (positive outcomes). At the same time, TYLENOL® blunts the phenomenological experiences of positive emotion and diminishes empathy for other people's experiences, both good and bad (negative outcomes). Published articles have reported that many of these effects can be observed after ONE REGULAR DOSE of paracetamol. These findings are based on how undergraduates judge a series of hypothetical stories. One major problem (which is not specific to The Paracetamol Papers) concerns the ecological validity of laboratory tasks as measures of the cognitive and emotional constructs of interest. This issue is critical, but outside the main scope of our discussion today. More to the point, an experimental manipulation may cause a statistically significant shift in a variable of interest, but ultimately we have to decide whether a circumscribed finding in the lab has broader implications for society at large.


Why TYLENOL® ?

Another puzzling element is, why choose acetaminophen as the exclusive pain medication of interest? Its mechanisms of action for relieving fever, headache, and other pains are unclear. Thus, the authors don't have a specific, principled reason for choosing TYLENOL® over Advil (ibuprofen) or aspirin. Presumably, the effects should generalize, but that doesn't seem to be the case. For instance, ibuprofen actually Increases Social Pain in men.

The analgesic effects of acetaminophen are mediated by a complex series of cellular mechanisms (Mallet et al., 2017). One proposed mechanism involves descending serotonergic bulbospinal pathways from the brainstem to the spinal cord. This isn't exactly Prozac territory, so the analogy between Tylenol and SSRI antidepressants isn't apt. The capsaicin receptor TRPV1 and the Cav3.2 calcium channel might also be part of the action (Mallet et al., 2017). A recently recognized player is the CB1 cannabinoid receptor. AM404, a metabolite of acetaminophen, indirectly activates CB1 by inhibiting the breakdown and reuptake of anandamide, a naturally occurring cannabinoid in the brain (Mallet et al., 2017).



Speaking of cannabinoids, cannabidiol (CBD) — the non-intoxicating cousin of THC — has a high profile now because of its soaring popularity for many ailments. Ironically, CBD has a very low affinity for CB1 and CB2 receptors and may act instead via serotonergic 5-HT1A receptors [PDF], as a modulator of μ- and δ-opioid receptors, and as an antagonist and inverse agonist at several G protein-coupled receptors. Most CBD use seems to be in the non-therapeutic (placebo) range, because the effective dose for, let's say, anxiety is 10-20 times higher than the average commercial product. You'd have to eat 3-6 bags of cranberry gummies for 285-570 mg of CBD (close to the 300-600 mg recommended dose). Unfortunately, you would also ingest 15-30 mg of THC, which would be quite intoxicating.
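The bag arithmetic above is easy to make explicit. A back-of-the-envelope sketch, where the per-bag contents (~95 mg CBD, ~5 mg THC) are my own assumption, implied by the post's 3-6 bag and 15-30 mg ranges:

```python
# Assumed per-bag contents, implied by "3-6 bags -> 285-570 mg CBD, 15-30 mg THC"
cbd_per_bag, thc_per_bag = 95, 5           # mg per bag (assumption)

for target in (300, 600):                  # mg CBD: the anxiolytic range cited
    bags = target / cbd_per_bag
    thc = bags * thc_per_bag               # THC that comes along for the ride
    print(f"{target} mg CBD ~= {bags:.1f} bags -> ~{thc:.0f} mg THC")
```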

"the typical cup of CBD-infused coffee that you buy in your local trendy coffee shop will have, on average, 5-10 mg of CBD, which is nowhere near the therapeutic dosage [at least 300 mg] required for it to have any kind of anxiolytic effect" https://t.co/trytic2nf2
— sarcastic_f (@sarcastic_f) April 26, 2019


Words Have Meanings

If acetaminophen were so effective in “mending broken hearts”, “easing heartaches”, and providing a “cure for a broken heart”, we would be a society of perpetually happy automatons, wiping away the suffering of breakup and divorce with a mere OTC tablet. We'd have Tylenol epidemics and Advil epidemics to rival the scourge of the present Opioid Epidemic.

Meanwhile, social and political discourse in the US has reached a new low. Ironically, the paracetamol “blissed-out” population is enraged because they can't identify with the feelings or opinions of the masses who are 'different' than they are. Somehow, I don't think it's from taking too much Tylenol. A large-scale global survey could put that thought to rest for good.




Footnotes

1 This is not true, of course, I was only kidding. All of the information presented here is publicly available in peer-reviewed journal articles and published press reports.

2 except for when it doesn’t – “In contrast, effects on perceived positivity of the described experiences or perceived pleasure in scenario protagonists were not significant” (Mischkowski et al., 2019).

3 Yes, I made this up too. It is entirely fictitious; no one has ever claimed this, to the best of my knowledge.


References

Mallet C, Eschalier A, Daulhac L. (2017). Paracetamol: update on its analgesic mechanism of action. In: Pain Relief – From Analgesics to Alternative Therapies.

Mischkowski D, Crocker J, Way BM. (2019). A Social Analgesic? Acetaminophen (Paracetamol) Reduces Positive Empathy. Front Psychol. 10:538.

Bravado SPRAVATO™ (esketamine)
© Janssen Pharmaceuticals, Inc. 2019.


Ketamine is the miracle drug that cures depression:
“Recent studies report what is arguably the most important discovery in half a century: the therapeutic agent ketamine that produces rapid (within hours) antidepressant actions in treatment-resistant depressed patients (4, 5). Notably, the rapid antidepressant actions of ketamine are associated with fast induction of synaptogenesis in rodents and reversal of the atrophy caused by chronic stress (6, 7).”

– Duman & Aghajanian (2012). Synaptic Dysfunction in Depression: Potential Therapeutic Targets. Science 338: 68-72.

Beware the risks of ketamine:
“While ketamine may be beneficial to some patients with mood disorders, it is important to consider the limitations of the available data and the potential risk associated with the drug when considering the treatment option.”

– Sanacora et al. (2017). A Consensus Statement on the Use of Ketamine in the Treatment of Mood Disorders. JAMA Psychiatry 74: 399-405.

Ketamine, dark and light:
Is ketamine a destructive club drug that damages the brain and bladder? With psychosis-like effects widely used as a model of schizophrenia? Or is ketamine an exciting new antidepressant, the “most important discovery in half a century”?

For years, I've been utterly fascinated by these separate strands of research that rarely (if ever) intersect. Why is that? Because there's no such thing as “one receptor, one behavior.” And because like most scientific endeavors, neuro-pharmacology/psychiatry research is highly specialized, with experts in one microfield ignoring the literature produced by another...

– The Neurocritic (2015). On the Long Way Down: The Neurophenomenology of Ketamine

Confused?? You're not alone.


FDA Approval

The animal tranquilizer and club drug ketamine, now known as a “miraculous” cure for treatment-resistant depression, has been approved by the FDA in a nasal spray formulation. No more messy IV infusions at shady clinics.

Here's a key Twitter thread that marks the occasion:
The pricing for esketamine is unconscionable. It’s just one stereoisomer of ketamine, which btw works fine. https://t.co/iwIdEAzky3
— Drug Monkey (@drugmonkeyblog) March 6, 2019


How does it work?

A new paper in Science (Moda-Sava et al., 2019) touts the importance of spine formation and synaptogenesis (basically, the remodeling of synapses in microcircuits) in prefrontal cortex, a region important for the top-down control of behavior. Specifically, ketamine and its downstream actions are involved in the creation of new spines on dendrites, and in the formation of new synapses. But it turns out this is NOT linked to the rapid improvement in 'depressive' symptoms observed in a mouse model.



So I think we're still in the dark about why some humans can show immediate (albeit short-lived) relief from their unrelenting depression symptoms after ketamine infusion. Moda-Sava et al. say:
Ketamine’s acute effects on depression-related behavior and circuit function occur rapidly and precede the onset of spine formation, which in turn suggests that spine remodeling may be an activity-dependent adaptation to changes in circuit function (83, 88) and is consistent with theoretical models implicating synaptic homeostasis mechanisms in depression and the stress response (89, 90). Although not required for inducing ketamine’s effects acutely, these newly formed spines are critical for sustaining the antidepressant effect over time.

But the problem is, depressed humans require constant treatment with ketamine to maintain any semblance of an effective clinical response, because the beneficial effect is fleeting. If we accept the possibility that ketamine acts through the mTOR signaling pathway, detrimental effects on the brain (and non-brain systems) may occur in the long run (e.g., bladder damage, various cancers, psychosis, etc.).

But let's stay isolated in our silos, with our heads in the sand.


Thanks to @o_ceifero for alerting me to this study.

Further Reading

Ketamine for Depression: Yay or Neigh?

Warning about Ketamine in the American Journal of Psychiatry

Chronic Ketamine for Depression: An Unethical Case Study?

still more on ketamine for depression

Update on Ketamine in Palliative Care Settings

Ketamine - Magic Antidepressant, or Expensive Illusion? - by Neuroskeptic

Fighting Depression with Special K - by Scicurious

On the Long Way Down: The Neurophenomenology of Ketamine


Reference

Moda-Sava RN, Murdock MH, Parekh PK, Fetcho RN, Huang BS, Huynh TN, Witztum J, Shaver DC, Rosenthal DL, Alway EJ, Lopez K, Meng Y, Nellissen L, Grosenick L, Milner TA, Deisseroth K, Bito H, Kasai H, Liston C. (2019). Sustained rescue of prefrontal circuit dysfunction by antidepressant-induced spine formation. Science 364(6436). pii: eaat8078.

People like conflict (the interpersonal kind, not BLUE).1 Or at least, they like scientific debate at conferences. Panel discussions that are too harmonious seem to be divisive. Some people will say, “well, now THAT wasn't very controversial.” But as I mentioned last time, one highlight of the 2019 Cognitive Neuroscience Society Annual Meeting was a Symposium organized by Dr. David Poeppel.2

Special Session - The Relation Between Psychology and Neuroscience, David Poeppel, Organizer,  Grand Ballroom
Whether we study single cells, measure populations of neurons, characterize anatomical structure, or quantify BOLD, whether we collect reaction times or construct computational models, it is a presupposition of our field that we strive to bridge the neurosciences and the psychological/cognitive sciences. Our tools provide us with ever-greater spatial resolution and ideal temporal resolution. But do we have the right conceptual resolution? This conversation focuses on how we are doing with this challenge, whether we have examples of successful linking hypotheses between psychological and neurobiological accounts, whether we are missing important ideas or tools, and where we might go or should go, if all goes well. The conversation, in other words, examines the very core of cognitive neuroscience.

Conversation. Not debate. So first, let me summarize the conversation. Then I'll get back to the merits (demerits) of debate. In brief, many of the BIG IDEAS motifs of 2017 were revisited...
  • David Marr and the importance of work at all levels of analysis 
  • What are the “laws” that bridge these levels of analysis?
  • “Emergent properties” – a unique higher-level entity (e.g., consciousness, a flock of birds) emerges from lower-level activity (e.g., patterns of neuronal firing, the flight of individual birds)... the whole is greater than the sum of its parts
  • Generative Models – formal models that make computational predictions
...with interspersed meta-commentary on replication, publishing, and Advice to Young Neuroscientists. Without further ado:

Dr. David Poeppel – Introductory Remarks that examined the very core of cognitive neuroscience (i.e., “we have to face the music”).
  • the conceptual basis of cognitive neuroscience shouldn't be correlation 
For example, fronto-parietal network connectivity (as determined by resting state fMRI) is associated with some cognitive function, but that doesn't mean it causes or explains the behavior. We all know this, and we all know that “we must want more!” But we haven't the vaguest idea of how to relate complex psychological constructs such as attention, volition, and emotion to ongoing biological processes involving calcium channels, dendrites, and glutamatergic synapses.
  • but what if the psychological and the biological are categorically dissimilar??
In their 2003 book, Philosophical Foundations of Neuroscience, Bennett and Hacker warned that cognitive neuroscientists make the cardinal error of “...commit[ting] the mereological fallacy, the tendency to ascribe to the brain psychological concepts that only make sense when ascribed to whole animals.”
For the characteristic form of explanation in contemporary cognitive neuroscience consists in ascribing psychological attributes to the brain and its parts in order to explain the possession of psychological attributes and the exercise (and deficiencies in the exercise) of cognitive powers by human beings. (p. 3)

On that optimistic note, the four panelists gave their introductory remarks.

(1) Dr. Lila Davachi asked, “what is the value of the work we do?” Uh, well, that's a difficult question. Are we improving society in some way? Adding to a collective body of knowledge that may (or may not) be the key to explaining behavior and curing disease? Although still difficult, Davachi posed an easier question, “what are your goals?” To describe behavior, predict behavior (correlation), explain behavior (causation), change behavior (manipulation)? But “what counts as an explanation?” I don't think anyone really answered that question. Instead she mentioned the recurring themes of levels of analysis (without invoking Marr by name), emergent properties (the flock of birds analogy), and bridging laws (that link levels of analysis). The correct level of analysis is/are the one(s) that advance your goals. But what to do about “level chauvinism” in contemporary neuroscience? This question was raised again and again.

(2) Dr. Jennifer Groh jumped right out of the gate with this motif. There are competing narratives in neuroscience we can call the electrode level (recording from neurons) vs. the neuroimaging level (recording large-scale brain activations or “network” interactions based on an indirect measure of neural activity). They make different assumptions about what is significant or worth studying. I found this interesting, since her lab is the only one that records from actual neurons. But there are ever more reductionist scientists who always throw stones at those above them. Neurobiologists (at the electrode level and below) are operating at ever more granular levels of detail, walking away from cognitive neuroscience entirely (who wants to be a dualist, anyway?). I knew exactly where she was going with this: the field is being driven by techniques, doing experiments merely because you can (cough — OPTOGENETICS — cough). Speaking for myself, however, the fact that neurobiologists can control mouse behavior by manipulating highly specific populations of cells raises the specter of insecurity... certain areas of research might not be considered “neuroscience” any more by a bulk of practitioners in the field (just attend the Society for Neuroscience annual meeting).

(3) Dr. Catherine Hartley continued with the recurring theme that we need both prediction and explanation to reach our ultimate goal of understanding behavior. Is a prediction system enough? No, we must know how the black box functions by studying “latent processes” such as representation and computation. But what if we're wrong about representations, I thought? The view of @PsychScientists immediately came to mind. Sorry to interrupt Dr. Hartley, but here's Golonka and Wilson in Ecological Representations:
Mainstream cognitive science and neuroscience both rely heavily on the notion of representation in order to explain the full range of our behavioral repertoire. The relevant feature of representation is its ability to designate (stand in for) spatially or temporally distant properties ... While representational theories are potentially a powerful foundation for a good cognitive theory, problems such as grounding and system-detectable error remain unsolved. For these and other reasons, ecological explanations reject the need for representations and do not treat the nervous system as doing any mediating work. However, this has left us without a straight-forward vocabulary to engage with so-called 'representation-hungry' problems or the role of the nervous system in cognition.

Then they invoke James J Gibson's ecological information functions. But I can already hear Poeppel's colleague @GregoryHickok and others on Twitter debating with @PsychScientists. Oh. Wait. Debate.

Returning to The Conversation that I so rudely interrupted, Dr. Hartley gave some excellent examples of theories that link psychology and neuroscience. The trichromatic theory of color vision — the finding that three independent channels convey color information — was based on psychophysics in the early-mid 1800s (Young–Helmholtz theory). This was over a century before the discovery of cones in the retina, which are sensitive to three different wavelengths. She also mentioned the more frequently used examples of Tolman's cognitive maps (which predated The Hippocampus as a Cognitive Map by 30 years) and error-driven reinforcement learning (Bush–Mosteller and Rescorla–Wagner, both of which predate knowledge of dopamine neurons). To generate good linking hypotheses in the present, we need to construct formal models that make quantitative predictions (generative models).
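Error-driven learning of the Rescorla–Wagner type is simple enough to state in a few lines. A minimal sketch (illustrative parameter values, not tied to any dataset): the associative strength V moves toward the obtained reward by a fraction (the learning rate) of the prediction error.

```python
# Minimal Rescorla-Wagner update: learning is driven by prediction error.
def rescorla_wagner(v, reward, alpha=0.1):
    """One trial: move associative strength v toward the reward."""
    prediction_error = reward - v          # surprise: obtained minus expected
    return v + alpha * prediction_error

v = 0.0                                    # start with no expectation
for _ in range(50):                        # 50 consecutive rewarded trials
    v = rescorla_wagner(v, reward=1.0)
print(round(v, 3))                         # 0.995 -- learning saturates near 1
```

The same error term is what midbrain dopamine neurons were later found to signal, which is why this pre-neural model is such a celebrated linking hypothesis.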

(4) Dr. Sharon Thompson-Schill gave a brief introduction with no slides, which is good because this post has gotten very long. For this reason, I won't cover the panel discussion and the Q&A period, which continued the same themes outlined above and expanded on “predictivism” (predictive chauvinism and data-driven neuroscience) and raised new points like the value (or not) of introspection in science. When the Cognitive Neuroscience Society updates their YouTube channel, I'll let you know. Another source is the excellent live tweeting of @VukovicNikola. But to wrap up, Dr. Thompson-Schill asked members of the audience whether they consider themselves psychologists or neuroscientists. Most identified as neuroscientists (which is a relative term, I think). Although more people will talk to you on a plane if you say you're a psychologist, “neuroscience is easy, psychology is hard,” a surprising take-home message.


Debating Debates

I've actually wanted to see more debating at the CNS meeting. For instance, the Society for the Neurobiology of Language (SNL) often features a lively debate at their conferences.3 Several examples are listed below.

2016:
Debate: The Consequences of Bilingualism for Cognitive and Neural Function
Ellen Bialystok & Manuel Carreiras

2014:
What counts as neurobiology of language – a debate
Steve Small, Angela Friederici

2013: Panel Discussions
The role of semantic information in reading aloud
Max Coltheart vs Mark Seidenberg

2012: Panel Discussions
What is the role of the insula in speech and language?
Nina F. Dronkers vs Julius Fridriksson


This one-on-one format has been very rare at CNS. Last year we saw a panel of four prominent neuroscientists address/debate...
Big Theory versus Big Data: What Will Solve the Big Problems in Cognitive Neuroscience?

— CNS News (@CogNeuroNews) March 24, 2018

Added-value entertainment was provided by Dr. Gary Marcus, which speaks to the issue of combative personalities dominating the scene.4


Gary Marcus talking over Jack Gallant. Eve Marder is out of the frame.
image by @CogNeuroNews


I'm old enough to remember the most volatile debate in CNS history, which was held (sadly) at the New York Marriott World Trade Center Hotel in 2001. Dr. Nancy Kanwisher and Dr. Isabel Gauthier debated whether face recognition (and activation of the fusiform face area) is a 'special' example of domain specificity (and perhaps an innate ability), or a manifestation of plasticity due to our exceptional expertise at recognizing faces:
A Face-Off on Brain Studies / How we recognize people and objects is a matter of debate
. . .

At the Cognitive Neuroscience Society meeting in Manhattan last week, a panel of scientists on both sides of the debate presented their arguments. On one side is Nancy Kanwisher of MIT, who first proposed that the fusiform gyrus was specifically designed to recognize faces–and faces alone–based on her findings using a magnetic resonance imaging device. Then, Isabel Gauthier, a neuroscientist at Vanderbilt, talked about her research, showing that the fusiform gyrus lights up when looking at many different kinds of objects people are skilled at recognizing.
Kudos to Newsday for keeping this article on their site after all these years.


Footnotes

1 This is the color-word Stroop task: name the font color, rather than read the word. The word BLUE printed in red elicits conflict between the overlearned response (reading the word, "blue") and the task requirement (naming the color, "red").

2 aka the now-obligatory David Poeppel session on BIG STUFF. See these posts:
3 Let me now get on my soapbox to exhort the conference organizers to keep better online archives — with stable URLs — so I don't have to hunt through archive.org to find links to past meetings.

4 Although this is really tangential, I'm reminded of the Democratic Party presidential contenders in the US. Who deserves more coverage, Beto O'Rourke or Elizabeth Warren? Bernie Sanders or Kamala Harris?


It's March, an odd-numbered year, must mean.... it's time for the Cognitive Neuroscience Society Annual Meeting to be in San Francisco!

I only started looking at the schedule yesterday and noticed the now-obligatory David Poeppel session on BIG stuff 1 on Saturday (March 23, 2019):

Special Session - The Relation Between Psychology and Neuroscience, David Poeppel, Organizer,  Grand Ballroom

Then I clicked on the link and saw a rare occurrence: an all-female slate of speakers!



Whether we study single cells, measure populations of neurons, characterize anatomical structure, or quantify BOLD, whether we collect reaction times or construct computational models, it is a presupposition of our field that we strive to bridge the neurosciences and the psychological/cognitive sciences. Our tools provide us with ever-greater spatial resolution and ideal temporal resolution. But do we have the right conceptual resolution? This conversation focuses on how we are doing with this challenge, whether we have examples of successful linking hypotheses between psychological and neurobiological accounts, whether we are missing important ideas or tools, and where we might go or should go, if all goes well. The conversation, in other words, examines the very core of cognitive neuroscience.

Also on the schedule tomorrow is the public lecture and keynote address by Matt Walker — Why Sleep?
Can you recall the last time you woke up without an alarm clock feeling refreshed, not needing caffeine? If the answer is “no,” you are not alone. Two-thirds of adults fail to obtain the recommended 8 hours of nightly sleep. I doubt you are surprised by the answer to this question, but you may be surprised by the consequences. This talk will describe not only the good things that happen when you get sleep, but the alarmingly bad things that happen when you don’t get enough. The presentation will focus on the brain (learning, memory, aging, Alzheimer’s disease, education), but further highlight disease-related consequences in the body (cancer, diabetes, cardiovascular disease). The take-home: sleep is the single most effective thing we can do to reset the health of our brains and bodies.

Why sleep, indeed.

Meanwhile, Foals are playing tonight at The Fox Theater in Oakland. Tickets are still available.


FOALS - Exits [Official Music Video] - YouTube


view video on YouTube.


Footnote

1 See these posts:

The Big Ideas in Cognitive Neuroscience, Explained #CNS2017

Big Theory, Big Data, and Big Worries in Cognitive Neuroscience #CNS2018

Mood Monitoring via Invasive Brain Recordings or Smartphone Swipes: Which Would You Choose?


That's not really a fair question. The ultimate goal of invasive recordings is one of direct intervention, by delivering targeted brain stimulation as a treatment. But first you have to establish a firm relationship between neural activity and mood. Well, um, smartphone swipes (the way you interact with your phone) aim to establish a firm relationship between your “digital phenotype” and your mood. And then refer you to an app for a precision intervention. Or to your therapist / psychiatrist, who has to buy into use of the digital phenotyping software.

On the invasive side of the question, DARPA has invested heavily in deep brain stimulation (DBS) as a treatment for many disorders – Post-Traumatic Stress Disorder (PTSD), Major Depression, Borderline Personality Disorder, General Anxiety Disorder, Traumatic Brain Injury, Substance Abuse/Addiction, Fibromyalgia/Chronic Pain, and memory loss. None of the work has led to effective treatments (yet?), but the DARPA research model has established large centers of collaborating scientists who record from the brains of epilepsy patients. And a lot of very impressive papers have emerged – some promising, others not so much.

One recent study (Kirkby et al., 2018) used machine learning to discover brain networks that encode variations in self-reported mood. The metric was coherence between amygdala and hippocampal activity in the β-frequency band (13-30 Hz). I can't do justice to their work in the context of this post, but I'll let the authors' graphical abstract speak for itself (and leave questions like, why did it only work in 13 out of 21 of your participants? for later).
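For readers curious what a β-band coherence metric looks like in practice, here is a minimal sketch — not the authors' actual pipeline. The simulated signals, sampling rate, and window length are all made-up assumptions; the point is just that two traces sharing a β-frequency component yield high coherence in the 13-30 Hz band.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical illustration: two simulated local field potential traces
# (standing in for amygdala and hippocampus) sharing a 20 Hz (beta) component.
fs = 1000                                     # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)                  # 10 seconds of "data"
shared = np.sin(2 * np.pi * 20 * t)           # common beta-band signal
amygdala = shared + rng.normal(0, 1, t.size)  # independent noise per "region"
hippocampus = shared + rng.normal(0, 1, t.size)

# Welch-style magnitude-squared coherence, then average over the beta band
f, Cxy = coherence(amygdala, hippocampus, fs=fs, nperseg=1024)
beta = (f >= 13) & (f <= 30)
beta_coherence = Cxy[beta].mean()             # one scalar per signal pair
print(beta_coherence)
```

A metric like this, computed per time window and per electrode pair, is the kind of feature a machine-learning model could then relate to self-reported mood.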




Mindstrong

Then along comes a startup tech company called Mindstrong, whose Co-Founder and President is none other than Dr. Thomas Insel, former director of NIMH, and one of the chief architects1 of the Research Domain Criteria (RDoC), “a research framework for new approaches to investigating mental disorders” that eschews the DSM-5 diagnostic bible. The Appendix chronicles the timeline of Dr. Insel's evolution from “mindless” RDoC champion to “brainless” wearables/smartphone tech proselytizer.2


From Wired:
. . .

At Mindstrong, one of the first tests of the [“digital phenotype”] concept will be a study of how 600 people use their mobile phones, attempting to correlate keyboard use patterns with outcomes like depression, psychosis, or mania. “The complication is developing the behavioral features that are actionable and informative,” Insel says. “Looking at speed, looking at latency or keystrokes, looking at error—all of those kinds of things could prove to be interesting.”

Curiously, in their list of digital biomarkers, they differentiate between executive function and cognitive control — although their definitions were overlapping (see my previous post, Is executive function different from cognitive control? The results of an informal poll).
Mindstrong tracks five digital biomarkers associated with brain health: Executive function, cognitive control, working memory, processing speed, and emotional valence. These biomarkers are generated from patterns in smartphone use such as swipes, taps, and other touchscreen activities, and are scientifically validated to provide measurements of cognition and mood.

Whither RDoC?

NIMH established a mandate requiring that all clinical trials should postulate a neural circuit “mechanism” that would be responsible for any efficacious response. Thus, clinical investigators were forced to make up simplistic biological explanations for their psychosocial interventions:

“I hypothesize that the circuit mechanism for my elaborate new psychotherapy protocol which eliminates fear memories (e.g., specific phobias, PTSD) is implemented by down-regulation of amygdala activity while participants view pictures of fearful faces using the Hariri task.”



[a fictitious example]


I'm including a substantial portion of the February 27, 2014 text here because it's important.
NIMH is making three important changes to how we will fund clinical trials.

First, future trials will follow an experimental medicine approach in which interventions serve not only as potential treatments, but as probes to generate information about the mechanisms underlying a disorder. Trial proposals will need to identify a target or mediator; a positive result will require not only that an intervention ameliorated a symptom, but that it had a demonstrable effect on a target, such as a neural pathway implicated in the disorder or a key cognitive operation. While experimental medicine has become an accepted approach for drug development, we believe it is equally important for the development of psychosocial treatments. It offers us a way to understand the mechanisms by which these treatments are leading to clinical change.

OK, so the target could be a key cognitive operation. But let's say your intervention is a Housing First initiative in homeless individuals with severe mental illness and co-morbid substance abuse. Your manipulation is to compare quality of life outcomes for Housing First with Assertive Community Treatment vs. Congregate Housing with on-site supports vs. treatment as usual. What is the key cognitive operation here? Fortunately, this project was funded by the Canadian government and did not need to compete for NIMH funding.

I think my ultimate issue is one of fundamental fairness. Is it OK to skate away from the wreckage and profit by making millions of dollars? From Wired:
“I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded at getting lots of really cool papers published by cool scientists at fairly large costs—I think $20 billion—I don’t think we moved the needle in reducing suicide, reducing hospitalizations, improving recovery for the tens of millions of people who have mental illness,” Insel says. “I hold myself accountable for that.”

But how? You've admitted to spending $20 billion on cool projects and cool papers and cool scientists who do basic research. This has great value. But the big mistakes were an unrealistic promise of treatments and cures, and the charade of forcing scientists who study C. elegans to explain how they're going to cure psychiatric disorders.


Footnotes

1 Dr. Bruce Cuthbert was especially instrumental, as well as a large panel of experts. But since this post is about digital biomarkers, the former director of NIMH is the focus of RDoC here.

2 The Insel archives of the late Dr. Mickey Nardo in his prolific blog, 1boringoldman.com, are a must-read. I also wish the late Dr. Barney Carroll was still here to issue his trenchant remarks and trademark witticisms.


Reference

Kirkby LA, Luongo FJ, Lee MB, Nahum M, Van Vleet TM, Rao VR, Dawes HE, Chang EF, Sohal VS. (2018). An Amygdala-Hippocampus Subnetwork that Encodes Variation in Human Mood. Cell 175(6):1688-1700.e14.


Additional Reading - Digital Phenotyping

Jain SH, Powers BW, Hawkins JB, Brownstein JS. (2015). The digital phenotype. Nat Biotechnol. 33(5):462-3. [usage of the term here means data mining of content such as Twitter and Google searches, rather than physical interactions with a smartphone]

Insel TR. (2017). Digital Phenotyping: Technology for a New Science of Behavior. JAMA 318(13):1215-1216. [smartphone swipes, NOT content: “Who would have believed that patterns of typing and scrolling could reveal individual fingerprints of performance, capturing our neurocognitive function continuously in the real world?”]

Insel TR. (2017). Join the disruptors of health science. Nature 551(7678):23-26. [conversion to the SF Bay Area/Silicon Valley mindset]. Key quote:
“But what struck me most on moving from the Beltway to the Bay Area was that, unlike pharma and biotech, tech companies enter biomedical and health research with a pedigree of software research and development, and a confident, even cocky, spirit of disruption and innovation. They have grown by learning how to move quickly from concept to execution. Software development may generate a minimally viable product within weeks. That product can be refined through ‘dogfooding’ (testing it on a few hundred employees, families or friends) in a month, then released to thousands of users for rapid iterative improvement.”
[is ‘dogfooding’ a real term?? if that's how you're going to test technology designed to help people with severe mental illnesses — without the input of the consumers themselves — YOU WILL BE DOOMED TO FAILURE.]

Philip P, De-Sevin E, Micoulaud-Franchi JA. (2018). Technology as a Tool for Mental Disorders. JAMA 319(5):504.

Insel TR. (2018). Technology as a Tool for Mental Disorders-Reply. JAMA  319(5):504.

Insel TR. (2018). Digital phenotyping: a global tool for psychiatry. World Psychiatry 17(3):276-277.


Appendix - a selective history of RDoC publications

Post-NIMH Transition (articles start appearing less than a month later)
It ended in a tie!
Is executive function different from cognitive control?
— sarcastic_f (@sarcastic_f) January 30, 2019


Granted, this is a small and biased sample, and I don't have a large number of followers. The answers might have been different had @russpoldrack (Yes in a landslide) or @Neuro_Skeptic (n=12,458 plus 598 wacky write-in votes) posed the question.

Before the poll I facetiously asked:
Other hypothetical questions (that you don't need to answer) might include:
  • Are you a clinical neuropsychologist? 
  • Do you use computational modeling in your work?1
  • What is your age?
Here, I was thinking:
  • Clinical neuropsychologists would say No
  • Computational researchers would say Yes
  • On average, older people would be more likely to say No than younger people

After the poll I asked, “So what ARE the differences between executive function and cognitive control? Or are the terms arbitrary, and their usage a matter of context / subfield?”

No one wanted to expound on the differences between the terms.2
I answered No, because I think the terms are arbitrary, and their usage a matter of context and subfield. Not that Wikipedia is the ultimate authority, but I was amused to see this:
Executive functions
From Wikipedia, the free encyclopedia
  (Redirected from Cognitive control)
Executive functions (collectively referred to as executive function and cognitive control) are a set of cognitive processes that are necessary for the cognitive control of behavior: selecting and successfully monitoring behaviors that facilitate the attainment of chosen goals. Executive functions include basic cognitive processes such as attentional control, cognitive inhibition, inhibitory control, working memory, and cognitive flexibility.

Nature said this:
Cognitive control
Cognitive control is the process by which goals or plans influence behaviour. Also called executive control, this process can inhibit automatic responses and influence working memory. Cognitive control supports flexible, adaptive responses and complex goal-directed thought. Some disorders, such as schizophrenia and ADHD, are associated with impairments of executive function.

They're using the terms interchangeably! The terms cognitive control, executive control, executive function, and executive control functions are not well-differentiated, except in specific contexts. For instance, the Carter Lab definition below sounds specific at first, but then branches out to encompass many “executive functions” not named as such.
Cognitive Control
"Cognitive control" is a construct from contemporary cognitive neuroscience that refers to processes that allow information processing and behavior to vary adaptively from moment to moment depending on current goals, rather than remaining rigid and inflexible. Cognitive control processes include a broad class of mental operations including goal or context representation and maintenance, and strategic processes such as attention allocation and stimulus-response mapping. Cognitive control is associated with a wide range of processes and is not restricted to a particular cognitive domain. For example, the presence of impairments in cognitive control functions may be associated with specific deficits in attention, memory, language comprehension and emotional processing. ...

Actually, the term Cognitive Control dates back to the 1920s, if not further. Two quick examples.

(1) When talking about Charles Spearman and his theory of intelligence and his three qualitative principles, Charles S. Slocombe (1928) said:
“To these he adds five quantitative principles, cognitive control (attention), fatigue, retentivity, constancy of output, and primordial potency...”
Simple! Cognitive Control = Attention.

(2) Frederick Anderson (1942), in The Relational Theory of Mind:
“Meanings, then, are mental processes which, although not themselves objects for consciousness, actively modify and characterize that of which we are for the moment conscious. They differ from other subconscious processes in this respect, that we have cognitive control over them and can at any moment bring them to light if we choose.”
Cognitive Control = having the capacity of “bringing things into consciousness” — is this different from attention, or “paying attention” to something by making it the focus of awareness?


Moving into the 21st century, two of the quintessential contemporary cognitive control papers that [mostly] banish executives from their midst are:

Miller and Cohen (2001):
“The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals.”

Botvinick et al. (2001):
“A remarkable feature of the human cognitive system is its ability to configure itself for the performance of specific tasks through appropriate adjustments in perceptual selection, response biasing, and the on-line maintenance of contextual information. The processes behind such adaptability, referred to collectively as cognitive control, have been the focus of a growing research program within cognitive psychology.”

I originally approached this topic during research for a future post on Mindstrong and their “digital phenotyping” technology. Two of their five biomarkers are Executive Function and Cognitive Control. How do they differ? There's an awful lot of overlap, as we'll see in a future post.


Footnotes

1 Another fun (and related) determinant might be, “does your work focus on the dorsal anterior cingulate cortex?” In which case, the respondent would answer Yes.

2 except for one deliberately obfuscatory response.


References

Anderson F. (1942). The Relational Theory of Mind. The Journal of Philosophy 39(10):253-60.

Botvinick MM, Braver TS, Barch DM, Carter CS, Cohen JD. (2001). Conflict monitoring and cognitive control. Psychol Rev. 108(3):624-52.

Miller EK, Cohen JD. (2001). An integrative theory of prefrontal cortex function. Annual Rev Neurosci. 2001;24:167-202.

Slocombe CS. (1928). Of mental testing—a pragmatic theory. Journal of Educational Psychology 19(1):1-24.


Appendix

Many, many articles use the terms interchangeably. I won't single out anyone in particular. Instead, here is a valiant attempt by Nigg (2017) to make a slight differentiation between them in a review paper entitled:
On the relations among self-regulation, self-control, executive functioning, effortful control, cognitive control, impulsivity, risk-taking, and inhibition for developmental psychopathology.
But in the end he concludes, “Executive functioning, effortful control, and cognitive control are closely related.”


Today is the 13th anniversary of this blog. I wanted to write a sharp and subversive post.1 Or at least compose a series of self-deprecating witticisms about persisting this long. Alas, it has been an extremely difficult year.

Instead, I drew inspiration from Twitter (@neuroecology) and a blogger who's been at it even longer than I (@DoctorZen). Very warily I might add, because I knew the results would not be flattering or pretty.

Behold my scores on the “Big Five” personality traits (and weep). Some of the extremes are partly situational, and that's why I'm presenting these traits separately. Sure, negative emotionality is a relatively fixed part of my personality, but the 100% scores on depression and anxiety are influenced by grief (due to the loss of my spouse of 12 years). Personality psychologists would turn this around and say that someone high in trait negative emotionality (formerly known as the more disparaging “neuroticism”) would be predisposed to depression and anxiety.




Another fun trait score is shown below. This one might be even sadder. Yeah, I'm introverted, but people in my situation often tend to withdraw from friends, family, and society.2 Again, reverse the causality if you wish, but social isolation is not an uncommon response.





But hey, I am pretty conscientious, as you can see from my overall test results on the Big Five. You too can take the test HERE.




I'll have something more interesting for you next time.



Footnotes

1 Why? To prove to myself that I can still do it? To impress the dwindling number of readers? To show how the blog has not exceeded its expiry date — it still has relevance in its own modest and quirky way.

2 Hey, I actually had two social engagements this weekend! My lack of assertiveness is disturbing, however. But I absolutely do not want to take the lead on anything right now.





Before answering that question, I'll tell you about an incredibly impressive ethnographic study and field survey. For a one year period, the investigators (Pretus, Hamid et al., 2018) conducted field work within the community of young Moroccan men in Barcelona, Spain. As the authors explain, the Moroccan diaspora is an immigrant community susceptible to jihadist forms of radicalization:
Spain hosts Europe’s second largest Moroccan diaspora community (after France) and its least integrated, whereas Catalonia hosts the largest and least integrated Moroccan community in Spain. Barcelona ... was most recently the site of a mass killing ... by a group of young Moroccan men pledging support for the Islamic State. According to Europol’s latest annual report on terrorism trends, Spain had the second highest number of jihadist terrorism-related arrests in Europe (second only to France) in 2016...

After months of observation in selected neighborhoods, the researchers approached prospective participants about completing a survey, with the assurance of absolute anonymity. No names were exchanged, and informed consent procedures were performed orally, to prevent any written record of participation. The very large sample included 535 respondents (average age 23.47 years, range 18–42), who were all Sunni Muslim Moroccan men.

The goal of the study was to look at sacred values in these participants, and whether these values might affect their willingness to engage in violent extremism. “Sacred values are immune or resistant to material tradeoffs and are associated with deontic (duty-bound) reasoning...” (Pretus, Hamid et al., 2018). The term sacred values doesn't necessarily refer to religious beliefs. One of the most common is the basic human value, “it is wrong to kill another human being.” But theoretically speaking, we could include statements such as, “it is wrong to kill endangered species for sport (or for any other reason).”

In this study, Sacred Values included:
  • Palestinian right of return
  • Western military forces being expelled from all Muslim lands
  • Strict sharia as the rule of law in all Muslim countries
  • Armed jihad being waged against enemies of Muslims
  • Forbidding of caricatures of Prophet Mohammed
  • Veiling of women in public

What were the Nonsacred Values? We don't know. I couldn't find examples anywhere in the paper. It's crucial that we know what these were, to help understand the “sacralization” of nonsacred values, which was observed in an fMRI experiment (described later). So I turned to the Supplemental Material of Berns et al. (2012), inferring that the statements below are good examples of nonsacred values in a population of adults in Atlanta.
  • You are a dog person.
  • You are a cat person.
  • You are a Pepsi drinker.
  • You are a Coke drinker.
  • You believe that Target is superior to Walmart.
  • You believe that Walmart is superior to Target.

But what if the nonsacred values in the present study of violent extremism were a little more contentious and meaningful?
  • You are a fan of FC Barcelona.
  • You are a fan of AC Milan.

Anyway, to choose participants for the fMRI experiment, the investigators first divided the entire group into those who were more (n=267) or less (n=268) vulnerable to recruitment into violent extremism (see Appendix for details). An important comparison would have been to directly contrast brain activity in these two groups, but that wasn't done here. Out of the 267 men more vulnerable to violent extremism, 38 agreed to participate in the fMRI study. These 38 were more likely to Endorse Militant Jihadism (score 4.24 out of 7) than the general fMRI pool (3.35) and the non-fMRI pool (2.43).

A battery of six sacred and six nonsacred values was constructed individually for each person and presented in the scanner, along with a number of grammatical variants, for a list of 50 different items per condition. The 38 participants were randomly assigned to one of two manipulations in a between-subjects design: exclusion (n=19) and inclusion (n=19) in the ever-popular ball-tossing video game of Cyberball. [PDF]2



Unfortunately, this reduced the study's statistical power. Nonetheless, a major goal of the experiment was to examine how social exclusion affects the processing of sacred values. I don't know if Cyberball studies are ever conducted in a within-subjects design (perhaps with an intervening task), or if exposure to one of the two conditions is too “contaminating”. At any rate, in real life, discrimination against Muslim immigrants is isolating and causes exclusion from social and economic benefits. Feelings of marginalization can result in greater radicalization and support for (and participation in) extremist groups. At this point in time, I don't think neuroimaging can add to the extensive knowledge gained from years of field work.

Nevertheless, the investigators wanted to extend the findings of Berns et al. (2012) to a very different population. The earlier study wanted to determine whether sacred values are processed in a deontological way (based on strict rules of right and wrong) or in a utilitarian fashion (based on cost/benefit analysis of outcome). As interpreted by those authors, processing sacred values was associated with increased activation of left temporoparietal junction (semantic storage) and left ventrolateral prefrontal cortex (semantic retrieval). Berns et al. suggested that “sacred values affect behaviour through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.” Based on those results, the obvious prediction in the present study is that sacred values should activate left temporoparietal junction (L TPJ) and left ventrolateral prefrontal cortex (L VLPFC).


Fig. 3A (Pretus, Hamid et al., 2018).


Fig. 3A shows that only the latter half of that prediction was observed, and there was no explanation for the lack of activation in L TPJ. Instead, there was a finding in R TPJ in the excluded group which I won't discuss further.

Of note, the excluded participants rated themselves as being more likely to fight and die for nonsacred values, compared to the included participants. This was termed “sacralization” and now you can see why it's so important to know the nonsacred values. Are we talking about fighting and dying for Pepsi vs. Coke? For FC Barcelona vs. AC Milan? Not to be glib, but this would help us understand why social exclusion (in an artificial experimental setting) would radicalize these participants (in an artificial experimental setting).



Fig. 3B (Pretus, Hamid et al., 2018). Nonsacred values activate Left Inferior Frontal Gyrus (IFG, aka VLPFC) in the excluded group, but not in the included group. This was interpreted as a neural correlate of “sacralization”.


Another interpretation of Fig. 3B is that the exclusion manipulation was distracting, making it more difficult for these participants to process stimuli expressing nonsacred values (due to increased encoding demands, syntactic processing, etc.). Exclusion increased emotional intensity ratings, and decreased feelings of belongingness and being in control. This distraction could have carried over to the task of rating one's willingness to fight and die in defense of values.

Even if we say the brain imaging results weren't especially informative, the extensive ethnographic study and field surveys were a highly valuable source of data on a marginalized group of young Muslim men at risk of recruitment by violent extremist groups. It's a vicious cycle: terrorist attacks result in greater discrimination and persecution of innocent Muslim men, which has the unintended effect of further radicalization in some of the most vulnerable individuals. To conclude, I acknowledge that my comments may be out of turn because I have no authority or expertise, and because I'm from a country with an appalling record of discriminating against Muslims.


Footnotes

1 I was a bit confused by some of these scores, because they changed from one paragraph to the next, and differed from what was in Table 1. Perhaps one was a composite score, and the other from an individual questionnaire.

2 I've written extensively about whether Cyberball is a valid proxy for social exclusion, but I won't get into that here.


References

Berns GS, Bell E, Capra CM, Prietula MJ, Moore S, Anderson B, Ginges J, Atran S. (2012). The price of your soul: neural evidence for the non-utilitarian representation of sacred values. Philos Trans R Soc Lond B Biol Sci. 367(1589):754-62.

Pretus C, Hamid N, Sheikh H, Ginges J, Tobeña A, Davis R, Vilarroya O, Atran S. (2018). Neural and Behavioral Correlates of Sacred Values and Vulnerability to Violent Extremism. Front Psychol. 9:2462.


Appendix


Modified from Table 1 (Pretus, Hamid et al., 2018).

[The] measures included (1) a modified inventory on general radicalization (support for violence as a political tactic) based on a prior longitudinal study on violent extremist attitudes among Swiss adolescents (Nivette et al., 2017); (2) a scale on personal grievances previously used on imprisoned Islamist militants in the Philippines, and Tamil Tigers in Sri Lanka (Webber et al., 2018); (3) a scale on collective narcissism which has been shown to shape in-group authoritarian identity and support for military aggression against outgroups (de Zavala et al., 2009); (4) a self-report delinquency inventory adapted from Elliott et al. (1985), based on the disproportionate number of Muslim European delinquents who join jihadist terrorist groups (Basra and Neumann, 2016); and (5) a series of items assessing endorsement of militant jihadism (“The fighting of the Taliban, Al Qaida, ISIS is justified,” “The means of jihadist groups are justified,” “Attacks against Western nations by jihadist groups are justified,” “Attacks against Muslim nations by jihadist groups are justified,” “Attacks against civilians by jihadist groups are justified,” “Spreading Islam using force in every part of the world is an act of justifiable jihad,” and “A Caliphate must be resurrected even by force”) that we combined into a reliable composite score, “Endorsement of Militant Jihadism”...
Our memory for the details of real-life events is poor, according to a recent study. Seven MIT students took a one-hour walk through Cambridge, MA. A day later, they were presented with one-second video clips they may or may not have seen during their walk (the “foils” were taken from another person's recording). Mean recognition accuracy was 55.7%, barely better than guessing.1


Minimal recognition memory for detailed events. Dashed line is chance performance. Adapted from Fig. 2 of Misra et al. (2018).


How did the researchers capture the details of what was seen during each person's stroll about town (2.1 miles / 3.5 km)? The participants were fitted with eye-tracking glasses to follow their eye movements (because you can't remember what you don't see), and a GoPro camera was mounted on a helmet.


from Fig. 1 (Misra et al., 2018).


One problem with this setup, however, was that the eye-tracking data had to be excluded: the overwhelmingly bright summer sun prevented the eye tracker from obtaining accurate images of the pupil. Thus, Experiment 2 was performed inside the Boston Museum of Fine Arts with a separate group of 10 students.


from Fig. 1 (Misra et al., 2018).


Recognition performance was better in Experiment 2. Mean accuracy was 63.2% — well above chance (p=.0005) — but still not great. Participants correctly identified clips they had seen 59% of the time, and correctly rejected clips they hadn't seen 67% of the time. One participant (#4) was really good, and you'll notice the individual differences below.

Dashed line is chance performance. Adapted from Fig. 2 of Misra et al. (2018).
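The difference between “barely better than guessing” (55.7%) and “well above chance” (63.2%) comes down to testing accuracy against the 50% chance line. As a rough sketch of that logic (the trial count of 200 below is a made-up placeholder, not the paper's actual per-subject number), an exact one-sided binomial test can be written with only the Python standard library:

```python
from math import comb

def binom_p_one_sided(k, n, p=0.5):
    """Exact one-sided binomial p-value: P(X >= k) if every trial is a coin flip."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical trial count -- illustrative only
n = 200
p_walk = binom_p_one_sided(round(0.557 * n), n)    # ~55.7% accuracy (Exp. 1)
p_museum = binom_p_one_sided(round(0.632 * n), n)  # ~63.2% accuracy (Exp. 2)
print(f"Exp. 1: p = {p_walk:.3f}   Exp. 2: p = {p_museum:.5f}")
```

With these placeholder numbers, 55.7% is statistically indistinguishable from coin-flipping while 63.2% is clearly above it, which matches the qualitative contrast the authors draw between the two experiments.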


In Exp. 2, the investigators were able to look at the influence of eye fixations on memory performance. Not surprisingly, people were better at remembering what they looked at (fixated on), but this only held for certain categories of items: talking people, objects rated as “distinctive” (but not distinctive faces), and paintings (but not sculptures).




How do the authors interpret this finding? We don't necessarily pay attention to everything we look at.
“What subjects fixated on also correlated with performance (Fig. 4), but it is clear that subjects did not remember everything that they laid eyes on. There is extensive literature showing that subjects may not pay attention or be conscious of what they are fixating on. Therefore, it is quite likely that, in several instances, subjects may have fixated on an object without necessarily paying attention to that object. Additionally, attention is correlated with the encoding of events into memory. Thus, the current results are consistent with the notion that eye fixations correlate with episodic memory but they are neither necessary nor sufficient for successful episodic memory formation.”

For me personally, 2018 was a year to forget.2 Yet, certain tragic images are etched into my mind, cropping up at inopportune times to traumatize me all over again. That's a very different topic for another time and place.


May your 2019 brighten the sky.


The number 2019 is written in the air with a sparkler near a tourist camp outside Krasnoyarsk, Russia, on January 1, 2019. (The Atlantic)


Footnotes

1 However:
“Two subjects from Experiment I were excluded from the analyses. One of these subjects had a score of 96%, which was well above the performance of any of the other subjects (Figure 2). The weather conditions on the day of the walk for this subject were substantially different, and this subject could thus easily recognize his own video clips purely from assessing the weather conditions. Another subject was excluded because he responded 'yes' >90% of the trials.”

2 See:

I should have done this by now...

The Lie of Precision Medicine

Derealization / Dying

There Is a Giant Hole Where My Heart Used To Be

How to Reconstruct Your Life After a Major Loss


Reference

Misra P, Marconi A, Peterson M, Kreiman G. (2018). Minimal memory for details in real life events. Sci Rep. 8(1):16701.


We forget most of our lives... Misra et al, Scientific Reports 2018. https://t.co/iK3E6c8mvB pic.twitter.com/ern3QQVWJm
— Gabriel Kreiman (@gkreiman) November 16, 2018
