The Journal of Learning Sciences has posted a Call For Papers for a special issue, “Learning In and For Collective Social Action.” It’s overtly political, and it takes a particular political stance: the first paragraph mentions furthering “progressive social movements.”

I think this special issue broadcasts the wrong message about the Journal, and will foster misunderstanding about the relationship of science and politics.

Let me start with the relationship between basic and applied sciences. Basic science aims to be value-free. It seeks to describe the world as it is. I am not claiming that science generally achieves that aim. Science is a human enterprise and humans are biased, and it’s well-established that biases creep into science, in terms of the agenda set, the interpretation of theory and data, funding, etc. My point is that when such bias is exposed it is considered a criticism. A person claiming to do basic science aims to do it in a value-free way and so must either seek to remedy the bias or give up on the claim of behaving scientifically.

In contrast, applied sciences do not aim to describe the world, but to change it, leveraging findings from basic science. Because they seek to change the world, values are part and parcel of what they do. Saying “yours is a political enterprise” is not a criticism of an applied science—there must be a goal, and goals are selected based on one’s values. (Naturally, one can behave in a biased manner when conducting applied research, and then deny those biases. That’s a different matter.)

The “Aims and Scope” statement of the Journal of Learning Sciences makes clear that it publishes both basic and applied research, describing itself as a “forum for research on education and learning.” In an important sense, reading intervention studies have an implicit goal—the goal that children should read. A study that seeks to close the gender gap in higher education STEM course-taking has an implicit political subtext—men and women should be equally represented in these disciplines. These studies are in that sense political.

But it matters that these are political positions about which everyone generally agrees. As a journal editor (or thoughtful reader) you don’t need to think about viewpoint diversity when it comes to those goals. Everyone thinks children should learn to read. But once you’re including topics about which reasonable people do disagree, you’re in different territory.

I have three problems with a scientific journal plunging into political issues as the Journal of Learning Sciences has done here.

First, applying science to politics is a fraught business. Science is powerful. It is perceived by most of the public as epistemically special—that it’s a better way of understanding the world than others. It’s a problem when a group cloaks its argument in the special status of science to further an essentially political point of view. The fact that we know scientists, like everyone else, are subject to unconscious bias in their work ought to make us worried about that possibility. Those who undertake to apply science to politically controversial issues ought to show self-awareness that they have embarked on a different sort of project, and they ought to take steps to ensure that they are thoughtful about the special problems this work poses.

The very fact that this special issue is being published by a journal that does not routinely handle papers on these topics indicates that the editors see nothing different about them. They are sending the message “sure, politics is in the purview of our journal. This is what we do. Reading, politics…it’s all the same.”

Second, the Journal of Learning Sciences did not issue an even-handed, open-minded call for applications of the learning sciences to political problems. The call refers specifically to furthering progressive social movements, and it includes a list of issues that those on the political left consider most important, with no mention of issues that those on the right find most important: Islamophobia yes, but not bias against evangelical Christians. Settler colonialism, but not the rights of the unborn. Rather than identifying controversial issues and seeking viewpoint diversity, the Journal is signaling quite clearly who is welcome and who is not.  This is a mistake. Science is about open debate, not exclusion.

My third issue with the call for papers grows out of the second. This call is not only bad science, it’s bad publicity. The academy is already under suspicion for having a left-leaning political agenda and foisting leftist groupthink on students. That suspicion grows partly from the fact that professors, as a group, lean left. This doesn’t mean that progressives should abandon important work to protect the tender feelings of conservatives. There are journals devoted to education that declare a particular view of politics in their mission statements and obviously there should be such journals. This sort of call for papers belongs in such a journal. It does not belong in the Journal of Learning Sciences or any other that purports to be devoted to science.

Note: This blog began as a Tweet, but I should have known better than to use that forum. I obviously was not clear, as people quickly wanted to let me know that scientists are biased, though I thought I had acknowledged that. More peculiar was the suggestion by several Tweeters that because scientists cannot be neutral, they may as well own their biases and stop pretending. It’s peculiar because it suggests a change to a cornerstone feature of a method that has been very successful for the last few centuries, and because the logic seems to be “if you can’t *completely* remove something undesirable, you may as well add more.”
Guest post with Daniel Ansari, Professor and Canada Research Chair in Developmental Cognitive Neuroscience in the Department of Psychology and the Brain & Mind Institute at the University of Western Ontario in London, Ontario, where he heads the Numerical Cognition Laboratory.

On February 28th Stanford Professor Jo Boaler and one of her students, Tanya Lamar, published an article that we think is a fine example of how not to draw educational conclusions from neuroscientific data. While we’re more interested in applauding great work than pointing out problems, we feel we can’t ignore an article in a high-profile venue like Time Magazine.
The backbone of their piece includes three points:
  1. Science has a new understanding of brain plasticity (the ability of the brain to change in response to experience), and this new understanding shows that the current teaching methods for struggling students are bad. These methods include identifying learning disabilities, providing accommodations, and working to students’ strengths.
  2. These new findings imply that “learning disabilities are no longer a barrier to mathematical achievement” because we now understand that the brain can be changed, if we intervene in the right way.
  3. The authors have evidence that students who thought they were “not math people” can be high math achievers, given the right environment.
 
There are a number of problems in this piece.
 
First, we know of no evidence that conceptions of brain plasticity, or (in prior decades) of its lack, had much (if any) influence on educators’ thinking about how to help struggling students. More to the point, conceptions of cellular processes should not influence specific educational plans or general educational outlook. The notion of the brain lacking plasticity obviously was not taken at face value by educators, nor should it have been—an unchangeable brain would be a brain incapable of learning. (For more on the difficulty of drawing educational implications from neuroscientific findings, see here and here.)

Second, Boaler and Lamar mischaracterize “traditional” approaches to specific learning disability. Yes, most educators advocate for appropriate accommodations, but that does not mean educators don’t try intensive and inventive methods of practice for skills that students find difficult. Standard practice for students with a specific reading disability, for example, includes intensive practice in decoding, and yes, educators have thought of trying methods other than the ones a student seems not to learn from—the very sort of methods that, as the authors mention at the end of the article, were suggested for her daughter with dyslexia and auditory processing difficulties.

Third, Boaler and Lamar advocate a diversity of practices for typically developing students that we think would be unremarkable to most math educators: “making conjectures, problem-solving, communicating, reasoning, drawing, modeling, making connections, and using multiple representations.” More surprising is their charge that “There are many problems with the procedural approach to mathematics that emphasizes memorization of methods, instead of deep understanding.” We agree with the National Mathematics Advisory Panel report that students should learn (and memorize) math facts and algorithms. We also agree with the Panel (and with Boaler and Lamar) that American students struggle with conceptual understanding. Deep understanding is always more difficult than memorization, and it’s the aspect of mathematics that most kids struggle with, but that doesn’t mean that most math educators don’t care whether their students understand math. In our view there is no need to reinvigorate the math wars, since an overwhelming body of scientific evidence has demonstrated that students need both procedural fluency and conceptual understanding; one cannot be developed without the other. It is best to lay this false dichotomy to rest and to avoid emotive, value-laden arguments such as the claim that students who are strong in conceptual understanding of math are more creative.

Fourth, we think it’s inaccurate to suggest that “A number of different studies have shown that when students are given the freedom to think in ways that make sense to them, learning disabilities are no longer a barrier to mathematical achievement. Yet many teachers have not been trained to teach in this way.” We have no desire to argue for student limitations, and we absolutely agree with Boaler and Lamar’s call for educators to applaud student achievement, to set high expectations, and to express (realistic) confidence that students can reach them. But it’s inaccurate to suggest that with the “right teaching” learning disabilities in math would greatly diminish or even vanish. For some students, difficulties persist despite excellent education. We don’t know which article Boaler & Lamar meant to link to in support of this point—the one linked to concerns different methods of research for typical students vs. students identified with a disability.

Do some students struggle with math because of bad teaching? We’re sure some do, and we have no idea how frequently this occurs. To suggest, however, that it’s the principal reason students struggle ignores a vast literature on learning disability in mathematics. This formulation sets up teachers to shoulder the blame for “bad teaching” when students struggle.

As to the final point—that Boaler & Lamar have evidence from a mathematics camp showing that, given the right instruction, students who find math difficult can gain 2.7 years of achievement in the course of a summer—we’re excited! We look forward to seeing the peer-reviewed report detailing how it worked.

In sum, we think that findings from studies of brain plasticity do not support the implications that Boaler and Lamar suggest they do. Further, we think they have mischaracterized both the typical goals of math instructors, and the typical profile of a student with math disability. 
I was talking to a couple of consultants last week and they posed the following question: “What would you tell funders not to fund? What sorts of things are they funding now (or are likely to be approached about funding in the near future) that seem plausible, but unlikely to work out well?”

My answer was intemperate and unhelpful, boiling down to “funders need to stop being idiots.” What I was thinking about at the time was funded initiatives designed to influence factors I consider peripheral to student success, or that depend on particular action (or inaction) by stakeholders; action that I think funders should recognize the actors are very unlikely to take.

I’ve been stewing about my answer for days now, to see if I could articulate it in a way that is more defensible and more helpful than “don’t be stupid.” I’ve come up with two factors. Both are pretty obvious, but neither appears to be a guiding principle. They are also obvious enough that I’m sure neither is original to me.

I described the first factor in When Can You Trust the Experts? where I called it the Chain of Influence. Learning happens in the mind of the student, and is the product of student thought. Teachers try to influence student thought. School leaders try to influence what teachers do, with the ultimate aim of changing student thought. District policy is meant to influence principals, state policy to influence districts, and federal policy to influence states.

Student thought ← Teacher ← Admin ← District ← State ← Feds

It does not matter what else is going right…if student thought is not changed, learning does not happen. If you seek change at, say, the state level, the change must migrate down through districts, principals, and teachers before it has a chance to influence student thought. If any link of the chain is either broken or operates in an unexpected way, you lose.

Moving up the chain is tempting because each step up yields an exponential increase in reach. Influence a teacher and you influence the 25 children she teaches…but influence a principal and you influence all 400 children in her school. Or influence a few hundred members of Congress and you influence every student in America. But I suggest that once you get more than two or three links in your chain of influence, the probability that your change will not work out as you intend approaches 1.0. (Hello, Common Core.) So the first principle I’d advise funders to follow is this: stick as close to classrooms as you can.
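A back-of-the-envelope sketch makes the compounding concrete. The per-link figure below is invented purely for illustration; the point is only that faithful transmission multiplies across links.

```python
# Toy illustration only: the 0.75 figure is made up, not an empirical estimate.
# If each link in the chain passes the intended change along faithfully with
# probability 0.75, the chance it reaches student thought intact shrinks fast.
p_faithful = 0.75

for links in range(1, 6):
    p_intact = p_faithful ** links
    print(f"{links} link(s): change arrives intact with probability {p_intact:.2f}")
```

With four or five links, the change arrives as intended only about a quarter to a third of the time, which is the sense in which failure becomes the expected outcome.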

All right, once funders are in classrooms, what should they pay attention to?

Given that the ultimate goal is to change student thought, two factors are paramount: what sort of change do you seek, and how are you trying to bring it about? In other words, the content (facts and skills) you want children to learn, and what the teacher does to try to bring about the change in thought. Again, this point is obvious, but it’s easy for it to get lost. That’s what happened when Britain sought to put a smartboard in every school without a plan for how it would help children learn. That’s what happened when the state of California sought to boost student self-esteem in the 1980s. So the second principle I’d advise funders to follow is this: fund projects that explore which content is most effective for children to learn (given a set of goals) and how to teach it.

The drawback to my recommendations is that they lack razzle-dazzle, and so will funded projects. It’s a lot more dramatic and interesting to try to find the next disruption to education. But it’s actually a lot easier to do some good if you pay attention to fundamentals.
 
Last Friday Emily Hanford published an op-ed in the New York Times. It argued that there are errors of omission and of commission in the education of future teachers concerning how most children learn to read.
Curiously, but not unexpectedly, most of the comments on the New York Times website and on social media did not concern teacher education, but student learning, specifically whether or not phonics instruction is effective.

These comments put me in mind of the polarization of American politics, and this recent survey showing that relatively small percentages of those on the left and right are really far from the mainstream. In other words, we are not as polarized as the media and social media make it seem. Also, the people closer to the center are sick of the yammering anger of those on the far left and right.

I think that may be true of the controversy regarding the teaching of reading.

So have a look at these six statements about children learning to read.
  1. The vast majority of children first learn to read by decoding, that is, by translating print into sound. The extent to which children can learn to read in the absence of systematic phonics instruction varies (probably as a bell curve), depending on their phonemic awareness and other oral language skills when they enter school; the former helps a child to figure out decoding on her own, and the latter to compensate for difficulty in decoding.
  2. Some children—an extremely small percentage, but greater than zero—teach themselves to decode with very minimal input from adults. Many more need just a little support.
  3. The speed with which most children learn to decode will be slower if they receive haphazard instruction in phonics than it would be with systematic instruction. A substantial percentage will make very little progress without systematic phonics instruction.
  4. Phonics instruction is not a literacy program. The lifeblood of a literacy program is real language, as experienced in read-alouds, children’s literature, and opportunities to speak, listen, and write. Children also need to see teachers and parents take joy in literacy.
  5. Although systematic phonics instruction seems like it might bore children, researchers examining the effect of phonics instruction on reading motivation report no effect.
  6. That said, there’s certainly the potential for reading instruction to tilt too far in the direction of phonics instruction, a concern Jeanne Chall warned about in her 1967 report. Classrooms should devote much more time to the activities listed in #4 above than to phonics instruction.
 
I think all six statements above are true. The number of people who would defend only the even- or odd-numbered statements (and deny the others) is, I’m guessing, small. I would also say they are ignoring abundant research and have an above-average capacity to kid themselves.

Most people believe both sets of statements, but often emphasize only one. When challenged, they say “yes, yes, of course those others are true. That’s obvious. But you’re ignoring the statements I’m really passionate about!” Naturally if you mostly emphasize the odd-numbered statements or the even-numbered statements, people will bark about the other.

I’m sure that as you read these six statements you disagreed with the way one or another is phrased, or you thought it went a little too far. I won’t defend any of them vigorously—I didn’t spend that much time writing them, to be honest. The larger point is that the conflict is a waste of time and I suspect most people know it. 
There's plenty of other work to be done.
Jeff Bezos recently announced that he would commit two billion dollars to two initiatives, one of which was to create a network of full-scholarship preschools in underserved communities.

Reaction has been “wary,” focusing mostly on the lack of detail in the announcement. (There was also one of those periodic meditations on tech moguls’ love affair with Montessori education.) The two professional organizations of Montessori educators—no doubt hoping for an unprecedented spotlight on and promotion of work they hold dear—issued statements that fizzed with enthusiasm (see here for AMI & AMS).

For my part, I was focused on one word in the announcement.
Despite this headline in Chalkbeat, Bezos is not proposing to launch and operate Montessori preschools, but rather Montessori-INSPIRED preschools. That’s a huge difference, because although there are some studies showing an advantage to the Montessori method (see here, here, and here), research also shows that fidelity matters—children in Montessori classrooms “supplemented” with non-Montessori materials learned less than children in high-fidelity classrooms (see here and here).

I hope that Mr. Bezos and whoever he listens to on education matters are keeping in mind that the method has a lot of components, and, excepting the consequences of adding materials to the classroom, we don’t have data on outcomes when the method is tampered with.

What happens if you
  • employ teachers who lack Montessori training? (There are thousands of teacher training programs in the US. Fewer than 25 offer Montessori training.)
  • eliminate or shorten the three-hour work cycle typical of Montessori preschool classrooms?
  • eliminate or change the multi-year age groupings?
  • eliminate or change Montessori scripted lessons? (Did you know that Montessori uses scripted lessons?)
  • eliminate or change the curriculum?
  • eliminate or change the Montessori conception of a prepared environment?
  • change what is usually a high student-teacher ratio?
I don’t know the answer to any of these questions. I don’t think anyone does.

The Executive Director of the Yale Education Studies program wondered, in a New York Times op-ed, why Bezos didn’t make use of existing institutions to promote his preschool vision rather than creating a network out of whole cloth. One obvious answer is that he wants tighter control to shape the organization as he sees fit, and to populate it with people he trusts.

Another likely reason is that he’s (rightly) suspicious that Montessori educators will be sticklers about the method, and he wants the flexibility to adapt the method as he sees fit. This may even be what he meant by another phrase in the brief announcement that drew a lot of attention: “the child will be the customer.”

Fair enough, it’s his money. But if that’s true, you may as well drop the “Montessori-inspired” bit.

Indeed, I’m predicting that picking and choosing elements of the Montessori toddler program (rather than adopting it wholesale) will yield student outcomes (academic and social) that are indistinguishable from those of other preschools. I think the components interlock and all are integral to its success.

Montessori is, indeed, “inspiring,” but using the education program as a jumping-off point for your own homebrew will, I predict, disappoint.
On September 15 I tweeted about a new meta-analysis that examines the impact of auditory distraction on reading. It’s an issue of broad concern, as many of us read at work in noisy office environments, and when we read for pleasure we may be on a subway, at a playground, and so on. Students and educators are keenly interested in this issue, because some students like to read with music on in the background and some educators wonder whether that affects comprehension.

The article concluded that background noise, speech, and music all have small but reliable negative impacts on reading comprehension.
In response, several folks on Twitter commented to the effect of “OK, so we should tell kids not to read with music on.”

I am not so sure.

This is a point of interpretation around which Todd Rose framed his book, The End of Average. Now I didn’t care much for this book, because I thought Rose took a valid concern and ran much too far with it, but here it’s applicable.

An average is meant as a summary that gives you a sense of the central tendency of a distribution. That doesn’t mean it is a good representative of every data point. To use Rose’s example, if you measure a large group of airplane pilots and find their average height is 69 inches, and then design airplane cockpits assuming “pilots are 69 inches tall,” well, you’ll be disappointed. The cockpit will be a good fit for a few, but will be too big or too small for most.

I criticized Rose’s book because I argued that (1) many principles of the mind do apply pretty well across the board—everyone’s attention is limited, for example; and (2) psychologists are generally aware of the problem Rose identifies. The entire subfield called individual differences is devoted to identifying ways in which we all differ.

The influence of background music on reading may be a case where Rose’s warning is pertinent. The meta-analysis reports a small, consistent cost to reading comprehension when listening to music. Looking at the breakdown of individual studies, it’s easy to see that they trend toward the stated conclusion.
But there is also a lot of variability—I’m not referring to the dot representing the mean of each study, but to the dotted lines around each of those dots, which show the variability associated with that mean.

Contrast that with the studies on the effect that background speech has on reading comprehension.
What this indicates is that, while the mean of the grand distribution may show a small hit to comprehension when background music plays, it's NOT the case that every child reads a little worse with background music on. Part, but not all, of that variability is noisy measurement.
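A toy simulation makes the distinction concrete. The numbers below are invented for illustration and are not data from the meta-analysis; the only point is that a small average cost can coexist with many readers who show no cost at all.

```python
# Invented numbers for illustration only; not data from the meta-analysis.
# Simulate per-reader effects of background music on comprehension, drawn from
# a distribution whose mean is slightly negative but whose spread is wide.
import random

random.seed(0)
effects = [random.gauss(-0.1, 0.4) for _ in range(10_000)]  # hypothetical per-reader effects

mean_effect = sum(effects) / len(effects)
share_unharmed = sum(e >= 0 for e in effects) / len(effects)

print(f"mean effect: {mean_effect:.2f}")                 # a small average cost
print(f"readers showing no cost: {share_unharmed:.0%}")  # yet a sizable minority
```

With these made-up figures, the average reader takes a small hit, yet roughly four in ten simulated readers show no cost at all; the group-level conclusion does not license a blanket rule for individuals.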

As the article notes, researchers have sought variables that might explain why music hurts, fails to influence, or even helps comprehension. For a while they thought introversion/extraversion might be the answer, but that didn’t pan out. Still, I think this is a case where individual differences play an important role.

As far as practice goes, I think this finding could be offered as support for a decision not to play music to every child in a classroom. In a big sample, you’d say it will reduce mean comprehension. But I don’t think it supports telling individual children not to listen to music while they read. (Note too, I see this as one factor among many a teacher would consider in a decision of this sort.)

Here’s another reason I personally wouldn’t be too quick to interpret this meta-analysis as showing people should never listen to music while reading. Some of my students say they like music playing in the background because it makes them less anxious. It could be that a laboratory situation (with no stakes) means these students aren’t anxious (and hence show little cost when the music is off) but would have a harder time reading without music when they are studying. In other words, the laboratory situation may underestimate the frequency that music provides a benefit for a subset of students. 
The recent kerfuffle concerning Hart & Risley (1995) and the 30 million word gap offers an object lesson in science, the interpretation of science, and the relation of science and policy.

Let’s start with the new science. Douglas Sperry and colleagues sought to replicate Hart & Risley, who reported the 30 million word gap—that’s the projected difference in total number of words directed to a child by caregivers when comparing children of parents on public assistance and children of parents in professional positions.  Sperry and his team claim not to find a statistically reliable difference among parents of different social classes.

Twitter was quick to pounce, and coverage from NPR made it sound like Hart & Risley had been debunked, with the headline “Let’s stop talking about the 30 million word gap.”

But the Sperry report doesn’t really upend Hart & Risley.

First, Sperry et al. claim that the Hart & Risley finding has never been replicated. I am not sure what Sperry et al. mean by “replicate,” because the conceptual finding that socioeconomic status is associated with the volume of caregiver→child speech has been replicated. (The following list is not offered as complete—I stopped looking after I found five.)

Gilkerson et al (2017)
Hoff (2003)
Hoff-Ginsberg (1998)
Huttenlocher et al (2010)
Rowe (2008)

None of these is an exact replication; they have variations in methods, population, and analyses. The same is true of Sperry et al., and funnily enough that study has a fairly significant difference—they didn't include a group of professional parents, which is key if your main concern is the size of the gap between professional and public-assistance parents.

It’s also worth noting that Sperry et al. speculated that their results may be more representative of how parents actually talk, because the researchers used an unobtrusive recording system. Hart and Risley (and most other researchers) had a researcher observe parents and children, so perhaps parents in different SES groups reacted differently in the presence of a researcher, the guess being that poor people might clam up, or that wealthier people might show off by talking more. I'll leave alone the assumptions underlying that speculation, but I will point out that, first, I doubt observer effects would count for much, because the observations occurred over the course of years; people get used to being observed. Second, Gilkerson et al. (2017) used the same unobtrusive system that Sperry did and observed the association of SES and caregiver speech.

Another odd thing about the Sperry et al. paper is their emphasis on bystander speech (i.e., speech that is not directed to the child but happens in the child’s presence). This is odd because multiple studies indicate that children *can* learn from such speech, but more often learn little or nothing (e.g., here and here).

Sperry points out that in some cultures children are seldom addressed directly, yet learn to talk. But maybe children in those cultures learn “if someone’s talking, I should listen, even if it’s not addressed to me because they may say something that’s important to me.” In most households in the US, if you’re not being addressed it’s less likely that the speech is important to you, so the child likely does not redirect attention from whatever he or she was doing to the speech.

So all in all, I don’t think this failure to replicate overturns Hart & Risley, coming as it does in the face of several successful replications. As to whether the gap is 30 million or some other figure…I don’t know, maybe somebody thought the absolute value mattered. I doubt any psychologists did. We would care about the predictive power of the caregiver speech. On the whole, there’s still pretty good reason to think there’s an association between SES and child-directed speech from parents. (For more on this issue, see the recent blog by Roberta Golinkoff and her colleagues.)

BUT thinking that there’s pretty good evidence for the association is not AT ALL the same as thinking it ought to influence policy. There are two issues here.

First, do we understand this phenomenon well enough to intervene? Second, should we?

In answer to “do we understand enough?” I’d say “no.” The volume of words is the variable you hear about most, but it may not be the most important. It may be the conversational back-and-forth that matters. Or the diversity of speech. Or the gestures that go with speech. And oral language is only one contributor to vocabulary size and syntactic complexity. Maybe we should intervene to get more parents reading to their children, or better, using dialogic reading strategies. At the very least, I’d like to see a small-scale intervention study (not just a correlational study) showing positive results of asking caregivers to talk to their children more, before I would be ready to draw a strong conclusion that the volume of caregiver talk is causal to children’s language capabilities.

The second question—if we were pretty sure we knew that a factor is causal, should we intervene?—is much more fraught. As I have considered  at length elsewhere, questions like this are outside of the realm of science. You’re contemplating using science, but whether or not to intervene is not a scientific question. It’s a question of values. You are seeking to change the world. That brings costs and the promise of benefits. Will it be worth it? It depends on what you value.

That’s what people on Twitter were responding to on this issue (some explicitly, some implicitly)—the assumption that parents in poverty ought to parent more like middle-class parents. Then their kids would be successful…according to middle-class values. That conclusion entails the obvious corollary that parents can eliminate any disadvantage their child has, so if they don’t, well, it’s no one’s fault but their own.

I agree with this argument to a point. The prospect of using science to tell people how to parent makes me very uneasy.

On the other hand, should we fiercely defend parenting practices in the name of cultural equality, or because we don’t want to let powerful institutions off the hook, if we know those practices put children at a disadvantage in school and, later, in the job market? (Reminder: I don’t think that such evidence currently exists on parental speech volume.) Should wealthy parents keep pace with what researchers suggest will help children flourish, and then defend the right of parents living in poverty to use parenting practices that put poor children at a disadvantage? In a long series of tweets (in which she raised many of the criticisms of the Sperry et al. study that I raised above), Twitter user @kimmaytube closed with a pointed comment.
What is the role of a scientist in these difficult application issues? For better or worse, I have come to what seems the obvious resolution: give people the fullest information you can and let them decide. 

People who are concerned about the impact that proposed applications of science will have on low-income communities and individuals raise valid points, which they sometimes undercut with rash claims about the invalidity of the scientific studies.

I’ve seen this repeatedly on the subject of grit and self-control. Again, there are very legitimate questions to ask about the values that underlie the assumption that we should make kids more self-controlled and/or more gritty, and questions about the costs to children and institutions should we try to intervene in that way. These are separate issues from questions about the scientific standing of grit and self-control as explanatory constructs.

Twitter notwithstanding, I encourage you to bear the distinction in mind.