Questionable Metascience Practices
Mark Rubin's Social Psychology Research Blog
In this new article, I consider questionable research practices in the field of metascience. A questionable metascience practice (QMP) is a research practice, assumption, or perspective that's been questioned by several commentators as being potentially problematic for metascience and/or the science reform movement. I discuss 10 QMPs that relate to criticism, replication, bias, generalization, and the characterization of science. My aim is not to cast aspersions on the field of metascience but to encourage a deeper consideration of its more questionable research practices, assumptions, and perspectives.
Exploratory hypothesis tests can be more compelling than confirmatory hypothesis tests
Researchers often distinguish between: (1) Exploratory hypothesis tests - unplanned tests of post hoc hypotheses that may be based on the current results, and (2) Confirmatory hypothesis tests - planned tests of a priori hypotheses that are independent of the current results. This distinction is supposed to be useful because exploratory results are assumed to be more "tentative" and "open to bias" than confirmatory results. In this recent paper, we challenge this assumption and argue that exploratory results can be more compelling than confirmatory results. Our article has three parts. In the …
Two-Sided Significance Tests
In this paper (Rubin, 2022), I make two related points: (1) researchers should halve two-sided p values if they wish to use them to make directional claims, and (2) researchers should not halve their alpha level if they're using two one-sided tests to test two directional null hypotheses.

(1) Researchers should halve two-sided p values when making directional claims

Researchers sometimes conduct two-sided significance tests and then use the resulting two-sided p values to make directional claims. I argue that this approach is inappropriate because two-sided p values refer to non-directional hypotheses.
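As a rough sketch of these two points (my own illustration, not code from the paper), assume a hypothetical t statistic t_obs with df degrees of freedom:

```python
# Minimal sketch; t_obs, df, and alpha are hypothetical values.
from scipy import stats

t_obs, df = 1.80, 100
alpha = 0.05

# Conventional two-sided p value for the non-directional null hypothesis.
p_two = 2 * stats.t.sf(abs(t_obs), df)

# (1) To make a directional claim, halve the two-sided p value
# (provided the observed effect is in the predicted direction).
p_directional = p_two / 2

# (2) Two one-sided tests of two separate directional null hypotheses:
# each is compared against the full, unadjusted alpha, because each
# directional claim rests on only one of the two tests.
p_upper = stats.t.sf(t_obs, df)    # H0: effect <= 0
p_lower = stats.t.cdf(t_obs, df)   # H0: effect >= 0
print(p_two, p_directional, p_upper <= alpha, p_lower <= alpha)
```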
When to Adjust Alpha During Multiple Testing
In this new paper (Rubin, 2021), I consider when researchers should adjust their alpha level (significance threshold) during multiple testing and multiple comparisons. I consider three types of multiple testing (disjunction, conjunction, and individual), and I argue that an alpha adjustment is only required for one of these three types.

There's No Need to Adjust Alpha During Individual Testing

I argue that an alpha adjustment is not necessary when researchers undertake a single test of an individual null hypothesis, even when many such tests are conducted within the same study. For example, …
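To make the three types of testing concrete, here is a rough sketch of how they might be operationalized (my own illustration with hypothetical p values; the paper's full argument is truncated above):

```python
# Hypothetical p values from three separate tests in the same study.
p_values = [0.012, 0.030, 0.047]
alpha = 0.05
m = len(p_values)

# Disjunction testing: the joint claim is supported if AT LEAST ONE test is
# significant, so each constituent test uses an adjusted alpha
# (e.g., Bonferroni: alpha / m) to control the error rate of the joint claim.
disjunction_supported = any(p <= alpha / m for p in p_values)

# Conjunction testing: the joint claim requires ALL tests to be significant,
# so no adjustment is needed; every test must pass on its own.
conjunction_supported = all(p <= alpha for p in p_values)

# Individual testing: each hypothesis is evaluated separately at the
# unadjusted alpha, even though several tests appear in the same study.
individual_decisions = [p <= alpha for p in p_values]

print(disjunction_supported, conjunction_supported, individual_decisions)
```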
“Repeated sampling from the same population?” A critique of Neyman and Pearson’s responses to Fisher
In a new paper in the European Journal for Philosophy of Science, I consider Fisher's criticism that the Neyman-Pearson approach to hypothesis testing relies on the assumption of "repeated sampling from the same population" (Rubin, 2020). This criticism is problematic for the Neyman-Pearson approach because it implies that test users need to know, for sure, what counts as the same or equivalent population as their current population. If they don't know what counts as the same or equivalent population, then they can't specify a procedure that would be able to repeatedly sample from this population.
Does Preregistration Improve the Interpretability and Credibility of Research Findings?
Preregistration entails researchers registering their planned research hypotheses, methods, and analyses in a time-stamped document before they undertake their data collection and analyses. This document is then made available with the published research report in order to allow readers to identify discrepancies between what the researchers originally planned to do and what they actually ended up doing. In a recent article (Rubin, 2020), I question whether this historical transparency facilitates judgments of credibility over and above what I call the contemporary transparency that is provided …
That’s not a two-sided test! It’s two one-sided tests!
"Using a two-tailed independent samples t test, we found that male participants had higher self-esteem than female participants, t(458) = -2.14, p = .033." In the above case, did the researchers really use a two-tailed test, or did they actually use two one-tailed tests, and what’s the difference? In a recent article, I considered this issue and make two key points. The first point is that, if researchers actually use a two-tailed test, then they should make non-directional claims. For example, in the above case, if the researchers had really used a two-tailed test, then they should claim t ..read more
What Type of Type I Error?
In a recent paper (Rubin, 2019), I consider two types of replication in relation to two types of Type I error probability. First, I consider the distinction between exact and direct replications. Exact replications duplicate all aspects of a study that could potentially affect the original result. In contrast, direct replications duplicate only those aspects of the study that are thought to be theoretically essential to reproduce the original result. Second, I consider two types of Type I error probability. The Neyman-Pearson Type I error rate refers to the maximum frequency of incorrectly rejecting the null hypothesis …
Do p Values Lose their Meaning in Exploratory Analyses?
In Rubin (2017), I consider the idea that p values lose their meaning (become invalid) in exploratory analyses (i.e., non-preregistered analyses). I argue that this view is correct if researchers aim to control a familywise error rate that includes all of the hypotheses that they have tested, or could have tested, in their study (i.e., a universal, experimentwise, or studywise error rate). In this case, it is not possible to compute the required familywise error rate because the number of post hoc hypotheses that have been tested, or could have been tested, during exploratory analyses in the study is unknown.
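A brief sketch of why that studywise rate becomes uncomputable (my own illustration using the standard familywise formula, not code from the paper):

```python
alpha = 0.05

def familywise_error_rate(alpha, k):
    # Probability of at least one Type I error across k independent tests.
    return 1 - (1 - alpha) ** k

# With a known, fixed number of tests, the rate is computable...
print(familywise_error_rate(alpha, 3))   # ~ 0.14

# ...but in exploratory analyses, k would have to include every post hoc
# hypothesis that was tested or could have been tested in the study, and
# that number is unknown, so the required studywise rate cannot be computed.
```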
The Costs of HARKing: Does it Matter if Researchers Engage in Undisclosed Hypothesizing After the Results are Known?
While no one is looking, a Texas sharpshooter fires his gun at a barn wall. He then walks up to his bullet holes and paints targets around them. When his friends arrive, he points at the targets and claims that he's a good shot (de Groot, 2014; Rubin, 2017b). In 1998, Norbert Kerr discussed an analogous situation in which researchers engage in undisclosed hypothesizing after the results are known, or HARKing. In this case, researchers conduct statistical tests, observe their research results (bullet holes), and then construct post hoc predictions (paint targets) to fit these results. In their …