Statistical Criticism is Easy; I Need to Remember That Real People are Involved
Statistical Thinking
1y ago
I have been critical of a number of articles, authors, and journals on this growing blog. Linking the blog with Twitter is a way to expose the blog to more readers. It is far too easy to slip into hyperbole on the blog, and even easier on Twitter with its space limitations. Importantly, many of the statistical problems pointed out in my articles are very, very common, and I dwell on recent publications to get the point across that inadequate statistical review at medical journals remains a serious problem. Equally important, many of the issues I discuss, from p-values to null hypothesis testing …
EHRs and RCTs: Outcome Prediction vs. Optimal Treatment Selection
Statistical Thinking
1y ago
Frank Harrell, Professor of Biostatistics, Vanderbilt University School of Medicine
Laura Lazzeroni, Professor of Psychiatry and, by courtesy, of Medicine (Cardiovascular Medicine) and of Biomedical Data Science, Stanford University School of Medicine
Revised July 17, 2017
It is often said that randomized clinical trials (RCTs) are the gold standard for learning about therapeutic effectiveness. This is because the treatment is assigned at random, so no variables, measured or unmeasured, will be truly related to treatment assignment. The result is an unbiased estimate of treatment effectiveness. On …
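As a minimal illustration of the unbiasedness argument, here is a simulation sketch in R (not from the post; all variable names are hypothetical): randomized assignment recovers the true treatment effect on average, while assignment driven by an unmeasured severity variable does not.

```r
set.seed(1)
nsim <- 2000; n <- 200; true.effect <- 1
est <- replicate(nsim, {
  severity <- rnorm(n)                        # unmeasured prognostic variable
  tx.rct   <- rbinom(n, 1, 0.5)               # randomized assignment
  tx.obs   <- rbinom(n, 1, plogis(severity))  # sicker patients treated more often
  y.rct <- true.effect * tx.rct + severity + rnorm(n)
  y.obs <- true.effect * tx.obs + severity + rnorm(n)
  c(rct = coef(lm(y.rct ~ tx.rct))['tx.rct'],
    obs = coef(lm(y.obs ~ tx.obs))['tx.obs'])
})
rowMeans(est)  # RCT estimate averages ~1 (unbiased); observational estimate does not
```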
Longitudinal Data: Think Serial Correlation First, Random Effects Second
Statistical Thinking
1y ago
Random effects/mixed effects models shine for multi-level data such as measurements within cities within counties within states. They can also deal with measurements clustered within subjects. There are at least two contexts for the latter: rapidly repeated measurements where elapsed time is not an issue, and serial measurements spaced out over time for which time trends are more likely to be important. An example of the first is a series of tests on a subject over minutes, when the subject does not fatigue. An example of the second is a typical longitudinal clinical trial where patient responses …
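A sketch in R of the two modeling strategies the post contrasts, using the nlme package on simulated data (the data frame and variable names are hypothetical, not from the post): generalized least squares with a continuous-time AR(1) serial correlation structure, versus a random-intercept mixed model whose implied within-subject correlation is the same for every pair of measurements no matter how far apart in time they are.

```r
library(nlme)
set.seed(1)
## Hypothetical longitudinal data: 50 subjects, 6 visits each,
## with a time trend and AR(1) within-subject errors
d <- expand.grid(id = factor(1:50), time = 0:5)
d <- d[order(d$id, d$time), ]
d$y <- 0.5 * d$time + rep(rnorm(50), each = 6) +
       as.vector(replicate(50, arima.sim(list(ar = 0.6), n = 6)))

## Serial correlation first: correlation decays with time separation
f.serial <- gls(y ~ time, data = d,
                correlation = corCAR1(form = ~ time | id))

## Random effects second: a subject-level random intercept
f.ranef <- lme(y ~ time, data = d, random = ~ 1 | id)

AIC(f.serial, f.ranef)  # compare the two correlation structures
```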
Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules
Statistical Thinking
1y ago
I discussed the many advantages of probability estimation over classification. Here I discuss a particular problem related to classification, namely the harm done by using improper accuracy scoring rules. Accuracy scores are used to drive feature selection and parameter estimation, and to measure predictive performance on models derived using any optimization algorithm. For this discussion let Y denote a no/yes, false/true, 0/1 event being predicted, with Y=0 denoting a non-event and Y=1 denoting that the event occurred. As discussed here and here, a proper accuracy scoring rule is a metric applied to probability …
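To make the impropriety concrete, here is a small R sketch (an assumed example, not from the post): a badly miscalibrated forecast earns exactly the same classification accuracy as the true probabilities, because accuracy only looks at which side of the threshold a forecast falls on, while the Brier score, a proper rule, penalizes the miscalibration.

```r
set.seed(1)
n <- 10000
p <- plogis(-1 + rnorm(n))  # true event probabilities
y <- rbinom(n, 1, p)        # observed 0/1 outcomes

accuracy <- function(prob, y) mean((prob > 0.5) == y)  # improper, discontinuous
brier    <- function(prob, y) mean((prob - y)^2)       # proper scoring rule

## Miscalibrated forecast that crosses 0.5 at exactly the same places
q <- plogis(5 * qlogis(p))

c(accuracy(p, y), accuracy(q, y))  # identical: accuracy cannot see the damage
c(brier(p, y),    brier(q, y))     # the Brier score correctly prefers p
```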
R Workflow
Statistical Thinking
1y ago
Resources for Ordinal Regression Models
Statistical Thinking
1y ago
A Comparison of Decision Curve Analysis with Traditional Decision Analysis
Statistical Thinking
1y ago
Andrew Vickers, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, vickersa@mskcc.org
Introduction: In a traditional decision analysis, the analyst creates a decision tree and then estimates probabilities and assigns utilities for each possible outcome. Decision curve analysis is a type of decision analysis that can be applied to the evaluation of prognostic models and diagnostic tests. The major advantage is that it does not require specification of multiple utilities for different outcomes. Instead, the threshold probability of disease – a concept essential for …
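For reference, a minimal R sketch of the net benefit calculation that underlies a decision curve, following the published formula net benefit = TP/n − (FP/n) × pt/(1 − pt) at threshold probability pt (Vickers & Elkin 2006); the simulated data and logistic model here are hypothetical.

```r
set.seed(1)
n <- 1000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-1 + x))             # disease status
prob <- fitted(glm(y ~ x, family = binomial)) # hypothetical prognostic model

## Net benefit at threshold probability pt: TP/n - FP/n * pt/(1 - pt)
net.benefit <- function(prob, y, pt) {
  tp <- mean(prob >= pt & y == 1)
  fp <- mean(prob >= pt & y == 0)
  tp - fp * pt / (1 - pt)
}

pts <- seq(0.05, 0.50, by = 0.05)
round(cbind(pt        = pts,
            model     = sapply(pts, function(pt) net.benefit(prob, y, pt)),
            treat.all = sapply(pts, function(pt) net.benefit(rep(1, n), y, pt))), 3)
```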
Commentary on Improving Precision and Power in Randomized Trials for COVID-19 Treatments Using Covariate Adjustment, for Binary, Ordinal, and Time-to-Event Outcomes
Statistical Thinking
1y ago
Frank E Harrell Jr, Professor of Biostatistics, Vanderbilt University School of Medicine, Nashville TN, USA, @f2harrell
Stephen Senn, Consultant Statistician, Edinburgh, UK, @stephensenn, senns.uk/Blogs.html
Standard covariate adjustment as commonly used in randomized clinical trials recognizes which quantities are likely to be constant (relative treatment effects) and which quantities are likely to vary (within-treatment-group outcomes and absolute treatment effects). Modern statistical modeling tools such as regression splines, penalized maximum likelihood estimation, Bayesian shrinkage priors, se…
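A small R sketch (an assumed simulation, not the authors' analysis) of the constant-relative/varying-absolute point: the covariate-adjusted logistic model recovers the conditional odds ratio that the unadjusted analysis attenuates (non-collapsibility), and a constant odds ratio still implies absolute risk reductions that vary with the covariate.

```r
set.seed(1)
n  <- 4000
tx <- rbinom(n, 1, 0.5)  # randomized treatment
x  <- rnorm(n)           # strong baseline covariate
y  <- rbinom(n, 1, plogis(-1 + 1.5 * x - 0.5 * tx))  # true conditional log OR = -0.5

unadj <- glm(y ~ tx,     family = binomial)
adj   <- glm(y ~ tx + x, family = binomial)

exp(coef(unadj)['tx'])  # marginal OR, attenuated toward 1
exp(coef(adj)['tx'])    # conditional OR, near the true exp(-0.5)

## Constant OR, yet the absolute risk reduction varies with x
arr <- function(x) plogis(-1 + 1.5 * x) - plogis(-1 + 1.5 * x - 0.5)
round(arr(c(-2, 0, 2)), 3)
```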
A Litany of Problems With p-values
Statistical Thinking
1y ago
With the many problems that p-values have, and the temptation to "bless" research when the p-value falls below an arbitrary threshold such as 0.05 or 0.005, researchers using p-values should at least be fully aware of what they are getting. They need to know exactly what a p-value means and what assumptions are required for it to have that meaning. ♦ A p-value is the probability of getting, in another study, a test statistic that is more extreme than the one obtained in your study, if a series of assumptions hold. It is strictly a probability about data, not a probability about a hypothesis …
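A quick R sketch (not from the post) of that definition at work: when the null hypothesis and the test's assumptions hold, the p-value is uniformly distributed, so "p < 0.05" occurs 5% of the time even when there is no effect at all; it is a statement about data, not about the hypothesis.

```r
set.seed(1)
pvals <- replicate(10000, t.test(rnorm(30), rnorm(30))$p.value)
mean(pvals < 0.05)  # ~0.05 under the null: a probability about data only
hist(pvals)         # approximately uniform on [0, 1]
```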
Equivalence of Wilcoxon Statistic and Proportional Odds Model
Statistical Thinking
1y ago
