Bayesian inference for a logistic regression model (Part 6)
Darren Wilkinson's Blog
by darrenjw
1y ago
Part 6: Hamiltonian Monte Carlo (HMC)
Introduction
This is the sixth part in a series of posts on MCMC-based Bayesian inference for a logistic regression model. If you are new to this series, please go back to Part 1. In the previous post we saw how to construct an MCMC algorithm utilising gradient information by considering a Langevin equation having our target distribution of interest as its equilibrium. This equation has a physical interpretation in terms of the stochastic dynamics of a particle in a potential equal to minus the log of the target density. It turns out that thinking about …
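The excerpt only hints at the construction; as a rough sketch of a single HMC transition in Python (not the post's own code; `lpost` and `glp` are hypothetical functions returning the unnormalised log posterior and its gradient), the leapfrog-plus-accept/reject structure looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)

def hmc_step(x, lpost, glp, eps=0.01, L=20):
    """One HMC transition: simulate Hamiltonian dynamics with the
    leapfrog integrator, then accept/reject to correct the error."""
    p = rng.normal(size=len(x))       # sample a fresh momentum
    x_new, p_new = x.copy(), p.copy()
    # leapfrog integration: half momentum step, L position steps
    # interleaved with full momentum steps, final half momentum step
    p_new += 0.5 * eps * glp(x_new)
    for _ in range(L - 1):
        x_new += eps * p_new
        p_new += eps * glp(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * glp(x_new)
    # Metropolis accept/reject on the joint (position, momentum) space
    log_a = (lpost(x_new) - 0.5 * p_new @ p_new) - (lpost(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_a else x
```

The accept/reject step on the joint space corrects the discretisation error of the leapfrog integrator, so the chain targets the exact posterior.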
Bayesian inference for a logistic regression model (Part 5)
Darren Wilkinson's Blog
by darrenjw
1y ago
Part 5: the Metropolis-adjusted Langevin algorithm (MALA)
Introduction
This is the fifth part in a series of posts on MCMC-based Bayesian inference for a logistic regression model. If you are new to this series, please go back to Part 1. In the previous post we saw how to use Langevin dynamics to construct an approximate MCMC scheme using the gradient of the log target distribution. Each step of the algorithm involved simulating from the Euler-Maruyama approximation to the transition kernel of the process, based on some pre-specified step size. We can improve the accuracy of this approximation …
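A minimal sketch of one MALA transition, under the same hypothetical `lpost`/`glp` interface as above and with an arbitrary step size, might look like this (a hedged illustration, not the post's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def mala_step(x, lpost, glp, h=0.01):
    """One MALA transition: an Euler-Maruyama proposal from the Langevin
    dynamics, corrected by a Metropolis-Hastings accept/reject step."""
    mean_fwd = x + 0.5 * h * glp(x)
    prop = mean_fwd + np.sqrt(h) * rng.normal(size=len(x))
    mean_bwd = prop + 0.5 * h * glp(prop)
    # Gaussian log proposal densities; common constants cancel in the ratio
    lq_fwd = -np.sum((prop - mean_fwd) ** 2) / (2 * h)
    lq_bwd = -np.sum((x - mean_bwd) ** 2) / (2 * h)
    log_a = lpost(prop) - lpost(x) + lq_bwd - lq_fwd
    return prop if np.log(rng.uniform()) < log_a else x
```

Because the Langevin proposal is not symmetric, the Hastings correction (the `lq_bwd - lq_fwd` term) is needed to make the chain exact.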
Bayesian inference for a logistic regression model (Part 4)
Darren Wilkinson's Blog
by darrenjw
1y ago
Part 4: Gradients and the Langevin algorithm
Introduction
This is the fourth part in a series of posts on MCMC-based Bayesian inference for a logistic regression model. If you are new to this series, please go back to Part 1. In the previous post we saw how the Metropolis algorithm could be used to generate a Markov chain targeting our posterior distribution. In high dimensions the diffusive nature of the Metropolis random walk proposal becomes increasingly inefficient. It is therefore natural to try and develop algorithms that use additional information about the target distribution. In the …
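For reference, the Langevin diffusion the post builds on, and its Euler-Maruyama discretisation (the unadjusted Langevin update), can be written as follows; the notation is standard rather than copied from the post:

```latex
% Langevin diffusion with stationary distribution \pi
dX_t = \tfrac{1}{2}\nabla\log\pi(X_t)\,dt + dW_t
% Euler-Maruyama discretisation with step size h (unadjusted Langevin update)
x_{t+1} = x_t + \tfrac{h}{2}\nabla\log\pi(x_t) + \sqrt{h}\,\varepsilon_t,
\qquad \varepsilon_t \sim N(0, I)
```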
Bayesian inference for a logistic regression model (Part 3)
Darren Wilkinson's Blog
by darrenjw
1y ago
Part 3: The Metropolis algorithm
Introduction
This is the third part in a series of posts on MCMC-based Bayesian inference for a logistic regression model. If you are new to this series, please go back to Part 1. In the previous post we derived the log posterior for the model and implemented it in a variety of programming languages and libraries. In this post we will construct a Markov chain having the posterior as its equilibrium.
MCMC
Detailed balance
A homogeneous Markov chain with transition kernel $p(x, x')$ is said to satisfy detailed balance for some target distribution $\pi(x)$ if $\pi(x)p(x, x') = \pi(x')p(x', x)$. Integrating both sides …
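As a hedged illustration (not the post's implementation), a random-walk Metropolis step is just a few lines of Python; with a symmetric Gaussian proposal the Hastings ratio reduces to the ratio of target densities, which is what makes detailed balance easy to verify:

```python
import numpy as np

rng = np.random.default_rng(42)

def metropolis_step(x, lpost, sd=0.1):
    """One random-walk Metropolis transition: a symmetric Gaussian
    proposal, accepted with probability min(1, pi(prop)/pi(x)).
    lpost is a hypothetical unnormalised log posterior function."""
    prop = x + sd * rng.normal(size=len(x))
    if np.log(rng.uniform()) < lpost(prop) - lpost(x):
        return prop
    return x
```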
Bayesian inference for a logistic regression model (Part 2)
Darren Wilkinson's Blog
by darrenjw
1y ago
Part 2: The log posterior
Introduction
This is the second part in a series of posts on MCMC-based Bayesian inference for a logistic regression model. If you are new to this series, please go back to Part 1. In the previous post we looked at the basic modelling concepts, and how to fit the model using a variety of PPLs. In this post we will prepare for doing MCMC by considering the problem of computing the unnormalised log posterior for the model. We will then see how this posterior can be implemented in several different languages and libraries.
Derivation
Basic structure
In Bayesian inference …
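The post implements this in several languages; as one hedged Python/NumPy sketch (assuming independent zero-mean Gaussian priors on the coefficients, which may differ from the post's exact choice), the unnormalised log posterior is a log prior plus a Bernoulli log likelihood:

```python
import numpy as np

def lpost(beta, X, y, prior_sd=10.0):
    """Unnormalised log posterior for logistic regression.
    X is the n x p covariate matrix, y a vector of 0/1 responses.
    Prior: independent N(0, prior_sd^2) on each coefficient (an assumption)."""
    eta = X @ beta  # linear predictor
    # Bernoulli log likelihood: sum_i [ y_i*eta_i - log(1 + exp(eta_i)) ],
    # computed stably via logaddexp
    llik = np.sum(y * eta - np.logaddexp(0.0, eta))
    lprior = -0.5 * np.sum(beta ** 2) / prior_sd ** 2
    return lprior + llik
```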
Bayesian inference for a logistic regression model (Part 1)
Darren Wilkinson's Blog
by darrenjw
1y ago
Part 1: The basics
Introduction
This is the first in a series of posts on MCMC-based fully Bayesian inference for a logistic regression model. In this series we will look at the model, and see how the posterior distribution can be sampled using a variety of different programming languages and libraries.
Logistic regression
Logistic regression is concerned with predicting a binary outcome based on some covariate information. The probability of "success" is modelled via a logistic transformation of a linear predictor constructed from the covariate vector. This is a very simple model, but is a …
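In symbols (standard notation, not necessarily the post's): for binary response $y_i$ and covariate vector $x_i$,

```latex
% Logistic regression: Bernoulli response with logit link
y_i \sim \mathrm{Bern}(p_i), \qquad
\operatorname{logit}(p_i) \equiv \log\frac{p_i}{1-p_i} = x_i^\top\beta,
\qquad\text{so}\qquad
p_i = \frac{1}{1 + e^{-x_i^\top\beta}}
```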
MCMC code for Bayesian inference for a discretely observed stochastic kinetic model
Darren Wilkinson's Blog
by darrenjw
1y ago
In June this year the (twice COVID-delayed) Richard J Boys Memorial Workshop finally took place, celebrating the life and work of my former colleague and collaborator, who died suddenly in 2019 (obituary). I completed the programme of talks by delivering the inaugural RSS North East Richard Boys lecture. For this, I decided that it would be most appropriate to talk about the paper Bayesian inference for a discretely observed stochastic kinetic model, published in Statistics and Computing in 2008. The paper is concerned with (exact and approximate) MCMC-based fully Bayesian inference for continuous …
Unbiased MCMC with couplings
Darren Wilkinson's Blog
by darrenjw
4y ago
Yesterday there was an RSS Read Paper meeting for the paper Unbiased Markov chain Monte Carlo with couplings by Pierre Jacob, John O’Leary and Yves F. Atchadé. The paper addresses the bias in MCMC estimates due to lack of convergence to equilibrium (the “burn-in” problem), and shows how it is possible to modify MCMC algorithms in order to construct estimates which exactly remove this bias. The requirement is to couple a pair of MCMC chains so that they will at some point meet exactly and thereafter remain coupled. This turns out to be easier to do than one might naively expect. There are many …
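For context, the basic unbiased estimator in the paper has (up to notational details) the following telescoping form, where the chains $X$ and $Y$ have the same marginal law, $Y$ lags one step behind $X$, and the two meet at time $\tau$; the correction sum exactly cancels the burn-in bias in expectation:

```latex
% Unbiased estimator of E_pi[h] from lag-one coupled chains meeting at tau
H_k(X, Y) = h(X_k) + \sum_{t=k+1}^{\tau-1}\bigl(h(X_t) - h(Y_{t-1})\bigr)
```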
Index to first 75 posts
Darren Wilkinson's Blog
by darrenjw
5y ago
This is the 75th post to this blog. Every 25 posts I produce an index of posts so far for easy reference. If I make it to post 100 I’ll do something similar.
25. Catalogue of my first 25 blog posts
50. Index to first 50 posts
51. Calling Scala code from R using rscala
52. Calling R from Scala sbt projects using rscala
53. Data frames and tables in Scala
54. HOFs, closures, partial application and currying to solve the function environment problem in Scala
55. First steps with monads in Scala
56. A scalable particle filter in Scala
57. Working with SBML using Scala
58. Scala for D…
A probability monad for the bootstrap particle filter
Darren Wilkinson's Blog
by darrenjw
5y ago
Introduction
In the previous post I showed how to write your own general-purpose monadic probabilistic programming language from scratch in 50 lines of (Scala) code. That post is a pre-requisite for this one, so if you haven’t read it, go back and have a quick skim through it before proceeding. In that post I tried to keep everything as simple as possible, but at the expense of both elegance and efficiency. In this post I’ll address one problem with the implementation from that post – the memory (and computational) overhead associated with forming the Cartesian product of particle sets during …
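The post's own code is Scala; as a hedged cross-language sketch of the key idea (resampling inside the monadic bind keeps the particle count fixed at n instead of forming an n-by-n Cartesian product; class and method names here are mine, not the post's):

```python
import random

class Prob:
    """A distribution represented by n equally weighted particles."""
    def __init__(self, particles):
        self.particles = particles

    def map(self, f):
        return Prob([f(p) for p in self.particles])

    def flat_map(self, f):
        # A naive monadic bind would gather all n*n particles of the
        # inner distributions; instead draw ONE particle from each
        # inner distribution (a resampling step), keeping n fixed.
        return Prob([random.choice(f(p).particles) for p in self.particles])

# Hypothetical usage: two steps of a random walk; particle count stays at 3
prior = Prob([0.0, 1.0, 2.0])
step = lambda x: Prob([x - 1.0, x, x + 1.0])
out = prior.flat_map(step).flat_map(step)
print(len(out.particles))  # 3, not 9 or 27
```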
