This article is co-authored by Felipe Bandeira, Giselle Fretta, Thu Than, and Elbion Redenica. We additionally thank Prof. Carl Scheffler for his support.
Introduction
Parameter estimation has been for many years one of the most important topics in statistics. While frequentist approaches, such as Maximum Likelihood Estimation, used to be the gold standard, advances in computation have opened space for Bayesian methods. Estimating posterior distributions with MCMC samplers became increasingly common, but reliable inferences depend on a task that is far from trivial: making sure that the sampler, and the processes it executes under the hood, worked as expected. Keeping in mind what Lewis Carroll once wrote: "If you don't know where you're going, any road will take you there."
This article is meant to help data scientists evaluate an often overlooked aspect of Bayesian parameter estimation: the reliability of the sampling process. Throughout the sections, we combine simple analogies with technical rigor to ensure our explanations are accessible to data scientists with any level of familiarity with Bayesian methods. Although our implementations are in Python with PyMC, the concepts we cover are useful to anyone using an MCMC algorithm, from Metropolis-Hastings to NUTS.
Key Concepts
No data scientist or statistician would disagree with the importance of robust parameter estimation methods. Whether the objective is to make inferences or conduct simulations, having the capacity to model the data generation process is a crucial part of the work. For a long time, such estimations were mainly carried out using frequentist tools, such as Maximum Likelihood Estimation (MLE) or the well-known Least Squares optimization used in regressions. Yet, frequentist methods have clear shortcomings, such as the fact that they are focused on point estimates and do not incorporate prior knowledge that could improve estimates.
As an alternative to these tools, Bayesian methods have gained popularity over the past decades. They provide statisticians not only with point estimates of the unknown parameter but also with credible intervals for it, all of which are informed by the data and by the prior knowledge researchers hold. Originally, Bayesian parameter estimation was done through an adapted version of Bayes' theorem focused on unknown parameters (represented as θ) and known data points (represented as x). We can define P(θ|x), the posterior distribution of a parameter's value given the data, as:
\[
P(\theta|x) = \frac{P(x|\theta)\, P(\theta)}{P(x)}
\]
In this formula, P(x|θ) is the likelihood of the data given a parameter value, P(θ) is the prior distribution over the parameter, and P(x) is the evidence, which is computed by integrating over all possible values of the prior:
\[
P(x) = \int_\theta P(x, \theta)\, d\theta
\]
In some cases, due to the complexity of the calculations required, deriving the posterior distribution analytically was not possible. However, with advances in computation, running sampling algorithms (especially MCMC ones) to estimate posterior distributions has become easier, giving researchers a powerful tool for situations where analytical posteriors are not trivial to find. Yet, with such power also comes a great deal of responsibility to ensure that results make sense. This is where sampler diagnostics come in, offering a set of valuable tools to gauge 1) whether an MCMC algorithm is working well and, consequently, 2) whether the estimated distribution we see is an accurate representation of the true posterior distribution. But how can we know this?
How samplers work
Before diving into the technicalities of diagnostics, we shall cover how the process of sampling a posterior (especially with an MCMC sampler) works. In simple terms, we can think of a posterior distribution as a geographical area we haven't been to but need to know the topography of. How can we draw an accurate map of the region?
One of our favorite analogies comes from Ben Gilbert. Suppose that the unknown region is actually a house whose floorplan we wish to map. For some reason, we cannot directly visit the house, but we can send bees inside with GPS devices attached to them. If everything works as expected, the bees will fly around the house, and using their trajectories, we can estimate what the floor plan looks like. In this analogy, the floor plan is the posterior distribution, and the sampler is the group of bees flying around the house.
The reason we are writing this article is that, in some cases, the bees won't fly as expected. If they get stuck in a certain room for some reason (because someone dropped sugar on the floor, for example), the data they return won't be representative of the entire house; rather than visiting all rooms, the bees only visited a few, and our picture of what the house looks like will ultimately be incomplete. Similarly, when a sampler does not work correctly, our estimation of the posterior distribution is also incomplete, and any inference we draw based on it is likely to be wrong.
Markov Chain Monte Carlo (MCMC)
In technical terms, we call an MCMC process any algorithm that undergoes transitions from one state to another with certain properties. Markov Chain refers to the fact that the next state only depends on the current one (or that the bee's next location is only influenced by its current position, and not by all the places it has been before). Monte Carlo means that the next state is chosen randomly. MCMC methods like Metropolis-Hastings, Gibbs sampling, Hamiltonian Monte Carlo (HMC), and the No-U-Turn Sampler (NUTS) all operate by constructing Markov Chains (a sequence of steps) that are close to random and gradually explore the posterior distribution.
Now that you understand how a sampler works, let's dive into a practical scenario to help us explore sampling problems.
Case Study
Imagine that, in a faraway nation, a governor wants to understand more about public annual spending on healthcare by mayors of cities with less than 1 million inhabitants. Rather than looking at sheer frequencies, he wants to understand the underlying distribution explaining expenditure, and a sample of spending data is about to arrive. The problem is that two of the economists involved in the project disagree about how the model should look.
Model 1
The first economist believes that all cities spend similarly, with some variation around a certain mean. As such, he creates a simple model. Although the specifics of how the economist chose his priors are irrelevant to us, we do need to keep in mind that he is trying to approximate a Normal (unimodal) distribution.
\[
\begin{align*}
x_i &\sim \text{Normal}(\mu, \sigma^2) \quad \text{i.i.d. for all } i \\
\mu &\sim \text{Normal}(10, 2) \\
\sigma^2 &\sim \text{Uniform}(0, 5)
\end{align*}
\]
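For concreteness, here is a minimal sketch of how this model could be written in PyMC. The `data` array is a hypothetical placeholder (we do not have the economists' actual sample), and all variable names are our own:

```python
import numpy as np
import pymc as pm

# Hypothetical placeholder for the spending sample (the real data is not available)
data = np.random.default_rng(42).normal(10, 1.5, size=200)

with pm.Model() as model_1:
    mu = pm.Normal("mu", mu=10, sigma=2)             # mu ~ Normal(10, 2), 2 read as the sd
    sigma2 = pm.Uniform("sigma2", lower=0, upper=5)  # sigma^2 ~ Uniform(0, 5)
    # likelihood: x_i ~ Normal(mu, sigma^2), i.i.d.
    x = pm.Normal("x", mu=mu, sigma=pm.math.sqrt(sigma2), observed=data)
    idata_1 = pm.sample(draws=2000, chains=4, random_seed=42)
```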
Model 2
The second economist disagrees, arguing that spending is more complex than his colleague believes. He believes that, given ideological differences and budget constraints, there are two kinds of cities: those that do their best to spend very little and those that are not afraid of spending a lot. As such, he creates a slightly more complex model, using a mixture of normals to reflect his belief that the true distribution is bimodal.
\[
\begin{align*}
x_i &\sim \text{Normal-Mixture}([\omega, 1-\omega],\, [m_1, m_2],\, [s_1^2, s_2^2]) \quad \text{i.i.d. for all } i \\
m_j &\sim \text{Normal}(2.3, 0.5^2) \quad \text{for } j = 1, 2 \\
s_j^2 &\sim \text{Inverse-Gamma}(1, 1) \quad \text{for } j = 1, 2 \\
\omega &\sim \text{Beta}(1, 1)
\end{align*}
\]
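This model can be sketched with PyMC's `NormalMixture`, again under the placeholder `data` from the Model 1 sketch; names like `w`, `m`, `s2`, and `idata_2` are our own:

```python
import pymc as pm

with pm.Model() as model_2:
    w = pm.Beta("w", alpha=1, beta=1)                     # omega ~ Beta(1, 1)
    m = pm.Normal("m", mu=2.3, sigma=0.5, shape=2)        # m_j ~ Normal(2.3, 0.5^2)
    s2 = pm.InverseGamma("s2", alpha=1, beta=1, shape=2)  # s_j^2 ~ Inverse-Gamma(1, 1)
    # likelihood: x_i ~ Normal-Mixture([w, 1-w], [m_1, m_2], [s_1^2, s_2^2])
    x = pm.NormalMixture(
        "x",
        w=pm.math.stack([w, 1 - w]),
        mu=m,
        sigma=pm.math.sqrt(s2),
        observed=data,  # placeholder sample from the Model 1 sketch
    )
    idata_2 = pm.sample(draws=2000, chains=4, random_seed=42)
```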
After the data arrives, each economist runs an MCMC algorithm to estimate their desired posteriors, which will be a reflection of reality (1) if their assumptions are true and (2) if the sampler worked correctly. The first if, a discussion about assumptions, shall be left to the economists. However, how can they know whether the second if holds? In other words, how can they be sure that the sampler worked correctly and, as a consequence, their posterior estimations are unbiased?
Sampler Diagnostics
To evaluate a sampler's performance, we can explore a small set of metrics that reflect different aspects of the estimation process.
Quantitative Metrics
R-hat (Potential Scale Reduction Factor)
In simple terms, R-hat evaluates whether bees that started at different places have all explored the same rooms by the end of the day. To estimate the posterior, an MCMC algorithm uses multiple chains (or bees) that start at random locations. R-hat is the metric we use to assess the convergence of the chains. It measures whether multiple MCMC chains have mixed well (i.e., if they have sampled the same topography) by comparing the variance of samples within each chain to the variance of the sample means across chains. Intuitively, this means that
\[
\hat{R} = \sqrt{\frac{\text{Variance Between Chains}}{\text{Variance Within Chains}}}
\]
If R-hat is close to 1.0 (or below 1.01), it means that the variance within each chain is very similar to the variance between chains, suggesting that they have converged to the same distribution. In other words, the chains are behaving similarly and are also indistinguishable from one another. This is precisely what we see after sampling the posterior of the first model, shown in the last column of the table below:
The R-hat from the second model, however, tells a different story. The fact that we have such large R-hat values indicates that, at the end of the sampling process, the different chains had not converged yet. In practice, this means that the distributions they explored and returned were different, or that each bee created a map of a different room of the house. This fundamentally leaves us without a clue of how the pieces connect or what the complete floor plan looks like.

Given that our R-hat readouts were large, we know something went wrong with the sampling process in the second model. However, even if the R-hat had turned out within acceptable levels, this would not give us certainty that the sampling process worked. R-hat is just a diagnostic tool, not a guarantee. Sometimes, even if your R-hat readout is lower than 1.01, the sampler might not have properly explored the full posterior. This happens when multiple bees start their exploration in the same room and remain there. Likewise, if you are using a small number of chains, and if your posterior happens to be multimodal, there is a probability that all chains started in the same mode and failed to explore other peaks.
The R-hat readout reflects convergence, not completion. In order to have a more comprehensive idea, we need to check other diagnostic metrics as well.
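As a practical note, R-hat values like the ones discussed above can be computed with ArviZ. The snippet below is a sketch that assumes the hypothetical `idata_1` and `idata_2` objects from the earlier model sketches:

```python
import arviz as az

# R-hat per parameter; values near 1.0 (below ~1.01) suggest convergence
print(az.rhat(idata_1))  # first model: expected to sit close to 1.0
print(az.rhat(idata_2))  # second model: expected to be visibly above 1.01

# az.summary also reports r_hat alongside the ESS metrics in a single table
print(az.summary(idata_1)[["mean", "ess_bulk", "ess_tail", "r_hat"]])
```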
Effective Sample Size (ESS)
When explaining what MCMC was, we mentioned that "Monte Carlo" refers to the fact that the next state is chosen randomly. This does not necessarily mean that the states are fully independent. Even though the bees choose their next step at random, these steps are still correlated to some extent. If a bee is exploring a living room at time t=0, it will probably still be in the living room at time t=1, even though it is in a different part of the same room. Due to this natural connection between samples, we say these two data points are autocorrelated.
Due to their nature, MCMC methods inherently produce autocorrelated samples, which complicates statistical analysis and requires careful evaluation. In statistical inference, we often assume independent samples to ensure that the estimates of uncertainty are accurate, hence the need for uncorrelated samples. If two data points are too similar to each other, the correlation reduces their effective information content. Mathematically, the formula below represents the autocorrelation function between two time points (t1 and t2) in a random process:
\[
R_{XX}(t_1, t_2) = E[X_{t_1} \overline{X_{t_2}}]
\]
where E is the expected value operator and the bar denotes the complex conjugate. In MCMC sampling, this is crucial because high autocorrelation means that new samples don't teach us anything different from the old ones, effectively reducing the sample size we have. Unsurprisingly, the metric that reflects this is called Effective Sample Size (ESS), and it helps us determine how many truly independent samples we have.
As hinted previously, the effective sample size accounts for autocorrelation by estimating how many truly independent samples would provide the same information as the autocorrelated samples we have. Mathematically, for a parameter θ, the ESS is defined as:
\[
\text{ESS} = \frac{n}{1 + 2 \sum_{k=1}^{\infty} \rho(\theta)_k}
\]
where n is the total number of samples and ρ(θ)_k is the autocorrelation at lag k for parameter θ.
Generally, for ESS readouts, the higher, the better. This is what we see in the readout for the first model. Two common ESS variations are Bulk-ESS, which assesses mixing in the central part of the distribution, and Tail-ESS, which focuses on the efficiency of sampling the distribution's tails. Both tell us whether our model accurately reflects the central tendency and the credible intervals.
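Both variants can be computed directly with ArviZ (a sketch, again assuming the hypothetical `idata_1` object from the model sketches above):

```python
import arviz as az

# Effective sample sizes per parameter; higher is better
print(az.ess(idata_1, method="bulk"))  # mixing in the center of the distribution
print(az.ess(idata_1, method="tail"))  # mixing in the tails
```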

In contrast, the readouts for the second model are very bad. Generally, we want to see readouts that are at least 1/10 of the total sample size. In this case, given that each chain sampled 2000 observations, we should expect ESS readouts of at least 800 (from the total size of 8000 samples across 4 chains of 2000 samples each), which is not what we observe.

Visual Diagnostics
Apart from the numerical metrics, our understanding of sampler performance can be deepened through the use of diagnostic plots. The main ones are rank plots, trace plots, and pair plots.
Rank Plots
A rank plot helps us identify whether the different chains have explored all of the posterior distribution. If we once again think of the bee analogy, rank plots tell us which bees explored which parts of the house. Therefore, to evaluate whether the posterior was explored equally by all chains, we observe the shape of the rank plots produced by the sampler. Ideally, we want the distribution of all chains to look roughly uniform, like in the rank plots generated after sampling the first model. Each color below represents a chain (or bee):

Under the hood, a rank plot is produced with a simple sequence of steps. First, we run the sampler and let it sample from the posterior of each parameter. In our case, we are sampling posteriors for parameters m and s of the first model. Then, parameter by parameter, we take all samples from all chains, put them together, and order them from smallest to largest. We then ask ourselves, for each sample, which chain did it come from? This allows us to create plots like the ones we see above.
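In code, ArviZ builds these plots for us (a sketch, using the variable names from our Model 1 sketch):

```python
import arviz as az

# One rank plot per parameter; roughly uniform bars across chains indicate good mixing
az.plot_rank(idata_1, var_names=["mu", "sigma2"])
```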
In contrast, bad rank plots are easy to spot. Unlike the previous example, the distributions from the second model, shown below, are not uniform. From the plots, what we interpret is that each chain, after beginning at different random locations, got stuck in a region and did not explore the entirety of the posterior. Consequently, we cannot make inferences from the results, as they are unreliable and not representative of the true posterior distribution. This would be equivalent to having four bees that started in different rooms of the house and got stuck somewhere during their exploration, never covering the entirety of the property.

KDE and Hint Plots
Much like R-hat, hint plots assist us assess the convergence of MCMC samples by visualizing how the algorithm explores the parameter area over time. PyMC supplies two varieties of hint plots to diagnose mixing points: Kernel Density Estimate (KDE) plots and iteration-based hint plots. Every of those serves a definite function in evaluating whether or not the sampler has correctly explored the goal distribution.
The KDE plot (normally on the left) estimates the posterior density for every chain, the place every line represents a separate chain. This permits us to verify whether or not all chains have converged to the identical distribution. If the KDEs overlap, it means that the chains are sampling from the identical posterior and that mixing has occurred. Alternatively, the hint plot (normally on the precise) visualizes how parameter values change over MCMC iterations (steps), with every line representing a unique chain. A well-mixed sampler will produce hint plots that look noisy and random, with no clear construction or separation between chains.
Utilizing the bee analogy, hint plots may be regarded as snapshots of the âoptionsâ of the home at totally different areas. If the sampler is working accurately, the KDEs within the left plot ought to align intently, displaying that every one bees (chains) have explored the home equally. In the meantime, the precise plot ought to present extremely variable traces that mix collectively, confirming that the chains are actively transferring via the area reasonably than getting caught in particular areas.
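Generating both panels takes a single call (a sketch, assuming the hypothetical `idata_1` from earlier):

```python
import arviz as az

# Left: one KDE per chain; right: sampled values per iteration for each chain
az.plot_trace(idata_1)
```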

However, if your sampler has poor mixing or convergence issues, you will see something like the figure below. In this case, the KDEs will not overlap, meaning that different chains have sampled from different distributions rather than a shared posterior. The trace plot will also show structured patterns instead of random noise, indicating that chains are stuck in different regions of the parameter space and failing to fully explore it.

By using trace plots alongside the other diagnostics, you can identify sampling issues and determine whether your MCMC algorithm is effectively exploring the posterior distribution.
Pair Plots
A third type of plot that is often useful for diagnostics is the pair plot. In models where we want to estimate the posterior distribution of multiple parameters, pair plots allow us to observe how different parameters are correlated. To understand how such plots are formed, think again about the bee analogy. If you imagine that we will create a plot with the width and length of the house, each "step" that the bees take can be represented by an (x, y) combination. Likewise, each parameter of the posterior is represented as a dimension, and we create scatter plots showing where the sampler walked using parameter values as coordinates. Here, we are plotting each unique pair (x, y), resulting in the scatter plot you see in the middle of the image below. The one-dimensional plots you see on the edges are the marginal distributions over each parameter, giving us additional information on the sampler's behavior when exploring them.
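Such a figure can be produced with one ArviZ call (a sketch, using the names from our Model 1 sketch):

```python
import arviz as az

# Scatter of sampled (mu, sigma2) pairs, with marginal distributions on the edges
az.plot_pair(idata_1, var_names=["mu", "sigma2"], marginals=True)
```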
Take a look at the pair plot from the first model.

Each axis represents one of the two parameters whose posteriors we are estimating. For now, let's focus on the scatter plot in the middle, which shows the parameter combinations sampled from the posterior. The fact that we have a very even distribution means that, for any particular value of m, there was a range of values of s that were equally likely to be sampled. Additionally, we don't see any correlation between the two parameters, which is usually good! There are cases where we would expect some correlation, such as when our model involves a regression line. However, in this instance, we have no reason to believe the two parameters should be highly correlated, so the fact that we don't observe unusual behavior is positive news.
Now, take a look at the pair plots from the second model.

Given that this model has five parameters to estimate, we naturally have a greater number of plots, since we are analyzing them pair-wise. However, they look odd compared to the previous example. Namely, rather than having an even distribution of points, the samples here either seem to be divided across two regions or seem somewhat correlated. This is another way of visualizing what the rank plots have shown: the sampler did not explore the full posterior distribution. Below, we isolated the top left plot, which contains the samples from m0 and m1. Unlike the plot from model 1, here we see that the value of one parameter greatly influences the value of the other. If we sampled m1 around 2.5, for example, m0 is likely to be sampled from a very narrow range around 1.5.

Certain shapes can be observed in problematic pair plots relatively frequently. Diagonal patterns, for example, indicate a high correlation between parameters. Banana shapes are often associated with parametrization issues, frequently being present in models with tight priors or constrained parameters. Funnel shapes might indicate hierarchical models with bad geometry. When we have two separate islands, like in the plot above, this can indicate that the posterior is bimodal AND that the chains haven't mixed well. However, keep in mind that these shapes might indicate problems, but do not necessarily do so. It's up to the data scientist to examine the model and determine which behaviors are expected and which ones are not!
Some Fixing Strategies
When your diagnostics indicate sampling problems, whether concerning R-hat values, low ESS, unusual rank plots, separated trace plots, or strange parameter correlations in pair plots, several strategies can help you address the underlying issues. Sampling problems typically stem from the target posterior being too complex for the sampler to explore efficiently. Complex target distributions might have:
- Multiple modes (peaks) that the sampler struggles to move between
- Irregular shapes with narrow "corridors" connecting different regions
- Areas of drastically different scales (like the "neck" of a funnel)
- Heavy tails that are difficult to sample accurately
In the bee analogy, these complexities represent houses with unusual floor plans: disconnected rooms, extremely narrow hallways, or areas that change dramatically in size. Just as bees might get trapped in specific regions of such houses, MCMC chains can get stuck in certain areas of the posterior.


To help the sampler in its exploration, there are simple strategies we can use.
Strategy 1: Reparameterization
Reparameterization is particularly effective for hierarchical models and distributions with challenging geometries. It involves transforming your model's parameters to make them easier to sample. Back to the bee analogy, imagine the bees are exploring a house with a peculiar layout: a spacious living room that connects to the kitchen through a very, very narrow hallway. One aspect we hadn't mentioned before is that the bees have to fly in the same way through the entire house. That means that if we dictate that the bees should use large "steps," they will explore the living room very well but hit the walls in the hallway head-on. Likewise, if their steps are small, they will explore the narrow hallway well, but take forever to cover the entire living room. The difference in scales, which is natural to the house, makes the bees' job harder.
A classic example that represents this scenario is Neal's funnel, where the scale of one parameter depends on another:
\[
p(y, x) = \text{Normal}(y|0, 3) \times \prod_{n=1}^{9} \text{Normal}(x_n|0, e^{y/2})
\]

We can see that the scale of x depends on the value of y. To fix this problem, we can define x and y as independent standard Normals and then transform these variables into the desired funnel distribution. Instead of sampling directly like this:
\[
\begin{align*}
y &\sim \text{Normal}(0, 3) \\
x &\sim \text{Normal}(0, e^{y/2})
\end{align*}
\]
you can reparameterize to sample from standard Normals first:
\[
\begin{align*}
y_{\text{raw}} &\sim \text{Normal}(0, 1) \\
x_{\text{raw}} &\sim \text{Normal}(0, 1) \\
y &= 3 y_{\text{raw}} \\
x &= e^{y/2} x_{\text{raw}}
\end{align*}
\]
This technique separates the hierarchical parameters and makes sampling more efficient by eliminating the dependency between them.
Reparameterization is like redesigning the house such that instead of forcing the bees to explore a single narrow hallway, we create a new layout where all passages have comparable widths. This helps the bees use a consistent flying pattern throughout their exploration.
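In PyMC, the two formulations look roughly like this (a sketch; `funnel_centered` and `funnel_noncentered` are our own names):

```python
import pymc as pm

# Centered form: hard to sample, since the scale of x depends on y
with pm.Model() as funnel_centered:
    y = pm.Normal("y", mu=0, sigma=3)
    x = pm.Normal("x", mu=0, sigma=pm.math.exp(y / 2), shape=9)

# Non-centered form: sample standard Normals, then transform them deterministically
with pm.Model() as funnel_noncentered:
    y_raw = pm.Normal("y_raw", mu=0, sigma=1)
    x_raw = pm.Normal("x_raw", mu=0, sigma=1, shape=9)
    y = pm.Deterministic("y", 3 * y_raw)
    x = pm.Deterministic("x", pm.math.exp(y / 2) * x_raw)
```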
Strategy 2: Handling Heavy-tailed Distributions
Heavy-tailed distributions like the Cauchy and Student-T present challenges for samplers and the ideal step size. Their tails require larger step sizes than their central regions (similar to very long hallways that require the bees to travel long distances), which creates a problem:
- Small step sizes lead to inefficient sampling in the tails
- Large step sizes cause too many rejections in the center

Reparameterization solutions include the following (see the sketch after this list):
- For the Cauchy: defining the variable as a transformation of a Uniform distribution using the Cauchy inverse CDF
- For the Student-T: using a Gamma-Mixture representation
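As an illustration of the first idea, a standard Cauchy can be obtained by pushing a Uniform variable through the Cauchy inverse CDF (a sketch; `cauchy_reparam` is our own name):

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

# If u ~ Uniform(0, 1), then tan(pi * (u - 0.5)) follows a standard Cauchy(0, 1)
with pm.Model() as cauchy_reparam:
    u = pm.Uniform("u", lower=0, upper=1)
    x = pm.Deterministic("x", pt.tan(np.pi * (u - 0.5)))
```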
Strategy 3: Hyperparameter Tuning
Sometimes the solution lies in adjusting the sampler's hyperparameters (a code sketch follows this list):
- Increase total iterations: the simplest approach; give the sampler more time to explore.
- Increase the target acceptance rate (adapt_delta): reduces divergent transitions (try 0.9 instead of the default 0.8 for complex models, for example).
- Increase max_treedepth: allows the sampler to take more steps per iteration.
- Extend the warmup/adaptation phase: gives the sampler more time to adapt to the posterior geometry.
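In PyMC, several of these knobs are arguments to `pm.sample` (a sketch; note that PyMC calls the target acceptance rate `target_accept` rather than Stan's `adapt_delta`):

```python
import pymc as pm

with model_2:  # hypothetical model context from the earlier sketch
    idata_2_tuned = pm.sample(
        draws=4000,         # more iterations to explore the posterior
        tune=2000,          # longer warmup/adaptation phase
        chains=4,
        target_accept=0.9,  # higher acceptance target (default is 0.8)
    )
```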
Remember that while these adjustments may improve your diagnostic metrics, they often address symptoms rather than underlying causes. The previous strategies (reparameterization and better proposal distributions) typically offer more fundamental solutions.
Strategy 4: Better Proposal Distributions
This solution applies to function fitting processes, rather than sampling estimations of the posterior. It basically asks the question: "I'm currently here in this landscape. Where should I jump to next so that I explore the full landscape, or how do I know that the next jump is the jump I should make?" Thus, choosing a good distribution means making sure that the sampling process explores the full parameter space instead of just a specific region. A good proposal distribution should:
- Have substantial probability mass where the target distribution does.
- Allow the sampler to make jumps of the appropriate size.
One common choice for the proposal distribution is the Gaussian (Normal) distribution with mean μ and standard deviation σ, the scale of the distribution, which we can tune to decide how far to jump from the current position to the next. If we choose the scale of the proposal distribution to be too small, it might either take too long to explore the entire posterior or get stuck in a region and never explore the full distribution. But if the scale is too large, you might never get to explore some regions, jumping over them. It's like playing ping-pong where we only reach the two edges but not the middle.
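To make the role of the proposal scale concrete, here is a toy random-walk Metropolis sampler; `log_target` stands for any log-density we want to sample, and all names are our own:

```python
import numpy as np

def metropolis(log_target, n_samples, sigma, x0=0.0, seed=0):
    """Random-walk Metropolis with a Gaussian proposal of scale `sigma`."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        proposal = x + rng.normal(0.0, sigma)  # jump drawn from Normal(x, sigma^2)
        # accept with probability min(1, target(proposal) / target(x))
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

# Standard Normal target: sigma=1.0 mixes well; try sigma=0.01 (the chain crawls)
# or sigma=50 (almost everything is rejected) to see the trade-off in action
draws = metropolis(lambda z: -0.5 * z**2, n_samples=5000, sigma=1.0)
```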
Improve Prior Specification
When all else fails, rethink your model's prior specifications. Vague or weakly informative priors (like uniformly distributed priors) can sometimes lead to sampling difficulties. More informative priors, when justified by domain knowledge, can help guide the sampler toward more reasonable regions of the parameter space. Sometimes, despite your best efforts, a model may remain challenging to sample effectively. In such cases, consider whether a simpler model might achieve similar inferential goals while being more computationally tractable. The best model is often not the most complex one, but the one that balances complexity with reliability. The table below shows a summary of fixing strategies for different issues.
| Diagnostic Signal | Potential Issue | Recommended Fix |
| --- | --- | --- |
| High R-hat | Poor mixing between chains | Increase iterations, adjust the step size |
| Low ESS | High autocorrelation | Reparameterization, increase adapt_delta |
| Non-uniform rank plots | Chains stuck in different regions | Better proposal distribution, start with multiple chains |
| Separated KDEs in trace plots | Chains exploring different distributions | Reparameterization |
| Funnel shapes in pair plots | Hierarchical model issues | Non-centered reparameterization |
| Disjoint clusters in pair plots | Multimodality with poor mixing | Adjusted distribution, simulated annealing |
Conclusion
Assessing the quality of MCMC sampling is crucial for guaranteeing reliable inference. In this article, we explored key diagnostic metrics such as R-hat, ESS, rank plots, trace plots, and pair plots, discussing how each helps determine whether the sampler is performing properly.
If there is one takeaway we want you to keep in mind, it is that you should always run diagnostics before drawing conclusions from your samples. No single metric provides a definitive answer; each serves as a tool that highlights potential issues rather than proving convergence. When problems arise, strategies such as reparameterization, hyperparameter tuning, and prior specification can help improve sampling efficiency.
By combining these diagnostics with thoughtful modeling decisions, you can ensure a more robust analysis, reducing the risk of misleading inferences due to poor sampling behavior.
References
B. Gilbert, Bob's bees: the importance of using multiple bees (chains) to judge MCMC convergence (2018), YouTube
Chi-Feng, MCMC demo (n.d.), GitHub
D. Simpson, Maybe it's time to let the old ways die; or We broke R-hat so now we have to fix it. (2019), Statistical Modeling, Causal Inference, and Social Science
M. Taboga, Markov Chain Monte Carlo (MCMC) methods (2021), Lectures on probability theory and mathematical statistics. Kindle Direct Publishing.
T. Wiecki, MCMC Sampling for Dummies (2024), twiecki.io
Stan User's Guide, Reparameterization (n.d.), Stan Documentation