Sometimes in statistics we work with unknown probability distributions, or with a pdf that is computationally expensive to evaluate. The ARS (Adaptive Rejection Sampling) method arises as a way to solve this problem. Its biggest advantage is the small number of evaluations of \(h(x)\), where \(h(x)\) is the logarithm of the pdf of interest \(f(x)\) (normalized or not). The idea is to use indirect simulation to sample from a log-concave, univariate distribution (ARS requires these two assumptions). To achieve this goal, the ARS method computes two functions: an upper and a lower hull for \(h(x)\). Building them requires a set of initial points and the boundaries of the distribution. For example, to sample from a gamma distribution the boundaries could be 0 and \(\infty\).
The upper hull is built from the tangent lines at the evaluated points, and the lower hull from the chords between consecutive points. The idea is to squeeze \(h(x)\) between these two hulls, which requires some iterations. In each iteration one observation is sampled from the upper hull and is accepted or rejected according to two tests. The first is the squeezing test: the ratio of the lower and upper hulls evaluated at the sampled point. If the two hulls are close there, the approximation is good enough and the observation is accepted; otherwise a rejection test is still performed. This second test is the ratio of the objective function and the upper hull at the sampled point (so one extra evaluation of the objective function is necessary). Again, if the two are close the approximation is good and the sampled point is accepted; otherwise it is rejected.
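To make the two hulls concrete, here is a minimal Python sketch (the app itself is written in R, so this is an illustration, not its code) of the tangent-based upper hull and chord-based lower hull. The target \(h(x) = -x^2/2\) (a standard normal log-density) and the initial points are assumptions chosen purely for the example:

```python
import math

# Illustrative log-concave target: standard normal log-density
def h(x):
    return -x * x / 2.0

def dh(x):  # derivative of h, needed for the tangent lines
    return -x

def upper_hull(x, pts):
    # Every tangent line of a concave h lies above the function,
    # so the pointwise minimum of the tangents is a valid upper hull.
    return min(h(p) + dh(p) * (x - p) for p in pts)

def lower_hull(x, pts):
    # The chord between the two abscissae that bracket x lies below
    # a concave h; outside the abscissae the lower hull is -infinity.
    for a, b in zip(pts, pts[1:]):
        if a <= x <= b:
            t = (x - a) / (b - a)
            return (1 - t) * h(a) + t * h(b)
    return float("-inf")

pts = [-2.0, 0.5, 2.0]   # hypothetical initial points
x = 1.0
# Sandwich property: lower_hull(x) <= h(x) <= upper_hull(x)
print(lower_hull(x, pts), h(x), upper_hull(x, pts))
```

At \(x = 1\) this gives lower hull \(-0.75\), target \(-0.5\), upper hull \(-0.375\): the target is squeezed between the two hulls, and adding more points tightens the squeeze.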
Whenever the squeezing test fails, \(h(x)\) is evaluated at the sampled point, and that evaluation is used to refine the upper and lower hulls regardless of the outcome of the rejection test: if the rejection test passes, the hulls are refined and the observation is accepted; if it fails, the hulls are still refined but the observation is discarded.
The number of evaluations of \(h(x)\) decreases over the iterations: once the envelope approximation becomes good enough, most points pass the squeezing test and \(h(x)\) rarely needs to be evaluated again.
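Putting the pieces together, here is a self-contained Python sketch of the whole ARS loop on a bounded support (bounded so the envelope always has finite mass; this matches the app's truncated-distribution setting). The target, a standard normal truncated to \([-3, 3]\), and the initial points are illustrative assumptions, not the app's actual implementation:

```python
import math
import random

def ars(h, dh, pts, a, b, n, seed=1):
    """Adaptive rejection sampling of n points from exp(h) on [a, b].

    h must be concave on [a, b]; dh is its derivative. Returns the
    samples and the number of h evaluations actually performed.
    """
    rng = random.Random(seed)
    pts = sorted(pts)
    H = {p: h(p) for p in pts}       # cached log-density values
    D = {p: dh(p) for p in pts}      # cached derivatives
    n_eval = len(pts)

    def upper(x):
        # pointwise minimum of the tangents: the upper hull
        return min(H[p] + D[p] * (x - p) for p in pts)

    def lower(x):
        # chord between the bracketing points: the lower hull
        for p, q in zip(pts, pts[1:]):
            if p <= x <= q:
                t = (x - p) / (q - p)
                return (1 - t) * H[p] + t * H[q]
        return float("-inf")

    def sample_upper():
        # The exponentiated upper hull is piecewise exponential, so it can
        # be sampled exactly: pick a segment by mass, then invert its CDF.
        z = [a]                      # tangent intersection points
        for p, q in zip(pts, pts[1:]):
            z.append((H[q] - H[p] + p * D[p] - q * D[q]) / (D[p] - D[q]))
        z.append(b)
        masses = []
        for j, p in enumerate(pts):
            d = D[p]
            ul = H[p] + d * (z[j] - p)
            ur = H[p] + d * (z[j + 1] - p)
            if abs(d) < 1e-12:
                masses.append(math.exp(ul) * (z[j + 1] - z[j]))
            else:
                masses.append((math.exp(ur) - math.exp(ul)) / d)
        r, acc, j = rng.random() * sum(masses), 0.0, len(masses) - 1
        for k, m in enumerate(masses):
            acc += m
            if r <= acc:
                j = k
                break
        d, v = D[pts[j]], rng.random()
        if abs(d) < 1e-12:
            return z[j] + v * (z[j + 1] - z[j])
        return z[j] + math.log1p(v * math.expm1(d * (z[j + 1] - z[j]))) / d

    out = []
    while len(out) < n:
        x = sample_upper()
        w = rng.random()
        u = upper(x)
        if w <= math.exp(lower(x) - u):
            out.append(x)            # squeezing test passed: no h evaluation
        else:
            hx = h(x)                # one extra evaluation of h
            n_eval += 1
            if w <= math.exp(hx - u):
                out.append(x)        # rejection test passed
            # either way, refine both hulls with the new point
            pts = sorted(pts + [x])
            H[x], D[x] = hx, dh(x)
    return out, n_eval

# Illustrative target: standard normal truncated to [-3, 3]
samples, n_eval = ars(lambda x: -x * x / 2, lambda x: -x,
                      [-1.0, 0.0, 1.5], -3.0, 3.0, n=500)
print(n_eval)   # number of h evaluations (typically far fewer than 500)
```

Note how the evaluation count illustrates the point above: most of the 500 draws are accepted by the squeezing test alone, so \(h\) is only evaluated on the handful of draws that fail it, each of which also tightens the envelope.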
This methodology is the core of the ARMS method, which adds a Metropolis-Hastings step. ARMS is commonly used within the Gibbs sampling algorithm for Bayesian models, and it does not require the log-concavity assumption.
With this app you can see how the ARS method works and how it envelopes the distribution and the log-distribution at each iteration.
An online version of the application can be found on my shinyapps account here: Shinyverse
Here are the steps to set up and use this app:
- Select one distribution from the list. All of them are log-concave and univariate.
- Set the parameters of the chosen distribution.
- Select the boundaries of the distribution. If you change the boundaries, a truncated distribution is simulated.
- Select the number of points to sample with the method.
- Select the number of initial points (to build the upper and lower hull).
- Push the RUN! button.
After pressing RUN!, a new control and three graphs will appear.
The left graph shows the logarithm of the distribution together with the upper and lower hulls built from the initial points (set previously). If you chose only a few initial points, the hulls will be clearly different from the log-density.
The middle graph is the histogram of the sampled observations (with the empirical density overlaid for comparison). At first you will see only the density, because it is the first iteration and nothing has been sampled yet.
The right graph shows the distribution and the upper and lower hulls on the original scale; it is the exponential of the functions in the left graph.
Now you can set the iteration number with the new control on the panel to watch, step by step, how the method generates the sample. The evolution of the acceptance rate appears in a new graph below the others.
Did you find a mistake? Please let me know.