Using the Box-Muller method
In my previous Medium story (https://oscarnieves100.medium.com/how-to-simulate-random-numbers-dad35905ecdb) I discussed how to simulate random numbers in Python using a mathematical technique known as "inverse sampling". In it, I demonstrated how, using nothing but uniformly distributed random numbers on the interval [0, 1], we can produce a set of random numbers drawn from an exponential distribution, as shown in the Figure below:
I also discussed how this inverse sampling technique can be applied to a variety of other probability distributions. However, it is essential to point out that the method only works if the cumulative distribution function of the desired random variable can be expressed in closed form (that is, in terms of elementary functions) and is also invertible.
In the case of normal random variables, the probability density function is given by

f(x) = 1/(σ√(2π)) · exp(−(x − μ)²/(2σ²))
from which we can sample random numbers X ~ N(μ, σ²), that is, normally distributed numbers with mean μ and variance σ². The cumulative distribution function associated with this is calculated by taking the probability that X will be less than or equal to some value z. Using the techniques for evaluating Gaussian integrals in my other story (https://oscarnieves100.medium.com/demystifying-the-gaussian-integral-87904e4b5dea) we arrive at the result

Φ(z) = P(X ≤ z) = ½[1 + erf((z − μ)/(σ√2))]
where erf(x) is the error function. Equivalently, in terms of the complementary error function erfc(x) = 1 − erf(x), this becomes

Φ(z) = ½ erfc((μ − z)/(σ√2))
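Although Φ(z) has no elementary inverse, evaluating it in the forward direction is easy, since Python's standard library provides the error function. A quick numerical sanity check of the formula above (the values of μ and σ here are just the standard-normal defaults):

```python
import math

def normal_cdf(z: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Phi(z) = 0.5 * (1 + erf((z - mu) / (sigma * sqrt(2))))."""
    return 0.5 * (1.0 + math.erf((z - mu) / (sigma * math.sqrt(2.0))))

print(normal_cdf(0.0))   # 0.5 by symmetry about the mean
print(normal_cdf(1.96))  # roughly 0.975 for the standard normal
```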
The next step in the inverse sampling method would be to invert Φ(z). Unfortunately, erf(x) has no closed-form inverse, which means we can't use this method to sample normally distributed numbers. So what do we do?
This is an example of where the inverse sampling method fails, but that doesn't mean we are totally hopeless. There are in fact many algorithms for sampling random numbers from a normal distribution. Today I will focus on one known as the Box-Muller method. The Box-Muller transform is covered in detail elsewhere…
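As a preview, the basic (non-polar) form of the transform can be sketched as follows. It converts pairs of independent Uniform(0, 1) samples into pairs of independent standard normal samples; the sample size and seed below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def box_muller(n: int) -> np.ndarray:
    """Basic Box-Muller transform: given independent U1, U2 ~ Uniform(0, 1),
    Z1 = sqrt(-2 ln U1) * cos(2*pi*U2) and Z2 = sqrt(-2 ln U1) * sin(2*pi*U2)
    are independent N(0, 1) random variables. Returns 2*n samples."""
    # rng.uniform samples from [0, 1); use 1 - u so the log argument is in (0, 1]
    u1 = 1.0 - rng.uniform(size=n)
    u2 = rng.uniform(size=n)
    r = np.sqrt(-2.0 * np.log(u1))
    z1 = r * np.cos(2.0 * np.pi * u2)
    z2 = r * np.sin(2.0 * np.pi * u2)
    return np.concatenate([z1, z2])

z = box_muller(50_000)
print(z.mean(), z.std())  # should be close to 0 and 1
```

Samples with mean μ and standard deviation σ then follow from the shift-and-scale X = μ + σZ.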