# How to simulate random numbers in Python

---

A gentle introduction

Random numbers are used in a lot of applications. They appear throughout statistics, probability theory, data science, signal processing, statistical physics, machine learning, actuarial science, quantitative finance, and more. Yet they can be a bit challenging to understand due to their… well, random nature.

First, we need to come clean about something. Random numbers generated by computers are not truly random; they are referred to as “pseudo-random” numbers. The reason is that they are produced by deterministic algorithms: starting from an initial value called a seed, the generator applies a fixed recipe that churns out a long sequence of numbers which merely look random. Programs like Matlab and Python typically seed the generator from something that changes constantly, such as the system clock, so you get a different “random number” each time you press Enter simply because the seed has changed. However, since the generator only has a finite amount of internal state, the sequence must eventually repeat, and re-using the same seed reproduces exactly the same numbers. There are of course more sophisticated methods for generating random numbers, but the basis of nearly all random number generation is something called “the uniform distribution”.
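A quick sketch makes the “pseudo” part concrete. Using Python’s built-in `random` module, fixing the seed reproduces the exact same sequence of “random” numbers, which is only possible because the generator is deterministic:

```python
import random

# Seed the generator with a fixed value, then draw three numbers.
random.seed(42)
first_run = [random.random() for _ in range(3)]

# Re-seed with the same value and draw again: the "random"
# sequence is reproduced exactly, revealing the determinism.
random.seed(42)
second_run = [random.random() for _ in range(3)]

print(first_run == second_run)  # prints True
```

Omitting the `seed()` call (or passing no argument) lets Python seed from a system source instead, which is why unseeded runs look different each time.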

The uniform distribution is a probability distribution in which every real number between 0 and 1 is equally likely to be sampled. In essence, there are infinitely many possible numbers to choose from, but they are always confined to the interval [0, 1]. This distribution is used as the starting point for drawing random numbers from other probability distributions. But how exactly? Here is how we do it.

First, suppose we want to draw a random number from a uniform distribution, but we want it to lie on a specific interval [a, b], where a and b can be any real numbers we want. Suppose X ~ U(0,1) (meaning X is a random number drawn from a uniform distribution between 0 and 1). Then, to stretch this onto the interval [a, b], we can use: