Computational complexity simply explained
In computer science and software engineering, we use the term “algorithmic complexity” or “computational complexity” to refer to how much time or space (e.g. memory) it takes to run a certain task or operation (or sequence of operations). Basically, it gives us an idea of how efficient (or inefficient) an algorithm is, and how many hardware resources would be required to execute a certain job. There are two types of computational complexity: time complexity and space complexity. In this article, I will explain how to estimate the computational complexity of different algorithms, using some Python examples.
To quantify complexity, we use something called Big-O notation, which works as follows: suppose that a certain algorithm requires a total of 2n+1 individual computations/steps to be completed, where n is the size of the input. Then, we can summarize the complexity by using the notation O(n), which is known as “linear” because the highest power of n is 1. Consequently, we can ignore any constant factors (like the 2 and the 1) because we know that the cost of running the algorithm will scale linearly with n: that is, when we double n we also (roughly) double the total cost.
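As a minimal sketch of this, consider a simple Python function that sums a list of n numbers: counting one step per loop iteration and one per addition, plus a constant step to initialize the result, it performs roughly 2n+1 elementary steps, which we summarize as O(n).

```python
def total(values):
    # Roughly 2n + 1 steps for a list of n elements:
    result = 0          # 1 step (initialization)
    for v in values:    # n steps (one per loop iteration)
        result += v     # n steps (one addition per element)
    return result       # constant factors like these are ignored in Big-O

# Doubling the input size roughly doubles the number of steps (and the running time):
print(total(range(10)))   # 45
print(total(range(20)))   # 190
```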
There are several classes of complexity we can express this way. For instance, we can describe the complexity of an algorithm as constant in n, written O(c) or O(1), which basically means that no matter how large n gets, the cost remains the same. There aren’t too many examples of this, though, except for very specific cases like checking if a number is…
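As an illustrative sketch (the specific operations below are my own choices, not necessarily the article’s example), two operations whose cost does not depend on the input size are checking a number’s parity and reading a list element by its index:

```python
def is_even(n):
    # A single modulo operation: the cost does not grow with n
    # (for machine-sized integers), so this is O(1).
    return n % 2 == 0

def first_element(values):
    # Indexing into a Python list is also a constant-time operation,
    # no matter how long the list is.
    return values[0]

print(is_even(7))                              # False
print(first_element(list(range(1_000_000))))   # 0
```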