Algorithmic complexity can be a difficult concept to grasp, even with compelling mathematical arguments. This article presents a tiny Python program that visualizes the relative growth of several typical complexity functions; it can easily be adapted to other functions.

Computational complexity is a venerable subject in computer science. It can be defined as the amount of time and space an algorithm needs to solve a problem.

The fundamentals of computational complexity are mathematical, but their implications are very practical. Some problems are "intractable": not impossible to solve (that is, not undecidable), but no efficient algorithm is known for them. They are very hard to solve with current technology, and even with any foreseeable technology.

The most popular kind of computational complexity analysis considers the worst case. Despite its pessimism, this is very reasonable: the size of the problems we wish to solve grows over time. We want to process petabytes of data, not megabytes. Thus, input size is the most important factor in the complexity of an algorithm.

We treat the input size as the independent variable and the growth rate of the running time as the dependent variable, and analyze performance as the input size tends to infinity. This analysis is called **big-O** and has many rules that you can find in any good algorithms textbook. Most importantly, constant factors do not affect how an algorithm scales to large inputs: the input size is the dominant factor, and constants are independent of it.
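To make the point about constants concrete, here is a small numeric check (my own illustration, not part of the article's program): a large constant factor only shifts the crossover point, but for large enough inputs the faster-growing term always dominates.

```python
# Compare 1000 * n (big constant, slow growth) against n^2
# (no constant, faster growth). The constant only delays the
# inevitable crossover; it does not change which function wins.

def linear(n):
    return 1000 * n   # O(n) with a large constant factor

def quadratic(n):
    return n * n      # O(n^2) with no constant factor

for n in [10, 100, 1000, 10_000, 100_000]:
    print(n, linear(n), quadratic(n))
```

Below the crossover (here, `n = 1000`) the constant makes the linear function look worse; beyond it, `n^2` wins every time, which is why big-O ignores constants.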

Newcomers to computation theory are often confused by the fact that exponential functions such as `2^n` are worse than polynomial functions such as, say, `n^3`. This follows directly from the mathematical definition of big-O, but it is not easy to see unless we consider very large `n`.
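A few lines of Python make this concrete (my own illustration; the specific exponent 3 is just an example): for small `n` the polynomial can even be ahead, but the exponential overtakes it quickly and never looks back.

```python
# Compare n^3 (polynomial) against 2^n (exponential) at a few sizes.
# For n <= 9 the polynomial is larger; from n = 10 onward the
# exponential dominates, and the gap widens explosively.
for n in [1, 5, 9, 10, 20, 30]:
    print(n, n ** 3, 2 ** n)
```

At `n = 30` the exponential is already more than 39,000 times larger than the cubic, which is the behavior the plot below makes visible.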

The following Python code visualizes how the cost grows with the instance size `n` for several functions: `log n`, `n`, `n^2`, `n^3`, and `2^n`. Note that even `n^3` is considered poor: it takes on the order of `10^9` operations to handle 1000 inputs (`1000^3 = 10^9`). In general, `n^k` is considered bad when `k > 2`.

The code uses the NumPy and Matplotlib libraries, and applies a functional-programming technique called currying to compute `x^k` for a constant `k`. Other functions are easy to plot by modifying the FUNCTIONS list.
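For readers unfamiliar with currying, here is a minimal standalone sketch of the idea (the names `power`, `square`, and `cube` are my own, not from the article's program): fixing the exponent `k` once yields a new one-argument function of `x`.

```python
# Currying: power(k) "remembers" k and returns a function of x alone.
def power(k):
    def apply(x):
        return x ** k
    return apply

square = power(2)   # a function computing x^2
cube = power(3)     # a function computing x^3

print(square(10))   # 100
print(cube(10))     # 1000
```

This is exactly why a list of plot functions is convenient: `power(1)`, `power(2)`, `power(3)` are ready-made callables that a plotting loop can apply to an array of input sizes.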

**Code: Python program illustrating the asymptotic behavior of several functions.**

```python
import numpy as np
import matplotlib.pyplot as plt

# Currying: power(k) fixes the exponent and returns a function of x alone.
def power(k):
    def apply(x):
        return x ** k
    return apply

# (function, label, color) triples; edit this list to plot other functions.
FUNCTIONS = [
    (np.log2,            'log n', 'b'),
    (power(1),           'n',     'c'),
    (power(2),           'n^2',   'm'),
    (power(3),           'n^3',   'y'),
    (lambda n: 2.0 ** n, '2^n',   'r'),
]

n = np.linspace(1, 20, 200)
for f, label, color in FUNCTIONS:
    plt.plot(n, f(n), color, label=label)

plt.xlabel('input size n')
plt.ylabel('number of operations')
plt.legend(loc='upper left')
plt.show()
```

The results are not surprising: the exponential function is by far the worst performer, since it grows very quickly with the input size. Already at `n = 20`, the other functions are insignificant compared to the exponential.

The logarithm is shown in blue, the polynomials in cyan, magenta, and yellow, and the exponential in red.
