# JavaScript Time Complexity

Michael Zippo
04.11.2021


As software engineers, our job is sometimes to find a solution to a problem that requires some sort of algorithm. A high-level algorithm is just a set of hints: a recipe for solving a problem. When do we reach a point where we know that the "recipe" we have written to solve our problem is "good" enough?

This is where Big O notation comes into play. Big O notation is used to describe two things: the space complexity and the time complexity of an algorithm. In this article we cover time complexity: what it is, how to figure it out, and why knowing the time complexity of an algorithm can improve your approach.

## Time complexity

The Big O notation for time complexity gives a rough idea of how long an algorithm takes to run based on two things: the size of its input and the number of steps it takes to finish. We compare the two to get our running time. Time complexity measures how efficient an algorithm is when it has an extremely large dataset. We look at the absolute worst-case scenario and call that our Big O.

### O(1)

The first Big O we'll talk about is constant time, or `O(1)` (read "oh of one"). When we talk about things running in constant time, we are talking about simple declarations or operations:
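For instance, a few constant-time statements might look like this (a minimal sketch; the names here are purely illustrative):

```javascript
// Each of these statements executes exactly once,
// no matter how large any input is: constant time, O(1).
const name = "Ada";                 // variable declaration
const firstItem = [10, 20, 30][0];  // array access by index
const sum = 5 + 5;                  // arithmetic operation

console.log(name, firstItem, sum);
```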

Pretty much anything that is evaluated only once in our algorithm counts as constant time.

When we evaluate overall runtime, we usually ignore these statements because they don't affect the complexity. Still, it's worth noticing which expressions are `O(1)`, and why an expression can be `O(1)` regardless of the values involved.

### O(n)

Take a look at this example:
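The original snippet isn't reproduced here, but a minimal sketch of the idea might look like this (the datasets and the `printItems` function are assumptions for illustration):

```javascript
// A small, hand-written dataset...
const smallDataset = [1, 2, 3, 4, 5];

// ...and a much larger, generated one.
const largeDataset = Array.from({ length: 1000 }, (_, i) => i);

// Touches every element of the array exactly once: linear time, O(n).
function printItems(arr) {
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
  }
  return arr.length; // number of steps taken
}
```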

Take a look at the first dataset in the example. What is its length? Now look at the second dataset, the kind you might generate with a tool like mockaroo.com: how long is that array?

Now let's look at the actual function, since the length of our input is known. To understand the Big O of an algorithm, go through it block by block and eliminate the nonessential blocks from the code. Once you've stripped out what you don't need to figure out the runtime, you can do the math to get the correct answer.

In this example we have a loop. The loop iterates over each element of the array that we pass to it. Because the code must touch *every* single element in the array to complete its execution, it runs in linear time, or `O(n)`.

#### Final thoughts on O(n):

Since we describe Big O in terms of a worst-case scenario, it doesn't matter whether a loop runs 10 times or 100 times before it breaks. The growth rate as the input increases is still linear.

One last example:

Take the same function as above, but add another block of code:
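A sketch of what that might look like (again, `printItemsTwice` is an illustrative name, not code from the original article):

```javascript
// Two loops, one after the other, over the same array.
// Technically 2n steps, but Big O drops the constant: still O(n).
function printItemsTwice(arr) {
  let steps = 0;
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
    steps++;
  }
  for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
    steps++;
  }
  return steps;
}
```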

What would be the running time of this function?

Technically, it is `O(2n)`, because we are running two for loops, one after the other. However, when we express time complexity in Big O notation, we keep only the most significant part, which means the coefficient in 2n (the 2) is dropped. No matter how many loops we stack one after another, the Big O is still `O(n)`, because we are iterating over the same array.

##### One thing to note:

If you create an algorithm that works with two arrays, and you have loops that run over each of them one after the other, the running time is technically not `O(n)`, unless the lengths of the two separate arrays happen to be the same.

When handling different datasets in a function (in this case, two arrays of different sizes), we count them separately. We use another variable to denote the second array, which has a different length.

The time complexity of this problem is `O(n + m)`. Here n is one array and its elements; m is the other array and its elements. Since we are dealing with two different lengths, and we don't know which array has more elements, this cannot be reduced to `O(n)`.
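A minimal sketch of such a function, with illustrative names:

```javascript
// Two separate arrays of different lengths, looped over one after the other.
// n steps for the first plus m steps for the second: O(n + m).
function printBoth(first, second) {
  let steps = 0;
  for (const item of first) {
    console.log(item);
    steps++;
  }
  for (const item of second) {
    console.log(item);
    steps++;
  }
  return steps;
}
```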

### Basics of Logarithms and Exponents

Before talking about other possible time-complexity values, make sure you have a basic understanding of how exponents and logarithms work.

Many people see the words "exponent", "log", or "logarithm" and get nervous, expecting algebra they don't remember from school. That's not the case here! In this section we'll take a very high-level look at what a logarithm is, what an exponent is, and how each relates to an `O(n)` runtime.

Before examining logarithms and how they work, recall how exponents work. The syntax for raising something to an exponent is:

**x^y = z**

This is often read as "x to the power of y equals z". The variable x is multiplied by itself y times.

The syntax for logarithms is as follows:

**log_x z = y**

Read this as "log base x of z equals y". Notice how the variables compare to the previous equation: if we take the base and raise it to the result, we get the number we are taking the log of. A logarithm is essentially the inverse of an exponent.

An important point: when we deal with exponents, the result is a large number; when we deal with logarithms, the result is a smaller number.
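To make this concrete, here is how the two operations look in JavaScript:

```javascript
// Exponent: 2 raised to the 3rd power gives a larger number.
const z = 2 ** 3;       // 8

// Logarithm: log base 2 of 8 undoes the exponent, giving a smaller number.
const y = Math.log2(8); // 3

console.log(z, y);
```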

So how does this relate to Big O notation? You'll see in the following sections!

### O(n^x)

So far we have talked about constant time and linear time. `O(n^2)`, a version of `O(n^x)` where x equals 2, is called quadratic time. This means that as the size of the input increases, the number of steps needed to solve the worst-case problem grows as the square of the input (or, more generally, as the x-th power).

This can happen when you need to nest loops to compare the i-th value with every other value in an array. For example, checking for the presence of duplicates in an array:
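A sketch of such a duplicate check (the `hasDuplicates` name is an assumption, not code from the original article):

```javascript
// Nested loops: for every i-th element we scan the whole array again.
// n elements examined n times each: quadratic time, O(n^2).
function hasDuplicates(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      // Skip comparing an element with itself.
      if (i !== j && arr[i] === arr[j]) {
        return true;
      }
    }
  }
  return false;
}
```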

The first loop marks our i-th position in the array. The second loop examines every index in the array to see whether its value matches the value at the i-th index. If there is no match and we reach the end of the inner loop, the i-th pointer moves to the next index.

This means we examine each index twice in our algorithm. In this example, an array with a length of 9 takes, at worst, 81 steps (9^2). For small datasets this runtime is fine, but when you dramatically increase the dataset (say, to 1,000,000,000 entries), `O(n^x)` performance looks terrible.

#### Final thoughts on O(n^x)

Always try to create algorithms with a more optimal runtime than `O(n^x)`. Chances are you'll be dealing with a dataset much larger than the one we have here. Next, let's look at the inverse of a polynomial runtime: logarithmic.

### O(log n)

Imagine a telephone directory. If we give you a person's name and ask you to look it up, how would you do it?

An algorithm that starts at the beginning of the book and goes through each name until it reaches the one it is looking for runs in `O(n)` time: the worst-case scenario is that the person you are looking for has the very last name.

What can we do to improve it? How can we do better than linear execution?

You can use an algorithm called binary search. We won't go into every detail of how to code binary search, but if you understand how it works through pseudocode, you can see why it is quite a bit better than `O(n)`.

#### Pseudocode for binary search

Consider an address book stored as an array of objects, in which each object has a first name, a last name, and a phone number. Here is an excerpt:
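A hypothetical excerpt (the names and numbers here are made up for illustration):

```javascript
// The address book is already sorted by lastName,
// which is what makes binary search possible.
const addressBook = [
  { firstName: "Grace", lastName: "Hopper",   phoneNumber: "555-0101" },
  { firstName: "Ada",   lastName: "Lovelace", phoneNumber: "555-0102" },
  { firstName: "Alan",  lastName: "Turing",   phoneNumber: "555-0103" },
];
```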

- Since the address book is already sorted by last name, we can check whether the middle lastName property matches the last name of the search term.
- If it doesn't, and the first letter of the search term comes after the first letter of the current middle last name, we eliminate the first half.
- If it comes before, we eliminate the second half.
- If it is the same, look at the next letter and compare the substrings to each other, following steps 1-3.
- Keep performing these steps until we find the answer. If we can't find the answer, say so.
This is called binary search. It has an `O(log n)` runtime because we eliminate a section of our input on each pass until we find the answer.
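The steps above can be sketched as a JavaScript function; this version simplifies the letter-by-letter comparison into a whole-string comparison, and the sample data is made up:

```javascript
// A small sample book, already sorted by lastName.
const book = [
  { lastName: "Hopper" },
  { lastName: "Lovelace" },
  { lastName: "Turing" },
];

// Returns the matching entry, or null if the name is not in the book.
// Each pass discards half of the remaining entries: O(log n).
function findByLastName(sortedBook, lastName) {
  let low = 0;
  let high = sortedBook.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedBook[mid].lastName === lastName) {
      return sortedBook[mid];
    } else if (sortedBook[mid].lastName < lastName) {
      low = mid + 1;  // discard the first half
    } else {
      high = mid - 1; // discard the second half
    }
  }
  return null;
}

console.log(findByLastName(book, "Lovelace"));
```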

#### Final thoughts on O(log n)

Remember our basic logarithmic equation: the result of taking the log of a number is always smaller than the number itself. If `O(n)` is linear and `O(n^2)` requires even more passes, then `O(log n)` is a bit better than `O(n)`, because the log of n is a smaller number.

### O(2^n)

`O(2^n)` generally refers to recursive solutions that involve some type of branching operation. The Fibonacci sequence is the most popular example of this runtime. This particular example returns the nth number in the Fibonacci sequence:
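A minimal sketch of the naive recursive solution:

```javascript
// Naive recursive Fibonacci: each call branches into two more calls,
// roughly doubling the work at every level: exponential time, O(2^n).
function fibonacci(n) {
  if (n < 2) return n; // base cases: fibonacci(0) = 0, fibonacci(1) = 1
  return fibonacci(n - 1) + fibonacci(n - 2);
}
```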

This solution increases the number of steps required to complete the problem exponentially. Avoid this runtime at all costs.

### O(n log n)

An `O(n log n)` runtime is very similar to an `O(log n)` runtime, except that it performs worse than a linear runtime. Essentially, an `O(n log n)` algorithm has some kind of linear function with a logarithmic function nested inside it. Take this example:
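A sketch of such an algorithm (illustrative, not the article's original snippet):

```javascript
// The outer for loop runs n times: linear.
// The inner while loop doubles j each pass, so it runs about log2(n) times.
// Together: O(n log n).
function nLogN(n) {
  let steps = 0;
  for (let i = 0; i < n; i++) {
    let j = 1;
    while (j < n) {
      j *= 2; // big jumps through the range via multiplication
      steps++;
    }
  }
  return steps;
}
```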

In this code snippet, an outer loop runs linearly over the input, while an inner while loop multiplies j by two at each step. The inner loop is logarithmic, because multiplying lets it make big jumps through the range with each iteration.