Ever wonder why some algorithms are faster than others? Yeah, me neither, but Big-O notation is how we explain it, and in this two-part series you'll learn why!
So what the heck is Big-O Notation?
It's a way of describing how an algorithm's execution time grows as the size of its dataset grows, in other words, how well it scales. Basically, it measures algorithmic efficiency.
Let's say, for example, we have a list of 15 people, and we would like to search through those 15 people to find every person whose first name starts with the letter T. There are different algorithms you could use to search this list, each with a different level of complexity, and some will perform much better than others.
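To make that concrete, here is a minimal sketch of one straightforward approach: walk the list once and collect every match. The people array and the findNamesStartingWith helper are made up for illustration:

let people = ['Tina', 'Bob', 'Tom', 'Alice'];

let findNamesStartingWith = (names, letter) => {
  let matches = [];
  for (let i = 0; i < names.length; i++) {
    // check the first character of each name against our letter
    if (names[i][0] === letter) {
      matches.push(names[i]);
    }
  }
  return matches;
};

findNamesStartingWith(people, 'T'); // ['Tina', 'Tom']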
Now let us pretend that list just jumped up to 1 million names. How do you think this will affect the performance and complexity?
The answers to these questions can be found using Big-O notation.
Many Flavors
Big-O comes in different forms:

- O(1)
- O(log n)
- O(n)
- O(n log n)
- O(n^2)
- O(2^n)
- O(n!)

In this post, I will be discussing the first three variations, with the last four covered in the next post, so stay tuned for that!
O(1) - Constant Time
Constant time complexity doesn't care about the size of the data being passed in: the execution time stays the same regardless of the dataset. Whether our list contains 5 items or 1,000 items, it doesn't matter. This makes constant-time operations extremely scalable, since their running time is independent of the input size.
Let’s say for example we have an array of numbers and we want to find the second number in that list. No matter what the size of the list is, finding the second number will always take the same amount of time.
let smallList = [0, 1, 2];
let largeList = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let logSecondNumber = (list) => {
  // a single indexed lookup: one operation, no matter how long the list is
  console.log(list[1]);
};
logSecondNumber(smallList);
logSecondNumber(largeList);
Both calls to the function will execute in the same amount of time even though one list is larger than the other.
O(log n) - Logarithmic Time
Logarithmic time complexity means the execution time grows in proportion to the logarithm of the input size. A classic example is binary search: you repeatedly halve a sorted dataset until you arrive at the value you want.
In our example below, we repeatedly check whether the middle position in the array holds our target value. If it doesn't, we discard the half of the list that can't contain the target, calculate a new middle position, and check again. This continues until we either find the number we are looking for or run out of numbers to check.
function binarySearch(array, targetValue) {
  let minIndex = 0;
  let maxIndex = array.length - 1;
  let middleIndex = Math.floor((maxIndex + minIndex) / 2);

  // keep halving the search window until we hit the target
  // or the window closes
  while (array[middleIndex] !== targetValue && minIndex < maxIndex) {
    if (targetValue < array[middleIndex]) {
      // target must be in the lower half; discard the upper half
      maxIndex = middleIndex - 1;
    } else {
      // target must be in the upper half; discard the lower half
      minIndex = middleIndex + 1;
    }
    middleIndex = Math.floor((maxIndex + minIndex) / 2);
  }

  // return the target's index, or -1 if it isn't in the array
  return array[middleIndex] !== targetValue ? -1 : middleIndex;
}
let numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
console.log(binarySearch(numbers, 7)); // logs 6, the index of 7
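Notice how little work this takes. For our ten-element array, the search needs at most around four comparisons, and doubling the list to twenty elements adds only about one more halving step. That's the practical meaning of O(log n): the work grows very slowly as the dataset grows.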
O(n) - Linear Time
Linear time complexity means that the execution time grows in direct proportion to the input size, n. As more items are added to the dataset, the time to execute scales up proportionally.
Looking at our example below, we are using a for loop to print out each item in our array. Each item added to the array adds one more iteration, so the total execution time grows in step with the array's length.
let junkFood = ['pizza', 'cookie', 'candy', 'icecream'];

function loopThroughOurJunkFood(junkFood) {
  // one console.log per item: n items means n iterations
  for (let i = 0; i < junkFood.length; i++) {
    console.log(junkFood[i]);
  }
}

loopThroughOurJunkFood(junkFood);
If we were to add another item to our junkFood array, the function would take one more iteration to complete; double the items and you double the work. That is linear growth.
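If you want to see this for yourself, here is a rough sketch using console.time. The exact timings will vary by machine, so treat this as an illustration rather than a real benchmark. The sumItems helper is made up for this example; it touches every item once, so a list a thousand times longer takes roughly a thousand times more work:

let smallArray = Array.from({ length: 1000 }, (_, i) => i);
let largeArray = Array.from({ length: 1000000 }, (_, i) => i);

let sumItems = (list) => {
  let total = 0;
  // touch every item exactly once, so work grows with list.length
  for (let i = 0; i < list.length; i++) {
    total += list[i];
  }
  return total;
};

console.time('small');
sumItems(smallArray);
console.timeEnd('small');

console.time('large');
sumItems(largeArray);
console.timeEnd('large');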
More to come…
In the next post in this series, we will go over the rest of our Big-O notation flavors, so stay tuned!