Big O notation: analysis of algorithms’ time and space complexity
Since the early days of information technology, developers have strived to write code that not only works but also uses hardware efficiently. In the past, when processors were expensive and slow and memory was scarce, optimization was crucial; literally every machine instruction mattered. Similarly, when working with large data sets, algorithms had to be very sophisticated because computational time was severely limited.
Hence the need to compare the efficiency of algorithms, and this is where Big O notation plays an important role. In this article, we take a closer look at the analysis of algorithmic complexity using Big O notation.
Big O notation meaning – what is it?
Big O notation is a mathematical notation used in computer science to describe the asymptotic complexity of algorithms; that is, it describes the worst-case (upper-bound) behavior of an algorithm as a function of the size of its input. In other words, this notation helps us understand how the efficiency of an algorithm changes with the amount of data it processes.
A short history of Big O notation
The theoretical foundations of the notation were laid in the late nineteenth and early twentieth centuries, when nobody had even dreamed of computer algorithms, but it was not until the 1960s that Big O notation became popular and widely used in computer science, mainly thanks to pioneers such as Donald Knuth. Knuth detailed and popularized the use of this notation in his well-known work The Art of Computer Programming, first published in 1968. Knuth’s work contributed significantly to the standardization and widespread use of Big O notation in the analysis of algorithms and computer programs.
How is Big O notation used?
Big O notation is used to theoretically compare the efficiency of different algorithms. For example, when deciding which sorting algorithm is more appropriate in a given situation, we look at the Big O notation of the candidate algorithms. It also helps us predict how an algorithm will behave as the amount of input data grows, which is a prerequisite for optimizing the algorithm and using resources more efficiently. When solving complex problems, knowing the Big O complexity of different algorithms guides the choice of appropriate data structures and algorithms, and this helps us design efficient solutions to real-world problems. The notation is used in analyzing both the time and space complexity of algorithms. We have already mentioned time complexity; space complexity is expressed using Big O notation to describe the upper bound of the memory used by an algorithm.
Big O notation explained
In Big O notation, O represents the order of the function and f(n) is a function describing the time complexity of the algorithm in terms of the input size n. The notation O(f(n)) means that the running time of the algorithm grows no faster than the function f(n) as the input size n increases. In other words, f(n) describes how the running time of the algorithm increases as the input grows. For example, O(n) says that the execution time of the algorithm grows linearly with the input size.
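As a minimal, hypothetical illustration of linear complexity (the function name and data are not from the article, just an example), the following Python function touches each element of its input exactly once, so its running time grows linearly with the input size:

```python
def sum_elements(values):
    """Return the sum of all elements.

    The loop body runs once per element, so the running time grows
    linearly with len(values): O(n) time, O(1) extra space.
    """
    total = 0
    for v in values:  # executed n times for an input of size n
        total += v
    return total


print(sum_elements([3, 1, 4, 1, 5]))  # 14
```

Doubling the length of the list roughly doubles the running time, which is exactly what O(n) expresses.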
Big O notation – examples
The table below shows the most common types of complexity, how they are written in Big O notation, and an example of an algorithm with each complexity.
| Big O notation | Complexity | Algorithm example |
| --- | --- | --- |
| O(1) | constant | Access to an element of an array by index |
| O(log n) | logarithmic | Binary search in a sorted array |
| O(n) | linear | Linear search in an unordered array |
| O(n log n) | linearithmic | Efficient sorting algorithms such as merge sort |
| O(n²) | quadratic | Two nested loops iterating over the input |
| O(2ⁿ) | exponential | Brute-force algorithms testing all combinations |
| O(n!) | factorial | Algorithm generating all permutations of a set |
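To make two of these rows concrete, here is a short, hypothetical Python sketch (the function names and test data are illustrative): an iterative binary search over a sorted list, which is O(log n) because each step halves the remaining range, and a pair counter built from two nested loops, which is O(n²).

```python
def binary_search(sorted_values, target):
    """Return the index of target in sorted_values, or -1 if absent.

    Each iteration halves the search range, so at most about
    log2(n) iterations are needed: O(log n) time.
    """
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        elif sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


def count_pairs(values):
    """Count pairs (i, j) with i < j and values[i] > values[j].

    Two nested loops over the input give roughly n * n / 2 comparisons:
    O(n²) time.
    """
    count = 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] > values[j]:
                count += 1
    return count


print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(count_pairs([3, 1, 2]))             # 2
```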
Big O notation rules
When using Big O notation, there are a few basic rules that help to express the asymptotic complexity of algorithms accurately and consistently. Big O notation can be used equally for both time and space complexity.
Ignoring lower-order terms
When determining Big O notation, only the highest-order term is considered. Lower-order terms and constants are dropped because their influence becomes insignificant for large inputs. Example: if an algorithm has time complexity f(n) = 3n² + 5n + 82, its complexity is O(n²), because n² is the highest-order term.
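A quick, hypothetical way to see why the lower-order terms can be dropped is to compute how much of f(n) = 3n² + 5n + 82 comes from the quadratic term as n grows:

```python
# For f(n) = 3n^2 + 5n + 82, the quadratic term quickly dominates,
# so the lower-order terms 5n + 82 can be ignored in Big O notation.
for n in (10, 100, 1000, 10000):
    quadratic = 3 * n * n
    total = quadratic + 5 * n + 82
    print(f"n = {n:>6}: the n^2 term accounts for {quadratic / total:.1%} of f(n)")
```

Already at n = 1000 the quadratic term accounts for more than 99 % of the total, which is why only O(n²) is kept.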
Ignoring constants
Constant factors are ignored because they do not have a significant effect on how the function grows for large inputs.
Example: if an algorithm has time complexity f(n) = 2n, its complexity is O(n).
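A hypothetical sketch of this rule: the loop below performs two constant-time operations per element, roughly 2n steps in total, yet the constant factor 2 is dropped and the complexity is written as O(n).

```python
def min_and_max(values):
    """Track both the minimum and the maximum in one pass.

    Two comparisons per element give about 2n operations, but the
    constant factor 2 is dropped: the complexity is still O(n).
    """
    lo = hi = values[0]
    for v in values[1:]:
        lo = min(lo, v)  # first operation per element
        hi = max(hi, v)  # second operation per element
    return lo, hi


print(min_and_max([4, 2, 9, 7]))  # (2, 9)
```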
Sum rule
If an algorithm is composed of several steps that are executed sequentially, the overall complexity is determined by the step with the highest order.
Example: if an algorithm has two parts with complexity O(n) and O(n²), the total complexity is O(n²).
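A hypothetical sketch of the sum rule: the function below first does a linear scan and then a quadratic nested loop, one after the other; the sequential combination O(n) + O(n²) is dominated by the quadratic part, so the whole function is O(n²).

```python
def sequential_parts(values):
    """A linear step followed by a quadratic step: O(n) + O(n^2) = O(n^2)."""
    # Part 1: linear scan over the input, O(n).
    total = sum(values)

    # Part 2: nested loops over the input, O(n^2); this part dominates.
    pair_products = 0
    for a in values:
        for b in values:
            pair_products += a * b

    return total, pair_products
```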
Product rule
If the algorithm contains nested loops, the total complexity is the product of the complexities of the individual loops.
Example: if the outer loop has complexity O(n) and the inner loop also has complexity O(n), the total complexity is O(n * n) = O(n²).
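A hypothetical sketch of the product rule: the function below builds an n × m multiplication table, so the inner loop runs m times for each of the n outer iterations, giving O(n * m) work; with m = n this becomes O(n²).

```python
def multiplication_table(n, m):
    """Build an n x m multiplication table.

    The inner loop runs m times for each of the n outer iterations,
    so the total work is n * m: O(n * m), or O(n^2) when m = n.
    """
    table = []
    for i in range(1, n + 1):      # outer loop: n iterations
        row = []
        for j in range(1, m + 1):  # inner loop: m iterations each
            row.append(i * j)
        table.append(row)
    return table


print(multiplication_table(3, 3))  # [[1, 2, 3], [2, 4, 6], [3, 6, 9]]
```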
Logarithms
Logarithmic functions grow more slowly than polynomial functions, so a logarithmic factor adds relatively little in practice; for example, O(n log n) grows only slightly faster than O(n) and much more slowly than O(n²). The base of the logarithm is ignored in Big O notation, because logarithms with different bases differ only by a constant factor.
Big O notation – programming practice
If we have implemented an algorithm and want to estimate its complexity without analyzing the time complexity of the individual code blocks, we can run the algorithm for input sizes from 1 to N and measure the running time of each run. From the measured values we can then construct a graph, for example in Excel, with the input size on the x-axis and the running time on the y-axis. From the shape of the curve we can roughly identify the complexity of the algorithm and, depending on the type of algorithm, see whether there is room for optimization.
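A hypothetical Python sketch of this measurement approach: it times an algorithm for a range of input sizes with time.perf_counter and writes the results to a CSV file that can be opened and charted in Excel. The measured function (count_pairs, a quadratic stand-in for "our algorithm"), the chosen input sizes, and the file name timings.csv are all assumptions made for the example.

```python
import csv
import random
import time


def count_pairs(values):
    """Stand-in algorithm to measure: O(n^2) nested loops."""
    count = 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] > values[j]:
                count += 1
    return count


def measure(algorithm, sizes):
    """Run the algorithm on random inputs of the given sizes and time each run."""
    rows = []
    for n in sizes:
        data = [random.random() for _ in range(n)]
        start = time.perf_counter()
        algorithm(data)
        elapsed = time.perf_counter() - start
        rows.append((n, elapsed))
    return rows


if __name__ == "__main__":
    results = measure(count_pairs, sizes=range(100, 2001, 100))
    # Write input size vs. running time; the CSV can be charted in Excel.
    with open("timings.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["input_size", "seconds"])
        writer.writerows(results)
```

Plotting the resulting points should produce a curve whose shape roughly matches one of the complexity classes from the table above; for the quadratic stand-in, doubling the input size should roughly quadruple the running time.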