The time complexity of an algorithm refers to the amount of time it takes for the algorithm to execute. Instead of measuring time in seconds, it is determined by the number of operations required to complete the task.
An algorithm's performance depends on both the amount of data and how the elements are arranged within that data set. Like space complexity, time complexity is usually expressed using Big O notation.
def get_min_value(arr):
    """ Return the minimum value in the array """
    min_value = arr[0]
    for i in arr:
        if i < min_value:
            min_value = i
    return min_value

get_min_value([1, 0, -5, 20])
# Output: -5
Here is an explanation of the time complexity of each line of the get_min_value Python function:

- Line 3 assigns the first element of the input array arr to the variable min_value. The time complexity of this operation is O(1), as it involves only a single indexing and assignment operation.
- Line 4 starts a for loop that iterates through each element of the input array arr. The time complexity of this loop is O(n), where n is the length of the input array, because it runs once for every one of the n elements.
- Line 5 compares the current element i of the loop to the current value of min_value. The time complexity of this operation is O(1), as it involves only a single comparison.
- Line 6 assigns the current element i to the variable min_value whenever the condition on the previous line is true. The time complexity of this operation is O(1), as it involves only a single assignment.
- Line 7 returns the final value of min_value as the output of the function. The time complexity of this operation is O(1), as it involves only a single return operation.
Three operations occur outside the loop and two occur inside it. If we count each operation as 1 and the loop as n iterations, the total cost of this algorithm is 2n + 3. As the value of n increases, the constant terms in this formula matter less and less, so the time complexity simplifies to O(n), which is linear complexity.
Overall, the time complexity of the function get_min_value(arr) is O(n), where n is the size of the array. This is because the function iterates through the entire array once and performs a constant amount of work for each element, so the time taken to execute the function increases linearly with the size of the input array.
The most common Big O notations for time complexity are:

- O(1) - Constant time complexity: the running time of the algorithm does not depend on the input size. Example: accessing an element in an array by its index.
- O(log n) - Logarithmic time complexity: the running time increases logarithmically with the input size. Example: the binary search algorithm.
- O(n) - Linear time complexity: the running time increases linearly with the input size. Example: iterating through an array.
- O(n log n) - Log-linear time complexity: the running time grows faster than linear but much more slowly than quadratic. Example: sorting algorithms like merge sort and quicksort.
- O(n^2) - Quadratic time complexity: the running time increases with the square of the input size. Example: nested loops, the bubble sort algorithm.
- O(2^n) - Exponential time complexity: the running time increases exponentially with the input size. Example: naive recursive algorithms that branch into multiple overlapping subproblems, such as the recursive Fibonacci function.
- O(n!) - Factorial time complexity: the running time increases with the factorial of the input size. Example: a brute-force algorithm that generates all possible permutations of a set of items.

It's important to note that these are just examples, and a single algorithm can have different complexities for its best, average, and worst cases.
Constant Time Complexity
Constant time complexity, known as O(1), refers to an algorithm whose running time remains the same regardless of the size of the input. In other words, the algorithm takes a fixed amount of time to complete, no matter how large the input is.
An example of an algorithm with constant time complexity is accessing an element in an array using an index. The time it takes to access an element in an array is independent of the size of the array. In most common programming languages, the time complexity of accessing an element in an array is O(1) because the index can be used to calculate the memory address of the element, allowing direct access to it.
Another example of an algorithm with a constant time complexity is the retrieval of an element in a hash table using a specific key. The time it takes to retrieve the element is independent of the number of elements in the hash table.
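As a quick illustration, looking up a value by key in a Python dictionary (Python's built-in hash table) works this way; the dictionary contents below are invented for the example:

prices = {"apple": 3, "banana": 1, "cherry": 5}

# A key lookup in a dict hashes the key to find the entry directly,
# so the cost stays O(1) on average no matter how many entries exist.
prices["banana"]
# Output: 1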
It's worth noting that while O(1) is the best complexity possible, it is not always achievable; when it is possible, it is a great advantage for the algorithm.
def constant_time_function(n):
    """ This function takes constant time to run """
    return n * 2
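constant_time_function(5)
# Output: 10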
The time complexity of this function is O(1) because it always takes the same amount of time to perform the operation, regardless of the input size n.
Logarithmic Time Complexity
Logarithmic time complexity, known as O(log n), refers to an algorithm whose running time increases logarithmically with the size of the input. In other words, the running time grows at an ever slower rate as the input size increases.
An example of an algorithm with logarithmic time complexity is the binary search algorithm, which is used to search for a specific element in a sorted array. The algorithm repeatedly divides the search interval in half until the desired element is found or the search interval is empty. The time complexity of binary search is O(log n) because, in the worst case, the algorithm divides the search interval in half log(n) times before the element is found.
Example:
def binary_search(arr, target):
    """ Return the index of the target in the array, or -1 if it is not present """
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
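binary_search([1, 3, 5, 7, 9], 7)
# Output: 3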
This function performs a binary search on a sorted array arr to find the index of a target element. Its time complexity is O(log n) because each iteration halves the search space, reducing the number of elements that still need to be searched by a factor of 2.
Linear Time Complexity
Linear time complexity, known as O(n), refers to an algorithm whose running time increases linearly with the size of the input. In other words, the running time is directly proportional to the size of the input.
An example of an algorithm with linear time complexity is the simple linear search algorithm, which is used to search for a specific element in an array. The algorithm iterates through the array, comparing each element with the target element, until the target element is found or the end of the array is reached. The time complexity of linear search is O(n) because, in the worst case, the algorithm needs to iterate through the entire array before the target element is found.
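A minimal sketch of the linear search just described (the name linear_search is chosen here for illustration, not taken from a library):

def linear_search(arr, target):
    """ Return the index of the target in the array, or -1 if it is not present """
    for i in range(len(arr)):
        if arr[i] == target:
            # Found the target after visiting i + 1 elements
            return i
    # Worst case: all n elements were visited without a match
    return -1

linear_search([4, 8, 15, 16], 15)
# Output: 2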
Another example is traversing an array or a linked list, where the algorithm needs to visit each element in the data structure and the time it takes to visit one element is constant.
Example:
def linear_time_function(arr):
    """ This function takes an array of integers and returns the sum of the integers in the array """
    result = 0
    for i in range(len(arr)):
        result += arr[i]
    return result
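linear_time_function([1, 2, 3, 4])
# Output: 10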
This function takes an array as a parameter and iterates through each element, adding up the values and returning the sum. The time complexity of this function is linear, O(n), because the number of operations performed is directly proportional to the size of the input array.
Log-Linear Time Complexity
Log-linear time complexity is a complexity class that lies between linear time O(n) and quadratic time O(n^2). It is characterized by algorithms that take time proportional to n log n. This complexity class appears frequently in practice, most notably in efficient comparison-based sorting algorithms.
An example of an algorithm with log-linear time complexity is the merge sort algorithm. Merge sort is a sorting algorithm that uses the divide-and-conquer approach: it divides the array into two halves, recursively sorts the two halves, and then merges the two sorted halves. The time complexity of merge sort is O(n log n) because the algorithm halves the array log(n) times, and at each level of recursion the merging work across all subarrays takes linear time O(n).
Another example is the closest pair of points problem, where the algorithm needs to find the two closest points in a set of n points in two-dimensional space; this problem can be solved in O(n log n) time.
It's worth noting that while log-linear complexity is not as good as O(n), it is far better than O(n^2), and it is widely used, especially in sorting algorithms.
Example:
def merge_sort(arr):
    """ Merge sort algorithm """
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    """ Merge two sorted arrays """
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
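merge_sort([5, 2, 9, 1])
# Output: [1, 2, 5, 9]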
This function sorts an array of numbers by first dividing it into two sub-arrays, sorting each sub-array recursively, and then merging the two sorted sub-arrays back together. The merge step has a linear time complexity of O(n) because it iterates through both sub-arrays once, and the repeated halving in merge_sort creates O(log n) levels of recursion. Since O(n) work is done at each of the O(log n) levels, the total time complexity of merge_sort is O(n log n). Formally, the running time satisfies the recurrence T(n) = 2T(n/2) + O(n), which solves to O(n log n).
Quadratic Time Complexity
Quadratic time complexity, also known as O(n^2), refers to an algorithm whose running time increases at a rate of n^2 as the input size n increases. This means that when the input size doubles, the algorithm takes four times as long to complete.
An example of an algorithm with quadratic time complexity is a nested loop, where the outer loop runs n times and the inner loop runs n times for each iteration of the outer loop. The total number of iterations is therefore n * n, or n^2.
Example:
def quadratic_time_function(arr):
    """ Return True if there are duplicates in arr, False otherwise """
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False
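quadratic_time_function([1, 2, 3, 1])
# Output: True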
This function takes an array and compares each pair of elements using a nested loop. In the worst case, the number of comparisons performed is proportional to the square of the size of the input array, so the function has a quadratic time complexity of O(n^2).
Exponential Time Complexity
Exponential time complexity, also known as O(2^n), refers to an algorithm whose running time increases at an exponential rate as the input size n increases. This means that adding just one element to the input roughly doubles the time the algorithm takes to complete.
An example of an algorithm with exponential time complexity is the recursive algorithm for finding all subsets of a set. The algorithm generates 2^n subsets, which results in exponential time complexity.
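The section's main example below uses the Fibonacci sequence; first, here is a sketch of the subset generator just described (the name generate_subsets is illustrative, not from a library):

def generate_subsets(arr):
    """ Return all subsets of a list """
    if len(arr) == 0:
        return [[]]
    # Every subset either excludes or includes the first element,
    # so the count doubles with each element: 2^n subsets in total.
    rest = generate_subsets(arr[1:])
    return rest + [[arr[0]] + subset for subset in rest]

generate_subsets([1, 2, 3])
# Output: [[], [3], [2], [2, 3], [1], [1, 3], [1, 2], [1, 2, 3]]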
Example:
def fibonacci(n):
    """ Return the nth number in the Fibonacci sequence """
    if n == 0 or n == 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
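fibonacci(10)
# Output: 55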
This function computes the n-th Fibonacci number by recursively calling itself twice for each value of n, once with n-1 and once with n-2. Since each call to fibonacci(n) results in two more calls, to fibonacci(n-1) and fibonacci(n-2), the total number of function calls grows on the order of 2^n. This function therefore has an exponential time complexity of O(2^n), because the number of operations performed grows exponentially with the size of the input n.
Factorial Time Complexity
Factorial time complexity, also known as O(n!), refers to an algorithm whose running time increases at a rate of n! (n factorial) as the input size n increases. This growth is even steeper than exponential: adding a single element to the input multiplies the running time by roughly n.
An example of an algorithm with factorial time complexity is the brute-force algorithm for generating all permutations of a set. The algorithm generates n! permutations, which results in factorial time complexity.
Example:
def permute(arr):
    """ Return all permutations of a list """
    if len(arr) == 0:
        return []
    if len(arr) == 1:
        return [arr]
    l = []
    for i in range(len(arr)):
        m = arr[i]
        rem_list = arr[:i] + arr[i+1:]
        for p in permute(rem_list):
            l.append([m] + p)
    return l
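permute([1, 2, 3])
# Output: [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]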
This function takes an array and generates all possible permutations of its elements. It does this by recursively calling itself for each element in the array and prepending the current element to every permutation returned by the recursive call. The number of permutations of n elements is n! (n factorial), so the total number of function calls grows accordingly. This function therefore has a factorial time complexity of O(n!), because the number of operations performed is directly proportional to the factorial of the size of the input array.