Time and Space Complexity Cheat Sheet [Ultimate List]
If Big O notation still perplexes you, whether for time or for space, worry no more; here is the ultimate time and space complexity cheat sheet.
This comprehensive cheat sheet is your guiding light to understanding how algorithms perform as data scales up, both in terms of time and memory usage.
Let’s embark on this journey together and unravel the mysteries of algorithmic efficiency.
The Big Idea: Both time and space complexities are crucial in gauging algorithmic efficiency. While time complexity focuses on how an algorithm’s performance scales with input size, space complexity delves into the memory usage as data grows. Think of it as a dual strategy – one for the race track and one for managing your storage closet.
Time Complexity Explained:
- O(1): The champion of speed! This algorithm’s runtime remains constant, irrespective of data size – a quick glance at your favorite shelf.
- O(log n): Swift and efficient! This algorithm halves the search space with each step, much like finding a friend in a maze by intelligent questioning. Binary search is a prime example (see the search sketch after this list).
- O(n): The steady jogger. This algorithm traverses every piece of data once, akin to reading every page in a book to find your answer. Linear search exemplifies this.
- O(n log n): A bit slower but still commendable. Imagine sorting a deck of cards by suit and then value – two steps, but manageable. Merge sort and quick sort fit this category.
- O(n^2): The marathoners! These algorithms compare every piece of data with every other piece, similar to asking everyone in town if they’ve seen your friend – slow but thorough. Bubble sort and selection sort reside here.
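To make these growth rates concrete, here is a minimal Python sketch (the function names and the example list are just for illustration) contrasting an O(n) linear search with an O(log n) binary search over a sorted list.

```python
def linear_search(items, target):
    """O(n) time: examines each element once until it finds the target."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n) time: halves the remaining search range on every step."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(1, 1001))          # already sorted
print(linear_search(data, 873))      # up to ~1000 comparisons
print(binary_search(data, 873))      # at most ~10 comparisons (log2 of 1000)
```

On a thousand elements the difference is already a factor of roughly a hundred comparisons, and the gap only widens as the input grows.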
Now, Specifics on Time Complexity:
- Binary Search: O(log n) – lightning-fast for finding things in sorted lists.
- Quick Sort: O(n log n) on average, but O(n^2) in the worst case – a speedy sorter with a grumpy side.
- Insertion Sort: O(n^2) – slow and steady, suitable for small datasets.
- Bubble Sort: O(n^2) – the ultimate marathon, best avoided for large data.
- Heapify: O(log n) to sift a single element into place; building an entire heap from scratch takes O(n). It is the foundation for heap sort and priority queues.
- Selection Sort: O(n^2) – another slowpoke, like finding the tallest person in a room by comparing them to everyone else.
- Heap Sort: O(n log n) – uses heaps to sort efficiently, a good middle ground.
- Dijkstra’s Algorithm: O(E log V) with a binary-heap priority queue for finding the shortest paths in a graph – imagine trying out every single road to reach your destination, but smartly! (See the Dijkstra sketch after this list.)
- DFS/BFS: O(V + E) for exploring graphs – like visiting every room in a house (DFS) or floor by floor (BFS).
- Merge Sort: O(n log n) – another divide-and-conquer champion, like sorting two decks of cards and then merging them.
- Radix Sort: O(nk) where k is the number of digits – imagine sorting library books by color, then size, then author – super fast for specific data.
- Kruskal’s Algorithm: O(E log E) for finding the minimum spanning tree in a graph – like connecting all the cities with roads at the lowest cost.
- Union-Find: O(log n) per merge or find with union by rank (near-constant once path compression is added) – imagine grouping friends who are all connected by handshakes.
- Bucket Sort: O(n + k) where k is the number of buckets – like sorting colored beads by dropping them into matching buckets.
- Prim’s Algorithm: O(E log V) for finding the minimum spanning tree in a graph, similar to Kruskal’s but with a different approach.
- Topological Sort: O(V + E) for ordering tasks with dependencies – like planning your day by making sure you shower before heading out to work.
- Counting Sort: O(n + k) where k is the range of input values – like counting how many candies of each color are in a bag.
- Fibonacci: O(2^n) for the naive recursive implementation, O(n) with dynamic programming (see the Fibonacci sketch after this list).
- Nested For Loop: O(n^2) for two nested loops iterating over n elements each.
- Adjacency List: O(V + E) to traverse every vertex and edge in this common graph representation.
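For the Dijkstra entry above, here is a rough sketch using Python's heapq module as the binary-heap priority queue that the O(E log V) bound assumes; the small weighted graph is a made-up example.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source`.

    With a binary heap, each edge can push one entry onto the heap,
    giving O(E log V) overall.
    """
    dist = {source: 0}
    heap = [(0, source)]                      # (distance so far, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                          # stale heap entry, skip it
        for v, weight in graph.get(u, []):
            new_dist = d + weight
            if new_dist < dist.get(v, float("inf")):
                dist[v] = new_dist
                heapq.heappush(heap, (new_dist, v))
    return dist

# Hypothetical weighted graph as an adjacency list: vertex -> [(neighbor, weight), ...]
graph = {
    "A": [("B", 4), ("C", 1)],
    "C": [("B", 2), ("D", 5)],
    "B": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```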
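And for the Fibonacci entry, a side-by-side sketch of the naive O(2^n) recursion versus an O(n) bottom-up dynamic-programming version (the function names are illustrative):

```python
def fib_naive(n):
    """O(2^n) time: each call spawns two more, recomputing the same values."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n):
    """O(n) time: each value is computed exactly once, bottom up."""
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr

print(fib_naive(10), fib_dp(10))   # 55 55
print(fib_dp(50))                  # instant; fib_naive(50) would take ages
```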
Space Complexity Explained:
- O(1): The minimalist! This algorithm uses a constant amount of memory regardless of data size – like storing a single item on your desk.
- O(log n): Pretty lean! This one optimizes space by dividing and conquering – only one branch of the recursion lives on the call stack at a time. Think of recursive binary search, or operations on a balanced binary search tree, where the stack depth stays at O(log n).
- O(n): The wanderer with a suitcase. This algorithm keeps one extra entry per input item – like copying every item into a second closet. Merge sort’s merge buffer, or a hash set built from the input, are typical examples.
- O(n log n): A bit more lavish than the log-space frugal bunch, but still manageable. This class is rarer in practice – a naive merge sort that allocates a brand-new array at every level of recursion lands here, while the standard version needs only O(n) (see the sorting sketch after this list).
- O(n^2): The memory hoarders! These algorithms store something for every pair of items – think of an adjacency matrix for a dense graph, or a dynamic-programming table indexed by two positions, such as edit distance. Spacious, and only worth it when the problem truly demands it.
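To see the difference between O(n) and O(1) auxiliary space in practice, here is a minimal sketch (assuming the standard textbook versions of the algorithms): merge sort allocates a buffer proportional to the input, while insertion sort rearranges elements in place.

```python
def merge_sort(items):
    """O(n) auxiliary space: the merge step builds a new list of size n."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged = []                      # extra buffer proportional to the input
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def insertion_sort(items):
    """O(1) auxiliary space: shifts elements around inside the same list."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items

print(merge_sort([5, 2, 9, 1]))      # returns a new list: [1, 2, 5, 9]
print(insertion_sort([5, 2, 9, 1]))  # sorted in place: [1, 2, 5, 9]
```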
Now, Specifics on Space Complexity:
- Binary Search Tree: O(n) to store n nodes; operations on a balanced tree need only O(log n) auxiliary stack space.
- Quick Sort: O(log n) on average, but O(n) in the worst case – a memory-efficient sorter with a moody side.
- Insertion Sort: O(1) – minimal memory use, suitable for small datasets.
- Bubble Sort: O(1) – low memory overhead but best avoided for large data.
- Heapify: O(1) – building a heap, the foundation for some efficient sorting algorithms.
- Selection Sort: O(1) – low memory usage, akin to finding the tallest person in a room by comparing them to everyone else.
- Heap Sort: O(1) – uses heaps to sort efficiently, a good middle ground.
- Dijkstra’s Algorithm: O(V + E) for finding the shortest paths in a graph – imagine trying out every single road to reach your destination, with some smart bookkeeping!
- DFS/BFS: O(V) for exploring graphs – the visited set plus the stack or queue hold at most one entry per vertex (see the BFS sketch after this list).
- Merge Sort: O(n) – the divide-and-conquer champion pays for its speed with a merge buffer the size of the input (as in the sketch above).
- Radix Sort: O(n + k) where k is the range of digit values (the radix) – the memory goes to the output array plus one bucket per possible digit; memory-efficient for specific data.
- Kruskal’s Algorithm: O(E) for finding the minimum spanning tree in a graph – like connecting all the cities with roads at the lowest cost, with minimal memory splurge.
- Union-Find: O(n) for merging and finding connected components in a graph – imagine grouping friends who are all connected by handshakes without extravagant memory use.
- Bucket Sort: O(n + k) where k is the number of buckets – like sorting colored beads by dropping them into matching buckets, minimal memory impact.
- Prim’s Algorithm: O(V) for finding the minimum spanning tree in a graph, similar to Kruskal’s but with a different approach.
- Topological Sort: O(V) for ordering tasks with dependencies – like planning your day by making sure you shower before heading out to work.
- Counting Sort: O(n + k) where k is the range of input values – one counter per possible value, without hoarding extra wrappers.
- Fibonacci: O(n) for dynamic programming with a memo table, and only O(1) if you keep just the last two values.
- Nested For Loop: O(1) for two nested loops iterating over n elements each – minimal memory impact.
- Adjacency List: O(V + E) for common graph representation – efficient memory use.
- Auxiliary Space: The additional memory space used by an algorithm, often expressed in terms of Big O notation as well.
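To tie the DFS/BFS and adjacency-list entries together, here is a minimal BFS sketch on a hypothetical graph: the adjacency list itself occupies O(V + E), while the visited set and queue never hold more than O(V) entries.

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal: O(V + E) time, O(V) auxiliary space."""
    visited = {start}                 # at most one entry per vertex
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

# Hypothetical adjacency list: each vertex maps to its neighbors (O(V + E) space).
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}
print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D']
```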
With this, you have covered a fantastic range of complexities from the “best” (constant) to the “nightmarish” (exponential).
Time and Space Complexity for Accessing and Traversing Data Structures in Python:
| Data Structure | Time (Access) | Time (Traverse) | Space (Access) | Space (Traverse) | Notes |
|---|---|---|---|---|---|
| List | O(1) | O(n) | O(1) | O(n) | Random access, sequential traversal |
| Linked List | O(n) | O(n) | O(n) | O(n) | Sequential access and traversal |
| Dictionary | O(1) | O(n) | O(n) | O(n) | Average-case access; keys must be hashable |
| Set | O(1) | O(n) | O(n) | O(n) | Average-case access; unordered collection |
| Tuple | O(1) | O(n) | O(1) | O(n) | Immutable; random access, sequential traversal |
| Stack | O(1) | O(n) | O(n) | O(n) | LIFO (Last-In, First-Out) |
| Queue | O(1) | O(n) | O(n) | O(n) | FIFO (First-In, First-Out) |
| Deque | O(1) | O(n) | O(n) | O(n) | Double-ended queue; insertion/removal at both ends |
| Tree | O(log n) | O(n) | O(n) | O(n) | Balanced trees such as AVL or Red-Black trees |
| Heap | O(1) | O(n) | O(1) | O(n) | Priority queue; O(1) access means peeking at the min/max |
| Trie | O(k) | O(n) | O(kn) | O(n) | String-based search tree; k is the length of the key, n the number of keys |
| Graph | Varies | O(V + E) | Varies | O(V + E) | Depends on representation and traversal method |
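To make the table’s access columns concrete, here is a quick sketch comparing an average-case O(1) dictionary lookup with an O(n) membership scan over a list; the exact timings will vary by machine, but the gap grows with n.

```python
import timeit

n = 100_000
values = list(range(n))
lookup = {v: True for v in values}
target = n - 1                       # worst case for the list scan

list_time = timeit.timeit(lambda: target in values, number=100)
dict_time = timeit.timeit(lambda: target in lookup, number=100)

print(f"list membership: {list_time:.4f}s")   # O(n) scan per check
print(f"dict membership: {dict_time:.4f}s")   # O(1) hash lookup on average
```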
Conclusion
- For data structures like linked lists and trees, the complexity depends on the operation. Traversing a linked list is O(n), but reading a specific node is O(1) if you already hold a pointer to it (see the sketch after these notes).
- Don’t get bogged down by specific languages – the time and space complexity concepts are universal!
- If you’re feeling adventurous, explore other complexities beyond O(1), O(n), and O(log n). But hey, one step at a time!
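As a quick illustration of the linked-list point above, here is a tiny sketch (the Node class is a hypothetical stand-in): walking the list costs O(n), but reading a node you already hold a reference to is O(1).

```python
class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

# Build a small linked list: 1 -> 2 -> 3
head = Node(1, Node(2, Node(3)))

# O(n): walk the list to find a value.
current = head
while current is not None and current.value != 3:
    current = current.next
print(current.value)        # 3, found after visiting every node before it

# O(1): if you already hold a pointer to a node, access is immediate.
second = head.next
print(second.value)         # 2, no traversal needed
```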
This comprehensive cheat sheet is just the beginning of your journey into algorithmic efficiency. Keep exploring, keep learning, and most importantly, keep coding!