- Pattern recognition beats memorization—learn to identify which of 15 core patterns applies within the first minute of reading a problem.
- Each pattern has specific trigger phrases: 'contiguous subarray' means Sliding Window, 'next greater element' means Monotonic Stack.
- Time complexity improvements are often dramatic: Two Pointers and Sliding Window take O(n²) brute force down to O(n).
- Two Heaps (running medians) and Monotonic Stack (next-greater queries) are the most underrated patterns here; they show up far more often than their coverage suggests.
- Master the implementations through practice—solve 2-3 LeetCode problems per pattern until recognition becomes automatic.
The Pattern Recognition Problem
Here’s something that took me embarrassingly long to figure out: coding interviews aren’t really about solving novel problems. They’re pattern matching exercises dressed up as algorithmic puzzles.
I watched a candidate struggle with a sliding window problem for 40 minutes. They knew the algorithm existed—they’d solved similar problems before—but under pressure, they couldn’t recognize when to apply it. The interviewer eventually gave a hint: “What if you didn’t start fresh each time?” And suddenly it clicked.
That’s the gap this post addresses. Not teaching you algorithms from scratch (there are textbooks for that), but building a mental index so you can look at a problem and think: “Ah, this is a monotonic stack situation.”
Pattern 1: Two Pointers
A deceptively simple pattern. When you have a sorted array or linked list and need to find pairs that satisfy some condition, two pointers often gets you from O(n²) brute force to O(n).
```python
def two_sum_sorted(nums: list[int], target: int) -> list[int]:
    """Find indices of two numbers that add to target. Array must be sorted."""
    left, right = 0, len(nums) - 1
    while left < right:
        current_sum = nums[left] + nums[right]
        if current_sum == target:
            return [left, right]
        elif current_sum < target:
            left += 1  # need bigger sum
        else:
            right -= 1  # need smaller sum
    return []  # no solution found

# Test: [1, 3, 4, 5, 7, 10, 11], target = 9
# nums[2] + nums[3] = 4 + 5 = 9 ✓
print(two_sum_sorted([1, 3, 4, 5, 7, 10, 11], 9))  # [2, 3]
```
The key insight: by moving pointers based on whether our sum is too big or too small, we eliminate half the search space each step. Why does this work? Because the array is sorted—if nums[left] + nums[right] < target, then nums[left] + nums[right-1] is even smaller (useless), so we must increase left.
When to use it: Sorted arrays, finding pairs/triplets, removing duplicates in-place, container problems.
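Beyond pair-finding, the same idea covers in-place rewrites with a reader/writer pointer pair. A minimal sketch (function name mine) of removing duplicates from a sorted array:

```python
def remove_duplicates(nums: list[int]) -> int:
    """Remove duplicates from a sorted array in place; return the new length."""
    if not nums:
        return 0
    write = 0  # index of the last unique element kept
    for read in range(1, len(nums)):
        if nums[read] != nums[write]:
            write += 1
            nums[write] = nums[read]
    return write + 1

nums = [1, 1, 2, 3, 3]
length = remove_duplicates(nums)
print(nums[:length])  # [1, 2, 3]
```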
Pattern 2: Sliding Window
Whenever you see “contiguous subarray” or “substring” with some constraint, your sliding window alarm should go off.
```python
def max_sum_subarray(nums: list[int], k: int) -> int:
    """Find maximum sum of any contiguous subarray of size k."""
    if len(nums) < k:
        return 0
    # Initialize window
    window_sum = sum(nums[:k])
    max_sum = window_sum
    # Slide the window
    for i in range(k, len(nums)):
        window_sum += nums[i] - nums[i - k]  # add new, remove old
        max_sum = max(max_sum, window_sum)
    return max_sum

print(max_sum_subarray([2, 1, 5, 1, 3, 2], 3))  # 9 (subarray [5, 1, 3])
```
The fixed-size window is straightforward. Variable-size windows are trickier—you expand until a condition breaks, then shrink until it’s valid again:
```python
def min_subarray_sum(nums: list[int], target: int) -> int:
    """Minimum length subarray with sum >= target. Returns 0 if impossible."""
    min_length = float('inf')
    window_sum = 0
    left = 0
    for right in range(len(nums)):
        window_sum += nums[right]
        while window_sum >= target:
            min_length = min(min_length, right - left + 1)
            window_sum -= nums[left]
            left += 1
    return min_length if min_length != float('inf') else 0

print(min_subarray_sum([2, 3, 1, 2, 4, 3], 7))  # 2 (subarray [4, 3])
```
Time complexity is O(n) even though there’s a nested while loop—each element gets added and removed from the window at most once.
Pattern 3: Fast and Slow Pointers
This one’s specifically for linked lists and cycle detection. Floyd’s cycle detection algorithm (sometimes called the tortoise and hare) is elegant: if there’s a cycle, a fast pointer moving 2 steps will eventually catch up to a slow pointer moving 1 step.
```python
class ListNode:
    def __init__(self, val=0):
        self.val = val
        self.next = None

def has_cycle(head: ListNode) -> bool:
    if not head or not head.next:
        return False
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            return True
    return False

def find_cycle_start(head: ListNode) -> ListNode:
    """Returns the node where cycle begins, or None."""
    if not head or not head.next:
        return None
    slow = fast = head
    # Phase 1: Find meeting point
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            break
    else:
        return None  # no cycle
    # Phase 2: Find cycle start
    # Mathematical proof: distance from head to cycle start equals
    # distance from meeting point to cycle start
    slow = head
    while slow != fast:
        slow = slow.next
        fast = fast.next
    return slow
```
Why does phase 2 work? Let’s say the distance from head to cycle start is $a$, the distance from cycle start to the meeting point is $b$, and the cycle length is $c$. When they meet, slow has traveled $a + b$ and fast has traveled $2(a + b)$. The extra $a + b$ that fast covered must be a whole number of laps, so $a + b = kc$ for some integer $k$, which gives $a = kc - b$, meaning if you start one pointer at head and one at the meeting point and advance both one step at a time, they’ll meet at the cycle start. I’ve verified this a dozen times and it still feels like magic.
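A quick sanity check of the two functions above, on a hypothetical list 1 -> 2 -> 3 -> 4 whose tail points back at 2:

```python
# Build 1 -> 2 -> 3 -> 4 -> (back to 2)
a, b, c, d = ListNode(1), ListNode(2), ListNode(3), ListNode(4)
a.next, b.next, c.next, d.next = b, c, d, b

print(has_cycle(a))             # True
print(find_cycle_start(a).val)  # 2
```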
Pattern 4: Merge Intervals
Sort by start time, then merge overlapping intervals. Simple in concept, but the edge cases trip people up.
```python
def merge_intervals(intervals: list[list[int]]) -> list[list[int]]:
    if not intervals:
        return []
    intervals.sort(key=lambda x: x[0])
    merged = [intervals[0]]
    for current in intervals[1:]:
        last = merged[-1]
        if current[0] <= last[1]:  # overlapping
            last[1] = max(last[1], current[1])  # extend end
        else:
            merged.append(current)
    return merged

print(merge_intervals([[1, 3], [2, 6], [8, 10], [15, 18]]))
# [[1, 6], [8, 10], [15, 18]]
```
The gotcha: `current[0] <= last[1]`, not `current[0] < last[1]`. Intervals [1,2] and [2,3] touching at a point should merge to [1,3] in most problems, but read carefully—some problems define them as non-overlapping.
Pattern 5: Cyclic Sort
This pattern is weirdly specific but shows up enough that it’s worth knowing. When you have numbers in range [1, n] or [0, n-1] and need to find missing/duplicate numbers, cyclic sort puts each number at its “correct” index in O(n) time.
```python
def find_missing_number(nums: list[int]) -> int:
    """Find missing number in [0, n] range. Assumes non-negative integers."""
    n = len(nums)
    i = 0
    while i < n:
        correct_idx = nums[i]
        # If number is in valid range and not at correct position, swap
        if nums[i] < n and nums[i] != nums[correct_idx]:
            nums[i], nums[correct_idx] = nums[correct_idx], nums[i]
        else:
            i += 1
    # Now find which index doesn't have its number
    for i in range(n):
        if nums[i] != i:
            return i
    return n  # all present, missing is n

print(find_missing_number([3, 0, 1]))  # 2
```
The trick is that each swap puts at least one element in its correct position, so you get at most n swaps total despite the nested-looking loop.
Note: This pattern assumes the input contains non-negative integers within the expected range. For problems with potential negatives or out-of-range values, add appropriate validation.
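The same swap loop adapts to the variants. Here’s a sketch (function name mine) for finding all missing numbers when values live in [1, n], so value v belongs at index v - 1:

```python
def find_all_missing(nums: list[int]) -> list[int]:
    """All numbers missing from [1, n], given n entries in that range."""
    i, n = 0, len(nums)
    while i < n:
        correct_idx = nums[i] - 1  # value v belongs at index v - 1
        if nums[i] != nums[correct_idx]:
            nums[i], nums[correct_idx] = nums[correct_idx], nums[i]
        else:
            i += 1  # already in place (or a duplicate of what's there)
    return [i + 1 for i in range(n) if nums[i] != i + 1]

print(find_all_missing([2, 3, 1, 8, 2, 3, 5, 1]))  # [4, 6, 7]
```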
Pattern 6: In-place Linked List Reversal
Every interview prep guide covers this, but I’ll add what they don’t: the most common mistake is forgetting to save next before overwriting it.
```python
def reverse_list(head: ListNode) -> ListNode:
    prev = None
    current = head
    while current:
        next_temp = current.next  # SAVE THIS FIRST
        current.next = prev
        prev = current
        current = next_temp
    return prev

def reverse_between(head: ListNode, left: int, right: int) -> ListNode:
    """Reverse nodes between positions left and right (1-indexed)."""
    if not head or left == right:
        return head
    dummy = ListNode(0)
    dummy.next = head
    prev = dummy
    # Move to position before reversal
    for _ in range(left - 1):
        prev = prev.next
    # Reverse the sublist
    current = prev.next
    for _ in range(right - left):
        next_node = current.next
        current.next = next_node.next
        next_node.next = prev.next
        prev.next = next_node
    return dummy.next
```
The dummy node pattern saves so much edge case handling for operations that might modify the head.
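A quick usage sketch: build 1 -> 2 -> 3 -> 4 -> 5, then reverse positions 2 through 4:

```python
# Build 1 -> 2 -> 3 -> 4 -> 5
head = ListNode(1)
tail = head
for v in [2, 3, 4, 5]:
    tail.next = ListNode(v)
    tail = tail.next

head = reverse_between(head, 2, 4)

values = []
while head:
    values.append(head.val)
    head = head.next
print(values)  # [1, 4, 3, 2, 5]
```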
Pattern 7: Tree BFS (Level Order)
BFS with a queue, processing level by level. The key is tracking level boundaries.
```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def level_order(root: TreeNode) -> list[list[int]]:
    if not root:
        return []
    result = []
    queue = deque([root])
    while queue:
        level_size = len(queue)  # crucial: snapshot current level size
        level = []
        for _ in range(level_size):
            node = queue.popleft()
            level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        result.append(level)
    return result
```
Variations: zigzag order (alternate direction each level), right side view (last element of each level), average of levels. All the same skeleton.
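For instance, zigzag order is the same skeleton plus one direction flag. A sketch, reusing the deque import and TreeNode class above:

```python
def zigzag_level_order(root: TreeNode) -> list[list[int]]:
    """Same skeleton as level_order; reverse every other level."""
    if not root:
        return []
    result = []
    queue = deque([root])
    left_to_right = True
    while queue:
        level = []
        for _ in range(len(queue)):
            node = queue.popleft()
            level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        result.append(level if left_to_right else level[::-1])
        left_to_right = not left_to_right
    return result
```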
Pattern 8: Tree DFS
Preorder, inorder, postorder—and knowing which one to use when. Inorder on a BST gives sorted output. Postorder is useful when you need children results before processing parent (like calculating heights).
```python
def max_depth(root: TreeNode) -> int:
    """Classic postorder: need children results first."""
    if not root:
        return 0
    return 1 + max(max_depth(root.left), max_depth(root.right))

def path_sum(root: TreeNode, target: int) -> bool:
    """Preorder with accumulated state."""
    if not root:
        return False
    if not root.left and not root.right:  # leaf
        return root.val == target
    return (path_sum(root.left, target - root.val) or
            path_sum(root.right, target - root.val))
```
I’ve seen candidates freeze when asked which traversal to use. Quick heuristic: if you need to process nodes in order (BST operations), use inorder. If you’re building something from leaves up (heights, subtree sums), use postorder. Everything else is usually preorder.
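To make the inorder case concrete, here’s a minimal sketch: run inorder on a small BST and the values come out sorted.

```python
def inorder(root: TreeNode) -> list[int]:
    """Left, node, right -- on a BST this comes out in sorted order."""
    if not root:
        return []
    return inorder(root.left) + [root.val] + inorder(root.right)

bst = TreeNode(4, TreeNode(2), TreeNode(6))  # a small BST
print(inorder(bst))  # [2, 4, 6]
```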
Pattern 9: Two Heaps
Maintain a max-heap of the smaller half and a min-heap of the larger half. This keeps the median accessible in O(1) with O(log n) insertions.
```python
import heapq

class MedianFinder:
    def __init__(self):
        self.small = []  # max heap (negate values)
        self.large = []  # min heap

    def add_num(self, num: int) -> None:
        # Always add to max heap first
        heapq.heappush(self.small, -num)
        # Balance: largest of small should be <= smallest of large
        if self.small and self.large and (-self.small[0] > self.large[0]):
            val = -heapq.heappop(self.small)
            heapq.heappush(self.large, val)
        # Size balance: small can have at most 1 more than large
        if len(self.small) > len(self.large) + 1:
            val = -heapq.heappop(self.small)
            heapq.heappush(self.large, val)
        if len(self.large) > len(self.small):
            val = heapq.heappop(self.large)
            heapq.heappush(self.small, -val)

    def find_median(self) -> float:
        if len(self.small) > len(self.large):
            return float(-self.small[0])  # cast so we always return a float
        return (-self.small[0] + self.large[0]) / 2

mf = MedianFinder()
for num in [2, 3, 4]:
    mf.add_num(num)
print(mf.find_median())  # 3.0
```
Python’s heapq is a min-heap only, so we negate values for the max-heap behavior. This is the kind of implementation detail that’s easy to mess up under pressure.
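A tiny illustration of the negation trick on its own:

```python
import heapq

max_heap = []
for value in [3, 1, 5]:
    heapq.heappush(max_heap, -value)  # store negatives

print(-max_heap[0])              # 5: the largest, via a min-heap of negatives
print(-heapq.heappop(max_heap))  # 5
```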
Pattern 10: Subsets/Backtracking
Generate all subsets, permutations, or combinations. The backtracking template is almost mechanical.
```python
def subsets(nums: list[int]) -> list[list[int]]:
    result = []

    def backtrack(start: int, current: list[int]):
        result.append(current[:])  # append a copy
        for i in range(start, len(nums)):
            current.append(nums[i])
            backtrack(i + 1, current)
            current.pop()  # backtrack

    backtrack(0, [])
    return result

print(subsets([1, 2, 3]))
# [[], [1], [1, 2], [1, 2, 3], [1, 3], [2], [2, 3], [3]]
```
For permutations, you don’t pass start—instead you track which elements are used:
```python
def permutations(nums: list[int]) -> list[list[int]]:
    result = []
    used = [False] * len(nums)

    def backtrack(current: list[int]):
        if len(current) == len(nums):
            result.append(current[:])
            return
        for i in range(len(nums)):
            if used[i]:
                continue
            used[i] = True
            current.append(nums[i])
            backtrack(current)
            current.pop()
            used[i] = False

    backtrack([])
    return result
```
The time complexity for subsets is O(n · 2ⁿ) and for permutations is O(n · n!). These are exponential, so interviewers usually give small inputs.
Pattern 11: Modified Binary Search
Vanilla binary search is easy. The interesting variants are: search in rotated array, find first/last occurrence, search in infinite array.
```python
def search_rotated(nums: list[int], target: int) -> int:
    """Search in rotated sorted array."""
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = (left + right) // 2
        if nums[mid] == target:
            return mid
        # Figure out which half is sorted
        if nums[left] <= nums[mid]:  # left half is sorted
            if nums[left] <= target < nums[mid]:
                right = mid - 1
            else:
                left = mid + 1
        else:  # right half is sorted
            if nums[mid] < target <= nums[right]:
                left = mid + 1
            else:
                right = mid - 1
    return -1

print(search_rotated([4, 5, 6, 7, 0, 1, 2], 0))  # 4
```
When to use `left <= right` vs `left < right`:
- Use `left <= right` when you need to check every element (like searching for a specific value)
- Use `left < right` when you’re narrowing down to a boundary (like finding first occurrence)
Example of `left < right` for finding first occurrence:
```python
def first_occurrence(nums: list[int], target: int) -> int:
    if not nums:  # guard against empty input
        return -1
    left, right = 0, len(nums) - 1
    while left < right:
        mid = (left + right) // 2
        if nums[mid] < target:
            left = mid + 1
        else:
            right = mid  # don't exclude mid, it might be the answer
    return left if nums[left] == target else -1
```
Pattern 12: Top K Elements
Heaps are your friend here. For top K largest, use a min-heap of size K—counterintuitive until you realize you’re keeping the K largest by kicking out the smallest among them.
```python
import heapq
from collections import Counter

def top_k_frequent(nums: list[int], k: int) -> list[int]:
    """Find k most frequent elements."""
    count = Counter(nums)
    # nlargest maintains a size-k min-heap internally
    return heapq.nlargest(k, count.keys(), key=count.get)

print(top_k_frequent([1, 1, 1, 2, 2, 3], 2))  # [1, 2]
```
heapq.nlargest is O(n log k), which beats sorting at O(n log n) when k is much smaller than n. There’s also a bucket sort approach that’s O(n) for this specific problem, but the heap solution is more generalizable.
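For completeness, a sketch of that bucket sort approach (function name mine): the bucket index is the frequency, which can never exceed the array length.

```python
from collections import Counter

def top_k_frequent_bucket(nums: list[int], k: int) -> list[int]:
    """O(n): bucket index = frequency, which is at most len(nums)."""
    count = Counter(nums)
    buckets = [[] for _ in range(len(nums) + 1)]
    for num, freq in count.items():
        buckets[freq].append(num)
    result = []
    for freq in range(len(buckets) - 1, 0, -1):  # highest frequency first
        for num in buckets[freq]:
            result.append(num)
            if len(result) == k:
                return result
    return result

print(top_k_frequent_bucket([1, 1, 1, 2, 2, 3], 2))  # [1, 2]
```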
Pattern 13: K-way Merge
Merging K sorted lists/arrays. Use a heap to always get the smallest element across all lists.
```python
import heapq

def merge_k_sorted(lists: list[list[int]]) -> list[int]:
    """Merge k sorted lists into one sorted list."""
    heap = []
    # Initialize with first element from each list
    for i, lst in enumerate(lists):
        if lst:
            heapq.heappush(heap, (lst[0], i, 0))  # (value, list_idx, element_idx)
    result = []
    while heap:
        val, list_idx, elem_idx = heapq.heappop(heap)
        result.append(val)
        # Push next element from same list
        if elem_idx + 1 < len(lists[list_idx]):
            next_val = lists[list_idx][elem_idx + 1]
            heapq.heappush(heap, (next_val, list_idx, elem_idx + 1))
    return result

print(merge_k_sorted([[1, 4, 5], [1, 3, 4], [2, 6]]))
# [1, 1, 2, 3, 4, 4, 5, 6]
```
Time complexity: O(N log k), where N is the total number of elements across all lists and k is the number of lists. Each element is pushed and popped from the heap exactly once.
Pattern 14: Topological Sort
For dependency problems: course scheduling, build order, task scheduling. Two approaches: Kahn’s algorithm (BFS with in-degree counting) or DFS with finish times.
```python
from collections import deque, defaultdict

def course_order(num_courses: int, prerequisites: list[list[int]]) -> list[int]:
    """Return valid course order, or [] if impossible (cycle exists)."""
    graph = defaultdict(list)
    in_degree = [0] * num_courses
    for course, prereq in prerequisites:
        graph[prereq].append(course)
        in_degree[course] += 1
    # Start with courses that have no prerequisites
    queue = deque([i for i in range(num_courses) if in_degree[i] == 0])
    order = []
    while queue:
        course = queue.popleft()
        order.append(course)
        for next_course in graph[course]:
            in_degree[next_course] -= 1
            if in_degree[next_course] == 0:
                queue.append(next_course)
    return order if len(order) == num_courses else []

print(course_order(4, [[1, 0], [2, 0], [3, 1], [3, 2]]))
# [0, 1, 2, 3] or [0, 2, 1, 3] - both valid
```
This pattern is essential for any problem involving dependencies—from build systems to task schedulers to course prerequisites.
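For reference, here’s a sketch of the DFS alternative mentioned above: append each course once all of its dependents are finished, then reverse (function name mine).

```python
from collections import defaultdict

def course_order_dfs(num_courses: int, prerequisites: list[list[int]]) -> list[int]:
    """DFS by finish time. Returns [] if a cycle exists."""
    graph = defaultdict(list)
    for course, prereq in prerequisites:
        graph[prereq].append(course)
    state = [0] * num_courses  # 0 = unvisited, 1 = in progress, 2 = done
    order = []

    def dfs(node: int) -> bool:
        if state[node] == 1:  # back edge: cycle
            return False
        if state[node] == 2:
            return True
        state[node] = 1
        for nxt in graph[node]:
            if not dfs(nxt):
                return False
        state[node] = 2
        order.append(node)  # finished: all dependents already placed
        return True

    for node in range(num_courses):
        if not dfs(node):
            return []
    return order[::-1]

print(course_order_dfs(4, [[1, 0], [2, 0], [3, 1], [3, 2]]))
# [0, 2, 1, 3] (one valid order)
```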
Pattern 15: Monotonic Stack
This one feels like a trick until it clicks. When you need “next greater element” or “previous smaller element” type queries, a monotonic stack gives you O(n) instead of O(n²).
```python
def next_greater_elements(nums: list[int]) -> list[int]:
    """For each element, find next greater element to its right."""
    n = len(nums)
    result = [-1] * n
    stack = []  # stores indices; their values are in decreasing order
    for i in range(n):
        while stack and nums[i] > nums[stack[-1]]:
            idx = stack.pop()
            result[idx] = nums[i]
        stack.append(i)
    return result

print(next_greater_elements([2, 1, 2, 4, 3]))
# [4, 2, 4, -1, -1]
```
Why does this work? We maintain a stack of elements that haven’t found their “next greater” yet. When we see a bigger element, it becomes the answer for everything in the stack that’s smaller. Each element is pushed and popped at most once, so it’s O(n).
The histogram problem (largest rectangle in histogram) is the classic monotonic stack problem, and it’s genuinely tricky. If you can solve that one cold, you’re ready.
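Here’s a sketch of it, using an increasing stack of indices with a sentinel bar of height 0 to flush everything at the end:

```python
def largest_rectangle(heights: list[int]) -> int:
    """Largest rectangle in a histogram via a monotonic (increasing) stack."""
    stack = []  # indices of bars with increasing heights
    best = 0
    for i, h in enumerate(heights + [0]):  # sentinel 0 flushes the stack
        while stack and heights[stack[-1]] > h:
            height = heights[stack.pop()]
            left = stack[-1] + 1 if stack else 0  # widest span for this bar
            best = max(best, height * (i - left))
        stack.append(i)
    return best

print(largest_rectangle([2, 1, 5, 6, 2, 3]))  # 10 (the bars of height 5 and 6)
```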
Quick Pattern Selection Guide
| Problem Signal | Pattern to Try |
|---|---|
| “Contiguous subarray/substring” | Sliding Window |
| “Sorted array, find pair” | Two Pointers |
| “Linked list, find cycle/middle” | Fast & Slow Pointers |
| “Merge overlapping X” | Merge Intervals |
| “Numbers in range [1,n], missing/duplicate” | Cyclic Sort |
| “Reverse portion of linked list” | In-place Reversal |
| “Level-by-level tree” | BFS |
| “Path in tree, depth, subtree” | DFS |
| “Running median” | Two Heaps |
| “All combinations/permutations” | Backtracking |
| “Sorted but rotated” | Modified Binary Search |
| “K largest/smallest” | Heap |
| “Merge K sorted things” | K-way Merge |
| “Dependency order” | Topological Sort |
| “Next greater/smaller” | Monotonic Stack |
FAQ
Q: How many patterns should I memorize before interviewing?
You don’t need to memorize implementations verbatim. What matters is recognizing which pattern applies to a problem. Practice 2-3 problems per pattern until you can identify them within the first minute of reading a problem. That recognition skill is what transfers to new problems.
Q: What if a problem combines multiple patterns?
This happens often at higher difficulty levels. A problem might need BFS for traversal plus a monotonic stack for processing at each step. The trick is to decompose: “What’s the main structure?” (usually graph/tree/array traversal) and “What’s happening at each step?” (often a subpattern). Solve each piece, then combine.
Q: Should I always start with brute force in interviews?
Yes, but briefly. State the brute force complexity (“The naive approach is O(n²)”) and immediately follow with “but I think we can do better with X pattern because…”. This shows you understand the baseline while demonstrating you can optimize. Spending too long on brute force implementation wastes time.
Q: What about Dynamic Programming?
DP deserves its own deep dive—it’s more of a problem-solving paradigm than a single pattern. Common DP patterns include 0/1 Knapsack, Unbounded Knapsack, Longest Common Subsequence, and Matrix Chain. The key insight is recognizing overlapping subproblems and optimal substructure. I’ll cover DP patterns in a future post.
What I’m Still Working On
Honestly, the patterns I struggle with most are the ones that combine DP with other techniques. Bitmask DP, DP on trees, DP with segment trees—these feel like pattern combinations where my recognition skills break down. More practice on medium-hard DP problems helps, but there’s no shortcut around the deliberate practice.
For most coding interviews at most companies, these 15 patterns cover probably 80% of what you’ll see. Master the recognition, drill the implementations until they’re automatic, and you’ll spend your interview time on the actual problem-solving instead of fighting syntax.
And one more thing: don’t just read about patterns. Open LeetCode, filter by tag, and solve 3 problems per pattern. That’s the only way this sticks.