- The @contextmanager decorator transforms generator functions into context managers using yield for setup/teardown, eliminating verbose __enter__/__exit__ boilerplate.
- ExitStack enables dynamic composition of multiple context managers, solving the pyramid-of-doom nesting problem and allowing conditional resource management.
- @contextmanager adds 15-20% overhead vs class-based context managers, but this is negligible for real resource management (files, connections, locks).
- When multiple context managers raise during ExitStack cleanup, the exception from the outermost (first-entered) manager propagates, because each later __exit__ runs with the previous exception active; the inner exceptions are preserved via __context__ chaining.
- Use @contextmanager for one-off patterns, ExitStack for dynamic composition, and class-based managers only when you need complex state or multiple methods.
Why Your Context Managers Are Probably Doing Too Much
Open a file, forget to close it. Acquire a lock, crash before releasing it. Connect to a database, leave the connection hanging. We’ve all written code that leaked resources.
The with statement fixes this, but most developers only know the basic use case with open(). Python’s contextlib module offers tools that let you write context managers in minutes instead of hours, and compose them in ways that feel almost magical.
Here’s the gap: between knowing with open('file.txt') as f: works and actually building your own context managers for database transactions, API rate limiting, or temporary configuration changes, there’s a steep learning curve. contextlib.contextmanager flattens that curve completely.

The Traditional Way: Writing __enter__ and __exit__
Before contextlib, you’d write a class with __enter__ and __exit__ methods. Here’s a simple timer context manager:
```python
import time

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.elapsed = time.perf_counter() - self.start
        print(f"Elapsed: {self.elapsed:.4f}s")
        return False  # Don't suppress exceptions

with Timer():
    time.sleep(0.1)
# Output: Elapsed: 0.1002s
```
This works, but it’s verbose. You need to understand the exception parameters in __exit__ even if you’re not handling exceptions. And if you want to return a value from __enter__, you need to store it as an instance variable.
The @contextmanager Decorator: Setup, Yield, Teardown
Here’s the same timer using contextlib.contextmanager:
```python
from contextlib import contextmanager
import time

@contextmanager
def timer():
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"Elapsed: {elapsed:.4f}s")

with timer():
    time.sleep(0.1)
# Output: Elapsed: 0.1003s
```
The decorator transforms a generator function into a context manager. Everything before yield runs in __enter__, everything after runs in __exit__. The try/finally ensures cleanup happens even if the body raises an exception.
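To demystify what the decorator does, here's a stripped-down sketch of the wrapper it builds. This is a simplification; the real `contextlib._GeneratorContextManager` handles more edge cases, such as generators that replace the thrown exception with a different one.

```python
class SimpleGeneratorCM:
    """Simplified sketch of what @contextmanager wraps around a generator."""
    def __init__(self, gen):
        self.gen = gen

    def __enter__(self):
        return next(self.gen)  # run up to the yield; yielded value binds to `as`

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None:
            try:
                next(self.gen)  # resume after the yield
            except StopIteration:
                return False
            raise RuntimeError("generator didn't stop")
        try:
            self.gen.throw(exc_val)  # re-raise inside the generator, at the yield
        except StopIteration:
            return True  # generator caught it without re-raising: suppress
        except BaseException as e:
            if e is exc_val:
                return False  # generator let it propagate: don't suppress
            raise
        raise RuntimeError("generator didn't stop")

def simple_contextmanager(func):
    def helper(*args, **kwargs):
        return SimpleGeneratorCM(func(*args, **kwargs))
    return helper

@simple_contextmanager
def announce():
    print("Enter")
    yield "payload"
    print("Exit")

with announce() as value:
    print(value)
# Output:
# Enter
# payload
# Exit
```

The two branches of `__exit__` are exactly the "everything before yield / everything after yield" split described above, plus the throw-into-the-generator path that makes try/finally cleanup work.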
You can yield a value to bind it to the as variable:
```python
@contextmanager
def timer():
    start = time.perf_counter()
    times = {'start': start}
    try:
        yield times
    finally:
        times['elapsed'] = time.perf_counter() - start

with timer() as t:
    time.sleep(0.05)
    print(f"Mid-execution check: {time.perf_counter() - t['start']:.4f}s")
print(f"Total: {t['elapsed']:.4f}s")
# Output:
# Mid-execution check: 0.0502s
# Total: 0.0503s
```
Notice how times is a mutable dict. That’s the trick for returning values that get updated in the finally block — you can’t reassign the yielded object, but you can mutate it.
Real-World Example: Temporary Environment Variables
Here’s a pattern I use constantly for testing code that reads environment variables:
```python
import os
from contextlib import contextmanager

@contextmanager
def temp_env(**kwargs):
    old_env = {}
    for key, value in kwargs.items():
        old_env[key] = os.environ.get(key)  # None if not set
        if value is None:
            os.environ.pop(key, None)  # Remove the key
        else:
            os.environ[key] = str(value)
    try:
        yield
    finally:
        for key, old_value in old_env.items():
            if old_value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = old_value

# Usage
print(os.environ.get('DEBUG'))  # None
with temp_env(DEBUG='1', API_KEY='test-key'):
    print(os.environ.get('DEBUG'))    # '1'
    print(os.environ.get('API_KEY'))  # 'test-key'
print(os.environ.get('DEBUG'))    # None again
print(os.environ.get('API_KEY'))  # None
```
This is bulletproof. Even if the code inside the with block crashes, the environment gets restored.
But what if you need to handle exceptions differently? Say you want to suppress a specific exception type?
```python
@contextmanager
def suppress_keyboard_interrupt():
    try:
        yield
    except KeyboardInterrupt:
        print("\nInterrupt suppressed, cleaning up...")
        # Exception is swallowed

with suppress_keyboard_interrupt():
    # Long-running task
    time.sleep(100)
```
When you catch an exception in the except block and don’t re-raise it, the context manager suppresses it. This is equivalent to __exit__ returning True.
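For comparison, here's a class-based sketch of the same suppressor, which makes that mechanism explicit: __exit__ returns True to swallow the exception and False to let everything else propagate.

```python
class SuppressKeyboardInterrupt:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is not None and issubclass(exc_type, KeyboardInterrupt):
            print("\nInterrupt suppressed, cleaning up...")
            return True   # suppress the exception
        return False      # let anything else propagate

with SuppressKeyboardInterrupt():
    raise KeyboardInterrupt
print("Execution continues")
```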
The Problem: Multiple Context Managers
Now imagine you’re writing a function that needs to manage multiple resources: open a database connection, acquire a lock, write to a file, and maybe log the operation. The naive approach:
```python
def process_data(db_config, data):
    with get_db_connection(db_config) as conn:
        with acquire_lock('process_lock'):
            with open('output.txt', 'w') as f:
                with timer():
                    # Actual work here
                    result = expensive_computation(conn, data)
                    f.write(str(result))
```
This is the “pyramid of doom.” Each level of nesting adds cognitive load. And it gets worse: what if you need to conditionally manage some resources?
```python
def process_data(db_config, data, use_file=False, use_lock=False):
    with get_db_connection(db_config) as conn:
        if use_lock:
            with acquire_lock('process_lock'):
                if use_file:
                    with open('output.txt', 'w') as f:
                        # Work...
                        pass
                else:
                    # Work...
                    pass
        else:
            if use_file:
                with open('output.txt', 'w') as f:
                    # Work...
                    pass
            else:
                # Work...
                pass
```
This is unmaintainable. You’d need to duplicate the core logic four times.

Enter ExitStack: Dynamic Context Manager Composition
ExitStack lets you manage an arbitrary number of context managers dynamically:
```python
from contextlib import ExitStack

def process_data(db_config, data, use_file=False, use_lock=False):
    with ExitStack() as stack:
        conn = stack.enter_context(get_db_connection(db_config))
        if use_lock:
            stack.enter_context(acquire_lock('process_lock'))
        if use_file:
            f = stack.enter_context(open('output.txt', 'w'))
        else:
            f = None
        # All resources are managed
        result = expensive_computation(conn, data)
        if f:
            f.write(str(result))
```
Now there’s only one code path. ExitStack.enter_context() registers a context manager and immediately enters it, returning the result of __enter__. When the ExitStack itself exits, it calls __exit__ on all registered managers in reverse order (LIFO — last in, first out).
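You can see the LIFO unwind order with a tiny demo (the resource names here are made up for illustration):

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def resource(name, log):
    log.append(f"enter {name}")
    try:
        yield name
    finally:
        log.append(f"exit {name}")

log = []
with ExitStack() as stack:
    for name in ("db", "lock", "file"):
        stack.enter_context(resource(name, log))
print(log)
# ['enter db', 'enter lock', 'enter file', 'exit file', 'exit lock', 'exit db']
```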
This is particularly powerful for cleanup operations that you discover you need during execution:
```python
import tempfile
import shutil
from contextlib import ExitStack

def process_images(image_paths, output_dir):
    with ExitStack() as stack:
        temp_dirs = []
        for img_path in image_paths:
            # Create temp dir for each image
            temp_dir = tempfile.mkdtemp()
            temp_dirs.append(temp_dir)
            # Register cleanup
            stack.callback(shutil.rmtree, temp_dir)
            # Process image using temp_dir
            process_single_image(img_path, temp_dir, output_dir)
        # All temp dirs get cleaned up automatically
```
The stack.callback() method registers an arbitrary function to be called on exit. It’s like atexit.register() but scoped to the ExitStack.
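A related ExitStack method worth knowing is pop_all(), which transfers the registered cleanups to a fresh stack. Here's a sketch of the classic use: open several files and keep them all open only if every open() succeeds — if one fails partway, the stack closes the ones already opened. The `open_all` name is mine, not a stdlib function.

```python
from contextlib import ExitStack
import os
import tempfile

def open_all(paths):
    with ExitStack() as stack:
        files = [stack.enter_context(open(p)) for p in paths]
        # Setup succeeded: pop_all() detaches the cleanups onto a new
        # stack, so the files survive the end of this with block.
        # If any open() raised, the original stack closes the rest.
        return files, stack.pop_all()

# Demo with throwaway temp files
paths = []
for _ in range(2):
    fd, path = tempfile.mkstemp()
    os.close(fd)
    paths.append(path)

files, cleanup = open_all(paths)
print(all(not f.closed for f in files))  # True — still open
with cleanup:
    pass  # the caller decides when everything gets closed
print(all(f.closed for f in files))      # True — all closed
for path in paths:
    os.remove(path)
```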
Surprising Edge Case: Exception Context Preservation
Here’s something that tripped me up. If multiple context managers raise exceptions during cleanup, which one gets propagated?
```python
from contextlib import contextmanager, ExitStack

@contextmanager
def raise_on_exit(msg):
    try:
        yield
    finally:
        raise ValueError(msg)

with ExitStack() as stack:
    stack.enter_context(raise_on_exit("First"))
    stack.enter_context(raise_on_exit("Second"))
    stack.enter_context(raise_on_exit("Third"))
```
What gets raised? The answer: ValueError("First"). ExitStack exits in reverse order, so the third context manager exits first and raises ValueError("Third"). That exception is then passed into the second manager's __exit__, whose cleanup raises ValueError("Second"), and finally the first manager's cleanup raises ValueError("First"), which is the exception that actually propagates.
But the inner exceptions aren’t lost — they’re available via __context__:
```python
try:
    with ExitStack() as stack:
        stack.enter_context(raise_on_exit("First"))
        stack.enter_context(raise_on_exit("Second"))
        stack.enter_context(raise_on_exit("Third"))
except ValueError as e:
    print(f"Caught: {e}")
    print(f"Context: {e.__context__}")
    print(f"Context context: {e.__context__.__context__}")
# Output:
# Caught: First
# Context: Second
# Context context: Third
```
This chaining follows PEP 3134. Most of the time you won’t care, but if you’re debugging cleanup logic, knowing this can save hours.
Advanced Pattern: Reusable Context Managers with @contextmanager
Sometimes you want a context manager object you can use more than once — what the contextlib docs call "reusable", as distinct from "reentrant". The standard approach fails:
```python
@contextmanager
def my_context():
    print("Enter")
    yield
    print("Exit")

ctx = my_context()
with ctx:
    print("First use")
with ctx:  # RuntimeError: generator didn't yield
    print("Second use")
```
Once a generator is exhausted, you can’t reuse it. The workaround is a wrapper whose __enter__ creates a fresh context manager on every use:
```python
from contextlib import contextmanager

class ReusableContextManager:
    def __init__(self, cm_factory):
        self.cm_factory = cm_factory  # creates a fresh context manager each time

    def __enter__(self):
        self.cm = self.cm_factory()
        return self.cm.__enter__()

    def __exit__(self, *args):
        return self.cm.__exit__(*args)

def reusable_contextmanager(func):
    cm_factory = contextmanager(func)
    def wrapper(*args, **kwargs):
        return ReusableContextManager(lambda: cm_factory(*args, **kwargs))
    return wrapper

@reusable_contextmanager
def my_context():
    print("Enter")
    yield
    print("Exit")

ctx = my_context()
with ctx:
    print("First")
with ctx:
    print("Second")
# Output:
# Enter
# First
# Exit
# Enter
# Second
# Exit
```
I’m not entirely sure this is worth the complexity for most use cases. Usually if you need a reusable context manager, you want a proper class with state tracking anyway.
Performance Comparison: Class vs @contextmanager
Does the decorator add overhead? Let’s measure (Python 3.11, M1 MacBook):
```python
import time
import timeit
from contextlib import contextmanager

# Class-based
class ClassTimer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *args):
        self.elapsed = time.perf_counter() - self.start

# Decorator-based
@contextmanager
def func_timer():
    start = time.perf_counter()
    times = {'start': start}
    try:
        yield times
    finally:
        times['elapsed'] = time.perf_counter() - start

# Benchmark
class_time = timeit.timeit(
    'with ClassTimer(): pass',
    globals=globals(),
    number=1_000_000
)
func_time = timeit.timeit(
    'with func_timer(): pass',
    globals=globals(),
    number=1_000_000
)

print(f"Class-based: {class_time:.4f}s")
print(f"@contextmanager: {func_time:.4f}s")
print(f"Overhead: {(func_time / class_time - 1) * 100:.1f}%")
# Output (typical):
# Class-based: 0.3521s
# @contextmanager: 0.4103s
# Overhead: 16.5%
```
The decorator adds about 15-20% overhead due to generator frame creation and the try/finally wrapper. For one million iterations, that’s an extra 60ms total. For typical application code where you’re managing actual resources (files, network connections, locks), this is irrelevant.
But if you’re writing a hot loop where you enter/exit a context manager millions of times per second, the class-based approach wins.
Combining ExitStack with suppress()
Here’s a pattern for graceful degradation: try to acquire optional resources, suppress errors if they fail:
```python
from contextlib import ExitStack, suppress

def process_with_optional_cache(data):
    with ExitStack() as stack:
        # Required resource
        db = stack.enter_context(get_db_connection())
        # Optional cache — don't fail if unavailable
        cache = None
        with suppress(ConnectionError, TimeoutError):
            cache = stack.enter_context(get_redis_connection())
        if cache:
            result = cache.get(data.key)
            if result:
                return result
        # Compute and cache if possible
        result = expensive_query(db, data)
        if cache:
            with suppress(Exception):  # Don't fail on cache write
                cache.set(data.key, result)
        return result
```
This is production-grade error handling. The cache is best-effort: if it’s down, we log nothing and continue. If it’s available but writes fail, we don’t crash the request.
What I’d Do Differently
For one-off context managers, @contextmanager is always my first choice. It’s faster to write and easier to read.
For reusable context managers with complex state or multiple methods, I’d still write a class. Example: a database transaction manager that exposes .commit() and .rollback() methods.
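As a sketch of what I mean (the `conn` here stands in for any DB-API connection; this isn't from a specific library):

```python
class Transaction:
    """Transaction wrapper with a richer interface than a generator can offer."""
    def __init__(self, conn):
        self.conn = conn
        self.finished = False

    def __enter__(self):
        return self

    def commit(self):
        self.conn.commit()
        self.finished = True

    def rollback(self):
        self.conn.rollback()
        self.finished = True

    def __exit__(self, exc_type, exc_val, exc_tb):
        if not self.finished:
            if exc_type is None:
                self.conn.commit()    # clean exit: commit
            else:
                self.conn.rollback()  # exception: roll back
        return False                  # never suppress
```

Inside the with block, callers can call tx.commit() or tx.rollback() early; a @contextmanager generator has nowhere to hang those methods.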
ExitStack is underused. Every time you nest more than two with statements, ask if ExitStack would make the code clearer. It usually does.
The one thing I wish Python had: a version of ExitStack that works across sync and async code. AsyncExitStack can actually mix the two (enter_context() for sync managers, enter_async_context() for async ones), but it can only be used from async code. For projects with both sync and async entry points, you still end up maintaining parallel versions of resource management code.
Use @contextmanager for 80% of cases. Use ExitStack when you need dynamic composition. Use classes when you need rich interfaces or extreme performance. And always, always put cleanup in a finally block or context manager — never rely on garbage collection to release resources.