Writing Custom Python Decorators: Args, Returns, and Real-World Patterns

Updated Feb 6, 2026

The Decorator That Ate My Return Value

Most decorator tutorials show you the happy path. Here’s what they don’t show you — the moment your decorator silently swallows a return value and you spend twenty minutes staring at None wondering what went wrong.

def log_call(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        func(*args, **kwargs)  # spot the bug?
    return wrapper

@log_call
def calculate_tax(amount, rate=0.1):
    return amount * rate

result = calculate_tax(1000)
print(result)  # None

Output:

Calling calculate_tax
None

That missing return before func(*args, **kwargs) is the single most common decorator bug. It’s so common that I’d bet every Python developer has written it at least once. The wrapper function implicitly returns None because we called the original function but never passed its return value back.

The fix is trivial:

def log_call(func):
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)  # don't forget this
    return wrapper

But the pattern this reveals is important. Every decorator is responsible for faithfully passing through the original function’s behavior — its arguments, its return value, its exceptions, and its identity. As we saw in Part 1, decorators are just functions wrapping functions. The hard part isn’t understanding that concept; it’s building wrappers that don’t accidentally break the thing they’re wrapping.
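To make that concrete, here is a minimal "transparent wrapper" sketch to start any decorator from (the passthrough name is just for illustration). It forwards arguments, returns the result, lets exceptions propagate, and preserves identity with functools.wraps, which we'll dig into below:

import functools

def passthrough(func):
    @functools.wraps(func)  # preserve __name__, __doc__, etc.
    def wrapper(*args, **kwargs):
        # No try/except: exceptions propagate unchanged.
        # The return statement forwards the wrapped function's result.
        return func(*args, **kwargs)
    return wrapper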

Accepting Arguments in Your Decorator

A decorator without parameters is straightforward: one function in, one function out. But what if you want something like @retry(max_attempts=3)? That extra set of parentheses changes everything.

The way Python handles this is a bit mind-bending at first. When you write @retry(max_attempts=3), Python calls retry(max_attempts=3) first, and whatever that returns must be a decorator — which itself takes a function and returns a wrapper. So you end up with three nested functions:

import time
import functools

def retry(max_attempts=3, delay=1.0):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exception = None
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    last_exception = e
                    if attempt < max_attempts:
                        print(f"{func.__name__} failed (attempt {attempt}/{max_attempts}): {e}")
                        time.sleep(delay)
            raise last_exception
        return wrapper
    return decorator

@retry(max_attempts=3, delay=0.5)
def fetch_data(url):
    import random
    if random.random() < 0.7:
        raise ConnectionError("Server timeout")
    return {"status": "ok"}

print(fetch_data("https://api.example.com/data"))

A typical run:

fetch_data failed (attempt 1/3): Server timeout
fetch_data failed (attempt 2/3): Server timeout
{'status': 'ok'}

Or, if all three attempts fail, you get the original ConnectionError raised. Notice functools.wraps(func) on the wrapper — this copies over __name__, __doc__, and __module__ from the original function. Without it, fetch_data.__name__ would return "wrapper", which makes debugging decorator-heavy code genuinely painful.

The three-level nesting ($f_{\text{outer}} \to f_{\text{decorator}} \to f_{\text{wrapper}}$) is the canonical pattern for parameterized decorators. It’s ugly, nobody loves it, but it works. There are alternatives (class-based decorators, which we’ll get to in Part 3), but for quick utility decorators, this nested approach is the standard.

Timing, Caching, and the Decorators You’ll Actually Write

Let’s build something more practical. A timing decorator that tracks execution and optionally logs slow calls:

import time
import functools

def timed(threshold_ms=None):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000

            if threshold_ms is not None and elapsed_ms > threshold_ms:
                print(f"SLOW: {func.__name__} took {elapsed_ms:.1f}ms (threshold: {threshold_ms}ms)")
            elif threshold_ms is None:
                print(f"{func.__name__}: {elapsed_ms:.1f}ms")

            return result
        return wrapper
    return decorator

@timed(threshold_ms=100)
def process_records(records):
    # simulate work
    time.sleep(0.05 * len(records))
    return [r * 2 for r in records]

process_records([1, 2, 3])  # SLOW: process_records took 150.2ms (threshold: 100ms)
process_records([1])         # (silence — under threshold)

This is more useful than a bare timing decorator because in production you don’t want to log every single call. You want to know when something is slower than expected. The threshold parameter turns noise into signal.

Now here’s something that tripped me up when I first tried composing decorators. What happens when you want a decorator that works both with and without parentheses? Something like:

@timed          # no parentheses
def foo(): ...

@timed(threshold_ms=50)  # with parentheses
def bar(): ...

The first case passes foo directly to timed, so timed receives a callable. The second case calls timed(threshold_ms=50) first, which returns the actual decorator. You need to handle both:

def timed(_func=None, *, threshold_ms=None):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            if threshold_ms is None or elapsed_ms > threshold_ms:
                print(f"{func.__name__}: {elapsed_ms:.1f}ms")
            return result
        return wrapper

    if _func is not None:
        return decorator(_func)
    return decorator

The trick is _func=None as the first positional argument, with everything else keyword-only (the * forces that). When used without parentheses, _func receives the decorated function. When used with parentheses, _func stays None and the keyword arguments get captured. This pattern comes from David Beazley and Brian K. Jones’s Python Cookbook (O’Reilly, 3rd edition) — it’s been around for years and it’s still the cleanest solution I’ve seen.
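As a quick sanity check, both spellings now work; the timing numbers below are illustrative:

import time

@timed                      # bare: _func receives quick directly
def quick():
    return 1

@timed(threshold_ms=50)     # called: _func stays None, decorator is returned
def slow():
    time.sleep(0.1)
    return 2

quick()  # quick: 0.0ms    (no threshold, so every call logs)
slow()   # slow: 100.4ms   (logged because it exceeded 50ms)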

Decorators That Modify Behavior Based on Arguments

Here’s where decorators start getting genuinely powerful. Instead of just wrapping a function passively (logging, timing), a decorator can inspect and transform the arguments or the return value.

Consider input validation. You could write validation logic inside every function, or you could write it once:

import inspect
import functools

def validate_types(**expected_types):
    def decorator(func):
        sig = inspect.signature(func)  # computed once at decoration time, not per call

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()

            for param_name, expected_type in expected_types.items():
                if param_name in bound.arguments:
                    value = bound.arguments[param_name]
                    if not isinstance(value, expected_type):
                        raise TypeError(
                            f"{func.__name__}() expected {param_name} to be "
                            f"{expected_type.__name__}, got {type(value).__name__}"
                        )
            return func(*args, **kwargs)
        return wrapper
    return decorator

@validate_types(amount=(int, float), currency=str)
def format_price(amount, currency="USD"):
    return f"{currency} {amount:,.2f}"

print(format_price(1999.99))           # USD 1,999.99
print(format_price(1999.99, "EUR"))    # EUR 1,999.99
print(format_price("not a number"))    # TypeError: format_price() expected amount to be int_float, got str

Actually, there’s a subtle issue here. When expected_type is a tuple like (int, float), expected_type.__name__ will throw AttributeError because tuples don’t have __name__. A more robust version needs a guard:

type_name = (expected_type.__name__ 
             if hasattr(expected_type, '__name__') 
             else str(expected_type))

This is the kind of thing you only discover when you actually run the error path. The happy path works fine; it’s the error messages that break.

I’m not entirely sure whether using inspect.signature has meaningful overhead in hot paths. My best guess is that for anything called fewer than ~10,000 times per second, you won’t notice it. But if you’re decorating a function inside a tight numerical loop, the overhead of binding and checking arguments on every call adds up. For those cases, you’d want to either skip runtime validation entirely or only validate in debug mode:

def validate_types(**expected_types):
    def decorator(func):
        if not __debug__:  # __debug__ is False under python -O, so skip wrapping
            return func
        # ... validation logic ...
    return decorator

Running with python -O sets __debug__ to False and eliminates the overhead completely. It’s the same mechanism Python uses for assert statements.
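You can check the switch with a one-liner:

print(__debug__)  # True under `python script.py`, False under `python -O script.py`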

Memoization: The Decorator Pattern Everyone Should Know

Python ships with functools.lru_cache (and functools.cache since 3.9), so you rarely need to write your own memoization. But building one teaches you a lot about what decorators can do with state.
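For reference, the standard-library route is two lines; lru_cache and cache_info are real functools APIs:

from functools import lru_cache

@lru_cache(maxsize=None)  # functools.cache is the same thing on 3.9+
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(35))           # 9227465, effectively instant
print(fib.cache_info())  # CacheInfo(hits=33, misses=36, maxsize=None, currsize=36)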

A basic memoization decorator maintains a dictionary mapping arguments to results. If we’ve seen these args before, return the cached result. Otherwise, compute, store, and return:

def memoize(func):
    cache = {}
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # kwargs need to be hashable for the cache key
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    wrapper.cache = cache  # expose for inspection
    wrapper.cache_clear = lambda: cache.clear()
    return wrapper

@memoize
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

Without memoization, fibonacci(35) performs roughly $O(2^n)$ calls — about 18 million for $n = 35$. With memoization, it’s $O(n)$ — exactly 36 unique calls. The difference is dramatic:

import time

start = time.perf_counter()
result = fibonacci(35)
print(f"fib(35) = {result}, took {(time.perf_counter()-start)*1000:.1f}ms")
# fib(35) = 9227465, took 0.1ms
# (without @memoize the same call takes seconds; don't try n > 40 unmemoized)

The reason I’m showing a hand-rolled version instead of just saying “use @lru_cache” is the details. Notice wrapper.cache and wrapper.cache_clear — we’re attaching attributes to the wrapper function. Functions are objects in Python, so you can hang state off them. The built-in lru_cache does the same thing: it gives you cache_info() and cache_clear() methods.
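Those attached attributes are immediately usable. Continuing from the fib(35) run above:

print(len(fibonacci.cache))  # 36, one entry per unique n from 0 to 35
fibonacci.cache_clear()
print(len(fibonacci.cache))  # 0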

But there’s a gotcha with this simple implementation: it grows unbounded. Every unique set of arguments stays in the dictionary forever. That’s fine for fibonacci, but if you’re memoizing something with a huge argument space (say, web API responses keyed by URL), you’ll eventually eat all your memory. The lru_cache(maxsize=128) from the standard library handles this with an LRU eviction policy — when the cache is full, the least recently used entry gets dropped. The time complexity for cache lookup and update is $O(1)$ because it uses a hash map internally combined with a doubly linked list for ordering.

Why not always use lru_cache? One edge case: it requires all arguments to be hashable. If your function accepts a list or a dict, lru_cache will throw TypeError: unhashable type: 'list'. The hand-rolled version above has the same limitation, but at least you can modify it — for instance, by converting list args to tuples in the cache key.
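Here is one way that modification might look — a sketch built around a hypothetical freeze helper:

def freeze(value):
    # Hypothetical helper: recursively convert common unhashable
    # containers into hashable equivalents for use in cache keys.
    if isinstance(value, list):
        return tuple(freeze(v) for v in value)
    if isinstance(value, dict):
        return tuple(sorted((k, freeze(v)) for k, v in value.items()))
    if isinstance(value, set):
        return frozenset(freeze(v) for v in value)
    return value

# Inside memoize's wrapper, the key line would then become:
# key = (tuple(freeze(a) for a in args),
#        tuple(sorted((k, freeze(v)) for k, v in kwargs.items())))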

Error Handling in Decorators

Decorators that swallow exceptions are dangerous. But decorators that transform exceptions can be genuinely useful.

def graceful_fallback(default=None, exceptions=(Exception,), log=True):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exceptions as e:
                if log:
                    print(f"Warning: {func.__name__} failed with {type(e).__name__}: {e}")
                return default() if callable(default) else default
        return wrapper
    return decorator

@graceful_fallback(default=[], exceptions=(KeyError, ValueError))
def parse_config_tags(raw_config):
    tags = raw_config["metadata"]["tags"]  # might KeyError
    return [t.strip() for t in tags.split(",")]  # might AttributeError if tags isn't a str

result = parse_config_tags({"metadata": {}})
# Warning: parse_config_tags failed with KeyError: 'tags'
print(result)  # []

Notice default=[] versus default=list. If you pass a mutable default like [], every failed call returns the same list object. Mutating one mutates all of them. That’s why the decorator checks callable(default) — passing default=list creates a fresh empty list each time. This is the same gotcha as Python’s mutable default arguments, just wearing a different hat.
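You can watch the shared-list version go wrong in a few lines (log=False just quiets the warnings):

@graceful_fallback(default=[], exceptions=(ValueError,), log=False)
def always_fails():
    raise ValueError("boom")

first = always_fails()
first.append("oops")   # mutate the one shared default list
print(always_fails())  # ['oops'], the mutation leaked into every future fallback

@graceful_fallback(default=list, exceptions=(ValueError,), log=False)
def also_fails():
    raise ValueError("boom")

print(also_fails())    # [], a fresh list on every failure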

And here’s a pattern I’d recommend against but you’ll see in production codebases: decorators that catch Exception broadly and return a default. It masks bugs. If parse_config_tags has an actual programming error (say, a typo in a variable name causing NameError), this decorator will silently catch it and return []. You’ll never know something’s wrong until the data downstream is garbage. Narrow your exception tuple. Always.

Preserving Function Signatures

We’ve been using functools.wraps throughout, but it’s worth understanding what it actually does. Without it:

def my_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@my_decorator
def greet(name: str, greeting: str = "Hello") -> str:
    """Return a greeting string."""
    return f"{greeting}, {name}!"

print(greet.__name__)        # 'wrapper'
print(greet.__doc__)         # None
print(greet.__annotations__) # {}

With @functools.wraps(func) on the wrapper:

print(greet.__name__)        # 'greet'
print(greet.__doc__)         # 'Return a greeting string.'
print(greet.__annotations__) # {'name': <class 'str'>, 'greeting': <class 'str'>, 'return': <class 'str'>}
print(greet.__wrapped__)     # <function greet at 0x...>  (reference to original)

functools.wraps copies __module__, __name__, __qualname__, __annotations__, __doc__, and sets __wrapped__ to point at the original function. That last one is useful — you can always reach through to the unwrapped function via greet.__wrapped__ if you need to bypass the decorator (handy in testing).
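That makes bypassing a decorator in a test a one-liner:

# greet.__wrapped__ is set by functools.wraps and points at the original:
assert greet.__wrapped__("Ada") == "Hello, Ada!"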

One subtle point: the wrapper’s real signature is (*args, **kwargs), yet inspect.signature(greet) correctly reports (name: str, greeting: str = 'Hello') -> str. That works because inspect.signature follows the __wrapped__ chain by default — it takes a follow_wrapped parameter that defaults to True. If you’re on a very old Python version, or using a tool that doesn’t follow __wrapped__, you may still see the bare (*args, **kwargs) signature.
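You can see both behaviors side by side:

import inspect

print(inspect.signature(greet))
# (name: str, greeting: str = 'Hello') -> str

print(inspect.signature(greet, follow_wrapped=False))
# (*args, **kwargs)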

A Real-World Pattern: Registering Functions

Not all decorators wrap functions. Some decorators collect functions into a registry — they don’t change the function’s behavior at all.

class CommandRegistry:
    def __init__(self):
        self._commands = {}

    def register(self, name=None):
        def decorator(func):
            cmd_name = name or func.__name__
            if cmd_name in self._commands:
                print(f"Warning: overwriting command '{cmd_name}'")
            self._commands[cmd_name] = func
            return func  # return unchanged — no wrapper needed
        return decorator

    def execute(self, name, *args, **kwargs):
        if name not in self._commands:
            raise ValueError(f"Unknown command: {name}")
        return self._commands[name](*args, **kwargs)

    def list_commands(self):
        return list(self._commands.keys())

cli = CommandRegistry()

@cli.register()
def status():
    return "All systems operational"

@cli.register(name="build")
def run_build(target="all"):
    return f"Building {target}..."

print(cli.list_commands())        # ['status', 'build']
print(cli.execute("build", "docs"))  # Building docs...

Flask uses this exact pattern for @app.route(). pytest uses it for @pytest.fixture. It’s everywhere. The key insight is that return func — not return wrapper — means the original function is completely unmodified. The decorator’s only job is the side effect of registering it.

This pattern composes well. You can register the same function under multiple names, add metadata alongside the registration, or build entire plugin systems where modules register their capabilities at import time.
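For example, because register returns the function unchanged, stacking it registers one function under two names (a sketch building on the registry above):

@cli.register(name="deploy")
@cli.register(name="ship")
def deploy(target="prod"):
    return f"Deploying to {target}..."

print(cli.execute("ship"))               # Deploying to prod...
print(cli.execute("deploy", "staging"))  # Deploying to staging...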

Where This Leaves Us

The core skill in writing decorators isn’t the nesting or the syntax — it’s knowing what to preserve and what to change. Always return the original function’s value. Always use functools.wraps. Keep your exception handling narrow. And if you don’t need to modify the function’s behavior, don’t wrap it at all — just register it and return it unchanged.

For most day-to-day work, I’d stick with function-based decorators using the three-level nesting pattern for parameterized ones. They’re not pretty, but they’re predictable, easy to debug, and everyone on your team will understand them. The _func=None trick for optional parentheses is worth memorizing — you’ll use it more than you’d expect.

But function-based decorators have limits. What about decorators that need to maintain complex state across calls? What about stacking multiple decorators and controlling the order? What about making decorators that play nicely with async functions? Those are all solvable, but they push you toward class-based decorators and some genuinely interesting Python internals — which is exactly what Part 3 covers.

Python Decorators Complete Guide Series (2/3)
