The Paralyzed Programmer: What Spinal Cord Injury Taught Me About Debugging

Nov 13, 2025

8 min read

AI Policy

I've spent fifteen years obsessing over system architecture. We build distributed systems, design for failure, implement redundancies to hit the "five nines." But lately my study of complex systems hasn't been on a whiteboard. It's been in a wheelchair.

When I sustained a spinal cord injury, I didn't just lose mobility. I got a front-row seat to the ultimate legacy system failure, and I couldn't look away. The parallels between the human nervous system and distributed computing aren't just poetic. Both rely on a central control plane, edge nodes, message passing, feedback loops. When the human body breaks, it fails in ways that would be instantly recognizable in a post-mortem: network partitions, latency spikes, cascading failures.

I've started reading my medical charts like stack traces.

Note: I'm an engineer, not a neuroscientist. These comparisons reflect my understanding from living with SCI and reading medical literature, but the nervous system is complex and I've probably oversimplified. If something's off or you have resources that would help me learn more, I'd genuinely appreciate it.


The Architecture

To understand the failure, you have to understand the spec. The human body runs as a federated network: the brain handles high-level logic (orchestration, planning, complex decisions) but delegates aggressively. The spinal cord and peripheral ganglia handle local, latency-sensitive tasks without waiting for instructions from headquarters.

Touch a hot stove and you'll see this delegation in action. The sensory signal travels to your spine, interneurons process it, and a motor command fires back to yank your hand away. Your brain finds out about it afterward, like a manager getting CC'd on an email the team already handled.

This is edge computing, and the performance gains are significant. A round-trip to the brain takes 150–200 milliseconds, while spinal reflexes complete in 15–50 milliseconds. When you're dealing with a hot stove, that difference matters. It's the gap between "ow" and skin grafts.
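To make the delegation concrete, here's a toy sketch of the pattern. The latencies come from the ranges above; every function name is invented for the illustration:

import time

# Illustrative latencies from the ranges above
SPINAL_LATENCY_S = 0.03   # local reflex: ~15-50 ms
BRAIN_LATENCY_S = 0.175   # round-trip to the brain: ~150-200 ms

def ask_brain(stimulus):
    time.sleep(BRAIN_LATENCY_S)   # slow, deliberate path
    return f"plan a response to {stimulus}"

def local_reflex(stimulus):
    time.sleep(SPINAL_LATENCY_S)  # fast, hardcoded path
    return "withdraw"

def handle_stimulus(stimulus, urgent):
    if urgent:
        action = local_reflex(stimulus)           # edge node acts immediately
        print(f"FYI, brain: handled {stimulus}")  # control plane CC'd after the fact
        return action
    return ask_brain(stimulus)                    # non-urgent: defer to orchestration

handle_stimulus("hot stove", urgent=True)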


The Partition

A spinal cord injury is a permanent network partition. The cable connecting the control plane to the data plane has been cut. Not degraded. Cut.

In software, when a service doesn't get an ACK, it retries. My brain does exactly this. When I try to move my leg, the command goes out, nothing comes back, and the brain tries harder: recruiting more motor units, cranking up neural drive. It's a retry loop that never terminates. The request times out, but the caller keeps calling.
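If you wrote that loop down honestly, it would look something like this. It's a caricature with made-up function names, not neuroscience, and don't actually run it: it never terminates, which is the point.

def send_motor_command(limb, effort):
    pass  # crosses the partition and vanishes

def wait_for_ack(timeout):
    return None  # proprioceptive feedback never arrives

def move_leg():
    effort = 1.0
    while True:  # no MAX_RETRIES, no backoff, no give-up path
        send_motor_command("leg", effort)
        if wait_for_ack(timeout=0.5):
            return
        effort *= 1.2  # recruit more motor units, crank up neural drive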

Meanwhile, below the partition, my spinal cord is still running. It just doesn't have anyone telling it what to do.

Spasticity

In a healthy system, the brain sends inhibitory signals, a damping factor that keeps local reflexes from overreacting. Without that damping, the edge nodes get twitchy. A breeze across my leg, a wrinkle in my sock, and the local neurons interpret the noise as a command. Muscles fire when they shouldn't.

That's spasticity: the biological equivalent of an underdamped control loop oscillating into chaos. The spinal cord is working exactly as designed. It's just that the design assumed supervision that no longer exists.
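A first-order caricature shows the dynamic. The inhibition term below stands in for the brain's damping signal; everything else is invented for the sketch:

import random

def simulate_reflex(inhibition, steps=200):
    response = 0.0
    for _ in range(steps):
        noise = random.uniform(-0.1, 0.1)  # a breeze, a wrinkle in a sock
        # The reflex amplifies input; inhibition damps the running response
        response = response * (1.0 - inhibition) + noise * 5.0
    return abs(response)

print(simulate_reflex(inhibition=0.8))  # healthy: damped, output stays small
print(simulate_reflex(inhibition=0.0))  # SCI: undamped, noise accumulates into big firing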


Autonomic Dysreflexia

The scariest parallel is Autonomic Dysreflexia, or AD. This is what happens when a distributed system has no circuit breaker.

I had an episode last year that I still think about. I was in a meeting (video on, presenting to the team) when I started getting a pounding headache. Then the sweating started. My face flushed. I knew the signs, so I made an excuse and dropped off the call.

My blood pressure was 190/115. Normal is 120/80.

Here's what was happening in the stack: a pair of tweezers I'd missed on my seat was stabbing into my leg. Trivial stimulus, but my spinal cord (isolated from the brain) detected a "fault" and panicked. It triggered a massive sympathetic response, fight-or-flight, trying to fix the problem locally. Blood vessels below my injury clamped down, and pressure spiked.

My brain detected the spike through baroreceptors in my neck and chest. It tried to send calming signals (parasympathetic, "rest and digest," slow everything down) but those signals couldn't get past the injury. They couldn't reach the constricted vessels. So the brain, failing to fix the pressure from above, slowed my heart rate instead. Bradycardia. My heart dropped into the 40s while my blood pressure stayed through the roof.

The system was stuck in a contradiction it couldn't resolve. High pressure, slow heart, no path to stability. This can cause a stroke. People die from this.

I found the tweezers, removed them, and sat there for twenty minutes waiting for my vitals to normalize. The "fix" took two seconds. Finding it took ten minutes of systematic checking: pants, shoes, catheter, skin. I was debugging my own body, checking logs I could barely read because half my sensory telemetry is offline.

The Missing Breaker

In software, we'd wrap this kind of volatile subsystem in a circuit breaker. When something acts anomalously, the breaker trips and sheds load to protect the cluster.

My body doesn't have that. The sympathetic loop has no MAX_RETRIES, no backoff. It will keep hammering until the hardware fails.

This changed how I write code. I used to think of safety limits as the orchestrator's job: the central service decides when to kill a runaway process. Now I build limits into the edge. Every agent, every worker, every Lambda function gets its own kill switch, because sometimes the message from the control plane never arrives.


Code

That experience made me want to write down the patch my nervous system is missing. Here's what it looks like:

import time
from functools import wraps

class CircuitBreakerOpen(Exception):
    pass

class CircuitBreaker:
    def __init__(self, threshold, cooldown_seconds):
        self.threshold = threshold
        self.cooldown = cooldown_seconds
        self.tripped_at = None
        self.state = "CLOSED"

    def __call__(self, func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # If breaker is open, check if cooldown has passed
            if self.state == "OPEN":
                elapsed = time.time() - self.tripped_at
                if elapsed < self.cooldown:
                    raise CircuitBreakerOpen("Still cooling down")
                self.state = "CLOSED"  # Try again

            # Check current load before proceeding
            load = self.get_current_load()
            if load > self.threshold:
                self.state = "OPEN"
                self.tripped_at = time.time()
                raise CircuitBreakerOpen(f"Load {load} exceeds {self.threshold}")

            return func(*args, **kwargs)
        return wrapper

    def get_current_load(self):
        # Stub: plug in your monitoring here.
        # CPU, memory, latency, error rate, whatever matters.
        return 0.0  # placeholder so the sketch runs; replace with real telemetry

The guard clause at the top, if self.state == "OPEN", is the thing my body doesn't have. It prevents new load from piling on when the system is already drowning. My nervous system just keeps constricting blood vessels even when pressure is already well above normal.

The threshold check is the tripwire my body ignores. In production, you pick your number: latency over 2 seconds, error rate over 5%, memory over 90%. My body needed one for blood pressure and doesn't have it.

The cooldown lets things stabilize. Wait for the storm to pass before trying again.
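Using it is one decorator on whatever the volatile call is. The names below are placeholders:

breaker = CircuitBreaker(threshold=0.9, cooldown_seconds=30)

@breaker
def call_flaky_service():
    ...  # the volatile subsystem goes here

try:
    call_flaky_service()
except CircuitBreakerOpen as err:
    print(f"shedding load: {err}")  # the step my body skips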

Since I can't patch my firmware, I act as my own circuit breaker. I watch for warning signs like headache, sweating, goosebumps, and flushing, then manually hunt for the trigger. In software, we can automate this. We should.


Phantom Pain

There's another bug in the system worth mentioning, one that mystified me until I thought about it in terms of caching: phantom limb pain. The sensation of pain in a body part that's been amputated or disconnected.

The brain maintains an internal map of the body called the cortical homunculus. When you disconnect the hardware, the pointer still references the old address.

body_map = {"left_leg": "0x7F3A"}  # stale cortical map: the entry outlives the hardware
AMPLIFIED_NOISE = "pain"

def read_sensor(pointer):
    return None  # hardware gone; every read comes back empty

def check_limb_status(limb_id):
    pointer = body_map.get(limb_id)  # Returns stale address
    signal = read_sensor(pointer)    # Returns null or garbage

    if signal is None:
        return AMPLIFIED_NOISE  # Brain interprets garbage as pain
    return signal  # Healthy path: real telemetry

The brain expects input and gets nothing. So it amplifies background noise, interprets random neural firing as a signal. The cache says the limb exists; reality says otherwise.

The neuroscience is messier than this. Cortical reorganization, peripheral neuromas, and spinal plasticity all play a role. But the cache metaphor captures something real: the internal model has gone stale, and the system is interpreting garbage as meaningful data.

Mirror therapy works by forcing a cache refresh. You trick the brain with visual input, get it to update the map. Manual invalidation. Hacky, but sometimes it works.
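In cache terms, and stretching the metaphor, the trick amounts to a forced overwrite of the stale entry from the sketch above:

def mirror_therapy(limb_id, visual_evidence):
    # Manual invalidation: visual input overrides the stale map entry
    body_map[limb_id] = visual_evidence

mirror_therapy("left_leg", "observed moving")  # refresh the cached model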


Agents

I think about these failure modes constantly now that I'm building agentic systems.

We're creating autonomous agents: software that reasons, uses tools, and executes tasks without human oversight. In biological terms, we're building spinal cords: capable execution engines designed to run without constant supervision from the control plane.

I've already seen the parallels play out. An agent hits a syntax error, tries a fix, gets the same error, tries the same fix again. Loop forever. Like my spinal cord, it lacks the broader context to realize it's stuck. No inhibitory signal. No sensation of failure.

Without something playing the role of a prefrontal cortex (a system that can observe the agent and say "this isn't working, stop"), we get the software equivalent of autonomic dysreflexia. The agent burns through tokens, floods APIs with retries, hallucinates fixes that corrupt the database. All because nothing in the system can feel that it's failing.
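The supervisor doesn't have to be smart to be useful. Even a dumb loop detector gives the agent an inhibitory signal it otherwise lacks. This is a sketch with hypothetical agent hooks, not any particular framework's API:

from collections import Counter

class LoopDetector:
    """Trips when the agent keeps hitting the same error.

    A crude stand-in for a prefrontal cortex: it can't fix anything,
    but it can say "this isn't working, stop."
    """
    def __init__(self, max_repeats=3):
        self.max_repeats = max_repeats
        self.error_counts = Counter()

    def observe(self, error_signature):
        self.error_counts[error_signature] += 1
        if self.error_counts[error_signature] >= self.max_repeats:
            raise RuntimeError(
                f"stuck: saw {error_signature!r} "
                f"{self.max_repeats} times, halting agent"
            )

# In the agent loop (hypothetical):
# detector = LoopDetector()
# detector.observe(str(last_error))  # raises once the agent is clearly stuck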

My injury taught me that pain is information. (I'm speaking as an engineer here, not a philosopher. Whether pain is data in some deeper sense is above my pay grade.) The loss of sensation below my injury isn't a relief. It's blindness. I can't tell if my shoes are too tight or if I'm sitting on something sharp until I've already done damage.

Agents need sensation too. They need awareness of their own state: token budget, recursion depth, confidence levels. And they need something like pain: high-priority signals that fire when they touch dangerous territory. PII exposure. Infinite loops. Unauthorized endpoints. Not just metrics for the dashboard, but signals that actually change behavior.
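Concretely, I picture something like a vitals check before every step. The thresholds and names here are invented, but the shape is the point: cross a limit and behavior changes, immediately.

class AgentPain(Exception):
    """A high-priority signal that changes behavior, not a dashboard metric."""

def check_vitals(tokens_used, token_budget, depth, max_depth, touched_pii):
    # Interoception: the agent's awareness of its own state
    if tokens_used > token_budget:
        raise AgentPain("token budget exhausted")
    if depth > max_depth:
        raise AgentPain("recursion depth exceeded")
    # Nociception: dangerous territory should hurt immediately
    if touched_pii:
        raise AgentPain("touched PII; stop and escalate")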

We keep optimizing for accuracy. We should be optimizing for survival.


The Translation Layer

For those who prefer tables, here's how my medical chart maps to system architecture:

| Condition | System Equivalent | What's Happening |
|---|---|---|
| Spinal Shock | Cold Start | Everything goes dark after the crash. Gradual recovery over weeks as the system reboots. |
| Autonomic Dysreflexia | Retry Storm | Positive feedback loop with no breaker. A local node panics, escalates, and can't be stopped from above. |
| Phantom Pain | Dangling Pointer | Stale cache referencing nonexistent hardware. Null interpreted as signal. |
| Spasticity | Underdamped Oscillation | Missing inhibition causes signal amplification. Small input, huge noisy output. |

What I'm Left With

I used to think of the body as one thing. Now I see a distributed system held together by fragile cables, and I can't unsee it.

My partition is permanent. I can't refactor the code or upgrade the hardware. But I can learn from watching it fail, and I can build systems that fail better than I do.

Every system will eventually partition. The network will go down. The question is what the edge nodes do when the control plane stops responding. Do they have their own limits? Their own breakers? Or do they just keep retrying until they burn out?

I build different systems now. Limits at the edge. Breakers that don't wait for permission. Agents that can feel when something's wrong, not just report metrics to a dashboard nobody's watching.

My legs are offline. The lessons they taught me are not.



Written by Ryan Davis

Systems thinker and accessibility advocate building AI/ML solutions with a focus on agentic workflows. When not coding for non-profits or tinkering with robotics, I geek out over distributed systems design and making technology work for everyone.