
When my father was admitted to the ER with an acute arterial occlusion in his lower limb, the attending vascular team initially attempted a revascularization procedure, but it did not succeed well enough to change their prognosis. They quickly concluded that amputation was the only viable intervention.
The actual rationale, however, was not articulated. There was no differential discussion, no mention of attempts at revascularization, and certainly no shared decision-making. We were simply told the leg was not salvageable.
I wasn’t in the hospital; I was 10,000 kilometers away, working remotely. But as a tech person with a systems mindset, I knew we needed to understand the failure mode before accepting the outcome. So the first tactical move was to push for immediate discharge: not because we were rejecting care, but because discharge would entitle us to the full diagnostic workup, which was otherwise inaccessible. My brother was on-site. He obtained the release, photographed the vascular report and imaging summary, and sent them to me. I OCR’d the scans (this is Germany; you get everything on paper) to extract structured text. That’s when a single phrase appeared that no clinician had communicated verbally: “lack of runoff.”
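That step, scanning extracted report text for phrases worth researching, amounts to a simple keyword filter. A minimal sketch; the phrase list and report snippet here are made up for illustration:

```python
# Hypothetical list of vascular key phrases worth flagging for follow-up research.
KEY_PHRASES = ["lack of runoff", "no distal runoff", "occlusion", "not salvageable"]

def flag_phrases(report_text: str, phrases=KEY_PHRASES) -> list[str]:
    """Return the key phrases that appear in the OCR'd report text."""
    lowered = report_text.lower()
    return [p for p in phrases if p in lowered]

# Example with a made-up snippet of OCR output:
snippet = "Angiography: complete occlusion of the popliteal artery, lack of runoff distally."
print(flag_phrases(snippet))  # ['lack of runoff', 'occlusion']
```

Each flagged phrase then becomes a query: paste it, with context, into the LLM and ask what it implies for treatment options.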
This wasn’t just jargon—it was the diagnostic justification for writing off revascularization. But no one had explained what it meant. ChatGPT did.
I queried ChatGPT with the phrase and context. It returned a detailed explanation: in vascular terms, “lack of runoff” refers to an absence of viable distal vessels capable of receiving flow, making bypass or thrombectomy unlikely to succeed. That insight—delivered in less than a minute—was the missing link. Suddenly, the medical logic made sense. But more importantly, it also highlighted the assumptions baked into that logic: that revascularization wasn’t worth attempting in no-runoff cases. That may be true in general settings, but it is not necessarily true in specialized vascular centers.
Now was the time to shift gears. I asked ChatGPT to help identify advanced clinics with published experience in no-runoff salvage procedures. It gave me a prioritized list, annotated with academic references, treatment modalities, and proximity data. As my brother was already in the car with our father, I began adding constraints: the clinic had to be reachable within four hours, needed both vascular and cardiac diagnostics on-site (due to my father’s comorbidities), and ideally had prior publications on limb salvage under poor prognostic indicators.
ChatGPT handled this query exceptionally well. It cross-referenced vascular publications, current geolocation of my father, and hospital profiles to shortlist a few facilities. Could I have found this clinic myself, through manual research? Probably—but it would have taken days of reading, cross-checking, and verifying. Days we simply did not have. One matched—and we executed. There was no time for a formal transfer. We assumed the risk, my brother managed the transport, and I remotely coordinated the intake process and documentation from abroad.
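In effect, the shortlisting was a multi-criteria filter: hard constraints first, then ranking by published salvage experience. A rough sketch of that logic, with entirely hypothetical clinic data:

```python
from dataclasses import dataclass

@dataclass
class Clinic:
    name: str
    travel_hours: float        # estimated travel time from the patient's location
    vascular_dx: bool          # vascular diagnostics on-site
    cardiac_dx: bool           # cardiac diagnostics on-site (needed for comorbidities)
    salvage_publications: int  # published work on limb salvage under poor prognostic indicators

def shortlist(clinics: list[Clinic], max_hours: float = 4.0) -> list[Clinic]:
    """Keep clinics meeting the hard constraints, ranked by published salvage experience."""
    eligible = [
        c for c in clinics
        if c.travel_hours <= max_hours and c.vascular_dx and c.cardiac_dx
    ]
    return sorted(eligible, key=lambda c: c.salvage_publications, reverse=True)

# Hypothetical example data:
candidates = [
    Clinic("Clinic A", 3.0, True, True, 5),
    Clinic("Clinic B", 2.0, True, False, 8),  # no cardiac diagnostics: excluded
    Clinic("Clinic C", 6.0, True, True, 9),   # too far: excluded
]
print([c.name for c in shortlist(candidates)])  # ['Clinic A']
```

The LLM did the expensive part of this, gathering and cross-referencing the candidate data, but the decision structure was exactly this: filter on hard constraints, rank on evidence.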
The receiving clinic reviewed the earlier diagnosis, challenged its finality, and found marginal viability in one distal segment, enough to attempt a high-risk intervention. It's a strong reminder that challenging each other in science isn't disrespect; it's the mission. But that's a whole other story.
A high-risk revascularization was performed. It succeeded—partially. Blood flow was restored. Most of the foot was saved. One toe wasn’t salvageable.
My father isn’t walking yet. His systemic condition remains unstable. But the limb is still there, and with it, options remain open. Would you trade a toe for a foot?
This wasn’t about replacing doctors with AI. It was about augmenting our capacity to navigate opaque, high-stakes medical decisions through a technical interface. ChatGPT served as a real-time triage copilot: parsing clinical language, identifying viable institutions, and turning multi-criteria constraints into actionable options. We never treated it as infallible; LLMs don't offer guaranteed answers. But that's precisely why we challenged its suggestions, stress-tested the logic, and injected human reasoning and out-of-the-box thinking into the process. It wasn't magic. It was a collaboration. Without it, we would have accepted amputation as inevitable. Instead, we ran a parallel decision-making thread, one that made a tangible difference.
Because we extracted the report. Because we challenged the defaults. Because we added system-level reasoning to a local clinical decision. And because an AI helped us close the knowledge gap faster than the clock could close our options. Querying the LLM boiled down to what I'd call the “Good Practice Loop”:
- Start with a broad prompt (explore possibilities)
- Add constraints (narrow scope)
- Challenge the assumptions (what is it leaving out?)
- Ask for alternatives (what if this condition changes?)
- Cross-check independently (if critical)
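The loop above can be sketched as a reusable prompt sequence. This is an illustrative outline, not a real API integration: `ask_llm` is a placeholder for whatever interface you actually use, and the topic and constraints are examples.

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for your LLM interface (chat UI, API client, etc.)."""
    return f"<answer to: {prompt}>"

def good_practice_loop(topic: str, constraints: list[str]) -> dict[str, str]:
    """One pass of the loop: explore, narrow, challenge, branch."""
    answers = {}
    # 1. Broad prompt: explore possibilities.
    answers["explore"] = ask_llm(f"What options exist for {topic}?")
    # 2. Add constraints: narrow scope.
    narrowed = f"{topic}, given: " + "; ".join(constraints)
    answers["narrow"] = ask_llm(f"Which options remain for {narrowed}?")
    # 3. Challenge the assumptions.
    answers["challenge"] = ask_llm("What assumptions or options is that answer leaving out?")
    # 4. Ask for alternatives under changed conditions.
    answers["alternatives"] = ask_llm("How does the answer change if one constraint is relaxed?")
    # 5. Cross-checking happens outside the model: verify critical claims
    # against primary sources before acting on them.
    return answers

result = good_practice_loop(
    "limb-salvage revascularization with poor runoff",
    ["reachable within 4 hours", "vascular and cardiac diagnostics on-site"],
)
print(list(result))  # ['explore', 'narrow', 'challenge', 'alternatives']
```

The point is the shape, not the code: each step feeds the next, and the final verification step deliberately stays human.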
And now it gets weird, because I believe that in software engineering, we're trading feet for toes every day. We constantly make compromises; there is rarely a truly optimal outcome. We're always exchanging time for scope, or scope for time.
Please keep that in mind the next time you’re designing a product—or trying to save an extremity.