Uttam Varma

March 30, 2026 · 2 min read

Shipping AI for Medical Devices: A PM's Operating Model

How to deliver AI-powered features in regulated medical device software without breaking compliance.

AI · Medical Devices · Compliance · Product Management

The challenge

Bringing AI into regulated medical software is not the same as shipping a consumer feature. Every model output touches patient safety, audit trails, and post-market surveillance obligations.

Yet teams still need to move fast. The question is not whether to ship AI but how to ship it without creating a compliance bottleneck.

A lightweight operating model

After shipping AI-assisted features across multiple Class II devices, I've seen a consistent pattern emerge:

  1. Scope the autonomy boundary early. Define what the model decides versus what it recommends. This single decision shapes your entire regulatory strategy.

  2. Treat the model as a supplier. Your QMS already has supplier controls. Apply them: qualification testing, incoming inspection (evaluation datasets), and periodic re-evaluation.

  3. Version everything. Eval datasets, eval results, and the decision to deploy should all be versioned and traceable back to a specific model version.

  4. Design the human checkpoint. Regulators want to see where a clinician or qualified user can override, review, or reject the AI output. Build this into the UX, not as an afterthought.
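Steps 2 and 3 above can be made concrete with a small amount of tooling. The sketch below shows one way to tie a model version, its frozen eval dataset, its results, and the deployment decision into a single tamper-evident record. All names (`EvalRecord`, the metrics, the version tags) are illustrative assumptions, not part of any particular QMS:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class EvalRecord:
    """One immutable record tying a model version to its eval evidence."""
    model_version: str    # e.g. a registry tag or weights hash
    dataset_sha256: str   # hash of the frozen evaluation dataset
    metrics: dict         # eval results, e.g. {"sensitivity": 0.94}
    deploy_approved: bool # the documented go/no-go decision
    approver: str         # who signed off on deployment

def record_digest(record: EvalRecord) -> str:
    """Deterministic SHA-256 digest so the record itself is tamper-evident."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

record = EvalRecord(
    model_version="triage-v2.3.1",           # hypothetical version tag
    dataset_sha256="a3f1c0...",              # placeholder; hash the real file
    metrics={"sensitivity": 0.94, "specificity": 0.91},
    deploy_approved=True,
    approver="clinical-lead",
)
print(record_digest(record))
```

Storing that digest alongside the release makes the audit question "which evidence supported this deployment?" answerable by lookup rather than archaeology.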

What this means for PMs

Your job is to make the compliance path the default path. If engineers have to go out of their way to be compliant, they won't be. Bake evals and traceability into the tools, not the process documents.
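One way to make the compliance path the default is a release gate in CI that refuses to proceed when eval metrics miss their floors. A minimal sketch, with hypothetical metric names and thresholds chosen purely for illustration:

```python
# Hypothetical release gate: blocks deployment when eval metrics fall
# below agreed floors, so compliance happens without extra effort.
THRESHOLDS = {"sensitivity": 0.92, "specificity": 0.90}

def gate(metrics: dict[str, float]) -> list[str]:
    """Return failure messages; an empty list means the release may proceed."""
    return [
        f"{name}: {metrics.get(name, 0.0):.3f} < {floor:.2f}"
        for name, floor in THRESHOLDS.items()
        if metrics.get(name, 0.0) < floor
    ]

failures = gate({"sensitivity": 0.94, "specificity": 0.89})
for failure in failures:
    print("BLOCKED:", failure)
```

Engineers never consult an SOP; the pipeline simply will not ship a model that lacks passing, versioned eval evidence.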

The best regulatory strategy is one that engineers follow without thinking about it.

Key takeaways

  • Define autonomy boundaries before writing a single line of model code
  • Reuse existing QMS supplier controls for AI/ML components
  • Version model artifacts with the same rigor as software releases
  • Design human oversight into the product, not the SOP