Provider opacity: Why your app shouldn't know which AI model ran
March 12, 2026
When you charge a credit card through Stripe, do you know which acquiring bank processed it? No. And you don't care. The payment either succeeds or fails, and you get a normalized response.
ModelRoute applies the same principle to AI execution.
What is provider opacity?
Provider opacity means your application never knows — and never needs to know — which AI provider executed your request. You interact with canonical model slugs, canonical error codes, and canonical billing. The provider is an implementation detail.
Why it matters
**1. Vendor independence**: You're not locked into any provider. We can route your request to whichever provider offers the best combination of availability, cost, and capability.
**2. Simplified integration**: One API, one set of error codes, one billing system. No per-provider SDKs, no provider-specific error handling.
**3. Resilience**: If a provider goes down, we route to another. Your app never sees the outage.
**4. Clean architecture**: Your application code doesn't contain provider-specific logic. When we add a new provider, your code doesn't change.
How it works in practice
Your API request uses a canonical model slug like `video-generation-standard` — not a provider-specific model name. The response contains a normalized structure with platform-owned file references, not provider URLs.
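A minimal sketch of what that looks like from the client side. The canonical slug `video-generation-standard` comes from the post; the endpoint payload shape, field names (`input`, `output`, `file_ref`, `usage`), and sample values are illustrative assumptions, not ModelRoute's documented schema.

```python
import json

def build_request(prompt: str) -> dict:
    """Build a request keyed by a canonical model slug, never a provider model name."""
    return {
        "model": "video-generation-standard",  # canonical slug (from the post)
        "input": {"prompt": prompt},           # payload shape is assumed
    }

# A normalized response carries a platform-owned file reference,
# never a raw provider URL. Shape assumed for illustration.
sample_response = json.loads("""
{
  "status": "succeeded",
  "output": {"file_ref": "file_abc123"},
  "usage": {"credits": 4}
}
""")

def extract_file_ref(response: dict) -> str:
    """Pull the platform-owned file reference out of a normalized response."""
    if response["status"] != "succeeded":
        raise RuntimeError(response.get("error", "GENERATION_FAILED"))
    return response["output"]["file_ref"]

print(extract_file_ref(sample_response))  # file_abc123
```

Nothing in this client code names a provider, so swapping or adding providers on the routing side leaves it untouched.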
Error codes are canonical: `RATE_LIMITED`, `MODEL_OVERLOADED`, `GENERATION_FAILED`. Not `openai_429` or `anthropic_overloaded`.
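Canonical codes mean your retry logic is one small table instead of a per-provider special case. The three code names come from the post; which codes are retryable is an illustrative assumption, not a documented policy.

```python
# Canonical error codes from the post; retryability is an assumed policy.
RETRYABLE = {"RATE_LIMITED", "MODEL_OVERLOADED"}

def should_retry(error_code: str) -> bool:
    """One check against canonical codes -- no openai_429 / anthropic_overloaded branches."""
    return error_code in RETRYABLE

assert should_retry("RATE_LIMITED")
assert not should_retry("GENERATION_FAILED")
```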
Billing is canonical: one hold, one settlement, one balance. Not separate invoices from three providers.
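The hold/settlement/balance lifecycle can be sketched as a tiny single-balance ledger. The class, method names, and credit amounts below are hypothetical; only the "one hold, one settlement, one balance" shape comes from the post.

```python
class Ledger:
    """Illustrative single-balance ledger: one hold, one settlement per request."""

    def __init__(self, balance: int):
        self.balance = balance  # credits available to spend
        self.held = 0           # credits reserved for in-flight requests

    def hold(self, amount: int) -> None:
        """Reserve credits before execution."""
        if amount > self.balance:
            raise RuntimeError("INSUFFICIENT_BALANCE")
        self.balance -= amount
        self.held += amount

    def settle(self, held: int, actual: int) -> None:
        """Settle at actual cost; refund the unused part of the hold."""
        self.held -= held
        self.balance += held - actual

ledger = Ledger(balance=100)
ledger.hold(10)               # balance 90, held 10
ledger.settle(10, actual=7)   # balance 93, held 0
print(ledger.balance)         # 93
```

However many providers execute behind the scenes, the application only ever sees this one balance move.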
The tradeoff
Provider opacity means you can't pin a specific provider for a specific request. That constraint is intentional: routing flexibility is what makes the resilience and cost guarantees possible. If you need provider-specific features, you should use that provider's API directly.
ModelRoute is for teams that want reliable, normalized AI execution without infrastructure overhead. The provider is our problem. The product is yours.