Why MCP Bet on Dynamic Client Registration (And Why OAuth Proxy Had to Exist)

In FastMCP 2.12 we released official support for OAuth Proxy. This post is about what OAuth Proxy is, what problem it solves, why we implemented it, and how it unlocks Enterprise MCP.

Adam Azzam
September 8, 2025

Tags: FastMCP, MCP, OAuth, Dynamic Client Registration, Security, Authentication

When OAuth Meets AI Agents

You've just built an MCP server that exposes your company's Slack search. The idea is simple: let AI agents help your team find old discussions, documentation, decisions buried in thousands of messages. An AI assistant that actually knows your company's context.

But you can't just expose this to the world. You need authentication.

"Easy," you think. "Everyone on the team has GitHub accounts. I'll use GitHub OAuth."

You open GitHub's OAuth app registration. Name? "Company Slack Search MCP." Homepage? Your server URL. So far so good. Then you hit the redirect URL field.

This is where OAuth sends users after they authenticate. The callback that receives the authentication token. You start typing your server's URL, then pause.

Wait.

  • Your colleague Sarah wants to use this with Claude. That runs at claude.ai/mcp/callback.
  • Tom tests locally with a development agent at localhost:3000/callback.
  • The infra team is experimenting with their own assistant at infra-ai.internal/callback.

For security’s sake, GitHub wants you to list every single redirect URL in advance so it never forwards access credentials to an address it hasn’t vetted. Not patterns, not wildcards. Exact URLs.

You realize the problem: These are AI agents connecting dynamically. You have no idea where they'll be running from. New agents could appear tomorrow running from URLs you've never seen. But GitHub's OAuth system was designed for a world where you know all the players in advance.

You're stuck.

Why OAuth Works the Way It Does

Why does GitHub even need to know redirect URLs in advance?

Before OAuth, sharing data between services meant sharing passwords. Want an app to access your Gmail? Give it your Google password. The security nightmare is obvious. One compromised app means your entire Google account is gone.

OAuth's genius was eliminating password sharing. Instead of giving apps your credentials, you redirect users to the service (like Google), they authenticate there, and Google gives your app a special token that only works for specific permissions.

But this created a new challenge: Google needs to know where to send users back with that token. This is the redirect URI, and it's the root of our entire problem.

Here's how traditional OAuth registration works:

1. Developer registers app with GitHub
   → "My app is SuperCoolApp"
   → "It lives at https://supercool.app/callback"

2. GitHub responds with credentials
   → Client ID: abc123
   → Client Secret: xyz789
   → Allowed redirect: https://supercool.app/callback ONLY

3. Developer hardcodes these credentials into their app

4. When users authenticate:
   → App sends them to GitHub with client_id=abc123
   → GitHub ONLY redirects back to supercool.app/callback
   → Tokens are safe from hijacking

This works perfectly when you know all the parties involved. It falls apart when you don't.
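The handshake above hinges on one detail: the authorization URL the app sends users to carries a redirect_uri, and GitHub only honors it if it exactly matches what was registered. A minimal sketch of building that URL, using the illustrative credentials from the steps above (a real app would also include a random state parameter to prevent CSRF):

```python
from urllib.parse import urlencode

# Illustrative values from the registration steps above, not real credentials.
CLIENT_ID = "abc123"
REGISTERED_REDIRECT = "https://supercool.app/callback"

def github_authorize_url(scope: str = "read:user") -> str:
    """Build the URL the app sends users to. GitHub will only redirect
    back to a redirect_uri that exactly matches the registered one."""
    params = {
        "client_id": CLIENT_ID,
        "redirect_uri": REGISTERED_REDIRECT,
        "scope": scope,
    }
    return "https://github.com/login/oauth/authorize?" + urlencode(params)
```

If the redirect_uri in this URL differs from the registered value, GitHub refuses to complete the flow, which is exactly the safeguard that trips up dynamic agents.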

The AI Agent Authentication Challenge

Back to your Slack search MCP server. The fundamental mismatch is now clear:

  • OAuth's assumption: Apps have fixed, known addresses
  • MCP's reality: AI agents are everywhere, with dynamic addresses

Your server is designed to be discovered and accessed by AI agents dynamically. Any agent, running anywhere, should be able to connect (with user permission, of course). This is MCP's superpower: composability. Any AI can talk to any tool.

When Sarah's Claude instance encounters your MCP server:

  1. Claude has never seen your Slack search server before
  2. Your server has never seen this Claude instance before
  3. They need to establish trust through Sarah's GitHub authorization
  4. Claude needs to receive tokens at claude.ai/mcp/callback
  5. But you never registered that URL with GitHub (how could you have?)

In the traditional OAuth model, making this work would require:

  • You pre-registering claude.ai/mcp/callback with GitHub (and somehow knowing this URL in advance)
  • Also pre-registering Tom's localhost:3000/callback
  • And the infrastructure team's infra-ai.internal/callback
  • And every other possible agent URL that might exist now or in the future

Manual pre-registration destroys the composability that makes MCP powerful: any agent, anywhere, connecting to any tool.

The “Official” Solution: Dynamic Client Registration

The IETF recognized this problem years ago and created a solution: Dynamic Client Registration (DCR), defined in RFC 7591.

Instead of manual pre-registration, DCR allows clients to register themselves programmatically. An agent discovers a new MCP server, sends its metadata (client name, redirect URIs, grant types) to a registration endpoint, and immediately gets back credentials. A client ID and secret it can use for OAuth flows.

It's elegant. It's standardized. It solves the exact problem MCP faces.
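The registration exchange is simple enough to sketch. The metadata fields below come straight from RFC 7591; the endpoint and client name are hypothetical examples:

```python
import json

# The metadata an agent would POST to a server's registration endpoint.
# Field names follow RFC 7591; the values here are hypothetical.
registration_request = {
    "client_name": "Sarah's Claude instance",
    "redirect_uris": ["https://claude.ai/mcp/callback"],
    "grant_types": ["authorization_code"],
    "token_endpoint_auth_method": "client_secret_basic",
}

body = json.dumps(registration_request)
# A successful response (HTTP 201) returns freshly minted credentials, e.g.
# {"client_id": "...", "client_secret": "...", "redirect_uris": [...]},
# which the agent can immediately use in a standard OAuth flow.
```

One POST, one response, and a client that didn't exist a second ago has working OAuth credentials.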

MCP adopted DCR because it perfectly preserves what makes the protocol powerful:

  • Agents discover and connect to new servers without friction
  • Users maintain control through explicit authorization
  • Each connection gets isolated credentials
  • Standard OAuth security properties apply

There’s just one problem: support isn’t ubiquitous. GitHub doesn't support DCR. Neither do Google, Azure, or Discord. Unless you had the good sense to use WorkOS or another forward-thinking IdP that has introduced support for it, you’re out of luck.

Why Some Providers Don't Support DCR

This gap reflects a fundamental disagreement about trust and control.

The Business Model Conflict

When you register an app with GitHub, you're not just getting credentials. You're:

  • Agreeing to their terms of service
  • Subject to their rate limits and pricing
  • Part of their abuse prevention systems
  • Potentially paying for API access

DCR breaks this model. If anyone can register programmatically, how do you enforce terms? Track usage? Prevent abuse? The business relationship that manual registration creates disappears.

The Security Concerns

Dynamic redirect URIs without the right controls are legitimately risky. Most OAuth security vulnerabilities involve redirect URI manipulation: tricking the provider into sending tokens to attacker-controlled URLs.

With pre-registered URLs, providers can:

  • Validate redirects against an explicit allowlist
  • Review high-risk redirect patterns manually
  • Revoke compromised applications immediately
  • Maintain audit trails of who registered what

With DCR, you're trusting your validation logic against every form of attacker creativity. One bypass and tokens get stolen.
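The allowlist check that pre-registration enables is almost trivially simple, which is precisely its appeal. A minimal sketch of what a provider does:

```python
# A minimal sketch of the provider-side check: redirect URIs must match
# a registered value exactly -- no prefixes, no wildcards.
ALLOWED_REDIRECTS = {
    "https://supercool.app/callback",
}

def is_allowed_redirect(uri: str) -> bool:
    """Exact string comparison. Even a trailing slash or an extra query
    parameter is rejected, which is what makes pre-registration safe."""
    return uri in ALLOWED_REDIRECTS
```

There is no clever parsing to bypass: either the string is in the set or it isn't. Dynamic registration trades this simplicity for flexibility.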

The Liability Problem

When a malicious app steals user data through your OAuth system, who's responsible? With manual registration, there's a paper trail: a real developer, possibly a business agreement. With DCR, there might be nothing but an ephemeral client that's already gone.

OAuth providers have been sued over third-party data breaches. Their legal teams understandably prefer the accountability of manual registration.

Bridging the Gap: OAuth Proxy

MCP needs DCR. Not every provider supports it. This incompatibility is why we built the FastMCP OAuth Proxy in FastMCP 2.12.0.

The proxy acts as a translator between two incompatible but valid worldviews. To MCP clients, it presents a DCR-compliant interface. To traditional OAuth providers, it appears as a single registered application.

Remember your stuck Slack search server? With the OAuth proxy, you register your server ONCE with GitHub, using a fixed redirect URL like https://your-server.com/auth/callback. That's the only URL GitHub ever sees.

But when Sarah's Claude connects, when Tom's localhost agent connects, when anyone connects, they each register dynamically with your proxy, not with GitHub.

We maintain a transaction store that maps each dynamic client registration to its callback URL. When a client registers, we store this mapping and return our fixed upstream credentials. During authorization, we handle two separate PKCE (Proof Key for Code Exchange, an OAuth extension that binds the authorization code to the client using a one-time code verifier/challenge to prevent interception) flows: one with the client, one with the provider, maintaining security at both layers.
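The core of that mapping fits in a few lines. This is a hypothetical sketch, not FastMCP's actual internals; the class and method names are invented for illustration:

```python
import secrets

class TransactionStore:
    """Hypothetical sketch of the proxy's bookkeeping: each dynamic
    registration gets its own client_id mapped to its callback URL,
    while the proxy presents one fixed redirect URL upstream."""

    def __init__(self) -> None:
        self._clients: dict[str, str] = {}  # client_id -> redirect_uri

    def register(self, redirect_uri: str) -> str:
        """Record a dynamic client and return its freshly minted client_id."""
        client_id = secrets.token_urlsafe(16)
        self._clients[client_id] = redirect_uri
        return client_id

    def callback_for(self, client_id: str) -> str:
        """Look up where to forward tokens after the upstream flow completes."""
        return self._clients[client_id]
```

GitHub only ever sees the proxy's one registered callback; the store remembers where each token ultimately needs to go.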

Dual-PKCE ensures that even though the proxy sits in the middle, it can't impersonate clients or steal tokens. Each layer validates independently. The client proves itself to the proxy, the proxy proves itself to GitHub. End-to-end security is maintained.
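"Dual PKCE" just means the proxy runs the standard verifier/challenge handshake twice, once per leg. The mechanics of one handshake, following RFC 7636's S256 method, can be sketched as:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE verifier/challenge pair (S256 method, RFC 7636).
    The client keeps the verifier secret and sends only the challenge."""
    verifier = secrets.token_urlsafe(32)
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

def verify_pkce(verifier: str, challenge: str) -> bool:
    """What the server checks when the authorization code is redeemed:
    only the party holding the original verifier can pass."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii") == challenge
```

The proxy validates one such pair against the MCP client and negotiates a separate pair with the upstream provider, so an intercepted authorization code is useless on either leg.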

Provider differences complicated this. GitHub uses opaque tokens, Google uses JWTs with JWKS endpoints, Azure requires specific authentication methods. We built provider-specific implementations to handle these differences automatically.

We also had to account for the various environments where MCP clients run: random localhost ports during development, fixed URLs like claude.ai in production, and internal enterprise URLs. The proxy accepts them all while maintaining a single registered application with the upstream provider.

Your Slack search server can now be accessed by any AI agent, using GitHub authentication, without you knowing any agent URLs in advance.

The Authentication Evolution

The MCP specification itself has always been idealistic about authentication. Early versions required developers to implement full authorization servers themselves. A massive undertaking just to add auth to your MCP server. The spec later added support for remote authorization servers, allowing developers to delegate to existing auth infrastructure, but this still requires those servers to support DCR.

Simple API keys? The MCP spec doesn't have native support for them. They only work to the extent you trust a remote server to handle them.

FastMCP's approach has been pragmatic: we implement what developers actually need. While the MCP spec beautifully defines how authentication should work in an ideal world where everyone supports DCR, we built the OAuth proxy to work with the providers developers actually use. GitHub, Google, Azure, and others that will likely never support DCR.

The IETF engineers who designed DCR in 2015 were remarkably prescient. They anticipated a world of dynamic, ephemeral clients long before AI agents existed. MCP might be the first major protocol to fully realize DCR's vision. The challenge isn't the protocol design, it's that most of the OAuth world hasn't caught up to what the IETF saw coming.

What This Means for Your MCP Server

Building an MCP server today, you have three paths:

Use a DCR-compliant provider (rare but ideal) - Some leading providers like WorkOS AuthKit actually support DCR, allowing true dynamic registration.

Use the OAuth proxy for traditional providers (sometimes the pragmatic choice) - FastMCP's OAuth proxy handles the impedance mismatch with providers like GitHub:

from fastmcp import FastMCP
from fastmcp.server.auth.providers.github import GitHubProvider

auth = GitHubProvider(
    client_id="your-github-app-id",
    client_secret="your-github-app-secret",
    base_url="https://your-server.com"
)
mcp = FastMCP(name="My Server", auth=auth)

Roll your own authorization server (maximum control) - Build your own auth system with full control over the registration and validation process.

The OAuth proxy exists because the second option provides the best balance for most developers. Standard OAuth security, major provider compatibility, minimal complexity.

Where This Leaves Us

The DCR gap reveals something profound about protocol evolution. The IETF engineers who designed DCR correctly identified that dynamic systems need dynamic authentication. The platform teams who rejected it correctly identified that their business models need controlled relationships.

The OAuth proxy bridges these different but valid requirements. MCP's bet on DCR remains the right architectural choice. As the ecosystem matures, as providers recognize the need for dynamic registration, as security models evolve to handle ephemeral clients, we'll need this proxy less.

But today, it's the essential adapter that lets MCP's elegant architecture work with current providers.

Thanks for reading! Follow us for more updates.