Azure OpenAI gives you the same OpenAI models but deployed within your Azure environment, with enterprise compliance controls, private networking, and Azure-managed encryption.

Setup

You need an Azure OpenAI resource with at least one model deployment, plus the resource's API key and endpoint:
export AZURE_OPENAI_API_KEY=...
export AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com
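To confirm the key and endpoint work before wiring them into the config, you can call a deployment directly. This is a sketch: `my-gpt4o-deployment` is a placeholder for one of your own deployment names, and the `api-version` value shown is an example that may differ for your resource.

```shell
# Hypothetical sanity check against your Azure OpenAI resource.
# Substitute my-gpt4o-deployment with a deployment that exists in your resource;
# adjust api-version to one supported by your resource if needed.
curl "$AZURE_OPENAI_ENDPOINT/openai/deployments/my-gpt4o-deployment/chat/completions?api-version=2024-06-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

A JSON response with a `choices` array indicates the key, endpoint, and deployment name are all correct; a 404 usually means the deployment name is wrong.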

Config

{
  "providers": {
    "azure-openai-responses": {
      "apiKey": "${AZURE_OPENAI_API_KEY}",
      "baseUrl": "${AZURE_OPENAI_ENDPOINT}"
    }
  }
}

Use it

{
  "agents": [
    { "name": "coder", "model": "azure-openai-responses:gpt-4o" }
  ]
}
The model ID must match the deployment name in your Azure OpenAI resource, not the base model name. If you deployed GPT-4o as my-gpt4o-deployment, use that as the model ID.
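For example, assuming a hypothetical deployment named `my-gpt4o-deployment`, the agent config would reference that name rather than `gpt-4o`:

```json
{
  "agents": [
    { "name": "coder", "model": "azure-openai-responses:my-gpt4o-deployment" }
  ]
}
```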

Models

Same models as OpenAI — availability depends on which models you’ve deployed in your Azure resource.

Features

Feature            | Supported
Streaming          | Yes
Tool use           | Yes
Vision (images)    | Yes (GPT-4o deployments)
Private networking | Yes
Azure AD auth      | Yes
Content filtering  | Yes

Provider Details

Provider ID   | azure-openai-responses
Env variables | AZURE_OPENAI_API_KEY + AZURE_OPENAI_ENDPOINT
API type      | Azure OpenAI Responses API
Base URL      | Required (your Azure resource endpoint)

When to Use Azure OpenAI vs. OpenAI Direct

                  | OpenAI Direct | Azure OpenAI
Auth              | API key       | API key or Azure AD
Compliance        | SOC 2         | SOC 2, HIPAA, FedRAMP
Data residency    | No control    | Region-specific
Networking        | Public        | Private endpoints, VNet
Content filtering | Basic         | Configurable
Best for          | General use   | Enterprise / regulated industries

Notes

  • Each model must be deployed separately in the Azure portal before it can be used.
  • Azure OpenAI deployments have per-minute rate limits configured at the resource level.
  • The baseUrl must include the full endpoint URL (e.g. https://my-resource.openai.azure.com), not just the resource name.