Stop Paying the Anthropic API Tax: Route Claude Code Through Your Existing Cloud

The same Claude. Your cloud. Your billing. Your compliance boundary. That’s the enterprise AI setup nobody told you was already possible.

Step-by-step guide to running Claude Code CLI and the VS Code extension through AWS Bedrock, Google Vertex AI, and Microsoft Foundry — with real config, real env vars, and no made-up packages.

There’s a lot of excitement — and a lot of misinformation — floating around about “connecting Claude to your cloud provider.” Before diving into setup guides, let’s clear up the most important thing first.

Standard Claude Desktop (the chat interface at claude.ai and the desktop app) is hardcoded to Anthropic’s API. You cannot swap its underlying inference backend. There’s no config file, no proxy trick, no npm package that changes that. Even a GitHub issue filed against Anthropic’s own codebase makes it plain: “Claude Desktop/Cowork has no equivalent configuration — it requires connectivity to Anthropic’s own infrastructure for auth and inference.”

But here’s what is true, and it’s genuinely exciting for developers and enterprise teams:

Claude Code — Anthropic’s agentic coding tool, available as a CLI and VS Code extension — has first-class, official support for AWS Bedrock, Google Vertex AI, and Microsoft Azure AI Foundry. No hacks. No fake packages. Real documentation, real env vars, real enterprise billing.

And for organizations that need the full Claude Desktop experience (projects, file uploads, memory) routed through their own cloud? Claude Cowork on Amazon Bedrock is exactly that product.

This article covers all of it — correctly.

Why This Matters: The Real Problem Statement

💸 You’re Billing Twice

Many enterprise teams already have:

  • AWS Bedrock capacity under an AWS Enterprise Discount Program (EDP)
  • Google Vertex AI credits from committed GCP use
  • Azure AI Foundry access under an existing Microsoft agreement

Using Claude Code or Claude Desktop natively means your token spend goes to Anthropic’s API — outside those contracts, outside your cloud billing, and outside your existing compliance controls.

The opportunity: route your AI coding workloads through the cloud infrastructure you already pay for.

🔒 Data Residency Is a Hard Requirement

Teams in financial services, healthcare, government, and defense frequently operate under policies that prohibit sending code or data to any third-party API that isn’t already in their approved cloud environment. Anthropic’s direct API may not qualify. Your AWS VPC or GCP project almost certainly does.

Routing Claude Code through Bedrock or Vertex means:

  • Inference stays inside your cloud account
  • Your VPC, security groups, and network policies control egress
  • Audit logs land in CloudTrail, Cloud Logging, or Azure Monitor — tools you already have

🧩 Model Flexibility for the Right Tasks

AWS Bedrock, Vertex AI, and Azure Foundry all support model routing. You can configure Claude Haiku for quick autocomplete, Claude Sonnet for complex refactoring, and let the platform route accordingly — all on a single bill.
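The idea can be sketched client-side as a simple task-to-model map. This is an illustration only; the model IDs are the Bedrock inference-profile examples used later in this article, and should be treated as placeholders for whatever your account has enabled:

```python
# Illustrative task-based model routing. The IDs below are example Bedrock
# cross-region inference profile IDs -- substitute the ones enabled in
# your own account.
ROUTING_TABLE = {
    "autocomplete": "us.anthropic.claude-haiku-4-5-20251001-v1:0",
    "docstring":    "us.anthropic.claude-haiku-4-5-20251001-v1:0",
    "refactor":     "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
    "architecture": "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
}

def pick_model(task_type: str) -> str:
    """Return the model ID for a task, defaulting to the cheaper tier."""
    return ROUTING_TABLE.get(task_type, ROUTING_TABLE["autocomplete"])
```

Platforms like Foundry’s Model Router do this server-side; the sketch just shows the shape of the decision.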

Understanding the Ecosystem: Three Real Paths

Before jumping to config, understand which Anthropic product fits your situation:

| Product | Completion backend | Notes |
| --- | --- | --- |
| Claude Desktop (chat UI) | Anthropic API only | No swap possible today |
| Claude Cowork | Bedrock, Vertex, Azure Foundry, or LLM gateway | Enterprise, MDM-deployable |
| Claude Code CLI | Bedrock, Vertex, Azure Foundry, or Anthropic direct | Configured via env vars |
| Claude Code VS Code ext. | Same as CLI | Needs an extra login config step |
| Anthropic SDK in your app | Bedrock, Vertex via SDK | For custom application development |

This article focuses on Claude Code (the most accessible path for developers) and Claude Cowork (the enterprise desktop path).

How It Actually Works: The Correct Architecture

When Claude Code is configured for a cloud provider, the flow is clean and direct — no local proxy required:

┌───────────────────────────┐
│ Claude Code CLI / VS Code │ ← Developer interface (unchanged)
│ Extension │
└────────────┬──────────────┘
│ Direct HTTPS
│ Official cloud SDK + SigV4 / ADC / az login
┌────────┼──────────┐
▼ ▼ ▼
AWS Google Microsoft
Bedrock Vertex AI AI Foundry

Claude Code talks directly to your cloud provider’s inference endpoint using your existing credentials — the same credential chain your other AWS/GCP/Azure workloads already use.

A note on MCP: Model Context Protocol (MCP) servers in Claude Code and Claude Desktop are for giving the AI tools — access to your file system, databases, APIs, code execution, and external services. AWS, Google, and Microsoft all publish MCP servers for their cloud services (S3, BigQuery, Azure DevOps, etc.), which make Claude smarter and more capable. But MCP servers are tools the AI calls, not backends that run the AI. Do not confuse the two.
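To make the distinction concrete, wiring in an MCP tool looks roughly like this in a project-level .mcp.json (a sketch using the reference filesystem server from the modelcontextprotocol project; the path is a placeholder):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
    }
  }
}
```

Nothing in this file touches inference routing; it only hands the model a tool it can call during a session.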

Setting It Up: Three Official Paths

Prerequisites (All Three)

```bash
# Install Claude Code CLI
npm install -g @anthropic-ai/claude-code

# Verify
claude --version
```

Node.js 18+ is required. The VS Code extension bundles the CLI — install it from the VS Code Marketplace by searching “Claude Code” (official Anthropic extension, 2M+ installs). It also works in Cursor, Windsurf, and other VS Code forks.

Path 1: AWS Bedrock

AWS Bedrock hosts Claude Sonnet, Claude Haiku, Claude Opus, and other models. This is the most common enterprise path for AWS-first organizations.

Step 1: IAM Permissions

Your IAM user or role needs these permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
        "bedrock:ListFoundationModels",
        "bedrock:ListInferenceProfiles"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "aws-marketplace:Subscribe",
        "aws-marketplace:ViewSubscriptions"
      ],
      "Resource": "*"
    }
  ]
}
```

Step 2: Complete the Anthropic First Time Use (FTU) Form

This is a one-time step per AWS account. If you use AWS Organizations, submit it once from the management account and it propagates to all child accounts automatically.

  1. AWS Console → Amazon Bedrock → Model catalog
  2. Select any Anthropic model → complete the use-case form
  3. Access is granted immediately after submission

Critical: As of late 2025, AWS auto-enables serverless foundation models by default — but Anthropic models specifically still require this FTU form. Without it, API calls may succeed initially, then fail with a 403 error after ~15 minutes.

Step 3: Configure AWS Credentials

```bash
# Option A: Interactive setup (recommended for individuals)
aws configure
# Prompts for: Access Key, Secret Key, Region (e.g. us-east-1), Output format

# Option B: Named profile for team environments
aws configure --profile bedrock-prod
```

For production, use IAM roles, AWS SSO (Identity Center), or instance roles rather than long-lived access keys.
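For the Identity Center route, the profile in ~/.aws/config commonly looks like the following. Every value here is a placeholder for your own SSO setup, and newer CLI versions prefer sso-session blocks, so treat this as one possible shape rather than the canonical one:

```ini
[profile bedrock-prod]
sso_start_url  = https://my-org.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 123456789012
sso_role_name  = BedrockDeveloper
region         = us-east-1
```

With that in place, `aws sso login --profile bedrock-prod` mints short-lived credentials instead of long-lived keys.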

Step 4: Configure Claude Code

Edit (or create) ~/.claude/settings.json:

```json
{
  "env": {
    "CLAUDE_CODE_USE_BEDROCK": "1",
    "AWS_REGION": "us-east-1",
    "AWS_PROFILE": "default"
  }
}
```

To pin specific model versions — strongly recommended for team deployments to avoid surprise upgrades when Anthropic releases new model versions on Bedrock:

```json
{
  "env": {
    "CLAUDE_CODE_USE_BEDROCK": "1",
    "AWS_REGION": "us-east-1",
    "AWS_PROFILE": "bedrock-prod",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "us.anthropic.claude-sonnet-4-5-20250929-v1:0",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "us.anthropic.claude-haiku-4-5-20251001-v1:0"
  }
}
```

Use cross-region inference profile IDs (the us. prefix) rather than base model IDs. On-demand throughput requires profile IDs — base model IDs will fail with "On-demand throughput isn't supported."
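For scripted onboarding, a helper like the following can merge these env vars into an existing ~/.claude/settings.json without clobbering unrelated keys. This is an illustrative sketch, not an official tool, and the env values are the same placeholders used above:

```python
import json
from pathlib import Path

# Example env vars to enforce -- substitute your own region, profile, and IDs.
BEDROCK_ENV = {
    "CLAUDE_CODE_USE_BEDROCK": "1",
    "AWS_REGION": "us-east-1",
    "AWS_PROFILE": "bedrock-prod",
}

def merge_settings(path: Path, env_updates: dict) -> dict:
    """Merge env vars into a Claude Code settings file, preserving other keys."""
    settings = json.loads(path.read_text()) if path.exists() else {}
    settings.setdefault("env", {}).update(env_updates)
    path.write_text(json.dumps(settings, indent=2))
    return settings
```

Running it against a fresh machine creates the file; running it against an existing one only touches the keys it owns.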

Step 5: Run Claude Code

```bash
claude
# At the login prompt: select "3rd-party platform" → "Amazon Bedrock"
# Or skip the wizard entirely — settings.json already has your config
```

Verify everything is working:

/status

VS Code Extension Setup:

Open VS Code settings (Cmd+, / Ctrl+,) and add to settings.json:

```json
{
  "claudeCode.environmentVariables": [
    { "name": "CLAUDE_CODE_USE_BEDROCK", "value": "1" },
    { "name": "AWS_REGION", "value": "us-east-1" },
    { "name": "AWS_PROFILE", "value": "bedrock-prod" }
  ],
  "claudeCode.disableLoginPrompt": true
}
```

The disableLoginPrompt flag is critical — without it, the VS Code extension shows the Anthropic sign-in screen even with Bedrock fully configured in ~/.claude/settings.json.

Path 2: Google Vertex AI

Google’s Vertex AI hosts Claude models through the Model Garden, integrated with standard GCP IAM, billing, and audit logging.

Step 1: GCP Prerequisites

```bash
# Authenticate your account
gcloud auth login
gcloud auth application-default login

# Set your project
gcloud config set project YOUR_PROJECT_ID

# Enable Vertex AI API
gcloud services enable aiplatform.googleapis.com --project=YOUR_PROJECT_ID
```

Your account needs the roles/aiplatform.user IAM role. This grants:

  • aiplatform.endpoints.predict — model invocation
  • aiplatform.endpoints.computeTokens — token counting (required by Claude Code)

Step 2: Enable Claude Models in the Model Garden

Navigate to Vertex AI → Model Garden in the GCP Console. Find and enable the Claude models you need (Sonnet, Haiku, Opus). Enable all three to give Claude Code flexibility across task types.

Step 3: Use the Setup Wizard (Recommended)

The wizard requires Claude Code v2.1.98 or later:

```bash
claude
# Select: "3rd-party platform" → "Google Vertex AI"
# The wizard detects your credentials, project, region, and available models
# You can pin model versions at this step
```

Run /setup-vertex at any time to reconfigure.

Step 4: Or Configure Manually via ~/.claude/settings.json

```json
{
  "env": {
    "CLAUDE_CODE_USE_VERTEX": "1",
    "ANTHROPIC_VERTEX_PROJECT_ID": "your-gcp-project-id",
    "CLOUD_ML_REGION": "us-central1",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-sonnet-4@20250514",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-3-5-haiku@20241022"
  }
}
```

VS Code Extension Setup:

```json
{
  "claudeCode.environmentVariables": [
    { "name": "CLAUDE_CODE_USE_VERTEX", "value": "1" },
    { "name": "ANTHROPIC_VERTEX_PROJECT_ID", "value": "your-gcp-project-id" },
    { "name": "CLOUD_ML_REGION", "value": "us-central1" }
  ],
  "claudeCode.disableLoginPrompt": true
}
```

Path 3: Microsoft Azure AI Foundry

Azure AI Foundry (Microsoft Foundry) is the Azure-hosted path, with tight Active Directory and Azure RBAC integration. It’s the natural choice for Microsoft-first organizations.

Step 1: Deploy Claude Models in Foundry

In the Microsoft Foundry Portal:

  1. Go to Discover → Models → Search “Claude”
  2. Select your model (Sonnet 4.5, Haiku 4.5, or Opus 4.5)
  3. Click Deploy → Default settings
  4. From the Details tab, note your Target URI and API Key

Foundry Model Router (Optional but Recommended)

Foundry’s Model Router intelligently dispatches each prompt to the best underlying model based on query complexity and cost. It currently supports Claude Haiku 4.5, Sonnet 4.5, and Opus 4.1 alongside GPT, Llama, and other models — giving you automatic cost optimization from a single endpoint.

Step 2: Authenticate

```bash
az login
```

Claude Code uses your Azure CLI credentials automatically when Foundry mode is enabled.

Step 3: Configure Claude Code

```bash
# Bash / macOS / Linux / WSL
export CLAUDE_CODE_USE_FOUNDRY=1
export ANTHROPIC_FOUNDRY_RESOURCE=your-resource-name

# Or use the full base URL instead:
# export ANTHROPIC_FOUNDRY_BASE_URL=https://your-resource.services.ai.azure.com

# Optional: pin deployment names to match your Foundry deployments
export ANTHROPIC_DEFAULT_SONNET_MODEL="claude-sonnet-4-5"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="claude-haiku-4-5"
export ANTHROPIC_DEFAULT_OPUS_MODEL="claude-opus-4-5"
```
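Since the resource name and the full base URL are interchangeable, the URL can be derived mechanically. A hypothetical helper, with the `.services.ai.azure.com` suffix taken from the base-URL example above:

```python
def foundry_base_url(resource_name: str) -> str:
    """Build the Foundry endpoint URL from a bare resource name.

    Mirrors the ANTHROPIC_FOUNDRY_BASE_URL form shown above; the
    '.services.ai.azure.com' suffix is assumed from that example.
    """
    if not resource_name or "." in resource_name or "/" in resource_name:
        raise ValueError("expected a bare resource name, not a URL")
    return f"https://{resource_name}.services.ai.azure.com"
```

Useful in provisioning scripts that accept either form and need to normalize before writing settings.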

Persist in ~/.claude/settings.json:

```json
{
  "env": {
    "CLAUDE_CODE_USE_FOUNDRY": "1",
    "ANTHROPIC_FOUNDRY_RESOURCE": "your-resource-name",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-sonnet-4-5",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-haiku-4-5"
  }
}
```

PowerShell (Windows):

```powershell
$env:CLAUDE_CODE_USE_FOUNDRY = "1"
$env:ANTHROPIC_FOUNDRY_RESOURCE = "your-resource-name"
$env:ANTHROPIC_DEFAULT_SONNET_MODEL = "claude-sonnet-4-5"
```

VS Code Extension Setup:

```json
{
  "claudeCode.environmentVariables": [
    { "name": "CLAUDE_CODE_USE_FOUNDRY", "value": "1" },
    { "name": "ANTHROPIC_FOUNDRY_RESOURCE", "value": "your-resource-name" }
  ],
  "claudeCode.disableLoginPrompt": true
}
```

Bonus Path: Claude Cowork on Amazon Bedrock (Enterprise Desktop)

If your organization wants the full Claude Desktop experience — projects, memory, artifact generation, file uploads — but routed entirely through AWS Bedrock, Claude Cowork on Amazon Bedrock is the answer.

Available on AWS Marketplace, Cowork routes model inference exclusively through Bedrock in your AWS account:

  • No Anthropic-side billing — pure consumption-based AWS pricing
  • Full data residency — prompts, files, and responses go to Bedrock, never Anthropic’s infrastructure
  • Anthropic receives only aggregate telemetry (token counts, model ID, error codes, anonymous device ID) — configurable off
  • MDM-deployable with enterprise policy controls
  • Also supports Vertex AI, Azure AI Foundry, and custom LLM gateways as inference backends

Trade-off: Some Claude Desktop features require Anthropic-hosted inference and are unavailable in Cowork’s cloud-provider mode: the Chat tab in standard mode, Computer Use, and the Skills Marketplace. The research, document analysis, file processing, and project capabilities are all present.

The Monetary Math: Is It Worth It?

Let’s run approximate numbers for a 20-developer engineering team.

Assumptions: 6 hours of active AI coding use per developer per day, ~300K tokens/developer/day (heavy use), total ~180M tokens/month.

| Provider | Model | Input (per 1M tokens) | Output (per 1M tokens) |
| --- | --- | --- | --- |
| Anthropic Direct | Claude Sonnet 4.5 | ~$3.00 | ~$15.00 |
| AWS Bedrock | Claude Sonnet 4.5 | ~$3.00 | ~$15.00 |
| AWS Bedrock | Claude Haiku 4.5 | ~$0.25 | ~$1.25 |
| Google Vertex AI | Claude Sonnet 4.5 | ~$3.00 | ~$15.00 |
| Azure AI Foundry | Claude Sonnet 4.5 | varies by agreement | varies |

⚠️ Pricing changes frequently. Verify current rates at AWS Bedrock Pricing, Google Cloud Pricing, and Azure AI Foundry pricing pages before making decisions. Numbers above are approximate and for illustration only.

Where the actual savings come from:

Base token prices for Claude on Bedrock and Vertex are similar to Anthropic direct. The real financial leverage is:

  1. AWS Enterprise Discount Programs (EDP) — 20–40% discounts on committed spend apply to Bedrock token consumption
  2. Cross-model optimization — a mixed Sonnet/Haiku strategy routes simple tasks (autocomplete, docstrings) to Haiku (~12x cheaper) and complex work (architecture, debugging) to Sonnet
  3. Consolidated billing — AI spend flows into existing cloud cost centers, budgets, and chargeback structures instead of a separate Anthropic invoice
  4. No double-billing — teams running both Anthropic Pro/Team seats and cloud AI credits are paying for capacity they overlap on

A thoughtful mixed-model strategy with existing EDP discounts can realistically reduce per-team AI spend by 40–60% versus unoptimized direct API usage.
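The arithmetic behind that estimate can be made explicit. Here is a back-of-envelope model using the illustrative rates above, with an assumed 75/25 input/output split, a 50% Haiku share, and a 20% committed-spend discount (all assumptions for illustration, not benchmarks):

```python
# Back-of-envelope cost model using the illustrative prices above.
# All rates are approximate USD per 1M tokens; verify current pricing.
SONNET = {"input": 3.00, "output": 15.00}
HAIKU = {"input": 0.25, "output": 1.25}

def monthly_cost(m_in: float, m_out: float, rates: dict,
                 discount: float = 0.0) -> float:
    """USD cost for m_in / m_out million tokens at the given per-1M rates."""
    return (m_in * rates["input"] + m_out * rates["output"]) * (1.0 - discount)

# 180M tokens/month, assumed 75% input / 25% output (an assumption)
all_sonnet = monthly_cost(135, 45, SONNET)

# Mixed strategy: assume half the traffic is simple enough for Haiku
haiku_share = 0.5
mixed = (monthly_cost(135 * (1 - haiku_share), 45 * (1 - haiku_share), SONNET)
         + monthly_cost(135 * haiku_share, 45 * haiku_share, HAIKU))

# Layer a 20% EDP-style committed-spend discount on top
mixed_edp = mixed * (1 - 0.20)
```

With these assumptions, the mixed, discounted bill lands a bit under 60% below the all-Sonnet baseline, consistent with the 40–60% range above and highly sensitive to the Haiku share and discount rate you actually negotiate.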

Pros and Cons: The Honest Assessment

✅ Pros

Full Data Governance

  • When using Bedrock, Vertex, or Foundry: prompts, files, tool inputs/outputs, and model responses are sent to your cloud provider — Anthropic does not see this traffic
  • Full audit trail via AWS CloudTrail, GCP Cloud Logging, or Azure Monitor
  • Meets SOC 2, HIPAA, FedRAMP compliance frameworks your cloud provider already covers

Cost Leverage

  • Apply enterprise cloud discounts to Claude inference spend
  • Mix Sonnet and Haiku by task complexity to optimize cost
  • Centralize AI billing under existing cloud cost management

Vendor Independence

  • Your developer tooling doesn’t hard-depend on Anthropic’s direct API availability
  • Pre-configured fallback paths if one provider has an outage
  • Negotiating leverage at contract renewal time

Future-Proof Upgrades

  • Update one env var to point to a new model version
  • Developer workflow, IDE setup, and team habits stay unchanged
  • Model upgrades become an ops task, not a product migration project

❌ Cons

Claude Code Only Runs Claude Models

  • Claude Code routes to Claude models on your chosen cloud — it doesn’t let you swap in GPT-4o or Gemini as the agentic coding assistant
  • For non-Claude models as the coding agent, you need a different tool (Cursor, Continue.dev, GitHub Copilot)

Standard Claude Desktop Has No Bedrock Mode

  • The familiar Claude Desktop chat interface stays on Anthropic’s API
  • Claude Cowork (a separate product) fills this gap for enterprise deployments, but requires AWS Marketplace setup

Some Cowork Features Are Unavailable in Cloud-Provider Mode

  • Computer Use, the Chat tab in standard mode, and the Skills Marketplace require Anthropic-hosted inference
  • Cowork on Bedrock covers projects, research, file analysis, and artifacts — but not the full feature set

Setup Overhead

  • IAM policies, FTU forms, model enablement, env var management, VS Code config — meaningful engineering time
  • Justified at team or enterprise scale; for individual developers, a Claude Pro subscription is often simpler

Region Availability

  • Not all Claude models are available in all AWS regions or GCP locations
  • Cross-region inference profiles help but add latency for geographically distributed teams

Common Mistakes to Avoid

❌ Using base model IDs instead of inference profile IDs on Bedrock

```bash
# Wrong ❌ — will fail with "On-demand throughput isn't supported"
ANTHROPIC_DEFAULT_SONNET_MODEL="anthropic.claude-3-5-sonnet-20241022-v2:0"

# Right ✅ — cross-region inference profile ID
ANTHROPIC_DEFAULT_SONNET_MODEL="us.anthropic.claude-sonnet-4-5-20250929-v1:0"
```

Get the correct IDs:

```bash
aws bedrock list-inference-profiles --region us-east-1
```
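For CI or onboarding scripts, a cheap sanity check can catch base model IDs before they land in a developer's settings. This sketch assumes the geography prefixes (us., eu., apac.) used in Bedrock's cross-region profile naming; adjust the pattern if your account uses others:

```python
import re

# Cross-region inference profile IDs carry a geography prefix
# (e.g. "us.", "eu.", "apac.") ahead of the base model ID.
PROFILE_ID = re.compile(r"^(us|eu|apac)\.anthropic\.[\w-]+:\d+$")

def is_inference_profile_id(model_id: str) -> bool:
    """True for cross-region profile IDs, False for bare base model IDs."""
    return bool(PROFILE_ID.match(model_id))
```

Run it over every ANTHROPIC_DEFAULT_*_MODEL value in a settings template and fail the build on any bare ID.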

❌ Skipping the Anthropic FTU form on Bedrock

Models are auto-enabled by default on Bedrock — but Anthropic models specifically require a one-time use-case form. Without it, your first calls may work briefly, then fail with a 403 error. Complete the form before testing.

❌ Not setting disableLoginPrompt in the VS Code extension

The Claude Code VS Code extension has its own auth state, separate from ~/.claude/settings.json. Even with Bedrock fully configured in your settings file, the extension will show the Anthropic login screen unless you set "claudeCode.disableLoginPrompt": true in VS Code settings.

❌ Missing IAM Marketplace permissions

Many teams configure bedrock:InvokeModel but miss:

aws-marketplace:Subscribe
aws-marketplace:ViewSubscriptions

These are required to activate Anthropic model access and cause silent failures when absent.

❌ Not pinning model versions for team deployments

Without pinning, model aliases resolve to the latest available version on Bedrock, which may not immediately match what Anthropic just released. This creates inconsistent behavior across a team until Bedrock catches up. Pin specific IDs and update deliberately.

❌ Using long-lived IAM access keys in shared environments

Use AWS SSO (Identity Center), IAM roles, or instance roles for production and shared environments. Reserve access key + secret pairs for individual local dev only.

Best Practices for Enterprise Rollout

1. Submit the FTU Form at the Organization Level

Use the management account and bedrock:PutUseCaseForModelAccess API (or the AWS Organizations one-time form) so access propagates automatically to all child accounts — no per-account setup required.

2. Version-Control a Canonical settings.json Template

Create a standard ~/.claude/settings.json that's distributed via MDM, developer onboarding scripts, or a shared dotfiles repository. This eliminates per-developer config drift.
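A small onboarding check can then flag drift between a developer's local file and that canonical template. An illustrative sketch, not an official tool:

```python
import json
from pathlib import Path

def env_drift(canonical: dict, local_path: Path) -> dict:
    """Report env keys where a developer's settings diverge from the template.

    Returns {key: (expected, actual)} for mismatched or missing keys;
    an empty dict means the local file matches the template.
    """
    local_env = {}
    if local_path.exists():
        local_env = json.loads(local_path.read_text()).get("env", {})
    expected = canonical.get("env", {})
    return {k: (v, local_env.get(k))
            for k, v in expected.items() if local_env.get(k) != v}
```

Wire it into the onboarding script so a nonempty result prints the offending keys instead of failing silently at inference time.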

3. Use Short-Lived Credentials Wherever Possible

Configure awsAuthRefresh in settings.json for SSO-based credential refresh:

```json
{
  "awsAuthRefresh": "aws sso login --profile bedrock-prod"
}
```

This automatically re-authenticates before credentials expire — no user intervention needed.

4. Enable Bedrock Invocation Logging for Audit

```bash
aws bedrock put-model-invocation-logging-configuration \
  --logging-config '{
    "cloudWatchConfig": {
      "logGroupName": "/aws/bedrock/invocations",
      "roleArn": "arn:aws:iam::ACCOUNT:role/BedrockLoggingRole"
    }
  }'
```

This enables full audit trails of all model invocations — required for many compliance frameworks.

5. Validate Before Rolling Out

After configuring Claude Code, run /status from within the tool to confirm it's connected to your intended provider, authenticated, and able to invoke models. Run this before pushing config to an entire team.

6. For Azure: Use Model Router for Automatic Cost Optimization

Foundry’s Model Router dispatches prompts based on complexity — simple completions go to Haiku, complex reasoning goes to Sonnet or Opus — from a single endpoint, automatically. Deploy it instead of (or alongside) individual model deployments to get cost optimization without per-request routing logic.

7. Document Your Model Strategy in an ADR

Write an internal Architecture Decision Record covering: which models are enabled and why, how version pinning works, which cloud provider is used for which teams, and the process for evaluating and adopting new model versions.

Future Scope: Where This Is Heading

🔮 LLM Gateways as Standard Infrastructure

Tools like LiteLLM and Portkey already provide unified OpenAI-compatible API layers across Bedrock, Vertex, and Foundry. Claude Code supports custom LLM gateways natively — any endpoint exposing /v1/messages with the correct headers works. Enterprise infrastructure teams are moving toward maintaining an internal LLM gateway the way they maintain API gateways today — and the Claude tooling is already ready for that world.
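Pointing Claude Code at such a gateway is typically just a base-URL override in ~/.claude/settings.json. A sketch, assuming the gateway speaks the Anthropic Messages API and that your Claude Code version honors the ANTHROPIC_BASE_URL and ANTHROPIC_AUTH_TOKEN variables; the hostname is a placeholder, and you should verify the exact variable names against the current Claude Code gateway docs:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://llm-gateway.internal.example.com",
    "ANTHROPIC_AUTH_TOKEN": "injected-by-your-secret-manager"
  }
}
```

The gateway then owns provider selection, failover, and rate limiting, and developers never see which cloud served a given request.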

🔮 MCP Deepens Cloud-Native Integration

AWS, Google, and Microsoft are all publishing official MCP servers for their cloud services. As these mature, Claude Code will be able to not just run on your cloud infrastructure but reason about it — reading CloudFormation stacks, querying BigQuery, inspecting Azure DevOps pipelines — with proper auth and governance, as native tools in the coding session.

🔮 Model Routing Will Become Intelligent and Automatic

As fine-tuned specialist models improve alongside foundation models, a single Claude Code session might automatically route by task type:

  • Architecture review → Claude Opus via Bedrock
  • Code generation → Claude Sonnet via Vertex
  • Documentation → Claude Haiku (cost-optimized)

All from one developer interface, one cloud bill, one session.

🔮 Claude Desktop Bedrock Support Is on the Roadmap

A feature request already filed against Anthropic’s repositories asks for Claude Desktop/Cowork to support a Bedrock configuration equivalent to Claude Code’s CLAUDE_CODE_USE_BEDROCK=1. Given that Claude Code CLI already has it and Cowork on Bedrock already exists, extending it to standard Claude Desktop seems like a natural next step to watch for.

Key Takeaways

  • Claude Code CLI officially supports AWS Bedrock, Google Vertex AI, and Microsoft Azure AI Foundry via environment variables — no proxy, no fake packages, no tricks
  • Claude Code VS Code extension uses the same config, plus claudeCode.disableLoginPrompt: true to bypass the Anthropic login screen
  • Claude Cowork on AWS Marketplace provides the full Claude Desktop experience with Bedrock as the inference backend — for enterprise, MDM-managed teams
  • Standard Claude Desktop does not support swapping its completion backend — that’s a product limitation, not a config issue
  • MCP servers are for tools and context (file system, databases, APIs) — not for model backend routing. Don’t confuse the two
  • ✅ The real financial benefit comes from applying existing cloud enterprise discounts and mixing Sonnet/Haiku by task complexity
  • ✅ AWS Bedrock requires complete IAM permissions (including Marketplace permissions) and a one-time FTU form per account
  • ✅ Always use cross-region inference profile IDs (not base model IDs) on Bedrock; always pin model versions for team deployments

Conclusion: The Enterprise AI Stack Is Already Waiting for You

The future of AI in the enterprise isn’t a single vendor, model, or billing relationship. It’s Claude Code running on the cloud infrastructure your team already understands, audits, and pays for.

That future isn’t theoretical. The environment variables exist. The IAM policies are documented. The VS Code extension supports it. AWS, Google, and Microsoft all have official integration guides. Anthropic has the wizard built into the CLI.

What’s missing is usually just awareness — and now you have it.

Start with one team, one cloud provider, and the handful of environment variables that matter. Validate with /status. Watch the spend appear in your existing cloud billing dashboard. Then scale from there.

The setup takes an afternoon. The organizational leverage lasts for years.

Found this useful? Follow for more technically accurate, no-BS deep-dives on enterprise AI architecture, Claude Code, and the real mechanics of building production AI systems.

Questions about your specific setup — IAM policies, multi-region configs, Cowork on Bedrock? Drop them in the comments.

Connect on LinkedIn for enterprise AI strategy conversations.


Stop Paying the Anthropic API Tax: Route Claude Code Through Your Existing Cloud was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.
