Sunday, April 5

By the end of this tutorial, you’ll have a working Make.com scenario that calls Claude via the Anthropic API, processes the response, and routes the output to any downstream app — Gmail, Notion, Airtable, Slack, whatever your workflow needs. No backend server, no infrastructure to maintain. Wiring Claude into Make.com is one of the fastest ways to put LLM intelligence into real business processes, and the whole setup takes about 20 minutes.

I’ve used this pattern to automate everything from lead qualification emails to weekly content briefs. The approach works because Make’s HTTP module is flexible enough to hit any REST API, and Claude’s responses are clean enough that you don’t need much post-processing logic.

What You’ll Build (and How Long It Takes)

The core pattern: a Make trigger (form submission, Gmail watcher, webhook, whatever) feeds data into an HTTP module that calls the Anthropic Messages API, Claude processes it, and the structured response flows into your output app. We’ll use a lead qualification scenario as the concrete example — a new Typeform submission triggers Claude to score and summarize the lead, then writes the result to Airtable.

This same skeleton works for content generation, email drafting, document summarization, support ticket triage, and a dozen other use cases. Swap the trigger and output modules; the Claude HTTP call stays identical.

  1. Get your Anthropic API key — create an account and grab credentials from the Anthropic console
  2. Create a new Make scenario — set up the canvas and choose your trigger module
  3. Configure the HTTP module for Claude — set headers, request body, and model parameters
  4. Parse Claude’s JSON response — extract the text content from the API response structure
  5. Map the output to your destination app — Airtable, Notion, Slack, or wherever
  6. Add error handling — configure retry logic and fallback paths

Step 1: Get Your Anthropic API Key

Go to console.anthropic.com, create an account if you haven’t, and navigate to API Keys. Generate a new key and copy it somewhere safe — you won’t see it again after creation.

Check your rate limits before building. Accounts without billing set up have tight limits that will cause problems during testing. If you’re seeing frequent 429 errors, add a credit card and move to a paid tier. At current pricing, Claude 3.5 Haiku (claude-3-5-haiku-latest) costs roughly $0.0008 per 1K input tokens and $0.004 per 1K output tokens — a typical lead qualification call with 500 input tokens and 200 output tokens runs about $0.0012 per execution. Even at 10,000 runs a month, you’re under $15.
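To sanity-check those numbers, here’s the arithmetic as a small Python sketch. The prices are hardcoded from the figures above — verify current pricing on Anthropic’s site before relying on them:

```python
# Back-of-envelope cost estimate for one lead qualification call.
# Prices are assumptions copied from the text above; check current pricing.
INPUT_PRICE_PER_1K = 0.0008   # USD per 1K input tokens
OUTPUT_PRICE_PER_1K = 0.004   # USD per 1K output tokens

def cost_per_call(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single API call."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

per_call = cost_per_call(500, 200)
print(round(per_call, 4))            # cost per execution
print(round(per_call * 10_000, 2))   # monthly cost at 10,000 runs
```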

Step 2: Create a New Make Scenario

In Make, click Create a new scenario. For this tutorial, add a Webhooks → Custom webhook module as your trigger. This lets you test by sending POST requests directly without needing to configure a real upstream app first.

Copy the webhook URL Make gives you — you’ll use it to send test payloads. Set up your webhook data structure by clicking Redetermine data structure and sending a sample payload:

{
  "name": "Sarah Chen",
  "company": "Acme Corp",
  "role": "Head of Engineering",
  "use_case": "We need to automate our quarterly compliance reports. Currently takes 3 days of manual work.",
  "budget": "50000",
  "timeline": "Q2 this year"
}

Send that via a tool like curl or Postman to the webhook URL. Make will detect the structure and let you map individual fields downstream.
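If you’d rather script the test payload than use curl or Postman, a minimal Python sketch looks like this. The webhook URL is a placeholder — paste in the one Make generated for you, then uncomment the send line:

```python
import json
import urllib.request

# Placeholder -- replace with the webhook URL Make gave you.
WEBHOOK_URL = "https://hook.make.com/your-webhook-id"

payload = {
    "name": "Sarah Chen",
    "company": "Acme Corp",
    "role": "Head of Engineering",
    "use_case": "We need to automate our quarterly compliance reports. "
                "Currently takes 3 days of manual work.",
    "budget": "50000",
    "timeline": "Q2 this year",
}

req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment once WEBHOOK_URL is real
print(req.get_method(), len(req.data), "bytes")
```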

Step 3: Configure the HTTP Module for Claude

Add an HTTP → Make a request module after your trigger. This is where the actual Make.com Claude integration happens. Configure it as follows:

  • URL: https://api.anthropic.com/v1/messages
  • Method: POST
  • Headers: Add two headers — x-api-key with your API key value, and anthropic-version set to 2023-06-01
  • Body type: Raw
  • Content type: JSON (application/json)

For the request body, use Make’s variable mapping to inject the trigger data into your prompt:

{
  "model": "claude-haiku-3-5",
  "max_tokens": 512,
  "system": "You are a sales qualification assistant. Analyze leads and return a JSON object with three fields: score (integer 1-10), summary (one sentence), and recommended_action (string: 'schedule_call', 'send_nurture', or 'disqualify').",
  "messages": [
    {
      "role": "user",
      "content": "Qualify this lead:\nName: {{1.name}}\nCompany: {{1.company}}\nRole: {{1.role}}\nUse case: {{1.use_case}}\nBudget: ${{1.budget}}\nTimeline: {{1.timeline}}\n\nReturn only valid JSON."
    }
  ]
}

The {{1.name}} syntax maps from module 1 (your webhook). Adjust the module number to match your scenario. A few things worth noting: always pin the anthropic-version header — Anthropic uses this to version the API contract and changing it can break response parsing. And explicitly ask for JSON output in your system prompt; it makes the next step much cleaner. For deeper guidance on writing reliable instructions, see our article on system prompts that actually work for consistent agent behavior.
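Before wiring this into Make, it can help to verify the call works outside the scenario. Here’s a hedged Python sketch that mirrors the HTTP module configuration above; it reads the key from an ANTHROPIC_API_KEY environment variable and only sends the request when the key is present (the system and user strings are trimmed stand-ins for the full prompts above):

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"
api_key = os.environ.get("ANTHROPIC_API_KEY", "")

# Same headers the Make HTTP module sends.
headers = {
    "x-api-key": api_key,
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}

# Same body shape as the request JSON above (prompts trimmed for brevity).
body = {
    "model": "claude-3-5-haiku-latest",
    "max_tokens": 512,
    "system": "You are a sales qualification assistant. Return only valid JSON.",
    "messages": [{"role": "user", "content": "Qualify this lead: ..."}],
}

if api_key:  # only hit the API when a key is configured
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    print(result["content"][0]["text"])
```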

Step 4: Parse Claude’s Response

Claude’s API returns a nested JSON structure. The actual text lives at content[0].text. In Make, after the HTTP module, add a Tools → Parse JSON module to handle this.

First, extract the raw text from the HTTP response. In a subsequent step, reference:

{{3.data.content[1].text}}

Note that mapped arrays in Make are 1-indexed, so the API’s content[0] element appears as content[1] in the mapping. That gives you Claude’s text output, which (because of your system prompt) should be a JSON string like:

{
  "score": 8,
  "summary": "Engineering leader with clear pain point and defined budget for Q2 implementation.",
  "recommended_action": "schedule_call"
}

Add a Tools → Parse JSON module, pass the extracted text into it, and Make will expose score, summary, and recommended_action as individual mappable variables. This is important — without parsing, you’re mapping a raw string to your destination, which creates fragile downstream logic.
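In code terms, the two-step extraction above (grab the text from the first content block, then parse it) looks like this minimal Python sketch, using a hypothetical trimmed-down API response:

```python
import json

# Hypothetical, trimmed Anthropic Messages API response -- only the part
# the Make mapping touches.
api_response = {
    "content": [
        {
            "type": "text",
            "text": '{"score": 8, "summary": "Engineering leader with clear '
                    'pain point and defined budget.", "recommended_action": '
                    '"schedule_call"}',
        }
    ]
}

# Step 1: extract the raw text (what the content mapping does in Make).
raw_text = api_response["content"][0]["text"]

# Step 2: parse it (what the Tools -> Parse JSON module does).
lead = json.loads(raw_text)
print(lead["score"], lead["recommended_action"])  # 8 schedule_call
```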

If you’re building something more complex than a single classification, you might want Claude to return richer structured output. Our post on reducing LLM hallucinations with structured outputs covers how to make Claude’s responses consistently parseable, which is directly applicable here.

Step 5: Map Output to Your Destination App

Add an Airtable → Create a record module (or whatever your destination is). Map the parsed fields:

  • Name → {{1.name}} (from webhook)
  • Company → {{1.company}}
  • Lead Score → {{4.score}} (from parsed JSON module)
  • AI Summary → {{4.summary}}
  • Recommended Action → {{4.recommended_action}}

Add a Router module before the Airtable write if you want to branch based on recommended_action. Route schedule_call leads to a Calendly invite email, send_nurture to a Mailchimp sequence, and disqualify to a simple archive record. This is where Make’s visual workflow builder genuinely shines — branching logic that would require code in a custom integration is just a few drag-and-drop connections. If you’re comparing platforms for this kind of work, our n8n vs Make vs Zapier comparison for AI automation is worth reading before you commit to a platform.
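The Router’s branching logic, stripped of Make specifics, is just a three-way dispatch on recommended_action. In this Python sketch the three handlers are hypothetical stand-ins for the Calendly, Mailchimp, and archive modules:

```python
def route_lead(lead, schedule_call, send_nurture, archive):
    """Dispatch a qualified lead to one of three downstream handlers."""
    action = lead["recommended_action"]
    if action == "schedule_call":
        return schedule_call(lead)
    if action == "send_nurture":
        return send_nurture(lead)
    return archive(lead)  # "disqualify" and anything unexpected

picked = route_lead(
    {"recommended_action": "send_nurture"},
    schedule_call=lambda l: "calendly",
    send_nurture=lambda l: "mailchimp",
    archive=lambda l: "archive",
)
print(picked)  # mailchimp
```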

Step 6: Add Error Handling

This is the step most tutorials skip and where real workflows break. Claude’s API occasionally returns 529 (overloaded) or 429 (rate limited) responses. Without handling, your scenario fails silently.

Right-click the HTTP module and select Add error handler. Add a Resume route with a Sleep module (set to 30 seconds) followed by a retry of the HTTP call. For hard failures, add a separate error route that writes the failed payload to a “failed_leads” Airtable table so nothing gets lost.

Also configure the HTTP module’s Parse response setting and check the Evaluate all states as errors except for 2xx option — this ensures non-200 responses actually trigger your error handler instead of being treated as successful empty responses.

For production scenarios handling volume, consider adding a filter that checks that {{3.data.content[1].text}} is not empty before the parse step. Claude occasionally returns empty content arrays on malformed inputs, and that’ll crash your JSON parser downstream. Our post on LLM fallback and retry logic for production covers the broader patterns here, even if you’re implementing them in Make instead of code.
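If you ever port this scenario to code, the error-handler pattern above (retry 429/529 after a delay, dead-letter everything else) looks roughly like this sketch; call_claude here is a hypothetical stand-in for the HTTP request:

```python
import time

# Statuses worth retrying: rate limited and overloaded.
RETRYABLE = {429, 529}

def call_with_retry(call_claude, retries=3, delay=30):
    """Retry transient failures; return None to signal the dead-letter path."""
    for _ in range(retries):
        status, payload = call_claude()
        if status == 200:
            return payload
        if status not in RETRYABLE:
            break  # hard failure -- don't retry
        time.sleep(delay)
    return None  # route to the "failed_leads" table

# Simulated call: overloaded twice, then succeeds.
responses = iter([(529, None), (529, None), (200, {"score": 8})])
result = call_with_retry(lambda: next(responses), delay=0)
print(result)  # {'score': 8}
```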

Common Errors and How to Fix Them

401 Unauthorized — “invalid x-api-key”

Almost always a header configuration issue. Make sure you’re using x-api-key as the header name — not Authorization, and not a Bearer token scheme, which the Anthropic Messages API doesn’t use. Also confirm you haven’t accidentally included a space or newline in the key value when pasting, and that the key hasn’t been revoked in the console.

JSON Parse Error on Claude’s Response

Claude returned a valid API response, but the text content isn’t clean JSON. This happens when Claude adds a preamble like “Here is the JSON:” before the object. Fix it in one of two ways: add “Return only valid JSON, no explanation” to your system prompt, or add a regex Text parser step in Make to extract the JSON object from the text before parsing. The regex \{[\s\S]*\} grabs everything from the first { to the last }, which reliably isolates a single JSON object from surrounding prose.
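As a quick illustration of that regex in Python (the reply string here is a made-up example of a preamble-wrapped response):

```python
import json
import re

# Hypothetical Claude reply with an unwanted preamble.
reply = ('Here is the JSON:\n'
         '{"score": 8, "summary": "Strong lead.", '
         '"recommended_action": "schedule_call"}')

# Greedy match: everything from the first { to the last }.
match = re.search(r"\{[\s\S]*\}", reply)
lead = json.loads(match.group(0)) if match else None
print(lead["recommended_action"])  # schedule_call
```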

Scenario Runs But Airtable Record Is Empty

You’re mapping the raw HTTP response body instead of the parsed JSON fields. Check your module numbering — if your scenario is Webhook (1) → HTTP (2) → Parse JSON (3) → Airtable (4), the parsed fields are at {{3.fieldname}}, not {{2.data.content...}}. Use Make’s inspector panel during a test run to see exactly what each module outputs.

What to Build Next

The lead qualification workflow is a skeleton. The natural next step is adding document context to the Claude call — before the HTTP module, add a step that fetches the lead’s company website or LinkedIn profile and injects that text into the prompt. A Google Search API → HTTP (fetch page) → Text parser chain before your Claude call turns a 5-field form submission into a genuinely researched lead brief. You’re still at zero backend infrastructure, and the additional API calls add maybe $0.005 per run. That’s a real research automation workflow, not just a classification toy. For a production-grade version of this pattern, see our guide on automating lead qualification with Claude and CRM integration.

Bottom Line: When to Use This Setup

Solo founders and small teams with limited engineering bandwidth: this is your best path to AI-powered automation. You get the full Claude API without maintaining any infrastructure, and Make’s visual debugger makes iteration fast. This pattern scales to hundreds of thousands of monthly operations before you’d need to consider a custom backend.

Engineering teams who already have backend infrastructure: Make is still useful for non-engineering stakeholders who need to own workflows. Build the Claude HTTP module pattern once, publish it as a Make template internally, and let your marketing or ops team build their own automations without filing tickets.

Budget-conscious builders: use Claude 3.5 Haiku (claude-3-5-haiku-latest) for everything that doesn’t require nuanced reasoning. It’s fast, cheap, and handles classification, summarization, and structured extraction extremely well. Save Sonnet for complex multi-step reasoning tasks where the quality delta actually matters.

Frequently Asked Questions

Does Make.com have a native Claude or Anthropic module?

Not as of this writing — Make doesn’t have a dedicated Anthropic app in its module library the way it does for OpenAI. You use the generic HTTP module to call the Anthropic Messages API directly, which actually gives you more flexibility since you control every parameter. The setup takes about 5 minutes once you know the correct header format.

Which Claude model should I use in Make workflows?

For most automation tasks — classification, summarization, structured extraction, email drafting — use Claude 3.5 Haiku (claude-3-5-haiku-latest). It’s fast enough for real-time workflows (typically under 3 seconds), cheap enough to run at volume, and handles the majority of business automation tasks well. Only step up to Claude 3.5 Sonnet (claude-3-5-sonnet-latest) if you’re doing complex reasoning, long document analysis, or code generation where output quality is critical.

How do I store my Anthropic API key securely in Make?

Don’t paste it directly into the HTTP module header value. Instead, go to Make’s Connections or use an environment variable-style approach by storing the key in a Make Data Store or a dedicated “config” Airtable/Google Sheet row that your scenario fetches at runtime. This keeps the key out of your scenario JSON export, which matters if you share scenarios or export them for version control.

Can I run Make.com Claude workflows on a schedule instead of a trigger?

Yes — replace the webhook trigger with a Schedule module. Set it to run hourly, daily, or weekly. Then add a data source module (Airtable search, Google Sheets rows, etc.) before the HTTP call to pull the records you want to process. Use Make’s iterator module to loop over multiple records and call Claude once per item. This is the standard pattern for batch processing jobs.
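Stripped of the Make specifics, the batch pattern is just a loop over pending records. In this Python sketch, fetch_pending and qualify are hypothetical stand-ins for the data source module and the Claude HTTP call:

```python
def process_batch(fetch_pending, qualify):
    """Run the qualification call once per pending record."""
    return [qualify(record) for record in fetch_pending()]

# Simulated data source and a stubbed Claude call.
pending = [{"name": "Sarah Chen"}, {"name": "Lead Two"}]
results = process_batch(lambda: pending, lambda r: {**r, "score": 7})
print(len(results))  # 2
```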

What’s the maximum payload size I can send to Claude through Make?

Make’s HTTP module caps request bodies at 10MB, which is more than enough for text prompts. The practical limit is Claude’s context window — 200K tokens for Claude 3.5 Haiku. The real concern is cost and latency at large contexts: sending 50K tokens of document text per call adds up quickly and can push response times past Make’s default 40-second module timeout.

How do I handle Make scenarios that time out when Claude takes too long?

Make’s HTTP module has a default 40-second timeout, which Claude rarely exceeds for short prompts but can hit with large context or complex generation tasks. First, reduce max_tokens to the minimum you actually need — this significantly speeds up response time. If you genuinely need long outputs, consider splitting the task into smaller Claude calls chained sequentially, or switch to Claude 3.5 Haiku, which is substantially faster than Sonnet for equivalent prompt lengths.

Put this into practice

Try the MCP Integration Engineer agent — ready to use, no setup required.

Browse Agents →

Editorial note: API pricing, model capabilities, and tool features change frequently — always verify current details on the vendor’s website before building in production. Code examples are tested at time of writing; pin your dependency versions to avoid breaking changes. Some links in this article may be affiliate links — we may earn a commission if you sign up, at no extra cost to you.

