Sunday, April 5

By the end of this tutorial, you’ll have a working Claude HR onboarding automation agent that collects new hire information, generates personalized welcome packets, schedules orientation sessions, and sends all the right emails — without a human touching any of it until the first day. The agent handles the mechanical 80% of onboarding so your HR team can focus on the 20% that actually requires judgment.

Most companies treat onboarding as a series of manual handoffs: HR sends an email, waits for a reply, sends another email, books a calendar slot, forwards documents. A single new hire easily generates 15–20 back-and-forth exchanges before they even show up. With Claude HR onboarding automation, you collapse that entire sequence into a triggered pipeline that runs in minutes.

  1. Install dependencies — Set up the Python environment with the Anthropic SDK, scheduling, and email tools
  2. Define the onboarding tools — Give Claude structured tool schemas for documents, scheduling, and IT provisioning
  3. Build the tool execution layer — Map each tool call to a real function that returns structured results
  4. Wire up the agent orchestration loop — Let Claude decide which tools to call and feed results back
  5. Add the webhook trigger — Kick off the pipeline when your ATS reports an accepted offer
  6. Add logging and audit trails — Make it production-safe with an auditable record of every action

What the Agent Actually Does

The trigger is simple: a new hire record lands in your ATS (or a webhook fires when an offer is accepted). From that point, the agent runs five tasks in sequence — no human in the loop until day one. This tutorial builds explicit tools for the document, scheduling, and IT steps; data extraction happens in the model itself, and the 30-60-90 plan tool follows the same pattern if you choose to add it.

  • Extracts structured employee data from the incoming payload
  • Generates a personalized welcome email and pre-boarding packet
  • Books orientation slots on a shared calendar via Google Calendar API
  • Sends IT provisioning requests with the right equipment specs
  • Creates a 30-60-90 day plan draft for the hiring manager to review

Running this on Claude 3.5 Haiku costs roughly $0.003–$0.008 per new hire depending on document length. At that price, you could process 10,000 hires for $30–$80 in API costs. The real savings are in HR time — typically 3–5 hours per hire that simply disappears.

Step 1: Install Dependencies

# Python 3.11+ recommended
pip install anthropic==0.28.0 google-auth google-auth-oauthlib \
  google-api-python-client python-dotenv "pydantic[email]==2.7.0" \
  fastapi uvicorn httpx jinja2

Pin your versions. The Anthropic SDK has had breaking changes between minor versions, and pydantic v1/v2 incompatibilities will cause silent failures in tool call parsing if you’re not careful.

Step 2: Define the Onboarding Tools

Claude’s tool use is the right architecture here — you’re not asking the model to “do” things directly, you’re giving it a set of actions it can invoke with structured arguments. This is what makes it auditable and testable. If you’re new to this pattern, the comparison in Claude Agents vs OpenAI Assistants is worth reading before you build.

import json  # used by the tool execution layer in Step 3

# Tool definitions passed to Claude
ONBOARDING_TOOLS = [
    {
        "name": "generate_welcome_document",
        "description": "Creates a personalized welcome email and pre-boarding packet for a new hire",
        "input_schema": {
            "type": "object",
            "properties": {
                "employee_name": {"type": "string"},
                "role": {"type": "string"},
                "start_date": {"type": "string", "description": "ISO 8601 format"},
                "department": {"type": "string"},
                "manager_name": {"type": "string"},
                "remote_or_onsite": {"type": "string", "enum": ["remote", "onsite", "hybrid"]}
            },
            "required": ["employee_name", "role", "start_date", "department", "manager_name"]
        }
    },
    {
        "name": "schedule_orientation",
        "description": "Books orientation session on the shared calendar",
        "input_schema": {
            "type": "object",
            "properties": {
                "employee_email": {"type": "string"},
                "start_date": {"type": "string"},
                "timezone": {"type": "string", "default": "UTC"},
                "session_type": {
                    "type": "string",
                    "enum": ["it_setup", "hr_orientation", "team_intro", "benefits"]
                }
            },
            "required": ["employee_email", "start_date", "session_type"]
        }
    },
    {
        "name": "send_it_provisioning_request",
        "description": "Sends equipment and access provisioning request to IT",
        "input_schema": {
            "type": "object",
            "properties": {
                "employee_name": {"type": "string"},
                "employee_email": {"type": "string"},
                "role": {"type": "string"},
                "equipment_tier": {
                    "type": "string",
                    "enum": ["standard", "developer", "executive"]
                },
                "remote_or_onsite": {"type": "string"}
            },
            "required": ["employee_name", "employee_email", "role"]
        }
    }
]

Step 3: Build the Tool Execution Layer

This is where most tutorials go vague. Here’s the actual execution logic — each tool maps to a real function, and unknown tool names return a structured error instead of crashing the loop. In production you should also validate tool arguments against your schemas before executing them; Claude’s arguments are usually well-formed, but that isn’t guaranteed.

from jinja2 import Template
import httpx
from datetime import datetime, timedelta

def execute_tool(tool_name: str, tool_input: dict) -> str:
    """Routes tool calls to their implementation and returns a string result."""
    
    if tool_name == "generate_welcome_document":
        return generate_welcome_document(**tool_input)
    elif tool_name == "schedule_orientation":
        return schedule_orientation(**tool_input)
    elif tool_name == "send_it_provisioning_request":
        return send_it_provisioning_request(**tool_input)
    else:
        return json.dumps({"error": f"Unknown tool: {tool_name}"})

def generate_welcome_document(
    employee_name: str,
    role: str,
    start_date: str,
    department: str,
    manager_name: str,
    remote_or_onsite: str = "hybrid"
) -> str:
    # In production, pull this template from your CMS or S3
    template_str = """
Subject: Welcome to the team, {{ name }}!

Hi {{ name }},

We're thrilled to have you joining {{ department }} as {{ role }} on {{ start_date }}.

Your manager {{ manager }} has been looking forward to working with you.
Your workspace setup is: {{ work_mode }}.

In the next 48 hours you'll receive:
- Calendar invites for orientation sessions
- IT access credentials
- Your 30-60-90 day plan draft

Questions? Reply directly to this email.

The People Team
    """
    
    rendered = Template(template_str).render(
        name=employee_name,
        role=role,
        start_date=start_date,
        department=department,
        manager=manager_name,
        work_mode=remote_or_onsite
    )
    
    return json.dumps({
        "status": "generated",
        "document_type": "welcome_email",
        "content": rendered,
        "word_count": len(rendered.split())
    })

def schedule_orientation(
    employee_email: str,
    start_date: str,
    session_type: str,
    timezone: str = "UTC"
) -> str:
    # Stub — replace with Google Calendar API call
    # Session offsets from start_date: IT day -2, HR orientation day 1, etc.
    offsets = {
        "it_setup": -2,
        "hr_orientation": 0,
        "team_intro": 1,
        "benefits": 2
    }
    
    base = datetime.fromisoformat(start_date)
    session_date = base + timedelta(days=offsets.get(session_type, 0))
    
    # Real implementation: google_calendar_client.events().insert(...)
    return json.dumps({
        "status": "scheduled",
        "session_type": session_type,
        "date": session_date.isoformat(),
        "attendee": employee_email,
        "calendar_event_id": f"evt_{session_type}_{employee_email[:6]}"
    })

def send_it_provisioning_request(
    employee_name: str,
    employee_email: str,
    role: str,
    equipment_tier: str = "standard",
    remote_or_onsite: str = "hybrid"
) -> str:
    # Stub — replace with your IT ticketing system (Jira, ServiceNow, etc.)
    ticket_payload = {
        "summary": f"New hire provisioning: {employee_name}",
        "description": f"Role: {role}\nEmail: {employee_email}\nTier: {equipment_tier}\nLocation: {remote_or_onsite}",
        "priority": "high",
        "assignee": "it-provisioning@company.com"
    }
    
    # httpx.post("https://yourjira.atlassian.net/rest/api/2/issue", json=ticket_payload)
    return json.dumps({"status": "ticket_created", "ticket_id": f"IT-{hash(employee_email) % 9999}"})
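One validation layer worth adding in front of `execute_tool`: check Claude's arguments against a model before they reach a real API. Here is a minimal sketch using pydantic — the model and function names are illustrative, not part of the code above, and you would define one input model per tool:

```python
import json
from typing import Literal
from pydantic import BaseModel, ValidationError

# Illustrative input model mirroring the generate_welcome_document schema
class WelcomeDocumentInput(BaseModel):
    employee_name: str
    role: str
    start_date: str
    department: str
    manager_name: str
    remote_or_onsite: Literal["remote", "onsite", "hybrid"] = "hybrid"

TOOL_INPUT_MODELS = {"generate_welcome_document": WelcomeDocumentInput}

def validate_tool_input(tool_name: str, tool_input: dict) -> tuple[bool, object]:
    """Returns (True, cleaned_input) or (False, error_json_to_feed_back_to_Claude)."""
    model = TOOL_INPUT_MODELS.get(tool_name)
    if model is None:
        return False, json.dumps({"error": f"Unknown tool: {tool_name}"})
    try:
        return True, model(**tool_input).model_dump()
    except ValidationError as exc:
        # Return the error as a tool result so Claude can correct and retry
        return False, json.dumps({"error": "invalid tool input", "details": str(exc)})
```

Call this at the top of `execute_tool` and, when validation fails, return the error string directly — Claude reads it and retries with corrected arguments.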

Step 4: Wire Up the Agent Orchestration Loop

This is the standard agentic loop — Claude decides which tools to call, you execute them, you feed results back. Keep running until Claude returns a stop_reason of end_turn with no more tool calls.

import anthropic
import os
from dotenv import load_dotenv

load_dotenv()
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

SYSTEM_PROMPT = """You are an HR onboarding automation agent. When given a new hire's details, 
you must complete ALL of the following tasks in order:
1. Generate a welcome document
2. Schedule all four orientation sessions (it_setup, hr_orientation, team_intro, benefits)
3. Send an IT provisioning request

Do not ask for confirmation. Execute all tasks and report what you completed.
Be systematic — complete every task before reporting done."""

def run_onboarding_agent(employee_data: dict) -> dict:
    """
    Runs the full onboarding pipeline for a single employee.
    Returns a summary of all actions taken.
    """
    messages = [
        {
            "role": "user",
            "content": f"Process onboarding for this new hire: {json.dumps(employee_data)}"
        }
    ]
    
    completed_actions = []
    
    for _ in range(20):  # iteration guard — this workflow should finish well under 20 turns
        response = client.messages.create(
            model="claude-3-5-haiku-20241022",  # Fast and cheap for this task
            max_tokens=4096,
            system=SYSTEM_PROMPT,
            tools=ONBOARDING_TOOLS,
            messages=messages
        )
        
        # Append assistant response to message history
        messages.append({"role": "assistant", "content": response.content})
        
        # Check if we're done
        if response.stop_reason == "end_turn":
            break
        
        # Process tool calls
        tool_results = []
        for block in response.content:
            if block.type == "tool_use":
                print(f"  → Executing: {block.name}({list(block.input.keys())})")
                result = execute_tool(block.name, block.input)
                completed_actions.append({
                    "tool": block.name,
                    "result": json.loads(result)
                })
                tool_results.append({
                    "type": "tool_result",
                    "tool_use_id": block.id,
                    "content": result
                })
        
        if tool_results:
            messages.append({"role": "user", "content": tool_results})
        else:
            # No tool calls but also not end_turn — shouldn't happen, but break to avoid infinite loop
            break
    
    return {
        "employee": employee_data.get("employee_name"),
        "actions_completed": len(completed_actions),
        "details": completed_actions
    }

# Example trigger — in production this comes from a webhook
if __name__ == "__main__":
    new_hire = {
        "employee_name": "Sarah Chen",
        "role": "Senior Backend Engineer",
        "department": "Engineering",
        "start_date": "2025-02-03",
        "employee_email": "sarah.chen@company.com",
        "manager_name": "James Okafor",
        "remote_or_onsite": "hybrid"
    }
    
    result = run_onboarding_agent(new_hire)
    print(f"\nOnboarding complete: {result['actions_completed']} actions taken")

Step 5: Add the Webhook Trigger

In production, this agent fires when your ATS sends a webhook. Here’s a minimal FastAPI handler that validates the payload and kicks off the pipeline. If you’re deploying this as a serverless function, the tradeoffs in choosing the right serverless platform for Claude agents are worth a read before you commit to an infrastructure approach.

from fastapi import FastAPI, HTTPException, BackgroundTasks
from pydantic import BaseModel, EmailStr
import logging
import os

app = FastAPI()
logger = logging.getLogger(__name__)

class NewHirePayload(BaseModel):
    employee_name: str
    role: str
    department: str
    start_date: str  # ISO format: 2025-02-03
    employee_email: EmailStr
    manager_name: str
    remote_or_onsite: str = "hybrid"
    # Shared secret — validated against an env var in the handler below
    webhook_secret: str

@app.post("/webhooks/new-hire")
async def handle_new_hire(payload: NewHirePayload, background_tasks: BackgroundTasks):
    # Validate webhook secret against env var
    if payload.webhook_secret != os.environ.get("WEBHOOK_SECRET"):
        raise HTTPException(status_code=401, detail="Invalid webhook secret")
    
    # Run async — don't block the webhook response
    background_tasks.add_task(
        run_onboarding_agent,
        payload.model_dump(exclude={"webhook_secret"})
    )
    
    return {"status": "accepted", "employee": payload.employee_name}

Step 6: Logging and Audit Trails

HR processes get audited. You need a record of every action the agent took, when, and with what inputs. The simplest approach is writing each completed action to a database row with a timestamp and the raw tool inputs/outputs.

import json
import sqlite3
from datetime import datetime, timezone

def log_onboarding_action(employee_email: str, tool_name: str,
                          inputs: dict, result: dict):
    conn = sqlite3.connect("onboarding_log.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS onboarding_actions (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            timestamp TEXT,
            employee_email TEXT,
            tool_name TEXT,
            inputs TEXT,
            result TEXT
        )
    """)
    conn.execute(
        "INSERT INTO onboarding_actions VALUES (NULL, ?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), employee_email, tool_name,
         json.dumps(inputs), json.dumps(result))
    )
    conn.commit()
    conn.close()

For higher-volume deployments, swap SQLite for Postgres and consider structured logging to a service like Datadog or Axiom. The principles in observability for production Claude agents apply directly here — you want to be able to replay any failed run from its inputs.

Common Errors

Tool call loop never terminates

This happens when your system prompt doesn’t clearly tell Claude to stop after all tasks are done, or when tool results return error states that confuse the model. Fix: add an explicit “When all tasks are complete, respond with a summary and stop” instruction to your system prompt. Also add a max_iterations guard counter in your loop — anything over 20 iterations in this workflow is a bug.

Pydantic validation errors on tool inputs

Claude occasionally passes a string where an enum is expected (e.g., "Hybrid" instead of "hybrid"). Add a pre-processing step that normalizes inputs before passing them to your tool functions, or use .lower().strip() defensively in each function. Don’t trust that Claude will perfectly match your enum values every time — it usually does, but production failures are expensive.
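A sketch of that pre-processing step (the function and lookup table are mine, not from the code above) — it only coerces a value when the cleaned-up form actually matches an allowed enum member, so legitimate free-text fields pass through untouched:

```python
# Enum-valued fields and their allowed values — mirror your tool schemas
ENUM_FIELDS = {
    "remote_or_onsite": {"remote", "onsite", "hybrid"},
    "session_type": {"it_setup", "hr_orientation", "team_intro", "benefits"},
    "equipment_tier": {"standard", "developer", "executive"},
}

def normalize_tool_input(tool_input: dict) -> dict:
    """Lowercase/strip enum fields so near-miss values like 'Hybrid ' still match."""
    cleaned = dict(tool_input)
    for field, allowed in ENUM_FIELDS.items():
        value = cleaned.get(field)
        if isinstance(value, str):
            candidate = value.lower().strip().replace(" ", "_").replace("-", "_")
            if candidate in allowed:
                cleaned[field] = candidate
    return cleaned
```

Run this on `block.input` before dispatching to `execute_tool`; values that still don't match an enum fall through to validation, where the error goes back to Claude.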

Calendar scheduling conflicts

If you’re using real Google Calendar API calls and the target calendar is full, the API returns a 409. Your tool function needs to handle this gracefully, return a meaningful error message to Claude (not raise an exception), and let the agent retry with an alternative slot. Claude handles retry logic well when the tool result explicitly says “slot unavailable, please try [date+1]”.
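What that tool result might look like — a sketch with a hypothetical helper name. In real code you'd catch `googleapiclient.errors.HttpError` around the `events().insert(...)` call and pass it `e.resp.status`:

```python
import json
from datetime import datetime, timedelta

def conflict_tool_result(session_type: str, session_date: datetime, http_status: int) -> str:
    """Translate a calendar API failure into a tool result Claude can act on."""
    if http_status == 409:
        # Suggest a concrete alternative so the model retries with a real slot
        retry_date = (session_date + timedelta(days=1)).date().isoformat()
        return json.dumps({
            "status": "error",
            "error": "slot unavailable",
            "session_type": session_type,
            "suggestion": f"please try {retry_date}",
        })
    return json.dumps({"status": "error", "error": f"calendar API returned {http_status}"})
```

The key design choice is returning a string, never raising: an exception kills the run, while a structured error keeps the agent loop alive and lets Claude reschedule.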

Cost Reality Check

A full onboarding run — welcome doc, four scheduling calls, one IT ticket — involves roughly 8–12 API calls. With claude-3-5-haiku-20241022 at $0.80/M input tokens and $4/M output tokens, a complete run costs around $0.004–$0.009. If you’re processing hundreds of hires monthly, look at LLM caching strategies — the system prompt and tool definitions are identical across all runs and are excellent candidates for prompt caching, which can cut your input token costs by 40–60%.
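A sketch of enabling prompt caching for this workflow. The `cache_control` placement follows Anthropic's documented prompt-caching API, but minimum cacheable prompt lengths vary by model and the feature needs a reasonably recent SDK, so verify against current docs before counting on the savings:

```python
def build_cached_request(system_prompt: str, tools: list, messages: list) -> dict:
    """Returns kwargs for client.messages.create with prompt caching enabled."""
    cached_tools = [dict(t) for t in tools]  # shallow copies; don't mutate the originals
    if cached_tools:
        # A cache breakpoint after the last tool caches the entire tool block
        cached_tools[-1]["cache_control"] = {"type": "ephemeral"}
    return {
        "model": "claude-3-5-haiku-20241022",
        "max_tokens": 4096,
        "tools": cached_tools,
        "system": [
            {
                "type": "text",
                "text": system_prompt,
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": messages,
    }
```

In the orchestration loop, `client.messages.create(**build_cached_request(SYSTEM_PROMPT, ONBOARDING_TOOLS, messages))` would replace the bare call — the static prefix is cached on the first call and read from cache on every subsequent one.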

Don’t use Claude Sonnet or Opus for this workflow. The tasks are formulaic enough that Haiku handles them with 99%+ reliability, and the larger models cost roughly 4x (Sonnet) to nearly 20x (Opus) more per token. Save them for document review or nuanced HR communications that actually require deeper reasoning.

What to Build Next

The natural extension is a 90-day follow-up agent that triggers check-in emails at day 30, 60, and 90, analyzes manager feedback forms using Claude’s structured output, and flags at-risk new hires based on engagement signals. The same tool-use architecture applies — you’re just adding a scheduler trigger and a sentiment analysis tool. If you’ve already built something similar for customer communications, the AI email agent pattern translates directly. You could also extend the IT provisioning tool to integrate with Okta or Azure AD directly, so accounts are provisioned automatically rather than via a ticket — that’s where you reclaim another 2–3 hours per hire.

Who Should Deploy This

Solo founders and small teams: Run this as a simple script triggered by a Zapier webhook from your ATS. You don’t need the FastAPI layer. Total setup time is 2–3 hours, and it pays for itself after the first hire.

HR teams at 50–500 person companies: Deploy the full FastAPI service on a cheap VPS or serverless function. Add the SQLite audit log minimum — your legal team will thank you later. Budget $30–$50/month for API costs at typical hiring volumes.

Enterprise: You’ll need SSO integration, role-based access to the webhook, and the audit log in a proper database with retention policies. The agent architecture here scales fine — what changes is the surrounding compliance infrastructure. The Compliance Specialist Agent is worth looking at if you need automated policy checks baked into the onboarding flow itself.

The bottom line on Claude HR onboarding automation: This is one of the clearest ROI cases for LLM agents in business operations. The tasks are deterministic enough to automate reliably, the cost is negligible, and the time savings are immediate. If your company hires more than 5 people a year, this build pays for itself in the first month.

Frequently Asked Questions

Can this agent handle different document templates for different roles?

Yes — the cleanest approach is storing role-specific templates in S3 or a CMS and passing the template key as part of the tool input. Claude selects the right template key based on the role field in the employee payload. You can also have Claude generate template content dynamically, but pre-written templates are more consistent and auditable for HR purposes.
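A minimal sketch of that lookup — the keys and paths here are hypothetical placeholders for wherever your templates live:

```python
# Hypothetical template keys — in production these would be objects in S3 or a CMS
ROLE_TEMPLATE_KEYS = {
    "Engineering": "templates/welcome_engineering.j2",
    "Sales": "templates/welcome_sales.j2",
    "Executive": "templates/welcome_executive.j2",
}
DEFAULT_TEMPLATE_KEY = "templates/welcome_default.j2"

def resolve_template_key(department: str) -> str:
    """Pick a department-specific template, falling back to the default."""
    return ROLE_TEMPLATE_KEYS.get(department, DEFAULT_TEMPLATE_KEY)
```

`generate_welcome_document` would then fetch the template body by key (for example, via boto3's `get_object`) instead of using the inline string.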

What happens if a tool call fails mid-pipeline?

Return a structured error JSON from your tool function (don’t raise an exception) — Claude will read the error and either retry with corrected inputs or report what failed in its final summary. Add a max_iterations guard in your loop and log every failure to your audit table. For production deployments, add an alerting step so HR is notified when a run fails rather than finding out on day one.

Is Claude reliable enough for HR document generation without human review?

For template-based documents with fixed structure (welcome emails, onboarding checklists), yes — Haiku’s reliability is high enough for production use. For anything with legal implications (offer letters, employment contracts), always route through a human review step before sending. The agent should draft; a human should approve contracts. This isn’t a model limitation, it’s just good process design.

How do I integrate this with an existing ATS like Greenhouse or Workday?

Both Greenhouse and Workday support outbound webhooks on status changes. In Greenhouse, configure a webhook on “Candidate Stage Changed” filtered to your “Offer Accepted” stage — it sends a JSON payload to your endpoint. In Workday, you’ll need a Business Process notification or a scheduled report export if you don’t have webhook support on your plan. Parse the ATS payload format and map fields to your NewHirePayload model before calling the agent.
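A sketch of that mapping for a Greenhouse-style payload. The field paths below are illustrative — the actual shape depends on your webhook configuration, so inspect a real payload in the ATS dashboard before wiring this up:

```python
def map_greenhouse_payload(ats_payload: dict) -> dict:
    """Map an ATS webhook payload onto the NewHirePayload fields.
    Field paths are illustrative — adjust to your actual webhook schema."""
    candidate = ats_payload.get("candidate", {})
    job = ats_payload.get("job", {})
    return {
        "employee_name": f"{candidate.get('first_name', '')} {candidate.get('last_name', '')}".strip(),
        "employee_email": candidate.get("email", ""),
        "role": job.get("title", ""),
        "department": job.get("department", ""),
        "start_date": ats_payload.get("start_date", ""),
        "manager_name": job.get("hiring_manager", ""),
    }
```

Validate the mapped dict through `NewHirePayload` before handing it to `run_onboarding_agent`, so a malformed ATS payload fails loudly at the webhook boundary instead of mid-pipeline.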

How much does it cost to run this for a company that hires 200 people per year?

At roughly $0.007 average per run on Claude Haiku 3.5, 200 hires costs about $1.40 in API fees annually. Even at 10x that estimate it’s under $20/year. The meaningful cost is your engineering time to build and maintain the integration, not the API usage. Infrastructure hosting for the webhook server adds $5–20/month depending on your platform choice.

Put this into practice

Try the Connection Agent — ready to use, no setup required.

Browse Agents →

Editorial note: API pricing, model capabilities, and tool features change frequently — always verify current details on the vendor’s website before building in production. Code examples are tested at time of writing; pin your dependency versions to avoid breaking changes. Some links in this article may be affiliate links — we may earn a commission if you sign up, at no extra cost to you.

