By the end of this tutorial, you’ll have a working Python system where Claude agents generate platform-specific social posts from a content brief, slot them into a scheduling queue, and pull engagement metrics back into a single dashboard. We’re talking Twitter/X, LinkedIn, and Instagram — different character limits, different tones, different optimal posting times — all handled automatically, with Claude agents doing the heavy lifting.
This isn’t a “use Buffer + ChatGPT” post. We’re building the actual orchestration layer: a content generation agent that adapts brand voice per platform, a scheduling agent that manages a SQLite queue, and a metrics agent that knows when to flag underperforming content for regeneration. Here’s the full build.
- Install dependencies — Set up the Python environment with Anthropic SDK, platform API clients, and APScheduler
- Define brand voice and platform rules — Build the system prompt layer that enforces tone and format constraints per platform
- Build the content generation agent — Claude generates three platform-native variants from a single brief
- Set up the scheduling queue — SQLite-backed queue with APScheduler for time-based dispatch
- Wire in platform APIs — Tweepy for Twitter, LinkedIn API, and Graph API for Instagram
- Build the engagement metrics agent — Pull performance data and trigger regeneration on underperforming posts
- Run the full orchestration loop — Tie all agents together with a coordinator
Step 1: Install Dependencies
You need Python 3.11+. The core stack is the Anthropic SDK for Claude, Tweepy for Twitter, the LinkedIn API wrapper, and APScheduler for the time-based dispatch queue.
pip install anthropic tweepy linkedin-api apscheduler requests python-dotenv SQLAlchemy
Create a .env file:
ANTHROPIC_API_KEY=sk-ant-...
TWITTER_API_KEY=...
TWITTER_API_SECRET=...
TWITTER_ACCESS_TOKEN=...
TWITTER_ACCESS_SECRET=...
TWITTER_BEARER_TOKEN=...
LINKEDIN_ACCESS_TOKEN=...
INSTAGRAM_ACCESS_TOKEN=...
INSTAGRAM_ACCOUNT_ID=...
Step 2: Define Brand Voice and Platform Rules
This is where most implementations fall apart — they use one generic prompt for all platforms and wonder why the LinkedIn post sounds like a tweet. Each platform has real constraints that need to be baked into the system prompt.
If you’re not familiar with how to structure system prompts for consistent agent behavior, the guide on role prompting best practices for Claude agents covers the exact pattern we’re using here.
PLATFORM_RULES = {
"twitter": {
"max_chars": 280,
"tone": "punchy, direct, conversational — use threads for depth, not walls of text",
"hashtags": "2-3 max, only if genuinely relevant",
"emoji": "sparingly, max 2",
"no_nos": "no corporate speak, no 'excited to announce'"
},
"linkedin": {
"max_chars": 3000,
"tone": "professional but human — insight-driven, not promotional. Lead with a hook line.",
"hashtags": "3-5 at the end",
"emoji": "optional, professional context only",
"no_nos": "no buzzwords, no 'synergy', no 'thought leadership'"
},
"instagram": {
"max_chars": 2200,
"tone": "visual-first storytelling — assume the caption supports an image. Be warm, specific.",
"hashtags": "10-15 in first comment or end of caption",
"emoji": "encouraged, used to break up text",
"no_nos": "no links in caption (they don't work), no pure text dumps"
}
}
BRAND_VOICE = """
You are the social media voice for [Company]. Our brand is:
- Direct and technically credible — we don't dumb things down
- Opinionated — we share actual takes, not hedged corporate statements
- Helpful first — every post should give the reader something useful
- Never promotional in an obvious way — show value, don't tell it
"""
Step 3: Build the Content Generation Agent
The generation agent takes a content brief and returns three platform-specific variants in one Claude call using structured output. Running a separate API call per platform triples cost and latency; we get all three in one structured response.
At current Claude Haiku pricing (roughly $1 per million input tokens and $5 per million output tokens for Haiku 4.5 at the time of writing), generating all three variants from a brief costs well under a cent per content item. That’s negligible for a daily posting schedule.
import anthropic
import json
from typing import Optional
client = anthropic.Anthropic()
def generate_social_content(
brief: str,
topic: str,
target_audience: str,
existing_posts: Optional[list] = None # for deduplication awareness
) -> dict:
dedup_context = ""
if existing_posts:
recent = "\n".join(existing_posts[-5:]) # last 5 posts for context
dedup_context = f"\n\nRecent posts to avoid repeating angles:\n{recent}"
platform_specs = json.dumps(PLATFORM_RULES, indent=2)
prompt = f"""
{BRAND_VOICE}
Platform rules:
{platform_specs}
Content brief: {brief}
Topic: {topic}
Target audience: {target_audience}
{dedup_context}
Generate platform-specific posts for Twitter, LinkedIn, and Instagram.
Return ONLY valid JSON in this exact structure:
{{
"twitter": {{
"content": "post text here",
"char_count": 0,
"hashtags": ["tag1", "tag2"]
}},
"linkedin": {{
"content": "post text here",
"char_count": 0,
"hashtags": ["tag1", "tag2", "tag3"]
}},
"instagram": {{
"content": "post text here",
"char_count": 0,
"hashtags": ["tag1", "tag2"],
"image_prompt": "description for image generation if needed"
}}
}}
"""
response = client.messages.create(
model="claude-haiku-4-5", # Haiku is fast enough and cheap for generation
max_tokens=2000,
system="You are a social media content specialist. Always return valid JSON only.",
messages=[{"role": "user", "content": prompt}]
)
try:
content = response.content[0].text.strip()
# Strip any markdown code fences if Claude added them
if content.startswith("```"):
content = content.split("```")[1]
if content.startswith("json"):
content = content[4:]
return json.loads(content)
except json.JSONDecodeError as e:
raise ValueError(f"Claude returned invalid JSON: {e}\nRaw: {response.content[0].text}")
Note: use Claude Haiku for generation — it’s fast and cheap. Save Sonnet for anything requiring deeper reasoning, like the engagement analysis step later. This mirrors the model selection logic covered in our Claude vs GPT-4 benchmark comparison.
Step 4: Set Up the Scheduling Queue
SQLite with SQLAlchemy handles persistence. APScheduler reads the queue and fires posts at the right time. This survives process restarts, which a pure in-memory queue won’t.
from sqlalchemy import create_engine, Column, String, DateTime, Text, Enum, Integer
from sqlalchemy.orm import declarative_base, sessionmaker
from datetime import datetime
import enum
engine = create_engine("sqlite:///social_calendar.db")
Base = declarative_base()
Session = sessionmaker(bind=engine)
class PostStatus(str, enum.Enum):
QUEUED = "queued"
PUBLISHED = "published"
FAILED = "failed"
PAUSED = "paused"
class ScheduledPost(Base):
__tablename__ = "scheduled_posts"
id = Column(Integer, primary_key=True, autoincrement=True)
platform = Column(String(20), nullable=False)
content = Column(Text, nullable=False)
hashtags = Column(Text) # JSON string
scheduled_time = Column(DateTime, nullable=False)
status = Column(String(20), default=PostStatus.QUEUED)
post_id = Column(String(100)) # returned by platform after publishing
topic = Column(String(200))
created_at = Column(DateTime, default=datetime.utcnow)
engagement_score = Column(Integer, default=0)
Base.metadata.create_all(engine)
def queue_post(platform: str, content: str, hashtags: list,
scheduled_time: datetime, topic: str) -> int:
session = Session()
post = ScheduledPost(
platform=platform,
content=content,
hashtags=json.dumps(hashtags),
scheduled_time=scheduled_time,
topic=topic
)
session.add(post)
session.commit()
post_id = post.id
session.close()
return post_id
def get_due_posts() -> list:
session = Session()
now = datetime.utcnow()
posts = session.query(ScheduledPost).filter(
ScheduledPost.scheduled_time <= now,
ScheduledPost.status == PostStatus.QUEUED
).all()
result = [{"id": p.id, "platform": p.platform,
"content": p.content, "hashtags": json.loads(p.hashtags or "[]")}
for p in posts]
session.close()
return result
Step 5: Wire in Platform APIs
The platform dispatcher is a thin adapter layer. Keep it isolated — when Twitter breaks their API (and they will), you fix one function.
import tweepy
import requests
import os
def post_to_twitter(content: str, hashtags: list) -> str:
# Append hashtags only if the combined text fits in 280 chars
hashtag_str = " ".join(f"#{h}" for h in hashtags)
full_content = f"{content}\n\n{hashtag_str}".strip()
if len(full_content) > 280:
# Drop hashtags first; truncate the content itself only as a last resort
full_content = content if len(content) <= 280 else content[:277] + "..."
# Twitter API v2 — the v1.1 statuses/update endpoint is no longer available
# on current access tiers, so use tweepy.Client rather than tweepy.API
client_v2 = tweepy.Client(
consumer_key=os.getenv("TWITTER_API_KEY"),
consumer_secret=os.getenv("TWITTER_API_SECRET"),
access_token=os.getenv("TWITTER_ACCESS_TOKEN"),
access_token_secret=os.getenv("TWITTER_ACCESS_SECRET")
)
response = client_v2.create_tweet(text=full_content)
return str(response.data["id"])
def post_to_linkedin(content: str, hashtags: list) -> str:
hashtag_str = " ".join(f"#{h}" for h in hashtags)
full_content = f"{content}\n\n{hashtag_str}"
url = "https://api.linkedin.com/v2/ugcPosts"
headers = {
"Authorization": f"Bearer {os.getenv('LINKEDIN_ACCESS_TOKEN')}",
"Content-Type": "application/json"
}
# You need the LinkedIn person URN — get it from /v2/me endpoint
person_urn = "urn:li:person:YOUR_PERSON_ID"
payload = {
"author": person_urn,
"lifecycleState": "PUBLISHED",
"specificContent": {
"com.linkedin.ugc.ShareContent": {
"shareCommentary": {"text": full_content},
"shareMediaCategory": "NONE"
}
},
"visibility": {"com.linkedin.ugc.MemberNetworkVisibility": "PUBLIC"}
}
resp = requests.post(url, headers=headers, json=payload)
resp.raise_for_status()
return resp.headers.get("x-restli-id", "unknown")
def post_to_instagram(content: str, hashtags: list, image_url: str) -> str:
# Instagram requires an image — this uses the Graph API
account_id = os.getenv("INSTAGRAM_ACCOUNT_ID")
token = os.getenv("INSTAGRAM_ACCESS_TOKEN")
hashtag_str = "\n\n" + " ".join(f"#{h}" for h in hashtags)
caption = content + hashtag_str
# Step 1: Create media container
container_url = f"https://graph.facebook.com/v18.0/{account_id}/media"
container_resp = requests.post(container_url, params={
"image_url": image_url,
"caption": caption,
"access_token": token
})
container_resp.raise_for_status()  # fail fast if the image URL is rejected
container_id = container_resp.json()["id"]
# Step 2: Publish
publish_url = f"https://graph.facebook.com/v18.0/{account_id}/media_publish"
publish_resp = requests.post(publish_url, params={
"creation_id": container_id,
"access_token": token
})
publish_resp.raise_for_status()
return publish_resp.json()["id"]
PLATFORM_DISPATCHERS = {
"twitter": post_to_twitter,
"linkedin": post_to_linkedin,
"instagram": post_to_instagram
}
Step 6: Build the Engagement Metrics Agent
The metrics agent does two things: pulls engagement data and uses Claude to decide whether a post’s performance warrants regenerating similar content or doubling down on that angle. This is where using a slightly smarter model pays off — use Claude Sonnet here for analysis quality.
For production reliability on this step, implement the retry patterns described in our article on LLM fallback and retry logic — rate limits from platform APIs will hit you unpredictably.
def fetch_twitter_metrics(post_id: str) -> dict:
# Using Twitter API v2 with Tweepy
client_v2 = tweepy.Client(
bearer_token=os.getenv("TWITTER_BEARER_TOKEN"),
consumer_key=os.getenv("TWITTER_API_KEY"),
consumer_secret=os.getenv("TWITTER_API_SECRET"),
access_token=os.getenv("TWITTER_ACCESS_TOKEN"),
access_token_secret=os.getenv("TWITTER_ACCESS_SECRET")
)
tweet = client_v2.get_tweet(
post_id,
tweet_fields=["public_metrics"]
)
if tweet.data:
return tweet.data.public_metrics
return {}
def analyze_engagement_with_claude(
platform: str,
content: str,
metrics: dict,
topic: str
) -> dict:
"""
Returns analysis with recommendation: 'regenerate', 'amplify', or 'archive'
"""
prompt = f"""
Analyze this social media post's performance and give a recommendation.
Platform: {platform}
Topic: {topic}
Content: {content}
Metrics: {json.dumps(metrics)}
Typical benchmarks for context:
- Twitter: >50 impressions/post is baseline, >2% engagement rate is good
- LinkedIn: >100 impressions, >3% engagement is solid
- Instagram: >200 impressions, >4% engagement rate
Return JSON only:
{{
"performance": "above_average|average|below_average",
"reason": "one sentence explanation",
"recommendation": "regenerate|amplify|archive",
"suggested_angle": "if regenerate, what angle to try instead"
}}
"""
response = client.messages.create(
model="claude-sonnet-4-5",  # Sonnet for better analytical reasoning
max_tokens=500,
messages=[{"role": "user", "content": prompt}]
)
text = response.content[0].text.strip()
# Apply the same fence-stripping guard as the generation agent
if text.startswith("```"):
text = text.split("```")[1]
if text.startswith("json"):
text = text[4:]
return json.loads(text)
Step 7: Run the Full Orchestration Loop
The coordinator ties everything together. It runs on a schedule: generate content in batches at the start of the week, dispatch posts throughout the week, pull metrics daily, and feed insights back into the next generation cycle.
from apscheduler.schedulers.blocking import BlockingScheduler
from datetime import datetime, timedelta
scheduler = BlockingScheduler()
def schedule_weekly_content(briefs: list):
"""
briefs: list of dicts with keys: brief, topic, audience, optimal_times
optimal_times: dict mapping platform -> datetime
"""
session = Session()
for brief_item in briefs:
print(f"Generating content for: {brief_item['topic']}")
# Get recent posts for deduplication context
recent = session.query(ScheduledPost.content).filter(
ScheduledPost.topic == brief_item["topic"]
).order_by(ScheduledPost.created_at.desc()).limit(5).all()
recent_texts = [r.content for r in recent]
variants = generate_social_content(
brief=brief_item["brief"],
topic=brief_item["topic"],
target_audience=brief_item["audience"],
existing_posts=recent_texts
)
for platform, post_data in variants.items():
if platform in brief_item["optimal_times"]:
queue_post(
platform=platform,
content=post_data["content"],
hashtags=post_data["hashtags"],
scheduled_time=brief_item["optimal_times"][platform],
topic=brief_item["topic"]
)
print(f" Queued {platform}: {post_data['char_count']} chars")
session.close()
def dispatch_due_posts():
"""Runs every 5 minutes to publish queued posts."""
due = get_due_posts()
session = Session()
for post in due:
try:
dispatcher = PLATFORM_DISPATCHERS.get(post["platform"])
if not dispatcher:
continue
platform_post_id = dispatcher(post["content"], post["hashtags"])
# Update record
db_post = session.get(ScheduledPost, post["id"])  # Session.get, not the deprecated Query.get
db_post.status = PostStatus.PUBLISHED
db_post.post_id = platform_post_id
session.commit()
print(f"Published to {post['platform']}: {platform_post_id}")
except Exception as e:
session.rollback()  # clear any failed transaction before updating status
db_post = session.get(ScheduledPost, post["id"])
db_post.status = PostStatus.FAILED
session.commit()
print(f"Failed to post to {post['platform']}: {e}")
session.close()
def run_metrics_collection():
"""Runs daily — pulls metrics for posts published 24-48 hours ago."""
session = Session()
cutoff = datetime.utcnow() - timedelta(hours=24)
old_cutoff = datetime.utcnow() - timedelta(hours=48)
posts = session.query(ScheduledPost).filter(
ScheduledPost.status == PostStatus.PUBLISHED,
ScheduledPost.scheduled_time.between(old_cutoff, cutoff)
).all()
for post in posts:
if post.platform == "twitter" and post.post_id:
metrics = fetch_twitter_metrics(post.post_id)
analysis = analyze_engagement_with_claude(
post.platform, post.content, metrics, post.topic
)
print(f"Post {post.id} ({post.platform}): {analysis['performance']} — {analysis['recommendation']}")
# Store engagement score or trigger regeneration workflow here
session.close()
# Register scheduled jobs
scheduler.add_job(dispatch_due_posts, "interval", minutes=5)
scheduler.add_job(run_metrics_collection, "cron", hour=9) # Daily at 9am UTC
if __name__ == "__main__":
# Example: schedule content for the week
example_briefs = [{
"brief": "We just published a tutorial on building Claude agents for email triage. Share the key insight: you can use tool_use to read and categorize emails without any custom backend.",
"topic": "claude-email-agents",
"audience": "developers building automation workflows",
"optimal_times": {
"twitter": datetime.utcnow() + timedelta(hours=2),
"linkedin": datetime.utcnow() + timedelta(hours=3),
"instagram": datetime.utcnow() + timedelta(hours=4),
}
}]
schedule_weekly_content(example_briefs)
scheduler.start()
Common Errors
1. Claude returns JSON with markdown fences
Even with “Return ONLY valid JSON” in the prompt, Claude Haiku occasionally wraps output in ```json ... ```. The stripping logic in the generation function handles this, but in production you should also run a regex fallback: re.search(r'\{.*\}', content, re.DOTALL) to extract the JSON object from any surrounding text. Adding the instruction “Do not use markdown code blocks” to the system prompt reduces frequency significantly.
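That fence-stripping plus regex fallback can live in one parsing helper. A minimal sketch (the `extract_json` name is ours, not part of the SDK):

```python
import json
import re

def extract_json(raw: str) -> dict:
    """Parse a Claude reply, tolerating markdown fences and surrounding prose."""
    text = raw.strip()
    try:
        return json.loads(text)  # fast path: reply is already bare JSON
    except json.JSONDecodeError:
        pass
    # Fallback: grab the outermost {...} span from whatever surrounds it
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        raise ValueError(f"No JSON object found in response: {text[:200]}")
    return json.loads(match.group(0))
```

Swapping this in for the manual fence-stripping in the generation and metrics agents gives both a single tolerant parser.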
2. Platform API rate limits killing the dispatcher
Twitter’s v2 API allows 300 tweets per 3-hour window per user. LinkedIn caps at 100 posts per day. If you’re running this for multiple accounts or hammering the dispatcher, you’ll hit 429s — in Tweepy 4.x these surface as tweepy.errors.TooManyRequests (a subclass of tweepy.HTTPException), so catch that rather than the generic tweepy.TweepyException. Wrap dispatchers in exponential backoff and set status = FAILED only after 3 retries, with a re-queue mechanism for 429s specifically.
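A sketch of that backoff wrapper, using a stand-in RateLimited exception so it stays platform-agnostic — in production you’d catch tweepy’s 429 exception (and the equivalent HTTP 429 responses from the LinkedIn and Graph APIs) instead:

```python
import random
import time

class RateLimited(Exception):
    """Stand-in for a platform 429 response."""

def with_backoff(func, max_retries: int = 3, base_delay: float = 1.0):
    """Call func, retrying rate-limited attempts with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return func()
        except RateLimited:
            if attempt == max_retries:
                raise  # let the dispatcher mark the post FAILED or re-queue it
            # 1s, 2s, 4s, ... plus jitter to avoid synchronized retries
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

In dispatch_due_posts, the call then becomes `with_backoff(lambda: dispatcher(post["content"], post["hashtags"]))`.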
3. Instagram Graph API rejecting image URLs
The Instagram container creation endpoint requires the image to be publicly accessible via HTTPS and returns a generic OAuthException: Invalid parameter if the URL is behind auth, returns a redirect, or uses HTTP. Pre-validate image URLs with a HEAD request before queuing Instagram posts. If you’re generating images programmatically, upload to S3 first and use the public S3 URL.
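That HEAD-request pre-check can be a small helper. A sketch using only the standard library (`validate_image_url` is our name; adapt the checks to your hosting setup):

```python
from urllib.parse import urlparse
from urllib.request import Request, urlopen

def validate_image_url(url: str, timeout: float = 5.0) -> bool:
    """Pre-check an image URL against Instagram's requirements before queuing."""
    if urlparse(url).scheme != "https":  # the Graph API rejects plain HTTP
        return False
    try:
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=timeout) as resp:
            # Note: urlopen follows redirects; also compare resp.url to url
            # if redirected image URLs must be rejected too
            content_type = resp.headers.get("Content-Type", "")
            return resp.status == 200 and content_type.startswith("image/")
    except Exception:
        return False
```

Call it just before queuing Instagram items and skip or flag posts whose image fails the check.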
What to Build Next
The natural extension is an approval gate between generation and scheduling. Right now posts go straight to the queue — which is fine for established content patterns, but risky for breaking news topics or anything touching your product roadmap. Add a Slack webhook that fires when new posts are generated, dumps the variants as a formatted message with Approve/Reject buttons (using Slack Block Kit), and only moves approved posts to QUEUED status. With the metrics agent already flagging what works, the review step becomes a 30-second skim rather than an actual content decision.
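The Slack side of that gate is mostly payload construction. A sketch of the Block Kit message builder (the function name and action IDs are ours — wire the buttons to whatever interactivity endpoint flips the post’s status):

```python
def build_approval_message(post_id: int, platform: str, content: str) -> dict:
    """Slack Block Kit payload with Approve/Reject buttons for one queued draft."""
    return {
        "blocks": [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": f"*New {platform} draft (post {post_id}):*\n{content}",
                },
            },
            {
                "type": "actions",
                "elements": [
                    {"type": "button", "style": "primary", "action_id": "approve_post",
                     "text": {"type": "plain_text", "text": "Approve"}, "value": str(post_id)},
                    {"type": "button", "style": "danger", "action_id": "reject_post",
                     "text": {"type": "plain_text", "text": "Reject"}, "value": str(post_id)},
                ],
            },
        ]
    }
```

POST this as JSON to your incoming webhook URL; the interactivity handler then moves approved posts to QUEUED and rejected ones to PAUSED.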
If you’re looking to extend the analytics side, the pattern from our SEO content audit automation guide maps directly to building a content performance reporting agent that runs weekly and surfaces which topics, tones, and posting times are driving the most engagement across platforms.
Bottom Line: Who Should Build This
Solo founders: Start with just Twitter and LinkedIn, skip Instagram (image requirement adds complexity). Use the batch scheduling pattern to front-load content creation on Monday mornings and run on autopilot. Your per-post Claude cost is under a cent — the ROI on even one recovered hour per week makes this a no-brainer.
Teams managing multiple brand accounts: Add a brand_id column to the ScheduledPost table and separate BRAND_VOICE configs per account. The architecture already supports it. Consider wrapping the orchestration in n8n if your team is non-technical — the scheduling and metrics logic translates cleanly to n8n workflows with the Claude API node.
Budget-conscious builders: Stick with Claude Haiku for all generation steps (Sonnet only for engagement analysis). With 20 posts per week across 3 platforms, you’re looking at roughly $0.50–$1.00/month in Claude API costs. The bigger cost is platform API access — Twitter’s Basic tier ($100/month) is the gating factor if you need posting privileges.
The core pattern — generate, queue, dispatch, measure, regenerate — is what separates real social media automation Claude agents from one-shot content scripts. Once the loop is running, you’re not managing content; you’re managing the system.
Frequently Asked Questions
How do I keep brand voice consistent across platforms with Claude agents?
Encode your brand voice in the system prompt as specific behavioral rules (“never use corporate speak”, “lead with the insight, not the product”) and platform rules as separate constraints. The key is separating brand voice (tone/values, constant across platforms) from platform format rules (character limits, hashtag counts, emoji usage). Don’t merge them into one wall of instructions — Claude handles layered constraints better when they’re structured.
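Concretely, the layering is one constant voice block composed with a single platform’s rules at call time. A minimal sketch using the PLATFORM_RULES structure from Step 2 (`build_system_prompt` is our helper name):

```python
def build_system_prompt(brand_voice: str, platform_rules: dict, platform: str) -> str:
    """Compose the constant brand voice with one platform's format constraints."""
    rules = platform_rules[platform]
    rule_lines = "\n".join(f"- {key}: {value}" for key, value in rules.items())
    return (
        f"{brand_voice.strip()}\n\n"
        f"Platform constraints for {platform}:\n{rule_lines}"
    )
```

Keeping the two layers separate also means editing the brand voice never risks breaking a platform’s format rules, and vice versa.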
Can Claude agents post directly to social media platforms?
Not natively — Claude generates content but posting requires direct API calls to each platform (Tweepy for Twitter, HTTP calls to LinkedIn’s UGC API, Facebook Graph API for Instagram). Claude’s tool_use feature lets you build this as an agentic loop where Claude calls a “post_to_platform” tool, but for production scheduling you’re better off keeping Claude in the content generation role and using a separate dispatcher as shown in this tutorial.
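If you do want the agentic variant, the tool is declared as a JSON Schema passed in the tools parameter of messages.create. A sketch of the definition (the post_to_platform name and fields are ours):

```python
# Tool definition in Anthropic's tool_use format (a JSON Schema under input_schema)
post_tool = {
    "name": "post_to_platform",
    "description": "Publish a finished post to one social platform.",
    "input_schema": {
        "type": "object",
        "properties": {
            "platform": {"type": "string", "enum": ["twitter", "linkedin", "instagram"]},
            "content": {"type": "string"},
            "hashtags": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["platform", "content"],
    },
}
```

You’d pass tools=[post_tool] to client.messages.create and execute the matching dispatcher whenever the response contains a tool_use block — but as noted above, a plain queue is simpler to operate for scheduled posting.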
What’s the best Claude model for social media content generation?
Claude Haiku for content generation (claude-haiku-4-5 in the code above) — it’s fast (typically sub-2-second responses) and costs well under a cent per full three-platform content set. Use Claude Sonnet for engagement analysis and strategic decisions where reasoning quality matters more than speed. Running Sonnet for bulk generation is roughly 3x more expensive with marginal quality improvement for short-form content.
How do I avoid Claude regenerating the same post angles repeatedly?
Pass the last 5 posts on the same topic as context with an explicit instruction to avoid those angles. Maintain a topic-keyed cache in your database and query it before each generation call. For longer-running calendars, consider embeddings-based deduplication — store post embeddings and reject generated content with cosine similarity above 0.85 to recent posts on the same topic.
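The similarity check itself is a few lines once you have embeddings (the Anthropic SDK doesn’t provide an embeddings endpoint, so you’d use a third-party embeddings model; the 0.85 threshold is a starting point to tune):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def is_duplicate_angle(new_emb: list[float], recent_embs: list[list[float]],
                       threshold: float = 0.85) -> bool:
    """True if a candidate post is too close to any recent post on the topic."""
    return any(cosine_similarity(new_emb, e) >= threshold for e in recent_embs)
```

On a duplicate hit, re-run generation with the rejected angle appended to the deduplication context rather than silently dropping the brief.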
How do I handle Instagram’s image requirement in automated posting?
Instagram’s Graph API requires a publicly accessible image URL for every post — there’s no text-only option for feed posts (Reels aside). In practice, maintain a library of pre-approved brand images tagged by topic and have Claude select the most relevant one. Alternatively, generate image prompts in the content output (as shown in this tutorial) and pipe them through DALL-E or Stable Diffusion, then upload to S3 before queuing the Instagram post.
Can I run this social media automation workflow in n8n instead of Python?
Yes — the orchestration layer maps cleanly to n8n. Use the Schedule Trigger for dispatch timing, the Anthropic node for Claude API calls, HTTP Request nodes for platform APIs, and a SQLite or Airtable node for the queue. The Python implementation gives you more control over error handling and complex logic, but n8n is a reasonable choice if your team needs a non-code interface for managing the content calendar.
Put this into practice
Try the Content Marketer agent — ready to use, no setup required.
Editorial note: API pricing, model capabilities, and tool features change frequently — always verify current details on the vendor’s website before building in production. Code examples are tested at time of writing; pin your dependency versions to avoid breaking changes. Some links in this article may be affiliate links — we may earn a commission if you sign up, at no extra cost to you.

