Sunday, April 5

Supabase Schema Architect: Stop Writing Database Migrations From Scratch

Every Supabase project hits the same wall. You’re moving fast, the data model is evolving, and suddenly you’re staring at a migration file that touches eight tables, needs RLS policies that don’t accidentally lock out your service role, and has to roll back cleanly if something goes wrong in production. You know the correct approach — transactions, backward compatibility, staged rollouts — but writing all of that by hand while keeping feature work moving is genuinely painful.

The Supabase Schema Architect agent exists specifically to eliminate that friction. It’s a Claude Code sub-agent that operates as a PostgreSQL-native database architect: it analyzes your existing schema through Supabase’s MCP connection, designs normalized table structures, generates migration SQL, builds RLS policy sets with positive and negative test coverage, and produces matching TypeScript types — all in a single workflow. Instead of context-switching between your editor, the Supabase dashboard, and PostgreSQL docs, you describe what you need and get implementation-ready artifacts back.

For senior developers, the real value isn’t just speed. It’s having an agent that enforces the standards you already know are correct — 3NF normalization, sub-50ms index coverage for common queries, every migration wrapped in a transaction with a tested rollback path — so you’re not making those tradeoffs under deadline pressure.

When to Use the Supabase Schema Architect

This agent is designed to be invoked proactively, not just when something breaks. The following scenarios represent exactly the kind of work it handles well:

  • Greenfield schema design: You’re starting a new Supabase project and need to model relationships between users, organizations, content, and billing. Rather than whiteboarding in isolation, describe your data requirements and access patterns and let the agent produce a normalized schema with indexes and constraints baked in from the start.
  • Adding multi-tenancy to an existing schema: You have a single-tenant data model and need to introduce organization isolation. This typically touches every sensitive table and requires RLS policies that are both correct and performant. The agent handles the policy logic and the migration sequencing.
  • Pre-production migration audits: Before promoting a migration to production, use the agent to review execution risk, identify missing rollback logic, and estimate execution time against realistic data volumes.
  • RLS policy architecture: You need role-based access control — admins see everything, users see their own data, service roles bypass restrictions for background jobs. Writing and testing this policy set manually is tedious and error-prone. The agent generates the full policy suite with test cases.
  • Performance investigations: Slow queries that trace back to missing indexes or poorly structured JSON/JSONB columns. The agent can audit the schema and propose targeted changes with projected query time improvements.
  • TypeScript type synchronization: After schema changes, keeping your TypeScript types aligned with the actual database structure. The agent generates type definitions as part of the migration workflow so nothing drifts.

Real-World Examples

Scenario 1: Designing a Multi-Tenant SaaS Schema

A developer is building a project management SaaS and needs to model workspaces, projects, tasks, and user memberships with proper isolation between tenants.

Developer: I'm building a project management app on Supabase. I need tables for 
workspaces, projects, tasks, and user memberships. Each workspace is a tenant — 
users should only see data from workspaces they belong to. I'll have three roles: 
owner, admin, and member. What's the schema and RLS strategy?

Agent: 🏗️ SUPABASE SCHEMA ARCHITECTURE

## Schema Analysis
- Current tables: 0 (greenfield)
- Relationship complexity: MEDIUM
- RLS coverage: Target 100% of tenant-scoped tables
- Access patterns: User → Workspace (many-to-many via memberships), 
  Workspace → Projects → Tasks (hierarchical)

## Proposed Changes

### New Tables

workspaces
  - id uuid PRIMARY KEY DEFAULT gen_random_uuid()
  - name text NOT NULL
  - slug text UNIQUE NOT NULL
  - created_at timestamptz DEFAULT now()

workspace_memberships
  - id uuid PRIMARY KEY DEFAULT gen_random_uuid()
  - workspace_id uuid REFERENCES workspaces(id) ON DELETE CASCADE
  - user_id uuid REFERENCES auth.users(id) ON DELETE CASCADE
  - role text CHECK (role IN ('owner', 'admin', 'member'))
  - UNIQUE(workspace_id, user_id)
  - INDEX: (user_id, workspace_id) — critical for RLS policy performance

projects
  - id uuid PRIMARY KEY DEFAULT gen_random_uuid()
  - workspace_id uuid REFERENCES workspaces(id) ON DELETE CASCADE
  - name text NOT NULL
  - INDEX: (workspace_id)

tasks
  - id uuid PRIMARY KEY DEFAULT gen_random_uuid()
  - project_id uuid REFERENCES projects(id) ON DELETE CASCADE
  - assigned_to uuid REFERENCES auth.users(id)
  - title text NOT NULL
  - status text DEFAULT 'todo'
  - INDEX: (project_id), (assigned_to)

### RLS Policies

workspaces — SELECT policy:
  EXISTS (
    SELECT 1 FROM workspace_memberships
    WHERE workspace_id = workspaces.id
    AND user_id = auth.uid()
  )

tasks — SELECT policy (joins through project → workspace):
  EXISTS (
    SELECT 1 FROM projects p
    JOIN workspace_memberships wm ON wm.workspace_id = p.workspace_id
    WHERE p.id = tasks.project_id
    AND wm.user_id = auth.uid()
  )

Note: The workspace_memberships index on (user_id, workspace_id) is required 
for these policies to avoid sequential scans at scale. Without it, RLS 
overhead will exceed 10ms per query past ~10k membership rows.

### Migration Strategy
1. Phase 1: Core tables (workspaces, projects, tasks) — Risk: LOW
2. Phase 2: workspace_memberships with constraints — Risk: LOW
3. Phase 3: Enable RLS and apply policies — Risk: MEDIUM (test with 
   service role bypass before deploying to production)
4. Rollback plan: Policies dropped first, then tables in reverse 
   dependency order, all within a single transaction

## Implementation Files
- Migration SQL: supabase/migrations/20240101_initial_schema.sql
- RLS policies: supabase/migrations/20240101_rls_policies.sql
- TypeScript types: types/database.ts (generated)
- Test cases: Positive: member reads own workspace data ✓
              Negative: member cannot read other workspace data ✓
              Edge: service role bypasses RLS for background jobs ✓
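For reference, Phases 1 and 3 of a migration like this might look roughly as follows. This is a hand-written sketch based on the tables above, not the agent's literal output; `auth.uid()` is Supabase's built-in helper for the current user's ID.

```sql
-- Phase 1: core tables (wrapped in a transaction)
begin;

create table workspaces (
  id uuid primary key default gen_random_uuid(),
  name text not null,
  slug text unique not null,
  created_at timestamptz default now()
);

create table workspace_memberships (
  id uuid primary key default gen_random_uuid(),
  workspace_id uuid not null references workspaces(id) on delete cascade,
  user_id uuid not null references auth.users(id) on delete cascade,
  role text not null check (role in ('owner', 'admin', 'member')),
  unique (workspace_id, user_id)
);

-- Composite index that the RLS policy below depends on
create index idx_memberships_user_ws
  on workspace_memberships (user_id, workspace_id);

-- Phase 3: enable RLS and apply the SELECT policy
alter table workspaces enable row level security;

create policy "members can read their workspaces"
  on workspaces for select
  using (
    exists (
      select 1 from workspace_memberships
      where workspace_id = workspaces.id
        and user_id = auth.uid()
    )
  );

commit;
```

Note that enabling RLS with no policies denies all access for non-superuser roles, which is why the policy is applied in the same transaction.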

Scenario 2: Auditing an Existing Migration Before Production Deploy

A developer has written a migration that adds a new billing table and modifies an existing users table, and wants a risk review before running it in production.

Developer: I have a migration that adds a subscriptions table, adds a 
stripe_customer_id column to users, and creates an index on subscriptions(user_id). 
We have 400k users. What's the risk profile and what am I missing?

Agent: 🏗️ SUPABASE SCHEMA ARCHITECTURE

## Migration Risk Analysis

### Adding stripe_customer_id to users (400k rows)
- Risk: MEDIUM
- ALTER TABLE ADD COLUMN on PostgreSQL 11+ is near-instant for nullable 
  columns and for columns with a constant DEFAULT — no table rewrite. 
  Beware the two failure modes: NOT NULL with no default will simply error 
  on a populated table, and a volatile DEFAULT forces a full table rewrite 
  under an exclusive lock.
- Recommendation: Add as nullable first, backfill via batched UPDATE 
  (1000 rows/transaction), then add the NOT NULL constraint separately.

### Creating index on subscriptions(user_id)
- Risk: LOW if using CONCURRENTLY, HIGH if not
- Standard CREATE INDEX takes an ACCESS EXCLUSIVE lock, blocking all reads 
  and writes on subscriptions for the duration.
- Recommendation: CREATE INDEX CONCURRENTLY — it cannot run inside a 
  transaction block, so move the index creation into its own 
  non-transactional migration file.

### Missing items
1. No transaction wrapper on the table creation — add BEGIN/COMMIT
2. No RLS policy on subscriptions — this table contains billing data 
   and must have RLS enabled with user-scoped SELECT policy
3. No rollback migration — add DROP TABLE subscriptions and 
   ALTER TABLE users DROP COLUMN stripe_customer_id
4. Estimated execution time on 400k users: < 30 seconds for ADD COLUMN, 
   backfill time depends on Stripe sync schedule

## Revised Migration Strategy
1. Migration A (transactional): CREATE TABLE subscriptions, ADD COLUMN, RLS
2. Migration B (non-transactional): CREATE INDEX CONCURRENTLY
3. Migration C: Backfill script with progress logging
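The split between transactional and non-transactional files is the key detail here. A rough sketch of how Migrations A and B might be laid out (file names, and the `stripe_subscription_id` and `status` columns, are illustrative assumptions; `CREATE INDEX CONCURRENTLY` must run outside any transaction block):

```sql
-- supabase/migrations/20240102_subscriptions.sql (Migration A, transactional)
begin;

create table subscriptions (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null references auth.users(id) on delete cascade,
  stripe_subscription_id text unique not null,
  status text not null default 'incomplete',
  created_at timestamptz default now()
);

-- Nullable, so this is a metadata-only change on PostgreSQL 11+
alter table users add column stripe_customer_id text;

alter table subscriptions enable row level security;

create policy "users read own subscriptions"
  on subscriptions for select
  using (user_id = auth.uid());

commit;

-- supabase/migrations/20240102_subscriptions_index.sql (Migration B)
-- Must live in its own file: CONCURRENTLY cannot run inside BEGIN/COMMIT.
create index concurrently idx_subscriptions_user_id
  on subscriptions (user_id);
```

If the concurrent build fails partway, PostgreSQL leaves an invalid index behind; drop it and retry rather than assuming the migration completed.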

What Makes This Agent Powerful

Enforces Production-Grade Standards Automatically

The agent applies specific, measurable thresholds rather than generic advice: 3NF minimum normalization, query response targets under 50ms for common operations, RLS policy execution overhead under 10ms, migration execution under five minutes. These aren’t aspirational — they’re defaults that get applied to every design decision.

Migration Safety as a First Principle

Every migration the agent produces is wrapped in a transaction with a tested rollback procedure. It distinguishes between operations that are safe under load (nullable column additions, concurrent index builds) and those that require maintenance windows (NOT NULL additions without defaults, non-concurrent indexes on large tables).

RLS as an Integrated Design Concern

RLS isn’t bolted on after the schema is designed — it’s planned from the start. The agent designs indexes specifically to support efficient policy execution, generates both positive and negative test cases for every policy, and accounts for service role bypass patterns needed for background jobs.
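These positive/negative pairs can be exercised directly in SQL. One common pattern in Supabase is to impersonate a user inside a transaction by switching to the `authenticated` role and setting the JWT claims that `auth.uid()` reads. This is a sketch; it assumes Supabase's standard `request.jwt.claims` setting and uses placeholder UUIDs:

```sql
begin;

-- Impersonate an authenticated user for this transaction only
set local role authenticated;
select set_config(
  'request.jwt.claims',
  '{"sub": "00000000-0000-0000-0000-000000000001"}',
  true  -- local: reverts at transaction end
);

-- Positive: should return only workspaces this user belongs to
select id, name from workspaces;

-- Negative: inserting into a workspace the user doesn't belong to
-- should be rejected by the INSERT policy's WITH CHECK clause
insert into projects (workspace_id, name)
values ('00000000-0000-0000-0000-0000000000ff', 'should be rejected');

rollback;  -- leave no trace either way
```

Running the suite inside `begin … rollback` keeps the test data out of the database regardless of outcome.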

Full Artifact Generation

A single request produces migration SQL, RLS policy definitions, TypeScript type definitions, and a test case suite. This eliminates the type drift that happens when database changes and application code are updated separately.

How to Install the Supabase Schema Architect

Claude Code supports project-level sub-agents loaded from the .claude/agents/ directory. To install this agent:

Create the file at the following path in your project root:

.claude/agents/supabase-schema-architect.md

Paste the full agent system prompt into that file and save it. Claude Code automatically discovers and loads all agents in the .claude/agents/ directory when it starts. No additional configuration is required.
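The agent file follows Claude Code's sub-agent format: YAML frontmatter with a name and a description (which Claude Code uses when deciding whether to delegate to the agent automatically), followed by the system prompt in the body. A skeleton might look like this — the description text here is illustrative, not the template's actual wording:

```markdown
---
name: supabase-schema-architect
description: PostgreSQL-native database architect for Supabase projects.
  Use for schema design, migration planning, RLS policy architecture,
  and TypeScript type generation.
---

You are a PostgreSQL database architect specializing in Supabase...
(paste the full system prompt here)
```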

Once installed, you can invoke it directly in Claude Code by referencing it in your prompt:

Use the supabase-schema-architect to design the schema for our notifications system.

Or Claude Code will select it automatically when your request matches its domain — database design, migration planning, or RLS policy work.

If you’re using Supabase MCP, the agent will connect to your project to analyze the existing schema. Ensure your MCP configuration is set up in your Claude Code environment before invoking the agent on an existing database.

Conclusion and Next Steps

The Supabase Schema Architect agent pays for itself the first time it catches a missing CONCURRENTLY on an index creation or flags a NOT NULL column addition that would have locked a production table. Beyond preventing incidents, it shifts schema work from a bottleneck into a structured, repeatable process.

Start with an audit of your current schema: invoke the agent, describe your existing tables and access patterns, and ask for a risk and coverage assessment. You’ll likely surface missing indexes, tables without RLS coverage, and migrations that lack rollback procedures. Address those first, then use the agent proactively for every schema change going forward.

Pair this agent with your CI pipeline — run migration validation as part of your pull request process so schema quality is enforced before code reaches staging. The agent’s structured output format makes it straightforward to parse and integrate into automated checks.

Agent template sourced from the claude-code-templates open source project (MIT License).
