Self-Hosted Deployment

Deploy MergeShield on your own infrastructure for full data control, compliance requirements, or air-gapped environments.

Architecture Overview

MergeShield consists of four core services that can be deployed independently or together:

- API Server — Hono on the Bun runtime; handles webhooks, the REST API, and SSE events
- Worker Processes — BullMQ workers for PR analysis, auto-merge, notifications, and scheduled jobs
- PostgreSQL — primary data store for organizations, PRs, analyses, and policies
- Redis — job queue (BullMQ), pub/sub (SSE events), caching, and distributed locks

Docker Deployment

The recommended approach for most teams. A single Docker image runs both the API server and workers.
Requirements:

- Docker 20.10+ or Podman 4+
- PostgreSQL 14+ (managed or self-hosted)
- Redis 6+ (managed or self-hosted)
- Anthropic API key (for Claude AI analysis)
- GitHub App credentials (for webhook integration)
Quick Start:

```bash
docker pull ghcr.io/mergeshield/api:latest

docker run -p 4000:4000 \
  -e DATABASE_URL=postgres://... \
  -e REDIS_URL=redis://... \
  -e ANTHROPIC_API_KEY=sk-ant-... \
  -e GITHUB_APP_ID=... \
  -e GITHUB_PRIVATE_KEY="..." \
  -e GITHUB_WEBHOOK_SECRET=... \
  ghcr.io/mergeshield/api:latest
```
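For local evaluation, the same image can be wired together with Docker Compose. The sketch below is illustrative only — service names and credentials are placeholders, and volumes, TLS, and backups are omitted:

```yaml
# docker-compose.yml — evaluation-only sketch, not a production configuration
services:
  api:
    image: ghcr.io/mergeshield/api:latest
    ports:
      - "4000:4000"
    environment:
      DATABASE_URL: postgres://mergeshield:mergeshield@db:5432/mergeshield
      REDIS_URL: redis://redis:6379
      ANTHROPIC_API_KEY: ${ANTHROPIC_API_KEY}
      GITHUB_APP_ID: ${GITHUB_APP_ID}
      GITHUB_PRIVATE_KEY: ${GITHUB_PRIVATE_KEY}
      GITHUB_WEBHOOK_SECRET: ${GITHUB_WEBHOOK_SECRET}
    depends_on: [db, redis]
  db:
    image: postgres:14
    environment:
      POSTGRES_USER: mergeshield
      POSTGRES_PASSWORD: mergeshield
      POSTGRES_DB: mergeshield
  redis:
    image: redis:6
```

Secrets are read from the host environment via `${...}` substitution rather than being committed to the file.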

Kubernetes / Helm

For production deployments at scale, we recommend Kubernetes with our Helm chart.
Components:

- API Deployment (2+ replicas, horizontal pod autoscaler)
- Worker Deployment (separate from the API for independent scaling)
- PostgreSQL (use your existing managed DB or a StatefulSet)
- Redis (use your existing managed Redis or a StatefulSet)
- Ingress (nginx/traefik for HTTPS termination)

Scaling Guidelines:

- API: scale horizontally — stateless, safe to run multiple replicas
- Workers: scale horizontally — BullMQ handles job distribution automatically
- PostgreSQL: single primary, with read replicas for analytics queries
- Redis: a single instance is sufficient for most workloads (<10k analyses/day)
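A Helm values override for the layout above might look like the following. The keys here are illustrative, not the chart's actual schema — consult the chart's own `values.yaml` for the real key names:

```yaml
# values.override.yaml — illustrative keys only
api:
  replicaCount: 2
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 6
    targetCPUUtilizationPercentage: 70
worker:
  replicaCount: 2
ingress:
  enabled: true
  className: nginx
  hosts:
    - mergeshield.example.com
postgresql:
  enabled: false   # point at an existing managed database instead
redis:
  enabled: false   # point at existing managed Redis instead
```

Disabling the bundled PostgreSQL/Redis and pointing at managed services matches the "use your existing managed DB" guidance above.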

Environment Variables

Required:

- DATABASE_URL — PostgreSQL connection string
- REDIS_URL — Redis connection string
- ANTHROPIC_API_KEY — Claude API key for AI analysis
- GITHUB_APP_ID — GitHub App ID
- GITHUB_PRIVATE_KEY — GitHub App private key (PEM format)
- GITHUB_WEBHOOK_SECRET — webhook signature verification
- GITHUB_CLIENT_ID / GITHUB_CLIENT_SECRET — GitHub OAuth
- BETTER_AUTH_SECRET — session encryption key
- BETTER_AUTH_URL — API base URL
- FRONTEND_URL — dashboard URL (for CORS and email links)

Optional:

- RESEND_API_KEY — email delivery (degrades gracefully when unset)
- STRIPE_SECRET_KEY — billing integration (skip for self-hosted)
- PORT — API port (default: 4000)
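Pulled together, a self-hosted environment file might look like this. Values are placeholders, and the hostnames are examples, not defaults:

```bash
# .env — example values only; store real secrets in a vault, never in version control
DATABASE_URL=postgres://mergeshield:changeme@db.internal:5432/mergeshield?sslmode=require
REDIS_URL=redis://:changeme@redis.internal:6379
ANTHROPIC_API_KEY=sk-ant-...
GITHUB_APP_ID=...
GITHUB_PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----"
GITHUB_WEBHOOK_SECRET=...
GITHUB_CLIENT_ID=...
GITHUB_CLIENT_SECRET=...
BETTER_AUTH_SECRET=...
BETTER_AUTH_URL=https://mergeshield.example.com
FRONTEND_URL=https://dashboard.mergeshield.example.com
```

Note the quoting around GITHUB_PRIVATE_KEY: the PEM value is multi-line and must be passed intact.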

Network Architecture

Inbound:

- GitHub webhooks → API server (port 4000, HTTPS required)
- Dashboard → API server (CORS-protected, session cookies)
- GitHub OAuth callback → API server

Outbound:

- API → GitHub API (api.github.com) for PR data, comments, and merges
- API → Anthropic API (api.anthropic.com) for AI analysis
- API → Resend API (optional, for email delivery)
- API → Stripe API (optional, for billing)

Internal:

- API ↔ PostgreSQL (port 5432)
- API ↔ Redis (port 6379)
- Workers share the same PostgreSQL and Redis connections
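On Kubernetes, the internal topology above can be enforced with a NetworkPolicy. A sketch that restricts PostgreSQL ingress to the API and worker pods — the pod labels here are illustrative, not ones the Helm chart guarantees:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-allow-mergeshield
spec:
  podSelector:
    matchLabels:
      app: postgres          # label on your PostgreSQL pods (assumption)
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: mergeshield-api     # illustrative label
        - podSelector:
            matchLabels:
              app: mergeshield-worker  # illustrative label
      ports:
        - port: 5432
          protocol: TCP
```

An analogous policy for Redis on port 6379 covers the other internal dependency.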

Security Considerations

- GitHub App private key: store it as a Kubernetes secret or vault entry, never in environment files
- Database: use SSL/TLS connections and restrict network access to the API/worker pods only
- Redis: enable AUTH, restrict network access, and consider Redis TLS
- API keys: all API keys are SHA-256 hashed before storage — plaintext is never persisted
- Webhook signatures: GitHub webhook payloads are verified with HMAC-SHA256
- Session cookies: SameSite=None, Secure, HttpOnly — requires HTTPS
- Rate limiting: Redis-based with an in-memory fallback — protects against abuse
- Body size limit: 2 MB maximum payload to prevent memory exhaustion
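The webhook verification mentioned above follows GitHub's standard scheme: an HMAC-SHA256 over the raw request body, compared in constant time against the `X-Hub-Signature-256` header. A standalone sketch — the function name is ours for illustration, not MergeShield's internal API:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a GitHub webhook payload against the X-Hub-Signature-256 header.
// `rawBody` must be the unparsed request body — re-serializing parsed JSON
// can change byte order/whitespace and break the signature.
export function verifyWebhookSignature(
  secret: string,
  rawBody: string,
  signatureHeader: string | undefined,
): boolean {
  if (!signatureHeader?.startsWith("sha256=")) return false;
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const received = signatureHeader.slice("sha256=".length);
  // timingSafeEqual throws on length mismatch, so reject unequal lengths first.
  if (received.length !== expected.length) return false;
  // Compare the hex strings as equal-length buffers in constant time.
  return timingSafeEqual(Buffer.from(received), Buffer.from(expected));
}
```

Always reject requests with a missing or malformed header rather than skipping verification.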

Monitoring & Health

Health Check: `GET /api/health` returns 200 with DB and Redis connectivity status.

Observability:

- Structured JSON logging (timestamp, level, message, context)
- Request correlation IDs (X-Request-Id header)
- BullMQ dashboard compatible (Bull Board, Arena)
- Prometheus-compatible metrics (planned)

Key Metrics to Monitor:

- Analysis queue depth and processing time
- API response times (p50, p95, p99)
- Worker error rates
- PostgreSQL connection pool utilization
- Redis memory usage
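The health endpoint maps naturally onto Kubernetes probes for the API container. Port and path are from this page; the timing thresholds are illustrative:

```yaml
# Probe snippet for the API container spec — thresholds are examples
livenessProbe:
  httpGet:
    path: /api/health
    port: 4000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /api/health
    port: 4000
  periodSeconds: 10
  failureThreshold: 3
```

One caveat: since `/api/health` checks DB and Redis connectivity, using it for liveness can restart otherwise-healthy pods during a database outage; some teams prefer a dependency-free endpoint for liveness if one is available.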

Need help with deployment?

Our enterprise team can help with architecture planning, deployment, and ongoing support.

Contact Enterprise Sales