This guide is for self-hosted Infisical deployments experiencing performance degradation with large audit log tables. ClickHouse is optional and only recommended for deployments with 300 million+ audit log rows.
Why ClickHouse?
For high-volume audit log workloads, ClickHouse provides significant advantages over PostgreSQL:
- Superior compression: ClickHouse's columnar storage significantly reduces disk footprint compared to PostgreSQL's row-oriented storage.
- Query performance: Analytical queries with complex filters and aggregations are substantially faster due to ClickHouse’s vectorized execution engine.
Prerequisites
- A running ClickHouse instance. See ClickHouse deployment documentation for installation options.
- A database created in ClickHouse for audit logs.
- Network connectivity from your Infisical backend to the ClickHouse instance.
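For the second prerequisite, the database can be created with clickhouse-client. The host and the database name `audit_logs` below are examples, not requirements — use whatever name you like, as long as it matches the database referenced in your connection URL:

```shell
# Create a dedicated database for Infisical audit logs.
# "audit_logs" and the host are placeholders — substitute your own.
clickhouse-client --host clickhouse.example.internal \
  --query "CREATE DATABASE IF NOT EXISTS audit_logs"
```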
Configuration
Set the CLICKHOUSE_URL environment variable in your Infisical backend configuration. The connection URL should reference the database you created for audit logs (e.g. audit_logs):
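A minimal sketch of the configuration, assuming ClickHouse's HTTPS interface on port 8443 — the host, credentials, and port here are placeholders, and the exact URL format accepted by your deployment may differ, so check it against your Infisical configuration reference:

```shell
# Example only — host, credentials, port, and database are placeholders.
export CLICKHOUSE_URL="https://infisical_user:changeme@clickhouse.example.internal:8443?database=audit_logs"
```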
How It Works
Once CLICKHOUSE_URL is set, audit log writes switch from PostgreSQL to ClickHouse. Events are queued in a Redis stream and batch-inserted into ClickHouse every 5 seconds via a background worker. Reads also route to ClickHouse automatically. On first startup, Infisical creates the ClickHouse table if it doesn't already exist.
Migration Strategy
Recommended approach for existing deployments:
Phase 1 — Migrate historical data (recommended):
- Before switching, bulk-load existing audit logs from PostgreSQL into ClickHouse to avoid a visibility gap
Phase 2 — Enable ClickHouse:
- Set CLICKHOUSE_URL
- New audit logs write to ClickHouse and all queries read exclusively from ClickHouse
- PostgreSQL audit log writes stop automatically
Phase 3 — Cleanup (optional):
- Once confident ClickHouse is working correctly, prune old PostgreSQL audit log data
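The Phase 1 bulk-load can be sketched as a pipe from psql into clickhouse-client, and the Phase 3 prune as a single DELETE. This sketch assumes the PostgreSQL audit log table is named audit_logs, that the matching ClickHouse table already exists (Infisical creates it on first startup once CLICKHOUSE_URL is set), and that the column order lines up between the two — verify all of that against your actual schemas before running anything:

```shell
# Phase 1: stream historical rows out of PostgreSQL as CSV and insert
# them into ClickHouse in one pass. Table names, column order, and
# $POSTGRES_URL are assumptions — check them against your deployment.
psql "$POSTGRES_URL" -c "\copy (SELECT * FROM audit_logs) TO STDOUT WITH CSV" \
  | clickhouse-client --query "INSERT INTO audit_logs.audit_logs FORMAT CSV"

# Phase 3 (optional): prune migrated rows from PostgreSQL once ClickHouse
# is confirmed working. The 90-day retention window is only an example.
psql "$POSTGRES_URL" -c "DELETE FROM audit_logs WHERE created_at < now() - interval '90 days'"
```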
Verification
After configuration, verify ClickHouse integration:
- Perform an action in Infisical (e.g., read a secret)
- Query ClickHouse directly to confirm rows are being inserted
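The direct query can be run with clickhouse-client. The fully-qualified table name audit_logs.audit_logs below is an assumption — list the tables first to see what Infisical actually created in your database:

```shell
# Find the table Infisical created, then confirm rows are arriving.
clickhouse-client --query "SHOW TABLES FROM audit_logs"
clickhouse-client --query "SELECT count() FROM audit_logs.audit_logs"
```

If the count increases shortly after you perform an action in Infisical (allowing for the ~5-second batch interval), the integration is working.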