Gate is a multi-protocol, policy-driven privileged access gateway. It sits between users and upstream databases, APIs, and servers, evaluates OPA/Rego policies, masks sensitive data, and centralizes audit, webhook, and recording workflows in a control plane.
                       ┌─────────────────────────────────┐
                       │          gate-control           │
                       │   Admin UI + HTTP + gRPC + DB   │
                       │                                 │
                       │   Resources   Policies   Audit  │
                       │   Identities  Groups  Webhooks  │
                       │   Organizations   Recordings    │
                       └──────────────┬──────────────────┘
                                      │ gRPC / Protobuf
                      sync / audit /  │ registration / heartbeat
                                      │
                       ┌──────────────▼──────────────────┐
┌───────────┐          │          gate-connector         │          ┌──────────┐
│  Clients  │─────────▶│  Policy evaluation + proxying   │─────────▶│ Upstream │
│ psql/curl │◀─────────│  PostgreSQL / MongoDB / MySQL   │◀─────────│ services │
└───────────┘          │   HTTP / SSH / MCP / masking    │          └──────────┘
                       └─────────────────────────────────┘
- PostgreSQL, MongoDB, MySQL, HTTP, SSH, and MCP proxying.
- Rego v1 policy evaluation at `session`, `pre_request`, and `post_request` stages.
- Central management for resources, policies, identities, groups, organizations, scoped API keys, webhooks, access requests, audit logs, and recordings.
- gRPC/Protobuf control-plane contracts with generated Go stubs.
- Embedded admin UI plus HTTP `/api/v1/*` compatibility endpoints over the same backend RPC logic.
- Dedicated connector health, readiness, and Prometheus metrics endpoints.
- Durable dead-letter storage for async delivery paths and optional session recording storage.
gate-control exposes three surfaces on the same listener:
- Embedded admin UI at `/`
- HTTP health and admin endpoints such as `/health` and `/api/v1/*`
- gRPC services `gate.v1.ControlPlaneService` and `gate.v1.ConnectorService`
The HTTP admin handlers are thin compatibility adapters over the gRPC services, so the web UI, HTTP clients, and Go RPC clients all share the same business logic and auth rules.
Connector-to-control-plane traffic now uses gRPC/Protobuf for:
- Policy sync
- Audit ingestion
- Connector registration
- Connector heartbeat
- Connector deregistration
Relevant sources:
- Protobuf contracts: `proto/gate/v1/*.proto`
- Generated Go stubs: `internal/gen/gate/v1`
- Shared Go admin client: `internal/controlplaneclient`
For the fastest local stack:
docker compose up --build

This starts:
- `gate-control` on `http://localhost:8080`
- `gate-connector` PostgreSQL listener on `localhost:6432`
- connector health and metrics on `:8081` inside the container (not published by default)
- control-plane Postgres on `localhost:5433`
- sample upstream Postgres on `localhost:5434`
The compose stack waits for a healthy control plane before starting the connector, and both Gate services publish container health checks so `docker compose up --wait` is race-free.
The checked-in `docker-compose.yml` is a local dev setup and runs the control plane with `GATE_INSECURE=true`. The sample connector config already points at the compose services and loads `policies/example.rego` for all three policy stages.
Example connection through the connector:
PGPASSWORD=app psql -h localhost -p 6432 -U app -d appdb

Both checked-in config files support environment-variable expansion:
- `configs/control.yaml`
- `configs/connector.yaml`
Run the binaries directly:
go run ./cmd/gate-control -config configs/control.yaml
go run ./cmd/gate-connector -config configs/connector.yaml

- HTTP and gRPC both use Bearer auth.
- `api_key` in the control-plane config is the system/root key.
- Scoped organization API keys support roles such as `readonly`, `editor`, `admin`, and `connector`.
- System callers can apply org scope with `X-Org-ID` on HTTP requests or `x-org-id` gRPC metadata.
- If `api_key` is empty, you must start `gate-control` with `--insecure` or `GATE_INSECURE=true`; every unauthenticated request is logged at `warn` level. This is only for local development.
The sample configs in configs/ are the best starting point. The fields below are the ones that matter most when wiring a real deployment.
`configs/control.yaml`:

listen_addr: ":8080"
api_key: "${GATE_API_KEY}"
database:
  dsn: "postgres://${GATE_DB_USER}:${GATE_DB_PASSWORD}@${GATE_DB_HOST}:${GATE_DB_PORT}/${GATE_DB_NAME}?sslmode=disable"
  max_open_conns: 25
  min_conns: 5
recording:
  dir: ".data/recordings" # optional
delivery:
  dead_letter_dir: ".data/control-deadletters" # optional
slack: # optional
  bot_token: "${GATE_SLACK_BOT_TOKEN}"
  signing_secret: "${GATE_SLACK_SIGNING_SECRET}"
  default_approval_channel: "C0123456789"
  org_approval_channels:
    acme-prod: "C0987654321"
  resource_approval_channels:
    resource-123: "C0246813579"
  requester_mappings:
    alice@example.com: "U0123456789"
logging:
  level: "info"
  format: "json"

`configs/connector.yaml`:

listen_addr: ":6432"
health_addr: ":8081"
control_plane:
  url: "http://localhost:8080"
  token: "${GATE_CONTROL_TOKEN}"
  sync_interval: "30s"
  sync_jitter: "5s" # optional: spread connector polls to avoid thundering herds
resources:
  - name: "primary-db"
    protocol: "postgres"
    host: "localhost"
    port: 5432
    database: "appdb"
    require_access_grant: true # optional: require an approved active grant before session setup
  - name: "agent-tools"
    protocol: "mcp"
    host: "mcp.internal"
    port: 8080
    listen_port: 7443 # required for MCP resources
    endpoint_path: "/mcp" # optional: defaults to /mcp
    upstream_tls: true # optional: dial the MCP server over HTTPS
    require_access_grant: true
policies:
  - path: "policies/example.rego"
    stage: "session"
  - path: "policies/example.rego"
    stage: "pre_request"
  - path: "policies/example.rego"
    stage: "post_request"
tls:
  enabled: false # optional
recording:
  dir: ".data/recordings" # optional
delivery:
  dead_letter_dir: ".data/connector-deadletters" # optional
logging:
  level: "info"
  format: "json"

MCP resources use dedicated listeners because they ride over Streamable HTTP rather than the connector's shared protocol-detection path. When session recordings are enabled, MCP interactions are stored alongside SSH recordings and can be replayed through the control plane recording APIs.
Control-plane migrations are embedded in the binary by default. Set database.migrations_path only when you intentionally want to override the built-in migration set during development.
Approved access requests can also carry scoped MCP grants. A grant with scopes such as {operation: "tool", target: "customer_lookup"} lets a connector enforce target-aware access directly, and tools/list, resources/list, and prompts/list responses are filtered down to the approved targets when the grant is narrower than the whole resource.
gate-control can post new JIT access requests to Slack and accept approve or deny button clicks back on /integrations/slack/actions.
Routing precedence is:
- resource-specific channel from `slack.resource_approval_channels`
- org-specific channel from `slack.org_approval_channels`
- fallback `slack.default_approval_channel`
Requester notifications use, in order:
- explicit `slack.requester_mappings`
- a direct Slack user ID in `requested_by`
- Slack `users.lookupByEmail` when `requested_by` is an email address
See docs/slack-jit-approvals.md for the required Slack app scopes, interactivity setup, and an end-to-end configuration example.
Gate also emits outbound webhook events for access-request lifecycle changes:
- `access.requested`
- `access.approved`
- `access.denied`
Each event carries the request id, org_id, requester/subject fields, resource_id, reason, duration, status, scopes, and timestamps. Approval events also include approved_by and expires_at; denial events include denied_by. That payload is designed to support downstream approval bridges such as Slack or Jira without forcing an extra control-plane read before rendering a notification or status update.
Gate evaluates Rego policies in three stages:
- `session`: allow or block a connection/session before it is established
- `pre_request`: allow, block, or rewrite a request/query before forwarding upstream
- `post_request`: allow or transform the upstream response, including masking
post_request filters are currently enforced for HTTP JSON array responses. PostgreSQL, MongoDB, and MySQL post-request enforcement currently supports masking, while MCP rejects response filters and rewrites instead of silently applying them.
Managed policies in the control plane move through draft, dry_run, and active states.
Resource targeting supports exact resource names as well as shell-style glob patterns such as prod-*, *-staging, and db-??.
Minimal policy example:

package formal.v2

import rego.v1

default pre_request := {"action": "allow"}

pre_request := {"action": "block", "reason": "destructive queries are not allowed"} if {
    input.query.type == "drop"
}

post_request := {
    "action": "mask",
    "masks": [
        {"column": "email", "strategy": "partial"},
        {"column": "ssn", "strategy": "redact"},
    ],
}

Masking strategies include `redact`, `partial`, `hash`, and `email`.
HTTP post_request policies may also return `filters` rules with `eq`, `neq`, `in`, and `not_in` operators to drop non-matching objects from JSON array responses before masking is applied.
For MCP traffic, Gate also enriches policy input and audit details with mcp.method, mcp.namespace, mcp.action, mcp.operation, the generic mcp.target field, and specific target fields such as mcp.tool_name, mcp.resource_uri, and mcp.prompt_name.
For MongoDB traffic, Gate parses OP_MSG and legacy OP_QUERY commands into input.query.operation, input.query.database, input.query.collection, and input.query.filter, and applies post-request masking to matching reply fields before they reach the client.
Gate exposes authenticated identity fields under input.user.*. For verified OIDC/JWT identities, Gate currently reads:
{
  "sub": "user-123",
  "name": "alice",
  "email": "alice@example.com",
  "groups": ["admins", "engineering"]
}

The groups array is mapped directly to input.user.groups, so policies can use expressions such as `"admins" in input.user.groups`. HTTP and MCP listeners also normalize `X-Forwarded-Groups` into the same `input.user.groups` field when they sit behind an authenticating proxy.
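A session-stage rule keyed on group membership might look like this (a sketch in the same style as the minimal policy example above; the group name is illustrative):

```rego
package formal.v2

import rego.v1

default session := {"action": "block", "reason": "admins only"}

# Allow a session when the verified identity carries the admins group.
session := {"action": "allow"} if {
	"admins" in input.user.groups
}
```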
See policies/example.rego for the sample policy used by the local connector config.
See examples/policies/README.md for ready-to-load policies covering IP allowlists, MongoDB collection allowlists, PII masking, business hours, audit-only dry runs, DDL restrictions, and read-only enforcement.
See docs/policy-rollout.md for the safe promotion path from draft to dry_run to active, including rollback guidance and a worked IP allowlist example.
See docs/runbooks.md for operational runbooks covering policy denials, connector registration failures, audit sink recovery, pool contention, and TLS rotation.
See docs/mcp-gateway-quickstart.md for an end-to-end MCP gateway example built with the official Go SDK.
See docs/mcp-proxy-architecture.md for the MCP gateway request, policy, audit, masking, and grant-scoping flow.
See docs/control-plane-mcp.md for the read-only Gate control-plane MCP endpoint.
See docs/webhook-signatures.md for webhook signing semantics and verification examples in Go, Python, and Node.
See docs/slack-jit-approvals.md for Slack approval workflow setup and callback configuration.
See docs/benchmarks.md for the reproducible latency benchmark harness and docs/compliance-mappings.md for the initial compliance capability mapping.
See docs/acra-porting-plan.md for the research-backed roadmap for porting Acra-style data protection patterns into Gate.
Connector health endpoints expose per-resource and per-upstream pool metrics:
- `gate_connpool_active_connections`
- `gate_connpool_waiters`
- `gate_connpool_wait_duration_seconds`
- `gate_connpool_exhausted_total`
Use them together when sizing connector pools:
- If `gate_connpool_waiters` stays above 0, requests are queuing for that upstream.
- If `gate_connpool_exhausted_total` keeps climbing, the pool is saturating under real traffic instead of only during spikes.
- If `gate_connpool_wait_duration_seconds` shows a growing tail, add headroom before users start feeling connection setup latency.
- If `gate_connpool_active_connections` is pinned high alongside waiters, scale out connectors or raise the per-upstream pool limit in your next rollout.
Connector metrics also expose latency histograms for policy evaluation and end-to-end request handling:
- `gate_policy_evaluation_duration_seconds{protocol,resource,stage,action}`
- `gate_request_duration_seconds{protocol,resource}`
- `gate_connpool_wait_duration_seconds{resource,upstream}`
PromQL examples:
- p95 end-to-end latency by protocol/resource: `histogram_quantile(0.95, sum by (le, protocol, resource) (rate(gate_request_duration_seconds_bucket[5m])))`
- p95 policy evaluation latency by protocol/resource/stage/action: `histogram_quantile(0.95, sum by (le, protocol, resource, stage, action) (rate(gate_policy_evaluation_duration_seconds_bucket[5m])))`
- p95 upstream pool wait by resource/upstream: `histogram_quantile(0.95, sum by (le, resource, upstream) (rate(gate_connpool_wait_duration_seconds_bucket[5m])))`
Top-level HTTP admin areas under /api/v1/ include:
`resources`, `policies`, `organizations`, `identities`, `groups`, `connectors`, `webhooks`, `access-requests`, `audit`, and `recordings`.
The control plane also exposes a read-only MCP endpoint on /mcp. It uses the same Bearer API keys and optional X-Org-ID scoping as the HTTP and gRPC admin surfaces, and currently includes tools for listing resources, policies, connectors, access requests, querying audit logs, and simulating policies.
The gRPC contracts are defined in:
- `proto/gate/v1/controlplane.proto`
- `proto/gate/v1/connector.proto`
Domain messages are split across:
- `proto/gate/v1/resource_policy.proto`
- `proto/gate/v1/identity_group.proto`
- `proto/gate/v1/tenant.proto`
- `proto/gate/v1/webhook_access.proto`
- `proto/gate/v1/audit.proto`
- `proto/gate/v1/recording.proto`
Prerequisites:
- Go 1.25+
- Docker for local compose and integration tests
- `golangci-lint` v2 for `make lint`
Common commands:
make build
make test
make test-race
make lint
make generate-proto
make verify-generated
make docker
make test-e2e

Notes:
- `make generate-proto` regenerates all Go stubs from `proto/gate/v1/*.proto`.
- `make verify-generated` is what CI uses to ensure generated files are up to date.
- `make test-e2e` brings up `docker-compose.test.yml`, runs the integration suite with the `integration` build tag, then tears the stack down.
- `docker compose -f docker-compose.test.yml --profile stack up --build --wait` brings up the same control-plane and connector health-gated stack used in local compose, alongside the test databases.
gate/
├── cmd/
│ ├── gate-connector/ # Connector binary entrypoint
│ ├── gate-control/ # Control-plane binary entrypoint
│ └── gate-policy/ # Policy tooling
├── configs/ # Sample runtime configs
├── internal/
│ ├── connector/ # Proxy runtime, routing, TLS, health, metrics
│ ├── controlplane/ # HTTP server, gRPC services, embedded UI bridge
│ ├── controlplaneclient/ # Shared Go admin client for ControlPlaneService
│ ├── enforcement/ # Masking, filtering, rewrite helpers
│ ├── gen/gate/v1/ # Generated protobuf and gRPC stubs
│ ├── policy/ # OPA/Rego policy engine
│ ├── protocol/ # Postgres, MySQL, HTTP, SSH protocol handlers
│ ├── recording/ # Session recording storage/replay primitives
│ ├── store/ # Postgres and in-memory stores + migrations
│ ├── sync/ # Connector policy sync and audit shipping
│ └── webhooks/ # Webhook dispatch and retry/dead-letter handling
├── policies/ # Example local policies
├── proto/gate/v1/ # Protobuf contracts
├── scripts/ # Helper scripts, including protobuf generation
├── Dockerfile # Multi-stage connector/control image build
├── docker-compose.yml # Local dev stack
└── docker-compose.test.yml # Integration test dependencies
Business Source License 1.1. See LICENSE for the current terms.