Self-hosted OSS server

The open-source Agent Analytics server is for teams that want to own the runtime, storage, API keys, retention, and backups.

It exposes the same self-hosted analytics API surface your agent can query with AGENT_ANALYTICS_URL and AGENT_ANALYTICS_API_KEY. It does not include the managed cloud dashboard, billing, account management, hosted OAuth, or project provisioning flows from Agent Analytics Cloud.

Cloudflare Workers + D1

Run the OSS server as a Cloudflare Worker and store events in D1.

Best for: low-ops self-hosting when you already use Cloudflare or want Cloudflare to handle TLS, edge runtime, and Worker uptime.

Docker / Kubernetes + SQLite

Run the Node.js server in a container and persist SQLite on a volume.

Best for: local infrastructure, VPS/container hosts, kind/minikube tests, or teams that want direct runtime ownership.

Clone and install the OSS server:

git clone https://github.com/Agent-Analytics/agent-analytics.git
cd agent-analytics
npm install

Create a D1 database:

npx wrangler d1 create agent-analytics

Update wrangler.toml with the database ID from Wrangler. Keep the D1 binding name as DB.
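The relevant wrangler.toml section might look like the sketch below. Only the binding name (DB) and the database name (agent-analytics) come from this guide; the top-level fields (name, main, compatibility_date) and the database_id placeholder are illustrative, so keep the repo's actual values for those.

```toml
# Illustrative wrangler.toml fragment; the repo's file is authoritative.
name = "agent-analytics"
main = "src/index.ts"
compatibility_date = "2024-09-01"

[[d1_databases]]
binding = "DB"                     # must stay "DB"
database_name = "agent-analytics"
database_id = "PASTE-ID-FROM-WRANGLER-D1-CREATE"
```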

Initialize the schema and deploy:

npx wrangler d1 execute agent-analytics --remote --file=./schema.sql
npx wrangler deploy

Set the read API key and public project token:

echo "YOUR_API_KEY" | npx wrangler secret put API_KEYS
echo "YOUR_PROJECT_TOKEN" | npx wrangler secret put PROJECT_TOKENS

Your endpoint will look like:

https://agent-analytics.YOUR-SUBDOMAIN.workers.dev

No official container image is published yet. Build the image yourself from the open-source repo.

git clone https://github.com/Agent-Analytics/agent-analytics.git
cd agent-analytics
docker build -t agent-analytics:local .

Run the container with SQLite persisted on /data:

docker run --rm \
  -p 8787:8787 \
  -e API_KEYS=YOUR_API_KEY \
  -e PROJECT_TOKENS=YOUR_PROJECT_TOKEN \
  -e DB_PATH=/data/analytics.db \
  -v agent_analytics_data:/data \
  agent-analytics:local

Verify the server:

curl http://localhost:8787/health

Send a first event. Use a browser-like User-Agent header; curl’s default user agent can be treated as automated traffic and filtered out.

curl http://localhost:8787/track \
  -H "Content-Type: application/json" \
  -H "User-Agent: Mozilla/5.0 Smoke Test" \
  -d '{"token":"YOUR_PROJECT_TOKEN","project":"marketing-site","event":"page_view","properties":{"path":"/"}}'

Read it back:

curl "http://localhost:8787/stats?project=marketing-site&since=7d" \
  -H "X-API-Key: YOUR_API_KEY"

You can also run it with Compose:

API_KEYS=YOUR_API_KEY PROJECT_TOKENS=YOUR_PROJECT_TOKEN docker compose up --build
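If you want to see roughly what that Compose file does, a minimal sketch is below, inferred from the docker run example above; the service and volume names are illustrative, and the repo's docker-compose.yml is authoritative.

```yaml
# Hypothetical docker-compose.yml shape; mirrors the docker run flags above.
services:
  agent-analytics:
    build: .
    ports:
      - "8787:8787"
    environment:
      API_KEYS: ${API_KEYS}
      PROJECT_TOKENS: ${PROJECT_TOKENS}
      DB_PATH: /data/analytics.db
    volumes:
      - agent_analytics_data:/data

volumes:
  agent_analytics_data:
```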

The OSS repo includes Kubernetes manifests for a single-replica StatefulSet, a ClusterIP service, a PVC mounted at /data, and example secret/ingress files.
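An abbreviated sketch of that StatefulSet is below, assuming the environment variables and port from the Docker section; the image tag, label keys, and storage size are illustrative, and the manifest in deploy/kubernetes/ is authoritative.

```yaml
# Illustrative single-replica StatefulSet; see deploy/kubernetes/ for the real one.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: agent-analytics
spec:
  replicas: 1                      # SQLite: never scale this above 1
  serviceName: agent-analytics
  selector:
    matchLabels:
      app: agent-analytics
  template:
    metadata:
      labels:
        app: agent-analytics
    spec:
      containers:
        - name: agent-analytics
          image: agent-analytics:local
          ports:
            - containerPort: 8787
          envFrom:
            - secretRef:
                name: agent-analytics-secrets   # API_KEYS, PROJECT_TOKENS
          env:
            - name: DB_PATH
              value: /data/analytics.db
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```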

For a full local kind walkthrough, including a live project and CLI verification, use the repo guide:

https://github.com/Agent-Analytics/agent-analytics/blob/main/deploy/kubernetes/README.md

The short version:

docker build -t agent-analytics:local .
kind load docker-image agent-analytics:local --name agent-analytics
kubectl create namespace agent-analytics
kubectl -n agent-analytics create secret generic agent-analytics-secrets \
  --from-literal=API_KEYS=YOUR_API_KEY \
  --from-literal=PROJECT_TOKENS=YOUR_PROJECT_TOKEN
kubectl -n agent-analytics apply -f deploy/kubernetes/service.yaml
kubectl -n agent-analytics apply -f deploy/kubernetes/statefulset.yaml

Wait for the workload:

kubectl -n agent-analytics rollout status statefulset/agent-analytics --timeout=120s
kubectl -n agent-analytics get pods,pvc,svc

For local access:

kubectl -n agent-analytics port-forward svc/agent-analytics 18787:8787

Then query http://127.0.0.1:18787.

For self-hosted OSS, do not use the hosted login flow. Point the CLI or your agent at your endpoint:

export AGENT_ANALYTICS_URL=https://your-server.example.com
export AGENT_ANALYTICS_API_KEY=YOUR_API_KEY

Then query:

npx --yes @agent-analytics/[email protected] projects
npx --yes @agent-analytics/[email protected] stats marketing-site --days 7
npx --yes @agent-analytics/[email protected] events marketing-site --days 7 --limit 20

SQLite is a good default for the OSS Docker/Kubernetes path when you run one server process against one database file.

For SQLite deployments:

  • run exactly one Node process or one Kubernetes pod against a given SQLite file
  • mount persistent storage at /data
  • keep DB_PATH=/data/analytics.db, or another path on persistent storage
  • do not mount the same SQLite file into multiple pods or replicas
  • verify your storage class supports SQLite WAL locking before using network filesystems
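One quick way to sanity-check WAL locking is to open a throwaway database on the candidate volume and switch it to WAL mode with the sqlite3 CLI (assumed to be installed); DB_DIR is illustrative, so point it at the storage you plan to mount at /data.

```shell
# Create a scratch database on the target volume and request WAL mode.
# On a filesystem with working locking this prints "wal"; on an unsupported
# network filesystem it errors or stays on a rollback journal.
DB_DIR="${DB_DIR:-/tmp}"
sqlite3 "$DB_DIR/wal-check.db" 'PRAGMA journal_mode=WAL;'
rm -f "$DB_DIR"/wal-check.db*
```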

Move beyond SQLite when you need sustained high write volume, frequent long analytical reads during ingestion, or horizontally scaled API replicas. That next step should be a client/server database adapter, such as Postgres, before scaling the API as a Kubernetes Deployment.