Self-hosted OSS server
The open-source Agent Analytics server is for teams that want to own the runtime, storage, API keys, retention, and backups.
It exposes the same self-hosted analytics API surface your agent can query with AGENT_ANALYTICS_URL and AGENT_ANALYTICS_API_KEY. It does not include the managed cloud dashboard, billing, account management, hosted OAuth, or project provisioning flows from Agent Analytics Cloud.
Choose a route
- Cloudflare Workers + D1: run the OSS server as a Cloudflare Worker and store events in D1. Best for low-ops self-hosting when you already use Cloudflare or want Cloudflare to handle TLS, the edge runtime, and Worker uptime.
- Docker / Kubernetes + SQLite: run the Node.js server in a container and persist SQLite on a volume. Best for local infrastructure, VPS/container hosts, kind/minikube tests, or teams that want direct runtime ownership.
Route 1: Cloudflare Workers + D1
Clone and install the OSS server:
```sh
git clone https://github.com/Agent-Analytics/agent-analytics.git
cd agent-analytics
npm install
```

Create a D1 database:
```sh
npx wrangler d1 create agent-analytics
```

Update wrangler.toml with the database ID from Wrangler. Keep the D1 binding name as DB.
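For orientation, the D1 section of wrangler.toml follows Wrangler's standard shape, sketched below. The binding name DB matches the step above; the database name and the placeholder ID are illustrative, and you should paste the real ID that `wrangler d1 create` printed.

```toml
[[d1_databases]]
binding = "DB"                      # must stay "DB" so the server finds its database
database_name = "agent-analytics"   # the name you created above
database_id = "YOUR_DATABASE_ID"    # paste the ID printed by `wrangler d1 create`
```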
Initialize the schema and deploy:
```sh
npx wrangler d1 execute agent-analytics --remote --file=./schema.sql
npx wrangler deploy
```

Set the read API key and public project token:
```sh
echo "YOUR_API_KEY" | npx wrangler secret put API_KEYS
echo "YOUR_PROJECT_TOKEN" | npx wrangler secret put PROJECT_TOKENS
```

Your endpoint will look like:
https://agent-analytics.YOUR-SUBDOMAIN.workers.dev

Route 2: Docker + SQLite
No official container image is published yet. Build the image yourself from the open-source repo.
```sh
git clone https://github.com/Agent-Analytics/agent-analytics.git
cd agent-analytics
docker build -t agent-analytics:local .
```

Run the container with SQLite persisted on /data:
```sh
docker run --rm \
  -p 8787:8787 \
  -e API_KEYS=YOUR_API_KEY \
  -e PROJECT_TOKENS=YOUR_PROJECT_TOKEN \
  -e DB_PATH=/data/analytics.db \
  -v agent_analytics_data:/data \
  agent-analytics:local
```

Verify the server:
```sh
curl http://localhost:8787/health
```

Create a first event. Use a browser-like User-Agent; curl's default user agent can be treated as automated traffic.
```sh
curl http://localhost:8787/track \
  -H "Content-Type: application/json" \
  -H "User-Agent: Mozilla/5.0 Smoke Test" \
  -d '{"token":"YOUR_PROJECT_TOKEN","project":"marketing-site","event":"page_view","properties":{"path":"/"}}'
```

Read it back:
```sh
curl "http://localhost:8787/stats?project=marketing-site&since=7d" \
  -H "X-API-Key: YOUR_API_KEY"
```

You can also run it with Compose:
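For reference, a minimal docker-compose.yml equivalent to the docker run flags above might look like the sketch below. The service and volume names are assumptions, not necessarily what the repo ships; API_KEYS and PROJECT_TOKENS are read from the shell environment.

```yaml
services:
  agent-analytics:
    build: .
    ports:
      - "8787:8787"
    environment:
      API_KEYS: ${API_KEYS}
      PROJECT_TOKENS: ${PROJECT_TOKENS}
      DB_PATH: /data/analytics.db
    volumes:
      - agent_analytics_data:/data

volumes:
  agent_analytics_data:
```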
```sh
API_KEYS=YOUR_API_KEY PROJECT_TOKENS=YOUR_PROJECT_TOKEN docker compose up --build
```

Kubernetes + SQLite
The OSS repo includes Kubernetes manifests for a single-replica StatefulSet, a ClusterIP service, a PVC mounted at /data, and example secret/ingress files.
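As orientation, the key parts of that StatefulSet are the single replica and the /data volume claim. The sketch below shows the shape only; names, image, and sizes are assumptions, so use the repo's own manifests for real deployments.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: agent-analytics
spec:
  replicas: 1                     # exactly one pod may own the SQLite file
  serviceName: agent-analytics
  selector:
    matchLabels:
      app: agent-analytics
  template:
    metadata:
      labels:
        app: agent-analytics
    spec:
      containers:
        - name: agent-analytics
          image: agent-analytics:local
          ports:
            - containerPort: 8787
          env:
            - name: DB_PATH
              value: /data/analytics.db
          envFrom:
            - secretRef:
                name: agent-analytics-secrets   # API_KEYS and PROJECT_TOKENS
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```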
For a full local kind walkthrough, including a live project and CLI verification, use the repo guide:
https://github.com/Agent-Analytics/agent-analytics/blob/main/deploy/kubernetes/README.md

The short version:
```sh
docker build -t agent-analytics:local .
kind load docker-image agent-analytics:local --name agent-analytics

kubectl create namespace agent-analytics

kubectl -n agent-analytics create secret generic agent-analytics-secrets \
  --from-literal=API_KEYS=YOUR_API_KEY \
  --from-literal=PROJECT_TOKENS=YOUR_PROJECT_TOKEN
```
```sh
kubectl -n agent-analytics apply -f deploy/kubernetes/service.yaml
kubectl -n agent-analytics apply -f deploy/kubernetes/statefulset.yaml
```

Wait for the workload:
```sh
kubectl -n agent-analytics rollout status statefulset/agent-analytics --timeout=120s
kubectl -n agent-analytics get pods,pvc,svc
```

For local access:
```sh
kubectl -n agent-analytics port-forward svc/agent-analytics 18787:8787
```

Then query http://127.0.0.1:18787.
Point the CLI at your server
For self-hosted OSS, do not use the hosted login flow. Point the CLI or your agent at your endpoint:
```sh
export AGENT_ANALYTICS_URL=https://your-server.example.com
export AGENT_ANALYTICS_API_KEY=YOUR_API_KEY
```

Then query your server as usual.
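The exact CLI invocation depends on your CLI version and isn't shown here. As a sketch, the same two variables can drive a direct request against the stats endpoint used earlier in this guide; the project name and time window below are illustrative.

```sh
curl "$AGENT_ANALYTICS_URL/stats?project=marketing-site&since=7d" \
  -H "X-API-Key: $AGENT_ANALYTICS_API_KEY"
```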
SQLite production boundary
SQLite is a good default for the OSS Docker/Kubernetes path when you run one server process against one database file.
For SQLite deployments:
- run exactly one Node process or one Kubernetes pod against a given SQLite file
- mount persistent storage at /data
- keep DB_PATH=/data/analytics.db, or another path on persistent storage
- do not mount the same SQLite file into multiple pods or replicas
- verify your storage class supports SQLite WAL locking before using network filesystems
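A rough way to probe the last point, assuming the sqlite3 CLI is installed: create a scratch database on the mount you plan to use and ask SQLite to switch it to WAL. This is only a smoke test; a successful switch does not guarantee correct locking under real concurrency, especially on network filesystems.

```sh
# Probe WAL support on a filesystem: create a scratch database there
# (shown on /tmp; use your mounted volume instead) and request WAL mode.
# SQLite prints the resulting journal mode; "wal" means the switch worked.
sqlite3 /tmp/wal-probe.db 'PRAGMA journal_mode=WAL;'

# Clean up the scratch database and its WAL sidecar files.
rm -f /tmp/wal-probe.db /tmp/wal-probe.db-wal /tmp/wal-probe.db-shm
```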
Move beyond SQLite when you need sustained high write volume, frequent long analytical reads during ingestion, or horizontally scaled API replicas. That next step should be a client/server database adapter, such as Postgres, before scaling the API as a Kubernetes Deployment.