Closed-loop growth analysis for AI agents

Use this guide when you want your AI agent to turn Agent Analytics reads into a growth decision.

The loop is: context → properties/events → funnel → paths → breakdown → journey/events → experiment → readout.

This is guidance, not a rigid protocol. Your agent can skip steps that are already answered or irrelevant. If you only ask for an experiment readout, it should not rerun a full activation diagnosis. If you ask a retention question, it should use retention instead of forcing everything through one funnel.

Give your agent this task:

Use Agent Analytics to diagnose where <project> loses users before activation. Start from project context, use the configured activation events as the source of truth, discover the real event and property vocabulary, run a funnel and session paths, break down the largest leak, inspect representative journeys or events only if useful, then recommend one narrow experiment or the readiness fix that blocks readout.

Expected answer shape:

  1. Best bet or diagnosis.
  2. Metric definition: population, window, event names, identity basis, and conversion window.
  3. Evidence: counts, rates, raw activity, strict survivors, and the biggest driver.
  4. Segment, cohort, or surface where the issue concentrates.
  5. Caveat: identity, sample size, attribution, right-censoring, instrumentation, or causality limit.
  6. One bounded next query or action.
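
To make the shape concrete, here is a hedged sketch of a filled-in answer; every event name, segment, and number in it is hypothetical:

  1. Best bet: mobile users leak between trial_signup and first_project_created.
  2. Metric: new visitors over the last 30 days, steps trial_signup → first_project_created, anonymous identity, 7-day conversion window.
  3. Evidence: 1,000 signups, 240 activations (24%), with the signup → first-project step as the biggest driver.
  4. Concentration: mobile Safari sessions arriving from paid campaigns.
  5. Caveat: anonymous identity can double-count the same person across devices.
  6. Next: break down first_project_created by device over the last 30 days.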

1. Start from project context

The agent should start project-specific work by reading context:

agent-analytics context get <project>

Project context can include activation events, event-name glossary entries, goals, date annotations, and product notes that explain what the numbers mean.

Project-defined activation events are the source of truth. If context says activation is trial_signup plus first_project_created, use that. Do not silently replace it with a generic event like signup or page_view.
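
For example, if context defines that pair, the funnel command shown later should consume it verbatim; this is a sketch, and <project> remains a placeholder:

agent-analytics funnel <project> --steps "trial_signup,first_project_created" --since 30d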

If activation is missing, do not guess silently. The next action is to ask for the activation definition or configure it before claiming an activation diagnosis:

I do not see a project-defined activation event for <project>. What event or sequence means a user reached first value? If you want, I can inspect recent events and suggest candidates, but I will not treat those as activation until you approve them.

2. Discover events and properties before filters

Before constructing funnels, breakdowns, or filters, the agent should inspect what the project actually sends:

agent-analytics properties <project>
agent-analytics properties-received <project>
agent-analytics events <project> --since 7d --limit 20

Use this to confirm event names, property keys, and whether the planned dimensions exist. Do not invent unsupported fields from the user’s wording.
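
For instance, before planning a breakdown by device, a quick check that the property exists avoids an empty result. This is a sketch that assumes the properties command prints one key per line, which may not match your output format:

# confirm the device property is actually sent (output format may differ)
agent-analytics properties <project> | grep -i device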

3. Run a funnel for ordered leakage

Use funnel for ordered activation leakage:

agent-analytics funnel <project> --steps "<step_1>,<activation_event>" --since 30d

If activation has more than two steps, use the configured event sequence from context. The funnel answers where users drop in the intended success path. It does not prove why they dropped.
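
As a sketch, a three-step funnel where context defines activation as trial_signup followed by first_project_created, preceded by a hypothetical landing_view event:

# landing_view is a hypothetical entry event; use the names your context defines
agent-analytics funnel <project> --steps "landing_view,trial_signup,first_project_created" --since 30d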

The agent should call out both the largest absolute loss and the largest relative loss when the data supports it.
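
As a hypothetical illustration of why both views matter: with step counts of 10,000 → 4,000 → 400, the first transition loses the most sessions (6,000, a 60% drop), while the second loses fewer sessions (3,600) but at a worse rate (90%). The absolute view points at where volume disappears; the relative view points at the weakest step.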

4. Use paths for session behavior around the goal

Use paths to understand what sessions do around the goal:

agent-analytics paths <project> --goal <activation_event> --since 30d --max-steps 5

Paths help explain entry pages, exit pages, detours, and terminal states. They are especially useful after a funnel shows a leak but does not explain where dropped sessions went.

Session paths are session-local. A goal counts only when it happens in the same bounded session, so do not present paths as long-cycle attribution.
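
A filled-in sketch, assuming the configured activation event is first_project_created (hypothetical):

agent-analytics paths <project> --goal first_project_created --since 30d --max-steps 5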

5. Break down the largest leak

After finding the biggest leak, use practical dimensions that actually exist in the project data:

agent-analytics breakdown <project> --event <leak_step_event> --by <property_name> --since 30d

Good dimensions include path, source, referrer, CTA label, device, browser, country, campaign, plan, surface, or onboarding step when those properties exist.
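
For example, if the funnel leaks at trial_signup and the discovery step confirmed a device property, the breakdown might look like this (event and property names are hypothetical):

agent-analytics breakdown <project> --event trial_signup --by device --since 30d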

Prefer fixed product-growth commands for broad diagnosis. Do not start broad activation analysis with /query when context, funnel, paths, breakdown, journey, retention, or experiments answer the question better. Use /query later for narrow aggregations when the fixed commands cannot answer a specific count or grouping.

6. Inspect journeys or events only when useful

Raw events are for representative inspection and instrumentation sanity, not the default answer.

Use them when you need to verify that a leak is real, inspect a few examples, or confirm that an event fires with the expected properties:

agent-analytics journey <project> --anonymous-id <anonymous_id> --since 30d
agent-analytics events <project> --since 30d --limit 50

Do not dump raw logs if the aggregate diagnosis is already clear.
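
A minimal inspection pass might pull a sample of raw events, copy an anonymous id from one suspicious session, and replay that journey; the id below is a placeholder:

agent-analytics events <project> --since 30d --limit 50
# then replay one session using an anonymous_id copied from the output above
agent-analytics journey <project> --anonymous-id anon_1a2b3c --since 30d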

7. Recommend one narrow experiment or a readiness fix

Default to one narrow experiment after diagnosis:

agent-analytics experiments create <project> --name <experiment_name> --goal <goal_event>

Good experiment recommendations change one thing: one CTA, one headline, one pricing message, one onboarding step, or one follow-up prompt.
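
For example, to test a single signup CTA change against the configured activation event (the experiment name and goal below are hypothetical):

agent-analytics experiments create <project> --name signup-cta-copy --goal first_project_created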

But do not force an experiment when a blocker has to be cleared first. Recommend a readiness fix instead when:

  • activation is not defined
  • the goal event is missing or unreliable
  • the sample is too small
  • identity or session semantics make the readout misleading
  • tracking is too noisy to isolate the leak
  • an existing experiment already tests the same bet

The metric should be the business goal event or activation event, not exposure count.

8. Read the experiment and close the loop

Use this after the experiment has traffic:

Read <experiment_name> for <project> against <goal_event>. Decide whether to keep running it, change it, stop it, or complete it with a winner. Use the business goal event, not exposure count. Include sample-size and causality caveats, and tell me the next one bounded action.

The agent should read the experiment and connect it back to the original diagnosis:

agent-analytics experiments get <project> <experiment_name>
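
Continuing the hypothetical experiment from step 7:

agent-analytics experiments get <project> signup-cta-copy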

A useful readout says:

  • what changed
  • which goal event was used
  • whether the result is practically meaningful
  • whether traffic is enough to decide
  • whether the result supports, weakens, or does not answer the original diagnosis
  • what to keep, change, stop, or test next

Pitfalls and limits

Funnels show ordered leakage, not the reason users dropped.

Session paths are session-local. They explain bounded journeys, not long-cycle attribution.

Identity can be anonymous, user-level, session-local, or portfolio-linked. The agent should state which basis the readout uses and avoid strict cross-project claims unless identity linking is configured.

Small samples are directional. Do not overstate noisy counts, early experiment data, or segments with tiny denominators.

Correlation is not causality. A breakdown can find where the problem concentrates; an experiment is what tests a causal change.

After you understand the real-project workflow, you can try the same loop on seeded demo data.

Use Agent Analytics demo data to run the closed-loop growth workflow. Treat the demo project as practice only: read context, discover event/property vocabulary, run funnel and paths to the demo activation event, break down the biggest leak, and recommend one narrow experiment or readiness fix.

If your agent has the CLI installed and authenticated, it can use the demo project name that appears in your account or docs environment. Keep demo results separate from real product decisions.