A/B Experiment
A controlled product test — variants, metrics, results, and conclusions that validate product hypotheses.
Why This Object Matters for AI
AI-driven A/B test analysis determines statistical significance and recommends next experiments; sound product decisions depend on trustworthy experiment data.
Product Management & Development Capacity Profile
Typical CMC levels for product management & development in SaaS/Technology organizations.
CMC Dimension Scenarios
What each CMC level looks like specifically for A/B Experiment. Baseline level is highlighted.
Product experiments happen informally — someone changes a button color and checks the dashboard a week later to 'see if it helped.' There are no documented experiment records, no stated hypotheses, no defined success metrics. Whether the change worked is a matter of opinion, not measurement.
None — AI cannot analyze experiment results because no A/B experiment records exist in any system.
Start documenting experiments — write down the hypothesis, the variants, the metric being measured, and the result for each test, even in a simple spreadsheet.
A/B experiments are logged inconsistently. The growth PM maintains a personal spreadsheet of experiments with results. The product team runs feature flags but doesn't always document what was tested or why. Finding the result of an experiment from six months ago means asking 'who ran that test?' and hoping they still have the data.
AI could parse individual experiment logs, but cannot compare experiments across teams, detect conflicting results, or identify experimental gaps because records are scattered and inconsistently formatted.
Consolidate experiment tracking into a single tool with standard fields — hypothesis, variants, primary metric, sample size, duration, statistical significance, and conclusion for every test.
A/B experiments live in a dedicated experimentation platform or structured tracker with consistent fields — hypothesis, variants, metrics, duration, and results. PMs can browse past experiments by product area. But experiment records don't connect to the feature requests that motivated them, the product metrics they moved, or the downstream business impact they created.
AI can generate experiment result summaries and detect statistical significance issues, but cannot assess business impact or recommend follow-up experiments because experiment records lack connections to customer segments, feature adoption metrics, and revenue data.
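The kind of significance check referred to here can be sketched with a plain two-proportion z-test (textbook formula using only the standard library; not a substitute for a platform's validated statistics):

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.
    Returns (z, p_value) under the pooled-variance normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))); two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# 4.8% vs 5.4% conversion with 10k users per arm
z, p = two_proportion_z(480, 10_000, 540, 10_000)
```

Here p lands just above 0.05 — exactly the kind of "significance issue" worth flagging: the lift looks real, but the test as run is underpowered to confirm it.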
Link experiment records to feature request demand signals, product metric definitions, and customer segment data so each test carries business context beyond the isolated statistical result.
A/B experiments are comprehensive records linked to product metrics, customer segments, and feature roadmap items. A PM can query 'show me all experiments targeting conversion for mid-market accounts in the last quarter, their statistical results, and the roadmap decisions they influenced' and get a complete, contextualized answer.
AI can recommend experiment designs based on historical patterns, predict likely outcomes from past results, and quantify the cumulative impact of experimentation programs. It cannot yet auto-design multivariate experiments because the experiment schema lacks formal interaction models between variants.
Formalize the experiment schema with machine-readable hypothesis specifications, validated metric definitions, structured interaction models, and quantified confidence intervals that AI agents can reason over programmatically.
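What a "machine-readable hypothesis specification" could mean in practice is sketched below (an illustrative structure, not drawn from any published ontology; all field names are assumptions):

```python
# Hypothetical spec: a quantified, structured hypothesis an agent can
# reason over, rather than a free-text sentence.
hypothesis_spec = {
    "metric": "signup_conversion",          # must reference a validated metric definition
    "baseline": 0.048,                      # current measured value
    "predicted_lift": 0.006,                # absolute lift the hypothesis claims
    "confidence_interval": (0.001, 0.011),  # quantified uncertainty on the prediction
    "interactions": {                       # structured interaction model between variants
        ("green_cta", "shorter_form"): "sub_additive",
    },
}

# An agent can now check claims programmatically, e.g. that the
# point prediction sits inside its own stated interval:
lo, hi = hypothesis_spec["confidence_interval"]
plausible = lo <= hypothesis_spec["predicted_lift"] <= hi
```

The point of the structure is that every claim is a value a program can validate, compare across experiments, or feed into a design algorithm.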
A/B experiments are formal entities in a product optimization ontology. Hypotheses are machine-readable specifications with quantified predictions. Variant definitions include structured interaction models. Results carry validated statistical analyses and formal causal claims. An AI agent can design experiments that avoid past failures, optimize for interaction effects, and predict outcomes before launch.
AI can autonomously design experiments, compute required sample sizes, predict outcomes, and generate actionable recommendations from results. Human judgment is needed only for strategic experimentation priorities and ethical considerations.
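One of the computations named here, required sample size, can be sketched with the standard normal-approximation formula for a two-proportion test (z-values hard-coded for α = 0.05 two-sided and 80% power; an illustration, not a full power analysis):

```python
from math import ceil

def required_sample_size(p1: float, p2: float) -> int:
    """Per-variant sample size to detect a shift from p1 to p2
    at alpha = 0.05 (two-sided) with 80% power."""
    z_alpha, z_beta = 1.96, 0.84   # fixed for these defaults
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 0.6-point absolute lift from a 4.8% baseline
n = required_sample_size(0.048, 0.054)   # roughly 21k users per variant
```

Small expected lifts demand large samples, which is why computing this before launch, rather than eyeballing a dashboard after, prevents underpowered tests.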
Implement real-time experimentation intelligence — experiments auto-adjust based on incoming results, and the experimentation platform continuously optimizes across all concurrent tests.
A/B experiment records generate and update automatically in real time. The experimentation platform continuously runs, monitors, and concludes tests without manual intervention. Experiment results feed directly into product configuration changes. The experiment record is a living optimization engine that documents itself as it runs.
Fully autonomous experimentation intelligence. AI designs, runs, monitors, concludes, and acts on experiment results in real-time. The product optimizes itself through continuous experimentation.
Ceiling of the CMC framework for this dimension.
Capabilities That Depend on A/B Experiment
Other Objects in Product Management & Development
Related business objects in the same function area.
Feature Request
A user-submitted product improvement suggestion — request details, source, votes, prioritization score, and status that captures customer product needs.
Product Roadmap Item
A planned product feature or initiative — description, priority, timeline, dependencies, and status that tracks product development plans.
Product Requirements Document
A formal feature specification — requirements, user stories, acceptance criteria, and technical constraints that define what to build.
User Research Study
A qualitative research project — interviews, transcripts, observations, and synthesized insights that inform product decisions.
Product Metric
A tracked product KPI — definition, baseline, target, and current value that measures product health.