Glossary

Guardrail Metrics

Guardrail metrics are the business metrics that you don't want to see negatively impacted while conducting experiments like A/B tests.

What Are Guardrail Metrics?

Guardrail metrics are the business or user experience metrics you protect during an A/B test. They are not the metric you are trying to improve. They are the ones you do not want to break. If a winning variant lifts conversions but drops a guardrail like page load time or revenue per session, the test is not safe to ship.

Why Guardrail Metrics Matter in A/B Testing

Most experiments focus on a single primary metric, like signups or revenue. The problem is that one metric never tells the full story. A change can boost the primary metric and quietly hurt something else: trial-to-paid rate, support tickets, refunds, or page speed.

Guardrail metrics catch those side effects before you roll out a change to all users. They keep your team from celebrating a win that costs the business more than it earns.

Examples of Common Guardrail Metrics

Business guardrails

  • Revenue per visitor
  • Average order value
  • Trial-to-paid conversion
  • Refund or cancellation rate

User experience guardrails

  • Bounce rate
  • Session duration
  • Customer satisfaction score (CSAT)
  • Net Promoter Score (NPS)

Technical guardrails

  • Page load time and Core Web Vitals
  • Error rate
  • Crash rate (mobile or app tests)
  • API latency

Funnel guardrails

  • Add-to-cart rate when testing product page changes
  • Checkout completion when testing cart changes
  • Steps further down the funnel that a top-of-funnel test might affect

How to Set Guardrail Metrics in A/B Tests

  1. Pick the primary metric first. Decide what the test is trying to improve.
  2. List what could break. If this change wins, what is the most likely thing it could quietly hurt? Those are your guardrail candidates.
  3. Cap the list at 2 to 4 guardrails. Too many makes every test look like it failed somewhere.
  4. Set thresholds, not just metrics. For example: revenue per visitor must not drop more than 2% with statistical significance.
  5. Pre-register the guardrails before launch. Define them in the test plan so the team agrees on the rules before results come in.
  6. Review every test on both sides. Even when the primary metric wins, scan the guardrails.
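The steps above amount to: write the guardrails and their thresholds down as data before launch, then check results against them mechanically. Here is a minimal sketch of that idea. The names, metric values, and 2%/1% thresholds are illustrative, not from any specific tool.

```python
# Hypothetical sketch: pre-register guardrails with thresholds, then
# check observed results against them after the test runs.
from dataclasses import dataclass

@dataclass
class Guardrail:
    name: str
    max_drop: float  # relative drop that counts as a breach, e.g. 0.02 = 2%

def check_guardrails(control: dict, variant: dict, guardrails: list) -> list:
    """Return names of guardrails whose relative drop exceeds the threshold."""
    breached = []
    for g in guardrails:
        drop = (control[g.name] - variant[g.name]) / control[g.name]
        if drop > g.max_drop:
            breached.append(g.name)
    return breached

# 2 to 4 guardrails, agreed on before launch (step 5)
guardrails = [
    Guardrail("revenue_per_visitor", max_drop=0.02),
    Guardrail("checkout_completion", max_drop=0.01),
]

# Made-up results: revenue per visitor fell ~4.8%, checkout held steady
control = {"revenue_per_visitor": 3.10, "checkout_completion": 0.42}
variant = {"revenue_per_visitor": 2.95, "checkout_completion": 0.42}

print(check_guardrails(control, variant, guardrails))  # → ['revenue_per_visitor']
```

In this made-up example the variant would fail review even if its primary metric won, because revenue per visitor dropped past its pre-registered 2% threshold.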

Guardrail Metrics vs Primary Metrics

Primary metrics answer "did this change work?" Guardrail metrics answer "did it work without making something else worse?" A test is only a real win when the primary metric improves and every guardrail stays inside its safe range.
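That decision rule can be stated in one line of logic. This is a hypothetical helper, with inputs assumed to come from whatever analysis produced your primary and guardrail verdicts:

```python
# Sketch of the "real win" rule: primary improves AND every guardrail
# stays inside its safe range. Inputs here are illustrative.
def safe_to_ship(primary_wins: bool, guardrail_ok: dict) -> bool:
    """A test is only a real win when the primary wins and no guardrail breaks."""
    return primary_wins and all(guardrail_ok.values())

# Primary metric won, but page load time breached its guardrail
print(safe_to_ship(True, {"revenue_per_visitor": True, "page_load_time": False}))
# → False
```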

Frequently Asked Questions

What is the difference between guardrail metrics and guard rail metrics?

Both mean the same thing. "Guardrail metrics" is the more common spelling. Some teams write it as "guard rail metrics" with a space.

How many guardrail metrics should I set per test?

Two to four is usually right. Fewer risks missing real side effects; more creates false alarms, since at least one guardrail will wiggle by chance.

What is a good guardrail metric for an e-commerce A/B test?

Revenue per visitor is a strong default. It catches both the conversion-rate side and the order-value side.

Should guardrail metrics use the same significance level as primary metrics?

Most teams loosen the threshold for guardrails. The goal is to catch real harm, not wait for perfect certainty. A common pattern is to flag any movement of 1 to 2% with at least 80% confidence.
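For a rate-style guardrail, the "looser threshold" pattern can be sketched as a one-sided two-proportion z-test run at 80% confidence (alpha = 0.2) instead of the usual 95%. This is an illustrative implementation using only the standard library; the counts are made up.

```python
# Hypothetical guardrail check: flag a drop in a conversion-style metric
# at 80% confidence (alpha = 0.2), looser than a typical primary-metric test.
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def guardrail_dropped(conv_c: int, n_c: int, conv_v: int, n_v: int,
                      alpha: float = 0.2) -> bool:
    """True if the variant's rate is below control's with (1 - alpha) confidence."""
    p_c, p_v = conv_c / n_c, conv_v / n_v
    p_pool = (conv_c + conv_v) / (n_c + n_v)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_v))
    z = (p_v - p_c) / se      # negative when the variant is worse
    p_value = normal_cdf(z)   # one-sided: chance of a drop this large
    return p_value < alpha

# Made-up data: variant converts at 9.2% vs control's 10.0%, 5,000 users each
print(guardrail_dropped(500, 5000, 460, 5000))  # → True (flagged)
```

Note the asymmetry with primary metrics: here a smaller z-score is enough to raise a flag, because the cost of missing real harm is higher than the cost of a false alarm you then investigate.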

How do you track guardrail metrics in Optibase?

In Optibase you can track guardrail metrics alongside the primary metric for any test. Define them at experiment setup and they appear in the same results view as the primary.