Dashboard
Distributed Rate Limiter
Traffic control · Operator workspace · Real-time signals

Learn the System

Start here for the quick mental model. The full guide stays below when you want deeper context.
Use Basic to send a check and read the verdict. Move into Advanced only when you want deeper signals, events, or failure drills.
Start with one check

Pick a starter, keep the defaults, and send a single request first.

Read the verdict

Focus on allowed or denied, remaining budget, and suggested retry timing.

Open Advanced when needed

Monitoring, Events, and Chaos are there when you need to inspect behavior in depth.

Open the full guide
What this console does

This page sends a check through the HTTP gateway, Envoy, and gRPC rate limiter, then returns a Redis-backed allow or deny decision.

Where the signals go

Each decision also lands in the Live Event Stream, while logs, traces, alerts, and stack state flow into the Debug Dashboard and Grafana.

Quick start
Pick the starter that best matches your traffic pattern.
Keep the defaults unless you want to test a different subject or limit.
Run one check first, then open monitoring only if the result needs more context.
Request path
1. Browser: Your form input becomes a check request.
2. Envoy: The gateway forwards the request to the rate limiter service.
3. Limiter + Redis: The rule is evaluated, shared state is updated, and an event is emitted.
4. Signals: Monitoring and debug tools show the decision in charts, logs, traces, and events.
What the fields mean
Traffic group (namespace): Keeps unrelated traffic from sharing the same limit.
Subject (key): Identifies the user, client, or entity being checked.
Limit (rule): Defines either a refill rate or a requests-per-window threshold.
Strategy (algorithm): Chooses the enforcement algorithm, while AUTO derives a default from the limit format.
Cost (cost): Spends multiple budget units for heavier checks.
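Taken together, these fields form a single check request. A minimal sketch of assembling such a payload (the function name and exact wire format here are assumptions for illustration, not the service's actual contract):

```python
# Build the payload a check request might carry.
# Field names mirror the form fields above; the shape is illustrative.
def build_check_request(namespace, key, rule, algorithm="AUTO", cost=1):
    """Assemble a rate-limit check payload from the form fields."""
    return {
        "namespace": namespace,   # traffic group, e.g. "api"
        "key": key,               # subject being limited, e.g. "user123"
        "rule": rule,             # "20rps" or "5/10s"
        "algorithm": algorithm,   # AUTO derives a default from the rule format
        "cost": cost,             # budget units this check spends
    }

payload = build_check_request("api", "user123", "20rps")
```

Each unique namespace/key pair gets its own budget, which is why unrelated traffic should live in separate groups.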
When to use each strategy
Token bucket: Allows short bursts while enforcing an average refill rate.
Leaky bucket: Smooths traffic by draining queued work at a steady rate.
Fixed window counter: Simple and fast, but can allow boundary bursts.
Sliding window log: Tracks exact request timestamps for precise moving-window limits.
Sliding window counter: Uses weighted counters for a lighter approximate moving window.
Recovery drills: Use the chaos tools only when you are intentionally testing failure handling.
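The token bucket above can be sketched as a small in-process model. This is only an illustration of the algorithm's behavior; the real limiter keeps this state in Redis so it is shared across instances:

```python
import time

class TokenBucket:
    """Allows short bursts up to `capacity` while enforcing an average refill rate."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start full
        self.updated = time.monotonic()

    def allow(self, cost=1):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# A 20rps limit expressed as a bucket: burst of 20, refilled at 20 tokens/s.
bucket = TokenBucket(rate=20, capacity=20)
```

A burst of 20 checks passes immediately; the next check is denied until the bucket refills, which is exactly the "burst now, average later" trade-off named above.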
Current check
View: Basic
Scenario: Per-user API quota
Subject: user123
Limit: 20rps
Run mode: Single request
Interval: On demand
Last result: Awaiting check

Run a Check

Start by choosing what you want to protect. The form will fill in a recommended limit and strategy.
1. Pick what to protect
2. Send once
3. Read allowed or denied
4. Try continuous mode
This becomes the raw key sent to the service. Each unique value gets its own limit.
Raw limit formats: 20rps (20 requests per second) for rate-based strategies and 5/10s (5 requests per 10 seconds) for window-based strategies.
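The two raw formats can be told apart mechanically. A sketch of how such a string might be parsed (illustrative only; the service's real parser may be stricter or support more formats):

```python
import re

def parse_limit(raw):
    """Parse '20rps' (rate-based) or '5/10s' (window-based) limit strings."""
    m = re.fullmatch(r"(\d+)rps", raw)
    if m:
        return {"format": "rate", "requests_per_second": int(m.group(1))}
    m = re.fullmatch(r"(\d+)/(\d+)s", raw)
    if m:
        return {"format": "window",
                "requests": int(m.group(1)),
                "window_seconds": int(m.group(2))}
    raise ValueError(f"unrecognized limit format: {raw!r}")
```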
Open advanced request controls

Limit Builder

Choose a limit format and the raw limit string will update for you automatically.
Requests per second for rate-based bucket strategies.
Maps to the raw namespace value sent to the service, such as api or checkout.
Leave this on auto in most cases. Rate format defaults to token bucket; window format defaults to sliding window log.
Use a higher weight when one request should spend more budget than a standard check.
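The auto-selection rule stated above is simple enough to write as a lookup (the strategy identifiers here are illustrative names, not necessarily the service's internal ones):

```python
def default_strategy(limit_format):
    """AUTO derivation: rate-format limits ('20rps') default to token bucket;
    window-format limits ('5/10s') default to sliding window log."""
    defaults = {"rate": "token_bucket", "window": "sliding_window_log"}
    return defaults[limit_format]
```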
Check before sending
    What this means

    Choose a starter to see the limit in plain English.

    Auto selected: Waiting for a valid limit
    Will this pass? Waiting for a valid limit
    Choose Continuous to open cadence presets and the random spike overlay.
    Ready.
    Continuous mode is idle.

    What This Check Does

    Live Request Preview
    Effective strategy: awaiting input.
    Computed Redis key: Appears once the request is valid enough to infer a strategy.

    Strategy Comparison

    Compare every supported strategy against the current rule shape and request weight.
    Side-by-Side View

    Result

    Awaiting check
    Send one check to get a decision, remaining budget, or retry guidance.
    Ready for a first check.

    Start with the quick read below. Open the raw response only if you need the exact payload.

    Suggested next step
    Send one check to see remaining budget and the recommended follow-up.
    Submit a request to generate a plain-language interpretation of the service response.
    Why this result occurred: Submit a request to review validation issues, acceptance details, or denial reasoning.
    Show raw response JSON
    {
      "tip": "Press \"Send once\" to call the service."
    }
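The quick read above can be produced from a decision payload along these lines. Field names such as `allowed`, `remaining`, and `retry_after_ms` are assumptions about the response shape, not the documented schema:

```python
def summarize(decision):
    """Turn a decision payload into a one-line quick read.
    Field names here are illustrative, not the service's actual schema."""
    if decision.get("allowed"):
        return f"allowed, {decision.get('remaining', '?')} budget units left"
    return f"denied, retry in {decision.get('retry_after_ms', '?')} ms"
```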

    Decision Health

    Start with the three signals that matter most: traffic volume, rejection pressure, and latency. Open the deeper tools only when you need them.
    Traffic Volume -- Current request throughput across the rate limiter service.
    Deny Ratio -- Share of recent traffic currently being rejected.
    P95 latency -- Waiting for the first monitoring refresh.
    Open monitoring tools

    Allowed vs Denied

    How accepted and rejected traffic are moving together.
    Allowed Denied
    Allowed Now --
    Denied Now --
    Deny ratio --
    Waiting for the first monitoring refresh.
    Shows successful and rejected traffic. Submit a few requests if the chart is empty.
    Latency details

    P95 latency

    Tail latency is the quickest way to spot slower requests before averages drift.
    P95 latency
    Latest --
    Peak --
    Average --
    Waiting for the first monitoring refresh.
    Shows the 95th percentile of request latency so the slow edge is easier to spot.
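P95 is the latency value below which 95% of recent requests fall. A minimal sketch of computing it from raw samples using the nearest-rank method (the dashboard's own aggregation may differ):

```python
def p95(samples):
    """Return the 95th-percentile value using the nearest-rank method."""
    if not samples:
        return None
    ordered = sorted(samples)
    # Nearest-rank: ceil(0.95 * n) is the 1-based rank of the p95 sample.
    rank = -(-len(ordered) * 95 // 100)  # ceiling division
    return ordered[rank - 1]
```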
    Traffic volume trend

    Request rate

    Last 10 minutes, refreshed every 5 seconds.
    Total requests
    Latest --
    Peak --
    Average --
    Waiting for the first monitoring refresh.
    Shows overall request throughput during the last 10 minutes.
    Stack Health
    Live status of every stack component. Refreshed every 5 seconds.

    Live Event Feed

    Connecting…
    Every rate-limit decision streamed in real time from the Redis Stream. Pause the feed, search for a subject, and click a row for details.
    0 events
    Showing live events as they arrive.
    Time · Traffic Group · Subject · Strategy · Result · Remaining · Latency · Instance
    No live events yet.

    Selected Event

    Choose a row to inspect the exact event payload.
    No event selected yet.
    Logs and Alerts

    Active Alerts

    Firing alerts from AlertManager. Refreshed every 15 seconds.

    Recent Logs

    Structured log lines from Loki. Refreshed every 15 seconds.

    Chaos Engineering

    Use these controls only when you are intentionally testing recovery behavior. Every action below can interrupt live requests until the stack stabilizes again.
    Use these only during failure drills.
    Stopping the rate limiter or Redis can trigger request failures, empty metrics, and noisy alerts until recovery finishes.
    Waiting for the first container refresh.