Replay allows you to run Detection & Response (D&R) rules against historical traffic. This can be done with a few combinations of sources:

Rule Source:

  • An existing rule in the organization, referenced by name.
  • A rule provided in the replay request.

Traffic Source:

  • Sensor historical traffic.
  • Local events provided during the request.


Using the Replay API requires the API key to have the following permissions:

  • dr.list

If the traffic source is from an organization, the following additional permissions are required:

  • insight.evt.get
  • insight.det.get

The returned data from the API contains the following:

  • responses: a list of the actions that would have been taken by the rule (report, task, etc.).
  • num_evals: the number of evaluation operations performed by the rule. This is a rough measure of the performance of the rule.
  • num_events: the number of events that were replayed.
  • eval_time: the number of seconds it took to replay the data.
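
For example, a response might look like the following (all values are illustrative):

  {
    "responses": [...],
    "num_evals": 1250,
    "num_events": 103,
    "eval_time": 2
  }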

Python CLI

The Python CLI gives you a friendly way to replay data, and to do so across larger datasets by automatically splitting up your query into multiple queries that can run in parallel.

Sample command line to query one sensor:

limacharlie-replay --sid 9cbed57a-6d6a-4af0-b881-803a99b177d9 --start 1556568500 --end 1556568600 --rule-content ./test_rule.txt

Sample command line to query an entire organization:

limacharlie-replay --entire-org --start 1555359000 --end 1556568600 --rule-name my-rule-name

If specifying a rule as content with the --rule-content flag, the format should be JSON or YAML, like:

  detect:
    event: DNS_REQUEST
    op: is
    path: event/DOMAIN_NAME
    value: dilbert.com
  respond:
    - action: report
      name: dilbert-is-here

Instead of specifying the --entire-org or --sid flags, you may use events from a local file via the --events flag.
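
For example, following the same pattern as above (assuming ./events.json contains the events to replay):

limacharlie-replay --events ./events.json --rule-content ./test_rule.txt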

We invite you to look at the command line usage itself as the tool evolves.


REST API

The Replay API is available in all datacenter locations through a per-location URL. To get the appropriate URL for your organization, use the REST endpoint that retrieves per-location URLs; the relevant entry is named replay.

Having per-location URLs allows us to guarantee that processing occurs within the geographical area you chose. Do note that currently some locations are NOT guaranteed to be in the same area because we use the Google Cloud Run product, which is not yet available globally. In those cases, processing is currently done in the United States; as soon as Cloud Run becomes available in your area, processing will be moved there transparently.

Authentication to this API works with the same JWTs as the main API.

For this example, we will use the experimental datacenter's URL.

The API mainly works on a per-sensor basis, over a limited time range. Replaying multiple sensors (or an entire organization), or a longer time period, is done through multiple parallel API calls. This multiplexing is taken care of for you by the Python CLI described above.

Specify the Organization ID (OID) and Sensor ID (SID) through the following URI: {OID}/{SID}

Specify the start and end of the time range, as Unix epoch seconds, in the query string: {OID}/{SID}?start={START_EPOCH}&end={END_EPOCH}

Specify the rule to apply. This can be done via a rule_name query string parameter, or by supplying the rule as JSON in the body of the POST with a Content-Type header of application/json: {OID}/{SID}?start={START_EPOCH}&end={END_EPOCH}&rule_name={EXISTING_RULE_NAME}
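
Putting the above together, here is a minimal sketch in Python (the base URL, the bearer-style Authorization header, and the IDs are assumptions; retrieve your real URL from the replay entry described above):

  import requests

  REPLAY_URL = "https://replay.limacharlie.io"  # assumption: use your per-location URL
  OID = "YOUR-ORG-ID"
  SID = "9cbed57a-6d6a-4af0-b881-803a99b177d9"
  JWT = "YOUR-JWT"  # same JWT as the main API

  # Replay one sensor's historical traffic against an existing rule, by name.
  resp = requests.post(
      f"{REPLAY_URL}/{OID}/{SID}",
      params={"start": 1556568500, "end": 1556568600, "rule_name": "my-rule-name"},
      headers={"Authorization": f"Bearer {JWT}"},  # assumption: bearer-style auth
  )
  print(resp.json())  # responses, num_evals, num_events, eval_time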

You may also use events provided during the request by using the endpoint: {OID}

The body of the POST should be a JSON blob like:

  "rule": {...},
  "events": [

Like the other endpoints, you can also submit a rule_name in the URL query string if you want to use an existing organization rule.
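
As a minimal sketch, assuming the same per-location URL and JWT as above, and a hypothetical events.json file holding a JSON list of events:

  import json
  import requests

  REPLAY_URL = "https://replay.limacharlie.io"  # assumption: use your per-location URL
  OID = "YOUR-ORG-ID"
  JWT = "YOUR-JWT"

  # An inline rule in the same detect/respond format shown earlier.
  rule = {
      "detect": {
          "event": "DNS_REQUEST",
          "op": "is",
          "path": "event/DOMAIN_NAME",
          "value": "dilbert.com",
      },
      "respond": [{"action": "report", "name": "dilbert-is-here"}],
  }

  # Hypothetical file containing the local events to replay.
  with open("events.json") as f:
      events = json.load(f)

  resp = requests.post(
      f"{REPLAY_URL}/{OID}",
      headers={"Authorization": f"Bearer {JWT}"},  # assumption: bearer-style auth
      json={"rule": rule, "events": events},
  )
  print(resp.json())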

You may also specify limit_event and limit_eval parameters as integers. They limit the number of events evaluated and, approximately, the number of rule evaluations performed. If a limit is reached, the response will contain limit_event_reached: true or limit_eval_reached: true accordingly.
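
For example, assuming the limits are passed in the query string like the other parameters (illustrative values):

{OID}/{SID}?start={START_EPOCH}&end={END_EPOCH}&rule_name={EXISTING_RULE_NAME}&limit_event=100000&limit_eval=1000000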

Finally, you may also set trace to true in the request to receive a detailed trace of the rule evaluation. This is useful in the development of new rules to find where rules are failing.
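
For example, when replaying local events, the body might look like this (assuming trace is a top-level field of the JSON body):

  {
    "rule": {...},
    "events": [...],
    "trace": true
  }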


Billing

The Replay service is billed on a per-operator-evaluation basis.

A D&R rule is composed of multiple operator evaluations, and it is each of those evaluations that gets billed for the Replay service. This means that, generally speaking, the number of operator evaluations will be roughly the number of events replayed multiplied by the complexity of the rule.
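
For example, replaying 1,000,000 events against a rule that performs on average 5 operator evaluations per event would be billed as roughly 5,000,000 operator evaluations.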

Rules, especially complex ones can be hard to evaluate since rules will often perform evaluation short-cicruits to reduce the number of evaluations in certain cases. Therefore the best way to evaluate a rule is to use the LimaCharlie CLI with the limacharlie-replay command which outputs precise statistics about a Replay job. This will include number of operator evaluations which will then help you determine the performance of your rule.