Outputs

LimaCharlie provides multiple output options for sending data from LimaCharlie to one or more destinations. Output data is organized into streams, and we provide native support for a number of providers, or "destinations". The diagram below gives some basic examples of where data is sourced from and where it can be sent.

[Diagram: Flow of Data within LimaCharlie]

Outputs should be thought of in two capacities: Streams and Destinations. A stream is what you are sending, whereas a destination is where you are sending it. We will look at both in detail.

Streams

Streams define which data points will be sent to an output destination; a sketch of how a stream is selected follows the list below.

Available streams include:

  • event: The bulk of data events coming from sensors. Note: this stream is very verbose.
  • detect: Alerts, as generated by the report action in detection and response rules.
  • audit: Events generated by the LimaCharlie platform, such as access control.
  • deployment: Events about your deployment, like sensor enrollments or cloned sensors.
  • artifact: Meta-events reporting on newly-ingested files through the Artifact Collection mechanism.
  • tailored: Only events specifically flagged for output are sent to this stream.
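
As a concrete illustration, here is a minimal sketch of selecting a stream when creating an output with the LimaCharlie Python SDK. It assumes the SDK's Manager and add_output helpers; the Slack parameter names (slack_api_token, slack_channel) are illustrative and should be verified against the current output documentation, and all credentials are placeholders.

import limacharlie

# Authenticate to the organization (OID and API key are placeholders).
man = limacharlie.Manager(oid='YOUR-OID', secret_api_key='YOUR-API-KEY')

# The third argument selects the stream: 'event', 'detect', 'audit',
# 'deployment', 'artifact' or 'tailored'.
man.add_output(
    'alerts-to-slack',              # name of the new output
    'slack',                        # destination module (see next section)
    'detect',                       # stream: detections only, not bulk events
    slack_api_token='xoxb-...',     # illustrative module parameters; check
    slack_channel='#alerts',        # the output docs for the exact names
)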

Destinations

LimaCharlie integrates with several providers, such as S3, Google Cloud, or Slack, as destinations.
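
The destination is selected by the module argument of the same call. A minimal sketch under the same assumptions, archiving the event stream to Amazon S3 with illustrative parameter names (bucket, key_id, secret_key):

import limacharlie

man = limacharlie.Manager(oid='YOUR-OID', secret_api_key='YOUR-API-KEY')

# Same pattern as above, different destination module: archive the full
# event stream to an S3 bucket (parameter names are illustrative).
man.add_output(
    'events-to-s3',
    's3',        # destination module
    'event',     # stream
    bucket='my-lc-archive',
    key_id='AKIA...',
    secret_key='YOUR-SECRET-KEY',
)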

Allow Lists

Looking to add LimaCharlie outputs to an allow list (aka "whitelist")? See the LimaCharlie documentation for details.

Use Cases

There are multiple use cases or integration strategies for shipping telemetry to and from the LimaCharlie platform. Some common approaches we have seen:

All data is sent as batched files via SFTP; Splunk or ELK consumes the received files for ingestion.

Sensor ---> LCC (All Streams) ---> SFTP ---> ( Splunk | ELK )
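
A sketch of this pattern, again assuming the Python SDK's add_output helper and an sftp output module with illustrative connection parameters. Since each output carries a single stream, one output is created per stream:

import limacharlie

man = limacharlie.Manager(oid='YOUR-OID', secret_api_key='YOUR-API-KEY')

# One output per stream; each writes batched files to the SFTP server
# that Splunk or ELK then picks up (parameter names are illustrative).
for stream in ('event', 'detect', 'audit'):
    man.add_output(
        'sftp-' + stream,
        'sftp',
        stream,
        dest_host='sftp.example.com:22',
        username='lc-output',
        password='YOUR-PASSWORD',
        dir='/ingest/' + stream,
    )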

All data is streamed in real time via Syslog; Splunk or ELK receives it directly over an open Syslog socket.

Sensor ---> LCC (All Streams) ---> Syslog (TCP+SSL) ---> ( Splunk | ELK )
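
A sketch of the real-time variant under the same assumptions, with dest_host and is_tls as illustrative syslog module parameters:

import limacharlie

man = limacharlie.Manager(oid='YOUR-OID', secret_api_key='YOUR-API-KEY')

# Stream events in real time over TCP+SSL to a Syslog listener that
# Splunk or ELK exposes (parameter names are illustrative).
man.add_output(
    'realtime-syslog',
    'syslog',
    'event',
    dest_host='syslog.example.com:6514',
    is_tls=True,
)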

All data is sent as batched files to Amazon S3; Splunk or ELK consumes the received files remotely for ingestion.

Sensor ---> LCC (All Streams) ---> Amazon S3 ---> ( Splunk | ELK )

Bulk events are uploaded to Amazon S3 for archiving, while alerts and audit events are sent in real time to Splunk via Syslog. Note: this has the added benefit of reducing Splunk license costs while keeping the raw events available for analysis at a lower cost.

Sensor ---> LCC (Event Stream) ---> Amazon S3
       +--> LCC (Alert+Audit Streams) ---> Syslog (TCP+SSL) ---> Splunk
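
A sketch of this split configuration under the same assumptions: bulk events are archived to S3, while the detect and audit streams go to Splunk over Syslog with TLS. Hostnames and credentials are placeholders.

import limacharlie

man = limacharlie.Manager(oid='YOUR-OID', secret_api_key='YOUR-API-KEY')

# Bulk events are archived cheaply in S3...
man.add_output(
    'archive-s3', 's3', 'event',
    bucket='my-lc-archive',
    key_id='AKIA...',
    secret_key='YOUR-SECRET-KEY',
)

# ...while detections and audit events stream to Splunk in real time.
for stream in ('detect', 'audit'):
    man.add_output(
        'splunk-' + stream, 'syslog', stream,
        dest_host='splunk.example.com:6514',
        is_tls=True,
    )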
