Integrate Your AWS S3 Data Source with the Security Data Lake
    • 12 Nov 2024

    Any external data source that can be configured to write logs to an AWS S3 bucket can forward data to the Red Canary Security Data Lake.

    By integrating your security logs with the Red Canary Security Data Lake, you can meet data retention requirements, export logs when needed for investigation or reporting, and ensure greater visibility into your security infrastructure for your team and Red Canary. To integrate an external data source with Red Canary through AWS S3, follow the procedure below from beginning to end.

    Step 1: Red Canary–Generate your S3 folder URL and credentials

    1. From your Red Canary dashboard, navigate to Integrations, click the split button to the right of Add Integration, and then click Add Data Lake Integration.

    2. Enter a name for your integration.

    3. Under Ingest Format / Method, select Data Source via S3 (Security Data Lake).

    4. Select the desired data retention period in days (default: 90).

    5. Click Save.

    6. Click Edit Configuration.

    7. Click Activate.

    8. After a few minutes, Red Canary will generate an S3 Folder URL, AWS Access ID, and AWS Secret Key that you will use to set up log forwarding in your external data source. Copy and then save these values. You will use them in a later step.

      These configuration settings will not be generated until the Red Canary integration is saved and activated.
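      Some external data sources ask for the bucket name and key prefix as separate fields rather than a full S3 folder URL. A minimal sketch of splitting the generated folder URL into those parts (the URL below is a hypothetical placeholder; substitute the one Red Canary generated for you):

      ```python
      def split_s3_folder_url(url: str) -> tuple[str, str]:
          """Split an s3://bucket/prefix/ folder URL into (bucket, prefix)."""
          if not url.startswith("s3://"):
              raise ValueError(f"not an S3 URL: {url!r}")
          # Everything before the first "/" is the bucket; the rest is the key prefix.
          bucket, _, prefix = url[len("s3://"):].partition("/")
          return bucket, prefix

      # Hypothetical example value -- use your Red Canary-generated S3 Folder URL.
      bucket, prefix = split_s3_folder_url("s3://example-bucket/customer-folder/")
      print(bucket, prefix)  # example-bucket customer-folder/
      ```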

    Step 2: External Data Source–Configure log forwarding

    1. From your external data source, set up log forwarding using the S3 Folder URL, AWS Access ID, and AWS Secret Key values noted in the previous section.

    2. Ensure that the data source is configured to emit logs in a line-delimited format.

      Examples of line-delimited file formats: newline-delimited JSON (NDJSON), CSV, TSV, CEF, CLF, etc.

    3. Files can be sent raw (uncompressed) or compressed. As long as the files are compressed with a common algorithm, the Data Lake Adapter can ingest the data.

      Examples of supported compression algorithms: ZIP, GZIP, etc.
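      Items 2 and 3 above can be sketched together in a few lines of Python: serialize events as newline-delimited JSON (one complete JSON object per line, no enclosing array), then optionally GZIP-compress the payload before it is written to the S3 bucket. The log events here are hypothetical placeholders; real events come from your data source.

      ```python
      import gzip
      import json

      # Hypothetical log events standing in for your data source's output.
      events = [
          {"timestamp": "2024-11-12T10:00:00Z", "action": "login", "user": "alice"},
          {"timestamp": "2024-11-12T10:05:00Z", "action": "logout", "user": "alice"},
      ]

      # Line-delimited format: one complete JSON object per line.
      ndjson = "\n".join(json.dumps(event) for event in events) + "\n"

      # Optional: GZIP-compress the payload before uploading it to the bucket.
      compressed = gzip.compress(ndjson.encode("utf-8"))

      # Round-trip check: the decompressed payload matches the original lines.
      assert gzip.decompress(compressed).decode("utf-8") == ndjson
      ```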

