Secure60 Collector


Overview

The Secure60 Collector is a service that ingests data from a range of log sources and dynamically transforms it into the Secure60 Common Information Model (CIM). The collector then sends this normalised data to the relevant Secure60 Project.

This capability enables fast onboarding of new sources of security data from within your organisation and automates the field mapping and transformations needed.

The Collector delivers the following capabilities:

  • The solution is deployed in your infrastructure (or as a cloud hosted add-on to your plan).
  • The solution is based on open source log ingestion technology (vector.dev) and scales to hundreds of thousands of Events per second.
  • The configuration is easily customisable/extensible to the bespoke needs of your organisation.
  • Customers can implement similar functionality in any other tooling they have available, or leverage the Secure60 Collector as an endpoint that other log/data management sources output to.

The Secure60 Collector in its default configuration also normalises inbound data into the Secure60 schema for known field names. This powerful feature significantly reduces the configuration needed to enable the platform - often zero additional configuration is needed.

Deployment

A deployment consists of 2 components:

  1. An environment file that contains ingest token information to allow connectivity to your Project and the configuration preferences you would like enabled.
  2. The Secure60 Collector binary files - a Docker container is the recommended implementation option.

Deploy Secure60 Collector

1. Create an environment file

In the Secure60 Portal, browse to the Integrations page for your Project (select Secure60 Collector for a base configuration, or an item like Syslog for a preconfigured Syslog environment file).

This will provide a one-click button to generate your environment (.env) file.

Store this securely as it contains your Ingest Token and won't be displayed again.

The generated environment file will look like this:

    S60_PROJECT_ID=YOUR_PROJECT_ID
    S60_INGEST_TOKEN=YOUR_TOKEN
    ENABLE_GENERIC_NORMALISE=true

2. Run the Secure60 Collector

The following options are supported:

  1. (Recommended) Deployment via Docker container (Docker Engine install) - The Secure60 Collector can be run natively in Docker (or via Docker Compose or Kubernetes). Here is a sample command to spin up the container:
docker run -i --name s60-collector -p 80:80 -p 443:443 -p 514:514 -p 6514:6514 -p 5044:5044 --rm -d --env-file .env secure60/s60-collector:1.07
  2. Cloud Secure60 Collector - We can deploy a Cloud hosted version of the Secure60 Collector as an additional add-on. Configuration options are more limited with this deployment method compared to the Docker option.

  3. Bare metal and other manual installations - We can provide instructions for other deployment options on request.

3. Configure your devices to send data to the Secure60 Collector

You can now configure devices (Desktops, Servers, Network devices, Applications) to send their data to the Secure60 collector. This is often achieved via tools such as Syslog that you would configure to output to the IP address of the Secure60 Collector you have deployed.
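
For example, on a Linux server running rsyslog, a minimal forwarding rule might look like the following sketch (10.0.0.5 is a placeholder - substitute the IP address of your deployed Secure60 Collector):

    # /etc/rsyslog.d/60-secure60.conf - forward all messages to the Secure60 Collector
    # @@ sends via TCP; a single @ sends via UDP
    *.* @@10.0.0.5:514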

Once you have configured a device, log in to the Secure60 Portal and view the Overview page to check that the count of Events is increasing, or view the Search page to see the actual Events you are sending.

We strongly recommend touching base with our Integrations team for custom advice and onboarding assistance specific to your organisation's needs: integrations@secure60.io


Controlling Transformations / Parsing


Enabling custom configurations

Custom configurations allow you to enable transformations of incoming data into the Secure60 schema. Getting data into the Secure60 schema allows you to take advantage of the built-in Managed Rules and detections provided by the platform.

Custom configurations also allow you to activate additional data ingestion methods (such as Kafka, Redis, S3 Pull or GCP PubSub).

Enabling transformations using Environment variables

The Secure60 Collector offers a range of environment variables that can be set to enable the transformation of source data into the Secure60 schema.

The most important of these environment variables is ENABLE_GENERIC_NORMALISE, which is set to true in default configurations. Its purpose is to automatically convert known field names into the Secure60 schema. This powerful feature covers field names from cloud providers, operating systems, network devices and more.

There are additionally more specific transformations you can enable. For example, to transform common Linux SSHD login events into enriched Secure60 Events that include the username, source IP address and the outcome of the login, you can enable a custom Linux transformation.

To do this you define 3 environment variables:

ENABLE_LINUX_MATCH_FIELD=.source_type #Looks for a fieldname called .source_type
ENABLE_LINUX_MATCH_VALUE=syslog #The value of the fieldname must be syslog
ENABLE_LINUX=true #Enable the transformation
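
For example, appended to the environment file generated earlier, a complete .env enabling this transformation might look like:

    S60_PROJECT_ID=YOUR_PROJECT_ID
    S60_INGEST_TOKEN=YOUR_TOKEN
    ENABLE_GENERIC_NORMALISE=true
    ENABLE_LINUX_MATCH_FIELD=.source_type
    ENABLE_LINUX_MATCH_VALUE=syslog
    ENABLE_LINUX=true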

There are a number of additional custom transformations available - See full list under “Environment File Reference” below.

Enabling additional inputs and fully bespoke transformations

The Secure60 Collector can be extended with completely custom configuration files that are written in YAML.

This enables completely bespoke configurations that support sophisticated ingestion and transformation scenarios.

We strongly recommend working with the Secure60 Integrations team to fast track your custom configuration.

Custom configurations are achieved by mapping .yaml files into the running Secure60 Collector container. Mapping files into specific locations with specific definitions allows the Secure60 Collector to pick up these files and include them in the default running configuration.

This mapping can be achieved in Docker via -v, by creating a Docker Compose file, or via a Kubernetes ConfigMap.
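
As a sketch, a minimal Docker Compose equivalent of the sample docker run commands in this guide might look like the following (file names and paths are illustrative):

# docker-compose.yml - a minimal sketch equivalent to the sample docker run commands
services:
  s60-collector:
    image: secure60/s60-collector:1.07
    env_file: .env
    ports:
      - "80:80"
      - "443:443"
      - "514:514"
      - "6514:6514"
      - "5044:5044"
    volumes:
      # Map custom configuration files into the monitored location
      - ./transform-filebeat.yaml:/etc/vector/transforms-active/transform-filebeat.yaml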

Examples of the mapping process include:

  1. Enabling a new input type - Ingesting from Filebeat

To achieve this configuration we would create a file called: transform-filebeat.yaml

  # Used when ingesting data from Filebeat sources - this receives Events that Filebeat writes to a Redis list
  sources:
    incoming_redis:
      type: redis
      url: redis://localhost:6379/0
      key: filebeat
      data_type: list

You would then run Docker with the following syntax:

docker run -i -v ./transform-filebeat.yaml:/etc/vector/transforms-active/transform-filebeat.yaml --name s60-collector -p 80:80 -p 443:443 -p 514:514 -p 6514:6514 -p 5044:5044 --rm -d --env-file .env secure60/s60-collector:1.07

The additional syntax -v ./transform-filebeat.yaml:/etc/vector/transforms-active/transform-filebeat.yaml takes a file in your local folder called transform-filebeat.yaml and inserts it into the Docker container in a location that will be picked up and activated automatically.

Running the Docker container again will start the Collector with a configuration that attempts to connect to Filebeat via the Redis path.
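
On the Filebeat side, a minimal sketch of the corresponding output (using Filebeat's standard Redis output, and assuming the Redis instance is reachable from both Filebeat and the Collector) might look like:

# filebeat.yml (excerpt) - write events to the same Redis list the Collector reads from
output.redis:
  hosts: ["localhost:6379"]
  key: "filebeat"
  db: 0
  datatype: "list"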

  2. Enabling a custom transformation

Here is a snippet of a custom transformation relating to Google Cloud (GCP). Create a file called transform-gcpconvert.yaml:

transforms:
  transform_gcppubsub:
    inputs:
    - source_gcp_pubsub
    type: remap
    source: |
      .msg = parse_json!(.message)
      .msg = flatten!(.msg, "_")
      .msg = map_values(.msg, recursive: true) -> |value| { if is_string(value) { value } else { encode_json(value) } }
      tmp = .msg

And map it into the Secure60 Collector in the docker run command:

docker run -i -v ./transform-gcpconvert.yaml:/etc/vector/transforms-active/transform-gcpconvert.yaml --name s60-collector -p 80:80 -p 443:443 -p 514:514 -p 6514:6514 --rm -d --env-file .env secure60/s60-collector:1.07

For full example configuration contact the Secure60 Integrations team: integrations@secure60.io
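
The transform above takes its input from a source named source_gcp_pubsub (the name is taken from the transform's inputs). If that source is not already provided via the GCP_PUBSUB_* environment variables, a minimal sketch of defining it yourself, based on vector.dev's gcp_pubsub source, could be mapped in the same way (the file name, project, subscription and credential path are placeholders):

# transform-gcpsource.yaml - hypothetical file name, mapped into /etc/vector/transforms-active/
sources:
  source_gcp_pubsub:
    type: gcp_pubsub
    project: YOUR_GCP_PROJECT_ID
    subscription: YOUR_PUBSUB_SUBSCRIPTION
    credentials_path: /etc/vector/gcp-credentials.json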

Notes
  • To see Secure60 Collector startup debug information, drop the -d from the command line
  • View debug information
    • Enable the .env variable: DEBUG_OUTPUT=true to output the Events log just before they are sent to the Secure60 Ingest endpoint
    • For deep debug output enable the .env variable: VECTOR_LOG=debug
  • Naming of files and source / transformation blocks is important
    • Always start an integration input with the name incoming_
    • Always start a transformation with the name transform_
    • Always map files into /etc/vector/transforms-active/ within the container, as this location is automatically monitored for files and file changes
  • Sending Test Logs
    • Send a full file (eg large JSON object): curl -X POST -H "Content-Type: application/json" --data @./aws-cloudtrail-sample.json http://127.0.0.1
    • Send information to Syslog endpoint: nc -w0 127.0.0.1 514 <<< "Feb 8 14:30:15 server1 sshd[1234]: Accepted password for user123 from 192.168.1.100 port 12345 ssh2"

Secure60 Collector - Ports and traffic flow

In order to work in your environment the Secure60 Collector has the following port requirements:
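
Based on the sample docker run commands in this guide, the inbound ports are as follows (confirm the exact set for your configuration with the Integrations team):

  • 80 (HTTP) and 443 (HTTPS) - Event ingestion over HTTP(S), as used by the curl test example
  • 514 - Syslog
  • 6514 - Syslog over TLS
  • 5044 - Beats protocol (for example Filebeat)

The Collector also requires outbound connectivity to the Secure60 Ingest endpoint to deliver Events.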


Environment File Reference

S60_PROJECT_ID=<YOUR-PROJECT-ID>
S60_INGEST_TOKEN=<YOUR-INGEST-TOKEN>
ENABLE_GENERIC_NORMALISE=true
DEBUG_OUTPUT=false

ENABLE_SYSLOG
ENABLE_CUSTOM
ENABLE_CUSTOM_MATCH_FIELD
ENABLE_CUSTOM_MATCH_VALUE
ENABLE_AWS=false
ENABLE_AWS_MATCH_FIELD=
ENABLE_AWS_MATCH_VALUE=
ENABLE_NGINX_MATCH_FIELD
ENABLE_NGINX_MATCH_VALUE
ENABLE_NGINX=false
ENABLE_APACHE_MATCH_FIELD
ENABLE_APACHE_MATCH_VALUE
ENABLE_APACHE=false
ENABLE_M365=false
ENABLE_LINUX_MATCH_FIELD=
ENABLE_LINUX_MATCH_VALUE=
ENABLE_LINUX=false
ENABLE_CUSTOM_SYSLOG_MATCH_FIELD
ENABLE_CUSTOM_SYSLOG_MATCH_VALUE
ENABLE_CUSTOM_SYSLOG=false
ENABLE_DEBUG_MATCH_FIELD
ENABLE_DEBUG_MATCH_VALUE
ENABLE_DEBUG=false
GCP_PUBSUB_CREDENTIAL_PATH
GCP_PUBSUB_PROJECT
GCP_PUBSUB_SUBSCRIPTION
GOOGLEWORKSPACE_CREDENTIAL_PATH
GOOGLEWORKSPACE_PROJECT
GOOGLEWORKSPACE_SUBSCRIPTION
DATA_MASKING_ARRAY
ENABLE_DATA_MASKING_HASH
ENABLE_DATA_MASKING_X
DATA_MASKING_ENCRYPTION_ALGORITHM

Data Masking and Hashing

The Secure60 Collector implements the ability to mask / redact data, which prevents sensitive data from being ingested and stored in the Secure60 platform.

This solution is applied inside the customer environment, before any information reaches the Secure60 platform.

Purpose:

Features:

Data masking can be applied via 2 strategies: replacing values with a fixed mask (XXXX) or hashing values with a configurable algorithm.

Implementation:

Implement via Secure60 Collector environment file

DATA_MASKING_ARRAY - Array of field names to apply masking to
ENABLE_DATA_MASKING_X - Enable the Replacement feature that will replace values with XXXX
ENABLE_DATA_MASKING_HASH - Enable the Hashing feature that will apply a hashing algorithm to values

DATA_MASKING_ENCRYPTION_ALGORITHM - Optional selection of the hashing algorithm. Default: SHA3. Other options: SHA2, SHA, MD5.
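
For example, to hash the values of selected fields before they leave your environment, the environment file might include the following (the field names and the comma-separated list syntax for DATA_MASKING_ARRAY are illustrative - confirm the expected format with the Integrations team):

    DATA_MASKING_ARRAY=username,source_ip
    ENABLE_DATA_MASKING_HASH=true
    DATA_MASKING_ENCRYPTION_ALGORITHM=SHA3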
