Datadog Log Collection with Kubernetes

  1. Datadog uses STDOUT/STDERR to collect container logs.
  2. To support this, Spring Boot uses Logback appenders; a ConsoleAppender writes to System.out or System.err.
  3. Kubernetes writes container logs into directories under /var/log/pods.
  4. The Datadog Agent tails these Kubernetes log files (file creation and rotation are handled automatically by Kubernetes).
  5. You can add value to all your logs (raw and JSON) by sending them through a processing pipeline. Pipelines take logs from a wide variety of formats and translate them into a common format in Datadog.
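The Agent side of the steps above can be enabled with two environment variables on the Agent DaemonSet. A minimal sketch — the container name and image tag are illustrative, but both environment variables are the documented Datadog settings:

```yaml
# Fragment of a Datadog Agent DaemonSet pod spec (names/versions illustrative)
containers:
  - name: datadog-agent
    image: gcr.io/datadoghq/agent:7
    env:
      # Turn on log collection in the Agent
      - name: DD_LOGS_ENABLED
        value: "true"
      # Tail STDOUT/STDERR of all containers via /var/log/pods
      - name: DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL
        value: "true"
```

If you install the Agent with the official Helm chart, the equivalent values are `datadog.logs.enabled` and `datadog.logs.containerCollectAll`.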

Preprocessing of JSON logs occurs before logs enter pipeline processing. Preprocessing runs a series of operations based on reserved attributes, such as timestamp, status, host, service, and message. If you have different attribute names in your JSON logs, use preprocessing to map your log attribute names to those in the reserved attribute list.
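For example, a line produced by LogstashEncoder already uses names that Datadog's default preprocessing recognizes (field values here are illustrative):

```json
{
  "@timestamp": "2024-05-01T12:00:00.000Z",
  "level": "INFO",
  "message": "Order created",
  "logger_name": "com.example.OrderService",
  "thread_name": "http-nio-8080-exec-1"
}
```

`@timestamp` and `level` are in Datadog's default source lists for the reserved `timestamp` and `status` attributes; if your encoder emits different names, add them under Preprocessing for JSON logs.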

Appenders with LogstashEncoder:

<appender name="DATADOG_STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>

LogstashEncoder (from the logstash-logback-encoder library) is a Logback encoder that writes each log event as a single line of JSON, which Datadog can then map onto its reserved attributes.

Kubernetes:
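On the Kubernetes side, the `source` and `service` tags for a container's logs can be set with a Datadog Autodiscovery annotation on the pod. A minimal sketch — pod, container, and service names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders-app                  # illustrative
  annotations:
    # The container name in the annotation key must match spec.containers[].name
    ad.datadoghq.com/orders.logs: '[{"source": "java", "service": "orders"}]'
spec:
  containers:
    - name: orders
      image: example/orders:1.0     # illustrative
```

Setting `source: java` also selects Datadog's built-in Java integration pipeline for these logs.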

Datadog Agent:

  • The Datadog Agent is software that runs on your hosts. It collects events and metrics from hosts and sends them to Datadog, where you can analyze your monitoring and performance data.
  • Datadog automatically parses JSON-formatted logs during preprocessing, before they enter pipelines.