Telegraf is a plugin-driven agent for collecting, processing, aggregating, and writing metrics and events. It ships as a single binary with no external dependencies, runs with a minimal footprint, and offers a plugin system that supports many popular services.
Telegraf is used to collect metrics from the system it runs on, applications, remote IoT devices and many other inputs. Telegraf can also capture data from event-driven operations. Once processed, Telegraf can send the metrics and events to various data stores, services and message queues.
## Plugins
Telegraf comes included with over 300 plugins ready to use. Users enable and configure whatever plugins their specific use case requires.
Below is an introduction to the inputs Telegraf can collect metrics from and the outputs it can send metrics to:
### Inputs
There are input plugins for over 200 different inputs. Below are examples of a few categories of plugins that are available:
- Applications: Interact with Apache Kafka, MongoDB, MySQL, PostgreSQL, Apache HTTP Server, NGINX, Postfix, and many others.
- Cloud: Connect with Amazon CloudWatch, Amazon ECS, Azure Storage Queue, Google Cloud PubSub, Salesforce, VMware vSphere and others.
- Containers: Inspect Kubernetes clusters, Docker, etc.
- Hardware: Collect information about a system’s CPU, memory, network, and disk utilization. Redfish and IPMI allow for collecting metrics from hardware directly as well.
- IoT: Gather from generic sensors, IPMI, KNX, MQTT, and other IoT sensor projects.
- Universal: The file, tail, listener, poller, exec, and execd plugins provide generic mechanisms for collecting metrics that an included plugin might not cover.
For a complete list of all available input plugins, check out the input plugins list on the documentation site.
### Outputs
Output plugins define where Telegraf will deliver the collected metrics.
One such output is InfluxDB, an open-source time series database well-suited for operations monitoring, application metrics, IoT sensor data, and real-time analytics. InfluxDB provides a place to store collected metrics and the ability to graph and alert on them. It is offered as a cloud service and as a download for running locally, making it a natural output for metrics collected by Telegraf.
Other example outputs include sending metrics to Apache Kafka, Amazon CloudWatch, Datadog, a file, or via AMQP, and many more. For a complete list of more than 50 available output plugins, check out the output plugins list on the documentation site.
### Processors & aggregators
Telegraf also has the concept of processors and aggregators. These plugins process and group data after it is collected and before it is sent to an output. This can be helpful when a user wishes to compute an average value, add a specific tag to data, or filter out certain data.
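As a rough sketch of what this looks like in practice, the fragment below uses the `override` processor to add a tag and the `basicstats` aggregator to compute averages; the tag name and value are examples, not defaults:

```toml
# Add a static tag to every metric (tag name/value here are illustrative)
[[processors.override]]
  [processors.override.tags]
    environment = "production"

# Emit the mean of each numeric field every 30 seconds,
# dropping the raw per-collection points
[[aggregators.basicstats]]
  period = "30s"
  stats = ["mean"]
  drop_original = true
```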
## Download
Telegraf is available for download on a wide range of operating systems and architectures. Telegraf comes as a single binary with no external dependencies, making it incredibly easy to deploy everywhere.
Archives containing the built binary are readily available for Microsoft Windows, Linux, macOS, and FreeBSD across various system architectures. Users can run Telegraf on everything from IoT devices like Raspberry Pis, routers, and sensors, through servers, PCs, and cloud instances, up to giant IBM Power and s390x mainframes.
| Operating System | Architecture Support |
|---|---|
| Linux | amd64, arm64, armv5, armv6+, i386, mips, mipsle, ppc64le, s390x |
| Microsoft Windows | amd64, i386 |
| FreeBSD | amd64, i386, armv7 |
There are also RPM and DEB packages available for a subset of Linux downloads.
## Config File
With Telegraf in hand, the next step is to provide a configuration file that outlines which plugins are enabled and how they are configured. The configuration file is a TOML-formatted file.
Here is a very simple configuration file that enables the collection of various system metrics and outputs the metrics using the file output:
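A minimal sketch along these lines (plugin options are left at their defaults; the output file path is just an example):

```toml
# Inputs: system metrics
[[inputs.cpu]]
[[inputs.disk]]
[[inputs.mem]]
[[inputs.net]]

# Output: write metrics to stdout and a file
[[outputs.file]]
  files = ["stdout", "/tmp/metrics.out"]
```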
This example enables the CPU, disk, memory, and network inputs and the file output.
### Generating a configuration file
The config subcommand can produce a Telegraf configuration file with all possible plugins and configuration options commented out, to help users get going quickly:
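For example, a command along these lines writes the full sample configuration to a file (the filename is just a convention):

```shell
telegraf config > telegraf.conf
```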
The user can then use this file to see all the possible plugins and configuration options. Then the user can uncomment whatever inputs and outputs are required.
For detailed information on the configuration file as well as using environment variables, see the configuration docs page.
## Using Telegraf
Here is a brief example of collecting metrics about the local system and writing them to a file, using the CPU, memory, disk, and network inputs and the file output.
### Configuration
First, generate a configuration file with only the specific inputs and outputs required. The config subcommand can take filters to reduce the size and only output the required sections. The following will produce a config file with only the four inputs and one output that this example uses:
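A command along these lines, using the config subcommand's `--input-filter` and `--output-filter` flags, would do it (the output filename is just an example):

```shell
telegraf config --input-filter cpu:mem:disk:net --output-filter file > telegraf.conf
```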
Users can separate multiple plugins of the same type with a colon (e.g. `cpu:mem:disk:net`) and pass a single colon (`:`) to omit all plugins of a specific type.
At this point, if a particular plugin requires additional configuration like credentials, hostnames, or other modifications, the user could edit the file to add those settings.
### Execution
To run telegraf, point it at the configuration file and run:
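Assuming the configuration file generated above, the invocation looks like this:

```shell
telegraf --config telegraf.conf
```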
Starting at the top, Telegraf prints the version it is running and all of the loaded plugins. Users first see the list of loaded inputs and outputs specified by the configuration file.
Next are the tags and agent-specific configuration values.
The final lines are the metrics themselves. With the File Output Plugin, metrics are written as they are collected, in InfluxDB Line Protocol, both to stdout and to /tmp/metrics.out; both destinations are configurable to other output types or locations.
## InfluxDB Line Protocol
The metrics output in the above case is in a format called line protocol. Line protocol is the format used by InfluxDB, and it is the representation Telegraf parses collected data into, so it is ready for use by any of the many outputs.
The format includes a measurement name, tag set, field set, and timestamp; each data point occupies a single line, and points are separated by newlines. Below is a breakdown of the various items:
- measurement: required, name of the metric itself
- tags: optional, key-value pairs that contain string metadata about the metric. InfluxDB will index these values.
- fields: at least one is required and contains key-value pairs. These are the actual metric data.
- timestamp: optional Unix timestamp for the data point. If the line does not include a timestamp, then the current system time is typically used.
Below is the general structure of the line protocol:
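A minimal sketch of the structure, followed by a hypothetical example line (the measurement, tag, and field names are made up for illustration):

```
measurement[,tag_key=tag_value[,...]] field_key=field_value[,...] [timestamp]

weather,location=us-midwest temperature=82 1465839830100400200
```

Here `weather` is the measurement, `location=us-midwest` is a tag, `temperature=82` is a field, and the trailing integer is the Unix timestamp in nanoseconds.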