Simple daemon for easy stats aggregation

StatsD

A network daemon that runs on the Node.js platform and listens for statistics, like counters and timers, sent over UDP or TCP and sends aggregates to one or more pluggable backend services (e.g., Graphite).

We (Etsy) blogged about how it works and why we created it.

Inspiration

StatsD was heavily inspired by the project of the same name at Flickr. Here's a post where Cal Henderson described it in depth: Counting and timing. Cal has since re-released the code: Perl StatsD.

Key Concepts

  • buckets Each stat is in its own "bucket". Buckets are not predefined anywhere; they can be named anything that will translate to Graphite (periods become folders, etc.)

  • values Each stat has a value. How it is interpreted depends on modifiers. In general, values should be integers.

  • flush After the flush interval timeout (defined by config.flushInterval, default 10 seconds), stats are aggregated and sent to an upstream backend service.
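Putting these concepts together, here is a minimal sketch of what a single stat looks like on the wire (the bucket name is illustrative):

```shell
# A counter with integer value 1 in the bucket "site.logins".
# Periods in the bucket name become folders in Graphite, so after a
# flush this appears under a dotted path like stats.site.logins.
printf 'site.logins:1|c\n'
```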

Installation and Configuration

  • Install Node.js
  • Clone the project
  • Create a config file from exampleConfig.js and put it somewhere
  • Start the Daemon:
    node stats.js /path/to/config

Usage

The basic line protocol expects metrics to be sent in the format:

<metricname>:<value>|<type>

So the simplest way to send in metrics from your command line if you have StatsD running with the default UDP server on localhost would be:

echo "foo:1|c" | nc -u -w0 127.0.0.1 8125
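Counters ("c") are only one of the metric types; timers, gauges, and sampled counters use the same line format. The payloads below (bucket names are illustrative) could each be piped to the same nc command shown above:

```shell
# timer: report that "glork" took 320 milliseconds
timer='glork:320|ms'
# gauge: set "gaugor" to the arbitrary value 333
gauge='gaugor:333|g'
# sampled counter: this counter is only being sent 1 time in 10
sampled='gorets:1|c|@0.1'
printf '%s\n' "$timer" "$gauge" "$sampled"
```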

More Specific Topics

Debugging

There are additional config variables available for debugging:

  • debug - log exceptions and print out more diagnostic info
  • dumpMessages - print debug info on incoming messages

For more information, check the exampleConfig.js.
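As a sketch, a minimal config that enables both flags and uses the console backend that ships with StatsD (so flushed stats are printed rather than forwarded) might look like this:

```javascript
{
  port: 8125,                        // UDP port to listen on (the default)
  backends: ["./backends/console"],  // print flushed stats to stdout
  debug: true,                       // log exceptions and extra diagnostic info
  dumpMessages: true                 // print each incoming message
}
```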

Tests

A test framework has been added using nodeunit and some custom code to start and manipulate statsd. Please add tests under test/ for any new features or bug fixes. Testing a live server can be tricky; attempts were made to eliminate race conditions, but it may still be possible to encounter a stuck state. If doing dev work, a killall statsd will kill any stray test servers in the background (don't do this on a production machine!).

Tests can be executed with ./run_tests.sh.

Meta

  • IRC channel: #statsd on freenode
  • Mailing list: statsd@librelist.com