Introduction to Journald and Structured Logging
The mildly interesting depictions one finds in their journal
To start, journald’s minimalist webpage describes it as:
[a] service that collects and stores logging data. It creates and maintains structured, indexed journals.
With one of the sources of logging data being:
Structured log messages
Before diving into journald, what’s structured logging?
Imagine the following two hypothetical log outputs:
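The original example outputs did not survive, so here is a reconstructed pair in the same spirit (all names and values are invented). First, the plain-text version:

```
On July 1st 2017 at 8:00 AM, Ben Stiller searched for "cats" at /search?q=cats, which took 55 milliseconds
```

And the same event as a structured, JSON-encoded record:

```json
{"timestamp": "2017-07-01T08:00:00Z", "level": "INFO", "user": "Ben Stiller", "url": "/search?q=cats", "response_ms": 55}
```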
The first example is unstructured, or simple textual, logging. The date format makes it a tad contrived, but the log line contains everything in a rather human-readable format. From the line, I know when someone accessed what and how fast – in a sentence-like structure. If I were tasked with reading log lines all day, I would prefer the first output.
The second output is an example of structured logging using JSON. Notice that it conveys the same information, but instead of a sentence, the output is key-value pairs. Considering JSON support is ubiquitous, querying and retrieving values is trivial in any programming language, whereas one would need to meticulously parse the first output to ensure no ambiguity. For instance, not everyone has both a given and a last name, response-time units need to be parsed, the url is arbitrary, timezones need converting, etc. There are way too many pitfalls if one stored their logs in the first format – it would be too hard to consistently analyze.
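To make that concrete, pulling a single field out of a JSON log line is a one-liner in nearly any environment. A small sketch, assuming python3 is available (the log line itself is invented):

```shell
# Extract the "user" field from a JSON log line -- no custom parsing needed
echo '{"level": "WARN", "user": "Ben Stiller", "response_ms": 55}' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["user"])'
```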
One could massage their textual log format into a semi-structured output using colons as field delimiters.
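A semi-structured line in that style might look like this (an invented example, in level:date:user:url:milliseconds order):

```
INFO:2017-07-01:ben.stiller:/search?q=cats:55
```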
This may be a happy medium for those unable or unwilling to adopt structured logging, but there are still pitfalls. To give an example, if I wanted to find all the log statements with a WARN level, I’d have to remember to match against only the beginning of the log line, or I’d run the risk of matching a WARN in the user name or in the url. What if I wanted to find all of the searches by Ben Stiller? I’d need to be careful to exclude the lines where people are searching for “Who is Ben Stiller”. These examples are not artificial either, as yours truly has fallen victim to several of these mistakes.
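Both pitfalls are easy to demonstrate. The log lines below are invented, in the colon-delimited style discussed above (level:date:user:url:milliseconds):

```shell
# A tiny semi-structured log with two deliberate traps in it
cat > pitfall.log <<'EOF'
WARN:2017-07-01:ben.stiller:/search?q=cats:55
INFO:2017-07-01:jane.doe:/search?q=who+is+ben+stiller:30
INFO:2017-07-01:warner.smith:/search?q=WARN+logs:25
EOF

# Naive grep also matches the WARN inside the third line's url: 2 hits, not 1
grep -c 'WARN' pitfall.log

# Anchoring to the start of the line finds only the real WARN event: 1 hit
grep -c '^WARN' pitfall.log

# Searching for Ben Stiller naively also matches jane.doe's search query
grep -c 'ben.stiller' pitfall.log

# Comparing the user field exactly avoids the false positive
awk -F: '$3 == "ben.stiller"' pitfall.log | wc -l
```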
Let’s say that one does accomplish a level of insight from the textual format using text manipulations. If the log format were to ever change (e.g. transposing response time and url, logging more data, etc.), the log parsing code would break. So if you’re planning on gaining insight from text logs, make sure you define a rigorous standard first!
There is also the nice benefit of working with types in structured logging. Instead of working with only strings, JSON also has a numeric type, so one doesn’t need a conversion step when analyzing.
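A quick illustration of why that matters: when every value is a string, ordinary text tools compare values lexicographically, so “9” sorts after “10” unless you remember to convert:

```shell
# Lexicographic sort: "10" comes before "9", giving the wrong minimum
printf '9\n10\n' | sort | head -n 1    # prints 10

# Numeric sort gives the answer you actually wanted
printf '9\n10\n' | sort -n | head -n 1 # prints 9
```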
The only downsides that I’ve seen for structured logging (and specifically JSON structured logging) are log file size increases due to the added keys for disambiguation, and the format won’t be in a grammatically correct English sentence! These seem like minor downsides for the benefit of easier log analysis.
Now that we’ve established the case for structured logging, onto journald. Be warned, this is a much more controversial topic.
Journald is the logging component of systemd, which was a rethinking of Linux’s boot and process management. A lot of feathers were ruffled, and are still ruffled, by the movement towards systemd (1, 2, 3, 4, 5). Wow, so a multitude of complaints. There must be several redeeming qualities to systemd, because most distros are converging on it. I won’t be talking about systemd itself, but rather its logging component.
To put it simply, journald is a structured, binary log that is indexed and rotated. It was introduced in 2011.
Here’s how we would query the log for all messages written by a given systemd unit
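The unit name from the original example did not survive, so sshd here is purely illustrative:

```shell
journalctl -u sshd.service
```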
For all sshd messages since yesterday
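Something along these lines, assuming sshd runs under the sshd unit:

```shell
journalctl -u sshd --since yesterday
```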
To view properties for autossh and sshd messages since yesterday (output truncated to first event)
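A plausible invocation (the exact flags are my reconstruction): -o verbose prints every field journald stored for each entry, and -u may be repeated to match multiple units, so the output is truncated here with head:

```shell
journalctl -u autossh -u sshd --since yesterday -o verbose | head -n 25
```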
To find all events logged through the journal API for autossh. If a + is included in the command it means “OR”; otherwise entries need to match both expressions.
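A sketch of what that pair of queries could look like (the field values are assumptions):

```shell
# AND: entries must have arrived via the journal API *and* be tagged autossh
journalctl _TRANSPORT=journal SYSLOG_IDENTIFIER=autossh

# OR: the '+' turns the two matches into a disjunction
journalctl _TRANSPORT=journal + SYSLOG_IDENTIFIER=autossh
```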
Find all possible values written for a given field:
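For example, -F (--field) lists every distinct value the journal has recorded for a field; _SYSTEMD_UNIT is just one field worth trying:

```shell
journalctl -F _SYSTEMD_UNIT
```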
What I think about journald
I want journald to be the next big thing. To have one place on your server where all logs are sent* sounds like a pipe dream. No longer do I have to look up where logfiles are stored.
Journald has nice size based log rotation, meaning I no longer have to be woken up at night because a rogue log grew unbounded, which could degrade other services.
Gone are the days of arguing about what format logs should be in – these arguments would be replaced with discussions about what metadata to expose.
With journald I can cut down on the number of external services that each service talks to. Instead of having every service write metrics to carbon, metrics would be written to journald. This way applications don’t need to jump through the hoops of proper connection management: re-connect on every metric sent, hold a single persistent connection, or some sort of hybrid? By logging to journald, carbon or the log forwarder can be down, but metrics will still be written to the local filesystem. There is very little that would cause an absolute data loss.
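As a sketch of how low the barrier is, any process can hand a line to the journal via systemd-cat; the metric name and tag below are invented:

```shell
# Write a metric-like line to the local journal instead of a remote carbon socket
echo 'myapp.response_ms 55' | systemd-cat -t myapp-metrics -p info
```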
People can use the tools that they are most familiar with: some can use journalctl with the indexes on the local box, and others will want to see the bigger picture once the same logs are aggregated into another system.
* Technically the data may not be sent to a single file location as journald can be configured such that each user has their own journal – but journalctl abstracts that away such that users won’t know or care.
Complaints Against journald
- Journald can’t be used outside of systemd, which limits it to only newer distros that have adopted systemd. I have CentOS 6 servers, so it’s a hard no to use journald on those systems.
- Journald writes to a binary file that one can’t dissect with standard unix tools, resulting in difficulty if the log becomes corrupt. If the log is not corrupt, one can pipe the output of journalctl to the standard tools.
- There’s not a great story for centralizing journald files. The introduction mentioned copying the files to another server. People have found a way using journalctl -o json and sending the output to their favorite log aggregation service.
- A lot of third party plugins for journald ingestion for log management suites don’t appear well maintained.
- It invented another logging service instead of working with pre-existing tools. Considering Syslog can work with structured data – that’s one less reason to switch to journald.
- The data format is not standardized or well documented.
- Will not support encryption other than file-system encryption. If a user has access to the file system and has permission to read the log file, all logs will be available.
- No way to exclude sensitive information from the log (like passwords on the commandline) – though you’re probably doing something wrong if this is an issue.
- The best way to communicate with journald programmatically seems to be either through the C API or
With all these complaints, it may be a wonder why I lean towards advocating journald. By advocating structured data first, journald is setting the tone for the logging ecosystem. Yes, I know that journald is far from the first, but the simplicity of having a single, queryable, structured log baked into the machine is admirable.