5 things you need to know when working with big logs

With everything being logged, the logs on a busy server can get very big and very noisy. The bigger your logs, the harder it is to extract the information you want, so it is essential to have a few analysis techniques up your sleeve.

In the case of an outage, logs are indispensable to see what happened. If you’re under attack, it will be logged. Everything is logged, so it is essential to pay attention.
– From my last blog post on why there’s nothing quite like Logcheck.

These are our top five tips when working with large log files.

1. tail

The biggest issue with log files is their size; logs can easily grow into gigabytes. Most text editing tools normally used with other text files (vim, nano, gedit etc.) load the whole file into memory, which is not an option when the file is larger than your system’s memory.

The tail command avoids this by showing only the bottom lines of the log file. Rather than reading the whole file into memory, it loads just the final bytes of the file.

Log files nearly always have new lines appended to the bottom, meaning they are already in chronological order, so tail gives you the most recent logs.
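By default tail prints the last ten lines, and the -n flag lets you ask for more. A quick example (the line count here is purely illustrative) against the logfile.log used throughout this post:

tail -n 50 logfile.log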

A good technique is to use tail to export a section of the log (in this example, the last 5,000 lines). You can then comb through this smaller extract (perhaps with the further tools below) without needing to look through every single log line, reducing resource usage on the server.

tail -n 5000 nginx-access.log > ~/logfile.log

You may also find the head command useful; it is just like tail but for the top lines of a file.
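For example, a quick look at the first 20 lines (an arbitrary number) shows you how each log line is formatted before you start searching:

head -n 20 logfile.log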

2. grep is your best friend.

Perhaps you are only interested in certain lines in your log file; then you need grep.

For example, if you are only interested in a specific time period, this grep returns all of the logs from the 5th of March 2019 between 11:30 and 11:39.

grep "05/Mar/2019:11:3" logfile.log

When using grep you need to know what is in the log file and how it is formatted; head and tail can help there.

Be careful not to make assumptions: different logs are often written in different formats, even when they are created by the same application (for example, a web server’s access and error logs).

So far I have only used grep inclusively, but you can also use it to exclude lines. For example, the below command returns all logs from the 5th of March between 11:30 and 11:39 and then removes lines from two IPs. You can use this to exclude your office IPs from your log analysis.

grep "05/Mar/2019:11:3" logfile.log | grep -v '203.0.113.43\|203.0.113.44'

3. Unique identifiers

grep is at its best when working with unique identifiers; above we focused in on a specific timestamp. This can be extended to any unique identifier, but what should you look for?

A great unique identifier in web server logs is the visitor’s IP address, which can be used to follow their session and see all of the URLs they visited on the site. Unless they are trying to obfuscate it, their IP address appears everywhere the visitor goes, so it can also be used to collate logs across multiple servers.

grep "203.0.113.43" server1-logfile.log server2-logfile.log

Some software includes its own unique identifiers. For example, email software like Postfix logs a unique ID against each email it processes. You can use this identifier to collate all of the logs related to a specific email, and it will even pick up an email that has been stuck in the system for days.

This command retrieves all logs containing the unique identifier 123ABC123A from all files whose names start with “mail.log” (mail.log.1, mail.log.3.gz and so on); zgrep works like grep but can also search inside the compressed rotations.

zgrep '123ABC123A' mail.log*

Taking points 2 and 3 one step further with a little bit of command line magic, this command returns the IP addresses of the most frequent site visitors on the 5th of March between 11:00 and 11:59. grep selects the matching hour, awk prints just the IP address (the first field of each line), sort and uniq -c count how many times each address appears, and the final sort and tail leave the busiest addresses at the bottom of the output.

grep "05/Mar/2019:11:" nginx-access.log | awk '{ print $1 }' | sort | uniq -c | sort -n | tail

4. Logrotate

As I have said before, logs build up quickly over time, and to keep them manageable it is good to rotate them. This means that rather than one huge log file you have multiple smaller files. Logrotate is the system tool which does this; in fact you will likely find that it is already installed.

It stores its configs in /etc/logrotate.d, and most software provides its own config to rotate its logs.

If you are still dealing with large log files then it may well be time to edit these configs.

A quick win might be rotating the file daily rather than weekly.

You can also configure logrotate to rotate files based on size rather than date.
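As a rough sketch (the path and values here are illustrative rather than a recommendation), a config dropped into /etc/logrotate.d could rotate daily, rotate early if the file gets too big, and keep two weeks of history:

# illustrative example: point this at your own log files
/var/log/nginx/*.log {
    # rotate once a day rather than weekly
    daily
    # ...or sooner, if the file grows past 100 MB
    maxsize 100M
    # keep 14 rotated copies before deleting the oldest
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
}

Running logrotate -d /etc/logrotate.conf is a useful dry run: it prints what would be rotated without actually touching any files.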

5. AWS Athena

AWS Athena takes your log analysis to the next level. With it you can treat your text log files as database tables and search them with SQL queries, which is great when you are working with huge volumes of log data. To make this easier, Athena natively supports the Apache log format and only charges you for the queries you run.

AWS have lots of good documentation on setting up Athena and tying it into Apache logs.
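As a rough sketch of what a query looks like from the command line (the database, table, column and bucket names here are all hypothetical, and assume you have already created a table over your logs by following the AWS guide):

# apache_logs, its columns, the weblogs database and the bucket are hypothetical
aws athena start-query-execution \
  --query-string "SELECT client_ip, count(*) AS hits FROM apache_logs GROUP BY client_ip ORDER BY hits DESC LIMIT 10" \
  --query-execution-context Database=weblogs \
  --result-configuration OutputLocation=s3://example-athena-results/

The results are written to the S3 bucket, and aws athena get-query-results will fetch them once the query has finished.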

Fighting huge log files? Not getting the insights you want? Contact us and see how we can help.


Feature image by Ruth Hartnup licensed CC BY 2.0.
