Tag Archive for: Security

CVE-2021-44228 – Log4j2 vulnerability

This weekend, a very serious vulnerability in the popular Java-based logging package Log4j was disclosed. This vulnerability allows an attacker to execute code on a remote server; a so-called Remote Code Execution (RCE). Because of the widespread use of Java and Log4j, as well as the relative ease with which the vulnerability can be exploited, this is likely one of the most serious vulnerabilities seen on the Internet since Heartbleed and Shellshock.

This was a “zero day exploit”, meaning that the bad guys found this vulnerability and started exploiting it before a fix was available. NIST has catalogued it as CVE-2021-44228 with a 10/10 severity rating (the most severe).

Who’s affected

Put simply – Java applications that use the log4j package. It is almost impossible to conclusively list all affected software and services, given such widespread use and the multiple versions and implementations that affect the ability to exploit the vulnerability.

A community effort to collect responses from as many vendors and service suppliers as possible can be found here, though this list shouldn’t be taken as authoritative.

What you can do

Most importantly you should take immediate action to do the following:

  • Identify usage of affected log4j versions within your infrastructure.
  • Apply available patches from your software vendors, or consider disabling elements of your infrastructure/services until patches are available.
  • Monitor your systems/logs for signs of previous and ongoing exploit attempts.
  • Take immediate steps to restore any affected systems to a known good state.
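As a hedged starting point for the first and third steps, something like the following can help (the paths are illustrative and will vary by environment):

```shell
# Look for Log4j jars under an application directory; missing paths are
# fine on hosts without these dirs (hence the || true).
# Vulnerable releases are log4j-core 2.0-beta9 through 2.14.1.
find /opt -name 'log4j-core-*.jar' -print 2>/dev/null || true

# Look for exploit attempts in web logs: the classic payload contains a
# JNDI lookup string such as ${jndi:ldap://...}, though it is often obfuscated.
grep -iR 'jndi:' /var/log/apache2/ 2>/dev/null || true
```

Bear in mind that Log4j can also be bundled inside fat jars and other applications, so a filename search alone is not conclusive.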

Our Customers

We are actively following the steps above and triaging those affected. Those most severely affected will have already been contacted and we will continue to proactively monitor all infrastructure to ensure all systems are patched as soon as possible.

How to set-up fail2ban for a WordPress site

What is Fail2ban

Fail2ban is a tool which you can use to reduce the impact of attacks on your servers. Typically you configure it to monitor a log file for suspicious activity; once that activity crosses a threshold, it takes an action, such as blocking the source IP address in your firewall. It’s a good way to stop attacks early, though it doesn’t entirely prevent them.

Why use it to protect WordPress

Due to its popularity, WordPress is often the target of automated attacks. We often see brute-force attacks targeting xmlrpc.php or wp-login.php; these rely on making a huge number of requests in the hope that one will eventually succeed. Using strong passwords, especially for accounts with admin access, is important to reduce the risk from these attacks. Fail2ban can be used to slow attackers down. This helps for two reasons: it makes them less likely to succeed, and it reduces the load on the server caused by processing their requests.

Other options

Blocking an attack as far upstream as possible is always advantageous, as it saves resources, so we would typically favour a Web Application Firewall or webserver configuration over fail2ban or a WordPress plugin. However, every site, server and customer is different, so the best approach will depend on the exact configuration used and required.

Install & config fail2ban (for Ubuntu)

sudo apt-get install fail2ban

Fail2ban works by having a jail file which references a log file, a filter and an action.  By default fail2ban will protect sshd.  If you are restricting SSH access in another way then you might want to turn this off.  You can do so by creating the following as /etc/fail2ban/jail.local:

[sshd]
enabled = false

Fail2ban doesn’t come with a filter for WordPress.  I’d like to credit these two articles for providing a good starting point.  We see requests to ‘//xmlrpc.php’ (note the double slash) fairly frequently, so we tweaked the regex below to also flag them.  As this detects any request to wp-login.php or xmlrpc.php, it will also flag legitimate admin users when they log in; we’ll account for this in the jail configuration.

You can create the filter as /etc/fail2ban/filter.d/wordpress.conf

[Definition]
failregex = ^<HOST> .* "(GET|POST) /+wp-login\.php
            ^<HOST> .* "(GET|POST) /+xmlrpc\.php

The jail file has most of the configuration options:

  • logpath – in this case, the path to your Apache access log
  • action – adjust this if you’re using a different firewall or want to be sent an email instead; you can see available options in /etc/fail2ban/action.d/
  • maxretry – the number of requests within findtime seconds after which to ban
  • bantime – the number of seconds to ban for; with this action, how long the IP is blocked in iptables

As mentioned, the filter will catch both malicious and legitimate users.  We’re configuring maxretry fairly high and bantime relatively low to minimise the probability and impact of blocking a legitimate user.  Whilst this still allows attackers roughly one request every ten seconds, that is a fraction of what they’d make without fail2ban.

You can create the jail file as /etc/fail2ban/jail.d/wordpress.conf

[wordpress]
enabled = true
port = http,https
filter = wordpress
action = iptables-multiport[name=wordpress, port="http,https", protocol=tcp]
logpath = /var/log/apache2/all-sites-access.log
maxretry = 12
findtime = 120
bantime = 120

Restart fail2ban to load in your new config

sudo systemctl restart fail2ban

Checking your fail2ban set-up

Show the jails

sudo fail2ban-client status

Show info about the WordPress jail

sudo fail2ban-client status wordpress

List the f2b-wordpress iptables chain (this is created the first time fail2ban blocks an IP, and contains the currently blocked IPs)

sudo iptables -v -n -L f2b-wordpress
sudo ip6tables -v -n -L f2b-wordpress

View the log to see any IPs banned or flagged

sudo less /var/log/fail2ban.log


Feature image by Jussie D.Brito licensed CC BY-SA 2.0.

Removing support for TLS 1.0 and TLS 1.1


For security reasons, it is best practice to disable TLS 1.0 and TLS 1.1, but before you do you may need to weigh up the risk to traffic from old browsers.

After disabling TLS 1.0 and TLS 1.1, any visitors using old browsers won’t be able to access your site.  If you are accepting credit card payments through your website then your customers’ security is more important, but if you run a public information site then this may not be the case.
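For reference, once you have decided to go ahead, disabling the old protocols is usually a one-line change in your webserver config. The directives below are illustrative; exact file locations vary with your setup, and TLSv1.3 assumes a sufficiently recent nginx/OpenSSL build:

```
# nginx (inside the relevant http or server block):
ssl_protocols TLSv1.2 TLSv1.3;

# Apache (mod_ssl):
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
```

Remember to reload the webserver after the change and re-test with a tool such as Qualys SSL Labs.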

Don’t I always want the best security?

Please don’t get us wrong. We are NOT advocating blindly reducing security. This post is very much a response to customers who come to us wanting changes that will break their sites in order to get a perfect score or tick a compliance box. We can usually come up with a best-of-both-worlds solution once we show the exact implications of the change.

What’s the fuss about?

GlobalSign’s What’s Behind the Change? paragraph sums this up nicely:

Various vulnerabilities over the past few years (e.g., BEAST, POODLE, DROWN…we love a good acronym, don’t we?) have had industry experts recommending disabling all versions of SSL and TLS 1.0 for a while now. PCI Compliance was another driving factor. On June 30, 2018, the PCI Data Security Standard (DSS) required that all websites needed to be on TLS 1.1 or higher in order to comply.

RFC 7525, from 2015, stipulates that implementations should not use TLS 1.0 or TLS 1.1:

   o  Implementations SHOULD NOT negotiate TLS version 1.0 [RFC2246];
      the only exception is when no higher version is available in the
      negotiation.
      Rationale: TLS 1.0 (published in 1999) does not support many
      modern, strong cipher suites.  In addition, TLS 1.0 lacks a per-
      record Initialization Vector (IV) for CBC-based cipher suites and
      does not warn against common padding errors.
   o  Implementations SHOULD NOT negotiate TLS version 1.1 [RFC4346];
      the only exception is when no higher version is available in the
      negotiation.
      Rationale: TLS 1.1 (published in 2006) is a security improvement
      over TLS 1.0 but still does not support certain stronger cipher
      suites.

Qualys SSL Labs have reduced their grading for servers which support TLS 1.0 or TLS 1.1.

Assessing the risk

Who won’t be able to access my website if I disable TLS 1.0 or TLS 1.1? Generally speaking, browsers from before 2013 will have trouble.  The most popular affected clients are old Android phones and old versions of Windows running Internet Explorer 10.  For the exact Android versions and other affected clients, this is a nice breakdown.  As you’d expect, the number of visitors with these old clients will vary according to your user base, so it’s best to check your site’s analytics to inform your decision.

Again, you can take into account how important encryption is for your website.  For example, at the time of writing it’s interesting to note that paypal.com has removed support for TLS 1.0 & 1.1 whilst google.com has not.
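You can also check a site directly with openssl (the hostname below is an example). A failed handshake means the server has that protocol disabled; note that some newer openssl builds have TLS 1.0 support compiled out entirely, in which case this tests your client rather than the server:

```shell
# Attempt a TLS 1.0-only handshake; the || branch fires when it is refused
# (or when the connection fails for any other reason, e.g. no network).
openssl s_client -connect www.example.com:443 -tls1 </dev/null >/dev/null 2>&1 \
    && echo "TLS 1.0 accepted" || echo "TLS 1.0 refused"
```

Swap -tls1 for -tls1_1 or -tls1_2 to test the other protocol versions.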


So what does this mean?  Let’s give some examples…

If security is important to you; perhaps you have an e-commerce site taking payments, or you are an IT consultancy like ourselves where people wish to share private information. You should disable old SSL/TLS protocols so that the only way people can communicate with your site is as secure as possible.

If accessibility is important to you; perhaps you are trying to share public information, be it a marketing or public resources site. It may be worth supporting old protocols to allow your message to be shared as widely as possible.

Remember: it may typically be called a sales “funnel”, but traffic doesn’t have to end up in just one place. Users without support for the right levels of security can be redirected to alternative pages where they can be contacted in other ways.  Why lose a sale when you don’t have to!


We’ve intentionally painted with broad strokes in this blog post.  We’re happy to give specific advice if you contact us and feel free to leave a comment 🙂


Common warning signs before server outages

Everyone knows that server outages and server down time cost. It directly affects your business in a number of ways including:

  • Loss of opportunities
  • Damage to your brand
  • Data loss
  • Lost sales
  • Lost trust

It’s essential to stay on top and ahead of any potential downtime.

Here are three areas where you need to be ahead of the curve:

Know your limits / server resources

Physical resource shortages

A common cause of downtime is from running out of server resources.

Whether it is RAM, CPU, disk space or something else, when you run out you risk data corruption, crashing programs and severe slowdowns, to say the least. It is essential to monitor your server resources regularly.

One of the most important, yet overlooked, metrics is disk space. Running out of disk space is one of the most preventable issues facing IT systems in our opinion.

When you run out of disk space, your system can no longer save files, losing data and leading to data corruption.

Often your website might still look like it is up and running, and it’s only when a customer interacts with it – perhaps uploading new data or adding an item to a shopping basket – that you find it fails to work.

We see this happen most frequently when there is a “runaway log file” that keeps expanding until everything on the server stops!

CMS systems like Magento fall prey to this particularly often, as they tend to keep unchecked application logs.

Internally, we record all server resource metrics every 10 seconds onto our MINDER stack and alerts will be raised well in advance of disk space running out. You don’t need to be this ‘advanced’ – you could simply have a script check current disk space hourly and email you if it is running out.
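A minimal version of that hourly check might look like this (the threshold and filesystem are examples; pipe the output to `mail` from cron to get the email):

```shell
#!/bin/sh
# Warn when root filesystem usage crosses a threshold (example: 90%).
THRESHOLD=90
USED=$(df -P / | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "WARNING: disk usage on $(hostname) is at ${USED}%"
fi
```

Drop the script into /etc/cron.hourly/ (or an equivalent cron entry) and it will stay silent until there is something to report.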

Configured resource shortages

Another common cause of resource limits is a misconfigured server.

You could have a huge server with more CPU cores, RAM and storage than you could dream of using, but if your software isn’t configured to use it, it won’t matter.

For example, if you were using PHP-FPM and hadn’t changed its default configuration, it would only have five processes available to serve PHP. In the case of a traffic spike, the first five requests would be served as normal, but anything beyond the fifth would be queued until one of the first five had been served, needlessly slowing the site down for visitors.

Issues like this are often flagged up in server logs, letting you know when you hit these configured limits, so it is good to keep your eyes on them. These logs can also indicate that your site is getting busier and help you to grow your infrastructure in good time, along with your visitors.

You might be thinking, “why are there these arbitrary limits getting in my way? I don’t need these at all”.

Well, it is good to have these limits so that in the case of an unusual traffic spike everything will run slowly, but importantly it will work! If they are set too high, or not set at all, you might hit the aforementioned “physical limits”, risking data corruption and crashes.

Did you know that, by default, NGINX runs with just one single-threaded worker?
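Both of the defaults mentioned above live in plain config files and are quick to raise once you know your server’s capacity. The paths and values here are illustrative only:

```
; PHP-FPM pool config, e.g. /etc/php/7.2/fpm/pool.d/www.conf
; pm.max_children defaults to 5
pm = dynamic
pm.max_children = 20

# nginx, /etc/nginx/nginx.conf -- defaults to a single worker
worker_processes auto;
```

Remember to reload each service after changing its configuration, and size the limits against your available RAM rather than guessing.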


Know your suppliers

As a small business, it is normally impossible to do everything in house – and why would you want to, when you need to focus on your business?

So it is good to step back every once in a while and document your suppliers.

Even if you only own a simple website, suppliers could include:

  • Domain registrar (OpenSRS, Domainbox, …)
  • DNS providers (Route 53, DNS Made Easy, …)
  • Server hosting (Rapidswitch, Linode, AWS EC2, …)
  • Server maintenance (Dogsbody Technology, …)
  • Website software updates (WordPress, Magento, …)
  • Website plugin updates (Akismet, W3 Total Cache, …)
  • Content Delivery Network (Cloudflare, Akamai, …)
  • Third parties (Sagepay, Worldpay, …)

All of these providers need to keep their software and/or services up to date. Some will cause more impact on you than others.

Planned maintenance

Looking at server hosting, all servers need maintenance every now and again, perhaps to load in a recent security update or to migrate you away from ageing hardware.

The most important point here is to be aware of it.

All reputable providers will send notifications about upcoming maintenance windows and depending on the update they will let you reschedule the maintenance for a more convenient time – reducing the effect on your business.

It is also good to have someone (like us) on call in case it doesn’t go to plan. Maintenance work might start in the dead of night, but if no one realises it’s still broken at 09:00, heads might roll!

Unplanned maintenance

Not all downtime is planned. Even giants like Facebook and Amazon have unplanned outages now and again.

This makes it critical to know where to go if your provider is having issues. Most providers have support systems where you can reach their technical team. Our customers can call us up at a moment’s notice.

Another good first port of call is a provider’s status page, where you can see any current (as well as past or future) maintenance or issues. For example, if you use Linode you can see issues on their status page here.

Earlier this year, we developed Status Pile, a webapp which combines provider status information into one place, making it easier for you to see issues at a glance.

Uptime monitoring

This isn’t really a warning sign, but it’s impossible to foresee everything. The above areas are great places to start, but they can’t cover you for the unexpected.

That’s where uptime monitoring comes in. Regardless of the cause, you need to know when your site goes down and you need to know fast.

We recommend monitoring your website at least minutely with a provider like Pingdom or AppBeat.

Proper configuration

Just setting up uptime monitoring is one thing, but it is imperative to configure it properly. You can tell someone to “watch the turkey in the oven” and they will dutifully watch the turkey burn!

I’ve seen checks which make sure a site returns a webpage, but if that page says “error connecting to database” it doesn’t matter!

Good website monitoring checks that the page returned includes the correct status code and the expected site content. If your website only talks to a backend application (perhaps a Docker container) for specific actions, you should test those actions specifically as well.
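As a sketch, a proper check verifies both the status code and a content marker that only appears when the site is genuinely working; the URL and marker string below are examples:

```shell
# Fetch the page, then require HTTP 200 AND a known marker string
# (e.g. text rendered from the database, not from a static error page).
URL="https://www.example.com/"
STATUS=$(curl -s -o /tmp/check_body.html -w '%{http_code}' "$URL" 2>/dev/null)
if [ "$STATUS" = "200" ] && grep -q "Example Domain" /tmp/check_body.html; then
    echo "OK"
else
    echo "FAIL: status $STATUS"
fi
```

Commercial monitoring providers offer the same content-match option; the point is to use it rather than only pinging the homepage.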

Are you checking your entire website stack?


Who is responsible?

A key part of uptime monitoring – and a number of other items I have mentioned – is that it alerts the right people and that they action those alerts.

If your uptime alerts flag an outage and are sent to an accounts team, it’s unlikely they’ll be able to take action. Equally, if an alert comes in late in the evening when no one is around, your site might be down until 09:00 the next morning.

This is where our maintenance service comes in. We have a support team on call 24/7, ready to jump on any issues.


Phew, that was a lot! We handle all of this and more – contact us and see how we can give you peace of mind.

Feature image by Andrew Klimin licensed CC BY 2.0.

Python 2 will go end of life on 01 Jan 2020

Quick Public Safety Announcement: Python 2.7 goes end of life on 01 Jan 2020.  This is the end of the road for Python 2.x – there won’t be a version 2.8.

This means any Python code that’s still on 2.x needs updating to Python 3.  Any code that isn’t moved over won’t receive security updates so will inevitably become insecure.

Identify your code

If you’ve got a lot of code it’s worth taking the time to check what’s where and which version of Python it’s using.

Python 3 was released at the end of 2008.  Adoption has been slow; one factor has been that all of your dependencies need to support Python 3 before you can move.  Now that we’re over ten years down the road, this is much less likely to be an issue.

You can start off by checking code that has been written more recently; hopefully this will have been written for Python 3.  A survey by JetBrains shows that between 2017 and 2018 the number of developers who mostly used Python 2 fell from 25% to just 16%.  It’s also interesting to note the divide between use cases, with data science showing better adoption than both web and dev-ops.

Don’t forget old code

Unfortunately the numbers above are for code that developers are writing now.  We’re also concerned with code that was written many years ago and hasn’t recently had any major changes.  Looking at the number of packages downloaded, instead of what developers are mostly using, gives a different picture: the numbers are closer to 50/50, with the trend between data science and dev-ops still clear.  TensorFlow is most often downloaded for Python 3, whilst botocore is heavily Python 2.  Botocore is heavily used for API access to cloud providers such as AWS.

If all of your recent code is Python 3 it’s worth having a good dig around for places old code might be hiding.
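One crude but effective way to dig for old code is to try byte-compiling everything with Python 3 and list the files that fail. The directory is an example, and this only gives a lower bound: plenty of Python 2 code is also syntactically valid Python 3, so it still needs testing.

```shell
# Flag .py files that don't even parse under Python 3.
# (Simple word-splitting loop; a sketch, not robust to spaces in names.)
for f in $(find . -name '*.py' 2>/dev/null); do
    python3 -m py_compile "$f" 2>/dev/null || echo "needs porting: $f"
done
```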

What are the steps to update to Python 3?

  • The first step is to make sure any packages you’re using support Python 3.  A tool such as caniusepython3 should show you where the issues are.
  • After that, depending on the complexity of your code, you can update it by hand or use a tool such as Futurize to help with the conversion.

A key part of smoothly updating is to have a good testing process so you can quickly find and fix the bits that unexpectedly break.  See the porting guide for more info.


Feature image by See1,Do1,Teach1 licensed CC BY 2.0.

Tripwire – How and Why

Open Source Tripwire is a powerful tool to have access to; it is even used by the MOD to monitor systems.  The tool is based on code contributed by Tripwire, a company that provides security products and solutions.  If you need to ensure the integrity of your filesystem, Tripwire could be the perfect tool for you.

What is Tripwire

Open Source Tripwire is a popular host based intrusion detection system (IDS).  It is used to monitor filesystems and alert when changes occur.  This allows you to detect intrusions or unexpected changes and respond accordingly.  Tripwire has great flexibility over which files and directories you choose to monitor which you specify in a policy file.

How does it work

Tripwire keeps a database of file and directory metadata.  It can then be run regularly to report on any changes.

If you install Tripwire from Ubuntu’s repo as per the instructions below, a daily cron job will be set up to send you an email report.  The general view with alerting is that no news is good news, but due to the nature of Tripwire it’s useful to receive the daily email; that way you’ll notice if Tripwire is ever disabled.

Before we start

Before setting up Tripwire please check the following:

  • You’ve configured email on your server.  If not you’ll need to do that first, we’ve got a guide.
  • You’re manually patching your server.  Make sure you don’t have unattended upgrades running (see the manual updates section), as unless you’re co-ordinating Tripwire with your patching process it will be hard to distinguish expected from unexpected changes.
  • You’re prepared to put some extra time into maintaining this system for the benefit of knowing when your files change.

Installation on Ubuntu

sudo apt-get update
sudo apt-get install tripwire

You’ll be prompted to create your site and local keys; make sure you record them in your password manager.

In your preferred editor open /etc/tripwire/twpol.txt

The changes you make here depend on what you’re looking to monitor.  The default config has good coverage of system files but is unlikely to be monitoring your website files, if that’s something you wanted to do.  For example, I’ve needed to remove /proc and some of the files in /root that didn’t exist on the systems I was monitoring.
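For example, a rule to monitor website files might look like this when appended to twpol.txt (the rule name, severity and path are illustrative; $(ReadOnly) is one of the property masks defined in the default policy):

```
(
  rulename = "Website files",
  severity = 80
)
{
  /var/www/html -> $(ReadOnly) ;
}
```

Any legitimate deployment to that path will then show up in the next report, so co-ordinate the rule with your release process.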

Then create the signed policy file and then the database:

sudo twadmin --create-polfile /etc/tripwire/twpol.txt
sudo tripwire --init

At this point it’s worth running a check. You’ll want to make sure it has no errors.

sudo tripwire --check

Finally, I’d manually run the daily cron job to check the email comes through to you.

sudo /etc/cron.daily/tripwire

Day to day usage

Changing files

After you make changes to your system, you’ll need to run a report to check what Tripwire sees as changed.

sudo tripwire --check

You can then update the signed database.  This will open up the report allowing you to check you’re happy with the changes before exiting.

sudo tripwire --update -r /var/lib/tripwire/report/$HOSTNAME-yyyyMMdd-HHmmss.twr

You’ll need your local key in order to update the database.

Changing policy

If you decide you’d like to monitor or exclude some more files you can update /etc/tripwire/twpol.txt.  If you’re monitoring this file you’ll need to update the database as per the above section.  After that you can update the signed policy file (you’ll need your site and local keys for this).

sudo tripwire --update-policy /etc/tripwire/twpol.txt


As you can see, Tripwire can be an amazingly powerful tool in any security arsenal.  We use it as part of our maintenance plans and encourage others to do the same.


Feature image by Nathalie licensed CC BY 2.0.

Password Managers: What, How & Why?

So it’s 2019 and the new year’s resolutions are on hold; Veganuary is over and Marie Kondo is helping us declutter our servers and our lives. We expect a few new websites and apps have caught your attention, and you’ve created an account using a unique, strong 12+ character password. Right?

Unfortunately, data breaches are as prevalent now as they ever were. If you are still trying to memorise all your passwords, or writing them down, or – the big no-no – reusing them, then make 2019 the year you improve your relationship with passwords.

Today we are talking about Password Managers as a method of creating and storing your passwords.

What is a password manager?

A password manager is an app, device, or cloud service that stores your passwords in an encrypted vault that can only be unlocked with a single master password. This means you only have to remember one ultra-secure password, not hundreds.

How does a Password manager work?

The video below (from 2017) clearly explains why you should stop memorising your passwords and why a password manager may be a great first step to managing and securing them.

Two key points:

  • The password manager creates random passwords for you – a password manager isn’t a place to store your own made-up passwords; it’s a place to create random, computer-generated ones. I don’t even know my own passwords, I just know one master password.
  • A password manager can store other information too – for example, the security questions some websites ask for: mother’s maiden name, first pet, first school, where you met your husband/wife. Provided these aren’t being used to prove your identity, they can be completely fictitious and different for every account you set up.

Adding a further level of security

Our other blog on Multi-Factor Authentication explains a further level of security you can use where it is offered.

A 2018 poll of 2,000 UK adults found that more than three quarters did not see the point of ‘unnecessary’, ‘overly complicated’ internet security measures; ironically, 46% had been victims of banking fraud. I’ll let you draw your own conclusions!

Don’t let this be you. Even if a password manager doesn’t appeal, at least use unique, strong 12+ character passwords. With a password manager you can easily have a 64-character password, and it would take 4 untrigintillion years to crack.  If it’s easy to create and “remember”, why wouldn’t you?
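For illustration, here’s how a long random password can be generated on the command line (the character set and length are examples; your password manager does the equivalent for you):

```shell
# Emit 64 random characters drawn from a limited, safe character set.
LC_ALL=C tr -dc 'A-Za-z0-9!@#%^&*_-' < /dev/urandom | head -c 64; echo
```

Because you never need to type it, length costs you nothing.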

And finally why 12+ characters?

A 12-character unique random password would take a computer about 3 thousand years to crack, though this true story from 2017 proves it could take a whole lot longer. Always be aware that a 3-thousand-year password today may well be a 300-year password in 10 years’ time, so the more characters the better.



Multi-Factor Authentication And Why You Should Use It

With ever-growing portions of our lives spent on the internet, or using services that depend on it, keeping your online accounts secure has never been more important. A breach of a key personal account can have devastating effects on our lives. Think identity theft, or embarrassing information/media being leaked.

One of the most effective solutions to this problem is Multi-Factor Authentication, or MFA (sometimes written 2FA for 2 Factor Authentication).

What is MFA?

MFA is a process by which more than one piece of information is required in order to authenticate against a service or provider.

What’s the problem?

In days of old, and still on less tech-savvy sites on the internet, a single username and password combination would be sufficient to grant you access to an account. Now in an ideal world, everybody would use lengthy and difficult to guess passwords, using different passwords for every service. But humans will be human, and take the easier route of using shorter, easy to remember, and worst of all common passwords. This inevitably leads to accounts being compromised when those common passwords are tried, or when the attacker reads the post-it that’s stuck to the bottom of your monitor…

How does MFA help?

MFA helps to resolve this problem by requiring a second piece of information; a second factor. This second factor can be many different things, with different sites offering the choices they think best. The most common are:

  • email
  • SMS
  • Automated phone call
  • Mobile device

How does it work?

Upon entering your valid username and password combination, the site or application will ask you for your second factor. If you provide this second factor correctly, then you will be authenticated. If you provide the wrong information, or no information at all, then you are denied. Simple right?

Isn’t this essentially just a second password?

Great question! Some sites may just require a second piece of text for your second factor, and in that case it is essentially just a second password, yes. However, good MFA is usually configured so that it requires something you know and something you have. For example, a password and an SMS. Using SMS as the second factor requires the user to have the mobile phone with the number configured on the account; the same goes for a phone call. This means that if somebody learns your password, it is useless unless they also have your unlocked mobile phone.

The next thing to consider is that the second factor changes regularly and often. When a provider sends you an SMS, it is usually valid for a short period of time, say 10 minutes. If you wish to log in after this time, you must receive a new SMS with a new passcode. This prevents people from writing the second factor down, as it would be useless a short while later, and also means that if an attacker were to find out the second factor, they would have a very short window in which to log in.

Side note: though we’ve used SMS as an example here, there’s a growing movement of people who consider it insecure due to demonstrated attacks which are able to bypass this second factor. As with any security procedure, you should always consider its merits and potential weaknesses before putting it in place yourself.

In summary, MFA is both a simple and effective way of keeping your online accounts secure. We strongly recommend everyone enables it where possible. You should still continue to use strong passwords and follow security best practices too.

Feature image background by ChrisDag licensed CC BY 2.0.



Manual patching vs auto patching

Everyone agrees that keeping your software and devices updated is important.  Updates can be installed manually or automatically.  People assume that automatic is the better option; however, both have their advantages.

I’m Rob, and I look after maintenance packages here at Dogsbody Technology. I want to explain the advantages of the two main patching approaches.

What to Patch

Before we get into the differences of how to patch it’s worth discussing what to patch.

Generally speaking, we want to patch everything.  A patch is produced for a reason: to fix a bug or a security issue.

Sometimes patches add new features to a package, and this is when issues can occur.  Adding new features can cause things to break (usually due to broken configuration files).

Identifying whether a patch release is a bug fix, a security fix or a new feature can be hard; in some cases a patch is all three.  Some operating systems try to separate or tag security patches, however our experience shows that these tags are rarely accurate.

One of the reasons we like manual patching so much is that it allows us to treat each patch/customer/server combination independently and only install what is required, when it is required.

Auto Patching Advantages

The server checks and updates itself regularly (hourly/daily/weekly).

  • Patches can easily be installed out of hours overnight.
  • Patches are installed during the weekend and bank holidays.
  • Perfect for dev environments where downtime is OK.
  • Perfect for use in Continuous Integration (CI) workflows where new patches can be tested before being put into production.

Our automatic patching strategy is typically to install all patches available for the system, as that is the only sure way to know you have all the security patches required.
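On Ubuntu, automatic patching of this kind is typically handled by the unattended-upgrades package; once it is installed, enabling it comes down to two lines of APT configuration (the file path shown is the usual default):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which package origins get upgraded automatically (security only, or everything) is then controlled in 50unattended-upgrades in the same directory.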

Manual Patching Advantages

A notification (e-mail or internal ticket) is sent to the server admin who logs onto the server and installs the latest updates.

  • Patches can be held during busy/quiet periods.
  • The admin can ensure that services are always restarted to use the patch.
  • The admin can search for dependent applications that may be using a library that has been patched (e.g. glibc patches)
  • The admin is already logged onto the server ready to act in case something does break.
  • Kernel patches that require a reboot (e.g. Meltdown or Stack Clash) can be scheduled in and their impact mitigated.
  • Configuration changes can be reviewed and new options implemented as they are released, catching issues before something tries to load a broken configuration file.
  • Perfect for production environments where you need control. Manual patching works around your business.

Because we manually track the packages used by each customer, we can quickly identify when a patch is a security update for that specific server.  We typically install security updates on the day they are released, and install non-security updates at the same time so the system has the latest and greatest.
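As a sketch of what such a manual session can look like on a Debian-family server (the `lsof` step is one common heuristic for finding processes on old library copies, not an exhaustive check):

```shell
# 1. Refresh package lists and preview what would change (dry run)
sudo apt-get update
apt-get --just-print upgrade

# 2. Install the updates
sudo apt-get upgrade

# 3. After a library patch (e.g. glibc), look for processes still
#    mapping the deleted pre-patch copy -- lsof marks these "DEL" --
#    so the right services can be restarted
sudo lsof 2>/dev/null | grep 'DEL.*libc' || true

# 4. Note the running kernel version; if a newer one was just
#    installed, a reboot needs scheduling
uname -r
```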


Are you unsure of your current patch strategy? Unsure what the best solution is for you? Contact us today!


Feature image background by Courtnei Moon licensed CC BY 2.0.

Intel vulnerabilities (Meltdown & Spectre)

On 3rd January 2018 engineers around the world scrambled to respond to the announcement that most CPUs on the planet had a vulnerability that would allow attackers to steal data from affected computers.  Almost two weeks later we know a lot more, but the outlook is still bleak.

Am I vulnerable?

Almost definitely.  While only Intel CPUs are affected by the Meltdown vulnerability (CVE-2017-5754), CPUs made by AMD, ARM, Nvidia and other manufacturers are all affected by the Spectre vulnerabilities (CVE-2017-5753 & CVE-2017-5715).

Additionally, Spectre is a collection of vulnerabilities, and only the two easiest-to-implement attacks are currently being patched for.  There are many ways to exploit Spectre and most do not have an easy fix. The Spectre mitigations are also responsible for the CPU slowdowns, as they restrict a major part of the CPU responsible for its speed (speculative execution).

There are a few exceptions for CPUs not affected by these vulnerabilities, however so far these have all been low-powered ARM devices such as the Raspberry Pi.

It is worth pointing out that while most computers, servers & mobile phones are vulnerable, an attacker would still have to be able to run code on the same CPU you are using in order for you to be affected. For cloud computing providers this is a big issue, as the same CPU is shared by many guest systems. For desktop systems this is a problem because most websites nowadays require that browsers run untrusted Javascript.  For dedicated servers used by one company, however, the only code that should be running on the system is trusted code. While this doesn’t make dedicated servers any less vulnerable, it does severely reduce the attack surface.
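On Linux you can ask the kernel itself what it thinks its exposure is: kernels that carry the fixes export a status interface under sysfs. A quick check might look like this (the interface was added alongside the mitigations, so its absence usually means an unpatched kernel):

```shell
# Report the kernel's view of its Meltdown/Spectre mitigation status.
# The sysfs interface only exists on kernels that carry the fixes.
for f in /sys/devices/system/cpu/vulnerabilities/*; do
    [ -e "$f" ] || { echo "no vulnerabilities interface: kernel predates the fixes"; break; }
    printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
done
```

Each entry reports something like `Mitigation: PTI` or `Vulnerable`, per vulnerability.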

How does it work?

Better people than us have already covered this.  We recommend these two blog posts…

How do I fix this?

You replace your CPU.  Seriously! This is currently the only 100% guaranteed way to be free of these vulnerabilities.  However, there currently aren’t any replacement CPUs that aren’t vulnerable! This issue may well speed up some providers’ deprecation of old technology.

Patches for the Meltdown vulnerability are now available for all major operating systems.  Make sure you have installed them and rebooted so that the patch is loaded.

If you are using any sort of virtualisation or cloud infrastructure then make sure that your host is patched too. Most cloud providers are announcing reboots at very short notice.

Patches for the Spectre vulnerabilities are still dribbling out, and new patches will likely be required for years to come as new fixes are developed.  The current two Spectre patches include a microcode update for the CPU firmware itself.  This firmware update should still be shipped via the standard operating system updates.  These patches will also require systems to be rebooted (again).
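To confirm whether a microcode update has actually been loaded on a Linux machine, you can check the revision the CPU reports (the `microcode` field is exposed on x86 systems; other architectures won't show it):

```shell
# Show the microcode revision currently loaded into the CPU (x86 Linux)
grep -m1 -i microcode /proc/cpuinfo 2>/dev/null \
    || echo "microcode field not exposed on this CPU/architecture"

# Kernel log lines recording a microcode load, if any (may need root)
dmesg 2>/dev/null | grep -i microcode || true
```

Compare the reported revision before and after patching: if it hasn't changed after a reboot, the firmware part of the fix isn't in effect.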

But I’m a customer!

Don’t worry, we got you.  We are actively working with all our customers to patch systems and mitigate issues.


In tracking these vulnerabilities and writing this blog post we built up a comprehensive timeline of events, linking to sources of more information that may be useful…

  • Between Aug 2016 & Jun 2017 – Multiple vulnerabilities are discovered and published by multiple researchers, mostly building on each other’s work.
  • 01 Feb 2017 – CVE numbers 2017-5715, 2017-5753 and 2017-5754 are assigned to/reserved by Intel to cover these vulnerabilities.
  • 01 Jun 2017 – The two attack vectors are independently found by Google’s Project Zero researchers and researchers from academia, and are shared with Intel, AMD and ARM.
  • Sep 2017 – Google deploys fixes in their Linux based infrastructure to protect their customers.  Google proposes to pass the patches upstream to the Linux kernel after the public disclosure of Spectre/Meltdown.
  • 09 Nov 2017 – Intel informs partners and other interested parties under Non Disclosure Agreement (NDA).
  • 20 Nov 2017 – The CRD (Coordinated Release Date) is agreed upon to be 09 Jan 2018 by the parties involved.
  • 13 Dec 2017 – Apple releases iOS 11.2, macOS 10.13.2 and tvOS 11.2. These updates contain fixes for Meltdown, but that is not mentioned in the release notes.
  • 15 Dec 2017 – Amazon starts sending emails to AWS customers, informing them of a scheduled reboot of EC2 instances on or around 06 Jan 2018. People who reboot following that email notice degraded performance and start discussing it.
  • 20 Dec 2017 – Jonathan Corbet publishes an article and remarks that the KPTI patches have “all the markings of a security patch being readied under pressure from a deadline”.
  • 01 Jan 2018 – A pythonsweetness post appears, speculating about what’s behind the KPTI patches for the Linux kernel.
  • 02 Jan 2018 – The Register publishes an article that puts enough of the information together.
  • 02 Jan 2018 – Andres Freund posts to the PostgreSQL mailing list showing a 17-23% slowdown in PostgreSQL when using the KPTI patch.
  • 03 Jan 2018 – Google breaks the agreed CRD and makes everything public.
  • 03 Jan 2018 – Two websites are launched to explain the findings.  The vulnerabilities are “officially” named Meltdown and Spectre.
  • 03 Jan 2018 – Microsoft rushes out a series of fixes, including security updates and patches for its cloud services, which were originally planned for a January 9 release.
  • 03 Jan 2018 – Amazon says it has secured almost all of its affected servers.
  • 03 Jan 2018 – Google details its efforts to safeguard its systems and user data.
  • 03 Jan 2018 – Intel acknowledges the existence of the vulnerability, but disputes reports implying it is the only chipmaker affected.
  • 04 Jan 2018 – Media organisations such as the BBC pick up the story.
  • 04 Jan 2018 – Apple confirms its iPhones, iPads, and Macs are affected by the Meltdown and Spectre vulnerabilities.
  • 09 Jan 2018 – Microsoft confirms that patches rolled out to close Meltdown and Spectre security loops have caused PC and server performance slowdowns.