Open letter to future apprentices

Here at Dogsbody HQ we have found that one of the best ways to grow our business and get the skills we require is to run an apprenticeship programme.

We interview throughout the year looking for the keenest candidates. Having hired 7 apprentices over the past 5 years and interviewed many, many more, we thought we would share with all our future employees what we take away from your first impressions.

The world of work can be a scary prospect and completely different to your school and college days.

Your first full time position and the steps to secure it can be daunting so here are some hints and tips for what we at Dogsbody Technology like to look for in our candidates:

Submitting your CV

You have found the job you want and are ready to submit your CV – first impressions start from this moment, so:

  • Submit your CV as a PDF; it means your prospective employer sees it the way you intended – send it as .doc or .odt and it can lose its formatting.
  • Use an appropriate email address; bigcheeks@example.com may have been appropriate for your Amazon account but not for your CV. Email addresses are free – get an appropriate one.
  • Switch on your voicemail; if you’re busy and I call, how do you know I will bother calling back? Which leads me to my next point…
  • Set a voicemail message – If the default message just reads out your number, how do I know I have reached the right person? Make sure you set a message like ‘You have reached the mobile of (you). Please leave a message.’
  • Answer your phone politely; I know numbers you don’t recognise could be spam, but when you’re applying for jobs you have to answer the phone as if it’s your future employer calling! If you want to be extra organised, save the office numbers of the companies you have applied to in your phone so the call hopefully comes up as ‘Dogsbody Technology’. But never answer the phone with just ‘yes’… not a great first impression…

Impressing in a CV or interview (telephone or face to face)

It’s potentially your first full-time job (but we hope you have done something else before), so make sure you tell us what you have done – in your CV, preferably.

Dogsbody Technology give every applicant a telephone interview… why? Because we once hired someone whose CV never mentioned any of the technical things they had done – it was only in the telephone interview that they mentioned them – if we hadn’t called we would never have known!

Things to mention

  • Work experience – Work experience is invaluable. It doesn’t have to be in the field you are applying for now (IT for us), but it shows so much. The classic ‘Saturday job’ is still a strong tick in the box.
  • If you have tinkered with tech – You earn big points if you tinker with tech. Get a Raspberry Pi, install Ubuntu on your laptop – even AWS servers cost pennies to run. Don’t just follow the guides parrot fashion; explore and find out what the commands you are typing actually do! An apprenticeship is about learning on the job and practical experience. When this stuff isn’t taught in school, we want to see that you are keen to teach yourself.

Things to remember

  • Playing games is not an interest in computers – Sorry, but it’s a hobby. Unless you want to be a games developer or games tester, playing computer games is not an interest in computers. We all play computer games at Dogsbody Technology but it doesn’t help us be sysadmins 🙂 List it under hobbies.
  • Grades aren’t everything – Your teachers will hate us for saying this, however unless you want to be a doctor or need specific grades for a college/uni course, don’t worry if academia isn’t your thing. Apprenticeships are designed for the practical among us. To do an apprenticeship you need Maths and English at grade C (4 in the new system) or above, but even if you don’t have that you can do a key skills module during your apprenticeship to get you there.

Finally, we loved this image from Dave Cornthwaite; add “Good Manners” and we think it’s a pretty good starting point…

 

Potential is essential – What we do isn’t taught in schools/colleges, so we don’t expect you to know it all, but we do expect you to be passionate about tech.

If you want to join an established company and an experienced apprenticeship provider in Linux sysadmin, apply now for our proven Linux Apprenticeship.

Want more apprenticeship advice? Read our Apprentice Guide: How to Impress IT Employers.

 

Updated Privacy Policies & Terms and Conditions

Dogsbody Technology have updated their Privacy Policies and Ts&Cs …

View our updated Privacy Policies

View our updated Terms and Conditions

Let’s be clear…

The new General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) laws haven’t changed anything for how Dogsbody Technology treat your personal data. We have always treated your personal data as strictly confidential and will continue to do so.
Dogsbody Technology has always had security by design and by default – it’s our business.

Dogsbody Technology Ltd. have never and will never:

  • buy or sell personal data
  • use automated decision making, including profiling
  • spam you

Dogsbody Technology Ltd. will continue to:

  • use appropriate security, technical and organisational measures to keep your personal data safe.
  • let you opt out (if applicable).
  • provide a copy of any information and/or assets we hold about you at any time (requires proof of identity).
  • be a UK registered Limited company that stands by UK laws.

If you have any questions regarding these documents please feel free to contact us at any time.

Replacement Server Monitoring – Part 3: Kapacitor alerts and going live!

So far in this series of blog posts we’ve discussed picking a replacement monitoring solution and getting it up and running. This instalment will cover setting up the actual alerting rules for our customers’ servers, and going live with the new solution.

Kapacitor Alerts

As mentioned in previous posts, the portion of the TICK stack responsible for the actual alerting is Kapacitor. Put simply, Kapacitor takes metrics stored in the InfluxDB database, processes and transforms them, and then sends alerts based on configured thresholds. It can deal with both batches and streams of data, and the difference is fairly clear from the names: batch data takes multiple data points as an input and looks at them as a whole, while streams accept a single point at a time, folding each new point into the mix and re-evaluating thresholds each time.

As we wanted to monitor servers constantly over large time periods, stream data was the obvious choice for our alerts.

We went through many iterations of our alerting scripts, known as TICK scripts, before mostly settling on what we have now. I’ll explain one of our “Critical” CPU alert scripts to show how things work (comments inline):

var critLevel = 80 // The CPU percentage we want to alert on
var critTime = 15 // How long the CPU percentage must be at the critLevel (in this case, 80) percentage before we alert
var critResetTime = 15 // How long the CPU percentage must be back below the critLevel (again, 80) before we reset the alert

stream // Tell Kapacitor that this alert is using stream data
    |from()
        .measurement('cpu') // Tell Kapacitor to look at the CPU data
    |where(lambda: ("host" == '$reported_hostname') AND ("cpu" == 'cpu-total')) // Only look at the data for a particular server (more on this below)
    |groupBy('host')
    |eval(lambda: 100.0 - "usage_idle") // Calculate percentage of CPU used...
        .as('cpu_used') // ... and save this value in its own variable
    |stateDuration(lambda: "cpu_used" >= critLevel) // Keep track of how long CPU percentage has been above the alerting threshold
        .unit(1m) // Minutely resolution is enough for us, so we use minutes for our units
        .as('crit_duration') // Store the number calculated above for later use
    |stateDuration(lambda: "cpu_used" < critLevel) // The same as the above 3 lines, but for resetting the alert status
        .unit(1m)
        .as('crit_reset_duration')
    |alert() // Create an alert...
        .id('CPU - {{ index .Tags "host" }}') // The alert title
        .message('{{.Level}} - CPU Usage > ' + string(critLevel) + ' on {{ index .Tags "host" }}') // The information contained in the alert
        .details('''
        {{ .ID }}
        {{ .Message }}
        ''')
        .crit(lambda: "crit_duration" >= critTime) // Generate a critical alert when CPU percentage has been above the threshold for the specified amount of time
        .critReset(lambda: "crit_reset_duration" >= critResetTime) // Reset the alert when CPU percentage has come back below the threshold for the right time
        .stateChangesOnly() // Only send out information when an alert changes from normal to critical, or back again
        .log('/var/log/kapacitor/kapacitor_alerts.log') // Record in a log file that this alert was generated / reset
        .email() // Send the alert via email
    |influxDBOut() // Write the alert data back into InfluxDB for later reference...
        .measurement('kapacitor_alerts') // The name to store the data under
        .tag('kapacitor_alert_level', 'critical') // Information on the alert
        .tag('metric', 'cpu') // The type of alert that was generated

The above TICK script generates a “Critical” level alert when the CPU usage on a given server has been above 80% for 15 minutes or more. Once it has alerted, the alert will not reset until the CPU usage has come back down below 80% for a further 15 minutes. Both the initial notification and the “close” notification are sent via email.

The vast majority of our TICK scripts are very similar to the above, just monitoring different metrics (memory, disk space, disk IO, etc.) with different threshold levels and times.

To load this TICK script into Kapacitor, we use the kapacitor command line interface. Here’s what we’d run:

kapacitor define example_server_cpu -type stream -tick cpu.tick -dbrp example_server.autogen
kapacitor enable example_server_cpu

This creates a Kapacitor alert with the name “example_server_cpu”, with the “stream” alert type, against a database and retention policy we specify.

In reality, we automate this process with another script, which also replaces the $reported_hostname slug with the actual hostname of the server we’re setting the alert up for. A simplified sketch of the idea is shown below.
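We won’t reproduce the real script here, but a minimal sketch of the approach – render a copy of the template for the server, then define and enable the task – might look like this (the file names, task names and database/retention policy are all illustrative, not our actual setup):

#!/bin/bash
# Usage: ./deploy_alert.sh server1.example.com cpu.tick
# A sketch only - file names, task names and the database/retention policy are assumptions.
set -euo pipefail

hostname="$1"
template="$2"
safe_name="${hostname//./_}"                  # e.g. server1_example_com
metric="$(basename "${template}" .tick)"      # e.g. cpu
rendered="/tmp/${safe_name}_${metric}.tick"

# Replace the $reported_hostname slug in the template with the real hostname
sed 's/\$reported_hostname/'"${hostname}"'/g' "${template}" > "${rendered}"

# Define and enable the Kapacitor task against that server's database/retention policy
kapacitor define "${safe_name}_${metric}" -type stream -tick "${rendered}" -dbrp "${safe_name}.autogen"
kapacitor enable "${safe_name}_${metric}"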

Getting customer servers reporting

Now that we could actually alert on information coming into InfluxDB, it was time to get each of our customers’ servers reporting in. Since we have a large number of customer systems to monitor, installing and configuring Telegraf by hand was simply not an option, so we used Ansible to roll the configuration out to the servers that needed it – which involved 12 different operating systems and 4 different configurations.

Here’s a list of the tasks that Ansible carries out for us:

  • On our servers:
    • Create a specific InfluxDB database for the customer’s server
    • Create a locked-down, write-only InfluxDB user for the server to send its data in with
    • Add a Grafana data source to link the database to the customer
  • On the customer’s server:
    • Set up the Telegraf repo to ensure it is kept updated
    • Install Telegraf
    • Configure the Telegraf outputs to point to our endpoint with the correct server-specific credentials
    • Configure the Telegraf inputs with all the metrics we want to capture
    • Restart Telegraf to load the new configuration
The above should be pretty self-explanatory. Whilst every one of the above steps would be carried out for a new server, we wrote the Ansible files so that most of them can be run independently of one another. This means that in future we’d be able to, for example, include another input to report metrics on with relative ease.

For those of you not familiar with Ansible, here’s an excerpt from one of the files. It places a Telegraf config file into the relevant directory on the server, and sets the file permissions to the values we want:

---
- name: Copy inputs config onto client
  copy:
    src: ../files/telegraf/telegraf_inputs.conf
    dest: /etc/telegraf/telegraf.d/telegraf_inputs.conf
    owner: root
    group: root
    mode: 0644
  become: yes
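As a rough illustration of running steps independently, tagging tasks in a playbook means a single part of the roll-out can be re-run against a single host – the playbook, tag and host names below are hypothetical:

# Re-run only the Telegraf input configuration against one customer server
# (playbook, tag and host names are illustrative)
ansible-playbook monitoring.yml --tags "telegraf_inputs" --limit "customer-server.example.com"

# Roll the full set of tasks out to every monitored host
ansible-playbook monitoring.yml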

 

Using more Ansible, we incorporated the various tasks into a single repository structure, did lots of testing, and then ran things against our customers’ servers. Shortly after, we had all of our customers’ servers reporting in. After making sure everything looked right, we created and enabled the various alerts for each server. The process for this was a Bash script which looped over a list of our customers’ servers and the available alert scripts, and combined them so that we had alerts for the key metrics across all servers – a simplified version is sketched below. The floodgates had been opened!
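We won’t share the exact script here, but the looping approach is essentially this – assuming a servers.txt list, a directory of alert templates, and the deploy wrapper sketched earlier (all of which are illustrative names):

#!/bin/bash
# Create one Kapacitor task per customer server per alert template.
# File names (servers.txt, tick_templates/, deploy_alert.sh) are illustrative.
set -euo pipefail

while read -r hostname; do
    for template in tick_templates/*.tick; do
        ./deploy_alert.sh "${hostname}" "${template}"
    done
done < servers.txt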

Summary

So, at the end of everything covered in this series, we had ourselves a very respectable New Relic replacement. We ran the two systems side by side for a few weeks and are very happy with the outcome.  While what we have described here is a basic guide to setting the system up, we have already started to make improvements that go way beyond what we had before.  If any of them are exciting enough, there will be more blog posts coming your way, so make sure you come back soon.

We’re also hoping to open source all of our TICK scripts, Ansible configs, and the various other snippets used to tie everything together at some point, once they’ve been tidied up and improved a bit more. If you can’t wait that long and need them now, drop us a line and we’ll do our best to help you out.

I hope you’ve enjoyed this series. It was a great project that the whole company took part in, and one that enabled us to provide an even better experience for our customers. Thanks for reading!

Replacement Server Monitoring

Feature image background by swadley licensed CC BY 2.0.

Replacement Server Monitoring – Part 2: Building the replacement

This is part two of a three part series of blog posts about picking a replacement monitoring solution, getting it running and ready, and finally moving our customers over to it.

In our last post we discussed our need for a replacement monitoring system and our pick for the software stack we were going to build it on. If you haven’t already, you should go and read that before continuing with this blog post.

This post aims to detail the set up and configuration of the different components to work together, along with some additional customisations we made to get the functionality we wanted.

Component Installation

As mentioned in the previous entry in this series, InfluxData, the TICK stack creators, provide package repositories where pre-built and ready to use packages are available. This eliminates the need to configure and compile source code before we can use it, and lets us install and run the software with a few commands and very predictable results, as opposed to the often many commands needed for compilation, with sometimes wildly varying results. Great stuff.

All components are available from the same repository. Here’s how you install them (the example shown is for an Ubuntu 16.04 “Xenial” system):

curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/lsb-release
echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt-get update && sudo apt-get install influxdb
sudo systemctl start influxdb

The above steps are also identical for the other components – Telegraf, Chronograf and Kapacitor. You’ll just need to replace “influxdb” with the correct name in the last two commands.
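For example, once the repository is in place, installing and starting Telegraf instead of InfluxDB is just:

sudo apt-get update && sudo apt-get install telegraf
sudo systemctl start telegraf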

Configuring and linking the components

As each of the components is created by the same people, InfluxData, linking them together is fortunately very easy (another reason we went with the TICK stack). I’ll show you what additional configuration was put in place for each component and how we then linked them together. Note that the components are out of order here, as the configuration of some components is a prerequisite to linking them to another.

InfluxDB

The main change that we make to InfluxDB is to have it listen for connections over HTTPS, meaning any data flowing to/from it will be encrypted. (To do this, you will need an SSL certificate and key pair to use. Obtaining that cert/key pair is outside the scope of this blog post.) We also require authentication for logins, and disable the query log. We then restart InfluxDB for these changes to take effect.

sudo vim /etc/influxdb/influxdb.conf

[http]
    enabled = true
    bind-address = "0.0.0.0:8086"
    auth-enabled = true
    log-enabled = false
    https-enabled = true
    https-certificate = "/etc/influxdb/ssl/reporting-endpoint.dogsbodytechnology.com.pem"

sudo systemctl restart influxdb

Note that the path used for the “https-certificate” parameter will, of course, need to exist on your system.
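In our case the .pem is a combined certificate and key file, which InfluxDB will load the private key from when no separate https-private-key is set. Creating one from an existing cert/key pair can be as simple as the following (the input file names are just examples):

# Combine an existing certificate and private key into a single .pem (file names are illustrative)
sudo mkdir -p /etc/influxdb/ssl
cat example.com.crt example.com.key | sudo tee /etc/influxdb/ssl/reporting-endpoint.pem > /dev/null
sudo chown influxdb:influxdb /etc/influxdb/ssl/reporting-endpoint.pem
sudo chmod 600 /etc/influxdb/ssl/reporting-endpoint.pem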

We then need to create an administrative user like so:

influx -ssl -host ivory.dogsbodyhosting.net
> CREATE USER admin WITH PASSWORD 'superstrongpassword' WITH ALL PRIVILEGES
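Each customer server that reports in also gets its own database and a locked-down, write-only user (the automation of this is covered in part 3 of this series). Done by hand, that looks something like the following – the UUID-style name is just the example used later in this post:

influx -ssl -host ivory.dogsbodyhosting.net -username admin -password 'superstrongpassword'
> CREATE DATABASE "3340ad1c-31ac-11e8-bfaf-5ba54621292f"
> CREATE USER "3340ad1c-31ac-11e8-bfaf-5ba54621292f" WITH PASSWORD 'supersecurepassword'
> GRANT WRITE ON "3340ad1c-31ac-11e8-bfaf-5ba54621292f" TO "3340ad1c-31ac-11e8-bfaf-5ba54621292f"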

Telegraf

The customisations for Telegraf involve telling it where to report its metrics to, and what metrics to record. We have an automated process, using Ansible, for rolling these customisations out to customer servers, which we’ll cover in the next part of this series – make sure you check back for that. These are essentially the changes that are made:

sudo vim /etc/telegraf/telegraf.d/outputs.conf

[[outputs.influxdb]]
  urls = ["https://reporting-endpoint.dogsbodytechnology.com:8086"]
  database = "3340ad1c-31ac-11e8-bfaf-5ba54621292f"
  username = "3340ad1c-31ac-11e8-bfaf-5ba54621292f"
  password = "supersecurepassword"
  retention_policy = ""
  write_consistency = "any"
  timeout = "5s"

The above dictates that Telegraf should connect securely over HTTPS and tells it the username, database and password to use for its connection.

We also need to tell Telegraf what metrics it should record. This is configured like so:

[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = true
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.diskio]]
[[inputs.net]]
[[inputs.kernel]]
[[inputs.mem]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]
[[inputs.procstat]]
  pattern = "."

The above tells Telegraf what metrics to report, and customises how they are reported a little. For example, we tell it to ignore some pseudo-filesystems in the disk section, as these aren’t important to us.
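A handy way to sanity-check the inputs configuration (not part of our roll-out, just a useful check) is Telegraf’s test mode, which gathers each configured metric once and prints it to the terminal instead of sending it to InfluxDB:

sudo telegraf --test --config /etc/telegraf/telegraf.conf --config-directory /etc/telegraf/telegraf.d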

Kapacitor

The customisations for Kapacitor primarily tell it which InfluxDB instance it should use, and the channels it should use for sending out alerts:

sudo vim /etc/kapacitor/kapacitor.conf
    [http]
    log-enabled = false
    
    [logging]
    level = "WARN"

    [[influxdb]]
    name = "ivory.dogsbodyhosting.net"
    urls = ["https://reporting-endpoint.dogsbodytechnology.com:8086"]
    username = "admin"
    password = "supersecurepassword"

    [pushover]
    enabled = true
    token = "yourpushovertoken"
    user-key = "yourpushoveruserkey"

    [smtp]
    enabled = true
    host = "localhost"
    port = 25
    username = ""
    password = ""
    from = "alerts@example.com"
    to = ["sysadmin@example.com"]

As you can probably work out, we use Pushover and email to send/receive our alert messages. This is subject to change over time – during the development phase, I used the Slack output.
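As with InfluxDB, Kapacitor needs a restart to pick up the new configuration, and the kapacitor CLI can then be used to check the service is up and list any tasks that have been defined:

sudo systemctl restart kapacitor
kapacitor list tasks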

Grafana (instead of Chronograf)

Although the TICK stack offers its own visualisation (and control) tool, Chronograf, we ended up using the very popular Grafana instead. At the time we were building the replacement solution, Chronograf, although very pretty, was somewhat lacking in features, and the features that did exist were sometimes buggy. Do note that Chronograf was the only component still in beta at that point in time. It has since had a full release and another ~5 months of development, so you should definitely try it out for yourself before jumping straight to Grafana. We intend to re-evaluate Chronograf ourselves soon, especially as it is able to control the other components in the TICK stack, something which Grafana does not offer at all.

The Grafana install is pretty straightforward, as it also has a package repository:

sudo vim /etc/apt/sources.list.d/grafana.list
    deb https://packagecloud.io/grafana/stable/debian/ jessie main
sudo apt update
sudo apt install grafana

We then, of course, make some customisations. The important part here is setting the base URL, which is required because we run Grafana behind an nginx reverse proxy. (We love nginx and use it wherever we get the chance. We won’t detail those customisations here though, as they’re not strictly related to the monitoring solution, and Grafana works just fine on its own.)

sudo vim /etc/grafana/grafana.ini
    [server]
    domain = display-endpoint.dogsbodytechnology.com
    root_url = %(protocol)s://%(domain)s/grafana
sudo systemctl restart grafana-server

Summary

The steps above left us with a very powerful and customisable monitoring solution, which worked fantastically for us. Be sure to check back for future instalments in this series. In part 3 we cover setting up alerts with Kapacitor, creating awesome visualisations with Grafana, and getting all of our hundreds of customers’ servers reporting in and alerting.

Part three is here.

Replacement Server Monitoring

Feature image background by tomandellystravels licensed CC BY 2.0.

Replacement Server Monitoring – Part 1: Picking a Replacement

As a company primarily dealing with Linux servers and keeping them online constantly, here at Dogsbody we take a huge interest in the current status of any and all servers we’re responsible for. Having accurate and up-to-date information allows us to act proactively and remedy potential problems before they become service-impacting for our customers.

For many years, and for as long as I have worked at the company, we’d used an offering from New Relic, called simply “Servers”. In 2017, New Relic announced that they would be discontinuing their “Servers” offering, with their “Infrastructure” product taking its place. The pricing for New Relic Infrastructure was exorbitant for our use case, and there were a few things we wanted from our monitoring solution that New Relic didn’t offer, so, being the tinkerers that we are, we decided to implement our own.

This is a 3 part series of blog posts about picking a replacement monitoring solution, getting it running and ready, and finally moving our customers over to it.

What we needed from our new solution

The phase one objective for this project was rather simple: to replicate the core functionality offered by New Relic. This meant that the following items were considered crucial:

  • Configurable alert policies – All servers are different. Being able to tweak the thresholds for alerts depending on the server was very important to us. Nobody likes false alarms, especially not in the middle of the night!
  • Historical data – Being able to view system metrics at a given timestamp is of huge help when investigating problems that have occurred in the past
  • Easy to install and lightweight server-side software – As we’d be needing to install the monitoring tool on hundreds of servers, some with very low resources, we needed to ensure that this was a breeze to configure and as slim as possible
  • Webhook support for alerts – Our alerting process is built around having alerts from various different monitoring tools report to a single endpoint where we handle the alerting with custom logic. Flexibility in our alerts was a must-have

Solutions we considered

A quick Google for “linux server monitoring” returns a lot of results. The first round of investigations essentially consisted of checking out the ones we’d heard about and reading up on what they had to offer. Anything of note got recorded for later reference, including any solutions that we knew would not be suitable for whatever reason. It didn’t take very long for a short list of “big players” to present themselves. Now, this is not to say that we discounted any solutions on account of them being small, but we did want a solution that was going to be stable and widely supported from the get-go. We wanted to get on with using the software, instead of spending time getting it to install and run.

The big names were Nagios, Zabbix, Prometheus, and Influx (TICK).

After much reading of the available documentation, performing some test installations (some successful, some very much not), and having a general play with each of them, I decided to look further at the TICK stack from InfluxData. I won’t go too much into the negatives of the failed candidates, but the main points across them were:

  • Complex installation and/or management of central server
  • Poor / convoluted documentation
  • Lack of repositories for agent installation

Influx (TICK)

The monitoring solution offered by Influx consists of 4 parts, each of which can be installed independently of one another:

  • Telegraf – Agent for collecting and reporting system metrics
  • InfluxDB – Database to store metrics
  • Chronograf – Management and graphing interface for the rest of the stack
  • Kapacitor – Data processing and alerting engine

Package repositories existed for all parts of the stack, most importantly for Telegraf which would be going on customer systems. This allowed for easy installation, updating, and removal of any of the components.

One of the biggest advantages for InfluxDB was the very simple installation: add the repo, install the package, start the software. At this point Influx was ready to accept metrics reported from a server running Telegraf (or anything else for that matter – there are many clients that support reporting to InfluxDB, which was another positive).

In the same vein, the Telegraf installation was also very easy, using the same steps as above, with the additional step of updating the config to tell the software where to report its metrics to. This is a one-line change in the config, followed by a quick restart of the software.

At this point we had basically all of the system information we could ever need, in an easy to access format, only a few seconds after things happen. Awesome.

Although the most important functionality to replicate was the alerting, the next thing we installed and focused on was the visualisation of the data Telegraf was reporting into InfluxDB. We needed to ensure the data we were receiving mirrored what we were seeing in New Relic, and it can be tricky to create test alerts when you have no visibility of the data you’re alerting against, so we needed some graphs (and everyone loves pretty graphs, of course!).

As mentioned above, Chronograf is the component of the TICK stack responsible for data visualisation, and also allows you to interface with InfluxDB and Kapacitor, to run queries and create alerts, respectively.

In summary, the TICK stack offered us an open source, modular and easy to use system. It felt pleasant to use, the documentation was reasonable, and the system seemed very stable. We had a great base on which to design and build our new server monitoring system. Exciting!

Part two is here.

Replacement Server Monitoring

Feature image background by xmodulo licensed CC BY 2.0.

Congratulations Jim Carter

We are very pleased to congratulate Jim Carter on completing his Linux Systems Apprenticeship.

Jim now holds…

City & Guilds Certificate in IT Systems and Principles
City & Guilds Level 3 Diploma in IT Professional Competence

We are even more pleased that Jim has chosen to continue his career with Dogsbody Technology as a permanent member of staff. Jim is now looking forward to continuing his education with professional qualifications from Amazon Web Services (AWS) and honing his coding skills (we will break his habit of using Python “for” loops for everything).

If you are interested in joining the Dogsbody Technology team as a Linux Systems Apprentice please apply here.

 

2017 Christmas Shutdown

Christmas is fast approaching and we wanted to let you know that Dogsbody Technology will be taking some time off to celebrate the festive season.

Our office will be closed on the following days and responses to emails may be slower:
– Fri 15 Dec 2017 – Office Closed after 13:00
– Fri 22 Dec 2017 – Office Closed after 13:00
– Mon 25 Dec 2017 – Public Holiday
– Tue 26 Dec 2017 – Public Holiday
– Wed 27 Dec 2017 – Office Closed
– Thu 28 Dec 2017 – Office Closed
– Fri 29 Dec 2017 – Office Closed
– Mon 1 Jan 2018 – Public Holiday

During this time any issues will be dealt with on an emergency out-of-hours basis and we will only be able to support customers who are experiencing a situation where business cannot function without a resolution, such as:
– Website / Server down
– Inability to trade online

We will continue monitoring and patching servers as usual over the Christmas period.

We recommend our hosting customers check our status site as usual which will continue to be updated.

If you need to raise an issue please use the standard contact details as calls to our office and support emails will be routed to the engineer on call.

Thank you for your continued support throughout 2017. We hope you have a very Merry Christmas and a Happy New Year.

#FoodbankAdvent 2017 – The Reverse Advent Calendar

As foodbank demand soars in the UK, Dogsbody Technology are proud to announce that this year, instead of further charity giving or sending corporate gifts, we will be taking part in the Reverse Advent Calendar #FoodbankAdvent by donating essential items to our local Farnborough Trussell Trust Foodbank.

December is the busiest month of the year for foodbanks, with 45% more referrals during the two weeks before Christmas. More than 90% of food donations come from the public.

The Trussell Trust runs the largest network of foodbanks in the UK, giving emergency food and support to people in crisis. Thirteen million people live below the poverty line and in the last year they gave 1,182,954 three-day emergency food supplies to people in crisis – up over 6% from the previous year. Of this number 436,938 went to children.

What is a Reverse Advent Calendar?

Traditionally, advent calendars are something you open each day from the 1st to the 25th of December to get a reward. It used to be a tiny picture or a chocolate; now adults can indulge themselves with everything from make-up to alcohol!

Instead, with the Reverse Advent Calendar you start with nothing (an empty box) and put one item for your local food bank into it every day. You could do this for 25 days to mirror the advent calendar – or perhaps for a whole month.

Follow us on social media to see our Advent grow!

Throughout December we will be posting updates on how our calendar is growing, and our aim is to donate over 60 items in total.

Want to take part too? It’s not too late to start!

  • Read the Trussell Trust’s What goes in a food bank parcel and non food items to see the items they like to be donated
  • Find your local Trussell Trust food bank or donate to others in your area.
  • Look at your local food bank’s website for the list of items they urgently need.
  • Donate long life items (tinned or dried) so as not to waste fresh food (which goes off quickly)
  • Get your donation to the food bank by early December for them to be useful for Christmas
  • If you want to collect an item a day as per the advent calendar, then your donations will be just as welcome in January as winter sets in!

In one of the richest countries in the world, no one should be hungry at any time of year but especially not at Christmas. Dogsbody Technology hope to make a small difference to someone this Christmas.

Update: Dogsbody Technology donated 74 items to Farnborough Trussell Trust Foodbank #FoodbankAdvent weighing in at 37.1 Kg!

It was great to give something back to our community and the Trussell Trust Foodbank were very pleased with our donation – it is definitely something we will repeat!

A big thank you to all our customers and suppliers who have sent us season’s greetings, cards and presents – Dogsbody Technology are pleased to have such awesome allies 🙂

 

We wish all our customers, suppliers and supporters a very Merry Christmas and a Happy 2018.

 

Dogsbody Technology are moving

After four years at Ferneberga House we are moving 3 miles around Farnborough airport to Cody Technology Park.

After 1st July 2017 our address will be …

Dogsbody Technology Ltd.
Cody Technology Park
Ively Road
Farnborough
Hampshire
GU14 0LX

Please update any records you have.  All other contact details (email addresses, phone numbers) will remain the same.

Cody Technology Park is a secure List X site which means that all visitors will need photo ID and a security check to get on-site.  As you can imagine we are very happy to add this extra level of protection to the layers of security we already have in place.

We will be moving over the weekend of Friday 30th June – Monday 3rd July 2017.  We aren’t expecting any delays when dealing with customers but please bear with us if we take a little longer to respond during this time.

We can’t wait to share pictures of our new space in the future – watch this blog for info on some of the tech we will be installing 🙂

Dogsbody walks for Cystic Fibrosis

Last Saturday, 10th June 2017, The Techy Trekkers (8 employees from Dogsbody Technology and Adapt Digital) walked OVER 40 miles taking part in the Great Strides 65 Surrey Hills Team Challenge in aid of the Cystic Fibrosis Trust. It was without a doubt the biggest physical challenge any of us had ever faced.

Our team was in the final wave, which started at 7.30am. There were 12 checkpoints, 7 of which involved meeting with the wonderful team in our support car, who carried all the heavy stuff like food, drink and medical supplies! 40 miles at an average walking pace of 3mph would take us 13 hours and 20 minutes with no stops. Our actual moving time (according to our GPS) was 13 hours and 49 minutes, which wasn’t so bad – it was the support stops that slowly got longer as we got more tired and needed more time to eat, tend to feet and queue for the loo.

We ended up completing the event at 1am to an amazing cheer from the organisers, who were brilliant on the day; it may have taken us 2 hours longer than we planned for, but all 8 of us finished and we are immensely proud of the team for continuing despite the blisters and pain.

Our team of truly amazing people have raised over £2,300 in sponsorship so far but the whole event currently stands at a fundraising total (inclusive of Gift Aid) of £200,233.36 for the walk and a further £11,111.61 for the ultra (running race)  – a massive amount which will help the Cystic Fibrosis Trust in its mission to ensure that everyone born with cystic fibrosis can live a Life Unlimited.

There is still one month left to donate to such an amazing cause, so please spare some pennies if you can so we can reach our personal target of £2,500.00.

“I completed the hardest physical and mental challenge of my life with a team of amazing people!” – Teammate Katie

Images courtesy of Jan Benton, Tracey Clarkson & Mark Turner.