
Using defence in depth to mitigate the risk of ransomware

I've written before about the evils of CryptoLocker and its devilish spawn, ransomware. Recently I came across an infection and saw first-hand how defence in depth can save your data and your bitcoin.

Firstly, let's consider the perimeter of the network. What vectors for attack exist externally to the network? There are many and they include:

  • malicious emails
  • dodgy websites with malicious payloads
  • malicious actors (hackers) out to get you
The first layers of defence include (in this case):
  • an antivirus/antispam gateway for email, with the firewall at the main router allowing connections on port 25 (SMTP) only from the mail scanner gateway
  • antispyware/antivirus software on the computers scanning every website a user visits, plus OpenDNS with a variety of restrictions on it to protect users from themselves
  • firewalls and obfuscated ports where applicable with minimal "open-to-the-world" ports
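The "minimal open-to-the-world ports" rule is easy to state and easy to drift from, so it pays to verify it from outside now and then. Here's a rough sketch of a port sweep in Python (the host address and expected-port set are made up for illustration; a real audit would use a proper scanner like nmap):

```python
import socket

def open_ports(host, ports, timeout=1.0):
    """Return the subset of ports accepting TCP connections on host."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Hypothetical usage: anything open beyond SMTP from the mail gateway is a red flag.
# EXPECTED = {25}
# unexpected = set(open_ports("203.0.113.10", range(1, 1025))) - EXPECTED
```

Run from a machine outside the perimeter, anything unexpected in the result is worth chasing down.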
That's the hard outer layer. Past the router / firewall and onto the network, we use:
  • firewalls on all PCs (granted, only the Windows ones, but supplemented with the anti-virus product's offerings)
  • WSUS to keep everything patched and up to date
  • VLANs to separate out stuff
  • usernames / passwords for access to all network resources
To further enhance security:
  • all backups go to a UNC path (e.g. \\nas\backups) rather than a mapped drive (like an S:\ drive), which is important because ransomware will attack both local drives and mapped network drives - encrypted backups are 100% useless
  • users have restrictions based on principles of least privilege and this is rigorously enforced
  • servers are also patched and up to date
  • logs are maintained on a separate server
  • PRTG is used to monitor network traffic on the switches and a variety of other stuff
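That UNC-path backup point deserves a sketch. Most ransomware enumerates drive letters, so a backup job that writes directly to a UNC path, never through a mapping, stays off the most common attack path. A minimal Python version (the \\nas\backups destination is a stand-in for whatever your own target is; the post doesn't specify the actual backup tooling):

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_tree(source: str, dest_root: str) -> Path:
    """Copy source into a timestamped folder under dest_root.

    dest_root should be a UNC path like r"\\\\nas\\backups" rather than a
    mapped drive letter, so malware walking S:\\ never sees the backups.
    """
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = Path(dest_root) / stamp
    shutil.copytree(source, target)
    return target

# backup_tree(r"C:\data", r"\\nas\backups")  # hypothetical paths
```

Locking the share down so only the backup account can write to it adds another layer on top.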
So what happened and how did this all help to mitigate a ransomware attack?

Well, a user - let's call him Jim Bob - has a very weak password. Let's say it's Secret01 (yes, if this is your password, it is shite. Change it now!).

An external attacker managed to get Jim Bob's username, then proceeded to attack Remote Desktop Services to see if they could brute force their way in. What do you know - about 2 hours after starting, they got the password and were in. This could have been mitigated by an account lockout policy, but remember - you have to balance usability with security. This particular organisation struggles with passwords at the best of times, so locking a user out for 10 minutes after 3 failed attempts would cause very high frustration levels.
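To put numbers on that trade-off: a 10-minute lockout after 3 failed attempts caps an online brute force at a few hundred guesses a day, while an unthrottled RDP endpoint can absorb hundreds of thousands. Back-of-the-envelope arithmetic (the attacker's guess rate of 10/second is my assumption, not from the incident):

```python
def max_guesses_per_day(lockout_minutes: float, attempts_before_lock: int) -> int:
    """Upper bound on online guesses per day under an account lockout policy."""
    windows_per_day = 24 * 60 / lockout_minutes
    return int(windows_per_day * attempts_before_lock)

def unthrottled_guesses_per_day(guesses_per_second: float) -> int:
    """Guesses per day with no lockout, at a fixed attack rate."""
    return int(guesses_per_second * 86_400)

print(max_guesses_per_day(10, 3))       # → 432
print(unthrottled_guesses_per_day(10))  # → 864000
```

Three orders of magnitude of slowdown is why lockout policies matter, even when usability concerns argue against them.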

The attacker now had access to a server. But Jim Bob's account was extremely limited. He just couldn't do much - he didn't need to. Log on and access the internet: that's him to a tee. That, therefore, was all the attacker could do. Frustrating, I bet!

Along comes the admin and logs on. The attacker, seeing a systems admin hit the server, panics and drops a ransomware payload, probably thinking they'll get some sort of payday somehow. The ransomware manages to get a foothold on a mapped drive. The security on that mapped drive allows only a small percentage of accessible files to be encrypted before the wily sysadmin spots it, locks Jim Bob's account down, and shuts off the file server.
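Spotting an encryption run early, as the sysadmin did here, can also be partially automated: ransomware tends to rewrite an unusual number of files per minute, often renaming them to a telltale extension. A toy detector over a window of recent file-write events (the thresholds and extension list are illustrative, not from any real product):

```python
from collections import Counter

SUSPECT_EXTENSIONS = {".locked", ".encrypted", ".crypt"}  # illustrative only

def looks_like_ransomware(events, max_writes_per_user=50):
    """events: list of (user, path) writes seen in the last minute.

    Flags any user writing an unusual number of files in the window,
    or any user writing files with a known ransomware extension.
    """
    writes = Counter(user for user, _ in events)
    noisy = {u for u, n in writes.items() if n > max_writes_per_user}
    bad_ext = {u for u, p in events
               if any(p.endswith(e) for e in SUSPECT_EXTENSIONS)}
    return noisy | bad_ext

events = [("jimbob", f"docs/file{i}.docx.locked") for i in range(60)]
print(looks_like_ransomware(events))  # → {'jimbob'}
```

Hooked up to a file server audit log, something like this can page the sysadmin long before a human notices the damage.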

Our defence has now limited the risk by controlling access to files and what could be attacked. And it gets better. Our snapshot backups are working as advertised and hold a 15-minute-old copy of the entire mapped drive's file system. With a few clicks, our intrepid sysadmin restores the whole lot over the next hour - hundreds of files, and only a handful barely out of date. Within a few hours, Jim Bob's account has been restricted, his password changed, the user Jim Bob given a kick in the bum for having a crappy password, and the network drive fully recovered.
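The restore logic is worth spelling out: with snapshots every 15 minutes, the worst case is 15 minutes of lost changes, and the snapshot you want is the newest one taken safely before the attack began. A sketch of that selection (the timestamps and safety margin are made up for the example):

```python
from datetime import datetime, timedelta

def snapshot_to_restore(snapshots, attack_started, margin=timedelta(minutes=1)):
    """Pick the newest snapshot taken safely before the attack started.

    snapshots: a list of snapshot datetimes; margin keeps us clear of
    any encryption writes already in flight when the attack began.
    """
    safe = [s for s in snapshots if s <= attack_started - margin]
    return max(safe) if safe else None

# Snapshots every 15 minutes from 09:00; attack detected starting at 10:10.
snaps = [datetime(2017, 5, 1, 9, 0) + timedelta(minutes=15 * i) for i in range(8)]
print(snapshot_to_restore(snaps, datetime(2017, 5, 1, 10, 10)))
# → 2017-05-01 10:00:00
```

Ten minutes of changes lost, against a whole drive held to ransom: that's the trade snapshots buy you.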

Although our initial defensive line was penetrated (users can be your greatest security risk), the rest of the network's defences held firm, mitigating the impact of the attack and the organisation's exposure to data loss. No payday for our arsehole attacker today! I like to think of how sad they must be - all that effort and no reward.

In the wash-up, the sysadmin uses the logs, PRTG and some file combing to find the attacker's trail, mops up after them, and makes notes on what failed and how to improve it for next time.
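Combing the logs for the attacker's trail can be scripted too. A brute force leaves a distinctive fingerprint: a long run of failed logons for one account followed by a success from the same source address. A rough parser over made-up log lines (real Windows Security event log formats differ; this just shows the shape of the search):

```python
import re
from collections import defaultdict

LINE = re.compile(r"(?P<result>FAIL|OK) user=(?P<user>\S+) ip=(?P<ip>\S+)")

def brute_force_suspects(lines, threshold=20):
    """Return (user, ip) pairs showing many failures followed by a success."""
    fails = defaultdict(int)
    suspects = set()
    for line in lines:
        m = LINE.search(line)
        if not m:
            continue
        key = (m["user"], m["ip"])
        if m["result"] == "FAIL":
            fails[key] += 1
        elif fails[key] >= threshold:  # success after a pile of failures
            suspects.add(key)
    return suspects

log = ["FAIL user=jimbob ip=198.51.100.7"] * 25 + ["OK user=jimbob ip=198.51.100.7"]
print(brute_force_suspects(log))  # → {('jimbob', '198.51.100.7')}
```

Two hours of RDP hammering, as in this story, would light up a report like this immediately - which is one more argument for keeping logs on a separate server the attacker can't scrub.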

The moral of the story is this - defence always loses eventually; attackers will win. All we can do is mitigate the damage and risk to the best of our capabilities and budget. Hopefully you'll read this and pick up a few ideas about how to enhance your existing defences, or at least think about what attack vectors might exist. This pretend network is by no means perfect - it could always be better. Budget and skill restrictions come into play, though, and mean we have to make the best effort with whatever we've got at hand. Be smart and build margin into your security so a break-in doesn't break your heart or your budget!

