
Synolocker: Why OFFLINE Backups are important

Published: 2014-08-05. Last Updated: 2014-08-05 13:15:31 UTC
by Johannes Ullrich (Version: 1)

One current threat causing a lot of sleepless nights for victims is "Cryptolocker"-like malware. Variants of this type of malware continue to haunt small businesses and home users by encrypting files and demanding a ransom for the decryption key. Your best defense against this type of malware is a good backup. Shadow volume copies may help, but they aren't always available or complete.

In particular for small businesses, various simple NAS systems have become popular in recent years. Several manufacturers offer Linux-based devices that are essentially "plug and play" and provide high-performance, RAID-protected storage that is easily shared on the network. One of these vendors, Synology, has recently been in the crosshairs of many of the attacks we have seen. In particular, vulnerabilities in the web-based admin interface of the device have led to numerous exploits we have discussed before.

The most recent manifestation of this is "Synolocker", malware that infects Synology disk storage devices and encrypts all files, similar to the original Cryptolocker. Submissions to the Synology support forum describe some of the results [1].

The malware also replaces the admin console index web page with a ransom message, notifying the user of the compromise. It appears, however, that this is done before the encryption finishes. Some users were lucky enough to notice the message in time and were able to save some files from encryption.

It appears that the best way to deal with this malware, if found, is to immediately shut down the system and remove all drives. Then reinstall the latest firmware (which may require a sacrificial drive to be inserted into the system) before re-inserting the partially encrypted drives.

To protect your disk station from infection, your best bet is:

  • Do not expose it to the internet, in particular the web admin interface on port 5000 (a quick reachability check is sketched after this list).
  • Use strong passwords for the admin functions.
  • Keep your system up to date.
  • Keep offline backups. This could be accomplished with a second disk station that is only turned on to receive the backups. It may be best to have two disk stations from different manufacturers.
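
If you are not sure whether the admin interface is reachable from the outside, a minimal sketch like the following can give a quick answer when run from a host outside your network (the host name is a hypothetical placeholder); an online port scanner works just as well.

#!/usr/bin/env python3
"""Quick check of whether a DiskStation's admin interface answers on the
usual DSM ports. HOST is a hypothetical placeholder; adjust the port list
for your own setup. Run this from OUTSIDE your network (e.g. a VPS) so
you see what the internet sees."""
import socket

HOST = "your-public-ip-or-hostname"   # hypothetical placeholder
PORTS = [5000, 5001, 80, 443]         # DSM HTTP/HTTPS plus common web ports

for port in PORTS:
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"port {port}: OPEN -- reachable from here; consider firewalling it")
    except OSError:
        print(f"port {port}: closed or filtered")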

It is important to note that while Synology has been hit the hardest with these exploits, devices from other manufacturers have had vulnerabilities as well, and the same security advice applies (though they typically listen on ports other than 5000).

[1] http://forum.synology.com/enu/viewtopic.php?f=3&t=88716

---
Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn


Comments

I run a linux based backup server that does not export NFS or M$ shares, and is behind multiple firewalls before the internet can be seen. It uses fully encrypted drives in a 3-way RAID1 mirror, so I still have redundancy even when one drive is offline. It uses 3 TB consumer grade SATA drives (price $99.00 at the local Microcenter). Because there are 3 drives, and it takes 2 months to fill 3 TB, no drive is in service for more than 6 months. When it fills up, I pull the oldest drive and take it to the safe deposit box at the bank. This system backs up every machine (18 of them, I think) every night, for every filesystem on each machine, unless a directory is explicitly excluded by a configuration file. All traffic to and from the backup server is SSL encrypted; it uses rsync for source-level dedup and a file-level destination dedup scheme. Studies have shown I get about 20 to 30 times compression that way.
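
The comment above doesn't include the actual scripts, but one common way to get nightly rsync snapshots with file-level dedup at the destination is rsync's --link-dest option, which hard-links unchanged files against the previous snapshot. A minimal sketch, assuming hypothetical hosts and paths:

#!/usr/bin/env python3
"""Nightly rsync snapshot with file-level dedup at the destination via
--link-dest (hard links against the previous snapshot). Source, paths and
the exclude file are assumptions, not the commenter's actual setup."""
import datetime
import os
import subprocess

SOURCE = "backupuser@client.example.net:/"       # hypothetical client
DEST_ROOT = "/backups/client.example.net"        # hypothetical local destination
dest = os.path.join(DEST_ROOT, datetime.date.today().isoformat())
latest = os.path.join(DEST_ROOT, "latest")       # symlink to the newest snapshot

cmd = ["rsync", "-aHx", "--delete", "--numeric-ids",
       "--exclude-from=/etc/backup/exclude.list"]  # hypothetical exclude file
if os.path.islink(latest):
    # Unchanged files become hard links to the previous snapshot: near-free storage.
    cmd.append("--link-dest=" + os.path.realpath(latest))
cmd += [SOURCE, dest]
subprocess.run(cmd, check=True)

# Atomically repoint "latest" at the snapshot we just made.
tmp = latest + ".tmp"
if os.path.lexists(tmp):
    os.remove(tmp)
os.symlink(dest, tmp)
os.replace(tmp, latest)

Each dated directory then looks like a full backup, but only changed files consume new space, which is roughly the file-level destination dedup described above.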

I have been running a variant of this system since 2000, and using hot-swap SATA drives since 2007. I have a complete history of every file on every device for the past 7 years.
At $99.00 for a drive that holds 2 months, that is about $50 per month, or less than $2.00 per day -- less than a fancy coffee at the local barista. That's cheap insurance.
For some applications that makes sense.

For SQL servers and Microsoft Exchange mailbox databases it does not. One day's worth of changes to the database alone is likely 80 to 100 gigabytes, and a spare SATA HDD will fill up in no time.

To do a recovery, you need a point-in-time consistent copy of the database file and all the transaction logs between that consistent point and the time operations ceased before the recovery.

Rsync does not provide a "point in time"-consistent copy. It will capture the new version of some files and the old version of others on the filesystem.
For large enough, rapidly changing files, it doesn't even manage to get a clean copy of the file.
Using VSS or other snapshot technology, you can get crash-consistent (equivalent to hitting the reset button) point-in-time rsync backups.
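
As an illustration of the snapshot-then-rsync idea on the Linux side (an analogue of VSS, not VSS itself), an LVM snapshot gives every file a view from the same instant; the volume group, logical volume and destination names below are assumptions:

#!/usr/bin/env python3
"""Crash-consistent rsync backup via an LVM snapshot. All names are
assumptions; run as root and make sure the volume group has free extents
for the snapshot's copy-on-write space."""
import subprocess

VG, LV = "vg0", "data"                  # hypothetical volume group / logical volume
SNAP = LV + "-backupsnap"
MNT = "/mnt/backupsnap"
DEST = "backupsrv::snapshots/data/"     # hypothetical rsync destination

def run(*cmd):
    subprocess.run(cmd, check=True)

# 1. Freeze a point-in-time view of the filesystem.
run("lvcreate", "--snapshot", "--size", "5G", "--name", SNAP, f"/dev/{VG}/{LV}")
try:
    run("mkdir", "-p", MNT)
    run("mount", "-o", "ro", f"/dev/{VG}/{SNAP}", MNT)
    try:
        # 2. Every file rsync copies now reflects the same instant in time.
        run("rsync", "-aHx", "--delete", MNT + "/", DEST)
    finally:
        run("umount", MNT)
finally:
    # 3. Drop the snapshot so its copy-on-write space doesn't fill up.
    run("lvremove", "-f", f"/dev/{VG}/{SNAP}")

Databases still need their own dump or log-aware backup on top of this; as noted above, a crash-consistent snapshot only guarantees what you would get after pressing the reset button.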

JH
