Even More Thoughts on Legacy Systems

Published: 2009-11-08. Last Updated: 2009-11-08 16:31:39 UTC
by Kevin Liston (Version: 1)
2 comment(s)

Legacy systems have been a popular topic here recently (see http://isc.sans.org/diary.html?storyid=7528 and http://isc.sans.org/diary.html?storyid=7546).  Any environment of sufficient size, complexity, or age will have its share of legacy systems.  While we can work with policy and management to phase them out, in the meantime we have to deal with the fact that they’re on the network and vulnerable, which makes the whole network vulnerable.  Does it have to be that way?

Consider this simplified example: your company makes widgets, the widget-making machine is computer controlled, and the company that wrote the software is now out of business, so there is no chance of upgrades or patches in your future.  A bad-case scenario: a consultant from Acme Industries comes into your facility with a laptop infected with an old worm (say, Downadup), and when they connect to your network it infects your widget-making machine.  Hilarity ensues.

A possible solution is to reconsider why that legacy machine needs to be on the network.  Do you know why?  It’s probably serving a web application, or someone is VNCing into the system to manage it, or it has to send out status emails, etc.  That’s the first step: understand what services are required.  Then, use another device (because if you could lock down the legacy system, it would already be locked down, right?) to isolate that system.  Old techniques like access control lists and virtual LANs won’t block a dedicated human attacker, but against automated malware they can be quite effective.  If you have to expose a vulnerable service, limit that exposure to known and trusted systems on your network, not to everyone on your network.  Also, make sure that the isolation works both ways: if something manages to get into the system, you can at least keep it from spamming out to the rest of your network.
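As a rough illustration only (the addresses, the VNC and SMTP ports, and the assumption of a Linux box doing the filtering in front of the widget-maker are all made up for this example), the rules might look something like this:

    # Inbound: only the two management workstations may reach VNC on the legacy host
    iptables -A FORWARD -s 10.1.1.21 -d 10.1.1.50 -p tcp --dport 5900 -j ACCEPT
    iptables -A FORWARD -s 10.1.1.22 -d 10.1.1.50 -p tcp --dport 5900 -j ACCEPT
    iptables -A FORWARD -d 10.1.1.50 -j DROP

    # Outbound: the legacy host may only deliver status mail to the internal relay
    iptables -A FORWARD -s 10.1.1.50 -d 10.1.1.25 -p tcp --dport 25 -j ACCEPT
    iptables -A FORWARD -s 10.1.1.50 -j DROP

The same idea can be expressed as a router ACL or a firewall policy; the point is that both directions are limited to exactly the services and hosts you identified in the first step.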

This approach also works when you have to plug a vendor’s “Appliance” into your network.

Keywords: legacy

Comments

I recently thought of creating a hard disk image of a legacy UNIX system and trying to run it under full platform virtualisation, e.g. qemu.
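As a rough sketch of that (the device name, image file, and NIC model are placeholders, and the right qemu binary depends on the guest's architecture):

    # Image the legacy system's disk from a rescue environment or a second machine
    dd if=/dev/sdb of=legacy.img bs=1M conv=noerror,sync

    # Boot the image with 256 MB RAM and an emulated NIC attached to a
    # pre-configured tap0 interface on the host
    qemu-system-i386 -m 256 -hda legacy.img \
        -net nic,model=rtl8139 -net tap,ifname=tap0,script=no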

I believe qemu creates a virtual network interface on the Linux host system. That offers a lot of flexibility in what network traffic, if any, you allow to reach the virtualised legacy system, and how: e.g. iptables NAT for some or all ports, or service-specific proxy applications running on the host, such as squid/nginx/apache for HTTP(S), a relaying mail server for SMTP, perdition for IMAP, or SSH accounts as a front-end to telnet. These can apply access control, or simply 'sanitise' incoming traffic.
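For example (the addresses, interface names, and port here are hypothetical), the host could forward only HTTP to the guest, and only from a trusted subnet:

    # Send inbound HTTP arriving on the host's LAN interface to the virtualised guest
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
        -j DNAT --to-destination 192.168.100.10:80

    # Permit the forwarded traffic only from the trusted management subnet
    iptables -A FORWARD -s 10.0.5.0/24 -d 192.168.100.10 -p tcp --dport 80 -j ACCEPT
    iptables -A FORWARD -d 192.168.100.10 -j DROP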

This idea would only work if you can emulate all the necessary hardware for the legacy system to function, and if the legacy system's OS supports that virtualised hardware. But if all goes well, your legacy system may even perform better than before (if the host outperforms the original hardware by more than the overhead of virtualisation).

And there are numerous other benefits: safety against the legacy system's hardware failing; being able to create snapshots of the virtualised legacy system's state for backup; or being able to run additional instances of the legacy system in isolation for safe testing.


I'd also like to suggest 'arpwatch' as a nice way to detect the devices on your network, including the legacy devices that you or your client may have forgotten about.
'arpwatch' is great for *nix.
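A minimal way to run it (the interface name and datafile path are just examples; -d keeps it in the foreground and reports new or changed MAC/IP pairings to stderr instead of mailing them):

    arpwatch -d -i eth0 -f /var/lib/arpwatch/arp.dat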

If you need a quick check using a Windows workstation, check out ARP Monitor by BinaryPlant Software. It does sort of the same thing and has a nifty Windows GUI.

See http://blog.kmint21.com/2008/03/12/arp-monitor/
The site is in Russian, but the software is in English.
