Defenses Against Automated Patch-Based Exploit Generation
Last month, we reported on research showing that it is possible to create exploits by reverse engineering patches as they are released, and that this process can be automated. At the time, I didn't have a lot to say about how to defend against it because I hadn't thought about the problem enough yet. I've had some time now.
Encrypting Patches
The paper mentions encrypting patches so that distribution can still take some time, but the decryption key is sent out simultaneously, allowing the patch to be applied at the same time around the globe. This would, in theory, limit the window of opportunity for an attacker to reverse engineer the patch, get a working exploit, and start attacking the world. The problem is that the delay after a patch is released is not caused by the rolling cycle of downloads, but by the need (most of the time) to reboot systems after a patch is applied. In short, a system may have the key to decrypt a patch, but the patch would not be applied until the user rebooted the machine or until some default time when a reboot is acceptable (i.e. 3am). The chief problem is the need to reboot, which is a significant business disruption. Encrypting patches wouldn't fix that; it just adds another layer to the patching process.
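To make the idea concrete, here is a minimal sketch of what such a scheme might look like, assuming a symmetric cipher (Fernet from the Python cryptography package) and hypothetical file names; a real deployment would obviously also need signed packages and a hardened key-release channel:

# Sketch: pre-stage an encrypted patch, then release the key at "patch time".
# Assumes the 'cryptography' package is installed; file names are hypothetical.
from cryptography.fernet import Fernet

# Vendor side, days before release: encrypt the patch and push the
# ciphertext out through the normal (slow) distribution channels.
key = Fernet.generate_key()
with open("patch.bin", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())
with open("patch.bin.enc", "wb") as f:
    f.write(ciphertext)

# Client side, at the synchronized release time: the tiny key is fetched
# and the already-downloaded ciphertext is decrypted and handed to the installer.
with open("patch.bin.enc", "rb") as f:
    plaintext = Fernet(key).decrypt(f.read())
with open("patch_to_apply.bin", "wb") as f:
    f.write(plaintext)

Even with this in place, the decrypted patch still sits unapplied until the machine reboots, which is the point of the paragraph above.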
Patches that Don't Require Reboot
This particular defense is for OS vendors only (and one vendor in particular). Patches that require a reboot inevitably get delayed until a maintenance window. If patches can be applied without incurring downtime, particularly on end-user workstations, they can be pushed out and applied as soon as they are available. This would go a long way toward closing the gap between when a patch is released and when it is applied. Some patches will obviously still require a reboot, but as many patches as possible should be developed in a way that minimizes the need for one.
The Renewed Need for Workarounds
This defense is mostly on us (the Internet Storm Center) and the security community in general. For some time, workarounds have been less necessary because patching has been relatively easy to handle; the need to go a significant period of time before patching has only come up a handful of times in the past few years. If the patch window is gone, we need to renew our efforts to find quick "workarounds" that limit the exposure of machines during the vulnerable period. Some patches will require reboots and there will be no way around that, so we need defenses that let people protect themselves in the meantime.
Configuration Management
The last piece of the puzzle, a defense available to the people in the trenches, is centralized configuration/patch management. In part, this follows from yesterday's diary on configuration management. If we put out hotfixes, registry changes, killbits, or any other defense, centralized configuration management allows those minor protective changes to be deployed quickly so you can "limp along" until a patch can be applied. The important caveat is that deploying such a solution, especially one that manages everything in your environment, makes the configuration management system the single most important system you own, even more important than the ones that house trade secrets. It becomes a "single point of 0wnership" that lets an attacker take direct control not of one machine, but of an entire organization. Everything has its costs and benefits, and as long as you control the risks of centralized configuration management, the benefits make it worth it. Protect the keys to the kingdom.
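As an example of the kind of small protective change I mean, here is a sketch of setting an ActiveX killbit from Python using the standard winreg module; the CLSID shown is a placeholder, and in practice you would push the equivalent registry change through your configuration management tool rather than run a script by hand:

# Sketch: set the ActiveX "killbit" (Compatibility Flags = 0x400) for a
# vulnerable control so Internet Explorer refuses to instantiate it.
# The CLSID below is a placeholder; substitute the control you need to block.
import winreg

CLSID = "{00000000-0000-0000-0000-000000000000}"  # placeholder CLSID
KEY_PATH = (r"SOFTWARE\Microsoft\Internet Explorer"
            r"\ActiveX Compatibility\%s" % CLSID)

# Create (or open) the per-control compatibility key and set the killbit flag.
key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                         winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "Compatibility Flags", 0, winreg.REG_DWORD, 0x00000400)
winreg.CloseKey(key)

The same change can be expressed as a .reg file or Group Policy setting; the point is that a central management system lets you land it everywhere in minutes instead of days.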
Comments? Send 'em along.
--
John Bambenek / bambenek \at\ gmail /dot/ com