On the importance of patching fast

Published: 2009-02-03. Last Updated: 2009-02-03 18:33:39 UTC
by Swa Frantzen (Version: 2)
3 comment(s)

Patching

Every month we publish an overview of the patches Microsoft releases on Black Tuesday. Over the years we have learned that our readers like to have our take on which patches are more urgent than others, mainly because they have been burned by patches that broke other things.

While I write many of those overviews with very important behind-the-scenes help from the rest of the handlers, the cycle we collectively implement to delay patching is something that keeps me concerned: it might very well be just not fast enough. Personally, I think we might need to evolve our re-testing of patches (the vendor already tested them) to be far more lean and mean.

Especially since the monthly feedback we get about Microsoft patches causing trouble has dwindled to a very small number of really minor issues, I feel we have helped build an overly heavy process in many organizations, one that results in patches being deployed rather slowly. Perhaps too slowly; see the cautionary tale below.

PHPList

PHPList is an open-source newsletter manager written in PHP. On January 29th, 2009, they posted a software update: "[The update] fixes a local file include vulnerability. This vulnerability allows attackers to display the contents of files on the server, which can aid them to gain unauthorised access".

They also included a one-line workaround for those who could not patch fast enough.
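
To make the class of bug concrete, here is a minimal sketch of how a local file include typically works and how an allowlist closes it. It is written in Python purely for illustration; it is not PHPList's actual code, and the allowlist trick is not necessarily their published one-line workaround.

    import os

    TEMPLATE_DIR = "templates"

    def render_page_unsafe(page_param):
        """Vulnerable pattern: the file name comes straight from user input,
        so values like '../../../etc/passwd' expose arbitrary server files."""
        with open(os.path.join(TEMPLATE_DIR, page_param)) as f:
            return f.read()

    # Mitigation in the spirit of a one-line workaround: only ever include
    # files from a fixed allowlist, never a value the client controls.
    ALLOWED_PAGES = {"home": "home.html", "subscribe": "subscribe.html"}

    def render_page_safe(page_param):
        filename = ALLOWED_PAGES.get(page_param, "home.html")
        with open(os.path.join(TEMPLATE_DIR, filename)) as f:
            return f.read()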

UPDATE: An exploit against this vulnerability was published and used in the wild on January 14th, 2009, two weeks before the patch was issued.

phpBB

phpBB is open-source bulletin board software. It is written in PHP as well, but product-wise the relationship with PHPList stops there.

UPDATED: Please read the updated details below as well; they change the basic setting of this story significantly. Instead of simply erasing this section, please treat it as a fictional story that might have happened and that still has some useful lessons in it, regardless of the facts catching up with us. The events, to the best of our knowledge, are described under the update heading below.

The www.phpBB.com server, however, had the PHPList software installed, and on February 1st, 2009, merely three days later, it was hit by an attack against PHPList.

The attack was not only successful: the attackers got hold of the list of email addresses on the phpBB announcement list and the encrypted passwords of all users of the phpBB forum on phpBB.com, and published them.

While the phpBB software itself was not the path the attackers followed to get onto the server, the impact hits all users of phpBB.com's forum and mailing lists, many of whom are administrators of phpBB installations. Let's hope they do not use the same login/password combination elsewhere.

Learn lessons

We can either learn from falling ourselves and standing up again, or we can try to be a bit smarter and also learn from what made others fall.

How long would your organization have taken to roll out a patch released on a Thursday? Would it have been deployed on all servers well before Sunday?

Are we ready to patch in less than three days' time, even if that includes a weekend? If not, we might need to accelerate things:

  • How do we find out about patches being available? Make sure we're warned for all software we use! (A minimal sketch of automating this check follows this list.)
  • How do we test (if we test) and validate before we implement it in production? Even if it is a weekend?
  • How do we choose between the turn-around time of a workaround vs. that of a full patch?
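
As a starting point for the first bullet, here is a hypothetical sketch of automating the "is a newer version out?" check. The feed URL, the inventory and the parsing are placeholders; every vendor publishes release information differently (RSS, mailing lists, changelog pages).

    import urllib.request

    # Example inventory, not real data: product name -> version we currently run.
    DEPLOYED = {
        "phplist": "2.10.7",
        "phpbb": "3.0.4",
    }

    def latest_version(product):
        """Fetch the newest released version for a product.
        The URL is a hypothetical placeholder for a vendor feed."""
        url = "https://vendor.example/%s/latest.txt" % product
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode().strip()

    def report_outdated():
        for product, running in DEPLOYED.items():
            latest = latest_version(product)
            if running != latest:
                print("%s: running %s, latest is %s -- patch!" % (product, running, latest))

    if __name__ == "__main__":
        report_outdated()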

The stakes in this game are high. All the attacker has to do is find a single hole, while it is our job to fix them all. Moreover, the reputation of our respective organizations depends on our success at staying safe and ahead of all the attackers.

Update

We've been given additional information regarding the incident above, and it changes the story quite a bit.

  • The story didn't begin on January 29th as noted above, but quite a bit earlier.
  • On January 14th, an exploit against PHPList was made public on one of the well-known exploit publishing sites.
  • That exploit was used against www.phpBB.com's instance of PHPList on January 14th.
  • So, in essence, this break-in happened well before a patch was available (a so-called "0-day").

This also means that the lessons above, while still very valid lessons to learn (the story could have been true), aren't derived from the real sequence of events, and they need to be expanded with far harder questions:

  • Do you know if the software you have deployed has publicly known exploits against it? How do you find out? (A hedged sketch of one approach follows this list.)
  • Can you keep track of vulnerabilities in the software you use and make sure they all get a timely patch?
  • When do you stop waiting for a software vendor/maker to issue a patch and start doing something yourself?
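
For the first of those questions, one way to find out is to match a software inventory against a downloaded advisory or exploit feed. The feed file and its two-column CSV format below are assumptions; real sources (vendor advisories, full-disclosure lists, exploit archives) each need their own parsing.

    import csv

    # Products we run; example data only.
    INVENTORY = {"phplist", "phpbb", "apache", "openssh"}

    def known_exploits(feed_path):
        """Yield (product, advisory) pairs for products in our inventory.
        Assumes a CSV feed with columns: product, advisory identifier."""
        with open(feed_path, newline="") as f:
            for product, advisory in csv.reader(f):
                if product.lower() in INVENTORY:
                    yield product, advisory

    if __name__ == "__main__":
        for product, advisory in known_exploits("exploit_feed.csv"):  # hypothetical feed file
            print("Public exploit/advisory for %s: %s" % (product, advisory))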

These are very, very hard questions to ask. A lot of trust in software suppliers hangs in the balance if you try to answer them.

  • How do you detect break-ins that only steal information?

The answer to this is basically the answer to the need for a "detect loss of confidentiality" control. The most common solution is to monitor access to the information for anomalies, but even then it is far from easy to get this right without a lot of false positives or the risk of missing real incidents (false negatives).
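
As a minimal illustration of the "monitor access for anomalies" idea, the sketch below flags accounts that read far more records than their own recent baseline. The log format, threshold and baseline window are all assumptions, and tuning them is exactly where the false positives and missed thefts come from.

    import csv
    from collections import defaultdict

    def records_read_per_user(log_path):
        """Count records read per user.
        Assumes a CSV access log with columns: timestamp, user, record_id."""
        counts = defaultdict(int)
        with open(log_path, newline="") as f:
            for _timestamp, user, _record_id in csv.reader(f):
                counts[user] += 1
        return counts

    def flag_anomalies(today, baseline, factor=5.0, minimum=100):
        """Flag users reading at least `minimum` records and `factor` times their baseline."""
        return [user for user, count in today.items()
                if count >= minimum and count >= factor * baseline.get(user, 1)]

    if __name__ == "__main__":
        baseline = records_read_per_user("access_last_week.csv")  # hypothetical log files
        today = records_read_per_user("access_today.csv")
        for user in flag_anomalies(today, baseline):
            print("Unusual read volume for %s: possible bulk data theft" % user)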

--
Swa Frantzen -- Section 66

Keywords: patch phpBB PHPList
3 comment(s)

Comments

A suite of automated tests is good practice for software development, but these do not always ship with a release build (especially with proprietary software). If they did, users of the software could run these tests after patching as a safeguard against newly-introduced problems that the developer missed. An auto-update tool could run these automatically after patching, and roll back the patch and show warnings if the tests failed, maybe providing the vendor with useful debugging info.

Perhaps some organisations could benefit from writing their own suite of automated tests for their own environment, in the case of GUI apps maybe even 'screen-driving' (controlling the mouse/keyboard to test functionality) and then testing the final output file or screen display; ensuring all important functionality works as it needs to. These could be run after applying new patches to give some reassurance to the administrator that the software still works, and to hopefully pick up on any issues before site-wide deployment happens.
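
Sketched in Python, the roll-back-on-failure idea above could look roughly like this; the three shell scripts are placeholders for whatever packaging and deployment tooling is actually in use.

    import subprocess
    import sys

    def run(cmd):
        """Run a command and return True on success."""
        return subprocess.run(cmd).returncode == 0

    def patch_with_safety_net():
        if not run(["./apply_patch.sh"]):          # hypothetical deployment script
            print("Patch failed to apply; nothing changed.")
            return 1
        if run(["./run_smoke_tests.sh"]):          # hypothetical post-patch test suite
            print("Patch applied and smoke tests passed.")
            return 0
        print("Smoke tests failed; rolling back and collecting debugging info for the vendor.")
        run(["./rollback_patch.sh"])               # hypothetical rollback script
        return 2

    if __name__ == "__main__":
        sys.exit(patch_with_safety_net())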

Another important lesson to learn from the incident: do not EVER reuse a password for two systems that do not belong to the same security boundary, in particular on web-based systems.

If possible, use different passwords everywhere and manage them through password management tools.

Or better still, use an algorithm for your passwords; then you don't need a password manager, and every site you log onto has a different password (and a generic password reminder, which in my case reminds me how to build the algorithm, something like "where am I?").
