Predicting Microsoft
After our morning vulnerability briefings, I often receive emails from our support engineers asking: "when will Microsoft release a patch for this?" Answers such as "it depends" and "probably next Patch Tuesday," although technically accurate, do not provide much value to them. A real answer is important to them: it affects their planned maintenance cycles and can have a real dollar impact on the firm. So how do you actually answer that question?
First, what kind of answer is required? For this particular example, a "yes" or "no" answer to "Will it appear in the next monthly release?" is acceptable, but a level of confidence should accompany it. "Will it be released before schedule?" is also a key follow-on question. So a good answer would be: "This patch is 95% likely to appear in next month's release; an early release is unlikely."
If you don't provide a level of confidence, you really can't improve your process. Vague terms like "not likely" or "probably" are not measurable, and if you can't measure your intelligence process, you can't improve it. This is why confidence levels and feedback are so important.
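One simple way to make confidence measurable is to score your stated probabilities against outcomes, for example with a Brier score (mean squared error between confidence and result; lower is better). The sketch below is illustrative only; the prediction records are invented:

```python
# Sketch: scoring stated confidence levels against actual outcomes.
# Each record is (confidence that the patch ships next cycle, whether it did).
# All data here is hypothetical, for illustration only.
predictions = [
    (0.95, True),
    (0.95, True),
    (0.60, False),
    (0.80, True),
]

def brier_score(preds):
    """Mean squared error between confidence and outcome (0.0 = perfect)."""
    return sum((p - (1.0 if outcome else 0.0)) ** 2
               for p, outcome in preds) / len(preds)

print(round(brier_score(predictions), 4))  # -> 0.1013
```

Tracking this score across release cycles gives you exactly the feedback loop the process needs: if your "95% confident" calls are right only 80% of the time, the score will say so.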
To develop this answer, I use a conceptual model. This process became a lot easier once Microsoft moved to their monthly release schedule. Given the past year's release data, I stand a reasonable chance of predicting accurately. A key element is our knowledge base, which contains known vulnerability information, patch release dates, and details of each patch. I expect Bayesian analysis could be as accurate as I am. Some of the key findings from the model are:
- Microsoft will release an out-of-band patch only if a third party has released an unofficial patch, and that patch involves a change more involved than a kill-bit.
- Microsoft will release a patch on the next release date if the fix involves only a kill-bit.
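The two findings above can be encoded as a toy rule-based predictor. Everything here is a sketch under my own assumptions; the function and field names are hypothetical:

```python
# Sketch: the two model findings as explicit rules.
# Inputs are hypothetical booleans you would pull from a knowledge base.
def predict_release(fix_is_killbit_only: bool,
                    third_party_patch_exists: bool) -> str:
    # Finding 2: a kill-bit-only fix waits for the next scheduled release.
    if fix_is_killbit_only:
        return "next scheduled release"
    # Finding 1: out-of-band only when an unofficial third-party patch
    # exists AND the fix is more involved than a kill-bit.
    if third_party_patch_exists:
        return "out-of-band release likely"
    return "next scheduled release"

print(predict_release(True, False))   # -> next scheduled release
print(predict_release(False, True))   # -> out-of-band release likely
```

Rules this explicit are easy to falsify: each release cycle either confirms them or tells you which input you were missing.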
Try this yourself at home or at work. There are 5 unknowns coming out next week; make your guesses about what will be released. The kill-bit for the XML Core Services issue has already been announced. Be sure to keep score, shooting for that 95% confidence rating. For issues where you're not 95% confident, list what information you would need to improve that rating. That's how you improve your process. For bonus points, train a learning algorithm (Bayesian, neural network, automata, etc.) to play along.
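For the bonus points, here is one way a minimal hand-rolled naive Bayes classifier could "play along." The feature names and the training history are entirely invented; a real model would be trained from the knowledge base described above:

```python
# Sketch: a tiny naive Bayes model predicting whether a patch ships
# in the next cycle. Features and history are hypothetical.
from collections import defaultdict

# (features, shipped_next_cycle)
history = [
    ({"killbit_only": True,  "third_party_patch": False}, True),
    ({"killbit_only": False, "third_party_patch": False}, True),
    ({"killbit_only": False, "third_party_patch": True},  False),
    ({"killbit_only": True,  "third_party_patch": False}, True),
]

def train(data):
    class_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    for feats, label in data:
        class_counts[label] += 1
        for name, value in feats.items():
            feat_counts[label][(name, value)] += 1
    return class_counts, feat_counts

def predict(model, feats):
    class_counts, feat_counts = model
    total = sum(class_counts.values())
    best_label, best_p = None, -1.0
    for label, count in class_counts.items():
        p = count / total
        for name, value in feats.items():
            # Laplace smoothing avoids zero probabilities for unseen pairs
            p *= (feat_counts[label][(name, value)] + 1) / (count + 2)
        if p > best_p:
            best_label, best_p = label, p
    return best_label

model = train(history)
print(predict(model, {"killbit_only": True, "third_party_patch": False}))
```

Even a toy like this makes the exercise concrete: every release cycle adds a labeled row to the history, and you can compare the model's hit rate against your own.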