Is Anti-Virus Dead?
Each SANSFIRE, the Handlers who can make it to DC get together for a panel discussion on the state of information security. Besides discussion of the hot DNS issue, there is broad consensus among most of us on some of the biggest problems we face. Two come to mind: the fact that "users will click anything" and that "anti-virus is no longer sufficient". In my mind, the two are related.
Users will click anything
Some studies show that a well-formatted phishing attempt can garner about a 10% click-through rate. However, with targeting techniques, such as using content the recipient would expect to be legitimate, this can climb upwards of 80%. For example, if you got a random PDF file from someone named "fbtgsertgrwetgfe" with the subject "Angelina Jolie NEKKID!", you would most likely not click on the e-mail. Even better, your anti-spam solution might filter that message entirely. However, if you got a PDF file from your CEO with the subject "Important Changes to Health Care Plans", you would likely take a gander. The better targeted a phishing attack, the more likely even savvy people are to get infected. And it isn't only targeting via email that can be widely successful: how many of you add every Facebook application that gets forwarded to you without bothering to examine the content at all?
However, the fundamental problem behind this isn't so much that users will click anything, but that whatever the user says goes. Or, to put it another way, we tend to operate desktops under the principle of most privilege. How many of you allow your users administrator rights in the workplace? At home, everyone has local administrator. This gives the "bad guys" free rein. If you look at the development of the various phishing kits, they aren't really high tech. For them, it's lather, rinse, repeat all day long. The real development of malware tends to be on the command-and-control side; the phishing kits, web sites and, to a lesser extent, the droppers don't seem to be evolving all that quickly. They simply don't have to evolve fast: what they do keeps on working.
Is Anti-Virus Dead?
"I can't get infected by malware. I have anti-virus!" The absurdity of that statement needs no explanation at this point. This has led to people considering anti-virus a dead technology because it is always one-step behind attackers. This isn't necessarily untrue, but anti-virus by its very nature is reactive... it will only block against known threats. Additionally, anti-virus signatures are essentially public. Any number of resources exist to scan your malware to see if it detects. In short, you know ahead of time if you have the first ~24 hours of free reign. If you target your attack, you can have far longer because you have a higher potential of floating under the radar and getting your bad bytes captures by the AV guys and/or security researchers like us. AV, like all reactive technologies, suffers from the "First Win problem". It isn't so much that they are "one-step behind"; it is that fundamentally it can never be ahead of the attackers.
Does that mean AV solutions should just be chucked? Of course not. AV is a "90% solution": it still protects against known threats. Is it sufficient? No, but it never has been sufficient. Blocklisting technologies are far more effective when combined with whitelisting technologies. For instance, the combination of AV protection with a good perimeter firewall brings you a little farther down the road of security. While there is a debate on whitelisting vs. blocklisting technologies for binaries, a good step would be to start digitally signing binaries and to move to a "bayesian" method of determining risk. Not perfect, but better. Heuristics would be another good step (although heuristics are still basically a blocklisting technology, and reactive).
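To make the "bayesian" scoring idea a bit more concrete, here is a minimal sketch in Python. It is purely illustrative: the whitelist of known-good hashes stands in for verifying digital signatures, and the feature names, weights and thresholds are all invented for the example; a real implementation would verify actual code-signing certificates and use far richer features.

# Toy risk scoring for a binary: a whitelist check (stand-in for a digital
# signature check) plus a few weighted heuristic features.
# All hashes, feature names, weights and thresholds below are hypothetical.
import hashlib
import math
import os
import sys

KNOWN_GOOD_HASHES = {
    # SHA-256 hashes of binaries you have signed or otherwise approved.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

FEATURE_WEIGHTS = {
    "unsigned": 2.0,         # not on the whitelist / no trusted signature
    "high_entropy": 1.5,     # often correlates with packing or encryption
    "unusual_location": 1.0, # e.g. running out of a temp directory
}

def shannon_entropy(data):
    """Bits of entropy per byte; 8.0 is the theoretical maximum."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((data.count(b) / total) * math.log2(data.count(b) / total)
                for b in set(data))

def risk_score(path):
    """Return a score in [0, 1]; closer to 1 means 'treat with suspicion'."""
    with open(path, "rb") as f:
        data = f.read()
    features = {
        "unsigned": hashlib.sha256(data).hexdigest() not in KNOWN_GOOD_HASHES,
        "high_entropy": shannon_entropy(data) > 7.0,
        "unusual_location":
            os.path.dirname(os.path.abspath(path)).lower().endswith(("tmp", "temp")),
    }
    raw = sum(FEATURE_WEIGHTS[name] for name, present in features.items() if present)
    return raw / sum(FEATURE_WEIGHTS.values())

if __name__ == "__main__":
    for target in sys.argv[1:]:
        print(f"{target}: risk {risk_score(target):.2f}")

The point is the shape of the decision: instead of a binary good/bad verdict from a signature database, each executable gets a score that policy can act on (allow, prompt, or block).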
What now?
So how do we protect ourselves from malware? That's the million-dollar question, but here are some suggestions. Please send in your feedback and we'll do a follow-on post.
-- We need to shift our paradigm of what we protect. We ought not to be primarily concerned with protecting "machines". Machines are a means to an end, not an end in and of themselves. We protect "information", not hardware. For instance, we simply cannot protect consumer PCs. They are inherently insecure and insecurable, and it's fundamentally unsound and unfair to expect consumers to be able to harden their own machines. We need to adapt our electronic commerce to this fact. For instance, we assume that the "cloud" of the Internet between point A and point B is insecure. That is why we have things like VPNs; we simply bypass the problem with encryption. The same should be true of consumer PCs; we need to find ways to do commerce on an insecure system so that information cannot be stolen... or at least not enough information to totally jack someone's identity. The same is true on the corporate side... we don't protect hardware for the sake of protecting hardware. We are securing intellectual property, and in that sense we need to "redraw" our perimeter around the logical information flows of confidential data.
-- As I mentioned before, digital signatures for binaries and "bayesian" style scoring for binaries/scripts.
-- Stop operating the desktops under a principle of most privilege. In a corporate environment this is far easier; it's a little more difficult in an academic environment (I've been party to debates in academia over why we can't do information security because it impedes academic freedom... luckily much of that has subsided, but it's still a problem). It is a very difficult problem at home, but there are still some things we can do, and some things operating systems simply shouldn't allow.
-- We've conditioned our users to operate their computers in "button mash" mode. The infinite series of "Are you sure?" messages no longer means anything, whether it's installing programs, AV warnings or pop-up windows. The UI needs to stop spamming unsophisticated users with information, because the overload causes people to shut down their thought processes and simply mash "Next... Next... Next...".
What else would you add?
----
John Bambenek
bambenek /at/ gmail /dot/ com
Linus - Linux and Security - follow-up
As promised in an earlier diary, here is some follow-up with your comments.
By far the most comments we received spoke out either in favor of or against full disclosure. Some selected comments:
- Neal: "Unless you are doing a red-team audit, working exploits are never needed. I could build a bomb in my home to prove it can be done, but the actual construction is never needed. If your goal is only to explain that it is possible, then there is no need to do it. (At Defcon last year, one guy tried to explain why working exploits were needed. I told him to prove to me that gravity is dangerous -- go jump off a cliff.)"
- Guy: "While full disclosure is far from perfect, it's the best solution in the absence of any alternative. For commercial vendors, it's often the only reason to quickly fix a security vulnerability (look at Microsoft's track record). It also protects the finder of the hole from being silenced instead of fixing the bug."
- Joshua: "How are you going to get the full picture without exploit code? Thinking the software manufacturers are going to release this information to security researchers is naive. This is kind of like making bulletproof vests without ever firing a gun or testing it." To which one of the other handlers replied: "Yeah, but who is wearing the vest during testing? In the FD world, those of us wearing a vest can be "test" shot by anyone, since FD provides the guns and ammo."
I guess we'll not settle such widely diverging viewpoints on full disclosure anytime soon.
What stood out to me is that most of the pro-FD arguments concern issues with commercial and/or closed-source software, where the vendors are unwilling to admit they have sold highly broken products to consumers at large. If you read back to the original subject, we were discussing Linux: it is open source, and the community at large is the developer ...
As for Linus' viewpoint of not providing any hint of a security bug aside from the fix itself in the source code, it stood out that nobody spoke up to defend it. On the contrary:
- Alan: "Apparently those who develop code cannot be relied upon to provide the information needed by the public to manage their patch processes -- even for major open source projects."
- Guy: "I do not believe that it is possible to keep security issues secret. Some attacker might find it during the time that it wasn't disclosed and exploit it on his own (or sell it)."
- A reader wishing anonymity: "I am troubled by the security through obscurity approach Torvalds puts forward."
Similarly, I've seen little indication our readers think security bugs should be treated just like ordinary bugs.
An important reminder came from Jos: "What exactly is a security bug? I think it's not only about unauthorized access, it's also every bug that could lead to data corruption or less availability of a time-critical application." Something we all need to agree on: security is about managing risk to the "CIA" triad of Confidentiality, Integrity and Availability. It is also something we all too often know but fail to apply in the way we speak about things.
- Morten: "I think Aidan Thornton has an important point, although obvious in his own words. Bugs can hit you on three levels:
1. You lose availability of your system (crash; just restart the program).
2. You lose data (silent data corruption; just use a backup, which you do of course have!).
3. You lose your money!
"Normal" bugs hit you on level 1 and possibly on level 2, security holes can hit you on *all* levels, so they ARE different, and need to be found, fixed and rolled out BEFORE they hit anyone. You can't just restart your money, or roll them back in from a backup (mmmmmm... 8-] if only...). Security bugs must be hunted actively, normal bugs you just catch in traps ..." - Niel had an interesting observation: "[Security bugs are different from normal bugs] due to intent, but not due to the fundamental cause." This observation is very interesting in itself indeed as it can show a relationship between developers (who unwillingly create the bugs), QA who try to find bugs and security who extend the bugs towards motivation. In Niel's words:"There is a big difference between developers, quality assurance (software testers), and security people.
- Developers may not think about how the code will be used beyond the specifications.
- QA must think differently, to make sure the code does what it should do (alpha testing) and does not do what it shouldn't (beta testing). In addition, not every QA person does development.
- Security people extend QA to motivation. "Security staff" are a special extension to QA and usually need to have developer skills in order to evaluate exploits, create solutions, and implement test cases. In this regard, security people ARE better than normal developers and QA because they must see a bigger picture and be able to react to it."
- An anonymous reader wrote: "Security bugs ARE different. They require faster and sometimes untested repairs."
Few responses focused on finding a balance between full disclosure and obscurity; a notable exception was Morten:
"I have always upgraded my open source software/freeware as soon as I saw a newer version of it, important fixes or not (with a few exceptions), but I like to be given just some obscure reason like "Security bug fixed involving feature A used with feature B resulting in remote code execution or elevation of rights or ...", so I can make sure to upgrade to exactly this release, but I don't need to see POCs or exploits to be convinced. It is always a race, and I see no reason to change the rules of the game, before I have had a chance to enter the pit to change the tires as preparation to the new rules. Full disclosure, no thanks, just a rough summing up to tell normal bugs from security bugs."
I'd like to wrap up with a very important reminder from Jos that the business is what we're all about:
The main 'problem' I observe is that the discussions remain mainly on the tech side.
The main contributors to both the kernel and DNS discussions are technicians. The business side does not have a voice, and therefore people don't consider the implications of their suggestions for the business.
What impact does Full Disclosure have on a company that does not have the resources to patch at will, but needs several days or even weeks to fix?
Does it really matter if a bug is marked as generic or security from business perspective? How many companies have a dedicated 'security team' available to evaluate those kinds of bugs? And if they don't, how are they going to evaluate the bug and patch?
I don't believe in security by obscurity, but I also don't believe that making every security leak fully visible helps.
Be honest: say there is a problem in the software and indicate how much of a problem it is, so the business can determine how much effort they have to put into solving it. A 'security patch' label or the full details are not needed most of the time.
I very much like the idea of requiring an impact indication, instead of labelling "Confidentiality" and "Integrity" problems as "security" but not doing so for "Availability" issues!
--
Swa Frantzen -- Section 66