The argument for moving SSH off port 22

Published: 2015-01-05. Last Updated: 2015-01-06 00:08:22 UTC
by Rick Wanner (Version: 1)
21 comment(s)

An interesting discussion is taking place on Reddit about whether Secure Shell (SSH) should be deployed on a port other than 22 to reduce the likelihood of being compromised. One comment argues that security by obscurity is not a security measure but merely a way to delay the attacker, so it provides little value. While it is true that it is difficult to stop a determined attacker who is targeting you, any measure that stops the random script kiddies and scanners from poking at your SSH is not completely useless.

The truth is that I have been deploying SSH on non-standard ports (typically 52222) for more than 15 years on the Internet-facing servers I manage. Of course this is not the only security measure I employ: I patch daily, use hosts.allow where practical, use keys and passphrases instead of passwords, and deploy DenyHosts. Do I deploy on a non-standard port because of the security advantages to be had from security by obscurity? Not at all! I deploy SSH on a non-standard port because it eliminates all the noise that is ever-present on port 22. The continual scanning and attempted brute-forcing of SSH has been on the Internet since the beginning of time and seems to get worse every year; it generates a lot of noise in the logs and is at best a nuisance and at worst service-affecting for the server. Why put up with it if you don’t have to?
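
If you want a rough sense of how much of that noise shows up in your own logs, a quick count of failed SSH attempts per source address tells the story. The sketch below is a minimal example, not part of my actual tooling: it assumes a Debian-style /var/log/auth.log and the stock OpenSSH "Failed password"/"Invalid user" messages, so adjust the path and patterns for your environment.

```python
#!/usr/bin/env python3
"""Count failed SSH login attempts per source IP from an OpenSSH auth log.
Minimal sketch: assumes a Debian/Ubuntu-style /var/log/auth.log and the
default OpenSSH log messages; adjust for your syslog configuration."""
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"   # assumption: Debian-style syslog location
PATTERN = re.compile(
    r"sshd\[\d+\]: (?:Failed password|Invalid user).*?from (\d+\.\d+\.\d+\.\d+)"
)

counts = Counter()
with open(LOG_PATH, errors="replace") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1

# Print the 20 noisiest source addresses
for ip, hits in counts.most_common(20):
    print(f"{hits:6d}  {ip}")
```

Run something like this before and after moving sshd to a high port and compare the totals; the difference is the noise you no longer have to wade through.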

Moving to a non-standard port decreases the volume of hostile traffic so much that I often have to test my defenses to be sure they are working. Sometimes I even deploy a honeypot on port 22 to see what the bad guys are up to. (-;
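
If you want to try something similar, a listener as simple as the sketch below will capture who is knocking on port 22 and what SSH client banner they present. This is only an illustration, not my actual honeypot: binding to port 22 requires root, the advertised OpenSSH version string is made up, and a real deployment would want rate limiting and proper logging.

```python
#!/usr/bin/env python3
"""Minimal port-22 "honeypot" sketch: log who connects and the client banner
they send, then drop the connection. Assumes the real sshd has already been
moved to another port."""
import datetime
import socket

LISTEN_PORT = 22  # assumption: real sshd is listening elsewhere

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", LISTEN_PORT))
    server.listen(16)
    while True:
        client, (addr, port) = server.accept()
        client.settimeout(5)
        try:
            # Send a fake server identification so the client responds with its own
            client.sendall(b"SSH-2.0-OpenSSH_6.7\r\n")
            banner = client.recv(256).decode("ascii", errors="replace").strip()
        except OSError:
            banner = ""
        finally:
            client.close()
        # Record timestamp, source address, and whatever client banner we saw
        print(f"{datetime.datetime.utcnow().isoformat()} {addr}:{port} {banner!r}",
              flush=True)
```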

Opinions?

-- Rick Wanner MSISE - rwanner at isc dot sans dot edu - http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)


Defensible network architecture

Published: 2015-01-05. Last Updated: 2015-01-05 02:52:09 UTC
by Rick Wanner (Version: 1)
6 comment(s)

For the nearly 20 years since Zwicky, Cooper, and Chapman first wrote about firewalls, the firewall has been the primary defense mechanism of nearly every entity attached to the Internet. While perimeter protection is still important in the modern enterprise, the fact is that the nature of Internet business has vastly changed, and the crunchy-perimeter, squishy-inside approach has long since become outdated. You can’t deny what you must permit, and the primary attack vectors today appear to be email and browser exploits: two aspects of your business model that you cannot do without, and which can give the bad guys a foothold inside your perimeter protections.

As the Sony, Target, Home Depot, and many other breaches have shown, once the bad guys are into the network they are content to dig in, explore, and exfiltrate large amounts of data, often going undetected for months. What is needed is a security architecture that focuses on protecting data and detecting anomalies: one that results in a network capable of defending itself from the bad guys.

Richard Bejtlich introduced the concept of a defensible network architecture more than 10 years ago in his books. The concepts are even more important today and are gaining traction, but they have not yet reached widespread adoption in the security industry.

What does a defensible network architecture look like today? In my opinion, these are the minimum fundamentals to aim for in a modern defensible security design:

Segregation

The fact is that most enterprise networks are very flat and provide little resistance once the network perimeter is breached. Desktops are the most likely ingress vector for malware, yet most organizations’ desktop LANs are very flat, and desktops often have virtually unimpeded access to the entire network. Creating segregation between the desktop LAN and the critical data stored on servers is a huge step toward impeding a breach of the network. Desktops do not normally need to communicate with other desktops, so the first step is to segregate desktops from each other to limit desktop reconnaissance and worm-style propagation between desktops. Second, the desktop LAN should be treated as a hostile network and should only be permitted access to the minimum data required to do business.

Servers should be segregated from each other, and from the desktop LAN, with firewalls. Access to the servers must be limited to communication on the ports required to deliver the business functions of the server; the same applies to desktops. Only the few people who require administrative access to perform their responsibilities should have it. Why a firewall for segregation rather than VLANs or some other method? The firewall gives you detailed logging, which can be used as an audit trail for incident response purposes.
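
Segregation rules also need to be verified periodically, because permissive rules have a way of creeping in. The sketch below is one hypothetical way to spot-check from a host on the desktop segment: it probes a handful of server ports and flags anything reachable that is not on the documented allow list. The addresses, ports, and allow list are illustrative assumptions, not from any real environment.

```python
#!/usr/bin/env python3
"""Quick segmentation audit sketch, run from a desktop-segment host: try to
connect to server ports and flag anything reachable that is not on the
documented allow list. All hosts and ports below are hypothetical."""
import socket

# Hypothetical inventory: (server, port) pairs the desktop LAN *should* reach.
ALLOWED = {("10.1.10.20", 443), ("10.1.10.25", 445)}
# Ports worth probing on each server.
PROBE_PORTS = [22, 80, 443, 445, 1433, 3306, 3389]
SERVERS = ["10.1.10.20", "10.1.10.25", "10.1.10.30"]

for server in SERVERS:
    for port in PROBE_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            reachable = s.connect_ex((server, port)) == 0
        if reachable and (server, port) not in ALLOWED:
            print(f"UNEXPECTED: {server}:{port} is reachable from the desktop segment")
```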

Instrumentation

Dr. Eric Cole has always evangelized that “Prevention is ideal, detection is a must”. Prevention is a laudable goal, but the fact is that prevention, in most cases, is hard. Aim for prevention where you can, but assume that whatever preventive controls you deploy are going to be defeated or fail. When the controls do fail, would you notice? Sony failed to detect terabytes of data leaving their network. Would you notice it leaving yours? It is essential to instrument your network so that detection is possible. There are two essential elements to minimal detection. The first is properly installed and managed intrusion detection, preferably at both the network and host level. Even more important is network instrumentation that will permit you to detect network anomalies. The goal is to notice deviations from the norm, and to notice those deviations you must understand what the network baseline is. NetFlow data is sufficient here, but there are many network products that will provide instrumentation that can be used for alerting and monitoring of the network, and which will be critical in the investigation of a breach.
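
As a concrete illustration of the baseline idea, the sketch below compares today’s per-host outbound byte counts against a historical daily average and flags anything far above its norm. It assumes flow records have already been exported to CSV with src_ip, dst_ip, and bytes columns (for example from an nfdump or flow-tools export); the file names and the 10x threshold are illustrative assumptions, not recommendations.

```python
#!/usr/bin/env python3
"""Baseline-vs-today sketch for flow data: sum outbound bytes per internal
host and flag anything far above its historical norm. Column names, file
names, and the threshold are assumptions for illustration only."""
import csv
from collections import defaultdict

def outbound_bytes(path, internal_prefix="10."):
    """Sum bytes sent from internal hosts to external destinations."""
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            src, dst = row["src_ip"], row["dst_ip"]
            if src.startswith(internal_prefix) and not dst.startswith(internal_prefix):
                totals[src] += int(row["bytes"])
    return totals

baseline = outbound_bytes("flows_30day_daily_avg.csv")  # hypothetical baseline export
today = outbound_bytes("flows_today.csv")               # hypothetical current export

for host, sent in sorted(today.items(), key=lambda kv: kv[1], reverse=True):
    norm = baseline.get(host, 0)
    if norm and sent > 10 * norm:   # illustrative threshold: 10x the daily norm
        print(f"{host}: {sent/1e9:.1f} GB out today vs {norm/1e9:.1f} GB/day baseline")
```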

Application whitelisting

Let’s get it out of the way: signature-based anti-virus may not be dead, but it is on life support. The model of protecting hosts based on known malware threats is badly broken in this era of ever-changing malware. Proactive patching is definitely a step in the right direction as far as plugging known vulnerabilities, but user behaviour is still the weak link in the malware chain, and removing users from the picture is not a practical solution. The best approach is to apply to the host the same principle we apply to network access: restrict access to the minimum required to do business, and deny all behaviours on the host except those required by the applications. This is a huge shift in host architecture that will probably be met with a lot of resistance from sysadmins and application owners, but it is one of the few approaches that offers any realistic likelihood of successful host defense. Application whitelisting has been available since early last decade, but it is only in the last few years that these products have matured to the point of being manageable.
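
To make the idea concrete, the toy sketch below shows the core decision a whitelisting product makes: hash the binary and allow it only if the hash is on an approved list. Real products enforce this at execution time in the kernel; this only illustrates the decision logic, and the hash in the allow list is a placeholder.

```python
#!/usr/bin/env python3
"""Toy illustration of application whitelisting: allow an executable only if
its SHA-256 hash appears in an approved set. The hash below is a placeholder,
not a real approved binary."""
import hashlib
import sys

# Hypothetical allow list of SHA-256 hashes of approved binaries.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder
}

def sha256_of(path):
    """Hash a file in chunks so large binaries do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: allowcheck.py <path-to-binary>")
    binary = sys.argv[1]
    verdict = "ALLOW" if sha256_of(binary) in APPROVED_HASHES else "DENY"
    print(f"{verdict} {binary}")
```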

This is my approach.  What suggestions do you have for creating a defensible network architecture?

-- Rick Wanner MSISE - rwanner at isc dot sans dot edu - http://namedeplume.blogspot.com/ - Twitter:namedeplume (Protected)

