Automatically Documenting Network Connections From New Devices Connected to Home Networks

Published: 2015-03-16. Last Updated: 2015-03-16 14:24:45 UTC
by Johannes Ullrich (Version: 1)

This is a guest diary submitted by Xavier Mertens.

Writing documentation is a pain for most of us, but it is mandatory! Pentesters and auditors don't like to write their reports once the fun stuff has been completed. The same goes for developers: writing code and developing new products is fun, but good documentation is often missing. By documentation, I mean "network" documentation. Why?

When you buy software or hardware from a key player to connect to a corporate environment, the documentation usually contains a clear description of the network requirements. They could be:

    • A list of ports to allow in firewalls or any kind of filter to access the device/application
    • A list of ports used by the device/application to access online resources and basic services (NTP, DNS, Internet, Proxy, ...)
    • A list of online resources used (to fetch updates, to send bug reports, to connect to cloud services, ...)

But today, more and more devices are connected (think of the IoT buzz: the "Internet of Things"). These devices are manufactured in a way that they automatically use any available network connectivity: configure a wireless network and they are good to go. Classic home networks are based on xDSL or cable modems which provide only basic network services (DHCP, DNS). This is not the best way to protect your data: such networks lack egress filters, so any connected device has full network connectivity and can potentially exfiltrate juicy data. That's why I militate in favor of a documentation template to describe the resources required to operate such "smart" devices smoothly. Here is a good example: I have a Nest thermostat installed at home, and it constantly connects to the following destinations:

54.227.140.192:9543
23.21.241.75:443
23.23.91.51:80
54.243.35.110:443
87.106.208.187:80
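
With such a list in hand, an egress policy is straightforward to build. As a minimal iptables sketch, assuming the thermostat has been pinned to the hypothetical static address 192.168.0.50 and the destination list above is kept current:

# Allow only the destinations documented above; log and drop the rest.
iptables -N NEST
iptables -A FORWARD -s 192.168.0.50 -j NEST
iptables -A NEST -d 54.227.140.192 -p tcp --dport 9543 -j ACCEPT
iptables -A NEST -d 23.21.241.75   -p tcp --dport 443  -j ACCEPT
iptables -A NEST -d 23.23.91.51    -p tcp --dport 80   -j ACCEPT
iptables -A NEST -d 54.243.35.110  -p tcp --dport 443  -j ACCEPT
iptables -A NEST -d 87.106.208.187 -p tcp --dport 80   -j ACCEPT
iptables -A NEST -j LOG --log-prefix "NEST-DROP: "
iptables -A NEST -j DROP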

It's easy to make your home network safer without spending a lot of time and money. When a new device is connected to my network, it receives a temporary IP address from a small DHCP pool (ex: 192.168.0.200-210). This pool has very limited network connectivity: it uses a local DNS resolver (to track the domains being used) and is only allowed to communicate over HTTPS to the Internet. A Snort IDS and tcpdump run constantly to capture and inspect all packets generated by the IP addresses in the DHCP pool. This is easy to set up with the following shell script running in the background; a sketch of the DHCP pool configuration follows the script.

#!/bin/bash
# Capture one day of traffic from the quarantine pool, then compress
# the pcap and start a fresh capture file.
while true
do
   TODAY=$(/bin/date +"%Y%m%d")
   /usr/sbin/tcpdump -i eth1 -s 0 -w /data/pcaps/tcpdump-$TODAY.pcap \
       host 192.168.0.200 or \
            192.168.0.201 or \
            192.168.0.202 or \
            192.168.0.203 or \
            192.168.0.204 or \
            192.168.0.205 or \
            192.168.0.206 or \
            192.168.0.207 or \
            192.168.0.208 or \
            192.168.0.209 or \
            192.168.0.210 &
   TCPDUMP_PID=$!
   sleep 86400 # Go to sleep for one day
   kill $TCPDUMP_PID
   gzip -9 /data/pcaps/tcpdump-$TODAY.pcap
done
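
The temporary pool itself is just a small DHCP range with short leases. A minimal ISC dhcpd sketch, assuming the gateway and local resolver both live at 192.168.0.1:

# /etc/dhcp/dhcpd.conf sketch - quarantine pool with short leases
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.200 192.168.0.210;       # temporary pool for new devices
    option domain-name-servers 192.168.0.1;  # local resolver logs the queries
    option routers 192.168.0.1;
    default-lease-time 600;                  # short leases (10 minutes)
    max-lease-time 1800;
}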

When a new device is connected, its traffic is automatically captured and can be analyzed later. Once the analysis is complete, a static DHCP lease is configured with the device's MAC address and the firewall policy is adapted to permit the required traffic (see the sketch below). Not only does this help secure your network, it can also reveal interesting behavior.
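
In ISC dhcpd, such a static lease is a simple host entry (the MAC address and fixed address below are placeholders):

# Pin the vetted device to a fixed address outside the quarantine pool
host nest-thermostat {
    hardware ethernet 18:b4:30:aa:bb:cc;  # placeholder MAC of the device
    fixed-address 192.168.0.50;           # matched by its own firewall policy
}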
 

---
Johannes B. Ullrich, Ph.D.

Comments

Thanks Xavier and Dr. B. Nice diary - food for thought.

One additional thing I've done is to force internal hosts at home to use my internal DNS as well. You alluded to this but thought I'd point it out in more detail. I don't allow udp/tcp 53 outbound from my house, except for allowing my DNS server to look up queries upstream.
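
On a Linux gateway, that policy can be sketched with a few iptables rules (assuming the internal resolver sits at 192.168.0.10; addresses are placeholders):

# Only the internal resolver (assumed at 192.168.0.10) may send DNS
# out; all other outbound port-53 traffic is logged and dropped.
iptables -A FORWARD -s 192.168.0.10 -p udp --dport 53 -j ACCEPT
iptables -A FORWARD -s 192.168.0.10 -p tcp --dport 53 -j ACCEPT
iptables -A FORWARD -p udp --dport 53 -j LOG --log-prefix "DNS-DROP: "
iptables -A FORWARD -p tcp --dport 53 -j LOG --log-prefix "DNS-DROP: "
iptables -A FORWARD -p udp --dport 53 -j DROP
iptables -A FORWARD -p tcp --dport 53 -j DROP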

I've seen several software applications and a hardware device recently start attempting to use external DNS hosts first regardless of my DNS and DHCP settings. I realize that some of these applications are doing this to make sure there's no hokey DNS stuff going on with my network, but I think I deserve to decide that first. ;)

I added a Security Onion installation after Dr. J's diary regarding data exfiltration in February and I'm really enjoying the visibility it provides.

> You alluded to this but thought I'd point it out in more detail. I don't allow udp/tcp 53 outbound from my house, except for allowing my DNS server to look up queries upstream.

I guess I could never really put up with that... Being able to troubleshoot DNS issues is very important, and a 'dig' trace is one of the most useful tools in my toolbox. I have dig installed on all my workstations. And it ought to be in the toolbox of any sysadmin or network op who needs to do some further diagnosis after it's determined that accessing a URL has apparently inconsistent results or inconsistent load times.

Forward and Reverse DNS query response is vital. The tool might be more important than PING.

For example... if the DNS response is wrong, I need to determine whether it's still wrong, or if it's bad data in cache. I occasionally have applications that need to query all of a target domain's authoritative servers to verify that all servers are returning the same results for a certain query; this is also a basic DNS monitoring function.
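
A quick shell sketch of that check (the domain and record are placeholders; it just compares one answer across every authoritative server dig reports):

#!/bin/bash
# Ask every authoritative server of a domain for the same record and
# print each answer so discrepancies stand out (placeholder names).
DOMAIN=example.com
for NS in $(dig +short NS $DOMAIN); do
    echo "== $NS =="
    dig +short @$NS www.$DOMAIN A
done
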
I'd submit that monitoring all outbound traffic is useful not just in a home environment. I have a love/hate relationship with a few custom Snort rules I dropped on our Snort servers at $DAYJOB$. Basically, they alert on any traffic between certain server subnets and any outside IP address, except for a list of IPs/networks I have exempted (a rough sketch follows below).

I hate these rules because they can be noisy. It's become ridiculous how many applications want to phone home to some IP or network (like a certain backup client that phones home every time backups are run - how creepy is that - I'm told it's just date/time/start/stop/volume sort of info, but still). Or all the Windows servers downloading various certificate revocation lists. Or some app that downloads its updates from an Akamai IP that never seems to be the same one twice. (sigh)

On the other hand, I also love these rules because they uncover "stupid" (tm) like vendors/contractors surfing to myfacespacebook.com from systems they're supposed to be working on, not surfing the interwobble. They also uncover badly configured hardware/software (like some app determined to use a public NTP server instead of the internal ones we told it to use).

The original intent of these rules was to help watch for data exfiltration and just to get a handle on what these servers were talking to (in theory, we thought, they shouldn't be talking to ANYONE outside of our IP space). It's been useful, but it's also been a bit of a headache occasionally...
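
In Snort rule form, such a rule might look roughly like this (the subnets, exempt list and SID are all placeholders):

# snort.conf sketch - all addresses below are placeholders
ipvar SERVER_NET [10.1.1.0/24,10.1.2.0/24]
# exempt list: our own address space plus approved external hosts
ipvar EXEMPT_NET [10.0.0.0/8,192.0.2.17,198.51.100.0/24]
alert ip $SERVER_NET any -> !$EXEMPT_NET any (msg:"Server subnet to non-exempt IP"; sid:1000001; rev:1;)
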
Oh yeah, one other note... blocking outbound DNS except from DNS proxies you control is a VERY good idea. I'd go so far as to say it oughta be on everyone's mandatory checklist. :-) Not only that, but then monitor your firewall logs for outbound DNS queries being blocked.

Once upon a time, I had oak (my swiss-army-knife-tool for watching logs) alerting me anytime it saw ANY outbound DNS being blocked and we caught quite a few infected systems that way. These days, however, I've had to turn that off because of all the newer linuxes that (stupidly, IMHO) use dnsmasq as a caching DNS client. They're always trying to contact the root servers or other DNS servers on their own instead of using the DNS servers we tell 'em to use via DHCP. (sigh - whoever thought using dnsmasq was a good idea deserves a slap)

I DO still have a watchlist of IPs known to be hosting or having once hosted malicious DNS servers though. And we do still watch our firewall logs for any blocked outbound DNS packets going to those IPs.
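
A sketch of that log watch, assuming the firewall logs its drops to syslog with a "DNS-DROP: " prefix and the watchlist is a flat file with one IP per line:

#!/bin/bash
# Report blocked outbound DNS aimed at watchlisted servers.
# Assumes iptables logs with the prefix "DNS-DROP: " and that
# /etc/dns-watchlist.txt holds one suspect IP per line.
grep 'DNS-DROP:' /var/log/syslog \
    | grep -oE 'DST=[0-9.]+' | cut -d= -f2 | sort -u \
    | grep -Fx -f /etc/dns-watchlist.txt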
