The Other Side of CIS Critical Control 2 - Inventorying *Unwanted* Software

Published: 2019-06-26. Last Updated: 2019-06-27 14:26:03 UTC
by Rob VandenBrink (Version: 1)

When I work with clients and we discuss CIS Critical Control 2, their focus is often on inventorying their installed software.  Today we'll talk about inventorying software that you *didn't* install.  Malware is typically the primary target on that "we didn't install that software" list.

The method we're looking at today will inventory the running processes across the enterprise, and we'll look at how to "sift" that information to find outliers - applications that are running on only one or two hosts (or on fewer than 5 or 10% of hosts, whatever your cutoff is).  Note that this hunts for *running* software, not software that was installed with a traditional MSI file, so it does a good job of finding malware, especially malware that hasn't spread far past its initial infection hosts yet.

OK, let's look at the base code.  We're basically running get-process, getting the on-disk path for each process, then hashing that file on disk.  If the hash operation errors out (which it will for file-less malware, for instance), that process is saved to an error log.  The hash is the key item: it uniquely identifies each file, so even if malware has replaced a known filename, the hash on that station will be different.  You can then use the hash to reference back to malware IOCs if that's helpful.  Note that the hash in this case is SHA1 - you can change this to meet whatever your hashing requirements are, or add a few different hashing algorithms if that works better for you.

# collect the process list, then loop through the list
foreach ($proc in get-process)
    {
    try
        {
        # hash the executable file on disk
        $hash = Get-FileHash $proc.path -Algorithm SHA1 -ErrorAction stop
        }
    catch
        {
         # error handling.  If the file can't be hashed - either it's not there or we don't have rights to it
        $proc.name, $proc.path | out-file c:\temp\proc_hash_error.log -Append
        }
    }
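
If you want more than one hash per file (as the note above mentions), the tweak is minor - a sketch, where $sha1 and $sha256 are just illustrative names:

# hash the same executable with two algorithms - Get-FileHash also supports
# SHA256, SHA384, SHA512 and MD5 via the -Algorithm parameter
$sha1   = Get-FileHash $proc.path -Algorithm SHA1   -ErrorAction stop
$sha256 = Get-FileHash $proc.path -Algorithm SHA256 -ErrorAction stop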

We'll then run our script across the entire organization, and save both the process data and the errors in one set of files. Because we're hashing the files, it's likely better (and certainly much faster) to run this operation on the remote systems rather than opening all the files over the network to hash them.

Note that when we do this we’ll be logging the error information out to a remote share.

function RemoteTaskList {
# collect the process list, then loop through the list

$wsproclist = @()
foreach ($proc in get-process)
    {
    try
        {
        # hash the executable file on disk
        $hash = Get-FileHash $proc.path -Algorithm SHA1 -ErrorAction stop
        $proc | add-member -membertype noteproperty -name FileHash -value $hash.hash
        $proc | add-member -membertype noteproperty -name HashAlgo -value $hash.Algorithm
        $wsproclist += $proc
        }
    catch
        {
         # error handling.  If the file can't be hashed - either it's not there or we don't have rights to it
         # note that you will need to edit the host and share for your environment
        $env:ComputerName,$proc.name,$proc.path | out-file \\loghost\logshare\hash_error.log -Append
        }
    }
    $wsproclist
}

# enumerate the computer accounts in AD (requires the ActiveDirectory module)
$targets = get-adcomputer -filter * -Property DNSHostName
$DomainTaskList = @()
$i = 1
$count = $targets.count

foreach ($targethost in $targets) {
   write-host "$i of $count - $($targethost.DNSHostName)"
   if (Test-Connection -ComputerName $targethost.DNSHostName -count 2 -Quiet) {
       $DomainTaskList += invoke-command -ComputerName $targethost.DNSHostName ${function:RemoteTaskList}
       }
   ++$i
   }

$DomainTaskList | select-object PSComputerName, Id, ProcessName, Path, FileHash, FileVersion, Product, ProductVersion, HashAlgo | export-csv domain-wide-tasks.csv

With that CSV file exported, you can now look at the domain-wide list in Excel or any tool of your choice that will read a CSV file.
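
To "sift" for the outliers we discussed at the top, one quick approach is to group the rows by hash and keep only the rare ones - a sketch, where the 5-host cutoff is an arbitrary example:

# import the domain-wide list, group by file hash, and keep hashes seen on
# only a few distinct hosts (the cutoff of 5 is an arbitrary example)
$all = Import-Csv domain-wide-tasks.csv
$all | Group-Object FileHash |
    Where-Object { ($_.Group.PSComputerName | Sort-Object -Unique).Count -le 5 } |
    ForEach-Object { $_.Group } |
    select-object PSComputerName, ProcessName, Path, FileHash |
    Sort-Object FileHash | Format-Table -AutoSize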

===============
Rob VandenBrink
Coherent Security


Comments

This looks interesting Rob. Would you be able to post a version of this script that enumerates only the host it's run on? Possibly write to the network share, but append the IP/system name to the file (i.e. hash_error.log.192.168.1.10) as well as the CSV. This would allow us to automate running the script on specific systems with some of our existing tools.
Sure - more or less you're looking for the code from the first script, the error handling from the second, and some changes to the file naming.
Something like:

# collect the process list, then loop through the list

$hostname = $env:ComputerName
# update the filenames to suit your log host and share
$errfname = "\\loghost\sharename\err-" + $hostname + ".csv"
$outputfname = "\\loghost\sharename\proclist-" + $hostname + ".csv"

$wsproclist = @()
foreach ($proc in get-process)
    {
    try
        {
        # hash the executable file on disk
        $hash = Get-FileHash $proc.path -Algorithm SHA1 -ErrorAction stop
        $proc | add-member -membertype noteproperty -name FileHash -value $hash.hash
        $proc | add-member -membertype noteproperty -name HashAlgo -value $hash.Algorithm
        $wsproclist += $proc
        }
    catch
        {
        # error handling. If the file can't be hashed - either it's not there or we don't have rights to it
        $hostname, $proc.name, $proc.path | out-file $errfname -Append
        }
    }

$wsproclist | select-object Id, ProcessName, Path, FileHash, FileVersion, Product, ProductVersion, HashAlgo | export-csv $outputfname
You could also create an event trigger on event 4688. Just be careful to filter out whatever program you're triggering from the event, or you'll get an event storm.
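For example, something like this - a sketch, where the task name and script path are placeholders, and process-creation auditing (which generates event 4688) has to be enabled in your audit policy:

# register a task that runs a collector script whenever Security event 4688
# ("a new process has been created") is logged
schtasks /Create /TN "HashNewProcess" /SC ONEVENT /EC Security `
    /MO "*[System[(EventID=4688)]]" `
    /TR "powershell.exe -NoProfile -File c:\scripts\hash-new-process.ps1"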
"The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and accepting requests." Also, I'm curious how long this process would take for 10,000+ endpoints?
If the line throwing the error is the invoke-command, then you don't have WinRM enabled, which is required for PowerShell remoting. I'll have a story coming up for that - I've had a few questions on this. It's important that you only enable WinRM for trusted "admin" stations.
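For example - the admin subnet below is just a placeholder, scope it to whatever your management stations use:

# enable PowerShell remoting - starts WinRM and creates the listener and firewall rule
Enable-PSRemoting -Force
# then scope the built-in WinRM inbound rule so only the admin subnet can connect
Set-NetFirewallRule -Name "WINRM-HTTP-In-TCP" -RemoteAddress 192.168.10.0/24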
It's actually pretty quick on active stations. The delay for a station that's offline is a solid 2 seconds, so you'll want to minimize that by running this at "prime time", and also by "pruning" retired or otherwise inactive workstations from AD before you run AD-wide scripts like this.
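
A quick way to find candidates for that pruning - a sketch, assuming the ActiveDirectory module and an example 90-day window:

# list computer accounts that haven't logged on in 90 days - pruning candidates
Search-ADAccount -AccountInactive -ComputersOnly -TimeSpan 90.00:00:00 |
    Select-Object Name, LastLogonDate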

There's something to be said for running the called functions in the background - the trick there is to limit how many background threads are in play, to reduce the impact on the CPU and particularly the memory of the machine running the script.

I've had more experience on running concurrent threads like this in python than in PowerShell, I guess it's time I researched the PowerShell possibilities for this.

There might be another story in my future on concurrency ....
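
In the meantime, note that Invoke-Command has some of this built in - a sketch, where the throttle value of 32 is just an example:

# fan the remote function out to all targets at once; -ThrottleLimit caps how
# many run concurrently, and -AsJob returns a job object immediately
$job = Invoke-Command -ComputerName $targets.DNSHostName `
    -ScriptBlock ${function:RemoteTaskList} -ThrottleLimit 32 -AsJob
$DomainTaskList = Receive-Job $job -Wait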
