When Rogue On-Line Pharmacies Take Over Forum Discussions

Published: 2010-01-20. Last Updated: 2010-01-27 14:21:27 UTC
by Lenny Zeltser (Version: 2)

Rogue on-line pharmacy sites, which claim to sell legitimate medicine to naive shoppers, continue to be a problem. This quick note describes one approach used to insert advertisements into forum discussions so that they completely cover up the legitimate discussion page.

My first look at this approach began when ISC reader J. notified us of an apparent defacement of a particular discussion thread on social.technet.microsoft.com:

The advertisement is for medical.deal-info.info (please don't go there).

The offending HTML code seems to have been added to the discussion thread as a forum posting. Here's the relevant HTML source code excerpt that sets the stage for the advertisement:

<div class="container"><div class="body"><div style="border:medium none;background:white none repeat scroll 0% 50%;position:fixed;left:0pt;top:0pt;text-decoration:none;width:1700px;height:7600px;z-index:2147483647">

The <div class="body"> tag is part of the original website's code and is supposed to be followed by the user's forum posting, such as "I have a question about CAS servers..." Instead, we see HTML code creating a white DIV region that is anchored to the top left corner of the browser's window and is 1700x7600 pixels in size, covering the forum's legitimate content. The "z-index" parameter is set to 2147483647, the largest value many browsers accept, to make sure that the offending region sits on top of any other elements on the page.

As a result, the whole website looks defaced. In reality, the discussion page's content is still in place--it is merely covered up by the advertisement.
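
To see the effect in isolation, here is a minimal, self-contained page that mimics the technique (the text, dimensions and styling are illustrative placeholders, not the attacker's actual markup). Loading it in a browser shows the fixed white DIV hiding the legitimate paragraph underneath:

<html>
<body>
<p>Legitimate forum content: I have a question about CAS servers...</p>
<!-- The injected posting contributes an element along these lines: -->
<div style="position:fixed; left:0pt; top:0pt; width:1700px; height:7600px; background:white; z-index:2147483647;">
Rogue pharmacy advertisement goes here.
</div>
</body>
</html>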

It's unclear why the forum software did not filter out the HTML tags when they were submitted as a posting; this may be attributable to an input-scrubbing bug.

I came across several other pharma-advertising websites that employed a similar discussion-covering technique:

This advertisement is for canadian-drugshop.com and supercapsulesrx.com (please don't go there).

Here's the relevant HTML source code excerpt:

<div style="border: medium none; background: white none repeat scroll 0% 0%; -moz-background-clip: border; -moz-background-origin: padding; -moz-background-inline-policy: continuous; position: fixed; left: 0pt; top: 0pt; text-decoration: none; width: 1700px; height: 7600px; z-index: 2147483647">

And another example using similar code:

This advertisement is for top.pharma-search.biz and purchase.dnsdojo.com (please don't go there).

Update: Folks at StopTheHacker.com performed an interesting analysis of forums that display pharmacy advertisements. If you find this note useful, you will probably enjoy reviewing their findings as well.

Have you analyzed such incidents? Have insights to offer? Please let us know.

 -- Lenny

Lenny Zeltser - Security Consulting
Lenny teaches malware analysis at SANS Institute. You can find him on Twitter.


Using Curl to Retrieve Malicious Websites

Published: 2010-01-20. Last Updated: 2010-01-20 22:04:25 UTC
by Lenny Zeltser (Version: 2)

Here's how to use Curl to download content from potentially malicious websites, and why you may want to use this tool instead of the more common Wget.

Curl and Wget are excellent command-line tools for Windows and Unix. They can download remote files and save them locally without attempting to display or render them. As a result, these tools are handy for retrieving files from potentially malicious websites for local analysis--the small feature set of these utilities, compared to traditional Web browsers, minimizes the vulnerability surface.

Both Curl and Wget support HTTP, HTTPS and FTP protocols, and allow the user to define custom HTTP headers that malicious websites may examine before attempting to attack the visitor (more on that below). Curl also supports other protocols you might find useful, such as LDAP and SFTP; however, these protocols are rarely used by analysts when examining content and code of malicious websites.

Overall, the two tools are similar when it comes to retrieving remote website files. However, one limitation of Wget that is relevant for analyzing malicious websites is its inability to display the contents of remote error pages. These error pages might be fake and contain attack code. Curl will retrieve their full contents for your review; Wget will simply display the HTTP error code.

Consider this example that uses Wget:

$ wget http://www.example.com/page

Resolving www.example.com...
Connecting to www.example.com:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2010-01-19 05:37:11 ERROR 404: Not Found. 

Many analysts assume that the malicious web page is gone when they see this. However, consider the same connection made with Curl:

$ curl http://www.example.com/page

<HTML>
<HEAD><TITLE>404 Not Found</TITLE></HEAD>
<BODY>
<H2>404 Not Found</H2>
<SCRIPT>
document.write("Hi there, bear!");
</SCRIPT>

<P>The requested URL was not found on this server.</P>
</BODY>
</HTML>

Now you can see that the error page is an HTML document with JavaScript embedded in it. In this example, the script simply prints a friendly greeting; however, it could have been malicious. The victim's browser would render the page and execute the script, which could implement an attack.

Another useful feature of Curl is its ability to save headers that the remote web server supplied when responding to the HTTP request. This is useful because JavaScript obfuscation techniques make use of information about the page and its context, such as its last-modified time. Saving the headers allows the analyst to use this information when/if it becomes necessary. Use the "-D" parameter to specify the filename where the headers should be saved:

$ curl http://www.example.com/page -D headers.txt

<HTML>
<HEAD><TITLE>404 Not Found</TITLE></HEAD>
...

$ cat headers.txt

HTTP/1.1 404 Not Found
Server: Apache/2.0.55
Content-Type: text/html; charset=iso-8859-1
Date: Wed, 19 Jan 2010 05:51:44 GMT
Last-Modified: Wed, 19 Jan 2010 03:51:44 GMT
Accept-Ranges: bytes
Connection: close
Cache-Control: no-cache,no-store

If you wish Curl to also save the retrieved page to a file, instead of sending it to STDOUT, use the "-o" parameter, or simply redirect STDOUT to a file using ">". This is particularly useful when retrieving binary files, or when the web server responds with an ASCII file that it automatically compressed. If you're not sure about the type of the file you obtained, check it using the Unix "file" command or the TrID utility (available for Windows and Unix).
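
For example (a hypothetical session; the URL, file name and "file" output are placeholders):

$ curl http://www.example.com/page -o page.bin

$ file page.bin
page.bin: HTML document text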

Update: Didier Stevens mentioned that using the "-d -o" parameters with Wget allows him to capture full HTTP request and response details in the specified log file. However, this does not seem to address the issue of Wget not displaying the contents of HTTP error pages.
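
In other words, something along these lines (a sketch; "-d" enables Wget's debug output and "-o" names the log file that will hold it, and the URL is a placeholder):

$ wget -d -o wget-log.txt http://www.example.com/page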

Whether using Curl or Wget to retrieve files from potentially-malicious websites, consider what headers you are supplying to the remote site as part of your HTTP request. Many malicious sites look at the headers to determine how or whether to attack the victim, so if they notice Curl's or Wget's identifier in the User-Agent header, you won't get far. Malicious sites also frequently examine the Referer header to target users that came from specific sites, such as Google. Even if you define these headers, the lack of other less-important headers typically set by traditional Web browsers could give you away as an analyst.
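
For a one-off request you can override these headers directly on the command line; here is a sketch using Curl's "-A" (user agent), "-e" (referer) and "-H" (extra header) options with illustrative values:

$ curl http://www.example.com/page \
  -A "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)" \
  -e "http://www.google.com/search?hl=en&q=web" \
  -H "Accept-Language: en-us"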

I recommend creating a .curlrc or a .wgetrc file that defines the headers you wish these tools to supply. You can define these options on the command-line when calling Curl and Wget, but I find it more convenient to use the configuration files. Consider using your own web server, "nc -l -p 80", and/or a network sniffer to observe what headers a typical browser such as Internet Explorer sends, and define them in your .curlrc or .wgetrc file. Here's one example of a .curlrc file:

header = "Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-ms-application, application/x-ms-xbap, application/vnd.ms-xpsdocument, application/xaml+xml, */*"
header = "Accept-Language: en-us"
header = "Accept-Encoding: gzip, deflate"
header = "Connection: Keep-Alive"

user-agent = "Mozilla/4.0 (Mozilla/4.0; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 3.0.04506.30)"
referer = "http://www.google.com/search?hl=en&q=web&aq=f&oq=&aqi=g1"

The syntax for .wgetrc is very similar, except that you should not use quotation marks when defining each field; a rough equivalent is sketched below.
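
For illustration, a roughly equivalent .wgetrc might look like this (a sketch; directive names follow the Wgetrc syntax, so consult the Wget manual for the exact commands your version supports):

header = Accept-Language: en-us
header = Accept-Encoding: gzip, deflate
user_agent = Mozilla/4.0 (Mozilla/4.0; MSIE 7.0; Windows NT 5.1; SV1; .NET CLR 3.0.04506.30)
referer = http://www.google.com/search?hl=en&q=web&aq=f&oq=&aqi=g1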

You may need to tweak "user-agent" and "referer" fields for a specific situation. For more examples of User-Agent strings, see UserAgentString.com.

The "Accept-Encoding" specifies that your browser is willing to accept compressed files from the web server. This will slow you down a bit, because you'll need to decompress the responses (e.g., "gunzip"); however, it will make your request seem more legitimate to the malicious website.

There you have it--a few tips for using Curl (and Wget) for retrieving files from potentially malicious websites. What do you think?

 -- Lenny

Lenny Zeltser - Security Consulting
Lenny teaches malware analysis at SANS Institute. You can find him on Twitter.


Microsoft Announces Out-of-Band Security Bulletin for the IE Vulnerability

Published: 2010-01-20. Last Updated: 2010-01-20 22:03:06 UTC
by Lenny Zeltser (Version: 2)

Microsoft posted "an advance notification of one out-of-band security bulletin that Microsoft is intending to release on January 21, 2010. The bulletin will be for Internet Explorer to address limited attacks against customers of Internet Explorer 6, as well as fixes for vulnerabilities rated Critical that are not currently under active attack."

For details, see:

http://www.microsoft.com/technet/security/bulletin/ms10-jan.mspx

Update:

Microsoft also posted a comprehensive overview of the exploits that target this vulnerability. See:

http://blogs.technet.com/srd/archive/2010/01/20/reports-of-dep-being-bypassed.aspx

 -- Lenny

Lenny Zeltser - Security Consulting
Lenny teaches malware analysis at SANS Institute. You can find him on Twitter.


Security Patch for BIND 9.6.1 Released

Published: 2010-01-20. Last Updated: 2010-01-20 03:24:17 UTC
by Lenny Zeltser (Version: 1)

Internet Systems Consortium (ISC) announced the release of the BIND 9.6.1-P3 security patch to address two cache poisoning vulnerabilities, "both of which could allow a validating recursive nameserver to cache data which had not been authenticated or was invalid."

CVE-2010-0097: Low severity
CVE-2009-4022: Medium severity

You can download BIND 9.6.1-P3 from:

ftp://ftp.isc.org/isc/bind9/9.6.1-P3/bind-9.6.1-P3.tar.gz
ftp://ftp.isc.org/isc/bind9/9.6.1-P3/BIND9.6.1-P3.zip (binary kit for Windows XP/2003/2008)

 -- Lenny

Lenny Zeltser - Security Consulting
Lenny teaches malware analysis at SANS Institute. You can find him on Twitter.

