Archive for December, 2007

This column falls under the heading “Ye Gods, What Next?” What we’re talking about here are viruses, trojans, and key-loggers that are served and installed when you simply visit a site hosting them. And these infections mostly bypass current safety measures.

In mid-2007 Google issued a paper entitled “The Ghost in the Browser: Analysis of Web-based Malware” and it’s a very scary read. This paper is the result of a 12-month review of the Google site crawler results, with follow-up verification through the use of an instrumented sacrificial machine that connected to suspected sites and monitored the infections it picked up. This study found a minimum of 300,000 URLs that were bearing infected downloads.

What makes this particular danger so scary, besides its stunning prevalence, can be summarized as follows:

  • They can infect your machine when you simply visit the site — you don’t have to consciously download anything extra; the site does it for you;
  • These downloads are “requested” by the page you asked to see, hence they bypass firewalls and NAT filters;
  • Even good sites can become infection vectors, as I discuss below, so it’s only somewhat safe to limit yourself to “reputable sites.” By no means are these hosted only on porn or celebrity-dirt or similar sites; they can be on your local newspaper’s site;
  • Just about the only ways to protect yourself are periodic virus scans of your disks (with, hopefully, your anti-virus package doing its job as infections try to land on you), or turning off scripting in your browser, which will render a large proportion of all websites unusable unless you selectively allow them.

So, where do these infections come from? Google identified four main vectors. First is web servers that have been compromised through unpatched vulnerabilities in their scripting application engines. Google attributes this to the complexity of supporting a modern web server, especially one that serves scripting applications and widgets, combined with the limited time and resources of harried webmasters and server managers. Critical patches don’t get applied right away (or at all), and the bad guys will find those servers sooner or later.

Second, lots of sites these days serve user-contributed content, even including HTML files. Examples of this situation include bulletin boards, forums, and sites that allow comments on articles, video, or other content. If this contributed content isn’t sanitized properly, malware-loading content can easily sit there waiting for you to come along.
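What “sanitized properly” can mean is easy to show concretely. When contributed text needs no markup at all, the bluntest and safest approach is to escape everything before it is stored or served. A minimal sketch in Python (the function name and the hostile input are invented for illustration):

```python
import html

def sanitize_comment(raw: str) -> str:
    """Neutralize any markup in user-contributed text by escaping it.

    Escaping is the bluntest form of sanitization: no tags survive,
    so no <script> tag or event-handler attribute can ever execute
    when the comment is later served back to visitors.
    """
    return html.escape(raw, quote=True)

# A contributed "comment" that tries to load malware from elsewhere:
hostile = '<script src="http://evil.example/loader.js"></script>'
print(sanitize_comment(hostile))
# → &lt;script src=&quot;http://evil.example/loader.js&quot;&gt;&lt;/script&gt;
```

Sites that must allow some markup (bold text, links) need a stricter allowlist parser instead, but escape-everything is the right default when in doubt.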

Third is a similar situation with third-party widgets, such as “free” visit counters, time-and-temperature displays, cute little faces that laugh, and so on. Often these widgets are inserted into an innocent page but are served from an external server over which the webmaster of the first site has no control. They may start out innocent, but then (and Google saw this happen) after some time, even a few years, the widget suddenly starts downloading malware along with its own code. Voila, infection!

And the fourth source is compromised ads. The problem here is syndication: an agency signs up clients whose ads are placed on a site, the clients go to their creative agency, who farms it out to . . . and eventually these ad slots get into the hands of shady characters who place a little present with the ad — a piece of malware. Presto, infected by Tampa Bay Online, or the Minneapolis Star-Tribune, or whoever!


There are no really great solutions. As an end user, you can turn off scripting in your browser. Steve Gibson of Gibson Research, the main panelist on the Security Now podcast, has advocated this for a long time: turn off scripting in the browser, then specifically allow it for sites you trust. This is pretty extreme, as JavaScript, ActiveX, and other client-side scripting is used on perhaps 70% of all websites, so doing this will pretty much kill that proportion of the Web for you unless you then allow each site in turn to use scripting. And even this won’t entirely protect you, as third-party widgets and long chains of syndicated ads can still get you on a supposedly “safe” site. So overall, this is a lot of hassle and not completely foolproof.

The other big thing a user can do is to have good, up-to-date virus scanning of incoming objects, which may catch them as they arrive. This is by no means a sure thing, either, as some malware “unpacks” itself after it arrives, which can fool many anti-virus scanners. So it remains important to fully scan your disks at least weekly to find the ones that do get past the initial scan.

The rest of the solutions are the responsibility of server and site managers, people who are already overworked and often missing the boat on current threats. Of course, scripting should be turned off in the browser on any server! A server has no business running off to miscellaneous websites except for software package updates and the like, so unlike an individual’s browsing, this shouldn’t be much of a problem on a server.

But more importantly, site managers MUST start taking absolute responsibility for what is getting served on their sites. If they allow contributed HTML content, they must sanitize it properly and fully before it’s posted. They should require that other third-party objects such as widgets be served from their own servers, and sanitized! And the same goes for syndicated ads — one level of syndication is plenty, and ad-placement contracts must contain significant penalties for allowing malware-laced ads to be served. It would also be nice if sites could be certified as meeting these criteria, and if a user could set a preference in their browser not to load sites that lack this certification. But I’m sure the bad guys would find a way to bypass this, too.
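The serve-it-from-your-own-server rule for widgets could even be checked mechanically: scan each page for script tags whose src points at a host other than your own. A rough sketch in Python (the hostnames and the sample page are invented for illustration):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ExternalScriptFinder(HTMLParser):
    """Collect the src URLs of <script> tags served from a foreign host."""

    def __init__(self, our_host: str):
        super().__init__()
        self.our_host = our_host
        self.external = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if src:
            host = urlparse(src).netloc
            if host and host != self.our_host:  # relative paths have no netloc
                self.external.append(src)

# A page that mixes a locally served script with a third-party widget:
page = ('<html><body>'
        '<script src="/js/site.js"></script>'
        '<script src="http://widgets.example.com/counter.js"></script>'
        '</body></html>')

finder = ExternalScriptFinder("www.mysite.example")
finder.feed(page)
print(finder.external)
# → ['http://widgets.example.com/counter.js']
```

Anything such an audit flags is code your visitors will run but you don’t control; hosting a vetted copy on your own server closes that hole.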

If we don’t take enough of these steps, we’re in danger of making the Internet so dangerous that no one will want to be there, however interesting or useful it is. And believe me, that IS a danger.


A few dismal statistics on the prevalence of dangerous email, from a post on the security blog SearchSecurity, quoting a study by MessageLabs, a security software firm:

  • Based on their samples, MessageLabs believes that 90% of all emails globally are spam;
  • 1 in 200 emails contain a phishing attack;
  • 68% of all malicious emails they intercepted were phishing attacks.

As a personal note, I think they’re understating the proportion of spam — my own sample here at the office, from a study I did in mid-October, showed that only 3% of our incoming email traffic was valid email.

So as usual, the bad guys are keeping one step ahead by devising new forms of attack when the old ones become less effective; nothing new there, online or offline. But phishing attacks are particularly worrisome because they aren’t as readily filtered out as other forms of spam that are just selling some kind of hokum. Phishing attacks work by getting victims to disgorge their account numbers and passwords, which are then used to vacuum out bank accounts, open illegitimate credit cards, and all the rest of it.

A good proportion of the remaining spam is devoted to getting victims to let the spammer download hostile software onto their machines, including keystroke loggers and the control software that will turn a machine into a spam-relaying robot.

And all it takes is one mistake, one visit to an apparently innocent but actually hostile website, and you’re nailed. If the infection is a rootkit, it’s likely that you won’t be able to either find it or fix it without a complete software rebuild on the machine.

Here is an excellent non-technical overview of one of the world’s largest phishing organizations, from the Seattle Times. It’s more than scary.

Now I’m not an alarmist, but at what point does the Internet become just too dangerous to be worth the trouble? About 15 years ago I used to visit some of the CompuServe newsgroups pretty regularly for technical subjects, but gradually they became so filled with spam that perhaps one message in 50 was valid; the rest were machine-generated junk. At that point, I just quit going there. At what point will the broader public come to the same conclusion and become afraid to use the Internet?
