Note: This entry has been restored from old archives.
The last two days have been somewhat joyous in a less than traditional sense. Two whole days with all computers shut off! OK, so not that much different from our recent holiday without work/computers, but more relaxing.
The reason for the title is twofold. Firstly, the weather here really is rather shit. It’s England! What should I expect? Christmas day was chilly and wet; at least on Boxing Day there was a little sunlight. Dreaming of a white Christmas around London? Not much chance these days, it would seem. This is my third Christmas in the UK. The first was white thanks only to a heavy frost; there was a little snow around the period, but it wasn’t so cold that I didn’t spend the day on my bike in Wendover Woods. Last year it was at least cold. This year it isn’t even chilly: there isn’t a single zero or sub-zero day predicted in the entire last week of the month! Today has a predicted minimum of 9°C, so I can quite comfortably wear just a thin t-shirt under an unbuttoned jacket. Oh well.
The other bad storm is one of 2007’s old favourites, the Zhelatin/Storm/Nuwar “worm”. After something of a lull in emails from this network, I suddenly got one on the 23rd, as I mentioned on Monday.
This turned out to be the first of many, as the network pumped out a full-scale assault capitalising on the jovial season, both Christmas and New Year. It takes advantage in two ways, I think: 1) people are probably sending a lot of stupid email right now, so recipients may be more likely to follow the evil links; 2) a lot of people, including those in the security industry and the IT shops responsible for maintaining corporate security, are on holiday, so the “good guys” may have a slower response time.
The latter point is worth some thought. I’m sure it has been discussed before: computers don’t have holidays, crims take advantage of holidays, and most normal people let their guard down on holidays. Good news for botnet herders. As I mentioned earlier in the week, the malware payload wasn’t detected by any of the large-market-share AV engines; the biggest player to detect some of the samples I tried was Kaspersky (finding accurate market-share figures is difficult, but suggestions on the net put KAV somewhere between 1 and 5 percent). As has now been clearly established, I’d think, the malware writers test against the biggest AV engines. We can get a good picture of which engines they’re testing with by rounding up as many of these jolly-Storms as possible and scanning them to see which engines, when loaded with a pre-mailout signature database, detect close to 0% of the samples. The list you’ll find isn’t all that surprising.
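To make that concrete, here’s a rough sketch in Python of the scan-matrix idea. Everything specific in it is invented: the engine names, command-line flags, exit-code convention, and paths. The only assumption is that each engine offers a command-line scanner that can be run against a signature database frozen from before the mail-out began.

```python
import subprocess
from pathlib import Path

# Hypothetical engines and invocations; these are placeholders, not the
# real command lines for any particular product. Each database path is
# assumed to hold signatures from *before* the mail-out started.
ENGINES = {
    "engine-a": ["engine-a-scan", "--database", "/sigs/pre-mailout/a"],
    "engine-b": ["engine-b-cli", "--defs", "/sigs/pre-mailout/b"],
}

samples = list(Path("/samples/jolly-storm").iterdir())

for name, cmd in ENGINES.items():
    detected = 0
    for sample in samples:
        # Convention assumed here: non-zero exit status means "malware found".
        result = subprocess.run(cmd + [str(sample)], capture_output=True)
        if result.returncode != 0:
            detected += 1
    rate = 100.0 * detected / len(samples)
    print(f"{name}: {detected}/{len(samples)} detected ({rate:.0f}%)")
    # Engines sitting close to 0% here are the likely members of the
    # bad guys' pre-release test bench.
```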
It’d be really nice to have a good statistic on the size of the botnet on December 20th versus its size on January 7th. But botnet size estimates are generally a product of bad guesstimation; we can’t expect anyone except those in control to know the numbers.
I’m becoming more pessimistic about the situation as time goes on. The concept of a “virus filter” product seems to have been proven fatally flawed. Whether detection takes place via signatures or “heuristics” (in my opinion little more than complicated signatures), the approach is reactive, either to specific malware or to specific exploits. The latter gets a lot of press as “generic” detection, usually classified as “heuristic”, but in the end it’s just reactive detection taken from a different angle. AV engines do have their place, but they’re not a solution; certainly not anymore. A small thought, and privacy advocates would hate this one: maybe the AV vendors need to make their software 100% report-to-base, to take some of the testing ability away from the criminals. Could this even be workable? What information could you report to base that’d help? How long would it be before the bad guys subverted the process or simply circumvented it… probably not long. sigh
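For what it’s worth, here’s a hypothetical sketch of what the client half of report-to-base might send. The endpoint, payload fields, and version string are all made up for illustration; the idea is just that every scan event, including one on a criminal’s test bench, becomes visible at base.

```python
import hashlib
import json
import platform
import urllib.request

def report_scan(path: str, verdict: str) -> None:
    # Identify the sample by hash rather than shipping the file itself.
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    payload = {
        "sha256": digest,
        "verdict": verdict,         # "clean", "suspicious", "malware", ...
        "engine_version": "1.2.3",  # tells the vendor which defs were in play
        "host_os": platform.platform(),
    }
    req = urllib.request.Request(
        "https://reports.example-av.invalid/v1/scan-event",  # made-up endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget; real code would queue/retry
```

Of course, this already hints at the circumvention: a test rig kept entirely off the network never phones home at all.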
I guess this is why the security industry is diversifying into more elements of command and control; maybe there is some light at the end of the tunnel? Of course, it’s likely that anything of this sort is best done at or below the OS level, thus by the OS vendor, but when Microsoft tried to do this for Vista there was an all-out cry of foul from the AV industry! Protecting themselves, or protecting users from the likelihood that Microsoft would get it wrong? A bit of both, I expect.
In this direction, a lot of noise was made about one thing in the last year that to me smells like a load of bollocks: virtualisation. It’s a very neat geek-toy that has both spawned its own industry around maintaining systems and been co-opted by the security industry in a way that stinks of “silver bullet”. The former works for me, but I think we want to keep in mind that virtualisation used this way is just an evolutionary step: virtualisation for robustness and the like is a neat replacement for things like telnettable power supplies and Dell DRAC (remote administration) hardware. Security tends to be fitted in from a perspective of keeping an eye on things from the outside. We like this image because it works fairly well with physical-world security systems. My guess is that it isn’t going to work out quite as neatly or easily as hoped when it comes to anti-malware. I think the best anti-virtualisation FUD I’ve seen came from Theo, of OpenBSD fame.
[Update: In case it isn’t as blindingly obvious as I thought, I agree with Theo de Raadt’s FUD (though I don’t understand why anybody thinks my agreement or labelling matters). sigh “FUD” is just a TLA; please attach less emotion to it, Internet randoms. I’m probably wasting my time, since the complaint I received was clearly derived purely from the sight of the TLA, with the context ignored. Anyway, FUD = “Fear, Uncertainty, and Doubt”, and in my mind it is a mere function of marketing. Negative marketing based on perceived flaws in the security sphere is a case of FUD (since fear, uncertainty, and doubt are what it causes), sometimes for good (being informative), sometimes for bad (being misleading). Pro-virtualisation-for-security people will label de Raadt’s opinion as FUD in the traditional sense, but I bag up what they see as smelly manure and feed it to my roses. I apologise for going against the grain of the TLA and upsetting a poor sensitive soul or two. To repeat: I, in my non-expert opinion, am more convinced by Theo’s FUD than by the FUD from the other side of the argument. If it makes you feel better, execute a mental s/FUD/marketing/g or just go away.]
Still, we have to grasp at what straws present themselves. (Remembering to let go of the ones that have burnt all the way down to our fingers.) I try to remind myself that entirely giving up hope is not the correct response. Especially while people are profiting from criminal acts that take advantage of the industry’s current failure to adequately deal with the problem.
At this moment, given a corporate network to run and short of “running with scissors”, I’d be focusing attention on environment control, mostly meaning various approaches to controlled execution. I don’t think it’s an easy path, but does anyone expect a solution to really be “easy”? Hah! There’s a strong chance it’d just turn into another reactive scene: say we allow IE to run, fine, then malware runs its own code as part of IE. (Through one of virtually limitless vectors, from buffer overflows inserting actual machine code to simple exploitation of design flaws in JS/VBS/Flash/plugin-X/technology-Y.) What about the much-maligned (at least in OSS/FSF circles) TPM approach? (Maybe just simplified virtualisation that’ll come with a heap of its own new flaws.)
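To illustrate the simplest flavour of controlled execution, here’s a minimal hash-allowlist sketch in Python. The allowlist path and its one-hash-per-line format are my own inventions, and a real enforcement point would live in the OS itself, not in a userspace wrapper like this.

```python
import hashlib
import subprocess
import sys

ALLOWLIST_FILE = "/etc/approved-hashes.txt"  # assumed format: one SHA-256 per line

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def run_if_approved(path: str, *args: str) -> None:
    approved = set(line.strip() for line in open(ALLOWLIST_FILE))
    if sha256_of(path) not in approved:
        sys.exit(f"blocked: {path} is not on the allowlist")
    subprocess.run([path, *args])

# Note the limit the IE example highlights: if the approved binary (say,
# a browser) can be made to execute attacker-supplied code in its own
# process, the allowlist check has already been passed.
```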
Network segregation should offer some relief and damage control. Do users really always need to access email/web from the same machine they access the IRS/HMRC/etc database from? At least if there is an infection (inevitable?) it can only go so far. This is heading into DLP territory though, which is a different problem, and there the bugs that need fixing are mostly in process and people.
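As a toy illustration, a segregation policy can be written down as data and compiled into firewall rules. The role names and subnets below are invented; the sketch just gives the email/web segment general internet access while the finance segment can reach only its database network, with a default deny between everything else.

```python
# Illustrative only: role-based segregation policy expressed as data,
# from which per-segment firewall rules are generated. All subnets and
# role names are made up.
POLICY = {
    # role         (source subnet,  allowed destinations)
    "email-web":   ("10.1.0.0/24", ["0.0.0.0/0"]),     # general internet
    "finance-db":  ("10.2.0.0/24", ["10.9.9.0/24"]),   # database segment only
}

for role, (src, dests) in POLICY.items():
    for dest in dests:
        # One allow rule per permitted destination; everything else falls
        # through to the default DROP at the end of the chain.
        print(f"iptables -A FORWARD -s {src} -d {dest} -j ACCEPT  # {role}")
print("iptables -A FORWARD -j DROP  # default deny between segments")
```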
Have we given up on user education yet? It’s bloody difficult, but I hope not. We can’t really expect people to always do the right thing, just as we can’t expect programmers who know they should validate all user data to always remember to do so (humans tend to be lazy by preference!). That said, the situation is certainly worse if people don’t even know what the right and wrong things are!
It’s easy to become despondent. I’m certainly not all that happy with the industry that I, in a small way, am part of. Taken as a whole the last year or two has been pretty abysmal. Surely things can only improve from this point?