Us Techies

Note: This entry has been restored from old archives.

Us techies (in my generation: those who grew up in the 80s/90s knowing what IRC was, how to push the power button on the “computer”, and how to wrap weird batch scripts around multi-disc ARJ decompression to install pirated games for our friends) were a fairly unique bunch back in the day. But for those entering Uni now (10 years later) the tech was there from the day they were born, for all of them.

Amongst our own generation we’re still respected (though never necessarily liked) for our ability to tweak the Excel spreadsheet or fix the Internet.

Amongst the youngsters will we be no better than the dude who fixes the shower? Just another annoying job that somebody has to do.

Nothing against plumbers, here in the UK the plumbers make the $$$ and can actually afford things like houses. I still think I’d rather be a landscape gardener.

My thoughts while reading Meet Your Future Employee.

I do worry when they start talking about “HTML programming” though, and “advances like blogging and social networking”. I suspect the reporter may be showing her own generational gap… nobody can hope to keep up with the pace of change.

I can’t help but observe that statements like “[lack of] face-to-face communications skills, a critical asset for a modern IT career” are coming from the old IT career professionals. Predicting their own obsolescence?

[Actually, I think my IT-generation sits somewhere in between that which is the subject of the article and that which is the observer, um, or am I generation Y? I’m certainly not X. The article mainly leaves me just all colours of confused. Gah, bloody compartmentalisation.]

Collateral Damage: An Unintentional Storm Worm DOS

Note: This entry has been restored from old archives.

Anyone else get the feeling that the Storm Worm proves that the entire ‘net security industry is useless? We already know that most security is ineffective against targeted attacks, and now Storm makes it clear that the state of security in general is ineffective against widespread attacks. Sure, your AV product will almost certainly protect you from Storm, but it won’t protect you from Storm breaking the ‘net in general. The problem is that the fact that you have an AV product installed and up to date places you in the minority.

OK, implying that we’re all stuffed is rather over the top … but sometimes I really feel rather perturbed by the whole situation.

Anyway, the latest fun fact I’ve noticed regarding the Storm worm is that some security-sensitive sites have started using blacklists to block HTTP clients. At this moment there are several security sites that give me messages like “ACCESS DENIED” or “File Not Found or your IP is blocked. Sorry.” but they work perfectly well if I bounce through a remote proxy. Why? Well, according to some lists, such as cbl.abuseat.org, I have a Storm Worm infection. It happens that my ADSL picked up a new dynamic IP this morning that someone with an infection must have had last week. I understand why the websites are doing this, though I’m skeptical of its effectiveness as a countermeasure. Being the victim of a DDoS is pretty much the worst-case scenario for a popular site, so anything that might reduce your vulnerability is going to look good.
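The lookup these lists do is easy enough to replicate yourself; a sketch (the IP here is a documentation address, not my real one):

```shell
# DNSBLs are queried by reversing the IP's octets under the list's
# zone; an A record answer in 127.0.0.0/8 means "listed".
ip="203.0.113.7"
rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
echo "querying: $rev.cbl.abuseat.org"
# Do the actual lookup if a resolver tool is around:
if command -v host >/dev/null 2>&1; then
    host "$rev.cbl.abuseat.org" || echo "not listed (or lookup failed)"
fi
```

This is exactly why a dynamic IP can carry someone else's sins: the listing follows the address, not the machine.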

What is the solution? Certainly not this sort of blacklisting. We probably need to see a shift in the responsibility. The dumb end users can’t be held responsible; would it be a car owner’s fault if his car was stolen and the thief subsequently ran down a child with it? What if the car owner left the engine running while popping into the newsagent to pick up a paper? The child’s death is still not the car owner’s fault, I’d say, even if said owner is somewhat foolish. But we don’t know how to hold the thief responsible in the botnet case. The analogy works to describe my case for absolving the user, but breaks down when you look at it for assigning blame to the driver. Are the cars computers, IP addresses, or packets? Who’s the driver? What we do know is that 100% of car thieves are homicidal maniacs! Iieee!

Now, given that there are cars speeding around everywhere being driven by child-killers, roadblocks have been set up all over the place to keep the killer-cars out. Each roadblock has a long list of number-plates to check against approaching cars; the problem is that the list is very large and always out of date. Some killers will get through (but you may be saved from the DDoS), though you’ll possibly just end up with a huge line of cars at your roadblock (DDoSing your roadblock!). Also keep in mind that the killers who aren’t on the list know that they aren’t, and are capable of organising themselves to show up at a given location instantly.

How do we reliably tell a bad packet from a good one? Who should be responsible? (Infrastructure providers need to foot some of this, I think.) What’s the solution? Buggered if I know 🙂 and if I did I wouldn’t be telling, would I? Let’s hope that some of the large number of smart cookies out there thinking about this come up with something that doesn’t suck! However, I fear that all solutions involve a giant and expensive leap: a new Internet. (Or, at least, a major overhaul of the one we have.) Is that even possible?

Fungal Positioning System

Note: This entry has been restored from old archives.

Walking, rambling, trekking… call it what you will, we do like a good long trundle. Alas, we don’t always have the time and energy left on the weekend for gallivanting. We didn’t make the best use of the, rather wet, summer here in the UK, but now that the occasional crisp sunny days of the colder months have arrived we’re getting out more.

A good while back, inspired by Antonio Carluccio’s Neal Street Restaurant and the subsequent addition to our library of his Complete Mushroom Book, we became interested in the pursuit of fungi. This, combined with our fondness for wandering, has since inspired the collection of a few more books[1] and a serf-ish habit of walking with eyes downcast.

Amethysts In Hand

Now Autumn is well upon us and legions of fungi abound! However, we’re not yet so confident as to go merrily munching away at the bounty of the woods. That said, last weekend (Oct 7th) we saw some interesting specimens in woods south of Rickmansworth and we did net ourselves a good collection of Laccaria amethystea (the common name is Amethyst Deceiver; one of my photos is to the right but this is a far better photo). These made a pleasant addition to the evening’s pasta. Yes, we picked bright purple toadstools and then ate them!

Over the weekend just passed we became more serious in our fungal pursuit. But now I shall significantly digress to the other subject of this post: GPS. Last week I was doing a little web-shopping, thinking to get a funky LED torch[2] and/or a couple of foldable knives (for fungus gathering). In the end I came away with neither item, having been lured off target by the glingy goodness of a fancy electronic gadget.

There are many GPS units around these days, with Garmin and Magellan seeming to have the best ranges for the off-road trekker. In the end I picked a Garmin eTrex Vista HCx, the top of the line for the eTrex range, complete with the iffy features of an electronic compass and barometric altimeter (but hey, when buying a new toy you may as well get all the geekbling! gling?). The cost/benefit analysis of the purchase decision basically came down to 50 quid extra for the altimeter, compass, and high-speed GPS hardware (with the additional cost of battery life being 25 hours rather than 32). In the end I decided that for the cost of a reasonable dinner for two… why not?

Along with the Vista I have the official Garmin TOPO Great Britain map, a bloody expensive heap of bytes. At 100 quid from many UK sellers, it seems very expensive until you stop to think that it includes topographical and road data for the entire UK. Reflect on Encyclopaedia Britannica for a moment though, remember when they produced a CDROM version and tried to flog it for a four digit price? The digression digresses… It’s the great divide between, what I think of as, “the past” versus the new “digital product generation”. Shelves of encyclopaedias that you pay thousands of units of currency for have become an anachronism and I expect many parts of that industry were laid to rest by the “digital generation”. At a time when it seems even the empires of the media distribution companies may crumble, vendor lock-in can’t keep the likes of Garmin going for long. Tomorrow the capabilities of their eTrex will be in my phone[3] and Google Earth will be the only software I need as roving communities of GPS geeks build up their own databases of topographical data. Gah! Enough idle speculation, back to my digression.

Garmin Vista HCx

In the short time I have had to play with my geek bounty I’ve been pretty impressed. The Garmin gets a lock damn fast and the GPS tracking against their map is impressively spot on, doubly impressive to see it map into Google Maps with high accuracy as well! (More on that in a moment.) At first the screen seemed rather small (3.3×4.3cm), but it does not inhibit use of the device as much as I expected (it is also surprisingly readable in daylight). The input interface is simple, using 5 buttons and a mini-joystick, it took a little learning but after a day in the field I didn’t have to think to operate it.

So, the downsides? Well, as per the earlier rant, map data is very expensive. I bought this for UK trekking so the UK map was essential (and realise that, if you prefer to pay for such things, this will add 50% more to the price of a good unit). While, considering the content, I think the price isn’t unjustified, I also think that it is a significant “hidden cost” that really should be better disclosed in the product description and specifications. Time for some more subdigression. A system where you could license, say, 100 square miles of map would be great for the trekker. By this I mean you’d have such a license and at any one time be able to load on at most 100 square miles from an online Garmin world-map database. For something like a 20 quid yearly subscription this would seem pretty attractive. It is probably prone to having the data ripped though, but that’s nothing new — as far as I can see you can already download unlocked versions of the majority of Garmin map products from various file-sharing systems.

The second point about the maps is: don’t get your hopes up. They’re nowhere near as good as the Ordnance Survey Explorer and Landranger maps. Consider it this way: a Garmin GPS unit with GB TOPO maps is a near-perfect navigational aid, but keep your trusty OS handy for the fine details. The up-side is that the topographic data on the GB TOPO maps is from the OS, so it matches perfectly and it’s easy both to home in on your on-paper location and to map a waypoint into the GPS based on OS map features. As far as I can work out the TOPO maps are the best you’ll get for the Garmin; I think trying to display all the OS data would be a UI nightmare anyway.

What else is wrong with the device? Well, I find the electronic compass to be too unstable, but I might just need to get more used to it. So far I’m not convinced that I’d want to use it to take a bearing. Now to my main gripe: the little research I have done indicates that Linux basically doesn’t exist in the world of Garmin. (Shock! Horror! Oh, poor me, the big bad company doesn’t care that I’m a technodeviant!) The Win32 MapSource tool that comes with the device is a bit clunky but actually does its job pretty well, letting you plot out courses to upload to the GPS device, and download then edit tracks and waypoints saved on your trekking. (With the insane limitation that it cuts off waypoint names at something like ten characters. What decade is this!)

What can Linux deviants turn to? Well, some dude has done a great job on a tool called gpsbabel, which does the very important task of sucking data from the unit, or from files saved in MapSource format, and converting it into a variety of other formats. I have found that the process that works best for me is to download data from the unit in Windows/MapSource to tidy up the tracks and waypoints as necessary, then use gpsbabel to convert the data into the format I ultimately desire: Google KML. (gpsbabel works under both Windows and Linux.) The KML needs to be hand-cleansed though, otherwise Google Maps barfs on some parts of it; I haven’t had time to take a closer look at this.
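For the record, my gpsbabel invocations look something like the following sketch (filenames are mine, and getting the usb: device readable under Linux may need the garmin_gps kernel module and udev permissions sorted, which varies by system):

```shell
# Pull tracks (-t) and waypoints (-w) straight off the Garmin unit:
gpsbabel -t -w -i garmin -f usb: -o gpx -F walk.gpx

# Convert the result (or a MapSource-tidied export) to Google KML:
gpsbabel -t -w -i gpx -f walk.gpx -o kml -F walk.kml
```

The same -i/-o pairs work in the Windows build, so the MapSource-then-convert workflow only needs the one tool on each side.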

I’ve had the eTrex Vista HCx for only 4 days, so it is still “early days”. I’m hoping to work out an acceptable all-Linux solution. This might be using gpsbabel to suck from (and load to) the device and “Google Earth” to edit and create tracks and routes. The $US20 per year version of Google Earth appears to support Garmin devices; that is certainly worth exploring. Unfortunately Google Earth stopped working for me when I upgraded my Ubuntu to gutsy (I’ll echo other people in the opinion that upgrading to gutsy was mostly a PITA, the last thing I wanted was bloody geek wank like compiz), so I’ll wait for the free version to work again before trying the Plus version. You can also edit tracks and points with the Google Maps web-application, but I find it too laggy. (Is it just me, or has Firefox become a slow piece of crud these days? I find myself using Opera more and more often now.)

As is the way of these things I have now written a lot more about the negative than the positive. Don’t be fooled! So far I’m very happy and impressed with the new toy, it was really very pleasant company on a couple of longish walks we did this past weekend.

So, fungus I said. Gus? Who’s Gus? (Gus is the name identifier I’ve loaded onto my GPS!)

Fungal Specimens from Whippendell Woods

On Saturday October 20th, GPS in hand, we reprised our Whippendell Woods Walk — hunting fungi. Mapping the track from the Garmin into Google Maps left me rather impressed by both the accuracy of the GPS and the translation between the GPS and Google Maps. The trail comes up with enough accuracy to even be mostly on the correct side of the canal we followed (though often in the canal). We gathered 10 samples for later identification, which has proven to be a fun exercise. It’ll be interesting to see how long our little amateur-mycology hobby lasts. (Historically, I’m very bad at hobbies, the pattern tending to be an intense burst of focused interest shortly followed by complete and utter neglect.) The hardest part of fungus hunting is that we have an interest in finding stuff that is good to eat; gastronomic exploration is very much a part of who I am. But fungi are a bit of a dangerous minefield of creatures with names including words like “death”, “sickening”, and “poison”, and on top of that we noticed this weekend that people had been through and really not treated the fungi very well. (There’s quite a bit of money in commercial harvesting of wild fungi these days; sometimes I curse the recent gourmet revolution driving up the scarcity and prices of things that used to be little-known delicacies.)

On Sunday we did a quick south-of-Ricky pub-ramble, taking in Ye Olde Greene Manne (nothing special, a chain pub) and the Rose and Crown (a pretty good pub).

I intend to write more about both walks… though, as ever, such intentions go onto the pile with the likes of writing about some call graph visualisation I explored recently, several noteworthy places I’ve eaten at, some good coffee houses, some interesting books… the list goes on.


[1] The Encyclopedia of Fungi of Britain and Europe by Michael Jordan (excellent but rather large for trekking); Field Guide to Edible Mushrooms of Britain and Europe by Peter Jordan (not related); Collins Gem – Mushrooms by Patrick Harding (ultra mobile).

[2] The LED Lenser V2 Professional seems rather nice, though I have read some less than positive comments about the LED Lenser products.

[3] You should see the technogeek lolly goodness available (or soon to be) in Japan, the likes of: OLED display watches with 4GB storage for audio and video; normal sized mobiles with wifi and GPS; self-milking genetically engineered digital cows that you can keep in the fridge and that live on old food that otherwise might evolve

Django on Debian Sarge

Note: This entry has been restored from old archives.

The main reason I’m posting this is so that other people can avoid trying to play “chase the dependency trail” in back-porting the Debian etch Django source package to sarge. If you want to do that I suggest working on modifications to the source package rather than following the trail.

[Note: If you want Django on Debian etch then you can simply: apt-get install python-django (not the latest and greatest though, 0.95.1 at this time, your best bet in all cases is probably a local user install from SVN)]

Here’s how I successfully installed Django as a Debian package, followed by the way I failed to do so.

Creating a Django Debian package (successfully)

After my trail-following led me to a cliff I gave up on the 0.95.1 Debian source and moved on to this hackish method of getting a package. There are plenty of ways to get packages onto a Debian box. Let me scare you with this:

:; su -c 'apt-get install rpm alien'

(I’ll use Sam’s neat prompt to differentiate between commands and other crud.)

Then:

:; wget http://www.djangoproject.com/download/0.96/tarball/
:; tar xzf Django-0.96.tar.gz
:; cd Django-0.96
:; python setup.py bdist_rpm --packager="Yvan Seth" --vendor "django" \
            --release 1 --distribution-name Debian

However, the last step fails with “Illegal char '-' in version: Version: 0.96-None”, so I edit build/bdist.linux-i686/rpm/SPECS/Django.spec and remove the -None from the version (I can’t see a way to do this with the bdist_rpm options):

:; vim build/bdist.linux-i686/rpm/SPECS/Django.spec
:; cp ../Django-0.96.tar.gz build/bdist.linux-i686/rpm/SOURCES/
:; rpmbuild -ba \
        --define "_topdir $HOME/source/Django-0.96/build/bdist.linux-i686/rpm" \
        --clean build/bdist.linux-i686/rpm/SPECS/Django.spec
:; fakeroot alien -k \
        build/bdist.linux-i686/rpm/RPMS/noarch/Django-0.96-1.noarch.rpm
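Incidentally, the vim edit above could equally be scripted; a hedged sed sketch (the Version-line text is assumed from the error message, check your generated spec):

```shell
# Strip the bogus "-None" suffix bdist_rpm leaves on the spec's
# Version line; prints the corrected spec to stdout.
fix_spec() { sed 's/^\(Version: [0-9.]*\)-None$/\1/' "$1"; }

# Apply against the generated spec, e.g.:
# fix_spec build/bdist.linux-i686/rpm/SPECS/Django.spec > Django.spec.fixed
```

Redirect over the original (via a temp file) to apply it in place.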

Joy! Now I have a django_0.96-1_all.deb, do I dare to install this critter?

:; su -c 'dpkg -i django_0.96-1_all.deb'
Password: 
Selecting previously deselected package django.
(Reading database ... 41757 files and directories currently installed.)
Unpacking django (from django_0.96-1_all.deb) ...
Setting up django (0.96-1) ...

This sucks like a 747 engine (or blows, all a matter of perspective), but the deed is done. Probably better than “su -c 'python setup.py install'” but in the end it would probably have been best to just do a local --prefix=$HOME/blah type of install.

Setting it up

The Django site documents installation at http://www.djangoproject.com/documentation/0.96/install/.

The Django documentation is rather good; once I was through with my packaging débâcle the doco got me up and running in next to no time. My notes here are specific to my system and probably not useful to anyone.

Database

I’m already running both PostgreSQL and MySQL. I chose to use PostgreSQL because the sarge package for python-psycopg fits the specification given by the Django instructions while the python-mysqldb version is a little older than the specified minimum version. I’m also more familiar with postgres. So:

:; su -c 'apt-get install python-psycopg'

You’ll want to set up a database, so get an appropriately privileged psql shell and, for example, do:

CREATE DATABASE django_mysite;
CREATE USER django_mysite WITH PASSWORD 'dumbpassword';
GRANT ALL PRIVILEGES ON DATABASE django_mysite TO django_mysite;
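The matching settings fragment then looks something like this (a sketch using the 0.96-era flat DATABASE_* settings; names are taken from the SQL above, and 'postgresql' is the psycopg-1 backend):

```python
# mysite/settings.py (fragment), Django 0.96 style
DATABASE_ENGINE = 'postgresql'      # psycopg 1, as installed above
DATABASE_NAME = 'django_mysite'
DATABASE_USER = 'django_mysite'
DATABASE_PASSWORD = 'dumbpassword'
DATABASE_HOST = ''                  # empty means local socket
DATABASE_PORT = ''                  # empty means default port
```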

mod_python

The mod_python setup is documented here: http://www.djangoproject.com/documentation/0.96/modpython/

I have mod_python installed and it makes sense to use it. I’m using a pretty funky Apache vhost setup though and I’m not going to detail it here. In essence you want to find the appropriate VirtualHost section and insert something like this:

    <Location "/djangotest/">
        SetHandler python-program
        PythonHandler django.core.handlers.modpython
        SetEnv DJANGO_SETTINGS_MODULE mysite.settings
        PythonPath "['/home/vhost/malignity.net/data/django/'] + sys.path"
        PythonDebug On
    </Location>

With that done you should see a basic status page at the configured URL (i.e. http://malignity.net/djangotest/).

All Good!

Now to follow the great tutorial that starts here: http://www.djangoproject.com/documentation/0.96/tutorial01/ I’m mildly impressed already, and web stuff usually pisses me off before I even touch it. We’ll see how it fares after I try and actually do something with it though.

How Not To Do It

Backporting python-django from Debian etch to sarge

This was a supposed to be a command log for a successfully back-ported Django package build. But it didn’t turn out so well.

We start with a Debian sarge server box. I really don’t want to risk dist-upgrading a box that’s in another country, a 2-hour flight away. The box runs with some backports.org packages for the likes of Apache2, SpamAssassin, and ClamAV updates. Consider it a fully up-to-date general server with this sources.list file:

:; cat /etc/apt/sources.list
deb ftp://ftp.de.debian.org/debian sarge main contrib non-free
deb http://security.debian.org/ sarge/updates main
deb http://www.backports.org/debian sarge-backports main

What I want on this box is this cool new Django thing I’ve heard so much about, but rather than download and work with the tarball I’d prefer a Debian package (call it a form of OCD if you like). A fairly up-to-date package is available in the currently stable etch, and we can trust Debian to track it for security fixes, so if I follow this package manually we ought to be able to have a relatively safe install. So! Time to try and build the etch sources for sarge.

:; su
:; cat << END >> /etc/apt/sources.list
deb-src ftp://ftp.de.debian.org/debian etch main contrib non-free
deb-src http://security.debian.org/ etch/updates main
deb-src http://www.backports.org/debian etch-backports main
END
:; apt-get update
:; exit

Create and install python-django?

:; mkdir -p pkgbuild
:; cd pkgbuild
:; apt-get source python-django
:; pushd python-django-0.95.1
:; fakeroot dpkg-buildpackage
...
dpkg-checkbuilddeps: Unmet build dependencies: debhelper (>= 5.0.37.2)
    python-dev cdbs (>= 0.4.42) python-setuptools (>= 0.6b3-1)
    python-support (>= 0.3)
...

In a perfect world this would build your package! But things are rarely perfect. The dpkg-buildpackage reports any missing build dependencies, I had to install the parts that I could:

:; su -c 'apt-get install debhelper python-dev cdbs python-setuptools'
...
Setting up cdbs (0.4.28-1) ...
Setting up python2.3-dev (2.3.5-3sarge2) ...
Setting up python-dev (2.3.5-2) ...
Setting up python2.3-setuptools (0.0.1.041214-1) ...
...

Notice the versions aren’t really what I’m after, I’ll be punished for that later.

Create and install python-support?

Alas it isn’t as simple as that; another build dependency was python-support, which wasn’t available for sarge, so…

:; popd
:; apt-get source python-support
:; pushd python-support-0.5.6
:; fakeroot dpkg-buildpackage
:; su -c 'dpkg -i ../python-support_0.5.6_all.deb'
...
 python-support conflicts with debhelper (<< 5.0.38)
...

Nope, no luck. Looks like the version of debhelper installed really needs an upgrade… I prefer to trust the package maintainers and not try to force an older debhelper version on things.

Create and install debhelper?

:; popd
:; apt-get source debhelper
:; pushd debhelper-5.0.42
:; fakeroot dpkg-buildpackage
...
dpkg-checkbuilddeps: Unmet build dependencies: po4a (>= 0.24)
...
:; su -c 'apt-get install po4a'
...
Setting up po4a (0.20-2) ...
...
:; popd

Create and install po4a?

It is getting a bit over the top now; another route would be advisable at this point. While there’s got to be a better way, I’m possessed of a zombie-like persistence, so I keep going…

:; apt-get source po4a
:; pushd po4a-0.29
:; fakeroot dpkg-buildpackage
:; # Joy! It builds!!
:; popd
:; su -c 'dpkg -i po4a_0.29-1_all.deb'
:; # Joyjoy! It installs!!!

Pass 2 of: create and install debhelper?

:; pushd debhelper-5.0.42
:; fakeroot dpkg-buildpackage
:; # Joy! It builds!!
:; popd
:; su -c 'dpkg -i debhelper_5.0.42_all.deb'
 debhelper depends on dpkg-dev (>= 1.13.13); however
  Version of dpkg-dev on system is 1.10.28.

Sigh. Is it really worth following this path any further?

Create and install dpkg-dev >= 1.13.13?

No, I really don’t want to do this. Time to stop! Additionally:

:; popd
:; apt-get source dpkg-dev
:; pushd dpkg-1.13.25
:; fakeroot dpkg-buildpackage
...
dpkg-checkbuilddeps: Unmet build dependencies: debhelper (>= 4.1.81)
  pkg-config libncurses5-dev | libncurses-dev zlib1g-dev (>= 1:1.1.3-19.1)
  libbz2-dev libselinux1-dev (>= 1.28-4)

A circular dependency on my debhelper/dpkg requirements. Version requirements for libselinux1-dev and zlib1g-dev that are too up to date for sarge and would require builds. No, it’s gone too far.

Another route would be to try modifying the package build information to get a Django package that doesn’t have any of these requirements. But that also tends to be a slippery slope.

Yeah… just don’t do this. OK?

Radio(head) Kills The Music Mafia

Note: This entry has been restored from old archives.

Check out the new Radiohead album. They’re self-distributing and seemingly not involving a big label. The deal is pretty sweet too: the “discbox” contains a nice swag of stuff (but I’m not into things, so I’m less likely to go for that) and the “download” comes with a checkout page where you choose the price.

It’ll be interesting to see how this experiment goes.

The “music industry” might have more to worry about than filesharing. In a couple of decades they might not have any new music to sell.

Fatsos Wiggle Their Toes

Note: This entry has been restored from old archives.

Now I say this as a certified “fatso”: pedometers/wiis/foot-tapping aren’t going to make fat bastards experience “big health benefits” (my qualifications: a BMI of 27.6 … still a fat bastard, but that’s down from the 37.6 I was at 3 years ago — BMIs don’t mean a huge deal but the mirror doesn’t lie). Maybe there’s one fatso in one thousand who gets a case of OCD over their wiggle-my-stick-wii game and loses a few pounds, but seriously? “The allure of computer gaming and competition with other users encourages players to make small lifestyle changes that can add up to big health benefits.”

But these are scientists saying this is so, so who am I to pronounce judgement
or even have an opinion? It’s just rare that I see something come through ACM
TechNews that makes me think “what a load of tripe”.

AFAIC fat bastards have two things to do:

  1. eat less,
  2. lift some heavy shit (seriously, muscle burns calories).

Wiggling your PDA and other “small lifestyle changes” isn’t going to do it for
you. And for the truly obese “running” is just going to stuff your knees.
Want real incentive? Take away access to public health services of any kind,
it should be treated the same as smoking or any other personal risk-increasing
habit. You choose to increase your risk, you cover the consequences. Of
course the US is way ahead of everybody on that front, but they’re still all
fat. I guess that’s why I’m not qualified to have an opinion 🙂 d’oh!

Of course, this is actually from a press release, so isn’t worth paying attention to.

Bah, this isn’t even going in tech.

[[[Update: Fellow fatsos should read this.]]]

Another Dying Gasp From Email?

Note: This entry has been restored from old archives.

I’ve sporadically been losing emails recently. It turns out this is due to two things.

  1. I changed ISPs and now have a dynamic IP that is in several blacklists.
  2. I’ve been sending emails with the string “configure.ac” in them and this is in several URI blacklists.

Mostly this means I don’t receive my own emails, but sometimes the IP thing seems to catch emails on their way to me from someone else. I do have to wonder who else is not getting my emails though 🙁

OK, so “dying gasp” is a bit melodramatic. But email seems to be becoming increasingly unreliable. Unless you’re expecting email, and will thus miss it when it doesn’t arrive, how do you know you’ve missed the unexpected? There’s no way of knowing whether you’re getting all you should be, or whether others are getting all you’re sending! More and more I use IM and websites for communication, and email becomes an “on the record” and “just a sec, I’ll email you the file” medium.

The listed-IP thing is only going to happen to geeks who have local mail relays. I use a local mail relay for work email, so it is kind of important. I guess I’ll have to configure the local MX to not add a Received header.
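For what it’s worth, if the relay in question happens to be Postfix (a hedged sketch; other MTAs do this differently), header_checks can do the stripping, though it’s a blunt instrument that applies to all mail passing through:

```
# /etc/apt/../postfix/main.cf -- enable header filtering
header_checks = regexp:/etc/postfix/header_checks

# /etc/postfix/header_checks -- drop Received headers that
# would expose the (blacklisted) dynamic IP
/^Received:/    IGNORE
```

Regexp tables don’t need postmap; a reload after editing is enough. The obvious caveat: stripping Received from everything also hurts loop detection and debugging, so a smarter rule matching only the local relay’s hostname would be kinder.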

The “configure.ac” thing is just a PITA.

Still Doesn’t Like Kaspersky

Note: This entry has been restored from old archives.

Seeing more of those emails that try to hurt Kaspersky’s feelings. An interesting note about them: if you download with an IE user-agent string you get something different than what you get with a Firefox user-agent string. If the UA string isn’t FF or IE you get simple HTML with just the link to the exploit .exe file. The obvious difference between the FF and IE versions is that the FF version of the code doesn’t insult Kaspersky.

Beyond that, the FF and IE versions have very different payloads attached. The IE payloads I see now are very similar to the weekend’s, with some minor differences that seem to mainly revolve around the different IP address. The decoded script contains a variety of nastiness, including downloading “file.php”, which is another PE executable, yet another version of Zhelatin/Nuwar/Storm. This site’s version of video.exe is labor.exe (Labor Day in the US). Both PEs are detected as Zhelatin vars by KAV. KAV catches the IE version of the web script, but not the FF version. Overall scan results are pretty average (heh, these guys probably use sites like virustotal.com to test their damn malware).

   File             | Caught By | As %
--------------------+-----------+--------
IE Script           |   6/31    | 19.36% 
IE Script (decoded) |  15/32    | 46.88% 
FF Script           |   7/31    | 22.59% 
FF Script (decoded) |  12/32    | 37.50% 
labor.exe           |  16/32    | 50.00% 
file.php            |  12/32    | 37.50% 

(The /31 entries are where the Prevx1 scanner wasn’t included for some unexplained reason.)

The Firefox post-xor payload is much shorter than the IE version. It seems to contain just a couple of simpler exploits. One is for the Windows Media Player plugin EMBED bug, MS06-006. The other looks like something intended to do some stack smashing in the FF javascript engine.

Also worth noting, each time you download you get a script that has used a different value for the xor key (well, probably random rather than specifically different). Both versions have the same obvious xor decrypt though. Getting closer to some difficult form of polymorphism?
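To make the decode stub concrete, here’s a toy shell reconstruction of the pattern (the key and data are invented for illustration, not taken from the actual script):

```shell
# The script carries a hex blob xor'd with one random byte chosen
# per download, plus a tiny stub that undoes it. Here "564952" is
# "VIR"; xor with key 0x20 flips the case back to "vir".
hex="564952"
key=32          # 0x20, the per-download xor key
out=""
i=1
while [ "$i" -le "${#hex}" ]; do
    b=$(echo "$hex" | cut -c"$i"-"$((i + 1))")      # one hex byte
    out="$out$(printf '%02x' $(( 0x$b ^ key )))"    # xor and re-hex
    i=$((i + 2))
done
echo "$out"     # hex of the decoded bytes: 766972 ("vir")
```

Because the key changes on every download, signatures on the encoded blob are useless; only the short decrypt stub itself stays stable, which is presumably what the better scanners are matching on.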

Seen only a couple of IPs hosting this creature so far. In both cases they’re RoadRunner owned IPs in the US.

Finally, here’s a coverage summary from a script that processes virustotal.com results. This data is by no means a meaningful representation of anything at all. Top points to Webwasher, although AFAIK it uses multiple AV engines. I’ve never even heard of half these scanners outside of virustotal.com scans.

                                 FF-dec FF IE-dec IE file.php labor.exe  COVERAGE
Webwasher-Gateway (2007.09.03):       Y  Y      Y  Y        Y         Y     100%
              AVG (2007.09.03):       x  Y      Y  Y        Y         Y      83%
          AntiVir (2007.09.03):       Y  x      Y  x        Y         Y      66%
      VirusBuster (2007.09.03):       x  x      Y  Y        Y         Y      66%
        Kaspersky (2007.09.03):       x  Y      Y  x        Y         Y      66%
           McAfee (2007.09.03):       Y  Y      Y  Y        x         x      66%
         F-Secure (2007.09.03):       Y  Y      Y  x        x         Y      66%
         Symantec (2007.09.03):       Y  x      Y  x        Y         Y      66%
      BitDefender (2007.09.03):       Y  x      Y  x        Y         Y      66%
       eTrust-Vet (2007.09.03):       Y  x      Y  x        x         Y      50%
        Microsoft (2007.09.03):       x  x      Y  Y        x         Y      50%
            eSafe (2007.09.03):       x  Y      x  Y        x         Y      50%
           Sophos (2007.09.03):       Y  x      x  x        Y         Y      50%
           Rising (2007.09.03):       Y  x      Y  x        x         x      33%
            Ewido (2007.09.03):       x  Y      Y  x        x         x      33%
    CAT-QuickHeal (2007.09.03):       x  x      x  x        Y         Y      33%
            DrWeb (2007.09.03):       x  x      x  x        Y         Y      33%
          Sunbelt (2007.08.31):       x  x      x  x        Y         Y      33%
           Norman (2007.09.03):       Y  x      x  x        x         Y      33%
           Ikarus (2007.09.03):       Y  x      x  x        x         x      16%
            Panda (2007.09.03):       x  x      x  x        Y         x      16%
       Authentium (2007.09.02):       x  x      Y  x        x         x      16%
            VBA32 (2007.09.03):       Y  x      x  x        x         x      16%
           F-Prot (2007.09.02):       x  x      Y  x        x         x      16%
            Avast (2007.09.03):       x  x      x  x        x         x       0%
        AhnLab-V3 (2007.09.03):       x  x      x  x        x         x       0%
          NOD32v2 (2007.09.03):       x  x      x  x        x         x       0%
      FileAdvisor (2007.09.03):       x  x      x  x        x         x       0%
         Fortinet (2007.09.03):       x  x      x  x        x         x       0%
           Prevx1 (2007.09.03):       x  O      x  O        x         x       0%
           ClamAV (2007.09.03):       x  x      x  x        x         x       0%
        TheHacker (2007.09.02):       x  x      x  x        x         x       0%

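For what it’s worth, the COVERAGE column is just detections over the six samples, with the percentage truncated rather than rounded (which is why 4/6 shows as 66% rather than 67%, and 1/6 as 16%). A minimal sketch of that calculation:

```javascript
// Compute a truncated coverage percentage from a row of Y/x results,
// as in the table above (4 of 6 -> 66, not 67).
function coverage(results) {
    var hits = results.filter(function (r) { return r === "Y"; }).length;
    return Math.floor(100 * hits / results.length);
}
```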
[[[FYI I’m a big fan of using different AV scanners, i.e. use one product on your desktop, another on your mail server, and yet another at the gateway. I have a leaning towards McAfee and KAV; in the rather unrepresentative example above they make a perfect combination. 😉 It’s a bit expensive though, and you’re not going to get any “seamless integration” this way. Could be some call for a meta-AV company. The meta-AV company creates a UTM, remote desktop management system, and messaging (mail, etc.) server scan interface with one unified management system. What would make it different from the alternatives I’ve seen around is that rather than being single-vendor based, the aim would be to allow different AV products to plug in at each location.

Another semi-related thought is that you could have a system where a business has n different AV products installed across its desktop systems. Most employee desktops do stuff-all with their mega-cpu-power, so let’s put it to some good use. What you get is a “farm” of AV engines that your email/proxy infrastructure can call out to for scanning. To make it even more distributed you could have employee mail clients and web browsers pulling their traffic through their peers in such a way that each peer links through a peer with a different AV product. It’s a bit rough around the edges. Can you trust a desktop platform to do the job of a secure proxy server? What about the added latency, is it significant? AV scanning tends to be slow.]]]
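As a rough sketch of that farm idea, assuming each desktop exposes some scan callback for its local AV product (everything here is hypothetical, there’s no such API):

```javascript
// Fan a sample out to several scanner backends and aggregate the
// verdicts. The "scanners" would be desktop agents, each fronting a
// different AV product, in the scheme described above.
function scanWithFarm(sample, scanners) {
    var verdicts = scanners.map(function (scan) { return scan(sample); });
    var detections = verdicts.filter(function (v) { return v.detected; });
    return {
        detections: detections.length,
        total: scanners.length,
        malicious: detections.length > 0
    };
}
```

Flagging on any single detection is the paranoid choice; you’d probably want per-engine reputations or a threshold to keep false positives manageable.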

Someone Doesn’t Like Kaspersky

Note: This entry has been restored from old archives.

Seeing more and more of these spammed attempts to get people to self-infect. Most recently I passed over one that looked much like one described by the AVERT blog a short while ago. A very simple email with the line:

Dude I know thats you, someone emailed me a link to the video. see for yourself… http://www.youtube.com/watch?v=iVyfrel8jIt

The bit that seems to be a YouTube link is actually wrapped in an anchor tag linking to an IP address (not reproduced above). Not YouTube! Duh! (It’s rather disappointing that the YouTube URL actually doesn’t show some amusing video.)

If you hit the site you get a nice HTML page that tells you your video will be ready in 15 seconds. Meanwhile it tries to break your web browser, as recently described on the Kaspersky blog. In fact I think the author of this malware might read the KAV blog too, from the script code:

function kaspersky(suck,dick){}; function kaspersky2(suck_dick,again){};

Ouch! Getting personal in malware code!

As an added bonus the page includes:

If your download does not start in approximately 15 seconds, you can click here to launch the download and then press Run.

Sure, “press Run”? How many people will this sucker in? Too many, I’m afraid.

ClamAV tells me that the HTML page is “JS.XorCrypt” (some sort of generic signature I assume) and that the video.exe file linked to is “Trojan.Small-3273”. McAfee and Kaspersky both catch both files too, “Nuwar” and “Zhelatin” respectively for video.exe… no surprises there. I guess the author is right to be annoyed at Kaspersky, it catches their malware! Ha! (On VirusTotal.com 46.88% of 32 scanners detect the HTML file and 78.13% detect the executable – detected malware names vary greatly.)

Examining the code in these things is often fun. In this example the HTML page contains the (reformatted) code:

function xor_str(plain_str, xor_key)
{
    var xored_str = "";
    for (var i = 0; i < plain_str.length; ++i) {
        xored_str += String.fromCharCode(xor_key ^ plain_str.charCodeAt(i));
    }
    return xored_str;
}
function kaspersky(suck,dick){}; 
function kaspersky2(suck_dick,again){};
var plain_str = <<OBFUSCATED_STRING_HERE>>
var xored_str = xor_str(plain_str, 20);
eval(xored_str);

Given a couple of minutes I can translate this to:

#!/usr/bin/perl -w
use strict;
sub xor_str
{
    my ($plain_str, $xor_key) = @_;
    my $xored_str = "";
    for my $chr (split //, $plain_str)
    {
        $xored_str .= chr($xor_key ^ ord($chr));
    }
    return $xored_str;
}
my $plain_str = <<OBFUSCATED_STRING_HERE>>
my $xored_str = xor_str($plain_str, 20);
print $xored_str;

I don’t really have time to dig deeper (it’s 03:31 right now!), but here’s the list of functions grepped out of the decoded exploit code.

h() {mm=mm; setTimeout("h()", 2000);}
getb(b, bSize)
cf()
startWinZip(object)
startWVF()
elea(){
yah()
startOverflow(num)
GetRandString(len)
CreateObject(CLSID, name) {
XMLHttpDownload(xml, url) {
ADOBDStreamSave(o, name, data) {
ShellExecute(exec, name, type) {
MDAC() {
start() {

A final note. The virustotal.com result for the decoded payload gives a 46.88% (15/32) detection rate. What is interesting is that the detections are by a very different set of AV products and identified by a very different set of names! Only 7 engines detected both the encoded and decoded forms. Of these seven only one gave them the same name, but this was the rather uninspiring “Downloader” from Symantec. I kind of expected that at least one product would be able to perform the decode and identify the payload (although if you can detect prior to doing this you save CPU time, so doing the decode isn’t necessarily desirable).
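The overlap check is easy to script against two sets of results once you have them as maps from engine name to detection name (null on a miss); the data shape here is my own, not virustotal.com’s actual output format:

```javascript
// Which engines flag both the encoded and decoded forms, and which
// of those give both forms the same detection name?
function compareDetections(encodedRes, decodedRes) {
    var both = Object.keys(encodedRes).filter(function (e) {
        return encodedRes[e] && decodedRes[e];
    });
    var sameName = both.filter(function (e) {
        return encodedRes[e] === decodedRes[e];
    });
    return { both: both, sameName: sameName };
}
```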

All in all I think it is rather sad that malware this lame will probably do its intended job and net a few more netizens for the botnet empire.

Fun’n’games.

Malware Spam Joy

Note: This entry has been restored from old archives.

Malware seems to be getting more straightforward these days, from a short while ago:

We are looking for Consumer opinions of our new software Digital Kittens

This beta testing will enable us to fine tune the software for public
release. For helping out, you will receive a free edition and 5 years of
updates.

1: Download the software  2: Try it  3: Tell us what you think If you
want to participate, just follow the link to our download site:
http://7w.2xx.2y.1zz/setup.exe

Who wouldn’t want free digital kittens?! You can play with beta kittens, help some company out, and get years of free digital kittens as a reward. How do you fight that wetware exploitation? “Don’t accept kittens from strangers”? I have trouble getting past the point of view that “it’s damn obvious that you don’t execute unsolicited .exe files”, but the fact is this still seems to be obvious only to a minority of computer users. Got to have that AV installed! It’ll give you some protection, though it probably won’t be much use if you’re in the first wave of recipients of a properly engineered piece of malware that’s been tested against the AV engines.

VirusTotal.com tells me (with engines that failed to do the job edited out):

AhnLab-V3           Win32/Zhelatin.worm.140367
AntiVir             WORM/Zhelatin.Gen
Authentium          Possibly a new variant of W32/Fathom.3-based!Maximus
Avast               Win32:Tibs-BFG
AVG                 Downloader.Tibs.7.X
BitDefender         Trojan.Peed.IGS
CAT-QuickHeal       (Suspicious) - DNAScan
ClamAV              Trojan.Small-3637
DrWeb               Trojan.Packed.142
eSafe               Win32.Zhelatin.hq
eTrust-Vet          Win32/Sintun.AE
Ewido               Worm.Zhelatin.hq
Fortinet            W32/Tibs.GN@mm
F-Prot              W32/Fathom.3-based!Maximus
F-Secure            Email-Worm.Win32.Zhelatin.hs
Ikarus              Email-Worm.Win32.Zhelatin.hq
Kaspersky           Email-Worm.Win32.Zhelatin.hs
McAfee              Tibs-Packed
Microsoft           Trojan:Win32/Tibs.DV
NOD32v2             Win32/Nuwar.Gen
Norman              W32/Tibs.ASFB
Panda               W32/Nurech.AU.worm
Sophos              Mal/Dorf-E
Sunbelt             VIPRE.Suspicious
Symantec            Trojan.Packed.13
TheHacker           W32/Zhelatin.genw
VirusBuster         Trojan.Tibs.Gen!Pac.132
Webwasher-Gateway   Worm.Zhelatin.Gen

This kitten is diseased. Time to back over its poor little head with a car.