
Clean Swarm

Note: This entry has been restored from old archives.

There’s been an increasing density of news from the robotics front in the tech media over the last year; it’s even been spilling over into mainstream news. Much of the interest is, of course, in the humanising of robotics: the latest two-legged walking robot, the cutest robot, etc. The more interesting news is in areas like unmanned exploratory robotics (those amazing Mars Rovers, but more autonomous) and context-aware machinery. Generally, it looks like we’re moving towards far more capable robotics.

Beyond all the cuteness, humanness, industrial efficiency, and extreme exploration there’s something I really want out of robotics: a cleaner.

It seems a less horrendous problem than making a machine like a human, but there are probably difficulties I’ve not imagined. I’ve been thinking about swarm robotics a lot in recent times, though not many stories report on it. I’d have thought that swarms would be more robust and flexible in many situations, especially exploration: you could have a generic chassis plus some specialisation, and you’d have redundancy. I guess the problems are in connectivity, co-ordination, and energy density. So maybe we’re waiting for advances in power and mesh-networking technologies to make this sort of thing feasible. Another approach would be a “queen bee” that mothers a swarm acting as the queen’s eyes, ears, hands, etc. Maybe this could mitigate the power and control problems by adding some centralisation? I guess if it comes to exploration there’s also the chance a shark might eat your swarm-bots! 🙂

Aaaanyway… cleaning swarms? I’m terrible when it comes to cleaning, and only slightly better than Kat :-p so our place can generally get pretty chaotic. I’m often heard to exclaim, much exasperated, about my inability to keep the kitchen in a state that doesn’t resemble a pig sty. (Yet I cook in there, to some extent, almost every day — and am yet to give either of us a case of food poisoning.) Now, chaos I don’t actually mind; it’s the dirt and grime that breeds within the chaos that gets my goat. My thought is that you could have a small swarm of ‘bots with simple cleaning functions. They don’t do anything pointlessly complex, like stacking stuff in the dish-washer; rather, they clean everything in-place. Dishes, utensils, surfaces, everything.

They have little brushes and mops and scuttle around washing dirt off everything. Biological recyclables go in one bucket and everything else in another (that might be a hard one to implement). There’ll be a dump-station where they empty their little rubbish accumulations, and a central command computer that does the thinking, leaving the ‘bots themselves needing minimal intelligence of their own. There’ll also be a maintenance station where they can charge up, top up on cleaner, self-clean, and change any consumable parts when required. They’re only active when there is no non-‘bot activity in the room: if a human enters while they’re active they scuttle to the corners (maybe ‘bot-holes?) and stop, so long as they can’t get in the way there; otherwise they just stop where they are (and work out where they are if manually relocated).

Yeah, there are a lot of difficulties. How hard to scrub? What to scrub? What is mess, and what is something left on the bench for later? What if you’re interrupted by a phone call in the middle of preparing dinner and the ‘bots clean away your “mess”?

It’d be a great area to work in. One of the many things that makes me wish I was at, or able to go back to, Uni.

Beerolies

Note: This entry has been restored from old archives.

Believe it or not, there are Calories in beer. So while you’re being careful with that poached chicken and steamed vegie dinner[1], the two beers you wash it down with might double your Calories![2]

Alcohol is much more like a carbohydrate than a protein or fat, and for nutritional purposes you can count it as such. Note, however, that alcohol gives you 7 Calories per gram, rather than the 4 from normal carbs. Most beer will also contain sugars, which contribute Calories. The unfortunate thing is that brewers don’t have to put any nutritional information on their beer (nor wineries on wine, for that matter). Luckily for us, we can get a vague idea of the damage we’re doing to our careful planning from the alcohol volume. In some beers (much more so for wines) the sugar content is a much less significant contributor to Calories than the alcohol, although others can have a fairly high carbohydrate contribution from sugars. So alcohol-derived Calories are a minimum, and the true figure could be higher still (some examples of alcoholic Calories versus published Calories are given below).

The calculation is simple, but if you try to find information on the ‘net you mostly seem to get not-so-useful “select number of drinks” weekly calculators, where the drink classifications may or may not be relevant to whatever you’re guzzling. (“Red Wine” hey, 10% or 14% alcohol volume?)

My example is an Innis & Gunn Oak Aged Beer. A 330ml bottle at 6.6% alcohol volume.

The calculation:

  • Calculate millilitres of alcohol, 6.6% of 330ml:
    • 0.066 x 330 = 21.78
  • Calculate weight by multiplying millilitres by the specific gravity of alcohol:
    • 21.78 x 0.789 = 17.18442
  • Calculate Calories by multiplying by Calories per gram of alcohol:
    • 17.18442 x 7 = 120

So, there are at least 120 Calories in a bottle of Innis & Gunn Oak Aged Beer.
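
The same arithmetic as a throwaway Python function, if you want to play along (a quick sketch of my own; the names are nothing official):

ALCOHOL_SG = 0.789  # specific gravity of ethanol (grams per millilitre)
KCAL_PER_GRAM = 7   # Calories per gram of alcohol
def alcohol_calories(volume_ml, abv_percent):
    """Minimum Calories, counting the alcohol alone; sugars add more."""
    alcohol_ml = volume_ml * abv_percent / 100.0
    alcohol_g = alcohol_ml * ALCOHOL_SG
    return alcohol_g * KCAL_PER_GRAM
print(round(alcohol_calories(330, 6.6)))  # the Innis & Gunn bottle: 120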

Another useful number to know is the number of Calories in a “standard unit” of alcohol, which in the UK is 10ml of pure alcohol (note that an Australian “standard drink” is defined by weight, 10g, so it’s a little bigger):

  • 10 x 0.789 = 7.89
  • 7.89 x 7 = 55

This is a very useful number to know: wherever you are, your alcoholic Calorie intake is at least (standard units) x 55! Though that might be a bit hard to work out after eight pints of lager.

So, food for thought:

A pint of Guinness, 568ml at 4.3%:

  • 0.043 x 568 = 24.424
  • 24.424 x 0.789 = 19.270536
  • 19.270536 x 7 = 135

The official figure for a pint of Guinness is 210 Calories, so as you can see a good proportion of the Calories comes from sources other than alcohol.

A 187.5ml (¼ bottle) of 12.5% wine:

  • 0.125 x 187.5 = 23.4375
  • 23.4375 x 0.789 = 18.4921875
  • 18.4921875 x 7 = 129

Average figures available on the ‘net for “dry white” are around 140 Calories for this volume.

40ml of Lagavulin 16yo single malt whisky at 43%:

  • 0.43 x 40 = 17.2
  • 17.2 x 0.789 = 13.5708
  • 13.5708 x 7 = 95

So there you go, maybe you’ll hold that third beer now? Regretting those 6 pints of Guinness every Friday after work, and maybe a few other days too?


[1] 200g chicken breast, 100g broccoli, 7.5g olive oil, plus herbs and spices: 320 Calories.

[2] It is a somewhat unusual convention that “Calories” with a capital “C” means “kilocalories”. A Calorie is enough energy to raise the temperature of one litre of water by one degree Celsius; a calorie is enough to raise the temperature of one millilitre of water by one degree. Almost always, when you see calories discussed in the context of nutrition (even with a lowercase “c”), people are talking about kilocalories.

Comments

Note: This entry has been restored from old archives.

I’ve added commenting. It’s likely not at all worth the effort involved, but eh. Maybe now I won’t have to try to remember corrections/observations that people send my way via email once or twice a year. Minor spam protection is in place, but no registration/captcha … for now (let’s see how long that lasts). Not quite sure what’s proper for this sort of thing: the comment form ends up in the RSS — maybe that’s wrong? Doesn’t seem to be normal. Existing comments also end up in the RSS version of entries, but they don’t have their own RSS feed and the UUID isn’t altered, so there isn’t an RSS way of tracking them (that’d really not be worth bothering with!).

Along the way I had trouble getting LWP to work. The reason being that I run Apache in a gaol (it being a tech term, I guess I should use “jail”?) which didn’t quite have the full set of required files. Anyway, strace is your friend in these instances. Errors along the lines of:

500 Can't connect to google.com:80 (Bad protocol 'tcp')

Caused by lack of /etc/protocols and /lib/libnss_files.so.2. Or:

500 Can't connect to google.com:80 (Bad hostname 'google.com')

Caused by lack of /lib/libnss_dns.so.2.

Example of inventorying the files required for something like LWP:

:; strace lwp-request http://google.com/ 2>&1 |
    grep '^open' |
    grep -v ENOENT |
    cut -d'"' -f2 |
    sort -u |
    grep -E '^/(etc|lib)'
/etc/host.conf
/etc/hosts
/etc/ld.so.cache
/etc/localtime
/etc/nsswitch.conf
/etc/protocols
/etc/resolv.conf
/lib/libc.so.6
/lib/libcrypt.so.1
/lib/libdl.so.2
/lib/libm.so.6
/lib/libnss_dns.so.2
/lib/libnss_files.so.2
/lib/libpthread.so.0
/lib/libresolv.so.2

Note that while these files are used by the command, they’re not all necessarily required. That final grep is just to trim down the list, which is otherwise quite a flood from /usr/lib/.
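
Once you know which files are missing, getting them into the gaol is just a copy. A minimal sketch, assuming GNU cp and a jail rooted at the hypothetical /var/jail/apache (adjust for your own layout); --parents recreates the /etc and /lib directory structure under the jail root:

:; cp --parents /etc/protocols /lib/libnss_files.so.2 /lib/libnss_dns.so.2 \
       /var/jail/apache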

Erroneous Blame for Firefox Slowness

Note: This entry has been restored from old archives.

For a while I’ve been very annoyed by how horribly slow Firefox is, writing it off as Firefox having grown into a disgusting slow heap. That said, I wasn’t comfortable blaming Firefox in such an off-hand manner; the issue could be Ubuntu doing something wrong, or one of the extensions I use. I almost felt I’d confirmed it was Ubuntu a little while back when switching to the mozilla.org Firefox install sped things up — yes it did (something to do with fonts and AA, I’ve read), but it was still pretty slow. I’ve even wiped my profile and rebuilt my Firefox setup from scratch a couple of times; still all bad.

What I failed to do was start by blaming that which is, really, the most unreliable part of my configuration: the ten or so extensions I use. Extensions are outside the control of Firefox and Ubuntu, often written by some random, and often written badly. (Well, so I expect in my cynical way.) Today I nuked my Firefox install and browsed my usual morning sites with no extensions installed, using the Ubuntu Firefox, and it really is pretty snappy. I’ve now re-installed Google Browser Sync and browsing has not degraded. Over the next few days I’ll reinstall my set of usual extensions and find out which is to blame (if any single one).

My Firefox extensions are:

  • Google Browser Sync (I don’t know how I lived without this. On the slowdown front it seems OK, so far.)
  • SwitchProxy Tool (Essential: I work through different redirected proxies throughout the day. There might be a better plugin for this, though. Notes on the addons.mozilla.org page say this is a cause of slowdown.)
  • AdBlock Plus (Difficult to live without this; I hate flashing/moving graphics all over websites. Flashblocker almost replaces it; I’d need flash+anigif blocking, but that might be OK.)
  • NeoDiggler (Provides the essential “clear URL bar” button, does some other things too that I don’t use.)
  • Google Toolbar (I probably don’t really need this, it’s so common though that I doubt it is the problem.)
  • Tab Mix Plus (Use this to tweak a few tab settings, can probably live without — closed tabs history is often helpful though.)
  • Web Developer (Usually disabled anyway, very useful. It can cause slowness when enabled.)
  • Firebug (Usually disabled anyway, extremely useful. It causes extreme slowness when some parts are enabled, shouldn’t be a worry in a disabled state though.)
  • Google Gears (I have issues with this: it occasionally segfaults at shutdown time, at least that’s where GDB points the finger. It is “Google BETA”. It makes offline Google Reader work, but I never use it.)

I’ll reinstall one per day over the next few days, in the order above, and see how my browsing joy fares. I’ll need at least a full day’s worth of browsing to work out if a plugin has a noticeable impact. (I don’t generally do a lot of web browsing.) I might try installing the Load Time Analyser extension next, though; so long as it doesn’t slow anything down itself, it seems likely to be useful.

Even with the massive no-extensions responsiveness boost, Firefox seems less speedy than Opera. I’ve been using Opera more often these days, now that it has some sort of sync feature it might be a viable Firefox replacement.

Referrer Bot

Note: This entry has been restored from old archives.

This is a quick addition to my previous post: Bot or Not?. Curiosity got the better of me so, through roundabout means, I got samples of some of the pages. First note is that the ‘hyml’ pages are 404s, so probably a typo.

Next note is that there is some dodgy-looking script in some of the pages. My first thought was: oh, this is just another botnet propagation setup. There are two layers of encoding in the snippet: first the data is URI-decoded, then each byte has 1 subtracted from it to get the real code, which is then eval()ed. Decoding shows that the content is short and simple, not a bot infester:

var r=escape(document.referrer), t="", q;
document.write("<script src=\"http://www.Z-ZZZZZ-Z.com/counter.php?id=ambien&r="+r+"\"></script>");

URL obscured, but it points to what looks like a front: a page that is nothing but an image with the text “See How The Traffic Is Driven To Your Site” and no links. So this looks like just a route to grabbing referrer dollars from a dodgy advertising site. Note how the target script neatly gets both the spammy page and the URL of the page that was spammed.

So what about counter.php? More redirection! The script imported looks like this (reformatted for readability):

<script>
<!--
    document.write(
        '<script language="JavaScript">
            function f() {
                document.location.href = "http://www.XXXXXXXXX.com/ambien.html";
            } window.onFocus =  f(); </'+'script>');
// -->
</script>

We’ve reached the end of the road. The real URL in this code goes to an “Online Pharmacy” at a domain registered since February this year. The page contains little JavaScript and no exploits: a function for adding to bookmarks, some “menu” code, and an import of “urchin.js” from Google Analytics.

So yeah, everyday, regular spam.

Digital Spectrum

Note: This entry has been restored from old archives.

IEEE’s Spectrum magazine is making a digital distribution available[1]. I’ve been trying it over the last couple of months and have opted to get the digital version from next year. It’s a mutually exclusive offer: you either get bits or you get paper. The digital carrot is very compelling:

  • You get your Spectrum significantly earlier, fresh news is always alluring.
  • You don’t end up with a pile of paper that gathers dust.

So, like I said, I’ve opted for digital distribution. Piles of IEEE emails on the subject have compelled me to do so. There’s a rather large BUT though:

I will no longer read Spectrum.

Why? Well, the actual news content of any printed publication is valueless these days, so news isn’t the reason Spectrum gets read in the first place; I’ll have skimmed anything interesting from the weekly news mailouts I get from IEEE, ACM, and SANS — not to mention news feeds like Slashdot and Google. I read the paper edition of Spectrum because I can read it in the toilet. It’s not pretty, but it’s true. Spectrum has well-written and detailed stories on subjects I wouldn’t normally investigate; it doesn’t matter that the information isn’t breaking news, and I’m using time in which I’d otherwise be staring at the door.

What does the new digital Spectrum do for me?

  • It employs an annoying and cumbersome non-web online reader.
  • It ties me to reading only when I’m in front of a computer.
  • I can’t read it on the toilet, or in bed late at night.

Those are both locations where I tend not to take the laptop, and, really, I’d prefer neither one to be any more digitally enabled. So I’ll only be able to read Spectrum while I’m sitting at my desk, or when laptopping elsewhere. But in those cases I usually have work to do, and in between work times I have the entire Internet before me. Why opt to read Spectrum when I have expert-selected content feeds?

As for the first point: the digital Spectrum interface is crap. The real Spectrum killer for me is the toilet, but usability is pretty important too. Has anyone ever seen one of these non-web web-content systems that doesn’t suck? They’d be better off just sticking to PDF, but then I guess they’d lose whatever DRM the system they’re using provides. I’ve seen a lot of publications go for such non-web online systems during these web-or-die times; most of them have either given up (nobody reads because they made it too difficult) or switched to the sanity of just sticking with HTML. (Example: The West Australian, a newspaper I grew up with but stopped reading when I left WA because their online setup was unusable. Now they use a site that looks like every other news site. While design-dorks may shudder and think “urgh, how ununique”, my opinion is: good, I know how to use this site. I’m after news, not obstructions.)

So, despite all my complaining, I’ve opted for digital. But now I won’t read Spectrum. Logic, anyone?! I’m not at all sad about this; it was my decision. I have other magazines to stock the toilet, and now I won’t have to debate with myself over how long to keep Spectrums, or feel bad about throwing stacks of them in the recycling every six-or-so months (so: periodical karma improved by about one fifth). It is intriguing to reflect on these moments when something leaves your life: why is it so, and what do the stirrings of these surface currents indicate is lurking below? Then get on with life, differently informed.


[1] Using Qmags, which seems to offer quite a selection of publications. Maybe I’m in a minority in thinking the interface is crap, or maybe there just happen to be enough people willing to use it to keep the thing alive. I haven’t investigated their service in detail; the IEEE Spectrum interface might not even be what they use to deliver most of their titles. Some “secure” Acrobat/ebook file would be another option, though I don’t like those much either (still not loo-compatible in my mind, and printouts defeat the purpose).

Us Techies

Note: This entry has been restored from old archives.

Us techies (in my generation: those who grew up in the 80s/90s knowing what an IRC was, how to push the power button on the “computer”, and wrapping weird batch scripts around multi-disc ARJ decompression to install pirated games for our friends, for example) were a fairly unique bunch back in the day. But for those entering Uni now (10 years later) the tech was there from the day they were born, for all of them.

Amongst our own generation we’re still respected (though never necessarily liked) for our ability to tweak the Excel spreadsheet or fix the Internet.

Amongst the youngsters will we be no better than the dude who fixes the shower? Just another annoying job that somebody has to do.

Nothing against plumbers: here in the UK the plumbers make the $$$ and can actually afford things like houses. I still think I’d rather be a landscape gardener.

My thoughts while reading Meet Your Future Employee.

I do worry when they start talking about “HTML programming” though, and “advances like blogging and social networking”. I suspect the reporter may be showing her own generational gap… nobody can hope to keep up with the pace of change.

I can’t help but observe that statements like “[lack of] face-to-face communications skills, a critical asset for a modern IT career” are coming from the old IT career professionals. Predicting their own obsolescence?

[Actually, I think my IT-generation sits somewhere in between that which is the subject of the article and that which is the observer, um, or am I generation Y? I’m certainly not X. The article mainly leaves me just all colours of confused. Gah, bloody compartmentalisation.]

Collateral Damage: An Unintentional Storm Worm DOS

Note: This entry has been restored from old archives.

Anyone else get the feeling that the Storm Worm proves the entire ‘net security industry is useless? We already know that most security is ineffective against targeted attacks, and now Storm makes it clear that the state of security in general is ineffective against widespread attacks. Sure, your AV product will almost certainly protect you from Storm, but it won’t protect you from Storm breaking the ‘net in general. The problem is that having an AV product installed and up to date places you in the minority.

OK, implying that we’re all stuffed is rather over the top … but sometimes I really feel rather perturbed by the whole situation.

Anyway, the latest fun fact I’ve noticed regarding the Storm worm is that some security-sensitive sites have started using blacklists to block HTTP clients. At this moment there are several security sites that give me messages like “ACCESS DENIED” or “File Not Found or your IP is blocked. Sorry.”, yet they work perfectly well if I bounce through a remote proxy. Why? Well, according to some lists, such as cbl.abuseat.org, I have a Storm Worm infection. It happens that my ADSL picked up a new dynamic IP this morning that someone with an infection must have had last week. I understand why the websites are doing this, though I’m sceptical of its effectiveness as a countermeasure. Being the victim of a DDoS is pretty much the worst-case scenario for a popular site; anything that might reduce your vulnerability is going to look good.

What is the solution? Certainly not this sort of blacklisting. We probably need to see a shift in responsibility. The dumb end users can’t be held responsible: would it be a car owner’s fault if his car were stolen and the thief subsequently ran down a child with it? What if the car owner left the engine running while popping into the newsagent to pick up a paper? The child’s death is still not the car owner’s fault, I’d say, even if said owner is somewhat foolish. But we don’t know how to hold the thief responsible in the botnet case. The analogy works for absolving the user, but breaks down when you look at it for assigning blame to the driver. Are the cars computers, IP addresses, or packets? Who’s the driver? What we do know is that 100% of car thieves are homicidal maniacs! Iieee!

Now, given that there are cars speeding around everywhere being driven by child-killers, roadblocks have been set up all over the place to keep the killer-cars out. Each roadblock has a long list of number-plates to check approaching cars against; the problem is that the list is very large and always out of date. Some killers will get through (though you may be saved from the DDoS), and you’ll possibly just end up with a huge line of cars at your roadblock (DDoSing your roadblock!). Also keep in mind that the killers who aren’t on the list know that they aren’t, and are capable of organising themselves to show up at a given location instantly.

How do we reliably know a bad packet from a good one? Who should be responsible? (Infrastructure providers need to foot some of this, I think.) What’s the solution? Buggered if I know 🙂 and if I did I wouldn’t be telling, would I? Let’s hope that some of the large number of smart cookies out there thinking about this come up with something that doesn’t suck! However, I fear that all solutions involve a giant and expensive leap: a new Internet. (Or, at least, a major overhaul of the one we have.) Is that even possible?

Django on Debian Sarge

Note: This entry has been restored from old archives.

The main reason I’m posting this is so that other people can avoid trying to play “chase the dependency trail” in back-porting the Debian etch Django source package to sarge. If you want to do that I suggest working on modifications to the source package rather than following the trail.

[Note: If you want Django on Debian etch then you can simply: apt-get install python-django (not the latest and greatest, though: 0.95.1 at this time; your best bet in all cases is probably a local user install from SVN)]

Here’s how I successfully installed Django as a Debian package, followed by the way I failed to do so.

Creating a Django Debian package (successfully)

After my trail-following led me to a cliff I gave up on the 0.95.1 Debian source and moved on to this hackish method of getting a package. There are plenty of ways to get packages onto a Debian box. Let me scare you with this:

:; su -c 'apt-get install rpm alien'

(I’ll use Sam’s neat prompt to differentiate between commands and other crud.)

Then:

:; wget http://www.djangoproject.com/download/0.96/tarball/
:; tar xzf Django-0.96.tar.gz
:; cd Django-0.96
:; python setup.py bdist_rpm --packager="Yvan Seth" --vendor "django" \
            --release 1 --distribution-name Debian

However, the last step fails with “Illegal char '-' in version: Version: 0.96-None” so I edit build/bdist.linux-i686/rpm/SPECS/Django.spec and remove the -None from version (can’t see a way to do this with the bdist_rpm options):

:; vim build/bdist.linux-i686/rpm/SPECS/Django.spec
:; cp ../Django-0.96.tar.gz build/bdist.linux-i686/rpm/SOURCES/
:; rpmbuild -ba \
        --define "_topdir $HOME/source/Django-0.96/build/bdist.linux-i686/rpm" \
        --clean build/bdist.linux-i686/rpm/SPECS/Django.spec
:; fakeroot alien -k \
        build/bdist.linux-i686/rpm/RPMS/noarch/Django-0.96-1.noarch.rpm

Joy! Now I have a django_0.96-1_all.deb; do I dare install this critter?

:; su -c 'dpkg -i django_0.96-1_all.deb'
Password: 
Selecting previously deselected package django.
(Reading database ... 41757 files and directories currently installed.)
Unpacking django (from django_0.96-1_all.deb) ...
Setting up django (0.96-1) ...

This sucks like a 747 engine (or blows, all a matter of perspective), but the deed is done. Probably better than “su -c 'python setup.py install'” but in the end it would probably have been best to just do a local --prefix=$HOME/blah type of install.

Setting it up

The Django site documents installation at http://www.djangoproject.com/documentation/0.96/install/.

The Django documentation is rather good; once I was through with my packaging débâcle the doco got me up and running in next to no time. My notes here are specific to my system and probably not useful to anyone.

Database

I’m already running both PostgreSQL and MySQL. I chose to use PostgreSQL because the sarge package for python-psycopg fits the specification given by the Django instructions while the python-mysqldb version is a little older than the specified minimum version. I’m also more familiar with postgres. So:

:; su -c 'apt-get install python-psycopg'

You’ll want to set up a database, so get an appropriately privileged psql shell and, for example, do:

CREATE DATABASE django_mysite;
CREATE USER django_mysite WITH PASSWORD 'dumbpassword';
GRANT ALL PRIVILEGES ON DATABASE django_mysite TO django_mysite;
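
For reference, the matching database entries in the project’s settings.py would look something like this (the flat 0.96-era settings format; 'postgresql' is the psycopg1 backend, matching the python-psycopg package installed above):

# Django 0.96-style flat database settings
DATABASE_ENGINE = 'postgresql'     # use 'postgresql_psycopg2' if on psycopg2
DATABASE_NAME = 'django_mysite'
DATABASE_USER = 'django_mysite'
DATABASE_PASSWORD = 'dumbpassword'
DATABASE_HOST = ''                 # empty means localhost
DATABASE_PORT = ''                 # empty means the default port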

mod_python

The mod_python setup is documented here: http://www.djangoproject.com/documentation/0.96/modpython/

I have mod_python installed and it makes sense to use it. I’m using a pretty funky Apache vhost setup though and I’m not going to detail it here. In essence you want to find the appropriate VirtualHost section and insert something like this:

    <Location "/djangotest/">
        SetHandler python-program
        PythonHandler django.core.handlers.modpython
        SetEnv DJANGO_SETTINGS_MODULE mysite.settings
        PythonPath "['/home/vhost/malignity.net/data/django/'] + sys.path"
        PythonDebug On
    </Location>

With that done you should see a basic status page at the configured URL (i.e. http://malignity.net/djangotest/).

All Good!

Now to follow the great tutorial that starts here: http://www.djangoproject.com/documentation/0.96/tutorial01/. I’m mildly impressed already, and web stuff usually pisses me off before I even touch it. We’ll see how it fares once I try to actually do something with it, though.

How Not To Do It

Backporting python-django from Debian etch to sarge

This was supposed to be a command log for a successfully back-ported Django package build. But it didn’t turn out so well.

We start with a Debian sarge server box. I really don’t want to risk dist-upgrading a box that’s in another country, a 2-hour flight away. The box runs with some backports.org packages for the likes of Apache2, SpamAssassin, and ClamAV updates. Consider it a fully up-to-date general server with this sources.list file:

:; cat /etc/apt/sources.list
deb ftp://ftp.de.debian.org/debian sarge main contrib non-free
deb http://security.debian.org/ sarge/updates main
deb http://www.backports.org/debian sarge-backports main

What I want on this box is this cool new Django thing I’ve heard so much about, but rather than download and work with the tarball I’d prefer a Debian package (call it a form of OCD if you like). A fairly up-to-date package is available in the currently stable etch, and we can trust Debian to track it for security fixes; if I track that package manually we ought to be able to have a relatively safe install. So! Time to try and build the etch sources for sarge.

:; su
:; cat << END >> /etc/apt/sources.list
deb-src ftp://ftp.de.debian.org/debian etch main contrib non-free
deb-src http://security.debian.org/ etch/updates main
deb-src http://www.backports.org/debian etch-backports main
END
:; apt-get update
:; exit

Create and install python-django?

:; mkdir -p pkgbuild
:; cd pkgbuild
:; apt-get source python-django
:; pushd python-django-0.95.1
:; fakeroot dpkg-buildpackage
...
dpkg-checkbuilddeps: Unmet build dependencies: debhelper (>= 5.0.37.2)
    python-dev cdbs (>= 0.4.42) python-setuptools (>= 0.6b3-1)
    python-support (>= 0.3)
...

In a perfect world this would build your package! But things are rarely perfect. dpkg-buildpackage reports any missing build dependencies; I had to install the parts that I could:

:; su -c 'apt-get install debhelper python-dev cdbs python-setuptools'
...
Setting up cdbs (0.4.28-1) ...
Setting up python2.3-dev (2.3.5-3sarge2) ...
Setting up python-dev (2.3.5-2) ...
Setting up python2.3-setuptools (0.0.1.041214-1) ...
...

Notice the versions aren’t really what I’m after; I’ll be punished for that later.

Create and install python-support?

Alas, it isn’t as simple as that: another build dependency was python-support, which wasn’t available for sarge, so…

:; popd
:; apt-get source python-support
:; pushd python-support-0.5.6
:; fakeroot dpkg-buildpackage
:; su -c 'dpkg -i ../python-support_0.5.6_all.deb'
...
 python-support conflicts with debhelper (<< 5.0.38)
...

Nope, no luck. Looks like the installed version of debhelper really needs an upgrade… I’d prefer to trust the package maintainers and not try to force an older debhelper version on things.

Create and install debhelper?

:; popd
:; apt-get source debhelper
:; pushd debhelper-5.0.42
:; fakeroot dpkg-buildpackage
...
dpkg-checkbuilddeps: Unmet build dependencies: po4a (>= 0.24)
...
:; su -c 'apt-get install po4a'
...
Setting up po4a (0.20-2) ...
...
:; popd

Create and install po4a?

It is getting a bit over the top now… another route would be advisable at this point. While there’s got to be a better way, I’m possessed of a zombie-like persistence, so I keep going…

:; apt-get source po4a
:; pushd po4a-0.29
:; fakeroot dpkg-buildpackage
:; # Joy! It builds!!
:; popd
:; su -c 'dpkg -i po4a_0.29-1_all.deb'
:; # Joyjoy! It installs!!!

Pass 2 of: create and install debhelper?

:; pushd debhelper-5.0.42
:; fakeroot dpkg-buildpackage
:; # Joy! It builds!!
:; popd
:; su -c 'dpkg -i debhelper_5.0.42_all.deb'
 debhelper depends on dpkg-dev (>= 1.13.13); however
  Version of dpkg-dev on system is 1.10.28.

Sigh. Is it really worth following this path any further?

Create and install dpkg-dev >= 1.13.13?

No, I really don’t want to do this. Time to stop! Additionally:

:; popd
:; apt-get source dpkg-dev
:; pushd dpkg-1.13.25
:; fakeroot dpkg-buildpackage
...
dpkg-checkbuilddeps: Unmet build dependencies: debhelper (>= 4.1.81)
  pkg-config libncurses5-dev | libncurses-dev zlib1g-dev (>= 1:1.1.3-19.1)
  libbz2-dev libselinux1-dev (>= 1.28-4)

A circular dependency between my debhelper and dpkg requirements, plus version requirements for libselinux1-dev and zlib1g-dev that are too new for sarge and would need their own builds. No, it’s gone too far.

Another route would be to try modifying the package build information to get a Django package that doesn’t have any of these requirements. But that also tends to be a slippery slope.

Yeah… just don’t do this. OK?

Fatsos Wiggle Their Toes

Note: This entry has been restored from old archives.

Now I say this as a certified “fatso”: pedometers/Wiis/foot-tapping aren’t going to make fat bastards experience “big health benefits” (my qualifications: a BMI of 27.6 … still a fat bastard, but that’s down from the 37.6 I was at 3 years ago — BMIs don’t mean a huge deal, but the mirror doesn’t lie). Maybe there’s one fatso in a thousand who gets a case of OCD over their wiggle-my-stick Wii game and loses a few pounds, but seriously? “The allure of computer gaming and competition with other users encourages players to make small lifestyle changes that can add up to big health benefits.”

But these are scientists saying this is so, so who am I to pronounce judgement or even have an opinion? It’s just rare that I see something come through ACM TechNews that makes me think “what a load of tripe”.

AFAIC fat bastards have two things to do:

  1. eat less,
  2. lift some heavy shit (seriously, muscle burns calories).

Wiggling your PDA and other “small lifestyle changes” isn’t going to do it for you. And for the truly obese, “running” is just going to stuff your knees. Want real incentive? Take away access to public health services of any kind; it should be treated the same as smoking or any other personal risk-increasing habit. You choose to increase your risk, you cover the consequences. Of course the US is way ahead of everybody on that front, but they’re still all fat. I guess that’s why I’m not qualified to have an opinion 🙂 d’oh!

Of course, this is actually from a press release, so isn’t worth paying attention to.

Bah, this isn’t even going in tech.

[[[Update: Fellow fatsos should read this.]]]