
Linux Freeziness

Note: This entry has been restored from old archives.

I’ve been using Linux for quite some time now. It’s been more than 10 years since my first hesitant install of Debian (from the front cover of a tech mag) in early 1998, and a little longer since I first poked at my cousin’s Linux computers. In the earlier days of my Linux use there was rarely a time I had to power-cycle a locked-up machine when running a ‘stable’ distro. Sure, sometimes a bleeding-edge X, or some such, would lock up – but I was nearly always able to switch to a console or get a serial terminal and kick it in the arse.

Things seem to be a bit different now. More and more often I’ll have to hold down that power button. Sometimes I can magic-sysrq a reboot, but often not. The culprit is usually a web browser, but sometimes another X application – it does seem to be an X thing. I use an RC Firefox with a beta Flash, so I expect breakage. What I don’t expect is for breakage in my userspace apps to lock up my machine irretrievably. Especially since I’m using a stable distro: aside from Flash and Firefox my machine is 100% Ubuntu ‘gutsy’ (no proper ‘net yet so I haven’t done my dist-upgrade – the 13th I’m told now, Friday the 13th! Joy.)

Linux never used to do this to me. Has robustness suffered in the quest for a snappier user experience? Am I just unlucky? It could be my non-free video driver (nvidia), I guess. I haven’t had the time to try debugging the problem; I really should, since it can be replicated.

Maybe I’m just seeing the past through rose-coloured glasses – could it have been worse than I recall? I’ve had to reboot twice this morning, so for now I’ll stop trying to play with the Firefox RC and get back to browsing with Opera.

That aside, it looks like Firefox 3 could win me back from being an Opera user. If my bloody Linux stops hanging when I use it!

O2 Donglenet

Note: This entry has been restored from old archives.

My donglenet is cool because I can post this from my laptop while travelling home on the train, passing between a couple of fields.

OK, so it cuts out from time to time, well… a lot, and can’t handle tunnels at all.

Yeah, it isn’t really that good.

But it’s still cool. 🙂

Of course, funny to think that I had access to this very same coolness nearly 5 years ago when I got my first 3G handset back in Sydney. Old tech in new packaging (though definitely higher bandwidth.)

Debian SSH, what are the chances?

Note: This entry has been restored from old archives.

First, these are more informed and to the point:

Some people think that having a 1-in-131k chance of getting the right key for a susceptible account isn’t enough of a risk to cause much concern. I can’t agree: I think any appreciable risk is a concern, and this is certainly an appreciable risk. Given that the default is 2048 bit RSA keys, and that the generating PID is reportedly likely to be low, the attacker’s odds are probably even better than 1-in-32k anyway.

There’s an easy solution for SSH authorized_keys: check the keys against the blacklists! The metasploit page has blacklists available for the usual key sizes, as well as a few less common ones.

What’s the risk though? The main known factor is that there is only a small number of possible keys. If we know our target is x86 (highly likely) and only go for RSA 1024, 2048, and 4096 bit keys plus DSA keys, that’s only about 131k keys (as large as that number may seem, I assure you it really is rather small in this context.) For an attacker to successfully compromise your system he’ll have to “get lucky” with a key and username combination. There are a couple of general classifications for the attack scenarios, I guess:

  1. Global brute-force. (Low chance of successful attack on a specific system?)
  2. Targeted attack with inside knowledge. (High chance of successful attack on a specific system?)

The first scenario is best executed with a botnet. If I were an evil botnet herder I’d probably consider devoting some of my resources to this. I’d probably select only 2048 and 4096 bit keys to attack, as in my experience these are the most recommended key sizes (in fact, it is probably a good bet to try just 2048 bit RSA keys, as this is the default for ssh-keygen and I expect most people stick to the defaults.) I’m not sure about the data-point regarding PIDs on the metasploit site, but I can imagine it to be true. (I imagine many user keys are generated almost as a first step after installing and booting a new system, thus a low PID.) Believing that data-point, I’d assume that restricting attempts to keys seeded by PIDs under 10k probably increases the rate of successful compromises.

The other variable from the point of view of a single system is user-names. I think trying for ‘root’ is a given, and I’d even consider it to be the only user-name worth trying – not going for the lowest hanging fruit, going for the ripe and juicy windfall apples. (I know admins who prefer to give ‘root’ access via ssh keys, because then you can revoke an individual’s root access without having to change the password and update everyone. There’s sudo for that these days though – stop using ssh keys.)

So, block remote root logins – as you should – and you’d probably be safe from me. But there are also common user-names, and attackers have lists of these. (I get a large number of SSH login attempts daily for user/pass combinations like bob/bob, tom/tom, … etc.) These user-names are a definite risk. Like I said, I see a regular flow of attempts to brute-force SSH usernames and passwords; given that this activity is common on the ‘net I reckon it’s a given that the same is already happening for the bad SSH keys.

How about the likelihood of your server actually being attacked? From the point of view of the attacker this is a third variable: server address. It is a big Internet. Assuming botnets are randomly testing all valid ‘net IPs, individual machines are probably, statistically, fairly safe. Though if I were configuring a botnet I’d pare the IP range down to blocks owned by specific co-location providers – places like EV1 where large numbers of machines with lots of bandwidth are administered by large numbers of totally clueless gits. If you’re on such a network your risk is likely to be higher (and who doesn’t have a server that’s in a “bad neighbourhood” purely because such neighbourhoods are cheap?)

Overall, I’d rate the chances of becoming the victim of a global brute-force as fairly low. Still, the bad guys are going to successfully compromise some machines – it could be your unlucky day! This bug still increases your overall chances of compromise, and you decrease them by fixing it. At the very least ensure that remote root isn’t permitted (and fix the situation if it is), then check all user keys, even without locking down SSH.

The second scenario could leave you much more vulnerable.

If your company is likely to be targeted then it is likely that the attackers can get hold of all kinds of information. If they know the names of your employees (easy to find out) they can probably work out a list of likely user-names. If any one of your users logs in with key-based auth and has a susceptible key you’re probably screwed (especially if they chose a 2048 or 4096 bit RSA key, as most people probably do.)

If an employee has lost a laptop with a susceptible SSH key you have a new worry (amongst the many you probably have anyway): it no longer matters how good their passphrase is. The new owner of the laptop’s data likely knows the employee’s login (from .bash_history, for example) and the size and type of their key.

What if some employee logs in from a shared machine where somebody else has root? OK, this is already a risk. You should simply never do this, and employees who use SSH should know better! Anyway, previously the untrusted root would have to set something up to snaffle the key when the user typed in their passphrase – now they can narrow it down to one of 32k keys without any intervention.

What if your employee logs into another server run by someone else using an SSH key that they also use on your server? The owners of the other system now have that employee’s private key; all they have to guess is the login. Vice versa, you now have the private key for systems that the employee may log into with that same key. It probably only takes a little digging to find out what companies/institutions many of the users of your system use. A user could be security conscious and may not trust you, but that’s OK – the only data they’ve given you is their SSH public key, right? Except now you have their private key! It is likely that they use the same key for other systems: have a guess at their username and try logging into the IPs they’ve logged in from.

There’s more…

Anyway, this is all speculation. I’m just tossing around some of the obvious risk scenarios in my head; I don’t have the data to put any numbers on them so I can’t prove anything at all. The main point is that I think there are enough risk cases that taking this potential hole in your system lightly is a bad mistake. You should have fixed it already.

In the end I think this is a case where it is better to be paranoid.

And I’m only talking about SSH user keys here – server certificates are a whole other nightmare. Given that you publish your SSL public key as part of the transaction, I assume it would be trivial for someone to derive your private key from this data – Mavis is going to be one very happy girl. IIRC signed keys are typically only 1024 or 2048 bit (last time I worked with CA-signed server keys, years ago, our CA would only sign 1024 bit SSL keys.)

Debian SSH joy

Note: This entry has been restored from old archives.

Everyone is writing about the Debian & derivatives SSH issue.

[Update: For the sake of accuracy I did an s/32k/196k/ in a couple of places.]

I don’t think it can be written about enough, the more exposure the better. When I first saw the headlines I thought, “oh, probably just another one of those things where a heat-death-of-the-universe problem has become a 1-million-years problem” – ho ho! How wrong was I? It really is very serious. If you generated an SSH key on a Debian/etc machine while the bug was in place your private key is one of only 32k possible keys (for each key type at each key size, i.e. 32k possible 2048 bit RSA keys, etc. So for 1024, 2048, and 4096 bit DSA & RSA keys that’s 196608 possible keys [Update: oops, DSA keys can only be 1024 bits, so the real total is more like 131k!]) This means that if someone knows a machine you log into and your username it’ll take them no more than 131k attempts to log in to the system as you (and probably far fewer, since the Metasploit page linked to above claims most keys are generated by processes with low PIDs.) That’s a tiny number of attempts in the brute-forcing world.
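
To spell out the arithmetic behind those numbers – my own back-of-the-envelope, assuming one key per possible PID (the stock pid_max being 32768) and an x86-only target:

32768 PIDs × 4 key classes (RSA 1024/2048/4096 + DSA 1024) = 131072 ≈ 131k keys
32768 PIDs × 1 key class (just the ssh-keygen default, RSA 2048) = 32768 ≈ 32k keys
10000 PIDs × 1 key class (if most keys really do come from low PIDs) = 10000 keys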

Some of us have brute-force blocks on our gateways and servers; this is great up to a point. For example, a typical set-up is to start blocking all traffic from an IP if it hits port 22 more than ten times in 30 seconds. I do this on my server.

This only goes so far though. First, if you aren’t in a hurry, configure your brute-force script to try every 10 seconds; I doubt most firewall setups go as far as to notice that sort of thing.

Second, this is a bonanza for botnet owners! If you have a modestly sized botnet (say, 32k nodes) you just give each bot a set of keys, a list of common logins, and the Internet. Actually, you’ll probably do pretty well just testing the entire Internet with ‘root@’. (AFAIC you should never permit remote root login anyway – but I suspect many servers do, and they make it “secure” by permitting only key-based auth, heh heh.)

What I would be doing right this moment if I were an admin:

  1. Block SSH at the gateway! (Ouch!) And all other SSL-protected services.
  2. Check your server keys, replace if needed. This could take a long time where re-signing of public keys is required.
  3. Move all users’ .ssh directories to something like .ssh_suspect.
  4. Inform users, probably by phone. (They’ll probably call you when they lose SSH.)
  5. Start scanning all .ssh_suspect directories for blacklisted keys, remove them, inform the users, and reinstate SSH with good keys restored.
  6. Continue mopping up the mess – probably mostly a case of chasing up server certificate re-signing and informing/handling users.

I’m not an admin and not an SSH expert, so that scheme is vague and probably needs further tightening. It surely must be better than doing nothing though. You probably need to audit all your logs too, especially auth and firewall. Essentially the security of all your systems is suspect until you are certain that all logins prior to the lock-down were kosher (probably requiring a lot of back-n-forthing with users.)

I haven’t even sat down and contemplated the full extent of this yet.

The fact that we could even have got into this situation is insane. Peer review? How could someone clueless enough to mess with the PRNG in security code be permitted to make such a change?! It beggars belief.

Of course, anyone even vaguely interested in security has probably already reached the cynical point of believing that there isn’t any security. Awareness is key.

Novatel Ovation MC930D and Linux (Ubuntu)

Note: This entry has been restored from old archives.

[Quick answer: try eject /dev/sr1 (that’s probably what it’ll be if you have a CD-ROM; for me it was /dev/sr0 – to confirm, insert the dongle and check the last few lines of dmesg) as step zero for the Novatel Linux instructions.]

Gah, I got sick of having to use WinXP to get my mobile broadband. Last week I signed up with O2 and got a Novatel Ovation MC930D as part of my contract. Initially I had fairly low expectations of this being easy to get working in Linux. Then I found a page on the Novatel site explaining how to set up the device in Linux. w00t! Oh, ah, not so fast…

I got to step 15 and didn’t get anything back from the modem query. To get this far I had chosen the USB product id of 0x5010, since that is what I saw when I plugged in the dongle. The page actually says I should use 0x4400 for my device, but I figured that was some sort of mistake since all I saw was 0x5010! There was more to it than that as well: I also had to remove the usb-storage driver first, because it picked up the dongle as a storage device and created /dev/sr0 for it. No great surprise, it does have 64MB of flash available.

In the end further web searching found that the dongle is a “switch mode” USB device, i.e. if you poke it in the right ways it turns into different devices, changing its skin like a chameleon. This is a pretty slick set-up for Windows installs: it simply looks like a memory stick. The trick is that it has an autorun.inf and when inserted takes you through the Novatel/O2 driver/software installation. Once the driver is installed the device is switched, and is automatically switched by the driver on future insertions.

There’s a tool for switching various USB devices, including my Novatel MC930D. It involves compiling and crap though; I do enough compiling as it is, ick.

Lucky me! There’s a note that mentions that the Novatel actually switches on a storage/SCSI ‘eject’ command. How about we try eject /dev/sr0? Gotya!

So, in the end I can recommend the official Novatel Linux instructions linked to above. However, first insert this new “step 0”.

0. Execute: sudo eject /dev/sr0

When you do this the 1410:5010 USB device will vanish and in its place a 1410:4400 device will appear. From this point onwards the official Novatel instructions can be followed.

Note that I’m using an Ubuntu ‘gutsy’ system here, so YMMV.

If you’re wondering about other “fill in the blanks” for the Novatel setup page then here’s an answer-sheet for using the Novatel MC930D (maybe other devices too) with O2 (UK mobile provider):

  • Phone Number: *99***1#
  • Initialization String 2: AT+CGDCONT=1,"IP","mobile.o2.co.uk"
  • Username: o2web
  • Password: password

What’s really insane is that the connection seems to be far more stable under Linux. On Windows it gives about 15 minutes of connectivity punctuated with 5 minutes of “not reachable.” I just got more than 3 hours out of the last Linux connection.

Underground, overground, dongling free,
The dongles of Dingledon Common are we

Now I can dongle in the middle of Wimbledon Common at 7.6Mbps with my “free” OS. Wombling free al’right.

Don’t store references to boost::shared_ptr

Note: This entry has been restored from old archives.

I should have known better – in fact I’m pretty sure I did know better – but it’s so easy to fall into the trap of using reference members wherever possible. Actually, in other code I’ve not done this, so I’ll call it a “typo”, yeah. 🙂

No time to go into details with examples and all that; best to keep things simple anyway.

Avoid storing auto/smart/shared pointer references in the name of avoiding premature optimisation. You always have to think: will the pointer be destroyed before the thing holding the reference is destroyed? If it takes more than 10 seconds to work that out then steer clear of the reference path! Hoping it’ll all be OK is asking for trouble. Especially if you don’t own the calling code!

A shared pointer instance stores little state, so just copy the damn thing. This way the target instance will be destroyed when nothing needs it anymore, which is the whole point. Better a few copies than having things vanish earlier than you expected because you tried to game the system.
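
To make the trap concrete, here’s a minimal sketch of the two approaches – using std::shared_ptr in today’s spelling, though the same applies to boost::shared_ptr; Widget and the two holder classes are invented for illustration:

#include <iostream>
#include <memory>

struct Widget { int value = 42; };

// Bad: stores a reference to a shared_ptr it doesn't own.
struct BadHolder {
    const std::shared_ptr<Widget>& ptr;  // dangles if the original dies first
    explicit BadHolder(const std::shared_ptr<Widget>& p) : ptr(p) {}
};

// Good: copies the shared_ptr, so the Widget lives at least as long as the holder.
struct GoodHolder {
    std::shared_ptr<Widget> ptr;  // refcount bumped, lifetime guaranteed
    explicit GoodHolder(std::shared_ptr<Widget> p) : ptr(std::move(p)) {}
};

int main() {
    GoodHolder good(std::make_shared<Widget>());
    std::cout << good.ptr->value << "\n";       // fine: we co-own the Widget

    BadHolder bad(std::make_shared<Widget>());  // the temporary dies immediately...
    // std::cout << bad.ptr->value << "\n";     // ...so this would be undefined behaviour
    return 0;
}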

My new personal rule is: always copy shared/auto/smart pointers.

If the copy is a problem it’ll show up in profiling later and that’s when you work out how to fix it.

(Alas there isn’t an easy way to avoid circular references.)
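
(The usual partial escape for the circular case is a weak pointer – boost::weak_ptr then, std::weak_ptr now – though it’s only “easy” once you’ve worked out which direction of the cycle actually owns. A sketch, with Node invented for illustration:)

#include <iostream>
#include <memory>

struct Node {
    std::shared_ptr<Node> next;  // owning link
    std::weak_ptr<Node> prev;    // non-owning back-link: breaks the cycle
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;  // a owns b
    b->prev = a;  // the back-link doesn't bump a's refcount
    std::cout << a.use_count() << "\n";  // 1 - both nodes still free cleanly
    return 0;
}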

Bjarnterview

Note: This entry has been restored from old archives.

If you have an interest in C++ (and CS/SE education) it is well worth your time reading this interview with Bjarne Stroustrup.

Choice quote:

“Learn to use the language features to solve simple programs at first. That might sound trivial, but I receive many questions from people who have “studied C++” for a couple of weeks and are seriously confused by examples of multiple inheritance using names like B1, B2, D, D2, f, and mf. They are—usually without knowing it—trying to become language lawyers rather than programmers.”

The linked JSF++ and Performance TR are likely to be worth reading too, but with both being well over 100 pages I’ve only had time for a brief skim.

Computer God

Note: This entry has been restored from old archives.

I’ve had a string of thoughts that require deeper consideration. The thoughts start with:

There is a god, her name is Hope.

And eventually reach:

If computer intelligence doesn’t get religion then the Singularity cannot occur.

There’s a heck of a lot between those two points, I’m not sure I’ll ever be able to translate it from strings of thoughts to strings of words.

What do wine and tech have in common?

Note: This entry has been restored from old archives.

Answer: an epiphytic marketing industry.

More specifically: a plethora of meaningless awards and certifications and the companies and organisations responsible for them.

Go to the wine section in your supermarket and you’ll see dozens of bottles with little silver, gold, or bronze “medals.” Read the labels and find out what awards they’ve won. Often it’ll be “best” of some ridiculous niche category like “best merlot-shiraz blend from the west side of Dead Man’s Hill.” Seriously, many of them are about that precise, covering all of 3 or 4 wines. That’s if they explain anything about the award at all; in other cases it may just have a year and the name of some unheard-of wine show, or grand-sounding “challenge.”

In my years of drinking wine I’ve come to the conclusion that there is very little relationship between awards and my enjoyment of the wine. However, there is some relationship between the awards and the price of the wine. As far as I see it you’re better off going for cheaper wines with no awards. It’s hard to remember this sometimes: we grow up instilled with such a strong sense that everything must be ranked that this “medal” technique hammers right into our subconscious, bypassing rational thought.

The technology sphere has a similar system, whereby a plethora of publications, organisations, and dodgy websites give out awards like they’re going out of fashion (I wish.) Many of the awards will have dozens of categories that really only contain 3 or 4 competing products – sometimes even fewer, since some of these schemes require you to pay up to be considered (and I expect many wine awards are the same.) These rankings are usually of little technical merit, often judged by non-experts based on marketing material rather than any practical results. Techies may dismiss them yet, disturbingly, they can be an important part of selling a product. The inexpert are easily swayed by these seeming ticks-of-approval; as with wine it can be exactly this sort of meaningless ranking that gets you short-listed in the mind of your customers (most products are not sold to experts.)

Certifications are similar. Take IGT/DOC/DOCG in Italian wine for example: a set of rules that define how you must make your wine if you want to market it in certain ways. It seems as if you’re claiming some guarantee that your wine attains a minimum standard of quality. In reality the only real meaning is that the wine complies with a set of rules that grant it the acronym and, like the awards, enjoyability bears little relationship to the certification. The fact is that mere Vino Da Tavola wines (“table wine,” the term for uncertified wine) seem to be as good a bet if you just want to enjoy a glass. Further, most people don’t know what all the classifications mean anyway! Quality is rarely certified; typically the most you can read into it is location, grape blend, and process – in the hope that prescribed bounds increase the chances you’ll enjoy your wine.

In technology, systems exist that are much the same. (Also for people, but that’s another issue.) Some are tick-box standards or rule systems, like the currently hyped SOx and PCI. The problem with these is that compliance really only means that you’ve “gone through the motions”; there’s no guarantee that you actually care, or are pro-active, about the problems the systems supposedly address. Another form of certification is “product X achieved Y with system Z” – any tech person I know looks at these things and shrugs, sometimes muttering an expletive. It’s a funny old situation where successful companies are created around products giving certifications that the entire base of technical experts in the relevant field dismiss as not being a whole lot of use! But the fact is that, while they’re rarely a useful assessment of the practical effectiveness of a product, they’re often an important cog in the marketing machine. People want benchmarks, people want ratings – how else can they judge quality? The underlying problem is that there are few experts but many buyers.

Where the whole thing goes most horribly wrong, in the tech industry at least, is when this system starts feeding back into itself. Regulations are created that spawn whole technologies, such as SOx and PCI. We get technology specified by the regulations rather than focused on solving the problems the regulations were invented to address. De-facto-standard rankings are created that measure a quantity that isn’t actually a useful part of a technology’s function. We expend vast engineering effort tuning technologies to do well in the rankings rather than addressing real problems. Successful companies are born from the precise specification and measurement of the wrong things!

It is a little reassuring that no number of unreliable or misleading guarantees makes a crap product a good one, and that’ll be the stumbling block for many solutions riding a wave of medals. Alas, it is much harder for the inexpert to know the technology they’ve bought is crap than it is for the inexpert to know the wine they’re drinking tastes like copper coins. (This doesn’t matter so much for wine: there are many buyers and you can get by just fine if you never sell a second bottle to the same person. With technology you typically have much more restricted markets and rely on people coming back for more.)

I started typing this up with the intention of jotting down just a couple of paragraphs comparing wine medals to technology awards. Now it’s >1000 words later and I’ve left many loose ends flapping around in my mind. Someone who knew enough about the industry, marketing, and human psychology could probably write a decent paper (or book) on this. “Technology Defined by Marketing” or, more scathingly, “Selling Widgets to Idiots.” I expect it’d probably be boring and wouldn’t change anything anyway.

[[[FYI: The root thought occurred to me as I sipped an insipid and tinny French red that had a silver medal. “Silver Medal International Wine Challenge” it says; it’s also from a ‘Cru’ status village, and is AOC (of course.) We have a lot of seeming guarantees that this is going to be a damn fine drink. Wrong!]]]
[[[P.S. I don’t think all awards and certifications are meaningless. I also don’t think they’re all (or even mostly) the product of unethical exploitation of the expertise vacuum. To narrow down the more suspect ones look out for those that involve the exchange of money, especially where that money goes to a for-profit entity.]]]
[[[P.P.S. Yes, I have read that the introduction of wine standards certifications in Italy, France, and Spain improved the average quality of wine. Alas I am not so old as to have known wine back before the systems were introduced. As a consumer in the modern wine market (and we’re talking 3 or 4 different wines a week) my observation thus far is that at a given price level certifications (or awards) don’t mean much when comparing my enjoyment of wine. Further, I’ve drunk several non-certified wines in both France and Italy that are far above the average enjoyability. I’ve even had a French wine producer rant at me about how much he hates AOC, since his best wine is always a single-varietal and thus cannot qualify. (If you’re only familiar with “new world” wines (e.g. Australia) you probably don’t have a clue what I’m on about, since the “new world” mostly sticks to naming wines based on the grapes that made them and single-varietals are common. Anyway, there’s a lot more to these “old world” systems than I have time to go into; there are some good justifications for AOC/etc.)]]]

Further notes on C++ lambda (N2550)

Note: This entry has been restored from old archives.

Since my earlier post (content here should mostly make sense independently of that post) I’ve taken some time to explore the C++ lambda proposal further, specifically document N2550 [pdf], which fully defines the proposal.

The first note is that the “In a nutshell” example used to illustrate the feature is not reassuring. First they introduce the functor:

class between {
    double low, high;
public:
    between(double l, double u) : low(l), high(u) { }
    bool operator()(const employee& e) {
        return e.salary() >= low && e.salary() < high;
    }
};
....
double min_salary;
....
std::find_if(employees.begin(), employees.end(),
             between(min_salary, 1.1 * min_salary));

Then they provide the lambda equivalent:

double min_salary = ....
double u_limit = 1.1 * min_salary;
std::find_if(employees.begin(), employees.end(),
             [&](const employee& e) { return e.salary() >= min_salary && e.salary() < u_limit; });

This seems to be the same example that the Beautiful Code blog picked up on.

I have to admit however, this is growing on me just a little. Maybe like mould growing on damp bread.

To dissect:

The [&] introduces the lambda expression; in this case the inclusion of a lone & indicates that the closure created will have reference access to all names in the creating (calling) scope. This is why the code can use min_salary – if the lambda was introduced with only [] the code would be erroneous. You can, and possibly should in most cases, specify exactly what the closure can access, and in this case that would be done using [&min_salary]. The normal meaning of & is retained here: the variables the closure accesses are references to those in the enclosing scope. For better safety in this case it may be better to use [min_salary], which will ensure the closure cannot modify the original value (since it is passed to the closure by value.)

I see a pitfall here: the non-& capture means the closure can access values by the enclosing name and modify them, but the modifications will not propagate back to the enclosing scope. It wouldn’t surprise me if, for clarity and readability, the & form (pass by reference) becomes standard and the default form (pass by value) becomes “not done.” I feel it may be preferable for the “pass by value” to actually be “pass by const reference.”
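
A minimal sketch of the two capture flavours, in the syntax as it eventually shipped in C++11 (which is essentially N2550’s; the example is mine, not from the document):

#include <iostream>

int main() {
    double min_salary = 1000.0;

    auto by_ref = [&min_salary] { min_salary += 1.0; };  // by reference: sees and mutates the original
    auto by_val = [min_salary] { return min_salary; };   // by value: a snapshot, const inside the lambda

    by_ref();
    std::cout << min_salary << "\n";  // 1001: the by-reference closure changed it
    std::cout << by_val() << "\n";    // 1000: the copy was taken at creation time
    return 0;
}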

The (const employee& e) gives the “parameter declaration” of the lambda. Essentially we’re declaring a function, or functor, that takes a single const reference to an employee as an argument. Note that this mirrors the operator() of the between functor. There’s more to the declaration than is given here: it can also sport an optional exception-specification (um?) and a return type. Let’s skip the exception specification, since I don’t feel up to the task of trying to explain that one. (I suspect it is there because the result of a lambda is, under the hood, a functor, and it isn’t insensible to expose this part of the definition to the user rather than enforcing no specification.) The return type is likely to be more regularly interesting; in this case it is not given but is actually -> bool – why? Because of paragraph 6 of 5.1.1, which essentially says that the return type is defined by the type of the argument to the return statement in the lambda.

The lambda expression itself, or code of the closure if you prefer, is between the { and }. That part is clear enough I expect.

The return type is defined by the top-level expression of the return statement. Here that is operator&&, so clearly enough the return type is bool.
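
For illustration, the deduced and explicit forms side by side (trivial lambdas of my own, not taken from the document):

#include <iostream>

int main() {
    auto deduced = [](double s) { return s >= 1000.0 && s < 1100.0; };  // return type deduced as bool
    auto spelled = [](double s) -> bool { return s >= 1000.0; };        // return type given explicitly

    std::cout << deduced(1050.0) << " " << spelled(950.0) << "\n";  // prints: 1 0
    return 0;
}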

Read the document [pdf]; it’s worth the time – it’s a short document.

So?

Um, still not sure, but seeing all that is specified by the document does open up the possibilities much further. I said earlier that I think this lambda syntax has the potential to make code clearer where the function-object (or binder) code it replaces achieves a simply expressed end. On further consideration I think this is especially the case where the expression is not easily described with a reasonable functor name (i.e. the functor approach for “greater_than” is fine and possibly better than the lambda equivalent.) The question, for maintainability, becomes: is:

    between(min_salary, 1.1 * min_salary)

better than:

    [&](const employee& e) { return e.salary() >= min_salary && e.salary() < u_limit; }

Personally I still don’t see enough of a win in the latter syntax to justify the extension. Saving the code that defines the functor doesn’t look like a big win, and the latter code, while more explicit at place-of-use, doesn’t seem so stupendously clearer as to justify it either.

What if…

What if the extension allowed for creation of closures more in keeping with their much-hyped use in other languages? Would I, monkey boy, be convinced then? Quite possibly. Scroll down to Chapter 20 in the document and peruse “Class template reference_closure”.

Looking interesting. As I see it the reference_closure definition will permit the creation of instances-of-lambdas (closures.) These can be returned from a class as “closures” giving very specific, restricted write access to the internals of the class. Vague shadows of how this could make the design of code I’ve worked on in the past more straightforward are beginning to form. That’s far from an indication of usefulness however; the shadows aren’t any better than “deformed rabbit” right now. I’m still a bit confused as to the lifetime of variables local to the scope of the enclosing function, as opposed to member variables, in relation to an exported “closure” – especially since the closure template defined seems to be restricted to reference-only closures.
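
To make those shadows a little less deformed, here’s a sketch of the kind of thing I mean: a class handing out a closure with narrow write access to one of its internals. (reference_closure as proposed never shipped as such; std::function, used here, is the nearest standard analogue, and Counter is invented for illustration.)

#include <functional>
#include <iostream>

class Counter {
    int count_ = 0;
public:
    // Hand the caller a closure that can bump the counter and nothing else.
    std::function<void()> incrementer() {
        return [this] { ++count_; };  // captures this by reference, so the
                                      // Counter must outlive the closure
    }
    int count() const { return count_; }
};

int main() {
    Counter c;
    auto bump = c.incrementer();
    bump();
    bump();
    std::cout << c.count() << "\n";  // 2
    return 0;
}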

I don’t have time to explore deeper at the moment, but it’s certainly food for thought and brings this C++ lambda/closure extension more in line with the hype around it in other languages. (Kat’s work has recently jumped on the closure bandwagon (best thing since Agile?) and it seems that their picture of closures in Perl very much mirrors what would be achieved by this reference_closure. At the same time… it is also achievable with everyday functors, but maybe the new specification makes it all easier and more accessible? Perhaps that is the question.)

The point of all this text is: RTFM [pdf]. Herb Sutter’s post is a quick-n-dirty illustration of simple things that can be achieved using the specification — he refers to the specification itself for your further investigation. I’m envious that he had an implementation that will compile his lambda examples, presumably VC++.

[[I use “closure” and “lambda” almost interchangeably here, however I’m far from confident in my use of this language. Reading up on this online does not help much as there’s a gap between the CS definitions of the terms and many commonly blogged use-cases. In practice it seems a lambda is an in-place definition of function code, what we often refer to as an “anonymous (unnamed) function”, that can be passed down to called functions (and is most often seen defined in the place of a parameter, interchangeable with a function reference.) Practice again seems to define a closure as inline function code that can access and modify values in the scope where it was defined but can be returned to and invoked by calling code; this matches fairly well with the Wikipedia definition of the term. YMMV.]]