
Printing TPU End Caps on a Flashforge Adventurer 3 Pro

Filament: Overture “High Speed” TPU

Yesterday I observed Kat draping a random glove liner over her webcam after use. Making a cap for it popped into mind as the obvious thing to do, and with the 3D printer gathering dust, inspiration hit. I fired up FreeCAD, knocked up about the most basic of basic 3D designs, and popped some TPU into the dehydrator. Today I lobbed the design over to the desktop and loaded it into the slicer…

A few months ago I tuned the slicer settings to get some good basic TPU prints out of my printer (a FlashForge Adventurer 3 Pro) – it took a fair bit of fiddling. It’s not a printer that’s really suitable for TPU, especially given it’s a Bowden-tube model rather than direct drive. (The foibles of this printer are a story for another time, but the basic background is that I needed to print fire-retardant ABS, so I needed an enclosed printer; I did my research, decided on this one, and it – eventually – did the job I needed. I would not choose or recommend it as a general-purpose hobby 3D printer.) Of course I’d made no record of the settings, not so much as a saved FlashPrint profile… I admonished myself accordingly.

Thankfully my memory wasn’t too bad, and with only one abortive attempt I got a good enough print. The key thing with printing TPU on this printer is: slowly, slowly wins the race. Previously I spent loads of time struggling with the material clumping up in the extrusion mechanism. This time around I started slow and with low retraction – and using a 0.6mm nozzle helps a lot too.

In terms of the settings in FlashPrint (the slicer that goes with this printer; I’ve never really got around to trying another), I started with the “standard” 0.6mm PLA profile in FlashPrint 5.8.5 and adjusted these settings:

Printer
  Extruder Temperature: 225C
  Bed Temperature: 45C

General
  Layer Height: 0.30mm
  First Layer Height: 0.30mm (the default)
  Base Print Speed: 20mm/s
  Retraction Length: 2.0mm
  Retract Speed: 10mm/s

Infill
  Top Solid Layers: 4
  Bottom Solid Layers: 4

Raft
  Enable Raft: No

Cooling
  Cooling Fan Control: Always Off

Advanced
  First Layer Extrusion Ratio: 100%

Others
  Z Offset: 0.05mm (entirely dependent on your calibration!)

Getting the z-height right is very important of course, perhaps more so with TPU than other filaments in my experience. The reason is that whole extruder-jamming problem. Too close and most filaments just “click” and skip a bit, but TPU bunches up, jams, and it all goes wrong; too far and you’ve got no adhesion! And the margin between the two is narrow. I suggest printing a base pad as a test to get it calibrated right. Every time I change filament, or do my first print after a hiatus, I use the printer’s bed calibration function with the bed and nozzle preheated to match the print settings, setting it up so the nozzle just lightly clamps a bit of 80gsm printer paper. Then I fiddle with the z-offset adjustment in the slicer to make it work; today the ideal seemed to be a +0.05mm z-offset. Tomorrow it could be different!

For the small cap I watched the extrusion mechanism like a hawk – if you catch a jam quickly and pull the filament back a bit you can save the print. As it happened, it printed with zero intervention. So I was more laid back about the larger cap print… and it also printed without intervention. So it might be possible to push that print speed up a little in future, but I probably wouldn’t bother.

On these settings the small (28mm outer diameter) cap was a mere 7-minute print, and the larger (68mm outer diameter) one was a whopping 1hr 40min! Note that the small cap had a 1mm-thick base and a one-shell-thick side, while the larger one had a much beefier 2mm-thick base and a two-shell-thick side.

In general I would recommend keeping TPU printing to a minimum on this specific printer: small, simple objects… though I did print a small squishy mesh cat one time and that worked. The main reason is that the print speed is so slow – if a long print goes wrong, you’ve wasted a lot of time. If you want to print a lot of TPU then getting a direct-drive extruder seems to be the key recommendation. (It’s on my wishlist!)

Animation through each layer of the 3D print of the larger end-cap.

Converting WordPress PNGs to WEBPs

I have a “legacy” business website I wish to keep online for archival purposes. The problem is that in building this site over a period of a decade we were fairly particular about keeping a lot of the images good enough for print quality, as this was a useful service to customers. A decade later, our wp-content/uploads folder weighs in at 30GB.

Sure, that’s not a tonne of data by modern standards … but I just want to host this on a little personal VPS and don’t want to pay for extra storage just for the sake of it.

Obvious modern solution? Convert all the PNG images to WEBP.

Should be easy, right? Look, there’s a load of plugins for this! Oh, wait, no: nearly all of them make an extra copy of the images in WEBP format rather than converting the existing files. I can understand why, but at least give me the option? Mostly they’re complex overlays on top of your existing media. Some even try to monetise it – how about using a third-party service to “optimise” your images?! Madness… I’ve got a perfectly good ImageMagick, thanks. The one plugin that might have done what I wanted wasn’t maintained and didn’t work on my up-to-date WordPress install.

So… down to fundamentals. Manually convert the files with a shell script, and then poke at the database. How hard can it be? It turns out: not super hard. I’m not sure I’d recommend copying this verbatim on a production site (use a staging copy at least) but it seems to have worked for me. My biggest issue was thumbnails, and the fact that WordPress stores the data for these in a manky string-encoded format in the database – but for that task at least there’s a plugin to help.

Step 1: Manually convert all PNGs to WEBPs:
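
Something along these lines does the job – a sketch, assuming ImageMagick’s convert is built with WEBP support, run from inside wp-content/uploads, with the -quality value a guess and a full backup taken first:

find . -type f -iname '*.png' | while IFS= read -r f; do
    # Write a WEBP alongside each PNG; only delete the original on success.
    convert "$f" -quality 90 "${f%.*}.webp" && rm "$f"
done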

That took about 45 minutes to crunch my PNGs, and afterwards my 20* folders (the year-based upload directories) had shrunk from 27GB to 5.5GB! Nice.

But of course my media files in WordPress are all broken now!

Step 2: Go hit the database with a hammer…
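
In outline, the hammering looks something like this – a sketch via the mysql client (the database name, user, and default wp_ table prefix are assumptions, and blunt REPLACEs like these can hit strings you didn’t intend):

mysql -u wpuser -p wordpress <<'SQL'
-- Point attachment posts at the new files and fix their mime type.
UPDATE wp_posts
   SET guid = REPLACE(guid, '.png', '.webp'),
       post_mime_type = 'image/webp'
 WHERE post_type = 'attachment' AND post_mime_type = 'image/png';

-- Fix image URLs embedded in post content.
UPDATE wp_posts
   SET post_content = REPLACE(post_content, '.png', '.webp');

-- _wp_attached_file is a plain string (not serialized), so a straight
-- REPLACE is safe here; the serialized _wp_attachment_metadata is left
-- alone and handled by the plugin in Step 3.
UPDATE wp_postmeta
   SET meta_value = REPLACE(meta_value, '.png', '.webp')
 WHERE meta_key = '_wp_attached_file';
SQL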

What you should find now is that the key media URLs work from the media library, but all the thumbnail/resized versions are broken images. Rather than muck around processing WordPress’s horrid string encoding of the metadata for these files, I found a plugin that can force the regeneration of all thumbnails. It takes a good long while to run, but cost me zero time coding or mucking about and just did its thing.

Step 3: Install plugin to regenerate resized images

This is the plugin: Force Regenerate Thumbnails

Once it is installed, find it in your WordPress admin under:

Tools > Force Regenerate Thumbnails

JPG files?

Yeah, you can do them too… just replace “png” above with “jpg” or “jpeg” where required – noting that typically the file extension is “jpg” but the mime-type is “image/jpeg” (though you could have “jpeg” file extensions present too, I guess). I repeated the above process for JPG files and shrunk the data further, from 5.2GB to 3.2GB.
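
For the conversion step that means something like this (same caveats as before, quality again a guess):

find . -type f \( -iname '*.jpg' -o -iname '*.jpeg' \) | while IFS= read -r f; do
    convert "$f" -quality 85 "${f%.*}.webp" && rm "$f"
done

And in the SQL, replace the ‘.jpg’/‘.jpeg’ extensions in the same strings while matching post_mime_type = 'image/jpeg'.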

DANGER WILL ROBINSON! DANGER!

I hope this post might be helpful to someone, but it comes with a big caveat… there are errors. This isn’t perfect. I’ve found a few cases where files ended up missing, and I had to copy them back in. No biggie for me, this is just an archive. (I do have the original data in backups in the unlikely case I ever want it.)

Definitely use a sandbox, don’t play in production, and test the results!

Kaput

What do you get when you mix together failure to implement a reasonable backup scheme, hard-drive failure, and “oh, I thought it was supposed to be RAID-1”? A right pain in the bloody arse!

Gradually putting things back together – starting with the ale.gd site. I do have copies of all the entries I’ve written, but they’re in a funny old format. I’m not using blosxom any more; I’d made a lot of customisations to the code and it seems I’ve lost half of them! So, trying WordPress – reluctantly. It has the advantage that it “just works”, a bonus as I don’t have much time for personal hacking. Sadly I also seem to have lost the photo album content. Not the actual photos, I have all of them backed up, but I’ve lost the commentary I’d added to the albums.

Ah – think of it like a house fire. It feels a bit like that. In fact, as far as personal data goes, an actual house fire would probably have been less damaging!

Open Tech 2008

Note: This entry has been restored from old archives.

A couple of weekends back Kat and I went to the Open Tech 2008 one-day conference in London. I had planned to write about some things I came across there in some depth; alas, time is against me. It would be criminal for me to let it go completely unmentioned though.

There’s something amazing about OpenTech: it costs just £5 to attend. For the breadth of coverage, interesting speakers, things learned, and inspiration gained over the day this is an extreme bargain.

Giving myself a few minutes to note down a few points still at the top of my head 10 days later:

  • There was an overwhelming theme of “public good” running through the conference. From the projects devoted to this, such as mysociety.org, through to entrepreneurs and icons pushing to inspire everyone to follow their various leads. This is a great change from the usual case of “this tech is cool because, well, it is” – I loved hearing that tech was cool for the ways it was actually helping everyday people.
  • A further contrast: the geeks versus the suits (generalisations, I know). A few weeks back I went to a serious business-tech conference hosted by The 451 Group; that was also good stuff, but it came at security from a completely different angle (security was just one of several topics covered). The contrast is all the more interesting because there’s a convergence. At the business conference we hear “security is difficult, we have to try harder, alas, some things may be impossible”; at OpenTech we hear “security is impossible, but we can try harder and do better.” There’s far too much depth to this for me to go into right now, not that my own thoughts are in any good order. Suffice to say, studying the application of security from social and economic standpoints would be very interesting right now. There’s a lot of material out there, and people (and businesses) are speaking more openly about security issues these days, I think.
  • More on/around security. People get very confused about identity versus reputation, especially when technical definitions of authentication are worked into the mix. People, even a room full of geeks, know very little about the history of currency, and banking in general (a cultural weakness in the geek horde?) Cryptographers are regarded as some sort of higher being… maybe they are! (Aside: I’ve just read Simon Singh’s Fermat’s Last Theorem – it lives up to its reputation, and man those number theorists are an insane bunch!)
  • Ubiquitous networking has changed the world; maybe those of us who’ve lived through the changes don’t always appreciate how revolutionary they are (I have trouble seeing it sometimes, while much older geeks seem to see it more clearly). What’s scary is that the field is still young and haphazard; what further refinement will bring is difficult to imagine.
  • The above is amazing – now how do we deliver it to the rest of the world? Can it actually help solve the terrible problems most of the world has? I’d like to think so.

Of the sessions I attended these are memorable:

  • Most entertaining: The Web is Agreement, Paul Downey. A talk/rant around current trends, centred on Paul’s sketch of the same title. (The talk “Living on The Edge” from Danny O’Brien was also entertaining, and the only time I’ve seen a geek talk flooded with what can only be called “groupies” – it was strange.)
  • Most inspiring: Digital Money, David Birch. This guy’s online presence seems to be a blog about digital money. In essence this was a short, angry rant about the fact that us geeks have not solved the problem of “digital money.” At the core of the rant was the idea that functional digital cash will make the world a better place, breaking down unnecessary barriers in the world of money (think of sending aid/donations right to where they’re needed, family members sending money home without the “Western Union” tax, etc.)
  • Most relevant (to me): Security Discussion with Ben Laurie and Friends. Four security/crypto geeks/experts talking about how much things are broken. Entertaining, enlightening, and (to some) challenging.
  • Most disappointing: Android and the Open Handset Alliance. It just wasn’t techie enough – more a marketing spiel from a “developer advocate.” I was hoping for a crash “how stuff works” intro to Android.

On reflection… of the talks I saw there were a lot of “grumpy old(er) men.”

xfig still state of the art?

Note: This entry has been restored from old archives.

From time to time I have to draw a diagram. This excites me; I enjoy diagrams. Back in university the way to draw a diagram for a report was either xfig or marking it up directly in LaTeX. But they’re oldskool; there are groovy new ways to draw diagrams these days. The state of the GUI diagramming art seems to be either inkscape or dia.

Really? Well… I don’t think so. While both are good in an attractive sort of way, they also both suck. On my Ubuntu “hardy” machine I get SEGVs out of inkscape every few minutes; I’m not sure why and have no time to investigate. As for dia, it is just plain lacking. In inkscape I can group objects and resize the group; in dia a group of objects (even just rectangles) isn’t scalable! (Even xfig can do that.) So I have one option that crashes and is unusable, and another that lacks essential features and is unusable! All I want to do is draw some boxes with words in them, and maybe make them look pretty.

So, I’m using xfig now. Clearly, it is the state of the art in Linux diagramming. (And I’m oh so tempted to boot over to Windows and use Visio sometimes.)

What else is there? If I want to draw some class diagrams, boxes and lines, maybe even UML (shudder), what do I use? A few apt-cache searches bring up some candidates, but I just don’t have time to play with them. I’ve got a bloody diagram to draw! Go go xfig!

Inkscape is extremely promising, I’ll maybe try it again in another year.

std::endl

Note: This entry has been restored from old archives.

All these years I’ve been making regular use of good old std::endl without realising that it is a templated function! We live in a crazy world. Somewhere in the back of my head I guess I’d just assumed it was a platform-dependent constant… not quite.

I came across this merry discovery when implementing a logger, since re-inventing the wheel is always so much fun. It isn’t as bad as it seems; the “logger” is basically just a matter of encapsulating the relevant utility code, weighing in at less than 500 lines, of which more than half are API comments.

Anyway, I’d like my logger to act like a std::ostream, although I don’t want it to actually be a std::ostream (preferring to avoid inheriting from such beasts without a really good reason). The functionality is simple: it wraps a set of std::ostreams to send the output to. The important part is that it implements log levels and log message classes (which you can register and name if you wish). At any one time there is a log threshold and a set of active classes. To keep things really simple it has a single output method that takes only a std::string reference.
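
To make the snippets below concrete, here’s a minimal sketch of the sort of interface I mean (names simplified, message classes omitted, and a plain pointer list standing in for the real stream storage):

#include <ostream>
#include <string>
#include <vector>

class Log {
public:
    enum Level { QUIET, NORMAL, VERBOSE };

    Log() : m_threshold(NORMAL), m_current(NORMAL) {}

    // Register a destination stream; the Log does not take ownership.
    void addStream(std::ostream & os) { m_osList.push_back(&os); }
    void setThreshold(Level threshold) { m_threshold = threshold; }

    // Streaming a Level just switches the current message level.
    friend Log & operator<<(Log & log, Level level) {
        log.m_current = level;
        return log;
    }

    // The single output method: everything arrives as a string.
    void output(std::string const & text) {
        if (m_current > m_threshold) return;
        for (std::vector<std::ostream *>::iterator it = m_osList.begin();
             it != m_osList.end(); ++it) {
            **it << text;
        }
    }

private:
    std::vector<std::ostream *> m_osList;
    Level m_threshold;
    Level m_current;
};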

Now, I want to use it with good old << since that is comfortable and not unexpected, so I implement:

template <typename T>
Log& operator<<(Log & log, T const & t) {
    std::ostringstream oss;
    oss << t;
    log.output(oss.str());
    return log;
}

This seems to work swimmingly until I try the likes of log << std::endl – then it fails to compile. Eh, what on earth is std::endl that the template won’t catch it? A peek into the ostream C++ header tells us it is a function pointer:

  template<typename _CharT, typename _Traits>
    inline basic_ostream<_CharT, _Traits>&
    endl(basic_ostream<_CharT, _Traits>& __os)
    { return flush(__os.put(__os.widen('\n'))); }

Egads! std::endl is itself a template, so the compiler can’t deduce T from it – an uninstantiated function template has no concrete type to match. Giving the Log an overload that takes the instantiated function-pointer type sorts that out. So I implement this (bear with me):

Log& Ephedrine::operator<<(Log & log, std::ostream& (*fn)(std::ostream&)) {
    std::ostringstream oss;
    fn(oss);
    log.output(oss.str());
    return log;
}

And now I can do this:

g_log << Log::NORMAL << "Normal log output.  A number is " << 2.123 << std::endl;
g_log << Log::VERBOSE << "Verbose log output." << std::endl;

Joy.

But there’s more than one worm in this can. Note that in the definition above it calls flush on the ostream. The doxygen comments note: “This manipulator is often mistakenly used when a simple newline is desired, leading to poor buffering performance.” On checking the standard, there it is too: this is std::endl by definition. I’ve never used ostreams, and thus never std::endl, in performance-critical code, but I’ll have to keep this in mind if I ever do! In all of the C++ books I’ve read, a fairly large number, I don’t recall ever seeing this noted anywhere.

This brings to mind a question of design. I’ve avoided being a std::ostream yet gone and used std::endl, and std::endl is supposed to flush the stream (by definition – it’s in the standard, see section 27.6.2.7). My current implementation doesn’t flush the streams, so what should I do?

  • A) Not care.
  • B) Reimplement as Log::endl (which will ultimately call std::endl and thus end up flushing the streams!)
  • C) Create a special case specifically for std::endl.

I don’t like the first option; it’s not my preferred way of doing things (alas, it has been known to happen). The second option is easy to implement but will feel unnatural to the user (a moot point, I’m the sole user!). The third option smells “wrong” – but I already decided to do it, didn’t I? And I went one step worse and broke the flush guarantee of std::endl (albeit a guarantee that’s probably mostly unknown).

Is option C really enough, though? It turns out not. Aside from breaking the flush guarantee it doesn’t go quite far enough. Think about std::hex and similar manipulators: by converting them to a string (an empty string) before writing to the underlying std::ostreams, their meaning is lost. My original implementation above is, in essence, an abomination.

In the end I settled on a fourth option: templates. This is what I should have done in the first place, but avoided since I was hung up on pimpl at the time (templates and pimpl don’t mix well).

template <typename T>
void Log::output(T const & t) {
    // ... prep ...
    BOOST_FOREACH(std::ostream & os, m_osList) {
        os << t;
    }
    // ...
}

Quick

Note: This entry has been restored from old archives.

“because it’s not compiled, it’s also very quick” [1]

It’s a different definition of quick of course, but somehow I begin to feel disconnected and trolling mode wells up from the darkness within.

But… even in that sense, is it really as quick as we think it is? In my experience it certainly makes it quicker to discover new and interesting types of bugs.

The quote relates to Ruby, but really, be it Ruby, Perl, or Python it seems much the same.

[1] I’ve quoted from the 3rd page of an article there. I’m still not quite sure why these sites bother to separate their articles into “pages.” Even SMH used to be one-page-per-article before this pagination fad caught on.

Bitten by the python

Note: This entry has been restored from old archives.

I play with the snake from time to time, Python having now almost entirely supplanted Perl as my hack-n-slash language. But I’m certainly still learning as I go. Today I’ve been trying to work out what’s gone wrong with something I’m doing. I thought I’d done something strange with inheritance, not understanding some case where inheritance causes data to become static (in the sense of static members in C++). The real problem is that I didn’t realise that default parameter values are created once and reused, so if you mutate one you can end up with state being held onto where you don’t (well, I didn’t) expect it.

I’ve hit this at least once before I think, it is vaguely familiar. Anyway, it’s a bit of a tricky gotcha as far as I’m concerned. Intuitive? I wouldn’t say so. Here’s the example:

#!/usr/bin/python

class Parent:
    def __init__(self, stuff = []):
        self.stuff = stuff
    def doIt(self):
        print " ".join(self.stuff)
    def addStuff(self, stuff):
        self.stuff.append(stuff)

class Child(Parent):
    def __init__(self, stuff = []):
        Parent.__init__(self)
    def doIt(self):
        self.addStuff("foo")
        Parent.doIt(self)

c1 = Child()
c1.doIt()
c2 = Child()
c2.doIt()
c3 = Child()
c3.doIt()

Execute this and we see:

foo
foo foo
foo foo foo

What? Why is it accumulating “foo”s? The answer is in the “stuff = []” argument to __init__. What is going on, as far as I can tell from the documentation, is that the default argument (a list) is instantiated once, when the def statement is executed, and then kept for future calls. What’s more, I’m assigning self.stuff to it, which keeps a reference rather than creating a copy. So I have ended up with a static value for self.stuff – well, within the functionality of this code; it isn’t entirely congruous to a static member.

How to fix it? I don’t know the definitive approach, but here are a couple of ways. To achieve complete deep copying of the argument, whether it be default or passed in, use copy.deepcopy:

#!/usr/bin/python
import copy

class Parent:
    def __init__(self, stuff = []):
        self.stuff = copy.deepcopy(stuff)
...

Alternatively, you could use copy.copy for a shallow copy and I guess that might be analogous to this:

#!/usr/bin/python

class Parent:
    def __init__(self, stuff = []):
        self.stuff = []
        self.stuff.extend(stuff)
...

I’m sure there are great uses for this behaviour in Python. What are they? Something more fundamental than “neat tricks” involving incremental/changing default state? Am I the only one to think this somewhat of a gotcha?

Now that I’ve worked out what was wrong I can retrospectively build the right Google magic to get straight to the answer.

No time to read up on more casual chatter about it though. The documented formula is something like:

#!/usr/bin/python

class Parent:
    def __init__(self, stuff = None):
        if stuff is None:
            stuff = []
        self.stuff = stuff
...

Making O2 donglenet more reliable

Note: This entry has been restored from old archives.

In theory my ADSL finally goes live tomorrow. The frustration of trying to work without a reliable ‘net connection has been high. The O2 mobile broadband account has been a life-saver, although it certainly has its failings.

There’s one thing that has been rather a bother. While SSH connections are fairly manageable, the only problem being intermittent disconnections lasting 5 to 10 minutes, HTTP behaves very badly. Even when I did have an apparently working ’net link, evidenced by the fact that my SSH sessions were still live, HTTP connections would time out and fail regularly.

I have HTTP working pretty well now; here’s what I did:

  1. Forward all HTTP(S) to a remote proxy, in my case running on a machine I have in Germany.
  2. Stop using the O2-allocated DNS addresses; I’m using OpenDNS’s 208.67.222.220 and 208.67.222.222 instead, though I’m considering just using my own remote DNS server.

The former should be all that’s required; however, it looks like Opera does local name resolution for something even when a proxy is set up. The latter alone may be sufficient too, since the ’net link is certainly still there – it looks like it’s name resolution that times out or fails.
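
For reference, the moving parts look roughly like this – a sketch where the remote hostname and proxy port are placeholders, a proxy (squid or similar) is assumed to already be listening on the remote machine, and editing /etc/resolv.conf needs root:

# 1. Tunnel a local port to the remote proxy, then point the browser's
#    HTTP/HTTPS proxy settings at localhost:3128.
ssh -N -L 3128:localhost:3128 me@remote.example.de &

# 2. Use the OpenDNS resolvers instead of the O2-allocated ones.
cat > /etc/resolv.conf <<'EOF'
nameserver 208.67.222.220
nameserver 208.67.222.222
EOF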