Posts for Tuesday, April 13, 2010

how do you structure your python codebase?

One thing that’s awesome in python is having a small codebase that can fit in a single directory. It’s a comfy setting, everything is right there at your fingertips, no directory traversal needed to get a hold of a file.

Flat structure

Let’s check out one right now:


Download this code: python_codebase_structure_flat.txt

And here’s the import relationship between them:


Easy, straightforward. I can execute any one of the files by itself to make sure the syntax is correct or to run an “if __main__” style unit test on it.

Tree structure

But suppose the codebase is expanding and I decide I have to get a bit more structured? I devise a directory structure like this:


Download this code: python_codebase_structure_tree.txt

The same files, but now with files in every directory to tell python to treat each directory as a package. And now my import statements have to change too; let’s see master:

# from:
import mystring
import page
# to:
import media.mystring


Nice one. Okay, let’s see how this works now:

$ python
page says hello!
sentence says hello!
frame says hello!
mystring says hello!
master says hello!

Download this code: python_codebase_structure_run_user.txt

user imports page and then master. The first 4 lines are due to page, which imports three modules, and finally we see master arriving at the scene. All the files it imports have already been imported, so python doesn’t redo those. Everything is in order.

As you can see, imports between modules in the tree work out just fine, page finds both the local sentence and the distant frame.

But if we run master it’s a different story:

$ python media/
master says hello!
Traceback (most recent call last):
  File "media/", line 3, in <module>
    import media.mystring
ImportError: No module named media.mystring

Download this code: python_codebase_structure_run_master.txt

And it doesn’t actually matter if we run from media/ or run media/ from ., it’s the same result. And it’s the same story with page, which is deeper in the tree.

These modules, which used to be executable standalone, no longer are. :(

A hackish solution

So we need something. The nature of the problem is that once we traverse into media/, python can no longer see that there is a package called media, because it isn’t found anywhere on sys.path. What if we could tell it?

The problem pops up when the module is being executed directly, in fact when __name__ == '__main__'. So this is the case in which we need to do something differently.

Here’s the idea. We put a file in the root directory of the codebase, a file we can find that marks where the root is. Then, whenever we need to find the root, we traverse up the tree until we find it. The file is called .codebase_root. And for our special when-executed logic, we use a file called __path__ that we import conditionally. Here’s what it looks like:

import os
import sys

# Walk up the directory tree from mypath until the marker file is
# found; return the directory that should be put on sys.path.
def find_codebase(mypath, codebase_rootfile):
    root, branch = mypath, 'nonempty'
    while branch:
        if os.path.exists(os.path.join(root, codebase_rootfile)):
            codebase_root = os.path.dirname(root)
            return codebase_root
        root, branch = os.path.split(root)
    return None  # marker file not found anywhere above us

# Locate this module on disk, find the codebase root from there, and
# make sure it is on sys.path so the package imports can resolve.
def main(codebase_rootfile):
    thisfile = os.path.abspath(sys.modules[__name__].__file__)
    mypath = os.path.dirname(thisfile)
    codebase_root = find_codebase(mypath, codebase_rootfile)
    if codebase_root:
        if codebase_root not in sys.path:
            sys.path.insert(0, codebase_root)

codebase_rootfile = '.codebase_root'
main(codebase_rootfile)


So now, when we find ourselves in a module that’s somewhere inside the media/ package, we have this bit of special handling:

print "master says hello!"
if __name__ == '__main__':
    import __path__
import media.mystring


Unfortunately, importing __path__ unconditionally breaks the case where the file is not being executed directly and I haven’t been able to figure out why, so it has to be done like this. :/

You end up with a tree looking like the one in the screenshot.

I’ve pushed the example to Github so by all means have a look:

We pass the test, all the modules are executable standalone again. But I can’t say that it’s awesome to have to do it like this.

Posts for Monday, April 12, 2010

Welcome to Canada

I haven't had much time to blog lately because I was busy moving all my stuff to Canada. I'm finally here and starting to get settled a bit, so I thought I'd write about the culture shock, or lack thereof. Here are some differences and similarities between Canada and 'merka.


  1. In Canada people aren't very outwardly patriotic. You don't see Canadian flags plastered all over everything in sight. In the US there's a flag everywhere you look.

    Winner: Canada. I don't really need visual reminders of what country I'm in.

  2. US: Dollar bills. Canada: Dollar coins.

    Winner: US. You can't make origami out of coins.

  3. Sizes of fountain drinks at fast food places are vastly different. I got a "medium" at Tim Horton's and it was smaller than a typical "small" in the US. My wife says she ordered a large drink at McDonalds in the US and had to send it back because it was too big.

    Winner: Canada. Maybe this is one reason Canada has such a low level of obesity. Does anyone really need a liter of Pepsi with lunch?

  4. In Canada there's French all over everything. In the US there's Spanish all over everything. I find they appear in almost equal amounts between the countries.

    Winner: Draw. In BC you don't need to speak French, so I don't plan to learn it. Same with Spanish in the US.

  5. Canada is metric. Temperatures are in Celsius and speed limits are in km/h. The US is Imperial.

    Winner: Draw. Unit of measure for non-science purposes is a pretty arbitrary choice, so who cares?

  6. I have a queen now.

    Winner: Canada, because it's still a novel concept to me. But most people in Canada don't really care about the queen, from what I can tell.

  7. In Canada they put vinegar on french fries.

    Winner: US. Seriously, come on now.

  8. Everything is way more expensive in Canada and there's lots of sales tax. Example: gasoline is $1.10/liter (over $4/gallon). In Oregon it was always $2-something per gallon. On the other hand, everything is clean and there's cheap universal health care and the social programs seem to keep crime down.

    Winner: Canada. What good is cheap gas if you're dead?

  9. They spell things strangely up here. Favourite, colour, centre.

    Winner: US, for our far more efficient use of vowels.

  10. My bank card for my Canadian bank can't be used as a credit card. Haven't seen that in the US for a decade or two.

    Winner: Draw. Almost everywhere in Canada takes debit cards anyways. Plus they have little portable debit machines so you can pay at your table in restaurants.

  11. No one owns guns. I have yet to fear for my life since I've gotten here.

    Winner: Canada. I imagine crime still sucks in the big cities, but here on the island it's nice.

  12. Most people in Canada seem to keep up to date on US and world news. People in the US don't even remember that Canada exists most of the time.

    Winner: Canada. Thanks for being educated.

  13. The last letter of the alphabet is now "ZED" instead of "ZEE".

    Winner: US. The alphabet song doesn't even rhyme if you say "zed" at the end.

  14. Gay marriage is legal here. Relatedly, there isn't a church on every street corner and I have yet to meet many overly religious people. There's far less censorship on TV and radio. I actually saw a TV show with "atheists vs. religious people" testing their IQs via trivia questions. You would never see that in America.

    Winner: Canada. The US can DIAF in this regard.


  1. Drivers suck in Canada as much as or more than they suck in the US. Speeding and passing on the right without turn signals seems to be a national pastime.

  2. TV is mostly the same (i.e. not worth watching). It has mostly the same channels as the US, other than Canada-only ones like CBC.

  3. Walmart and Starbucks are still everywhere. But so are Tim Horton's and Canadian Tire.

Honestly things aren't that different up here. Sometimes I forget I'm even in a new country, other than the streets being clean and everyone being polite all the time. It's been a good move.


Tech Tip #6: Reencode any video to ensure compatibility with Windows Media Player

Another very useful tip I picked up when doing video manipulation the other day that deserves its own post is reencoding any video so that it will work on a vanilla Windows Media Player (without any extra codecs added). Windows Media Player is probably the most stubborn, pathetic video player the software world has ever seen, and unfortunately if you produce a video for the general public to view, you need to make sure WMP is happy to play it.

The tool for such a job is obviously ffmpeg, but the suggested commands on the compatibility page of their site seem to compress the videos to a horrendous state at the same time, so after asking on their IRC channel on freenode this is the command that turned up:

ffmpeg -i input -acodec libmp3lame -ab 128k -vcodec msmpeg4v2 -qscale 3 output.avi

Wonderful. Now I can render to whatever I please and worry about compatibility later.
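
For batches of renders I'd wrap that in a small script. Here's a sketch that only assembles the argument list from the command above (wmp_safe_command is my own helper name; the actual subprocess call is left commented out so you can inspect the command first):

```python
import os
# import subprocess  # uncomment to actually run ffmpeg

def wmp_safe_command(infile):
    """Build the ffmpeg argument list for a WMP-friendly .avi,
    using the codec flags from the command above."""
    base, _ext = os.path.splitext(infile)
    return ["ffmpeg", "-i", infile,
            "-acodec", "libmp3lame", "-ab", "128k",
            "-vcodec", "msmpeg4v2", "-qscale", "3",
            base + ".avi"]

cmd = wmp_safe_command("render.mov")
print(" ".join(cmd))
# subprocess.check_call(cmd)  # would perform the actual reencode
```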

Related posts:

  1. Tech Tip #5: Rotate a video by 90 degrees with mencoder
  2. Tech tip #3: Rip audio from an .FLV file.
  3. Top 10 Windows Mobile Applications

Posts for Wednesday, April 7, 2010

On Functional Programming Languages

As a programming-language adept I’ve been studying the ideas, concepts and theory of functional programming (FP) and FP-related languages for about 2 years now, and I’m still learning new things every day.

Recently an ‘FP User Group’ was started by some people at Ghent University, called GhentFPG, and the first meeting took place last Thursday, with great interest from students, university employees and people working in the industry. You can find some more info in the GhentFPG Google Group or in the wiki (where you can also find the slides of the presentations given during the first meeting).

Some days ago someone new to FP posted a message on the mailing list, asking which language he should study, among other things.

Since I think my reply might be of general interest (also outside GhentFPG), I decided to post a copy on this blog as well (note I did add some extra markup). Comments welcome!

Based on my experience (which is biased, obviously):

  • Functional Programming is not only a language-related thing. FP
    languages do force you to apply functional paradigms, but you can
    easily follow these paradigms in lots of other (more mainstream?)
    languages as well: it is easier to teach people a paradigm using a
    language they already know, rather than telling them FP is really cool
    and useful and interesting, but requires them to learn a new
    language/toolchain/… first.

    I’m not talking about Java or C++ or something similar here, but
    rather Python and Ruby.

  • If you’re into Java/C#/…, Scala is a really good introduction to FP:
    it allows you to write OOP code just like you do already, but also
    provides you with lots of FP-related features, and pushes you gently
    into the FP approach. The book “Programming in Scala” by Odersky et al.
    (the main author of Scala) is IMO a really good intro both to Scala and
    to the FP concepts it provides, not only showing them but also
    explaining gently why they’re useful, and why they’re ‘better’ than the
    approaches you’re taking already.

    The Scala type system is rather interesting as well.

    It’s the gentle path, if you want ;-) Learning Scala before reading
    ‘Real World Haskell’ certainly helped me a lot to understand the latter.

  • Haskell is an incredibly interesting language because of the concepts
    it adopted and types it provides, but it does require an immediate mind
    switch when coming from an OOP world (I once spent about 2 hours
    explaining to a Java guy how classes and instances in Haskell relate to
    classes and instances in Java, it wasn’t obvious). “Real World Haskell”
    is certainly worth a read (and if you read “Programming in Scala” as
    well, you’ll notice lots of similarities).

    I for one can read Haskell code pretty easily and learned lots of
    CS/math things thanks to learning it, but I’m (still) unable to write
    non-trivial code (I need some good project to get my hands dirty).

  • Erlang is really interesting from a (very specific) feature
    perspective: high-availability, distributed computing, the actor system
    and the OTP library on top of it,…

    It’s a rather ‘old’ language, but I kind of like it. Some people do
    complain about the syntax, but once you figure out that ‘,’, ‘;’ and
    ‘.’ are used almost the same way as they are in ‘human’ written
    language, everything becomes obvious :-)

    Do note though that Erlang is not a normal general-purpose language.
    You can code more or less everything you want using it, but it’s really
    targeted at distributed/highly-available/network applications. You most
    likely won’t use it to solve mathematical problems or write a game.
    It’s really good at what it’s built for though.

    One final note: please don’t ever make the mistake I made. If you know
    Erlang, and take a look at Scala (which also has an actor library in the
    standard distribution, as well as the more advanced Akka-library), don’t
    judge Scala as being a competitor for Erlang, they’re both completely
    different languages targeting different applications. ‘Scala’ is not
    about ’scalability’ as Erlang is (it’s a “Scalable Language”).

  • F# (and most likely OCaml as well, although I never used it) is
    certainly worth a look too. I only read 3/4th of a book on it, but
    it looks really promising and interesting.

  • There’s obviously all sorts of Lisp dialects. I have no opinion on
    them, never looked into any Lisp closely enough. I only wrote some
    Clojure (a Lisp-dialect for the JVM) code one day, but need to learn
    more about the Lisp-way of programming. Clojure seems to be interesting
    because of the deep integration of Software Transactional Memory (STM)
    in the language (yet another approach to concurrency ;-) ).
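
To make the first point above concrete, that the functional style carries over to mainstream languages, here is roughly what a few FP idioms look like in plain Python (toy examples of my own, nothing more):

```python
from functools import reduce

# Higher-order functions: functions taking and returning functions.
def compose(f, g):
    return lambda x: f(g(x))

inc = lambda n: n + 1
double = lambda n: n * 2
inc_then_double = compose(double, inc)

# map/filter/reduce style instead of explicit loops and mutation.
squares_of_evens = [n * n for n in range(10) if n % 2 == 0]
total = reduce(lambda a, b: a + b, squares_of_evens, 0)

print(inc_then_double(3))   # 8
print(squares_of_evens)     # [0, 4, 16, 36, 64]
print(total)                # 120
```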

As for the IDE question: Vim and a decent terminal are all you need;
luckily none of the above languages requires you to learn a toolchain
(or some magic IDE) which forces you to write 500 lines of XML-based
build ‘programs’ or other insanities.

My advice: pick some language, learn it, but make sure you don’t only
learn the language, but especially the concepts (type system,
higher-order stuff, list manipulation,…). Then pick some other
language and learn it as well (which will be easier since you got the
concepts already) and so on.

And read tons of papers available on the internet in between ;-) Even if
you don’t understand a paper completely, you’ll pick up some things
already, and re-reading it 2 weeks later helps a lot :-D

Just my .02,



Tech Tip #5: Rotate a video by 90 degrees with mencoder

I was recently doing some video editing work where the workflow was something like this: film in portrait, transfer to computer, rotate videos by 90 degrees, sequence together several videos, strip out background noise from the entire video. Filming was done with a camera, sequencing was done in Kdenlive (I’ve previously only had experience with Blender’s VSE and I must say I was very happy with this new application), and the noise-stripping was done with Audacity. I’m surprised at how fast this was all accomplished, and kudos to all the developers who created these apps.

However one thing I didn’t know how to do was how to rotate the video by 90 degrees. Kdenlive can do it but it ends up being awkwardly stretched and I couldn’t figure out how to unstretch it. Luckily mencoder, which comes with the mplayer package, has got a few tricks up its sleeve.

More for my own records than for anybody else, here’s the command I used:

mencoder -vf rotate=2 -o output.avi -oac pcm -ovc lavc

As my input file was a .mov, some of the sound wasn’t synchronised well after rotating, which was easily fixed by adding the option `-demuxer mov`. If you want to rotate clockwise instead of anticlockwise change `rotate=2` to `rotate=1`.
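
If you do this often the command is easy to wrap. A sketch that just assembles the mencoder argument list (rotate_command is my own helper name; rotate=1 is clockwise, rotate=2 anticlockwise, and -demuxer mov is the audio-sync workaround mentioned above — running it is left to you):

```python
def rotate_command(infile, outfile, clockwise=False, mov_input=False):
    """Assemble the mencoder invocation for a 90-degree rotation."""
    cmd = ["mencoder", infile,
           "-vf", "rotate=1" if clockwise else "rotate=2",
           "-o", outfile, "-oac", "pcm", "-ovc", "lavc"]
    if mov_input:
        cmd += ["-demuxer", "mov"]  # fixes audio sync for .mov input
    return cmd

print(" ".join(rotate_command("", "output.avi", mov_input=True)))
```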

Related posts:

  1. Tech Tip #6: Reencode any video to ensure compatibility with Windows Media Player
  2. Tech tip #3: Rip audio from an .FLV file.
  3. Blender 2.5 Features Video

Posts for Monday, April 5, 2010


Rigging a machine.

Things have been going absurdly slow lately. No commits to WIPUP. No new ThoughtScore models (though a few more seconds of video have been added). Nothing open-source related (except for trying out the Ubuntu beta1 on a box). Even schoolwork has slowed.

Because I fully understand that people with a grip on things wouldn’t give a rat’s ass about my life, I decided to show some pictures of the trouble I’ve been having trying to rig Taras, one of the main characters in The ThoughtScore Project. Here are two statically posed shots of Taras:

The left shows him in his unrigged pose. The pose he was modeled in. The right shows him "looking" down, slightly bent forward with his left arm reaching towards you. Disregarding the fact that the lighting is completely faked (what is that suit reflecting, anyway?), we have two other major problems to deal with.

Problem Number One: His arm was not built to be in that pose. Nor was any other part of his anatomy. When standing straight his arms are abnormally squashed in order to look natural in that one pose… and when in a dark environment. In any other scenario you’d see two spindly arms sticking out of a hunk of metal. The way it was designed, his shoulder "ball and socket" joint is more of a "plank of wood stuck on a block of wood" joint. It doesn’t fit nicely like a joint should.

Put simply, all of his joints (legs included) will have to be remodelled so that you don’t have gaping holes or bits of the suit intersecting when limbs are moved to their extremities. Not an easy task.

Problem Number Two: The torso. The torso is made up of several different meshes. Each part fits together nicely in one way and one way only. If you look at the picture, you’ll see that when he leans forward, the upper torso covers the middle torso, which largely remains stationary, the groin panel shifts outwards slightly, and the piping all has to move to accommodate this change and not randomly stick out where it shouldn’t. Think of it like the parts of a steam engine.

Long story short, it’s going to be a PITA to rig that guy just to bend over. Heck, I don’t think you can bend over in a suit like that.

Normally I stubbornly plod down the road of "create first, learn later, fix and redo even later", but this time I think I’d better buy some of Blender’s training DVDs before continuing on ThoughtScore.

Related posts:

  1. ThoughtStall
  2. Kayaking in Langkawi!
  3. Blender 2.5 Features Video

Posts for Saturday, April 3, 2010

Announcing colibri 1.0 alpha1, a mailing list manager with a django based web interface

I have been running my own mailing list software here at freehackers for more than a year now, and I think it is time to release a first preview of it. Let me introduce Colibri 1.0 alpha1.

Colibri is free software (GPL), based on Python and Django.


It’s not feature complete, but it actually forwards mail. From the web interface, people can (un)subscribe and configure their accounts.

The webpage, with screenshots, download, bugtracker and some documentation is at

I use mercurial for source control, and the repository is available both for cloning and browsing at


Irssi 0.8.15 Released

Irssi 0.8.15 has just been released.

Check out for more information, and remember to read the NEWS and ChangeLog files.

New Features:

  • Add active_window_ignore_refnum option. With active_window_ignore_refnum ON, the current behavior for the active_window key (meta-a by default) is preserved: it switches to the window with the highest activity level that was last activated. With active_window_ignore_refnum OFF, the old behavior is used: it switches to the window with the highest activity level with the lowest refnum.
  • Show new Charybdis +q list in channel windows (numerics 728 and 729).
  • Allow servers to belong to multiple networks.
  • Improve paste detection. Irssi now detects a paste if it reads at least three bytes in a single read; subsequent reads are associated to the same paste if they happen before paste_detect_time time since the last read. If no read occurs after paste_detect_time time the paste buffer is flushed; if there is at least one complete line its content is sent as a paste, otherwise it is processed normally.
  • Show “target changing too fast” messages in the channel/query window.
  • Use default trusted CAs if nothing is specified. This allows useful use of -ssl_verify without -ssl_cafile/-ssl_capath, using OpenSSL’s default trusted CAs.
  • Show why an SSL certificate failed validation.
  • Make own nick and actions use default colour instead of white.
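
For the curious, my reading of the paste-detection rule above can be sketched as a small simulation (this is not Irssi's actual C code, and it ignores the complete-line check on flush; reads are (timestamp, input-text) pairs and paste_detect_time is in the same time unit):

```python
def group_pastes(reads, paste_detect_time):
    """Group (timestamp, text) input reads into chunks.
    A read of at least 3 characters starts a paste; later reads join
    it while they arrive within paste_detect_time of the previous
    read; everything else is treated as normal typing."""
    chunks = []
    buf, last_ts = None, None
    for ts, text in reads:
        if buf is not None and ts - last_ts <= paste_detect_time:
            buf += text                     # still part of the same paste
        elif len(text) >= 3:
            if buf is not None:
                chunks.append((True, buf))  # flush the previous paste
            buf = text                      # a big read starts a paste
        else:
            if buf is not None:
                chunks.append((True, buf))
                buf = None
            chunks.append((False, text))    # normal keystroke
        last_ts = ts
    if buf is not None:
        chunks.append((True, buf))
    return chunks

print(group_pastes([(0.0, "x"), (5.0, "pasted tex"), (5.1, "t\n")], 0.2))
```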


Bug fixes:

  • Change some characters illegal in Windows filenames to underscores in logs.
  • Fix disconnects when sending large amounts of data over SSL
  • Show all nicks instead of just the first in an /accept * listing.
  • Make several signals without parameters available to perl again. In particular, this includes the “beep” signal.
  • Close the config file fd after saving.
  • Check if an SSL certificate matches the hostname of the server we are connecting to.
  • Fix bash’isms, use command -v instead of which and use bc -l in /CALC.
  • Fix a crash with handling the DCC queue.
  • Fix crash when checking for fuzzy nick match when not on the channel.

A more personal note: I’ve quit my job as an embedded software developer at NorthQ and I’m starting as a software engineer at Nokia at the beginning of the next month. I’m very excited about this, but it was not an easy choice to say goodbye to the good co-workers at NorthQ. But at age 20, I just couldn’t say no to Nokia’s job offer. I’m going to work with embedded Linux, Qt and other cool open source stuff that I’m probably not allowed to talk about.

Posts for Wednesday, March 31, 2010

Paludis is Going Into a Cave

The following applies to Gentoo, not Exherbo. Exherbo developers (Exherbo has no users) already know what’s going on there. I figure it’s worth having a clear source of information on this for Gentoo users, though, rather than making people rely upon rumours and third hand transcriptions of what’s been said on IRC.

The original Paludis client was more or less designed to be a less perverse version of emerge, with additional options for things like querying. On the plus side, it made things slightly easier for Gentoo users to pick up. However:

  • Users would be confused as to why things like --show-use-descriptions, which is an option for the --install action, would not also do something with --query. Funnily enough, users don’t seem to wonder why emerge --sync --changelog --pretend doesn’t work.
  • We had to be extremely careful allocating short options. Thus, we only allocated short options when we were sure an option would regularly be used both enabled and disabled, since otherwise it could just go in PALUDIS_OPTIONS. This worked well enough for users who were paying attention, but seemed to scare off certain people who jumped in without understanding what they were doing.
  • It meant we regularly had to decide between breaking familiarity or doing something dumb just to make Portage converts comfortable.

I’ve never especially liked the result. It’s not close enough to emerge to make learning zero cost, and it’s too close to emerge to be a pleasant user interface. Fortunately, the UI and the library are mostly nicely separated, so we can fix this. The plan is as follows:

  • A new client, named cave, will be provided. All pretence of feeling anything like emerge has been abandoned for cave. Instead, the interface is roughly based upon git and similar tools.
  • Depending upon whether or not we can be bothered, a second new client named egress may or may not be produced. If it is, it will be command-line-compatible with emerge, and with a reasonably similar output format (within reason). I’m not convinced something like this would be worth the effort to write, although there are some Gentoo developers who insist that something that behaves almost identically to emerge is a critical requirement for any official Gentoo package manager…
  • The paludis client will be deprecated and phased out.

To make things simpler, we’ve also decided to use the introduction of cave to switch to the shiny new resolver. The shiny new resolver is a lot more flexible, a lot more powerful and a heck of a lot easier to maintain and modify than the old resolver. However, making the paludis client use it would be quite a bit of work, so we’re not going to backport it.

There is no specific timeframe for any of this, and no estimates will be provided. There are, however, lists of things that need to be done.

  • The Basic Functionality milestone contains all of the things that have to be done before we consider cave to be ‘more or less usable’. Until this milestone is complete, we won’t be enabling cave in any of the ebuilds on Gentoo, and we don’t recommend users sneakily enabling it themselves. Note in particular that some of the things not yet implemented include displaying a user-readable error message rather than a big fat “UI not implemented!” when attempting an unsafe package uninstall, and handling virtual blockers.
  • The Useful Functionality milestone contains all of the things that have to be done before we remove the paludis client. We will be making cave easily available before it includes every bit of functionality present in paludis, but we won’t be forcing a switch until we’ve either implemented equivalents for every feature or decided we’re not going to support a particular feature at all.
  • The Long Term Extras milestone contains things we’ll be doing at some point.

These milestones are probably incomplete, and are definitely open to arbitrary additions, removals, changes and being ignored based upon developer whims, available development time, bribery, patches, lack of interest, features looking fun or boring to code and the phase of the moon.

I realise this is probably futile, but I should stress that users should not attempt to enable or use cave on Gentoo until we pass the Basic Functionality milestone. This isn’t taking away your fun — some of the not yet completed Basic Functionality items are necessary for basic usability and correct operation on Gentoo.

Filed under: paludis for users Tagged: gentoo, paludis

offering downloads via p2p using bitTorrent

BitTorrent logo, source: wikipedia

if you ever wanted to publish a file via bittorrent, here is a small guide on how to do it with linux (using a shell). this guide – in contrast to most other guides found on the net – is NOT about downloading files using bitTorrent, instead it is about PUBLISHING FILES! first off, our tools:

  • bittornado [1] for tracker/seeder (that is the central part of your hosting service)
    BitTornado-0.3.17.tar.gz worked, while net-p2p/bittornado-0.3.18-r2 did fail
  • mktorrent [7] to create a .torrent on the shell
  • ktorrent [2] used as ‘normal’ client (that would be a user of your service)

bittorrent is basically nothing more than a simple protocol for copying files, which enforces integrity by splitting a big file into smaller chunks of roughly 1MiB each and hashing every chunk (the piece size may vary from .torrent to .torrent). the mixture of all related computers hosting chunks is called the swarm. if in doubt just read the wikipedia article [3] to get the terms & concepts. a very nice matrix of bitTorrent clients and tracker capabilities can be found at [4].
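
the integrity part is easy to sketch in python: cut the payload into fixed-size pieces and sha1 each one, which is what the 'pieces' field of a .torrent holds (a tiny toy piece size here, real torrents use much bigger pieces):

```python
import hashlib

def piece_hashes(data, piece_length):
    """split data into piece_length-sized chunks and sha1 each one,
    like the 'pieces' field of a .torrent metainfo."""
    hashes = []
    for off in range(0, len(data), piece_length):
        piece = data[off:off + piece_length]
        hashes.append(hashlib.sha1(piece).digest())
    return b"".join(hashes)

payload = b"x" * 2500
hashes = piece_hashes(payload, 1024)  # 3 pieces: 1024 + 1024 + 452 bytes
print(len(hashes) // 20)  # sha1 digests are 20 bytes each
```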

the basic idea about hosting downloads with bitTorrent

open source projects often offer files via bitTorrent. to offer a file via bitTorrent one has to do this:

  1. create a .torrent of the file or directory one wants to offer
  2. host a tracker (which manages the swarm, usually a tracker is a central server)
  3. connect a 'seed' to the tracker, so that others can download the file and spread it further
  4. test & monitor the tracker or swarm
  5. security

in general this is nothing new, as this technique has been in use for years. i would like to create a very lightweight documentation about how these points are achieved with ease, so here we go:

(1) create a torrent

let’s use one of my 23MiB screencasts (just use whatever you want) to create a .torrent using mktorrent [7]

# wget ''
# mktorrent -a -c 'a screencast showing what libnoise-viewer does' libnoise-viewer.ogv

mktorrent 0.4 (c) 2007 Emil Renner Berthing
Hashed 92/92 pieces.
Writing metainfo file… done.

Now we created libnoise-viewer.ogv.torrent, which can be found in the directory where we ran mktorrent. for my private computers i use a DDNS service which is updated on every reconnect of my fritzbox.
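
for the curious: the .torrent file mktorrent just wrote is a bencoded dictionary (announce url, piece length, piece hashes, …). a minimal bencoder of my own, just enough to see the format:

```python
def bencode(value):
    """encode ints, strings, lists and dicts the way .torrent files do."""
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, str):
        value = value.encode("utf-8")
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        out = b"d"
        for k in sorted(value):          # keys must be sorted
            out += bencode(k) + bencode(value[k])
        return out + b"e"
    raise TypeError("cannot bencode %r" % type(value))

print(bencode({"piece length": 1048576, "name": "libnoise-viewer.ogv"}))
```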

(2) host a tracker

in my case i host this tracker at a dialup connection using dsl. this might not be ideal (but redundancy might get better if clients use a dht, more on that later). since the computer is behind a nat (network address translation) we have to forward some ports (sockets) to the server hosting the tracker behind the nat. this can be done by using your fritzbox configuration dialog (or equivalent, whatever you use – i will just pick that example with a fritzbox) .

# mkdir torrent;
# mv *.torrent torrent/
# --port 6969 --dfile dstate --logfile tracker.log --allowed_dir torrent/
# netstat -tulpen
tcp        0      0  *               LISTEN      0          234712     21631/python2.5

we could check for html output with curl as well (curl should be installed on basically every linux installation)

curl localhost:6969
the output should be some html code with tracker version and a list of torrents (tracked files)

so we see that the service using tcp with port 6969 is up and running. if you are behind nat – as i am – also check from a remote machine, if the service is working (can also be done using curl).
if you need to configure a fritzbox or another socket/html based router which has no direct internet configuration enabled, you can also use a ssh redirect. in my case i can access my server behind the nat with ssh. so if you want to configure your fritzbox as if you had your computer plugged into the local lan switch, see a different documentation of mine at [8] (german only).
i used the tcp portrange from 6881 to 6999 for the tracker as well as for the seeder connections, so forward these ports.

(3) connect a 'seed' to the tracker

without doing this nobody will be able to download anything, since nothing is there! so let’s add some data. be aware that if that seed is behind nat as well it might not work at all.
# --responsefile /root/torrent/libnoise-viewer.ogv.torrent --minport 6881 --maxport 6999 --max_upload_rate 20 --saveas /root/libnoise-viewer.ogv
saving:         libnoise-viewer.ogv (22.8 MB)

percent done:   0.0
time left:      Download Succeeded!
download to:    /root/libnoise-viewer.ogv
download rate:
upload rate:    12.4 kB/s
share rating:   oo  (0.4 MB up / 0.0 MB down)
seed status:    0 seen recently, plus 0.076 distributed copies
peer status:    1 seen now, 7.6% done at 6.7 kB/s

so now you have to upload libnoise-viewer.ogv.torrent to somewhere, or copy it to your client directly.
you might want to have a look at the other parameters, for instance:
  • --max_upload_rate <arg>
    maximum kB/s to upload at (0 = no limit, -1 = automatic) (defaults to 0)
  • --minport <port>
  • --maxport <port>

you can also run several of these clients from different machines to ensure redundancy here as well. it will scale pretty well! you can also use any other torrent program, such as ktorrent or rtorrent.

(4) test & monitor the tracker or swarm

use rtorrent or ktorrent, import libnoise-viewer.ogv.torrent and see if the download works. if it does, you are done!
monitoring is basically done by visiting the tracker. keep in mind:
  • there must be at least one seed (that is, your seed)
  • check the logs created by the tracker, that is: tracker.log
  • you can also do bandwidth monitoring; ktorrent has a nice monitor included (in case you are seeding), but you most likely don't need bandwidth monitoring for the tracker as it won't use much bandwidth
  • both services might open many connections, and some firewalls can't handle this well: the resources needed for 'connection tracking' in stateful firewalls will soon be exceeded, resulting in strange effects

ktorrent downloading the torrent

(5) security

i did run the script as root, but if you want to establish a service which should be secure and reliable, DO NOT RUN IT AS ROOT! just create a new user and run it as that user. this is very easy since bittorrent does NOT use any privileged ports (1-1024), only unprivileged ones (1025-65535); in my example even fewer.

if you have frequent issues with the tracker you could use minit [5] to restart the service automatically.


i hope this helps you offer big downloads more easily. just keep in mind that you have to follow two rules:

  1. for every new download you need to create a new .torrent file
  2. you need to inject a seed for every .torrent via a tracker
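
the two rules can be sketched as a small dry-run loop. this is only an illustration with dummy payload files in /tmp; `btmaketorrent` and `seed` are placeholder names (the real tool names differ between bittorrent/bittornado versions), and the tracker URL is hypothetical:

```shell
# dry run of the two rules above, using dummy payload files;
# btmaketorrent and seed are stand-ins, check your bittorrent
# version for the real tool names
mkdir -p /tmp/payload
touch /tmp/payload/a.ogv /tmp/payload/b.ogv
TRACKER="http://my-tracker.example.org:6969/announce"
for f in /tmp/payload/*.ogv; do
    echo "rule 1: btmaketorrent ${TRACKER} ${f}"     # one .torrent per download
    echo "rule 2: seed --responsefile ${f}.torrent"  # one injected seed per .torrent
done
```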

using bittorrent will show its full potential once we have migrated to ipv6, since nat (as mentioned in about every 5th posting on my blog) is a real problem.


  • since the tracker seems not to be stable in either version of bittornado, we now use bittorrent instead. the concepts apply 1:1, so the adaptation isn't very complicated
  • be aware that NAT will most likely kill a lot of the clients' upload potential, since most clients will not have a valid public ipv4 address but will be behind NAT, usually with improper port forwarding. this results in much lower throughput, and clients will see small download rates in general. one exception to this rule: two clients can utilize the full potential if one has proper port forwarding configured while the other is behind nat










Posts for Tuesday, March 30, 2010

Paludis 0.46.0 Released

Paludis 0.46.0 has been released:

  • We can now read environment.bz2 files from VDB created by newer Portage versions.
  • appareo now has various options relating to checksum overriding.
  • When creating cache subdirectories, we now copy the mode of the main cache directory rather than using umask to determine permissions.

Filed under: paludis releases Tagged: paludis

Posts for Monday, March 29, 2010

acronis true image 2009

i have a windows xp server on which i use 'acronis true image home 2009' for periodic backups.

my problem:

my 1TB backup storage was filled with incremental/full backups although i’ve used incremental backups with consolidation enabled.

to investigate the issue i installed windows xp in a VirtualBox on my linux machine to experiment, hoping to find an easy solution. i had previously searched forums for a fix, but it seems a lot of people have the same issue, with no fix provided.

the acronis documentation does not explain what acronis actually does, or it is far too complicated. i've read the manual (not the printed version but the acronis help shown when pressing the '?' button on the 'backups - incremental' dialog) a few times, but i did not understand it: it is quite complex and i don't like how they explain the single steps.

what i want:

first let’s see what i want:

  1. an incremental backup should be made every day (the first backup is a full backup, of course)
  2. the main archive (the first full backup) should be validated on every backup
  3. if more than 6 backups exist, delete the oldest one
  4. the old backup may only be deleted if the new backup is 100% consistent
  5. the backup must always be in a consistent state: no merging of a full+diff when there is no other full backup around

i’m not sure if the backup merge (merging a full and a incremental backup) is atomic, which would mean that if the merge fails the old files are not lost. after some experiments i’m not sure but i doubt that it is atomic. i have a bad feeling about this. so i think (4) and (5) can’t be done directly. maybe with two backup jobs, one every ‘odd day’, the other every ‘even day’ would be a solution. but there are so many indications which i can’t take into account – so i currently think the best is to go with (1),(2) and (3) only while checking the backup from time to time manually (which i would do anyway).

so here is the configuration of how i achieve (1), (2) and (3) but NOT (4) and (5):



automatic consolidation

problem to exceed the harddrive capacity

to recall: my primary problem was that the backups exceeded the hard drive capacity, as no old archives were consolidated:

that resulted from the 'backup method' settings, where i had also checked the last option, 'create a new backup after x incremental or differential backups'. that means consolidation was never done: the consolidation counter was 6 (shown in the screenshots above), but a new backup chain was started after 3 successive backups (disabled in the first screenshot).

it seems to me that acronis executes something like this:

if (sum(backups) > consolidation_threshold) -> consolidate backups

where sum(backups) counts all types together: full, incremental and differential

example: sum(one full backup and 3 differential backups) = 4

however: if the checkbox 'create a new backup on every x'th backup' is checked, the algorithm is never executed when x is smaller than the consolidation_threshold, leaving old backups undeleted!
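
my reading of that behaviour, as a hedged shell sketch (the variable names are mine, not acronis'):

```shell
# sketch of the apparent acronis logic: with the "create a new backup after
# x backups" option on, a new chain starts before consolidation ever triggers
full=1; incremental=5              # example: 6 backups in the chain
consolidation_threshold=6
new_backup_after=3                 # the checkbox that broke consolidation for me (0 = off)

backups=$((full + incremental))
if [ "$new_backup_after" -gt 0 ] && [ "$new_backup_after" -lt "$consolidation_threshold" ]; then
    echo "new full backup chain started after $new_backup_after backups; consolidation never runs"
elif [ "$backups" -gt "$consolidation_threshold" ]; then
    echo "consolidating oldest backups"
fi
```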


currently i have disabled 'create a new backup after x incremental or differential backups'. that means all options are set as shown in the 3 screenshots.


this is a very typical flaw in gui design: intuition is misled by the facts as presented. a clear and intuitive approach would be a visualization of the schedule resulting from the current setup. i'm not very happy with acronis currently.

i’ve also purchased the more recent version of ‘true image home 2010′ but it still is the same issue. maybe someone understands this better than me, then please give me a hint.

Posts for Saturday, March 27, 2010

magic dhcp stuff – ISC Dynamic Host Configuration Protocol

source: a friend of mine, Andreas Korsten, showed me how to execute custom scripts when a dhcp lease is passed to a client. this is interesting stuff, and since it seems not to be documented anywhere yet, i decided to blog it. it is probably of use for other admins out there - thanks to Andreas Korsten!


idea: run a custom script when a lease is passed to the client. in the example below every client in the netboot group will trigger ‘custom logging’ and additionally execute a script.

ISC Dynamic Host Configuration Protocol

It is about: net-misc/dhcp-3.1.2_p1 (gentoo, portage), see [1]

No special useflags were used: +kernel_linux -doc -minimal -selinux -static

setup /etc/dhcp/dhcpd.conf

# vim: set noet ts=4 sw=4:

allow booting;
allow bootp;

server-name "myServer";
default-lease-time 3000;
max-lease-time 6000;
ddns-update-style none;

subnet netmask {
  range;
  option subnet-mask;
  option domain-name-servers;
  option domain-name "myPool";

  group netboot {
    #server-identifier;
    #filename "pxelinux.0";

    #on commit { execute ("/tmp/", hardware , "fnord", host-decl-name, "foo", leased-address, "bar" ); }
    #on commit { execute ("/tmp/", host-decl-name ); }
    #on commit { execute ("/tmp/", leased-address ); }

    # helpful:
    on commit {
      set ClientIP = binary-to-ascii(10, 8, ".", leased-address);
      set ClientMac = binary-to-ascii(16, 8, ":", substring(hardware, 1, 6));
      log(concat("Commit: IP: ", ClientIP, " Mac: ", ClientMac));
      execute("/tmp/", "commit", ClientIP, ClientMac);

      #if(execute("/root/scripts/dhcp-event", "commit", ClientIP, ClientMac) = 0) {
      #if(execute("/tmp/", "commit", ClientIP, ClientMac) = 0)
      #{
      #       log(concat("Sent DHCP Commit Event For Client ", ClientIP));
      #}
      #} else {
      #       log(concat("Error Sending DHCP Commit Event For Client ", ClientIP));
      #}
    }

    host router5 { hardware ethernet 00:40:ff:aa:b0:44; fixed-address; option host-name "router5"; }
    #include "/etc/dhcp/dhcpd.otherhosts.conf";
  }
}

the important lines are the ones inside the 'on commit' block.

the script

you could send an email or a jabber message, or just do some advanced logging. consider: if you have a server farm it might be interesting to see whether a reboot actually worked. the arguments to the bash script can be processed within the script; their order is given by the dhcpd.conf file, see above.

possible errors

always review the logs, in my case /var/log/syslog. since the dhcpd service on gentoo runs as user 'dhcp' and the script was not accessible to that user, this error could be found:

debug: Mar 27 13:45:17 dhcpd: Commit: IP: Mac: 0:40:ff:aa:b0:44
debug: Mar 27 13:45:17 dhcpd: execute_statement argv[0] = /tmp/
debug: Mar 27 13:45:17 dhcpd: execute_statement argv[1] = commit
debug: Mar 27 13:45:17 dhcpd: execute_statement argv[2] =
debug: Mar 27 13:45:17 dhcpd: execute_statement argv[3] = 0:40:ff:aa:b0:44
err: Mar 27 13:45:17 dhcpd: Unable to execute /tmp/ Permission denied
err: Mar 27 13:45:17 dhcpd: execute: /tmp/ exit status 32512
info: Mar 27 13:45:17 dhcpd: DHCPREQUEST for ( from 0:40:ff:aa:b0:44 via ath0
info: Mar 27 13:45:17 dhcpd: DHCPACK on to 0:40:ff:aa:b0:44 via ath0

right after i corrected the permission issue:

debug: Mar 27 13:52:32 dhcpd: Commit: IP: Mac: 0:40:ff:aa:b0:44
debug: Mar 27 13:52:32 dhcpd: execute_statement argv[0] = /tmp/
debug: Mar 27 13:52:32 dhcpd: execute_statement argv[1] = commit
debug: Mar 27 13:52:32 dhcpd: execute_statement argv[2] =
debug: Mar 27 13:52:32 dhcpd: execute_statement argv[3] = 0:40:ff:aa:b0:44
info: Mar 27 13:52:32 dhcpd: DHCPREQUEST for from 0:40:ff:aa:b0:44 via ath0
info: Mar 27 13:52:32 dhcpd: DHCPACK on to 00:40:ff:aa:b0:44 via ath0




Posts for Thursday, March 25, 2010


Using OpenVPN to route a specific subnet to the VPN

I have an OpenVPN server that has the push "redirect-gateway" directive. This directive changes the client's default gateway to be the OpenVPN server; what I wanted, though, was to connect to the VPN and access only a specific subnet (eg. through it, without changing the server config (other people use it as a default gateway).

In the client config I removed the client directive and replaced it with these commands:
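
A minimal sketch of those lines, reconstructed from the explanation of each directive (the IPs are the ones my server had been assigning; yours will differ):

```
tls-client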

What the previous lines do:
tls-client: Act as a client! ("client" is an alias for "tls-client" + "pull", but I didn't like what pull did: it changed my default route.)
ifconfig The tun0 interface will have IP on our side and on the server side. The IPs are not random; they are the ones OpenVPN used to assign to me while I was using the "client" directive.
route Route all packets for through the tun0 interface. I needed this route to access services running on the OpenVPN server itself (
route Route all packets for through the tun0 interface.

A traceroute to now shows that I accessing that subnet through the vpn.


Gentoo... improving?!

There's been lots of talk in the past about Gentoo dying.  I won't provide the links - they're (usually) useless posts by uneducated non-Gentooers trying to play fortune teller.  From the "inside" perspective of a user: I still use Gentoo and it still works.

So following on from the comments on a previous post about some network control tools, a user commented on a Summer of Code project to improve Network Manager integration in Gentoo.

As I was browsing through the 2010 ideas, I realised there are some quite neat ideas here which will continue to keep Gentoo configurable, fast, and leading edge. Such as tags support for portage; Fastboot and Upstart for 5 to 10 second boot times; Dracut (the "distro-neutral initrd framework"); even an ebuild generator; and Visual Gentoo to graphically edit Gentoo configuration files. (OK this last one, it could be argued, was leading edge a long time ago, but then it could also be argued that text-based configuration files are the one true way!)

There's even some nice Council support goodness tabled.  Anything to help the council, young Padawan!

Let's hope lots of you SOC young'uns get going and support these projects.

So I finish by saying Gentoo: "It's not dead".

Posts for Wednesday, March 24, 2010

on the occasion of + a short survival guide

We all heard the news about the shutdown of, either from news sites (even international ones) or from the official police announcement.

I won't dwell (for now) on the issue of intellectual property. I am fundamentally opposed to it, but I would do the subject an injustice if I laid out my thinking merely on the occasion of the shutdown of one torrent tracker. Nor will I dwell on the fact that the police, with their announcement, threw the presumption of innocence out the window, nor on the fact that its last paragraphs were obviously dictated by the rights-holder companies.

I will focus mainly on what exactly a torrent tracker is. Recall that in the famous Pirate Bay trial, half the charges collapsed on just the second day, because the prosecutors did not know that the movies shared by the users of that torrent tracker were not on the site but on those users' disks. A torrent tracker simply provides a file (.torrent) which contains the metadata necessary for that sharing to happen (e.g. the name of the file/movie). Any user who downloads this file becomes a member of a "network" that shares the file described by the .torrent file.

(Aside: the more curious among you should look up how DHT technology is used in torrents. It is a mechanism that was also explained at the Pirate Bay trial by its admins, and which in practice removes even this simple participation in file sharing from the tracker, making the whole process fully decentralized.)

So a torrent tracker (Pirate Bay,, etc.) does not possess illegal material, and therefore cannot be charged with distributing it. What such sites could be charged with is inciting and facilitating illegal activities by their users. I doubt, though, whether Greece has the corresponding legal framework to support such a charge. We already read that in Spain there has been a positive court decision on exactly this issue, which in practice acquits sites like on the grounds that they are mere conveyors of data and thus do not violate copyright law.

(Aside: especially in the case, it will be interesting to see how the authorities obtained the addresses and other details of the administrators, as this raises the question of a breach of communications privacy.)

Survival Guide

According to the police press release, the personal computers of those arrested were seized. Two small tips to make sure your disk won't "betray" you.

1. First of all, use an encrypted filesystem. The process is very simple (at least on Linux) and is usually just a checkbox during installation. As an example, Fedora Linux, which I personally use: after enabling the corresponding option in the installer:

it asks me at boot for the passphrase I have chosen:

(Remember that a passphrase is not a password. What matters is not that it is hard, but that it is long. Use, for instance, a line from a favourite poem. Not a haiku :P)

2. If you want to wipe traces already on your disk and do a clean install, first burn a LiveCD. Fedora Linux will do, but for something this simple so will something like slax. Boot your computer with it, and when it has finished booting open a terminal and type the command:

dd if=/dev/urandom of=/dev/sda

where sda is the 1st disk, sdb the 2nd, and so on. This fills the disk with random data, and it is a good idea to do it first even if you are going to encrypt your disk. Be prepared for it to take quite a few hours (10-24h) depending on the capacity of the disk.

Posts for Tuesday, March 23, 2010


WIPUP 19.03.10a – under or overcooked?

It’s WIPUP statistics time, folks. I’d like to apologise for the lack of "proper" posts as I’ve been busy making a portfolio for a university application and bachoté in some new ThoughtScore stuff. Yes, that’s right. So a sad excuse is to look at statistics. (Those viewing my profile would probably know this already though)

As you can see only 3 or so days after the release we’ve hit the same level of views as previous updates. At the same time we see we’ve resumed our correlation between updates and views. I think the image really speaks for itself.

It’s however a bit more interesting to note that we’ve had 4 new updates added by new users (one apparently being a 77 year old lady from Alaska). I’ve also posted a thread on the BlenderArtists "news" forum category, and although we’ve only had 3 people view the thread (yeah, not that active apparently) we’ve gathered 3 very positive comments and had 3 registrations. Sounds good to me. Very good sign.

When dogfooding lately with current WIPs which weren't built to be documented and aren't entirely of a personal artistic nature, I've noticed a natural resistance to putting work online. Something along the lines of "it's not ready! It's ugly as bollocks!" However I've resisted deleting anything and I don't regret that. I am concerned, though, that others (after overcoming the initial excitement) will experience the same. I guess it's time to orchestrate a few social experiments; if they prove anything interesting I'll post about it later.

All-nighter coming up.

Related posts:

  1. The WIPUP 21.02.10 stats are out.
  2. After the WIPUP release, the stats are in.
  3. Countdown to KDE 4.4 and the new KDE website: 2 days left

Posts for Monday, March 22, 2010

Computer Aided Government?

Random thought of the day…

As most programmers do, I see tendencies of over-optimism in myself.  Yet Mike Judge's Idiocracy seems like a strange window into the future.  Part of me thinks that government should include an open source heuristic computer simulation doing minimax on wealth creation (aka technology) and personal well-being to aid in decision making.

I suggest a new field of research:  Computer Aided Government (CAG).  How can we wire sensors and algorithms into society to enable us to make optimized decisions?  How can we use game theory, statistics, Bayes’ Theorem, simulation, sensors, neural nets, etc. to improve the human condition?  I think IBM is on to something big with their Smarter Planet initiative.

And just to reel it in if you think I’m bat shit insane, think that the current best forms of government were originated over 300 years ago if not earlier.  This was before many forms of computation and logic had been explored and applied.  Surely technology can improve this field as it has for nearly every other facet of life.  I think open source computer scientists can step up in a big way here.  Research in the field could affect billions to come.

Think on it and comment.

Share and Enjoy: Digg Slashdot Facebook Reddit StumbleUpon Google Bookmarks FSDaily Twitter email Print PDF

Related posts:

  1. Computer e-Recycling (an I.T. WTF Odyssey) Story Time: Computer e-Recycling an I.T. WTF Odyssey I had...

Starcraft 2 BETA Thoughts (or: Cool Kids Club Post)

I remember when I bought Warcraft 3 and started playing through it. I was relatively put off by the "tactical action" campaign levels (levels where you go for a long period of time without a base), and the "heroes". Starcraft has its fair share of UMS Hero maps, where you generally send a horde of units, plus your hero, and if you can swing it, a healing/repairing unit focused solely on your hero. But when the story is shaped around the hero, it puts a strong emphasis on the expendable nature of some of your units, and the "protect at all costs" nature of your hero. Go figure, right?

What does this have to do with Starcraft 2? Absolutely nothing, there's no campaign levels in the BETA :).

I will say one thing. I sucked at the Warcraft 3 campaign, I always cheated my way through the Starcraft campaigns, I loved the Warcraft 2 campaigns, and... I always had my ass handed to me in multiplayer on any of the aforementioned games.

I'm equally delighted to state, that absolutely nothing has changed in this regard! I've never been good at managing resources in those games. I'm either continually bumping into the insufficient resources line, or I wind up with a surplus which ultimately does me no good, because building significant units (battlecruisers, carriers, etc.) just takes too long. I stick one of every building in one single base, and then wonder why I get horribly owned within just a few enemy attack waves.

Starcraft 2 brought some changes that I'm both delighted, confused, and annoyed by.

Many of the maps have a large tree, or series of rocks, or some one big object that obstructs a path out of your base. And it takes FOREVER to destroy. I stuck an SCV on it during the beginning of one bout, and they never finished the job. This seems awkward. I'm well aware that an SCV isn't exactly powerful, but this is just one example. After depleting all my crystals in one base, I took the leftover fleet of maybe 15 SCVs on one of these tree obstacles, and had them all go to town, force attacking the structure; Still, no reasonable damage was done to it after the duration of the rest of the bout.

I know, I should stick a primary attacking unit on it, but generally speaking, I just send those air units out unobstructed, or with transport units, or, there's usually a second entrance anyways, and I just take that route instead.

I am aware that this tree/rock/whatever usually prevents access to a base location (with enhanced crystal minerals usually) on some maps, but that's not always the case.

The changes to the Zerg creep are very interesting. The creep does NOT expand, except when you do one of two things:
(1) Evolve a Hatchery into a Hive, and then place your overlords over a normal terrain spot. Overlords now have a "spew creep" option (or something to that effect), in which they constantly drip the goop of the creep, creating a small radius you can build things on.
Think of the Zerg Creep more like Protoss Pylons now, except you have to sacrifice a unit in order to expand the creep. Needless to say, it's only temporary.
Given that Zerg "buildings" take damage whenever their surrounding creep is gone, this seems like a ridiculously dangerous change. Take out one overlord, and *bam*: a chain reaction that starts the death clock for numerous zerg base expansions.

(2) Build a Nydus Worm (renamed from the Nydus Canal of Brood War). I have never really been able to gauge how much the worm helps. I could also be entirely misconstruing its helpfulness.

You no longer have to build a Creep Colony and then evolve it into a Sunken or a Spore Colony. THANK GOD. This is a change that makes so much damn sense. Were the Creep Colonies even useful for anything before? I don't think they were, and now they're gone. Huzzah at Blizzard's intellect!

I've played very little Protoss so far. Given that I suck with resources just as much as ever, and I still consider Protoss the most money hungry race, I'm shying away from it until I start sucking a lot less at the game. Nothing stood out from the Protoss in the one or two games I played as them except that I think the Reavers are gone.

I've enjoyed playing as the Terran, and have actually snuck out a few wins using them. Every building that is capable of building an addon usually has the option of two. A "reactor" (allows you to build two units at the same time), or a Tech Kit, which allows that building ONLY to build the advanced units. The command center no longer has addons, but it can be completely added on to in one of two ways.

Turning your Command Center into a gigantic (admittedly, ground enemy only) turret is HILARIOUS. The Command Center bolts on a swivel head for aiming, and the sound that emits when it shoots... yowza. Think Tanks in Siege Mode, but more bassy. It's a marvelous sound.

Then there's the "communications" add on, that allows you to stack your supply depots, doubling their output, it has the sensor sweep, as usual, and then it has a special SCUD crystal collector that runs for 90 seconds (I think).

Starcraft 2 is a serious evolution. The game looks absolutely beautiful, and it's a shame the beta only has mid sized maps at the largest. I'm looking forward to 4v4 (or bigger?) games with a HUGE beautiful landscape. It'll be great to watch, it always is.

You can add computer players into custom games, but they're "very easy" only at the moment. And boy do they work as advertised >_>. I mean, I'm slow at building compared to a lot of my friends, but holy cow. The fact that the computer works solely on buildings and barely enough units to defend itself? Game over, man.

Also, when you beat the computer, the computer sends "gg" to you via chat :D. I lol'ed a bit.

Starcraft 2 looks amazing, plays amazing, is still... well, it's still "blocky" to me (that is, the lobby/set up interface), but it has enormous potential. I look forward to seeing the campaigns they come up with, and hope they take back their "buy all three games" bullshit.

No, I don't have an invite. No, I won't give one to you even if you ask.

Posts for Sunday, March 21, 2010


Irssi 0.8.15-RC1 Released

Irssi 0.8.15 release candidate 1 has been released tonight. I’ve poked some of the package maintainers on IRC, so hopefully it’ll be available as an unstable package in your favourite Linux / BSD distribution or whatever you’re using soon.

Please test it and submit bugs.

For more information, please see Irssi’s website.

Irssi at Open Source Days 2010

Irssi was present at Open Source Days 2010 here in Copenhagen earlier this month. Here’s a nice picture of our fancy new banner that was kindly sponsored by Foreningen Fri Software.

Irssi Banner at Open Source Days 2010

Western Digital Passport - now with 50% less hackability!

I have a Western Digital My Passport here from a friend.  It's been dropped, and it's making clicking noises (uh-oh).  I'm trying to see if it's recoverable, so I thought I'd remove the disk and plug it directly onto the motherboard.

After I read a couple of success stories I thought it would be simple.  At least I'd have a free SATA to USB converter if all else failed.  I removed the case and to my surprise WD is now manufacturing the drives with the USB port directly on the (non-removable) hard disk board.

Don't try and tell me this is necessary, the only reason I can see is to stop people (such as myself) re-using the drive in a computer, or using the enclosure with an upgrade / replacement.

I can't speak for your specific My Passport, but here are the details of this one for the Googlers:
S/N: WX80AB962763
R/N: C0B

The serial number is the same as the internal drive.  This drive is stamped with the date 03 Dec 2009.

If you haven't bought a WD yet, don't expect to be able to replace the internal drive with a generic one!

Posts for Saturday, March 20, 2010


NetworkManager vs wicd vs wpa_gui

Due to some idle time* a couple of weeks ago, here's a quick comparison between a few network control tools for Linux.

These tools all give you some sort of network control from the Desktop - a service traditionally provided by daemons and initialisation scripts.  The problem with that is roaming - it's much more common nowadays to have a laptop travel between multiple access points (Ethernet, 802.11, wireless broadband...) and many of the tasks can be automated.  So what better way to use a point-and-click approach.

The three competitors, and here's how they compare by features:

Tool            802.11 (wireless) control   ethernet control   mobile broadband control   VPN control       dbus notification
NetworkManager  yes                         yes                yes                        yes               yes
wicd            yes                         yes                no                         planned for 2.0   no
wpa_gui         yes                         no                 no                         no                no

Personally I use NetworkManager.  I use all types of network control, and the dbus notification tells my mail client to go offline as soon as the network is not available.  (Previously I would have to wait for my mail client to time out.)

This is not saying that you should use NetworkManager too - find the list of features you require and use the appropriate tool.

Be warned: NetworkManager, while feature rich, is polarising the community - either it works and you love it, or it doesn't work and you hate it.  There is a common wireless connect-disconnect issue which seems to be caused by various different problems.  I see it at work but not at home.  According to one dev, it's buggy kernel drivers, but that doesn't explain why it works for me in some places but not others on the same laptop.  YMMV!

*My development laptop provided by customer A is locked out of their domain - stupid windows!  My employer only has this job for me right now, so I have to wait until they resolve the problem...

Electronically, my dear Wattson

I just borrowed a Wattson Power Meter from a friend at work, and while there's nothing special about power meters, the good folks at DIY Kyoto have put a nice touch on this one.  [Standard disclaimer: I don't work for them and I haven't received any incentives from them either!]

There has been a trend of wireless power meters for the home, so they can be easily adapted to the consumer market.  They solve the problem of running wires around your house - you put the sensor (or current transducer or CT) in your meter box or on a specific appliance, and the display goes somewhere convenient.  Wattson has the opportunity to connect 4 CTs: 3 for 3 phases and one for renewable monitoring, or in any other configuration.

But Why?  Well there were numerous reasons for me, everyone is different:

Firstly I wanted to see how much my 60L camping fridge cost to run on electricity (it runs on LPG, 240V AC or 12V DC).  It turns out it draws less than 100W continuous, which would cost about $160/year on our current tariff (if I calculate correctly).  That's assuming the fridge is running full time, but it has a thermostat so the actual cost will depend on the ambient temperature.
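
Sanity-checking that figure (the ~18 c/kWh tariff is my assumption, roughly what makes the numbers work out):

```shell
# 100 W continuous for a whole year, at an assumed tariff of 0.18 $/kWh
awk 'BEGIN { printf "$%.0f/year\n", 100 * 8760 / 1000 * 0.18 }'
# prints $158/year
```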

Secondly, I have a "solar aware" dishwasher.  Essentially it has a thermostat as well, to measure the water temperature.  If you have solar hot water, you connect the hot pipe (instead of the usual cold) to the dishwasher and it doesn't use its internal electric heater.  I wanted to see if it was cost effective to pay a plumber to put in a hot feed (and a tap for those cloudy days so I still have warm showers).

I connected Wattson, and turned on the dishwasher (full of dishes of course!).  It used about 50W at first, for the actuators I assume.  Then about 200W as the water filled and the "sprinklers" started.  Well, 200 Watts is nothing I thought.  But about 10 minutes in the heater started.  The power jumped up to 1.6kW!!  That's more than my split system air-conditioner!  Luckily it only ran like this for about 20 minutes, but still that's a decent heater!

I calculate about $54 per year just for the dishwasher heater (I can't save the costs of the dishwasher's other actions - unless I have solar power too!).  So it looks like a plumber wouldn't be very cost effective.  I'm probably looking at an $80 call-out fee plus an hour's labour and parts.  Close to $200, which would take four years to pay back!
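
The payback arithmetic, for completeness:

```shell
# ~$200 plumbing cost divided by ~$54/year saved on the heater
awk 'BEGIN { printf "%.1f years to pay back\n", 200 / 54 }'
# prints 3.7 years to pay back
```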

The final stage is to connect Wattson to my meter box, and watch the total energy consumption of my home.  Just from today (Saturday with the whole family home) we use about 500W without airconditioners on, and about 3kW with them on!  It was interesting to see the different appliances turn on and off (fan - 80W, washing machine - 300W, microwave - 2000W).

Wattson provides a few weeks of built-in storage, and there is software called Holmes (yes, Holmes and Wattson).  Holmes is flash based, for Windows or Mac only.  Luckily Wattson uses an FTDI usb-serial connection, so it shouldn't be impossible to get some data in Linux.  I'll keep you posted on my success!

Posts for Friday, March 19, 2010

2 weeks of silence

I just thought I'd let you guys know that I won't be posting anything here in the next 2 weeks because my sweetheart and I are going to enjoy a few weeks off in Cuba. Given that I'm not taking any computer and that Internet in Cuba doesn't seem to be all that available, I guess I'll write something when I come back. Have a blast in the next two weeks without me :-)

Planet Larry is not officially affiliated with Gentoo Linux. Original artwork and logos copyright Gentoo Foundation. Yadda, yadda, yadda.