Posts for Monday, March 29, 2010

acronis true image 2009

i have a windows xp server on which i use ‘acronis true image home 2009’ for periodic backups.

my problem:

my 1TB backup storage filled up with incremental/full backups, even though i had enabled incremental backups with consolidation.

to investigate the issue i installed windows xp into a VirtualBox on my linux machine – to experiment – hoping to find an easy solution. i had previously searched forums for a fix, but it seems a lot of ppl have the same issue without anyone providing a fix.

the acronis documentation does not explain what ‘acronis’ actually does, or it is far too complicated. i’ve read the manual (not the printed version, but the acronis help shown when pressing the ‘?’ button on the ‘backups – incremental’ dialog) a few times, but i did not understand it: it is quite complex and i don’t like how they explain the individual steps.

what i want:

first let’s see what i want:

  1. an incremental backup should be made every day (the first backup is a full backup, of course)
  2. the main archive (the first full backup) should be validated on every backup
  3. if more than 6 backups exist, delete the oldest one
  4. an old backup may only be deleted once the new backup is 100% consistent
  5. the backup must always be in a consistent state: no merging of a full+diff when there is no other full backup around

i’m not sure if the backup merge (merging a full and an incremental backup) is atomic, which would mean that if the merge fails the old files are not lost. after some experiments i still doubt that it is atomic, and i have a bad feeling about this. so i think (4) and (5) can’t be done directly. maybe two backup jobs, one on every ‘odd’ day and the other on every ‘even’ day, would be a solution. but there are so many details i can’t take into account – so for now i think it is best to go with (1), (2) and (3) only, while checking the backup manually from time to time (which i would do anyway).

so here is the configuration with which i achieve (1), (2) and (3) but NOT (4) and (5):



automatic consolidation

the problem: exceeding the harddrive capacity

to recall: my primary problem was that the backups exceeded the harddrive capacity because old archives were never consolidated:

that resulted from the ‘backup method’ dialog, where i had also checked the last option, ‘create a new backup after x incremental or differential backups’. that means: consolidation was never done, since the threshold for consolidation was 6 (shown in the screenshots above) while a new full backup was already created after 3 successive backups (now disabled in the first screenshot).

it seems to me that acronis executes something like this:

if (sum(backups) > consolidation_threshold) -> consolidate backups

where sum(backups) counts all backup types alike – full, incremental and differential.

example: sum(one full backup and 3 differential backups) = 4

however: if the checkbox ‘create a new backup after every x’th backup’ is checked, the consolidation algorithm is never executed when x is smaller than the consolidation_threshold, leaving old backups undeleted!
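my reading of that interaction, as a toy simulation (the variable names, the daily loop and the chain counting are my own assumptions, not acronis internals):

```shell
#!/bin/sh
# a new full backup starts a new chain every new_full_after backups, so the
# per-chain counter never exceeds consolidation_threshold: consolidation
# never fires, while old chains keep piling up on disk.
consolidation_threshold=6
new_full_after=3
chains=0      # full-backup chains sitting on disk
in_chain=0    # backups in the current chain

for day in 1 2 3 4 5 6 7 8 9; do
    if [ "$in_chain" -eq 0 ] || [ "$in_chain" -ge "$new_full_after" ]; then
        chains=$((chains + 1))        # 'create a new backup after x backups'
        in_chain=1
    else
        in_chain=$((in_chain + 1))    # plain incremental backup
    fi
    if [ "$in_chain" -gt "$consolidation_threshold" ]; then
        echo "day $day: consolidation would run"   # never reached for x=3
    fi
done
echo "chains on disk after 9 days: $chains"
```

with x=3 the consolidation branch is never taken and three full chains sit on disk after nine days; disable the ‘new backup after x’ option and the counter passes the threshold on day 7, as expected.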


currently i have disabled ‘create a new backup after x incremental or differential backups’. that means all options are set as shown in the 3 screenshots.


this is a very typical flaw in gui design: intuition misleads based on the facts presented. a much clearer and more intuitive approach would be a schedule visualization for the current setup. i’m not very happy with acronis currently.

i’ve also purchased the more recent ‘true image home 2010’, but it still has the same issue. maybe someone understands this better than me – if so, please give me a hint.

Posts for Saturday, March 27, 2010

magic dhcp stuff – ISC Dynamic Host Configuration Protocol

source: a friend of mine, Andreas Korsten, showed me how to execute custom scripts when a dhcp lease is passed to a client. this is interesting stuff, and since it does not seem to be documented anywhere yet, i decided to blog it. it is probably of use for other admins out there – thanks to Andreas Korsten!


idea: run a custom script when a lease is passed to the client. in the example below every client in the netboot group will trigger ‘custom logging’ and additionally execute a script.

ISC Dynamic Host Configuration Protocol

It is about: net-misc/dhcp-3.1.2_p1 (gentoo, portage), see [1]

No special useflags were used: +kernel_linux -doc -minimal -selinux -static

setup of /etc/dhcp/dhcpd.conf

# vim: set noet ts=4 sw=4:
allow booting;
allow bootp;

server-name "myServer";
default-lease-time 3000;
max-lease-time 6000;
ddns-update-style none;

subnet netmask {
    range;
    option subnet-mask;
    option domain-name-servers;
    option domain-name "myPool";

    group netboot {
        next-server;
        #server-identifier;
        #filename "pxelinux.0";

        #on commit { execute ("/tmp/", hardware , "fnord", host-decl-name, "foo", leased-address, "bar" ); }
        #on commit { execute ("/tmp/", host-decl-name ); }
        #on commit { execute ("/tmp/", leased-address ); }

        # helpful:
        on commit {
            set ClientIP = binary-to-ascii(10, 8, ".", leased-address);
            set ClientMac = binary-to-ascii(16, 8, ":", substring(hardware, 1, 6));
            log(concat("Commit: IP: ", ClientIP, " Mac: ", ClientMac));
            execute("/tmp/", "commit", ClientIP, ClientMac);

            # alternative with error handling (also works with e.g. /root/scripts/dhcp-event):
            #if (execute("/tmp/", "commit", ClientIP, ClientMac) = 0) {
            #    log(concat("Sent DHCP Commit Event For Client ", ClientIP));
            #} else {
            #    log(concat("Error Sending DHCP Commit Event For Client ", ClientIP));
            #}
        }

        host router5 { hardware ethernet 00:40:ff:aa:b0:44; fixed-address; option host-name "router5"; }
        #include "/etc/dhcp/dhcpd.otherhosts.conf";
    }
}

the important lines are the ones in the ‘on commit’ block.

the script

you could send an email or jabber message, or just do some advanced logging. consider: if you have a server farm it might be interesting to see if a reboot actually worked. the arguments passed to the bash script can be processed within the script; the order of the arguments is given by the dhcpd.conf file, see above.
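a minimal sketch of such a script (the filename and the log path are just examples i made up; dhcpd passes the arguments in the order configured above: event type, client ip, client mac):

```shell
#!/bin/sh
# save as e.g. /tmp/ and make it readable and executable
# for the user dhcpd runs as (user 'dhcp' on gentoo)

format_event() {
    # $1 = event type ("commit"), $2 = client ip, $3 = client mac
    printf 'dhcp-event: %s ip=%s mac=%s\n' "$1" "$2" "$3"
}

# append to a log file; this could just as well send a mail or jabber message
format_event "$@" >> /tmp/dhcp-events.log
```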

possible errors

always review the logs, in my case /var/log/syslog. since the dhcpd service on gentoo runs as user ‘dhcp’, and the script was not accessible to that user, this error could be found:

debug: Mar 27 13:45:17 dhcpd: Commit: IP: Mac: 0:40:ff:aa:b0:44
debug: Mar 27 13:45:17 dhcpd: execute_statement argv[0] = /tmp/
debug: Mar 27 13:45:17 dhcpd: execute_statement argv[1] = commit
debug: Mar 27 13:45:17 dhcpd: execute_statement argv[2] =
debug: Mar 27 13:45:17 dhcpd: execute_statement argv[3] = 0:40:ff:aa:b0:44
err: Mar 27 13:45:17 dhcpd: Unable to execute /tmp/ Permission denied
err: Mar 27 13:45:17 dhcpd: execute: /tmp/ exit status 32512
info: Mar 27 13:45:17 dhcpd: DHCPREQUEST for ( from 0:40:ff:aa:b0:44 via ath0
info: Mar 27 13:45:17 dhcpd: DHCPACK on to 0:40:ff:aa:b0:44 via ath0

right after i corrected the permission issue:

debug: Mar 27 13:52:32 dhcpd: Commit: IP: Mac: 0:40:ff:aa:b0:44
debug: Mar 27 13:52:32 dhcpd: execute_statement argv[0] = /tmp/
debug: Mar 27 13:52:32 dhcpd: execute_statement argv[1] = commit
debug: Mar 27 13:52:32 dhcpd: execute_statement argv[2] =
debug: Mar 27 13:52:32 dhcpd: execute_statement argv[3] = 0:40:ff:aa:b0:44
info: Mar 27 13:52:32 dhcpd: DHCPREQUEST for from 0:40:ff:aa:b0:44 via ath0
info: Mar 27 13:52:32 dhcpd: DHCPACK on to 00:40:ff:aa:b0:44 via ath0




Posts for Thursday, March 25, 2010


Using OpenVPN to route a specific subnet to the VPN

I have an OpenVPN server that has the push "redirect-gateway" directive. This directive changes the default gateway of the client to be the OpenVPN server. What I wanted, though, was to connect to the VPN and access only a specific subnet through it, without changing the server config (other people use it as a default gateway).

In the client config I removed the client directive and replaced it with these commands:

What the previous lines do:
tls-client: acts as a client! (“client” is an alias for “tls-client” + “pull” … but I don’t like what the pull did -> it changed my default route)
ifconfig: the tun0 interface will have one IP on our side and another on the server side. The IPs are not random; they are the ones OpenVPN used to assign to me while I was using the “client” directive.
route (server address): route packets destined for the OpenVPN server’s own address over the tun0 interface. In order to access services running on the OpenVPN server I needed a route to them.
route (subnet): route all packets destined for the target subnet over the tun0 interface.
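Put together, the client config fragment can look like this (all addresses here are placeholders I made up; substitute the IPs OpenVPN assigned to you and your actual target subnet):

```
# client.conf fragment -- illustrative addresses only
tls-client
ifconfig 10.8.0.6 10.8.0.5        # local and remote tun0 endpoints
route 10.8.0.1 255.255.255.255    # reach services on the OpenVPN server itself
route 192.168.1.0 255.255.255.0   # the subnet we want to reach over the VPN
```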

A traceroute now shows that I am accessing that subnet through the vpn.


Gentoo... improving?!

There's been lots of talk in the past about Gentoo dying.  I won't provide the links - they're (usually) useless and uneducated non-Gentooers trying to play fortune teller.  From the "inside" perspective of a user, I still use Gentoo and it still works.

So following on from the comments on a previous post about some network control tools, a user commented on a Summer of Code project to improve Network Manager integration in Gentoo.

As I was browsing through the 2010 ideas, I realised there are some quite neat ideas here which will continue to keep Gentoo configurable, fast, and leading edge. Such as tags support for portage; Fastboot and Upstart for 5 to 10 second boot times; Dracut (the "distro-neutral initrd framework"); even an ebuild generator; and Visual Gentoo to graphically edit Gentoo configuration files. (OK this last one, it could be argued, was leading edge a long time ago, but then it could also be argued that text-based configuration files are the one true way!)

There's even some nice Council support goodness tabled.  Anything to help the council, young Padawan!

Let's hope lots of you SOC young'uns get going and support these projects.

So I finish by saying Gentoo: "It's not dead".

Posts for Wednesday, March 24, 2010

Prompted by the tracker shutdown + a short survival guide

We all heard the news about the tracker being shut down, either from news sites (even international ones) or from the official police announcement.

I won't dwell (for now) on the issue of intellectual property. I am fundamentally opposed to it, but I would do the subject an injustice if I developed my thoughts around the shutdown of one torrent tracker. Nor will I dwell on the fact that with its announcement the police threw the presumption of innocence out the window, or on the fact that the last paragraphs were obviously dictated by the rights-holding companies.

I will focus mainly on what exactly a torrent tracker is. Recall that in the notorious Pirate Bay trial, half the charges collapsed on just the second day, because the prosecutors did not know that the movies shared by the users of that torrent tracker were not on the site but on those users' disks. A torrent tracker simply provides a file (.torrent) which contains the metadata needed for that sharing to take place (e.g. the name of the file/movie). Any user who has downloaded that file becomes a member of a "network" sharing the file described by the .torrent file.

(Aside: the more curious among you should look up how DHT technology is used with torrents. It is a process that was also explained at the Pirate Bay trial by its admins, and which in practice strips the tracker of even this simple participation in the sharing of the files, making the whole process fully decentralized.)

So a torrent tracker (Pirate Bay and the like) does not possess illegal material, and therefore cannot be accused of distributing it. What such sites could be accused of is inciting and facilitating illegal activity by their users. I doubt, however, that Greece has the corresponding legal framework to support such a charge. We already read that in Spain there was a favourable court ruling on exactly this issue, which in practice acquits sites of this kind on the grounds that they are mere data relays and therefore do not violate intellectual property law.

(Aside: in this particular case it will be interesting to see how the authorities obtained the addresses and other details of the admins, since this raises the question of a breach of communications privacy.)

Survival Guide

According to the police press release, it appears the personal computers of those arrested were seized. Two small tips to make sure your disk won't "betray" you.

1. First of all, use an encrypted filesystem. The process is very simple (at least on Linux) and is usually a single checkbox during installation. As an example I'll mention Fedora Linux, which I use personally, where I enable the corresponding option during installation:

It then asks me at boot to enter the passphrase I have chosen:

(Remember that a passphrase is not a password. The important thing is not that it is hard, but that it is long. Use, for example, a line from a favourite poem. Not a haiku :P)

2. If you want to erase traces that are already on your disk and do a clean install, first burn a LiveCD. Fedora Linux will do, but for something this simple so would something like slax. Boot your computer with it, and once it has finished booting, open a terminal and type the command:

dd if=/dev/urandom of=/dev/sda

where sda is the 1st disk, sdb the 2nd, and so on. This procedure fills the disk with random data, and it is a good idea to do it first even if you are going to encrypt your disk. Be prepared, though: it will take quite a few hours (10-24h) depending on the capacity of the disk.
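As a sanity check on the "10-24h" figure, here is a back-of-envelope estimate (the 1TB capacity and the 30 MB/s write rate are my assumptions; /dev/urandom itself is often the bottleneck, so treat the result as a lower bound):

```shell
#!/bin/sh
# wipe duration ~= disk capacity / sustained write speed
capacity_gb=1000
throughput_mb_s=30
seconds=$(( capacity_gb * 1024 / throughput_mb_s ))
hours=$(( seconds / 3600 ))
echo "estimated wipe time: about ${hours} hours"
```

which lands at about 9 hours for a 1TB disk, i.e. the low end of the range above.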

Posts for Tuesday, March 23, 2010


WIPUP 19.03.10a – under or overcooked?

It’s WIPUP statistics time, folks. I’d like to apologise for the lack of "proper" posts as I’ve been busy making a portfolio for a university application and working on some new ThoughtScore stuff. Yes, that’s right. So a sad excuse is to look at statistics. (Those viewing my profile would probably know this already though)

As you can see only 3 or so days after the release we’ve hit the same level of views as previous updates. At the same time we see we’ve resumed our correlation between updates and views. I think the image really speaks for itself.

It’s however a bit more interesting to note that we’ve had 4 new updates added by new users (one apparently being a 77 year old lady from Alaska). I’ve also posted a thread on the BlenderArtists "news" forum category, and although we’ve only had 3 people view the thread (yeah, not that active apparently) we’ve gathered 3 very positive comments and had 3 registrations. Sounds good to me. Very good sign.

When dogfooding lately with current WIPs which weren’t built to be documented and aren’t entirely of a personal artistic nature, I’ve noticed a natural reluctance to putting work online. Something along the lines of "it’s not ready! It’s ugly as bollocks!" However I’ve resisted deleting anything, and I don’t regret doing so. I’m concerned, though, that others (after overcoming the initial excitement) will experience the same. I guess it’s time to orchestrate a few social experiments; if they prove anything interesting I’ll post about it later.

All-nighter coming up.

Related posts:

  1. The WIPUP 21.02.10 stats are out.
  2. After the WIPUP release, the stats are in.
  3. Countdown to KDE 4.4 and the new KDE website: 2 days left

Posts for Monday, March 22, 2010

Computer Aided Government?

Random thought of the day…

Like most programmers, I see tendencies of over-optimism in myself.  Yet Mike Judge’s Idiocracy seems like a strange window into the future.  Part of me thinks that government should include an open source heuristic computer simulation doing minimax on wealth creation (aka technology) and personal well-being to aid in decision making.

I suggest a new field of research:  Computer Aided Government (CAG).  How can we wire sensors and algorithms into society to enable us to make optimized decisions?  How can we use game theory, statistics, Bayes’ Theorem, simulation, sensors, neural nets, etc. to improve the human condition?  I think IBM is on to something big with their Smarter Planet initiative.

And just to reel it in, if you think I’m batshit insane: consider that the current best forms of government originated over 300 years ago, if not earlier.  This was before many forms of computation and logic had been explored and applied.  Surely technology can improve this field as it has nearly every other facet of life.  I think open source computer scientists can step up in a big way here.  Research in the field could affect billions to come.

Think on it and comment.



Starcraft 2 BETA Thoughts (or: Cool Kids Club Post)

I remember when I bought Warcraft 3 and started playing through it. I was relatively put off by the "tactical action" campaign levels (levels where you go for long periods of time without a base), and the "heroes". Starcraft has its fair share of UMS Hero maps, where you generally send a horde of units, plus your hero, and if you can swing it, a healing/repairing unit focused solely on your hero. But when the story is shaped around the hero, it puts a strong emphasis on the expendable nature of some of your units, and the "protect at all odds" nature of your hero. Go figure, right?

What does this have to do with Starcraft 2? Absolutely nothing, there's no campaign levels in the BETA :).

I will say one thing. I sucked at the Warcraft 3 campaign, I always cheated my way through the Starcraft campaigns, I loved the Warcraft 2 campaigns, and... I always had my ass handed to me in multiplayer on any of the aforementioned games.

I'm equally delighted to state, that absolutely nothing has changed in this regard! I've never been good at managing resources in those games. I'm either continually bumping into the insufficient resources line, or I wind up with a surplus which ultimately does me no good, because building significant units (battlecruisers, carriers, etc.) just takes too long. I stick one of every building in one single base, and then wonder why I get horribly owned within just a few enemy attack waves.

Starcraft 2 brought some changes that I'm at once delighted, confused, and annoyed by.

Many of the maps have a large tree, or series of rocks, or some other big object that obstructs a path out of your base. And it takes FOREVER to destroy. I stuck an SCV on it at the beginning of one bout, and it never finished the job. This seems awkward. I'm well aware that an SCV isn't exactly powerful, but this is just one example. After depleting all my crystals in one base, I took the leftover fleet of maybe 15 SCVs to one of these tree obstacles, and had them all go to town, force-attacking the structure; still, no reasonable damage was done to it over the duration of the rest of the bout.

I know, I should stick a primary attacking unit on it, but generally speaking, I just send those air units out unobstructed, or with transport units, or, there's usually a second entrance anyways, and I just take that route instead.

I am aware that this tree/rock/whatever usually prevents access to a base location (with enhanced crystal minerals usually) on some maps, but that's not always the case.

The changes to the Zerg creep are very interesting. The creep does NOT expand, except when you do two things:
(1) Evolve a Hatchery into a Hive, and then place your overlords over a normal terrain spot. Overlords now have a "spew creep" option (or something to that effect), in which they constantly drip the goop of the creep, and you create a small radius you can build things on.
Think of the Zerg Creep more like Protoss Pylons now, except you have to sacrifice a unit in order to expand the creep. Needless to say, it's only temporary.
Given that Zerg "buildings" take damage whenever their surrounding creep is gone, this seems like a ridiculously dangerous change. Take out one overlord, and *bam*. Chain reaction that starts the death clock for numerous zerg base expansions.

(2) Build a Nydus Worm (renamed from the Nydus Canal in Brood War). I have never really been able to gauge how much the worm helps. I could also be entirely misconstruing its helpfulness.

You no longer have to build a Creep Colony and then evolve it into a Sunken or a Spore Colony. THANK GOD. This is a change that makes so much damn sense. Were the Creep Colonies even useful for anything before? I don't think they were, and now they're gone. Huzzah at Blizzard's intellect!

I've played very little Protoss so far. Given that I suck with resources just as much as ever, and I still consider Protoss the most money hungry race, I'm shying away from it until I start sucking a lot less at the game. Nothing stood out from the Protoss in the one or two games I played as them except that I think the Reavers are gone.

I've enjoyed playing as the Terran, and have actually snuck out a few wins using them. Every building that is capable of building an addon usually has the choice of two: a "reactor" (allows you to build two units at the same time), or a Tech Kit, which allows that building ONLY to build the advanced units. The command center no longer has those addons, but it can be added on to in one of two ways.

Turning your Command Center into a gigantic (admittedly, ground-enemy-only) turret is HILARIOUS. The Command Center bolts on a swivel head for aiming, and the sound it emits when it shoots... yowza. Think Tanks in Siege Mode, but more bassy. It's a marvelous sound.

Then there's the "communications" addon, which allows you to stack your supply depots, doubling their output; it has the sensor sweep, as usual, and then it has a special SCUD crystal collector that runs for 90 seconds (I think).

Starcraft 2 is a serious evolution. The game looks absolutely beautiful, and it's a shame the beta only has mid sized maps at the largest. I'm looking forward to 4v4 (or bigger?) games with a HUGE beautiful landscape. It'll be great to watch, it always is.

You can add computer players into custom games, but they're "very easy" only at the moment. And boy do they work as advertised >_>. I mean, I'm slow at building compared to a lot of my friends, but holy cow. The fact that the computer works solely on buildings and barely enough units to defend itself? Game over, man.

Also, when you beat the computer, the computer sends "gg" to you via chat :D. I lol'ed a bit.

Starcraft 2 looks amazing, plays amazing, and is still... well, it's still "blocky" to me (that is, the lobby/setup interface), but it has enormous potential. I look forward to seeing the campaigns they come up with, and hope they take back their "buy all three games" bullshit.

No, I don't have an invite. No, I won't give one to you even if you ask.

Posts for Sunday, March 21, 2010


Irssi 0.8.15-RC1 Released

Irssi 0.8.15 release candidate 1 has been released tonight. I’ve poked some of the package maintainers on IRC, so hopefully it’ll be available as an unstable package in your favourite Linux / BSD distribution or whatever you’re using soon.

Please test it and submit bugs.

For more information, please see Irssi’s website.

Irssi at Open Source Days 2010

Irssi was present at Open Source Days 2010 here in Copenhagen earlier this month. Here’s a nice picture of our fancy new banner that was kindly sponsored by Foreningen Fri Software.

Irssi Banner at Open Source Days 2010

Western Digital Passport - now with 50% less hackability!

I have a Western Digital My Passport here from a friend.  It's been dropped, and it's making clicking noises (uh-oh).  I'm trying to see if it's recoverable, so I thought I'd remove the disk and plug it directly onto the motherboard.

After reading a couple of success stories I thought it would be simple.  At least I'd have a free SATA-to-USB converter if all else failed.  I removed the case and, to my surprise, WD is now manufacturing these drives with the USB port directly on the (non-removable) hard disk board.

Don't try to tell me this is necessary; the only reason I can see is to stop people (such as myself) re-using the drive in a computer, or using the enclosure with an upgrade / replacement drive.

I can't speak for your specific My Passport, but here are the details of this one for the Googlers:
S/N: WX80AB962763
R/N: C0B

The serial number is the same as the internal drive.  This drive is stamped with the date 03 Dec 2009.

If you haven't bought a WD yet, don't expect to be able to replace the internal drive with a generic one!

Posts for Saturday, March 20, 2010


NetworkManager vs wicd vs wpa_gui

Due to some idle time* a couple of weeks ago, here's a quick comparison between a few network control tools for Linux.

These tools all give you some sort of network control from the Desktop - a service traditionally provided by daemons and initialisation scripts.  The problem with that is roaming - it's much more common nowadays for a laptop to travel between multiple access points (Ethernet, 802.11, wireless broadband...), and many of the tasks can be automated.  So what better way than a point-and-click approach?

Here are the three competitors, and how they compare by features:

Tool            802.11 (wireless)   ethernet   mobile broadband   VPN               dbus notification
NetworkManager  yes                 yes        yes                yes               yes
wicd            yes                 yes        no                 planned for 2.0   no
wpa_gui         yes                 no         no                 no                no

Personally I use NetworkManager.  I use all types of network control, and the dbus notification tells my mail client to go offline as soon as the network is not available.  (Previously I would have to wait for my mail client to time out.)

This is not saying that you should use NetworkManager too - find the list of features you require and use the appropriate tool.

Be warned: NetworkManager, while feature rich, is polarising the community - either it works and you love it, or it doesn't work and you hate it.  There is a common wireless connect-disconnect issue which seems to be caused by various different problems.  I see it at work but not at home.  According to one dev, it's buggy kernel drivers, but that doesn't explain why it works for me in some places but not others on the same laptop.  YMMV!

*My development laptop provided by customer A is locked out of their domain - stupid windows!  My employer only has this job for me right now, so I have to wait until they resolve the problem...

Electronically, my dear Wattson

I just borrowed a Wattson Power Meter from a friend at work, and while there's nothing special about power meters, the good folks at DIY Kyoto have put a nice touch on this one.  [Standard disclaimer: I don't work for them and I haven't received any incentives  from them either!]

There has been a trend of wireless power meters for the home, so they can be easily adapted to the consumer market.  They solve the problem of running wires around your house - you put the sensor (a current transducer, or CT) in your meter box or on a specific appliance, and the display goes somewhere convenient.  Wattson can connect up to 4 CTs: 3 for 3 phases and one for renewable monitoring, or any other configuration.

But Why?  Well there were numerous reasons for me, everyone is different:

Firstly I wanted to see how much my 60L camping fridge costs to run on electricity (it runs on LPG, 240V AC or 12V DC).  It turns out it draws less than 100W continuous, which would cost about $160/year on our current tariff (if I calculate correctly).  That's assuming the fridge is running full time, but it has a thermostat, so the actual cost will depend on the ambient temperature.

Secondly, I have a "solar aware" dishwasher.  Essentially it has a thermostat as well, to measure the water temperature.  If you have solar hot water, you connect the hot pipe (instead of the usual cold) to the dishwasher and it doesn't use its internal electric heater.  I wanted to see if it was cost effective to pay for a plumber to put in a hot feed (and a tap for those cloudy days so I still have warm showers).

I connected Wattson and turned on the dishwasher (full of dishes, of course!).  It used about 50W at first, for the actuators I assume.  Then about 200W as the water filled and the "sprinklers" started.  Well, 200 watts is nothing, I thought.  But about 10 minutes in, the heater started.  The power jumped up to 1.6kW!!  That's more than my split system air-conditioner!  Luckily it only ran like this for about 20 minutes, but still, that's a decent heater!

I calculate about $54 per year just for the dishwasher heater (I can't save the costs of the other actions of the dishwasher - unless I have solar power too!).  So it looks like a plumber wouldn't be very cost effective.  I'm probably looking at an $80 call out fee plus an hours labour and parts.  Close to $200, which would take four years to pay back!
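The rough arithmetic behind that figure, as I understand it (one wash per day is my assumption; the 1.6kW and 20 minutes are the readings above):

```shell
#!/bin/sh
# yearly heater energy = 1.6 kW * (20/60) h * 365 washes
# (kW is scaled by 10 to keep the shell arithmetic in integers)
kw_tenths=16
minutes=20
washes_per_year=365
kwh_per_year=$(( kw_tenths * minutes * washes_per_year / 60 / 10 ))
echo "dishwasher heater: about ${kwh_per_year} kWh per year"
```

That's about 194 kWh per year; divide the $54 by that and you get the implied tariff, roughly 28c/kWh.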

The final stage is to connect Wattson to my meter box, and watch the total energy consumption of my home.  Just from today (Saturday with the whole family home) we use about 500W without airconditioners on, and about 3kW with them on!  It was interesting to see the different appliances turn on and off (fan - 80W, washing machine - 300W, microwave - 2000W).

Wattson provides a few weeks of storage built-in, and there is software called Holmes (yes, Holmes and Wattson).  Holmes is flash based, for Windows or Mac only.  Luckily Wattson uses an FTDI usb-serial connection, so it shouldn't be impossible to get some data in Linux.  I'll keep you posted on my success!

Posts for Friday, March 19, 2010

2 weeks of silence

I just thought I'd let you guys know that I won't be posting anything here in the next 2 weeks cause me and my sweetheart are going to enjoy a few weeks off in Cuba. Given that I'm not taking any computer and that Internet in Cuba doesn't seem to be all that available, I guess I'll write something when I come back. Have a blast in the next two weeks without me :-)

Posts for Thursday, March 18, 2010

kernel 2.6.25 and udev > 141 issue

source:

problem:

labsystem boot halts with this error:

Current udev only supports Linux kernel 2.6.25 and newer.
Your kernel is too old to work with this version of udev.


either upgrade the kernel or downgrade udev; i downgraded udev

system setup before fix was applied:

  • installed kernel: 2.6.23-gentoo
  • installed   udev: sys-fs/udev-149

usb boot stick: grml version: 2008.11 – Release Codename Schluchtenscheisser 2008-11-30

disk usage:
/dev/sda* not important

/dev/sdb1 boot,ext2
/dev/sdb2 lvm

/dev/sdc1 boot,ext2
/dev/sdc2 lvm

md0: /dev/sdb1 /dev/sdc1
md1: /dev/sdb2 /dev/sdc2

now the fix

i basically used [1] and [2] as references. i don’t like that portage didn’t flag this as a dependency issue.

  1. grml usb stick boot
  2. ‘Start mdadm-raid’ found in the bootlog on Console F1
  3. vgscan; vgchange -ay (wait about 10-30secs)
  4. mount /dev/vg/root /mnt/root (create /mnt/root first)
  5. cd /mnt/root; chroot .
  6. emerge =sys-fs/udev-146-r1 (did not work)
  7. emerge =sys-fs/udev-141
    edit /etc/portage/package.mask
    # DON’T INSTALL udev > 141 without making a kernel update first (2010-03-18)
  8. exit; reboot
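for step 7, the /etc/portage/package.mask entry can look like this (the ‘>’ atom is standard portage syntax for masking everything newer than the given version):

```
# /etc/portage/package.mask
# DON'T INSTALL udev > 141 without making a kernel update first (2010-03-18)
>sys-fs/udev-141
```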

system setup after fix was applied:

  • installed kernel: 2.6.23-gentoo (not changed)
  • installed udev: sys-fs/udev-141



Posts for Tuesday, March 16, 2010

ogre 3d IV

i’ve finished a set of basic ogre examples – since the ones coming with ogre 1.7+ are not very good. you can download them from [1]. i’ve added a README which explains everything in detail, again see [1]. i wish i had the time to create more of these examples. so here are a few screenshots.

mars example (about texturing)

ninja-camera (about using the mouse navigation)

mars-rings (about creating circles in ogre)

mars-rings-shader-cg (about using shaders)

so basically my examples are about:

  • how to create fully self-contained simple examples independent of each other
  • how to get ogre running (with the example framework from the ogre wiki tutorials)
  • using ois or sdl joystick/gamepad support in ogre
  • how to create spheres and how to texture them properly
  • how to approximate rings – circles approximated with line segments
  • how to use cg (nvidia) and glsl (opengl) shaders
  • how to use cmake
  • how to write small yet powerful examples
  • i’ve also paid attention to a very lightweight CustomMedia system; it will make it easy for you to understand how to use materials and textures together with shaders

all of these use cmake and you can use them for whatever purpose you like. just note that i’ve included code from the ogre wiki and i don’t know which copyright or license they use, which is ‘yet’ another criticism (but i will stop right here). i would like to thank the irc folks for their support.
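for readers new to cmake, a minimal CMakeLists.txt for one of these examples might look roughly like this (the target name and the pkg-config module names are illustrative assumptions, not taken from the actual download):

```cmake
cmake_minimum_required(VERSION 2.6)
project(mars_example)

# locate ogre and ois via pkg-config (assumed installed system-wide)
find_package(PkgConfig REQUIRED)
pkg_check_modules(OGRE REQUIRED OGRE)
pkg_check_modules(OIS REQUIRED OIS)

include_directories(${OGRE_INCLUDE_DIRS} ${OIS_INCLUDE_DIRS})
add_executable(mars main.cpp)
target_link_libraries(mars ${OGRE_LIBRARIES} ${OIS_LIBRARIES})
```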

it would be nice if you dropped me an email with feedback on how you liked/used my examples.



Posts for Monday, March 15, 2010


Today I was pointed to this article titled "Programming is the New Literacy". (Go, read it, then come back.) Ok, that was some long text, wasn't it? But well written, don't ya think?

Naaa, who am I kiddin, not all of you read it so here's a brief summary: The author Marc Prensky (who has written books on game-based learning) argues that what we consider "literacy" will change in the future (and actually already does in the present). It will transform from "the ability to carefully read and write a contemporary spoken language" into "the ability to make digital technology do whatever, within the possible, one wants it to do -- to bend digital technology to one's needs, purposes, and will, just as in the present we bend words and images". He argues that kids are already living that change while others are just talking: kids build games with Flash or PHP or some other technology and get into programming. The author's idea of what programming is, though, is more generic than what most people would call programming, with him including customizing your "programmable" remote control in the whole thing.
He makes a good point (and I'm gonna quote this paragraph here cause I can't summarize it any better):
One might ask, "Will every educated person really have to program? Can't the people who need programming just buy it?" Possibly. Of course, with that model, we have in a sense returned to the Middle Ages or ancient Egypt, or even before. Then, if you needed to communicate your thoughts on paper, you couldn't do it yourself. You had to hire a better-educated person -- a scribe -- who knew the writing code. Then, at the other end, you needed someone to read or decode it -- unless, of course, you were "well educated," that is, you had been taught to read and write and thus had become literate.

While I think the author does make a great point that addresses the whole "complex devices should just have one button, preferably with an Apple logo on it" bullshit agenda that has been pushed lately, he has made a mistake that is just all too common: a direct translation of historic and current structures into the future.

If you are a native speaker you will by now probably have realized that I'm none: My English is … let's just call it "spiced up" with "Germanisms", phrases that are a literal translation of a German figure of speech or grammatical structure that can only be considered to be wrong in English. We know that kind of thing from languages but we don't seem to apply that kind of knowledge and structure to other topics.

Where our linked author decided that our world has "moved on" (yes, I have been heavily into Stephen King's Dark Tower series lately) he just intends to replace something we have had in order to adapt to the new world. That has often been a good approach: When people complain that the "wild people" know so much more about nature and animals and whatnot, we can easily see that that kind of knowledge is a skill necessary for survival in their world, whereas we in our modern, "civilized" world have to rely on other kinds of knowledge (like, for example, the knowledge of how to handle an ATM). When evolving from Neanderthals to our current state of being we dropped skills we couldn't use any longer for skills we needed, cause learning skills takes time and time is bloody limited (it's not like we couldn't learn all about the local wildlife and plants, most of us just don't have the time and we don't really need it). For literacy that does not apply.

What really happens is that we add another skill to our literacy requirement: While being literate was enough for you to basically have rather good opportunities to be successful in the recent past and probably today, in the future just being able to speak and write a current language won't suffice, but it will still be a requirement. There are people that are able to get some programming done while not being able to write but their options are really limited and they are basically the statistical anomalies that every set of real-world data has. What will happen is that we will not just have trouble making sure we're not leaving the people behind who have trouble reading or writing, we'll have the new trouble of leaving behind people who are not sufficiently computer-literate.

The really interesting thing about the text was that, while I thought it was basically a rather smart piece of writing, it just affirmed my perception of us human beings as rather simple creatures: When faced with change we try not to move too much.

We've accepted that today every person needs a certain skill set (literacy being one of said skills) and when change comes, we just take one skill out of the "requirements" basket and pick up another one (preferably we just take the label of the thing we throw out and stick it on the new thing). We see that a lot: When old business models fail heavily (yes, I am looking at you guys from the film and music "industry" here - what the heck, make that "content industry") we look for ways to just pick up one piece from our old house of cards and replace it with one that looks a little bit different. Just don't have too much change.

I know that change is scary. It scares me just as much as it does the other guy. But we won't make it better by pretending it's not there and just replacing words and relabeling things is not "dealing with change". What do you think? Are those of us that can program the scribes of our world? Is programming the power base for us as a new elite?

Ethical website advertising?

The web is one of those strange places where people expect things for free. The reason is quite simple: all of the internet’s "goods" and "services" are virtual, they aren’t tangible doohickeys we can break down and sell for spare parts. However, the reality is that providing that service, no matter how much cheaper it’s getting through bulk datacenters, still costs money.

Each webmaster should be well within their rights to try and monetise websites, at least to the point to cover running costs. However in my opinion people’s perceived view of what is "ethical" advertising is skewed at best. Some people say that all advertising is bad and ethical advertising is a banner or two that will be removed once the payment quota has been achieved. Others say advertising is only ethical if the contents of the advertisement are from one-on-one deals or with related content. Another group calls ads ethical as long as they don’t follow the ad format conventions and aren’t popups – in fact, if you take off your leaderboard ad from the bottom of your page, make it a unique square shape and plaster it in a "dedicated" section, people are less likely to consider it unethical. Of course there’s also the extremists who believe any ad on a webpage is the sign of the devil and use browser extensions to rid themselves completely of it.

The reason I’m asking this question is because I’m wondering whether or not WIPUP should have ads (or any other monetising factor). Right now WIPUP is sponsored by our very lovely host OpticEmpire, who has been just plain awesome. However, it’d be fair to give back to them, as well as reward developers (no, I’m not saying this just because I’m the only developer at the moment). But WIPUP being an open-source project, any concept of "salary" is considered taboo. Obviously this is all hypothetical as there’s no way WIPUP can at this stage generate any respectable amount of revenue (and even if I wanted to dump an ad in, there’s no space in the layout :P ).

The obvious finger-pointing scenario right now is Wikipedia. They’re asking for money from their users. Directly. Some consider that ethical, whereas I find it downright silly that they couldn’t just have invented their own ad format that wasn’t particularly intrusive, made clear what was positive and what was normative, charged advertisers, and let users continue to use the site perfectly well for free without the regular campaigning hubbub. Now let’s question again whether asking for money directly is ethical given an accepted alternative.

I think the major blockade here is that people are confusing open-source software with services provided on top of open-source software. I’m wondering if any standard has already been set for dividing revenue gained through an open-source initiative where one party pays actual money to provide the service, and another party pays no money but spends time creating the service. For the sake of simplicity, let’s disregard marketing or transport costs for physical developer meetings.

Any thoughts?

Related posts:

  1. Countdown to KDE 4.4 and the new KDE website: 1 day left
  2. Countdown to KDE 4.4 and the new KDE website: 5 days left
  3. Countdown to KDE 4.4 and the new KDE website: 4 days left

Posts for Sunday, March 14, 2010

Book review: Python Testing – Beginner’s Guide

As mentioned before, some days ago I received a copy of a recent book from Packt Publishing titled “Python Testing – Beginner’s Guide” by Daniel Arbuckle. I read the whole book (it’s not huge, around 220 pages), and wrote a review, as requested by Packt.

The book targets people who know Python (it doesn’t contain a language introduction chapter or anything like that, which would be rather pointless anyway), and want to start testing the code they write. Even though the author starts by explaining basic tools like doctests and the unittest framework contained in the Python standard library, it could be a useful read even if you’ve used these tools before, e.g. when the Mock library is explained, or in the chapter on web application testing using Twill.

The text is easy to read, and contains hands-on code examples, explanations, as well as tasks for the reader and quiz questions. I did not audit all code for correctness (although in my opinion some more time should have been invested here before the book was published: some code samples contain errors, even invalid syntax (p45: “self.integrated_error +q= err * delta“), which is not what I expect in a book about code testing), nor all quizzes. These could’ve used some more care as well, e.g. on p94 one can read

What is the unittest equivalent of this doctest?

>>> try:
...     int('123')
... except ValueError:
...     pass
... else:
...     print 'Expected exception was not raised'

I was puzzled by this, since as far as I could remember, int(‘123′) works just fine, and I didn’t have a computer at hand to check. Checked now, and it works as I expected, so maybe I’m missing something here? The solution found in the back of the book is a literal unittest-port of the above doctest, and should fail, if I’m not mistaken:

>>> def test_exceptions(TestCase):
...     def test_ValueError(self):
...         self.assertRaises(ValueError, int, '123')

This example also shows one more negative point of the book, IMHO: the code samples don’t follow PEP-8 (or similar) capitalization, which sometimes makes the code rather hard to read.
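For comparison, a port that actually exercises the exception would have to feed int something unparsable – note int('abc') rather than int('123'), and conventional PEP-8-style naming (this is my sketch, not the book's code):

```python
import unittest

class TestExceptions(unittest.TestCase):
    def test_value_error(self):
        # int('abc') really raises ValueError; int('123') parses just fine
        self.assertRaises(ValueError, int, 'abc')
```

Run under python -m unittest this passes, whereas asserting a ValueError from int('123') fails, exactly as suspected.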

The solutions for the last quiz questions are missing as well, and incidentally those were exactly the ones I wanted to read.

Don’t be mistaken though: these issues don’t reduce the overall value of the book, it’s certainly worth your time, as long as you keep in mind not to be too confused by the mistakes as shown above.

Topic overview

The book starts with a short overview of types of testing, including unit, integration and system testing, and why testing is worth the effort. This is a very short overview of 3 pages.

Starting from chapter 2, the doctest system is introduced. I think it’s an interesting approach to start with doctest instead of unittest, which is modeled after the more ’standard’ xUnit packages. Doctests are useful during specification writing as well, which in most projects is the first stage, before any unit-testable code is written. The chapter also gives an overview of the doctest directives, which was useful to read.
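To recap what the chapter covers: a doctest is simply an interactive session pasted into a docstring (or a text file) and replayed by the doctest module – a generic illustration, not an example from the book:

```python
def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    >>> add('doc', 'test')
    'doctest'
    """
    return a + b

if __name__ == '__main__':
    import doctest
    doctest.testmod()  # silent when all examples pass
```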

Chapter 3 gives an example of the development of a small project and all the stages involved, including how doctests fit into every stage.

Maybe a sample of Sphinx and its doctest integration would have been a nice addition to one of the previous chapters, since the book introduced doctest as part of stand-alone text files, not as part of code docstrings (although it does talk about those as well). When writing documentation in plain text files, Sphinx is certainly the way to go, and its doctest plugin is a useful extra.

Starting in chapter 4, the Python ‘mocker‘ library is introduced. The chapter itself is a rather good introduction to mock-based testing, but I don’t think mocks should be used in doctests, which should be small, example-like snippets. Mock definitions don’t belong there, IMO. This chapter also shows some lack of pre-publishing review in a copy-paste error: the block explaining how to install mocker on page 62 ends by telling us that from now on Nose is ready to be used.

Chapter 5, which you can read here, introduces the unittest framework, its assertion methods, fixtures and mocking integration.

In chapter 6, ‘nose‘ is introduced, a tool to find and run tests in a project. I use nose myself in almost every project, and it’s certainly a good choice. The chapter gives a pretty good overview of the useful features nose provides. It does contain a strange example of module-level setup and teardown methods, whilst IMHO subclassing TestCase would be more suitable (and more portable).
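The more portable alternative – per-test fixtures on a TestCase subclass instead of module-level setup/teardown functions – works under both plain unittest and nose (illustrative code, not from the book):

```python
import unittest

class RecordStoreTest(unittest.TestCase):
    def setUp(self):
        # runs before every test method, under unittest and nose alike
        self.records = ['alice', 'bob']

    def tearDown(self):
        # runs after every test method, even if the test failed
        del self.records

    def test_count(self):
        self.assertEqual(len(self.records), 2)

    def test_append(self):
        self.records.append('carol')
        self.assertIn('carol', self.records)
```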

Chapter 7 implements a complete project from specification to implementation and maintenance. Useful to read, but I think the chapter contains too much code, and it’s repeated too often.

Chapter 8 introduces web application testing using Twill, which I had never used before (nor had I ever tested a web application before). Useful to read, but Twill might be a strange choice, since there have been no releases since the end of 2007… Selenium might have been a better choice?

A large part of the chapter is dedicated to listing all possible Twill commands as well, which I think is a waste of space; this can easily be found in the Twill language reference.

Chapter 9 introduces integration and system testing. Interesting to read, and the diagram-drawing method used is certainly useful, but it too contains too many code listings.

Finally, chapter 10 gives a short overview of some other testing tools. First is explained, which is certainly useful. Then integration of test execution with version control systems is explained. I think this is certainly useful, but not at this level of detail. Setting up a Subversion repository is not exactly what I expect here, especially not when non-anonymous, password-based authentication over svn:// is used (which is a method which should be avoided, AFAIK).
Finally, continuous integration using Buildbot is tackled. No comments here, although I tend to use Hudson myself ;-)

Final words

Is this book worth your time and money? If you’re into Python and you don’t have lots of experience with testing Python code, it certainly is. Even if you wrote tests using unittest or doctests before, you’ll most likely learn some new things, like using mocks.

I’m glad Packt gave me the opportunity to read and review the book. I’d advise them to put some more effort in pre-publishing reviews for future titles, but the overall quality of the non-code content was certainly OK, and I hope lots of readers will enjoy and learn from this book.

Jack a dull boy

I moved several weeks ago and am finally getting all my pieces back together again. Moving takes a considerable amount of focus, I’ve come to learn: from learning about the new place I was going to live in, to gathering everything that was important and deciding how to get it there. This is how we learn, and unfortunately it led to my old computer being unusable :(, but (thankfully) it also led to the gift of an older computer. After several weeks of effort, I’ve got my desktop up to a state of usability and for the most part am satisfied with the outcome. Sorry for being absent for a bit, but I’m sort of in the swing of things again. Some Linux things I’ve done lately:

The restored computer is a 2004 HP Pavilion ze5500 laptop (a 5570 technically, but really the same as all the 5500 series). For the most part everything worked out of the box. It’s a good computer for its age, and has some nice features. However, no more KDE for me, at least for now. The 490MB of memory it has just ain’t enough horse to power this Vista-ish competitor. One thing I noticed immediately was how far disk I/O improvements have come in the last few years. When you hit swap in Linux (at least on this computer) things crunch to a crawl. And because Linux is designed to use memory as much as possible (for best performance – which works well on computers with lots of RAM and fast disks), KDE 4.4 was right out. And in came something better (you can find my install guide for the Pavilion here):

Enter the Bird (LXDE)

I had forgotten how much I loved this desktop. Not only is it light, but it has most of what I need. Did I mention it runs well? A couple of people were talking in the forum the other day about how good XP was. This computer also has XP on it and they were right: light, competent, responsive, few bugs. Well, getting LXDE on here made me feel like that, plus it’s more customizable. Just solid. Love this desktop. As for KDE 4.4: 4.4 is nice and I’m sure my other computer would have loved it; unfortunately 4.4 isn’t ready for prime time and shouldn’t be used in distros, I’m thinking, ’till 4.4.3, possibly .4. It nicely sorts out some issues I had earlier with it, and Oxygen is beginning to look real nice. Anyways, because LXDE is kicking ass, I made an icon for it. Feel free to check it out here.

Oh My Darlin’

Just a quick note for those that haven’t heard of it yet: Clementine, a new music player, is being developed. If you’ve heard of it before, you’ll know that it’s a music player developed from the Amarok 1.4 code-base. Basically what they are trying to do is port the Qt3 code to Qt4, and from the early preview they’re doing a good job of it. I’ve never tried 1.4 before so this was a nice treat. A real basic music player that provides good music handling and has nice features too, like Internet radio. The preview is still not ready for regular use for me as it had a bad memory leak, but I hope development continues – best lightweight music player I’ve seen so far next to Google’s.


I love collaborative working, it helps me be creative and can sometimes keep me interested when alone I would probably have dropped a project. On the other hand it obviously adds some extra work and challenges to a project: You have to organize communication, make sure that tasks get done and that results are properly communicated plus fixing misunderstandings and all that jazz.

All those things are fine, you go into a project prepared and ready for action. What sucks is if people decide to just drop everything and mess the project up that way.

You invest time, work, brainpower and motivation to do your tasks and to help others do theirs, and right before you reach a critical milestone (one that might, for example, help get funding) some project partner/coworker decides to just say "Naa, this isn't gonna be successful, right? I'll just walk away." And while this is something you can usually fix, the ripples it causes in the whole project lead to others losing motivation and the project finally dying.

Working together with people who are not willing to face a challenge when it comes to a project, people who just roll over and let it die, is demotivating and highly annoying.

And this is the real danger for projects: Not difficulties and problems that were not foreseen, not people having trouble with each other. The real danger is that some people are not willing to fight for something, are not willing to face challenges and address them.

I love working on projects with people but that last week really sucked in that department.

Posts for Friday, March 12, 2010

aopy: aspect oriented python

Aspect-oriented programming is one of those old new ideas that haven’t really made a big impact (although perhaps it still will; research ideas sometimes take decades to appear in the professional world). The idea is really neat. We’ve had a few decades now to practice our modularity, and the problem hasn’t really been fully solved (the number of design patterns that have been invented is, I think, telling). What’s different about AOP from just plain old “architecture” is the notion of “horizontal” composition. That is to say, you don’t solve the problem by decomposing and choosing your parts more carefully; you inject code into critical places instead. The technique is just as general, but, I would suggest, differently applicable.

I realized I haven’t really explained anything yet, so let’s look at a suitably contrived example.

A network manager

Suppose you’re writing a network manager type of application (I actually tried that once). You might have a class called NetworkIface. And the class has an attribute ip. So how does ip get its value? Well, it can be set statically, or via dhcp. In the latter case there is a method dhcp_request, which requests an ip address and assigns to ip.

# <./main.py>
class NetworkIface(object):
    def __init__(self):
        self.ip = None
    def dhcp_request(self):
        self.ip = (10,0,0,131) # XXX magic goes here
if __name__ == '__main__':
    iface = NetworkIface()
    iface.ip = (10,0,0,1)
    iface.ip = (10,0,0,2)

Download this code:

Now suppose you are in the course of writing this application, and you need to do some debugging. It would be nice to know a few things about NetworkIface:

  1. The dhcp server seems to be assigning ip addresses to clients in a (possibly) erroneous manner. We’d like to keep a list of all the ips we’ve been assigned.
  2. Sometimes the time between making a dhcp request and getting a response seems longer than reasonable. We’d like to time the execution of the dhcp_request method.
  3. Some users are reporting strange failures that we can’t seem to reproduce. We would like to do exhaustive logging, ie. every method entry and exit, with parameters.

Now, this kind of debugging logic, however we realize it, is not really something we want in the release version of the application. It doesn’t belong. It belongs in debug builds, and we’re probably not going to need it permanently.

Here we will demo how to achieve the first point and omit the other two for brevity.

Where AOP comes in

Common to these issues is the fact that they all have to do with information gathering. But that’s not necessarily the only thing we might want to do. We might want to tweak the behavior of dhcp_request for the purpose of debugging. For instance, if it took too long to get an ip, we could set one statically after some seconds. Again, that would be a temporary piece of logic not meant to be in the release version.

Now, AOP says “don’t change your code, you’ll only make a mess of it”. Instead you can write that piece of code you need to write, but quite separately from your codebase. This you call an aspect, with the intention that it captures some aspect of behavior you want to inject into your code. And then, during compilation from source code to bytecode (or object code), you inject the aspect code where you want it to go. Compiler? Yes, AOP comes with a special compiler, which makes injection very toggleable. Want vanilla code? Use the regular compiler. Want aspected code? Use the AOP compiler.

How does the compiler know where to inject the aspect code? AOP defines strategic injection points called join points. Exactly what these are depends on the programming language, but typically there is a join point preceding a method body, one preceding a method call, one at a method return, and so on. (As we shall see, in aopy we are being more Pythonic.) Join points are defined by the AOP framework. But how do you tell it to inject there? With point cuts. A point cut is a matching string (i.e. a regular expression) which is matched against every join point and determines whether injection happens there.
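The matching idea can be sketched in a few lines: treat each join point as a name like 'module:Class/member' and each point cut as a regular expression over such names (my illustration of the concept, not aopy's actual matcher):

```python
import re

def matches(pointcut, joinpoint):
    # the point cut regex must cover the whole join point name
    return re.fullmatch(pointcut, joinpoint) is not None

# exact match
assert matches('main:NetworkIface/ip', 'main:NetworkIface/ip')
# a wildcard point cut hitting every 'ip' member in module 'main'
assert matches(r'main:\w+/ip', 'main:NetworkIface/ip')
# no match, no injection
assert not matches('main:NetworkIface/ip', 'main:NetworkIface/dhcp_request')
```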

Back to you, John

Enough chatter, the code is getting cold! As it happens, Python has first rate facilities for writing AOP-ish code. We already have language features that can modify or add behavior to existing code:

  • Properties let us micromanage assignment to/reading from instance variables.
  • Decorators let us wrap function execution with additional logic, or even replace the original function with another.
  • Metaclasses can do just about anything to a class by rebinding the class namespace arbitrarily.

We will use these language constructs as units of code injection, called advice in AOP. This way we can reuse all the decorators and metaclasses we already have and we can do AOP much the way we write code already. Let’s see the aspects then.
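As a quick standalone illustration of property-based advice – a property silently intercepting every assignment (a hypothetical class, not part of aopy):

```python
class Logged(object):
    def _get_x(self):
        return self._x

    def _set_x(self, value):
        # injected behavior: record every value ever assigned
        self.history = getattr(self, 'history', []) + [value]
        self._x = value

    x = property(_get_x, _set_x)

obj = Logged()
obj.x = 1
obj.x = 2
# client code only ever touched obj.x, yet obj.history is [1, 2]
```

The caching aspect works the same way, only with the property installed by the compiler instead of by hand.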

A caching aspect

The first thing we wanted was to cache the values of ip. For this we have a pair of functions which will become methods in NetworkIface and make ip a property.

# <aspects/cache.py>
class Cache(object):
    def __init__(self):
        self.values = set()
        self.value = None
cache = Cache()
def get(self):
    return cache.value
def set(self, value):
    if value:
        print "c New value: %s" % str(value)
    if cache.values:
        prev = ", ".join([str(val) for val in cache.values])
        print "c  Previous values: %s" % prev
    if value:
        cache.values = cache.values.union([value])
    cache.value = value

Download this code:

Cache is the helper class that will store all the values.

A spec

Aspects are defined in specification files which provide the actual link between the codebase and the aspect code.

# <./>
import aopy
import aspects.cache
caching_aspect = aopy.Aspect()
caching_aspect.add_property('main:NetworkIface/ip',
    fget=aspects.cache.get, fset=aspects.cache.set)
__all__ = ['caching_aspect']

Download this code:

We start by importing the aopy library and the aspect code we’ve written. Then we create an Aspect instance and call add_property to add a property advice to this aspect. The first argument is the point cut, ie. the matching string which defines what this property is to be applied to. Here we say “in a module called main, in a class called NetworkIface, find a member called ip“. The other two arguments provide the two functions we wish to use in this property.


To compile the aspect into the codebase we run the compiler, giving the spec file. And we give it a module (or a path) that indicates the codebase.

$ aopyc -t
Transforming module /home/alex/uu/colloq/aopy/code/
Pattern matched: main:NetworkIface/ip on main:NetworkIface/ip

Download this code: main.compile

The compiler will examine all the modules in the codebase (in this case only main.py) and attempt code injection in each one. Whenever a point cut matches, injection happens. The transformed module is then compiled to bytecode and written to disk (as main.pyc).

main.pyc now looks like this:

# <./main.py> transformed
import sys ### <-- injected
for path in ('.'): ### <-- injected
    if path not in sys.path: ### <-- injected
        sys.path.append(path) ### <-- injected
import aspects.cache as cache ### <-- injected
class NetworkIface(object):
    def __init__(self):
        self.ip = None
    def dhcp_request(self):
        self.ip = (10,0,0,131) # XXX magic goes here

    ip = property(fget=cache.get, fset=cache.set) ### <-- injected
if __name__ == '__main__':
    iface = NetworkIface()
    iface.ip = (10,0,0,1)
    iface.ip = (10,0,0,2)

Download this code:

Injected lines are marked. First we find some import statements that are meant to ensure that the codebase can find the aspect code on disk. Then we import the actual aspect module that holds our advice. And finally we can ascertain that NetworkIface has gained a property, with get and set methods pulled in from our aspect code.

Running aspected

When we now run main.pyc we get a message every time ip gets a new value. We also get a printout of all the previous values.

c New value: (10, 0, 0, 1)
c New value: (10, 0, 0, 2)
c  Previous values: (10, 0, 0, 1)
c New value: (10, 0, 0, 131)
c  Previous values: (10, 0, 0, 1), (10, 0, 0, 2)

Download this code: main.output

And yet the codebase has not been touched: if we execute main.py instead, we find the original code.

Here the show endeth

And that wraps up a hasty introduction to AOP with aopy. There is a lot more to be said, both about AOP in Python and aopy in particular. Interested parties are kindly directed to these two papers:

  1. Strategies for aspect oriented programming in Python
  2. aopy: A program transformation-based aspect oriented framework for Python

If you prefer reading code rather than English (variable names are still in English though, sorry about that), here is the repo for your pleasure:

And if you still have no idea what AOP is and think the whole thing is bogus, then you can watch this google talk (and who doesn’t love a google talk!) by Mr. AOP himself.


What should I put in the ceiling?

Mock exams are all over and they’re probably the best mock exams I’ve ever sat. However, today I’d like to talk about a rather funny trend going on in my college’s common room, where we bum around in-between classes. Before I start I’d like to share a short paraphrased story from another person who paraphrased it from the original. Or something.

Some Brits might’ve heard of the Fencemaster. He’s your average Brit: lives in London, late thirties, an office worker who cycles to work each day. He’d chain his bike to a fence near his workplace every day. It has to be said that this fence was probably the most dull, unimaginative fence possible: facing a dumpster and behind that a blank building wall, not in anyone’s line of sight, and his bike was the only thing that decided to chain itself to it every day. During the tube strike it was obviously quite trendy to use the upcoming new popular form of transport – bicycles – and our hero was miffed to be confronted one morning by a sign affixed to the fence stating clearly "Howard De Walden Estates Limited. Bicycles found parked against or chained to these railings will be removed without further notice".

Obviously it was directed at our hero, and being the classic Brit, he took the sign literally: that night he calmly drilled two holes in an old kettle and the next day padlocked it to the fence. This attracted tourists, and being classic tourists, they took pictures standing next to it. The Fencemaster wasn’t through – he then attached a steam iron, then a refrigerator door – and by this time others decided to join in. An ironing board was chained to the fence. Stuffed animals. Champagne flutes.

The fence in question now can barely be seen for all the objects that are attached to it. The police decided to pay him a visit but they were fans of the ordeal, suggesting "it might be a good idea not to attach things to the fence anymore", adding "we realise of course you can’t stop other people from attaching things to the fence". What’d he do? He created a website –

This happened quite a long time ago and the site can now only be accessed through the Internet Archive’s Wayback Machine. However, for the lazy, here’s the fence in question.

Of course, that’s not what this post is really about :) – no, our common room has a rather bad ceiling that, due to the leaky aircons, has been reduced to little more than cardboard. There’s a charming hole in the ceiling and it’s forever having something stuffed inside it to stop it spewing out water from the piping (fix it, you say? Nonsense). It’s had a Perspective magazine (not one of mine, thank goodness), leftover Christmas ornaments, RAM chips on a string, a massive Union Jack, LAN cables (all of the computers are broken, so we strip down the hardware), and the most delicate of paper cutouts stuffed into it. Here’s a picture of a (broken – the insides have been removed) computer mouse hanging from it.

So – what should I put in the ceiling?


Running Fedora

I always like to test distributions, but due to lack of time I rarely do it :) I’ve been using Gentoo for quite some time now (and Debian on some machines I administrate), but I wanted a more desktop-oriented distribution, at least for my netbook, where Gentoo’s compile-everything philosophy was not the best way to go.

I have to admit that my first thought was Ubuntu. Maybe because I’m already using an apt-get distro and it seemed like the obvious choice.

But who am I kidding? I’m an active member of the Greek FOSS community, so the existence, and more importantly the quality and activities, of the community that inevitably grows around a distribution is a very important thing to me.

So I was looking for a desktop-oriented distribution with bleeding-edge technologies and a vivid community. And the only name that came to my mind was Fedora!

I first installed it on my netbook, where I also had the chance to test Moblin (on Fedora it’s just a ‘yum install @moblin-desktop’ away :)), and I was so pleased with the performance that I have already Fedorized my desktop!

Did I say anything about the community? I was impressed by the quality of the Fedora community last year at FOSDEM, and that impression was reinforced this year by Fedora’s participation in the biggest European FOSS conference. Besides that, over the last 1-2 years I have met some members of the Greek Fedora community, the Greek Fedora Ambassadors, and I have to admit that it’s one of the most active and vivid communities in Greece.

Being willing to contribute, as I already do in many ways for FOSS in general with mostly advocacy activities, I have already applied to become a Fedora Ambassador, and I hope to find the time to be more involved and active inside the Fedora ecosystem.

PS. For those wondering, Gentoo (and secondly Debian) will still be my first choice when it comes to systems administration, but it was time I moved over to a new desktop distribution.

Falcon Update 9.6.4 – Chimera

I’m a little late to the game… okay I’m very late to the game, but better late than never. The Falcon maintainers have released a new version so I’m releasing back to the community in return.

Notable Changes:

* fixed: returning oob(1) from a filter in comp() & single sequence mfcomp didn’t discard the value.
* fixed: Path::uriToWin() (used to post-normalize paths on MS-Windows system) didn’t translate ‘+’ into ‘ ‘, which meant troubles with filenames including spaces on MS-Windows.
* fixed: falpack used the user-provided main script path as-is, without prior normalization, causing all the scripts to be considered non-application (system) when the path was given in non-canonical format.
* minor: Asynchronous messages are now handled more smoothly.
* fixed: future binding (named parameters) “invaded” local variables in target function symbol table.
* fixed: strSplitTrimmed (and FBOM.splittr) put an extra empty string at the end of the array when multiple separators were at the end of the parsed input string.
* fixed: Failing to init the frame in new VMContexts would have caused random crashes depending on linkage or memory conditions (thanks Mordae).
* added: Function.trace() to get the traceback steps.
* fixed: Invalid loop condition caused fself.caller() to segfault (all thanks to fgenesis).
* fixed: Fordot statement (.=) couldn’t be used after for/in…: (after a colon)
* added: Methodic functions strEsq, strUnesq, strEscape, strUnescape (and similarly named methods in String) helping string transformation in web modules.
* fixed: URI::URLEscape needed much more escaping…
* fixed: Error reporting in include() may crash.
* fixed: def statement crashed if declared variables were not assigned.
* fixed: Directory.descend can be called with nil as directory handler function.
* fixed: URI::URLDecode was a bit too strict; there’s no need to filter chars under 0x20.
* added: Event model to VMSlot (children named slots).
* fixed: StreamBuffer may cause hangs on partial reads in net-based streams.
* fixed: clone() and *comp() didn’t duplicate strings as semantic would suggest.
* added: Now unknown messages are optionally marshalled to “__on_event” in objects.

Posts for Thursday, March 11, 2010

KRunner Dictionary Plugin

KRunner needs a dictionary plugin. Workflow:

  1. alt + f2
  2. type “define “
  3. ctrl + v
  4. awe at simplicity

While we’re at it, a translation plugin would be nice too.

I will make the dictionary runner. Standby.
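As a rough sketch of what such a runner would have to do under the hood, here is the kind of lookup it could perform against a DICT server (RFC 2229, the protocol behind dict.org). This is only an illustrative standalone example, not KRunner plugin code; the function names and the choice of dict.org as the server are my own assumptions.

```python
# Illustrative sketch of a "define <word>" lookup over the DICT protocol
# (RFC 2229). A real KRunner plugin would wrap something like this in the
# Plasma runner API; names here (parse_definitions, define) are hypothetical.
import socket


def parse_definitions(raw):
    """Extract definition bodies from a raw DICT DEFINE response.

    A 151 status line introduces each definition; a lone "." ends it.
    """
    defs, current = [], None
    for line in raw.splitlines():
        if line.startswith("151"):           # start of one definition
            current = []
        elif line.strip() == "." and current is not None:
            defs.append("\n".join(current))  # definition finished
            current = None
        elif current is not None:
            current.append(line)
    return defs


def define(word, host="dict.org", port=2628, timeout=5):
    """Query a DICT server and return a list of definition texts."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        f = s.makefile("rw", newline="\r\n")
        f.readline()                          # consume the 220 banner
        f.write(f"DEFINE ! {word}\r\n")       # "!" = first matching database
        f.flush()
        raw = []
        for line in f:
            raw.append(line.rstrip("\r\n"))
            if line.startswith(("250", "552")):  # done / no match
                break
        f.write("QUIT\r\n")
        f.flush()
    return parse_definitions("\n".join(raw))
```

Calling `define("simplicity")` would then return the definition texts the runner could show as match results.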

Planet Larry is not officially affiliated with Gentoo Linux. Original artwork and logos copyright Gentoo Foundation. Yadda, yadda, yadda.