Planet Larry

May 29, 2009

Clete Blackwell

Bandwidth Caps Are Evil

I have no pity for those who cap bandwidth. It’s the norm in Europe, but intolerable here in the United States. A few companies here have been considering bandwidth caps for their users, but the public has opposed it so violently that they scrapped all of their capping plans — for now.

A small, local company named Connectivity U runs our internet connection here (served through Comcast). This wonderful company caps bandwidth per day, rather than per month. All I can say is that I hate it. It’s so inconvenient. I can’t even download a Linux ISO without going over my cap. Also, if I am uploading a fair number of files via FTP, it will shut my internet connection off with a screen saying that I have exceeded my upload limit and my connection has been disabled (even when I have not exceeded my limit). Apparently, if I upload too much at one time, I am kicked off. I despise their pitiful attempt to disguise the fact that they do not own enough bandwidth for all of their users to conduct normal (and legal) internet activities.

By the way, they don’t even attempt to block the BitTorrent protocol. Wouldn’t that be a better solution?

I will end this very angry post with a little picture demonstrating just how little bandwidth I get per day. Please note that the upload limit is more than twice that of the download limit. Make sense to you?


May 29, 2009

May 28, 2009


Greatest Ever Free Books

The Internet has allowed the people to have access to information and experience many wonderful new things: viagra spam, home bomb making, 404 pages, teenagers slapping each other on youtube, macromedia plugins and appeals from deposed Nigerian ministers.

The greatest of these revolutionary feats of human progress are e-books. An e-book will contain content that is so good, no publisher was willing to touch it. They must have been intimidated by the superior page design and outstanding editorial standards found in many e-books.

Because ebooks are of such astounding quality, people seem to email them to me all the time. Not even spammers: people who actually claim to know me. Some of them are even the authors.

Never wanting to be behind the curve, and due to my extensive academic and publishing contacts, I have managed to bring you three free exclusive e-books.

  • How to lose weight - featuring our proprietary 3-step plan for easy and ecological weight loss.
  • How to get a girlfriend - featuring our scientifically proven 5-step plan for finding a better class of woman.
  • How to get a boyfriend - overcome shyness and snag your superman with our specially designed 5-step superplan.

Don't leave home without them. Shall we have some random keywords: free, exclusive, e-books, ebooks, greatest ebook ever.

Discuss this post - Leave a comment

May 28, 2009 :: West Midlands, England  

My Twitter tag clouds

Hello, welcome back! Man, life has been busy. How are all you Command Line Warriors? Have you missed me?

Danux wrote a post called Are the right people following you? where he put his Twitter username into the TwitterSheep website. This makes a 'tag cloud', showing the interests of the people that are following your updates.

I have two Twitter accounts, here is the tag cloud of my normal permanent account:

It just goes to show, you will get readers that reflect what you write about. Probably the most successful strategy is to not range too much. I have been thinking about politics recently, but I probably should not write about it in my Twitter account, as my readers probably are not interested, unless it affects Free Software or the Internet somehow.

Here is the tag cloud of my, so far incognito, experiment:

The two clouds are far more different than I thought they would be. In this account I talk about completely different matters, and it shows in the interests of the people reading it.

Let me know what you come up with.

Discuss this post - Leave a comment

May 28, 2009 :: West Midlands, England  

Andreas Aronsson

Mess up drupal and back again


Sometime when I was updating drupal from 6.10 to 6.11 I thought I was going to be clever and update as soon as the core module was available upstream, rather than wait until it reached portage. I downloaded it and placed it in

$WEBROOT/sites/all/modules/drupal-6.10

and updated to that version of the core module.
But thinking about it today I found it excessive and stupid to be doing manual updates =). Better to let webapp-config handle it again as before. So how do I go about that? Just updating with webapp-config did not suffice, since the running installation still refers to the one I manually downloaded, and the update script didn't help either.
After some searching in the database I found a lot of references to my

$WEBROOT/sites/all/modules/drupal-6.10

directory. Some of the references resided in the cache_* tables as well.
Well, I thought that truncating the cache tables couldn't hurt, so I did that and then wrote a small routine to rewrite the references so they no longer include the $WEBROOT/sites/all/modules/drupal-6.10 prefix:


$safe = true;

$link = mysql_connect('', 'drupal', 'password');
if (!$link) {
    die('Error connecting: ' . mysql_error());
}
echo 'Successful connect';
mysql_select_db('drupal', $link) or die('Error selecting');

// ======= fix menu_router ======
$sql = "SELECT * "
    . "FROM `menu_router` "
    . "WHERE `file` LIKE '%drupal-6.10%'";
$menu_router_contents = mysql_query($sql) or die('Error fetching values from menu_router: ' . mysql_error());
$i = 0;
while ($row = mysql_fetch_array($menu_router_contents)) {
    echo "\nResult no " . (++$i) . ": " . $row['file'];
    $newvalue = str_replace('sites/all/modules/drupal-6.10/', '', $row['file']);
    $sql = 'UPDATE `drupal`.`menu_router` SET `file` = \'' . $newvalue
        . '\' WHERE CONVERT(`menu_router`.`path` USING utf8) = \'' . $row['path'] . '\' LIMIT 1;';
    exec_sql($sql, $safe);
}

// ======= fix system ======
$sql = 'SELECT * FROM `system` WHERE `filename` LIKE \'%drupal-6.10%\'';
$system_contents = mysql_query($sql) or die('Error fetching values from system: ' . mysql_error());
$i = 0;
while ($row = mysql_fetch_array($system_contents)) {
    echo "\nResult no " . (++$i) . ": " . $row['filename'];
    $newvalue = str_replace('sites/all/modules/drupal-6.10/', '', $row['filename']);
    $sql = 'UPDATE `drupal`.`system` SET `filename` = \'' . $newvalue
        . '\' WHERE CONVERT(`system`.`filename` USING utf8) = \'' . $row['filename'] . '\' LIMIT 1;';
    exec_sql($sql, $safe);
}

function exec_sql($query, $safe) {
    if ($safe) {
        echo "\nThis is the query: " . $query;
    } else {
        mysql_query($query) or die('Error executing query: ' . mysql_error());
    }
}
Still no cigar though. The admin interface tells me that there are updates available. Using the database update script (update.php) puts me back to where I started. So off I go to examine that script too. I notice that it's using the POST variable to set some values that have to do with module versions. Intriguing. Next thing to try is to clear the browser cache, run the php script again and voilà. Now it's looking good. I had to run update.php again and it didn't mess stuff up anymore either.
NOTE: To make the script 'bite', set "$safe = false" at the top of it. Use at your own risk, of course =).

May 28, 2009 :: Sweden

May 27, 2009

George Kargiotakis

ivman is dead, long live halevt

It’s been a while since ivman stopped working on my Gentoo box, but I never had the time nor the willingness to take a look into it. It appears that ivman is incompatible with some newer versions of hal and dbus. The good thing is that there’s an alternative called halevt, and from what I’ve seen of it so far, the configuration options look quite straightforward.
For Gentoo, there are ebuilds for halevt on Gentoo bugzilla, which install just fine.

From my point of view there’s an issue here for Gentoo. The latest ivman (sys-apps/ivman-0.6.14) compiles just fine against all of its dependencies, but then it does nothing at all when a device is plugged in. If the devices are present when ivman starts, it can detect and mount them; if you plug the devices in after ivman has started, ivman does nothing at all. I think ivman has been broken since the hal 0.5.9.X versions. Gentoo developers still keep ivman in the stable tree though. I find no real logic in this decision. Ivman is buggy with the current stable hal and dbus. I would prefer a de-stabilization of ivman or even a package mask for it. What’s the point in keeping a package (ivman) in the stable tree when it requires not the latest stable but an older version of another package (hal)? IMHO, since they correctly decided to stabilize hal 0.5.11-r8, which subsequently rendered ivman useless, ivman should be wiped from the stable tree.
Some bugs on ivman reported on Gentoo Bugzilla:

I once used ivman with a couple of custom scripts to create/remove icons of automounted devices on my ROX desktop. I think I can make these scripts work again with halevt…I am in the process of rewriting them. More on that in the following days…

May 27, 2009 :: Greece  

Mac OS X Mail app and Courier IMAP(-ssl) problems

If you have an IMAP server based on Courier-IMAP and you get complaints from people using the default Mail app on Mac OS X about getting many warning messages and not being able to connect, the remedy is to increase the maximum allowed concurrent sessions per IP. They possibly have multiple accounts on the server, and the server is not able to handle each connection properly.
The cure is to open up /etc/courier-imap/imapd-ssl and /etc/courier-imap/imapd, find the MAXPERIP setting and change it to something like:

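The exact value depends on how many accounts and folders your users open at once; as a hedged illustration (20 is just an example, not a recommendation), the setting looks like this:

```shell
# /etc/courier-imap/imapd and /etc/courier-imap/imapd-ssl
# Maximum number of concurrent IMAP connections accepted per IP address.
MAXPERIP=20
```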
Don’t forget to restart your courier-imap server for changes to take effect.

By default the MAXPERIP setting is set to 4. In the imapd-ssl file it’s not even included in the config file (but it still defaults to 4), so you need to add it yourself.

May 27, 2009 :: Greece  

Dion Moult

Lang-8: Learning languages the fun (and free) way.

People that have known me for a while know three things. Firstly, they know me (well, duh). Secondly, they know that I know Chinese. Thirdly, they know that I just lied: I know how to pretend I know Chinese - actually I suck pretty darn bad at it. In fact, I suck at the whole pretending business too.

So the other day, I came across the original Chinese hacker dude who created the Shellex-overlay for running Chromium on Gentoo Linux. Unfortunately the blog post was in Chinese. I immediately enrolled myself in a chinese tuition class an-

No, I fired up Google translate and stole my brother’s Chinese girlfriend to properly translate sections that apparently read “building know not updated porridge“.

Well, I don’t really like languages. Not that I don’t think they’re really useful and stuff; I just don’t like them because 1) they suck, and 2) I suck at them. The reasoning I like to use for “they suck” is the theory behind natural languages, then moving on to mathematics and programming languages and showing how “look, 1=1 is 1=1″. However, like a friend of mine, one of my pet peeves is people “who wan talk talk leik vely good you know wan liao!“ (a representative sample of the local accent and style of speaking here; see the Wikipedia entry on Manglish - it’s friggin’ made up of English, Malay, Hokkien, Cantonese, Tamil, and you’d even find “g’day mate” popping up once in a blue moon). Yeah, so I can’t exactly say being decent at a language is unimportant.

In fact language is really important. Especially when maintaining professionalism. You can’t argue - having good language skills are vital for…communicating well? Wow, that sounded bad.

…and for that reason when some Taiwanese guy lurking in the #gentoo-chat channel (sinsun) introduced me to the website Lang-8, I signed up. It’s a website where you can write journal entries (like a mini-blog) in another language, and other users who are native speakers of that language can come along and insult you.

So you know right away, I go allleady what lah and start correct the crap crap english udder people is writing wan.

No. I actually did start jocking up whatever was left of my 汉语 knowledge, and now I’m learning that I sucked even worse than I thought I did. Well, it’s still an amazing website, and I’ve not seen one like it before. I would highly recommend it to anybody thinking of brushing up on a foreign language. The correction system is pretty nifty (like crossing out stuff and highlighting) and it has one hell of an active community - you’d get responses literally minutes after you post.

Feel free to add me as a friend, especially if you’re good at Chinese and can teach me how to say “Git repository” in Mandarin. My username there is “Moult”.

Note: I actually really do think having good knowledge of a language is very useful - up to the point where you start thinking “Here is a piece of paper” is some sort of symbolic metaphorical imagery for racism.

Related posts:

  1. A Little 21 Fun with C++.
  2. How to Actually Use Your Computer: Part 1
  3. How to install Chromium (Google Chome) on Gentoo Linux

May 27, 2009 :: Malaysia  

May 26, 2009

Dirk R. Gently

Upgrading Your Video Card – Part 2

Continued from Part 1. Now that you’ve got your card and PSU, there are a couple more things to know.

Linux Drivers

It’s best to add your new video card drivers before you shut down so your new card will boot straight up to the desktop. Remove your old driver and install the new one. Don’t try to keep both drivers, as they’d probably conflict. Don’t worry about uninstalling a video driver on a running system: the driver is already loaded in memory, so this is no problem to do.


BIOS Settings

For newer BIOSes this isn’t a problem. Mine had a video setting that by default said “PCI Express First”, so I didn’t have to change anything. The card was recognized at boot and the BIOS disabled the onboard one automatically. Most BIOSes have this option. You may have to change it yourself though, so look into your BIOS before booting.

Post Boot

If everything is set up correctly, you should have your new video card up and running. In the terminal type:

lspci | grep VGA

and you should see your new card. With that you can try out a game or an HD movie and see how it does.


Overclocking

There’s a lot of talk in the video card clique about overclocking. My advice? Don’t! Sure you can if you want to, but keep in mind that overclocking voids most warranties. Overclocking can also take years off a video card’s lifetime. Plus, even the greatest overclocks will usually only yield about 2-4 fps. If you need more fps than that, you likely need another video card. If you absolutely have to do it, there’s a good post in the nvidia forums (see third post down).

Budget for a Price

I decided to go with the nvidia PNY 9600 GSO 768MB card. This isn’t a good card for gamers (be careful of manufacturer reviews) but it is good for the price I got it for. I picked it up at Fry’s for $40 after a mail-in rebate. It runs quiet and has a three-year warranty to boot. I tested a few games and found it ran decently on most new games at medium settings (Crysis plays at 20-40fps). I found out a bit too late that if you want a good budget gamer card you’re gonna have to begin at the $100 price range; I’ve heard a lot of good things about the ATi 4770.


Power Supply

This PSU is a great buy for the money, as I said before, though if you want to be sure you get a good PSU, spend $40 or more. As it was, I couldn’t afford that and took a chance on a bargain PSU that has gotten some good reviews. The hec HP485D 485W ATX12V Power Supply installed easily and seems to be doing OK (I just hope it holds out for more than a year :) )

May 26, 2009 :: WI, USA  

Dan Fego

New Computer (and its woes)

After a long time coming, I finally took the plunge and bought a new computer, mostly for the occasion of graduating from college. So after a bunch of looking around, I went and bought this computer. In any case, I received it after a couple of days of intense waiting, and now I’ve got it and am very pleased with it (and the 23″ monitor I got with it).

However, after spending a day on it, I felt the need to get started with Linux. Then again, Vista isn’t that bad when you have a quad-core processor and 8 gigs of RAM. My problem is as follows:

  • I need to be able to play games
  • I want my games to run well
  • I need a Linux environment
  • Ideally, I’d run Linux natively

This leaves me with the obvious option of dual-booting, but I’d really rather not. I find it so… traumatic, if you will, to have to reboot my computer every time I want to change what I’m doing. And since I tend to fire up Team Fortress 2 rather frequently, I’m afraid I’d sit in Vista most of the time because of it, and only go to Linux when I need to. And that’s exactly the opposite of what I’d want. So what to do?

I don’t know what I’m going to do. In addition, while I’ve always had fun with Gentoo, the new installation I started has been proving challenging. The basic system was easy, but the framebuffered console and a desktop (with Compiz-Fusion) have proven difficult. This is in large part, I believe, because of the now-scattered documentation due to the data loss of our beloved Gentoo Wiki. And then I pop in an… an… Ubuntu (sorry, it just feels dirty to me) CD, and everything works. But it’s not quite right. It’s not perfect, I don’t have portage, and I can’t use my shiny new computer to compile things all the time! (That was part of the reason I wanted such power :))

So I’m left with a dilemma. And because of my tendency to get paralyzed by indecision, I’ll probably stick with Vista for a while, until I figure out my solution, which will still probably involve dual-booting, since Wine doesn’t seem to be up to the task. If anyone’s got a similar situation/setup/solution, I’d love to hear about it. I love my Linux, but I also love my games.

May 26, 2009 :: USA  

May 25, 2009

Brian Carper

I paid for music

As a general rule, I don't pay for music. The main reason of course is that the music industry are a bunch of thugs. If you don't know that already, you've been living under a rock for the past few decades. I won't even buy music for other people as a gift if I can help it.

Recently however I did buy music, specifically Jonathan Coulton's latest DVD. JoCo releases his music under Creative Commons, which is awesome, and when you buy it (from What Are Records) you get MP3s that are not infested with DRM, which is also awesome. When you buy that particular DVD, you get a DVD of the concert, a music CD of the same concert, AND you can immediately download MP3s of said concert while you wait for the DVD in the mail. All for $20. Well worth it for such quality music.

I first heard most of JoCo's music via shaky concert recordings on Youtube and via MP3s acquired "elsewhere" (nearly all of which are free downloads on Joco's website though); otherwise I'd never even have known he existed. And yet I ended up giving him my money, happily and willingly, and probably will again. Amazing how things turn out.

The other music I bought recently is Stephen Lynch. Again I heard most of his music first on Youtube. Again I gleefully spent money on his latest CD because it's good music and because it's DRM-less and thug-less entertainment and a good portion of that money is going to the artists.

Most of the music I like comes from Japan or various corners of Europe. Amazon sells a few (very few) Japanese music CDs, for between $50 and $90 each (plus shipping). Do you know how much it costs to ship a stream of bytes from Japan to the US via the intertubes? Hint, it's not $90. How does a stream of bytes increase $90 in value when it's written onto a piece of plastic?

These are strange times. There's such disparity between what the average person believes is right and wrong on the internet and what the law says is lawful and unlawful. This kind of disparity can't last forever. My high school history teacher said that in America at least, a law that is opposed by the majority of citizens in the country never lasts long; I think that's true. And it's as it should be. In a few decades, we're going to look back at how things were in the 90's and 00's and laugh.

May 25, 2009 :: Pennsylvania, USA  

May 24, 2009

Clete Blackwell

Google Reader: Amazing

RSS (“Really Simple Syndication”) is a kind of news feed created so that users can get at their news in a simpler form. Most blogs, news organizations, and frequently updated websites offer RSS (or Atom, another kind of news syndication) feeds. RSS feeds are also referred to as live bookmarks in Firefox. Basically, Firefox’s implementation gives you a bookmark folder that updates with each new story that is released. See below for an example.

Live Bookmarks

Here we see that it’s just a big list of news stories posted on my favorite website, Engadget.

This works well, except when you have 10-15 feeds that you subscribe to. It becomes very tedious to click on each feed and then go into the main website and view every new story. It’s quite a pain to click on each site individually, go back a few pages, and pick up where you left off. That’s when Google Reader comes into play. Google Reader is an RSS feed tool, where you add all of your favorite feeds and Google keeps track of them for you. It does much more than Firefox’s Live Bookmarks: it actually shows the content of each post, saving you from having to visit the website. Below is a screenshot of Google Reader in action (click to enlarge).
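To show how little magic there is behind a feed reader, here is a minimal Python sketch that pulls item titles and links out of an RSS 2.0 document using only the standard library. The feed text is entirely made up for illustration; a real reader would fetch it over HTTP and remember which items you've read.

```python
import xml.etree.ElementTree as ET

# A tiny, made-up RSS 2.0 feed (not a real site's feed).
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Gadget Blog</title>
    <item><title>New phone announced</title><link>http://example.com/1</link></item>
    <item><title>Laptop review</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in read_feed(FEED):
    print(title, "->", link)
```

Google Reader (or Firefox's Live Bookmarks) is essentially this loop run on a schedule across all your subscriptions, plus the read/unread bookkeeping.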

Google Reader

Here, I can sort all of my websites into folders. See “Geek News,” “Comics,” etc. Also note that I have one new post. Google keeps track of what I have read. I can also star items, just like in GMail, to save an item for later. If I click on an individual story, it expands into a view similar to what an actual visitor would see on the website, images and all.

Google Reader is a great tool for you to keep up with a lot of websites — all at once. It’s a great news reader and I would recommend it to anyone who reads even just two websites per day.

May 24, 2009

Martin Matusiak

classes of functions

I decided to finally blog this since I can never frickin remember which is which. Maybe if I write this and draw the diagram it’ll finally stick. If not, I’ll have something to come back to. To make it more accessible I’ll be using a running example: the bike allocation problem. Given a group of people and a set of bikes, who gets which bike?

Partial functions

Partial functions are pretty self-explanatory. All you have to remember is which side the partiality concerns. In our example a partial function means that not every person has been assigned a bike. Some persons do not have bikes, so lookup_bike(person) will not work on all inputs.

Partial functions are common in code: reading from files that don’t exist, and of course the ever lurking NullPointerException — following a pointer to an object that is not live. In Haskell, this is where the Maybe monad appears.

Total functions

Not surprisingly, total functions are the counterpart to partial functions. A total function has a value for every possible input, which means every person has been assigned a bike. But it doesn’t tell you anything about how the bikes are distributed over the persons: whether it’s one-to-one, all persons sharing the same bike, etc.

Clearly, total functions are more desirable than partial ones — it means the caller can call the function with any value without having to check it first. Partial functions often masquerade as total ones, by returning a value outside the expected range (which explains the existence of a null value in just about every programming language and data format). In Python the value 0, None and any empty sequence (string, list, tuple, dict) all represent null, which makes writing total functions easy.

Bijective/isomorphic functions (one-to-one correspondence)

A bijective function (also called isomorphic) is a one-to-one mapping between the persons and the bikes (between the domain and codomain). It means that if you find a bike, you can trace it back to exactly one person, and that if you have a person you can trace it to exactly one bike. In other words it means the inverse function works, that is both lookup_bike(person) and lookup_person(bike) work for all inputs.

Isomorphic functions are found in all kinds of translations; storing objects in a database, compressing files etc. The name literally means “the same shape”, so any format that can reproduce the same structure can represent the same data.

Injective functions (one-to-one)

An injective function returns a distinct value for every input. That is, no bike is assigned to more than one person. If the function is total, then what prevents it from being bijective is the unequal cardinality of the domain and codomain (ie. more bikes than persons).

Another way to understand it is to think of something small being stored in (embedded in) something big. In order to maintain unique output values, the codomain must be at least as big as the domain. GUIDs are an example of this. A GUID generator guarantees a globally unique identifier by picking values from a sufficiently large space. Given a GUID that has been issued, you can trace it back to exactly one object, but you cannot take just any value in the GUID space, because most of them have never (and will never) be issued to anyone.

Surjective functions (many-to-one)

A surjective function is one where all values in the codomain are used (ie. all bikes are assigned). In a way it is the inverse property of a total function (where all persons have a bike).

Surjective functions are often undesirable in practice, meaning that you have too few resources at your disposal, which forces sharing (threads on a cpu) or rejection (streaming server can only accept so many clients).

The way to think of injections and surjections is not as opposites, but as complementary properties. A function can be both injective (all persons have a unique bike) and surjective (all bikes are used). If so, it is bijective.
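The definitions above are easy to check mechanically. As a sketch, the bike allocation can be modeled as a Python dict from persons to bikes; the predicate names below are my own, not standard library functions.

```python
def is_total(f, domain):
    """Total: every person has been assigned a bike."""
    return all(p in f for p in domain)

def is_injective(f):
    """Injective: no bike is assigned to more than one person."""
    bikes = list(f.values())
    return len(bikes) == len(set(bikes))

def is_surjective(f, codomain):
    """Surjective: every bike in the codomain is assigned to someone."""
    return set(f.values()) == set(codomain)

def is_bijective(f, domain, codomain):
    """Bijective: total, injective and surjective all at once."""
    return (is_total(f, domain) and is_injective(f)
            and is_surjective(f, codomain))

persons = ["ann", "bob", "cam"]
bikes = ["red", "blue", "green"]

assignment = {"ann": "red", "bob": "blue", "cam": "green"}
print(is_bijective(assignment, persons, bikes))  # one-to-one correspondence

partial = {"ann": "red", "bob": "blue"}  # cam has no bike: partial, not total
print(is_total(partial, persons))
```

With a bijection, building the inverse lookup_person(bike) is as simple as flipping the dict: `{bike: person for person, bike in assignment.items()}`.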

May 24, 2009 :: Utrecht, Netherlands  

May 23, 2009

Jason Jones

Narcoleptic Snapshot of Life

Well....  It's late, and I'm tired, but I wanted to let you all know that right now, I know that if I dig in and learn the Zend framework, immersing myself in it as much as I can while maintaining some semblance of balance in my life, I'll make it at my new employment, short-term.  I've just gotta jump in and sink or swim.  I think I'm gonna start by re-programming my site using the Zend framework.  Not sure if that's a good idea or not, but that's where I'm going with it.

I also need a new laptop.

As of right now, I have a very, very general overview-type understanding of OOP principles, the MVC structure of web development, and how the Zend framework fits into all of this.  I've got a lot of learning to do, but I feel that if I jump in and try my best to do things right, I'll learn a heckuva lot more than if I just sit and read all day.

Oh, and we've really got to get someone to rent our investment property.  Anyone wanna bite?

May 23, 2009 :: Utah, USA  

Roderick B. Greening

Where have I Been?

Wow, I can't believe it's been about 5 months since I last blogged. That will tell you how hectic my life has been lately.

What's been happening? Well, for one, I'm about to be an uncle... my little sister is having a baby due in less than 7 weeks. I think I am almost as excited as she is! :) I can't wait to spoil the little tyke .. hahah!

Besides that, I have been really busy with work. We are undergoing some changes, and as a result, it has pulled me away from my normal Kubuntu packaging. However, I now have things mostly back in order, and am back to my packaging duties.

Finally, I am typing this at the airport, on my way to UDS Barcelona. I think this trip will be much more interesting than the California one, and definitely hotter. I don't think I packed enough shorts... :)

Anyway, here's to hoping I have more time to blog and more frequently.

May 23, 2009 :: NL, Canada  

Dion Moult

rtm - a Command Line Tool for RememberTheMilk

Before I begin my post, I’d like to apologise to all the Planet Larry readers for the 10 hours or so of downtime I caused sometime yesterday. I don’t always break things, and that borkage was, well, quite unintended. For the technically inclined: basically I had set some .htaccess restrictions on another domain which I forgot I was hotlinking files to. This carried the .htaccess restrictions over, and as my blog was aggregated, they somehow carried over there too.

OK, back to what’s new and amazing. Some days ago I wrote an article on RememberTheMilk, a really awesome to-do list website (I have 60+ and counting tasks listed over there now!). However a main issue with it is that even though it’s extremely accessible through lots of mediums (phone, email, twitter, plasmoids, etc) - they are all graphical! We’re missing a command line interface for it!

Well, not quite so. With some Google-fu I found some French guy with a fetish for white rabbits (no, seriously, this time I’m friggin’ sure he’s French) who made a command line tool. It’s not much more than a script, but that didn’t stop me from putting it in the sunrise Portage overlay so Gentooers can get it! Actually, what did stop me was the fact that 1) I didn’t know how to write ebuilds, 2) I didn’t have an account to commit ebuilds to the sunrise overlay, and 3) I didn’t have a GPG key (for part 2).

So, after a while spent learning about ebuild writing and getting a key and a commit-able account, babysat ever so generously by hwoarang and idl0r for the ebuild part and scarabeus for the key and account, I have today committed two ebuilds to the sunrise overlay (layman -a sunrise). The first is app-misc/rtm, which is the tool itself, and the second is dev-perl/WebService-RTMAgent, which is a Perl module (a dependency for rtm). I’m not sure when it’ll get into the publicly approved sunrise overlay, but it’s definitely there in the developer checkout (unless I borked up the commit).

So, install it, try it out on your architectures (I’m only ~amd64, don’t forget), and enjoy! Hopefully this’ll mark the start of more Gentoo contribution. Or it will if I don’t get distracted and play this game about white rabbits - or maybe this one, which is more related to rabbits - in more ways than one.

What the hell do rabbits and ebuilds have in common? :P Stupid French guy.

Related posts:

  1. How to install Chromium (Google Chome) on Gentoo Linux
  2. VisionBin - A Tool for Creative People

May 23, 2009 :: Malaysia  

May 22, 2009

Jason Jones

New Keyboard

Alright..  I'm just typing this entry mostly for the purpose of trying out the new keyboard I got at work.  At my past employments, for probably the past 2 years, I've used the Microsoft Natural 4000 keyboard, and to be honest, it took a long time to get used to.  The buttons were too far apart, and it just felt a bit more difficult to reach the keys I needed to reach.  That, and I really found it difficult to be a Microsoft basher while at the same time being tied to using a Microsoft keyboard.  So...

This time around, I looked and looked for a non-microsoft natural (broken) keyboard, but I couldn't find one.  I don't know if they're not popular or what, but I totally love the broken feeling.  It feels quite natural to my arms and fingers.  But, anyway..

I just received my new keyboard today, ordered from newegg.  It's the Logitech Wave Keyboard, and so far, my wrists are hurting just a bit due to the flat nature of the keyboard.  Maybe it's my horrible posture I maintain while I type, who knows.  Anyway...  I'll give this new keyboard a whirl and keep you updated.  So far, I've been using it a total of about 10 minutes, and .... well...  It's okay.

...and it's not Microsoft.

May 22, 2009 :: Utah, USA  

Dan Ballard

Little Lisp coding project at night

Maybe it's just me, but I've found that as much fun as Lisp can be, it doesn't play so well with others, whether that be the environment it's executing in or talking to other languages.

I'm playing with a new RPC framework that doesn't have Lisp support, but I'm looking to see if I could add it, and I've noticed that because it mainly uses a binary protocol, Lisp doesn't necessarily support binary encoding of data types like floats much. Or at all. And interestingly (at least to me), Lisp doesn't seem to have functions compatible with Perl's pack() and unpack() (which PHP, Ruby, and Python also have); looking at the code of this RPC framework's implementation in all the other languages, that is what they use to encode the data.

So I've stepped back from that for a moment and am playing around with writing pack() and unpack() functions in Lisp. Through this little side project I'm finally learning a lot more about Lisp packages and ASDF, so it's been fun so far. Integer support was dead easy to write, but I was dreading grokking the IEEE float specification enough to try implementing it; thankfully someone already did, and I can make use of the Lisp package ieee-floats for that, so yay and thanks! Hopefully soon I'll have a Perl-compatible pack() and unpack() set of functions for Lisp.
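For anyone who hasn't met pack()/unpack() before, Python's struct module does the same job of mapping values to and from raw bytes; a minimal sketch (Python here purely for illustration, not part of the Lisp project):

```python
import struct

# Pack a 32-bit big-endian integer followed by a 32-bit float into
# raw bytes, then recover the values - the same template-driven idea
# as Perl's pack()/unpack() that binary RPC protocols lean on.
packed = struct.pack(">if", 42, 1.5)
assert len(packed) == 8  # 4 bytes for the int, 4 for the float

number, ratio = struct.unpack(">if", packed)
print(number, ratio)  # 1.5 is exactly representable as a float32
```

The hard part the post is talking about is exactly the float case: struct leans on the platform's IEEE 754 support, which is what the ieee-floats package supplies on the Lisp side.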

As a side note, even though it can be frustrating, I really do enjoy hacking away in Lisp, for some reason I just find it fun.

May 22, 2009 :: British Columbia, Canada  

May 21, 2009

Dirk R. Gently

Upgrading Your Video Card

I’ve been using my built-in nvidia 7050 video card for a while now, and for a built-in card it’s pretty good. I get decent compositing and Urban Terror plays at around 30 frames per second, but I’ve come to realize that I just want to be able to do more: play better games, watch HD video… so I decided to upgrade my video card, and now I’m amazed at what it can do. If you’re thinking about upgrading yours, this is what I learned from mine.

Note: This guide focuses on modernish hardware and on nvidia video cards. I’m not biased or anything, it’s just that my built-in nvidia worked so well, and that nvidia does a great job supporting Linux, that I decided to go with nvidia again.

Digging in the Pockets

Yeah, that $200 top-of-the-line video card looks cool, but most of us don’t wanna spend dollars like that on something we use a couple hours a day. Video cards can be pricey, but even on a modest budget you can get a decent card that’s a good step up from a built-in one. Good $40-80 cards can be found that can easily double frame rates and help you play new or newer games. Save yourself a little budget for a power supply too, as video cards take a good amount of wattage and many stock desktops only provide enough power for the components they shipped with. With $100 (minus $20-40 in rebates) you can get a fifth-tier video card and a power supply to go with it.

What’s Your Motherboard Got?

Pop open your hood to see what you got (or if you’re lucky enough, you’ll have an owner’s manual that tells you). Most desktops from the last few years have PCI Express slots, which are very good for video cards. Even if you only have an AGP or PCI slot, you can add a card that will help improve performance. For the purpose of this upgrade I’ll be talking about PCI Express.

A PCI Express slot will look something like this (see bottom of page). If you’re not sure, look closely at the motherboard. A lot of motherboards print a small label like PCIE next to the slot. If you got that, you’re good to go. This could be either a PCI Express 1.0 or 2.0 slot. 2.0 slots add a lot of bandwidth, but at this time no video card is really able to take advantage of it. You also don’t need to worry about which PCI Express version of video card you buy, as 2.0 is backward-compatible with 1.0.

Queen of Hearts – Picking the Right Card

To help you pick a good nvidia card, nvidia appends their card versions with a couple letters. The version number tells the capabilities of the card (OpenGL 2.1, DirectX 10…) while the lettering indicates performance. GS cards are clocked the lowest, GT is middle, GTS is high, and GTX is extremely high. For example, the 9500 GT is nvidia’s last-generation card with medium performance. A good place to compare video card performance is Tom’s Hardware’s video card hierarchy page (it includes nvidia and ATI).

It’s pretty hard to go wrong with any of the top-level video cards, but a word of warning: not all branded video cards are alike. Because third-party companies assemble the components, you will occasionally see components that are skimped on. I’ve seen a number of poor reviews of what normally should be a pretty good video card. I get a lot of my reviews at newegg. Newegg offers good prices on a lot of different cards and they have a customer review section for each product, so most of the reviews are pretty up front. Compare the card with different vendors that offer the same branded product to be sure you’re getting all you should.

A couple things I noticed comparing vendor cards was that some of them offer noticeably fewer stream processors and others use old memory chips. There can be all kinds of caveats like this, so keep your eyes open. Memory isn’t terribly expensive these days and you should at least try to find something with DDR3 or above.

The amount of memory you choose is important too. I had one person tell me that 512 MB of memory is the sweet spot, that you would never really use more than that. But when I tried Crysis on my 768 MB graphics card, it almost maxed it out. Memory use on the video card is almost directly proportional to the resolution. I have a 1440×900 resolution, which isn’t the biggest, so if you have something bigger you might want to consider a 1 GB card. Memory spills over to the computer’s main memory, but it’s better if it’s kept on the card.

Another thing to consider when getting a video card is what type of outputs it has. Most newer cards have two DVI outputs and an HDTV output (and sometimes S-Video).

Fire and Brimstone (or Noise, Heat, and Size)

If you’ve looked over some video cards already, you’ve noticed how big some of them look. Unfortunately most video card specifications don’t have measurements listed. When there’s not a lot of space by your PCI Express slot, look at the reviews and see if anyone else had trouble getting the card in. If they did, you should look for a low-profile card. Or you might wanna take a chance and try to put one in – most manufacturers are good about taking back such products.

Think about just how hot your card may get, too. The high-powered cards available have a good-size fan on them, but that fan isn’t going to do a lot of good if your computer case has hardly any vents. A card that gets too hot is gonna have a much shorter life span.

One of the most common gripes I read in reviews about video cards was that some of them sound like a helicopter taking off. Yeah, these cards get pretty hot, and your bargain-basement versions don’t put a lot of money into quiet fans. If you think a constant buzz is gonna bother you after a while, you may have to look into a more expensive card with a better fan, or a card with less performance.

9/10 Ladies Prefer the Graphic Man

If you anticipate you’re going to need a real workhorse of a computer, and you’ve got the extra slot for it, remember SLI. SLI is nvidia’s technology that allows graphics cards to work in parallel with one another (ATI’s is called Crossfire). To utilize this technology though, you’ll need an nvidia motherboard 680i or greater and a supported PSU.

Power Supply (PSU) and Cables

No shying away from it: almost everyone is gonna have to get one. It’s not fun to have to pay the extra cost of another PSU, but I can tell you they are fun to put in. Do yourself a favor and don’t think you might just get by. If a PSU gets overtaxed, it will shut down your computer or possibly do even worse things. And don’t trust what the video card recommendations say; a lot of times they just give an estimate and have no idea what you are running in your computer. Newegg has a PSU calculator that will give you a good idea of what you need.

Now check what cable connections you need. Unplug your box (all external connections), destatic by touching the frame, and trace all your PSU connections. You’ll probably need at least these: two SATA power connectors (one for the hard drive, another for the DVD/CD), one main 24-pin motherboard power connector, a 4-pin CPU power cord, and a 6-pin PCI Express cable. The 6-pin PCI Express cable isn’t a big deal, as most cards include a dual-molex-to-6-pin adapter and most PSUs have at least 4 molex cables. For the motherboard cable, almost all new boards have a 24-pin slot; PSUs though (to be compatible with older motherboards) have 20- and 4-pin cables that can be snapped together. When you look to buy, make sure the cables are long enough. SATA plugs are often put on one wire several inches apart - are your components close enough together?

Someone in the know posted in a forum that for a good video card you’re gonna want 30 amps on the rails. I couldn’t get more information on this, but I’m pretty sure he meant that you want 30 amps delivered to your video card. One molex cable on my power supply has 16A and another has 17A (33A combined); they plug into the dual-molex adapter, which in turn plugs into the video card. I’ve played games for several hours at a time and haven’t had any problems.

Also look to be sure that you have the necessary room for a larger PSU. I wasn’t expecting it but the unit I bought was a good inch deeper than the original and made for a tight fit.

Real cheap PSUs start around $15, but you might be able to find a good enough one for a basic system at $20. Most people recommend, though, that you look for PSUs beginning at the $40 price range.


This is my first time buying a video card so if I messed something up or missed anything important, let me know!

Configuring the BIOS, Linux and a good budget video card are in Upgrading Your Video Card Part 2.

May 21, 2009 :: WI, USA  

Brian Carper

More Clojure Mandelbrot Goodness

After my brief stint in the world of fractal geometry and Clojure, I decided to make a real Mandelbrot set viewer. The resulting source code is here. Here's a simple output (click for bigger version):


It's a pretty naive implementation, barely 100 lines of code, but even with my brute-force approach, given a liberal sprinkling of type hints it runs fast enough. Programming Swing from Clojure couldn't be easier (though I doubt programming Swing from any language is ever really enjoyable, it's a painful bunch of libraries).
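The brute-force approach boils down to the classic escape-time iteration; here is a rough sketch of the core loop in Python (the post's actual code is Clojure, and the function name here is made up):

```python
def escape_time(c, max_iter=100):
    """Iterate z -> z*z + c and count the steps until |z| escapes
    beyond 2; points that never escape are taken to be in the
    Mandelbrot set."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n  # escaped: color this pixel by iteration count
    return max_iter   # assumed to be in the set
```

Each pixel maps to a complex c, and the returned count is what feeds whichever coloring algorithm you pick - which is where the hard part discussed below comes in.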

There's a discussion of different coloring algorithms on Wikipedia, but even after reading that, getting this thing to look good was difficult. I don't know enough math for it. I ended up cheating and I colored a couple of them in the GIMP, so I could use them as desktop wallpapers.

/clojure/mandelbrot/thumbs/mandelbrot-rainbow.png /clojure/mandelbrot/thumbs/mandelbrot-rainbow-2.png /clojure/mandelbrot/thumbs/mandelbrot-rainbow-3.png

/clojure/mandelbrot/thumbs/mandelbrot-rainbow-4.png /clojure/mandelbrot/thumbs/mandelbrot-rainbow-5.png

There are some more PNGs over here, including one that's 16000x16000 (producing it almost melted my CPU last night).

May 21, 2009 :: Pennsylvania, USA  

Who needs a DB?

My blog is still working, in spite of my best efforts to crash it. So that's good. But lately I've been thinking that an SQL database is a lot of overkill just to run a little blog like this.

My blog only has around 450 posts total (over the course of many years), and about an equal number of user comments (thanks to all commenters!). Why do I need a full-blown database for that? All of my posts plus comments plus all meta-data is only 2 MB as a flat text file, 700k gzipped.

By far the most complicated part of my blog engine is the part that stuffs data into the database and gets it back out again in a sane manner (translating Clojure data to SQL values, and back again; splitting up my Clojure data structures into rows for different tables, and then re-combining values joined from multiple tables into one data structure). Eliminating that mess would be nice.

Inevitably I ended up with some logic in the database too: enforcing uniqueness of primary keys, marking some fields as NOT NULL, giving default values and so on. But a lot of other logic was in my Clojure code, e.g. higher-level semantic checking, and some things I wanted to set as default values were impossible to implement in SQL.

Wouldn't it be nice for all the logic to be in Clojure? And the data store on disk to be a simple dump of a Clojure data structure? I can (and did) write a few macros to give me SQL-like field declaration and data validation, for uniqueness of IDs and data types etc. For my limited needs it works OK.

The next question is what format to use for dumping to disk. Happily Clojure is Lisp, so dumping it as a huge s-exp via pr-str works fine, and reading it back in later via read-string is trivial.
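The same dump-and-read round trip, shown as a Python analogy for anyone not following the Clojure (repr/literal_eval standing in for pr-str/read-string; the data here is made up):

```python
import ast

# A nested "blog" structure dumped to its printed representation
# and read straight back - no schema, no SQL, no ORM glue.
posts = {"posts": [{"id": 1, "title": "Who needs a DB?"}], "count": 1}

dumped = repr(posts)                 # like pr-str: data -> string
restored = ast.literal_eval(dumped)  # like read-string, literals only
assert restored == posts
```

The Lisp version is strictly nicer here, since read can reconstruct arbitrary extensible types, which is exactly what the Date example below exploits.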

Some Java data types can't be printed readably by default, for example java.util.Dates, which print like this:

#<Date Wed May 20 22:39:00 PDT 2009>

The #<> reader macro deliberately throws an error if you try to read that back in, because the reader isn't smart enough to craft Date objects from strings by default. But Clojure is extensible; you can specify a readable-print method for any data type like this:

(defmethod clojure.core/print-method java.util.Date [o w]
  (.write w (str "#=" `(java.util.Date. ~(.getTime o)))))

Now dates print as

#=(java.util.Date. 1242884415044)

and if you try to read that via read-string, it'll create a Date object like you'd expect.

user> (def x (read-string "#=(java.util.Date. 1242884415044)"))
#'user/x
user> (class x)
java.util.Date
user> (str x)
"Wed May 20 22:40:15 PDT 2009"

Storing data in a plain file has another benefit of letting me grep my data from a command line, or even edit the data in a text editor and re-load it into the blog (God help me if that's ever necessary).

Having multiple threads banging on a single file on disk is a horrible idea, but Clojure refs and agents and transactions handle that easily. I do have to work out how not to lose all my data if the server crashes in the middle of a file update, though. (I've lost data (in a recoverable way) due to a server crash in the middle of a MySQL update too, so this is a problem for everyone.) Perhaps I'll keep a running history of my data, with each update being a new timestamped file, so old files can't possibly be corrupted. Or use the old write-to-tmp-file-and-rename-to-real-file routine. Or heck, I could keep my data in Git and use Git commands from Clojure. It'd be nice to have a history of edits.
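The tmp-file-and-rename routine takes only a few lines; a sketch in Python rather than Clojure for brevity (the helper name is made up, and it assumes a filesystem where rename is atomic, as on POSIX):

```python
import os
import tempfile

def atomic_write(path, data):
    """Write data to path without readers ever seeing a partial file:
    write a temp file in the same directory, then rename it over the
    target - rename is atomic on POSIX filesystems."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the bytes to disk first
        os.replace(tmp, path)     # the atomic swap into place
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on failure
        raise
```

A crash mid-write leaves only an orphaned temp file; the real file is always either the old version or the new one, never a torn mix.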

If this idea works out I'll upload code for everything to github, as usual.

May 21, 2009 :: Pennsylvania, USA  

Dion Moult

Setting up SSH to work whilst at college.

Well, if you’re out and about quite a bit and you run a Linux computer at home, you should have a good relationship with SSH. If you’ve never felt the need to access your home computer remotely, this is what you should do.

For those that don’t know what SSH is, it is basically a network protocol (like FTP, SMTP, etc.) that allows you to securely connect to another computer. For those that don’t speak jargon, it is some cool thing that allows me to use my computer remotely.

One of my well-visited locations is my college. I wonder why :P … and like most places, it runs Windows. Using a Windows computer leaves me feeling crippled and with a sense of repulsion at the most innocent of small creatures. Combined with my college’s restrictions, there is a lot of stuff I can’t do. For example, I can’t download a .doc file. Also, it is quite troublesome to constantly transfer files over with a memory stick, so I decided to set up SSH.

Little did I know how pathetically paranoid the IT technicians were.

Problem 1: setting up SSH and connecting to my dynamic IP.

The first step was to install (emerge openssh) and set up SSH. (I run Gentoo; the steps will be different for your distribution or if you are running Windows - say, you can set up SSH on Windows, can’t you?) This was simple. Now, the problem here is that my IP keeps changing. Because my ISP’s connection is quite volatile, my IP is dynamic and resets several times a day. The solution was to set up a dynamic DNS. This is a free service, and allows me to connect to a sane domain name whilst a client running on my machine keeps it updated with the latest IP.

Problem 2: port 22 is blocked.

The next day I popped PuTTY on a thumb drive and tested it out - or at least tried to. I got a “connection refused” error. Later that night I learned that most public networks block certain ports, for example port 22, which SSH normally uses.
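A quick way to check whether a network actually lets a given port through, sketched as a tiny Python helper (the helper name is made up; host and port are whatever you want to probe):

```python
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within
    the timeout - a blocked or filtered port shows up as a refusal
    or a timeout."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False
```

From a restricted network, port_open("myhomebox.example", 22) failing while port 443 succeeds would point at exactly this kind of port filtering.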

The fix for this was to change the config in /etc/ssh/sshd_config to use Port 443. You see, whilst http:// defaults to port 80, https:// URLs default to port 443 - and it is hence rarely blocked. Et voila - you can now connect! However, I also run an Apache webserver. This clashes, as it also tries to use port 443. As I don’t care to serve SSL webpages on my localhost, I decided it was a decent sacrifice, and I removed Apache’s HTTPS support by removing `-D SSL -D SSL_DEFAULT_VHOST` from /etc/conf.d/apache2. Tada. I can now access SSH at my college.

Problem 3: What about SCP? Surely you’d want to copy files over.

Well, transferring files over is another issue. It’s all good with your vast array of command-line applications for IRC, Vim text editing, file browsing, MSN, email, calendar and PIM, etc - but every so often you will have the need to transfer a file over. Sending yourself an email with a file doesn’t help, as my college blocks almost every single website out there aside from a few whitelisted ones (note: Gmail is not one of them - so it is blocked too). At the same time, it also blocks downloading every single file type aside from image files. The quick fix for this was to put the file on my Apache localhost, change the file extension to .jpg or something, then download it from there.

But no. Two problems occurred. 1) My website was blocked - haha, and 2) the IT technicians filter not by extension, but by actual file contents. To get past the website block, I run a proxy script on my webserver (there are plenty of free proxies out there too) to access my webserver. But then of course I’m still stuck with the file filter. So it looks as though it’s game over.

Not really - there’s always SCP. I couldn’t send files from my own computer as I didn’t know the network information of my college network. So I decided to SCP using the college’s Windows computer. I hear there’s a program known as WinSCP which is pretty nifty, but at the time I only had PuTTY’s collection of tools and thus PSCP.EXE - which pretty much does the same thing - with one catch: it’s a CLI application. You see, they’ve also blocked the command prompt. OK - for understandable reasons.

To get myself a command prompt, I did the age-old innocent trick. This involves creating a plain text file (eg: .txt), putting the words `` in it, and then saving it as cmd.bat. (Notice the changed file extension.) This will give you a prompt to work with. Now - using that to run PSCP.EXE, I successfully transferred my target file over - which was in this case a .doc file containing some homework.

What about the ethics of this?

Well. I personally feel as though these workarounds are nothing but a way for me to do my work conveniently. The computer system is riddled enough with viruses as it is without my doing, and I doubt anybody would be motivated enough to set up something this complicated unless they were either particularly vicious or needed a file really urgently (as I did at the time). But seriously - a learning centre blocking .doc files?

If you have more experience in networks than me and feel as though this article is inappropriate, feel free to contact me and I’ll willingly take it down.

Related posts:

  1. How to Actually Use Your Computer: Part 3
  2. History of the Internet
  3. What is FTP?

May 21, 2009 :: Malaysia  

May 20, 2009

Iain Buchanan

Another Password Generator

So I wrote a password generator, why not?!

I had some free time recently, I was about to reset someone's password for a site I administer, and I thought it would be nice to have a small script that generates semi-easy-to-remember but semi-secure passwords.

Firstly, I usually use either of these two one-liners:
$ for ((n=0;n<10;n++)); do dd if=/dev/urandom count=1 2> /dev/null | uuencode -m - | head -n 2 | tail -n 1 | cut -c -8; done

$ for ((n=0;n<10;n++)); do dd if=/dev/urandom count=1 bs=8 2> /dev/null | uuencode -m - | tail -n 2 | head -n 1 | cut -c -8; done

The results are similar. You get a bunch of passwords looking like this:

These are handy for setting up lots of accounts, which I do occasionally. However, people hate them because 9FyV1zJq is harder to remember than their cat's name.

The script I just wrote (in Perl) uses word lists and random numbers to generate passwords like this:

Sure, these aren't as secure, but they're better than "tiggles".

Oh, and my wordlists come from

#!/usr/bin/perl -w

# A utility to create reasonable strength and semi-easy to remember passwords
# out of word lists and random characters.
# Copyright 2009 Iain Buchanan. Freely redistributable and modifiable.

use strict;

my @lists = ('/home/iain/personal/ispell-enwl-3.1.20/altamer.0');

my @wordlist;

foreach my $list (@lists) {
    open (WL, "$list") or print "Couldn't open wordlist '$list': '$!', skipping.\n";

    while (<WL>) {
        chomp;                 # strip the trailing newline
        next if (length >= 5); # ignore long words
        next if /^[A-Z]/;      # ignore Nouns & abbvs.

        push @wordlist, $_;
    }
    close (WL);
}

for (1..10) {
    print $wordlist[int (rand ($#wordlist))];
    print int (rand (999));
    print $wordlist[int (rand ($#wordlist))];
    print "\n";
}

May 20, 2009 :: Australia  

May 19, 2009

Bryan Østergaard

Exherbo was announced one year ago today!

And to celebrate the occasion I'll be looking back over the past year, recounting some of our many successes and also giving a glimpse into the future - at least the way I see Exherbo's future.

But first I'd like to thank all the developers and users contributing in various ways to Exherbo. According to Ohloh there've been 52 contributors so far, but that's leaving out people contributing to Exherbo-related repositories that Ohloh doesn't know about, or contributing in ways not involving commits. My guess is that we have had 60+ contributors during this first year, which is very good indeed.

A big thank you to all of you - Exherbo wouldn't have been anywhere near as usable without your continued commitment.

State of Exherbo
At this point I consider Exherbo very usable and quite stable. There are still major changes happening from time to time, but usually the upgrade path can be easily explained in a few lines on the exherbo-dev mailing list.

As for packages we have supported KDE, Gnome, XFCE and Awesome on the desktop for a long time now. On the server side we have most of the usual suspects as well including the apache and lighttpd webservers, samba, exim, postfix, sendmail and so on.

Many people are likely still missing a couple of packages but that's easily solved using importare, writing your own exheres package or requesting it in the #exherbo IRC channel.

Many people have also started to test Exherbo after we started publishing Exherbo images for virtual machines. Just recently it became possible to easily build your own Exherbo images from scratch which will hopefully lead to lots of new ideas for Exherbo and make it easier to mold Exherbo to specific needs.

A year of accomplishments
There's been too many interesting things happening around Exherbo this past year to name them all but here's a mostly chronological list of major events.  All these events have helped shape Exherbo one way or another.

June 7th 2007
Stephen Bennett sets up the exherbo-dev mailing list. Everything keeps happening on IRC.

August 5th 2007
The old goatoo repository is killed and everything is moved to the new arbor and exherbo repositories. We still live in the dark subversion age.

October 13th 2007
Importare is born; it makes life much easier as we have very few packages at this point in time. Importare is a paludis client allowing proper package installs, uninstalls and upgrades without an exheres. At a point where we still had very few packages beyond what's required for a base system install, this had a big impact on Exherbo. Importare is as important today as it was a year ago, as it allows us to concentrate on widely used packages instead of spending time on more obscure ones.

July 5th 2007
Support for the Exheres format is added to Paludis. Officially it's described as a test EAPI used to play around with new ideas that might not be suited for Gentoo.

July 24th 2007
We solve the problem with colliding source tarball names by introducing arrows. This allows us to rename distribution files on mirrors and locally to include package versions for example.

December 7th 2007
We add a commits mailing list. This is a big help for reviewing commits and lots of bugs are caught this way.

Early January 2008
Our mascot Zebrapig is born.

January 31st 2008
We add src_prepare and src_configure phases to exheres-0. For many packages this helps us write much cleaner packages as it matches the stages of the build process much better than just having one big src_compile phase.

March 14th 2008
First draft of Exheres-for-smarties is committed. Exheres-for-smarties becomes our main technical document on the Exheres format and repository structure.

March 15th 2008
We add :* and := support to specify slot dependencies more precisely.

May 2008
We gain a new, much better default src_install implementation, which was later followed up by revamping pretty much every default function as well as the various helper functions. We also switched from subversion to git and had the first archived discussion of replacing categories - this is still a frequently discussed topic.

May 18th 2008
Announcing Exherbo on my blog
It took only an hour or two from my announcement being published to it hitting Slashdot, Digg and The Register, to name but a few. The next several weeks were spent answering tons of questions and trying to resolve the worst misunderstandings.

May 23rd 2008
We got tired of answering the same questions over and over, so Ciaran wrote a quick install guide. This is of historical interest only, but it was important at the time as it allowed us to get back to development for the most part. It's also interesting as a fairly accurate description of the state of Exherbo back then.

June 4th 2008
FOSS Aalborg takes place and I open with a talk describing the main ideas behind Exherbo, some of the bigger issues we want to solve, and why I chose to start a new Linux distribution instead of joining an existing one. Much interest was shown, and it was quite encouraging for me to present my ideas before a large crowd of technical people. The video of my talk is still available online.

June 12th 2008
We add UnavailableRepository to Paludis and get a much better grip on the expanding number of package repositories. The script we use to build the package indexes for all the repositories hits Gentoo hard and we had to fiddle a bit with the updates before everybody was happy :)

August 17th 2008
My first "Exherbo goals" mail. This has become a series of mails where I describe the state of all the different ideas and features we're working on.

August 27th 2008
KDE 4.1.0 has landed! This marks the beginning of Exherbo's KDE support and one of the more important milestones for desktop systems.

September 17th 2008
Markus Rothe announces his first PPC64 stage tarball. Markus ported Exherbo to PPC64 in fairly short time and is one of our many frequent contributors.

October 2nd 2008
Exherbo-cn, one of the early user managed repositories starts. It shows the strength of our distributed repository model by providing packages for Chinese support (fonts, input methods and so on). Exherbo-cn continues to be very active and one of the stronger parts of the community surrounding Exherbo.

October 4th 2008
The second day of the Danish open source conference Open Source Days takes place and I give a talk on my favorite subject - how we're rethinking Linux distributions and what it means to both developers and users. Unfortunately there's no video available of this talk.

Besides the talk we also had a fairly successful booth with plenty of visitors throughout the day. All in all a very good experience that I hope to repeat this year.

October 6th 2008
We add Unwritten repository support to Paludis and move all package requests from Bugzilla to unwritten so we can query them using paludis just like other packages.

January 26th 2009
Just in time for FOSDEM Ciaran adds AccountsRepository support to Paludis.  Packages can now depend on users and groups just like they would depend on various libraries. We quickly proceed to kill enewuser and enewgroup usage.

February 7th 2009
I was invited to FOSDEM as a maintrack speaker and had a blast! I gave a talk on '10 cool things about Exherbo' where I presented some of the cool things we've done to improve the user and developer experience. The rest of the weekend I was constantly approached by people wanting to know more about Exherbo, and it was definitely my best FOSDEM experience so far. Video from my talk is available online.

February 11th 2009
I reorganised our website and changed the build infrastructure to make it easier to maintain. The new website makes it much easier to find needed information and just as importantly it makes it quite easy to contribute updates and new content.

February 14th 2009
First mention that I can find of Sydbox, our future sandbox implementation written by Ali Polatel.

February 12th 2009
We add parametrised exlibs. This is quickly used to specify supported autotools versions, perl module authors and a host of other things making many exlibs much cleaner.

February 15th 2009
We add a src_test_slow() phase for those packages that take a ridiculous amount of time to run their testsuites, often measured in hours. Users can control this with a build_option.

March 2nd 2009
Jonathan Dahan grabbed the chance and wrote an install guide for Exherbo as well as a short FAQ. This is the first major piece of user contributed documentation to the website.

March 3rd 2009
The first virtual machine images are published and become quite popular. The images are all built manually, which convinces me to start writing a script to build them.

March 19th 2009
We replace versionator by internal functions. This way we can take advantage of Paludis own version comparison primitives instead of trying to keep a bash script in sync.

April 14th 2009
First release of Sydbox by Ali Polatel. Sydbox is intended to replace Gentoo's sandbox implementation in Exherbo and should fix most if not all the shortcomings of the existing sandbox implementation. This is an important example of core code being contributed to Exherbo from a user and shows that there's really no distance between users and developers in Exherbo.

May 10th 2009
I published my script for automatic KVM image creation. Several bugfixes and general clean up of the script is offered over the next few days.

Future expectations
Exherbo differs greatly from most other distributions and we don't really follow the normal pattern for distribution development. We have no release schedule for example - in fact we don't have any plans of a release at all!

That is not to say that a proper release could never happen, but we'd need convincing arguments for why a release is necessary before spending lots of time on it. So what do we do when we're not building new releases?

Small improvements
Well, most of our time is spent on what I consider small improvements. Much of the above list describes such improvements. Looked at individually they're interesting but rarely earth-shattering. Based on this, my predictions of what's to come are also going to be mostly about small but important things, with a few bigger things thrown in as well.

Stable Exheres format
One of the most obvious items is the ever-evolving Exheres format. At some point we're going to define our first stable format, exheres-1, and convert our repositories to it. Before that happens I'd like to see proper support for binary packages and multi-ABI, though.

Binary packages already work, but we need to fix various problems before using them more officially. For multi-ABI we have the design more or less pinned down, but there are some pretty annoying implementation issues that we need to work out.

Build infrastructure
We already have some parts of the infrastructure needed to build various Exherbo blobs, such as KVM images, but lots more is needed.

Right now we can build KVM images for x86 and amd64 in a fairly inflexible manner. We need to expand the current scripts and write new ones that let us automatically build binary packages and several different kinds of image files, with flexible configuration of partitions and file systems. And while at it, we need to expand all this to be able to build images for CDs and USB sticks as well.

The build infrastructure should also be able to easily build customised images and be used for more or less unrelated purposes such as tinderboxing.

New init system
This is one of the more mysterious Exherbo projects but also one of the things that I'm most excited about personally. I've talked about it in public on several occasions so many of the basic ideas are already known. That said it's changed direction quite a bit and should be even more interesting when it's finally published.

Easier management of our distributed repository model
Our model of many Topic Repositories and Developer Repositories works fairly well as is but there's no doubt it can be improved further. Currently we want to implement a "repository of repositories" so you can install new repositories using paludis just like you install packages. As we continue to grow and refine our model I'm sure we're going to focus even more on this area and I'm looking forward to seeing what exciting ideas we're going to come up with in the future.

Better documentation
This is one area that hasn't received a lot of focus so far. Our documentation mostly consists of Exheres-for-smarties and, of course, the Paludis documentation. Lots of other areas need to be documented, and I'm hoping some users will step up to help with this important task.

Growing user community
This one seems obvious at first, but a large part of our users come to Exherbo because of the flexibility of the distribution and our strong focus on the technical design of new features, as well as the rapid development happening. This also means that many of our users actively participate in the development, which is something I'm hoping to strengthen further as we go along.

It keeps the community very much alive and we seem quite capable of keeping the focus and direction of Exherbo despite having twice as many users contributing in the past year as there are official Exherbo developers.

New profiles
Our current profiles aren't very flexible or useful. We have a vague idea of "mix-ins", allowing us to "mix" several different profiles - an amd64 profile + a KDE profile, for example. The idea is still hazy at this point, but eventually we'll get much more flexible profiles allowing for easier maintenance and use.

The great unknown
And perhaps the most exciting part of the future is the part that we can't foresee at all. The development rate has only increased since announcing Exherbo, and we often get ideas from unexpected sources. Some of these ideas don't fit in very well with Exherbo and are quickly discarded, but many are used in one way or another. Usually that requires some molding to make the idea fit the rest of Exherbo properly, which in turn might lead to new ideas.

This process of constantly exploring new ideas helps keep Exherbo at the forefront and definitely keeps it a fun project to work on.

Thank you all for being part of this project - Exherbo might be my baby but you're all helping it grow up and shaping it into something very exciting.

May 19, 2009

Dion Moult

How to install Chromium (Google Chrome) on Gentoo Linux

The other day I was surfing the web and read an article about Google Chrome in some sort of hacking competition - this prompted me to check out Google’s progress on porting Google Chrome to Linux and Mac. For those that don’t know, Google Chrome is Google’s attempt at making a browser. So far it seems like a really good attempt.

It seems as though lately the Linux builds (I ignored the Mac stuff - but I hear it’s getting good too) are getting to a usable state. Definitely not finished, definitely buggy, but usable. So, like any other Gentooer, I began trying to find out how to get it.

Step 1) Any ebuilds out there?

Why bother doing the hard work myself if somebody’s already put it in Portage? With some google-fu it seems as though there are a couple of ebuilds: one by the French, and another by the Chinese. The French one (not tested) is available in the `THE` overlay, available by doing layman -a THE. The Chinese one seems to be called “Shellex-overlay”, and can be accessed here. I’m not quite sure what the French one does, as the ebuild didn’t really like my amd64 system (note: Google Chrome only supports 32-bit as of writing). However, the Chinese one fared better and provided me with a binary. If you are on a 32-bit system (x86) you should try those ebuilds.

If you don’t want to compile from source, check the dependencies list just a bit further down, then check out the build bot. Note: the build bot provides binaries for Windows, Linux AND Mac, so if you’re on a Mac, you’re in luck!

For more information, you should visit the Chromium Linux Building page.

Under `Prerequisites`, it lists the dependencies as packaged by the Ubuntu system. Here is the list of dependencies under the names Gentoo uses:

  • Python >= 2.4
  • Perl >= 5.x
  • gcc >= 4.2
  • bison >= 2.3
  • flex >= 2.5.34
  • gperf >= 3.0.3
  • pkgconfig >= 0.20
  • nss >= 3.12
  • gconf
  • glib
  • gtk-engines-murrine
  • nspr
  • corefonts
  • freetype
  • cairo
  • dbus

Their version requirements are listed as needed.
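Before reaching for the chroot or the build bot, it can be handy to see which of the basic build tools are already present. A rough sketch (it only checks that each tool from the prerequisites list is on $PATH - it does not verify the version requirements above):

```shell
#!/bin/sh
# Loop over the command-line build tools from the prerequisites list
# and report which are present. Version checks are left to the reader.
missing=0
for tool in python perl gcc bison flex gperf pkg-config; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf 'found:   %s\n' "$tool"
  else
    printf 'missing: %s\n' "$tool"
    missing=$((missing + 1))
  fi
done
printf '%d tool(s) missing\n' "$missing"
```

The library dependencies (nss, gconf, glib and friends) can’t be checked this way; for those, ask your package manager.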

Step 2) What about 64-bit systems?

There are several techniques for getting Chromium onto a 64-bit system. However, no matter what, I highly recommend that you create a 32-bit chroot. If you want to track each library individually and symlink your system to hell (as I first attempted), be my guest, but you’re wasting your time. So, first create a chroot by following this nifty guide.

Once you’ve got your chroot up, you can either try out the ebuilds I mentioned before, compile it yourself from source (via Google’s instructions), or be lazy and grab the binary from the Chromium build bot. I have tested the latter two techniques (can’t trust the French nor the Chinese!). To compile it yourself from source, follow the Chromium Linux Building page. Note: you will require quite a bit of disk space (the sources tarball itself is 640MB+). I also suggest you bootstrap from the tarball; the subversion checkout takes a lot longer and is a waste of time, really. Finally, if you’re just interested in getting the binary and running Chrome (not development), I would use Release mode (see the building page for instructions). Of course, after making sure you have the dependencies I listed above, you should have Chrome compiled!

If you’re lazy and don’t want to compile, there is a build bot.

Step 3) Run Chromium on Linux!

At this stage, you should have the Chrome binary (either by compiling from source or from an ebuild, or by getting the build bot binary). Now you just have to run the program and enjoy. If you’re running inside the chroot, you should use the xhost hack: do xhost local:localhost outside the chroot, then try running the binary again. Obviously you don’t want to waste time setting up X in the chroot.

Finally, here is a screenshot of Chromium running on Fluxbox! (I normally use KDE, but I wanted a more lightweight WM while compiling Chromium.) You might also consider doing nice -n 10 when compiling if you want to continue doing your stuff. In fact, I’m running Chrome right now on KDE to write this post. It’s very fast, uses about 1% CPU, separates itself into different processes per tab, and so far seems pretty “stable”. However, I have found that opening a file browse dialog (e.g. in an upload form) makes Chromium jump to about 50% CPU, which sucks.

Any thanks, issues or problems - feel free to ask.

Related posts:

  1. rtm - a Command Line Tool for RememberTheMilk
  2. Gentoo installed (again).
  3. Some Linux/Gentoo Wallpapers

May 19, 2009 :: Malaysia  

May 18, 2009

Martin Matusiak

generalized makefiles

Build systems are probably not the most beloved pieces of machinery in this world, but hey, we need them. If your compiler doesn’t resolve dependencies, you need a build system. You may also want one for any repeated task that involves generating targets from sources as the sources change over time (building dist packages, xml -> html, latex -> pdf, etc.).

Fittingly, there are quite a few of them. I haven’t done an exhaustive review, but I’ve mentioned ant and scons in the past. They have their strengths, but the biggest problem, as always, is portability. If you’re shipping Java then having ant is a reasonable assumption. But if not… The same goes for python, especially if you’re using scons as the build system for something that generally gets installed before “luxury packages” like python. Besides, scons isn’t popular. I also had a look at cmake, which is disgustingly verbose.

Make is the lowest common denominator and thus the safest option by far. So over the years I’ve tried as best to cope with it. Fortunately, I tend to have fairly simple builds. There’s also autotools, but for a latex document it seems like overkill, to put it politely.

One to one, single instance

So what’s the problem here anyway? Let’s use a simple example, the randomwalks code. The green file is the source. The orange file is the target. And you have to go through all the yellow ones on the way. The problem is that make only knows about the green one. That’s the only one that exists.

So the simplest thing you can do is state these dependencies explicitly, pointing each successive file at the previous one. Then it will say “randomwalks.s? That doesn’t exist, but I know how to produce it.” And so on.

targets := randomwalks
all: $(targets)
randomwalks : randomwalks.o
	cc -o randomwalks randomwalks.o
randomwalks.o : randomwalks.s
	as -o randomwalks.o randomwalks.s
randomwalks.s : randomwalks.c
	cc -S -o randomwalks.s randomwalks.c
clean:
	rm -f *.o *.s $(targets)

Download this code: generalized_makefiles_single

Is this what we want? No, not really. Unfortunately, it’s what most make tutorials (yes, I’m looking at you, interwebs) teach you. It sucks for maintainability. Say you rename that file. Have fun renaming every occurrence of it in the makefile! Say you add a second file to be compiled with the same sequence. Copy and paste? Shameful.

One to one, multiple instances

It’s one thing if the dependency graph really is complicated. Then the makefile will be too; that’s unavoidable. But if it’s dead obvious like here, which it often is, then the build instructions should mirror that. I run into a lot of cases where I have the same build sequence for several files. No interdependencies, no multiple sources, precisely as shown in the picture. Then I want a makefile that requires no changes as I add/remove files.

I’ve tried and failed to get this to work several times. The trick is you can’t use variables, you have to use patterns. Otherwise you break the “foreach” logic that runs the same command on one file at a time. But then patterns are tricky to combine with other rules. For instance, you can’t put a pattern as a dependency to all.

At long last, I came up with a working makefile. Use a wildcard and substitution to manufacture a list of the target files. Then use patterns to state the actual dependencies. It’s also helpful to unset .SUFFIXES so that the default patterns don’t get in the way.

targets := $(patsubst %.c,%,$(wildcard *.c))
all: $(targets)
% : %.o
	cc -o $@ $<
%.o : %.s
	as -o $@ $<
%.s : %.c
	cc -S -o $@ $<
clean:
	rm -f *.o *.s $(targets)

Download this code: generalized_makefiles_multiple

Many to one

What if it gets more complicated? Latex documents are often split up into chapters. You only compile the master document file, but all the imports are dependencies. Well, you could still use patterns if you were willing to use article.tex as the main document and stash all the imports in article/.

This works as expected: $< gets bound to article.tex, while the *.tex files in article/ correctly function as dependencies. Now add another document story.tex with chapters in story/ and watch it scale.

targets := $(patsubst %.tex,%.pdf,$(wildcard *.tex))

all: $(targets)

%.pdf : %.tex %/*.tex
	pdflatex $<

clean:
	rm -f *.aux *.log *.pdf

Download this code: generalized_makefiles_manytoone

Many to many

Latex documents don’t often have interdependencies. Code does. And besides, I doubt you want to force this structure of subdirectories onto your codebase anyway. So I guess you’ll have to bite the bullet and put some filenames in your makefile, but you should still be able to abstract away a lot of cruft with patterns. Make also has a filter-out function, so you could state your targets explicitly, then wildcard all source files, filter out the ones corresponding to targets, and use the resulting list as dependencies. Obviously, you’d have to be willing to use all non-targets as dependencies of every target, which yields some unnecessary builds. But at this point the only alternative is to maintain the makefile manually, so I’d still go for it on a small codebase.

PS. First time I used kivio to draw the diagrams. It works quite well and is decent on functionality, even if the user interface is a bit awkward. Rendering clearly leaves something to be desired.

May 18, 2009 :: Utrecht, Netherlands  

Andreas Aronsson

Lovely org

The editor emacs continues to amaze me. For some time now I have been using emacs as a day-planner in the excellent org-mode. Once you get used to the commands it's a breeze to create documents with structured headlines, internal and external links, etc. It's also very versatile in that it can export the same document to different formats like html or ascii. The stable version that comes with Gentoo unfortunately does not seem to support docbook export (the latest upstream stable version does, however, and at the time of this writing it is possible to use it with Gentoo if you run the keyworded version).
You can find a nice screencast here. Org-mode was also featured in a Google Tech Talk.

May 18, 2009 :: Sweden

May 17, 2009

Jason Jones

Life Comes Fast...

Okay...  Wow..  A pretty huge amount of stuff has been going on.  I've been writing about it in my personal journal, but some of the stuff shouldn't be public, so I'm writing this as a supplement for those who care.

Man...  Where do I even begin.  Awhile back, I posted a couple of private entries about an interview I had with a couple of guys starting up their own Internet-based company, and they were interested in bringing me on board.  Holy cow.  Things have definitely progressed since then.

To start, the business model of this company (the company is called Conexm, and is run by a guy named Alma Tuck) is amazing - the people running it seem very cool, hard-working, interesting, motivated, and moral people.  It doesn't get any better than that...

Well, yes, actually it does.

You see, I've worked for "small businesses" before, and it seems like everything about this new company isn't small.  It's hard to describe with words, but the experience the owner has, and the business sense of his partner, make the possibility of Conexm succeeding seem very real.  Hopefully it'll be at least a very good move for my career.

When I initially heard about the employment position, I was interested, but highly skeptical.  The more I talked with Alma, the more interested I became.  It got to the point where both my wife and I needed to know if this was the right move for our family.  We both prayed and pondered over the benefits and risks of making the move.  It was in this spirit, while I was working on my computer, that Sarah came down, holding Sam, and told me, "I just needed to tell you that I really think you should take this job.  It feels like you should take it."  I talked with her a little bit, and then continued working on whatever I was doing on the computer, when she said, "It's actually pretty overwhelming, this feeling.  I really think you should take it."  The funny thing about this was, I hadn't even gotten an offer from them yet.

So, despite a bit of trepidation about the things associated with working with an entrepreneur, we finally got an offer, and both my wife and I were dumbfounded.  It was more than gracious.  Let's just say, if things work out with Conexm, I'll be hoping to work there for a long, long time.  While I was at Nature's Way, our family lived comfortably and happily.  This has the potential to allow us to do a bit more for others.

Now, a bit about why I even considered quitting Nature's Way...  Suffice it to say, and to make a very long, depressing story as short as possible, Nature's Way was merged with another company called Enzymatic Therapy.  They had the upper hand, so they did what all companies in their position do: they fire people.  To keep people from jumping ship right and left, Nature's Way offered a severance package to those who chose to stay - and it was a package unlike anything I had seen before.  Very much worth staying for.  Anyway...  Enzy decided to hire me instead of laying me off, and that - believe it or not - was a bad thing.  They use Microsoft products exclusively.  All their code is in C# on the .NET framework.  Not for me.  So, after realizing that Nature's Way in and of itself was disintegrating, there really wasn't any reason for me to stay.

It's really hard to get the following point across to people who don't know what working at a company like Nature's Way is like, but it was all about the culture and the people there.  Sure, we made money and had a good business model, but there were programmers working at NW who had been there for 30 years - and not just a few.  Four people of our six-person team had been there more than 10 years.  If you know anything about the technical sector of the marketplace, that should blow you away.  I rarely hear of people staying in IT positions with the same company for more than 5 years.  I would have stayed at Nature's Way for the rest of my life.  Such a shame that it's now practically defunct.  In the space of 6 months, it went from a place where easily 90% of everyone working there loved their job, to a place where nobody smiled much at all, and there was only one topic of conversation in the cafeteria.  When I announced my resignation, both my immediate manager and the CTO of NW congratulated me, and were sincerely happy I had found something else.  Man, what a freakin' shame.

So, yes, my first day at Conexm was last Monday, and they don't even have an office yet.  It's very stressful, because they have me learning programming concepts I've only heard about, and I'm feeling that I need to perform at the top of my game 100% of the time in order to "earn my keep", as it were.  Hopefully I'll get things in order, learn what I need to learn, and start kicking some major bootie.  There is a lot to do, and not a lot of time to get up to speed.

Also, as you most likely now know, I bought an iPhone, and have been loving it.  Yeah.  Great little device.

Anyway...  Lots of changes going on in not too long of a time span for the Jones family.

And although it feels stressful, and is quite risky, I still have a feeling that this time, again, it will be for the better.

May 17, 2009 :: Utah, USA  

Dion Moult

Remember The Milk: A Great Online To-do List Service

Remember the milk? What an awesome name for my newly discovered service. Just yesterday I was poking around my newly installed KDE 4.3 beta (4.2.85) and I came across the “Remember The Milk” plasmoid hiding in the kdeplasma-addons package in the kde-testing overlay. That was the beginning of about 2 hours or so spent discovering more about Remember The Milk.

Wait, what actually IS Remember The Milk?

It’s a todo-list website. You can create categories and put tasks in them. You can also prioritise tasks as High, Medium, or Low. Tasks can also have due dates, and can be recurring.

But what’s so amazing about it?

Well, I am no stranger to to-do applications. I have used Windows Mobile 6’s todo on my phone to quickly note stuff down, I have used KTodo (and despised it), I’ve used devtodo (a highly recommended CLI todo list), I’ve used stand-alone plaintext files for todo lists, post-it notes, calcurse (a CLI ncurses-based calendar + todo app)… well, a lot of to-do things. Over time, here are the things I’ve decided make a to-do service useful:

  • You make an effort to use it yourself. You don’t use it, it’s not useful.
  • Priority system, but not a bloody /10 rating for each task.
  • Due date system.
  • Fast and accessible.
  • Simple and intuitive interface.

Here’s what makes Remember The Milk great. For priority, you have a choice of 3: high, medium, low. I think that’s the best combination. It’s awesomely accessible, giving the flexibility of access-the-website-anywhere plus a KDE plasmoid to quickly access it on my desktop. It’s fast - it sports an uncanny interface (’uncanny’ used in the technical sense of the word) with plenty of what look like jQuery usability tweaks (if it’s not jQuery, so sue me).

Oh, and the greatest thing is how intuitive and simple it is to use. I can type in “Visit X for dinner tomorrow”, and it’ll parse the “tomorrow” and set the due date accordingly. Same with “Mechanics exam on monday”. It’s all automated.  Even cooler is the recurring function: I can do “blog post every two days”, and it’ll work it all out for me. The categorising feature allows me to group tasks easily into the projects I’m involved with.

It also has the amazing feature of sharing todo lists and publishing them publicly/privately. This allows me to set up a collaborative todo list for a mini-project or such - allowing me to communicate development easily to the public and letting them use it as a wishlist! I can also send/receive todos from other users. I swear, if I were leading any sort of team in a business, I would make it compulsory for them to use this - it’s great for collaboration!

Well, that’s my two cents, and I suggest you check out RememberTheMilk!

Related posts:

  1. rtm - a Command Line Tool for RememberTheMilk

May 17, 2009 :: Malaysia  

May 15, 2009

Steven Oliver

Is Object Oriented always the correct strategy?

Being the age I am, and given the training I’ve received, Object Oriented is really the only way I know. Another way to program was never an option for me. The semantics of which language is and which isn’t OO are immaterial here. C and C++ may not be purely OO languages, but you still use them in an OO fashion. All (err, most) of the OO principles are there: inheritance, abstraction, and even polymorphism. But I have recently, with my current job, run into a dilemma. What if a language, despite the ability, really shouldn’t be used in an OO fashion?

My problem stems from PL/SQL. We have an interface between two databases at work written in PL/SQL. So far so good. The problem, though, is that the interface is written in such a fashion that, along with Oracle’s massively helpful error messages, it is almost impossible to debug quickly and easily. And as far as I can tell, it all stems from the OO fashion in which it was written. Now, unlike most modern languages, there is very little OO in PL/SQL as far as I’m concerned. You have encapsulation and modularity, but I don’t see much abstraction, for example. It appears to me that despite Oracle’s best efforts, their attempt to make PL/SQL OO has been (overly) appreciated but isn’t worth it. What’s wrong with old-fashioned procedural programming? What do I even need OO for with PL/SQL? I have a massive data set that needs to be worked. I need to filter through it, pull what I need, discard what I don’t, and then fill a table with what’s left. Not hard. Do 1, then 2, then 3. Even when written in English (as opposed to code) it’s all very un-OO.

If you want specifics I can elaborate in another post. For now though I need to wash and wax my car.

Enjoy the Penguins!

May 15, 2009 :: West Virginia, USA  

Jason Jones


I've been pretty much stuck 100% in the Linux world now for around 5 years.  I bought myself a Cowon iAudio X5 media player based on its compatibility with Linux (and its complete lack of anything DRM related) - and I've loved it for the 2.5 years I've had it.

I also had a blackberry which I had used for 2 years, thanks to Nature's Way.  Well... I no longer work at Nature's Way (I'll blog about all that later), and along with my resignation came the termination of my phone.  So...

I was wandering around Wednesday afternoon wondering what I was going to do about a phone, when I went in to just renew my blackberry.  It would cost me around $100.00 / month to get what I needed.  I thought that was a bit much, so I went to the AT&T store and asked about the iPhone.  The guy gave me the spiel, and I took about 30 minutes to think it over.  The iPhone would cost me around $130.00 / month, but provide many, many more options for the price.  I would have to buy the iPhone, which would cost me much more than I was planning on spending, but then I could sell my blackberry to offset some of the cost.  So...  I did it.

I bought an iPhone.  And I simply must say - it blows away the competition.  Seriously.

I'm not a Mac-aholic like many people I know, who eat, drink, and breathe the Apple lifestyle (which, in my opinion, is nothing more than high-priced stuff made shiny).  I mean...  C'mon.  A laptop for $3500.00!?!?!  Is it REALLY that much better?  I could build myself 2 desktops and still have enough money left over to buy a laptop with that much money.  Sheesh.  So... anyway... Yeah, I'm definitely not, haven't been, and probably won't be much of a fan of Apple's ideology.

But, then there's this iPhone.  I went all out.  I got a jawbone bluetooth ear piece, a leather case, the car charger, a pretty solid plan, and the 16gig phone.  I'm all set.

During the three days I've had the phone, I've managed to get it synced with all google apps I use, namely:

  • gmail - it syncs every hour, and that's because I set it that way.  You can set it to check as often as you like, or not at all.

  • google calendar - this is a freaking life saver.  I've been over-scheduling stuff because I have no idea what my wife's calendar looks like.  I now see my calendar, my wife's calendar, and my church responsibilities calendar all on my iPhone - and it syncs every minute or so.  Amazing - amazing stuff.

  • google contacts - I tried it, and didn't like it due to the lack of features per contact.  I'll look more into this in the future.

Then there's the app store.  Man, what genius came up with that!?!?  Apart from the generally amazing interface of the rest of the phone, I've never seen anything as intuitive as the app store interface.   The way it handles the selection, installation, and use of apps is simply genius.  Stuff just works, and is easy to use.  Yes, I love that about Apple.  Is it worth the price they're asking?  Yes, for the iPhone, but pretty much heck no for everything else.

Now, one serious gripe which, apart from the general "overpriced" nature of Apple's toys, has kept me away from Apple, is their proprietary nature.  They like to keep Apple's things in Apple's universe.  It's good that iTunes works with Windows, but it's a complete failure with Linux.  And that's a shame.

Also, it seems that half of the iPhone's capabilities are crippled, if not totally useless, without iTunes.  Like - I can't seem to put any images on the phone at all, unless I take the image with the phone's camera, or they come through iTunes.  This sucks royally, because I didn't have any Windows installations at all at the time I bought the phone.  So, through a whole lot of grumbling and curled upper lips, I grabbed an empty drive and a spare copy of XP I had, and spent a good 6 hours trying to get XP installed (the disk was pre-Service Pack 1, so it didn't recognize any of the SATA drives I had).  Man, Windows is a pain.  But I got it up and running, got iTunes installed, and set up my account.  So... Anyway...  Despite the few hiccups along the way, the iPhone seems to be a great product.

Anyway... Just thought I'd let y'all know what I think.

May 15, 2009 :: Utah, USA  

May 14, 2009

Dion Moult

The Blender Model Repository and BlenderNation: open-source merger?

As some might know, Blender is an open-source 3D content creation application - it’s cross-platform, a pioneer in the free 3D application market, and I use it. Not only do I use it, love it, and hang out in the #blenderchat IRC channel on freenode, I host the Blender Model Repository, having taken over from Andrew Kator a long time ago when he suffered legal issues. It’s been running stable for the past year or so, every so often getting new model submissions, with users finding it a useful resource.

Even if you know nothing about Blender, help me with this open-source dilemma - please read on.

Recently, Bart Veldhuizen over at BlenderNation started beta-testing a new resource sharing system known as BlenderNation Links. BlenderNation, for those that don’t know, is the central news website for all things Blender related. It’s the central hub that Blender development and community news goes through - outside the official website, which is a bit more boring and just says “hey guys, new version” - as do most official websites. (Just joking!)

I was recently pleased to be given the opportunity to beta-test the new system. Well, this new BN Links categorises things as “individual” items - and a model repository, as one might expect, is not just one individual item, but a whole other resource system. The thing I’m wondering about is: “how do I make the repository’s resources just as accessible through the BN Links system?”

A while back I wrote the second part of my open-source analysis article, called “The Open-Source Market - Limitless and Forever Expanding?” (click it to read the article - it might interest you). One of the conclusions I came to there was that in the short term, open-source should have plenty of choice and competition, but in the long term, it must realise that synergy is what is needed to ensure its survival and continued growth. This is a perfect example of that concept in real life. There are two resource sites, one obviously much larger and more popular than the other, originally offering slightly different things. BlenderNation focuses on news, and has a small tutorials/resources section, whereas the BMR (Blender Model Repository) focuses on… hosting models and tutorials. Now BlenderNation wants to increase its focus on tutorials and resources, thus duplicating the BMR’s function somewhat. Is this, perhaps, the time to synergise?

Firstly, let’s get the facts down:

  • BlenderNation is much more popular and well known than the BMR. It also has a cooler name.
  • The BMR is a hub for models. I have no legal right to give all my models/let them be published on BN Links.
  • Competition is good, but function replication is not.
  • I do have the legal right to “link” to each individual model, but such manual addition is tiresome, and will have to be constantly updated as new models come in.
  • The BMR does have a built-up reputation for those that know it. It’s not very nice to say “hey guys, we’ve uh, disappeared - check out this cooler site”.
  • The BMR is running on deprecated technology - sad but true. Whoops, did I just say that? But hey, if it ain’t broke, don’t fix it.
  • The BMR is a bit like a music collection with some missing metadata. Some files are hosted elsewhere, some don’t have preview pictures. This means that links die out.
  • The BN Links system, from what I’ve seen, seems a lot more flexible and makes it much easier for users to find what they want, which is great for the community.
  • I juggle a lot of projects. BMR maintenance is somewhat of a gypsy on my todo list.
  • I’m human - try asking someone else to delete a section of their site so somebody else can run it. (OK, that sounded very selfish and attached.)

Well. Here’s where you guys come in. To what extent can I realistically share resources, how should this be done, and tell me - is this the time to synergise?

Please leave a comment. Even if you know nothing about Blender.

Related posts:

  1. Linux: Open Source Theory
  2. The Open-Source Market - Limitless and Forever expanding?
  3. Kaizen and Kakushin’s Practicality in Open-Source Business Models

May 14, 2009 :: Malaysia  

Patrick Nagel

Umlauts without a QWERTZ keyboard layout (for Windows users)

For years I have been using a .Xmodmap file under Linux (X11) that lets me type German umlauts without stress: if I press [Windows]-[o] I get an ö. If I need an Ö, I press [Windows]-[Shift]-[o]. A ß comes from [Windows]-[s]. This works superbly - after a few days I had gotten used to it, and the Windows key is finally good for something :)

Since I have been abroad for quite a while now, I meet many people writing German on an American keyboard layout. Among them are (unfortunately) also many Windows users. All of these people paraphrase the umlauts as “oe”, “Oe”, “ss”, etc.

So today I did some research into whether there might be a way to get umlauts to these people after all. The most obvious option is, of course, to switch to Linux and use my .Xmodmap. But that is more of a long-term thing, and many are simply too inflexible for it…
There is, however, another solution, thanks to the GPL-licensed software AutoHotKey. An unusually nice touch (by Windows-program standards) is that this software even ships with a “compiler” that turns the key-mapping definition into an executable file.

The AutoHotKey mapping definition (.AHK file) that produces my desired functionality looks like this:

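(The original mapping file itself did not survive in this post. As a rough sketch - my reconstruction of the described behaviour, assuming AutoHotkey v1 syntax, not the author’s actual file - it could look like:)

```autohotkey
; Reconstruction sketch: Win+letter sends the umlaut, Win+Shift+letter the capital.
; In AutoHotkey v1, "#" is the Windows key and "+" is Shift.
#a::Send ä
#+a::Send Ä
#o::Send ö
#+o::Send Ö
#u::Send ü
#+u::Send Ü
#s::Send ß
```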

It is very important that the file is saved ANSI-encoded, because AutoHotKey apparently cannot cope with Unicode. If, however, you are on a Windows installation (as I was during my research) whose non-Unicode program locale has no German umlauts in the extended ASCII range, this is not particularly easy to accomplish. You cannot simply save the file ANSI-encoded with Windows’ on-board tools unless you switch the locale to German (or any other language in which the above umlauts appear in the extended ASCII range). And after that a reboot is in order, of course - how could it be otherwise…

If you have a bit of trust in me, you can also simply download my umlauts.exe, which I created with AutoHotKey. After running it, the AutoHotKey logo sits in the system tray and you can type umlauts as described above. I have not built in any trojans or other malware (at least not knowingly).

Update (2009-05-17): It turns out that this whole thing does not work in GTK-for-Windows applications such as Pidgin. I have now put together a workaround that works in Pidgin - but it is not pretty. The AHK file has become somewhat longer as a result: umlauts.ahk

I have updated the umlauts.exe accordingly.

May 14, 2009 :: Shanghai, China  

Matija Šuklje

The Jamendo Experiment — report no. 2

Welcome to the second report of the Great Jamendo Experiment! I have decided that, due to my real-life obligations, I will publish reports on my Jamendo Experiment on an irregular basis (ca. twice a month). With this said, I will also prolong the experiment for at least a few weeks, if not months.

For those of the impatient nature: in short, it is still possible to survive with free (as in beer and speech) music. And after almost a month I still do not miss a thing from the land of the commercial record labels. In fact the longer I listen to free music, the more I like it!

I was quite surprised, really, that after such a short amount of time free music started to really show on my overall Last.FM track charts. Jimmy the Hideous Penguin's track Fucking ABBA is currently my 15th most listened-to track of all time! And not only that — I already get mostly free music recommended by Last.FM (mostly Jamendo artists) :D

On a side note, Last.FM Radio has become a paid service (unless you are a USA, UK or German citizen), and I have already "spent" my 30-track trial, so I had to stop using the Recommendations Radio (which was really the only one I used every now and again). But not all is lost, since Amarok will in the near future (probably in 2.2 or a later 2.1.x) have Last.FM similar artists integrated. In any case, if I someday feel that I need Last.FM Radio, 3 €/month is not too much to ask, really. Although I do find it legally dubious that some EU citizens have to pay for the same service while others (UK and German) are covered by ad revenue. The EU is a single market, last time I checked.

On an Amarok-related note, the aforementioned bug with the Jamendo plug-in not listing all albums has been solved.

While searching around I stumbled upon a nice community project called Free Music Charts. It is hosted and maintained by Darker Radio — a portal for free "dark" music (gothic, emo, darkwave, industrial, IDM, synthpop, etc.) — and the chart is decided each month by a community vote. All the tracks are available under a CC license and there is even a monthly podcast (I hate this term) with a review of the albums. Just a warning: the site and the reviews are in German.

When looking at Try^d's profile I found out about another interesting record label/project — Opsound. It basically lets anyone participate as long as they use the CC-BY-SA license, and it tries to follow the free culture and gift economy concepts as closely as possible. There are also a lot of links to further reading on that topic on their site.

Artists that I found and loved lately are:

  • T r y Δ d (also written as: T r y ^ d, Tryad) — some very nice electronica/trip-hop with an interesting history. Tryad are said to be one of the first virtual bands — their music is made not in a single studio, but by collaborating over the internet. Somewhat similar to how most FOSS is made. I have mostly listened to their album Listen, which I also like the most. Their style ranges from very easy-on-the-ears electronica, such as the tracks Beauty, Listen and Lovely, to a much harder and darker crossover style, as on You Are God. Piano is not an uncommon instrument on their tracks, and the vocals (especially the female ones) sound well trained (as opposed to many commercial electronic music artists). The lyrics are not just the standard wishy-washy and unintelligible kind you hear on the radio — for example, Mesmerize talks about despair that almost resulted in a suicide. In the end I just have to say that their track This is the first in a very long time that sends shivers down my spine and puts a smile on my face every single time I hear it! The flow and the vocals on it remind me (oddly) of Blue Öyster Cult's (Don't Fear) The Reaper, while the beat and the melody shift back and forth between calm and easy and lively and empowering. Amazing stuff! No wonder they are currently no. 2 on Jamendo's weekly charts.
  • Grace Valhalla — a self-proclaimed amateur artist who says she just produces music she likes in her spare time. If this amateurism can in part be justified on her first two albums — PEAK~ and the more rockish Psychopathetic — it is far from the truth for her latest album, SummerCamp. This does not mean her previous work was bad; it just shows how much she evolved in the short period of time between these albums. Her style mixes electronic music with pop rock elements (lately even jazz) and sometimes 8-bit effects. Although this sounds like a bit of a rough mix, it is actually quite smoothly blended together and produces some very summer-ish tunes. I like it and I can barely wait for her next album. If this is amateurism, I wish more musicians would keep the "amour" in/for their music.
  • Moondogs Blues Party — very enjoyable blues that is both somewhat classical (especially the smoky vocals and acoustic guitar) and has a modern touch (jazz, latin influences) to it at times. The overall feeling is mellow yet not too whiny. Although the guitar solos on their album O cadelo lunático do not sound very complicated, it is still a great listen!
  • A Sound Travesty — a one-man band playing what I imagine it would sound like if Pixies or Dinosaur Jr. started to play emo punk. The vocals are at times mellow and at times screaming; although the rhythm usually stays the same throughout a track, the power and force change. A good example of post-punk and emo crossover.
  • The Very Sexuals — light rock with post-punk elements and a mellow, yet sweet pop taste. Their album Post-Apocalyptic Love at times reminds one of (but does not mimic!) 90's and even older rock music.
  • Professor Kliq — a quite popular artist on Jamendo, which, given the quality of his work, is quite understandable. His style is a nice mix of trip-hop and break-beat that at times reminds me of Daft Punk and the Chemical Brothers. So far I have listened to his album Guns Blazin' and was so impressed that I had to download all his other work as well. In my opinion he can very well compete with any commercial artist in his genre. Add to that his young age (22 years!) and he is definitely worth keeping an eye on.

hook out >> off to bed after trying to study civil procedural law and obligations late at night

May 14, 2009 :: Slovenia  

May 13, 2009

Martin Matusiak

ruby compiler series: annotated git history

I’ve been reading along with Vidar Hokstad’s rather excellent Writing a compiler in Ruby bottom up. It’s a 20 part (so far) series documenting his effort to hack together a Ruby hosted compiler that in the end will compile a language similar to Ruby into x86 assembly.

Compilers are complicated beasts that take a lot of planning to build. Now I’m not saying Vidar didn’t do all the planning, but what makes this series especially palatable is the fact that he’s writing it literally bottom up, through what you might call evidence based hacking. That is, compile the very simplest thing you can (starting with an empty ELF binary), and then see what gcc produces. From there on, add print “Hello World” and see how the code changes and so forth, adding new constructs. This means you can read along even if you don’t know any assembly (like yours truly) and take it in small steps without first having to absorb the complexity of a whole compiler.

It’s a great learning opportunity, seeing as how each step is a working compiler one iteration up. You can read along Vidar’s blog with the git diff side by side and see how the assembly is changing. To make this a bit clearer I’ve forked his repo and annotated the early commits with tags (where they were missing) and made sure the customary make run / make clean work as expected. I’ve also added some commit messages that tell you exactly what the iteration achieves at each particular step, so you can browse the history and figure out say “where do I look to see how to do a while construct”.

I’ve annotated the first 15 steps (the rest were already tagged):


Given how git works, where every commit is hashed from the sum total of the previous ones, the only way I could do this was by rewriting the git history, which is not ideal. So Vidar’s commit objects won’t match mine, but all I’ve done is cherry-pick them off his branch and add some annotations. It’s all still there. :)

May 13, 2009 :: Utrecht, Netherlands  

Andreas Aronsson


So I decided to spice up my Gentoo workstation a little. I was having a little too much crud lying
around, like ldap and mysql support, anyway. To get more complete control over what was installed,
I started moving away from a list of USE flags in make.conf by doing an

 emerge -evp --columns world >> tmpfile

With that format it was easy to use an OpenOffice spreadsheet: insert from file with fixed-width
delimiters, cut, and then go through the file with emacs and do some search and replace to end up with
the exact format that /etc/portage/package.use employs.
Then, with the help of profuse and simple stuff like

sed -i 's/ ldap/ -ldap/g' tmpfile
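As a quick illustration of the flipping (the sample line here is mine, not from the original post), the same substitution applied to a package.use-style entry:

```shell
# Flip the ldap USE flag to -ldap in a sample package.use line.
echo "dev-libs/cyrus-sasl berkdb ldap ssl" | sed 's/ ldap/ -ldap/g'
# → dev-libs/cyrus-sasl berkdb -ldap ssl
```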

I soon had the USE flag settings I wanted. A nice side effect was that I also discovered the truetype flag, which
endowed me with pwettier apps on my desktop.
While at it, I did a check of my CFLAGS, and since my gcc was now at a sufficiently high level to
interpret -march=core2, I put that in. I also found that /proc/cpuinfo revealed that -msse4.1 would
be possible for my system. Good source of info, this.
Having done all that (which took me quite a bit of time - there's probably a much more efficient way,
I reckon =/), it was time for the good old rebuild-all script (I also had a new linux-headers to
upgrade to):


 PKGLIST="linux-headers glibc binutils-config binutils gcc-config gcc glibc binutils gcc system world"

 for i in $PKGLIST; do
     emerge -q $i
 done

It's probably just my imagination but everything seems prettier and snappier =).

May 13, 2009 :: Sweden

Johannes Gilger

genkernel 3.4.10-r2 with dmraid and hibernate-support

This comes from the I-have-to-pay-my-electricity-bill department and from the learn-something-fun-about-your-system-every-day department. As you know, I’ve got Gentoo on my workstation. When I still lived in my dorm it ran 24/7 (not least to supply everyone with the latest episodes of popular shows). When I moved, I started using a hacked-together suspend-to-ram script, which worked well enough for the last year or so. Seeing how quickly and (seemingly) easily hibernate/suspend-to-disk worked on my new netbook, I decided to once again give it a try on my workstation.
Since my system has one of those stupid fake-raids, I have always needed an initrd to boot the kernel. The initrd calls dmraid, which maps the strange arrays into usable disk devices (don’t ask me). The preferred way of creating an initrd with Gentoo is genkernel, which can be used to build a kernel as well. The only problem was that genkernel still lacks support for suspend (user-space suspend, see Gentoo Bug), while I can’t do without dmraid.

So I fired up git and started hacking on the genkernel-code and now have a system which does suspend-to-disk and dmraid. Having said that, I must stress that you should really know what you’re doing if you’re trying to use this. You could potentially shoot yourself in the foot if for some reason a hibernated image gets loaded after your harddrives have already been mounted. I try to avoid that in my patch by calling suspend_resume immediately after dmraid has been called (and only there). Improvements or constructive flames would be highly appreciated.

May 13, 2009 :: Germany  

Allen Brooker

Return time of last world update

Came up with a short script that returns the last time @world is mentioned in /var/log/emerge.log.

date -d "1970-01-01 `grep @world /var/log/emerge.log | tail -n 1 | sed -e 's/:.*//'` sec" +"%Y-%m-%d %T %z"

Obviously you may want to modify this slightly (for example, to ignore --pretend), but I hope this will give anyone looking to script this sort of information a good starting point.

For people who use the --ask option, you’ll want to use the following, which filters out occasions where “emerge --ask @world” was run but the user then cancelled the actual merge (answered “no”). As you can see, I also split out the command which greps for the timestamp, for easier reading:

TIMESTAMP=`grep @world /var/log/emerge.log -A1 | grep '>>>' | tail -n1 | sed -e 's/:.*//'`
date -d "1970-01-01 ${TIMESTAMP} sec" +"%Y-%m-%d %T %z"

Also note that this version relies on users using set notation (@world instead of just world) - but once 2.2 is out everyone will be doing that anyway (and I think set notation already works in the current stable portage for the built-in sets).
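As an aside on what the sed is extracting: the leading field of every emerge.log line is a Unix timestamp, so GNU date can also render one directly with -d @SECONDS, equivalent to the "1970-01-01 ... sec" idiom above (timestamp here is an arbitrary example):

```shell
# emerge.log lines start with seconds since the epoch; GNU date converts them:
date -u -d @1242216000 +"%Y-%m-%d %T %z"
# → 2009-05-13 12:00:00 +0000
```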

May 13, 2009 :: England

Dion Moult

Ratpoison: an efficient and minimalist WM.

Ratpoison. It sounds like something that kills a rat, and it sure does. Ratpoison is a WM (Window Manager) that runs on Linux and has one purpose: to kill your rat. Here we are referring to that disgusting little lump beside your keyboard that shoots lasers out of its arse.

Ratpoison is a tiling WM, which basically means that windows do not overlap. They tile. An easy way to visualise this is by thinking of a table. A table can have rows and columns split up into as weird a configuration as you can think of, but there is only one value per cell, and that value cannot overlap another cell. You can resize your cells, switch values from one cell to another, and split, remove and merge cells. Not only that, but you can do all this using naught but your keyboard. It’s time to push that rodent away from your computer and appreciate the gazillion keys you already have for inputting information into your computer.

But why, you say, would I enjoy having to design a complex table layout for a simple task? For a number of reasons. Firstly, it’s damned fast. It’s so fast you can split em, switch em, merge em, and focus on what matters most: your work … not moving around windows so that you get a good look at them. Secondly, it uses up all your screen space. No wasting space on window borders, taskbars, panels, etc. Every single bit of your screen is showing useful information, and nothing more. This is often referred to as “efficient use of screen real estate”. Thirdly, it’s a minimal WM. This means it starts up quick, doesn’t have a gazillion dependencies, and is lightweight on your system space and resources.

Take a quick peek at this screenshot to show ratpoison in use (click for full resolution):


Now terminal junkies will feel right at home here. The only time ratpoison doesn’t really play nice is with The GIMP, which has 3 windows. However, splitting your screen into frames that the docks fit nicely into is good enough for most people. It’s just a bit of a hassle.

As you can see, ratpoison is basically shortcut driven. You use the keyboard to do everything: open new applications, close applications, resize windows, move windows … well, pretty much everything.

One thing I didn’t really like about ratpoison was the keybindings. The default ones look as though they were programmed by a drunkard (well, if you visit the official ratpoison website and take a look at how they get their inspiration, you’ll see that they are drunkards). The solution to achieving a wonderful system is to use a combination of .ratpoisonrc settings and xmodmap settings. Here is a nifty guide that should get you started with some usable keybindings.

My RatPoison setup.

I’ll admit that I have not run ratpoison in a while. Recently I’ve hopped on the hip & trendy KDE 4.x bandwagon, and I love to see active development. All the same, the memories I’ve had with ratpoison have always been awesome, and that’s why I’m sharing it here.

Basically, the default ctrl-t to access the commands is stupid imho, because you have to stretch your hand and it hurts. So the first thing I do is remove my Caps Lock key (nobody ever uses it anyway) and change it to an imaginary key called F13. This way all I have to do is shift my left pinky slightly to the left when I want to do something. This makes using RP really fast!

So create a file called .xmodmaprc in your ~ directory, and put this in it:

remove lock = Caps_Lock
keycode 66 = F13

Next thing you want to do is make sure these key changes take effect before you start the X server. So in your .xinitrc file in the ~ directory, before the exec ratpoison line, add this:

xmodmap .xmodmaprc

Now you want to actually configure ratpoison. Create a .ratpoisonrc file in your ~ dir (yes, all these files are hidden with a . prefix). Put this code:

escape F13
bind Next exec amixer -q set PCM 2- unmute
bind Prior exec amixer -q set PCM 2+ unmute
unbind k
bind j focusdown
bind h focusleft
bind k focusup
bind l focusright
bind J exchangedown
bind H exchangeleft
bind K exchangeup
bind L exchangeright
bind C-k delete
exec /usr/bin/rpws init 4 -k
set winname class
defborder 0
defpadding 0 0 0 0
defbarpadding 0 0
bind space exec xterm

Alright, from top to bottom. First I say F13 (the Caps Lock) is now the new special key. Then I set my pagedown and pageup (Next and Prior) keys to control my volume (just because I like it; it’s not compulsory). The next bunch of binds make it so that the Vim keys hjkl move focus to the left/down/up/right windows respectively. Then, say I want to switch the bottom window with the top one: I just do F13 + shift + k, which is basically F13 + capital K. K is the up key in Vim, so it’s very logical and easy to use. I never have to move my hands anywhere on the keyboard when I want to switch windows. When I want to alt-tab, it’s even easier: just double-tap the F13 key! The bind C-k delete line is the shortcut to close a window. The exec rpws line sets up virtual desktops, if you have rpws installed (not sure if it’s there by default), so ctrl-F1 to ctrl-F4 will switch between the 4. set winname class makes the window names something more intelligible than the default. The border and padding settings reduce the space between applications so that 100% of my screen real estate is used. Finally, I use xterm a lot, so I find it easy to just do F13+space to quickly launch it.

There is lots of documentation available on what else you can put in your .ratpoisonrc to configure it further. In the following lines, what I have done is turn my Windows key (Hyper_L) into a special key, so that when combined with another key it launches one of my favourite programs, or even controls my music player! Nifty!

definekey top Hyper_L thisIsNotAWindowsKey
definekey top H-f exec firefox-bin
definekey top H-o exec ooffice
definekey top H-b exec blender
definekey top H-p exec mpc toggle
definekey top H-bracketleft exec mpc next
definekey top H-bracketright exec mpc prev

Don’t forget if you want to try out the commands real time, use F13 (or whatever modifier) + : then type your command that you would use in your .ratpoisonrc. If you want to run a shell command or app, just do F13 + ! then type it in.

Well. Good luck with ratpoison, and I hope you enjoy using it. I know I have!

Related posts:

  1. GIMPup A Webdesign
  2. KDE 4.1.2 in main tree!
  3. A Visual Guide to KDE 4.2

May 13, 2009 :: Malaysia  

Steven Oliver

Lazy Linux

Do you ever get tired of putting a lot of effort into Linux? Get tired of waiting for things to compile? Grow weary of trying to figure out why something fails to compile?

I do. Right now my computer has, of all things, Fedora 10 on it. Why? Not because it’s awesome, that’s for sure. Though if you have to go precompiled, easy to install, and easy to use, it more or less tops the list in my book.

I don’t know. I love Gentoo, and though I haven’t tried it yet, I’m sure the distributional offspring of paludis, Exherbo, is a wonderful setup as well. It’s just that sometimes I don’t care to tinker anymore. I don’t want to configure. I don’t want to setup. I just want to turn it on. Know it will boot. Use it. Then turn it off. Sometimes I don’t even care if GCC is installed. Just let me surf the net.

Yet, at the end of the day, I feel guilty for this. It’s almost as if I have betrayed myself by succumbing to my lazy whims. Linux has become as much a constant work of art for me as it is a toy or even just a tool. I feel compelled not just to use, but to improve. For me “improve” rarely means code. I spend all day at work writing PL/SQL… hardly something that translates into FOSS. I’m currently writing a program in C, but it is slow going. I admire people who write these incredibly complicated programs seemingly in their spare time. I assume that most of the creators and major contributors to paludis have full-time jobs. I know some of them do. I consider being a full-time college student a full-time job too. I work 40 hours a week. I come home tired, ready for a nap. It’s hard for me to fire up the desktop or even open the lid on my Mac and start programming. It’s almost to the point where I’d rather donate money than code myself…

Enjoy the Penguins! (especially if you write them)

May 13, 2009 :: West Virginia, USA  

Brian Carper

Clojure: ASCII Mandelbrot Set

Did you know there's this neat Lisp message board where from time to time someone posts a short problem similar in spirit to the infamous RubyQuiz?

Not a lot of people have participated so far; hopefully that changes. I participated this time. The problem is to render the Mandelbrot Set in ASCII. Here's my Clojure version (based loosely on this one).
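For reference, the escape-time scheme the Clojure code below implements: each screen cell maps to a complex point c, and the sequence is iterated, seeded at c itself rather than 0 (which merely shifts it by one step and does not change membership):

```latex
z_1 = c, \qquad z_{n+1} = z_n^2 + c
```

The cell's character is then chosen by how many iterations pass before |z_n| reaches 2, counting the character code down from 126 toward 32.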

(ns mandelbrot
  (:refer-clojure :exclude [+ * <])
  (:use (clojure.contrib complex-numbers)
        (clojure.contrib.generic [arithmetic :only [+ *]]
                                 [comparison :only [<]]
                                 [math-functions :only [abs]])))

(defn- mandelbrot-seq [x y]
  (let [z (complex x y)]
    (iterate #(+ z (* % %)) z)))

(defn- mandelbrot-char [x y]
  (loop [c 126
         m (mandelbrot-seq x y)]
    (if (and (< (abs (first m)) 2)
             (> c 32))
      (recur (dec c) (rest m))
      (char c))))

(defn- mandelbrot-line [xs y]
  (apply str (map #(mandelbrot-char % y) xs)))

(defn- m-range [min max num-steps]
  (range min
         max
         (/ (+ (abs min)
               (abs max))
            num-steps)))

(defn mandelbrot [rmin rmax imin imax]
  (let [rows 30
        cols 50
        xs (m-range rmin rmax cols)
        ys (m-range imin imax rows)]
    (dorun (map #(println (mandelbrot-line xs %)) ys))))

  ;Example run:
  (mandelbrot -2.0 1.0 -1.5 1.5)
~~~}}|||||||||||||{{{{{{zzyyws   .vyzzz{{|||||}}}}
~~~}||||||||||||{{{{{zzxwwwvus   muvxyywz{|||||}}}
~~}|||||||||||{{{zzzzyyu= p         oteqpz{|||||}}
~~||||||||||{zzzzzzyyyvtm              oxz{{|||||}
~}|||||{{{zyvwxxxxxxxwrG                vuz{|||||}
~||{{{{{zzzywsMsqRovvs                  pxz{{|||||
~|{{{{{zzzyxsq      pj                  `xz{{|||||
~{{{{yyyxwsrp                           wyz{{|||||
~?:3 3 #                              ovxzz{{|||||
~{{{{yyyxwsrp                           wyz{{|||||
~|{{{{{zzzyxsq      pj                  `xz{{|||||
~||{{{{{zzzywsMsqRovvs                  pxz{{|||||
~}|||||{{{zyvwxxxxxxxwrG                vuz{|||||}
~~||||||||||{zzzzzzyyyvtm              oxz{{|||||}
~~}|||||||||||{{{zzzzyyu= p         oteqpz{|||||}}
~~~}||||||||||||{{{{{zzxwwwvus   muvxyywz{|||||}}}
~~~}}|||||||||||||{{{{{{zzyyws   .vyzzz{{|||||}}}}

And here's some obligatory Jonathan Coulton.

May 13, 2009 :: Pennsylvania, USA  

May 12, 2009

Lars Strojny

An introduction to Domain Driven Design

My colleagues asked me to write down some documentation on the basic concepts of Domain Driven Design. Why not make a blog post out of it?

Domain Driven Design is all about the domain. The premise is that what we call the model is a model of the real-world process we are going to implement (the “domain”). DDD focuses on the domain of the problem, not on data, not on functions, not on control structures (although all of this stuff is used to implement it). Object technology fits DDD pretty well, as it allows us to mirror reality: we can express behavior and state together (in an object).

A short glossary

Domain Layer

The layer in the application where the domain is expressed in terms of objects.

Domain Objects

All the objects in the domain layer

Entity

Entities are those domain objects that are equal by identity, as they express a specific state of a specific entity in the system. Examples are a customer or a purchase.

Value Object

The opposite of entities in the domain layer: objects that are equal because of equal values, not because they are identical. A money object is a typical value object, and so is an address.

Aggregate

An aggregate is an object graph in the domain layer consisting of entity and value objects. A customer has a number of addresses, and an order has a money value object.

Aggregate root

The top level object in an aggregate. In the customer example, the customer is the aggregate root and all the other objects are aggregations to the customer.

Repository

The repository acts as a Facade over the ORM components of a system. In DDD we focus only on the domain; we don’t care about ORM, we stay ignorant of it. The repository allows us to be ignorant, as it provides a simple, collection-like interface to the user. Think of the repository as a factory for persisted objects, with a collection interface, acting as a facade to keep away all the sad details of ORM.

Ubiquitous Language

At the beginning everything is messy: you think that girl is stupid, she thinks you are a quirky nerd with strange hobbies and even stranger friends. A few dates later you both find a ubiquitous language which allows you to communicate efficiently. DDD encourages the team to find a set of terms for describing the system that is modeled after the language of the domain. That way, the knowledge gathered by the development team about the domain is built directly into the system’s core.

May 12, 2009

Nikos Roussos

2nd fosscomm

gentoo presentation

back in athens after a great weekend at larisa, where the 2nd fosscomm (greek foss communities conference) took place.

most presentations were of great interest, and workshops were the new addition to this year's event. i had the chance to participate in two presentations. on saturday, along with kranidiotis, we talked about the role that hellug can play inside the greek foss community, and on sunday, along with tampakrap, we gave a brief presentation of the gentoo linux project and the gentoo greek community.

additionally, the greek gentoo community organized a workshop on the gentoo installation process, with wired leading it and the rest of us helping the participants (who were unexpectedly many).

but i think the best thing about these kinds of events is the fact that we get the chance to meet all the greek free software hackers again and talk (or flame :P)

you can see my photo set from fosscomm on my flickr here.

May 12, 2009 :: Athens, Greece

George Kargiotakis

mysql not starting

comzeradd sent me an e-mail about a mysql service not starting on a server we administer. I started taking a look around… nothing seemed suspicious.
I tried uninstalling and re-installing mysql-server-5.0 a few times, and I always got this kind of output from apt-get:

Stopping MySQL database server: mysqld.
Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!
invoke-rc.d: initscript mysql, action “start” failed.
dpkg: error processing mysql-server-5.0 (--configure):
subprocess post-installation script returned error exit status 1
Errors were encountered while processing:
Reading package lists… Done
Building dependency tree
Reading state information… Done
Reading extended state information
Initializing package states… Done
Reading task descriptions… Done

Reconfiguring the package just prompted me to input a new root password; it still could not start. Here’s what the syslog output looked like:

May 12 16:19:56 foo mysqld_safe[24776]: started
May 12 16:19:56 foo mysqld[24779]: 090512 16:19:56 InnoDB: Started; log sequence number 0 43655
May 12 16:19:56 foo mysqld[24779]: 090512 16:19:56 [ERROR] Can’t start server: Bind on TCP/IP port: Cannot assign requested addr
May 12 16:19:56 foo mysqld[24779]: 090512 16:19:56 [ERROR] Do you already have another mysqld server running on port: 3306 ?
May 12 16:19:56 foo mysqld[24779]: 090512 16:19:56 [ERROR] Aborting
May 12 16:19:56 foo mysqld[24779]:
May 12 16:19:56 foo mysqld[24779]: 090512 16:19:56 InnoDB: Starting shutdown…
May 12 16:19:58 foo mysqld[24779]: 090512 16:19:58 InnoDB: Shutdown completed; log sequence number 0 43655
May 12 16:19:58 foo mysqld[24779]: 090512 16:19:58 [Note] /usr/sbin/mysqld: Shutdown complete
May 12 16:19:58 foo mysqld[24779]:
May 12 16:19:58 foo mysqld_safe[24816]: ended

But there was no other mysql server running. I then typed ifconfig and here’s the output:

lo Link encap:Local Loopback
LOOPBACK MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

The loopback did not have an IP address!!!
I looked inside mysql’s my.cnf and mysqld had

bind-address =

The command:
ifconfig lo
fixed the problem :)
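A minimal sketch of that diagnosis, assuming iproute2's `ip` tool is available; the actual fix needs root, so those commands are shown commented out:

```shell
# mysqld binds to the loopback by default (bind-address,
# so a loopback interface with no address produces
# "Bind on TCP/IP port: Cannot assign requested address"
# even though nothing is listening on port 3306.
ip addr show lo | grep inet || echo "loopback has no address!"
# The fix (as root) is to give lo its standard address back, e.g.:
#   ifconfig lo up
# or, with iproute2:
#   ip addr add dev lo && ip link set lo up
```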

May 12, 2009 :: Greece  

Patrick Nagel

Fresh vs. rotten ext3

Did you ever hear sentences like “Linux/Unix filesystems are superior to stuff like NTFS, let alone FAT32 - you don’t even need a defragmentation tool”?

That statement may be technically correct, since fragmentation is really rare with ext3 - but what about spatial locality of reference? Files that are often accessed at nearly the same time end up spread over the whole disk, causing long access times due to head positioning.
There is (AFAIK) no way to programmatically optimise / sort a Linux filesystem so that, for example, all init scripts, binaries, libraries, etc. reside in nearby sectors on the harddisk. Such tools do exist in the Windows world (built into those 3rd-party defragmenters).

The only way to do this optimisation is “by hand”: make a backup, mkfs and restore the backup - but that’s something you only do when you have a lot of time, or when I/O has finally become painfully slow.

One factor that probably plays an important role is that I usually have only one partition (+swap) that contains everything - etc, usr, usr/portage, var, home, …
With such a setup, only a few months of updating the system, downloading stuff from the net, copying pictures from the DSLR to the harddisk, updating the system again, etc. lead to all kinds of files being spread throughout the whole disk. Imagine some init scripts and the binaries they call still sitting at the “beginning” of the disk, and others, due to updates, at the “end”. And that’s what makes I/O slow… see for yourself:

From loading the kernel to KDM being ready for login, on a months-old ext3 partition vs. a freshly restored one [videos not included]

(sorry for the poor video quality)

That is an ~80% performance decrease.

How I got the videos:

  1. Capture startup of my system stored on a months old ext3 partition
  2. Backup (with dar)
  3. mkfs.ext3
  4. Restore
  5. Capture startup of my system on a freshly made ext3 partition
  6. Trim the videos so that frame 1 is the first frame on which kernel output is visible and the last frame is the first frame on which KDM shows the input widgets

I’m hoping this situation will improve with ext4 - I heard online defragmentation will be possible at some point, and that probably also makes “sorting” the filesystem possible.

May 12, 2009 :: Shanghai, China  

George Kargiotakis

Fosscomm 2009

After an amazing weekend in Larisa I am back in Thessaloniki. Fosscomm 2009 was very well organized, and the people of the Linux Team of TEI Larisas deserve many congratulations for their effort.

They had booked a hotel for us, and it was great to have 60+ people, who all more or less know each other, staying in the same place. It felt like a school trip! They had also printed t-shirts (I managed to grab one), badges and various other things. Since attendance was high - I reckon there were more than 200 people on Saturday alone - it would be very interesting to see the answers to the event evaluation questionnaire they had at their stand for anyone who wanted to fill it in.

Outside the very well equipped amphitheater there were stands with printed material and CDs/DVDs from various communities. Foss.Ntua was there, along with EELLAK, HELLUG, the Greek Fedora community and the Greek FreeBSD community. Naturally the Linux Team of TEI Larisas was present with its own stand. The atmosphere was generally very good, and I met many friends and acquaintances I had not seen for months.

I think the presentations were generally good - I would dare say a level above last year's Fosscomm, which was held at the Metsovio - but most of all I enjoyed the workshops, which were completely absent last year. Congratulations to everyone who worked to organize them, because a good workshop is much harder to pull off than a simple presentation… Unfortunately there were again some talk cancellations this year; I hope things run more smoothly next time :)

Although I did not attend all the presentations, since I preferred to spend some extra time with friends and acquaintances in the hallways, from what I saw I was impressed by the work done at the Hellenic Air Force Academy by Mr. Antonios Andreatos. I am sure many professors at our own universities, even in computer science departments, would be ashamed if they saw the progress the Academy has made in using and spreading free software. I also greatly enjoyed the Android presentation by Kostas Polychronis. It made me want the HTC Magic even more - it positively scorched me. Of course I could not leave out the Gentoo workshop and presentation, Gentoo being my favourite distribution, both given by friends. The Xen workshop, by the open source community of the University of Piraeus, also got me thinking about starting to experiment with Xen again. Last year (2007-2008) that community gave us (Fuzz and me) the honour of opening a series of events/presentations that followed, and they even awarded iloog the prize for best Greek open source project of 2008 at last year's DTE!

I am also very happy that quite a few people asked me what is happening with iloog and whether a new release is coming. They gave me a push to keep working on it, and I promise to put out a release within the next few months. I naturally hope some others will help with that… if they still feel like it (I am squinting at Fuzz and comzeradd, in case they have not noticed)…

The highlight of the weekend, though, was our night out on Saturday. The TEI Larisas folks came and picked us up from the hotel, and a huge crowd of 50+ people walked across Larisa to end up at a tsipouro tavern. The photos, which I imagine will start being published in a few days, will testify to what happened there :D

What I liked:
a) The organization
b) Some of the presentations, and especially the workshops
c) Seeing friends and acquaintances

What I didn't like:
a) That fewer Athenians came this year…
b) That some presentations were cancelled for reasons that were never announced (I am obviously not referring to those who had a genuine emergency…)

What I would like for next year:
a) Instead of 10 community presentations, I would prefer a single one-hour session with representatives of all the communities on a panel, presenting what their members are up to. Each community does not need a separate talk to tell us how many members it has on its forum/mailing lists/etc. That can be done by everyone together…
b) More projects from the communities. Personally I am far more interested in seeing that someone from community X started something and the others helped out than in hearing that 1500 new members signed up on their forum.
c) Better presentation not so much of each community's events but of how they were organized. That is what is mainly missing - organization - and that is where most communities (and associations) need help.
d) Even more workshops.

The Linux Team of TEI Larisas raised the bar very high, but I hope the next Fosscomm, Fosscomm 2010, wherever it takes place, will be even better!

Congratulations once again :)

May 12, 2009 :: Greece  

upgrading a gentoo box that hasn’t been upgraded since 2007

I was given root today on a gentoo box that nobody had upgraded since 2007. As expected, “emerge --sync; emerge -uDavt world” showed a lot of blockers.

I tried to solve each one but got stuck while trying to upgrade portage. In order to upgrade portage I had to upgrade sandbox, but sandbox couldn’t be upgraded correctly due to portage being unable to handle .tar.lzma files.
The box had sandbox-1.2 installed and was unable to upgrade to sandbox-1.6. The error was:

unpack sandbox-1.6.tar.lzma: file format not recognized. Ignoring.

Upgrading lzma-utils, tar and a few other packages did not work. In the end I edited sandbox-1.6-r2.ebuild and changed the src_unpack function from:

src_unpack() {
	unpack ${A}
	cd "${S}"
	epatch "${FILESDIR}"/${P}-disable-qa-static.patch
	epatch "${FILESDIR}"/${P}-disable-pthread.patch
	epatch "${FILESDIR}"/0001-libsandbox-handle-more-at-functions.patch
}

to:

src_unpack() {
	unpack ${A}
	cd /var/tmp/portage/sys-apps/sandbox-1.6-r2/
	tar --lzma -xvf sandbox-1.6.tar.lzma
	mv sandbox-1.6/ work/
	cd "${S}"
	epatch "${FILESDIR}"/${P}-disable-qa-static.patch
	epatch "${FILESDIR}"/${P}-disable-pthread.patch
	epatch "${FILESDIR}"/0001-libsandbox-handle-more-at-functions.patch
}

and then regenerated the manifest:

cd /usr/portage/sys-apps/sandbox/; ebuild sandbox-1.6-r2.ebuild manifest

After this edit, sandbox emerged properly, so portage emerged properly too. Everything else worked as expected…

May 12, 2009 :: Greece  

Brian Carper

Microsoft, you still surprise me

I use Windows XP at work (not by choice) and I've been continually saying "no" when it tried to install SP3. Why? No tangible reason other than that decades of experience with Windows has shown me that any time you touch any system files or settings in Windows, crap breaks. When it comes to Windows, you set things up and then like a teetering house of playing cards, you back away slowly and try not to breathe.

Which brings us to the other day. I first noticed something was up when I got a popup dialog on my work machine asking me every 15 minutes whether I wanted to Reboot Now or Reboot Later. Confused, I clicked "later", but again and again and again this prompt appeared. After hours of this interrupting my futile attempts at work, I relented; I laboriously shut down my half-dozen command prompts and carefully-placed Vim sessions and various server daemons and all the tools I'd get to look forward to re-opening after Yet Another Unnecessary Reboot, and then I rebooted.

So then XP left me alone and all was well with the world. Ha, just kidding, it started doing the same thing again almost immediately. Reboot Now or Reboot Later? I hatefully tolerated this for as long as I could but it was a futile battle. Microsoft won in the end and I rebooted again.

A few other people at work reported the same thing on their systems, so I thought maybe it was a virus, but I checked a few things and noticed a shiny new SP3 installed on my system (so my initial guess was close). Somehow SP3 was forced onto my machine, not sure if it was the sysadmins pushing it out or Microsoft's doing, but either way: why was it possible to install a Service Pack on my machine without my even being aware it happened? I do not consider this a good thing.

In any case, after the second reboot, strange things happened. My taskbar settings were all reverted to defaults and I noticed my Address Bar was missing. The Address Bar is a little URL/file path bar in the taskbar where you can type a file path and open an Explorer window quickly. One of the very few semi-useful bits of the XP interface.

But it was gone. What happened? A short Google later and I learned that Microsoft removed the feature in SP3 permanently, by design. Why? Because of anti-monopoly regulatory concerns.

Wow. So it turns out my cynicism wasn't misplaced, and a few dozen cards toppled from the shaky tower as I watched, helpless. Not the end of the world, but what an annoyance.

The reason I bothered blogging this is because, hilariously enough, you can still add the Address Bar back in SP3. As I read somewhere or other, probably here, you simply 1) Drag a "My Computer" icon to the top of the screen to make a useless "My Computer" toolbar, 2) Right click that and add the Address Bar, which is still an option there, 3) Drag that Address Bar to your main taskbar, 4) Remove the useless toolbar from above. And then you have your Address Bar back. Oops!

So, in summary:

  1. Two forced reboots via 20 repeated un-ignorable popup prompts.
  2. Service Pack installed without my knowledge or consent.
  3. Useful piece of functionality removed.
  4. Item 3 caused by a history of monopolistic business practices and the resulting legal fallout.
  5. Functionality in question removed so incompetently that it can be added back anyways in a matter of seconds.
  6. Another hour of my life sucked into the black hole of the Microsoft Windows User Experience™, forever lost.

May 12, 2009 :: Pennsylvania, USA  

Iain Buchanan

The results are in: It's Apathy by a landslide!

Thanks to everyone who took part in my recent poll "What would you like me to post more about?" That is, all 6 of you, including myself. The results are as follows:

3 (50%) IT related technical articles
3 (50%) Linux howto's, tips & tricks
3 (50%) Renewable energy power station bio's
2 (33%) Reviews of my electronics (phone, set top box, espresso machine, etc)
2 (33%) Random thoughts & musings on anything
1 (16%) Personal & Family events
0 (0%) Dell Precision M6300 howto's for running Linux
0 (0%) I can't stand reading anything you write!

The poll ran for most of April 2009, and voters could select multiple entries.


So what? Well, there were nowhere near enough votes for me to make any drastic changes. Most of my visits are for the ever-popular Vmware keyboard page. I can safely say that the results can be completely ignored!

If you don't agree, comment!

May 12, 2009 :: Australia  

May 11, 2009

Ciaran McCreesh

Introducing e4r, a Script to Make Vim Useless

As some of you may be aware, there exist a few dark heretics who have yet to embrace the One True Editor and instead go around peddling their evil ways (via m-x peddle-evil-ways) or their primitive superstitions (yes, ^O to save makes perfect sense!).

Although Exherbo would ordinarily go out of its way to smite those wicked sinners, unfortunately it seems that some of them are so set in their ways that even removing their satanic instruments from the basic install will not quell their iniquity. Thus, in a rare display of compromise that risks turning Exherbo into… uh, no, wait, I can’t think of any distributions capable of sensible compromises. Anyway:

e4r is a small script that turns Vim into a very crude, minimally functional non-modal editor that approximately resembles Nano. It has no dependencies other than Vim, and is extremely tiny, making it practical to include it in stages for people whose time isn’t valuable enough for them to learn how to use an effective text editor.

Some screenshots:

Editing a file using e4r

Editing a file using e4r

Loading a file with e4r

Loading a file with e4r

e4r Help Menu

e4r Help Menu

Git format-patches welcome.

Posted in exherbo Tagged: e4r, exherbo, vim

May 11, 2009

Jürgen Geuter

Why DVCS? I'm just working on this little thingy here ...

DVCS are all the rage lately, especially with huge projects like GNOME moving to git and Python to Mercurial, but many people are still hesitant to switch: "Why should I learn this new tool? SVN is good enough for my small projects."

This sounds like a valid assumption: on your own stuff you probably don't get a lot of merge requests from people working independently of you on bigger feature changes. You don't have to deal with huge patches and painful merging, because the biggest patches you get sent are 2 lines of code. So why should you consider a DVCS?

Because you don't know the future.

Every big project started small, with maybe just one or two people working on something. SVN was enough, it was free and easy enough to set up. And there was much rejoicing. But sometimes projects gain momentum, and sometimes that happens really fast: an article on a big news site mentioning your project might lead to a crowd of people wanting to hack around on your code. But you might not want to give all those strangers commit access to the main repository, of course. They need to prove themselves first, right? And after some time you'll realize that SVN sucks and that a DVCS could really help you with your maintenance: no longer do you have to merge huge patches; you can just pull feature branches where all changes are properly done in fine-grained commits. Of course you can try switching to a DVCS when the need emerges, but do you really want to do a migration that fundamental later, when so many people are involved?

The other important aspect is that one day you'll go away. It doesn't have to be dying, though: you might lose interest in the project you started, you might no longer have the time, or any of a million different reasons. You will no longer work on the project. You might also drop an old email address connected to the project, which renders you pretty much unreachable. You're internet-dead. Now people can get the code from SourceForge or download the last snapshot from wherever you hosted it, but if they want to continue the project, they are barred from contributing. They could just start fresh from the latest tarball and lose all history, of course, but that's a bad solution, because the history of the code is important (not just for licensing reasons). If you had used a DVCS, people could just clone your repository and continue, keeping the full history of the code with every bit of it properly attributed to its author.

DVCS are an investment in the future, an investment that might force you to learn something new now, but one that will pay off later. Maybe your server will just crash and someone else cloned your repository and therefore has a full backup of it. And maybe in two years someone can just continue where you left the project. Think two steps ahead and learn a DVCS; you won't regret it.
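The "clone and carry on" point is easy to demonstrate with git itself. A minimal sketch, played out against a throwaway repository under /tmp (all names and paths here are made up for illustration):

```shell
# Set up a tiny "original maintainer" repository.
set -e
rm -rf /tmp/dvcs-demo && mkdir -p /tmp/dvcs-demo/original
cd /tmp/dvcs-demo/original
git init -q
git config user.name "Original Author"
git config user.email "author@example.org"
echo 'hello' > README
git add README
git commit -q -m "initial import"
# A would-be successor clones the repository; every commit, with its
# original authorship, comes along for free:
cd /tmp/dvcs-demo
git clone -q original continued
git -C continued log --format='%an: %s'   # prints "Original Author: initial import"
```

With SVN, by contrast, only the central server holds the history; once it vanishes, so does the attribution.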

May 11, 2009 :: Germany  

Dion Moult

How do you use your desktop?

Imagine a computer system that was semantic. For those unfamiliar with the concept, this means having your computer understand you as a human would. This is often easier to explain through examples: when you click that spot on the screen, it’s because you want to achieve something. The computer understands what you are trying to achieve and thus will do it for you. What we have now is “this is how I work - use me”.
There are many ways in which people are trying to achieve this semantic desktop. Two examples off the top of my head are 1) Nepomuk and Strigi and 2) the 3D desktop.

Let’s first look at Nepomuk and Strigi. These are two technologies used by the K Desktop Environment (excuse any technical misunderstandings), which from what I understand are meant to store a wealth of “meta-info” about all your stored data - be it your email, contact lists, favourites, essays, presentations, music, images, etc. They will turn it from being stored as data into being stored as information. I’m then meant to be able to find/sort/store it much more easily than before. Must be heaven when trying to find that ages-old note-to-self I wrote.

The second example is the 3D desktop. A concept that I myself am trying to spread is that your desktop is…well, a desktop. You keep what you’ve been recently working on and what you’re currently working on…on your desktop. Your desktop is where you dump your stuff in-between sorting it, and where you leave stuff piled after a long day’s work. It is where it is easy both to access stuff and to dispose of stuff.

Oh really.

I don’t think it’s working so far. Nepomuk/Strigi has never once shown me anything useful. I store my own files the way I want to. Microsoft and Apple both categorise things for you (well, Microsoft tries) in their own structures, whilst the Linux filesystem is…organised chaos.

KDE was meant to have revolutionised the desktop. I may not know the advances in the system’s Plasma backend and such, but whatever happened, I’m just not quite seeing it. The concept of plasmoids on the desktop itself (yes, on panels they are very useful) might sound good, but it is utterly impractical. The main reasons I find for this are:

They are inaccessible.

Even with show-desktop/show-plasma-dashboard, they are still very limited in function. The folder view plasmoid just shows a folder, then allows me to open files in the folder or open subdirectories in Dolphin. I can’t do my actual file sorting with the plasmoid. The quicklaunch plasmoid is heaps better, but very small.

They replicate functionality.

We have the folder view and Dolphin (not to mention Konqueror) - all browse files. We have the calculator plasmoid, but what use is that when I have my nifty alt-f2 calculator embedded in KRunner? The media player plasmoid - which is easier, tapping a shortcut or showing my desktop and then pausing/playing/etc? Analog clock? I have my good ‘ol digital clock in the bottom right corner. Web browser plasmoid? Seriously. Blue marble, ball, binary clock, Conway’s Game of Life? Useful? I think not.

So, the question is, how do you use your desktop? (if in KDE, this includes plasma - if not, then just in terms of file organisation?)

(in unrelated news, my blog now uses Slimbox for displaying images, so there is increased sexiness when you click on them!)

Related posts:

  1. kde-crazy: KDE Devs on Steroids!
  2. KDE 4.1.2 in main tree!
  3. Conquering Konqueror

May 11, 2009 :: Malaysia  

Allen Brooker

Gentoo Projects Status Reports

The Gentoo developers are doing one of their regular-ish status report runs. I’ve summarised the reports at

This is a great way to get an idea of what different development teams are currently working on and where they need help.

Direct links to the original thread and project homepages can be found on the above mentioned article. There’s a forum thread at

May 11, 2009 :: England

May 10, 2009

Jürgen Geuter

man, info pages suck. No more.

When working with GNU tools, you will probably have stumbled on a note like this one while browsing a man page:
The full documentation for sed is maintained as a Texinfo manual. If the info and sed programs are properly installed at your site, the command

      info sed

should give you access to the complete manual.

You enter the "info" command and get something like this:

You quickly realize that info pages are more complex than man pages, because you just get an overview page that seems to link to subpages. You try a few keypresses, fail to get the thing to do what you want, and check whether your search engine of choice can help you. "info pages suck" is something you'll hear a lot.

Well, someone fixed the problem with a program called "pinfo". Instead of calling "info sed" you use "pinfo sed". You'll be greeted by an ncurses interface with colors (if your terminal supports them). Links are automatically highlighted when hitting the up or down arrows. You follow a link with the right arrow and go back with the left arrow:

Suddenly info pages are not worse but in fact better than man pages, because they offer more structure and more in-depth information. Have fun with info pages!

May 10, 2009 :: Germany  

Bryan Østergaard

KVM images

I've uploaded new KVM images based on the 20090504 stages a day or two ago. The images are available at as usual.

But more interestingly I've now made the script used to build the images available so you can build new images yourself whenever you like. The script is available in the scripts/ directory of the exherbo repository.

All you need to build your own image is a few prerequisites and this script. The script requires kvm (for kvm-image), parted (used to manipulate the partition table) and sfdisk (used to get some partition table information) to be installed.

"paludis --install kvm parted util-linux" will ensure you have all the needed prerequisites. After installing those all you need is to specify a few options to the script and everything should be automatic from there on.

The script describes the available options and their defaults when passed -h or --help.

# ./create-kvm-image --help
Usage: create-kvm-image [OPTIONS]
    --arch=amd64|x86                Target architecture for image file
    --kernelversion=<version>       Kernel version to be used in image
    --stageversion=<date>           Date of tarball, for example 20090504 or current
    --kvmtmpdir=/path/to/image      Where to build the image file. Defaults to /tmp/kvm-tmp/
    --kvmtmpkernel=/path/to/kernel  Where to build the kernel. Defaults to /rootfs
    --kvmimagename=/path/to/image   Image filename (including path). Defaults to /exherbo-x86_64.img
    --kvmimagesize=<size>           Size of image file in gigabytes. Defaults to 6G
    --jobs=<n>                      Number of make jobs when building the kernel. Defaults to 4

If you're satisfied with the defaults all you need to specify are kernel version, stage tarball version and architecture. Which gives you a command like ./create-kvm-image --kernelversion= --stageversion=20090504 --arch=amd64

And a few minutes later (takes about 5 minutes on my quad-core Core2 box) you'll have a brand new KVM image called exherbo-x86_64.img in /tmp/kvm-tmp/. Please note that we don't support cross compiling yet so you'll have to specify the same target architecture as your host architecture for now.

And as always, I welcome git format-patches to add support for other image types (virtualbox, vmware, ..) and other features.

May 10, 2009

Dan Fego

Wireless Security

I thought I’d share this slightly humorous, slightly telling anecdote. I’ll try to keep it brief.

I just moved into a brand new apartment. Unfortunately, my wired internet isn’t going to be installed for another week and a half. Naturally, I turn to wireless (other people’s wireless, that is). So I do a quick scan to check out what’s around, and to my surprise, all the networks (minus the municipal one, which doesn’t seem to work) had some kind of security, at least WEP.

After making sure that none of the networks were open, I busted out airodump, scanned, and saw only one network with any traffic going over it. This was necessary to get some packets so I could crack the key. I spent 54 minutes and 52 seconds (well, my computer did) sniffing enough packets to break the encryption. Turns out 367,366 IVs did it in this case. In any case, I come over to the computer with glee, seeing the network was cracked, and what do I see?



That’s right, the key was found! And it was… 12:34:56:78:9A. Seriously? I sat there for a minute laughing and actually thinking that couldn’t be it. I mean, that’s the equivalent of “password” as a password. I tentatively try to connect with my newly-found WEP key and without a delay, I was connected to the network. Wow.

Lesson learned: try out simple WEP keys before going through the effort of cracking the network. You just might get lucky. I mean, if the person is using WEP anyway, they probably don’t know all that much about security.

May 10, 2009 :: USA  

May 9, 2009

Brian Carper

Git tutorial

Finally I found a good Git tutorial that starts from the absolute basics and goes steadily through more advanced things. I highly recommend it.

May 9, 2009 :: Pennsylvania, USA  

Kevin Bowling

El Reg Humor and Java in free software

The Register has a good article on Sphinx search with some entertaining pot-shots at Java and “enterprise software” that got a rise out of me:

Solr is popular with the enterprise crowd, who love its Java. Being a Java program, Solr includes no shortage of technology whose acronyms contain the letters J and X.

This tickles the enterprise pink, because these sorts of developers love nothing more than hanging out around a whiteboard drawing boxes and arrows and, from time to time, writing XML to make it look like they’re doing real work. Solr thrives in this environment, being an Apache Foundation project, the Apache Foundation, of course, widely known as a cruel experiment to see what happens when bureaucrats do open source.

Having a bit of experience with Java from academia and a few open source projects I make use of, I can’t help but laugh at how comically and concisely the editor summed it up.

By and large, successful open source projects tend to be written in languages other than Java. The entire GNU/Linux OS stack is primarily C, with some components like KDE, OpenOffice and Firefox using C++. On the ever-popular web front, PHP, Ruby, and Python lead the pack.

I think it turned out this way for a multitude of reasons. When working on the OS stack, the power and control of C and C++ are hard to beat. The plethora of libraries and the raw speed of these compiled languages set the bar high for any newcomers. Java exists as a kludge, mildly useful for desktop apps and mildly useful for web apps, while historically having a lot of problems. The lack of a native look and feel has long been the layman’s complaint, though SWT has done a pretty good job there. Of course, omnipresent Java in the Linux world is relatively new. I think Java would have been the darling language of client apps had it been open-sourced sooner, but that came about 7 years too late to have a large impact on shaping the common FOSS userland.

It is interesting how the open source projects built with Java tend to be highly bureaucratic and abstract. I think the bottom line is that FOSS programmers do what they do because it is fun, and they demand pragmatism. The “enterprise software” attitude and baggage that many Java apps and libraries carry is a big turn-off to pragmatism and the hacking culture. The barrier to entry for Java web programming is also much higher than for its “scripting language” competitors, which carry light and simple frameworks that focus on results, not procedure.

Java itself isn’t that bad a language. I actually enjoy working with it in school (…though I think it really isn’t appropriate as an introductory teaching language, since it shields important concepts from students. Maybe a future post?..). When it comes time for real work, though, I consider Python, C, or C++ more pragmatic depending on the job at hand. That, and the fact that most of the common scripting languages are gaining JIT compilers, may accelerate Java’s slide toward legacy-language status.

Your thoughts?


Related posts:

  1. One Small Step for QT, One Giant Leap for Free Software QT Software, under the graces of Nokia, has released the...
  2. My thoughts on software and complexity My thoughts on the growth of the Linux kernel and...
  3. Bulletproof your server to survive Digg/Slashdot implementing scale up for web 2.0 sites with current practices...

May 9, 2009

May 8, 2009

Kevin Bowling

To users that miss xorg.conf and complain about it

I get requests from users and see questions all the time asking “where did my xorg.conf go in the latest Ubuntu or Fedora?”, though it is usually phrased as a bit more of a flame.

The quick answer: press Ctrl+Alt+F2 or similar to log into a TTY console, or type 'init 3' into a root X terminal.

If you haven’t already, log in as root and kill X, or type 'init 3' if you want to be heavy-handed.  Then run:

X -configure
mv ~/xorg.conf.new /etc/X11/xorg.conf

A fresh xorg.conf in two commands.  Run 'init 5' to get back to your GUI login (or kdm, gdm, startx, etc. if you know what you are doing; worst case, remove the .conf and restart).

If you are advanced enough to edit an xorg.conf, the above should be a cakewalk and you shouldn’t complain about it.

Regardless, you should investigate ‘xrandr’ which makes it simple to do runtime adjustments.
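A few typical xrandr invocations look something like this (the output names VGA-1 and LVDS-1 are placeholders; run plain xrandr first to see what your hardware actually calls them):

```shell
# List connected outputs and the modes each one supports
xrandr

# Set a specific resolution on one output (VGA-1 is a placeholder name)
xrandr --output VGA-1 --mode 1280x1024

# Extend the desktop onto a second display, right of the laptop panel
xrandr --output VGA-1 --right-of LVDS-1
```

All of this happens at runtime with no xorg.conf editing and no X restart, which is exactly the point.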

If you are a newbie, look into a GUI.  KDE has KRandRTray, which makes controlling outputs and resolutions a breeze.  Don’t forget to toggle the output on with the Fn key if you are a laptop user.

Needless to say, Xorg is moving in the right direction.  Stop complaining about it.



May 8, 2009

Brian Carper

Crackberry Acquired

All I ever wanted out of life was to SSH to my computer from a cell phone. That dream has finally come true.

Up to this point I have not owned a cell phone. I bought one a few years back, then I returned it and got a refund because it was pointless. Communicating with other human beings via spoken voice? How trite. My current employer gave me a phone for free but I never used it.

But nowadays cell phones are pretty much mini computers that happen to be able to make phone calls as a side effect. I almost got an iPhone, but I am very wary about hype. Apple's business practices turn me off; the app store is a shystering waiting to happen, their crappy proprietariness makes me puke, their overblown marketing and "image" makes me puke even more. I don't want an MP3 player in my phone; my Cowon D2 is far superior to any silly iPod. And as I tried the touch screen keyboard, I quickly realized that the Blackberry's physical keys win in that category by a mile.

So I got a Blackberry Bold and I'm pretty happy with it so far. I have yet to make a single phone call, but I've put it to good use. I installed all kinds of silly stuff on there, including an SSH client so I can do system maintenance while driving. (Not really, don't worry.) I can look at Google maps when I get lost, which happens embarrassingly often in my car. I can look at Slashdot from the sushi restaurant. I can get the weather updated every 15 minutes, which saves me from rotating my head 25 degrees and looking out the window.

I still object to certain cell phone things on principle. Paying $3 for a 15-second song clip as a ring tone for example; the insanity of this is almost physically painful to me. The Blackberry let me set any old MP3 I wanted as the ring tone though, which is nice.

Paying for text messages is almost as painful. How can it cost a quarter to send 160 bytes of text to another phone, when the whole freaking internet costs orders of magnitude less? How do cell phone companies get away with this? It's such a racket. But I can put IM clients on my phone and use email and I have "unlimited" data transfer each month, so that's nice. (And I really grilled the salesperson about what "unlimited" means. She said some people go into the gigabytes of transfer each month without consequence, so it looks like I need to find a torrent client now!)
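To put a number on that rant, here is the back-of-the-envelope math using the post's own figures ($0.25 per message, 160 bytes of text):

```python
# What SMS pricing implies per megabyte, using the figures from the post:
# $0.25 per message, 160 bytes of text per message.
sms_price = 0.25                      # dollars per SMS
sms_bytes = 160                       # max payload of one message
per_byte = sms_price / sms_bytes      # dollars per byte
per_mb = per_byte * 1024 * 1024       # dollars per megabyte
print(f"${per_mb:,.2f} per MB")       # -> $1,638.40 per MB
```

Over sixteen hundred dollars per megabyte, which is indeed orders of magnitude above what the same bytes cost on the data plan.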

Maybe one of these days I'll call someone. What a novel concept.

May 8, 2009 :: Pennsylvania, USA  

May 7, 2009

Thomas Capricelli

mercurial and ipv6

Today I needed to use mercurial over IPv6 in order to share a repository on a computer that sits behind an (IPv4) firewall but can be reached over IPv6.

The naive attempts

  • hg clone ssh://orzel@ipv6computername/hg/dir
  • hg clone ssh://orzel@[ipv6::address]/hg/dir

miserably failed. But I was given a hint on IRC (thanks ‘Ry4an’!) on how to do it, and thought I should share the knowledge until mercurial gets better IPv6 support.

First for the clone call, do

hg --config ui.ssh='ssh -6' clone ssh://orzel@ipv6computername/hg/dir

then you will not be able to push/pull until you add to the repository’s .hg/hgrc the lines (the setting must sit inside the [ui] section, or mercurial will ignore it):

[ui]
ssh = ssh -6

Enjoy mercurial on IPv6 :-)

May 7, 2009