Posts for Thursday, September 10, 2009


mutt: threading like a pro

I’m sure I could come up with five of these every day if I read the whole user manual religiously and followed dozens of blogs, but I don’t, so here goes:
mutt supports threading (duh!) through

set sort="threads"
set strict_threads="yes"

But you’ll want threads with new mail to appear at the top or bottom (whichever way you work):

set sort_browser="reverse-date"
set sort_aux="last-date-received"

Voila! Additionally it doesn’t hurt to unset

unset collapse_unread

The default keybindings for threads aren’t the best, so rebind them to something unused but easy:

bind index - collapse-thread
bind index _ collapse-all

Don’t wonder about the absence of an “uncollapse” function: in mutt there is only the toggle (like it or not).
One of the most useful features when communicating with people who don’t give a crap about mail is the split-and-link feature for threads: use the key ‘#’ to break apart a thread, or tag some mails (using ‘t’, for example) and then use the ampersand ‘&’ to link them to another mail (because some people see the “Reply” button as a cheap way to get an address into the To field).
Another feature of mutt (one that alone makes it worth using over Thunderbird) is the quickness and ease with which it deletes attachments from mails without breaking the threading or anything else. Just press ‘v’ on a mail, select the attachment, mark it deleted with ‘d’, go back to your mailbox, and hit ‘$’ to write the changes to the mailbox. This will keep your mail backups lean.

Changing the from field when sending email

This is simple, but I always forget how to do it when I need it, and for some reason it’s hard to find on Google. The ‘--’ hands everything after it to sendmail, whose ‘-f’ flag sets the envelope sender (the addresses below are placeholders):

echo "hi" | mail -s "My Subject" recipient@example.com -- -f sender@example.com

Posts for Wednesday, September 9, 2009


Hello. I hacked the GIS website.

Not only did I hack it. I plastered my name all over it. Because what I really want to do is go to jail. I also want to get on bad terms with my school just before they write my references for my university application. Oh, I especially want to fill up that field saying “criminal records” on the UCAS application website.

My sense of humour is limited to rickrolling people on the internet. It is my life’s dream and destiny for my name to become synonymous with Rick Astley. I also linked to my blog so that everybody will know who I am. I can get more visitors on my blog and be really famous and popular. Hooray.

I also invented the name “Team Aerosol”. The name demonstrates the amazing linguistic capabilities and love for imagery in literature I have exhibited over the years.

Interestingly enough, when I was hacking the website, a lot of people started adding me as friends on Facebook. I am now extremely popular and I can add it to my list of social networking sites of which I have the most friends.

I also know how to use this fancy technique called “MSSQL injection” to hack the website. That is how I did it. MSSQL stands for Microsoft SQL. This is a tribute to my most respected company of all time. Microsoft products have been part of my daily routine and I love nothing more than to purchase their corporate licenses just “for the fun of it”.

With my upcoming year 13 exams, I place hacking the school website as my highest priority.

Also, in all the years I’ve been in the school, I’ve been waiting for the opportune moment to leave such a mark. By leaving it to so late in the game, not only am I certain that they cannot catch me as I will leave to university before they have the chance, but it also ensures that I can make my knowledge about computers inconspicuous for all the previous years and strike without anybody suspecting it was me.

Disclaimer: no, I did not hack it. If you are unable to recognise my dripping sarcasm, perhaps that explains why I got a B in English at IGCSE.

Update: it’s past midnight, hacker(s). I like to get a good 6 hours of sleep (at least). It’s healthy. Good night. You should sleep too, continue tomorrow if it pleases you.

Update: this hacker has style.


A couple of HTML tools

While doing HTML work I tend to stick with text editors. I use Arachnophilia, a Java text/HTML editor with easy, editable tag buttons; I prefer it over Bluefish for ease of use. It works well for basic web page design and lets me write clean code that doesn’t need to be fixed afterwards. Here are a couple of other tools to help you create HTML more quickly.

HTML entities from the command line

Arachnophilia can convert characters to HTML entities, but the function isn’t easy to get to (HTML > More Functions > Char to Entity). There are various web sites that do this too, but if you already have a terminal open it might be quicker from there. Darren has created a script that uses Perl’s HTML::Entities to do it easily. You’ll probably need to edit the script to point to your Perl binary:

whereis perl

More than likely it’s in ‘/usr/bin/perl’. You’ll also need to install HTML::Entities. Your distro might have it in the repos, but you may need to install it manually; take a look at this page on how to do so. Once it’s installed, just running the script will put you in a sub-shell where you can copy and paste characters to be encoded.
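If you don’t want the Perl dependency for quick jobs, a plain sed pipeline covers the four characters that matter most. This is just my own fallback, not part of Darren’s script; note that the ampersand has to be replaced first, since the other replacements introduce new ones:

```shell
# encode &, <, > and " as HTML entities; & must go first
echo 'AT&T says "a < b"' | \
  sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' -e 's/"/\&quot;/g'
# → AT&amp;T says &quot;a &lt; b&quot;
```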

You can also convert a whole file. To print to stdout (the terminal):

htmlentities filename

Or convert one file into another:

htmlentities < file > convertedfile

Strip HTML tags

Occasionally you might want to redesign a page and keep only its content; a basic sed command can strip the tags (-i edits the file in place):

sed -i -e 's/<[^>]*>//g' filename
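One caveat worth knowing: sed works line by line, so a tag that happens to span a line break survives the pattern. A quick demonstration (the sample file here is just a throwaway illustration):

```shell
# tags contained on one line are stripped; the <div ...> split across lines is not
printf '<p>Hello <b>world</b></p>\n<div\nclass="x">oops</div>\n' > /tmp/strip-demo.html
sed -e 's/<[^>]*>//g' /tmp/strip-demo.html
# → Hello world
# → <div
# → class="x">oops
```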

There are probably many more tools out there; I’d be glad to hear about them.

Plasma + Ruby = Ouch

I wrote my first KDE4 plasmoid the other day. I can't release it because it's essentially a clone of something you aren't allowed to copy (maybe I can replace him with a penguin and release it that way though).

But I need to rewrite it first anyway, because I did it using the Ruby bindings for Qt4 and Plasma, and wow it's painful. It has a 50/50 shot of even initializing at any given point. When it does initialize, it has about a 1 in 8 chance of immediately crashing Plasma. And some things I just can't get to work at all, e.g. setting a default size or resizing the applet programmatically; X-Plasma-DefaultSize in the metadata is supposed to do it but it does nothing. And it's not just my system (using KDE 4.3), because I tried it on a Kubuntu machine using stable KDE 4.2 and had the same problems.
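For anyone hunting the same bug: the key in question lives in the plasmoid's metadata.desktop. A minimal sketch of what I mean (the names and values here are made-up examples, and as I said, the size key appears to be ignored by the Ruby bindings anyway):

```ini
[Desktop Entry]
Name=Example Plasmoid
Type=Service
ServiceTypes=Plasma/Applet
X-Plasma-API=ruby-script
X-Plasma-MainScript=code/main.rb
X-Plasma-DefaultSize=200,200
```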

The other snag is that the documentation of the Plasma API is buried so deep on the KDE site that I don't even know how I found it. Here it is for those who care (and for my own future reference). I hit lots of dead links on the KDE site on the way there.

Next step is to rewrite the plasmoid in Python or C++ I guess.

Review: Coders at Work

Recently I received a preview copy of Peter Seibel's newest book, Coders at Work.

This is a wonderful book if you are a programmer and care at all about the art, craft, and/or science of programming. If there is any wisdom to be had in this fledgling field of ours, this book contains buckets of it.

The book consists entirely of long interviews with some big names in the world of programming: Donald Knuth, Ken Thompson, Jamie Zawinski, Guy Steele, Peter Norvig, the list goes on. There are also some names I wasn't quite so familiar with (but maybe should have been), like Brad Fitzpatrick, the creator of LiveJournal.

But everyone interviewed for the book has produced some grand, successful project in their time. These are tried-and-true, battle-tested programmers and in this book they share their war stories and advice.

Questions and Answers

There are a few questions that Seibel asked everyone, and it's interesting to compare and contrast the answers.

  • How do you start learning to program? Perhaps because of the varying ages of the interviewees, the answers range from punch cards to Perl scripts. Is a CS degree a necessity, a boon, or a hindrance? Is a background in mathematics necessary? Very different views depending on who you ask.

  • How does a person become an outstanding programmer? Is the ability to program something you're born with or something you learn? This book has a lot to say on the topic, directly and indirectly. Knuth thinks that in any group of 100 people, "2 of them are programmers in the sense that they really resonate with the machine". Fran Allen says that working on a farm helped her have a better understanding of large complex systems with inputs and outputs. Many stories point to people being "naturals", e.g. Guy Steele learning APL from a couple brochures at an exhibit. But it's clear that years (or decades) of hard work and dedication are needed too.

  • If there's anything most of the coders have in common, it's starting at a young age and being rabidly enthusiastic. You get the impression that these guys (and gal) love programming. It's not just a job, it's a passion. Many are the tales of 26-hour coding sessions. Most of those interviewed say they still fiddle around with code in their spare time, even those who have retired from professional programming (or burned out on it entirely).

  • What tools do great programmers use? Which editors? Debuggers? IDEs? If this book is any indication, the answer is Emacs. Or in some cases, plain old pen and paper. There are a few representatives of the IDE side of the aisle, but mostly the tools are simple and the minds do most of the heavy lifting. (There's nary a mention of Vim. It breaks my heart a little.)

  • How do you go about debugging code? Print statements are (perhaps surprisingly?) very popular among the greats in this book. There is much lamenting over the current state of debuggers, which are, in the words of Brendan Eich, "pieces of shit from the '70s like GDB". One of the most common methods of debugging is just mentally stepping through code until you see a problem. Some of those interviewed share some of their horror stories of extremely difficult bugs to squash.

  • Is programming an art, a craft, a science, a kind of engineering, or something else? The answers are wide and varied; many seem to view it somewhere between an art and a craft, but some have other views. L. Peter Deutsch muses that "coder" is to software development what "bricklayer" is to constructing buildings.

Programming today

A recurring theme in the book is the beauty of simplicity.

Many people interviewed for this book did their best work decades ago, when computers were very different beasts. Programmers were constrained by the technology of the times, but perhaps because of those constraints, code had to be simple and straightforward, and there was an elegance in that simplicity that we've since lost.

As a young programmer who never experienced such an environment, I found it enlightening to hear the opinions of those who had. Seibel asked everyone what they thought of today's world, and the answer was often amazement combined with fear and dismay.

Quoth Guy Steele:

I guess to me the biggest change is that nowadays you can't possibly know everything that's going on in the computer. There are things that are absolutely out of your control because it's impossible to know everything about all the software. Back in the '70s a computer had only 4,000 words of memory. It was possible to do a core dump and inspect every word to see if it was what you expected. It was reasonable to read the source listings of the operating system and see how that worked.

Knuth has this to say:

There's this overemphasis on reusable software where you never get to open up the box and see what's inside the box. It's nice to have these black boxes but, almost always, if you can look inside the box you can improve it and make it work better once you know what's inside the box.

Joe Armstrong, of Erlang fame:

Also, I think today we're kind of overburdened by choice. I mean, I just had Fortran. I don't think we even had shell scripts. We just had batch files so you could run things, a compiler, and Fortran. And assembler possibly, if you really needed it. So there wasn't this agony of choice. Being a young programmer today must be awful--you can choose 20 different programming languages, dozens of frameworks and operating systems and you're paralyzed by choice. There was no paralysis of choice then. You just start doing it because the decision as to which language and things is just made--there's no thinking about what you should do, you just go and do it.

And Bernie Cosell, one of the original programmers who worked on ARPANET:

At one level I'm thinking, "This is way cool that you can do that." The other level, the programmer in me is saying, "Jesus, I'm glad that this wasn't around when I was a programmer." I could never have written all this code to do this stuff. How do these guys do that? There must be a generation of programmers way better than what I was when I was a programmer. I'm glad I can have a little bit of repute as having once been a good programmer without having to actually demonstrate it anymore, because I don't think I could.

Dealing with the ever-increasing complexity of computers today is something we all struggle with. We're endlessly re-inventing wheels in this field; everyone knows it. Getting back to the basics is a very appealing sentiment.


The questions and answers in this book are brutally honest and buzzword-free, which is refreshing. It's enlightening and at times giggle-worthy. There's a kind of snark that only a disgruntled computer programmer can produce.

L. Peter Deutsch:

[M]y description of Perl is something that looks like it came out of the wrong end of a dog.

Peter Norvig:

After college I worked for two years, for a software company in Cambridge. And after two years I said, "It took me four years to get sick of school and only two years to get sick of work, maybe I like school twice as much."

Guy Steele:

If I could change one thing -- this is going to sound stupid -- but if I could go back in time and change one thing, I might try to interest some early preliterate people in not using their thumbs when they count. It could have been the standard, and it would have made a whole lot of things easier in the modern era. On the other hand, we have learned a lot from the struggle with the incompatibility of base-ten with powers of two.

The bad

The only complaint I have about the book (if it is a complaint) is the length of some of the interviews. Some were hard to get through in a sitting. There's also some profanity, if you care about such things (I don't).

If you buy this book expecting to hear stories about TDD and .NET and RoR and other such trendy three-letter acronyms, you may be disappointed. If you are immersed in the present and don't care about the past, this book may not be for you.


This book is a great read, educational and entertaining and I dare say inspiring. Other reviews have said this and I agree: Seibel is a programmer and he asks the questions a programmer would ask. I highly recommend this book. It's for sale mid-September; check the book's website.

Posts for Tuesday, September 8, 2009

Hard Drive Crisis

Okay...  So three days ago (that would be Saturday morning), I found that my server was having weird problems.  I was getting an I/O error when I tried to start a movie for my daughter.  Yeah, that can't be good.  I'd seen that problem before when LVM got out of sync somehow (after about 6 months of uptime), and decided to reboot it.  Upon rebooting, I noticed the computer couldn't make it past the BIOS, and, I heard a not-very-familiar, yet very-widely-known *click* sound coming from the server.

Yeah...  I was getting the "click of death" from one of my hard drives.

Later that night, I found it was the newest Western Digital RE2 drive, which I had bought from PC Club a little over a year ago. Under normal circumstances, a hard drive going bad to the point that it's unreadable by the OS is a very bad thing.

In my situation, it would have been a *very* bad thing, because the broken drive was part of an LVM2 array which houses everything, including my movies, my music, and all the digital photography and videography of my family for the past 6 years.  And according to what I've researched, one cannot restore a broken LVM2 array without all the drives being present.  I soooooooo hope I'm wrong on this.

Anyway... All is not lost, because I keep a nightly backup of the entire array.  So, I wasn't worried at all about the array being down.  Even if I have to rebuild the whole array, I still have the data backed up on  another computer.... Right?

Yeah...  That was right... until yesterday.

Yesterday, I powered on my backup machine (which houses another LVM2 array which contains all the aforementioned backups), and it wouldn't get past the BIOS.  Chills went immediately up my spine, all the way to the back of my head.

I rebooted, and this time, I could NOT believe what I heard.  The backup server was - clicking.

This couldn't be happening.

Three days ago, I had ordered a new 640 gig WD Black hard drive to replace the one in my server, and, due to the Labor Day weekend, it won't be here for another two days, giving me a 5-day window to get the new hard drive, install it, and copy all the files over. FIVE DAYS!!!! That's all I needed.

I rebooted.    ...nothing.


So, I open the backup box to find the exact same 500 gig Western Digital RE2 drive, which I had bought the same day, from the same place, a little over a year ago.  Yeah, it was dead too.

I spent hours last night googling options from restoring partial LVM2 arrays, to reviving dead drives, to professional data restoration...  Because now, I was up a creek.  I mean, who could imagine that both the server and the backup server could go completely dead within a week.   Oh, and get this..  Both drives are warrantied until 2011 - THAT'S TWO YEARS FROM NOW!!!  So, theoretically, neither of them should have died...  Much less both of them, and even less that they died 3 days from each other.  Talk about horrible luck.

I would have preferred that all 5 of the drives in my server explode into fireballs and physically melt my server than this.  Gah....

So... The resolution is this:

I've heard from many different sources, that you can temporarily revive a hard drive by putting it in the freezer, and then, when fully frozen, take it out, connect it, and do your best to get all the data off it before it tanks again....  But I've never heard first-hand of this working.  It's always been a guy who knew a guy.

Western Digital also makes a tool that can supposedly tell me why my drive is dead.

So, when I have my new 640GB drive in hand, I plan on using WD's tool, find out everything I can,  about the two broken drives, call WD support, find out as much as I can from them...  Then, if no other solution presents itself, I'm gonna freeze the drive which I think is the least broken, and see if I can use the pvmove LVM2 command to migrate the stuff from the broken HD to the new HD.  I only need this to work once - on ONE of the drives, and I'm back in business.

If not, there's gonna be a whole lot of weeping, wailing and gnashing of teeth in the Jones household...

I'll keep you all posted on the results.  My new drive is expected to arrive this Thursday.

If you know of any solution other than what I've stated, please comment.  Also, if you know of any way to get a partial LVM2 array to assemble itself, please comment.

Both drives are completely dead, unable to be recognized at all by the BIOS or the OS - and I'm running Gentoo Linux on both servers.


movietime – stop powersaving to watch a movie


Note: This script is for KDE 4, as I use KDE’s dbus interface to inhibit suspend. If anyone knows how to invoke dbus from GNOME, please let me know (the dbus command should work on both desktop environments) and I’ll put it in the script.

Like to watch a movie, get set up with your popcorn and coke, and five minutes later your screen dims? This is pretty common on Linux because the X server is set up to do so by default. This script will toggle display power management and suspend so that you can watch a movie without fuss.

Xorg Defaults

This section isn’t required, since the script disables DPMS (display power management) by itself, but it sets the X server to saner defaults for powering down your screen. In your ‘/etc/X11/xorg.conf’ put:

Section "Monitor"
  Option      "DPMS"          "true"  # display power management (true/false)
EndSection

Section "ServerFlags"
  Option      "BlankTime"     "0"     # LED still on, no powersaving (0 disables)
  Option      "StandbyTime"   "10"    # turns off LED
  Option      "SuspendTime"   "0"     # turns off LED, and most power
  Option      "OffTime"       "30"    # turns off all power
EndSection

If you do not have the configuration file, you’ll have to create it. Time is in minutes, so set it to what’s best for you. BlankTime and SuspendTime have been disabled here. BlankTime just blanks the screen and acts as a “cheap screensaver”: it has no powersaving capabilities and is pretty useless overall. You can use SuspendTime instead of StandbyTime for a slight savings on power if you wish, but you may prefer your screen to wake up quicker.


If you aren’t familiar with having your own scripts and how to run them, take a look at this page.

#!/bin/bash
# movietime - disables display power management to watch movies (KDE 4).

# Tests for X server blanking / Monitor blanking
dpmstest=$(xset -q | grep "  DPMS is Enabled")

# Save dpms values
xset -q | grep -o "Standby: [0-9]*[0-9]" | sed -e "s/Standby: //" \
> /tmp/dpmsvalues
xset -q | grep -o "Suspend: [0-9]*[0-9]" | sed -e "s/Suspend: //" \
>> /tmp/dpmsvalues
xset -q | grep -o "Off: [0-9]*[0-9]" | sed -e "s/Off: //" >> /tmp/dpmsvalues
sed -i -e ':b;N;s/\n/ /;bb' /tmp/dpmsvalues # replace newlines with spaces

if [[ -n "$dpmstest" ]]; then
  # Turn off X blanking, display power management (also disables screensaver)
  xset s off; xset -dpms
  # Turn off X blanking, turn off display after three hours
  # xset s off; xset dpms 0 0 10800
  # Inhibit suspend
  echo '#!/bin/bash'  >  /tmp/inhibit-suspend
  echo 'while :'      >> /tmp/inhibit-suspend
  echo 'do'           >> /tmp/inhibit-suspend
  echo 'qdbus org.freedesktop.PowerManagement /org/freedesktop/PowerManagement \
  org.freedesktop.PowerManagement.Inhibit.Inhibit "movieview" "Playing movie"' \
                      >> /tmp/inhibit-suspend
  echo 'sleep 119'    >> /tmp/inhibit-suspend
  echo 'done'         >> /tmp/inhibit-suspend
  chmod u+x /tmp/inhibit-suspend
  nohup "/tmp/inhibit-suspend" &> /dev/null &
  echo " - Disabled screensaving, and suspend"
else
  # Resume display power management
  xset +dpms
  # Resume display power management with previous values
  # if [ -f /tmp/dpmsvalues ]; then
  #   xset dpms `cat /tmp/dpmsvalues` && rm /tmp/dpmsvalues
  # fi
  pkill inhibit-suspend
  echo " + Enabled screensaving, and suspend"
fi

# Notes:
#  On resume X blanking is ignored.
#  Doesn't disable computer sleeping

I also put in an option to turn off the display after a certain amount of time: uncomment those lines and comment out the line above them to activate it.

Turn off all cellphones and enjoy the show!

Fast Compositing with KDE4 and FGLRX

After a much heated discussion about how to fix the horrible resizing and performance bug with FGLRX and KDE4, no one knew where to start looking. The X team had to do a little digging; the KDE4 team needed to change some things; the FGLRX developers needed to get their shit together and listen to the users… bla bla bla, the flame wars raged on, fingers were pointed, and nothing ever got done.

That is, nothing got done until a lone user piped up with a workaround. Here is what he wrote in the comments of that blog post:

Hi, I have been pissed off by this problem a long time and assumed it was ATI’s fault. Tonight I made one last effort before ordering a Nvidia graphics card. And I was successful.

I am running Catalyst 9.8 with a Radeon 3850 and have had this resize/maximize problem as long as I have used KDE4. To solve the problem I needed to modify a file in xorg-server. In the source directory it is called ./composite/compalloc.c. Here I commented out most of a function called compNewPixmap. Everything below these lines:

pPixmap->screen_x = x;
pPixmap->screen_y = y;

all the way down to (but not including) the last line:

return pPixmap;

After this I am running KDE4 with all desktop effects that I want and without any lag in resizing/maximizing.
I am running Gentoo, so I just updated the xorg-server source package file and put it back into the source repository, rebuilt the manifest and emerged it again. Voila!

Voila indeed. The patch he’s talking about looks like this (thanks to this Russian blog, which I can’t read):

--- composite/compalloc.c.orig  2009-09-08 02:54:28.657143479 +0700                              
+++ composite/compalloc.c       2009-09-08 02:55:42.835357653 +0700                              
@@ -484,64 +484,6 @@                                                                             
     pPixmap->screen_x = x;                                                                      
     pPixmap->screen_y = y;                                                                      
-    if (pParent->drawable.depth == pWin->drawable.depth)                                        
-    {                                                                                           
-       GCPtr   pGC = GetScratchGC (pWin->drawable.depth, pScreen);                              
-       /*                                                                                       
-        * Copy bits from the parent into the new pixmap so that it will                         
-        * have "reasonable" contents in case for background None areas.                         
-        */                                                                                      
-       if (pGC)                                                                                 
-       {                                                                                        
-           XID val = IncludeInferiors;                                                          
-           ValidateGC(&pPixmap->drawable, pGC);                                                 
-           dixChangeGC (serverClient, pGC, GCSubwindowMode, &val, NULL);                        
-           (*pGC->ops->CopyArea) (&pParent->drawable,                                           
-                                  &pPixmap->drawable,                                           
-                                  pGC,                                                          
-                                  x - pParent->drawable.x,                                      
-                                  y - pParent->drawable.y,                                      
-                                  w, h, 0, 0);                                                  
-           FreeScratchGC (pGC);                                                                 
-       }                                                                                        
-    }                                                                                           
-    else                                                                                        
-    {                                                                                           
-       PictFormatPtr   pSrcFormat = compWindowFormat (pParent);                                 
-       PictFormatPtr   pDstFormat = compWindowFormat (pWin);                                    
-       XID             inferiors = IncludeInferiors;                                            
-       int             error;                                                                   
-       PicturePtr      pSrcPicture = CreatePicture (None,                                       
-                                                    &pParent->drawable,                         
-                                                    pSrcFormat,                                 
-                                                    CPSubwindowMode,                            
-                                                    &inferiors,                                 
-                                                    serverClient, &error);                      
-       PicturePtr      pDstPicture = CreatePicture (None,                                       
-                                                    &pPixmap->drawable,                         
-                                                    pDstFormat,
-                                                    0, 0,
-                                                    serverClient, &error);
-       if (pSrcPicture && pDstPicture)
-       {
-           CompositePicture (PictOpSrc,
-                             pSrcPicture,
-                             NULL,
-                             pDstPicture,
-                             x - pParent->drawable.x,
-                             y - pParent->drawable.y,
-                             0, 0, 0, 0, w, h);
-       }
-       if (pSrcPicture)
-           FreePicture (pSrcPicture, 0);
-       if (pDstPicture)
-           FreePicture (pDstPicture, 0);
-    }
     return pPixmap;

This patch works like a charm. All of the FGLRX resizing/maximizing bugs disappear. Not only that, but things like clicking on the K menu are suddenly a lot faster… KDE4 doesn’t seem laggy and now has the performance I’ve expected all along. The effects look great, and my transparent terminal is a delight.

There is, however, a bit of garbage that shows up occasionally, and perhaps there’s a good use for the code that was removed in the patch. Why is it only FGLRX that benefits from removing this code? I don’t know much about XOrg internals, but I’m guessing it has to do with some sort of sometimes-required allocation that causes a readback in the FGLRX driver but not in other drivers. What’s the deal? Is fixing this problem as simple as committing this patch and then fixing the garbage error? Or is the removed code necessary, and does the problem really lie with FGLRX? What to do at this point?

Posts for Monday, September 7, 2009

Dress-up your Firefox

I just stumbled across a Mozilla Labs project called Personas. It’s lightweight theming for Firefox that can be changed without restarting the browser. After you install Personas, you get a new menu entry, Tools > Personas for Firefox, where you can quickly change the persona you are using. From what I can tell, Personas change your browser’s toolbar and menu font colors and usually add a lightweight background picture. According to the website, the project has been going since December 2007, so there are a lot of Personas to choose from. I guess I’m a little slow sometimes. :)

One really cool thing is that you can visit the Personas gallery to see a bunch of different personas; when you hover your cursor over one, your browser will temporarily use it. If you want to keep it, just click on it and it becomes your active persona.

Here’s a quick little clip that shows what Personas does.

Spriting and learning

In the mid-1990's I was really into Nintendo games, as was everyone. My favorite was the original NES Final Fantasy. Sometime in my teens I got my first computer, and I decided it would be cool if I had some sprites of that game on my computer.


My first computer ran at 640x480 with 16 colors. I had Windows 3.1 and the most sophisticated image manipulation program around was MS Paint. How could I get sprites into my computer? Well, I had a strategy guide for the game, with blurry photos of all of the enemies, so I just opened up MS Paint, zoomed waaaaaaaay in, and drew all of the sprites pixel-by-pixel. Insane? Maybe, but it's a fun kind of insane.

This took about a year of off-and-on work, but in the end I had something I thought was great. I still have the file:


Note that my computer couldn't even produce sophisticated colors like "orange" and "brown", so I had to tile red and yellow together so it looks orange from a distance. Oh how computers have progressed since then.

At this point I loved computers but I literally didn't even know what programming was. I had never heard of the internet. I didn't even get a taste of programming until high school.

In 1999 I was in college but still largely ignorant of programming. I decided to start my first website, which was about the NES Final Fantasy that I still liked. I decided to put some sprites on the site. For the first version I chopped up my old image file from above into individual sprite files, but the quality of these was terrible.

So the way I got good sprites was by taking screen-captures of the game in an NES emulator, then in Paint Shop Pro I cropped out the backgrounds. This was much faster than hand-drawing, but it still took months.

This is an example of using a thousand-dollar piece of equipment as a hammer to pound in a nail. It's the most rudimentary form of computer use. It's the kind of thing I cringe at when I see coworkers do it today.


Last weekend, many years older and hopefully a tiny bit wiser, I pulled out my copy of the most recent remake of FF1, for the PSP. I decided it'd be cool if I could rip some sprites from this.

So, first I got an ISO of the game and mounted it loopback so I could view the files. In the ISO there's a 100MB BPK file. I didn't know what this was, so I opened it in a hex editor and saw that it was some kind of archive. You could clearly see an initial list of filenames with byte offsets and some other flags for each, then a bunch of binary data.

A few google searches later and I found this where someone else had the same idea as I did. There's an extraction script there in some language I don't know, but it wasn't hard to figure out what it was doing.

So then I was going to write a script to extract the BPK, but thankfully someone in Japan already had, which saved me the trouble of even doing that. Some of the extracted files were themselves archives, but after running the script on its own output a few times I had a bunch of GIM files.

What's a GIM file? Never heard of it. But a quick google search for "GIM to BMP" will net you a program called gimconv. Sadly it's Windows-only, but a batch file or two later and I had a bunch of BMPs like this:


These files have solid backgrounds, but I want transparent ones. Fortunately it isn't hard to make an image's background transparent in Linux using ImageMagick and its nice documentation. One snag is that all the images have different background colors, but I can tell ImageMagick to use the color of the top-left pixel as the transparency color:

for f in *.bmp; do convert -matte -fill none -draw 'color 0,0 replace' "$f" "${f/bmp/png}"; done

Some of these images look like sprite sheets. So let's pick one, chop the sprites into individual files, then make an animated .gif from these sprites (while also adding a transparent border around it).

convert KURO.png -crop 32x32 +repage KURO%d.gif
convert -matte -bordercolor none -border 28 -compose Copy KURO?.gif -delay 25 -dispose Background KURO.gif

Giving us:


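The crop-and-animate pipeline above can be sanity-checked end to end without the ripped sprites. Everything below is fabricated (a flat red "sheet" instead of KURO.png), assuming only that ImageMagick is installed:

```shell
# Build a fake 64x32 sprite sheet, chop it into two 32x32 frames,
# assemble them into an animated gif, then list the resulting frames.
convert -size 64x32 xc:red fake_sheet.png
convert fake_sheet.png -crop 32x32 +repage frame%d.gif
convert -delay 25 -dispose Background frame?.gif anim.gif
identify anim.gif
```

identify should report two 32x32 frames, confirming the -crop/+repage step really did split the sheet.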
So after a matter of 2-3 hours (most of which was puzzling over some hex data and then googling around), I am already pretty much done. Barely any skill on my part required other than knowing what to look for.

The moral of this story is:

  1. The internet is awesome. It's easy to forget how much better life is with so much knowledge at our fingertips. I can't even remember what it was like without the internet and I'd never want to go back to that.
  2. Learning is fun. What was a year-long job became a few-minutes job with even a rudimentary knowledge of scripting.
  3. ImageMagick is pretty handy.
  4. Old school games are great. FF1 is still fun after 20 years. They keep re-making it for new systems for a good reason.
  5. I got a late start in programming. I wish I would've started at a younger age. Think of how much more I could've accomplished with my time. I'm still playing catch-up in many ways.
  6. Sitting here in front of 3840x1200 pixels worth of million-color monitor screen, typing a story people are going to read in a few minutes in countries I'll never visit, ripping sprites from a portable game device that's probably thousands of times faster than my old NES, I can't even imagine what we're going to be doing with computers in another 15 years.

New Resolver Data Structure Pictures, or, Why I Need Lots of Pens

As some people may or may not have heard, one of the big Paludis projects we’ve been discussing for the past couple of years has been to come up with a super-amazing dependency resolver that can handle ABIs, binaries and chroots perfectly, provide complete customisation so people can do stupid things like “update everything except glibc”, cure cancer, be adapted to support arbitrary new features with no difficulty and explain all of its decisions in an easy to understand manner. Obviously, doing all of that at once is rather ambitious, so in the interests of it ever being finished, I’ve instead been working on a stupid but incrementally expandable resolver designed around:

  • Doing only the basics initially, but having a simple design that cleanly splits apart things like ID selection, dependency selection and ordering, even if doing so prevents certain short cuts from being taken. That way, when we add things in later, we don’t have to rely upon lots of subtle interactions between all the different components.
  • Making sure that we can explain exactly why we’ve done a particular thing, even if this means not including clever trickery.
  • Having easily accessible innards, meaning if people still insist upon having an “upgrade everything except glibc” option, we can easily move a very small amount of code out into a std::tr1::function and let clients handle it that way without having to pollute the resolver.

The basic features all now pretty much work, and cave resolve is usable on Exherbo (although not Gentoo at present, since I haven’t implemented virtuals handling), although there’s no sensible error handling, several obvious optimisations haven’t been made, the UI is highly crude and there are no bells, whistles or cookies. Still, being able to do this is rather fun:

$ cave resolve gnome --explain libbonoboui:2

Explaining requested decisions:

For gnome-platform/libbonoboui:2:
    The following constraints were in action:
      * >=gnome-platform/libbonoboui-2.1.1, use installed if possible, installing to /
        because of dependency >=gnome-platform/libbonoboui-2.1.1 from gnome-desktop/gnome-panel-2.26.3:0::gnome
      * >=gnome-platform/libbonoboui-2.1.1, use installed if possible, installing to /
        because of dependency >=gnome-platform/libbonoboui-2.1.1 from gnome-desktop/gnome-panel-2.26.3:0::gnome
      * >=gnome-platform/libbonoboui-2.13.1:2, use installed if possible, installing to /
        because of dependency >=gnome-platform/libbonoboui-2.13.1:2 from gnome-platform/libgnomeui-2.24.0:2::gnome
      * >=gnome-platform/libbonoboui-2.13.1:2, use installed if possible, installing to /
        because of dependency >=gnome-platform/libbonoboui-2.13.1:2 from gnome-platform/libgnomeui-2.24.0:2::gnome
    The decision made was:
        Use gnome-platform/libbonoboui-2.24.0:2::gnome
        Install to / using repository installed

Now to the important part: the pretty pictures!

Regular visitors to #exherbo may have noticed me moaning that I don’t have enough pens to implement their feature of choice. Here’s why:

Resolver Design 7


Since I can’t keep track of more than around five classes at once in my head, I have to have summaries written out on paper. Furthermore, each class summary has to be in a different colour (although my scanner’s done a fairly good job of hiding that in the picture above…), which means I need a pen (a proper fountain pen, or I can’t write with it) for each class. This in turn means that any new feature will likely require one or more additional pens, and I am more or less at my limit.

I also need a couple of colours spare to be able to scribble all over the diagrams, draw lines, change things and generally make a huge mess of things. An earlier design page now looks like this (and note that this is the most readable of the earlier design pages):

Resolver Design 5


On top of that, any problem too complicated to be solved in my head gets its own highly weird picture drawn out. Unfortunately the only example of this that I have handy (working out a circular dependency breaking algorithm) is on A3 paper, which I can’t easily scan…

I’ve found that working on paper for this kind of thing is much faster than working on a computer (writing’s as fast as typing, but the layout’s much quicker on paper, and scribbling over computerised designs doesn’t work). I don’t use a formal design system at this stage because it’s more pain than it’s worth, especially when there’s no need for other people to be able to read the design without being able to ask questions, although in some ways what I do is close to CRC cards with all the bits I don’t need ripped out.

I do not claim that my system is sane; merely that it works.

Posted in paludis internals Tagged: paludis

Posts for Sunday, September 6, 2009

Looking better.

Just a few small updates today before work. It looks like my updated captcha is stopping all the spam comments, thank god. Today, I focused a little on the backend to make sure categories were working properly. Changing pages on categories still doesn't work, but I haven't started on it yet. I'm sure it'll be done in a few more days.

I'm trying to focus a little more on the aesthetics at this point as well, but I just can't find a layout I like for the little login box on the right hand side. What's there works for me for the moment, but any ideas would be appreciated.

I'm also thinking about writing my own Twitter script to update the Twitter box at the top of my page. After about an hour or so of no updates, the box stops displaying anything. Why? That's stupid.


How to solve the big Internet problem.

I’ve said it before and I’ll say it again, the Internet is full of trash.

When I say trash, I don’t just refer to websites and data, I mean people. The Internet has a startlingly similar effect to drugs – it’s addictive and makes people act like idiots. As you’ve probably already guessed from the title, these two are the “big internet problem(s)”.

Addiction is one that is easily fixed and is progressively being fixed. As we integrate technology and the Internet more and more into our daily lives, addiction will be disguised as a lifestyle. If you can’t see the problem anymore, you don’t try to solve it. Not because it’s the right or wrong thing to do, but because most people are lazy arses (I once misspelt “lazy” as “lady” in a chat conversation, big mistake), and so tackling the fundamental problem becomes quite futile.

The second is that people start acting like idiots. The reasons for this fall neatly into two categories: 1) they were idiots to begin with, and 2) they interacted too much with real idiots and so acculturated accordingly. Removing the first category kills two birds with one stone, which is what I shall accomplish in my lovely plan, which I’ll write about in a bit.

In a bit.

Here’s the plan. You let evolution take its course. You simply remove the Internet. For a year or two. The issue lies with the fact that you can’t punch somebody through a computer screen. Once you remove the Internet, idiots cannot hide behind aliases and are forced to be idiots to real, live people.

These innocent people will be suddenly exposed to a huge influx of stupidity and will involuntarily resort to their instincts – to vent out their frustration in the most effective way possible. The most effective way is also normally proportional to the amount of pain the idiot experiences.

A year or two of this shock treatment should be enough to weed out the majority of this problem. We then put the Internet back up and purge any website that isn’t visited within the first couple of days. Any hosting provider whose purged websites make up more than 30% of the total sites they host will be suspended for manual interrogation.

This one to two year absence of the internet will also remove Internet addiction. It should be ample time for people to redevelop a lifestyle that doesn’t revolve around the constant communication the Internet provides.

We’d also save a crapload in energy costs for those two years. This has major environmental advantages. We’d also shut down a good percentage of our industry along with labourers with non-transferable skills, not to mention seriously harm the backbone of many other businesses. However, this will also allow us to take a fresh look at whatever stupid economic system we’ve got in place today. This is the jolt we all need to start restructuring our societies, not with visionaries spouting their optimism to closed ears but with an actual realisable event.

Ok. I promise I’ll do a real post when I’m next due.

Related posts:

  1. Good riddance, Twitter.
  2. Is your ISP causing slow Internet?
  3. Mass-amateurisation of the Internet

Posts for Saturday, September 5, 2009

Improve flash performance (a bit, maybe)


I’ve been struggling with flash quite a bit. I like to watch flash videos online because the time I’m able to get to them is usually at odd hours of the day. The issue with flash (I’m using the 64bit alpha, but I think this affects other versions too) is that higher definition flash can often become choppy and tear – particularly in fullscreen. I read that this has to do with how flash uses Xvideo. I’ve tried numerous hacks I’ve seen around but none that have worked. The flash 64bit alpha has been around coming on 10 months now, so hopefully we’ll see an update soon, but until then I did find something that might improve your flash performance a bit. This I found while going through the Ubuntu forums (thanks to Labello who figured it out). It’s just a simple xorg server edit that may already be enabled on some systems; flash appears to require a couple of options that some xorg.confs may not provide. To give an idea of the improvement: before, 1080 flash video was unwatchable, sometimes giving me as low as 1 frame every five seconds, and 720 video would tear at times. With the edit, low motion 1080 video (yeah I know) like Law and Order is mostly tolerable and 720 plays without a problem. To get these benefits (they will vary from system to system), make sure these settings are in your xorg.conf and then restart the xorg server.

Section "Extensions"
  Option      "Composite"     "Enable" # for 3D, alpha desktop effects
EndSection

Section "DRI"
  Mode 0666                            # helps flash performance
EndSection

There’s also an edit on the link about overriding gpu checks. I think that this may help a bit, but it could just be my imagination :), not sure.

Assorted C++ Linkage

Posted in Uncategorized Tagged: c++, c++0x, programming

Posts for Friday, September 4, 2009


why do people program perl these days?

With so many other awesome alternatives, I don’t understand why people use perl.

Just the other day, I was programming in perl (at the day job… as I have written a considerable amount in perl already and haven’t gotten the infrastructure in python yet to switch over).  I noticed how poor the support is on Windows.  Alright everyone, guess what perl returns from this on Windows:

C:\Documents and Settings\username\> perl
use Cwd;
print "cwd: ".getcwd()."\n";

Well, if you guessed "C:\Documents and Settings\username\", you are wrong.  In perl, of course, it’s actually "C:/Documents and Settings/username/".  I realize it “happens” to work (as in, some windows versions support it… not perl), but I’m not ok with it “happening” to work.  I need it to work.  I don’t want my code breaking on something so simple as a path separator.  And it really isn’t that hard to support both / and \.  Sure enough though, I can’t trust perl to “do the right thing”.  I end up hacking around everything perl has in its standard library.

Another doozy is when you start using File::Find and find out that it doesn’t work on Windows with no_chdir.  Why? Not sure… And I don’t much care.

I guess what I’m curious about is how many people still see a good reason to use perl.  One that doesn’t include “we already have an entire infrastructure coded in perl”.  I understand no one really likes “rewriting” code and that it has a batch of its own problems, such as reintroducing regressions, and is “needlessly expensive”.  However, I would think those projects go away over time.  New projects should use newer technologies and get the old and new technologies to work together.  And eventually, maybe, phase out the old stuff.  Maybe it’s not something that someone converts overnight, but at least move in the right direction.

My biggest problem with perl is that its standard library is hard to trust to “do the right thing”.  A language of which this is true is very hard to program in efficiently and effectively.


Sabnzbd behind apache

So after upgrading my sabnzbd installation to version 0.4.11 – for which, by the way, I should create a better ebuild and do some dependency cleaning, but that is a different story.

Like I was saying, after upgrading to version 0.4.11 I decided I wanted to close port 8080 (the one sabnzbd is using) to the outside world. Now, I know apache has some nice proxy functions, so it should be easy.

Make sure apache is compiled with the following modules: apache2_modules_proxy apache2_modules_proxy_http apache2_modules_proxy_balancer

That was the hard part. Now just add a vhost:


<VirtualHost *:80>
    <Location /sabnzbd/>
        Order deny,allow
        Deny from all
        Allow from all
        ProxyPass http://localhost:8080/sabnzbd/
        ProxyPassReverse http://localhost:8080/sabnzbd/
    </Location>

    ErrorLog /var/log/apache2/error.sabnzb.log
    LogLevel warn
    CustomLog /var/log/apache2/access.sabnzb.log combined
</VirtualHost>

This assumes that you have sabnzbd listening on localhost:8080. Now you probably do not want the whole world watching your downloads (or deleting them or whatever). So we just add some basic apache authentication, use htpasswd2 to create a file with authorized users and add the following lines into the location block.

AuthName "Login Required"
AuthType Basic
AuthUserFile <>
require valid-user
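Creating that password file might look like the following; `downloader` and `secret` are made-up credentials, and openssl stands in for htpasswd2 in case the apache tools aren’t on your PATH (both produce the same apr1-style hash):

```shell
# Generate an htpasswd-compatible (apr1) entry for user "downloader".
# User name, password and file name are all placeholders - substitute
# your own and point AuthUserFile at the resulting file.
printf 'downloader:%s\n' "$(openssl passwd -apr1 'secret')" > sabnzbd.htpasswd
cat sabnzbd.htpasswd
```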

That is all.
Happy downloading!

Posts for Thursday, September 3, 2009


The Euphemism Website – the failed idea.

One of the websites I always wanted to make was a Euphemism dictionary. It would be pretty similar to urbandictionary in terms of concept and allow user-defined euphemisms for common insulting phrases. This would thus prevent us racking our brains every single time we wanted to come up with another creative way of saying “my aunt’s maid’s son is better at computers than you“.

It’s also a great initiative to start putting literature to good use rather than the common application of analysing the themes and symbolic imagery behind fictional characters.

Another objective would be to disprove most of the web community do’s and don’ts through the context and artificially induced environment the website will create. For example, users will be insulted constantly, from the minute they enter the website, when they register, log in, or do anything that involves a mouse and a screen. Definitions and euphemisms will have a voting system, except it’s unidirectional – you are only allowed to vote down submissions. I don’t mean this in the rottentomatoes format where more tomatoes means it’s good, I mean this quite literally. You’re not allowed to say something is good; you say something is crap, and then we’ll list the least crap and the most crap. Take your pick. User interaction will be minimal – you’re allowed to submit and vote, nothing else. Users with accounts will be given no option but to receive spam email from an entirely unrelated mailing list, all the time mocking you for your gullibility in registering on such a shady website.

We won’t only break community conventions, we’ll break design conventions too. There will be no clear header or footer. The title will be a randomly rotating insulting phrase (of your choice if you register an account). The content will be single column, left aligned, with a colour scheme worse than my dad’s tie, and a table will be used for everything. One huge table with colspans and rowspans that’ll make the folks in the #css channel choke.

With all that said, it’ll still be better than 95% of the websites on the internet.

Gosh the internet is so full of trash.

Related posts:

  1. Hello. I hacked the GIS website.
  2. How to Make a Website Part 1 – The Environment
  3. How to solve the big Internet problem.

Paludis 0.40.0 Released

Paludis 0.40.0 has been released:

  • Notifier callbacks allow clients to tell the user what’s going on when generating metadata, performing resolutions etc.
  • Sets now work slightly differently. For sets defined by multiple repositories (e.g. 'system'), 'setname::repo' can be used to access the part of the set defined by a given repository.
  • Bugfix: Upgrading an unpackaged package no longer errors out.
  • Bugfix: Combining :slot and ::/path restrictions now works correctly.
Posted in paludis releases Tagged: paludis

dev-lang works… maybe

I updated the one exheres in my dev-lang repository. I still have not purchased the power supply for my Linux computer, so I still have not tested my own repository, but if someone else wants to, that would be nice! I more or less just copied it straight from the ebuild in Gentoo (giving proper credit of course). I hope they all work. I’ve been looking around at how to get all of falcon’s vim config files pulled together into one exheres.

Moving on to my app-vim exheres: I’ve been looking around, and the only method of doing this I can find is to tarball the files and then install that. While I’m sure that works wonderfully, it would require more work than I thought. Creating an exheres for every config doesn’t sound like a good idea either, but it would be easier to keep current; I’d only have to update the script on Vim’s homepage, which I’ll be doing anyway. If I did have one for each file, then I could simply do a single falcon-vim-config-all.exheres-0 file and let it pull them all in as required deps. That seems very sloppy to me though, and I’m not sure how paludis would handle the deps. Suggestions anyone?

Enjoy the Penguins!

Posts for Tuesday, September 1, 2009

Exherbo Repository

My would-be repository, as far as I can tell, now has everything it needs in order for paludis to actually pick up and install things from it. Granted, at this point it only has three exheres in it, and granted, only two of them will work. But nonetheless it is there and you can use it.

the steveno repo

Again, suggestions, hints, and corrections are welcome.

Enjoy the Penguins!


Rapid Fire

Some things that didn’t make it into a thinkMoult post. In no particular order. Sometimes posts like these are mandatory.

  • is now functional and running, albeit very unfinished. Our social desktop submission stands at 68% and we welcome you to contribute your vote.
  • Kamal has joined the Eadrax team from the graphics side to replace Chris Peters.
  • I have the privilege of being a beta-tester on Lockerz, a site where you earn points through doing activities and can exchange these for real-life merchandise. It’s legit and looks quite spiffy. Leave a comment if you want me to send you an invite.
  • The first 5 portfolio entries on the carousel have been linked properly to their respective items. Go on, try clicking them.
  • 10 or so new submissions have been added to the Blender Model Repository.
  • A one-hour speech to a small group of 50-60 people on education and communities.
  • School has started and I finally received my badge for the International Award Silver Level (they forgot to give it last time and I only got the certificate) – hopefully I’ll get my gold before the end of this year.
  • A lovely new set of material to use for portfolio creation after some full time work-experience at an architecture firm.
  • An attempt to cook with my Dad that ended up in spaghetti being burned. Yes, with flames and everything – I didn’t even know that was possible.

… and of course all the usual routine stuff with a hilarious schedule – but then again, you already knew that.

Related posts:

  1. Blender Suzanne Awards announced.
  2. Back from the Jungle
  3. What’s new 18th July 09

Posts for Monday, August 31, 2009

Tools for prototyping

Yesterday I sat in a meeting where we looked at a prototype modelling something that we had come up with in an earlier workshop. The prototype is necessary cause it's not obvious whether the model we came up with works (well enough) to be of use to us; we had to check our assumptions, basically.

One of the participants put quite a lot of work into the prototype so we could check out results, the prototype kinda integrated with some sort of metamodelling framework in JAVA so the class diagram was a mess (not cause of bad modelling on his part but just cause of all the hoops he had to jump through to get stuff going).

It was one of those moments where you realize something consciously that you always knew but never properly phrased: You need some sort of prototyping tool for your area of expertise or you will waste a lot of time and/or create bad results.

Prototyping is something completely different than "proper" development. Let's look at software as an example: if I write software to try something out, the internal design, the architecture, isn't all that relevant. I just slap enough code together that I can validate or invalidate my assumptions, not one line more. In prototyping there's no "ugly code", no "wrong", cause they are basically just tests that are gonna be thrown away later anyways. After I have validated my assumptions I then start from scratch and design a "proper" solution, one that does things right.

We are lazy and the idea of throwing away code scares many people: "Why would I write something again that already worked? That sounds like twice the work!" is something you might hear from these people. This train of thought is based on the wrong assumptions.

More often than not it's a bad idea to carry the code from your prototype over to the proper implementation. All the corners you cut when prototyping will turn into expensive code debt in your "proper" design.

Prototyping is about quickly getting results. That also means that some technologies work better than others: if you design interfaces for your software, for example, prototyping on paper is about the fastest and most useful tool you can ever have. Using a GUI-editor to click together interfaces is second, and writing real "code" to create a GUI is way last. Paper gets results quickly: it's easy to change stuff, to develop ideas and change details, it's easy to rearrange things. When you want to write prototypical code, don't use something like C++ or JAVA, cause you have to deal with too much "administration" and "code bureaucracy" in order to get shit done. Use something dynamic and fast that gives you a bunch of building blocks that you loosely throw together to check whether an idea works. Use Python, Ruby, Prolog, whatever dynamic tool floats your boat and gets stuff done quickly, even if you subscribe to the belief that static typing is somehow a good thing. Actually, especially then.

Thinking back to the wide array of classes on the diagram yesterday I wonder how much more could have been done using a proper prototyping tool for the job. How much easier some quick fixes would have been to check some other idea.

If you develop anything you will have to prototype at some point. If you don't have the proper toolset to do that you will waste your time running in the wrong direction. Look at your own design process. Do you have proper tools to prototype really quickly? If not, find some now. Learn their ins and outs. You'll be thankful when your next project comes.

Xen Hypervisor Nullmodem Connection

Been playing about with the rebase/master xen git branch again today to see if I can get it to boot my Gentoo xen setup. No luck, still the same panic on boot, so I decided to find out how to capture the output.

Grabbing the output from the kernel is simple: add “console=tty0 console=ttyS0” to your kernel command line, then cat ttyS0 on the other end. For more details, there’s the TLDP Remote Serial Console HOWTO.

Getting the Xen hypervisor to do something similar was slightly trickier. Turns out you have to use minicom, or all you get is a string of control codes. After some help from ##xen on Freenode, I was able to find OpenSuse Wiki: How to capture Xen hypervisor and kernel messages using a serial cable.

I can now get Xen output - however it seems to block the kernel output, so it’s one or the other at the moment. Would be nice if I can find out how to get both (kernel and hypervisor) working at the same time.
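For reference, a grub.conf entry wired up for serial capture looks roughly like the following. This is a sketch only: the paths, baud rate and root device are assumptions pieced together from the howtos above, so adjust them for your own box. The com1 options on the xen.gz line are read by the hypervisor, while the console options on the module line are for the kernel:

```
title Xen (serial console)
root (hd0,0)
kernel /boot/xen.gz com1=115200,8n1 console=com1
module /boot/vmlinuz console=tty0 console=ttyS0,115200 root=/dev/sda3
```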

Planet Larry is not officially affiliated with Gentoo Linux. Original artwork and logos copyright Gentoo Foundation. Yadda, yadda, yadda.