Planet Larry

February 8, 2009

Clete Blackwell

Road Trip - An iPhone Application

One of my all-time favorite (and most used) applications for the iPhone is Road Trip. It is developed solely by Darren Stone, and he has done a great job polishing it. One of my biggest complaints about most applications is that they are unpolished and do not feel like native iPhone apps. This app is not that way; it has a clean, easy-to-use interface and all of the features I need.

Let me begin by showing the home page. After you add your first vehicle, this page shows all of your data. I currently have six fill-ups entered into the application:

[screenshot: img_0011]

This menu shows all of the statistics for your trips. It’s great for a quick glance at how much money I spend a day on gas and what the average price of gas is. Here is a picture showing the graph that was previously covered up:

[screenshot: img_0012]

So we can see that the minimum price for gas is $1.49 and the maximum is $1.87. I have ranged between 18.3 and 32.4 mpg (although I’m not sure how I managed 18.3; maybe it was an input error). I can also glance and see how much I have spent on maintenance. Also, the paid version of this application allows me to specify certain time ranges and road trips, so that I can see how much I spent in a month or on a specific road trip. It is a really nice feature.


In the fuel tab, I can see (and edit) each individual fuel entry. Tapping the “+” symbol brings you to a menu where you can add a new fill-up. This is the best part of the application. You put in the cost of the fuel, your odometer reading, and how much fuel (in gallons or liters) you pumped, and it does the rest. Optionally, you can specify a location for the gas pump and whether you were driving in city, highway, or mixed conditions. This is great for keeping separate city/highway statistics.
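The arithmetic the app does for you is simple. Here is a rough Python sketch of what a fuel log can derive from a single fill-up (the function and field names are my own invention, not taken from the actual app):

```python
def fillup_stats(prev_odometer, odometer, gallons, price_per_gallon):
    """Derive the numbers a fuel-log app shows for one fill-up."""
    miles = odometer - prev_odometer
    mpg = miles / float(gallons)          # fuel economy for this tank
    cost = gallons * price_per_gallon     # what you paid at the pump
    cost_per_mile = cost / miles          # useful for per-day spending stats
    return mpg, cost, cost_per_mile

mpg, cost, cost_per_mile = fillup_stats(12000, 12300, 10.0, 1.87)
print("%.1f mpg, $%.2f for this tank" % (mpg, cost))  # 30.0 mpg, $18.70
```

The monthly and per-trip averages, minimums, and maximums then just fold these per-fill-up numbers together.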

The preferences panel:

Here you can set all of your preferences, but I want to draw your attention to some extra features. In the notes section, you can enter additional information that you may need, such as your license plate number, VIN, and insurance details. The Export Data button is also extremely helpful: you can export your data to another iPhone or iPod Touch, or import it into Microsoft Excel or an equivalent spreadsheet.

Road Trip is a great iPhone/iPod Touch application and I highly recommend it. A “Road Trip Lite” version is available for free from the App Store. Road Trip is currently $4.99 and is well worth its price.

February 8, 2009

Nikos Roussos

fosdem (part 1)

i really enjoyed the first day at fosdem! i had the opportunity to attend many interesting talks, including Fedora's excellent presentation about sugar and a not-so-good one about exherbo.

but talks are not the only enjoyable thing at université libre de bruxelles. i had the chance to meet and talk with people from the free software community. it's nice to see greek hackers in almost every distribution that participates in fosdem (gentoo, fedora, debian).

i'm getting sleepy here, but i promise another post tomorrow and some photos on my flickr gallery in a few days.

February 8, 2009 :: Athens, Greece

February 6, 2009

Christoph Bauer

eMail for Info-Junkies

There are many situations in which you might find it important to stay up to date with your mail - especially as soon as it hits your mailbox. This function is available on some mobile phones and is commonly sold as ‘Blackberry functionality’ or push mail. In other words, it is a more or less permanent connection between the mail server and the mobile phone, which the mail server uses to push messages across as soon as they arrive.

Usually this feature is only available with Microsoft Exchange boxes using the ActiveSync protocol, which is even spoken by some Nokia phones - they just call it ‘Mail for Exchange’.

As I am running Linux, you surely won’t find a Microsoft server anywhere near me. But that doesn’t mean we can’t do such fancy stuff - we just do it in a different way: a group of developers ported the ActiveSync functionality to PHP and named it z-push.

As the site itself looked promising, I decided to give it a try. After five minutes the whole setup was done and working. Simple and effective.


February 6, 2009 :: Vorarlberg, Austria  

February 5, 2009

Dieter Plaetinck

Fosdem 2009

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

I'm particularly interested in:

February 5, 2009 :: Belgium  


Xorg and hal... Removing my xorg.conf

As easy as the title says: emerged x11-base/xorg-server-1.5.3-r1 with the "hal" USE flag enabled, removed my /etc/X11/xorg.conf and everything works like a charm!

Direct rendering works, fonts are ok, screen resolution is perfect...

I still have to look into advanced synaptics touchpad settings, like pointer speed, scrolling and so on; the touchpad itself works out of the box, by the way.

I'm really happy that I don't have to bother with setting up xorg.conf anymore, so thank you X developers for making it possible, and thank you gentoo developers for making it available on my system and for making the transition so painless.

P.S. No, I don't use proprietary video drivers; I have an Intel card, which is probably the reason all this was so easy for me.

February 5, 2009 :: Italy  

February 4, 2009

Ciaran McCreesh

Paludis 0.34.2 Released

Paludis 0.34.2 has been released:

  • Wildcards are now allowed for --contents and --executables.
  • A hardlink-merging bug has been fixed.
  • New appareo client, for manifest generation.
   Tagged: paludis   

February 4, 2009

Jason Jones

Car Mods and Emissions

Last spring, I modified my car - and then modified it again.

The second modification entailed re-programming (or flashing, or tuning) my car's computer.  One thing I didn't realize at the time was that, evidently, standard practice for most tuners out there is to turn off the O2 sensors which monitor emissions after the catalytic converters.

This doesn't mean squat to pretty much anything, unless the state you're living in requires emissions (or smog) testing.

I went in to have my car checked out yesterday, and lo and behold, it failed because 2 of my O2 sensors were in a "not ready" state.  After a bunch of driving (trying to create "drive cycles" in vain) and googling, I found out that the tune I had installed in my car last year had, in essence, completely turned off the O2 sensors.

So, the fix was easy:

I re-tuned it with my SCT tuner, using a canned tune.  In going through the questions, one was obvious: "Turn off O2 sensors?"  I answered in the negative, and was on my way.

I then spent last night driving around with a good friend of mine, and at one point nearly missed a huge, recently deceased deer in the middle of the road.  Hehe... That was fun.

In driving around yesterday, I managed to create drive cycles for all but one sensor, which was fine, because Utah allows one sensor to be in a "not ready" state, so... I passed emissions!

Also, as soon as I got my "all clear" certificate, while in the mechanic's parking lot, I re-installed my original kick-butt tune.  Good for another 2 years.


February 4, 2009 :: Utah, USA  

Steven Oliver

The dreaded OR statement (SQL)

Since taking my current job, I spend 90% of every day writing SQL for Oracle databases, so I have written a lot of it. One thing I have come across is that the OR statement is usually the worst thing that can happen to your query.

Now, I do not fully understand how exactly this works behind the scenes (it is something I need to investigate further), but one would logically assume that

select *
where bar in ('A', 'B')

would be equivalent to

select *
where bar = 'A'
or bar = 'B'

I can tell you from experience that, while the above example is contrived, once your query starts touching tables in large quantities the difference becomes obvious: for whatever reason, the two are not treated the same. I suppose that since you are working with a query language there is no compiler to optimize your code for you, though I wish there were. Or, if by chance there is, it is apparently not smart enough to make the conversion for you.

Moving on: if an in() statement is not an option, you can sometimes replace the OR with a UNION. This only arises in limited situations, but it is still possible. For example, say we have table_foo set up like so:

p_key   f_key   alpha_1
1       2       A
2       4       B
3       5       C
4       6       D

So instead of doing

select *
from table_foo foo, table_bar bar
where foo.p_key = bar.fake_key
or foo.f_key = bar.fake_key

you can instead use a UNION

select *
from table_foo foo, table_bar bar
where foo.p_key = bar.fake_key
union
select *
from table_foo foo, table_bar bar
where foo.f_key = bar.fake_key

By arranging your code this way, you make sure the database makes good use of any indexes that have been created on the tables. This might be a bit too difficult to apply in the real world, but when you can, it is amazing how much faster the UNION returns results compared to the OR above it.
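Oracle's optimizer is its own beast, but the claim that the two forms return the same rows is easy to sanity-check. Here is a minimal sketch using Python's built-in sqlite3 (table_bar's contents are invented here just to have something to join against):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_foo (p_key INTEGER, f_key INTEGER, alpha_1 TEXT)")
conn.execute("CREATE TABLE table_bar (fake_key INTEGER)")
conn.executemany("INSERT INTO table_foo VALUES (?, ?, ?)",
                 [(1, 2, 'A'), (2, 4, 'B'), (3, 5, 'C'), (4, 6, 'D')])
conn.executemany("INSERT INTO table_bar VALUES (?)", [(2,), (3,)])

# OR form: one join with a two-sided predicate
or_rows = conn.execute("""
    SELECT foo.p_key, bar.fake_key
    FROM table_foo foo, table_bar bar
    WHERE foo.p_key = bar.fake_key OR foo.f_key = bar.fake_key
""").fetchall()

# UNION form: two single-predicate joins, merged and de-duplicated
union_rows = conn.execute("""
    SELECT foo.p_key, bar.fake_key
    FROM table_foo foo, table_bar bar
    WHERE foo.p_key = bar.fake_key
    UNION
    SELECT foo.p_key, bar.fake_key
    FROM table_foo foo, table_bar bar
    WHERE foo.f_key = bar.fake_key
""").fetchall()

print(sorted(or_rows) == sorted(union_rows))  # True: same result set
```

One caveat: UNION removes duplicate rows that a plain OR join would keep, so use UNION ALL if that distinction matters for your data.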

Enjoy the Penguins!


February 4, 2009 :: West Virginia, USA  

February 3, 2009

Aaron Mavrinac

Play Alone Soon

Thousand Parsec single player mode is almost ready! The initial goal is to release tpclient-pywx with single player mode, along with at least one server and one AI client supporting two rulesets, on Windows and Gentoo Linux. We achieved a few of the final steps last month.

First, we have the release of tpserver-cpp 0.6.0. This release includes the new Risk ruleset as well as the administration protocol, both Google Summer of Code 2008 projects. The Gentoo ebuild for tpserver-cpp now pulls in the recently-released tpadmin-cpp. We're currently working on a Windows package for the server.

Next, we have a preliminary release of daneel-ai, also a GSoC project, which implements an AI client for the Risk and RFTS rulesets in pure Python. The Gentoo ebuild installs a script in the path and the XML file necessary for single player mode, so on that platform we're good to go. We plan to have a more solid release and a package for Windows soon.

February 3, 2009

George Kargiotakis

Help needed on apache2 segfaults

Dear Internet,

I need your help!
I have a debian stable (4.0) server with apache2 (Version: 2.2.3-4+etch6) running which is hosting more than 10 different sites. The problem is that in the apache2 error log I can see a lot of segfaults. All sites though continue to work properly and nobody has ever complained about them.

Some logs:

[Tue Feb 03 18:30:36 2009] [notice] child pid 1353 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:37 2009] [notice] child pid 29343 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:37 2009] [notice] child pid 1350 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:38 2009] [notice] child pid 1349 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:38 2009] [notice] child pid 1352 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:39 2009] [notice] child pid 1354 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:41 2009] [notice] child pid 1380 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:42 2009] [notice] child pid 1378 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:42 2009] [notice] child pid 1714 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:44 2009] [notice] child pid 1715 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:44 2009] [notice] child pid 1718 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:45 2009] [notice] child pid 1720 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:45 2009] [notice] child pid 1721 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:46 2009] [notice] child pid 1723 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:47 2009] [notice] child pid 1724 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:47 2009] [notice] child pid 1725 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:49 2009] [notice] child pid 1726 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:49 2009] [notice] child pid 1728 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:50 2009] [notice] child pid 1729 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:50 2009] [notice] child pid 1730 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:51 2009] [notice] child pid 1358 exit signal Segmentation fault (11)
[Tue Feb 03 18:30:51 2009] [notice] child pid 1733 exit signal Segmentation fault (11)

In order to find out what causes the segfaults I have enabled the following option inside /etc/apache2/apache2.conf:

CoreDumpDirectory /tmp-apache/

$ ls -Fla / | grep tmp-apache
drwxrwxrwx 2 www-data www-data 4096 2009-01-31 11:01 tmp-apache/

I have changed the ulimit settings inside /etc/security/limits.conf
* soft core unlimited
* hard core unlimited

I have even added a ulimit -c unlimited setting inside /etc/init.d/apache2.
But still I get no core dumps inside /tmp-apache/ from the segfaulting children.
If I manually kill -11 a child process, I can see a core file inside /tmp-apache/.

I have only seen one or two core dumps generated by apache, and using gdb I could see that they both “blamed” a function of /usr/lib/apache2/modules/ In my quest to find which site/code causes the segfaults I have recompiled apache2 to enable mod_whatkilledus. But no core dump has been created in /tmp-apache/ for more than a week, even though the segfaults keep happening.

I have reduced my modules, removed mod_python, mod_perl, etc., and still these segfaults keep occurring, but no core dumps. I suspect that the only time I got a core was when a parent and not a child process segfaulted. I don’t think my apache2 children dump core when they segfault.

Is there anything I could have done that I haven’t? Is there a way to force apache2 children to dump core, or any other way to determine what causes these segfaults? All this without, of course, closing down the sites one by one to see when the segfaults stop…

Thanks in advance to anyone that replies!

P.S. The blog’s database is playing some tricks… I hope it’s ok now and the post is fully published.

February 3, 2009 :: Greece  


Processing your offline gmail in Python

In a discussion in my local Linux group, I was encouraged to try out a new feature of Google's webmail ('Gmail') that allows you to read your email offline.

Inside Gmail, you can click on Settings then Labs then Enable Offline.

This will then prompt you to install a Firefox extension called Google Gears. The one provided this way is only for 32-bit platforms; for 64-bit Linux, I found a third-party 64-bit build on the blog of Niels Peen that works on my 64-bit Linux laptop.

I had a nosy in my .mozilla directory and found that Google stores your email in a SQLite database. So I thought I would have a play with it in Python.

To follow along at home, set up Offline Gmail, then type the following commands into the Python shell, which you can open with python (or, if you are lucky, ipython).

To start with, you need to set the filepath of the database, have a little ls around and then swap the two instances of something below with whatever it is on your system.
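If you'd rather not hunt for the directory by hand, Python can list the candidates for you (the path layout below is just the one from my machine, so treat it as a guess):

```python
import glob
import os

# Look for Google Gears data files under the default Firefox profile.
pattern = os.path.expanduser(
    "~/.mozilla/firefox/*.default/Google Gears for Firefox/*")
for candidate in glob.glob(pattern):
    print(candidate)
```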

PATH = ".mozilla/firefox/something.default/Google Gears for Firefox/" + \

Now we can connect to the database:

import sqlite3
conn = sqlite3.connect(PATH)

Let’s start by pulling out all the available tables:

tables = conn.execute('SELECT name FROM sqlite_master WHERE type = "table"')
for table in tables:
    print table

Now let’s read the first message in the first thread:

messages = conn.execute('SELECT * FROM messagesFT_content')
mymessage = messages.fetchone()
for line in mymessage:
    print line

Now let’s make a dictionary of all your contacts:

contacts = {}
for contact in conn.execute('SELECT * FROM Contacts'):
    contacts[contact[2]] = contact[3]

print contacts

Now let’s make a dictionary of your attachments, grouped by filetype:

attachments = {}
for attachment in conn.execute('SELECT * FROM Attachments'):
    if attachment[5] in attachments:
        attachments[attachment[5]].append(attachment[3])
    else:
        attachments[attachment[5]] = [attachment[3]]

print attachments

There are lots of useful possibilities. Of course there are other easier ways to access your Gmail in Python, but this approach is useful when you might need to do something offline, e.g. on a train or plane.

If you come up with any interesting uses or code with this, please do leave a comment letting everyone know about it.

Discuss this post - Leave a comment

February 3, 2009 :: West Midlands, England  

Roy Marples

February 2, 2009

Jason Jones

LVM2 Auto-Reconnect

I run a server in my home.  It's nothing more than a hobby-type server that runs gentoo Linux, a couple of web-sites, my myth-tv installation for my home-theater, and all the family files.  I also have about 150 compressed movies on there, all of which can certainly take up a lot of hard drive space.

I started this project back in 2004, and my little server just keeps chugging along.  It's gone through 2 motherboards, 2 CPUs, a root hard drive failure (which was 99% recovered thanks to the beauty of ReiserFS's fsck implementation), and about 4 sets of hard drives.

The reason for so many hard drives is quite simple.  I keep buying movies.  As the need for additional hard drive space goes upward, I simply buy another hard drive and (using the magic of LVM2), simply move all content from one hard drive, remove the hard drive, insert the bigger one, assign it the space, and voila! my movies directory magically has more space!

Anyway...  Because this server's gentoo installation has undergone some serious abuse over the course of 4 years, it was getting seriously foobed.  This isn't a server which just sits and runs.  Every month or so, I get the itch to try something new and funky on it, and then sometimes I remove the new package, sometimes not.  I've switched between unstable and stable for practically every package on there at least 5 times.

The wear and tear on this poor server was getting evident.  It only recognized one of the two CPU cores (and yes, I checked and re-checked the kernel), mythtv was forgetting to record stuff and crashing intermittently, and more and more stuff just didn't work quite right.

I think the original kernel installed on the box 4 years ago was around 2.6.8.  Anyway...  I digress.

To make this already-too-long story a bit shorter, suffice it to say that it was well past time to do a complete re-install of gentoo.

So, yesterday, I undertook the challenge and built my new server Linux installation, on my workstation, and after verifying everything, I powered the old beast down, removed the root hard drive, and put the new one in.

Now, my box has had from 4 to 6 hard drives in it through the years and at the time, there were 5.  Four of these hard drives were driven by LVM2 and ReiserFS.  Their destiny in life is to provide me a place to store about a terabyte of media.  The fifth was the root.

So, I completely assumed that removing the root hard drive would kill all the data needed to restore the LVM partitions containing my media.  No problem.  I backed it all up beforehand.

So, with the new hard drive in, I powered it up, fixed a few things I forgot, got it to boot, and all was good.

I powered it down to do some hardware maintenance, and just happened to be watching the shut-down scripts.  I was about to head back to the server when I saw the system unmounting the LVM2 partitions which I thought had been deleted.

I had to do a double-take.

So, I powered it up, went to /dev, and sure enough...  All the nodes for my media were there waiting to be mounted to their directories.  (LVM2 stores its metadata on the physical volumes themselves, not on the root disk, which is why the volume group survived the swap.)

I couldn't believe it!  Ever since the idea of a Linux-driven media server came to mind, I've depended on LVM2 to help keep things in check.  I've loved it.  It's done exactly what it is supposed to, and hasn't caused me any grief at all.  This was the icing on the cake.

So, I quickly mounted my "movies" node to a directory, and sure enough, a full listing of all my movies was glowing before my smiling face.  Although I was ready to restore it all, and had completely intended on it disappearing, all my original media on my server's LVM2 partitions were already good to go.

Thanks to all the intelligent developers working on LVM2, making my life a little bit better.  Keep it up.

February 2, 2009 :: Utah, USA  

Lars Strojny

A Tech Book a Day

When it comes to reading, I come from a different corner: I read a lot of philosophical books by authors like Adorno, Marcuse and Marx before I really started reading tech books. Those books are hard to read; the works of the Frankfurt School in particular are notorious for a specific language that is sometimes hard to decipher. Tech books are exactly the opposite: while there are entertaining technical writers with a good style, most use a pretty plain and dry vocabulary, which is a good thing. The thing is, you don’t really need to read tech books.

Novels and philosophical works (and humanistic works more generally) are much harder. They often carry their meaning in metaphors you don’t get on a single pass; you have to read a sentence more than once to get it. But when you read a book about design patterns, your favourite book on PHP, or something similarly non-algorithmic, you can just scan the book for news, read and understand the code samples, and go on, page by page. Scan through the page, take notes, but note only what’s new to you. If it is a reference, mark the important parts with stickers. Ignore the rest; remember, don’t read, just scan.

Additionally, technical books tend to have a foreword, plus a foreword to the second edition, plus a foreword to the third edition, plus a lot of testimonials attesting how good the book is (hey, I already purchased it, don’t sell it to me again). So the real content starts at page 40. Excluding blank pages, the book that was 400 pages long might shrink to 300 pages. At 30 seconds per page, you can read the book in two and a half hours (300 pages times 30 seconds is 9,000 seconds), and 30 seconds per page is a pessimistic estimate. With this technique it is possible to read a technical book in a day without stress, and totally relaxed in a week. That means you could read 52 tech books a year. I’m lame; I only scanned around 20 last year.

February 2, 2009

February 1, 2009

Brian Carper

Stylus DIY, hand health

The stylus that comes with a Nintendo DS is a very mild form of hand torture. Not sure whose hands those were designed for, but not mine. In googling for a good replacement, I chanced upon a blog post which suggests finding a nice big ballpoint pen and jamming a DS stylus inside so just the tip sticks out. This works amazingly well. It's not as portable, but I will make that sacrifice to prevent being crippled.

I am in fact always a bit worried about preserving the health of my hands. I have no hard data to support this, but I suspect my generation may have major hand-related problems in the coming decades. What with computer keyboards and tiny cell-phone and PDA keys and lots of other techy things. Many of us use our hands to communicate almost as much as our voices. Until we have Star Trek voice-recognition software, this will be a problem.

I started experiencing a lot of aches and pains in my hands and wrists a decade or so ago, and I attributed it to computer use. Since I started paying more attention, things are better. I maintain a very comfortable typing position for my hands. I have a nice big comfortable mouse. And so on. My hands don't hurt any longer nowadays, which is nice. If I become unable to type someday, I'm completely screwed. How can I work as a programmer if I can't input text into a computer? And I won't be able to draw or do origami or play video games or do many other things I enjoy.

February 1, 2009 :: Pennsylvania, USA  

Ciaran McCreesh

Pakuma Choroka K1: Great Laptop Bag or the Greatest Laptop Bag?

I got a Pakuma Choroka K1 for Christmas, to replace my rapidly falling apart generic backpack. I must say, I am extremely impressed.

First up, space. There’s plenty of room for a beast of a laptop, the power supply and a mouse. And a load of paper. And several large books. And a water bottle. And the shopping. And a cat or small dog. No problems there.

Next up, the organisation of said space. Lots of compartments and pockets. Possibly slightly overdoing things, but fortunately the little compartments are inside big compartments, so there’s no wasted space if you decide not to use them.

The foam-cushioned laptop compartment (supposedly using NASA-invented memory foam, and if you believe that I’ve got a bridge for sale) appears to be effective.

Then there’s the strap. My old backpack’s straps were falling off and falling apart thanks to shoddy stitching and cheap materials. No danger of this here.

The clever bits… Two stand out. First, the nifty little hole to allow headphones to go into the bag. Neat. Second, the slightly-shiny light grey inner lining. This makes it much easier to see things inside the bag.

Colour-wise, it’s inoffensive, which is all I care about.

A couple of things I’d possibly consider changing: The big cushioned laptop-holding section does waste a bit of space if there’s no laptop in the bag. Making it removable would be rather nifty, although possibly difficult to do without reducing the strength of the bag. And a small handle on the top of the bag wouldn’t go amiss either — the over-the-shoulder strap is sometimes overkill.

All in all, a rather good buy, especially if someone else is paying for it.

   Tagged: bag, hardware, laptop, pakuma   

February 1, 2009

Patrick Nagel

KDE 4.2.0 on my netbook

It’s just great! Update from 4.1.4 went smoothly, thanks to Gentoo’s KDE maintainers, great work!

Screenshot: KDE 4.2.0 on my netbook

(the left and bottom panels are usually set to ‘auto-hide’, and the right one (which currently only contains the System Tray plasmoid) can be covered by windows, so I have the full screen available for applications)

KDE 4.2.0 brought the following features that I missed a lot since KDE 3.5:

  • The Task Manager plasmoid (the taskbar that shows a button
    for each running program) can finally have multiple rows, buttons can be grouped
  • The Digital Clock plasmoid can show other timezones on hovering with the mouse
  • Global keyboard shortcuts work
  • Some dialogue windows have been resized to fit on smaller displays

… and I’m still exploring :)

Many thanks to all KDE developers for this great piece of Free Software!

February 1, 2009 :: Shanghai, China  

Brian Carper

Remote webcam viewing: Ubuntu 3, Gentoo 0

One could argue that boringness is a good attribute for a distro. Gentoo has stayed out of my way for a good long time. I update world once a week and I haven't had a package fail to build or fail to work in a while.

Until a few days ago, that is. I wanted to view video from my laptop's built-in webcam on my desktop, over my local network. My laptop is running Ubuntu, and my desktop is running Gentoo. One point in favor of Ubuntu: my webcam works without any effort on my part. It works right on a fresh Ubuntu install off the install CD. I never did get any webcam working on any Gentoo install whenever I've tried over the years. Maybe the situation has rectified itself by now, but I don't anticipate trying.

Unfortunately, viewing my laptop's feed on my desktop also failed to work. First I tried an X-forwarding SSH tunnel, and running xawtv -remote, but I got all kinds of nasty errors along the lines of

X Error of failed request:  BadWindow (invalid Window parameter)
  Major opcode of failed request:  2 (X_ChangeWindowAttributes)
  Resource id in failed request:  0x1a5
  Serial number of failed request:  55
  Current serial number in output stream:  56

Extensive Googling turned up nothing on this, which isn't surprising given how uninformative an error message it is. Maybe some extension in X needed to be built to get xawtv to work. Maybe it's a version incompatibility. Maybe some hardware thing with my video card driver. Who knows. On the other hand, when I tried to view my laptop's feed on another laptop running Ubuntu (actually Kubuntu), it worked fine. Albeit incredibly slowly.

Then I noticed Ekiga comes installed on Ubuntu by default, so I figured I'd try that, in spite of it being a bit overkill. But installing Ekiga on Gentoo died with a build error, because I needed to build pwlib with ldap support. Ekiga between the two Ubuntu laptops worked fine without any effort too, so at that point I gave up on getting it working in Gentoo, since it was no longer worth it.

No big deal, but slightly annoying. Probably could've gotten it to work in Gentoo eventually, but I have less and less patience for fiddling with installation nowadays. This is probably one of the benefits of the sort of mono-culture Ubuntu is turning into. Everyone using Ubuntu has the same basic crap installed. Whereas there's probably no one in the world with a Gentoo install quite like mine.

But Gentoo is still working well for me overall.

February 1, 2009 :: Pennsylvania, USA  

January 31, 2009

Martin Matusiak

that thing about ruby

Ruby is a great language, but one thing it needs is process. And what seems to suffer most from this is documentation.

  1. Ruby’s not ready
  2. Ruby 1.9.1 released

January 31, 2009 :: Utrecht, Netherlands  

Steven Oliver


Do you ever get the feeling that, despite your best efforts, your hard work, and even good code, you’re just not wanted?

Yeah, I feel your pain. I know exactly how you feel. It’s not a good feeling, but at the same time, perhaps you can contribute to another project instead. Talent is easy to transfer from one project to another.

Enjoy the Penguins!


January 31, 2009 :: West Virginia, USA  

Brian Carper


How do you write a parser in a functional language like Clojure? (That's a rhetorical question.) There are parser libraries for Haskell I could use as reference but they're still a bit over my head at this point.

The original parser in Perl uses global hashes and regex-mangles strings directly. I could actually duplicate this exactly in Clojure, because Clojure isn't purely functional. But I'm trying to do it in a more functional way, and so far it's working out OK.
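That style is easy to caricature in a few lines of Python. This toy handles only single-asterisk emphasis, but it shows the "just regex-replace the string" approach (my own illustration, not's actual rules):

```python
import re

def emphasis(text):
    # Replace *word or phrase* with <em>word or phrase</em>, requiring
    # non-space characters just inside the asterisks (one of many possible choices).
    return re.sub(r'\*(\S(?:.*?\S)?)\*', r'<em>\1</em>', text)

print(emphasis("this is *important* stuff"))  # this is <em>important</em> stuff
```

Every implementation draws these regex boundaries slightly differently, which is exactly where the divergent behavior between Markdown implementations comes from.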

One of the bad things about Markdown is that, perhaps because it was originally implemented in Perl as a bunch of regex replacements on a string rather than as a real parser with a proper grammar, all of the implementations of Markdown in other languages give slightly different results. So much so that someone wrote a website just to compare different implementations of Markdown against each other. Writing a parser in Clojure, I now face the question of which behavior to duplicate. Some things Markdown does are less than ideal, but I think I have to err on the side of replicating the original. One implementation, Pandoc, claims to be "more accurate" than, but Pandoc seems to purposefully break from things that are laid out explicitly in the specification, which is bad.

January 31, 2009 :: Pennsylvania, USA  

Ciaran McCreesh

Programming - Principles and Practice using C++

Programming — Principles and Practice Using C++ is the new book by Bjarne Stroustrup, the daddy of C++. It’s an introduction to programming rather than an advanced book; I’ve been holding off writing up my impressions of it because I’m not entirely sure what to say.

The overriding theme of this book seems to be “there are lots of complications, special cases and obscure things”. This is of course true, and it’s a refreshing change from most introductory books that go out of their way to construct highly contrived examples that conveniently ignore any obscurity. But I suspect it goes too far — pretty much every example is twice as long as it probably should be. There’s so much focus on dealing with complexities that the underlying “what’s going on?” is lost.

Partly this is down to C++. A language designed to handle real world, large scale problems and provide for maintainability over decades isn’t going to be the most elegant. On the other hand, purely teaching languages that dismiss the real world entirely are of no practical use. The question is whether C++ as a first language is a sensible idea, and I’m not in the least bit convinced that it is.

Partly, though, this is down to the choice of projects. An example: two chapters are devoted to writing a calculator program. These chapters cover lexers, parsers, grammars and error recovery. This isn’t one of those cop-out calculator programs where syntax is carefully selected to hide any kind of mess, either — it almost looks like the book is going to end up implementing a compiler… Unfortunately, there’s nothing in the final program that really needs any of this complexity; a simple “tokenise into a list, then replace all the multiplications with their result, then replace all the divisions with their result and so on” would work just as well for the requirements, and wouldn’t have most of the mess.

The scope of the book is impressive, though. It doesn’t gloss over classes, templates, pointers, exceptions or even dealing with code written in C. It’s extremely comprehensive, even in places where it probably shouldn’t be.

Finally, a note on writing style. The word ‘basically’ appears on average once per page, and sometimes three or four times in a single paragraph. This gets very annoying very quickly. Stroustrup’s other books don’t suffer from this.

I suppose my conclusion is: if you have to learn C++ as a first language, this is the book to use. If you have a choice, though, learn one of the monkey languages first, and then pick up The C++ Programming Language and The C++ Standard Library.

   Tagged: books, c++, programming   

January 31, 2009

January 30, 2009

Christoph Bauer

Enough joking

As the title says, I’m getting serious about my server boxes - no more software raid, no more running all those websites under the same user, no more kidding. The new mailserver is already in production, and the second IBM x345 is next.

As mentioned in the post dealing with the new server, the setup was no big deal since it’s good hardware. No quirks, no hacking - just straightforward.


January 30, 2009 :: Vorarlberg, Austria  

Jürgen Geuter

My local overlay

My local portage overlay is not that local anymore: I just pushed it to gitorious, so there might be some packages in there for you to check out, like qgtkstyle (so Qt apps look sane), a patched version of cairo, a newer django version, some clutter stuff and bumped sqlalchemy packages. Have fun.

January 30, 2009 :: Germany  

Daniel Robbins

The Camel has Landed!

Just wanted to let everyone know that Perl 5.10 is now enabled in the Funtoo Portage tree, and the Funtoo stage builds are now working with Perl 5.10.

Also, I’ve decided to return as official maintainer of the Funtoo Portage tree. I’m glad to be back!

If you’d like to submit patches or improvements, please email them to me. Patches inline in the email work best.

Best Regards,


January 30, 2009

Kyle Brantley

IPv6... months later

So I wrote about IPv6 a few months back. Tunneling it over IPv4, general networking with it, and even ping6'ing Google.

Been using it ever since.

Whoa now, wait a minute! People use IPv6?

For the most part, I set it up and poked with it for vanity purposes. "Hey look at me! I'm speaking a protocol that your router has no idea what to do with!" I had little actual use for it. For the most part I never had any real problems, but no real benefit either.

But it's been a few months, and I recently had my "IPv6 Epiphany." So here, have some random bits of info that I've picked up while playing with it.

The Problems
1. IPv6-in-IPv4 tunnels aren't really firewall friendly, nor are they the easiest thing to configure. I wound up whitelisting my home router's IPv4 address on my server, exempting it from all other iptables rules. This fixed a problem that cropped up when I rebooted my server, resetting my firewall rules to their saved state, and broke my ability to SSH into my server from home without specifying -4. Further, configuring a tunnel with iproute2 is pretty easy. Configuring a tunnel from CentOS to Debian using the "proper system-specific methods" really isn't. Debian I got working. CentOS I didn't, and wound up writing a pseudo-service to manage the tunnels and routes. All things considered, I probably would have wound up doing the same thing for my Debian router if it was as overloaded as my server in terms of IPv6 config.
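For what it's worth, the iproute2 side really is short. Here's a sketch using documentation-range placeholder addresses (substitute your own tunnel endpoints; the iptables line is a tighter variant of the whitelist described above, matching only 6in4 traffic rather than exempting the peer entirely):

```shell
# 198.51.100.7 / 203.0.113.2 / 2001:db8::/48 are placeholders, not real endpoints.
# Exempt the tunnel peer from the rest of the ruleset (6in4 is IP protocol 41):
iptables -I INPUT -s 198.51.100.7 -p 41 -j ACCEPT

# Bring up the 6in4 tunnel with iproute2:
ip tunnel add tun6 mode sit remote 198.51.100.7 local 203.0.113.2 ttl 64
ip link set tun6 up
ip -6 addr add 2001:db8:0:1::2/64 dev tun6
ip -6 route add ::/0 dev tun6
```

That's the whole thing - the distro-specific init-script plumbing is what makes it painful, not the tunnel itself.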

Plus you have the increased latency. As a whole, this hasn't been a problem for me.

2. Not everyone who runs IPv6 maintains their v6 stack nearly as well as they do their v4 stack. This has proven to be a problem. For example, I was looking into H.323, and tried to open up the Open H.323 website.

The problem lies in the DNS. The OpenH323 project had a v6 DNS server. This server did not respond to queries coming over the v6 transport, nicely breaking DNS resolution for me. When I went poking with dig, it responded happily over v4. (It seems that their DNS is now broken over both v4 and v6, so perhaps the failure was coming anyway. But the point stands: when your site works, you're content. You're not going to spend time checking that it works over both v4 and v6. This leads to problems.)
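dig makes this kind of half-maintained setup easy to demonstrate: its -4 and -6 switches force the transport, so you can ask the same server the same question both ways (hostnames here are placeholders):

```shell
# Same query over each transport; if only the first one answers,
# the server's v6 stack is being neglected:
dig -4 @ns1.example.org www.example.org AAAA +short
dig -6 @ns1.example.org www.example.org AAAA +short
```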

3. Application support.

From a sysadmin standpoint, nearly every computer out there has a DHCP client. Wait, sorry. Nearly every computer out there has a DHCPv4 client. This poses a problem when it comes to v6 connectivity. This is one area where Vista is quite a bit ahead of the *nix - they ship a DHCPv6 client and full stateless v6 autoconfig support by default. Their stateless autoconfig leaves a bit to be desired, as it ignores RDNSS data in the router advertisements, but they have documented how to get full DNS resolution on a stateless-only interface. It's pretty simple.

Linux, at a minimum, has a stateful DHCP client kicking around, but it isn't installed or even mentioned in most distro networking guides. It's not even available in several distros. The kernel has great stateless autoconfig, but RDNSS isn't exactly a kernel space setting either. There is a user space tool around that watches for the router adverts and adjusts /etc/resolv.conf as needed, but it's even less known than the stateful DHCP client.

There are also a couple really popular open source programs out there that don't speak v6 at all. There are two that bug me to this day: MySQL and Asterisk. MySQL is really not too huge of an issue right now, but to my knowledge they aren't even working on it. Maybe Drizzle could?

Asterisk is really the bigger issue. One of the largest roadblocks to getting VoIP with SIP to play nicely is NAT. To put it simply, it doesn't work with NAT. I can see (properly done) VoIP being a huge, monumental driver of support and a fantastic reason to get v6 working. Nearly the entire point of deploying v6 now is massively increased connectivity (with v4 connectivity dropping drastically in the near future). The current v4 (NAT) landscape is incredibly inhibiting to SIP, and while you can argue the relative merits of SIP versus any other VoIP protocol, the value of having full connectivity from any one device to any other device really can't be overstated.

(A note to you "but I like NAT because it's a great firewall!" people: first off, no it isn't. Second, there is a very simple rule here that both mirrors what you "get" with NAT and is arguably more secure than NAT. It happens to be called "default deny." From there, if you want to support VoIP, you can add one single rule and have great VoIP support. Have a /48 that houses both users and servers? Great - subnetting is your friend. Just open :80 to the server subnet.)
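A sketch of that "default deny plus one rule" idea with ip6tables, on a router forwarding for the subnet. The documentation prefix and the SIP port are my examples, not anything from this post:

```shell
ip6tables -P FORWARD DROP                                 # default deny, the NAT-equivalent
ip6tables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -p icmpv6 -j ACCEPT                  # v6 relies on ICMPv6 (ND, PMTUD)
# The single rule for VoIP: inbound SIP to the subnet that houses the phones
ip6tables -A FORWARD -d 2001:db8:0:10::/64 -p udp --dport 5060 -j ACCEPT
```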

4. IPSEC. It still sucks to configure, and it's only going to become more important with enhanced device-to-device communication. It isn't supported by any mobile phone I'm aware of. Have a mobile phone that can connect to a SIP server over wifi? Great! Can it do IPSEC? Nope. Sure, TLS exists for a reason, but full-blown IPSEC has numerous advantages over TLS, and it really isn't supported anywhere but the router and desktop. (Plus it still sucks to configure.)

Does your (insert handheld gaming device here) support IPSEC? No? Well sure I wasn't expecting it to, but it'll be interesting to see how this plays out over the next couple years.

5. Reverse DNS. Do I really need to say just how much configuring reverse DNS sucks? No? Good. Is there a better solution? Probably not. I'm just glad that dig and ipv6calc are of use here, so I don't have to manually type out every full-length DNS record.

The Good
1. Pretty much every single application I've used on Linux supports it very well. Everything from HTTP to IMAP to Kerberos to SSH is operating flawlessly for me over v6. Vista has v6 CIFS, rdesktop, and RPC. I could make a full list here of what is supported in terms of services and clients across different OSs, but really, the list of what isn't properly supported is shorter at this point. And yes, for the most part, that applies to Windows too.

2. It's supported well by pretty much every modern OS. Vista just works with it. Linux just works with it.
There are some "gotchas" with both, but they'll be resolved over time as more and more sysadmins come to use it. Vista actually has a default 6to4 tunnel built in that starts up if you have a public v4 address. Even if your ISP doesn't support v6 (not that any of them do), if you can plug your Vista box straight into the internet, you'll get v6 without any configuration or hassle.

3. NAT really sucks. The simple connectivity provided by v6 rocks. Now this leads back to how I started this entire post. First, a bit of background.

I run CentOS on my server. SSH has all password-based authentication disabled, and only supports Kerberos (GSSAPI) and pubkey auth.

I have a few RPMs that I need to rebuild to support a few extra things (namely postfix to support mysql, and kerberos to support an LDAP backend). I'd rather not keep all of the needed -devel packages installed on my server, and I'd also prefer to keep gcc and the rest of the needed buildutils not installed. The obvious solution is to rebuild them on another CentOS box, create a mini RPM repo, and then just use yum to install them. The process is simple enough.
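The repo step itself is only a couple of commands. A sketch with made-up package and host names (createrepo and the .repo stanza are standard yum machinery, not anything specific from my setup):

```shell
# On the build box: rebuild the source RPM with your changes, then index the results
rpmbuild --rebuild postfix-x.y.z.src.rpm   # package name is illustrative
createrepo /srv/rpms                       # generate the repo metadata

# On the server: point yum at the mini repo
cat > /etc/yum.repos.d/rebuilds.repo <<'EOF'
[rebuilds]
name=Locally rebuilt RPMs
baseurl=http://buildbox.example.com/rpms
gpgcheck=0
EOF
yum install postfix
```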

The trick comes in actually getting those rebuilt RPMs to my server. This is also where v6 happens to make my life incredibly easy.

My CentOS "build box" is a VM running on my Vista box. This is really the best solution for me. I don't need to have a dedicated CentOS box here, and as a result of that I can click "turn off" and forget about it entirely until I need it again. This is probably the only really good use of VMs that I've found so far, but I digress.

As mentioned, you can't log in to my server over SSH without an SSH key or Kerberos auth. This means that I can't just scp them up to my server without either copying my existing key(s) over or generating new keys and adding them to the server.

It was at this point I realized that my v6 setup meant that my VMs had public v6 addresses. And then a light clicked on.

I fired up rsync on the VM, copied the v6 address, and then from the server used rsync to move them over.

And it just worked. No port forwarding. No key configuration. No advanced auth config for the VM. I could have used apache+wget just as easily. I was able to start a service (on a VM that sits on a host behind NAT) and use it without any hassle, without any VPN trickery - it just worked.
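The rsync side of that can be as small as a two-entry config. A sketch with a made-up module name and a documentation-range v6 address (rsync accepts bracketed IPv6 literals in daemon-mode URLs):

```shell
# On the VM: serve the rebuilt RPMs read-only via the rsync daemon
cat > /etc/rsyncd.conf <<'EOF'
[rpms]
    path = /home/build/rpms
    read only = yes
EOF
rsync --daemon

# On the server: pull straight over the VM's public v6 address - no forwarding needed
rsync -av 'rsync://[2001:db8:0:1::5]/rpms/' /srv/rpms/
```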

If you compare the effort it would take to set up v6 on your home network and an "external" network, and compare that to the port forwarding, NAT translation/incompatibility, "hey, this port is already in use by another NAT'd device, guess that means we get to start using extensive proxies or odd ports" mess that may be involved in something as simple as just getting host A to talk to host B...

... I think v6 comes out ahead in terms of what you get and the time it takes to make it work.

January 30, 2009 :: Utah, USA  

Dirk R. Gently

Reviving a Power Mac G4 with Ubuntu Server


I had been considering building my own server for a home network and decided to buy an old garage Power Mac G4 400. It’s a good computer and will definitely work great as a server, so I decided to install Ubuntu Server on it. I’m normally a Gentoo user, but being the adventurer that I am, I decided to try something new.

Ubuntu doesn’t officially support PowerPC anymore (no documentation or installation CDs), but the community does still produce installation CDs.


Processor  - G4 400MHz
RAM        - 512 MB
Videocard  - Rage 128 Pro, AGP 4xsl
Hard Drive - 10.2 GB Quantum Fireball LM
Network    - Built-in Sun GEM Gigabit Ethernet
           - TRENDnet TEG-PCITXR Gigabit Ethernet - uses Realtek 8169 chipset

The best place to begin with an old computer is to test the hardware. Apple has done a good thing and made their PowerPC Hardware Test CDs available for download. You’ll need Mac OS X to burn the CD dmg images, though; I’ve tried various Windows (MagicISO) and Linux utilities (dmg2iso, dmg2img, acetoneiso2) and none of them work.

I’m building a server to use as a firewall, so all the hardware is there except an additional network card, which will be needed to route to another computer. Here’s a good list of Power Mac G4 network cards that work in OS X; check whether there is a Linux driver for them. The card listed above has one.

Update Firmware

The firmware will need to be updated to the most recent version available. You can check this by booting into Open Firmware (Apple + Option + O + F at boot), looking at the OF version at the top, and comparing it to the newest on Apple’s website.

This firmware update requires Mac OS 9.1; luckily I have an old iBook 9.0 install disk that installed fine. The old software update panel doesn’t work any more, but the 9.1 update can be downloaded directly. I downloaded the files onto my Linux desktop and burned them to disk:

mkisofs -o PowerMacG4-Updates.iso G4_FW_Update_4.2.8.smi.bin
cdrecord -v -dao PowerMacG4-Updates.iso

Reset NVRAM, PRAM, Clock

It’s a real good idea to reset the NVRAM, PRAM and Clock in case any values are set incorrectly:

  1. Remove or disconnect the memory battery. Leave the battery disconnected for 5-10 minutes.
  2. Reinstall or reconnect the battery.
  3. Depress the CUDA (aka PMU) button for 5 seconds with a non-metallic (plastic, wood, etc.) device.

Clock Set, Optional Password

Boot into Open Firmware again and set the clock (24-hour time), substituting numbers for the placeholders - for example, "0 30 14 30 1 2009" sets 14:30:00 on 30 January 2009:

decimal dev rtc sec min hour day month year set-time

Optionally you can add security so no one can tamper with your Open Firmware settings, and add protection against booting directly from disk, CD, or netboot.

Linux StartCD

I used Linux to download and burn the install CD, Ubuntu CD’s can be found here.

And burned them with:

cdrecord -v -dao name.iso

The Power Mac G4 Sawtooth Open Firmware only has rudimentary support for Linux and cannot boot Linux CDs by holding down C or Option. Instead you will need to direct OF to the Linux install CD’s yaboot file:

boot cd:,\install\yaboot

Select Kernel and Options

The Ubuntu installer will now ask which kernel to load and will mention a few options that can be passed to it. For most people the default install-ppc will do - use -smp for dual-CPU systems. I decided on expert-powerpc.

For reference, I followed the Ubuntu Server Guide and the slightly aged Ubuntu PowerPC Guide for the PowerPC-related parts.

Switch to Console for a Couple Tasks

When the installer begins, a couple of tasks may need to be done. First, if you didn’t use the Apple Hardware Test disk, check the hard disk now for bad blocks. Also use the console to add the ide-scsi device to the kernel; the Debian installer fails to recognize it. Get to the second console by pressing Ctrl + Alt + F2.

Check for Damaged Blocks on Drive(s):

Bad blocks can cause serious problems running software. If you discover a bad block it will be marked and not used but be warned when drives begin to get bad blocks the drive is almost always failing.

mac-fdisk -l            # list the partitions
mke2fs -j -c /dev/sda   # create an ext3 filesystem, checking for bad blocks (-c)

DVD/CD-ROM Drive Not Detected

On this computer, the installer failed to load the driver needed for the DVD/CD-ROM to work (go ahead and load it - it won’t hurt even if you don’t need it):

modprobe ide-scsi

Return to the install by pressing Ctrl + Alt + F1.

Time to Build

Note: Older CD-ROM drives have trouble being recognized on a regular basis and have bad, slow error correction. You may have to reload the CD multiple times. If the installer gives you a lot of trouble, I’d recommend the Gentoo Minimal Install CD, which only needs to boot correctly (use “gentoo docache”) - everything else will be done from the hard drive.

Basically you just go step by step. Select your language, and in keyboards select “macintosh”. “Detect and Mount CD-ROM” should now work, then “Load debconf…” and then “Load installer components from CD”. I did this quickly after the “Detect and Mount…” option, because at one point the installer forgot about the CD.

In “…Installer Components” the only option I chose was “mirror select”, but it’s buggy and didn’t work for me. You can find the available mirrors online; you have to enter the mirror host without any subdirectories, and then in the next dialog enter the subdirectories (e.g. /pub/ubuntu-releases/). I ended up choosing the default UK mirror. The mirror can later be changed in /etc/apt/sources.list.

Some files will need to be downloaded for the install to complete, so set up the network.

When you get to partitioning, choose the scheme that’s right for you. I decided on LVM with encryption. This too has a bug: I got a dialog that said “No NewWorld boot partition was found…”. Yaboot (the Mac bootloader) requires this partition to boot, but as I said, it’s a bug and you can ignore it. When asked “Go back to the menu and resume partitioning?”, select “No” and write the partition table.

The rest should be pretty self-explanatory: configure the package manager, users… I opted to have a root account because I know “rm -f /” is bad. ;) Install the software you need. The Ubuntu Server Guide details plenty of options: a DNS server, firewall, web server… I installed OpenSSH server because it’s easier to have just one monitor on my desk, LAMP to use apache for webadmin tasks (OSSEC-HID, snort), and DNS Server to set up a local LAN.

Now install the yaboot bootloader (skip LTSP), and that’s all you need to do. End the installation and it’ll ask what type of clock you want; I set the clock to UTC.

Reboot the system and see your new Ubuntu server.

A Few Handy Ubuntu Commands

The meta packages that you installed with the installer can be manipulated with tasksel.

tasksel --list-tasks
tasksel --task-packages openssh-server   # list packages installed with openssh
tasksel remove openssh-server
tasksel install openssh-server
do-release-upgrade  # upgrade to each new Ubuntu release
apt-get install package    # installs package and dependencies
apt-get remove package     # removes a package
apt-get autoremove         # removes unneeded dependencies
apt-get update             # refreshes the package lists from the repositories
apt-get upgrade            # updates all packages
apt-get clean && apt-get autoclean # cleans apt-get caches and package downloads
apt-cache search package   # searches package names and descriptions

dpkg -l                # lists all installed packages
     -L package        # lists package's files
     -S /etc/host.conf # tells what package the file belongs to


change the console font in /etc/default/console-setup
Debian’s bashrc tanks - get a better bashrc
vim-lite? wtf?

Good luck with your new OS!



January 30, 2009 :: WI, USA  

January 29, 2009

Roy Marples

Happy Birthday to Me

36 years young today!

January 29, 2009

Johannes Gilger

git: full-length side-by-side diffs

Got this one from the git ML (which I read via gmane, but that’s a different post):
I’m not very good at reviewing patches, especially not if it’s something like JavaScript. git at least colorizes its diffs, which makes things somewhat better. But as soon as you have a big file which is being patched in multiple, disconnected places (different “hunks” in git terminology), so that the context between the hunks is missing, it gets too messy IMHO.

git-difftool in action

Fortunately there is an easy solution: In git there is contrib/difftool, which makes it easy to display git diffs side-by-side with a viewer of your choice. Of course that meant vimdiff for me. To see all the lines of a file (and not just the changed + a little context) I call git-difftool like this:
git-difftool --tool=vimdiff -U99999

Now you can alias that command and use the normal rev-parse arguments. As you can see on the screenshot, you can easily distinguish between removed lines, added lines and changed lines. For changed lines, the part that was changed is highlighted.
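Aliasing it is one git config line, assuming difftool is installed as a git subcommand; the alias name "vd" here is my choice, not from the post:

```shell
git config --global alias.vd 'difftool --tool=vimdiff -U99999'
git vd HEAD~1 HEAD    # full-context, side-by-side view of the last commit
```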

January 29, 2009 :: Germany  

Bandan Das

So, what I have been upto lately?

Nothing much! No, really, I have no clue why I couldn't find time to post something; or maybe I didn't have anything to write about! Well, whatever the reason, I thought I would sum it up. To be honest, the sudden thrust into a "so-called busy life" is what has made me busy :), that's what I think happened to me. I mean, of course, school was busy, but you tend to fall into that daily routine and then you find ways of being free, err.. don't understand what I am trying to say here? Me neither, just forget it!

Among other things, I got a chance to put my carpentry skills to the test with this DIY computer table kit (courtesy of my girlfriend). Here is how it looks now:

Yeah I know, it looks *abused* now, but at some point it was shiny and new! If you lift your head up towards the upper left, you will notice the alien-ish blue LED clock, which is also a DIY kit. It’s a perfect gift to yourself for those unthinkably boring times, and you can get it here. The best part, however, is that you don't even have to solder anything, and the connections simply look cute. Here's how the backside looks:

And this is how the front looks (Lights switched off to bring out the best ^-^)

Then there was also the trip to India last December. Going back home was more than refreshing. No, I don't have any pictures; I mean, c'mon, I was out there enjoying! Not snapping photographs to post on my blog! Yes, you got me there! As a matter of fact I was, but I am just too lazy. Here's a picture of a dam I visited, which is supposed to be Asia's largest in terms of (catchment) area.

Here's one more of the whole catchment area

Coming back to the snow haven was absolutely not very pleasant. But maybe, that was the reason why I got to stay indoors and write this post :)

read more

January 29, 2009 :: India  

January 28, 2009

Jürgen Geuter

"How can you not wanna know?"

I've always been interested in knowing the "irrelevant" stuff: I could hardly remember any of the things my teachers wanted me to memorize (like historical dates and such), but I was able to talk about some Star Wars character's name and its origin. Some say I wasted (and waste ;-) ) a lot of time; I say I was (and still am) curious.

The problem back then was that relevant information was often not easy to acquire: you either had to borrow or buy a book specifically on the one topic you were interested in, or you could look the thing up in the dictionary your parents owned. Getting information about less common knowledge was a time-consuming task and often came with quite some money to be spent.

Things changed with the Internet, Wikipedia and all those obscure little sites all over the net, all the blogs and fansites and whatnot. Information is just one search away; the task is not so much finding any information but filtering all the information being offered to you about any given topic.

I was just out with Annette, fetching something to eat, and because the drink was very cold and I was very thirsty I got the good old "brainfreeze". It hurt, and I instantly wondered why that specific pain is so typical and unique, so I made a mental note and looked it up when I came home.

This is not information I need to do my job or to save money or other resources. It's just one of these weird little pieces of information that is neat to know. I'll probably have forgotten it in a week, too, and will search for the same thing again.

Our access to information is quick, cheap and easy nowadays, but it comes at a price: it kinda forces us to get that information.

The situation kinda flip-flopped: before, if you asked about some random topic, nobody knew anything, and the normal case was that you just couldn't be bothered to really find out, really do the work, really go through the books. Only in rare cases would you take the time to get the information.

Nowadays it's cheap, so cheap that when you bother to ask some sort of question, you kinda have the obligation to look it up. The question becomes: "How can you not want to know that, considering how easy the information is to acquire?"

This creates stress, because if we look at it purely economically there's hardly a reason not to know something as soon as a question emerges (as long as the question is simple enough, obviously): if you ask something, even if you just ask yourself, just type the question into a search engine: instead of thinking about it for 5 minutes, you will know the answer in 3.

Stress is a terrible thing, one of the things that we should try to avoid and remove from our lives as much as possible: (Constant) Stress drains your energy and makes you restless.

The decision that we just can't be bothered to look something up is a hard one at times, but can be quite relieving. It does not mean not to look up brainfreeze in the wikipedia but it means that it's ok not to know when exactly something happened or how exactly something works. It's ok to say "it doesn't matter", it's ok to embrace your own ignorance at times.

We live in a time where knowledge and information are being thrown at us all day, it's important to keep in mind that "deciding not to care" is an important freedom, too, a freedom that's worth indulging in sometimes.

January 28, 2009 :: Germany  

Michael Klier

Why I Bought (CR)Apple Hardware

Warning: Long post ahead!

Yeah, it's true. I got myself a MacBook Pro last week. And well, the overall responses of my net-friends (most of whom are FOSS enthusiasts like me) weren't very positive at all ;-). So I'll try to explain what led me to buy a (CR)Apple product (because there is some sense in it).

Besides being a Linux user for many years now and a fan and supporter of Open Source software, information freedom, Creative Commons and all that, I'm also an audio engineer by trade and profession. 8 hours a day, 5 days a week, I spend cutting audio material, listening intently to eliminate stuff in vocal/music tracks most people wouldn't care about (because they wouldn't notice it), and trying to make commercials promoting a slowly dying medium sound cool. I also do a lot of music in my free time. You see, music and working with audio material is a very dominant part of my life.

So how does Apple fit into this? During the past 8 years I've been working with audio on various platforms, including Windows 98, Windows ME, Windows 2000, Windows XP, Mac OS, Mac OS X and Linux. All of them had their downsides; the ones with the fewest, however, were the Macs. I'll try to summarize my experiences:

Windows:
As crazy as this may sound, of all the Microsoft OSs I've done audio with, Windows 98 was the most stable, non-complaining, comfortable one to work with. In fact, I know people who used to work on Win 98 until maybe two years ago because of that fact. ME was horrible, as were Win 2000 and XP (I don't know about Vista, but I think an OS which needs that much RAM can't be any good for doing audio, which obviously requires a lot of RAM and a good CPU as well). Apart from the OS, there were a lot of hardware issues. Everybody knows that it's not an easy task to put together a PC in which all components work together without flaws. Maybe it was just bad luck, or my incompetence, but I have never worked with a self-built PC system, nor one bought in one piece, that didn't have any flaws (failing disks, jitter, extreme noise etc.). And last but not least, there's ASIO, a - as they say - low-latency and high-fidelity interface between a software application and a computer's sound card, designed by Steinberg and now the de facto standard for professional audio on M$ systems, which is, at least IMHO, an unreliable bitch (sorry, but there are no lighter words to put this).

Mac OS(X):
The most obvious downside is that Apple (P)PCs were/are totally overpriced; you just pay way too much for the value you get. One of the most retarded facts of the latest Intel Macs is that the built-in audio interface comes configured to not do capture and playback at the same time. This is quite annoying for end users who have no idea that this can be changed. And people who say OS X never crashes have never seen the spinning wheel of death ;-).

Linux:
There's just one point which makes Linux suck in regard to audio, and it's not even its fault or the fault of the community, but the fault of the audio device manufacturers who don't provide Linux drivers for their products. When it comes to better-than-decent audio cards, your choices are quite limited. There was a time when RME had native Linux drivers for the Hammerfall card series, which was/is quite good. Also, M-Audio does a good job of providing Linux drivers for most of their products. However, even though M-Audio makes nice devices, most of them are targeted at the consumer market and homerecording (I'm part of the homerecording group as well, but working each day with a 16,000€ Pro Tools system spoils you). Don't get me wrong, those devices are nice, but when it comes to quality in A/D converters or pre-amps there are those which are good and those which are better, and I would love to have the choice to work with the better ones on a Linux platform.

Of course all of those platforms have their positive sides as well:

Windows:
Just one word: Plugins. The de facto standard for audio plugins - things like reverb, delay, chorus or virtual instruments (software implementations of synthesizers etc.) - on the Windows platform is VST, which is again a Steinberg product. You can download the VST SDK from the Steinberg website, and you are allowed to redistribute plugins you've created, for free or commercially, as long as you comply with their license terms. Over the many years VST has existed, this has led to an immense number (literally thousands) of available commercial and freeware VST plugins which aim to bring the functionality of good old analog studio hardware, completely new experimental ways of audio processing, and emulations of your favorite synthesizers to your digital audio setup. (As a side note: it's possible to use VST plugins on Linux systems as well!)

Mac OS (X):
There are not really many things which make OS X itself better than Windows. Although I wouldn't say it never crashes, it's far more reliable if you have to work with it 8 hours a day, 5 days a week. I can't provide numbers to prove that, so take this with a grain of salt; however, it reflects my experience. You can use VST plugins too, though, because until recently Macs used PPC CPUs, there weren't as many free plugins available as for Windows, but I expect that to change in the near future. Another point is X11. The latest OS X comes with X11 by default. This means you can run a lot of the open source audio programs (like Ardour, more on that later) natively on OS X (without much of the hassle of before), and, here comes the important thing, in combination with really decent audio cards! Other than that, I have no real opinion on Core Audio or Audio Units; it works, and the only, but maybe important, difference is that Audio Units comes as part of the OS itself.

Linux:
Apart from the fact that Linux just rocks anyway, there's a lot of innovation regarding audio processing happening in the Linux world. The JACK audio connection kit is pretty much the wet dream of anybody who wants to manage a complex audio setup. Being able to route audio sources/destinations in whatever way you want is one of the most important things in a studio setup. Most PC audio card manufacturers implement their own more or less good routing software tools, which come bundled with the sound card itself. With JACK, working with all the audio cards in your computer is just plain cool. Of course JACK doesn't make things easy, and if you come from a world where you have one stereo input and one stereo output most of the time, it's not easy to get the hang of it. As a side note: JACK is available for OS X as well, and is being ported to Windows too. Another cool thing emerging in the open source world, which I'd love to see become the successor of MIDI and get adopted as the new standard for device communication in a recording studio, is Open Sound Control, or OSC for short. And last but not least there's Ardour, pretty much the only DAW software which I think could be able to replace Pro Tools in my digital recording setup.

So why exactly did I buy a MacBook Pro? First of all, I was in need of a new computer for my audio work at home. Until today I was working on a several-years-old Athlon XP with 1 GB RAM and old IDE drives. Working with such old hardware is still possible (you can pretty much make music with anything), but you have to adapt your workflow to the limitations the hardware imposes on you. I just wanted the same convenience at home as I have at work: not being forced to render effects directly into my audio tracks each time the CPU began to choke (which it did very early, after applying just one processor-intensive reverb, for example). Another point is that I want to run Pro Tools at home. This is the DAW I've been working with over the past couple of years, and it's my favorite of all of them. To my luck, it's also the most expensive one :-/. I don't want to go into too much detail about why I think Pro Tools is better than other DAWs like Cubase, Logic, Sonar, Samplitude etc. IMHO it sounds better; yes, as strange as this might sound, even if you use the same audio hardware, all those DAWs sound different, and what I like most about Pro Tools is that it sounds very clean (some would say sterile). Enter the Mac. The latest version of Pro Tools is optimized to run under OS X, and my overall observation is that Pro Tools runs far more stably on OS X than on Windows systems (I've worked with Pro Tools on Windows for almost a year). Why Pro Tools if there's Ardour? Of course I immediately installed Ardour on my new Mac, and I really like it so far (it still lacks MIDI support, but it will get MIDI with the 3.x version, I think). The main reason to get Pro Tools, though, is compatibility with the majority of recording studios across the world. Pro Tools is pretty much the studio standard, be it music production, post-production, yada yada yada.
Being able to just burn/ftp a session and send it to a mastering studio, for example, or take the work you've done at home and finish it in a rented studio, is very important if you plan to work as a freelancer in the audio field. Of course, most studios you can rent let you work with other DAWs as well, but overall, Pro Tools dominates the professional market. Also, Pro Tools is, AFAIK, the DAW with the best sample accuracy. That's also why you can create mixdowns in real time only, unlike other DAWs, which work on and apply effects over blocks of audio data (most people find it silly not to be able to just render a mixdown in a couple of seconds, but that's the price you pay for a better result). Apart from that, it's the only DAW to my knowledge in which you're able to compensate, in real time, for the delays which plugins introduce while processing the audio signal, which in turn reduces possible signal phasing.

To come to an end with this fairly long post: there's of course much more to say about all those different platforms, and digital audio is far too complex a field to pack into a single post. But I hope my FOSS friends (if you're still reading ;-)) are now able to understand why I bought a (CR)Apple product ;-).


January 28, 2009 :: Germany  

Steven Oliver

Mac Screenshot

I’ve had my MacBook for quite a long time now and I do a lot of stuff on it, using a lot of open source applications. While I find my Mac better for development purposes than a Windows PC, I do find it more restrictive; it’s probably the most tightly controlled computer I’ve ever used. But anyway, here is what my Mac looks like: macbook_screenshot

Enjoy the Penguins!


January 28, 2009 :: West Virginia, USA  

January 27, 2009

Kevin Bowling

KDE 4.2 on Gentoo

KDE 4.2 is out officially.  The ebuilds for Gentoo have been ready for a while.  This is a truly fantastic release.  If you’ve ever formed an opinion about KDE in the past, I encourage you to give it another go.

My beta 1 review back in December sums up most of my thoughts on the release.  Nothing has changed significantly since then, just lots of polish and bug fixing.  Everything has been stable and functional since I started using it in the RC phase.  This is a worthy opponent to KDE 3.5, GNOME, Windows and OS X.

Thanks again to the Gentoo KDE team.  The ebuilds are in great shape!



January 27, 2009

Ciaran McCreesh

Paludis 0.34.1 Released

Paludis 0.34.1 has been released:

  • We can now skip src_ phases where it is safe to do so.
  • Documentation updates for repository configuration.
  • Support for managing user and group accounts (requires distribution support).
   Tagged: paludis   

January 27, 2009

Martin Matusiak

Van Lustbader’s Jason Bourne

With Ludlum’s passing in 2001, Eric Van Lustbader has taken up the mantle of writing more Bourne books for the fans. As it turns out, he does this surprisingly well in that his voice is very similar to Ludlum’s. To date, he’s put out three books, with a fourth on the way. Interestingly, Ludlum left Bourne when the character was 50 years old, so to the extent that Van Lustbader wants to keep this going, he’ll have to equip Bourne with the characteristics of a James Bond, or… Donald Duck. Characters that never seem to age, merely appear again and again in successive episodes.


Nevertheless, The Bourne Legacy certainly does add to Jason’s life story, as the title no doubt implies. Sadly, Van Lustbader kills off the lovable characters Alex Conklin and Morris Panov right off the bat. I suppose after three stories we’ve had it with them? This sets the stage for Jason, who is framed as the suspect through a setup. Strangely enough, the CIA takes the bait without ever considering the possibility that something is amiss. What’s mind-boggling about these people is that they never seem to know what the people working for them are actually capable of. First they train a Bourne, and then they’re astonished that a simple hit squad can’t take him down. The agency director, a longtime friend of Conklin’s, puts a price on Jason’s head without thinking twice about it.

It’s odd that a man with no connections into the agency is able to execute such a plot, sending the whole agency after one of its agents. But that’s what Stepan Spalko, on the face of it a well respected leader of a humanitarian organization, has done. His hired gun is a man called Khan, a superb assassin whose main asset is to never betray his emotions, no matter the situation. Van Lustbader lets it slip quite early on that Khan is actually Jason’s long lost son Joshua, presumed dead, from his first marriage. But it takes us until the end of the story for Jason to accept this truth.

In the meantime, there is a scheme to carry out a bacteriological attack on the participants of the terrorism summit in Iceland, that is, leaders from the US, Russia and Arab states. Naturally, the plot is Spalko’s, with the help of the puppeteer’s favorite puppets: Chechnyan rebels. In the end, Khan has an unlikely soul-cleansing moment with one of Spalko’s betrayed Chechnyans, Zena, through which (although Zena is dying) he’s able to gain some fresh perspective on his father, who supposedly abandoned him back then in Phnom Penh.


I’m starting to resent Van Lustbader. He is systematically destroying everything Ludlum built up. First he killed off Panov and Conklin, and now Marie. Marie was shockingly absent from Legacy, and now she’s met her end in the most trivial and un-Bourne-like way: pneumonia. Imagine, the strong and resourceful Marie withering like this? It’s absurd. I can think of two reasons. Either Van Lustbader doesn’t like Marie or he doesn’t have it in him to write her part.

The more I think about it, there’s something bigger going on here. You don’t just kill off the second most important character without reason. But it isn’t just her. Van Lustbader’s characters are different. They are exaggerated, caricatures almost. Conklin first appeared every bit the single minded, firing from the hip kind of guy, but he turned out to be a wonderfully nuanced character. And Panov had great personal warmth. Then it was Marie, the most complicated of all of Ludlum’s characters. She was never a flat character instructed to repeat the same concerns in the same words. On the contrary, there was much growth, and you could always sense that Ludlum had a lot more in store for her, he was never finished with her. A wonderful aspect of the Bourne stories was precisely the unpredictability of Marie.

Contrast Conklin, David Abbott and Peter Holland with the nameless director of the CIA. Ludlum’s characters are flesh and blood; they feel guilt and remorse. Van Lustbader’s director, in contrast, is the Pointy-Haired Boss. And he barely has a handle on the job, consumed by the struggle to maintain his political position and that of the company. He knows little about the goings-on and understands even less, least of all about Bourne. It struck me how odd this was. Surely the CIA chief would be a highly sophisticated character; surely he’d be clever enough both to protect the agency and to run it, or how else would he have risen to the highest position? Strangely enough, Van Lustbader uses him a lot, but then doesn’t bother to build him a decent character.

But that’s the thing about Van Lustbader: he can’t do characters. Ludlum would never motivate killing or terror with anger or hatred. Hatred is a complicated emotion, with forays into many other states of mind. Furthermore, a character who’s hateful is not hateful all the time; he undergoes moments of weakness, of shame and doubt. Meanwhile, the CIA chief orders Bourne’s execution more or less because he’s sick of him. That doesn’t make a whole lot of sense. Van Lustbader’s one honest attempt at creating a character was Khan who, agonizingly, doesn’t reappear.

Van Lustbader’s villains are also altogether different. They go under the common banner of “terrorists”. First Chechnyan rebels and now Saudi jihadists. Their objective is a rather vague scheme to disrupt US-Saudi relations. Compare that to the mesmerizing plot of the Taiwanese magnate who wanted to seize control of a fragile Chinese state: now that was a plot! Van Lustbader’s villains can be said to be more or less “crazy”, but in a shallower sense than, say, Carlos. There is endless rhetoric about Western decadence, but what exactly are they trying to achieve? At least Carlos had his reasons, and there were reasons why he had them. Spalko too was a stronger character. If nothing else, at least it was clear that he was a puppeteer, and a puppeteer never reveals his cards to his puppets. But Fadi and Karim spend 20 years planning a “Face Off” reenactment only to detonate a nuke in Washington DC. Well, so what? What does that accomplish?

Ludlum believed that life can twist you every which way, but it does not ultimately possess you. He used a lot of older characters, he believed in redemption and forgiveness. With Van Lustbader, there are three options. You are corrupt from the start and eventually meet your end. You start out good, then become corrupted. Or, you remain good, but you’ll be put through the very harshest episodes in life. That’s the complete set of human experience with Van Lustbader.

A final, smaller, matter is Van Lustbader’s courageous stab at computer security. I understand his motive, but he should keep it to a minimum. That scene where Karim runs a virus on the mainframe and brings down the entire network is quite sad. So the CIA, one of the most technologically advanced organizations (as every spy fiction writer insists) only has a single mainframe? He completely betrays his ignorance of the computer networks present even in small businesses, let alone huge organizations. The insistence on portraying “the firewall” as some kind of fantastic artificial intelligence is also rather tedious.


As much as the previous Van Lustbader novels were subpar, this one just didn’t grip me at all. His lack of imagination is tiring.

Also tiring is his reliance on recycling the same themes again and again. The CIA gets a new female director. None of the men respect a woman in charge. Yada yada yada. And it drags on and on. Come to think of it, this feminist angle has been present in all of his books. Yes, we get it; don’t you have anything else?

The plot this time is convoluted without being especially interesting, which is an odd thing to say of a Bourne story. But lo and behold, another Van Lustbader favorite theme: Muslim fundamentalism. A group calling themselves the Black Legion is planning a large-scale attack on the continental US. Their motive is typically weak: “We cannot accept the Western way of life, so we must strike.” Another pointless motive that can’t possibly accomplish anything; this is starting to feel familiar. Let’s see: all of Van Lustbader’s villains are Muslim terrorists.

The other half of the story is the CIA vs NSA power struggle. This is what Van Lustbader loves to write about, the hard man in charge. Again in stark contrast to Ludlum’s characters. Yes, Luther Laval is clever, but he’s not as sophisticated as Ludlum’s characters. He’s simple minded, one sided, flat. At least the two agencies fighting it out is somewhat interesting, but Van Lustbader completely fails to imprint Veronica Heart’s character on the story.

So how does it go? Pick some tried and true themes. Add a few locations, some exotic names (preferably Russian or Turkish), shallow characters and wrap it up with Bourne. Oh, and add a lot of politics. Yeah, that seems to work well enough.

January 27, 2009 :: Utrecht, Netherlands  

Rodrigo Lazo

python and emacs: the rope way

If you follow Planet Emacsen you have probably read a couple of great posts about emacs and python.

Ryan McGuire's EnigmaCurry blog has a post about using Emacs as a powerful Python IDE, and a later one about using autocomplete.el for code completion in Emacs. The heart of both posts is rope and ropemacs. Rope is great! Go to rope's website for a full feature listing (e.g. autocompletion, refactoring, pydoc, etc.)

For a simple rundown of what you need to install and how, there is this post from Edward O'Connor.

Also, if you use yasnippet for templates, you may find Ian Eure's post about disabling python-mode's skeletons very useful.

January 27, 2009 :: Arequipa, Perú  

John Alberts

New Gentoo Home Page Coming Soon?

I saw a posting today on one of the Gentoo mailing lists about the recent lack of newsletters and website updates.  Unfortunately, the lack of updates isn’t unusual, but I did pick up an interesting bit of information.

It looks like there is a new index page coming soon to the Gentoo website; it’s just a matter of when it gets committed.  The new page appears to provide automated news updates with information such as:

  • Latest GLSAs
  • Compilation of dev blog posts from p.g.o
  • Latest package additions

The look of the page is pretty much the same as the old page.

Take a look for yourself.

January 27, 2009 :: Indiana, USA  

January 26, 2009

Ciaran McCreesh

Managing Accounts with the Package Manager

Paludis is a multi-format package manager. One beneficial side effect of this is that the core code is sufficiently flexible to make handling things that aren’t really ‘packages’ in the conventional sense very easy; in the past, this has been used to deliver unavailable, unwritten and unpackaged repositories.

One of the things Exherbo inherited from Gentoo without modification was user and group management. In Gentoo, this is done by functions called enewuser and enewgroup from eutils.eclass; a package that needs a user or group ID must call these functions from pkg_setup. Although usable, this is moderately icky; Exherbo can do better than that.

Really, user and group accounts are just resources. A package that needs a particular user ID can be thought of as depending upon that ID — the only disconnect is that currently dependencies are for packages, not resources. Can we find a way of representing resources as something like packages, in a way that makes sense?

Fortunately, the obvious solution works. Having user/paludisbuild and group/paludisbuild as packages makes sense; adding the user or group is equivalent to installing the appropriate package, and if the user or group is present on the system, it shows up as installed. Then, instead of calling functions, the exheres can just do:
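A sketch of what such a declaration might look like (assuming exheres-0’s DEPENDENCIES variable; the exact syntax here is my approximation):

```
DEPENDENCIES="
    build+run:
        user/paludisbuild
        group/paludisbuild
"
```

The user and group then resolve like any other package dependency.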


What about defaults? Different users need different shells, home directories, groups and so on. We could represent these a bit like options, but there’s a better way.

If two or more ebuilds need the same user, they all have to make the useradd call. This means duplicating things like home directory information and preferred UID over lots of different ebuilds, which is bad. It would be better to keep the users somewhere else. For Exherbo, we’ve gone with metadata/accounts/{users,groups}/*.conf. A user’s settings look something like this (the username is taken from the filename, so this would be metadata/accounts/users/paludisbuild.conf):

shell = /bin/bash
gecos = Used by Paludis for operations that require an unprivileged user
home = /var/tmp/paludis
primary_group = paludisbuild
extra_groups =
preferred_uid =

And a group, metadata/accounts/groups/paludisbuild.conf:

preferred_gid =

We only specify ‘empty’ keys for demonstration purposes; ordinarily they would be omitted.
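Because these files are plain ‘key = value’ text, any external tool can consume them. As an illustration (my own sketch, not Paludis’s actual parser):

```python
def parse_account_conf(text):
    """Parse a metadata/accounts/*.conf fragment ('key = value' lines,
    as in the paludisbuild example above) into a dict.
    Empty values, like 'extra_groups =', are preserved as ''."""
    settings = {}
    for line in text.splitlines():
        key, sep, value = line.partition('=')
        if sep:  # skip blank or malformed lines
            settings[key.strip()] = value.strip()
    return settings

example = """shell = /bin/bash
home = /var/tmp/paludis
extra_groups =
"""
settings = parse_account_conf(example)
# settings['shell'] is '/bin/bash'; settings['extra_groups'] is ''
```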

We automatically make users depend upon the groups they use. The existing dependency abstractions are sufficient for this. There’s a bit of trickery in Paludis to allow supplemental repositories to override user defaults found in their masters; details are in the source for those who care.

One more thing to note: with accounts specified this way, we can be sure that the package manager only manages relevant accounts. There’s no danger of having the package manager accidentally start messing with your user accounts.

So what are the implications?

  • We’re no longer tied to a particular method of adding users. If a user doesn’t want to use useradd and groupadd, they can write their own handler for the package manager to update users via LDAP or whatever. Paludis supports multiple handlers here.
  • Users who would rather manage a particular account manually can add it themselves, and the package manager will treat it as being already installed and won’t try to mess with it.
  • User and group defaults are in one place, not everywhere that uses them.
  • It’s much more obvious when an account is going to be added.
  • Accounts that are no longer required can be purged using the usual uninstall-unused mechanism.

And what does it look like?

$ paludis -pi test-pkg
Building target list...
Building dependency list...   

These packages will be installed:

* group/alsogroupdemo [N 0]
    Reasons: *user/accountsdemo-0:0::accounts
* group/groupdemo [N 0]
    Reasons: *user/accountsdemo-0:0::accounts
* group/thirdgroupdemo [N 0]
    Reasons: *user/accountsdemo-0:0::accounts
* user/accountsdemo [N 0]
    Reasons: *test-cat/test-pkg-2:2::ciaranm_exheres_test
    "A demo account"
* test-cat/test-pkg::ciaranm_exheres_test :2 [N 2] <target>
    -foo build_options: recommended_tests split strip
    "Dummy test package"

We can have a look at the accounts before they’re installed:

$ paludis -q accountsdemo groupdemo
* user/accountsdemo
    accounts:                0* {:0}
    Username:                accountsdemo
    Description:             A demo account
    Default Group:           groupdemo
    Extra Groups:            alsogroupdemo thirdgroupdemo
    Shell:                   /sbin/nologin
    Home Directory:          /dev/null

* group/groupdemo
    accounts:                0* {:0}
    Groupname:               groupdemo
    Preferred GID:           123

Note the dependencies:

$ paludis -qDM accountsdemo
* user/accountsdemo
    accounts:                0* {:0}
    username:                accountsdemo
    gecos:                   A demo account
    default_group:           groupdemo
    extra_groups:            alsogroupdemo thirdgroupdemo
    shell:                   /sbin/nologin
    home:                    /dev/null
    dependencies:            group/alsogroupdemo, group/groupdemo, group/thirdgroupdemo
    location:                /var/db/paludis/repositories/ciaranm_exheres_test/metadata/accounts/users/accountsdemo.conf
    defined_by:              ciaranm_exheres_test

The install is fairly boring:

(4 of 5) Installing user/accountsdemo-0:0::accounts

* Executing phase 'merge' as instructed
>>> Installing user/accountsdemo-0:0::accounts using passwd handler
useradd -r accountsdemo -c 'A demo account' -G 'alsogroupdemo,thirdgroupdemo' -s '/sbin/nologin' -d '/dev/null'
>>> Finished installing user/accountsdemo-0:0::accounts

And once they’re installed:

$ paludis -q accountsdemo groupdemo
* user/accountsdemo
    installed-accounts:      0* {:0} 

* group/groupdemo
    installed-accounts:      0* {:0}

Exherbo will be migrating to this new mechanism shortly — package manager support is already there (it was only a few hours’ work), so it’s just a case of gradually hunting down and killing those enew* function calls.

   Tagged: accounts, ebuild, exherbo, exheres-0, gentoo, groups, paludis, users   

January 26, 2009

Jürgen Geuter

Groups and Tags on (or, to be precise, Laconica, the software that powers it) recently got a massive update. Now some people like the new look, some don't (I guess that always happens when you do a redesign), but the thing I wanted to write about is a new feature that goes beyond what Twitter offers: groups.

As on Twitter, you have been able to use so-called "hashtags" to tag your messages: putting #TAG in your message automatically tagged it with "TAG". now has a new syntax which uses "!" as a special character: groups. Having "!group" in your message sends the notice to everybody who has subscribed to the given group.
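To make the two syntaxes concrete, here's a small sketch (my own illustration, not Laconica's actual parser) of how the two kinds of marker could be pulled out of a notice:

```python
import re

def parse_notice(text):
    """Split a notice into hashtags (#tag) and group addresses (!group),
    following the syntax described above."""
    tags = re.findall(r'(?<!\w)#(\w+)', text)    # taxonomy markers
    groups = re.findall(r'(?<!\w)!(\w+)', text)  # direct-to-group markers
    return tags, groups

tags, groups = parse_notice("restarted my !linux server #sysadmin")
# tags is ['sysadmin'], groups is ['linux']
```

The syntaxes are nearly identical; the difference, as explained below, lies entirely in what they do.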

The idea sounded nice, groups were created and joined, and all was great. But some people seem to have trouble understanding the difference between groups and tags (basically, many just used the group syntax and pushed out many extra notices, spamming half the population [when I got notices like "restarted my !linux server" I knew we had a problem]), so I thought I'd write down a few notes on the properties of these two concepts and the differences between them.


Tags

A tag is used to taxonomize your notice, to make it easy to find. Tags are used when you go through all the existing notices and try to find notices about a certain topic. Tags allow you to sort the huge load of notices and get to the ones you wanna check out quicker.


Groups

Apart from allowing a group to have a logo (something tags don't have), the function of groups is not to filter the huge heap of data but to let you easily address a set of people about a certain topic. Groups are like @-replies: you send the message to everybody in that group.


While both constructs allow you to somehow organize the flood of things, tags and groups are semantically completely different beasts: while you could see groups, just like tags, as a way to reduce the amount of notices relevant for a given search, you mustn't forget that groups are also a direct message into the streams of many others.


If there's one scarce resource nowadays it is attention: We have so much information around us that making sure we don't drown in the flood of it is not an easy task. If you use groups as tags and push a lot of unwanted noise into the stream of other people, a stream they usually carefully crafted to fit their needs, you are basically spamming them.

Groups can be a great tool to get in contact with many people who are somewhat experienced with a certain topic; it's a way for you to get help or start a discussion with like-minded people you don't yet know. Groups also allow you to follow a "topic" instead of "people": you might wanna know everything that deals with "python" but not have to subscribe to the people who sometimes talk about "python".

Before you send a notice to a group, think. Do you really want to send that notice directly to everybody in that group? Will they see it as a valid question or an interesting addition or will they see it as spam?

January 26, 2009 :: Germany  

Roy Marples

dhcpcd GTK+ Monitor available

dhcpcd-gtk is a GTK+ monitor for dhcpcd. It uses dhcpcd-dbus to talk to dhcpcd and wpa_supplicant. The end goal is to be a viable alternative to NetworkManager for wired and wireless setups, but without reliance on Linux-specific libraries; we just require dhcpcd and GTK+ to be available on your platform.

At present, dhcpcd-gtk is just an application which sits in the notification area. The icon has several states, showing offline, address negotiation and online. When attempting to negotiate an address you get a nice animation. A notification bubble is also shown per interface state change.

Future versions will have Access Point selection and dhcpcd configuration options.
Both are currently available via pkgsrc-wip, and hopefully in Gentoo soon as well. :)
EDIT: ebuilds available for dhcpcd-dbus and dhcpcd-gtk from my ftp server.

EDIT: Here's a screenshot as requested


January 26, 2009

Martin Matusiak

what is smalltalk today?

So Smalltalk is one of those languages that gets thrown around a lot in discussions about languages. After all, it is where the object-oriented paradigm pretty much originated (although I suppose Simula sort of had a version of that, without the explicit name), and many languages have drawn on it for inspiration, even if most have gone a completely different way in realizing the OO idea.

As a historical reference, Smalltalk is big, no doubt about that. And as a language it is pretty clever: dynamic and self-reflective, it just runs the whole time; objects are live and can be altered on the fly with a change to the code.

But what is Smalltalk today? Is it worth learning? From what I can see, and I should say I haven’t really spent that much time looking, it seems pretty dead. There is Squeak, where you get the whole image and do everything inside the virtual machine. From what I understand, this is the way you’re supposed to use Smalltalk. But frankly, I’m not interested in an application that only runs inside a VM, for the same reason that no one really wants to run apps in VirtualBox or VMware.

Most of the Squeak tutorials seem to be 404s. And I have yet to see anything that’s really interesting. In the end, programming is about programs, and without shiny programs to show off, what is left for us? The Smalltalk website, sporting a 1997 kind of look, has a list of apps on show. Based on that, it’s hard to get excited about the language.

Okay, so there is something called Seaside, a web framework. And I can kind of see how it’s cool to have a web application that has to run 24/7, and meanwhile you can do live updates to the objects. But I’m not shopping for a web framework anyway, and there’s a ton of them already.

So is Smalltalk merely a historical curiosity at this point?

January 26, 2009 :: Utrecht, Netherlands  

January 25, 2009

Steven Oliver

iTunes + School

Did you know, for all of you who actually have a computer that doesn’t have Linux on it, that MIT has put a lot of their free lecture videos on iTunes?

Very nice!

Enjoy the Penguins!


January 25, 2009 :: West Virginia, USA  

Goal Update

Step 1: Download the source code for your newly adopted project.

Step 2: Review the source code. Get a general idea of how things are set up and where everything lives.

Step 3: Actually install and use your newly adopted program.

So far so good!

Enjoy the Penguins!


January 25, 2009 :: West Virginia, USA  

Dirk R. Gently

PCI, PCI-X, PCI Express - Oh boy!

Recently I bought an old PC to use as a server and needed a network card for it. I didn’t think it would be such a hassle, but because of multiple PCI specs, finding a card wasn’t easy. There’s been a lot of confusion about PCI cards and which card to get for your computer; PCI cards come in a lot of different types and versions. I’ve done a good amount of research on this (if there are any discrepancies, please let me know) and hopefully this post will help clear things up.


Standard PCI cards (sometimes called PCI 1.0) have a 32-bit slot and operate at 33 MHz. Originally they were 5-volt cards, but 3.3-volt cards that use a different slot began to be made.

PCI 2.1 came a few years later and added the Universal PCI card spec, which allowed cards to be used in both 3.3 V and 5 V slots, and upped the bus to 66 MHz. It also created a 64-bit slot for high-end cards (gigabit networking, …). This meant there could be one of four different slots in your computer: 5 V 32-bit, 3.3 V 32-bit, 3.3 V 64-bit, 5 V 64-bit (see graphic below), so you either had to buy an exact card for the slot or a universal card (which most manufacturers began to build).

The PCI 2.3 spec came along and nixed 5 V adapters (cards), though it still supported 3.3 V cards and universal PCI cards.


PCI-X, or PCI eXtended, was built mainly for high-end use. It has a bus speed of 66 or 133 MHz and uses only the 64-bit 3.3 V slot. It is fully backward compatible with the existing PCI architecture, though: 33/66 MHz PCI adapters (cards) can be used in PCI-X slots, and PCI-X adapters can be used in PCI slots. PCI-X 2.0 came along and really upped the bus speed, to either 266 MHz or 533 MHz, while remaining fully backwards compatible.
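As a back-of-the-envelope check on what these widths and clock rates mean, the theoretical peak transfer rate of a parallel bus is simply width times clock (a simplified model of my own; real-world throughput is lower):

```python
def peak_bandwidth_mb_s(width_bits, clock_mhz):
    """Theoretical peak rate: (bus width in bytes) x (clock in MHz),
    giving MB/s for a parallel PCI-style bus."""
    return (width_bits // 8) * clock_mhz

print(peak_bandwidth_mb_s(32, 33))   # classic 32-bit/33 MHz PCI: 132 MB/s
print(peak_bandwidth_mb_s(64, 133))  # 64-bit/133 MHz PCI-X: 1064 MB/s
```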

Which Card to Get?

Well, really, you can get any universal card and have it work. Carnildo helped me see things the easy way:

The rule of thumb for PCI and PCI-X cards is that if it fits in the slot, it’ll work. The bus and cards will negotiate the fastest, widest connection that all of them can use, so a 133MHz 64-bit card in a standard PCI slot will transfer data as if it were a 33MHz 32-bit PCI card.

Also keep in mind that, “The slowest board dictates the maximum speed on a particular bus!”

PCI Express?

PCI Express uses an entirely different architecture and different slot sizes, and is incompatible with PCI or PCI-X. It’s expected to coexist with PCI-X rather than replace it.


Thanks to the guys at the Gentoo forums who helped me straighten this out.


January 25, 2009 :: WI, USA  

Martin Matusiak

java plugin galore

Ever lie awake at night wondering what happens when you hit a web page with a java applet on a vanilla Ubuntu? Me neither. It turns out that it’s this:

Embarrassment of riches! There are a few problems with this feature:

  1. While it’s great that you help me install the plugin, I have no idea what all these things are. All I wanted was “java”.
  2. There is no “default” or “recommended” choice. I can see that one of them is selected, but for all I know that’s because the choices showed up in this order at random.
  3. Even if I were inclined to think that the selected choice is selected for a reason, there’s another choice that’s exactly the same.
  4. “No description found in plugin database.” is not exactly helpful. In fact, a description could be just the thing to help me here.
  5. If I wing it and install one of these, and then it turns out it doesn’t work (perish the thought!), the little notification at the top of the web page isn’t going to show up again (because a java plugin, working or not, would be installed). So there’s no way I can come back to this screen.
  6. If I am the kind of user who understands that the choices in this dialog represent packages in the system, then I don’t know what they are called, because the package names are not mentioned. So if I want to uninstall a plugin that doesn’t work, I don’t know what to uninstall.

There is another dialog in the Firefox settings for plugins:

Strangely, there is no option to uninstall plugins here, just disable. But I guess that if I disable the java plugin, I can revisit that java web page and get the plugin selection dialog again (and try a different one). Still, it takes a bit of detective work to figure that out; it could be made more obvious.

This example demonstrates the difference between starting on a problem and actually solving it. I’m very pleased that we have these helper dialogs now, but they need a bit more thought put into them.

Bug: #320989

I actually picked this example because there used to be two or three options in that dialog, but now there are five.

January 25, 2009 :: Utrecht, Netherlands  

January 23, 2009

Steven Oliver

Stupid Logic

I do a lot of Crystal Reports at work. We use version XI, and while Crystal as a program is okay (at best), I have now run into a rather startling flaw.

The best example I have: I wanted to count the number of records being returned on the report. While I can do this through the SQL, I wanted to do it programmatically through Crystal’s own SQL-like reporting language so I could dictate which records count and which don’t. So here is what I started with:

Command.Record_num <> Previous (Command.Record_num)

Note that Crystal, in this situation, assumes an if-style statement, so as long as the above expression is true, the record is counted. Now, if you’re clever, you’ve already realized that this will always be at least one off. It will never count the first record returned, because that record has nothing to compare itself to. So I changed it like so:

Command.Record_num <> Previous (Command.Record_num)

I think the above code is obvious, so I won’t explain it. That didn’t work either; it still wasn’t counting the first record. I went through a lot of very clever code trying to make it count the first record, thinking I must be a real idiot because I couldn’t for the life of me see why it didn’t work. Well, I’m not; there is no reason that shouldn’t work. Here is what I ended up doing in order to make Crystal count the first record:

Command.Record_num <> Previous (Command.Record_num)

You don’t have to have programmed a day in your life to realize how stupid the above code is. Yet that was the only way I could get it to work. I’ve had to resort to such trickery at least twice now. It’s pathetic.

Enjoy the Penguins!


January 23, 2009 :: West Virginia, USA  

Dirk R. Gently

Updating BIOS with Linux

If you don’t have Windows installed and you need to upgrade your BIOS, Linux has the tools to create a BIOS flash CD. Not many companies make Linux flash utilities, and a lot of these utilities are DOS utilities, so a bootable DOS disk is needed. Here is a simple, easy way to create a BIOS flash CD.

First, get a BIOS image: you’ll need to download one for your board. For information on which flash utility to use, a good place to look is your computer manufacturer’s homepage. Award BIOS and American Megatrends BIOS are the most popular BIOSes used on motherboards.

Editing FreeDOS Minimal Boot Image

Note: This didn’t work for me, but plenty of people have had success with it; fdboot.img is a bit old and may not work on newer hardware. See flashrom below for an alternative.

FreeDOS provides a bootable DOS image. Download the DOS image to the Desktop:


and mount it:

mount -t vfat -o loop /home/user/Desktop/fdboot.img /media/ISO

The BIOS flash utility and BIOS image will need to be added to the FreeDOS image. I prefer to mount at /media/ISO, but any empty directory will do. The bootable image has a fixed size (1,440 KB, the size of a floppy disk), and hence /media/ISO will also have that size limit. The size needs to remain fixed in order to create a bootable floppy from it. You can see the space used in the image with:

du -b /media/ISO

Add the flash utility DOS executable and the BIOS image (there should be just enough room for them). I prefer to put these in a new directory, but it’s up to you.

cd /media/ISO
mkdir bios
cp /home/user/Desktop/flashprog.exe /home/user/Desktop/bios-image /media/ISO/bios

The data added to the FreeDOS image will be saved when the image is unmounted:

umount /media/ISO

Now return to the Desktop and convert the appended FreeDOS image to a bootable ISO:

mkisofs -r -b fdboot.img -c boot.catalog -o fdboot-bios.iso fdboot.img

The -b option defines the floppy image used for booting; the -c option names the boot catalog file, which is necessary for booting; the -o option defines the output file, in this case a bootable ISO; and finally, the image file itself needs to be added.

Now just burn the ISO to a CD/DVD:

cdrecord fdboot-bios.iso
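For reference, the steps above can be strung together into one rough sketch. This is only an illustration: flashprog.exe and bios-image stand in for whatever your vendor ships, and the mount and burn steps need root.

```shell
#!/bin/sh
# Sketch of the full FreeDOS BIOS-CD build described above.
# flashprog.exe and bios-image are placeholders for your vendor's files.
set -e

cd ~/Desktop
mkdir -p /media/ISO

# Loopback-mount the FreeDOS boot floppy image
mount -t vfat -o loop fdboot.img /media/ISO

# Add the flash utility and BIOS image (must fit in the 1,440 KB image)
mkdir -p /media/ISO/bios
cp flashprog.exe bios-image /media/ISO/bios/

# Unmounting writes the changes back into fdboot.img
umount /media/ISO

# Wrap the floppy image in an El Torito bootable ISO and burn it
mkisofs -r -b fdboot.img -c boot.catalog -o fdboot-bios.iso fdboot.img
cdrecord fdboot-bios.iso
```
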

Flash BIOS in Linux with Flashrom

Flashrom is a utility to flash the BIOS directly from Linux. It’s designed to be a comprehensive utility and supports a good number of hardware devices. On top of that, flashrom is easy to use. Check their page for compatibility, or install flashrom and see if it recognizes your chipset. I’d say more, but the flashrom website does a good job of describing the utility. I also updated the Gentoo ebuild for flashrom.


Because BIOS sizes are getting larger, we may need to learn how to create larger bootable images. The mkisofs man page mentions that it can create an El Torito (bootable) ISO from 1200 KB, 1440 KB, or 2880 KB images. An empty vfat image can be created with:

mkfs.msdos -C newimage.img 2880

And, of course, it can be mounted and the FreeDOS files copied there, but how could we make it bootable?



January 23, 2009 :: WI, USA  

January 22, 2009

Dieter Plaetinck

Arch Linux release engineering

I don't think I've ever seen so much anxiety/impatience/hope/buzz for a new Arch Linux release. (This is because of 2.6.28 with ext4 support.)
The last release was 6 months ago, which is not so good.. also, the arch-installer project has been slacking for a while. But the Arch devs have been very busy, with many things going on. You know how it goes...

That's why some new people have stepped up to help out on a new release:
Today, we are on the verge of a 2009-01 release (though that has been said so many times lately ;-)), and together with Aaron we have started a new project: the Arch Linux Release Engineering team.
Members of this team are Aaron himself, Gerhard Brauer and me.

Our goals:

  • coordinated releases following the rhythm of kernel releases (that's a release every 2-3 months, baby!)
  • anticipate the availability of the kernel in the testing repos instead of having to wait for it to go to core before building alpha/beta images
  • migration to AIF as the new Arch installer (woot!)
  • testing. Leveraging the possibilities of AIF as an unattended installer, we should be able to script full installations and health checking of the resulting system.
  • involving the community more? Releasing "testing ISOs"? Not sure about that. We'll see...

We also have:

Oh yeah, AIF is mirrored @ and packaged in the official repos!

January 22, 2009 :: Belgium  

Roy Marples

losing that Drupal lovin

I like trac. It powers a lot of my project websites. Well, all of them, in fact.
It's written in Python which is a very nice language.
trac upgrades are few and far between, there have been no security issues since I've been using it and it supports my DB of choice (PostgreSQL) very well.

I'm starting to dislike Drupal, which I currently use for this blog.
It's written in PHP which is not a very nice language.
Drupal seems to have a new security hole every month.
Its modules (whilst many) often have issues on PostgreSQL databases and are a pain to maintain.

So I got thinking :)

I use Drupal for this blog and my image gallery. That's it.
I use trac for a lot more, like projects, documentation, ticket tracking, source browsing, etc.
I discovered that trac has a blog plugin and a screen shots plugin which covers my drupal usage.

I've knocked up a demo site here. As you can see, it's not as pretty as Drupal, and the commenting system isn't as good.
Well, not as good at first glance: it just needs a reply button. trac-0.11 has a new theming engine and the theme plugins are still not ported, which is why it looks a little ugly. However, you now get to use wiki formatting for comments, so it's good :) I still need to come up with a way to move my pictures across. Feel free to add comments; I have a python script to convert a Drupal blog into a trac FullBlog, so I can roll over at any time ;-)

January 22, 2009

January 21, 2009

Jürgen Geuter

Socks? I don't need your stinking socks!

As I wrote yesterday I asked for the client to support socks proxies. Why?

Well, I am at client sites at times, and some police their network quite a bit. One client, for example, bans any access to audio streams; another does not allow instant messaging.

SSH to the rescue: I just build a tunnel to my vserver and use the SOCKS4 proxy that SSH provides for me (you can open a tunnel with the command ssh -f -D 1080 USER@HOSTNAME -N). Great: a working SOCKS4 proxy on port 1080 on localhost. I can tunnel Firefox and many other apps that know how to work with SOCKS proxies. The problem is, some apps don't.

One such app is the client, which can handle web proxies of sorts but no SOCKS. But today I found something neat to fix the issue: tsocks.

tsocks allows apps that don't know how to work with SOCKS proxies to use the SOCKS proxy transparently: the app doesn't even know it's using a proxy.

You can make any app use tsocks by calling it through the tsocks wrapper: tsocks lastfm makes the client use my SSH SOCKS4 proxy without any modifications; the client settings even say "no proxy". The same works with gpodder for podcast downloads and many other applications.
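tsocks reads its proxy settings from /etc/tsocks.conf. A minimal sketch matching the SSH tunnel above might look like this (the "local" network range is an assumption; adjust it to your site):

```
# /etc/tsocks.conf -- route non-local traffic via the local SSH tunnel
server = 127.0.0.1
server_port = 1080
server_type = 4
# Addresses matching a "local" line bypass the proxy entirely
local = 192.168.0.0/255.255.255.0
```

With that in place, the tsocks wrapper preloads a small library that redirects the application's outgoing connections through the proxy.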

So the next time you are in a locked up environment that allows you to SSH out on some port, use SSH to build a local SOCKS proxy and then use tsocks to tunnel the apps through it that don't know how to properly handle SOCKS.

January 21, 2009 :: Germany  

January 20, 2009

Ciaran McCreesh

Paludis 0.34.0 Released

Paludis 0.34.0 has been released:

  • instruo is now parallelised
  • For NoConfigEnvironment, profiles are only loaded when needed.
   Tagged: paludis   

January 20, 2009

Jürgen Geuter

Wishlist for the client

Please support SOCKS proxies so I can tunnel through SSH when at a site that filters streaming music.

I tried to put Privoxy in front of the SSH tunnel but the client wouldn't stream properly :-(

January 20, 2009 :: Germany  

Christoph Bauer


I received this story by mail from a friend this morning… nothing special, really, just a little story… but somehow different all the same…

This story, whether it really happened or is pure fiction, shows one thing: that perhaps some food for thought for us "grown-ups" could come from our children; maybe we would just have to set a bit of an example for them…

…and as a starting point, and perhaps to bring a smile to the faces of one or more children, I would like in the same breath to mention the Toy-Run 2009 in Vienna, SCS, on June 21, 2009, and the Joy-Ride on June 6 in Lustenau…

And now, the story:

At a charity dinner for students with learning difficulties, the father of one of the children gave a speech that none of those present will soon forget.

After praising the school and its staff in the highest terms, he asked the following question: "When not disturbed by outside influences, everything nature does is done to perfection. But my son Shay cannot learn the way other children do. He is not able to understand things the way other children do. Where is the natural order of things in my son?"

The audience fell completely silent at this question.
The father continued: "I believe that when a child like Shay, mentally and physically disabled, comes into the world, an opportunity arises to put true human nature into practice, and it depends entirely on how people treat that child."

Then he told the following story:
Shay and I had once passed a park where some boys Shay knew were playing baseball. Shay asked: "Do you think they'll let me play?"

I knew that most of the boys would not want someone like Shay on their team, but as a father I also understood this: if my son were allowed to play, it would give him the sense of belonging he longed for so much, and the confidence of being accepted by others despite his disability. So I went up to one of the boys on the field and asked, without expecting too much, whether Shay could play.

The boy looked around for support and said: "We're six runs down and the game is in the eighth inning. I guess he can play. We'll try to put him up to bat in the ninth inning."

Shay struggled over to the team's bench and, with a broad grin, put on a team jersey. I watched with tears in my eyes and warmth in my heart.

The boys saw how happy I was that my son was allowed to play.

At the end of the eighth inning, Shay's team had scored a few runs but was still three behind. In the middle of the ninth inning, Shay put on a glove and played in right field. Even though no hits came his way, he was thrilled just to be on the field, and grinned from ear to ear as I waved to him from the stands.

At the end of the ninth inning, Shay's team scored another run. As things now stood, the next run would be a potential winning run, and Shay was due up next.

Would they hand Shay the bat at this moment and thereby gamble away their chance to win the game? Surprisingly, Shay was given the bat. Everyone knew a hit was all but impossible, because Shay didn't even know how to hold the bat properly, let alone hit the ball.

As Shay stepped up to the plate, however, the pitcher realized that the opposing team did not exactly seem bent on winning at that moment, and threw the ball so gently that Shay could at least hit it.

On the first pitch, Shay swung clumsily and missed. The pitcher again took a few steps forward and tossed the ball softly toward Shay.

As the pitch came in, Shay lunged at the ball and hit a slow ground ball back to the pitcher. The game would now be over in a moment. The pitcher picked up the soft grounder and could have effortlessly thrown it to the first baseman. Shay would have been out, and the game would have ended.

Instead, the pitcher threw the ball over the first baseman's head, out of reach of the other players. From the stands and from both teams came the shout: "Shay, run! Run!"

Never in his life had Shay run that far, but he made it to first base. Wide-eyed and a little bewildered, he scrambled down the baseline. Everyone shouted: "Keep running, keep running!" Shay took a deep breath and ran on, awkwardly but full of pride, to reach his goal.

As Shay rounded toward second base, the right fielder had the ball… he was the smallest boy on the team, who now had his first chance to become his team's hero.

He could have thrown the ball to the second baseman, but he had understood what the pitcher intended, so he deliberately threw the ball high and far over the third baseman's head. So Shay ran, as if in a delirium, toward third base while the runners ahead of him rounded the bases toward home.

Everyone was now shouting: "Shay, Shay, Shay, keep going, keep going!"

Shay reached third base because the opposing shortstop ran over to help him, turned him in the direction of third base, and shouted: "Run to third! Shay, run to third!"

When Shay had made it to third base, all the players on both teams and the spectators were on their feet, shouting: "Shay, run home! Run home!"

Shay ran home, stepped on the plate, and was celebrated as the hero of the day who had hit the grand slam and won the game for his team.

"That day," said the father, tears running down his face, "the players on both teams brought a piece of true love and humanity into Shay's world."

Shay did not live to see another summer. He died the following winter, never having forgotten what it was like to be a hero, to have made me so happy, and to see his mother tearfully embrace her little hero when he came home!"

Copyright © 2007
Please note that this feed is for private use only. All other usage, including the distribution or reproduction of multiple copies, performance or otherwise use in a public way of the images or text require the authorization of the author.
(digitalfingerprint: 0f46ca51d0fa4e6588e24f0bf2b80fed)

January 20, 2009 :: Vorarlberg, Austria  

January 19, 2009

John Alberts

Combine your partition space with mhddfs.

As I was browsing the Gentoo forums today, I came across a very interesting post.

A user had 2 partitions on different hard drives whose space he wanted to combine.  Ok, well, the interesting part was one person's reply about a new FUSE filesystem called mhddfs.  He pointed out an article that explained a bit about this new filesystem and how to use it.

Sure, there are multiple ways to combine the two drives, but this one is pretty interesting.  You can use mhddfs to combine 2 partitions into one virtual partition.  Mhddfs will automatically merge (overlay) the contents of both partitions so it looks like one big partition.

The advantages are:

  1. No need to move and backup existing data on the partitions.
  2. Easily implemented in fuse.
  3. Allows a regular user to mount and unmount the filesystem.

According to the forum thread poster, his tests show virtually no speed difference when using mhddfs, which is very surprising.  My experience using FUSE in the past with NTFS was that it was painfully slow.  I'm sure things have matured greatly since I tried it a few years ago.
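A usage sketch, assuming mhddfs is installed and the two partitions are already mounted at /mnt/disk1 and /mnt/disk2 (both paths are hypothetical):

```shell
# Merge the two partitions into one virtual one; new files land on
# whichever member drive has enough free space.
mhddfs /mnt/disk1,/mnt/disk2 /mnt/virtual -o allow_other

# Equivalent /etc/fstab entry:
#   mhddfs#/mnt/disk1,/mnt/disk2  /mnt/virtual  fuse  defaults,allow_other  0 0

# Unmount like any FUSE filesystem
fusermount -u /mnt/virtual
```
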

January 19, 2009 :: Indiana, USA  

Brian Carper

Clojure: 1, Common Lisp: 0

Waaaay back in January 2008 I finished my origami gallery photo-blog, written from scratch in Common Lisp. It took me about a month and a half of struggling to get it going.

Shattered Dreams

Ah, to be young again. I was very enthusiastic about learning a Lisp, and Common Lisp was just about the best in town. I'd recently read Practical Common Lisp and maybe I hadn't imbibed the Kool-Aid yet, but I'd sipped it quite a bit.

Then reality came a'knocking. SBCL couldn't run on my server's VPS without hacking and recompiling it (many times) to fix memory-mapping issues and get threading working. Cleanly installing all the necessary libraries via ASDF was a problem and a half. Learning Emacs and SLIME was just about the most painful computer experience I've ever been through.

And those were the big things. There were so many little things. Dealing with pathnames in a language that was written before modern-day pathnames were semi-standardized. Destructive vs. safe list operations. Crappy hash-table support, to the point where I gave up and used lists for everything. CL-SQL, which is pretty good but whose syntax and verbosity and quirkiness leaves much to be desired. (I never got the CLOS stuff in CL-SQL working.)

Trying to wrap my head around macros enough to use CL-WHO effectively (never did get it right). Trying to wrap my brain around CL packages and setting up ASDF to work with my own code (I got this working, but I couldn't tell you how at this point). The list goes on and on.

Once I finished the site, I was proud of slogging through it, but I was also exhausted. Common Lisp is a language that looks great on paper, but it never clicked with me. I set up some Debian init scripts on my server to make sure SBCL would start if my server restarted, and then I tried my best to forget about it. I was burned out.

Sometime a few weeks ago, my origami gallery stopped working. Nothing but 404's. I'm not sure when or why. I tried running my script to restart the SBCL background process and it died with a Bible-and-a-half worth of errors in SBCL. Sigh. I didn't have the patience or the desire to fix it.

Clojure to the rescue

So, today on a whim I decided to re-write the site in Clojure to get it back up and running, using Compojure, a nice new Clojure web framework.

I'm happy to say I'm already done. My newly-Clojure-powered origami gallery, up and running. Cheerfully untested in Internet Explorer, but works in Firefox and Opera.

Start to finish, the whole thing took me 8 hours. That includes time writing that little thumbnail-scrolling strip at the bottom in jQuery, a bit of JavaScript to hide and show the comments pane, a new stylesheet, and also a bit of time to resize and touch up a couple of new photos of new models to post. The whole site weighs in at 350 lines of Clojure code, which includes an ORM-ish database layer, all of the HTML (s-exp style), and all the controller code.

It was such a joy to write, compared to my first slog through Common Lisp. But why? I don't feel appreciably smarter than I was a year ago. Why did it take me 8 hours this time but 2 months last time? Probably thanks to Clojure itself.

Deploying SBCL was one of my biggest roadblocks last time. By contrast, installing Clojure is easy, even in its infancy where there are sometimes quirks with SLIME compatibility and such. These problems are always minor and the mailing list is always on top of them.

Installing libs is easy, you throw a jar into a directory and you're done. I deployed everything to my Debian server in 15 minutes, which included installing the JVM for the first time, fetching Clojure and all required libs, compiling them, and setting up an environment. (Pro-tip, there's a new bash launcher script in clojure-contrib now, which makes starting Clojure a bit easier and more standard.)

How do you install MySQL support for Clojure? You don't; you install it for Java. There is official documentation on the MySQL website about getting it to work. It's a single jar file you download and throw into your CLASSPATH. Then get the SQL lib from clojure-contrib; 5 minutes of documentation reading, and I was done. When's the last time you saw official documentation on a vendor's website for a Lisp?

That's how it is with Clojure. Compojure uses Jetty as its HTTP server. Jetty is mature, stable, widely used and very well documented. If we had to wait for someone to write an HTTP server for Clojure from scratch, where would we be? I can't say enough times how great it is to be able to slurp up all the Java resources in the world and play with them in a Lisp.

But it's the little things too, that make Clojure such a joy. How do you concatenate strings in Clojure? (str string1 string2 string3). How do I access the name of a "comment" object? (:name comment). How do I set up Compojure so that when someone accesses the url "/" it calls a function called index-page? (defservlet my-servlet (GET "/" (index-page))).

The Compojure HTML-generating library takes full advantage of Clojure literal syntax so that you can do things like [:a {:href ""} (str "Goo" "gle")] to output an HTML link, using a mixture of vector and hash literals and function calls. This alone makes it far more pleasant to use than CL-WHO (and much nicer than writing raw HTML).

And so on. Easy easy easy.

Example one: What time is it?

Here is an issue that exemplifies the kind of hassle I went through in Common Lisp, that I never hope to go through again. How can you get Common Lisp to tell you the current time, and store it in your database? I use this when people post comments, to capture the time the comment was posted. And some other things.

You can read all about getting the current time in Common Lisp here or here. Ignoring that there are two representations of time to choose from (universal vs. internal), the important thing to note is that neither is easily used for anything. CL-SQL meanwhile has types "wall-time" and "universal-time".

So we'll go with universal time. This seemed to be the most popular way to store a time in a database field, back when I researched it. Universal time is a count of seconds since 1900. How do you turn this into something a database can understand as a timestamp, or into something readable for human beings? Many languages store times in a similar way, as one huge integer, but most also give you really easy ways to turn such counts into something legible. Not Common Lisp. Instead, it's as simple as

    (defvar *day-names*
      '("Monday" "Tuesday" "Wednesday"
        "Thursday" "Friday" "Saturday" "Sunday"))

    (multiple-value-bind
          (second minute hour date month year day-of-week dst-p tz)
        (get-decoded-time)
      (declare (ignore dst-p))
      (format t "It is now ~2,'0d:~2,'0d:~2,'0d of ~a, ~d/~2,'0d/~d (GMT~@d)"
              hour minute second
              (nth day-of-week *day-names*)
              month date year
              (- tz)))

In other words, it's not simple. At all. Sure, it's just a couple of utility functions (and learning the magic, cryptic FORMAT language) away, but multiply a couple of utility functions by the dozens upon dozens of times I have to do this kind of thing for a simple 300-line website, and it turns out I'm no longer making a website; instead I'm ad-hoc re-writing Common Lisp to do what any language invented in the past two decades can do out of the box. (In Ruby, for instance, it's a one-liner.)

I never did get this working entirely right for my site. Timestamps worked and were stored in the DB, but they were never in the proper timezone for some reason. I could've fixed it, but it was a low-priority problem below sundry other problems.

So how do you do this in Clojure? When I use the SQL lib from clojure-contrib, TIMESTAMP values in the database end up as Java Timestamp objects in Clojure by the time I see them. You can read about it here in simple, easily-navigable javadoc. I call (.toString timestamp) and it gives me a human-readable version of the time. Or I can just do (str timestamp). Or I can use .toLocaleString (deprecated) or use a DateFormat object if I want anything fancier. The end. Because Java can do it, Clojure can do it.

Example two: Filenames

How do I get a list (or vector) of all the files in a directory? For my photo-blog I use this to get lists of thumbnails for the photos. In Clojure it was simple enough that I wrote it out myself; there may be a shorter way.

(defn glob [dirname] (into [] (.list (java.io.File. dirname))))

For Common Lisp you probably want to asdf-install CL-FAD, which "Returns a 'fresh' list of pathnames corresponding to the truenames of all files within the directory named by the non-wild pathname designator dirname." What the hell does that even mean? In fact I do know what it means, but only after plenty of reading. Completely unnecessary reading, if I was using any other language.

Just be careful with filenames, because CL has two ways of representing directories, and this also varies between implementations. This is enough of a problem that a whole chapter of PCL is devoted to writing a library to fix it.

I think the reason I got this project done so much more quickly this time is mostly because I'm using a better language for the job.

Update: hi Reddit. Thanks for DDOS-murdering my server. :)

January 19, 2009 :: Pennsylvania, USA  

Christoph Bauer

A new MTA

I guess you all know the common problem with cheap hardware being used as a server box, don’t you? I’m fed up with those troubles and decided to use some professional hardware.

It doesn’t sound that bad to run a desktop workstation as a server, but have you ever considered that such PCs are not made to run 24 hours a day? The consequence can be unexpected breakdowns. So if you sum it all up (time, spare parts, anger, risk), is the cheap PC really that cheap?

In my case it wasn’t worth all the anger, so I got myself two IBM servers to do the serving. Surprisingly, there were no problems getting the servers installed. The reason is that IBM hardware is really Linux-friendly.


January 19, 2009 :: Vorarlberg, Austria  

Dan Fego

CCDC Qualifying Round Review and Excitement

The Competition

On Saturday, seven of my classmates from GWU and I had a chance to head up to Lancaster, PA, to the home of White Wolf Security for the 4th Annual Mid-Atlantic Collegiate Cyber Defense Competition Qualifying Round. Along with GWU at this round of the competition were George Mason, James Madison, and Millersville Universities. For those who aren’t familiar, the competition puts students in the roles of system administrators who were recently hired to secure and maintain a company’s network. The whole affair is pretty exciting, and the pressure can get very intense. While we attempt to prevent and root out attacks from an all-volunteer (but skilled) red team sitting in another room, a white team also throws business injects at us that have us do things like install wikis, set up PKI, and create office templates for our company. We get scored separately on attack prevention, injects, and service uptime, and at the end of the day the top two teams move on to the next round. The whole competition ran for about 7 hours, and we were getting pounded from minute 1.

The Plan

After experiencing the chaos last year, we put together a list of basic things to do as soon as everything started that would keep out the easiest attacks. After blocking all external traffic with our firewall (for a few minutes, so we could have some “safe time”), we set out to do these things in the first 15 minutes or so: changing all the passwords on the boxes, killing extraneous services, setting client firewalls, and backing up important data and configuration files. I only got as far as changing passwords on the boxes I was handling. They gave us 4 Linux boxes, and those were the ones I was in charge of. They weren’t the newest versions of the OSes, and for the life of me I can’t understand how our Nagios box (Fedora, I believe) didn’t come with lsof, but I did my best to get everything locked down.
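The first-15-minutes checklist boils down to a handful of commands. Here’s a rough sketch of the idea - the service names, password, and backup path are placeholders, not what we actually ran:

```shell
# First-15-minutes lockdown (sketch; run as root)

# 1. Change the root password non-interactively
echo 'root:S0me-new-passw0rd' | chpasswd

# 2. Kill and disable extraneous services (names are examples)
for svc in telnet rsh cups; do
    service "$svc" stop 2>/dev/null
    chkconfig "$svc" off 2>/dev/null
done

# 3. Snapshot configs before anyone tampers with them
tar czf "/root/etc-backup-$(date +%Y%m%d).tar.gz" /etc
```

Nothing fancy, but having it written down ahead of time is the difference between 15 minutes of safe time and an hour of scrambling.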

The First Problem

Well, the best-laid plans of mice and men blah blah blah, and in the few minutes it took our firewall guy to figure out where the admin console was, the red team had managed to get onto two of the Linux boxes and leave their mark before I had a chance to change the root passwords. After about an hour, I noticed the intrusion on one of the boxes as I attempted to set up iptables and saw a bunch of identical ACCEPT rules in there that I hadn’t put there. It was go time.

The Source

I called over our team captain to let him know there was a problem, and I set out to figure out just what was going on. I flushed the tables, set the firewall policies to DROP, and hopped over to /sbin to see if anything looked weird. After checking iptables again, I noticed some more ACCEPT rules were in there. I cleared them out and opened the crontab to see if anything was running; it wasn’t. Not sure what was going on, I took a moment to restart SSH to boot off any active connections, just in case. Examining the files in /sbin, I found some that were world-writable. I knew that wasn’t quite right. One of those files, however, was iptables. At cappy’s suggestion, I viewed the contents of the file, and sure enough it was a perl script instead of the iptables binary. Since I was under the gun I didn’t quite deduce what the script did, but it called the real iptables (which they had renamed) with ACCEPT commands instead of the ones I kept giving it. While the real iptables was mentioned in that perl script, I didn’t catch on right away, so I looked at the size of iptables on another computer, looked for a binary in /sbin with a similar size, and found it. After that, I chmod-ed all the files in /sbin to remove world-writability, to prevent any further problems in case of non-root intrusion.
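Spotting and fixing the world-writable files comes down to two commands (a sketch of the idea; run as root on the affected box):

```shell
# List world-writable regular files under /sbin -- there should be none
find /sbin -type f -perm -o+w

# Strip the world-write bit from everything under /sbin
chmod -R o-w /sbin
```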

The Solution

At this point, I had found intruder access, a malicious script, and a moved iptables. However, more and more ACCEPTs kept being added to my chains. Once again at cappy’s suggestion, I moved the real iptables to another name and left their script in place as “evidence,” and in case they had any mechanism for replacing it. This finally seemed to stop the problem. At that point, I just needed to figure out where our attackers came from. The rules of the competition say we can’t block any IP addresses outright without approval from the white team, which comes only if we have details proving that the IP is malicious. I believe the reasoning is that we shouldn’t be able to block arbitrary IP ranges, and also that the scoring bot shifts IPs, so we could screw ourselves by blocking lots of them.
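One quick sanity check that would have flagged the fake iptables immediately: every real ELF executable starts with the magic bytes 0x7f 'E' 'L' 'F', while a dropped-in perl script is plain text. A sketch:

```shell
# Real system binaries are ELF; a planted script is not
if head -c 4 /sbin/iptables | grep -q ELF; then
    echo "/sbin/iptables looks like a real binary"
else
    echo "/sbin/iptables is NOT an ELF binary -- go read it"
fi
```

Comparing file sizes across boxes works too, as I did, but checking the magic bytes (or just running `file` on it) is faster than eyeballing sizes.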

The Culprit

I needed to identify the intruder, but I didn’t know how. I took a look in /var/log and saw a bunch of files, more than I usually see, so apparently I don’t log enough on my own computer. :-P My first look was at /var/log/messages, but that didn’t yield anything of value. Next, I stumbled across /var/log/secure, which turned out to be a log of SSH activity. I hit the jackpot: I found logins about an hour and a half prior from two specific IP addresses. I was ecstatic. This was our culprit. I was surprised that they didn’t delete such logs, but perhaps they didn’t think of it, or were instructed not to by the white team so as not to make our job of tracking them impossible. In any case, I saved the log to a file, filled out an incident report, and sent it over to the white team. They looked over the report, checked on something (I honestly don’t know what), and then let us block the IPs. Mission accomplished.
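Pulling the attacker IPs out of /var/log/secure is a one-liner. sshd logs successful logins with lines like `Accepted password for root from 10.1.2.3 port 4242 ssh2` (the IPs here are made up):

```shell
# List unique source IPs of successful SSH logins
grep 'Accepted' /var/log/secure |
    awk '{for (i = 1; i < NF; i++) if ($i == "from") print $(i+1)}' |
    sort -u
```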

The Wikis

Well, at least that mission was accomplished. I felt pretty good a little after 11am when this was all wrapped up, but that quickly faded as business injects got annoying. We had to install wiki software, which gave us infinite problems. We first tried MediaWiki, which was a bust because our database server was running MySQL 3.x. 3.x? What the hell? I’ve never seen that anywhere before. My impeccable sources tell me that it’s about 9 years old! Yeah, pretty egregious, but there wasn’t too much we could do about it under the circumstances. So we looked for other wiki software, of which there was plenty. However, after failing to find Tigerwiki (apparently it’s discontinued) and having ridiculous troubles with MoinMoin and TikiWiki, we ended up running out of time and failing the inject. That wouldn’t have been so bad if it weren’t for another inject later in the day that built off of that one. So that sucked. In the end, we found an older version of MediaWiki (why didn’t we think of it earlier?) and installed that for the second inject, but we ran out of time and failed again. And in that last bit when I say “we,” I mean two of my teammates, because I was sick of wikis and had to step away before bashing the computer with a chair.

The Cable

The rest of the day was relatively low-pressure for me: just keeping a check on my systems, handling another inject, and trying to get our damn Nagios box to actually work. For some reason, it wasn’t connected to anything. We couldn’t explain it, though we thought our routes were a bit screwy. After a lot of investigation, one of my teammates brilliantly found that there was no network cable in the computer. I know, I know, that’s normally the first thing to check, but we were given computers and had assumed there’d at least be cables in everything! And it’s not like it had come loose or fallen out or anything; there was just no cable for that box. So we went to the white team and they remedied the problem, but we all had a good laugh over that. In the afternoon there was also another intrusion that I helped get logs for, but it wasn’t nearly as exciting as the morning breach.

The Nagios Box

Amidst everything else, I took a good shot in the afternoon at configuring our Nagios box. I remembered from the last competition that the IPs were wrong, so that seemed to be what I’d have to do again: fix up the network portion of every entry to the one we were assigned. Simple enough with sed. However, I had enormous difficulty getting into the web console, considering they didn’t give us the username and password, and they weren’t any kind of defaults. Well, in the end, it turns out they were. “nagiosadmin” is apparently a standard username, and the password was the standard one for the competition. It just took way too long to figure that out. Once I fixed the IPs and logged in, I realized that almost all of the checks were failing. Not good. That generally meant the scorebot would also be counting those tests as failures. I talked to our firewall guy, who had egress filtering on (blocking outgoing traffic), which he suggested would give such results. We argued, he got busy, and I never got to see the beautiful green colors of Nagios that come with a fully working network. Oh well. At around 4, the competition ended, we packed up our computers, and we headed over to another room for a debrief.
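The sed pass over the Nagios configs was something along these lines - the network prefixes and config path here are made up for illustration:

```shell
# Rewrite the stock 192.168.10.x addresses to our assigned 10.0.1.x net
sed -i 's/192\.168\.10\./10.0.1./g' /etc/nagios/*.cfg
```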

The End

All tired, hungry, and anxious to hear the results, we waited and were fed pizza while the event organizers talked to us for a while. Then they let the red team have a go: they told us what they did to us over the day, queried us about our strategies, and gave us some tips for defending. I actually got to talk to the guy who put that perl script on the Linux boxes, and he asked, “did you find the others?” We laughed, and then realized that we hadn’t even looked for others. It didn’t even occur to us for some reason. So he pointed out that if you find something malicious, there’s almost certainly something else there, and you should make some effort to find it. He suggested grepping all the files in /sbin for “perl”, while I probably would have used find to look for any files modified in the last few hours. Either way, it’s something solid that I learned and will most certainly apply at the next competition. Which leads to the most awesome part: GWU got 2nd place, and we’ll be competing at the regionals in Baltimore in March! We’ve got a lot of work to do, myself included.
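Both of those sweeps are one-liners. A sketch of the idea (not the red teamer’s exact commands):

```shell
# Files under /sbin whose contents mention "perl" -- real binaries rarely do
grep -l perl /sbin/* 2>/dev/null

# Or: anything in /sbin modified in the last 3 hours
find /sbin -type f -mmin -180
```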

All in all, I found the whole thing very worthwhile for the third time, and I recommend that any college students in the US with an interest in computer security look into creating a team and competing in a regional competition. The whole affair, while stressful, is not only fun but a great experience for anyone interested in information assurance. As a matter of fact, I’m not particularly enthralled with security and I still found it a great experience. In past events (but not this one, because of the inauguration), we had Secret Service agents there as well, to talk to us a bit, consult on some of the legal issues with intrusions, discuss our incident reports, and have drinks with afterward. :-P I can’t wait until March. Maybe we’ll make nationals!

January 19, 2009 :: USA  

January 18, 2009

Roy Marples

dhcpcd gains DBus bindings

dhcpcd is a DHCP client. DBus is an IPC mechanism. Add them together and you get dhcpcd-dbus!

dhcpcd-dbus receives interface configuration events from the dhcpcd control socket and emits them to DBus listeners. dhcpcd-dbus also has methods to release, rebind, stop and query dhcpcd on an interface. This allows users to control dhcpcd to some extent: all dhcpcd operations require root privilege, and DBus has a fine-grained ACL for accessing these functions, which dhcpcd-dbus can optionally use.

Of course, to the end user, dhcpcd-dbus by itself is useless. I've started work on another project, gnome-dhcpcd-applet, which will just provide information on dhcpcd via a systray icon and popup tooltips when things happen. This should be done sometime next week. Future versions will allow for some configuration and wireless AP selection, but most importantly will try to emulate the NetworkManager "I'm online" flag.

January 18, 2009

Dan Fego

Comment Issues

A visitor to my blog yesterday was kind enough to bring to my attention a technical issue with commenting on my site. I had a CAPTCHA plugin enabled, but unfortunately it didn’t seem to work properly, so no one at all could post. After investigating the issue a bit, I came across a piece of advice for the plugin I was using (MyCaptcha) which said to make sure that the following line was in my comments.php file:

<?php do_action('comment_form', $post->ID); ?>

In the same breath, it was mentioned that most themes do in fact have this in there. Well, to my luck, the theme I’m currently using (iNove) doesn’t have that line in its comments.php file, so I had to add it. Unfortunately, I only realized this after going through a few other plugins to see if I could solve the problem by switching things up. I guess all these plugins rely on the same line, since they all seemed to have a similar issue. Finally, I’ve settled on Raven’s Antispam plugin, mostly because that was the plugin I was trying when I decided to add the above line to my comments.php file. It seems to do the trick (at least it lets users post), and it’s supposed to be transparent unless JavaScript is disabled. Seemingly ideal! So anyway, hopefully that will end my problems with this issue for good.
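If you’re debugging the same thing, a quick grep tells you whether a theme ever fires that hook (the theme path below is an example; adjust for your install):

```shell
# Check whether the theme's comments.php fires the comment_form action
grep -n "do_action('comment_form'" wp-content/themes/inove/comments.php
```

No output means the hook is missing and most CAPTCHA/antispam plugins will silently fail, just like they did for me.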

Also, I realized I don’t make it all that apparent what my email address is, so in case anyone wants to contact me regarding my blog (or anything else for that matter), that can be at

January 18, 2009 :: USA