Planet Larry

September 25, 2008

Thomas Capricelli

KDE 4.1.2 tagged, gentoo land frozen

I’m not a gentoo fan. Mainly because I don’t like the idea of being a ‘fan’. Being a fan in the free software world usually means being an extremist, and I hate extremism.

I nonetheless use almost exclusively Gentoo on all the computers, laptops, servers and other divx boxes I have or maintain. That means a lot of them, and it makes my Debian friends laugh. Who cares? I use Gentoo and free software because I find them convenient and I like the ideas behind them.

I don’t share the optimism of people who think that gentoo is growing, though. On July 29th, KDE 4.1, the first almost usable KDE version since the 3.5 branch, was released, and since then guess what has happened in gentoo-kde land? Nothing. By nothing I mean, first, that not a single ebuild, even masked, even hard masked, has reached the official portage tree, and secondly, that despite the huge KDE user base in Gentoo, not a single official statement has been made concerning this issue. Because, believe it or not, there is an actual issue. Nothing was said on the main Gentoo page, and almost nothing on the gentoo planet (only one post, focused on whether kde should install in a different place or not). In gentoo land, everybody talks about everything but KDE in gentoo. Has the meaning of ‘g’ in gentoo recently changed?

When you try to find out a little more about this, it gets worse. Rumors are that developers have fought each other and the kde team is just no more. A new KDE team is here for whatever reason (to which, by the way, I send my very best support: for the development of new ebuilds, for being put under such light/pressure, and for being sent into this lion’s cage that gentoo devs seem to be). I don’t know anything about this, but it’s not the first time I have heard about huge tensions between gentoo developers, and this worries me a lot.

I don’t want politics, I want developers, I want free software developers. If I wanted politics, I would have gone for Debian, which, by the way, has had packages for kde 4.1 and 4.1.1 for a long time.

Growing is not an easy thing to handle. It seems to me that KDE has just managed to do it quite well: a lot of work has been done over the last few years to ’scale’ up, and I think they managed to make that hugely needed step. Gentoo still has a lot of work to do in this area. As a user, my expectations are the same as what you can read everywhere: transparency, transparency and transparency.

I love gentoo, I can understand a lot of things, I can wait, I can deal with human resource shortages, I could even help. I’m used to all of that because it is so common in free software and it is part of the deal. But I can’t bear darkness and closed doors.

I will not conclude by threatening to leave for another distribution. I’m more than happy with gentoo as a distribution and I will keep on using it as long as possible. I have a KDE checkout on my main computer anyway. If things get worse, though, I’m not sure I will dare to try working on the ebuilds.

I’m ready to ignore the “If you’re not happy with gentoo leave it” type of comments.

September 25, 2008 11:44 PM

Roy Marples

dhcpcd changes to svn and trac

After changing openresolv over to trac and svn, I've done the same for dhcpcd. As such, the bugzilla database is now closed to new bugs for dhcpcd and openresolv, and you should now use trac for each. I've migrated the bugs, attachments, resolutions and activity across for both.

These scripts are for bugzilla-3.0.3 and trac-0.11.1 and assume that no custom fields have been added.
They are also coded for specific product IDs and my name - you will need to adjust accordingly.

bugzilla to trac SQL script. It simply creates new tables for use in a trac db - ticket_change_status needs to be copied into ticket_change, though.
bugzilla to trac perl script. Extracts attachments from bugzilla and creates them in the current directory in a structure for use in trac.

TODO - attachment filesize is 0, this needs fixing.

September 25, 2008 07:40 PM

Daniel Robbins

Gentoo 2008.1 Release Solutions

Gentoo seems to be having problems with “.1” releases – 2007.1 was cancelled and now 2008.1 has been cancelled. The Gentoo project has also announced a desire to move to a more “back to basics” approach where they are doing weekly builds of Gentoo stages.

Good idea. As many of you know, I am already building fresh stages for x86, i686, athlon-xp, pentium4, core32, amd64, core64, ~x86 and ~amd64 as well as OpenVZ templates at

Since I’ve been building Gentoo stages for a while, I know that Gentoo’s catalyst tool (the tool used for Gentoo releases) is in poor shape – it has been poorly maintained over the years and has no documentation, so it is not really up to the task of building Gentoo releases anymore.

The lack of catalyst documentation makes it much more difficult for others (like Gentoo users and other Gentoo-based projects) to build their own Gentoo releases, and this, along with the poor state of catalyst itself, tends to perpetuate the centralized Gentoo development model – a model that is not very efficient and also isn’t very much fun.

It is a shame (and somewhat ironic) that a well-renowned build-from-source distribution does not have a decent and well-maintained release building tool. So it’s time to fix this…

In a few weeks, I will be releasing a completely redesigned release build tool called “Metro”. This is the tool that I use to build my daily Funtoo stages, and it supports building both stable and unstable (~) stages. It is much more capable than catalyst and has a much better architecture. Metro is a full recipe-based build engine that will allow the larger Gentoo community to build Gentoo (and even non-Gentoo – it is not Gentoo-specific) releases and stages easily and share their build recipes with others.

Metro allows anyone to set up their own automated builds and greatly simplifies the task of maintaining a web mirror of these builds. It will make it a lot easier for people to create their own Gentoo-based distributions as well.

My focus is on empowering the larger Gentoo community, but I do hope that the official Gentoo project will use Metro for their release engineering efforts – I think it will not only help the Gentoo project but also facilitate collaboration with projects outside Gentoo (by sharing build recipes) and thus help Gentoo to move in a more distributed direction and innovate more quickly. It’s time to get Gentoo back to being a leader of innovation in the world of Linux.

I am currently finalizing some interfaces in Metro before I start writing documentation for the tool. Once the documentation is done (should be in a couple of weeks), I will be releasing Metro to the public. Until then, you can enjoy the fruits of Metro by using my Funtoo stages at .


September 25, 2008 07:22 PM

September 24, 2008

Brian Carper

Westinghouse: It Never Ends

(If you're just tuning in, long story short: I bought a Westinghouse L2410NM monitor November 2007, it broke March 2008, I sent it to Westinghouse (paying for shipping myself), they sent it back to the wrong address and didn't tell me about it for 2 months, I filed a BBB complaint, they didn't respond to that for another couple of months, and seven months and 30+ phone calls later, I still don't have my monitor back.)

My last post about Westinghouse's horrendous customer service and never-ending RMA process was titled "Westinghouse: Finally getting somewhere?". The answer to that is sadly "no".

I got a flurry of phone calls and emails from Westinghouse's corporate office, attempting to settle my BBB complaint. On September 12th, Westinghouse finally responded to the BBB, saying:

Company states, replacement unit shipped 09/10/08

Good news! I was looking forward to posting an end to this horror story.

However, today is September 24th, and guess what? No monitor. I contacted Westinghouse last week, asking for a UPS tracking number so I'd know when to expect my monitor. However, after being promised a phone call last Thursday that never came, and then sending an email Friday which was never answered, and then waiting three more days for good measure, it appears I'm once again being given the runaround.

So today I sent this email to my contact at Westinghouse:

Do you have access to Google? Please search for "westinghouse rma" and look at the top result. I believe it will be my website. I've been carefully documenting all of my adventures with Westinghouse for the past seven(!) months. On my website, many other people have related their own similarly terrible experiences being kept in the dark for months by your customer service departments.

You promised me a phone call on Sept 18th to provide me with a tracking number for my replacement monitor, but I never heard from you. I also never received a reply to the email I sent you since then.

The BBB was informed that a replacement monitor shipped on the 10th. If that was the case, I probably should've had it in my hands by now, given that it's been two weeks. Has it actually even been shipped? I suspect not. I feel as though I'm once again being given the runaround while nothing is done to resolve this issue. Please understand my frustration.

If I don't have a UPS tracking number by Friday, I'm filing a complaint with the FTC and the California Attorney General. They each have a very easy-to-use form for filing complaints.

My website only has a couple thousand readers, but I'm also going to cross-post my story to every online tech news aggregator I can think of, which translates to tens of thousands more potential readers. The story I would like to tell is "Westinghouse finally sent me my monitor after seven months", but I'll tell it either way.

I look forward to hearing from you,

Look for this story on Reddit and Digg on Friday if I don't hear anything.

UPDATE: Well, I got a reply already. That was fast.

Your Fed Ex tracking number is 772xxxxxxxxxxx, you can track the
package at to see the progress of your shipment.
Please keep mind that there was a delay at our warehouse and your unit
is going to ship tonight.

Just a little two-week delay, I guess those things happen. Hopefully if/when it shows up, the monitor actually works. I've burned through seven months of my warranty and somehow I doubt Westinghouse will courteously extend it for me if this monitor fails too.

(Read the whole crappy story of Westinghouse's dishonesty and horrible customer service: The beginning, Update 1, Update 2, Update 3, Update 4, Update 5, Update 6, Update 7, Update 8, Update 9.)

September 24, 2008 09:40 PM :: Pennsylvania, USA  

Roy Marples

openresolv changes to svn and trac

Using Drupal as a CMS is nice - it's worked for me very well.
However, it's not made for project management. I just had a static page that people couldn't add comments or feedback to (well, they could if I enabled comments, but that gets messy after a while). I do have bugzilla to handle bugs, but I find it too overblown and complex for my needs. Don't get me wrong, bugzilla has its place and it's a solid project - it's just not suited to my small site. Could be due to my fanatical dislike of perl.

Also, my company suddenly had a need for a bug tracking system and a colleague of mine suggested trac, which I installed on a server. I had only looked at trac briefly many years ago; it had promise but was lacking in a lot of places. I was pleased to see that a lot of good progress has been made and it's now very usable. So much so that I've decided to install it here, and it now powers the openresolv project page. Because it's made to integrate with subversion, I used git2svn to convert the openresolv git repo trunk into an svn trunk. It's now open for business and anonymous users can create and modify tickets and the wiki (well, parts of the wiki).

So is svn better than git, or is git better than svn? It's a hard one to answer; both have their pluses and minuses. Luckily there is a trac addon that works with git, so I'll give that a try with dhcpcd.

September 24, 2008 09:29 PM

Daniel Robbins

More Git Madness

Today, I spent some time looking at better ways to organize the Portage tree in git, and I'm interested in getting feedback on what I've done.

Please take a look at my new portage-new git repository. This new repository contains both the main tree in the "master" branch, and the tree in the "" branch. This seems to be a much better way to organize things, for the following reasons:

  1. It's space-efficient - the trees are over 99% similar, and now a single clone operation grabs both.
  2. There is a unified history - you can easily see the differences between the trees by typing "git diff master".
  3. The GitHub Network Graph now shows how the and tree relate to one another, which is useful. In the tree, you can see where I'm pulling from.
  4. It allows people to easily switch between both trees with a simple "git checkout" command.
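For the curious, the branch mechanics in points 2 and 4 can be tried out on a throwaway repository. Everything below - paths, branch name, file contents - is invented for illustration; "overlay" merely stands in for the second, unnamed branch:

```shell
# tiny throwaway repo mimicking the two-branch layout
mkdir -p /tmp/portage-demo
cd /tmp/portage-demo
git init -q -b master .
git config demo@example.invalid
git config demo
echo 'original ebuild' > pkg.ebuild
git add pkg.ebuild
git commit -qm 'main tree'
git checkout -qb overlay          # stand-in for the second branch
echo 'patched ebuild' > pkg.ebuild
git commit -qam 'overlay changes'
git diff master --stat            # see how the two trees differ
git checkout -q master            # switching back is a single command
```

Because both branches live in one repository, the near-identical trees share almost all of their objects, which is where the space savings come from.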

If you want to test out portage-new and see how the branches work, please consult my updated wiki documentation but clone "portage-new" rather than "portage". I have the repo name as "portage" in the wiki docs because I'm already anticipating making this tree the official one in a few days.

I think this is probably the repository model to use for Portage git development. If someone wants to use this tree as the basis for their own development, they can clone the tree and create a branch that contains their changes. This will allow them to benefit from the multiple-branch model and facilitate easier integration and diffs with upstream.

Barring any major complaints, in a few days I am probably going to delete my two existing portage git repositories and rename portage-new to portage, and it will become the official one.

Let me know what you think.

September 24, 2008 12:23 AM

Patrick Lauer

Make your Intertubez a nicer place

I've been badly annoyed by some ads lately. As I'm already using AdBlock in Firefox and had started growing a large banlist in Konqueror too (leaving Opera dangerously exposed), I started modifying my approach. So here are my additions to /etc/hosts:

# google
#[Ewido.TrackingCookie.Googleadservices] #[Microsoft.Typo-Patrol] #[Urchin Tracking Module]

# doubleclick
#[MVPS.Criteria] #[Panda.Spyware:Cookie/Doubleclick] #[SunBelt.DoubleClick] #[Tenebril.Tracking.Cookie] #[Lycos]

# [Google/DoubleClick via Falk AdSolution][Falk eSolutions AG]
#[Ewido.TrackingCookie.Falkag] #[McAfee.Adware-Zeno] #[Panda.Spyware:Cookie/Falkag] #[Tenebril.Tracking.Cookie] #[Ad-Aware.Tracking.Cookie]
Et voila. Your Intertubez now have about 75% less braindamage. It's funny to see websites cleaning up on reload ... blink blink reload empty. Only text left ...
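For anyone growing their own list, every entry follows the same pattern: point the unwanted hostname at an unroutable address so the browser can never fetch the ad. A generic sketch (the domains below are just well-known ad hosts used as examples, not my exact list):

```
# /etc/hosts additions - null-route ad/tracking hosts
```

Some lists use instead of, which avoids even a connection attempt to the local machine.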

There's one issue though: it's far from complete. I think I'll need some privoxy added to that to be really happy. If I do, I'll let you know how it goes.

September 24, 2008 12:17 AM

September 23, 2008

Roy Marples

lighttpd out, apache in

You may have noticed an interruption to this service.....

I finally got too irritated with the lighttpd configuration. Seems there are a few fastcgi issues which I'm now seeing. Also, development seems to have stalled.

So, I gave apache another whirl. I don't recall why I changed from apache to lighttpd, but it was probably speed related. That was due to me running this site on an old VIA C3-2 processor, and apache was noticeably slower than lighttpd on that box. This new(ish) server is an AMD64 Sempron (2400) and has the horsepower and memory for apache on this small site.

Anyway, the configuration layout for apache has also changed drastically since I last used it - and for the better! The Gentoo apache team has my thanks for the nice overhaul.

I'm also playing around with trac as a replacement for bugzilla and the dhcpcd project page. I've set it up here against an svn repo I migrated from git a while ago. We'll see if I like it enough to change over.

September 23, 2008 10:40 PM

Jürgen Geuter

Linux does not "need its own Steve Jobs" (repeating wrongs doesn't create rights)

In a break today I found yet another article outlining why "Linux needs its own Steve Jobs for it to be good". We get those quite a lot; it's kind of a Top 10 staple for people with half a brain. Well, here's the final discussion of why that idea is wrong (and retarded), so people can stop writing the same article that was already wrong back in 1999:

I'm not talking about whether Apple's OSX or their whole DRM mess is good or not: people seem to fall for the marketing campaign and the myth that Steve Jobs writes every line of code in any Apple product by hand, so for the sake of the argument let's just go with it. (Of course it's not all fine in Apple land, but that is another post.)

  • Steve Jobs gives the company direction and that makes their product great.
    If that is really your argument, welcome your Master: he's called Mark Shuttleworth and does pretty much exactly that. He has a vision and throws money at the aspects of the Linux stack that he thinks need work (as Greg Kroah-Hartman has pointed out: the kernel and the "backend" don't seem to be a part of that). He does exactly what "mythical" Jobs does: he looks at problems and hires people so he can order them to fix them. Anybody else with some money can do the same. We can create one, two, many Steve Jobses (the question is whether we really want that?)
  • Steve Jobs has visions that push their products where nobody thought about going before.
    Yeah, you're right, Apple has been driven by a vision: to stop being a computer company and turn into a content provider that fights with any dirty trick it can find to lock customers in. Apple does not invent; they revamp iTunes to push more DRM crap down to the customers. If you want to think about innovation, look at what the free software desktop does: integrate your desktop experience more and more, harmonize, standardize. GNOME people are working on a distribution-neutral way to install packages, and the X people might not be fast, but they are starting to really get their shit together and have X work its magic pretty much without tinkering. The whole netbook thingy was only possible because of Linux. Where was Apple? Making the usability horror that is the dock reflective.
  • Steve Jobs can work like that because nobody in the company can work against him without getting fired, which leads to everybody working in one direction.
    If stagnation is what you want, that is the right way to handle things. One community, one software stack, one leader? The fact that everybody can take the whole shebang and modify it to be different is the strength of the free software stack. Yeah, many modifications suck or don't lead anywhere. But somebody tried and looked into it. What about the Pidgin fork? People didn't like the decisions made by the devs, so they forked. If we had the Leader model, that wouldn't happen.
  • Oh, and just as another remark: introducing a single point of failure is never smart. Linus has the main kernel repository and does the releases, but if something happened to him, there are others who have the tree and the knowledge to take over; that is another strength. One person "in charge" means that your whole project dies with that person. Great idea.

It's just like in politics: when things go bad, people cry for a leader to make all the problems magically disappear, and sometimes that happens: Apple stopped being a computer and technology company and turned into a big music store; the "problem" in the technology department was solved by running away into another market. Better than where it usually leads when you get a new leader: Apple did not start a war.

So next time you wanna write about Linux needing a leader, direct your browser to Wikipedia and read.

September 23, 2008 08:10 PM :: Germany  

Dirk R. Gently

A Wic’d Solution

When I first saw NetworkManager back in Ubuntu 6.10 (Edgy Eft), I realized what a godsend it was. Previously, connecting to a wireless network was confusing at best for a new user. I had created scripts that used iwlist, iwconfig, ifconfig…; then NetworkManager came along and made my laptop truly mobile. When I moved to Gentoo, NetworkManager took a bit more to set up, so I wrote the NetworkManager wiki.

Lately though I’ve discovered that NetworkManager doesn’t configure dhcp correctly with certain networks, and I have to configure dhcp manually. This isn’t a big deal, but it is an inconvenience. Recently I heard boasts about another wired/wireless network manager called Wicd, so I decided to give it a try.

In Gentoo it’s easy to set up, just emerge it and add it to the default run level:

sudo emerge -v wicd
sudo rc-update add wicd default

Also, if you are using baselayout’s network-connecting scripts, disable them. Either delete the net.eth0 and net.ath1 links (or whatever they are called), or edit “rc.conf” (located in /etc/ if you are using OpenRC, or in /etc/conf.d/ if you haven’t migrated to OpenRC yet) and set the “rc_plug_services” preference to “!net.*”. Leave net.lo alone though, as loopback will still be needed.
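If I understood the rc.conf route correctly, the relevant bit looks something like this (a sketch of just that one setting, not a complete rc.conf):

```
# /etc/rc.conf (or under /etc/conf.d/ on pre-OpenRC baselayout)
# Stop net.* services from being hotplugged; net.lo is not affected.
rc_plug_services="!net.*"
```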

Stop NetworkManager daemon and load Wicd daemon:

sudo /etc/init.d/NetworkManager stop
sudo /etc/init.d/wicd start

Restart the X server so the applet is loaded.

Wicd claims to work well with lightweight desktops. Clicking on the notification icon will bring up the Wicd Manager.


Wicd will not connect automatically to a network unless that option is selected, which I think is a good idea.


The preferences dialog of Wicd allows connecting to more difficult networks.



Looks like I’ve got a new network manager. Thanks to the developers of Wicd.


September 23, 2008 06:31 PM :: WI, USA  

Jürgen Geuter


Been terribly busy in the last few days writing other stuff so there was no time to post. A few short blurbs:

  • Everything is Miscellaneous - The Power of the New Digital Disorder by David Weinberger is a brilliant book. It's cheap, so get it if you are at all interested in how to present knowledge. It's about tagging, categorizing and how those work. Written in a clear but still very witty way, it's a pleasure to read.
  • Been really getting into Juno Reactor lately...
  • "Star Wars - The Force Unleashed" on the Wii looks horrible and plays very generically. I don't know if I'm spoiled or whatever, but while the story might be OK, the game itself is pretty boring; the motion controls feel like they were just thrown in because they could be, and many things don't make too much sense.
  • Spore is boring. Or I have not found the game in that thing. You never know.
  • I've been thinking about netbooks lately. The Acer Aspire One looks neat, but why the hell do the Linux versions of netbooks often get the short end of the stick when it comes to RAM? Any netbook owners reading this? What do you own and how do you like it? I want 1024 screen width and Linux.
  • I feel somewhat dirty for posting this almost "Micro-bloggy" post.

September 23, 2008 09:05 AM :: Germany  

September 22, 2008


Keeping a hostname even when not on lan

I connect to my home server from my laptop: from my LAN when I'm at home, and from the internet when I'm not.

My server has a static IP address on my LAN ( and a dyndns name on the internet.

The server's hostname is "fandango" and the dyndns name is something like "".

I had this line in my /etc/hosts: fandango

This configuration was a pain in the ass, because from home I had to "ssh TopperH@fandango", while from outside I had to "ssh". I also had duplicate passwords saved in my web browser, duplicate quassel configurations, etc.

The idea is to always refer to my server as "fandango", whether at home or not, so I made two scripts and created a postup hook in my /etc/conf.d/net.


#!/bin/bash
# Script 1 (away from home): point "fandango" at the current dyndns address.
MYFILE=/etc/hosts
OLDHOST=`grep fandango $MYFILE | awk '{ print $1 }'`   # address currently bound to fandango
# The dyndns hostname was elided above; $DYNDNS_NAME is a hypothetical stand-in.
NEWHOST=`host "$DYNDNS_NAME" | gawk '{print $4}'`
# Write to a temporary file first: redirecting straight onto /etc/hosts would
# truncate it before sed gets to read it.
sed s/$OLDHOST/$NEWHOST/ $MYFILE > /etc/hosts.tmp && mv /etc/hosts.tmp /etc/hosts

#!/bin/bash
# Script 2 (at home): point "fandango" back at the server's static LAN address.
MYFILE=/etc/hosts
OLDHOST=`grep fandango $MYFILE | awk '{ print $1 }'`
NEWHOST="$LAN_ADDRESS"   # hypothetical stand-in for the static LAN IP (elided above)
sed s/$OLDHOST/$NEWHOST/ $MYFILE > /etc/hosts.tmp && mv /etc/hosts.tmp /etc/hosts


postup() {
    # The script paths and the ppp1/ppp2 mapping were elided in the original;
    # the names below are hypothetical stand-ins.
    if [[ ${IFACE} == "ppp1" ]] ; then
        /usr/local/sbin/
    elif [[ ${IFACE} == "ppp2" ]] ; then
        /usr/local/sbin/
    fi
    return 0
}

I'm sure there are more elegant ways to achieve the same results, and comments are welcome... By the way, it just works :)

September 22, 2008 04:40 PM :: Italy  


My God, it's Full of XML

In recent posts we looked at a native XML database called DBXML, and at where XML came from.

You may find yourself in a situation where you are given a pile of XML documents, possibly broken, and it is up to you to make sense of them. This post explains some tools that can form your first-aid kit for dealing with problem XML documents.

Shine like a star(let)

xmlstarlet is available from your friendly neighbourhood package manager or from the xmlstarlet website

xmlstarlet is a command line toolkit that provides various XML-related helpers. For details on all the xmlstarlet tools, type:

xmlstarlet --help

Brock wrote recently about using xmlstarlet's select tool, which allows you to use XPath expressions to query your XML.

Viewing the element structure

Another handy xmlstarlet tool is the element structure viewer, which provides a friendly, XPath-style view into the XML document.

xmlstarlet el filename.xml

I tend to use the -u option, which only shows the unique lines:

xmlstarlet el -u filename.xml

There is also -a for attributes and -v for the attribute values as well.

Checking for well-formed XML documents

The most useful xmlstarlet tool for me has been the XML validator, which tests whether your documents are well-formed or not. You use the tool as follows:

xmlstarlet val xmlfile.xml

It also has a number of options, the main one I have used is to validate against a Document Type Definition:

xmlstarlet val -d dtdfile.dtd xmlfile.xml

Tidying up your XML files

Sometimes programs output really ugly looking XML. So when you have made sure your document is well-formed with xmlstarlet, you might want to tidy it up a bit before letting anyone else see it.

Xmltidy is a handy little Java program that loads your XML document into memory and then outputs it in a nice looking form with linebreaks and indentation.

This is especially useful when you have a collection of XML files that are referencing each other. Xmltidy will combine them into a nice looking XML document.

Download the jar file from the xmltidy homepage, and then run:

java -jar xmltidy.jar --input oldfile.xml --output newfile.xml

Dealing with Unicode problems

Some of the most annoying problems with XML files arise when a file's encoding is not valid UTF-8 and some program rejects it.

I found a really nice package called uniutils, which is again available from your friendly neighbourhood package manager or from the uniutils website.

Like xmlstarlet, this gives you various utilities; the main one I use checks whether my XML files are valid UTF-8 Unicode. It gives useful error messages when a file is not Unicode. You can then check the file in a text editor and/or hex viewer (e.g. GHex) to see what the problem is. So to validate an XML file, we simply go:

uniname -V filename.xml

If it has non-unicode characters, you will receive errors such as:

Invalid UTF-8 code encountered at line 215, character 115037, byte 115036. The first byte, value 0x82, with bit pattern 10000010, is not a valid first byte of a UTF-8 sequence because its high bits are 10.

So the character with hex value 0x82 is not a valid character in the UTF-8 encoding. In Emacs, you can look at the character by typing:

M-x goto-char 115037

Or you can open your hex editor. In GHex, you can go to the Edit menu and use the "Goto Byte" feature to jump to the problem character.

That works for one character. If we want to recursively check all XML files within a directory, we can use find:

find . -name '*.xml' -print -exec uniname -V {} \;

So now let's imagine we find that the files have a non-Unicode character with the hex value 0x82, as above. We might then want to replace it with a character or entity; the following use of find and sed replaces all occurrences of the byte 0x82 with C:

find . -iname '*.xml' -exec sed -i 's/\x82/C/g' {} \;

This can help a lot, as most XML programs will reject files with inconsistent encoding.
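To convince yourself the substitution does what it should, here is a self-contained check on a scratch file (the file name and contents are invented; octal \202 is the byte 0x82):

```shell
# create a file containing a stray 0x82 byte (octal 202) between valid ASCII
printf 'ab\202cd\n' > /tmp/broken.xml
# replace the invalid byte, as the find command above does for a whole tree
sed -i 's/\x82/C/g' /tmp/broken.xml
cat /tmp/broken.xml
# → abCcd
```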


These are my tips for dealing with a pile of broken XML files. If you have any tips or suggestions of your own, please share them by leaving a comment below.

In some future posts, we will look at using XML with Python, and with the Django web framework.

Thanks to Andy and Nick for help with this post, and the title was based on Tommi Virtanen's fantastic Europython talk.



September 22, 2008 02:49 PM :: West Midlands, England  

Brian Carper

Practicality: PHP vs. Lisp?

Eric at LispCast wrote an article about why PHP is so ridiculously dominant as a web language, when arguably more powerful languages like Common Lisp linger in obscurity.

I think the answer is pretty easy. In real life, practicality usually trumps everything else. Most programmers aren't paid to revolutionize the world of computer science. Most programmers are code monkeys, or to put it more nicely, they're craftsmen who build things that other people pay them to create. The code is a tool to help people do a job. The code is not an end in itself.

In real life, here's a typical situation. You have to make a website for your employer that collects survey data from various people out in the world, in a way that no current off-the-shelf program quite does correctly. If you could buy a program to do it that'd be ideal, but you can't find a good one, so you decide to write one from scratch. The data collection is time-sensitive and absolutely must start by X date. The interface is a web page, and people are going to pointy-clicky their way through, and type some numbers, that's it; the backend just doesn't matter. For your server, someone dug an old dusty desktop machine out of a closet and threw Linux on there for you and gave you an SSH account. Oh right, and this project isn't your only job. It's one of many things you're trying to juggle in a 40-hour work week.

One option is to write it in Common Lisp. You can start by going on a quest for a web server. Don't even think about mod_lisp, would be my advice, based on past experience. Hunchentoot is good, or you can pay a fortune for one of the commercial Lisps. If you want you could also look for a web framework; there are many to choose from, each more esoteric, more poorly documented and harder to install than the last. Then you get to hunt for a Lisp implementation that actually runs those frameworks. Then you get to try to install it and all of your libraries on your Linux server, and on the Windows desktop machine you have to use as a workstation. Good luck.

Once you manage to get Emacs and SLIME going (I'm assuming you already know Emacs intimately, because if you don't, you already lose) you get to start writing your app. Collecting data and moving it around and putting it into a database and exporting it to various statistics packages is common, so you'd do well to start looking for some libraries to help you out with such things. In the Common Lisp world you're likely not to find what you need, or if you're lucky, you'll find what you need in the form of undocumented abandonware. So you can just fix or write those libraries yourself, because Lisp makes writing libraries from scratch easy! Not as easy as downloading one that's already been written and debugged and matured, but anyways. Then you can also roll your own method of deploying your app to your server and keeping it running 24/7, which isn't quite so easy. If you like, you can try explaining your hand-rolled system to the team of sysadmins in another department who keep your server machine running.

Don't bet on anyone in your office being able to help you with writing code, because no one knows Lisp. Might not want to mention to your boss that if you're run over by a bus tomorrow, it's going to be impossible to hire someone to replace you, because no one will be able to read what you wrote. When your boss asks why it's taking you so long, you can mention that the YAML parser you had to write from scratch to interact with a bunch of legacy stuff is super cool and a lovely piece of Lisp code, even if it did take you a week to write and debug given your other workload.

Be sure to wave to your deadline as it goes whooshing by. If you're a genius, maybe you managed to do all of the above and still had time to roll out a 5-layer-deep Domain Specific Language to solve all of your problems so well it brings tears to your eyes. But most of us aren't geniuses, especially on a tight deadline.

Another option is to use PHP. Apache is everywhere. MySQL is one simple apt-get away. PHP works with no effort. You can download a single-click-install LAMP stack for Windows nowadays. PHP libraries for everything are everywhere and free and mature because thousands of people already use them. The PHP official documentation is ridiculously thorough, with community participation at the bottom of every page. Google any question you can imagine and you come up with a million answers because the community is huge. Or walk down the hall and ask anyone who's ever done web programming.

The language is stupid, but stupid means easy to learn. You can learn PHP in a day or two if you're familiar with any other language. You can write PHP code in any editor or environment you want. Emacs? Vim? Notepad? nano? Who cares? Whatever floats your boat. Being a stupid language also means that everyone knows it. If you jump ship, your boss can throw together a "PHP coder wanted" ad and replace you in short order.

And what do you lose? You have to use a butt-ugly horrid language, but the price you pay in headaches and swallowed bile is more than offset by the practical gains. PHP is overly verbose and terribly inconsistent and lacks powerful methods of abstraction and proper closures and easy-to-use meta-programming goodness and Lisp-macro syntactic wonders; in that sense it's not a very powerful language. Your web framework in PHP probably isn't continuation-based, it probably doesn't compile your s-expression HTML tree into assembler code before rendering it.

But PHP is probably the most powerful language around for many jobs if you judge by the one and only measure that counts for many people: wall clock time from "Here, do this" to "Yay, I'm done, it's not the prettiest thing in the world but it works".

The above situation was one I experienced at work, and I did choose PHP right from the start, and I did get it done quickly, and it was apparently not too bad because everyone likes the website. No one witnessed the pain of writing all that PHP code, but that pain doesn't matter to anyone but the code monkey.

If I had to do it over again I might pick Ruby, but certainly never Lisp. I hate PHP more than almost anything (maybe with the exception of Java) but I still use it when it's called for. An old rusty wobbly-headed crooked-handled hammer is the best tool for the job if it's right next to you and you only need to pound in a couple of nails.

September 22, 2008 09:17 AM :: Pennsylvania, USA  


Ohloh and the popularity of programming languages in free and open source software

I came across my name on a site called Ohloh. I remember it coming out a few years ago. Now that it has had time to really get going, I thought it was about time that I review the site here.

Ohloh tracks the free/open source software it knows about; it only tracks code held in CVS, Subversion or Git (i.e. not in Bazaar, which I tend to use, or Mercurial), in repositories that it can easily find. Despite the limitations, this is a very large amount of code.

Ohloh tries to figure out from the commits who the developers are, and thus my name came up (because of a very minor contribution to Gentoo once upon a time).

Ohloh also tries to figure out the usage of programming languages in free/open source software. It allows you to produce various graphs; those below are based on the total number of active free/open source projects for each language.

Some important caveats to bear in mind:

  • Ohloh only tracks how a language is being used in free/open source software; the majority of code written in the world runs on in-house systems, and this code is often never shared externally.
  • The percentage figures may be somewhat lower than one would expect, because their definition of a language is rather weaker than I would personally use: many markup formats such as HTML or XML, and other specialised syntaxes, are counted as programming languages even though they are not Turing-complete.
  • These are relative percentages; we are comparing languages against each other. All languages featured here are growing steadily in terms of the absolute number of free/open source programmers using them, so essentially what we are doing here is comparing the speed at which languages are growing.

Regular readers will know that I like high-level, general-purpose, dynamic languages; so let's start with them:

Go Python! Of course these figures might be completely meaningless as Perl is often used by sys-admins who rarely share their code using public revision control repositories.

Now let's look at the big beasts, the major compiled languages. These bread-and-butter languages seem to be stabilising at roughly equal percentages:

Platform-oriented proprietary languages are not heavily used in free/open source software, as you might expect; however, let's compare two against each other: Microsoft's C# versus Apple's Objective-C:

C# is stronger, which is not surprising considering the vast difference in user numbers between Windows and OS X.

A more interesting question is whether the rising use of C# in free/open source software is evidence of a developing accommodation between the Microsoft world and the Free World.

At least that is until Microsoft next calls us all cancer and threatens to sue the whole free/open source world again.

Interesting stuff, let me know if you come up with any interesting comparisons.

Discuss this post - Leave a comment

September 22, 2008 12:13 AM :: West Midlands, England  

September 21, 2008

Martin Matusiak

git by example - upgrade wordpress like a ninja

I addressed the issue of wordpress upgrades once before. That was a hacky home grown solution. For a while now I’ve been using git instead, which is the organized way of doing it. This method is not specific to wordpress, it works with any piece of code where you want to keep current with updates, and yet you have some local modifications of your own.

To recap the problem shortly.. you installed wordpress on your server. Then you made some changes to the code, maybe you changed the fonts in the theme, for instance. (In practice, you will have a lot more modifications if you’ve installed any plugins or uploaded files.) And now the wordpress people are saying there is an upgrade available, so you want to upgrade, but you want to keep your changes.

If you are handling this manually, you now have to track down all the changes you made, do the upgrade, and then go over the list and see if they all still apply, and if so re-apply them. git just says: you’re using a computer, you git, I’ll do it for you. In fact, with git you can keep track of what changes you have made and have access to them at any time. And that’s exactly what you want.

1. Starting up (the first time)

The first thing you should find out is which version of wordpress you’re running. In this demo I’m running 2.6. So what I’m going to do is create a git repository and start with the wordpress-2.6 codebase.

# download and extract the currently installed version
tar xzvf wordpress-2.6.tar.gz
cd wordpress
# initiate git repository
git-init
# add all the wordpress files
git-add .
# check status of repository
git-status
# commit these files
git-commit -m'check in initial 2.6.0 upstream'
# see a graphical picture of your repository
gitk --all

Download this code: git_wordpress_init

This is the typical way of initializing a repository: you run an init command to get an empty repo (you’ll notice a .git/ directory was created). Then you add some files and check the status. git will tell you that you’ve added lots of files, which is correct. So you make a commit. Now you have one commit in the repo. You’ll want to use the gui program gitk to visualize the repo; I think you’ll find it’s extremely useful. This is what your repo looks like now:

gitk is saying that you have one commit, it’s showing the commit message, and it’s telling you that you’re on the master branch. This may seem odd seeing as how we didn’t create any branches, but master is the standard branch that every repository gets on init.

The plan is to keep the upstream wordpress code separate from your local changes, so you’ll only be using master to add new wordpress releases. For your own stuff, let’s create a new branch called mine (the names of branches don’t mean anything to git, you can call them anything you want).

# create a branch where I'll keep my own changes
git-branch mine
# switch to mine branch
git-checkout mine
# see how the repository has changed
gitk --all

Download this code: git_wordpress_branch

When we now look at gitk the repository hasn’t changed dramatically (after all we haven’t made any new commits). But we now see that the single commit belongs to both branches master and mine. What’s more, mine is displayed in boldface, which means this is the branch we are on right now.

What this means is that we have two branches, but they currently have the exact same history.

2. Making changes (on every edit)

So now we have the repository all set up and we’re ready to make some edits to the code. Make sure you do this on the mine branch.

If you’re already running wordpress-2.6 with local modifications, now is the time to import your modified codebase. Just copy your wordpress/ directory to the same location. This will obviously overwrite all the original files with yours, and it will add all the files that you have added (plugins, uploads etc). Don’t worry though, this is perfectly safe. git will figure out what’s what.

Importing your codebase into git only needs to be done the first time, after that you’ll just be making edits to the code.

# switch to mine branch
git-checkout mine
# copy my own tree into the git repository mine branch
#cp -ar mine/wordpress ..
# make changes to the code
#vim wp-content/themes/default/style.css
# check status of repository
git-status

Download this code: git_wordpress_edit

When you check the status you’ll see that git has figured out which files have changed between the original wordpress version and your local one. git also shows the files that are in your version but not in the original wordpress distribution as “untracked files”, i.e. files that are lying around that you haven’t yet asked git to keep track of.

So let’s add these files and from now on every time something happens to them, git will tell you. And then commit these changes. You actually want to write a commit message that describes exactly the changes you made. That way, later on you can look at the repo history and see these messages and they will tell you something useful.

# add all new files and changed files
git-add .
# check in my changes on mine branch
git-commit -m'check in my mods'
# see how the repository has changed
gitk --all

Download this code: git_wordpress_commit

When you look at the repo history with gitk, you’ll see a change. There is a new commit on the mine branch. Furthermore, mine and master no longer coincide: mine originates from (is based on) master, which is why the two dots are connected with a line.

What’s interesting here is that this commit history is exactly what we wanted. If we go back to master, we have the upstream version of wordpress untouched. Then we move to mine, and we get our local changes applied to upstream. Every time we make a change and commit, we’ll add another commit to mine, stacking all of these changes on top of master.

You can also use git-log master..mine to see the commit history, and git-diff master..mine to see the actual file edits between those two branches.

3. Upgrading wordpress (on every upgrade)

Now suppose you want to upgrade to wordpress-2.6.2. You have two branches, mine for local changes, and master for upstream releases. So let’s change to master and extract the files from upstream. Again you’re overwriting the tree, but by now you know that git will sort it out. ;)

# switch to the master branch
git-checkout master
# download and extract new wordpress version
cd ..
tar xzvf wordpress-2.6.2.tar.gz
cd wordpress
# check status
git-status

Download this code: git_wordpress_upgrade

Checking the status at this point is fairly important, because git has now figured out exactly what has changed in wordpress between 2.6 and 2.6.2, and here you get to see it. You should probably look through this list quite carefully and think about how it affects your local modifications. If a file is marked as changed and you want to see the actual changes you can use git-diff <filename>.

Now you add the changes and make a new commit on the master branch.

# add all new files and changed files
git-add .
# commit new version
git-commit -m'check in 2.6.2 upstream'
# see how the repository has changed
gitk --all

Download this code: git_wordpress_commitnew

When you now look at the repo history there’s been an interesting development. As expected, the master branch has moved on one commit, but since this is a different commit than the one mine has, the branches have diverged. They have a common history, to be sure, but they are no longer on the same path.

Here you’ve hit the classical problem of a user who wants to modify code for his own needs. The code is moving in two different directions, one is upstream, the other is your own.

Now cheer up: git knows how to deal with this situation. It’s called “rebasing”. First we switch back to the mine branch, and then we use git-rebase, which takes all the commits in mine and stacks them on top of master again (i.e. we re-base our commits on master).

# check out mine branch
git-checkout mine
# stack my changes on top of master branch
git-rebase master
# see how the repository has changed
gitk --all

Download this code: git_wordpress_rebase

Keep in mind that rebasing can fail. Suppose you made a change on line 4, and the wordpress upgrade also made a change on line 4. How is git supposed to know which of these to use? In such a case you’ll get a “conflict”. This means you have to edit the file yourself (git will show you where in the file the conflict is) and decide which change to apply. Once you’ve done that, git-add the file and then git-rebase --continue to keep going with the rebase.

Although conflicts happen, they are rare. All of your changes that don’t touch the parts changed by the upgrade will be applied automatically to wordpress-2.6.2, as if you were doing it yourself. You’ll only hit a conflict in a case where, had you been doing this manually, it would not have been obvious how to apply your modification.

Once you’re done rebasing, your history will look like this. As you can see, all is well again, we’ve returned to the state that we had at the end of section 2. Once again, your changes are based on upstream. This is what a successful upgrade looks like, and you didn’t have to do it manually. :cap:


Don’t be afraid to screw up

You will, lots of times. The way that git works, every working directory is a full copy of the repository. So if you’re worried that you might screw up something, just make a copy of it before you start (you can do this at any stage in the process), and then you can revert to that if something goes wrong. git itself has a lot of ways to undo mistakes, and once you learn more about it you’ll start using those methods instead.

Upgrade offline

If you are using git to upgrade wordpress on your web server, make a copy of the repo before you start, then do the upgrade on that copy. When you’re done, replace the live directory with the upgraded one. You don’t want your users to access the directory while you’re doing the upgrade, both because it will look broken to them, and because errors can occur if you try to write to the database in this inconsistent state.
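The shuffle described above can be sketched in a few shell lines; the directory names are made up, and the echo merely stands in for the git upgrade from section 3:

```shell
set -e
cd "$(mktemp -d)"                            # stand-in for your web root
mkdir wordpress && echo 2.6 > wordpress/version.txt

cp -a wordpress wordpress.upgrade            # work on a copy; the live site is untouched
echo 2.6.2 > wordpress.upgrade/version.txt   # ...the actual git upgrade happens here...

mv wordpress wordpress.old                   # the swap itself is two quick renames
mv wordpress.upgrade wordpress
```

Keeping wordpress.old around for a while also gives you an easy way back if the upgrade turns out badly.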

Keep your commits small and topical

You will probably be spending most of your time in stage 2, making edits. It’s good practice to make a new commit for every topical change you make. So if your goal is to “make all links blue”, then you should make all the changes related to that goal, and then commit. By working this way, you can review your repo history and see what you tried to accomplish and what you changed for each little goal.

Revision control is about working habits

You’ve only seen a small, albeit useful, slice of git in this tutorial. git is a big and complicated program, but as with many other things, it already pays off if you know a little about it, it allows you to be more efficient. So don’t worry about not knowing the rest, it will come one step at a time. And above all, git is all about the way you work, which means you won’t completely change your working habits overnight, it will have to be gradual.

This tutorial alone should show you that it’s entirely possible to keep local changes and still upgrade frequently without a lot of effort or risk. I used to dread upgrades, thinking it would be a lot of work and my code would break. I don’t anymore.

September 21, 2008 08:19 PM :: Utrecht, Netherlands  


Django FreeComments cleanup script

This site uses the comments module provided by the Django web framework; in particular, it uses the FreeComment model to allow you to leave comments. One field I had not used so far was the "approved" field: I had simply put all the comments up on the web straight away, and just deleted the occasional spam that managed to beat the system.

Now, however, I have decided to use the approved field. I will still put comments up straight away, but now I will set the ones I have read to approved, allowing me to view new comments behind the scenes.

One flaw in this plan is that I needed to set the existing comments to approved.

I could have just gone:

# Set all comments to approved
comments = FreeComment.objects.filter(approved=0)
for comment in comments:
    comment.approved = 1
    comment.save()

But I was not 100% sure that the odd piece of spam had not slipped through, so while eating my morning porridge, I turned it into a really simple command-line adventure game.

Just in case it is useful to anyone, here it is below. I actually typed the whole thing into the shell, but ipython has a lovely history command that allows you to output everything you wrote.

Obviously, LOCATION_OF_DJANGO_PROJECT needs to be set to the directory that contains your Django project, not the project directory itself.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Simple and ugly script to sort out FreeComments."""

# Configure the following three variables:

URL = ""
LOCATION_OF_DJANGO_PROJECT = "/home/django/sites/"
CLEAR_COMMAND = "clear" # For Windows use CLS


import os
import sys

# Add Django project to Path
sys.path.append(LOCATION_OF_DJANGO_PROJECT)

# The following magic spell sets up the Django Environment
from django.core.management import setup_environ
from basic import settings
setup_environ(settings)

# Get the FreeComment model
from django.contrib.comments.models import FreeComment

def main():
    """Cycle through the comments, offer a simple choice."""
    # Get all the unapproved comments
    comments = FreeComment.objects.filter(approved=0)
    print "There are", len(comments), "comments to judge.\n"

    # Go through the comments
    for comment in comments:
        # Clear the screen between comments
        os.system(CLEAR_COMMAND)
        # Show the hyperlink to the comment,
        # In case you want to check it in the browser
        print URL + comment.get_absolute_url()
        # Comment name
        print comment.person_name, "said:"
        try:
            # Comment text
            print comment.comment
        except UnicodeEncodeError:
            # The world is a big place.
            print "[something in unicode]"
        print "\n\n"

        # Now offer choice at the command line
        print "Do you approve this comment?"
        print "Press y for yes, d for delete, " + \
              "nothing for skip, anything else to exit."
        answer = raw_input()
        if answer == "y":
            comment.approved = 1
            comment.save()
        elif answer == "d":
            comment.delete()
        elif answer == "":
            continue
        else:
            sys.exit()

# Start the ball rolling.
if __name__ == '__main__':
    main()
    print "All done."

So pretty dumb, but publishing it here might save someone five minutes.

Discuss this post - Leave a comment

September 21, 2008 12:22 PM :: West Midlands, England  

September 20, 2008


Forwarding local mail to Gmail using postfix

On my workstation I have postfix set up to deliver local mail to a maildir in my $HOME, so that I can read it using my mail client of choice.

I also have a server, and I often forget to ssh into it and open mutt to read the emails that the system (mostly cron) sends me.

I know there are simple ways to be notified every time I open a console, for example this:

echo "MAILCHECK=30" >> ~/.bashrc
echo 'MAILPATH=~/.maildir/new?"You have a new mail. Read it with 'mutt'."' >> ~/.bashrc

But as long as the server works fine, I don't need to log in that often.

So, why not send all the local mail to my gmail account, so that I can read it wherever I am, even on my BlackBerry? Here I found a nice howto.

First of all I need postfix set up:

# emerge -C ssmtp
# echo mail-mta/postfix mbox pam sasl ssl >> /etc/portage/package.use
# emerge postfix

Once it is emerged, I edit /etc/postfix/ being careful to replace XXX with something meaningful:

inet_interfaces =
relayhost = []:587
smtp_use_tls = yes
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_tls_cert_file = /etc/postfix/XXX-cert.pem
smtp_tls_key_file = /etc/postfix/XXX-key.pem
smtp_tls_session_cache_database = btree:/var/run/smtp_tls_session_cache
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/saslpass
smtpd_sasl_local_domain = $myhostname
smtp_sasl_security_options = noanonymous

Then, according to this tutorial, I create the TLS certificate:

# /etc/ssl/misc/ -newca
# openssl req -new -nodes -subj '/' -keyout XXX-key.pem -out XXX-req.pem -days 3650

Domain, name, country, state, location and email address must be substituted and remembered, to be used in the next step (once again, XXX must be filled in as above):

# openssl ca -out XXX-cert.pem -infiles XXX-req.pem
# cp demoCA/cacert.pem XXX-key.pem XXX-cert.pem /etc/postfix
# chmod 644 /etc/postfix/XXX-cert.pem /etc/postfix/cacert.pem
# chmod 400 /etc/postfix/XXX-key.pem

Now I edit /etc/postfix/saslpass using my gmail username and password:


and I create the associated hash file:

# cd /etc/postfix
# postmap saslpass
# chmod 600 saslpass
# chmod 644 saslpass.db

Now, as regular user, specify the local forward:

$ echo '' > ~/.forward

I also set up local aliases in /etc/mail/aliases:

root: username
operator: username

Postfix needs a few commands before being started:

# postfix upgrade-configuration
# postfix check
# newaliases
# /etc/init.d/postfix start

Now all my local emails should be sent to my gmail account; let's see if things are working:

# emerge -av mail-client/mailx
andrea@fandango ~ $ mail root
Subject: postfix works?
Yes it does!!!

This is the output of /var/log/messages:

Sep 20 13:58:40 fandango postfix/pickup[23235]: 3F61AF066C: uid=1000 from=
Sep 20 13:58:40 fandango postfix/cleanup[23243]: 3F61AF066C: message-id=<20080920115840.3f61af066c@localhost>
Sep 20 13:58:40 fandango postfix/qmgr[23239]: 3F61AF066C: from=, size=339, nrcpt=1 (queue active)
Sep 20 13:58:40 fandango postfix/cleanup[23243]: 41AAEF066B: message-id=<20080920115840.3f61af066c@localhost>
Sep 20 13:58:40 fandango postfix/qmgr[23239]: 41AAEF066B: from=, size=471, nrcpt=1 (queue active)
Sep 20 13:58:40 fandango postfix/qmgr[23239]: 3F61AF066C: removed
Sep 20 13:58:40 fandango postfix/local[23245]: 3F61AF066C: to=, orig_to=, relay=local, delay=0.01, delays=0.01/0/0/0, dsn=2.0.0, status=sent (forwarded as 41AAEF066B)
Sep 20 13:58:43 fandango postfix/qmgr[23239]: 41AAEF066B: removed
Sep 20 13:58:43 fandango postfix/smtp[23246]: 41AAEF066B: to=, orig_to=,[]:587, delay=3.3, delays=0/0/1.4/1.9, dsn=2.0.0, status=sent (250 2.0.0 OK 1221912634 12sm2163798fgg.0)

September 20, 2008 12:17 PM :: Italy  

Martin Matusiak

Dear Nokia

I’m confused.

You’re making these internet tablets with a keyboard, built-in wlan and bluetooth. It looks like a pretty complete mini-desktop device. The KDE people are really excited about running KDE on it, that’s wonderful.

There’s just one big question mark here. Why do I need a little computer that gives me internet access? I don’t know about you, but where I live there are computers everywhere I turn: at home, at school, at work. And if I really needed a smaller one I would get the Acer Aspire One, which is much more powerful and useful than your tablets (and it’s in the same price range!).

Because, you see, if I’m not at home or school or work, I don’t have an internet connection. So your “portable internet device” just becomes a portable without connectivity. No different from my laptop.

I wonder… is there anything that would make this “portable” more useful? Perhaps some kind of universal communications network that doesn’t require a nearby wireless access point? Like say, the phone network? I hear you’re flirting with the idea of building phones, yes?

So why not build the phone into the “internet tablet”? That would actually give it something my laptop doesn’t have, it’d give me a reason to buy it. I mean you’ve already put everything else a modern phone has on the tablet, how hard could it be to add a phone?

I’ll tell you what, I’m in the market for one at the moment. I’ve never bought a Nokia product in my life, so this is your big chance. Do we have a deal?

September 20, 2008 10:57 AM :: Utrecht, Netherlands  


The history of XML

XML did not fall from heaven (or if you prefer, arise out of hell) fully completed. Instead there was a long process of standardisation.

In 1969, Bob Dylan started his comeback at the Isle of Wight festival, Elvis began his in Las Vegas, Elton John released his first record, and David Bowie's Space Oddity coincided with the Apollo 11 mission to the Moon.

Meanwhile, also in 1969, at IBM, Goldfarb, Mosher and Lorie were working on an application for legal offices. They decided to create a standardised high-level markup language that was independent of whatever control codes your printer used. They named this markup language after their initials: GML.

A decade later, ANSI (the American National Standards Institute) began developing a standard for information exchange based on GML. This became SGML, the 'Standard Generalized Markup Language', which became an ISO (International Organization for Standardization) standard in 1986.

In 1991, CERN physicist Tim Berners-Lee released his Internet-based hypertext system, the 'World-Wide Web', which used a particularly dirty SGML variant called HTML, the 'HyperText Markup Language'. HTML was dirty SGML because it went against the separation of content from presentation, with <b>, <center>, <font>, <blink>, <marquee> and other in-line monstrosities.

Despite being a complete hack and the bane of SGML purists, HTML propelled SGML out of the academic, literary and textual processing circles into the wider world. Angle brackets had taken over the world.

SGML had many features and very few restrictions, so one program might implement a certain subset of SGML while another program implemented a different subset, breaking the whole point of SGML, which was to be a common information exchange format.

So, in a perhaps futile attempt to establish order out of chaos, an international working group (formed under yet more international quangos) worked from 1996 to 1998 to define a subset of SGML called XML, the 'Extensible Markup Language', which aimed to be simpler, stricter, easier to implement and more interoperable. A note by James Clark, the leader of the original technical group, explains the differences between SGML and XML. Over the last decade XML has been constantly revised and improved.

Of course, programs still implement XML in different ways, and one may find loads of marked-up files that are somewhere between SGML and XML, as well as program- or group-specific non-standard behaviour.

The most enthusiastic XML advocates will recommend using XML for everything, including brushing your teeth. However, to be brutally honest, one uses XML when one is forced to.

XML does work better in some situations than others, for example, when you want to pass non-relational data between arbitrary systems, then XML works quite well.

In a future post, we will look at what to do if you find yourself having to sort out a pile of random XML files.

Discuss this post - Leave a comment

September 20, 2008 10:53 AM :: West Midlands, England  

September 19, 2008

Daniel Robbins

New Git Funtoo Tutorial

For those of you interested in learning more about the Funtoo Portage tree, I have written a nice tutorial which you can view at

This tutorial explains how to use git, how to use the Funtoo Portage tree for development, and how to easily fork the tree for your own collaborative projects.

Enjoy! :)

September 19, 2008 07:25 PM

Bryan Østergaard

Software Freedom Day + Planet Larry

Tomorrow is Software Freedom Day - a yearly event where people all over the world get together to celebrate free software, enjoy talks related to free software and just as importantly get to meet lots of people.

If you happen to be in Copenhagen tomorrow you can meet myself and several other people from SSLUG at Copenhagen Business school. SSLUG's SFD program includes talks on Free Software, Linux, Open Office and GIMP. Everybody else can look up their local Software Freedom Day events - there's more than 500 teams registered all over the world so there's probably going to be an event nearby.

And regarding Planet Larry... Steve Dibb just announced that he's setting up a feed for retired Gentoo developers, which is very good news in my opinion. Lots of retired developers blog and they often have interesting comments on things related to Gentoo or tips that other people can benefit from. And this way people can know whether the blog posts they're reading come from a normal user or a retired developer. I would probably have preferred marking retired developers another way instead of having multiple feeds, but I can see why some people want to know who's who, and I'd much rather have a separate feed than nothing at all. Oops, I was a bit too quick - ex-devs are now going in the main feed instead and will be marked using colour or some other way instead of a separate feed.

And since I've been having this discussion with Steve on and off for quite some time: Thank you Steve :)

September 19, 2008 07:06 PM

Steve Dibb

I don’t know about anyone else, but every time I want to go to Planet Larry, I still type in, even though I ditched the domain a few months ago.

Well, I got tired of it not working, so I re-registered it, and it redirects once again as normal.

Also, we can always use more bloggers — if you have a Gentoo blog, lemme know about it, and we’ll get you added.  It’s a very informal process, just send me an email with your blog URL and stuff.  Now that I think about it, I really need to catch up with all the new Gentoo devs and get them on Planet Gentoo as well. Slack…

Finally, I decided I’m going to create a feed specifically for ex-developers, but since I’m too lazy to go out and find their blogs (and I don’t think I still have an old copy), if you guys could send me your info, that would greatly help to speed things along. Update: It’s too much work to create a separate feed, so I just put them back in the main feeds. Now, behave. :)

And here’s an image just because this blog post is so boring, it needs one.

September 19, 2008 06:08 PM :: Utah, USA  

Daniel Robbins

Funtoo on GitHub

I now have the official Gentoo Portage tree as well as my slightly tweaked Funtoo Portage tree hosted at GitHub. The "portage" repository is the Funtoo one, whereas the "" tree is the canonical Gentoo tree.

To use the Gentoo version of the tree, do:

# git clone git://

This will create a new directory. To use it as your portage tree, edit /etc/make.conf and set PORTDIR to the path to this directory. This isn't an overlay, it is a full tree (which I prefer).

To use the Funtoo version of the tree, do:

# git clone git://

Edit make.conf and set PORTDIR to point to the new portage directory that was created. For Funtoo, you should also set the unstable keyword by setting ACCEPT_KEYWORDS to "~x86" or "~amd64".
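Put together, a make.conf fragment for the Funtoo tree might look like this (the PORTDIR path below is just a placeholder for wherever your clone landed):

```
# /etc/make.conf (fragment) - PORTDIR path is a placeholder
PORTDIR="/var/git/portage"
ACCEPT_KEYWORDS="~amd64"   # or "~x86" on 32-bit
```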

The Gentoo tree is updated every few days as is the Funtoo tree. This is mainly a service for developers who want to use git for development, or who want to merge in ebuilds and send me changesets for integrating into the Funtoo tree.


September 19, 2008 03:53 PM

September 18, 2008

Nirbheek Chauhan

An important announcement

We interrupt your regular lazy-webbing to make these two important announcements:

A) AutotuA 0.0.1 released! Try it out and report bugs (if you can't follow the instructions in the link given, your services will be required when 0.0.2 is released :)

B) IMO, the two best distros in this world are:

  1. Foresight Linux
  2. Gentoo
    • The GNOME Team
    • Brent Baude (ranger): master-of-the-PPC-arch
    • Donnie Berkholz (dberkholz): X11, Council, and Desktop Team Emperor
    • Raúl Porcel (armin76): generic bitch; maintains half the arches and Firefox
    • Robin H. Johnson (robbat2): Infra demi-god
    • Zac Medico (zmedico): Portage demi-god
All these people are just too awesome (and too overworked) for words. If I hadn't got myself deep into Gentoo (which led to SoC too), I would've gone to Foresight :)

Who has high hopes for AutotuA, and also hopes the best of Foresight and conary can be brought to Gentoo.

PS: Donnie, congrats once again! ;)

September 18, 2008 04:08 PM :: Uttar Pradesh, India  

Martin Matusiak

general purpose video conversion has arrived!

When I started undvd I set out to solve one very specific, yet sizeable, problem: dvd ripping & encoding. I did that not because I thought diving head first into the problem would be fun, but because there was nothing “out there” that I could use with my set of skills (none). Meanwhile, I needed a dvd ripper from time to time, and since I didn’t need it often I would completely forget everything I had researched the last time I used one. This was a big hassle; I felt like I had no control over the process, and I could never assure myself that the result would be good. Somehow, somewhere, there was a reason why all my outputs seemed distinctly mediocre: visibly downgraded from the source material.

Writing undvd was a decent challenge in itself, because of all the complexity involved in the process. I had to find out all the stuff about video encoding that I didn’t really care about, but I thought if I put it into undvd, and make sure it works, then I can safely forget all about it and just use my encoder from that point on. When you start a project you really have no idea of where it’s going to end up. undvd has evolved far beyond anything I originally set out to build. That’s just what happens when you add a little piece here and another piece there. It adds up.

It’s been about 20 months. undvd is quite well tested and has been “stable” (meaning I don’t find bugs in it myself anymore) for over a year. One of the by products is a tool called vidstat for checking properties of videos. I wrote that one just so I could easily check the video files undvd was producing. But it turns out to be useful and I use it all the time now (way more than undvd). In the beginning I was overwhelmed by the number of variables that go into video encoding, and I wanted to keep as many of them as I could under tight control. I have since backtracked on a number of features I initially thought would be a really bad idea for encoding stability. But that’s just the way code matures, you start with something simple and when you’ve given it enough thought and enough tests, you can afford to build a little more complexity into the code.

Codec selection landed just recently. And once I was done scratching my head and trying to decide which ones to allow and/or suggest, I suddenly realized that with this last piece of the puzzle I was a stone’s throw away from opening up undvd to general video conversion. Urgently needed? Not really. But since it’s so easy to do at this point, why not empower?

The new tool is called encvid. It works just like undvd, stripped of everything dvd specific. It also doesn’t scale the video by default (generally in conversion you don’t want that). So if you’ve figured out how to use undvd, you already know how to use encvid, you dig? :cap:

Demo time

Suppose you want to watch a talk from this year’s Fosdem (which incidentally, you can fetch with spiderfetch if you’re so inclined). You get the video and play it. But what’s this? Seeking doesn’t work, mplayer seems to think the video stream is 21 hours long, that’s obviously not correct (incidentally, I heard a rumor that ffmpeg svn finally fixed this venerable bug). It seems a little heavy handed, but if you want to fix a problem like this, one obvious option is to transcode. If the source video is good quality, at least from my observations so far, the conversion won’t noticeably degrade it.

So there you go, a conversion with the default options. You can also set the codecs and container to your heart’s content.

You can also use encvid (or undvd for that matter) to cut some segment of a video with the --start and --end options. :)

I’m sold, where can I buy it?

September 18, 2008 10:11 AM :: Utrecht, Netherlands  

Christoph Bauer

Living without aRtsd isn’t bad at all

aRts is the old KDE sound daemon which appeared around version 2.0 of KDE. Its purpose was mixing multiple audio streams in real time - in other words, playing a beep sound while playing music and so on, as common sound cards weren't able to do this. Later on, hardware and drivers moved on and the main developer retired from the project. In other words, the project went pretty dead and is deprecated by now.

Nevertheless I used aRts for quite a long time - honestly, until two days ago - and it has caused too many problems. As aRts is deprecated, it was time to remove it from my system. As I am using Gentoo Linux, the KDE 3.5.10 update and the buggy kde-base/kdemultimedia-arts ebuild made this the best time to do so.

Removing aRts is quite simple. First of all, we deactivate the sound server using the KDE Control Center. But as we still want some noise on our box, we can adjust the system sound settings to use an external player. As I want to keep it simple, I’m using the “play” binary from the media-sound/sox package.

Once those changes are made, it’s a good idea to test the current setup to see if things are working. If the sound still works, it’s time to remove the arts USE flag from make.conf. The next step is re-emerging the packages that depended on arts and removing arts itself. And that’s all.
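The whole procedure might be sketched like this on Gentoo (commands shown for illustration only; adjust package names to your system and review what emerge proposes before confirming):

```
# 1. Drop the flag: edit /etc/make.conf and remove "arts" from USE
#    (or add -arts), e.g. USE="... -arts"

# 2. Rebuild everything that was built against arts:
emerge --ask --newuse --deep world

# 3. Remove arts itself, then clean up orphaned dependencies:
emerge --ask --unmerge kde-base/arts
emerge --ask --depclean
```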

Copyright © 2007
Please note that this feed is for private use only. All other usage, including the distribution or reproduction of multiple copies, performance or otherwise use in a public way of the images or text require the authorization of the author.
(digitalfingerprint: 0f46ca51d0fa4e6588e24f0bf2b80fed)

September 18, 2008 07:31 AM :: Vorarlberg, Austria  

September 17, 2008

Patrick Lauer

Local File-to-ebuild Database

Hai everyone,
I've been a bit quiet the last few $timeunits. Life is good.

Here is a little toy I've been working on since yesterday. It is still very embryonic, but what it does is simple: Map files to packages and packages to files, using a local sqlite DB I generated out of binary packages. The index is not complete, it has been generated with ~5500 packages. I will try to update it when I have more packages built.

If you have any great queries just throw them at me and I'll try to update the query script. Also I intend to totally rewrite the database structure because I've already noticed a few issues with the current design. But for now have fun with it!

September 17, 2008 10:07 PM

Brian Carper

Copy/paste in Linux: Eureka

It's been a few years since I officially grasped Linux's (well, X Window's) weird concept of copying and pasting, with its multiple discrete copy/paste methods: the highlight + middle-click version, and the "clipboard" Edit->Copy + Edit->Paste version.

But once in a blue moon, copying and pasting in X still surprises me. Try this:

  1. Open Firefox and a text editor. I'm trying with Vim.
  2. Highlight some text in Firefox.
  3. Middle-click paste it into the editor. The highlighted text is pasted, as expected.
  4. Close Firefox.
  5. Middle-click into the editor again.

Can you guess what happens at the end? If you said "Some random text from another application and/or nothing at all is pasted rather than the stuff from Firefox", you're right!

But today I read this article on and finally understood how copy/paste works in X. Highlighting text doesn't copy anything, it just announces to the world "If any applications want to middle-click paste something, come ask me for it". So if you close the application you wanted to paste text from before you actually do the pasting, the application isn't around to give you the text you wanted any more, so you can't get it. The Edit->Copy / Edit->Paste version of copy/paste behaves the same way. You can't "Copy", close app, "Paste".

Note, this is different from how MS Windows works. When you copy some text in Windows it really is copied to another location. You can close the app and still paste away. But Windows has a different (inconsistent) behavior when copy/pasting files in Explorer. There, it behaves like X in Linux: if you right-click a file and "Copy", it doesn't actually do anything with the data until you Paste. If you right-click, "Copy", then delete the file, you don't get an error until you try to Paste.

In Vim in Linux, the "* register lets you access the "primary selection" (highlight / middle click selection), and the "+ register lets you access the clipboard.

In Vim in Windows, "* and "+ do the same thing, and use the clipboard.

September 17, 2008 01:42 AM :: Pennsylvania, USA  

September 16, 2008

Jürgen Geuter

Last century's technology fail@Adobe

So Adobe has released a beta of their AIR platform for Linux, which would be nice if they had not once again failed to include support for modern machines.

From the release notes:

System Requirements
* Processor - Modern x86 processor (800MHz or faster, 32-bit)

Guess what Adobe, my Core2Duo here is kinda modern but since I prefer to run a 64bit operating system on my 64bit machine, your fancy "new" and "modern" software won't run. Same crap as we have with Flash which does not run properly on 64bit.

Just tell me Adobe, why the hell do you hate 64bit so much?

September 16, 2008 06:08 PM :: Germany  

Tagcloud fail@Stack Overflow

So Stack Overflow, the new site of Joel Spolsky and Jeff Atwood, launched its public beta. It's supposed to be a place to ask technical questions and get answers from the other people there (I checked it out for 5 minutes and it was very Windows-centric, so kinda boring to me).

Now both of the designers are big names when it comes to developing software, both are often quoted when it comes to best practices and whatnot, so how does this happen?

How can they not be able to implement a simple tagcloud?

I'm writing about tags and tagclouds as we speak, and the first thing that comes to my mind is that the tags are not ordered alphabetically, which makes the whole tagcloud worthless. Tags are for finding things more easily; if I cannot look for a certain tag quickly, you can just drop the whole tagging thing. Yeah, I could find the few big tags easily, but the rest completely drowns in the data mud.

If you implement a tag cloud, do it right: tags have to be ordered alphabetically, with more important tags printed bigger. There's the half-assed concept of ordering tags by importance (the biggest ones first) but that one doesn't do a lot right either.

How seriously can you take those guys if they can't get such simple things right?

September 16, 2008 09:45 AM :: Germany  

September 15, 2008

Thomas Capricelli

About mercurial and permissions

Distributed source control is really great, and among the tools out there, the one I love the most is, by far, Mercurial. I use it for all my free software projects, my own non-software projects (config files, mathematical articles and such) and also, dare I say it, for my CLOSED SOURCE projects. Yes, I also do that kind of thing; how harsh a world this is, isn’t it?

In the latter case, though, I often have some problems with permissions. In my (quite common) setup, I have a central repository and the whole tree belongs to a (unix-) group. File access is restricted to this group only (chmod -R o= mydir).

On a lot of current Linux distributions, each user has an associated group with the same name (john:john); at least that’s how it behaves on both Debian and Gentoo.

When a user does a push which creates some new directory/file, those are created as belonging to this user and his main group (john:john here). As a result, other people cannot access them, and when you then try to pull from the repository, you get a big ugly crash:

pulling from ssh://
searching for changes
adding changesets
transaction abort!
rollback completed
abort: received changelog group is empty
remote: abort: Permission denied: .hg/store/data/myfile.i

Of course, I could create a big fixperms script in the repository, but then I would need to run it each time the problem arises, which is each time someone creates a new file/directory: that is far too often.

I thought about the set-group-ID bit (see man ls) and indeed it works. I don’t know if this is the official way of solving this problem in the Mercurial community, and I would love to know if other people solve it differently. At least that’s how it is documented on the Mercurial site.

Now, you might well find out about this problem only once your repository has been used for a while and is already full of useful stuff. Then it is a little bit less simple than what the Mercurial documentation says. Namely, you need to set the set-group-ID bit on the whole of .hg/store/data:

cd topsecretproject/
chown john:topsecretgroup -R .
chmod g=u,o= -R .
find .hg/store/data -type d  | xargs chmod g+s
chmod g+s .hg # needed for .hg/requires
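To see why the setgid bit does the trick, here is a tiny self-contained demonstration with a throwaway directory (nothing to do with the actual repository):

```shell
# Mark a fresh directory setgid; on Linux, subdirectories created
# inside it inherit both the group and the setgid bit, so files
# pushed later stay accessible to the whole group.
mkdir -p /tmp/setgid-demo
chmod g+s /tmp/setgid-demo

mkdir /tmp/setgid-demo/newdir
ls -ld /tmp/setgid-demo/newdir   # note the 's' in the group permission bits
```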

September 15, 2008 10:20 PM

Michael Klier

Long Time No Blog ...

yet, I am still alive ;-), so here's a short notice to prove it.

Real life is sucking up most of my time and motivation to hang in front of my computer recently. I have a new volunteer who keeps me from lurking on all the webdottohoo™ sites all day long, and I am also finally moving into my very own flat :-). Until I have moved (in two weeks from now) my signal to silence ratio will prolly stay at its current level.

Read or add comments to this article

September 15, 2008 09:09 PM :: Germany  

Roy Marples

Experimental dhcpcd-4.99.1 available

dhcpcd now manages routing in a sane manner across multiple interfaces on BSD. It always has on Linux due to its route metric support, but as BSD doesn't have this it's a little more tricky. We basically maintain a routing table built from all the DHCP options per interface and change it accordingly. As such, dhcpcd now prefers wired over wireless and changes back to wireless if the cable is removed (assuming both are on the same subnet), and this works really well :)

It's now starting to look quite stable and all the features in dhcpcd-4 appear to still be working, so I've released an experimental version to get some feedback. BSD users can get an rc.d script here.
So, let's have it!

September 15, 2008 07:36 PM

September 14, 2008

Jason Jones

Quicktip: Kdenlive Text

I'm probably going to re-write this later, but I just wanted to jot down quickly how to do text with Kdenlive.

It's really quite easy to set it up.  You just click on "Project->Create Text Clip", and proceed to create your clip.

My problem was this:  What if I don't want the text on its own page, but rather have it show on top of the video? How do I create a text clip and blend it with the video?

It took me a bit, but that's really easy, too.  You just add a transition and select "Push".  This will give you all sorts of options for blending two videos together, and since Kdenlive makes the text clip its own video (with the option of having a transparent background), this is both very easy and quite customizable.

Simply add the text clip on the extra video track, add the transition, and play with the settings.  There ya have it.

Hopefully I'll add some screenshots here soon to make this how-to a bit easier to read.

I've gotta get to sleep.

September 14, 2008 11:15 PM :: Utah, USA  

September 13, 2008

Thomas Capricelli

Konqueror web shortcut for gentoo packages

You’re going to think that I’m some kind of web shortcut maniac, but I really think I’m not.

I’m using a lot, and only today did I think about creating a konqueror web shortcut to get there faster. I’ve called it ‘gt’ (for gentoo, yes, i’m that lazy), and the magic url thinguy is\{@}

Now I can type “gt:kdebase-meta” in konquy or alt-f2 and feel ashamed in front of my debian friends whom I’ve been laughing at so much in the past because debian is slow to catch up with kde releases.

Today, several weeks after KDE 4.1 was released, and 10 days after KDE 4.1.1 was released, there are still no official ebuilds for the KDE 4.1 branch. I know there are overlays, there are some heated bug reports, blogs and even some unofficial status page about it, but still, KDE 4.1 is not in Gentoo, and it makes me feel really sad.

Yes, I know, it’s free software and I can do it myself. And then?

September 13, 2008 01:22 AM

Lars Strojny

Recovering a software RAID

The scenario: my RAID crashed because I’ve messed around with the partition table of one of the disks in there. This results in a RAID array not being able to assemble itself because the superblock of the messed up device is invalid. The trick is pretty easy: just recreate the whole RAID with mdadm. The existing metadata will not be overwritten, the current information is just replicated. I used to have a simple RAID1, but I’ve now recreated it as an incomplete RAID5 (--level=5, --raid-devices=2) as the missing disk is soon to be bought.

$ mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/<original> /dev/<crashed>

If you'd like to stick with RAID1, and not do the migration to RAID5 along the way, just use --level=1 instead. I’m not really sure if the order of the disks matters, and I’m not brave enough to find out.
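After recreating the array, it is worth a sanity check before trusting it again (a sketch only; these need a real /dev/md0 to inspect):

```
cat /proc/mdstat           # the degraded RAID5 shows up with one missing member
mdadm --detail /dev/md0    # confirm the level, raid device count and array state
```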

Tomorrow I’m going to buy the next disk for the RAID to make sure the redundancy level is alright. Generally I’m pretty amazed that this kind of setup is so robust. Even me messing around with it can’t bring it down.

September 13, 2008 01:17 AM

September 12, 2008

Jürgen Geuter

Explaining words: Copyright, trademarks and patents

In many discussions, especially in a free software context, the words copyright, patent and trademark are often used either synonymously or at least as if they were similar things. That's wrong, so here's a short text to clear up which word stands for which concept. (I am not a lawyer, so you'll have to take my words with a grain of salt, but still, I am right of course ;-)).


Copyright is the legal concept that gives the author, the creator of something, the right to determine what people can do with his work. In commercial contexts you often see "all rights reserved" (meaning that the author does not give you any rights to adapt, change or do whatever with the content). Nowadays there are things like the GPL or Creative Commons licenses that are often used by authors to give extra rights to their users. In some countries there's also the concept of the "public domain", which means that no single person can claim any special rights to the content: it belongs to everybody equally, so to speak.

The important thing is to realize that the GPL or whatever free license is not in opposition to copyright, but is actually based on the legal concept: You as an author can only make the GPL the license for your content because you have the copyright giving you exactly that right. When people claim "copyright infringement" it means that someone violated the rights that they chose for their content, breaking the GPL is copyright infringement, too.


A trademark is pretty much just a unique identifier for a source of stuff. When you or your company gets a trademark registered, it means that you are the only entity allowed to brand the objects you create with that identifier (only one company can brand their stuff with Coca-Cola, for example). Having a trademark gives the holder somewhat of a monopoly on the trademarked phrase (at least in certain contexts): some advertising slogans have been trademarked, for example, so you can of course use them while speaking but not to advertise your own product or service.

You might have heard about the drama that led to the fact that Debian does not ship "Firefox" but "Iceweasel": Debian wanted to patch their Firefox distribution, but Mozilla claimed that you may only call your product Firefox if it's exactly the product they release, and though Firefox's code is free software, it does make legal sense: they have a trademark on a set of certain binaries they release, and those can be called Firefox. If you change the source and create other binaries, you are providing something different, something that is from a different source. Now people said that Mozilla should just have let it slide because Debian is nice and free software and all; the problem is: if you have a trademark, you have to enforce it or you lose it. If they had let Debian modify things and still call it Firefox, it would have meant that someone who built his own Firefox full of adware and spyware could legally call his product Firefox, too, with the same justification.

So, trademarks do not have anything to do with free software or unfree content, it's just the identifier of a source.


A patent is a monopoly, if only for a certain time. The state grants the inventor a monopoly on some technology he invented, in exchange for the inventor documenting how it works.

Patents come from a time when people were making inventions but not telling how they worked: they relied on the fact that most people would just not be capable of reverse engineering their stuff. The problem was: nobody could understand things and build on top of them. Nobody could find flaws in the design, or problems, or even find out if something was legal. When the inventor died, you had the danger that the knowledge could die with him, which would leave all of mankind with less knowledge. Patents were a way to pay people for adding to the common pool of knowledge of mankind.

You can obviously only patent things that you have the copyright to (we just pretend that the patent system works here for the sake of the argument) but that is pretty much all that relates those two concepts.

All three things are legal terms but cover completely different areas and problems, so make sure to use the right term when you talk about some problem; mixing things up makes talking about these issues, which are already almost too complex for the sane mind to comprehend, even harder.

September 12, 2008 03:50 PM :: Germany  

Christopher Smith

Xorg 7.4 Review

Xorg 7.4 was released recently, although the biggest features touted for this version, namely DRI2 and RandR 1.3, were dropped from this release. Despite this there have been significant improvements. This is the first release for me that actually displays acceptable performance using the EXA acceleration extension with the Intel driver. This means I can use XV for accelerated video rendering with Compiz. Before, this was only possible using a patch for mplayer and a special plugin for Compiz. This was not a great solution though, because no other players worked with this plugin: no Totem, no Xine, not even Gnome-Mplayer worked correctly. I've also noticed that Blender works much better now with Compiz. It no longer flickers uncontrollably.

So far the only regressions I have experienced are a slower recovery from suspend and some minor artifacting on my Gnome Panel applets. The artifacts seem to go away when mousing over them. Despite these minor regressions I think it is a good upgrade for anyone using the Intel driver. If you're anything like me, you've been waiting a long time for a proper Intel EXA implementation to accelerate video while using Compiz or another compositing window manager.

Meanwhile Intel has announced that they want a release of xorg server 1.6 before the year is out. This release should include DRI2 using GEM and randr 1.3. Knowing the history of Xorg release schedules I have my doubts about getting 1.6 out that quickly but the latest Xorg has pacified me for now.

September 12, 2008 01:26 PM :: Connecticut, USA  

September 11, 2008

Iain Buchanan

You know you're using the computer too much when

Last Sunday morning, I was actually trying to sleep in longer by dreaming something like --extend-sleep=1h. I am not joking *sigh*.

It didn't work - but for a good reason - my family was waking me up with a breakfast in bed for Fathers day :)

September 11, 2008 01:54 AM :: Australia  

September 10, 2008

Martin Matusiak

how to pick a codec

The great thing about standards is there are so many to choose from.
- Someone

undvd 0.5.0 introduced a new option to choose the codec and container for the rip. The only problem is that you have to know which ones to choose. mencoder supports a staggering number of codecs and containers, most of which are now exposed also in undvd. The resulting rip can also be remuxed to a couple of other popular containers with additional tools.

But I wasn’t content with solving a problem by introducing a new problem. Now, it’s not so easy to say exactly which combinations are good and bad, but if at least you knew which ones definitely do not work, that would be a start, wouldn’t it? Then at least you can rescue the user from phase one of the Monte Carlo method in getting something that actually works.

The methodology is like this:

  1. Rip 5 seconds of the dvd using undvd with a given container/video codec/audio codec combination.
  2. Attempt playback with mplayer.

This is what codectest does. The result is either a text file showing line by line whether or not a given combination successfully produced a rip, or a pretty matrix picture. This gives you an idea of what you can expect to use. If you run this on your system, it's also a tip-off if you see something that should work but doesn't.
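The sweep over combinations is easy to sketch in shell. The codec and container names below are just examples, and since this post doesn't list undvd's actual option names, the rip and playback steps are left as comments rather than real invocations:

```shell
# Enumerate every container / video codec / audio codec combination.
containers="avi mkv ogm"
vcodecs="xvid h264"
acodecs="mp3 vorbis"

total=0
for c in $containers; do
  for v in $vcodecs; do
    for a in $acodecs; do
      total=$((total + 1))
      echo "testing $c / $v / $a"
      # here codectest would rip 5 seconds of the dvd with undvd using
      # this combination, then attempt playback with mplayer, recording
      # success or failure for the results matrix
    done
  done
done
echo "$total combinations"
```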

I must stress that if a given combination of codecs does produce a file, this is no guarantee that the file is a good rip. It may not play on other media players; it may not even play in mplayer (incidentally, this makes it something akin to a fuzzer: I've discovered that some combinations produce results that really aren't expected :D ). So if codectest says it works, verify that you get a working video file out of it!

The standard set looks something like this:

It’s also possible to run it on the full combination of all codecs and containers that are now exposed in undvd. You’ll need a few hours to do it:

September 10, 2008 11:57 PM :: Utrecht, Netherlands  

Christopher Smith

Evolution-RSS SVN Ebuild

Ok, so I haven't been able to get it to build yet, but I am working on an evolution-rss SVN ebuild. My main impetus for this ebuild has been the fact that I filed a bug a week or two ago about the appearance of feeds when using a dark theme, and it is now apparently resolved in SVN. This and the lack of a D-Bus connection are my two biggest gripes with evolution-rss. The theme bug is the classic "light text on white background" issue that seems to plague dark themes. I'm looking forward to GNOME 2.24 because they have been working on dark theme integration. Hopefully this will create a much more usable dark theme environment. Maybe I'll be lucky and a new version of evolution-rss will be released for the new Evolution and I won't have to fiddle with this ebuild any more. If I do get it to work before a new version is released, I will be sure to share it here.

September 10, 2008 10:47 PM :: Connecticut, USA  

Nikos Roussos

flash: thanks, but no thanks

many people are complaining about the memory footprint of all the well known browsers (including chrome). and of course they are right. i think that the situation would be far more bearable if we could ban flash from the web!

i have seen some great sites that base their design on flash technology, but let's face it: the adobe plugin sucks big time. i just installed a flash block add-on to my browser (again) and i'm not going to remove it until adobe decides to release a decent flash player plugin for all the browsers that will not need 100% cpu utilization every time i open a flash site in a tab.

depending on the browser you use, you can find an add-on with the same functionality to flashblock firefox add-on. after a quick search i find this for opera and this for safari. i don't care about internet explorer since it can't even pass the acid2 test, so you shouldn't use it anyway.

ps. i think that it's time for browser developers to consider the option of full svg support, as w3c suggests. maybe that would be a new "reclaim the web" action.

September 10, 2008 05:34 PM :: Athens, Greece

Roy Marples


You tag something with a metric. The same somethings with a lower metric take precedence over the same somethings with a higher metric.

dhcpcd has been able to apply metrics to routes on Linux so that we can prefer to route packets over wired instead of wireless.
dhcpcd-git is now able to distinguish wired from wireless and can set a metric accordingly.

But how do we teach configuration files about this? Well, dhcpcd-git sends an environment variable to each script telling it the preferred order of interfaces (based on carrier, whether we have a lease or not, and metric). This works well, and we can now prefer wired nameservers over wireless ones in /etc/resolv.conf. Well, we can at least put them first in the list.

Whilst doing this, it struck me that resolvconf has no means of preferring configurations other than a static interface-order file. This is not good for automatic foo! So openresolv now understands the -m metric option and the IF_METRIC environment variable, so it can tag resolv.confs by priority. If no metric is specified, that configuration takes priority. If more than one interface is on the same metric, we fall back to lexical order.
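The resulting rule (lower metric first, ties broken lexically) can be sketched with a plain sort. This is just an illustration of the ordering, not openresolv's actual implementation:

```shell
# Each line stands for an interface's resolv.conf tagged with its metric.
# Numeric sort on the metric, then lexical sort on the interface name.
order=$(printf '%s\n' \
  '100 wlan0' \
  '50 eth1' \
  '100 ath0' \
  '50 eth0' | sort -k1,1n -k2,2 | awk '{print $2}')
echo "$order"
# eth0 and eth1 (metric 50) beat ath0 and wlan0 (metric 100)
```

With openresolv itself the tagging happens when a configuration is added, along the lines of `resolvconf -m 50 -a eth0 < resolv.conf.eth0` (the -m option described above).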

September 10, 2008 01:34 PM

Christoph Bauer

Login with the USB stick

That I'm lazy when it comes to typing shouldn't be a secret by now - and it was the main cause for doing some research into an alternative login method. Fingerprints may sound fun, but not when it comes to security, or when you need to pass the password on to someone…

But as you might guess, I'm not the only one facing that problem - there's even a nice package waiting for us, called sys-auth/pam_usb. If you're using gentoo linux, you can simply emerge it after unmasking a current version. A version below 0.4.1 isn't recommended at all.

# echo "sys-auth/pam_usb" >> /etc/portage/package.keywords
# emerge -av ">=sys-auth/pam_usb-0.4.1"

Once the compile finishes, we get to the fun part - configuration. As a test USB device I have chosen my SanDisk Corp. Cruzer Titanium. Let's start right in using the on-board tools delivered with the package:

# pamusb-conf --add-device MySecretDevice
Please select the device you wish to add.
* Using "SanDisk Corp. Cruzer Titanium (SNDKXXXXXXXXXXXXXXXX)" (only option)
Which volume would you like to use for storing data ?
* Using "/dev/sdb1 (UUID: <6F6B-42FC>)" (only option)
Name : MySecretDevice
Vendor : SanDisk Corp.
Model : Cruzer Titanium
Volume UUID : 6F6B-42FC (/dev/sdb1)
Save to /etc/pamusb.conf ?
[Y/n] y

If things worked the same way for you as they did in this howto, the USB stick is now set up for authentication in general. Now let's add users. I'd say using root is pretty cool - so let's try.

# pamusb-conf --add-user root
Which device would you like to use for authentication ?
* Using "MySecretDevice" (only option)
User : root
Device : MySecretDevice
Save to /etc/pamusb.conf ?
[Y/n] y

Theoretically, our user root is now configured for USB authentication. But as it's an important system user, let's make sure things are working:

# pamusb-check root
* Authentication request for user "root" (pamusb-check)
* Device "MySecretDevice" is connected (good).
* Performing one time pad verification...
* Verification match, updating one time pads...
* Access granted.

Well - the basic stuff is done. Now let's start with the difficult part: fiddling with PAM. As I want to make the changes system-wide, I'm working with the file /etc/pam.d/system-auth, where I'm looking for the line saying

auth required pam_unix.so nullok_secure

which I’m changing to

auth sufficient pam_usb.so
auth required pam_unix.so nullok_secure

A test follows.

pavilion $ su
* pam_usb v.0.4.3
* Authentication request for user "root" (su)
* Device "MySecretDevice" is connected (good).
* Performing one time pad verification...
* Verification match, updating one time pads...
* Access granted.

And hooray, it works. Depending on the PAM implementation, you may even get kdm, gdm and all that jazz to use it. But there's one thing I'm still missing: automatically locking the screen when the device is removed.

Believe it or not - there's even a solution for that problem. It's called pamusb-agent. Its config (inside the user's section of /etc/pamusb.conf) could look like this if you're using KDE:

    <agent event="lock">dcop kdesktop KScreensaverIface lock</agent>
    <agent event="unlock">dcop kdesktop KScreensaverIface quit</agent>

For making things roll, just ensure it’s started on KDE startup:

cd ~/.kde/Autostart
ln -s /usr/bin/pamusb-agent pamusb-agent

And that’s it.

Copyright © 2007
Please note that this feed is for private use only. All other usage, including the distribution or reproduction of multiple copies, performance or otherwise use in a public way of the images or text require the authorization of the author.
(digitalfingerprint: 0f46ca51d0fa4e6588e24f0bf2b80fed)

September 10, 2008 12:58 PM :: Vorarlberg, Austria  

Jürgen Geuter

Calvin and Hobbes, once a day

I love Calvin and Hobbes comics so I was really happy to find a site that gives me one Calvin and Hobbes strip each day. Head over to and subscribe to the RSS feed.

September 10, 2008 09:52 AM :: Germany  

Brian Carper

Westinghouse: Finally getting somewhere?

I finally got not one, but two phone calls from Westinghouse today, inquiring as to the status of my Better Business Bureau complaint against their company. This is of course in connection with the big expensive L2410NM computer monitor I sent off for repairs in March and never got back.

I actually have three different people's names to get in contact with at Westinghouse now, two or more of which are apparently supervisors. After I returned one call and was told all the supervisors went home for the day, I then received an unprecedented second call back from a different supervisor, saying she was on her way out the door but that I should call her tomorrow, and she gave me a direct line to contact her.

Some supervisors must be rooting through old BBB complaints and responding to them all, would be my guess. A phone rep let slip that there's someone working on "all of these cases... er, I mean, your case". The LA BBB still lists 69 unanswered complaints against Westinghouse, so I'm sure there's plenty of work to go around.

After six months of bullcrap, I can't get my hopes up at this point that I'm going to actually have this resolved but hey, you never know. In spite of my 21 phone calls to Westinghouse (and counting) and many promises of a return call, this is the FIRST TIME I've ever heard from anyone at the company. I'll be posting the result of my phone call tomorrow.

(Read the whole crappy story of Westinghouse's dishonesty and horrible customer service: The beginning, Update 1, Update 2, Update 3, Update 4, Update 5, Update 6, Update 7, Update 8, Update 9.)

September 10, 2008 12:43 AM :: Pennsylvania, USA  

September 09, 2008

Brian Carper

Perl6 features borrowed from Lisp

Via PerlMonks I found a couple of articles discussing in good detail some of the new features of Perl6.

Perl6 steals even more things from Common Lisp than Perl5 did: it has multimethods / multiple dispatch for example, which is a huge plus. Via this interview with Damian Conway we learn that Perl6 will also have named, optional, and "rest" parameters to subs, just like in CL. That's also a good thing; CL's parameter-passing styles are nice, and it's awesome how you can combine them. Certainly better than Perl5 (but everything is better than Perl5). There's also apparently special Perl6 syntax for applying functions to lists and currying functions, and weird Capture objects to explicitly deal with multiple-value returns from subs. Good stuff.

Perl6 is also apparently taking first-class functional objects to an extreme; blocks, subs, and methods are all objects, and there are all kinds of metaprogramming hooks to screw around with them. This is one area where Ruby is just a little bit lacking: functions and methods aren't quite first-class enough in Ruby. Most people seem to pass around symbols / names of methods rather than pass around the methods as objects themselves. Anonymous blocks are used liberally, but mostly via yield, limiting you to one block per method and largely hiding away the block objects themselves.

I'm honestly a bit excited about Perl6, but largely as a curiosity or new toy to play with. It is kind of interesting how languages keep creeping more and more toward Common Lisp. If Perl6 turns out to be a nicer-looking Common Lisp which I can edit properly in Vim, it'll be almost a dream come true; I hate Emacs and Common Lisp tends to be butt-ugly. (Not talking about the parens, mostly about the verbosity and cruft and inconsistencies. Larry Wall famously said that Common Lisp looks like (paraphrased) "oatmeal with toenail clippings mixed in". Perl is certainly at the other extreme.) is a good site for keeping up on Perl6 news. It's pretty active. Here's hoping we see a real release of Perl6 some year.

September 09, 2008 11:28 PM :: Pennsylvania, USA  

Nicolas Trangez

Scripting your app

Lots of buzz about adding scripting interfaces to applications on Planet GNOME recently - cool. Looks like Alexander Larsson hacked together a wrapper around SpiderMonkey (Mozilla's JavaScript engine) to get JavaScript integrated. Related to the jscore-python thing I blogged about before.

Not sure this is the best way to tackle this cool new opportunity though. JavaScript can be pretty hard to "get" for people not used to it who are more familiar with classical languages (wrt object-oriented paradigms). I guess lots of current code contributors are not very familiar with JavaScript, but do have in-depth knowledge of some other scripting language (not listing any here, you can certainly name some).

So, wouldn't it be nice if the GScript interface were abstract for the application developer, who would just create a script context, put some objects in it, and launch a script? The GScript runtime would then figure out which interpreter to use for that script, making scripting languages pluggable.

Reminds me a little of my old OPluginManager work :-)

September 09, 2008 09:12 PM

Jürgen Geuter

Cory Doctorow's "Content"

Content is a collection of "essays on technology, creativity, copyright and the future of the future". Basically it's Cory Doctorow writing about DRM, intellectual property and topics like that. Each essay is short enough to be read in your lunchbreak but (admittedly I love his style so I'm not objective) still worth every second. Insightful, smart, a must-read. The best thing: You can get the full thing from his website free and CC licensed. No excuse not to read it left :-)

September 09, 2008 07:48 PM :: Germany  

Jason Jones

Evolution / Exchange Howto

I just spent the last 5 hours trying to get Evolution to connect to our corporate Exchange server.  I finally figured it out, and I can now interoperate 100% with calendars, contacts, meetings, and email from our Exchange server.

Due to the wide array of configuration options in Exchange, doing a step-by-step how-to would be, for the most part, fruitless.  So here are some pointers.


After I figured this out, it took less than 10 minutes to get evolution connected to exchange.

At the office they're using Exchange 2000, so this article will reference things pertaining to that version.

I'm using Evolution

You will need to have the ebuild evolution-exchange emerged.  It won't work without it.

After that, there are only a few things you need to get connected.  Here's a screenshot of the login details I used.

In the username space,  I tried every conceivable concoction of domain / username combinations.  Believe me, you only need your username there.  Not DOMAIN/username, not username@domain.fqdn, just your username.

The OWA URL should be provided to you by your system administrator.

When you put in a valid OWA URL, the Authenticate button will become activated.  Click it, and it should auto-generate the Mailbox for you.  It did for me.

Make sure the authentication type is right.  The "Authenticate" button seems to authenticate successfully whether the type is plain or encrypted.  But if it's the wrong type and you go to get your email, it will complain that your password might be wrong.  Just try both types and you should get it right.

After that, you should be able to have access to just about everything Exchange provides.

Disclaimer: This article is only a couple of tips, and is by no means a comprehensive tutorial.  There's a good chance none of this will work for you.

Good luck!

UPDATE - The day after I got this working, I updated my world and magically, without having updated anything related to evolution, my connection wouldn't authenticate again.  After trying for 2 days to get it to work, I changed the IP address in the OWA URL to the domain name of the server.  So instead of the IP address, it was http://uth-mail011/exchange.  Then it started working again.  Hopefully it'll keep working this time.

September 09, 2008 02:14 PM :: Utah, USA  

Nikos Roussos

linux format articles

i uploaded three more articles to the foss section, published in the greek edition of linux format.

the two parts of my "spreading linux" article and the 6-page "introduction to gentoo" (co-written with kargig)

happy reading :)

September 09, 2008 12:33 PM :: Athens, Greece

September 08, 2008

Dirk R. Gently

Old, Abandoned, Aging Ebuilds

As per my last post, I'd been left in the dark with my new Mozilla Firefox build. To summarize: I had recently updated my April build of Firefox (3.0-beta5) to Firefox 3.0.1. I expected a few bug fixes, but mainly didn't feel safe running a beta version. Unfortunately, the Mozilla developers must have decided the existing memory allocation wasn't enough, and the new 3.0.1 version (possibly earlier versions too) got a good bump in the memory resources it uses. On my memory-maxed laptop (192 MB, if you can believe it) the new requirements brought the whole desktop to a hard-drive-thrashing halt. Luckily for me, Gentoo developers are as thorough about saving ebuilds as they are about writing them.

Old, abandoned, and aging ebuilds are saved in Gentoo CVS. I’ve heard about this before but when I looked into CVS, I couldn’t find them. It turns out that they are kept in the x86 trunk:

Looks like all ebuilds are retired to this trunk! Yeeahh! If you can’t find an ebuild be sure to look into the link “show dead files” on the top of the list.
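For the command-line inclined, the same history is reachable over anonymous CVS. A sketch (module paths from memory, so double-check them against the web interface):

```shell
# Check out a package's directory, including its CVS history:
cvs -d :pserver:anonymous@anoncvs.gentoo.org:/var/cvsroot \
    co gentoo-x86/www-client/mozilla-firefox

# Dead ebuilds live in the Attic; ask for the tree as of a given date
# to resurrect one that has since been removed:
cvs -d :pserver:anonymous@anoncvs.gentoo.org:/var/cvsroot \
    co -D "2008-05-01" gentoo-x86/www-client/mozilla-firefox
```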

So this is good news for me in the future, when one of my updates doesn't hold up to what I hoped. I wasn't able to revert back to beta5 because the ebuild calls for a language file (mozilla-firefox-3.0b5-sq.xpi) in gentoo experimental that is no longer there. Oh well. I have installed Epiphany once again, and though it relies on the same xulrunner backend, it uses considerably less resources and functions beautifully on my laptop. I'll miss my NASA Night Launch skin and the Awesome Bar, but mostly I'm pretty happy with Epiphany.

Enjoy your blogging!

September 08, 2008 10:49 PM :: WI, USA  

Martin Matusiak

of codecs and containers

I have been very skeptical about adding options for other codecs in undvd, purely because of the test burden. With a single combination of container and pair of audio/video codecs I can be reasonably confident that I’ve done enough manual testing (and judging video quality doesn’t trivially lend itself to automated testing, sadly) to account for most potential problems.

But at the end of the day it’s a question of priorities, and having scratched all the important technical itches by now, if anything this is the right time for it. I got some user feedback recently that set me onto this path. The user was having trouble playing the files encoded in the classical avi+h264+mp3 format on other platforms, and that’s when I asked myself how important is it really to have a single format? As long as the default still works well, what’s the harm in offering a little customization?

Testing is a huge problem, which is why this new feature is considered experimental. The most common problem seems to be bad a/v sync. There is just no way to account for all the possible combinations of codecs and containers, or to maintain an up-to-date compatibility document as things evolve. So the burden of testing is squarely on the user here (which is quite unfortunate).

The new functionality is available in undvd 0.5 and up. Here’s a shot of the new goodness. All these files were encoded from the same dvd title. A 22 minute title was ripped with different containers (represented with different filenames). The audio codec is mostly the same in all cases (mad = mp3), except for 1.mp4 (faad = aac). The video codec is also mostly the same (h264 = avc1), except for 1.flv. The only variation here is the container being set to different values, all the other settings are defaults. You can also witness that some containers are more wasteful than others (given the same a/v streams), but not by a huge amount. (The audio bitrates shown are actually misleading, mplayer seems to give the lowest bitrate in a vbr setting.)

This demo is by no means exhaustive of the full collection of codecs that can be used, for that see the user guide. There is also an option to use the copy codec, which just copies the audio/video stream as is.

September 08, 2008 09:52 PM :: Utrecht, Netherlands  

Christopher Smith

Buffer Overflow in Gnome-Panel?

I upgraded to Gnome 2.22 recently as it has gone stable in Gentoo's portage. One issue I encountered was that the panel would crash whenever I clicked on the clock applet. Investigating further, I checked my logs and found that it was crashing due to SSP. I never had this problem with previous versions of Gnome Panel. Anyhow, I disabled SSP and it works fine now, but I worry that a bug has been introduced into the latest stable release of the panel.

September 08, 2008 05:19 PM :: Connecticut, USA  

Christoph Bauer

Microsoft Key-Changer

No, this is not a mistake: Microsoft offers a small tool for changing the product key of windows xp or vista. Ok, they call it ‘updating the product key’, but that’s just a different word for it. In general, I like the idea.

The impact:

More and more computers come preinstalled, having a recovery cd which is more or less just a disc image or an automated installer. But sometimes CDs break and/or you’re forced to use another installation medium.

As I'm trying to keep things legal, I need to activate windows to get all the required updates. So I need to insert the right key. A little research took me to the following link

This is the Microsoft Product Key-Updater which even activates your windows using the newly inserted key. Full service to get things right.


September 08, 2008 11:59 AM :: Vorarlberg, Austria  

September 07, 2008

Kyle Brantley

IPv6 and... software!

A protocol is nothing if never used. Well, okay, maybe it can be a joke. Maybe. Okay, so that's not really a protocol. Evil Bit jokes are still positive net karma, right?

Likewise, IPv6 is pretty much useless if it is never used. I can assign the addresses all I please but ultimately if all I do is ping my desktop that sits "behind NAT" with it then for the most part the effort was wasted.

My server runs CentOS 5.2, my desktop runs Gentoo, my laptop Debian, my router Debian, my windows desktop Vista (dual boot Server 2008), and the Vista box also has three instances of OpenBSD running within VMWare.

I've got a pretty good testbed to see just what does/doesn't support IPv6, in terms of everything general web browsing to random system daemons to whatever end user programs you have a desire to run. So, I put together a small bit of info concerning what handles IPv6 perfectly, what is kind of broken, and what just looks at it with a mystified look on its face.

So to start:

Operating Systems

As far as I know, the first IPv6 stack was available for Windows 2000 via a separate download. XP bundled it by default, but left it uninstalled. Vista has the IPv6 stack enabled by default.

Linux got a pretty new IPv6 stack with 2.6, and had a working stack in 2.4. I'm pretty sure 2.2 had a functional stack too, as did 2.0. Don't quote me on that.

OpenBSD has supported IPv6 since 2.7.


Apache has supported IPv6 ever since the 2.0 release. Every component of Apache that I tested supported IPv6 just fine, from general web page serving to SSL to proxies. Considering how much of the web is still on 1.3, all of those hosts will have to be upgraded to 2.0+ before a much wider IPv6 web base is available.

IIS (the Microsoft webserver) has supported IPv6 from their 6.0 release, also known as Server 2003. Most places use at least 2003 on their servers, the era of Win2k webservers kind of died out with Code Red and all of those other worms.

MySQL just kind of sits and looks at IPv6 like it has no clue what it is. Which is actually entirely true. Boo.

PostgreSQL talks happily with IPv6. At least I think. I'm too lazy to start my local copy and check. Their page on the matter isn't what one would call descriptive. No clue when this support was added.

Supported since their 2005 release.

Officially supported as of 2006.

Samba supports it as of the 3.2 release, which was actually just on June 1st of this year.

Windows SMB/CIFS
Supported with XP and onward. Probably Win2000 too.

So the servers are looking pretty good. Unless you run MySQL, which is pretty much everyone. Boo.

At a minimum, we can serve any content over HTTP just fine, and we can access most databases just fine too, unless your name starts with a "My" and ends with a "SQL."

End-user programs

Mozilla Suite (and Firefox, Thunderbird, Seamonkey and friends)
Native IPv6 support, ever since the year 2000. Still has some work to be done according to the meta bug, but pretty much all of those bugs are on random operating systems that don't adversely change your ability to connect to IPv6 enabled sites.

Internet Explorer
Supported IPv6 ever since 4.0, once you applied a patch from their research division. Real native support probably came with 5.0; if not, it was there by 6.0.

Supported as of Outlook 2007.

Supported. The KDE project has traces of IPv6 development starting around 1999. As far as I can tell, IPv6 is natively supported in every program in 3.5.

Supported. No clue as of when, due to the GAIM --> Pidgin name change, and I'm far too lazy to figure that out.

MSN Messenger, AIM, ICQ and friends
Who cares? (Likely not supported, though I doubt the client is the blocker in these cases.)

Supported since '04.

Supported. Probably since forever. Go OpenSSH.


Not supported without loading a third-party DLL. mIRC sucks anyway.

Supported.... on Windows since '03, *nix and friends likely even earlier.

I could go on and on and on. I won't, because I have no desire to list hundreds of thousands of software packages and their relative IPv6 states. Plus I'm getting tired and this entire post was spontaneous. Not too bad for 30 minutes of google.

But for the most part, we've got a great picture. Every operating system, browser, and web server supports IPv6 and supports it fantastically well. Nearly every program on *nix supports IPv6 and has for quite some time, and most of the big name Windows programs support IPv6 as well.

Not mentioned here was DNS, but the protocol has had support for it since (just about) forever and now that we have AAAA records for the root servers in the public DNS, DNS is good to go with IPv6 from start to finish.
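If you want to check that chain yourself, a couple of quick commands will do it (assuming dig from bind-tools and a v6-enabled host; www.kame.net is just a well-known v6 test site):

```shell
# Does the name publish an AAAA record?
dig +short AAAA www.kame.net

# Can we actually reach it over v6?
ping6 -c 3 www.kame.net
```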

Now we just have to work on the ISPs and home grade routers...

Footnote: one of the comments I got on my initial IPv6 entry was someone reporting success in integrating their LAN with IPv6. While I'm glad to hear it, I'm even more glad that when I got the "unapproved comment has been posted" notification e-mail, the corresponding IP address was a v6 address. The second I had IPv6 up and running on my server, I threw in AAAA records for pretty much everything. If I had to guess, they didn't even know they were using IPv6 to view this blog and post the comment - which is exactly the goal.

September 07, 2008 04:31 AM :: Utah, USA  

September 06, 2008

Jason Jones

Project :

Although I use the Internet and Linux to make a living, there's still a bunch I have to learn.  One of those things happens to be the most popular ad affiliate program in the world.

Last week, I was doing a bit of research, and found that I could be making a bit of money using google adsense.  Enough to get rich?  Heavens no, but every little bit counts, right?

So, I took the layout for this little journaling site I did, and in about 3 days, I built the all new

I built it with ease of use first and foremost, with the ability to easily search, and organize by topic or author.

Hopefully I hit my mark.

Let me know what you think.  As soon as I get approved for adsense, you should see an ad-bar appear somewhere on the site.

Let's see if we can spread the good word, and make a little money on the side, too. :)

(I promised myself I wouldn't put ads on , nor , so you'll likely never see any ads on those sites.)

Let me know what you think!

Click here to go to!

September 06, 2008 09:32 PM :: Utah, USA  

Ben Leggett

Disc Resurrection: The Brasso Trick Works

I bought an old DOS game, the Realms Of Arkania Trilogy, off of eBay recently, only to discover when I gleefully shredded the packaging that CD #2 had a bad scratch and couldn't be installed or copied due to read errors. When I verbally abused, er, contacted the seller, he offered me the option of a price reduction or a return. I didn't want to bother with a return, and at least one game in the trilogy was playable, so I went for the partial refund.

Now at the back of my mind, I was musing on several options. I could download an ISO of the second disc from Bittorrent, burn it to a CD-R, and forget the whole thing. Hrm. Well, ethically I see no problem with that, since I bought the game. Or I could send it to a professional refinishing company. Those are reasonably priced, but more of a last-ditch effort. Thirdly, I could try one of the home remedies that abound - you know, everything from toothpaste to covering the disc with syrup and microwaving it. Most of these sounded like a crock, with wildly varying success rates, but one method seemed to stand out as being pretty consistently helpful, with a handful of tests that showed pretty good results.

Using random info from the Internet like this has a vague stench of idiocy, of course, but the premise behind the trick sounded logical enough, and there were enough agreeing accounts to lend some credence to the process. Also, the trick seemed to have been around for a while, and not something some kid dreamed up to fix his trashed XBox discs. I didn’t really expect it to work, but I figured I could always send it to a refinisher if I goofed.

Anyway, Brasso is a polishing compound for a metal that I’ll leave you to guess at. Readily available at your local WalMart or Ace Hardware, and selling for the laughably low price of $19.99, er, sorry $3.

Following the instructions from this YouTube video, I set to work. After one application, the problem still persisted. I tried a second time, though, and concentrated specifically on the big problem scratch. Cleaned the disc, stuck it in, and voilà, it worked. I was quite surprised. Obviously not as neat a job as a pro would do, but I just want a usable disc, not a perfect disc.

September 06, 2008 06:59 PM :: Georgia, USA  

Brian Carper

Vim + screen + REPL = win

Via the Clojure wiki I found a great page describing how you can use GNU screen and some Vim magic to let Vim play nicely with an interactive commandline program like a Common Lisp REPL, Ruby's irb, or Python's, well, python.

That page is a very stripped-down and simpler version of what Limp does for Vim+Lisp. But Jonathan Palardy's version has the benefit of being so simple that you can set it up yourself manually in a second or two. I still have never gotten Limp to work quite right and I don't have the time to debug a big mess of Vim script.

The idea is to start up a named screen session via e.g. screen -S foo -t bar, then start an irb session (or whatever) in there, and then in Vim you can simply yank some text into a named register and send it off to screen via a system call. Download Jonathan's code and see.
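The mechanics boil down to screen's `stuff` command; roughly like this (the session and window names here are just examples):

```shell
# 1. Start a named screen session with a named window, run your REPL in it:
screen -S repl -t irb
#    (then, inside that window: irb)

# 2. From any other shell -- or a Vim mapping shelling out -- paste text
#    into the REPL window; the trailing newline "presses enter":
screen -S repl -p irb -X stuff 'puts 1 + 1
'
```

From Vim, the second command is just a system() call on the contents of a register, e.g. `:call system('screen -S repl -p irb -X stuff ' . shellescape(@r))`.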

It's not a full-blown SLIME; it doesn't have tab-completion or weird interactive debugging windows or such bullcrap. It doesn't capture the output of your command and feed it back into your Vim buffer. But hey, it's pretty good for something you can throw together in 2 minutes, and it works.

So there goes my last reason to ever use Emacs. Good riddance, I must say.

Honestly, Emacs just frustrates the living hell out of me. Oh how I tried to like it. I really did. I've used it on and off constantly over the past year. I have Emacs shortcuts written all over the whiteboard in my office. But its braindead window management, its terribly broken undo/redo system, its finger-crippling key-chord combos, its lack of features I need (like line numbering), its reliance on broken 3rd-party elisp hack scripts for things Vim has built in (like line numbering!), its ugly fonts and GUI elements, and so on and so forth. Vim is such a joy in comparison.

September 06, 2008 08:36 AM :: Pennsylvania, USA