Planet Larry

April 2, 2009

N. Dan Smith

IcedTea coming to Gentoo PowerPC (someday)

Today I successfully built the IcedTea Java virtual machine on Gentoo/PowerPC.  What does that mean?  It means that someday Gentoo/PowerPC users will be able to have a source-based, free software Java system.  Currently we have to use IBM’s proprietary Java development kit, which brings with it a whole host of problems (from obtaining the binaries to fixing bugs).

The ebuild I used for dev-java/icedtea6 (which provides a 1.6 JDK) is from the Java overlay. After it is further stabilized and pending some legal discussion, we should have it in the main Gentoo tree, meaning that someday ibm-jdk-bin will disappear or become the secondary option for Java.  Hooray!

Once I get my feet a bit wetter I might post a more specific guide on getting IcedTea up and running on your Gentoo/PowerPC machine.
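
Until then, the rough shape of an install looks like this (a sketch from memory; I'm assuming the overlay is called java-overlay in layman and that the ebuild is still unkeyworded on ppc, so adjust to taste):

$ emerge -av layman
$ layman -a java-overlay
$ echo "dev-java/icedtea6 **" >> /etc/portage/package.keywords
$ emerge -av dev-java/icedtea6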

April 2, 2009 :: Oregon, USA  

April 1, 2009

Leif Biberg Kristensen

Update

It’s been a long time since I posted anything here. The Exodus project is still alive and kicking; it’s my primary tool for doing genealogy research, so I’m using it every day, and I am continually doing fixes and extensions.

I simply haven’t had much motivation for writing anything about it.

The most important changes to the code base since my last blog are:

1) The add_source() function has been refactored, and most of the logic has been moved to a beast of a plpgsql function:

CREATE OR REPLACE FUNCTION add_source(INTEGER,INTEGER,INTEGER,INTEGER,TEXT,INTEGER) RETURNS INTEGER AS $$
-- Inserts sources and citations, returns current source_id
-- 2009-03-26: this func has finally been moved from PHP to the db.
-- Should be called via the functions.php add_source() which is left as a gatekeeper.
DECLARE
    person  INTEGER = $1;
    tag     INTEGER = $2;
    event   INTEGER = $3;
    src_id  INTEGER = $4;
    txt     TEXT    = $5;
    srt     INTEGER = $6;
    par_id  INTEGER;
    rel_id  INTEGER;
    x       INTEGER;
BEGIN
    IF LENGTH(txt) <> 0 THEN -- source text has been entered, add new node
        par_id := src_id;
        SELECT MAX(source_id) + 1 FROM sources INTO src_id;
        -- parse text to infer sort order:
        -- 1) use page number for sort order (low priority, may be overridden)
        IF srt = 1 THEN -- don't apply this rule unless sort = default
            IF txt SIMILAR TO E'%side \\d+%' THEN -- use page number as sort order
                SELECT SUBSTR(SUBSTRING(txt, E'side \\d+'), 5,
                    LENGTH(SUBSTRING(txt, E'side \\d+')) -4)::INTEGER INTO srt;
            END IF;
        END IF;
        -- 2) use ^#(\d+) for sort order
        IF txt SIMILAR TO E'#\\d+%' THEN
            SELECT SUBSTR(SUBSTRING(txt, E'#\\d+'), 2,
                LENGTH(SUBSTRING(txt, E'#\\d+')) -1)::INTEGER INTO srt;
            txt := REGEXP_REPLACE(txt, E'^#\\d+ ', ''); -- strip #number from text
        END IF;
        -- 3) increment from max(sort_order) of source group
        IF txt LIKE '++ %' THEN
            SELECT MAX(sort_order) + 1
                FROM sources
                WHERE get_source_gp(source_id) =
                    (SELECT parent_id FROM sources WHERE source_id = par_id) INTO srt;
            txt := REPLACE(txt, '++ ', ''); -- strip symbol from text
        END IF;
        -- there's a unique constraint on (parent_id, source_text) in the sources table, don't violate it.
        SELECT source_id FROM sources WHERE parent_id = par_id AND source_text = txt INTO x;
        IF NOT FOUND THEN
            INSERT INTO sources (source_id, parent_id, source_text, sort_order) VALUES (src_id, par_id, txt, srt);
        ELSE
            RAISE NOTICE 'Source % has the same parent id and text as you tried to enter.', x;
            RETURN -x; -- abort the transaction and return the offending source id as a negative number.
        END IF;
        IF event <> 0 THEN
            -- if new cit. is expansion of an old one, we may remove the "parent node" citation
            DELETE FROM event_citations WHERE event_fk = event AND source_fk = par_id;
            -- Details about a birth event will (almost) always include parental evidence. Therefore, we'll
            -- update relation_citations if birth event (and new source is an expansion of existing source)
            IF tag = 2 THEN
                FOR rel_id IN SELECT relation_id FROM relations WHERE child_fk = person LOOP
                    INSERT INTO relation_citations (relation_fk, source_fk) VALUES (rel_id, src_id);
                    -- again, remove references to "parent node"
                    DELETE FROM relation_citations WHERE relation_fk = rel_id AND source_fk = par_id;
                END LOOP;
            END IF;
        END IF;
    END IF;
    IF event <> 0 THEN
        PERFORM * FROM event_citations WHERE event_fk = event AND source_fk = src_id;
        IF NOT FOUND THEN
            INSERT INTO event_citations (event_fk, source_fk) VALUES (event, src_id);
        ELSE
            RAISE NOTICE 'citation exists';
        END IF;
    END IF;
    RETURN src_id;
END
$$ LANGUAGE PLPGSQL VOLATILE;

2) New «Search for Couples» page. I simply took the view I’ve described earlier and, using index.php as a template, put a PHP wrapper script around it. So now I can find out in an instant if I have a couple like Ole Andersen and Anne Hansdatter who married around 1760.

3) New «Search for Source Text» page. I had this function which I used to run a lot from the command line:

-- CREATE TYPE int_bool_text AS (i INTEGER, b BOOL, t TEXT);

CREATE OR REPLACE FUNCTION find_src(TEXT) RETURNS SETOF int_bool_text AS $$
-- function for searching for source text from psql
-- example: select find_src('%Solum%Konfirmerte%An%Olsd%');
    SELECT source_id, is_unused(source_id), strip_tags(get_source_text(source_id))
        FROM sources
        WHERE get_source_text(source_id) LIKE $1
        ORDER BY is_unused(source_id), date_extract(strip_tags(get_source_text(source_id)))
$$ LANGUAGE SQL STABLE;

There are two issues with that. First, it’s cumbersome, even with readline and tab completion, to use the psql shell every time I want to look up a source text. Second, the query takes half a minute to run because it has to build the full source text for every single node in the sources table (currently 41072 nodes) and run a sequential search through them. For most searches, I don’t actually need more than the transcript part of the source text. So, again using index.php as a template, I built a PHP page that does the job in a more flexible manner, with two radio buttons for «partial» and «full» search respectively. The meat of the script is the query:

$scope = $_GET['scope'];
if ($src) {
    if ($scope == 'partial')
        $query = "SELECT source_id, is_unused(source_id) AS unused,
                            get_source_text(source_id) AS src_txt
                    FROM sources
                    WHERE source_text SIMILAR TO '%$src%'
                    ORDER BY date_extract(source_text)";
    if ($scope == 'full')
        $query = "SELECT source_id, is_unused(source_id) AS unused,
                            get_source_text(source_id) AS src_txt
                    FROM sources
                    WHERE get_source_text(source_id) SIMILAR TO '%$src%'
                    ORDER BY date_extract(source_text)";
}

By using SIMILAR TO, I can easily search for variant spellings. For instance, the given name equivalent to Mary in Norwegian is frequently spelled as Maren, Mari, Marie or Maria. Giving the atom as "Mar(en|i)[ea]*" deals effectively with this. (Future project: use tsearch and build a thesaurus of variant name spellings.)

Integrating the search within the application brought another bonus. I made the node number in the query result clickable, with a link to the Source Manager. So, just by opening the node in a new tab, I both get to see which events and relations the source is associated with, and automatically set the last_selected_source to this node, ready to associate with an Event or Relation.

The last_selected_source (LSS) has grown to become a powerful concept within the application. I seldom enter a source node number by hand anymore; it’s much easier to modify the LSS before entering a citation. Therefore, I’ve also added a «Use» hotlink to each of the citations in the Family View Notes section that updates the LSS.

I probably should write some words about how I operate this program, as it’s very unconventional with respect to other genealogy apps. The source model is, as I’ve described in the Exodus article, «a self-referential hierarchy with an unknown number of levels.» (See the Gentech GDM document, section 5.3: Evidence Submodel.) The concept is generally known as an «adjacency tree» in database parlance. My own twist to it is that each node contains a partial string, and the full source text is produced at run-time by a recursive concatenation of the strings. It’s a simple, yet powerful, approach. Supplementary text, not intended to show up in the actual citation, is enclosed in {curlies}.

I usually start by entering source transcripts from a church book, every single one of them in sequential order.  The concatenated node text up to that point is something like “Church book|for Solum|Mini 2 (1713-1761).|Baptisms,|page 62.” (The pipes are actually spaces, I just wanted to show the partial strings.) When I add a transcript, I usually increment the sort_order by prefixing the text with ‘++ ‘, and the add_source function (see above) will automatically assign the correct sort order number to the node.  At the same time, I’ll look up the name in the database to see if I’ve already got that person or the family.  Depending on the search result, I may associate the newly entered transcript with the relevant Events/Relations, or may leave it lying around, waiting for that person or family to approach «critical mass» in my research.  Both in the Source Manager and in the new Search for Source Text, unused transcripts are rendered with grey text, making it easy to see which sources are actually associated with «persons» in the database.

It can be seen that the process is entirely «source driven», to an extent that I have not seen in any other genealogy research tool. And, of course, it’s totally incompatible with GEDCOM.

For that reason, and for several others, it’s also totally unsuitable for a casual «family historian». Most people compile their genealogy by drawing information from lots and lots of different sources. I, on the other hand, conduct a «One-place Study» in two adjacent parishes, and use a few sources exhaustively. I’m out to get the full picture of those two parishes, and my application is designed with that goal in mind.

April 1, 2009 :: Norway  

Dan Fego

Merging files with pr

Tonight, I’ve been poring over a rather large data set that I want to get some useful information out of. All the data was originally stored in a .html file, but after some (very) crude extraction techniques, I managed to pull out just the data I wanted, and shove it into a comma-separated file. Earlier, I had given up on my tools at hand and typed up an entire list of row headings for my newly-gotten data. So I had two files like so:

headings.txt
Alpha
Bravo
Charlie

values.csv
1,2,3,4
5,6,7,8
9,10,11,12

I spent quite a bit of time trying to figure out how to combine the two columns into one file with what I knew, but none of my tools could quite do it without nasty shell scripting. It took me a while, but I eventually found this post that cracked the case for me. The pr command, ostensibly for paging documents, has enough horsepower to solve my problem in short order, like so:

$ pr -tm -s, headings.txt values.csv

The -t tells the program to omit headers and footers, and -m tells it to merge the files, one line from each, side by side. The -s, tells it to use commas as field separators. This produced my desired result:

output
Alpha,1,2,3,4
Bravo,5,6,7,8
Charlie,9,10,11,12

There are numerous other options to pr, and depending on your potential line lengths, one may have to experiment. But for me, this got the job done.
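
For the record, coreutils also ships paste, which is aimed squarely at this kind of column merge; an untested equivalent of the pr command above would be:

$ paste -d, headings.txt values.csv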

April 1, 2009 :: USA  

Brian Carper

Trying Arch

Thanks to all who gave helpful suggestions about running VMs in Gentoo. The main reason I wanted a VM was to play around with some other distros and see what I liked.

But then I got to thinking, and I realized that I have over 250 GB of free hard drive space sitting around. So I made a new little partition and per Noah's suggestion, threw Arch Linux on there.

I'm fairly impressed so far. The install was easy. In contrast to the enormous Gentoo handbook, the whole Arch install guide fits on one page of the official Arch wiki. Why doesn't Gentoo have an official wiki? I know there are concerns over the quality of something anyone can edit, but in practice is it that big a deal? Is it worth the price of sending users elsewhere, to potentially even WORSE places, when the Gentoo docs don't cover everything we need? The quality of the unofficial Gentoo wiki is often very good but sometimes hit-or-miss, and it also sort of crashes and loses all data without backups every once in a while.

The Arch installer is a commandline app using ncurses for basic menus and such, which is more than sufficient and a good compromise between commandline-only and full-blown X-run Gnome bloat. The install itself went fine, other than my own mistakes. I'm sharing /boot and /home between Gentoo and Arch so I can switch between them easily. During the install Arch tried to create some GRUB files, but they already existed care of Gentoo, so the install bombed without much notification and I didn't notice until 3 steps later. No big deal to fix, but I'd have liked a louder error message right away when it happened. The base install took about 45 minutes.

Another nice thing is that the Arch install CD has vi on it. I didn't have to resort to freaking nano or remember to install vim first thing. A mild annoyance to be sure, but it bugged me every time I installed Gentoo.

After boot, installing apps via pacman is simple enough. KDE 4.2 installed in about 15 minutes, as you'd expect from a distro with binary packages. I found a mirror with 1.5 Mb/sec downloads, which is awfully nice. Syncing the package tree takes less than 2 seconds, which is also nice compared to Portage's 5-minute rsync and eix update times. Searching the tree via regex is also somehow instantaneous in Arch.

Oddly, KDE didn't seem to pull in Xorg as a dependency, but other dependencies worked fine so far. Time will tell how well this all holds up. Most package managers do fine on the normal cases but the real test is the funky little obscure apps. pacman -S gvim resulted in a Vim with working rubydo and perldo, which means Arch passed the Ubuntu stink test.

Another nice thing is that KDE4 actually works. My Gentoo install is years old and possibly crufted beyond repair, or something else was wrong, but I have yet to get KDE4 working in Gentoo without massive breakage. Possibly if I wiped Gentoo and tried KDE4 without legacy KDE3 stuff everywhere it'd also be smooth.

Regardless, it all works in Arch. NVidia drivers and Twinview settings were copy/pasted from Gentoo, and compositing all works fine. No performance problems in KDE with resizing or dragging windows, no Plasma crashes (yet), no missing icons or invisible notification area. QtCurve works in Qt3, Qt4 and GTK just fine. My sound card worked without any manual configuration at all. My mouse worked without tweaking, including the thumb buttons. Same with networking, the install prompted me for my IP and gateway etc. and then it worked, no effort.

I've mentioned this before, but one nice thing about Linux is that if you have /home in its own partition, it's no big deal at all to share it between distros. With no effort at all I'm now using all my old files and settings in Arch, and I can switch back and forth between this and Gentoo without any troubles.

So we'll see how this goes. So far so good though. Arch seems very streamlined and its goal is minimalism, which is nice. Gentoo has not felt minimalistic to me in a while. Again, may be due to the age of my install, cruft and bit-rot.

April 1, 2009 :: Pennsylvania, USA  

March 31, 2009

Ciaran McCreesh

Feeding ERB Useful Variables: A Horrible Hack Involving Bindings


I’ve been playing around with Ruby to create Summer, a simple web packages thing for Exherbo. Originally I was hand-creating HTML output simply because it’s easy, but that started getting very very messy. Mike convinced me to give ERB a shot.

The problem with template engines with inline code is that they look suspiciously like the braindead PHP model. Content and logic end up getting munged together in a horrid, unmaintainable mess, and the only people who’re prepared to work with it are the kind of people who think PHP isn’t more horrible than an aborted Jacqui Smith clone foetus boiled with rotten lutefisk and served over a bed of raw sewage with a garnish of Dan Brown and Patricia Cornwell novels. So does ERB let us combine easy page layouts with proper separation of code?

Well, sort of. ERB lets you pass it a binding to use for evaluating any code it encounters. On the surface of it, this lets you select between the top level binding, which can only see global symbols, or the caller’s binding, which sees everything in scope at the time. Not ideal; what we want is to provide only a carefully controlled set of symbols.

There are three ways of getting a binding in Ruby: a global TOPLEVEL_BINDING constant, which we clearly don’t want, the Kernel#binding method which returns a binding for the point of call, and the Proc#binding method which returns a binding for the context of a given Proc.

At first glance, the third of these looks most promising. What if we define the names we want to pass through in a lambda, and give it that?

require 'erb'

puts ERB.new("foo <%= bar %>").result(lambda do
    bar = "bar"
end)

Mmm, no, that won’t work:

(erb):1: undefined local variable or method `bar' for main:Object (NameError)

Because the lambda’s symbols aren’t visible to the outside world. What we want is a lambda that has those symbols already defined in its binding:

require 'erb'

puts ERB.new("foo <%= bar %>").result(lambda do
    bar = "bar"
    lambda { }
end.call)

Which is all well and good, but it lets symbols leak through from the outside world, which we’d rather avoid. If we don’t explicitly say “make foo available to ERB”, we don’t want to use the foo that our calling class happens to have defined. We also can’t pass functions through in this way, except by abusing lambdas — and we don’t want to make the ERB code use make_pretty.call(item) rather than make_pretty(item). Back to the drawing board.

There is something that lets us define a (mostly) closed set of names, including functions: a Module. It sounds like we want to pass through a binding saying “execute in the context of this Module” somehow, but there’s no Module#binding_for_stuff_in_us. Looks like we’re screwed.

Except we’re not, because we can make one:

module ThingsForERB
    def self.bar
        "bar"
    end
end

puts ERB.new("foo <%= bar %>").result(ThingsForERB.instance_eval { binding })

Now all that remains is to provide a way to dynamically construct a Module on the fly with methods that map onto (possibly differently-named) methods in the calling context, which is relatively straightforward. Then we can do this in our templates:

<% if summary %>
    <p><%=h summary %>.</p>
<% end %>

<h2>Metadata</h2>

<table class="metadata">
    <% metadata_keys.each do | key | %>
        <tr>
            <th><%=h key.human_name %></th>
            <td><%=key_value key %></td>
        </tr>
    <% end %>
</table>

<h2>Packages</h2>

<table class="packages">
    <% package_names.each do | package_name | %>
        <tr>
            <th><a href="<%=h package_href(package_name) %>"><%=h package_name %></a></th>
            <td><%=h package_summary(package_name) %></td>
        </tr>
    <% end %>
</table>

Which gives us a good clean layout that’s easy to maintain, but lets us keep all the non-trivial code in the controlling class.


March 31, 2009

Jürgen Geuter

Themeability can result in bad software

Gwibber is a microblogging client for Linux based on Python and GTK. Well some of it is.

But in order to give it simple skinability or themeability, it was decided to use an embedded Webkit browser to display the information. Even better, the HTML wasn't rendered statically: after all the data had been parsed, it was handed to the HTML template as data and then rendered dynamically using jQuery and JavaScript.

That sounds like a neat "proof of concept" thingy, you know, one of those things where people ask: "Why would you do that?" And you answer: "Because I can."

Many people nowadays know at least some HTML, CSS and JavaScript so many projects are tempted to use those technologies as markup to gain the ability to skin their software but I think that is not the right direction.

Yes some people will claim that people want to use pretty software and if your software is not as pretty as a fairy princess, nobody will want to run it.

But on the other hand Gwibber gives us an example for the opposite point of view: The embedded webkit browser thingy in connection with JavaScript is really unstable and fragile. Today I updated my system and got a newer webkit-gtk which made Gwibber pretty much die. It's a known bug and it's really hard to debug what exactly goes wrong.

While Gwibber kinda has the important features, there is still quite some stuff it lacks, but right now most of the energy has to be spent on reworking the inner workings and getting the webkit thingy to display some statically rendered HTML.

A better approach would have been to implement the functionality in a library and then build a client on top of that, a simple client that just works. Then you can start adding code to the whole thing that allows you to make it all pretty and fancy.

Right now we have a package that's kinda nifty but forces you to find a random version of webkit-gtk that might work and if you find it, never upgrade. You have a pretty tool that users start to adopt, it gets included into Ubuntu's next release but, guess what? The current version won't run. That makes the project look bad. Even if the software looks good. If you know what I mean.

March 31, 2009 :: Germany  

Martin Matusiak

emerge mono svn

Yes, it’s time for part two. If you’re here it’s probably because someone said “fixed in svn”, and for most users of course that doesn’t matter, but if you’re a developer you might need to keep up with the latest.

Now, it’s one thing to do a single install and it’s another to do it repeatedly. So I decided to do something about it. Here is the script, just as quick and dirty and unapologetic as the language it’s written in. To make up for that I’ve called it emerge.pl to give it a positive association.

What it does is basically encapsulate the findings from last time and just put it all into practice. Including setting up the parallel build environment for you. Just remember that once it’s done building, source the env.sh file it spits out to run the installed binaries.

$ ./emerge.pl merge world

$ . env.sh

$ monodevelop &

This is pretty simple stuff, though. Just run through all the steps, no logging. If it fails at some point during the process it stops so that you can see the error. Then if you hit Enter it continues.

#!/usr/bin/perl
# Copyright (c) 2009 Martin Matusiak <numerodix@gmail.com>
# Licensed under the GNU Public License, version 3.
#
# Build/update mono from svn
 
use warnings;
 
use Cwd;
use File::Path;
use File::Spec;
use Term::ReadKey;
 
 
my $SRCDIR = "/ex/mono-sources";
my $DESTDIR = "/ex/mono";
 
 
sub term_title {
	my ($s) = @_;
	system("echo", "-en", "\033]2;$s\007");
}
 
sub invoke {
	my (@args) = @_;
 
	print "> "; foreach my $a (@args) { print "$a "; }; print "\n";
 
	$exit = system(@args);
	return $exit;
}
 
sub dopause {
	ReadMode 'cbreak';
	ReadKey(0);
	ReadMode 'normal';
}
 
 
sub env_var {
	my ($var) = @_;
	my ($val) = $ENV{$var};
	return defined($val) ? $val : "";
}
 
sub env_get {
	my ($env) = {
		DYLD_LIBRARY_PATH => "$DESTDIR/lib:" . env_var("DYLD_LIBRARY_PATH"),
		LD_LIBRARY_PATH => "$DESTDIR/lib:" . env_var("LD_LIBRARY_PATH"),
		C_INCLUDE_PATH => "$DESTDIR/include:" . env_var("C_INCLUDE_PATH"),
		ACLOCAL_PATH => "$DESTDIR/share/aclocal",
		PKG_CONFIG_PATH => "$DESTDIR/lib/pkgconfig",
		XDG_DATA_HOME => "$DESTDIR/share:" . env_var("XDG_DATA_HOME"),
		XDG_DATA_DIRS => "$DESTDIR/share:" . env_var("XDG_DATA_DIRS"),
		PATH => "$DESTDIR/bin:$DESTDIR:" . env_var("PATH"),
		PS1 => "[mono] \\w \\\$? @ ",
	};
	return $env;
}
 
sub env_set {
	my ($env) = env_get();
	foreach my $key (keys %$env) {
		if ((!exists($ENV{$key})) || ($ENV{$key} ne $env->{$key})) {
			$ENV{$key} = $env->{$key};
		}
	}
}
 
sub env_write {
	my ($env) = env_get();
	open (WRITE, ">", "env.sh");
	foreach my $key (keys %$env) {
		my ($line) = sprintf("export %s=\"%s\"\n", $key, $env->{$key});
		print(WRITE $line);
	}
	close(WRITE);
}
 
 
sub pkg_get {
	my ($name, $svnurl) = @_;
	my $pkg = {
		name => $name,
		dir => $name, # fetch to
		workdir => $name, # build from
		svnurl => $svnurl,
		configurer => "autogen.sh",
		maker => "make",
		installer => "make install",
	};
	return $pkg;
}
 
sub pkg_print {
	my ($pkg) = @_;
	foreach my $key (keys %$pkg) {
		printf("%14s : %s\n", $key, $pkg->{$key});
	}
	print("\n");
}
 
sub pkg_action {
	my ($action, $dir, $pkg, $code) = @_;
 
	# Report on action that is to commence
	term_title(sprintf("Running %s %s", $action, $pkg->{name}));
 
	# Create destination path if it does not exist
	my ($path) = File::Spec->catdir($SRCDIR, $dir);
	unless (-d $path) {
		mkpath($path);
	}
 
	# Chdir to source path
	my ($cwd) = getcwd();
	chdir($path);
 
	# Set environment
	env_set();
 
	# Perform action
	my ($exit) = &$code;
 
	# Chdir back to original path
	chdir($cwd);
 
	# Check exit code
	if ($exit == 0) {
		term_title(sprintf("Done %s %s", $action, $pkg->{name}));
	} else {
		term_title(sprintf("Failed %s %s", $action, $pkg->{name}));
		dopause();
	}
}
 
sub pkg_fetch {
	my ($pkg, $rev) = @_;

	if (exists($pkg->{svnurl})) {
		my $code = sub {
			return invoke("svn", "checkout", "-r", $rev, $pkg->{svnurl}, ".");
		};
		pkg_action("fetch", $pkg->{dir}, $pkg, $code);
	}
}
 
sub pkg_configure {
	my ($pkg) = @_;
 
	if (exists($pkg->{configurer})) {
		my $code = sub {
			my ($configurer) = $pkg->{configurer};
			if (!-e $configurer) {
				if (-e "configure") {
					$configurer = "configure";
				}
			}
			return invoke("./$configurer --prefix=$DESTDIR");
		};
		pkg_action("configure", $pkg->{workdir}, $pkg, $code);
	}
}
 
sub pkg_premake {
	my ($pkg) = @_;
 
	if (exists($pkg->{premaker})) {
		my $code = sub {
			return invoke($pkg->{premaker});
		};
		pkg_action("premake", $pkg->{workdir}, $pkg, $code);
	}
}
 
sub pkg_make {
	my ($pkg) = @_;
 
	if (exists($pkg->{maker})) {
		my $code = sub {
			return invoke($pkg->{maker});
		};
		pkg_action("make", $pkg->{workdir}, $pkg, $code);
	}
}
 
sub pkg_install {
	my ($pkg) = @_;
 
	if (exists($pkg->{installer})) {
		my $code = sub {
			return invoke($pkg->{installer});
		};
		pkg_action("install", $pkg->{workdir}, $pkg, $code);
	}
}
 
 
sub pkglist_get {
	my $mono_svn = "svn://anonsvn.mono-project.com/source/trunk";
	my (@pkglist) = (
		{"libgdiplus" => "$mono_svn/libgdiplus"},
		{"mcs" => "$mono_svn/mcs"},
		{"olive" => "$mono_svn/olive"},
		{"mono" => "$mono_svn/mono"},
		{"debugger" => "$mono_svn/debugger"},
		{"mono-addins" => "$mono_svn/mono-addins"},
		{"mono-tools" => "$mono_svn/mono-tools"},
		{"gtk-sharp" => "$mono_svn/gtk-sharp"},
		{"gnome-sharp" => "$mono_svn/gnome-sharp"},
		{"monodoc-widgets" => "$mono_svn/monodoc-widgets"},
		{"monodevelop" => "$mono_svn/monodevelop"},
		{"paint-mono" => "http://paint-mono.googlecode.com/svn/trunk"},
	);
 
	my (@pkgs);
	foreach my $pkgh (@pkglist) {
		# prep
		my @ks = keys(%$pkgh); my $key = $ks[0];
 
		# init pkg
		my $pkg = pkg_get($key, $pkgh->{$key});
 
		# override defaults
		if ($pkg->{name} eq "mcs") {
			delete($pkg->{configurer});
			delete($pkg->{maker});
			delete($pkg->{installer});
		}
		if ($pkg->{name} eq "olive") {
			delete($pkg->{configurer});
			delete($pkg->{maker});
			delete($pkg->{installer});
		}
		if ($pkg->{name} eq "mono") {
			$pkg->{premaker} = "make get-monolite-latest";
		}
		if ($pkg->{name} eq "gtk-sharp") {
			$pkg->{configurer} = "bootstrap-2.14";
		}
		if ($pkg->{name} eq "gnome-sharp") {
			$pkg->{configurer} = "bootstrap-2.24";
		}
		if ($pkg->{name} eq "paint-mono") {
			$pkg->{workdir} = File::Spec->catdir($pkg->{dir}, "src");
		}
 
		push(@pkgs, $pkg);
	}
	return @pkgs;
}
 
 
sub action_list {
	my (@pkgs) = pkglist_get();
	foreach my $pkg (@pkgs) {
		printf("%s\n", $pkg->{name});
	}
}
 
my %actions = (
	list => -1,
	merge => 0,
	fetch => 1,
	configure => 2,
	make => 3,
	install => 4,
);
 
sub action_merge {
	my ($action, @worklist) = @_;
 
	# spit out env.sh to source when running
	env_write();
 
	# init source dir
	unless (-d $SRCDIR) {
		mkpath($SRCDIR);
	}
 
	my (@pkgs) = pkglist_get();
	foreach my $pkg (@pkgs) {
		# filter on membership in worklist
		if (grep {$_ eq $pkg->{name}} @worklist) {
			pkg_print($pkg);
 
			# fetch
			if (($action == $actions{merge}) || ($action == $actions{fetch})) {
				my $revision = "HEAD";
				pkg_fetch($pkg, $revision);
			}
 
			# configure
			if (($action == $actions{merge}) || ($action == $actions{configure})) {
				pkg_configure($pkg);
			}
 
			if (($action == $actions{merge}) || ($action == $actions{make})) {
				# premake, if any
				pkg_premake($pkg);
 
				# make
				pkg_make($pkg);
			}
 
			# install
			if (($action == $actions{merge}) || ($action == $actions{install})) {
				pkg_install($pkg);
			}
		}
	}
}
 
 
sub parse_args {
	if (scalar(@ARGV) == 0) {
		printf("Usage:  %s <action> [<pkg1> <pkg2> | world]\n", $0);
		printf("Actions: %s\n", join(" ", keys(%actions)));
		exit(2);
	}
 
	my ($action) = $ARGV[0];
	if (!grep {$_ eq $action} keys(%actions)) {
		printf("Invalid action: %s\n", $action);
		exit(2);
	}
 
	my (@pkgnames) = splice(@ARGV, 1);
	if (grep {$_ eq "world"} @pkgnames) {
		@allpkgs = pkglist_get();
		@pkgnames = ();
		foreach my $pkg (@allpkgs) {
			push(@pkgnames, $pkg->{name});
		}
	}
 
	return (action => $action, pkgs => \@pkgnames);
}
 
sub main {
	my (%input) = parse_args();
 
	printf("Action selected: %s\n", $input{action});
	if (scalar(@{ $input{pkgs} }) > 0) {
		printf("Packages selected:\n");
		foreach my $pkgname (@{ $input{pkgs} }) {
			printf(" * %s\n", $pkgname);
		}
		print("\n");
	}
	}
 
	if ($actions{$input{action}} == $actions{list}) {
		action_list();
		exit(2);
	}
 
	action_merge($actions{$input{action}}, @{ $input{pkgs} })
}
 
main();

Download this code: emerge_pl

March 31, 2009 :: Utrecht, Netherlands  

Brian Carper

Gentoo VMWare Fail

According to this bug, VMWare on Gentoo is in a sorry state, with one lone person trying to keep it going. I can't get vmware-modules to compile on my system no matter what I try, which is depressing. Kudos to all of our one-man army Gentoo devs who are keeping various parts of the distro going, but I wonder how many other areas of Gentoo are largely unmaintained nowadays.

KVM was braindead simple to get set up in comparison with VMWare, but I can't get networking to work. This is because I'm an idiot when it comes to TUN/TAP and iptables. I've read wiki articles that suggest setting up my system to NAT-forward traffic into the VM but I couldn't get that working and don't have a lot of time to screw with it.
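
For reference, the recipe those wiki articles boil down to is something like the following (a sketch with assumed interface names and addresses; this is exactly the part I couldn't get working, so treat it with suspicion):

# tunctl -t tap0 -u brian
# ip addr add 192.168.100.1/24 dev tap0
# ip link set tap0 up
# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE
$ kvm -hda gentoo.img -net nic -net tap,ifname=tap0,script=no

The guest would then get a static 192.168.100.x address with 192.168.100.1 as its gateway. tunctl comes from usermode-utilities; the user name above is made up.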

On one of the Gentoo mailing lists I noticed that a dev has posted some KVM images of Gentoo suitable for testing. But I'm looking to start up an image from scratch and that doesn't help, and it's not going to help me get networking going any easier.

Why do I feel like this'd take 10 minutes to set up on Ubuntu? Look at this, or search for "ubuntu vmware" and see the hundreds of results. Given that it's a VM and it doesn't really matter what the host OS is anyways, I'll probably do that on my laptop, but it's still depressing.

March 31, 2009 :: Pennsylvania, USA  

March 30, 2009

N. Dan Smith

Gentoo on the iBook G4

While Debian may be suitable for my Apple Powermac G3 Blue and White, nothing can beat Gentoo on my iBook G4.  I have resolved that being a Gentoo developer is not part of my future.  But I cannot stay away from Gentoo as a user, especially when it comes to my iBook.  Pure computing joy.

It was not always so.  When I first started using Gentoo there were no drivers for the Broadcom wireless card it has.  Thankfully since then free and open drivers have been developed which work great for me.  Also, all of the Mac buttons and features (including sleep) work perfectly, so it makes a great notebook.  I plan on using it as my main workhorse for thesis research and writing.

March 30, 2009 :: Oregon, USA  

Gentoo on iBook G4: The Essentials

When it comes to running Linux on an Apple iBook G4 (or any iBook or PowerBook in general), there are a few essential resources.  Here they are:

  • Gentoo Linux PPC Handbook - The installation instructions for Gentoo are among the best documentation available for Linux.
  • Gentoo PPC FAQ - This document answers all your questions about the idiosyncrasies of running Linux on PowerPC hardware.  This includes information on how to enable your soundcard as well as recommendations for laptop-specific applications (which can be installed with portage).  First and foremost of these is pbbuttonsd ("PowerBook buttons daemon"), which makes the volume, brightness, and eject keys work, along with sleep and other power management features.  There is nothing like being able to close the lid and forget about it, just like in Mac OS X.
  • Airport Extreme Howto - This is a very clear and concise guide to getting your Airport Extreme wireless network card working.  Until these drivers came along, Linux on the iBook G4 was not very fun.  Now I can enjoy its full laptop potential.
  • Gentoo Hardware 3D Acceleration Guide - You have a Radeon Mobility video card in that iBook.  Use it!  Follow this guide to ensure that hardware rendering is enabled.  This will open the door to goodies like Compiz Fusion, which does work fairly well on the iBook G4.
  • Inputd - This program allows for right-click solutions (e.g. command + left-click = right click) and much more.  The cure for the one-button mouse.  It requires some changes in the kernel and perhaps its config file, but it should not be too challenging for any user who has successfully completed the Gentoo install.

It is best to consult all of those resources during the initial installation.  That way you do not have to go back and rebuild your kernel when you add each feature.

March 30, 2009 :: Oregon, USA  

zsh on Gentoo and OS X

I am now a zsh man.  The key to a happy zsh experience is a good ~/.zshrc file.  Thanks to Gentoo’s docs, I have a good start:

#!/bin/zsh
# completion
autoload -U compinit
compinit
# prompt
autoload -U promptinit
promptinit
prompt adam1
# options
setopt correctall
setopt autocd
setopt extendedglob
# history
export HISTSIZE=2000
export HISTFILE="$HOME/.history"
export SAVEHIST=$HISTSIZE
setopt hist_ignore_all_dups
# zstyle
zstyle ':completion:*:descriptions' format '%U%B%d%b%u'
zstyle ':completion:*:warnings' format '%BNo matches for: %d%b'
# color
[ -f /etc/DIR_COLORS ] && eval $(dircolors -b /etc/DIR_COLORS)
alias ls="ls --color=auto -h"
alias grep="grep --color=auto"

There are many more zsh options to play with.  For example, you can use prompt -l to see the list of available prompt templates if adam1 does not suit you.  Customized designs are doable as well.
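
For instance, a hand-rolled prompt is only a few lines in ~/.zshrc (a minimal sketch; colors and layout to taste):

# enable the color helpers, then build a green user@host prompt
autoload -U colors && colors
PROMPT='%{$fg[green]%}%n@%m%{$reset_color%} %~ %# '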

You can also set the OS X Terminal.app to use zsh (/bin/zsh), but the color section of the file needs to be a bit different:

# color
alias ls="ls -Gh"
alias grep="grep --color=auto"

Enjoy!

March 30, 2009 :: Oregon, USA  

The Complete Idiot’s Guide to Paludis

Paludis is a package manager for Linux. It started out as an alternative to Portage for Gentoo, but it also supports another distribution (Exherbo) now. I use paludis in my Gentoo setup because I think it works better than portage. Others may disagree. Really it comes down to user preference. There is a lot of package manager zealotry out there, so I thought I would add my own fuel to the fire. Here are my tips for happy paludis usage for a new user.

  • Know what you are doing with Gentoo. In other words, if you are an idiot, paludis is not for you. :-)
  • Read the documentation, including the man pages for paludis and associated programs.
  • When you configure paludis for the first time, choose the manual configuration option. You want to learn how paludis works, and this is the best introduction. This will also require you to read the configuration documentation.
  • Read and appropriately respond to the warnings and error messages paludis reports.
  • Use conf.d directories for your keywords and use configurations. This will keep your configuration files clean and organized, and will facilitate easier system administration and package testing (see the sketch after this list).
  • Move your Gentoo repository from /usr/portage to /var/paludis/repositories/gentoo. It requires a little work, but it’s just better that way. This of course breaks portage (but who cares?).
  • Develop a thick skin. The paludis developers are brilliant, but they have very poor public relations skills. If you venture onto the mailing lists or into the IRC channel, do not take anything personally. Asking direct questions and providing pertinent info is an important prerequisite to getting paludis support. (Probably all software projects can benefit from not letting developers do PR.)
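
For example, assuming your use.conf and keywords.conf are set up as conf.d directories, the layout might look something like this (a sketch; the file names are arbitrary and the entries use the usual spec-plus-flags format from the configuration docs):

$ cat /etc/paludis/use.conf.d/media.conf
media-video/mplayer x264 -esd
media-libs/libsdl alsa

$ cat /etc/paludis/keywords.conf.d/testing.conf
dev-java/icedtea6 ~ppc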

Flame on. :-)

March 30, 2009 :: Oregon, USA  

Two Penguins are Better than One

Yesterday I had the fortune of finding a rather affordable PowerMac dual G4 1.0 GHz, a.k.a. the “mirrored drive door.”  The machine was lacking all the drives and a video card, but I had all those to spare, so I picked it up.  Needless to say, I am quite pleased, since my G3 had been acting up of late. This machine will serve as an excellent testing/development host as well as a desktop for me.  I’ve already got Gentoo installed and I am working on getting it up to speed as a desktop.

March 30, 2009 :: Oregon, USA  

Deep Breath 4.2

I am going to be installing KDE 4.2.  Wish me luck.

March 30, 2009 :: Oregon, USA  

KDE Fails Again

Well, not really KDE.  Qt has some sort of bug on PowerPC in Gentoo where the colors get mixed up, especially orange and blue.  Also, the Rage128 X.org driver is apparently broken in xorg-server 1.5.  So I guess I am sticking with XFCE4 for the time being.

March 30, 2009 :: Oregon, USA  

Nagios

We decided to add some proactive monitoring to various systems at work. I discovered Nagios.  It was not difficult to install and configure, and there is even some Gentoo-specific documentation. I had to customize the default install a bit to accommodate lighttpd and nbsmtp (the mailer I use). Now all of our servers are monitored and alerts are sent out via email (to a Crackberry) as needed.

During the course of configuring servers I had the misfortune of discovering a bug in one of our machines which defies any attempt at a Let-Me-Google-That-For-You fix, so alas I will be calling MSFT support tomorrow.  If I get that fixed and get the stupid coffee bar point-of-sale machine to stay operational, I will be a happy camper.

March 30, 2009 :: Oregon, USA  

Nagios using NBSMTP as an MTA

I wanted to use email notifications in Nagios, but I didn’t want to set up a complicated mail transfer agent (postfix, qmail, exim, etc.). I discovered nbsmtp ("no-brainer SMTP") through my experience with Mutt on Gentoo. It is not a real MTA, but it just punts your outgoing mail to another mail server (your ISP, Gmail, etc.). Yesterday I married the two, and since I could not find any documentation online about it, I will post it here.

First install nbsmtp on your system.  Then switch to the nagios user (probably "nagios" - whichever user your Nagios instance runs as).  In that user’s home folder, create .nbsmtprc and fill in the following:

auth_user = from-address@example.com
auth_pass = your_password_here
relayhost = smtp.example.com
fromaddr = from-address@example.com
port = 587
use_starttls = True
domain = example.com

This example happens to work if you are using Gmail.  Just adjust your settings accordingly.  Now whenever the nagios user runs nbsmtp, all of the runtime configuration can be read from the file, which simplifies the command. Next, nagios’ commands.cfg needs to be customized to reflect the change to nbsmtp.  Here is my example:

/usr/bin/printf "%b" "To: $CONTACTEMAIL$\nFrom: from-address@example.com\nSubject: $NOTIFICATIONTYPE$ Host $HOSTNAME$ is $HOSTSTATE$\n\nType: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\nTime: $LONGDATETIME$\n" | nbsmtp

/usr/bin/printf "%b" "To: $CONTACTEMAIL$\nFrom: from-address@example.com\nSubject: $NOTIFICATIONTYPE$ Svc $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$\n\nType: $NOTIFICATIONTYPE$\nSvc: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddr: $HOSTADDRESS$\nState: $SERVICESTATE$\nTime: $LONGDATETIME$\nInfo:\n$SERVICEOUTPUT$\n" | nbsmtp

Nagios will fill in the variables, except you need to specify the From address to match your nbsmtprc.  The key to these commands is that you have one line for each header (To:, From:, Cc:, Subject:, etc.) and then two newline characters before the body of the message.  Then you can format the message however you like. Assuming everything is properly configured, you should be receiving email alerts from Nagios when there is trouble.  Of course it is best to test an alert to verify that email works before you run into a real problem.
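
Before trusting it with real alerts, it is worth testing delivery by hand as the nagios user, mirroring the command definitions above (addresses assumed):

$ su - nagios
$ /usr/bin/printf "%b" "To: you@example.com\nFrom: from-address@example.com\nSubject: nbsmtp test\n\nIt works.\n" | nbsmtp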

March 30, 2009 :: Oregon, USA  

TopperH

Gentoo releases, my point of view

This entry follows up this nice article by Jeremy Olexa (darkside) and the comments on it.

I'm not a developer and I don't know much about the technical stuff that my idea involves; it's just a personal, different approach to the question that Jeremy asks.

Reading the article and the comments it looks like PR and advertising are the main issues. I couldn't agree more. When a distro comes out with a new version, popular sites (slashdot, distrowatch...) write an article, popular bloggers try the distro and write their opinion, other bloggers publish screenshots... A lot of buzz is generated, and people are aware that the new distro is out.

Gentoo is always up to date

Gentoo is a rolling release distro; it never needs upgrades, just updates. The installation on my workstation (made in 2006) is just as up to date as the shiny new install on my laptop. That's Gentoo magic: you sync, you emerge world, and every day you have the latest and greatest.
Most people don't realise that, and this is why the whole "Gentoo is dead" thing keeps growing.

Installation media

So, when is it that I have a new Gentoo release? Maybe when a new installation medium is out?
Well, I only used the minimal Gentoo CD once to actually do an installation; then I realised that there are better ways to install Gentoo. I think Gentoo could invest less manpower in installation media releases. What we need is a very minimal CD, with basic tools for networking and partitioning (LVM and RAID), that is updated no more than every 12-18 months, plus a very clear and complete chapter in the handbook explaining how Gentoo can be built using livecds like SystemRescueCd, Knoppix, Sabayon, or even Ubuntu, and how people coming from other distros can install Gentoo in a partition without leaving the environment they are familiar with.

So, what makes a new release?

If I look at other people's workstations I can usually tell at first sight whether they are using Ubuntu, Windows XP, Suse, OSX... The fact is that a lot of people don't care too much about theming their desktops; they just keep the vanilla install as it is.

Let's be honest: I'm sure the Ubuntu developers did a lot of background work, but what comes out in the press for the next release? "A new notification style, and a shining new color theme." Wow... those guys are great at PR stuff.

Gentoo doesn't have a consistent artwork theme, and if I publish a screenshot of my desktop today it will look more or less the same as my desktop two years ago.
So, here comes my suggestion: a new Gentoo release every time a new artwork theme is ready. I'm not kidding; let's see how it should work...

How it works

The Gentoo artwork team provides consistent themes and wallpapers for the most popular DEs, login managers, toolkits, framebuffer and GRUB. (The Sabayon guys are really good at this: from the moment you boot till the moment you are in the graphical environment, the transition looks really smooth.) All those themes will be shipped in a package called media-gfx/gentoo-artwork and versioned like Gentoo releases (2009.0, 2010.1, etc.). Those packages will be slotted.

This package will have a USE flag for each of the packages we have a theme for, for example "grub framebuffer xdm gdm kdm slim gnome xfce openbox wallpaper", and according to the selected ones the relevant parts will be extracted.

The extracted themes will be named according to version (gtk-theme-gentoo-2009.0, gtk-theme-gentoo-2010.1) and with a symlink (gtk-theme-gentoo-default) that will be managed by an eselect module.

Assuming I have a default installation with no personal customizations, when a new version of gentoo-artwork comes out all I have to do is "eselect gentoo-artwork set n" and tah-dah, my whole gentoo changes shape and I'm ready to publish screenshots of my new gentoo in this blog.

Of course, if this new artwork comes along with a new major version of portage, or a new stabilized gcc, I will have something more to blog about :P

What else?

Gentoo is all about choices, so if I want to keep the current behaviour all I have to do is add "-gentoo-artwork" to the USE in my make.conf.

My 2 cents...

March 30, 2009 :: Italy  

Kyle Brantley

v6 tunnels and v4 firewalls

My home network has "native" IPv6 through a series of tunnels that I've set up. The setup is pretty basic. A v6-in-v4 tunnel comes in through HE to my server, giving my server control over... a lot of v6. From here I segment it off a bit, and then branch the connectivity out over several other tunnels. One of these tunnels, as you could guess, heads to my home router.

When I was initially setting up the server <--> home tunnel, my firewalling rules gave me a fair bit of crap. Staring at tcpdump for quite some time didn't give me any leads concerning the proper rule to create, and I wound up whitelisting my entire home IPv4 address (that sounds a bit silly - whitelisting an 'entire v4 address' - you know, all one of them).

I finally got sick of allowing this IP full access to everything, because there were quite a number of ports "open" on the server that I didn't want anyone outside accessing. This also caused problems with creating proper rules in the first place, because my only test bed was... from an entirely whitelisted IP. Suffice it to say some things that I thought were open were in fact not open to anyone but me, and this caused me quite the headache before I figured it out.

So how did I fix this? The answer is actually pretty simple - 42.

Wait, no. I meant 41. Sorry. Really I did. 41 is the protocol number assigned to IPv6. If this was obvious to others, well, sorry that I'm so slow. I didn't know. If I had known that I should be picking random numbers and trying them in a not exactly often used iptables command, then maybe I would have done this earlier.

Fun fact: "TCP" is 6. Note how this is ambiguous in terms of which "IP" it means, but in this case, it means IPv4. Why TCP is "6" is evidently defined in RFC 793, and why IPv6 is "41" can be found in RFC 1883 (or 1112, not exactly sure).

Note how TCP is 6, and that UDP is 17. Both TCP and UDP are commonly known as "TCP/IP" and "UDP/IP." Both of these operate quite nicely over both IPv4 and IPv6. IPv6 has an assigned number - but IPv4 does not. How you would intermix this I'm not sure. I can block IPv6 quite nicely it seems, but IPv4 is strangely absent. Does 6 imply v4? Does 17 imply v4? How can you filter UDP over 41?

I have no idea. I'm confused too. If you can make sense of the why, I'd be very interested in finding out why these protocol numbers seem so convoluted and inconsistent. It is pretty obvious that the protocol number for v6 was tacked on long after the base numbers for TCP and UDP were established, but whatever.
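
For what it's worth, the full mapping lives in /etc/protocols, so the numbers can at least be looked up rather than guessed (output trimmed; the comment text varies by system):

$ grep -wE 'tcp|udp|ipv6' /etc/protocols
tcp      6    TCP     # transmission control protocol
udp     17    UDP     # user datagram protocol
ipv6    41    IPv6    # Internet Protocol, version 6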

Enough rambling.

So how did I fix this firewalling issue?

# iptables -I INPUT -s <v4 home address here> -p 41 -j ACCEPT

... from the tunnel server. I didn't have to create a matching rule on my home router, and of course, ymmv.

For those of you familiar with iptables, the "-p 41" may look somewhat familiar to you. It should:

# iptables -I INPUT -p tcp --dport 80 -j ACCEPT

It is just a simple protocol match. All we're doing is matching the v4 source address and the v6 payload, and allowing it through. Despite the above example doing something quite different, the -p switch does the same thing: it matches a protocol.

March 30, 2009 :: Utah, USA  

March 29, 2009

Steven Oliver

Proposed Small PC


I recently posted that I wanted a new PC. Well, I want a desktop anyway. My Apple laptop is still in excellent shape, especially since I dropped 4G of RAM in it. Anyway, I have built a computer for myself on Newegg and saved it as a public wish list. One thing to consider before viewing: I will be reusing my current hard drive along with my current CD/DVD burner. Outside of that I think I’ve got everything there I need.

NewEgg Wish List
(If that stupid link doesn’t work blame newegg)

Suggestions?

Enjoy the Penguins!

March 29, 2009 :: West Virginia, USA  

March 27, 2009

Ciaran McCreesh

EAPI 3: A Preview


Gentoo is shuffling its way towards EAPI 3. The details haven’t been worked out yet, but there’s a provisional list of things likely to show up that’s mostly been agreed upon. This post will provide a summary; when EAPI 3’s finalised, I’ll do a series of posts with full descriptions as I did for EAPI 2. PMS will remain the definitive definition; I’ve put together a draft branch (I’ll be rebasing this, so don’t base work off it if you don’t know how to deal with that).

Everything on this list is subject to removal, arbitrary change or nuking from orbit. We’re looking for a finalisation reasonably soon, so if it turns out Portage is unable to support any of these, they’ll be dropped rather than holding the EAPI up.

EAPI 3 will be defined in terms of differences to EAPI 2. These differences may include:

  • pkg_pretend support. This will let ebuilds signal a lot more errors at pretend-time, rather than midway through an install of a hundred packages that you’ve left running overnight. This feature is already in exheres-0.
  • Slot operator dependencies. This will let ebuilds specify what to do when they depend upon a package that has multiple slots available — using :* deps will mean “I can use any slot, and it can change at runtime”, whilst := means “I need the best slot that was there at compile time”. This feature is already in exheres-0 and kdebuild-1.
  • Use dependency defaults. With EAPI 2 use dependencies, it’s illegal to reference a flag in another package unless that package has that flag in IUSE. With use dependency defaults, you’ll be able to use foo/bar[flag(+)] and foo/bar[flag(-)] to mean “pretend it’s enabled (disabled) if it’s not present”. This feature is already in exheres-0. (Together with slot operator dependencies, this is illustrated in the sketch after this list.)
  • DEFINED_PHASES and PROPERTIES will become mandatory (they’re currently optional). This won’t have any effect for users (although without the former, pkg_pretend would be slooooow).
  • There’s going to be a default src_install of some kind. Details are yet to be entirely worked out.
  • Ebuilds will be able to tell the package manager that it’s ok or not ok to compress certain documentation things using the new docompress function.
  • dodoc will have a -r, for recursively installing directories.
  • doins will support symlinks properly.
  • || ( use? ( ... ) ) will be banned.
  • dohard and dosed will be banned. (Maybe. This one’s still under discussion.)
  • New doexample and doinclude functions. (Again, maybe. Quite a few people think these’re icky and unnecessary.)
  • unpack will support a few new extensions, probably xz, tar.xz and maybe xpi.
  • econf will pass --disable-dependency-tracking --enable-fast-install. This is already done for exheres-0.
  • pkg_info will be usable on uninstalled packages too. This is already in exheres-0 and kdebuild-1.
  • USE and friends will no longer contain arbitrary extra values. (Possibly. Not sure Portage will have this one done in time.)
  • AA and KV will be removed.
  • New REPLACED_BY_VERSION and REPLACING_VERSIONS variables, to let packages work out whether they’re upgrading / downgrading / reinstalling. exheres-0 has a more sophisticated version.
  • The automatic S to WORKDIR fallback will no longer happen under certain conditions. exheres-0 already has this.
  • unpack will consider unrecognised suffixes an error unless --if-compressed is specified, and the default src_unpack will pass this. exheres-0 already has this. (Maybe. Not everyone’s seen the light on this one yet.)
  • The automagic RDEPEND=DEPEND ick will be gone.
  • Utilities will die on failure unless prefixed by nonfatal. exheres-0 already has this.

Unless, of course, something completely different happens.


March 27, 2009

Brian Carper

Blog source code released

By popular demand, I've released the source code for my blog. Hope someone finds it useful.

http://github.com/briancarper/cow-blog/tree/master

Feedback and bug reports welcome, email me or post them somewhere on my blog and I'll find them.

March 27, 2009 :: Pennsylvania, USA  

Iain Buchanan

Linux is about choice (pt 2)

So you've seen my hasty "Linux is about choice" post already. In all fairness to Zimbra, it's a great product, and I'm sure many people rightly swear by it.

Part two of my rant deals with another situation that is slightly different -
"Why then, do applications (or their developers) decide to take away [or keep] that choice?"

While the Zimbra example is easy to argue (and has been suggested already) as a "bug", my second example could be purely opinion.

Think about the great program gnome-power-manager. For those of you who don't know Gnome / Linux, gnome-power-manager is an all-in-one laptop battery monitoring tool. It has the standard battery icon showing charge level; a history graph showing power, voltage, and charge-profile history and more; as well as LCD backlight, sleep and hibernate controls. And in my opinion, it does a great job!

(Should any of the developers involved read this, my intention is not to pick on or make fun of you, I hope to purely use the issue as an example, not the people involved!)

Ok, so I was configuring gnome-power-manager to handle everything it is designed to handle, with the exception of power off / hibernate. I use ACPI to hibernate my machine when the power button is pressed, and when the battery power drops to below 5%. (Why ACPI? Because it works regardless of whether I'm logged into Gnome or not, or even if X is not running at all.)

Here are the related power button options:
"When the power button is pressed", the options are (Ask me, Hibernate, or Shutdown)
"When the suspend button is pressed", the options are (Do Nothing or Hibernate).

There is no option to "Do nothing" when the power button is pressed. In fact, why are the four options not available for either button? (Ask, Hibernate, Shutdown, Nothing).

In my opinion, this would be the ultimate set of options, offering the most flexibility without overloading the user with a bulk of detail in the control panel. And yet it looks like my opinion is not understood. It appears the primary reason is that including the "Do Nothing" option would mean gnome-power-manager is doing "half a job".

Could you not foresee that parts of your application may be highly desired, and other parts not so? Given the large "roll your own" background of so many Linux users, why would that mantra not continue as far as possible? Why do Evolution (and Claws and Thunderbird), Firefox, and so on have a plugin framework? Or an external editor option?

Precisely because different people use Linux in different ways. And this is why Linux is about choice!

OK, I promise I'll get back to a technical blog post next :) And if you're interested, the bug is here.

March 27, 2009 :: Australia  

22 degrees C, and light snow!

A few weeks ago on a trip to New Zealand, I had only restricted internet access, so my usual RSS feeds and news reports weren't working.

I set up iGoogle (which lets you customise your google home page) to keep me up to date.

One day I noticed the weather report: 22 degrees Celsius, and light snow! I would have liked to see the snow, especially given it was summer!

Here's the screenshot:

March 27, 2009 :: Australia  

Linux is about choice (pt 1)

I argue that Linux is about choice. You may argue that it is about something else. I think that's fine, so long as we don't argue against each other, but for each other. Why? Because Linux is about different things to different people, and that's great! That's why it is so attractive and diverse.

Why then, do applications (or their developers) decide to take away that choice? Is it because they really don't see how other people may like to use their programs? Fair enough. Is it because they want to impose their ideas on how and why their program should be used? Not fair. What if you provide polite, detailed examples of different use cases, and yet the response is "no thanks, we don't / won't do it that way"?

You've guessed by now this is a rant. What sparked it off? Two recent applications are giving me grief. This post will look at the first, and why:

Zimbra webmail client

Zimbra makes a great webmail, calendaring (and more) suite. However, I noticed that since I set up my Zimbra calendar (as did 20+ other people here), any appointments people send me are being automatically accepted. So what? Well, from time to time I get a (usually pointless) meeting request that I don't want to accept, and yet I find Zimbra has accepted it, even when I'm not logged on.

No problemo, just find the preference and turn off "automatically accept meeting requests".

The only options that look close are in "preferences > calendar":

Permissions
Free / Busy:
[ ] Allow all users to see my free/busy information
[ ] Allow these users to see my free/busy information:
[text box]

Invites:
[ ] Allow all users to invite me to meetings
[ ] Allow these users to invite me to meetings:
[text box]

So first of all, I chose "Allow these users to invite me to meetings:" and left it blank. This didn't work; in fact, the behaviour was exactly the same as before.

So secondly I kept "Allow these users to invite me to meetings:" but entered my email address in the text box. Surely this would work?

Well it kind of worked. Now when people send me appointments, they only show as attachments which I can do nothing with (in Zimbra webmail). I can't even add them to my calendar. I suppose I should be happy that at least they don't get automatically accepted...

So my next solution was to try Evolution. I shared my Zimbra calendar and loaded it into Evolution. Great! There are all my appointments! However, when people send me meeting requests, I can't add them to my Zimbra calendar from Evolution, even though Evolution asked me for the username and password.

Then I gave up. I've deleted my Zimbra calendar and gone back to plain old Evolution.

Your thoughts, gentle reader? Am I expecting too much? Is this such an edge case that no Zimbra developer could possibly have foreseen it? I think not.

March 27, 2009 :: Australia  

Steven Oliver

Uninstall MySQL on Mac OS X


This was a life saver!

  • sudo rm /usr/local/mysql
  • sudo rm -rf /usr/local/mysql*
  • sudo rm -rf /Library/StartupItems/MySQLCOM
  • sudo rm -rf /Library/PreferencePanes/My*
  • edit /etc/hostconfig and remove the line MYSQLCOM=-YES- (a one-liner for this step follows the list)
  • sudo rm -rf /Library/Receipts/mysql*
  • sudo rm -rf /Library/Receipts/MySQL*
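
If you prefer, the hostconfig edit can be done as a one-liner; a hedged sketch (it keeps a .bak backup, double-check the result before rebooting):

sudo sed -i.bak '/^MYSQLCOM=-YES-$/d' /etc/hostconfig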

Credit: Link

March 27, 2009 :: West Virginia, USA  

March 26, 2009

George Kargiotakis

commandlinefu.com random entry parser

I’ve written a small perl script to parse random entries from the extremely useful commandlinefu.com website. Quoting from their site:

Command-Line-Fu is the place to record those command-line gems that you return to again and again.

The script code is very “clean”. I can almost say that it’s written in a very python-ish way.
Sample output:
%./cfu.pl
CMD: for (( i = 0; i < 100; i++ )); do echo "$i"; done
URL=http://www.commandlinefu.com/commands/view/735/perform-a-c-style-loop-in-bash. Title=Perform a C-style loop in Bash.
Description: Print 0 through 99, each on a separate line.
%./cfu.pl
CMD: rsync -av -e ssh user@host:/path/to/file.txt .
URL=http://www.commandlinefu.com/commands/view/20/synchronise-a-file-from-a-remote-server Title=Synchronise a file from a remote server
Description: You will be prompted for a password unless you have your public keys set-up.

You can get it from here: commandlinefu.com random entry parser perl script

As far as I’ve tested, it works out of the box on default perl installations of Debian, Gentoo and Mac OS X.
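
If you just want a quick taste without the script, the site also serves plain text directly; a minimal sketch (the exact endpoint is my assumption from the site’s API, verify before relying on it):

curl -s 'http://www.commandlinefu.com/commands/random/plaintext'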

March 26, 2009 :: Greece  

Steven Oliver

Who really keeps open source out of business?


I work for a fairly large company, more than 10,000 employees. We use a lot of closed source software, and I always ask why we don’t use open source tools. They have every benefit and no downside that I can see. And, being in IT, I get a lot of chances to ask people who have clout in what we actually use and buy. The amazing part, though, is that everyone I ask always says, “I don’t know, I’d like to use [insert open source project here] as well.”

Well, I figured it out. It’s the users. People who don’t actually work with computers for a living are scared to death of open source. For the same stupid reasons Linux is not a popular desktop, open source tools have a tough time in business.

Let me clarify something real fast though. I’m not talking about servers. It’s pretty hard these days to have UNIX servers without some open source code thrown in, and it’s hard to find a major company without at least one UNIX server. We are moving to Linux for a lot of boxes though, which will be nice (I don’t know what distro, don’t ask). The kind of tools I am talking about are things like Tora, an open source replacement for Toad. Or even MySQL. Have you ever looked at how much Oracle costs? If you haven’t, you don’t want to know. And then there are always the end-user programs as well. For example, I’m forced to use a horrible proprietary tool to move code changes into testing (I’m not allowed to put changes into production). I can’t tell you how many times I’ve asked, “Why not just use SVN or Git?” The only decent answer I’ve ever got to that question was, “Because of Sarbanes-Oxley.” If you’re not familiar or not from America, that was the act put in place after Enron collapsed. Well, that excuse is swell until you realize it’s not a valid reason at all. What they’re looking for is a paper trail. They want to be able to see who did what. What do they think logs are for? Seriously.
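
For what it’s worth, the paper trail they want is exactly what a VCS log gives you; a quick sketch of pulling “who did what, when” out of Git:

# one line per commit: abbreviated hash, author, date, subject
git log --pretty=format:'%h %an %ad %s'
# add --stat to also see which files each commit touched
git log --stat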

So, what about non-IT users? Well, they have no idea what they’re talking about 99% of the time as far as these things go. But they all still have an “If I don’t buy it from a massive company it must be virus-infected and horrible” attitude. Which I guess I can understand. After all, nothing in life is free. And I can’t tell you how many times I’ve heard, “if it really is free, odds are you don’t want it anyway.” What they don’t know is that when it comes to software, all of that is completely wrong!

So yeah. We use crappy programs at work because money = good software. If you’re an “end user” and you’re reading this, I’m here to tell you: you are wrong. The closed source proprietary programs we use at work are some of the worst designed pieces of software I’ve ever seen.

Enjoy the Penguins!

March 26, 2009 :: West Virginia, USA  

Roeland Douma

Thoughts about packages

While cleaning up my package.keywords and filing stable requests I got a good (or so I like to think) idea about “improving” Gentoo. Well, maybe improving is too big a word for it, but these ideas could help improve Gentoo.

As many of you probably know, Gentoo is often not, when not running ~ARCH, the most up-to-date distro. I am not blaming anyone, since the devs, arch testers etc. are doing a great job, but in some areas we are just lagging behind.

Now of course this rss feed (sorry, I forgot who created it…) keeps a nice list of packages that have been in the tree for more than 30 days. Browsing through this list from time to time has allowed me to file a bunch of stable requests for packages I use on a regular basis.

Now what other kinds of useful things could we extract from the portage tree that would help improve Gentoo? I thought of two things:

  • Finding “important” packages: With some smart Python programming, packages which have a lot of other packages depending on them could be located (see the rough sketch after this list). These packages are often important to keep up to date. Also, keeping those packages up to date often allows for more packages to be stabilized.
  • Finding packages without a stable version: A lot of new packages hit the tree on a regular basis (this is of course a good thing). However, this also leads to packages in the tree without a stable version. Of course there is a period in which stabilization is not possible. But after the 30 days (or make it 45 for the initial version) it would be good to stabilize, since then people who want to run a stable system can also use the package!
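
A rough sketch of the first idea; note this is a crude text match over the tree rather than real dependency resolution, and the package atom is just an example:

# count how many ebuilds in the tree mention a given package
grep -rl --include='*.ebuild' 'dev-libs/openssl' /usr/portage | wc -l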

Unfortunately I do not have the time to write these tools. But they would be really cool and useful to see! If anyone has some spare time… you know what to do! And of course tell me if you know any other things we could extract from the portage tree in order to help improve Gentoo.

March 26, 2009 :: The Netherlands  

March 25, 2009

Daniel Robbins

Vserver, Opteron Funtoo Stages Now Available

We now have Funtoo (unstable) stages available for the AMD Athlon 64 and Opteron processor. These stages differ from the generic “amd64” stages in that they have been optimized with -march=opteron, and are thus AMD-specific. They should run on AMD-based Opteron and Athlon64 systems. To download them, head over to http://www.funtoo.org and click on the “Opteron” link.

Also, I wanted to thank Benedikt Böhm for submitting patches to add Linux vserver support to Metro. I am now building Linux vserver templates for all Funtoo builds. You can find the vserver templates in the “vserver” subdirectory inside each stage directory. Likewise, you can find my OpenVZ templates in an “openvz” subdirectory inside the stage directory too. Thanks again, Benedikt, for the submission :)

March 25, 2009

Bryan Østergaard

Repository naming

To avoid name clashes and silly names we're adding a new set of rules for naming repositories. The rules affect profiles/repo_name and not the actual sync url, which can differ if necessary. I'd recommend using the same name, however.

The new rules for repo_name are as follows:
- Official topic repositories use the topic as their name.
- Personal repositories use the owner's (nick)name.
- Personal topic repositories use the owner's (nick)name-topic.

So the official KDE repository is named 'kde' and you can find all its packages using, for example, 'paludis --list-packages --repository kde'. Ingmar's personal repository is named 'ingmar' and if he had a personal topic repository for office type packages it would be named 'ingmar-office'.

I hope the new rules will make the status of repositories easier to understand.

March 25, 2009

March 24, 2009

Jürgen Geuter

Versioning PDF files with git

I version pretty much everything, but some things just work better than others: text files are perfectly handled by version control systems, but binary files usually aren't. Which sucks. But git has a few tricks up its sleeve to mend the situation.

Versioning of documents and files is based on the concept of "difference". Something is a new version of a document if there are differences between the "now"-state and the "then"-state. If those differences are only textual (as in sourcecode) we have the brilliant diff application that shows us the difference between two versions (for those who have never used diff, just look at how Wikipedia displays differences between versions). The idea is to not just say "yeah there are differences" but also to say exactly what changed, which lines are new, which have been deleted and so on:


PDF files (and other rather binary formats [I know that strictly speaking PDF is somewhat text-y]) are harder to version properly. You see that things differed, but not exactly what. We'll fix that.

First install the pdftotext tool (in Gentoo it's part of app-text/poppler). This allows you to transform a given PDF to a plaintext representation (this loses some information, but for diffing it's good enough most of the time). But we don't want to call the conversion by hand and maybe even add the txt file to the repository; that would suck. We just want to see the differences in commands like git diff.

If you have a recent git version (>1.6) you just have to do three things:

  1. Since git needs the contents to diff on stdout you have to write a short wrapper for pdftotext. Call it "pdf2txt" and put it in /usr/local/bin or, if you have one, in ~/bin. It contains:
    #!/bin/bash
    pdftotext "$1" -
  2. to your ~/.gitconfig file add the paragraph
    [diff "pdf"]
        textconv = pdf2txt
    This tells git that when the diff mechanism "pdf" is selected, it should use the command "pdf2txt" (the wrapper we created) to convert the given data object to text.
  3. In your repository edit the file .gitattributes and add the line
    *.pdf diff=pdf
    It tells git to use the "pdf" diff mechanism (the one we set up in step 2) for any file that matches the pattern "*.pdf" (as in "any .pdf file"; you could set up different textual representation tools for differently named pdfs if you wanted to).


Now git will show you proper text-diffs for your pdf files when using git diff or when looking at the different commits.
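
For example (the file name is hypothetical), after the setup above an ordinary diff just works:

# textual diff of the last change to a PDF in the repository
git diff HEAD~1 -- thesis.pdf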

You can set up a similar mechanism for other binary formats; you just need a program that creates a plaintext representation of your binary data. (I have played around with using aalib for images, but its output is too fragile and you end up with useless diffs. I'm still working on it.)

March 24, 2009 :: Germany  

Brian Carper

Anti-spam field still holding

So far my silly anti-spam measures are working. Since last week I've had 1861 spam comment attempts, of which 0 were successful. 1857 of them didn't even alter the text in the captcha text field at all. Four of them inexplicably HTML-escaped the < into a &lt;.

One feature I didn't implement from Wordpress is subscribing to comments via email. Sending an email from Java is possible but a little bit painful to implement. The Javamail API is a monster.

I do think it's useful to be able to know when someone responds to a comment you left, but is spamming your inbox really the best way? I have to think there's a better way.

I did implement an RSS feed for each individual post's comments. And separate RSS feeds for all the tags on my blog, and all the categories. When RSS feeds are generated dynamically, why not? This is all of the code for the tag feeds:

(defn tag-rss [tagname]
  (if-let [tag (get-tag tagname)]
    (rss
        (str "briancarper.net Tag: " (:name tag))
        (str "http://briancarper.net/" (:url tag))
        "briancarper.net"
        (map rss-item (take 25 (all-posts-with-tag tag))))
    (error-404)))

Plus the routing code:

(GET "/feed/tag/:name" (tag-rss (route :name)))

But I haven't uploaded the comment-feed feature because I don't know if it's overkill. Personally I am liberal with my RSS feeds: I just pop them into my Akregator and off I go. But I don't know if other people take their feeds more seriously, or what. RSS feeds can be a bit heavyweight. Maybe I should make a feed for all of my comments across all posts.

March 24, 2009 :: Pennsylvania, USA  

March 23, 2009

Martin Matusiak

Windows Forms: because you’re worth it

Ah, Windows Forms. You’ve changed my life. I used to think Java was the lowest of the low in gui programming, but I’ve had to reset my lowest common denominator in several areas already. Layout is horrendous, and the WinForms threading model is unfathomable. At least it’s nothing important then.

But it also comes wrapped with easter eggs for your enjoyment. Try this on for size:

  1. Create a new thread, run splash screen.
  2. Initialize main gui in default thread.
  3. Signal splash thread to end.
  4. Splash screen calls Close().
  5. Call Join() on splash thread.
  6. Run main gui.
  7. Main gui appears behind all other open windows.

What do you mean that’s not what you wanted? So you think to yourself “Aha! I’m cleverer than you are, stupid dot net. I’ll make you wish you never gave me a BringToFront() method!” But that, despite the fantastically promising name, doesn’t do anything. Neither does Activate().

No big deal. I’m sure no one is planning to use a splash screen in .NET anyway. So after aimlessly looking out the window for an hour and downing another three cups of awful coffee, you snap out of your prolonged daydream and start scrolling through the member list for the seventh time. Hm, a TopMost property. I wonder… Ah yes, that’s the Stalin button. Makes your window always on top of all others. But what if…

// <stalin>
this.TopMost = true;
// </stalin>
this.Load += delegate (object o, EventArgs a) { this.TopMost = false; };

Download this code: winforms_fix_behind_windows.cs

Yup, that actually works. When you’re initializing the gui you fool the stack that hates you into thinking that you’re going to be a dictator, but just when the window loads you turn off god mode. And that’s enough to bring it to the front.

Windows Forms is truly fantastic for the kind of people who enjoy trivia: remembering hundreds of facts that have no real use, just for the sake of knowing what few other people know. Take threading. Now, in most runtimes you would imagine that creating a new thread in your program has semantics such that whatever you’ve been doing until now in your single threaded application continues to work. The new bit you have to tackle is the new threads you create in addition to the main one. Not so in .NET (I bet you saw that coming, you rascal!) For inexplicable reasons, your process has a way of hanging at the end just for the heck of it. I think I figured it out though. Are you ready for it?

splashthread = new Thread(RunSplash);
splashthread.IsBackground = true;
Thread.CurrentThread.IsBackground = true; // cure for cancer?

Download this code: winforms_fix_hanging_process.cs

That’s right, suddenly your main thread doesn’t work the same anymore. You have to make it a background thread (which it clearly isn’t), otherwise it just hangs there after Close().

EDIT: No, that didn’t fix it either.

What a shame I didn’t discover Windows Forms 7 years ago; I could have spent all that time learning all these exciting hacks instead of wasting my time on useless and unproductive things like Python.

March 23, 2009 :: Utrecht, Netherlands  

Dirk R. Gently

Less Colors For Man Pages


Man pages by default use less for display. I’ve used vim before for colored text in man pages, but something got borked in an update. You can color man pages with less too, with the use of termcap. Thanks to nico for the tip.

All that needs to be done is to export bold and underline values of termcap. You can add the values to your ~/.bashrc so that they are always used:

# Less Colors for Man Pages
export LESS_TERMCAP_mb=$'\E[01;31m'       # begin blinking
export LESS_TERMCAP_md=$'\E[01;38;5;74m'  # begin bold
export LESS_TERMCAP_me=$'\E[0m'           # end mode
export LESS_TERMCAP_se=$'\E[0m'           # end standout-mode
export LESS_TERMCAP_so=$'\E[38;5;246m'    # begin standout-mode - info box
export LESS_TERMCAP_ue=$'\E[0m'           # end underline
export LESS_TERMCAP_us=$'\E[04;38;5;146m' # begin underline

And re-source the ~/.bashrc to have it work now:

source ~/.bashrc

Notice I used Arch and Gentoo colors, my two favorite distros :)

This also made me realize that my Output Color on Bash Scripts - Advanced page needed updating. Grok?! :)

March 23, 2009 :: WI, USA  

Dan Fego

Follow-up on mplayer’s tab-completion

So after a bit of Googling and finding this bug (after several others), I was made aware of three things (in response to my previous post):

  1. mplayer’s tab-completion support does in fact come from bashcomp
  2. this support is covered in the “base” module
  3. the appropriate file uses strange regular expressions

At the time this bug was filed, the appropriate file to edit was (or was in) /etc/bash_completion. As of bash-completion-20081218, the bug was fixed, but the package also underwent some changes that seemingly caused the locations of config files to change. (Apologies if this is incorrect, but I never went diving into the configs of bashcomp before now!)

In any case, the bashcomp configuration files are now in /usr/share/bash-completion. Since mplayer’s support is in base, the file that handles mplayer is “base” in that directory.

Now as for the “strange” regular expressions, that deserves some qualification. I’ve already seen lots of regular expressions in my (albeit rather short) day, but the reason I consider these ones strange is that they seem both unnecessary and redundant. The line in question is currently 5983 in my version 20081219-r1:

_filedir '@(mp?(e)g|MP?(E)G|wm[av]|WM[AV]|avi|AVI|asf|ASF|…|fl[iv]|FL[IV]…'

The two ellipses are my own addition, since the actual expression is one humongous line that ends up looking rather horrible here. My problems with this are both the explicit writing out of upper and lower case alternatives and the bothering to do things like fl[iv]. Actually, I don’t have a problem with the latter except in the presence of the former. And to be fair, I probably wouldn’t have ever cared or noticed if I had been able to find “flv” when grepping numerous files. Not that “greppability” is necessarily a goal for configuration files, but it’s certainly annoying when it’s specifically hindered by regular expressions that save negligible space like the ones in this file.
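
To illustrate the complaint, here's a toy bash snippet (not bash-completion's actual code) showing that a single lower-case extglob pattern plus nocasematch could stand in for the doubled-case alternatives:

shopt -s extglob nocasematch
for f in movie.FLV clip.mpeg song.WMA notes.txt; do
    # one lower-case pattern matches both cases thanks to nocasematch
    [[ $f == *.@(mp?(e)g|wm[av]|avi|asf|fl[iv]) ]] && echo "$f matches"
done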

As a final note, I’m going to disclaim that I’m no expert on bash scripting and the various intricacies of handling regular expressions therein, so I’d be happy to hear from anyone who knows better about why I should lay off the poor bash-completion folks. :-P


March 23, 2009 :: USA  

Jürgen Geuter

SkoleLinux

Linuxoutlaws' own Fab wrote about Skolelinux being deployed in German schools today. This reminded me that I've been meaning to write a few notes on SkoleLinux for a while now.

SkoleLinux (also called debianEDU) is a Debian-based distribution focused on being deployed in school contexts. It comes with a pre-configured server part and also different workstation/thin client profiles. In the following I'll list some of the strengths and weaknesses of the distribution that I have come across in my work (I work as a system administrator for a school).

Strengths

  • Simple installation: SkoleLinux is really simple to install (servers as well as clients). You click about 3 checkboxes to select keyboard layout, language of the installer and partition layout and you're good to go.
  • Preconfigured LDAP+Samba: The server (called "tjener") is preconfigured to offer centralized authentication via LDAP and exporting of HOME dirs via NFS. Samba is configured to allow Windows clients to easily authenticate against the LDAP server, too.
  • Access to default Debian packages: Since SkoleLinux is just a layer on top of Debian you have all the Debian packages just one apt-get away.

Weaknesses

  • SkoleLinux is pretty strongly focussed on KDE. This goes so far that not even a decent GTK theme is packaged and installed by default, which makes GTK apps look really crappy (even though many GTK apps are installed, because a lot of educational apps are GTK based). This makes the whole desktop look very unpolished and unprofessional.
  • Building custom install CDs is a huge pain in the ass. When I tried it, you had to check out some custom patches to the Debian CD builder, and they wouldn't run. Really bad if you want to build a custom CD to install client systems.
  • In connection with the previous aspect, it's a pain in the butt to get all clients to do certain things. You end up hacking around a lot to get certain things to work properly (such as having all client systems run automatic updates or install a certain new package or meta-package).
  • Konqueror is not a browser you want to give people. It might be fine for the nerd, but people who just want to use the system are irritated by the browser not working properly. Most people don't use Linux at home, and if they do they probably run Firefox. The Debian version of Firefox is called Iceweasel, which alone irritates people enough, but giving them a completely different default browser is nonsense.
  • SkoleLinux sets up weird configuration directives for its services. The DHCP server is configured in /etc/dhcp3/dhcpd-debian-edu.conf, not in /etc/dhcp3/dhcpd.conf, but /etc/dhcp3/dhcpd.conf still exists. This leads to errors and is a bad idea.
  • Setting KDE to activate icons with one click is a bad idea. Some geeks might like it, but people are used to double-clicking; single-click makes migration harder and lessens acceptance.

Summary

SkoleLinux gets you started really quickly but as soon as you want to leave the path that the system is hardwired to you often have to fight hard.

SkoleLinux is not a bad idea and I fully support deploying it to schools, but I think in order for it to be a real alternative a few things need to change, because right now you have to know too much about Linux and the system to get even trivial tasks to work.

Outlook (what has to change)


I'll just add a few generic thoughts here; I'll probably write about educational distros in detail later (I have a post half-written):
  • Centralized documentation set up and added to each user's bookmarks. In our school we have a Dokuwiki instance installed to gather documentation that is user-editable. This should be automatically set up because you'll end up needing it anyways.
  • A way to push certain bookmarks to each user. As Admin you might want to have a set of bookmarks that every user automatically gets. This should be easy to do (preferably in a web-based GUI). A workaround is to put a write-protected folder on each person's Desktop and drop .url files in there which is not pretty.
  • A shared folder should be set up for every user. The user profiles should be prepared with a "shared" folder that automatically is public to everybody. Samba or something similar should be set up so that the folders are easy to find. This would allow people to really collaborate. You can build that yourself by hand nowadays, but you need some knowledge to do it. It should be default.
  • Automatic meta-package setup. At our school we want the same software on all boxes. Therefore we set up a pretty much empty meta-package that pulls in all the software we want as dependencies (a sketch follows this list). If you know what you're doing that's not hard to do, but that setup should really be automatic and controllable via a web-interface (just allowing selection of apps you want installed on every box, not advanced packaging via web-gui, obviously).
  • Automatic command to each box. There needs to be a system for sending commands to "all clients" or "all clients in group G" that allows you to run a certain command globally (even if the respective client isn't on right now). Right now you can grep your way through the DHCP leases file, try nmap and try SSH, but that ain't fun. It's really not. Yes, I've done that.
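
On the Debian side, the meta-package trick is what equivs is for; a hedged sketch (package name and dependency list invented for illustration):

apt-get install equivs
equivs-control school-desktop.ctl
# edit school-desktop.ctl: set "Package: school-desktop" and, for example,
# "Depends: iceweasel, openoffice.org, gcompris"
equivs-build school-desktop.ctl
dpkg -i school-desktop_1.0_all.deb   # exact file name depends on the version you set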

Final words


Don't get me wrong, SkoleLinux is a great start. But it's not more than that; it's pretty much a 0.2 version. Many things are very rough and need a lot of tweaking. But it's not so much a technical problem. Right now it lacks everything that would convince people to run it (apart from the money savings): it needs to support cooperation and collaboration out of the box, it has to show the user that he or she can work with others and share a lot more.

SkoleLinux right now is just a "free Windows replacement" which reduces all it has to offer to lower costs. This is below what we can do.

March 23, 2009 :: Germany  

Testing OpenSolaris

Recently I wanted to test something in OpenSolaris, so I downloaded the ISO for the most recent version and tried running it.

In spite of Innotek belonging to Sun, VirtualBox refused to run it (installing worked after I had given it more than a GB of RAM, but the installed version wouldn't boot).

Well, I had not used my Desktop in a while so I thought I could just slap it on there. I burned the CD and was able to install it quite easily (luckily my Desktop has 1 GB of RAM) though the installation was a lot slower than I am used to from various Linux distributions. The system booted and I thought I could play around with it a little, but alas, I didn't have any sound or network.

My computer has an Nforce2-based motherboard as well as a Soundblaster Live! soundcard, not exactly uncommon hardware, so I thought I would just have to load a driver or something. What kinda irritated me was the fact that I had an Nvidia binary driver loaded automatically. Well, I started the hardware diagnostic tool thingy to see what drivers I'd have to load.

Guess what? In order to get network I would have to download the source from some third-party site and try to get it to compile by hand. The hardware diagnostic tool even gave me the URL (which was not really all that useful without an internet connection) and I could look up the instructions to build the module. But it would have been outside the package management, without any support. Not even "unofficial" packages or anything are offered, just "yeah see, there's this site somewhere that has some code which might work". This was the point where I just couldn't be assed to continue the experiment.

OpenSolaris needs 1 GB of RAM to install in text-mode. And the installed system seems to be as picky about hardware as OS X. The whole idea of giving people a hardware diagnostics tool to determine which driver to use is nice, but what's the point if you just get a random URL and have to build your own stuff?

I'm not scared of building code on my own, but I just don't want to enter the maintenance hell of building half my system without any package manager integration (and considering that even very mainstream hardware isn't supported, I would end up pretty much building my own distro).

OpenSolaris has a bunch of nifty technologies I was planning to dive into but in its current state I don't think it's worth spending a lot of my time with it. Too much hassle for simple things. Maybe in 2 years.

Cya OpenSolaris.

March 23, 2009 :: Germany  

March 22, 2009

George Kargiotakis

Severely degraded harddisk performance on sata_sil by athcool

I am writing this post to provide some statistics on athcool + sata_sil usage. The results are horrible.

Athcool is a small utility for enabling/disabling power-saving mode on AMD Athlon/Duron processors.
The homepage of the utility has a big fat warning as well:

WARNING: Depending on your motherboard and/or hardware components, enabling powersaving mode may cause:

* noisy or distorted sound playback
* a slowdown in harddisk performance
* system locks or instability

The Gentoo ebuild also has these warnings:

ewarn "WARNING: Depending on your motherboard and/or hardware components,"
ewarn "enabling powersaving mode may cause:"
ewarn " * noisy or distorted sound playback"
ewarn " * a slowdown in harddisk performance"
ewarn " * system locks or unpredictable behavior"
ewarn " * file system corruption"
ewarn "If you met those problems, you should not use athcool. Please use"
ewarn "athcool AT YOUR OWN RISK!"

Ignoring all these warnings, I was using athcool for years on my old desktop box filled with 2 IDE disks. Never had any real problem at all, except for some performance loss. The problem appeared when I first used a SATA disk on the motherboard’s (Gigabyte GA-7VAXP-A Ultra) SATA controller, which uses the sata_sil module.

Here are some tests using dd and vmstat. The two commands were run on different terminals at the same time:
1) athcool off
a) dd to IDE
(TERMINAL 1) user@box:~% vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
0 0 0 392296 19272 663892 0 0 0 56 260 332 0 0 100 0
1 0 0 813648 19344 245960 0 0 4 64604 486 849 4 58 14 24
0 1 0 769380 19388 289680 0 0 0 44032 399 890 3 30 0 67
0 2 0 769876 19388 290704 0 0 0 4 368 594 1 3 0 96
0 3 0 708372 19460 351120 0 0 0 40632 347 589 2 46 0 52
0 1 0 674272 19492 383928 0 0 0 53936 471 821 1 27 0 72
0 1 0 655796 19508 401744 0 0 0 17640 376 651 1 13 0 86
0 1 0 588712 19572 466880 0 0 0 65544 469 1024 9 40 0 51
1 1 0 579412 19584 478072 0 0 0 6148 368 852 5 8 0 87
0 2 0 504516 19664 550856 0 0 0 79104 414 817 3 52 0 45
1 0 0 453800 19712 600040 0 0 0 45420 350 616 4 30 0 66
0 1 0 414740 19748 637952 0 0 0 40872 401 700 3 26 0 71
0 1 0 367248 19792 684064 0 0 0 46112 357 619 5 32 0 63
1 1 0 360552 19804 693280 0 0 0 7240 421 795 2 8 0 90
2 1 0 268668 19896 782368 0 0 0 84768 385 722 5 65 0 30
2 1 0 230724 19932 819248 0 0 0 43672 378 598 2 27 0 71
1 1 0 184224 19976 864328 0 0 0 45080 349 587 5 27 0 68
0 1 0 142932 20016 904288 0 0 0 39960 369 653 1 33 0 66
1 1 0 136608 20032 913928 0 0 0 6400 337 523 2 11 0 87
0 1 0 46708 20120 1000568 0 0 0 90360 387 626 4 61 0 35
0 1 0 26508 14348 1025136 0 0 0 44048 346 633 1 29 0 70
0 1 0 23404 13368 1028324 0 0 0 42016 363 638 2 26 0 72
0 1 4 23776 11816 1028108 0 0 0 43048 336 631 0 30 0 70
1 1 4 24024 11000 1031428 0 0 0 26796 391 663 4 25 0 71
0 0 4 23652 11028 1032324 0 0 0 16836 336 648 2 14 18 66
0 0 4 23776 11028 1032324 0 0 0 0 278 397 1 0 99 0
(TERMINAL 2) user@box:~%dd if=/dev/zero of=/path/to/partition/on/IDE/disk/file bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 22.8884 s, 44.7 MB/s

Pretty decent performance.

b) dd to SATA
(TERMINAL 1) user@box:~% vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
0 0 4 738008 21872 317948 0 0 0 0 308 427 2 0 98 0
0 0 4 738008 21880 317948 0 0 0 48 268 342 0 1 99 0
3 0 4 722508 21916 334312 0 0 4 8456 320 491 3 11 84 2
1 0 4 647612 22008 407372 0 0 0 80844 374 762 9 52 0 39
0 1 4 624796 22032 429996 0 0 0 22528 389 465 0 17 0 83
1 0 4 557836 22096 495032 0 0 0 64520 361 395 3 50 0 47
1 0 4 512576 22136 539012 0 0 0 44032 386 548 5 33 0 62
0 2 4 473640 22176 579080 0 0 0 38916 353 488 2 31 0 67
2 0 4 448220 22220 605548 0 0 0 23156 399 434 3 20 0 77
0 1 4 381880 22280 669192 0 0 0 60492 353 374 2 48 0 50
1 0 4 331784 22328 717848 0 0 0 49272 388 431 3 36 0 61
1 2 4 286648 22372 761716 0 0 0 43120 345 366 4 33 0 63
0 3 4 249696 22408 799240 0 0 0 46832 391 437 2 30 0 68
1 3 4 212124 22448 837352 0 0 0 28460 351 446 3 29 0 68
0 2 4 161160 22496 886304 0 0 0 58856 439 479 3 39 0 58
0 2 4 115652 22540 930336 0 0 0 44032 355 523 2 32 0 66
0 2 4 68036 22584 976416 0 0 0 46080 426 699 6 32 0 62
1 1 4 22548 22628 1020476 0 0 0 44028 366 590 15 29 0 56
2 2 4 23032 22672 1019420 0 0 0 42012 386 1082 40 36 0 24
0 2 4 24272 20772 1019388 0 0 0 45116 360 779 16 36 0 48
1 1 184 23652 16948 1022416 0 0 0 33968 365 446 20 30 0 50
0 2 184 23280 16668 1022620 0 0 0 50188 369 505 3 35 0 62
0 2 184 24520 16688 1020540 0 4 0 44036 376 458 9 36 0 55
0 2 184 27868 16632 1022072 0 0 0 25908 459 441 5 19 0 76
1 1 184 23652 6796 1036996 0 0 0 37820 386 472 3 30 0 67
0 0 184 24520 6084 1037356 0 0 0 16580 342 424 4 15 1 80
0 0 184 24644 6084 1037356 0 0 0 0 294 374 1 0 99 0
(TERMINAL 2) user@box:~%dd if=/dev/zero of=/path/to/partition/on/SATA/disk/file bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 22.2838 s, 46.0 MB/s

SATA was a bit faster, as expected.

2) athcool on
a) dd to IDE
(TERMINAL 1) user@box:~% vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 184 119536 7040 942204 0 0 119 2047 300 740 8 3 82 7
0 0 184 119536 7040 942204 0 0 0 0 290 711 4 0 96 0
1 2 184 103292 7092 958828 0 0 16 4496 413 685 2 13 82 3
0 1 184 67828 7124 993432 0 0 0 46884 612 1149 2 30 0 68
0 1 184 54188 7136 1006756 0 0 0 13312 552 878 2 12 0 86
0 1 184 22908 7196 1036576 0 0 0 62472 447 1007 7 50 0 43
0 1 184 24644 7224 1034272 0 0 0 24428 459 739 1 27 0 72
0 3 184 25760 7236 1034740 0 0 0 4392 499 732 2 10 0 88
1 3 184 23652 7284 1036484 0 0 0 42900 536 977 3 39 0 58
1 4 184 25140 7308 1034448 0 0 0 41144 470 861 3 19 0 78
0 2 184 24636 7340 1034084 0 0 12 33816 466 797 4 26 0 70
0 2 184 25256 7340 1034084 0 0 0 0 447 605 2 3 0 95
0 2 184 22904 7424 1035360 0 0 4 73032 466 848 6 58 0 36
0 2 184 23272 7452 1036132 0 0 0 23308 458 786 3 27 0 70
0 2 184 24144 7500 1035644 0 0 0 46364 478 800 7 35 0 58
0 2 184 25132 7560 1034492 0 0 0 23356 441 673 2 16 0 82
0 2 184 25752 7560 1034492 0 0 0 0 502 728 1 0 0 99
0 1 184 23768 7632 1034748 0 0 0 74820 407 778 6 57 0 37
0 2 184 23644 7672 1035388 0 0 0 41016 465 829 5 30 0 65
0 2 184 23272 7712 1036284 0 0 0 39216 448 744 3 31 0 66
0 1 184 23148 7752 1035900 0 0 0 37084 535 1120 3 29 0 68
0 1 184 23024 7800 1035420 0 0 0 46112 435 724 3 34 0 63
0 1 184 23024 6900 1035472 0 0 0 34832 480 737 5 24 0 71
1 1 184 23024 6660 1036344 0 0 0 28956 450 759 2 30 0 68
1 1 184 24016 6592 1035540 0 0 0 37108 483 873 2 27 0 71
0 1 184 23396 6636 1036104 0 180 0 46644 420 874 6 34 0 60
1 0 196 23148 6668 1036156 0 0 0 37896 473 812 5 29 0 66
0 1 196 23272 6712 1035412 0 0 0 45064 409 709 2 34 0 64
1 1 196 23148 6720 1037232 0 0 0 356 471 637 2 7 0 91
0 2 196 23648 6744 1036236 0 0 0 48400 433 893 3 34 0 63
0 1 196 23396 6276 1036672 0 0 0 46100 462 1001 4 33 0 63
0 0 196 24016 6280 1036672 0 0 0 4 445 586 1 2 75 22
0 0 196 24388 6280 1036672 0 0 0 0 358 475 1 3 96 0
0 0 196 24388 6280 1036672 0 0 0 0 272 362 1 0 99 0
(TERMINAL 2) user@box:~% dd if=/dev/zero of=/path/to/partition/on/IDE/disk/file bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 28.1283 s, 36.4 MB/s

Hello degraded performance! 44.7MB/s -> 36.4MB/s. This is an 18.57% drop. Still, I consider it quite acceptable for a desktop pc.

b) dd to SATA
(TERMINAL 1) user@box:~% vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 196 1031508 5284 43612 0 0 94 2389 310 715 8 3 77 12
0 0 196 1031508 5284 43612 0 0 0 0 312 662 5 0 95 0
1 0 196 966284 5284 107512 0 0 0 63484 343 457 6 48 15 31
0 1 196 959588 5284 114240 0 0 0 7168 403 694 4 6 0 90
0 1 196 952272 5284 121408 0 0 0 7168 371 610 4 8 0 88
0 1 196 944956 5284 128572 0 0 0 7164 420 478 2 6 0 92
1 1 196 918544 5324 151464 0 0 376 22652 429 1713 18 27 0 55
0 2 196 917924 5324 152488 0 0 0 1048 416 649 2 3 0 95
0 2 196 918048 5324 152488 0 0 0 16 392 540 0 0 0 100
0 2 196 917552 5336 152648 0 0 172 20 435 780 2 0 0 98
0 2 196 917676 5336 152648 0 0 0 0 372 491 0 0 0 100
0 2 196 917800 5344 152648 0 0 0 60 404 559 3 0 0 97
0 2 196 917924 5344 152648 0 0 0 20 380 620 2 1 0 97
0 2 196 918172 5344 152648 0 0 0 28 416 966 6 1 0 93
0 2 196 918296 5344 152648 0 0 0 16 369 356 2 0 0 98
0 2 196 918420 5344 152648 0 0 0 16 401 384 0 1 0 99
0 1 196 850840 5360 219204 0 0 0 66884 467 696 3 51 0 46
0 1 196 845384 5368 224324 0 0 0 5136 419 520 3 1 0 96
0 1 196 836952 5368 232512 0 0 0 8188 385 742 4 5 0 91
0 1 196 831744 5368 237632 0 0 0 5120 426 489 0 1 0 99
0 1 196 825420 5368 243776 0 0 0 6144 422 600 0 1 0 99
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 2 196 819064 5368 245824 0 0 0 2096 417 1427 29 6 0 65
0 2 196 823560 5368 245824 0 0 0 20 494 1913 16 4 0 80
0 2 196 823808 5376 245824 0 0 0 28 480 1719 8 2 0 90
0 1 196 758336 5384 310328 0 0 0 64800 423 1204 24 53 0 23
[snip]
0 2 196 104236 5628 954752 0 0 0 8 405 745 2 2 0 96
0 2 196 43600 5640 1013984 0 0 0 59740 453 815 4 44 0 52
0 2 196 29712 5640 1027300 0 0 0 13320 403 758 3 5 0 92
0 2 196 23892 5640 1032852 0 0 0 9220 430 693 4 4 0 92
0 1 196 22776 3892 1035820 0 0 0 7184 402 728 5 1 0 94
0 1 196 22776 3892 1035820 0 0 0 0 451 1172 5 3 0 92
0 1 196 22776 3892 1035820 0 0 0 0 386 813 2 1 0 97
0 1 196 22776 3896 1035820 0 0 0 4 412 385 0 0 0 100
0 1 196 22776 3896 1035820 0 0 0 4 358 320 0 0 0 100
0 1 196 25380 3896 1032748 0 0 0 0 416 722 7 1 0 92
0 1 196 25504 3904 1032748 0 0 0 16 365 404 2 2 0 96
0 1 196 25628 3904 1032748 0 0 0 0 396 471 1 1 0 98
0 1 196 25876 3904 1032748 0 0 0 0 364 526 6 0 0 94
0 0 196 26992 3912 1032748 0 0 0 968 395 1288 14 3 55 28
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 1 196 26992 3912 1032748 0 0 0 4 323 1412 13 1 86 0
0 0 196 26992 3920 1032748 0 0 0 12 379 1755 16 1 83 0
(TERMINAL 2) user@box:~%dd if=/dev/zero of=/path/to/partition/on/SATA/disk/file bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 114.792 s, 8.9 MB/s

What about that!!! 46MB/s -> 8.9MB/s. That’s an incredible 80% performance loss! I/O wait is always maxed at 90-100%, which makes the machine totally unresponsive. I had to snip the output of vmstat because it is huge. It’s more than 140 lines, while all the others were at max 36 (athcool on, IDE). If anyone wants it I can surely upload it here.

I don’t think that’s what the programmer of athcool had in mind when he was talking about degraded performance…I think this is a bug. I can accept a 20% loss but not an 80% loss of performance. By the way, these tests were performed on ext3, but the same results appear with reiserfs as well.

This “bug” took me a while to figure out. I had my SATA disk crawling for a long time before I thought of disabling athcool (yeah yeah, I know it should have been the first thing to do…). I initially thought of posting the “bug” to the Gentoo bugzilla, but while searching it I came up with this: “sys-power/athcool causes massive filesystem corruption; upstream was informed but did not respond”. As far as I can tell, Gentoo developers think that since there is a warning, there is no extra reason this package should be masked or whatever.

Dear Internets,
Is there anyone else who can confirm the same behavior of athcool with SATA controllers? I don’t have another SATA controller to test, but if someone uses a different SATA module, a test like the one I performed would show us whether it’s a sata_sil problem (so that I should report it to the kernel maintainers) or an athcool problem (so that we should at least file a new bug at the Gentoo bugzilla asking the developers to hardmask the package).

March 22, 2009 :: Greece  

Brian Carper

Conky Goodness

I uploaded a new screenshot:

/screenshots/2009/2009-03-21.png

The conky with weather pictures in it is stolen from RAMC's conkyrc which you can find on the Gentoo MB and also apparently here. There's a python script there to fetch and display weather info.

Whoever thought up the idea of making a font that consists of little weather pictures was pretty clever. Whoever thought up making a font that consists of Linux distro emblems has a bit too much time on his hands.

Oddly enough, Unicode itself includes glyphs for weather symbols. e.g. this is Unicode character 2603:

☃

If your font supports it, it should show up as a snowman. If your font doesn't support it, it may show up as an ice cube.

March 22, 2009 :: Pennsylvania, USA  

March 21, 2009

Michael Klier

Resurrecting My NSLU2 - Part 1

I finally got around to adding a serial port to my hopefully still working NSLU2. For a couple of weeks now, the box hasn't allowed me to install Debian on it. The installer boots but dies once I connect a USB drive (no matter whether I attach it before I power the box up or after). Someone else on the Debian boot mailinglist apparently had the same problem, but because nobody was able to reproduce it, the only way to find out what's going on is to add a serial adapter to the NSLU2.

Unfortunately I just realized that I gave away the last computer I had with a serial port just two weeks ago. Stupid me :-(. I'll borrow one of the older laptops from work next week, along with a Knoppix CD probably. Until then, I thought I could post some pictures of the progress and you could tell me if I did something fundamentally wrong or how much I suck at electronics ;-) (and because I promised to post a picture of my new hacker space at home, I'll add one too).


March 21, 2009 :: Germany  

Brian Carper

Internet Explorer 8 Review

I installed Internet Explorer 8 today. I need it to test the websites at work. I couldn't care less if my personal sites render properly in IE at this point, but I must accommodate people at work.

I should mention right off the bat that, given the way Microsoft takes a dump all over web standards and the hours and hours of grief I've endured as a web developer trying to get sites to look proper in IE6, unless IE8 crapped gold nuggets every time I clicked a link I don't think I'd like it.

Installing

I wasn't disappointed. IE8 is hate-worthy. A steaming pile of offal. First there was the joy of trying to install it.

Why does installing a web browser require checking my computer for "malicious software"? Why can't I opt out of this? In any case I didn't have to worry about it, because the first time I tried the install, it bombed before it got that far, and demanded that I go to the Windows Update site and install some patch for IE7 before I could continue. Note: I don't have IE7 on my computer. This is a work machine that I kept IE6 on for testing our company websites. This blew my mind.

So I tried to download this patch for IE7, but I couldn't, because I had to get Windows Genuine disAdvantage first. Rage filled me to the point of overflowing. If it was my home computer I'd have stopped right there. But I need this garbage for work, so I held my nose and did it.

Of course the patch required a reboot. Reboot #1.

Now I was able to continue with the install. A slow, plodding download; I think it took 5-10 minutes to do its thing, but it's hard to tell. There was no progress bar to show me how far along it was, nothing to tell me the elapsed time, no indication how large the files were that were being fetched. There is something resembling a progress bar, but it doesn't actually show you much in the way of "progress". Instead a little green thing bounces around like the car from Knight Rider. How much cocaine do you need to imbibe to invent a GUI like this?

Of course IE8 itself required a reboot. Reboot #2.

Why? Installing Firefox or Opera doesn't require a reboot. They download as self-contained .exe installer files. I run them and software appears. This is 2009, for the love of God. Maybe in 20 more years Microsoft will finally manage to re-invent emerge or apt.

The IE8 install, including patching and reboots, took me 45 minutes. If I had to do this on more than one machine, I'd probably jump out the window. How much time have you sucked out of my life, Microsoft? To compare, I decided to install Opera. Opera took less than one minute to download AND install and didn't require a reboot.

Features

When you first open it up, it sends you through a wizard and asks you if you want to enable a bunch of crap. I said no to everything. What the hell is an "Accelerator"? I assumed it was something that tried to make web pages load faster, like the download accelerator scams you used to get popups for all the time in 2001. So I said no.

Turns out "Accelerators" are plugins. Why didn't they call them Plugins? Did some marketroid decide "plugin" wasn't EXTREME enough, so decided to make up their own word? Why do I have to relearn the English language every time someone releases new software? Not Invented Here syndrome?

Windows tried to default me to Live Search, but I give it credit for being upfront in allowing me to turn that crap off and use Google. (No doubt thanks to US anti-trust court proceedings.) 473 wizard dialogs later I had a browser.

The next thing I noticed is more lame attempts to push more Microsoft services at me. In the URL bar every time you type anything, you see this:

Awesome. Is there any way to remove this spamvertisement other than installing Windows Search? If I planned to use IE8, which I don't, I imagine I'd inevitably click that by accident, which is probably the whole idea.

IE8 also added a bunch of useless garbage to my bookmarks toolbar which I insta-deleted. Or tried to. My favorite feature of IE8 by far is this one:

Apparently deleting things from the bookmarks toolbar is just too much for a modern 4-core CPU to handle. Congrats Microsoft. Hang, crash, boom.

There is no menu in IE8 by default. No wait, there is a menu. It's just in the wrong place (lower right side of the top browser area), and instead of readable text it's mostly unlabeled buttons with tiny arrows next to them.

It's like a traditional menu and a fun mystery novel combined! What is in the dropdown next to the house? I'm sure it's a fun surprise.

And actually you can get the old menu to appear too, if you press Alt. Insanity. But it doesn't appear at the top, it appears under the URL bar. One of the few arguably good things about Windows is that programs have consistent GUI parts and work the same way: they have a menu at the top, it's always in the same one place, there's a File and an Edit, and it's predictable. Thanks Microsoft for getting even that wrong.

When I highlight text on a web page, a little blue thing appears that I think I'm supposed to click on. The icon is a bunch of lines and squiggles and an arrow or something. There's no indication what that thing actually does. I clicked it out of curiosity and get a menu full of a bunch of random options like "Search for this". I think this is where ACCELERATORS are supposed to pop up, or something, who cares?

Fonts in IE8 look fuzzy. As a bonus, after installing IE8, fonts in a bunch of other programs (Outlook) are fuzzy now too. Hurrah! IE8, like its predecessors, apparently extends its tendrils into every nook and cranny of your system, corrupting and perverting as it goes. Maybe that's why it needed to reboot my computer twice to install it.

IE8 comes with a Firebug ripoff, which is better than View Source invoking Notepad, but took a full 2 minutes to load when I tried to open it the first time.

IE8 does render my blog properly, which is good. IE7 does too, I think; I only tested it once. I'm not losing sleep over it. Thank you Firefox and Opera: if you didn't exist and put the pressure on, we'd all still be using IE6 and I'd still be writing all my web pages twice to make sure they work in Internet Excrementplorer. As much as I detest IE, if people migrate to IE8 from the shard of utmost evil that is IE6, I'll be happy.

March 21, 2009 :: Pennsylvania, USA  

Steven Oliver

Do you ever make a blog post…


and then feel like a complete idiot after you post it? I hate it when that happens.

March 21, 2009 :: West Virginia, USA  

March 20, 2009

Steven Oliver

Here is a question for you…


You’re writing some software and you’re using an API so your new program can interact with the old one. You go to compile your new changes and you get a syntax error. But wait, the compiler is claiming the syntax error isn’t coming from your code. It’s coming from the header file in the old program!

Totally just happened to me. What on earth do you do about that? File a bug report and hope they fix it?
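
While waiting on a bug report, one common workaround is to shadow the broken header with a locally patched copy (all names here hypothetical):

mkdir -p fixed-include/oldapp
cp /usr/include/oldapp/api.h fixed-include/oldapp/
# fix the syntax error in the copy, then compile with the patched
# directory searched before the system one:
gcc -Ifixed-include myprog.c -o myprog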

March 20, 2009 :: West Virginia, USA  

Michael Klier

My MTB 2009 Season Has Begun

Today I finally went on my first bike trip this year. I really couldn't have my newly inspected bike just standing around any longer. It is still cold, and it turned out it was too cold for my normal long bike pants. The first 6 km were fun, but then it started to snow and my knees got a little too cold for my taste, and because I didn't want to stress my joints too much I headed home again. In the end it was only 12 km. Anyway, it was quite nice and I am looking forward to the first big tour and some trail surfing again :-D. Because I don't want to bother anyone with boring bike statistics, I've set up a new mini blog which will only contain short notes along with some details of my trips and the possibility to see the tracks on google maps (powered by http://gpsies.com).

On a technical side note: I've searched for a nice way to get the GPS data from my Garmin Edge on OS X. Even though Garmin offers a browser plugin to allow websites to communicate with their devices, it didn't work in my case. On Linux I usually use gpsbabel and viking. For OS X I found a neat little tool named LoadMyTracks1) which does a good job getting the data off my Edge. Maybe someone could be kind enough to add viking to macports (I'm actually curious how difficult that might be) :-P.

1) yes I know there's gpsbabel for OS X ;-)

March 20, 2009 :: Germany  

Roy Marples

dhcpcd-4.99.15 out

PPP users will like this release :)

dhcpcd can be configured to monitor an interface and wait for a static IP address to be assigned. For Point To Point interfaces, we can use this directive:

interface pppoe0
static ip_address=
destination routers domain_name_servers

This means that the interface destination is also the gateway and DNS server.

A funkier approach is this:

interface pppoe0
inform

This enables DHCP INFORM over PPP - basically the destination should also be a DHCP server (or relay) so we can configure DNS and other nice DHCP things like NTP servers.

However, I've not tested this at all, but some Cisco documentation hints that some Windows machines do this so it should work :)

March 20, 2009

Bryan Østergaard

Just left all gentoo IRC channels

Just a quick note that I have left all gentoo IRC channels for good, as a few gentoo developers are always either putting words in my mouth or attacking me in silly ways when I try to participate in technical discussions. I have absolutely no inclination to get dragged down to that level, so I've simply left all the channels now.

If anybody needs my help I'm sure you know where to find me but please consider carefully if it has any relation to gentoo before contacting me and DON'T contact me if the answer is yes. Thanks for your consideration.

March 20, 2009

Brian Carper

Goodbye, sweet uptime

I finally had to reboot my Gentoo box today. My uptime as of reboot:

315 days, 54 min

Not spectacular, but not bad for a dust-covered desktop machine, and it probably could've gone another 300 days or so. I only had to reboot because I bought another 500GB hard drive.

Funny thing about rebooting after that long: you have no idea what's going to happen. I finally unmasked and compiled a newer kernel, and there were quite a few new options and features in there to root through. My disk hadn't been fscked for 396 days, and after rebooting and 15 minutes of grinding away, it found a few dozen orphaned inodes. A few init scripts having to do with modules gave me some warnings, but I fixed that up.

But I think I can spare an hour every year or two to update my system.

March 20, 2009 :: Pennsylvania, USA  

March 19, 2009

Brian Carper

Blog is still going strong

After I implemented that silly CAPTCHA yesterday, the spam was stopped. There's also a honeypot form field (it's hidden via CSS so humans don't know it's there, and if any bot POSTs text for that field, the data is rejected automatically). It's silly and easily defeated, yet it stopped all 262 spam attempts since yesterday. It looks like all the spam is for one site, but it's coming from a huge range of IPs. So it's probably a botnet. Thanks, MS Windows!

I rewrote my whole CRUD layer so that I could use it for more than one database at once, and then rewrote my gallery code to take advantage, and now two hours later I have my origami gallery back up and running. Both sites are running from the same JVM. I wonder how many sites I can have going at once before the server melts into a puddle of Java-inflicted goo.

  PID PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
11338 16   0  512m 128m  12m S    0  0.3   0:28.33 java

Good thing I have plenty of RAM on the server. From looking at before-and-after shots of the memory usage, 66 MB is the JVM itself, and 40 MB more is Jetty and Compojure and my code and all the dependencies. Then the last ~20 MB or so is my database slurped into RAM. So I can probably fit another few tens of thousands of posts and comments in here before I have to worry much. The real test will be letting this thing run for a couple weeks and seeing how hard it leaks.

March 19, 2009 :: Pennsylvania, USA  

March 18, 2009

Jürgen Geuter

Easystroke

Easystroke is an application for X that allows you to specify mouse gestures to invoke actions.

Those actions can be anything, ranging from synthesising keystrokes and button presses to running arbitrary commands.

After running easystroke you get a tiny icon in your systray:

Clicking on it allows you to configure actions or set preferences for easystroke. The list of actions looks like this:



To create a new entry you click "Add Action". The next step is to perform the mouse gesture you want to assign to the action. Afterwards you select which type of action this gesture should trigger (emulating a keypress, for example) and finally configure the action (like specifying which keys should be pressed or which command should be run).

You can configure which keys to use to start a gesture (default is to use the middle mouse button) and you can also tell easystroke to ignore gestures while in certain applications or just accept gestures from certain devices.

It's a nifty tool that can improve productivity by adding gesture support to basically any application you run.

The official homepage offers packages for Ubuntu Hardy and Intrepid as well as a source package. Gentoo users can grab the ebuild directly from my overlay or add my overlay to layman by running layman -o http://the-gay-bar.com/overlays.xml -a tante_overlay.

If there's interest, I'll post the ebuild to bugzilla later; I just didn't want to clutter it without any interested users ;-)

March 18, 2009 :: Germany  

Brian S. Stephan

Player's Handbook 2 - a review

So, increasing my all-consuming interest in Dungeons & Dragons 4e, I picked up the Player’s Handbook 2 today, and spent a good part of the evening paging through it. Despite my renewed enthusiasm for the game (now that I have some playtime on both sides of the screen), I still came at the book with a bit of skepticism — let it be known now, it was mostly unwarranted. What is great, what is good, and what remains, you ask?

Pre-content Thoughts

Normal D&D4e production values in printing and layout. Inside the book, it’s indistinguishable from the relevant sections of the PH1 — same style for races, classes and their powers, paragon paths, so on.

The art is hit and miss. The primary art (chapter opens, races, and the like) is all pretty good to very good, but some of the incidental art and paragon path illustrations could have used another look. Nothing ridiculously wrong like some of the Complete Divine art (which my group liked poking fun at), but occasionally underwhelming. I guess not all of the illustrations can be knockouts.

On the nitpicking anal-retentive side, I’m wondering if every book is going to have a slightly different colored spine — the blue of the book is slightly lighter than that of the Player’s Handbook, and while trying to color code the books is no fun either, I’m wondering if my bookshelf is going to look like a shuffled Crayola box by the end.

Those highly concerned with dollars and cents may feel a bit stiffed in terms of volume — the book runs at the same price as the first Player’s Handbook but comes in at almost 100 pages fewer, putting it alongside the Dungeon Master’s Guide (a useful book, don’t get me wrong) in page-to-dollar ratio. While the page disparity can be easily explained given the content in PH2, maybe knocking $5 off the retail price would have been in order.

Introduction

The obligatory introduction lays out what is to come, and includes a cookie-cutter sidebar prodding players to describe their powers less systematically and to use backgrounds (introduced later) to help flesh out the character’s backstory.

More interestingly, the introduction concludes (thankfully, already) with a page on the primal power source (the new source in PH2). It’s a pretty good write-up, with an honest approach to characters of the wild, essentially stating “while primal characters may not care much about that divine hooey or the growth of civilization, they’re not diametrically opposed — all three sides have common enemies and bigger fish to fry”. It doesn’t wax poetic or anything, of course, but it’s a nice framing of the power, and I wish they’d done the same for the powers in the Player’s Handbook.

Character Races

Five races are introduced: devas, servants of the gods being reborn in the common world; gnomes, trickster fey (no surprise there); goliaths, tough, rugged mountain nomads; half-orcs, orc/human hybrids presented as a unique line rather than halfbreeds (interesting); and shifters, bestial humanoids with trace amounts of lycanthrope blood (hence the shifting).

The latter three races tend to bleed together a bit in their focus on the wilderness (but hey, that is to be expected with the introduction of the new power source), but each has its defining qualities. None of the five seem tacked-on or like an afterthought, and the description of each makes them more than caricatures (with the exception of the shifters, maybe). I was pleasantly surprised by the half-orc, which finally sheds its stereotype as a dumb thicky, constantly on the fringes of (both human and orc) civilization.

The second half of the chapter presents some racial paragon paths, one for each PH1/PH2 race (with the exception of the half-elf, who gets improved multiclassing via a feat instead). The racial paragon paths take the traits of each race to their obvious pinnacle: shifters become moonstalkers and get hunting-themed abilities, gnomes become fey beguilers and get sneaking and illusion abilities, eladrin ascend to shiere knights and represent the pinnacle of the Feywild, and so on. Nothing appears wrong with the racial paragon paths, but they’re not quite my cup of tea, and they do seem to have slight difficulty differentiating themselves from class-based paragon paths. But, for those looking to have their character become the adventurer by which all of their race are judged (heh, or stereotyped), these do exactly that. The powers stand out well without going too far one way or the other on the balance scale.

Again, I can see plenty of people being pleased with these, I just personally find the class-based paragon paths more interesting.

Character Classes

The meat and potatoes of the book. Eight new classes:

  • the avenger (striker), a divine agent of battle, predisposed to neutrality, dishing out their god’s will, and a pretty interesting class all told;
  • the barbarian (striker), the classic “I’m going to rip that one guy to shreds and damn the defenses” warrior;
  • the bard (leader), which I’m excited about for some strange reason — a fun-looking party support leader who buffs with a little bit of controller mixed in;
  • the druid (controller), the classic nature-based shapeshifter that is all about flexibility, with a litany of powers in and out of their beast form;
  • the invoker (controller), an impressive but somewhat derivative conduit for divine will, either protective or wrathful;
  • the shaman (leader), a battle guide with a companion spirit to act as another ally (setting up flanks, acting as a healing focal point, etc.);
  • the sorcerer (striker), a channeler of raw arcane energy that mixes the striker’s focus with burst and blast attacks;
  • and the warden (defender), a primal protector of nature (and of course your party) with a controller-like mass-mark ability and beast or tree forms.

All of the classes have their primary role clearly indicated, and the support text also points out common secondary roles, which is a nice addition that shows the diversity of the classes. Naturally, each class has its entire power list laid out as in the PH1, along with a number of paragon paths. System balance is solid here too; none of the classes or powers appear to be broken, with the exception of the rare higher-level power which seems to have one too many dice or the like. Definitely not a deal-breaker, though.

What impressed me the most was that, just like with the races, none of the classes feel tacked on or do anything totally antithetical to the standard set by the first book — all of the classes stand up alongside their PH1 kin, acting as part of the overall design while still offering their unique qualities.

The chapter ends with epic destinies, which follow the tradition of being a storytelling mechanism along the lines of “I want my character to be remembered for…”. The Harbinger of Doom stands out to me as a great example of that — its features are as interesting as those of the other destinies, of course, but they are framed with a certain foreboding that keeps the destiny mechanic on the whole interesting.

Character Options

An assortment of less significant mechanics fills this chapter. It begins with backgrounds, which serve the immediate purpose of describing your character before level 1 while adding some minor benefits. These, in my experience, work pretty well — I used the regional benefits in the Forgotten Realms Player’s Guide for my game, and the backgrounds section of PH2 claims those as a subset of the overall background concept. A DM who is not interested in the mechanical benefits of the backgrounds may still be interested in presenting them, just to get the gears turning in players’ heads.

Of course, there is the normal collection of feats, feats, and feats (for each tier). At a glance, half of the list is focused on the new classes, with around half of the remaining feats related to new races. The feats, naturally, vary in theme based on the focus of the class or desired action, but, again, everything appears to have been balanced well. One feat of note is the replacement for the half-elf’s missing racial paragon path, a feat that allows the Dilettante racial trait to be used as an at-will power, with essentially limitless multiclassing options for those choosing the paragon multiclassing option. It sounds like a nice feat, and it gives some more love to the oft-disregarded half-elf race.

As would be expected, multiclass feats are included for the book’s new classes.

A more than modest selection of magic items is included, again mostly focused on the wants and needs of the new classes, but a number of the options are definitely useful for the PH1’s classes, including new forms of masterwork armor. The new implements (totems, and weapons as implements) are introduced, as well as musical instrument wondrous items, acting as implements for bards but usable by anyone.

A couple dozen new rituals are added, filling some utility needs introduced by the primal power source (standbys such as speak with nature and control weather), introducing utility bardsongs, and throwing in a wildcard or two (reverse portal, for instance).

Appendix: Rule Updates

Seeing this section scared the hell out of me at first. If anything made d20 (3.0 or 3.5) unpalatable, it was its constant revising of the rules, adding new action types based on the miniatures games, introducing new uses of skills, and the like. The ultimate issue with these changes was that no attempt was made to fit the established order together with the additions, leading to a hodgepodge of exception cases and, ultimately, imbalance.

That was 3.x, however, and so far 4e has avoided that problem. The appendix serves mainly to rewrite the “how to read a power” section of the PH1, both adding new keywords and expanding or re-explaining the terminology introduced in the first book. While this sounds like it could be abysmal, nothing I saw contradicts or breaks the established order; instead, items are just clarified. For example, the appendix states that the sequence of “effect” texts in a power is not accidental, and indentations are indeed intended to create conditional hierarchies (“secondary attack” is indented under “hit” because it is only relevant if you hit).

A number of other minor power clarifications show up: a character does not need to have an implement to use implement-keyword powers, they just need the ability to use the relevant implement (the difference between carrying a wand and being able to use a wand), and reliable powers go unspent if every target is missed. Nothing here seems earth-shattering to me (some of it, in my opinion, is and always was obvious), but it looks like Wizards sought to answer what must be common questions with this superseding text.

There are “new” stealth rules as well, but they are mostly further clarifications: creating a diversion to hide (a usage of Bluff) and Stealth itself are contested with passive Insight and Perception. How Stealth works in combat is explained and presented a bit better as well. Perception is a bit cheaper now, becoming a minor action (hooray). Finally, a couple of terms are added to the glossary.

All in all, these are best described as clarifications and minor bugfixes — if Wizards reprints the Player’s Handbook, I wouldn’t be surprised to see these included along with more standard errata fixes (the text of PH2 even presents them that way, saying what snippets of the book are replaced with the new text).

Conclusion

I’m pretty pleased with this book. I think its success is evident in the feeling I get upon having read much of it and skimmed the rest — that it is not “the new book for players", but a legitimate expansion of scope. It does nothing to ruin, shatter, or unbalance the year of D&D4e we’ve had so far, and it is not even fair to call it another layer of content; its new content is neither above nor below the Player’s Handbook in value, it simply makes the core player content larger, adding without obsoleting. Which is exactly the point of the book.

March 18, 2009 :: Wisconsin, USA  

Brian Carper

Fun with HTTP headers

One fun thing about playing with Compojure is that it doesn't do much with HTTP headers for you, which is a good learning opportunity. RFC 2616 is rather helpful here.

For example, I learned that if you don't set a Cache-Control or Expires header, your browser will happily re-fetch files over and over, which is a bit of a performance hit. Static files that don't change often, like images, can be served with a higher Expires value so they're cached.
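Here's a sketch of the kind of helper I mean (the names are made up, and I'm assuming the map-style response with :status, :headers, and :body):

(import '(java.text SimpleDateFormat) '(java.util Date Locale TimeZone))

;; format a moment ms-from-now in the RFC 1123 form HTTP expects
(defn http-date [ms-from-now]
  (let [fmt (doto (SimpleDateFormat. "EEE, dd MMM yyyy HH:mm:ss zzz" Locale/US)
              (.setTimeZone (TimeZone/getTimeZone "GMT")))]
    (.format fmt (Date. (+ (System/currentTimeMillis) ms-from-now)))))

;; merge week-long cache headers into a response map
(defn cacheable [response]
  (merge-with merge response
              {:headers {"Cache-Control" "max-age=604800"  ; one week, in seconds
                         "Expires" (http-date (* 1000 60 60 24 7))}}))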

Another thing to keep in mind (note to self) is that using mod_proxy to forward traffic to a local Jetty server means that the "remote IP" you get from (.getRemoteAddr request) will always be 127.0.0.1. If you want the user's real remote IP, you have to look in the X-Forwarded-For header (easily accessed as (:x-forwarded-for headers) in Compojure). Given that Identicons are generated from a hash of an IP address, this has resulted in some screwed up (wrongly identical) avatars for a bunch of people in posts for the past couple days. Oops. Not much I can do to fix that now.
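A tiny helper makes the lookup-with-fallback explicit (the name is made up; note that X-Forwarded-For can hold a comma-separated chain of proxies, in which case the first entry is the original client):

(defn client-ip [headers request]
  (if-let [xff (:x-forwarded-for headers)]
    (.trim (first (.split xff ",")))  ; first hop in the chain is the client
    (.getRemoteAddr request)))        ; no proxy header, fall back to the socket peer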

In other non-news, I just set up spam logging for the blog so I can see the kinds of things bots are doing to get around my feeble anti-spam measures. Sadly the spam seems to have stopped entirely, right after I set this up. How annoying.

March 18, 2009 :: Pennsylvania, USA  

Darn you, spammers.

I was in a rush to get this darn blog finally done, so I threw some stupid anti-spam measures on here. Namely, the comment form included 20 textareas, 19 of which were hidden via display: none and one of which was randomly the right one, and any text in the hidden ones would cause the comment posting to fail.

It only took a spam bot 48 hours to figure this out, I guess, because for the last hour I've been hammered. So I implemented a CAPTCHA as another short-term holdover until I can code up something good. At least it immediately stopped this spam bot whose crap I've been deleting for the past hour.

Hopefully this isn't too intrusive. I think it fits the site fairly well, as you will probably agree once you see it.

March 18, 2009 :: Pennsylvania, USA  

March 17, 2009

Jason Jones

Outlook Calendar Events in PHP

Man, the weather's getting nice out there...  Makes me not want to program.  Yet, here I am.

Today I had the task of somehow getting a PHP-generated web app to generate an Outlook appointment based on database information, and have it work.

Turns out that two hours later, I have it functioning perfectly, thanks in large part to this blog entry. (Thanks Luke!)

So, in case you didn't want to read his entry, I'll post basically the same code snippet here as he did there, without the detailed explanation.

<?php
//SET THE TIMEZONE
$success = date_default_timezone_set("America/Denver");
header("Content-Type: text/Calendar");
header("Content-Disposition: inline; filename=calendar.ics");
echo "BEGIN:VCALENDAR\n";
echo "VERSION:2.0\n";
echo "PRODID:-//Generated by PHP in Linux!//NONSGML Linux Rocks//EN\n";
echo "METHOD:REQUEST\n"; // requied by Outlook
echo "BEGIN:VEVENT\n";
echo "UID:".date('Ymd').'T'.date('His')."-".rand()."-example.com\n"; // required by Outlok
echo "DTSTAMP:".date('Ymd').'T'.date('His')."\n"; // required by Outlook
echo "DTSTART:$_GET[date]\n";
echo "SUMMARY:Visit with $_GET[contact]\n";
echo "DESCRIPTION: Visit with $_GET[contact] at $_GET[customer]\n";
echo "END:VEVENT\n";
echo "END:VCALENDAR\n";
?>

You'll, of course, want to get rid of the GET variable crud, but other than that, I have verified that this works with Outlook 2003.  In Outlook 2007, it comes in as an event, but not an appointment.  I'll have to figure that out later.

So...  Happy code pirating.  (worked for me!)

March 17, 2009 :: Utah, USA  

Roy Marples

openresolv-3.0 out

openresolv-3.0 is now out.

It doesn't have any functional changes or improvements over the 2.0 version. So why the big release? Configuration baby, configuration.

Debian's resolvconf has plenty of mini configuration files, which openresolv has supported. This is quite problematic from a package distribution and maintainability perspective. Also, the implementation required plenty of forking of sed or equivalent to parse them, which is inefficient.

So openresolv now sports a shiny configuration file, which I think is more flexible and easier to manage. It has a downside and an upside. The downside is that you now have to configure the files to write resolver configuration to for dnsmasq/named. The upside is that if they are unconfigured (the default) then the subscribers don't litter /etc with files you'll never use.
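For dnsmasq, the result looks something like this (the variable names should match the resolvconf.conf man page; the paths are just illustrative):

# /etc/resolvconf.conf
# leave these unset (the default) and the dnsmasq subscriber writes nothing
dnsmasq_conf=/etc/dnsmasq-conf.conf
dnsmasq_resolv=/etc/dnsmasq-resolv.conf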

Also, the subscribers themselves have been moved out of /etc and into /libexec as they aren't really configuration files.

March 17, 2009

Dan Fego

.flv files finally tab-complete with mplayer

After a long time of playing .flv files in mplayer on the command line in Gentoo, I noticed recently that they now tab-complete. I’m not quite sure how recently this change occurred or what caused it, but I’m very pleased with the update.

I’m not quite sure how the tab-completion infrastructure works, but I know it’s got a lot of “packages” for different programs. A surprising number, actually. For the uninitiated, on my system, I’m looking at:

dfego@antica ~ $ eselect bashcomp list
Available completions:
[1] _subversion
[2] apache2ctl
[3] base *
[4] bitkeeper
[5] bittorrent
[6] cksfv
[7] clisp
[8] dsniff
[9] eselect *
[10] freeciv
[11] gcl
[12] gentoo *
[13] git *
[14] gkrellm
[15] gnatmake
[16] gpg2
[17] gvim
[18] harbour
[19] isql
[20] larch
[21] lilypond
[22] lisp
[23] mailman
[24] mcrypt
[25] mercurial *
[26] modules
[27] monodevelop
[28] mpc *
[29] mtx
[30] p4
[31] povray
[32] qdbus
[33] ri
[34] sbcl
[35] sitecopy
[36] snownews
[37] ssh *
[38] subversion *
[39] tig *
[40] tree *
[41] unace
[42] unrar *
[43] vim *
[44] xxd

The ones with the asterisks are the ones I currently have enabled for my main user. As you can see, there are a lot of options to choose from, and for some weird reason my git functionality died after a recent update. But wait… mplayer isn’t there… Interesting…


So where does mplayer’s tab-completion come from? It’s not just the normal filename completion provided by the shell, because it used to exclude certain file types.

Interesting. This must be investigated.

March 17, 2009 :: USA  

Brian Carper

Clojure 1, PHP 0

Goodbye Wordpress

As I've mentioned many times, I've been working on replacing Wordpress for my blogging needs. Wordpress has been pretty good for the past three years, but it's time to move on, for a bunch of reasons.

Primarily, the way Wordpress automatically mangles my text is annoying. For example, it turns newlines into paragraphs inconsistently (especially when it comes to pre/code blocks). This blog is mostly about programming, which means being able to post code without having my quotes turned into "smart" quotes and my --flags turned into long dashes is kind of important. HTML is sometimes automatically escaped, and sometimes not. I can't count how many comments I've gotten where someone posted some code, then posted again to inform me that Wordpress ate the code for dinner. There are plugins to fix some of this, but they break every time Wordpress releases a new version, and they have never really worked that well for me.

Writing a theme for Wordpress means a mix of PHP and HTML and CSS, which is painful to read and even more painful to write. Aside from the considerable ugliness of PHP itself, there's a lot of weird magic involved with themes, based on naming conventions for files, weird fall-through behavior when certain theme files aren't present and so on. The Wordpress API is enormous and not fun to work with if you want to do something other than the standard Wordpressy kind of blog structure. Static pages aren't too much fun to work with in Wordpress either.

Lately I think I was getting hammered with spam partly because Wordpress is such an easy target. Akismet is nice but it wasn't catching enough lately; maybe 10-15 spams per week were slipping through. And there was always the chance that some widely-known exploit in Wordpress would leave my site susceptible to some roving bot.

And so on.

Hello Clojure

Why Clojure? Because it's awesome and fun and powerful and I wanted to learn it better.

Compojure is a web framework for Clojure that made a lot of this very easy. Coming here from a Ruby on Rails background, Compojure has a lot going for it in comparison. Compojure is lightweight and more low-level than Rails. For example Compojure doesn't enforce MVC on you, doesn't force a unit testing framework on you, and doesn't care how you access your data. Compojure just lets you route HTTP requests to Clojure functions based on the URL and request method (RESTfully: POST/GET/DELETE/PUT), and it gives you easy access to the request information, session, GET/POST parameters and cookies.
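A routes definition is about as small as that description sounds. Here is a hypothetical sketch in that style (the handler functions are made up; *params* is the bound parameter map, as in the code further down):

(defroutes blog-routes
  (GET    "/"         (index-page))                    ; front page
  (GET    "/post/:id" (show-post (:id *params*)))      ; URL parameter
  (POST   "/post/new" (do-new-post))                   ; form submission
  (DELETE "/post/:id" (do-delete-post (:id *params*))))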

Under the hood it's all servlets and Jetty, both of which are solid, stable, well-tested, well-documented technologies. However, thankfully, all of that Java stuff is under the hood, and well under it. I didn't have to write a single line of Java or interact with a single servlet directly. Everything (session, params, headers) is a Clojure hash-map from the perspective of my code.

Compojure also comes with a domain-specific language for writing HTML, which is similar to CL-WHO and a myriad of other Common Lisp HTML DSLs, all of which are awesome. I can't say enough how much nicer it is to write (or generate) structured s-exps than to write HTML by hand. More on that below.

Compojure doesn't come with any way to interact with a database, so I had to write one. clojure.contrib has an SQL lib which easily lets you interact with a MySQL database. (Clojure can talk to MySQL via MySQL's JDBC connector, of course.) I used clojure.contrib.sql to write a small (192 lines) library which slurps up a bunch of database tables into Clojure refs, and provides a few functions for basic CRUD operations so that any updates to the ref data are also transparently reflected in the database. The database is essentially only for keeping an on-disk cache of the data in case I need to restart the server. The average number of DB queries per page is zero; everything except posting/editing/deleting data just reads out of a Clojure ref.

With possibly multiple users posting data at once, it's nice to have Clojure's built-in concurrency support. Updating the data refs with new data is always safe from multiple threads simply by throwing a (dosync) around all of the write accesses. This was completely painless to write.
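The write path boils down to a pattern like this (a sketch with made-up names; the DB insert deliberately happens outside the dosync, since a transaction body may be retried):

(require '[clojure.contrib.sql :as sql])

(def posts (ref {}))  ; post-id -> post hash-map, slurped from MySQL at startup

(defn add-post! [db post]
  ;; 1. persist to the on-disk cache (MySQL via JDBC)
  (sql/with-connection db
    (sql/insert-values :posts (vec (keys post)) (vec (vals post))))
  ;; 2. make the new post visible to readers, atomically
  (dosync
   (alter posts assoc (:id post) post)))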

I decided I wanted to use Markdown for posting comments and authoring new pages. This was also very simple to do; I outlined how to get Markdown working in Java and Clojure in a previous post. The real-time previews for comments are largely inspired by / ripped off from Stack Overflow, implemented mostly using open-source Javascript libraries like Showdown, JQuery, TypeWatch and TextAreaResizer.

A Brief Comparison: Clojure vs. Wordpress

All of my code, including the CRUD library, all of the HTML for the templates and layout, admin controls, and all the glue to put it together, is 1,253 lines of code. Wordpress is somewhere over 78,000 lines of PHP depending on what you count (that doesn't include any themes or layout, but does include Wordpress features I didn't need and didn't implement). It's still a pretty nice reduction in code overall, any way you look at it.

As an example, in my old Wordpress site I had a plugin catcloud to generate a "tag cloud". This plugin itself is 226 lines of PHP, not bad. However, here's the Clojure code to generate a similar tag cloud (which you can see here currently):

(defn tag-cloud []
  (let [tags (sort-by #(.toLowerCase (:name (first %))) (all-tags-with-counts))
        counts (map second tags)
        max-count (apply max counts)
        min-count (apply min counts)
        min-size 90.0
        max-size 200.0
        color-fn (fn [val]
                   (let [b (min (- 255 (Math/round (* val 255))) 200)]
                     (str "rgb(" b "," b "," b ")")))
        tag-fn (fn [[tag c]]
                 (let [weight (/ (- (Math/log c) (Math/log min-count))
                                 (- (Math/log max-count) (Math/log min-count)))
                       size (+ min-size (Math/round (* weight
                                                       (- max-size min-size))))
                       color (color-fn (* weight 1.0))]
                   [:a {:href (:url tag)
                        :style (str "font-size: " size "%;" "color:" color)}
                    (:name tag)]))]
    (block nil
           [:h2 "Tags"]
           [:div.tag-cloud
            (apply html (interleave (map tag-fn tags)
                                    (repeat " ")))])))

This is a tenth as much code, which is a good reduction in my opinion. Most of the code is the math to generate a weight logarithmically for each tag so they scale nicely. (all-tags-with-counts) fetches a seq of two-item pairs: the tags themselves (which are hash-maps) and a count of posts for each tag. There are two locally-defined functions in the let which generate the text color, the font size, and the HTML for each tag.

The vectors that look like [:h2 "Tags"] are input for Compojure's HTML-generating DSL; this would be transformed for example into <h2>Tags</h2>. (block ...) is a macro which wraps its content in HTML for the rounded borders of my layout. (Math/log ...) and friends are calls to standard Java math functions.

This whole function is less code than just the horrible boilerplate array declarations at the top of the Wordpress plugin:

$catcloud_field_data = array(
  array('name' => 'Minimum Font Size', 'option' => 'catcloud_min_font_size', 'size' => '4', 'maxlength' => '3',
       'default' => '9', 'note' => 'Used for the least frequent categories', 'validation' => '/^\d{1,3}(\.\d{1,3})?$/'),
  array('name' => 'Maximum Font Size', 'option' => 'catcloud_max_font_size', 'size' => '4', 'maxlength' => '3',
       'default' => '18', 'note' => 'Used for the most frequent categories', 'validation' => '/^\d{1,3}(\.\d{1,3})?$/'),
  array('name' => 'Font Face', 'option' => 'catcloud_font_face', 'size' => '15', 'maxlength' => '254',
       'default' => '', 'note' => 'Set an optional list of font faces', 'validation' => '/.*/'),
  array('name' => 'Font Units', 'option' => 'catcloud_font_units', 'size' => '3', 'maxlength' => '2',
       'default' => 'pt', 'note' => 'Choose one of em, pt, px or %', 'validation' => '/^(%|em|pt|px)$/'),
  array('name' => 'Color Start', 'option' => 'catcloud_color_start', 'size' => '7', 'maxlength' => '6',
       'default' => '0066CC', 'note' => 'For the least frequent categories. Use a hexadecimal RGB triplet. ie. 0066CC',
       'validation' => '/^[\dA-F]{6}$/i'),
  array('name' => 'Color End', 'option' => 'catcloud_color_end', 'size' => '7', 'maxlength' => '6',
       'default' => 'CC6600', 'note' => 'For the most frequent categories. Use a hexadecimal RGB triplet. ie. CC6600',
       'validation' => '/^[\dA-F]{6}$/i'),
  array('name' => 'Before Category', 'option' => 'catcloud_before', 'size' => '3', 'maxlength' => '20',
       'default' => '[', 'note' => 'Set the character(s) to display before category names', 'validation' => '/.*/'),
  array('name' => 'After Category', 'option' => 'catcloud_after', 'size' => '3', 'maxlength' => '20',
       'default' => ']', 'note' => 'Set the character(s) to display after category names', 'validation' => '/.*/'),
  array('name' => 'Show Top N Categories', 'option' => 'catcloud_top_n_cats', 'size' => '5', 'maxlength' => '3',
       'default' => '', 'note' => 'Show only the top N categories (where N is a number like 10 or 25 or whatever. Set to 0 or empty for no limit.',
       'validation' => '/^\d*$/'),
  array('name' => 'Excluded Categories', 'option' => 'catcloud_excluded_cats', 'size' => '15', 'maxlength' => '254',
       'default' => '', 'note' => 'A comma-separated list of category ids.',
       'validation' => '/^[\d, ]*$/'),
);

Ugh. As another example, here's the code that handles a POST request to add a new blog page:

(defn do-new-post []
  (check-login
   (let [post (add-post *params*)]
     (sync-tags post (:all-tags *params*))
     (redirect-to "/"))))

It does exactly what it says: Check to make sure the user is logged in, add the post based on the POST params, sync up the tags for that post and redirect to the front page. Lisp lets you say what you want very concisely, with a bare minimum of boilerplate.

How about speed? My Clojure code is actually generating HTML in the most brute-force and wasteful way possible. The HTML for each page is regenerated from scratch, via a cascade of a couple dozen function and macro calls, every time you load a page. But it's still pretty fast, a couple hundred milliseconds for most page requests. This is slightly faster than the Wordpress version of my site. If I ever have performance issues I can switch to another Clojure HTML library, like clj-html which uses the same vector-style syntax but pre-compiles the HTML.

How hard was it to set up on the server? Wordpress is pretty famous for being dirt-easy to deploy anywhere. My Clojure app by comparison was slightly more difficult, as you might expect, but it wasn't brain surgery. My server runs Debian. First I installed the JVM via apt, then I rsynced a bunch of jars and clj files to the server, then I installed emacs and screen, also via apt. Then I put two lines into an Apache config file to proxy-forward traffic to a local port where Jetty would be listening. I started Emacs, did (require 'bcc.blog.server), did (bcc.blog.server/go) to start everything, and that's about it. Took about 15 minutes to set up from scratch. When I find a bug, I SSH in, re-attach to screen, fix it in Emacs, hit C-c C-c to recompile just the functions I need to update, and then detach from screen again.
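Those two Apache lines are presumably the usual mod_proxy pair, something like this (the port number is a guess for illustration):

ProxyPass        / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/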

I'm pretty pleased with this so far. It was fun to write and has all the features I used from Wordpress, plus more, and the building blocks are there to extend things if I imagine up a new feature I like.

Looks like my blog is still running today in spite of my predictions. Still waiting for the JVM to crash though, I know it's coming. I plan to post the source code for some of this once I'm sure it works.

March 17, 2009 :: Pennsylvania, USA  

March 16, 2009

George Kargiotakis

Convert greek characters from latin1 mysql database fields to pure utf8

The Problem
To sum it up, the case is this: many, many web applications were programmed to use the latin1 collation for their fields inside mysql databases, but most users now use utf8 from within their browsers. What happens is that utf8 characters get stored inside latin1 fields, which in turn produces chaos! A huge web application that used that kind of madness was Wordpress. Luckily (or not) Wordpress now uses utf8 everywhere. I’ve known many, many people who got so frustrated when they tried to move from their old Wordpress installation to a newer one that they finally dumped the idea of moving the posts and started a new blog, because all their greek posts couldn’t be exported “easily”. I won’t say “properly”, because there are always solutions to problems like this, but none of the solutions were straightforward at all.

This is a HUGE problem for many greek (and not only) users and I hope I now have an elegant(?) solution to it.

The solution that I provide does not require any use of the mysqldump utility at all. Most solutions to the problem I’ve seen so far were more or less using the mysqldump utility like this:
$ mysqldump --default-character-set=latin1 --opt -u user -p dbname > latin1-dbname.sql
Since many people have their blogs on shared hosting or have very limited shell access, the previous solution is a no-go for them, because it requires that they contact their hosting support, explain what they want, and wait for a reply. If they are lucky they might get the .sql file; otherwise they are back where they started.

My solution:
First of all it is based purely on this post: http://combatwombat.7doves.com/2008/10/26/mysql-latin1-to-utf8-issues. While that post does not mention greek characters at all, it gave me an idea of how it should be done.
In order to solve the problem using my solution you need a Linux or Mac OS X host. This is because the solution is based on a bash script that needs the sed utility for character conversion. Both bash and sed are of course not included in a default Windows installation. So if you are a Windows-only user you can either install those tools through cygwin and see if it works (never tested), ask a friend of yours who uses Linux or Mac OS X to help you, boot a Linux live CD, or install Linux :D

What every hosting solution definitely has is access to mysql databases through phpmyadmin. Even if your hosting provider or control panel does not provide it for you, you can always install it manually. One of the easiest things to do in phpmyadmin is export a database. Just open phpmyadmin, select your database, click on Export, select some or all of the database tables you want, select “Save as File”, and click on “zipped”. Click on “Go” and after a few seconds you will have your .sql.zip file sent to you. If you find that hard to do, please ask a friend. Please don’t blame me for blowing up your mysql database if you don’t know how to handle these simple directions.
Let’s say that the db name was sample-db, then you should have gotten a file named: sample-db.sql.zip
Unzip it:
$ unzip sample-db.sql.zip
and then edit it with a text editor, vim for example:
$ vim sample-db.sql
If you are suffering from the problem mentioned before you will probably see things like:
[screenshot: greek-utf8-inside-latin1]
To start the conversion you need to download the following script: greek-convert-latin1-to-utf8.sh or greek-convert-latin1-to-utf8.sh.gz (you need to extract the .gz)
Then make it executable: $ chmod +x greek-convert-latin1-to-utf8.sh
And then execute the script with the database as an input:
$ ./greek-convert-latin1-to-utf8.sh sample-db.sql
I'll work on sample-db.sql ...
sample-db.sql...done

Then you will have a new file named sample-db.sql.clean as output in the same dir where you ran the script.
Open it and you should now see every post in pure utf8 greek like this:
[screenshot: greek-utf8-inside-latin1-converted]
As you can see, 99 percent of the characters were correctly converted to proper greek utf8 ones. I don’t currently have the time to investigate why a few characters don’t get properly converted, but I’ll soon find a solution for that too :)

What’s now left to do is to import the sample-db.sql.clean file to your new hosting…you can do that of course through phpmyadmin…

The conversion table that I used is here: greek-replacement-latin1-to-utf8.ods

This might be a late solution, since the problem is quite old, but I am sure that there are many people still having headaches over issues like this. Enjoy :)

Downloads:
greek-convert-latin1-to-utf8.sh
greek-convert-latin1-to-utf8.sh.gz

March 16, 2009 :: Greece  

Brian Carper

New Blog... I think...

OK, here's the new blog. Apologies to anyone who may be following my RSS feed, because the whole feed is probably going to be reset by switching blog engines.

If you can call this an "engine". This is my Clojure rewrite. I'll have much more to write about this tomorrow when I'm awake. In the meantime, bug reports are welcome.

Here are my estimates:

  • 52% chance the blog is crashed and down by the time I wake up tomorrow.
  • 27% chance my feeble anti-spam measures are easily defeated, and hundreds of spam comments are waiting for me in the morning.
  • 14% chance the JVM brings down the whole server.
  • 7% chance everything works swimmingly.

I had to take down my origami gallery site just to get this to run. Fun times ahead.

When I came up with this blog layout I thought it was great, but after three weeks of looking at it I'm starting to hate it. I can work on making it all pretty later though.

Ah well, more tomorrow. Keeping my fingers crossed.

March 16, 2009 :: Pennsylvania, USA  

March 15, 2009

Roeland Douma

Stable system?

From time to time I browse through some of the files on my system. Today is one of those days. When coming across my /etc/portage/package.keywords/ directory I found out that right now I am not running the most stable system (according to Gentoo). Of course KDE 4.2 is the main reason for my huge list of package unmasks, and then there is Xorg 1.5, but that is pulled in by KDE 4.2…

But even without all those packages the list is still huge… So I will probably be spending some hours today cleaning up my package.keywords and filing some stabilization requests. Which is always good. We need to keep those Gentoo devs busy ;)

March 15, 2009 :: The Netherlands