Planet Larry

April 17, 2009

Brian Carper

Unicomp Customizer keyboard review

I got my Unicomp Customizer 104 in the mail today. This is a keyboard using the same technology as the famous IBM keyboards of yore.

[Photo: the Unicomp Customizer]

Why?

The Customizer is an enormous blocky hunk of hard black and grey matte plastic. It is the very antithesis of modern, soft, rounded, Apple-esque fashion. It has no "multimedia" keys, it doesn't glow in the dark, it doesn't have a built-in USB hub, it looks distinctly 80's-ish, and it costs $70. Why on earth would anyone want this thing?

A couple of reasons... one is that it's a status symbol among grizzled old hackers. This keyboard has gotten a lot of good reviews, e.g. last year on Slashdot, and I've heard the sentiment repeated elsewhere. There are stories of people rescuing old IBM keyboards from dumpsters and selling them on eBay.

If it were simply a status symbol I would look away without a second glance. (Which is why I own a Cowon D2 and not an iPod. I like to research my purchases to the point of paranoia.)

But the popularity seems to be backed up by real functionality and build quality. These keyboards have a reputation for being great to type on due to the unique feel of their buckling spring "clicky" keys, and for being indestructible, with some keyboards still in use after two decades. So I decided why not see for myself?

A keyboard is the main tool of my livelihood and one of the main tools of most of my hobbies. It makes sense to try to get the best tool for the job. The three most important parts of a computer in my opinion are the keyboard, mouse, and monitor. CPU? RAM? Hard disk space? I'll take whatever you give me. But the things I interact with on a constant basis, I want those things to be comfortable.

Clicka clicka clicka

Yeah, this thing is clicky. Even after all the reviews, I was unprepared for just how clicky it is. You can feel the click of each keypress in your fingers and hear the clicking from 3 miles away.

I tried pushing a key down slowly to make it click without activating a keypress, and I found it very difficult if not impossible. You can always tell when you've successfully pressed a key on this keyboard: if it clicked, you did; if it didn't click, you didn't.

One bad thing about the clicking is annoying everyone in the room with you. I'm a bit worried I'm slowly going to drive my wife insane.

Finger workout

The keys have a lot of weight to them compared to the mushy feel of modern keyboards (which usually use some rubber or plastic dome under the keys). The Customizer's keys have little springs in them, and you can feel the keys pushing back on your fingers as you type. It feels much different than any other keyboard I've used.

Is it a good or bad feel? I'm undecided. It does feel pretty good; there's a lot of response to the keyboard, and you can more easily tell when you miss a key or flub a keypress and hit two keys at once. I think this probably aids accuracy. I don't type more accurately, but I more easily notice my mistakes.

I'm afraid the weight might lead to fatigue though; the keys are harder to press than on other keyboards, and my hands feel like they're getting a workout in comparison. However, I've had a few long nights of typing on this keyboard and haven't noticed any more fatigue than usual, so the worry may be unfounded. On the other hand, I do often notice how annoying it is to type on a laptop, which has no resistance and no key travel at all. The resistance in this keyboard is a nice change of pace.

Built well?

I think "indestructible" is probably an apt word. I've only had mine for a couple days, but just hefting the thing, you can tell it's built like a tank. Very thick hard plastic all around. It weighs a ton. If I had to choose a keyboard to use as a weapon in a pinch, I'd grab this one immediately.

The keys come off easily; every key is just a cap over a smaller plastic key beneath, and that cap is a simple piece atop a tube with a spring in it. There isn't a lot of room for mechanical failure here unless you lose the springs. Everything comes off and goes back on very easily, which is nice for when I need to clean out the gunk in a year.

I have heard that if you spill a cup of milk into one of these keyboards, you may find it hard to drain. So don't do that.

Lack of features is a feature

Multimedia keys suck. I've never used them. They waste space and the only time I remember they exist is when I push them accidentally.

The Customizer is very "traditional". There are no multimedia keys, no volume controls, no programmable (i.e. useless) macro keys, no email or internet shortcuts. Just the standard 104 keys. This is a plus in my book.

Caps Lock is slightly shortened, with a gap between itself and the A key, which is nice for avoiding hitting it accidentally. The version of the keyboard I got has a modern Super ("Windows") modifier key, but you can get a version without even that, if you like. Otherwise there are no frills.

Speed typing

I took a couple of silly online typing tests, and I got between 75 and 95 WPM with 98% accuracy, which is as good as I've ever gotten. My six-fingered typing style is a bit odd but this keyboard suits me well.

WPM is a terrible measure of programming speed, because programming has a much higher punctuation-to-letter ratio than English prose. So I also tried an Emacs session and a bunch of Vimming, and I experienced no problems. I forgot I was using this keyboard almost immediately, which is a good thing. It means it wasn't annoying me.

Very important to me, as a Vimmer, is the position and size of the Escape key. I have one other keyboard that has Escape offset to the right by half an inch, which is horrendous and messes up my Vimming all the time. My other other keyboard has a tiny little Escape key, half as big as a normal key, which is equally bad.

On the Customizer, Escape is positioned off by itself in the corner as it should be, with a ton of space between itself and the number row, and the Escape key itself is freaking enormous. This is a huge plus in my book. You can't miss Escape on this keyboard.

Similarly, all the other keys are the right sizes and in the right places.

Verdict

So how is the Unicomp Customizer?

It's solid, standard, unique, and has a nice retro, minimalist style that I personally enjoy.

It's also huge, loud, and expensive. Is it worth buying? If you have the money to spend, I think it is. I don't regret the buy after a few days. When I come home from work and start typing on this guy, I'm always pleasantly surprised.

April 17, 2009 :: Pennsylvania, USA  

April 16, 2009

Ciaran McCreesh

Distributed Distribution Development, and Why Git and / or Funtoo is Not It


Gentoo is slowly shuffling towards switching from CVS to Git. This is a good thing, because CVS stinks. Using Git will reduce the amount of time developers need to waste to get something committed, make it easier to apply patches from third parties and make tree-wide changes merely a lot of work rather than practically impossible. What it will not do is make Gentoo in any way more ‘distributed’, ‘decentralised’ or ‘democratic’.
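To make the third-party point concrete, this is the sort of flow Git makes cheap — a generic example, not a claim about what Gentoo’s eventual workflow will look like (the URL and branch name are invented):

# take a contributor's tree as a remote, then merge their branch,
# keeping authorship and history intact
git remote add contributor git://example.com/tree.git
git fetch contributor
git merge contributor/fix-foo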

Some of the Git work has already been done, in a reduced manner (no history and no mirroring), by Daniel Robbins’ Funtoo, which is purported to be more distributed than Gentoo. The problem is, there’s nothing there to back up the distributed claim.

Distributed development, in the sense for which Git was designed (and ignoring the intervening BitKeeper stage), meant moving away from having a single central repository off of which everyone worked to having everyone work off their own, publishable repositories and providing easy ways of merging changes from one to another. ‘Good’ changes would tend to find their way from the authors up the food chain to the main repository whence official releases are made. Users requiring things that hadn’t made their way to the top would maintain their own repository, and merge in changes from elsewhere that they needed.

[Diagram: Typical Git Workflow Model]

For a conventional codebase, this model works. But it’s not particularly nice, and it’s driven by necessity. You’ll note the big red dots in the diagrams. These represent places where people (assisted to some highly variable degree by Git) have to do merges. I chose big red dots rather than soft fluffy clouds because merges can be a lot of work (and because drawing clouds takes effort).

If you’ve got a conventional codebase, you have to do merges to make use of things from multiple sources — the compiler takes a single codebase and produces a program from it. You can do the same thing with a distribution. Funtoo, for example, has had the Sunrise repository merged into the main repository. Such a change would likely not be possible with Gentoo’s current CVS architecture.

It’s not entirely clear whether Funtoo intends to have users who want to use other overlays merge those overlays into their own tree. Doing so would be more Gitish.

[Diagram: Apparent Funtoo Workflow Model]

But why bother? There’s no need to have a single codebase — there’s no compiler that has to take every input at once and turn it into a single monolithic product. Those big red dots are unnecessary.

A lot of fashionable programs are moving away from the big monolithic binary model and towards a plugin-assisted architecture. If you want Firefox to do a few things it doesn’t, you don’t hunt around for people who have already written them and then try to merge their source trees together. You install plugins. Only for more severe changes do you have to dive into the source, and the severity of change requiring a trip to the source is gradually increasing.

There’s a reason for this — whilst the merge model is a lot better than a single authoritative codebase and a bunch of patches, it’s a lot more work than providing limited composable extensibility at a higher level.

What, then, would a plugin-based model look like for a Gentoo-like distribution?

Presumably, one would have a centralised ‘main’ codebase. One could then add additional small extras to that main codebase to obtain new functionality (packages, in this case); these extras would rely upon parts of the main codebase and wouldn’t be able to operate on their own. Sound familiar? Yup, overlays are plugins.

This whole “merging overlays into the main tree” thing is starting to look like a step in the wrong direction. What would be some steps in a better direction?

One thing that comes instantly to mind is improving overlay handling. Portage’s overlay handling currently (at least in stable) looks like this:

[Diagram: Portage Overlay Model]

Portage takes the main Gentoo repository, and then merges it with each overlay in turn, creating one ‘final’ overlay that ends up being used. I’ve used an orange dot here rather than a red one because it’s a different kind of merge. Rather than doing a source-level merge, the orange dot merge more or less (sort of) works like this:

  • If there’s a package with the same name and version in the origin and the overlay we’re merging in, take the overlay version.
  • If there’s an eclass with the same name in the origin and the overlay we’re merging in, sometimes take the overlay version.
  • Do some horrid hackery to merge together any colliding profile things in an uncontrolled manner that doesn’t work for more than one merge.
  • Pass everything else through.

Now, to be fair, the orange dot merge usually works. Most overlays don’t try to override eclasses, don’t have eclasses that conflict with each other and don’t mess with profiles. For colliding versions, you end up being stuck with a single selected version, which isn’t always so good.

Unfortunately, some overlays do try to override eclasses and profiles, and the result isn’t pretty. You’re ok so long as you only use a single overlay that does this, and so long as any eclass changes aren’t incompatible, but anything beyond that and weird stuff happens.
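For concreteness, the order in which those orange dot merges happen is simply the order of the user’s make.conf; a minimal sketch (overlay paths invented for illustration):

# /etc/make.conf
PORTDIR="/usr/portage"
# overlays are merged in list order, so for colliding versions
# the overlay listed last is the one you end up with
PORTDIR_OVERLAY="/usr/local/overlays/sunrise /usr/local/overlays/dodgy"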

A less dangerous model would be to make the package manager support multiple repositories. Presumably most overlays wouldn’t want to have to reimplement all the profile and eclass things in the Gentoo repository, so the model would look like this:

[Diagram: Safer Overlay Model]

Here, repositories, rather than the user, have control over which implementation of eclasses and so on gets used. Paludis uses this model for Gentoo overlays unless told not to.
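For comparison, here’s roughly what this looks like in Paludis’ configuration — each repository is its own file, and names its master explicitly. This is a sketch from memory; exact key names may vary by version, and the paths are invented:

# /etc/paludis/repositories/myoverlay.conf
location = /var/paludis/repositories/myoverlay
format = ebuild
# eclasses, profiles and so on resolve against this repository
master_repository = gentoo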

Sidebar: one might want to go a step further, and allow repositories to use multiple masters. Some Exherbo supplemental repositories do this — the gnome supplemental repository, for example, makes use of both arbor (the ‘main’ repository) and x11:

[Diagram: Exherbo Repository Model]

Note that we chose not to make a repository use its master’s masters. We could’ve gone either way on this one — it’s slightly easier if masters are inherited, but it can lead to unnecessary inter-repository dependencies.

Unstable Portage, meanwhile, is starting to support controlled masters for eclass merging, but not version handling, which will eventually give:

[Diagram: New Portage Overlay Model]

A multiple repository model is clearly safer than the Portage model, and does away with the manual merges required by the Funtoo model. This gives us:

Model                Multiple Repositories?   Manual Merges?   Unsafe Automatic Merges?
Portage (Stable)     No                       No               Yes
Portage (Unstable)   No                       No               Sometimes
Funtoo               No                       Yes              No
Safe                 Yes                      No               No

I consider the multiple repository model to be better for users even ignoring the merge or conflict issues. Here’s why:

  • Users can make selective, rather than all or nothing, use of a repository. It becomes possible to mask the foo-1.2 provided by the dodgy overlay, and use the one in the main tree or a different overlay.
  • Similarly, users can choose not to use anything from a particular overlay except things they explicitly request.
  • It paves the way for handling repositories of different formats.

There aren’t any downsides, either — so long as repositories have user-orderable importance, there’s no loss of functionality.

Finally, I’d like to debunk the myth that the Git model is somehow ‘democratic’. There’s nothing in the least bit democratic about everyone having their own repository. At best, it could be said to be a way of allowing everyone to have their own dictatorship that anyone else can be free to visit — all very well, but when tin pot dictators fall back on old habits it does little to encourage collaboration. A democratic distribution would more likely make use of a special repository which lets people vote on unwritten packages and version bumps — clearly a recipe for disaster, since most people think “I haven’t noticed any bugs” means “stable it instantly”…

The only thing switching Gentoo to Git will solve is the pain of having to use CVS. This alone is enough to make the move worthwhile, but it will do little to nothing to fix Gentoo’s monolithic design and inherently centralised model. Nor does Funtoo’s merge approach solve the problem — on the contrary, it replaces a model where the package manager automatically does unnecessary merging (and sometimes gets things wrong) with a model where people do unnecessary merging (which is a lot of work, and they will still sometimes get things wrong). The future is (or at least should be) in a multi-repository model with good support from the package manager that removes the costs of decentralisation.


April 16, 2009

Dirk R. Gently

Desktop… Phht


I don’t post screenshots usually because they just don’t get my attention. If I’m able to get things done then it doesn’t matter if I’m with AIG or on Gilligan’s Island. On my desktop, I don’t have fancy spinning cubes, fire-drawing cursors, or wallpapers that leave a negative image floating on the back of my retina. What I do got is a desktop that would hopefully make Bender’s God happy :)


April 16, 2009 :: WI, USA  

Dan Ballard

Winning BattleCode (excluding MIT)

I've been quiet on the blogging front lately. I don't know exactly why; could be school has kept me busy, or who knows.

Anyways, I thought I'd pop in and mention something I should have mentioned back in January when it started, which is that two friends and I entered MIT's BattleCode competition. It's an AI competition that a class at MIT was running, but it was also open to public participation. Basically you are writing AI to run inside robots on a battlefield. You and your opponent start with a few robots, and they have to coordinate and do things like build more units, mine, and attack the enemy. The AI executes inside each robot, so there is no overall "player" of the game, just lots of instances of your code, hopefully working together. It was a fun, neat challenge.

It also reminded me of how much I'm not a fan of Java — and don't think I didn't make a list, one that I might publish if I get unlazy at some future point. Anyways, we worked on it for a while and then the open tournament was run. And we got the results back this week.

Now, the MIT teams had class time and a whole class of people to work with and bounce ideas off of, so, sadly but unsurprisingly, they dominated.

But there was a second ranking, this time of Non MIT teams only, and now for the real surprise: we won!

battlecode.mit.edu/2009/info/glory

Our team was called "Bad Meme" and we were representing UBC, and you can see it all there on the results page. Of all the non-MIT teams, we were the best. It's really kind of surprising and awesome, especially when you consider that anyone anywhere could enter, and it appears that there were teams from places like Stanford and Harvard — and we beat them. So that's kind of a buzz.

And so that's a big chunk of what I've been doing in the past month: programming battle AI. That and school. But now, with the competition over and school drawing to a close, it's time to look for some new projects. I have a few ideas already; hopefully I'll get around to mentioning them before they are over this time, but time will tell.

April 16, 2009 :: British Columbia, Canada  

April 14, 2009

Jürgen Geuter

Things shouldn't always be wiped

Wiping is good sometimes. After spilling your drink for example. When selling your computer wiping your hard disk is really important to make sure your personal data stays personal. If you maintain a public internet terminal you want to wipe it after every use. But often we tend to wipe too much or see it as a quick solution even when it's not.

As some might know, I work in a school from time to time. The computers in the "computer room" (a room with many computers so a whole class can work at the same time) are wiped with every reboot. This is a common practice for a very simple reason: if you allow people to change things, it will irritate other people. The readers of this blog are probably all very skilled in using their computers, but many people with less knowledge are seriously irritated as soon as you change their desktop background. Wiping can solve the issue: you set the machine up as it is supposed to be, and that state will persist. Win? Not always!

When people change things on the computer they use, it might just be curiosity ("What happens if I change this setting?"), but after a while people start changing things because the given state annoys them; they feel limited by the system and the way it is set up. It might just be a small thing: you want a certain program launcher to be at a certain point on your screen, or you want a certain program to stop autostarting. But when you wipe the system, you lose that flexibility.

Especially if you are dealing with Windows boxes it has become sort of common knowledge that wiping the hard disk is a good approach to keeping the system untampered with and stable: After all you do want your users to have a stable system they can rely on. But I think that more often than we might think wiping is the wrong approach.

If we stop allowing people to change the way the computer interacts with them, we are basically holding them back. You can only work as well with a computer as the mental model the given setup represents matches yours. Damn, that was a long sentence, let's milk it a little:

Every computer and every desktop environment and window manager — every piece of software, basically — represents a certain way of thinking, a certain mental model of how things are. Take for example a file manager: in Nautilus (GNOME's file manager) you can enable "spatial mode", which means that every folder is opened in a new window and every folder can be open only once. The way most people use Nautilus (maybe because they are used to working like that from past experience) is the "browser mode", where double-clicking a folder opens it in the current window and where you can have any folder open as many times as you want. "Spatial mode" is conceptually better for some, and you might even be able to present a million studies showing how much better it is, but if your mental model of how files and file management work doesn't match the spatial paradigm, you will not be able to use the system properly, will be annoyed, and will perceive the file manager as broken — which it isn't: it just doesn't fit how you think.

With that being said, I think my objection to relying on wiping computer systems becomes clear: wiping systems makes sense in privacy-related contexts, but in general it's not the right technique to ensure a stable working environment for regular users.

The idea of having computer systems around that work without the user identifying him- or herself is an anachronism, back from when people used Windows 95 and thought that that was it. We tend to see entering a username and password purely as a security measure, when in fact it's also a way to customize the system to your personal needs.

Using a system where you cannot change settings is a huge pain in the ass. Not because you can't install software (it often makes sense to restrict that to a certain degree), but because you end up with a system that doesn't perform as it should.

Wiping is like the dark side of the Force: it's the quick solution, its simplicity is charming, but in the long run you don't serve your users well. Users are individuals, everyone has slightly different needs and preferences (insert random sexual joke here), and we have gotten way too used to ignoring the huge benefit that users can gain from customization.

April 14, 2009 :: Germany  

George Kargiotakis

command exit status on zsh using RPROMPT

I’ve just updated my .zshrc so that I can get the exit status of commands on a “right prompt” using zsh’s RPROMPT variable. The exit status appears only if the value is non-zero.

Example usage:

[Screenshot: zsh right prompt showing exit codes]
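The gist of the trick fits in a one-liner; a minimal sketch for your own .zshrc (not necessarily his exact prompt):

# %? expands to the last command's exit status; the %(?.true.false)
# conditional prints its second branch only when that status is non-zero
RPROMPT='%(?..[%?])'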

You can find my zshrc and more dot files that I use in my Pages/My dot files

April 14, 2009 :: Greece  

April 13, 2009

Jason Jones

Gphoto2 / Gnome Problem

Ya know...  Sometimes I hate open source.  Most times I love it, but sometimes I hate it.

I recently updated my Gentoo GNOME installation to 2.24.2, and didn't think much of it.  I then tried to connect my Nikon S210 digital camera to my USB port and perform the simple task of importing photos.  I have done this probably 100 times successfully.

This time, however, a new window popped up telling me, "Oh!  We found a camera on your system!  Do you want me to act like windows and try to do everything for you?"  Well...  Okay..  That was a bit harsh, but it was slightly annoying.

Anyway...  Here's the screenshot of what popped up:

[Screenshot: GNOME's camera auto-import prompt]

So, thinking to myself, "Nope...  I know what I want to do with the photos," I clicked "Cancel".

I then tried to start up digiKam, and it didn't say anything; it just wouldn't import, or show, anything.  I tried flphoto with the same results.  So, I'm thinking, "Great...  I've gotta go through gphoto2's fabulous CLI to figure out what the heck is up."  Not exciting.  But then I found gtkam, and that saved me a boat-load of time.

gtkam basically gave me the finger, too, but it told me a bit more than nothing.  It said "Could not initialize camera".  I could successfully detect it with no problem at all, but immediately after, it flashed the error message.

Try as I might, I couldn't do anything about it.  I tried emerging from the unstable tree, and then re-configuring the USE flags.  Uhhh, yeah.  Nothing.

So, I went to my other computer and downloaded them there.  No problems at all.  In fact, I was using the same version of gphoto2 on my 2nd computer as I was with the one having the problems!

Yeah... Not happy times for me.

Anyway..  To make a long story short,  I came down today and saw my camera sitting there on the floor and tried to have another go at it, because I just can't leave broken alone.

This time, I noticed the "Unmount" button on one of the two boxes that pop up.  So, I clicked "Unmount" and then loaded up gtkam.

Yup...  It detected the camera and loaded up the images just fine.  ARGH!

So, yeah...  Everything is good again.

I just wish little gotchas like that would be thought-through before they're pushed live.

Why render a hugely popular program such as gphoto2 useless by auto-mounting the friggin' camera as soon as it's plugged in???

Not cool.

Not cool at all.

So, anyway...  Yeah...  To fix this problem, simply do the following:

JUST CLICK THE UNMOUNT BUTTON IN GNOME'S AUTO POP-UP BOX AS SOON AS YOU PLUG IN YOUR CAMERA



That should do the trick quite nicely.
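Alternatively, if you'd rather keep the GUI out of it entirely, gphoto2's own command-line client can handle the import once the camera is unmounted.  Something like this should work:

# check that gphoto2 can see the camera
gphoto2 --auto-detect
# download every file from the camera into the current directory
gphoto2 --get-all-files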

April 13, 2009 :: Utah, USA  

Matija Šuklje

Literally cut- and pasting law ;)

It's Monday, I just cleaned my shoes and my sport sandals, and I decided to update my Civil Procedure Act.

Because I'm a nice person and don't want to kill too many trees, I decided to manually update my paperback edition of the Civil Procedure Act (consolidated text 2) by crossing out what's outdated and writing the new text between the lines wherever I can.

All fair and good, but the problem's that the C and D amendments (which I'm lacking) are between them around 150 articles long, with some of them being completely rewritten articles and others new ones that need to fit in somehow.

...meeeeaaaaaaniiiiing that I have to print out the longer sections and physically cut them out and glue them to my book. Oldskool! XD

Later on, polishing my shoes with oldskool shoe wax...

On a side note, I've been living quite happily with Magnatune, Jamendo and Last.fm for the past few days. I don't really feel any worse for losing my music collection anymore. Backup-wise, SpiderOak is really looking great, so I'll try to write an ebuild for their client.

hook out >> drinking recycled Yorkshire Gold (from Taylors of Harrogate) and getting messy with the scissors and glue ...oldskool style!

April 13, 2009 :: Slovenia  

Brian Carper

A Sad, Dark Day

Today was a terrible day. I found myself subconsciously trying to use Emacs keystrokes in Vim. I feel dirty. I took a bath but it won't come clean. :(

It just goes to show that you can get used to anything if you do it often enough. Emacs still drives me up the wall but maybe I've achieved a critical mass of enough custom keybindings to let me tolerate it.

Aside from paredit, which has no equal even in Vim, Emacs does have some vaguely non-sucky features. hi-lock is pretty nice (Vim has an equivalent of course). Once I learned a few of the shortcuts for git-emacs I actually found myself using Git much more effectively. Having to drop into a shell to type Git commands is just enough of a disruption to prevent me from doing it often enough. I never got the hang of any version control library in Vim.

I'm almost even getting used to the Emacs buffer model. I find myself C-x bing and flipping back and forth between buffers by name, rather than my Vim practice of opening buffers in certain carefully-placed windows and leaving them there.

On the subject of typing, I broke down finally and ordered a Unicomp Customizer 104 keyboard. I've heard too many hackers say that the old IBM clicky keyboards are good for typing. It should arrive Tuesday, and I'm a lot more excited than anyone should be over a keyboard.

Expect a keyboard review. Try to contain your excitement until then. I know it'll be hard.

April 13, 2009 :: Pennsylvania, USA  

KDE4 Konsole Kolor Skheme Kdownload

I put a color scheme for KDE4's Konsole up for download. From a cursory glance I think KDE3 and KDE4 color schemes are the same format, but I haven't tried it.
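If you grab it, KDE4's Konsole should pick up user color schemes from its app-data directory. Roughly — the prefix varies by setup (~/.kde or ~/.kde4), and the file name here is hypothetical:

# drop the downloaded scheme where Konsole looks for user schemes
cp customizer.colorscheme ~/.kde/share/apps/konsole/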

Also I know I'm not the first to say it, but all of the K's in KDE program names are a bit annoying after a while, aren't they?

April 13, 2009 :: Pennsylvania, USA  

Blog and CRUD

I updated my blog source code on github. I also split my CRUD library out into its own clj-crud repo. It is cruddy, so the name is apt.

This code still isn't polished enough for someone to drop it on a server and fire it up, but maybe it'll give someone some ideas. I think the new code is cleaner and it'll be easier for me to add features now.

Beware bugs, I'm positive I introduced some.

EDIT: A word about the CRUD library... persisting data to disk is hard when the data may be mutated by many threads at once and the destination for your data is an SQL database that may or may not even be running. I have more respect now for people who've written libraries that actually do this kind of thing and work right. Granted, I only spent 3 days on mine, but still, it's tricky.

I gave up for a while and tried clj-record, but it was prohibitively slow. It has the old N+1 queries problem when trying to select an object which has N sub-objects. In real life you'd write SQL joins to avoid such things. Ruby on Rails on the other hand gets around this via some nasty find syntax.

I get around it by having all my data in a Clojure ref in RAM already so it doesn't matter. And by using hooks so each object keeps a list of its sub-objects and the list is always up-to-date (updates of sub-objects propagate to their parents). But the crap I have to do to get this to just barely work is pretty painful.

April 13, 2009 :: Pennsylvania, USA  

April 10, 2009

Matija Šuklje

External disk dead, backups gone ...music too

The day before yesterday it happened...
The unthinkable...
The unbearable...
My backup disk died!! :(

Of course the warranty for my Western Digital MyBook (it's actually a Caviar 2500 inside) expired half a year ago — exactly on the 8th of August 2008, the same day that the XXIX Summer Olympic Games in Beijing started.

The problem is most probably a faulty controller, so if anyone out there has a Western Digital Caviar SE WD2500JB with intact electronics that (s)he's willing to give me, it'd make me very happy!!

What makes it even odder is that the backup disk died while the half-year-older Fujitsu HDD that I back up onto that WD MyBook is still alive *knocks on wood*

Needless to say, Murphy strikes with perfect timing, when I was just deciding which backup tool to use next (and trying to write an ebuild for it)! Initially I used KDar, then KBackup, and later switched to the RDiff-based Keep. I grew quite enthusiastic about RSync'ing backups, but had to choose an application that would not tie me to the old KDE3 libraries (none of the above have a KDE4 port yet).

My current contenders are (or rather were before my backup disk died):

...there's even a thread on the KDE forums that talks about all three.

But this incident made me realise that your backups are only as strong as the medium you use. So I'm actually considering an online backup service. I've just started looking, but so far SpiderOak looks pretty good. Especially their security and privacy look right (and the fact that they support FOSS). I'm still new to this idea, but I feel kind of vulnerable without backups, so I'll be looking into it a bit more.

But it's not all about backups — because my laptop is only 60 GiB small, I have (had) all my music on the external disk. You'd expect me to be mad as a bat right now because of that, but now I'm getting my fix directly from Jamendo, Magnatune, Last.fm and (other) streaming stations like ShoutCast and Soma.fm (and good ol' FM radio on my iRiver). I can barely wait for Amarok2 to be usable on AMD64 in Gentoo to make better use of such services! :D

hook out >> sitting on the balcony, watching the sun set, blogging and sipping Taylors of Harrogate Mango (black) tea

April 10, 2009 :: Slovenia  

Dirk R. Gently

Mplayer with DVDs


There are plenty of movie players for Linux, but my all-time favorite is Mplayer. Not only is Mplayer quick and responsive, but it can play almost anything. I’d used Mplayer before, but I realized that my movies weren’t playing just as I wanted them to — no menu support, and picture quality wasn’t what I expected. If you’d like to play DVDs with Mplayer, here’s a guide that can show you how to get a good, functional DVD player.

Calibrating Display

Presentation is a large part of a good movie experience. Movie companies and movie theaters put a good deal of consideration into how a movie looks and sounds; THX, for example, became a standard in the movie industry for defining exactly that. How your display looks will therefore also affect the quality of any movie you play with Mplayer. There are a couple of things you can do to get good picture quality on your monitor, but first a quick bit on colorschemes.

Windows and Mac OS both have built-in colorschemes (also known as ICC profiles). Colorschemes define such things for the display as color balance and gamma. Linux by default does not have any colorschemes defined. Often new users will report that their display, when first installed, looks “too bright”. There is no way to define a colorscheme in Linux, but most of this “too bright” reporting is because of gamma, and there is something you can do about that.

A good way to discover the proper gamma for Linux is to use a program called Monica. While calibrating with Monica you’ll notice the whole display change. Ignore this and just be sure your red, green, and blue gammas are set OK. When this is done, Monica will display an option to have itself load at desktop startup. This can be done, but it’s better to have the X server know the settings directly, because if you play games (for instance) your gamma will be reset. The X server can be made aware of the gamma in the “/etc/X11/xorg.conf” file. For example:

Section "Monitor"
    Identifier     "Monitor0"
    Gamma           0.86 0.85 0.87
EndSection

Gamma values are in RGB order. Restart the X server to have the gamma values permanently applied.
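If you want to experiment before committing values to the config file, the xgamma utility that ships with X can set them on a running server (using the same values as the example above):

# temporary — lost when X restarts; flags are in the same RGB order
xgamma -rgamma 0.86 -ggamma 0.85 -bgamma 0.87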

Selecting Video and Audio Output Devices

Mplayer defaults will work on just about any media. If you want to test Mplayer, try:

mplayer dvd://1

Track 1 almost always has something on it, and you should get a good idea of how Mplayer plays with the default settings. The first thing you should do is decide which video output driver to use. Most people tend to use xv; this is the XVideo extension and has hardware-accelerated playback. I, however, use the OpenGL driver because it gives me slightly better performance. For example:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs dvd://1
mplayer -vo xv -dr -framedrop -fs -cache 8192 dvd://1

For OpenGL you’ll have to use a proper yuv setting; look in “man mplayer” for all the options. Add the ‘-dr’ option to make sure direct rendering gets used, and add ‘-framedrop’ because if a CPU-intensive task starts in the background, audio and video will get out of sync. Using ‘-fs’ will start Mplayer in full-screen mode.

For xv make sure to use the ‘-cache’ option as xv video doesn’t play well without it.

For audio, I just use Mplayer’s defaults. I’ve tried setting ‘-ao alsa’, but occasionally I get skips with that and find the default (usually aoss) works better.

Filters

One of the things you’ll notice at this point is that there is a little noise in the picture. This is common because TVs have built-in noise-reduction filters. You’ll also notice, if you are playing a DVD-recorded TV show, that the picture appears “lined” (interlacing). TVs produce pictures by displaying alternate lines, so a process called deinterlacing is used to produce a combined image. To add deinterlacing and a noise filter, try this:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d dvd://1

Yadif is a good deinterlacer, and hqdn3d will help to smooth the picture. I find that hqdn3d produces a bit too blurred an image, so I’ve toned it down to:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d=3:2.8:1:3 dvd://1

For movies that aren’t interlaced, Mplayer won’t use the yadif filter.

Aspect-Ratio

Mplayer may choose to alter the aspect ratio, which will result in a distorted picture. I think there is some legacy code in Mplayer that tries to scale based on screen size. Add ‘-noaspect’ to prevent this from happening:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d=3:2.8:1:3 -noaspect dvd://1

Contrast, Brightness, and Saturation

Even on a properly calibrated monitor the picture isn’t going to look quite right, because movies use a different colorspace, one designed for proper display on a television. While not perfect, this too can be corrected to a good degree with brightness, contrast, and saturation values.

If you’re using the gl driver, you’ll be able to adjust contrast, brightness, hue, and saturation with 1 and 2, 3 and 4, 5 and 6, and 7 and 8, respectively. To add the values to the command line:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d=3:2.8:1:3 -noaspect \
-contrast 14 -brightness 8 -saturation -9 dvd://1

If you’re using the xv driver, you can use the software equalizer to enable the ability to adjust these values:

mplayer -vo xv -dr -framedrop -fs -cache 8192 \
-vf yadif=3,hqdn3d=3:2.8:1:3,eq2 -noaspect -contrast 14 \
-brightness 8 -saturation -9 dvd://1

mplayer -vo xv -dr -framedrop -fs -cache 8192 \
-vf yadif=3,hqdn3d=3:2.8:1:3,eq2=1:1.14:0.08:0.91 -noaspect \
-contrast 14 -brightness 8 -saturation -9 dvd://1

DVD Menus

New versions of Mplayer (as of this writing, mplayer-28347-4) now include support for DVD menus. Mplayer will have to be compiled with “--enable-dvdnav” for DVD menus to work. From the command line, tell Mplayer to use DVD menus:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d=3:2.8:1:3 -noaspect \
-contrast 14 -brightness 8 -saturation -9 dvdnav://

You can also add support for being able to choose DVD menu items with the mouse:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d=3:2.8:1:3 -noaspect \
-contrast 14 -brightness 8 -saturation -9 \
-mouse-movements dvdnav://

If using Mplayer with DVD menu support, make sure you do not have caching on, or Mplayer won’t work properly.
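Rather than retyping these long command lines, most of the options can live in Mplayer’s per-user config file. A sketch of roughly equivalent settings — flag options take yes/no values here, and it’s worth double-checking the option spellings against your Mplayer version:

# ~/.mplayer/config
vo=gl:yuv=2:force-pbo
dr=yes
framedrop=yes
fs=yes
vf=yadif=3,hqdn3d=3:2.8:1:3
noaspect=yes
contrast=14
brightness=8
saturation=-9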

That’s it! You should now have a great DVD player for your Linux box.

Extranei

Sometimes selections in DVD menus don’t get recognized. I found that pressing 5 will bring them up again.

Mplayer uses keyboard presses for input. A basic reference of commonly used keys:

  • F - Fullscreen toggle
  • Q - Quit
  • P - Pause
  • ← - Backward 10 seconds
  • → - Forward 10 seconds
  • ↑ - Forward 1 minute
  • ↓ - Backward 1 minute
  • Pgup - Forward 10 minutes
  • Pgdown - Backward 10 minutes
  • !/@ - Backward/Forward Chapters
  • Arrow Keys or Numpad Arrow Keys - DVD navigation

Because DVD navigation binds to the arrow keys, they cannot be used to skip while using DVD navigation.

Users of newer Nvidia cards might want to look at Mplayer’s support for VDPAU (PureVideo technology).

Lastly, thanks to electro for his hqdn3d values.

April 10, 2009 :: WI, USA  

April 9, 2009

Daniel de Oliveira

Zine


Hi all (after a long time).

Actually I’m trying to start a zine with the help of some friends — something like Full Circle, but with more general Linux content and a lot more Gentoo-related stuff.

I’ll try to cover server and desktop topics — special configurations, tuning and so on.

If anyone reading this is able to help or wants to contribute, feel free to drop me a message and I’ll give feedback ASAP.

Thanks all

April 9, 2009 :: São Paulo, Brazil  

April 8, 2009

Brian Carper

Lisp Syntax Doesn't Suck

I spend a lot of time talking about what I don't like about various languages, but I never talk about what I do like. And I do like a lot, or I wouldn't spend so much time programming and talking about programming.

So here goes. I like the syntax of Lisp. I like the prefix notation and the parentheses.

Common Complaints

A common criticism of Lisp from non-Lispers is that the syntax is ugly and weird. The parentheses are impossible to keep balanced. It ends up looking like "oatmeal with fingernail clippings mixed in".

Also, prefix notation is horrible. 1 + 2 is far superior to (+ 1 2). Infix notation is how everyone learns things and how all the other languages do it. Countless people (example) have proposed to "fix" this, to give Lisp some kind of infix notation. The topic inevitably comes up on Lisp mailing lists and forums.

Partly this is subjective opinion and can't be argued with. I can't say that Lispy parens shouldn't be ugly for people, any more than I can say that someone is wrong to think that peanut butter is gross even though I like the taste of it. But in another sense, does it matter that it's painful? Does it need to be changed? Should the weird syntax stop you from learning Lisp?

Prefix Notation: Not Intuitive?

There is no "intuitive" when it comes to programming. There's only what we're used to and what we aren't.

What does = mean in a programming language? Most people from a C-ish background will immediately say assignment. x = 1 means "give the variable/memory location called X the value 1".

For non-programmers, = is actually an equality test or a statement of truth. 2 + 2 = 4; this is either a true or false statement. There is no "assignment". The notion of assignment statements is an odd bit of programming-specific jargon. In most programming languages we've learned instead that == is an equality test. Of course some have := for assignment and = for equality tests. But = and == seems to be more common. Some languages even have ===. Or x.equals(y). Even less like what we're used to. (Don't get started on such magic as +=.)

Most of us have no problem with these, after a while. But few of us were programmers before we learned basic math. How many of us remember the point in time when we had to re-adjust our thinking that = means something other than what we've always learned it to mean? I actually do remember learning this, over a decade ago. This kind of un-learning is painful and confusing, there's no question.

But it's also necessary, because these kinds of conventions are arbitrary and vary between fields of study (and between programming languages). And there are only so many symbols and words available to use, so we re-use them. None of the meanings for = is "right" or more "intuitive" than the other. = has no inherent meaning. It means whatever we want it to mean. Programming is chock-full of things like this that makes no sense until you memorize the meaning of them.

Consider a recent article that got a lot of discussion, about why all programmers should program in English. How much less intuitive can you get, for a native speaker of another language to program using words in English? Yet they manage. (Have you ever learned to read sheet music? Most of the terms are in Italian. I don't speak a word of Italian, yet I managed.)

The point is that it's very painful to un-learn things that seem intuitive, and to re-adjust your thinking, but it's also very possible. We've all done it before to get to where we are. We can all do it again if we need to.

Prefix notation is unfamiliar and painful for many people. When I first started learning Lisp, the prefix notation was awfully hard to read without effort, even harder to write. I would constantly trip up. This is a real distraction when you're trying to write code and need to concentrate. But it only took me maybe a week of constant use to ingrain prefix notation to the point where it didn't look completely alien any longer.

At this point prefix notation reads to me as easily as infix notation. I breeze right through Lisp code without a pause. In Clojure, you can write calls to Java methods in Java order like (. object method arg arg arg) or you can use a Lispy order like (.method object arg arg arg); I find myself invariably using the Lispy way, as does most of the community, even though the more traditional notation is available.

You can get used to it if you put in a minimal amount of effort. It's not that hard.

Benefits of Prefix Notation

Why bother using prefix notation if infix and prefix are equally good (or bad)? For one thing, prefix notation lets you have variable-length parameter lists for things that are binary operations in other languages. In an infix language you must say 1 + 2 + 3 + 4 + 5. In a prefix language you can get away with (+ 1 2 3 4 5). This is a good thing; it's more concise and it makes sense.

Most languages stop at offering binary operators because that's as good as you get when you have infix operators. There's a ternary operator x?y:z but it's an exception. In Lisp it's rare to find a function artificially limited to two arguments. Functions tend to take as many arguments as you want to throw at them (if it makes sense for that function).

Prefix notation is consistent. It's always (function arg arg arg). The function comes first, everything else is an argument. Other languages are not consistent. Which is it, foo(bar, baz), or bar.foo(baz)? There are even oddities in some languages where to overload a + operator, you write the function definition prefix, operator+(obj1, obj2), but to call that same function you do it infix, obj1 + obj2.

The consistency of Lisp's prefix notation opens up new possibilities for Lispy languages (at least, Lisp-1 languages). If the language knows the first thing in a list is a function, you can put any odd thing you want in there and the compiler will know to call it as a function. A lambda expression (anonymous function)? Sure. A variable whose value is a function? Why not? And if you put a variable whose value is a function in some place other than at the start of a list, the language knows you mean to pass that function as an argument, not call it. Other languages are far more rigid, and must resort to special cases (like Ruby's rather ugly block-passing syntax, or explicit .call or .send).

Consistency is good. It's one less thing you have to think about, it's one less thing the compiler has to deal with. Consistent things can be understood and abstracted away more easily than special cases. The syntax of most languages largely consists of special cases.

Parens: Use Your Editor

The second major supposed problem with Lisp syntax is the parens. How do you keep those things balanced? How do you read that mess?

Programming languages are partly for human beings and partly for computers. Programming in binary machine code would be impossible to read for a human. Programming in English prose would be impossible to parse and turn into a program for a computer. So we meet the computer halfway. The only question is where to draw the line.

The line is usually closer to the computer than to the human, for any sufficiently powerful language. There are very few programming languages where we don't have to manually line things up or match delimiters or carefully keep track of punctuation (or syntactic whitespace, or equivalent).

For example, any language with strings already makes you pay careful attention to quotation marks. And if you embed a quotation mark in a quote-delimited string, you have to worry about escaping. And yet we manage. In fact I think that shell-escaping strings is a much hairier problem than balancing a lot of parens, but we still manage.

This is sadly a problem we must deal with as programmers trying to talk to computers. And we deal with it partly by having tools to help us. Modern text editors do parenthesis matching for you. If you put the cursor on a paren, it highlights the match. In Vim you can bounce on the % key to jump the cursor between matching parens. Many editors go one step further and insert the closing paren whenever you insert an opening one. Emacs of course goes one step further still and gives you ParEdit. Some editors will even color your parens like a rainbow, if that floats your boat. Keeping parens matched isn't so hard when you have a good editor.

And Lisp isn't all about the parens. There are also generally-accepted rules about indentation. No one writes this:

(defn foo [x y] (if (= (+ x 5) y) (f1 (+ 3 x)) (f2 y)))

That is hard to read, sure. Instead we write this:

(defn foo [x y]
  (if (= (+ x 5) y)
    (f1 (+ 3 x))
    (f2 y)))

This is no more difficult to scan visually than any other language, once you're used to seeing it. And all good text editors will indent your code strangely if you forget to close a paren. It will be immediately obvious.

A common sentiment in various Lisp communities is that Lispers don't even see the parens; they only see the indentation. I wouldn't go that far, but I would say that the indentation makes Lisp code easily bearable. As bearable as a bunch of gibberish words and punctuation characters can ever be for a human mind.

When I was first learning Lisp I did have some pain with the parens. For about a week. After learning the features of Vim and Emacs that help with paren-matching, that pain went away. Today I find it easier to work with and manipulate paren-laden code than I do to work with other languages.

Benefits of the Parens

Why bother with all the parens if there's no benefit? One benefit is lack of precedence rules. Lisp syntax has no "order of operations". Quick, what does 1 + 2 * 3 / 4 - 5 mean? Not so hard, but it takes you a second or two of thinking. In Lisp there is no question: (- (+ 1 (/ (* 2 3) 4)) 5). It's always explicit. (It'd look better properly indented.)

This is one less little thing you need to keep in short-term memory. One less source of subtle errors. One less thing to memorize and pay attention to. In languages with precedence rules, you usually end up liberally splattering parens all over your code anyways, to disambiguate it. Lisp just makes you do it consistently.

As I hinted, code with lots of parens is easy for an editor to understand. This makes it easier to manipulate, which makes it faster to write and edit. Editors can take advantage, and give you powerful commands to play with your paren-delimited code.

In Vim you can do a ya( to copy an s-exp. Vim will properly match the parens of course, skipping nested ones. Similarly in Emacs you can do C-M-k to kill an s-exp. How do you copy one "expression" in Ruby? An expression may be one line, or five lines, or fifty lines, or half a line if you separate two statements with a semi-colon. How do you select a code block? It might be delimited by do/end, or curly braces, or def/end, or who knows. There are plugins like matchit and huge syntax-parsing scripts to help editors understand Ruby code and do these things, but it's not as clean as Lisp code. Not as easy to implement and not as fool-proof that it'll work in all corner cases.

ParEdit in Emacs gives you other commands, to split s-exps, to join them together, to move the cursor between them easily, to wrap and unwrap expressions in new parens. This is all you need to manipulate any part of Lisp code. It opens up possibilities that are difficult or impossible to do correctly in a language with less regular syntax.

Of course this consistency is also partly why Lisps can have such nice macro systems to make programmatic code-generation so easy. It's far easier to construct Lisp code as a bunch of nested lists, than to concatenate together strings in a proper way for your non-Lisp language of choice to parse.

Conclusion

Yeah, Lisp syntax isn't intuitive. But nothing really is. You can get used to it. It's not that hard. It has benefits.

Sometimes it's worth learning things that aren't intuitive. You limit yourself and miss out on some good things if you stick with what you already know, or what feels safe and sound.

April 8, 2009 :: Pennsylvania, USA  

Nikos Roussos

alphabet linux

for the past three years i've been working in greek elementary schools, and very recently i started building my own linux distribution for the school lab. so i thought why not share it with the rest of the world ;)

the distribution goal is to cover the first two levels of greek education system. greek school labs are famous for their very old hardware, so this distribution is based on gentoo (with xfce as window manager) in order to be lightweight.

i won't explain (at least not in this post) why i think that free (as in speech) software is the only way to go when it comes to education. the purpose of this post is just to point to the web site of the distribution:
alphabet linux

PS. many thanks to kargig. his experience from iloog development helped me a lot.


April 8, 2009 :: Athens, Greece

April 7, 2009

Steven Oliver

Entity Management


I was looking through the internets the other day and it occurred to me that there is no open source software out there devoted to this. What is entity management? Well, it’s simply keeping track of what you own, what you lease out, what you rent, and what you sell. Power companies often have to lease land because they don’t necessarily own the land the power poles are on. Obviously gas companies are in a very similar situation. Even companies you don’t expect to need such software might. Large banks, for example, might lease the land the bank sits on. I know a local car wash doesn’t actually own the lot the wash is on, just the car wash itself. Why doesn’t this software exist? My guess is simply because it’s boring. Who would want to write it, and why? It’s like writing medical records software or something. How boring is that?

But in light of this, I’ve decided to give it a go. Why not? Screw it, I can code. I can write software as crappy as anyone else on the internets. In fact, I’ve already come up with a basic database layout using MySQL. To be quite honest, though, I’m not a fan of MySQL thus far, and I might find myself quickly switching to Postgres. I think the SQL I’ve written thus far will probably work easily in either; it’s not exactly complicated stuff at this point.

I haven’t published any code or even given this potential project a name yet, but I might later. What is it they say? “Release early, release often.”

Enjoy Penguins!

April 7, 2009 :: West Virginia, USA  

Coding in Open Source


Do you ever want to contribute to a project, or even start your own? Obviously you do. Why else would you be reading a blog devoted to Linux? Given that, do you ever find yourself with absolutely zero passion left because the task is so daunting, or because the program you would like to contribute to has tens of thousands of lines of code? Yeah… that’s totally me on a regular basis. Can I code? Yeah. I can make programs do all kinds of neat things. Do I really want to spend weeks figuring out your code? No. Do I want to spend weeks just writing back-end “boiler” code to start my own project? No. Sort of makes you hate programming, doesn’t it?

April 7, 2009 :: West Virginia, USA  

N. Dan Smith

A Free Software Thesis

Last year I set out to produce my master’s thesis using only free software. Having turned in my final copy today, I can report a qualified success.

Despite some early interest in using LyX (maybe someday in another life), I ended up going with a standard word processor in the form of OpenOffice (and its cousin NeoOffice). The downside of doing so is that I had to deal directly with formatting issues. Thankfully OpenOffice has some versatile formatting styles which allowed me to satisfy the crazy formatting requirements (seriously — can I have a typesetting degree too?).

As for the operating system, I was split between Gentoo Linux (free software) and Mac OS X (decidedly un-free software), where I did the majority of the actual typing. This is where the qualified success comes in. It has nothing to do with any deficiency of Gentoo or OpenOffice. Rather, I only had one machine available, and it had to be running Mac OS X for another reason, so it was just a matter of convenience. As it turned out, some font rendering problems in NeoOffice brought me back to Gentoo, which is the platform upon which I produced the final form of my thesis. It all worked out in the end.

So yes, it is possible to craft a big, important paper using free software tools.

April 7, 2009 :: Oregon, USA  

April 6, 2009

Jason Jones

Disney DRM, Ripping DVDs

Lately, I've been viewing a few Disney flicks on DVD.  I got Bolt and Bedtime Stories.  Because I usually rent them on redbox and can only have them for a day, I will rip them, and then when I get around to it, I'll watch 'em and delete them.  No problems with that, as far as I can see.  I rent them to view them once, and that's what I do.

Well, lately, Disney DVDs have been tougher to rip.  The table of contents listed by dvd::rip had me confused for a bit.  Take a look at the screenshot below:



You'll notice that titles 8 through 19 (actually through 42, offscreen) all seem to be full-length movies.  If I try to play any of them in mplayer, it fails and doesn't play anything.  VLC works just fine if you play the disc from the menus, but what if I want to rip just the movie, with no menus?

Well, VLC shows you which title is actually playing: while viewing the movie, I right-click to check the title number, then use VLC (version 0.8.6i; the new 0.9.8a doesn't seem to rip anything successfully) to rip that title.

Hope that doesn't confuse everyone.  I just wanted to blog about how to get it done.  So far, VLC version 0.8.6i is the version I use to rip Disney movies.  Everything else either can't rip them, or flat-out can't play them.
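
If you'd rather not hunt through VLC menus, a quick check from the shell helps too. This is just a sketch, assuming the lsdvd tool is installed and the disc sits at /dev/dvd; the decoy titles tend to give themselves away by all reporting the same running time:

# list every title with its chapter count and runtime
lsdvd /dev/dvd

# show per-chapter times, useful for spotting which title is the real movie
lsdvd -c /dev/dvd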

On the flip-side, it seems that Sony hasn't been putting any effort at all towards DRM on their normal DVDs.  They're probably just putting all their efforts into copy protecting Blu-Rays now, which is just fine by me.

April 6, 2009 :: Utah, USA  

April 5, 2009

Andreas Aronsson

Don't extend

As I am nowadays using the keyworded gentoo-sources, I am already on the 2.6.29 kernel with the promised updated ext4 stuff and some more goodies. However, after doing my normal upgrade routine with make oldconfig, sifting through all the new options and running my 'build kernel and drivers' script, my system wouldn't boot =|. "Unable to remount read-write", dmesg said. A wee bit stumped, I went back to 2.6.28 for a few days, but now I had another go and took a look at my fstab. In the mount options, I had put "extents, barriers=0". I'm not sure why, since none of the threads I found with Google made those options look very promising. In particular, when I found a note about the "extents" mount option being deprecated, I figured they had to go. Said and done: I have now booted with little devils peeking at me from the screen instead of penguins. I might have noticed a very slight speedup when starting programs too.
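
For reference, a plain ext4 line with none of the deprecated options looks something like this (the device and mount point are placeholders, not my actual setup):

# /etc/fstab
/dev/sda3   /   ext4   defaults,noatime   0 1
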
Ah, portage tells me it's time to go xorg-1.5. Now where did I put the bookmark for the upgrade guide...

April 5, 2009 :: Sweden

Ow Mun Heng

Postgresql 8.4 -> Where are On-Disk Bitmap Indexes?

Postgresql 8.4 is nearly out. There are quite a few things in it which look interesting to me. However, the one thing I'm still missing, and can't find the status of, is the on-disk bitmap indexes that were supposed to come out in the 8.4 release.

Anyone from the PostgreSQL team privy to that info? I can't really seem to find it on Google.

Thanks.

April 5, 2009

Zeth

Getting value for money from my council tax

Council Tax

It is April, which in England means we have to start paying a tax to the local government. This tax is called 'Council Tax' and it is levied on each house. Since everyone has to live somewhere, it is basically a tax on everyone, except full-time students, poor people and so on. Sadly I do not fall into any of the exemptions anymore, so I will have to find the thousand-odd pounds or arrange installments.

The city government ('council') has lots of other income, but this is the most visible as you have to organise the payment yourself. Just under 3.8% of the payment goes to the Fire station, fair enough, I do want to be rescued in the event of a fire; and 7.8% goes to the local police force who have proved their value to me already, catching and locking up the person who robbed my house a couple of years ago.

Half of the rest goes towards schools and other services for the city's children. Now I don't have any children, so I don't personally benefit. Well perhaps indirectly, schools keep the local tearaways rounded up in school, giving a few blessed hours on the bus and in shopping centres without the little darlings - that has got to be worth something per week.

Where the rest goes I am not sure. So since I cannot avoid the council tax, I decided to see whether this year, I could get better value for money out of my council tax. I will look into what useful services they have that I don't currently take advantage of. By the end of the year, I will decide whether the council is a huge rip-off or whether I have gotten good value for my money. Of course, I will take a special interest in services I can access digitally. Starting with a spring declutter.

Bulk Item Collection

In my city, the council take away our rubbish each week. However, they cannot take large or heavy items in these weekly collections.

For large items, you can drive them yourself to the 'recycling centre'. Previously when I wanted to get rid of larger things, I would get a visiting relative to drive me to the dump (what a pleasant experience for them).

However, for people like me who do not own a car, the council provides a service called 'Bulky Waste Collections'.

It worked pretty well. I filled out an online form which automatically booked me an appointment. All I had to do then was bung all my heavy crap into my front garden, and the council crew came yesterday with their truck and picked it all up.

http://commandline.org.uk/images/posts/other/bulk-items.jpg

You are allowed six things per appointment, so I decided to get rid of:

  • An electric fire, which went somewhat rusty in damp student digs.
  • A VGA monitor circa 1992, still working.
  • A hoover, broken.
  • An HP printer; the plastic cog was broken, I couldn't find a replacement, and the cost of getting the cog fabricated was greater than the cost of a new printer.
  • An Apple Power Macintosh Performa 6420 and monitor.

Having men in a truck take away your old heavy crap is a useful service; I will certainly use it again. It certainly feels liberating to throw out stuff. I am already eyeing up stuff for my next six items.


April 5, 2009 :: West Midlands, England  

Brian Carper

Disabling Ctrl-Alt-Backspace

After being reminded the hard way yet again that C-S-Backspace in Emacs invokes the very handy kill-whole-line function, but that C-M-Backspace, while uncomfortably similar to that key-chord, does something very different, I have now officially added to my /etc/X11/xorg.conf:

Section "ServerFlags"
    Option "DontZap" "True"
EndSection

to prevent me from accidentally murdering my X server at the worst possible times.
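
For what it's worth, on newer servers (xorg-server 1.6 and later) the zap sequence is instead gated behind an XKB option, so if DontZap ever stops being the whole story, clearing the session's XKB options should kill it too. A sketch:

# clears all XKB options, including terminate:ctrl_alt_bksp if it was set
setxkbmap -option ""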

April 5, 2009 :: Pennsylvania, USA  

April 4, 2009

Aaron Mavrinac

Das Komputermaschine Ist Fur Der Gefingerpoken

A good friend of mine recently tossed me some computer parts, including an HP illuminated multimedia USB keyboard (model SK-2565, part no. 5185-2027). Since I had been looking to replace my old keyboard (a $10 PS/2 job that I turned into a k-rad all-black cowboy deck with blank keys), and had been suffering from an inability to control my PCM volume or music from the keyboard without launching alsamixer or mocp respectively, a particularly acute problem when playing StarCraft, I found herein an opportunity.

HP SK-2565 USB Keyboard


This keyboard has nineteen buttons and one knob across the top. In order, they are (or look like) sleep, help, HP, printer, camera, shopping, sports, finance, web (connect), search, chat, e-mail, the five standard audio buttons (stop, previous, play/pause, next, load), a volume knob, mute, and music. Since the keyboard was furry enough to qualify as a mammal upon receipt, the first thing I did was clean it, a process which spanned several hours (though the process was niced down somewhat). The previous two sentences are related: the top buttons also happen to be built in such a way as to require utterly complete disassembly of the keyboard to remove and replace, and I am ashamed but not at all surprised to say I got the replacing part wrong. The play/pause button is now swapped with the previous button. And I am totally not taking this thing apart again any time soon.

But it is for the best! After figuring out sometime later that I had goofed, I decided (Daniel Gilbert, this one's for you) that I liked it better this way anyway. Which is perfectly fine, of course, since I'm about to get to the good part: how I made my HP illuminated multimedia USB keyboard special upper buttons work in Linux, using Xmodmap, and in awesome, using rc.lua.

Turns out it's extremely easy to bind arbitrary keycodes to keysyms (a full list of which can be found in /usr/share/X11/XKeysymDB), at least using GDM. By default (on Gentoo), GDM loads /etc/X11/Xmodmap, as specified by the sysmodmap setting in /etc/X11/gdm/Init/Default. Mine now looks like this:

keycode 223 = XF86Sleep
keycode 197 = XF86Shop
keycode 196 = XF86LightBulb
keycode 195 = XF86Finance
keycode 194 = XF86WWW
keycode 229 = XF86Search
keycode 121 = XF86Community
keycode 120 = XF86Mail
keycode 144 = XF86AudioPlay
keycode 164 = XF86AudioStop
keycode 160 = XF86AudioMute
keycode 162 = XF86AudioPrev
keycode 153 = XF86AudioNext
keycode 176 = XF86AudioRaiseVolume
keycode 174 = XF86AudioLowerVolume
keycode 118 = XF86Music


And now, the answers to all your questions:

  1. I figured the keycodes out by running xev and banging on the buttons.

  2. XF86LightBulb is the closest thing I could find to "sports" that wasn't already taken.

  3. The volume knob "clicks" and sends a keycode 176 or 174 depending on the turn direction.

  4. I did not map help, HP, printer, or camera because they do not appear to generate keycodes.

  5. I did not map audio load because I forgot. I will do it when I can think of an action to bind it to.

The next step was to make these keys actually do something in my window manager. Bindings are pretty easy to make in /etc/xdg/awesome/rc.lua. Without getting into too much detail, I bound keys to things. I am particularly impressed with how I can control audio via amixer, and my MOC playlist via commands without even having the interface open. Another bonus is the sleep button running xlock. Here's a sample line:

key({ }, "XF86LightBulb", function () awful.util.spawn("starcraft") end),
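
I won't reproduce my whole rc.lua, but the commands those bindings spawn are ordinary shell one-liners; roughly like these (the PCM control name is an assumption, use whatever your card exposes):

amixer set PCM 2%+       # XF86AudioRaiseVolume
amixer set PCM 2%-       # XF86AudioLowerVolume
amixer set PCM toggle    # XF86AudioMute
mocp --toggle-pause      # XF86AudioPlay
mocp --next              # XF86AudioNext
mocp --previous          # XF86AudioPrev
xlock                    # XF86Sleep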

A particularly nice one is the search button, which runs the following script (be nice, my bash-fu is rusty):

#!/bin/bash
# Pop up a query box, then open a Google search for whatever was typed.
Q=$(zenity --entry --width 600 --title="Google Search" --text="Google search query:")
if [[ -n "$Q" ]]; then
    # crude URL-encoding: spaces become %20
    EQ=$(echo "$Q" | sed 's/ /%20/g')
    firefox "http://www.google.ca/search?q=$EQ"
fi


I frequently say that if I took one thing home from working in the automotive sector, it was Kaizen.

April 4, 2009

Brian Carper

Vim cterm colors

Note to self. Vim color schemes that only set cterm colors won't work unless you export TERM=xterm-256color in your terminal emulator. Konsole in KDE4 seems to default to plain xterm. It took me half an hour to figure out why my color scheme wasn't working in Konsole.
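
A minimal sketch of the fix, for future reference:

# tell applications the terminal does 256 colors before launching Vim
export TERM=xterm-256color
# inside Vim, ":set t_Co?" should now report 256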

April 4, 2009 :: Pennsylvania, USA  

Kevin Bowling

FS-Cache merged in Kernel 2.6.30

FS-Cache has been merged into the upcoming kernel 2.6.30.  It provides a generic caching interface in the kernel for other file systems.  For example, you can use local hard disks to cache data accessed via NFS, AFS, or CD-ROM.  Since those tend to be high-latency while local disks are low-latency, it should provide a nice speedup.

Of particular interest to me, I contacted maintainer David Howells who is a Redhat employee.  I asked whether this infrastructure would help with large disk image files stored on NFS — a common though not particularly efficient case for VMWare, Xen, KVM, etc.  His exact response was “Quite feasible.  As long as you have a local disk on which to cache the files.”

I am quite happy as I run this setup at work for some production VMs since it allows for easy migration and backup without the complexity and cost of a SAN or cluster FS.  I look forward to testing when 2.6.30 hits the stable tree.
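
For the curious, the NFS side of this is opt-in per mount. Roughly, assuming the cachefilesd userspace daemon is installed and configured to manage the on-disk cache:

# start the cache manager that FS-Cache hands data to
/etc/init.d/cachefilesd start

# the 'fsc' mount option routes this NFS mount through FS-Cache
mount -t nfs -o fsc server:/export/images /mnt/images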


April 4, 2009

April 3, 2009

Leif Biberg Kristensen

Simple PHP link factory

I’ve been reviewing old code again, and have grown really tired of PHP code like this:

    if ($parent_id) {
        echo "<span class=\"hotlink\">"
            . " (<a href=\"./relation_edit.php?person=$person"
            . "&amp;parent=$parent_id\">$_edit</a>"
            . " / <a href=\"./relation_delete.php?person=$person"
            . "&amp;parent=$parent_id\">$_delete</a>)</span>\n"
            . cite(get_relation_id($person, $gender), 'relation', $person);
    }

It’s way too messy. So, today, I wrote a simple PHP function to clean up the act:

function to_url($base_url, $params, $txt) {
    $str = '<a href="' . $base_url;
    if ($params) {
        $pairs = array(); // initialise so the append below can't raise a notice
        foreach ($params as $key => $value)
            $pairs[] = $key . '=' . $value;
        $str .= '?' . join('&amp;', $pairs); // canonical argument order: glue first
    }
    $str .= '">' . $txt . '</a>';
    return $str;
}

This means that the first code snippet has now been rewritten as:

    if ($parent_id) {
        echo ' <span class="hotlink">'
            . to_url('./relation_edit.php', array('person' => $person, 'parent' => $parent_id), $_edit)
            . ' / '
            . to_url('./relation_delete.php', array('person' => $person, 'parent' => $parent_id), $_delete)
            . "</span>\n"
            . cite(get_relation_id($person, $gender), 'relation', $person);
    }

It’s not a giant step for mankind, for sure. But I publish it in the hope that it may be useful to others. It’s a bit strange that it took me eight years of writing data-driven PHP code to discover such a basic thing.

Edit: As noted in the comments, the built-in PHP function http_build_query may be better. Actually, my function to_url seems to replicate at least parts of it. There are a couple of things I’d like to point out, though.

  1. I’d never let a user on the ‘net input data via this function. It’s only used for navigational links in a private application where I must assume that the user has no malicious intentions. Just by looking at the links (edit or delete person data) you should see that the user has full control over the data in the first place. For that reason, there’s hardly any point in URL-encoding the GET string.
  2. The http_build_query builds only the parameter string, and the rest of the link, both base URL and text, will have to be provided by another function.
  3. For complex data like the examples in the PHP documentation, you should really use the POST method. Example #3 is just senseless.

April 3, 2009 :: Norway  


TopperH

Get remote irssi notifications without X forwarding

I was looking for a simple method to have irssi highlight notifications on my local machine while having irssi running on my remote server.

Googling a bit, I found that most methods require X forwarding (and libnotify installed on the server), or screen attached in a terminal on the local machine.

My server has no X, so I'm not going to install libnotify and its dependencies on it, and I don't want to have an irssi terminal open, unless I need it.



Here I found a nice solution:

Server side:

(I assume sshd is working on the server machine and key authentication is set up, so no password is required.)

wget http://www.leemhuis.info/files/fnotify/fnotify
cp fnotify ~/.irssi/scripts/fnotify.pl
cd ~/.irssi/scripts
ln -s ../fnotify.pl autostart/
touch ~/.irssi/fnotify


Then I reload irssi, or type "/RUN fnotify.pl" inside irssi (I only do this step the first time; afterwards it's done automatically at irssi startup).

From now on, every highlighted message will be logged to that file.

On the client side, I cd to my favourite bin directory (for me it's ~/scripts, but it could also be /usr/local/bin) and create a file called irssi-notification.sh:

#!/bin/sh
# Follow the fnotify log over ssh and turn each new line into a desktop notification.
ssh user@host tail -F ~/.irssi/fnotify | sed -u 's/[<@&]//g' | while read heading message
do
    notify-send -i gtk-dialog-info -t 300000 -- "${heading}" "${message}"
done


Change user@host to your username and host for the server machine, and chmod +x the file.

Make sure x11-libs/libnotify is installed on your system (I think some distros call this package libnotify-bin... don't ask me, Debian and Ubuntu like to have things complicated).


Now run the file and notifications will appear.



April 3, 2009 :: Italy  

Jason Jones

ILMJ Auto-Saved Entries

Lately, I've been getting a lot of feedback concerning lost entries from I Love My Journal.  Occasionally, I would even lose one myself.

The session timer for ILoveMyJournal.com is set to 2 hours, which means that you can stay logged in to the site for 2 hours without being logged out due to inactivity.

I initially thought that was long enough, but life gets in the way regardless of whether you're typing your journal or not. Many times I have gone out to check on my kids, ended up watching a movie, come back in, finished my entry, and as soon as I click "publish to blog", I get the wonderful login message basically saying "You've been owned, and your entry has been lost".

So, I spent the majority of today writing an AJAX-based auto-save mechanism which will auto-save your entry every 30 seconds (I might up that to 1 or 2 minutes, but we'll see how it goes).

So, if you press a wrong button on your keyboard which closes the browser, or your computer crashes, or you leave your computer for 10 hours straight - now, it doesn't matter at all.

ILoveMyJournal.com will take care of it for you.

Here's a screenshot with the not-very-aesthetically-pleasing note at the bottom.  I'll make it look better later.



April 3, 2009 :: Utah, USA  

Iain Buchanan

Blocking port 25

I had a call from a friend complaining that they had just purchased a wireless broadband stick (from Telstra, using their Next-G network, which is an HSDPA network on UMTS 850MHz) and they could not send mail via their normal mail accounts.

A few minutes of checking found that Telstra and Bigpond block outgoing access to port 25 to anything other than their own mail servers.

The reasons are listed here [bigpond.custhelp.com] as well as at other pages. This post will list why their reasons are flawed, and how to get around them.

Flawed Reasoning

Bigpond claims they manage the use of port 25 "to prevent spammers sending unsolicited email using [their] network." OK, that sounds fair enough at first glance, but when you realise how easy this is to get around (use a different port, for example) then this reason becomes redundant.

Bigpond claims that other ISPs are taking similar steps and that their changes have been "proven to prevent some types of spam activity". However spammers, like advertisers, attempt to stay ahead of the latest trends, and as soon as one method of spamming is blocked, they will use another. Also Internode (as an example) blocks port 25 by default, but lets you turn this feature off.

Furthermore, spammers are setting up real mail servers around the world. In conjunction with a tailored trojan that uses a different port to send mail, Bigpond's efforts are useless. In fact spam levels are back to 95% of all email traffic!

Finally, you could pay the extra money for a fixed IP address from Telstra, and they won't block the port. In my opinion, this is shameless money grabbing. Why is a user on a fixed IP address any less susceptible to a spam-sending trojan or virus?

Perhaps the spam is purposefully malicious, and Telstra would like to know whose account to suspend? Telstra (along with most ISPs) keep detailed logs of traffic and authentications, so they can easily tell which user from a dynamic IP address was accessing which sites at any point in recent history, therefore static IP addresses are no easier to crack down on.

More Problems than Solutions

Bigpond says that you can use their Bigpond mail server to send mail, and thus get around the port block. You can in fact do this, and still have your email appear to come from you@yourhost.com (and not you@bigpond.com).

This solution is not ideal for two reasons:

1. Travelling
The frequent traveller, like my friend, is often on different networks. He must be able to use whichever network he is on and send / receive his normal email. To set up a different outgoing mail server, and perhaps a different profile (from whichever mail client he is using) for each network is both time consuming and pointless.

2. Your email looks like spam
When you send email where the FROM address is you@yourhost.com, but it goes through a different email server you@bigpond.com, the recipient's (him@friendsmail.com) mail server may block or mark your email as spam.

This is because exactly that technique (using a FROM address and mail server that do not match) is used by spammers to send spam. The recipient mail server checks the DNS records of the sender (yourhost.com), and if they don't match the originating server (bigpond.com), then your email may be deleted, rejected, or set aside.

Getting around it

OK, so what do you do to get around it? By far the best way is to authenticate with your mail server, and use a secure port. By using a secure port (usually not port 25) Bigpond won't block your outgoing mail. In fact this should work for many networks that block port 25.

You have the added advantage that your mail is probably encrypted, or at least your password will be (don't rely on this to encrypt sensitive emails though, as you can bet it will be transmitted in plain text at some stage of the process).
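
Before fiddling with mail client settings, you can check from a shell whether your mail server speaks TLS on the submission port (the hostname here is a placeholder):

# port 587 is the usual authenticated submission port; 465 (SMTPS) is the other common one
openssl s_client -starttls smtp -connect mail.yourhost.com:587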

Is my mail server compatible?
The best thing to do is try! Different mail clients do this in different ways:

Evolution 2.24.5
Edit > Preferences > Mail Accounts > Edit > Sending Email > Use Secure Connection

Thunderbird 3.0b3
Edit > Account Settings > Outgoing Server > Edit > Connection Security

Outlook [including Express]
You have to edit your account settings from one of the main menus. You may have to then choose View or Change existing email accounts. Then select the account and choose Change; then more settings (I think) and then you should see a secure option. Note the SPA option is not what you're looking for here, although you can use it if supported.

If you get timeouts or errors sending mail, then try slightly different options (if you have a choice).

April 3, 2009 :: Australia  

Brian Carper

Real Confusing Haskell

I can pinpoint the exact page in Real World Haskell where I became lost. I was reading along surprisingly well until page 156, upon introduction of newtype.

At that point my smug grin became a panicked grimace. The next dozen pages were an insane downward spiral into the dark labyrinth of Haskell's type system. I had just barely kept data and class and friends straight in my mind. type I managed to ignore completely. newtype was the straw that broke the camel's back.

As a general rule, Haskell syntax is incredibly impenetrable. => vs. -> vs. <-? I have yet to reach the chapter dealing with >>=. The index tells me I can look forward to such wonders as >>? and ==> and <|>. Who in their right mind thought up the operator named .&.? The language looks like Japanese emoticons run amuck. If and when I reach the \(^.^)/ operator I'm calling it a day.

Maybe Lisp has spoiled me, but the prospect of memorizing a list of punctuation is wearisome. And the way you can switch between prefix and infix notation using parens and backticks makes my eyes cross. Add in syntactic whitespace and I don't know what to tell you.

I could still grow to like Haskell, but learning a new language for me always goes through a few distinct stages:

Curiosity -> Excitement -> Reality Sets In -> Frustration -> Rage ...

At Rage I reach a fork in the road: I either proceed through Acceptance into Fumbling and finally to Productivity, or I go straight from Rage to Undying Hatred. Haskell could still go either way.

April 3, 2009 :: Pennsylvania, USA  

April 2, 2009

Zeth

Printing in black and white on Linux

I do not normally print very much at home, however I decided to get a very cheap printer for coach tickets, airplane boarding passes and other last minute emergencies.

I went for the HP Deskjet D2560. Here it is in its full twenty-five pound glory:

http://commandline.org.uk/images/posts/gnome/print-black-and-white/printing0.png

The printer was so cheap that it did not come with a USB cable; however, I had a few at home already. The printer end needs a B-type connector.

http://commandline.org.uk/images/posts/gnome/print-black-and-white/printing1.png

The first lead I tried, the posher one pictured top, didn't work as the connector didn't penetrate the socket enough. The second lead, pictured bottom, did work. So if you buy a lead at the same time as the printer, make sure your B connector is long enough.

The printer worked with my Linux computer out of the box and printed fine in both colour and black and white.

The printer came with a black ink cartridge and a colour ink cartridge. These cheapie printers follow a razor-and-blades model: it is genuinely cheaper to buy the printer again and throw the old one away than to buy both of the cartridges again.

Therefore I decided to conserve ink, and thus cost, by printing pages in black and white only.

I pressed Ctrl+P which gives the normal GNOME print dialog that most of the programs have. Then I tried to find the button to set it to black and white.

How to do this on Linux through the graphical interface is not obvious enough in my opinion. The fact that I had to Google through random forum posts for the answer is a somewhat damning indication that the button is too far down.

So the task I was trying to achieve was to 'make my document print in black and white only'. However, it turns out that the interface forces you to 'change your printer mode in your printer settings to grayscale'. The same result but the path you make through the interface is different. The Linux desktop needs a lot more usability testing.

Anyhow, in the end I went to the top panel and clicked on 'System', then 'Administration' then 'Printing'.

http://commandline.org.uk/images/posts/gnome/print-black-and-white/printing2.png

Then I had to right-click on the particular printer and choose 'Properties'. Making it per-printer means that if I choose a different printer then my document prints in colour, as before. I am not convinced that this approach has the highest level of usability for most people.

http://commandline.org.uk/images/posts/gnome/print-black-and-white/printing3.png

Lastly, I then clicked on 'Printer Options' and then under 'General', I used the drop-down labelled 'Printout Mode'.

http://commandline.org.uk/images/posts/gnome/print-black-and-white/printing4.png

A lot of work, at least compared to the equivalent option on the legacy operating system. Oh well, let the presses run!
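
Incidentally, the same setting can be flipped from a shell via CUPS, which saves some clicking. The option and value names vary by driver, so list them first; the printer name below is a placeholder:

# see what the driver calls its mode option and which values it accepts
lpoptions -p Deskjet_D2560 -l | grep -i mode

# then set the greyscale value it reports, for example:
lpoptions -p Deskjet_D2560 -o PrintoutMode=Normal.Gray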


April 2, 2009 :: West Midlands, England  

Brian Carper

My Poor Headphones

My precious Grado SR-80's needed some emergency surgery a while back, resulting in this disaster. They still work today, in the sense that sound is still emitted from them, but in terms of aesthetics, the situation has rapidly deteriorated. I've got bare wire and sticky electrical tape hanging all over the place. Also I'm probably one good yank away from snapping the wires off again.

If anyone reading this has a good tutorial or information on re-wiring a set of headphones, it'd be appreciated. I've never soldered anything in my life. I don't know where to acquire the wires; I imagine any wire will do, but I'm clueless when it comes to such things. I think I might like to do something like this mod and run the wire up over the top, to prevent the inevitable twisting from destroying the wires in the future, but I'm uncertain I could pull it off without complete destruction.

(At least I know enough about these things to cringe when people start talking about the "performance" of their headphone wires. $400 for a hunk of wire? Wow.)

April 2, 2009 :: Pennsylvania, USA  

George Kargiotakis

HOWTO remotely install debian over gentoo without physical access

The Task
Last year, comzeradd and I set up a Gentoo server for HELLUG, according to our plot to help Gentoo conquer the world. Unfortunately Gentoo falls outside HELLUG’s administration policy; all servers must be Debian. We didn’t know that, so after a small flame :), we decided that we should take the server back to somebody’s home and re-install Debian on it. The problem was that the server was located at the University of Athens campus, which is a bit far from downtown Athens where comzeradd lives. I also live 500km away, so we were pretty much stuck. Months passed and nobody actually had enough free time to go to UOA’s campus and take the server to their house. …In the meantime manji joined us as an extra root for the server.

One Saturday night while chatting on IRC (what else could we be doing on a Saturday night ??) we had an inspiration: why not install Debian remotely, without taking the server home? Even if everything eventually got borked, it couldn’t get any worse than going there, taking the server home and fixing it, just like we would do anyway. So we gathered on a new IRC channel with some more friends who are really good with Debian and started the conversion process.

The Server
The interesting part about the server was that it had 2×250Gb IDE disks. The Gentoo setup had these disks partitioned into 4 software RAID devices + swap partitions.

(Gentoo) # fdisk -l
Disk /dev/hda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x431bd7b7
Device Boot Start End Blocks Id System
/dev/hda1 * 1 6 48163+ fd Linux raid autodetect
/dev/hda2 7 130 996030 82 Linux swap / Solaris
/dev/hda3 131 27964 223576605 fd Linux raid autodetect
/dev/hda4 27965 30401 19575202+ 5 Extended
/dev/hda5 27965 29183 9791586 fd Linux raid autodetect
/dev/hda6 29184 30401 9783553+ fd Linux raid autodetect
Disk /dev/hdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/hdb1 * 1 6 48163+ fd Linux raid autodetect
/dev/hdb2 7 130 996030 82 Linux swap / Solaris
/dev/hdb3 131 27964 223576605 fd Linux raid autodetect
/dev/hdb4 27965 30401 19575202+ 5 Extended
/dev/hdb5 27965 29183 9791586 fd Linux raid autodetect
/dev/hdb6 29184 30401 9783553+ fd Linux raid autodetect

md1 was RAID1 with hda1+hdb1 for /boot/
md3 was RAID1 with hda3+hdb3 for /
md5 was RAID1 with hda5+hdb5 for /var/db/
md6 was RAID0 with hda6+hdb6 for /usr/portage/

SUMMARY
What we had to do was:
A) break all RAID1 and RAID0 devices, set all hdbX partitions as faulty and remove them from the RAID.
B) repartition hdb, create new RAID1 arrays with LVM on top and format the new partitions.
C) install Debian on hdb.
D) configure grub to boot Debian.

HOWTO
In order to be extra cautious about every command we gave, we all logged in to Gentoo; one of us set up a “screen” session and the others joined it using # screen -x

Now everything that one typed could be seen realtime by all the others.
PART A) RAID Manipulation
Check the status of the raid devices: cat /proc/mdstat
Copy /usr/portage/ to / as /usr/portage2 so that we can completely delete md6 (RAID0).
(Gentoo) # mkdir /usr/portage2/
(Gentoo) # cp -rp /usr/portage/* /usr/portage2/
(Gentoo) # umount /usr/portage
(Gentoo) # mv /usr/portage2 /usr/portage
(Gentoo) # mdadm --stop /dev/md6

Reminder: There’s no need to mdadm --remove /dev/md6 /dev/hdb6 since RAID0 can’t live with only one disk. The mdadm --remove command does nothing at all for RAID0.

We continued by breaking the rest of the RAID1 arrays.
(Gentoo) # mdadm --set-faulty /dev/md1 /dev/hdb1
(Gentoo) # mdadm --remove /dev/md1 /dev/hdb1
(Gentoo) # mdadm --set-faulty /dev/md3 /dev/hdb3
(Gentoo) # mdadm --remove /dev/md3 /dev/hdb3
(Gentoo) # mdadm --set-faulty /dev/md5 /dev/hdb5
(Gentoo) # mdadm --remove /dev/md5 /dev/hdb5

Checked on the current RAID status. Every RAID1 array should now be degraded, running on only one disk:
(Gentoo) # cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 hda1[0]
48064 blocks [2/1] [U_]
md3 : active raid1 hda3[0]
223576512 blocks [2/1] [U_]
md5 : active raid1 hda5[0]
9791488 blocks [2/1] [U_]
done

We were now ready to repartition /dev/hdb.
PART B) Repartition hdb
(Gentoo) # fdisk /dev/hdb
Created 3 partitions: a) 128Mb for /boot, b) 1Gb for Swap and c) the rest for LVM
In order to re-read the partition table we issue:
(Gentoo) # hdparm -z /dev/hdb
Check if everything is OK
(Gentoo) # cat /proc/partitions | grep hdb

PART C) Install Debian on /dev/hdb
We first had to install the proper tools to do that. In order to create LVM partitions we needed the lvm userspace tools:
(Gentoo) # emerge -avt lvm2
Then we needed to install the tools to create the Debian system, the package is called debootstrap.
(Gentoo) # emerge -avt debootstrap
Created the new RAID1 arrays:
(Gentoo) # mdadm --create /dev/md11 --level=1 -n 2 /dev/hdb1 missing
(Gentoo) # mdadm --create /dev/md12 --level=1 -n 2 /dev/hdb2 missing
(Gentoo) # mdadm --create /dev/md13 --level=1 -n 2 /dev/hdb3 missing

Checked the new RAID arrays:
(Gentoo) # cat /proc/mdstat
Created some basic LVM partitions on top of md13. We didn’t use the whole space of hdb3 because we are going to create more partitions when and where we need to in the future:
(Gentoo) # pvcreate /dev/md13
(Gentoo) # vgcreate local /dev/md13
(Gentoo) # vgdisplay
(Gentoo) # lvcreate -n root -L 10G local
(Gentoo) # lvcreate -n tmp -L 2G local
(Gentoo) # lvcreate -n home -L 20G local

Formatted the LVM partitions and mounted them someplace.
(Gentoo) # mkfs.ext2 /dev/md11
(Gentoo) # mkfs.ext3 /dev/local/root
(Gentoo) # mkfs.ext3 /dev/local/home
(Gentoo) # mkfs.ext3 /dev/local/tmp
(Gentoo) # tune2fs -c 0 -i 0 /dev/local/root
(Gentoo) # tune2fs -c 0 -i 0 -m 0 /dev/local/home
(Gentoo) # tune2fs -c 0 -i 0 /dev/local/tmp
(Gentoo) # mkdir /mnt/newroot
(Gentoo) # mkdir /mnt/newroot/{boot,home,tmp}
(Gentoo) # mount /dev/local/root /mnt/newroot/
(Gentoo) # mount /dev/md11 /mnt/newroot/boot/
(Gentoo) # mount /dev/local/home /mnt/newroot/home/
(Gentoo) # mount /dev/local/tmp /mnt/newroot/tmp/

Then it was time to install Debian on /mnt/newroot using debootstrap:
(Gentoo) # debootstrap --arch=amd64 lenny /mnt/newroot/ http://ftp.ntua.gr/debian

After a while, when it was over we chrooted to the Debian install:
(Gentoo) # cd /mnt/newroot/
(Gentoo) # mount -o bind /dev dev/
(Gentoo) # mount -t proc proc proc
(Gentoo) # chroot . /bin/bash
(Debian) #

We created the network config,
(Debian) # vi /etc/network/interfaces
(contents)
auto eth0
iface eth0 inet static
address X.Y.Z.W
netmask 255.255.255.240
gateway A.B.C.D
(/contents)

We fixed /etc/apt/sources.list:
(Debian) # vim /etc/apt/sources.list
(contents)
deb http://ftp.ntua.gr/debian lenny main contrib non-free
deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free
deb http://ftp.informatik.uni-frankfurt.de/debian-security/ lenny/updates main contrib
deb-src http://security.debian.org/ lenny/updates main contrib
(/contents)

We upgraded the current system and installed various useful packages.
(Debian) # aptitude update
(Debian) # aptitude full-upgrade
(Debian) # aptitude install locales
(Debian) # vi /etc/locale.gen
(contents)
el_GR ISO-8859-7
el_GR.UTF-8 UTF-8
en_US.UTF-8 UTF-8
(/contents)
(Debian) # locale-gen
(Debian) # aptitude install openssh-server
(Debian) # aptitude install linux-image-2.6.26-1-amd64
(Debian) # aptitude install lvm2 mdadm
(Debian) # aptitude purge citadel-server exim4+
(Debian) # aptitude purge libcitadel1
(Debian) # aptitude install grub less
(Debian) # vi /etc/kernel-img.conf
(contents)
do_symlinks = Yes
do_initrd = yes
postinst_hook = update-grub
postrm_hook = update-grub
(/contents)
(Debian) # vi /etc/hosts
(Debian) # vi /etc/fstab
(contents)
proc /proc proc defaults 0 0
/dev/local/root / ext3 defaults,noatime 0 0
/dev/local/tmp /tmp ext3 defaults,noatime,noexec 0 0
/dev/local/home /home ext3 defaults,noatime 0 0
/dev/md11 /boot ext2 defaults 0 0
/dev/md12 none swap sw 0 0
(/contents)
(Debian) # update-initramfs -u -k all
(Debian) # passwd

And we logged out of Debian to go back to Gentoo to fix grub.
PART D) Configure Grub on Gentoo (hda) to boot Debian
Since we didn’t have physical access to the server we had to boot Debian by using Grub on hda, where Gentoo’s Grub was.
We copied the kernel and initrd from Debian:
(Gentoo) # cp /mnt/newroot/boot/vmlinuz-2.6.26-1-amd64 /boot/
(Gentoo) # cp /mnt/newroot/boot/initrd.img-2.6.26-1-amd64 /boot/

We edited the grub config to add an entry for Debian and set it as the default! Otherwise the system would reboot back into Gentoo.
(Gentoo) # vi /boot/grub/menu.lst
(contents)
default 1
fallback 0
timeout 10
title=Gentoo
root(hd0,0)
kernel /gentoo-kernel ........
initrd /gentoo-initrd
title=debian (hdb)
root(hd1,0)
kernel /vmlinuz-2.6.26-1-amd64 root=/dev/mapper/local-root ro
initrd /initrd.img-2.6.26-1-amd64
(/contents)

Then we unmounted all partitions from /mnt/newroot/, we crossed our fingers and rebooted!
Voila! We could ssh to our new debian install :) And there was much rejoicing…

What was left to be done was to mount the old Gentoo RAID arrays (md1, md3), take backups of the configs and place them inside Debian. Then we could kill the old RAID arrays entirely, recreate partitions on hda and add those to the Debian RAID arrays (md11, md12, md13). Of course, special attention should be paid to re-installing grub separately on hda and hdb!!

Debian-izing the disk with Gentoo
After a couple of days I decided to move on, kill Gentoo completely and make Debian use both disks.
The first thing I did was to stop the old RAID arrays.
(Debian) # mdadm --stop /dev/md6
(Debian) # mdadm --stop /dev/md3
(Debian) # mdadm --stop /dev/md1

Then I repartitioned /dev/sda (under the Debian kernel's drivers all disks appear as /dev/sdX) and created partitions the same size as /dev/sdb’s:
(Debian) # fdisk /dev/sda
That was the point of no-return :)

There’s a risk involved here. The original sda1 was 64Mb and the newer sdb1 was 128Mb. I couldn’t add sda1 to md11 without extending the sda1 partition. If I completely scratched /dev/sda1 to create a new partition of 128Mb in size and a power failure occurred while this process was going on, the server could become unbootable, because it wouldn’t find a proper sda1 to boot from. If someone wanted to minimize that risk, they would have to repartition sda, extend sda1 to the size of sdb1, extend the old /dev/md1 to fit the new sda1 size and extend the fs beneath it. Of course there is still the problem of what would happen if a power failure occurred while extending the fs… so I chose to skip that “risk” and pretend it’s not there :)

Re-read the partition table:
(Debian) # hdparm -z /dev/sda
Add the new partitions to the Debian RAID1 arrays.
The first array I fixed was the /boot RAID1 array, because it would only take a few seconds to sync, minimizing the window in which there's no boot manager on the MBR while the rest of the partitions are still syncing:
(Debian) # mdadm --add /dev/md11 /dev/sda1
When the sync was over I installed Grub on both sda1 and sdb1:
(Debian) # grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
[...snip...]
grub> quit
(Debian) # grub
grub> device (hd1) /dev/sdb
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
[...snip...]
grub> quit

Then we fixed the rest of the RAID1 arrays:
(Debian) # mdadm --add /dev/md12 /dev/sda2
(Debian) # mdadm --add /dev/md13 /dev/sda3

The last sync took a while (approx 1h).

Make some final checks:
a) Check that grub is installed on every disk’s MBR
(Debian) # dd if=/dev/sda of=test.file bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 5.4721e-05 s, 9.4 MB/s
(Debian) # grep -i grub test.file
Binary file test.file matches
(Debian) # dd if=/dev/sdb of=test2.file bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 5.4721e-05 s, 9.4 MB/s
(Debian) # grep -i grub test2.file
Binary file test2.file matches

b) Make sure you have the correct entries in grub config:
(Debian) # cat /boot/grub/menu.lst
default 0
timeout 10
title=debian
root(hd0,0)
kernel /vmlinuz-2.6.26-1-amd64 root=/dev/mapper/local-root ro
initrd /initrd.img-2.6.26-1-amd64

c) Check the RAID1 arrays
(Debian) # cat /proc/mdstat
Personalities : [raid0] [raid1]
md13 : active raid1 sdb3[0] sda3[1]
243071360 blocks [2/2] [UU]
md12 : active (auto-read-only) raid1 sdb2[0] sda2[1]
987904 blocks [2/2] [UU]
md11 : active raid1 sdb1[0] sda1[1]
136448 blocks [2/2] [UU]
unused devices: <none>

That’s all. Only a reboot will show whether everything went right.
Good luck!

P.S. The struggle of Gentoo taking over the world is not over. We may have lost a battle but we haven’t lost the war!

References:
a) HOWTO - Install Debian Onto a Remote Linux System
Pretty old but was the base of our efforts
b) RAID1 on Debian Sarge
c) growing ext3 partition on RAID1 without rebooting
d) Remote Conversion to Linux Software RAID-1 for Crazy Sysadmins HOWTO
e) Gentoo LVM2 installation

April 2, 2009 :: Greece  

Roy Marples

dhcpcd-4.99.16 out

This should be the last experimental release of dhcpcd-4.99, as the last feature I wanted is now in - ARP ping support. This is handy for mobile hosts on sites that require a static IP. You can configure it like so:

interface bge0
arping 192.168.0.1
# 192.168.0.1 exists on more than one site
# so we differentiate by hardware address
profile 00:11:22:33:44:55
static ip_address=192.168.0.10/24
static domain_name_servers=192.168.0.2
# All other profiles for 192.168.0.1
profile 192.168.0.1
static ip_address=192.168.0.20/24
static domain_name_servers=192.168.0.1

This means that dhcpcd can now replace all the interface configuration modules in Gentoo baselayout and OpenRC, and that we can move the link management modules into proper init scripts, which is where they really belong.
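
If you want to try a config change on a live interface, a rebind should pick it up without restarting the daemon (a sketch, using the interface from the example above):

# re-evaluate the lease, and with it the arping profiles
dhcpcd -n bge0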

So, get testing it and report back any bugs, even compile warnings :)

April 2, 2009

Kevin Bowling

Good Linux File System Developments

ext4 has sparked good controversy on the LKML. Aside from the recent delayed-allocation and fsync issues, the whole FS stack is getting some much-needed attention.  Indeed, Linux file systems are starting to feel like first-class citizens again, with ext4 and Btrfs (merged in 2.6.29 for testing!) and the surrounding infrastructure being worked on.  A lot of long-overdue problems are being mitigated.  Jens Axboe claims an 8% single-drive and 25% array speedup with some recent pdflush patches.  This is very good news for all users, since disk I/O has a fast-growing gap with CPU and main memory bandwidth, even with SSDs.  The fruits of this labor are quite visible in the recent boot speedups in distros like the upcoming Fedora 11.

Mandatory reading:


April 2, 2009

N. Dan Smith

IcedTea coming to Gentoo PowerPC (someday)

Today I successfully built the IcedTea Java virtual machine on Gentoo/PowerPC.  What does that mean?  It means that someday Gentoo/PowerPC users will be able to have a source-based, free software Java system.  Currently we have to use IBM’s proprietary Java development kit, which brings with it a whole host of problems (from obtaining the binaries to fixing bugs).

The ebuild I used for dev-java/icedtea6 (which provides a 1.6 JDK) is from the Java overlay. After it is further stabilized and pending some legal discussion, we should have it in the main Gentoo tree, meaning that someday ibm-jdk-bin will disappear or become the secondary option for Java.  Hooray!
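
For the impatient, a test install roughly takes this shape; the overlay name is from memory, so treat it as an assumption:

# add the Java overlay, keyword the package, and build it
layman -a java-overlay
echo "dev-java/icedtea6" >> /etc/portage/package.keywords
emerge -av dev-java/icedtea6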

Once I get my feet a bit wetter I might post a more specific guide on getting IcedTea up and running on your Gentoo/PowerPC machine.

April 2, 2009 :: Oregon, USA  

April 1, 2009

Leif Biberg Kristensen

Update

It’s been a long time since I posted anything here. The Exodus project is still alive and kicking; it’s my primary tool for doing genealogy research, so I’m using it every day, and I am continually making improvements and extensions.

I simply haven’t had much motivation for writing anything about it.

The most important changes to the code base since my last blog are:

1) The add_source() function has been refactored, and most of the logic has been moved to a beast of a plpgsql function:

CREATE OR REPLACE FUNCTION add_source(INTEGER,INTEGER,INTEGER,INTEGER,TEXT,INTEGER) RETURNS INTEGER AS $$
-- Inserts sources and citations, returns current source_id
-- 2009-03-26: this func has finally been moved from PHP to the db.
-- Should be called via the functions.php add_source() which is left as a gatekeeper.
DECLARE
    person  INTEGER = $1;
    tag     INTEGER = $2;
    event   INTEGER = $3;
    src_id  INTEGER = $4;
    txt     TEXT    = $5;
    srt     INTEGER = $6;
    par_id  INTEGER;
    rel_id  INTEGER;
    x       INTEGER;
BEGIN
    IF LENGTH(txt) <> 0 THEN -- source text has been entered, add new node
        par_id := src_id;
        SELECT MAX(source_id) + 1 FROM sources INTO src_id;
        -- parse text to infer sort order:
        -- 1) use page number for sort order (low priority, may be overridden)
        IF srt = 1 THEN -- don't apply this rule unless sort = default
            IF txt SIMILAR TO E'%side \\d+%' THEN -- use page number as sort order
                SELECT SUBSTR(SUBSTRING(txt, E'side \\d+'), 5,
                    LENGTH(SUBSTRING(txt, E'side \\d+')) -4)::INTEGER INTO srt;
            END IF;
        END IF;
        -- 2) use ^#(\d+) for sort order
        IF txt SIMILAR TO E'#\\d+%' THEN
            SELECT SUBSTR(SUBSTRING(txt, E'#\\d+'), 2,
                LENGTH(SUBSTRING(txt, E'#\\d+')) -1)::INTEGER INTO srt;
            txt := REGEXP_REPLACE(txt, E'^#\\d+ ', ''); -- strip #number from text
        END IF;
        -- 3) increment from max(sort_order) of source group
        IF txt LIKE '++ %' THEN
            SELECT MAX(sort_order) + 1
                FROM sources
                WHERE get_source_gp(source_id) =
                    (SELECT parent_id FROM sources WHERE source_id = par_id) INTO srt;
            txt := REPLACE(txt, '++ ', ''); -- strip symbol from text
        END IF;
        -- there's a unique constraint on (parent_id, source_text) in the sources table, don't violate it.
        SELECT source_id FROM sources WHERE parent_id = par_id AND source_text = txt INTO x;
        IF NOT FOUND THEN
            INSERT INTO sources (source_id, parent_id, source_text, sort_order) VALUES (src_id, par_id, txt, srt);
        ELSE
            RAISE NOTICE 'Source % has the same parent id and text as you tried to enter.', x;
            RETURN -x; -- abort the transaction and return the offended source id as a negative number.
        END IF;
        -- the rest of the code will only be executed if the source is already associated with a person-event,
        -- ie. the source has been entered from the add/edit event forms.
        IF event <> 0 THEN
            -- if new cit. is expansion of an old one, we may remove the "parent node" citation
            DELETE FROM event_citations WHERE event_fk = event AND source_fk = par_id;
            -- Details about a birth event will (almost) always include parental evidence. Therefore, we'll
            -- update relation_citations if birth event (and new source is an expansion of existing source)
            IF tag = 2 THEN
                FOR rel_id IN SELECT relation_id FROM relations WHERE child_fk = person LOOP
                    INSERT INTO relation_citations (relation_fk, source_fk) VALUES (rel_id, src_id);
                    -- again, remove references to "parent node"
                    DELETE FROM relation_citations WHERE relation_fk = rel_id AND source_fk = par_id;
                END LOOP;
            END IF;
        END IF;
    END IF;
    -- associate source node with event
    IF event <> 0 THEN
        -- don't violate unique constraint on (source_fk, event_fk) in the event_citations table.
        -- if this source-event association already exists, it's rather pointless to repeat it.
        PERFORM * FROM event_citations WHERE event_fk = event AND source_fk = src_id;
        IF NOT FOUND THEN
                INSERT INTO event_citations (event_fk, source_fk) VALUES (event, src_id);
            ELSE
                RAISE NOTICE 'citation exists';
            END IF;
    END IF;
    RETURN src_id;
END
$$ LANGUAGE PLPGSQL VOLATILE;

(Edit: The reason behind moving this logic into the db is of course the relatively large number of interdependent queries, which I seriously dislike running from a PHP script. I had been anticipating this move for a really long time. And, after posting it here, I finally got around to adding some semi-intelligent exception handling. My old PHP function just called die() to prevent Postgres from barfing all over the place in case of the “sources” constraint violation.)

2) New «Search for Couples» page. I have simply deployed the view I’ve described earlier and used the index.php as a template to put a PHP wrapper script around it. So now I can find out in an instant if I have a couple like Ole Andersen and Anne Hansdatter who married around 1760.

3) New «Search for Source Text» page. I had this function which I used to run a lot from the command line:

-- CREATE TYPE int_bool_text AS (i INTEGER, b BOOL, t TEXT);

CREATE OR REPLACE FUNCTION find_src(TEXT) RETURNS SETOF int_bool_text AS $$
-- function for searching for source text from psql
-- example: select find_src('%Solum%Konfirmerte%An%Olsd%');
    SELECT source_id, is_unused(source_id), strip_tags(get_source_text(source_id))
        FROM sources
        WHERE get_source_text(source_id) LIKE $1
        ORDER BY is_unused(source_id), date_extract(strip_tags(get_source_text(source_id)))
$$ LANGUAGE SQL STABLE;

There are two issues with that. First, it’s cumbersome, even with readline and tab completion, to use the psql shell every time I want to look up a source text. Second, the query takes half a minute to run because it has to build the full source text for every single node in the sources table (currently 41072 nodes) and run a sequential search through them. For most searches, I don’t actually need more than the transcript part of the source text. So, again using index.php as a template, I built a PHP page that did the job in a more flexible manner, with two radio buttons for «partial» or «full» search respectively. The meat of the script is the query:

$scope = $_GET['scope'];
if ($src) {
    if ($scope == 'partial')
        $query = "SELECT source_id, is_unused(source_id) AS unused,
                            get_source_text(source_id) AS src_txt
                    FROM sources
                    WHERE source_text SIMILAR TO '%$src%'
                    ORDER BY date_extract(source_text)";
    if ($scope == 'full')
        $query = "SELECT source_id, is_unused(source_id) AS unused,
                            get_source_text(source_id) AS src_txt
                    FROM sources
                    WHERE get_source_text(source_id) SIMILAR TO '%$src%'
                    ORDER BY date_extract(source_text)";

By using SIMILAR TO, I can easily search for variant spellings. For instance, the given name equivalent to Mary in Norwegian is frequently spelled Maren, Mari, Marie or Maria. Giving the atom as "Mar(en|i)[ea]*" deals effectively with this. (Future project: use tsearch and build a thesaurus of variant name spellings.)

Integrating the search within the application brought another bonus. I made the node number in the query result clickable, with a link to the Source Manager. So, just by opening the node in a new tab, I both get to see which events and relations the source is associated with, and automatically set the last_selected_source to this node, ready to associate with an Event or Relation.

The last_selected_source (LSS) has grown to become a powerful concept within the application. I seldom enter a source node number by hand anymore; it’s much easier to modify the LSS before entering a citation. Therefore, I’ve also added a «Use» hotlink that updates the LSS in the Family View Notes section to each of the citations.

I probably should write some words about how I operate this program, as it’s very unconventional with respect to other genealogy apps. The source model is, as I’ve described in the Exodus article, «a self-referential hierarchy with an unknown number of levels.» (See the Gentech GDM document, section 5.3: Evidence Submodel.) The concept is generally known as an «adjacency tree» in database parlance. My own twist to it is that each node contains a partial string, and the full source text is produced at run-time by a recursive concatenation of the strings. It’s a simple, yet powerful, approach. Supplementary text, not intended to show up in the actual citation, is enclosed in {curlies}.

I usually start with entering source transcripts from a church book, every single one of them in sequential order. The concatenated node text up to that point is something like “Church book|for Solum|Mini 2 (1713-1761).|Baptisms,|page 62.” (The pipes are actually spaces, I just wanted to show the partial strings.) When I add a transcript, I usually increment the sort_order by prefixing the text with ‘++ ‘, and the add_source function (see above) will automatically assign the correct sort order number to the node. At the same time, I’ll look up the name in the database to see if I’ve already got that person or the family. Depending on the search result, I may associate the newly entered transcript with the relevant Events/Relations, or may leave it lying around, waiting for that person or family to approach «critical mass» in my research. Both in the Source Manager and in the new Search for Source text, unused transcripts are rendered with grey text, making it easy to see which sources that are actually associated with «persons» in the database.

It can be seen that the process is entirely «source driven», to an extent that I have not seen in any other genealogy research tool. And, of course, it’s totally incompatible with GEDCOM.

For that reason, and for several others, it’s also totally unsuitable for a casual «family historian». Most people compile their genealogy by drawing information from lots and lots of different sources. I, on the other hand, conduct a «One-place Study» in two adjacent parishes, and use a few sources exhaustively. I’m out to get the full picture of those two parishes, and my application is designed with that goal in mind.

April 1, 2009 :: Norway  

Dan Fego

Merging files with pr

Tonight, I’ve been poring over a rather large data set that I want to get some useful information out of. All the data was originally stored in a .html file, but after some (very) crude extraction techniques, I managed to pull out just the data I wanted, and shove it into a comma-separated file. Earlier, I had given up on my tools at hand and typed up an entire list of row headings for my newly-gotten data. So I had two files like so:

headings.txt
Alpha
Bravo
Charlie

values.csv
1,2,3,4
5,6,7,8
9,10,11,12

I spent quite a bit of time trying to figure out how to combine the two columns into one file with what I knew, but none of my tools could quite do it without nasty shell scripting. It took me a while, but I eventually found this post that cracked the case for me. The pr command, ostensibly for paging documents, has enough horsepower to solve my problem in short order, like so:

$ pr -tm -s, headings.txt values.csv

The -t tells the program to omit headers and footers, -m tells it to merge the files line by line, and -s, tells it to use a comma as the field separator. My desired result looked like so:

output
Alpha,1,2,3,4
Bravo,5,6,7,8
Charlie,9,10,11,12

There are numerous other options to pr, and depending on your line lengths, you may have to experiment. But for me, this got the job done.
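
Two asides worth knowing, hedged because pr's truncation behaviour depends on the exact flag combination: -w raises the page width if long merged lines come out clipped, and coreutils' paste happens to do this particular merge natively:

$ pr -tm -s, -w 200 headings.txt values.csv    # widen the page if lines get clipped
$ paste -d, headings.txt values.csv            # same merge, one flag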

April 1, 2009 :: USA  

Brian Carper

Trying Arch

Thanks to all who gave helpful suggestions about running VMs in Gentoo. The main reason I wanted a VM was to play around with some other distros and see what I liked.

But then I got to thinking, and I realized that I have over 250 GB of free hard drive space sitting around. So I made a new little partition and per Noah's suggestion, threw Arch Linux on there.

I'm fairly impressed so far. The install was easy. In contrast to the enormous Gentoo handbook, the whole Arch install guide fits on one page of the official Arch wiki. Why doesn't Gentoo have an official wiki? I know there are concerns over the quality of something anyone can edit, but in practice is it that big a deal? Is it worth the price of sending users elsewhere, to potentially even WORSE places, when the Gentoo docs don't cover everything we need? The quality of the unofficial Gentoo wiki is often very good but sometimes hit-or-miss, and it also has a habit of crashing and losing all its data, without backups, every once in a while.

The Arch installer is a command-line app using ncurses for basic menus and such, which is more than sufficient and a good compromise between command-line-only and full-blown X-run Gnome bloat. The install itself went fine, other than my own mistakes. I'm sharing /boot and /home between Gentoo and Arch so I can switch between them easily. During the install Arch tried to create some GRUB files, but they already existed courtesy of Gentoo, so the install bombed without much notification and I didn't notice until three steps later. No big deal to fix, but I'd have liked a louder error message right away when it happened. The base install took about 45 minutes.

Another nice thing is that the Arch install CD has vi on it. I didn't have to resort to freaking nano or remember to install vim first thing. A mild annoyance to be sure, but it bugged me every time I installed Gentoo.

After boot, installing apps via pacman is simple enough. KDE 4.2 installed in about 15 minutes, as you'd expect from a distro with binary packages. I found a mirror with 1.5 Mb/sec downloads, which is awfully nice. Syncing the package tree takes less than 2 seconds, which is also nice compared to Portage's 5-minute rsync and eix update times. Searching the tree via regex is also somehow instantaneous in Arch.
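
For Portage users, the rough command mapping looks like this (straight from the pacman man page; the package names are only examples):

$ pacman -Sy          # sync the package tree (cf. emerge --sync)
$ pacman -Ss '^vim'   # regex search of the tree (cf. eix vim)
$ pacman -S kde       # install a package or group (cf. emerge kde-meta)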

Oddly, KDE didn't seem to pull in Xorg as a dependency, but other dependencies worked fine so far. Time will tell how well this all holds up. Most package managers do fine on the normal cases but the real test is the funky little obscure apps. pacman -S gvim resulted in a Vim with working rubydo and perldo, which means Arch passed the Ubuntu stink test.

Another nice thing is that KDE4 actually works. My Gentoo install is years old and possibly crufted beyond repair, or something else was wrong, but I have yet to get KDE4 working in Gentoo without massive breakage. Possibly if I wiped Gentoo and tried KDE4 without legacy KDE3 stuff everywhere it'd also be smooth.

Regardless, it all works in Arch. NVidia drivers and Twinview settings were copy/pasted from Gentoo, and compositing all works fine. No performance problems in KDE with resizing or dragging windows, no Plasma crashes (yet), no missing icons or invisible notification area. QtCurve works in Qt3, Qt4 and GTK just fine. My sound card worked without any manual configuration at all. My mouse worked without tweaking, including the thumb buttons. Same with networking, the install prompted me for my IP and gateway etc. and then it worked, no effort.

I've mentioned this before, but one nice thing about Linux is that if you have /home in its own partition, it's no big deal at all to share it between distros. With no effort at all I'm now using all my old files and settings in Arch, and I can switch back and forth between this and Gentoo without any trouble.

So we'll see how this goes. So far so good though. Arch seems very streamlined and its goal is minimalism, which is nice. Gentoo has not felt minimalistic to me in a while. Again, that may be due to the age of my install, cruft, and bit-rot.

April 1, 2009 :: Pennsylvania, USA  

March 31, 2009

Ciaran McCreesh

Feeding ERB Useful Variables: A Horrible Hack Involving Bindings


I’ve been playing around with Ruby to create Summer, a simple web packages thing for Exherbo. Originally I was hand-creating HTML output simply because it’s easy, but that started getting very very messy. Mike convinced me to give ERB a shot.

The problem with template engines with inline code is that they look suspiciously like the braindead PHP model. Content and logic end up getting munged together in a horrid, unmaintainable mess, and the only people who’re prepared to work with it are the kind of people who think PHP isn’t more horrible than an aborted Jacqui Smith clone foetus boiled with rotten lutefisk and served over a bed of raw sewage with a garnish of Dan Brown and Patricia Cornwell novels. So does ERB let us combine easy page layouts with proper separation of code?

Well, sort of. ERB lets you pass it a binding to use for evaluating any code it encounters. On the surface of it, this lets you select between the top level binding, which can only see global symbols, or the caller’s binding, which sees everything in scope at the time. Not ideal; what we want is to provide only a carefully controlled set of symbols.

There are three ways of getting a binding in Ruby: a global TOPLEVEL_BINDING constant, which we clearly don’t want, the Kernel#binding method which returns a binding for the point of call, and the Proc#binding method which returns a binding for the context of a given Proc.

At first glance, the third of these looks most promising. What if we define the names we want to pass through in a lambda, and give it that?

require 'erb'

puts ERB.new("foo <%= bar %>").result(lambda do
    bar = "bar"
end)

Mmm, no, that won’t work:

(erb):1: undefined local variable or method `bar' for main:Object (NameError)

Because the lambda’s symbols aren’t visible to the outside world. What we want is a lambda that has those symbols already defined in its binding:

require 'erb'

puts ERB.new("foo <%= bar %>").result(lambda do
    bar = "bar"
    lambda { }
end.call)

Which is all well and good, but it lets symbols leak through from the outside world, which we’d rather avoid. If we don’t explicitly say “make foo available to ERB”, we don’t want to use the foo that our calling class happens to have defined. We also can’t pass functions through in this way, except by abusing lambdas — and we don’t want to make the ERB code use make_pretty.call(item) rather than make_pretty(item). Back to the drawing board.

There is something that lets us define a (mostly) closed set of names, including functions: a Module. It sounds like we want to pass through a binding saying “execute in the context of this Module” somehow, but there’s no Module#binding_for_stuff_in_us. Looks like we’re screwed.

Except we’re not, because we can make one:

module ThingsForERB
    def self.bar
        "bar"
    end
end

puts ERB.new("foo <%= bar %>").result(ThingsForERB.instance_eval { binding })

Now all that remains is to provide a way to dynamically construct a Module on the fly with methods that map onto (possibly differently-named) methods in the calling context, which is relatively straight-forward, then we can do this in our templates:

<% if summary %>
    <p><%=h summary %>.</p>
<% end %>

<h2>Metadata</h2>

<table class="metadata">
    <% metadata_keys.each do | key | %>
        <tr>
            <th><%=h key.human_name %></th>
            <td><%=key_value key %></td>
        </tr>
    <% end %>
</table>

<h2>Packages</h2>

<table class="packages">
    <% package_names.each do | package_name | %>
        <tr>
            <th><a href="<%=h package_href(package_name) %>"><%=h package_name %></a></th>
            <td><%=h package_summary(package_name) %></td>
        </tr>
    <% end %>
</table>

Which gives us a good clean layout that’s easy to maintain, but lets us keep all the non-trivial code in the controlling class.
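
For the curious, here is a minimal sketch of that dynamic construction step, assuming a modern Ruby; this is my guess at an implementation, and Summer's actual code surely differs:

require 'erb'

# Build an anonymous Module whose singleton methods delegate to the
# callables we explicitly pass in, then hand ERB a binding for it.
def erb_binding(mapping)
    mod = Module.new
    mapping.each do | name, callable |
        mod.define_singleton_method(name) { | *args | callable.call(*args) }
    end
    mod.instance_eval { binding }
end

puts ERB.new("foo <%= bar %>").result(erb_binding(:bar => lambda { "bar" }))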

Posted in summer Tagged: exherbo, ruby, summer

March 31, 2009

Jürgen Geuter

Themeability can result in bad software

Gwibber is a microblogging client for Linux based on Python and GTK. Well, some of it is.

But in order to get simple skinability or themeability, it was decided to use an embedded WebKit browser to display the information. Even better, the HTML wasn't even rendered statically: after parsing, the data was pushed into an HTML template and then rendered dynamically with jQuery and JavaScript.

That sounds like a neat "proof of concept" thingy, you know, one of those things where people ask: "Why would you do that?" And you answer: "Because I can."

Many people nowadays know at least some HTML, CSS and JavaScript, so many projects are tempted to use those technologies as markup to gain the ability to skin their software, but I think that is not the right direction.

Yes, some people will claim that people want to use pretty software, and that if your software is not as pretty as a fairy princess, nobody will want to run it.

But on the other hand, Gwibber gives us an example of the opposite point of view: the embedded WebKit browser thingy in connection with JavaScript is really unstable and fragile. Today I updated my system and got a newer webkit-gtk, which made Gwibber pretty much die. It's a known bug and it's really hard to debug what exactly goes wrong.

While Gwibber kinda has the important features, there is still quite some stuff it lacks, but right now most of the energy has to be spent on reworking the inner workings and getting the WebKit thingy to display some statically rendered HTML.

A better approach would have been to implement the functionality in a library and then build a client on top of that, a simple client that just works. Then you can start adding code to the whole thing that allows you to make it all pretty and fancy.

Right now we have a package that's kinda nifty but forces you to find a random version of webkit-gtk that might work and, if you find it, never upgrade. You have a pretty tool that users start to adopt, it gets included into Ubuntu's next release, but, guess what? The current version won't run. That makes the project look bad. Even if the software looks good. If you know what I mean.

March 31, 2009 :: Germany  

Martin Matusiak

emerge mono svn

Yes, it’s time for part two. If you’re here it’s probably because someone said “fixed in svn”, and for most users of course that doesn’t matter, but if you’re a developer you might need to keep up with the latest.

Now, it’s one thing to do a single install and it’s another to do it repeatedly. So I decided to do something about it. Here is the script, just as quick and dirty and unapologetic as the language it’s written in. To make up for that I’ve called it emerge.pl to give it a positive association.

What it does is basically encapsulate the findings from last time and just put it all into practice. Including setting up the parallel build environment for you. Just remember that once it’s done building, source the env.sh file it spits out to run the installed binaries.

$ ./emerge.pl merge world

$ . env.sh

$ monodevelop &

This is pretty simple stuff, though. Just run through all the steps, no logging. If it fails at some point during the process it stops so that you can see the error. Then, when you press a key, it continues.

#!/usr/bin/perl
# Copyright (c) 2009 Martin Matusiak <numerodix@gmail.com>
# Licensed under the GNU Public License, version 3.
#
# Build/update mono from svn
 
use strict;
use warnings;
 
use Cwd;
use File::Path;
use File::Spec;
use Term::ReadKey;
 
 
my $SRCDIR = "/ex/mono-sources";
my $DESTDIR = "/ex/mono";
 
 
sub term_title {
	my ($s) = @_;
	system("echo", "-en", "\\033]2;$s\\007");
}
 
sub invoke {
	my (@args) = @_;
 
	print "> "; foreach my $a (@args) { print "$a "; }; print "\\n";
 
	my $exit = system(@args);
	return $exit;
}
 
sub dopause {
	ReadMode 'cbreak';
	ReadKey(0);
	ReadMode 'normal';
}
 
 
sub env_var {
	my ($var) = @_;
	my ($val) = $ENV{$var};
	return defined($val) ? $val : "";
}
 
sub env_get {
	my ($env) = {
		DYLD_LIBRARY_PATH => "$DESTDIR/lib:" . env_var("DYLD_LIBRARY_PATH"),
		LD_LIBRARY_PATH => "$DESTDIR/lib:" . env_var("LD_LIBRARY_PATH"),
		C_INCLUDE_PATH => "$DESTDIR/include:" . env_var("C_INCLUDE_PATH"),
		ACLOCAL_PATH => "$DESTDIR/share/aclocal",
		PKG_CONFIG_PATH => "$DESTDIR/lib/pkgconfig",
		XDG_DATA_HOME => "$DESTDIR/share:" . env_var("XDG_DATA_HOME"),
		XDG_DATA_DIRS => "$DESTDIR/share:" . env_var("XDG_DATA_DIRS"),
		PATH => "$DESTDIR/bin:$DESTDIR:" . env_var("PATH"),
		PS1 => "[mono] \\\\w \\\\\\$? @ ",
	};
	return $env;
}
 
sub env_set {
	my ($env) = env_get();
	foreach my $key (keys %$env) {
		if ((!exists($ENV{$key})) || ($ENV{$key} ne $env->{$key})) {
			$ENV{$key} = $env->{$key};
		}
	}
}
 
sub env_write {
	my ($env) = env_get();
	open (WRITE, ">", "env.sh");
	foreach my $key (keys %$env) {
		my ($line) = sprintf("export %s=\"%s\"\n", $key, $env->{$key});
		print(WRITE $line);
	}
	close(WRITE);
}
 
 
sub pkg_get {
	my ($name, $svnurl) = @_;
	my $pkg = {
		name => $name,
		dir => $name, # fetch to
		workdir => $name, # build from
		svnurl => $svnurl,
		configurer => "autogen.sh",
		maker => "make",
		installer => "make install",
	};
	return $pkg;
}
 
sub pkg_print {
	my ($pkg) = @_;
	foreach my $key (keys %$pkg) {
		printf("%14s : %s\\n", $key, $pkg->{$key});
	}
	print("\\n");
}
 
sub pkg_action {
	my ($action, $dir, $pkg, $code) = @_;
 
	# Report on action that is to commence
	term_title(sprintf("Running %s %s", $action, $pkg->{name}));
 
	# Create destination path if it does not exist
	my ($path) = File::Spec->catdir($SRCDIR, $dir);
	unless (-d $path) {
		mkpath($path);
	}
 
	# Chdir to source path
	my ($cwd) = getcwd();
	chdir($path);
 
	# Set environment
	env_set();
 
	# Perform action
	my ($exit) = &$code;
 
	# Chdir back to original path
	chdir($cwd);
 
	# Check exit code
	if ($exit == 0) {
		term_title(sprintf("Done %s %s", $action, $pkg->{name}));
	} else {
		term_title(sprintf("Failed %s %s", $action, $pkg->{name}));
		dopause();
	}
}
 
sub pkg_fetch {
	my ($pkg, $rev) = @_;

	if (exists($pkg->{svnurl})) {
		my $code = sub {
			return invoke("svn", "checkout", "-r", $rev, $pkg->{svnurl}, ".");
		};
		pkg_action("fetch", $pkg->{dir}, $pkg, $code);
	}
}
 
sub pkg_configure {
	my ($pkg) = @_;
 
	if (exists($pkg->{configurer})) {
		my $code = sub {
			my ($configurer) = $pkg->{configurer};
			if (!-e $configurer) {
				if (-e "configure") {
					$configurer = "configure";
				}
			}
			return invoke("./$configurer --prefix=$DESTDIR");
		};
		pkg_action("configure", $pkg->{workdir}, $pkg, $code);
	}
}
 
sub pkg_premake {
	my ($pkg) = @_;
 
	if (exists($pkg->{premaker})) {
		my $code = sub {
			return invoke($pkg->{premaker});
		};
		pkg_action("premake", $pkg->{workdir}, $pkg, $code);
	}
}
 
sub pkg_make {
	my ($pkg) = @_;
 
	if (exists($pkg->{maker})) {
		my $code = sub {
			return invoke($pkg->{maker});
		};
		pkg_action("make", $pkg->{workdir}, $pkg, $code);
	}
}
 
sub pkg_install {
	my ($pkg) = @_;
 
	if (exists($pkg->{installer})) {
		my $code = sub {
			return invoke($pkg->{installer});
		};
		pkg_action("install", $pkg->{workdir}, $pkg, $code);
	}
}
 
 
sub pkglist_get {
	my $mono_svn = "svn://anonsvn.mono-project.com/source/trunk";
	my (@pkglist) = (
		{"libgdiplus" => "$mono_svn/libgdiplus"},
		{"mcs" => "$mono_svn/mcs"},
		{"olive" => "$mono_svn/olive"},
		{"mono" => "$mono_svn/mono"},
		{"debugger" => "$mono_svn/debugger"},
		{"mono-addins" => "$mono_svn/mono-addins"},
		{"mono-tools" => "$mono_svn/mono-tools"},
		{"gtk-sharp" => "$mono_svn/gtk-sharp"},
		{"gnome-sharp" => "$mono_svn/gnome-sharp"},
		{"monodoc-widgets" => "$mono_svn/monodoc-widgets"},
		{"monodevelop" => "$mono_svn/monodevelop"},
		{"paint-mono" => "http://paint-mono.googlecode.com/svn/trunk"},
	);
 
	my (@pkgs);
	foreach my $pkgh (@pkglist) {
		# prep
		my @ks = keys(%$pkgh); my $key = $ks[0];
 
		# init pkg
		my $pkg = pkg_get($key, $pkgh->{$key});
 
		# override defaults
		if ($pkg->{name} eq "mcs") {
			delete($pkg->{configurer});
			delete($pkg->{maker});
			delete($pkg->{installer});
		}
		if ($pkg->{name} eq "olive") {
			delete($pkg->{configurer});
			delete($pkg->{maker});
			delete($pkg->{installer});
		}
		if ($pkg->{name} eq "mono") {
			$pkg->{premaker} = "make get-monolite-latest";
		}
		if ($pkg->{name} eq "gtk-sharp") {
			$pkg->{configurer} = "bootstrap-2.14";
		}
		if ($pkg->{name} eq "gnome-sharp") {
			$pkg->{configurer} = "bootstrap-2.24";
		}
		if ($pkg->{name} eq "paint-mono") {
			$pkg->{workdir} = File::Spec->catdir($pkg->{dir}, "src");
		}
 
		push(@pkgs, $pkg);
	}
	return @pkgs;
}
 
 
sub action_list {
	my (@pkgs) = pkglist_get();
	foreach my $pkg (@pkgs) {
		printf("%s\\n", $pkg->{name});
	}
}
 
my %actions = (
	list => -1,
	merge => 0,
	fetch => 1,
	configure => 2,
	make => 3,
	install => 4,
);
 
sub action_merge {
	my ($action, @worklist) = @_;
 
	# spit out env.sh to source when running
	env_write();
 
	# init source dir
	unless (-d $SRCDIR) {
		mkpath($SRCDIR);
	}
 
	my (@pkgs) = pkglist_get();
	foreach my $pkg (@pkgs) {
		# filter on membership in worklist
		if (grep {$_ eq $pkg->{name}} @worklist) {
			pkg_print($pkg);
 
			# fetch
			if (($action == $actions{merge}) || ($action == $actions{fetch})) {
				my $revision = "HEAD";
				pkg_fetch($pkg, $revision);
			}
 
			# configure
			if (($action == $actions{merge}) || ($action == $actions{configure})) {
				pkg_configure($pkg);
			}
 
			if (($action == $actions{merge}) || ($action == $actions{make})) {
				# premake, if any
				pkg_premake($pkg);
 
				# make
				pkg_make($pkg);
			}
 
			# install
			if (($action == $actions{merge}) || ($action == $actions{install})) {
				pkg_install($pkg);
			}
		}
	}
}
 
 
sub parse_args {
	if (scalar(@ARGV) == 0) {
		printf("Usage:  %s <action> [<pkg1> <pkg2> | world]\\n", $0);
		printf("Actions: %s\\n", join(" ", keys(%actions)));
		exit(2);
	}
 
	my ($action) = $ARGV[0];
	if (!grep {$_ eq $action} keys(%actions)) {
		printf("Invalid action: %s\\n", $action);
		exit(2);
	}
 
	my (@pkgnames) = splice(@ARGV, 1);
	if (grep {$_ eq "world"} @pkgnames) {
		my @allpkgs = pkglist_get();
		@pkgnames = ();
		foreach my $pkg (@allpkgs) {
			push(@pkgnames, $pkg->{name});
		}
	}
 
	return (action => $action, pkgs => \@pkgnames);
}
 
sub main {
	my (%input) = parse_args();
 
	printf("Action selected: %s\\n", $input{action});
	if (scalar(@{ $input{pkgs} }) > 0) {
		printf("Packages selected:\\n");
		foreach my $pkgname (@{ $input{pkgs} }) {
			printf(" * %s\\n", $pkgname);
		}
		print("\\n");
	}
 
	if ($actions{$input{action}} == $actions{list}) {
		action_list();
		exit(2);
	}
 
	action_merge($actions{$input{action}}, @{ $input{pkgs} })
}
 
main();

Download this code: emerge_pl

March 31, 2009 :: Utrecht, Netherlands  

Brian Carper

Gentoo VMWare Fail

According to this bug, VMWare on Gentoo is in a sorry state, with one lone person trying to keep it going. I can't get vmware-modules to compile on my system no matter what I try, which is depressing. Kudos to all of our one-man army Gentoo devs who are keeping various parts of the distro going, but I wonder how many other areas of Gentoo are largely unmaintained nowadays.

KVM was braindead simple to get set up in comparison with VMWare, but I can't get networking to work. This is because I'm an idiot when it comes to TUN/TAP and iptables. I've read wiki articles that suggest setting up my system to NAT-forward traffic into the VM but I couldn't get that working and don't have a lot of time to screw with it.
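
For reference, the recipe those wiki articles describe boils down to something like the following; this is untested here, and the interface names, guest subnet, and even the kvm binary name are assumptions:

# host side: create a tap device and NAT guest traffic out through eth0
tunctl -t tap0                        # or 'ip tuntap add dev tap0 mode tap' on newer iproute2
ip addr add 192.168.100.1/24 dev tap0
ip link set tap0 up
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# guest side: boot against that tap and give the guest a 192.168.100.x address
kvm -hda gentoo.img -net nic -net tap,ifname=tap0,script=no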

On one of the Gentoo mailing lists I noticed that a dev has posted some KVM images of Gentoo suitable for testing. But I'm looking to start up an image from scratch and that doesn't help, and it's not going to help me get networking going any easier.

Why do I feel like this'd take 10 minutes to set up on Ubuntu? Look at this, or search for "ubuntu vmware" and see the hundreds of results. Given that it's a VM and it doesn't really matter what the host OS is anyways, I'll probably do that on my laptop, but it's still depressing.

March 31, 2009 :: Pennsylvania, USA  

March 30, 2009

N. Dan Smith

Gentoo on the iBook G4

While Debian may be suitable for my Apple Powermac G3 Blue and White, nothing can beat Gentoo on my iBook G4.  I have resolved that being a Gentoo developer is not part of my future.  But I cannot stay away from Gentoo as a user, especially when it comes to my iBook.  Pure computing joy.

It was not always so.  When I first started using Gentoo there were no drivers for the Broadcom wireless card it has.  Thankfully, free and open drivers have since been developed which work great for me.  Also, all of the Mac buttons and features (including sleep) work perfectly, so it makes a great notebook.  I plan on using it as my main workhorse for thesis research and writing.

March 30, 2009 :: Oregon, USA  

Gentoo on iBook G4: The Essentials

When it comes to running Linux on an Apple iBook G4 (or any iBook or PowerBook in general), there are a few essential resources.  Here they are:

  • Gentoo Linux PPC Handbook - The installation instructions for Gentoo are among the best documentation available for Linux.
  • Gentoo PPC FAQ - This document answers all your questions about the idiosyncrasies of running Linux on PowerPC hardware.  This includes information on how to enable your soundcard as well as recommendations for laptop-specific applications (which can be installed with portage).  First and foremost of these is pbbuttonsd ("PowerBook buttons daemon"), which makes the volume, brightness, and eject keys work, along with sleep and other power management features.  There is nothing like being able to close the lid and forget about it, just like in Mac OS X.
  • Airport Extreme Howto - This is a very clear and concise guide to getting your Airport Extreme wireless network card working.  Until these drivers came along, Linux on the iBook G4 was not very fun.  Now I can enjoy its full laptop potential.
  • Gentoo Hardware 3D Acceleration Guide - You have a Radeon Mobility video card in that iBook.  Use it!  Follow this guide to ensure that hardware rendering is enabled.  This will open the door to goodies like Compiz Fusion, which does work fairly well on the iBook G4.
  • Inputd - This program allows for right-click solutions (e.g. command + left-click = right click) and much more.  The cure for the one-button mouse.  It requires some changes in the kernel and perhaps its config file, but it should not be too challenging for any user who has successfully completed the Gentoo install.

It is best to consult all of those resources during the initial installation.  That way you do not have to go back and rebuild your kernel when you add each feature.

March 30, 2009 :: Oregon, USA  

zsh on Gentoo and OS X

I am now a zsh man.  The key to a happy zsh experience is a good ~/.zshrc file.  Thanks to Gentoo’s docs, I have a good start:

#!/bin/zsh
# completion
autoload -U compinit
compinit
# prompt
autoload -U promptinit
promptinit
prompt adam1
# options
setopt correctall
setopt autocd
setopt extendedglob
# history
export HISTSIZE=2000
export HISTFILE="$HOME/.history"
export SAVEHIST=$HISTSIZE
setopt hist_ignore_all_dups
# zstyle
zstyle ':completion:*:descriptions' format '%U%B%d%b%u'
zstyle ':completion:*:warnings' format '%BNo matches for: %d%b'
# color
[ -f /etc/DIR_COLORS ] && eval $(dircolors -b /etc/DIR_COLORS)
alias ls="ls --color=auto -h"
alias grep="grep --color=auto"

There are many more zsh options to play with.  For example, you can run prompt -l to see the list of available prompt templates if adam1 does not suit you.  Customized designs are doable as well.
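
If none of the templates suit, a hand-rolled prompt is a one-liner in ~/.zshrc; the escapes below are standard zsh prompt sequences:

# user@host cwd, ending in # for root and % otherwise
PROMPT='%n@%m %~ %# '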

You can also set the OS X Terminal.app to use zsh (/bin/zsh), but the color section of the file needs to be a bit different:

# color
alias ls="ls -Gh"
alias grep="grep --color=auto"

Enjoy!

March 30, 2009 :: Oregon, USA  

The Complete Idiot’s Guide to Paludis

Paludis is a package manager for Linux. It started out as an alternative to Portage for Gentoo, but it now also supports another distribution, Exherbo. I use Paludis in my Gentoo setup because I think it works better than Portage. Others may disagree; really it comes down to user preference. There is a lot of package manager zealotry out there, so I thought I would add my own fuel to the fire. Here are my tips for happy Paludis usage for a new user.

  • Know what you are doing with Gentoo. In other words, if you are an idiot, paludis is not for you. :-)
  • Read the documentation, including the man pages for paludis and associated programs.
  • When you configure paludis for the first time, choose the manual configuration option. You want to learn how paludis works, and this is the best introduction. This will also require you to read the configuration documentation.
  • Read and appropriately respond to the warnings and error messages paludis reports.
  • Use conf.d directories for your keywords and use configurations. This will keep your configuration files clean and organized, and will facilitate easier system administration and package testing.
  • Move your Gentoo repository from /usr/portage to /var/paludis/repositories/gentoo. It requires a little work, but it’s just better that way. This of course breaks portage (but who cares?).
  • Develop a thick skin. The paludis developers are brilliant, but they have very poor public relations skills. If you venture onto the mailing lists or into the IRC channel, do not take anything personally. Asking direct questions and providing pertinent info is an important prerequisite to getting paludis support. (Probably all software projects can benefit from not letting developers do PR.)

Flame on. :-)

March 30, 2009 :: Oregon, USA  

Two Penguins are Better than One

Yesterday I had the fortune of finding a rather affordable PowerMac dual G4 1.0 GHz, a.k.a. the “mirrored drive door.”  The machine was lacking all the drives and a video card, but I had all those to spare, so I picked it up.  Needless to say, I am quite pleased, since my G3 had been acting up of late. This machine will serve as an excellent testing/development host as well as a desktop for me.  I’ve already got Gentoo installed and I am working on getting it up to speed as a desktop.

March 30, 2009 :: Oregon, USA  

Deep Breath 4.2

I am going to be installing KDE 4.2.  Wish me luck.

March 30, 2009 :: Oregon, USA  

KDE Fails Again

Well, not really KDE.  Qt has some sort of bug on PowerPC in Gentoo where the colors get mixed up, especially orange and blue.  Also, the Rage128 xorg driver is apparently broken with xorg-server 1.5.  So I guess I am sticking with XFCE4 for the time being.

March 30, 2009 :: Oregon, USA  

Nagios

We decided to add some proactive monitoring to various systems at work. I discovered Nagios.  It was not difficult to install and configure, and there is even some Gentoo-specific documentation. I had to customize the default install a bit to accommodate lighttpd and nbsmtp (the mailer I use). Now all of our servers are monitored and alerts are sent out via email (to a Crackberry) as needed.

During the course of configuring servers I had the misfortune of discovering a bug in one of our machines which defies any attempt at a Let-Me-Google-That-For-You fix, so alas I will be calling MSFT support tomorrow.  If I get that fixed and get the stupid coffee bar point-of-sale machine to stay operational, I will be a happy camper.

March 30, 2009 :: Oregon, USA  

Nagios using NBSMTP as an MTA

I wanted to use email notifications in Nagios, but I didn’t want to set up a complicated mail transfer agent (postfix, qmail, exim, etc.). I discovered nbsmtp ("no-brainer SMTP") through my experience with Mutt on Gentoo. It is not a real MTA; it just punts your outgoing mail to another mail server (your ISP, Gmail, etc.). Yesterday I married the two, and since I could not find any documentation online about it, I will post it here.

First install nbsmtp on your system.  Then switch users to the nagios user (probably "nagios" - whichever user your nagios instance runs as).  In that user’s home folder, create .nbsmtprc and fill in the following:

auth_user = from-address@example.com
auth_pass = your_password_here
relayhost = smtp.example.com
fromaddr = from-address@example.com
port = 587
use_starttls = True
domain = example.com

This example happens to work if you are using Gmail.  Just adjust your settings accordingly.  Now whenever the nagios user runs nbsmtp, all of the runtime configuration can be read from the file, which simplifies the command. Next, nagios’ commands.cfg needs to be customized to reflect the change to nbsmtp.  Here is my example:

/usr/bin/printf "%b" "To: $CONTACTEMAIL$\nFrom: from-address@example.com\nSubject: $NOTIFICATIONTYPE$ Host $HOSTNAME$ is $HOSTSTATE$\n\nType: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\nTime: $LONGDATETIME$\n" | nbsmtp

/usr/bin/printf "%b" "To: $CONTACTEMAIL$\nFrom: from-address@example.com\nSubject: $NOTIFICATIONTYPE$ Svc $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$\n\nType: $NOTIFICATIONTYPE$\nSvc: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddr: $HOSTADDRESS$\nState: $SERVICESTATE$\nTime: $LONGDATETIME$\nInfo:\n$SERVICEOUTPUT$\n" | nbsmtp

Nagios will fill in the variables; you just need to set the From address to match your .nbsmtprc.  The key to these commands is that you have one line for each header (To:, From:, Cc:, Subject:, etc.) and then two newline characters before the body of the message.  After that you can format the message however you like. Assuming everything is properly configured, you should receive email alerts from Nagios when there is trouble.  Of course, it is best to send a test alert to verify that email works before you run into a real problem.
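
A quick smoke test from the nagios user, mirroring the header-then-blank-line pattern above (the addresses are placeholders):

$ printf "To: you@example.com\nFrom: from-address@example.com\nSubject: nagios test\n\nIt works.\n" | nbsmtp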

March 30, 2009 :: Oregon, USA  

TopperH

Gentoo releases, my point of view

This entry follows up on this nice article by Jeremy Olexa (darkside) and the related comments.

I'm not a developer and I don't know much about the technical stuff my idea involves; it's just a personal, different approach to the question that Jeremy asks.

Reading the article and the comments it looks like PR and advertising are the main issues. I couldn't agree more. When a distro comes out with a new version, popular sites (slashdot, distrowatch...) write an article, popular bloggers try the distro and write their opinion, other bloggers publish screenshots... A lot of buzz is generated, and people are aware that the new distro is out.

Gentoo is always up to date

Gentoo is a rolling release distro; it never needs upgrades, just updates. The installation on my workstation (made in 2006) is just as up-to-date as the shiny new install on my laptop. That's Gentoo magic: you sync, you emerge world, and every day you have the latest and greatest.
Most people don't realise that, and this is why the whole "Gentoo is dead" meme keeps growing.

Installation media

So, when is it that I have a new Gentoo release? Maybe when new installation media are out?
Well, I used the minimal Gentoo CD just once to actually do an installation; then I realised that there are better ways to install Gentoo. I think Gentoo could invest less manpower in installation media releases. What we need is a very minimal CD, with basic tools for networking and partitioning (lvm and raid), that is updated no more than every 12-18 months, plus a very clear and complete chapter in the handbook explaining how Gentoo can be built using livecds like SystemRescueCd, Knoppix, Sabayon, or even Ubuntu, and how people coming from other distros can install Gentoo in a partition without leaving the environment they are familiar with.

So, what makes a new release?

If I look at other people's workstations, I can usually tell at first sight whether they are using Ubuntu, Windows XP, Suse, OSX... The fact is that a lot of people don't care much about theming their desktops; they just keep the vanilla install as it is.

Let's be honest: I'm sure the Ubuntu developers did a lot of background work, but what comes out in the press for the next release? "A new notification style, and a shining new color theme." Wow... those guys are great at PR.

Gentoo doesn't have a consistent artwork theme, and if I publish a screenshot of my desktop today it will look more or less the same as my desktop two years ago.
So, here comes my suggestion: a new Gentoo release every time a new artwork theme is ready. I'm not kidding; let's see how it would work...

How it works

The Gentoo artwork team provides consistent themes and wallpapers for the most popular DEs, login managers, toolkits, framebuffer and grub. (The Sabayon guys are really good at this: from the moment you boot till the moment you are in the graphical environment, the transition looks really smooth.) All those themes will be shipped in a package called media-gfx/gentoo-artwork and versioned like Gentoo releases (2009.0, 2010.1, etc.). Those packages will be slotted.

This package will have a USE flag for each of the packages we have a theme for, for example "grub framebuffer xdm gdm kdm slim gnome xfce kde openbox wallpaper", and according to the selected ones the relevant parts will be extracted.

The extracted themes will be named according to version (gtk-theme-gentoo-2009.0, gtk-theme-gentoo-2010.1) and with a symlink (gtk-theme-gentoo-default) that will be managed by an eselect module.

Assuming I have a default installation with no personal customizations, when a new version of gentoo-artwork comes out all I have to do is run "eselect gentoo-artwork set n" and ta-dah, my whole Gentoo changes shape and I'm ready to publish screenshots of my new Gentoo on this blog.
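
Following the conventions of existing eselect modules, the workflow would presumably look like this; the module is hypothetical, so the output here is invented:

# eselect gentoo-artwork list
Available gentoo-artwork targets:
  [1]   gentoo-artwork-2009.0 *
  [2]   gentoo-artwork-2010.1
# eselect gentoo-artwork set 2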

Of course, if this new artwork comes along with a new major version of portage, or a new stabilized gcc, I will have something more to blog about :P

What else?

Gentoo is all about choices, so if I want to keep the current behaviour, all I have to do is add "-gentoo-artwork" to the USE flags in my make.conf.

My 2 cents...

March 30, 2009 :: Italy  

Kyle Brantley

v6 tunnels and v4 firewalls

My home network has "native" IPv6 through a series of tunnels that I've set up. The setup is pretty basic. A v6-in-v4 tunnel comes in through HE to my server, giving my server control over... a lot of v6. From here I segment it off a bit, and then branch the connectivity out over several other tunnels. One of these tunnels, as you could guess, heads to my home router.

When I was initially setting up the server <--> home tunnel, my firewalling rules gave me a fair bit of crap. Staring at tcpdump for quite some time didn't give me any leads concerning the proper rule to create, and I wound up whitelisting my entire home IPv4 address (that sounds a bit silly - whitelisting an 'entire v4 address' - you know, all one of them).

I finally got sick of allowing this IP full access to everything, because there were quite a number of ports "open" on the server that I didn't want anyone outside accessing. This also caused problems with creating proper rules in the first place, because my only test bed was... an entirely whitelisted IP. Suffice it to say, some things that I thought were open were in fact not open to anyone but me, and this caused me quite the headache before I figured it out.

So how did I fix this? The answer is actually pretty simple - 42.

Wait, no. I meant 41. Sorry. Really I did. 41 is the protocol number assigned to IPv6 (specifically, IPv6 encapsulated in IPv4). If this was obvious to others, well, sorry that I'm so slow. I didn't know. If I had known that I should be picking random numbers and trying them in a not-exactly-often-used iptables command, then maybe I would have done this earlier.

Fun fact: "TCP" is 6. Note how this is ambiguous in terms of which "IP" it means, but in this case, it means IPv4. Why TCP is "6" is evidently defined in RFC 793, and why IPv6 is "41" can be found in RFC 1883 (or 1112, not exactly sure).

Note how TCP is 6, and that UDP is 17. Both TCP and UDP are commonly known as "TCP/IP" and "UDP/IP." Both of these operate quite nicely over both IPv4 and IPv6. IPv6 has an assigned number - but IPv4 does not. How you would intermix this I'm not sure. I can block IPv6 quite nicely it seems, but IPv4 is strangely absent. Does 6 imply v4? Does 17 imply v4? How can you filter UDP over 41?

I have no idea. I'm confused too. If you can make sense of the why, I'd be very interested in finding out why these protocol numbers seem so convoluted and inconsistent. It is pretty obvious that the protocol number for v6 was tacked on long after the base numbers for TCP and UDP were established, but whatever.

Enough rambling.

So how did I fix this firewalling issue?

# iptables -I INPUT -s <v4 home address here> -p 41 -j ACCEPT

... from the tunnel server. I didn't have to create a matching rule on my home router, and of course, ymmv.

For those of you familiar with iptables, the "-p 41" may look somewhat familiar to you. It should:

# iptables -I INPUT -p tcp --dport 80 -j ACCEPT

It is just a simple protocol match. All we're doing is matching the v4 source address and the v6 payload, and allowing it through. Despite the above example doing something quite different, the -p switch does the same thing: it matches a protocol.

March 30, 2009 :: Utah, USA  

March 29, 2009

Steven Oliver

Proposed Small PC


I recently posted that I wanted a new PC. Well, I want a desktop anyway. My Apple laptop is still in excellent shape, especially since I dropped 4GB of RAM into it. Anyway, I have built a computer for myself on Newegg and saved it as a public wish list. Something to consider before viewing: I will be reusing my current hard drive along with my current CD/DVD burner. Outside of that, I think I’ve got everything there I need.

NewEgg Wish List
(If that stupid link doesn’t work blame newegg)

Suggestions?

Enjoy the Penguins!

March 29, 2009 :: West Virginia, USA  

March 27, 2009

Ciaran McCreesh

EAPI 3: A Preview


Gentoo is shuffling its way towards EAPI 3. The details haven’t been worked out yet, but there’s a provisional list of things likely to show up that’s mostly been agreed upon. This post will provide a summary; when EAPI 3 is finalised, I’ll do a series of posts with full descriptions as I did for EAPI 2. PMS will remain the definitive definition; I’ve put together a draft branch (I’ll be rebasing this, so don’t base work off it if you don’t know how to deal with that).

Everything on this list is subject to removal, arbitrary change or nuking from orbit. We’re looking for a finalisation reasonably soon, so if it turns out Portage is unable to support any of these, they’ll be dropped rather than holding the EAPI up.

EAPI 3 will be defined in terms of differences to EAPI 2. These differences may include:

  • pkg_pretend support. This will let ebuilds signal a lot more errors at pretend-time, rather than midway through an install of a hundred packages that you’ve left running overnight. This feature is already in exheres-0.
  • Slot operator dependencies. This will let ebuilds specify what to do when they depend upon a package that has multiple slots available — using :* deps will mean “I can use any slot, and it can change at runtime”, whilst := means “I need the best slot that was there at compile time” (see the sketch after this list). This feature is already in exheres-0 and kdebuild-1.
  • Use dependency defaults. With EAPI 2 use dependencies, it’s illegal to reference a flag in another package unless that package has that flag in IUSE. With use dependency defaults, you’ll be able to use foo/bar[flag(+)] and foo/bar[flag(-)] to mean “pretend it’s enabled (disabled) if it’s not present”. This feature is already in exheres-0.
  • DEFINED_PHASES and PROPERTIES will become mandatory (they’re currently optional). This won’t have any effect for users (although without the former, pkg_pretend would be slooooow).
  • There’s going to be a default src_install of some kind. Details are yet to be entirely worked out.
  • Ebuilds will be able to tell the package manager that it’s ok or not ok to compress certain documentation things using the new docompress function.
  • dodoc will have a -r, for recursively installing directories.
  • doins will support symlinks properly.
  • || ( use? ( ... ) ) will be banned.
  • dohard and dosed will be banned. (Maybe. This one’s still under discussion.)
  • New doexample and doinclude functions. (Again, maybe. Quite a few people think these’re icky and unnecessary.)
  • unpack will support a few new extensions, probably xz, tar.xz and maybe xpi.
  • econf will pass --disable-dependency-tracking --enable-fast-install. This is already done for exheres-0.
  • pkg_info will be usable on uninstalled packages too. This is already in exheres-0 and kdebuild-1.
  • USE and friends will no longer contain arbitrary extra values. (Possibly. Not sure Portage will have this one done in time.)
  • AA and KV will be removed.
  • New REPLACED_BY_VERSION and REPLACING_VERSIONS variables, to let packages work out whether they’re upgrading / downgrading / reinstalling. exheres-0 has a more sophisticated version.
  • The automatic S to WORKDIR fallback will no longer happen under certain conditions. exheres-0 already has this.
  • unpack will consider unrecognised suffixes an error unless --if-compressed is specified, and the default src_unpack will pass this. exheres-0 already has this. (Maybe. Not everyone’s seen the light on this one yet.)
  • The automagic RDEPEND=DEPEND ick will be gone.
  • Utilities will die on failure unless prefixed by nonfatal. exheres-0 already has this.
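
To make the new dependency syntax concrete, here is a hypothetical EAPI 3 ebuild fragment combining slot operator dependencies and use dependency defaults; it is illustrative only, since the final syntax could still change:

# := rebuild me when the slot I was built against changes;
# :* any slot will do, even changing at runtime;
# [python(-)] treat a missing 'python' flag as disabled.
EAPI="3"
DEPEND=">=dev-libs/glib-2.18:=
    dev-db/sqlite:*
    dev-libs/libxml2[python(-)]"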

Unless, of course, something completely different happens.

Posted in eapi 3 Tagged: eapi, eapi 3, gentoo

March 27, 2009

Brian Carper

Blog source code released

By popular demand, I've released the source code for my blog. Hope someone finds it useful.

http://github.com/briancarper/cow-blog/tree/master

Feedback and bug reports welcome, email me or post them somewhere on my blog and I'll find them.

March 27, 2009 :: Pennsylvania, USA