Planet Larry

April 22, 2009

Steven Oliver

Setting up Oracle


Setting up Oracle (at least under Windows) is the most horrible experience I’ve ever had with software. Why is it so bad? How can a company that can afford to buy Sun make such crappy software?

Okay. I realize that Oracle’s databases are some of the best, if not the best, in the world. But it’s absurd what you have to go through to install the client and everything else you need just to work with one. At work, like a lot of companies, we use Oracle, of course, but not only do we use it, we use something like four different versions of it. So that means I have to have, at least for now, two different versions of the Oracle client installed on my work PC.

For all of you who have never had to do this: each version of the client, assuming you do a complete install, is roughly 900MB a pop. Each install requires a different “home” directory, and each creates its own set of registry keys (we use Windows XP, of course). So far so good, I guess, except for the size, and except that the installers never work properly. For example, if you need version 7 along with a later version, you have to install version 7 first or it will ruin your other clients.

So today, after realizing I had magically gained somewhere upward of five different homes and only two functioning clients (9.0.4 and 10.0.1), I decided to blow it all away and start fresh. It seemed like perfect timing as well, since we’re upgrading our main DB to 10g, which meant my default client, 9.0.4, wouldn’t work anymore.

Doing all of that ended up being very painful, because Oracle really buries itself in your registry, but eventually I installed both clients. After installing my new 10g client along with 9.0.4 (I still need it for other databases), I naturally encountered errors. Logging in through various means, I hit two of them: ORA-12222 and ORA-12538. After about an hour on Google trying to find the answer, I finally figured it out, buried deep in a forum post that the guy with the problem had totally ignored.

The first install had apparently set up two ORACLE_HOME environment variables, one for the User and one for the System. After deleting both, both errors went away.
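If you ever need to hunt these down yourself, here is a rough sketch from a Windows XP command prompt (the System Properties dialog works just as well; the registry paths below are the standard locations for per-user and machine-wide environment variables, and the HKLM one needs admin rights):

:: show any ORACLE_HOME value currently visible
set ORACLE_HOME

:: delete the per-user copy
reg delete "HKCU\Environment" /v ORACLE_HOME /f

:: delete the machine-wide copy
reg delete "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v ORACLE_HOME /f

Log out and back in afterwards so running programs pick up the change.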

All of that and Toad for Oracle, latest version, still crashes on me on a regular basis.

Enjoy the Penguins!

April 22, 2009 :: West Virginia, USA  

April 20, 2009

Michael Klier

Spider Your DokuWiki Using Wget

Some of you might have been in this situation already and know that it's sometimes necessary to spider your DokuWiki, for example if you need to rebuild the search index, or if you use the tag plugin and don't want to visit each page yourself to trigger the (re)generation of the needed metadata1).

Here's a quick bash snippet using wget that I want to share with you. You have to run it inside your <dokuwiki>/data/pages folder or it won't work.

# run this from inside <dokuwiki>/data/pages
for file in $(find ./ -type f -name '*.txt'); do
    file=${file//.\//}             # strip the leading ./
    file=${file//\//:}             # path separators become namespace colons
    file=$(basename "$file" .txt)  # drop the .txt extension
    url="http://yourdomain.org/doku.php?id=$file"
    wget -nv "$url" -O /dev/null
    [ $? != 0 ] && echo "ERROR fetching $url"
    sleep 1
done

There are probably a million other ways to do this in bash. The reason I search the pages directory instead of using the <dokuwiki>/data/index/page.idx file is that pages added by a script could be missing from the global index.

Note: I would set the sleep interval to at least one second (if not more) in order to give the indexer enough time to finish its job, and to avoid lock conflicts.

1) DokuWiki uses a web bug to do everything that's needed in the background

April 20, 2009 :: Germany  

April 19, 2009

Nicolas Trangez

Erlang, Python and Twisted mashup using TwOTP

Recently, I’ve been toying around with Erlang again. After creating some simple apps, I wanted to integrate some Erlang code inside a Python application (Python is still my favorite day-to-day language, it’s used at work, and I’m sort-of convinced Erlang would be a good choice for several of the applications we need to develop, integrated with our existing Python code). The most obvious solution would be to use an Erlang port, but this is IMHO rather cumbersome: it requires a developer to define a messaging format, write parsing code for incoming messages, etc. There’s a tutorial available if you want to take this route.

A more elegant solution is creating a node using Python, similar to JInterface and equivalents. Luckily there’s an existing project working on a library to create Erlang nodes using Python and Twisted: TwOTP.

One downside: it’s rather under-documented… So here’s a very quick demo of how to call functions on an Erlang node from within a Twisted application.

First of all we’ll create two Erlang functions: one which returns a simple “Hello” message, and one which uses an extra process to return ‘pong’ messages on calls to ‘ping’, counting those calls as it goes.

The code:

-module(demo).
-export([hello/1, ping/0, start/0]).

hello(Name) ->
    Message = "Hello, " ++ Name,
    io:format(Message ++ "~n", []),
    Message.

ping_loop(N) ->
    receive
        {get_id, From} ->
            From ! {pong, N},
            ping_loop(N + 1)
    end.

ping() ->
    pingsrv ! {get_id, self()},
    receive
        {pong, N} -> ok
    end,
    {pong, N}.

start() ->
    Pid = spawn_link(fun() -> ping_loop(1) end),
    register(pingsrv, Pid).

This should be straightforward if you’re familiar with Erlang (which I assume).

The Python code is not that hard to grasp either: it follows the basic Twisted pattern. First one creates a connection to EPMD, the Erlang Port Mapper Daemon (used to find other nodes), then a connection to the server node, and finally functions can be called (calls work the same way as Erlang’s rpc module).

Here’s the code. I’d advise reading it bottom-to-top:

import sys

from twisted.internet import reactor
import twotp

def error(e):
    '''A generic error handler'''
    print 'Error:'
    print e
    reactor.stop()

def do_pingpong(proto):
    def handle_pong(result):
        # Parse the result
        # 'ping' returns a tuple of an atom ('pong') and an integer (the pong
        # id)
        # In TwOTP, an Atom object has a 'text' attribute, which is the string
        # form of the atom
        text, id_ = result[0].text, result[1]
        print 'Got ping result: %s %d' % (text, id_)
        # Recurse
        reactor.callLater(1, do_pingpong, proto)

    # Call the 'ping' function of the 'demo' module
    d = proto.factory.callRemote(proto, 'demo', 'ping')
    # Add an RPC call handler
    d.addCallback(handle_pong)
    # And our generic error handler
    d.addErrback(error)

def call_hello(proto, name):
    def handle_hello(result):
        print 'Got hello result:', result
        # Erlang strings are lists of numbers
        # The default encoding is Latin1, this might need to be changed if your
        # Erlang node uses another encoding
        text = ''.join(chr(c) for c in result).decode('latin1')
        print 'String form:', text
        # Start pingpong loop
        do_pingpong(proto)

    # Call the 'hello' function of the 'demo' module, and pass in argument
    # 'name'
    d = proto.factory.callRemote(proto, 'demo', 'hello', name)
    # Add a callback for this function call
    d.addCallback(handle_hello)
    # And our generic error handler
    d.addErrback(error)

def launch(epmd, remote, name):
    '''Entry point of our demo application'''
    # Connect to a node. This returns a deferred
    d = epmd.connectToNode(remote)
    # Add a callback, called when the connection to the node is established
    d.addCallback(call_hello, name)
    # And add our generic error handler
    d.addErrback(error)

def main():
    remote = sys.argv[1]
    name = sys.argv[2]
    # Read out the Erlang cookie value
    cookie = twotp.readCookie()
    # Create a name for this node
    this_node = twotp.buildNodeName('demo_client')
    # Connect to EPMD
    epmd = twotp.OneShotPortMapperFactory(this_node, cookie)
    # Call our entry point function when the Twisted reactor is started
    reactor.callWhenRunning(launch, epmd, remote, name)
    # Start the reactor
    reactor.run()

if __name__ == '__main__':
    main()

Finally, to run it, you should first start a server node, and run the ‘pingsrv’ process:

MacBook:pyping nicolas$ erl -sname test@localhost
Erlang (BEAM) emulator version 5.6.5 [source] [async-threads:0] [hipe] [kernel-poll:false]

Eshell V5.6.5  (abort with ^G)
(test@localhost)1> c(demo).
{ok,demo}
(test@localhost)2> demo:start().
true

Notice we started erl with test@localhost as the short node name.

Now we can launch our client:

(pythonenv)MacBook:pyping nicolas$ python hello.py 'test' Nicolas
Got hello result: [72, 101, 108, 108, 111, 44, 32, 78, 105, 99, 111, 108, 97, 115]
String form: Hello, Nicolas
Got ping result: pong 1
Got ping result: pong 2
Got ping result: pong 3

‘test’ is the shortname of the server node.

You can stop the ping loop using CTRL-C. If you restart the client afterwards, you can see the ping IDs were retained:

(pythonenv)MacBook:pyping nicolas$ python hello.py 'test' Nicolas
Got hello result: [72, 101, 108, 108, 111, 44, 32, 78, 105, 99, 111, 108, 97, 115]
String form: Hello, Nicolas
Got ping result: pong 4
Got ping result: pong 5

That’s about it. Using TwOTP you can also develop a node which exposes functions, which can be called from an Erlang node using rpc:call/4. Check the documentation provided with TwOTP for a basic example of this feature.
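Just to give a flavor of that direction (every name below is a placeholder, not TwOTP API): once the Python node exposes a module, calling it from the Erlang shell would look like any other RPC:

(test@localhost)3> rpc:call(demo_client@MacBook, my_module, my_fun, []).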

Combining Erlang applications as distributed, fault-tolerant core infrastructure with Python/Twisted applications for ‘everyday coding’ can be an interesting match in several setups, and TwOTP provides all the required functionality to integrate the two platforms easily.

April 19, 2009

Daniel de Oliveira

Avahi/mDNSResponder war


If you’re using GNOME and have tried to compile something like media-sound/amarok on your Gentoo box, you know what I mean.

To fix this, just enable the avahi USE flag for kdelibs, so KDE will not use mDNSResponder and you’ll be able to use Amarok inside GNOME (sorry guys, but GNOME lacks a good media player, and yes, I also tested every media player, Exaile and the rest).
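For the record, a minimal sketch of what that looks like on a Gentoo box (kde-base/kdelibs is the usual package atom; adjust if your tree differs):

# /etc/portage/package.use
kde-base/kdelibs avahi

# then rebuild kdelibs against avahi
emerge --oneshot kde-base/kdelibs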

April 19, 2009 :: São Paulo, Brazil  

Roy Marples

Breathing some life into ifconfig for Linux

Linux ifconfig truly sucks. It cannot handle multiple inet addresses easily (using aliases for the interface name sucks), and it's awkward for a lot of scripting usage. Whilst iproute2 is a lot friendlier, it's also not ifconfig - which makes portable(ish) network configuration a lot harder.
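To illustrate the multiple-address point (interface names and addresses here are examples only):

# Linux ifconfig: a second address means inventing an alias interface
ifconfig eth0:0 192.168.0.2 netmask 255.255.255.0 up

# iproute2: addresses are first-class objects on the same interface
ip addr add 192.168.0.2/24 dev eth0

# BSD ifconfig: an extra address is just an alias, no fake interface names
ifconfig fxp0 inet 192.168.0.2/24 alias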

Lastly, Linux has a lot of misc tools (ifenslave, vconfig, etc.) which should all be wrapped up into ifconfig. The BSD ifconfig is a very good example of how network configuration should be done.

As such, I will attempt to write an ifconfig for Linux which addresses all of the above, with a comprehensive man page. This will in turn make the new network script for OpenRC a lot easier to use. It's not aimed at being an iproute2 replacement, as iproute2 handles a lot of other stuff outside the scope of ifconfig.

April 19, 2009

Cruising....

So I've been away on a cruise to France and Spain aboard the Oceana, to celebrate my parents' Ruby Wedding Anniversary (it was last year, but the arrival of Robyn delayed things). It was my first cruise and a very enjoyable experience, with the exception of getting an acute allergic reaction from the on-board Temple Spa shower gel (my normal stuff leaked).

Since being back I've been very busy, and both Abbey and I have been recovering from various ailments, but we're now in tip-top condition! I've also had time to post some new screen shots (filter for Family Cruise (Apr 2009)).

Now the parents have promised a repeat cruise in 9 years' time - can't wait :)

April 19, 2009

Brian Carper

Clojure Reader Macros

Unlike Common Lisp, Clojure doesn't support user-defined reader macros. You can read some of the rationale for why in this chat log, among other places. I think that's probably a good decision; I don't see a lot of need for mangling the reader. Regular macros get you pretty far already and Clojure has built-in reader support for all the good stuff.

But how hard would it be to have custom reader macros in Clojure if you wanted them? Turns out not too hard if you're willing to ruthlessly break encapsulation and rely on implementation details. Here's one way you could define a dispatch reader macro (i.e. one starting with # and some specified second character):

(defn dispatch-reader-macro [ch fun]
  ;; grab LispReader's static table of #-dispatch handlers via reflection
  (let [dm (.get (.getDeclaredField clojure.lang.LispReader "dispatchMacros") nil)]
    ;; install fun as the handler for dispatch character ch
    (aset dm (int ch) fun)))

Pass in a character and an fn and you get a reader macro. For a silly example let's make reader syntax to uppercase a literal string.

(defn uppercase-string [rdr letter-u]
  (let [c (.read rdr)]
    (if (= c (int \"))
      (.toUpperCase (.invoke
                     (clojure.lang.LispReader$StringReader.)
                     rdr
                     c))
      (throw (Exception. (str "Reader barfed on " (char c)))))))

The function is passed a reader and the dispatch character (which you can usually ignore). I cheat and use Clojure's StringReader to do the real work.

Now I can do this:

user> (dispatch-reader-macro \U uppercase-string)
#<user$uppercase_string__1295 user$uppercase_string__1295@9b59a2>

user> #U"Foo bar BAZ"
"FOO BAR BAZ"

user> (println #U"foo\nbar")
FOO
BAR
nil

user> #U(blarg)
java.lang.Exception: Reader barfed on (

user> (= "FOO" "foo")
false

user> (= "FOO" #U"foo")
true

Oh sweet Jesus don't use this in real code, because:

  1. The community will rightly hunt you down with torches and pitchforks.
  2. Reader macro characters are reserved and may conflict with later changes to the core language.
  3. These are set globally, not per-namespace.
  4. And so on. Just don't.

But I think it's a nice demonstration. I've read opinions that Clojure isn't a Real Lisp™ because a lot of Clojure is written in Java and isn't extensible from Clojure itself, but that's generally not true. The reader code for Clojure was all written in Java, yet above I modified it from Clojure. There is no line separating Java-land and Clojure-land. It's all one big happy family.

April 19, 2009 :: Pennsylvania, USA  

April 18, 2009

Dirk R. Gently

Webkit browsers on their way to Linux but not there yet


Firefox really shook up the browser wars when it released version 3.0. The more I use it, the more I realize what a great browser it is. When Firefox first released 3.0 it was full steam ahead: soon we heard about a new JavaScript engine, and it seemed like 3.1 was just on the horizon. Then something happened, and the Firefox locomotive haltingly put its brakes on. 3.1 was delayed indefinitely, and a horrible exploit bug was discovered. Firefox also stopped working with my Hotmail account (probably more a problem with Hotmail). While Firefox gets things back on track, I decided it was a good time to try the new web browser rendering engine, Webkit.

Webkit in General

Webkit is a rendering engine based on KHTML (the rendering engine of KDE’s Konqueror) that has been radically modified by Apple for their web browser, Safari. Because Webkit has received a good amount of development, it will probably replace KHTML in KDE soon.

Rekonq

Rekonq is an effort to replace KHTML with Webkit in Konqueror. One of the first things you’ll notice about Webkit is that it renders pages really fast. This could be because it’s new, but from my tests Webkit seems to be able to render anything that Firefox can. Not only that, but Webkit renders web pages beautifully.

Still in its early stages, Rekonq doesn’t offer many settings yet: saved passwords, minimum font size, saved tabs… And with Qt’s version of Webkit, redirects don’t work yet.

Arora

Arora has been in development longer than Rekonq and has a few more settings. It includes privacy settings, tab session saving, proxy support…

Arora’s a good browser that’s coming along nicely. If I were to gripe about anything, it’s that Arora commits a big no-no by forcing a default font, so that web pages just don’t look the way they should.

Chromium

Google’s new browser Chrome also uses Webkit, but it was originally designed for Windows. Thankfully Google had the good graces to open-source the project, and very early Linux builds are being made. I didn’t get a chance to try Chromium yet: development has centered on the 32-bit version of Chrome, so no build is available for my 64-bit machine. And it looks like I may not be trying Chromium soon either, as producing a 64-bit version will mean clearing some pretty big hurdles. I did try cxchromium, though (an altered version of Chrome designed to run under Wine), and I got an idea of what they are trying to do. I like the modular tabs, which separate the different web pages and their http boxes nicely. I also like the all-in-one http box that can be used for searches, previously visited sites, and bookmarks.

Midori

Midori I’m going to label the current champ of Linux Webkit browsers. It can save tabs, has a minimum font size setting, works nicely with Flash, and can zoom pages. Midori uses GTK and appears to be progressing nicely:

Midori may be the first real Firefox alternative on Linux. Hopefully they’ll fix the same mistake Arora makes of forcing a default font.

Epiphany

A while back, Epiphany made the commitment to switch from Gecko (Firefox’s rendering engine) to Webkit. Unfortunately development has been slow, and the Webkit backend didn’t make it into Gnome 2.26. Looking at the newest version, though, it looks about ready.

Epiphany updated its http box too, to behave more like Firefox’s awesome bar does, and it’s a nice touch. Again, this browser forces a default font, and configurability is limited. Epiphany, though, for the most part runs great on lower-end machines.

Leader of the Pack

I thought about switching to another web browser because I use KDE and would just prefer it that way. I can say that I was pretty close. From my tests, Webkit could render anything Firefox did as well or better, and Flash worked well with all of them for the most part. None of these browsers, though, recognized the Java plugin. While I’m sure there’s a hack out there, I didn’t really want to make a hack and then try to remember how to undo it later. Mostly, though, I didn’t leave Firefox because there are some great things about it that are hard to leave behind. First, the awesome bar is, well… awesome. Not only can I find previously viewed webpages easily, I can also find webpages that I visited long ago, and the awesome bar does it quickly. I also use page zooming in Firefox quite a bit. Because of how some web pages choose their font sizes, reading a long article with small fonts can be a strain on the eye. Firefox not only zooms the entire page, it also remembers the setting, so that the next time I go back there I don’t have to do it again.

No, I don’t think I’ll be migrating away from Firefox anytime soon, but I don’t think a good Webkit browser is too far off on the horizon.

April 18, 2009 :: WI, USA  

Brian Carper

Vim regexes are awesome

Two years ago I wrote about how Vim's regexes were no fun compared to :perldo and :rubydo. Turns out I was wrong; it was just a matter of not being used to them.

Vim's regexes are very good. They have all of the good features of Perl/Ruby regexes, plus some extra features that don't make sense outside of a text editor, but are nonetheless very helpful in Vim.

Here are a few of the neat things you can do.

Very magic

Vim regexes are inconsistent when it comes to what needs to be backslash-escaped and what doesn't, which is the one bad thing. But Vim lets you prefix a pattern with \v to make everything suddenly consistent: everything except letters, numbers and underscores becomes "special" unless backslash-escaped.

Without \v:

:%s/^\%(foo\)\{1,3}\(.\+\)bar$/\1/

With \v:

:%s/\v^%(foo){1,3}(.+)bar$/\1/

Far easier to read. Along with \c to turn case sensitivity on and off, these are good options to make a habit of prepending to regexes when needed. It eventually becomes second nature. See also :h /\v

Spanning newlines

One thing that :perldo and :rubydo can't do is span newlines; you can't combine two lines and you can't break one line into two.

But Vim's regexes can span newlines if you use \_. instead of .. I find this a lot more aesthetically pleasing than Perl's horrible s and m modifiers tacked onto the end of a regex. E.g. this strips <body> tags from a text document:

:%s@<body>\v(\_.+)\V</body>@\1@

(Note: in real life, never use a regex to parse HTML or XML. Down that path lies madness. The above is OK because I'd expect only one <body> tag to appear in any document.)

(Note^2: being able to turn on and off magic in the middle of a regex is awfully helpful.)

(Note^3: You can use arbitrary delimiters like @ for the regex, which is useful if your pattern includes literal /'s.)

See also :h \_.

\zs

Vim lets you demand that some text match, but ignore that text when it comes to the substitution part. This is handy for certain specific kinds of regexes. Normally if you want to match some text and then leave it alone in the substitution, you have to capture it and then put it back manually; \zs lets you avoid this.

Say you want to chop some text off the end of a line, but leave the rest of the line alone. Normally you'd have to do this:

:%s/\v^(foobar)(baz)/\1/

to put the foobar back. Of course you can also use a zero-width lookbehind assertion:

:%s/\v(^foobar)@<=baz//

But that's even more line-noise. This is the easiest way:

:%s/^foobar\zsbaz//

See :h /\zs. (And :h /\@<= if you're so inclined.)

Expressions

Using \=, you can put arbitrary expressions on the right side of a regex substitution. For example say you have this text:

~/foo ~/bar

If you do this:

:%s/\v(\S+)/\=expand(submatch(1))/g

You end up with:

/home/user/foo /home/user/bar

Because you can also call your own user-defined functions in the expression part, this can end up being pretty powerful. For example, it can be used to insert incrementing numbers into arbitrary places in your text, as sketched below. See :h sub-replace-\=.
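A minimal sketch of that incrementing-numbers trick (the function and variable names are mine, not from the help):

" source this from your vimrc or a scratch file
let g:counter = 0
function! Inc()
  let g:counter += 1
  return g:counter
endfunction

" then, to number every line in the buffer:
:%s/^/\=Inc() . '. '/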

And so on

Read :h regexp if you haven't already. There are tons of other features in there that can make your life easier if you manage to internalize them. It is difficult to get used to Vim's funky syntax if you're very familiar with Perl/Ruby-style regexes, but I think it's worth it. Only took me two years! (OK, more like a couple days of concerted effort after a year-and-a-half delay.)

April 18, 2009 :: Pennsylvania, USA  

N. Dan Smith

Crunchbang your Gentoo

Through Identica I discovered Crunchbang (#!) Linux, an Ubuntu-derived distro with a minimalist GUI. I was enamored with Crunchbang’s visual style, so I installed it on a spare machine and took a look under the hood. Essentially it is Openbox and some LXDE components with a few other goodies. I realized that I could easily replicate the Crunchbang desktop on my Gentoo machine. So I pulled in the following packages (and did some keyword-fu; one possible emerge invocation is sketched after the list):

  • lxde-base/lxsession-lite
  • lxde-base/lxappearance
  • x11-misc/pcmanfm
  • lxde-base/lxpanel
  • x11-misc/parcellite
  • x11-wm/openbox
  • x11-misc/obconf
  • x11-misc/obmenu
  • x11-misc/menumaker
  • x11-misc/nitrogen
  • app-admin/conky
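Assuming a standard Portage setup (with ~arch keywords added where needed), pulling those in boils down to something like:

emerge -av lxde-base/lxsession-lite lxde-base/lxappearance x11-misc/pcmanfm \
    lxde-base/lxpanel x11-misc/parcellite x11-wm/openbox x11-misc/obconf \
    x11-misc/obmenu x11-misc/menumaker x11-misc/nitrogen app-admin/conky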

Then I set up my ~/.config/openbox/autostart.sh as follows:

lxsession &                  # session manager
pcmanfm --daemon-mode &      # file manager (also draws the desktop)
nitrogen --restore &         # restore the wallpaper
conky &                      # system monitor
(sleep 2s && lxpanel) &      # panel, delayed until the desktop is up
(sleep 1s && parcellite) &   # clipboard manager

That is just a modified version of the autostart.sh which Crunchbang ships with (I removed some cruft like gnome-power-manager). After a bit more tweaking, this is the result:

Gentoo Crunchbang

Pretty close to the default Crunchbang desktop. Some suggested slogans:

  • What one distro can do, Gentoo can do.
  • Anything you can do, Gentoo can do better.

April 18, 2009 :: Oregon, USA  

Steven Oliver

Programmer’s Pinky


I don’t have carpal tunnel *knock on wood*, but I have discovered something else: programmer’s pinky. I was formally trained how to type, as I assume most people are these days who attend school. But like most people, I assume, my typing isn’t perfect. I don’t hit all the keys with the right fingers. My form isn’t perfect, I suppose, but I still managed a respectable 70 wpm last time I checked, so I’m not really worried about it. On the same hand, though, the other day I was converting some SQL files from our old database to our new one, and I was using Ctrl+C, Ctrl+V, Ctrl+X, etc. so much that my pinky was actually sore the next day. The problem stems in large part from the fact that I always use my left pinky for the Ctrl key, no matter which side of the keyboard the other key is on. I guess that’s one habit I’m going to have to retrain myself on.

Enjoy the Penguins!

April 18, 2009 :: West Virginia, USA  

April 17, 2009

Brian Carper

Unicomp Customizer keyboard review

I got my Unicomp Customizer 105 in the mail today. This is a keyboard using the same technology as the infamous IBM keyboards of yore.

Why?

The Customizer is an enormous blocky hunk of hard black and grey matte plastic. It is the very antithesis of modern, soft, rounded, Apple-esque fashion. It has no "multimedia" keys, it doesn't glow in the dark, it doesn't have a built-in USB hub, it looks distinctly 80's-ish, and it costs $70. Why on earth would anyone want this thing?

/screenshots/photos/thumbs/customizer.png

A couple of reasons... one is that it's a status symbol of grizzled old hackers. This keyboard has gotten a lot of good reviews, e.g. last year on Slashdot, and I've heard the sentiment repeated elsewhere. There are stories of people rescuing old IBM keyboards out of dumpsters and selling them on eBay.

If it were simply a status symbol I would look away without a second glance. (Which is why I own a Cowon D2 and not an iPod. I like to research my purchases to the point of paranoia.)

But the popularity seems to be backed up by real functionality and build quality. These keyboards have a reputation for being great to type on due to the unique feel of their buckling spring "clicky" keys, and for being indestructible, with some keyboards still in use after two decades. So I decided why not see for myself?

A keyboard is the main tool of my livelihood and one of the main tools of most of my hobbies. It makes sense to try to get the best tool for the job. The three most important parts of a computer in my opinion are the keyboard, mouse, and monitor. CPU? RAM? Hard disk space? I'll take whatever you give me. But the things I interact with on a constant basis, I want those things to be comfortable.

Clicka clicka clicka

Yeah, this thing is clicky. Even after all the reviews, I was unprepared for just how clicky it is. You can feel the click of each keypress in your fingers and hear the clicking from 3 miles away.

I tried pushing a key down slowly to make it click without activating a keypress, and I found it very difficult if not impossible. You can always tell when you've successfully pressed a key on this keyboard: if it clicked, you did; if it didn't click, you didn't.

One bad thing about the clicking is annoying everyone in the room with you. I'm a bit worried I'm slowly going to drive my wife insane.

Finger workout

The keys have a lot of weight to them compared to the mushy feel of modern keyboards (which usually use some rubber or plastic dome under the keys). The Customizer's keys have little springs in them, and you can feel the keys pushing back on your fingers as you type. It feels much different than any other keyboard I've used.

Is it a good or bad feel? I'm undecided. It does feel pretty good, there's a lot of response to the keyboard and you can more easily tell when you miss a key or flub a keypress and hit two keys at once. I think this probably aids accuracy. I don't type more accurately but I more easily notice my mistakes.

I'm afraid the weight might lead to fatigue though; the keys are harder to press than other keyboards and my hands feel like they're getting a workout in comparison. However I've had a few long nights of typing on this keyboard and haven't noticed any more fatigue than usual, so the worry may be unfounded. On the other hand, I do often notice how annoying it is to type on a laptop which has no resistance and no distance to the keys at all. The resistance in this keyboard is a nice change of pace.

Built well?

I think "indestructible" is probably an apt word. I've only had mine for a couple days, but just hefting the thing, you can tell it's built like a tank. Very thick hard plastic all around. It weighs a ton. If I had to choose a keyboard to use as a weapon in a pinch, I'd grab this one immediately.

The keys come off easily; every key is just a cap over a smaller plastic key beneath, and that cap is a simple piece atop a tube with a spring in it. There isn't a lot of room for mechanical failure here unless you lose the springs. Everything comes off and goes back on very easily, which is nice for when I need to clean out the gunk in a year.

I have heard that if you spill a cup of milk into one of these keyboards, you may find it hard to drain. So don't do that.

Lack of features is a feature

Multimedia keys suck. I've never used them. They waste space and the only time I remember they exist is when I push them accidentally.

The Customizer is very "traditional". There are no multimedia keys, no volume controls, no programmable (i.e. useless) macro keys, no email or internet shortcuts. Just the standard 105 keys. This is a plus in my book.

Caps Lock is slightly shortened with a gap between itself and the A key, which is nice to avoid hitting it accidentally. The version of the keyboard I got has a modern Super ("windows") modifier key, but you can get a version without even that, if you like. Otherwise there are no frills.

Speed typing

I took a couple of silly online typing tests, and I got between 75 and 95 WPM with 98% accuracy, which is as good as I've ever gotten. My six-fingered typing style is a bit odd but this keyboard suits me well.

WPM is a terrible measure of programming speed, because programming has a much higher punctuation-to-letter ratio than English prose. So I also tried an Emacs session and a bunch of Vimming, and I experienced no problems. I forgot I was using this keyboard almost immediately, which is a good thing. It means it wasn't annoying me.

Very important to me, as a Vimmer, is the position and size of the Escape key. I have one other keyboard that has Escape offset to the right a half inch, which is horrendous and messes up my Vimming all the time. My other other keyboard has a tiny little Escape key, half as big as a normal key, which is equally bad.

On the Customizer, Escape is positioned off by itself in the corner as it should be, with a ton of space between itself and the number row, and the Escape key itself is freaking enormous. This is a huge plus in my book. You can't miss Escape on this keyboard.

Similarly, all the other keys are the right sizes and in the right places.

Verdict

So how is the Unicomp Customizer?

It's solid, standard, unique, and has a nice retro, minimalist style that I personally enjoy.

It's also huge, loud, and expensive. Is it worth buying? If you have the money to spend, I think it is. I don't regret the buy after a few days. When I come home from work and start typing on this guy, I'm always pleasantly surprised.

April 17, 2009 :: Pennsylvania, USA  

April 16, 2009

Ciaran McCreesh

Distributed Distribution Development, and Why Git and/or Funtoo is Not It


Gentoo is slowly shuffling towards switching from CVS to Git. This is a good thing, because CVS stinks. Using Git will reduce the amount of time developers need to waste to get something committed, make it easier to apply patches from third parties and make tree-wide changes merely a lot of work rather than practically impossible. What it will not do is make Gentoo in any way more ‘distributed’, ‘decentralised’ or ‘democratic’.

Some of the Git work has already been done, in a reduced manner (no history and no mirroring), by Daniel Robbins’ Funtoo, which is purported to be more distributed than Gentoo. The problem is, there’s nothing there to back up the distributed claim.

Distributed development, in the sense for which Git was designed (and ignoring the intervening BitKeeper stage), meant moving away from having a single central repository off of which everyone worked to having everyone work off their own, publishable repositories and providing easy ways of merging changes from one to another. ‘Good’ changes would tend to find their way from the authors up the food chain to the main repository whence official releases are made. Users requiring things that hadn’t made their way to the top would maintain their own repository, and merge in changes from elsewhere that they needed.

Typical Git Workflow Model

For a conventional codebase, this model works. But it’s not particularly nice, and it’s driven by necessity. You’ll note the big red dots in the diagrams. These represent places where people (assisted to some highly variable degree by Git) have to do merges. I chose big red dots rather than soft fluffy clouds because merges can be a lot of work (and because drawing clouds takes effort).

If you’ve got a conventional codebase, you have to do merges to make use of things from multiple sources — the compiler takes a single codebase and produces a program from it. You can do the same thing with a distribution. Funtoo, for example, has had the Sunrise repository merged in to the main repository. Such a change would likely not be possible with Gentoo’s current CVS architecture.

It’s not entirely clear whether Funtoo intends to have users who want to use other overlays merge those overlays into their own tree. Doing so would be more Gitish.

Apparent Funtoo Workflow Model

But why bother? There’s no need to have a single codebase — there’s no compiler that has to take every input at once and turn it into a single monolithic product. Those big red dots are unnecessary.

A lot of fashionable programs are moving away from the big monolithic binary model and towards a plugin-assisted architecture. If you want Firefox to do a few things it doesn’t, you don’t hunt around for people who have already written them and then try to merge their source trees together. You install plugins. Only for more severe changes do you have to dive into the source, and the severity of change requiring a trip to the source is gradually increasing.

There’s a reason for this — whilst the merge model is a lot better than a single authoritative codebase and a bunch of patches, it’s a lot more work than providing limited composable extensibility at a higher level.

What, then, would a plugin-based model look like for a Gentoo-like distribution?

Presumably, one would have a centralised ‘main’ codebase. One could then add additional small extras to that main codebase to obtain new functionality (packages, in this case); these extras would rely upon parts of the main codebase and wouldn’t be able to operate on their own. Sound familiar? Yup, overlays are plugins.

This whole “merging overlays into the main tree” thing is starting to look like a step in the wrong direction. What would be some steps in a better direction?

One thing that comes instantly to mind is improving overlay handling. Portage’s overlay handling currently (at least in stable) looks like this:

Portage Overlay Model

Portage takes the main Gentoo repository, and then merges it with each overlay in turn, creating one ‘final’ overlay that ends up being used. I’ve used an orange dot here rather than a red one because it’s a different kind of merge. Rather than doing a source-level merge, the orange dot merge more or less (sort of) works like this:

  • If there’s a package with the same name and version in the origin and the overlay we’re merging in, take the overlay version.
  • If there’s an eclass with the same name and version in the origin and the overlay we’re merging in, sometimes take the overlay version.
  • Do some horrid hackery to merge together any colliding profile things in an uncontrolled manner that doesn’t work for more than one merge.
  • Pass everything else through.

Now, to be fair, the orange dot merge usually works. Most overlays don’t try to override eclasses, don’t have eclasses that conflict with each other and don’t mess with profiles. For colliding versions, you end up being stuck with a single selected version, which isn’t always so good.

Unfortunately, some overlays do try to override eclasses and profiles, and the result isn’t pretty. You’re ok so long as you only use a single overlay that does this, and so long as any eclass changes aren’t incompatible, but anything beyond that and weird stuff happens.

A less dangerous model would be to make the package manager support multiple repositories. Presumably most overlays wouldn’t want to have to reimplement all the profile and eclass things in the Gentoo repository, so the model would look like this:

Safer Overlay Model

Here, repositories, rather than the user, have control over which implementation of eclasses and so on gets used. Paludis uses this model for Gentoo overlays unless told not to.
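As a sketch of how that gets expressed (the file location and key names here should be treated as assumptions; check the Paludis documentation for your version):

# /etc/paludis/repositories/sunrise.conf (hypothetical overlay)
location = /var/paludis/repositories/sunrise
format = ebuild
# eclasses and profiles resolve against the named master repository,
# rather than being merged together by the package manager
master_repository = gentoo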

Sidebar: one might want to go a step further, and allow repositories to use multiple masters. Some Exherbo supplemental repositories do this — the gnome supplemental repository, for example, makes use of both arbor (the ‘main’ repository) and x11:

Exherbo Repository Model

Note that we chose not to make a repository use its master’s masters. We could’ve gone either way on this one — it’s slightly easier if masters are inherited, but it can lead to unnecessary inter-repository dependencies.

Unstable Portage, meanwhile, is starting to support controlled masters for eclass merging, but not version handling, which will eventually give:

New Portage Overlay Model

A multiple repository model is clearly safer than the Portage model, and does away with the manual merges required by the Funtoo model. This gives us:

Model                Multiple Repositories?  Manual Merges?  Unsafe Automatic Merges?
Portage (Stable)     No                      No              Yes
Portage (Unstable)   No                      No              Sometimes
Funtoo               No                      Yes             No
Safe                 Yes                     No              No

I consider the multiple repository model to be better for users even ignoring the merge or conflict issues. Here’s why:

  • Users can make selective, rather than all or nothing, use of a repository. It becomes possible to mask the foo-1.2 provided by the dodgy overlay, and use the one in the main tree or a different overlay.
  • Similarly, users can choose not to use anything from a particular overlay except things they explicitly request.
  • It paves the way for handling repositories of different formats.

There aren’t any downsides, either — so long as repositories have user-orderable importance, there’s no loss of functionality.

Finally, I’d like to debunk the myth that the Git model is somehow ‘democratic’. There’s nothing in the least bit democratic about everyone having their own repository. At best, it could be said to be a way of allowing everyone to have their own dictatorship that anyone else can be free to visit — all very well, but when tin pot dictators fall back on old habits it does little to encourage collaboration. A democratic distribution would more likely make use of a special repository which lets people vote on unwritten packages and version bumps — clearly a recipe for disaster, since most people think “I haven’t noticed any bugs” means “stable it instantly”…

The only thing switching Gentoo to Git will solve is the pain of having to use CVS. This alone is enough to make the move worthwhile, but it will do little to nothing to fix Gentoo’s monolithic design and inherently centralised model. Nor does Funtoo’s merge approach solve the problem — on the contrary, it replaces a model where the package manager automatically does unnecessary merging (and sometimes gets things wrong) with a model where people do unnecessary merging (which is a lot of work, and they will still sometimes get things wrong). The future is (or at least should be) in a multi-repository model with good support from the package manager that removes the costs of decentralisation.


April 16, 2009

Dirk R. Gently

Desktop… Phht


I don’t usually post screenshots because they just don’t get my attention. If I’m able to get things done, then it doesn’t matter whether I’m with AIG or on Gilligan’s Island. On my desktop I don’t have fancy spinning cubes, fire-drawing cursors, or wallpapers that leave a negative image floating on the back of my retina. What I do got is a desktop that would hopefully make Bender’s God happy :) :

Details:

April 16, 2009 :: WI, USA  

Dan Ballard

Winning BattleCode (excluding MIT)

I've been quiet on the blogging front lately. Don't know exactly why, could be school has kept me busy, or who knows.

Anyways, I thought I'd pop in and mention something I should have mentioned back in January when it started: two friends and I entered MIT's BattleCode competition. It's an AI competition run by a class at MIT, but it was also open to public participation. Basically you are writing AI to run inside robots on a battlefield. You and your opponent start with a few robots, and they have to coordinate and do things like build more units, mine, and attack the enemy. The AI executes inside each robot, so there is no overall "player" of the game, just lots of instances of your code, hopefully working together. It was a fun, neat challenge.

It also reminded me of how much I'm not a fan of Java, and don't think I didn't make a list, one I might publish if I get unlazy at some future point. Anyways, we worked on it for a while, and then the open tournament was run. We got the results back this week.

Since the MIT teams had class time and a whole class of people to work with and bounce ideas off of, they sadly still dominated.

But there was a second ranking, this time of non-MIT teams only, and now for the real surprise: we won!

battlecode.mit.edu/2009/info/glory

Our team was called "Bad Meme", we were representing UBC, and you can see it all there on the results page. Of all the non-MIT teams, we were the best. It's really kind of surprising and awesome, especially when you consider that anyone anywhere could enter, that there appear to have been teams from places like Stanford and Harvard, and that we beat them. So that's kind of a buzz.

And so that's a big chunk of what I've been doing in the past month: programming battle AI. That and school. But now, with the competition over and school drawing to a close, it's time to look for some new projects. I have a few ideas already; hopefully I'll get around to mentioning them before they are over this time, but time will tell.

April 16, 2009 :: British Columbia, Canada  

April 14, 2009

Jürgen Geuter

Things shouldn't always be wiped

Wiping is good sometimes. After spilling your drink, for example. When selling your computer, wiping your hard disk is really important to make sure your personal data stays personal. If you maintain a public internet terminal, you want to wipe it after every use. But often we tend to wipe too much, or see wiping as a quick solution even when it's not.

As some might know, I work in a school from time to time. The computers in the "computer room" (a room with many computers so a whole class can work at the same time) are wiped with every reboot. This is a common practice for a very simple reason: if you allow people to change things, it will irritate other people. The readers of this blog are probably all very skilled in using their computers, but many people with less knowledge are seriously irritated as soon as you change their desktop background. Wiping can solve the issue: you set the machine up as it is supposed to be, and that state will persist. Win? Not always!

When people change things on the computer they use, it might just be curiosity ("What happens if I change this setting?"), but after a while people start changing things because the given state annoys them: they feel limited by the system and the way it is set up. It might just be a small thing: you want a certain program launcher to be at a certain point on your screen, or you want a certain program to stop autostarting. But when you wipe the system, you lose that flexibility.

Especially if you are dealing with Windows boxes, it has become sort of common knowledge that wiping the hard disk is a good approach to keeping the system untampered with and stable: after all, you do want your users to have a stable system they can rely on. But I think that wiping is the wrong approach more often than we might realize.

If we stop allowing people to change the way the computer interacts with them, we are basically holding them back. You can only work well with a computer to the extent that the mental model its setup represents matches your own. Damn, that was a long sentence, let's milk it a little:

Every computer, every desktop environment and window manager, basically every piece of software, represents a certain way of thinking, a certain mental model of how things are. Take for example a file manager: in Nautilus (GNOME's file manager) you can enable "spatial mode", which means that every folder is opened in a new window and every folder can be open only once. The way most people use Nautilus (maybe because they are used to working like that from past experience) is the "browser mode", where double-clicking a folder opens it in the current window and where you can have any folder open as many times as you want. "Spatial mode" is conceptually better for some, and you might even be able to present a million studies showing how much better it is, but if your mental model of how files and file management work doesn't match the spatial paradigm, you will not be able to use the system properly, will be annoyed, and will perceive the file manager as broken. Which it isn't: it just doesn't fit how you think.

With that being said, I think my objection to relying on wiping computer systems becomes clear: wiping makes sense in privacy-related contexts, but in general it's not the right technique to ensure a stable working environment for regular users.

The idea of having computer systems around that work without the user identifying him- or herself to them is an anachronism, back from when people used Windows 95 and thought that was it. We tend to see entering a username and password purely as a security measure, when in fact it's also a way to customize the system to your personal needs.

Using a system where you cannot change settings is a huge pain in the ass. Not because you can't install software (it often makes sense to restrict that to a certain degree), but because you end up with a system that doesn't perform as it should.

Wiping is like the dark side of the Force: it's the quick solution, its simplicity is charming, but in the long run you don't serve your users well. Users are individuals, everyone has slightly different needs and preferences (insert random sexual joke here), and we have gotten way too used to ignoring the huge benefit that users can gain from customization.

April 14, 2009 :: Germany  

George Kargiotakis

command exit status on zsh using RPROMPT

I’ve just updated my .zshrc so that I can get the exit status of commands on a “right prompt”, using zsh’s RPROMPT variable. The exit status appears only if the value is non-zero.
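The core of the idea is a one-liner (a minimal sketch, not my full setup; the dotfiles linked below have the real thing):

# in ~/.zshrc: %(?.true.false) branches on the last exit status,
# and %? expands to it, so nothing is shown after a successful command
RPROMPT='%(?..[%?])'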

Example usage (screenshot): the right prompt showing an exit code.

You can find my .zshrc and more dot files that I use on my “My dot files” page.

April 14, 2009 :: Greece  

April 13, 2009

Jason Jones

Gphoto2 / Gnome Problem

Ya know...  Sometimes I hate open source.  Most times I love it, but sometimes I hate it.

I recently updated my Gentoo GNOME installation to 2.24.2 and didn't think much of it.  I then tried to connect my Nikon S210 digital camera to my USB port and perform the simple task of importing photos.  I have done this successfully probably 100 times.

This time, however, a new window popped up telling me, "Oh!  We found a camera on your system!  Do you want me to act like Windows and try to do everything for you?"  Well...  Okay..  That was a bit harsh, but it was slightly annoying.

Anyway..  Here's the screenshot of what popped up.



So, thinking to myself, "Nope..  I know what I want to do with the photos," I clicked "Cancel".

I then tried to start up digikam, and it didn't say anything; it just wouldn't import, or show, anything.  I tried flphoto with the same results.  So, I'm thinking, "Great...  I've gotta go through gphoto2's fabulous CLI interface to figure out what the heck is up."  Not exciting.  But then I found gtkam, and that saved me a boat-load of time.

gtkam basically gave me the finger, too, but it told me a bit more than nothing.  It said "Could not initialize camera".  I could detect the camera with no problem at all, but immediately after, it flashed that error message.

Try as I might, I couldn't do anything about it.  I tried emerging from the unstable tree, and then re-configuring the USE flags.  Uhhh yeah.  Nothing.

So, I went to my other computer and downloaded the photos there.  No problems at all.  In fact, I was using the same version of gphoto2 on my second computer as on the one having the problems!

Yeah... Not happy times for me.

Anyway..  To make a long story short, I came down today, saw my camera sitting there on the floor, and tried to have another go at it, because I just can't leave broken alone.

This time, I noticed the "Unmount" button on one of the two boxes that pop up.  So, I clicked "Unmount" and then loaded up gtkam.

Yup...  It detected the camera and loaded up the images just fine.  ARGH!

So, yeah...  Everything is good again.

I just wish little gotchas like that would be thought through before they're pushed live.

Why render a hugely popular program like gphoto2 useless by auto-mounting the friggin' camera as soon as it's plugged in???

Not cool.

Not cool at all.

So, anyway..     Yeah..  To fix this problem, simply do the following:

 

JUST CLICK THE UNMOUNT BUTTON IN GNOME'S AUTO POP-UP BOX AS SOON AS YOU PLUG IN YOUR CAMERA



That should do the trick quite nicely.
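If you want to double-check from a terminal that the camera is actually free again, the standard gphoto2 CLI flags will tell you:

# should list the camera and its USB port
gphoto2 --auto-detect

# and file listing should now work instead of erroring out
gphoto2 --list-files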

April 13, 2009 :: Utah, USA  

Matija Šuklje

Literally cut- and pasting law ;)

It's Monday, I just cleaned my shoes and my sport sandals, and I decided to update my Civil Procedure Act.

Because I'm a nice person and don't want to kill too many trees, I decided to manually update my paperback edition of the Civil Procedure Act (consolidated text 2) by crossing out what's outdated and writing the new text between the lines wherever I can.

All fair and good, but the problem is that the C and D amendments (which I'm lacking) are between them around 150 articles long, some of which present completely rewritten articles or new ones that need to fit in somehow.

...meeeeaaaaaaniiiiing that I have to print out the longer sections and physically cut them out and glue them to my book. Oldskool! XD

Later on, polishing my shoes with oldskool shoe wax...

On a side note, I've been living quite happily with Magnatune, Jamendo and Last.fm for the past few days. I don't really feel any worse for losing my music collection anymore. Backup-wise, SpiderOak is really looking great, so I'll try to write an ebuild for their client.

hook out >> drinking recycled Yorkshire Gold (from Taylors of Harrogate) and getting messy with the scissors and glue ...oldskool style!

April 13, 2009 :: Slovenia  

Brian Carper

A Sad, Dark Day

Today was a terrible day. I found myself subconsciously trying to use Emacs keystrokes in Vim. I feel dirty. I took a bath but it won't come clean. : (

It just goes to show that you can get used to anything if you do it often enough. Emacs still drives me up the wall but maybe I've achieved a critical mass of enough custom keybindings to let me tolerate it.

Aside from paredit, which has no equal even in Vim, Emacs does have some vaguely non-sucky features. hi-lock is pretty nice (Vim has an equivalent, of course). Once I learned a few of the shortcuts for git-emacs, I actually found myself using Git much more effectively. Having to drop into a shell to type Git commands is just enough of a disruption to keep me from doing it often enough. I never got the hang of any version-control library in Vim.

I'm almost even getting used to the Emacs buffer model. I find myself C-x b-ing and flipping back and forth between buffers by name, rather than following my Vim practice of opening buffers in certain carefully-placed windows and leaving them there.

On the subject of typing, I finally broke down and ordered a Unicomp Customizer 104 keyboard. I've heard too many hackers say that the old IBM clicky keyboards are good for typing. It should arrive Tuesday, and I'm a lot more excited than anyone should be over a keyboard.

Expect a keyboard review. Try to contain your excitement until then. I know it'll be hard.

April 13, 2009 :: Pennsylvania, USA  

KDE4 Konsole Kolor Skheme Kdownload

I put a color scheme for KDE4's Konsole up for download. From a cursory glance I think KDE3 and KDE4 color schemes use the same format, but I haven't tried it.

Also I know I'm not the first to say it, but all of the K's in KDE program names are a bit annoying after a while, aren't they?

April 13, 2009 :: Pennsylvania, USA  

Blog and CRUD

I updated my blog source code on github. I also split my CRUD library out into its own repo, clj-crud. It is cruddy, so the name is apt.

This code still isn't polished enough for someone to drop it on a server and fire it up, but maybe it'll give someone some ideas. I think the new code is cleaner and it'll be easier for me to add features now.

Beware bugs, I'm positive I introduced some.

EDIT: A word about the CRUD library... persisting data to disk is hard when the data may be mutated by many threads at once and the destination for your data is an SQL database that may or may not even be running. I have more respect now for people who've written libraries that actually do this kind of thing and work right. Granted, I only spent three days on mine, but still, it's tricky.

I gave up for a while and tried clj-record, but it was prohibitively slow. It has the old N+1 queries problem when trying to select an object which has N sub-objects; in real life you'd write SQL joins to avoid such things. Ruby on Rails, on the other hand, gets around this via some nasty find syntax.
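To illustrate the N+1 shape (query-all below is a hypothetical helper standing in for whatever runs SQL and returns rows; the tables are made up too):

;; hypothetical helper: run a query, return a seq of row maps
(declare query-all)

;; N+1: one query for the posts, then one more query per post
(defn comments-n-plus-1 []
  (doall
   (for [post (query-all "SELECT id FROM posts")]
     (query-all (str "SELECT * FROM comments WHERE post_id = " (:id post))))))

;; the join version gets everything in a single round-trip
(defn comments-joined []
  (query-all "SELECT p.id, c.* FROM posts p JOIN comments c ON c.post_id = p.id"))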

I get around it by having all my data in a Clojure ref in RAM already, so it doesn't matter, and by using hooks so each object keeps a list of its sub-objects and the list is always up to date (updates of sub-objects propagate to their parents). But the crap I have to do to get this to just barely work is pretty painful.

April 13, 2009 :: Pennsylvania, USA  

April 10, 2009

Matija Šuklje

External disk dead, backups gone ...music too

The day before yesterday it happened...
The unthinkable...
The unbearable...
My backup disk died!! :(

Of course, the warranty for my Western Digital MyBook (it's actually a Caviar 2500 inside) expired half a year ago, exactly on the 8th of August 2008 — the same day the XXIX Summer Olympic Games in Beijing started.

The problem is most probably a faulty controller, so if anyone out there has a Western Digital Caviar SE WD2500JB with intact electronics that (s)he's willing to give me, it'd make me very happy!!

What makes it even odder is that the backup disk died while the half-year-older Fujitsu HDD that I back up onto that WD MyBook is still alive *knocks on wood*.

Needless to say, Murphy struck with perfect timing, just when I was deciding which backup tool to use next (and trying to write an ebuild for it)! Initially I used KDar, then KBackup, and later switched to the RDiff-based Keep. I grew quite enthusiastic about RSync'ing backups, but had to choose an application that would not tie me to the old KDE3 libraries (none of the above have a KDE4 port yet).

My current contenders are (or rather were before my backup disk died):

...there's even a thread on the KDE forums that talks about all three.

But this incident made me realise that your backups are only as strong as the medium you use. So I'm actually considering an online backup service. I've just started looking, but so far SpiderOak looks pretty good. Especially their security and privacy policies look right (as does the fact that they support FOSS). I'm still new to this idea, but I feel kind of vulnerable without backups, so I'll be looking into it a bit more.

But it's not all about backups — because my laptop is only 60 GiB small, I have (had) all my music on the external disk. You'd expect me to be mad as a bat right now because of that, but now I'm getting my fix directly from Jamendo, Magnatune, Last.fm and (other) streaming stations like ShoutCast and Soma.fm (and good ol' FM radio on my iRiver). I can barely wait for Amarok2 to be usable on AMD64 in Gentoo to make better use of such services! :D

hook out >> sitting on the balcony, watching the sun set, blogging and sipping Taylors of Harrogate Mango (black) tea
<!--break-->

April 10, 2009 :: Slovenia  

Dirk R. Gently

Mplayer with DVDs


There are plenty of movie players for Linux, but my all-time favorite is Mplayer. Not only is Mplayer quick and responsive, it can play almost anything. I'd used mplayer before, but I realized my movies weren't playing quite as I wanted them to: no menu support, and the picture quality wasn't what I expected. If you'd like to play DVDs with Mplayer, here's a guide that shows how to get a good, functional DVD player.

Calibrating Display

Presentation is a large part of a good movie experience. Movie companies and movie theaters put a good deal of consideration into how a movie looks and sounds; THX, for example, became a standard in the movie industry defining exactly that. How your display is calibrated will likewise affect the quality of the movies you play with Mplayer. There are a couple of things you can do to get good picture quality on your monitor, but first a quick bit about colorschemes.

Windows and Mac OS both have built-in colorschemes (also known as ICC profiles). Colorschemes define display characteristics such as color balance and gamma. Linux by default does not have any colorschemes defined, and new users often report that their freshly installed display looks “too bright”. There is no easy way to define a full colorscheme in Linux, but most of the “too bright” complaints come down to gamma, and there is something you can do about that.

A good way to discover the proper gamma on Linux is to use a program called Monica. While calibrating with Monica you'll notice the whole display change; ignore this and just make sure your red, green, and blue gammas are set correctly. When this is done, Monica will offer to load itself at desktop startup. That works, but it's better to have the X server know the settings directly, because otherwise your gamma will be reset whenever something else (a game, for instance) changes the display. The X server can be made aware of the gamma in the “/etc/X11/xorg.conf” file. For example:

Section "Monitor"
    Identifier     "Monitor0"
    Gamma           0.86 0.85 0.87
EndSection

Gamma values are in RGB order. Restart the X server to have the gamma values permanently applied.
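
If you'd rather experiment before committing values to xorg.conf, the xgamma utility that ships with X can apply them to the running server. A minimal sketch using the values above (the settings only last until the X server restarts):

xgamma -rgamma 0.86 -ggamma 0.85 -bgamma 0.87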

Selecting Video and Audio Output Devices

Mplayer defaults will work on just about any media. If you want to test Mplayer, try:

mplayer dvd://1

Track 1 almost always has something on it, and you should get a good idea how Mplayer plays with the default settings. The first thing you should do is decide which video output driver to use. Most people tend to use xv, the XVideo extension, which has hardware-accelerated playback. I however use the OpenGL driver because it gives me slightly better performance. For example:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs dvd://1
mplayer -vo xv -dr -framedrop -fs -cache 8192 dvd://1

For OpenGL you'll have to use a proper yuv setting; look in “man mplayer” for all the options. Add the ‘-dr’ option to make sure direct rendering gets used, and add ‘-framedrop’ because if a CPU-intensive task starts in the background, audio and video will get out of sync. Using -fs will start mplayer in full-screen mode.

For xv, make sure to use the ‘-cache’ option, as xv video doesn't play well without it.

For audio, I just use mplayer's default. I've tried setting ‘-ao alsa’, but I occasionally get skips with that and find the default (usually aoss) works better.

Filters

One of the things you'll notice at this point is that there is a little noise in the picture. This is common because TVs have built-in noise-reduction filters and computer monitors don't. You'll also notice, if you are playing a DVD-recorded TV show, that the picture appears “lined” (interlacing): TVs produce pictures by displaying alternate lines, so a process called deinterlacing is needed to produce a combined image. To add deinterlacing and a noise filter, try this:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d dvd://1

Yadif is a good deinterlacer, and hqdn3d will help smooth the picture. I find that plain hqdn3d produces a bit too blurry an image, so I've reduced it to:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d=3:2.8:1:3 dvd://1

For movies that aren’t interlaced mplayer won’t use the yadif filter.

Aspect-Ratio

Mplayer may choose to alter the aspect ratio, which will result in a distorted picture. I think there is some legacy code in Mplayer that tries to scale based on screen size. Add ‘-noaspect’ to prevent this from happening:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d=3:2.8:1:3 -noaspect dvd://1

Contrast, Brightness, and Saturation

Even on a properly calibrated monitor the picture isn't going to look quite right, because movies use a different colorspace, one designed for proper display on a television. While not perfect, this too can be corrected to a good degree with brightness, contrast, and saturation values.

If you're using the gl driver, you can adjust contrast, brightness, hue, and saturation with the 1 and 2, 3 and 4, 5 and 6, and 7 and 8 keys, respectively. To add the values to the command line:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d=3:2.8:1:3 -noaspect \
-contrast 14 -brightness 8 -saturation -9 dvd://1

If you're using the xv driver, you can add the software equalizer (eq2) to the filter chain to enable adjusting these values:

mplayer -vo xv -dr -framedrop -fs -cache 8192 \
-vf yadif=3,hqdn3d=3:2.8:1:3,eq2 -noaspect -contrast 14 \
-brightness 8 -saturation -9 dvd://1

mplayer -vo xv -dr -framedrop -fs -cache 8192 \
-vf yadif=3,hqdn3d=3:2.8:1:3,eq2=1:1.14:0.08:0.91 -noaspect \
-contrast 14 -brightness 8 -saturation -9 dvd://1

DVD Menus

New versions of Mplayer (as of this writing, mplayer-28347-4) include support for DVD menus. Mplayer has to be compiled with “--enable-dvdnav” for DVD menus to work. From the command line, tell Mplayer to use DVD menus:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d=3:2.8:1:3 -noaspect \
-contrast 14 -brightness 8 -saturation -9 dvdnav://

You can also add support for being able to choose DVD menu items with the mouse:

mplayer -vo gl:yuv=2:force-pbo -dr -framedrop -fs \
-vf yadif=3,hqdn3d=3:2.8:1:3 -noaspect \
-contrast 14 -brightness 8 -saturation -9 \
-mouse-movements dvdnav://

If you're using mplayer with DVD menu support, make sure you do not have caching on (leave out ‘-cache’) or Mplayer won't work properly.

That's it! You should now have a great DVD player for your Linux box.
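
As an aside, once you've settled on a set of options you don't have to retype them every time: mplayer also reads options from ~/.mplayer/config, where flag options are written as option=yes. A sketch based on the gl command above; adjust the values to your own taste:

# ~/.mplayer/config
vo=gl:yuv=2:force-pbo
dr=yes
framedrop=yes
fs=yes
noaspect=yes
vf=yadif=3,hqdn3d=3:2.8:1:3
contrast=14
brightness=8
saturation=-9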

Extranei

Sometimes selections in DVD menus don’t get recognized. I found that pressing 5 will bring them up again.

Mplayer uses keyboard presses for input. A basic reference of commonly used keys:

  • F - Fullscreen toggle
  • Q - Quit
  • P - Pause
  • ← - Backward 10 seconds
  • → - Forward 10 seconds
  • ↑ - Forward 1 minute
  • ↓ - Backward 1 minute
  • Pgup - Forward 10 minutes
  • Pgdown - Backward 10 minutes
  • !/@ - Backward/Forward Chapters
  • Arrow Keys or Numpad Arrow Keys - DVD navigation

Because DVD navigation binds to the arrow keys, they cannot be used to skip while using DVD navigation.

Users of newer Nvidia cards might want to look at Mplayer support for VDPAU (Purevideo technology).

Lastly, thanks to electro for his hqdn3d values.

April 10, 2009 :: WI, USA  

April 9, 2009

Daniel de Oliveira

Zine


Hi all (after a long time).

I'm currently trying to start a zine with the help of some friends: something like Full Circle, but with more general Linux content and a lot more Gentoo-related material.

I'll try to cover both server and desktop topics: special configurations, tuning, and so on.

If anyone reading this is able to help or wants to contribute, feel free to drop a message and I'll give feedback ASAP.

Thanks all

April 9, 2009 :: São Paulo, Brazil  

April 8, 2009

Brian Carper

Lisp Syntax Doesn't Suck

I spend a lot of time talking about what I don't like about various languages, but I never talk about what I do like. And I do like a lot, or I wouldn't spend so much time programming and talking about programming.

So here goes. I like the syntax of Lisp. I like the prefix notation and the parentheses.

Common Complaints

A common criticism of Lisp from non-Lispers is that the syntax is ugly and weird. The parentheses are impossible to keep balanced. It ends up looking like "oatmeal with fingernail clippings mixed in".

Also, prefix notation is horrible. 1 + 2 is far superior to (+ 1 2). Infix notation is how everyone learns things and how all the other languages do it. There are countless numbers of people (example) who have proposed to "fix" this, to give Lisp some kind of infix notation. The topic inevitably comes up on Lisp mailing lists and forums.

Partly this is subjective opinion and can't be argued with. I can't say that Lispy parens shouldn't be ugly for people, any more than I can say that someone is wrong to think that peanut butter is gross even though I like the taste of it. But in another sense, does it matter that it's painful? Does it need to be changed? Should the weird syntax stop you from learning Lisp?

Prefix Notation: Not Intuitive?

There is no "intuitive" when it comes to programming. There's only what we're used to and what we aren't.

What does = mean in a programming language? Most people from a C-ish background will immediately say assignment. x = 1 means "give the variable/memory location called X the value 1".

For non-programmers, = is actually an equality test or a statement of truth. 2 + 2 = 4; this is either a true or false statement. There is no "assignment". The notion of assignment statements is an odd bit of programming-specific jargon. In most programming languages we've learned instead that == is an equality test. Of course some have := for assignment and = for equality tests. But = and == seems to be more common. Some languages even have ===. Or x.equals(y). Even less like what we're used to. (Don't get started on such magic as +=.)

Most of us have no problem with these, after a while. But few of us were programmers before we learned basic math. How many of us remember the point in time when we had to re-adjust our thinking that = means something other than what we've always learned it to mean? I actually do remember learning this, over a decade ago. This kind of un-learning is painful and confusing, there's no question.

But it's also necessary, because these kinds of conventions are arbitrary and vary between fields of study (and between programming languages). And there are only so many symbols and words available to use, so we re-use them. None of the meanings for = is "right" or more "intuitive" than the other. = has no inherent meaning. It means whatever we want it to mean. Programming is chock-full of things like this that make no sense until you memorize the meaning of them.

Consider a recent article that got a lot of discussion, about why all programmers should program in English. How much less intuitive can you get, for a native speaker of another language to program using words in English? Yet they manage. (Have you ever learned to read sheet music? Most of the terms are in Italian. I don't speak a word of Italian, yet I managed.)

The point is that it's very painful to un-learn things that seem intuitive, and to re-adjust your thinking, but it's also very possible. We've all done it before to get to where we are. We can all do it again if we need to.

Prefix notation is unfamiliar and painful for many people. When I first started learning Lisp, the prefix notation was awfully hard to read without effort, even harder to write. I would constantly trip up. This is a real distraction when you're trying to write code and need to concentrate. But it only took me maybe a week of constant use to ingrain prefix notation to the point where it didn't look completely alien any longer.

At this point prefix notation reads to me as easily as infix notation. I breeze right through Lisp code without a pause. In Clojure, you can write calls to Java methods in Java order like (. object method arg arg arg) or you can use a Lispy order like (.method object arg arg arg); I find myself invariably using the Lispy way, as does most of the community, even though the more traditional notation is available.

You can get used to it if you put in a minimal amount of effort. It's not that hard.

Benefits of Prefix Notation

Why bother using prefix notation if infix and prefix are equally good (or bad)? For one thing, prefix notation lets you have variable-length parameter lists for things that are binary operations in other languages. In an infix language you must say 1 + 2 + 3 + 4 + 5. In a prefix language you can get away with (+ 1 2 3 4 5). This is a good thing; it's more concise and it makes sense.

Most languages stop at offering binary operators because that's as good as you get when you have infix operators. There's a ternary operator x?y:z but it's an exception. In Lisp it's rare to find a function artificially limited to two arguments. Functions tend to take as many arguments as you want to throw at them (if it makes sense for that function).

Prefix notation is consistent. It's always (function arg arg arg). The function comes first, everything else is an argument. Other languages are not consistent. Which is it, foo(bar, baz), or bar.foo(baz)? There are even oddities in some languages where to overload a + operator, you write the function definition prefix, operator+(obj1, obj2), but to call that same function you do it infix, obj1 + obj2.

The consistency of Lisp's prefix notation opens up new possibilities for Lispy languages (at least, Lisp-1 languages). If the language knows the first thing in a list is a function, you can put any odd thing you want in there and the compiler will know to call it as a function. A lambda expression (anonymous function)? Sure. A variable whose value is a function? Why not? And if you put a variable whose value is a function in some place other than at the start of a list, the language knows you mean to pass that function as an argument, not call it. Other languages are far more rigid, and must resort to special cases (like Ruby's rather ugly block-passing syntax, or explicit .call or .send).

Consistency is good. It's one less thing you have to think about, it's one less thing the compiler has to deal with. Consistent things can be understood and abstracted away more easily than special cases. The syntax of most languages largely consists of special cases.

Parens: Use Your Editor

The second major supposed problem with Lisp syntax is the parens. How do you keep those things balanced? How do you read that mess?

Programming languages are partly for human beings and partly for computers. Programming in binary machine code would be impossible to read for a human. Programming in English prose would be impossible to parse and turn into a program for a computer. So we meet the computer halfway. The only question is where to draw the line.

The line is usually closer to the computer than to the human, for any sufficiently powerful language. There are very few programming languages where we don't have to manually line things up or match delimiters or carefully keep track of punctuation (or syntactic whitespace, or equivalent).

For example, any language with strings already makes you pay careful attention to quotation marks. And if you embed a quotation mark in a quote-delimited string, you have to worry about escaping. And yet we manage. In fact I think that shell-escaping strings is a much hairier problem than balancing a lot of parens, but we still manage.

This is sadly a problem we must deal with as programmers trying to talk to computers. And we deal with it partly by having tools to help us. Modern text editors do parenthesis matching for you. If you put the cursor on a paren, it highlights the match. In Vim you can bounce on the % key to jump the cursor between matching parens. Many editors go one step further and insert the closing paren whenever you insert an opening one. Emacs of course goes one step further still and gives you ParEdit. Some editors will even color your parens like a rainbow, if that floats your boat. Keeping parens matched isn't so hard when you have a good editor.

And Lisp isn't all about the parens. There are also generally-accepted rules about indentation. No one writes this:

(defn foo [x y] (if (= (+ x 5) y) (f1 (+ 3 x)) (f2 y)))

That is hard to read, sure. Instead we write this:

(defn foo [x y]
  (if (= (+ x 5) y)
    (f1 (+ 3 x))
    (f2 y)))

This is no more difficult to scan visually than any other language, once you're used to seeing it. And all good text editors will indent your code strangely if you forget to close a paren. It will be immediately obvious.

A common sentiment in various Lisp communities is that Lispers don't even see the parens; they only see the indentation. I wouldn't go that far, but I would say that the indentation makes Lisp code easily bearable. As bearable as a bunch of gibberish words and punctuation characters can ever be for a human mind.

When I was first learning Lisp I did have some pain with the parens. For about a week. After learning the features of Vim and Emacs that help with paren-matching, that pain went away. Today I find it easier to work with and manipulate paren-laden code than I do to work with other languages.

Benefits of the Parens

Why bother with all the parens if there's no benefit? One benefit is lack of precedence rules. Lisp syntax has no "order of operations". Quick, what does 1 + 2 * 3 / 4 - 5 mean? Not so hard, but it takes you a second or two of thinking. In Lisp there is no question: (- (+ 1 (/ (* 2 3) 4)) 5). It's always explicit. (It'd look better properly indented.)

This is one less little thing you need to keep in short-term memory. One less source of subtle errors. One less thing to memorize and pay attention to. In languages with precedence rules, you usually end up liberally splattering parens all over your code anyways, to disambiguate it. Lisp just makes you do it consistently.

As I hinted, code with lots of parens is easy for an editor to understand. This makes it easier to manipulate, which makes it faster to write and edit. Editors can take advantage, and give you powerful commands to play with your paren-delimited code.

In Vim you can do a ya( to copy an s-exp. Vim will properly match the parens of course, skipping nested ones. Similarly in Emacs you can do C-M-k to kill an s-exp. How do you copy one "expression" in Ruby? An expression may be one line, or five lines, or fifty lines, or half a line if you separate two statements with a semi-colon. How do you select a code block? It might be delimited by do/end, or curly braces, or def/end, or who knows. There are plugins like matchit and huge syntax-parsing scripts to help editors understand Ruby code and do these things, but it's not as clean as Lisp code: not as easy to implement, and not as fool-proof in corner cases.

ParEdit in Emacs gives you other commands, to split s-exps, to join them together, to move the cursor between them easily, to wrap and unwrap expressions in new parens. This is all you need to manipulate any part of Lisp code. It opens up possibilities that are difficult or impossible to do correctly in a language with less regular syntax.

Of course this consistency is also partly why Lisps can have such nice macro systems to make programmatic code-generation so easy. It's far easier to construct Lisp code as a bunch of nested lists, than to concatenate together strings in a proper way for your non-Lisp language of choice to parse.

Conclusion

Yeah, Lisp syntax isn't intuitive. But nothing really is. You can get used to it. It's not that hard. It has benefits.

Sometimes it's worth learning things that aren't intuitive. You limit yourself and miss out on some good things if you stick with what you already know, or what feels safe and sound.

April 8, 2009 :: Pennsylvania, USA  

Nikos Roussos

alphabet linux

for the past three years i've been working in greek elementary schools, and very recently i started building my own linux distribution for the school lab. so i thought, why not share it with the rest of the world ;)

the distribution's goal is to cover the first two levels of the greek education system. greek school labs are famous for their very old hardware, so the distribution is based on gentoo (with xfce as the window manager) in order to be lightweight.

i won't explain (at least not in this post) why i think that free (as in speech) software is the only way to go when it comes to education. the purpose of this post is just to point to the web site of the distribution:
alphabet linux

PS. many thanks to kargig. his experience from iloog development helped me a lot.


April 8, 2009 :: Athens, Greece

April 7, 2009

Steven Oliver

Entity Management


I was looking through the internets the other day and it occurred to me that there is no open source software out there devoted to this. What is entity management? Well, it's simply keeping track of what you own, what you lease out, what you rent, and what you sell. Power companies often have to lease land because they don't necessarily own the land their power poles are on. Gas companies are obviously in a very similar situation. Even companies you wouldn't expect to need such software might. Large banks, for example, might lease the land the bank sits on. I know a local car wash that doesn't actually own the lot it sits on, just the car wash itself. Why doesn't this software exist? My guess is simply that it's boring. Who would want to write it, and why? It's like writing medical records software or something. How boring is that?

But in light of this, I've decided to give it a go. Why not? Screw it, I can code. I can write software as crappy as anyone else on the internets. In fact, I've already come up with a basic database layout using MySQL. To be quite honest, though, I'm not a fan of MySQL thus far, and I might find myself quickly switching to Postgres. I think the SQL I've written so far will probably work in either; it's not exactly complicated stuff at this point.

I haven’t published any code or even given this potential project a name yet, but I might later. What is it they say, “release early, release often.”

Enjoy Penguins!

April 7, 2009 :: West Virginia, USA  

Coding in Open Source


Do you ever want to contribute to a project, or even start your own? Obviously you do; why else would you be reading a blog devoted to Linux? Given that, do you ever find yourself with absolutely zero passion left because the task is so daunting, or because the program you would like to contribute to has tens of thousands of lines of code? Yeah… that's totally me on a regular basis. Can I code? Yeah. I can make programs do all kinds of neat things. Do I really want to spend weeks figuring out your code? No. Do I want to spend weeks just writing back-end “boiler” code to start my own project? No. Sort of makes you hate programming, doesn't it?

April 7, 2009 :: West Virginia, USA  

N. Dan Smith

A Free Software Thesis

Last year I set out to produce my master’s thesis using only free software. Having turned in my final copy today, I can report a qualified success.

Despite some early interest in using LyX (maybe someday in another life), I ended up going with a standard word processor in the form of OpenOffice (and its cousin NeoOffice). The downside in doing so is that I would have to deal directly with formatting issues. Thankfully OpenOffice has some versatile formatting styles which allowed me to satisfy the crazy formatting requirements (seriously - can I have a type-setting degree too?).

As for operating system, I was split between Gentoo Linux (free software) and Mac OS X (decidedly un-free software), where I did the majority of the actual typing. This is where the qualified yes comes in. It has nothing to do with any deficiency of Gentoo or OpenOffice. Rather I only had one machine available, and it had to be running Mac OS X for another reason, so it was just a matter of convenience. As it turned out, some font rendering problems in NeoOffice brought me back to Gentoo, which is the platform upon which I produced the final form of my thesis. It all worked out in the end.

So yes, it is possible to craft a big, important paper using free software tools.

April 7, 2009 :: Oregon, USA  

April 6, 2009

Jason Jones

Disney DRM, Ripping DVDs

Lately, I've been viewing a few Disney flicks on DVD.  I got Bolt and Bedtime Stories.  Because I usually rent them from redbox and can only have them for a day, I rip them, and then when I get around to it, I'll watch 'em and delete them.  No problems with that, as far as I can see.  I rent them to view them once, and that's what I do.

Well, lately, Disney DVDs have been tougher to rip.  The table of contents listed by dvd::rip had me confused for a bit.  Take a look at the screenshot below:



You'll notice that titles 8 through 19 (actually through 42, offscreen) all seem to be full-length movies.  So if I try to play any of them in mplayer, mplayer fails and doesn't play anything.  VLC works just fine if you play the disc from the menus, but what if I want to rip just the movie, with no menus?

Well, using VLC you can see which title is actually playing.  So while viewing the movie, I right-click and check the current title, then use VLC (version 0.8.6i; the new 0.9.8a doesn't seem to rip anything successfully) to rip the title that actually plays.

Hope that doesn't confuse everyone.  I just wanted to blog about how to get it done.  So far, VLC version 0.8.6i is the version I use to rip Disney movies.  Everything else either can't rip them, or flat-out can't play them.
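
For the curious, that rip can in principle be done from the command line too: VLC's dvdsimple MRL plays a given title directly, skipping the menus, and the stream output chain dumps it to a file.  A rough sketch only; the device path, title number, and output filename are all assumptions, and the exact sout syntax varies between VLC versions:

vlc dvdsimple:///dev/dvd@8 --sout "#standard{access=file,mux=ps,dst=movie.mpg}"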

On the flip-side, it seems that Sony hasn't been putting any effort at all toward DRM on their normal DVDs.  They're probably putting all their effort into copy-protecting Blu-rays now, which is just fine by me.

April 6, 2009 :: Utah, USA  

April 5, 2009

Andreas Aronsson

Don't extend

As I am nowadays using the keyworded gentoo-sources, I am already on the 2.6.29 kernel with the promised updated ext4 stuff and some more goodies. However, after doing my normal upgrade routine with make oldconfig, sifting through all the new options, and running my 'build kernel and drivers' script, my system wouldn't boot =|. Unable to remount read-write, dmesg said. A wee bit stumped, I went back to 2.6.28 for a few days, but now I had another go and took a look at my fstab. In the mount options I had put "extents,barriers=0". I'm not sure why, since none of the threads I found with Google made those options look very promising. In particular, when I found a note about the extents option being deprecated, I figured they had to go. Said and done: I have now booted, with little devils peeking at me from the screen instead of penguins. I might even have noticed a very slight speedup when starting programs.
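
For reference, the change amounted to something like this in /etc/fstab (a sketch; the device and the remaining options are assumptions):

# before: the ext4 line with the options that broke the 2.6.29 boot
/dev/sda3   /   ext4   noatime,extents,barriers=0   0 1
# after: plain ext4
/dev/sda3   /   ext4   noatime   0 1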
Ah, portage tells me it's time to go xorg-1.5. Now where did I put the bookmark for the upgrade guide...

April 5, 2009 :: Sweden

Ow Mun Heng

Postgresql 8.4 -> Where are On Disk Bitmap Indexes?

Postgresql 8.4 is nearly out. There are quite a few things in it that look interesting to me. However, the one thing I'm still missing, and whose status I am not able to find, is what happened to the on-disk bitmap indexes that were supposed to come out with the 8.4 release.

Is anyone from the PostgreSQL team privy to that info? I can't really seem to find it on Google.

Thanks.

April 5, 2009

Zeth

Getting value for money for my council tax money

Council Tax

It is April, which in England means we have to start paying a tax to the local government. This tax is called 'Council Tax' and it is levied on each house. Since everyone has to live somewhere, it is basically a tax on everyone, except full-time students, poor people and so on. Sadly I do not fall into any of the exemptions anymore, so I will have to find the thousand-odd pounds or arrange installments.

The city government ('council') has lots of other income, but this is the most visible as you have to organise the payment yourself. Just under 3.8% of the payment goes to the Fire station, fair enough, I do want to be rescued in the event of a fire; and 7.8% goes to the local police force who have proved their value to me already, catching and locking up the person who robbed my house a couple of years ago.

Half of the rest goes towards schools and other services for the city's children. Now I don't have any children, so I don't personally benefit. Well perhaps indirectly, schools keep the local tearaways rounded up in school, giving a few blessed hours on the bus and in shopping centres without the little darlings - that has got to be worth something per week.

Where the rest goes I am not sure. So since I cannot avoid the council tax, I decided to see whether this year, I could get better value for money out of my council tax. I will look into what useful services they have that I don't currently take advantage of. By the end of the year, I will decide whether the council is a huge rip-off or whether I have gotten good value for my money. Of course, I will take a special interest in services I can access digitally. Starting with a spring declutter.

Bulk Item Collection

In my city, the council take away our rubbish each week. However, they cannot take large or heavy items in these weekly collections.

For large items, you can drive them yourself to the 'recycling centre'. Previously when I wanted to get rid of larger things, I would get a visiting relative to drive me to the dump (what a pleasant experience for them).

However, for people like me who do not own a car, the council provides a service called 'Bulky Waste Collections'.

It worked pretty well: I filled out an online form which automatically booked me an appointment. All I had to do then was bung all my heavy crap into my front garden, and the council crew came yesterday with their truck and picked it all up.

http://commandline.org.uk/images/posts/other/bulk-items.jpg

You are allowed six things per appointment, so I decided to get rid of:

  • An electric fire, which went somewhat rusty in damp student digs.
  • A VGA monitor circa 1992; still worked.
  • A hoover, broken.
  • An HP printer; the plastic cog was broken, I couldn't find a replacement, and the cost of getting the cog fabricated was greater than the cost of a new printer.
  • An Apple Power Macintosh Performa 6420 and monitor.

Having men in a truck take away your old heavy crap is a useful service; I will certainly use it again. It certainly feels liberating to throw out stuff, and I am already eyeing up stuff for my next six items.


April 5, 2009 :: West Midlands, England  

Brian Carper

Disabling Ctrl-Alt-Backspace

After being reminded the hard way yet again that C-S-Backspace in Emacs invokes the very handy kill-whole-line function, but that C-M-Backspace, while uncomfortably similar to that key-chord, does something very different, I have now officially added to my /etc/X11/xorg.conf:

Section "ServerFlags"
    Option "DontZap" "True"
EndSection

to prevent me from accidentally murdering my X server at the worst possible times.

April 5, 2009 :: Pennsylvania, USA  

April 4, 2009

Aaron Mavrinac

Das Komputermaschine Ist Fur Der Gefingerpoken

A good friend of mine recently tossed me some computer parts, including an HP illuminated multimedia USB keyboard (model SK-2565, part no. 5185-2027). Since I had been looking to replace my old keyboard (a $10 PS/2 job that I turned into a k-rad all-black cowboy deck with blank keys), and had been suffering from an inability to control my PCM volume or music from the keyboard without launching alsamixer or mocp respectively, a particularly acute problem when playing StarCraft, I found herein an opportunity.

HP SK-2565 USB Keyboard


This keyboard has nineteen buttons and one knob across the top. In order, they are (or look like) sleep, help, HP, printer, camera, shopping, sports, finance, web (connect), search, chat, e-mail, the five standard audio buttons (stop, previous, play/pause, next, load), a volume knob, mute, and music. Since the keyboard was furry enough to qualify as a mammal upon receipt, the first thing I did was clean it, a process which spanned several hours (though the process was niced down somewhat). The previous two sentences are related: the top buttons also happen to be built in such a way as to require utterly complete disassembly of the keyboard to remove and replace, and I am ashamed but not at all surprised to say I got the replacing part wrong. The play/pause button is now swapped with the previous button. And I am totally not taking this thing apart again any time soon.

But it is for the best! After figuring out sometime later that I had goofed, I decided (Daniel Gilbert, this one's for you) that I liked it better this way anyway. Which is perfectly fine, of course, since I'm about to get to the good part: how I made my HP illuminated multimedia USB keyboard special upper buttons work in Linux, using Xmodmap, and in awesome, using rc.lua.

Turns out it's extremely easy to bind arbitrary keycodes to keysyms (a full list of which can be found in /usr/share/X11/XKeysymDB), at least using GDM. By default (on Gentoo), GDM loads /etc/X11/Xmodmap, as specified by the sysmodmap setting in /etc/X11/gdm/Init/Default. Mine now looks like this:

keycode 223 = XF86Sleep
keycode 197 = XF86Shop
keycode 196 = XF86LightBulb
keycode 195 = XF86Finance
keycode 194 = XF86WWW
keycode 229 = XF86Search
keycode 121 = XF86Community
keycode 120 = XF86Mail
keycode 144 = XF86AudioPlay
keycode 164 = XF86AudioStop
keycode 160 = XF86AudioMute
keycode 162 = XF86AudioPrev
keycode 153 = XF86AudioNext
keycode 176 = XF86AudioRaiseVolume
keycode 174 = XF86AudioLowerVolume
keycode 118 = XF86Music


And now, the answers to all your questions:

  1. I figured the keycodes out by running xev and banging on the buttons.

  2. XF86LightBulb is the closest thing I could find to "sports" that wasn't already taken.

  3. The volume knob "clicks" and sends a keycode 176 or 174 depending on the turn direction.

  4. I did not map help, HP, printer, or camera because they do not appear to generate keycodes.

  5. I did not map audio load because I forgot. I will do it when I can think of an action to bind it to.

The next step was to make these keys actually do something in my window manager. Bindings are pretty easy to make in /etc/xdg/awesome/rc.lua. Without getting into too much detail, I bound keys to things. I am particularly impressed with how I can control audio via amixer, and my MOC playlist via commands without even having the interface open. Another bonus is the sleep button running xlock. Here's a sample line:

key({ }, "XF86LightBulb", function () awful.util.spawn("starcraft") end),
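
For the audio keys, the commands behind the bindings are nothing fancy. A sketch of the sort of commands one might spawn; the mixer control names and step sizes are assumptions:

amixer -q set PCM 5%+        # XF86AudioRaiseVolume
amixer -q set PCM 5%-        # XF86AudioLowerVolume
amixer -q set Master toggle  # XF86AudioMute
mocp --toggle-pause          # XF86AudioPlay
mocp --next                  # XF86AudioNext
mocp --previous              # XF86AudioPrev

Each of these just gets wrapped in awful.util.spawn like the sample line above.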

A particularly nice one is the search button, which runs the following script (be nice, my bash-fu is rusty):

#!/bin/bash
Q=`zenity --entry --width 600 --title="Google Search" --text="Google search query:"`
if [[ "$Q" != "" ]]; then
    EQ=`echo "$Q" | sed 's/ /%20/g'`
    firefox "http://www.google.ca/search?q=$EQ"
fi


I frequently say that if I took one thing home from working in the automotive sector, it was Kaizen.

April 4, 2009

Brian Carper

Vim cterm colors

Note to self: Vim color schemes that only set cterm colors don't work unless you export TERM=xterm-256color in your terminal emulator. Konsole in KDE4 seems to default to plain xterm. It took me half an hour to figure out why my color scheme wasn't working in Konsole.
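
A minimal sketch of the workaround, assuming your shell is bash and Konsole's profile doesn't set TERM itself:

# ~/.bashrc
export TERM=xterm-256color

Inside Vim, :set t_Co? should then report 256.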

April 4, 2009 :: Pennsylvania, USA  

Kevin Bowling

FS-Cache merged in Kernel 2.6.30

FS-Cache has been merged into the upcoming kernel 2.6.30.  It provides a generic caching interface in the kernel for other filesystems.  For example, you can use a local hard disk to cache data accessed via NFS, AFS, or CD-ROM.  Since those sources tend to be high-latency while local disks are low-latency, it should provide a nice speedup.

Of particular interest to me: I contacted the maintainer, David Howells, who is a Red Hat employee, and asked whether this infrastructure would help with large disk image files stored on NFS, a common though not particularly efficient setup for VMware, Xen, KVM, etc.  His exact response was “Quite feasible.  As long as you have a local disk on which to cache the files.”

I am quite happy as I run this setup at work for some production VMs since it allows for easy migration and backup without the complexity and cost of a SAN or cluster FS.  I look forward to testing when 2.6.30 hits the stable tree.
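
For reference, here is roughly what using it should look like once 2.6.30 lands; a sketch assuming the cachefilesd userspace daemon is installed and backed by a local disk, with server:/export and /mnt/vms standing in for your own paths:

/etc/init.d/cachefilesd start
mount -t nfs -o fsc server:/export /mnt/vms

The 'fsc' mount option is what tells the NFS client to route reads through FS-Cache.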

April 4, 2009

April 3, 2009

Leif Biberg Kristensen

Simple PHP link factory

I’ve been reviewing old code again, and have grown really tired of PHP code like this:

    if ($parent_id) {
        echo "<span class=\"hotlink\">"
            . " (<a href=\"./relation_edit.php?person=$person"
            . "&amp;parent=$parent_id\">$_edit</a>"
            . " / <a href=\"./relation_delete.php?person=$person"
            . "&amp;parent=$parent_id\">$_delete</a>)</span>\n"
            . cite(get_relation_id($person, $gender), 'relation', $person);
    }

It’s way too messy. So, today, I wrote a simple PHP function to clean up the act:

function to_url($base_url, $params, $txt) {
    $str = '<a href="' . $base_url;
    if ($params) {
        $pairs = array(); // collect key=value pairs
        foreach ($params as $key => $value)
            $pairs[] = $key . '=' . $value;
        $str .= '?' . join($pairs, '&amp;');
    }
    $str .= '">' . $txt . '</a>';
    return $str;
}

This means that the first code snippet has now been rewritten as:

    if ($parent_id) {
        echo ' <span class="hotlink">'
            . to_url('./relation_edit.php', array('person' => $person, 'parent' => $parent_id), $_edit)
            . ' / '
            . to_url('./relation_delete.php', array('person' => $person, 'parent' => $parent_id), $_delete)
            . "</span>\n"
            . cite(get_relation_id($person, $gender), 'relation', $person);
    }

It’s not a giant step for mankind, for sure. But I publish it in the hope that it may be useful to others. It’s a bit strange that it took me eight years of writing data-driven PHP code to discover such a basic thing.

Edit: As noted in the comments, the built-in PHP function http_build_query may be better. Actually, my function to_url seems to replicate at least parts of it. There are a couple of things I’d like to point out, though.

  1. I’d never let a user on the ‘net input data via this function. It’s only used for navigational links in a private application where I must assume that the user has no malicious intentions. Just by looking at the links (edit or delete person data) you should see that the user has full control over the data in the first place. For that reason, there’s hardly any point in URL-encoding the GET string.
  2. The http_build_query builds only the parameter string, and the rest of the link, both base URL and text, will have to be provided by another function.
  3. For complex data like the examples in the PHP documentation, you should really use the POST method. Example #3 is just senseless.

April 3, 2009 :: Norway  

TopperH

Get remote irssi notifications without X forwarding

I was looking for a simple method to have irssi highlight notifications on my local machine while having irssi running on my remote server.

Googling a bit, I found that most methods require X forwarding (and libnotify installed on the server), or screen attached in a terminal on the local machine.

My server has no X, so I'm not going to install libnotify and its dependencies on it, and I don't want to keep an irssi terminal open unless I need it.



Here I found a nice solution:

Server side:

(I assume sshd is running on the server machine and key authentication is set up, so no password is required.)

wget http://www.leemhuis.info/files/fnotify/fnotify
cp fnotify ~/.irssi/scripts/fnotify.pl
cd ~/.irssi/scripts
ln -s ../fnotify.pl autostart/
touch ~/.irssi/fnotify


Then I reload irssi, or type "/RUN fnotify.pl" inside irssi (this step is only needed the first time; afterwards it is done automatically at irssi startup).

From now on, every highlighted message will be logged to this file.

On the client side, I cd to my favourite bin directory (for me that's ~/scripts, but it could also be /usr/local/bin) and create a file called irssi-notification.sh:

#!/bin/sh

ssh user@host "tail -F ~/.irssi/fnotify" | sed -u 's/[<@&]//g' | while read heading message
do notify-send -i gtk-dialog-info -t 300000 -- "${heading}" "${message}"
done


Change user@host to your username and host for the server machine, and chmod +x the file.

Make sure x11-libs/libnotify is installed on your system (I think some distros call this package libnotify-bin... don't ask me, Debian and Ubuntu like to have things complicated).
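
On Gentoo that is one emerge away, and notify-send gives you a quick way to test the client side before wiring up ssh (a sketch):

emerge -av x11-libs/libnotify
notify-send "test" "notifications work"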


Now run the file and notifications will appear.



April 3, 2009 :: Italy  

Jason Jones

ILMJ Auto-Saved Entries

Lately, I've been getting a lot of feedback concerning lost entries from I Love My Journal.  Occasionally, I would even lose one myself.

The session timer for ILoveMyJournal.com is set to 2 hours, which means that you can stay logged in to the site for 2 hours without being logged out due to inactivity.

I initially thought that was long enough, but life will get in the way regardless of whether you're typing your journal or not.  Many times I have gone out to check on my kids, ended up watching a movie, come back in, and finished my entry, and as soon as I click "publish to blog", I get the wonderful login message basically saying "You've been owned, and your entry has been lost".

So, I spent the majority of today writing an AJAX-based auto-save mechanism which will auto-save your entry every 30 seconds (I might up that to 1 or 2 minutes, but we'll see how it goes).

So, if you press a wrong button on your keyboard which closes the browser, or your computer crashes, or you leave your computer for 10 hours straight - now, it doesn't matter at all.

ILoveMyJournal.com will take care of it for you.

Here's a screenshot with the not-very-aesthetically-pleasing note at the bottom.  I'll make it look better later.



April 3, 2009 :: Utah, USA  

Iain Buchanan

Blocking port 25

I had a call from a friend complaining that they had just purchased a wireless broadband stick (from Telstra, using their Next-G network, which is an HSDPA network on UMTS 850MHz) and could not send mail via their normal mail accounts.

A few minutes of checking found that Telstra and Bigpond block outgoing access to port 25 to anything other than their own mail servers.

The reasons are listed here [bigpond.custhelp.com] as well as on other pages. This post will list why their reasons are flawed, and how to get around them.

Flawed Reasoning

Bigpond claims they manage the use of port 25 to "to prevent spammers sending unsolicited email using [their] network." OK, that sounds fair enough at first glance, but when you realise how easy this is to get around (use a different port, for example) then this reason becomes redundant.

Bigpond claims that other ISPs are taking similar steps and that their changes have been "proven to prevent some types of spam activity". However spammers, like advertisers, attempt to stay ahead of the latest trends, and as soon as one method of spamming is blocked, they will use another. Also Internode (as an example) blocks port 25 by default, but lets you turn this feature off.

Furthermore, spammers are setting up real mail servers around the world. In conjunction with a tailored trojan that uses a different port to send mail, Bigpond's efforts are useless. In fact, spam levels are back to 95% of all email traffic!

Finally, you could pay the extra money for a fixed IP address from Telstra, and they won't block the port. In my opinion, this is shameless money grabbing. Please explain why a user on a fixed IP address is not susceptible to a spam-sending trojan or virus?

Perhaps the spam is purposefully malicious, and Telstra would like to know whose account to suspend? Telstra (along with most ISPs) keeps detailed logs of traffic and authentications, so they can easily tell which user on a dynamic IP address was accessing which sites at any point in recent history; static IP addresses are therefore no easier to crack down on.

More Problems than Solutions

Bigpond says that you can use their Bigpond mail server to send mail, and thus get around the port block. You can in fact do this, and still have your email appear to come from you@yourhost.com (and not you@bigpond.com).

This solution is not ideal for two reasons:

1. Travelling
The frequent traveller, like my friend, is often on different networks. He must be able to use whichever network he is on and send and receive his normal email. Setting up a different outgoing mail server, and perhaps a different profile (in whichever mail client he is using), for each network is both time-consuming and pointless.

2. Your email looks like spam
When you send email where the FROM address is you@yourhost.com but it goes through a different email server (Bigpond's), the recipient's (him@friendsmail.com) mail server may block your email or mark it as spam.

This is because exactly that technique (using a FROM address and mail server that do not match) is used by spammers to send spam. The recipient mail server checks the DNS records of the sender (yourhost.com), and if they don't match the originating server (bigpond.com), then your email may be deleted, rejected, or set aside.

Getting around it

OK, so what do you do to get around it? By far the best way is to authenticate with your mail server, and use a secure port. By using a secure port (usually not port 25) Bigpond won't block your outgoing mail. In fact this should work for many networks that block port 25.

You have the added advantage that your mail is probably encrypted, or at least your password will be (don't rely on this to encrypt sensitive emails though, as you can bet it will be transmitted in plain text at some stage of the process).

Is my mail server compatible?
The best thing to do is try! Different mail clients do this in different ways:

Evolution 2.24.5
Edit > Preferences > Mail Accounts > Edit > Sending Email > Use Secure Connection

Thunderbird 3.0b3
Edit > Account Settings > Outgoing Server > Edit > Connection Security

Outlook [including Express]
You have to edit your account settings from one of the main menus. You may have to then choose View or Change existing email accounts. Then select the account and choose Change; then more settings (I think) and then you should see a secure option. Note the SPA option is not what you're looking for here, although you can use it if supported.

If you get timeouts or errors sending mail, then try slightly different options (if you have a choice).
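
If you want to check from the command line whether your mail server offers a secure submission port before fiddling with client settings, openssl can speak STARTTLS. A sketch; mail.yourhost.com and port 587 are assumptions to replace with your provider's details:

openssl s_client -starttls smtp -connect mail.yourhost.com:587

If the server presents a certificate, secure submission is available.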

April 3, 2009 :: Australia  

Brian Carper

Real Confusing Haskell

I can pinpoint the exact page in Real World Haskell where I became lost. I was reading along surprisingly well until page 156, upon introduction of newtype.

At that point my smug grin became a panicked grimace. The next dozen pages were an insane downward spiral into the dark labyrinth of Haskell's type system. I had just barely kept data and class and friends straight in my mind. type I managed to ignore completely. newtype was the straw that broke the camel's back.

As a general rule, Haskell syntax is incredibly impenetrable. => vs. -> vs. <-? I have yet to reach the chapter dealing with >>=. The index tells me I can look forward to such wonders as >>? and ==> and <|>. Who in their right mind thought up the operator named .&.? The language looks like Japanese emoticons run amuck. If and when I reach the \(^.^)/ operator I'm calling it a day.

Maybe Lisp has spoiled me, but the prospect of memorizing a list of punctuation is wearisome. And the way you can switch between prefix and infix notation using parens and backticks makes my eyes cross. Add in syntactic whitespace and I don't know what to tell you.

I could still grow to like Haskell, but learning a new language for me always goes through a few distinct stages:

Curiosity -> Excitement -> Reality Sets In -> Frustration -> Rage ...

At Rage I reach a fork in the road: I either proceed through Acceptance into Fumbling and finally to Productivity, or I go straight from Rage to Undying Hatred. Haskell could still go either way.

April 3, 2009 :: Pennsylvania, USA  

April 2, 2009

Zeth

Printing in black and white on Linux

I do not normally print very much at home, however I decided to get a very cheap printer for coach tickets, airplane boarding passes and other last minute emergencies.

I went for the HP Deskjet D2560. Here it is in its full twenty-five pound glory:

http://commandline.org.uk/images/posts/gnome/print-black-and-white/printing0.png

The printer was so cheap in that it did not come with a USB cable, however I had a few at home already. The printer end needs a B-type connector.

http://commandline.org.uk/images/posts/gnome/print-black-and-white/printing1.png

The first lead I tried, the posher one pictured top, didn't work as the connector didn't penetrate the socket enough. The second lead, pictured bottom, did work. So if you buy a lead at the same time as the printer, make sure your B connector is long enough.

The printer worked with my Linux computer out of the box and printed fine in both colour and black and white.

The printer came with a black ink cartridge and a colour ink cartridge. These cheapie printers follow a razor-and-blades model: it is actually cheaper to buy the printer again and throw the old one away than to buy both of the cartridges again.

Therefore I decided to conserve ink, and thus cost, by printing pages in black and white only.

I pressed Ctrl+P which gives the normal GNOME print dialog that most of the programs have. Then I tried to find the button to set it to black and white.

How to do this on Linux through the graphical interface is not obvious enough in my opinion. The fact that I had to Google through random forum posts for the answer is a somewhat damning indication that the button is too far down.

So the task I was trying to achieve was to 'make my document print in black and white only'. However, it turns out that the interface forces you to 'change your printer mode in your printer settings to grayscale'. The same result, but the path you take through the interface is different. The Linux desktop needs a lot more usability testing.

Anyhow, in the end I went to the top panel and clicked on 'System', then 'Administration' then 'Printing'.

http://commandline.org.uk/images/posts/gnome/print-black-and-white/printing2.png

Then I had to right-click on the particular printer and choose 'Properties'. Making it per-printer means that if I choose a different printer, my document prints in colour as before. I am not convinced that this approach has the highest level of usability for most people.

http://commandline.org.uk/images/posts/gnome/print-black-and-white/printing3.png

Lastly, I then clicked on 'Printer Options' and then under 'General', I used the drop-down labelled 'Printout Mode'.

http://commandline.org.uk/images/posts/gnome/print-black-and-white/printing4.png
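
For what it's worth, the same option can be flipped from the command line via CUPS. A sketch; the queue name is an assumption, and the exact option name ('PrintoutMode' here, from the HP driver) depends on your driver, so list the options first:

lpoptions -p Deskjet_D2560 -l
lp -d Deskjet_D2560 -o PrintoutMode=Normal.Gray document.pdf

The first command lists every option and value the driver exposes; the second prints a single job in grayscale without touching the saved default.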

A lot of work, at least compared to the equivalent option on the legacy operating system. Oh well, let the presses run!


April 2, 2009 :: West Midlands, England  

Brian Carper

My Poor Headphones

My precious Grado SR-80s needed some emergency surgery a while back, resulting in this disaster. They still work today, in the sense that sound is still emitted from them, but in terms of aesthetics, the situation has rapidly deteriorated. I've got bare wire and sticky electrical tape hanging all over the place. I'm also probably one good yank away from snapping the wires off again.

If anyone reading this has a good tutorial or information on re-wiring a set of headphones, it'd be appreciated. I've never soldered anything in my life. I don't know where to acquire the wires; I imagine any wire will do, but I'm clueless when it comes to such things. I think I might like to do something like this mod and run the wire up over the top, to prevent the inevitable twisting from destroying the wires in the future, but I'm uncertain I could pull it off without complete destruction.

(At least I know enough about these things to cringe when people start talking about the "performance" of their headphone wires. $400 for a hunk of wire? Wow.)

April 2, 2009 :: Pennsylvania, USA  

George Kargiotakis

HOWTO remotely install debian over gentoo without physical access

The Task
Last year, comzeradd and I set up a Gentoo server for HELLUG, according to our plot to help Gentoo conquer the world. Unfortunately, Gentoo is outside HELLUG's administration policy: all servers must be Debian. We didn't know that, so after a small flame :), we decided we should take the server back to somebody's home and re-install Debian on it. The problem was that the server was located at the University of Athens campus, which is a bit far from downtown Athens where comzeradd lives, and I live 500km away, so we were pretty much stuck. Months passed and nobody actually had enough free time to go to UOA's campus and take the server to their house. ...In the meantime manji joined us as an extra root for the server.

One Saturday night while chatting on IRC (what else could we be doing on a Saturday night??) we had an inspiration: why not install Debian remotely, without taking the server home? Even if everything eventually got borked, it couldn't get any worse than going there, taking the server home and fixing it, just like we would have to do anyway. So we gathered on a new IRC channel with some more friends who are really good with Debian and started the conversion process.

The Server
The interesting part about the server was that it had 2×250GB IDE disks. The Gentoo setup had these disks partitioned into 4 software RAID devices plus swap partitions.

(Gentoo) # fdisk -l

Disk /dev/hda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x431bd7b7

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1           6       48163+  fd  Linux raid autodetect
/dev/hda2               7         130      996030   82  Linux swap / Solaris
/dev/hda3             131       27964   223576605   fd  Linux raid autodetect
/dev/hda4           27965       30401    19575202+   5  Extended
/dev/hda5           27965       29183     9791586   fd  Linux raid autodetect
/dev/hda6           29184       30401     9783553+  fd  Linux raid autodetect

Disk /dev/hdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/hdb1   *           1           6       48163+  fd  Linux raid autodetect
/dev/hdb2               7         130      996030   82  Linux swap / Solaris
/dev/hdb3             131       27964   223576605   fd  Linux raid autodetect
/dev/hdb4           27965       30401    19575202+   5  Extended
/dev/hdb5           27965       29183     9791586   fd  Linux raid autodetect
/dev/hdb6           29184       30401     9783553+  fd  Linux raid autodetect

md1 was RAID1 with hda1+hdb1 for /boot/
md3 was RAID1 with hda3+hdb3 for /
md5 was RAID1 with hda5+hdb5 for /var/db/
md6 was RAID0 with hda6+hdb6 for /usr/portage/

SUMMARY
What we had to do was:
A) break all RAID1 and RAID0 devices, set all hdbX partitions as faulty and remove them from the RAID.
B) repartition hdb, create new RAID1 arrays with LVM on top and format the new partitions.
C) install Debian on hdb.
D) configure Grub to boot Debian.

HOWTO
In order to be extra cautious about every command we issued, we all logged in to Gentoo; one of us set up a “screen” session and the others joined it using # screen -x

Now everything one of us typed could be seen in real time by all the others.
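For reference, a minimal sketch of that screen setup (the session name here is arbitrary):
(Gentoo) # screen -S reinstall
and then everyone else, from their own SSH login:
(Gentoo) # screen -x reinstall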
PART A) RAID Manipulation
Check the status of the raid devices: cat /proc/mdstat
Copy /usr/portage/ to / as /usr/portage2 so that we can completely delete md6 (RAID0).
(Gentoo) # mkdir /usr/portage2/
(Gentoo) # cp -rp /usr/portage/* /usr/portage2/
(Gentoo) # umount /usr/portage
(Gentoo) # mv /usr/portage2 /usr/portage
(Gentoo) # mdadm --stop /dev/md6

Reminder: There’s no need to mdadm --remove /dev/md6 /dev/hdb6 since RAID0 can’t live with only one disk. The mdadm --remove command does nothing at all for RAID0.

We continued by breaking the rest of the RAID1 arrays.
(Gentoo) # mdadm --set-faulty /dev/md1 /dev/hdb1
(Gentoo) # mdadm --remove /dev/md1 /dev/hdb1
(Gentoo) # mdadm --set-faulty /dev/md3 /dev/hdb3
(Gentoo) # mdadm --remove /dev/md3 /dev/hdb3
(Gentoo) # mdadm --set-faulty /dev/md5 /dev/hdb5
(Gentoo) # mdadm --remove /dev/md5 /dev/hdb5

We checked the current RAID status; every RAID1 array should now be degraded, running with only one disk:
(Gentoo) # cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 hda1[0]
48064 blocks [2/1] [U_]
md3 : active raid1 hda3[0]
223576512 blocks [2/1] [U_]
md5 : active raid1 hda5[0]
9791488 blocks [2/1] [U_]

We were now ready to repartition /dev/hdb.
PART B) Repartition hdb
(Gentoo) # fdisk /dev/hdb
Created 3 partitions: a) 128MB for /boot, b) 1GB for swap and c) the rest for LVM.
In order to re-read the partition table we issued:
(Gentoo) # hdparm -z /dev/hdb
Then checked that everything was OK:
(Gentoo) # cat /proc/partitions | grep hdb

PART C) Install Debian on /dev/hdb
We first had to install the proper tools to do that. In order to create LVM partitions we needed the lvm userspace tools:
(Gentoo) # emerge -avt lvm2
Then we needed the tool that creates the Debian base system; the package is called debootstrap.
(Gentoo) # emerge -avt debootstrap
Created the new RAID1 arrays:
(Gentoo) # mdadm --create /dev/md11 --level=1 -n 2 /dev/hdb1 missing
(Gentoo) # mdadm --create /dev/md12 --level=1 -n 2 /dev/hdb2 missing
(Gentoo) # mdadm --create /dev/md13 --level=1 -n 2 /dev/hdb3 missing

Checked the new RAID arrays:
(Gentoo) # cat /proc/mdstat
Created some basic LVM volumes on top of md13. We didn’t use the whole space of hdb3, because we can create more volumes when and where we need them in the future:
(Gentoo) # pvcreate /dev/md13
(Gentoo) # vgcreate local /dev/md13
(Gentoo) # vgdisplay
(Gentoo) # lvcreate -n root -L 10G local
(Gentoo) # lvcreate -n tmp -L 2G local
(Gentoo) # lvcreate -n home -L 20G local
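
Leaving free extents in the volume group is what makes that later growth trivial: a volume can be grown online from the running system. As a hedged sketch (sizes made up; online ext3 growing needs kernel support):
(Debian) # lvextend -L +10G /dev/local/home
(Debian) # resize2fs /dev/local/home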

Formatted the LVM partitions and mounted them someplace.
(Gentoo) # mkfs.ext2 /dev/md11
(Gentoo) # mkfs.ext3 /dev/local/root
(Gentoo) # mkfs.ext3 /dev/local/home
(Gentoo) # mkfs.ext3 /dev/local/tmp
(Gentoo) # tune2fs -c 0 -i 0 /dev/local/root
(Gentoo) # tune2fs -c 0 -i 0 -m 0 /dev/local/home
(Gentoo) # tune2fs -c 0 -i 0 /dev/local/tmp
(Gentoo) # mkdir /mnt/newroot
(Gentoo) # mkdir /mnt/newroot/{boot,home,tmp}
(Gentoo) # mount /dev/local/root /mnt/newroot/
(Gentoo) # mount /dev/md11 /mnt/newroot/boot/
(Gentoo) # mount /dev/local/home /mnt/newroot/home/
(Gentoo) # mount /dev/local/tmp /mnt/newroot/tmp/

Then it was time to install Debian on /mnt/newroot using debootstrap:
(Gentoo) # debootstrap --arch=amd64 lenny /mnt/newroot/ http://ftp.ntua.gr/debian

After a while, when it was done, we chrooted into the Debian install:
(Gentoo) # cd /mnt/newroot/
(Gentoo) # mount -o bind /dev dev/
(Gentoo) # mount -t proc proc proc
(Gentoo) # chroot . /bin/bash
(Debian) #

We created the network config,
(Debian) # vi /etc/network/interfaces
(contents)
auto eth0
iface eth0 inet static
address X.Y.Z.W
netmask 255.255.255.240
gateway A.B.C.D
(/contents)

We fixed /etc/apt/sources.list:
(Debian) # vim /etc/apt/sources.list
(contents)
deb http://ftp.ntua.gr/debian lenny main contrib non-free
deb http://volatile.debian.org/debian-volatile lenny/volatile main contrib non-free
deb http://ftp.informatik.uni-frankfurt.de/debian-security/ lenny/updates main contrib
deb-src http://security.debian.org/ lenny/updates main contrib
(/contents)

We upgraded the current system and installed various useful packages.
(Debian) # aptitude update
(Debian) # aptitude full-upgrade
(Debian) # aptitude install locales
(Debian) # vi /etc/locale.gen
(contents)
el_GR ISO-8859-7
el_GR.UTF-8 UTF-8
en_US.UTF-8 UTF-8
(/contents)
(Debian) # locale-gen
(Debian) # aptitude install openssh-server
(Debian) # aptitude install linux-image-2.6.26-1-amd64
(Debian) # aptitude install lvm2 mdadm
(Debian) # aptitude purge citadel-server exim4+
(Debian) # aptitude purge libcitadel1
(Debian) # aptitude install grub less
(Debian) # vi /etc/kernel-img.conf
(contents)
do_symlinks = Yes
do_initrd = yes
postinst_hook = update-grub
postrm_hook = update-grub
(/contents)
(Debian) # vi /etc/hosts
(Debian) # vi /etc/fstab
(contents)
proc /proc proc defaults 0 0
/dev/local/root / ext3 defaults,noatime 0 0
/dev/local/tmp /tmp ext3 defaults,noatime,noexec 0 0
/dev/local/home /home ext3 defaults,noatime 0 0
/dev/md11 /boot ext2 defaults 0 0
/dev/md12 none swap sw 0 0
(/contents)
(Debian) # update-initramfs -u -k all
(Debian) # passwd

And we logged out of Debian to go back to Gentoo to fix grub.
PART D) Configure Grub on Gentoo (hda) to boot Debian
Since we didn’t have physical access to the server, we had to boot Debian using the Grub on hda, where Gentoo’s Grub was.
We copied the kernel and initrd from Debian:
(Gentoo) # cp /mnt/newroot/boot/vmlinuz-2.6.26-1-amd64 /boot/
(Gentoo) # cp /mnt/newroot/boot/initrd.img-2.6.26-1-amd64 /boot/

We edited the grub config to add an entry for Debian and set it as the default! Otherwise the system would reboot back into Gentoo.
(Gentoo) # vi /boot/grub/menu.lst
(contents)
default 1
fallback 0
timeout 10
title=Gentoo
root(hd0,0)
kernel /gentoo-kernel ........
initrd /gentoo-initrd
title=debian (hdb)
root(hd1,0)
kernel /vmlinuz-2.6.26-1-amd64 root=/dev/mapper/local-root ro
initrd /initrd.img-2.6.26-1-amd64
(/contents)

Then we unmounted all partitions from /mnt/newroot/, crossed our fingers and rebooted!
Voila! We could ssh into our new Debian install :) And there was much rejoicing…

What was left to be done was to mount the old Gentoo RAID arrays (md1, md3), take backups of the configs and place them inside Debian. Then we could kill the old RAID arrays entirely, recreate the partitions on hda and add those to Debian’s RAID arrays (md11, md12, md13). Of course, special attention had to be paid to re-installing grub separately on both hda and hdb!!
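
As a sketch of that backup step (assuming the old md3 array still assembles under the new system; paths and archive name are illustrative):
(Debian) # mkdir /mnt/gentoo
(Debian) # mount /dev/md3 /mnt/gentoo
(Debian) # tar czf /root/gentoo-configs.tar.gz -C /mnt/gentoo etc
(Debian) # umount /mnt/gentoo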

Debian-izing the disk with the Gentoo install
After a couple of days I decided to move on, kill Gentoo completely and make Debian use both disks.
The first thing I did was to stop the old Gentoo RAID arrays.
(Debian) # mdadm --stop /dev/md6
(Debian) # mdadm --stop /dev/md3
(Debian) # mdadm --stop /dev/md1

Then I repartitioned /dev/sda (under the Debian kernel’s drivers all disks appear as /dev/sdX) and created partitions the same size as /dev/sdb’s:
(Debian) # fdisk /dev/sda
That was the point of no-return :)

There’s a risk involved here. The original sda1 was 64MB and the newer sdb1 was 128MB, so I couldn’t add sda1 to md11 without extending the sda1 partition. If I completely scratched /dev/sda1 to create a new 128MB partition and a power failure occurred while this process was going on, the server could become unbootable, because it wouldn’t find a proper sda1 to boot from. Someone wanting to minimize that risk would have to repartition sda, extend sda1 to the size of sdb1, grow the old /dev/md1 to fit the new sda1 size and extend the filesystem on top of it. Of course there is still the problem of what would happen if a power failure occurred while extending the fs… so I chose to skip that “risk” and pretend it’s not there :)

Re-read the partition table:
(Debian) # hdparm -z /dev/sda
Then I added the new partitions to the Debian RAID1 arrays.
The first array I fixed was the /boot RAID1 array, because it takes only a few seconds to sync; that minimizes the window in which there is no boot manager on the MBR while the rest of the partitions are still syncing:
(Debian) # mdadm --add /dev/md11 /dev/sda1
When the sync was over, I installed Grub on both sda and sdb:
(Debian) # grub
grub> device (hd0) /dev/sda
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
[...snip...]
grub> quit
(Debian) # grub
grub> device (hd1) /dev/sdb
grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
[...snip...]
grub> quit

Then I fixed the rest of the RAID1 arrays:
(Debian) # mdadm --add /dev/md12 /dev/sda2
(Debian) # mdadm --add /dev/md13 /dev/sda3

The last sync took a while (approx 1h).
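
To keep an eye on a resync without retyping the command, something like this works:
(Debian) # watch -n 5 cat /proc/mdstat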

Make some final checks:
a) Check that grub is installed on every disk’s MBR
(Debian) # dd if=/dev/sda of=test.file bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 5.4721e-05 s, 9.4 MB/s
(Debian) # grep -i grub test.file
Binary file test.file matches
(Debian) # dd if=/dev/sdb of=test2.file bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 5.4721e-05 s, 9.4 MB/s
(Debian) # grep -i grub test2.file
Binary file test2.file matches

b) Make sure you have the correct entries in grub config:
(Debian) # cat /boot/grub/menu.lst
default 0
timeout 10
title=debian
root(hd0,0)
kernel /vmlinuz-2.6.26-1-amd64 root=/dev/mapper/local-root ro
initrd /initrd.img-2.6.26-1-amd64

c) Check the RAID1 arrays
(Debian) # cat /proc/mdstat
Personalities : [raid0] [raid1]
md13 : active raid1 sdb3[0] sda3[1]
243071360 blocks [2/2] [UU]
md12 : active (auto-read-only) raid1 sdb2[0] sda2[1]
987904 blocks [2/2] [UU]
md11 : active raid1 sdb1[0] sda1[1]
136448 blocks [2/2] [UU]
unused devices:

That’s all. Only a reboot will show whether everything went right.
Good luck!

P.S. The struggle of Gentoo taking over the world is not over. We may have lost a battle but we haven’t lost the war!

References:
a) HOWTO - Install Debian Onto a Remote Linux System
Pretty old but was the base of our efforts
b) RAID1 on Debian Sarge
c) growing ext3 partition on RAID1 without rebooting
d) Remote Conversion to Linux Software RAID-1 for Crazy Sysadmins HOWTO
e) Gentoo LVM2 installation

April 2, 2009 :: Greece  

Roy Marples

dhcpcd-4.99.16 out

This should be the last experimental release of dhcpcd-4.99, as the last feature I wanted is now in: ARP ping support. This is handy for mobile hosts that require a static IP at certain sites. You can configure it like so:

interface bge0
arping 192.168.0.1
# 192.168.0.1 exists on more than one site
# so we differentiate by hardware address
profile 00:11:22:33:44:55
static ip_address=192.168.0.10/24
static domain_name_servers=192.168.0.2
# All other profiles for 192.168.0.1
profile 192.168.0.1
static ip_address=192.168.0.20/24
static domain_name_servers=192.168.0.1
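
With that in /etc/dhcpcd.conf, trying it out is just a matter of running dhcpcd on the interface; a minimal sketch, with -d adding debug output so you can watch the ARP probe and profile selection:

# dhcpcd -d bge0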

This means that dhcpcd can now replace all the interface configuration modules in Gentoo baselayout and OpenRC, and that we can move the link management modules into proper init scripts, which is where they really belong.

So, get testing it and report back any bugs, even compile warnings :)

April 2, 2009

Kevin Bowling

Good Linux File System Developments

ext4 has sparked good controversy on the LKML. Aside from the recent delayed allocation and fsync issues, the whole FS stack is getting some much needed attention.  Indeed, Linux file systems are starting to feel like first class citizens again, with ext4 and Btrfs (merged in 2.6.29 for testing!) and the surrounding infrastructure being worked on.  A lot of long overdue problems are being mitigated.  Jens Axboe claims an 8% single-drive and 25% array speedup with some recent pdflush patches.  This is very good news for all users, since the gap between disk I/O and CPU/main-memory bandwidth has been growing fast, even with SSDs.  The fruits of this labor are already visible in recent boot speedups in distros like the upcoming Fedora 11.

Mandatory reading:



April 2, 2009

N. Dan Smith

IcedTea coming to Gentoo PowerPC (someday)

Today I successfully built the IcedTea Java virtual machine on Gentoo/PowerPC.  What does that mean?  It means that someday Gentoo/PowerPC users will be able to have a source-based, free software Java system.  Currently we have to use IBM’s proprietary Java development kit, which brings with it a whole host of problems (from obtaining the binaries to fixing bugs).

The ebuild I used for dev-java/icedtea6 (which provides a 1.6 JDK) is from the Java overlay. After it is further stabilized and pending some legal discussion, we should have it in the main Gentoo tree, meaning that someday ibm-jdk-bin will disappear or become the secondary option for Java.  Hooray!
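
For the impatient, the rough shape of what that involves, as a hedged sketch (it assumes layman is set up, and the overlay name may have changed since):

# layman -a java-overlay
# emerge -av dev-java/icedtea6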

Once I get my feet a bit wetter I might post a more specific guide on getting IcedTea up and running on your Gentoo/PowerPC machine.

April 2, 2009 :: Oregon, USA  

April 1, 2009

Leif Biberg Kristensen

Update

It’s been a long time since I posted anything here. The Exodus project is still alive and kicking; it’s my primary tool for doing genealogy research, so I’m using it every day, and I am continually making improvements and extensions.

I simply haven’t had much motivation for writing anything about it.

The most important changes to the code base since my last blog are:

1) The add_source() function has been refactored, and most of the logic has been moved to a beast of a plpgsql function:

CREATE OR REPLACE FUNCTION add_source(INTEGER,INTEGER,INTEGER,INTEGER,TEXT,INTEGER) RETURNS INTEGER AS $$
-- Inserts sources and citations, returns current source_id
-- 2009-03-26: this func has finally been moved from PHP to the db.
-- Should be called via the functions.php add_source() which is left as a gatekeeper.
DECLARE
    person  INTEGER = $1;
    tag     INTEGER = $2;
    event   INTEGER = $3;
    src_id  INTEGER = $4;
    txt     TEXT    = $5;
    srt     INTEGER = $6;
    par_id  INTEGER;
    rel_id  INTEGER;
    x       INTEGER;
BEGIN
    IF LENGTH(txt) <> 0 THEN -- source text has been entered, add new node
        par_id := src_id;
        SELECT MAX(source_id) + 1 FROM sources INTO src_id;
        -- parse text to infer sort order:
        -- 1) use page number for sort order (low priority, may be overridden)
        IF srt = 1 THEN -- don't apply this rule unless sort = default
            IF txt SIMILAR TO E'%side \\d+%' THEN -- use page number as sort order
                SELECT SUBSTR(SUBSTRING(txt, E'side \\d+'), 5,
                    LENGTH(SUBSTRING(txt, E'side \\d+')) -4)::INTEGER INTO srt;
            END IF;
        END IF;
        -- 2) use ^#(\d+) for sort order
        IF txt SIMILAR TO E'#\\d+%' THEN
            SELECT SUBSTR(SUBSTRING(txt, E'#\\d+'), 2,
                LENGTH(SUBSTRING(txt, E'#\\d+')) -1)::INTEGER INTO srt;
            txt := REGEXP_REPLACE(txt, E'^#\\d+ ', ''); -- strip #number from text
        END IF;
        -- 3) increment from max(sort_order) of source group
        IF txt LIKE '++ %' THEN
            SELECT MAX(sort_order) + 1
                FROM sources
                WHERE get_source_gp(source_id) =
                    (SELECT parent_id FROM sources WHERE source_id = par_id) INTO srt;
            txt := REPLACE(txt, '++ ', ''); -- strip symbol from text
        END IF;
        -- there's a unique constraint on (parent_id, source_text) in the sources table, don't violate it.
        SELECT source_id FROM sources WHERE parent_id = par_id AND source_text = txt INTO x;
        IF NOT FOUND THEN
            INSERT INTO sources (source_id, parent_id, source_text, sort_order) VALUES (src_id, par_id, txt, srt);
        ELSE
            RAISE NOTICE 'Source % has the same parent id and text as you tried to enter.', x;
            RETURN -x; -- abort the transaction and return the offended source id as a negative number.
        END IF;
        -- the rest of the code will only be executed if the source is already associated with a person-event,
        -- ie. the source has been entered from the add/edit event forms.
        IF event <> 0 THEN
            -- if new cit. is expansion of an old one, we may remove the "parent node" citation
            DELETE FROM event_citations WHERE event_fk = event AND source_fk = par_id;
            -- Details about a birth event will (almost) always include parental evidence. Therefore, we'll
            -- update relation_citations if birth event (and new source is an expansion of existing source)
            IF tag = 2 THEN
                FOR rel_id IN SELECT relation_id FROM relations WHERE child_fk = person LOOP
                    INSERT INTO relation_citations (relation_fk, source_fk) VALUES (rel_id, src_id);
                    -- again, remove references to "parent node"
                    DELETE FROM relation_citations WHERE relation_fk = rel_id AND source_fk = par_id;
                END LOOP;
            END IF;
        END IF;
    END IF;
    -- associate source node with event
    IF event <> 0 THEN
        -- don't violate unique constraint on (source_fk, event_fk) in the event_citations table.
        -- if this source-event association already exists, it's rather pointless to repeat it.
        PERFORM * FROM event_citations WHERE event_fk = event AND source_fk = src_id;
        IF NOT FOUND THEN
            INSERT INTO event_citations (event_fk, source_fk) VALUES (event, src_id);
        ELSE
            RAISE NOTICE 'citation exists';
        END IF;
    END IF;
    RETURN src_id;
END
$$ LANGUAGE PLPGSQL VOLATILE;

(Edit: The reason behind moving this logic into the db is of course the relatively large number of interdependent queries, which I seriously dislike running from a PHP script. I have been anticipating this move for a really long time. And, after posting it here, I finally got around to adding some semi-intelligent exception handling. My old PHP function just called die() to prevent Postgres from barfing all over the place in case of the “sources” constraint violation.)

2) New «Search for Couples» page. I have simply deployed the view I’ve described earlier and used the index.php as a template to put a PHP wrapper script around it. So now I can find out in an instant if I have a couple like Ole Andersen and Anne Hansdatter who married around 1760.

3) New «Search for Source Text» page. I had this function which I used to run a lot from the command line:

-- CREATE TYPE int_bool_text AS (i INTEGER, b BOOL, t TEXT);

CREATE OR REPLACE FUNCTION find_src(TEXT) RETURNS SETOF int_bool_text AS $$
-- function for searching for source text from psql
-- example: select find_src('%Solum%Konfirmerte%An%Olsd%');
    SELECT source_id, is_unused(source_id), strip_tags(get_source_text(source_id))
        FROM sources
        WHERE get_source_text(source_id) LIKE $1
        ORDER BY is_unused(source_id), date_extract(strip_tags(get_source_text(source_id)))
$$ LANGUAGE SQL STABLE;

There are two issues with that. First, it’s cumbersome, even with readline and tab completion, to use the psql shell every time I want to look up a source text. Second, the query takes half a minute to run because it has to build the full source text for every single node in the sources table (currently 41072 nodes) and run a sequential search through them. For most searches, I don’t actually need more than the transcript part of the source text. So, again using the index.php as a template, I built a PHP page that did the job in a more flexible manner, with two radio buttons for «partial» or «full» search respectively. The meat of the script is the query:

$scope = $_GET['scope'];
if ($src) {
    if ($scope == 'partial')
        $query = "SELECT source_id, is_unused(source_id) AS unused,
                            get_source_text(source_id) AS src_txt
                    FROM sources
                    WHERE source_text SIMILAR TO '%$src%'
                    ORDER BY date_extract(source_text)";
    if ($scope == 'full')
        $query = "SELECT source_id, is_unused(source_id) AS unused,
                            get_source_text(source_id) AS src_txt
                    FROM sources
                    WHERE get_source_text(source_id) SIMILAR TO '%$src%'
                    ORDER BY date_extract(source_text)";

By using SIMILAR TO, I can easily search for variant spellings. For instance, the given name equivalent to Mary in Norwegian is frequently spelled as Maren, Mari, Marie or Maria. Giving the atom as "Mar(en|i)[ea]*" deals effectively with this. (Future project: use tsearch and build a thesaurus of variant name spellings.)

Integrating the search within the application brought another bonus. I made the node number in the query result clickable with a link to the Source Manager. So, just by opening the node in a new tab, I both get to see which events and relations the source is associated with, and automatically set the last_selected_source to this node, ready to associate with an Event or Relation.

The last_selected_source (LSS) has grown to become a powerful concept within the application. I seldom enter a source node number by hand anymore; it’s much easier to modify the LSS before entering a citation. Therefore, I’ve also added a «Use» hotlink that updates the LSS in the Family View Notes section to each of the citations.

I probably should write some words about how I operate this program, as it’s very unconventional with respect to other genealogy apps. The source model is, as I’ve described in the Exodus article, «a self-referential hierarchy with an unknown number of levels.» (See the Gentech GDM document, section 5.3: Evidence Submodel.) The concept is generally known as an «adjacency tree» in database parlance. My own twist to it is that each node contains a partial string, and the full source text is produced at run-time by a recursive concatenation of the strings. It’s a simple, yet powerful, approach. Supplementary text, not intended to show up in the actual citation, is enclosed in {curlies}.

I usually start with entering source transcripts from a church book, every single one of them in sequential order. The concatenated node text up to that point is something like “Church book|for Solum|Mini 2 (1713-1761).|Baptisms,|page 62.” (The pipes are actually spaces, I just wanted to show the partial strings.) When I add a transcript, I usually increment the sort_order by prefixing the text with ‘++ ‘, and the add_source function (see above) will automatically assign the correct sort order number to the node. At the same time, I’ll look up the name in the database to see if I’ve already got that person or the family. Depending on the search result, I may associate the newly entered transcript with the relevant Events/Relations, or may leave it lying around, waiting for that person or family to approach «critical mass» in my research. Both in the Source Manager and in the new Search for Source Text page, unused transcripts are rendered in grey text, making it easy to see which sources are actually associated with «persons» in the database.

It can be seen that the process is entirely «source driven», to an extent that I have not seen in any other genealogy research tool. And, of course, it’s totally incompatible with GEDCOM.

For that reason, and for several others, it’s also totally unsuitable for a casual «family historian». Most people compile their genealogy by drawing information from lots and lots of different sources. I, on the other hand, conduct a «One-place Study» in two adjacent parishes, and use a few sources exhaustively. I’m out to get the full picture of those two parishes, and my application is designed with that goal in mind.

April 1, 2009 :: Norway  

Dan Fego

Merging files with pr

Tonight, I’ve been poring over a rather large data set that I want to get some useful information out of. All the data was originally stored in a .html file, but after some (very) crude extraction techniques, I managed to pull out just the data I wanted, and shove it into a comma-separated file. Earlier, I had given up on my tools at hand and typed up an entire list of row headings for my newly-gotten data. So I had two files like so:

headings.txt
Alpha
Bravo
Charlie

values.csv
1,2,3,4
5,6,7,8
9,10,11,12

I spent quite a bit of time trying to figure out how to combine the two columns into one file with what I knew, but none of my tools could quite do it without nasty shell scripting. It took me a while, but I eventually found this post that cracked the case for me. The pr command, ostensibly for paging documents, has enough horsepower to solve my problem in short order, like so:

$ pr -tm -s, headings.txt values.csv

The -t tells the program to omit headers and footers, and -m tells it to merge the files line by line. The -s, tells it to use a comma as the field separator. This produced my desired result:

Alpha,1,2,3,4
Bravo,5,6,7,8
Charlie,9,10,11,12

There are numerous other options to pr, and depending on your potential line lengths, one may have to experiment. But for me, this got the job done.
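
For what it’s worth, coreutils’ paste handles this particular two-file merge even more directly:

$ paste -d, headings.txt values.csv

pr still earns its keep when you want its pagination and width controls on top of the merge.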


April 1, 2009 :: USA  

Brian Carper

Trying Arch

Thanks to all who gave helpful suggestions about running VMs in Gentoo. The main reason I wanted a VM was to play around with some other distros and see what I liked.

But then I got to thinking, and I realized that I have over 250 GB of free hard drive space sitting around. So I made a new little partition and per Noah's suggestion, threw Arch Linux on there.

I'm fairly impressed so far. The install was easy. In contrast to the enormous Gentoo handbook, the whole Arch install guide fits on one page of the official Arch wiki. Why doesn't Gentoo have an official wiki? I know there are concerns over the quality of something anyone can edit, but in practice is it that big a deal? Is it worth the price of sending users elsewhere, to potentially even WORSE places, when the Gentoo docs don't cover everything we need? The quality of the unofficial Gentoo wiki is often very good but sometimes hit-or-miss, and it also sort of crashes and loses all its data without backups every once in a while.

The Arch installer is a commandline app using ncurses for basic menus and such, which is more than sufficient and a good compromise between commandline-only and full-blown X-run Gnome bloat. The install itself went fine, other than my own mistakes. I'm sharing /boot and /home between Gentoo and Arch so I can switch between them easily. During the install Arch tried to create some GRUB files, but they already existed care of Gentoo, so the install bombed without much notification and I didn't notice until 3 steps later. No big deal to fix, but I'd have liked a louder error message right away when it happened. The base install took about 45 minutes.

Another nice thing is that the Arch install CD has vi on it. I didn't have to resort to freaking nano or remember to install vim first thing. A mild annoyance to be sure, but it bugged me every time I installed Gentoo.

After boot, installing apps via pacman is simple enough. KDE 4.2 installed in about 15 minutes, as you'd expect from a distro with binary packages. I found a mirror with 1.5 Mb/sec downloads, which is awfully nice. Syncing the package tree takes less than 2 seconds, which is also nice compared to Portage's 5-minute rsync and eix update times. Searching the tree via regex is also somehow instantaneous in Arch.
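
For reference, the handful of pacman invocations that boils down to (a sketch; run as root):

# pacman -Sy         # sync the package tree
# pacman -Ss '^kde'  # regex search of the tree
# pacman -S kde      # install KDE (a package group)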

Oddly, KDE didn't seem to pull in Xorg as a dependency, but other dependencies worked fine so far. Time will tell how well this all holds up. Most package managers do fine on the normal cases but the real test is the funky little obscure apps. pacman -S gvim resulted in a Vim with working rubydo and perldo, which means Arch passed the Ubuntu stink test.

Another nice thing is that KDE4 actually works. My Gentoo install is years old and possibly crufted beyond repair, or something else was wrong, but I have yet to get KDE4 working in Gentoo without massive breakage. Possibly if I wiped Gentoo and tried KDE4 without legacy KDE3 stuff everywhere it'd also be smooth.

Regardless, it all works in Arch. NVidia drivers and Twinview settings were copy/pasted from Gentoo, and compositing all works fine. No performance problems in KDE with resizing or dragging windows, no Plasma crashes (yet), no missing icons or invisible notification area. QtCurve works in Qt3, Qt4 and GTK just fine. My sound card worked without any manual configuration at all. My mouse worked without tweaking, including the thumb buttons. Same with networking, the install prompted me for my IP and gateway etc. and then it worked, no effort.

I've mentioned before, but one nice thing about Linux is that if you have /home in its own partition, it's no big deal at all to share it between distros. With no effort at all I'm now using all my old files and settings in Arch, and I can switch back and forth between this and Gentoo without any troubles.

So we'll see how this goes. So far so good though. Arch seems very streamlined and its goal is minimalism, which is nice. Gentoo has not felt minimalistic to me in a while. Again, may be due to the age of my install, cruft and bit-rot.

April 1, 2009 :: Pennsylvania, USA  

March 31, 2009

Ciaran McCreesh

Feeding ERB Useful Variables: A Horrible Hack Involving Bindings


I’ve been playing around with Ruby to create Summer, a simple web packages thing for Exherbo. Originally I was hand-creating HTML output simply because it’s easy, but that started getting very very messy. Mike convinced me to give ERB a shot.

The problem with template engines with inline code is that they look suspiciously like the braindead PHP model. Content and logic end up getting munged together in a horrid, unmaintainable mess, and the only people who’re prepared to work with it are the kind of people who think PHP isn’t more horrible than an aborted Jacqui Smith clone foetus boiled with rotten lutefisk and served over a bed of raw sewage with a garnish of Dan Brown and Patricia Cornwell novels. So does ERB let us combine easy page layouts with proper separation of code?

Well, sort of. ERB lets you pass it a binding to use for evaluating any code it encounters. On the surface of it, this lets you select between the top level binding, which can only see global symbols, or the caller’s binding, which sees everything in scope at the time. Not ideal; what we want is to provide only a carefully controlled set of symbols.

There are three ways of getting a binding in Ruby: a global TOPLEVEL_BINDING constant, which we clearly don’t want, the Kernel#binding method which returns a binding for the point of call, and the Proc#binding method which returns a binding for the context of a given Proc.

At first glance, the third of these looks most promising. What if we define the names we want to pass through in a lambda, and give it that?

require 'erb'

puts ERB.new("foo <%= bar %>").result(lambda do
    bar = "bar"
end)

Mmm, no, that won’t work:

(erb):1: undefined local variable or method `bar' for main:Object (NameError)

Because the lambda’s symbols aren’t visible to the outside world. What we want is a lambda that has those symbols already defined in its binding:

require 'erb'

puts ERB.new("foo <%= bar %>").result(lambda do
    bar = "bar"
    lambda { }
end.call)

Which is all well and good, but it lets symbols leak through from the outside world, which we’d rather avoid. If we don’t explicitly say “make foo available to ERB”, we don’t want to use the foo that our calling class happens to have defined. We also can’t pass functions through in this way, except by abusing lambdas — and we don’t want to make the ERB code use make_pretty.call(item) rather than make_pretty(item). Back to the drawing board.

There is something that lets us define a (mostly) closed set of names, including functions: a Module. It sounds like we want to pass through a binding saying “execute in the context of this Module” somehow, but there’s no Module#binding_for_stuff_in_us. Looks like we’re screwed.

Except we’re not, because we can make one:

module ThingsForERB
    def self.bar
        "bar"
    end
end

puts ERB.new("foo <%= bar %>").result(ThingsForERB.instance_eval { binding })

Now all that remains is to provide a way to dynamically construct a Module on the fly with methods that map onto (possibly differently-named) methods in the calling context, which is relatively straightforward. Then we can do this in our templates:

<% if summary %>
    <p><%=h summary %>.</p>
<% end %>

<h2>Metadata</h2>

<table class="metadata">
    <% metadata_keys.each do | key | %>
        <tr>
            <th><%=h key.human_name %></th>
            <td><%=key_value key %></td>
        </tr>
    <% end %>
</table>

<h2>Packages</h2>

<table class="packages">
    <% package_names.each do | package_name | %>
        <tr>
            <th><a href="<%=h package_href(package_name) %>"><%=h package_name %></a></th>
            <td><%=h package_summary(package_name) %></td>
        </tr>
    <% end %>
</table>

Which gives us a good clean layout that’s easy to maintain, but lets us keep all the non-trivial code in the controlling class.

Posted in summer Tagged: exherbo, ruby, summer

March 31, 2009

Jürgen Geuter

Themeability can result in bad software

Gwibber is a microblogging client for Linux based on Python and GTK. Well some of it is.

But in order to give it simple skinability or themeability, it was decided to use an embedded WebKit browser to display the information. Even better, the HTML wasn't rendered statically: after parsing, the data was pushed into an HTML template and then processed dynamically with jQuery and JavaScript.

That sounds like a neat "proof of concept" thingy, you know, one of those thing where people ask: "Why would you do that?" And you answer: "Because I can."

Many people nowadays know at least some HTML, CSS and JavaScript, so many projects are tempted to use those technologies as markup to gain the ability to skin their software, but I think that is not the right direction.

Yes some people will claim that people want to use pretty software and if your software is not as pretty as a fairy princess, nobody will want to run it.

But on the other hand, Gwibber gives us an example of the opposite point of view: the embedded WebKit browser thingy in connection with JavaScript is really unstable and fragile. Today I updated my system and got a newer webkit-gtk which made Gwibber pretty much die. It's a known bug and it's really hard to debug what exactly goes wrong.

While Gwibber kinda has the important features covered, there is still quite some stuff it lacks, but right now most energy has to be spent on reworking the inner workings and getting the webkit thingy to display some statically rendered HTML.

A better approach would have been to implement the functionality in a library and then build a client on top of that, a simple client that just works. Then you can start adding code to the whole thing that allows you to make it all pretty and fancy.

Right now we have a package that's kinda nifty but forces you to find a random version of webkit-gtk that might work and, if you find it, never upgrade. You have a pretty tool that users start to adopt; it gets included in Ubuntu's next release, but, guess what? The current version won't run. That makes the project look bad. Even if the software looks good. If you know what I mean.

March 31, 2009 :: Germany  

Martin Matusiak

emerge mono svn

Yes, it’s time for part two. If you’re here it’s probably because someone said “fixed in svn”, and for most users of course that doesn’t matter, but if you’re a developer you might need to keep up with the latest.

Now, it’s one thing to do a single install and it’s another to do it repeatedly. So I decided to do something about it. Here is the script, just as quick and dirty and unapologetic as the language it’s written in. To make up for that I’ve called it emerge.pl to give it a positive association.

What it does is basically encapsulate the findings from last time and just put it all into practice. Including setting up the parallel build environment for you. Just remember that once it’s done building, source the env.sh file it spits out to run the installed binaries.

$ ./emerge.pl merge world

$ . env.sh

$ monodevelop &

This is pretty simple stuff, though. Just run through all the steps, no logging. If it fails at some point during the process it stops so that you can see the error. Then if you hit Enter it continues.

#!/usr/bin/perl
# Copyright (c) 2009 Martin Matusiak <numerodix@gmail.com>
# Licensed under the GNU Public License, version 3.
#
# Build/update mono from svn
 
use warnings;
 
use Cwd;
use File::Path;
use File::Spec; # used below via File::Spec->catdir
use Term::ReadKey;
 
 
my $SRCDIR = "/ex/mono-sources";
my $DESTDIR = "/ex/mono";
 
 
sub term_title {
	my ($s) = @_;
	system("echo", "-en", "\\033]2;$s\\007");
}
 
sub invoke {
	my (@args) = @_;
 
	print "> "; foreach my $a (@args) { print "$a "; }; print "\\n";
 
	$exit = system(@args);
	return $exit;
}
 
sub dopause {
	ReadMode 'cbreak';
	ReadKey(0);
	ReadMode 'normal';
}
 
 
sub env_var {
	my ($var) = @_;
	my ($val) = $ENV{$var};
	return defined($val) ? $val : "";
}
 
sub env_get {
	my ($env) = {
		DYLD_LIBRARY_PATH => "$DESTDIR/lib:" . env_var("DYLD_LIBRARY_PATH"),
		LD_LIBRARY_PATH => "$DESTDIR/lib:" . env_var("LD_LIBRARY_PATH"),
		C_INCLUDE_PATH => "$DESTDIR/include:" . env_var("C_INCLUDE_PATH"),
		ACLOCAL_PATH => "$DESTDIR/share/aclocal",
		PKG_CONFIG_PATH => "$DESTDIR/lib/pkgconfig",
		XDG_DATA_HOME => "$DESTDIR/share:" . env_var("XDG_DATA_HOME"),
		XDG_DATA_DIRS => "$DESTDIR/share:" . env_var("XDG_DATA_DIRS"),
		PATH => "$DESTDIR/bin:$DESTDIR:" . env_var("PATH"),
		PS1 => "[mono] \\\\w \\\\\\$? @ ",
	};
	return $env;
}
 
sub env_set {
	my ($env) = env_get();
	foreach my $key (keys %$env) {
		if ((!exists($ENV{$key})) || ($ENV{$key} ne $env->{$key})) {
			$ENV{$key} = $env->{$key};
		}
	}
}
 
sub env_write {
	my ($env) = env_get();
	open (WRITE, ">", "env.sh");
	foreach my $key (keys %$env) {
		my ($line) = sprintf("export %s=\"%s\"\n", $key, $env->{$key});
		print(WRITE $line);
	}
	close(WRITE);
}
 
 
sub pkg_get {
	my ($name, $svnurl) = @_;
	my $pkg = {
		name => $name,
		dir => $name, # fetch to
		workdir => $name, # build from
		svnurl => $svnurl,
		configurer => "autogen.sh",
		maker => "make",
		installer => "make install",
	};
	return $pkg;
}
 
sub pkg_print {
	my ($pkg) = @_;
	foreach my $key (keys %$pkg) {
		printf("%14s : %s\\n", $key, $pkg->{$key});
	}
	print("\\n");
}
 
sub pkg_action {
	my ($action, $dir, $pkg, $code) = @_;
 
	# Report on action that is to commence
	term_title(sprintf("Running %s %s", $action, $pkg->{name}));
 
	# Create destination path if it does not exist
	my ($path) = File::Spec->catdir($SRCDIR, $dir);
	unless (-d $path) {
		mkpath($path);
	}
 
	# Chdir to source path
	my ($cwd) = getcwd();
	chdir($path);
 
	# Set environment
	env_set();
 
	# Perform action
	my ($exit) = &$code;
 
	# Chdir back to original path
	chdir($cwd);
 
	# Check exit code
	if ($exit == 0) {
		term_title(sprintf("Done %s %s", $action, $pkg->{name}));
	} else {
		term_title(sprintf("Failed %s %s", $action, $pkg->{name}));
		dopause();
	}
}
 
sub pkg_fetch {
	my ($pkg, $rev) = @_;

	if (exists($pkg->{svnurl})) {
		my $code = sub {
			return invoke("svn", "checkout", "-r", $rev, $pkg->{svnurl}, ".");
		};
		pkg_action("fetch", $pkg->{dir}, $pkg, $code);
	}
}
 
sub pkg_configure {
	my ($pkg) = @_;
 
	if (exists($pkg->{configurer})) {
		my $code = sub {
			my ($configurer) = $pkg->{configurer};
			if (!-e $configurer) {
				if (-e "configure") {
					$configurer = "configure";
				}
			}
			return invoke("./$configurer --prefix=$DESTDIR");
		};
		pkg_action("configure", $pkg->{workdir}, $pkg, $code);
	}
}
 
sub pkg_premake {
	my ($pkg) = @_;
 
	if (exists($pkg->{premaker})) {
		my $code = sub {
			return invoke($pkg->{premaker});
		};
		pkg_action("premake", $pkg->{workdir}, $pkg, $code);
	}
}
 
sub pkg_make {
	my ($pkg) = @_;
 
	if (exists($pkg->{maker})) {
		my $code = sub {
			return invoke($pkg->{maker});
		};
		pkg_action("make", $pkg->{workdir}, $pkg, $code);
	}
}
 
sub pkg_install {
	my ($pkg) = @_;
 
	if (exists($pkg->{installer})) {
		my $code = sub {
			return invoke($pkg->{installer});
		};
		pkg_action("install", $pkg->{workdir}, $pkg, $code);
	}
}
 
 
sub pkglist_get {
	my $mono_svn = "svn://anonsvn.mono-project.com/source/trunk";
	my (@pkglist) = (
		{"libgdiplus" => "$mono_svn/libgdiplus"},
		{"mcs" => "$mono_svn/mcs"},
		{"olive" => "$mono_svn/olive"},
		{"mono" => "$mono_svn/mono"},
		{"debugger" => "$mono_svn/debugger"},
		{"mono-addins" => "$mono_svn/mono-addins"},
		{"mono-tools" => "$mono_svn/mono-tools"},
		{"gtk-sharp" => "$mono_svn/gtk-sharp"},
		{"gnome-sharp" => "$mono_svn/gnome-sharp"},
		{"monodoc-widgets" => "$mono_svn/monodoc-widgets"},
		{"monodevelop" => "$mono_svn/monodevelop"},
		{"paint-mono" => "http://paint-mono.googlecode.com/svn/trunk"},
	);
 
	my (@pkgs);
	foreach my $pkgh (@pkglist) {
		# prep
		my @ks = keys(%$pkgh); my $key = $ks[0];
 
		# init pkg
		my $pkg = pkg_get($key, $pkgh->{$key});
 
		# override defaults
		if ($pkg->{name} eq "mcs") {
			delete($pkg->{configurer});
			delete($pkg->{maker});
			delete($pkg->{installer});
		}
		if ($pkg->{name} eq "olive") {
			delete($pkg->{configurer});
			delete($pkg->{maker});
			delete($pkg->{installer});
		}
		if ($pkg->{name} eq "mono") {
			$pkg->{premaker} = "make get-monolite-latest";
		}
		if ($pkg->{name} eq "gtk-sharp") {
			$pkg->{configurer} = "bootstrap-2.14";
		}
		if ($pkg->{name} eq "gnome-sharp") {
			$pkg->{configurer} = "bootstrap-2.24";
		}
		if ($pkg->{name} eq "paint-mono") {
			$pkg->{workdir} = File::Spec->catdir($pkg->{dir}, "src");
		}
 
		push(@pkgs, $pkg);
	}
	return @pkgs;
}
 
 
sub action_list {
	my (@pkgs) = pkglist_get();
	foreach my $pkg (@pkgs) {
		printf("%s\\n", $pkg->{name});
	}
}
 
my %actions = (
	list => -1,
	merge => 0,
	fetch => 1,
	configure => 2,
	make => 3,
	install => 4,
);
 
sub action_merge {
	my ($action, @worklist) = @_;
 
	# spit out env.sh to source when running
	env_write();
 
	# init source dir
	unless (-d $SRCDIR) {
		mkpath($SRCDIR);
	}
 
	my (@pkgs) = pkglist_get();
	foreach my $pkg (@pkgs) {
		# filter on membership in worklist
		if (grep {$_ eq $pkg->{name}} @worklist) {
			pkg_print($pkg);
 
			# fetch
			if (($action == $actions{merge}) || ($action == $actions{fetch})) {
				my $revision = "HEAD";
				pkg_fetch($pkg, $revision);
			}
 
			# configure
			if (($action == $actions{merge}) || ($action == $actions{configure})) {
				pkg_configure($pkg);
			}
 
			if (($action == $actions{merge}) || ($action == $actions{make})) {
				# premake, if any
				pkg_premake($pkg);
 
				# make
				pkg_make($pkg);
			}
 
			# install
			if (($action == $actions{merge}) || ($action == $actions{install})) {
				pkg_install($pkg);
			}
		}
	}
}
 
 
sub parse_args {
	if (scalar(@ARGV) == 0) {
		printf("Usage:  %s <action> [<pkg1> <pkg2> | world]\\n", $0);
		printf("Actions: %s\\n", join(" ", keys(%actions)));
		exit(2);
	}
 
	my ($action) = $ARGV[0];
	if (!grep {$_ eq $action} keys(%actions)) {
		printf("Invalid action: %s\\n", $action);
		exit(2);
	}
 
	my (@pkgnames) = splice(@ARGV, 1);
	if (grep {$_ eq "world"} @pkgnames) {
		@allpkgs = pkglist_get();
		@pkgnames = ();
		foreach my $pkg (@allpkgs) {
			push(@pkgnames, $pkg->{name});
		}
	}
 
	return (action => $action, pkgs => \@pkgnames);
}
 
sub main {
	my (%input) = parse_args();
 
	printf("Action selected: %s\\n", $input{action});
	if (scalar(@{ $input{pkgs} }) > 0) {
		printf("Packages selected:\\n");
		foreach my $pkgname (@{ $input{pkgs} }) {
			printf(" * %s\\n", $pkgname);
		}
		print("\\n");
	}
 
	if ($actions{$input{action}} == $actions{list}) {
		action_list();
		exit(2);
	}
 
	action_merge($actions{$input{action}}, @{ $input{pkgs} })
}
 
main();

Download this code: emerge_pl

March 31, 2009 :: Utrecht, Netherlands  

Brian Carper

Gentoo VMWare Fail

According to this bug, VMWare on Gentoo is in a sorry state, with one lone person trying to keep it going. I can't get vmware-modules to compile on my system no matter what I try, which is depressing. Kudos to all of our one-man army Gentoo devs who are keeping various parts of the distro going, but I wonder how many other areas of Gentoo are largely unmaintained nowadays.

KVM was braindead simple to get set up in comparison with VMWare, but I can't get networking to work. This is because I'm an idiot when it comes to TUN/TAP and iptables. I've read wiki articles that suggest setting up my system to NAT-forward traffic into the VM but I couldn't get that working and don't have a lot of time to screw with it.
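
For the record, the recipe those wiki articles describe boils down to something like this hedged sketch (interface names and addresses are made up; tunctl comes from usermode-utilities):

# tunctl -u youruser -t tap0
# ifconfig tap0 192.168.100.1 netmask 255.255.255.0 up
# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# kvm -hda disk.img -net nic -net tap,ifname=tap0,script=no

The guest then takes a static 192.168.100.x address with 192.168.100.1 as its gateway.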

On one of the Gentoo mailing lists I noticed that a dev has posted some KVM images of Gentoo suitable for testing. But I'm looking to start up an image from scratch and that doesn't help, and it's not going to help me get networking going any easier.

Why do I feel like this'd take 10 minutes to set up on Ubuntu? Look at this, or search for "ubuntu vmware" and see the hundreds of results. Given that it's a VM and it doesn't really matter what the host OS is anyways, I'll probably do that on my laptop, but it's still depressing.

March 31, 2009 :: Pennsylvania, USA  

March 30, 2009

N. Dan Smith

Gentoo on the iBook G4

While Debian may be suitable for my Apple Powermac G3 Blue and White, nothing can beat Gentoo on my iBook G4.  I have resolved that being a Gentoo developer is not part of my future.  But I cannot stay away from Gentoo as a user, especially when it comes to my iBook.  Pure computing joy.

It was not always so.  When I first started using Gentoo there were no drivers for the Broadcom wireless card it has.  Thankfully, free and open drivers have since been developed which work great for me.  Also, all of the Mac buttons and features (including sleep) work perfectly, so it makes a great notebook.  I plan on using it as my main workhorse for thesis research and writing.

March 30, 2009 :: Oregon, USA  

Gentoo on iBook G4: The Essentials

When it comes to running Linux on an Apple iBook G4 (or any iBook or PowerBook in general), there are a few essential resources.  Here they are:

  • Gentoo Linux PPC Handbook - The installation instructions for Gentoo are among the best documentation available for Linux.
  • Gentoo PPC FAQ - This document answers all your questions about the idiosyncrasies of running Linux on PowerPC hardware.  This includes information on how to enable your soundcard as well as recommendations for laptop-specific applications (which can be installed with portage).  First and foremost of these is pbbuttonsd ("PowerBook buttons daemon"), which makes the volume, brightness, and eject keys work, along with sleep and other power management features.  There is nothing like being able to close the lid and forget about it, just like in Mac OS X.
  • Airport Extreme Howto - This is a very clear and concise guide to getting your Airport Extreme wireless network card working.  Until these drivers came along, Linux on the iBook G4 was not very fun.  Now I can enjoy its full laptop potential.
  • Gentoo Hardware 3D Acceleration Guide - You have a Radeon Mobility video card in that iBook.  Use it!  Follow this guide to ensure that hardware rendering is enabled.  This will open the door to goodies like Compiz Fusion, which does work fairly well on the iBook G4.
  • Inputd - This program allows for right-click solutions (e.g. command + left-click = right click) and much more.  The cure to the one button mouse.  It requires some changes in the kernel and perhaps its config file, but it should not be too challenging for any user who has successfully completed the Gentoo install.

It is best to consult all of those resources during the initial installation.  That way you do not have to go back and rebuild your kernel when you add each feature.

March 30, 2009 :: Oregon, USA