Planet Larry

November 28, 2008

Dirk R. Gently

Ubuntu on an iBook


Notes:

  • I did this installation at the beginning of 2007. Ubuntu has since officially dropped support for PowerPCs. The Ubuntu community, however, is continuing PowerPC development (see the Ubuntu wiki for details).
  • As an alternative, I use Gentoo Linux and enjoy it a lot. Installing Gentoo is a more advanced install, but I've discovered that because it is extraordinarily customizable it can be more efficient than standard distros. I've detailed my install at the wiki.

Chasing the Rabbit

I have a clam-shell iBook (one of the originals, mind ya kindly, a rev. B 300 MHz, bondi blue!). I love this computer, so don't ask to buy it ;). It'll forever be a classic in my view. I have to admit, though, it was beginning to feel a little dated, especially when I used it to surf the web. Webpages would load slowly and some pages would just not load at all! And MacOS 9 apps kept becoming more deprecated, and I wasn't able to find new ones to do the things I wanted or needed to do. So, I admit, I became curious when a group of my friends talked Linux one day. I can tell you that Linux isn't as intimidating nowadays as it was when I first tried it 10 or so years ago. But some of you may be asking: why change when Mac OS 9 is a pretty reliable OS?

Looking Down the Rabbit-Hole

I confess, though, I got the bug. I had to try it. What I had just wasn't exciting enough. If Ubuntu didn't work, I justified to myself, I would just go back and restore my OS 9. I can tell you now that it did work, better than I dreamed it could. And the installation is nothing like the one I did ten years ago. It did have one concerning "kinda"; never mind that though, I'll get to it later.
So, I read a lot about what I wanted, and what I needed, to do to get Linux installed on my iBook. I mean a lot. I've messed things up before on computers, like the time I deleted an entire Windows 95 install from a simple command line. Shhhh! So this time, I am happy to say, I was prepared.

After studying which Linux distro might be best, I chose Ubuntu. Ubuntu is the most widely used Linux and hence has the most support. Literally hundreds of thousands of people use it, and many of them belong to the Ubuntu Forums. The forums are a great place to ask questions, have them answered, and learn what I could do with my new OS.

The Dark Plunge - Leaving OS 9

I started by backing up my information. Everything, everything, including the System Folder. I used Disk Copy to image everything in roughly 600 MB chunks so that I could burn them to CD. It isn't necessary to image the files, but for me it gives nice insulation to protect them. However, because my iBook had 'Disk-Burner' in the Apple Menu, I assumed it had a CD burner; to my surprise, it doesn't. Fortunately, I know a friend whose computer does. So I connected to the Internet and used the Web Sharing control panel to transfer my files. Web Sharing completely rescued me. Since I know little to nothing about networking, it let me transfer the files relatively easily. All I had to do was tell Web Sharing which folder I wished to transfer, and it gave the folder with all my images a web address. I just booted my friend's computer, opened the web browser, and downloaded the files.

Because Macintosh files are generally corrupted on other types of computers, Web Sharing encodes them before they're transferred. After a couple of hours I gave up transferring the files this way; Web Sharing appears to have issues encoding large files. Instead, I found ZipIt to be a good solution. I made sure MacBinary was checked in its options and zipped the images. Alternately, some people use the DropStuff program for this.
With all my MacOS 9 system and data on the other computer, I burned it to CDs.
The Ubuntu Installation Guide is a good guide to what needs to be done; though it is a little sparse on MacOS 9 support, it's not too bad. I read through most of it in a breeze, then kept it nearby while I installed.

The Ubuntu install ISO for the CD needs to be downloaded; I made sure I chose the PowerPC version. I chose the download route since I was using a friend's computer :) It's 600 friggin' megs. The Installation Guide recommends running an md5 checksum on the downloaded ISO to verify it, but there isn't any md5 checksum utility for MacOS 9. I burned the CD without any problems. I found a great program on Windows that burned the ISO called ImgBurn; it did a good job, and it was free.
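Since MacOS 9 lacks an md5 tool, the check has to happen wherever the ISO ends up. On a Linux or similar system it is one command; here is a sketch using a small stand-in file (the real ISO filename and the published hash come from the mirror's MD5SUMS file, so substitute those):

```shell
# Demonstrate the verification on a stand-in file. For the real ISO,
# run md5sum on the downloaded image and compare the printed hash
# against the one published in the mirror's MD5SUMS file.
printf 'hello' > demo.iso
md5sum demo.iso    # prints the hash followed by the filename
```

If the printed hash matches the published one, the download and the burn source are good.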

Stop Smilin’ Cheshire

I planned to have both Mac OS 9 and Ubuntu on my computer, so the first job was to divide the hard drive. It would have been nice to have another hard drive lying about but… I separated the drive into four pieces: one for Ubuntu Linux, another for Mac OS 9, another for Linux swap, and a bootstrap partition. Linux swap is used for memory, while the bootstrap is needed to dual-boot. It's not for the faint of heart; trust me, I'm one of them, but I've done this before so I knew what I was doing. Partitioning will un-index all the information on the hard drive, effectively erasing it, which is why everything was backed up.

I started my computer holding C with my MacOS 9 install CD in the drive, and launched Drive Setup from the Utilities Folder. I noticed that my original MacOS 9 install had already partitioned the drive a bit: the MacOS creates 7 or so mini-partitions used for the hard disk driver, directory info, and some other things I don't know about. They have to stay there if I want to use MacOS 9 on this computer, so I didn't touch them.

From the pull-down menu I selected four partitions (Drive Setup doesn't show the 7 mini ones). The first partition had to be the bootstrap partition so that I can dual-boot. This partition only has to be 832 KB, but Drive Setup only allows partitions down to 32 MB, so that's what I used. I left its type as unallocated so that the Ubuntu installer could later create it correctly. The next HAS to be the MacOS partition. I've heard of others putting MacOS on a later partition, but from all the documentation I saw, that is not a good idea. This partition I set at 2.6 GB, pretty small but enough for what I needed, with the type HFS+ Extended. Next came swap, then the Ubuntu partition (it doesn't matter if swap and Ubuntu are interchanged). Swap I set at 191 MB, the same as the installed memory, and it works plenty well. The Installation Guide recommends the Ubuntu partition be at least 2 GB. Both of these (the swap and Ubuntu partitions) I set as unallocated, letting the Ubuntu install CD's partitioning tool finish the job correctly. Drive Setup partitioning worked best when I sized these working from the bottom.

At this point, everything is new. The Ubuntu Installation Guide says to install MacOS now. I'm not sure if this is necessary, but I did it anyway. Since I was booted from the MacOS install CD there was no way I could use my burned CDs to restore my saved MacOS 9, so I just installed the one from the CD temporarily.

A Little Lighter - Oh Pooo!

I have an internet connection, so I chose the LiveCD (the default one). I believe there is a minimal install CD as well that doesn't require an internet connection. The LiveCD (6.10 Edgy Eft) has a lot of Ubuntu Linux on it but still needs to download a fair deal to have a well-rounded Linux setup. So after I finished Drive Setup I put the CD in and restarted. Holding down C, the Live CD booted to the Ubuntu desktop. I got several warnings at the very start that the PCI something-or-other had an error, but these were just errors applying to the startup screen. I clicked on the Install icon. Basic information will be asked: name, name of this computer, time and date. Do you like Tom Jones? I entered my name and named the computer lastname-iBook, which made it pretty easy to spot on the network.

When the installer got to the part about partitioning, I chose not to let Ubuntu do the recommended partitioning scheme and instead edited the partition table manually. Here it is pretty easy to select my already-sized partitions and make them the types I need them to be. The hard drive in Linux is called hda (it says so in the upper right); it's called sda on some computers, depending on the type of hard drive the computer has. Here I needed to format (set the right type for) the Ubuntu partition, the swap partition, and the bootstrap. Now I could see the 7 mini-partitions (this may vary too), hda1 - hda7, the ones needed to boot MacOS; I didn't touch them. I selected the first partition after the mini-partitions (hda8 on this computer) and changed it to bootstrap from the format menu. hda9 (the MacOS disk) I left alone. For hda10 I chose swap, and for hda11 I chose ext3 (the Linux filesystem). I hit Continue. On the next screen, I needed to tell the installer where to install what. On the left, I selected bootstrap from the first pull-down menu, and on the right I chose hda8 (my first partition). Next, I picked swap from the next pull-down on the left and chose hda10 on the right. On the third pull-down on the left I chose "/". This "/" is known in the Linux world as 'root'; this is where Ubuntu will be put. After selecting / on the left, I chose hda11 on the right. Now Ubuntu was ready to install.
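For reference, the mappings chosen above end up expressed in the installed system's /etc/fstab along these lines (an illustrative sketch only; the installer writes the real file, and device names vary by machine):

```
/dev/hda11   /      ext3   defaults   0 1
/dev/hda10   none   swap   sw         0 0
```

The bootstrap and MacOS partitions are not normally mounted, so they do not need entries.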

The Red Queen

That's about it; I sat down and grabbed myself something to drink. It does take a little while, depending on connection speed and so on. My installation didn't hang at all, though I have read of others that did. Generally these aren't hangs and just take a little while; I was advised to give a potential hang at least 30 or 40 minutes.

Finally, Ubuntu installs yaboot on the bootstrap partition. This is the program that allows the iBook to dual-boot. Well, that's it! I restarted to see how it did.
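The yaboot configuration that ends up on disk looks roughly like this (an assumed sketch for the partition layout above; the installer and ybin generate the real /etc/yaboot.conf, so device names and paths may differ):

```
boot=/dev/hda8        # the bootstrap partition
partition=11
root=/dev/hda11       # the Ubuntu "/" partition
macos=/dev/hda9       # gives yaboot its "m for MacOS" choice
timeout=40
image=/boot/vmlinux
        label=Linux
```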

The Keyhole in the Tunnel

yaboot starts up and asks me to press m for MacOS or l for Linux. I press l. Another screen appears after that asking which version of Linux I want to start; I press return for the default. Everything loads and runs quickly. I type my password and I see my desktop. I then spent an hour getting things just how I like them. And I really do mean as I like: in Linux you can change just about everything! I tell ya, I liked Ubuntu from the start. Clean and neat. Runs great! More responsive than MacOS, and it doesn't hang. It's beautiful; I thought I was gonna cry. I've used the desktop for over a month now and I'm not turning back. Everything I wanted is here, and almost all the programs for Linux are open source. Firefox is a great web browser; it's much quicker at loading and has been able to load everything I've thrown at it. The apps I need are all available to me. But it isn't all feathers and roses.

Note: Ubuntu 6.10 Edgy Eft used an outdated version of yaboot, which is why I wasn't able to boot into MacOS, as I detail below.

It was a while before I even tried to boot into MacOS again, but since others occasionally use this computer, I wanted to get it going. So at yaboot I pressed m to load MacOS. The screen turned the dithered grey we're used to, and then I saw a flashing disk. Everything I tried to get MacOS running did not work. I'm not gonna get into all the details, but it was frustrating and aggravating; I got to the point where I removed the memory as a desperate measure.
There is a little light at the end of this tunnel, though. A great program called MacOnLinux runs MacOS through Linux just as well, and it's very simple to set up. All I had to do was open the Synaptic Package Manager and I found it in there. The instructions are pretty easy:

MacOnLinuxHowto

I had a lot of fun doing this, and I love my clamshell even more now. If you have an aging iBook, iMac, etc., Linux is definitely worth a look. It can even become a great hobby!

Notes:

- Ubuntu has an alternate install CD for those who have issues with the original one.
- There is no accelerated video, so don't expect to be playing the newest games.
- Check for firmware updates before installing.
- It's been suggested to me to use Yellow Dog Linux, and I had thought of it since Yellow Dog Linux specializes in PowerPC. Yet I've made my Ubuntu very, very comfortable and customized. It is also true that by 2008 Ubuntu will probably drop PowerPC support.
- I had read that I could partition without deleting MacOS 9 and its data: by setting up MacOS 9 at the beginning of the disk, partitioning, and rebuilding the disk directory, it is possible. It is highly unrecommended, however. Linux on a Mac needs that first partition to run well.
- Lastly, there's Xubuntu for a lighter system.

Enjoy All!

      

November 28, 2008 :: WI, USA  

November 27, 2008

Clete Blackwell

Windows XP and Internet Connection Sharing

Recently, I have had many people inquire about how to properly set up internet connection sharing (ICS) in Windows so that the connection can be shared wirelessly and so that it is usable with mobile devices, such as the iPhone. I have done a lot of research on this topic recently and I was not able to dig up much. Most of the guides out there were outdated and didn’t describe how to properly set it up so that it would work with mobile devices. So, I have decided to compile my own guide. I hope that you will find this useful.

Let me start by saying that I do not have a Windows Vista machine on hand at the moment (I downgraded my laptop to XP Pro), so this guide is for Windows XP. However, Vista’s setup is similar to this and it shouldn’t be too hard to figure out. This guide will be aimed towards the end-user. I will describe every step as vividly as I am able to (such as how to find which DNS servers you are using).

Let’s begin.

Our first step is to set up Internet Connection Sharing. Let us start by going to the Start menu. Follow these menu choices (Also, if you can’t find these options in the Control Panel, just click on “Network Connections”):

Start -> Control Panel -> Network and Internet Connections -> Network Connections -> (on the left panel) “Set up a home or small office network”

This wizard will assist in setting most of the settings that are necessary for Internet Connection Sharing.

Go through the wizard until you get to this screen:

Note: In this wizard, if you get any messages saying that there is disconnected hardware, just check the box at the bottom to ignore it.

Now, tick the box at the top, as shown in the picture above. Click Next to continue. You will need to choose the network connection that connects you to the other computers on your network; in my scenario, this is the wireless one, as I need to connect my iPhone. You will also be asked to choose the network connection that connects you to the internet; in my scenario, it is an ethernet connection. Choose these options appropriately and complete the wizard.

Now that Internet Connection Sharing is set up properly, it’s time to gather some information about your internet connection.

Go to Start -> Run (or in Vista, Start -> type directly in the search box) and type in the box “cmd” and press enter. Now, in the black window, type in “ipconfig /all” and press enter. You should be looking at your connection to the internet that you chose in the previous wizard. Write down the two (or first two if you have more) DNS servers.

Now, we need to set up a wireless network. Go back to the window where you previously found the wizard. You should right-click on your wireless adapter and click properties. Here I will be giving a bunch of small directions. Here is a picture to aid in setting these settings up:

In the new window that appears, go into the Wireless Networks tab. Click “Add…” Now in the window that popped up, choose these things:

Name your network (I named mine “adhocsharing”). Data encryption: None (you can optionally set this up if you wish, but it is often easier to get it working first and then set it up later). Check both the “Connect even if this network is not broadcasting” and the ad-hoc network box. Click on the Connection tab and check the box “Connect when this network is in range.” Whew. Now click ok to exit this window, but stay in the underlying window.

Now, go back to the General tab of the window and scroll down to “Internet Protocol (TCP/IP)” (do not bother with IPv6). Click both of the bottom radio buttons of the two sets as shown below.

Now, enter this information:

IP address: 192.168.0.1
Subnet mask: 255.255.255.0
Default gateway: 192.168.0.1

Also, enter the two DNS servers that you wrote down earlier. Click ok on both menus and you should be good to go! Try to connect your iPhone or a computer wirelessly.
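For what it's worth, the same static settings can also be applied from the command line with netsh, which is handy if you switch configurations often. The adapter name below is an assumption; substitute whatever your wireless adapter is called in Network Connections, and use the first DNS server you wrote down in place of the placeholder:

```
rem Replace 1.2.3.4 with the first DNS server you wrote down earlier.
netsh interface ip set address name="Wireless Network Connection" static 192.168.0.1 255.255.255.0 192.168.0.1 1
netsh interface ip set dns name="Wireless Network Connection" static 1.2.3.4
```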

Now, I will introduce a caveat to this wonderful method of sharing your internet connection. If you ever want to connect to a wireless internet connection, you will have to go back into the menu pictured above and check the top two boxes and press ok. This can become a bit of a nuisance. I don’t currently know a way around this, so if any readers do, please comment.

If I have left anything unexplained, feel free to leave a comment. I hope that this tutorial works for you!

November 27, 2008

Daniel Robbins

What I’ve Been Up To – New site, etc.

Here’s what I’ve been up to:

http://www.funtoo.org has been redesigned. It now has more of a portal design to get you to the latest Funtoo, Funtoo+OpenVZ and Gentoo builds. I hope you like it :)

In the Funtoo Portage tree, I’ve added a new build of OpenRC, a new udev ebuild (133 with some OpenRC-compatibility and other fixes), and I’m keeping Portage 2.2_rc* unmasked. The Gentoo Portage tree has masked 2.2_rc* to try to get more testing of the upcoming 2.1* release, which is fine, but we’re going to stick with the development branch.

Work is ongoing with Metro. The git version (pre-1.2) now has a few new targets: stage3-quick and stage3-freshen. In the past, Metro would build a new stage3 by going through these steps:

seed stage3->stage1->stage2->stage3

“stage3-quick” builds a new stage3 as follows:

seed stage3->stage3

It uses ROOT=/tmp/stage3root and emerge system to make this happen.
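In other words, stage3-quick roughly amounts to something like the following (a simplified sketch of the idea, not Metro's actual code):

```
# Install the system set from the seed stage3 into an empty ROOT,
# then the contents of /tmp/stage3root become the new stage3.
mkdir /tmp/stage3root
ROOT=/tmp/stage3root emerge system
```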

“stage3-freshen” is even faster, and builds a new stage3 as follows:

seed stage3->stage3

The difference here is that the “stage3-freshen” target uses the seed stage3 in-place and runs an emerge -u --deep system and emerge -u --deep <extrapackages>. This is a nice way to freshen slightly old stage3’s without doing a full rebuild.

So there you have it. I hope you enjoy the new site. I sure do, since it updates itself automatically, and it’s fun for me to look at every morning :)

November 27, 2008

Dan Ballard

Disappointed in the extreme

I am shocked, annoyed, and disappointed with Ubuntu right now. I upgraded my parents' box to Intrepid, and somehow the new grub menu.lst told grub to find the kernel on the wrong hard drive, passed the wrong partition to the kernel for the root filesystem, and dropped the Windows boot option. The computer went from dual-boot to 100% unbootable with a standard Intrepid upgrade. I am so disappointed. If it had been my parents doing this, they would have essentially bricked their computer. What the hell.

Of course I just pulled out a Gentoo livecd (those things are eternally useful) and went in and fixed the menu.lst file. It's a good thing I made a copy of the old menu.lst so I could get the proper partition's UUID. I mean seriously. What the hell. Get your act together. This upgrade was a clusterfuck.
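For anyone hitting the same breakage: the stanza in /boot/grub/menu.lst is where the wrong drive and root ended up. An illustrative stanza (kernel version and UUID are placeholders, not my actual values) looks like:

```
title   Ubuntu 8.10, kernel 2.6.27-7-generic
uuid    xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
kernel  /boot/vmlinuz-2.6.27-7-generic root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ro quiet splash
initrd  /boot/initrd.img-2.6.27-7-generic
```

Both the uuid line and the root= argument have to point at the actual root partition; running blkid from the livecd prints the real UUIDs.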

How can I tell my parents to use a system that might accidentally brick the whole computer (their other OS too)?

November 27, 2008 :: British Columbia, Canada  

November 26, 2008

Roderick B. Greening

My recent development work... aka training exercise

I recently began to dabble with Python, and I am really starting to like the language. It is quite powerful and intuitive, and allows for some really rapid development, especially when coupled with PyKDE4 (the KDE Python bindings).

I was originally trained as a programmer, primarily using C/C++. However, in more recent years, I have been mostly programming in bash and perl, all for work-related coding.

With my recent involvement with Kubuntu, I have felt it necessary to learn Python, as many of its support applications are written in that language. So, what better way to learn than to jump right into a new project? That's where today's post comes from...

I have recently submitted two new projects to Launchpad, with the following two goals:

1. teach myself python and pyKDE
2. hopefully fill a gap within Kubuntu in time for Jaunty

At this point in time, the projects are mere shells/placeholders, and really do not do anything useful besides provide a tray icon and a main window and help dialog. However, this is all a part of the learning process, and more code will appear as I test out various things.

So, what are these two projects? Well, the first is ufw-kde, a graphical interface to the Uncomplicated Firewall. The second is clamav-kde, a replacement for Klamav, which has yet to be ported to KDE4.

Anyway, my primary goal is learning python and kde programming via the python bindings. If these do indeed turn out to be useful in their own right, then that's awesome too.

If anyone is interested in helping out on either project, feel free to contact me.

November 26, 2008 :: NL, Canada  

Martin Matusiak

hierarchical temporal memory

As is often said, we humans (if you are not one of us you can join on the website, membership fees are high but not impossible) are pattern seeking animals. This implies that it is difficult for us to understand a completely “new kind of thing”, we tend to seek something else that we can compare it to. Psychology got a minor win when computers emerged, because it finally had a model for the brain. Psychology professors could point to the computer and say “the brain, it’s somewhat like that”. It behooves psychology that the computer we know has a distinct memory, a processing/reasoning unit, and input channels that receive transmissions of “sensory perception”.

The computer as we know it is the so-called Von Neumann architecture; every computer we've ever had has been designed around those basic components. This design is simple enough (and, in fact, dumb enough) to handle just about anything at all; it is the general-purpose computer (a way of saying that it doesn't have any particular purpose).

Now a bunch of neuroscientists have figured out that the memory in our computers is too dumb to do certain things well. Our linear memory, where a memory cell has no relationship to the neighboring cells, is abstract and general enough for anyone's pleasure, but it's not the way human memory works. Our memory is hierarchical; that is to say, it's made up of levels where the bottom levels remember very simple things, like shapes and sounds in time. As you ascend the hierarchy, the levels above do not remember "discrete" things; they remember unifications over the simple things. That is the way in which you understand that a leg is both a discrete thing and a part of a human body, one part in something larger.
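The idea of lower levels naming simple patterns while higher levels unify those names can be sketched in a few lines of toy code (purely illustrative; real hierarchical temporal memory implementations are nothing this simple):

```python
# Level 1 names sequences of raw primitives; level 2 unifies level-1
# names into larger wholes, the way a "leg" can be part of a "body".
level1 = {
    ("line", "line", "line", "line"): "square",
    ("curve", "curve", "curve", "curve"): "circle",
}
level2 = {
    ("circle", "square"): "snowman-like figure",
}

def recognize(primitives):
    """Level 1: name a sequence of raw primitives."""
    return level1.get(tuple(primitives), "unknown")

def unify(shapes):
    """Level 2: combine level-1 names into a higher-level concept."""
    return level2.get(tuple(shapes), "unknown")

print(recognize(["line"] * 4))          # square
print(unify(["circle", "square"]))      # snowman-like figure
```

The point of the toy: the top level never sees lines or curves at all, only the names the level below produced.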

Now, if you think about it, this is a crude first model for learning, you are being fed a lot of facts in the hope that you will be able to unify them and see “meaning” to them as a whole. This, unfortunately, is necessary, because we don’t know how to transmit the meaning itself, we think the only way is to send the facts and then the mind will infer the meaning by itself. (It’s quite an optimistic strategy, isn’t it?) Interestingly, there is a trade off at this point. Apparently, you cannot both remember all the discrete facts *and* be able to unify them. So that could explain how some people have a propensity for lots of facts without seeing the bigger picture, while others can’t hold on to all the little pieces. In a way it makes sense, doesn’t it? Like doing research. Once you’ve stated your thesis, you don’t need all those little notes anymore, they are subsumed in the larger unifying rationale.

But now back to technology. A bunch of people have built this model of memory in software, calling it a “hierarchical temporal memory”. It’s an absolutely fascinating premise.

November 26, 2008 :: Utrecht, Netherlands  

Michael Klier

Are You Ready For My Linux Desktop?

I thought I could join all of this year's ”Is Linux Ready For The Desktop” buzz, but in a slightly different way, because Linux is certainly ready for my desktop 8-).

My Linux Desktop

From left to right:

  • my Nintendo DS running DSLinux
  • my NSLU2 running Debian
  • my media box (center) running Arch Linux and XFCE (no TFT yet)
  • a Linksys WRT54G router running OpenWRT
  • another Linksys WRT54G router running FreeWRT
  • my Dell workstation running Arch Linux and Awesome
  • my Asus 1000h running Arch Linux and LXDE

Is Linux ready for your desktop as well? If yes, pictures! Or it didn't happen :-P!

PS: I obviously have Internet at home, yay!


November 26, 2008 :: Germany  

November 25, 2008

Dirk R. Gently

Better LCD Font Rendering


For the past year or two, several patches have been made that help improve LCD font rendering in Linux. This can be done fairly easily and can make a tremendous difference for some Linux users. Before doing this, though, one first needs to make sure the DPI (dots per inch) is calculated correctly by the X server; read Howto Fonts and DPI to learn more about that. Once DPI is set, we can work on font rendering.

There are unofficial patches available for a few common libraries that can give improved subpixel rendering on LCDs. Like the font rendering in OS X? These patches can give an OS X-like quality to font rendering. Note that a couple of these patches are made by the folks at Ubuntu, so Ubuntu folk should already have them; most others probably don't. This process uses subpixel rendering to improve how fonts are drawn. One may have heard about it here.

Let’s get going. I’d like to thank bi3l who has been building the ebuilds, and also to whomever runs the devnull overlay.

Get the Updated Packages

These packages are in the devnull overlay - a mercurial SCM. If mercurial isn’t on your system:

emerge mercurial

Use layman to handle the overlay; if you've never used layman, read this.

Add devnull:

layman -a devnull

Then emerge these four programs:

emerge -1 freetype fontconfig libXft cairo

Next, it's a good idea to set up system settings for font rendering. For Gentoo this is really easy thanks to Doug Goldstein, who added a fontconfig module to the eselect package. Other distros will need to link these preferences to “/etc/fonts???” I can't remember exactly how to do it at the moment. For Gentoo users, just typing “eselect fontconfig list” will show the options. Enabling/disabling is as simple as:

eselect fontconfig enable 1

There are many different settings to try, but here are a couple of tips:

    1. Hopefully obviously, “antialias” needs to be enabled.
    2. Pick a specific hint level and don’t combine it with “autohint”.
    3. Choose the subpixeling appropriate for your monitor. Find out the monitor type here.
    4. Disable bitmap font rendering (un-antialiased fonts don’t look so good).
    5. For more advanced configuring than this, I’d recommend creating a ~/.fonts.conf where fonts can be substituted, hinting size parameters can be defined…

That’s it. Hopefully those Linux users out there who have been disappointed with font rendering now have a fix.

Update: Because of a bug in one of these programs, creating a “~/.fonts.conf” with the basic settings is a good idea:

fonts.conf.tgz
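A minimal ~/.fonts.conf along these lines covers the basics (a sketch; the hintstyle and rgba values shown are examples to adjust for your own monitor):

```
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
    <edit name="hinting"   mode="assign"><bool>true</bool></edit>
    <edit name="hintstyle" mode="assign"><const>hintslight</const></edit>
    <edit name="rgba"      mode="assign"><const>rgb</const></edit>
  </match>
</fontconfig>
```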

          

November 25, 2008 :: WI, USA

Jürgen Geuter

Casing of tags

The thing you see to the right of this text is a tag cloud. As you all probably know, it lists all tags from a given context alphabetically and modifies the output according to how often a given tag is used: a tag that's used a lot is written bigger than one that is used only rarely. It's a very simple concept but one that visualizes things quite well.

"Tags" are weird beasts: they allow us to slice huge amounts of data into handier portions while carrying hardly any meaning themselves, as they are just strings. Those strings might mean something to us, but since they offer no context at all, the same tag might mean something completely different to the next person. Today, while working on a tagging system, I stumbled on a rather simple question: take the three strings "Tag", "tag" and "TAG". Do they all mean the same thing? Should they be mashed together?

I asked around on identi.ca and got a reply from Evan:
"Obviously, I think that case shouldn't matter."

Ignoring case seems to be the standard practice many people follow: Flickr ignores case, as does Delicious. But is that the right way of doing things?

As already said, tags don't really have an immanent meaning: whether I tag some object with "important" or "asdfgh" makes no difference except to me personally. Of course my brain fills in blanks, and just because I use words for tags, all kinds of connotations are added to the tag itself that are not in fact contained in it: if I read "important" as a tag I might think of the tagged object as being important, even though that is not contained in the tag itself; it's my interpretation.

Let's look at the tag "python" for a second. We might get images of snakes, some images showing source code in a certain programming language, and pictures of a bunch of great comedians, all mashed together. That is one of the strengths of tags: showing connections that otherwise were not visible, that categorisations might not be able to show. On the other hand, it might also imply connections that are not directly there: the language Python was named after the comedians, not the snake (even though the logo shows snaky things these days). So would it have made more sense to keep the tags "Python" (as in the language and the comedians) and "python" (as in the snake) separate? Would we benefit from this way of further distinguishing tags from each other?

I've been thinking about this for a while now and I'm still not completely sure which way is right: ignoring case works, but I can still see cases where case-sensitive tags would make sense. Tags are a very simple data structure, so limiting them even more for convenience makes sense because it makes things easy: no need to worry about which case the user meant; it's lowercase or nothing.
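The lowercase-or-nothing approach is also trivially small in code, which is part of its appeal (a sketch of the obvious normalization, not any particular site's actual implementation):

```python
from collections import Counter

def normalize(tag):
    # Case-fold so "Tag", "tag" and "TAG" collapse into one key,
    # the behaviour Flickr and Delicious reportedly follow.
    return tag.lower()

tags = ["Python", "python", "PYTHON", "Tag", "tag", "TAG"]
counts = Counter(normalize(t) for t in tags)
print(counts["python"])  # 3
print(counts["tag"])     # 3
```

A case-sensitive scheme would just drop the lower() call, at the cost of splitting "Python" and "python" into separate buckets.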

I guess for those reasons it makes sense to lowercase tags, but it still somehow rubs me the wrong way for reasons I don't know. Like a tingling in my ear telling me that I haven't thought it through completely. Irritates the hell out of me ;-)

November 25, 2008 :: Germany

Matija Šuklje

Less is more in modern X


Software needed

These are the versions I used, with short explanations of why they are the minimum versions required for this magic to work:

• xorg-server 1.5.3 (with the AllowEmptyInput by default patch) — needed to solve a problem with the keyboard (included in the X11 Gentoo overlay)
• xf86-input-evdev 2.1 — needed for some advanced settings like ButtonMapping (I had to manually bump this ebuild)
• libXrandr 1.2, randrproto 1.2 and xrandr 1.2 — needed for monitor hotplugging (in the official Gentoo tree)

Before the big change

For those faint of heart, without interest, or who just plainly do not want to bother: just skip this part.

I used to have a big (yes, handwritten!) xorg.conf settings file that was mostly written using arcane man-page and HOWTO knowledge, all neatly commented so I would not get (too) lost in the whole mess:

    # **********************************************************************
    # DRI Section
    # **********************************************************************
     
    Section "dri"
    # Access to OpenGL ICD is allowed for all users:
    	Mode	0666
    EndSection
     
    # **********************************************************************
    # Module section -- this  section  is used to specify
    # which dynamically loadable modules to load.
    # **********************************************************************
     
    Section "Module"
     
    	Load	"dbe"				# Double buffer extension
    	Load	"vbe"				# for VESA, just in case
    	Load	"extmod"
    	Load	"bitmap"
    	Load	"ddc"				# so the monitor itself reports which resolution it wants
     
    	Load	"type1"
    	Load	"freetype"
     
    	Load	"glx"				# libglx.a
    	Load	"dri"				# libdri.a
    	Load	"drm"
     
    	### Added because the Acer Aspire 5024 HOWTO calls for it
    	Load	"xtrap"
    	Load	"record"
     
    EndSection
     
    # **********************************************************************
    # Files section.  This allows default font and rgb paths to be set
    # **********************************************************************
     
    Section "Files"
     
    	# The module search path.  The default path is shown here.
     
    	FontPath	"/usr/share/fonts/misc:unscaled"
    	FontPath	"/usr/share/fonts/Type1"
    	FontPath	"/usr/share/fonts/TTF"
    	FontPath	"/usr/share/fonts/corefonts"
    	FontPath	"/usr/share/fonts/freefonts"
    	FontPath	"/usr/share/fonts/terminus"
    	FontPath	"/usr/share/fonts/ttf-bitstream-vera"
    	FontPath	"/usr/share/fonts/unifont"
    	FontPath	"/usr/share/fonts/75dpi:unscaled"
    	FontPath	"/usr/share/fonts/100dpi:unscaled"
    	FontPath	"/usr/share/fonts/artwiz-aleczapka-en"
    	FontPath	"/usr/local/share/fonts"
     
    EndSection
     
     
    # **********************************************************************
    # Server flags section.
    # **********************************************************************
     
    Section "ServerFlags"
     
     
    EndSection
     
    # **********************************************************************
    # Input devices
    # **********************************************************************
     
    # **********************************************************************
    # Core keyboard's InputDevice section
    # **********************************************************************
     
    Section "InputDevice"
     
    	Identifier	"Keyboard1"
    	Driver		"kbd"
     
    	# For most OSs the protocol can be omitted (it defaults to "Standard").
    	# When using XQUEUE (only for SVR3 and SVR4, but not Solaris),
    	# uncomment the following line.
     
    	#	Option	"Protocol"	"Xqueue"
     
    	Option	"AutoRepeat"	"500 30"
     
    	# Specify which keyboard LEDs can be user-controlled (eg, with xset(1))
    	#	Option	"Xleds"	"1 2 3"
     
    	#	Option	"LeftAlt"	"Meta"
    	#	Option	"RightAlt"	"ModeShift"
     
    	#	Option	"XkbDisable"
     
    	Option	"XkbRules"	"xorg"
    	Option	"XkbModel"	"pc105"
    	Option	"XkbLayout"	"si"
     
    EndSection
     
     
    # **********************************************************************
    # Core Pointer's InputDevice section
    # **********************************************************************
     
    Section "InputDevice"
     
    # Identifier and driver
     
    ### Logitech USB
    	Identifier	"Mouse"
    	Driver	"evdev"
    	Option	"Name"			"Logitech Optical USB Mouse"
    	Option	"Emulate3Buttons"
    	Option	"Resolution"		"800"
    EndSection
     
    Section "InputDevice"
    	Identifier	"Synaptics"
    	Driver		"synaptics"
    	Option	"Device"		"/dev/input/mouse0"
    	Option	"Protocol"		"auto-dev"
    	Option	"LeftEdge"		"1700"
    	Option	"RightEdge"		"5300"
    	Option	"TopEdge"		"1700"
    	Option	"BottomEdge"		"4200"
    	Option	"FingerLow"		"25"
    	Option	"FingerHigh"		"30"
    	Option	"MaxTapTime"		"180"
    	Option	"MaxTapMove"		"220"
    	Option	"VertScrollDelta"	"100"
    	Option	"MinSpeed"		"0.09"
    	Option	"MaxSpeed"		"0.18"
    	Option	"AccelFactor"		"0.0015"
    	Option	"SHMConfig"		"true"
    	Option	"Repeater"		"/dev/ps2mouse"
    	### KSynaptics reportedly needs UseShm
    	Option	"UseShm"		"true"
    EndSection
     
    # **********************************************************************
    # Monitor section
    # **********************************************************************
     
    # Any number of monitor sections may be present
     
    Section "Monitor"
    	Identifier	"InternalLCD"
    	Option		"DPMS"	"true"
    	DisplaySize	331 207
     
    EndSection
     
     
    # **********************************************************************
    # Graphics device section
    # **********************************************************************
     
     
    Section "Device"
    	Identifier	"ATI Radeon X600"
    	VendorName	"ATI Technologies Inc"
    	BoardName	"unknown"
     
    	Driver		"radeon"
     
    	Option		"MonitorLayout"		"LVDS,CRT"
    	Option		"MergedFB"		"true"
    	Option		"CRT2HSync"		"30-86"
    	Option		"CRT2VRefresh"		"50-120"
    	Option		"MetaModes"		"1280x800-1280x1024 1280x800-1024x768 1280x800-800x600"
    	Option		"MergedNonRectangular"	"true"
     
    	Option		"FBTexPercent"		"50" 		# supposedly tunes GART so it works as it should
    	Option		"AGPMode"		"4"
    	Option		"AGPFastWrite"		"true"
    	Option		"ColorTiling"		"true"
    	Option		"EnablePageFlip"	"false"		### maybe disabling it helps stability; "man radeon" says it misbehaves in rare cases
    	Option		"RenderAccel"		"false"		### "man radeon" says it is not yet supported on chips newer than the 9200 (mine is newer)
    	Option		"AccelMethod"		"XAA" 		# XAA is older and more stable for 3D; EXA is newer and better for Render and Composite
    #	Option		"XaaNoOffScreenPixmaps"			### maybe this helps somehow
    	Option		"DDCMode"		"true"		### so the monitor reports its resolution itself
     
            # enable (partial) PowerPlay features
    	Option		"DynamicClocks"		"true"		### maybe disabling it helps with 3D; "man radeon" says it might
     
    EndSection
     
    # **********************************************************************
    # Screen sections
    # **********************************************************************
    Section "Screen"
    	Identifier	"Screen0"
    	Device		"ATI Radeon X600"
    	Monitor		"InternalLCD"
    	DefaultDepth	24
     
    	Subsection	"Display"
    		Depth		24
    		Modes		"1280x800" "1024x768" "800x600"
    		Virtual		1280 800
    		ViewPort	0 0
    	EndSubsection
    EndSection
     
    # **********************************************************************
    # ServerLayout sections.
    # **********************************************************************
     
    Section "ServerLayout"
     
    	Identifier	"Server Layout"
     
    	Screen		"Screen0"
     
    	InputDevice	"Keyboard1"	"CoreKeyboard"
    	InputDevice	"Mouse"		"AlwaysCore"
    	InputDevice	"Synaptics"	"CorePointer"
     
    EndSection
     
    Section "Extensions"
    	Option "Composite" "Enable"
    EndSection

    The big downsides of this are no real hotplugging of input and output devices and ... well ... the general mess of it all.

    Getting rid of the unneeded

    First off, there is a bit of cleaning up to do.

    As already said, X nowadays is able to work without a single line in xorg.conf, or without the file even existing. But there are still occasions where the user would like to set an option differently than the defaults.

    In my settings — and this can probably quite safely be applied to most others as well — the following sections were obsolete, as X is able to load automatically what it needs:

    • Section "dri"
    • Section "Module"
    • Section "Files"
    • Section "ServerFlags"
    • Section "Extensions"

    All these can be safely removed regardless of whether, further down the line, you want to enable input device and dual-head hotplugging or keep it all static in your xorg.conf. If you later happen to find an option that you want to set differently than the default (e.g. turn off Composite), consult man xorg.conf.

    Input devices hotplugging

    Input device hotplugging is especially useful for laptop owners, or on systems where mice, keyboards and/or other input devices (e.g. trackballs, drawing tablets, joysticks, etc.) are often unplugged and re-plugged while X is running and you do not want to restart X just to get the new device working.

    The old way X handled input devices was to set them up in xorg.conf with a device path (e.g. /dev/input/mouse0). This meant that whenever a new device was introduced, X had to be restarted.

    Things became a bit better with UDEV and even more so with HAL. Thanks to those two, devices that are plugged in while the system is running get a device path automatically, and HAL knows all sorts of useful info about the device (e.g. the manufacturer, model, number of keys, etc.). This also means that when you change the .fdi file explained below, you do not need to restart the whole of X, but only the HAL daemon, for the changes to be taken into account; this alone can be a big plus when you need an X session to run for a long time.
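    For illustration, reloading just the HAL daemon after editing an .fdi file might look like this on a Gentoo box (the init script name and location are assumptions; adjust them to your distribution):

    ```shell
    # Restart HAL so it re-reads the policy files in /etc/hal/fdi/policy/
    /etc/init.d/hald restart

    # Check that the changed options were picked up
    hal-device | grep input.x11_options
    ```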

    With all this modernisation of how devices are handled, one really asks oneself why X should still handle them like in the 1990s. The people at Xorg have also thought about this and made a new, smart driver called Evdev that supports all the input devices that the Linux kernel does and can communicate with HAL.

    Keyboard

    Many will not see the point in having a configuration that enables keyboard hotplugging, as most of us (especially laptop owners) use only one keyboard on the same box. But there is more to it than being able to just plug in e.g. an external USB keyboard and start using it that very same moment. There is also the big plus of not having to bother with counting how many keys are on the keyboard and so on, if HAL can detect it.

    To do so, you only need to remove the whole InputDevice section that handles your keyboard, and the keyboard line from the ServerLayout section.

    E.g. in my case, from xorg.conf I deleted:

    Section "InputDevice"
     
    	Identifier	"Keyboard1"
    	Driver		"kbd"
     
    	# For most OSs the protocol can be omitted (it defaults to "Standard").
    	# When using XQUEUE (only for SVR3 and SVR4, but not Solaris),
    	# uncomment the following line.
     
    	#	Option	"Protocol"	"Xqueue"
     
    	Option	"AutoRepeat"	"500 30"
     
    	# Specify which keyboard LEDs can be user-controlled (eg, with xset(1))
    	#	Option	"Xleds"	"1 2 3"
     
    	#	Option	"LeftAlt"	"Meta"
    	#	Option	"RightAlt"	"ModeShift"
     
    	#	Option	"XkbDisable"
     
    	Option	"XkbRules"	"xorg"
    	Option	"XkbModel"	"pc105"
    	Option	"XkbLayout"	"si"
     
    EndSection

    and

    	InputDevice	"Keyboard1"	"CoreKeyboard"

    Now that these settings are missing, X will not override what HAL detects. If the defaults work for you, you can safely just leave this as it is.

    But if you e.g. use a different keyboard layout that you want to associate with a specific keyboard (as in my case), then you might want to migrate at least some of the previous settings to a .fdi file for HAL to take into account. This file has to reside in the /etc/hal/fdi/policy/ folder and is a simple XML file that can implement all of the chosen driver's options and HAL's recognition patterns.

    For example, this is my /etc/hal/fdi/policy/keyboard.fdi:

    <?xml version="1.0" encoding="UTF-8"?>
    <deviceinfo version="0.2">
    	<device>
    		<match key="info.capabilities" contains="input.keyboard">
    		<match key="info.product" contains="AT Translated Set 2 keyboard">
    			<merge key="input.x11_driver" type="string">evdev</merge>
    			<merge key="input.x11_options.XkbLayout" type="string">si</merge>
    		</match>
    		</match>
    	</device>
    </deviceinfo>

    The match tags are there for HAL and Evdev to narrow down which device(s) the commands in the merge tags should apply to. In my case I wanted to enforce this setting only for my integrated keyboard (hence the product line).

    In the example you can also see the input.x11_options.XkbLayout line, which does exactly the same as the Option "XkbLayout" line did in xorg.conf. You can implement any of the Options that the driver you use (e.g. Evdev) or X itself supports, as long as you put it in a line like: <merge key="input.x11_options.MyOption" type="string">MyOptionValue</merge> (change MyOption and MyOptionValue accordingly, of course!). Also worth noting here is that regardless of what the driver manual says the type is, for HAL you should always choose type="string" — even when the Options are boolean (i.e. only true or false; on or off) or numeric, you should still write them as strings in the .fdi file.

    The only two lines that are a must are a match line to select the device(s) and the input.x11_driver line to select the driver.

    It is possible to associate a device by different means, but I prefer using the product identifier, because a) when hotplugging a device it tends to get associated with a different device path, while its identifier stays the same, and b) if I happen to get my hands on e.g. another one of the same device, I would like the same rules to apply, but perhaps not when the device is different (e.g. a different keyboard).

    The information needed to associate an .fdi file's contents with a device can be found by running hal-device in a terminal emulator. The above .fdi example is based on the hal-device output below:

    35: udi = '/org/freedesktop/Hal/devices/platform_i8042_i8042_KBD_port_logicaldev_input'
      input.keymap.data = { 'e025:help', 'e026:setup', 'e027:battery', 'e029:switchvideomode',
    'e033:euro', 'e034:dollar', 'e055:wlan', 'e056:wlan', 'e057:bluetooth', 'e058:bluetooth', '
    e071:f22', 'e072:f22', 'e073:prog2', 'e074:prog1' } (string list)
      input.xkb.rules = 'base'  (string)
      linux.sysfs_path = '/sys/class/input/input3/event3'  (string)
      info.category = 'input'  (string)
      info.subsystem = 'input'  (string)
      input.xkb.model = 'evdev'  (string)
      info.parent = '/org/freedesktop/Hal/devices/platform_i8042_i8042_KBD_port'  (string)
      info.capabilities = { 'input', 'input.keyboard', 'input.keypad', 'input.keys', 'input.key
    map', 'button' } (string list)
      info.product = 'AT Translated Set 2 keyboard'  (string)
      input.xkb.layout = 'us'  (string)
      info.udi = '/org/freedesktop/Hal/devices/platform_i8042_i8042_KBD_port_logicaldev_input'
     (string)
      input.xkb.variant = ''  (string)
      input.device = '/dev/input/event3'  (string)
      input.x11_driver = 'evdev'  (string)
      input.product = 'AT Translated Set 2 keyboard'  (string)
      linux.hotplug_type = 2  (0x2)  (int)
      input.x11_options.XkbLayout = 'si'  (string)
      linux.subsystem = 'input'  (string)
      linux.device_file = '/dev/input/event3'  (string)
      info.addons.singleton = { 'hald-addon-input' } (string list)
      info.callouts.add = { 'hal-setup-keymap' } (string list)
      input.originating_device = '/org/freedesktop/Hal/devices/platform_i8042_i8042_KBD_port'  (string)

    As you can see, I included one of the info.capabilities strings and the info.product string to let HAL and Evdev know exactly which keyboard(s) the rules in keyboard.fdi should apply to. You can also see the result of the .fdi file, as the output shows input.x11_options.XkbLayout = 'si'  (string) — meaning that this setting overrides the input.xkb.layout = 'us'  (string) default.

    More information on which options are available can be found in the driver's man page (man evdev), and there is an example .fdi file online as well.

    The above notes on writing .fdi files also apply to the other input devices below.

    Touchpad

    When it comes to touchpads, the same reasons to switch to HAL handling the devices apply as for keyboards.

    The whole logic of writing a .fdi file and removing the static config from xorg.conf applies as well.

    Just remove the whole InputDevice section that uses the synaptics driver, and the appropriate line in ServerLayout.

    E.g. remove this:

    Section "InputDevice"
    	Identifier	"Synaptics"
    	Driver		"synaptics"
    	Option	"Device"		"/dev/input/mouse0"
    	Option	"Protocol"		"auto-dev"
    	Option	"LeftEdge"		"1700"
    	Option	"RightEdge"		"5300"
    	Option	"TopEdge"		"1700"
    	Option	"BottomEdge"		"4200"
    	Option	"FingerLow"		"25"
    	Option	"FingerHigh"		"30"
    	Option	"MaxTapTime"		"180"
    	Option	"MaxTapMove"		"220"
    	Option	"VertScrollDelta"	"100"
    	Option	"MinSpeed"		"0.09"
    	Option	"MaxSpeed"		"0.18"
    	Option	"AccelFactor"		"0.0015"
    	Option	"SHMConfig"		"true"
    	Option	"Repeater"		"/dev/ps2mouse"
    	### KSynaptics reportedly needs UseShm
    	Option	"UseShm"		"true"
    EndSection

    and

    	InputDevice	"Synaptics"	"CorePointer"

    Then write a .fdi file with the appropriate options that you find in the hal-device output and the Synaptics manual page (man synaptics). The defaults are quite sane, so try starting with the bare minimum (the device and the driver) and then start adding options that you find useful.

    For example, here is my /etc/hal/fdi/policy/synaptics.fdi:

    <?xml version="1.0" encoding="UTF-8"?>
     
    <deviceinfo version="0.2">
    	<device>
    	<match key="info.capabilities" contains="input.touchpad">
    	<match key="info.product" contains="SynPS/2 Synaptics TouchPad">
    		<merge key="input.x11_driver" type="string">synaptics</merge>
    		<merge key="input.x11_options.TapButton1" type="string">1</merge>
    		<merge key="input.x11_options.TapButton2" type="string">2</merge>
    		<merge key="input.x11_options.TapButton3" type="string">3</merge>
    		<merge key="input.x11_options.VertTwoFingerScroll" type="string">false</merge>
    		<merge key="input.x11_options.HorizTwoFingerScroll" type="string">false</merge>
    		<merge key="input.x11_options.Emulate3Buttons" type="string">true</merge>
    	</match>
    	</match>
    	</device>
    </deviceinfo>

    As you can see, there are a few Options set in the above .fdi example. Notice that the right way to do it is to first test the defaults and then set up individual Options only if you want behaviour other than the default.
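    If you want to experiment before committing anything to the .fdi file, the synclient tool that ships with the synaptics driver can list and change the driver's variables on a live X session. Note that it needs SHMConfig enabled, so this sketch assumes you kept that option:

    ```shell
    # List the current values of all synaptics driver variables
    synclient -l

    # Try a value on the fly; if you like it, migrate it to the .fdi file
    synclient MaxTapTime=180
    ```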

    External mouse

    Again, there is not much new to tell about how to migrate the external mouse. It's pretty much the same as above, only you should consult the Evdev man page (man evdev) for options.

    In my case, this is the relevant hal-device output:

    1: udi = '/org/freedesktop/Hal/devices/usb_device_46d_c521_noserial_if1_logicaldev_input'
      info.addons.singleton = { 'hald-addon-input' } (string list)
      linux.sysfs_path = '/sys/class/input/input10/event6'  (string)
      input.originating_device = '/org/freedesktop/Hal/devices/usb_device_46d_c521_noserial_if1
    '  (string)
      info.subsystem = 'input'  (string)
      info.parent = '/org/freedesktop/Hal/devices/usb_device_46d_c521_noserial_if1'  (string)
      info.product = 'Logitech USB Receiver'  (string)
      info.udi = '/org/freedesktop/Hal/devices/usb_device_46d_c521_noserial_if1_logicaldev_inpu
    t'  (string)
      input.xkb.rules = 'base'  (string)
      input.xkb.model = 'evdev'  (string)
      linux.hotplug_type = 2  (0x2)  (int)
      input.xkb.layout = 'us'  (string)
      linux.subsystem = 'input'  (string)
      input.xkb.variant = ''  (string)
      info.capabilities = { 'input', 'input.keys', 'button' } (string list)
      input.device = '/dev/input/event6'  (string)
      linux.device_file = '/dev/input/event6'  (string)
      input.x11_driver = 'evdev'  (string)
      info.category = 'input'  (string)
      input.product = 'Logitech USB Receiver'  (string)
     
    2: udi = '/org/freedesktop/Hal/devices/usb_device_ffffffff_ffffffff_noserial_logicaldev_inp
    ut'
      linux.sysfs_path = '/sys/class/input/input9/event5'  (string)
      input.originating_device = '/org/freedesktop/Hal/devices/usb_device_ffffffff_ffffffff_noserial'  (string)
      info.subsystem = 'input'  (string)
      info.parent = '/org/freedesktop/Hal/devices/usb_device_ffffffff_ffffffff_noserial'  (string)
      info.product = 'Logitech USB Receiver'  (string)
      info.udi = '/org/freedesktop/Hal/devices/usb_device_ffffffff_ffffffff_noserial_logicaldev_input'  (string)
      linux.hotplug_type = 2  (0x2)  (int)
      linux.subsystem = 'input'  (string)
      info.capabilities = { 'input', 'input.mouse' } (string list)
      input.device = '/dev/input/event5'  (string)
      linux.device_file = '/dev/input/event5'  (string)
      input.x11_driver = 'evdev'  (string)
      info.category = 'input'  (string)
      input.product = 'Logitech USB Receiver'  (string)
      input.x11_options.ButtonMapping = '1 0 3 4 5 6 7 8 2'  (string)

    As you can see, with this specific mouse (a Logitech NX80 and similar) the same receiver is associated with both keyboard events and mouse movement. I imagine this is because Logitech did not bother to make different receivers for different products. So you have to make sure to associate the .fdi file settings with the input.mouse capability. Logitech also did not bother to remove button 2 (i.e. the "middle mouse button") support on its NX80, although this button is physically missing due to the two-speed vertical wheel, where you (mechanically) change speeds by pressing the wheel down.

    In the last line of the example above you can also see the ButtonMapping option set — this is because on my system I have already written an .fdi file for the mouse, and HAL has already taken the ButtonMapping option into account, as you can see below.

    Example /etc/hal/fdi/policy/usb-mouse-receiver.fdi:

    <?xml version="1.0" encoding="UTF-8"?>
     
    <deviceinfo version="0.2">
    	<device>
    		<match key="info.capabilities" contains="input.mouse">
    		<match key="info.product" contains="Logitech USB Receiver">
    			<merge key="input.x11_driver" type="string">evdev</merge>
    			<merge key="input.x11_options.ButtonMapping" type="string">1 0 3 4 5 6 7 8 2</merge>
    		</match>
    		</match>
    	</device>
    </deviceinfo>

    Now you can safely remove the mouse settings from the xorg.conf file:

    Section "InputDevice"
     
    # Identifier and driver
     
    ### Logitech USB
    	Identifier	"Mouse"
    	Driver	"evdev"
    	Option	"Name"			"Logitech Optical USB Mouse"
    	Option	"Resolution"		"1000"
    	Option	"ButtonMapping"		"1 0 3 4 5 6 7 8 2"
    EndSection

    and

    	InputDevice	"Mouse"		"AlwaysCore"

    As you can see we did not migrate the Resolution option, because HAL guesses it correctly.

    Output devices hotplugging

    There are other cases where output device hotplugging makes sense, but the most common (and the one that also applies to me) is when a computer or laptop has a dual-head graphics card and the user would like to use more than one display to show his desktop(s).

    Here I will walk you through how to enable clone mode, which is the mode most used by laptop users who want to reproduce the video output of their internal LCD display on an external device (e.g. a projector) as well. Other modes require only a quick look at man xorg.conf (and in some cases also the man page of the graphics driver you use; a list of those can be found at the very end of man xorg.conf) and writing those settings into xorg.conf.

    For our undertaking, the RandR defaults are pretty much enough, and all it takes is to remove the obsolete lines from xorg.conf and let RandR work its magic by itself.

    From the Device section:

    	Option		"MonitorLayout"		"LVDS,CRT"
    	Option		"MergedFB"		"true"
    	Option		"CRT2HSync"		"30-86"
    	Option		"CRT2VRefresh"		"50-120"
    	Option		"MetaModes"		"1280x800-1280x1024 1280x800-1024x768 1280x800-800x600"
    	Option		"MergedNonRectangular"	"true"
     
    	Option		"DDCMode"		"true"		### so the monitor reports its resolution itself

    the whole ServerLayout section:

    Section "ServerLayout"
     
    	Identifier	"Server Layout"
     
    	Screen		"Screen0"
     
    	InputDevice	"Keyboard1"	"CoreKeyboard"
    	InputDevice	"Mouse"		"AlwaysCore"
    	InputDevice	"Synaptics"	"CorePointer"
     
    EndSection

    and from the Display subsection of the Screen section:

    		Modes		"1280x800" "1024x768" "800x600"
    		Virtual		1280 800
    		ViewPort	0 0

    When the xorg.conf is finally free of unneeded user settings, just restart X and let RandR handle everything.

    Now, let us say you want to clone the display to an external monitor or projector. Plug the external display in and run xrandr --auto. In most cases this is enough, because RandR sees which resolutions are supported by each display and automatically applies the biggest resolution that works for both (or all). When you unplug the external monitor/projector, just run xrandr --auto again and all will be back to the way it was before.

    In case you want more control, just run xrandr to see which ports are used and which resolutions those displays support (* marks which resolution is currently in use and + marks the optimal resolution for that display).

    You will most likely see something like this:

    Screen 0: minimum 320 x 200, current 1280 x 800, maximum 1280 x 1200
    VGA-0 connected (normal left inverted right x axis y axis)
       1280x1024      60.0 +   75.0     60.0     60.0
       1280x960       60.0     60.0
       1152x864       75.0     75.0
       1024x768       75.1     75.0     70.1     60.0
       832x624        74.6
       800x600        72.2     75.0     60.3     56.2
       640x480        75.0     72.8     72.8     75.0     66.7     60.0     59.9
       720x400        70.1
    LVDS connected 1280x800+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
       1280x800       60.0*+
       1024x768       60.0
       800x600        60.3
       640x480        59.9
    S-video disconnected (normal left inverted right x axis y axis)

    Now that you know which output devices are plugged into which ports (LVDS is the internal laptop LCD), you can e.g. disable the internal monitor by hand by running xrandr --output LVDS --off.

    More info can be found by reading man xrandr — there you can also see how to set up modes other than clone (e.g. if you want the displays to form a grid, or just one left of the other, etc.), handle picture rotations and a lot of other settings.
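    As a quick sketch of such a non-clone setup (the output names VGA-0 and LVDS are taken from the example xrandr output above; yours may differ):

    ```shell
    # Extend the desktop: put the external VGA monitor to the right of the internal panel
    xrandr --output VGA-0 --auto --right-of LVDS

    # Or force a specific mode on the external output instead of its preferred one
    xrandr --output VGA-0 --mode 1024x768 --right-of LVDS

    # Turn the external output off again
    xrandr --output VGA-0 --off
    ```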

    There are also GUI frontends for xrandr in existence — for example, KDE (at least 3.x) users have the Resize and Rotate System Tray Applet.

    In case some output devices are always present (e.g. the integrated monitor, or if you use certain monitors in a specific layout all the time), you can also put their specific settings into xorg.conf; for that, the man pages of xorg.conf and your graphics card driver apply.

    General clean-up

    By now we have succeeded in removing a lot from xorg.conf, but you can also safely strip it of any options that you either do not need or whose defaults suit you just fine.

    For example, this is what my xorg.conf looks like after this whole procedure:

    Section "Monitor"
    	Identifier	"Elberethin LCD"
    	DisplaySize	331 207
    EndSection
     
    Section "Device"
    	Identifier	"ATI Radeon X600"
    	VendorName	"ATI Technologies Inc"
    	Driver		"radeon"
    # 	Option		"AGPFastWrite"		"true"
    	Option		"RenderAccel"		"true"		### it seems r300 does not yet have 3D acceleration (see "man radeon")
    	Option		"AccelMethod"		"EXA" 		### XAA is older and more stable for 3D; EXA is newer and better for Render and Composite
    	Option		"DynamicClocks"		"true"		### enabled, it saves battery; but some say it weakens 3D
    	Option		"Monitor-LVDS"		"Elberethin LCD"	### use monitor-specific settings for the built-in LCD only
    EndSection
     
    Section "Screen"
    	Identifier	"Primarni zaslon"
    	Device		"ATI Radeon X600"
    	Monitor		"Elberethin LCD"
    	DefaultDepth	24
     
    	Subsection	"Display"
    		Depth		24
    	EndSubsection
    EndSection

    The difference from the beast at the beginning of this article is stunning, is it not? And not only is it now more legible, but we now have both input and output device hotplugging working properly. Long live Xorg and its evolution!

    November 25, 2008 :: Slovenia  

    Patrick Nagel

    My Dell Mini setup

    Jürgen and Michael started this (it seems like everybody is getting a netbook these days), and so I continue by posting my netbook setup as well.

    I could get all hardware components except the built-in Bluetooth chip to work with very little trouble. The Bluetooth chip is supposed to work in Ubuntu Intrepid, so I guess that should be solved soon as well. I'm using a USB Bluetooth dongle for now. For details, please have a look at the page I filled out in the Linux Laptop Wiki.

    I’m using Gentoo Linux (~x86) on the netbook just as on my other computers (why would I choose something else?). To help with the compiling, I set up distcc in a VM on my company desktop. Even without that, the small machine is astoundingly fast. The 16 GB SSD’s low access latency kicks ass: for example system startup, where many small files scattered throughout the “disk” need to be read, takes a mere 20 seconds (from grub to KDM being ready to receive the password for login). Suspend to RAM also just works (with gentoo-sources, but probably also with vanilla-sources), and the system resumes automatically when opening the lid. The battery lasts quite long, too (see my small battery consumption test) and the device is completely silent at all times - so all in all, I’m very satisfied with this little device.

    Last week I bought two additional no-name el cheapo power supplies for a total of 180¥ (20€ / $26) and put them into the places where I spend most of my time, so I never need to carry the bulky thing around :)

    Well, and here is the obligatory screenshot:

    November 25, 2008 :: Shanghai, China  

    Ciaran McCreesh

    Paludis 0.32.0_alpha1 Released


    Paludis 0.32.0_alpha1 has been released:

    • Support for packages that haven’t been written yet.
    • --debug-build and --checks are gone, replaced by the special build_options: choice that can be configured in a similar way to use flags.
    • Clients using NoConfigEnvironment now use --extra-repository-dir (possibly multiple times) and --master-repository-name rather than --master-repository-dir.
    • The contrarius client has been removed.
    • metadata.xml support.
       Tagged: paludis   

    November 25, 2008

    November 24, 2008

    Michael Klier

    Tracks: "Sunday"

    Time for another piece of music :-). I have had this lying around for quite a long time now and never really bothered to finish it (I don't consider it finished now either, but I tinkered with it the whole day and think it's at least acceptable now). Besides metal or guitar-based music I'm also into electronic sounds of almost any kind (well, I'm more or less all over the place when it comes to music), so this is an attempt at electronic music ;-). To be honest I'd like to do more of that stuff, but I just don't have the computing power for drum 'n' bass (my machine can hardly manage 12 samples in my sampler and one virtual instrument at 260 BPM … frustrating). In fact I had to mix down several drum tracks during editing to be able to actually listen to it without annoying crackling all the time. The drum sounds are mostly natural drum sounds which I bitcrushed in the sampler I use. Also, the piano in the middle of the track is probably the cheapest piano emulator since General MIDI emerged to rule over digital music, so don't expect too much ;-).

    Enjoy!

    Download: chizm_sunday.mp3

    Filed under: , ,
    Read or add comments to this article

    November 24, 2008 :: Germany  

    Jürgen Geuter

    Audioplayers are easy, right?

    I love music. Not the playing or writing part (since I'm a talentless sod) but the listening part, the discovering part. Music kicks ass.

    Back in the nineties when I still used Windows, I was one of the many WinAmp 2.X users. I could drag files from my hard disk or CDs into the playlist and listen to them. It was convenient, so I digitized all my media pretty quickly. When I moved to Linux I started out with XMMS, a WinAmp clone; no surprises there, things worked as I was used to.

    Along came the "library-based" players that offered so much more functionality: instead of having to know where my files were, I could just use the metatags to find the music I wanted to listen to. It was the jump away from "playing a file" towards "playing a song". After all kinds of tests left and right I settled on KDE's Amarok, which offered the most features at the time. It always felt alien in my pretty much completely GTK/GNOME based desktop, but it was the best tool for the job, so I stuck with it for quite a while, though I was always looking for something better.

    I tried many of the Banshee incarnations, Rhythmbox, Listen, Exaile, if it somehow built I ran it for a while.

    Right now I'm listening to "Building Steam with a Grain of Salt" on Exaile. It's a player that's very rough around the edges and at an early stage of development, but it works, it supports my iPod and ... that's pretty much it. I'm not excited by it, though it is a very solid player that is actually a lot snappier than Amarok. Why am I not excited?

    Because audio players are boring nowadays. Amarok threw away all its code to port itself to the new KDE. Today they released RC1 of the 2.0 version, and it's pretty ugly, lacks features and basically leaves me very unimpressed; it's like taking half the features of Amarok 1.X and slapping a new (but not better) GUI on top. Banshee is not bad but also boring, the same mix of cloning iTunes and the free competition.

    Right now I have a hard time getting excited about any audio player out there. Why? Cause they all include the same basic things:

    • I can play music (that's a given) and use a library that is built on the metatags of my files to find what to play.
    • Most players integrate Last.fm in a few ways: Listening to the radio, submitting stuff and whatnot, sometimes even using last.fm to determine which track to play next
    • Streaming internet radio works
    • Podcasts can be downloaded
    • Files can be sent to the media player of choice
    • Usually some hack is in place to download artist information from Wikipedia or Last.fm (bonus points for just embedding a browser!); the same goes for lyrics
    • Some netlabely things like Jamendo or Magnatune are integrated


    All have these features, maybe lacking one, maybe having another one not mentioned, but that is the feature set. If you thought about writing another player with those features, just stop right there. We've seen it.

    Yeah, you can talk a lot about overengineering some abstract framework for some random feature, as happened with the new Amarok, but it's still the same lame player with a different skin.

    It's time to think about new ways of looking at our music: we've perfected the library-based audio player and can tweak it into oblivion, but it will stay boring. It's time to jump. Where to?

    Well, we have to improve our metadata for files. A lot. The fact that two bands have the same name should not mash those files together. It's also time to stop thinking about albums so much. Yes, we all grew up on them and I still listen to most of my music in album form (as in dragging album X of artist Y to my playlist), but it's time to think about other ways to look at the huge amount of media we have. Right now the exact same MP3, appearing on one album and one soundtrack, shows up as two different songs. That's just bad design.
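    To illustrate that last point: the "same file, two songs" case could at least be detected by content rather than by tags. A rough sketch in Python (the function names and file layout are made up for illustration):

```python
import collections
import hashlib

def file_hash(path):
    """Hash a file's bytes; identical copies get identical hashes."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def group_identical(paths):
    """Group files that are byte-for-byte the same track."""
    groups = collections.defaultdict(list)
    for p in paths:
        groups[file_hash(p)].append(p)
    # keep only hashes shared by more than one file
    return [g for g in groups.values() if len(g) > 1]
```

    A real player would of course want an acoustic fingerprint rather than a plain checksum, since re-encodes and retagged copies differ at the byte level, but even this naive version would catch the exact-duplicate case described above.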

    We should also think about ways to bridge the gap between video, audio and text: a .txt file could contain the lyrics to a song, a .mpg file could contain the official video for said song. One MP3 could be the official album track, another one could be a live version. But they are all basically the same song, the same content, just different views on it.

    Library-based audio players work and allow us great things but I think it's time for the next step.

    November 24, 2008 :: Germany  

    Dieter Plaetinck

    Looking for a new job

    The adventure at Netlog didn't work out entirely, so I'm looking for a new challenge!

    My new ideal (slightly utopian) job would be:

    • Conceptual engineering while still being close to the technical side as well, most notably system engineering and development.
    • Innovative: go where no one has gone before.
    • Integrated in the open-source world. (Bonus points for companies where open source is key in their business model)

    To get a detailed overview of my interests and skills, I refer to:

    November 24, 2008 :: Belgium  

    Patrick Nagel

    The Closest Book Meme

    I was just reading some posts on planet.gentoo.org and thought I’d take part in The Closest Book Meme. So as a reply to Christian, this is mine:

    Since I don’t have any real books (paper is deprecated), I opened the first ebook I found when browsing through the files stored on my mobile (lying directly in front of me, thus technically being the nearest book) ;)

    It’s The Short-Timers: The Spirit of the Bayonet by Gustav Hasford. After pressing page down 55 times in fullscreen mode on my Nokia 9300i, I found the fifth sentence to be:

    Civilians and members of the lesser services bleed all over the place like bed wetters.

    —————————–

    1. Grab the nearest book.
    2. Open it to page 56.
    3. Find the fifth sentence.
    4. Post the text of the sentence in your journal along with these instructions.
    5. Don’t dig for your favorite book, the cool book, or the intellectual one: pick the CLOSEST.

    November 24, 2008 :: Shanghai, China  

    November 23, 2008

    Roeland Douma

    last.fm support coming to QtMPC

    Sander and I are thinking about adding last.fm support to QtMPC, mainly because its interface for requesting album covers and artist/album info is so much easier than the Amazon API.

    For this we found a nice library: libmaia. Apart from a small bug I found (which reminds me I have to report it upstream), it works very well. I browsed a little through the source and it is pure Qt, so it won't limit the platforms QtMPC can run on.

    Now of course last.fm support does not limit us to retrieving album covers. We can also submit played songs to last.fm so users can keep scrobbling. The only thing I could not figure out is whether submitting played songs can also be done through XML-RPC. Is there anyone with experience in this area who can tell me if it is possible?
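    [As far as I can tell, track submission does not go through XML-RPC at all: last.fm uses its own plain-HTTP Audioscrobbler submissions protocol (version 1.2 at the time of writing), where the client first performs a handshake with an md5-based auth token. A rough sketch of building that handshake URL in Python follows; the client id, version, username and password are placeholders, and the protocol documentation should be checked before relying on any of this.]

```python
import hashlib
import time

def handshake_url(user, password, client_id="tst", client_ver="1.0"):
    """Build an Audioscrobbler 1.2 handshake URL.

    The auth token is md5(md5(password) + timestamp), with the
    timestamp sent alongside in the 't' parameter.
    """
    ts = str(int(time.time()))
    pw_hash = hashlib.md5(password.encode()).hexdigest()
    token = hashlib.md5((pw_hash + ts).encode()).hexdigest()
    return ("http://post.audioscrobbler.com/?hs=true&p=1.2"
            f"&c={client_id}&v={client_ver}&u={user}&t={ts}&a={token}")
```

    The server's handshake response then supplies the session id and the URL to POST the actual track submissions to.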

    Other than that, we are waiting for MPD 0.14, which will include idle (event) support, before we go coding like crazy on QtMPC again. Events will require a change in QtMPC, but a good one that will allow QtMPC to wake up much less often, which is generally a good thing.

    Of course we still do bug fixes.

    November 23, 2008 :: The Netherlands  

    Matija Šuklje

    Less is more in modern X

    Or How I Enabled Hotplugging in X, Survived and Got an Extra Treat

    In these modern times it is said that in most cases X does not even need xorg.conf to exist in order to work properly. These days, thanks to HAL, Evdev and RandR, the devices X needs can be detected and configured automatically. No more restarting X and fiddling about to get the external projector working, or cursing because the mouse will not come back after being un- and re-plugged.

    Herein you should find both a HOWTO and short explanations as to why and how some options are to be used and what the caveats are, based on personal experience when migrating.

    <!--break-->

    Software needed

    These are the versions that I used and short explanations as to why these are the minimum versions required for this magic to work:

    • xorg-server 1.5.3 (with the AllowEmptyInput by default patch) — needed to solve a problem with the keyboard (included in the X11 Gentoo overlay)
    • xf86-input-evdev 2.1 — needed for some advanced settings like ButtonMapping (I had to manually bump this ebuild)
    • libXrandr 1.2, randrproto 1.2 and xrandr 1.2 — needed for monitor hotplugging (in the official Gentoo tree)

    Before the big change

    For those faint of heart, without interest, or who just plainly do not want to bother: skip this part.

    I used to have a big (yes, handwritten!) xorg.conf settings file that was mostly written using arcane man and HOWTO knowledge — all neatly commented, so I would not get (too) lost in the whole mess:

    # **********************************************************************
    # DRI Section
    # **********************************************************************
    
    Section "dri"
    # Access to OpenGL ICD is allowed for all users:
    	Mode	0666
    EndSection
    
    # **********************************************************************
    # Module section -- this  section  is used to specify
    # which dynamically loadable modules to load.
    # **********************************************************************
    
    Section "Module"
    
    	Load	"dbe"				# Double buffer extension
    	Load	"vbe"				# for VESA, just in case
    	Load	"extmod"
    	Load	"bitmap"
    	Load	"ddc"				# so the monitor can report which resolution it wants
    
    	Load	"type1"
    	Load	"freetype"
    
    	Load	"glx"				# libglx.a
    	Load	"dri"				# libdri.a
    	Load	"drm"
    
    	### Added because the Acer Aspire 5024 HOWTO wants it that way
    	Load	"xtrap"
    	Load	"record"
    
    EndSection
    
    # **********************************************************************
    # Files section.  This allows default font and rgb paths to be set
    # **********************************************************************
    
    Section "Files"
    
    	# The module search path.  The default path is shown here.
    
    	FontPath	"/usr/share/fonts/misc:unscaled"
    	FontPath	"/usr/share/fonts/Type1"
    	FontPath	"/usr/share/fonts/TTF"
    	FontPath	"/usr/share/fonts/corefonts"
    	FontPath	"/usr/share/fonts/freefonts"
    	FontPath	"/usr/share/fonts/terminus"
    	FontPath	"/usr/share/fonts/ttf-bitstream-vera"
    	FontPath	"/usr/share/fonts/unifont"
    	FontPath	"/usr/share/fonts/75dpi:unscaled"
    	FontPath	"/usr/share/fonts/100dpi:unscaled"
    	FontPath	"/usr/share/fonts/artwiz-aleczapka-en"
    	FontPath	"/usr/local/share/fonts"
    
    EndSection
    
    
    # **********************************************************************
    # Server flags section.
    # **********************************************************************
    
    Section "ServerFlags"
    
    
    EndSection
    
    # **********************************************************************
    # Input devices
    # **********************************************************************
    
    # **********************************************************************
    # Core keyboard's InputDevice section
    # **********************************************************************
    
    Section "InputDevice"
    
    	Identifier	"Keyboard1"
    	Driver		"kbd"
    
    	# For most OSs the protocol can be omitted (it defaults to "Standard").
    	# When using XQUEUE (only for SVR3 and SVR4, but not Solaris),
    	# uncomment the following line.
    
    	#	Option	"Protocol"	"Xqueue"
    
    	Option	"AutoRepeat"	"500 30"
    
    	# Specify which keyboard LEDs can be user-controlled (eg, with xset(1))
    	#	Option	"Xleds"	"1 2 3"
    
    	#	Option	"LeftAlt"	"Meta"
    	#	Option	"RightAlt"	"ModeShift"
    
    	#	Option	"XkbDisable"
    
    	Option	"XkbRules"	"xorg"
    	Option	"XkbModel"	"pc105"
    	Option	"XkbLayout"	"si"
    
    EndSection
    
    
    # **********************************************************************
    # Core Pointer's InputDevice section
    # **********************************************************************
    
    Section "InputDevice"
    
    # Identifier and driver
    
    ### Logitech USB
    	Identifier	"Mouse"
    	Driver	"evdev"
    	Option	"Name"			"Logitech Optical USB Mouse"
    	Option	"Emulate3Buttons"
    	Option	"Resolution"		"800"
    EndSection
    
    Section "InputDevice"
    	Identifier	"Synaptics"
    	Driver		"synaptics"
    	Option	"Device"		"/dev/input/mouse0"
    	Option	"Protocol"		"auto-dev"
    	Option	"LeftEdge"		"1700"
    	Option	"RightEdge"		"5300"
    	Option	"TopEdge"		"1700"
    	Option	"BottomEdge"		"4200"
    	Option	"FingerLow"		"25"
    	Option	"FingerHigh"		"30"
    	Option	"MaxTapTime"		"180"
    	Option	"MaxTapMove"		"220"
    	Option	"VertScrollDelta"	"100"
    	Option	"MinSpeed"		"0.09"
    	Option	"MaxSpeed"		"0.18"
    	Option	"AccelFactor"		"0.0015"
    	Option	"SHMConfig"		"true"
    	Option	"Repeater"		"/dev/ps2mouse"
    	### KSynaptics says it needs UseShm
    	Option	"UseShm"		"true"
    EndSection
    
    # **********************************************************************
    # Monitor section
    # **********************************************************************
    
    # Any number of monitor sections may be present
    
    Section "Monitor"
    	Identifier	"InternalLCD"
    	Option		"DPMS"	"true"
    	DisplaySize	331 207
    
    EndSection
    
    
    # **********************************************************************
    # Graphics device section
    # **********************************************************************
    
    
    Section "Device"
    	Identifier	"ATI Radeon X600"
    	VendorName	"ATI Technologies Inc"
    	BoardName	"unknown"
    
    	Driver		"radeon"
    
    	Option		"MonitorLayout"		"LVDS,CRT"
    	Option		"MergedFB"		"true"
    	Option		"CRT2HSync"		"30-86"
    	Option		"CRT2VRefresh"		"50-120"
    	Option		"MetaModes"		"1280x800-1280x1024 1280x800-1024x768 1280x800-800x600"
    	Option		"MergedNonRectangular"	"true"
    
    	Option		"FBTexPercent"		"50" 		# Supposed to tune GART so it works as it should
    	Option		"AGPMode"		"4"
    	Option		"AGPFastWrite"		"true"
    	Option		"ColorTiling"		"true"
    	Option		"EnablePageFlip"	"false"		### maybe disabling it helps stability; "man radeon" says it acts up in rare cases
    	Option		"RenderAccel"		"false"		### "man radeon" says it is not yet supported on chips newer than the 9200 (mine is newer)
    	Option		"AccelMethod"		"XAA" 		# XAA is older and more stable for 3D; EXA is newer and better for Render and Composite
    #	Option		"XaaNoOffScreenPixmaps"			### maybe this helps somewhat
    	Option		"DDCMode"		"true"		### so the monitor reports its resolution itself
    
            # enable (partial) PowerPlay features
    	Option		"DynamicClocks"		"true"		### maybe disabling it helps with 3D; "man radeon" says it could
    
    EndSection
    
    # **********************************************************************
    # Screen sections
    # **********************************************************************
    Section "Screen"
    	Identifier	"Screen0"
    	Device		"ATI Radeon X600"
    	Monitor		"InternalLCD"
    	DefaultDepth	24
    
    	Subsection	"Display"
    		Depth		24
    		Modes		"1280x800" "1024x768" "800x600"
    		Virtual		1280 800
    		ViewPort	0 0
    	EndSubsection
    EndSection
    
    # **********************************************************************
    # ServerLayout sections.
    # **********************************************************************
    
    Section "ServerLayout"
    
    	Identifier	"Server Layout"
    
    	Screen		"Screen0"
    
    	InputDevice	"Keyboard1"	"CoreKeyboard"
    	InputDevice	"Mouse"		"AlwaysCore"
    	InputDevice	"Synaptics"	"CorePointer"
    
    EndSection
    
    Section "Extensions"
    	Option "Composite" "Enable"
    EndSection
    

    The big downsides of this are no real hotplugging of input and output devices and ... well ... the general mess of it all.

    Getting rid of the unneeded

    First off, there is a bit of cleaning up to do.

    As already said, X nowadays is able to work without any entry in xorg.conf. But there are still occasions where the user would like an option set differently than the defaults.

    In my settings — and this can probably be applied quite safely to most other setups as well — the following sections were obsolete, as X is able to load automatically what it needs:

    • Section "dri"
    • Section "Module"
    • Section "Files"
    • Section "
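    After this cleanup, the xorg.conf that remains can be very small. As a rough sketch (based on the hardware described above, not a drop-in file):

```
Section "ServerFlags"
	Option	"AutoAddDevices"	"true"	# let HAL hotplug input devices
EndSection

Section "Device"
	Identifier	"ATI Radeon X600"
	Driver		"radeon"
EndSection

Section "Extensions"
	Option	"Composite"	"Enable"
EndSection
```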

    Input devices hotplugging

    Keyboard
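    With input hotplugging, the keyboard layout moves out of xorg.conf and into a HAL policy file. A sketch of what such a file might contain for the layout used above (the path /etc/hal/fdi/policy/10-keymap.fdi is a common convention and may vary per distribution):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="info.capabilities" contains="input.keys">
      <merge key="input.x11_driver" type="string">evdev</merge>
      <merge key="input.xkb.rules" type="string">xorg</merge>
      <merge key="input.xkb.model" type="string">pc105</merge>
      <merge key="input.xkb.layout" type="string">si</merge>
    </match>
  </device>
</deviceinfo>
```

    HAL has to be restarted (and X as well) for changes in the fdi files to take effect.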

    Touchpad

    External mouse

    Output devices hotplugging

    November 23, 2008 :: Slovenia  

    Jürgen Geuter

    Setting up an EEEPC

    As the old saying goes "As Chi does, so shall you" and I'm of course following that rule. He posted about his Eeepc setup and so will I.

    I run Debian Linux on my EEE, which is customized quite nicely: The Debian Wiki has a lot of great information on how to properly setup your EEE with many tweaks thrown in the mix.

    On my EEE 900A everything works: sound, WiFi, power management. I don't swap to the hard drive (8 GB solid-state), but I have a swap file created that I use to hibernate to (yes, you can hibernate without wasting precious drive space on a swap partition ;-)).

    As desktop environment I run GNOME 2.22 which works great: I removed the bottom panel and put the application switcher into the top panel (I use one button that brings up a list instead of the many buttons that most people use).

    Here's the screenshot that shows how things look (click to enlarge):


    A few hints for maximum pleasure:
    • Install the speeddial firefox extension which offers quick access to the sites you visit most.
    • Install the It's all Text! extension: with such a small screen, editing text on websites without the full power of your preferred text editor is torture. Actually: install that add-on on any machine you've got, you'll learn to love it.
    • Cheese is fun!


    As a little bonus, this is how my workspace, as in my physical desktop, looks at the moment.

    November 23, 2008 :: Germany  

    Michael Klier

    My Eee Desktop

    A couple of weeks ago I got myself an Asus 1000h netbook and I've been using it a lot since. For now, I have a dual-boot setup consisting of the included Windows XP and, of course, Arch Linux. Once I've made sure I have every component set up and running under Linux, XP will have to leave ;-) (I haven't had the chance to test wireless/bluetooth yet, everything else works quite well).

    When I installed Arch on that box I wondered which WM to use. On my big Dell laptop I've switched to awesome, which is a tiling window manager, and I've gotten quite used to it. My Dell has a 1440×900 resolution, and for coding etc. it's a real joy to let the window manager handle all the window placement and to be able to switch back and forth between different screen layouts with just one key combination.

    Because of that I decided to put awesome on my Eee as well. However, it turned out that, at least for me, a tiling window manager isn't really usable on a netbook. I noticed this mostly because lately I kept booting XP instead of Linux whenever I powered up my Eee, even though everything was working. I think the main reason is the keyboard size. The Asus 1000h has, compared to other netbooks, a quite big keyboard. It's comfortable for writing, but still not 100% comfortable for controlling a WM (coding isn't really fun either, especially because the up key comes before the right shift key :-/). Also, I still consider a netbook to be more of a fun tool for browsing etc. than a computer you want to do serious work on for a couple of hours (writing this blog post on it is serious work already ;-)).

    Looking for alternatives, I thought I could give LXDE a try. I haven't tried a DE for a while and I've heard some good things about LXDE. Also LXDE uses Openbox as WM, which I've used before.

    I have to say I'm delighted ;-). Everything still feels as snappy as with awesome, and it seems this is just the right thing for a netbook. LXDE is not as bloated as other DEs; it consists of only a few components (like a session manager, a GTK theme switcher etc.) bundled with Openbox, PCManFM and GPicView. Also, it took me only half an hour to configure the whole WM to my liking.

    Because a post with a title like this one can't come without a screenshot, here are two:

    Kamino uncluttered

    Kamino cluttered

    I'd be interested in what others use as a WM on their netbooks. Do you use GNOME/KDE/XFCE, and if so, are those still usable?

    Filed under: , , , , , ,
    Read or add comments to this article

    November 23, 2008 :: Germany  

    November 21, 2008

    Nikos Roussos

    ibook adventures: debootstrap, cdrom removal, encryption

    i replaced the ibook hard disk (once again) in order to place a bigger and faster disk. so i had to setup my system again (plus i wanted to encrypt my home partition).

    the only ppc distributions are gentoo and debian. i only use the laptop when i leave my desk (my desktop runs gentoo of course), and i wanted something i could install fast, so the obvious choice was debian.

    given the opportunity of having the ibook open, i decided to remove all the parts i don't use, such as the modem and, most importantly, the cdrom. but i had to find a way of installing debian to the new disk, since the ibook doesn't boot from usb (aggrrr).

    the last resort is always netboot, but following the advice of a (debian-expert) fellow i decided to give debootstrap a chance.

    first of all i connected the new disk over usb, with the appropriate adapter, and made the partitions using mac-fdisk. then i emerged debootstrap on my gentoo box and:

    debootstrap --verbose --arch powerpc lenny /mnt/debian/ ftp://ftp.somemirror.org/debian/
    

    afterwards i chrooted to it following the standard procedure:

    mount -t proc none /mnt/debian/proc/
    mount -o bind /dev/ /mnt/debian/dev/
    chroot /mnt/debian/ /bin/bash
    

    installed the kernel image:

    aptitude install linux-image-2.6.26-1-powerpc
    

    and then i edited /etc/fstab and installed yaboot:

    exit
    yabootconfig --chroot /mnt/debian/
    

    i disassembled the ibook, removed the modem and the cdrom, and replaced the hard disk.

    when i first booted it i realised that yaboot was not installed properly and i actually had a non-bootable disk. of course, disassembling it again in order to connect the cdrom was not an option. fortunately the cdrom connector is right below the keyboard, so i lifted it up and plugged in the cdrom.



    i booted from the ppc gentoo installation cd, mounted the hard disk and installed yaboot again. for all the curious out there, the problem was that on the first attempt the disk was located at /dev/sda (because it was connected to a usb port) and now it is at /dev/hda.
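    for reference, a yaboot.conf for a setup like this could look roughly like the following (partition numbers and kernel paths are made up; they have to match your own mac-fdisk layout):

```
# /etc/yaboot.conf (sketch)
boot=/dev/hda2          # Apple_Bootstrap partition
device=hd:
partition=3
root=/dev/hda3
timeout=50
default=linux

image=/boot/vmlinux
	label=linux
	initrd=/boot/initrd.img
	read-only
```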

    the last thing i had to do was encrypt /home.

    cryptsetup -h sha256 -c aes-cbc-essiv:sha256 -s 256 luksFormat /dev/hdax
    cryptsetup luksOpen /dev/hdax chome
    mkfs.ext3 -m 0 /dev/mapper/chome
    

    added this line to /etc/crypttab:

    chome    /dev/hdax    none    luks,timeout=30
    

    and /etc/fstab:

    /dev/mapper/chome /home ext3 nodev,nosuid,relatime 0 2
    

    and rebooted :)

    November 21, 2008 :: Athens, Greece

    Thomas Keller

    Monty Python on youtube

    Monty Python has created an official channel on YouTube. The intro they did is really funny:

    November 21, 2008

    Steven Oliver

    Vim Filetypes


    At work I deal a lot with Oracle databases, which also means I end up editing a lot of files with funny endings like *.psc, *.prog, *.proc: mostly meaningless, made-up file extensions. So in order to get my install of gVim working properly with these goofy extensions, I had to force it to associate them with the plsql filetype built into Vim.

    Well, I originally had all of this inside my .vimrc file (or _vimrc under Windows XP). But I noticed the other day that it wasn't working (something I honestly paid little attention to before, because 99% of my edits were so simple and quick). After looking around the internet forever trying to figure out why it wouldn't work in my vimrc (which I never figured out, by the way), I went back to the Vim documentation and resorted to creating a third Vim configuration file.

    As a side note, it is getting to the point where Vim has taken so much of my time to configure and learn that if I had to do it all over again, there is a good chance I'd pick another editor to master. I started this blog in 2005, a month or two after starting with Linux. To this day I still find myself feeling lost every time I use Vim as more than a notepad replacement. It's just that complicated. That's scary.

    Back to the point of this post, though. For some reason (which I'll probably never find out) I ended up having to create a file called filetype.vim and stick it in $VIM/vimfiles. I have posted the file with the rest of my vim config files so anyone interested can see what I did. Like all of my other files (which have been updated as well), it is nicely commented.
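    For the impatient, a minimal filetype.vim along these lines (a sketch following the pattern from :help new-filetype, not necessarily my exact file) is enough to map those extensions:

```vim
" $VIM/vimfiles/filetype.vim
if exists("did_load_filetypes")
  finish
endif
augroup filetypedetect
  " treat Oracle's odd extensions as PL/SQL
  au! BufRead,BufNewFile *.psc,*.prog,*.proc setfiletype plsql
augroup END
```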

    And one more thing before I forget. When in command mode in vim try some of these commands. They have become invaluable for me.
    : echo $HOME
    : echo $VIM

    Those will output where each of those directories is, which is not always so obvious, especially when you're switching OSes on a regular basis like myself (XP -> Linux -> Vista -> OS X -> Solaris).

    Enjoy the Penguins!

          

    November 21, 2008 :: West Virginia, USA  

    November 19, 2008

    Nikos Roussos

    don't shoot the maintainers

    following kargig's post, i found the criticism extremely harsh. all the distributions that aim mostly at power users leave resolving blocking/breaking dependencies to the user. i've never seen an official guide from debian on how to fix a _specific_ broken or unmet dependency issue.

    when i'm not sure how to resolve such an issue, the first thing i do (before even googling it) is search gentoo-related sites, which usually host that kind of instructions. i did the same with the e2fsprogs issue that started all this discussion, and i found exactly what i had to do in less than 2 minutes. i followed the instructions and met no further problems, including on my vps server, which is located somewhere i have no physical access to.

    i think we should be a little more lenient when it comes to community-based distributions. besides, gentoo has taken all this one step further compared to debian by introducing portage 2.2, which tries to resolve blocking packages automatically. none of these e2fsprogs problems arose on portage 2.2 systems. (yes, i just upgraded my portage :P)

    November 19, 2008 :: Athens, Greece

    Jürgen Geuter

    Cars and computers

    I own a car, probably one you'd not expect me to have (I got it used for little money after my old car was wrecked in an accident):


    Yesterday it wouldn't start, so I had to call the ADAC (a club in Germany that, when you're a member, comes out when your car breaks down and tries to fix it, or tows you to the nearest repair shop for free).

    I don't know a thing about cars. I know where the key goes, where I put in the fuel and other required liquids, but that's it. I just never invested any time into learning anything about cars cause it just doesn't interest me: my car is something that can transport me from A to B without me getting wet. I don't care whether it's pretty or clean (obviously ;-)), I just want it to run.

    So the guy came, my car miraculously started again, and probably something hidden is broken that will fuck up on me some day soon (yay!). I guess this is how many people see their computers: they just want them to run, without investing anything into learning how they work. After this short story you might think I support them and their position. But I don't.

    My car is old, it has pretty much no electronics in it, it's simple. I don't care about it, because there are people that know that kind of stuff and can fix "bugs", but when I don't know about the internals I can still fully use it. When I run into a problem I might have trouble getting from one place to another, I might have to call a cab or walk or buy/rent another car but that's it. Inconvenience.

    When you don't understand your computer and you run into trouble, it's more than an inconvenience: you lose access to your data, the binary representation of the things in your head. You often cannot just buy another computer, because you will have trouble getting to your data. When my car dies, the place I want to go to is still there; when my computer is down, I might not be able to access the data I need, and that data might not exist anywhere else. (Yes, backups rock, I've got them, but how often are you called by people who want to hand in their thesis and can't access it because of a fucked-up computer and no backups?)

    Cars and computers are different because of one thing: A car is like a function. It doesn't keep internal state that matters to you (of course it has internal state), it just offers one functionality to you. A computer is like an Object: It offers functions but also keeps internal state that you care about.
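    The function/object distinction can be sketched in a few lines of Python (purely illustrative, of course):

```python
# A car is like a function: you only care about the result of the call,
# not about any state that survives it.
def drive(origin, destination):
    """Transport from A to B; nothing you care about persists."""
    return destination

# A computer is like an object: it offers functionality,
# but it also keeps internal state you care deeply about.
class Computer:
    def __init__(self):
        self.data = {}          # the state that actually matters to you

    def store(self, name, content):
        self.data[name] = content

    def compute(self, x):
        return x * 2            # functionality, like the car's

# If the car breaks, any other car still gets you from A to B.
# If the computer breaks, self.data is gone with it, unless you have backups.
```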

    You could turn your computer into a car-ish state if you moved all state data, all your files, all your settings, all your everything away from it, turning it into a thin client, but that just creates the problem of a server to administer and connectivity problems (plus what happens when your network connection is down?).

    Cars and computer analogies are wrong.

    November 19, 2008 :: Germany  

    Michael Klier

    Ardour SAE Version Released

    Two days ago, Paul Davis, lead developer of Ardour, announced the release of the Ardour SAE Version.

    For those among you who've never heard about Ardour here's a short excerpt from their home page:

    Ardour is a digital audio workstation. You can use it to record, edit and mix multi-track audio. You can produce your own CDs, mix video soundtracks, or just experiment with new ideas about music and sound.

    Ardour capabilities include: multichannel recording, non-destructive editing with unlimited undo/redo, full automation support, a powerful mixer, unlimited tracks/busses/plugins, timecode synchronization, and hardware control from surfaces like the Mackie Control Universal. If you've been looking for a tool similar to ProTools, Nuendo, Pyramix, or Sequoia, you might have found it.

    I've been following this project for quite some time now and this is really great news. In April 2007, the SAE (School of Audio Engineering, which I attended too) decided to become a corporate sponsor of the project. The goal was to develop a native OSX version of Ardour (which prior to that needed an X server).

    Now it's done: the SAE Student Version1) runs natively on OSX (jackd is included in the package) and even has support for Apple's Audio Unit plugin architecture, which enables you to use a lot of cool tools in Ardour. The main differences between the SAE Version and the classic Ardour are, AFAIK, different key bindings and fewer (yes, fewer) included plugins.

    What's even more exciting about this SAE version is that it will bring in a whole bunch of new users and grow the Ardour community. A lot of the European SAE institutes are going to have dedicated Ardour workstations and will even include Ardour lessons in their classes and make it part of the training!

    This definitely has great potential to increase the overall adoption of Ardour in the audio industry. In terms of functionality and features Ardour is already a big competitor to de facto standard DAWs like Digidesign Pro Tools, but it certainly still lacks a certain degree of awareness among audio engineers.

    Now that the OSX version arrived I'm also finally able to use Ardour with a decent sound card on my G4 at work (at home I still lack a good sound card which runs under Linux).

    I'm thinking about posting some tutorial-like blog posts about general Ardour usage and first steps in DAW-based recording/editing here. If you'd be interested let me know in the comments ;-).

    1) the classic Ardour for OSX will be released soon as well

    November 19, 2008 :: Germany  

    Music On This Blog - Survey

    As you might have noticed, I've added the music section to this page lately because I want to make some of the music I make publicly available, to get some feedback or just to keep it from rotting on my hard disk. Separating this from my main blog was intentional, because I thought it didn't fit the overall context. However, I'm not really happy with that yet.

    In the past I haven't written much about my other number one passion besides computers, namely making music / mixing / sound synthesis etc., mainly because this blog has always been about computer/Linux/web-related and personal stuff.

    I've been thinking a lot about it recently, and decided that I'd definitely like to write more about the audio engineering side of me. But I'm not entirely sure it fits the overall context of this blog.

    I'm not talking about posting full songs I make, but rather about experiments I do with synthesizers, the mixing process, tips on using DAWs etc.. I also plan to start doing some Field Recording experiments next year.

    I registered a domain last year (http://soundmonks.org) which I haven't used up to now, but as the domain name suggests it's better suited to a networking/group project, which is why I got it in the first place (I actually still have no idea what I'm going to use it for, except for some loose ideas).

    So, this survey is about which direction I should take this blog next year. Would you mind if I mixed in some completely new topics? Or am I better off separating the one from the other? Or should I keep the blog and music section separated? (I'd prefer to either integrate the music section completely into the blog, or move it away.)

    I think it'll be better to separate it completely on a dedicated domain, but then, I am not sure if I'm able to keep two blogs running.

    What do you think? Please let me know in the comments (I promise not to delete them this time ;-)).


    November 19, 2008 :: Germany  

    November 18, 2008

    Jürgen Geuter

    Howto generate barcodes in Python with reportlab

    The reportlab library for Python is great when it comes to generating PDFs. Here's an example of how to generate barcodes with it:

    from reportlab.pdfgen import canvas
    from reportlab.lib.pagesizes import A4
    from reportlab.lib.units import mm
    # I'll be generating code39 barcodes, others are available
    from reportlab.graphics.barcode import code39

    # generate a canvas (A4 in this case, size doesn't really matter)
    c = canvas.Canvas("/tmp/barcode_example.pdf", pagesize=A4)

    # create a barcode object (it is not displayed yet)
    # The encoded text is '123456789'
    # barHeight sets how high the bars will be
    # barWidth sets how wide the "narrowest" barcode unit is
    barcode = code39.Extended39("123456789", barWidth=0.5*mm, barHeight=20*mm)

    # drawOn puts the barcode on the canvas at the specified coordinates
    barcode.drawOn(c, 100*mm, 100*mm)

    # now create the actual PDF
    c.showPage()
    c.save()

    If you run the given example, you will get a barcode placed 10 cm from the bottom and 10 cm from the left border, with a bar height of 20 mm.
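    For the curious, reportlab's unit objects are just multipliers that convert to points (1 pt = 1/72 inch), so the coordinates above are plain arithmetic. A minimal sketch that recomputes the constant instead of importing reportlab:

    ```python
    # reportlab measures everything in points; its mm constant is simply
    # the number of points per millimetre (1 inch = 72 pt = 25.4 mm)
    mm = 72 / 25.4

    x = 100 * mm  # 10 cm from the left border, in points
    y = 100 * mm  # 10 cm from the bottom border, in points

    print(round(x, 2))  # 283.46
    ```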

    Other barcode types are available in the reportlab.graphics.barcode module.

    November 18, 2008 :: Germany  

    November 17, 2008

    Dirk R. Gently

    Getting Gnome Volume Manager to Play Nice


    The quick answer is that GVM can, in a limited sense. Up till now there were two choices: either accept how Gnome Volume Manager handles storage devices, or enter every storage device imaginable into “/etc/fstab”.

    Gnome Volume Manager has its own way of doing things. It appears to name storage devices as it pleases: on this PC it names the Vista partition “OS”, the USB stick “1.0 GB Media”, and so on. Gnome Volume Manager also applies its own mount options, sometimes erratically: sometimes a storage unit will be mounted, other times not.

    Trying to assist Gnome Volume Manager with fstab is possible to some degree. Gnome Volume Manager will listen to fstab and mount the storage unit in the appropriate directory, but fstab options may or may not be used.

    The best bet is to go ahead and enter the storage units into “/etc/fstab”. Use UUIDs to identify the drives and partitions precisely (especially removable ones: USB sticks, external hard disks…). First list the device names of all volumes as seen by fdisk:

    sudo fdisk -l

    Device names might also be discovered in “/etc/mtab” or at the end of the “dmesg” listing.

    To get more information on a known storage unit, type:

    file -s /dev/devicename

    To get the UUID:

    vol_id -u /dev/devicename

    The UUID is a permanent, unique identifier assigned to a storage unit.

    Open the fstab file and in place of using “/dev/devicename” use:

    UUID=4c7b-bfbe-21310c36c89e

    Or whatever the UUIDs are.
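    As a convenience, the lookup can be wrapped in a small shell function. This is just a sketch: the mount point is a placeholder, and on newer systems vol_id has been replaced by blkid, so the function tries both.

    ```shell
    #!/bin/sh
    # Print a ready-made "UUID=... mountpoint" line for a device.
    # Tries vol_id first (current in 2008), then blkid as a fallback.
    # /media/changeme is a placeholder; edit the line before using it.
    print_uuid_line() {
        dev=$1
        uuid=$(vol_id -u "$dev" 2>/dev/null) \
            || uuid=$(blkid -o value -s UUID "$dev" 2>/dev/null)
        [ -n "$uuid" ] || return 1
        printf 'UUID=%s\t/media/changeme\n' "$uuid"
    }

    # example: print_uuid_line /dev/sda1 >> /etc/fstab (then edit the line)
    ```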

    Now create folders in “/media” for the storage units:

    sudo mkdir /media/WinVista
    sudo mkdir /media/USB-Stick-1
    sudo mkdir /media/DVD-RW

    then enter the corresponding mount points in “/etc/fstab”.

    Research what options are needed. The “/etc/fstab” file is read during boot and volumes are mounted accordingly. Gnome Volume Manager will listen to some options. The most important option GVM looks for is “users”. If the “users” option isn’t found, Gnome Volume Manager will not give regular users rights to the storage unit and the familiar “You are not privileged to mount the volume” dialog will appear. Another option, “auto”, can be added to a storage unit’s options to have the volume mounted on boot. Unfortunately, Gnome Volume Manager will not listen to this option; Gnome Volume Manager’s preferences, though, do allow automatic mounting of removable drives and media (albeit somewhat erratically and unpredictably).

    An example “/etc/fstab”:

    #/etc/fstab
    
    #
    # Shared-Memory
    /dev/shm  									/dev/shm		    tmpfs  	    defaults        			            0 	0
    
    # Window Vista Partition
    UUID=D6F275C3F275A87F  						/media/WinVista     ntfs-3g     users,defaults,force,auto	            0 	0
    # Linux System Partition
    UUID=8f30c65c-ac3f-4c7b-bfbe-21310c36c89e 	/	                ext3	    noatime,user_xattr                      1   1
    
    # DVD Drive
    /dev/sr0                                    /media/DVD-RW	    udf,iso9660 auto,users,rw 		                    0   0
    
    # USB Stick 1
    UUID=48BC-9FFE                              /media/USB-Stick-1  vfat        users,auto,uid=1000,gid=100,umask=007   0   0

    With these changes most storage units will be mounted when and where they are expected to be.

    Change Storage Device Labels

    To change the label GVM shows, use a tool like GParted. GVM reads the volume label assigned to the partition; if there is none, it shows the size of the storage device instead. The best option is to use the GParted LiveCD, or any other LiveCD that has GParted on it. I had no problems adding labels to the storage units, but it’s a good idea to do as GParted warns and back up any files first. If no label changes need to be made to root (/) or, say, another fixed partition (/home), GParted can be used right then and there, but be sure to kill GVM first:

    killall gnome-volume-manager

    gparted label

    Avoid Broken Links to Other Storage Units

    If there are links to, say, some files on the Windows partition, they can be broken if not set up correctly. First be sure the filesystem is mounted at boot by listing it in fstab; GVM/Nautilus will then recognize the link when it loads. Second, make a direct link. Don’t use the storage unit links on the left-hand side of Nautilus (these are shortcuts). Instead, use the mount path directly:

    ln -s /media/WinVista/Users/Username/Documents/ My\ Documents

    At this point I reboot to see how the configuration works from boot. This should do it. Drives should mount and unmount properly and have good disk labels. Hope this helps.

    purty nautilus

          

    November 17, 2008 :: WI, USA  

    Jürgen Geuter

    A few short blurbs

    • The MplayerWii port is sheer awesome: install one small app and your Wii plays pretty much every media file from an SD card or USB mass storage device. Best thing I ever installed on my Wii.
    • If you have to generate professional PDFs in Python use Reportlab. It's easy to use and produces professional looking PDFs, including barcode generation.
    • Another Python note: If you need an ORM for something not-Django, use sqlalchemy. Flexible, powerful, and easy to extend. Writing your own Widgets is really easy.
    • "Stranded on Earth" by The Herbaliser is wicket cool:

    November 17, 2008 :: Germany  

    Roy Marples

    dhcpcd no longer sends a ClientID by default

    This is a small commit with big consequences. Basically it means that dhcpcd will no longer send a ClientID by default; you now have to request this behavior explicitly. This change has been made so that we mirror the lease credentials sent by the in-kernel DHCP client: the ClientID itself is NOT mandatory for Ethernet, and it turns out some very badly written DHCP servers do not like ANY ClientID.
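    For those who want the old behavior back, something along these lines should restore it (a sketch of /etc/dhcpcd.conf; assuming your dhcpcd version supports the `clientid` configuration option, check your man page):

    ```
    # /etc/dhcpcd.conf
    # send a ClientID again (no longer the default)
    clientid
    ```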

    How does this affect you? Well, DHCP leases work by ClientID. Now depending on the DHCP server you may or may not be affected. With ISC dhcp-4, dhcpcd will now get a different lease as ISC dhcp-4 treats a ClientID of the hardware family + address as being different from just using the chaddr field of the DHCP message. With dnsmasq-2.46 you get the same DHCP lease.

    Is this the right thing to do? Well, yes and no. It's the right thing to do by default in my eyes. This now mirrors the behavior of ISC dhclient, pump and the Solaris DHCP client. Interestingly, firewire and infiniband users still get a default ClientID, as the RFCs demand it, because you cannot fit the hardware address in the DHCP chaddr field.

    Is the change final? Maybe not - depends on the user backlash I guess.

    November 17, 2008

    Martin Matusiak

    havenet: network perimeter test

    Network connections fail all the time; we’ve all been there. So many things can go wrong: the network adapter driver can fail, the DHCP server can revoke the lease, the wifi router can disappear, the routing may be wrong at some point along the line, the DNS server can be overloaded, or the remote host may be down. Those are some of the possibilities, and it can be quite a pain to track down the problem.

    But the first thing to do is to figure out exactly what is working and what isn’t. If you know that much then at least you know where to start. My goal here is to create a fairly simple test to examine the status of the network connection, leading up to a working internet connection. One constraint is that I'd like it to be portable, so that I can carry it around along with my dotfiles. That means it should work in any location just as long as I can get a shell; it should not require any dependencies.

    A fully functional network connection looks like this:

    What I do is try to detect the parameters of the network step by step, using the regular tools like route and ifconfig. Once I know what the hosts are, I do a ping. Now, a ping obviously isn’t a foolproof test; if you’re on a network that doesn’t allow outgoing ICMP, it’s entirely possible that you can still get TCP out. So what you really should do is probe TCP port 80, not ping. But ping is extremely portable, whereas doing a TCP/UDP probe asks a lot more of the environment, needing something like nmap or hping.
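    For what it's worth, bash itself can attempt a TCP connection through its /dev/tcp pseudo-device, which avoids the nmap/hping dependency at the cost of being a bashism. A sketch (the host and port below are just examples, and `timeout` assumes GNU coreutils):

    ```shell
    #!/bin/bash
    # Try to open a TCP connection to $1:$2, giving up after 2 seconds.
    # Relies on bash's /dev/tcp pseudo-device, so this is not POSIX sh.
    tcp_probe() {
        local host=$1 port=$2
        timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
    }

    if tcp_probe yahoo.com 80; then
        echo "tcp/80 reachable"
    else
        echo "tcp/80 failed"
    fi
    ```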

    Once you’ve established that the connection is working, and you want to know more about the network, you can go further with something like netscan.

    The code is relatively stupid and messy, but that’s the way bash is.

    #!/bin/bash
    #
    # Author: Martin Matusiak <numerodix@gmail.com>
    # Licensed under the GNU Public License, version 3.
     
    function havenet {
    	local route="/sbin/route -n"
    	local ping="ping -c1 -W2"
     
    	local badrange="169.254"
     
    	local rootname="A.ROOT-SERVERS.NET."
    	local rootip="198.41.0.4"
     
    	local inethost="yahoo.com"
     
    	local creset="\\e[0m"
    	local cred="\\e[0;31m"
    	local cgreen="\\e[0;32m"
    	local cyellow="\\e[0;33m"
    	local ccyan="\\e[0;36m"
     
    	### Scan networks
    
    	echo -e "${cyellow} + Scanning for networks...${creset}"
    	test=$($route 2>/dev/null | grep -v $badrange | egrep "^[1-9]")
    	if [[ $? != 0 ]]; then
    		echo -e "    ${cred}none found${creset}"
    	else
    		local nets=$(echo "$test" | awk '{ print $1 }')
    		for net in $nets; do
    			local gw=$($route 2>/dev/null | egrep "^$net" | awk '{ print $3 }')
    			echo -e "    ${cgreen}$net ${ccyan}/ $gw${creset}"
    		done
     
    		### Detect ips
     
    		local ips=
    		for net in $nets; do
    			local r=$(echo $net | sed "s/.0$//g" | sed "s/.0$//g" | sed "s/.0$//g")
    			local ip=$(/sbin/ifconfig 2>/dev/null | grep $r | sed "s/inet addr:\\([0-9.]*\\).*$/\\1/g")
    			ips="$ip $ips"
    		done
     
    		echo -e "${cyellow} + Detecting ips...${creset}"
    		test=$(echo "$ips" | egrep -v "^[ ]+$")
    		if [[ $? != 0 ]]; then
    			echo -e "    ${cred}none found${creset}"
    		else
    			for ip in $ips; do
    				echo -en "    ${cgreen}$ip${creset}   ping: "
    				test=$($ping $ip 2>/dev/null)
    				if [[ $? != 0 ]]; then
    					echo -e "${cred}failed${creset}"
    				else
    					local t=$(echo "$test" | grep "min/avg" | sed "s/.*= \\([0-9.]*\\)\\/.*$/\\1/g")
    					echo -e "${cgreen}$t ms${creset}"
    				fi
    			done
     
    			### Detect gateways
     
    			echo -e "${cyellow} + Detecting gateways...${creset}"
    			test=$($route 2>/dev/null | grep UG)
    			if [[ $? != 0 ]]; then
    				echo -e "    ${cred}none found${creset}"
    			else
    				local gws=$(echo "$test" | awk '{ print $2 }')
    				for gw in $gws; do
    					echo -en "    ${cgreen}$gw${creset}   ping: "
    					test=$($ping $gw 2>/dev/null)
    					if [[ $? != 0 ]]; then
    						echo -e "${cred}failed${creset}"
    					else
    						local t=$(echo "$test" | grep "min/avg" | sed "s/.*= \\([0-9.]*\\)\\/.*$/\\1/g")
    						echo -e "${cgreen}$t ms${creset}"
    					fi
    				done
    			fi
    		fi
    	fi
     
    	### Test inet connection
     
    	echo -e "${cyellow} + Testing internet connection...${creset}"
    	echo -en "    ${ccyan}$rootname  ${cgreen}$rootip${creset}   ping: "
    	test=$($ping $rootip 2>/dev/null)
    	if [[ $? != 0 ]]; then
    		echo -e "${cred}failed${creset}"
    	else
    		local t=$(echo "$test" | grep "min/avg" | sed "s/.*= \\([0-9.]*\\)\\/.*$/\\1/g")
    		echo -e "${cgreen}$t ms${creset}"
    	fi
     
    	### Detect dns
     
    	echo -e "${cyellow} + Detecting dns servers...${creset}"
    	test=$(cat /etc/resolv.conf 2>/dev/null | grep nameserver)
    	if [[ $? != 0 ]]; then
    		echo -e "    ${cred}none found${creset}"
    	else
    		local dnss=$(echo "$test" | awk '{ print $2 }')
    		for dns in $dnss; do
    			echo -en "    ${cgreen}$dns${creset}   ping: "
    			test=$($ping $dns 2>/dev/null)
    			if [[ $? != 0 ]]; then
    				echo -e "${cred}failed${creset}"
    			else
    				local t=$(echo "$test" | grep "min/avg" | sed "s/.*= \\([0-9.]*\\)\\/.*$/\\1/g")
    				echo -e "${cgreen}$t ms${creset}"
    			fi
    		done
    	fi
     
    	### Test inet dns
     
    	echo -e "${cyellow} + Testing internet dns...${creset}"
    	echo -en "    ${cgreen}$inethost${creset}   ping: "
    	test=$($ping $inethost 2>/dev/null)
    	if [[ $? != 0 ]]; then
    		echo -e "${cred}failed${creset}"
    	else
    		local t=$(echo "$test" | grep "min/avg" | sed "s/.*= \\([0-9.]*\\)\\/.*$/\\1/g")
    		echo -e "${cgreen}$t ms${creset}"
    	fi
    }

    Download this code: havenet_networktest.sh

    November 17, 2008 :: Utrecht, Netherlands  

    Dieter Plaetinck

    AIF: the brand new Arch Linux Installation Framework

    Recently I started thinking about writing my own automatic installer that would set up my system exactly the way I want.
    (See http://dieter.plaetinck.be/rethinking_the_backup_paradigm_a_higher-level...)

    I looked at the official Arch install scripts to see if I could reuse parts of their code, but unfortunately the code was just one big chunk of bash, with the main program and "flow control" (you must first do this step, then that), UI code (dialogs etc.) and backend logic (creating filesystems, ...) all tangled up and mixed very closely together.
    Functionality-wise the installer works fine, but I guess the code behind it is the result of years of adding features and quick fixes without refactoring, making it impossible to reuse any of the code.

    So I started to write AIF: the Arch Linux Installation Framework (actually it had another name until recently), with these 3 goals in mind:

    • Make all code modular, reusable etc. Everyone should be able to add/change/remove certain aspects of an installation procedure easily, or build custom installations relying on existing code where appropriate
    • Port /arch/setup and /arch/quickinst, so you get (almost) the same installer as before, but using totally refactored code.
    • Write my own automatic procedure for my own custom needs

    Right now most of the hard work is done and the ported version of /arch/setup seems to work more or less.
    I've posted to the arch-general mailing list and the responses I got were very positive.
    This is what Aaron Griffin (lead developer of Arch Linux) said:

    My honest opinion is that this is awesome. You're the reason I love open source 8)

    That said, we haven't release a 2.6.27 ISO just yet, and I need to go
    in panic mode and get it out this weekend. But for the next release,
    or even a smaller release before then, I'd *love* to incorporate this.

    (...)

    Just letting you know: I'm not silent because I don't care. I'm silent
    because I'm watching and drooling 8)

    You can read the whole thread here: http://www.nabble.com/Fifa:-Flexible-Installer-Framework-for-Arch-linux-...

    I've also built packages to make it easy to install on a current installcd. The package also comes with a readme and howto that explain how to install and use AIF.

    Right now I encourage people to try it out. All known bugs are documented in the TODO file; there are probably more that I haven't discovered yet. But it should work pretty well.
    I'm also very curious about input on the code/design level.

    Hopefully the Arch guys can set me up with a bugtracker and make some sort of announcement to the community to try it out...

    November 17, 2008 :: Belgium  

    November 16, 2008

    Jürgen Geuter

    On software installation and activities

    When you hear people talking about Linux you'll probably hear either one of these two positions:
    Pro-Linux Person
    Installing and keeping software up-to-date is so much easier with Linux than with Windows or MacOSX, package repositories are the shit

    Anti-Linux Person
    Installing software in Linux is so hard, and it never has the software I want. Windows' setup.exe dance is so much better

    And if you involve an OSX person he or she might tell you:
    I have all my apps in my profile folder and can easily take them with me when I copy my profile.


    The way software is handled is one of the aspects where the three major operating systems differ and it is somewhat of a religious war (but that happens a lot when it comes to operating systems ;-)).

    Let's look at all three options real quick:
    • Windows: You go to the software maker, get a CD/DVD or download a "Setup.exe", run it and you have it installed. Windows offers a remove tool for software installed that way. Updates are not handled if the program does not do it by itself. Disadvantage: No centralized updates.
    • Linux: For most software you just pick the package from your distribution's repository. Software is removed the same way, via a centralized package manager. Users can provide their own repositories that integrate nicely. Updates are done in a centralized way; the software itself does not have to bother updating. Advantage: All packages are automatically kept up-to-date. Disadvantage: When you copy your profile to another computer you might have to install some software, because the software itself does not come from your profile.
    • OSX: You go to the software vendor, download a file, drag and drop things around for a while, and the software is installed into your home dir. Updates happen if the software does them by itself. Advantage: Your programs "travel" with you. Disadvantage: No automatic updates.


    Now I read more and more how people love the OSX way and how it is so much better than what the free software world has, and I see how someone might only see the advantages: Backup your home dir and you've got everything. How nice - how convenient.

    But there's a problem with the OSX way: every program has to reimplement the same stuff, like writing its own updating component, over and over again. There's no dependency checking, so you either bundle every library you might need or you force your users to jump through hoops left and right to get something running.

    With the advent of web-based applications the notion of installing software locally got a bad reputation: Why use an installed local office application if you can use Google Docs or something similar? The online version is always up-to-date, works on whatever computer you are using and keeps your data in a centralized place.

    I think in the long run neither OSX nor Windows will get away with not offering centralized update facilities for installed software but it still does not take care of either dependencies or the actual installation.

    For commercial vendors that is a big problem actually: If you are Apple or Microsoft you cannot just let everyone push his package description into your "blessed" repository which means you are establishing a structure of the "haves" (those in the "blessed" repo) and the "have-nots" (those not in the blessed repo).

    But as much as the Linux (and Unix) way might get bad press or reviews from time to time, this is actually the aspect of the whole operating system thingy where free operating systems like Linux and the BSD family beat all the competition: the ease of installing packages and keeping them up-to-date.

    But there's always room for improvement. Debian's "tasksel" application for example is a great example that gets too little publicity in my opinion.

    "tasksel" allows you, upon installation to select tasks you want your system to fulfill: If you check "Desktop" for example, you get the X Window system and a desktop environment preinstalled, if you select LAMP server, you get a stack of Apache, PHP and Mysql (just examples).

    This is a great answer to the problem we sometimes hear about from new users: "I don't know which software to install, there are so many!"

    Ubuntu is not my preferred distribution but they do one thing right: they identify the most common use cases and choose one application for each. One image editor, one office suite, one browser. More are available if the user wants them, of course.

    Now let's move away from packages/programs and think about "tasks" or "activities": When I think about photo management, for example, I could just install F-Spot, but I'd reach its limits really quickly. How about we define an "activity" Photo Management that does not just include an image viewer or organizer like F-Spot but also a real image editor that allows advanced image processing? How about we define the "activity" podcasting: it installs audio editors, VOIP software and a preconfigured stack of audio libs that allow the easy creation of podcasts.

    We are too focussed on the applications themselves, a big mistake that our desktops also reflect: the default desktop in KDE or GNOME has a taskbar where every application gets its little button, but is that what we want to know?

    I don't really care how many GVIM windows I have open, but this window I'm working in right now belongs to one certain browser window and those two windows (regardless of the applications) together form my activity "write a blog post".

    Our desktops and our package managers should move to an activity-based model, which would make it easier for new users to know what to do: You want to program in Python? Get the "programming (Python)" activity, which gives you a great working environment without having to choose between all the available editors. Of course you should have the chance to add that one program that you prefer, but the environment is already complete at that point: you are not building the environment, you're merely customizing.

    Gentoo's portage system knows something called "sets", which are pretty much aggregations of packages, but that is not exactly what I mean: an "activity" should come with a good default configuration, one that works. Coming with too much configuration and patching is often a bad idea: you introduce new bugs and the documentation of the software might no longer correspond to your distribution (the "Ubuntu problem"). But if somebody chooses an "activity", everything should be set to reasonable defaults that allow things to work. Let me give you an example:

    Let's define the activity "anonymous browsing": It needs a browser obviously. Then we add Tor for anonymity but in order to work with a browser we also need a proxy that tunnels things through tor, for example Privoxy. Currently installing Privoxy does not set it up for Tor which makes sense: How should the package maintainer know what the users want to do with the proxy? Too much configuration wouldn't make sense here. But if the user installs the "anonymous browsing" activity he or she obviously wants to use Privoxy to use Tor with his browser which means that the configuration should automatically connect Privoxy and Tor.

    We could of course model activities as "meta-packages", packages without real content (apart from configuration, probably), but that wouldn't help: where users didn't find the right package before, they won't find the right activity now in the big list of all the packages. Activities are on a higher, more abstract level and should be seen as something different from packages; there should be a very simple interface to browse activities and install them.

    What do you think? Should we keep on focussing on applications or are we at the point where the old system works well but could use some improvement?

    November 16, 2008 :: Germany  

    November 15, 2008

    Martin Matusiak

    Chuck: properly farcical

    A lot of really bad “comedy” movies have been made to portray the despair of suburbia. People whose lives revolve around work in a big supermarket or other chain, empty most of the time, so they try to find something, anything, to distract themselves from the daily routine.

    The premise for Chuck is the same. He’s a geek, he has a pity-friend geekier than him. He works at a big electronics chain. And he has a “normal” sister who wants him to be “normal”.

    Then it happens. His old college buddy, a CIA agent gone rogue, sends him a message containing every government secret he’s stolen in his “rogueness”. Chuck somehow absorbs the whole thing, the computer breaks, and now he’s the only one with the secrets. Except he’s still the geeky suburbia guy, so two agents from competing agencies show up to make sure nothing “happens” to him. Needless to say, he cannot divulge anything to his sister or his friend, so he has to pretend like nothing has changed. The agents, in turn, get jobs near him and have to fit into the suburban landscape.

    You’re probably thinking “with a premise like that it could so easily suck”. And I’m with you. But it doesn’t. Chuck is pretty good in his role, and the whole spying thing is sufficiently farcical to be funny, but not so overdone that it’s stupid.

    November 15, 2008 :: Utrecht, Netherlands  

    George Kargiotakis

    Gentoo’s epic phail

    As some people already know, I joined the army 2 months ago, which makes it somewhat difficult for me to keep up with the latest updates for every machine I use. Today I tried to upgrade a machine running stable (x86) Gentoo Linux after more than 15 days since the last upgrade and I [...]

    November 15, 2008 :: Greece  

    Kun Xi

    Poor man’s NAS

    A Network Attached Storage (NAS) has been on my wanted list for quite a long time; thanks to the Live Search Cashback program for making it happen: a Western Digital MyBook World Edition (500GB). More about the hardware specification:

    • ARM926EJ-Sid(wb) [41069265] revision 5 (ARMv5TEJ) 99.73 MHz
    • Memory: 32M
    • VIA Networking Velocity Family Gigabit Ethernet
    • WD5000AAVS-0 500G HD

    I believe the 100MHz ARM CPU is powerful enough to drive this tiny box, but the limited memory cripples it like a lame duck. The sustained file write rate (85G using lftp mirror) is approximately 3.8MB/s. It hardly qualifies for any service beyond a file server. Now, it is time to hack.

    Jailbreak and SSH

    The first thing to do is to create a user in the web interface of MyBook, as root with a null password is banned for security reasons. Log on with admin and 123456, create a user JOE, and set up the password for later use.

    Run the script discussed in the wiki, and ssh in as JOE. Now you can su to root with a blank password. 0wned!

    User management

    MyBook manages users in a very intricate way:

    All Samba users are granted shell access, but unix password sync = yes is not set; /etc/shadow and /var/private/smbpasswd are updated individually by a Perl script via the web interface. The only reasonable explanation is that the minimized Samba build lacks PAM support.

    All user names are capitalized. I assume this is a brute-force approach to address the difference between Samba and native Linux accounts: Windows user names are case-insensitive, while Linux ones are case-sensitive.

    As the passwords are hashed in /etc/shadow, it is easier to add/delete/update users via the web interface, then fine-tune the corresponding files. The user administration executables are hidden in /usr/www/nbin.

    Share with Samba

    The default exported directory is /share/internal/PUBLIC; the permissions of the directory are set to rwsr-sr-x and the owner is www-data, YMMV. So any file/directory created will be owned by www-data. If you are unhappy with that, you may add a user, e.g. joe as discussed before, then add joe to the www-data group:

    # /etc/group, YMMV
    www-data:x:33:share

    Remember to change the default masks in /etc/smb.conf:

    create mask = 0775
    directory mask = 0775

    Package management

    Though I am a big fan of Gentoo, it is a little bit paranoid to build everything from scratch; a precompiled package manager like Optware makes more sense. Check out this tutorial for bootstrapping.

    The essential packages for daily administration, imho, are screen and lftp.

    Feature requests

    There are some itching missing features; if you happen to know a solution or a hint, please drop me a message in the comments:

    Access Anywhere No MioNet, just SSH. If you are a perfectionist, consider porting this Delphi application to MyBook to host MyBook under your preferred domain.

    Download Manager A web front-end that listens for download requests from Firefox/IE plugins, then delegates them to a wget backend with cookie support. A more aggressive approach might support the megaupload happy hour.

    November 15, 2008 :: Washington, D.C., USA  

    November 14, 2008

    TopperH

    Dear lazyweb...


    The laptop I used as my home server passed away yesterday and needs to be replaced.

    I bought this on eBay and it is going to arrive on Tuesday or Wednesday.

    Of course I'm going to install Gentoo.

    I don't have a monitor to attach to it (it's gonna be a headless server), so I asked the seller to set up the BIOS for CD boot.

    I will do all the installation via ssh, but I know for sure that both the Gentoo LiveCD and SystemRescueCd need user interaction to get ssh up and running (you have to set a root password).



    Probably there will be no problem doing it blind:
    1. insert the cd
    2. power the machine on
    3. wait for a while
    4. type "passwd" "******" "******"
    5. type "/etc/init.d/sshd restart"
    But I wonder if there is a live distro that has a fixed root password and starts ssh and dhcp by itself.
    I've been told on IRC that the Xbox distros can do that, but I don't think I will be able to chroot into an x86_64 environment from them...

    Does anybody have a better idea?

    November 14, 2008 :: Italy  

    Thomas Capricelli

    Mercurial bulk update

    I don’t know about you, but in a lot of different places I have a directory called ‘hg’ with lots of different Mercurial clones inside: in the home directory of each of my computers for my own projects, inside other directories for external projects, and so on.

    Now, remember one important aspect of distributed source control : your clone is actually both a repository and a working directory. This is why you usually (git and others do the same) have two different commands : one to synchronize the  repository (pull) and one to update the working directory (update).

    Updating comes with a risk: you can get conflicts. This is why I never update a svn repository without thinking first (do I have local modifications?). But pulling is a lot less problematic, and, especially on my laptop, I often want to ’sync them all’ as soon as I have an internet connection. Until now I had a syncall script with the paths of all (svn, unison and) mercurial repositories hardcoded. This does not scale, and I now need it in at least 5 different places. I don’t feel like maintaining such scripts.

    And now comes the magic alias that made my day. I’m usually lame at shell scripting, so I’m sure there are better ways. But it works, now, on my computer. And this is so useful.

    alias hgbulk '\ls */.hg -d | cut -d\/ -f1 | xargs -i bash -c  "(cd {}; hg pull )"'

    (Yes, I use tcsh, but I’ve tested it in bash too. Don’t ask why I use tcsh.)
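    For bash users, the same thing can also be written as a small function (my own sketch; the `\ls`/`cut`/`xargs` pipeline above works just as well):

    ```shell
    # pull every Mercurial clone found one level below the current directory
    hgbulk() {
        local d
        for d in */.hg; do
            [ -d "$d" ] || continue          # glob didn't match anything
            ( cd "${d%/.hg}" && hg pull )    # subshell: no need to cd back
        done
    done 2>/dev/null || true
    }
    ```

    Run it from any directory that contains your clones.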

    November 14, 2008

    November 13, 2008

    Matt Harrison

    Sandisk and U3 annoy me

    I occasionally need to use Windows for clients, and I often transfer files via thumb drives. I needed a new drive, so I slipped off to Costco to buy a 3-pack. I copy the files from Linux, put the disk in the Windows machine, and it starts installing

    November 13, 2008 :: Utah, USA  

    John Alberts

    GMN Late

    I don’t know why they didn’t post this on the Gentoo front page, but obviously the October GMN is not coming.  If you are on the ‘Gentoo-dev-announce’ mailing list, you will have seen Anant mention what’s going on with the GMN.

    Hi Folks,

    I’ve been extremely busy traveling & attending conferences for the last few weeks and will be required to continue the same for atleast 2 weeks more; and nightmorph is just recovering from his failed hardware. As a result, there will be no October issue of the GMN. We hope to resume to normality by the end of November.

    Apologies.

    Anant

    November 13, 2008 :: Indiana, USA  

    Alexander Faeroy

    Erlang talk


    On Wednesday 26/11 at 19:00, at SSLUG's Wednesday meeting at CBS, Jesper Louis Andersen will talk about the Erlang language.

    In the real world many things happen at the same time. People and machines work in parallel alongside each other, and it is clear that the interaction between different computer systems keeps growing.

    Erlang is not the first language to try to model the machine as many small systems cooperating to solve a task, but it is a system with a number of successes behind it. In this talk I will not try to go into the details of the language, but instead explain the model that forms its foundation. I will also offer a take on where Erlang can already be used in everyday software systems today.

    About the speaker

    Jesper Louis Andersen holds a Bachelor's degree in Computer Science and is, at the time of writing, working on his Master's. He is interested in programming languages of all kinds, as well as in what the technology behind them can be used for outside the language field. At the moment he runs Ubuntu Linux, but he has previously run Red Hat, Debian, FreeBSD, NetBSD and OpenBSD.

    Time and place

    The meeting takes place at CBS - Copenhagen Business School, Howitzvej 60, 2000 Frederiksberg. The door will be open from 18:00 and the talk starts at 19:00.

    See also the SSLUG wiki.

          

    November 13, 2008

    Thomas Capricelli

    Yet another activity graph : how often do you emerge ?

    Really, I seem to be fond of activity graphs these days. I have reused part of this previous code, but this time I parse the emerge log file to display the activity of your successful emerges. Think of it as a graphical view of ‘genlop -l’.

    Those examples are the emerge activity of my two main computers.

    The current code does the bare minimum, and  I need to add at least command line options for

    • logfile to use (currently/default : /var/log/emerge.log)
    • filename to create (currently/default : activity.png)
    • width/height of the image (currently/default : 800×600)

    The usage is straightforward:

    orzel@berlioz EmergeActivity% ./EmergeActivity.py
    There are 9896 emerge completed successfully
    Created the file 'activity.png'
    orzel@berlioz EmergeActivity% xv activity.png &
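    The count in that output can also be pulled straight from the log with grep; a minimal sketch, assuming the standard emerge.log line format where completed merges are marked ‘::: completed emerge’:

    ```shell
    # count successfully completed emerges in an emerge log
    # (defaults to the standard location, or pass a path as $1)
    count_emerges() {
        grep -c '::: completed emerge' "${1:-/var/log/emerge.log}"
    }
    ```

    This is roughly what ‘genlop -l | wc -l’ would give you, without the graph.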

    You can grab the source (browse, tarballs, mercurial clone, even RSS) from :

    http://sources.freehackers.org/hg.cgi/EmergeActivity/

    November 13, 2008

    November 12, 2008

    Roeland Douma

    Altitude in YOURS

    This weekend I spent getting the SRTM data (thanks, NASA) into Postgres so I can use it. To do that I used Sjors' way of doing it, which is with Python. I got it to work in no time on my Gentoo-amd64-dev-box. That was Saturday.

    Then on Sunday I spent all day trying to get it to work on the Dutch OpenStreetMap server, which was quite a bit bigger challenge, since on CentOS 5 not all the packages were correct. So I ended up compiling a lot myself, which is a pain in the ass when it turns out you have to compile almost all the dependencies yourself too.

    But it works! Check it out at altitude.openstreetmap.nl. You won't find much there, but check the wiki page for more info on how to get data out of it. We only host the SRTM data for the Netherlands, Belgium and Luxembourg, since the data is quite large and, well, I am not the only one using the server.

    But back to the title. YOURS is an OpenStreetMap routing service, which is kick-ass by the way. YOURS supports the altitude data and gives the altitude profile of your route. Right now only a plot of the altitude is shown, but I guess we'll get some nice stats soon!

    Now for a little problem: YOURS does not work really well in Konqueror. I tried to figure out why, but I'm not really a JavaScript expert. So if you are and have some free time, please find the problem :)

    Now for another project idea of mine (probably around Christmas): porting Sjors' Python implementation to C. Because, well, I love C.

    November 12, 2008 :: The Netherlands  

    Writing a music player daemon

    Yesterday Sander and I released a new version of QtMPC (0.4.1).

    However, when thinking about some of the features we would like to have in QtMPC, we once again got annoyed by the MPD protocol. One of our main issues is that it is not event-based, or even publish-subscribe: you have to ask for every little detail you want. This is a choice, and it is true that this way you do not get unwanted messages and thus do not generate unwanted network traffic.

    But we feel that some sort of event-based protocol would do a better job here. Say that by default nothing is sent; that way mobile clients can still control the player without receiving anything unwanted, or they can subscribe only to events related to playback. Mobile clients are still supported, but they do not have to request the status every x seconds: they just get a message once something has changed.
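    A hypothetical exchange could look like this (entirely made-up syntax, just to make the subscribe-then-listen idea concrete):

    ```
    client: subscribe playback
    server: OK
    (time passes; another client pauses the player)
    server: event player state=pause
    ```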

    For full-blown clients, being event-based is also a positive thing, for example for reaction time when a new song is added. Right now the client has to notice this by requesting the version of the playlist. A simple message from the server saying playlist current newsong …….. (or something like that) could be sent right after the song is actually added. It just makes things easier.

    Another thing that kind of bothers us is that there is no inotify support, which would be great to have. Say you have a music server running and you add some new music to the collection. It would be best, of course, if it found the new music, added it to the database and notified the clients of a change in the music library.

    We have not written any code yet, but we are thinking about it. For the music playback part we are planning to use GStreamer, since it already supports a lot of audio formats. And why reinvent the wheel, right?

    November 12, 2008 :: The Netherlands  

    Ben Leggett

    Quite Possibly The Most Extensive Fallout 3 Review Yet


    The Romanian/English site computergames.ro has published a whopping 8-page Fallout 3 review that may just be the most in-depth and profound review yet. If you haven’t the patience to read an eight-page review, you aren’t the target audience; move along, go play Peggle. But I really recommend the review; it’s quite good.

    It’s got everything: A historical perspective, honest criticism and praise, and musings on the nature of the universe. Okay, maybe no musings.

    Commenting on “choices and consequences”: “I thought that game designers had learned by now that placing 2-3 key moments in a timeline and setting variables to them isn’t the best solution you can take when trying to make an “open-ended, replayable experience”. Especially when the result is hardly influenced by the character’s moral build. In other words, you can finish the game as Jesus himself even if you swept the floor with every miserable bastard you came across, stole everything that wasn’t nailed down and/or if you lied remorselessly to every naive lad that put their life-depending hopes on you.”

    On Fallout 3’s dialogs: “A cool breeze comes in the form of skill checks during dialog. There are plenty of situations in which you can use your character’s language skills in social interactions. Should you so desire, you can also lie to the NPCs – with various odds of success – to skip, for instance, certain boring missions. On the other hand, some quests offer secondary objectives that encourage their “legitimate” completion. Sadly, most lines in the game are utterly embarrassing. Skipping over the oceans of dialogues that seem to represent the conversation between a bar of soap and a person unfortunate enough to suffer from Down syndrome, ironically, the juiciest lines when it comes to unintentional humor are the speech options given by high Intelligence. The remarks you may choose vary from repeating the previous line using different terms, to the most ridiculous and stupid conclusions anyone with an IQ made out of more than two digits can come to.”

    The closing comments are worth framing and hanging over your plastic racecar bed: “To me, Fallout 3 is Bethesda’s best game yet, but it’s got more holes than a sinking ship. It’s a perfect symbol for contemporary games: oversimplified, too accessible and way too commercial. The clever, edgy dialogues are gone, along with the complex relationships and the depth of the game world, replaced by a flawed visual feast, generic conversations and a gameplay fit for the masses….The [previous] Fallout games aren’t really difficult. Least of all hardcore. However, they do demand a minimum amount of logic and thought from the player. Compared to Mass Effect or Oblivion, they don’t push you towards the end. You need the determination and a minimum degree of inventive thought to make your way up in the world, and towards your own objectives.

    This is perhaps Fallout 3’s biggest failure: it’s not nearly complex or cerebral enough for the role in role-playing game to really shine.”

    He does praise other aspects of the game, but you’ll have to read the article to find those bits. On a related note, I have to wonder about the “less-known/foreign site = lower overall score” trend that I’ve been seeing with Fallout 3 reviews. Could this phenomenon be partly explained by the fact that Bethesda actively courts only bigwig game journalists for their lavish, (in)famous “press parties”? Not to mention that it’s hard to really criticize a game’s flaws when there are massive ads for it splashed all over your site.

    Anyway, this site seems to have generally perceptive and long RPG reviews from what I can gather, I intend to peruse their stuff further.

          

    November 12, 2008 :: Georgia, USA  

    November 11, 2008

    Jürgen Geuter

    Subscribe-to-comments back

    I had to disable "subscribe-to-comments" a few weeks ago due to German law. Now, running Serendipity 1.4-beta1, subscribe-to-comments is back with a double opt-in mechanism:

    When you select that you want to be notified of new comments on a post you will get a mail from the system asking you to confirm that by clicking on a link.

    All the rest should run smoothly; if problems emerge, tell me.

    November 11, 2008 :: Germany  

    Thomas Capricelli

    Opale ported to qt4 and kde4

    Opale was an application written using KOffice that I use to handle my personal accounts. Long ago I dropped the KOffice support (mainly because of the crappy/undocumented/buggy chart API) and since then Opale has been a KDE-based application.

    One year ago I started porting it to KDE4 and, along the way, made it a Qt4 application. Through configuration you can now build either a Qt4 or a KDE4 application. I have to say the KDE4 side is not thoroughly tested. Anyway, the Qt4 port is done and I now have Opale working under Windows. Not that I really care, but it can be useful for others.

    It was not until a few weeks ago that KDE4 became good enough for me (copyright me) that I could actually use this version of Opale (yes, I know about running KDE4 applications under KDE3, but no thanks). Now that this version is tested enough, I'm releasing it as 0.9.

    The roadmap for 0.10 is

    • macintosh version
    • well-tested kde4 application
    • template editing/removing

    Opale homepage

    Opale project on freehackers’ laboratories.

    November 11, 2008

    Steven Oliver

    Vim Fonts


    I know very little about fonts or how they work, but that doesn’t stop me from attempting to learn. So today I opened up Visual Studio 2008 Express Edition for Visual Basic and thought to myself, “Gee, that’s a good looking font.” So naturally I went back to Vim and attempted to load the same font. All I could think to myself after seeing it was, “Ugh!” It looked horrible. Now, I don’t know why it would look so bad in Vim and not in Visual Studio, but it was totally awful. So I went searching for possible reasons why. My search was interrupted, but I did learn a little about Vim and setting fonts:

    set guifont=ProggyCleanTTSZBP:h12:cDEFAULT

    Personally, it had never occurred to me what that “cDEFAULT” at the end of the line meant. Today I discovered it specifies the charset. And that’s where all the great discoveries ended, because no matter what I did to the setting it never got better. It got worse most of the time, but never better. Perhaps another day….

    Enjoy the Penguins!

          

    November 11, 2008 :: West Virginia, USA  

    November 10, 2008

    Jürgen Geuter

    Systrays

    This is my current systray after some serious cleaning: just Last.fm, Pidgin, Dropbox, NetworkManager and the Pulseaudio applet are left. Usually it's a lot fuller: some file transfer might be in there, gpodder downloading podcasts, Mail-Notification popping up, or some other task with background activity wanting to tell me about its status. Systrays tend to get cluttered.

    Now I do think systrays are quite a good compromise: where Applets/Widgets/Gadgets/Plasmoids on the desktop fail (because you can't see the desktop while doing anything interesting, so you can't see the widgets), systray apps can offer you a convenient way to achieve a few things:
    • You want to have some program run in the background
    • You don't want it minimized cause it will clutter your ALT+TAB window switcher
    • You want a quick way to access the background program


    Things in our systray usually are supposed to do something for us that we don't want in our face all the time, we want it to just do its job and maybe give us feedback but not really disturb us while doing what we're doing, but what can we do about the clutter?

    In Windows XP the systray sometimes started to arbitrarily hide icons; that is obviously not the right way to do it: how can you know which icons the user actually wants to see and which he doesn't care about? Making icons smaller is no option because the icons are often quite small anyway, and if they are supposed to give us any information we really need to be able to actually see them. The panel that incorporates the systray (usually it's some panel of sorts, I guess) could expand, but that is not elegant either.

    I'm not sure about the perfect solution either: having the systray in your panel can make the space in that panel really limited, and it can be kinda hard to separate the systray from the applets/icons you do have in your panel; that problem gets even worse with setups that rely on just one panel. It's time to think about better ways to allow user processes and applications to go into "hiding" without completely disappearing: systrays are supposed to give us access to otherwise invisible applications, but they are not scaling very well with the growing number of little helper apps we tend to have running.

    November 10, 2008 :: Germany  

    November 9, 2008

    Steven Oliver

    Using Vi


    I use Vim a lot, for almost all my text editing on a regular basis. On my Mac I use MacVim. I have XCode installed; wonderful program, I guess, but I have yet to use it. But how many people actually use Vi (not Vim) on a regular basis? I don’t. And why should I? Vim does everything Vi does and more.

    The other day at work, though, I was tasked with editing some files on our servers, mainly bash scripts and a couple of SQL packages, and I was forced to use Vi. Using Vi is a whole other game from Vim. Things like syntax highlighting don’t exist. Niceties like your rc files don’t exist. The difference between insert and command mode is hard and unforgiving. In Vim it’s softer: you might be in command mode, but a lot of the time you’d never know it from the way you can still navigate through the text.

    I’ll tell you though. If you want to learn how to use Vim, learn Vi. It’ll make a man out of you.

    Enjoy the Penguins!

          

    November 9, 2008 :: West Virginia, USA  

    The quest for my own site


    Before diving in with lots of money and time I decided to play it safe. I installed all the necessary components on my desktop and began to work through all the things I would have to do in order to start my own blog using a custom home-brew blogging engine.

    I installed all kinds of wonderful things I never had a need for, like Apache, MySQL, Ruby on Rails, and some other minor things that aren’t worth mentioning. Getting everything up and running… not hard. Getting everything working together… a little more difficult. Rolling it all into a functional home-made blog… that will take some time. I think I can do it, though. I don’t see why not, after all.

    There are other alternatives out there. I will probably blow away Rails eventually and try something like Merb, but for now I’m still experimenting with Rails. At this point I believe my biggest challenge is the database for the backend: setting up all the necessary tables, columns, links, keys, etc. There is a lot of thought that goes into a working production-level database. It’s one thing to build these things for a grade in school, but it’s another when it means something to you.
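    Just to make the exercise concrete, here is a minimal blog schema; this is my own guess at the tables involved (the column names are illustrative), and I'm using SQLite here rather than MySQL for brevity:

    ```shell
    # create a minimal blog schema in a throwaway SQLite database
    rm -f blog.db
    sqlite3 blog.db <<'SQL'
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        body TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE comments (
        id INTEGER PRIMARY KEY,
        post_id INTEGER NOT NULL REFERENCES posts(id),
        author TEXT,
        body TEXT NOT NULL
    );
    SQL
    ```

    A real design needs more (tags, users, indexes), but the one-to-many link from posts to comments is the core of it.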

    Enjoy the Penguins!

          

    November 9, 2008 :: West Virginia, USA  

    Jürgen Geuter

    The fundamental error of most "semantic" technologies

    Currently there's quite some buzz about semantic technologies: "The semantic web", "the semantic desktop", "semantic semantics of ontologies" (yeah I invented that last one, but it sounds bullshitty enough to be true).

    "Semantic" is a term used often to indicate the next step of technology, the step where we build agents, in software or hardware, that understand something and don't just rely on clever "guessing" programmed into them by people. Brave new world.

    But first, we should look at what "semantics" actually means. I'm quoting the Wikipedia entry on Semantics just as a first foundation:
    "Semantics is the study of meaning in communication."


    Semantics deals with what something actually means in a given context. Part of this is the denotation, the literal meaning: when somebody talks about a "triangle", in the literal meaning of the word he is talking about a geometrical figure. But in some contexts he might not mean that: he might be talking about a romantic relationship that involves three people (we have probably all been there). That is another part of semantics: analyzing all the meanings that go far beyond the literal meaning of the word.

    Semantics is, as you will probably have already noticed, a very complex topic because it deals with human language: To know what somebody is talking about you need to know a lot about his/her personal language, is he using a certain word in weird meanings sometimes? Is he often sarcastic/ironic?

    This makes things messy, messier than most technologists want them: The need for smarter agents is obvious so people started working on compromises (even though some obviously never realized how much they were compromising).

    Let's take another few steps back, back in time, around 2400 years. The now famous Aristotle worked in Greece and did some work that would still be relevant many hundreds of years later. Aristotle had a clear concept of the world: he thought that you could take all the things in the world, physical or mental, and put them in a certain order. This order was not random but based on the essence of things: the essence of a thing is what separates it from all the other things in the world. In most modern people's worldview you could say that the essence of human beings is that they are the only intelligent life form; this is what separates them from all the other things in the world: other things live, other things have two legs or two eyes, but none is really intelligent.

    Aristotle thought that by taking all things and just separating the sets you could create a hierarchy of things: first separate the physical from the mental things, then look at the physical realm and separate the living things from the dead things. Repeat that until you have all things in their own little category based on their essence. You can see something like that in the classification biologists use to structure the animal kingdom.

    That's all very Aristotelian, based on the idea that things have an essence that defines them from within; essence as something that exists within the object, inseparably connected to it. And that is the state of mind current semantic projects live in.

    You see them modelling things by talking about an object's properties in (in the case of the posterchild of that ancient train of thought, RDF) triples of "subject", "predicate" and "object": "The sky has the color blue" attributes a certain property to the subject: the subject (sky), the property (color) and the actual value (blue). This looks very natural to us, and for very simple sentences and objects it might in fact work. The sky is blue. The ocean is blue, too. The world is saved and can easily be pushed into simple classifications. The "theory of everything" is always just months away.
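    In RDF's Turtle syntax such a triple looks like this (the prefix and the property names are my own made-up examples):

    ```turtle
    @prefix ex: <http://example.org/> .

    ex:sky  ex:hasColor  ex:blue .
    ```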

    The problem is just that there's always a spoilsport and in this case he came to the party in the first half of the 20th century. His name was Ludwig Wittgenstein and he made it clear that the idea of organizing the world by the essences of things was maybe a nice naive idea but didn't work, not even for very basic and simple things. I'm quoting from his book "Philosophical Investigations":
    "Consider for example the proceedings that we call "games". I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all? -- Don't say: "There must be something common, or they would not be called 'games' "-but look and see whether there is anything common to all. -- For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that. To repeat: don't think, but look! -- Look for example at board-games, with their multifarious relationships.
    Now pass to card-games; here you find many correspondences with the first group, but many common features drop out, and others appear.
    [...]Are they all 'amusing'? Compare chess with noughts and crosses. Or is there always winning and losing, or competition between players? Think of patience. In ball games there is winning and losing; but when a child throws his ball at the wall and catches it again, this feature has disappeared.[...]
    And we can go through the many, many other groups of games in the same way; can see how similarities crop up and disappear. And the result of this examination is: we see a complicated network of similarities overlapping and criss-crossing: sometimes overall similarities."


    We cannot even give the essence of the simple word "game", how can we think that we are able to structure the world that way? Wittgenstein's book deals with so called "language-games" which basically means: One word means one thing in one context and in others it means something completely different. "Water!" can be an order, it can be the answer to a question, an exclamation or something else. The word itself has no meaning at all when it's taken out of its language-game.

    But that is what many semantic projects try: define sets of properties to structure the world by, define the properties that you need to divide things by their essence. It's the same mistake that leads to many really bad object-oriented designs: people believe that they just need to understand a certain problem field, abstract some common classes to group subclasses, and define the hierarchy of things, and then they have a good model of the part of the world their software deals with. Some languages even encourage that train of thought by forcing you to define exactly that kind of hierarchy because of language deficits (Java with its lack of multiple inheritance comes to mind).

    The idea that there is a "right" or "objective" representation of things, that you can define a thing by its objective properties, was very new and exciting in 400 BC when Aristotle worked with that thesis, but 2400 years later we (should) know more.

    In fact let's look at the real world for a change (finally away from all those "old" philosophers!): we write about US presidents, and if we look back just a few years we see two who are related: George H.W. Bush, 41st president of the US, and George W. Bush, 43rd president of the US.

    Those two people are related, one is the father of the other. The simplistic "semantic" way is to define a property: "father" in George W. Bush and link those two objects that way. Yay, we have a semantic model of how things are! Not even close. Every father who reads this and every son or daughter will tell you that the relationship between a father and his kids is way more complex than that simple link. And what's a father anyways? Do we just rely on biology? What's with adopted kids and (angry look at California here) what's with kids adopted by a gay couple? Modeling "father" that simple might "work" for simple examples but calling it "semantic" is pretty much a blatant lie: That simple property has nothing to do with what "being the father of" means.

    Properties in objects don't model a lot, the semantics of things are very personal, they are largely based on the relationships between things and those relationships themselves, while invisible and often just in our heads, are things. The simple relationship "is father of" means something completely different to each one of us, depending on our experiences and our life, and those meanings are the meanings that are of real interest to us. Why do we look at who somebody's father is? Do we care so much about genes? No but we have a very deep connection to the relationship "having a father" and that connection projected on other father-kid relations helps us structure the world. Even though our view on that relationship is probably nothing like the view that the father and kid that we look at have on the same relationship.

    Another final example: Let's get back to the start of this way too long post, let's look at triangles. What's a triangle? Some will say: It's a geometrical figure defined by three points in space that are not on one line. True. So how do we model this with properties? We take three points. Hmm. Now we take a Triangle and connect the points to it. This models it, right? Wrong. The triangle exists because of the three points, and exactly those three points. In the naive model you can take a point out and there's still a triangle which is wrong (if you write software this is where you hack all kinds of checks and exceptions together to enforce something that your wrong modelling broke in the first place). The triangle is an object but it is also a relation: It is a relation between three different objects, three points. And that relation is not just some weird "link", it has new properties: It has interior angles and if you add them up they will be 180°, it has an area, but if you take one point away, even just for one second, the whole thing is gone: Poof! The thing does not exist and waits till you put the point back because you want to put another point in. The thing is the relation and the thing is not just some link that you model in properties.

    This is the fundamental error of most of our "semantic" projects, be it the "semantic web" or the "semantic desktop": the focus on properties, as if they were able to model anything interesting. What we end up with is a set of magical properties that some authority defines, and the task of putting all the things of the world into them. Because that approach has worked so well up to now. Look at a simple example, MP3 ID3 tags: a set of properties that is supposed to model everything within a certain domain. And did it work? Obviously not. Look at Last.fm and how often you read "There are multiple artists with the name X". How do you model that one song can be on many different albums if you have just one "album" property?
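    The album problem is easy to demonstrate. A hedged sketch (the data structures are my own, not from any real tagging format): a single "album" property holds one value per song, so a second album silently overwrites the first, while modelling the song-album relation as its own set of pairs keeps every appearance.

    ```python
    # One property slot per song: assigning a second album destroys the first.
    flat = {"One": "Greatest Hits"}
    flat["One"] = "Live in Berlin"          # "Greatest Hits" is gone

    # The relation as a first-class thing: a set of (song, album) pairs.
    appearances = {
        ("One", "Greatest Hits"),
        ("One", "Live in Berlin"),
    }

    def albums_of(song):
        """All albums a song appears on, read off the relation."""
        return {album for (s, album) in appearances if s == song}

    print(flat["One"])        # only the last album survives
    print(albums_of("One"))   # both albums survive
    ```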

    The wave of "semantic" projects that we see is trying to build the next step while relying on a mental model of the world that is 2400 years old and has been shown to not be very good. It all comes down to one thing that you automatically realise when you study philosophy and computer science: all the problems that we think are new in this whole virtual world/internet context, and that we think are so next-gen, were already talked about in philosophy at the end of the 19th and the beginning of the 20th century. But if there's one thing computer science and its people are guilty of, it's the not-invented-here syndrome.

    (Note: Since people sometimes ask me how long these overly long posts take to write, this one took 58 minutes)

    November 9, 2008 :: Germany  

    Clete Blackwell

    Mozilla’s Firefox Attains 20% Usage

    During the first and last week of October, Mozilla’s Firefox browser accounted for 20% of the total pages requested on the internet. Of course, these numbers are not entirely accurate, as there is a margin of error.

    This is the first time since Firefox’s initial release (0.1) in 2002 that Firefox has surpassed the 20% milestone in either a weekly period or a monthly period. In the current week, Firefox is averaging between 19% and 22% usage. This is great news for the Mozilla team. Finally, Internet Explorer is beginning to be conquered.

    Firefox has a strong following among those who work in the computing industry. Most Linux users use Firefox, as do many enthusiasts on Windows and Mac OS X. The Firefox trend began with enthusiasts and is spreading to the end user.

    Read the blog post at Mozilla or see the numbers for yourself.

    November 9, 2008

    Ow Mun Heng

    Speakers LineUp @ Foss.MY



    This is the small version

    The LARGE version is a bit "sad" as it distorts Toru's Face.. (sorry Toru!!)

    this was autostitched from 8 pictures..



    November 9, 2008

    November 8, 2008

    Jürgen Geuter

    Technological progress is not about improving things: Glossy displays

    When most people talk about technological progress they see it as a long line of improvements: older technology gets replaced by better technology, and then the old stuff gets thrown out. But in fact that's not what happens, and it's kinda important to realize what does.

    Current example: Glossy displays.

    If you buy a notebook today you'll have trouble finding one without a glossy display. Displays used to be "unglossy" because a matte finish ensures you can actually read stuff on the display when you are ... you know, somewhere out of your full control. In your office you can set everything up in a way that makes sure you won't have trouble reading, but when you sit in a train or a coffee shop, you can't say what your setup will be like. Now, especially with people getting more mobile and many people carrying computers around, the naive thought is that the type of display that allows you to actually see things would become more widespread.

    Well that's not how it is. Glossy displays look good in a store. You see the background artwork and the colors might have some extra brilliance. Ok, that's of no use when you really want to fix a bug in your code and all you see is a reflection of your face, but it looks so shiny!

    "Glossy" is the thing these days. Laptops and other devices have to have that glossy look for some reason, and even some desktop environments have jumped on the bandwagon: KDE4 tries to look as glossy as it can. The good thing for KDE4 is that you don't see fingerprints on the widgets the way you do on the Nintendo DS Lite, for example, or my EeePC. Don't get me wrong, I love both devices to pieces, but why anybody would build a hull for something from glossy plastic is beyond me: it looks good as long as you don't use or touch it. And that's what I want to spend my money on, right? Things I can't touch.

    The whole glossy fad has to die soon. I get that it makes prettier advertising photos but after two uses everything just looks filthy.

    November 8, 2008 :: Germany  

    Niel Anthony Acuna

    let’s learn powerpc linux!

    i’m actually using gcc4 and noticed that there are minor deviations
    from the usual prologue/epilogue sequences since gcc3 when doing intel. i
    have to read up on this.

    so let’s try to follow a simple “hello world” program to see how gcc makes ppc
    programs. it’s compiled with no optimizations.

    #include <stdio.h>

    int procedure1(const char *string)
    {
            printf(string);
            return 0;
    }

    int main(int argc, char *argv[])
    {
            return procedure1("hello world\n");
    }

    the V4 stack frame.

    	SP---->	+---------------------------------------+
    		| back chain to caller			| 0
    		+---------------------------------------+
    		| saved LR				| 4
    		+---------------------------------------+
    		| Parameter save area (P)		| 8
    		+---------------------------------------+
    		| Alloca space (A)			| 8+P
    		+---------------------------------------+
    		| Local variable space (L)		| 8+P+A
    		+---------------------------------------+
    		| saved CR (C)				| 8+P+A+L
    		+---------------------------------------+
    		| Save area for GP registers (G)	| 8+P+A+L+C
    		+---------------------------------------+
    		| Save area for FP registers (F)	| 8+P+A+L+C+G
    		+---------------------------------------+
    	old SP->| back chain to caller's caller		|
    		+---------------------------------------+
    

    the V4 Registers

    r0 volatile, may be used by function linkage
    r1 stack pointer
    r2 reserved for system
    r3 .. r4 volatile, pass 1st - 2nd int args, return 1st - 2nd ints
    r5 .. r10 volatile, pass 3rd - 8th int args
    r11 .. r12 volatile, may be used by function linkage
    r13 small data area pointer
    r14 .. r31 saved
    f0 volatile
    f1 volatile, pass 1st float arg, return 1st float
    f2 .. f8 volatile, pass 2nd - 8th float args
    f9 .. f13 volatile
    f14 .. f30 saved
    f31 saved, static chain if needed.
    lr volatile, return address
    ctr volatile
    xer volatile
    fpscr volatile*
    cr0 volatile
    cr1 volatile**
    cr2 .. cr4 saved
    cr5 .. cr7 volatile

    * The VE, OE, UE, ZE, XE, NI, and RN (rounding mode) bits of the FPSCR may be
    changed only by a called function such as fpsetround that has the documented
    effect of changing them, the rest of the FPSCR is volatile.

    ** Bit 6 of the CR (CR1 floating point invalid exception bit) is set to 1 if a
    variable argument function is passed floating point arguments in registers.

    the PPC architecture does not have push/pop instructions that implicitly
    operate on the stack. as such, stack management can be a little more “hands
    on” on PPC compared to intel. the stack frame convention above is defined
    to support parameter passing, preservation of reserved (nonvolatile)
    registers, and local variables. each function that either calls another
    function or modifies a saved register must create a stack frame in memory
    set aside for use as a stack, addressed by the r1 register (r1 = sp).

    the stwu instruction ensures that stack frame allocation is atomic: stack
    space is allocated and the sp (r1) is updated in just one instruction.

    parameter passing. ppc passes function parameters through registers rather
    than pushing them all onto the stack.

    <procedure1>
    stwu    r1,-32(r1)  ; save former and allocate a new stack frame
    mflr    r0          ; return addr of procedure1()
    stw     r31,28(r1)  ; save r31
    stw     r0,36(r1)   ; save r0
    mr      r31,r1      ; mirror sp
    stw     r3,8(r31)   ; save char * to stack
    lwz     r3,8(r31)   ; argument to printf
    bl      10010a00    ; call printf
    
    li      r0,0        ; r0 = 0
    mr      r3,r0       ; r3 = 0 = return value
    lwz     r11,0(r1)   ; access back chain
    lwz     r0,4(r11)   ; access return address
    mtlr    r0          ; update link register with return addr of procedure1()
    lwz     r31,-4(r11) ; restore r31
    mr      r1,r11      ; restore caller’s stack
    blr                 ; return to caller
    	
    <main>
    stwu    r1,-32(r1)      ; save former frame and allocate a new stack frame
    mflr    r0              ; return address of main()
    stw     r31,28(r1)      ; save r31
    stw     r0,36(r1)       ; save link register
    mr      r31,r1          ; mirror sp
    stw     r3,8(r31)       ; save r3
    stw     r4,12(r31)      ; save r4
    lis     r9,4096         ; upper 16 bits of string addr
    addi    r3,r9,2176      ; lower 16 bits of string addr
    bl      10000434        ; call procedure1(char *)
    mr      r0,r3           ; r0 = return val of procedure1()
    mr      r3,r0           ; r3 = 0 (retval)
    lwz     r11,0(r1)       ; access back chain
    lwz     r0,4(r11)       ; access return address
    mtlr    r0              ; update link register
    lwz     r31,-4(r11)     ; restore r31
    mr      r1,r11          ; restore stack frame
    blr                     ; leave main()
    

    next time, we’ll delve deeper and see how to access system calls directly
    and how these system calls are implemented.

    November 8, 2008 :: Zamboanga, Philippines  

    Dan Ballard

    Firefox and the new Hotmail

    I don't know whom to be annoyed at, actually. Microsoft just upgraded their Hotmail interface, and Hotmail is about the last MS product I use (that and the MSN server, though I obviously don't use their client). Anyway, suddenly Hotmail wasn't working for me, which was a rather large pain. I searched around and found the solution.

    In Firefox, go to "about:config", type "vendor" into the filter, and hit Enter. Change general.useragent.vendor from "Ubuntu" to "Firefox".

    Then it works fine. It's annoying that MS is still doing such browser-specific tweaking and that it breaks on something so small, but then again, they do seem to at least support vanilla Firefox. I don't know how important the vendor string is, but if it's breaking compatibility maybe Canonical should leave it alone?

    November 8, 2008 :: British Columbia, Canada  

    November 7, 2008

    Jürgen Geuter

    Optgroups with Django's forms module

    I recently needed "optgroups" in a select widget, and looking through Django's sources I found that the Select widget Django uses to render ChoiceFields supports optgroups out of the box. The documentation was not as clear as I would have liked, so here's a short howto: "How to get optgroups in your SELECT input field with Django".

    Take this example:


    The code for the form looks like this:
    import django.forms as forms

    class ExampleForm(forms.Form):
        items = [
            ('Animals', (('1', 'Monkey'), ('2', 'Turtle'))),
            ('Aliens', (('3', 'Zim'), ('4', 'Tak'))),
        ]
        select = forms.ChoiceField(label="Selection", choices=items)
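    The idea behind the nested choices is simple enough to show without Django at all. A rough, framework-free sketch of what the Select widget does with such a structure (`render_select` is my own illustrative helper, not Django API): a `(label, sub-choices)` pair becomes an `<optgroup>`, a plain `(value, label)` pair becomes an `<option>`.

    ```python
    def render_select(choices):
        """Render nested choice tuples roughly the way a select widget would."""
        out = ["<select>"]
        for value, label in choices:
            if isinstance(label, (list, tuple)):
                # Nested sub-choices: the first element is the group label.
                out.append('<optgroup label="%s">' % value)
                for v, l in label:
                    out.append('<option value="%s">%s</option>' % (v, l))
                out.append("</optgroup>")
            else:
                out.append('<option value="%s">%s</option>' % (value, label))
        out.append("</select>")
        return "\n".join(out)

    items = [
        ('Animals', (('1', 'Monkey'), ('2', 'Turtle'))),
        ('Aliens', (('3', 'Zim'), ('4', 'Tak'))),
    ]
    print(render_select(items))
    ```

    Running this prints a select box with an "Animals" and an "Aliens" optgroup, which is exactly the grouping the ChoiceField above produces in the browser.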

    November 7, 2008 :: Germany  

    Roeland Douma

    Javascript Warning Boxes…

    I’m pretty sure you all know what I’m talking about. You are at some site and you fill in a form; it doesn’t matter what it is for. You submit it (everything you entered is OK) and a warning box pops up telling you that everything has been sent.

    First of all, why is this a warning? I mean, I know I pressed Send. And besides, if it really were a warning they should provide me with contact information on how to get my submission removed.

    Now, apart from it being a warning box, it could at least be a message telling me everything was inserted into the database. Let’s say they have some JavaScript in the background that does the insert and, once everything is in, shows the warning (still the wrong box, but OK). However, I checked the source of the site and guess what? All the button does is generate the pop-up, after which the stuff is submitted.

    Now I’m wondering why people do this. It does not look good, since it is the wrong kind of pop-up box. It does not speed up the process. It wastes my time, which in general I do not like.

    November 7, 2008 :: The Netherlands