Planet Larry

October 01, 2008

Matt Harrison

zipwrap - a unixy python interface for zipfiles

Sometimes you need to play with zipfiles, jars, wars, or odt files. So I've written an interface that allows for easy creation of zip files from existing zips or directories. I think the "unixy" interface (i.e. cat, touch, rm, mkdir) is simple and easy to use.

October 01, 2008 09:54 PM :: Utah, USA  

Steve Dibb

planet larry google search

Planet Larry has kept archives of posts for about five months now, but I just barely added a Google search form to the side nav so you can … uh, search.

It would be nice to have the archived pages put the dates in the title tag, but looking through the Venus code I couldn’t really see a quick way to do that, so meh.  I could probably just write a quick sed line to fix it when it's archived, I suppose.  I’ll look at it later.  Or let someone else figure it out for me. :)
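
Something along these lines would probably do it (completely untested; the archive filename and title text here are just guesses):

sed -i 's|<title>Planet Larry</title>|<title>Planet Larry :: 2008-10-01</title>|' archive/2008-10-01.html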

Oh, also, just a reminder, Planet Gentoo stores archives as well, though there’s no index page or search index yet.  Another item on my long to do list.

October 01, 2008 06:33 PM :: Utah, USA  

Matija Šuklje

Gentoo not cutting edge anymore

It is no great secret that Gentoo's official Portage tree does not include nearly as many cutting edge ebuilds as it used to.

People have diverse theories about why this is so — from too few developers to even some bizarre conspiracy theories.

Mine is pretty simple: overlays. And I will try to explain why I think so.

In the olden days Gentoo had only one Portage tree, and its users were happy to have a centralised tree that held all the "packages" regardless of their origin, licenses or other factors that would, in other distributions (most notably Debian), lead to exclusion from the official tree and relegation to one of the extra repositories.

In fact, not having to deal with repositories and having bleeding edge versions available used to be one of the advantages of Gentoo.

Portage's slots, keywords, and the ability to mask and unmask ebuilds are all perfect tools for mixing the stable and testing/unstable branches in one system — so there is no reason for overlays to exist just to keep the official tree stable. If keeping non-perfect ebuilds out of the official tree is a matter of quality control, it is always possible to hard-mask them (if the testing keyword is not enough).
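
For example, pulling a single package from the testing branch while keeping the rest of the system stable takes only a couple of lines in /etc/portage (the package name here is made up):

# /etc/portage/package.keywords -- take one package from ~arch
app-misc/frobnicator ~x86

# /etc/portage/package.mask -- hard-mask a known-broken version
=app-misc/frobnicator-1.2.3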

Paludis is an even better example, since the user can also mask ebuilds according to their license — which is very close to what Debian does with its policy on what can enter the official repository and what goes into the others.

What all these overlays do is keep ebuilds (even if just for some time) from reaching the official tree — as they would have in the past. We should also take into account that maintaining dozens of overlays costs precious time that could otherwise be spent maintaining the official tree.

The bottom line is that I see no reason why so many overlays are needed, and in fact feel that they make my life more miserable than when there were none and I did not have to spend time searching the internet for overlays just to get an ebuild.

October 01, 2008 03:06 PM :: Slovenia  

Ciaran McCreesh

EAPI 2: doman language support


This is the final post in a series on EAPI 2.

The doman helper is one of those pesky little beasts that makes specifying EAPI behaviour formally such a nuisance (although it is nowhere near as horrible as dohtml). EAPI 2 makes it even peskier.

I’ll try that again.

The doman helper makes writing ebuilds substantially easier by automagically doing the right thing when installing manual pages, freeing the developer from having to care about manual sections. EAPI 2 makes doman even more useful by making it aware of language codes as well as sections.

The painful details are available in PMS, but basically this will now ‘do the right thing’:

doman foo.1 foo.en.1 foo.en_GB.1

Previously only the first of the items would go to the right place.
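
Roughly — the precise rules are in PMS — the three files should now end up as:

/usr/share/man/man1/foo.1
/usr/share/man/en/man1/foo.1
/usr/share/man/en_GB/man1/foo.1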

This one’s a Gentoo innovation; see bug 222439 for its history. It was shamelessly stolen for exheres-0, but was too late for kdebuild-1.

   Tagged: eapi, eapi 2, ebuild, gentoo, paludis   

October 01, 2008 10:10 AM

Christoph Bauer

Patch me gently

Have you ever asked yourself what a patch exactly is? Well - let’s take a closer look at the thing your mother used to fix your trousers with - a patch. That’s quite close, as it fixes a hole. But patches in the IT field can be even more than just a fix. If you want to see a really huge patch that improves the kernel, you might want to look at the latest kernel snapshot - it’s more or less a recipe for changing the code, the form developers use to save bandwidth, since they don’t need to transfer the unchanged files.

But that’s the theory. Let’s get down to the shell and have a look at a small utility named ‘diff’. Diff, as the name suggests, shows differences between files, so you can see the changes easily. But if you compare its plain output to the patch from above, they look quite different. So let’s run it with some parameters:

diff -Naur file.old file.new > update.patch

Et voilà - the file we get now looks like a patch should. Diff can compare files or even whole directories to make a patch. If you are a developer, it even works with revision control.
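
For two hypothetical one-line files, the update.patch we just made would look roughly like this:

--- file.old    2008-09-30 12:00:00.000000000 +0200
+++ file.new    2008-10-01 12:00:00.000000000 +0200
@@ -1 +1 @@
-the old line
+the new line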

Well… enough about patch generation. We’ve got our patch now. But how do we use it? Let’s look at the other side - we have the old file and our patch.

The -p parameter tells patch how many leading path components to strip from the file names inside the patch. Our patch names the files directly, so -p0 does the job; for patches made between two directories (like kernel patches with their a/ and b/ prefixes), -p1 is the way to go:

patch -p0 < update.patch
patch -p1 < update.patch

The output should look like this:

patching file file.old

If everything worked, file.old and file.new should now be identical, as our patch updated file.old to the new version.
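
A quick sanity check - if patch did its job, this prints nothing:

diff file.old file.new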



October 01, 2008 05:55 AM :: Vorarlberg, Austria  

September 30, 2008

Dirk R. Gently

Mac Security


I’ve decided to lay down my arms with Mac security. I bought a new MacBook a year and a half ago and it was a thing of beauty - Apple does make good hardware. But I eventually began to learn that there were security problems. Those who read this blog know how I wrote about the horrible security leaks on my computer and how I was forced to sell my new MacBook because it was a huge security risk. I let off most of the heat back in December. For months prior, unbeknownst to me, a hacker had gotten into my computer and was able to see everything I was doing. Even worse, the attacker had also gained access to the built-in camera and was viewing me point blank. I sent the laptop in to Apple and spoke to Apple support. Apple tech support was friendly, but after an hour talking to them I got no help in fixing the problem. I sent the laptop in only to have it returned with the same problems, and sold it quickly thereafter.

Unfortunately, not long after selling the laptop I discovered how it had been accessed, and a way to prevent this from happening in the future. To prevent this from happening to other Mac users, here’s what I learned about protecting your Mac (some experts even say it’s a necessity).

First, boot your Mac into the Open Firmware prompt. This can be done by holding Apple-Option-O-F for a few seconds after pressing the power button. When you see the white screen, you’re in the Open Firmware shell. Now reset the Open Firmware:

reset-nvram
set-defaults
reset-all

Now a password needs to be set. There are three types of password protection: none, command, and full.

  • None - all commands allowed without password
  • Command - “go” and “boot” (no arguments) allowed
  • Full - password required before any command is entered.

I prefer to use full security, but "command" security should prevent this kind of attack too.

Enable a password and set security type:

password
setenv security-mode full
reset-all

Now reboot and the password prompt will appear. Type in the password and then start Linux by typing:

boot

Note: I have to type boot twice to get into Linux due to a bug in yaboot-1.3.14 and older.

Later, if you want to disable security, just set password protection back to "none":

setenv security-mode none
reset-all

Apple is a great company that is doing a lot to help the computer business. I’m still nettled a bit (I just expect products to work), but they were professional through the whole of it. I’m glad to see them do well, and will probably buy another Mac someday. I still think this is a security problem that needs to be addressed, but I do believe they are a company that is interested in high-quality products.

Hope this helps get your Mac nice and safe! Have a good day.

      

September 30, 2008 09:26 PM :: WI, USA  

Jason Jones

The Geek In Me

Ya know...  I'm not the best coder in the world, and I'm not the geekiest of geeks, either.  I personally know quite a few people who would classify me as a "mutt coder", because I really don't care too much about code purity.

I'm not quite sure what in me makes me that way, but it's true.  Maybe it's because I learned programming backwards.  Where most people learn the hard stuff first, such as assembler and C and COBOL, I learned HTML first.  I learned everything there was to learn about front-end web development, including image manipulation, HTML, CSS, DHTML, JavaScript, ActionScript, etc...

It was only when I realized that being a front-end developer was a dead-end street, especially for someone who couldn't design worth a hoot, that I looked into server-side scripting.  It was then that I found PHP and the power of relational databases such as MySQL and PostgreSQL.

Through my career as a web application developer, I've created my fair share of applications, web sites, pages, snippets, and have quite the arsenal of code in my toolbelt, yet I don't find myself too terribly concerned with the purity of my code.  I really do "leave well enough alone".

Now, before you all toss me away as some ego-driven script kiddie who can do nothing but copy and paste, I must say that yes, I need my code to be maintainable, and I won't code now without using an MVC framework of some sort.  But honestly, for me to go through my code every few months and make sure it's pure, and as golden as the sun...   ... sorry, that just doesn't happen.

I think that because of this, my programming skills might not be up where they otherwise could be, but I sacrifice this for one thing that I believe has given me the ability to rise up as a programmer as fast as I have.  That thing is what gives me the most pride in my work.   When asked what the most satisfying part of my job is, I simply answer, "making people happy".

My whole development career has been spent working on internal teams, creating applications for my fellow employees.  Luckily, I've been blessed to work for appreciative and understanding folks, and in return, making them happy is what keeps me going.

A few days ago, my boss came to me and asked me to build a ticket-tracking program.

I know there are loads of programs out there that do an excellent job of doing just that, and after relaying that to my boss, he told me he has tried 5 separate times to implement canned ticket trackers for the company.

All 5 times it failed miserably.  They were too hard to use, too confusing, too technical, or some other problem keeping the company from using it.  It's hard to compete with walking up to the IT team, or placing a quick phone call.

Anyway...  I spent 3 days on a system and we implemented it yesterday.  Focusing on absolute ease-of-use, I believe I was able to create a system people will actually use.

My boss came to me this afternoon and told me that more people are using this one than any of the other 5 previous canned systems.  I guess that's a good thing.

The tracker consists of 3 screens, and gives the users no more options than they absolutely need.  It also doesn't require any passwords, and uses persistent cookies to remember who is logged in.

So....  Not entirely sure why I'm writing this, other than I feel like it, but...  Thanks for reading.

I guess I just really like it when my boss comes in to my office, sits down, and says, "Ya know, you're making my job a heckuva lot easier.  Thanks."

Here are some screenshots of the ticket tracker I built:





September 30, 2008 03:06 PM :: Utah, USA  

Ciaran McCreesh

EAPI 2: default_ phase functions and the default function


This post is part of a series on EAPI 2.

With EAPIs 0 and 1, if you want to add something to, say, src_unpack, you have to manually write out the default implementation and then add your code. This is easy to screw up — developers are highly prone to getting the quoting wrong and forgetting which functions do and do not want a || die on the end.

EAPI 2 makes the default implementation of phase functions available as functions themselves. These functions are named default_src_unpack, default_src_configure and so on.

Typing out default_src_compile in full is pointless, though (especially since it’s illegal to call phase functions or default phase functions from other phase functions). So we also introduce the special default function, which calls whichever default_ phase function is appropriate for the phase we’re in. Thus:

src_compile() {
    default
    if useq extras ; then
        emake extras || die "splat"
    fi
}

Both features first appeared in exheres-0.

An alternative proposal (I think it came from the Pkgcore camp) was to make all EAPI default implementations available through functions named like eapi0_src_compile, eapi1_src_compile and eapi2_src_compile. This proposal was rejected because various Paludis people moaned about it not making sense or having any legitimate use cases (the ‘obvious’ use cases don’t work if you think them through), and no-one stood up to defend it.

   Tagged: eapi, eapi 2, ebuild, gentoo, paludis   

September 30, 2008 10:15 AM

EAPI 2: src_configure and src_compile


This is post five in a series describing EAPI 2.

EAPI 2 splits src_compile into src_configure and src_compile. Like src_prepare, it’s mostly a convenience thing to reduce copying default implementations, although in this case it also makes it easier to hook in code in between configure and make being run.

The default src_configure implementation behaves like this:

src_configure() {
    if [[ -x ${ECONF_SOURCE:-.}/configure ]]; then
        econf
    fi
}

This is the first half of EAPI 1’s src_compile, not the non-ECONF_SOURCE-aware EAPI 0 version.

The default src_compile implementation is reduced accordingly:

src_compile() {
    if [[ -f Makefile ]] || [[ -f GNUmakefile ]] || [[ -f makefile ]]; then
        emake || die "emake failed"
    fi
}
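
In practice, hooking in between the two stages looks something like this (a made-up ebuild fragment):

src_compile() {
    # runs after src_configure; adjust what configure produced before make
    sed -i -e 's:-O3:-O2:' Makefile || die "sed failed"
    default
}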

The split configure / compile setup was first used in exheres-0, which uses more elaborate default implementations. Like src_prepare, it was considered but rejected for kdebuild-1 because of eclass difficulties.

   Tagged: eapi, eapi 2, ebuild, gentoo, paludis   

September 30, 2008 10:10 AM

September 29, 2008

Dieter Plaetinck

I'm done with Gnome/Gconf

I'm managing my ~ in svn but using gnome & gconf makes this rather hard.
They mangle cache data together with user data and user preferences and spread that mix over several directories in your home (.gconf, .gnome2 etc).
The .gconf directory is the worst. This is where many applications store all their stuff: user preferences, but also various %gconf.xml files, which seem to be updated automatically every time 'something' happens. They keep track of timestamps for various events, such as when you press numlock or become available on Pidgin.
I'm fine with the fact that they do that. I'm sure it enables them to provide some additional functionality. But they need to do it in clearly separated places (such as xdg's $XDG_CACHE_HOME directory).
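
A blunt stopgap - at the cost of not versioning the preferences either - is to tell svn to ignore the offending directories altogether:

cd ~
svn propset svn:ignore '.gconf
.gconfd
.gnome2' .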


September 29, 2008 08:01 PM :: Belgium  

Roy Marples

Looking for Logos!

No, no, not the company I work for :P

I'm looking for someone with artistic skill to create logos for OpenRC, dhcpcd and openresolv. Ideally the OpenRC logo should also be done as ASCII art with a max size of 15 rows, for a curses splash plugin with a progress bar that I'm working on.

Email me your submissions at roy@marples.name and I'll pick the winning logo!
There are no prizes - other than cookies!

September 29, 2008 11:33 AM

Ciaran McCreesh

EAPI 2: src_prepare


This is post four in a series describing EAPI 2.

EAPI 2 has a new phase function called src_prepare. It is called after src_unpack, and can be used to apply patches, do sed voodoo and so on. The default implementation does nothing.

This function is purely for convenience. It gets rather tedious copying out the default implementation of src_unpack just to add a patch in somewhere.
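
A typical use looks something like this (a made-up fragment; epatch assumes the eutils eclass is inherited):

src_prepare() {
    epatch "${FILESDIR}/${P}-gcc43.patch"
    sed -i -e 's/-Werror//g' Makefile.in || die "sed failed"
}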

src_prepare was first introduced in exheres-0 (which has a more elaborate default implementation). It was considered but rejected for kdebuild-1 because making best use of it requires eclass awareness, and the packages using kdebuild-1 had to share eclasses with the main Gentoo tree.

   Tagged: eapi, eapi 2, ebuild, gentoo, paludis   

September 29, 2008 10:02 AM

EAPI 2: !! Blockers


This is part three of a series of posts describing EAPI 2.

Blockers are a nuisance for end users. It’s rarely obvious how to fix them or what they mean, and getting it wrong can leave systems unusable.

There have been various proposals on how to fix this. For exheres-0, we’re going to go with something like this:

DEPENDENCIES="
    !app-misc/superfrozbinator [[
        description = [ Can only have one frozbinator installed at once ]
        resolution = uninstall-blocked-after
        url = [ http://explain.example.org/?only-one-frozbinator ]
    ]]
    !dev-libs/icky [[
        description = [ Having icky installed breaks the build process ]
        resolution = [ manual ]
        url = [ http://explain.example.org/?myfroz-hates-icky ]
    ]]"

The user can then be presented with a list of things that would need to be uninstalled to resolve blockers, along with clear descriptions of why they need to do so. Once the user has explicitly accepted the uninstalls, the package manager could then safely perform the installs.

Unfortunately, annotations aren’t something that can be implemented for Portage any time soon. Instead, Portage has gone with a fairly horrible and dangerous semi-automatic block resolution system that sometimes removes blocked packages automatically (often screwing up the user’s system in the process). Whilst doing so, Portage changed the meaning of EAPI 0 / 1 blockers from “this must not be installed when we do the build” to “this must be uninstalled after we do the build”.

EAPI 2 introduces a new kind of blocker using double exclamation marks, like !!app-misc/other. This goes back to the old meaning of “this must not be installed when we do the build”, keeping !app-misc/other for “this must be uninstalled after we do the build”.

This does not, unfortunately, make the user any safer, but it does allow packages that really can’t have something installed at build time to say so.
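
In an ebuild's dependencies the two kinds can sit side by side; here dev-libs/icky must be absent while we build, whereas app-misc/other merely has to be gone afterwards:

DEPEND="!!dev-libs/icky"
RDEPEND="!app-misc/other"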

   Tagged: eapi, eapi 2, ebuild, gentoo, paludis   

September 29, 2008 10:01 AM

Christoph Bauer

Login via Bluethooth

If you're going to go over the top with something, you should do it properly instead of settling for some tiny hack. What about logging into your computer using a mobile phone? Sounds cool, huh? So let’s get to work.

As the title already says, you will need a Bluetooth-enabled phone, a computer with a compatible Bluetooth dongle, the common Bluetooth programs and finally a kernel with the correct modules. If you’ve already got those prerequisites, you can check your setup using the hcitool scan command. If things are working and your phone is ‘detectable’, the output should look like this:


user@example:~> hcitool scan
Scanning …
00:0E:07:BF:B4:C4 Z1010
00:04:61:81:5C:6B ubuntu-0

So the computer is able to ’see’ the device. Now it’s time to install the pam_blue module. Usually there should be a package available for your distribution - Gentoo, for example, has one. You can always get the latest version from the programmer’s site. I won’t say much about compiling, as it usually works without problems. The configuration shouldn’t be a problem either:

general {
    timeout = 3;
}

# configuration for user stargazer
stargazer = {
    name = Z1010;
    bluemac = 00:0E:07:BF:B4:C4;
}

The config shown above allows the user ’stargazer’ to authenticate with the Bluetooth MAC from the config. If you’ve got things set up correctly, it is time to get your hands on the PAM configuration, which I have already described previously. The corresponding PAM line looks like this:

auth sufficient /lib/security/pam_blue.so
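
In context, with an ordinary password fallback after it, the auth stack might look like this (a sketch; module paths vary between distributions):

auth sufficient /lib/security/pam_blue.so
auth required   pam_unix.so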



September 29, 2008 07:15 AM :: Vorarlberg, Austria  

September 28, 2008

Ciaran McCreesh

EAPI 2: Use Dependencies


This is the second post in a series of posts describing EAPI 2.

Use dependencies have been needed for a very long time. They eliminate most of the built_with_use errors you see during pkg_setup, replacing them with an error that is seen at pretend-install time.

The first two real world trials of use dependencies were with Exherbo’s exheres-0 and Gentoo’s kdebuild-1. It became apparent that an awful lot of packages would end up with dependencies like:

blah? ( app-misc/foo[blah] ) !blah? ( app-misc/foo )
monkey? ( app-misc/foo[monkey] ) !monkey? ( app-misc/foo[-monkey] )
fnord? ( app-misc/foo ) !fnord? ( app-misc/foo[-fnord] )

Syntactically, that’s rather inconvenient. For exheres-0 and kdebuild-1, we added the following syntax:

[opt]
The flag must be enabled.
[opt=]
The flag must be enabled if the flag is enabled for the package with the dependency, or disabled otherwise.
[opt!=]
The flag must be disabled if the flag is enabled for the package with the dependency, or enabled otherwise.
[opt?]
The flag must be enabled if the flag is enabled for the package with the dependency.
[opt!?]
The flag must be enabled if the use flag is disabled for the package with the dependency.
[-opt]
The flag must be disabled.
[-opt?]
The flag must be disabled if the flag is enabled for the package with the dependency.
[-opt!?]
The flag must be disabled if the flag is disabled for the package with the dependency.

Dependencies could be combined by specifying multiple blocks, as in foo/bar[baz][monkey?].

For EAPI 2, Zac decided to go with an arbitrarily different syntax:

[opt]
The flag must be enabled.
[opt=]
The flag must be enabled if the flag is enabled for the package with the dependency, or disabled otherwise.
[!opt=]
The flag must be disabled if the flag is enabled for the package with the dependency, or enabled otherwise.
[opt?]
The flag must be enabled if the flag is enabled for the package with the dependency.
[!opt?]
The flag must be disabled if the use flag is disabled for the package with the dependency.
[-opt]
The flag must be disabled.

And to combine use dependencies, one uses a comma, as in foo/bar[baz,monkey?].

In both cases, the slot dependency must go before the use dependency, so foo/bar:1[baz], not foo/bar[baz]:1. The use dependency also goes after any version restrictions, so >=foo/bar-2.1:2[baz].

In both cases, it is illegal to reference a use flag that does not exist (including USE_EXPAND flags that are not explicitly listed in IUSE). So foo/bar[opt] when any version of foo/bar does not have opt in IUSE is illegal and has undefined behaviour, as is foo/baz[opt?] if either the owning package or foo/baz has no opt. For cases where only some versions of a package have a flag, use dependencies can be combined with version or slot restrictions.

From an implementation perspective: the package manager should not try to automatically solve unmet use dependencies. The package manager doesn’t know the impact of changing a use flag (changing some flags makes a system unbootable), so it can’t simply override the user’s choice. (Paludis will suggest an automatic reinstall if and only if the user has already modified their use.conf, so you don’t need to manually reinstall a dependency if you’re ok with altering the flags with which it is built.)

   Tagged: eapi, eapi 2, ebuild, gentoo, paludis   

September 28, 2008 06:46 PM

EAPI 2: SRC_URI Arrows


This is the first item in a series of posts describing EAPI 2.

Some upstreams use annoyingly named tarballs. Most commonly, they don’t include either the package name or the version in the filename. Because DISTDIR is a flat directory, this causes problems — the tree must not use two different tarballs with the same name. Previously, the solution to horrible upstream naming was to manually mirror the tarball with a new filename; this was considered excessively icky.

There have been two sane solutions proposed for this over time. The one we didn’t use was to define a DISTDIR_SUBDIR variable, and do all downloads into there. This would have made the A variable quite a bit messier, and complicated sharing certain tarballs between packages.

The arrows solution was something I came up with for early Paludis experimental EAPIs, and was adopted for kdebuild-1 and from there into EAPI 2; it’s also always been present in exheres-0. It works like this:

SRC_URI="http://example.com/stupid-named/1.23/stupid.tar.bz2 -> stupid-1.23.tar.bz2"

or using variables:

SRC_URI="http://example.com/stupid-named/${PV}/${PN}.tar.bz2 -> ${P}.tar.bz2"

This tells the package manager to look at the URL on the left of the arrow, but save to the filename on the right.

Mirroring effects are slightly subtle. Consider:

SRC_URI="mirror://foo/${PN}/${PV}.tar.bz2 -> ${P}.tar.bz2"

The package manager will look both on mirror://foo/ and mirror://gentoo/ for the download. When looking on foo, the raw filename must be used, but when looking on gentoo, the rewritten filename is used.

Anyone using arrows on mirror://gentoo/ URIs gets stabbed.

Arrows make another proposed but rejected EAPI feature irrelevant: there was a proposal floating around (I think it originated with drobbins, but I can’t find an original source) to make unpack ignore ;sf=tbz2 and ;sf=tgz suffixes on filenames, for interoperability with gitweb. Arrows are a more general solution.

Implementation-wise, anyone still using a lexer-based parser will need a single token of lookahead for this. Apparently this causes minor inconveniences in some broken programming languages that only support what C++ calls input iterators; I consider this a good thing, because it might make people either use a better iterator model or stop using lexers.

   Tagged: eapi, eapi 2, ebuild, gentoo, paludis   

September 28, 2008 03:18 PM

What’s in EAPI 2?


EAPI 2 has been approved by the Gentoo Council and so can now be used in ebuilds. The first package manager with support was Paludis 0.30.1; Portage support came along with 2.2_rc11.

EAPI 2 consists purely of extensions to EAPI 1. The new features are:

  • SRC_URI arrows
  • use dependencies
  • !! blockers
  • src_prepare
  • src_configure and src_compile
  • default_ phase functions and the default function
  • doman language support

Formal definitions can be found in PMS; an overview of each feature will follow in subsequent posts.

   Tagged: eapi, eapi 2, ebuild, gentoo, paludis   

September 28, 2008 03:08 PM

Jürgen Geuter

The meaning of "to like something" in the social web

In modern web services, personalisation and adjusting to the user's preferences are of huge importance. Most services allow you to "vote something up" or to "like something" as a way to express your interest in a certain object, whether it's a picture, an article or a video.

The modelled relation is simple and binary: you either like something or you don't; there's usually no way to say "I kinda like it". There's also usually no way to be specific about why you vote something up, though there are many different options: you could generally "like" the linked object, or you might "find it interesting", "recommend it" or do many other things with the linked object that made you vote it up.

Today on Friendfeed I stumbled on this:


Somebody posted a link to a story about some guy who flew to another country to kill someone over some internet argument, probably about a video game. Another person voted it up, or "liked it" in Friendfeed terms. But was "to like" really what the upvoter intended to communicate?

It's similar to our use of the word "friend" or "buddy" in online communities: On myspace every contact you have is your "friend", even though some might be your boss, your teacher or just "some guy you know".

We've gotten used to this by now and often consider it "normal" to model our relations to people and objects like this when we think about the web: It's simple to implement, simple to understand and simple to use. No need for people to think about what kind of relationship they really have to that other person: You add him, he's your friend.

But I find myself getting more and more bored by this overly simplistic approach: I want to be able to clearly define my relationships to other people, not to make them public, but to be able to filter content: Think of a huge stream of posts by people you have in your list. You might not have the nerve to dig through all of the posts now, so you just want posts of your real friends, not your boss, not some guy who you don't really know all that well.

Also, I want to be able to clearly specify why I upvote something. Let's look at the Friendfeed example: someone watching my feed might right now be in a silly rather than serious mood. He might just wanna see things I upvoted because I "found them funny", not things I found "interesting".

It's time to get rid of the overly simplistic view of relationships between people, and of people towards objects. Yes, it's simpler to implement, but it limits the usefulness of your application. It's like a car that is stuck in first gear: you are just not gonna be able to use it to its full potential and might choose a different vehicle, since it limits you too much.

(And this post gets me another "using a not really well thought-out car analogy to explain a computer or software issue" point)

September 28, 2008 10:45 AM :: Germany  

Thomas Keller

Mediawiki 1.3.0 broken in Gentoo

As I wanted to start revising for M362 today, I realized that I couldn’t access the mediawiki environment with all my notes any more! The error message was something about a missing “includes/parser.php” file (”no such file or directory”). When I googled this, I stumbled upon a corresponding Gentoo bug - apparently, the mediawiki package [...]

September 28, 2008 10:02 AM

September 27, 2008

Thomas Capricelli

Django browser for Redmine database

Do you know redmine? It is, to my knowledge, the best project manager you could ever find out there. I like to describe it as ‘trac done well‘. It has only one big, ugly, fat inconvenience for me: it is written on top of Ruby on Rails. I could tell you how slow it is (true), or how many security flaws are found every day in the ruby/rails world. But the real reason I’m actually concerned about ruby/rails is that I don’t know Ruby. When I look at the code I don’t understand anything, and I can’t change things to ’suit my needs’, as is so common in Free Software.

I wish it was written in Django.

Well, anyway, I spent some time today creating a small Django application to display stuff from the redmine database. For this, I used the wonderful “inspectdb” feature of Django, which inspects a database and creates the (Django) models required to access it.
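
(For reference, that is a one-liner; the app path is illustrative and assumes the redmine database is already configured in settings.py:)

python manage.py inspectdb > redmine/models.py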

Then I had to ‘adapt’ those models and create an admin.py file so that I could browse (and even modify) the database from Django.

Adapting meant:

  • removing all the “id” fields; they are automatically created by Django, and it seems Rails uses the very same name (”id”), so this is compatible
  • changing the “obvious” references to other models from IntegerField to ForeignKey
  • some models reference themselves; you need to use a ForeignKey to ’self’ (including the quotes) for that
  • adding some __unicode__(self) methods for the most important/obvious models

The admin work basically consisted of:

  • creating Admin objects for all models (thanks, vim macros!)
  • adding list_display / list_filter arguments for the most important ones

And the remaining problem is:

  • it seems Booleans from ruby/rails have the values 't'/'f', while Django's have 1/0

Mandatory screenshot (corresponding to the public stuff from http://labs.freehackers.org):

Example of djangoredmineadmin in use

Link to the project homepage

September 27, 2008 06:49 PM

Brian Carper

Gentoo still rules

The version of akregator I have always displays article link text in an ugly dark blue, which doesn't show up well against my dark Qt theme. I can barely read an ebuild to save my life, and the KDE ebuilds are full of all kinds of odd KDE-specific stuff, but it still took me just a couple of minutes to:

  1. Find the sources in /usr/portage/distfiles
  2. Kludgily patch akregator to use normal text color for links (underlines still distinguish them)
  3. Copy the akregator ebuild into an overlay, throw the patch in there and add one line to the ebuild to read it
  4. emerge away

Et voilà, custom-patched, package-manager-managed app. Gentoo is pretty good for this kind of thing, whatever its other shortcomings. Does any other distro make it this easy to do such things? (I'm genuinely curious.)

September 27, 2008 02:56 AM :: Pennsylvania, USA  

September 26, 2008

Nicolas Trangez

Python gotcha

Don’t ever do this unless it’s really what you want:

import os

def some_func(fd):
    f = os.fdopen(fd, 'w')
    f.write('abc')

fd = get_some_fd()
some_func(fd)
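# f was garbage-collected at the end of some_func and closed fd,
# so this next call operates on a closed file descriptor: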
some_other_func(fd)

Here’s what goes wrong: when some_func comes to an end, f (which is a file-like object) goes out of scope and is destroyed, which causes fd to be closed. I think this is pretty weird behavior (an object closing an fd it didn’t open itself), but well.

Here’s a better version, for reference:

def some_func(fd):
    f = os.fdopen(os.dup(fd), 'w')
    #Use f here

Try this on fd 0/1/2 in an (I)Python shell ;-)

September 26, 2008 07:35 PM

Matt Harrison

Kubuntu 8.4 "xine was unable to initialize any audio drivers"

Note to self: the next time sound stops working in Kubuntu (amarok, youtube, etc.), run lsof | grep snd and kill the offending process.
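
Something like this should find and kill the offenders in one go (an untested variant; device paths may differ):

fuser -v /dev/snd/* /dev/dsp
fuser -k /dev/snd/*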

September 26, 2008 04:06 PM :: Utah, USA  

Roy Marples

bugzilla shutdown

My bugzilla installation has now been shut down, and all bugs have been migrated to new trac projects (found here for dhcpcd, here for openrc and here for openresolv).

The main selling point is the easy linking of bugs to code changes, and of code changes back to the bugs that prompted them, as demonstrated here. That bug also demonstrates one issue with the bug migration - wiki formatting.

Over the weekend, I'll try to implement Multiple Trac Views on trac-0.11, as it looks quite handy as a replacement for the old projects page. The current one in the default trac looks a bit naff :)

September 26, 2008 11:02 AM

September 25, 2008

Thomas Capricelli

KDE 4.1.2 tagged, gentoo land frozen

I’m not a gentoo fan, mainly because I don’t like the idea of being a ‘fan’. Being a fan in the Free Software world usually means being an extremist, and I hate extremism.

I nonetheless use almost exclusively Gentoo on all the computers, laptops, servers and other divx boxes I have or maintain. That means a lot of them, and it makes my Debian friends laugh. Who cares? I use Gentoo and Free Software because I find them convenient, and I like the ideas behind them.

Yet I don’t share the optimism of people who think that gentoo is growing. On July 29th, KDE 4.1, the first almost-usable KDE version since the 3.5 branch, was released, and since then, guess what has happened in gentoo-KDE land? Nothing. By nothing I mean, first, that not a single ebuild, even masked, even hard-masked, has reached the official portage tree, and secondly, that despite the huge KDE user base in Gentoo, not a single official statement has been made concerning this issue. Because, believe me or not, there is an actual issue. Nothing was said on the main Gentoo page, almost nothing on gentoo planet (only one post, focused on whether KDE should install into a different place or not). In gentoo land, everybody speaks about everything but KDE in gentoo. Has the meaning of ‘g’ in gentoo recently changed?

When you try to learn a little bit more about this, it gets worse. Rumor has it that developers fought each other and the KDE team simply no longer exists. A new KDE team is here for whatever reason (to which, by the way, I send my very best support - for the development of new ebuilds, for being put under such light/pressure, and for being sent into this lion’s cage that gentoo development seems to be). I don’t know anything for sure, but it’s not the first time I have heard about huge tensions between gentoo developers, and this worries me a lot.

I don’t want politics; I want developers, Free Software developers. If I wanted politics, I would have gone for Debian, which, by the way, has had packages for KDE 4.1 and 4.1.1 for a long time.

Growing is not easy to handle. It seems to me that KDE has managed it very well: a lot of work has been done over the last few years to ’scale up’, and I think they pulled off that hugely needed step. Gentoo still has a lot to improve in this area. As a user, my expectations are the same as what you can read everywhere: transparency, transparency and transparency.

I love gentoo. I can understand a lot of things, I can wait, I can deal with human resource shortages, I could even help. I’m used to all of that, because it is so common in Free Software and part of the deal. But I can’t bear darkness and closedness.

I will not conclude by threatening to leave for another distribution. I’m more than happy with gentoo as a distribution and I will keep on using it as long as possible. I have a KDE checkout on my main computer anyway. If things get worse, though, I’m not sure I will dare to work on the ebuilds.

I’m ready to ignore the “If you’re not happy with gentoo leave it” type of comments.

September 25, 2008 11:44 PM

Ciaran McCreesh

Paludis 0.30.1 Released


Paludis 0.30.1 has been released:

  • EAPI 2 support.
   Tagged: paludis, paludis releases   

September 25, 2008 09:55 PM

Roy Marples

dhcpcd changes to svn and trac

After moving openresolv to trac and svn, I've done the same for dhcpcd. As such, the bugzilla database is now closed for new dhcpcd and openresolv bugs, and you should use trac for each (found here for dhcpcd and here for openresolv). I've migrated the bugs, attachments, resolutions and activity across for both.

These scripts are for bugzilla-3.0.3 and trac-0.11.1 and assume that no custom fields have been added.
They are also coded for specific product IDs and my name - you will need to adjust accordingly.

bugzilla to trac SQL script: it simply creates new tables for use in a trac db - ticket_change_status needs to be copied into ticket_change, though.
bugzilla to trac perl script: extracts attachments from bugzilla and lays them out in the current directory in a structure suitable for trac.

TODO - attachment filesize is 0, this needs fixing.

September 25, 2008 07:40 PM

Daniel Robbins

Gentoo 2008.1 Release Solutions

Gentoo seems to be having problems with “.1” releases – 2007.1 was cancelled and now 2008.1 has been cancelled. The Gentoo project has also announced a desire to move to a more “back to basics” approach where they do weekly builds of Gentoo stages.

Good idea. As many of you know, I am already building fresh stages for x86, i686, athlon-xp, pentium4, core32, amd64, core64, ~x86 and ~amd64 as well as OpenVZ templates at http://www.funtoo.org/linux.

Since I’ve been building Gentoo stages for a while, I know that Gentoo’s catalyst tool (the tool that is used for Gentoo releases) is in poor shape – it has been poorly maintained over the years and also does not have any documentation, so it is not really up to the task of building Gentoo releases anymore.

The lack of catalyst documentation makes it much more difficult for others (like Gentoo users and other Gentoo-based projects) to build their own Gentoo releases, and this, along with the poor state of catalyst itself, tends to perpetuate the centralized Gentoo development model – a model that is not very efficient and also isn’t very much fun.

It is a shame (and somewhat ironic) that a well-renowned build-from-source distribution does not have a decent and well-maintained release building tool. So it’s time to fix this…

In a few weeks, I will be releasing a completely redesigned release build tool called “Metro”. This is the tool that I use to build my daily Funtoo stages and supports building both stable and unstable (~) stages. It is much more capable than catalyst and has a much better architecture. Metro is a full recipe-based build engine that will allow the larger Gentoo community to build Gentoo (and even non-Gentoo - it is not Gentoo-specific) releases and stages easily and share their build recipes with others.

Metro allows anyone to set up their own automated builds and greatly simplifies the task of maintaining a web mirror of these builds. It will make it a lot easier for people to create their own Gentoo-based distributions as well.

My focus is on empowering the larger Gentoo community, but I do hope that the official Gentoo project will use Metro for their release engineering efforts – I think it will help not only the Gentoo project but also facilitate collaboration with projects outside Gentoo (by sharing build recipes) and thus help Gentoo to move in a more distributed direction and innovate more quickly. It’s time to get Gentoo back to being a leader of innovation in the world of Linux.

I am currently finalizing some interfaces in Metro before I start writing documentation for the tool. Once the documentation is done (should be in a couple of weeks), I will release Metro to the public. Until then, you can enjoy the fruits of Metro by using my Funtoo stages at http://www.funtoo.org/linux .

:-)

September 25, 2008 07:22 PM

September 24, 2008

Brian Carper

Westinghouse: It Never Ends

(If you're just tuning in, long story short: I bought a Westinghouse L2410NM monitor November 2007, it broke March 2008, I sent it to Westinghouse (paying for shipping myself), they sent it back to the wrong address and didn't tell me about it for 2 months, I filed a BBB complaint, they didn't respond to that for another couple of months, and seven months and 30+ phone calls later, I still don't have my monitor back.)

My last post about Westinghouse's horrendous customer service and never-ending RMA process was titled "Westinghouse: Finally getting somewhere?". The answer to that is sadly "no".

I got a flurry of phone calls and emails from Westinghouse's corporate office, attempting to settle my BBB complaint. On September 12th, Westinghouse finally responded to the BBB, saying:

Company states, replacement unit shipped 09/10/08

Good news! I was looking forward to posting an end to this horror story.

However, today is September 24th, and guess what? No monitor. I contacted Westinghouse last week, asking for a UPS tracking number so I'd know when to expect my monitor. However, after being promised a phone call last Thursday that never came, and then sending an email Friday which was never answered, and then waiting three more days for good measure, it appears I'm once again being given the runaround.

So today I sent this email to my contact at Westinghouse:

Do you have access to Google? Please search for "westinghouse rma" and look at the top result. I believe it will be my website. I've been carefully documenting all of my adventures with Westinghouse for the past seven(!) months. On my website, many other people have related their own similarly terrible experiences being kept in the dark for months by your customer service departments.

You promised me a phone call on Sept 18th to provide me with a tracking number for my replacement monitor, but I never heard from you. I also never received a reply to the email I sent you since then.

The BBB was informed that a replacement monitor shipped on the 10th. If that was the case, I probably should've had it in my hands by now, given that it's been two weeks. Has it actually even been shipped? I suspect not. I feel as though I'm once again being given the runaround while nothing is done to resolve this issue. Please understand my frustration.

If I don't have a UPS tracking number by Friday, I'm filing a complaint with the FTC and the California Attorney General. They have a very easy-to-use form for filing complaints here: https://www.ftccomplaintassistant.gov/ and here: http://ag.ca.gov/contact/complaint_form.php?cmplt=CL

My website only has a couple thousand readers, but I'm also going to cross-post my story to every online tech news aggregator I can think of (e.g. http://reddit.com and http://digg.com), which translates to tens of thousands more potential readers. The story I would like to tell is "Westinghouse finally sent me my monitor after seven months", but I'll tell it either way.

I look forward to hearing from you,
--Brian

Look for this story on Reddit and Digg on Friday if I don't hear anything.

UPDATE: Well, I got a reply already. That was fast.

Your Fed Ex tracking number is 772xxxxxxxxxxx, you can track the
package at www.fedex.com/tracking to see the progress of your shipment.
Please keep mind that there was a delay at our warehouse and your unit
is going to ship tonight.

Just a little two-week delay, I guess those things happen. Hopefully if/when it shows up, the monitor actually works. I've burned through seven months of my warranty and somehow I doubt Westinghouse will courteously extend it for me if this monitor fails too.

(Read the whole crappy story of Westinghouse's dishonesty and horrible customer service: The beginning, Update 1, Update 2, Update 3, Update 4, Update 5, Update 6, Update 7, Update 8, Update 9.)

September 24, 2008 09:40 PM :: Pennsylvania, USA  

Roy Marples

openresolv changes to svn and trac

Using Drupal as a CMS is nice - it's worked for me very well.
However, it's not made for project management. I just had a static page that people couldn't add comments or feedback to (well, they could if I enable comments but that gets messy after a while). I do have bugzilla to handle bugs but I find it too overblown and complex for my needs. Don't get me wrong, bugzilla has it's place and it's a solid project - it's just not suited for my small site. Could be due to my fanatical dislike of perl Sticking out tongue

Also, my company suddenly needed a bug tracking system, and a colleague of mine suggested trac, which I installed on a server. I had only looked at trac briefly many years ago; it had promise but was lacking in a lot of places. I was pleased to see that a lot of good progress has been made and it's now very usable :) So much so that I've decided to install it here, and it now powers the openresolv project page. Because it's made to integrate with subversion, I used git2svn to convert the openresolv git trunk into an svn trunk. It's now open for business, and anonymous users can create and modify tickets and the wiki (well, parts of the wiki).

So is svn better than git, or git better than svn? It's a hard one to answer; both have their pluses and minuses. Luckily there is a trac addon that works with git, so I'll give that a try with dhcpcd.

September 24, 2008 09:29 PM

Daniel Robbins

More Git Madness

Today, I spent some time looking at better ways to organize the Portage tree in git, and I'm interested in getting feedback on what I've done.

Please take a look at my new portage-new git repository. This new repository contains both the main gentoo.org tree in the "master" branch, and the funtoo.org tree in the "funtoo.org" branch. This seems to be a much better way to organize things, for the following reasons:

  1. It's space-efficient - the trees are over 99% similar, and now a single clone operation grabs both.
  2. There is a unified history - you can easily see the differences between the trees by typing "git diff master funtoo.org".
  3. The GitHub Network Graph now shows how the gentoo.org and funtoo.org trees relate to one another, which is useful. In the funtoo.org tree, you can see where I'm pulling from.
  4. It allows people to easily switch between both trees with a simple "git checkout" command, as sketched below.
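
Concretely, switching and comparing goes something like this after a fresh clone (branch names as above):

git checkout --track -b funtoo.org origin/funtoo.org   # get a local funtoo.org branch
git checkout master                                    # back to the gentoo.org tree
git diff master funtoo.org                             # see how the two differ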

If you want to test out portage-new and see how the branches work, please consult my updated wiki documentation but clone "portage-new" rather than "portage". I have the repo name as "portage" in the wiki docs because I'm already anticipating making this tree the official one in a few days.

I think this is probably the repository model to use for Portage git development. If someone wants to use this tree as the basis for their own development, they can clone the tree and create a foobar.org branch that contains their changes. This will allow them to benefit from the multiple-branch model and facilitate easier integration and diffs with upstream.

Barring any major complaints, in a few days I am probably going to delete my two existing portage git repositories and rename portage-new to portage, and it will become the official one.

Let me know what you think.

September 24, 2008 12:23 AM

Patrick Lauer

Make your Intertubez a nicer place

I've been badly annoyed by some ads lately. As I'm already using AdBlock in Firefox and have started growing a large banlist in Konqueror too (leaving Opera dangerously exposed), I've modified my approach. So here are my additions to /etc/hosts:

# google
127.0.0.1       ssl.google-analytics.com
127.0.0.1       www.google-analytics.com
127.0.0.1       pagead2.googlesyndication.com
127.0.0.1       pagead.googlesyndication.com 
127.0.0.1       adservices.google.com        
127.0.0.1       imageads.googleadservices.com #[Ewido.TrackingCookie.Googleadservices]
127.0.0.1       imageads1.googleadservices.com                                        
127.0.0.1       imageads2.googleadservices.com                                        
127.0.0.1       imageads3.googleadservices.com                                        
127.0.0.1       imageads4.googleadservices.com                                        
127.0.0.1       imageads5.googleadservices.com                                        
127.0.0.1       imageads6.googleadservices.com                                        
127.0.0.1       imageads7.googleadservices.com                                        
127.0.0.1       imageads8.googleadservices.com                                        
127.0.0.1       imageads9.googleadservices.com                                        
127.0.0.1       partner.googleadservices.com                                          
127.0.0.1       www.googleadservices.com                                              
127.0.0.1       apps5.oingo.com #[Microsoft.Typo-Patrol]                              
127.0.0.1       www.appliedsemantics.com                                              
127.0.0.1       service.urchin.com #[Urchin Tracking Module]                          

#doubleclick
127.0.0.1  ad.doubleclick.net #[MVPS.Criteria]
127.0.0.1  ad2.doubleclick.net #[Panda.Spyware:Cookie/Doubleclick]
127.0.0.1  ad.3ad.doubleclick.net                                 
127.0.0.1  ad.3au.doubleclick.net                                 
127.0.0.1  ad.adx.doubleclick.net                                 
127.0.0.1  ad.ae.doubleclick.net                                  
127.0.0.1  ad.ar.doubleclick.net                                  
127.0.0.1  ad.au.doubleclick.net                                  
127.0.0.1  ad.be.doubleclick.net                                  
127.0.0.1  ad.br.doubleclick.net #[SunBelt.DoubleClick]           
127.0.0.1  ad.ca.doubleclick.net                                  
127.0.0.1  ad.ch.doubleclick.net                                  
127.0.0.1  ad.cl.doubleclick.net                                  
127.0.0.1  ad.cn.doubleclick.net                                  
127.0.0.1  ad.de.doubleclick.net #[Tenebril.Tracking.Cookie]      
127.0.0.1  ad.dk.doubleclick.net                                  
127.0.0.1  ad.es.doubleclick.net                                  
127.0.0.1  ad.fi.doubleclick.net                                  
127.0.0.1  ad.fr.doubleclick.net                                  
127.0.0.1  ad.gr.doubleclick.net                                  
127.0.0.1  ad.hk.doubleclick.net                                  
127.0.0.1  ad.hu.doubleclick.net                                  
127.0.0.1  ad.ie.doubleclick.net                                  
127.0.0.1  ad.in.doubleclick.net                                  
127.0.0.1  ad.jp.doubleclick.net                                  
127.0.0.1  ad.kr.doubleclick.net                                  
127.0.0.1  ad.it.doubleclick.net                                  
127.0.0.1  ad.nl.doubleclick.net                                  
127.0.0.1  ad.no.doubleclick.net                                  
127.0.0.1  ad.nz.doubleclick.net                                  
127.0.0.1  ad.pl.doubleclick.net                                  
127.0.0.1  ad.pt.doubleclick.net                                  
127.0.0.1  ad.ro.doubleclick.net                                  
127.0.0.1  ad.ru.doubleclick.net                                  
127.0.0.1  ad.se.doubleclick.net                                  
127.0.0.1  ad.sg.doubleclick.net                                  
127.0.0.1  ad.terra.doubleclick.net                               
127.0.0.1  ad.th.doubleclick.net                                  
127.0.0.1  ad.tw.doubleclick.net                                  
127.0.0.1  ad.uk.doubleclick.net                                  
127.0.0.1  ad.us.doubleclick.net                                  
127.0.0.1  ad.za.doubleclick.net                                  
127.0.0.1  ad.n2434.doubleclick.net                               
127.0.0.1  creatives.doubleclick.net                              
127.0.0.1  dfp.doubleclick.net                                    
127.0.0.1  fls.doubleclick.net                                    
127.0.0.1  ir.doubleclick.net                                     
127.0.0.1  iv.doubleclick.net                                     
127.0.0.1  ln.doubleclick.net #[Lycos]                            
127.0.0.1  m.doubleclick.net                                      
127.0.0.1  m2.doubleclick.net                                     
127.0.0.1  m3.doubleclick.net                                     
127.0.0.1  m.us.doubleclick.net                                   
127.0.0.1  motifcdn.doubleclick.net                               
127.0.0.1  motifcdn2.doubleclick.net                              
127.0.0.1  n3285ad.doubleclick.net                                
127.0.0.1  n3349ad.doubleclick.net                                
127.0.0.1  n4061ad.doubleclick.net                                
127.0.0.1  n4403ad.doubleclick.net                                
127.0.0.1  n479ad.doubleclick.net                                 
127.0.0.1  n609ad.doubleclick.net                                 
127.0.0.1  optout.doubleclick.net                                 
127.0.0.1  optimize.doubleclick.net                               
127.0.0.1  optimize.3optimization.doubleclick.net                 
127.0.0.1  paypalssl.doubleclick.net                              
127.0.0.1  rd.intl.doubleclick.net                                
127.0.0.1  se1.doubleclick.net                                    
127.0.0.1  twx.doubleclick.net                                    
127.0.0.1  doubleclick.ne.jp                                      
127.0.0.1  www3.doubleclick.net                                   
127.0.0.1  www.doubleclick.net                                    
127.0.0.1  doubleclick.com                                        
127.0.0.1  ad.doubleclick.com                                     
127.0.0.1  www2.doubleclick.com                                   
127.0.0.1  www3.doubleclick.com                                   
127.0.0.1  www.doubleclick.com                                    
127.0.0.1  www.messagemedia.com                                   
127.0.0.1  www.performics.com                                     
127.0.0.1  doubleclick.shockwave.com                              
# [Google/DoubleClick via Falk AdSolution][Falk eSolutions AG]    
127.0.0.1  a.as-eu.falkag.net                                     
127.0.0.1  a.as-eu1.falkag.net                                    
127.0.0.1  admin.as-eu.falkag.net                                 
127.0.0.1  bw.as-eu.falkag.net                                    
127.0.0.1  c.as-eu.falkag.net                                     
127.0.0.1  data.as-eu.falkag.net
127.0.0.1  e.as-eu.falkag.net #[Ewido.TrackingCookie.Falkag]
127.0.0.1  f.as-eu.falkag.net
127.0.0.1  origin.as-eu.falkag.net
127.0.0.1  red.as-eu.falkag.net #[McAfee.Adware-Zeno]
127.0.0.1  red01.as-eu.falkag.net
127.0.0.1  sel.as-eu.falkag.net
127.0.0.1  a.as-test.falkag.net #[Panda.Spyware:Cookie/Falkag]
127.0.0.1  bw.as-test.falkag.net
127.0.0.1  red.as-test.falkag.net
127.0.0.1  sel.as-test.falkag.net
127.0.0.1  a.as-us.falkag.net #[SunBelt.as-us.falkag]
127.0.0.1  b.as-us.falkag.net
127.0.0.1  bw.as-us.falkag.net #[a1339.g.akamai.net]
127.0.0.1  c.as-us.falkag.net #[Tenebril.Tracking.Cookie]
127.0.0.1  data.as-us.falkag.net
127.0.0.1  e.as-us.falkag.net #[a1339.g.akamai.net]
127.0.0.1  origin.as-us.falkag.net
127.0.0.1  red.as-us.falkag.net
127.0.0.1  red01.as-us.falkag.net
127.0.0.1  s.as-us.falkag.net
127.0.0.1  sel.as-us.falkag.net
127.0.0.1  as1.falkag.de #[Ad-Aware.Tracking.Cookie]
127.0.0.1  www.falkag.de
Et voilà. Your Intertubez now have about 75% less braindamage. It's funny to see websites cleaning up on reload ... blink, blink, reload, empty. Only text left ...

There's one issue though: it's far from complete. I think I'll need some privoxy added on top of that to be really happy. If I do, I'll let you know how it goes.

September 24, 2008 12:17 AM

September 23, 2008

Roy Marples

lighttpd out, apache in

You may have noticed an interruption to this service...

I finally got too irritated with the lighttpd configuration. There seem to be a few fastcgi issues which I'm now seeing. Also, development seems to have stalled. :(

So, I gave apache another whirl. I don't recall why I changed from apache to lighttpd, but it was probably speed related: I used to run this site on an old VIA C3-2 processor, and apache is noticeably slower than lighttpd on that box. This new(ish) server is an AMD64 Sempron 2400 and has the horsepower and memory for apache on this small site.

Anyway, the configuration layout for apache has also changed drastically since I last used it - and for the better! The Gentoo apache team have my thanks for the nice overhaul :)

I'm also playing around with Trac as a replacement for bugzilla and the dhcpcd project page. I've set it up here against an svn repo I migrated from git a while ago. We'll see if I like it enough to change over.

September 23, 2008 10:40 PM

Jürgen Geuter

Linux does not "need its own Steve Jobs" (repeating wrongs doesn't create rights)

In a break today I found yet another article outlining why "Linux needs its own Steve Jobs for it to be good". We get those quite a lot; it's kinda the Top 10 topic for people with half a brain. Well, here's the final discussion of why that idea is wrong (and retarded), so people can stop writing the same article that was already wrong back in 1999:

I'm not talking about whether Apple's OSX or their whole DRM mess is good or not: people seem to fall for the marketing campaign and the myth that Steve Jobs writes every line of code in any Apple product by hand, so for the sake of the argument let's just go with it. (Of course it's not all fine in Apple land, but that is another post.)

  • Steve Jobs gives the company direction and that makes their product great.
    If that is really your argument, welcome your Master: he's called Mark Shuttleworth and he does pretty much exactly that. He has a vision and throws money at the aspects of the Linux stack that he thinks need work (as Greg Kroah-Hartman has pointed out: the kernel and the "backend" don't seem to be a part of that). He does exactly what "mythical" Jobs does: he looks at problems and hires people so he can order them to fix them. Anybody else with some money can do the same. We can create one, two, many Steve Jobses (the question is whether we really want that).
  • Steve Jobs has visions that push their products where nobody thought about going before.
    Yeah, you're right, Apple has been driven by a vision: to stop being a computer company and turn into a content provider that fights with any dirty trick it can find to lock customers in. Apple does not invent; they revamp iTunes to push more DRM crap down to the customers. If you want to think about innovation, look at what the free software desktop does: integrate your desktop experience more and more, harmonize, standardize. GNOME people are working on a distribution-neutral way to install packages, and the X people might not be fast, but they are starting to really get their shit together and have X work its magic pretty much without tinkering. The whole netbook thing was only possible because of Linux. Where was Apple? Making their crappy usability horror that is the dock reflective.
  • Steve Jobs can also work because nobody in the company can work against him without getting fired, which leads to everybody working in one direction.
    If stagnation is what you want, that is the right way to handle things. One community, one software stack, one leader? The fact that everybody can take the whole shebang and modify it to be different is the strength of the free software stack. Yeah, many modifications suck or don't lead anywhere. But somebody tried and looked into it. What about the Pidgin fork? People didn't like the decisions made by the devs, so they forked. If we had the Leader model, that wouldn't happen.
  • Oh, and just as another remark: introducing a single point of failure is never smart. Linus has the main kernel repository and does the releases, but if something happened to him, there are others who have the tree and the knowledge to take over; that is another strength. One person "in charge" means that your whole project dies with that person. Great idea.


It's just like in politics: when things go bad, people cry for a leader to make all problems magically disappear, and sometimes that even happens. Apple stopped being a computer and technology company and turned into a big music store; the "problem" in the technology department was solved by running away into another market. Still better than the usual way things turn out when you get a new leader: at least Apple did not start a war.

So next time you wanna write about Linux needing a leader, direct your browser to Wikipedia and read.

September 23, 2008 08:10 PM :: Germany  

Dirk R. Gently

A Wic’d Solution


When I first saw NetworkManager back in Ubuntu 6.10 (Edgy Eft), I realized what a godsend it was. Previously, connecting to a wireless network was confusing at best for the new user. I had created scripts that used iwlist, iwconfig, ifconfig...; then NetworkManager came along and made my laptop truly mobile. When I moved to Gentoo, NetworkManager took a bit more to set up, so I wrote the NetworkManager wiki.

Lately though I’ve discovered NetworkManager doesn’t configure dhcp correctly with certain networks, and I have to configure dhcp manually. This isn’t a big deal, but it is an inconvienance. Lately, I heard boast about another wired/wireless network manager called Wicd so I decided to give it a try.

In Gentoo it’s easy to set up, just emerge it and add it to the default run level:

sudo emerge -v wicd
sudo rc-update add wicd default

Also, if you're using baselayout's network-connection scripts, disable them. Either delete the net.eth0, net.ath1 links (or whatever they are called), or edit "rc.conf" (located in /etc/ if using OpenRC, or in /etc/conf.d/ if you haven't migrated to OpenRC yet) and set the "rc_plug_services" preference to "!net.*". Leave net.lo alone though, as loopback is still needed.
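
The relevant line would look something like this (a sketch; which file it lives in depends on whether you've migrated to OpenRC, as noted above):

# keep baselayout/OpenRC from starting the net.* services itself; net.lo is unaffected
rc_plug_services="!net.*"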

Stop NetworkManager daemon and load Wicd daemon:

sudo /etc/init.d/NetworkManager stop
sudo /etc/init.d/wicd start

Restart X server so the applet is loaded:

[screenshot: Wicd applet in the notification area]

Wicd claims to work well with lightweight desktops. Clicking on the notification icon brings up the Wicd Manager:

[screenshot: the Wicd Manager]

Wicd will not connect automatically to a network unless that option is selected, which I think is a good idea:

[screenshot: Wicd's automatic connection option]

The preferences of Wicd allow connecting to more difficult networks.

[screenshots: Wicd preferences]

Looks like I got a new network manager. Thanks to the developers of Wicd.


September 23, 2008 06:31 PM :: WI, USA  

Jürgen Geuter

Blurp!

Been terribly busy in the last few days writing other stuff so there was no time to post. A few short blurbs:

  • Everything is Miscellaneous - The Power of the New Digital Disorder by David Weinberger is a brilliant book. It's cheap, so get it if you are at all interested in how to present knowledge. It's about tagging and categorizing and how those work. Written in a clear but still very witty way, it's a pleasure to read.
  • Been really getting into Juno Reactor lately...
  • "Star Wars - The Force Unleashed" on the Wii looks horrible and plays very generically. I don't know if I'm spoiled or whatever, but while the story might be ok, the game itself is pretty boring; the motion controls feel like they were thrown in just because they could be, and many things don't make too much sense.
  • Spore is boring. Or I have not found the game in that thing. You never know.
  • Thinking about netbooks lately. The Acer Aspire One looks neat, but why the hell do the Linux versions of netbooks so often get the short end of the stick when it comes to RAM? Any netbook owners reading this? What do you own and how do you like it? I want 1024 screen width and Linux.
  • I feel somewhat dirty for posting this almost "Micro-bloggy" post.

September 23, 2008 09:05 AM :: Germany  

Clete Blackwell

Apple iPhone 3G White 16GB

On Saturday, I ordered a brand new iPhone (white 16GB) from AT&T. I’m really excited about it. I have been wanting one ever since the rumors spread on Engadget and elsewhere that Apple would be releasing a phone. I was able to easily place the order on AT&T’s website. Generally, you are not able to upgrade to an iPhone through their website, but since we receive an employee discount from my Dad’s job through the “Premier” program, I was able to. Sadly, the phone is now on backorder, but it should ship this week. I’m extremely excited.

In preparation for my new toy, I have searched the application store for some great additions to my phone. Here is what I have come up with so far:

  • Air Sharing — Allows sharing of files between your computer and your iPhone.
  • CheckPlease — Tip calculator.
  • eBay Mobile
  • Facebook
  • Flashlight — Will help me check for dead pixels. Also can turn the screen white to be used as a flashlight at night.
  • Google Mobile App
  • iProcrastinate — A homework scheduler.
  • Loopt — Finds iPhone users in your area.
  • Mobile Banking from Bank of America
  • Mocha VNC Lite — A VNC client.
  • Pandora Radio — Free music radio.
  • Shazam — If you don’t know the name of a song that is playing, hold your iPhone up to the speaker and it will identify the song for you.
  • SimStapler — A staple simulator
  • Tap Tap Revenge — It looks like a fun game.
  • Units — Unit conversion.
  • WhosHere — Another application similar to Loopt.

More to come once I receive my phone! :D

September 23, 2008 03:31 AM

September 22, 2008

Zeth

My God, it's Full of XML

In recent posts I looked at a native XML database called DBXML and we looked at where XML came from.

You may find yourself in the situation that you are given a pile of XML documents, possibly broken, and it is up to you to make sense of them. This post explains some tools that can form your first-aid kit for dealing with problem XML documents.

Shine like a star(let)

xmlstarlet is available from your friendly neighbourhood package manager or from the xmlstarlet website

xmlstarlet is a command line toolkit that provides various XML-related helpers. For details on all the xmlstarlet tools, type:

xmlstarlet --help

Brock wrote recently about using xmlstarlet's select tool, which lets you use XPath expressions to query your XML.
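
For instance (a quick sketch, using a hypothetical books.xml), printing the text of every title element, one per line:

xmlstarlet sel -t -m "//title" -v . -n books.xml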

Viewing the element structure

Another handy xmlstarlet tool is the element structure viewer, which provides a friendly, XPath-style view into the XML document.

xmlstarlet el filename.xml

With this I tend to use the -u option, which only shows the unique lines:

xmlstarlet el -u filename.xml

There are also -a for attributes and -v for the attribute values.

Checking for well-formed XML documents

The most useful xmlstarlet tool for me has been the XML validator, which tests whether your documents are well formed or not. You use the tool as follows:

xmlstarlet val xmlfile.xml

It also has a number of options, the main one I have used is to validate against a Document Type Definition:

xmlstarlet val -d dtdfile.dtd xmlfile.xml

Tidying up your XML files

Sometimes programs output really ugly looking XML. So when you have made sure your document is well-formed with xmlstarlet, you might want to tidy it up a bit before letting anyone else see it.

Xmltidy is a handy little Java program that loads your XML document into memory and then outputs it in a nice looking form with linebreaks and indentation.

This is especially useful when you have a collection of XML files that are referencing each other. Xmltidy will combine them into a nice looking XML document.

Download the jar file from the xmltidy homepage, and then run:

java -jar xmltidy.jar --input oldfile.xml --output newfile.xml

Dealing with Unicode problems

Some of the most annoying problems with XML files arise when a file's encoding is not valid UTF-8 and some program rejects it.

I found a really nice package called uniutils, which is again available from your friendly neighbourhood package manager or from the uniutils website.

Like xmlstarlet, this gives you various utilities; the main one I use is for checking whether my XML files are valid UTF-8 Unicode. It gives useful error messages when a file is not Unicode. You can then check the file in a text editor and/or hex viewer (e.g. Ghex) to see what the problem is. So to validate an XML file, we simply go:

uniname -V filename.xml

If it has non-unicode characters, you will receive errors such as:

Invalid UTF-8 code encountered at line 215, character 115037, byte 115036. The first byte, value 0x82, with bit pattern 10001100, is not a valid first byte of a UTF-8 sequence because its high bits are 10.

So the character with hex value x82 is not a valid character in the UTF-8 encoding. In Emacs you can look at the character by typing

M-x goto-char 115037

Or you can open your hex editor. In Ghex, you can go to the edit menu and use the "Goto byte" feature to jump to the problem character; for example, if the byte number was 119:

http://media.commandline.org.uk/images/posts/gnome/ghex.png

That works for one character. If we want to recursively check all XML files within a directory, we can use find:

find . -name '*.xml' -print -exec uniname -V {} \;

So now let's imagine we find that the files have a non-Unicode character with the hex value x82 as above; we might then want to replace it with a character or entity. The following use of find and sed replaces all occurrences of hex x82 with C:

find . -iname '*.xml' -exec sed -i 's/\x82/\C/g' {} \;

This can help a lot, as most XML programs will reject files with inconsistent encoding.
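
Relatedly, if uniutils isn't to hand, plain iconv (on GNU systems) can do a quick encoding-only check; it exits non-zero and reports the byte position of the first invalid sequence:

iconv -f UTF-8 -t UTF-8 filename.xml > /dev/null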

Conclusion

These are my tips for dealing with a pile of broken XML files. If you have any tips or suggestions of your own, please share them by leaving a comment below.

In some future posts, we will look at using XML with Python, and with the Django web framework.

Thanks to Andy and Nick for help with this post, and the title was based on Tommi Virtanen's fantastic Europython talk.

If you are a Digg fan, give it some lovin!

Discuss this post - Leave a comment

September 22, 2008 08:49 PM :: West Midlands, England  

TopperH

Keeping a hostname even when not on lan

I connect to my home server from my laptop: over my LAN when I'm at home, and over the internet when I'm not.

My server has a static ip address in my lan (192.168.1.5) and a dyndns name on the internet.

The server's hostname is "fandango" and the dyndns name is something like "fandango.foo.bar".

I had this line in my /etc/hosts:

192.168.1.5 fandango


This configuration was a pain in the ass, because from home I had to "ssh TopperH@fandango", while from the outside I had to "ssh TopperH@fandango.foo.bar". I also had double passwords saved in my web browser, double quassel configuration etc.

The idea is to always refer to my server as "fandango", whether at home or not, so I made two scripts and added a postup() hook in my /etc/conf.d/net.

/root/scripts/hosts.world

#!/bin/bash
# Point "fandango" in /etc/hosts at its current public (dyndns) address
MYFILE='/etc/hosts.backup'
OLDHOST=`grep fandango $MYFILE | awk '{ print $1 }'`
NEWHOST=`host fandango.foo.bar | gawk '{print $4}'`
sed s/$OLDHOST/$NEWHOST/ $MYFILE > /etc/hosts



/root/scripts/hosts.home

#!/bin/bash
# Point "fandango" in /etc/hosts at its LAN address
MYFILE='/etc/hosts.backup'
OLDHOST=`grep fandango $MYFILE | awk '{ print $1 }'`
NEWHOST=192.168.1.5
sed s/$OLDHOST/$NEWHOST/ $MYFILE > /etc/hosts


/etc/conf.d/net

[snip]
postup() {
    if [[ ${IFACE} == "ppp1" ]] ; then
        /root/scripts/hosts.home
    elif [[ ${IFACE} == "ppp2" ]] ; then
        /root/scripts/hosts.world
    fi
    return 0
}


I'm sure there are more elegant ways to achieve the same result, and comments are welcome... By the way, it just works :)
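
For what it's worth, one small consolidation (a sketch, keeping the same /etc/hosts.backup convention): a single script that takes the new address as an argument, so postup() can call it with 192.168.1.5 on the LAN or with the dyndns lookup from outside:

#!/bin/bash
# Usage: hosts.update <address-for-fandango>
MYFILE='/etc/hosts.backup'
OLDHOST=`grep fandango $MYFILE | awk '{ print $1 }'`
sed s/$OLDHOST/$1/ $MYFILE > /etc/hosts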

September 22, 2008 04:40 PM :: Italy  

Brian Carper

Practicality: PHP vs. Lisp?

Eric at LispCast wrote an article about why PHP is so ridiculously dominant as a web language, when arguably more powerful languages like Common Lisp linger in obscurity.

I think the answer is pretty easy. In real life, practicality usually trumps everything else. Most programmers aren't paid to revolutionize the world of computer science. Most programmers are code monkeys, or to put it more nicely, they're craftsmen who build things that other people pay them to create. The code is a tool to help people do a job. The code is not an end in itself.

In real life, here's a typical situation. You have to make a website for your employer that collects survey data from various people out in the world, in a way that no current off-the-shelf program quite does correctly. If you could buy a program to do it that'd be ideal, but you can't find a good one, so you decide to write one from scratch. The data collection is time-sensitive and absolutely must start by X date. The interface is a web page, and people are going to pointy-clicky their way through, and type some numbers, that's it; the backend just doesn't matter. For your server, someone dug an old dusty desktop machine out of a closet and threw Linux on there for you and gave you an SSH account. Oh right, and this project isn't your only job. It's one of many things you're trying to juggle in a 40-hour work week.

One option is to write it in Common Lisp. You can start by going on a quest for a web server. Don't even think about mod_lisp, would be my advice, based on past experience. Hunchentoot is good, or you can pay a fortune for one of the commercial Lisps. If you want, you could also look for a web framework; there are many to choose from, each more esoteric, more poorly documented and harder to install than the last. Then you get to hunt for a Lisp implementation that actually runs those frameworks. Then you get to try to install it and all of your libraries on your Linux server, and on the Windows desktop machine you have to use as a workstation. Good luck.

Once you manage to get Emacs and SLIME going (I'm assuming you already know Emacs intimately, because if you don't, you already lose) you get to start writing your app. Collecting data and moving it around and putting it into a database and exporting it to various statistics packages is common, so you'd do well to start looking for some libraries to help you out with such things. In the Common Lisp world you're likely not to find what you need, or if you're lucky, you'll find what you need in the form of undocumented abandonware. So you can just fix or write those libraries yourself, because Lisp makes writing libraries from scratch easy! Not as easy as downloading one that's already been written and debugged and matured, but anyways. Then you can also roll your own method of deploying your app to your server and keeping it running 24/7, which isn't quite so easy. If you like, you can try explaining your hand-rolled system to the team of sysadmins in another department who keep your server machine running.

Don't bet on anyone in your office being able to help you with writing code, because no one knows Lisp. Might not want to mention to your boss that if you're run over by a bus tomorrow, it's going to be impossible to hire someone to replace you, because no one will be able to read what you wrote. When your boss asks why it's taking you so long, you can mention that the YAML parser you had to write from scratch to interact with a bunch of legacy stuff is super cool and a lovely piece of Lisp code, even if it did take you a week to write and debug given your other workload.

Be sure to wave to your deadline as it goes whooshing by. If you're a genius, maybe you managed to do all of the above and still had time to roll out a 5-layer-deep Domain Specific Language to solve all of your problems so well it brings tears to your eyes. But most of us aren't geniuses, especially on a tight deadline.

Another option is to use PHP. Apache is everywhere. MySQL is one simple apt-get away. PHP works with no effort. You can download a single-click-install LAMP stack for Windows nowadays. PHP libraries for everything are everywhere and free and mature because thousands of people already use them. The PHP official documentation is ridiculously thorough, with community participation at the bottom of every page. Google any question you can imagine and you come up with a million answers because the community is huge. Or walk down the hall and ask anyone who's ever done web programming.

The language is stupid, but stupid means easy to learn. You can learn PHP in a day or two if you're familiar with any other language. You can write PHP code in any editor or environment you want. Emacs? Vim? Notepad? nano? Who cares? Whatever floats your boat. Being a stupid language also means that everyone knows it. If you jump ship, your boss can throw together a "PHP coder wanted" ad and replace you in short order.

And what do you lose? You have to use a butt-ugly horrid language, but the price you pay in headaches and swallowed bile is more than offset by the practical gains. PHP is overly verbose and terribly inconsistent and lacks powerful methods of abstraction and proper closures and easy-to-use meta-programming goodness and Lisp-macro syntactic wonders; in that sense it's not a very powerful language. Your web framework in PHP probably isn't continuation-based, it probably doesn't compile your s-expression HTML tree into assembler code before rendering it.

But PHP is probably the most powerful language around for many jobs if you judge by the one and only measure that counts for many people: wall clock time from "Here, do this" to "Yay, I'm done, it's not the prettiest thing in the world but it works".

The above situation was one I experienced at work, and I did choose PHP right from the start, and I did get it done quickly, and it was apparently not too bad because everyone likes the website. No one witnessed the pain of writing all that PHP code, but that pain doesn't matter to anyone but the code monkey.

If I had to do it over again I might pick Ruby, but certainly never Lisp. I hate PHP more than almost anything (maybe with the exception of Java) but I still use it when it's called for. An old rusty wobbly-headed crooked-handled hammer is the best tool for the job if it's right next to you and you only need to pound in a couple of nails.

September 22, 2008 09:17 AM :: Pennsylvania, USA  

Zeth

Ohloh and the popularity of programming languages in free and open source software

I came across my name on a site called Ohloh. I remember it launching a few years ago. Now that it has had time to really get going, I thought it was about time I reviewed the site here.

Ohloh tracks the free/open source software it knows about; it only tracks code held in CVS, Subversion or Git (i.e. not in bazaar, which I tend to use, or mercurial), in repositories that it can easily find. Despite the limitations, this is a very large amount of code.

Ohloh tries to figure out from the commits who the developers are, and thus my name came up (because of a very minor contribution to Gentoo once upon a time).

Ohloh also tries to figure out the usage of programming languages in free/open source software. It allows you to produce various graphs; those below are based on the total number of active free/open source projects for each language.

Some important caveats to bear in mind:

  • Ohloh only tracks how a language is being used in free/open source software; the majority of code written in the world runs on in-house systems, and this code is often never shared externally.
  • The percentage figures may be somewhat lower than one would expect, because their definition of a language is rather weaker than the one I would personally use: many markup formats such as HTML or XML and specialised syntaxes are all counted as programming languages even though they are not Turing-complete.
  • These are relative percentages; we are comparing languages against each other. All languages featured here are growing steadily in terms of the absolute number of free/open source programmers using them. So essentially what we are doing here is comparing the speed at which languages are growing.

Regular readers will know that I like high-level, general-purpose, dynamic languages; so let's start with them:

http://media.commandline.org.uk/images/posts/languages/comparison.png

Go Python! Of course, these figures might be completely meaningless, as Perl is often used by sys-admins who rarely share their code via public revision control repositories.

Now let's look at the big beasts, the major compiled languages. These bread-and-butter languages seem to be stabilising around equal percentages:

http://media.commandline.org.uk/images/posts/languages/comparison2.png

Platform-oriented proprietary languages are not heavily used in free/open source software, as you might expect; however, let's compare two against each other: Microsoft's C# versus Apple's Objective-C:

http://media.commandline.org.uk/images/posts/languages/comparison3.png

C# is stronger, which is not surprising considering the vast difference in user numbers between Windows and OS X.

A more interesting question is whether the rising use of C# in free/open source software is evidence of a developing accommodation between the Microsoft world and the Free World?

At least that is until Microsoft next calls us all cancer and threatens to sue the whole free/open source world again.

Interesting stuff, let me know if you come up with any interesting comparisons.

Discuss this post - Leave a comment

September 22, 2008 06:13 AM :: West Midlands, England  

September 21, 2008

Martin Matusiak

git by example - upgrade wordpress like a ninja

I addressed the issue of wordpress upgrades once before. That was a hacky, home-grown solution. For a while now I've been using git instead, which is the organized way of doing it. This method is not specific to wordpress; it works with any piece of code where you want to keep current with updates and yet have some local modifications of your own.

To recap the problem briefly: you installed wordpress on your server. Then you made some changes to the code; maybe you changed the fonts in the theme, for instance. (In practice, you will have a lot more modifications if you've installed any plugins or uploaded files.) And now the wordpress people are saying there is an upgrade available, so you want to upgrade, but you want to keep your changes.

If you are handling this manually, you now have to track down all the changes you made, do the upgrade, and then go over the list and see if they all still apply, and if so re-apply them. git just says: you’re using a computer, you git, I’ll do it for you. In fact, with git you can keep track of what changes you have made and have access to them at any time. And that’s exactly what you want.

1. Starting up (the first time)

The first thing you should find out is which version of wordpress you’re running. In this demo I’m running 2.6. So what I’m going to do is create a git repository and start with the wordpress-2.6 codebase.

# download and extract the currently installed version
wget http://wordpress.org/wordpress-2.6.tar.gz
tar xzvf wordpress-2.6.tar.gz
cd wordpress
 
# initiate git repository
git-init
 
# add all the wordpress files
git-add .
 
# check status of repository
git-status
 
# commit these files
git-commit -m'check in initial 2.6.0 upstream'
 
# see a graphical picture of your repository
gitk --all

Download this code: git_wordpress_init

This is the typical way of initializing a repository, you run an init command to get an empty repo (you’ll notice a .git/ directory was created). Then you add some files and check the status. git will tell you that you’ve added lots of files, which is correct. So you make a commit. Now you have one commit in the repo. You’ll want to use the gui program gitk to visualize the repo, I think you’ll find it’s extremely useful. This is what your repo looks like now:

gitk is saying that you have one commit, it’s showing the commit message, and it’s telling you that you’re on the master branch. This may seem odd seeing as how we didn’t create any branches, but master is the standard branch that every repository gets on init.

The plan is to keep the upstream wordpress code separate from your local changes, so you’ll only be using master to add new wordpress releases. For your own stuff, let’s create a new branch called mine (the names of branches don’t mean anything to git, you can call them anything you want).

# create a branch where I'll keep my own changes
git-branch mine
 
# switch to mine branch
git-checkout mine
 
# see how the repository has changed
gitk --all

Download this code: git_wordpress_branch

When we now look at gitk the repository hasn’t changed dramatically (after all we haven’t made any new commits). But we now see that the single commit belongs to both branches master and mine. What’s more, mine is displayed in boldface, which means this is the branch we are on right now.

What this means is that we have two branches, but they currently have the exact same history.

2. Making changes (on every edit)

So now we have the repository all set up and we’re ready to make some edits to the code. Make sure you do this on the mine branch.

If you’re already running wordpress-2.6 with local modifications, now is the time to import your modified codebase. Just copy your wordpress/ directory to the same location. This will obviously overwrite all the original files with yours, and it will add all the files that you have added (plugins, uploads etc). Don’t worry though, this is perfectly safe. git will figure out what’s what.

Importing your codebase into git only needs to be done the first time, after that you’ll just be making edits to the code.

# switch to mine branch
git-checkout mine
 
# copy my own tree into the git repository mine branch
#cp -ar mine/wordpress .. 
 
# make changes to the code
#vim wp-content/themes/default/style.css
 
# check status of repository
git-status

Download this code: git_wordpress_edit

When you check the status you'll see that git has figured out which files have changed between the original wordpress version and your local one. git also shows the files that are in your version but not in the original wordpress distribution as "untracked files", i.e. files that are lying around that you haven't yet asked git to keep track of.

So let’s add these files and from now on every time something happens to them, git will tell you. And then commit these changes. You actually want to write a commit message that describes exactly the changes you made. That way, later on you can look at the repo history and see these messages and they will tell you something useful.

# add all new files and changed files
git-add .
 
# check in my changes on mine branch
git-commit -m'check in my mods'
 
# see how the repository has changed
gitk --all

Download this code: git_wordpress_commit

When you look at the repo history with gitk, you’ll see a change. There is a new commit on the mine branch. Furthermore, mine and master no longer coincide. mine originates from (is based on) master, because the two dots are connected with a line.

What’s interesting here is that this commit history is exactly what we wanted. If we go back to master, we have the upstream version of wordpress untouched. Then we move to mine, and we get our local changes applied to upstream. Every time we make a change and commit, we’ll add another commit to mine, stacking all of these changes on top of master.

You can also use git-log master..mine to see the commit history, and git-diff master..mine to see the actual file edits between those two branches.
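
As a block, in the same style as the snippets above:

# commits that exist on mine but not on master
git-log master..mine

# the actual file edits between the two branches
git-diff master..mine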

3. Upgrading wordpress (on every upgrade)

Now suppose you want to upgrade to wordpress-2.6.2. You have two branches, mine for local changes, and master for upstream releases. So let’s change to master and extract the files from upstream. Again you’re overwriting the tree, but by now you know that git will sort it out. ;)

# switch to the master branch
git-checkout master
 
# download and extract new wordpress version
cd ..
wget http://wordpress.org/wordpress-2.6.2.tar.gz
tar xzvf wordpress-2.6.2.tar.gz
cd wordpress
 
# check status
git-status

Download this code: git_wordpress_upgrade

Checking the status at this point is fairly important, because git has now figured out exactly what has changed in wordpress between 2.6 and 2.6.2, and here you get to see it. You should probably look through this list quite carefully and think about how it affects your local modifications. If a file is marked as changed and you want to see the actual changes you can use git-diff <filename>.

Now you add the changes and make a new commit on the master branch.

# add all new files and changed files
git-add .
 
# commit new version
git-commit -m'check in 2.6.2 upstream'
 
# see how the repository has changed
gitk --all

Download this code: git_wordpress_commitnew

When you now look at the repo history there’s been an interesting development. As expected, the master branch has moved on one commit, but since this is a different commit than the one mine has, the branches have diverged. They have a common history, to be sure, but they are no longer on the same path.

Here you’ve hit the classical problem of a user who wants to modify code for his own needs. The code is moving in two different directions, one is upstream, the other is your own.

Now cheer up, git knows how to deal with this situation. It’s called “rebasing”. First we switch back to the mine branch. And now we use git-rebase, which takes all the commits in mine and stacks them on top of master again (ie. we base our commits on master).

# check out mine branch
git-checkout mine
 
# stack my changes on top of master branch
git-rebase master
 
# see how the repository has changed
gitk --all

Download this code: git_wordpress_rebase

Keep in mind that rebasing can fail. Suppose you made a change on line 4, and the wordpress upgrade also made a change on line 4. How is git supposed to know which of these to use? In such a case you’ll get a “conflict”. This means you have to edit the file yourself (git will show you where in the file the conflict is) and decide which change to apply. Once you’ve done that, git-add the file and then git-rebase --continue to keep going with the rebase.
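
In practice the recovery looks something like this (a sketch; the file name is only an example):

# edit the conflicted file by hand, then tell git it's resolved
git-add wp-content/themes/default/style.css
git-rebase --continue

# or bail out entirely and return to the pre-rebase state
git-rebase --abort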

Although conflicts happen, they are rare. All of your changes that don't clash with the changes in the upgrade will be applied automatically to wordpress-2.6.2, as if you were doing it yourself. You'll only hit a conflict in a case where, had you been doing the upgrade manually, it would not have been obvious how to apply your modification.

Once you’re done rebasing, your history will look like this. As you can see, all is well again, we’ve returned to the state that we had at the end of section 2. Once again, your changes are based on upstream. This is what a successful upgrade looks like, and you didn’t have to do it manually. :cap:

Tips

Don’t be afraid to screw up

You will, lots of times. The way git works, every working directory is a full copy of the repository. So if you're worried that you might screw something up, just make a copy of it before you start (you can do this at any stage in the process), and then you can revert to that if something goes wrong. git itself has a lot of ways to undo mistakes, and once you learn more about it you'll start using those methods instead.

Upgrade offline

If you are using git to upgrade wordpress on your web server, make a copy of the repo before you start, then do the upgrade on that copy. When you're done, replace the live directory with the upgraded one. You don't want your users to access the directory while you're doing the upgrade, both because it will look broken to them and because errors can occur if something tries to write to the database in this inconsistent state.
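
A sketch of that dance (the paths are just examples):

# work on a copy of the live tree
cp -a wordpress wordpress.upgrading
cd wordpress.upgrading
# ... run the upgrade steps from section 3 here ...

# then swap the upgraded copy into place
cd ..
mv wordpress wordpress.old
mv wordpress.upgrading wordpress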

Keep your commits small and topical

You will probably be spending most of your time in stage 2 - making edits. It’s good practice to make a new commit for every topical change you make. So if your goal is to “make all links blue” then you should make all the changes related to that goal, and then commit. By working this way, you can review your repo history and be able to see what you tried to accomplish and what you changed on each little goal.

Revision control is about working habits

You’ve only seen a small, albeit useful, slice of git in this tutorial. git is a big and complicated program, but as with many other things, it already pays off if you know a little about it, it allows you to be more efficient. So don’t worry about not knowing the rest, it will come one step at a time. And above all, git is all about the way you work, which means you won’t completely change your working habits overnight, it will have to be gradual.

This tutorial alone should show you that it’s entirely possible to keep local changes and still upgrade frequently without a lot of effort or risk. I used to dread upgrades, thinking it would be a lot of work and my code would break. I don’t anymore.

September 21, 2008 08:19 PM :: Utrecht, Netherlands  

Zeth

Django FreeComments cleanup script

This site uses the comments module provided by the Django web framework; in particular, it uses the FreeComment model to allow you to leave comments. One field I had not used so far was the "approved" field; I had simply put all the comments up on the web straight away, and just deleted the occasional spam that managed to beat the system.

Now, however, I have decided to use the approved field. I will still put comments up straight away, but now I will set the ones I have read to approved, allowing me to view new comments behind the scenes.

One flaw in this plan is that I needed to set the existing comments to approved.

I could have just gone:

# Set all comments to approved
comments = FreeComment.objects.filter(approved=0)
for comment in comments:
    comment.approved = 1
    comment.save()
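
(As an aside, if your Django is new enough - 1.0 gained update() on querysets - the same bulk change is a one-liner, assuming you really do want everything approved unseen:

# one SQL UPDATE instead of a Python loop
FreeComment.objects.filter(approved=0).update(approved=1)

which is exactly what the cautious little game below avoids.)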

But I was not 100% sure that the odd spam was not caught, so while eating my morning porridge, I turned it into a really simple command line adventure game.

Just in case it is useful to anyone, here it is below. I actually typed the whole thing into the shell, but ipython has a lovely history command that allows you to output everything you wrote.

Obviously, the LOCATION_OF_DJANGO_PROJECT needs to be set to the directory that your Django project is in, not the project directory itself.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""Simple and ugly script to sort out FreeComments."""

######################
# Configure the following three variables:

URL = "http://commandline.org.uk"
LOCATION_OF_DJANGO_PROJECT = "/home/django/sites/"
CLEAR_COMMAND = "clear" # For Windows use CLS

#
#######################

import os
import sys

# Add Django project to Path
sys.path.append(LOCATION_OF_DJANGO_PROJECT)

# The following magic spell sets up the Django Environment
from django.core.management import setup_environ
from basic import settings
setup_environ(settings)

# Get the FreeComment model
from django.contrib.comments.models import FreeComment

def main():
    """Cycle through the comments, offer a simple choice."""
    # Get all the unapproved comments
    comments = FreeComment.objects.filter(approved=0)
    os.system(CLEAR_COMMAND)
    print "There are", len(comments), "comments to judge.\n"

    # Go through the comments
    for comment in comments:
        # Show the hyperlink to the comment,
        # In case you want to check it in the browser
        print URL + comment.get_absolute_url()
        # Comment name
        print comment.person_name, "said:"
        try:
            # Comment text
            print comment.comment
        except UnicodeEncodeError:
            # The world is a big place.
            print "[something in unicode]"
        print "\n\n"

        # Now offer choice at the command line
        print "Do you approve this comment?"
        print "Press y for yes, d for delete, " + \
              "nothing for skip, anything else to exit."
        answer = raw_input()
        if answer == "y":
            comment.approved = 1
            comment.save()
        elif answer == "d":
            comment.delete()
        elif answer == "":
            pass
        else:
            sys.exit()
        os.system(CLEAR_COMMAND)

# Start the ball rolling.
if __name__ == '__main__':
    main()
    print "All done."

So pretty dumb, but publishing it here might save someone five minutes.

Discuss this post - Leave a comment

September 21, 2008 06:22 PM :: West Midlands, England  

September 20, 2008

Zeth

The history of XML

XML did not fall from heaven (or, if you prefer, arise out of hell) fully formed. Instead there was a long process of standardisation.

In 1969, Bob Dylan started his comeback at the Isle of Wight festival, Elvis began his in Las Vegas, Elton John released his first record, and David Bowie's Space Oddity coincided with the Apollo 11 mission to the Moon.

Meanwhile in 1969, at IBM, Goldfarb, Mosher and Lorie were working on an application for legal offices. They decided to make a standardised high-level markup language that was independent of whatever control codes your printer used. They named this markup language after their initials: GML.

A decade later, ANSI (the American National Standards Institute) began developing a standard for information exchange based on GML. This became SGML, which stood for 'Standard Generalized Markup Language'; it became an ISO (International Organization for Standardization) standard in 1986.

In 1991, CERN physicist Tim Berners-Lee released his Internet-based hypertext system called the 'World-Wide-Web', which used a particularly dirty SGML variant called HTML - 'HyperText Markup Language'. HTML was dirty SGML because it went against the separation of content from presentation, with <b>, <center>, <font>, <blink>, <marquee> and other in-line monstrosities.

Despite being a complete hack and the bane of SGML purists, HTML propelled SGML out of the academic, literary and textual processing circles into the wider world. Angle brackets had taken over the world.

SGML had many features and very few restrictions, i.e. one program may have implemented a certain subset of SGML while another program implemented a different subset, breaking the whole point of SGML, which was to be a common information exchange format.

So in a, perhaps futile, attempt to establish order out of chaos, an international working group formed under more international quangos from 1996 to 1998 and defined a subset of SGML called XML, 'Extensible Markup Language', which aimed to be simpler, stricter, easier to implement and more interoperable. A note by James Clark, the leader of the original technical group, explains the differences between SGML and XML. Over the last decade XML has been constantly revised and improved.

Of course, programs still implement XML in different ways, and one may find a load of marked up files that are somewhere between SGML and XML, as well as program or group specific non-standard behaviour.

The most enthusiastic XML advocates will recommend using XML for everything, including brushing your teeth. However, to be brutally honest, one uses XML when one is forced to.

XML does work better in some situations than others, for example, when you want to pass non-relational data between arbitrary systems, then XML works quite well.

In a future post, we will look at what to do if you find yourself having to sort out a pile of random XML files.

Discuss this post - Leave a comment

September 20, 2008 04:53 PM :: West Midlands, England  

TopperH

Forwarding local mail to Gmail using postfix

On my workstation I have postfix set up to deliver local mail to a maildir in my $HOME, so that I can read it using my mail client of choice.

I also have a server, and I often forget to ssh into it and open mutt to read the emails that the system (mostly cron) sends me.

I know there are simple ways to be notified every time I open a console, for example this:

echo "MAILCHECK=30" >> ~/.bashrc
echo 'MAILPATH=~/.maildir/new?"You have a new mail. Read it with 'mutt'."' >> ~/.bashrc

But as long as the server works fine, I don't need to log in that often.

So, why not send all the local mail to my gmail account, so that I can read it wherever I am, even on my BlackBerry? Here I found a nice howto.

First of all I need postfix set up:

# emerge -C ssmtp
# echo mail-mta/postfix mbox pam sasl ssl >> /etc/portage/package.use
# emerge postfix


Once it is emerged, I edit /etc/postfix/main.cf, being careful to replace XXX with something meaningful:

inet_interfaces = 127.0.0.1
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_tls_CAfile = /etc/postfix/cacert.pem
smtp_tls_cert_file = /etc/postfix/XXX-cert.pem
smtp_tls_key_file = /etc/postfix/XXX-key.pem
smtp_tls_session_cache_database = btree:/var/run/smtp_tls_session_cache
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/saslpass
smtpd_sasl_local_domain = $myhostname
smtp_sasl_security_options = noanonymous


Then, according to this tutorial I create the tls certificate:

# /etc/ssl/misc/CA.pl -newca
# openssl req -new -nodes -subj '/CN=domain.com/O=Name/C=US/ST=State/L=Location/emailAddress=user@gmail.com' -keyout XXX-key.pem -out XXX-req.pem -days 3650


Domain, name, country, state, location and email address must be substituted and remembered, to be used in the next step (once again, XXX must be filled in as above):

# openssl ca -out XXX-cert.pem -infiles XXX-req.pem
# cp demoCA/cacert.pem XXX-key.pem XXX-cert.pem /etc/postfix
# chmod 644 /etc/postfix/XXX-cert.pem /etc/postfix/cacert.pem
# chmod 400 /etc/postfix/XXX-key.pem


Now I edit /etc/postfix/saslpass with my gmail username and password:

[smtp.gmail.com]:587 user@gmail.com:password


and I create the associated hash file:

# cd /etc/postfix
# postmap saslpass
# chmod 600 saslpass
# chmod 644 saslpass.db


Now, as a regular user, specify the local forward:

$ echo 'user@gmail.com' > ~/.forward


I also set up local aliases in /etc/mail/aliases:

root: username
operator: username


Postfix needs a few commands before being started:

# postfix upgrade-configuration
# postfix check
# newaliases
# /etc/init.d/postfix start
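
If the test message below never shows up, postfix's standard queue tools are the first place to look (nothing gmail-specific here):

# list what's sitting in the queue, along with any deferral reason
postqueue -p
# attempt immediate delivery of everything queued
postqueue -f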


Now all my local emails should be sent to my gmail account; let's see if things are working:

# emerge -av mail-client/mailx
andrea@fandango ~ $ mail root
Subject: postfix works?
Yes it does!!!
Cc:


This is the output in /var/log/messages:

Sep 20 13:58:40 fandango postfix/pickup[23235]: 3F61AF066C: uid=1000 from=
Sep 20 13:58:40 fandango postfix/cleanup[23243]: 3F61AF066C: message-id=<20080920115840.3f61af066c@localhost>
Sep 20 13:58:40 fandango postfix/qmgr[23239]: 3F61AF066C: from=, size=339, nrcpt=1 (queue active)
Sep 20 13:58:40 fandango postfix/cleanup[23243]: 41AAEF066B: message-id=<20080920115840.3f61af066c@localhost>
Sep 20 13:58:40 fandango postfix/qmgr[23239]: 41AAEF066B: from=, size=471, nrcpt=1 (queue active)
Sep 20 13:58:40 fandango postfix/qmgr[23239]: 3F61AF066C: removed
Sep 20 13:58:40 fandango postfix/local[23245]: 3F61AF066C: to=, orig_to=, relay=local, delay=0.01, delays=0.01/0/0/0, dsn=2.0.0, status=sent (forwarded as 41AAEF066B)
Sep 20 13:58:43 fandango postfix/qmgr[23239]: 41AAEF066B: removed
Sep 20 13:58:43 fandango postfix/smtp[23246]: 41AAEF066B: to=, orig_to=, relay=smtp.gmail.com[72.14.221.109]:587, delay=3.3, delays=0/0/1.4/1.9, dsn=2.0.0, status=sent (250 2.0.0 OK 1221912634 12sm2163798fgg.0)

September 20, 2008 12:17 PM :: Italy  

Martin Matusiak

Dear Nokia

I’m confused.

You’re making these internet tablets with a keyboard, built-in wlan and bluetooth. It looks like a pretty complete mini-desktop device. The KDE people are really excited about running KDE on it, that’s wonderful.

There’s just one big question mark here. Why do I need a little computer that gives me internet access? I don’t know about you, but where I live there are computers anywhere I turn, at home, at school, at work. And if I really needed a smaller one I would get the Acer Aspire One, which is much more powerful and useful than your tablets (and it’s the same price range!).

Because, you see, if I’m not at home or school or work, I don’t have an internet connection. So your “portable internet device” just becomes a portable without connectivity. No different from my laptop.

I wonder… is there anything that would make this “portable” more useful? Perhaps some kind of universal communications network that doesn’t require a nearby wireless access point? Like say, the phone network? I hear you’re flirting with the idea of building phones, yes?

So why not build the phone into the “internet tablet”? That would actually give it something my laptop doesn’t have, it’d give me a reason to buy it. I mean you’ve already put everything else a modern phone has on the tablet, how hard could it be to add a phone?

I’ll tell you what, I’m in the market for one at the moment. I’ve never bought a Nokia product in my life, so this is your big chance. Do we have a deal?

September 20, 2008 10:57 AM :: Utrecht, Netherlands  

September 19, 2008

Daniel Robbins

New Git Funtoo Tutorial

For those of you interested to learn more about the Funtoo Portage tree, I have written a nice tutorial which you can view at http://github.com/funtoo/portage/wikis/home.

This tutorial explains how to use git, how to use the Funtoo Portage tree for development, and how to easily fork the tree for your own collaborative projects.

Enjoy! :)

September 19, 2008 07:25 PM

Bryan Østergaard

Software Freedom Day + Planet Larry

Tomorrow is Software Freedom Day - a yearly event where people all over the world get together to celebrate free software, enjoy talks related to free software and just as importantly get to meet lots of people.

If you happen to be in Copenhagen tomorrow, you can meet me and several other people from SSLUG at Copenhagen Business School. SSLUG's SFD program includes talks on Free Software, Linux, OpenOffice and GIMP. Everybody else can look up their local Software Freedom Day events - there are more than 500 teams registered all over the world, so there's probably going to be an event nearby.

And regarding Planet Larry... Steve Dibb just announced that he's setting up a feed for retired Gentoo developers, which is very good news in my opinion. Lots of retired developers blog, and they often have interesting comments on things related to Gentoo, or tips that other people can benefit from. And this way people can know whether the blog posts they're reading come from a normal user or a retired developer. I would probably have preferred marking retired developers another way instead of having multiple feeds, but I can see why some people want to know who's who, and I'd much rather have a separate feed than nothing at all. Oops, I was a bit too quick - ex-devs are now going into the main feed instead and will be marked using colour or some other way rather than a separate feed.

And since I've been having this discussion with Steve on and off for quite some time: Thank you Steve :)

September 19, 2008 07:06 PM

Steve Dibb

planetlarry.org

I don’t know about anyone else, but everytime I want to go Planet Larry, I still type in planetlarry.org, even though I ditched the domain a few months ago.

Well, I got tired of it not working, so I re-registered it, and it redirects once again as normal.

Also, we can always use more bloggers — if you have a Gentoo blog, lemme know about it, and we’ll get you added.  It’s a very informal process, just send me an email with your blog URL and stuff.  Now that I think about it, I really need to catch up with all the new Gentoo devs and get them on Planet Gentoo as well. Slack…

Finally, I decided I'm going to create a feed specifically for ex-developers, but since I'm too lazy to go out and find their blogs (and I don't think I still have an old copy), it would greatly speed things along if you guys could send me your info. Update: It's too much work to create a separate feed, so I just put them back in the main feeds. Now, behave. :)

And here’s an image just because this blog post is so boring, it needs one.

September 19, 2008 06:08 PM :: Utah, USA  

Daniel Robbins

Funtoo on GitHub

I now have the official Gentoo Portage tree as well as my slightly tweaked Funtoo Portage tree hosted at GitHub. The "portage" repository is the Funtoo one, whereas the "portage-gentoo.org" tree is the canonical Gentoo tree.

To use the Gentoo version of the tree, do:

# git clone git://github.com/funtoo/portage-gentoo.org.git

This will create a directory called portage-gentoo.org. To use this directory as your portage tree, edit /etc/make.conf and set PORTDIR to the path to this directory. This isn't an overlay; it is a full tree (which I prefer).

To use the Funtoo version of the tree, do:

# git clone git://github.com/funtoo/portage.git

Edit make.conf and set PORTDIR to point to the new portage directory that was created. Also, for Funtoo, you should accept the unstable keyword by setting ACCEPT_KEYWORDS to "~x86" or "~amd64".
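
As a sketch, assuming the tree was cloned to /usr/portage-funtoo (substitute wherever git actually put it), the make.conf bits would look like:

# /etc/make.conf
PORTDIR="/usr/portage-funtoo"
ACCEPT_KEYWORDS="~amd64"   # or "~x86"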

The Gentoo tree is updated every few days as is the Funtoo tree. This is mainly a service for developers who want to use git for development, or who want to merge in ebuilds and send me changesets for integrating into the Funtoo tree.

Enjoy!

September 19, 2008 03:53 PM

September 18, 2008

Nirbheek Chauhan

An important announcement

We interrupt your regular lazy-webbing to make these two important announcements:

A) AutotuA 0.0.1 released! Try it out and report bugs (if you can't follow the instructions in the link given, your services will be required when 0.0.2 is released :)

B) IMO, the two best distros in this world are:

  1. Foresight Linux
  2. Gentoo
    • The GNOME Team
    • Brent Baude (ranger): master-of-the-PPC-arch
    • Donnie Berkholz (dberkholz): X11, Council, and Desktop Team Emperor
    • Raúl Porcel (armin76): generic bitch; maintains half the arches and Firefox
    • Robin H. Johnson (robbat2): Infra demi-god
    • Zac Medico (zmedico): Portage demi-god
All these people are just too awesome (and too overworked) for words. If I hadn't got myself deep into Gentoo (which led to SoC too), I would've gone to Foresight :)


~Nirbheek,
Who has high hopes for AutotuA, and also hopes the best of Foresight and conary can be brought to Gentoo.

PS: Donnie, congrats once again! ;)

September 18, 2008 04:08 PM :: Uttar Pradesh, India  

Martin Matusiak

general purpose video conversion has arrived!

When I started undvd I set out to solve one very specific, yet sizeable, problem: dvd ripping & encoding. I did that not because I really felt like diving head first into the problem would be fun, but because there was nothing “out there” that I could use with my set of skills (none). Meanwhile, I needed a dvd ripper from time to time, and since I didn’t need it often I would completely forget everything I had researched the last time I had used one. This was a big hassle, I felt like I had no control over the process, and I could never assure myself that the result would be good. Somehow, somewhere, there was a reason why all my outputs seemed distinctly mediocre. Visibly downgraded from the source material.

Writing undvd was a decent challenge in itself, because of all the complexity involved in the process. I had to find out all the stuff about video encoding that I didn’t really care about, but I thought if I put it into undvd, and make sure it works, then I can safely forget all about it and just use my encoder from that point on. When you start a project you really have no idea of where it’s going to end up. undvd has evolved far beyond anything I originally set out to build. That’s just what happens when you add a little piece here and another piece there. It adds up.

It’s been about 20 months. undvd is quite well tested and has been “stable” (meaning I don’t find bugs in it myself anymore) for over a year. One of the by-products is a tool called vidstat for checking the properties of videos. I wrote that one just so I could easily check the video files undvd was producing. But it turns out to be useful and I use it all the time now (way more than undvd). In the beginning I was overwhelmed by the number of variables that go into video encoding, and I wanted to keep as many of them as I could under tight control. I have since backtracked on a number of features I initially thought would be a really bad idea for encoding stability. But that’s just the way code matures: you start with something simple, and when you’ve given it enough thought and enough tests, you can afford to build a little more complexity into the code.

Codec selection landed just recently. And once I was done scratching my head and trying to decide which ones to allow and/or suggest, I suddenly realized that with this last piece of the puzzle I was a stone’s throw away from opening up undvd to general video conversion. Urgently needed? Not really. But since it’s so easy to do at this point, why not empower?

The new tool is called encvid. It works just like undvd, stripped of everything dvd specific. It also doesn’t scale the video by default (generally in conversion you don’t want that). So if you’ve figured out how to use undvd, you already know how to use encvid, you dig? :cap:

Demo time

Suppose you want to watch a talk from this year’s Fosdem (which, incidentally, you can fetch with spiderfetch if you’re so inclined). You get the video and play it. But what’s this? Seeking doesn’t work; mplayer seems to think the video stream is 21 hours long, which is obviously not correct (incidentally, I heard a rumor that ffmpeg svn finally fixed this venerable bug). It seems a little heavy-handed, but if you want to fix a problem like this, one obvious option is to transcode. If the source video is good quality, the conversion won’t noticeably degrade it, at least from my observations so far.

So there you go, a conversion with the default options. You can also set the codecs and container to your heart’s content.

You can also use encvid (or undvd for that matter) to cut some segment of a video with the --start and --end options. :)
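
By way of illustration, an invocation might look like this (only the --start and --end flag names come from the text above; the exact argument syntax and time format are my assumptions):

# hypothetical usage -- check encvid --help for the real syntax
encvid talk.avi                                   # convert with the default codecs
encvid --start 00:05:00 --end 00:12:30 talk.avi   # keep only a segment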

I’m sold, where can I buy it?

September 18, 2008 10:11 AM :: Utrecht, Netherlands  

Christoph Bauer

Living without aRtsd isn’t bad at all

aRts is the old KDE sound daemon, which appeared around KDE 2.0. Its purpose was mixing multiple audio channels in real time - in other words, playing a beep sound while music was playing, and so on - as the common soundcard of the day wasn't able to do this. Later on, hardware and drivers moved on and the main developer retired from the project. In other words, the project went pretty dead and is deprecated by now.

Nevertheless I used aRts for quite a long time - honestly, I used it until two days ago, and it has caused too many problems. As aRts is deprecated, it was time to remove it from my system. As I am using Gentoo Linux, the KDE 3.5.10 update and the buggy kde-base/kdemultimedia-arts ebuild made this the perfect occasion to do so.

Removing aRts is quite simple. First of all, we deactivate the sound server using the KDE Control Center. But as we still want some noise on our box, we can adjust the system sound settings to use an external player instead. As I want to keep it simple, I'm using the "play" binary from the media-sound/sox package.

Once those changes are made, it's a good idea to test the current setup to see if things are working. If the sound still works, it's time to remove the arts USE flag from make.conf. The next step is re-emerging the packages that depended on arts, and then removing arts itself. And that's all.
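
A minimal sketch of those last steps (the exact command choices here are mine, not from the post):

# after dropping "arts" from USE in /etc/make.conf:
emerge --newuse --deep world      # rebuild everything that was built with the arts flag
emerge --unmerge kde-base/arts    # remove the daemon itself
revdep-rebuild                    # from gentoolkit; catches anything still linked against it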



September 18, 2008 07:31 AM :: Vorarlberg, Austria  

September 17, 2008

Patrick Lauer

Local File-to-ebuild Database

Hai everyone,
I've been a bit quiet the last few $timeunits. Life is good.

Here is a little toy I've been working on since yesterday. It is still very embryonic, but what it does is simple: map files to packages and packages to files, using a local SQLite DB I generated from binary packages. The index is not complete; it has been generated from ~5500 packages. I will try to update it when I have more packages built.
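
To give an idea of the kind of lookup this enables (the database filename and schema below are pure guesses on my part; the real design will differ, and is about to be rewritten anyway):

# hypothetical: a "files" table with "path" and "package" columns
sqlite3 fileindex.db "SELECT package FROM files WHERE path LIKE '%/libpng.so%';"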

If you have any great queries just throw them at me and I'll try to update the query script. Also I intend to totally rewrite the database structure because I've already noticed a few issues with the current design. But for now have fun with it!

September 17, 2008 10:07 PM

Brian Carper

Copy/paste in Linux: Eureka

It's been a few years since I officially grasped Linux's (well, X Windows') weird concept of copying and pasting, with its multiple discrete copy/paste methods: the highlight + middle-click version, and the "clipboard" Edit->Copy + Edit->Paste version.

But once in a blue moon, copying and pasting in X still surprises me. Try this:

  1. Open Firefox and a text editor. I'm trying with Vim.
  2. Highlight some text in Firefox.
  3. Middle-click paste it into the editor. The highlighted text is pasted, as expected.
  4. Close Firefox.
  5. Middle-click into the editor again.

Can you guess what happens at the end? If you said "Some random text from another application and/or nothing at all is pasted rather than the stuff from Firefox", you're right!

But today I read this article on jwz.org and finally understood how copy/paste works in X. Highlighting text doesn't copy anything, it just announces to the world "If any applications want to middle-click paste something, come ask me for it". So if you close the application you wanted to paste text from before you actually do the pasting, the application isn't around to give you the text you wanted any more, so you can't get it. The Edit->Copy / Edit->Paste version of copy/paste behaves the same way. You can't "Copy", close app, "Paste".
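
You can watch both selections from the command line with xclip (my addition, not from the article; assumes x11-misc/xclip is installed):

echo primary | xclip -selection primary        # what middle-click will paste
echo clipbrd | xclip -selection clipboard      # what Edit->Paste will paste
xclip -o -selection primary                    # prints "primary"
xclip -o -selection clipboard                  # prints "clipbrd"

Kill the backgrounded xclip processes and the selections vanish with them, which is exactly the behavior described above.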

Note, this is different from how MS Windows works. When you copy some text in Windows, it really is copied to another location. You can close the app and still paste away. But Windows has a different (inconsistent) behavior when copy/pasting files in Explorer. There, it behaves like X in Linux: if you right-click a file and "Copy", it doesn't actually do anything with the data until you paste. So if you "Copy" a file, delete it, and then "Paste", you don't get an error until you actually try to paste.

In Vim in Linux, the "* register lets you access the "primary selection" (highlight / middle click selection), and the "+ register lets you access the clipboard.

In Vim in Windows, "* and "+ do the same thing, and use the clipboard.

September 17, 2008 01:42 AM :: Pennsylvania, USA  

September 16, 2008

Jürgen Geuter

Last century's technology fail@Adobe

So Adobe has released a beta of their AIR platform for Linux, which would be nice if they had not once again failed to include support for modern machines.

From the release notes:

System Requirements
Hardware
* Processor - Modern x86 processor (800MHz or faster, 32-bit)


Guess what, Adobe: my Core2Duo here is kinda modern, but since I prefer to run a 64bit operating system on my 64bit machine, your fancy "new" and "modern" software won't run. Same crap as we have with Flash, which does not run properly on 64bit either.

Just tell me Adobe, why the hell do you hate 64bit so much?

September 16, 2008 06:08 PM :: Germany  

Tagcloud fail@Stack Overflow

So Stack Overflow, the new site of Joel Spolsky and Jeff Atwood, launched its public beta. It's supposed to be a place to ask technical questions and get answers from the other people around there (I checked it out for 5 minutes and it was very Windows-centric, so kinda boring to me).

Now both of the designers are big names when it comes to developing software, both are often quoted when it comes to best practices and whatnot, so how does this happen?

[screenshot of Stack Overflow's tag cloud, with the tags in seemingly random order]

How can they not be able to implement a simple tagcloud?

I'm writing about tags and tagclouds as we speak, and the first thing that comes to mind is that the tags are not ordered alphabetically, which makes the whole tagcloud worthless. Tags are there to make things easier to find; if I cannot look up a certain tag quickly, you can just drop the whole tagging thing. Yeah, I could find the few big tags easily, but the rest completely drowns in the data mud.

If you implement a tag cloud, do it right: tags have to be ordered alphabetically, with more important tags printed bigger. There's also the half-assed concept of ordering tags by importance (the biggest ones first), but that one doesn't do a lot right either.

How seriously can you take those guys if they can't even get those simple things right?

September 16, 2008 09:45 AM :: Germany  

September 15, 2008

Thomas Capricelli

About mercurial and permissions

Distributed source control is really great, and among the tools out there, the one I love the most is, by far, mercurial. I use it for all my free software projects, my own non-software projects (config files, mathematical articles and such) and also, dare I say it, for my CLOSED SOURCE projects. Yes, I also do that kind of thing; how harsh a world this is, isn't it?

In the latter case, though, I often have some problems with permissions. In my (quite common) setup, I have a central repository and the whole tree belongs to a (unix-) group. File access is restricted to this group only (chmod -R o= mydir).

On a lot of current Linux distributions each user has an associated group with the same name (john:john); at least that's how it behaves on both Debian and Gentoo.

When a user does a push which creates new directories or files, those are created as belonging to this user and their main group (john:john here). As a result, other people cannot access them, and when you want to pull from the repository, you get a big ugly failure:

pulling from ssh://foo@freehackers.org///usr/olocal/hg/topsecretproject
searching for changes
adding changesets
transaction abort!
rollback completed
abort: received changelog group is empty
remote: abort: Permission denied: .hg/store/data/myfile.i

Of course, I could create a big fixperms script in the repository, but then I would need to run it each time the problem arises - which is each time someone creates a new file/dir: that is far too often.

I thought about the set-group-ID bit (see man chmod) and indeed it works. I don't know if this is the official way of solving this problem in the mercurial community, and I would love to know if other people solve it differently. At least that's how it is documented on the mercurial site.

Now, you might well only find out about this problem once your repository has been used for a while and is already full of useful stuff. Then it is a little bit less simple than what the mercurial documentation says; namely, you need to set the set-group-ID bit on the whole of .hg/store/data:

cd topsecretproject/
chown -R john:topsecretgroup .                   # hand the whole tree to the shared group
chmod -R g=u,o= .                                # group gets the owner's permissions, others get nothing
find .hg/store/data -type d | xargs chmod g+s    # new files inherit the directory's group
chmod g+s .hg                                    # needed for .hg/requires

September 15, 2008 10:20 PM

Michael Klier

Long Time No Blog ...

yet, I am still alive ;-), so here's a short notice to prove it.

Real life is sucking up most of my time and motivation to hang out in front of my computer recently. I have a new volunteer who keeps me from lurking on all the webdottohoo™ sites all day long, and I am also finally moving into my very own flat :-). Until I have moved (in two weeks from now), my signal-to-silence ratio will prolly stay at its current level.



September 15, 2008 09:09 PM :: Germany  

Roy Marples

Experimental dhcpcd-4.99.1 available

dhcpcd now manages routing in a sane manner across multiple interfaces on BSD. It always has on Linux due to its route metric support, but as BSD doesn't have this, it's a little more tricky. We basically maintain a routing table built from all the DHCP options per interface and change it accordingly. As such, dhcpcd now prefers wired over wireless, and changes back to wireless if the cable is removed (assuming both are on the same subnet) - and this works really well. :)
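
On the Linux side, the metric-based preference looks roughly like this (illustrative output with made-up addresses and metric values, not taken from the release):

$ ip route show
default via 192.168.1.1 dev eth0 metric 202     # wired wins: lower metric
default via 192.168.1.1 dev wlan0 metric 303    # wireless is the fallback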

It's now starting to look quite stable, and all the features in dhcpcd-4 appear to still be working, so I've released an experimental version to get some feedback. BSD users can get an rc.d script here.
So, let's have it!

September 15, 2008 07:36 PM