Posts for Monday, December 13, 2010

Interesting comic regarding Wikileaks

Not KDE/Kubuntu related, but hits a note on freedom...

Posts for Sunday, December 12, 2010

Community :: Ethics :: Should Julian Assange be extradited to Sweden?

I own the state.

So to start with, let me be open about my own beliefs so you know where I am coming from.

As citizens and taxpayers we own the government, and as owners we have a right to know everything about our property. I believe in freedom of government information, and that the freedom of information act is a nice start but only a small step to transparent accountable government, more must be done. We must know that the government is serving us, the citizens, we must be shown that the government has not been captured by minority elites or corporate interests.

I believe the words of Jesus in John 8:32, namely that "the truth will set you free". Governments should not have secrets, or at least should have as few as possible. The date and location of the Normandy landings in 1944 is the kind of thing I think of as appropriate for a state secret, everything below that level should be public. Information governments hold about Oil or Pharmaceutical companies up to no good should and must be in the public domain. Tittle-tattle about what the head of the Bank of England might think about the shadow chancellor after a few drinks does not qualify for legal protection as a 'state secret'.

A lot of what is in the Wikileaks cables so far is the result of US diplomats writing down various unfounded rumours and slander about foreign leaders; people should not be paid to write this nonsense down in the first place. Meanwhile, real facts based on evidence should be put into the public domain. If the US government had taken this approach, there would be nothing to leak. The problem is with the people who wrote the cables in the first place, not the people who published them once they leaked.

Still the biggest stick

The United States armed forces are a trillion-dollar investment with more hi-tech weaponry than the rest of the world put together and well over 2 million highly-trained personnel on active duty or in reserve. The "People's Liberation Army" of China has a slightly larger nominal headcount but is decades behind in technology and training. No one can seriously argue that US hegemony is threatened by Wikileaks. America's hegemony is not based on secrets or 'soft power'; it is based on overwhelming capability, and the more that capability is known about, the more it deters enemy nations from attacking America.

The right-wing hacks and government insiders moaning about Wikileaks are whistling into the wind. Major established newspapers such as the Guardian and the New York Times already have all the cables and are co-publishing them with Wikileaks. If they somehow magically made Wikileaks disappear it would not change anything since the newspapers will still press ahead as planned. At time of writing, Wikileaks has a network of 1697 mirrors; it is statistically likely that a minority will be broken at any one time, but even so that is enough to make the website content more or less impossible to take offline.

Why is Assange being extradited to Sweden?

Julian Assange is the public face of Wikileaks; he is also now detained at Her Majesty's pleasure in Wandsworth prison, remanded in custody and awaiting possible extradition to Sweden. A Guardian article explains his conditions.

During August 2010, Julian Assange was in Sweden on Wikileaks business when he had intimate relations with two women. What actually happened we have no idea, at the moment all we have to go on is third party he said/she said-type rumours; and these are bizarre. Assange is wanted for questioning, no charges have actually been filed yet, and so all the media links that follow have to be taken with a truckload of salt.

A lot of the media outlets that have looked at the scant information we do have have tended, cautiously, to side with Assange. Richard Pendlebury from the Daily Mail was sent to Sweden to have a go at putting the chronology together; Israel Shamir and Paul Bennett attempt the same in their article. Mark Hosenball, in an article I read in the Toronto Star, has a different take on it.

Naomi Wolf argues that Assange is a jerk, but that does not make him a rapist. I can think of several instances in my life where I have been a complete jerk (please don't write in listing them here!), so the Wolf article does sadly ring quite true, though that probably says more about me and the people Wolf has dated than about Assange (Wolf herself has been the alleged victim in another high-profile alleged sexual harassment case, but that is another story altogether). Assange's lawyer has been publicly making the "Hell hath no fury like a woman scorned" argument: that the women were expecting a real relationship with Assange, and when they found out that he was sleeping with both of them, they ended up at the police station as part of a rape complaint.

The general theme of the articles is that the women later met, and at that point generated or clarified various concerns about their encounters with Assange, and then went together to the police, who constructed these concerns into a case. The descriptions are complicated by the complexities and differences of Swedish law, and by the fact that Assange had slept with two politically active women with Twitter accounts, YouTube videos, links to political parties and backgrounds in sexual politics. More on this in David Edwards' article. Assange broke one of life's golden rules: don't date wannabes. This includes political wannabes as well as actresses and performers, especially when they are not very attractive!

Katrin Axelsson from Women Against Rape wonders "at the unusual zeal with which Julian Assange is being pursued for rape allegations" when often clearer cut and violent cases languish without giving their victims justice. In Shamir and Bennett's article (mentioned already above), one of their arguments is that the CIA "threatened to discontinue intelligence sharing with SEPO, the Swedish Secret Service" unless the government worked against Assange and the whole thing might be a "honey trap".

US via Sweden?

Jemima Khan argues for the extradition theory, the idea that a Swedish prosecutor is attempting to extradite Assange as part of an eventual extradition out of the EU altogether to the US:

"I believe that this is about censorship and intimidation. The timing of these rehashed allegations is highly suspicious, coinciding with the recent WikiLeaks revelations and reinvigorated by a rightwing Swedish politician. There are credible rumours that this is a holding charge while an indictment is being sought in secret for his arrest and extradition to the US. An accusation of rape is the ultimate gag. Until proved otherwise, Assange has done nothing illegal, yet he is behind bars."

One argument against the extradition theory is the rhetorical question: why should it be easier to extradite Assange from Sweden than from the UK?

Well, due to the Gary McKinnon case, the British public's mood for sending suspects across the Atlantic is low; (mis)using the extradition treaty again on a suspect who is not a bomb-wielding terrorist could lead to the treaty being repealed. The US would certainly want to avoid that.

Sweden is not the liberal paradise it is sometimes portrayed as; there is more to Sweden than Abba's Dancing Queen, and there is an authoritarian streak too. Sweden ran a forced sterilization programme from the 1930s until 1976, and forced sterilization is a crime against humanity. Also, the United Nations ruled in 2006 that Sweden violated the global torture ban by knowingly co-operating with a US process that saw asylum seekers transferred from Stockholm to Egypt and then tortured there. Sweden was involved in other illegal rendition flights going from and through Sweden, and Sweden was also part of the war in Afghanistan. Sweden also does not have a jury system; trials are decided by judges alone.

The possibility of Sweden being more likely to extradite Assange may not be the motive, the motive may be to keep Assange within countries likely to extradite to the US. If Assange went to Venezuela, Ecuador, Russia or any other place outside America's Empire then he would be out of reach. If he is being held on remand in the UK, or being tried in Sweden, then he is kept in place.

So according to the extradition theory, part of it is simply opportunity. Assange admits he had what he saw as consensual sex in Sweden; and Sweden has a prosecutor willing to push the case, albeit that the Stockholm prosecutor decided there was no evidence, so a second prosecutor from Gothenburg was brought into play. This gives a reason to keep Assange in place.

It will all no doubt come out in the wash, one way or another. Either there is evidence in Sweden of sexual offences or there is not. A holding strategy could not work for very long. We have a right to free movement across the EU, especially for work purposes, and Assange could have been questioned at Scotland Yard or at the Swedish embassy in London; so if it turns out that the Swedish prosecutor cannot get a conviction, then serious questions will need to be asked about European arrest warrants and their use for fishing expeditions.

The Wikileaks cables show governments up to all sorts of stranger-than-fiction hijinks; just look at all the shenanigans that happened with Abdel Baset Ali al-Megrahi. Even so, the extradition theory is based on weaving together circumstantial evidence and seems a bit far-fetched (though that does not necessarily make it untrue). It is a 'who ordered the death of JFK?' type of question. If you think it was the military-industrial complex or whatever, then the extradition theory could be credible. If you think JFK was killed by a nut, then life is just sometimes random: America wants to punish Assange and he just happens to be an alleged rapist.

Dr Kirk James Murphy's article laying out the possible conspiracy is let down, for me, by the phrase "just happens", with its implication that it is all too coincidental to be true; he says sarcastically, "Small world, isn't it?". Well actually yes, it is a small world, especially when it comes to capital cities, universities, political parties, activist movements and so on; you always seem to see the same old faces again and again.

Throwing out the baby with the bathwater

Extreme cases like Wikileaks are a poor basis for reform of freedom of speech, as the adage goes - hard cases make bad law. A worrying sign is that in response to Wikileaks, freedom of speech may be further restricted. Such laws will have no effect on situations like Wikileaks but will no doubt have negative unintended consequences further down the line.

Well that was my attempt to make sense of it. Please leave a reply and let me know what you think. Should Julian Assange be extradited to Sweden?

Posts for Thursday, December 9, 2010

No Nonsense Logging in C (and C++)

A lot of times people do zany things and try and reinvent wheels when it comes to programming. Sometimes this is good: when learning, when trying to improve state of the art, or when trying to simplify when only Two-Ton solutions are available.

For a current daemon project I need good, fast, thread-safe logging. syslog fits the bill to a tee and using anything else would be downright foolish — akin to implementing my own relational database. There’s one caveat. For development and debugging, I’d like to not fork/daemonize and instead output messages to stdout. Some implementations of syslog() define LOG_PERROR, but this is not in POSIX.1-2008 and it also logs to both stderr and wherever the syslog sink is set. That may not be desired.

So, the goals here are: continue to use syslog() for the normal case as it is awesome, but allow console output in a portable way. Non-goals were using something asinine like a reimplementation of Log4Bloat or other large attempt at thread-safe logging from scratch.

Using function pointers, we can get a close approximation of an Interface or Virtual Function of Object Oriented languages:

void (*LOG)(int, const char *, ...);
int (*LOG_setmask)(int);

These are the same parameters that POSIX syslog() and setlogmask() take. Now, at runtime, if we desire to use the “real” syslog:

LOG = &syslog;
LOG_setmask = &setlogmask;

If we wish to instead log to console, a little more work is in order. Essentially, we need to define a console logging function “inheriting” the syslog() “method signature” (or arguments for non-OO types).

/* In a header somewhere */
void log_console(int priority, const char *format, ...);
int log_console_setlogmask(int mask);

And finally, a basic console output format:

#include <stdarg.h>
#include <stdio.h>
#include <syslog.h>

/* Private storage for the current mask */
static int log_consolemask;

int log_console_setlogmask(int mask)
{
  int oldmask = log_consolemask;
  if (mask == 0)
    return oldmask; /* POSIX definition: a 0 mask queries without changing */
  log_consolemask = mask;
  return oldmask;
}

void log_console(int priority, const char *format, ...)
{
  va_list arglist;
  const char *loglevel;

  /* Skip priorities that are masked out, matching setlogmask() semantics;
     a mask of 0 (i.e. never set) logs everything */
  if (log_consolemask && !(LOG_MASK(priority) & log_consolemask))
    return;

  va_start(arglist, format);

  switch (priority) {
  case LOG_EMERG:
    loglevel = "EMERG: ";
    break;
  case LOG_ALERT:
    loglevel = "ALERT: ";
    break;
  case LOG_CRIT:
    loglevel = "CRIT: ";
    break;
  case LOG_ERR:
    loglevel = "ERR: ";
    break;
  case LOG_WARNING:
    loglevel = "WARNING: ";
    break;
  case LOG_NOTICE:
    loglevel = "NOTICE: ";
    break;
  case LOG_INFO:
    loglevel = "INFO: ";
    break;
  case LOG_DEBUG:
    loglevel = "DEBUG: ";
    break;
  default:
    loglevel = "UNKNOWN: ";
    break;
  }

  printf("%s", loglevel);
  vprintf(format, arglist);
  printf("\n");
  va_end(arglist);
}

Now, if console output is what you desire at runtime you could use something like this:

LOG = &log_console;
LOG_setmask = &log_console_setlogmask;

LOG(LOG_INFO, "Program Started!");

In about 60 lines of code we got the desired functionality by slightly extending rather than reinventing things or pulling in a large external dependency. If C++ is your cup of tea, it is left as a trivial reimplementation where you can store the console logmask as a private class variable.

Some notes:

  1. You should still call openlog() at the beginning of your program in case syslog() is selected at runtime. Likewise, you should still call closelog() at exit.
  2. It’s left as a trivial exercise to the reader to define another function to do logging to both stdout and, using vsyslog(), the syslog. This implements LOG_PERROR in a portable way.
  3. I chose stdout because it is line buffered by default. If you use stderr, you should combine the loglevel, format, and newline with sprintf before calling vprintf on the variable arglist to prevent jumbled messages.
  4. Of course, make sure you are cognizant that the format string is passed in and do not allow any user-supplied format strings as usual.
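As a sketch of note 2, a combined logger could look something like the following. This assumes vsyslog() is available (it is a widespread BSD/glibc extension rather than strict POSIX), and the name log_both is mine:

```c
#include <stdarg.h>
#include <stdio.h>
#include <syslog.h>

/* Sketch for note 2: log to both stdout and the syslog.
 * Assumes vsyslog(); the function name log_both is illustrative. */
void log_both(int priority, const char *format, ...)
{
  va_list arglist;

  va_start(arglist, format);
  vsyslog(priority, format, arglist);
  va_end(arglist);

  /* A va_list cannot be reused after vsyslog() consumes it; restart it */
  va_start(arglist, format);
  vprintf(format, arglist);
  printf("\n");
  va_end(arglist);
}
```

You would then point LOG at log_both at runtime, exactly as with the other two implementations.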


Posts for Wednesday, December 8, 2010


Filesystem code in AIF

In light of the work and discussions around supporting Nilfs2 and Btrfs on Arch Linux and its installer AIF,
I've shared some AIF filesystem code design insights and experiences on the arch-releng mailing list.
This is hard-to-understand code, partly because it's in bash (and I've needed to work around some limitations in bash),
partly because there is some complex logic going on.

I think it's very useful material for those who are interested (it can also help understanding the user aspect),
so I wanted to share an improved version here.
On a related topic: I proposed to do a session at Fosdem 2011/"distro miniconf" about simple (console based) installers for Linux,
and how multiple distributions could share efforts maintaining installation tools, because there are a lot of cross-distribution concerns
which are not trivial to get right (mostly filesystems, but I also think about clock adjustments, bootloaders, etc).
Several distros already use the Arch installer (or a fork of it), for example Pentoo,
but I think cooperation could be much better and more efficient.


my acronyms for this text:

  • LV = lvm2 Logical Volume
  • VG = lvm2 Volume Group
  • PV = lvm2 Physical Volume
  • DM = Device Mapper
  • BD = Block Device
  • FS = File System
  • DF = DeviceFile

"Normal" FS'es ("do something on the BD represented by DF /dev/foo, so that you can then call `mount /dev/foo $somedir`") are trivial to add to aif.
Basically you just need to tell aif the name of the filesystem, and which commands and arguments it needs to invoke to create it and add a label to it.
Nilfs2 falls in this category. So do ext2/3/4, xfs, jfs, reiserfs, etc. (Nilfs is now supported and new archiso testbuilds are available)

"Complex" FS'es (which yield a new DF for a DM BD, which can span multiple underlying BD's, etc) are more difficult, and I'll explain why.
Btrfs falls in this category. So do LVM, dm_crypt and softraid. In aif terminology anything you put on top of a BD is a FS. This is not always technically correct, but it's not far-fetched and avoids needless complication. For example softraid would be a FS you put on top of one or more BD's, and which itself yields a new BD on top of which you can put something else.
In the same way I call a VG a FS being applied on top of a PV BD, which itself results in a new BD which can host multiple LV FS'es, which in return yield new BD's which can host other FS'es. Currently AIF provides support for lvm and dm_crypt, but not softraid or Btrfs.

How aif works is this: it uses a "model" that represents what your DF/FS structure will look like.
I personally usually configure my hard disk like this:
a boot partition, and a partition on which I do dm_crypt, which results in a DM BD, which I make a PV, then put a VG on
it, which contains multiple LV's: one for swap, and two containing
the FS'es which get mounted as / and /home.

You can see that model on the bottom ($BLOCKDATA) of the example file "fancy-install-on-sda" for automatic installations, included with aif.

You might have noticed in the installer - if you use it interactively - how you first configure all
your filesystems in the dialog interface, and only after confirming does it
perform all the required actions, step by step. Actually, the dialog-based configuration helper just generates a text file in the same format as $BLOCKDATA, and the processing code is the same for interactive and automatic installs.
Since in the config file for an automatic install you can define your FS's in arbitrary order, aif
figures out the dependencies and processes things in the right order.
With the example given above, aif will parse the text and figure out the order of creation: first the partitions (obviously), then the dm_crypt, then the PV, then the VG,
then the LV's, then the FS'es on those LV's.
It then mounts all mountpoints in the right order (first /, then /home and /boot).
On rollback (when the user changes his mind, or an error occurred - usually because of a misconfiguration like a too-big LV) everything I just explained happens in reverse.
This allows users to tune their config (or try something completely different) without the installer breaking with errors like "This device is already marked as encrypted" or "VG already exists"
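To make the ordering idea concrete, here is a toy bash sketch. This is my own illustration, not actual AIF code, and the real $BLOCKDATA format is richer than the two columns assumed here:

```shell
# Toy illustration of dependency ordering (not AIF's real code or format):
# each input line is "<blockdevice> <parent>", where parent "-" means a raw
# partition. An entry is emitted only once its parent has been created.
order_blockdata() {
    done_list="-"
    input=$(cat)
    while [ -n "$input" ]; do
        remaining=""
        while read -r child parent; do
            [ -n "$child" ] || continue
            case " $done_list " in
                *" $parent "*)
                    echo "$child"
                    done_list="$done_list $child" ;;
                *)
                    remaining="$remaining$child $parent
" ;;
            esac
        done <<EOF
$input
EOF
        next=$(printf '%s' "$remaining")
        # a config referencing a parent that never appears would loop
        # forever; detect that and bail out instead
        if [ "$next" = "$input" ]; then
            echo "unresolvable entries (missing parent?): $next" >&2
            return 1
        fi
        input="$next"
    done
}
```

Fed the dm_crypt-on-LVM example from above, this emits the partitions first, then the dm_crypt mapping, then the volumes stacked on top of it, regardless of the order the lines appear in.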

I chose this model-based approach initially because I wanted to get rid
of the ugly, hacky original installer code, but still provide a lot of
control through the nice dialog interfaces. And interfaces that work fast (not making you wait between every step because it's creating the FS you just defined).
Perhaps most importantly, since my main goal was automatic installs where you could just specify what you wanted your
FS/BD structure to look like (not a series of commands), relatively little extra code was needed. (This is also what separates AIF from FAI and the Debian installer: on Debian the interactive and automatic installers are different projects, on Arch they are just different extensions of the same codebase.)


Advantages of this approach:

  • provides some abstraction; it's trivial to add support for new (simple) filesystems.
  • makes the dialog-based "configurator" easier, because we can share code with the automated installer
  • the descriptive style of the config makes it easier (although the actual format could be further improved).
    Definitely easier than running/writing a series of commands (resp. interactive, automatic install).
    If we made users run/write a series of commands, we also couldn't support automatic rollbacks in case of failures.


Drawbacks:

  • the more control you want to give users, the more you're just putting
    effort into wrapping commandline arguments in fancy dialog interfaces
    (although there is also a textbox where you can enter whichever
    additional arguments you want, so this is a compromise)
  • pretty hard to implement fancier filesystems; you usually need to
    restrict yourself to the common use cases (see next point)
  • bash datastructures are very limited. it's not easy to translate this model in a
    datastructure. If in "FS on top of a BD" the FS is a child, and the BD a parent, it would be like a tree,
    but each node could have multiple children and multiple parents, i.e. a graph.
    For now I chose to ignore the "multiple parents" case.
    Currently the only implication in aif is that you cannot have a VG which consumes multiple PV's.
    This is rare enough to justify this simplification, but still quite some code is needed to update and parse text files
    to mimic the datastructure, although I do consider using a specific
    optimized text format and an external tool to update/query the data. (yaml?)
    (See FS 15640).
    Multiple children do need to be supported (a VG can have multiple LV's. line 34 in the example file)
    Implementing Softraid in this model means I need to support multiple parents, i.e. work with a graph structure.
    For Btrfs, the LVM-like restriction (no multiple physical volumes for the same FS) might be an acceptable compromise
  • users cannot do their own stuff outside aif and expect to see the
    results inside aif. If the installer executed everything in realtime,
    it could detect changes being made to the system (though probably not easily)
  • since changes don't happen in realtime, the model needs to update itself to represent what
    the actual state would look like. For example: if you just added a dm_crypt FS on a BD, we must create
    a new entry /dev/mapper/$label in the model
  • for PV's, you need a way to differentiate in the menu between the
    real BD (the one on which you say you want to put a PV FS) and the actual PV (on which you can put VG FS's),
    so aif generates an entry with the same name as the DF, but with a '+' appended to the name.
    (see the example file if unclear). This problem is not really specific to the model approach;
    it's more caused by my desire to present a "global interface" to the user, and not a series of wizards and dialogs
  • Rollbacks are cool, but require some hard-to-maintain code, and I doubt they are used often.
    (also this is not really specific to the model approach taken, but still worth mentioning)

Because of all this, I have sworn quite a bit over the last few
years, wondering if bash really was a good choice. But using an external tool to work with a text-based graph/tree datastructure (where nodes can have some properties) will probably remove the big nuisance. Also, because bash v4 (finally) has associative arrays, I can clean up a bunch of other code as well.

I've pondered whether we should just let users do everything on the commandline (like Gentoo), or provide a minimal layer of
abstraction, like providing some scripts which they can modify that set up
a system in a certain way (for example, basically a series of mkfs; mount; pacman calls).
Another alternative would be to just make the user choose between a series of "common setups" and have them answer some questions for the chosen setup.
This is how the old installer did it, and afaik archboot still does it, but it doesn't scale well with more and more possibilities.
Actually, aif still contains the optional "autoprepare" method, which is in fact a simple wizard for a simple setup.

I guess it's a tradeoff between making it easy for users and not overloading the brain of people who want to hack on the installer.
The approach which afaik has always been taken by Arch is to make the installer hold the hands of the user.
When creating aif, I've chosen to stick to that concept. And frankly I still like the idea. It may be hard to maintain,
but as a user you can finish all your installs and have a lot of options, while still using very few keystrokes (and little mental energy).

As mentioned earlier, softraid hasn't been implemented yet in aif, nor has btrfs.
I would need to know the most common/recommended use cases, and figure out the best way to implement them.
Btrfs might be relatively easy, if I can implement it like I did LVM. It seems like a great FS and I don't want to provide only half-assed support for it.
Either way, refactoring the datastructure is something that will need to be done at some point, especially for softraid.

I hope this article was a bit understandable. And if you have any advice, please share :)
The actual blockdevices-filesystems library of AIF is here:
And the menus are here: (the most interesting functions are interactive_filesystems, for the main FS menu, and interactive_filesystem, which handles definition of a specific FS)


SATA disk beeping

I've never heard a hard drive beep!  Until yesterday.

I just yoinked a free second hand Inspiron 9400 laptop for a media centre, sans disk.  I purchased the cheapest SATA 2.5" disk I could find - 320Gb for about $50.  I started installing MythBuntu and then the drive started clicking and beeping, and the installation froze!  It was the usual crunch of a failing drive with an intermittent "beep" (much like the electrical interference noise you sometimes get in laptops / desktops).  The drive worked fine in a USB caddy.

At a complete loss, I turned to the oracle (Google) and found this.  Strangely, some drives are sensitive to the 5V power supply (it was the cheapest drive I could find).

Up until then I had been working on battery, so I plugged in the power, rebooted, and the drive worked flawlessly.  Hopefully it won't do the same when I lose power...

Posts for Tuesday, December 7, 2010

Paludis 0.56.0 Released

Paludis 0.56.0 has been released:

  • New ‘cave’ subcommand: ‘print-spec’.
  • New ‘cave resolve’ option: ‘--dependencies-to-slash’.
  • New ‘cave sync’ option: ‘--suffix’ (and support for sync suffixes in repository configuration files).
  • ‘cave resolve’ now shows which reasons are responsible for reconfiguration requirements.
  • The user used for userpriv operations (typically ‘paludisbuild’) is now expected to be in the ‘tty’ group.
  • The ‘repo_file’ variable may now be used in repository_defaults.conf. Added new repo_file_basename and repo_file_unsuffixed variables.
  • Values in overlay thirdpartymirrors files now override those in masters.
  • The documentation now recommends use of the ‘cave’ client rather than ‘paludis’.

Filed under: paludis releases Tagged: paludis

Posts for Sunday, December 5, 2010

Run Android Apps in Windows using YouWave

Now you can test-run apps on your Windows machine before installing them on your device, or perhaps you just don't want to miss the fun of playing Robo Defense. This is also good for extending the battery life of your device if you are just testing apps.

Watch the demo [here]. Click [here] to download. Enjoy!

Posts for Saturday, December 4, 2010

Wikipedia Adblock Filter

Do you love Wikipedia but hate Jimmy Wales’ infuriating grin? Add this filter to your Adblock filter list and resume clicking-and-clicking-and-clicking in peace:


The first rule blocks the javascript loader, and the second rule blocks the html stub. Either one will effectively block the Wikipedia banner.

Here’s what it looks like with Chrome Adblock:


Paludis 0.54.11 Released

Paludis 0.54.11 has been released:

  • ‘cave resolve --make binaries’ now behaves properly when considering packages that cannot be made into binaries.
  • ‘cave show x’ will no longer suggest every package name containing an ‘x’.
  • Display-If-Profile headers in news items are now handled correctly.

Filed under: paludis releases Tagged: paludis

Posts for Thursday, December 2, 2010

Linux problems you never considered: Handling Fortran90 modules for multiple compilers

One of the strangest areas of Linux packaging is scientific software. Often it’s written by non-programmers, it has an ad-hoc, handwritten or poorly maintained build system, and it uses unusual features of strange languages (like Fortran, the topic of this post). I’ve given talks on how upstreams should package scientific software in the past, but this post touches on a different issue: how distributions should handle one of the stranger aspects of Fortran packages.

The rough equivalent of libraries in Fortran90 is modules. One major problem, however, is that modules (“libraries”) are stored differently and change for each compiler+version used to build the package. For example, modules built using GCC’s gfortran and Intel’s ifort are entirely incompatible; even gfortran 4.3 and 4.4 are not expected to play nicely together.

This becomes a problem for people who care about performance, or people who develop Fortran programs, because these people need to have modules available for many different compilers. Initially, you might think we should store Fortran modules in directories reflecting this diversity. Running `gcc -print-file-name=finclude` on recent GCC versions prints the location where GCC installs its own Fortran modules: /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.3/finclude on my system. So you could imagine a series of directories like /usr/lib/$COMPILER/$VERSION/finclude/ where Fortran modules end up for each compiler.

But the problem arises when you consider how packaging actually works: you only get one simultaneous installation of each package+version. That means you can’t easily install modules for three different compiler+version combinations at once. For each module set, you need to rebuild the package for a new compiler and reinstall the package; this means you uninstall the old modules built for the other compiler.

Three possible solutions occurred to me:

  1. Litter modules by making the package forget it installed them. In this scenario, you would rebuild a package multiple times with different compilers, and the modules would get left behind in a compiler-specific directory like /usr/lib/$COMPILER/$VERSION/finclude/.
  2. Create a mechanism for switching between the same package version built by a different compiler. This might work by creating binary packages for module-installing packages, then storing them in directories like /usr/portage/packages/$COMPILER/$VERSION/. A switching script could examine these directories and switch between them on-demand by installing those packages using Gentoo’s PKGDIR setting. Using package-specific settings in /etc/portage/env/ to know when to create binaries by setting FEATURES=buildpkg, then adding a late hook to copy the binpkgs to the compiler-specific package directory, might be one route to this.
  3. Build the same package version with many compilers at once, then bundle it in a single package and install modules for all of them. This would work similarly to Gentoo’s experimental multi-ABI support (available in some overlays), which rebuilds a package numerous times for 32-bit or 64-bit within a single ebuild. This approach has two major downsides: (1) It requires explicit support to be written into every ebuild using it, and (2) a change to just one version of one compiler requires rebuilding the package for every compiler+version.

I’m leaning toward approach 2, which looks relatively easy and quick to support, with the benefit of feeling much cleaner than approach 1 and easier to implement & faster in action than approach 3. With approach 2, only one module directory is required rather than compiler-specific directories. A reasonably compiler-neutral location for Fortran modules would be /usr/$LIBDIR/finclude/, so that’s what I propose to use.
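As a very rough sketch, the switching script from approach 2 might start out something like this. The select_pkgdir helper and the directory layout are my own invention; PKGDIR, FEATURES=buildpkg and emerge --usepkgonly are real Portage features.

```shell
# Hypothetical helper: pick the binary-package directory for a given
# compiler+version combination.
select_pkgdir() {
    compiler=$1
    version=$2
    echo "/usr/portage/packages/$compiler/$version"
}

PKGDIR=$(select_pkgdir gfortran 4.4)
echo "Reinstalling Fortran modules from $PKGDIR"
# The script would then hand this directory to Portage to reinstall the
# prebuilt module-providing packages, e.g.:
#   PKGDIR="$PKGDIR" emerge --usepkgonly --oneshot <module-providing-packages>
```

The point is that switching compilers becomes a cheap binary-package reinstall rather than a full rebuild.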

If you have any other ideas or think a different option is better, please let me know in the comments.

Tagged: gentoo

Community :: Linux :: Six Command Line Tips

Here are six little command line tips that I have used lately.

Scheduling a task every minute

The syntax of the crontab file is very concise and very difficult to remember. It is quite easy to write a line in the file, come back later, and find your task did not run as expected. However, most modern crons have handy folders such as /etc/cron.daily where you can deposit an executable such as a Bash or Python script.

Sometimes I want to run a task every minute. My approach to this now is to add a /etc/cron.minutely folder. To do this, add the following line to your /etc/crontab file and don't forget to actually create the /etc/cron.minutely folder.

*  *    * * *   root    cd / && run-parts --report /etc/cron.minutely
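The setup can be tried out in a scratch directory first, without root; on a real system you would use /etc/cron.minutely instead of $CRONDIR:

```shell
# Sandbox stand-in for /etc/cron.minutely.
CRONDIR=$(mktemp -d)/cron.minutely
mkdir -p "$CRONDIR"

# Drop an executable script into the folder, just as you would for
# /etc/cron.daily.
cat > "$CRONDIR/hello" <<'EOF'
#!/bin/sh
echo "hello from cron.minutely"
EOF
chmod +x "$CRONDIR/hello"

# run-parts executes every executable in the directory, which is what the
# crontab line above asks cron to do every minute. (Fall back to running
# the script directly if run-parts is unavailable.)
run-parts --report "$CRONDIR" 2>/dev/null || sh "$CRONDIR/hello"
```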

Converting a PDF to grey

These days, whenever you fly, you have to print a boarding pass: yet another hurdle to worry about when trying to get to the plane. One of my relatives has a Canon colour printer but, to save on the overpriced ink, does not put a colour cartridge in it. Ryanair, annoyingly, prints key parts of its boarding passes in blue and yellow, so when my relative prints a pass, the key parts come out missing. I couldn't find an option in the Ubuntu or Windows GUIs to successfully print in greyscale, which is odd, as the same printer on my own computer has that option. Anyway, I found a cool solution on the web.

So the following command converts the file input.pdf to greyscale, writing the result to output.pdf:

gs -sOutputFile=output.pdf -sDEVICE=pdfwrite -sColorConversionStrategy=Gray -dProcessColorModel=/DeviceGray -dCompatibilityLevel=1.4 input.pdf < /dev/null
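If you do this regularly, the command is easy to wrap in a small function. The function name pdf2gray is my own invention; the gs flags are the ones from the command above, plus -dNOPAUSE/-dBATCH, which make gs exit without prompting (an alternative to redirecting stdin from /dev/null):

```shell
# Convert a PDF to greyscale: pdf2gray input.pdf [output.pdf]
pdf2gray() {
    in=$1
    out=${2:-${in%.pdf}-gray.pdf}   # default: input.pdf -> input-gray.pdf
    gs -sOutputFile="$out" -sDEVICE=pdfwrite \
       -sColorConversionStrategy=Gray -dProcessColorModel=/DeviceGray \
       -dCompatibilityLevel=1.4 -dNOPAUSE -dBATCH "$in"
}
```

Usage: pdf2gray boardingpass.pdf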

Half Remembered Commands

à propos is French for "on that subject". If you can't remember a command, then you can use apropos, though how you remember that I am not sure! Apropos searches the manual pages for the keyword you give it.

So to see all the commands relating to pdf, type:

apropos pdf | less

Switching between directories

Sometimes I will be working in a directory but then need to go somewhere else briefly and come back. There are various ways to help with this. Using screen is quicker than starting a new virtual terminal, but there are some even quicker ways. Firstly, cd - goes back to the previous directory. A more explicit approach, however, is to use pushd and popd.

So when you want to leave a particular directory, instead of using cd, use pushd:
pushd <directory-path>

Then when you want to go back to where you started, use popd:

popd
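A short worked session (in bash; pushd, popd and dirs are shell builtins, so this won't work in a plain POSIX sh — the paths are illustrative):

```shell
mkdir -p /tmp/demo/project /tmp/demo/elsewhere
cd /tmp/demo/project

pushd /tmp/demo/elsewhere    # jump away; the old directory is pushed on a stack
# ... do some quick work here ...
popd                         # pop the stack: back in /tmp/demo/project

dirs                         # show the directory stack at any time
```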
Running a command later

If you want to run a command at a certain time then use 'at':

$ echo 'touch test.txt' | at 20:45
warning: commands will be executed using /bin/sh
job 1 at Wed Dec  1 20:45:00 2010

There you go. Have you got any tricks or commands you want to share? If so, please leave a reply.

mkimage for Windows

Even if you do not use Linux, you can still roll your own Android firmware update on Windows. Click [here] to download mkimage for Windows.

Posts for Wednesday, December 1, 2010


cvechecker 2.0 released

Okay, enough play – time for a new release. Since cvechecker 1.0 was released, a few important changes have been made to the cvechecker tools:

  • You can now tell cvechecker to only check newly added files, or remove a set of files from its internal database. Previously, you had to have cvechecker scan the entire system again.
  • cvechecker can now also report if vulnerabilities have been found in software versions that are higher than the version you currently have installed. This can help you find seriously outdated software, but also help you identify possible vulnerabilities if the CVE itself doesn’t contain all vulnerable versions, just the “latest” vulnerable version.
  • The toolset now contains a command called cverules which, on a Gentoo system, will attempt to generate version matching rules for software that cvechecker does not yet detect. This is very useful, as I cannot install every possible piece of software on my own system to enhance the version matching rules. If you want to help out, run the cverules command and send me the output.
  • Some needed performance enhancements have been added as well.

One thing I wanted to include as well was a tool that validates cvechecker output against the distribution's security information. Some distributions patch software (to fix a vulnerability) rather than ask the user to upgrade to a non-vulnerable version. The cvechecker tools often cannot differentiate between the vulnerable and non-vulnerable binaries (as they both report the same version), but you can often check the distribution's metadata files to see if and which CVEs have been resolved in which versions of a distribution package.

The cvechecker tarball contains a script for Gentoo (see cvepkgcheck_gentoo in the scripts/ folder) that tries to get this information from the GLSAs, but it is far from ready. I should try setting up a KVM instance with an “old” Gentoo installation just to validate that the command works, but even if it does, I’m not happy with how it is written. It seems like a lot of trouble to me, and if it cannot be done simply, I’m afraid I’m doing it wrong ;-)

Anyhow, I hope you enjoy version 2.0 of cvechecker.

Community :: This Week :: This Week: The Social Web

So back to my series on what I have read lately online.

Rullzer is thinking through how a distributed social network could work. In my opinion, there are already decent protocols such as FOAF (Friend of a Friend). The problem is that if some of the people you want to socialise with are not open-standards warriors, these kinds of independent and open protocols are often not linked to the most popular proprietary social networks. However, you can set up your social broadcast client to post simultaneously to multiple networks, so you can post both to Twitter and to somewhere with decent FOAF support.

Tante outlines the book 'Program or be Programmed' by Douglas Rushkoff. Sounds very much like the theme of this site i.e. you take command of your technology or it takes command of you. I will try to get hold of the book at some point and let you know more.

Tante follows this with a post called Cultural techniques, in which he explains that use of the web has become a fundamental life skill and that without it, opportunities are extremely restricted. You often hear the term 'Digital Divide', which is really a divide between the educated, working, urban young and the older, less educated and unemployed.

I personally think that in the UK, the digital divide is something that could have been avoided, but due to Thatcherite Corporatist dogma, the digital divide was built in to the British Internet.

Until 1982, all communications in the UK were controlled by the state-run post office. The post office provided communication services to the whole population, no matter how rich or poor, how remote or how old. The post was delivered to every house in the kingdom, any household that could afford a phone could have one, and a network of phone boxes was provided for those who did not want or could not afford a phone.

In 1982, the profitable phone part was split off and sold off as British Telecom. In retrospect this was very short sighted. If the post office had been in control when the World Wide Web came along then things could have been very different. Under the post office, a top down plan would have not allowed a digital divide to emerge.

In the same way every house is assigned a post code, every house in the country could be provided with Internet Access and everyone could have been assigned an email address. When Wifi appeared, a national wireless Internet network would be far superior to what we have - a patchy, inefficient and redundant wireless Internet network. There are ten wireless access points broadcasting into my house, consuming electricity 24 hours a day, each providing access to a single house. A national network could have been far more efficient, with wireless routers built into streetlamps, telegraph poles, traffic lights and other existing infrastructure.

The key to a national network is understanding that ISPs are not the important economic benefit of the Internet; they are just the infrastructure, the backbone, the pipe. The really important economic developments are the products and services running on top of the network. A national Wifi network would have allowed legions more businesses to take advantage of web-based opportunities, filling the missing markets we have now.

Instead we have overpriced and under-investing ISPs, who promise a level of service they know they cannot deliver and cut you off if you use the bandwidth claimed in the advertisements. We have already talked about how rubbish 3G mobile broadband is. There is nothing 'fair use' about a bait-and-switch scam, and there is no technological progress in content-based rate-limiting and all the other scams deployed in an attempt to make the broken, fractured system viable.

Moving on, Matija explains the issues involved in leaving various proprietary instant message protocols. Myself, I use IRC - read my article about that here - and find that most people I need to chat with online seem to pick it up quite easily.

In "Is Open Source under Siege? Let's Hope Not!", one-time Brummie Tx points out that many of the long-standing big-brand companies within open source have sold out the open source community and its principles. Tx argues that "it reiterates the importance of individual software contributors to protect themselves", especially by getting involved in projects that allow you to keep your own copyright rather than losing it. Tx also discusses Wikileaks and considers what (if anything) it will mean for the open source community.

Screenshots of early versions of Ubuntu 11.04 are on various sites (e.g. this one). The interesting thing about next year's Ubuntu release is that Ubuntu has decided to use its Unity interface (so far used on netbooks) instead of a standard GNOME desktop. Whether this gets watered down before release remains to be seen. I think it is good to see some innovation; the current interfaces on all the major operating systems have seen only incremental improvements since the Xerox Alto in 1973.

Python 2.7.1/3.1.3 has been released, my favourite change being the ordered dict. I have already been using an ordered dict recipe, but having a proper and optimised ordered dict will be a great plus for Python developers. My second favourite change is silencing, by default, some of the warnings such as DeprecationWarning in production programs; I hate seeing DeprecationWarning in logs and so on. 2.7 is the last major release of the Python 2 series; development is now being focused on the Python 3.x series.

Even more exciting is the release of PyPy 1.4. PyPy is an alternative Python interpreter written in RPython, a restricted subset of Python. It offers lots of performance improvements over standard Python, including just-in-time (JIT) compilation. Version 1.4 now fully supports 64-bit. Below is a graph (source) which compares PyPy (in orange) to normal Python (in blue). You will see that for most tasks, PyPy is much faster:

The downside is that PyPy uses more RAM than CPython, although the PyPy team are currently working on decreasing the difference in RAM.

Talking of numbers, I see myself as a computing humanist rather than a computing scientist; so precise sweating over numbers tends to leave me a bit sleepy. Luckily, Armin Ronacher tries to put numbers into perspective by comparing them. Well worth a read.

So that is what I have read, if you have read or written something cool lately, please leave a reply and tell us all about it.

Posts for Tuesday, November 30, 2010

scriptcmd for Android

Here’s my script for updating the Android firmware. This script must contain the correct file header for your processor.

setenv BMP_ADR 3c00000
fatload mmc 0 $(BMP_ADR) script/hint1_en.bmp
setenv lcdparam 1,30000,8,800,480,48,40,40,3,29,13
setenv pwmparam 0,45,1040,1040
setenv LCDC_FB f900000
logo show -1 0
textout 30 80 "Android update will start after 8 seconds..." ffff00
sleep 1
textout 30 80 "Android update will start after 7 seconds..." ffff00
sleep 1
textout 30 80 "Android update will start after 6 seconds..." ffff00
sleep 1
textout 30 80 "Android update will start after 5 seconds..." ffff00
sleep 1
textout 30 80 "Android update will start after 4 seconds..." ffff00
sleep 1
textout 30 80 "Android update will start after 3 seconds..." ffff00
sleep 1
textout 30 80 "Android update will start after 2 seconds..." ffff00
sleep 1
textout 30 80 "Android update will start after 1 second..." ffff00
sleep 1
setenv text1 'textout 705 458 "  1.9.99 by eradicus" c5c5c5'
run text1
textout 30 80 "Android Update" ffff00
textout -1 -1 "Updating w-load..." ffff00
fatload mmc 0 0 script/wload.bin
erase ffff0000 +10000
cp.b 0 ffff0000 10000
textout -1 -1 "w-load update done!" ff00
textout -1 -1 "Updating u-boot..." ffff00
fatload mmc 0 0 script/u-boot.bin
erase fff80000 +50000
cp.b 0 fff80000 50000
textout -1 -1 "u-boot update done!" ff00
setenv touchic  true
setenv bootdelay 1
setenv audioic  wm9715
setenv touchirq  gpio5
setenv battvoltlist  6830,7086,7310,7503,7575,7636,7720,7861,7953,8018,8190
setenv gpiostate  3
setenv kpadid wms8088b_14
setenv panelres.x 800
setenv panelres.y 480
setenv logocmd 'nand read 3c00000 600000 150000;logo show;run text1'
setenv bootcmd 'nand read 0 0 380000;bootm 0'
setenv bootargs 'mem=237M noinitrd root=/dev/mtdblock9 rootfstype=yaffs2 rw console=ttyS0,115200n8 init=/init lcdid=1 androidboot.console=ttyS0 loadtime=-3'
setenv sd_powerup
setenv sd_powerdown
setenv amp_powerup 0xd811005c|0x4,0xd8110084|0x4,0xd81100ac&~0x4
setenv amp_powerdown 0xd811005c|0x4,0xd8110084|0x4,0xd81100ac|0x4
setenv wifi_powerdown 0xd811005c|0x2,0xd8110084|0x2,0xd81100ac&~0x2
setenv wifi_powerup 0xd811005c|0x2,0xd8110084|0x2,0xd81100ac|0x2
setenv regop $(amp_powerdown),$(wifi_powerdown),D8130054|0x1
setenv basevolt 3300
setenv hibernation_ui no
setenv eth_ui yes
setenv gsensor_axis 0,1,1,-1,2,1
setenv gsensor_int gpio6
setenv gsensor_ui yes
setenv motor_ui yes
setenv photo_ui_slideshow_mode
setenv vibra_start 0xD811005C|0x8,0xD8110084|0x8,0xD81100AC|0x8
setenv vibra_stop 0xD81100AC&~0x8
setenv vibra_enable 0
setenv video_ui_dir_select
setenv 88 1
setenv dw
setenv restore
setenv need_restore_data yes
setenv orientation_ui yes
setenv cam_pre_width 360
setenv cam_pre_height 480
setenv camera_rotate 90
setenv camera_chip sonix
setenv camera_up 5c|0x1,84|0x1,ac|0x1
setenv camera_down ac&~0x1
setenv camera_ui yes
setenv customer_id 1
setenv musicplayer_black_cd yes
setenv enable_hw_scal yes
setenv enable_gome_theme no
setenv modem3g_ui no
setenv pppoe_ui no
setenv release_ver 1.9_88v4c
setenv release_date 20101107
setenv release_language english
setenv bluetooth_ui no
setenv wmt.model 8088b_90_20k
setenv powerhold 1
setenv touchcodec
setenv amp_stop_when_nouse
protect off all
fatload mmc 0 0 script/androidpad.bmp
textout -1 -1 "Updating splash screen..." ffff00
nand write 0 600000 $(filesize)
textout -1 -1 "Splash screen update done!" ff00
fatload mmc 0 0 script/ramdisk_88_en.gz
textout -1 -1 "Updating ramdisk..." ffff00
nand write 0 C00000 $(filesize)
textout -1 -1 "ramdisk update done!" ff00
fatload mmc 0 0 script/uzImage.bin
textout -1 -1 "Updating kernel..." ffff00
nand write 0 0 $(filesize)
textout -1 -1 "Kernel update done!" ff00
textout -1 -1 "Updating file system..." ffff00
setenv bootargs 'mem=237M root=/dev/ram rw initrd=0x01000000,32M console=ttyS0,115200n8 init=/linuxrc lcdid=1 loadtime=-3'
fatload mmc 0 1000000 script/mvl5_v5t_ramdisk_WM8505.090922.loop_en.gz
textout -1 -1 "Please wait..." ff00
bootm 0

To prepend the file header, execute

mkimage -A arm -O linux -T script -C none -a 0 -e 0 -n "info" -d thisscript.txt scriptcmd

Posts for Sunday, November 28, 2010

Cultural techniques

In the last few days I have been thinking about cultural techniques, prompted on the one hand by a recent Chaosradio episode and on the other by the book “Program or be programmed” that I already wrote about some days ago. Cultural techniques are those techniques that you need to have mastered in order to “function properly” in a given culture. Traditionally we have counted among them, for example, the ability to read and write, basic arithmetic, and the ability to understand somewhat abstract representations of data like maps or diagrams.

I and many others argue that we now have at least one new cultural technique called “using the Internet”. This claim is usually quickly countered with the statements that “you can live without the Internet” and that “if the Internet went away tomorrow, society would still stay intact”, but both critiques are very wrong. Let’s look at them in detail.

“You can live without the Internet”. Yes you can. Let me guess, your next sentence is gonna be “My Grandpa doesn’t use the Internet and he lives in this society.”? Not being able to use the Internet effectively is kinda like not being able to read: you can get by and maybe even “function” somewhat, but you are always at a great disadvantage: you are radically limited in your choice of job, and you will spend comparably more of your available resources on the goods and services you want than people who are able to use the Internet. The argument that you can get by without using the Internet falls flat, because you could also get by without writing and reading. Does that make writing and reading no longer basic cultural techniques?

“If the Internet went away tomorrow, society would still be intact”. Yes, we wouldn’t all burn and die. But what would be left is a different beast than what we call our culture today: as I’m writing this, Wikileaks has just published a bunch of secret US documents for everyone to check out, my microblogging client shows me the opinions, discussions and stream of consciousness of thousands of people (well, not that many, cause I don’t follow all that many people) and I am writing and (later) publishing an article about cultural techniques that (again, possibly) hundreds of people will read (in fact it’s more like 10 people and a bunch of robots). Whether you like it or not, the Internet has changed our society irrevocably and we could never go back to the state we were in. The knowledge of what it means to be connected worldwide, and to be able to write and publish without cost, is something that many people would try to rebuild as soon as the Internet “crashed”.

In the discussion about cultural techniques I realized that we often see them as a list that just gets longer and longer; the loss of cultural techniques is mourned: “Nobody writes letters anymore!”, “Kids today can’t even recite one classic poem.”, “Nobody reads books anymore!” Those are all really fucking stupid complaints. Not because those are bad or stupid things, not at all! I love reading books! But the direct value of those things is no longer there. Where books were once one of the few ways to entertain oneself or tickle one’s fantasy, there are many, many other options nowadays that books have to compete with, and people can – without any sort of problem – participate fully in our culture and society without ever reading a book. Or writing a letter, for that matter.

It’s the way the world works: everything changes. Do you know how to build a bow? Or hunt and skin a deer? Can you make fire without a lighter? Some of you might, and those were really important skills a few thousand years ago. But nowadays? Not so much.

It’s this weird idea that whatever we have now is “great and natural” and whatever comes up next year is just “fancy stuff that you can play with but that’s not important”. Whatever comes up in ten years is just “stupid and worthless”.

The set of basic cultural techniques is always changing, morphing, evolving. If it stops doing that, it means we have a big problem, because our society would no longer be changing and evolving (and since I don’t believe in Marxist fantasies, I don’t believe in our society ever reaching a perfect and stable state). It’s human to try to change the world around us; it’s only necessary to change our cultural techniques along the way.


Exherbo Development Workflow, Version 2

My original Exherbo Development Workflow post seems to have become the standard way of doing things. However, it does rather assume that you are developing on most repositories most of the time. When that’s not the case, a new feature named “sync suffixes” may be of use. With sync suffixes, a typical workflow now looks like this:

Repositories are configured as normal, with their sync set to point to the usual remote location. In addition, for any repository you are going to work on, you use a sync suffix to specify a local path too. For example:

sync = git:// local: git+file:///home/users/ciaranm/repos/arbor

where /home/users/ciaranm/repos/arbor is a personal copy of the repository that is entirely unrelated to the checkout Paludis uses.
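In context, the whole repository configuration file might look something like this. This is a hypothetical sketch: the location, format and sync keys are standard Paludis repository configuration, but the exact paths and values depend on your setup, and only the sync line is taken from the post:

```text
# Hypothetical /etc/paludis/repositories/arbor.conf
location = /var/db/paludis/repositories/arbor
format = e
sync = git:// local: git+file:///home/users/ciaranm/repos/arbor
```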

Normally, when you sync, you’ll be syncing against upstream. But when you want to do some work:

  • Update your personal copy of the repository.
  • Work on and commit your changes.
  • Use cave sync --suffix local arbor to sync just that repository, and against your local checkout rather than upstream.
  • Test your changes.
  • Make fixes, commit, sync using the suffix etc until everything works.
  • Use the wonders of git rebase -i to tidy up your work into nice friendly pushable commits.
  • Push or submit a git format-patch for your changes.
  • Go back to syncing without the suffix.

Some things to note:

  • This only really works with Git, and only when using the default ‘reset’ sync mode.
  • You’re never manually modifying any repository which Paludis also modifies.
  • Unlike the original version of this workflow, you only need to keep your personal copies of repositories up to date when you work on them.
  • The suffix feature works on sync_options too, if you need it. Thus, for branches, you can use sync_options = branch-on-upstream local: my-local-branch.

Filed under: exherbo Tagged: exherbo, git, paludis

What's wrong with proprietary IM (ICQ, AIM, YIM, MSN/WLM)

Exactly three months ago I wrote that I'm migrating to XMPP/Jabber and slowly leaving all proprietary IM protocols. That blog post also includes an example of the message I'm leaving my contacts, explaining why I'm doing it.

Today I'm happy to announce that I've also dropped ICQ. This leaves me only with MSN/WLM… but I think in three months' time I'll be XMPP-only already. :)

Without digging too deep, I found the following issues:

  • MSN/WLM — I already mentioned in a comment that MSN/WLM keeps full chat logs forever and analyses them at times. Oddly enough, I found deleting an MSN/WLM/Live account pretty easy.
  • ICQ — being the first non-UNIX IM protocol, I expected a lot less dirt from ICQ than I found. For starters, by accepting its EULA/ToS you agree to give ICQ the copyright and some other rights over the information posted to ICQ, which implies that ICQ may publish, distribute etc. any messages sent through the system that were meant to be private. Stretching it a bit, ICQ may even use that brilliant idea you had in a private chat with your coworker. Relevant excerpt from their EULA/ToS:

    You agree that by posting any material or information anywhere on the ICQ Services and Information you surrender your copyright and any other proprietary right in the posted material or information. You further agree that ICQ Inc. is entitled to use at its own discretion any of the posted material or information in any manner it deems fit, including, but not limited to, publishing the material or distributing it.

    Also there is no way to delete your ICQ account. There are even two FAQ entries to tell you that — for ICQ 7 and for ICQ 6.5.

  • AIM — as I already wrote, AIM complicates deleting your account in some cases (e.g. mine) to such an extent that in practice you cannot delete it.
  • YIM — I still use my Yahoo account as a spam mail account, and there is an option to turn off YIM. It is nice to know that you can delete your account on Yahoo! If you're interested, there's also a leaked Yahoo! Compliance Guide for Law Enforcement that clearly states which private data Yahoo stores about its users, for how long, who may get access, and how much access costs.

In any case, do not count on just deleting your account to delete all your already collected private data, chat logs, etc.

To be fair, even if you use XMPP you have to keep an eye on which provider you choose. For example, at Google (GTalk is just an XMPP server) chat logs were read and misused by one of its employees.

So repeat after me: Reading the EULA, ToS and PP before signing is a smart thing to do. (Actually in general it's a smart thing to read what you sign!)

To conclude, I'd suggest either joining a trusted XMPP server or, better yet, running your own server. Personally I'm very happy with Gabbler, since they promise not to log any data about you, and I would recommend them (sadly they don't accept new accounts at the moment). There are quite a few other XMPP servers out there, though, that provide a similarly sane privacy policy.

hook out → listening to music on my new AKG K330 headphones :3

Posts for Saturday, November 27, 2010

Mansfield University’s IT Doesn’t Respect Students

It all started with an issue in a computer lab in room 216 of Elliot Hall. Over the summer, the IT staff at Mansfield University decided it would be proper to disallow access to the university's own website from this particular lab, citing “security” issues. This is all fine and dandy, since the people who use the lab are Computer Science majors and know the power of proxies. Proxies are less than ideal, but they are necessary if we want to do any school work in a state-funded computer lab. Never mind that the Computer Science Club cannot even access their own server from the lab. Never mind that we cannot check class cancellations, campus news, or even put in our work hours without memorizing an outside URL. Never mind that there are tutors, including me, who need to access the site from inside the lab. What kind of twisted rationale would allow the university to block its own network from a location inside its own campus?

Until now, we have dealt with it. We have realized that Mansfield thinking is backwards, that the bureaucracy controls the university, and that there is nothing us lowly students can do about it. We dealt with it until one day I had enough. The internet access in the lab had slowed to a crawl. Our department head reported that it took him 6 minutes to load his slides from inside the Elliot 216 lab (which he must have hosted outside of his faculty account). His students were unable to complete their lab that day because the internet access was unusable.

After this, I decided to write an e-mail to Alan Johnson, Associate Director of Campus Technologies. Alan has helped me in the past when I have had issues with the Mansfield network. This time I decided to not only ask for a fix for the internet speeds, but also for access to the university site. Alan didn’t get back to me by the next business day, so I sent him a reminder e-mail. He responded stating he was out of the office and that he would look into it when his vacation was over. OK, no problem. I understand that entirely, and it is right that he should not have to work during vacation time. Less than five minutes later, I received a follow-up e-mail from Connie Beckman, Director of Campus Technologies. This is where things became interesting. Below is the text of her e-mail (unedited):

I don’t know what he is trying to do, but it is likely not the intent that he should do it in that lab.  In addition, he is not Dan McKee – the Chair.  Therefore, he should not be asking you to address anything.  Don’t rush to do anything or feel you need to respond.

I know he is a dorm student who thinks he knows a great deal – except the rules.

Connie Beckman, with whom I had never had any contact before (I did not even know who she was), had degraded me. I was outraged, and rightfully so. First off, I am not a dorm student. I do not know what she was implying, but I was certainly only asking for rightful and expedient internet access. Connie and I had many further exchanges, as I expressed my disappointment that she could feel such a way about someone she had never even met or spoken to before. I will include the full text of all e-mail correspondence with Connie Beckman, including my initial inquiry to Alan Johnson, at the bottom of this post.

Connie Beckman should be exhibiting professional behavior towards both the professional staff and towards the students. She is the head of Campus Technologies and she should act as such. Her attitude towards me, other students, and towards the Computer Science department staff is unacceptable. She attempts to force people to adhere to bureaucracy (which is why she mentioned that I am not Dr. Dan McKee, the chair of the Computer Science department) and is generally disrespectful. Unfortunately, little will be done to reprimand her, as she is retiring in a month.

I wrote a letter to the editor to the Flashlight, our campus newspaper. I was told that my story would be published at the next release, but they never published it. I attempted to bring the matter to the president of the university but never received a reply back. Ashley, my fiance, spoke with the president and she claimed that corrective action would be taken against Connie Beckman, but I have not heard of any repercussions for this outburst. I also spoke personally with the provost of the university and he said that he would follow up on the incident, but gave no specifics. Overall the response from the university has been lackluster at best.

I have little to no hope of attaining access in the Elliot 216 lab and I have very little hope that anything will change internally. The university has failed to hear my complaint and has failed to act on an unjust response from a professional. I have personally made my fair share of mistakes and I have been held accountable for them. All I ask is that Connie Beckman be held accountable for her actions.

The following is all correspondence between Connie Beckman, Alan Johnson, and me:

Connie Beckman E-mails (PDF)


Helping with version detection rules in cvechecker

The new development snapshot, available from the cvechecker project site, contains a helper script that returns potential version detection rules for your system if the current cvechecker database doesn’t detect your software. The script is currently available for Gentoo (called cverules_gentoo) but other distributions can be easily added. The actual logic for detection is distribution-agnostic (the script cvegenversdat) so it shouldn’t be too much of a problem for other distributions to be supported as well.

Note that the script isn’t very fast (it isn’t intended to be), nor does it have a very high accuracy rate; after all, it relies on generic regular expressions. The idea is that people running systems with software I don’t have on my own system can help me develop version detection rules by sending me the output of the helper script.
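To make the generic approach concrete, here is a minimal Python sketch of regex-based version guessing. This is a hypothetical illustration of the idea only, not the actual implementation of cvegenversdat; the pattern and function names are invented for the example.

```python
import re

# A deliberately generic pattern: any dotted sequence of digits looks like
# a candidate version string. Real detection rules would be more targeted.
VERSION_RE = re.compile(r'\b(\d+\.\d+(?:\.\d+)*)\b')

def guess_versions(text):
    """Return all version-like strings found in the given text."""
    return VERSION_RE.findall(text)

# Example: scanning a line of output from an installed tool.
sample = "GNU Wget 1.12, a non-interactive network retriever."
print(guess_versions(sample))  # → ['1.12']
```

As the post notes, a pattern this generic will produce false positives (any dotted number matches), which is exactly why human review of the script's suggestions is part of the workflow.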

Next up: a tool to auto-generate (part of) the acknowledgements file for reporting purposes, pulling information from distribution-specific sources. Once that is in, I’ll tag version 2.0 of cvechecker.

Posts for Friday, November 26, 2010


The kde-www war: part 1

In my initial post, I talked about the wall of text. I described some of the symptoms of the wall of text, proclaimed that kde.org is terrible, listed some of the basics of cleaning up text, and gathered some information about the “why” of the problem.

Unfortunately, kde.org is representative of a very large and vibrant community, and although formatting and eyecandy insertions will come in good time, we first have to understand the site’s structure so we can make informed decisions before tidying up small details. kde.org’s wall of text problem is not simply due to a few bad aesthetic choices, but is instead a side-effect of a more fundamental problem in KDE-www’s structure.

When I defined the wall of text issue, I described the problem as boiling down to the essence of what you’re trying to communicate to the audience, and how to present it. So let’s look at what we are trying to communicate to the KDE audience, of which there are essentially two parties:

The uninitiated potential KDE user

The new user is interested in the single question “What is KDE?“. They will want to understand that KDE is a community, and that its product is KDE SC, which is a multidimensional beast full of wonders for both end-users and developers. When this has been answered, we want to tell them “Why is KDE right for me?“, and finally, once they are convinced, “How do I start?“.

New users have a very specific workflow, so we should recognise this, tailor the site to them, and remove any potentially “sidetracking” factoids.

The existing KDE user

The existing KDE user knows what KDE is and is currently using it, but most importantly, the existing user IS KDE. The rebranding effort was not about changing KDE to KDE SC, but instead about separating product from people. Technically, open-source is simply a business model, but in reality, open-source is a philosophy constructed by people. KDE’s challenge is how to turn one of open-source’s most intangible qualities into an axiom for all users.

So let’s talk a bit about KDE instead of KDE: SC. It has a “magazine” of sorts, the Dot, which gives “official” news on ongoing events in KDE. It has an active blogosphere on PlanetKDE, populated largely by the people behind KDE: SC, who report upcoming features, discussions about KDE-related topics, ongoing physical events, and ongoing virtual events. It has a micro-blogosphere, buzz.kde, which highlights recent Flickr and Picasa activity, YouTube videos, Tweets, and Dents. KDE’s community also has the Forums, which serve as a place for discussion, support, and brainstorming. There is a multitude of wikis: Userbase, by and for users; Techbase, by and for developers; and Community, used to organise community activities. There is KDE e.V., which does awesome stuff that isn’t publicised enough, and a variety of groups on social networks such as Facebook. Freenode’s network has a collection of IRC channels where KDE enthusiasts hang out. There is a variety of regional communities which all hold their own KDE-specific events, an entire network of community-contributed KDE resources through the OpenDesktop API, and various other KDE connections through the SocialDesktop.

For your convenience, I’ve bolded what is KDE in the above paragraph. KDE-www, being representative of KDE, must stress that this is what KDE is – firstly by presenting the amazing influx of activity from all of those sources in a digestible form, and secondly by making it easy for any KDE user, old or new, to find out where they belong and how they can add to the community. If you look at KDE-www from this perspective, it’s not hard to come to the conclusion that kde.org is terrible.

But where do we start?

Given such a complex problem, let’s start by mapping out the ideal routes for each user. Here’s the proposal:

When looking at the chart above, notice how we clearly separate KDE from KDE:SC. I would like to highlight that the two final goals for existing users are not mutually exclusive: you can contribute to KDE:SC and at the same time contribute to KDE, as long as you communicate your activity.

Now that we have identified the ideal paths for our target audiences, we can start making informed decisions about restructuring kde.org. But before I get to that in part 2, feel free to add your opinion.

P.S. Some wrong terminology is used above when it comes to KDE:SC – it should be referred to as KDE Software, as SC is more of a technical term used to describe a specific subset of packages in KDE Software.

Related posts:

  1. Help defeat the wall of text.
  2. Marketing noise or marketing contribution?
  3. WIPUP 25.07.10 beta released.

Paludis 0.54.10 Released

Paludis 0.54.10 has been released:

  • A bug in via-binary package ordering has been fixed.
  • ‘cave owner’ now has a --dereference option.
  • We now use libmagic rather than calling the ‘file’ executable to determine whether or not to strip files.

Filed under: paludis releases Tagged: paludis

Posts for Tuesday, November 23, 2010

Paludis 0.54.9 Released

Paludis 0.54.9 has been released:

  • Binary package configuration is now documented, although it is still considered experimental. Various binary-related bugs are fixed.
  • We now display much cleaner errors if output manager creation fails (e.g. if the log directory does not exist).
  • New cave print-unused-distfiles subcommand.
  • We no longer show “no output for X seconds” messages when only one job is running.

Filed under: paludis releases Tagged: paludis

WIPUP 24.11.10b released!

For the uninitiated, WIPUP is a way to share, critique, and track projects. Or, more specifically, works-in-progress. Those of us in the open-source community are constantly working on things, and being open-source, we like to share them.

WIPUP was built and tailored specifically for sharing works-in-progress – ranging from a Twitter-like update to a fully formatted document complete with images, videos, and pastebin support. With WIPUP’s new FreeDesktop-approved OCS (Open Collaboration Services) REST API, it’s one step closer to turning the advanced Linux desktop into a Social Desktop.
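As a rough idea of what consuming an OCS-style REST reply from a desktop client could look like, here is a small Python sketch. The XML payload and field names below are invented for illustration and are not taken from the actual WIPUP API; OCS replies do, however, generally wrap results in an `<ocs>` element with `<meta>` and `<data>` sections.

```python
import xml.etree.ElementTree as ET

# Illustrative only: a made-up OCS-style XML reply. A real client would
# fetch this over HTTP from the service's OCS endpoint.
sample_reply = """<?xml version="1.0"?>
<ocs>
  <meta>
    <status>ok</status>
    <statuscode>100</statuscode>
  </meta>
  <data>
    <project><name>WIPUP</name></project>
  </data>
</ocs>"""

root = ET.fromstring(sample_reply)
status = root.findtext("meta/status")    # "ok" on success
name = root.findtext("data/project/name")
print(status, name)  # → ok WIPUP
```

Because OCS is a shared FreeDesktop specification, a client written once against this envelope format can talk to any OCS-compliant service, which is what makes "Subscribe to this project" buttons inside desktop applications plausible.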

Imagine being able to share what you’re working on straight from KSnapshot, or finding a "Subscribe to this project" or "Track this developer" button in Amarok’s About dialog.

It’s completely free to use and (of course) its entire codebase is open-source.

Check out the release notes, and then try it out if you haven’t already!

Related posts:

  1. WIPUP 27.06.10a released!
  2. WIPUP 25.07.10 beta released.
  3. WIPUP 23.09.10b released!

Planet Larry is not officially affiliated with Gentoo Linux. Original artwork and logos copyright Gentoo Foundation. Yadda, yadda, yadda.