Posts for Tuesday, August 19, 2014


Switching to a new laptop

I’m slowly but surely starting to switch to a new laptop. The old one hasn’t completely died (yet), but given that I had to force its CPU frequency to the lowest setting or the CPU would overheat (and the system would suddenly shut down due to heat issues), and that the connection between the battery and the laptop fails (so even a new battery didn’t help), which meant I couldn’t use it as a laptop… well, let’s say the new laptop is welcome ;-)

Building Gentoo isn’t an issue (having only a few hours per day to work on it is), and while I’m at it, I’m also experimenting with EFI (currently still without secure boot, but with EFI) and such. Considering that the Gentoo Handbook needs quite a few updates (and I’m thinking of doing more than just small updates), knowing how EFI works is a Good Thing™.

For those interested – the EFI stub kernel instructions in the article on the wiki, and also in Greg’s wonderful post on booting a self-signed Linux kernel (which I will do later), work pretty well. I didn’t try out the “Adding more kernels” section in it, as I need to be able to (sometimes) edit the boot options (which isn’t easy to accomplish with EFI stub-supporting kernels, afaics). So I installed Gummiboot (and created a wiki article on it).
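For the record, the Gummiboot part boils down to something like this (a sketch from memory; it assumes the EFI system partition is mounted at /boot, and the entry file name, kernel image name and root device below are made up for the example):

~# emerge sys-boot/gummiboot
~# gummiboot --path=/boot install

~# cat /boot/loader/entries/gentoo.conf
title   Gentoo Linux
linux   /vmlinuz-3.16.0
options root=/dev/sda2 ro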

Lots of things still planned, so little time. But at least building chromium is now a bit faster – instead of 5 hours and 16 minutes, I can now enjoy the newer versions after a little less than 40 minutes.

Posts for Sunday, August 10, 2014

jumping directly into found results in menuconfig

For those who still use menuconfig for configuring their kernel - there's a neat trick which lets you jump directly into a found result.

For example, you would like to add a new driver. Usually you go into menuconfig and start searching for it with the "/" shortcut. What you probably don't know: after you have found your module - say you found the "NetXen Multi port Gigabit Ethernet NIC" by just searching for "xen" - you can go directly to the particular config entry via its number shortcut:
[Screenshot: search results for "xen" in menuconfig]
Notice this line:

[Screenshot: the search result entry prefixed with "(5)", pointing at the QLogic devices config]
The "(5)" is the shortcut. Just press the number 5 on your keyboard and you'll jump directly into the QLogic devices config.
For every entry found there is a number shortcut which lets you jump directly into the given config. If you go back with <Esc><Esc>, you return to the search results.
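The whole flow, with the keystrokes as comments:

$ cd /usr/src/linux
$ make menuconfig
# press "/" to open the search dialog, type "xen" and hit Enter
# press the number shown in "(N)" in front of a result to jump to that entry
# press <Esc><Esc> to get back to the search results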

I think not many people know this trick and I hope someone can use it for further kernel builds ;)

Posts for Saturday, August 9, 2014


Some changes under the hood

In between conferences, technical writing jobs and traveling, we did a few changes under the hood for SELinux in Gentoo.

First of all, new policies are bumped and also stabilized (2.20130411-r3 is now stable, 2.20130411-r5 is ~arch). These have a few updates (merges from upstream), and r5 also has preliminary support for tmpfiles (at least the OpenRC implementation of it), which is made part of the selinux-base-policy package.

The ebuilds to support new policy releases are now relatively simple copies of the live ebuilds (which always contain the latest policies), so that bumping (either by me or by other developers) is easy enough. There’s also a release script in our policy repository which tags the right git commit (the point at which the release is made), creates the necessary patches, uploads them, etc.

One of the changes made is to “drop” the BASEPOL variable. In the past, BASEPOL was a variable inside the ebuilds that pointed to the right patchset (and base policy), as we initially supported policy modules of different base releases. However, that was a mistake, and we quickly moved to bumping all policies with every release, but kept the BASEPOL variable. Now, BASEPOL is “just” the ${PVR} value of the ebuild and so no longer needs to be provided. In the future, I’ll probably remove BASEPOL from the internal eclass and the selinux-base* packages as well.

A more important change to the eclass is support for the SELINUX_GIT_REPO and SELINUX_GIT_BRANCH variables (for live ebuilds, i.e. those with the 9999 version). If set, then they pull from the mentioned repository (and branch) instead of the default hardened-refpolicy.git repository. This allows for developers to do some testing on a different branch easily, or for other users to use their own policy repository while still enjoying the SELinux integration support in Gentoo through the sec-policy/* packages.
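As an illustration, pointing the live ebuilds at your own repository could look like this (a sketch; the repository URL and branch name are made up, and putting the variables in make.conf rather than the environment is my assumption):

SELINUX_GIT_REPO="git://git.example.com/my-hardened-refpolicy.git"
SELINUX_GIT_BRANCH="my-test-branch"

followed by emerging the 9999 versions of the sec-policy/* packages to be tested.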

Finally, I wrote up a first attempt at our coding style, heavily based on the coding style from the reference policy of course (as our policy still follows this upstream project). This should allow the team to work better together and to decide on namings autonomously (instead of hours of discussing and settling on something as silly as an interface or boolean name ;-)

Posts for Friday, August 8, 2014

The Jamendo experiment – “week” 1

As forecast in a previous blog post, this is the first "weekly" report from my Jamendo experiment. In the first part I will talk a bit about the player that I use (Amarok); after that comes a short report on where I get my music fix now and how it fares; and at the end I will introduce some artists and albums that I found on Jamendo and like.

Amarok 2.0.2 sadly has a bug that makes it lack some Jamendo albums. This makes searching and playing Jamendo albums directly from Amarok a bit less than perfect and forces me to still use Firefox (and Adobe Flash) to browse music on Jamendo. Otherwise, Amarok with its version 2.x has become an amazing application, or even platform if you will, not only for playing and organising, but also for discovering new music. You can even mix your local collection with tracks from web services and even streams in the same playlist.

Most of the music I got directly from Jamendo, a bit less I listened to online from Magnatune, and the rest was streams from Last.FM (mostly from my recommendations). As far as music on Jamendo and Magnatune goes – both offer almost exclusively CC licensed music – I honestly found it equally as good, if not better, than what conservative record labels and stations offer. This could in part be because of my music taste, but even so, I am rather picky with music. As far as the quality of the sound is concerned, being able to download music in Ogg/Vorbis (quality 7) made me smile, and my ears as well. If only I had a better set of headphones!

Now here's the list of artists that I absolutely must share:

[Embedded Jamendo player: http://widgets.jamendo.com/v3/artist/7977]

Jimmy the Hideous Penguin – Jimmy Penguin is by far my absolute favorite artist right now! His experimental scratching style over piano music is just godly to my ears – the disrhythmia that his scratching brings over the standard hip hop beats, piano and/or electronica is just genius! The first album that made me fall in love was Jimmy Penguin's New Ideas – it starts with six tracks called ff1 to ff6, with already the first one (ff1) showing a nice melange of broken sampling layered with a melody, and even over that lies some well placed scratching. The whole album is amazing! From the previously mentioned ff* tracks, apart from ff1, I would especially like to put ff3 and ff4 into the limelight. ff6 (A Long Way to Go) and Polish Jazz Thing bear some jazz elements as well, while Fucking ABBA feels like flirting with R&B/UK garage. On the other hand, the album Split Decisions has more electronic elements in it and feels a bit more meditative, if you will. The last of his albums that I looked at was Summer Time, which I have not listened to thoroughly enough, but so far I like it a lot, and it's nice to see Jimmy Penguin take on even more styles, as the track Jimmy Didn't Name It has some unmistakable Asian influences.

[Embedded Jamendo player: http://widgets.jamendo.com/v3/album/42122]

No Hair on Head – very enjoyable lounge/chillout electronica. Walking on Light is the artist's first album and is a collection of some of his tracks made in the past 5 years. It's great to see that outside-mainstream artists are still trying to make albums that make sense – consistent in style, but still diverse enough – and this album is just such. The first track Please! is not a bad start into the album, Inducio is also a nice lively track, but what I think could be hits are the tracks Anywhere You Want and Fiesta en Bogotá – the first one starts rather standard, but then develops into a very nice pop-ish, almost house-like summery electronic song with tongue-in-cheek lyrics; the latter features an accordion and to me feels somehow like driving through Provence or Karst (although Bogotá actually lies in Colombia).

[Embedded Jamendo player: http://widgets.jamendo.com/v3/album/35414]

Electronoid – great breakbeat! If you like Daft Punk's album Homework or less popular tracks by the Chemical Brothers, you will most probably enjoy Electronoid (album) as well.

[Embedded Jamendo player: http://widgets.jamendo.com/v3/album/26195]

Morning Boy – great mix of post punk with pop-ish elements. On their album For us, the drifters. For them, the Bench, the song Maryland reminds me of Dinosaur Jr., while Whatever reminds me of Joan of Arc with added pop. All Your Sorrows, though, is probably the track I like best so far – it just bursts with positive attitude while still being somewhat mellow.

Bilk (archived) – fast German pop punk with female vocals that borders on the Neue Deutsche Welle music movement from the '80s. Their album Ich will hier raus (archived) is not bad and might even compare to better-known contemporary artists like Wir sind Helden. Update: Sadly they removed themselves from Jamendo; they have their own website now, but unfortunately there is no licensing info available about the music.

[Embedded Jamendo player: http://widgets.jamendo.com/v3/artist/1235]

Ben Othman – so far I have listened to two of his albums, namely Lounge Café Tunis "Intellectuel" and Lounge Café Tunis "Sahria"; they consist of good lounge/chillout music with at times very present Arabic influences.

[Embedded Jamendo player: http://widgets.jamendo.com/v3/album/830]

Silence – this seems like a very popular artist, but so far I only managed to skim through the album L'autre endroit. It seems like a decent mix of trip-hop with occasional electric guitars and other instruments. Sometimes it bears elements of IDM and/or dark or industrial influences. I feel it is too early for me to judge if it conforms to my taste, but it looks like an artist to keep an eye on.

[Embedded Jamendo player: http://widgets.jamendo.com/v3/album/2572]

Project Divinity – enjoyable, very calm ambient new age music. The mellowness and openness of the album Divinity is very easy on the ears and cannot be anything else than calming.

[Embedded Jamendo player: http://widgets.jamendo.com/v3/artist/337741]

SoLaRis – decent goatrance, sometimes even wading into the dark psytrance waters.

[Embedded Jamendo player: http://widgets.jamendo.com/v3/artist/346674]

Team9 – after listening to some of their tracks on Jamendo, I decided to download their full album We Don't Disco (for free, under the CC-BY-SA license) from their (archived) homepage. Team9 is better known for their inventive remixes of more famous artists' songs, but their own work is at least equally amazing! They describe themselves as "melodic, ambient and twisted" and compare themselves to "Vangelis and Jean Michel Jarre taking Royksopp and Fad Gadget out the back of the kebab shop for a smoke" – both descriptions suit them very well. The whole album is great; maybe the title track We Don't Disco Like We Used To and the track Aesthetic Athletics stand out a bit more because they feel a bit more oldskool and disco-ish than the rest, but quality-wise the rest of the tracks are just as amazing!

As you can see, listening only to free (as in speech, not only as in beer) music is not only possible, but quite enjoyable! There is a real alternative out there! Tons of great artists are just waiting to be listened to – and that ultimately is what music is all about!

hook out → going to bed…

Posts for Wednesday, August 6, 2014

How to write your Pelican-powered blog using ownCloud and WebDAV

Originally this HowTo was part of my last post – a lengthy piece about how I migrated my blog to Pelican. As this specific modification might be more interesting than reading the whole thing, I decided to fork and extend it.

What and why?

What I was trying to do is to be able to add, edit and delete Pelican content from anywhere, so whenever inspiration strikes I can simply take out my phone or open up a web browser and create a rough draft. Basically a makeshift mobile and desktop blogging app.

I decided that the easiest way to do this was to access my content via WebDAV through ownCloud, which runs on the same server.

Why not Git and hooks?

The answer is quite simple: because I do not need it and it adds another layer of complication.

I know many use Git and its hooks to keep track of changes as well as for backups and for pushing from remote machines onto the server. And that is a very fine way of running it, especially if there are several users committing to it.

But for the following reasons, I do not need it:

  • I already include this page with its MarkDown sources, settings and the HTML output in my standard RSnapshot backup scheme of this server, so no need for that;
  • I want to sometimes draft my posts on my mobile, and Git and Vim on a touch-screen are just annoying to use¹;
  • this is a personal blog, so the distributed VCS side of Git is just an overhead really;
  • there is no added benefit to sharing the MarkDown sources on-line, if all the HTML sources are public anyway.

Setting up the server

Pairing up Pelican and ownCloud

In ownCloud it is very easy to mount external storage, and a folder local to the server is still considered “external” as it is outside of ownCloud. Needless to say, there is a nice GUI for that.

Once you open up the Admin page in ownCloud, you will see the External Storage settings. For security reasons only admins can mount a local folder, so if you aren’t one, you will not see Local as an option and you will have to ask your friendly ownCloud sysAdmin to add the folder from his Admin page for you.

If that is not an option, on a GNU/Linux server there is an easy, yet hackish solution as well: just link Pelican’s content folder into your ownCloud user’s file system – e.g.:

ln -s /var/www/matija.suklje.name/content/ /var/www/owncloud/htdocs/data/hook/files/Blog

In order to have the files writeable over WebDAV, they need to have write permission for the user that PHP and the web server are running under – e.g.:

chown -R nginx:nginx /var/www/owncloud/htdocs/data/hook/files/Blog/

Automating page generation and ownership

To have pages constantly automatically generated, there is an option to call pelican --autoreload, and I did consider turning it into an init script, but decided against it for two reasons:

  • it consumes too much CPU power just to check for changes;
  • as on my poor ARM server a full (re-)generation of this blog takes about 6 minutes², I did not want to hammer my system every time I save a minor change.

What I did instead was to create an fcronjob to (re-)generate the website every night at 3 in the morning (and send a mail to root’s default address), under the condition that blog posts have either been changed in content or added since yesterday:

%nightly,mail * 3 cd /var/www/matija.suklje.name && posts=(content/**/*.markdown(Nm-1)); if (( $#posts )) LC_ALL="en_GB.utf8" make html

Update: the above command is changed to use Zsh; for the old sh version, use:

%nightly,mail * 3 cd /var/www/matija.suklje.name && [[ `find content -iname "*.markdown" -mtime -1` != "" ]] && LC_ALL="en_GB.utf8" make html
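To unpack the Zsh glob in the first version: content/**/*.markdown recursively matches all MarkDown sources, the (N) qualifier (NULL_GLOB) makes the glob expand to nothing instead of raising an error when nothing matches, and (m-1) keeps only files modified within the last day, so the if (( $#posts )) test runs make html only when something actually changed.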

In order to have the file permissions on the content directory always correct for ownCloud (see above), I changed the Makefile a bit. The relevant changes can be seen below:

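# chown first, so that the content stays writable by the web-server user (for ownCloud/WebDAV)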
html:
    chown -R nginx:nginx $(INPUTDIR)
    $(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

clean:
    [ ! -d $(OUTPUTDIR) ] || rm -rf $(OUTPUTDIR)

regenerate:
    chown -R nginx:nginx $(INPUTDIR)
    $(PELICAN) -r $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

E-mail draft reminder

Not directly relevant, but still useful.

In order not to forget any drafts unattended, I have also set up an FCron job to send me an e-mail with a list of all unfinished drafts to my private address.

It is a very easy hack really, but I find it quite useful to keep track of things – find the said fcronjob below:

%midweekly,mailto(matija@suklje.name) * * cd /var/www/matija.suklje.name/content/ && ack "Status: draft"

Client software

ownNotes

As a mobile client I plan to use ownNotes, because it runs on my Nokia N9 and supports MarkDown highlighting out-of-the-box.

All I needed to do in ownNotes is to provide it with my ownCloud log-in credentials and state Blog as the "Remote Folder Name" in the preferences.

But before I can really make use of ownNotes, I have to wait for it to start properly managing file-name extensions.

ownCloud web interface

Since ownCloud includes a webGUI text editor with MarkDown highlighting out of the box, I sometimes use that as well.

An added bonus is that the Activity feed of ownCloud keeps a log of when which file changed or was added.

It does not seem possible yet to collaboratively edit files other than ODT in ownCloud’s webGUI, but I imagine that might be the case in the future.

Kate via WebDAV

In many other desktop environments it is child’s play to add a WebDAV remote folder — just adding a link to the file manager should be enough, e.g.: webdavs://thatfunkyplace.wheremymonkeyis.at:443/remote.php/webdav/Blog.

KDE’s Dolphin makes it easier for you, because all you have to do is select Remote → Add remote folder, and if you already have a connection to your ownCloud with some other service (e.g. Zanshin and KOrganizer for WebCal), it will suggest all the details to you if you choose Recent connection.

Once you have the remote folder added, you can use it transparently all over KDE. So when you open up Kate, you can simply navigate the remote WebDAV folders, open up the files, edit and save them as if they were local files. It really is as easy as that! ☺

Note: I probably could have also used the more efficient KIO FISH, but I have not bothered with setting up a more complex permission set-up for such a small task. For security reasons it is not possible to log in via SSH using the same user the web server runs under.

SSH and Vim

Of course, it is also possible to ssh into the web server, su to the correct user, edit the files with Vim, and let FCron and the Makefile make sure the ownership is set appropriately.

hook out → back to studying Arbitration law


  1. Yes, I am well aware you can run Vim and Git on MeeGo Harmattan and I do use it. But Vim on a touch-screen keyboard is not very fun to use for brainstorming. 

  2. At the time of writing this blog includes 343 articles and 2 pages, which took Pelican 440 seconds to generate on my poor little ARM server (on a normal load). 

Posts for Tuesday, August 5, 2014

kmscon - next generation virtual terminals

KMSCON is a simple terminal emulator based on linux kernel mode setting (KMS). It can replace the in-kernel VT implementation with a userspace console. It's a pretty new project and still very experimental.
Even though Gentoo provides an ebuild, it's rather rudimentary, and it's better to use the live ebuild from [1] plus the libtsm package, which kmscon needs, from [2]. Personally I've added those ebuilds to my private overlay.

Don't forget to unmask/keyword the live ebuilds first.
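A sketch of the keywording (the exact file layout under /etc/portage/ is up to you; the ** keyword is what accepts live ebuilds):

# echo "=sys-apps/kmscon-9999 **" >> /etc/portage/package.accept_keywords
# echo "=dev-libs/libtsm-9999 **" >> /etc/portage/package.accept_keywords

After that, emerging it should look roughly like this: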
# emerge -av =sys-apps/kmscon-9999

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild R *] sys-apps/kmscon-9999::local USE="drm fbdev gles2 optimizations pango unicode -debug -doc -multiseat -pixman -static-libs -systemd" 0 kB

Total: 1 package (1 reinstall), Size of downloads: 0 kB

After successfully emerging kmscon it's pretty simple to start a new vt with (as root):
# kmscon --vt=8 --xkb-layout=de --hwaccel

This starts kmscon on vt8 with hardware acceleration on and a German keyboard layout.

If you're feeling experimental, you can add (or replace) an additional virtual terminal in your inittab. A line like the following should suffice to start kmscon every time you boot your system:
c11:2345:respawn:/usr/bin/kmscon --vt=8 --xkb-layout=de --hwaccel
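(In that line, c11 is the entry id, 2345 are the runlevels it applies to, and respawn makes init restart kmscon whenever it exits.)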


I've tested it with my AMD cards (r600g and radeonsi) and it worked with some minor output corruptions. However, in certain cases it is already faster than agetty, for example when printing dmesg output. So far it looks really promising; sadly, development seems to be really slow. You'll find the git repository here: [3]

[1] https://bugs.gentoo.org/show_bug.cgi?id=490798
[2] https://bugs.gentoo.org/show_bug.cgi?id=487394
[3] http://cgit.freedesktop.org/~dvdhrm/kmscon/

Posts for Friday, August 1, 2014


Gentoo Hardened July meeting

I failed to show up myself (I fell asleep – kids are fun, but deplete your energy source quickly), but that shouldn’t prevent me from making a nice write-up of the meeting.

Toolchain

GCC 4.9 gives some issues with kernel compilations and other components. Lately, breakage has been reported with GCC 4.9.1 compiling MySQL or with debugging symbols. So for hardened, we’ll wait this one out until the bugs are fixed.

For GCC 4.10, the --enable-default-pie patch has been sent upstream. If that is accepted, the SSP one will be sent as well.

In uclibc land, stages are being developed for PPC. This is the last architecture commonly used in the embedded world that still needed this support in Gentoo, and that's now being finalized. Go blueness!

SELinux

A libpcre upgrade broke relabeling operations on SELinux enabled systems. A fix for this has been made part of libselinux, but a little too late, so some users will be affected by the problem. It’s easily worked around (removing the *.bin files in the contexts/files/ directory of the SELinux configuration) and hopefully will never occur again.
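In command form, the workaround is roughly this (a sketch for the strict policy store; repeat for the other stores like targeted, mcs and mls):

~# rm /etc/selinux/strict/contexts/files/*.bin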

The 2.3 userland has finally been stabilized (we had a few dependencies that we were waiting for – and we were a dependency ourselves for other packages as well).

Finally, some thought and discussion is being put into the SELinux policy within Gentoo (and the principles behind it that we'll follow) – not that there's much feedback on it, but every documented step is a good step imo.

Kernel and grsecurity / PaX

Due to some security issues, the Linux kernel sources have been stabilized more rapidly than usual, which left little time for broad validation and regression testing. Updates and fixes have been applied since and new stabilizations occurred. Hopefully we’re now at the right, stable set again.

The C-based install-xattr application (which is performance-wise a big improvement over the Python-based one) is working well in “lab environments” (some developers are using it exclusively). It is included in the Portage repository (if I understand the chat excerpts correctly) but as such not available for broader usage yet.

An update to elfix was made as well, as there was a dependency mismatch when building with USE=-ptpax. This will be corrected in elfix-0.9.

Finally, blueness is also working on a GLEP (Gentoo Linux Enhancement Proposal) to export VDB information (especially NEEDED.ELF.2) as this is important for ELF/library graph information (as used by revdep-pax, migrate-pax, etc.). Although Portage already does this, this is not part of the PMS and as such other package managers might not do this (such as Paludis).

Profiles

Updates to the profiles have been made to properly include multilib related variables and other metadata. For some profiles, this went as easily as expected (nice stacking), but other profiles have inheritance troubles making it much harder to include the necessary information. Although some talks have arisen on the gentoo-dev mailinglist about refactoring how Gentoo handles profiles, not much more than talking has been done :-( But I’m sure we haven’t heard the last of this yet.

Documentation

Blueness has added information on EMUTRAMP in the kernel configuration, especially noting to the user that it is needed for Python support in Gentoo Hardened. It is also in the PaX Quickstart document, although that document is becoming a very large one and users might overlook it.
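For reference, this is the option as it would show up in a kernel .config (a sketch; double-check the exact symbol in your kernel's PaX options):

CONFIG_PAX_EMUTRAMP=y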

Posts for Thursday, July 31, 2014

The right tool for the job

Every subculture, even most smaller groups establish practices that are typical for said subculture or group. They often emerge within the foundations of the group itself or the background of an influential part of the members. A group of historians will probably tackle problems in a different way than engineers would for example: Where the historians might look for similarities in structure between the current issue and the past, engineers would try to divide the problem up into smaller and smaller units of work, assign them and hope that by assembling all the parts a solution will be created. Obviously the previous example was slightly exaggerated and simplified but you catch my drift. The people or the “culture” a group emerged from influence massively the set of tools the group has to interact with the world.

These tools exist on many levels. They can be physical objects like with a group of mechanics bringing actual tools from their workshops into the group. There are digital tools such as publication software or networked democracy/liquid democracy tools. The tools can be intellectual: Specific methods to process information or analyze things. Social tools can help organize and communicate. The list goes on and on.

Today I want to talk about the intellectual or procedural tools of a certain subculture¹ that I do have my run-ins with: the hackers. Not the “let’s break shit and steal money like they do in cheesy movies” type, but the “we are fighting for digital civil liberties and free software and crypto for everyone and shit” type. The type that can probably best be defined as: people unwilling to always follow the instructions that things come with, especially technical things.

While the myth of the evil hackers destroying everything is still very powerful, especially within mainstream media, that subculture has – even given all the problems and issues raging through that scene² – gotten kind of a tough job these days. Because we as a society are overwhelmed by our own technical progress.

So we’ve kinda stumbled on this nice thing that some scientists developed to share information, and we realized: wow, I can copy all kinds of music and movies, I can share information and publish my own creative works! And others found that thing interesting as well, bolted some – not always³ beautifully designed – interfaces and technologies onto that “Internet” thing and used it to sell books and clothes and drugs and bitcoins to a global customer base.

Obviously I simplified things again a little. But there’s no denying that the Internet changed many many aspects of our life with shopping only being one of them. Global companies could suddenly move or spread data (and themselves) to different locations in zero-time circumventing in many cases at least parts of the legal system that was supposed to protect the people against their actions. Established social rules such as copyright or privacy came under pressure. And then there was the intelligence community. What a field trip they had!

All the things that used to be hard to gather, that could only be acquired through deploying agents and time and money, conversations and social graphs and “metadata” could be gathered, stored and queried. Globally. All the time. The legal system supposed to protect the people actually gave them the leverage to store all data they could get their hands on. All for the good of the people and their security.

So here we are with this hot and flaming mess and we need someone, anyone to fix it. To make things ok. So we ask the hackers because they actually know, understand and – more often than many want to admit – build the technology causing problems now. And they tried to come up with solutions.

The hacker subculture is largely and dominantly shaped by a related group of people: Security specialists. To be able to assess and test the security of a technical system or an algorithm you really need to understand it and its environment at a level of detail that eludes many people. The problems the security community have to deal with are cognitively hard and complex, the systems and their interactions and interdependencies growing each day. The fact that those security holes or exploits can also be worth a lot of money to someone with … let’s say flexible ethics also informed the competitiveness of that scene.

So certain methods or MOs developed. One very prominent one that has influenced the hacker culture a lot is the “break shit in a funny way” MO. It goes like this: You have something that people (usually the people selling it) claim to be secure. Let’s say a voting machine or an iris scanner on a new smartphone. In come the hackers. They prod the system, poke it with sticks and tools until they get the voting machine to play pong and the iris scanner to project My Little Pony episodes. They break shit.

This leads to (if you are somewhat tech savvy) very entertaining talks at hacker conferences where the ways of how to break it are displayed. Some jokes at the expense of the developers are thrown in and it usually ends with a patch, a technical solution to the problem, that does at least mitigate the worst problems. Hilarity ensues.

But herein lies the problem. The issues we have with our political system, with the changes that tech brought to the social sphere, are not easily decomposed into modules, broken and fixed with some technological patch. Showing that the NSA listens to your stuff, and how they do it, is all fine and dandy, but the technical patch, the bazillion crypto tools that are released every day, don’t address the issues at hand – the political questions, the social questions.

That’s not the fault of the hacker scene really. They did their job, analyzed what happened and sometimes could even provide fixes. But building new social or legal concepts really isn’t in their toolbox. When forced, they have to fall back on things such as “whistleblowing” as a catchall, which really is no replacement for political theory. Obviously there are hackers who are also political, but it’s not genuine to the subculture, nothing belonging to them.

In Germany we can see that every day within the politically … random … actions of the Pirate Party who recruited many of their members from said hacker culture (or related subcultures). They think in systems and patches, talk about “a new operating system for democracy”. Even the wording, the framing shows that they don’t think in political terms but in their established technical phrases. Which again isn’t their fault, it’s what every subculture does.

Hackers can do a lot for our societies. They can help officials or NGOs to better understand technology and maybe even its consequences. They just might not in general be the right people to talk to when it comes to building legal or social solutions.

The different subcultures in a society all contribute different special skill sets and knowledge to the discourse. It’s about bringing all the right people and groups to the table in every phase of the debate. That doesn’t mean that people should be excluded but that certain groups or subcultures should maybe take the lead when it comes to the domains they know a lot about.

Use the right tool for the job.

Header image by: Ivan David Gomez Arce

  1. if it actually is a subculture which we could debate but let’s do that another time
  2. I’m not gonna get into it here, it’s a topic for another text that I’m probably not going to write
  3. as in never


Posts for Friday, July 25, 2014

On whistleblowing

As some might know, I spent the last week in New York attending the HOPE conference. Which was, btw., one of the more friendly and diverse conferences I have been to, and which I enjoyed a lot, not just because of its awe-inspiring location.

It was not surprising that the session program would put big emphasis on whistleblowing. Edward Snowden’s leaks have pretty much defined the last year when it came to tech-related news. HOPE contextualized those leaks by framing Snowden with the famous US whistleblowers Thomas Drake and Daniel Ellsberg, who both have had immense impact with their leaks: Drake had leaked information on NSA programs violating many US laws, Ellsberg had released the “Pentagon papers” proving that the public had been lied to by different US governments when it came to the Vietnam war. Ellsberg, Drake, Snowden. Three whistleblowers, three stories of personal sacrifice and courage¹. Three stories about heroes.

All of them stressed how important better infrastructure for leaks was. How important it was that the hacker community provide better tools and tutorials that help keep informers anonymous and protected. How central it was to make OpSec (operations security) easier for journalists and potential whistleblowers. Especially Snowden voiced how well he understood people not leaking anything when faced with the complete destruction of their lives as they know them.

And the community did actually try to deliver: SecureDrop was presented as a somewhat simpler way for journalists to supply a drop site for hot documents, and the Minilock project is supposed to make the encryption of files much easier and less error-prone.

But in between the celebration of the courage of individuals and tools helping such individuals something was missing.

Maybe it was the massive presence of Snowden, or maybe the constant flow of new details about his leaks, but in our focus on and fascination with the whistleblower(s) and their work, we as a community have somewhat forgotten to think about politics and policies, about what it actually is that “we” want.

Whistleblowing can be important, can change the world actually. But it is not politics. Whistleblowing can be the emergency brake for political processes and structures. But sadly nothing more.

Just creating some sort of transparency (and one could argue that Snowden’s leak has not really created even that since just a selected elite of journalists is allowed to access the treasure chest) doesn’t change anything really. Look at the Snowden leaks: One year full of articles and columns and angry petitions. But nothing changed. In spite of transparency things are mostly going on as they did before. In fact: Certain governments such as the Germans have talked about actually raising the budget for (counter)intelligence. The position of us as human beings in this cyberphysical world has actually gotten worse.

Simple solutions are really charming. We need a few courageous people. And we can build some tech to lower the courage threshold, tools protecting anonymity. Problem solved, back to the playground. We’ve replaced political theory, structures, activism and debate with one magic word: Whistleblowing. But that’s not how it works.

What happens after the leak? Why do we think that a political system that has created and legitimized the surveillance and intelligence state times upon times would autocorrect itself just because we drop some documents into the world? Daniel Ellsberg called it “telling the truth with documents”. But just telling some truth isn’t enough.

It’s time to stop hiding behind the hope for whistleblowers and their truth. To stop dreaming of a world that would soon be perfect if “the truth” is just out there. That’s how conspiracy nuts think.

“Truth” can be a resource to create ideas and policy from. To create action. But that doesn’t happen automagically and it’s not a job we can just outsource to the media because they know all that weird social and political stuff. Supporting the works of whistleblowers is important and I was happy to see so many initiatives, but they can get us at most a few steps forward on our way to fixing the issues of our time.

Header image by: Kate Ter Haar

  1. I have written about the problem I have with the way Snowden is framed (not him as a person or with his actions) here


Posts for Sunday, July 13, 2014


Anonymous edits in Hellenic Wikipedia from Hellenic Parliament IPs

Inspired by another project called “Anonymous Wikipedia edits from the Norwegian parliament and government offices”, I decided to create something similar for the Hellenic Parliament.

I downloaded the XML dumps (elwiki-20140702-pages-meta-history.xml.7z) for the elwiki from http://dumps.wikimedia.org/elwiki/20140702/. The compressed file is less than 600Mb but uncompressing it leads to a 73Gb XML which contains the full history of edits. Then I modified a parser I found on this blog to extract the data I wanted: Page Title, Timestamp and IP.

Then it was easy to create a list that contains all the edits that have been created by Hellenic Parliament IPs (195.251.32.0/22) throughout the History of Hellenic Wikipedia:
The list https://gist.github.com/kargig/d2cc8e3452dbde774f1c.
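Filtering those tuples down to the parliament range is then a one-liner. A sketch, assuming the parser wrote tab-separated title/timestamp/IP lines to a (hypothetical) elwiki-edits.tsv; 195.251.32.0/22 spans 195.251.32.0 through 195.251.35.255:

$ awk -F'\t' '$3 ~ /^195\.251\.3[2-5]\./' elwiki-edits.tsv > parliament-edits.tsv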

Interesting edits

  1. Former Prime Minister “Κωνσταντίνος Σημίτης”
    An IP from inside the Hellenic Parliament tried to remove the following text at least 3 times on 17-18/02/2014. This is a link to the first edit: Diff 1.

    Για την περίοδο 1996-2001 ξοδεύτηκαν 5,2 τρις δρχ σε εξοπλισμούς. Οι δαπάνες του Β` ΕΜΠΑΕ (2001-2006) υπολογίζεται πως έφτασαν τα 6 με 7 τρις δρχ.<ref name="enet_01_08_01">[http://www.enet.gr/online/online_hprint?q=%E5%EE%EF%F0%EB%E9%F3%EC%EF%DF&a=&id=71538796 ''To κόστος των εξοπλισμών''], εφημερίδα ”Ελευθεροτυπία”, δημοσίευση [[1 Αυγούστου]] [[2001]].</ref>Έπειτα απο τη σύλληψη και ενοχή του Γ.Καντά,υπάρχουν υπόνοιες για την εμπλοκή του στο σκάνδαλο με μίζες από Γερμανικές εταιρίες στα εξοπλιστικά,κάτι το οποίο διερευνάται απο την Εισαγγελία της Βρέμης.

    (Roughly: “For the period 1996-2001, 5.2 trillion GRD were spent on armaments. The expenses of the 2nd EMPAE (2001-2006) are estimated to have reached 6 to 7 trillion GRD. [ref: ‘The cost of the armaments’, Eleftherotypia newspaper, published 1 August 2001.] Following the arrest and guilt of G. Kantas, there are suspicions of his involvement in the scandal of kickbacks from German companies in armament deals, which is being investigated by the Bremen prosecutor’s office.”)

  2. Former MP “Δημήτρης Κωνσταντάρας”
    Someone modified his biography twice. Diff Links: Diff 1 Diff 2.
  3. Former football player “Δημήτρης Σαραβάκος”
    In the following edit someone updated this player’s bio adding that he ‘currently plays in porn films’. Diff link. The same editor seems to have removed that reference later, diff link.
  4. Former MP “Θεόδωρος Ρουσόπουλος”
    Someone wanted to update this MP’s bio and remove some reference of a scandal. Diff link.
  5. The movie “Ραντεβού με μια άγνωστη”
    Claiming that the nude scenes are probably not from the actor named “Έλενα Ναθαναήλ”. Diff link.
  6. The soap opera “Χίλιες και Μία Νύχτες (σειρά)”
    Someone created the first version of the article on this soap opera. Diff Link.
  7. Politician “Γιάννης Λαγουδάκος”
    Someone edited his bio so it seemed that he would run for MP with the political party called “Ανεξάρτητοι Έλληνες”. Diff Link
  8. University professor “Γεώργιος Γαρδίκας”
    Someone edited his profile and added a link for amateur football team “Αγιαξ Αιγάλεω”. Diff Link.
  9. Politician “Λευτέρης Αυγενάκης”
    Someone wanted to fix his bio and upload a file, so he/she added a link from the local computer “C:\Documents and Settings\user2\Local Settings\Temp\ΑΥΓΕΝΑΚΗΣ”. Diff link.
  10. MP “Κώστας Μαρκόπουλος”
    Someone wanted to fix his bio regarding his return to the “Νέα Δημοκρατία” political party. Diff Link.
  11. (Golden Dawn) MP “Νίκος Μιχαλολιάκος”
    Someone was trying to “fix” his bio removing some accusations. Diff Link.
  12. (Golden Dawn) MP “Ηλίας Κασιδιάρης”
    Someone was trying to fix his bio and remove various accusations and incidents. Diff Link 1, Diff Link 2, Diff Link 3.

Who’s done the edits?
The IP range of the Hellenic Parliament is not only used by MPs but also by people working in the parliament. Don’t rush to any conclusions…
Oh, and the IP 195.251.32.48 is probably a proxy inside the Parliament.

Threat Model
Not that it matters a lot for MPs and politicians in general, but it’s quite interesting that if someone “anonymously” edits a wikipedia article, wikimedia stores the IP of the editor and provides it to anyone that wants to download the wiki archives. If the IP range is known, or someone has the legal authority within a country to force an ISP to reveal the owner of an IP, it is quite easy to spot the actual person behind an “anonymous” edit.

But if someone creates an account to edit wikipedia articles, wikimedia does not publish the IPs of its users; the account database is private. To get the IP of a user, one would need to take wikimedia to court to force them to reveal that account’s IP address. Since every wikipedia article’s edit history is available for anyone to download, one is actually “more anonymous to the public” if he/she logs in or creates a (new) account every time before editing an article than when editing the same article without an account. Unless, of course, someone is afraid that wikimedia will leak/disclose their accounts’ IPs.
So depending on their threat model, people can choose whether they want to create (new) account(s) before editing an article or not :)

Similar Projects

  • Parliament WikiEdits
  • congress-edits
  • Riksdagen redigerar
  • Stortinget redigerer
  • AussieParl WikiEdits
  • anon
  • Bonus
    Anonymous edit from the “Synaspismos Political Party” (ΣΥΡΙΖΑ) address range on the “Δημοκρατική Αριστερά” political party article, changing its youth party blog link to the PASOK youth party blog link. Diff Link

    Posts for Wednesday, July 9, 2014


    Segmentation fault when emerging packages after libpcre upgrade?

    SELinux users might be facing failures when emerge is merging a package to the file system, with an error that looks like so:

    >>> Setting SELinux security labels
    /usr/lib64/portage/bin/misc-functions.sh: line 1112: 23719 Segmentation fault      /usr/sbin/setfiles "${file_contexts_path}" -r "${D}" "${D}"
     * ERROR: dev-libs/libpcre-8.35::gentoo failed:
     *   Failed to set SELinux security labels.
    

    This has been reported as bug 516608 and, after some investigation, the cause was found. First the quick workaround:

    ~# cd /etc/selinux/strict/contexts/files
    ~# rm *.bin
    

    And do the same for the other SELinux policy stores on the system (targeted, mcs, mls, …).

    Now, what is happening… Inside the mentioned directory, binary files exist such as file_contexts.bin. These files contain the compiled regular expressions of the non-binary files (like file_contexts). By using the precompiled versions, regular expression matching by the SELinux utilities is a lot faster. Not that it is massively slow otherwise, but it is a nice speed improvement nonetheless.

    However, when pcre updates occur, then the basic structures that pcre uses internally might change. For instance, a number might switch from a signed integer to an unsigned integer. As pcre is meant to be used within the same application run, most applications do not have any issues with such changes. However, the SELinux utilities effectively serialize these structures and later read them back in. If the new pcre uses a changed structure, then the read-in structures are incompatible and even corrupt.

    Hence the segmentation faults.

    To resolve this, Stephen Smalley created a patch that includes PCRE version checking. This patch is now included in sys-libs/libselinux version 2.3-r1. The package also recompiles the existing *.bin files so that the older binary files are no longer on the system. But there is a significant chance that this update will not trickle down to the users in time, so the workaround might be needed.

    I considered updating the pcre ebuilds as well with this workaround, but considering that libselinux is most likely to be stabilized faster than any libpcre bump I let it go.

    At least we have a solution for future upgrades; sorry for the noise.

    Edit: libselinux-2.2.2-r5 also has the fix included.

    Posts for Wednesday, July 2, 2014


    Multilib in Gentoo

    One of the areas in Gentoo that is seeing lots of active development is its ongoing effort to have proper multilib support throughout the tree. In the past, this support was provided through special emulation packages, but those have the (serious) downside that they are often outdated, sometimes even having security issues.

    But this active development is not because we all just started looking in the same direction. No, it’s thanks to a few developers who have put their shoulders under this effort, directing the development workload where needed and pressing other developers to help in this endeavor. And pushing is more than just creating bug reports and telling developers to do something.

    It is also about communicating, giving feedback and patiently helping developers when they have questions.

    I can only hope that other activities within Gentoo with a potentially broad impact are handled like this as well. Kudos to all involved, as well as to all developers who have undoubtedly put numerous hours of development effort into making their ebuilds multilib-capable (I know I had to put lots of effort into it, but I find it worthwhile and a big learning opportunity).

    Posts for Monday, June 30, 2014


    D-Bus and SELinux

    After a post about D-Bus comes the inevitable related post about SELinux with D-Bus.

    Some users might not know that D-Bus is an SELinux-aware application. That means it has SELinux-specific code in it, which bases the D-Bus behavior on the SELinux policy (and might not necessarily honor the “permissive” flag). This code is used as an additional authentication control within D-Bus.

    Inside the SELinux policy, a dbus permission class is supported, even though the Linux kernel doesn’t do anything with this class. The class is purely for D-Bus, and it is D-Bus that checks the permissions (although work is being done to implement D-Bus in the kernel (kdbus)). The class supports two permission checks:

    • acquire_svc which tells the domain(s) allowed to “own” a service (which might, thanks to the SELinux support, be different from the domain itself)
    • send_msg which tells which domain(s) can send messages to a service domain

    Inside the D-Bus security configuration (the busconfig XML file, remember) a service configuration might tell D-Bus that the service itself is labeled differently from the process that owned the service. The default is that the service inherits the label from the domain, so when dnsmasq_t registers a service on the system bus, then this service also inherits the dnsmasq_t label.

    The necessary permission checks for the sysadm_t user domain to send messages to the dnsmasq service, and for the dnsmasq service itself to register as a service, are:

    allow dnsmasq_t self:dbus { acquire_svc send_msg };
    allow sysadm_t dnsmasq_t:dbus send_msg;
    allow dnsmasq_t sysadm_t:dbus send_msg;
    

    For the sysadm_t domain, the two rules are needed as we usually not only want to send a message to a D-Bus service, but also receive a reply (which is also handled through a send_msg permission but in the inverse direction).

    However, with the following XML snippet inside its service configuration file, owning a certain resource is checked against a different label:

    <selinux>
      <associate own="uk.org.thekelleys.dnsmasq"
                 context="system_u:object_r:dnsmasq_dbus_t:s0" />
    </selinux>

    With this, the rules would become as follows:

    allow dnsmasq_t dnsmasq_dbus_t:dbus acquire_svc;
    allow dnsmasq_t self:dbus send_msg;
    allow sysadm_t dnsmasq_t:dbus send_msg;
    allow dnsmasq_t sysadm_t:dbus send_msg;
    

    Note that only the access for acquiring a service based on a name (i.e. owning a service) is checked based on the different label. Sending and receiving messages is still handled by the domains of the processes (actually the labels of the connections, but these are always the process domains).

    I am not aware of any policy implementation that uses a different label for owning services, and the implementation is more suited to “force” D-Bus to only allow services with a correct label. This ensures that other domains that might have enough privileges to interact with D-Bus and own a service cannot own these particular services. After all, other services don’t usually have the privileges (policy-wise) to acquire_svc a service with a different label than their own label.

    Posts for Sunday, June 29, 2014


    D-Bus, quick recap

    I’ve never fully investigated the what and how of D-Bus. I know it is some sort of IPC, but higher level than the POSIX IPC methods. After some reading, I think I am starting to understand how it works and how administrators can work with it. So a quick write-down is in order so I don’t forget in the future.

    There is one system bus and, for each X session of a user, also a session bus.

    A bus is governed by a dbus-daemon process. A bus itself has objects on it, which are represented through path-like constructs (like /org/freedesktop/ConsoleKit). These objects are provided by a service (application). Applications “own” such services, and identify these through a namespace-like value (such as org.freedesktop.ConsoleKit).
    Applications can send signals to the bus, or messages through methods exposed by a service. If methods are invoked (i.e. messages sent), then the application must specify the interface (such as org.freedesktop.ConsoleKit.Manager.Stop).

    Administrators can monitor the bus through dbus-monitor, or send messages through dbus-send. For instance, the following command invokes the org.freedesktop.ConsoleKit.Manager.Stop method provided by the object at /org/freedesktop/ConsoleKit/Manager, owned by the service/application at org.freedesktop.ConsoleKit:

    ~$ dbus-send --system --print-reply \
      --dest=org.freedesktop.ConsoleKit \
      /org/freedesktop/ConsoleKit/Manager \
      org.freedesktop.ConsoleKit.Manager.Stop
    

    What I found most interesting however was to query the busses. You can do this with dbus-send although it is much easier to use tools such as d-feet or qdbus.

    To list current services on the system bus:

    ~# qdbus --system
    :1.1
     org.freedesktop.ConsoleKit
    :1.10
    :1.2
    :1.3
     org.freedesktop.PolicyKit1
    :1.36
     fi.epitest.hostap.WPASupplicant
     fi.w1.wpa_supplicant1
    :1.4
    :1.42
    :1.5
    :1.6
    :1.7
     org.freedesktop.UPower
    :1.8
    :1.9
    org.freedesktop.DBus
    

    The numbers are generated by D-Bus itself; the namespace-like strings are the names taken by the services. To see what is provided by a particular service:

    ~# qdbus --system org.freedesktop.PolicyKit1
    /
    /org
    /org/freedesktop
    /org/freedesktop/PolicyKit1
    /org/freedesktop/PolicyKit1/Authority
    

    The methods made available through one of these:

    ~# qdbus --system org.freedesktop.PolicyKit1 /org/freedesktop/PolicyKit1/Authority
    method QDBusVariant org.freedesktop.DBus.Properties.Get(QString interface_name, QString property_name)
    method QVariantMap org.freedesktop.DBus.Properties.GetAll(QString interface_name)
    ...
    property read uint org.freedesktop.PolicyKit1.Authority.BackendFeatures
    property read QString org.freedesktop.PolicyKit1.Authority.BackendName
    property read QString org.freedesktop.PolicyKit1.Authority.BackendVersion
    method void org.freedesktop.PolicyKit1.Authority.AuthenticationAgentResponse(QString cookie, QDBusRawType::(sa{sv}) identity)
    method void org.freedesktop.PolicyKit1.Authority.CancelCheckAuthorization(QString cancellation_id)
    signal void org.freedesktop.PolicyKit1.Authority.Changed()
    ...
    

    Access to methods and interfaces is governed through XML files in /etc/dbus-1/system.d (or session.d depending on the bus). Let’s look at /etc/dbus-1/system.d/dnsmasq.conf as an example:

    <!DOCTYPE busconfig PUBLIC
     "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
     "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
    <busconfig>
            <policy user="root">
                    <allow own="uk.org.thekelleys.dnsmasq"/>
                    <allow send_destination="uk.org.thekelleys.dnsmasq"/>
            </policy>
            <policy context="default">
                    <deny own="uk.org.thekelleys.dnsmasq"/>
                    <deny send_destination="uk.org.thekelleys.dnsmasq"/>
            </policy>
    </busconfig>

    The configuration mentions that only the root Linux user can ‘assign’ a service/application to the uk.org.thekelleys.dnsmasq name, and root can send messages to this same service/application name. The default is that no-one can own and send to this service/application name. As a result, only the Linux root user can interact with this object.

    D-Bus also supports starting of services when a method is invoked (instead of running this service immediately). This is configured through *.service files inside /usr/share/dbus-1/system-services/.
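    Such a service file is short. A sketch reusing the dnsmasq names from above (the Exec path and option are my assumptions, not taken from an actual package):

    # /usr/share/dbus-1/system-services/uk.org.thekelleys.dnsmasq.service
    [D-BUS Service]
    Name=uk.org.thekelleys.dnsmasq
    Exec=/usr/sbin/dnsmasq --enable-dbus
    User=root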

    We, the lab rats

    The algorithm constructing your Facebook feed is one of the most important aspects of Facebook’s business. Making sure that you see all the things you are interested in while skipping the stuff you don’t care about is key to keeping you engaged and interested in the service. On the other hand, Facebook needs to understand how you react to certain types of content to support its actual business (making money from ads or “boosted” posts).

    So it’s no surprise that Facebook is changing and tweaking the algorithm every day. And every new iteration will be released to a small group of the population to check how it changes people’s behavior and engagement. To see if it’s a better implementation than the algorithm used before. Human behavior boiled down to a bunch of numbers.

    The kind and amount of data that Facebook sits on is every social scientist’s dream: Social connection, interactions, engagement metrics, and deeply personal content all wrapped up in one neat structured package with a bow on top. And Facebook is basically the only entity with full access: There is no real open set of similar data points to study and understand human behavior from.

    So the obvious happened. Facebook and some scientists worked together to study human behavior. To put it in a nutshell: they picked almost 700,000 Facebook users and changed the way their feeds worked. Some got more “negative” posts, some more “positive” posts, and the scientists measured how that changed people’s behavior (by seeing how the language in their own posts changed). Result: the mood of the things you read does change your own behavior and feelings congruently. Read positive stuff and feel better, read negative stuff and feel worse. This is news because so far we only knew this from direct social interaction, not from interaction mediated through the Internet (the result might not surprise people who believe that the Internet and the social interactions in it are real, though).

    Many people have criticized this study, for very different reasons – some valid, some not.

    The study is called scientifically unethical: the subjects didn’t know that their behavior was being monitored and that they were in an experiment. It is often necessary to leave the actual goal of an experiment somewhat in the dark in order to make sure that the results remain untainted, but it’s a scientific standard to tell people that they are in an experiment – and to tell them what was going on after the experiment concludes. (With experiments that change people’s behavior this deeply, you would usually even consider adding psychological exit counselling for participants.) This critique is fully legitimate and it’s something the scientists will have to answer for – not Facebook, because they tweak their algorithm every day with people’s consent (EULA etc.), but that’s nothing the scientists can fall back on. What happened is a certain breach of trust: the deal is that Facebook can play with the algorithm as much as they want as long as they try to provide me with more relevance. They changed their end of the bargain (not even with bad intentions, but they did it non-transparently), which taints people’s (and my) relationship to the company slightly.

    From a purely scientific standpoint the study is somewhat problematic – not because of its approach, which looks solid after reading the paper, but because no one but the authors can reproduce the results. It’s closed-source science, so it cannot really be peer reviewed. Strictly speaking we can only consider the paper an idea, because the data could basically be made up (not that I want to imply that, but we can’t check anything). Not good science, though sadly that’s the way many studies are.

    Most of the blame lands on the scientists. They should have known that their approach was wrong. The potential data was seductive, but they should have forced Facebook to do this more transparently. The best way would have been an opt-in: “The scientists want to study human interaction, so they ask for access to certain statistics of your feed. They will at no point be able to read all your posts or your complete feed. Do you want to participate? [Yes] [No]”. A message to the people who were part of the study after it concluded, with a way to remove their data set from the study as a sort of punishment for breaking trust, would have been the very least.

    Whenever you work with people and change their lives you run risks. What happens if one of the people whose feed you worsen suffers from depression? What will that do to him or her? The scientists must have thought about that but decided not to care. There are many words we could find for that kind of behavior: disgusting. Assholeish. Sociopathic.

    It’s no surprise that Facebook didn’t catch this issue, because tweaking their feed is what they do all day, and among all their rhetoric their users aren’t the center of their attention. We could put the bad-capitalism stamp of disapproval on this thought and move on, but it does show something Facebook needs to learn: users might not pay, but without them Facebook is nothing. There is a lot of lock-in, but when the trust in Facebook’s sincerity gets damaged too much, you open yourself up to competition and people leaving. There is still quite some trust, as the growth of users and interaction in spite of all the bad “oh noez, Facebook will destroy your privacy and life and kill baby seals!” press shows. But that’s not a given.

    Companies sitting on these huge amounts of social data have not only their shareholders to look out for but also their users. They need to establish ways for users to participate and to keep themselves honest: build structures to get feedback from users, or form groups representing users and their interests. That’s the actual thing Facebook can and should learn from this.

    For a small scientific step almost everybody lost: the scientists showed an alarming lack of awareness and ethics, Facebook an impressive lack of understanding of how important trust is, and the people using Facebook lost because their days might have been ruined for an experiment. Doesn’t look like a good exchange to me. But we shouldn’t let this put a black mark on the study of social behavior online.

    Studying how people interact is important to better understand what we do and how and why we do it. We want systems to be built in a way that suits us and helps us lead better, more fulfilling lives. We want technology to enrich our worlds. And for that we need to understand how we perceive and interact with them.

    In a perfect world we’d have a set of data that is open and can be analyzed by anyone. Sadly we don’t, so we’ll have to work with the companies that have access to that kind of data. But as scientists we need to make sure that – no matter how great the insights we generate might be – we treat people with the dignity they deserve. That we respect their rights. That we stay honest and transparent.

    I’d love to say that we need to develop these rules first, because that would take some of the blame from the scientists involved and make the scientific community look less psychopathic. Sadly, these rules and best practices have existed for ages. It’s alarming to see how many people involved in this project didn’t know or respect them. That is the main lesson from this case: we need to take far better care of teaching scientists the ethics of science – not just how to calculate and process data, but how to treat others.

    Title image by: MoneyBlogNewz


    Posts for Friday, June 27, 2014

    The body as a source of data

    The quantified self is starting to penetrate beyond the tiny bubble of science enthusiasts and nerds. More health-related devices connect to the cloud (think scales, and soon smart watches, heart-rate monitors and similar wearables). Modern smartphones have built-in step counters or use GPS data to track movement and infer, from the path and the speed, the mode of transportation as well as the number of calories probably spent. Apple’s new HealthKit as well as Google’s new Google Fit APIs are pushing the gathering of data about one’s own body into the spotlight, and potentially to a more mainstream demographic.

    Quantifying oneself isn’t always perceived in a positive light. Where one group sees ways to better understand their own body and how it influences their feelings and lives, others interpret the projection of body functions down to digital data as the mechanization of a natural thing, something diminishing the human being; as humans kneeling under the force of capitalism and its implied necessity to optimize one’s employability and “worth”; and finally as a dangerous tool giving companies too much access to data about us and how we live and feel. What if our health insurance knew how little we sleep, how little we exercise and what bad dieting habits we entertain?

    Obviously there are holistic ways to think about one’s own body. You can watch yourself in the mirror for 5 minutes every morning to see if everything is OK. You can meditate and try to “listen into your body”. But seeing how many negative influences on one’s long-term health cannot really be felt until it is too late, a data-centric approach seems to be a reasonable path towards detecting dangerous (or simply unpleasant) patterns and habits.

    The reason why metrics in engineering are based on numbers is that this model of the world makes the comparison of two states simple: “I used to have a foo of 4, now my foo is 12.” Regardless of what that means, it’s easy to see that foo has increased, which can be translated into actions if necessary (“eat less stuff containing foo”). Even projecting feelings onto numbers can yield very useful results: “After sleeping for 5 hours my happiness throughout the day seems to average around 3, after sleeping 7 hours it averages around 5” can provide useful input when deciding whether to sleep more or not – regardless of what exactly a happiness of “3” or “5” means in comparison to others.

    A human body is a complex machine. Chemical reactions and electric currents happen throughout it at a mind-blowing speed, and every data set, no matter how great the instrument used to collect it, only represents a tiny fraction of one perspective on a part of what constitutes a living body. Even if you aggregate all the data about a human being that we can monitor and record these days, all you have is just a bunch of data – good enough to mine for certain patterns suggesting certain traits, illnesses or properties, but never enough to say that you actually know what makes a person tick.

    But all that data can be helpful to people for very specific questions. Tracking food intake and physical activity can help a person control their weight if they want to. Correlating sleep and performance can help people figure out what kind of schedule they should sleep on to feel as good as possible. And sometimes these numbers simply help you measure your own progress – say, whether you managed to beat your 10k record.

    With all the devices and data monitors we surround ourselves with, gathering huge amounts of data becomes trivial. And everyone could store that data on their own hard drives and develop and implement algorithms to analyse and use this source of information. Why do we need the companies who will just use the data to send us advertising in exchange for hosting it?

    It comes back to the question of whether telling people to host their own services and data is cynical. As I already wrote, I do believe it is. Companies with well-defined standard APIs can help individuals who don’t have the skills – or the money to pay people with said skills – to learn more about their bodies and how they influence their lives. They can help make that mass of data manageable, queryable, actionable. Simply usable. That doesn’t mean that there isn’t a better way, or that an open platform to aggregate one’s digital body representation wouldn’t be better. But we don’t have that, especially not for mainstream consumption.

    Given these thoughts, I find recent comments on the dangers and evils of using one of the big companies to handle the aggregation of data about your body somewhat classist. I believe that you should be able to understand your body better even if you can’t code or think up algorithms (or pay others to do that for you individually). The slippery-slope argument that if the data exists somewhere it will very soon be used to trample on your rights and ruin your day doesn’t only rob certain people of the chance to improve their lives or gain new insights; it actually reinforces a pattern where people with fewer resources get the short end of the stick when it comes to health and life expectancy.

    It’s always easy to tell people not to use some data-based product because of dangers to their privacy or something similar. It’s especially easy when you already own whatever that service is supposed to provide. “Don’t use Facebook” is only a half-earnest argument if you (because of other social or political networks) do not need this kind of networking to participate in a debate or connect to others. It’s a deeply paternalist point of view and carries a certain lack of empathy.

    Companies aren’t usually all that great, just as the capitalist system we live in isn’t great. “The market is why we can’t have nice things”, as Mike Rugnetta put it in this week’s Idea Channel. But at least with companies you know their angle (hint: it’s their bottom line). You know that they want to make money, and that offering their service “for free” usually means that you pay with attention (through ads). There’s no evil conspiracy, no man with a cat on his lap saying “No, Mr. Bond, I want you to DIE!”.

    But given that a company lets you access and export all the data you pour into their service, I can only urge you to consider whether the benefit their service gives you isn’t worth that handful of ads. Companies aren’t evil demons with magic powers. They are sociopathic and greedy, but that’s it.

    The belief that a company “just knows too much” if it gathers data about your body in one place overestimates the truth that data carries. They don’t own your soul, nor can they cast spells on you. Data you emit isn’t just a liability, something you need to keep locked up and avoid. It can also be your own tool, your light in the darkness.

    Header image by: SMI Eye Tracking


    Posts for Tuesday, June 24, 2014

    “The Open-Source Everything Revolution” and the boxology syndrome

    Yesterday @kunstreich pointed me to a rather interesting article in the Guardian, published under the ambitious title “The open source revolution is coming and it will conquer the 1% – ex CIA spy”. We’ll pause for a second while you read the article.

    For those unwilling to read it, or with a limited amount of time available, here’s my executive summary. Robert David Steele, who worked for the CIA for quite a while, at some point wanted to introduce more open source practices into the intelligence community. He realized that the whole secret tech and process thing didn’t scale, and that gathering all those secret and protected pieces of information was mostly not worth the effort when there’s so much data out there in the open. He also figured out that our current western societies aren’t doing so well: the distribution of wealth and power is messed up, and companies have – with help from governments – created a system where they privatize the commons and every kind of possible profit while having the public pay for most of the losses. Steele, who’s obviously a very well-educated person, now wants to make everything open – open source software, open governments, open data, “open society”1 – in order to fix our society and ensure a better future:

    Open Source Everything (from the Guardian)

    Steele’s vision sounds charming: when there is total knowledge and awareness, problems can be easily detected and fixed – omniscience as the tool to a perfect world. This actually fits quite well into the intelligence agency mindset: “We need all the information to make sure nothing bad will happen. Just give us all the data and you will be safe.” And Steele does not want to abolish intelligence agencies; he wants to make them transparent and open (the question remains whether they can still be considered intelligence agencies by our common definition then).

    But there are quite a few problems with Steele’s revolutionary manifesto. It basically suffers from “Boxology Syndrome”.

    The boxology syndrome is a déformation professionnelle that many people in IT and modelling suffer from. It’s characterized by the belief that every complex problem and system can be sufficiently described by a bunch of boxes and connecting lines. It happens in IT because the object-oriented design approach teaches exactly that kind of thinking: find the relevant terms and items, make them classes (boxes) and see how they connect. Now you’ve modeled the domain and the solution to the problem. That was easy!

    But life tends to be messy and confusing, the world doesn’t seem to like to live in boxes, just as people don’t like it.

    Open source software is brilliant. I love how my linux systems2 work transparently and allow me to change how they work according to my needs. I love how I can dive into existing apps and libraries to pick pieces I want to use for other projects, how I can patch and mix things to better serve my needs. But I am in the minority.

    Image by: velkr0

    Steele uses the word “open” as a silver bullet for … well … everything. He rehashes the ideas from David Brin’s “The Transparent Society” but seems to be working very hard not to use the word transparent – which in many cases seems to be what he is actually going for, but it feels like he is avoiding the connotations attached to that word when it comes to people and societies. In a somewhat obvious attempt to openwash, he reframes Brin’s ideas by attaching the generally positively connotated word “open”.

    But open data and open source software do not magically make everyone capable of seizing these newfound opportunities. Some people have the skills, the resources, the time and the interest to get something out of it; some people can pay people with the skills to do what they want to get done. And many, many people are just left alone, possibly swimming in a digital ocean way too deep and vast to see any kind of ground or land. Steele ignores the privilege of the educated and skilled few, or somewhat naively hopes that they’ll cover the needs of those unable to serve their own out of generosity. Which could totally happen, but do we really want to bet the future on the selflessness and generosity of everyone?

    Transparency is not a one-size-fits-all solution. We require different levels of transparency from the government, from the companies we interact with, and from the person serving our dinner. Some entities might offer more information than required (which is especially true for people, who can legally demand very little transparency from each other but share a lot of information for their own personal goals and interests).

    Steele’s ideas – which are really seductive in their simplicity – don’t scale, because he ignores the differences in power, resources and influence between social entities, and because he assumes that – just because you know everything – you will make the “best” decision.

    There is a lot of social value in having access to a lot of data. But data, algorithms and code are just a small part of what creates good decisions for society. There hardly ever is the one best solution. We have to talk, exchange positions and haggle to find an accepted and legitimized solution.

    Boxes and lines just don’t cut it.

    Title image by: Simona

    1. whatever that is supposed to mean
    2. I don’t own any computer with proprietary operating systems except for my gaming consoles


    Posts for Sunday, June 22, 2014

    avatar

    Chroots for SELinux enabled applications

    Today I had to prepare a chroot jail (thank you, grsecurity, for the neat additional chroot protection features) for a SELinux-enabled application. Being SELinux-enabled meant that “just” making a chroot was insufficient: the application needed access to /sys/fs/selinux. Of course, granting access to all of /sys is not something I like to see for a chroot jail.

    Luckily, all other accesses are not needed, so I was able to create a static /sys/fs/selinux directory structure in the chroot, and then just mount the SELinux file system on that:

    ~# mount -t selinuxfs none /var/chroot/sys/fs/selinux
    

    In hindsight, I probably could just have created a /selinux location as that location, although deprecated, is still checked by the SELinux libraries.

    Anyway, there was a second requirement: access to /etc/selinux. Luckily it was purely for read operations, so I first contemplated copying the data and doing a chmod -R a-w /var/chroot/etc/selinux, but then considered a bind mount:

    ~# mount -o bind,ro /etc/selinux /var/chroot/etc/selinux
    

    Alas, bad luck – the read-only flag is ignored during the mount, and the bind mount remains read-write. An article on lwn.net informed me about the solution: a remount afterwards is needed to enable the read-only state:

    ~# mount -o remount,ro /var/chroot/etc/selinux
    

    Great! And because my brain isn’t what it used to be, I just made a quick blog post for future reference ;-)
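
    And for that future reference, the whole dance fits in a tiny script – a sketch, assuming the chroot lives at /var/chroot as above:

    #!/bin/sh
    # Prepare the SELinux-related mounts inside a chroot (sketch).
    CHROOT=/var/chroot
    # Expose the SELinux file system on the static directory structure
    mount -t selinuxfs none "${CHROOT}/sys/fs/selinux"
    # Bind-mount the policy configuration, then remount it read-only
    mount -o bind /etc/selinux "${CHROOT}/etc/selinux"
    mount -o remount,ro "${CHROOT}/etc/selinux"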

    Posts for Sunday, June 15, 2014

    avatar

    Gentoo Hardened, June 2014

    Friday the Gentoo Hardened project had its monthly online meeting to talk about the progress within the various tools, responsibilities and subprojects.

    On the toolchain part, Zorry mentioned that GCC 4.9 and 4.8.3 will have SSP enabled by default. The hardened profiles will still have a different SSP setting than the default (so yes, there will still be differences between the two) but this will help in securing the Gentoo default installations.
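
    Whether a particular gcc build enables SSP by default can be checked without compiling anything – a quick sketch:

    ~$ gcc -Q --help=common | grep -- -fstack-protector

    On a compiler with SSP on by default, the relevant -fstack-protector variant is listed as [enabled].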

    Zorry is also working on upstreaming the PIE patches for GCC 4.10.

    Next to the regular toolchain, blueness also mentioned his intentions to launch a Hardened musl subproject which will focus on the musl C library (rather than glibc or uclibc) and hardening.

    On the kernel side, two recent vulnerabilities in the vanilla Linux kernel (a pty race and a privilege escalation through the futex code) dominated the discussions on IRC recently. Some versions of the hardened kernels are still available in the tree, but the more recent (non-vulnerable) kernels have proven not to be as stable as we’d hoped.

    The pty race vulnerability is possibly not applicable to hardened kernels thanks to grsecurity, due to its protection of access to kernel symbols.

    The latest kernels should not be used with KSTACKOVERFLOW on production systems though; there are some issues reported with virtio network interface support (on the guests) and ZFS.

    Also, on the PaX support side, the install-xattr saga continues. The new wrapper that blueness worked on dropped some code that kept the PWD, so the knowledge of the $S directory was “lost”. This is now fixed; all that is left is to have the wrapper included and stabilized.

    On the SELinux side, it was the usual set of progress: policy stabilization, and userland application and library stabilization. The latter is waiting a bit because of the multilib support that’s now being integrated in the ebuilds as well (and thus has a larger set of dependencies to go through), but there are no show-stoppers. Also, the SELinux documentation portal on the wiki was briefly mentioned.

    Also, the policycoreutils vulnerability has been worked around so it is no longer applicable to us.

    On the hardened profiles, we had a nice discussion on enabling capabilities support (and move towards capabilities instead of setuid binaries), which klondike will try to tackle during the summer holidays.

    As I didn’t take notes during the meeting, this post might miss a few points (and I forgot to enable logging as well), but as Zorry sends out the meeting logs later anyway, you can read up there ;-)

    Posts for Tuesday, June 3, 2014

    Why and how to shave with shaving oil and DE safety razors

    So, I’ve been shaving with shaving oil and safety razors 1 for a while now and decided that it’s time I help my fellow geeks by spreading some knowledge about this method (which is sadly still poorly documented online). Much of the method below consists of hacks assembled from different sources and lots of trial and error.

    Why shave with oil and DE safety razors

    First of all, shaving with oldskool DE razors is not as much about being hip and trendy 2 as it is about optimising. Although, I have to admit, it still looks pretty cool ☺

    There are several reasons why shaving with oil and DE razors beats modern foam and system multiblade razors hands down:

    • they’ve got multiple uses – shaving oil replaces both the shaving foam/soap and the aftershave (and pre-shaving balm); DE blades are used in tools and, well, they’re proper blades, for crying out loud!;
    • the whole set takes a lot less space when traveling – one razor, a puny pack of blades and a few tens of ml of oil is all you need to carry around 3;
    • you get a better shave – once you learn to shave properly, you get fewer burns and cuts and a smoother shave as well;
    • it’s more ecological – DE blades are made of fewer different materials and are easier to recycle, and all shaving oils I have found so far are Eco certified;
    • and last, but not least these days, it’s waaaaaaay cheaper (more on that in a future blog post).

    History and experience (skip if you’re not interested in such bla bla)

    I got my first shaving oil4 about two years ago, when I started to travel more. My wonderful girlfriend bought it for me, because a 30 ml flask took a lot less space than a tin of shaving foam and a flask of aftershave. The logic behind this decision was:

    “Well, all the ancient people managed to have clean shaves with oil, my beard can’t be that much different than the ones they had in the past.”

    And, boy, was I in for a nice surprise!

    I used to get inflammations, pimples and in-grown hairs quite often, so I never shaved very close – but when shaving with oil, there was none of that! After one or two months of trial and error with different methods and my own ideas, I finally figured out how to properly use it and left the shaving soaps, gels and foams for good.

    As I shaved with oil for a while, I noticed that all “regular modern” system multiblade razors have strips of aloe vera gel that work well with shaving foam, gel and soap, but occasionally stick to your face if you’re using shaving oil. This is true no matter how many blades the razor head has – I just couldn’t find razors without the strips.

    That’s why I started thinking about the classic DE safety razors and eventually got a plastic Wilkinson Sword Classic for a bit over 5 €. Surprisingly, after just a few minuscule cuts, the improvement over the system multiblade razors became quite apparent. I haven’t touched my old Gillette Mach3 ever since. The Wilkinson Sword Classic is by far not a very good DE razor, but it’s cheap and easy to use for beginners. If you decide you like this kind of shave, I would warmly recommend that you upgrade to a better one.

    For example, recently I got myself a nice Edwin Jagger razor with their DE8 head and I love it. It’s a full-metal, chromed, closed-comb razor – it has a bar below the blade, so it’s easier and safer to use than a more aggressive open-comb version.

    How to Shave with oil and DE razors

    OK, first of all, don’t panic! They’re called “safety razors” for a reason. As opposed to straight razors, the blade is enclosed, so even if you manage to cut yourself, you can’t get a deep cut. This is truer still for closed-comb razors.

    1. Wash your face to remove dead skin and fat. It’s best if you shave just after taking a shower.

    2. Get moisture into the hairs. Beard hair is as hard as copper wire while it is dry, but wet, it’s quite soft. The best way is to apply a towel soaked in very hot water to your face, a few times for ten seconds each – the hot water also opens up the pores. If you are traveling and don’t have hot water, just make sure those hairs are wet. As it’s a good idea to have your razor up to temperature as well, I usually put hot water in the basin and leave the razor in it while I towel my face.

    3. Put a few drops of shaving oil into the palm of your hand (5-6 is enough for me) and with two fingers apply it to all the places on your face that you want to shave. Any oil you may have left on your hands, you can safely rub into your hair (on top of your head) – it’ll do them good and you won’t waste the oil.

    4. Splash some more (hot) water on your face – the fact that water and oil don’t mix well is the reason why your blade glides so finely. During the shave, whenever you feel your razor doesn’t glide that well anymore, just applying some water is usually enough to fix it.

    5. First shave twice in the direction of the grain. To get a feeling for the right angle, take the handle of the razor in your fingers and lean the flat of the head onto your cheek, so the handle is at 90° to your cheek; then reduce the angle until you get to a position where shaving feels comfortable. It’s also easier to shave moving your whole arm than just the wrist. Important: DO NOT apply pressure – safety razors expose enough blade that, with a well-balanced razor, the weight of the head alone produces almost enough pressure for a good shave (as opposed to system multiblade razors). Pull in the direction of the handle with slow strokes – on a thicker beard you will need to make shorter strokes than on a thinner one. To get a better shave, make sure to stretch the skin where you currently shave. If the razor gets stuck with hair and oil, just swish it around in the water to clean it.

    6. Splash your face with (hot) water again and now shave across the grain. This gives you a closer shave5.

    7. Splash your face with cold water to get rid of any remaining hair and to close the pores. Put a drop or two of shaving oil and a few drops of water into your palm and mix them with two fingers. Rub the oil-water mixture into your face instead of using aftershave and leave your face to dry – the essential oils in the shaving oil enrich and disinfect your skin.

    8. Clean your razor under running water to remove hair and oil and towel-dry it (don’t rub the blade!). When I take it apart to change blades, I clean the razor with water and rub it with the towel to keep it shiny.

    Update: I learned that it is better to shave twice with the grain and once across than once with it and twice across. Update: I figured out the trick with rubbing the excess oil into your hair.

    Enjoy shaving ☺

    It is a tiny bit more work than shaving with system multiblade razors, but it’s well worth it! For me, the combination of a quality DE safety razor and shaving oil turned shaving from a bothersome chore into a morning ritual I look forward to.

    …and in time, I’m sure you’ll find (and share) your own method as well.

    Update: I just stumbled upon the great blog post “How Intellectual Property Destroyed Men’s Shaving” and thought it’d be great to mention it here.

    hook out → see you well shaven at Akademy ;)


    1. Double-edged razors, as our grandads used to shave with. 

    2. Are oldskool razors hip and trendy right now anyway? I haven’t noticed them to be so. 

    3. I got myself a nice leather Edwin Jagger etui for carrying the razor and two packs of blades; it measures 105 x 53 x 44 mm (for comparison: the ugly Gillette Mach3 plastic holder measures 148 x 57 x 28 mm and doesn’t offer much protection when travelling). 

    4. L’Occitane Cade (wild juniper) shaving oil – and I’m still happy with that one. 

    5. Some claim that for a really close shave you need to shave against the grain as well, but I found that to be too aggressive for my beard. Also, I have heard this claim only from people shaving with soap. 

    Posts for Saturday, May 31, 2014

    avatar

    Visualizing constraints

    SELinux constraints are an interesting way to implement specific, well, constraints on what SELinux allows. Most SELinux rules that users come in contact with are purely type oriented: allow something to do something against something. In fact, most of the SELinux rules applied on a system are such allow rules.

    The restriction of such allow rules is that they only take into consideration the type of the contexts that participate. This is the type enforcement part of the SELinux mandatory access control system.

    Constraints on the other hand work on the user, role and type part of a context. Consider this piece of constraint code:

    constrain file all_file_perms (
      u1 == u2
      or u1 == system_u
      or u2 == system_u
      or t1 != ubac_constrained_type
      or t2 != ubac_constrained_type
    );
    

    This particular constraint definition tells the SELinux subsystem that, when an operation against a file class is performed (any operation, as all_file_perms is used, but individual, specific permissions can be listed as well), this is denied if none of the following conditions are met:

    • The SELinux user of the subject and object are the same
    • The SELinux user of the subject or object is system_u
    • The SELinux type of the subject does not have the ubac_constrained_type attribute set
    • The SELinux type of the object does not have the ubac_constrained_type attribute set

    If none of the conditions are met, then the action is denied, regardless of the allow rules set otherwise. If at least one condition is met, then the allow rules (and other SELinux rules) decide if an action can be taken or not.

    Constraints are currently difficult to query though. There is seinfo --constrain, which gives all constraints using Reverse Polish Notation – not something easily readable by users:

    ~$ seinfo --constrain
    constrain { sem } { create destroy getattr setattr read write associate unix_read unix_write  } 
    (  u1 u2 ==  u1 system_u ==  ||  u2 system_u ==  ||  t1 { screen_var_run_t gnome_xdg_config_home_t admin_crontab_t 
    links_input_xevent_t gpg_pinentry_tmp_t virt_content_t print_spool_t crontab_tmp_t httpd_user_htaccess_t ssh_keysign_t 
    remote_input_xevent_t gnome_home_t mozilla_tmpfs_t staff_gkeyringd_t consolekit_input_xevent_t user_mail_tmp_t 
    chromium_xdg_config_t mozilla_input_xevent_t chromium_tmp_t httpd_user_script_exec_t gnome_keyring_tmp_t links_tmpfs_t 
    skype_tmp_t user_gkeyringd_t svirt_home_t sysadm_su_t virt_home_t skype_home_t wireshark_tmp_t xscreensaver_xproperty_t 
    consolekit_xproperty_t user_home_dir_t gpg_pinentry_xproperty_t mplayer_home_t mozilla_plugin_input_xevent_t mozilla_plugin_tmp_t 
    mozilla_xproperty_t xdm_input_xevent_t chromium_input_xevent_t java_tmpfs_t googletalk_plugin_xproperty_t sysadm_t gorg_t gpg_t 
    java_t links_t staff_dbusd_t httpd_user_ra_content_t httpd_user_rw_content_t googletalk_plugin_tmp_t gpg_agent_tmp_t 
    ssh_agent_tmp_t sysadm_ssh_agent_t user_fonts_cache_t user_tmp_t googletalk_plugin_input_xevent_t user_dbusd_t xserver_tmpfs_t 
    iceauth_home_t qemu_input_xevent_t xauth_home_t mutt_home_t sysadm_dbusd_t remote_xproperty_t gnome_xdg_config_t screen_home_t 
    chromium_xproperty_t chromium_tmpfs_t wireshark_tmpfs_t xdg_videos_home_t pulseaudio_input_xevent_t krb5_home_t 
    pulseaudio_xproperty_t xscreensaver_input_xevent_t gpg_pinentry_input_xevent_t httpd_user_script_t gnome_xdg_cache_home_t 
    mozilla_plugin_tmpfs_t user_home_t user_sudo_t ssh_input_xevent_t ssh_tmpfs_t xdg_music_home_t gconf_tmp_t flash_home_t 
    java_home_t skype_tmpfs_t xdg_pictures_home_t xdg_data_home_t gnome_keyring_home_t wireshark_home_t chromium_renderer_xproperty_t 
    gpg_pinentry_t mozilla_t session_dbusd_tmp_t staff_sudo_t xdg_config_home_t user_su_t pan_input_xevent_t user_devpts_t 
    mysqld_home_t pan_tmpfs_t root_input_xevent_t links_home_t sysadm_screen_t pulseaudio_tmpfs_t sysadm_gkeyringd_t mail_home_rw_t 
    gconf_home_t mozilla_plugin_xproperty_t mutt_tmp_t httpd_user_content_t mozilla_xdg_cache_t mozilla_home_t alsa_home_t 
    pulseaudio_t mencoder_t admin_crontab_tmp_t xdg_documents_home_t user_tty_device_t java_tmp_t gnome_xdg_data_home_t wireshark_t 
    mozilla_plugin_home_t googletalk_plugin_tmpfs_t user_cron_spool_t mplayer_input_xevent_t skype_input_xevent_t xxe_home_t 
    mozilla_tmp_t gconfd_t lpr_t mutt_t pan_t ssh_t staff_t user_t xauth_t skype_xproperty_t mozilla_plugin_config_t 
    links_xproperty_t mplayer_xproperty_t xdg_runtime_home_t cert_home_t mplayer_tmpfs_t user_fonts_t user_tmpfs_t mutt_conf_t 
    gpg_secret_t gpg_helper_t staff_ssh_agent_t pulseaudio_tmp_t xscreensaver_t googletalk_plugin_xdg_config_t staff_screen_t 
    user_fonts_config_t ssh_home_t staff_su_t screen_tmp_t mozilla_plugin_t user_input_xevent_t xserver_tmp_t wireshark_xproperty_t 
    user_mail_t pulseaudio_home_t xdg_cache_home_t user_ssh_agent_t xdg_downloads_home_t chromium_renderer_input_xevent_t cronjob_t 
    crontab_t pan_home_t session_dbusd_home_t gpg_agent_t xauth_tmp_t xscreensaver_tmpfs_t iceauth_t mplayer_t chromium_xdg_cache_t 
    lpr_tmp_t gpg_pinentry_tmpfs_t pan_xproperty_t ssh_xproperty_t xdm_xproperty_t java_xproperty_t sysadm_sudo_t qemu_xproperty_t 
    root_xproperty_t user_xproperty_t mail_home_t xserver_t java_input_xevent_t user_screen_t wireshark_input_xevent_t } !=  ||  t2 { 
    screen_var_run_t gnome_xdg_config_home_t admin_crontab_t links_input_xevent_t gpg_pinentry_tmp_t virt_content_t print_spool_t 
    crontab_tmp_t httpd_user_htaccess_t ssh_keysign_t remote_input_xevent_t gnome_home_t mozilla_tmpfs_t staff_gkeyringd_t 
    consolekit_input_xevent_t user_mail_tmp_t chromium_xdg_config_t mozilla_input_xevent_t chromium_tmp_t httpd_user_script_exec_t 
    gnome_keyring_tmp_t links_tmpfs_t skype_tmp_t user_gkeyringd_t svirt_home_t sysadm_su_t virt_home_t skype_home_t wireshark_tmp_t 
    xscreensaver_xproperty_t consolekit_xproperty_t user_home_dir_t gpg_pinentry_xproperty_t mplayer_home_t 
    mozilla_plugin_input_xevent_t mozilla_plugin_tmp_t mozilla_xproperty_t xdm_input_xevent_t chromium_input_xevent_t java_tmpfs_t 
    googletalk_plugin_xproperty_t sysadm_t gorg_t gpg_t java_t links_t staff_dbusd_t httpd_user_ra_content_t httpd_user_rw_content_t 
    googletalk_plugin_tmp_t gpg_agent_tmp_t ssh_agent_tmp_t sysadm_ssh_agent_t user_fonts_cache_t user_tmp_t 
    googletalk_plugin_input_xevent_t user_dbusd_t xserver_tmpfs_t iceauth_home_t qemu_input_xevent_t xauth_home_t mutt_home_t 
    sysadm_dbusd_t remote_xproperty_t gnome_xdg_config_t screen_home_t chromium_xproperty_t chromium_tmpfs_t wireshark_tmpfs_t 
    xdg_videos_home_t pulseaudio_input_xevent_t krb5_home_t pulseaudio_xproperty_t xscreensaver_input_xevent_t 
    gpg_pinentry_input_xevent_t httpd_user_script_t gnome_xdg_cache_home_t mozilla_plugin_tmpfs_t user_home_t user_sudo_t 
    ssh_input_xevent_t ssh_tmpfs_t xdg_music_home_t gconf_tmp_t flash_home_t java_home_t skype_tmpfs_t xdg_pictures_home_t 
    xdg_data_home_t gnome_keyring_home_t wireshark_home_t chromium_renderer_xproperty_t gpg_pinentry_t mozilla_t session_dbusd_tmp_t 
    staff_sudo_t xdg_config_home_t user_su_t pan_input_xevent_t user_devpts_t mysqld_home_t pan_tmpfs_t root_input_xevent_t 
    links_home_t sysadm_screen_t pulseaudio_tmpfs_t sysadm_gkeyringd_t mail_home_rw_t gconf_home_t mozilla_plugin_xproperty_t 
    mutt_tmp_t httpd_user_content_t mozilla_xdg_cache_t mozilla_home_t alsa_home_t pulseaudio_t mencoder_t admin_crontab_tmp_t 
    xdg_documents_home_t user_tty_device_t java_tmp_t gnome_xdg_data_home_t wireshark_t mozilla_plugin_home_t 
    googletalk_plugin_tmpfs_t user_cron_spool_t mplayer_input_xevent_t skype_input_xevent_t xxe_home_t mozilla_tmp_t gconfd_t lpr_t 
    mutt_t pan_t ssh_t staff_t user_t xauth_t skype_xproperty_t mozilla_plugin_config_t links_xproperty_t mplayer_xproperty_t 
    xdg_runtime_home_t cert_home_t mplayer_tmpfs_t user_fonts_t user_tmpfs_t mutt_conf_t gpg_secret_t gpg_helper_t staff_ssh_agent_t 
    pulseaudio_tmp_t xscreensaver_t googletalk_plugin_xdg_config_t staff_screen_t user_fonts_config_t ssh_home_t staff_su_t 
    screen_tmp_t mozilla_plugin_t user_input_xevent_t xserver_tmp_t wireshark_xproperty_t user_mail_t pulseaudio_home_t 
    xdg_cache_home_t user_ssh_agent_t xdg_downloads_home_t chromium_renderer_input_xevent_t cronjob_t crontab_t pan_home_t 
    session_dbusd_home_t gpg_agent_t xauth_tmp_t xscreensaver_tmpfs_t iceauth_t mplayer_t chromium_xdg_cache_t lpr_tmp_t 
    gpg_pinentry_tmpfs_t pan_xproperty_t ssh_xproperty_t xdm_xproperty_t java_xproperty_t sysadm_sudo_t qemu_xproperty_t 
    root_xproperty_t user_xproperty_t mail_home_t xserver_t java_input_xevent_t user_screen_t wireshark_input_xevent_t } !=  ||  t1 
    <empty set> ==  || );
    

    The RPN notation however isn’t the only reason why constraints are difficult to read. The other reason is that seinfo does not know (anymore) about the attributes used to generate the constraints. As a result, we get a huge list of all possible types that match a common attribute – but we no longer know which attribute that was.
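
    The attribute information isn’t gone entirely though: if you have a hunch which attribute was used, seinfo can expand it for you. A sketch with the user-based access control attribute from the constraint shown earlier:

    ~$ seinfo -aubac_constrained_type -x

    This lists every type carrying the ubac_constrained_type attribute – in other words, the very list that the output above spells out in full.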

    Not everyone can read the source files in which the constraints are defined, so I hacked together a script that generates a GraphViz dot file based on the seinfo --constrain output for a given class and permission, optionally limiting the huge lists of types to a set that the user (err, that is me ;-) is interested in.

    For instance, to generate a graph of the constraints related to file reads, limited to the user_t and staff_t types if huge lists would otherwise be shown:

    ~$ seshowconstraint file read "user_t staff_t" > constraint-file.dot
    ~$ dot -Tsvg -O constraint-file.dot
    

    This generates the following graph:

    If you’re interested in the (ugly) script that does this, you can find it on my github location.

    There are some patches lying around to support naming constraints and taking the name up into the policy, so that denials based on constraints can at least give feedback to the user about which constraint is holding an access back (rather than just a denial that the user cannot explain). Hopefully such patches can be made available in the kernel and userspace utilities soon.

    Posts for Tuesday, May 27, 2014

    Blocked by GMail

    Our increased dependency on centralised solutions – even in systems that are created to be decentralised – is becoming alarming.

    This week’s topic is GMail1. And if you have not yet, do read up Mako’s and Karsten’s blog posts.

    What is currently happening to me is that for some reason GMail stopped accepting mail from my private e-mail address, claiming I am a likely spammer. In case you wondered: I am not sending out spam, I would be very surprised if I had a virus on my regularly updated GNU/Linux laptop, and even more so if my e-mail provider’s server was abused.

    When everyone you know with a GMail account suddenly sends you replies in the following manner, you realise just how dependent on an outside service provider you are in your communication, even if you are not their client:

    <example@gmail.com>: host
        gmail-smtp-in.l.google.com[2a00:1450:4013:c01::1b] said: 550-5.7.1
        [2a02:d68:500::122      12] Our system has detected that this message
        550-5.7.1 is likely unsolicited mail. To reduce the amount of spam sent to
        550-5.7.1 Gmail, this message has been blocked. Please visit 550-5.7.1
        http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for 550
        5.7.1 more information. u13si15106334wiv.49 - gsmtp (in reply to end of
        DATA command)
    

    The problem of not being their client is even worse, as you then do not have big enough leverage, and often not even an easy way to contact them about such issues.

    On a not too unrelated note, e-mail is a complex beast2, and while deprecating it would take an immense amount of work as well as quite a long time, it is interesting to see new technology popping up to create a new and better Internet, as well as old technology like GnuPG improving to protect us in the digital world.

    While this rant was triggered by my trouble with GMail, do note that it is not just Google that we have to be wary of – in other areas of our communication we need to aim for decentralisation as well. SecuShare3 provides a nice comparison of current and planned technology.

    hook out → catching up with e-mail backlogs :P

    Update: It seems that the whole mail server got blocked by GMail. The issue is now finally solved by migrating the whole mail server and with that creating new SSL/TLS certificates.


    1. And I am not talking about top-posting, full-quoting and other major violations of the general e-mail netiquette that GMail users regularly make. 

    2. As two examples, let us name Facebook and Microsoft. On the server side, Facebook’s recent withdrawal from offering an e-mail service was anticipated by some IETF members, as Facebook had not attended any e-mail related conferences and workshops, where apparently you get to fully understand the interaction. On the client side, Microsoft’s Outlook is already infamous for ignoring major parts of the e-mail standard (e.g. quotation marks, attachments, …). 

    3. SecuShare is a project based on GNUnet and PSYC that is well worth checking out. 

    Posts for Monday, May 26, 2014

    USB passthrough to a VM, via GUI only

    It sure has gotten easier to add USB devices to VMs with libvirt’s virt-manager and its nice UI:

    www.linux-kvm.org/page/USB_Host_Device_Assigned_to_Guest
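
    Should you ever need the same trick without a GUI, libvirt can do it from the command line as well – a sketch with made-up USB vendor/product IDs and guest name:

    <!-- usbdev.xml: pass a host USB device through to the guest -->
    <hostdev mode='subsystem' type='usb'>
      <source>
        <vendor id='0x1234'/>
        <product id='0x5678'/>
      </source>
    </hostdev>

    ~# virsh attach-device myguest usbdev.xml --live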

    Posts for Monday, May 19, 2014

    KDE Community: be liberal with ourselves, be harsh with others

    (yes, the title is a tribute to the robustness principle) In quite an aggressive move, I’ve been censored by KDE: my blog has been removed from Planet KDE. The only information I have so far is a mail (and this): SVN commit 1386393 by jriddell: Disable Thomas Capricelli's blog for breaching Planet KDE guidelines CCMAIL:orzel@xxxxx […]

    Posts for Thursday, May 15, 2014

    Sony, meet the EFF

    Picture: “1984…meet DRM” by Josh Bonnain (CC-BY)

    Today the Internet was dominated (at least in Europe) by two main topics1:

    The first topic was the fallout of a legal debate: the European Court of Justice decided to rule in favor of a “right to be forgotten” regarding search engines. A Spanish gentleman had, after unsuccessfully trying to get a Spanish newspaper to unpublish an older story about the bankruptcy of a company he had owned, sued Google to remove all pointers to that still existing article from its index. The court claimed that a person’s right to privacy would in general trump all other potential rights (such as Google’s freedom of expression to link to an undisputedly true article). The Washington Post has a more detailed post on this case. I have also written about the hazards of the “right to be forgotten” a few times in the past, so I’m not gonna repeat myself.

    The second important story today had more of a technical spin: Mozilla, the company developing the popular standards-compliant and open source web browser Firefox, announced that they would implement the DRM2 standard that the W3C proposed. DRM means that a content provider can decide what you, the user, can do with the content they made available to you: maybe you can only watch it on one specific device, or you may not save a copy, or you can only read it once. It’s about giving a content provider control over the use of data that they released into the wild. The supporters of civil liberties and the open web from the Electronic Frontier Foundation (EFF) were not exactly happy, lamenting “It’s official: the last holdout for the open web has fallen”.

    What do these stories have to do with each other?

    Both deal with control. The DRM scheme Mozilla adopted (following the commercial browser vendors such as Apple, Google and Microsoft) is supposed to define a standardized way for content providers to control the use of data.3 The EU court order is supposed to give European people the legal tools to control their public image in our digital age.

    That made me wonder. Why do so many privacy and civil rights organizations condemn technical DRM with such fury? Let’s do a quick thought experiment.

    Let’s assume that the DRM would actually work flawlessly. The code of the DRM module – while not being open source – would have been audited by trusted experts and would be safe for the user to run. So now we have the infrastructure to actually enforce the legal rights of the content providers: if they only want you to view their movie on Thursdays between 8 and 11 PM, that’s all you can do. But if we defined the DRM standard properly, we as individuals could use that infrastructure as well! We could upload a picture to Facebook and hardwire into it that people can only see it once. Or that they cannot download it to their machines. We could attach that kind of rights management to the data we send out to a government agency, or to Amazon when buying a bunch of stuff. We would gain real, tangible control over our digital representation.

    Privacy in its interpretation as the right to control what happens with the data you emit into the world is structurally very similar to the kind of copyright control that the movie studios, music publishers or software companies want: It’s about enforcing patterns of behavior with data no longer under your direct control.

    Having understood this, it seems strange to me that NGOs and entities fighting for the right of people to control their digital image do not actually demand standardized DRM. There is always the issue of the closed source blob that people have to run on their machines, which right now is never audited properly and is therefore much more of a security risk than a potential asset. Also, the standard as it is right now4 doesn’t seem to make it simple for people to actually enforce their own rights and define their own restrictions. But all those issues sound a lot like implementation details, like bugs in the first release of the specification.

    We have reached somewhat of a paradox. We demand that the individual be able to enforce its rights even when that means hiding things that are actually legal to publish (by making them invisible to the big search engines). But when other entities try the same, we can’t cry foul fast enough.

    The rights of the individual (and of other legal entities for that matter, even though I find treating companies as people ludicrous) always clash with the rights of other individuals. My right to express myself clashes with other people’s right to privacy. There is no way to fully express all those rights; we have to balance them against each other constantly. But there also is no simple hierarchy of individual rights. Privacy isn’t the super-right that some people claim it to be, and it shouldn’t be – even if the EU Court of Justice seems to believe so.

    The EFF and Sony might really have more goals in common than they think. If I were the EFF, that would seriously make me think.

    1. at least in my filter bubble, YMMV
    2. Digital Rights Management
    3. Admittedly by breaking one of Mozilla’s promises: while the programming interface to the DRM software module is open source, the DRM module itself isn’t and cannot be, to make it harder for people to get around the DRM.
    4. keep in mind that I am not a member of the W3C or an expert in that matter

