Posts for Tuesday, April 7, 2015

Smartwatches and the “veil of the analogue”

When I was about 5 my father gave me my first watch. It was very cool: it had a colorful watch band and a football player1 on its watch face. And it was a manual watch that I had to wind up every week, which I found utterly fascinating: Just a few turns of the small wheel and it “magically” worked for a long time (when you’re 4 or 5 a week is long!).

I don’t know where that watch ended up or how long I used it. I went through some more or less cheap-ass digital watches during my time in school. Ugly, plasticky devices that had one thing going for them: They were digital. Precise. You got the exact time with a short glance and not just a rough estimation. Which felt “better” in a weird way, like the future as it was shown in ’70s sci-fi movies, with clear and precise, sometimes curved lines. It felt right in a way. A little bit of the promise of a more developed future. Star Trek on my wrist. In some small and maybe not even meaningful way it felt powerful. As if that quartz-driven precision would somehow make the constant flow of time more manageable.

When I went to study computer science I always had some form of cheap watch on my arm to tell the time since I refused to get a cell phone for the longest time (for a bunch of mostly stupid and pretentious reasons). But when a software development gig that a friend and I worked on finished with a bonus payment, I got a “real” watch.

I wore that thing for a few years and loved it. Not just because it reminded me of Gir from Invader Zim (even though that was a big part of it). The fallback to an analog watch felt like a nice contrast to the digital spaces I was spending more and more time in. Its two watch faces were so small that you could hardly read a precise time on them. It was a strangely out-of-time anchor to the past while I dove head-on into what I perceived to be “the future”.

A cellphone came, then a feature phone and finally smartphones, and at some point I stopped replacing the batteries in my watch. I carried my phone around with me anyways so why add another device to the mix that had only that one feature? It felt pointless, especially with me being sort of the prototypical nerd back then, explicitly not “caring” about “looks and style” and all that stuff while sticking very closely to the codices and fashions of the subculture I identified with back then. If you gather from this that I was probably just another insecure dude in his early twenties, a little too full of his own bullshit, you would probably be correct.

But about 6 months ago things changed and I got another watch. A “smart” one even (we’ll come back to that word later). Here’s some kind of “review”, or a summary of the things I learned from wearing one for those few months. But don’t be afraid, this isn’t a techbloggy text. Nobody cares about pixel counts and the megahertz of whatever-core processors. Given the state of the art, the answer to most inquiries about features and performance is usually “enough” or “more than enough”. It’s also not about the different software platforms and whether Google’s is better than Apple’s (which very few people have even used for more than a few minutes if at all) because given the relative newness of the form factor and class of device it will look very different in a few years anyways. And for most interesting questions about the devices the technical configurations just don’t matter, regardless of what the producer’s advertising is trying to tell you.

I’ll write about how a smartwatch can change your relationship to your phone, your interactions with people and how smartwatches cast a light on where tech might be in a few years. Not from a business, startup, crowdfunding perspective but as a few loosely connected thoughts on humans and their personal devices as sociotechnical systems.

I should also add a short disclaimer. I understand that me posting about jewelry and rather expensive devices could be read as classist. I live a very privileged life with – at least at the moment – enough income to fund frivolities such as a way more expensive than necessary smartwatch, a smartphone etc., but I do believe that my experience with these devices can help in understanding and modeling different uses of technology, their social context and their meaning. These devices will grow cheaper, reaching more and more people (at least in certain parts of the world). But I am aware of the slightly messed up position I write this from.

Let’s dive in (finally!).

So. What is a smartwatch, really?

Wearables are a very hot topic right now2. Many wearables are … well, let’s not say stupid, let’s call them very narrowly focused. Step counters, for example, are very popular these days, turning your daily movement into a few numbers to store, compare and optimize. Some of these things try to measure heart rates and similar low-hanging fruit as well.

On the other end of the spectrum we have our smartphones and tablet computers or maybe even laptops which we carry around each day. Many might not consider their phone a wearable because it resides in your pocket or satchel, but in the end it is more than just some object you schlep around all day; it is – for many if not most of us – an integral piece of our mental exoskeleton. Just ask people whose phone needs a repair longer than a few hours.

Smartwatches are somewhere between those two extremes. Many modern examples of this class of devices include a few of the sensors typically associated with dumb wearables: A heartrate monitor or a pedometer (fancytalk for step counter) for example. But smartwatches can do more: they can install apps and provide features that make them feel very capable … unless you forget your phone.

Because in the end a smartwatch is just a different view into your smartphone. A smaller screen attached to a somewhat more convenient location of your body. Sure, there are great apps for smartwatches. I got one that makes my hand movements give me Jedi force powers, turning certain movements into configurable actions. Another app is just a very simple recorder for audio memos. There are calculators, dice-rolling apps and many more, but their usefulness is usually very limited. No, let’s again say focused. And that is a good thing.

Without a phone connected my watch falls back to one of its dumbest and surprisingly most useful features: It shows me the time and my agenda.

You can imagine the sort of look that my wife gave me when I proclaimed this fundamental understanding of my new plaything. “So the killer feature of that smart device is showing the time?” she asked jokingly. But it’s a little more complex. My smartwatch allows me to check certain things (time, agenda, certain notifications from apps I authorized) without picking up my phone which can – and all too often does – pull your attention in like a black hole. You just wanted to check the time but there’s this notification and someone retweeted you and what’s going on on Facebook … what was I supposed to do here?

I’m not a critic of our networked culture in general, not one of the neo-luddites trying to frame being “offline” in some way as the better or more fulfilling mode of life. But the simplicity that the small form factor and screen size of smartwatches enforces, the reduction to very simple interactions can help to stay focused when you want to.

Most apps on my watch are mere extensions of the apps running on my phone. And that’s actually precisely what I want and not the drawback it’s sometimes made out to be. I get a message pushed to my wrist and can react to it with a few prepared response templates or by using voice recognition (with all the funny problems that come with it). But again: I can stay focused on whatever I am doing now (such as riding my bike in traffic) while still being able to tell the person I am meeting that I’ll be there in 5 minutes. The app I use to record my running shows me certain stats on my wrist, and I can switch to the next podcast or music track in my queue while still keeping my attention on the road.

I don’t know when I printed my last map screenshot or route description. When smartphones and navigation became widely available there was no longer the need to reduce a place you were going to to a handful of predefined rails you had created for yourself in order not to get lost. You step off the train or plane and the unknown city opens up to you like the inviting and fascinating place it probably is. You can just start walking since you know that even if you don’t speak the language perfectly you’ll find your way. My smartwatch does that while allowing me to keep my phone in my pocket, letting me look less like a tourist or target. When the little black thing on my arm vibrates I check it to see where to turn; apart from that I just keep walking.

Sure, there will be apps coming that use these watches in more creative and useful ways. Apps that thrive not in spite of but because of the tiny form factor. But that’s mostly a bonus, and if it doesn’t happen I’d be fine as well. Because the watch as a simplified, ultra-reduced, ultra-focused remote to my mobile digital brain is feature enough. Where digital watches used to give me an intangible feeling of control over time, the smart-ish watch does actually help me feel augmented by my devices in a way that doesn’t try to capture as much of my attention as smartphones tend to do. The watch is not a smaller smartphone but your phone’s little helper. The small and agile Robin to the somewhat clunky Batman in your pocket3.

Acceptance

Any new technology has to carve out its niche and fight for acceptance. And some don’t and die for a plethora of reasons (MiniDisc, I always liked you). There are many reasons why people, mostly “experts” of some sort, don’t believe that smartwatches will gain any traction.

“You have to recharge them every night, my watch runs for weeks, months, years!” Yeah. And on Tuesday it’s darker than at night. Oh, we weren’t doing the whole wrong comparison thing? Damn. Just as people learned to charge their phones every night they’ll get used to throwing their watch on a charger at night. My watch gets charged wirelessly with a Qi standard charger that sets you back about 10 bucks. It’s a non-issue.

“But it doesn’t do a lot without a phone! It needs its own camera, internet connection, coffee maker and washing machine!” Nope. Simplicity and reduction is what makes this class of devices interesting and useful. I don’t need a half-assed smartphone on my arm when I have a good one in my pocket. I need something that helps me use my actual device better. Another device means all kinds of annoyances. Just think about synchronization of data.

I am in the lucky position of not having to deal with tech writers and pundits in all facets of my life. What I learned from interacting with non-techy people and the watch is actually not that surprising if you think about it: A smartwatch is a lot less irritating and invasive than a smartphone.

There are friends where I know I can just look at my phone while we hang out and they’ll not consider it an insult or affront. They might enjoy the break from talking, might want to check a few things themselves or just relax for a second without having to entertain me or the others in the room. But not everybody feels that way (and why should they, it’s not like submerging yourself in the digital is the only right way to live). In those situations the look at the watch is a mostly established and accepted practice, unless you check your watch every minute.

Some tech people tend to ignore the social. They might try to press it into services and data but often seem to overlook any sort of information a social act transports apart from the obvious. In pre-digital worlds checking your watch every few minutes was sometimes considered rude or would be read as a signal to leave your hosts etc. But where the glance at the watch is merely the acknowledgement of the existence of time and a world outside of the current situation, getting out your smartphone puts the outside world into focus making the people you share a physical space with just a blur in the corner of your vision.

Of course it’s your right to check your phone whenever you want, just as people can be insulted or at least irritated by it. Because a smartwatch can serve as a proxy for access to your digital identity and network from your physical location and context, it can help you communicate that you value the moment without feeling disconnected. Especially since neither being very digitally connected nor valuing physical meetings more highly is “better”, having this sort of reduced stub of the digital that closely on you can serve as a good compromise for these situations.

A smartwatch is accepted because it is a watch. And we as a culture know watches. Sure, some very techy, clunky, funky-looking devices break that “veil of the analogue” by screaming “I AM TECHNOLOGY, FEAR ME” through their design. But the simpler versions that avoid the plasticky look of Casio watches on LSD are often overlooked and not even perceived as technology (and therefore as an irritation or even a danger) by people who are sceptical of technology. That’s the problem devices such as Google’s Glass project have, which also has very interesting and potentially beneficial use cases but looks so undeniably alien that everyone expects a laser gun to appear. And that’s where smartwatches embed themselves into existing social norms and practices: by looking like the past and not screaming FUTURE all too loud.

Body Area Network and the Future

What does this mean for the Future(tm)? The ideas of the Body Area Network and the Personal Area Network already exist: We are more and more turning into digital cyborgs4, creating our own personal “cloud” and network of data and services along the axes of and around our physical bodies.

Right now smartphones seem to be some sort of hub we carry around. The little monolith containing our data, internet access and main mobile interface to our digital self. Other devices connect to the hub, exchange data and use the services it provides (such as Internet connectivity or a camera). But looking at things like Google’s project Ara a different idea emerges.

Ara is a modular smartphone platform that allows you to add, remove and change the hardware modules of your phone at runtime. While it’s mostly framed as a way for people to buy their devices in parts, upgrading them when their personal financial situation allows it, the modular approach also has different trajectories influencing how our BANs and PANs might look in a few years.

Changing a phone can be annoying and/or time consuming. The backup software might have failed or forgotten something valuable. Maybe an app isn’t available on the new system or the newer version is incompatible with the data structure the old version left in your backup. We suffer through it because many of us rely on our personal information hubs making us potentially more capable (or at least giving us the feeling of being so).

Understanding smartwatches as reduced, minimal, simplified interfaces to our data, and looking at wearables as very specific data-gathering or displaying devices, it seems to make sense to centralize your data on one device that your other devices just connect to. These days we work around that issue with tools such as Dropbox and other similar cloud sync services trying to keep all our devices up to date, sometimes failing horribly. But what if every new device just integrated into your BAN/PAN, connected to your data store and either contributed to it or gave you a different view on it? In that world wearables could become even “dumber” while still appearing very “smart” to the user (and we know that to the user, the interface is the product).

The smartphones that we use are built with healthy people in mind, people with nimble fingers and good eyesight. Smartwatches illustrate quite effectively that the idea of the one device for every situation has overstayed its welcome somewhat. That different social or even personal circumstances require or benefit from different styles and types of interfaces. Making it easier for people to find the right interfaces for their needs, for the situations they find themselves in, will be the challenge of the next few years. Watches might not always look like something we’d call a watch today. Maybe they’ll evolve into gloves, or just rings. Maybe the piercing some wear in their upper lip will contain an antenna to amplify the connectivity of the BAN/PAN.

Where Ara tries to make phones more modular, wearables – when done right – show that we can benefit a lot from modularizing the mobile access to our digital self. Which will create new subtle but powerful signals: Leaving certain types of interfaces at home or disabled on the table to communicate an ephemeral quality of a situation, or only using interfaces focused on the shared experience of the self and the other when being with another person, creating a new kind of intimacy.

Comedown

But right now it’s just a watch. With some extras. Useful extras though. You wouldn’t believe how often the app projecting the video from my smartphone camera onto my wrist has been useful to find something that has fallen behind the furniture. But none of them really, honestly legitimizes the price of the devices.

But the price will fall and new wearables will pop up. If you have the opportunity, try them out for a while. Not by fiddling around on a tiny display playing around with flashy but ultimately useless apps but by integrating them into your day for a few weeks. Don’t believe any review written with less than a few weeks of actual use.

  1. Football in the meaning most of the world uses it. The one where you play by kicking a ball around into goals.
  2. It’s so hot that my bullshit-o-meter has reached a new peak while reading the term “wearable clothing” somewhere recently.
  3. that sounded dirtier than it was supposed to
  4. we have always been cyborgs, beings combined from biology and culture and technology so that isn’t actually surprising


Posts for Saturday, April 4, 2015

Paludis 2.4.0 Released

Paludis 2.4.0 has been released:

  • Bug fixes.
  • We now use Ruby 2.2, unless --with-ruby-version is specified.

Filed under: paludis releases

Posts for Sunday, March 29, 2015

A wise choice? Github as infrastructure

So more and more projects are using github as infrastructure. One of the biggest cases I’ve seen is the Go programming language, which allows you to specify “imports” directly hosted on code sharing sites like github and “go get” to fetch them all before compilation. But lots of other projects are adopting it too, like Vim’s Vundle plugin manager, which also allows fetching and updating of plugins directly from github. Also I wouldn’t be surprised if one or more other languages’ package managers, from pip to npm, do this too. I know it’s pretty easy and now cool to do this but…
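
To make the pattern concrete, here is a minimal sketch (the repository name is made up): the import path doubles as the download URL, so the build blocks on github before anything even compiles.

    # fetch a dependency straight from github, as "go get" does for
    # every github-hosted import path before compilation
    go get github.com/someuser/somelib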

It isn’t actually infrastructure grade. And that is highlighted well in events like this week’s, when they are suffering continual outages from a massive DDoS attack that some news sources are suspecting might be nation-state based.

How much fun is your ops team having deploying your new service when half its dependencies are being pulled directly from github, which is unavailable? Bit of a strange blocker, hm?

Posts for Wednesday, March 25, 2015

HowTo: Permanently redirect a request with parameter consideration in Tengine/NginX

Well, this one gave me a super hard time. I looked everywhere and found nothing. There is a lot of misinformation.

As usual, the Nginx and Funtoo communities helped me. Thanks to:

  • MTecknology in #nginx @ Freenode
  • Tracerneo in #funtoo @ Freenode

So, how do we do this? Easy, we use a map:

    # get ready for long redirects
    map_hash_bucket_size 256;
    map_hash_max_size 4092;

    # create the map
    map $request_uri $newuri {
        default 0;

        /index.php?test=1 /yes;
        /index.php?test=2 https://google.com/;
    }

    server {
        listen *;
        server_name test.php.g02.org;
        root /srv/www/php/test/public;

        # permanent redirect
        if ($newuri) {
            return 301 $newuri;
        }


        index index.php index.html;
        autoindex on;

        include include.d/php.conf;

        access_log /var/log/tengine/php-access.log;
        error_log /var/log/tengine/php-error.log;
    }

So, basically, you want to use $request_uri in order to catch the URI with its parameters. I wasted all day figuring out why $uri didn’t have this. It turns out it discards the parameters… anyway.
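
A quick way to verify the result from the command line (assuming the vhost above is reachable from where you test):

    # both requests should return 301 with the mapped target in the
    # Location header (/yes and https://google.com/ respectively)
    curl -I 'http://test.php.g02.org/index.php?test=1'
    curl -I 'http://test.php.g02.org/index.php?test=2'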

This one was a hard one to find. Please, share and improve!


Posts for Friday, March 6, 2015


Trying out Pelican, part one

One of the goals I’ve set myself this year (not as a new year resolution though, I *really* want to accomplish this ;-) is to move my blog from WordPress to a statically built website. And Pelican looks to be a good solution to do so. It’s based on Python, which is readily available and supported on Gentoo, and is quite readable. Also, it looks to be very active in development and support. And also: it supports taking data from an existing WordPress installation, so that none of the posts are lost (with some rounding error that’s inherent to such migrations of course).

Before getting Pelican ready (which is available through Gentoo btw) I also needed to install pandoc, and that became more troublesome than expected. While installing pandoc I got hit by its massive amount of dependencies towards dev-haskell/* packages, and many of those packages really failed to install. It does some internal dependency checking and fails, informing me to run haskell-updater. Sadly, multiple re-runs of said command did not resolve the issue. In fact, it wasn’t until I hit a forum post about the same issue that a first step to a working solution was found.

It turns out that the ~arch versions of the haskell packages work better. So I enabled dev-haskell/* in my package.accept_keywords file. And then started updating the packages… which also failed. Then I ran haskell-updater multiple times, but that also failed. After a while, I had to run the following set of commands (in random order) just to get everything to build fine:

~# emerge -u $(qlist -IC dev-haskell) --keep-going
~# for n in $(qlist -IC dev-haskell); do emerge -u $n; done

It took quite some reruns, but it finally got through. I never thought I had this many Haskell-related packages installed on my system (89 packages here to be exact), as I never intended to do any Haskell development since I left the university. Still, I finally got pandoc to work. So, on to the migration of my WordPress site… I thought.

This is a good time to ask for stabilization requests (I’ll look into it myself as well of course) but also to see if you can help out our arch testing teams to support the stabilization requests on Gentoo! We need you!

I started with the official docs on importing. Looks promising, but it didn’t turn out too well for me. Importing was okay, but then immediately building the site again resulted in issues about wrong arguments (file names being interpreted as an argument name or function when an underscore was used) and interpretation of code inside the posts. Then I found Jason Antman’s converting wordpress posts to pelican markdown post to inform me I had to try using markdown instead of restructured text. And lo and behold – that’s much better.
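
For reference, the markdown-based import boils down to something like this (the export file name is a placeholder; pelican-import ships with Pelican and leans on pandoc for the conversion):

    # convert a WordPress XML export into markdown posts under content/
    pelican-import --wpfile -m markdown -o content wordpress-export.xml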

The first builds look promising. Of all the posts that I made on WordPress, only one gives a build failure. The next thing to investigate is theming, as well as seeing how good the migration went (the absence of errors doesn’t mean the migration was successful, of course) so that I know how much manual labor I have to take into consideration when I finally switch (right now, I’m still running WordPress).

Posts for Tuesday, March 3, 2015

Rejected session proposals for republica #rp5

I submitted two talk proposals to this year’s re:publica conference, both of which got rejected. If you have another conference where you think they might fit in, drop me an email.


The Ethics of (Not-)Sharing

Like this, share this, tweet this. Our web tech is focussed on making things explode: More clicks, more ads, more attention. But recent events have shown that sharing isn’t always good or even wanted. But how to decide when sharing is cool and when it isn’t? This talk will dive into those questions and explain why context matters and why “RT≠Endorsement” is bullshit. Sharing is a political act.
The digital economy, or at least big parts of it, have been characterized as the attention economy: People and companies exchanging goods and services for other people’s attention (usually to translate said attention into money through ads).

In an attention economy the act of sharing is the key to ongoing growth: You need more reach, more followers and likes for your product, so getting people to share is paramount in order to raise sales or subscriptions.

But given how the platforms for social interaction used by most people are built, the same applies to people and their relationships and posts. Facebook, Google, Twitter, Tumblr – no matter what platform you turn to, the share button is god. Sharing means not just caring but is very often one of the most prominently placed functions on any given site.

And who wouldn’t? Sharing is nice, it’s what we teach our kids. Sharing as a method to spread culture, to give access to resources, to make someone’s voice heard? Who could have anything against sharing? Sharing has become so big and important, that the term itself has been used to whitewash a new kind of business model that is really not about sharing at all, the “Sharing Economy” of Uber and Airbnb.

In light of the recent terror videos of beheadings the question of when it’s OK to share something has come back into public attention: Are we just doing the terrorists’ work by sharing their propaganda? When Apple’s iCloud was hacked and naked pictures of female celebrities were published, didn’t all the people sharing them participate in the sexual assault that it was?

The answers to those very extreme situations might be simple for many. But in our digital lives we are often confronted with the question of whether sharing a certain piece of information is OK, is fair or right.

In this session I want to argue that sharing isn’t just about content but also about context. When sharing something we are not just taking on some responsibility for our social circle, the people around us; we are also saying something by what we share with whom, on what topic, at what time and so on. I’ll show some experiments or rules that people have published on their sharing and look at the consequences. The session will finish with the first draft of an ethics of (not-)sharing: a set of rules governing what to share and what to leave alone.


In #Cryptowars the social contract is the first casualty

The cryptowars started again this year and the netizens seem to agree that regulation seems stupid: How can governments believe to regulate or ban math? In this talk we’ll examine this position and what it means for a democratic government. How can local laws be enforced in an interconnected, global world? Should they at all? Are the cypherpunks and cryptolibertarians right? This talk will argue the opposite from the basis of democratic legitimization and policy.
The year began with governments trying to regulate cryptography, the so-called cryptowars 2.0. Cameron in the UK, de Maizière in Germany and a bunch of people in the EU were looking into cryptography and how to get access to people’s communications in order to enforce the law. The Internet, hackers and cypherpunks at the forefront, wasn’t happy.

But apart from the big number of technical issues with legislation on cryptography there are bigger questions at hand. Questions regarding democracy and politics in a digital age. Questions that we as a digital community will have to start having good answers to soon.

We’ve enjoyed the exceptionalism of the Digital for many years now. Copyright law was mostly just an annoyance that we circumvented with VPNs and filesharing. We could buy drugs online that our local laws prohibited and no content was out of our reach, regardless of what our governments said. For some this was the definition of freedom.

Then came the politicians. Trying to regulate and enforce, breaking the Internet and being (in our opinion) stupid and clueless while doing so.

But while the Internet allows us (and corporations) to break or evade many laws we have to face the fact that the laws given are part of our democratically legitimized social contract. That rules and their enforcement are traditionally the price we pay for a better, fairer society.

Do governments have the duty to fight back on cryptography? What kind of restrictions to our almost limitless freedom online should we accept? How can a local democracy work in a globalized digital world? Or is the Internet free of such chains? Are the cryptolibertarians right?

These and more questions I’ll address in this session. Europe as a young and still malleable system could be the prototype of a digital democracy of the future. Let’s talk about how that could and should work.



Conspiracies everywhere

So Google decided to stop making full disk encryption the default for Android for now (encryption is still easily available in the settings).
UPDATE: It was pointed out to me that Google’s current Nexus line devices (Nexus 6 and 9) do come with encrypted storage out of the box; it’s just not the default for legacy devices, making the EFF comment even more wrong.

It took about 7 centiseconds for the usual conspiracy nuts to crawl out of the woodwork. Here an example from the EFF:

“We know that there’s been significant government pressure, the Department of Justice has been bringing all the formal and informal pressure it can bear on Google to do exactly what they did today,” Nate Cardozo, a staff attorney at the Electronic Frontier Foundation, told me.

In the real world the situation is a lot simpler, a lot less convoluted: Android phones sadly often come with cheap flash storage and only a few devices use modern file systems. Full disk encryption on many Android devices (such as my own Nexus 5) is slow as molasses. So Google disabled the default to make phones running its operating system not look like old Pentium machines trying to run Windows 8.

It’s easy to see conspiracies everywhere. It’s also not great for your mental health.

Flattr this!

Posts for Sunday, February 15, 2015


CIL and attributes

I keep on struggling to remember this, so let’s make a blog post out of it ;-)

When the SELinux policy is being built, recent userspace (2.4 and higher) will convert the policy into CIL language, and then build the binary policy. When the policy supports type attributes, these are of course also made available in the CIL code. For instance the admindomain attribute from the userdomain module:

...
(typeattribute admindomain)
(typeattribute userdomain)
(typeattribute unpriv_userdomain)
(typeattribute user_home_content_type)

Interfaces provided by the module are also applied. You won’t find the interface CIL code in /var/lib/selinux/mcs/active/modules though; the code at that location is already “expanded” and filled in. So for the sysadm_t domain we have:

# Equivalent of
# gen_require(`
#   attribute admindomain;
#   attribute userdomain;
# ')
# typeattribute sysadm_t admindomain;
# typeattribute sysadm_t userdomain;

(typeattributeset cil_gen_require admindomain)
(typeattributeset admindomain (sysadm_t ))
(typeattributeset cil_gen_require userdomain)
(typeattributeset userdomain (sysadm_t ))
...

However, when checking which domains use the admindomain attribute, notice the following:

~# seinfo -aadmindomain -x
ERROR: Provided attribute (admindomain) is not a valid attribute name.

But don’t panic – this has a reason: as long as there is no SELinux rule applied towards the admindomain attribute, the SELinux policy compiler will drop the attribute from the final policy. This can be confirmed by adding a single, cosmetic rule, like so:

## allow admindomain admindomain:process sigchld;

~# seinfo -aadmindomain -x
   admindomain
      sysadm_t

So there you go. That does mean that anything that previously used the attribute assignment for decisions (like “for each domain assigned the userdomain attribute, do something”) will need to make sure that the attribute is really used in a policy rule.

Posts for Saturday, February 14, 2015

I ♥ Free Software 2015

“Romeo, oh, Romeo!” exclaims the 3D-printed robot Juliet to her 3D-printed Romeo.

It is that time of the year again – the day we display our affection to our significant other …and the Free Software we like best.

Usually I sing praise to the underdogs that I use, the projects rarely anyone knows about, small odd things that make my everyday life nicer.

This year though I will point out some communities I am (more or less) active in that impressed me the most in the past year.

  • KDE – this desktop needs no introduction and neither should its community. But every so often we have to praise things that we take for granted. KDE is one of the largest and nicest FS communities I have ever come across. After meeting a few known faces and some new ones at FOSDEM, I am very much looking forward to going to Akademy again this year!
  • Mageia – as far as GNU/Linux distros go, many would benefit by taking Mageia as a good example of how to include your community and how to develop your infrastructure to be inclusive towards newcomers.
  • Mer, Nemo Mobile – note: while Jolla is a company (and its Sailfish OS a commercial product with some proprietary bits), most of Sailfish OS’s infrastructure is FS and Jolla tries very hard to co-operate with its community and as a rule develops upstream. This is also the reason why the communities of the mentioned projects are very intertwined. The co-operation in this wider community is very active and, while not there yet, Mer and Nemo Mobile (with the Glacier UI coming soon) are making me very optimistic that a modern Free Software mobile OS is just around the corner.
  • Last, but not least, I must mention three1 communities that are not FS projects by themselves, but were instrumental in educating me (and many others) about FS and digital freedoms in general – Thank you, LUGOS for introducing me to FS way back in the ’90s and all the help in those early days! Thank you, Cyberpipe for all the things I learnt in your hackerspace! And thank you, FSFE for being the beacon of light for Free Software throughout Europe (and beyond)!

hook out → closing my laptop and running back to my lovely Andreja, whom I thank for bearing with me


  1. Historically Cyberpipe was founded as part of Zavod K6/4, but in 2013 Cyberpipe merged with one of its founders – LUGOS, thus merging the two already intertwined communities for good.

Posts for Sunday, February 8, 2015


Have dhcpcd wait before backgrounding

Many of my systems use DHCP for obtaining IP addresses. Even though they all receive a static IP address, it allows me to have them moved over (migrations), use TFTP boot, cloning (in case of quick testing), etc. But one of the things that was making my efforts somewhat more difficult was that the dhcpcd service continued (the dhcpcd daemon immediately went into the background) even though no IP address had been received yet. Subsequent service scripts that required a working network connection then failed to start.

The solution is to configure dhcpcd to wait for an IP address. This is done through the -w option, or the waitip instruction in the dhcpcd.conf file. With that in place, the service script now waits until an IP address is assigned.
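
As a small sketch (the interface name is just an example), the two variants look like this:

    # one-off: wait for an address before dhcpcd detaches
    dhcpcd -w eth0

    # permanent: add the instruction to the configuration file
    echo "waitip" >> /etc/dhcpcd.conf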

Posts for Saturday, February 7, 2015

I’d like my kernel vanilla, please

Yep, vanilla is the flavor of the kernel for me. I like using vanilla in #funtoo. It is nice and it is simple. No patches. No security watch-cha-ma-call-it or anything like that. Just me and that good ol’ penguin, which deals with my hardware, networking and you-name-it systems.

I like tailoring my kernel to my needs. Ran the glorious:

make localmodconfig

With all my stuff plugged in and turned on. Also, I took the time to browse the interesting parts of my kernel, checking out the help and all to see if I want those features or not. Especially in my networking section!

Anyway, that hard work is only done a few times (yep, I missed a lot of things the first time). It is fun and, after a while, you end up with a slim kernel that works fine for you.

All this said, I just wanna say: thank you, bitches! To the genkernel-next team. They’re doing great work while enabling me to use btrfs and virtio on my kernel by simplifying the insertion of these modules into my initrd. All I do when I get a kernel src upgrade is:

genkernel --virtio --btrfs --busybox --oldconfig --menuconfig --kernel-config=/etc/kernels/kernel-config-x86_64-3.18.<revision-minus-1> all
boot-update

or, what I just did to install 3.18.6:

genkernel --virtio --btrfs --busybox --oldconfig --menuconfig --kernel-config=/etc/kernels/kernel-config-x86_64-3.18.5 all
boot-update

Funtoo stores my kernel configs in /etc/kernels. This is convenient and genkernel helps me re-build my kernel, taking care of the old configuration and giving me the menuconfig to decide if I wanna tweak it some more or not.

Quite honestly, I don’t think --oldconfig is doing much here. It doesn’t ever ask me what I wanna do with the new stuff. It is supposed to have sane defaults. Maybe I am missing something. If anybody wants to clarify this, I am all eyes.
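
My rough understanding – an assumption on my part, not verified against the genkernel sources – is that it maps to the kernel’s own oldconfig target, run against the config you pass in:

    # roughly what --oldconfig should amount to: the kernel's own
    # oldconfig target, which only prompts for symbols that are new
    # since the saved config was written
    cd /usr/src/linux
    cp /etc/kernels/kernel-config-x86_64-3.18.5 .config
    make oldconfig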

Oh well, I hope you got an idea of how to maintain your own vanilla kernel config with genkernel-next and Funtoo.

Posts for Friday, January 30, 2015


Things I should’ve done earlier.

On Linux, there are things that you know are better but you don’t switch because you’re comfortable where you are. Here’s a list of the things I’ve changed the past year that I really should’ve done earlier.

  • screen -> tmux
  • apache -> nginx
  • dropbox -> owncloud
  • bash -> zsh
  • bootstrapping vim-spf -> my own tailored and clean dotfiles
  • phing -> make
  • sahi -> selenium
  • ! mpd -> mpd (oh why did I ever leave you)
  • ! mutt -> mutt (everything else is severely broken)
  • a lot of virtualbox instances -> crossbrowsertesting.com (much less hassle, with support for selenium too!)

… would be interested to know what else I could be missing out on! :)


Posts for Thursday, January 29, 2015

Cryptography and the Black or White Fallacy

Cryptography is the topic du jour in many areas of the Internet. Not in the form of the analysis of algorithms or the ongoing quest to find some reasonably strong kind of crypto that people without a degree in computer science and black magic are able and willing to use, but in the form of the hashtag #cryptowars.

The first crypto wars were fought when the government tried to outlaw certain encryption technologies, or at least implementations thereof with a certain strength. Hackers and coders found ways to circumvent the regulation and got the technology out of the US and into the hands of the open source community. Since those days cryptography has been widely adopted to secure websites, business transactions and – for about 7 people on this planet and Harvey the invisible bunny – emails.

But there is a storm coming:

Governments are publicly wondering whether platform providers should be asked to keep encryption keys around so that the police can access certain communication given proper authorization (that idea is usually called key escrow). Now obviously that is not something everyone will like or support. And that’s cool, we call it democracy. It’s people debating, presenting ideas, evaluating options and finally coming up with a democratically legitimized consensus or at least a resolution.

There are very good arguments for that kind of potential access (for example enforcement of the social contract/law, consistency with the application of norms in the physical world) as well as against it (for example the right to communicate without interference or the technical difficulty and danger of a key escrow system). For the proponents of such a regulation the argument is simple: Security, Anti-terror, Protection. Bob’s your uncle. For the opposition it’s harder.

I read many texts in the last few days about how key escrow would “ban encryption”. Which we can just discard as somewhat dishonest given the way the proposed legislation is roughly described. The other train of thought seems to be that key escrow would “break” encryption. And I also find that argument somewhat strange.

If you are a purist, the argument is true: If encryption has to perfectly protect something against everyone, key escrow would “break” it. But I wonder what kind of hardware these purists run their encryption on, what kind of operating systems. How could anyone ever be sure that the processors and millions of lines of code making up the software that we use to run our computers can be trusted? How easy would it be for Intel or AMD or whatever chip manufacturer you can think of to implement backdoors? And we know how buggy operating systems are. Even if we consider them to be written in the best of faith.

Encryption that has left the wonderful and perfect world of theory and pure algorithms is always about pragmatism. Key lengths for example are always a trade-off between the performance penalty they cause and the security they provide given a certain state of technology. In a few years computers will have gotten faster, which would make your keys short enough to be broken; but since computers have gotten faster, you can use longer keys and maybe even more complex encryption algorithms.

So why, if deploying encryption is always about compromise, is key escrow automatically considered to “break” all encryption? Why wouldn’t people trust the web anymore? Why would they suddenly be the target of criminals and theft, as some disciples of the church of crypto are preaching?

In most cases not the whole world is your enemy. At least I hope so, for your sake. Every situation, every facet of life has different threat models. How do threat models work? When I ride my bike to work I could fall due to a bad road, ice, some driver could hit me with their car. I address those threats in the way I drive or prepare: I always have my bike’s light on to be seen, I avoid certain roads and I keep an eye on the car traffic around me. I don’t consider the dangers of a whale falling down on me, aliens abducting me or the CIA trying to kill me. Some people might (and might have to given they annoyed the CIA or aliens), but for me, those are no threats I spend any mental capacities on.

My laptop’s harddrive is encrypted. The reason is not that it would protect its data against the CIA/NSA/AlienSecurityAgency. Because they’d just lock me up till I give them the key. Or punch me till I do. Or make me listen to Nickelback. No, I encrypt my drive so that in case my laptop gets stolen the thief might have gotten decent hardware but no access to my accounts and certain pieces of information. Actually, in my personal digital threat modeling, governments really didn’t influence my decision much.

In many cases we use encryption not to hide anything from the government. HTTPS makes sense for online stores not because the government could see what I buy (because given reasonable ground for suspicion they could get a court order and check my mail before I get it which no encryption helps against) but because sending around your credit card data in the clear is not a great idea(tm) if you want to be the only person using that credit card to buy stuff.

There are reasonable situations where encryption is used as a defense against governments and their agencies. But in those cases it’s some form of open source end-to-end cryptography anyways, something you cannot outlaw (as the crypto wars of old have proven). On the other hand, in many situations encryption is mostly used to protect us from certain asshats who would love to change our Facebook profile picture to a penis or a frog or a frog’s penis1 or who’d like us to pay for their new laptop and Xbox. And they wouldn’t get access to any reasonably secure implementation of key escrow.

The idea that any “impurity”, any interference with cryptography, “breaks” it is a typical black or white fallacy. Two options are presented for people to choose from: A) Cryptography deployed perfectly as it is in its ideal form and B) Cryptography is “broken”. But we know from our everyday life that that is – excuse my language – bullshit. Because every form of encryption we use is a compromise in some way, shape or form.

I have to extend trust to the makers of my hardware and software, to the people who might have physical access to my laptop at some point and to the fact that nobody sneaks into my home at night to install weird keyloggers on my machine. All that trust I extend does not “break” the encryption on my harddrive. You could argue that it weakens it against certain adversaries (for example a potentially evil Intel having a backdoor in my machine) but for my personal threat model those aspects are mostly irrelevant or without options. I don’t have the option to completely build my own computer and all the required software on it. Because I’ve got shit to do, pictures of monkeys to look at etc.

Personally I haven’t fully come to a conclusion on whether key escrow is a reasonable, good way to deal with the problem of enforcement of certain laws. And if it is, which situations it should apply to and who that burden should be placed on. But one thing is obvious: All those articles on the “death of crypto” or the “destruction of crypto” or the “war against crypto” seem to be blown massively out of proportion, forfeiting the chance to make the case for certain liberties or against certain regulation, in a style of communication reminding me of right-wing politicians using terrorist attacks to legitimize massive violations of human rights. Which is ironically exactly the kind of argument that those writing all these “crypto is under fire!!11” articles usually complain about.


  1. I don’t know if frogs have penises


Posts for Tuesday, January 27, 2015

StrongSwan VPN (and ufw)

I make ample use of SSH tunnels. They are easy, which is the primary reason. But sometimes you need something a little more powerful, like for a phone so all your traffic can’t be snooped out of the air around you, or so that all your traffic, not just that of SOCKS-proxy-aware apps, can be sent over it. For that reason I decided to delve into VPN software over the weekend. After a pretty rushed survey I ended up going with StrongSwan. OpenVPN brings back nothing but memories of complexity and OpenSwan seemed a bit abandoned, so I had to pick one of its descendants, and StrongSwan seemed a bit more popular than LibreSwan. Unscientific and rushed, like I said.

So there are several scripts floating around that will just auto set it up for you, but where’s the fun (and the understanding that allows tweaking) in that. So I found two guides and smashed them together to give me what I wanted:

strongSwan 5: How to create your own private VPN is the much more comprehensive one, but it also sets up a cert-style login system. I wanted passwords initially.

strongSwan 5 based IPSec VPN, Ubuntu 14.04 LTS and PSK/XAUTH has a few more details on a password based setup.

Additional notes: I pretty much ended up doing the first one straight through, except for creating client certs. Also the XAUTH / IKE1 setup of the password tutorial seems incompatible with the Android StrongSwan client, so I used EAP / IKE2, pretty much straight out of the first one. Also it seems like you still need to install the CA cert and vpnHost cert on the phone, unless I was missing something.

Also, as an aside, and a curve ball to make things more difficult, this was done on a new server I am playing with. Ever since I’d played with OpenBSD’s pf, I’ve been ruined for iptables. It’s just not as nice. So I’d been hearing about ufw from the Ubuntu community for a while and was curious if it was nicer and better. I figured after several years maybe it was mature enough to use on a server. I think maybe I misunderstood its point. Uncomplicated maybe meant not-featureful. Sure, for unblocking ports for an app it’s cute and fast, and even for straight unblocking a port its syntax is a bit clearer I guess? But as I delved into it I realized I might have made a mistake. It’s built on top of the same system iptables uses, but creates all new tables, so iptables isn’t really compatible with it. The real problem however is that the ufw command has no way to set up NAT masquerading. None. The interface cannot do that. Whoops. There is a hacky work around I found at OpenVPN – forward all client traffic through tunnel using UFW, which involves editing config files in pretty much iptables-style code. Not uncomplicated or easier or less messy like I’d been hoping for.
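
For the record, that workaround boils down to hand-editing ufw’s rules files; a minimal sketch (the subnet and interface are placeholders for your setup):

    # ufw itself has no masquerade command; the workaround is to add a
    # NAT table by hand to /etc/ufw/before.rules, something like:
    #
    #   *nat
    #   :POSTROUTING ACCEPT [0:0]
    #   -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
    #   COMMIT
    #
    # plus DEFAULT_FORWARD_POLICY="ACCEPT" in /etc/default/ufw,
    # then reload so the new table is picked up:
    ufw disable && ufw enable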

So a little unimpressed with ufw (but learned a bunch about it so that’s good and I guess what I was going for) and had to add “remove ufw and replace with iptables on that server” to my todo list, but after a Sunday’s messing around I was able to get my phone to work over the VPN to my server and the internet. So a productive time.

Posts for Wednesday, January 21, 2015


Old Gentoo system? Not a problem…

If you have a very old Gentoo system that you want to upgrade, you might have some issues with too old software and Portage which can’t just upgrade to a recent state. Although many methods exist to work around it, one that I have found to be very useful is to have access to old Portage snapshots. It often allows the administrator to upgrade the system in stages (say in 6-month blocks), perhaps not the entire world but at least the system set.

Finding old snapshots might be difficult though, so at one point I decided to create a list of old snapshots, two months apart, together with the GPG signature (so people can verify that the snapshot was not tampered with by me in an attempt to create a Gentoo botnet). I haven’t needed it in a while anymore, but I still try to update the list every two months, which I just did with the snapshot of January 20th this year.
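
Using one of those snapshots goes roughly like this (the file names below are placeholders for whichever snapshot you pick):

    # check the snapshot against its detached GPG signature first
    gpg --verify portage-20150120.tar.xz.gpgsig portage-20150120.tar.xz

    # then unpack it over the (backed-up) old tree and update in stages
    tar -xJf portage-20150120.tar.xz -C /usr
    emerge -uDN @system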

I hope it at least helps a few other admins out there.

Posts for Wednesday, January 14, 2015

Digital dualism, libertarians and the law – cypherpunks against Cameron edition

The sociologist Nathan Jurgenson coined the term “digital dualism” in 2011. Digital dualism is the idea that the digital sphere is something separate from the physical sphere, that those two “spaces” are distinct and have very different rulesets and properties, different “natural laws”.

Jurgenson defined this term in light of an avalanche of articles explaining the emptiness and non-realness of digital experiences. Articles celebrating the “Offline” as the truer, realer and – yes – better space. But the mirror-image of those offline-enthusiasts also exists. Digital dualism permeates the Internet positivists probably as much as it does most Internet sceptics. Take one of the fundamental, central documents that so much of the ideology of leading digital activists and organisations can be traced back to: The Declaration of the Independence of Cyberspace. Digital dualism is at the core of that eloquent piece of writing propping up “cyberspace” as the new utopia, the (quote) “new home of Mind”.

I had to think of that digital dualism fallacy, as Jurgenson calls it, when Great Britain’s Prime Minister David Cameron’s position on digital communication went public. Actually – I started to think about it when the reactions to Mr. Cameron’s plans emerged.

BoingBoing’s Cory Doctorow immediately warned that Cameron’s proposal would “endanger every Briton and destroy the IT industry“, the British Guardian summarized that Cameron wanted to “ban encryption“, a statement repeated by security guru Bruce Schneier. So what did Mr. Cameron propose?

In a public speech, about 4 minutes long, Cameron argued that in the light of terrorist attacks such as the recent attacks in Paris, the British government needed to implement steps to make it harder for terrorists to communicate without police forces listening in. The quote most news agencies went with was:

In our country, do we want to allow a means of communication between people which […] we cannot read?

Sounds grave and … well … evil. A big brother style government peeking into even the most private conversations of its citizens.

But the part left out (as indicated by the […]) adds some nuance. Cameron actually says (go to 1:50 in the video):

In our country, do we want to allow a means of communication between people which even in extremis with a signed warrant by the home secretary personally we cannot read?

He also goes into more detail, illustrating a process he wants to establish for digital communication analogous to the legal process we (as in liberal democracies) have already established for other, physical means of communication.

Most liberal democracies have similar processes for when the police needs to or at least wants to investigate some private individual’s communication such as their mail or the conversations within their own apartments or houses. The police needs to make their case to a judge explaining the precise and current danger for the public’s or some individual’s safety or present enough evidence to implicate the suspect in a crime of significant gravity. Then and only then the judge (or a similar entity) can decide that the given situation warrants the suspects’ human rights to be infringed upon. With that warrant or court order the police may now go and read a person’s mail to the degree the judge allowed them to.

Cameron wants something similar for digital communication meaning that the police can read pieces of it with a warrant or court order. And here we have to look at encryption: Encryption makes communication mostly impossible to read unless you have the relevant keys to unlock it. But there are different ways to implement encryption that might look very similar but make a big difference in cases like this.

The platform provider – for example WhatsApp or Google with their GMail service – could encrypt the data for its users. That would mean that the key to lock or unlock the data would reside with the platform provider who would make sure that nobody apart from themselves or the parties communicating could read it. In the best-practice case of so-called end-to-end encryption, only the two parties communicating have the keys to open the encrypted data. Not even the platform provider could read the message.

If we look at physical mail, the content of a letter is protected with a nifty technology called an “envelope”. An envelope is a paper bag that makes the actual contents of the letter unreadable, only the source and target addresses as well as the weight and size of the content can be seen. Physically envelopes are not too impressive, you can easily tear them open and look at what’s in them, but they’ve got two things going for them. First of all you can usually see when an envelope has been opened. But secondly and a lot more powerfully the law protects the letter inside. Opening someone else’s mail is a crime even for police detectives (unless they have the court order we spoke about earlier). But if the content is written in some clever code or secret language, the police is still out of luck, even with a court order.

From my understanding of Cameron’s argument, supported by his choice of examples, what he is going for is something called key escrow. This means that a platform provider has to keep the encryption keys necessary to decrypt communication going over their servers available for a while. Only when an authorized party asks for them with proper legitimisation (court order) does the platform provider hand over the keys for the specific conversations requested. This would actually work very similarly to how the process for access to one’s mail works today. (Britain does already have a so-called key disclosure law called RIPA which forces suspects to hand over their own personal encryption keys with a court order. This serves a slightly different use case though, because forcing someone to hand over their keys automatically informs them of their status as a suspect, making surveillance in order to detect networks of criminals harder.)

Key escrow is highly problematic, as anyone slightly tech-savvy can probably guess. The recent hacks on Sony have shown us that even global corporations with significant IT staff and budget have a hard time keeping their own servers and infrastructure secure from unauthorized access. Forcing companies to store all those encryption keys on their servers would paint an even bigger target on them than there already is: gaining access to those servers would not only give crackers a lot of data about people, but also access to their communication and potentially even the opportunity for impersonation, with all of its consequences. And even if we consider companies trustworthy and doing all they can to implement secure servers and services, bugs happen. Every piece of software more complex than “Hello World” has bugs, some small, some big. And if bugs can give attackers access to the keys to all castles, they will be found, if only by trial and error or pure luck. People are persistent like that.

Tech people know that, but Mr. Cameron might actually not. And as a politician his position is actually very consistent and coherent. It’s his job to make sure that the democratically legitimized laws and rules of the country he governs are enforced, and that the rights these laws give its citizens and all people are defended. That is what being elected prime minister of the UK means. Public and personal security are, just like a reasonable expectation of privacy, a big part of those rights, of those basic human rights. Mr. Cameron seems to see the safety and security of the people in Britain in danger and applies and adapts a well-established process to the digital sphere and the communication therein, homogenizing the situation between the physical and the digital spheres. He is in fact actively reducing or negating digital dualism while implicitly valuing the Internet and the social processes in it as real and equal to those in the physical sphere. From this perspective his plan (not the potentially dangerous and flawed implementations) is actually very forward-thinking and progressive.

But laws are more than just ideas or plans; each law can only be evaluated in the context of its implementation. A law giving every human being the explicit right to ride to work on a unicorn is worthless as long as unicorns don’t exist. And who would take care of all the unicorn waste anyway? And as we already analysed, key escrow and similar ways of giving governments central access to encryption keys are very, very problematic. So even if we might agree that his idea – the police having potential access to selected communication with a court order – is reasonable, the added risks of key escrow would make his proposal more dangerous and harmful than beneficial. But agree the cypherpunks do not.

Cypherpunks are a subculture of activists “advocating widespread use of strong cryptography as a route to social and political change” (to quote Wikipedia). Their ideology can be characterized as deeply libertarian, focused on the individual and their freedom from oppression and restriction. To them privacy and anonymity are key to the digital age. Quoting the Cypherpunk Manifesto:

Privacy is necessary for an open society in the electronic age. […]

We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy […]

We must defend our own privacy if we expect to have any. We must come together and create systems which allow anonymous transactions to take place. People have been defending their own privacy for centuries with whispers, darkness, envelopes, closed doors, secret handshakes, and couriers. The technologies of the past did not allow for strong privacy, but electronic technologies do.

Famous cypherpunks include Wikileaks’ Julian Assange, Jacob Appelbaum, who worked on the anonymisation software Tor and on Snowden’s leaked documents, as well as the EFF’s Jillian C. York. If there was an actual cypherpunk club, its member list would be a who’s who of the digital civil rights scene. The cypherpunk movement is also where the most fundamental critique of Cameron’s plans came from; its figureheads pushed the reading that the government wanted to ban encryption.

Cypherpunks generally subscribe to digital dualism as well. The quote from their manifesto makes it explicit, mirroring the idea of the exceptionalism of the Internet and the digital sphere: “The technologies of the past did not allow for strong privacy, but electronic technologies do.” In their belief the Internet is a new and different thing, something that will allow all their libertarian ideas of free and unrestricted societies to flourish. Governments don’t sit all too well with that idea.

Where the anti-Internet digital dualists argue for the superiority of the physical – the space where governments rule in their respective areas – mostly conceptualizing the digital sphere as a toy, a plaything or maybe an inferior medium, the pro-Internet digital dualists of the cypherpunk clan feel that the Internet has transcended, superseded the physical. That in this space, for its inhabitants, new rules – only new rules – apply. Governments aren’t welcome in this world of bits and heroes carrying the weapons of freedom forged from code.

To these self-proclaimed warriors of digital freedom every attempt by governments to regulate the Internet, to enforce their laws in whatever limited way possible, is an attack, a declaration of war, an insult to what the Internet means and is. And they do have good arguments.

The Internet has a different structure than the physical world. Where in the physical world distances matter a lot in defining who belongs together, and borders are sometimes actually hard to cross, the Internet knows very little distance. We feel that our friends on the other side of the globe might have a different schedule, might have finished dinner before we even had breakfast, but they are still as close to us as our next-door neighbor. Messages travel to any point on this globe fast enough for us not to perceive a significant difference between a message to a friend in Perth or one in Madrid.

Which government is supposed to regulate the conversation some Chinese, some Argentinian and some Icelandic people are having? Whose laws should apply? Does the strictest law apply or the most liberal one? Can a person break the laws of a country without ever having stepped into it, without ever having had the plan to visit that place? And how far is that country potentially allowed to go to punish these transgressions? Most of these questions haven’t been answered sufficiently and convincingly.

The approach of treating the Internet as this whole new thing beyond the reach of the governments of the physical world of stone and iron seems to solve these – very hard – problems quite elegantly. By leaving the building. But certain things don’t seem to align with our liberal and democratic ideas. Something’s rotten in the state of cypherpunkia.

Our liberal democracies are founded on the principle of equality before the law. The law has to treat each and every one the same. No matter how rich you are, who your family is or what color your toenails are: the rules are the rules. There is actually quite the outrage when that principle is transgressed, when privileged people go free where minorities are punished harshly. The last months, with their numerous people of color killed by policemen in the US, have illustrated the dangerous, even deadly consequences of a society applying its rules and the power of its enforcement entities unequally. Equality before the law is key to any democracy.

Here’s where pro-Internet digital dualism is problematic. It claims a different, more liberal ruleset for skilled, tech-savvy people. For those able to set up, maintain and use the digital tools and technologies securely. For the digital elite. The high priests of the new digital world.

The main argument against Cameron’s plans seems not to be that the government should never look at any person’s communication, but that it shouldn’t be allowed to look at the digital communication that a certain group of people has access to and has adopted as their primary means of communication. It’s not challenging the idea of what a government is allowed to do; it’s trying to protect a privilege.

Even with the many cases of abuse of power by the police, or by certain individuals within that structure using their access to spy on their exes or neighbors or whoever, there still seems to be a democratic majority supporting a certain level of access by the government or police to private communication in order to protect other goods such as public safety. And while many journalists and critics push for stronger checks and better processes to control the power of the police and its officers, I don’t see many people arguing for a total restriction.

This debate about government access illustrates what can happen when libertarian criticism of the actions of certain governments or government agencies of democratic states capsizes and becomes contempt for the very idea of democracy and its processes.

Democracy is not about efficiency; it’s about distributing, legitimizing and checking power as fairly as possible. The processes that liberal democracies have established to give the democratically legitimized government access to an individual’s communication or data in order to protect a public or common good are neither impenetrable nor efficient. It’s about trade-offs and checks and balances that try to protect the system against manipulation from within while still getting anything done. It’s not perfect, especially not in the implementations that exist, but it does allow people to participate equally, whether they like hacking code or not.

When digital activists argue against government activities that are properly secured by saying “the requirement of a court order is meaningless because they are trivial to get”, they might mean to point at some explicit flaw in a certain process. But often they also express their implicit distrust towards all government processes, forgetting or ignoring that governments in democratic countries are the legitimized representation of the power of the people.

Digital dualism is a dangerous but powerful fallacy. Where it has created a breeding ground for texts about the horrors of the Internet and the falsehood of all social interaction in this transnational digital sphere, it has also created an environment where the idea of government, and with it often the ideas of democracy, have been put up for debate, to be replaced with … well … not much. Software that skilled people can use to defend themselves against other skilled people who might have even better software.

Cryptography is a very useful tool for the individual. It allows us to protect communication and data, and makes so much of the Internet possible in the first place. Without encryption we couldn’t order anything online, do our banking, or send emails or tweets or Facebook updates without someone hacking in; we couldn’t store our data on cloud services as backups. We couldn’t trust the Internet at all.

But we are more than individuals. We are connected into social structures that sometimes have to deal with people working against them or the rules the social systems agreed upon. Technology, even one as powerful as cryptography, does not protect and strengthen the social systems that we live in, the societies and communities that we rely on and that make us human, define our cultures.

The fight against government spying (and that is what this aggressive battle against Cameron’s suggestion stems from: The fear that any system like that would be used by governments and spy agencies to collect even more data) mustn’t make us forget what defines our cultures, our commons and our communities.

We talk a lot about communities online and recently even about codes of conduct and how to enforce them. Big discussions have emerged online on how to combat harassment, how to sanction asocial behavior and how to protect those who might not be able to protect themselves. In a way the Internet is having a conversation with itself trying to define its own rules.

But we mustn’t stop there. You might think that coming up with rules on how to act online and ways to enforce them is hard, but the actual challenge is to find a way to reintegrate all we do online with the offline world. Because they are not separate: together they form the world.

The question isn’t how to keep the governments out of the Internet. The real question is how we can finally overcome the deeply rooted digital dualism to create a world that is worth living in for people who love tech as well as people who might not care. The net is no longer the cyber-utopia of a few hackers. It’s potentially part of everybody’s life and reality.

What does the democracy of the future look like? How should different national laws apply in this transnational space? How do human rights translate into the digital sphere and where do we need to draw the lines for government regulation and intervention? Those are hard questions that we have to talk about. Not just hackers and techies with each other, but everyone. And I am sure that at the end of that debate a key escrow system such as the one Mr. Cameron seemingly proposed wouldn’t be what we agree on. But to find that out we have to start the discussion.

Photo by dullhunk


Posts for Sunday, January 4, 2015

Changes to my blog in 2015

New year usually brings changes. And the same holds true for my blog.

In (early) 2015 I will finally finish my LL.M.1 and therefore hopefully have more time for my blog (and myself). Below you can find some of the planned and already ongoing changes relating to my blog.

Slightly modified tagging system

From now on tags named after communities like FSFE, Kiberpipa / Cyberpipe and KDE represent not only topics that directly relate to them – but also topics that should be of interest to those particular communities.

If you are reading this through a planet (or similar) aggregator and think some kinds of blog posts do not belong there, let me know and I will change the feed accordingly.

On the other hand, if you are subscribed directly to my blog via the Atom feed, you can, apart from the main feed, fine-grain your selection by subscribing only to specific categories or tags. To do so, you only need to visit the two hereinbefore-mentioned links and in the browser (or HTML source code) select the Atom feed(s) you like.

Testing comments system

As promised before (more than once) I am looking into bringing comments back.

From the options that I could find, Isso seems to bring the best balance between usability and ease of administration for use on a self-hosted2 static blog, such as mine.

At the moment I am in the testing phase – trying to set it up and get it running. But after that, I plan to migrate the previous comments and make it live. This could take a while, since there is no Pelican plugin for it yet … there is a (broken?) pull request for it though.

Hopefully Isso will last longer against spam comments than the systems I have tried so far.

More content in 2015

Since I plan to finish my studies this year, I will finally have more time to spare for blogging. I hope you are looking forward to more articles at least as much as I am looking forward to writing them!

Internet Archive

While I was at it, I also made sure that all the blog posts written so far actually show up in the Internet Archive Wayback Machine, and not just the first page. Most of them did not, but they do now.

hook out → happy new year everyone! ☺


  1. My LL.M. thesis is about “FLA – new challenges” and you can follow its progress on Git. Unfortunately for most readers, it is required by law to be in Slovenian. But important outcomes will follow in English later this year. 

  2. Since I host my own blog, leaving something as precious as comments on a 3rd-party proprietary server is out of the question. 

Posts for Saturday, January 3, 2015


SELinux is great for enterprises (but many don’t know it yet)

Large companies that handle their own IT often have internal support teams for many of the technologies that they use. Most of the time, this is for reusable components like database technologies, web application servers, operating systems, middleware components (like file transfers, messaging infrastructure, …) and more. All components that are used and deployed multiple times, and thus warrant the expenses of a dedicated engineering team.

Such teams often have (or need to write) secure configuration deployment guides, so that these components are installed in the organization with as few misconfigurations as possible. A wrongly configured component is often worse than a vulnerable component, because vulnerabilities are often fixed with software upgrades (you do patch your software, right?) whereas misconfigurations survive these updates and remain exploitable for longer periods. Also, misuse of components is harder to detect than exploitation of vulnerabilities, because misuse is often seen as regular user behavior.

But next to the redeployable components, most business services are provided by a single application. Most companies don’t have the budget and resources to put dedicated engineering teams on each and every application that is deployed in the organization. Even worse, many companies hire external consultants to help in the deployment of the component, and the consultants then hand over the maintenance of that software to internal teams. Some consultants don’t fully bother with secure configuration deployment guides, or even feel the need to disable security constraints put forth by the organization (policies and standards) because “it is needed”. A deployment is often seen as successful when the software functionally works, which does not necessarily mean that it is misconfiguration-free.

As a recent example that I came across, consider an application that needs Node.js. A consultancy firm is hired to set up the infrastructure, and given full administrative rights on the operating system to make sure that this particular component is deployed fast (because the company wants to have the infrastructure in production before the end of the week). Security is initially seen as less of a concern, and the consultancy firm informs the customer (without any guarantees though) that it will be set up “according to common best practices”. The company itself has no engineering team for Node.js nor wants to invest in the appropriate resources (such as training) for security engineers to review Node.js configurations. Yet the application that is deployed on the Node.js application server is internet-facing, so it has a higher risk associated with it than a purely internal deployment.

So, how to ensure that these applications cannot be exploited or, if an exploit does happen, how to ensure that the risks involved with the exploit are contained? Well, this is where I believe SELinux has great potential. And although I’m talking about SELinux here, the same goes for other, similar technologies like TOMOYO Linux, grSecurity’s RBAC system, RSBAC and more.

SELinux can provide a container, decoupled from the application itself (but of course built for that particular application), which restricts the behavior of that application on the system to those activities that are expected. The application itself is not SELinux-aware (or does not need to be – some applications are, but those that I am focusing on here usually aren’t), but the SELinux access controls ensure that exploits of the application cannot reach beyond those activities/capabilities that are granted to it.

Consider the Node.js deployment from before. The Node.js application server might need to connect to a MongoDB cluster, so we can configure SELinux to allow just that, while all other connections that originate from the Node.js deployment are forbidden. Worms (if any) cannot then use this deployment to spread out. The same goes for access to files – the Node.js application probably only needs access to the application files and not to other system files. Instead of trying to run the application in a chroot (which requires engineering effort from the people implementing Node.js – possibly a consultancy firm that does not know how, or does not want, to deploy within a chroot), SELinux is configured to disallow any file access beyond the application files.
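
To make that concrete, a policy for such a deployment could look roughly like the sketch below. This is an illustrative, refpolicy-style fragment, not a tested module: all the nodejs_app_* names are made up for this example, and the mongod_port_t port type is an assumption about what the base policy provides.

# nodejs_app.te -- illustrative sketch only, not a tested policy module
policy_module(nodejs_app, 1.0.0)

require {
        type mongod_port_t;    # assumes the base policy defines a MongoDB port type
}

type nodejs_app_t;
type nodejs_app_exec_t;
init_daemon_domain(nodejs_app_t, nodejs_app_exec_t)

# The application's own files -- and nothing else on the file system
type nodejs_app_lib_t;
files_type(nodejs_app_lib_t)
manage_files_pattern(nodejs_app_t, nodejs_app_lib_t, nodejs_app_lib_t)

# The only outbound connection we expect: the MongoDB cluster
allow nodejs_app_t mongod_port_t:tcp_socket name_connect;

Anything not explicitly allowed along these lines – connecting to other ports, reading /etc/shadow, writing outside the application files – would be denied and logged, no matter what the exploited application tries.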

With SELinux, the application can be deployed relatively safely while ensuring that exploits (or abuse of misconfigurations) cannot spread. All that the company itself has to do is provide resources for an SELinux engineering team (which can be just a responsibility of the Linux engineering teams, but can be specialized as well). Such a team does not need to be big, as policy development effort is usually only needed during changes (for instance when the application is updated to also send e-mails, in which case the SELinux policy can be adjusted to allow that as well). And given enough experience, the SELinux engineering team can build flexible policies that the administration teams (those that do the maintenance of the servers) can tune as needed (for instance through SELinux booleans) without the SELinux team having to work on the policies again.

Using SELinux also has a number of additional advantages which other, sometimes commercial tools (like Symantec’s SPE/SCSP – really, Symantec, you ask customers to disable SELinux?) severely lack.

  • SELinux is part of a default Linux installation in many cases. Red Hat Enterprise Linux ships with SELinux by default, and actively supports SELinux when customers have any problems with it. This also improves the likelihood that SELinux will be accepted, as other, third-party solutions might not be supported. Ever tried getting support for a system on which both McAfee AV for Linux and Symantec SCSP are running (if you got them to work together at all)? At least McAfee gives pointers on how to update SELinux settings when they would interfere with McAfee processes.
  • SELinux is widely known, and many resources exist for users, administrators and engineers to learn more about it. These resources are freely available and often kept up to date by a very motivated community. Unlike with commercial products – whose support pages are hidden behind paywalls, whose customers are usually prevented from interacting with each other, and whose tips and tricks are often not found on the Internet – SELinux information can be found almost everywhere. And if you like books, I have a couple for you to read: SELinux System Administration and SELinux Cookbook, written by yours truly.
  • Using SELinux is widely supported by third party configuration management tools, especially in the free software world. Puppet, Chef, Ansible, SaltStack and others all support SELinux and/or have modules that integrate SELinux support in the management system.
  • Using SELinux incurs no additional licensing costs.

Now, SELinux is definitely not a holy grail. It has its limitations, so security should still be seen as a global approach in which SELinux plays just one specific role. For instance, SELinux does not prevent application behavior that is allowed by the policy. If a user abuses a configuration and can have an application expose information that the user usually does not have access to, but the application itself does (for instance because other users of that application might), SELinux cannot do anything about it (well, not as long as the application is not made SELinux-aware). Also, vulnerabilities that exploit application internals are not controlled by SELinux access controls. It is the application behavior (the “external view”) that SELinux controls. To mitigate in-application vulnerabilities, other approaches need to be considered (such as memory protections for free software solutions, which can protect against some kinds of exploits – see grsecurity as one of the solutions that could be used).

Still, I believe that SELinux can definitely provide additional protections for such “one-time deployments” where a company cannot invest in resources to provide engineering services on those deployments. The SELinux security controls do not require engineering on the application side, making investments in SELinux engineering very much reusable.


Gentoo Wiki is growing

Perhaps it is because of the winter holidays, but over the last weeks I’ve noticed a lot of updates and edits on the Gentoo wiki.

The move to the Tyrian layout, whose purpose is to eventually become the unified layout for all Gentoo resources, happened first. Then, three common templates (Code, File and Kernel) were deprecated in favor of their “*Box” counterparts (CodeBox, FileBox and KernelBox). These provide better parameter support (which should make future updates of the templates easier to implement) as well as syntax highlighting.

But the wiki also saw a number of contributions being added. I added a short article on Efibootmgr as the Gentoo handbook now also uses it for its EFI-related instructions, but other users added quite a few additional articles as well. As they come along, articles are being marked by editors for translation. For me, that’s a trigger.

Whenever a wiki article is marked for translation, it shows up on the PageTranslation list. When I have time, I pick one of these articles and try to update it to move it to a common style (the Guidelines page is the “official” one, and I have a Styleguide in which I elaborate a bit more on its use). Having a common style gives a better look and feel to the articles (as they are then more alike), gives a common documentation development approach (so everyone can join in and update documentation in a similar layout/structure) and – most importantly – reduces the number of edits that do little more than switch from one formatting to another.

When an article has been edited, I mark it for translation, and then the real workhorses of the wiki start. We have several active translators on the Gentoo wiki, whom we cannot thank enough for their work (I started out at Gentoo as a translator, so I have some feeling for their work). They make the Gentoo documentation reachable for a broader audience. Thanks to the use of the translation extension (kindly offered by the Gentoo wiki admins, who have been working quite hard the last few weeks on improving the wiki infrastructure) translations are easier to handle and follow through on.

The advantage of a translation-marked article is that any change on the article also shows up on the list again, allowing me to look at the change and perform edits when necessary. For the end user, this is behind the scenes – an update on an article shows up immediately, which is fine. But for me (and perhaps other editors as well) this gives a nice overview of changes to articles (watchlists can only go so far) and also shows the changes in a simple yet efficient manner. Thanks to this approach, we can more actively follow up on edits and improve where necessary.

Now, editing is not always just a few minutes of work. Consider the GRUB2 article on the wiki. It was marked for translation, but had some issues with its style. It was very verbose (which is not a bad thing, but it suggests splitting the information across multiple articles) and had quite a few open discussions on its Discussions page. I started editing the article around 13.12h local time, and ended at 19.40h. Unlike with offline documentation, the entire editing process can be followed through the page’s history. And although I’m still not 100% satisfied with the result, it is imo easier to follow and read.

However, don’t get me wrong – I do not feel that the article was wrong in any way. Although I would appreciate articles that immediately follow a style, I would rather see more contributions (which we can then edit towards the new style) than that we start penalizing contributors who don’t use the style. That would be counter-productive, because it is far easier to update the style of an article than to write articles. We should try and get more contributors to document aspects of their Gentoo journey.

So, please keep them coming. If you find a lack of (good) information for something, start jotting down what you know in an article. We’ll gladly help you out with editing and improving the article then, but the content is something you are probably best to write down.

Posts for Wednesday, December 31, 2014

Once More, with Feeling #31c3

“A new Dawn”. That’s the motto that more than 10000 hackers, activists and people interested in or connected to that (sub-)culture assembled under in Hamburg for the last few days. This probably slightly long-ish text outlines my thoughts on the 31st Chaos Communication Congress taking place in the congress center in Hamburg.

(You probably should take the things I write with a tablespoon of salt. After public personal attacks on me by representatives of the CCC I quit my membership, ending a few years of semi-public dissent on certain key aspects of the digital life of human beings in the beginning of the 21st century. I’ll try to be fair and as objective as human beings can be, but obviously I can’t deny some sore emotional spots when it comes to that organisation and its figureheads. I should also note that the program committee rejected the sessions I proposed. I did expect that rejection and can live with it, but still add it here for transparency reasons.)

2013 wasn’t a good year for the hacker/digital activist/etc. community. Snowden’s leaked documents and Glenn Greenwald’s strategy of continuous publication of small (in length/volume, not in impact) pieces put that – usually quite resilient – community in a state of shock. An ideology had radically fallen apart within months, leaving its protagonists helpless and without orientation for a while. Check out my article on last year’s conference for a more detailed report on the event and its context and environment.

The tagline (“A new Dawn”) sounded refreshingly optimistic. A fresh start, a reboot of efforts. Rethinking the positions of the hacker culture in the greater scheme of things. My first thought upon reading the congress motto was wondering what kind of agenda the CCC would set up for itself for the coming year. Curiosity is quite a positive and optimistic feeling so I obviously liked that line a lot.

The CCC conference organisation is – after all these years – a well-oiled machine. No matter what you throw their way, the conference attendees will not feel a hiccup. The whole team organizing the conference – from the video streaming and recording “angels” to all the helpers keeping people hydrated and the facilities clean to the tech team providing more Internet bandwidth than some countries have access to – is second to none. Literally. The self-organized assemblies, where people from different hackerspaces and organizations gathered into new local communities providing services and learning opportunities, knocked it out of the park again with workshops and an insane amount of infrastructure offered to conference attendees. I can’t think of any conference that comes even close to that level of competence and “professionalism” by – in the literal meaning of the word – amateurs. Lovers of whatever it is they do.1

But for some reason, the motto didn’t seem to click for me and many others I talked to (on the other hand: for some it did). It was not about the people who were so obviously happy to meet, hang out, talk, dance, teach and learn. It was not about the brilliant people I met and hung out with. It was just a program underdelivering on the promise the motto made.

The conference program is grouped into so-called tracks, each with their own focus and agenda. The Hardware&Making track talks about practice, about building hardware (usually with blinking LEDs) and creating things. The Security&Hacking track punches holes into whatever protocol or service you can think of. Art&Beauty gives room to artists to present their work, Science gives scientists a platform to disseminate their findings. Ethics, Society & Politics tries to tackle the pressing social and political questions of these days, while Entertainment adds some low- and middlebrow amusement. And there are some CCC-specific talks that deal with the life of that organization.

Many tracks delivered. Hardware&Making, Security&Hacking and Art&Beauty did exactly what’s expected of them. And while I am not a blinky-LED person and no security nerd, there were quite a few impressive talks there (you might have heard about starbug making a fake fingerprint from a photo, or about how the SS7 standard can be used by anyone with a few extra bucks to track you regardless of how secure your phone is). I’ve never been a fan of the entertainment sessions at conferences, but maybe they are fun if you drink.

But sadly the Ethics, Society & Politics track in general fell flat. That doesn’t mean that all the talks were bad (quite the opposite in some cases); it means that the whole impetus of that track was hard to read. But “A new Dawn” it wasn’t. All those talks could have happened at any C3 in the last 3 or 4 years. The track lacked a vision, an agenda, a perspective. Which could be read as a continuation of last year’s shock state, but I think that would be wrong. Nobody is shocked. Things are just back to normal.

Maybe “Back to normal” would have been the perfect motto for this year. The “product” CCC congress is established, successful and works. It’s like Microsoft Office or EA’s sports games: Every year you get an update with a few new bells and whistles, some neat new additions and an updated look and feel. But the product itself stays the same because its consumers have gotten used to and comfortable with it.

And so the usual suspects go through the motions of, for example, the “Fnord News Show” or similar events, whose main function is to provide the community with a folklore to assemble around. But folklore tends to be about the past, about keeping something alive through its rituals even when the world has moved on. Some people dance in the outfits of their great-grandparents, some gather to laugh at “stupid” politicians who couldn’t code their own kernel to save their lives. Ho ho ho!

The scene has found its way to deal with the situation the Snowden docs created. A friend called that approach the “Snowden-industrial complex”. All those companies and governments and agencies need security consultants, every week sees a new cryptographic silver bullet to crowdfund or buy, and a small group has made sitting on panels and milking the Snowden docs quite the successful business model. As Jacob Appelbaum’s talk this year illustrated, the scene has learned how to work with and against the docs to create whatever story sells best at any given moment. Sadly, the product they are selling sometimes seems to be only very loosely connected to truth or politics.

And that was the saddest realization of the congress. That in a building full of art and music and smart people no forward momentum for any form of structural change was emerging. Everything felt chained to the way things have always been done(tm).

Just as with the cycle of Snowden leaks, the subculture is still caught in its old MO: Take something, look at technical details, break them, wait for them to be patched. Rinse and repeat. Rinse and repeat. Rinse and repeat.

Often the most interesting things are those that happen without much thought. In that regard the “Tea House” was probably the most revealing. I don’t even want to go into the whole cultural appropriation angle of mostly white dudes building an “oriental” and “exotic” space to hang out at a conference of mostly white dudes. But architecturally and visually that space, claimed by many of its frequent visitors to be “the place to be”2, felt like a royal palace of sorts, with layered zones of different exclusivity and digital lords and ladies holding court with their courtiers.

I realized that the scene is in no way apolitical – an accusation that has been put forward at times, and not only by me. Actually, one of the most awesome things about the congress were the donation boxes for the refugees of Lampedusa put up everywhere, as well as many stickers and flags by groups such as the Antifa3. There still are these isles of radical, political and mostly left thinking around, but sadly they don’t feel like they have any real wide-spread impact.

The main vibe was that of a Silicon Valley libertarianism spiced up with some idealization of the (German) constitution and the constitutional court. A deeply rooted antagonism towards the institutions of government, their perceived incompetence and evilness, in connection with a yearning to be respected and acknowledged. Not only has German politics managed to mostly contain all Snowden-induced outrage and channel most of the NGO-based energy into different committees investigating what the intelligence community has lied about (everything) and how they can be controlled better in the future (they can’t). But instead of looking at political solutions, at structural issues, the congress kept it light. Focused on details while still being able to leave the big picture out of the equation.

The hacking subculture in Germany is at a crossroads. It has to decide whether to politicize, to develop a set of bigger political goals even if that might cost certain people in the community certain business opportunities, or whether to stay on its current trajectory, drifting closer and closer to a Defcon-like exclusive security-tech bubble forming the recruiting environment for entities that used to have no place there.4

I had decided in advance that this congress would be my last one, allowing me to take a more distanced, observing position. And I had a really interesting time and a bunch of great conversations giving me more perspective on the whole shebang (special thanks to Christoph Engemann, Richard Marggraf-Turley, Norbert Schepers, Laura Dornheim and Anna-Lena Bäcker, who helped me understand different things better and see clearer, as well as so many others I forgot and who might now be mad at me – please don’t be).

The classic Buffy Episode “Once More, with Feeling” shows us a Vampire Slayer back from the dead. She is lost and has lost her inspiration and energy, is just “Going through the Motions”:

Every single night
The same arrangement
I go out and fight the fight
Still, I always feel the strange estrangement
Nothing here is real
Nothing here is right

I’ve been making shows of trading blows
Just hoping no one knows
That I’ve been going through the motions

(Embedded video: “Going Through the Motions” – https://www.youtube.com/watch?v=zMv0abh4Vrc)

The song ends with Buffy trying to leave that state of stagnation; I’d hoped to see the hackers do the same. But I realized that maybe we expect too much from that group. That they are stuck trying to keep alive the old superhero narrative which that subculture oh so much adores.

The congress is a very interesting event to attend. Brilliant people all around. Brilliant projects and infrastructure. It’s just not the place to work on the questions that I feel we need to work on. Rethinking what privacy, the self, connectedness and responsibility mean in a digital world. Rethinking the future of work and internet governance. Building utopias and goals for a better future that’s more than just a bunch of white men with tech backgrounds making a comfortable living. Those are some of the things I want to work on in 2015 with many different people.

I hope you’ll all have fun at the 32c3 and I am sure it will be an even better product than it already is. Thanks to everyone helping to make the 31c3 a great and interesting place to be. My congress visits have been quite a ride. Now I have to leave the cart to get back to work.

  1. Obviously there is always an element of self-exploitation which I know the organizers tried to limit as much as possible by forcing people to take breaks.
  2. it’s funny how a scene whose central narrative is based on “we were all excluded by the mainstream, now we meet here to transcend those structures” quickly rushes to establish its own “in-crowd” and exclusive places
  3. a German left radical antifascist movement
  4. non-whistleblowing ex-intelligence people as well as cyberwar enthusiasts spoke at this year’s conference, and the coming policy on intelligence personnel speaking at C3 events is quite shocking to people who know the CCC’s history


Posts for Tuesday, December 30, 2014


Why does it access /etc/shadow?

While updating the SELinux policy for the Courier IMAP daemon, I noticed that it (well, the authdaemon that is part of Courier) wanted to access /etc/shadow, which is of course a big no-no. It doesn’t take long to figure out that this happens through the PAM support (more specifically, pam_unix.so). But why? After all, pam_unix.so should execute unix_chkpwd to verify a password rather than read in the shadow file directly (which would require all PAM-aware applications to be granted access to the shadow file).

So I dived into the Linux-PAM sources (yay free software).

In pam_unix_passwd.c, the _unix_run_verify_binary() function is called, but only if the get_account_info() function returns PAM_UNIX_RUN_HELPER:

static int _unix_verify_shadow(pam_handle_t *pamh, const char *user, unsigned int ctrl)
{
...
        retval = get_account_info(pamh, user, &pwent, &spent);
...
        if (retval == PAM_UNIX_RUN_HELPER) {
                retval = _unix_run_verify_binary(pamh, ctrl, user, &daysleft);
                if (retval == PAM_AUTH_ERR || retval == PAM_USER_UNKNOWN)
                        return retval;
        }

In passverify.c this function checks the password file entry and, if the entry is shadowed, returns PAM_UNIX_RUN_HELPER if the current user id is not root, or if SELinux is enabled:

PAMH_ARG_DECL(int get_account_info,
        const char *name, struct passwd **pwd, struct spwd **spwdent)
{
        /* UNIX passwords area */
        *pwd = pam_modutil_getpwnam(pamh, name);        /* Get password file entry... */
        *spwdent = NULL;
 
        if (*pwd != NULL) {
...
                } else if (is_pwd_shadowed(*pwd)) {
                        /*
                         * ...and shadow password file entry for this user,
                         * if shadowing is enabled
                         */
#ifndef HELPER_COMPILE
                        if (geteuid() || SELINUX_ENABLED)
                                return PAM_UNIX_RUN_HELPER;
#endif

The SELINUX_ENABLED is a C macro defined in the same file:

#ifdef WITH_SELINUX
#include <selinux/selinux.h>
#define SELINUX_ENABLED is_selinux_enabled()>0
#else
#define SELINUX_ENABLED 0
#endif

And this is where my “aha” moment came: the Courier authdaemon runs as root, so its user id is 0. The geteuid() call will return 0, so the SELINUX_ENABLED macro must return non-zero for the proper path to be followed. A quick check in the audit logs, after disabling dontaudit rules (semodule -DB), showed that the Courier IMAPd daemon wants to get the attribute(s) of the security_t file system (on which the SELinux information is exposed). As this was denied, the call to is_selinux_enabled() returned -1 (error), which, through the macro, becomes 0.

So granting selinux_getattr_fs(courier_authdaemon_t) was enough to get it to use the unix_chkpwd binary again.

To fix this properly, we need to grant this to all PAM using applications. There is an interface called auth_use_pam() in the policies, but that isn’t used by the Courier policy. Until now, that is ;-)
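
For illustration, the change boils down to a single interface call in the Courier policy – sketched here under refpolicy conventions; the exact file and surrounding rules may differ:

# courier.te (sketch): let the authdaemon domain behave as a PAM user,
# which includes the selinuxfs getattr access that pam_unix needs.
auth_use_pam(courier_authdaemon_t)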

Posts for Friday, December 26, 2014

My KWin short-cuts experiment

Inspired by Aurélien Gâteau’s blogpost and the thread on KDE Forums, I decided to change my global KWin short-cuts as well to see how it fares.

Shortcuts

As proposed in the forum thread and by Aurélien, I have concentrated my desktop/window manipulation short-cuts around the Meta key.

In addition I figured out that to navigate virtual desktops and activities I practically only use the following effects:

  • Activities bar
  • Activities menu (which I have bound to right-click on background)
  • Desktop grid
  • Show all windows on all desktops

Here are the most important changes:

Virtual desktops

  • Meta+F? – goes to the desktop number ?
  • Meta+Shift+F? – moves/shifts the active window to desktop number ?
  • Meta+Down – shows all desktops in the grid effect

Window management

  • Meta+F – puts window in full-screen mode (i.e. maximises and hides window decorations)
  • Meta+Up – maximises the window (or de-maximises it)
  • Meta+Left – window occupies the left half of the screen
  • Meta+Right – window occupies the right half of the screen
  • Meta+PageUp – keep window above others
  • Meta+PageDown – keep window below others
  • Meta+Tab – show all windows from all desktops
  • Meta+Esc – close window
  • Meta+Ctrl+Esc – kill window

Launchers, Activities, etc.

  • Meta+A – opens the Activities bar
  • Meta+Space – Krunner
  • Meta+Enter – Yakuake

How does it feel

I actually quite like it and it does not take a lot of getting used to. It is far easier to remember than the KDE Plasma default. And I am saying this after years and years of using the default, as well as years of using a different custom setup (concentrated on Alt).

Personally, I think it would make sense to adopt such a change of defaults. But if that does not happen, I know I can still just change it myself locally … and I will ☺

hook out → taking a final sip of honey-sweetened Yorkshire Gold tea (Taylors of Harrogate) and going to sleep

Posts for Wednesday, December 24, 2014

Why and how to shave with shaving oil and DE safety razors

So, I have been shaving with shaving oil and safety razors1 for a while now and decided that it is time I help my fellow geeks by spreading some knowledge about this method (which is sadly still poorly documented on-line). Much of the method below consists of hacks assembled from different sources and lots of trial and error.

Why shave with oil and DE safety razors

First of all, shaving with old-school DE razors is not as much about being hip and trendy2 as it is about optimising. Although, I have to admit, it still looks pretty cool ☺

There are several reasons why shaving with oil and DE razors beats modern foam and system multi-blade razors hands down:

  • they have got multiple uses – shaving oil replaces both the shaving foam/soap and the aftershave (and pre-shaving balm); DE blades are used in tools and, well, they are proper blades, for crying out loud!;
  • the whole set takes a lot less space when travelling – one razor, a puny pack of blades and a few tens of millilitres of oil is all you need to carry around3;
  • you get a better shave – once you start shaving properly, you get fewer burns and nicks and a smoother shave as well;
  • it is more ecological – the DE blades contain fewer different materials and are easier to recycle, and all shaving oils I have found so far have some sort of Eco and/or Bio certification;
  • and last, but not least in these days, it is waaaaaaay cheaper (more on that in a future blog post).

History and experience (skip if you are not interested in such bla bla)

I got my first shaving oil4 about two years ago, when I started to travel more. My wonderful girlfriend bought it for me, because a 30 ml flask took a lot less space than a tin of shaving foam and a flask of aftershave. The logic behind this decision was:

“Well, all the ancient people managed to have clean shaves with oil, my beard cannot be that much different than the ones they had in the past.”

And, boy, was I in for a nice surprise!

I used to get inflammations, pimples and in-grown hairs quite often, so I never shaved very close – but when shaving with oil, there was none of that! After one or two months of trial and error with different methods and my own ideas, I finally figured out how to properly use it and left the shaving soaps, gels and foams for good.

After shaving with oil for a while I noticed that all “regular modern” system multi-blade razors have strips of an aloe vera gel that work well with shaving foam, gels and soaps, but occasionally stick to your face if you are using shaving oil. This is true regardless of how many or how few blades the razor head has – I just could not find razors without the strips.

That is why I started thinking about the classic DE safety razors and eventually got a plastic Wilkinson Sword Classic for a bit over 5 €. Surprisingly, after just a few minuscule cuts, the improvement over the system multi-blade razors became quite apparent. I have not touched my old Gillette Mach3 since. The Wilkinson Sword Classic is by far not a very good DE razor, but it is cheap and easy to use for beginners. If you decide you like this kind of shave, I would warmly recommend that you upgrade to a better one.

For example, recently I got myself a nice Edwin Jagger razor with their DE8 head and I love it. It is a full-metal, chromed, closed-comb razor, which means it has another bar below the blade, so it is easier and safer to use than a more aggressive open-comb version.

How to Shave with oil and DE razors

OK, first of all, do not panic! – they are called “safety razors” for a reason. As opposed to straight razors, the blade is enclosed, so even if you manage to cut yourself, you cannot get a deep cut. This is truer still for closed-comb razors.

  1. Wash your face to remove dead skin and fat. It is best if you shave just after taking a shower.

  2. Get moisture into the hairs. Beard hair is hard as copper wire while it is dry; wet, it is quite soft. The best way is to apply a towel soaked in very hot water to your face for a few tens of seconds (repeated a few times) – the hot water also opens up the pores. If you are travelling and do not have hot water, just make sure those hairs are wet. I usually put hot water in the basin and leave the razor in it while I towel my face, so the razor is also warm.

  3. Put a few drops of shaving oil into the palm of your hand (3-6 is enough for me, depending on the oil) and with two fingers apply it to all the places on your face that you want to shave. Any oil you may have left on your hands, you can safely rub into your hair (on top of your head) – it will do your hair good and the oil will not go to waste.

  4. Splash some more (hot) water on your face – the fact that water and oil do not mix well is the reason why your blade glides so finely. Also, during the shave, whenever you feel your razor does not glide that well any more, just applying some water is usually enough to fix it.

  5. First shave twice in the direction of the grain – to get a feeling for the right angle, take the handle of the razor in your fingers and lean the flat of the head onto your cheek, so the handle is at 90° to your cheek; then reduce the angle until you get to a position where shaving feels comfortable. It is also easier to shave moving your whole arm than just the wrist. Important: DO NOT apply pressure – safety razors expose enough blade that, with a well-balanced razor, just the weight of the head produces almost enough pressure for a good shave (as opposed to system multi-blade razors). Pull in the direction of the handle with slow strokes – on a thicker beard you will need to make shorter strokes than on a less thick one. To get a better shave, make sure to stretch your skin where you currently shave. If the razor gets stuck with hair and oil, just swish it around in the water to clean it.

  6. Splash your face with (hot) water again and now shave across the grain. This gives you a closer shave5.

  7. Splash your face with cold water to get rid of any remaining hairs and to close the pores. Get a drop or two of shaving oil and a few drops of water into your palm and mix them with two fingers. Rub the oil-water mixture into your face instead of using aftershave and leave your face to dry – the essential oils in the shaving oil enrich and disinfect your skin.

  8. Clean your razor under running water to remove hair and oil and towel-dry it (do not rub the blade!). When I take it apart to change blades, I clean the razor with water and rub it with the towel, to keep it shiny.

Update: I learned that it is better to shave twice with the grain and once across, than once with it and twice across. Update: I figured out the trick with rubbing the excess oil into hair. Update: Updated the amount of oil needed, to match new experience.

Enjoy shaving ☺

It is a tiny bit more work than shaving with system multi-blade razors, but it is well worth it! For me the combination of quality DE safety razors and shaving oil turned shaving from a bothersome chore into a morning ritual I look forward to.

…and in time, I am sure you will find (and share) your own method as well.

Update: I just stumbled upon the great blog post “How Intellectual Property Destroyed Men’s Shaving” and thought it would be great to mention it here.

hook out → see you well shaven at Akademy ;)


  1. Double-edged razors, such as our granddads used to shave with. 

  2. Are old-school razors hip and trendy right now anyway? I have not noticed them to be so. 

  3. I got myself a nice leather Edwin Jagger etui for carrying the razor and two packs of blades; it measures 105 x 53 x 44 mm (for comparison: the ugly Gillette Mach3 plastic holder measures 148 x 57 x 28 mm and does not offer much protection when travelling). 

  4. L’Occitane Cade (wild juniper) shaving oil, and I am still happy with that one. 

  5. Some claim that for a really close shave you need to shave against the grain as well, but I found that to be too aggressive for my beard. Also I heard this claim only from people shaving with soap. 

Posts for Tuesday, December 23, 2014


Added UEFI instructions to AMD64/x86 handbooks

I just finished up adding some UEFI instructions to the Gentoo handbooks for AMD64 and x86 (I don’t know how many systems are still using x86 instead of AMD64, and whether those support UEFI, but the instructions are shared and they don’t collide). The entire EFI stuff can probably be improved a lot, but basically the things that were added are:

  1. boot the system using UEFI already if possible (which is needed for efibootmgr to access the EFI variables). This is not entirely mandatory (as efibootmgr is not mandatory to boot a system) but recommended.
  2. use vfat for the /boot/ location, as this now becomes the EFI System Partition.
  3. configure the Linux kernel to support EFI stub and EFI variables
  4. install the Linux kernel as the bootx64.efi file to boot the system with
  5. use efibootmgr to add boot options (if required) and create an EFI boot entry called “Gentoo” (a sketch of such a call follows below)
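
For illustration, such an entry could be created along these lines – note that the disk, partition number and loader path here are example values, not necessarily what the handbook uses:

# Example only: assumes the EFI System Partition is /dev/sda2 and the
# kernel was copied to the efi/boot/bootx64.efi path on that partition.
efibootmgr --create --disk /dev/sda --part 2 --label "Gentoo" --loader '\efi\boot\bootx64.efi'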

If you find grave errors, please do mention them (either on a talk page on the wiki, as a bug or through IRC) so they are picked up. All developers and trusted contributors on the wiki have access to the files, so they can edit where needed (but do take care that, if something is edited, it is either architecture-specific or shared across all architectures – check the page when editing; if it is Handbook:Parts then it is shared, and Handbook:AMD64 is specific to that architecture). And if I’m online I’ll of course act on it quickly.

Oh, and no – it is not a bug that there is a (now unused) /dev/sda1 “bios” partition. Due to the differences between the possible installation alternatives, it is easier for us (me) to just document a common partition layout than to try and write everything out (which would just make it harder for new users to follow the instructions).

Posts for Sunday, December 14, 2014


Handbooks moved

Yesterday the move of the Gentoo handbooks (whose most important part are the installation instructions for the various supported architectures) to the Gentoo Wiki was concluded, with a last-minute addition being the one-page views, so that users who want to can view the installation instructions completely within one view.

Because we use lots of transclusions (i.e. including different wiki articles inside another article) to support a common documentation base for the various architectures, I did hit a limit that prevented me from creating a single page for the entire handbook (i.e. “Installing Gentoo Linux”, “Working with Gentoo”, “Working with Portage” and “Network configuration” together), but I could settle for one page per part. I think that matches most of the use cases.

With the move now done, it is time to start tackling the various bugs that were reported against the handbook, as well as initiate improvements where needed.

I did make a mistake in the move though (probably more than one, but this one is fresh in my memory). I had to do a lot of the following:

<noinclude><translate></noinclude>
...
<noinclude></translate></noinclude>

Without this, transcluded parts would suddenly show the translation tags as regular text. Only afterwards (I’m talking about more than 400 different pages) did I read that I should transclude the /en pages (like Handbook:Parts/Installation/About/en instead of Handbook:Parts/Installation/About) as those do not have the translation specifics in them. Sigh.
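
In other words – sketching with MediaWiki’s transclusion syntax, using a page name from above – the transcluding pages should have pulled in the language-specific subpage directly:

{{Handbook:Parts/Installation/About/en}}

instead of

{{Handbook:Parts/Installation/About}}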

Planet Larry is not officially affiliated with Gentoo Linux. Original artwork and logos copyright Gentoo Foundation. Yadda, yadda, yadda.