Posts for Sunday, February 15, 2015


CIL and attributes

I keep on struggling to remember this, so let’s make a blog post out of it ;-)

When the SELinux policy is being built, recent userspace (2.4 and higher) will first convert the policy into the CIL language, and then build the binary policy. Type attributes used in the policy are of course also made available in the CIL code. For instance, the admindomain attribute from the userdomain module:

...
(typeattribute admindomain)
(typeattribute userdomain)
(typeattribute unpriv_userdomain)
(typeattribute user_home_content_type)

Interfaces provided by the module are also applied. You won’t find the interface CIL code in /var/lib/selinux/mcs/active/modules though; the code at that location is already “expanded” and filled in. So for the sysadm_t domain we have:

# Equivalent of
# gen_require(`
#   attribute admindomain;
#   attribute userdomain;
# ')
# typeattribute sysadm_t admindomain;
# typeattribute sysadm_t userdomain;

(typeattributeset cil_gen_require admindomain)
(typeattributeset admindomain (sysadm_t ))
(typeattributeset cil_gen_require userdomain)
(typeattributeset userdomain (sysadm_t ))
...

However, when checking which domains use the admindomain attribute, notice the following:

~# seinfo -aadmindomain -x
ERROR: Provided attribute (admindomain) is not a valid attribute name.

But don’t panic – there is a reason for this: as long as no SELinux rule references the admindomain attribute, the SELinux policy compiler will drop the attribute from the final policy. This can be confirmed by adding a single, cosmetic rule, like so:

## allow admindomain admindomain:process sigchld;

~# seinfo -aadmindomain -x
   admindomain
      sysadm_t

So there you go. It does mean that anything that previously relied on the attribute assignment for decisions (like “for each domain assigned the userdomain attribute, do something”) will need to make sure that the attribute is actually used in a policy rule.
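As a quick way to check whether an attribute is referenced by any allow rule at all, sesearch (from setools) can be queried with the attribute as the source. This is a small sketch, not part of the original example:

~# sesearch -A -s admindomain

If this returns nothing, no rule uses the attribute and it is a candidate for being dropped by the compiler; with the cosmetic rule above in place, the sigchld rule should show up.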

Posts for Saturday, February 14, 2015

I ♥ Free Software 2015

“Romeo, oh, Romeo!” exclaims the 3D-printed robot Juliet to her 3D-printed Romeo.

It is that time of the year again – the day we display our affection to our significant other …and the Free Software we like best.

Usually I sing praise to the underdogs that I use, the projects rarely anyone knows about, small odd things that make my everyday life nicer.

This year, though, I will point out some communities that I am (more or less) active in and that impressed me the most in the past year.

  • KDE – this desktop needs no introduction and neither should its community. But every so often we have to praise things that we take for granted. KDE is one of the largest and nicest FS communities I have ever come across. After meeting a few known faces and some new ones at FOSDEM, I am very much looking forward to going to Akademy again this year!
  • Mageia – as far as GNU/Linux distros go, many would benefit from taking Mageia as a good example of how to include your community and how to develop your infrastructure to be inclusive towards newcomers.
  • Mer, Nemo Mobile – note: while Jolla is a company (and a commercial product with some proprietary bits), most of its Sailfish OS infrastructure is FS, and Jolla tries very hard to co-operate with its community and as a rule develops upstream. This is also the reason why the communities of the mentioned projects are very intertwined. The co-operation in this wider community is very active and, while not there yet, Mer and Nemo Mobile (with the Glacier UI coming soon) are making me very optimistic that a modern Free Software mobile OS is just around the corner.
  • Last, but not least, I must mention three1 communities that are not FS projects by themselves, but were instrumental in educating me (and many others) about FS and digital freedoms in general – thank you, LUGOS, for introducing me to FS way back in the ’90s and all the help in those early days! Thank you, Cyberpipe, for all the things I learnt in your hackerspace! And thank you, FSFE, for being the beacon of light for Free Software throughout Europe (and beyond)!

hook out → closing my laptop and running back to my lovely Andreja, whom I thank for bearing with me


  1. Historically Cyberpipe was founded as part of Zavod K6/4, but in 2013 Cyberpipe merged with one of its founders – LUGOS – thus merging the two already intertwined communities for good. 

Posts for Sunday, February 8, 2015


Have dhcpcd wait before backgrounding

Many of my systems use DHCP for obtaining IP addresses. Even though they all receive a static IP address, it allows me to move them over (migrations), use TFTP boot, do cloning (in case of quick testing), etc. But one of the things that was making my efforts somewhat more difficult was that the dhcpcd service was considered started (the dhcpcd daemon immediately went into the background) even though no IP address had been received yet. Subsequent service scripts that required a working network connection then failed to start.

The solution is to configure dhcpcd to wait for an IP address. This is done through the -w option, or the waitip instruction in the dhcpcd.conf file. With that in place, the service script now waits until an IP address is assigned.
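A minimal sketch of that configuration (the option names are as documented in dhcpcd.conf(5); the timeout value is only an example):

# /etc/dhcpcd.conf
# Do not fork into the background until an IP address has been obtained.
waitip
# Give up after 30 seconds so a missing DHCP server does not block booting forever.
timeout 30

Passing -w on the daemon’s command line through the service configuration achieves the same thing; the config file approach is independent of the init system in use.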

Posts for Friday, January 30, 2015


Things I should’ve done earlier.

On Linux, there are things that you know are better but you don’t switch because you’re comfortable where you are. Here’s a list of the things I’ve changed the past year that I really should’ve done earlier.

  • screen -> tmux
  • apache -> nginx
  • dropbox -> owncloud
  • bash -> zsh
  • bootstrapping vim-spf -> my own tailored and clean dotfiles
  • phing -> make
  • sahi -> selenium
  • ! mpd -> mpd (oh why did I ever leave you)
  • ! mutt -> mutt (everything else is severely broken)
  • a lot of virtualbox instances -> crossbrowsertesting.com (much less hassle, with support for selenium too!)

… would be interested to know what else I could be missing out on! :)

The post Things I should’ve done earlier. appeared first on thinkMoult.

Posts for Thursday, January 29, 2015

Cryptography and the Black or White Fallacy

Cryptography is the topic du jour in many areas of the Internet. Not the analysis of algorithms or the ongoing quest to find some reasonably strong kind of crypto that people without a degree in computer science and black magic are able and willing to use, but in the form of the hashtag #cryptowars.

The first crypto wars were fought when the government tried to outlaw certain encryption technologies or at least implementations thereof with a certain strength. Hackers and coders found ways to circumvent the regulation and got the technology out of the US and into the hands of the open source community. Since those days, cryptography has been widely adopted to secure websites, business transactions and – for about 7 people on this planet and Harvey the invisible bunny – emails.

But there is a storm coming:

Governments are publicly wondering whether to ask platform providers to keep encryption keys around so that the police can access certain communication given proper authorization (that idea is usually called key escrow). Now obviously that is not something everyone will like or support. And that’s cool, we call it democracy. It’s people debating, presenting ideas, evaluating options and finally coming up with a democratically legitimized consensus or at least a resolution.

There are very good arguments for that kind of potential access (for example enforcement of the social contract/law, consistency with the application of norms in the physical world) as well as against it (for example the right to communicate without interference or the technical difficulty and danger of a key escrow system). For the proponents of such a regulation the argument is simple: Security, Anti-terror, Protection. Bob’s your uncle. For the opposition it’s harder.

I read many texts in the last few days about how key escrow would “ban encryption”. Which we can just discard as somewhat dishonest given the way the proposed legislation is roughly described. The other train of thought seems to be that key escrow would “break” encryption. And I also find that argument somewhat strange.

If you are a purist, the argument is true: If encryption has to perfectly protect something against everyone, key escrow would “break” it. But I wonder what kind of hardware these purists run their encryption on, what kind of operating systems. How could anyone ever be sure that the processors and millions of lines of code making up the software that we use to run our computers can be trusted? How easy would it be for Intel or AMD or whatever chip manufacturer you can think of to implement backdoors? And we know how buggy operating systems are. Even if we consider them to be written in the best of faith.

Encryption that has left the wonderful and perfect world of theory and pure algorithms is always about pragmatism. Key lengths for example are always a trade-off between the performance penalty they cause and the security they provide given a certain technological default. In a few years computers will have gotten faster, which would make your current keys short enough to be broken, but since computers have gotten faster, you can then use longer keys and maybe even more complex encryption algorithms.

So why, if deploying encryption is always about compromise, is key escrow automatically considered to “break” all encryption? Why wouldn’t people trust the web anymore? Why would they suddenly be the target of criminals and theft as some disciples of the church of crypto are preaching?

In most cases not the whole world is your enemy. At least I hope so, for your sake. Every situation, every facet of life has a different threat model. How do threat models work? When I ride my bike to work I could fall due to a bad road or ice, or some driver could hit me with their car. I address those threats in the way I drive or prepare: I always have my bike’s light on to be seen, I avoid certain roads and I keep an eye on the car traffic around me. I don’t consider the dangers of a whale falling down on me, aliens abducting me or the CIA trying to kill me. Some people might (and might have to, given that they’ve annoyed the CIA or aliens), but for me, those are not threats I spend any mental capacity on.

My laptop’s harddrive is encrypted. The reason is not that it would protect its data against the CIA/NSA/AlienSecurityAgency. Because they’d just lock me up till I give them the key. Or punch me till I do. Or make me listen to Nickelback. No, I encrypt my drive so that in case my laptop gets stolen the thief might have gotten decent hardware but no access to my accounts and certain pieces of information. Actually, in my personal digital threat modeling, governments really didn’t influence my decision much.

In many cases we use encryption not to hide anything from the government. HTTPS makes sense for online stores not because the government could see what I buy (given reasonable grounds for suspicion they could get a court order and check my mail before I get it, which no encryption helps against) but because sending around your credit card data in the clear is not a great idea(tm) if you want to be the only person using that credit card to buy stuff.

There are reasonable situations where encryption is used as defense against governments and their agencies. But in those cases it’s some form of open source end-to-end cryptography anyways, something you cannot outlaw (as the crypto wars of old have proven). On the other hand, in many situations encryption is mostly used to protect us from certain asshats who would love to change our Facebook profile picture to a penis or a frog or a frog’s penis1 or who’d like us to pay for their new laptop and Xbox. And they wouldn’t get access to any reasonably secure implementation of key escrow.

The idea that any “impurity”, any interference with cryptography, “breaks” it is a typical black or white fallacy. Two options are presented for people to choose from: A) Cryptography deployed perfectly as it is in its ideal form and B) Cryptography is “broken”. But we know from our everyday life that that is – excuse my language – bullshit. Because every form of encryption we use is a compromise in some way, shape or form.

I have to extend trust to the makers of my hardware and software, to the people who might have physical access to my laptop at some point and to the fact that nobody sneaks into my home at night to install weird keyloggers on my machine. All that trust I extend does not “break” the encryption on my harddrive. You could argue that it weakens it against certain adversaries (for example a potentially evil Intel having a backdoor in my machine) but for my personal threat model those aspects are mostly irrelevant or without alternatives. I don’t have the option to completely build my own computer and all the required software on it. Because I’ve got shit to do, pictures of monkeys to look at etc.

Personally I haven’t fully come to a conclusion on whether key escrow is a reasonable, good way to deal with the problem of enforcement of certain laws. And if it is, which situations it should apply to, and who that burden should be placed on. But one thing is obvious: All those articles on the “death of crypto” or the “destruction of crypto” or the “war against crypto” seem to be blown massively out of proportion, forfeiting the chance to make the case for certain liberties or against certain regulation in favor of a style of communication that reminds me of right-wing politicians using terrorist attacks to legitimize massive violations of human rights. Which is ironically exactly the kind of argument that those writing all these “crypto is under fire!!11” articles usually complain about.

Photo by Origami48616

  1. I don’t know if frogs have penises


Posts for Tuesday, January 27, 2015

StrongSwan VPN (and ufw)

I make ample use of SSH tunnels. They are easy, which is the primary reason. But sometimes you need something a little more powerful – like for a phone, so your traffic can’t be snooped out of the air around you, or so that all your traffic, not just SOCKS-proxy-aware apps, can be sent over it. For that reason I decided to delve into VPN software over the weekend. After a pretty rushed survey I ended up going with StrongSwan. OpenVPN brings back nothing but memories of complexity, and OpenSwan seemed a bit abandoned so I had to pick one of its descendants, and StrongSwan seemed a bit more popular than LibreSwan. Unscientific and rushed, like I said.

So there are several scripts floating around that will just auto set it up for you, but where’s the fun (and the understanding that allows tweaking) in that? So I found two guides and smashed them together to give me what I wanted:

strongSwan 5: How to create your own private VPN is the much more comprehensive one, but it also sets up a cert-style login system. I wanted passwords initially.

strongSwan 5 based IPSec VPN, Ubuntu 14.04 LTS and PSK/XAUTH has a few more details on a password based setup.

Additional notes: I pretty much ended up doing the first one straight through, except for creating client certs. Also, the XAUTH / IKEv1 setup of the password tutorial seems incompatible with the Android StrongSwan client, so I used EAP / IKEv2, pretty much straight out of the first one. Also it seems like you still need to install the CA cert and vpnHost cert on the phone, unless I was missing something.
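A rough sketch of what such an EAP / IKEv2 server-side configuration can look like (the host name, certificate file names, client address pool and user name are placeholders; the option names are standard strongSwan ipsec.conf / ipsec.secrets settings used by those guides):

# /etc/ipsec.conf
conn ikev2-eap
    keyexchange=ikev2
    # the server authenticates with its certificate
    leftauth=pubkey
    leftcert=vpnHostCert.pem
    leftid=@vpn.example.org
    leftsubnet=0.0.0.0/0
    # clients authenticate with username/password over EAP
    rightauth=eap-mschapv2
    rightsourceip=10.10.10.0/24
    eap_identity=%any
    auto=add

# /etc/ipsec.secrets
: RSA vpnHostKey.pem
alice : EAP "some-strong-password"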

Also, as an aside, and a curve ball to make things more difficult, this was done on a new server I am playing with. Ever since I played with OpenBSD’s pf, I’ve been ruined for iptables. It’s just not as nice. So I’d been hearing about ufw from the Ubuntu community for a while and was curious if it was nicer and better. I figured after several years maybe it was mature enough to use on a server. I think maybe I misunderstood its point. Uncomplicated maybe meant not-featureful. Sure, for unblocking ports for an app it’s cute and fast, and even for straight unblocking a port its syntax is a bit clearer, I guess? But as I delved into it I realized I might have made a mistake. It’s built on top of the same system iptables uses, but it creates all new tables, so iptables isn’t really compatible with it. The real problem however is that the ufw command has no way to set up NAT masquerading. None. The interface cannot do that. Whoops. There is a hacky workaround I found at OpenVPN – forward all client traffic through tunnel using UFW, which involves editing config files in pretty much iptables-style code. Not uncomplicated or easier or less messy like I’d been hoping for.
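For completeness, that workaround boils down to something like this (the VPN subnet and the outgoing interface name are placeholders for whatever your setup uses):

# /etc/default/ufw
DEFAULT_FORWARD_POLICY="ACCEPT"

# /etc/ufw/sysctl.conf
net/ipv4/ip_forward=1

# /etc/ufw/before.rules -- add this block above the existing *filter section
*nat
:POSTROUTING ACCEPT [0:0]
# masquerade traffic from the VPN client pool going out the public interface
-A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE
COMMIT

Then reload with ufw disable && ufw enable (or ufw reload). Which, as said, is pretty much raw iptables syntax hiding inside ufw’s config files.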

So I’m a little unimpressed with ufw (but I learned a bunch about it, so that’s good and I guess what I was going for) and had to add “remove ufw and replace with iptables on that server” to my todo list. But after a Sunday’s messing around I was able to get my phone to work over the VPN to my server and the internet. So a productive time.

Posts for Wednesday, January 21, 2015


Old Gentoo system? Not a problem…

If you have a very old Gentoo system that you want to upgrade, you might have some issues with software that is too old and a Portage that can’t just upgrade to a recent state. Although many methods exist to work around it, one that I have found to be very useful is to have access to old Portage snapshots. It often allows the administrator to upgrade the system in stages (say in 6-month blocks), perhaps not the entire world but at least the system set.

Finding old snapshots might be difficult though, so at one point I decided to create a list of old snapshots, two months apart, together with the GPG signature (so people can verify that the snapshot was not tampered with by me in an attempt to create a Gentoo botnet). I haven’t needed it myself in a while, but I still try to update the list every two months, which I just did with the snapshot of January 20th this year.
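Verifying a downloaded snapshot before using it is straightforward; a small sketch (the URL and file names are only illustrative – use whatever location the list points to – and it assumes the Gentoo release engineering key is already in your GPG keyring):

~# wget https://example.org/snapshots/portage-20150120.tar.xz
~# wget https://example.org/snapshots/portage-20150120.tar.xz.gpgsig
~# gpg --verify portage-20150120.tar.xz.gpgsig portage-20150120.tar.xz
~# tar -C /usr -xJf portage-20150120.tar.xz    # unpacks the portage/ tree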

I hope it at least helps a few other admins out there.

Posts for Wednesday, January 14, 2015

Digital dualism, libertarians and the law – cypherpunks against Cameron edition

The sociologist Nathan Jurgenson coined the term “digital dualism” in 2011. Digital dualism is the idea that the digital sphere is something separate from the physical sphere, that those two “spaces” are distinct and have very different rulesets and properties, different “natural laws”.

Jurgenson defined this term in light of an avalanche of articles explaining the emptiness and non-realness of digital experiences. Articles celebrating the “Offline” as the truer, realer and – yes – better space. But the mirror-image to those offline-enthusiasts also exists. Digital dualism probably permeates the Internet positivists as much as it does most Internet sceptics. Take one of the fundamental, central documents that so much of the ideology of leading digital activists and organisations can be traced back to: The Declaration of the Independence of Cyberspace. Digital dualism is at the core of that eloquent piece of writing propping up “cyberspace” as the new utopia, the (quote) “new home of Mind“.

I had to think of that, as Jurgenson calls it, digital dualism fallacy, when Great Britain’s Prime Minister David Cameron’s position on digital communication went public. Actually – I started to think about it when the reactions to Mr. Cameron’s plans emerged.

BoingBoing’s Cory Doctorow immediately warned that Cameron’s proposal would “endanger every Briton and destroy the IT industry“, the British Guardian summarized that Cameron wanted to “ban encryption“, a statement repeated by security guru Bruce Schneier. So what did Mr. Cameron propose?

In a public speech, about 4 minutes long, Cameron argued that in the light of terrorist attacks such as the recent attacks in Paris, the British government needed to implement steps to make it harder for terrorists to communicate without police forces listening in. The quote most news agencies went with was:

In our country, do we want to allow a means of communication between people which […] we cannot read?

Sounds grave and … well … evil. A big brother style government peeking into even the most private conversations of its citizens.

But the part left out (as indicated by the […]) adds some nuance. Cameron actually says (go to 1:50 in the video):

In our country, do we want to allow a means of communication between people which even in extremis with a signed warrant by the home secretary personally we cannot read?

He also goes into more detail, illustrating a process he wants to establish for digital communication analogous to the legal process we (as in liberal democracies) have already established for other, physical means of communication.

Most liberal democracies have similar processes for when the police needs to, or at least wants to, investigate some private individual’s communication such as their mail or the conversations within their own apartments or houses. The police needs to make their case to a judge, explaining the precise and current danger to the public’s or some individual’s safety, or present enough evidence to implicate the suspect in a crime of significant gravity. Then and only then can the judge (or a similar entity) decide that the given situation warrants infringing upon the suspect’s human rights. With that warrant or court order the police may now go and read a person’s mail to the degree the judge allowed them to.

Cameron wants something similar for digital communication meaning that the police can read pieces of it with a warrant or court order. And here we have to look at encryption: Encryption makes communication mostly impossible to read unless you have the relevant keys to unlock it. But there are different ways to implement encryption that might look very similar but make a big difference in cases like this.

The platform provider – for example WhatsApp or Google with their GMail service – could encrypt the data for its users. That would mean that the key to lock or unlock the data would reside with the platform provider who would make sure that nobody apart from themselves or the parties communicating could read it. In the best-practice case of so-called end-to-end encryption, only the two parties communicating have the keys to open the encrypted data. Not even the platform provider could read the message.

If we look at physical mail, the content of a letter is protected with a nifty technology called an “envelope”. An envelope is a paper bag that makes the actual contents of the letter unreadable, only the source and target addresses as well as the weight and size of the content can be seen. Physically envelopes are not too impressive, you can easily tear them open and look at what’s in them, but they’ve got two things going for them. First of all you can usually see when an envelope has been opened. But secondly and a lot more powerfully the law protects the letter inside. Opening someone else’s mail is a crime even for police detectives (unless they have the court order we spoke about earlier). But if the content is written in some clever code or secret language, the police is still out of luck, even with a court order.

From my understanding of Cameron’s argument, supported by his choice of examples, what he is going for is something called key escrow. This means that a platform provider has to keep the encryption keys necessary to decrypt communication going over their servers available for a while. Only when an authorized party asks for them with proper legitimisation (a court order) does the platform provider hand over the keys for the specific conversations requested. This would actually work very similarly to how the process for access to one’s mail works today. (Britain already has a so-called key disclosure law, RIPA, which forces suspects to hand over their own personal encryption keys given a court order. This serves a slightly different use case though, because forcing someone to hand over their keys automatically informs them of their status as a suspect, making surveillance in order to detect networks of criminals harder.)

Key escrow is highly problematic, as anyone slightly tech-savvy can probably guess. The recent hacks on Sony have shown us that even global corporations with significant IT staff and budget have a hard time keeping their own servers and infrastructure secure from unauthorized access. Forcing companies to store all those encryption keys on their servers would paint an even bigger target on them than there already is: Gaining access to those servers would not only give crackers a lot of data about people but also access to their communication and potentially even the opportunity for impersonation, with all of its consequences. And even if we consider companies trustworthy and doing all they can to implement secure servers and services, bugs happen. Every software more complex than “Hello World” has bugs, some small, some big. And if they can give attackers access to the keys to all castles, they will be found, even if just by trial and error or pure luck. People are persistent like that.

Tech people know that, but Mr. Cameron might actually not. And as a politician his position is actually very consistent and coherent. It’s his job to make sure that the democratically legitimized laws and rules of the country he governs are enforced, and that the rights these laws give its citizens and all people are defended. That is what being elected the prime minister of the UK means. Public and personal security are, just as a reasonable expectation of privacy, a big part of those rights, of those basic human rights. Mr. Cameron seems to see the safety and security of the people in Britain in danger and applies and adapts a well-established process to the digital sphere and the communication therein, homogenizing the situation between the physical and the digital spheres. He is in fact actively reducing or negating digital dualism while implicitly valuing the Internet and the social processes in it as real and equal to those in the physical sphere. From this perspective his plan (not the potentially dangerous and flawed implementations) is actually very forward thinking and progressive.

But laws are more than just ideas or plans, each law can only be evaluated in the context of its implementation. A law giving every human being the explicit right to ride to work on a unicorn is worthless as long as unicorns don’t exist. And who would take care of all the unicorn waste anyways? And as we already analysed, key escrow and similar ways of giving governments central access to encryption keys are very, very problematic. So even if we might agree that his idea about the police having potential access to selected communication with a court order is reasonable, the added risks of key escrow would make his proposal more dangerous and harmful than beneficial. But agree the cypherpunks do not.

Cypherpunks are a subculture of activists “advocating widespread use of strong cryptography as a route to social and political change” (quote Wikipedia). Their ideology can be characterized as deeply libertarian, focused on the individual and its freedom from oppression and restriction. To them privacy and anonymity are key to the digital age. Quoting the Cypherpunk Manifesto:

Privacy is necessary for an open society in the electronic age. […]

We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy […]

We must defend our own privacy if we expect to have any. We must come together and create systems which allow anonymous transactions to take place. People have been defending their own privacy for centuries with whispers, darkness, envelopes, closed doors, secret handshakes, and couriers. The technologies of the past did not allow for strong privacy, but electronic technologies do.

Famous cypherpunks include Wikileaks’ Julian Assange, Jacob Applebaum, who worked on the anonymisation software Tor and on Snowden’s leaked documents, as well as EFF’s Jillian C. York. If there was an actual cypherpunk club, its member list would be a who’s who of the digital civil rights scene. The cypherpunk movement is also where most of the most fundamental critique of Cameron’s plans came from; their figureheads pushed the idea of the government banning encryption.

Cypherpunks generally subscribe to digital dualism as well. The quote from their manifesto makes it explicit, mirroring the idea of the exceptionalism of the Internet and the digital sphere: “The technologies of the past did not allow for strong privacy, but electronic technologies do.” In their belief the Internet is a new and different thing, something that will allow all their libertarian ideas of free and unrestricted societies to flourish. Governments don’t sit all too well with that idea.

Where the anti-Internet digital dualists argue for the superiority of the physical, the space where governments rule in their respective areas, mostly conceptualizing the digital sphere as a toy, a play thing or maybe an inferior medium, the pro-Internet digital dualists of the cypherpunk clan feel that the Internet has transcended, superseded the physical. That in this space for its inhabitants new rules – only new rules – apply. Governments aren’t welcome in this world of bits and heroes carrying the weapons of freedom forged from code.

To these self-proclaimed warriors of digital freedom every attempt by governments to regulate the Internet, to enforce their laws in whatever limited way possible, is an attack, a declaration of war, an insult to what the Internet means and is. And they do have good arguments.

The Internet has a different structure than the physical. Where in the physical world distances matter a lot to define who belongs together, where borders are sometimes actually hard to cross, the Internet knows very little distance. We feel that our friends on the other side of the globe might have a different schedule, might have finished dinner before we even had breakfast, but they are still as close to us as our next-door neighbor. Messages travel to any point on this globe fast enough for us not to be able to perceive a significant difference between a message to a friend in Perth or one in Madrid.

Which government is supposed to regulate the conversation some Chinese, some Argentinian and some Icelandic people are having? Whose laws should apply? Does the strictest law apply or the most liberal one? Can a person break the laws of a country without ever having stepped into it, without ever having had the plan to visit that place? And how far is that country potentially allowed to go to punish these transgressions? Most of these questions haven’t been answered sufficiently and convincingly.

The approach of the Internet as this whole new thing beyond the reach of the governments of the physical world of stone and iron seems to solve these – very hard – problems quite elegantly. By leaving the building. But certain things don’t seem to align with our liberal and democratic ideas. Something’s rotten in the state of cypherpunkia.

Our liberal democracies are founded on the principle of equality before the law. The law has to treat each and every one the same. No matter how rich you are, who your family is or what color your toenails have: The rules are the rules. There is actually quite the outrage when that principle is transgressed, when privileged people get off free where minorities are punished harshly. The last months, with their numerous people of color killed by policemen in the US, have illustrated the dangerous, even deadly consequences of a society applying rules and the power of the enforcement entities unequally. Equality before the law is key to any democracy.

Here’s where pro-Internet digital dualism is problematic. It claims a different, more liberal ruleset for skilled, tech-savvy people. For those able to set up, maintain and use the digital tools and technologies securely. For the digital elite. The high priests of the new digital world.

The main argument against Cameron’s plans seems not to be that the government should never look at any person’s communication but that it shouldn’t be allowed to look at the digital communication that a certain group of people has access to and adopted as their primary means of communication. It’s not challenging the idea of what a government is allowed to do, it’s trying to protect a privilege.

Even with the many cases of the abuse of power by the police or by certain individuals within that structure using their access to spy on their exes or neighbors or whoever, there still seems to be a democratic majority supporting a certain level of access of the government or police to private communication in order to protect other goods such as public safety. And where many journalists and critics push for stronger checks and better processes to control the power of the police and its officers I don’t see many people arguing for a total restriction.

This debate about government access illustrates what can happen when libertarian criticism of the actions of certain governments or government agencies of democratic states capsizes and becomes contempt for the idea of democracy and its processes itself.

Democracy is not about efficiency, it’s about distributing, legitimizing and checking power as fairly as possible. The processes that liberal democracies have established to give the democratically legitimized government access to an individual’s communication or data in order to protect a public or common good are neither impenetrable nor efficient. It’s about trade-offs and checks and balances to try to protect the system against manipulation from within while still getting anything done. It’s not perfect, especially not in the implementations that exist but it does allow people to participate equally, whether they like hacking code or not.

When digital activists argue against government activities that are properly secured by saying “the requirement of a court order is meaningless because they are trivial to get” they might mean to point at some explicit flaw in a certain process. But often they also express their implicit distrust towards all government processes. Forgetting or ignoring that governments in democratic countries are the legitimized representation of the power of the people.

Digital dualism is a dangerous but powerful fallacy. Where it has created a breeding ground for texts about the horrors of the Internet and the falsehood of all social interaction in this transnational digital sphere, it has also created an environment where the idea of government, and with it often the ideas of democracy, has been put up for debate to be replaced with … well … not much. Software that skilled people can use to defend themselves against other skilled people who might have even better software.

Cryptography is a very useful tool for the individual. It allows us to protect communication and data, makes so much of the Internet even possible. Without encryption we couldn’t order anything online or do our banking or send emails or tweets or Facebook updates without someone hacking in, we couldn’t store our data on cloud services as backups. We couldn’t trust the Internet at all.

But we are more than individuals. We are connected into social structures that sometimes have to deal with people working against them or the rules the social systems agreed upon. Technology, even one as powerful as cryptography, does not protect and strengthen the social systems that we live in, the societies and communities that we rely on and that make us human, define our cultures.

The fight against government spying (and that is what this aggressive battle against Cameron’s suggestion stems from: The fear that any system like that would be used by governments and spy agencies to collect even more data) mustn’t make us forget what defines our cultures, our commons and our communities.

We talk a lot about communities online and recently even about codes of conduct and how to enforce them. Big discussions have emerged online on how to combat harassment, how to sanction asocial behavior and how to protect those who might not be able to protect themselves. In a way the Internet is having a conversation with itself trying to define its own rules.

But we mustn’t stop there. You might think that coming up with rules on how to act online and ways to enforce them is hard, but the actual challenge is to find a way to reintegrate all we do online with the offline world. Because they are not separate: Together they form the world.

The question isn’t how to keep the governments out of the Internet. The real question is how we can finally overcome the deeply rooted digital dualism to create a world that is worth living for for people who love tech as well as people who might not care. The net is no longer the cyber-utopia of a few hackers. It’s potentially part of everybody’s life and reality.

What does the democracy of the future look like? How should different national laws apply in this transnational space? How do human rights translate into the digital sphere and where do we need to draw the lines for government regulation and intervention? Those are hard questions that we have to talk about. Not just hackers and techies with each other but everyone. And I am sure that at the end of that debate a key escrow system such as the one Mr. Cameron seemingly proposed wouldn’t be what we agree on. But to find that out we have to start the discussion.

Photo by dullhunk


Posts for Sunday, January 4, 2015

Changes to my blog in 2015

New year usually brings changes. And the same holds true for my blog.

In (early) 2015 I will finally finish my LL.M.1 and therefore hopefully have more time for my blog (and myself). Below you can find some of the planned and already ongoing changes relating to my blog.

Slightly modified tagging system

From now on, tags named after communities like FSFE, Kiberpipa / Cyberpipe and KDE represent not only topics that directly relate to them, but also topics that should be of interest to those particular communities.

If you are reading this through a planet (or similar) aggregator and think some kinds of blog posts do not belong there, let me know and I will change the feed accordingly.

On the other hand, if you are subscribed directly to my blog via the Atom feed, you can, apart from the main feed, fine-grain your selection by subscribing only to specific categories or tags. To do so, you only need to visit the two links mentioned above and select, in the browser (or HTML source code), the Atom feed(s) you like.

Testing comments system

As promised before (more than once) I am looking into bringing comments back.

From the options that I could find, Isso seems to offer the best balance of usability vs. ease of administration for use on a self-hosted2 static blog, such as mine.

At the moment I am in the testing phase – trying to set it up and get it running. But after that, I plan to migrate the previous comments and make it live. This could take a while, since there is no Pelican plugin for it yet …there is a (broken?) pull request for it though.

Hopefully Isso will hold up longer against spam comments than the systems I have tried so far.

More content in 2015

Since I plan to finish my studies this year, I will finally have more spare time to blog. I hope you are looking forward to more articles at least as much as I am to writing them!

Internet Archive

While I was at it, I also made sure that all the blog posts written so far are actually showing up on the Internet Archive Wayback Machine and not just the first page. Most of them were not, but they are now.

hook out → happy new year everyone! ☺


  1. My LL.M. thesis is about “FLA – new challenges” and you can follow its progress on Git. Unfortunately for most readers, it is required by law to be in Slovenian. But important outcomes will follow in English later this year. 

  2. Since I host my own blog, leaving something as precious as comments on a 3rd party proprietary server is out of the question. 

Posts for Saturday, January 3, 2015


SELinux is great for enterprises (but many don’t know it yet)

Large companies that handle their own IT often have internal support teams for many of the technologies that they use. Most of the time, this is for reusable components like database technologies, web application servers, operating systems, middleware components (like file transfers, messaging infrastructure, …) and more. All components that are used and deployed multiple times, and thus warrant the expenses of a dedicated engineering team.

Such teams often have (or need to write) secure configuration deployment guides, so that these components are installed in the organization with as few misconfigurations as possible. A wrongly configured component is often worse than a vulnerable component, because vulnerabilities are often fixed with software upgrades (you do patch your software, right?) whereas misconfigurations survive these updates and remain exploitable for longer periods. Also, misuse of components is harder to detect than the exploitation of vulnerabilities, because misuse is often seen as regular user behavior.

But next to the redeployable components, most business services are provided by a single application. Most companies don’t have the budget and resources to put dedicated engineering teams on each and every application that is deployed in the organization. Even worse, many companies hire external consultants to help in the deployment of the component, and then the consultants hand over the maintenance of that software to internal teams. Some consultants don’t fully bother with secure configuration deployment guides, or even feel the need to disable security constraints put forth by the organization (policies and standards) because “it is needed”. A deployment is often seen as successful when the software functionally works, which does not necessarily mean that it is misconfiguration-free.

As a recent example that I came across, consider an application that needs Node.js. A consultancy firm is hired to set up the infrastructure, and given full administrative rights on the operating system to make sure that this particular component is deployed fast (because the company wants to have the infrastructure in production before the end of the week). Security is initially seen as less of a concern, and the consultancy firm informs the customer (without any guarantees though) that it will be set up “according to common best practices”. The company itself has no engineering team for Node.js nor wants to invest in the appropriate resources (such as training) for security engineers to review Node.js configurations. Yet the application that is deployed on the Node.js application server is internet-facing, so has a higher risk associated with it than a purely internal deployment.

So, how to ensure that these applications cannot be exploited or, if an exploit does happen, how to ensure that the risks involved are contained? Well, this is where I believe SELinux has great potential. And although I’m talking about SELinux here, the same goes for other similar technologies like TOMOYO Linux, grSecurity’s RBAC system, RSBAC and more.

SELinux can provide a container, decoupled from the application itself (but of course built for that particular application) which restricts the behavior of that application on the system to those activities that are expected. The application itself is not SELinux-aware (or does not need to be – some applications are, but those that I am focusing on here usually don’t), but the SELinux access controls ensure that exploits on the application cannot reach beyond those activities/capabilities that are granted to it.

Consider the Node.js deployment from before. The Node.js application server might need to connect to a MongoDB cluster, so we can configure SELinux to allow just that, but all other connections that originate from the Node.js deployment should be forbidden. Worms (if any) cannot use this deployment then to spread out. Same with access to files – the Node.js application probably only needs access to the application files and not to other system files. Instead of trying to run the application in a chroot (which requires engineering effort from those people implementing Node.js, which could be a consultancy firm that does not know or want to deploy within a chroot) SELinux is configured to disallow any file access beyond the application files.
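To make this a bit more concrete, here is a minimal, purely illustrative policy sketch. The type names (nodejs_t, nodejs_exec_t, nodejs_app_t) are made up for the example, and mongod_port_t is assumed to exist in the base policy for the MongoDB port – substitute whatever your policy actually defines:

policy_module(mynodejs, 1.0.0)

# Domain for the Node.js application server and the label of its binary.
type nodejs_t;
type nodejs_exec_t;
init_daemon_domain(nodejs_t, nodejs_exec_t)

# Label for the application files on disk.
type nodejs_app_t;
files_type(nodejs_app_t)

require {
        type mongod_port_t;
        class tcp_socket name_connect;
}

# The application may read its own files; nothing here grants access
# to other locations on the file system.
allow nodejs_t nodejs_app_t:dir list_dir_perms;
allow nodejs_t nodejs_app_t:file read_file_perms;

# Outbound TCP connections are limited to the MongoDB port, so a
# compromised server cannot be used to reach arbitrary systems.
allow nodejs_t mongod_port_t:tcp_socket name_connect;

# (A real policy needs more than this – socket creation, shared library
# access, logging, … – this only sketches the confinement idea.)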

With SELinux, the application can be deployed relatively safely while ensuring that exploits (or abuse of misconfigurations) cannot spread. All that the company itself has to do is to provide resources for a SELinux engineering team (which can be just a responsibility of the Linux engineering teams, but can be specialized as well). Such a team does not need to be big, as policy development effort is usually only needed during changes (for instance when the application is updated to also send e-mails, in which case the SELinux policy can be adjusted to allow that as well), and given enough experience, the SELinux engineering team can build flexible policies so that the administration teams (those that do the maintenance of the servers) can tune the policy as needed (for instance through SELinux booleans) without the need to have the SELinux team work on the policies again.
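Such tuning then stays within the standard SELinux tooling. For instance, with a hypothetical boolean built into the policy for optional e-mail support (the boolean name is invented for the example; getsebool/setsebool are the regular administration commands):

# List the booleans offered by the (hypothetical) policy module
~# getsebool -a | grep nodejs
nodejs_can_sendmail --> off

# Persistently enable it once the application update requires it
~# setsebool -P nodejs_can_sendmail on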

Using SELinux also has a number of additional advantages which other, sometimes commercial tools (like Symantec’s SPE/SCSP – really, Symantec, you ask customers to disable SELinux?) severely lack.

  • SELinux is part of a default Linux installation in many cases. Red Hat Enterprise Linux ships with SELinux by default, and actively supports SELinux when customers have any problems with it. This also improves the likelihood of SELinux being accepted, as other, third party solutions might not be supported. Ever tried getting support for a system on which both McAfee AV for Linux and Symantec SCSP are running (if you got them to work together at all)? At least McAfee gives pointers on how to update SELinux settings when they would interfere with McAfee processes.
  • SELinux is widely known and many resources exist for users, administrators and engineers to learn more about it. The resources are freely available, and often kept up-to-date by a very motivated community. Unlike with commercial products – whose support pages are hidden behind paywalls, where customers are usually prevented from interacting with each other, and where tips and tricks for using the product are often not found on the Internet – SELinux information can be found almost everywhere. And if you like books, I have a couple for you to read: SELinux System Administration and SELinux Cookbook, written by yours truly.
  • Using SELinux is widely supported by third party configuration management tools, especially in the free software world. Puppet, Chef, Ansible, SaltStack and others all support SELinux and/or have modules that integrate SELinux support in the management system.
  • Using SELinux incurs no additional licensing costs.

Now, SELinux is definitely not a holy grail. It has its limitations, so security should still be seen as a global approach in which SELinux plays just one specific role. For instance, SELinux does not prevent application behavior that is allowed by the policy. If a user abuses a configuration and can have an application expose information that the user usually does not have access to, but the application itself does (for instance because other users on that application might), SELinux cannot do anything about it (well, not as long as the application is not made SELinux-aware). Also, vulnerabilities that exploit application internals are not controlled by SELinux access controls. It is the application behavior (“external view”) that SELinux controls. To mitigate in-application vulnerabilities, other approaches need to be considered (such as memory protections for free software solutions, which can protect against some kinds of exploits – see grsecurity as one of the solutions that could be used).

Still, I believe that SELinux can definitely provide additional protections for such “one-time deployments” where a company cannot invest in resources to provide engineering services on those deployments. The SELinux security controls do not require engineering on the application side, making investments in SELinux engineering very much reusable.


Gentoo Wiki is growing

Perhaps it is because of the winter holidays, but the last weeks I’ve noticed a lot of updates and edits on the Gentoo wiki.

The move to the Tyrian layout, whose purpose is to eventually become the unified layout for all Gentoo resources, happened first. Then, three common templates (Code, File and Kernel) were deprecated in favor of their “*Box” counterparts (CodeBox, FileBox and KernelBox). These provide better parameter support (which should make future updates to the templates easier to implement) as well as syntax highlighting.

But the wiki also saw a number of contributions being added. I added a short article on Efibootmgr as the Gentoo handbook now also uses it for its EFI related instructions, but other users added quite a few additional articles as well. As they come along, articles are being marked by editors for translation. For me, that’s a trigger.

Whenever a wiki article is marked for translations, it shows up on the PageTranslation list. When I have time, I pick one of these articles and try to update it to move to a common style (the Guidelines page is the “official” one, and I have a Styleguide in which I elaborate a bit more on the use). Having a common style gives a better look and feel to the articles (as they are then more alike), gives a common documentation development approach (so everyone can join in and update documentation in a similar layout/structure) and – most importantly – reduces the number of edits that do little more than switch from one formatting to another.

When an article has been edited, I mark it for translation, and then the real work on the wiki starts. We have several active translators on the Gentoo wiki, whom we cannot thank enough for their work (I started out at Gentoo as a translator, so I have some feeling for their work). They make the Gentoo documentation reachable for a broader audience. Thanks to the use of the translation extension (kindly offered by the Gentoo wiki admins, who have been working quite hard the last few weeks on improving the wiki infrastructure) translations are easier to handle and follow through.

The advantage of a translation-marked article is that any change on the article also shows up on the list again, allowing me to look at the change and perform edits when necessary. For the end user, this is behind the scenes – an update on an article shows up immediately, which is fine. But for me (and perhaps other editors as well) this gives a nice overview of changes to articles (watchlists can only go so far) and also shows the changes in a simple yet efficient manner. Thanks to this approach, we can more actively follow up on edits and improve where necessary.

Now, editing is not always just a few minutes of work. Consider the GRUB2 article on the wiki. It was marked for translation, but had some issues with its style. It was very verbose (which is not a bad thing, but suggests splitting the information across multiple articles) and it had quite a few open discussions on its Discussions page. I started editing the article around 13.12h local time, and ended at 19.40h. Unlike with offline documentation, the entire editing process can be followed through the page’s history. And although I’m still not 100% satisfied with the result, it is imo easier to follow and read.

However, don’t get me wrong – I do not feel that the article was wrong in any way. Although I would appreciate articles that immediately follow a style, I’d rather see more contributions (which we can then edit towards the new style) than start penalizing contributors that don’t use the style. That would be counter-productive, because it is far easier to update the style of an article than to write the article itself. We should try to get more contributors to document aspects of their Gentoo journey.

So, please keep them coming. If you find a lack of (good) information for something, start jotting down what you know in an article. We’ll gladly help you out with editing and improving the article then, but the content is something you are probably best to write down.

Posts for Wednesday, December 31, 2014

Once More, with Feeling #31c3

“A new Dawn”. That’s the motto that more than 10000 hackers, activists and people interested in or connected to that (sub-)culture assembled under in Hamburg for the last few days. This probably slightly long-ish text outlines my thoughts on the 31st Chaos Communication Congress, which took place in the congress center in Hamburg.

(You probably should take the things I write with a tablespoon of salt. After public personal attacks on me by representatives of the CCC I quit my membership, ending a few years of semi-public dissent on certain key aspects of the digital life of human beings in the beginning of the 21st century. I’ll try to be fair and as objective as human beings can be, but obviously I can’t deny some sore emotional spots when it comes to that organisation and its figureheads. Also I should note that the program committee did reject the sessions I proposed. I did expect that rejection and can live with it but still add it here for transparency reasons.)

2013 wasn’t a good year for the hacker/digital activist/etc community. Snowden’s leaked documents and Glenn Greenwald’s strategy of continuous publication of small (in length/volume, not in impact) pieces put that – usually quite resilient – community in a state of shock. An ideology had radically fallen apart within months, leaving its protagonists rendered helpless and without orientation for a while. Check out my article on last year’s conference for a more detailed report on the event and its context and environment.

The tagline (“A new Dawn”) sounded refreshingly optimistic. A fresh start, a reboot of efforts. Rethinking the positions of the hacker culture in the greater scheme of things. My first thought upon reading the congress motto was wondering what kind of agenda the CCC would set up for itself for the coming year. Curiosity is quite a positive and optimistic feeling so I obviously liked that line a lot.

The CCC conference organisation is – after all these years – a well-oiled machine. No matter what you throw their way, the conference attendees will not feel a hiccup. The whole team organizing the conference, from the video streaming and recording “angels” to all the helpers keeping people hydrated and the facilities clean to the tech team providing more Internet bandwidth than some countries have access to, is second to none. Literally. The self-organized assemblies, where people coming from different hackspaces and organizations gathered into new local communities providing services and learning opportunities, knocked it out of the park again with workshops and an insane amount of infrastructure they offered to conference attendees. I can’t think of any conference that comes even close to that level of competence and “professionalism” by – in the literal meaning of the word – amateurs. Lovers of whatever it is they do.1

But for some reason, the motto didn’t seem to click for me and many others I talked to (on the other hand: for some it did). It was not about the people who were so obviously happy to meet, hang out, talk, dance, teach and learn. It was not about the brilliant people I met and hung out with. It was just a program underdelivering on the promise the motto made.

The conference program is grouped into so-called tracks, each with their own focus and agenda. The Hardware&Making track talks about practice, about building hardware (usually with blinking LEDs) and creating things. The Security&Hacking track punches holes into whatever protocol or service you can think of. Art&Culture gives room to artists to present their work, Science gives scientists a platform to disseminate their findings. Ethics, Society & Politics tries to tackle the pressing social and political questions of these days, while Entertainment adds some low- and middlebrow amusement. And there are some CCC-specific talks that deal with the life of that organization.

Many tracks delivered. Hardware&Making, Security&Hacking and Art&Beauty did exactly what’s expected of them. And while I am not a blinky LED person and no security nerd there were quite impressive talks there (you might have heard about starbug making a fake fingerprint from a photo or about how the SS7 standard can be used by anyone with a few extra bucks to track you regardless of how secure your phone is). I’ve never been a fan of the entertainment sessions at conferences, but maybe they are fun if you drink.

But sadly the Ethics, Society & Politics track in general fell flat. That doesn’t mean that all the talks were bad (quite the opposite in some cases); it means that the whole impetus of that track was hard to read. But “A new Dawn” it wasn’t. All those talks could have happened at any C3 in the last 3 or 4 years. It lacked a vision, an agenda, a perspective. Which could be read as a continuation of last year’s shock state, but I think that would be wrong. Nobody is shocked. Things are just back to normal.

Maybe “Back to normal” would have been the perfect motto for this year. The “product” CCC congress is established, successful and works. It’s like Microsoft Office or EA’s sports games: Every year you get an update with a few new bells and whistles, some neat new additions and an updated look and feel. But the product itself stays the same because its consumers have gotten used to and comfortable with it.

And so the usual suspects go through the motions of, for example, the “Fnord News Show” or similar events whose main function is to provide the community with folklore to assemble around. But folklore tends to be about the past, about keeping something alive through its rituals even when the world has moved on. Some people dance in the outfits of their great-grandparents, some gather to laugh at “stupid” politicians who couldn’t code their own kernel to save their lives. Ho ho ho!

The scene has found its way to deal with the situation the Snowden docs created. A friend called that approach the “Snowden-industrial complex”. All those companies and governments and agencies need security consultants, every week sees a new cryptographic silver bullet to crowdfund or buy, and a small group has made sitting on panels and milking the Snowden docs quite the successful business model. As Jacob Appelbaum’s talk this year illustrated, the scene has learned how to work with and against the docs to create whatever story sells best at any given moment. Sadly, the product they are selling sometimes seems to be only very loosely connected to truth or politics.

And that was the saddest realization of the congress. That in a building full of art and music and smart people no forward momentum for any form of structural change was emerging. Everything felt chained to the way things have always been done(tm).

Just as with the cycle of Snowden leaks, the subculture is still caught in its old MO: take something, look at technical details, break them, wait for them to be patched. Rinse and repeat. Rinse and repeat. Rinse and repeat.

Often the most interesting things are those that happen without much thought. In that regard the “Tea House” was probably the most revealing. I don’t even want to go into the whole cultural appropriation angle of mostly white dudes building an “oriental” and “exotic” space to hang out at a conference of mostly white dudes. But architecturally and visually that space, claimed by many of its frequent visitors to be “the place to be”2, felt like a royal palace of sorts, with layered zones of different exclusivity and digital lords and ladies holding court with their courtiers.

I realized that the scene is in no way apolitical – an accusation that has been put forward at times, and not only by me. Actually, some of the most awesome things about the congress were the donation boxes for the refugees of Lampedusa put up everywhere, as well as the many stickers and flags by groups such as the Antifa3. There still are these islands of radical, political and mostly left thinking around, but sadly they don’t feel like they have any real wide-spread impact.

The main vibe was that of a Silicon Valley libertarianism spiced up with some idealization of the (German) constitution and the constitutional court. A deeply rooted antagonism towards the institutions of government, their perceived incompetence and evilness, combined with a yearning to be respected and acknowledged. German politics (and not only German politics) has managed to mostly contain the Snowden-induced outrage and to channel most of the NGO-based energy into different committees investigating what the intelligence community has lied about (everything) and how they can be controlled better in the future (they can’t). But instead of looking at political solutions, at structural issues, the congress kept it light. Focused on details while still being able to leave the big picture out of the equation.

The hacking subculture in Germany is at a crossroads. It has to decide on whether to politicize, to develop a set of bigger political goals even if that might cost certain people in the community certain business opportunities, or whether to stay on its current trajectory drifting closer and closer to a Defcon-like exclusive security-tech-bubble forming the recruiting environment for entities that used to have no place there.4

I had decided in advance that this congress would be my last one, allowing me to take a more distanced, observing position. And I had a really interesting time and a bunch of great conversations giving me more perspective on the whole shebang (special thanks to Christoph Engemann, Richard Marggraf-Turley, Norbert Schepers, Laura Dornheim and Anna-Lena Bäcker, who helped me understand different things better and see more clearly, as well as to so many others I forgot and who might now be mad at me – please don’t be).

The classic Buffy episode “Once More, with Feeling” shows us a Vampire Slayer back from the dead. She is lost, has lost her inspiration and energy, and is just “Going through the Motions”:

Every single night
The same arrangement
I go out and fight the fight
Still, I always feel the strange estrangement
Nothing here is real
Nothing here is right

I’ve been making shows of trading blows
Just hoping no one knows
That I’ve been going through the motions

(Video: https://www.youtube.com/watch?v=zMv0abh4Vrc)

The song ends with Buffy trying to leave that state of stagnation; I’d hoped to see the hackers do the same. But I realized that maybe we expect too much from that group. That they are stuck trying to keep alive the old superhero narrative which that subculture oh so much adores.

The congress is a very interesting event to attend. Brilliant people all around. Brilliant projects and infrastructure. It’s just not the place to work on the questions that I feel we need to work on. Rethinking what privacy, the self, connectedness and responsibility mean in a digital world. Rethinking the future of work and internet governance. Building utopias and goals for a better future that’s more than just a bunch of white men with tech backgrounds making a comfortable living. Those are some of the things I want to work on in 2015 with many different people.

I hope you’ll all have fun at the 32c3 and I am sure it will be an even better product than it already is. Thanks to everyone helping to make the 31c3 a great and interesting place to be. My congress visits have been quite a ride. Now I have to leave the cart to get back to work.

  1. Obviously there is always an element of self-exploitation which I know the organizers tried to limit as much as possible by forcing people to take breaks.
  2. it’s funny how a scene whose central narrative is based on “we were all excluded by the mainstream, now we meet here to transcend those structures” quickly rushes to establish its own “in-crowd” and exclusive places
  3. a German left radical antifascist movement
  4. non-whistleblowing ex-intelligence people as well as cyberwar enthusiasts spoke at this year’s conference, and the coming policy for intelligence personnel speaking at C3 events is quite shocking to people who know CCC’s history


Posts for Tuesday, December 30, 2014

avatar

Why does it access /etc/shadow?

While updating the SELinux policy for the Courier IMAP daemon, I noticed that it (well, the authdaemon that is part of Courier) wanted to access /etc/shadow, which is of course a big no-no. It doesn’t take long to know that this is through the PAM support (more specifically, pam_unix.so). But why? After all, pam_unix.so should try to execute unix_chkpwd to verify a password and not read in the shadow file directly (which would require all PAM-aware applications to be granted access to the shadow file).

So I dived into the Linux-PAM sources (yay free software).

In pam_unix_passwd.c, the _unix_run_verify_binary() function is called, but only if the get_account_info() function returns PAM_UNIX_RUN_HELPER.

static int _unix_verify_shadow(pam_handle_t *pamh, const char *user, unsigned int ctrl)
{
...
        retval = get_account_info(pamh, user, &pwent, &spent);
...
        if (retval == PAM_UNIX_RUN_HELPER) {
                retval = _unix_run_verify_binary(pamh, ctrl, user, &daysleft);
                if (retval == PAM_AUTH_ERR || retval == PAM_USER_UNKNOWN)
                        return retval;
        }

In passverify.c, this get_account_info() function checks the password file entry and, if the entry is shadowed, returns PAM_UNIX_RUN_HELPER when the current (effective) user id is not root, or when SELinux is enabled:

PAMH_ARG_DECL(int get_account_info,
        const char *name, struct passwd **pwd, struct spwd **spwdent)
{
        /* UNIX passwords area */
        *pwd = pam_modutil_getpwnam(pamh, name);        /* Get password file entry... */
        *spwdent = NULL;
 
        if (*pwd != NULL) {
...
                } else if (is_pwd_shadowed(*pwd)) {
                        /*
                         * ...and shadow password file entry for this user,
                         * if shadowing is enabled
                         */
#ifndef HELPER_COMPILE
                        if (geteuid() || SELINUX_ENABLED)
                                return PAM_UNIX_RUN_HELPER;
#endif

SELINUX_ENABLED is a C macro defined in the same file:

#ifdef WITH_SELINUX
#include <selinux/selinux.h>
#define SELINUX_ENABLED is_selinux_enabled()>0
#else
#define SELINUX_ENABLED 0
#endif

And this is where my “aha” moment came from: the Courier authdaemon runs as root, so its effective user id is 0. The geteuid() call will return 0, so the SELINUX_ENABLED macro must return non-zero for the proper path to be followed. A quick check in the audit logs, after disabling the dontaudit lines, showed that the Courier IMAPd daemon wants to get the attribute(s) of the security_t file system (on which the SELinux information is exposed). As this was denied, the call to is_selinux_enabled() returns -1 (error) which, through the macro, becomes 0.
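For those who want to see such silenced denials themselves: the dontaudit rules can be temporarily disabled and later re-enabled by rebuilding the policy (standard semodule usage, nothing Courier-specific):

~# semodule -DB
(reproduce the authentication attempt and look at the AVC denials in the audit log)
~# semodule -B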

So granting selinux_getattr_fs(courier_authdaemon_t) was enough to get it to use the unix_chkpwd binary again.

To fix this properly, we need to grant this to all PAM-using applications. There is an interface called auth_use_pam() in the policies, but that isn’t used by the Courier policy. Until now, that is ;-)

Posts for Friday, December 26, 2014

My KWin short-cuts experiment

Inspired by Aurélien Gâteau’s blogpost and the thread on KDE Forums, I decided to change my global KWin short-cuts as well to see how it fares.

Shortcuts

As proposed in the forum thread and by Aurélien, I have concentrated my desktop/window manipulation short-cuts around the Meta key.

In addition I figured out that to navigate virtual desktops and activities I practically only use the following effects:

  • Activities bar
  • Activities menu (which I have bound to right-click on background)
  • Desktop grid
  • Show all windows on all desktops

Here are the most important changes:

Virtual desktops

  • Meta+F? – goes to the desktop number ?
  • Meta+Shift+F? – moves/shifts the active window to desktop number ?
  • Meta+Down – shows all desktops in the grid effect

Window management

  • Meta+F – puts the window in full-screen mode (i.e. maximises it and hides the window decorations)
  • Meta+Up – maximises the window (or de-maximises it)
  • Meta+Left – window occupies the left half of the screen
  • Meta+Right – window occupies the right half of the screen
  • Meta+PageUp – keep window above others
  • Meta+PageDown – keep window below others
  • Meta+Tab – show all windows from all desktops
  • Meta+Esc – close window
  • Meta+Ctrl+Esc – kill window

Launchers, Activities, etc.

  • Meta+A – opens the Activities bar
  • Meta+Space – Krunner
  • Meta+Enter – Yakuake

How does it feel

I actually quite like it and it does not take a lot of getting used to. It is far easier to remember than the KDE Plasma default. And I am saying this after years and years of using the default, as well as years of using a different custom set-up (concentrated on Alt).

Personally, I think it would make sense to adopt such a change of defaults. But if that does not happen, I know I can still just change it myself locally …and I will ☺
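For those who prefer the terminal over System Settings, these bindings end up in kglobalshortcutsrc and can be written with kwriteconfig. A rough sketch – the group and key names below are from memory and may well differ between KDE versions, so treat them as placeholders and check your own kglobalshortcutsrc first:

~$ kwriteconfig --file kglobalshortcutsrc --group kwin --key "Window Maximize" "Meta+Up,none,Maximize Window"

KWin or kglobalaccel may need to be restarted (or you may need to re-login) before a change made this way is picked up.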

hook out → taking a final sip of honey-sweetened Yorkshire Gold tea (Taylors of Harrogate) and going to sleep

Posts for Wednesday, December 24, 2014

Why and how to shave with shaving oil and DE safety razors

So, I have been shaving with shaving oil and safety razors 1 for a while now and decided that it is time I help my fellow geeks by spreading some knowledge about this method (which is sadly still poorly documented on-line). Much of the method below consists of hacks assembled from different sources and lots of trial and error.

Why shave with oil and DE safety razors

First of all, shaving with old-school DE razors is not as much about being hip and trendy 2 as it is about optimising. Although, I have to admit, it still looks pretty cool ☺

There are several reasons why shaving with oil and DE razors beats modern foam and system multi-blade razors hands down:

  • they have multiple uses – shaving oil replaces both the shaving foam/soap and the aftershave (and pre-shaving balm); DE blades are used in tools and, well, they are proper blades for crying out loud!;
  • the whole set takes a lot less space when travelling – one razor, a puny pack of blades and a few dozen millilitres of oil is all you need to carry around 3;
  • you get a better shave – once you start shaving properly, you get fewer burns and cuts and a smoother shave as well;
  • it is more ecological – the DE blades consist of fewer different materials and are easier to recycle, and all shaving oils I have found so far have some sort of Eco and/or Bio certification;
  • and last, but not least in these days, it is waaaaaaay cheaper (more on that in a future blog post).

History and experience (skip if you are not interested in such bla bla)

I got my first shaving oil4 about two years ago, when I started to travel more. My wonderful girlfriend bought it for me, because a 30 ml flask took a lot less space than a tin of shaving foam and a flask of aftershave. The logic behind this decision was:

“Well, all the ancient people managed to have clean shaves with oil, my beard cannot be that much different than the ones they had in the past.”

And, boy, was I in for a nice surprise!

I used to get inflammations, pimples and in-grown hairs quite often, so I never shaved very close – but when shaving with oil, there was none of that! After one or two months of trial and error with different methods and my own ideas, I finally figured out how to properly use it and left the shaving soaps, gels and foams for good.

After shaving with oil for a while, I noticed that all “regular modern” system multi-blade razors have strips of aloe vera gel. These work well with shaving foam, gels and soaps, but occasionally stick to your face if you are using shaving oil – no matter how many or how few blades the razor head has. I just could not find razors without the strips.

That is why I started thinking about the classic DE safety razors and eventually got a plastic Wilkinson Sword Classic for a bit over 5 €. Surprisingly, after just a few minuscule cuts, the improvement over the system multi-blade razors became quite apparent. I have not touched my old Gillette Mach3 ever since. The Wilkinson Sword Classic is by far not a very good DE razor, but it is cheap and easy for beginners to use. If you decide you like this kind of shave, I would warmly recommend that you upgrade to a better one.

For example, recently I got myself a nice Edwin Jagger razor with their DE8 head and I love it. It is a full-metal, chromed, closed-comb razor, which means it has another bar below the blade, so it is easier and safer to use than a more aggressive open-comb version.

How to Shave with oil and DE razors

OK, first of all, do not panic! – they are called “safety razors” for a reason. As opposed to the straight razors, the blade is enclosed, so even if you manage to cut yourself, you cannot get a deep cut. This is truer still for closed-comb razors.

  1. Wash your face to remove dead skin and fat. It is best if you shave just after taking a shower.

  2. Get moisture into the hairs. Beard hair is as hard as copper wire while it is dry; but wet, it is quite soft. The best way is to apply a towel soaked in very hot water to your face a few times for ten seconds or so – the hot water also opens up the pores. If you are travelling and do not have hot water, just make sure those hairs are wet. I usually put hot water in the basin and leave the razor in it while I towel my face, so the razor is also warm.

  3. Put a few drops of shaving oil into the palm of your hand (3-6 is enough for me, depending on the oil) and with two fingers apply it to all the places on your face that you want to shave. Any oil you may have left on your hands, you can safely rub into your hair (on top of your head) – it will do them good and you will not waste the oil.

  4. Splash some more (hot) water on your face – the fact that water and oil do not mix well is the reason why your blade glides so finely. Also, during the shave, whenever you feel your razor does not glide that well any more, usually just applying some water is enough to fix it.

  5. First shave twice in the direction of the grain – to get a feeling for the right angle, take the handle of the razor in your fingers and lean the flat of the head onto your cheek, so the handle is at 90° to your cheek; then reduce the angle until you get to a position where shaving feels comfortable. It is also easier to shave by moving your whole arm than just the wrist. Important: DO NOT apply pressure – safety razors expose enough blade that, with a well balanced razor, just the weight of the head produces almost enough pressure for a good shave (as opposed to system multi-blade razors). Pull in the direction of the handle with slow strokes – on a thicker beard you will need to make shorter strokes than on a thinner one. To get a better shave, make sure to stretch your skin where you are currently shaving. If the razor gets stuck with hair and oil, just swish it around in the water to clean it.

  6. Splash your face with (hot) water again and now shave across the grain. This gives you a closer shave5.

  7. Splash your face with cold water to get rid of any remaining hairs and to close the pores. Get a drop or two of shaving oil and a few drops of water into your palm and mix them with two fingers. Rub the oil-water mixture into your face instead of using after-shave and leave your face to dry – the essential oils in the shaving oil enrich and disinfect your skin.

  8. Clean your razor under running water to remove hair and oil and towel-dry it (do not rub the blade!). When I take it apart to change blades, I clean the razor with water and rub it with the towel, to keep it shiny.

Update: I learned that it is better to shave twice with the grain and once across, than once with it and twice across. Update: I figured out the trick with rubbing the excess oil into hair. Update: Updated the amount of oil needed, to match new experience.

Enjoy shaving ☺

It is a tiny bit more work than shaving with system multi-blade razors, but it is well worth it! For me, the combination of quality DE safety razors and shaving oil turned shaving from a bothersome chore into a morning ritual I look forward to.

…and in time, I am sure you will find (and share) your own method as well.

Update: I just stumbled upon the great blog post “How Intellectual Property Destroyed Men’s Shaving” and thought it would be great to mention it here.

hook out → see you well shaven at Akademy ;)


  1. Double edged razors as our granddads used to shave with. 

  2. Are old-school razors hip and trendy right now anyway? I have not noticed them to be so. 

  3. I got myself a nice leather Edwin Jagger etui for carrying the razor and two packs of blades that measures 105 x 53 x 44 mm (for comparison: the ugly Gillette Mach3 plastic holder measures 148 x 57 x 28 mm and does not offer much protection when travelling).

  4. L’Occitane Cade (wild juniper) shaving oil, and I am still happy with that one.

  5. Some claim that for a really close shave you need to shave against the grain as well, but I found that to be too aggressive for my beard. Also I heard this claim only from people shaving with soap. 

Posts for Tuesday, December 23, 2014

avatar

Added UEFI instructions to AMD64/x86 handbooks

I just finished up adding some UEFI instructions to the Gentoo handbooks for AMD64 and x86 (I don’t know how many systems are still using x86 instead of the AMD64 one, and if those support UEFI, but the instructions are shared and they don’t collide). The entire EFI stuff can probably be improved a lot, but basically the things that were added are:

  1. boot the system using UEFI already if possible (which is needed for efibootmgr to access the EFI variables). This is not entirely mandatory (as efibootmgr is not mandatory to boot a system) but recommended.
  2. use vfat for the /boot/ location, as this now becomes the EFI System Partition.
  3. configure the Linux kernel to support EFI stub and EFI variables
  4. install the Linux kernel as the bootx64.efi file to boot the system with
  5. use efibootmgr to add boot options (if required) and create an EFI boot entry called “Gentoo” (see the example below)
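To give an idea of what that last step looks like in practice, the command is roughly the following – a sketch rather than a copy from the handbook, assuming the EFI System Partition is the second partition on /dev/sda (adjust the disk and partition number to your own layout):

~# efibootmgr -c -d /dev/sda -p 2 -L "Gentoo" -l "\efi\boot\bootx64.efi"

This of course presumes a kernel built with EFI stub support (CONFIG_EFI, CONFIG_EFI_STUB) and access to the EFI variables (efivarfs), as mentioned in the steps above.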

If you find grave errors, please do mention them (either on a talk page on the wiki, as a bug or through IRC) so they get picked up. All developers and trusted contributors on the wiki have access to the files, so they can edit where needed (but do take care, when editing something, that it is either architecture-specific or shared across all architectures – check the page when editing; if it is Handbook:Parts then it is shared, and Handbook:AMD64 is specific to that architecture). And if I’m online I’ll of course act on it quickly.

Oh, and no – it is not a bug that there is a (now unused) /dev/sda1 “bios” partition. Due to the differences between the possible installation alternatives, it is easier for us (me) to just document a common partition layout than to try and write everything out (which would just make it harder for new users to follow the instructions).

Posts for Sunday, December 14, 2014

avatar

Handbooks moved

Yesterday the move of the Gentoo handbooks (whose most important part is the installation instructions for the various supported architectures) to the Gentoo Wiki was concluded, with a last-minute addition being the one-page views, so that users who want to can view the installation instructions completely within one view.

Because we use lots of transclusions (i.e. including different wiki articles inside another article) to support a common documentation base for the various architectures, I did hit a limit that prevented me from creating a single page for the entire handbook (i.e. “Installing Gentoo Linux”, “Working with Gentoo”, “Working with portage” and “Network configuration” together), but I could settle for one page per part. I think that matches most of the use cases.

With the move now done, it is time to start tackling the various bugs that were reported against the handbook, as well as initiate improvements where needed.

I did make a mistake in the move though (probably more than one, but this one is fresh in my memory). I had to do a lot of the following:

<noinclude><translate></noinclude>
...
<noinclude></translate></noinclude>

Without this, transcluded parts would suddenly show the translation tags as regular text. Only afterwards (I’m talking about more than 400 different pages) did I read that I should transclude the /en pages (like Handbook:Parts/Installation/About/en instead of Handbook:Parts/Installation/About) as those do not have the translation specifics in them. Sigh.

Posts for Friday, December 12, 2014

avatar

Gentoo Handbooks almost moved to wiki

Content-wise, the move is done. I’ve done a few checks on the content to see if the structure still holds, translations are enabled on all pages, the use of partitions is sufficiently consistent for each architecture, and so on. The result can be seen on the gentoo handbook main page, from which the various architectural handbooks are linked.

I sent a sort-of announcement to the gentoo-project mailinglist (which also includes the motivation of the move). If there are no objections, I will update the current handbooks to link to the wiki ones, as well as update the links on the website (and in wiki articles) to point to the wiki.

Posts for Wednesday, December 10, 2014

avatar

Sometimes I forget how important communication is

Free software (and documentation) developers don’t always have all the time they want. Instead, they grab whatever time they have to do what they believe is the most productive – be it documentation editing, programming, updating ebuilds, SELinux policy improvements and what not. But they often don’t take the time to communicate. And communication is important.

For one, communication is needed to reach a larger audience than those who follow the commit history in whatever repository work is being done. Yes, there are developers that follow each commit, but development isn’t just done for developers, it is also for end users. And end users deserve frequent updates and feedback. Be it through blog posts, Google+ posts, tweets or Instagram posts (well, I’m not sure how to communicate a software or documentation change through Instagram, but I’m sure people find lots of creative ways to do so), telling the broader world what has changed is important.

Perhaps a (silent or not) user was waiting for this change. Perhaps he or she is even actually trying to fix things himself/herself but is struggling with it, and would really benefit (time-wise) from a quick fix. Without communication about the change, (s)he does not know that no further attempts are needed, actually reducing overall efficiency.

But that kind of communication is only one-way. Better is to get feedback as well. In that sense, communication is just one part of the feedback loop – once developers receive feedback on what they are doing (or did recently) they might even improve results faster. With feedback loops, the wisdom of the crowd (in the positive sense) can be used to improve solutions beyond what the developer originally intended. And even a simple “cool” or “I like” is good information for a developer or contributor.

Still, I often forget to do it – or don’t have the time to focus on communication. And that’s bad. So, let me quickly state what things I forgot to communicate more broadly about:

  • A new developer joined the Gentoo ranks: Jason Zaman. Now developers join Gentoo more often than just once in a while, but Jason is one of my “recruits”. In a sense, he became a developer because I was tired of pulling his changes in and proxy-committing stuff. Of course, that’s only half the truth; he is also a very active contributor in other areas (and was already a maintainer for a few packages through the proxy-maintainer project) and is a tremendous help in the Gentoo Hardened project. So welcome onboard Jason (or perfinion as he calls himself online).
  • I’ve started with copying the Gentoo handbook to the wiki. This is still an on-going project, but was long overdue. There are many reasons why the move to the wiki is interesting. For me personally, it is to attract a larger audience to update the handbook. Although the document will be restricted to editing by developers and trusted contributors only (it does contain the installation instructions and is a primary entry point for many users), that is still a whole lot more people than the handful (one or two actually) of developers who used to update the handbook.
  • The SELinux userspace (2.4 release) is looking more stable; there are no specific regressions anymore (upstream is at release candidate 7) although I must admit that I have not implemented it on the majority of test systems that I maintain. Not due to fears, but mostly because I struggle a bit with available time so I can do without testing upgrades that are not needed. I do plan on moving towards 2.4 in a week or two.
  • The reference policy has released a new version of the policy. Gentoo quickly followed through (Jason did the honors of creating the ebuilds).

So, apologies for not communicating sooner, and I promise I’ll try to uplift the communication frequency.

Posts for Friday, November 28, 2014

Banning IPs on DD-WRT Based on Failed SSH Authentication

In response to literally thousands of failed SSH attempts from China, I have written a Python script to automate blocking IPs that fail authentication.


It is a VERY dirty Python script. It reads syslog from my router, finds the IPs that have failed SSH authentication to my router, and adds a firewall rule to block them.

1. nvram get rc_firewall
2. Scan syslog for bad authentication
3. Build list of IPs in syslog but not in rc_firewall
4. scp file containing new rc_firewall
5. Apply new rc_firewall, commit, and reboot the router
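For context, the rules the script ends up adding to rc_firewall are plain iptables drops, one per offending address – conceptually something like the line below (the address is just a documentation placeholder, not one from my logs):

iptables -I INPUT -s 203.0.113.45 -j DROP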

It’s nasty right now, but it works!

https://github.com/Clete2/DD-WRT-Ban-IP

This all assumes a very specific setup:

1. SSH is enabled

2. syslog is logging to an external server

3. The account you run this script on has public key authentication set up with the router

Posts for Sunday, November 2, 2014

avatar

No more DEPENDs for SELinux policy package dependencies

I just finished updating 102 packages. The change? Removing the following from the ebuilds:

DEPEND="selinux? ( sec-policy/selinux-${packagename} )"

In the past, we needed this construction in both DEPEND and RDEPEND. Recently however, the SELinux eclass got updated with some logic to relabel files after the policy package is deployed. As a result, the DEPEND variable no longer needs to refer to the SELinux policy package.
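In other words, only the run-time dependency remains. A sketch of what the relevant part of such an ebuild now looks like (the actual policy package name of course differs per ebuild):

RDEPEND="selinux? ( sec-policy/selinux-${packagename} )"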

This change also means that those moving from a regular Gentoo installation to an SELinux installation will have far fewer packages to rebuild. In the past, setting USE="selinux" (through the SELinux profiles) would rebuild all packages that have a DEPEND dependency on the SELinux policy package. No more – only packages that depend on the SELinux libraries (like libselinux) or utilities get rebuilt. The rest will just pull in the proper policy package.

Posts for Friday, October 31, 2014

avatar

Using multiple priorities with modules

One of the new features of the 2.4 SELinux userspace is support for module priorities. The idea is that distributions and administrators can override a (pre)loaded SELinux policy module with another module without removing the previous one. The lower-priority module will remain in the store, but will not be active until the higher-priority module is disabled or removed again.

The “old” modules (pre-2.4) are loaded with priority 100. When policy modules with the 2.4 SELinux userspace series are loaded, they get loaded with priority 400. As a result, the following message occurs:

~# semodule -i screen.pp
libsemanage.semanage_direct_install_info: Overriding screen module at lower priority 100 with module at priority 400

So unlike the previous situation, where the older module is substituted with the new one, we now have two “screen” modules loaded; the last one gets priority 400 and is active. To see all installed modules and priorities, use the --list-modules option:

~# semodule --list-modules=all | grep screen
100 screen     pp
400 screen     pp

Older versions of modules can be removed by specifying the priority:

~# semodule -X 100 -r screen
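As far as I can tell, the same -X option can also be combined with an install, in case you want to load a module at an explicit priority of your own choosing (normally you can just rely on the 400 default):

~# semodule -X 300 -i screen.pp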

Posts for Thursday, October 30, 2014

avatar

Migrating to SELinux userspace 2.4 (small warning for users)

In a few moments, SELinux users who have the ~arch KEYWORDS set (either globally or for the SELinux utilities in particular) will notice that the SELinux userspace will upgrade to version 2.4 (release candidate 5 for now). This upgrade comes with a manual step that needs to be performed after the upgrade. The information is mentioned as a post-installation message of the policycoreutils package, and basically says that you need to execute:

~# /usr/libexec/selinux/semanage_migrate_store

The reason is that the SELinux utilities expect the SELinux policy module store (and the semanage related files) to be in /var/lib/selinux and no longer in /etc/selinux. Note that this does not mean that the SELinux policy itself is moved outside of that location, nor is the basic configuration file (/etc/selinux/config). It is what tools such as semanage manage that is moved outside that location.

I tried to automate the migration as part of the packages themselves, but this would require the portage_t domain to be able to move, rebuild and load policies, which it can’t (and to be honest, shouldn’t). Instead of augmenting the policy or making updates to the migration script as delivered by the upstream project, we currently decided to have the migration done manually. It is a one-time migration anyway.

If for some reason end users forget to do the migration, that does not mean that the system breaks or becomes unusable. SELinux still works, SELinux-aware applications still work; the only things that will fail are updates to the SELinux configuration through tools like semanage or setsebool – the latter when you want to persist boolean changes.

~# semanage fcontext -l
ValueError: SELinux policy is not managed or store cannot be accessed.
~# setsebool -P allow_ptrace on
Cannot set persistent booleans without managed policy.

If you get those errors or warnings, all that is left to do is the migration. Note in the following that there is a warning about 'else' blocks that are no longer supported: that’s okay; as far as I know (and it was mentioned on the upstream mailinglist as well) it is not something to worry about and does not have any impact.

~# /usr/libexec/selinux/semanage_migrate_store
Migrating from /etc/selinux/mcs/modules/active to /var/lib/selinux/mcs/active
Attempting to rebuild policy from /var/lib/selinux
sysnetwork: Warning: 'else' blocks in optional statements are unsupported in CIL. Dropping from output.

You can also add in -c so that the old policy module store is cleaned up. You can also rerun the command multiple times:

~# /usr/libexec/selinux/semanage_migrate_store -c
warning: Policy type mcs has already been migrated, but modules still exist in the old store. Skipping store.
Attempting to rebuild policy from /var/lib/selinux

You can manually clean up the old policy module store like so:

~# rm -rf /etc/selinux/mcs/modules

So… don’t worry – the change is small and does not break stuff. And for those wondering about CIL I’ll talk about it in one of my next posts.

Posts for Tuesday, October 21, 2014

Seek God Sooner

So, I thought I'd write something down that happened today.

For those of you who know me, I'm a pretty laid-back, easy-going type of guy.

My wife and I went to sleep quite late, mostly because we couldn't stop talking and laughing with one another (not an uncommon occurrence, unfortunately), and also unfortunately, we were woken up early by all of our fire alarms going off in unison.  There was no fire.  This also is not an uncommon occurrence (we've got to figure that one out).

Anyway, I couldn't get back to sleep after that, and due to my sleepiness, I had what turned out to be one of the most frustratingly rotten days of my life today.  My main studio and programming computer decided to have a myriad of uncommon issues, which, due to my sleepiness, were uncommonly difficult for me to solve.  I also had planned on working extra hours today on my programming project, which couldn't happen due to the various computer problems.

This lasted for about 5 hours.

After dinner, I had a recording session with a client, and thankfully, that went very well.  After the recording session, it was about 9:15pm, and I still had hours of programming work to do.

With a sigh, I sat down to start programming, and remembered that earlier this morning, after my morning prayer, I hadn't studied my scriptures as I did every morning.  So I decided to do that before programming tonight.

I can't tell you how much that one decision changed my entire demeanor.  All the frustrations melted away as I felt the Spirit of God course into my heart  as I listened to the words of General Conference.

Immediately after, I knelt down to thank God, and found myself being gently reminded that I had not done it earlier, and that if I had done it earlier, my day would have gone much, much better.

It's now easier to concentrate, get focused, and I'm ready to get down and code for the next few hours.  Happily.

I love this Gospel.  The simple truths can change lives.

Posts for Saturday, October 4, 2014

My very first commit to KDE

Hello world Planet!

My name is Matija Šuklje 1, but geeks call me Hook 2. I have been lurking around KDE and using it since its 2.x (or 1.x) times, and over the years I have mostly contributed by submitting nasty bug reports 3, suggesting crazy ideas and here and there helping translate KDE software into my mother tongue – Slovenian.

As a (very soon to be) lawyer with very limited coding skills, that is as much as I could have done for the community so far.

But in the past years I got lucky and got employed by the FSFE to lead the FSFE Legal team. Since the FLA that KDE e.V. uses was made in tight cooperation with FSFE, I finally found a way to help out the KDE community with my skills (and an excuse to go to Akademy) and held a lightning talk on how the FLA works and why KDE gearheads should sign it (video).

My very first commit to KDE

After helping with a recent local KDE translation sprint, Andrej Mernik suggested that I should ask for direct commit access to the KDE localisations SVN, so I do not have to bug him or Andrej Vernekar to commit translations for me.

So I did, and Andrej Vernekar later supported my application and shortly thereafter Víctor Blázquez welcomed me with a nice new developer package. It is great to see the KDE community so welcoming to newcomers! ☺

Excited by my new powers, as soon as time allowed, I fired up the trusty 4 Lokalize and started translating some of the packages that have been on my ToDo list for a long time now.

Just a few hiccups with my OpenPGP card setup, and the first ever commit to KDE repositories, signed with my name, was on-line. Ah, what a thrill!

Sign(ed) the FLA

Haha!

you might think,

Now we have you! Have you signed the FLA that you tell us all is such a great idea?

… and you would have every reason to ask.

And the answer is: yes, of course! I contacted KDE e.V., where Albert Astals Cid answered me; I printed the copies, signed them and sent them off just a week after my first commit!

While I was filling it out, I did realise that the document needs to be a bit easier to read and understand. So I took notes of that, and in the relatively near future I am going to try to come up with a few suggestions on how to make the FLA even better 5. This also means I would very much welcome any feedback from the wider community on the text.

hook out → I wish I had time to go to Akademy 2014 as well …see you next year!


  1. I know it is not easy to pronounce. Matija is the Slovenian equivalent of Matthias (and is pronounced the same, just drop the S). As for Šuklje, it sounds a bit like “shoe kle” in “shoe kleptomaniac”, but has nothing to do with it.

  2. On FreeNode I go under the nickname silver_hook and for other ways to get in touch, feel free to check my contacts page

  3. I have a knack for finding bugs – in digital as well as real life. One of the funnier occasions was at Akademy 2013, where I managed to find and coherently replicate a bug in one of the elevators in the place where most of the participants were staying. Together with David E. “DMaggot” Narváez we also found a workaround and submitted the bug to the local person in charge.

  4. Lokalize might slowly be in need of a few visual improvements and better documentation, but it still is an awesome tool for localisation. 

  5. Full disclaimer: The FLA is part of my work for FSFE as well as the topic of my LLM thesis. 

Planet Larry is not officially affiliated with Gentoo Linux. Original artwork and logos copyright Gentoo Foundation. Yadda, yadda, yadda.