Posts for Sunday, July 13, 2014


Anonymous edits in Hellenic Wikipedia from Hellenic Parliament IPs

Inspired by another project, “Anonymous Wikipedia edits from the Norwegian parliament and government offices”, I decided to create something similar for the Hellenic Parliament.

I downloaded the XML dump (elwiki-20140702-pages-meta-history.xml.7z) for the elwiki. The compressed file is less than 600MB, but uncompressing it yields a 73GB XML file that contains the full history of edits. I then modified a parser I found on this blog to extract the data I wanted: page title, timestamp and IP.

Then it was easy to create a list containing all the edits made from Hellenic Parliament IPs throughout the history of the Hellenic Wikipedia:
The list

Interesting edits

  1. Former Prime Minister “Κωνσταντίνος Σημίτης”
    An IP from inside the Hellenic Parliament tried to remove the following text at least three times on 17-18/02/2014. This is a link to the first edit: Diff 1.

    Για την περίοδο 1996-2001 ξοδεύτηκαν 5,2 τρις δρχ σε εξοπλισμούς. Οι δαπάνες του Β` ΕΜΠΑΕ (2001-2006) υπολογίζεται πως έφτασαν τα 6 με 7 τρις δρχ.<ref name="enet_01_08_01">[ ''To κόστος των εξοπλισμών''], εφημερίδα ”Ελευθεροτυπία”, δημοσίευση [[1 Αυγούστου]] [[2001]].</ref>Έπειτα απο τη σύλληψη και ενοχή του Γ.Καντά,υπάρχουν υπόνοιες για την εμπλοκή του στο σκάνδαλο με μίζες από Γερμανικές εταιρίες στα εξοπλιστικά,κάτι το οποίο διερευνάται απο την Εισαγγελία της Βρέμης.

    (Translation: For the period 1996-2001, 5.2 trillion drachmas were spent on armaments. Spending under the 2nd EMPAE (2001-2006) is estimated to have reached 6 to 7 trillion drachmas. After the arrest and admission of guilt of G. Kantas, he is suspected of involvement in the kickback scandal with German companies over armament procurements, which is being investigated by the Bremen prosecutor's office.)

  2. Former MP “Δημήτρης Κωνσταντάρας”
    Someone modified his biography twice. Diff Links: Diff 1 Diff 2.
  3. Former football player “Δημήτρης Σαραβάκος”
    In the following edit someone updated this player’s bio adding that he ‘currently plays in porn films’. Diff link. The same editor seems to have removed that reference later, diff link.
  4. Former MP “Θεόδωρος Ρουσόπουλος”
    Someone wanted to update this MP’s bio and remove some reference of a scandal. Diff link.
  5. The movie “Ραντεβού με μια άγνωστη”
    Claiming that the nude scenes are probably not from the actor named “Έλενα Ναθαναήλ”. Diff link.
  6. The soap opera “Χίλιες και Μία Νύχτες (σειρά)”
    Someone created the first version of the article on this soap opera. Diff Link.
  7. Politician “Γιάννης Λαγουδάκος”
    Someone edited his bio so it seemed that he would run for MP with the political party called “Ανεξάρτητοι Έλληνες”. Diff Link
  8. University professor “Γεώργιος Γαρδίκας”
    Someone edited his profile and added a link for amateur football team “Αγιαξ Αιγάλεω”. Diff Link.
  9. Politician “Λευτέρης Αυγενάκης”
    Someone wanted to fix his bio and upload a file, so he/she added a link pointing to a file on the local computer: “C:\Documents and Settings\user2\Local Settings\Temp\ΑΥΓΕΝΑΚΗΣ”. Diff link.
  10. MP “Κώστας Μαρκόπουλος”
    Someone wanted to fix his bio regarding his return to the “Νέα Δημοκρατία” political party. Diff Link.
  11. (Golden Dawn) MP “Νίκος Μιχαλολιάκος”
    Someone was trying to “fix” his bio removing some accusations. Diff Link.
  12. (Golden Dawn) MP “Ηλίας Κασιδιάρης”
    Someone was trying to fix his bio and remove various accusations and incidents. Diff Link 1, Diff Link 2, Diff Link 3.

Who made the edits?
The IP range of the Hellenic Parliament is used not only by MPs but also by people working in the parliament. Don’t rush to any conclusions…
Oh, and the IP is probably a proxy inside the Parliament.

Threat Model
Not that it matters a lot for MPs and politicians in general, but it’s quite interesting: if someone “anonymously” edits a Wikipedia article, Wikimedia stores the editor’s IP and provides it to anyone who downloads the wiki archives. If the IP range is known, or if someone has the legal authority within a country to force an ISP to reveal the owner of an IP, it is quite easy to identify the actual person behind an “anonymous” edit. But if someone creates an account to edit Wikipedia articles, Wikimedia does not publish the IPs of its users; the account database is private. To get a user’s IP, one would need to take Wikimedia to court to force them to reveal that account’s IP address. Since the full edit history of every Wikipedia article is available for anyone to download, a person is actually “more anonymous to the public” if they log in or create a (new) account before each edit than if they edit the same article without an account. That is, unless they are afraid that Wikimedia will leak or disclose their accounts’ IPs.
So depending on their threat model, people can choose whether they want to create (new) account(s) before editing an article or not :)

Similar Projects

  • Parliament WikiEdits
  • congress-edits
  • Riksdagen redigerar
  • Stortinget redigerer
  • AussieParl WikiEdits
  • anon
  • Bonus
    Anonymous edit from the “Synaspismos Political Party” (ΣΥΡΙΖΑ) address range to the “Δημοκρατική Αριστερά” political party article, changing its youth wing’s blog link to the PASOK youth wing’s blog link. Diff Link

    Posts for Wednesday, July 9, 2014


    Segmentation fault when emerging packages after libpcre upgrade?

    SELinux users might be facing failures when emerge is merging a package to the file system, with an error that looks like so:

    >>> Setting SELinux security labels
    /usr/lib64/portage/bin/ line 1112: 23719 Segmentation fault      /usr/sbin/setfiles "${file_contexts_path}" -r "${D}" "${D}"
     * ERROR: dev-libs/libpcre-8.35::gentoo failed:
     *   Failed to set SELinux security labels.

    This has been reported as bug 516608 and, after some investigation, the cause has been found. First, the quick workaround:

    ~# cd /etc/selinux/strict/contexts/files
    ~# rm *.bin

    And do the same for the other SELinux policy stores on the system (targeted, mcs, mls, …).
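The same cleanup across all policy stores can be sketched in a few lines of Python, assuming the standard /etc/selinux directory layout:

```python
import glob
import os

def purge_compiled_contexts(selinux_root="/etc/selinux"):
    """Remove the precompiled *.bin file-context files from every policy store."""
    removed = []
    pattern = os.path.join(selinux_root, "*", "contexts", "files", "*.bin")
    for path in glob.glob(pattern):
        os.remove(path)  # the SELinux tools fall back to the plain-text versions
        removed.append(path)
    return sorted(removed)
```

This walks strict, targeted, mcs, mls and any other store in one go, and returns the list of files it deleted.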

    Now, what is happening… Inside the mentioned directory, binary files exist such as file_contexts.bin. These files contain the compiled regular expressions of the non-binary files (like file_contexts). By using the precompiled versions, regular expression matching by the SELinux utilities is a lot faster. Not that it is massively slow otherwise, but it is a nice speed improvement nonetheless.

    However, when pcre updates occur, the basic structures that pcre uses internally might change. For instance, a number might switch from a signed integer to an unsigned integer. As these structures are meant to be used within a single application run, most applications have no issues with such changes. However, the SELinux utilities effectively serialize these structures and later read them back in. If the new pcre uses a changed structure, then the read-in structures are incompatible and possibly even corrupt.

    Hence the segmentation faults.
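The mismatch is easy to reproduce in miniature: a value serialized under one structure layout and read back under another silently changes meaning. Here a signed field becomes unsigned, one of the changes mentioned above:

```python
import struct

# "Old" layout writes a signed 32-bit field to disk.
blob = struct.pack("<i", -1)

# After an upgrade, the "new" layout reads the same bytes as unsigned:
# the stored -1 silently turns into 4294967295, corrupting the structure.
misread = struct.unpack("<I", blob)[0]
print(misread)  # 4294967295
```

In the real case the corruption hits pointers and offsets inside the compiled regex structures rather than a lone integer, which is why setfiles crashes outright instead of merely misbehaving.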

    To resolve this, Stephen Smalley created a patch that includes PCRE version checking. This patch is now included in sys-libs/libselinux version 2.3-r1. The package also recompiles the existing *.bin files so that the older binary files are no longer on the system. But there is a significant chance that this update will not trickle down to the users in time, so the workaround might be needed.

    I considered updating the pcre ebuilds as well with this workaround, but considering that libselinux is most likely to be stabilized faster than any libpcre bump I let it go.

    At least we have a solution for future upgrades; sorry for the noise.

    Edit: libselinux-2.2.2-r5 also has the fix included.

    Posts for Wednesday, July 2, 2014


    Multilib in Gentoo

    One of the areas in Gentoo that is seeing lots of active development is its ongoing effort to have proper multilib support throughout the tree. In the past, this support was provided through special emulation packages, but those have the (serious) downside that they are often outdated, sometimes even having security issues.

    But this active development is not because we all just started looking in the same direction. No, it’s thanks to a few developers who have put their shoulders under this effort, directing the development workload where needed and pressing other developers to help in this endeavor. And pushing is more than just creating bug reports and telling developers to do something.

    It is also about communicating, giving feedback and patiently helping developers when they have questions.

    I can only hope that other activities within Gentoo with a potentially broad impact are worked on like this as well. Kudos to all involved, as well as to all developers who have undoubtedly put numerous hours of development effort into making their ebuilds multilib-capable (I know I had to put lots of effort into it, but I find it worthwhile and a big learning opportunity).

    Posts for Monday, June 30, 2014


    D-Bus and SELinux

    After a post about D-Bus comes the inevitable related post about SELinux with D-Bus.

    Some users might not know that D-Bus is an SELinux-aware application. That means it has SELinux-specific code in it, which makes D-Bus base its behavior on the SELinux policy (and it might not necessarily honor the “permissive” flag). This code is used as an additional authentication control within D-Bus.

    Inside the SELinux policy, a dbus permission class is supported, even though the Linux kernel doesn’t do anything with this class. The class is purely for D-Bus, and it is D-Bus that checks the permissions (although work is underway to implement D-Bus in the kernel (kdbus)). The class supports two permission checks:

    • acquire_svc, which tells which domain(s) are allowed to “own” a service (whose label might, thanks to the SELinux support, be different from the domain itself)
    • send_msg, which tells which domain(s) can send messages to a service domain

    Inside the D-Bus security configuration (the busconfig XML file, remember), a service configuration might tell D-Bus that the service itself is labeled differently from the process that owns it. The default is that the service inherits the label from the domain, so when dnsmasq_t registers a service on the system bus, this service also inherits the dnsmasq_t label.

    The necessary permission checks for the sysadm_t user domain to send messages to the dnsmasq service, and for the dnsmasq service itself to register as a service, are:

    allow dnsmasq_t self:dbus { acquire_svc send_msg };
    allow sysadm_t dnsmasq_t:dbus send_msg;
    allow dnsmasq_t sysadm_t:dbus send_msg;

    For the sysadm_t domain, both rules are needed, as we usually want not only to send a message to a D-Bus service but also to receive a reply (which is also handled through a send_msg permission, but in the reverse direction).

    However, with the following XML snippet inside its service configuration file, owning a certain resource is checked against a different label:

      <associate own=""
                 context="system_u:object_r:dnsmasq_dbus_t:s0" />

    With this, the rules would become as follows:

    allow dnsmasq_t dnsmasq_dbus_t:dbus acquire_svc;
    allow dnsmasq_t self:dbus send_msg;
    allow sysadm_t dnsmasq_t:dbus send_msg;
    allow dnsmasq_t sysadm_t:dbus send_msg;

    Note that only the access for acquiring a service based on a name (i.e. owning a service) is checked based on the different label. Sending and receiving messages is still handled by the domains of the processes (actually the labels of the connections, but these are always the process domains).

    I am not aware of any policy implementation that uses a different label for owning services, and the implementation is more suited to “force” D-Bus to only allow services with a correct label. This ensures that other domains that might have enough privileges to interact with D-Bus and own a service cannot own these particular services. After all, other services don’t usually have the privileges (policy-wise) to acquire_svc a service with a different label than their own label.

    Posts for Sunday, June 29, 2014


    D-Bus, quick recap

    I’ve never fully investigated the what and how of D-Bus. I know it is some sort of IPC, but at a higher level than the POSIX IPC methods. After some reading, I think I am starting to understand how it works and how administrators can work with it, so a quick write-up is in order so I don’t forget in the future.

    There is one system bus and, for each X session of a user, also a session bus.

    A bus is governed by a dbus-daemon process. A bus itself has objects on it, which are represented through path-like constructs (like /org/freedesktop/ConsoleKit). These objects are provided by a service (application). Applications “own” such services, and identify these through a namespace-like value (such as org.freedesktop.ConsoleKit).
    Applications can send signals to the bus, or messages through methods exposed by a service. If methods are invoked (i.e., messages are sent), then the application must specify the interface (such as org.freedesktop.ConsoleKit.Manager.Stop).

    Administrators can monitor the bus through dbus-monitor, or send messages through dbus-send. For instance, the following command invokes the org.freedesktop.ConsoleKit.Manager.Stop method provided by the object at /org/freedesktop/ConsoleKit owned by the service/application at org.freedesktop.ConsoleKit:

    ~$ dbus-send --system --print-reply --dest=org.freedesktop.ConsoleKit /org/freedesktop/ConsoleKit org.freedesktop.ConsoleKit.Manager.Stop

    What I found most interesting, however, was querying the buses. You can do this with dbus-send, although it is much easier to use tools such as d-feet or qdbus.

    To list current services on the system bus:

    ~# qdbus --system

    In the output, the numeric names are generated by D-Bus itself, while the namespace-like strings are the names taken by the services. To see what is provided by a particular service:

    ~# qdbus --system org.freedesktop.PolicyKit1

    The methods made available through one of these:

    ~# qdbus --system org.freedesktop.PolicyKit1 /org/freedesktop/PolicyKit1/Authority
    method QDBusVariant org.freedesktop.DBus.Properties.Get(QString interface_name, QString property_name)
    method QVariantMap org.freedesktop.DBus.Properties.GetAll(QString interface_name)
    property read uint org.freedesktop.PolicyKit1.Authority.BackendFeatures
    property read QString org.freedesktop.PolicyKit1.Authority.BackendName
    property read QString org.freedesktop.PolicyKit1.Authority.BackendVersion
    method void org.freedesktop.PolicyKit1.Authority.AuthenticationAgentResponse(QString cookie, QDBusRawType::(sa{sv} identity)
    method void org.freedesktop.PolicyKit1.Authority.CancelCheckAuthorization(QString cancellation_id)
    signal void org.freedesktop.PolicyKit1.Authority.Changed()

    Access to methods and interfaces is governed through XML files in /etc/dbus-1/system.d (or session.d depending on the bus). Let’s look at /etc/dbus-1/system.d/dnsmasq.conf as an example:

    <!DOCTYPE busconfig PUBLIC
     "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
     "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
    <busconfig>
            <policy user="root">
                    <allow own=""/>
                    <allow send_destination=""/>
            </policy>
            <policy context="default">
                    <deny own=""/>
                    <deny send_destination=""/>
            </policy>
    </busconfig>
    The configuration says that only the Linux root user can ‘own’ (register) the service/application name, and that only root can send messages to it. The default policy denies everyone else both owning and sending. As a result, only the Linux root user can interact with this service.

    D-Bus also supports starting services on demand when a method is invoked (instead of having the service running all the time). This is configured through *.service files inside /usr/share/dbus-1/system-services/.
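A minimal activation file follows this pattern (the service and binary names here are hypothetical):

```ini
[D-BUS Service]
Name=org.example.SomeService
Exec=/usr/bin/some-service-daemon
User=root
```

When a client sends a message to org.example.SomeService and no process currently owns that name, the daemon launches the listed executable as the given user and then delivers the message.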

    We, the lab rats

    The algorithm constructing your Facebook feed is one of the most important aspects of Facebook’s business. Making sure that you see all the things you are interested in while skipping the stuff you don’t care about is key to keeping you engaged and interested in the service. On the other hand, Facebook needs to understand how you react to certain types of content to support its actual business (making money from ads or “boosted” posts).

    So it’s no surprise that Facebook is changing and tweaking the algorithm every day. And every new iteration is released to a small group of users to check how it changes people’s behavior and engagement, to see if it’s a better implementation than the algorithm used before. Human behavior boiled down to a bunch of numbers.

    The kind and amount of data that Facebook sits on is every social scientist’s dream: social connections, interactions, engagement metrics, and deeply personal content, all wrapped up in one neat, structured package with a bow on top. And Facebook is basically the only entity with full access: there is no comparable open set of data points from which to study and understand human behavior.

    So the obvious happened. Facebook and some scientists worked together to study human behavior. In a nutshell: they picked almost 700,000 Facebook users and changed the way their feeds worked. Some got more “negative” posts, some more “positive” posts, and the scientists measured how that changed people’s behavior (by looking at how the language in their own posts changed). Result: the mood of the things you read does change your own behavior and feelings congruently. Read positive stuff and feel better; read negative stuff and feel worse. This is news because we previously knew this only from direct social interaction, not from interaction mediated through the Internet (the result might not surprise people who believe that the Internet and the social interactions in it are real, though).
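As a toy illustration of the measurement side: scoring a post’s language comes down to counting emotion words. The study relied on the LIWC word lists; the miniature lists below are made up for the example.

```python
# Made-up miniature word lists; the actual study used LIWC's.
POSITIVE = {"happy", "great", "love", "good", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "bad", "terrible"}

def emotion_score(post):
    """Share of positive minus negative words; > 0 reads positive overall."""
    words = post.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)
```

Averaging such scores over a user’s posts before and after the feed manipulation is essentially the “language change” the researchers reported.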

    Many people have criticized this study, for very different reasons, some valid, some not.

    The study is called scientifically unethical: the subjects didn’t know that their behavior was being monitored and that they were in an experiment. It is often necessary to leave the actual goal of an experiment somewhat in the dark to keep the results untainted, but it is a scientific standard to tell people that they are in an experiment, and to tell them what was going on after the experiment concludes. (With experiments that change people’s behavior this deeply, you would usually even consider adding psychological exit counselling for participants.) This critique is fully legitimate, and it’s something the scientists will have to answer for. Not Facebook, because they tweak their algorithm every day with people’s consent (EULA etc.), but that’s nothing the scientists can fall back on. What happened is a certain breach of trust: the deal is that Facebook can play with the algorithm as much as they want, as long as they try to provide me with more relevance. They changed their end of the bargain (not even with bad intentions, but they did it intransparently), which taints people’s (and my) relationship with the company slightly.

    From a purely scientific standpoint, the study is somewhat problematic. Not because of its approach, which looks solid after reading the paper, but because no one but the authors can reproduce the results. It’s closed-source science, so it cannot really be peer reviewed. Strictly speaking, we can only consider the paper an idea, because the data could basically be made up (not that I want to imply that, but we cannot check anything). Not good science, though sadly that is the way many studies are done.

    Most of the blame lands on the scientists. They should have known that their approach was wrong. The potential data was seductive, but they should have forced Facebook to do this more transparently. The best way would have been an opt-in: “Scientists want to study human interaction, so they ask for access to certain statistics of your feed. They will at no point be able to read all your posts or your complete feed. Do you want to participate? [Yes] [No]”. At the very least, people who were part of the study should have been notified after it concluded, with a way to remove their data set from the study as a sort of penalty for the breach of trust.

    Whenever you work with people and change their lives, you run risks. What happens if one of the people whose feed you make more negative is depressive? What will that do to him or her? The scientists must have thought about that but decided not to care. There are many words we could find for that kind of behavior: disgusting, assholeish, sociopathic.

    It’s no surprise that Facebook didn’t catch this issue, because tweaking their feed is what they do all day. And among all their rhetoric, their users aren’t the center of their attention. We could put the bad-capitalism stamp of disapproval on this thought and move on, but it does show something Facebook needs to learn: users might not pay, but without them Facebook is nothing. There is a lot of lock-in, but when trust in Facebook’s sincerity gets damaged too much, you open yourself up to competition and to people leaving. There is still quite some trust, as the growth in users and interaction in spite of all the bad “oh noez, Facebook will destroy your privacy and life and kill baby seals!” press shows. But that’s not a given.

    Companies sitting on these huge amounts of social data have not only their shareholders to look out for but also their users. They need to establish ways for users to participate and to keep the companies honest: build structures to get feedback from users, or form groups representing users and their interests. That’s the actual thing Facebook can and should learn from this.

    For a small scientific step, almost everybody lost: the scientists showed an alarming lack of awareness and ethics, Facebook an impressive lack of understanding of how important trust is, and the people using Facebook lost because their days might have been ruined for an experiment. Doesn’t look like a good exchange to me. But we shouldn’t let this put a black mark on the study of social behavior online.

    Studying how people interact is important if we want to better understand what we do and how and why we do it. We want systems to be built in a way that suits us and helps us lead better, more fulfilling lives. We want technology to enrich our worlds. And for that we need to understand how we perceive and interact with it.

    In a perfect world we’d have an open data set that anyone can analyze. Sadly we don’t, so we’ll have to work with the companies that have access to that kind of data. But as scientists we need to make sure that, no matter how great the insights we generate might be, we treat people with the dignity they deserve, that we respect their rights, and that we stay honest and transparent.

    I’d love to say that we need to develop these rules, because that would take some of the blame from the scientists involved, making the scientific community look less psychopathic. Sadly, these rules and best practices have existed for ages. And it’s alarming to see how many people involved in this project didn’t know or respect them. That is the main lesson from this case: we need to take much better care in teaching scientists the ethics of science. Not just how to calculate and process data, but how to treat others.

    Title image by: MoneyBlogNewz


    Posts for Friday, June 27, 2014

    The body as a source of data

    The quantified self is starting to penetrate beyond the tiny bubble of science enthusiasts and nerds. More health-related devices connect to the cloud (think scales and, soon, smartwatches, heart-rate monitors and similar wearables). Modern smartphones have built-in step counters or use GPS data to track movement and infer, from the path and the speed, the mode of transportation as well as the calories probably burned. Apple’s new HealthKit and Google’s new Google Fit APIs are pushing the gathering of data about one’s own body into the spotlight and potentially toward a more mainstream demographic.

    Quantifying oneself isn’t always perceived in a positive light. Where one group sees ways to better understand their own bodies and how they influence their feelings and lives, others interpret the projection of body functions down to digital data as the mechanization of a natural thing, something diminishing the human being; as humans kneeling under the force of capitalism and its implied necessity to optimize one’s employability and “worth”; and finally as a dangerous tool giving companies too much access to data about us and how we live and feel. What if our health insurance knew how little we sleep, how little we exercise and what bad dieting habits we entertain?

    Obviously there are holistic ways to think about one’s own body. You can watch yourself in the mirror for 5 minutes every morning to see if everything is OK. You can meditate and try to “listen into your body”. But seeing how many negative influences on one’s long-term health cannot really be felt until it is too late, a data-centric approach seems a reasonable path toward detecting dangerous (or simply unpleasant) patterns and habits.

    The reason why metrics in engineering are based on numbers is that this model of the world makes comparing two states simple: “I used to have a foo of 4, now my foo is 12.” Regardless of what that means, it’s easy to see that foo has increased, which can be translated into actions if necessary (“eat less stuff containing foo”). Even projecting feelings onto numbers can yield very useful results: “After sleeping for 5 hours my happiness throughout the day seems to average around 3; after sleeping 7 hours it averages around 5” can give a person useful input when deciding whether to sleep more or not, regardless of what exactly a happiness of “3” or “5” means in comparison to others.
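The sleep/happiness example above boils down to bucketing one metric by another, which is only a few lines of code over a hypothetical self-tracking log:

```python
from collections import defaultdict

def mood_by_sleep(entries):
    """Average the mood ratings recorded after each amount of sleep."""
    buckets = defaultdict(list)
    for hours_slept, mood in entries:
        buckets[hours_slept].append(mood)
    return {hours: sum(moods) / len(moods) for hours, moods in buckets.items()}

# Hypothetical log of (hours slept, mood rating) pairs.
log = [(5, 3), (5, 4), (7, 5), (7, 6), (7, 4)]
print(mood_by_sleep(log))  # {5: 3.5, 7: 5.0}
```

The averages are only meaningful relative to each other, which is exactly the point: the comparison, not the absolute number, drives the decision.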

    A human body is a complex machine. Chemical reactions and electric currents happen throughout it at mind-blowing speed. And every data set, no matter how great the instrument used to collect it, represents only a tiny fraction of one perspective on one part of what constitutes a living body. Even if you aggregate all the data about a human being that we can monitor and record these days, all you have is just a bunch of data. Good enough to mine for patterns suggesting certain traits, illnesses or properties, but never enough to say that you actually know what makes a person tick.

    But all that data can be helpful to people for very specific questions. Tracking food intake and physical activity can help a person control their weight if they want to. Correlating sleep and performance can help people figure out what kind of schedule they should sleep on to feel as good as possible. And sometimes these numbers can simply help you measure your own progress, such as whether you managed to beat your 10k record.

    With all the devices and data monitors we surround ourselves with, gathering huge amounts of data becomes trivial. And everyone could store that data on their own hard drives and develop and implement algorithms to analyse and use this source of information. So why do we need companies that will just use the data to send us advertising in exchange for hosting it?

    It comes back to the question of whether telling people to host their own services and data is cynical. As I already wrote, I do believe it is. Companies with defined standard APIs can help individuals who don’t have the skills (or the money to pay people with those skills) learn more about their bodies and how they influence their lives. They can help make that mass of data manageable, queryable, actionable. Simply usable. That doesn’t mean that there isn’t a better way, that an open platform to aggregate one’s digital body representation wouldn’t be better. But we don’t have that, especially not for mainstream consumption.

    Given these thoughts, I find recent comments on the dangers and evils of letting one of the big companies handle the aggregation of data about your body somewhat classist. I believe you should be able to understand your body better even if you can’t code or design algorithms (or pay others to do that for you individually). The slippery-slope argument that if the data exists somewhere it will very soon be used to trample on your rights and ruin your day doesn’t just rob certain people of the chance to improve their lives or gain new insights; it actually reinforces a pattern where people with fewer resources get the short end of the stick when it comes to health and life expectancy.

    It’s always easy to tell people not to use some data-based product because of dangers to their privacy or similar. It’s especially easy when you already own whatever that service would provide for you. “Don’t use Facebook” is only a half-earnest argument if you (because of other social or political networks) do not need that kind of networking to participate in a debate or connect to others. It’s a deeply paternalist point of view and carries a certain lack of empathy.

    Companies aren’t usually all that great, just as the capitalist system we live in isn’t great. “The market is why we can’t have nice things”, as Mike Rugnetta put it in this week’s Idea Channel. But at least with companies you know their angle (hint: it’s their bottom line). You know that they want to make money, and that they offer a service “for free” usually means you pay with attention (through ads). There’s no evil conspiracy, no man with a cat on his lap saying “No, Mr. Bond, I want you to DIE!”.

    But given that a company lets you access and export all the data you pour into its service, I can only urge you to consider whether the benefit the service gives you isn’t worth that handful of ads. Companies aren’t evil demons with magic powers. They are sociopathic and greedy, but that’s it.

    The belief that a company “just knows too much” if it gathers data about your body in one place overestimates the truth that data carries. Companies don’t own your soul, nor can they cast spells on you. The data you emit isn’t just a liability, something you need to keep locked up and avoid. It can also be your own tool, your light in the darkness.

    Header image by: SMI Eye Tracking


    Posts for Tuesday, June 24, 2014

    “The Open-Source Everything Revolution” and the boxology syndrome

    Yesterday @kunstreich pointed me to a rather interesting article in the Guardian, published under the ambitious title “The open source revolution is coming and it will conquer the 1% – ex CIA spy“. We’ll pause for a second while you read the article.

    For those unwilling to read it, or with a limited amount of time available, here’s my executive summary. Robert David Steele, who worked for the CIA for quite a while, at some point wanted to introduce more open-source practices into the intelligence community. He realized that the whole secret-tech-and-process thing didn’t scale, and that gathering all those secret and protected pieces of information was mostly not worth the effort when there’s so much data out there in the open. He also figured out that our current western societies aren’t doing so well: the distribution of wealth and power is messed up, and companies have, with help from governments, created a system where they privatize the commons and every possible profit while having the public pay for most of the losses. Steele, who’s obviously a very well-educated person, now wants to make everything open – open source software, open governments, open data, “open society”1 – in order to fix our society and ensure a better future:


    Open Source Everything (from the Guardian)

    Steele’s vision sounds charming: when there is total knowledge and awareness, problems can be easily detected and fixed. Omniscience as the tool for a perfect world. This actually fits quite well into the intelligence-agency mindset: “We need all the information to make sure nothing bad will happen. Just give us all the data and you will be safe.” And Steele does not want to abolish intelligence agencies; he wants to make them transparent and open (the question remains whether they can then still be considered intelligence agencies by our common definition).

    But there are quite a few problems with Steele’s revolutionary manifesto. It basically suffers from “Boxology Syndrome”.

    The boxology syndrome is a Déformation professionnelle that many people in IT and modelling suffer from. It’s characterized by the belief that every complex problem and system can be sufficiently described by a bunch of boxes and connecting lines. It happens in IT because the object-oriented design approach teaches exactly that kind of thinking: Find the relevant terms and items, make them classes (boxes) and see how they connect. Now you’ve modeled the domain and the problem solution. That was easy!

    But life tends to be messy and confusing; the world doesn’t seem to like living in boxes, just as people don’t.

    Open source software is brilliant. I love how my linux systems2 work transparently and allow me to change how they work according to my needs. I love how I can dive into existing apps and libraries to pick pieces I want to use for other projects, how I can patch and mix things to better serve my needs. But I am in the minority.


    By: velkr0

    Steele uses the word “open” as a silver bullet for … well … everything. He rehashes the ideas from David Brin’s “The Transparent Society” but seems to be working very hard not to use the word transparent – which in many cases is what he is actually going for – as if avoiding the connotations the word carries when applied to people and societies: in a somewhat obvious attempt to openwash, he reframes Brin’s ideas by attaching the generally positively connoted word “open”.

    But open data and open source software do not magically make everyone capable of seizing these newfound opportunities. Some people have the skills, the resources, the time and the interest to get something out of it; some can pay people with the skills to do what they want to get done. And many, many people are just left alone, possibly swimming in a digital ocean way too deep and vast to see any kind of ground or land. Steele ignores the privilege of the educated and skilled few, or somewhat naively hopes that out of generosity they’ll cover the needs of those unable to serve their own. Which could totally happen – but do we really want to bet the future on the selflessness and generosity of everyone?

    Transparency is not a one-size-fits-all solution. We require different levels of transparency from the government, from the companies we interact with, or from the person serving our dinner. Some entities might offer more information than required (which is especially true for people, who can legally demand very little transparency from each other but share a lot of information for their own personal goals and interests).

    Steele’s ideas – which are really seductive in their simplicity – don’t scale, because he ignores the differences in power, resources and influence between social entities, and because he assumes that – just because you know everything – you will make the “best” decision.

    There is a lot of social value in having access to a lot of data. But data, algorithms and code are just a small part of what can create good decisions for society. There hardly ever is the one best solution. We have to talk and exchange positions and haggle to find an accepted and legitimized solution.

    Boxes and lines just don’t cut it.

    Title image by: Simona

    1. whatever that is supposed to mean
    2. I don’t own any computer with proprietary operating systems except for my gaming consoles


    Posts for Sunday, June 22, 2014


    Chroots for SELinux enabled applications

    Today I had to prepare a chroot jail (thank you grsecurity for the neat additional chroot protection features) for a SELinux-enabled application. However, “just” making a chroot was insufficient: the application needed access to /sys/fs/selinux. Of course, granting access to all of /sys is not something I like to see for a chroot jail.

    Luckily, all other accesses are not needed, so I was able to create a static /sys/fs/selinux directory structure in the chroot, and then just mount the SELinux file system on that:

    ~# mount -t selinuxfs none /var/chroot/sys/fs/selinux

    In hindsight, I probably could just have created a /selinux location, as that location, although deprecated, is still checked by the SELinux libraries.

    Anyway, there was a second requirement: access to /etc/selinux. Luckily it was purely for read operations, so I first contemplated copying the data and doing a chmod -R a-w /var/chroot/etc/selinux, but then considered a bind mount:

    ~# mount -o bind,ro /etc/selinux /var/chroot/etc/selinux

    Alas, bad luck – the read-only flag is ignored during the mount, and the bind mount is still read-write. An article I found online pointed me to the solution: a remount is needed afterwards to enable the read-only state:

    ~# mount -o remount,ro /var/chroot/etc/selinux

    Great! And because my brain isn’t what it used to be, I’m just writing a quick blog post for future reference ;-)

    Posts for Sunday, June 15, 2014


    Gentoo Hardened, June 2014

    Friday the Gentoo Hardened project had its monthly online meeting to talk about the progress within the various tools, responsibilities and subprojects.

    On the toolchain part, Zorry mentioned that GCC 4.9 and 4.8.3 will have SSP enabled by default. The hardened profiles will still have a different SSP setting than the default (so yes, there will still be differences between the two) but this will help in securing the Gentoo default installations.

    Zorry is also working on upstreaming the PIE patches for GCC 4.10.

    Next to the regular toolchain, blueness also mentioned his intentions to launch a Hardened musl subproject which will focus on the musl C library (rather than glibc or uclibc) and hardening.

    On the kernel side, two recent kernel vulnerabilities in the vanilla Linux kernel (a pty race and a privilege escalation through the futex code) dominated the discussions on IRC recently. Some versions of the hardened kernels are still available in the tree, but the more recent (non-vulnerable) kernels have proven not to be as stable as we’d hoped.

    The pty race vulnerability is possibly not applicable to hardened kernels thanks to grsecurity, due to its protection against accessing kernel symbols.

    The latest kernels should not be used with KSTACKOVERFLOW on production systems though; there are some issues reported with virtio network interface support (on the guests) and ZFS.

    Also, on PaX support, the install-xattr saga continues. The new wrapper that blueness worked on dropped some code that kept the working directory, so knowledge of the $S directory was “lost”. This is now fixed. All that is left is to have the wrapper included and stabilized.

    On the SELinux side, it was the usual set of progress: policy stabilization and userland application and library stabilization. The latter is waiting a bit because of the multilib support that’s now being integrated into the ebuilds as well (and thus has a larger set of dependencies to go through), but no show-stoppers there. Also, the SELinux documentation portal on the wiki was briefly mentioned.

    Also, the policycoreutils vulnerability has been worked around so it is no longer applicable to us.

    On the hardened profiles, we had a nice discussion on enabling capabilities support (and move towards capabilities instead of setuid binaries), which klondike will try to tackle during the summer holidays.

    As I didn’t take notes during the meeting, this post might miss a few items (and I forgot to enable logging as well), but as Zorry sends out the meeting logs later anyway, you can read up on the details there ;-)

    Posts for Tuesday, June 3, 2014

    Why and how to shave with shaving oil and DE safety razors

    So, I’ve been shaving with shaving oil and safety razors 1 for a while now and decided that it’s time I help my fellow geeks by spreading some knowledge about this method (which is sadly still poorly documented online). Much of the method below consists of hacks assembled from different sources and lots of trial and error.

    Why shave with oil and DE safety razors

    First of all, shaving with oldskool DE razors is not as much about being hip and trendy 2 as it is about optimising. Although, I have to admit, it still looks pretty cool ☺

    There are several reasons why shaving with oil and DE razors beats modern foam and system multiblade razors hands down:

    • they’ve got multiple uses – shaving oil replaces both the shaving foam/soap and the aftershave (and pre-shaving balm); DE blades are used in other tools and, well, they’re proper blades for crying out loud!;
    • the whole set takes a lot less space when travelling – one razor, a puny pack of blades and a few tens of ml of oil is all you need to carry around 3;
    • you get a better shave – once you start shaving properly, you get fewer burns and cuts and a smoother shave as well;
    • it’s more ecological – DE blades contain fewer different materials and are easier to recycle, and all the shaving oils I have found so far are Eco certified;
    • and last, but not least these days, it’s waaaaaaay cheaper (more on that in a future blog post).

    History and experience (skip if you’re not interested in such bla bla)

    I got my first shaving oil4 about two years ago, when I started to travel more. My wonderful girlfriend bought it for me, because a 30 ml flask took a lot less space than a tin of shaving foam and a flask of aftershave. The logic behind this decision was:

    “Well, all the ancient people managed to have clean shaves with oil, my beard can’t be that much different than the ones they had in the past.”

    And, boy, was I in for a nice surprise!

    I used to get inflammations, pimples and in-grown hairs quite often, so I never shaved very close – but when shaving with oil, there was none of that! After one or two months of trial and error with different methods and my own ideas, I finally figured out how to properly use it and left the shaving soaps, gels and foams for good.

    As I shaved with oil for a while, I noticed that all “regular modern” system multiblade razors have strips of an aloe vera gel that work well with shaving foam, gel and soap, but occasionally stick to your face if you’re using shaving oil. This is true no matter how many or how few blades the razor head has – I just couldn’t find razors without it.

    That’s why I started thinking about the classic DE safety razors and eventually got a plastic Wilkinson Sword Classic for a bit over 5 €. Surprisingly, after just a few minuscule cuts, the improvement over the system multiblade razors became quite apparent. I haven’t touched my old Gillette Mach3 ever since. The Wilkinson Sword Classic is by far not a very good DE razor, but it’s cheap and easy to use for beginners. If you decide you like this kind of shave, I would warmly recommend that you upgrade to a better one.

    For example, recently I got myself a nice Edwin Jagger razor with their DE8 head, and I love it. It’s a full-metal, chromed, closed-comb razor, which means it has another bar below the blade, so it’s easier and safer to use than a more aggressive open-comb version.

    How to Shave with oil and DE razors

    OK, first of all, don’t panic! – they’re called “safety razors” for a reason. As opposed to straight razors, the blade is enclosed, so even if you manage to cut yourself, you can’t get a deep cut. This is even truer for closed-comb razors.

    1. Wash your face to remove dead skin and fat. It’s best if you shave just after taking a shower.

    2. Get moisture into the hairs. Beard hair is as hard as copper wire while it is dry; wet, it’s quite soft. The best way is to apply a towel soaked in very hot water to your face a few times for ten seconds or so – the hot water also opens up the pores. If you are travelling and don’t have hot water, just make sure those hairs are wet. As it’s a good idea to have your razor up to temperature as well, I usually put hot water in the basin and leave the razor in it while I towel my face.

    3. Put a few drops of shaving oil into the palm of your hand (5-6 is enough for me) and with two fingers apply it to all the places on your face that you want to shave. Any oil you may have left on your hands, you can safely rub into your hair (on top of your head) – it’ll do them good and you won’t waste the oil.

    4. Splash some more (hot) water on your face – the fact that water and oil don’t mix well is the reason why your blade glides so well. Also, during the shave, whenever you feel your razor doesn’t glide that well anymore, just applying some water is usually enough to fix it.

    5. First shave twice in the direction of the grain – to get a feeling for the right angle, take the handle of the razor in your fingers and lean the flat of the head onto your cheek, so the handle is at 90° to your cheek; then reduce the angle until you get to a position where shaving feels comfortable. It’s also easier to shave moving your whole arm than just the wrist. Important: DO NOT apply pressure – safety razors expose enough blade that with a well-balanced razor just the weight of the head produces almost enough pressure for a good shave (as opposed to system multiblade razors). Pull in the direction of the handle with slow strokes – on a thicker beard you will need to make shorter strokes than on a thinner one. To get a better shave, make sure to stretch the skin where you are currently shaving. If the razor gets stuck with hair and oil, just swish it around in the water to clean it.

    6. Splash your face with (hot) water again and now shave across the grain. This gives you a closer shave5.

    7. Splash your face with cold water to get rid of any remaining hairs and to close the pores. Get a drop or two of shaving oil and a few drops of water into your palm and mix it with two fingers. Rub the oil-water mixture into your face instead of using aftershave and leave your face to dry – the essential oils in the shaving oil enrich and disinfect your skin.

    8. Clean your razor under running water to remove hair and oil and towel-dry it (don’t rub the blade!). When I take it apart to change blades, I clean the razor with water and rub it with the towel to keep it shiny.

    Update: I learned that it is better to shave twice with the grain and once across than once with it and twice across. Update: I figured out the trick of rubbing the excess oil into your hair.

    Enjoy shaving ☺

    It is a tiny bit more work than shaving with system multiblade razors, but it’s well worth it! For me, the combination of a quality DE safety razor and shaving oil turned shaving from a bothersome chore into a morning ritual I look forward to.

    …and in time, I’m sure you’ll find (and share) your own method as well.

    Update: I just stumbled upon the great blog post “How Intellectual Property Destroyed Men’s Shaving” and thought it would be great to mention it here.

    hook out → see you well shaven at Akademy ;)

    1. Double-edged razors, as our granddads used to shave with. 

    2. Are oldskool razors hip and trendy right now anyway? I haven’t noticed them to be so. 

    3. I got myself a nice leather Edwin Jagger etui for carrying the razor and two packs of blades; it measures 105 x 53 x 44 mm (for comparison: the ugly Gillette Mach3 plastic holder measures 148 x 57 x 28 mm and doesn’t offer much protection when travelling). 

    4. L’Occitane Cade (wild juniper) shaving oil – and I’m still happy with that one. 

    5. Some claim that for a really close shave you need to shave against the grain as well, but I found that to be too aggressive for my beard. Also, I have heard this claim only from people shaving with soap. 

    Posts for Saturday, May 31, 2014


    Visualizing constraints

    SELinux constraints are an interesting way to implement specific, well, constraints on what SELinux allows. Most SELinux rules that users come in contact with are purely type oriented: allow something to do something against something. In fact, most of the SELinux rules applied on a system are such allow rules.

    The restriction of such allow rules is that they only take into consideration the type of the contexts that participate. This is the type enforcement part of the SELinux mandatory access control system.

    Constraints on the other hand work on the user, role and type part of a context. Consider this piece of constraint code:

    constrain file all_file_perms (
      u1 == u2
      or u1 == system_u
      or u2 == system_u
      or t1 != ubac_constrained_type
      or t2 != ubac_constrained_type
    );
    This particular constraint definition tells the SELinux subsystem that, when an operation against a file class is performed (any operation, as all_file_perms is used, but individual, specific permissions can be listed as well), this is denied if none of the following conditions are met:

    • The SELinux user of the subject and object are the same
    • The SELinux user of the subject or object is system_u
    • The SELinux type of the subject does not have the ubac_constrained_type attribute set
    • The SELinux type of the object does not have the ubac_constrained_type attribute set

    If none of the conditions are met, then the action is denied, regardless of the allow rules set otherwise. If at least one condition is met, then the allow rules (and other SELinux rules) decide if an action can be taken or not.
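    To see how these four conditions combine, here is a hypothetical Python sketch of the constraint’s decision logic (the function name and the boolean “constrained” flags are my own illustration of the attribute check, not actual SELinux code or API):

```python
# Hypothetical model of the UBAC constraint above: access passes the
# constraint if ANY of the listed conditions holds; the regular allow
# rules still have the final say afterwards.

def ubac_allows(subj_user, obj_user, subj_constrained=True, obj_constrained=True):
    """Return True if the constraint permits the access to proceed."""
    return (subj_user == obj_user            # u1 == u2
            or subj_user == "system_u"       # u1 == system_u
            or obj_user == "system_u"        # u2 == system_u
            or not subj_constrained          # t1 != ubac_constrained_type
            or not obj_constrained)          # t2 != ubac_constrained_type

# Two different, fully constrained SELinux users: the constraint denies.
print(ubac_allows("staff_u", "user_u"))                          # False
# Same SELinux user on both sides: the constraint steps aside.
print(ubac_allows("user_u", "user_u"))                           # True
# Object type lacks the ubac_constrained_type attribute: allowed too.
print(ubac_allows("staff_u", "user_u", obj_constrained=False))   # True
```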

    Constraints are currently difficult to query though. There is seinfo --constrain, which gives all constraints using Reverse Polish Notation – not something easily readable by users:

    ~$ seinfo --constrain
    constrain { sem } { create destroy getattr setattr read write associate unix_read unix_write  } 
    (  u1 u2 ==  u1 system_u ==  ||  u2 system_u ==  ||  t1 { screen_var_run_t gnome_xdg_config_home_t admin_crontab_t 
    links_input_xevent_t gpg_pinentry_tmp_t virt_content_t print_spool_t crontab_tmp_t httpd_user_htaccess_t ssh_keysign_t 
    remote_input_xevent_t gnome_home_t mozilla_tmpfs_t staff_gkeyringd_t consolekit_input_xevent_t user_mail_tmp_t 
    chromium_xdg_config_t mozilla_input_xevent_t chromium_tmp_t httpd_user_script_exec_t gnome_keyring_tmp_t links_tmpfs_t 
    skype_tmp_t user_gkeyringd_t svirt_home_t sysadm_su_t virt_home_t skype_home_t wireshark_tmp_t xscreensaver_xproperty_t 
    consolekit_xproperty_t user_home_dir_t gpg_pinentry_xproperty_t mplayer_home_t mozilla_plugin_input_xevent_t mozilla_plugin_tmp_t 
    mozilla_xproperty_t xdm_input_xevent_t chromium_input_xevent_t java_tmpfs_t googletalk_plugin_xproperty_t sysadm_t gorg_t gpg_t 
    java_t links_t staff_dbusd_t httpd_user_ra_content_t httpd_user_rw_content_t googletalk_plugin_tmp_t gpg_agent_tmp_t 
    ssh_agent_tmp_t sysadm_ssh_agent_t user_fonts_cache_t user_tmp_t googletalk_plugin_input_xevent_t user_dbusd_t xserver_tmpfs_t 
    iceauth_home_t qemu_input_xevent_t xauth_home_t mutt_home_t sysadm_dbusd_t remote_xproperty_t gnome_xdg_config_t screen_home_t 
    chromium_xproperty_t chromium_tmpfs_t wireshark_tmpfs_t xdg_videos_home_t pulseaudio_input_xevent_t krb5_home_t 
    pulseaudio_xproperty_t xscreensaver_input_xevent_t gpg_pinentry_input_xevent_t httpd_user_script_t gnome_xdg_cache_home_t 
    mozilla_plugin_tmpfs_t user_home_t user_sudo_t ssh_input_xevent_t ssh_tmpfs_t xdg_music_home_t gconf_tmp_t flash_home_t 
    java_home_t skype_tmpfs_t xdg_pictures_home_t xdg_data_home_t gnome_keyring_home_t wireshark_home_t chromium_renderer_xproperty_t 
    gpg_pinentry_t mozilla_t session_dbusd_tmp_t staff_sudo_t xdg_config_home_t user_su_t pan_input_xevent_t user_devpts_t 
    mysqld_home_t pan_tmpfs_t root_input_xevent_t links_home_t sysadm_screen_t pulseaudio_tmpfs_t sysadm_gkeyringd_t mail_home_rw_t 
    gconf_home_t mozilla_plugin_xproperty_t mutt_tmp_t httpd_user_content_t mozilla_xdg_cache_t mozilla_home_t alsa_home_t 
    pulseaudio_t mencoder_t admin_crontab_tmp_t xdg_documents_home_t user_tty_device_t java_tmp_t gnome_xdg_data_home_t wireshark_t 
    mozilla_plugin_home_t googletalk_plugin_tmpfs_t user_cron_spool_t mplayer_input_xevent_t skype_input_xevent_t xxe_home_t 
    mozilla_tmp_t gconfd_t lpr_t mutt_t pan_t ssh_t staff_t user_t xauth_t skype_xproperty_t mozilla_plugin_config_t 
    links_xproperty_t mplayer_xproperty_t xdg_runtime_home_t cert_home_t mplayer_tmpfs_t user_fonts_t user_tmpfs_t mutt_conf_t 
    gpg_secret_t gpg_helper_t staff_ssh_agent_t pulseaudio_tmp_t xscreensaver_t googletalk_plugin_xdg_config_t staff_screen_t 
    user_fonts_config_t ssh_home_t staff_su_t screen_tmp_t mozilla_plugin_t user_input_xevent_t xserver_tmp_t wireshark_xproperty_t 
    user_mail_t pulseaudio_home_t xdg_cache_home_t user_ssh_agent_t xdg_downloads_home_t chromium_renderer_input_xevent_t cronjob_t 
    crontab_t pan_home_t session_dbusd_home_t gpg_agent_t xauth_tmp_t xscreensaver_tmpfs_t iceauth_t mplayer_t chromium_xdg_cache_t 
    lpr_tmp_t gpg_pinentry_tmpfs_t pan_xproperty_t ssh_xproperty_t xdm_xproperty_t java_xproperty_t sysadm_sudo_t qemu_xproperty_t 
    root_xproperty_t user_xproperty_t mail_home_t xserver_t java_input_xevent_t user_screen_t wireshark_input_xevent_t } !=  ||  t2 { 
    screen_var_run_t gnome_xdg_config_home_t admin_crontab_t links_input_xevent_t gpg_pinentry_tmp_t virt_content_t print_spool_t 
    crontab_tmp_t httpd_user_htaccess_t ssh_keysign_t remote_input_xevent_t gnome_home_t mozilla_tmpfs_t staff_gkeyringd_t 
    consolekit_input_xevent_t user_mail_tmp_t chromium_xdg_config_t mozilla_input_xevent_t chromium_tmp_t httpd_user_script_exec_t 
    gnome_keyring_tmp_t links_tmpfs_t skype_tmp_t user_gkeyringd_t svirt_home_t sysadm_su_t virt_home_t skype_home_t wireshark_tmp_t 
    xscreensaver_xproperty_t consolekit_xproperty_t user_home_dir_t gpg_pinentry_xproperty_t mplayer_home_t 
    mozilla_plugin_input_xevent_t mozilla_plugin_tmp_t mozilla_xproperty_t xdm_input_xevent_t chromium_input_xevent_t java_tmpfs_t 
    googletalk_plugin_xproperty_t sysadm_t gorg_t gpg_t java_t links_t staff_dbusd_t httpd_user_ra_content_t httpd_user_rw_content_t 
    googletalk_plugin_tmp_t gpg_agent_tmp_t ssh_agent_tmp_t sysadm_ssh_agent_t user_fonts_cache_t user_tmp_t 
    googletalk_plugin_input_xevent_t user_dbusd_t xserver_tmpfs_t iceauth_home_t qemu_input_xevent_t xauth_home_t mutt_home_t 
    sysadm_dbusd_t remote_xproperty_t gnome_xdg_config_t screen_home_t chromium_xproperty_t chromium_tmpfs_t wireshark_tmpfs_t 
    xdg_videos_home_t pulseaudio_input_xevent_t krb5_home_t pulseaudio_xproperty_t xscreensaver_input_xevent_t 
    gpg_pinentry_input_xevent_t httpd_user_script_t gnome_xdg_cache_home_t mozilla_plugin_tmpfs_t user_home_t user_sudo_t 
    ssh_input_xevent_t ssh_tmpfs_t xdg_music_home_t gconf_tmp_t flash_home_t java_home_t skype_tmpfs_t xdg_pictures_home_t 
    xdg_data_home_t gnome_keyring_home_t wireshark_home_t chromium_renderer_xproperty_t gpg_pinentry_t mozilla_t session_dbusd_tmp_t 
    staff_sudo_t xdg_config_home_t user_su_t pan_input_xevent_t user_devpts_t mysqld_home_t pan_tmpfs_t root_input_xevent_t 
    links_home_t sysadm_screen_t pulseaudio_tmpfs_t sysadm_gkeyringd_t mail_home_rw_t gconf_home_t mozilla_plugin_xproperty_t 
    mutt_tmp_t httpd_user_content_t mozilla_xdg_cache_t mozilla_home_t alsa_home_t pulseaudio_t mencoder_t admin_crontab_tmp_t 
    xdg_documents_home_t user_tty_device_t java_tmp_t gnome_xdg_data_home_t wireshark_t mozilla_plugin_home_t 
    googletalk_plugin_tmpfs_t user_cron_spool_t mplayer_input_xevent_t skype_input_xevent_t xxe_home_t mozilla_tmp_t gconfd_t lpr_t 
    mutt_t pan_t ssh_t staff_t user_t xauth_t skype_xproperty_t mozilla_plugin_config_t links_xproperty_t mplayer_xproperty_t 
    xdg_runtime_home_t cert_home_t mplayer_tmpfs_t user_fonts_t user_tmpfs_t mutt_conf_t gpg_secret_t gpg_helper_t staff_ssh_agent_t 
    pulseaudio_tmp_t xscreensaver_t googletalk_plugin_xdg_config_t staff_screen_t user_fonts_config_t ssh_home_t staff_su_t 
    screen_tmp_t mozilla_plugin_t user_input_xevent_t xserver_tmp_t wireshark_xproperty_t user_mail_t pulseaudio_home_t 
    xdg_cache_home_t user_ssh_agent_t xdg_downloads_home_t chromium_renderer_input_xevent_t cronjob_t crontab_t pan_home_t 
    session_dbusd_home_t gpg_agent_t xauth_tmp_t xscreensaver_tmpfs_t iceauth_t mplayer_t chromium_xdg_cache_t lpr_tmp_t 
    gpg_pinentry_tmpfs_t pan_xproperty_t ssh_xproperty_t xdm_xproperty_t java_xproperty_t sysadm_sudo_t qemu_xproperty_t 
    root_xproperty_t user_xproperty_t mail_home_t xserver_t java_input_xevent_t user_screen_t wireshark_input_xevent_t } !=  ||  t1 
    <empty set> ==  || );

    The RPN notation, however, isn’t the only reason why constraints are difficult to read. The other reason is that seinfo does not know (anymore) about the attributes used to generate the constraints. As a result, we get a huge list of all possible types that match a common attribute – but we no longer know which attribute that was.
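    To illustrate the notation itself, here is a small, hypothetical Python sketch that turns such postfix (RPN) expressions back into infix form; it handles only plain tokens, not the brace-delimited type sets shown in the output above:

```python
# Hypothetical helper: convert seinfo's postfix (RPN) constraint notation
# into the infix form used in policy sources. Binary operators pop two
# operands off a stack; everything else is pushed as an operand.

BINARY_OPS = {"==", "!=", "||", "&&"}

def rpn_to_infix(tokens):
    stack = []
    for tok in tokens:
        if tok in BINARY_OPS:
            right = stack.pop()
            left = stack.pop()
            stack.append(f"({left} {tok} {right})")
        else:
            stack.append(tok)
    return stack.pop()

# "u1 u2 ==  u1 system_u ==  ||" becomes a readable expression:
print(rpn_to_infix("u1 u2 == u1 system_u == ||".split()))
# ((u1 == u2) || (u1 == system_u))
```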

    Not everyone can read the source files in which the constraints are defined, so I hacked together a script that generates a GraphViz dot file based on the seinfo --constrain output for a given class and permission, optionally limiting the huge list of types to a set that the user (err, that is me ;-) is interested in.

    For instance, to generate a graph of the constraints related to file reads, limited to the user_t and staff_t types if huge lists would otherwise be shown:

    ~$ seshowconstraint file read "user_t staff_t" >
    ~$ dot -Tsvg -O

    This generates the following graph:

    If you’re interested in the (ugly) script that does this, you can find it on my github location.
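    The actual script is on the author’s GitHub; purely as an illustration of the idea, a minimal sketch that emits GraphViz dot for a set of constraint conditions could look like this (the function and graph layout are my own invention, not the real script):

```python
# Hypothetical sketch: render constraint conditions as a GraphViz digraph,
# one box per condition, all OR-ed into a single decision node.

def constraint_to_dot(klass, perm, conditions):
    lines = [f'digraph "constraint_{klass}_{perm}" {{']
    lines.append('  decision [label="allowed if ANY holds", shape=diamond];')
    for i, cond in enumerate(conditions):
        lines.append(f'  c{i} [label="{cond}", shape=box];')
        lines.append(f'  c{i} -> decision;')
    lines.append('}')
    return "\n".join(lines)

dot = constraint_to_dot("file", "read", [
    "u1 == u2",
    "u1 == system_u",
    "u2 == system_u",
    "t1 != ubac_constrained_type",
    "t2 != ubac_constrained_type",
])
print(dot)  # render with: dot -Tsvg <file>.dot -O
```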

    There are some patches lying around to support naming constraints and taking the name up into the policy, so that denials based on constraints can at least give the user feedback about which constraint is holding an access back (rather than just a denial that the user can’t explain). Hopefully such patches can be made available in the kernel and userspace utilities soon.

    Posts for Tuesday, May 27, 2014

    Blocked by GMail

    Our increased dependency on centralised solutions – even in systems that are created to be decentralised – is becoming alarming.

    This week’s topic is GMail1. And if you have not yet, do read Mako’s and Karsten’s blog posts.

    What is currently happening to me is that, for some reason, GMail stopped accepting mail from my private e-mail address, claiming I am a likely spammer. In case you wondered: I am not sending out spam, I would be very surprised if I had a virus on my regularly updated GNU/Linux laptop, and even more so if my e-mail provider’s server were abused.

    When everyone you know with a GMail account suddenly sends you replies in the following manner, you realise just how dependent on an outside service provider you are in your communication, even if you are not their client:

    <>: host[2a00:1450:4013:c01::1b] said: 550-5.7.1
        [2a02:d68:500::122      12] Our system has detected that this message
        550-5.7.1 is likely unsolicited mail. To reduce the amount of spam sent to
        550-5.7.1 Gmail, this message has been blocked. Please visit 550-5.7.1 for 550
        5.7.1 more information. u13si15106334wiv.49 - gsmtp (in reply to end of
        DATA command)

    The problem of not being their client is even worse, as then you do not have enough leverage, and often not even an easy way to contact them with such issues.

    On a not too unrelated note, e-mail is a complex beast2 and while deprecating it would take an immense amount of work as well as quite a long time, it is interesting to see new technology popping up to create a new and better Internet as well as old technology like GnuPG improving to protect us in the digital world.

    While this rant was triggered by my trouble with GMail, do note that it is not just Google out there that we have to be wary of – in other areas of our communication we need to aim for decentralisation as well. SecuShare3 provides a nice comparison of current and planned technology.

    hook out → catching up with e-mail backlogs :P

    Update: It seems that the whole mail server got blocked by GMail. The issue is now finally solved by migrating the whole mail server and with that creating new SSL/TLS certificates.

    1. And I am not talking about top-posting, full-quoting and other major violations of the general e-mail netiquette that GMail users regularly make. 

    2. As two examples, let us name Facebook and Microsoft. On the server side, Facebook’s recent withdrawal from offering e-mail service was anticipated by some IETF members, as Facebook has not attended any e-mail related conferences and workshops, where apparently you get to fully understand the interactions. On the client side, Microsoft’s Outlook is already infamous for ignoring major parts of the e-mail standard (e.g. quotation marks, attachments, …). 

    3. SecuShare is a project based on GNUnet and PSYC that is well worth checking out. 

    Posts for Monday, May 26, 2014

    USB passthrough to a VM, via GUI only

    It sure has gotten easier to add USB devices to VMs with virt-manager and its nice UI.

    Posts for Monday, May 19, 2014

    KDE Community: be liberal with ourselves, be harsh with others

    (yes, the title is a tribute to the robustness principle) Censored: in quite an aggressive move, I’ve been censored by KDE. My blog has been removed from kdeplanet. The only information I have so far is a mail (and this): SVN commit 1386393 by jriddell: Disable Thomas Capricelli's blog for breaching Planet KDE guidelines CCMAIL:orzel@xxxxx […]

    Posts for Sunday, May 18, 2014


    On Graphite, Whisper and InfluxDB

    Graphite, and the storage Achilles heel

    Graphite is a neat timeseries metrics storage system that comes with a powerful querying api, mainly due to the whole bunch of available processing functions.
    For medium to large setups, the storage aspect quickly becomes a pain point. Whisper, the default graphite storage format, is a simple storage format, using one file per metric (timeseries).
    • It can't keep all file descriptors in memory so there's a lot of overhead in constantly opening, seeking, and closing files, especially since usually one write comes in for all metrics at the same time.
    • Using the rollups feature (different data resolutions based on age) causes a lot of extra IO.
    • The format is also simply not optimized for writes. Carbon, the storage agent that sits in front of whisper has a feature to batch up writes to files to make them more sequential but this doesn't seem to help much.
    • Worse, due to various implementation details the carbon agent is surprisingly inefficient and cpu-bound. People often run into cpu limitations before they hit the io bottleneck. Once the writeback queue hits a certain size, carbon will blow up.
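    To get a feel for the scale of the one-file-per-metric problem, a quick back-of-the-envelope sketch (the operation counts and intervals are illustrative assumptions, not measurements):

```python
# Rough arithmetic: with one file per metric, each flush interval touches
# every file (roughly: open, seek, write, close). Numbers are made up
# purely for illustration.

def whisper_ops_per_second(metrics, interval_s, ops_per_write=4):
    """File operations per second for a naive one-file-per-metric store."""
    return metrics * ops_per_write / interval_s

for metrics in (1_000, 100_000, 1_000_000):
    ops = whisper_ops_per_second(metrics, interval_s=10)
    print(f"{metrics:>9} metrics -> {ops:>9.0f} file ops/s")
```

Rollups multiply this further, since each retention level is written separately.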
    Common recommendations are to run multiple carbon agents and to run graphite on SSD drives.
    If you want to scale out across multiple systems, you can get carbon to shard metrics across multiple nodes, but the complexity can get out of hand, and maintaining a cluster where nodes get added, fail, get phased out, or need recovery involves a lot of manual labor, even though carbonate makes this easier. This is a path I simply don't want to go down.

    These might be reasonable solutions based on the circumstances (often based on short-term local gains), but I believe as a community we should solve the problem at its root, so that everyone can reap the long term benefits.

    In particular, running Ceres instead of whisper is only a slight improvement that suffers from most of the same problems. I don't see any good reason to keep working on Ceres, other than perhaps that it's a fun exercise. This probably explains the slow pace of development.
    However, many mistakenly believe Ceres is "the future".
    Switching to LevelDB seems much more sensible but IMHO still doesn't cut it as a general purpose, scalable solution.

    The ideal backend

    I believe we can build a backend for graphite that
    • can easily scale from a few metrics on my laptop in power-save mode to millions of metrics on a highly loaded cluster
    • supports nodes joining and leaving at runtime and automatically balancing the load across them
    • assures high availability and heals itself in case of disk or node failures
    • is simple to deploy. think: just run an executable that knows which directories it can use for storage, elasticsearch-style automatic clustering, etc.
    • has the right read/write optimizations. I've never seen a graphite system that is not write-focused, so something like LSM trees seems to make a lot of sense.
    • can leverage cpu resources (e.g. for compression)
    • provides a more natural model for phasing out data. Optional, runtime-changeable rollups. And an age limit (possibly, but not necessarily round robin)
    While we're at it, pub-sub for realtime analytics would be nice too, especially if it allows using the same functions as the query api.
    And it should get rid of the metric name restrictions, such as the inability to use dots or slashes in names.
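    On the LSM remark above: the core idea behind log-structured merge trees is to absorb writes in memory and flush them as immutable sorted runs, turning random writes into sequential ones. A toy sketch for illustration only, not how any real backend is implemented:

```python
import bisect

class TinyLSM:
    """Toy log-structured merge store: writes land in an in-memory
    memtable; when it fills up, it is flushed as an immutable sorted
    run (a cheap sequential write). Reads check the memtable first,
    then the runs from newest to oldest."""
    def __init__(self, memtable_limit=4):
        self.memtable = {}
        self.runs = []      # each run: sorted list of (key, value)
        self.limit = memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            self.runs.append(sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.runs):   # newest run wins
            i = bisect.bisect_left(run, (key,))
            if i < len(run) and run[i][0] == key:
                return run[i][1]
        return None

db = TinyLSM()
for ts in range(6):
    db.put(("servers.web1.load", ts), 0.1 * ts)
print(db.get(("servers.web1.load", 3)))
```

    Real LSM stores add a write-ahead log and background compaction of the runs, but the write pattern above is why this family of structures suits metrics workloads.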


    There are a lot of databases that you could hook up to graphite: riak, hdfs based (opentsdb), Cassandra based (kairosdb, blueflood, cyanite), etc. Some of these are solid and production-ready, and would make sense depending on what you already have and have experience with. I'm personally very interested in playing with Riak, but decided to choose InfluxDB as my first victim.

    InfluxDB is a young project that will need time to build maturity, but is on track to meet all my goals very well. In particular, installing it is a breeze (no dependencies), it's specifically built for timeseries (not based on a general purpose database), which allows them to do a bunch of simplifications and optimizations, is write-optimized, and should meet my goals for scalability, performance, and availability well. And they're in NYC so meeting up for lunch has proven to be pretty fruitful for both parties. I'm pretty confident that these guys can pull off something big.

    Technically, InfluxDB is a "timeseries, metrics, and analytics" database with use cases well beyond graphite and even technical operations. As with the alternative databases, graphite-like behavior, such as rollup management and automatically picking the series in the most appropriate resolution, is something to be implemented on top of it. Although you never know, it might end up being natively supported.

    Graphite + InfluxDB

    InfluxDB developers plan to implement a whole bunch of processing functions (akin to graphite, except they can do locality optimizations) and add a dashboard that talks to InfluxDB natively (or use Grafana), which means at some point you could completely swap graphite for InfluxDB. However, I think for quite a while, the ability to use the Graphite api, combine backends, and use various graphite dashboards is still very useful. So here's how my setup currently works:
    • carbon-relay-ng is a carbon relay in Go. It's a pretty nifty program to partition and manage carbon metrics streams. I use it in front of our traditional graphite system, and have it stream - in realtime - a copy of a subset of our metrics into InfluxDB. This way I basically have our unaltered Graphite system, and in parallel to it, InfluxDB, containing a subset of the same data.
      With a bit more work it will be a high performance alternative to the python carbon relay, allowing you to manage your streams on the fly. It doesn't support consistent hashing: CH should be part of the strategy of a highly available storage system (see requirements above), and using CH in the relay still results in a poor storage system, so there's no need for it.
    • I contributed the code to InfluxDB to make it listen on the carbon protocol. So basically, for the purpose of ingestion, InfluxDB can look and act just like a graphite server. Anything that can write to graphite can now write to InfluxDB. (This assumes the plain-text protocol; it doesn't support the pickle protocol, which I think is a thing to avoid anyway, because almost nothing supports it and you can't debug what's going on.)
    • graphite-api is a fork/clone of graphite-web, stripped of needless dependencies, stripped of the composer. It's conceived for many of the same reasons behind graphite-ng (graphite technical debt, slow development pace, etc) though it doesn't go to such extreme lengths and for now focuses on being a robust alternative for the graphite server, api-compatible, trivial to install and with a faster pace of development.
    • That's where graphite-influxdb comes in. It hooks InfluxDB into graphite-api, so that you can query the graphite api, but using data in InfluxDB. It should also work with the regular graphite, though I've never tried it. (I have no incentive to bother with that, because I don't use the composer. And I think it makes more sense to move the composer into a separate project anyway.)
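    As an aside, the carbon plain-text protocol these pieces speak is just one line per datapoint over a TCP socket. A minimal sketch (host and port are assumptions; 2003 is the conventional carbon plain-text port):

```python
import socket
import time

def carbon_line(metric, value, timestamp=None):
    # One datapoint in the carbon plain-text protocol:
    # "<metric path> <value> <unix timestamp>\n"
    if timestamp is None:
        timestamp = int(time.time())
    return "%s %s %d\n" % (metric, value, timestamp)

def send_metric(host, port, metric, value):
    # Anything speaking this protocol (carbon-cache, carbon-relay-ng,
    # or InfluxDB's carbon listener) accepts such lines.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(carbon_line(metric, value).encode("ascii"))

# e.g. send_metric("localhost", 2003, "servers.web1.load", 0.42)
print(carbon_line("servers.web1.load", 0.42, 1400000000))
```

    The protocol's simplicity is exactly why so many tools can feed a graphite-compatible endpoint.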
    With all these parts in place, I can run our dashboards next to each other, one running on graphite with whisper, one on graphite-api with InfluxDB, and simply check whether the returned data matches up and which dashboard loads graphs faster. Later I might do more extensive benchmarking and acceptance testing.

    If all goes well, I can make carbon-relay-ng fully mirror all data, make graphite-api/InfluxDB the primary, and turn our old graphite box into a live "backup". We'll need to come up with something for rollups and deletions of old data (although it looks like by itself influx is already more storage efficient than whisper too), and I'm really looking forward to the InfluxDB team building out the function api, having the same function api available for historical querying as well as realtime pub-sub. (my goal used to be implementing this in graphite-ng and/or carbon-relay-ng, but if they do this well, I might just abandon graphite-ng)

    To be continued..

    Posts for Thursday, May 15, 2014

    Sony, meet the EFF

    Picture: “1984…meet DRM” by Josh Bonnain (CC-BY)

    Today the Internet was dominated (at least in Europe) by two main topics1:

    The first topic was the fallout of a legal debate. The European Court of Justice decided to rule in favor of a “right to be forgotten” regarding search engines. A Spanish gentleman had, after unsuccessfully trying to get a Spanish newspaper to unpublish an older story about the bankruptcy of a company he had owned, sued Google to remove all pointers to that still existing article from its index. The court claimed that a person’s right to privacy would in general trump all other potential rights (such as Google’s freedom of expression to link to an undisputedly true article). The Washington Post has a more detailed post on this case. I have also written about the hazards of the “right to be forgotten” a few times in the past, so I’m not gonna repeat myself.

    The second important story today had more of a technical spin: Mozilla, the maker of the popular standards-compliant and open source web browser Firefox, announced that it would implement the DRM2 standard that the W3C proposed. DRM means that a content provider can decide what you, the user, can do with the content they made available to you: maybe you can only watch it on one specific device, or you may not save a copy, or you can only read it once. It’s about giving a content provider control over the use of data that they released into the wild. The supporters of civil liberties and the open web from the Electronic Frontier Foundation (EFF) were not exactly happy, lamenting “It’s official: the last holdout for the open web has fallen.”

    What do these stories have to do with each other?

    Both deal with control. The DRM scheme Mozilla adopted (following the commercial browser vendors such as Apple, Google and Microsoft) is supposed to define a standardized way for content providers to control the use of data.3 The EU court order is supposed to give European people the legal tools to control their public image in our digital age.

    That made me wonder. Why do so many privacy and civil rights organizations condemn technical DRM with such fury? Let’s do a quick thought experiment.

    Let’s assume that the DRM would actually work flawlessly. The code of the DRM module – while not being open source – would have been audited by trusted experts and would be safe for the user to run. So now we have the infrastructure to actually enforce the legal rights of the content providers: If they only want you to view their movie Thursdays between 8 and 11 PM that’s all you can do. But if we defined the DRM standard properly we as individuals could use that infrastructure as well! We could upload a picture to Facebook and hardwire into it that people can only see it once. Or that they cannot download it to their machines. We can attach that kind of rights management to the data we send out to a government agency or to amazon when buying a bunch of stuff. We do gain real, tangible control over our digital representation.

    Privacy in its interpretation as the right to control what happens with the data you emit into the world is structurally very similar to the kind of copyright control that the movie studios, music publishers or software companies want: It’s about enforcing patterns of behavior with data no longer under your direct control.

    Having understood this, it seems strange to me that NGOs and entities fighting for the right of people to control their digital image do not actually demand standardized DRM. There is always the issue of the closed-source blob that people have to run on their machines, which right now is never audited properly and is therefore much more of a security risk than a potential asset. Also, the standard as it is right now4 doesn’t seem to make it simple for people to actually enforce their own rights and define their own restrictions. But all those issues sound a lot like implementation details, like bugs in the first release of the specification.

    We have reached somewhat of a paradox. We demand that the individual be able to enforce their rights, even when that means hiding things that are actually legal to publish (by making them invisible to the big search engines). But when other entities try the same, we can’t cry foul fast enough.

    The rights of the individual (and of other legal entities for that matter, even though I find treating companies as people ludicrous) always clash with the rights of other individuals. My right to express myself clashes with other people’s right to privacy. There is no way to fully express all those rights; we have to balance them against each other constantly. But there also is no simple hierarchy of individual rights. Privacy isn’t the superright that some people claim it to be, and it shouldn’t be. Even if the EU Court of Justice seems to believe so.

    The EFF and Sony might really have more goals in common than they think. If I were the EFF, that would seriously make me think.

    1. at least in my filter bubble, YMMV
    2. Digital Rights Management
    3. Admittedly by breaking one of Mozilla’s promises: While the programming interface to the DRM software module is open source, the DRM module itself isn’t and cannot be to make it harder for people wanting to get around the DRM.
    4. keep in mind that I am not a member of the W3C or an expert in that matter


    Posts for Wednesday, May 14, 2014

    My Testimony

    So...  I thought I'd be going to bed, but here I am.

    When the Spirit directed me to come down and write down my testimony, I wasn't (and still am not) quite sure what to write.  This certainly isn't due to a lack of testimony compared to other times I've done this, but rather quite the opposite.  It is because my testimony seems to be in new territory altogether.

    After reading the book "Journey To The Veil", I felt quite deeply that the principle of following the Spirit at any cost was true.  I have always known that God would never lead me astray.  I didn't, however, always identify the Holy Spirit correctly.

    John Pontius' explanation of how to decipher the Spirit seemed both simple and elegant, so I decided to try it.  Instead of writing it all down, I simply follow any prompting I have that seems like it could possibly lead to good.  So far, it's been working quite well.  Although it's sometimes hard to break away from the "bah, why am I thinking THAT?" mentality and not cast the prompting away as something I don't need to do, it becomes quite obvious, after obeying it, that it came from Christ.

    For example, I've been fasting much, much more often than I ever have in my life, and although drastic overt changes aren't immediately apparent, I feel the change inside me, and the change in my thinking is quite dramatic.  No real examples come to mind to illustrate this, but it's as if thinking along the lines of pure truth is as natural as any thought has ever been to me.  Feeling loved, patient, and confident comes naturally while in the mode of "following the Spirit at any cost".  Thoughts on how to better parent my children through difficulty come, and when I act upon them, the result is usually a happier, more harmonious home.  Loving my wife and showing it comes more naturally.

    It might sound a bit weird, but I feel more authentically "me".

    As far as my testimony goes, I'm more sure now than I ever have been that God is real, and Christ is my personal savior.  I have an intense testimony of the Holy Ghost, and how He wants to help direct my life for good.  I know the Holy Ghost is capable of directing our lives for good, to the same degree that we are willing to follow His counsel.

    I feel deeply that God, Christ, and the Holy Ghost want the best for me, and I feel that my life is being directed, almost on an hour-by-hour basis right now.  Some of the promptings I feel are so, so quiet, it's almost as if they are passing thoughts, which will disappear into nothingness if not immediately grabbed and acted upon.  I'm still learning what is the voice of the Spirit and what is the voice of Jason, but I feel I'm progressing.

    One thing I've been learning, and this might be only for me, is that I haven't regretted acting upon any prompting that has come in, whether it be mine, or The Spirit's.  This may be due to various interpretations of those voices, but I feel that it is because the Holy Ghost has been prompting and guiding me a lot.  This may or may not be due to increasing amounts of promptings, but I rather think it is because of my attitude and willingness to follow the promptings.  My guess is that after following a prompting, the Holy Ghost will be more able to guide me, due to my previous obedience.   (Isn't this stuff great!?  The super cool thing is - anyone can do this!  Any baptized member of Christ's church has access to *continuous* guidance of the Spirit.  I guess He just gets tired of prompting us when we don't obey.  Try obeying *all* of the promptings and see what happens.)

    I also have a testimony that God will never lead us astray.  Never.  We might lead ourselves astray and blame God for it, but it's up to us to learn how to properly interpret, and then follow His guidance.  Quitting my job was one of the hardest things I've ever done, but even harder was the lesson which immediately followed.  That was, "Sometimes God lets us suffer a bit in order to help our faith grow."  If you don't know what I'm talking about, simply read my journal entries from the past year.  There were some entries which were very, very difficult to write, reflecting experiences I had which made no sense whatsoever, within the context of my then-understanding-of-what-faith-was.  It's hard for me to admit that I murmured.  I questioned, and I doubted.  But....  and this I'm just now realizing...

    God allowed me to doubt.  He allowed me to question, and yell, and become frustrated with Him.  He also loved me enough to put me in a situation where He knew I would be led back to relying upon his grace to pull me through.  I'm not sure, but I think that if I were to be put through the past year over again, but do so while having a steady job, with enough money, I might have had a greater chance at falling into disbelief, rather than clinging onto faith like it was the last thing I had to hold on to.  Oh, I'm constrained to proclaim that God is good!  Hosannah unto God in the highest!  He truly has snatched me from an everlasting hell, and seen in His great mercy to allow me to learn through suffering and trials, the goodness with which he entreats mankind.

    When the heavens open, if only for a second, and allow intelligence and understanding to flow down to me, I am in awe at how much grace God truly has, and at the same time, am in awe at my own nothingness.  With God, I am capable of anything.  In and of myself, I am nothing, and barely capable of drawing my own breath (because God has given it to me.)

    Thank you Heavenly Father for prompting me to come and write.  My testimony has grown, and hopefully yours (whoever is reading this) has too.

    Posts for Monday, May 12, 2014


    Revamped our SELinux documentation

    In the move to the Gentoo wiki, I have updated and revamped most of our SELinux documentation. The end result can be seen through the main SELinux page. Most of the content is below this page (as subpages).

    We start with a new introduction to SELinux article which goes over a large set of SELinux’s features and concepts. Next, we cover the various concepts within SELinux; these are mostly the same SELinux features, explained in more depth. Then we go on to the user guides. We start of course with the installation of SELinux on Gentoo and then cover the remaining administrative topics within SELinux (user management, handling AVC denials, label management, booleans, etc.).

    The above is most likely sufficient for the majority of SELinux users. A few more expert-specific documents are provided as well (some of them still work in progress, but I didn’t want to wait to get some feedback) and there is also a section specific for (Gentoo) developers.

    Give it a review and tell me what you think.

    Posts for Saturday, May 10, 2014


    SMTP over Hidden Services with postfix

    More and more privacy experts are nowadays calling on people to move away from the email service provider giants (gmail, yahoo!, microsoft, etc.) and urging them to set up their own email services, to “decentralize”. This brings up many other issues though, one of which is that if only a small group of people use a certain email server, then even if they use TLS, it’s relatively easy for someone passively monitoring (email) traffic to correlate who (from one server) is communicating with whom (from another server). Even if the connection and the content are protected by TLS and GPG respectively, some people might feel uncomfortable if a third party knew that they are actually communicating (well, these people had better not use email, but let’s not get carried away).

    This post is about sending SMTP traffic between two servers on the Internet over Tor, that is without someone being able to easily see who is sending what to whom. IMHO, it can be helpful in some situations to certain groups of people.

    There are numerous posts on the Internet about how you can Torify all the SMTP connections of a postfix server. The problem with this approach is that most exit nodes are blacklisted by RBLs, so it’s very probable that the emails sent will either not reach their target or will get marked as spam. Another approach is to create hidden services and make users send emails to each other at their hidden service domains, e.g. username@a2i4gzo2bmv9as3avx.onion. This is quite uncomfortable for users and can never get adopted.

    There is yet another approach though: the communication could happen over Tor hidden services that real domains are mapped to.

    Both sides need to run a Tor client:
    aptitude install tor torsocks

    The setup is the following: the postmaster on the receiving side sets up a Tor hidden service for their SMTP service (receiver). This is easily done on his server (server-A) with the following lines in the torrc (a HiddenServicePort needs an accompanying HiddenServiceDir; the path below is only an example):

    HiddenServiceDir /var/lib/tor/smtp/
    HiddenServicePort 25 25

    Let’s call this HiddenService-A (abcdefghijklmn12.onion). He then needs to notify other postmasters of this hidden service.

    The postmaster on the sending side (server-B) needs to create 2 things, a torified SMTP service (sender) for postfix and a transport map that will redirect emails sent to domains of server-A to HiddenService-A.

    Steps needed to be executed on server-B:
    1. Create /usr/lib/postfix/smtp_tor with the following content:

    #!/bin/sh
    usewithtor /usr/lib/postfix/smtp "$@"

    2. Make it executable
    chmod +x /usr/lib/postfix/smtp_tor

    3. Edit /etc/postfix/ and add a new service entry
    smtptor unix - - - - - smtp_tor

    4. If you don’t already have a transport map file, create /etc/postfix/transport with the following content (otherwise just add the following to your transport maps file). The real domains hosted on server-A go in the left column; and below are placeholders:

    .onion              smtptor:         smtptor:[abcdefghijklmn12.onion]         smtptor:[abcdefghijklmn12.onion]

    5. If you don’t already use transport maps, edit /etc/postfix/ and add the following:
    transport_maps = hash:/etc/postfix/transport

    6. run the following:
    postmap /etc/postfix/transport && service postfix reload

    Well, that’s about it. Now every email sent from a user of server-B to one of server-A’s domains will actually get sent over Tor to server-A’s hidden service. Since hidden services are usually mapped to, it will bypass the usual sender restrictions. Depending on the setup of the receiver it might even evade spam detection software, so beware… If both postmasters follow the above steps, then all emails sent between users of server-A and users of server-B will be sent anonymously over Tor.
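    For the curious, the way postfix resolves a transport table like the one in step 4 can be approximated in a few lines of Python. This is a simplified illustration only (real postfix does this lookup internally and its matching semantics have more detail); the example.* domains are placeholders:

```python
def transport_lookup(domain, table):
    """Roughly how postfix resolves transport_maps: try the full
    recipient domain first, then progressively shorter ".suffix"
    keys, so a ".onion" entry catches every onion address."""
    if domain in table:
        return table[domain]
    parts = domain.split(".")
    for i in range(1, len(parts)):
        key = "." + ".".join(parts[i:])
        if key in table:
            return table[key]
    return None  # fall through to default SMTP delivery

# Mirrors step 4; example.com / example.org stand in for server-A's domains.
table = {
    ".onion": "smtptor:",
    "example.com": "smtptor:[abcdefghijklmn12.onion]",
    "example.org": "smtptor:[abcdefghijklmn12.onion]",
}
print(transport_lookup("example.com", table))
print(transport_lookup("abcdefghijklmn12.onion", table))
print(transport_lookup("gmail.com", table))
```

    Mail for unlisted domains resolves to nothing and is delivered over plain SMTP as usual; only the mapped domains are diverted through the smtptor service.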

    There is nothing really new in this post, but I couldn’t find any other posts describing such a setup. Since it requires both sides to actually do something for things to work, I don’t think it can ever be used widely, but it’s still yet another way to take advantage of Tor and Hidden Services.

    • Can hidden services scale to support hundreds or thousands of connections, e.g. from a mailing list? Who knows…
    • This type of setup needs the help of big fishes (large independent email providers like Riseup) to protect the small fishes (your own email server). So a new problem arises, bootstrapping, and I’m not really sure this problem has any elegant solution. The more servers use this setup though, the more useful it becomes against passive adversaries trying to correlate who communicates with whom.
    • The above setup works better when there is more than one hidden service running on the receiving side, so a passive adversary won’t really know that the incoming traffic is SMTP, e.g. when you also run a (busy) HTTP server as a hidden service on the same machine.
    • Hey, where did the MX record lookup go?

    Trying it
    If anyone wants to try it, you can send me an email using voidgrz25evgseyc.onion as the Hidden SMTP Service (in the transport map).


    Posts for Friday, May 9, 2014


    Dropping sesandbox support

    A vulnerability in seunshare, part of policycoreutils, came to light recently (through bug 509896). The issue is within libcap-ng actually, but the specific situation in which the vulnerability can be exploited is only available in seunshare.

    Now, seunshare is not built by default on Gentoo. You need to define USE="sesandbox", which I implemented as an optional build because I see no need for the seunshare command and the SELinux sandbox (sesandbox) support. Upstream (Fedora/RedHat) calls it sandbox, which Gentoo translates to sesandbox as it collides with the Gentoo sandbox support otherwise. But I digress.

    The build of the SELinux sandbox support is optional, mostly because we have no direct reason to support it. There are no Gentoo users that I’m aware of that use it. It is used to start an application in a chroot-like environment, based on Linux namespaces and a specific SELinux policy called sandbox_t. The idea isn’t that bad, but I’d rather focus on proper application confinement and full system enforcement support (rather than specific services). The SELinux sandbox makes a bit more sense when the system supports unconfined domains (and users are in the unconfined_t domain), but Gentoo focuses on strict policy support.

    Anyway, this isn’t the first vulnerability in seunshare. In 2011, another privilege escalation vulnerability was found in the application (see bug 374897).

    But having a vulnerability in the application (or its interaction with libcap-ng) doesn’t mean an exploitable vulnerability. Most users will not even have seunshare, and those that do have it will not be able to call it if they are running SELinux in strict mode or have USE="-unconfined" set for the other policies. If USE="unconfined" is set and you run mcs, targeted or mls (which isn’t default either; the default is strict), and your users are still mapped to the regular user domains (user_t, staff_t or even sysadm_t), then seunshare doesn’t work, as the SELinux policy prevents its behavior before the vulnerability is triggered.

    Assuming you do have a targeted policy with users mapped to unconfined_t and you have built policycoreutils with USE="sesandbox" or you run in SELinux in permissive mode, then please tell me if you can trigger the exploit. On my systems, seunshare fails with the message that it can’t drop its privileges and thus exits (instead of executing the exploit code as it suggested by the reports).

    Since, as I mentioned, most users don’t use the SELinux sandbox, and because I can’t even get it to work (regardless of the vulnerability), I decided to drop support for it from the builds. That also allows me to introduce the new userspace utilities more quickly, as I no longer need to refactor the code to switch from sandbox to sesandbox.

    So, policycoreutils-2.2.5-r4 and policycoreutils-2.3_rc1-r1 are now available which do not build seunshare anymore. And now I can focus on providing the full 2.3 userspace that has been announced today.

    Ceci n’est pas une pipe

    Graffiti at the back door of Cyberpipe, hidden until 2013

    This is the first blog post in a series about the migration and new plans of my hackerspace alma mater – Kiberpipa / Cyberpipe.

    In this post I hope to quench the thirst of many people by explaining shortly where Kiberpipa is now and how we got there.

    Later in the series I plan to write more about the current community, its goals and ideas … stay tuned!

    Where we are now?


    That is the big question now, is it not? ☺

    Well, I am extremely happy to let you know that we have a signed lease contract, have already moved into our new home and have almost finished with brushing up the place!

    As of 1. April 2014 1, our new address is:

    Gosposvetska cesta 2
    SI-1000 Ljubljana

    Not only is this literally just around the corner from our previous address, but the location is much much better as well.

    For starters, we are located in a gallery of one of Ljubljana’s most iconic buildings: Tavčarjeva palača – commonly known as Palača/Kavarna Evropa.

    Apart from some shops, apartments and a café, this building also houses Ljubljana’s main public library – Knjižnica Otona Župančiča.

    This combination gives us a visibility that we could only dream of in our previous (admittedly large) basement! And when the full-length glass wall of our ~47 m² ground floor is too much, we can just move to the privacy that our windowless ~47 m² first floor offers.

    …and legally

    Since some of the main reasons for the liberation of Cyberpipe was to gain more independence, the more important question is that of our legal status.

    We are proud to say that we are now part of LUGOS – an NGO/NPO that was one of our founders way back in 2001. Cyberpipe and LUGOS have from the very start maintained a fruitful cooperation on several projects.

    Inside LUGOS, Cyberpipe has a separate bank account and has a substantial autonomy. This symbiosis has spurred a lot of positive vibes and activity in both groups.

    How did we get here?

    It was quite a long and bumpy ride, but I will try to keep it short. For more details, you can ask me or any of the other migrators in person.

    Design intermezzo

    But before we get into that, we need to look hard at the logo of Cyberpipe itself:

    Cyberpipe logo

    The pipe enclosed in a C2 stands for “Cyberpipe” and is heavily inspired by the famous La Trahison des Images by Magritte:

    Inspiration for Cyberpipe’s logo comes from Magritte’s famous work

    And the inspiration is not only visual. The text below the Magritte’s pipe says « Ceci n’est pas une pipe. » – indicating that this (image) is not a pipe and that claiming that it is a pipe would be deceiving. What it is though, is a manifestation of a pipe.

    The very same can be said about Cyberpipe: Cyberpipe is a concept and every physical manifestation of it is just that – a concept, nothing more and nothing less. So what we have here now is not a new Cyberpipe, but just a new manifestation of it – the location and people may change in time, but the idea(l) stays the same.

    Migration itself

    The wish for (more) independence had been present for a very long time and cannot be attributed to a single moment in time. The fact that we were told to move out of our basement on Kersnikova 6 was just the final trigger for getting off of our arses and moving out – much like a child that grew up – instead of just moving next door.

    So in the summer of 2013 we parted ways on good (and written) terms with Zavod K6/4 – of which we had been part since 2001 – and started looking for a new home.

    For months many of us scouted the city centre (and a bit of the outskirts) for an affordable and acceptable new location. We met many supporters, but found very few options – none of which were realistic for our survival.

    It was a very frustrating time, but as individuals we learnt a lot and as a community we grew tighter.

    I will skip all the possibilities that we were offered that did not work out, but would at this point like to thank everyone who offered help, even if in the end things worked out otherwise.

    Our luck changed when we got invited3 to Studio City, where Marcel Štefančič jr. interviewed Andraž “minmax” Tori and myself on national television regarding the future of Cyberpipe. The gist of our message was that we just need a place to work in and we will handle the rest ourselves.

    What happened later is a consequence of coincidences that lead to the awesome place we can now call home…

    For quite a long time we just continued with our plans in the following order:

    1. enquiring at the municipality for a subsidised (or even gratis) location;
    2. asking everyone we knew for hints and ideas; and
    3. scouting the area over and over again.

    …but while we were doing that, my dad was constantly nagging me to go and ask the city public library, stating that if anyone would be a good partner, it would be them.

    So as soon as I needed to borrow a book, I caved in and asked the first guy I met there. The dialogue went something like this:


    Me:

    Hey! I know this is a long shot and the answer will probably be “no”, but do you have any idea where we could find a space for a hackerspace?

    Random bearded bloke (who later turned out to be Joško Senčar – the head of the library’s Mediatheque):

    Hey! I just saw you on the telly the other day and ever since I was thinking “Oh, we should totally find a space for Cyberpipe!”


    Me:

    You’re kidding, right?!

    Joško Senčar:

    Not at all, let’s figure out something.

    Skip ahead a few months, and what came out of that discussion is an awesome location and a cooperation contract with the Ljubljana city library!

    Contribute and help

    But freedom comes at a price – in this case very literally!

    When we left the previous place, we took with us very little equipment and no finances at all.

    Now that we have our own place, we also have to pay rent for it as well as standard costs of heating, internet etc.

    So I am hereby humbly asking you to please support Cyberpipe by financial, material or other means.


    It was a long and thorny journey and without mutual support and optimism, we would never have made it this far! My hat is off to you, my fellow “pipci in pipke”[4] – it is an honour to be in your midst.

    During the year of migration, and even before, more people helped us than can be listed here. You know who you are, so hereby please accept our eternal collective gratitude!

    hook out → eating my second breakfast and sipping Julius Meinl coffee in Nova Gorica while looking through the window of the café, and looking forward to seeing the new place finally renovated in the coming days ☺

    1. By now I hope you realise this is not a joke ☺ 

    2. This is also why the abbreviation for Cyberpipe / Kiberpipa is C|

    3. I am extremely happy that minmax talked me into this – when I got the call, I was in Zagreb to give a talk at DORS/CLUC and was not in the mood to do a live interview the next day. 

    4. The members of Kiberpipa often refer to themselves as “pipci” (male) and “pipke” (female). 

    Posts for Saturday, May 3, 2014

    Fix Baloo on KDE using the same trick as once used with Nepomuk

    update: this post got me banned from KDE Planet in a very rough way. Nepomuk problem: since the daunting day of the KDE 4.0 release, I’ve been struggling with Nepomuk. I’m no random user – I know about low-level programming and I/O-bound vs. CPU-bound workloads – and I genuinely tried to get it working. I failed […]

    Posts for Sunday, April 20, 2014


    Stepping through the build process with ebuild

    Today I had to verify a patch that I pushed upstream but which was slightly modified. As I don’t use the tool myself (it was a user-reported issue) I decided to quickly drum up a live ebuild for the application and install it (as the patch was in the upstream repository but not in a release yet). The patch is for fcron’s SELinux support, so the file I created is fcron-9999.ebuild.

    Sadly, the build failed at the documentation generation (something about “No targets to create en/HTML/index.html”). That’s unfortunate, because that means I’m not going to ask to push the live ebuild to the Portage tree itself (yet). But as my primary focus is to validate the patch (and not create a live ebuild) I want to ignore this error and go on. I don’t need the fcron documentation right now, so how about I just continue?

    To do so, I start using the ebuild command. As the failure occurred in the build phase (compile) and at the end (documentation was the last step), I tell Portage that it should assume the build has completed:

    ~# touch /var/tmp/portage/sys-process/fcron-9999/.compiled

    Then I tell Portage to install the (built) files into the image/ directory:

    ~# ebuild /home/swift/dev/gentoo.overlay/sys-process/fcron/fcron-9999.ebuild install

    The installation phase fails again (with the same error as during the build, which is logical, as the Makefile can’t install files that haven’t been properly built yet). As documentation is the last step, I tell Portage to assume the installation phase has completed as well, continuing with the merging of the files to the live file system:

    ~# touch /var/tmp/portage/sys-process/fcron-9999/.installed
    ~# ebuild /home/swift/dev/gentoo.overlay/sys-process/fcron/fcron-9999.ebuild qmerge

    Et voilà, fcron-9999 is now installed on the system, ready to validate the patch I had to check.
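    The marker-file trick above can be sketched generically: Portage records each completed phase as a hidden file inside the package’s build directory, and the ebuild command skips phases whose markers already exist. The sketch below is illustrative only and uses a temporary directory instead of the real build directory (which lives under ${PORTAGE_TMPDIR}, typically /var/tmp/portage/<category>/<package>); the directory layout here is an assumption for demonstration purposes.

    ```shell
    # Illustrative sketch: mimic Portage's per-phase marker files in a
    # temporary directory standing in for the real build directory.
    BUILDDIR=$(mktemp -d)/sys-process/fcron-9999
    mkdir -p "$BUILDDIR"

    # Pretend the compile phase finished, so "ebuild ... install" would
    # not try to rerun the failing build step:
    touch "$BUILDDIR/.compiled"

    # Pretend the install phase finished, so "ebuild ... qmerge" would
    # proceed straight to merging the files:
    touch "$BUILDDIR/.installed"

    # Show the hidden markers Portage would check for:
    ls -A "$BUILDDIR"
    ```

    On a real system you would of course touch these files in the actual build directory, as shown in the commands above.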

    Posts for Thursday, April 17, 2014


    If things are weird, check for policy.29

    Today we analyzed a weird issue one of our SELinux users had with his system. He got a denial when calling audit2allow, informing us that sysadm_t had no rights to read the SELinux policy. This is a known issue that has been resolved in our current SELinux policy repository but which still needs to be pushed to the tree (which is my job, sorry about that). The problem, however, was that when he added the updated policy himself – it didn’t work.

    Even worse, sesearch told us that the policy had been modified correctly – but it still didn’t work. Checking the policy with sestatus and seinfo also said everything was fine. And yet… things weren’t. Apparently, all policy changes were being ignored.

    The reason? There was a policy.29 file in /etc/selinux/mcs/policy which was always loaded, even though the user already edited /etc/selinux/semanage.conf to have policy-version set to 28.

    It is already a problem that we need to tell users to edit semanage.conf to pin a fixed version (because binary policy version 29 is not supported by most Linux kernels, as it was only very recently introduced), but having load_policy (which semodule calls when a policy needs to be loaded) load a stale policy.29 file is just… disappointing.

    Anyway – if you see weird behavior, check both the semanage.conf file (and set policy-version = 28) and the contents of your /etc/selinux/*/policy directory. If you see any policy.* file that isn’t version 28, delete it.
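    The cleanup described above can be sketched as follows. This is a hypothetical, safe-to-run demonstration that uses a temporary directory standing in for /etc/selinux/mcs/policy; on a real system you would operate on that directory directly (and double-check /etc/selinux/semanage.conf first).

    ```shell
    # Hypothetical sandbox: a temp dir standing in for /etc/selinux/mcs/policy,
    # containing both the pinned version (28) and a stale newer build (29).
    POLDIR=$(mktemp -d)
    touch "$POLDIR/policy.28" "$POLDIR/policy.29"

    # Delete every binary policy whose version suffix is not 28, so that
    # load_policy can no longer pick up a stale newer file:
    for f in "$POLDIR"/policy.*; do
      [ "${f##*.}" = "28" ] || rm "$f"
    done

    ls "$POLDIR"
    ```

    After the loop, only policy.28 remains, which matches the policy-version pinned in semanage.conf.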

    Planet Larry is not officially affiliated with Gentoo Linux. Original artwork and logos copyright Gentoo Foundation. Yadda, yadda, yadda.