Posts for Friday, January 29, 2010

Faunus is alive!

Finally, after an epic battle with disk encryption, the kernel and the initramfs, my brand spanking new shiny matte ThinkPad is up and running!

World, be prepared for Faunus — the horned god of the forest!!!

Even compiling KDE on this puppy was a dream. I find it surreal how quiet and cool it stays even under load averages as high as 5 or 8 :3

hook out >> gotta run to the faculty for a meeting

Posts for Thursday, January 28, 2010

How dumb is Slashdot?

OK that title is a bit provocative.  I enjoy reading Slashdot as much as the next guy, and I'd always laughed at the comments about Slashdot readers being dumb, but this post got me rolling my eyes in frustration: "2 Displays and 2 Workspaces With Linux and X?"

The OP asks about buying a second monitor and setting up two screens - one large desktop or separate X screens.  Firstly, I would expect a question like this from an Ubuntu noob, followed by lots of answers like RTFM, Google it, see this FAQ, etc.

However, on Slashdot there are so many people who still don't realise that one large desktop doesn't mean windows have to maximise across both screens.  So few people seem to know about Xinerama, and yet they're still giving advice!  Someone asked whether "Windows 7's easy dual monitor setup lets you maximise a window to one screen - can Linux do that?" (sheesh, it has for years now...)

Slashdot users have some fantastic, interesting, and informative posts.  Unfortunately, unlike a regular mailing list where only the people who might actually know the answer reply, everybody on Slashdot wants to reply.

Quod Erat Rant-astrandum!

bash.rss to feedburner

According to the statistics, many of you use my bash.org RSS feed. The traffic generated by this feed has grown over the last year, and it is now around 70% of the total traffic of this website (around 600 MB a month).

As a matter of principle that is a bit much for an RSS feed, so I decided to move the feed to FeedBurner. Thanks to a .htaccess rule the feed is already forwarded, which means that current subscribers should not notice anything ;)
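For the curious, a rule along these lines does the trick (the FeedBurner address below is made up for illustration, and the user-agent condition is there so FeedBurner itself can still fetch the original file):

RewriteEngine On
# don't redirect FeedBurner's own fetcher, only regular readers
RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
RewriteRule ^bash\.rss$ http://feeds.feedburner.com/bash-rss [R=302,L]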

Happy reading :-)

Debian adventures

This post is a rant. So don’t complain, I warned you.

<rant>
On my laptop (a MacBook 4,1) I run Debian testing/experimental, which had been running quite smoothly ever since I installed it, apart from the last couple of weeks.

The first problem I faced was Java not running inside browsers. Firefox, Iceweasel, Opera, google-chrome… nothing. I spent at least 2 hours installing and uninstalling various Java packages and moving plugins to new locations, and I still couldn’t get it to work. I was furiously googling about the issue until I hit the jackpot: squeeze : in case you have no network connection with java apps …
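(For reference: a likely culprit at the time was squeeze flipping the net.ipv6.bindv6only sysctl to 1, which broke networking for Java apps and applets. If that is what you are hitting, the usual workaround looked roughly like this; verify it actually applies to your setup before blaming it.)

# revert the IPv6-only socket default at runtime (run as root)
sysctl -w net.ipv6.bindv6only=0
# to make it permanent, also set it to 0 in /etc/sysctl.d/bindv6only.conf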

Today I upgraded xserver-xorg-input-synaptics from 1.2.0-2 to 1.2.1-1. Even though it is a minor version bump, a kind fairy also told me to reboot… I rebooted and my touchpad was not working properly: tapping was lost, I couldn’t use synclient because shared memory config (SHM) was not activated, and so on and so on. My dynamic config using hal was there, /var/log/Xorg.0.log said that I was using the proper device, and lshal showed correct settings for the device. I read /usr/share/doc/xserver-xorg-input-synaptics/NEWS.Debian.gz: nothing new. After some googling, another jackpot: Bug#564211: xserver-xorg-input-synaptics: Lost tapping after upgrading to 1.2.1-1. For some reason touchpad configuration has moved from hal to udev, and the maintainers didn’t think it was important enough to document somewhere or to mention in README.Debian…
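(If you end up poking at the same thing, a quick way to see what the driver actually exposes after the upgrade is to ask X directly; the device name in the second command is just the usual one, substitute whatever the first command reports.)

# list input devices, then dump the properties the synaptics driver exposes
xinput list
xinput list-props "SynPS/2 Synaptics TouchPad"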

The last issue I am having is with linux-image-2.6.32-trunk-686-bigmem not working correctly with KMS and failing with DRM errors:
[ 0.967942] [drm] set up 15M of stolen space
[ 0.968030] nommu_map_sg: overflow 13d800000+4096 of device mask ffffffff
[ 0.968085] [drm:drm_agp_bind_pages] *ERROR* Failed to bind AGP memory: -12
[ 0.968159] [drm:i915_driver_load] *ERROR* failed to init modeset
[ 0.973067] i915: probe of 0000:00:02.0 failed with error -28

linux-image-2.6.32-trunk-686 works fine with KMS and DRM though:
[ 0.973466] [drm] set up 15M of stolen space
[ 1.907642] [drm] TV-16: set mode NTSC 480i 0
[ 2.137173] [drm] LVDS-8: set mode 1280x800 1f
[ 2.193497] Console: switching to colour frame buffer device 160x50
[ 2.197435] fb0: inteldrmfb frame buffer device
[ 2.197436] registered panic notifier
[ 2.197442] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0

Xorg is amazingly sluggish using the linux-image-2.6.32-trunk-686-bigmem kernel. I searched the Debian bug database and no one seems to have reported such an issue, but Google came up with: [G35/KMS] DRM failure during boot (linux 2.6.31->2.6.32 regression). The issue looks solved upstream, so I will try to report it to Debian and see what comes out of it…
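(If you want to do the same for your own hardware, filing the report is just a matter of the following; reportbug walks you through the rest, and the package name is the one from above.)

reportbug linux-image-2.6.32-trunk-686-bigmem
# attach the [drm] lines from dmesg when it asks for additional information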
*Update* Bug Report: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=567352

If you dare to comment saying “that’s what you get for using experimental”, I curse you to spend 3 hours today trying to figure out what has changed in a minor version upgrade of one of your installed packages.
Even worse, if you are one of those guys who kept telling me “don’t use stable, testing is stable as a rock, never had a problem in years…”, then I curse you to spend a whole day trying to reconfigure something with no documentation :P
</rant>

Firefox Personas

Firefox Personas - inevitable Bling Bling, or worthwhile (but still Bling Bling)?

If you don't know what I'm talking about, I just upgraded Mozilla Firefox to version 3.6.  The what's new? page is different this time.  Instead of the usual congratulations, security notes and links, I'm greeted with "Thanks for supporting Mozilla’s mission of encouraging openness, innovation and opportunity on the Web!" and "Choose Your Persona".


If you mouse over any of the "persona" thumbnails, Firefox's theme changes dynamically. Cool.  Not only that, but there has clearly been quite a bit of design effort put into making these personas look sleek, integrated, and elegant.

From This

To This!

I would be happy with the old Netscape look for years to come (why did they need to keep changing the logo anyway?), but I guess the iPod younguns of today are attracted to shiny silver objects, and that goes for the software world too.  In the age of "I'll buy anything new from Apple just because it's cool" it's inevitable that Firefox adds some chrome!

Emerge multiple packages at once

You may or may not already know about this feature, but you can emerge multiple packages at once in Gentoo.  If you have any semi-recent machine (2 years old or newer) you should definitely be using it.

Since Gentoo is source based - and hence compiles everything before installing it - any build-time speed improvement is welcome.  We already have the -j option, which can be passed to make via make.conf:
MAKEOPTS="-j3"
Various sources say to set this number j = N(CPU) + 1 or j = 2N(CPU) + 1.  I find the former is sufficient.
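If you're not sure what N(CPU) is on your box, this quick check gives you the number to plug in:

# count logical CPUs; MAKEOPTS="-jN" with N being this value plus one
grep -c ^processor /proc/cpuinfo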

But what about configure?  Before you compile a package, you have to configure it, which typically can only use one CPU.  In addition there are other operations that are disk-intensive while not being CPU intensive (for instance, unpacking source code).  And finally some packages are just "broken" and internally set -j1.

So it would be nice to build (unrelated) packages simultaneously.  While one configure script is running, another could be compiling, further utilising those MeGaHurTz you paid so dearly for!  Recently I tested this for the first time.  I ran emerge like so:
$ emerge -vauDN --jobs=2 world

After looking through the output, this is how it proceeds:
Total: 70 packages (66 upgrades, 1 new, 3 reinstalls, 3 uninstalls), Size of downloads: 0 kB
Conflict: 23 blocks
Portage tree and overlays:
[0] /usr/portage
[1] /usr/local/portage

>>> Verifying ebuild manifests
>>> Starting parallel fetch
>>> Emerging (1 of 70) x11-libs/qt-xmlpatterns-4.6.1
>>> Emerging (2 of 70) sys-devel/binutils-2.20
>>> Jobs: 0 of 70 complete, 2 running Load avg: 5.56, 2.53, 1.67

And just to prove that two packages are emerging:
$ genlop -c

Currently merging 2 out of 70

* sys-devel/binutils-2.20

current merge time: 1 minute and 27 seconds.
ETA: less than a minute.

Currently merging 1 out of 70

* x11-libs/qt-xmlpatterns-4.6.1

current merge time: 1 minute and 28 seconds.
ETA: less than a minute.

A little while later, my load average settles down around 4.8:
>>> Installing (18 of 70) dev-python/pytz-2010b
>>> Installing (16 of 70) x11-libs/qt-script-4.6.1
>>> Emerging (19 of 70) dev-util/subversion-1.6.9
>>> Emerging (20 of 70) dev-lang/python-2.6.4-r1
>>> Jobs: 17 of 70 complete, 1 running Load avg: 4.84, 4.84, 3.94

You may come across some packages that are interactive, such as skype, which forces you to view and accept its EULA.  In that case, concurrent jobs are disabled.  If you wish to go ahead with all the non-interactive jobs (a good idea!), run emerge like so:

$ emerge -vauDN --jobs=2 --accept-properties=-interactive world

Note this feature is not supported in older versions of portage.  I tested with sys-apps/portage-2.1.7.16.
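If you like the result, you can make it the default in make.conf rather than typing --jobs every time. Something like the following sketch; the --load-average cap is optional and assumes your portage version also supports that option (it keeps emerge from piling up jobs when the load gets high):

# run up to two merges in parallel, but back off once the load average passes 4
EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=4"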

Listing packages installed from overlays

Gentoo provides an official package repository, and the mechanism for creating third-party repositories, called overlays.  Overlays can be home-made, developer-made, community-made, you name it!

It occurred to me that I wanted to list all installed packages that come from overlays.  (I'm doing some house cleaning, so I'm removing overlays I don't need anymore).  There appears to be no way to generate this list via equery (the "gentoolkit" method of doing various package queries).

This one-liner should do the trick.

$ for i in /var/db/pkg/*/*; do if ! grep gentoo $i/repository >/dev/null; then echo -e "`basename $i`\t`cat $i/repository`"; fi; done

The output of which looks (only slightly messy) like:
revoco-0.5    Orpheus Local Overlay
synce-gvfs-0.3.1    SynCE
synce-serial-9999    SynCE
synce-trayicon-0.14    SynCE
nautilussvn-0.12_beta1-r2    Orpheus Local Overlay
evolution-data-server-2.28.2    Orpheus Local Overlay
gnome-hearts-0.3    Orpheus Local Overlay
nautilus-python-0.5.1    rion
nautilussvn-0.12_beta1_p2    Orpheus Local Overlay
mozilla-thunderbird-bin-3.0_beta2    Orpheus Local Overlay
libgii-1.0.2    Orpheus Local Overlay
grub-0.97-r9    rion
usb-rndis-lite-0.11    SynCE
xorg-server-1.7.4    Orpheus Local Overlay

You can see here that I have various packages installed from the SynCE overlay, the rion overlay and my homespun "Orpheus" overlay.

This assumes your overlay was set up correctly, with the file profiles/repo_name containing the overlay name at the time the package was installed (support for repo_name was not available in earlier versions of portage).
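And for anyone who would rather keep this around as a script than re-type the one-liner, here is the same logic spelled out (functionally equivalent):

#!/bin/sh
# list installed packages whose recorded repository is anything other than "gentoo"
for pkg in /var/db/pkg/*/*; do
    repo=$(cat "$pkg/repository" 2>/dev/null)
    case $repo in
        *gentoo*) ;;                                    # main tree - skip it
        *) printf '%s\t%s\n' "$(basename "$pkg")" "$repo" ;;
    esac
done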

iPad, what about you?

Now I really couldn’t resist. Really – and if you haven’t heard of it yet, I guess Apple needs to get more fanboys, or at least ones who talk more. The iPad was announced yesterday and is the embodiment of magical Apple orgasm. Here’s a picture. Apple loves pictures.

Apple iPad

Yep, it’s simply one big iPod/iPhone with a bad accent.

I preach the economics of technology. Simply put, I am mostly ridiculed for that theory by anybody who has glimpsed at economics and doesn’t know much about technology. On the other hand, it turns out that everybody I’ve talked to who does keep a close eye on the tech industry agrees almost instantly that yes, the success of products in the technology market is due to developer interest, and only developer interest in the long run. Now, I remain a firm believer in this myself and have been trying to find exceptions to the rule. One that was suggested was the Apple iPod, which as we all know was a runaway success. However, seeing that lately the traditional iPods have started to be phased out in favour of the iPod Touch (where all the developer interest is), this example simply reaffirms my theory. The other fine suggestion was an interesting one, too – computer games. These, I believe, have a much longer period until developer interest deals the final blow – and in some cases are completely consumer-determined. These are an anomaly. I challenge people to find others.

But but but – for the rest the theory will apply. So why don’t we look at the iPad from the perspective of a producer-determined success?

If anything, Apple hit the jackpot. It’s no secret that developers have been looking forward to the day we had a sensible tablet platform to work wonders on. By deciding to let iPhone apps run on it natively and unchanged, Apple ensured not only that developers don’t need to bother learning a brand new system (simplifying things a bit here), but also that porting applications over is quick and easy. 140,000 applications immediately available to a consumer? I’ll take that, thank you very much.

I’m not too knowledgeable about Apple products, but I do know that iPhones can be “jailbroken” – a way of breaking your deal with Apple to enjoy a bit more freedom. If the iPad can be jailbroken to run third-party applications that don’t have the Jobs seal of approval, and to bypass other random restrictions I’m sure will exist, that’ll blow developer interest sky-high.

One thing many people seem to confuse developer interest with is developer freedom, assuming the degree of freedom is proportional to the interest received. No, this is not true. Developer interest arises and shifts in response to at least as many factors as consumer interest does. If developers think consumers will like a product, regardless of whether they actually do, they will devote time to it. So despite the surface analysis that the iPad already has 140,000 developers up front (on the assumption that there is on average one developer per app), we can’t ignore the other main factors.

In the beginning I mentioned that developers have been looking forward to a sensible tablet platform – so when I say other main factors, this is the one I’m talking about. Once they get over the fact that, looks-wise, it’s quite simply a fat iPod Touch, we’ll have to question whether the time is ripe for a tablet platform, or whether this is just going to be classed as another failed attempt and the “perfect” tablet is yet to come. What determines how other developers see this is how well Apple has upgraded its in-house apps to take advantage of the bigger screen.

Well folks, as you can see, even though we’ve not once considered the consumer’s point of view, it doesn’t get us much closer to guessing how successful it’ll be. No – the economics of technology should not be shunted into a corner and disrespected, but instead embraced as a new way to look at success in technology.

Related posts:

  1. The economics of technology
  2. On the nth day of Christmas, my true love gave to me nx, (n-1)x, (n-2)x…
  3. The Google Operating System – Chrome.

Posts for Wednesday, January 27, 2010

How to check if you are editing GPOs using a local or Central Store

After my previous posts about preparing to build a new Group Policy for Windows 7 and about setting up the Central Store, it occurred to me that it may be useful to check that, when we edit a GPO, we are actually using the ADMX files from our Central Store rather than those stored locally. This may not seem important until you consider that Windows Vista, Server 2008, Windows 7 and Windows Server 2008 R2 all ship with different versions of the ADMX files, so you want to avoid a situation where you are building your GPO and don’t realise that you are missing potentially useful options.

Thankfully it is very simple to see if you are using the ADMX files from your Central Store or not:

This shows the ADMX files being loaded locally

This shows the ADMX files being loaded from the Central Store

When Dell Doesn't Deliver

I've never had a "bad" experience with Dell (Australia), with the exception of their outsourced, sometimes hard-to-understand technical support.  So it is interesting to watch what happens when Dell doesn't deliver the way they promise.

Here's the sequence of events.  For the record, I do not and have never worked for Dell, nor do I receive any free or discounted goods or services from them.

14 April 2008

Ordered Dell Precision M6300 laptop including a Logitech MX Revolution cordless laser mouse with CompleteCover Guard and Next Business Day Onsite warranty.

November 2009

The mouse stopped charging (charging light flashes red when placed on charger).  I didn't do anything at the time since I was busy.

12 January 2010

My first contact with Dell about the mouse.  Was transferred from their usual warranty number (Indian speaker) to the "premium" warranty area (Australian speaker).  Was told a new mouse would be here in 2 days.

18 January 2010

Received email from TNS requesting I complete a survey regarding my recent call.
Completed the survey on the same day.  I noted in one of the survey questions that the issue was "unresolved", since I had not received the replacement part.

<= 22 January 2010

Decided to call Dell again to find out about the mouse.  Warranty told me that the part had not been sent, and it would have to be handled by Logitech.  They transferred me to Logitech who took details of the mouse and told me to expect a new one up to two weeks later.

27 January 2010

Received a call from Dell regarding the survey I completed.  The caller asked if I had received the part, and offered to get the original person ("Nick") to look into it.  He asked if I had a mouse to use in the mean time (I said yes).

27 January 2010

Received a call from Nick from Dell.  He asked about the part and said he would check with Logitech and get back to me.

27 January 2010

The mouse arrived in the afternoon!  I called Dell to let them know they could stop looking for it!

So it took 9 days to respond to the survey.  The replacement mouse was here in around one week, although not in the next business day as the warranty implied.  But then, it was an accessory and not a typical spare component of the laptop.

And in case you're wondering about the Gold Phone Technical Support, apparently it's the difference between speaking to someone in India vs someone in Australia.  The "Pro" warranty personnel even answer the phone with "This is <name> in Sydney".

Posts for Tuesday, January 26, 2010

Working remotely

I'm sitting here in Canada trying to work for my employer back in the US for a month. It's been a few weeks already, and I'm surprisingly pleased (or pleasantly surprised) with how well it's working. At the same time, certain aspects of this rather suck.

One huge obstacle so far is (of course) Windows. Aside from the Linux server that I convinced IT to let me run out of a closet, the whole place is Microsoft. Whatever MS VPN software we're using is slow, clunky, unreliable, and generally annoying.

At one point I tried to fetch a file from a network drive and watched it download at 0.2 k/sec. Then I had someone back home copy it onto my Linux box, and I downloaded from there at 120 k/sec. The Windows and Linux servers are in the same room in the same building behind the same network connection; I don't understand how VPN overhead slowed things down by that many orders of magnitude.

After connecting to VPN, there's about a 25% chance that Outlook will be able to connect to the Exchange server at work. Generally I have to fire up the VPN, turn it off, turn it on, turn it off, turn it on and then Outlook will find it. Sometimes I close Outlook, but it lives on as a zombie, futilely hammering away at the server but unable to find it, until I CTRL-ALT-DEL and kill it. This is with Office 2007.

But the work I do on the Linux server is (of course) easy. No problems whatsoever. Working over SSH is how I did things when I was sitting in my office anyways. I tunnel in and use local GUI SQL clients. I put VirtualBox on my laptop and I do a bunch of stuff in a Linux VM and rsync it back home with no problems. I can edit files over SSH right in Emacs as if they were on my local box, if I care to.
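(For the curious, "tunnel in" here means nothing fancier than SSH port forwarding; the hostnames and the port below are made up for illustration.)

# forward a local port to the database box that is only reachable from the office Linux server,
# then point the local GUI SQL client at localhost:3306
ssh -N -L 3306:dbserver.internal:3306 me@linuxbox.example.com
# and "edit files over SSH right in Emacs" is TRAMP: C-x C-f /ssh:me@linuxbox.example.com:/path/to/file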

Sometimes I wonder if my dislike of Microsoft is irrational. Any belief that is caused by or results in a strong emotional response should be subject to questioning. Then reality comes waltzing by and reminds me that no, MS software really does suck.

I'd worked for this company for over two years before moving. I don't know how well I'd be doing if this were a company I had just started with. It's hard to see how important face-to-face communication is until it's impossible. Email is OK, but the benefit of knowing people in person and knowing how they talk and how they think really goes a long way towards being able to interpret and understand plain-text communication.

Setting up a basic test lab using VMware

One of my favourite features in VMware Workstation that I have found recently is the ability to create a ‘team’ of virtual machines. This allows you to have one or more virtual machines running on a virtual LAN, essentially letting you set up a private test network where you can, for example, run test domain controllers or any other network application, and as long as you have the network set up correctly there is no way for anything to ‘leak’ out onto your production network.

I’ve been using this to run a simple test network with two virtual machines to help develop and test a new Group Policy for our Windows 7 deployment later this year. In one virtual machine I have Windows Server 2003 running as a Domain Controller and as a Router/DHCP Server (this VM effectively becomes our virtual LAN’s gateway for internet access and so needs two network interfaces: one to connect to the host’s network and gain internet access, and the other to connect to the internal virtual LAN), and in the other I have Windows 7 set up as a member of the test domain.

Once you have your virtual machines ready to go, we can create our Team and add the virtual machines to it. In VMware go to File -> New -> Team to launch the New Team Wizard. Give the Team a name, decide where you want to store the configuration file, then add the virtual machines you want in the Team (you can always add and remove virtual machines at any point). Next you need to add at least one LAN Segment; this is basically the virtual LAN that will connect our Domain Controller to our Windows 7 virtual machine (and any other VMs you add). You can have multiple segments, all with different network speeds, if you want to simulate a larger, multi-site network, but for our simple lab it is easiest to just use one segment. Finally you need to decide which network adaptor connects to which network (virtual or otherwise); this can be confusing if you are not used to networking and VMware, so here is a screenshot of my configuration that you can use as a base.

My VMware Team network configuration

The important thing here is to make sure that one network adaptor of the Domain Controller is on the virtual LAN with the Windows 7 VM (and that, if you have already run the network setup wizard after installing the router/DHCP roles on the Domain Controller, you select the correct adaptor – don’t worry, it can always be changed if you get it wrong). Also, assuming you want all the machines in your Team to be able to access the internet, you will need to map the internet-facing adaptor on your Domain Controller to the host machine’s network. My recommendation is to use NAT here to ensure your virtual network remains isolated, although as long as you are careful when you configure the Domain Controller’s routing you can use bridged networking.

And there we go: you should now have a simple but very useful virtual lab environment that you can use, like me, to test new Group Policy options, or really anything (I’ve been running the new SharePoint 2010 beta in another test network). You can even extend the lab with additional LAN Segments to represent remote sites (with simulated packet loss too, if you want). The Team settings give you a lot of options if you want to expand your lab; the only limitation is how fast your computer is!

foaf

Some time ago I learned about the FOAF project. I liked the project back then, but I never took the time to write my own foaf.rdf. Well, now it is created: very basic, but I hope it will be extended soon.

Given my recent social network activity, the step to FOAF was, in my eyes, also a logical one. But it does seem that a lot more people have Facebook than FOAF…

One of the nice things about FOAF, if you are interested in semantic web stuff, is that it is in constant development: there are discussions on the mailing list about how to structure the data, why things should change, and so on. This makes it a very dynamic project.

As you can see I do not yet have any foaf:knows elements in my foaf.rdf, but let’s hope that will soon change so a semantic network can grow around me.

The Sarc Mark available for Linux?

Raise your hand if you’ve seen this little gem:

That’s the latest addition to the English language (The Guardian). Used like the rest of the Mark brothers (Mr Question Mark and his annoying sister, Little Miss Exclamation Mark), Mr Sarc’s purpose is to denote sarcasm. Now, instead of using various to-be-deprecated techniques such as the sarcasm tag </sarcasm>, vague emoticons, or my personal favourite, "No shit, Sherlock", we have a standard to look towards that will appear whenever you hit Ctrl-. (that’s full stop). Of course, you’d have to pay some US company 2 USD to get it there (please, please don’t tell me you actually tried it just now).

I was wondering if anybody took the initiative to create a font with support for this on Linux – others might see it as a complete waste of time and resources but I can’t wait to write my next essay with this in it.

Related posts:

  1. rtm – a Command Line Tool for RememberTheMilk
  2. Hello. I hacked the GIS website.

New York City – KDE SC 4.4 Release Event

Anyone living in NYC up for putting together KDE SC 4.4 release shenanigans? It would be around Feb 9th, probably a day or two after, I suppose.

Posts for Monday, January 25, 2010

Wpa_gui is Underrated

A hot topic in the community is wireless management. There’s a whole lot of buzz about NetworkManager, Wicd, dbus, frontends, PolicyKit, plasmoids, and the whole gamut of dizzying names and acronyms. Let me tell you about my mobile laptop’s wifi setup and why it’s easier and slimmer than any of the classic bloat.

I use wpa_supplicant’s optional wpa_gui. It’s a tiny Qt app that has a tray icon and a command line switch to start in the tray. Wpa_supplicant is required for all modern wireless connections and is always running in the background no matter what. Wpa_gui simply connects to wpa_supplicant’s control socket and tells it what to do. I like having wpa_gui in my system tray so that I can reconfigure wifi networks easily.
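For that to work, wpa_supplicant’s control socket has to be enabled in its config; the relevant lines look roughly like this (the socket path and group vary per distro, so treat these values as an example):

$ cat /etc/wpa_supplicant/wpa_supplicant.conf
# let clients like wpa_gui and wpa_cli talk to the daemon over this socket
ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=wheel
# allow wpa_gui to save networks you add back into this file
update_config=1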

zx2c4@ZX2C4-Laptop ~ $ cat ~/.kde/Autostart/wpagui.sh
wpa_gui -t

And check it out:

A simple, somewhat ugly, but extremely functional info display. I can connect to new networks with a simple double click:

And presto it connects to the wifi network. I can also configure all of the highly advanced encryption profiles that wpa_supplicant supports. All of this is easily accessible in my tray:

If I did not want wpa_gui -t running all the time, I could pretty easily make this into a little quick-launch plasma button, and it would start up nearly as fast, because wpa_gui is so lightweight.

This is how I do wireless. I have never had any trouble, and I can connect to wifi networks anywhere I go with ease. It remembers the connections and the priorities that I assign, and I have not seen any system simpler or easier than this.

For wired networking, netplug calls my ethernet setup scripts when I plug in an ethernet cable. No tinkering required. For my cellphone internet via bluetooth, I run “pon nokia” and my ppp chatscript does all the rest. This could easily be tied to a little menu button in my launcher.

I’m bloat free, and networking dynamically on the go with my laptop does not require any advanced or time-consuming tinkering.

Why are you all using NM, wicd, etc instead of good ol’ wpa_gui?

How to Create and Edit Group Policy for Vista/Windows 7 PCs

I’ve spent the better part of the last week or so documenting our existing Group Policy and getting a test environment ready so I can develop and test a new policy for Vista and Windows 7 (well, most likely just Windows 7, as I can’t see us ever touching Vista again!). One problem I’ve hit so far is that there is no easy guide explaining how to get everything set up, just different guides all pointing to different files (at one point I think I was downloading 3 different versions of the same file because different Microsoft guides said to use different versions).

So, here’s what you need to manage GPOs for Windows 7:

  • Windows 7 – even if all your Domain Controllers are Windows 2003, you can only create/edit Windows 7 GPOs from a Windows 7/Vista/2008 R2 host. My recommendation is to use a virtual machine for this; if you don’t want to buy a license yet, you can use the trial version of Windows 7 for 90 days.
  • Download and install the Windows 7 Remote Server Administration Tools (RSAT) pack (this will only work on Windows 7; if you are using Windows Vista or 2008 to manage your GPOs you will need the corresponding RSAT pack).
  • By default the Group Policy Management Console isn’t enabled, so we need to enable it in the Control Panel. Go to Control Panel -> Programs and Features -> Turn Windows features on or off -> Remote Server Administration Tools -> Feature Administration Tools -> Enable Group Policy Management Tools.
  • Now we can see all the shiny new Group Policy options that have been added for Windows 7, but we need to make sure that when we create a policy, all the other computers that use it work from the same source ADMX files; currently GPMC is only looking at the ADMX files installed locally. To change this we need to copy all our ADMX and ADML files onto a Domain Controller (which will then sync them to all the other DCs in your network).
  • Copy the PolicyDefinitions folder that is in the Windows folder on your Windows 7 PC to your Domain Controller’s sysvol folder, which is normally \\<domain controller>\sysvol\<your domain name>\Policies

There we go: you should now be able to use this Windows 7 PC to create and manage your Group Policy for all Vista/Win7/Win2008 machines, even if your domain controllers all run Windows 2003. Don’t forget though: even though you can see these Windows 7 policies in GPMC on Windows 2003, if you edit them there you risk corrupting them and causing yourself a big headache! Only edit Windows 7 GPOs from a computer running Windows Vista, 7, 2008 or 2008 R2!

Third time lucky

Well, it’s been a few months since my old blog died when I cancelled the hosting, and since I had to renew my domain this month I decided to go crazy, set up some new hosting and give this blogging thing another go.

Since I last posted anything I’ve started a new job; I’m now the systems administrator for www.interregs.com and www.lsi.edu. I get to play around with and manage all their servers (currently just 2 racks full, but a third may arrive later this year when we deploy Exchange 2010 and WSS 2010). All this is a great change of pace and a whole lot more fun than my previous job at www.cobweb.com, which, while teaching me a lot, was a little too busy for my liking (try talking on the phone for 5-6 hours a day, 5 days a week!). While I do sometimes miss the chaos and banter of a busy office, it’s certainly a lot nicer being my own master, and I can finally start on my career path to becoming a BOFH :D

So, as I trundle through the next few months/years(!?!), I’ll be using this blog to post anything I find that’s useful while I roll out our Windows 7 deployment, the new Exchange 2010 server and then eventually a SharePoint 2010 server as well.

Posts for Sunday, January 24, 2010

Fosdem 2010

I'll be at FOSDEM - the 10th edition - again this year.
I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

I'll be presenting a lightning talk about uzbl.
Also, Arch Linux guys Roman, JGC, Thomas and I will hang out at the distro miniconf. We might join the infrastructure round-table panel, but there is no concrete information yet.

More stuff I'm looking forward to:

I'm surprised myself that there are many more topics of interest to me than last year, and I'm not sure the program is even finished.

Wikisurfing, the latest in extreme sports.

Say, my good sir, have you gone Wikisurfing before? No? Well, it’s time to expose yourself to the latest and greatest addiction.

Wikisurfing is the act of a group of people starting on a predetermined page on Wikipedia.org (in whatever language). Their objective is to navigate through Wikipedia to another predetermined article, using only inline links on the page, excluding "See also" sections, disambiguations, and any lists (e.g. list of countries, list of singers, etc.). The first person to arrive on that page wins.

So for example we’d have 5 people all on the page on Sultan Iskandar of Johor (Malaysia), who died a few days ago, all trying to click through links to navigate to Old Ephraim – and depending on your level of general knowledge you’d know that Old Ephraim was a very large grizzly bear that lived quite happily in Utah until he got shot in the head by an unwitting shepherd. You’d have to plan your strategy – you could go through the countries-America-Utah route or instead try the animals-bears-grizzlies technique. It’s very hard to find a page that you really can’t find a link to, but I’m willing to bet Old Ephraim is one of them (30 minutes without success!)

There are many variations of the game, such as one where every 5 minutes you must switch computers, another where at random times you may click a link on your neighbour’s computer, or ones where you must navigate through many topics in a sequence of your choice.

An experienced wikisurfer can tell you that there are certain topics that are dead ends and others which are self-sustaining spirals of disaster – you will be stuck in that topic once you enter it. One example of this would probably be some complex topic in hypothetical physics. Once somewhere within the topic, it’s very hard to move to another. Other topics, such as sociology, can link directly to the most random of articles with almost guilty relevance.

A great way to describe this situation is that it represents the depth of a topic. A topic from which it would be hard to reach other unrelated topics would be seen as an in-depth and technical topic, whereas one which could easily be changed to another (such as in 5 clicks or fewer) would be seen as a shallow, academically unchallenging topic. The problem when trying to measure this is determining exactly which page would be used as the base page to navigate to. For example, hypothetical physics would be very deep compared to your average McDonald’s hamburger, whereas from detailed biology, though arguably as complex as physics, it would be significantly easier to get to your burger.

The way to get around this is to divide our knowledge into several different categories, such as music, art, physics, biology, chemistry, math, etc., and use a representative article for each of these subjects. The next step is to proceed from the representative article of each subject to the representatives of each of the other subjects. Thus you could find an average depth for a subject.

I find that, since we have a massive repository of knowledge, instead of simply using it as a portal we should use it as a data source for these sorts of fun facts – like, for example, exactly how useful would it be to know what a tomium is? If mapped to a specialised personality profile, we could actually start sorting out knowledge that would be useful for people, so that instead of searching for information it would be fed to us.

Someone remind me why I’m wasting their bandwidth when they need help? Shame on me.

Related posts:

  1. Achieving what?

Posts for Saturday, January 23, 2010

Social Networking

After having resisted for a long time, last week I finally created a Facebook account. I still prefer e-mail or instant messages (Jabber!), but I have to admit it is a nice way to get in touch with a lot of people and to follow their activity.

So now that I have sold my soul, I decided to create an extra box on the blog with direct links to my various social network profiles. Just to make it look pretty!

So far I am active on 3 social networks:

  • last.fm: I actually created my account in order to improve QtMPC; however, it is now also coupled to my Amarok, and I upload the scrobble log from my iPod (running Rockbox).
  • LinkedIn: I created this to join the OpenStreetMap group.
  • Facebook: the newest addition, as of a few days ago.

Now I think that is (for now) enough to keep track of, but it makes me wonder why there aren’t any good desktop apps to manage some of this. Like the Facebook notification thingy at the bottom right: why isn’t there a cool Qt app to just make this sit in the system tray? Ah well, maybe a cool project for the upcoming semester :)

Destroying an LVM Array

Alright... So one of my hard drives failed in my backup server LVM array, and I received the new replacement disk today.

So, I turn on my computer and even though the volume group isn't initialized (due to the missing disk), it still exists, and has all the other drives in the array still held captive.

So, this is how to completely destroy your LVM array and all the data within it after a drive has failed.

DO NOT DO THIS IF YOU WANT TO KEEP ANY DATA ON THE ARRAY. (you have been warned)

The procedure is really quite easy.

First, do pvdisplay to see which disks are still part of the array you wish to destroy (in my case it was /dev/sdb, /dev/sdc and /dev/sdd).

Simply run the command: "pvcreate -ff /dev/deviceyouwanttokill"

That will initialize the disk, even if it is part of another array.

Do that to all the disks you want to be part of your new array, and voila!  You're done.
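For the lazy, here is the same thing in one go. The device names are the ones from my box above, so triple-check yours against pvdisplay before pasting anything:

# WARNING: wipes the LVM metadata on every listed disk
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    pvcreate -ff "$dev"    # it will still ask you to confirm each disk
done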

(hope you didn't just do all that hoping to do anything but COMPLETELY DESTROY your LVM array.)

Posts for Friday, January 22, 2010

Webcam

Since my girlfriend is going to do her research and write most of her thesis on Curaçao, I will only have contact with her through email/Skype for some time.

She already has a webcam built into her laptop. Not great quality, but good enough for Skype. There was also an old webcam, bought approximately 5 years ago, a Creative Webcam NX Pro; I have not yet figured out what is so “pro” about it, but it works like a charm!

The GSPCA ZC3XX driver is the one I need! After that, some programs need to be compiled with video4linux(2) support, but that is to be expected. The only problem I ran into so far was Skype itself! For some reason Skype does not work with v4l2, so in order to run Skype I have to use the following command:

LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so skype
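(If typing that gets old, a tiny wrapper script somewhere early in your PATH does it for you; the ~/bin location and the Skype binary path are assumptions, adjust them to your system.)

#!/bin/sh
# hypothetical ~/bin/skype wrapper: preload the v4l1 compatibility shim, then run the real Skype
export LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so
exec /usr/bin/skype "$@"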

Long story short: the webcam works like a charm, and thanks to the hard work of many people I had it working in no time!

Spring Schedule

I apologize for not posting in a (very) long time. I have been busy. Additionally, I haven’t had many topics to talk about.

Currently, I’m in limbo waiting for the new MacBook Pro laptops to be released. It is rumored that a refresh might happen at Apple’s January tablet event. I’m purchasing a MacBook Pro due to the endless volley of issues being thrown at me directly by Dell’s abomination of a service department.

School has started back. I will be listing my schedule below:

MWF Fundamentals of Calculus 8:30-9:20

MWF Operations Research 2:30-3:20

TTH Personal Finance 8:30-9:45

TTH Database Systems 1:00-2:15

TTH English Composition II 2:30-3:45

TTH Astronomy 5:00-6:15

Monday Committee Meetings at 6:30-7:30

Work:

Monday 1:00-2:30

3:30-4:30

Tuesday 12:00-1:00

Wednesday 1:00-2:30

3:30-4:30

Desk hours: 9:30-12:30

Duty on some Sundays and every other Thursday night.

Posts for Thursday, January 21, 2010

How free music could become even more available

After hacking for a while on the aforementioned Jamendo helper KTorrent script and learning about Libre.fm, I got a great idea!

What if we made not only a central system to promote all free music, but also made all of that free music available to users and listeners in a more natural and easy way, while at the same time lessening the load and costs on the netlabels?

Well, I have a plan and I think it could work!

Because I've been quite busy lately, I'll just quote the e-mail I sent to the Libre.fm mailing list with my battle plan in it:

Hullo,

I've just recently learnt about Libre.fm and it made me think about the
possibilities...

Because this will be a longer e-mail, I'll try to put some structure in it. At
first I'll talk in short about what I imagine Libre.fm could provide that
Last.fm doesn't (and couldn't), then show you the big picture and at the
end a detailed example of a real-life use case.

// Libre.fm and Downloads //

One thing that Libre.fm has that Last.fm didn't (and couldn't to this day) is
access to all of the music it promotes and streams. It would be a shame
to let this wonderful opportunity pass us by!

What I propose is to have next to every album that is promoted on
Libre.fm:
* a direct HTTP/FTP link to the download
* a torrent or magnet link to the album

The direct download link was proposed already, from what I gathered from
IRC, so I'll concentrate on the latter idea.

The idea behind having a BitTorrent link — either as a torrent link with
Libre.fm running a tracker or better yet a tracker-less magnet link — has
many positive side effects.

Firstly it would shift the web traffic from the indie net labels (and Libre.fm)
towards the users, which would lower the hosting costs. A possible
downside would be that the net labels wouldn't have their own tracking of
how many people downloaded the album, but a) since it's under a
copyleft license, the users could share it otherwise anyway; and b) they
could always check Libre.fm for that data (via API?).

Secondly the fact that it was handled via P2P would mean that even if a
net label went away, the albums would still be shared. Here the magnet
link would IMHO fare even better than a tracker, in case Libre.fm gets into
trouble (knocks on wood).

// The Big Plan (TM) //

Of course, just having a direct download and a P2P link is a nice touch, but
it's nothing really revolutionary. But using the Libre.fm API to extend this
idea, Libre.fm could bring the music directly to the user's fingertips and
ears with minimal effort from the user him-/her-self.

What I imagine is that by using the Libre.fm API integrated into music
players and P2P clients we could make access to music a lot more natural:

1) Let's say Jimmy listens to a music stream, either in the browser or his
favourite music program.
2) Jimmy likes the current song and wants to check out the album, so he
clicks on the "download" button.
3) Automatically his computer (or other device) downloads the album and
puts it into his music collection, without bothering him about it.
4) When Jimmy goes offline he can still enjoy the music, without bothering
too much about where, how and using which protocols he got his music.

So, from Jimmy's point of view, he would just click (or drag and drop or
whatever) the download button and that was it!

// Technicalities of the Use Case //

Here's how I imagine the above use case would work in the background
(be warned, I'm not much of a coder!) with already existing technology:

1) Jimmy launches Amarok and tunes into the Libre.fm plugin to check his
recommended artists.
2) When he hears something he likes, he clicks the little "download" button
in Amarok.
3) Amarok uses the Libre.fm API to check out the torrent/magnet link and
uses the user's default BitTorrent application to open it — let's use
KTorrent, because I'm somewhat familiar with its scripting API.
4) KTorrent has a script implemented, which would check where the
torrent/magnet link came from and/or which tracker, and because it came
from one of the free music net labels, it would automatically apply the
user's settings for it (e.g. download to the music collection folder, any
bandwidth restrictions, add it to the appropriate torrent group etc.).
5) At the same time the same KTorrent script would check if there's any
seeds available. If there aren't any (for a longer period of time) it would
trigger a direct download, which it would get from the Libre.fm API,
uncompress the album (if needed) and move its contents into the music
folder accordingly. Then it would start seeding that same album, so the
next user(s) could already use the benefit of P2P.
6) Amarok would automatically notice the new album in its local collection
and Jimmy would see it there with artwork and all.

For this to work optimally, all the albums in the torrents would have to be
ready for use the moment they finish downloading — e.g. not
compressed, all tracks (including artwork and license) in a folder.

I imagine we could stretch this even further, so that even if the user didn't
start the download from a Libre.fm page or service, the system (e.g.
browser, music player, BitTorrent client or all of them) could check via API
if there is a torrent/magnet link available on Libre.fm for the album. Maybe
it would be plausible (I doubt it though) even to get BitTorrent clients to
upload torrent/magnet links to Libre.fm.

Of course, it shouldn't be limited only to KDE software — that's why I think
it would be a great idea to use Libre.fm and its API as the central
intermediary for it all.

// Conclusion //

I know this is quite an enthusiastic idea, but I'm already working on
something similar for Jamendo using Amarok (the plugin's there) and
KTorrent (I'm writing the script) — and it seems possible to do!

So far the biggest problem I encountered was that KTorrent is not
forwarding some methods that I need to their API, but the devs already
have that planned.

Of course, doing this on such a grand scale as I propose, would be more
difficult, but I think it'd be worth it! As I said, although I'm not much of a
coder, I'll be happy to at least write the Amarok and KTorrent scripts in
order to make it work.

So... what do you think of it?

Cheers,
Matija "hook" Šuklje (a.k.a. "silver_hook")

hook out >> studying, going to bed

Planet Larry is not officially affiliated with Gentoo Linux. Original artwork and logos copyright Gentoo Foundation. Yadda, yadda, yadda.