So I wrote about IPv6 a few months back. Tunneling it over IPv4, general networking with it, and even ping6'ing Google.
Been using it ever since.

Whoa now, wait a minute! People use IPv6?
For the most part, I set it up and poked at it for vanity purposes. "Hey, look at me! I'm speaking a protocol that your router has no idea what to do with!" I had little actual use for it. I never had any real problems, but no real benefit either.
But it's been a few months, and I recently had my "IPv6 Epiphany." So here, have some random bits of info that I've picked up while playing with it.

The Problems

1. IPv6-in-IPv4 tunnels aren't really firewall friendly, nor are they the easiest thing to configure.
I wound up whitelisting my home router's IPv4 address on my server, exempting it from all other iptables rules. This fixed a problem that cropped up whenever I rebooted my server: the reboot reset my firewall rules to their saved state and broke my ability to SSH into my server from home without specifying -4. Further, configuring a tunnel with iproute2 is pretty easy. Configuring a tunnel from CentOS to Debian using the "proper system-specific methods" really isn't. Debian I got working. CentOS I didn't, and I wound up writing a pseudo-service to manage the tunnels and routes. All things considered, I probably would have wound up doing the same thing for my Debian router if it were as overloaded as my server in terms of IPv6 config.
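For reference, the iproute2 side of this really is only a few commands. A rough sketch of the server end of a 6in4 tunnel plus the whitelist rule (every address and prefix here is a made-up documentation range, not my real config):

```shell
# Server end of a 6in4 tunnel; local/remote v4 addresses and the
# v6 prefixes are illustrative placeholders.
ip tunnel add tun6 mode sit local 198.51.100.2 remote 203.0.113.7 ttl 64
ip link set tun6 up
ip addr add 2001:db8:0:1::1/64 dev tun6
ip route add 2001:db8:100::/48 dev tun6

# Exempt the home router's v4 address from all other iptables rules,
# so the protocol 41 (6in4) traffic survives a rule reload:
iptables -I INPUT 1 -s 203.0.113.7 -j ACCEPT
```

The whitelist is blunt; matching only protocol 41 from that address would be tighter, but the blanket exemption is what actually fixed my reboot problem.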
Plus you have the increased latency. As a whole, this hasn't been a problem for me.

2. Not everyone who runs IPv6 maintains their v6 stack nearly as well as they do their v4 stack.
This has proven to be a problem. For example, I was looking into H.323, and tried to open up the OpenH323 project's site.

The problem lies in the DNS. The OpenH323 project had a v6 DNS server. This server did not respond to queries coming over the v6 transport, breaking DNS resolution nicely for me. When I went poking with dig, it responded happily over v4. (It seems that their DNS is broken for both v4 and v6 now, so perhaps it was coming anyway. But the point stands: when your site works, you're content. You're not going to spend time checking that it works over both v4 and v6. This leads to problems.)

3. Application support.
From a sysadmin standpoint, nearly every computer out there has a DHCP client. Wait, sorry. Nearly every computer out there has a DHCPv4
client. This poses a problem when it comes to v6 connectivity. This is one area where Vista is quite a bit ahead of the *nix - they ship a DHCPv6 client and full stateless v6 autoconfig
support by default. Their stateless autoconfig leaves a bit to be desired, as it ignores RDNSS data in the router advertisements, but they have documented
how to get full DNS resolution on a stateless-only interface. It's pretty simple.
Linux, at a minimum, has a stateful DHCP client
kicking around, but it isn't installed or even mentioned in most distro networking guides. It's not even available in several distros. The kernel has great stateless autoconfig, but RDNSS isn't exactly a kernel space setting either. There is a user space tool
around that watches for the router adverts and adjusts /etc/resolv.conf as needed, but it's even less known than the stateful DHCP client.
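For what it's worth, the tool I'm referring to is (as far as I know) rdnssd, from the ndisc6 suite: it listens for RDNSS options in router advertisements and updates /etc/resolv.conf accordingly. On a Debian-ish box it's roughly:

```shell
# rdnssd watches router advertisements for RDNSS options and rewrites
# /etc/resolv.conf (via resolvconf, where present). Package name may
# vary by distro; this is the Debian shape of it.
apt-get install rdnssd
/etc/init.d/rdnssd start
```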
There are also a couple really popular open source programs out there that don't speak v6 at all. Two bug me to this day: MySQL and Asterisk. MySQL is really not too huge of an issue right now, but to my knowledge they aren't even working on it. Maybe Drizzle will.
Asterisk is really the bigger issue. One of the largest roadblocks to getting VoIP with SIP to play nicely is NAT. To put it simply, SIP doesn't work with NAT. I can see (properly done) VoIP being a huge, monumental boost of support for v6, and a fantastic reason to get it working. Nearly the entire point of deploying v6 now is massively increased connectivity (with v4 connectivity dropping drastically in the near future). The current v4 (NAT) landscape is incredibly inhibiting to SIP, and while you can argue the relative merits of SIP versus any other VoIP protocol, the value of having full connectivity from any one device to any other device really can't be overstated.
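To make the "default deny" idea concrete: an ip6tables sketch, assuming a kernel with v6 connection tracking, with hypothetical prefixes and the standard SIP port (5060) standing in for whatever you actually run:

```shell
# Default deny inbound, plus the handful of exceptions you actually want.
ip6tables -P INPUT DROP
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A INPUT -p icmpv6 -j ACCEPT   # v6 breaks badly without ICMPv6

# The "one rule" for SIP signaling (media ports need their own handling):
ip6tables -A INPUT -p udp --dport 5060 -j ACCEPT

# And port 80, but only toward a hypothetical server subnet of the /48:
ip6tables -A INPUT -d 2001:db8:0:100::/64 -p tcp --dport 80 -j ACCEPT
```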
(A note to you "but I like NAT because it's a great firewall!" people: first off, no it isn't. Second, there is a very simple rule here that both mirrors what you "get" with NAT and is arguably more secure than NAT. It happens to be called "default deny." From there, if you want to support VoIP, you can add one single rule and have great VoIP support. Have a /48 that houses both users and servers? Great - subnetting is your friend. Just open :80 to the server subnet.)

4. IPSEC.

It still sucks to configure, and it's only going to become more important with enhanced device-to-device communication. It isn't supported by any mobile phone I'm aware of. Have a mobile phone that can connect to a SIP server over wifi? Great! Can it do IPSEC? Nope. Sure, TLS exists for a reason, but full-blown IPSEC has numerous advantages over TLS, and it really isn't supported anywhere but the router and the desktop. (Plus it still sucks to configure.)
Does your (insert handheld gaming device here) support IPSEC? No? Well sure, I wasn't expecting it to, but it'll be interesting to see how this plays out over the next couple years.

5. Reverse DNS.

3.d.126.96.36.199.e.f.f.f.b.3.f.1.2.0.d.f.f.f.b.2.8.d.0.7.4.0.188.8.131.52.ip6.arpa.
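Names like that are entirely mechanical to produce, which is why the tools help. For a fully expanded address, even plain shell gets you there (using the documentation prefix, not my real address):

```shell
# Reverse-nibble name for a *fully expanded* IPv6 address:
# strip the colons, reverse the nibbles, dot-separate, append ip6.arpa.
addr="2001:0db8:0000:0000:0000:0000:0000:0001"
name="$(echo "$addr" | tr -d ':' | rev | sed 's/./&./g')ip6.arpa."
echo "$name"
# 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
```

The compressed form (2001:db8::1) has to be expanded first, which is exactly the sort of thing ipv6calc handles for you.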
Do I really need to say just how much configuring reverse DNS sucks? No? Good. Is there a better solution? Probably not. I'm just glad that dig and ipv6calc are of use here, so I don't have to manually type out every full-length DNS record.

Advantages

1. Pretty much every single application I've used on Linux supports it very well.
Everything from HTTP to IMAP to Kerberos to SSH is operating flawlessly for me over v6. Vista has v6 CIFS, rdesktop, and RPC. I could make a full list here of what is supported in terms of services and clients across different OSs, but really, the list of what isn't properly supported is shorter at this point. And yes, for the most part, that applies to Windows too.

2. It's supported, well, by every (very?) modern OS.
Vista just works with it. Linux just works with it.
There are some "gotchas" with both, but they'll be resolved over time and as more and more sysadmins come to use it. Vista actually has a default 6to4 tunnel built in, that starts up if you have a public v4 address. Even if your ISP doesn't support v6 (not that any of them do), if you can plug your Vista box in straight to the internet, you'll get v6 without any configuration or hassle.

3. NAT really sucks. The simple connectivity provided by v6 rocks.
Now this leads back to how I started this entire post. First, a bit of background.
I run CentOS on my server. SSH has all password-based authentication disabled, and only supports Kerberos (GSSAPI) and pubkey auth.
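In sshd_config terms, that setup looks roughly like this (a sketch, not my exact config):

```
# Relevant /etc/ssh/sshd_config directives for key/Kerberos-only auth:
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
GSSAPIAuthentication yes
```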
I have a few RPMs that I need to rebuild to support a few extra things (namely postfix to support mysql, and kerberos to support an LDAP backend). I'd rather not keep all of the needed -devel packages installed on my server, and I'd also prefer to keep gcc and the rest of the needed buildutils not installed. The obvious solution is to rebuild them on another CentOS box, create a mini RPM repo, and then just use yum to install them. The process is simple enough.
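The build-side mechanics, sketched (package names, paths, and the hostname are placeholders):

```shell
# On the build box: rebuild the source RPM with the needed options,
# then publish the results as a tiny yum repo over HTTP.
rpmbuild --rebuild postfix-*.src.rpm
createrepo /var/www/html/repo/

# On the server, a repo file such as /etc/yum.repos.d/local.repo:
#   [local]
#   name=locally rebuilt RPMs
#   baseurl=http://buildbox.example.com/repo/
#   gpgcheck=0
yum install postfix
```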
The trick comes in actually getting those rebuilt RPMs to my server. This is also where v6 happens to make my life incredibly easy.
My CentOS "build box" is a VM running on my Vista box. This is really the best solution for me. I don't need to have a dedicated CentOS box here, and as a result of that I can click "turn off" and forget about it entirely until I need it again. This is probably the only really good use of VMs that I've found so far, but I digress.
As mentioned, you can't log in to my server over SSH without an SSH key or Kerberos auth. This means that I can't just scp them up to my server without either copying my existing key(s) over, or generating new keys and adding them to the server.
It was at this point I realized that my v6 setup meant that my VMs had public v6 addresses. And then a light clicked on.
I fired up rsync in daemon mode on the VM, copied down its v6 address, and then from the server used rsync to pull the packages over.
And it just worked. No port forwarding. No key configuration. No advanced auth config for the VM. I could have used apache+wget just as easily. I was able to start a service (on a VM that sits on a host behind NAT) and use it without any hassle, without any VPN trickery - it just worked.
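The whole exchange, sketched with made-up addresses and paths:

```shell
# On the VM: export the repo with a throwaway rsync daemon config.
cat > /tmp/rsyncd.conf <<'EOF'
[repo]
    path = /home/build/repo
    read only = yes
EOF
rsync --daemon --config=/tmp/rsyncd.conf

# On the server: pull it straight over the VM's public v6 address.
rsync -av "rsync://[2001:db8:100:1::5]/repo/" /var/local/repo/
```

No keys, no forwarding: the server just connects to the VM's globally routable v6 address.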
If you compare the effort it would take to set up v6 on your home network and an "external" network against the port forwarding, NAT translation/incompatibility, "hey, this port is already in use by another NAT'd device, guess that means we get to start using extensive proxies or odd ports" mess that may be involved in something as simple as just getting host A to talk to host B...
... I think v6 comes out ahead in terms of what you get and the time it takes to make it work.