IP v6 – who cares and/or so what?

For reasons that escape me, many of my initial pokings at IP v6 devolve into references to things in the Hitchhiker’s Guide To The Galaxy (the novel, and not the actual guide, which doesn’t actually exist, mostly.)

IP v6 on Wikipedia: http://en.wikipedia.org/wiki/IPv6

There are several reasons to care, above and beyond wanting to be in front of early-adopter customers:

  • Its improved address space is both larger and better organized, which makes managing information from/about large numbers of devices easier.
  • Improvements and additions to protocols; specifically I’m thinking of:
    • Multicast (sending a single packet to multiple destinations)
    • Potential for improved performance, especially when transiting routers, because v6 routers never fragment packets
    • Mobile routing as efficient as “regular” routing
    • Jumbograms – want 4 GiB in a packet?  OK.  No problem.
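As one small, concrete handle on the multicast point above: all v6 multicast addresses live in ff00::/8, and Python’s standard ipaddress module can classify them.  A minimal sketch (the specific addresses are illustrative, not anything from our network):

```python
import ipaddress

# IPv6 multicast addresses occupy ff00::/8; ff02::1 is the link-local
# "all nodes" group, and 2001:db8::/32 is the documentation (unicast) range.
all_nodes = ipaddress.ip_address("ff02::1")
unicast = ipaddress.ip_address("2001:db8::1")

print(all_nodes.is_multicast)                          # True
print(unicast.is_multicast)                            # False
print(all_nodes in ipaddress.ip_network("ff00::/8"))   # True
```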

Lastly, at least for the moment, IP v6 seems to be surrounded by an SEP field.  Rather than waiting for someone else to do something about it, we’re going to grasp the nettle firmly and pull ourselves up by our bootstraps and eat our own dogfood.  Or something.  It’s going to have to happen, because v4 is already creaky.  To quote a total scumbag, we really need to “get thar fust with the most men.”

The Hitchhiker’s Guide to the Galaxy has this to say about IP v6’s address space,

“Bigger than the biggest thing ever and then some. Much bigger than that in fact, really amazingly immense, a totally stunning size, real ‘wow, that’s big’, time. Infinity is just so big that by comparison, bigness itself looks really titchy. Gigantic multiplied by colossal multiplied by staggeringly huge is the sort of concept we’re trying to get across here.”

In the interest of honesty, as slim as that may be, I should also point out that the HHG actually says that about infinity, and not IP v6, but it could have.  Perhaps this quote is a bit more accurate,

The car shot forward straight into the circle of light, and suddenly Arthur had a fairly clear idea of what infinity looked like. It wasn’t infinity in fact. Infinity itself looks flat and uninteresting. Looking up into the night sky is looking into infinity — distance is incomprehensible and therefore meaningless. The chamber into which the aircar emerged was anything but infinite, it was just very very big, so that it gave the impression of infinity far better than infinity itself.

That’s why it’s important.  Because there’s room for everything.  Everything, or near enough to everything that whatever’s left over isn’t worth bothering oneself about.  Currently, IP v4 is 32 bits of address space, which, if it were evenly parceled out, would give only about 2/3 of the people on the planet a single address apiece.  Not nearly good enough, since the phone companies want everyone to have a phone, right?

IP v6 has 128 bits/16 bytes, which, as described above, is beyond big.  “In a different perspective, this is 2^52 (about 4.5×10^15) addresses for every observable star in the known universe.” – Wikipedia article on IP v6.  When CygNet is ready to monitor every cell and bacterium in your body, we’ll be able to assign each of them their own IP address.

So, yes, rather large enough.  For now.
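The back-of-the-envelope numbers above are easy to check; a quick sketch (the population and star-count figures are rough assumptions of mine, not numbers from anywhere authoritative):

```python
V4_BITS, V6_BITS = 32, 128
v4_total = 2 ** V4_BITS                 # 4,294,967,296 addresses
v6_total = 2 ** V6_BITS                 # ~3.4 x 10^38 addresses

world_population = 6.8e9                # rough figure, assumed
print(v4_total / world_population)      # ~0.63 -- about 2/3 of an address each

observable_stars = 7e22                 # rough estimate, assumed
print(v6_total / observable_stars)      # ~4.9e15 -- the order of the 2^52 figure
```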


A certain degree of frustration

I’m now on iteration…mmmmm….four, I think, of pushing IP v6 into CygNet, and I’m making invisible progress, I’d say.  That’s the kind where you seem to be doing a lot and accomplishing little of substance.  But it leads to good situations like today’s, where I realized that trying to keep the v4 infrastructure and shovel in the v6 isn’t going to be a good idea, even for a prototype.

Part of the problem is that CygNet is very nearly perfect in its architecture, in the sense that it works without requiring enormous computational overhead, and that it’s been pruned by expert code gardeners wielding sharp and precise tools.  There’s no fat, and that’s the source of one of my problems.  In many systems it’s possible to exploit inherent inefficiencies in objects/classes/entities.  We don’t have those, in any meaningful way.

The other issue, and the one that’s been stymieing me, is that the network world has changed.  When CygNet rolled out, most non-gateway (router) machines had a single NIC and a single address, which rarely changed.  Today, though, it’s hard to even define what “computer” or “host” means.  The research server has six IP addresses scattered across 3 physical NICs and 3 virtual ones (for VMware).  It has inward- and outward-facing names/addresses (frith.research.dom and research.cygnetscada.com).

It has at least 4 different copies (in 2 versions) of CygNet running on it: one on the main OS and the others on simultaneously executing VMs.

Now, you tell me, does it make sense to say of CygNet, “It’s running on the research server”?  Well, yes, it does, if you’re willing to accept that “server” no longer means a physical computer, but instead refers to a logical locus of execution.  This is, I should point out, part of the premise of cloud computing.  Unfortunately for us, simply pushing CygNet onto v6 without any other changes wouldn’t accomplish much, in my current opinion.
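The many-addresses-per-host point is easy to see from code.  A minimal Python sketch (run against localhost here so it works anywhere, but frith.research.dom or any other multi-homed name would show the same many-to-many behavior):

```python
import socket

def addresses_for(host):
    """Return the distinct (family, address) pairs a single name resolves to."""
    infos = socket.getaddrinfo(host, None)
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the address.
    return sorted({(info[0].name, info[4][0]) for info in infos})

# One "host" name can map to several addresses, v4 and v6 alike:
for family, address in addresses_for("localhost"):
    print(family, address)
```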

And then there’s NAT: we already have problems with NAT, because NAT is the Wrong Thing, poorly implemented.  It broke the fundamental design of end-to-end connectivity upon which the ‘net (and CygNet) was predicated.

It’s not that we should stop pushing v6 into the product, but simply swapping out the v4 for v6 addresses isn’t going to teach us anything, and will cost us in the long run because we’ll need to come to terms with the v6 world and the coming distributed environments.

Architecture Notes

Imagine I wanted to run Cygnipede services on the Research server.  What I’m saying by this is that there is a business domain (Research) in which computing resources are [probably] available to execute “business” (logical) transactional activities.

Consequence:  rather than being host centric (and literally tied to a single IP address), a v6 CygNet could use a scope id to define a logical set of addresses from which services could be requested.  Beginning at the bottom (CAddressBytes()), the v6 changes need to accommodate a logical-locus-of-execution model.  Entities which currently use CAddressBytes() as a way to access services will need to move to a new entity like “CLogicalDomain()”, which is itself a list (ideally dynamic) of CServiceHosts, each of which is a list of 1 or more service/host names and corresponding addresses.  Note that IP is clever in that you can have multiple addresses associated with multiple names.
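CygNet itself is C++, but the shape I have in mind is easy to sketch in Python.  Only the CLogicalDomain/CServiceHosts names come from the note above; the field layout, the candidates() helper, and the sample names/addresses are my own illustrative assumptions, not existing CygNet API:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceHost:
    # One service host: one or more names, each backed by one or more addresses.
    names: list
    addresses: list

@dataclass
class LogicalDomain:
    # The logical locus of execution: a (dynamic) list of hosts that can
    # service requests, rather than a single hard-wired IP address.
    scope: str                                 # e.g. a v6 scope/zone id
    hosts: list = field(default_factory=list)

    def candidates(self):
        """Every address a service request could reasonably be sent to."""
        return [addr for host in self.hosts for addr in host.addresses]

dom = LogicalDomain(scope="research")
dom.hosts.append(ServiceHost(
    names=["frith.research.dom", "research.cygnetscada.com"],
    addresses=["10.0.0.5", "fe80::1"],      # sample addresses, made up
))
print(dom.candidates())   # ['10.0.0.5', 'fe80::1']
```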

More later after I try some more mods to CAddressBytes().

SCTP – the other, other protocol

What is SCTP?

Wikipedia: http://en.wikipedia.org/wiki/SCTP

“In computer networking, the Stream Control Transmission Protocol (SCTP) is a Transport Layer protocol, serving in a similar role as the popular protocols Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). It provides some of the same service features of both, ensuring reliable, in-sequence transport of messages with congestion control.

The protocol was defined by the IETF Signaling Transport (SIGTRAN) working group in 2000, and is maintained by the IETF Transport Area (TSVWG) working group. RFC 4960 defines the protocol. RFC 3286 provides an introduction.”

Why should I care?

It’s record oriented and has separate message and control channels, but most importantly it allows each end of a “stream” to be multi-homed, something which is very important in the larger IPv6 and/or mobile worlds.

It was an outgrowth of the work on SS7.  It’s also a sort of hybrid of UDP and TCP, which is similar to how we ended up growing our own reliable UDP.

In theory, it looks ideal for the next generation of CygNet.  In the real world, of course, things can look very different.  I’m not suggesting that we do anything other than look closely at it for possible exploitation and/or lessons.

Definitely read the article.
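One practical caveat: whether SCTP is even usable depends on OS support.  A hedged probe in Python (the socket module exposes IPPROTO_SCTP only where the platform defines it, and socket creation can still fail if the kernel lacks the protocol):

```python
import socket

def sctp_status():
    """Probe for SCTP support: constant present first, then kernel willing."""
    if not hasattr(socket, "IPPROTO_SCTP"):
        return "no constant"
    try:
        # One-to-one style SCTP socket (the TCP-like flavor).
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
        s.close()
        return "usable"
    except OSError:
        return "constant only; kernel says no"

print(sctp_status())
```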

Some Other Articles

May you network in interesting times

Part and parcel of learning about IPv6 has been establishing an internal infrastructure capable of running it, one which must be readily reconfigurable in order to test common configurations.  Stephen H and I have spent dozens of hours trying various modern, common, small-network setups as part of the research v6 testbed.

In no particular order, here are some things we’ve learned:


China  – For years we used to say to people who claimed to be interested in network security, “Listen, most Internet-based attacks are coming from address blocks allocated to China, and tracerouting would seem to indicate that there are thousands and thousands and millions of break-in attempts every day.” And…they’d look at me as though I’d shown them a mouthful of black beetles whilst wearing an aluminum-lined baseball helmet, so eventually I just quit talking about it. 

Mind you, this started in the early 1990s.  As I’ve been watching network traffic, I noticed it’s only gotten worse, so I’ve blocked basically all APNIC and Russian address blocks, which has significantly cut down on the random attack traffic I’ve seen.

Why should we care?  Because one big feature of v6 is that it [potentially] brings back the end-to-end architecture that is TCP/IP’s original primary design feature.  You won’t need NAT and everything will be hunky-dory and/or peachy keen, whatever those mean.  Unfortunately, one good side effect of NAT is security-through-inaccessibility, as in you can’t attack what you can’t get to.  With e-to-e v6, though, your hosts are potentially once again available for attack.  Because v6 hasn’t been widely tested (by crackers), it’s a good bet that many implementations will have a lot of security problems in their first few years of deployment.

Windows IP Helper Service – IPv6 transitional assistance service (“Provides tunnel connectivity using IPv6 transition technologies (6to4, ISATAP, Port Proxy, and Teredo), and IP-HTTPS. If this service is stopped, the computer will not have the enhanced connectivity benefits that these technologies offer.”)  While debugging some other network issues, I discovered that at least on one machine, having this service running was generating a lot of network activity to weird random IP addresses on the Internet.  It’s off for the moment until I can confirm that this is normal behavior.


DNS – Domain controllers with multiple NICs that are running DNS will not necessarily return the expected IP address.  The default setting is to round-robin-return the various NIC addresses that register themselves with DNS.  Our first thought was, “turn off round robin,” but that didn’t work; our second was, “don’t allow DNS registration,” but that didn’t work either.

Current status:  not solved

Current “solution,” from Microsoft: don’t run DCs with multiple NICs.  Seriously, this is their advice.