Around the world alarm bells are going off that we are running out of IPv4 address space.
Yet despite the imminent exhaustion of the remaining pool of new IPv4 addresses, there seems to be no large-scale movement to adopt IPv6. A number of networks have implemented dual-stack support, but the deployment of IPv6 applications and support services by institutions such as universities and large enterprises is moving at a snail's pace.
I very much worry that IPv6 is a clear market failure, and yet no one is seriously thinking about alternative solutions that would allow a graceful migration from IPv4. We are stuck in a Mexican standoff where no one wants to admit that maybe this glorious future of IPv6 is never going to happen. When governments start advocating and legislating the adoption of IPv6, I really start to worry. As the Economist magazine has pointed out, a true sign that a standard or technology is in its death throes is when governments start advocating its adoption – think OSI, or more recently NGN.
One of the challenges for institutions moving to IPv6 is that it provides no new functionality, only added cost. As the brilliant Geoff Huston pointed out in an excellent article on this subject (http://www.potaroo.net/), the problem is that if you move to IPv6 you still need to maintain all your IPv4 addresses and services. IPv6-to-IPv4 address translation boxes are available to provide internal IPv6 services, but IPv4 addresses will always be needed externally until virtually everybody else switches to IPv6. It is a classic Catch-22 phenomenon.
The Internet is not the only technology that has run into problems with address scaling. In years past the North American telephone system and the US post office have run into this issue as well. Their solution was much simpler: rather than building a whole new addressing system from scratch, they extended the existing addressing system by adding suffixes and/or prefixes. I am sure such an idea must have been proposed in the early discussions on a successor to IPv4, and I am surprised it has never gained any currency. To my simplistic mind, adding a 16, 32, 64 or 128 bit suffix to existing IPv4 addresses would allow us to assign individual addresses to every conceivable device on the planet. But existing IPv4 address boundaries would ensure the same aggregation of routes as today. Existing routers would not have to be changed. New routers could be programmed to prioritize routes based on the much longer, finer-grained routes identified by the new suffix. This would allow a much more graceful and unsynchronized migration away from today's mess.
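To make the idea concrete, here is a minimal sketch of how such a suffix scheme might work. It is purely my own illustration, not any proposed standard: the 64-bit suffix width and the encoding are assumptions, and the point is only that the legacy IPv4 bits stay in the most significant position so existing prefix aggregation survives untouched.

```python
import ipaddress

SUFFIX_BITS = 64  # hypothetical suffix width; 16, 32 or 128 would work the same way

def extend(ipv4: str, suffix: int) -> int:
    """Pack a legacy IPv4 address and a new suffix into one extended address.
    The 32 IPv4 bits stay at the top, so existing prefix-based route
    aggregation is unchanged."""
    base = int(ipaddress.IPv4Address(ipv4))
    return (base << SUFFIX_BITS) | suffix

def legacy_part(ext: int) -> str:
    """Recover the plain IPv4 address an old router would still route on."""
    return str(ipaddress.IPv4Address(ext >> SUFFIX_BITS))

# Example: every device behind 203.0.113.7 gets its own globally unique
# extended address, yet all of them still aggregate under 203.0.113.0/24.
dev_a = extend("203.0.113.7", 0x0001)
dev_b = extend("203.0.113.7", 0x0002)
print(hex(dev_a), hex(dev_b))
print(legacy_part(dev_a))  # -> 203.0.113.7
```

An old router would simply ignore the suffix bits, while a suffix-aware router could do longest-prefix matching over the full 96 bits, which is the unsynchronized migration path I have in mind.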
The other challenge we face is the politics and technology associated with the global DNS. Government crackdowns on Wikileaks and proposed legislation like COICA in the US and LOPPSI in France are concerning many of those who believe in an open and free Internet for all of us. Lauren Weinstein has maintained an excellent blog on this subject (http://lauren.vortex.com/) and is promoting a new concept for a distributed, secure DNS system called IDONS. I agree 100% with Lauren that "Given the availability of advanced crypto systems and distributed networking methodologies, highly evolved search engine capabilities and so on, the old DNS model simply does not make sense any more. And a number of recent events have obviously rather dramatically exposed the vulnerabilities and unjustifiable cost structures that have unfortunately now become part and parcel of the DNS system as we know it today." IDONS also has the potential to address many of the current abuses of the DNS system in terms of domain name squatting, copyright issues and internationalization.
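IDONS is still a concept rather than a specification, so the sketch below is only my own illustration of the general idea behind crypto-based, registry-free naming: a name derived from a public key certifies itself, and a signed record can be fetched from any untrusted peer and verified locally, with no registrar or trusted root in the loop.

```python
# Illustrative sketch only: one generic way a self-certifying name could
# work. IDONS itself may look nothing like this.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# The site owner generates a keypair; the "name" is derived from the public
# key, so no central registry has to vouch for who owns it.
private_key = Ed25519PrivateKey.generate()
public_raw = private_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
name = hashlib.sha256(public_raw).hexdigest()[:16]

# A name-to-address record is signed by the owner and can be distributed
# through any untrusted channel: a peer, a search index, a DHT.
record = f"{name} A 198.51.100.10".encode()
signature = private_key.sign(record)

# Verification needs only the record, the signature and the public key;
# tampering anywhere along the way makes verify() raise an exception.
private_key.public_key().verify(signature, record)
print("record verified:", record.decode())
```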
But Lauren is not the only one who is rethinking DNS; the people who brought you BitTorrent are also advocating a new P2P DNS architecture. According to the project's website, the goal is to "create an application that runs as a service and hooks into the host's DNS system to catch all requests to the .p2p TLD while passing all other requests cleanly through. Requests for the .p2p TLD will be redirected to a locally hosted DNS database." Creating a .p2p TLD that is totally decentralized and that does not rely on ICANN or any ISP's DNS service will, they argue, help ensure an Open Internet.
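The project itself is at the idea stage, so here is only a toy sketch of the split-resolution logic the website describes: .p2p lookups are answered from a local store while everything else passes through to the normal resolver. The in-memory database and the example names are my assumptions; the real design would populate that store over the P2P network.

```python
import socket

# Toy stand-in for the decentralized .p2p store the project describes.
P2P_DB = {"example.p2p": "10.1.2.3"}

def resolve(hostname: str) -> str:
    """Catch .p2p lookups locally, pass everything else through untouched."""
    if hostname.endswith(".p2p"):
        try:
            return P2P_DB[hostname]
        except KeyError:
            raise OSError(f"no .p2p record for {hostname}")
    # Any other TLD goes to the ordinary system resolver, exactly as before.
    return socket.gethostbyname(hostname)

print(resolve("example.p2p"))  # served from the local database
print(resolve("example.com"))  # forwarded to conventional DNS
```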
With all these pressures from COICA, LOPPSI, the RIAA, the MPAA and renewed interest in Deep Packet Inspection, there is also growing momentum to encrypt all Internet traffic edge to edge using a network of encrypted overlay tunnels. Tor (http://www.torproject.org/) is a good example of such an initiative. At the end of the day, those who want to control and restrict the Internet are only creating an environment that ensures the exact opposite outcome.
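To show what "a network of encrypted overlay tunnels" means in practice, here is a toy sketch of the layered ("onion") encryption that networks like Tor are built on. Real Tor uses its own circuit protocol, not Fernet, and the three-hop chain here is just an assumption for illustration.

```python
# Toy illustration of layered ("onion") encryption for an overlay path.
from cryptography.fernet import Fernet

# One symmetric key per relay hop in the overlay path.
hop_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(payload: bytes) -> bytes:
    """Encrypt for the last hop first, so each relay can peel one layer."""
    for key in reversed(hop_keys):
        payload = Fernet(key).encrypt(payload)
    return payload

def unwrap(onion: bytes) -> bytes:
    """Each hop strips exactly one layer; only the exit sees the payload."""
    for key in hop_keys:
        onion = Fernet(key).decrypt(onion)
    return onion

packet = wrap(b"edge-to-edge secret")
# A middle relay sees only ciphertext, never the sender, receiver and
# content together, which is what defeats Deep Packet Inspection.
print(unwrap(packet))  # -> b'edge-to-edge secret'
```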
While there is considerable university research around the world into next-generation Internet architectures, such as FIRE and GENI, so far there seems to be very little interest from the academic community or the R&E networks in exploring these new technologies. To my mind, R&E networks and initiatives like UCAN in the US should be at the forefront of the battle to maintain an open and universal Internet that is available for the rest of us. I have long argued that the Internet is more than a technology: it is one of the most powerful societal tools since the invention of the alphabet and the printing press for exchanging information and knowledge and for enabling freedom of expression.
While Gutenberg invented the printing press, he saw it only as a tool for more efficient copying of the Bible. It was an Englishman named William Tyndale who grasped the significance of the printing press as a means of mass distribution and of educating the masses. For his troubles he was burned at the stake by the Catholic Church – a fate that many telco executives, movie and record producers could only wish upon those who are fighting for an open Internet today.