The problem with private Internets
A significant benefit of business school is sitting in on one dog and pony show after another. And then trying to rip them to shreds during the Q&A.
A couple of weeks ago I had the privilege of seeing a presentation by a product manager at the @Work division of the @Home network. Since @Home had launched their cable-modem service just a couple of weeks prior in Fremont, California, I thought it’d be interesting to hear how the darlings of the Valley were going to solve all of our bandwidth woes.
Their service is enticing, for sure. For a flat monthly rate of around $40, you get a cable modem and a dedicated connection to the ‘net at speeds approaching 10 megabits per second. He demonstrated some of these ungodly transfer rates with a videotape showing an instant download of a 500K JPEG through the @Home network alongside the same image being downloaded over a 28.8k modem.
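To put some rough numbers on that demo (my arithmetic, not theirs, ignoring protocol overhead and modem compression):

```python
# Rough transfer times for a 500K JPEG at @Home's ~10 Mbps versus a 28.8k modem.
file_bits = 500 * 1024 * 8            # about 4.1 million bits

cable_bps = 10_000_000                # the claimed ~10 megabits per second
modem_bps = 28_800                    # a 28.8k modem, best case

print(f"cable modem: {file_bits / cable_bps:.1f} seconds")       # ~0.4 seconds
print(f"28.8k modem: {file_bits / modem_bps / 60:.1f} minutes")   # ~2.4 minutes
```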
Even the T1 addicts were salivating. Until they started asking questions.
@Home is providing gobs of bandwidth through two primary features of their network: a fat pipe, and a private network of proxy servers. The fat pipe is easy enough for most people to visualize: take that thick black coaxial cable that MSNBC comes in on, and partition off a channel for downstream data and a channel for upstream data. Voila: MSNBC alongside MSNBC. Since it’s a dedicated connection, you can leave it up all day long without tying up a phone line.
The proxy servers are the other half of the equation. And the half that makes me nervous. In their presentation, and on their web site, @Home makes no bones about the fact that in order to give users the fastest downloads possible, they basically have to build their own private internet.
From the user’s point of view, the coaxial pipe from their cable modem goes out to the street, to a cable headend, and then up to a regional data center. Those regional data centers are connected to each other by a private ATM backbone, which in turn is connected to the Internet at large. (For a very basic schematic, check out @Home’s network architecture.) Some of the more astute members of the audience pointed out that the speed the user experiences will still be limited by the speed of the web sites they’re viewing, or by the traffic on the Internet at large.
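To put the audience’s point another way (a simplification on my part, with made-up numbers for everything past the fat pipe):

```python
# A page load can only go as fast as the slowest link between subscriber and site.
link_speeds_bps = {
    "cable modem -> headend":            10_000_000,  # the fat pipe
    "headend -> regional data center":   45_000_000,  # made-up figure
    "@Home backbone -> public Internet":  1_500_000,  # made-up figure
    "public Internet -> origin server":     128_000,  # a popular site on a busy day
}
bottleneck = min(link_speeds_bps, key=link_speeds_bps.get)
print(f"effective speed: {link_speeds_bps[bottleneck]} bps, set by '{bottleneck}'")
```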
“Ah, but the beauty of @Home is in the regional data centers,” the product manager replied. “We’re going to cache the most popular sites on the Internet there.”
What this means is that @Home will be offering subscribers a copied (or “cached” or “proxied”) version of the Internet at large. @Home’s regional data centers will periodically query the “most popular” sites on the Internet for changes, and make a copy of each site on their own servers. Thus, when an @Home user goes to look at Pathfinder, for example, they won’t be looking at the actual Pathfinder, but at a reasonable facsimile of Pathfinder sitting at @Home’s regional data center.
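For anyone who hasn’t bumped into proxy caching before, the mechanism boils down to something like this (a bare-bones sketch; the refresh interval and function name are mine, not @Home’s):

```python
import time
import urllib.request

CACHE = {}                     # url -> (time fetched, page contents)
REFRESH_SECONDS = 60 * 60      # hypothetical: re-check the real site once an hour

def handle_subscriber_request(url):
    """Serve a page to an @Home subscriber, preferring the cached copy."""
    cached = CACHE.get(url)
    if cached and time.time() - cached[0] < REFRESH_SECONDS:
        # Served from the regional data center; the request never
        # leaves @Home's private network.
        return cached[1]
    page = urllib.request.urlopen(url).read()   # one trip out to the actual site
    CACHE[url] = (time.time(), page)
    return page
```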
This gives @Home a definite speed advantage. An @Home user’s request for Pathfinder will never have to travel outside the @Home private network, and will never have to compete with the traffic of the great unwashed masses of normal Internet users.
On a pure bandwidth level, @Home will basically further balkanize the Internet. Communities whose cable companies are aligned with @Home will have an advantage over the rest of us. They will have access to privately cached web sites, at 10 megabits per second. The rest of us will be left to fight it out at the regional NAPs.
But @Home’s private internet is also an issue for web publishers. With a robust set of proxy servers, @Home could start to do all sorts of interesting things to content coming in off the ‘net. Censorship for families that don’t want their kids surfing Playboy, without any added delay. Or replacing a web site’s own ad banners with banners sold by @Home.
Admittedly, proxy servers are nothing new. AOL runs a whole host of proxy servers in order to make up for their own bandwidth limitations. And corporations run them all the time, in order to add a layer of security between their internal networks and the rest of the world. But @Home is treating proxy servers as a primary differentiator for their service. And as a small web publisher, that frightens me.
With a private internet, @Home has the power to make it very difficult for smaller web publishers to get access to @Home subscribers. What if their proxy server only gets around to updating my site once a month? What about sites that change daily? Or hourly? It’s not too hard to imagine a scenario where content providers will have to pay @Home to have their site proxied on a regular basis (daily, hourly) in order to provide @Home subscribers with a full bandwidth-intensive experience. It comes as no surprise that @Home is cutting deals with major online information providers for customized, proxied content for the @Home network.
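To make that scenario concrete, here’s the kind of refresh policy I’m imagining. The schedule, the site names, and the intervals are all hypothetical; nothing here comes from @Home.

```python
from datetime import datetime, timedelta

# Hypothetical refresh schedule: partners who cut a deal get re-proxied hourly,
# everyone else waits a month for their changes to reach @Home subscribers.
REFRESH_INTERVAL = {
    "pathfinder.com":       timedelta(hours=1),   # customized, proxied content deal
    "small-publisher.com":  timedelta(days=30),   # no deal
}

def next_refresh(site, last_fetched):
    """When @Home subscribers would next see a fresh copy of this site."""
    return last_fetched + REFRESH_INTERVAL.get(site, timedelta(days=30))

print(next_refresh("small-publisher.com", datetime(1997, 1, 1)))  # 1997-01-31
```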
Finally, let’s say for argument’s sake that a small web publisher named Jill does a good enough job promoting her web site that she can sell ads to support her writing habit. And, let’s say that @Home actually does bother to proxy her site. But what happens to Jill’s count of page views when @Home hits it once, and then serves it up to their X million subscribers? Does Jill get “credit” for those hits? No. Does Jill get paid for those hits? No.
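The arithmetic of that accounting problem is brutal (made-up numbers, but the shape is the point):

```python
# What Jill's server log shows versus what @Home subscribers actually read.
subscriber_page_views = 1_000_000   # made-up: @Home readers of Jill's page today
proxy_fetches = 1                   # the cache pulled her page from her server once

views_jill_can_show_advertisers = proxy_fetches
views_that_actually_happened = subscriber_page_views
print(f"{views_that_actually_happened - views_jill_can_show_advertisers:,} "
      "page views disappear into the cache")
```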
Even given the lousy state of traditional bandwidth offerings, if @Home were offered in my neighborhood I’d have to think twice.