Saturday, February 23, 2008
The Server Dilemma
The following quote from the article Second Earth (MIT Technology Review, July-August 2007) provides some interesting numbers on the server dilemma facing server-hosted virtual worlds such as Second Life:
This reimagining of the real world can go only so far, given current limitations on the growth of Linden Lab's server farm, the amount of bandwidth available to stream data to users, and the power of the graphics card in the average PC.
According to [Cory] Ondrejka [Linden Lab's now former CTO], Linden Lab must purchase and install more than 120 servers every week to keep up with all the new members pouring into Second Life, who increase the computational load by creating new objects and demanding their own slices of land. Each server at Linden Lab supports one to four "regions," 65,536-square-meter chunks of the Second Life environment--establishing the base topography, storing and rendering all inanimate objects, animating avatars, running scripts, and the like. This architecture is what makes it next to impossible to imagine re-creating a full-scale earth within Second Life, even at a low level of detail. At one region per server, simulating just the 29.2 percent of the planet's surface that's dry land would require 2.3 billion servers and 150 dedicated nuclear power plants to keep them running. It's the kind of system that "doesn't scale well," to use the jargon of information technology.
But then, Linden Lab's engineers never designed Second Life's back end to scale that way. Says Ondrejka, "We're not interested in 100 percent veracity or a true representation of static reality."
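The arithmetic checks out, by the way. Here is a quick back-of-the-envelope verification in Python: the land fraction and region size come straight from the quote, while the Earth's surface area (roughly 510 million square kilometers) and the per-plant output (roughly 1 GW) are my own assumed reference figures, not numbers from the article.

    # Back-of-the-envelope check of the article's server math.
    EARTH_SURFACE_M2 = 5.1e14   # ~510 million km^2 (assumed reference value)
    LAND_FRACTION = 0.292       # "29.2 percent ... dry land" (from the quote)
    REGION_M2 = 65_536          # one Second Life region (from the quote)

    land_m2 = EARTH_SURFACE_M2 * LAND_FRACTION
    servers = land_m2 / REGION_M2            # worst case: one region per server
    print(f"Servers needed: {servers:.2e}")  # ~2.27e9, the article's 2.3 billion

    # Power claim: 150 plants at an assumed ~1 GW each, spread across them.
    watts_per_server = 150 * 1e9 / servers
    print(f"Watts per server: {watts_per_server:.0f}")  # ~66 W, a plausible draw

Even at the quote's best case of four regions per server, the count only drops to roughly 570 million machines, so the "doesn't scale well" verdict holds either way.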
1 comment:
I think it's been a while since anything was actually built to scale within the technological community.