Google Details Network Challenges, Seeks Academic Feedback

In an unprecedented move, Google has revealed the details of how it developed and improved software-defined networking (SDN). In a paper presented at the ACM SIGCOMM 2015 conference in London, Google described the steps taken over a ten-year period: moving from third-party vendor switches in 2004 to building its own hardware a year later to shuttle data among servers in its own data centers. The company is describing its network in part to share its experiences and to seek assistance from the academic community.

“Ten years ago,” says a Google blog post, “we realized that we could not purchase, at any price, a data center network that could meet the combination of our scale and speed requirements.”

By 2005, Google found its bandwidth demands were doubling every 12 to 15 months, and it decided on a custom-built approach to overcome the cost and operational complexity of third-party solutions, reports The Wall Street Journal, noting that “the effort was inspired by the company’s success in using commodity servers for high-performance computing.”

The work done by Google (and others) led to the development of software-defined networking, which is less expensive than conventional switching because it separates network control from the switching hardware, allowing the network to be managed centrally through software. Google’s paper provides the “technical details on five generations of our in-house data center network architecture.”
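To make the contrast with box-by-box switch configuration concrete, here is a minimal, hypothetical sketch of the SDN idea in Python: a central controller holds the network-wide view and programs forwarding rules into switches through software. All of the names here (Switch, Controller, program_route) are illustrative assumptions, not Google’s API or any real SDN framework.

    # Sketch of the SDN control model: switches only apply rules;
    # a central controller computes routes and installs them everywhere.
    class Switch:
        """A forwarding element that only applies rules it is given."""
        def __init__(self, name):
            self.name = name
            self.flow_table = {}  # destination prefix -> output port

        def install_rule(self, dst_prefix, out_port):
            self.flow_table[dst_prefix] = out_port

        def forward(self, dst):
            # Prefix matching kept trivial for the sketch.
            for prefix, port in self.flow_table.items():
                if dst.startswith(prefix):
                    return port
            return None  # no rule: a real switch would ask the controller

    class Controller:
        """Central software brain: programs every switch in one place."""
        def __init__(self, switches):
            self.switches = switches

        def program_route(self, dst_prefix, port_map):
            # port_map: switch name -> output port for this destination
            for sw in self.switches:
                sw.install_rule(dst_prefix, port_map[sw.name])

    # Usage: one software call reconfigures the whole fabric.
    s1, s2 = Switch("s1"), Switch("s2")
    ctrl = Controller([s1, s2])
    ctrl.program_route("10.0.", {"s1": 3, "s2": 7})
    print(s1.forward("10.0.0.42"))  # -> 3

The point of the design is that the expensive, per-device control logic moves out of proprietary hardware and into ordinary software, which is what makes remote, centralized management (and cheaper commodity switching gear) possible.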

“Our latest-generation Jupiter network has improved capacity by more than 100x relative to our first generation network, delivering more than 1 petabit/sec of total bisection bandwidth,” explains Google. “This means that each of 100,000 servers can communicate with one another in an arbitrary pattern at 10Gb/s.”
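The two figures in that quote are consistent with each other, as a quick check using only the article’s numbers shows:

    # Sanity check on the quoted Jupiter figures (article's numbers only).
    servers = 100_000
    per_server_gbps = 10
    total_gbps = servers * per_server_gbps   # 1,000,000 Gb/s
    total_pbps = total_gbps / 1_000_000      # = 1 Pb/s of bisection bandwidth
    print(total_gbps, total_pbps)            # 1000000 1.0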

According to WSJ, “one major reason for sharing information about its network now is that Google is opening up its infrastructure and offering Google Cloud Platform services to others.” Google Fellow Amin Vahdat told CIO Journal that Google “would like developers at other companies to understand they can run jobs such as Big Data analytics on its infrastructure with reliable speed and performance.”

With “big challenges around availability, configuration and management of the infrastructure and overall predictability,” says Vahdat, Google hopes that the academic community can help. “While Google might have faced some of these challenges earlier, everyone is faced with these kinds of issues now,” says Vahdat. “The amount of bandwidth you need within the data center to process through all of your data is enormous and growing.”
