
I wrote a feature for this week's issue of the journal Nature on "petascale" data-centers — giant data-centers used in scholarship and science, from Google to the Large Hadron Collider to the Human Genome and 1000 Genomes projects to the Internet Archive. The issue is on stands now and also available free online. Yesterday, I popped into Nature's offices in London and recorded a special podcast on the subject, too. This was one of the coolest writing assignments I've ever been on, pure sysadmin porn. It was worth doing just to see the giant, Vader-cube tape-robots at CERN.

At this scale, memory has costs. It costs money — 168 million Swiss francs (US$150 million) for data management at the new Large Hadron Collider (LHC) at CERN, the European particle-physics lab near Geneva. And it also has costs that are more physical. Every watt that you put into retrieving data and calculating with them comes out in heat, whether it be on a desktop or in a data centre; in the United States, the energy used by computers has more than doubled since 2000. Once you’re conducting petacalculations on petabytes, you’re into petaheat territory. Two floors of the Sanger data centre are devoted to cooling. The top one houses the current cooling system. The one below sits waiting for the day that the centre needs to double its cooling capacity. Both are sheathed in dramatic blue glass; the scientists call the building the Ice Cube.
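To see how fast the heat adds up, here's a minimal back-of-the-envelope sketch in Python. The 1 MW compute load and the 60% cooling-plant overhead are my own illustrative assumptions, not figures from the article; the one solid rule is the physics in the paragraph above, that every watt you put into computing comes back out as a watt of heat.

```python
# Back-of-the-envelope sketch: every watt of IT load ends up as a watt of
# heat, so the cooling plant has to track the compute floor watt for watt.
# The numbers below are illustrative assumptions, not Sanger's real figures.

def cooling_load_kw(it_load_kw: float, plant_overhead: float = 0.6) -> float:
    """Heat to remove: the IT power itself (all of it becomes heat)
    plus the assumed overhead of running the cooling plant."""
    return it_load_kw * (1.0 + plant_overhead)

if __name__ == "__main__":
    it_load = 1_000.0  # hypothetical 1 MW of racks
    print(f"{it_load:.0f} kW of compute -> {cooling_load_kw(it_load):.0f} kW to dissipate")
    # Doubling the compute floor (what the spare cooling floor is waiting
    # for) doubles the heat budget too:
    print(f"{2 * it_load:.0f} kW of compute -> {cooling_load_kw(2 * it_load):.0f} kW to dissipate")
```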
Blank slate

The fallow cooling floor is matched in the compute centre below (these people all use ‘compute’ as an adjective). When Butcher was tasked with building the Sanger’s data farm he decided to implement a sort of crop rotation. A quarter of the data centre — 250 square metres — is empty, waiting for the day when the centre needs to upgrade to an entirely new generation of machines. When that day comes, Butcher and his team will set up in that empty space the yet-to-be-specified systems for power, cooling and the rest of it. Once the new centre is up, they’ll be able to shift operations from the obsolete old centre in sections, dismantling and rebuilding without a service interruption, leaving a new patch of the floor fallow — in anticipation of doing it all again in a distressingly short space of time.
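Here's a toy model of that crop rotation as I understand it; the four-quarter layout and the generation names are my own illustration, not Butcher's actual floor plan.

```python
# Toy model of the Sanger 'crop rotation' (my illustration): one quarter of
# the floor is always fallow. Each upgrade builds the new generation in the
# empty quarter, drains the oldest quarter without a service interruption,
# and that quarter becomes the next fallow space.

from collections import deque

def rotate(floor: deque, new_gen: str) -> deque:
    """floor holds four quarters, oldest generation first, fallow last."""
    assert floor[-1] == "fallow"
    floor[-1] = new_gen        # install power, cooling and machines in the empty quarter
    retired = floor.popleft()  # shift operations off the oldest quarter and dismantle it
    floor.append("fallow")     # the dismantled quarter is the new fallow space
    print(f"retired {retired}; floor is now {list(floor)}")
    return floor

floor = deque(["gen1", "gen1", "gen1", "fallow"])
for generation in ("gen2", "gen3"):   # each rotation, distressingly soon after the last
    floor = rotate(floor, generation)
```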

The first rotation may come soon. Sequencing at the Sanger, and elsewhere, is getting faster at a dizzying pace — a pace made possible by the data storage facilities that are inflating to ever greater sizes. Take the human genome: the fact that there is now a reference genome sitting in digital storage brings a new generation of sequencing hardware into its own. The crib that the reference genome provides makes the task of adding together the tens of millions of short samples those machines produce a tractable one. It is what makes the 1000 Genomes Project, which the Sanger is undertaking in concert with the Beijing Genomics Institute in China and the US National Human Genome Research Institute, possible — and with it the project’s extraordinary aim of identifying every gene-variant present in at least 1% of Earth’s population.
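The "crib" is essentially an index: with a reference genome in hand, placing a short read becomes a table lookup rather than an all-against-all jigsaw puzzle. Here's a minimal toy sketch in Python of that idea; the k-mer index, the sequences and the function names are all hypothetical, and the aligners actually used in projects like this are vastly more sophisticated and memory-efficient.

```python
# Toy sketch (not the 1000 Genomes pipeline): with a reference genome you can
# index it once, then place each of the tens of millions of short reads by
# lookup, instead of assembling them against each other from scratch.

def build_index(reference: str, k: int = 8) -> dict[str, list[int]]:
    """Record every position of every k-letter substring of the reference."""
    index: dict[str, list[int]] = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def place_read(read: str, index: dict[str, list[int]], k: int = 8) -> list[int]:
    """Candidate placements for a short read: wherever its first k-mer occurs."""
    return index.get(read[:k], [])

reference = "ACGTACGTTTGACCAGTACGGA" * 4   # stand-in for a 3-gigabase genome
index = build_index(reference)
read = "TTGACCAGTA"                        # one 'short sample' off the sequencer
print(place_read(read, index))             # every plausible position, in one lookup
```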

Big data: Welcome to the petacentre,

Podcast about Petacentres,

My Flickr photos of petacentres