Microsoft Revolutionizes the Data Center…By Keeping Them Underwater

3 Likes

So who’s going to be the first one to schedule a class on Underwater Data Center Management?

Too bad April has come and gone already…

4 Likes

This reminds me of Google’s preference for building DCs next to lakes, and heat-exchanging into the lake, for an apparent energy savings over using cooling towers. Google also runs their DCs at an interior temperature at or above 80F.

As a former DC technician, I would prefer not to.

Rumor has it M$ does 90F. Presumably trying to best Google at their own game.
Not a fan, myself, either…

As long as airflow and overall DC air mass are sufficient, the servers will perform identically whether room temperature is 50 degrees or 90 degrees. But if you’re starting at 90 and experience a cooling failure, you’re at a lot greater risk of things reaching critical temps before cooling is recovered; even at 90, a server room still needs a lot of cooling.
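
If you want rough numbers on that, here’s a back-of-envelope sketch in Python of the air-only headroom after a total cooling failure. The hall size, IT load, and the 105F “critical” inlet temperature are all made-up example figures, and a real room buys extra time from heat soaking into racks and building mass, but the ratio between starting temperatures is the point:

```python
# Air-only headroom after a total cooling failure. Real rooms get more time
# from racks, floors, and walls absorbing heat; this counts only the room air.
# All figures below are hypothetical examples, not numbers from this thread.

AIR_DENSITY = 1.2   # kg/m^3, roughly at room conditions
AIR_CP = 1005       # J/(kg*K), specific heat of air

def minutes_to_critical(room_ft3, it_load_kw, start_f, critical_f):
    """Minutes for the room air to climb from start_f to critical_f with no cooling."""
    air_mass_kg = AIR_DENSITY * room_ft3 * 0.0283168      # ft^3 -> m^3 -> kg of air
    delta_k = (critical_f - start_f) * 5.0 / 9.0          # F rise -> K rise
    joules_needed = air_mass_kg * AIR_CP * delta_k
    return joules_needed / (it_load_kw * 1000.0) / 60.0   # seconds -> minutes

# Hypothetical 10,000 sq ft hall, 20 ft ceilings, 750 kW of IT load,
# with 105F picked as the point where servers start throwing thermal faults.
for start in (50, 70, 90):
    t = minutes_to_critical(10_000 * 20, 750, start, 105)
    print(f"start at {start}F -> ~{t:.1f} min of air-only headroom")
```

Even starting cold, the air alone only buys a few minutes, which is why a server room still needs a lot of cooling regardless; but starting at 90 cuts what little margin there is by a factor of three or so in this example.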

Not to mention, high DC temps discourage technician presence. And attentive technicians are a much better early warning system than any sensor suite.

Burn the land, boil the sea
But you can’t take the sky from me

3 Likes

Data centers have nothing on the heat the Hanford project dumped into the Columbia River. Multiple reactors ran river water through the cores and back to the river in the ’40s, ’50s, and probably later, before they added heat exchangers.

Only if the present crop of cryptocurrencies keeps on scaling.

I work in a Tier 3 DC with multiple 10k sq. ft data halls and we strive for avg return air @ 80. Obviously the hot aisles are warmer and yes, the techs do not care for that very much. The good news for them is we utilize a separate build room for the majority of their work and then move the loaded/tested cabinets into the DH ready to plug and play. The techs are not in the halls much, and when they are, the work is usually minor and routine, so to speak. As for cooling failures, you’re absolutely correct: starting at 80 degrees, the DH temp would rise within minutes, but by definition any Tier 2-3-4 DC has redundant critical components (N+1, +2, etc…), so that risk is minimized…
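
For anyone outside the industry, the N+1/N+2 shorthand just means the plant is sized with one or two spare units beyond what the load actually needs; a minimal sketch, with made-up numbers:

```python
import math

def units_required(load_kw, unit_capacity_kw, spares=1):
    """N+R sizing: enough units to carry the full load, plus R spare units."""
    n = math.ceil(load_kw / unit_capacity_kw)
    return n + spares

# Hypothetical 1,200 kW hall served by 400 kW cooling units:
print(units_required(1200, 400, spares=1))  # N+1 -> 4 units installed
print(units_required(1200, 400, spares=2))  # N+2 -> 5 units installed
```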

1 Like

One of the DCs I ran had 22,000 sq ft data halls with 23-foot ceilings, and we ran those with 74-78F room averages and 65-70F cold aisle containment, though hot aisle temps were regularly in the 110s on the high-power-density rows, where techs would have to spend a fair amount of time inspecting, cabling, or connecting crash carts to servers.

Conversely, another DC I ran had customer server rooms in the 2,000 sq ft range, with 8-foot ceilings. Room temperature was maintained in the low 60s, with 50F cold aisle temps, because air mass in a room that size is insufficient to accommodate any type of cooling failure at high room temperatures.

The internal-use room at this site is a little over 120 sq ft and is maintained below 60 degrees through the use of a pair of Liebert 160kW CRACs. One CRAC disengaged for 10 minutes due to a frozen coil while I was on shift one night. In that time, room temperature went from ~58F to 93F, and several nodes logged thermal faults.
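
As a rough sanity check on that incident, you can back out the net heat the room couldn’t shed from the observed rise, counting only the air. The ceiling height is an assumption; only the 120 sq ft floor area and the ~35F climb over ~10 minutes come from the event itself:

```python
# Back out the net uncompensated heat load implied by an observed rise in
# room air temperature. Air-only, so it understates the real excess load
# (racks and walls soak up heat too). The ceiling height is an assumed figure.

AIR_DENSITY = 1.2   # kg/m^3
AIR_CP = 1005       # J/(kg*K)

def implied_excess_kw(room_ft3, rise_f, minutes):
    """Net heat load (kW) the cooling didn't absorb, from an air-only rise."""
    air_mass_kg = AIR_DENSITY * room_ft3 * 0.0283168   # ft^3 -> kg of air
    delta_k = rise_f * 5.0 / 9.0                       # F rise -> K rise
    return air_mass_kg * AIR_CP * delta_k / (minutes * 60.0) / 1000.0

# 120 sq ft floor, assumed 9 ft ceiling, 58F -> 93F in ~10 minutes:
print(f"~{implied_excess_kw(120 * 9, 35, 10):.1f} kW excess, air only")
```

That works out to only about a kilowatt of uncovered load (more in reality, since the hardware absorbs heat too), which is the real lesson: a room that small has almost no thermal margin at all.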

Mind you, both of these DCs are Tier 4, with fully redundant power, networking, and cooling systems. However, due to limitations of the building the latter was installed in, engineering in a low operating temperature is part of ensuring fault tolerance. Generators are supposed to kick on and maintain cooling power within 90 seconds, but worst-case time to load is 7 minutes, and that’s enough time for servers at the top of the rack to start overheating in a room with low ceilings, even with cold rooms. Moreover, in the unlikely-but-not-unfathomable event of multiple CRAC failures, starting low gives more time to get sufficient cooling back in place before thermal issues crop up.

Most high-temp DCs get away with it by having tall ceilings and plenty of air mass. If you have the thermal inertia, by all means make your lowly DC techs suffer the temps. But there are plenty of cases where the energy cost of low-temp cooling is a necessary expenditure. I personally prefer working at those DCs. Much easier to stay busy if I need to put a hoodie on than if I have to brave the sweat factory. I got out of automotive for a reason.

I checked again today on our temps: we are supplying chilled water at 50 degrees with a return temp @ 62-64. The supply air to the DH is @ 60 and the return air temp is 88-90. I didn’t look at the temps in the hot aisles, but will do that Monday. I’m guessing the ceilings are @ 20’, but will check. The facility is only 3 yrs old and was built specifically as a DC, so they planned on the higher temps in the design, as well as only going for Tier 3 redundancy. Our Tier 4 DCs are @ 24 yrs old and run at the cooler temps, and this design had a specific goal to change that. It’s been interesting to hear the different opinions within the organization about that, and like everything, there are different thoughts about it.

The most surprising thing to me was the use of exterior air-cooled chillers, so no cooling towers! It seems counterintuitive to me for that to work in Texas, but the engineers know more than I do and so far so good…the 250 ton Trane screw compressors with VFD cooling fans are working very efficiently. Anyway, thanks for the discussion, it’s made me think harder about the design than I had in a while. And the Underwater DC management is new to me, I’ll add that to my internet “Business Acumen” reading at work :slight_smile:
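
If anyone wants to turn those loop temps into capacity, the usual rule of thumb is BTU/hr ≈ 500 × GPM × ΔT(°F), with 12,000 BTU/hr per ton. Quick sketch below; the 450 GPM flow is a made-up figure, not something off our plant:

```python
# Rule-of-thumb heat pickup for a chilled-water loop:
#   BTU/hr ~= 500 * GPM * deltaT(F)   (water at typical HVAC conditions)
#   1 ton of cooling = 12,000 BTU/hr
# The 50F supply / ~63F return temps are from the post above; the flow rate
# is a hypothetical example.

def chilled_water_tons(gpm, supply_f, return_f):
    """Heat rejection (tons) carried by a chilled-water loop."""
    btu_per_hr = 500.0 * gpm * (return_f - supply_f)
    return btu_per_hr / 12_000.0

# Hypothetical 450 GPM loop at 50F supply / 63F return:
print(f"~{chilled_water_tons(450, 50, 63):.0f} tons of heat rejection")
```

I picked the flow so it lands near a single 250-ton machine; plug in the real GPM and delta-T from the plant and you get the actual heat rejection.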

1 Like

That’s actually pretty low compared to a lot of the data centers I go to. Most now run 55-60 degree supply chilled water. The trend is that the temps are going up; 7-8 years ago, it was common to see 42-45 degree chilled water.

As far as chillers go, we manufacture and have manufactured air-cooled chillers on VFDs since about 2005. This is the actual compressor on a VFD, with an output frequency from 50-200 Hz. We have made water-cooled chillers on drives since the early ’80s.

Current technology for our magnetic-bearing, water-cooled machines pushes the output frequency to 430 Hz in some cases.

1 Like

For what it’s worth, the older and smaller of the two datacenters was built specifically as a datacenter. Thirty years ago. As a result of some of the building limitations, several of the server rooms use HFC phase change air conditioners, which expel heat into a mechanical room, where a chilled water heat exchanger loops into the main building air conditioning system. Even purpose-built datacenters can end up with painful design workarounds when the technology they were built to support is replaced.

Conversely, the newer, larger DC is built into a retrofitted semiconductor plant, with all of the power and cooling facilities engineered to our specific application requirements. No workarounds needed, even though the building itself was never intended to be a datacenter, thanks to its previous state as an empty industrial shell.

What we’re really looking forward to is new low-TDP processors that just don’t put out the megawatts of heat that so much of our engineering expense has gone to managing. Not to mention easing the load on the power facilities.