Placement (20 Nov 2007, Carsten)

Boundary Conditions


  • Cooling: up to 42*220W = 9.24 kW per rack
  • Electrical Power: up to 9.24 kW per rack
  • 42 horizontal and a few vertical height units are available per rack
  • 48 water cooled racks and 7 open racks


  • Goal: Try to minimize power consumption after 1 minute on battery
  • How:
    • Switch off all compute nodes after 1 minute (should save about 1340 * 180 W = 241.2 kW)
    • Switch off all compute node racks after 2 minutes (should save about 16 LCP * 800 W/LCP(?) = 12.8 kW)
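The savings figures can be checked with a quick calculation; node count, per-node power, and LCP count are the figures given on this page (the per-LCP power is marked "(?)" above):

```python
# Estimated power savings on battery, using the figures from this page.
compute_nodes = 1340
node_power_w = 180
lcp_count = 16
lcp_power_w = 800  # per-LCP power is uncertain ("(?)") on this page

after_1_min_w = compute_nodes * node_power_w  # all compute nodes off
after_2_min_w = lcp_count * lcp_power_w       # compute node rack LCPs off

print(f"after 1 min: {after_1_min_w / 1000:.1f} kW")  # 241.2 kW
print(f"after 2 min: {after_2_min_w / 1000:.1f} kW")  # 12.8 kW
```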

Cooling Water/Air

  • Goal: Try to maintain stability without primary heat exchanger as long as possible
  • Rule of thumb: We have about 2500 l (TBC) of water with 4.2 kJ/(K * kg), that is about 10.5 MJ/K
  • Starting with 12 C cold water, assumptions are we can run till water is at about 30 C (TBC)
  • This leaves about 189 MJ of "free" energy
  • Assumption: Compute nodes need 80 s to power down (on average). This will dissipate about 19 MJ (1340 * 180 W * 80 s).
  • Assumption: On power failure the storage racks are opened. From this point on these machines will warm the room while still being cooled by the racks (it is difficult to estimate how long they could run):

If only air is accounted for: the specific heat capacity of air is about 1 kJ/(K*kg) and its density about 1.2 kg/m^3. Assuming half the room's volume can be used to take up heat, 0.5 * 19.25 m * 11.5 m * 2.5 m is about 275 m^3, or about 332 kg of air. The heat capacity of the room is then about 332 kJ/K. Starting at about 18 C and allowing air temperatures to reach 35 C, that leaves only 5.6 MJ of total heat capacity. Unfortunately, it is hard to estimate how much heat can leave the room freely and increase this total capacity.

Given that each file server consumes about 500 W and each X4500 about 800 W(?), this totals about 25 kW. This would heat the room by about 1 K every 13 s. This margin is possibly too small, but it will be helped by the remaining heat capacity of the cooling water.
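The whole thermal budget above can be reproduced in a few lines. Water volume, temperatures, and per-machine power are the (TBC) figures from this page; the 31 file servers and 13 X4500s are taken from the equipment list below:

```python
# Thermal budget sketch using the (TBC) figures on this page.

# Cooling water: ~2500 l at 4.2 kJ/(K*kg), warming from 12 C to 30 C.
water_cap_j_per_k = 2500 * 4.2e3                  # ~10.5 MJ/K
water_budget_j = water_cap_j_per_k * (30 - 12)    # ~189 MJ of "free" energy

# Heat dissipated while the compute nodes power down (80 s on average).
shutdown_j = 1340 * 180 * 80                      # ~19.3 MJ at full load

# Room air: half the room volume, density 1.2 kg/m^3, c = 1 kJ/(K*kg).
air_mass_kg = 0.5 * 19.25 * 11.5 * 2.5 * 1.2      # ~332 kg
air_cap_j_per_k = air_mass_kg * 1.0e3             # ~332 kJ/K
air_budget_j = air_cap_j_per_k * (35 - 18)        # ~5.6 MJ from 18 C to 35 C

# Storage load once the racks are opened: 31 file servers + 13 X4500s.
storage_load_w = 31 * 500 + 13 * 800              # ~25.9 kW
seconds_per_kelvin = air_cap_j_per_k / storage_load_w  # ~13 s per K

print(f"water budget:  {water_budget_j / 1e6:.0f} MJ")
print(f"shutdown heat: {shutdown_j / 1e6:.1f} MJ")
print(f"air budget:    {air_budget_j / 1e6:.1f} MJ")
print(f"room heats 1 K every {seconds_per_kelvin:.0f} s")
```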

Equipment to place

cooled racks

horizontally mounted

  • 1340 compute nodes (1U) -> 1340 U
  • 31 storage nodes (3U) -> 93 U
  • 13 X4500 (4U) -> 52 U
  • 4 head nodes (2U) -> 8 U
  • misc. nodes total: 15 U
  • misc. file server: 10 U
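The total horizontal rack-unit demand from the list above can be summed up as a sanity check:

```python
# Horizontally mounted rack-unit demand, from the equipment list above.
units = {
    "compute nodes (1U)": 1340 * 1,
    "storage nodes (3U)": 31 * 3,
    "X4500 (4U)": 13 * 4,
    "head nodes (2U)": 4 * 2,
    "misc. nodes": 15,
    "misc. file server": 10,
}
total_u = sum(units.values())
print(f"total: {total_u} U")  # 1518 U
```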

vertically mounted

  • network switches

open racks

  • network switches

Possible solution

Compute Racks

  • put as many nodes into each compute rack as possible
  • 42 servers per rack (slight oversubscription accepted)
  • the last rack has 4 free HU
  • total of 32 racks needed (1344 HU total)
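The compute-rack packing works out as follows (node count and per-node power from this page):

```python
# Compute-rack packing: 1340 1U nodes at 42 per rack.
import math

nodes, per_rack = 1340, 42
racks = math.ceil(nodes / per_rack)   # 32 racks
free_u = racks * per_rack - nodes     # 4 HU free in the last rack
rack_power_w = per_rack * 180         # 7.56 kW, under the 9.24 kW rack limit

print(racks, free_u, rack_power_w)
```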

Storage racks

  • X4500s only into the lower parts
  • 3 X4500 per rack -> 12 U
  • 8 file servers per rack -> 24 U
  • 1 head node -> 2 U
  • the last rack misses 1 file server
  • total of 4 racks (168 HU total)
  • these racks should be refitted with automatic door openers
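The storage-rack layout can be cross-checked against the equipment list (31 file servers, 13 X4500s of which one goes to the DMZ rack, 4 head nodes):

```python
# Storage-rack layout check: 3 X4500 + 8 file servers + 1 head node per rack.
x4500_u, fs_u, head_u = 4, 3, 2
per_rack_u = 3 * x4500_u + 8 * fs_u + 1 * head_u  # 38 of 42 U used

racks = 4
x4500_placed = 3 * racks     # 12 (the 13th X4500 goes to the DMZ rack)
fs_placed = 8 * racks - 1    # 31 (the last rack misses one file server)
heads = 1 * racks            # 4 head nodes

print(per_rack_u, x4500_placed, fs_placed, heads)
```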

DMZ rack

  • place X4500 for secure storage here (4U)
  • place a few 1U nodes here as needed

Infrastructure rack

  • place infrastructure 1U nodes here

networking rack (open rack)

  • Place central core switch here
  • Possibly also put the RRZN switch here, if the cable mess allows