Abstract

Demand is growing for dense, high-performing IT computing capacity to support artificial intelligence, deep learning, machine learning, autonomous cars, the Internet of Things, etc. This demand has driven unprecedented growth in transistor density for high-end CPUs and GPUs, pushing the thermal design power (TDP) beyond 700 watts for some existing NVIDIA GPUs. Cooling such high-TDP chips with air comes at the cost of larger server form factors and server fan noise approaching permissible limits. Direct-to-chip cold plate-based liquid cooling is highly efficient and is becoming more reliable as the technology advances. Several components are used to deploy cold plate-based direct-to-chip liquid cooling in data centers, such as cooling loops, rack manifolds, coolant distribution units (CDUs), row manifolds, quick disconnects, and flow control valves. Row manifolds distribute the secondary coolant to the rack manifolds. Characterizing these row manifolds to understand their pressure drops and flow distribution is important for better data center design and energy efficiency. In this paper, a methodology is developed to characterize row manifolds. A water-based coolant, 25% propylene glycol, was used for the experiments, which were conducted at a 21 °C coolant supply temperature. P-Q curves were generated for two six-port row manifolds, and the supply pressure and flowrate were measured at each port. The experimental results were validated with flow network modeling (FNM), a technique that uses overall flow and thermal characteristics to represent the behavior of individual components.

Introduction

Air cooling, the most common and traditional approach to cooling Information Technology Equipment (ITE), is encountering limitations as component heat fluxes increase. Consequently, data center facilities and researchers have extended the capabilities of air cooling through various methods such as in-row cooling [1,2], enhancing the mixing effectiveness of air in the mixing chamber [3], implementing aisle containments [4], and making design modifications at the server level [5]. While these methods can effectively dissipate large heat loads, they also significantly increase the cost of cooling infrastructure [6–10]. Recently, researchers and industry have investigated liquid cooling technologies and their capability to cool high-power-density racks [11–13]; this includes liquid cooling of processors (i.e., single-phase and two-phase cold plate designs) [14,15], submerged servers or immersion cooling [16], and feasibility studies of rear door heat exchangers [17].

Direct-to-chip liquid cooling is one of the most promising and efficient solutions for removing substantial, concentrated heat loads from the primary components in a server, owing to the higher volumetric specific heat and heat transfer coefficient of the fluid [18]. To attain the highest possible efficiency in liquid cooling, heat transfer from the ITE to the liquid is performed at the highest possible coolant temperature [19]. Thus, direct liquid cooling can significantly increase savings in both operational and capital expenditure, especially for high-performance computing equipment.

In direct-to-chip liquid-cooled data centers, there are two heat transfer loops: a primary loop and a Technology Cooling System (TCS) loop, also called the secondary loop, in which the cooling fluid is circulated by the coolant distribution unit (CDU). The secondary loop consists of several devices/components such as the row manifolds, the rack manifolds, and the cooling loops whose cold plates are mounted on the heat-dissipating components. On the secondary side, the design of the components must meet operating parameters such as flowrate, system pressure, and temperature [20]. The row manifold is an essential part of the secondary loop, as it distributes the fluid from the CDU to all the racks in the data center. A dynamic liquid cooling approach is being implemented in current liquid-cooled data centers because it increases pumping power savings [21–23]; for dynamic cooling, the components used to build this type of infrastructure must therefore be carefully designed to handle the varying flowrates caused by changes in pressure drop and fluid temperature. Thus, before these components are installed in the cooling infrastructure, they must be characterized.

In this study, experiments are performed on the row manifolds used in a liquid-cooled data center, and the manifolds are characterized to understand the pressure drop and the distribution of fluid to the different rack manifolds for designing an energy-efficient data center. For this set of experiments, 25% propylene glycol was used as the coolant at a 21 °C inlet temperature. The experimental results were validated with a one-dimensional (1D) flow network modeling simulation for efficient prediction of the flowrates, pressures, and temperatures in a complete liquid-cooling system.

Experimental Setup and Procedure

The experimental setup consists of eight 52U racks, two row manifolds with six ports each, eight rack manifolds, a coolant distribution unit (CDU), and flow, temperature, and pressure sensors, as shown in Fig. 1 below. The CDU used in this study was rated at 450 kW and can provide a maximum flowrate of 500 lpm at an external pressure drop of 3.4 bar. The hoses connecting the CDU with the Y connectors have pressure sensors to measure the pressure drop across the row manifold. The hoses connecting the row manifold ports with the rack manifolds have pressure sensors and flow sensors to measure the supply pressure, return pressure, and flowrate, respectively.

Fig. 1 Schematic of experimental setup

The coolant used in this study was 25% propylene glycol (PG-25). The coolant was maintained at a set temperature by the CDU; it then passes through the Y connectors in the setup and is distributed between the two supply row manifolds. The supply row manifolds distribute coolant to the supply rack manifolds, with the row manifolds connected to the rack manifolds by Eaton FD-83 valves. From the supply rack manifold, coolant is delivered to the cooling loops, where it captures heat from the IT equipment and then returns to the CDU through the return rack manifold, return row manifold, etc.

This paper focuses on the hydraulic characterization of row manifolds. For the experiments, the ports of the row manifolds were short-circuited using FD-83 valves. Of the six ports shown in Fig. 2 below, only four ports on the row manifolds were considered for this experiment: the first two and the last two. Of the two row manifolds in parallel (one copper and one steel), one was kept open to pass coolant at a time while the other was kept closed. The row manifolds were characterized at a 21 °C inlet coolant temperature, and the coolant flowrate was varied from 40 to 165 lpm. During the experiment, the pressure drop across the row manifold, the supply pressure at each port, and the flowrate were measured.

Fig. 2

Sensor Calibration

Pressure sensors and flow sensors were used in the experiments. For calibration of the pressure sensors (Keyence GP-M010), a Fluke P5510-2M pneumatic comparison test pump was utilized, as shown in Fig. 3. The pressure in the test rig was increased, and at each pressure the sensor values and the reference pressure gauge readings were recorded for error analysis. Similarly, for the flow sensor calibration, a Coriolis flowmeter was utilized to calibrate the Keyence FDQ-32C flow sensors. The K-type thermocouples used to measure the fluid temperature were calibrated with a Fluke 7109A calibration bath, using a two-point calibration method between 0 and 100 °C, as shown in Fig. 3. Table 1 below lists the details of the sensors used in this study. The calibration of the pressure sensors showed that the factory-calibrated sensors were very precise and closely aligned with the reference pressure gauge. The calibration equation obtained was used directly as gain and offset values in the DAQ software. To calibrate the ultrasonic flow sensors, a calibrated electromagnetic flowmeter was placed in the same closed loop with the flow sensors. Tables 2–4 show the errors quantified from the calibration process for the pressure, temperature, and flow sensors [24].
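As an illustration, the sketch below (Python) shows the two-point gain-and-offset computation described above; the raw sensor readings are hypothetical placeholders, not calibration data from this study.

```python
# Minimal sketch of a two-point calibration: derive gain and offset from
# readings at two reference temperatures (0 °C and 100 °C here) and apply
# them to raw sensor values. The raw readings below are illustrative.

def two_point_calibration(ref_lo, raw_lo, ref_hi, raw_hi):
    """Return (gain, offset) mapping raw sensor readings to reference values."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

gain, offset = two_point_calibration(0.0, 0.4, 100.0, 99.1)  # hypothetical raw readings
corrected = gain * 25.3 + offset  # correct a raw reading of 25.3 °C
print(f"gain={gain:.4f}, offset={offset:.4f}, corrected={corrected:.2f} °C")
```

The resulting gain and offset are exactly the values that would be entered into the DAQ software.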

Fig. 3 Fluke P5510-2M pneumatic comparison test pump and Fluke 7109A portable calibration bath [24]
Table 1 Details of sensor measurement range, accuracy, and operating voltages [24]

Sensor                      Operating voltage/current    Range of measurement    Accuracy
Keyence FDQ-32C             20–30 V                      0.02–20 L/min           ±0.003 mL/min
Keyence pressure sensors    4–20 mA                      0–1000 kPa              ±0.25%
K-type thermocouple         —                            0–400 °C                ±0.75%
Table 2 Percentage of error during pressure sensor calibration [24]

Reference gauge (kPa)    GP-M010 pressure sensor (kPa)    % Error
150.3                    150                              0.2
200.2                    200                              0.1
300.6                    300                              0.2
400.5                    400                              0.124
500.6                    500                              0.12
Table 3 Percentage of error during temperature sensor calibration [24]

Reference temperature (°C)    Measured temperature (°C)    % Error
10                            9.8                          2
90                            89.3                         0.8
Table 4 Percentage of error during flow sensor calibration [24]

Reference flow rate at electromagnetic sensor (lpm)    Measured flow rate at sensor (lpm)    % Error
10                                                     10.3                                  3
20                                                     21                                    5
40                                                     41.3                                  3.2
50                                                     51.5                                  3

Flow Network Modeling

To perform a complete investigation of a hybrid (air and liquid) solution, both computational fluid dynamics (CFD) and flow network modeling (FNM) should be run in parallel to draw a clear path for the proposed cooling approach. FNM is a generalized methodology for calculating system-wide distributions of flowrates and temperatures in a network representation of a cooling system. A data center's liquid cooling system can be considered a network of flow paths through components such as cold plates, valves, quick disconnects, filters, pumps, ducts, bends, orifices, heat exchangers, and tubes. Each component of the system is defined by pressure drop–flowrate and thermal resistance–flowrate empirical correlations obtained from experiments or CFD analysis. Unlike CFD, FNM uses these defined component characteristics instead of attempting to calculate detailed flow fields of velocity and temperature within each component. As a result, the accuracy of FNM depends strongly on the defined performance curves of the components. The thermohydraulic performance of the system is predicted by imposing conservation of mass, momentum, and energy on the flow network:
$\Delta P = \dfrac{K \rho Q^2}{2 A^2}$  (1)

where K is the loss coefficient, ρ is the coolant density (PG-25 in this study), Q is the coolant flowrate, and A is the flow area. The bulk temperature of each cooling stream is calculated from the component-defined heat transferred to the flow streams and from the mixing of the flow streams at different nodes. The pressure drop across each component is predicted using Eq. (1).
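For illustration, a minimal sketch of evaluating Eq. (1) for a single component is given below; the loss coefficient, flow area, and PG-25 density are assumed placeholder values, not parameters reported in this study.

```python
# Hedged sketch: evaluate Eq. (1), dP = K * rho * Q^2 / (2 * A^2), for one
# component in a PG-25 stream. K, the flow area, and the density are
# illustrative assumptions, not values from the paper.

RHO_PG25 = 1021.0  # kg/m^3, approximate density of 25% propylene glycol near 21 °C

def component_dp_kpa(K, Q_lpm, area_m2, rho=RHO_PG25):
    """Pressure drop in kPa for a component with loss coefficient K."""
    Q = Q_lpm / 60000.0                      # convert lpm to m^3/s
    dp_pa = K * rho * Q**2 / (2.0 * area_m2**2)
    return dp_pa / 1000.0

print(component_dp_kpa(K=2.5, Q_lpm=41.2, area_m2=4.0e-4))  # kPa at one port's flowrate
```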

The simulation process starts with processor-level CFD modeling to obtain the thermal and hydraulic performance of the cooling system. In direct-to-chip liquid cooling, the processor-level cooling system is the cold plate. The resulting performance curves are used as the input for the FNM model of a server-level cooling loop, which includes all the components (tubes, QDs, valves, tees, etc.). Pressure drop and thermal resistance data for the cooling loop are then used as the input for the next step, rack-level modeling. In the rack FNM model, the elevation of each server inside the rack is specified, and the uniformity of the flow distribution is checked. Finally, the flow network of a data center can be modeled to predict the thermal performance and pressure drop of the whole secondary loop at the required flowrate, and the CDU's performance can be evaluated for the designed loop in terms of having enough cooling capacity and pressure budget. Here, Macro-Flow was used to model the flow network of the studied loop.
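To make the network balance concrete, the sketch below solves the simplest FNM subproblem relevant here: four rack branches in parallel, each represented by a quadratic resistance dP = R·Q², sharing one pressure drop under mass conservation. The branch resistances are illustrative assumptions, not fitted values; a tool such as Macro-Flow solves the same conservation equations over the full network.

```python
# Minimal flow-network sketch: parallel branches fed from a common manifold,
# each modeled as dP = R * Q^2. With a shared branch pressure drop, mass
# conservation fixes the flow split. Resistances below are placeholders.

import math

def split_parallel_flow(Q_total_lpm, resistances):
    """Return (per-branch flows in lpm, shared dP in kPa).

    resistances: R_i such that dP_i = R_i * Q_i^2, in kPa/(lpm^2).
    """
    inv_sqrt = [1.0 / math.sqrt(R) for R in resistances]
    sqrt_dp = Q_total_lpm / sum(inv_sqrt)    # from sum(Q_i) = Q_total
    return [sqrt_dp * s for s in inv_sqrt], sqrt_dp**2

flows, dp = split_parallel_flow(166.2, [0.0062, 0.0060, 0.0061, 0.0063])
print([f"{q:.1f} lpm" for q in flows], f"shared dP = {dp:.1f} kPa")
```

Nearly equal branch resistances reproduce the nearly uniform port flows observed in the experiments.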

Results and Discussion

As explained in the Experimental Setup and Procedure section, two row manifolds were characterized in this study, but only the results for one row manifold are discussed below because the results were similar. The results in this section are divided into two parts: first, the experimental results are discussed; then, the experimental results are compared with the 1D flow network simulation results.

In the first set of experiments, the copper row manifold was characterized. The flowrate was varied from 40 lpm to 167 lpm with a coolant supply temperature of 21 °C. Figure 4 below shows the flowrate through the entire row manifold and through each individual port. When the system flowrate was around 166.2 lpm, the flowrate through each individual port was around 41.2 lpm, and the maximum difference between any two ports' flowrates was 0.66 lpm, which lies within the flow sensor accuracy. Similarly, when a low flowrate of 40 lpm passed through the system, the flowrate through each individual port was around 9.8 lpm, with a maximum port-to-port difference of 1 lpm.
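The uniformity check applied above reduces to a spread and percent-deviation calculation, sketched below; the per-port values are representative of the ~166 lpm test point rather than the exact logged readings.

```python
# Sketch of the port-to-port uniformity check: maximum spread and maximum
# percent deviation from the mean. The four flowrates are illustrative.

port_flows_lpm = [41.2, 41.5, 40.9, 41.6]  # representative per-port readings

mean_q = sum(port_flows_lpm) / len(port_flows_lpm)
spread = max(port_flows_lpm) - min(port_flows_lpm)
max_dev_pct = max(abs(q - mean_q) for q in port_flows_lpm) / mean_q * 100.0
print(f"mean = {mean_q:.2f} lpm, spread = {spread:.2f} lpm, "
      f"max deviation = {max_dev_pct:.2f}%")
```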

Fig. 4 Graph showing copper row manifold flow rate and flow rate in each row manifold port

Figure 5 below shows the copper row manifold pressure drop versus flowrate. The row manifold showed a negligible pressure drop at a flowrate of 40 lpm and a pressure drop of 4.5 psi (31.02 kPa) at 166 lpm. Figure 6 shows the supply pressure at the inlet of each port of the row manifold; the average supply pressure was 15.5 psi (106.87 kPa) at 40 lpm and 19.2 psi (132.38 kPa) at 166 lpm. Although each port showed some variation in supply pressure, the variation was within the pressure sensors' accuracy range.
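A P-Q curve such as the one in Fig. 5 can be generated by a least-squares fit of a quadratic through the measured points, as sketched below; only the 40 lpm and 166 lpm values follow the results above, and the intermediate points are approximate interpolations added for illustration.

```python
# Hedged sketch: fit dP = a*Q^2 + b*Q through (flowrate, pressure drop)
# pairs to produce a P-Q curve. The intermediate data points are
# approximate placeholders, not measured values; numpy is assumed.

import numpy as np

q = np.array([40.0, 80.0, 120.0, 166.0])   # lpm
dp = np.array([0.2, 1.1, 2.4, 4.5])        # psi, approximate trend

A = np.column_stack([q**2, q])             # model: dp = a*q^2 + b*q
a, b = np.linalg.lstsq(A, dp, rcond=None)[0]
print(f"dP ≈ {a:.3e}*Q^2 + {b:.3e}*Q (psi, Q in lpm)")
print(f"predicted dP at 166 lpm: {a*166**2 + b*166:.2f} psi")
```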

Fig. 5 Copper row manifold pressure drop versus flow rate

Fig. 6 Copper row manifold individual port supply pressure

In this section, a flow network model is used to demonstrate the pressure drop in each component and to verify the experimental results at the maximum tested flowrate (166 lpm). The network representation of the deployment, including the piping, valves, sensors, and all the fittings involved, is depicted in Fig. 7. The flow characteristic of each component is introduced into the model to accurately predict the pressure drop of the whole loop. The results presented in Table 5 show the contributions of the QDs, Belimo valves, ball valves, sensor connections, and all other fittings. As Table 5 demonstrates, the total pressure drop for the whole row manifold loop is 4.07 psi (28.06 kPa), which is approximately 0.35 psi (2.41 kPa) less than the pressure drop measured across the whole loop between points 1 and 2 indicated in Fig. 7. The flow coefficients (Cv) for the rack valve and the sensors are 8.7 and 389, respectively. For the FD-83 and ball valves, P-Q curves based on the experimental results shown in Fig. 8 were used as inputs to the model. The supply pressure was then calculated at each port of the row manifold at a flowrate of 40 lpm and is presented in Table 6. The uniformity of the flow distribution can be seen from the pressure results, which differ by less than 0.05 psi (0.34 kPa). The FNM model thus indicates more uniformity than the test results, which show a difference of around 0.5 psi (3.44 kPa) between the minimum and maximum pressures.
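The verification described above amounts to summing the itemized component drops along one path and comparing against the measured loop value, as sketched below; the per-component values are taken from the port-A column of Table 5, and the measured total is taken as roughly 4.42 psi based on the reported 0.35 psi gap.

```python
# Sketch comparing the FNM component breakdown (Table 5, port A path) with
# the measured loop pressure drop. The measured total is inferred from the
# reported ~0.35 psi difference; the itemized sum falls slightly below the
# reported 4.07 psi total, presumably due to piping runs not itemized here.

fnm_components_psi = {
    "FD-83": 1.16,
    "rack valve (100% open)": 1.78,
    "ball valves": 0.04,
    "Y connection": 0.12 + 0.21,  # supply + return
    "sensor connections": 0.2,
    "tees + bends": 0.48,
    "row manifold": 0.001,
}

fnm_total = sum(fnm_components_psi.values())
measured_total = 4.42  # psi, assumed from 4.07 psi + 0.35 psi gap
diff_pct = abs(fnm_total - measured_total) / measured_total * 100.0
print(f"FNM sum = {fnm_total:.2f} psi, measured = {measured_total:.2f} psi, "
      f"difference = {diff_pct:.1f}%")
```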

Fig. 7 Flow network model of deployment including all the components involved in the design

Fig. 8 PQ characteristics of ball valve and FD-83
Table 5 Pressure drop of different components in the cooling loop

Pressure drop (psi)          D       C       B       A
FD-83                        1.06    1.12    1.14    1.16
Rack valve (100% opening)    1.4     1.64    1.73    1.78
Ball valves                  0.03    0.04    0.04    0.04
Y connection                 0.12 (supply) + 0.21 (return)
Sensor connections           0.2
Tees + bends                 0.48
Row manifold                 0.001
ΔP12                         4.07
Table 6 Uniformity of the working pressure at different ports of the row manifold

                                        D        C        B        A
Pressure supply at inlet ports (psi)    18.81    18.79    18.77    18.76

Conclusion

In this paper, a methodology was developed to characterize row manifolds, as no literature was found on liquid-cooled data center row manifolds. P-Q curves were generated for the six-port row manifolds, and the supply pressure and flowrate were measured at each port; no IT hardware with cooling loops was included in the loop to add heat loads. The experimental results were validated with flow network modeling (FNM), a 1D simulation technique suited to the analysis of flow distribution in liquid cooling systems. The design, characterization, and sizing of row manifolds affect multiple factors, such as the selection of CDUs and the dampening of supply coolant temperature fluctuations at low loads, which will be addressed in future papers. During the experiments and FNM modeling, it was observed that similar coolant flowrates were achieved through all the ports of the row manifolds, with a variation of 1 lpm, which lies within the flow sensor accuracy. The pressure drop across the row manifold was around 4.4 psi (30.34 kPa) at a flowrate of 166 lpm, and a negligible pressure drop was observed at a flowrate of 36 lpm. The results obtained from the FNM match the experimental results with a maximum variation of 10%.

Data Availability Statement

The authors attest that all data for this study are included in the paper.

References

1. Dunlap, K., and Rasmussen, N., 2012, "Choosing Between Room, Row, and Rack-Based Cooling for Data Centers," APC, South Kingstown, RI, Paper No. WP-130. https://2nsystems.com/wp-content/uploads/2019/09/Choosing-Between-Room-Row-and-Rack-based-Cooling-for-Data-Centers.pdf

2. Evans, T., 2004, "The Different Types of Air Conditioning Equipment for IT Environments," APC, South Kingstown, RI, Paper No. 59. https://cloudserver012718.home.pl/pub/katalogi/schneider/6_systemy_zasilania_gwarantowanego_i_chlodzenia/6_9_dokumenty_white_paper/wp-59.pdf

3. Kaulgud, P., Siddarth, A., Simon, V. S., and Agonafer, D., 2022, "Characterization of Parallel and Opposed Control Dampers to Observe the Effect on Thermal Mixing of Air Streams in an Air-Cooling Unit," 38th Semiconductor Thermal Measurement, Modeling & Management Symposium (SEMI-THERM), San Jose, CA, Mar. 21–25, pp. 62–66. https://ieeexplore.ieee.org/document/9755903

4. Nemati, K., Alissa, H. A., Murray, B. T., Schneebeli, K., and Sammakia, B., 2017, "Experimental Failure Analysis of a Rear Door Heat Exchanger With Localized Containment," IEEE Trans. Compon., Packag. Manuf. Technol., 7(6), pp. 882–892. 10.1109/TCPMT.2017.2682863

5. Tatchell-Evans, M., Kapur, N., Summers, J., Thompson, H., and Oldham, D., 2017, "An Experimental and Theoretical Investigation of the Extent of Bypass Air Within Data Centres Employing Aisle Containment, and Its Impact on Power Consumption," Appl. Energy, 186, pp. 457–469. 10.1016/j.apenergy.2016.03.076

6. Modi, H., Chowdhury, U., and Agonafer, D., 2022, "Impact of Improved Ducting and Chassis Re-Design for Air-Cooled Servers in a Data Center," 21st IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (iTherm), San Diego, CA, May 31–June 3, pp. 1–8. 10.1109/iTherm54085.2022.9899625

7. Modi, H., Shahi, P., Chinthaparthy, L. S. R., Gupta, G., Bansode, P., Shalom Simon, V., and Agonafer, D., 2022, "Experimental Investigation of the Impact of Improved Ducting and Chassis Re-Design of a Hybrid-Cooled Server," ASME Paper No. IPACK2022-97587. 10.1115/IPACK2022-97587

8. Breen, T. J., Walsh, E. J., Punch, J., Shah, A. J., Bash, C. E., Kumari, N., and Cader, T., 2012, "From Chip to Cooling Tower Data Center Modeling: Chip Leakage Power and Its Impact on Cooling Infrastructure Energy Efficiency," ASME J. Electron. Packag., 134(4), p. 041009. 10.1115/1.4007744

9. Breen, T. J., Walsh, E. J., Punch, J., Shah, A. J., and Bash, C. E., 2010, "From Chip to Cooling Tower Data Center Modeling: Part I Influence of Server Inlet Temperature and Temperature Rise Across Cabinet," 12th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, Las Vegas, NV, June 2–5, pp. 1–10. 10.1109/ITHERM.2010.5501421

10. Breen, T. J., Walsh, E. J., Punch, J., Shah, A. J., Bash, C. E., Rubenstein, B., Heath, S., and Kumari, N., 2012, "From Chip to Cooling Tower Data Center Modeling: Influence of Air-Stream Containment on Operating Efficiency," ASME J. Electron. Packag., 134(4), p. 041006. 10.1115/1.4007110

11. Kang, S., Miller, D., and Cennamo, J., 2007, "Closed Loop Liquid Cooling for High Performance Computer Systems," ASME Paper No. IPACK2007-33870. 10.1115/IPACK2007-33870

12. Gao, T., David, M., Geer, J., Schmidt, R., and Sammakia, B., 2015, "Experimental and Numerical Dynamic Investigation of an Energy Efficient Liquid Cooled Chiller-Less Data Center Test Facility," Energy Build., 91, pp. 83–96. 10.1016/j.enbuild.2015.01.028

13. Gao, T., Shao, S., Cui, Y., Espiritu, B., Ingalz, C., Tang, H., and Heydari, A., 2017, "A Study of Direct Liquid Cooling for High-Density Chips and Accelerators," 16th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), Orlando, FL, May 30–June 2, pp. 565–573. 10.1109/ITHERM.2017.7992537

14. Manaserh, Y. A., Gharaibeh, A. R., Tradat, M. I., Rangarajan, S., Sammakia, B. G., and Alissa, H. A., 2022, "Multi-Objective Optimization of 3D Printed Liquid Cooled Heat Sink With Guide Vanes for Targeting Hotspots in High Heat Flux Electronics," Int. J. Heat Mass Transfer, 184, p. 122287. 10.1016/j.ijheatmasstransfer.2021.122287

15. Hoang, C. H., Fallahtafti, N., Rangarajan, S., Gharaibeh, A., Hadad, Y., Arvin, C., Sikka, K., Schiffres, S. N., and Sammakia, B., 2021, "Impact of Fin Geometry and Surface Roughness on Performance of an Impingement Two-Phase Cooling Heat Sink," Appl. Therm. Eng., 198, p. 117453. 10.1016/j.applthermaleng.2021.117453

16. Chi, Y. Q., Summers, J., Hopton, P., Deakin, K., Real, A., Kapur, N., and Thompson, H., 2014, "Case Study of a Data Centre Using Enclosed, Immersed, Direct Liquid-Cooled Servers," Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 9–13, pp. 164–173. 10.1109/SEMI-THERM.2014.6892234

17. Simon, V. S., Modi, H., Sivaraju, K. B., Bansode, P., Saini, S., Shahi, P., Karajgikar, S., Mulay, V., and Agonafer, D., 2022, "Feasibility Study of Rear Door Heat Exchanger for a High Capacity Data Center," ASME Paper No. IPACK2022-97494. 10.1115/IPACK2022-97494

18. Parida, P. R., David, M., Iyengar, M., Schultz, M., Gaynes, M., Kamath, V., Kochuparambil, B., and Chainer, T., 2012, "Experimental Investigation of Water Cooled Server Microprocessors and Memory Devices in an Energy Efficient Chiller-Less Data Center," 28th Annual IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, Mar. 18–22, pp. 224–231. 10.1109/STHERM.2012.6188852

19. Greenberg, S., Mills, E., Tschudi, B., Rumsey, P., and Myatt, B., 2006, "Best Practices for Data Centers: Lessons Learned From Benchmarking 22 Data Centers," Proceedings of the ACEEE Summer Study on Energy Efficiency in Buildings, Asilomar, CA, Aug. 17–22, pp. 76–87. https://datacenters.lbl.gov/sites/default/files/aceee-datacenters.pdf

20. Gao, T., Tang, H., Cui, Y., and Luo, Z., 2018, "A Test Study of Technology Cooling Loop in a Liquid Cooling System," 17th IEEE Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITherm), San Diego, CA, May 29–June 1, pp. 740–747. 10.1109/ITHERM.2018.8419519

21. Shahi, P., Hurnekar, H., Deshmukh, A., Saini, S., Bansode, P., Kasukurthy, R., and Agonafer, D., "Assessment of Pump Power Savings at Rack Level for Dynamic Direct-to-Chip Liquid Cooling Using a Novel Flow Control Device," J. Enhanced Heat Transfer, 30(1), pp. 15–33. 10.1615/JEnhHeatTransf.2022044476

22. Heydari, A., Shahi, P., Radmard, V., Eslami, B., Chowdhury, U., Sivakumar, A., Lakshminarayana, A., et al., 2022, "Experimental Study of Transient Hydraulic Characteristics for Liquid Cooled Data Center Deployment," ASME Paper No. IPACK2022-97425. 10.1115/IPACK2022-97425

23. Heydari, A., Shahi, P., Radmard, V., Eslami, B., Chowdhury, U., Hinge, C., Cinthaparthy, L. S. R., et al., 2022, "A Control Strategy for Minimizing Temperature Fluctuations in High Power Liquid to Liquid CDUs Operated at Very Low Heat Loads," ASME Paper No. IPACK2022-97434. 10.1115/IPACK2022-97434

24. Shahi, P., 2022, "Cold Plate Based Dynamic Liquid Cooling at Data Center and Chip Level," Ph.D. dissertation, The University of Texas at Arlington, Arlington, TX. https://mavmatrix.uta.edu/mechaerospace_dissertations/136