Marrying CFD with real-time monitoring
21 Sep
posted by: Dave Wolfenden, Heatload
Power and cooling are, for many data centre owners, the biggest operational expenditure (Opex).
It is not just the physical cost of the energy that impacts Opex; it is also the increasing cost of emission taxes being applied to data centres. To help improve efficiency, data centre owners have turned to computational fluid dynamics (CFD) to model airflow and optimise cooling.
While the science behind CFD makes it ideal for modelling data centres, a model can quickly become outdated as workloads change. To solve this problem, Dave Wolfenden of Heatload explains why models must be verified and fed with real-time data to stay valid.
The rise of Computational Fluid Dynamics in the data centre
CFD is used in a wide variety of industries to understand fluid flows. Complex algorithms and analysis show how a fluid, in this case air, moves. This has made it important in industries such as aerospace and automotive as they look to improve aerodynamics. Racing teams use CFD to see how effective new parts on a car are during the early practice sessions at a race weekend. While those same parts will already have been modelled and tested in a wind tunnel, the real-time data from the track is used to tune the models.
CFD on its own is not enough. It is akin to creating a new aerodynamic part for a motor racing team, then bolting it to the car and hoping it will deliver race-winning performance. While it may deliver some benefits, they will be severely limited in scope and, worse still, they are likely to lead to a range of other decisions that degrade overall performance.
Use heat loads to test models
One way to improve models is to introduce heat loads into the data hall to simulate the type of workload that is expected. This data can be captured and then applied to the model to identify where it begins to diverge from the captured data. To stay with the racing analogy above, this is the equivalent of using a wind tunnel to test aerodynamic components before putting them on a race car.
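As a rough illustration of that comparison, the short Python sketch below checks temperatures captured during a heat-load test against the values a CFD model predicted at the same points and flags where the model diverges. The sensor names, readings and tolerance are hypothetical, not output from any particular CFD package.

```python
# A minimal sketch (hypothetical data, not a real CFD API): compare temperatures
# measured during a heat-load test against the values the CFD model predicted
# at the same sensor locations, and flag where the model diverges.

# Measured temperatures (deg C) captured at named sensor points during the test.
measured = {"rack_A1_top": 27.4, "rack_A1_bottom": 24.1, "aisle_cold_1": 21.8}

# Temperatures the CFD model predicted for the same points.
predicted = {"rack_A1_top": 25.0, "rack_A1_bottom": 24.0, "aisle_cold_1": 22.5}

TOLERANCE = 1.5  # deg C of acceptable divergence before the model needs recalibrating

for point, actual in measured.items():
    error = actual - predicted[point]
    if abs(error) > TOLERANCE:
        print(f"{point}: model diverges by {error:+.1f} degC - recalibrate here")
    else:
        print(f"{point}: within tolerance ({error:+.1f} degC)")
```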
The use of heat loads is nothing new. An increasing number of companies already use them to test the initial design of the data hall. The problem is that they are not universally used, nor are they regularly used during refurbishment. This is where data centre designers are missing the point. It is not just about the heat loads validating their designs and models; they also provide a better baseline and a library of designs that can speed up the design of future data centres.
Heat and cooling are directly related to workload
No matter how efficient the design model appears and how well it has performed under test conditions, it is only when real workloads are applied that it can be truly validated. This creates a significant challenge for designers. Hardware, software and workloads change over the life of a data hall, which means a model can be outdated before any hardware has been installed. When hardware changes, it is possible to import the technical data from the vendor to update a model, and this will help improve the model and the way the data centre is configured. The bigger problem is software and the underlying workloads.
An example of the problem is the introduction of virtualisation. Workloads changed from being contained on individual servers to running anywhere in the data centre. This created the opportunity to move high heat loads to areas where there was adequate cooling. Once the process was automated, however, workloads were placed to prioritise compute resources rather than heat load balancing.
Returning to the motor racing analogy, this is the equivalent of testing the aerodynamic components on the car during a test session. It delivers accurate data on how the components work under real conditions, which enables designers to further improve their models.
Moving beyond test loads to real-time data
There are several sources of data that can be used to help drive models in a live data centre. The key is to take advantage of the tsunami of sensors that have appeared inside the data centre over the last 20 years. These are located inside servers, storage devices, switches, power units, racks and aisles. So what data can be used and how?
Sensors in the racks and aisles provide information on airflow and on both hot and cold air temperatures. This data can be fed into the model to check where it predicts heat will build up, making it more accurate with real-time measurements. If linked to orchestration software, the data can also be correlated with workloads. This has the advantage of providing data that can be used to carry out predictive analysis of future cooling needs.
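As a simple illustration of that predictive step, the sketch below fits a straight line between the rack utilisation reported by an orchestration layer and the measured aisle temperature delta, then uses the fit to estimate the cooling impact of a planned workload level. All of the figures are hypothetical.

```python
# A minimal sketch (hypothetical numbers, not a vendor API): correlate the workload
# reported by the orchestration layer with the measured hot/cold aisle temperature
# delta, then use the fitted relationship to predict future cooling needs.

# Paired samples: average CPU utilisation (%) in a rack and the aisle temperature
# delta (deg C) measured at the same time.
utilisation = [20, 35, 50, 65, 80]
temp_delta  = [4.1, 6.0, 7.8, 9.9, 12.2]

# Ordinary least-squares fit: temp_delta ~= slope * utilisation + intercept
n = len(utilisation)
mean_u = sum(utilisation) / n
mean_t = sum(temp_delta) / n
slope = sum((u - mean_u) * (t - mean_t) for u, t in zip(utilisation, temp_delta)) \
        / sum((u - mean_u) ** 2 for u in utilisation)
intercept = mean_t - slope * mean_u

# Predict the temperature rise if the orchestrator schedules 90% utilisation here.
planned = 90
print(f"Predicted aisle delta at {planned}% load: {slope * planned + intercept:.1f} degC")
```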
Sensors inside servers can also provide a lot of key data. For example, they can provide information about CPU temperatures, which shows how much processing is being done. With the increase in analytics being carried out in-memory, this provides information on where certain workloads are running and the power and heat they generate.
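On a Linux host, these in-server readings can be collected with a few lines of code. The sketch below uses the open-source psutil library; sensor names and availability vary by platform, and the logging destination is left to the reader.

```python
# A minimal sketch using the psutil library (Linux only; sensor names vary by
# platform): poll in-server temperature sensors so CPU heat can be logged
# alongside the workloads the orchestrator reports as running on this host.
import psutil

readings = psutil.sensors_temperatures()  # {"coretemp": [shwtemp(label, current, ...)], ...}
for chip, sensors in readings.items():
    for sensor in sensors:
        label = sensor.label or chip
        print(f"{label}: {sensor.current:.1f} degC")
```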
Information from power supply units (PSUs) will also enable a greater understanding of power utilisation across the data centre. It will show where power is getting dangerously close to the maximum capacity in certain racks, and where there is little to no power drain, indicating under-utilised hardware.
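A simple way to act on that data is to roll PSU readings up per rack and compare them with the provisioned capacity. The sketch below does this with illustrative numbers and assumed thresholds rather than data from a real DCIM system.

```python
# A minimal sketch (hypothetical readings, not a DCIM API): sum PSU power readings
# per rack and flag racks running close to provisioned capacity, as well as racks
# drawing so little power that the hardware in them is likely under-utilised.

RACK_CAPACITY_W = 8000   # provisioned power per rack (assumed)
HIGH_WATERMARK = 0.90    # flag racks above 90% of capacity
LOW_WATERMARK = 0.10     # flag racks below 10% of capacity

# PSU readings in watts, grouped by rack (illustrative numbers).
psu_readings = {
    "rack_A1": [3600, 3550, 300],
    "rack_B2": [410, 380],
}

for rack, watts in psu_readings.items():
    draw = sum(watts)
    utilisation = draw / RACK_CAPACITY_W
    if utilisation > HIGH_WATERMARK:
        print(f"{rack}: {draw} W ({utilisation:.0%}) - dangerously close to capacity")
    elif utilisation < LOW_WATERMARK:
        print(f"{rack}: {draw} W ({utilisation:.0%}) - hardware likely under-utilised")
    else:
        print(f"{rack}: {draw} W ({utilisation:.0%})")
```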
All of this not only helps inform the CFD models but also the longer-term models around data centre design and utilisation. IT managers can now see just how effectively they are utilising resources and what that level of utilisation costs.
Conclusion
Modelling a data centre is a key part of any design. Failing to update that model with real-time data when it is available means the model is not only ineffective but can also incur considerable extra costs.