A data processing center (CPD) is the place where all the resources necessary for processing an organization's information are concentrated. These are buildings or rooms duly equipped with large amounts of electronic equipment, computers, and communication networks. We offer the best products for APC Data Center Solutions in Pakistan.
The availability of resources and access to data center information are essential to the daily operations of companies. It is therefore essential to have a stable CPD with the greatest possible availability, which means that the cooling of data centers is of vital importance.
A data center is not only hardware and telecommunications; it also includes electrical installations, CPD cooling, fire protection, and lighting, among other systems. A large part of the energy consumed in a data center corresponds to industrial refrigeration systems, which become a key element in reducing energy costs. CPD cooling is directly linked to energy consumption and, consequently, correct cooling allows significant energy savings to be obtained.
CPD Classification
For the classification of data centers there is a standard, ANSI/TIA-942 (Telecommunications Infrastructure Standard for Data Centers), created by industry members, consultants, and users, which compiles best practices for the construction and management of data centers. This standard includes an annex on Tier availability levels, which indicate the reliability of a data center. The higher the Tier level, the greater the availability of the CPD.
Tier I
– Basic Data Center. Susceptible to both planned and unplanned interruptions. There are no redundant components in power distribution or cooling. It must be taken out of service at least once a year for maintenance.
Tier II
– Data Center with redundant components. Less susceptible to interruptions. It has redundant components (N+1 design, meaning there is a duplicate of each component) but a single distribution path, so maintenance of the distribution line or other parts of the infrastructure requires a service interruption.
Tier III
– Data Center with concurrent maintenance. It allows maintenance activities to be planned without affecting the service. There is sufficient capacity and distribution to carry out maintenance tasks on one line while the others serve the load. It admits planned activities such as preventive maintenance, repairs or replacement of components, and testing of systems or subsystems, among others. Unplanned activities, such as operational errors, can still cause a data center outage. In systems using water cooling, a double set of pipes is installed. Redundant components are installed and connected to multiple electrical and cooling distribution lines, with only one line active at a time.
Tier IV
– Fault-tolerant Data Center. It allows any planned activity to be carried out without service interruptions, and it also continues working in the event of a critical unplanned event. This requires two distribution and cooling lines with multiple redundant components (2(N+1), i.e., two UPS systems, each with N+1 redundancy).
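To put these levels in perspective, the availability figures commonly quoted for each Tier can be converted into expected downtime per year. The following Python sketch is illustrative only; the percentages are the widely cited values, not part of the standard text reproduced above.

```python
# Commonly cited availability figures for the four Tier levels,
# converted into expected downtime per year (illustrative sketch).
TIER_AVAILABILITY = {
    "Tier I":   0.99671,  # basic site, no redundancy
    "Tier II":  0.99741,  # redundant components (N+1)
    "Tier III": 0.99982,  # concurrently maintainable
    "Tier IV":  0.99995,  # fault tolerant, 2(N+1)
}

HOURS_PER_YEAR = 365 * 24

for tier, availability in TIER_AVAILABILITY.items():
    downtime_h = HOURS_PER_YEAR * (1 - availability)
    print(f"{tier}: {availability:.3%} availability, "
          f"about {downtime_h:.1f} h of downtime per year")
```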
The Objective: Energy Efficiency in the CPD
There is currently a trend towards the construction of ever-larger data centers, which demand the best technologies to guarantee their operation and improve their efficiency. The energy consumption of these centers therefore keeps growing rapidly.
Simulate The Thermal Load Of Refrigeration Systems
Before housing the entire IT infrastructure in the CPD Galileo, it was necessary to verify and test both the cooling systems and the UPS systems distributed across three branches, in order to determine, among other things:
- If the cooling system was well dimensioned
- If the airflow was well distributed
- If the reserve equipment could respond correctly to temperature rises or to the failure of other air conditioning units.
- If the configuration, recirculation, and bypass of the cooling system were correct, adjusting its airflows as needed.
To verify all these points, the technical office carried out the study, supply, and installation of intelligent load banks with the aim of simulating the thermal load, considering that the total installed cooling power was 1,056 kW.
On the other hand, to verify the correct operation of the UPS systems and the associated electrical protection systems, we connected the load banks to the different connection points of the three busbars, in order to load the entire electrical system that will feed the future IT infrastructure.
Smart Load Banks at Different Points in the Room
For the development of this project, we used intelligent 21 kW load banks from our partner Rent load and distributed them evenly across different points in the room. The objective was to achieve an equitable distribution over the three electrical branches and a correct distribution of hot spots in order to test the cooling systems.
We commissioned the equipment and carried out the tests, simulating different scenarios of electrical consumption and thermal load. In addition, thanks to the monitoring system of the equipment used, we were able to record all the data and document the commissioning tests.
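As a rough illustration of the sizing behind this test, the sketch below (our own, not taken from the project documentation) estimates how many 21 kW banks are needed to cover the 1,056 kW of installed cooling power and how they split across the three branches.

```python
import math

INSTALLED_COOLING_KW = 1056   # total installed cooling power under test
LOAD_BANK_KW = 21             # rating of each intelligent load bank
BRANCHES = 3                  # electrical branches feeding the room

banks_needed = math.ceil(INSTALLED_COOLING_KW / LOAD_BANK_KW)   # 51 banks
banks_per_branch = math.ceil(banks_needed / BRANCHES)           # 17 per branch
simulated_kw = banks_per_branch * BRANCHES * LOAD_BANK_KW       # 1,071 kW

print(f"Load banks required: {banks_needed}")
print(f"Banks per branch:    {banks_per_branch}")
print(f"Simulated load:      {simulated_kw} kW "
      f"({simulated_kw / INSTALLED_COOLING_KW:.0%} of installed cooling)")
```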
Before addressing the topic of data center cooling in depth, we began with a brief introduction defining what data centers are and how their availability is assessed, an aspect in which the CPD cooling system and its maintenance are key.
The Importance of Data Center Cooling
The heat production of the equipment that makes up a data center is one of the main problems and one that most worries its administrators. Excess heat in a server room negatively affects the performance of the equipment, shortens its useful life, and can become dangerous if it reaches high levels. Therefore, the design of a good cooling system for data centers is of vital importance.
In this design, sizing the system is essential, which requires understanding the amount of heat produced by the IT equipment, as well as that generated by other elements usually present, such as UPS systems, power distribution, air conditioning units, lighting, and people.
The Thermal Load
Paying attention to all of this is basic to calculating the thermal load. In a typical installation, the heaviest loads are distributed as follows (a brief sizing sketch follows the list):
- 70% usually corresponds to the load of IT equipment.
- 9% to lighting.
- 6% to power distribution.
- 2% to people.
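As a quick illustration of how this breakdown can be used, the sketch below scales a known IT load up to the total thermal load to be removed. The 700 kW IT load is an arbitrary example, and the residual 13% is simply the remainder of the budget above.

```python
LOAD_SHARES = {
    "IT equipment":       0.70,
    "lighting":           0.09,
    "power distribution": 0.06,
    "people":             0.02,
    "other loads":        0.13,   # remainder (UPS losses, CRAC fans, ...)
}

it_load_kw = 700.0   # measured IT load; arbitrary example value
total_kw = it_load_kw / LOAD_SHARES["IT equipment"]   # 1,000 kW in total

for source, share in LOAD_SHARES.items():
    print(f"{source:18s} {share:4.0%} -> {total_kw * share:7.1f} kW")
print(f"Total thermal load to remove: {total_kw:.0f} kW")
```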
Also, in addition to removing heat, a data center air conditioning system must be designed to control humidity. However, in most air conditioning systems, the cooling function causes significant condensation of water vapor and a consequent loss of humidity. Therefore, supplemental humidification is necessary to maintain the desired humidity level.
This supplemental humidification creates an additional heat load on the computer room air conditioning (CRAC) unit, clearly decreasing its CPD cooling capacity and making oversizing necessary. The design of the air duct network or the raised floor is also important, as it has a significant effect on the overall performance of the system and greatly affects the uniformity of temperature within the data center.
Choosing a modular air distribution system, coupled with proper heat load estimation, can significantly reduce data center design configuration requirements.
Cooling Systems for Data Centers
Large data centers normally exceed 500 to 1,000 m2, with an average load density of 1,500 W/m2; communication sectors run at approximately 300 W/m2, and high-density areas can reach 4,000 W/m2. To this we must add the cooling of electrical rooms, offices, workshops, etc. In general, capacities of 3 to 5 MW of cooling are reached in a complete data center.
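Multiplying these densities by plausible zone areas reproduces that order of magnitude. In the sketch below, only the W/m2 figures come from the text; the areas are assumed example values.

```python
ZONES = [
    # (zone,              assumed area m2, density W/m2 from the text)
    ("IT white space",     1500,           1500),
    ("communications",      400,            300),
    ("high-density area",   300,           4000),
]

total_w = sum(area * density for _, area, density in ZONES)
for name, area, density in ZONES:
    print(f"{name:18s} {area:5d} m2 x {density:4d} W/m2 = "
          f"{area * density / 1e6:.2f} MW")
print(f"IT cooling load: {total_w / 1e6:.2f} MW "
      "(before electrical rooms, offices, workshops, ...)")
```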
We often select chilled water systems, not only for their electrical consumption but also for their flexibility in application and the low volume of refrigerants in use.
For the production of chilled water there are different types of chillers:
- Air-cooled chillers with standard axial fans or low-consumption EC fans and a screw compressor.
- Air-cooled chillers with standard axial fans or low-consumption EC fans, with free-cooling and a screw compressor.
- Water-cooled chillers with cooling towers and a screw or centrifugal compressor.
- Water-cooled chillers with a closed-circuit dry cooler and a scroll compressor, possibly screw or centrifugal.
Chillers that integrate free-cooling offer interesting energy savings, depending mainly on the climatic conditions of the area where they are installed and on the selected air injection temperature in the cold aisles of the data center. Free-cooling consists of passing the water return coming from the data center first through a water-air heat exchange coil mounted on the condenser air intake, thus pre-cooling the return water.
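The effect of that pre-cooling coil can be approximated with a simple effectiveness model, sketched below. The loop temperatures and the 0.6 coil effectiveness are assumed example values, not figures from any particular installation.

```python
def precooled_return_temp(return_c: float, ambient_c: float,
                          effectiveness: float = 0.6) -> float:
    """Water temperature after the free-cooling coil (simple effectiveness model)."""
    if ambient_c >= return_c:
        return return_c   # no benefit when the outside air is warmer than the return
    return return_c - effectiveness * (return_c - ambient_c)

return_c, supply_c = 18.0, 12.0   # example chilled-water loop temperatures
for ambient_c in (5.0, 12.0, 25.0):
    pre = precooled_return_temp(return_c, ambient_c)
    free_share = min(1.0, (return_c - pre) / (return_c - supply_c))
    print(f"ambient {ambient_c:4.1f} C -> return pre-cooled to {pre:4.1f} C "
          f"({free_share:.0%} of the cooling done for free)")
```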
Cooling Solution for Tier III Data Centers
This is a typical solution for the cooling of Tier III type data centers, using air-cooled chillers without free-cooling.
The chillers discharge the chilled water to a collector with a double outlet to the pump station that feeds the CRAH (Computer Room Air Handler) and PTU (Pump Transfer Unit) consumptions.
A PLC is responsible for calculating the number of chillers in operation, starting the stand-by unit in the event of a failure in an operating unit, and rotating the units to ensure similar wear over time.
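A hypothetical sketch of that control logic is shown below: enough chillers are staged for the load, the stand-by unit replaces a faulted one, and the lead units rotate by accumulated running hours. The class, names, and the 90% staging threshold are our own illustration, not the actual PLC program.

```python
import math

class ChillerStaging:
    """Illustrative stand-in for the PLC: staging, stand-by start, rotation."""

    def __init__(self, n_chillers: int, unit_kw: float):
        self.unit_kw = unit_kw
        self.hours = [0.0] * n_chillers   # accumulated running hours per unit
        self.faulted: set[int] = set()

    def select(self, load_kw: float) -> list[int]:
        # stage enough units, keeping a 10 % margin per chiller (assumed)
        needed = max(1, math.ceil(load_kw / (self.unit_kw * 0.9)))
        healthy = [i for i in range(len(self.hours)) if i not in self.faulted]
        # rotation: run the units with the fewest accumulated hours
        running = sorted(healthy, key=lambda i: self.hours[i])[:needed]
        for i in running:
            self.hours[i] += 1.0          # one hour per control cycle, for the demo
        return running

plant = ChillerStaging(n_chillers=3, unit_kw=500)
print(plant.select(800))   # [0, 1]: two units staged, unit 2 in stand-by
plant.faulted.add(0)       # an operating unit faults ...
print(plant.select(800))   # [2, 1]: the stand-by unit takes over
```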
Depending on the importance of the server room, we can install additional chilled water tanks in case of a total collapse in chilled water production.
The main criterion when designing the production of chilled water (primary hydraulic circuit) is to ensure a constant flow of water through the evaporator of each chiller. In addition, it is recommended to install water flow control valves in order to guarantee the exact water flow required by each chiller.
The main design criterion of the secondary hydraulic network (supplying consumers with chilled water) is to ensure a flow of water that matches consumption at any given time. Usually, a bypass is installed because the primary water flow does not always coincide with the secondary; the bypass compensates for the difference.
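A minimal sketch of that decoupling bypass, with example flow values of our own, follows:

```python
def bypass_flow(primary_m3h: float, secondary_m3h: float) -> float:
    """Positive: surplus primary water returns through the bypass.
    Negative: secondary demand exceeds primary production (to be avoided)."""
    return primary_m3h - secondary_m3h

primary = 2 * 90.0   # e.g. two chillers at a constant 90 m3/h each
for secondary in (120.0, 180.0, 200.0):
    print(f"secondary {secondary:5.1f} m3/h -> "
          f"bypass {bypass_flow(primary, secondary):+6.1f} m3/h")
```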
For further details, contact us about APC UPS prices in Pakistan.