STULZ News http://www.stulz.com.mx/en/ Here you will find the latest blog articles, press releases, professional articles and events from stulz.com.mx. en-gb STULZ Wed, 02 Sep 2020 10:35:07 +0200 news-2069 Wed, 31 Jul 2019 17:26:22 +0200 Stulz, Eaton, Panduit and Cisco certify the first generation of ICE Alliance 2.0 http://www.stulz.com.mx/en/newsroom/event/stulz-eaton-panduit-y-cisco-certifican-a-la-primera-generacion-de-ice-alliance-20-2069/ In addition to presenting the ICE Alliance 2.0 certifications, the manufacturers announced benefits... Seven months after the four brands joined forces to create a network of distributors offering joint solutions for data centers, they have awarded the first certifications that authorize channel partners to sell the solution.

The manufacturers recalled that ICE Alliance 2.0 is a comprehensive solution that addresses the need for hybrid clouds, core computing and edge computing.

During the presentation, Kaleb Ávila, Vice President of Panduit for Latin America, recalled that in 2017 the four companies came together to develop solutions based on the synergy of their portfolios, in support of technologies such as 5G and IoT; trends that promised to become a solid business reality.

“We got ahead of the future. Today all of these trends are a reality for the digital transformation of companies, and with this offering, you will be the ones who bring digitalization to life,” he explained.

He added that this is not just a certification, but the result of intense preparation to face the challenges that lie ahead.

Enrique Chávez, Director of Operations for Latin America and the Caribbean at Eaton, reported that 24 companies were certified, with 49 individuals accredited across them.

“We are proud to recognize the first generation of ICE; the four brands will keep working with you to generate plenty of business. The synergy we have achieved as manufacturers will bring you many benefits, and we will hold launches for end users in Guadalajara, Monterrey and Mexico City,” he commented.

Carlos García, representative of Cisco, acknowledged the effort and investment of the distributors, and reaffirmed the support they have from the brand and its support center: “This solution is unique. For the first time we are offering a complete tool covering infrastructure, management, processing and hyperconvergence,” he added.

Jorge Belsaguy, Managing Director of Stulz México, underlined the importance of having certified partners who can take on this type of project in order to generate more business.

 

Another important announcement was that the first event for end users in Mexico will take place on August 18, with the aim of showcasing the value of the proposal; “this is a project with more than four years of maturation,” said Jorge of Stulz.

Among the benefits, they mentioned the creation of an ICE Alliance 2.0 newsletter offering news about the brands, alliances and future projects. They also announced an incentive for the first three companies to sell the solution: a tour of the manufacturers' plants, with the invitation extending to one end user.

 

Originally published in IT SITIO

July 30, 2019

 

]]>
news-1959 Mon, 08 Oct 2018 21:34:02 +0200 Stulz officially presents itself in Mexico with a breath of fresh air (Published in esemanal) http://www.stulz.com.mx/en/newsroom/news/stulz-se-presenta-oficialmente-en-mexico-con-nuevos-aires-publicada-en-esemanal-1959/ BY

Although the brand, which specializes in air conditioning, has already been in Mexico for roughly four years, it is now seeking to solidify its commercial strategies through the expert channel.

The German company announced that it will work closely with wholesalers and integrators throughout Mexico, on whom it will base its commercial models, although it clarified that it will continue to serve some strategic accounts directly.

Without going into much detail, the brand indicated that it will hold a kick-off in the third week of October. What little the executives revealed is that the program will consist of two tiers: wholesalers and associates. For the former there will be a certification, because the brand adapted its scheme and portfolio to make it available through this figure, given that these are products that call for a more consultative sale with many design criteria.

The second tier is a certification for expert partners with prior experience in cooling as their primary business: they have their own sales force, carry out installations and sell complementary products. A partner who approaches the brand without that expertise or the full set of tools joins as an Alliance Partner, whom the wholesaler can back with the knowledge needed to sell, install and maintain the solution. For the time being, Stulz aims to close the year with five signed wholesalers; although it is already working with some, their names were withheld.

They stressed that the big win for partners lies in services, beyond the margin generated by product sales. To that end, the expert associate must be able to provide first-level support to the end customer, and must be well trained and certified to offer everything up to and including maintenance.

Stulz aims to close with a 23-25% market share in the country, so the program is meant to be a benchmark for the channel, helping and empowering partners. As evidence of this, it imposes no quotas or exclusivity contracts in the Alliance Partner category.

Finally, the spokespeople noted that partners who were already working with them before the channel program was created will enjoy certain benefits and conditions that reward their loyalty. Beyond government, the brand will target the fragmented market of small and medium-sized businesses.

Published in esemanal

]]>
news-1402 Tue, 04 Oct 2016 18:59:09 +0200 STULZ and TSI announce joint venture to deliver modular data centre solutions http://www.stulz.com.mx/en/newsroom/news/stulz-and-tsi-announce-joint-venture-to-deliver-modular-data-centre-solutions-1402/ The joint venture will enable the two companies to cooperate closely in delivering unique modular data center solutions globally using the very latest cooling technologies. "We have identified modular data centers as a growing market segment," commented Oliver Stulz, Managing Director, STULZ GmbH. "This joint venture with TSI allows us to offer customers a complete solution for modular data centers, from high performance computing to telecom enclosures, using the latest bespoke-designed STULZ cooling technology."

"This joint venture increases our ability to deliver and support our modular DC solutions around the globe with future-proof designs, solutions and services. Partnering with STULZ, such a well-known brand of quality products, enables us to compete at all levels and support our global clients with a worldwide service organization network comprising approximately 6,000 service staff," commented Simon Gardner, Managing Director of TSI.

"TSI is attractive to the STULZ Group because it aligns well with our company, with which it already has a long trading relationship and synergy. We believe the joint venture brings best-of-breed technology to the data center market," said STULZ Global Sales Director Christoph Stulz. "We have seen a huge increase in demand for modular data centers globally, and working together we will look to supply flexible, highly efficient solutions."

For more information visit www.tsiuk.com

 

About TSI

TSI’s core business is the design, build and maintenance of modular data centers supplied globally. The company has over 25 years of experience in this market and continues to provide its clients with a professional service. TSI not only understands the autonomous systems implemented within a data center, but also appreciates that these systems can affect the data center environment as a single entity, so every aspect of a fault or repair should be investigated as a risk to the mission-critical facility. Located in Oxford, UK, the company employs staff dedicated to these functions. TSI holds the ISO9001:2008 certification for Quality Management, which ensures that its processes and procedures are monitored and enforced, and is also ISO14001 accredited. TSI is a TIA-942 accredited designer and auditor.

]]>
news-743 Mon, 23 May 2016 09:33:17 +0200 Low-noise chillers for data centers: Peace and quiet for your cooling needs http://www.stulz.com.mx/en/newsroom/professional-article/low-noise-chillers-for-data-centers-peace-and-quiet-for-your-cooling-needs-743/ For data centers close to residential areas, compliance with noise regulations is extremely important. Air conditioning systems can be problematic in this respect. Chillers with a cooling capacity of 500 kW, in particular, often generate a lot of noise during operation. Here, low-noise solutions from STULZ ensure the necessary noise reduction.


As long as data centers are predominantly located on so-called greenfield sites outside of urban centers, noise emissions are not a major issue. However, these days even large data centers are being built ever closer to populated districts, so that data center operators can no longer avoid tackling the subject of noise optimization. This is especially the case in Germany, where strict noise regulations must be met, particularly in the hours of evening and night. Industrial and service companies – not to mention operators of concert halls and sports facilities – could tell us a thing or two about conflicts with local residents who say they are disturbed by noise. For a data center running in continuous operation 24/7 all year round, there is genuine cause for concern.


Avoiding critical thresholds from the start

To prevent potential conflict long before it rears its ugly head, data center operators must consistently use solutions whose noise emissions do not come anywhere near critical thresholds. Of course, this also applies to cooling systems with compressors, pumps and fans, which can produce considerable noise. Here, we recommend only installing systems and components that are so quiet that their operation is unproblematic even at night. However, what sounds good in theory is not always simple to achieve in practice. A classic dilemma is posed by chillers exceeding approx. 500 kW of cooling capacity that are situated on or next to a data center. The lowest noise emissions would be achieved using soundproof encapsulated compressors and maximum size fans, which produce the necessary airflow at comparatively low speeds and can thus keep noise to a minimum – as a system. However, it is evidently not easy to reconcile soundproof encapsulated compressors and large fans with the standardized sizes of chillers. The restrictions that standardized chiller dimensions impose on length and width mean that smaller fans are generally installed. These deliver the necessary capacity, but only at the price of higher speeds, which increases both noise emissions and energy costs.

The rise in heat loads that goes hand in hand with the increased heat density in today's data centers has made the problem even worse: to dissipate the resulting heat quietly and efficiently, even more chillers would in fact need to be installed on numerous data center sites. Sometimes, however, there is insufficient space for this, so the existing CW systems come under even greater load. The result is higher electricity costs and higher noise emissions.

To provide data center operators with a way out of this dilemma and enable chillers to be used in residential areas, too, quite a number of manufacturers are equipping their fans with noise-reducing diffusors. Hamburg-based precision air conditioning specialist STULZ, on the other hand, is taking an alternative route, and manufactures its Cyber Cool 2 chillers with a view to reduced noise levels right from the start. Even during the development phase, thorough research has gone into compressors, fans and pumps, to examine their noise emissions during operation. Based on the results of these tests, systematic measures have been taken to minimize the amount of noise generated.


Soundproof encapsulated compressors, maximum size fans

The first approach to tackle noise reduction involved the compressors. In some chillers, they are so exposed that their operating noise is diffused into the environment largely unfiltered. Compressor housings, too, are frequently not noise optimized and, depending on their design, can even increase the noise level. At STULZ, on the other hand, the compressors are housed in a special, soundproof encapsulated chamber. Like the walls of a recording studio, their interior walls are completely lined with sound-insulating material, so that as little noise as possible reaches the outside. This first step in itself has greatly reduced noise emissions.

While the development of a soundproof encapsulated compressor chamber was a solution that enjoyed the benefit of experience from other industries, the question of optimizing fan noise levels was considerably more complex. Here, we needed to design a compact construction that enabled noise and efficiency values to be optimized while adhering to the chillers' standard surface dimensions. STULZ solved this difficult task by installing maximum size fans. These make the best possible use of the available space, and are lined up next to one another so closely that you can barely slip a proverbial sheet of paper between them. And finally, with diameters of 910 millimeters, they are so large that they can move the necessary volumes of air at moderate speeds, thereby working quietly and energy efficiently. In order for the strengths of this new fan system to be exploited to the full, the entire air conduction system, from the intake through the heat exchangers to the fans, had to be redesigned.

 

 

]]>
news-737 Fri, 18 Mar 2016 13:36:22 +0100 Dehumidification in data centers when using CW units at high-temperature levels http://www.stulz.com.mx/en/newsroom/blog/dehumidification-in-data-centers-when-using-cw-units-at-high-temperature-levels-737/ Most users and planners are now aware that temperature levels when cooling IT equipment in data...

Most users and planners are now aware that temperature levels when cooling IT equipment in data centers have changed dramatically in recent years.

The main reason for the adjustment of air temperatures is the ASHRAE TC 9.9 recommendation of 2011, which recommends air inlet temperatures to IT equipment in a range from 18 °C up to a maximum of 27 °C. Adding an average temperature rise of 10-15 K as the air flows through the IT equipment, this produces return air temperatures back to the A/C unit in the range from 28 °C to 42 °C (see blog article "Delta T"). Arguably the most important "side-effect" of this recommendation, however, is utilization of so-called "free cooling" – that is, cooling of IT equipment as far as possible without the energy-intensive use of compression cooling (see blog article "Free cooling").

Thanks to their good scalability and comparatively simple hydraulics, chilled water-cooled precision A/C units (so-called CW units) are the usual choice in large data centers; they require centralized chilled water production (see also blog article "Standby management"). To improve the efficiency of the chiller, and to utilize free cooling at comparatively high outside temperatures, chilled water systems are also being run at ever higher water temperatures. A positive side-effect of high water temperatures in conjunction with high air temperatures is that the purely sensible cooling targeted in data centers (in order to avoid cost-intensive humidification) is assured.

In summary: higher air temperatures + higher water temperatures = avoidance of dehumidification in normal cooling operation and improved utilization of free cooling.

To return briefly to the ASHRAE recommendations: the allowed range of relative humidity for the IT equipment spans a very generous 20 % to 80 %.

All these factors together mean, in principle and in theory, that nowadays there is no need for dehumidification or humidification in normal cooling of a data center. Sadly, this is another area in which theory and practice differ. There are ESD (electrostatic discharge) requirements for the IT equipment; data center staff are in the room; the room is not 100 % air-tight, so humidity is introduced from outside; doors are opened and closed; and so on. Any humidification that becomes necessary as a result is comparatively easy to realize (a humidifier in the A/C unit or in the room). Dehumidification, however, should it be required, is difficult.

In the past (when the equipment was operated at lower air and water temperatures), dehumidification with CW units was performed in the following way:

The chilled water control valve is fully opened in dehumidification mode, increasing the water volume flow through the cooling coil. This increases the total cooling capacity of the unit, and the unit's water outlet temperature falls. The temperature difference between the air and water side increases, and the resultant drop below the dew point causes the required dehumidification. In some cases the speed of the EC fans (if installed) is also reduced in order to boost the effect.

When operating CW units at high air and water temperatures, the problem then arises that the drop below dew point necessary for dehumidification can no longer be achieved, because the general temperature level is simply too high.
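The dew-point arithmetic behind this can be sketched with the Magnus approximation. This is an illustrative calculation, not from the article: the 24 °C / 50 % r.h. return air condition and the 18 °C chilled water supply temperature are assumed values.

```python
import math

def dew_point_c(temp_c: float, rel_humidity: float) -> float:
    """Approximate dew point in degC via the Magnus formula.

    a and b are the commonly used Magnus parameters; rel_humidity
    is a fraction (0.50 = 50 % relative humidity).
    """
    a, b = 17.62, 243.12
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity)
    return b * gamma / (a - gamma)

# Return air at 24 degC and 50 % relative humidity:
td = dew_point_c(24.0, 0.50)  # ~12.9 degC

# With a chilled water supply of, say, 18 degC, the coil surface
# stays well above this dew point, so no condensation (and hence
# no dehumidification) can take place.
print(round(td, 1))
```

At the lower water temperatures used in the past, the coil surface could drop below such a dew point; at today's elevated water temperatures it cannot, which is exactly the problem described above.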

So what can be done to provide dehumidification?

The technically most practical way is to use one or more so-called "dual-fluid" units in a GCW design. A dual-fluid unit is a combination of a direct expansion (DX) and a CW unit (see GCW refrigeration system). In the GCW design the unit's refrigerant circuit is closed. The heat is dissipated by way of a water-cooled plate condenser, which in this case is simply connected to the existing chilled water system. In normal cooling operation, the unit's CW circuit is additionally used; for dehumidification, however, the switch is made to DX mode. Dehumidification and the drop below dew point are very much easier to achieve in DX mode because the evaporation temperature is normally lower than the water temperature level, or can be more easily brought to the required dehumidification level by way of controls in the refrigerant circuit (expansion valve). The numbers and/or cooling capacities of these units then depend on the expected dehumidification capacity and the size of the data center.

]]>
news-734 Thu, 18 Feb 2016 10:54:41 +0100 AER – A new efficiency indicator for airflow in Data Centers http://www.stulz.com.mx/en/newsroom/blog/aer-a-new-efficiency-indicator-for-airflow-in-data-centers-734/ Recently, a value has repeatedly popped up in air conditioning system specifications, defining the... Recently, a value has repeatedly popped up in air conditioning system specifications, defining the maximum permitted power consumption for the fans at a certain airflow.

In the past, the primary consumer of energy in an air conditioning system was the compressor. Today, most Data Centers use air conditioning systems with Free Cooling. Now mechanical or compressor cooling is only required if it is very warm outside, and the Free Cooling is insufficient for transporting heat out of the Data Center. Due to these changes in air conditioning systems, fan power consumption is now moving into the spotlight, as air needs to be conveyed through the Data Center even in Free Cooling mode. Therefore, these days the fans in the air conditioning units are frequently the primary energy consumer.

To demonstrate how efficiently air is conveyed through a Data Center, we look at the ratio of fan power consumption to airflow. And to give this ratio a name, we have called it AER, which stands for Airflow Efficiency Ratio.

The AER describes the ratio of fan power consumption to the airflow of an air conditioning unit at a given external static pressure. The unit used for the AER value is W / (m³/h). In order to obtain numerical values that are easy to handle, we chose watts rather than kilowatts - the usual means of measuring fan power consumption - as the unit. The smaller the AER value, the better: the less power consumed to achieve a certain airflow, the more efficient the unit.

Here are two examples:

  1. A precision air conditioning unit achieves an airflow of 30,000 m³/h at a static pressure of 20 Pa in the raised floor. The fans have a power consumption of 3.3 kW, or 3,300 watts. This translates as AER = 3,300 / 30,000 = 0.11 W / (m³/h).

  2. For a typical air handler, which conveys air at a rate of 80,000 m³/h at an external static pressure of 50 Pa in the ducts to the Data Center, a power consumption of 28.0 kW results in AER = 28,000 / 80,000 = 0.35 W / (m³/h).

The AER can be used to compare different air conditioning units, identical units with differing airflows (e.g. with and without active standby units), or different air conditioning systems in comparable conditions.
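The two examples above can be reproduced with a one-line helper; this is just a minimal sketch of the definition given in the text:

```python
def aer(fan_power_w: float, airflow_m3h: float) -> float:
    """Airflow Efficiency Ratio in W / (m3/h): fan power divided by airflow."""
    return fan_power_w / airflow_m3h

# Example 1: precision A/C unit, 3,300 W fan power at 30,000 m3/h
print(aer(3_300, 30_000))   # 0.11 W / (m3/h)

# Example 2: air handler, 28,000 W fan power at 80,000 m3/h
print(aer(28_000, 80_000))  # 0.35 W / (m3/h)
```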

It is safe to assume that in future the AER will become a common item among the technical data in air conditioning unit brochures.

]]>
news-733 Thu, 04 Feb 2016 14:55:32 +0100 A brief history of precision air conditioning technology http://www.stulz.com.mx/en/newsroom/professional-article/a-brief-history-of-precision-air-conditioning-technology-733/ STULZ and the road from computer room cooling to modern Data Center air conditioning: The history of... STULZ and the road from computer room cooling to modern Data Center air conditioning: The history of precision air conditioning technology begins in the early 1970s with the air conditioning of the countless computer rooms that were springing up. With the transition to the modern Data Center, the exceptionally diverse landscape of precision air conditioning solutions that we know and trust today gradually came into being – a process in which STULZ repeatedly took a pioneering role. The most important milestone was the CyberAir 1, which was the world's first precision air conditioning system to be fitted with EC fans as standard.

]]>
news-723 Mon, 23 Nov 2015 15:27:00 +0100 Delta T – The air-side temperature difference http://www.stulz.com.mx/en/newsroom/blog/delta-t-the-air-side-temperature-difference-723/ Increased efficiency in the Data Center, improved PUE, lower losses – what does all this have to do... A server in a Data Center takes in air at a certain temperature. Once inside the server, this air warms up due to the heat produced by all the components in the server. The air that then exits the server is roughly 10 °C to 15 °C hotter.

An air conditioning unit in a Data Center also takes in air at a certain temperature. Inside the air conditioning unit, this air is cooled and the extracted heat conveyed to the outside. So the air that exits the air conditioning unit is approximately 10 °C to 15 °C cooler.

That all works out fine then, doesn't it? Unfortunately not.

The above-mentioned 10 °C to 15 °C is the so-called air-side temperature difference, or Delta T.

In a theoretical ideal scenario – a closed circulation of air between the server and the air conditioning unit – there would be a certain air-side temperature difference, and the air conditioning unit would work at its planned maximum level of efficiency.

In a real Data Center, this is sadly not the case. The cold air exits the air conditioning unit, flows through the raised floor, enters the cold aisles through the perforations in the raised floor grilles, is sucked in by the server, heated, blown out into the hot aisle, and then begins its journey back to the air conditioning unit. However, air is stupid and lazy. It doesn't know that this is the route it has to take, and showing it the way with blue and red arrows is no help at all.

Some of the air finds openings in the raised floor, e.g. cable cut-outs in the hot aisle that have not been sealed, gaps between the raised floor grilles or even grilles missing altogether below the racks. It then takes one of these shortcuts back to the hot aisle and straight back to the air conditioning unit, without ever having seen a server from the inside or having taken any of its heat away with it. Other bits of air take the planned route into the cold aisle, but then sneak between the servers through unused rack surfaces, or to either side of the servers, hot-footing it straight to the hot aisle and back to the air conditioning unit. This air does absorb a little heat, which the servers radiate to the outside.

Air that takes in only very little heat on its trip through the Data Center lowers the air-side temperature difference and therefore the efficiency of the entire air conditioning system.

Here's an example:

With an airflow of 45,000 m³/h and a Delta T of 15 °C (return air 35 °C, supply air 20 °C), an ASD 2010 CWU air conditioner from STULZ manages a capacity of 228 kW for a power consumption of 6.2 kW. The result is an energy efficiency ratio (EER) of 36.8.

Now, if the actual Delta T is only 10 °C (i.e. return air is only 30 °C) with the same airflow, power consumption and water temperature, capacity drops to 155 kW and the EER is cut to 25.0. As a result, the air conditioning unit has an efficiency 32 % below its possible or planned level.
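As a quick check of these figures (the ASD 2010 CWU numbers are taken from the text; the helper function itself is only an illustration):

```python
def eer(cooling_capacity_kw: float, power_kw: float) -> float:
    """Energy efficiency ratio: cooling capacity divided by power consumption."""
    return cooling_capacity_kw / power_kw

# Planned operating point: Delta T = 15 degC, 228 kW capacity, 6.2 kW power
eer_planned = eer(228, 6.2)   # ~36.8

# Actual operating point: Delta T = 10 degC cuts capacity to 155 kW
# at the same airflow, power consumption and water temperature
eer_actual = eer(155, 6.2)    # 25.0

loss = 1 - eer_actual / eer_planned  # ~0.32, i.e. 32 % below plan
```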

]]>
news-717 Mon, 19 Oct 2015 07:49:00 +0200 Free Cooling – Direct and Indirect http://www.stulz.com.mx/en/newsroom/blog/free-cooling-direct-and-indirect-717/ The term "Free Cooling" suggests that you don't have to pay for this type of cooling. That is a... Free Cooling for Data Centers: this subject is on everyone's lips and is preoccupying specialists at conferences on Data Center infrastructure. There are now countless variations. But they all pursue the goal of lowering the Data Center's energy consumption and improving the PUE.

The term "Free Cooling" suggests that you don't have to pay for this type of cooling. That is a fallacy. Is anything free these days? Below I will describe the Free Cooling solutions in use today.

Free Cooling

Free Cooling means that the power consumption of the air conditioning system at the site is reduced to the necessary minimum by suitable means, without compromising on reliability and availability. The words "suitable means" and "at the site" open up a very broad range of possibilities.

Direct Free Cooling

To put it briefly, this could be described as follows: window open, blow cold air from outside through the Data Center, pick up the warm air, transport it back outside, voilà! And physically speaking, that's exactly what happens. Only the process of "moving the air" requires energy.

Unfortunately, in real life things are not that simple. Outdoor air is not always in a condition that the IT equipment is comfortable with. Sometimes it's hot and sometimes cold, sometimes it’s very humid and sometimes very dry. What's more, outdoor air is not always clean: it is often full of particles that can be very hostile to modern IT equipment.

]]>
news-716 Fri, 09 Oct 2015 14:50:05 +0200 Standby management for CW units http://www.stulz.com.mx/en/newsroom/blog/standby-management-for-cw-units-716/ Today, the operator of a Data Center basically has two fundamental concerns: firstly, reliability,... In most cases, larger Data Centers continue to use closed-circuit air conditioning units. These so-called CW units basically consist "only" of an air/water heat exchanger, fans, air filters, control valves and the necessary electrical components, plus a controller. The cooled water supply to these units is provided by a centralized chiller.

 

To remove the heat load from the Data Center, a certain airflow is required, the amount of which depends on the air-side temperature difference. This airflow is supplied by the closed-circuit air conditioning units.

 

A certain level of so-called "redundancy" of air conditioning units is created, depending on the size and desired reliability level, to ensure reliable Data Center air conditioning. In other words, more units are installed (standby units) than are actually required for air conditioning. Normally, these units are only brought (automatically) into operation if a running unit switches off due to a fault (passive redundancy).

 

The latest closed-circuit air conditioning units make use of EC fans for ventilation. These fans are considerably more energy efficient than the older versions with AC motor. Another major advantage of these fans is that as the fan speed decreases, the motor's power consumption declines not linearly with speed, but with the third power of the speed.
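This cube-law relationship can be sketched as follows. The 3.0 kW nominal figure is a hypothetical example, and ideal fan affinity-law behavior (airflow scaling linearly with speed) is assumed:

```python
def fan_power(nominal_power_kw: float, speed_fraction: float) -> float:
    """Fan affinity law: power consumption scales with the cube of speed."""
    return nominal_power_kw * speed_fraction ** 3

# One EC fan unit at full speed (hypothetical 3.0 kW nominal):
single = fan_power(3.0, 1.0)    # 3.0 kW

# Two identical units delivering the same total airflow at half
# speed each:
pair = 2 * fan_power(3.0, 0.5)  # 2 x 0.375 = 0.75 kW

saving = 1 - pair / single      # 0.75, i.e. 75 % less fan power
```

This is why sharing the airflow across more units running at lower speed can pay off, rather than leaving standby units idle.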

]]>
news-714 Mon, 31 Aug 2015 07:01:00 +0200 Air-conditioning for special applications http://www.stulz.com.mx/en/newsroom/blog/air-conditioning-for-special-applications-714/ Alongside Data Centers, which have been successfully equipped with reliable and efficient precision... Alongside Data Centers, which have been successfully equipped with reliable and efficient precision air-conditioning technology for a number of decades, there are a raft of other applications that require constant climatic conditions. Laboratories, archives, storage rooms, test rooms, museums – as a result of the goods that are stored in these areas or the processes that take place there, all of these applications require highly stable temperature and humidity conditions for short to very long periods. What separates this from air-conditioning for data centers is the thermal load, which is very low or even zero in certain cases.

For example, museums and archive rooms are used for storing unique and priceless cultural objects for very long periods. Here, historic books, documents, parchments, works of art, and artifacts or films are stored under clearly defined room conditions to protect them in the long term and to preserve them for future generations. In addition to air quality, light, and the danger posed by pests, the air temperature and the air humidity are the main factors that influence the durability of the materials. High temperatures speed up the reaction of harmful substances with the materials, alter the acid content, and promote microbiological growth. Temperature fluctuations cause expansion and shrinkage, which in turn leads to material fracture. A high level of air humidity leads to corrosion, warping, cracks, and bacterial growth, whereas low humidity causes the material to dry out and shrink.

In test rooms, in which all kinds of measurements are performed on a wide range of objects and materials using highly sensitive apparatus, it is also necessary to adhere to defined temperature and humidity conditions for the purpose of measuring accuracy. The periods of time that apply in this case are relatively short and are measured in hours or days. A measurement normally consists of a stabilization phase, in which the required room conditions are set, and the subsequent measurement phase, in which the actual measurement takes place. Major fluctuations in temperature or humidity influence the measurement process, reduce the precision of the measurement, and must therefore be reduced to a minimum.

Laboratories are used in a wide range of areas. A distinction is drawn between biology, chemistry, and physics laboratories. The processes that take place there are so diversified that it is impossible to list them all in this text. To provide just a small selection, for example, there are laboratories for biochemistry, botany, pharmacy, organic and inorganic synthesis and analysis, lasers, optics, electronics, and much more besides.

Once again, all of these applications require a stable air temperature and stable air humidity. Further important factors include air quality; air movement, distribution, and velocity; the noise level; and static negative or positive room pressure.

All these applications therefore share the need for constant air temperature and humidity conditions at a thermal load that is zero or only very low. The air conditioning used must therefore be able to meet these requirements reliably and efficiently in the long term, using suitable components and control algorithms. Precision air-conditioning units developed for data center cooling can only achieve this if they are adapted accordingly. The CyberLab from STULZ was developed specifically for these requirements and precisely controls the temperature and the humidity with a tolerance of ±0.5 °C and ±3% relative humidity. This makes CyberLab the first choice for applications with these special requirements.
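As a minimal sketch of what those tolerance figures mean in practice, the following checks a log of room readings against a setpoint using the ±0.5 °C / ±3% RH bands quoted above. The setpoint and readings are invented for illustration; this is not STULZ software.

```python
# Tolerance bands quoted in the text for CyberLab-class control.
TEMP_TOL_C = 0.5   # +/- 0.5 degrees Celsius
RH_TOL_PCT = 3.0   # +/- 3 % relative humidity

def out_of_tolerance(readings, temp_setpoint, rh_setpoint):
    """Return the readings that drift outside the tolerance band.

    readings: iterable of (temperature_C, relative_humidity_pct) tuples.
    """
    return [
        (t, rh) for t, rh in readings
        if abs(t - temp_setpoint) > TEMP_TOL_C
        or abs(rh - rh_setpoint) > RH_TOL_PCT
    ]

# Hypothetical log checked against a 21 C / 50% RH setpoint.
log = [(21.2, 49.0), (21.7, 50.5), (20.9, 54.0)]
drifted = out_of_tolerance(log, 21.0, 50.0)  # [(21.7, 50.5), (20.9, 54.0)]
```

The first reading is within both bands; the second exceeds the temperature band and the third exceeds the humidity band, so both are flagged.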

]]>
news-711 Thu, 30 Jul 2015 14:24:27 +0200 CW units with different heat exchangers http://www.stulz.com.mx/en/newsroom/blog/cw-units-with-different-heat-exchangers-711/ Data Center planners and operators are always interested in finding the optimum operating point... If we just take a look at CW units with an external chilled water supply, the principal factors for the optimum operating point are as follows:

  1. Data Center location (annual temperature profile)
  2. Data Center size
  3. Planning a new Data Center or optimizing an existing one
  4. Number of CW units (redundancy) and total airflow
  5. Type of control
  6. Data Center heat load
  7. Possible water temperature level of external chilled water supply
  8. Desired air-side temperature level in the Data Center (e.g. air temperature at the server inlet, return air temperature, supply air temperature, temperature difference between return air and supply air)

The last two points, in combination with the airflow, have an influence on the cooling capacity of the closed-circuit air conditioning units. It is therefore important to know and define these figures precisely right from the start.

The water temperature level of the external chilled water supply depends on the chillers used or the type of chilled water supply system. Modern, energy-efficient chillers are capable of working with comparatively high water temperatures (an inlet temperature of 20 °C, sometimes even higher). Older systems, on the other hand, are mostly unable to cope with these water temperatures. In some cities, a central district cooling system is in operation, which generally works with very high water-side temperature differences.

The next step is to link this potential water temperature with the desired air temperature in the data center and the required cooling capacity per unit.

The cooling capacity of an air/water heat exchanger, as used in CW units, depends on the internal design and size of the heat exchanger, and on the difference between the air inlet temperature and the mean water temperature (any glycol content is not taken into consideration here).
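That relationship can be sketched with a deliberately simplified coil model: capacity proportional to the air-to-mean-water temperature difference for a fixed UA value, and water flow from the water-side heat balance. The UA value and temperatures below are assumed for illustration only and do not represent any STULZ product data.

```python
WATER_CP = 4.186  # kJ/(kg*K), specific heat of water

def mean_water_temp(t_in: float, t_out: float) -> float:
    """Arithmetic mean of water inlet and outlet temperature, in C."""
    return (t_in + t_out) / 2.0

def coil_capacity_kw(ua_kw_per_k: float, t_air_in: float,
                     t_water_in: float, t_water_out: float) -> float:
    """Simplified estimate: Q = UA * (T_air_in - T_water_mean)."""
    return ua_kw_per_k * (t_air_in - mean_water_temp(t_water_in, t_water_out))

def water_flow_kg_s(q_kw: float, t_water_in: float, t_water_out: float) -> float:
    """Water mass flow needed to carry Q at the given water-side delta-T."""
    return q_kw / (WATER_CP * (t_water_out - t_water_in))

# Example: return air at 35 C, chilled water at 10/16 C, assumed UA of 5 kW/K.
q = coil_capacity_kw(5.0, 35.0, 10.0, 16.0)   # 5 * (35 - 13) = 110 kW
flow = water_flow_kg_s(q, 10.0, 16.0)          # ~4.38 kg/s
```

The same model shows why higher water temperatures (as favored by modern chillers) shrink the air-to-water temperature difference, and hence why a larger or differently designed heat exchanger may be needed to reach the same capacity.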

Therefore, the largest possible choice of heat exchangers is vital for planners and customers in the planning phase, so that they can find their optimum operating point.

For this reason, all CW units from STULZ's CyberAir 3 PRO series can be designed and ordered as standard with three different heat exchangers, which can be employed as requirements dictate.

Thanks to this choice of three heat exchangers, energy-efficient operation is guaranteed at all times for virtually any requirement. If very special conditions mean that optimum energy efficiency cannot be achieved with one of these three heat exchangers, however, individual, project-specific heat exchangers can be used.

]]>
news-708 Wed, 10 Jun 2015 18:46:00 +0200 Stand-alone air conditioning solution saves space in Data Centers http://www.stulz.com.mx/en/newsroom/news/stand-alone-air-conditioning-solution-saves-space-in-data-centers-708/ The air handling system for installation outside the unit frees up valuable surface space in the... The air handling system for installation outside the unit frees up valuable surface space in the Data Center and is extremely efficient thanks to its Free Cooling and adiabatic module. STULZ CyberHandler is a ready-to-connect air conditioning solution developed specially for Data Centers and equipped with cutting-edge precision air conditioning technology. This complete air conditioning system in an outdoor housing saves precious floor space in the Data Center and can easily be installed next to a building or on a roof. STULZ CyberHandler is available in a range of output ratings from 55 to 460 kW and offers a comprehensive selection of energy-saving Free Cooling modules, including direct and indirect adiabatic modules.   

 

Hamburg, Germany, 10.06.2015 – With its CyberHandler precision air conditioning system, STULZ presents a ready-to-connect air handling system for medium to large Data Centers. The development of the new series was based on the latest requirements in Data Center air conditioning. When designing the STULZ CyberHandler systems, both the supply air temperature window of the ASHRAE TC 9.9 Thermal Guidelines and the efficiency requirements of ASHRAE 90.1 were taken into account from the outset. The system is designed to exploit the savings achievable through direct Free Cooling and adiabatic cooling, while maintaining maximum integrated reliability. If required, the entire cooling process in the STULZ CyberHandler systems is handled by compressors, so that the full nominal output is available even without Free Cooling and adiabatic cooling. The air handling systems are available in a range of output ratings from 55 to 460 kW and deliver a maximum airflow of 20,000 to 71,000 m³/h. The anti-corrosion outdoor housing can be easily installed next to a building or on a roof. The CyberHandler system is connected to the Data Center on the air side. The system gives Data Center operators more floor space for server or storage applications and can also increase operational reliability. Furthermore, there is no longer any need to access the Data Center to service the air conditioning systems.
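The capacity and airflow ranges quoted above imply an air-side temperature difference via the sensible heat balance Q = ρ · V̇ · cp · ΔT. The following rough check uses standard approximate air properties (assumptions, not STULZ data) and the smallest rating from the text:

```python
AIR_DENSITY = 1.2  # kg/m^3, approximate at room conditions (assumed)
AIR_CP = 1.006     # kJ/(kg*K), specific heat of air (assumed)

def air_delta_t(q_kw: float, airflow_m3_h: float) -> float:
    """Return-to-supply air temperature difference in K for a given
    sensible cooling capacity and volumetric airflow."""
    mass_flow = AIR_DENSITY * airflow_m3_h / 3600.0  # kg/s
    return q_kw / (mass_flow * AIR_CP)

# Smallest CyberHandler rating quoted in the text: 55 kW at 20,000 m3/h.
dt = air_delta_t(55.0, 20000.0)  # about 8.2 K
```

Such a check is a quick plausibility test when matching a unit's rated capacity to the airflow and supply air temperature window a Data Center design calls for.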

]]>
news-707 Sat, 30 May 2015 09:02:00 +0200 Flexible air conditioning solution for modular Data Centers http://www.stulz.com.mx/en/newsroom/news/flexible-air-conditioning-solution-for-modular-data-centers-707/ Outdoor air conditioning container combines energy efficiency with short installation times: The... Outdoor air conditioning container combines energy efficiency with short installation times: The 20-foot air conditioning containers from the STULZ CyberCon series are available with a cooling capacity of 243 kW per unit and offer state-of-the-art energy-saving technology such as a Free Cooling function, EC fans, and adiabatic cooling.

 

Hamburg, 30.05.2015 – With the CyberCon series, STULZ is introducing a flexible outdoor air conditioning solution for Data Centers in a container format. These precision air conditioning systems are available as DX or CW versions and are delivered pre-installed in standardized 20-foot ISO containers. For the air conditioning of 40-foot Data Centers, two STULZ CyberCon containers can be combined as an end-to-end installation. The systems, which have a vertical air outlet, are specially designed for container Data Centers and can simply be mounted on a container module housing the server equipment that needs cooling. All connections are made on the air side only. The standardized all-in-one design of the CyberCon series meets all the requirements of mobile Data Centers. As server capacities grow, further air conditioning containers can easily be added thanks to the STULZ E2 control system and integrated into the building services management system (Modbus, BACnet). This enables even complex redundancy strategies – for multi-tier Data Centers, for example – to be implemented without difficulty. What's more, all models are available as dual fluid versions with two independent refrigeration systems, based either on the direct expansion DX/DX system, the liquid-cooled CW/CW system, or the DX/CW system. With two independent cooling sides, redundancy is already integrated in the unit.

]]>