Seven Design Considerations for a Green Data Centre

IT departments are under increased scrutiny and pressure to deliver environmentally sound solutions.

Large data centres are one of the most significant energy consumers in an organisation’s IT infrastructure, so any measures that you can take to reduce this consumption (and therefore also carbon dioxide emissions) will have a positive impact on your organisation’s environmental footprint.

Additionally, a report issued by the United States Environmental Protection Agency indicates that environmental concerns have placed IT departments under pressure to develop ‘green’ data centres.

A green data centre is defined as one in which the mechanical, lighting, electrical and computer systems are designed for maximum energy efficiency and minimum environmental impact. The construction and operation of a green data centre involves advanced technologies and strategies. Some examples include:

  • Reducing the power consumption of the data centre
  • Minimising the footprints of the buildings
  • Maximising cooling efficiency
  • Using low-emission building materials, carpets and paints
  • Installing catalytic converters on backup generators
  • Using alternative energy technologies such as photovoltaics, electric heat pumps and evaporative cooling

The consumption of energy is considered the dominant – and often the only – factor in defining whether or not a facility is green. IT executives therefore need to start investigating alternative ways of building energy-efficient data centres.

By following these seven simple steps, IT executives can come closer to achieving their vision of a green data centre.

1. Plan to be green

There is worldwide hype around environmentalist issues and a number of vendors are claiming to be gurus in the field of green. The ‘go green’ marketing hype makes it difficult for organisations to understand the real issues and the potential eco-impact on the data centre. A realistic strategy is necessary that takes into consideration the following aspects when planning a data centre:

  • The existing utilisation rate of your servers, including the number of servers in the environment, their average power consumption, the hourly and annual cost to run them, and the associated cooling costs.
  • The expected data growth of the data centre, which includes taking into account future business demands (create a scenario of where you’ll need to be to support the business in a few years’ time).
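The baseline described above can be sketched as a simple calculation. All the figures below (server count, average draw, tariff, cooling overhead) are hypothetical placeholders for illustration, not data from the article:

```python
def annual_server_cost(servers, avg_watts, tariff_per_kwh, cooling_overhead=0.5):
    """Return (energy_kwh, total_cost) for one year of 24x7 operation.

    cooling_overhead: extra energy spent on cooling per unit of IT energy
    (0.5 means every 1 kWh of server load needs a further 0.5 kWh of cooling).
    """
    hours = 24 * 365
    it_kwh = servers * avg_watts / 1000 * hours       # IT load over the year
    total_kwh = it_kwh * (1 + cooling_overhead)       # add cooling energy
    return total_kwh, total_kwh * tariff_per_kwh

# Example estate: 200 servers at 400 W average, $0.12/kWh
kwh, cost = annual_server_cost(servers=200, avg_watts=400, tariff_per_kwh=0.12)
print(f"{kwh:,.0f} kWh/year, costing ${cost:,.0f}")
```

Running such numbers for the current estate and again for the projected estate gives the gap the green strategy has to close.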

2. Virtualise and consolidate

A virtualisation and consolidation project is often a step in the right direction towards green computing. Research indicates that a server often utilises only between 5 and 15% of its capacity to service one application. With appropriate analysis and consolidation, many of these low-utilisation devices can be combined into a single physical server, consuming only a fraction of the power of the original devices and saving on costs, as well as taking a step towards a more environmentally friendly data centre environment.

The basic concept of virtualisation is simple: encapsulate computing resources and run them on shared physical infrastructure in such a way that each appears to exist in its own separate physical environment. This is accomplished by treating storage and computing resources as an aggregate pool that networks, systems and applications can draw from on an as-needed basis.

Virtualisation and consolidation projects are complex, but the benefits are compelling. Server consolidation ratios such as 15:1 (for example, 45 servers onto three hosts) are achievable, and virtualisation also delivers improved application availability and business continuity independent of hardware and operating systems, among other benefits.
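The consolidation arithmetic behind these ratios can be sketched as follows. The utilisation and wattage figures are illustrative assumptions based on the 5-15% range quoted above, not measurements:

```python
import math

def consolidation_saving(n_servers, util, target_util, watts_per_server):
    """Estimate hosts needed after consolidation and the watts saved.

    util: average utilisation of the existing servers (e.g. 0.05 for 5%)
    target_util: utilisation we are willing to run consolidated hosts at
    """
    ratio = target_util / util                 # workloads per physical host
    hosts = math.ceil(n_servers / ratio)       # hosts left after consolidation
    saved_watts = (n_servers - hosts) * watts_per_server
    return hosts, saved_watts

# 45 servers at 5% utilisation, consolidated to hosts running at 75%
hosts, saved = consolidation_saving(n_servers=45, util=0.05, target_util=0.75,
                                    watts_per_server=400)
print(f"{hosts} hosts after consolidation, saving ~{saved} W of IT load")
```

With these inputs the model reproduces the 45:3 ratio mentioned above, and every watt of IT load removed also removes the cooling load it generated.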

3. Design a best practice floor plan

The Uptime Institute* produced a white paper based on a survey of 19 data centres and reported that, on average, only 40 percent of the cold air went directly towards cooling the servers in the room; the rest of that cooling capacity, and the power used to produce it, was wasted.

So, whether you are designing a new data centre or upgrading your existing environment, make use of existing best practices in data centre floor plan designs.

Examples include:

Hot aisle/cold aisle layout: Adopting an alternating hot aisle/cold aisle layout is optimal and can correct many cooling problems in a typical data centre.

By implementing a hot/cold aisle layout, equipment is spared from recirculated hot air, reducing the risk of an outage through device failure. Also, by having a common hot aisle, you have the ability to contain areas where heat density is high, such as racks with blade servers, and deal with the heat in a specific manner. This allows for multiple heat rejection methods to be in use within one data centre.

The distribution of power across racks: Another layout consideration is the distribution of power across racks. All attempts should be made to balance the watts per rack to within a 10-15% variance. This minimises hot-spots and the need for sporadic hot-aisle containment.
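The balance check described above is easy to automate. A minimal sketch, with invented rack figures, that flags any rack whose draw deviates from the mean by more than the tolerance:

```python
def unbalanced_racks(watts_per_rack, tolerance=0.15):
    """Return the racks whose power draw deviates from the mean
    by more than `tolerance` (0.15 = the 15% end of the guideline)."""
    mean = sum(watts_per_rack.values()) / len(watts_per_rack)
    return {name: w for name, w in watts_per_rack.items()
            if abs(w - mean) / mean > tolerance}

# Hypothetical per-rack readings in watts
racks = {"A1": 4200, "A2": 4500, "A3": 4300, "B1": 6800, "B2": 4100}
print(unbalanced_racks(racks))  # B1 stands out as a hot-spot candidate
```

A rack flagged this way (here, B1) is exactly the kind of concentration of blade servers that the common hot aisle is meant to contain.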

Often data centre designers locate servers performing related functions in the same racks, but the benefit of having these servers close together is outweighed by the heat density this may cause.

Minimise or eliminate under-floor cabling: It is imperative for organisations with static pressure cooling to minimise or eliminate under-floor cabling. If you can’t avoid it, use conduit, cable trays and other structured methods for running cabling. This minimises barriers between computer room air conditioning (CRAC) units and perforated tiles, resulting in more efficient air flow and optimised cooling system efficiency.

Don’t underestimate the importance of the physical design of the data centre when it comes to power and cooling, for sustainability, cost and environmental impact alike.

4. Use appropriate technology

In taking a green approach to your data centre, your evaluation of products is no longer just a price versus performance comparison. It is important to incorporate the total costs of the environment into the calculation, which then also includes costs for energy consumption.

Firstly, look for vendors that have power and cooling at the forefront of their research and development strategies.

Secondly, select equipment based on life cycle costs that take into account the energy usage of servers.

An example of a green technology is MAID (massive array of idle disks). This is a storage technology that employs a large group of disk drives in which only those drives in active use are spinning at any given time. This technology can have thousands of individual drives, and offers mass storage at a cost per terabyte roughly equivalent to that of tape.
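The power case for MAID can be shown with a back-of-envelope model: only the drives in active use spin at full power. The per-drive wattages below are typical-order assumptions, not figures from any specific product:

```python
def array_power(n_drives, active_fraction, active_w=8.0, idle_w=0.5):
    """Power draw of a MAID array versus a conventional array
    in which every drive spins continuously."""
    active = n_drives * active_fraction
    maid_w = active * active_w + (n_drives - active) * idle_w
    conventional_w = n_drives * active_w
    return maid_w, conventional_w

# 1,000-drive array with a quarter of the drives active at any moment
maid, conventional = array_power(n_drives=1000, active_fraction=0.25)
print(f"MAID: {maid:.0f} W vs always-spinning: {conventional:.0f} W")
```

The lower the active fraction, the closer MAID's running cost gets to tape while retaining disk-speed access to whatever is spinning.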

5. Take a green perspective on ILM

ILM is the optimum allocation of the storage resources that support a business. Every element of information in an organisation has a useful lifespan, whether it is a voice conversation or a legal or medical record that must be retained for years. By implementing an ILM strategy, you have the ability to create greater efficiencies in data storage, which in turn lead to greater efficiencies in elements such as power consumption.

ILM is the application of rigour to the often chaotic and unstructured data stores that an organisation maintains. The storage, utilisation, maintenance and destruction of this data can be quite expensive over its lifetime, and what is worse, its lifetime is often much longer than its useful life. The art of ILM is to develop an understanding of an organisation’s information needs, and to develop the infrastructure and processes required to maintain the usefulness of the information, while at the same time creating the discipline to minimise the cost of that maintenance.

Tiered storage is at the heart of an ILM implementation. The value of ILM is the ability to tie the cost of storage to the value of the information on it. The most important data, or the most performance-critical data, should be placed on the highest performance and most expensive storage.

By the same token, do not use expensive, energy-consuming servers to store information purely for compliance when a tape will do.

Additionally, knowing the character (age, file type, usage frequency, and business value) of the data in your environment is pivotal for being able to make informed decisions around ILM strategies.
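The tiering decision at the heart of ILM can be illustrated with a toy policy. The tier names and thresholds below are invented for the example; a real policy would come from the organisation's own data assessment:

```python
def choose_tier(age_days, accesses_per_month, compliance_only=False):
    """Pick a storage tier based on the character of the data;
    cheaper tiers also consume less energy."""
    if compliance_only:
        return "tape"              # retained for compliance, near-zero power
    if age_days < 30 and accesses_per_month > 10:
        return "high-performance"  # hot, business-critical data
    if accesses_per_month > 1:
        return "midrange"          # warm data
    return "archive"               # cold data, e.g. a MAID array or tape library

print(choose_tier(age_days=5, accesses_per_month=50))    # high-performance
print(choose_tier(age_days=400, accesses_per_month=0))   # archive
```

The point of the sketch is that the routing inputs are exactly the characteristics listed above: age, usage frequency and business value.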

Assessments that Dimension Data has conducted with more than 100 organisations worldwide show an average of more than 40% file duplication* in their environments. With this information, organisations have the knowledge to decide whether to move data to less expensive, less energy-consuming storage, and to better utilise their existing environment and save storage space.
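A duplication figure like that suggests an easy first win: identify byte-identical files by content hash before deciding what to migrate. A minimal sketch using only the Python standard library (the sample file names and contents are invented):

```python
import hashlib
from collections import defaultdict

def find_duplicates(files):
    """Group byte-identical contents; return the groups with duplicates.

    `files` maps a file name to its contents. In practice you would walk
    the file system and hash each file from disk in chunks.
    """
    by_hash = defaultdict(list)
    for name, data in files.items():
        by_hash[hashlib.sha256(data).hexdigest()].append(name)
    return [names for names in by_hash.values() if len(names) > 1]

sample = {"a.doc": b"quarterly report", "b.doc": b"quarterly report",
          "c.doc": b"unique contents"}
print(find_duplicates(sample))  # [['a.doc', 'b.doc']]
```

Each duplicate group can then be collapsed to a single copy on an appropriate tier, reclaiming both storage space and the power spent keeping redundant copies online.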

6. Investigate liquid cooling

To meet the challenges of blade servers and high-density computing, more organisations are realising the need for effective cooling and heat management solutions. Many are welcoming liquid cooling systems into their infrastructures to achieve better cooling efficiency, while others may find it difficult to fathom pipes of running water snaking through the plenums of their data centres.

In essence, liquid cooling systems utilise air or liquid heat exchangers to provide effective cooling and isolate equipment from the existing HVAC (heating, ventilating and air conditioning) system.

There are several approaches to data centre liquid cooling:

  • Sidecar heat exchangers – these are closed enclosures that deliver cooling from the side, which keeps the cooling from dissipating into the server room
  • Chip-level cooling and bottom mount heat exchangers – these enclosures use a bottom mount heat exchanger which some claim is safer than sidecar enclosures as components won’t be affected in the event of a water leak
  • Modular liquid cooling units – these units are used within a fully sealed cabinet and are mounted at the rack base, in a rack sidecar
  • Door units – full-door units replace a standard server rack door and contain sealed tubes filled with chilled water
  • Integrated rack-based liquid cooling – these systems incorporate a rack-based architecture that integrates UPS power, power distribution and cooling and feature a cooling distribution unit (CDU) that pumps water through aluminium/plastic tubing to cool servers
  • Device-mounted liquid cooling – these solutions work at the device level, with coolant routed through sealed plates on the top of a CPU (central processing unit)

7. Utilise greener energy sources

Many energy utilities are now offering greener options for customers, with power from sustainable sources. For example, in the United States, the U.S. Environmental Protection Agency (EPA) has formed the Green Power Partnership, which encourages and assists organisations to buy green power and reduce their impact on the environment.

There are also some emerging power-saving technologies that are likely to become more commonplace in the data centre in the near future. For example, DC-compatible equipment would have a significant impact on power consumption, but it is costly to configure, it is not widely available and it is also more expensive than equivalent AC options.

At present, data centres perform many conversions between alternating current (AC) and direct current (DC). This wastes energy which is emitted as heat and increases the need for cooling. It would be far more efficient to power servers directly from a central DC supply. The Lawrence Berkeley National Laboratory in the US estimates that an organisation may save 10% to 20% of their energy use by moving to direct current technology.
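The arithmetic behind the Lawrence Berkeley estimate is straightforward: apply the 10-20% saving range to a facility's annual energy use. The one-million-kWh baseline below is an arbitrary example figure, not from the source:

```python
def dc_savings(annual_kwh, saving_low=0.10, saving_high=0.20):
    """Range of kWh saved per year by moving to DC distribution,
    per the Lawrence Berkeley National Laboratory estimate."""
    return annual_kwh * saving_low, annual_kwh * saving_high

low, high = dc_savings(1_000_000)
print(f"Between {low:,.0f} and {high:,.0f} kWh saved per year")
```

And because each avoided conversion loss was previously emitted as heat, the saving compounds: less waste heat also means less cooling load.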

