The mission of a data center is to ensure that tenants can transfer data between their servers, storage devices and their end users.
Three components are required to accomplish this mission: network connectivity, power infrastructure, and cooling.
In a concurrently maintainable data center, mission-critical equipment is redundant. This requires at least two instances of each critical component and enough spare capacity to keep the network, power, and cooling systems running even when a component is offline due to maintenance or failure.
Network redundancy means at least two independent cable entry points, at least two meet-me rooms where networks interconnect, and at least two cable distribution paths. It is critical that physical network connections enter the data center from independent sources to avoid single points of failure upstream of the facility.
Redundant power infrastructure means two independent utility feeds, two uninterruptible power supply (UPS) systems, and two independent power distribution systems. Cooling infrastructure, such as air handlers, chillers, and pumps, also requires redundancy.
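To make the redundancy requirement concrete, here is a minimal sketch in Python of the basic check behind concurrent maintainability; the component names and unit counts are illustrative assumptions, not figures from any particular facility.

```python
# A minimal sketch, not any operator's actual tooling; component names and
# unit counts are illustrative assumptions. "Concurrently maintainable" here
# means every critical system still carries its load with any one unit
# taken offline for maintenance or repair (i.e., at least N+1).

CRITICAL_SYSTEMS = {
    # name: (units installed, units required to carry the full load)
    "network entry points": (2, 1),
    "meet-me rooms":        (2, 1),
    "utility feeds":        (2, 1),
    "UPS systems":          (2, 1),
    "chillers":             (3, 2),  # assumed N+1: two carry the load, one spare
}

def concurrently_maintainable(systems: dict[str, tuple[int, int]]) -> bool:
    """True if every system can lose one unit and still meet its required load."""
    return all(installed - 1 >= required for installed, required in systems.values())

print(concurrently_maintainable(CRITICAL_SYSTEMS))  # -> True
```

A real facility tracks far more than unit counts (capacities, failure domains, maintenance schedules), but the principle is the same: any single unit can be removed without dropping below the required load.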
Network
Data enters and exits the data center over fiber optic cables operated by network providers, or over "dark fiber" dedicated to and operated by a single tenant. Most data centers are "carrier neutral," meaning they allow any carrier to deploy its network infrastructure and lay fiber optic cables into the facility.
Power Infrastructure
Onsite generators: Concurrently maintainable data centers must be able to continue operating for at least 12 hours in the event of a utility power outage. This requires onsite power generation, such as diesel generators, with sufficient fuel stored on site to run them (a rough sizing sketch follows this list).
Uninterruptible Power Supply: Rather than feeding tenants’ IT equipment directly, utility power is routed through a UPS system, which protects servers, routers, and other equipment from disturbances such as power surges and provides temporary emergency power that keeps the data center running in the event of a utility outage.
Power Distribution: From the UPS, power is distributed to the data hall and on to tenants’ IT equipment.
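To give a feel for the arithmetic behind the 12-hour requirement and the UPS’s bridging role, here is a rough back-of-the-envelope sketch; the IT load, diesel burn rate, and battery capacity are illustrative assumptions, not figures from the article.

```python
# A rough back-of-the-envelope sketch; the load, diesel burn rate, and
# battery capacity below are illustrative assumptions, not real figures.

IT_LOAD_MW = 10.0            # assumed critical IT load carried on generators
FUEL_L_PER_MWH = 270.0       # assumed diesel consumption per MWh generated
RUNTIME_HOURS = 12.0         # minimum runtime cited for concurrent maintainability

# Diesel needed on site to ride out a 12-hour utility outage at full load.
fuel_needed_l = IT_LOAD_MW * RUNTIME_HOURS * FUEL_L_PER_MWH
print(f"Fuel stored on site: ~{fuel_needed_l:,.0f} L")                     # ~32,400 L

# The UPS only bridges the gap until the generators start and accept load.
UPS_USABLE_MWH = 1.0         # assumed usable battery energy
ride_through_min = UPS_USABLE_MWH / IT_LOAD_MW * 60
print(f"UPS ride-through at full load: ~{ride_through_min:.0f} minutes")   # ~6 min
```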
Cooling
A single data center building can use enough electricity to power 36,000 homes. IT equipment drawing that much power generates an enormous amount of heat, which must be removed.
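For a rough sense of scale, the sketch below converts that comparison into megawatts; the average household draw it assumes is an illustrative figure, not part of the original claim.

```python
# Illustrative only: converts the "36,000 homes" comparison into megawatts,
# assuming an average household draws roughly 1.2 kW on a continuous basis.
HOMES = 36_000
AVG_HOME_KW = 1.2            # assumed average continuous household demand
building_load_mw = HOMES * AVG_HOME_KW / 1_000
print(f"~{building_load_mw:.0f} MW of continuous demand")   # ~43 MW
```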
There is a range of cooling technologies on the market, and the “best” choice depends on the type of work the IT equipment performs, the local climate, and the tradeoff between energy efficiency and water efficiency.
All other factors being equal, closed-loop air-cooled chillers use less water but more energy than evaporative, water-based cooling systems. In water-stressed markets where renewable energy is readily available, leading data center developers are increasingly relying on air-cooled chillers. These systems pump water through closed-loop piping to extract heat from the data hall and reject it to the outside air.
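The industry typically frames this tradeoff with two metrics: power usage effectiveness (PUE, total facility energy divided by IT energy) and water usage effectiveness (WUE, liters of water consumed per kilowatt-hour of IT energy). The sketch below compares the two approaches using assumed, purely illustrative numbers.

```python
# Illustrative comparison of the energy-vs-water tradeoff using two standard
# industry metrics: PUE (total facility energy / IT energy) and WUE (liters
# of water consumed / kWh of IT energy). All numbers are assumptions chosen
# only to show the direction of the tradeoff, not measured values.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    return total_facility_kwh / it_kwh

def wue(water_liters: float, it_kwh: float) -> float:
    return water_liters / it_kwh

IT_KWH = 1_000_000  # assumed annual IT energy, identical for both options

# Assumed profiles: evaporative cooling spends water to save energy;
# closed-loop air-cooled chillers spend energy to save water.
options = {
    "evaporative cooling": {"total_kwh": 1_200_000, "water_l": 1_800_000},
    "air-cooled chillers": {"total_kwh": 1_350_000, "water_l":    50_000},
}

for name, o in options.items():
    print(f"{name:>20}: PUE = {pue(o['total_kwh'], IT_KWH):.2f}, "
          f"WUE = {wue(o['water_l'], IT_KWH):.2f} L/kWh")
```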
Information Technology Equipment
Large data centers house hundreds of millions of dollars’ worth of IT equipment, and the IT systems and proprietary data they hold are even more valuable: they are the beating heart of most businesses.
That data is stored on servers in data halls. Stand inside a data hall and you will see a large room with rows of racks stacked with servers.
Cooled supply air can be delivered to the server racks in a variety of ways, including through a raised floor plenum, through ductwork above the racks, or through rows of fans lining the data hall, which are aptly called "fan walls."
As density increases within data halls, tenants may seek more advanced cooling methods, including liquid cooling to supplement or replace forced air. Liquid cooling equipment such as rear-door heat exchangers, or even direct-to-chip cooling, can often be incorporated into traditional forced-air data halls.
Some data center operators have pioneered immersion cooling to improve efficiency; however, the technology has not been widely adopted because it requires specialized servers, equipment, and materials to operate.
How a particular data hall is configured depends on the specific needs of the tenant. Hyperscalers, which operate gigawatts of data center capacity around the world, often prefer standardized deployments across their portfolios, but the configuration of one company’s data hall may differ significantly from that of its competitors.
Ensuring that data hall designs support the broadest range of tenants, and that customer-requested configurations can be deployed at any time without one-off customization, requires data center operators to develop deep relationships with tenants and to build experienced teams that understand their operational needs.