
Six Factors Influencing Data Center Efficiency Design

In rapidly evolving markets, bigger is not always better. Is your data center designed for efficiency?

The aggressive efforts of DISA, the Defense Information Systems Agency, to rationalize and consolidate mission-critical data center facilities have put a spotlight on the challenges of planning a data center infrastructure that is reliable, resilient, responsive, and secure while remaining efficient from both an energy-utilization and a financial perspective. It is easy to criticize DISA’s efforts as emblematic of government inefficiency, but that would be an unfair assessment: there are plenty of equally egregious commercial examples of overbuilding (and underbuilding) in the data center space. Especially in the current hybrid architecture marketplace, designing a data center facility to effectively and efficiently meet both current and anticipated needs takes careful planning and expert engineering.

At BRUNS-PAK, we believe that part of the reason so many projects end up misaligned with the demand profile is that both the customer and vendor design/build teams fail to account for six critical factors that influence efficiency during the design phase of the project:

  • Reliability
  • Redundancy
  • Fault Tolerance
  • Maintainability
  • Right Sizing
  • Expandability

How you balance these individual priorities can make all the difference between a cost-effective design and one that eats away at both CAPEX and OPEX budgets with equal ferocity. Here is a quick review of each critical consideration.


Reliability

The data center design community has increasingly acknowledged that workloads, and their attendant service-level and security requirements, are potentially the most critical driver in defining data center demands. Workloads dictate the specifics of the IT architecture that the data center must support, and with that, the applicability of cloud/colo services, pod designs, and other design/build options. Having a clear picture of the workloads a site must support before initiating a project makes it possible to define the reliability requirements accurately.


Redundancy

The goal of redundancy is increased reliability: the ability to maintain operation despite the loss of one or more critical resources in the data center. Recognizing that all systems eventually fail, how you balance component-level vs. system-wide redundancy (N+1 vs. 2N, 2N+1, etc.) will significantly reshape the cost/benefit curve. Here, it is important to design for logical and reasonable incident forecasts while balancing mean-time-to-failure and customary mean-time-to-recover considerations.
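The N+1 vs. 2N trade-off, and the balance between time-to-failure and time-to-recover, can be put in rough numbers using the standard steady-state availability formula, availability = MTBF / (MTBF + MTTR). The sketch below is illustrative only; the unit counts, MTBF, and MTTR figures are hypothetical assumptions, not measurements from any real facility.

```python
# Illustrative comparison of N+1 vs. 2N redundancy for a cooling plant.
# Unit counts, MTBF, and MTTR figures are hypothetical assumptions.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of one unit: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def units_installed(load_units: int, scheme: str) -> int:
    """Units to install for a load that needs `load_units` units of capacity."""
    if scheme == "N+1":   # one shared spare across the system
        return load_units + 1
    if scheme == "2N":    # a fully duplicated system
        return 2 * load_units
    raise ValueError(f"unknown scheme: {scheme}")

load = 4                              # assume the load needs 4 cooling units
print(units_installed(load, "N+1"))   # 5 units installed
print(units_installed(load, "2N"))    # 8 units installed

# Assume one failure per year (MTBF 8760 h) and a 24 h recovery window.
print(round(availability(8760, 24), 4))   # 0.9973 per unit
```

The point is the shape of the curve, not the exact figures: for the same load, N+1 adds a single unit where 2N doubles the installed base, which is why the choice reshapes both CAPEX and ongoing maintenance cost.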

Fault Tolerance

While major system failures constitute worst-case scenarios that ultrareliable data centers must plan for, far more common are point failures/faults. To achieve fault tolerance, a data center must be able to withstand the failure of any single component that could curtail data processing operations. Typically, design for fault tolerance emphasizes large electrical/mechanical components such as HVAC or power distribution, as well as IT hardware/software assets and network or telecommunications services, all of which will experience periodic failures. Design for fault tolerance should involve more than simple redundancy: effective design must balance failover capacities, mean-time-to-repair, repair vs. replace strategies, and seasonal workload variances to ensure that the data center can support service-level demands without requiring the installation of excess offline capacity.
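The value of removing a single point of failure can be sketched with the standard parallel-availability identity: independent redundant paths fail only when every path fails at once, so two paths of availability A yield 1 - (1 - A)^2. The 99.5% single-path figure below is a hypothetical assumption chosen for illustration.

```python
# Effect of adding a redundant, independent path behind a single
# point of failure. The 99.5% single-path figure is a hypothetical assumption.

def parallel_availability(a: float, paths: int) -> float:
    """Independent parallel paths fail only when every path fails."""
    return 1.0 - (1.0 - a) ** paths

single = 0.995                            # one power-distribution path
dual = parallel_availability(single, 2)   # dual independent paths

print(round(dual, 6))                     # 0.999975
print(round((1 - single) * 8760, 1))      # ~43.8 hours of downtime per year
print(round((1 - dual) * 8760 * 60))      # ~13 minutes of downtime per year
```

This is why fault-tolerance design focuses on independence between paths: a shared upstream component (one utility feed, one chiller loop) reintroduces the single point of failure regardless of how much downstream hardware is duplicated.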


Maintainability

When designing a data center facility, a common mistake is failing to account for maintainability. Excess complexity can rapidly add to costs, since even redundant systems must be exercised and subjected to preventive maintenance. In fact, a consistent preventive maintenance schedule can be one of the most effective contributors to long-term efficiency, reducing the need for overcapacity on many key infrastructure components.


Right Sizing and Expandability

When properly accounted for, these final two factors work in tandem to help design/build teams create an effective plan for both near-term and long-term requirements. Modern design strategies include techniques such as modular/pod design and cloud integration that engineer in long-term capacity growth or peak-demand response. This means near-term buildout does not have to deliver excess capacity simply as a buffer against future demand: engineering teams can design modern infrastructure to scale smoothly to meet even the most aggressive growth forecasts.
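The right-sizing benefit of pod-based buildout can be illustrated with a toy capacity plan that counts "stranded" megawatt-years, i.e. installed capacity sitting idle ahead of demand. The pod size and the five-year demand forecast below are hypothetical assumptions, not a prescribed method.

```python
import math

# Toy comparison: stranded capacity under a monolithic day-one build
# vs. pod-based phased buildout. All figures are hypothetical assumptions.

POD_MW = 2.5                          # capacity added per pod
demand = [1.5, 3.0, 4.5, 6.5, 9.0]    # forecast IT load per year, MW

def capacity(demand_mw: float) -> float:
    """Installed capacity: smallest whole number of pods covering demand."""
    return math.ceil(demand_mw / POD_MW) * POD_MW

day_one = capacity(max(demand))                   # build for peak on day one
mono_idle = sum(day_one - d for d in demand)      # MW-years stranded
pod_idle = sum(capacity(d) - d for d in demand)   # MW-years stranded

print(day_one)     # 10.0 MW built on day one
print(mono_idle)   # 25.5 MW-years idle over the forecast
print(pod_idle)    # 5.5 MW-years idle over the forecast
```

Both strategies end the forecast period at the same installed capacity; the difference is how long capacity sits idle (and consumes maintenance budget) before demand catches up.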

Treated as a portfolio, these six factors offer the data center design team diverse levers to balance service delivery against cost while ensuring that the final infrastructure can meet demand without breaking the bank, either through initial capital investment, or long-term operating cost.

How BRUNS-PAK Can Help

Over the past two years, BRUNS-PAK has evolved its proprietary design/build approach to incorporate the growing array of strategies and tools available to data center planning teams, resulting in the BRUNS-PAK Hybrid Efficient Data Center Design program. Through an interactive process that acknowledges both an organization’s IT requirements and the associated facilities infrastructure needs, this program delivers a strategic approach to the six critical factors influencing efficient data center design while retaining the performance, resilience and reliability needed in enterprise computing environments. Through our expanded consulting services group and well-established design/build services team, BRUNS-PAK is uniquely positioned to assist customers seeking to create a long-term strategic direction for their data center that satisfies all stakeholders, including end-users, IT and finance.

BRUNS-PAK Has The Solution For Your Business

Get in touch with us to learn how our skilled team of professionals can help you achieve your mission-critical goals.