Data Growth In Always On World Driving Data Center Demand

At the Techonomy Conference, Google’s then-CEO Eric Schmidt noted that every two days, people create as much information as civilization did from the dawn of time until 2003 — an estimated five exabytes every two days. The explosion of corporate, academic, and governmental data, combined with the explosive growth of user-generated content (Twitter, Facebook, YouTube, etc.), is changing the applications landscape as companies and customers begin to understand the potent mix that online data, high-performance computing, and high-bandwidth networking create. We are already an Always On society, and the future only becomes more On.

As semiconductor miniaturization enters the deep submicron era, the density of electrical and mechanical loads in server farms continues to increase. Virtualization has helped raise processor utilization, placing even greater demands on cooling systems. And the unique demands of always-on computing create new challenges in mirroring, content distribution networks, and cost-efficient peak-load servicing.

At BRUNS-PAK, we recognize that the evolution of the data center, and with it our design-build methodologies, is a never-ending process. As the information technology industry continues to evolve and strive for ultrareliable performance complemented by increasingly efficient operation and scalable deployment, only BRUNS-PAK has committed the resources to ensure that our technical team will remain ready to design and deliver “state-of-the-art” facilities even as the “art” changes.

High Impact Measures to Boost Data Center Efficiency (Part 1)

With Data Center energy consumption at an all-time high, maintaining the lowest possible total cost of ownership has become increasingly difficult. We’ve detailed some high-impact measures to help improve efficiency and reduce power and cooling requirements, creating a greener, more cost-effective Data Center.

The first step in energy-efficiency planning is measuring current energy usage. The power system is a critical element in the facilities infrastructure, and knowing where that energy is used and by which specific equipment is essential when creating, expanding, or optimizing a Data Center.

In order to understand how energy-efficiency measures affect energy consumption in the Data Center, a baseline needs to be established for current energy use. Two primary metrics are currently promoted by organizations such as The Green Grid for measuring Data Center energy efficiency. The first is Power Usage Effectiveness (PUE), defined as the total facility power consumed divided by the IT equipment power consumed. The second is PUE’s reciprocal, Data Center Infrastructure Efficiency (DCiE), defined as the IT equipment power consumed divided by the total facility power consumed.
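The two definitions above can be expressed directly in code. The following sketch uses example figures (1,500 kW facility draw delivering 600 kW to IT loads) that are purely illustrative, not measured data:

```python
# Illustrative PUE/DCiE calculation from the definitions above.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Center Infrastructure Efficiency: the reciprocal of PUE."""
    return it_equipment_kw / total_facility_kw

# A facility drawing 1,500 kW overall to deliver 600 kW to IT equipment:
print(pue(1500, 600))   # 2.5
print(dcie(1500, 600))  # 0.4, i.e. 40% of facility power reaches IT loads
```

A lower PUE (or a DCiE closer to 1.0) indicates that less of the facility's power is being spent on overhead such as cooling and distribution losses.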

Total facility power is defined as the power measured at the utility meter or switchgear solely dedicated to the operation of the Data Center infrastructure (or, in a shared building, the portion dedicated to the Data Center). It includes power consumed by electrical equipment such as switchgear, UPSs (uninterruptible power systems) and batteries, PDUs (power distribution units), and stand-by generators, as well as mechanical equipment dedicated to the HVAC needs of the Data Center, such as CRACs (computer room air conditioning units), chillers, DX (direct expansion) air handler units, drycoolers, pumps, and cooling towers. IT equipment power includes the loads associated with IT processes, including server, storage, network, tape, and other processing equipment fed through Data Center infrastructure support equipment such as PDUs, RPPs (remote power panels), or other distribution means fed from a UPS.

To collect the information noted above, an effective building management system (BMS) should be employed to collect, categorize, and trend the data gathered. Most systems offered by BMS providers such as Johnson Controls, Andover, Automated Logic, Honeywell, Siemens, and others can monitor energy consumption for both the IT equipment and the associated infrastructure equipment serving the Data Center. Metering and other DCPs (data collection points) should be provided at all switchgear relating to the power and mechanical needs of the Data Center. Metering should also be provided at the output side of the UPS modules or, better yet, at the PDUs. This will provide the energy consumption rates of both facility power and IT equipment power.

The types of electrical monitoring which can be employed to measure this type of information can be broken down into three basic forms:

  • Amperage-only monitoring
  • Estimated Wattage monitoring
  • True RMS Wattage monitoring

Amperage-only and Estimated Wattage monitoring can provide flawed information because of inaccuracies in measuring the sine wave and its form. When a sine wave is reproduced imperfectly, as it is by many double-conversion UPS systems, averaging methods of calculating power consumption can prove unreliable. True RMS Wattage monitoring provides a much more accurate picture of the idiosyncrasies of power consumption in data processing power sources, and BMS systems that employ waveform-capture sampling with real-time updating provide a very high degree of accuracy.

It should be noted that this type of monitoring can be expensive to implement, depending on the number of measurement locations. Measuring power at the distribution level of individual PDUs and CRAC units costs more than monitoring at the distribution panelboards feeding those devices. As long as all the IT equipment and associated infrastructure equipment is fed from a single (or dual) location, panelboard-level monitoring may be far less expensive while still providing nearly the same information for the distributed systems in the Data Center.
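The gap between estimated and true wattage can be seen with a quick numerical sketch. Below, one cycle of an ideal 60 Hz waveform is sampled, with the current lagging the voltage by 30 degrees; the amplitudes and phase angle are invented for illustration, not taken from any real UPS:

```python
import math

# Sample one cycle of a sine wave; current lags voltage by 30 degrees.
N = 1000
v = [170 * math.sin(2 * math.pi * k / N) for k in range(N)]
i = [10 * math.sin(2 * math.pi * k / N - math.pi / 6) for k in range(N)]

# "Estimated" wattage: multiply RMS voltage by RMS current. This is
# really apparent power (VA) and ignores phase shift and distortion.
vrms = math.sqrt(sum(x * x for x in v) / N)
irms = math.sqrt(sum(x * x for x in i) / N)
apparent = vrms * irms          # ~850 VA

# True RMS wattage: average the instantaneous v(t) * i(t) product,
# which is what waveform-capture monitoring effectively does.
true_power = sum(a * b for a, b in zip(v, i)) / N   # ~736 W

print(round(apparent), round(true_power))
```

Even with a clean sine wave, the simple RMS product overstates real power by the power factor (here cos 30° ≈ 0.87); with the distorted waveforms many UPS systems produce, the error of averaging methods grows further.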

Traditional Data Centers that are not enacting any energy-efficiency measures operate with an average PUE of over 3. A Data Center actively pursuing efficiency measures can achieve a much lower rating and, in return, realize substantial energy savings.

High Impact Measures to Boost Data Center Efficiency (Part 2)

While typical energy audits focus on the mechanical and electrical infrastructure, in data centers the facility framework is only one factor in the cost equation. Oftentimes, improving other areas can be even more rewarding. For example, consideration of the actual kilowatts consumed by servers and other IT hardware is crucial when examining energy efficiency in a data center.

Data processing equipment accounts for most of the energy consumption in a data center, and because of this, facility executives really need to start by thinking ‘inside the box’. Best practices in equipment type, usage, and deployment configuration can all significantly reduce the overall energy needs of this equipment.

Pull the Plug on Idle Servers

It’s a simple concept, really: if it’s not doing anything, unplug it. However, in many data centers, up to 15% of the servers are candidates for decommissioning yet are left running for no reason other than a lack of drive to clean up outdated equipment. Some estimates indicate that each idle server can cost more than $1,000 annually when total data center energy usage is considered. That’s a lot of wasted capital! Addressing the issue can have an immediate impact on the bottom line.
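A rough sketch of the arithmetic behind the idle-server waste described above. The server count, per-server draw, utility tariff, and PUE multiplier below are assumptions chosen for illustration; plug in your own metered values:

```python
# Back-of-the-envelope cost of idle servers (all inputs are assumptions).
idle_servers = 30          # servers that should have been decommissioned
avg_draw_kw = 0.3          # average draw per idle server, in kW
tariff = 0.12              # utility cost, $ per kWh
pue = 2.0                  # facility overhead multiplier (cooling, losses)

annual_cost_per_server = avg_draw_kw * 24 * 365 * tariff * pue
print(round(annual_cost_per_server))                  # cost per idle server
print(round(annual_cost_per_server * idle_servers))   # fleet-wide waste
```

With higher tariffs or denser servers, the per-server figure quickly approaches the $1,000-plus estimates cited above.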

The solution is to establish a rigorous program to decommission obsolete hardware.

Maintaining an asset management database is a necessity to help enterprises ensure that they are consuming resources efficiently. This database should contain accurate, up-to-date information on server location and configuration, enabling IT staff to easily identify variables of power, cooling, and available rack space when planning future server and storage deployments and identifying potential systems to retire.
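As a sketch of how such a database supports retirement decisions, the snippet below flags servers with no recorded activity for over a year. The field names, hostnames, and threshold are hypothetical, not a real schema:

```python
from datetime import date

# Hypothetical asset-inventory records; fields and values are invented.
servers = [
    {"host": "app01", "rack": "A3", "kw": 0.45, "last_active": date(2023, 11, 2)},
    {"host": "db07",  "rack": "B1", "kw": 0.60, "last_active": date(2021, 4, 18)},
    {"host": "web12", "rack": "A5", "kw": 0.30, "last_active": date(2020, 9, 1)},
]

def retirement_candidates(inventory, idle_days=365, today=date(2024, 1, 1)):
    """Flag servers with no recorded activity for more than idle_days."""
    return [s["host"] for s in inventory
            if (today - s["last_active"]).days > idle_days]

print(retirement_candidates(servers))  # ['db07', 'web12']
```

The same records (rack location and kW draw) feed directly into the power, cooling, and rack-space planning described above.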

Upgrade to Energy Efficient Servers

Another simple way to reduce energy consumption is to buy more energy-efficient servers. Most IT departments ignore energy-efficiency ratings when selecting new hardware, focusing on performance and up-front costs rather than total cost of ownership. Yet if one server uses 50 watts less than another, that difference equates to savings of more than $250 over a three-year period, and even more profound savings of $1,500 or more can be realized on facility infrastructure expenditures.
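The three-year figure above can be reproduced with a short calculation. The electricity rate and facility overhead (PUE) below are assumed values, not part of the original estimate:

```python
# Reproducing the back-of-the-envelope server savings figure.
watts_saved = 50          # difference between two candidate servers
hours = 24 * 365 * 3      # three-year service life
rate = 0.10               # $ per kWh, an assumed utility rate
pue = 2.0                 # each IT watt carries roughly a watt of overhead

direct_kwh = watts_saved / 1000 * hours   # 1,314 kWh of IT-side energy
savings = direct_kwh * rate * pue
print(round(savings))  # ~263, in line with the "more than $250" claim
```

Note the PUE multiplier: every watt saved at the server also avoids a watt (or more) of cooling and distribution overhead, which is where the larger infrastructure savings come from.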

Data processing equipment relies on power supplies to take incoming power and distribute it to internal components as required. These power supplies are typically specified by the manufacturer for the worst-case conditions of the device under a maximized configuration. In the past, power supplies were typically rated far beyond the components’ capabilities to provide a “safety factor” for the device. As more pressure is brought to bear on energy efficiency in computing, manufacturers have been striving to match their power supplies more closely to the components’ capabilities, a state sometimes called power parity.

Among the more power-consuming components in most IT processing equipment are the fans required to cool the equipment internally. These fans run continuously as long as the device is running. Both equipment and chip manufacturers have been making strides to better match fan use with actual equipment needs. As chip development continues, heat tolerance is increasing, and fans are being designed to run in stages depending on the processing load. This means fans can run at lower speeds when processing is at a lower state, consuming less power.

Processing equipment developed within the last 3-5 years (depending on the manufacturer) is likely to be relatively energy efficient; anything older should certainly be evaluated.

Consolidate and Virtualize

Another “low-hanging fruit” in many data centers is server consolidation and virtualization. Typical utilization rates for non-virtualized servers run between 5 and 10 percent of total physical capacity, wasting hardware, space, and electricity. By moving to virtualized servers, data centers can support the same workloads with less hardware, resulting in lower equipment costs, lower electrical consumption (thanks to reduced server power and cooling), and less physical space required to house the server farm.
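The consolidation arithmetic is straightforward. The sketch below uses the 5-10 percent utilization range quoted above; the fleet size and post-virtualization target utilization are assumptions:

```python
import math

# How many virtualization hosts replace a lightly loaded physical fleet?
physical_servers = 100
avg_utilization = 0.08      # typical non-virtualized utilization (5-10%)
target_utilization = 0.60   # assumed conservative post-consolidation target

# Express total useful work in "fully utilized server" units:
workload = physical_servers * avg_utilization            # 8 server-equivalents
hosts_needed = math.ceil(workload / target_utilization)  # 14 hosts
print(hosts_needed)
```

Even with a conservative 60% target and headroom for failover, roughly 100 physical servers collapse into the mid-teens of hosts, with corresponding reductions in power, cooling, and floor space.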

It is important to remember that not all applications and servers are good candidates for virtualization, which adds complexity to the endeavor.

Along with consolidated server applications, the associated storage for these systems is becoming more consolidated as well. Storage Area Network (SAN) and Network Attached Storage (NAS) solutions are becoming the norm in data center topologies, and virtualized tape systems are replacing the larger tape storage devices of the past. As these systems become more standardized, they have also been increasing in density, allowing more storage in the same footprint with only marginal increases in energy consumption. The advent of solid state drives (SSDs) will likely create even higher densities with lower overall power consumption in the future. Although these devices are not yet in production on central storage equipment, it will only be a matter of time before they are utilized.

The Bottom Line

A comprehensive efficiency strategy that targets IT processing equipment in addition to other tactics can substantially reduce energy consumption and net large savings. A facility-wide energy audit from an experienced partner will help to identify the areas where the most immediate impact can be achieved.

High Impact Measures to Boost Data Center Efficiency (Part 3)

Energy efficiency in electrical systems can be achieved through measures that limit losses in the devices making up these systems. Power parity (the amount of power put into a device equaling the amount of power delivered by it) represents the most efficient use of power. Transformers, and equipment that utilizes transformers (such as UPS systems and PDUs), tend to suffer efficiency losses in their windings. As equipment vendors apply more stringent manufacturing techniques to their products, the efficiency of this type of equipment improves. UPS vendors now provide systems that operate at 95% efficiency or higher, meaning only a 5% loss between the power supplied to the device and the power delivered by it. It should be noted that these efficiencies are generally based on the device carrying no less than roughly 30% of its rated maximum load, although some newer UPS systems can maintain their efficiency down to as low as 20% of rated maximum. As equipment is replaced due to system changes, end of life, or equipment failure, higher-efficiency equipment should be specified and provided to improve the energy efficiency of these systems.
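The partial-load behavior described above can be sketched numerically. The efficiency figures per load band below are assumptions modeled on typical double-conversion UPS curves, not any vendor's published data:

```python
# Hedged sketch of UPS losses at partial load (assumed efficiency curve).
def ups_efficiency(load_fraction: float) -> float:
    """Rough efficiency curve: falls off sharply below ~30% load."""
    if load_fraction >= 0.5:
        return 0.95
    if load_fraction >= 0.3:
        return 0.93
    if load_fraction >= 0.2:
        return 0.90
    return 0.85

def input_kw(it_load_kw: float, rated_kw: float) -> float:
    """Utility power drawn to deliver it_load_kw through the UPS."""
    return it_load_kw / ups_efficiency(it_load_kw / rated_kw)

# A 500 kW UPS carrying 100 kW (20% load) wastes more power per
# delivered kW than the same module carrying 300 kW (60% load):
print(round(input_kw(100, 500) - 100, 1))  # ~11.1 kW of losses (11% of load)
print(round(input_kw(300, 500) - 300, 1))  # ~15.8 kW of losses (5% of load)
```

This is why data centers with heavily oversized, lightly loaded UPS modules pay a disproportionate efficiency penalty, and why newer modules that hold their efficiency down to 20% load matter.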

Measurement and Recording Data

We mentioned in Part 1 of this series that in order to understand power consumption in the data center, metering of these systems must be provided. Further, trending this information is invaluable both for establishing a baseline of energy use and for gauging the outcome of changes implemented to improve efficiency. The Power Usage Effectiveness (PUE) of the systems indicates how efficiently the data center operates, and it is very important to know where your data center ranks on PUE in order to decide what measures should be taken to improve efficiency. Recording power usage at the main switchgear supplying both the electrical and mechanical equipment serving the data center, and at the distribution side of the UPS systems (preferably at the 120/208-volt level at the PDUs), provides the simplest means of calculating the PUE.


Lighting systems have been moving toward more energy-efficient components in recent years, away from incandescent and T12 luminaires to compact fluorescent and LED fixtures. ENERGY STAR has reported savings of 42% from switching from T12 fluorescent luminaires with magnetic ballasts to high-efficiency T8 luminaires with electronic ballasts. Oftentimes these higher-efficiency luminaires actually produce higher lighting levels in addition to using less power. The more recent introduction of LED luminaires, which can be retrofitted into existing fluorescent fixtures, is driving these efficiencies even higher.

Lighting Controls

Another energy-saving measure that can be implemented in the data center is lighting controls. The notion of “lights out” data center operations refers to personnel not normally being stationed in the data center space. As operational control of data processing applications becomes more network-driven and remotely accessed, less time is required in the data center to perform these activities. With less time spent in the space, lighting becomes unnecessary during unmanned periods. Occupancy sensors offer a reasonable way to shut lights off automatically as personnel enter and leave the space. However, occupancy sensors cannot detect continued presence when personnel are out of sensory contact with a motion sensor, for example while working within or at the lower portions of equipment racks. To better accommodate these specialized circumstances, a combination of occupancy/motion sensors and card access systems allows for a highly effective and efficient lighting controls strategy.

The Bottom Line

Once the proper metering components are in place and baselines are established, it’s relatively simple to determine which electrical infrastructure equipment will benefit from an upgrade and what the payback for the investment will be. Also, paying attention to lighting controls can improve energy efficiency in the data center.  No matter what the situation is in your data center, a facility-wide energy audit from an experienced partner will help to identify the areas where the most immediate impact can be achieved.

High Impact Measures to Boost Data Center Efficiency (Part 4)

Mechanical cooling, depending on the efficiency of the systems in use, can consume as much as 50% of the total power used in a data center. Good engineering practice, efficient equipment, and solid operational understanding all contribute to a lower cost of ownership and operation.

Mechanical Economization or “Free Cooling”

The advent of “green” data center practices has ushered in a heightened interest in reducing mechanical systems energy use. As part of these efforts and in conjunction with data center design “best practices”, a means of mechanical economization or “free cooling” has become a design standard rather than a luxury.

Mechanical economization utilizes the ambient temperature of the local climate to provide an alternative means of heat rejection from standard mechanical systems. Two means of creating this ambient usage are through waterside or airside systems:

Waterside economization runs a liquid medium through an outdoor series of coils, cooling it to a lower temperature. If ambient cooling meets the set point required for the supply water temperature, the chiller barrel never needs to run, greatly reducing the power required by the chiller. During periods when the ambient temperature is not low enough to provide 100% economization, partial “free cooling” can still reduce the chiller’s overall power needs, with mechanical cooling making up the difference to bring the return water down to the proper supply temperature.

Airside economization utilizes an air exchange through either a cross-stream configuration (which mixes return air with outside air passing through filter media to create the supply air stream) or a heat wheel (also known as an enthalpy wheel, which nearly eliminates outside-air mixing but typically requires a much larger footprint). This approach essentially eliminates the need for a water-system medium, and in colder climates these systems can outperform waterside economization.
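A first-cut economizer study simply bins the year's ambient temperatures against the supply set points. The hourly temperature distribution and both set points below are invented for illustration; a real study would use local weather (e.g. TMY) data and the actual supply temperatures of the plant:

```python
# Hedged sketch: estimating "free cooling" hours from ambient temperature.
# Bin counts and setpoints are assumptions, summing to 8,760 hours.
hourly_temps_f = [28] * 2000 + [45] * 2500 + [62] * 2200 + [80] * 2060

full_econ_setpoint = 50   # below this, the economizer alone meets the load
partial_econ_limit = 65   # between setpoints, the economizer offsets part

full = sum(1 for t in hourly_temps_f if t < full_econ_setpoint)
partial = sum(1 for t in hourly_temps_f
              if full_econ_setpoint <= t < partial_econ_limit)
print(full, partial)  # hours of full vs partial economization
```

In this invented climate, more than half the year runs on full "free cooling" and another quarter on partial economization, which is the kind of result that has made economization a design standard rather than a luxury.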

Mechanical Systems Controls and Monitoring

With mechanical systems improving their efficiency through the equipment improvements and system designs noted above, controls and monitoring become more critical to maintaining efficient operation. CRAC unit manufacturers have added better controls that allow a more systematic approach to data center HVAC. Units now communicate with one another throughout the data center and share individual operating conditions to assure a more unified response to general room conditions.

Monitoring of these systems and trending data also benefit operations and maintenance personnel associated with the data center to better understand the effects of things like economization, maintenance, and other conditions which may affect the data center mechanical systems.

Motors and Drives

Because of reliability requirements in the data center, mechanical systems often run at 50% or less of their rated capacities during normal operation. This allows failover scenarios to carry design loads even when a component in the system is out of operation. Further hampering efficient operation, most data center loads run below their maximum design capacities in order to allow for growth in the space.

To reduce the power consumed by equipment running at lower loads, and to help that equipment maintain better efficiency (as well as better life expectancy), Variable Frequency Drives (VFDs) provide a simple way to improve performance at lower loading while also reducing power consumption. A VFD is an electrical device that controls a motor by varying its frequency so it consumes less power at lower speeds when loads are below rated capacity. A motor can consume as little as 25% of its full power at 60% of full speed. Additional benefits include reduced wear at startup and reduced overall motor wear from running at lower rates than a single-speed maximum.

VFDs can be used on the chillers, pumps, and cooling towers of a central system, as well as on the air handler systems in the data center, and they can provide additional recordable information about the power consumption of mechanical equipment. The design of VFDs has improved substantially over the past few years, and operating HVAC equipment, especially pumps and CRAC unit fans, at reduced speed can produce cost savings of almost 20 percent.
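The VFD savings quoted above follow from the fan affinity laws: airflow scales linearly with speed, but shaft power scales with roughly the cube of speed. The ideal cube-law relationship is sketched below; real motors and drives add losses that push the numbers up slightly:

```python
# Fan affinity law: power scales with the cube of speed (ideal case).
def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of full-speed power drawn at a reduced speed."""
    return speed_fraction ** 3

# At 60% speed a fan ideally draws about 22% of full-speed power,
# consistent with the ~25% figure quoted above once drive and motor
# losses are included.
print(round(fan_power_fraction(0.60), 2))  # 0.22
print(round(fan_power_fraction(0.80), 2))  # 0.51
```

This cubic relationship is why even modest speed reductions on pumps and CRAC unit fans yield outsized energy savings.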

The New Normal in Data Center Infrastructure Strategy

Cloud computing is a top-of-mind initiative for organizations in all industries. The promise of scalable, on-demand infrastructure, consumption-based pricing that reduces capex demands, and faster time-to-market for new solutions constitutes an intoxicating potion for requirements-challenged, cash-strapped IT executives.

However, for many IT executives, the migration to the cloud is not a simple decision, for one big reason: security. When you own and manage your own infrastructure or employ traditional colocation or managed hosting services, there are established policies, practices, and risk mitigation strategies that are widely accepted. In the murky waters of the cloud, entirely new risks emerge, including:

  • Less transparency on infrastructure security practices, especially in below-the-hypervisor assets
  • New multi-tenancy considerations that are not as well documented or understood
  • Greater delegation of governance, risk and compliance demands to the cloud services provider

Despite these considerations, the financial lure of the cloud is inescapable. Public cloud services providers (CSPs) like Amazon and Microsoft have created massive economies of scale and are increasingly focused on segmented private cloud services that set a new normal in terms of cost-effectiveness, scalability and the ability to deliver truly agile IT infrastructure.

This has forced many IT departments to begin to look at workload segmentation in a new light. Beyond the questions of transactional vs. archival or batch vs. real-time workloads, organizations now need to look at applications that are “cloud adaptable”, both in terms of performance/technical readiness and in terms of governance, risk and compliance. New, business-driven applications like social CRM, human capital management, collaborative procurement and predictive analytics are all strong candidates for migration to on-demand cloud architecture.

This leads to another ‘new normal’ in IT infrastructure: hybrid architectures. Hybrid IT infrastructure bridges public and private clouds, managed services providers, and on-premise data centers. This composite fabric must be secured and managed for optimized performance, compliance, and risk, opening up entirely new challenges and ushering in whole new classes of automation and management toolkits, such as internal cloud services brokers. It also forces greater emphasis on internal plans for virtualization or on-premise cloud deployments that can integrate seamlessly into these complex architectures.

Making sense of this trend and its associated technologies can be confusing. BRUNS-PAK Consulting Services is a growing part of BRUNS-PAK’s comprehensive data center services offerings. Our consulting services team is expert at helping customers to plan and implement complex strategies for alternative infrastructures and dynamic IT deployment. By helping IT management understand and optimize the following critical infrastructure considerations, we can make it easier to align IT strategy with business needs, and reduce the rise of shadow IT initiatives:

  • Value of current facilities renovation/expansion (CAPEX vs. OPEX)
  • New data center build options (CAPEX)
  • Alternative financing options/leaseback (OPEX)
  • Co-location design and optimization
  • Cloud integration
  • Containers/Pods
  • Network/WiFi design and management
  • Migration/relocation options
  • Hybrid computing environment design and deployment

GE and EMC Pivotal: Three Things Every CIO Can Learn From Them

Recently, General Electric announced a $105 million investment in EMC Pivotal. The investment reflects the company’s growing commitment to smart systems and devices under its Industrial Internet initiative. From locomotives to turbines to household appliances, GE sees a world where the ‘Internet of Things’ delivers measurable value to users of these increasingly intelligent systems.

They are not alone in their strategy. Apple expats Tony Fadell and Matt Rogers took their knowledge of design engineering and online connectivity to create Nest, which sells smart building thermostats. Nest is more than a programmable thermostat, however. This web-connected device learns from a homeowner’s behavioral patterns and creates a temperature-setting schedule from them. It is also a data-use giant, compiling data on its users to drive smarter energy utilization. More important, it shows how entrepreneurs are beginning to embrace technology to do for other common devices what Apple did for our portable music players (iPod) and phones (iPhone): make them stylish, fun, and easy to use.

So GE, drawing on this trend, is rethinking how turbines can talk to their owners to drive smarter, more reliable operation, and how locomotives can talk to controllers to ensure timely service and keep maintenance schedules on track. For IT teams at GE, this means tons of diverse data streams, structured, unstructured, and semi-structured, that need storage and interpretation. If this is your business, as GE increasingly deems it is, then the investment in EMC Pivotal makes sense.

But what can we all learn from GE? Here are three important takeaways from the GE investment for CIOs in all business, academic and government segments:

Data Volume Will Grow.

In conversations with IT executives, we still see a tendency to talk about data in traditional terms. That is, we think of applications in our traditional departments (HR, sales, finance, manufacturing, etc.) as being our data sources. However, overlooking the explosion in data volumes likely to come from marketing, social media, and customer devices like the Nest thermostat could leave IT teams scrambling for resources when the tsunami from these sources hits.

CIOs Must Drive Business Value…Not Just IT.

GE is slowly and methodically betting its business on data, and it is not alone. The key takeaway is the rapid shift of the CIO from owner of IT services to broker of services supporting business value. This shift requires CIOs to rethink their facilities and infrastructure strategy to ensure nimble, scalable, secure, on-demand, affordable resources for the business.

Data Center Facilities Are Not What They Used To Be.

The Microsoft Azure cloud facility in Quincy, WA includes three distinct architectural approaches to data center design, from a traditional raised-floor integrated facility to a novel, open-air modular form factor that redefines what it means to be a data center. This one facility single-handedly demonstrates the complex decisions facing IT executives looking to plot data center facility strategy for the next decade. To build out data center resources that support consumer-grade data processing (i.e., Google- or Amazon-class price/performance), you need to consider groundbreaking concepts.

The BRUNS-PAK Data Center Methodology

Over the past 44 years, BRUNS-PAK has quietly assembled one of the most diverse, skilled teams of professionals focused on the strategies and implementation tactics required to craft durable data center strategies in this new era. From strategic planning to design/build support, construction and commissioning, BRUNS-PAK is helping clients craft solutions that balance the myriad decisions underpinning effective data center strategy, including:

  • Renovation vs. expansion options (CAPEX v. OPEX)
  • Build and own
  • Build and leaseback
  • Migration/relocation options
  • Co-Location
  • Cloud integration / Private cloud build out
  • Container/Pod deployment
  • Network optimization
  • Business impact analysis
  • Hybrid computing architecture

With over 6,000 customers in all industry, government and academic sectors, BRUNS-PAK has a proven process for designing, constructing, commissioning and managing data center facilities, including LEED-certified, high efficiency facilities in use by some of the world’s leading companies and institutions.

Guided by the Green Grid

As business demands increase, so too does the number of data center facilities and the amount of IT equipment they house. With escalating demand for data center operations and rising energy costs, it is essential for data center owners and operators to monitor, assess and continually improve performance using energy efficiency and environmental impact metrics. “Overall, global data center traffic is estimated to grow threefold from 2012 to 2017 and although data centers are becoming more efficient, their total energy use is projected to grow,” said Deva Bodas, principal engineer and lead architect for Server Power Management at Intel Corporation and board member for The Green Grid. Government and industry regulators are now adding increased pressure for energy-efficient computing in order to reduce the carbon footprint while data center managers fear they may reach a point of resource limitations.

The ever-present issue is that with such a diverse range of efficiency assessment approaches, many organizations are unclear about what exactly their efficiency assessments should entail. The Green Grid Association, a global consortium, provides a forum where IT, facilities, and other C-level executives come together to discuss different options for implementing standardized data center measurement systems. Through data collection and analysis, assessment of emerging technologies, and exploration of top data center operation practices, industry-leading metrics are collaboratively devised by end users, policy makers, technology providers, facility architects, and utility companies. Many data center efficiency metrics established by the Green Grid task force are now industry-standard, including Power Usage Effectiveness (PUE™), Data Center Infrastructure Efficiency (DCiE™), Carbon Usage Effectiveness (CUE™), Water Usage Effectiveness (WUE™) and Data Center Productivity (DCP). These globally adopted metrics are employed by BRUNS-PAK as a dependable way to measure specific data center results against comparable organizations, improve existing data center efficiencies, and make intelligent decisions in new data center deployments.

BRUNS-PAK fully leverages the metrics, technical resources, and educational tools that the Green Grid provides to accurately assess key elements of data center efficiency. Standardized life cycle assessments such as the Green Grid’s Data Center Maturity Model (DCMM) and Data Center Life Cycle Analysis are essential resources used by BRUNS-PAK in conversations with data center owners, providing the knowledge required to decide whether to rebuild or renovate, predict expected returns, and identify areas of IT operations that require improvement. In addition to the standard PUE metric, BRUNS-PAK also leverages the Green Grid’s DCeP (Data Center Energy Productivity), a newer equation that quantifies the useful work a data center produces for the energy it consumes and allows each organization to define “useful work” as it relates to its unique business. Additionally, the EDE (Electronic Disposal Efficiency) metric is used by BRUNS-PAK to help data center operators evaluate how their outdated electronic equipment is managed and disposed of. It is the combination of these recognized metrics that guides the design of all BRUNS-PAK facilities.

Following the core tenets of the Green Grid Design Guide, a new architectural approach to how data centers are built and modernized that focuses on energy efficiency, BRUNS-PAK takes a holistic approach to data center design that applies these efficiency metrics from start to finish.

The Green Grid Design Guide is described as “a guide for the standardization and evolution of key capabilities” and is based on core tenets that BRUNS-PAK factors into every data center design:

Fully Scalable:

All systems/subsystems scale energy consumption and performance, using the minimum energy required to accomplish the workload.

Fully Instrumented:

All systems/subsystems within the data center are instrumented and provide real-time operating power and performance data through standardized management interfaces.

Fully Announced:

All systems/subsystems are discoverable and report minimum and maximum energy used, performance level capabilities, and location.

Enhanced Management Infrastructure:

Compute, network, storage, power, cooling, and facilities utilize standardized management/interoperability interfaces and language.

Policy Driven:

Operations are automated at all levels via policies set through management infrastructure.

Standardized Metrics/Measurements:

Energy efficiency is monitored at all levels within the data center, from individual subsystems to the complete data center, and is reported using standardized metrics during operation.
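The instrumentation tenets above can be pictured with a small model: every subsystem announces its identity and power envelope, reports real-time readings through one standardized interface, and a policy engine acts on those readings. This is a minimal illustrative sketch, not the Green Grid's actual interface definition; all names and thresholds are hypothetical.

```python
# Minimal model of the "Fully Instrumented" / "Fully Announced" /
# "Policy Driven" tenets. Every subsystem, whether compute, cooling,
# or power, exposes the same reporting interface.

from dataclasses import dataclass

@dataclass
class Subsystem:
    name: str
    location: str
    min_watts: float       # "Fully Announced": reported energy envelope
    max_watts: float
    current_watts: float   # "Fully Instrumented": real-time reading

    def report(self) -> dict:
        """Standardized management interface, uniform across subsystems."""
        return {"name": self.name, "location": self.location,
                "min_w": self.min_watts, "max_w": self.max_watts,
                "now_w": self.current_watts}

def apply_policy(subsystems, cap_watts):
    """'Policy Driven': flag any subsystem exceeding its power cap."""
    return [s.name for s in subsystems if s.report()["now_w"] > cap_watts]

fleet = [Subsystem("crac-1", "row A", 500, 9000, 4200),
         Subsystem("rack-12", "row B", 800, 12000, 11500)]

print(apply_policy(fleet, 10000))  # ['rack-12']
```

Because every subsystem reports through the same interface, the policy layer never needs device-specific logic, which is the point of the "Enhanced Management Infrastructure" tenet.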

The 2014 Data Center Transformation

Welcome to BRUNS-PAK’s Mission Critical Data Center Blog. We hope to provide you with interesting and timely information to help you navigate the ever-changing Data Center in your facility. With that said, let’s shed some light on one of our clients’ most basic and pressing questions: what is driving the 2014 data center transformation?


  1. The worldwide economic downturn from 2008 – 2014, forcing conservation of capex spending: do more for less.
  2. The availability of emerging co-location / cloud / container / disaster recovery strategies.
  3. The maturation of total cost of ownership models (15 elements), revealing a “hybrid” solution (in many cases) that optimizes return on investment while balancing risk (warning: risk models evolving 2014+).
  4. Pressure to improve the “quarterly bottom line” at all costs.
  5. In the private sector, a focus on short-term stock appreciation is, in many cases, priority one.


We are very proud of our FAQs page: it is an extensive list with great information for future planning of your Data Center. Discussions regarding your data center are happening, and the topic of “Energy Efficiency” is an inevitable challenge that will come across your desk. We would like to highlight some topics covered on our FAQs page to arm you with the information you will need come Data Center decision-making time!
  1. ASHRAE 9.9 – Higher Inlet Temperatures
  2. Why pay for electrical consumption for mechanical cooling?
  3. CFD Models
  4. Heat Wheel
  5. 400V AC/DC
  6. DCIM
  7. Virtualization of services
  8. Higher efficiency computer equipment
  9. March 14, 2014 – Federal data center efficiency legislation passes US House of Representatives


  1. Leased data center constructed space
  2. Capex schedule of delivery minimized
  3. ROI – see total cost of ownership – 3+?
  4. Other tenants? – Impact of security
  5. Downtime: Who pays?
  6. Security Breach: Who pays?
  7. Terms and conditions (Legal Beagles 2014!!!)
  8. Senator Menendez – New Jersey sponsoring new legislation 2014
  9. True “partner” of equal financial stability


Where does your Data Center currently rank?

BRUNS-PAK is happy to share a tool to help you understand where your Facility Data Center is operating today. This is a great chart to help our clients and future clients see where they are and where they want to be. While budgets are being planned and fine-tuned, it is essential to understand the data needs of your organization. BRUNS-PAK provides solutions to get you there.

Check it out, you can also view this chart under our Resources tab!


When it comes to Data Security, we are all concerned and interested. Of course we like to arm ourselves with the latest information to protect our data; however, it is also important to understand what happens when a breach occurs. Most importantly, who pays?
  1. Federal data center efficiency legislation required – passed the United States House of Representatives March 14, 2014
  2. Senator Menendez of New Jersey initiates damages / liabilities / penalties associated with the “Target” breach. Ongoing as of March 2014
  3. The United States’ reaction and planning regarding future “Mr. Snowden”-style leaks.
    • Who pays?
    • What are the damages? Financial? Security? Both?
    • Rights of the NSA
  4. If my personal data is compromised, what are the fines? Who pays?
  5. Government outsource contracts – Amazon with the CIA, according to 60 Minutes?