Mobile Computing and Cloud Applications

How Cloud Computing is Driving Mobile Data Growth and a Next Generation of Data Center Strategies.

At the 2010 Techonomy Conference in Lake Tahoe, Google CEO Eric Schmidt cited a figure that most data professionals can relate to:

Every two days, there is as much ‘data’ produced globally as was produced from the dawn of civilization up to 2003 —  approximately five exabytes every 48 hours.

Certainly, the rapid growth in user-generated content is a major driver of the expansion. But user-generated content is really a reflection of a bigger shift in the IT landscape, driven by changes in the way businesses interact with customers in both B2B and B2C markets, and by an attendant revolution in the tools that consumers and businesses use to interact with applications and data.

According to Gartner, data growth remains at the top of the list of IT’s biggest challenges for 2011, with 47% of respondents to a summer 2010 survey of over 1,000 large enterprises citing data growth among their top three challenges. (i)  (N.B. 37% listed system performance and scalability, and 36% cited network congestion and connectivity issues.) Gartner’s research uncovered data growth rates between 40% and 60% year over year. For enterprise IT professionals this includes rapid expansion in unstructured data, such as e-mail and regulatory and compliance documents. But that is just part of the story.

In November 2009, technology writer Robert Cringely wrote:

“We’re in the middle of a huge platform shift in computing and most of us don’t even know it.  The transition is from desktop to mobile and is as real as earlier transitions from mainframes to minicomputers to personal computers to networked computers with graphical interfaces.”  (ii) 

According to ABI Research, quoted in IEEE Spectrum, monthly data transmission by mobile computing devices, including cellphones, tablets and portable computers, will increase 1400% by 2014, with the number of people subscribing to cloud-based applications increasing from 71 million to just under 1 billion in the same timeframe. (iii)  Cloud applications, in this case, are defined as applications that are delivered with data storage or processing power not primarily resident on the mobile device. The leading applications in this shift include consumer-friendly utility software (such as maps), but also business productivity tools (especially for sales, data sharing, and collaboration), rapidly growing social networking applications, and now-ubiquitous search functionality.

From consumers to business professionals, we are rapidly becoming addicted to real-time access to all forms of data. Rich media (audio and video) garners the headlines, but today we expect to log on to check order status, place new orders, make stock trades or bank deposits, explore real-time inventory, or review engineering designs — all through a growing array of mobile devices.

What does all this mean to data center professionals?

Mobile applications place new demands on data types, access control and traffic management, especially in cloud architectures. In the cloud, network resources are consumed by virtual machines, and network management and monitoring must intelligently correlate virtual machine traffic with physical network components and resources. Rapidly increasing demand on transactional resources dictates completely new QoS approaches to bandwidth allocation and traffic shaping. And data resources must be dynamically provisioned and carefully mirrored to ensure durable performance under widely varying loads.
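
To make the traffic-shaping point concrete, the sketch below shows a minimal token-bucket rate limiter, the basic mechanism behind most bandwidth-allocation and QoS schemes. This is an illustrative Python sketch only; the class name, rates and burst size are hypothetical and not tied to any particular product or standard.

    import time

    class TokenBucket:
        """Minimal token-bucket shaper: admits traffic up to rate_bps, with bursts up to burst_bytes."""
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0      # refill rate in bytes per second
            self.capacity = burst_bytes     # maximum bucket size in bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True                 # forward the packet
            return False                    # queue or drop: the flow is over its allocation

    # Hypothetical example: cap one virtual machine's flow at 10 Mbit/s with a 64 KB burst
    vm_shaper = TokenBucket(rate_bps=10_000_000, burst_bytes=64_000)
    print(vm_shaper.allow(1500))            # True for a single 1500-byte packet

In a real cloud deployment this kind of per-tenant allocation is enforced in the hypervisor or the network fabric rather than in application code, but the bucket logic is the same.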

BRUNS-PAK Cloud Computing Programs

BRUNS-PAK engineers are acutely aware of this dramatic evolution in the IT market. Our cloud computing team helps clients:

  • Organize and document objectives
  • Understand data center implications for both private cloud deployment and public cloud integration
  • Plan for long-term optimized data center performance through load modeling for high-utilization, virtualized deployment architectures
  • Develop a data center deployment model that is adaptive to rapid growth in both network access and data volume and can respond to the need for capacity expansion on a rapid basis through either increased site utilization or modular expansion

For more information on our Cloud Computing capabilities and offerings, click here to access our Contact Us page and call or request information.


(i)  “Data Growth Remains IT’s Biggest Challenge, Gartner Says.”  Computerworld Online.  http://www.computerworld.com/s/article/9194283/Data_growth_remains_IT_s_biggest_challenge_Gartner_says. November 2, 2010.

(ii)  Cringely, Robert X, “Pictures In Our Heads.” I, Cringely. 11/17/2009.

(iii) “Cloud Computing Drives Mobile Data Growth”.  IEEE Spectrum Online. http://spectrum.ieee.org/telecom/wireless/cloud-computing-drives-mobile-data-growth. October 2009

The Importance of Being Agile

In a November 2012 survey by IDG Research Services [1], CIOs got yet another wake-up call regarding the price of being non-responsive to increasingly demanding line-of-business owners in the enterprise. In the study, the majority of respondents indicated that line-of-business teams control over 20% of the IT budget. Further, 20% of respondents indicated that these teams now dictate the majority of IT spending. For CIOs used to having rigid control of their IT infrastructure, this is a discomforting reminder of the changes being wrought on IT by rapidly growing demand for agility that supports greater alignment with business operations.

The accessibility of software-as-a-service (SaaS) solutions, growing acceptance of mobile applications running on personally or company-owned devices, and shortening time to market for new services-oriented applications are combining to make it easier than ever for non-IT professionals to identify lightweight solutions to business problems that can be deployed without IT assistance and charged to company credit cards. The trend may have started in the shadows, but it is now a mainstream approach, with applications like Salesforce.com, Workday and Evernote mainstays in enterprise operations. The upside? Speed, reduced capital costs and flexible deployment options. The downside? Greater enterprise risk, compliance concerns, and barriers to sharing data across application environments.

How can IT adapt? In today’s market, CIOs must eschew old strategies favoring tight management of infrastructure and services, in favor of a new services-driven orchestration model in which the IT department focuses on the delivery of strategic value from IT assets, independent of where those assets reside and who owns them. This model makes concepts like just-in-time IT a focus, and forces the integration of new strategies for enterprise IT delivery, including:

  • cloud computing
  • containers/PODS/ultramicro data centers
  • advanced networking techniques for minimal latency
  • SDN (software defined networking)
  • OPEX/CAPEX rebalancing

Making sense of these trends and technologies can be confusing. BRUNS-PAK Consulting Services is an integral part of BRUNS-PAK’s comprehensive data center services offerings. Our consulting services team is expert at helping customers to plan and implement complex strategies for alternative infrastructures and dynamic IT deployment. By helping IT management understand and optimize the following critical infrastructure considerations, we can make it easier to align IT strategy with business needs, and reduce the rise of shadow IT initiatives:

  • Value of current facilities renovation/expansion (CAPEX vs. OPEX)
  • New data center build options (CAPEX)
  • Alternative financing options/leaseback (OPEX)
  • Co-location design and optimization
  • Cloud integration
  • Containers/Pods
  • Network/WiFi design and management
  • Migration/relocation options
  • Hybrid computing environment design and deployment

To learn more about how BRUNS-PAK Consulting Services can help you address emerging challenges in your data center strategy, contact Paul Evanko, Vice President at 732-248-4455, or via e-mail at pdevanko@bruns-pak.com.

Request More Information

[1] IDG Research Services, “Clouds, business issues and time management dominate the CIO’s world in 2013”, Nov. 2012 http://www.enterprisecioforum.com/en/whitepaper/it-and-cios-2013-will-look-awful-lot-201

Why CFD for Energy Efficiency?

While winter temperatures make it a little easier to distract yourself from the costs of data center cooling, the reality is that, for many companies, data center cooling remains a topic of high importance. At BRUNS-PAK, we have long championed design options that can make a significant difference in your data center HVAC costs, including:

  • Airside Economization: the use of “free” outside air in your cooling plan
  • Heat Wheel Integration: integration of heat wheel exchange systems for optimizing energy efficiency
  • Higher Data Center Ambient Temperature: following the ASHRAE TC 9.9 thermal guidelines means real savings
  • Hot Aisle/Cold Aisle Configuration: reducing hot/cold mixing can produce measurable improvements in cooling efficiency

However, one tool that companies do not take regular advantage of is CFD modeling. Computational Fluid Dynamics (CFD) is often used in data center design projects, but its use in understanding airflow and cooling efficiency in existing data centers can yield measurable improvements in the optimized configuration of your data center assets, along with recommendations for HVAC improvements.
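
CFD modeling captures far more than any single formula, but a quick sensible-heat estimate illustrates why airflow planning matters. The short sketch below uses the standard air-side relationship Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F); the rack load and temperature rise shown are hypothetical illustration values, not measurements from any facility.

    def required_airflow_cfm(it_load_kw, delta_t_f):
        """Approximate airflow (CFM) needed to remove an IT heat load at a given air temperature rise."""
        btu_per_hr = it_load_kw * 3412          # 1 kW of IT load is roughly 3,412 BTU/hr of heat
        return btu_per_hr / (1.08 * delta_t_f)  # sensible-heat relation for standard air

    # Hypothetical 8 kW rack with a 20 F rise from cold aisle to hot aisle
    print(round(required_airflow_cfm(8, 20)))   # roughly 1,264 CFM

When hot and cold air mix, the effective temperature rise shrinks and the required airflow (and fan energy) climbs, which is exactly the kind of condition CFD modeling makes visible.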

As leaders in the use of CFD modeling, BRUNS-PAK can provide expert consultation on ways to leverage this technique to support both short-term energy efficiency optimization modifications, and long-term strategic options for improving your data center sustainability profile.

For more information, contact Paul Evanko, Vice President at 732-248-4455, or via e-mail at pdevanko@bruns-pak.com.

Request More Information

Alternative Financing Strategies for Data Center Expansion

The rising reliance on real-time, data-informed decision making is placing new demands on the CIO to increase capacity and quality of service for knowledge workers throughout the enterprise. The CIO challenge in many organizations, however, is how to deliver that increased capability, capacity and quality of service while dealing with rising pressure to cut costs or forego major capital expenditures.

Traditionally, this has meant a strategic decision between:

  • Renovation of existing data center facilities (predominantly OPEX)
  • Expansion of existing data center facilities (balance of OPEX and CAPEX)
  • Building new data center facilities (CAPEX program)

BRUNS-PAK Data Center Design/Build Leaseback Programs offer a secure way to finance new data center capacity through allocation of operating dollars instead of capital dollars. Backed by one of the nation’s leading financial services institutions, BRUNS-PAK leaseback options are integrated with the BRUNS-PAK design/build methodology, which offers both the data center owner and the financing organization a clear, well-documented, fixed-price plan for data center construction projects. Financing options are available for both large-scale and moderate-scale programs.

If you are looking to evaluate OPEX options for an upcoming data center project, contact Paul Evanko, Vice President at 732-248-4455, or via e-mail at pdevanko@bruns-pak.com.

Request More Information

The New Normal in Data Center Infrastructure Strategy

Cloud computing is a top-of-mind initiative for organizations in all industries. The promise of scalable, on-demand infrastructure, consumption-based pricing that reduces capex demands, and faster time-to-market for new solutions constitutes an intoxicating potion for requirements-challenged, cash-strapped IT executives.

However, for many IT executives, the migration to the cloud is not a simple decision for one big reason: security. When you own and manage your own infrastructure or employ traditional colo or managed hosting services, there are established policies, practices and risk mitigation strategies that are widely accepted. In the murky waters of the cloud, entirely new risks emerge, including:

  • Less transparency on infrastructure security practices, especially in below-the-hypervisor assets
  • New multi-tenancy considerations that are not as well documented or understood
  • Greater delegation of governance, risk and compliance demands to the cloud services provider

Despite these considerations, the financial lure of the cloud is inescapable. Public cloud services providers (CSPs) like Amazon and Microsoft have created massive economies of scale and are increasingly focused on segmented private cloud services that set a new normal in terms of cost-effectiveness, scalability and the ability to deliver truly agile IT infrastructure.

This has forced many IT departments to begin to look at workload segmentation in a new light. Beyond the questions of transactional vs. archival or batch vs. real-time workloads, organizations now need to look at applications that are “cloud adaptable”, both in terms of performance/technical readiness and in terms of governance, risk and compliance. New, business-driven applications like social CRM, human capital management, collaborative procurement and predictive analytics are all strong candidates for migration to on-demand cloud architecture.

This leads to another ‘new normal’ in IT infrastructure: hybrid architectures. Hybrid IT infrastructure bridges public and private clouds, managed services providers and on-premise data centers. This composite fabric needs to be secured and managed for optimized performance, compliance and risk, opening up entirely new challenges and ushering in whole new classes of automation and management toolkits, such as internal cloud services brokers. It also forces greater emphasis on internal plans for virtualization or on-premise cloud deployments that can be integrated seamlessly in these complex architectures.

Making sense of this trend and its associated technologies can be confusing. BRUNS-PAK Consulting Services is a growing part of BRUNS-PAK’s comprehensive data center services offerings. Our consulting services team is expert at helping customers to plan and implement complex strategies for alternative infrastructures and dynamic IT deployment. By helping IT management understand and optimize the following critical infrastructure considerations, we can make it easier to align IT strategy with business needs, and reduce the rise of shadow IT initiatives:

  • Value of current facilities renovation/expansion (CAPEX vs. OPEX)
  • New data center build options (CAPEX)
  • Alternative financing options/leaseback (OPEX)
  • Co-location design and optimization
  • Cloud integration
  • Containers/Pods
  • Network/WiFi design and management
  • Migration/relocation options
  • Hybrid computing environment design and deployment

GE and EMC Pivotal: Three Things Every CIO Can Learn From Them.

Recently, General Electric announced a $105 million investment in EMC Pivotal. The investment reflects the company’s growing commitment to smart systems and devices under its Industrial Internet initiative. From locomotives to turbines to household appliances, GE sees a world where the ‘internet of things’ delivers measurable value to users of these increasingly intelligent systems.

They are not alone in their strategy. Apple alumni Tony Fadell and Matt Rogers took their knowledge of design engineering and online connectivity to create Nest, which sells smart building thermostats. Nest is more than a programmable thermostat, however. This web-connected device learns from a homeowner’s behavioral patterns and creates a temperature-setting schedule from them. It is also a data-use giant…compiling data on its users to drive smarter energy utilization. More important, it shows how entrepreneurs are beginning to embrace technology to do to other common devices what Apple has done to our portable music devices (iPod) and phones (iPhone)—namely, make them stylish, fun and easy to use.

Drawing on this trend, GE is rethinking how turbines can talk to their owners to drive smarter, more reliable operation, and how locomotives can talk to controllers to ensure timely service and keep maintenance on schedule. For IT teams at GE, this means an enormous volume of diverse data streams, structured, unstructured and semi-structured, that need storage and interpretation. If this is your business, as GE increasingly believes it is, then the investment in EMC Pivotal makes sense.

But what can we all learn from GE? Here are three important takeaways from the GE investment for CIOs in all business, academic and government segments:

Data Volume Will Grow.

In conversations with IT executives, we still see a tendency to talk about data in traditional terms. That is, we think of applications in our traditional departments (HR, sales, finance, manufacturing, etc.) as being our data sources. However, overlooking the explosion in data volumes likely to come from marketing, social media and customer devices like the Nest thermostat could leave IT teams scrambling for resources when the tsunami from these sources hits.

CIOs Must Drive Business Value…Not Just IT.

GE is slowly and methodically betting its business on data, and it is not alone. The key takeaway is the rapid shift from the CIO as owner of IT services to the CIO as broker of services supporting business value. This shift requires CIOs to rethink their facilities and infrastructure strategy in order to ensure nimble, scalable, secure, on-demand, affordable resources for the business.

Data Center Facilities Are Not What They Used To Be.

The Microsoft Azure cloud facility in Quincy, WA includes three distinct architectural approaches to data center design, from a traditional raised-floor integrated facility to a novel, open-air modular form factor that redefines what it means to be a data center. This one facility single-handedly demonstrates the complex decisions facing IT executives looking to plot data center facility strategy for the next decade. To build out data center resources that support consumer-grade data processing (i.e., Google- or Amazon-class price/performance), you need to consider groundbreaking concepts.

The BRUNS-PAK Data Center Methodology

Over the past 44 years, BRUNS-PAK has quietly assembled one of the most diverse, skilled teams of professionals focused on the strategies and implementation tactics required to craft durable data center strategies in this new era. From strategic planning to design/build support, construction and commissioning, BRUNS-PAK is helping clients craft solutions that balance the myriad decisions underpinning effective data center strategy, including:

  • Renovation vs. expansion options (CAPEX v. OPEX)
  • Build and own
  • Build and leaseback
  • Migration/relocation options
  • Co-Location
  • Cloud integration / Private cloud build out
  • Container/Pod deployment
  • Network optimization
  • Business impact analysis
  • Hybrid computing architecture

With over 6,000 customers in all industry, government and academic sectors, BRUNS-PAK has a proven process for designing, constructing, commissioning and managing data center facilities, including LEED-certified, high efficiency facilities in use by some of the world’s leading companies and institutions.

Fire Detection and Suppression Technology

Sometimes the unimaginable happens. A fire can threaten to destroy a data center. To protect the valuable equipment and information housed in the facility, it is critical to install a fire suppression system adequate to the size, type and operational responsibilities of the complex. By definition, a fire suppression system is a combination of fire detection and extinguishing devices designed to circumvent catastrophic business loss as a result of a fire. This loss includes not only the cost of equipment replacement, but also the cost of recovering lost data or business-specific applications.

Detection Systems: The First Line of Defense

A critical component of any suppression system is the smoke detector. Depending on the application, detectors can be of the photoelectric or ionization type. Detectors perform several vital functions:

  • Warn facility occupants of possible fire.
  • Shut down all electrical service to the equipment so as not to “fuel the fire.”
  • Activate the suppression medium.

If properly designed, the detection system can also be used to limit business loss from power-off interfaces by distinguishing an equipment failure from an actual smoke condition.

A highly effective detection system is one we call an “intelligent” system. It uses a software-based early warning system to provide an accurate means of detection and verification at the ceiling plane and underfloor plenum.

Water and Clean Agent Gas: Common Suppression Media

The suppression medium is activated if a true emergency is detected. The two most commonly used media for putting out a fire are water and clean agent gases such as FM-200, Inergen, and NAF S-III.

Determining which type of suppression medium to use depends in large part on the requirements of local code enforcement authorities, building and/or landlord stipulations, and input from insurance underwriters. It also depends on user preference, which is influenced by such factors as cost, business risk relative to data recovery, existing systems, and so forth.

Water sprinkler systems

Water sprinkler systems are found in most buildings regardless of the presence of a data center. As a general rule, where sprinkler systems exist, it is less expensive to convert to a pre-action sprinkler system than to install a clean agent system. Pre-action sprinklers are the water-based choice for data centers and refer to systems that control the flow of water to pipes in the ceiling plane. Smoke and heat activate a valve that advances the water to the ceiling plane. That way, inadvertent damage to equipment from leakage or accidental discharge is prevented. (By comparison, with an ordinary sprinkler system, water is contained in pipes in the ceiling plane at all times.)

Water is highly effective at putting out fires and is well suited for areas like printer rooms that contain combustible materials like paper and toner. The downside of water-based systems is the messy and lengthy clean up and recovery time after a water discharge.

Clean agents

Three clean agents originally vied for acceptance in the marketplace: FM-200, NAF S-III, and Inergen. These agents were developed in response to the phase-out of Halon and the development of NFPA 2001, which was adopted in the fall of 1994. Newer agents, including FE-25, FE-13 and Novec 1230, have since joined them and are also described below.

These agents are also viable alternatives to CO2 in underfloor applications. The costs of these systems have dropped in recent years due to more competition in the marketplace, with competing vendors offering the various gas options.

  1. FM-200 (Heptafluoropropane – HFC-227ea) is a colorless, liquefied compressed gas. It is stored as a liquid and dispensed into the hazard area as a colorless, electrically non-conductive vapor. It leaves no residue. It has acceptable toxicity for use in occupied spaces when used as specified in the United States Environmental Protection Agency (EPA) proposed Significant New Alternatives Policy (SNAP) program rules. FM-200 extinguishes a fire by a combination of chemical and physical mechanisms.

    FM-200 is an effective fire-extinguishing agent that can be used on many types of fires. It is effective for use on Class A Surface-Burning Fires, Class B Flammable Liquid, and Class C Electrical Fires.

    On a weight of agent basis, FM-200 is a very effective gaseous extinguishing agent. The minimum design concentration for total flood applications in accordance with NFPA 2001 shall be 7.0%.

  2. NAF S-III is a clean, non-conductive media used for the protection of a variety of potential fire hazards, including electrical and electronic equipment. NAF S-III is a clean gaseous agent at atmospheric pressure and does not leave a residue. It is colorless and non-corrosive.

    NAF S-III acts as a fire-extinguishing agent by breaking the free radical chain reaction that occurs in the flame during combustion and pyrolysis. Like Halon 1301, NAF S-III has a better efficiency with flaming liquids than with deep-seated Class A fires.

    NAF S-III fire extinguishing systems have the capability to rapidly suppress surface-burning fires within enclosures. The extinguishing agent is a specially developed chemical that is a gas at atmospheric pressure and is effective in an enclosed risk area. NAF S-III extinguishes most normal fires at the design concentration by volume of 8.60% at 20° C.

    NAF S-III is stored in high-pressure containers and super-pressurized by dry nitrogen to provide additional energy to ensure rapid discharge. At the normal operating pressure of 360 psi (24.8 bar) or 600 psi (42 bar), NAF is in liquid form in the container.

    Once the system is activated, the container valves are opened and the nitrogen propels the liquid under pressure through the pipework to the nozzles, where it vaporizes. The high rate of discharge through the nozzles ensures a homogeneous mixture with the air. Sufficient quantities of NAF S-III must be discharged to reach the required concentration, and the nozzles must be located to achieve uniform mixing.

  3. Inergen is composed of naturally occurring gases already found in Earth’s atmosphere (nitrogen, argon, and CO2). Inergen suppresses fire by displacing the oxygen in the environment. Inergen, however, is not toxic to the occupants because of the way it interacts with the human body. The level of CO2 in Inergen stimulates the rate of respiration and increases the body’s use of oxygen. This compensates for the lower oxygen levels that are present when Inergen is discharged.

    Inergen is stored as a dry, compressed gas and is released through piping systems similar to those utilized in other gaseous suppression systems.

  4. FE-25 fire suppression agent is an environmentally acceptable replacement for Halon 1301. FE-25 is an odorless, colorless, liquefied compressed gas. It is stored as a liquid and dispensed into the hazard as a colorless, electrically non-conductive vapor that is clear and does not obscure vision. It leaves no residue and has acceptable toxicity for use in occupied spaces at design concentrations. FE-25 extinguishes a fire by a combination of chemical and physical mechanisms. FE-25 does not displace oxygen and therefore is safe for use in occupied spaces without fear of oxygen deprivation.

    FE-25 has zero ozone depleting potential, a low global warming potential, and a short atmospheric lifetime.

    FE-25 closely matches Halon 1301 in terms of physical properties such as flow characteristics and vapor pressure. The pressure traces, vaporization, and spray patterns for FE-25 nearly duplicate those of Halon 1301. The minimum design concentration for FE-25 systems is 8.0%, meaning that about 25% more agent will be required than for a comparable Halon 1301 system. FE-25 requires about 1.3 times the storage area of Halon.

    When retrofitting an existing Halon 1301 system, the nozzles and cylinder assembly will need to be upgraded; however, the piping system likely will not need to be changed, making this a cost-effective retrofit that minimizes business interruption.

  5. FE-13 is a clean, high-pressure agent that leaves no residue when discharged. FE-13 efficiently suppresses fire by the process of physicochemical thermal transfer, absorbing heat from the fire much as a sponge absorbs liquid. FE-13 is safe for use in occupied spaces up to a 24% concentration. The design concentration for total-flood applications is 16%.
  6. Novec 1230 is the newest clean-agent gas available on the market. It is marketed as a long-term sustainable alternative to FM-200 and Halon. Novec 1230 has a 0.0 ozone depletion potential (equivalent to FM-200) but an atmospheric lifetime of only about five days, compared to a lifetime of more than 20 years for FM-200, and its global warming potential is negligible (roughly that of CO2). Novec 1230 is designed to a concentration level of 4-6%, which requires less gas than other clean agents. Novec 1230 extinguishes the fire by heat absorption, and is heavier than air, so the gas will sink in the room. Novec 1230 is also safe for electronic equipment, so the data center may not have to be shut down in the event of a gas discharge.

    Novec 1230 requires the same number of tanks as FM-200 and is stored as a liquid under pressure. Under normal atmospheric conditions, it exists as a gas. The system is approximately 5-7% more expensive than FM-200.

Table – Relative Cost Comparison of Extinguishing Methods
Scenario Characteristics:

  • Occupied Room
  • Housing electrical equipment
  • 10,000 cu-ft room volume
  • Room fully enclosed and building is fully sprinklered
Design Basis:

(1) Total flooding.

(2) Does not include the cost of fire alarm and detection system. Probable cost < $4,000.

(3) Assumes a fully sprinklered building.

(4) Includes the cost of the extinguishing agent.

Extinguishing Agent | Design Concentration / Density | Agent Quantity | Installation Cost (4) | Recharge Cost | Design Basis
FM-200 | 7.44% by volume | 364 lbs | 20% more than Inergen | Almost twice the cost of Inergen | (1) + (2)
FE-25 | 9.6% by volume | 335 lbs | Parallel to FM-200, less gas | 20%-25% less than FM-200 | (1) + (2)
Inergen | 37.5% by volume | 4,780 cu-ft | — | — | (1) + (2)
NAF S-III | 8.60% by volume | — | — | — | (1) + (2)
Pre-Action Sprinklers | 0.1 gpm/sq ft water | N/A | 1/4 the cost of Halon or Inergen | N/A | (2) + (3)

Note: NAF S-III does not appear to have the market presence to be a viable alternative.
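
As a rough sanity check on the agent quantities in the table, the total-flooding relationship from NFPA 2001 can be applied: W = (V / S) × (C / (100 − C)), where S = k1 + k2 × T is the agent vapor specific volume. The sketch below assumes commonly published k1/k2 constants for FM-200 (HFC-227ea) and a 70°F ambient; any real design must use the current edition of NFPA 2001 and the manufacturer’s listed data.

    def flooding_agent_weight(volume_ft3, concentration_pct, temp_f, k1, k2):
        """Estimated agent weight (lb) for a total-flood application per the NFPA 2001 relationship."""
        specific_volume = k1 + k2 * temp_f      # agent vapor specific volume, cu-ft per lb
        return (volume_ft3 / specific_volume) * (concentration_pct / (100.0 - concentration_pct))

    # 10,000 cu-ft room from the table, FM-200 at 7.44%; k1/k2 are commonly published
    # HFC-227ea values, used here for illustration only
    print(round(flooding_agent_weight(10_000, 7.44, 70, k1=1.885, k2=0.0046)))   # about 364 lb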

Best Practices for Protecting Against Data Breaches

The list of substantial and incredibly costly data breaches grows every day. Target, Home Depot, Dairy Queen, and Ahold Supermarkets are some of the more well-known examples, but there are literally hundreds of other breaches that get far less attention. It all adds up to a huge problem for IT and data center professionals.

The impact of a breach is so substantial that the real question becomes, “How can I build the best plan to protect my valuable data?”

One of the best answers is to start with independent experts who can help you create a strategy for preventing breaches that is developed specifically for your organization and its needs. Much like snowflakes, the plan for preventing breaches must be unique to the vulnerabilities, compliance demands, and IT infrastructure of each firm. At BRUNS-PAK, we bring decades of experience and the cross-disciplinary knowledge necessary to build an effective plan to help prevent breaches.

In addition, BRUNS-PAK is not a vendor of security products or services trying to sell its own offerings, nor will we “manage” the strategic plan results to ensure recommendation of a particular solution. We are vendor neutral, and looking out for your best interest. In fact, we’re so well known as an expert independent party that Gartner, Forrester, and IDC have all met with BRUNS-PAK to discuss this very important issue. This is a testament to the experience and broad knowledge of data center/IT issues that BRUNS-PAK brings to the table.

One of the most important aspects of our security evaluation process is to consider how co-location and cloud service providers can impact your organization. This information is integrated with the evaluation of your traditional on-premise data center. BRUNS-PAK’s holistic approach to security considers all of the issues that might impact your ability to prevent a breach; this is critical to a successful plan.

Getting started is easy. BRUNS-PAK experts will meet with you to develop a consultative approach based on best practices that will drive your ability to prevent breaches. Utilizing BRUNS-PAK’s leading-edge, 16-step approach, which provides a comprehensive framework for strategic data center issues, you’ll benefit from the best thinking in the industry.

Protecting your organization and infrastructure from data breaches is critical. Being protected starts with having the right plan. Too often, vendors are only attempting to justify why you should buy their product. A better approach is to engage BRUNS-PAK, an independent authority that is on your side, to build an effective plan against data breaches. Click here for our white paper on the topic, or you can contact Jackie Porr at 732-248-4455/888-704-1400 ext 111, or jporr@bruns-pak.com

Next In Queue:

We’ll look at how to effectively eliminate existing and potential hot spots in your data center with BRUNS-PAK expertise and Computational Fluid Dynamics (CFD) analysis.

Guided by the Green Grid

As business demands increase, so too does the number of data center facilities and the amount of IT equipment they house. With escalating demand for data center operations and rising energy costs, it is essential for data center owners and operators to monitor, assess and continually improve performance using energy efficiency and environmental impact metrics. “Overall, global data center traffic is estimated to grow threefold from 2012 to 2017 and although data centers are becoming more efficient, their total energy use is projected to grow,” said Deva Bodas, principal engineer and lead architect for Server Power Management at Intel Corporation and board member for The Green Grid. Government and industry regulators are now adding increased pressure for energy-efficient computing in order to reduce the carbon footprint while data center managers fear they may reach a point of resource limitations.

The ever-present issue is that with such a diverse range of efficiency assessment approaches, many organizations are unclear about what exactly their efficiency assessments should entail. The Green Grid Association, a global consortium, provides a forum where IT, facilities and other C-level executives come together to discuss different options for implementing standardized data center measurement systems. Through data collection and analysis, assessment of emerging technologies and exploration of top data center operation practices, industry-leading metrics are collaboratively devised by end users, policy makers, technology providers, facility architects and utility companies. Many data center efficiency metrics established by the Green Grid task force are now industry-standard, including Power Usage Effectiveness (PUE™), Data Center Infrastructure Efficiency (DCiE™), Carbon Usage Effectiveness (CUE™), Water Usage Effectiveness (WUE™) and Data Center Productivity (DCP). These globally adopted metrics are employed by BRUNS-PAK as a dependable way to measure specific data center results against comparable organizations, improve existing data center efficiencies and make intelligent decisions in new data center deployments.
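
For readers who have not yet worked with these metrics, PUE and DCiE reduce to simple ratios of measured energy: PUE is total facility energy divided by IT equipment energy, and DCiE is the reciprocal expressed as a percentage. The figures in the sketch below are hypothetical illustration values, not benchmarks.

    def pue(total_facility_kwh, it_equipment_kwh):
        """Power Usage Effectiveness: total facility energy / IT energy (lower is better; 1.0 is ideal)."""
        return total_facility_kwh / it_equipment_kwh

    def dcie(total_facility_kwh, it_equipment_kwh):
        """Data Center Infrastructure Efficiency: the reciprocal of PUE, expressed as a percentage."""
        return 100.0 * it_equipment_kwh / total_facility_kwh

    # Hypothetical month: 1,000,000 kWh into the facility, 625,000 kWh delivered to IT loads
    print(round(pue(1_000_000, 625_000), 2))    # PUE  = 1.6
    print(round(dcie(1_000_000, 625_000), 1))   # DCiE = 62.5 (%)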

BRUNS-PAK makes full use of the metrics, technical resources and educational tools that the Green Grid provides to accurately assess various key elements of data center efficiency. Standardized life cycle assessments such as the Green Grid’s Data Center Maturity Model (DCMM) and Data Center Life Cycle Analysis are essential resources used by BRUNS-PAK in conversations with data center owners to provide the knowledge required to decide whether to rebuild or renovate, predict expected returns and identify areas of IT operations that require improvement. In addition to the standard PUE metric, BRUNS-PAK also leverages the Green Grid’s DCeP (Data Center Energy Productivity), a newer equation that quantifies the useful work a data center produces relative to the energy it consumes and allows each organization to define “useful work” as it relates to its unique business. Additionally, the EDE (Electronic Disposal Efficiency) metric is implemented by BRUNS-PAK to help data center operators evaluate how their outdated electronic equipment is managed and disposed of. It is the combination of these recognized metrics that guides the design of all BRUNS-PAK facilities.

Following the core tenets of the Green Grid Design Guide, a new architectural approach to how data centers are built and modernized that focuses on energy efficiency, BRUNS-PAK takes a holistic approach to data center design that leverages these efficiency metrics from start to finish.

The Green Grid Design Guide is described as “a guide for the standardization and evolution of key capabilities” and is based on core tenets that BRUNS-PAK factors into every data center design:

Fully Scalable:

All systems/subsystems scale energy consumption and performance to use the minimal energy required to accomplish workload.

Fully Instrumented:

All systems/subsystems within the datacenter are instrumented and provide real time operating power and performance data through standardized management interfaces.

Fully Announced:

All systems/subsystems are discoverable and report minimum and maximum energy used, performance level capabilities and location.

Enhanced Management Infrastructure:

Compute, network, storage, power, cooling and facilities utilize standardized management/interoperability interfaces and language.

Policy Driven:

Operations are automated at all levels via policies set through management infrastructure.

Standardized Metrics/Measurements:

Energy efficiency is monitored at all levels within the datacenter from individual subsystems to complete datacenter and is reported using standardized metrics during operation.

Why the Internet of Things Must Be On Your Data Center Radar

Splunk. Glassbeam. Azure. Amazon Web Services.

If you are a CIO, get used to these names, because they (or their competitors) are likely to become an active part of your IT infrastructure over the next few years as the Internet of Things moves from bleeding-edge concept to mission-critical reality. The Internet of Things (IoT), the growing network of connected devices…everything from the FuelBand on your wrist and the refrigerator in your kitchen to wind turbines providing your electricity or the jet engines thrusting you skyward…is rapidly altering how CIOs need to engineer their data centers.

The high-profile examples to date have focused on industries like aerospace, where small operational improvements can lead to major savings or dramatic improvements in customer service. For example, the airline industry spends approximately $200 billion annually on fuel. Every 1% improvement in efficiency that can be gleaned by more efficient in-flight decision-making means $2 billion in savings. Or real-time feedback from jet engines experiencing an issue in-flight can mean faster repair turnaround on the ground, since parts and technicians can be ready for the flight upon arrival.

But, machine-to-machine (M2M) interactions introduce a completely new data profile into the mix for CIOs. Current internet applications operate on a transactional basis…a user makes a request that a server responds to. In M2M applications, data is supplied as a continuous, real-time stream that can add up to a large final data set, and that may require equally real-time response streams to be sent back to the source device. Virgin Atlantic IT Director David Bulman noted that a single flight for the company’s recently purchased 787 Dreamliner could generate up to a half terabyte of data! And, getting fuel optimization programs in place means analyzing some of that data in real-time to provide feedback to the flight crew.
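
The contrast between transactional and streaming workloads is easier to see in code. The sketch below simulates a continuous M2M telemetry stream with a rolling aggregate computed as readings arrive, rather than a single request/response exchange. The sensor values, window size and threshold are hypothetical, and the snippet does not represent any airline’s or vendor’s actual system.

    import random
    from collections import deque

    def engine_telemetry(samples):
        """Simulated continuous sensor stream: one fuel-flow reading per time step."""
        for _ in range(samples):
            yield random.gauss(2400.0, 50.0)     # hypothetical fuel flow, kg per hour

    window = deque(maxlen=60)                    # rolling window of the most recent readings
    for reading in engine_telemetry(600):
        window.append(reading)
        rolling_avg = sum(window) / len(window)
        if reading > rolling_avg * 1.05:         # illustrative real-time check on the stream
            pass                                 # in practice: flag the event for the analytics pipeline

    print(f"last rolling average: {rolling_avg:.1f} kg/hr over {len(window)} readings")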

Storing all that data is one obvious implication for the data center, bringing smiles to the faces of executives at companies like EMC. However, there is no reason to collect data if you have no plan to use it, and analyzing large data sets is the next implication. Server capacity must adapt to new processing loads driven by entirely new software platforms like Splunk or Glassbeam, applications optimized for handling and analyzing large machine data sets.

But the implications go beyond the walls of the data center as well. Machine data implies collection from tens, hundreds, thousands or millions of devices scattered around the globe. Moving this data in an optimal, secure and real-time fashion implies sophisticated and creative integration of web services like Azure or Amazon Web Services. For CIOs, this offers yet another reason to evaluate hybrid architectures for the data center.

OK. It’s Real…So, Now What?

For CIOs evaluating data center plans, the Internet of Things must be part of the future capacity planning process since miscalculation can significantly alter a company’s competitive posture. Here are three tips for integrating a M2M strategy into your broader data center planning process:

    1. Be Integrated. First and foremost, the IT team needs to be fully integrated with product development and customer service planning processes, since IoT demand will arise not from an IT requirement, but rather from real-world new product/service innovation. This means that demand forecasts in IT that may historically have only needed to account for classic administrative, finance, engineering and manufacturing workloads will now need to account for real-time data exchange as part of product/service delivery. This makes IT part of design and customer service conversations, not just IT support.
    2. Be Web Integrated. As implied above, the networking and distributed processing demands of M2M streams mean opening new discussions about Web integration in the data center architecture. For both networking and remote processing, CIOs cannot overlook the importance and potential value of cloud-based services in supporting IoT workloads.
    3. Be Nimble. The Internet of Things is spawning yet another era of innovation and demand in the data center. From exploding demand for data scientists to a new expansion of capacity, M2M interactions will most certainly shine a spotlight on IT, with good planning the key to supporting this exploding requirement.

How BRUNS-PAK Can Help

BRUNS-PAK’s proprietary design/build methodologies integrate an evolving array of strategies and tools for data center planning teams that must account for the potential impact of IoT workloads, including the need to fully integrate cloud services strategies. The BRUNS-PAK Hybrid Efficient Data Center Design program offers an iterative process that acknowledges both rapidly changing IT requirements and their associated facilities infrastructure needs, resulting in a strategic plan to address the evolving capacity and complex networking requirements created by M2M work streams. Through our expanded consulting services group, and well-established design/build services team, we can help you create a strategy that ensures your data center is as resilient and responsive as the devices you are monitoring around the globe!


REFERENCES

[1] ComputerWeekly.com, “GE uses big data to power machine services business” http://www.computerweekly.com/news/2240176248/GE-uses-big-data-to-power-machine-services-business
[2] ComputerWorld.com, “Boeing 787s to create half a terabyte of data per flight, says Virgin Atlantic” http://www.computerworlduk.com/news/infrastructure/3433595/boeing-787s-create-half-terabyte-of-data-per-flight-says-virgin-atlantic/

A Four-Part Framework for Resilient Data Center Architecture

Cornerstone concepts to support cybersecurity

While working on a recent project, we came across a newsletter authored by Deb Frincke, then Chief Scientist of Cybersecurity Research for the National Security Division at the Pacific Northwest National Laboratory, which outlined her team’s initiatives for “innovative and proactive science and technology to prevent and counter acts of terror, or malice intended to disrupt the nation’s digital infrastructures.” In cybersecurity, the acknowledged wisdom is that there is no “perfect defense” to prevent a successful cyberattack. Dr. Frincke’s framework defined four cornerstone concepts for architecting effective cybersecurity practices:

  • Predictive Defense through use of models, simulations, and behavior analyses to better understand potential threats
  • Adaptive Systems that support a scalable, self-defending infrastructure
  • Trustworthy Engineering that acknowledges the risks of “weakest links” in complex architecture, the challenges of conflicting stakeholder goals, and the process requirements of sequential buildouts
  • Cyber Analytics to provide advanced insights and support for iterative improvement

In this framework, the four cornerstones operate interactively to support a cybersecurity fabric that can address the continuously changing face of cyber threats in today’s world.

If you are a CIO with responsibility for an enterprise data center, you may quickly see that these same cornerstone principles provide an exceptional starting point for planning a resilient data center environment, especially with current generation hybrid architectures. Historically, the IT community has looked at data center reliability through the lens of preventive defense…in the data center, often measured through redundancy parameters like 2N and 2N+1.

However, as the definition of the data center expands beyond the scope of internally managed hardware/software into the integration of modular platforms and cloud services, simple redundancy calculations become only one factor in defining resilience. In this world, Dr. Frincke’s four-part framework provides a valuable starting point for defining a more comprehensive approach to resilience in the modern data center. Let’s look at how these principles can be applied.

Predictive Defense: We believe the starting point for any resilient architecture is comprehensive planning that incorporates modeling (including spatial, CFD, and network traffic) and dynamic utilization simulations for both current and future growth projections to help visualize operations before initiating a project. Current generation software supports extremely rich exploration of data center dynamics to minimize future risks and operational limitations.

Adaptive Systems: Recently, Netflix has earned recognition for its novel use of resilience tools for testing the company’s ability to survive failures and operating abnormalities. The company’s Simian Army consists of services (“monkeys”) that unleash failures on production systems to test how adaptive the environment actually is. These tools, including Chaos Monkey, Janitor Monkey and Conformity Monkey, demonstrate the importance of adaptivity in a world where no team can accurately predict all possible occurrences, and where the unanticipated consequences of a failure anywhere in a complex network of hardware fabrics can lead to cascading failures. The data center community needs to challenge itself to find similar means for testing adaptivity in modern hybrid architectures if it is to rise to the challenge of ultrareliability at current scale.
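
Netflix’s actual tooling is far more sophisticated, but the core idea of adaptivity testing can be sketched in a few lines: pick a non-critical instance at random, remove it, and verify that the service still meets its objectives. Everything below, the instance names and the terminate and health-check functions, is hypothetical placeholder logic rather than the Simian Army’s real interface.

    import random

    instances = ["web-01", "web-02", "app-01", "app-02", "cache-01"]   # hypothetical inventory

    def terminate(instance):
        """Placeholder for the call that would actually stop the instance."""
        print(f"terminating {instance} to inject a failure")

    def service_healthy():
        """Placeholder health check: in practice, query load balancers or run synthetic transactions."""
        return True

    # One round of failure injection: kill a random instance, then confirm the service adapted
    victim = random.choice(instances)
    terminate(victim)
    assert service_healthy(), f"service degraded after losing {victim}; resilience gap found"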

Trustworthy Engineering: Another hallmark of cybersecurity is the understanding that the greatest threats often lie inside the enterprise with disgruntled employees, or simply as a result of human error. Similarly, in modern data center design, tracking a careful path that iteratively builds out the environment while checking off compliance benchmarks and ‘trustworthiness’ at each decision point becomes a critical step in avoiding the creation of a hybrid house of cards.

Analytics: With data center infrastructure management (DCIM) tools becoming more sophisticated, and with advancing integration between facilities measurement and IT systems measurement platforms, the availability of robust data for informing ongoing decision-making in the data center is now possible. No longer is resilient data center architecture just about the building and infrastructure. So, operating by ‘feel’ or ‘experience’ is inadequate. Big data now really must be part of the data center management protocol.

By leveraging these four cornerstone concepts, we believe IT management can begin to frame a more complete, and by extension, robust plan for resiliency when developing data center architectures that bridge the wide array of deployment options in use today. This introduction provides a starting point for ways to use the framework, but we believe that further exploration by data center teams from various industries will create a richer pool of data and ideas that can advance the process for all teams.

How BRUNS-PAK Can Help

Over the past 35 years, BRUNS-PAK has evolved its proprietary design/build methodologies to integrate the evolving array of strategies and tools available to data center planning teams, resulting in the BRUNS-PAK Hybrid Efficient Data Center Design program. Through an iterative process that acknowledges both rapidly changing IT requirements and their associated facilities infrastructure needs, this program delivers a strategic approach to addressing the evolving capacity and complex networking requirements created by the explosive growth in mobile data traffic. Through our expanded consulting services group, and well-established design/build services team, we can help you leverage concepts like this resiliency framework to construct your plans for effective data center deployment, whatever size data center you operate.

REFERENCES

Frincke, Deborah, “I4 Newsletter”, Pacific Northwest National Laboratory, Spring-Summer 2009.

Planning for the Inescapable Crush of Mobile Data Growth

Mobile Access is Driving New Demand for Smarter Networks and More Intelligent Data Center Architecture

Big data often seems to dominate headlines in IT publications. Few trends carry as dramatic a potential impact on business processes, with data-driven decision making becoming de rigueur across departments in all enterprises. But for IT, an even more important trend continues to build momentum, threatening to rewrite many of the rules for data center design and management — mobile data growth.

Global mobile data traffic grew 81% in 2013, reaching 1.5 exabytes per month by December 2013, up from 820 petabytes per month at the end of 2012. [1] Mobile data transmissions now represent over 18x the total traffic traversing the entire Internet in 2000. While mobile devices continue to grow in terms of processing power, their true potential for both business and consumer applications comes from their ability to connect users anywhere and anytime to data located in data centers around the globe. This developing ‘anywhere, anytime’ approach introduces a whole new set of rules for data center management, including increasing demand to move information among data centers integrated in hybrid architectures in order to provide optimized user experience through localized points-of-presence.

Of course, many organizations have already begun to address the demands of mobile access and its attendant ‘anywhere, anytime’ use cases. However, the usage patterns we see today only minimally represent what many experts foresee in the future.

In the latest update of the Cisco® Visual Networking Index (VNI) Global Mobile Data Traffic Forecast, networking giant Cisco predicts [2]:

  • Mobile data traffic will grow at a compound annual growth rate (CAGR) of 61 percent from 2013 to 2018, reaching 15.9 exabytes per month by 2018, up from 1.5 exabytes per month currently
  • By 2018, the number of mobile-connected devices will exceed 10 billion, exceeding the forecasted global population and averaging 1.4 devices per person
  • By 2018, network traffic generated by tablet devices will reach 2.9 exabytes monthly, nearly double the total mobile network traffic today
  • The penetration of smart mobile devices will reach 54%, up from 21% at the end of 2013, but only 15% of the connections will be at 4G speeds. Of note, a typical 4G connection generates nearly 6x more traffic than a non-4G connection, meaning that mobile growth could spike even faster if 4G penetration accelerates
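
The headline figures in the forecast hang together, as a quick compound-growth check shows. The sketch below applies the stated 61 percent CAGR to the 1.5 exabyte-per-month starting point; the small difference from the quoted 15.9 exabytes is rounding in the published numbers.

    def project(start, annual_growth, years):
        """Compound a starting value forward at a constant annual growth rate."""
        return start * (1 + annual_growth) ** years

    # Cisco VNI figures: 1.5 EB/month in 2013 growing at 61% per year through 2018
    print(round(project(1.5, 0.61, 5), 1))   # about 16.2 EB/month, in line with the quoted 15.9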

These forecasts indicate a dramatic uptick in demand for mobile connectivity, a demand easily explained as users get more comfortable connecting to data from mobile devices and experience the advantages that real-time connection to data, in all its forms, can provide. Of particular note for many organizations is the explosive growth in demand for video content, which is expected to continue to accelerate across applications for the foreseeable future.

For CIOs, the mobile data explosion is creating rapid escalation of demand for flexible, scalable, high-performance data center capacity that can swiftly be commissioned to meet both organic demand growth in existing application portfolios as well as sudden increase of demand resulting from new application deployments to meet new customer or internal business requirements. For example, retailers are increasingly using in-store video monitoring tools to predict traffic at registers to better manage service levels. At the same time, retail deployment of more sophisticated mobile shopping applications is growing exponentially.

As applications for everything from transactional commerce to customer service, logistics and finance move increasingly online, many traditional approaches to data center architecture built on proprietary, on-premise facilities are being challenged. Few organizations will avoid the need to construct hybrid architectures that integrate cloud, colo, on-premise, POD and modular designs into a multi-faceted environment that can easily address emerging capacity, reliability and cost-efficiency demands.

How BRUNS-PAK Can Help

Over the past thirty-five years, BRUNS-PAK has evolved its proprietary design/build methodologies to integrate the evolving array of strategies and tools available to data center planning teams, resulting in the BRUNS-PAK Hybrid Efficient Data Center Design program. Through an iterative process that acknowledges both rapidly changing IT requirements and their associated facilities infrastructure needs, this program delivers a strategic approach to addressing the evolving capacity and complex networking requirements created by the explosive growth in mobile data traffic. Through our expanded consulting services group, and well-established design/build services team, BRUNS-PAK is uniquely positioned to assist customers seeking to create a long-term strategic direction for their data center that ensures an infrastructure able to support real-world demands in an increasingly mobile age.

REFERENCES

1. Cisco, “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2013–2018” 02/2014.

2. Ibid.

Five Keys to Improving Data Center Resilience In the Age of Customer-Centricity

When Customers Are Involved, Being “Ready for Anything” Takes on New Urgency

For many CIOs trained in traditional IT service metrics, the new world of IT can seem daunting. No longer is your customer constituency composed primarily of internal users. Instead, your most important customers are now your company’s customers, and the implications of disappointing this audience can be swift and painful. Look no further than retail giant Target for the cautionary lessons of IT in the new age of customer-centricity.

Regardless of what industry you operate in, the delivery of “always on,” secure, customer-facing processes and services fundamentally changes the demands on the IT department. Unlike disappointing internal users because of an application outage, failure of externally facing applications can impact business forecasts, stock price and brand value. For Target, a technology innovator with a long history of delivering customer value through technology initiatives, consumer confidence eroded and both short- and long-term business performance was negatively impacted when hackers infiltrated its credit card processing systems.

Beyond dramatic examples like Target, even short application outages or minor security breaches can have measurable cost implications. According to a Ponemon Institute study, the average per-minute cost of data center downtime has risen by 41 percent since 2010, to approximately $7,000 per minute.[i] At that rate, even a 30-minute outage approaches $210,000 in losses. This is forcing IT to implement entirely new approaches to systems and services that are not just ultrareliable, scalable and dynamic, but also resilient under failure and attack.

Here are five key management strategies for making data center resilience a part of your organizational DNA and enhancing your defense against the negative impacts of unexpected IT incidents:

1.   Reframe the Management Conversation

For years, IT was viewed through an infrastructure lens that focused on empowering internal processes, not external business value. Today, IT management, executive management and directors must all acknowledge IT’s changing role in the organization and focus greater energy not only on addressing immediate-term demands, but also on longer-term business growth and mission-critical risk mitigation. Target was an early explorer of embedded-chip credit card technology, but was unable to muster the necessary internal and external resources to enable adoption. Decisions on critical IT infrastructure in modern markets must be fully integrated into a broad business strategy context to ensure effectiveness.


2.   Recognize That Resilience is a Journey, Not a Destination

There are many dynamic forces that define IT architecture resilience in modern business: growth changes demand, security threats are ever-evolving, and risk profiles shift with business valuation. That means evaluation of IT resilience must be equally dynamic. From physical data center infrastructure to approaches to DevOps and disaster recovery, planning for and implementing resilient architectures is a continuous process, not a single build-and-deploy project.

3.   Plan for Elasticity

In the week leading up to Christmas 2013, UPS planned to deliver 132 million packages. Unfortunately, demand significantly outstripped that forecast, leading to late deliveries for many high-profile e-tailers, including Amazon, and major dissatisfaction with UPS. Such is the world of customer-facing business processes, where exceeding your wildest dreams of business success can lead to nightmarish results. For IT, massive capacity bursts must be built into the plan if resilience objectives are to be consistently met.

4.   The Best Defense is One Grounded in Reality

The reality is that there is no perfect moat to protect IT systems from all cyber threats, natural and man-made disasters, and unforeseen internal incidents. Today, IT systems must be engineered to react rapidly to any incident in order to minimize its impact and shorten time-to-recovery. From foundational infrastructure to self-healing applications and interfaces, resilient environments are the product of planning and architecting for the foreseen…and the unforeseen.

5.   The Data Center is Still the Core of Your IT Infrastructure, Wherever and Whatever Your Data Center Is

Cloud. PODs. Colo. On-Premise. The definition of a ‘data center’ continues to evolve, and for most organizations a modern data center represents a hybrid architecture that integrates multiple physical architectures and networking strategies. Hybrid architectures can help organizations support services that are massively scalable, ultrareliable, resilient to point failures across hardware and software, risk-managed at scale, and still cost-efficient and environmentally responsible. Building out an enterprise-class hybrid data center architecture means moving away from old debates about who controls assets and toward discussions about how best to broker the continuously evolving portfolio of services needed to satisfy demanding internal and external audiences.

How BRUNS-PAK Can Help

Over the past two years, BRUNS-PAK has evolved its proprietary design/build methodologies to integrate the evolving array of strategies and tools available to data center planning teams, resulting in the BRUNS-PAK Hybrid Efficient Data Center Design program. Through an iterative process that acknowledges both rapidly changing IT requirements and their associated facilities infrastructure needs, this program delivers a strategic approach to addressing the key management levers influencing resilient data center design. Through our expanded consulting services group, and well-established design/build services team, BRUNS-PAK is uniquely positioned to assist customers seeking to create a long-term strategic direction for their data center that ensures an infrastructure able to support real-world demands in the age of customer-centricity.


(i) “Data Growth Remains IT’s Biggest Challenge, Gartner Says.” Computerworld Online, November 2, 2010. http://www.computerworld.com/s/article/9194283/Data_growth_remains_IT_s_biggest_challenge_Gartner_says


[1] Emerson Electric/Ponemon Institute, “2013 Study on Data Center Outages,” September 2013. http://www.emersonnetworkpower.com/documents/en-us/brands/liebert/documents/white%20papers/2013_emerson_data_center_outages_sl-24679.pdf

Six Factors Influencing Data Center Efficiency Design

In rapidly evolving markets, bigger is not always better. Is your data center designed for efficiency?

The aggressive efforts of DISA, the Defense Information Systems Agency, to rationalize and consolidate mission-critical data center facilities have put a spotlight on the challenges of planning a data center infrastructure that is reliable, resilient, responsive, secure and efficient at the same time, from both an energy-utilization and a financial perspective. It is easy to criticize DISA’s efforts as emblematic of government inefficiency, but that would be an unfair assessment, as there are plenty of equally egregious commercial examples of overbuilding (and underbuilding) in the data center space. Especially in the current hybrid architecture marketplace, designing a data center facility to effectively and efficiently meet both current and anticipated needs takes careful planning and expert engineering.

At BRUNS-PAK, we believe that part of the reason so many projects end up misaligned with the demand profile is that both the customer and vendor design/build teams fail to account for the six critical factors that influence efficiency when working at the design phase of the project:

  • Reliability
  • Redundancy
  • Fault Tolerance
  • Maintainability
  • Right Sizing
  • Expandability

How you balance these individual priorities can make all the difference between a cost-effective design and one that eats away at both CAPEX and OPEX budgets with equal ferocity. Here is a quick review of each critical consideration.

Reliability

The data center design community has increasingly acknowledged that workloads, and their attendant service level and security requirements, are potentially the most critical driver in defining data center demands. Workloads dictate the specifics of the IT architecture that the data center must support, and with that, the applicability of cloud/colo services, pod designs, and other design/build options. A clear picture of the workloads the site must support, established before the project begins, makes it possible to define reliability requirements accurately.

Redundancy

Redundancy exists to increase reliability: the ability to maintain operation despite the loss of one or more critical resources in the data center. Recognizing that all systems eventually fail, how you balance component-level versus system-wide redundancy (N+1 vs. 2N, 2N+1, etc.) will significantly reshape the cost/benefit curve. Here, it is important to design for logical and reasonable incident forecasts while balancing mean-time-to-failure and mean-time-to-recover considerations.
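To make the redundancy trade-off concrete, the short Python sketch below estimates steady-state availability for a single component and for illustrative N+1 and 2N configurations, using the standard MTTF / (MTTF + MTTR) approximation. The MTTF and MTTR values are hypothetical and the model assumes independent failures; it is a simplified illustration of the arithmetic, not a BRUNS-PAK design tool.

```python
from math import comb

def component_availability(mttf_hours, mttr_hours):
    """Steady-state availability of a single component: MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def k_of_n_availability(single_availability, units, required):
    """Probability that at least `required` of `units` independent units are operating."""
    a = single_availability
    return sum(
        comb(units, k) * a**k * (1 - a)**(units - k)
        for k in range(required, units + 1)
    )

if __name__ == "__main__":
    # Hypothetical component: one failure per year (8,760 h MTTF), 8 h to repair
    a = component_availability(mttf_hours=8760, mttr_hours=8)
    n_plus_1 = k_of_n_availability(a, units=3, required=2)  # N+1 where two units carry the load
    two_n = k_of_n_availability(a, units=2, required=1)     # 2N: fully redundant pair
    print(f"Single unit: {a:.5f}  N+1: {n_plus_1:.7f}  2N: {two_n:.7f}")
```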

Fault Tolerance

While major system failures constitute the worst-case scenarios that ultrareliable data centers must plan for, point failures and faults are far more common. To achieve fault tolerance, a data center must be able to withstand the failure of any single component that could otherwise curtail data processing operations. Typically, design for fault tolerance emphasizes large electrical/mechanical components like HVAC or power distribution, as well as IT hardware/software assets and network or telecommunications services, all of which will experience periodic failures. Design for fault tolerance should involve more than simple redundancy. Rather, effective design must balance failover capacities, mean-time-to-repair, repair-versus-replace strategies, and seasonal workload variances to ensure that the data center can support service level demands without requiring the installation of excess offline capacity.

Maintainability

A common mistake when designing a data center facility is failing to account for maintainability. Excess complexity can rapidly add to costs since even redundant systems must be exercised and subjected to preventive maintenance. In fact, planning a consistent preventive maintenance schedule can be one of the most effective contributors to long-term efficiency by reducing the need for overcapacity on many key infrastructure components.

Right-Sizing/Expandability

When properly accounted for, these final two factors work in tandem to help design/build teams create an effective plan for near-term and long-term requirements. Modern design strategies include the use of techniques like modular/pod design or cloud integration that engineer in long-term capacity growth or peak demand response. This means that the team can better ensure that near-term buildout does not deliver excess capacity simply as a buffer against future demand. Engineering teams can readily design modern infrastructure to smoothly scale to meet even the most aggressive growth forecasts.
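As a simple illustration of how right-sizing and expandability interact, the sketch below compares a single day-one buildout against phased modular (pod-style) expansion for a hypothetical five-year demand forecast. The forecast, module size, and headroom target are invented for illustration only; the point is that phased expansion keeps installed capacity closer to actual demand.

```python
# Hypothetical comparison of day-one buildout vs. modular expansion.
# Demand forecast (kW of IT load per year) and module size are illustrative only.

def stranded_capacity(installed_by_year, demand_by_year):
    """Total unused (stranded) capacity, in kW-years, over the planning horizon."""
    return sum(max(cap - dem, 0) for cap, dem in zip(installed_by_year, demand_by_year))

demand = [400, 550, 700, 900, 1100]      # forecast IT load, kW, years 1-5

# Option A: build the full 1,200 kW capacity on day one
monolithic = [1200] * len(demand)

# Option B: start at 600 kW and add 300 kW pods as demand approaches capacity
modular, capacity = [], 600
for load in demand:
    while capacity < load * 1.2:          # keep roughly 20% headroom
        capacity += 300
    modular.append(capacity)

print("Stranded kW-years, monolithic:", stranded_capacity(monolithic, demand))
print("Stranded kW-years, modular:   ", stranded_capacity(modular, demand))
```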

Treated as a portfolio, these six factors offer the data center design team diverse levers to balance service delivery against cost while ensuring that the final infrastructure can meet demand without breaking the bank, either through initial capital investment, or long-term operating cost.

How BRUNS-PAK Can Help

Over the past two years, BRUNS-PAK has evolved its proprietary design/build approach to incorporate the evolving array of strategies and tools available to data center planning teams, resulting in the BRUNS-PAK Hybrid Efficient Data Center Design program. Through an iterative process that acknowledges both an organization’s IT requirements and the associated facilities infrastructure needs, this program delivers a strategic approach to addressing the six critical factors influencing efficient data center design while retaining the performance, resilience and reliability needed in enterprise computing environments. Through our expanded consulting services group, and well-established design/build services team, BRUNS-PAK is uniquely positioned to assist customers seeking to create a long-term strategic direction for their data center that satisfies all stakeholders, including end-users, IT and finance.

OpEx Solutions for Financing Data Center Renovation/Construction

Funding a data center build, renovation or expansion does not have to mean draining capital resources.

Big Data. Mobile enablement. The knowledge economy. The reasons are myriad, but the impact is singular…data center demand continues to grow in the enterprise, regardless of industry or corporate maturity. Today’s CIO must figure out how to satisfy an increasingly demanding audience of users seeking access to data across a diversifying array of applications, and do so with continually stretched IT budgets.

In fact, many legacy data center assets are being stressed by power density and distribution constraints, rising cooling costs and complex networking and peak load demand curves. However, retrofitting, upgrading or consolidating multiple legacy, lower performing assets into a newly designed and constructed facility, or constructing large new data center facilities to support enterprise growth, can require significant capital.

At BRUNS-PAK, our proprietary Synthesis2 methodology integrates a structured approach to data center planning and construction that includes rigorous estimation and structured adherence to budget guidelines throughout the project. This discipline has helped us define breakthrough approaches to data center financing driven by operating cash flow instead of capital reserves. This can dramatically expand an organization’s ability to support required IT expansion in the face of rising end user demand.

The basic concept behind OpEx financing is the use of long-established structured finance techniques that leverage the credit rating of investment-grade companies (BBB or better) to finance new assets or improvements on a long-term basis. In a retrofit or upgrade scenario where energy savings are anticipated as a result of the project, the financing that provides the capital improvements can be secured by the cash flow generated by reduced energy usage. For a new-build scenario, the financing to construct and use the facility can be secured by a well-structured, bondable, long-term lease.

To illustrate how this can work, two scenarios are outlined below, covering a retrofit and a new build:

Scenario 1: Energy Saving Retrofit/Upgrade Financing

Financing an energy-efficient retrofit or upgrade to a data center requires a few key considerations:

  • The amount of capital required to complete the retrofit or upgrade
  • The energy savings that will be generated
  • The term of those energy savings, which often coincides with the obsolescence life of the assets being deployed

Baseline anticipated energy savings are first established through an energy audit that determines as-is energy costs and sets the target cost profile. The difference between current and future costs is presumed to cover the debt service on the construction. If the actual annual energy savings exceed the annual debt service on the underlying financing, the owner or user keeps the positive difference, or spread, between those streams. For example, if an organization invests in a $50 million upgrade that results in $12.5 million in energy savings per year, here is a basic financing option. Presume 84-month (7-year) financing at a 7% interest rate, which in this example results in an annual debt service cost of $7.5 million. That $7.5 million is paid from the energy savings, and the organization retains the remaining $5 million in savings. After the financing is repaid, the full energy savings flow to the organization’s bottom line.

An important note in this example…the organization has made no cash outlay for the construction.
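For readers who want to trace the cash flows, here is a minimal Python sketch of Scenario 1. The annual debt service is supplied as an input rather than derived, because the actual figure depends on the financing structure negotiated (rate, amortization profile, residual value, fees); the $12.5 million savings and $7.5 million debt service simply mirror the example above.

```python
def retrofit_cash_flow(annual_energy_savings, annual_debt_service, term_years):
    """Yield (year, debt_service, retained_savings) for each year of the financing term."""
    for year in range(1, term_years + 1):
        yield year, annual_debt_service, annual_energy_savings - annual_debt_service

if __name__ == "__main__":
    # $12.5M annual energy savings, $7.5M annual debt service, 7-year (84-month) term
    for year, service, retained in retrofit_cash_flow(12.5e6, 7.5e6, 7):
        print(f"Year {year}: debt service ${service/1e6:.1f}M, retained savings ${retained/1e6:.1f}M")
```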

Scenario 2: New Build Financing

For new facility financing, we will take into account a different set of considerations, including:

  • The amount of capital and the construction schedule for the facility
  • The credit rating of the user
  • The desired term of occupancy, which is used to establish the lease term.

In this scenario, the user executes what is known as a bondable net lease, with sufficient duration to completely repay the financing provided. Once again, the user is not required to lay out capital for the construction. Instead, they pay for the facility through lease payments that factor in the term, total construction cost, construction-period interest, and the assumed interest rate applied to the project.

For example, assume an investment-grade-rated company wants to consolidate three existing legacy data centers into a new, state-of-the-art facility that will cost approximately $50 million, but does not want to tap its capital budget. It is, however, prepared to occupy and pay for annual use of the facility over a 15-year period. If we apply a 6% interest rate to this project and assume the hypothetical loan is repaid ratably over the 15-year lease, the company would pay approximately $5.5 million annually over the lease term, with an option to buy the facility at term end.
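The lease payment in Scenario 2 can be approximated with a standard level-payment annuity formula, sketched below. This simplified version ignores construction-period interest and transaction fees, which is why it lands slightly below the approximately $5.5 million cited above.

```python
def level_annual_payment(principal, annual_rate, years):
    """Level annuity payment that repays `principal` at `annual_rate` over `years`."""
    if annual_rate == 0:
        return principal / years
    factor = (1 + annual_rate) ** years
    return principal * annual_rate * factor / (factor - 1)

if __name__ == "__main__":
    payment = level_annual_payment(principal=50e6, annual_rate=0.06, years=15)
    print(f"Level annual lease payment: ${payment/1e6:.2f}M over 15 years")  # roughly $5.15M
```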

The BRUNS-PAK Advantage

Using structured finance techniques to fund long-term assets is not limited to the two scenarios discussed. In fact, for organizations with strong credit ratings, there are practically endless ways to structure a capital-efficient transaction for data center facilities. As noted earlier, BRUNS-PAK’s track record of accurate facility construction cost estimation and long-standing history of on-budget project completion have become powerful assets when discussing OpEx solutions.

With over 5,500 customers across industry, government and academic sectors, BRUNS-PAK’s proven process has helped us line up multiple sources of structured financing that we can introduce into project plans, ensuring that you can plan and implement a program that effectively supports your current and future IT infrastructure demands.