Data Center Hybrid Solution©. Element 15, GOVERNMENT/CORPORATE/UNIVERSITY/NON-PROFIT

We are just about there! We are looking at element 15 in our 16-week breakdown of the 16 Elements to Consider when creating a Data Center Hybrid Solution.

So this week, let's look at Element #15, GOVERNMENT/CORPORATE/UNIVERSITY/NON-PROFIT

15) Government/Corporate/University/Non-Profit

A. The new “data center energy efficiency” (March 2014) and Senator Menendez Initiative

B. The cost and liability of data infringement

C. Governments worldwide are experiencing the impacts of data processing issues

D. Current U.S. regulations exist for banking, healthcare, finance, etc.

E. The board of directors’ and trustees’ (university) responsibility and liability

F. Insurance

Here is the full list of the 16 Elements we have covered and/or will be reviewing one by one over the coming weeks:

Defining the Mix of “Elements” (16) Considered in the Data Center Solution

1. FACILITY INFRASTRUCTURE

2. ENERGY EFFICIENCY

3. COMPUTER HARDWARE

4. CLOUD Internal/External

5. DISASTER RECOVERY

6. CO-LOCATION

7. MIGRATION/RELOCATION

8. COMPUTER SOFTWARE

9. MODULARITY/SCALABILITY/RELIABILITY

10. COMMUNICATIONS/NETWORK

11. SERVICE LEVEL AGREEMENTS

12. PERSONNEL

13. CAPEX vs. LEASE/OPEX

14. CONTAINERS

15. GOVERNMENT/CORPORATE/UNIVERSITY/NON-PROFIT

16. LEGAL REPERCUSSIONS

Copyright © 2014 BRUNS-PAK. All Rights Reserved.

Data Center Hybrid Solution©. Element 16, Legal Repercussions

Well, here it is! For 16 weeks we have covered the 16 elements to consider in a Data Center Hybrid Solution. One by one, we broke out each element from our Principal Engineer Mark Evanko's vision and his detailed presentation on the topic. Mark recently presented his latest and greatest at AFCOM in April 2015 in Las Vegas.
What we ultimately hope is that our customers get valuable solutions they can apply to their facility's data center. Mark is providing that content and information at a rapid rate!
So here is the 16th and final key element in the Data Center Hybrid Solution. Please inquire with us to find your own Data Center Hybrid Solution!
16. Legal Repercussions
A. The most dominant theme of 2014/2015/2016 data center optimization impacting in-house vs. outsource
B. Government fines
C. Stockholder lawsuits
D. Individual lawsuits
E. Fiduciary responsibility
F. “Non-disclosed” trends

Here is the full list of the 16 Elements we have covered, one by one, over the past weeks:

Defining the Mix of “Elements” (16) Considered in the Data Center Solution

1. FACILITY INFRASTRUCTURE

2. ENERGY EFFICIENCY

3. COMPUTER HARDWARE

4. CLOUD Internal/External

5. DISASTER RECOVERY

6. CO-LOCATION

7. MIGRATION/RELOCATION

8. COMPUTER SOFTWARE

9. MODULARITY/SCALABILITY/RELIABILITY

10. COMMUNICATIONS/NETWORK

11. SERVICE LEVEL AGREEMENTS

12. PERSONNEL

13. CAPEX vs. LEASE/OPEX

14. CONTAINERS

15. GOVERNMENT/CORPORATE/UNIVERSITY/NON-PROFIT

16. LEGAL REPERCUSSIONS

Copyright © 2014 BRUNS-PAK. All Rights Reserved.

Best Practices for Protecting Against Data Breaches

The list of substantial and incredibly costly data breaches grows every day. Target, Home Depot, Dairy Queen, and Ahold Supermarkets are some of the more well-known examples, but there are literally hundreds of other breaches that get far less attention. It all adds up to a huge problem for IT and data center professionals.

The impact of a breach is so substantial that the real question becomes, “How can I build the best plan to protect my valuable data?”

One of the best answers is to start with independent experts who can help you create a strategy for preventing breaches that is developed specifically for your organization and its needs. Much like snowflakes, the plan for preventing breaches must be unique to the vulnerabilities, compliance demands, and IT infrastructure of each firm. At BRUNS-PAK, we bring decades of experience and the cross-disciplinary knowledge necessary to build an effective plan to help prevent breaches.

In addition, BRUNS-PAK is not a vendor of security products or services trying to sell its own offerings, nor will we “manage” the strategic plan results to steer you toward a particular solution. We are vendor-neutral and looking out for your best interests. In fact, we’re so well known as an expert independent party that Gartner, Forrester, and IDC have all met with BRUNS-PAK to discuss this very important issue. This is a testament to the experience and broad knowledge of data center/IT issues that BRUNS-PAK brings to the table.

One of the most important aspects of our security evaluation process is to consider how co-location and cloud service providers can impact your organization. This information is integrated with the evaluation of your traditional on-premise data center. BRUNS-PAK’s holistic approach to security considers all of the issues that might impact your ability to prevent a breach; this is critical to a successful plan.

Getting started is easy. BRUNS-PAK experts will meet with you to develop a consultative approach based on best practices that will drive your ability to prevent breaches. Utilizing BRUNS-PAK’s leading-edge, 16-step approach, which provides a comprehensive framework for strategic data center issues, you’ll benefit from the best thinking in the industry.

Protecting your organization and infrastructure from data breaches is critical. Being protected starts with having the right plan. Too often, vendors are only attempting to justify why you should buy their product. A better approach is to engage BRUNS-PAK, an independent authority that is on your side, to build an effective plan against data breaches. Click here for our white paper on the topic, or contact Jackie Porr at 732-248-4455 / 888-704-1400 ext. 111, or jporr@bruns-pak.com.

Next In Queue:

We’ll look at how to effectively eliminate existing and potential hot spots in your data center with BRUNS-PAK expertise and Computational Fluid Dynamics (CFD) analysis.

Hybrid Clouds, The Practical Solution

Weekly greetings! With Mark's latest presentation, Data Center Total Cost of Ownership vs. Risk, wrapping at AFCOM a couple of weeks ago, we want to keep our momentum of info sharing up! Mark's thoughts are highlighted in a thought-provoking ebook written by Dan Kusnetzky. Keep an eye out for the webinar to supplement the ebook!

Here is a snippet of info that will be covered in the ebook and upcoming webinar:

Hybrid Computing?
• What is a hybrid solution?
• Consolidation, expansion, or outsourcing?
• The best solution is based upon an IT strategy
• What skills are needed?
• Who are you going to call?

Data Center Trends, Containers

Containers (Physical)
1) PLUS
A. Pre-packaged/pre-wired
B. Short-term delivery
C. Trailer/modular building blocks
D. Minimized capital expense
E. Modular
2) MINUS
A. Uniform building code acceptance
B. Planning board/zoning board review/approval
C. “Tractor trailer” concept
D. Data center long-term workflow consideration
E. ADA accessibility

Data Center Trends, Cloud Computing


Here is the last of the Trends highlighted in Mark Evanko’s latest and greatest AFCOM presentation. We will keep posting from this presentation, as it is a wealth of knowledge if you missed it in person! Take a look at Mark’s notes on Cloud Computing:

Cloud Computing

1) PLUS

A. Fast delivery of “IT” services including hardware, software, and network

B. Decreased capital cost of computer hardware, computer software, network, and facilities

C. “Instantaneous” increase in bandwidth planning

D. Decreased electrical utility cost and facility maintenance (assume off site)

E. Access to latest “refresh” of technology

F. Emerging technology

G. Internal vs. external clouds

H. Many vendors

2) MINUS

A. Network security of information

B. Control of operation

C. Legal contract terms and conditions

D. Fees/costs associated with cloud

E. Definition of “total cost of ownership” of the cloud

F. Network/data center liability $$

G. Financial strength of “partner”

H. Candidacy of common platform application

Beyond PUE: Three Factors to Consider in Planning Data Center Strategy

If you are a CIO evaluating data center plans, you already know that the rules have changed and at the forefront of the rule-breaking changes is the cloud. While the driving force for cloud migration is often perceived as capital cost reduction, the cloud is proving to be much more. In a 2012 KPMG survey1, 59% of cloud providers responded that the cloud was driving innovation in their customers’ products and services and 54% felt that cloud supported innovation in processes. Those stats are borne out in survey after survey across the industry.

But, with all the focus on the cloud, traditional data centers in all their emerging physical forms continue to serve as the backbone technology infrastructure in many organizations. Companies like Facebook and Google, with their massive footprints, are pioneering new ways to think not only about physical infrastructure and server architecture, but also about the strategies used to assess effective performance under real-world workloads.
Here are three critical, and often overlooked, factors that leading companies consider in evaluating data center plans and performance:

  1. Cost to Compute: Typically, organizations focus on point metrics like power usage effectiveness (PUE) as a measure of operating efficiency. But, at the leading edge of data center utilization, companies like eBay are more focused on tracking code, servers and composite infrastructure costs as an aggregate and measuring performance according to workload to calculate the true cost to compute. This is utility-thinking taken into the data center…how many watts will it take to complete this transaction and how much do I pay per watt? (A toy calculation follows this list.)
  2. Security Process: Security is a top-of-mind concern for any organization with business critical networks, sensitive data or publicly accessible user interfaces. Leading edge thinking in security acknowledges that process is more critical than individual tactics since breaches are inevitable. You cannot build a big enough moat to keep out intruders forever; the real question is how quickly you can detect and isolate the inevitable breach. Events like the recent NSA scandal illustrate how attack vectors like insider threats combine with tactics like advanced persistent threats to create complex security risks. Simplifying your infrastructure and driving certain standardized processes is critical to managing security in this environment. For many companies, reestablishing internal infrastructure as the hub for information flow across a managed set of external or cloud-based computing resources is becoming a key to ensuring security in an insecure age.
  3. Orchestration Optimization: No two organizations are alike. Data differs, processes differ, personnel skills differ. Thus, it stands to reason that no two data center strategies will be truly alike. This means that infrastructure that is truly responsive to all elements of infrastructure service, from server and desktop virtualization to mobile device integration, cold storage practices, and authentication and identity management, must come together in a coherent manner. For many organizations, the core data center is the nexus for integration of this cross-functional orchestration process.
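
To make the contrast in item 1 concrete, here is a toy sketch of the two metrics. Every figure, and the cost categories rolled into the aggregate, are illustrative assumptions, not eBay's actual methodology.

```python
# Toy contrast between PUE (a point efficiency metric) and an aggregate
# "cost to compute" measure. All numbers are illustrative assumptions.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_equipment_kwh

def cost_to_compute(monthly_cost: float, transactions: int) -> float:
    """Aggregate monthly infrastructure cost per completed transaction."""
    return monthly_cost / transactions

print(f"PUE: {pue(1_500_000, 1_000_000):.2f}")  # 1.50

# Roll power, amortized servers, and software/ops into one figure,
# then divide by the workload actually completed that month.
monthly_cost = 850_000.0      # assumed aggregate monthly cost ($)
transactions = 120_000_000    # assumed completed transactions/month
unit = cost_to_compute(monthly_cost, transactions)
print(f"Cost to compute: ${unit * 1000:.2f} per 1,000 transactions")  # ~$7.08
```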

Being responsive to these types of considerations takes completely new thinking about data center facilities that goes “beyond the box” and integrates all the elements of infrastructure. Acknowledging scalability, burstable resources, resilience and security as fundamental needs, it is easy to see how new methods of deployment like modular/pod facilities have gained acceptance. At the same time, new strategies for resource sharing like private co-lo facilities are also emerging as ways to help organizations with common needs reach the scale required to achieve Google-scale economies without excessive capital investment.

The BRUNS-PAK Data Center Methodology

Over the past 44 years, BRUNS-PAK has quietly assembled one of the most diverse, skilled teams of professionals focused on the strategies and implementation tactics required to craft durable data center strategies in this new era. From strategic planning to design/build support, construction and commissioning, BRUNS-PAK is helping clients craft solutions that balance the myriad decisions underpinning effective data center strategy, including:

  • Renovation vs. expansion options (CAPEX v. OPEX)
  • Build and own
  • Build and leaseback
  • Migration/relocation options
  • Co-Location
  • Cloud integration / Private cloud buildout
  • Container/Pod deployment
  • Network optimization
  • Business impact analysis
  • Hybrid computing architecture

With over 6,000 customers in all industry, government and academic sectors, BRUNS-PAK has a proven process for designing, constructing, commissioning and managing data center facilities, including LEED-certified, high efficiency facilities in use by some of the world’s leading companies and institutions.

1. KPMG International, “Breaking Through the Cloud Adoption Barriers.” KPMG Cloud Providers Survey, Feb 2013.

DCIM 2.0: The Emergence of Data Center Information Management Systems

Over the past 5 years, data center infrastructure management (DCIM) has become an acknowledged, if somewhat inconsistently implemented, approach to control and oversight of IT facilities. DCIM offers a centralized approach to the monitoring and management of the critical systems in a data center.

Currently, DCIM implementations primarily focus on physical and asset-level components of the data center facility, such as:

  • For facilities monitoring only
    • Building management systems (BMS)
    • Utility sources and dual power source systems
    • Generators
    • UPS systems
    • Power distribution units (PDUs)
    • Multi-source mechanical systems (chilled water, direct exchange, heat wheel)
    • Fire detection and suppression
    • Temperature
  • For system monitoring and management
    • Valve control
    • Power source control
    • Variable frequency drive (VFD) response to temperature changes
  • For security integration
    • CCTV monitoring
    • Access control systems logging and monitoring
    • Biometric reader logging and monitoring

In these implementations, telecommunication and data networks have typically remained independent, and while there is usually a remote monitoring and management concept in place, the application focus has clearly been on the collection and presentation of systems data, not on interpreting that data to actually achieve improved uptime.

In many respects, the current state of the market reflects the business and technical drivers behind these implementations: data center consolidation, implications of increasing power and heat density in server racks, and energy efficiency and sustainability initiatives. With the rapid acceptance of virtualized environments and cloud computing, there is now increasing focus on the delivery of high-performance, ultra-reliable, efficient data center architectures.

To begin, let’s look at cloud computing, which NIST defines as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Inherent in this definition is an emphasis on automated provisioning and governance, along with a built-in focus on the core benefits that cloud is supposed to deliver: cost savings, energy savings, rapid deployment and customer empowerment.

This cloud-influenced perspective is putting traditional DCIM approaches under scrutiny. DCIM must increasingly provide automation capabilities to create a dynamic infrastructure that can rapidly adapt to workload demands and resource utilization conditions. At BRUNS-PAK, we refer to this emerging requirement as Data Center Information Management 2.0, or DCIM 2.0 for short.

DCIM 2.0 will integrate existing infrastructure management tools and systems with the telecommunication, data and networking feeds needed to create a true ‘internet of things’ for the data center. By bringing these pieces together, along with proactive visualization and predictive analytics applications, DCIM 2.0 can begin to drive systems that control the necessary infrastructure changes to maintain operations with the lowest possible energy utilization. For example, real-time computational fluid dynamics (CFD) modeling of anticipated, workload-driven temperature changes can be used to control VFD cooling fans to maintain temperature.
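
To illustrate the idea, here is a hypothetical sketch of such a control loop. The linear temperature model is a crude stand-in for a real CFD engine, and the setpoint and control gains are assumptions, not values from any DCIM product.

```python
# Hypothetical sketch of the DCIM 2.0 control loop described above: a
# predicted temperature (stand-in for real-time CFD) drives VFD fan speed.

def predict_rack_temp(workload_kw: float, inlet_temp_c: float) -> float:
    """Stand-in for workload-driven CFD prediction: crude linear estimate."""
    return inlet_temp_c + 0.8 * workload_kw  # assumed degC-per-kW coefficient

def vfd_speed_pct(predicted_c: float, setpoint_c: float = 27.0) -> float:
    """Proportional control: raise fan speed as prediction exceeds setpoint."""
    error = predicted_c - setpoint_c
    return min(100.0, max(30.0, 50.0 + 10.0 * error))  # clamp to a safe band

for workload_kw in (4.0, 8.0, 12.0):
    t = predict_rack_temp(workload_kw, inlet_temp_c=22.0)
    print(f"{workload_kw:4.1f} kW -> {t:.1f} C predicted, fan at {vfd_speed_pct(t):.0f}%")
```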

Given the increasing intelligence of both the physical and logical devices that need to be part of this environment, implementation of DCIM 2.0 is possible sooner than many IT professionals think. In fact, the largest barriers to initial implementations may be management focus and a conscious desire to avoid responsibility silos (facilities emphasis vs. IT emphasis). Current dashboard tools can unify much of the data needed to begin to bring DCIM 2.0 to life, and in so doing, help IT teams looking to combine ultra-reliability, scalability and efficiency under one data center vision.

OpEx Solutions for Financing Data Center Renovation/Construction

Funding a data center build, renovation or expansion does not have to mean draining capital resources.

Big Data. Mobile enablement. The knowledge economy. The reasons are myriad, but the impact is singular…data center demand continues to grow in the enterprise, regardless of industry or corporate maturity. Today’s CIO must figure out how to satisfy an increasingly demanding audience of users, seeking access to data across a diversifying array of applications, and do so with continually stretched IT budgets.

In fact, many legacy data center assets are being stressed by power density and distribution constraints, rising cooling costs and complex networking and peak load demand curves. However, retrofitting, upgrading or consolidating multiple legacy, lower performing assets into a newly designed and constructed facility, or constructing large new data center facilities to support enterprise growth, can require significant capital.

At BRUNS-PAK, our proprietary Synthesis2 methodology integrates a structured approach to data center planning and construction that includes rigorous estimation and structured adherence to budget guidelines throughout the project. This discipline has helped us define breakthrough approaches to data center financing driven by operating cash flow instead of capital reserves. This can dramatically expand an organization’s ability to support required IT expansion in the face of rising end user demand.

The basic concept behind OpEx financing is the use of long-established structured finance techniques that leverage the credit rating of investment-grade companies (BBB or better) to finance the new assets or improvements on a long-term basis. In a retrofit or upgrade scenario where energy savings are anticipated as a result of the project, the financing to provide the capital improvements can be secured by the cash flow generated by reduced energy usage. For a new-build scenario, the financing to construct and use the facility can be secured by a well-structured, bondable, long-term lease.

To illustrate how this can work, two scenarios are outlined below, one for a retrofit and one for a new build:

Scenario 1: Energy Saving Retrofit/Upgrade Financing

Financing an energy efficient retrofit or upgrade to a data center requires a few key considerations:

  • The amount of capital required to complete the retrofit or upgrade
  • The energy savings that will be generated
  • The term of those energy savings, which often coincides with the obsolescence life of the assets being deployed

Baseline anticipated energy savings are first established through an energy audit to determine the as-is energy costs and plan the target cost profile. The difference between current costs and future costs is presumed to apply to the debt service on the construction. If the actual annual energy savings exceed the annual debt service costs of the underlying financing, the owner or user keeps the positive difference, or spread, between those streams. For example, if an organization invests in a $50 million upgrade that results in $12.5 million in energy savings per year, here is a basic financing option. First, let’s presume 84-month (7-year) financing at a 7% interest rate. That results in an annual debt service cost of $7.5 million. That $7.5 million is paid from the energy savings, and the organization retains the remaining $5 million in savings. After the financing is repaid, the full energy savings flow to the organization’s bottom line.

An important note in this example…the organization has not outlaid any cash for the construction.
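
For readers who want to check the arithmetic, here is a minimal sketch assuming a standard level-payment (annuity) amortization. On those textbook terms the debt service works out higher than the rounded $7.5 million above, so the example presumably reflects a different loan structure; actual deal terms govern.

```python
# Sketch of the retrofit arithmetic under a level-payment (annuity)
# assumption. The article's rounded $7.5M debt service implies a
# different structure; this shows the textbook case for comparison.

def level_payment(principal: float, rate: float, years: int) -> float:
    """Annual payment that fully amortizes `principal` at `rate` over `years`."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 50_000_000   # retrofit capital
savings = 12_500_000     # projected annual energy savings
payment = level_payment(principal, 0.07, 7)   # 84-month term at 7%
print(f"Annual debt service: ${payment:,.0f}")          # ~$9.3M on these terms
print(f"Retained annual savings: ${savings - payment:,.0f}")
```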

Scenario 2: New Build Financing

For new facility financing, we will take into account a different set of considerations, including:

  • The amount of capital and the construction schedule for the facility
  • The credit rating of the user
  • The desired term that the user will occupy the facility, which is used to establish the lease term.

In this scenario, the user will execute what is known as a bondable net lease that provides sufficient duration to completely pay back the financing provided. Once again, the user is not required to outlay capital for the construction. Instead, they pay for the facility through lease payments that factor in the term, total construction cost, construction-period interest, and the assumed interest rate applied to the project.

For example, assume an investment-grade rated company wants to consolidate three existing legacy data centers into a new, state-of-the-art facility that will cost approximately $50 million, but they do not want to tap their capital budget. They are, however, prepared to occupy and pay for annual use of the facility over a 15-year period. If we were to apply a 6% interest rate to this project and assume the hypothetical loan would be repaid ratably over the 15-year lease, the company would pay approximately $5.5 million annually over the lease term, with an option to buy the facility at term end.
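
The same annuity math, applied to this example under the assumption of a level annual payment, lands close to the cited figure; the remainder reflects the construction-period interest the scenario folds in.

```python
# New-build lease arithmetic: a level annual payment amortizing $50M at 6%
# over the 15-year term. Construction-period interest is omitted here.

def level_payment(principal: float, rate: float, years: int) -> float:
    """Annual payment that fully amortizes `principal` at `rate` over `years`."""
    return principal * rate / (1 - (1 + rate) ** -years)

payment = level_payment(50_000_000, 0.06, 15)
print(f"Annual lease payment: ${payment:,.0f}")
# ~$5.1M; capitalized construction-period interest brings the total
# close to the ~$5.5M cited in the example.
```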

The BRUNS-PAK Advantage

Using structured finance techniques to finance long-term assets is not limited to the two scenarios discussed. In fact, for organizations with strong credit ratings, there are practically endless ways to structure a capital-efficient transaction for data center facilities. As noted earlier, BRUNS-PAK’s track record for accurate estimation of facility construction costs and long-standing history of on-budget project completion have become powerful assets when discussing OpEx solutions.

With over 5,500 customers in all industry, government and academic sectors, BRUNS-PAK’s proven process has helped us line up multiple sources for structured financing that we can introduce into project plans to ensure that you can plan and implement a program that effectively supports your current and future IT infrastructure demands.

Six Factors Influencing Data Center Efficiency Design

In rapidly evolving markets, bigger is not always better. Is your data center designed for efficiency?

The aggressive efforts of DISA, the Defense Information Systems Agency, to rationalize and consolidate mission-critical data center facilities has put a spotlight on the challenges of planning a data center infrastructure that is reliable, resilient, responsive, secure and efficient at the same time, from both an energy utilization and financial perspective. It is easy to criticize DISA’s efforts as emblematic of government inefficiency, but that would be an unfair assessment, as there are plenty of equally egregious commercial examples of overbuilding (and underbuilding) in the data center space. Especially in the current hybrid architecture marketplace, designing a data center facility to effectively and efficiently meet both current and anticipated needs takes careful planning and expert engineering.

At BRUNS-PAK, we believe that part of the reason so many projects end up misaligned with the demand profile is that both the customer and vendor design/build teams fail to account for the six critical factors that influence efficiency when working at the design phase of the project:

  • Reliability
  • Redundancy
  • Fault Tolerance
  • Maintainability
  • Right Sizing
  • Expandability

How you balance these individual priorities can make all the difference between a cost-effective design and one that eats away at both CAPEX and OPEX budgets with equal ferocity. Here is a quick review of each critical consideration.

Reliability

The data center design community has increasingly acknowledged that workloads, and their attendant service level and security requirements, are potentially the most critical driver in defining data center demands. Workloads dictate the specifics of the IT architecture that the data center must support, and with that, the applicability of cloud/colo services, pod designs, and other design/build options. Before initiating a data center project, having a clear picture of the workloads that the site must support will facilitate accurate definition of reliability for the project.

Redundancy

The goal of redundancy is increased reliability, which is defined as the ability to maintain operation despite the loss of use of one or more critical resources in the data center. Recognizing that all systems eventually fail, how you balance component vs. system-wide redundancy (N+1 vs. 2N, 2N+1, etc.) will significantly reshape the cost/benefit curve. Here, it is important to design for logical and reasonable incident forecasts while balancing mean-time-to-failure and customary mean-time-to-recover considerations.
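
As a hedged sketch of how those trade-offs play out numerically, the following computes availability for N, N+1, and 2N configurations, assuming independent components with an illustrative 99% unit availability (real systems only approximate independence).

```python
# How redundancy configurations reshape availability math.
# Unit availability `a` is an illustrative assumption.

from math import comb

def availability(needed: int, total: int, a: float) -> float:
    """P(at least `needed` of `total` independent components are up)."""
    return sum(comb(total, k) * a**k * (1 - a)**(total - k)
               for k in range(needed, total + 1))

a = 0.99   # assumed availability of one unit (e.g., one UPS module)
n = 2      # modules needed to carry the load
print(f"N   (2 of 2): {availability(n, n, a):.6f}")       # 0.980100
print(f"N+1 (2 of 3): {availability(n, n + 1, a):.6f}")   # 0.999702
print(f"2N  (2 of 4): {availability(n, 2 * n, a):.6f}")   # 0.999996
```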

Fault Tolerance

While major system failures constitute worst-case scenarios that ultrareliable data centers must plan for, far more common are point failures/faults. In order to achieve fault tolerance, data centers must have the ability to withstand a single point-of-failure incident for any single component that could curtail data processing operations. Typically, design for fault tolerance emphasizes large electrical/mechanical components like HVAC or power distribution, as well as IT hardware/software assets and network or telecommunications services, all of which will experience periodic failures. Design for fault tolerance should involve more than simple redundancy. Rather, effective design must balance failover capacities, mean-time-to-repair, repair vs. replace strategies, and seasonal workflow variances to ensure that the data center is able to support service level demands without requiring the installation of excess offline capacity.

Maintainability

When designing a data center facility, a common mistake is failing to account for maintainability. Excess complexity can rapidly add to costs since even redundant systems must be exercised and subjected to preventive maintenance. In fact, planning a consistent preventive maintenance schedule can be one of the most effective contributors to long-term efficiency by reducing the need for overcapacity on many key infrastructure components.

Right-Sizing/Expandability

When properly accounted for, these final two factors work in tandem to help design/build teams create an effective plan for near-term and long-term requirements. Modern design strategies include the use of techniques like modular/pod design or cloud integration that engineer in long-term capacity growth or peak demand response. This means that the team can better ensure that near-term buildout does not deliver excess capacity simply as a buffer against future demand. Engineering teams can readily design modern infrastructure to smoothly scale to meet even the most aggressive growth forecasts.

Treated as a portfolio, these six factors offer the data center design team diverse levers to balance service delivery against cost while ensuring that the final infrastructure can meet demand without breaking the bank, either through initial capital investment, or long-term operating cost.

How BRUNS-PAK Can Help

Over the past two years, BRUNS-PAK has evolved its proprietary design/build approach to incorporate the evolving array of strategies and tools available to data center planning teams, resulting in the BRUNS-PAK Hybrid Efficient Data Center Design program. Through an iterative process that acknowledges both an organization’s IT requirements and the associated facilities infrastructure needs, this program delivers a strategic approach to addressing the six critical factors influencing efficient data center design while retaining the performance, resilience and reliability needed in enterprise computing environments. Through our expanded consulting services group, and well-established design/build services team, BRUNS-PAK is uniquely positioned to assist customers seeking to create a long-term strategic direction for their data center that satisfies all stakeholders, including end-users, IT and finance.

Five Keys to Improving Data Center Resilience In the Age of Customer-Centricity

When Customers Are Involved, Being “Ready for Anything” Takes on New Urgency

For many CIOs trained in traditional IT service metrics, the new world of IT can seem daunting. No longer is your customer constituency composed primarily of internal users. Instead, your most important customers are now your company’s customers, and the implications of disappointing this audience can be swift and painful. Look no further than retail giant Target for the cautionary lessons of IT in the new age of customer-centricity.

Regardless of what industry you operate in, the delivery of “always on,” secure, customer-facing processes and services fundamentally changes the demands on the IT department. Unlike disappointing internal users because of an application outage, failure of externally-facing applications can impact business forecasts, stock price and brand value. For Target, a technology innovator with a long history of delivering customer value through technology initiatives, consumer confidence eroded and both short- and long-term business performance was negatively impacted when hackers infiltrated their credit card processing systems.

Beyond dramatic examples like Target, even short application outages or minor security breaches can have measurable cost implications. According to a Ponemon Institute study, the average per-minute cost of data center downtime has risen by 41 percent since 2010, with downtime costing approximately $7,000 per minute.[1] This is forcing IT to implement entirely new approaches to systems and services that are not just ultrareliable, scalable and dynamic, but also resilient under failure and attack.
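
As a quick sense check of what that statistic implies, here is trivial arithmetic over a few illustrative outage durations.

```python
# Downtime cost at the ~$7,000/minute figure cited above.
# Outage durations are illustrative, not from the study.

COST_PER_MINUTE = 7_000
for minutes in (5, 30, 240):
    print(f"{minutes:>4} min outage -> ${minutes * COST_PER_MINUTE:,}")
```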

Here are five key management strategies for making data center resilience a part of your organizational DNA and enhancing your defense against the negative impacts of unexpected IT incidents:

1.   Reframe the Management Conversation

For years, IT was viewed through an infrastructure lens that focused on empowering internal processes, not external business value. Today, IT management, executive management and directors must all acknowledge IT’s changing role in the organization and focus greater energy not only on addressing immediate-term demands, but also on longer-term business growth and mission-critical risk mitigation issues. Target was an early explorer of embedded-chip credit card technology, but was unable to muster the necessary internal and external resources to enable adoption. Decisions on critical IT infrastructure in modern markets must be fully integrated into a broad business strategy context to ensure effectiveness.


2.   Recognize That Resilience is a Journey, Not a Destination

There are many dynamic forces that define IT architecture resilience in modern business: growth impacts demand, security threats are ever-evolving, and risk profiles change with business valuation. That means evaluation of IT resilience must be equally dynamic. From physical data center infrastructure to approaches to DevOps and disaster recovery, planning for and implementing resilient architectures is a continuous process, not a single build-and-deploy project.

3.   Plan for Elasticity

In the week leading up to Christmas 2013, UPS planned to deliver 132 million packages. Unfortunately, demand significantly outstripped that forecast, leading to late deliveries for many high-profile e-tailers, including Amazon, and major dissatisfaction with UPS. Such is the world of customer-facing business processes, where exceeding your wildest dreams of business success can lead to nightmarish end results. For IT, massive capacity bursts need to be built into the plan if resilience objectives are to be consistently met.

4.   The Best Defense is One Grounded in Reality

The reality is that there is no perfect moat to protect IT systems from all cyber threats, natural and man-made disasters, and unforeseen internal incidents. Today, IT systems must be engineered to rapidly react to any incident in order to minimize its impact and/or the time-to-recovery. From foundational infrastructure to self-healing applications and interfaces, resilient environments are the product of planning and architecting for the foreseen…and the unforeseen.

5.   The Data Center is Still the Core of Your IT Infrastructure, Wherever and Whatever Your Data Center Is

Cloud. PODS. Colo. On-Premise. The definition of a ‘data center’ continues to evolve, and for most organizations, a modern data center represents a hybrid architecture that integrates multiple physical architectures and networking strategies. Hybrid architectures can help organizations support services that are massively scalable, ultrareliable, resilient to point failures across hardware and software, risk-managed at-scale, and still cost-efficient and environmentally responsible. Building out an enterprise-class hybrid data center architecture means moving away from old debates about topics like who controls assets toward discussions about how to best broker the continuously evolving portfolio of services needed to satisfy demanding internal and external audiences.

How BRUNS-PAK Can Help

Over the past two years, BRUNS-PAK has evolved its proprietary design/build methodologies to integrate the evolving array of strategies and tools available to data center planning teams, resulting in the BRUNS-PAK Hybrid Efficient Data Center Design program. Through an iterative process that acknowledges both rapidly changing IT requirements and their associated facilities infrastructure needs, this program delivers a strategic approach to addressing the key management levers influencing resilient data center design. Through our expanded consulting services group, and well-established design/build services team, BRUNS-PAK is uniquely positioned to assist customers seeking to create a long-term strategic direction for their data center that ensures an infrastructure able to support real-world demands in the age of customer-centricity.




[1] Emerson Electric/Ponemon Institute “2013 Study on Data Center Outages” 09/2013.  http://www.emersonnetworkpower.com/documents/en-us/brands/liebert/documents/white%20papers/2013_emerson_data_center_outages_sl-24679.pdf

Planning for the Inescapable Crush of Mobile Data Growth

Mobile Access is Driving New Demand for Smarter Networks and More Intelligent Data Center Architecture

Big data often seems to dominate headlines in IT publications. Few trends carry as dramatic a potential impact on business processes, with data-driven decision making becoming de rigueur across departments in all enterprises. But for IT, an even more important trend continues to build momentum, threatening to rewrite many of the rules for data center design and management — mobile data growth.

Global mobile data traffic grew 81% in 2013, reaching 1.5 exabytes per month by December 2013, up from 820 petabytes per month at the end of 2012.1 Mobile data transmissions now represent over 18x the total traffic traversing the entire Internet in 2000. While mobile devices continue to grow in terms of processing power, their true potential for both business and consumer applications comes from their ability to connect users anywhere and anytime to data located in data centers around the globe. This developing ‘anywhere, anytime’ approach introduces a whole new set of rules for data center management, including increasing demand to move information among data centers integrated in hybrid architectures in order to provide optimized user experience through localized points-of-presence.

Of course, many organizations have already begun to address the demands of mobile access and its attendant ‘anywhere, anytime’ use cases. However, the usage patterns we see today only minimally represent what many experts foresee in the future.

In the latest update of the Cisco® Visual Networking Index (VNI) Global Mobile Data Traffic Forecast, networking giant Cisco predicts2:

  • Mobile data traffic will grow at a compound annual growth rate (CAGR) of 61 percent from 2013 to 2018, reaching 15.9 exabytes per month by 2018, up from 1.5 exabytes per month currently (see the compounding check after this list)
  • By 2018, the number of mobile-connected devices will exceed 10 billion, exceeding the forecasted global population and averaging 1.4 devices per person
  • By 2018, network traffic generated by tablet devices will reach 2.9 exabytes monthly, nearly double the total mobile network traffic today
  • The penetration of smart mobile devices will reach 54%, up from 21% at the end of 2013, but only 15% of the connections will be at 4G speeds. Of note, a typical 4G connection generates nearly 6x more traffic than a non-4G connection, meaning that mobile growth could spike even faster if 4G penetration accelerates
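
A quick compounding check on the first bullet's forecast; the small gap versus the cited 15.9 EB/month reflects rounding of the published growth rate.

```python
# 61% CAGR applied to 1.5 exabytes/month over 2013-2018.

base_eb_per_month = 1.5
cagr = 0.61
years = 5
projected = base_eb_per_month * (1 + cagr) ** years
print(f"Projected 2018 traffic: {projected:.1f} EB/month")  # ~16.2
```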

These forecasts indicate a dramatic uptick in demand for mobile connectivity, a demand easily explained as users get more comfortable connecting to data from mobile devices and experience the advantages that real-time connection to data, in all its forms, can provide. Of particular note for many organizations is the explosive growth in demand for video content, which is expected to continue to accelerate across applications for the foreseeable future.

For CIOs, the mobile data explosion is creating rapid escalation of demand for flexible, scalable, high-performance data center capacity that can swiftly be commissioned to meet both organic demand growth in existing application portfolios as well as sudden increase of demand resulting from new application deployments to meet new customer or internal business requirements. For example, retailers are increasingly using in-store video monitoring tools to predict traffic at registers to better manage service levels. At the same time, retail deployment of more sophisticated mobile shopping applications is growing exponentially.

As applications for everything from transactional commerce to customer service, logistics and finance move increasingly online, many traditional approaches to data center architecture built on proprietary, on-premise facilities are being challenged. Few organizations will avoid the need to construct hybrid architectures that integrate cloud, colo, on-premise, POD and modular designs into a multi-faceted environment that can easily address emerging capacity, reliability and cost-efficiency demands.

How BRUNS-PAK Can Help

Over the past thirty-five years, BRUNS-PAK has evolved its proprietary design/build methodologies to integrate the evolving array of strategies and tools available to data center planning teams, resulting in the BRUNS-PAK Hybrid Efficient Data Center Design program. Through an iterative process that acknowledges both rapidly changing IT requirements and their associated facilities infrastructure needs, this program delivers a strategic approach to addressing the evolving capacity and complex networking requirements created by the explosive growth in mobile data traffic. Through our expanded consulting services group, and well-established design/build services team, BRUNS-PAK is uniquely positioned to assist customers seeking to create a long-term strategic direction for their data center that ensures an infrastructure able to support real-world demands in an increasingly mobile age.

REFERENCES

1. Cisco, “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2013–2018” 02/2014.

2. Ibid.

A Four-Part Framework for Resilient Data Center Architecture

Cornerstone concepts to support cybersecurity

While working on a recent project, we came across a newsletter authored by Deb Frincke, then Chief Scientist of Cybersecurity Research for the National Security Division at the Pacific Northwest National Lab in Seattle, which outlined her team’s initiatives for “innovative and proactive science and technology to prevent and counter acts of terror, or malice intended to disrupt the nation’s digital infrastructures.” In cybersecurity, the acknowledged wisdom is that there is no “perfect defense” to prevent a successful cyberattack. Dr. Frincke’s framework defined four cornerstone concepts for architecting effective cybersecurity practices:

  • Predictive Defense through use of models, simulations, and behavior analyses to better understand potential threats
  • Adaptive Systems that support a scalable, self-defending infrastructure
  • Trustworthy Engineering that acknowledges the risks of “weakest links” in complex architecture, the challenges of conflicting stakeholder goals, and the process requirements of sequential buildouts
  • Cyber Analytics to provide advanced insights and support for iterative improvement

In this framework, the four cornerstones operate interactively to support a cybersecurity fabric that can address the continuously changing face of cyber threats in today’s world.

If you are a CIO with responsibility for an enterprise data center, you may quickly see that these same cornerstone principles provide an exceptional starting point for planning a resilient data center environment, especially with current generation hybrid architectures. Historically, the IT community has looked at data center reliability through the lens of preventive defense…in the data center, often measured through parameters like 2N, 2N+1, etc. redundancy.

However, as the definition of the data center expands beyond the scope of internally managed hardware/software into the integration of modular platforms and cloud services, simple redundancy calculations become only one factor in defining resilience. In this world, Dr. Frincke’s four-part framework provides a valuable starting point for defining a more comprehensive approach to resilience in the modern data center. Let’s look at how these principles can be applied.

Predictive Defense: We believe the starting point for any resilient architecture is comprehensive planning that incorporates modeling (including spatial, CFD, and network traffic) and dynamic utilization simulations for both current and future growth projections to help visualize operations before initiating a project. Current generation software supports extremely rich exploration of data center dynamics to minimize future risks and operational limitations.

Adaptive Systems: Recently, Netflix has earned recognition for its novel use of resilience tools for testing the company’s ability to survive failures and operating abnormalities. The company’s Simian Army consists of services (“monkeys”) that unleash failures on its systems to test how adaptive the environment actually is. These tools, including Chaos Monkey, Janitor Monkey and Conformity Monkey, demonstrate the importance of adaptivity in a world where no team can accurately predict all possible occurrences, and where the unanticipated consequences of a failure anywhere in a complex network of hardware fabrics can lead to cascading failures. The data center community needs to challenge itself to find similar means for testing adaptivity in modern hybrid architectures if it is to rise to the challenge of ultrareliability at current scale.
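
To make the concept concrete, here is a toy sketch of the chaos-testing idea. The Cluster model, thresholds, and failure counts are hypothetical and far simpler than Netflix's actual tooling.

```python
# Toy chaos test: randomly terminate instances and check whether the
# service still meets its capacity floor.

import random

class Cluster:
    def __init__(self, instances: int, min_for_service: int):
        self.up = instances
        self.min_for_service = min_for_service

    def kill_random_instance(self) -> None:
        """Simulate a chaos agent terminating one running instance."""
        if self.up > 0:
            self.up -= 1

    def healthy(self) -> bool:
        return self.up >= self.min_for_service

random.seed(7)
cluster = Cluster(instances=10, min_for_service=7)
for _ in range(random.randint(1, 4)):  # a burst of injected failures
    cluster.kill_random_instance()
print(f"{cluster.up} instances up; service healthy: {cluster.healthy()}")
```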

Trustworthy Engineering: Another hallmark of cybersecurity is the understanding that the greatest threats often lie inside the enterprise, with disgruntled employees or simply human error. Similarly, in modern data center design, tracking a careful path that iteratively builds out the environment while checking off compliance benchmarks and ‘trustworthiness’ at each decision point becomes a critical step in avoiding the creation of a hybrid house-of-cards.

Analytics: With data center infrastructure management (DCIM) tools becoming more sophisticated, and with advancing integration between facilities measurement and IT systems measurement platforms, robust data for informing ongoing decision-making in the data center is now available. Resilient data center architecture is no longer just about the building and infrastructure, so operating by ‘feel’ or ‘experience’ is inadequate. Big data now really must be part of the data center management protocol.

By leveraging these four cornerstone concepts, we believe IT management can begin to frame a more complete, and by extension, robust plan for resiliency when developing data center architectures that bridge the wide array of deployment options in use today. This introduction provides a starting point for ways to use the framework, but we believe that further exploration by data center teams from various industries will create a richer pool of data and ideas that can advance the process for all teams.

How BRUNS-PAK Can Help

Over the past 35 years, BRUNS-PAK has evolved its proprietary design/build methodologies to integrate the evolving array of strategies and tools available to data center planning teams, resulting in the BRUNS-PAK Hybrid Efficient Data Center Design program. Through an iterative process that acknowledges both rapidly changing IT requirements and their associated facilities infrastructure needs, this program delivers a strategic approach to building resilience across the wide array of deployment options in use today. Through our expanded consulting services group, and well-established design/build services team, we can help you leverage concepts like this resiliency framework to construct your plans for effective data center deployment, whatever size data center you operate.

REFERENCES

Frincke, Deborah, “I4 Newsletter”, Pacific Northwest National Laboratory, Spring-Summer 2009.

Why the Internet of Things Must Be On Your Data Center Radar

Splunk. Glassbeam. Azure. Amazon Web Services.

If you are a CIO, get used to these names, because they (or their competitors) are likely to become an active part of your IT infrastructure over the next few years as the Internet of Things moves from bleeding-edge concept to mission-critical reality. The Internet of Things (IoT), the growing network of connected devices…everything from the FuelBand on your wrist and the refrigerator in your kitchen to the wind turbines providing your electricity or the jet engines thrusting you skyward…is rapidly altering how CIOs need to engineer their data centers.

The high-profile examples to date have focused on industries like aerospace, where small operational improvements can lead to major savings or dramatic improvements in customer service. For example, the airline industry spends approximately $200 billion annually on fuel. Every 1% improvement in efficiency that can be gleaned from more efficient in-flight decision-making means $2 billion in savings. And real-time feedback from jet engines experiencing an issue in-flight can mean faster repair turnaround on the ground, since parts and technicians can be ready when the flight arrives.

But machine-to-machine (M2M) interactions introduce a completely new data profile into the mix for CIOs. Current internet applications operate on a transactional basis…a user makes a request that a server responds to. In M2M applications, data is supplied as a continuous, real-time stream that can add up to a very large final data set, and that may require equally real-time response streams to be sent back to the source device. Virgin Atlantic IT Director David Bulman noted that a single flight for the company’s recently purchased 787 Dreamliner could generate up to a half terabyte of data! And getting fuel optimization programs in place means analyzing some of that data in real time to provide feedback to the flight crew.
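
A back-of-envelope scaling of that per-flight figure shows why storage planners pay attention; the fleet size and flight frequency here are assumptions for illustration only.

```python
# Scaling the ~0.5 TB/flight figure cited above across an assumed fleet.

TB_PER_FLIGHT = 0.5
flights_per_day = 6    # assumed per aircraft
fleet_size = 20        # assumed number of aircraft
daily_tb = TB_PER_FLIGHT * flights_per_day * fleet_size
print(f"~{daily_tb:.0f} TB/day, ~{daily_tb * 365 / 1024:.1f} PB/year "
      f"to land, store, and analyze")
```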

Storing all that data is one obvious implication for the data center, bringing smiles to the faces of executives at companies like EMC. However, there is no reason to collect data without a plan to use it, and analyzing large data sets is the next implication. Server capacity must adapt to new processing loads driven by entirely new software platforms like Splunk or Glassbeam, applications optimized for handling and analyzing large machine data sets.

But the implications go beyond the walls of the data center as well. Machine data implies collection from tens, hundreds, thousands or millions of devices scattered around the globe. Moving this data in an optimal, secure and real-time fashion implies sophisticated and creative integration of web services like Azure or Amazon Web Services. For CIOs, this opens yet another reason to evaluate hybrid architectures for the data center.

OK. It’s Real…So, Now What?

For CIOs evaluating data center plans, the Internet of Things must be part of the future capacity planning process, since miscalculation can significantly alter a company’s competitive posture. Here are three tips for integrating an M2M strategy into your broader data center planning process:

    1. Be Integrated. First and foremost, the IT team needs to be fully integrated with product development and customer service planning processes, since IoT demand will arise not from an IT requirement, but rather from real-world new product/service innovation. This means that demand forecasts in IT that may historically have only needed to account for classic administrative, finance, engineering and manufacturing workloads will now need to account for real-time data exchange as part of product/service delivery. This makes IT part of design and customer service conversations, not just IT support.
    2. Be Web Integrated. As implied above, the networking and distributed processing demands of M2M streams mean opening new discussions about Web integration in the data center architecture. For both networking and remote processing, CIOs cannot overlook the importance and potential value of cloud-based services in supporting IoT workloads.
    3. Be Nimble. The Internet of Things is spawning yet another era of innovation and demand in the data center. From exploding demand for data scientists to a new expansion of capacity, M2M interactions will most certainly shine a spotlight on IT, with good planning the key to supporting this exploding requirement.

How BRUNS-PAK Can Help

BRUNS-PAK’s proprietary design/build methodologies integrate an evolving array of strategies and tools for data center planning teams that must account for the potential impact of IoT workloads, including the need to fully integrate cloud services strategies. The BRUNS-PAK Hybrid Efficient Data Center Design program offers an iterative process that acknowledges both rapidly changing IT requirements and their associated facilities infrastructure needs, resulting in a strategic plan to address the evolving capacity and complex networking requirements created by M2M work streams. Through our expanded consulting services group, and well-established design/build services team, we can help you create a strategy that ensures your data center is as resilient and responsive as the devices you are monitoring around the globe!


REFERENCES

[1] ComputerWeekly.com, “GE uses big data to power machine services business” http://www.computerweekly.com/news/2240176248/GE-uses-big-data-to-power-machine-services-business
[2] ComputerWorld.com, “Boeing 787s to create half a terabyte of data per flight, says Virgin Atlantic” http://www.computerworlduk.com/news/infrastructure/3433595/boeing-787s-create-half-terabyte-of-data-per-flight-says-virgin-atlantic/

Why Does Security Matter?

Here is the next snippet from Mark’s latest presentation at AFCOM 2015!

Why Does Security Matter?

1) Liability, Liability, Liability

2) If corporate, what is the board of directors' responsibility to the stockholders?

3) If academic/university, what is the trustees' responsibility?

4) If non-profit, what is the board members' responsibility?

5) If government, what is the administration members' responsibility?

6) Hospital / healthcare:

A. Patient care records?

7) Stock trading

A. SEC