DCIM 2.0: The Emergence of Data Center Information Management Systems

Over the past 5 years, data center infrastructure management (DCIM) has become an acknowledged, if somewhat inconsistently implemented, approach to control and oversight of IT facilities. DCIM offers a centralized approach to the monitoring and management of the critical systems in a data center.

Currently, DCIM implementations are primarily focused on the physical and asset-level components of the data center facility, such as:

  • For facilities monitoring only
    • Building management systems (BMS)
    • Utility sources and dual power source systems
    • Generators
    • UPS systems
    • Power distribution units (PDUs)
    • Multi-source mechanical systems (chilled water, direct exchange, heat wheel)
    • Fire detection and suppression
    • Temperature
  • For system monitoring and management
    • Valve control
    • Power source control
    • Variable frequency drive (VFD) response to temperature changes
  • For security integration
    • CCTV monitoring
    • Access control systems logging and monitoring
    • Biometric reader logging and monitoring

In these implementations, telecommunication and data networks have typically remained independent, and while some form of remote monitoring and management is usually in place, the application focus has clearly been on collecting and presenting systems data, not on interpreting that data to actually improve uptime.

In many respects, the current state of the market reflects the business and technical drivers behind these implementations: data center consolidation, the implications of increasing power and heat density in server racks, and energy efficiency and sustainability initiatives. With the rapid acceptance of virtualized environments and cloud computing, there is now increasing focus on delivering high-performance, ultra-reliable, efficient data center architectures.

To begin, let’s start with cloud computing, which NIST defines as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Inherent in this definition is an emphasis on automated provisioning and governance, along with a built-in focus on the core benefits that cloud is supposed to deliver: cost savings, energy savings, rapid deployment and customer empowerment.

This cloud-influenced perspective is putting traditional DCIM approaches under scrutiny. DCIM is increasingly expected to provide automation capabilities that create a dynamic infrastructure able to adapt rapidly to workload demands and resource utilization conditions. At BRUNS-PAK, we refer to this emerging requirement as Data Center Information Management 2.0, or DCIM 2.0 for short.

DCIM 2.0 will integrate existing infrastructure management tools and systems with the telecommunication, data and networking feeds needed to create a true ‘internet of things’ for the data center. By bringing these pieces together, along with proactive visualization and predictive analytics applications, DCIM 2.0 can begin to drive systems that control the infrastructure changes necessary to maintain operations at the lowest possible energy utilization. For example, real-time computational fluid dynamics (CFD) modeling of anticipated, workload-driven temperature changes can be used to control VFD cooling fans to hold temperature at setpoint.
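
As a rough illustration of what such a control loop might look like, the following Python sketch ties a measured rack inlet temperature and a simplified, CFD-style prediction of workload-driven temperature rise to a proportional adjustment of VFD fan speed. The sensor read, the thermal model and the drive write are hypothetical stand-ins for real BMS, CFD and drive integrations, not an actual API.

import random
import time

# Illustrative DCIM 2.0-style control loop; all values and functions are
# hypothetical stand-ins for real BMS, CFD and drive integrations.

SETPOINT_C = 24.0                   # target rack inlet temperature, degrees C
GAIN = 4.0                          # % fan-speed change per degree C of error
MIN_SPEED, MAX_SPEED = 30.0, 100.0

def read_inlet_temp_c() -> float:
    """Stand-in for a sensor/BMS feed (e.g., a BACnet or SNMP read)."""
    return 23.0 + random.uniform(-1.5, 3.0)

def predict_temp_rise_c(scheduled_kw: float) -> float:
    """Greatly simplified placeholder for a CFD-derived estimate of the
    temperature rise caused by workload about to land in this rack."""
    return 0.8 * scheduled_kw / 10.0

def set_vfd_speed(pct: float) -> None:
    """Stand-in for a write to the variable frequency drive."""
    print(f"VFD fan speed -> {pct:.1f}%")

def control_step(scheduled_kw: float, current_speed: float) -> float:
    # Anticipated temperature = measured temperature + predicted rise
    anticipated = read_inlet_temp_c() + predict_temp_rise_c(scheduled_kw)
    error = anticipated - SETPOINT_C
    # Proportional response, clamped to the drive's allowed range
    new_speed = max(MIN_SPEED, min(MAX_SPEED, current_speed + GAIN * error))
    set_vfd_speed(new_speed)
    return new_speed

if __name__ == "__main__":
    speed = 50.0
    for _ in range(5):
        speed = control_step(scheduled_kw=12.0, current_speed=speed)
        time.sleep(1)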

Given the increasing intelligence of both the physical and logical devices that need to be part of this environment, implementation of DCIM 2.0 is possible sooner than many IT professionals think. In fact, the largest barriers to initial implementations may be securing management focus and a conscious commitment to avoiding responsibility silos (facilities emphasis vs. IT emphasis). Current dashboard tools can unify much of the data needed to begin to bring DCIM 2.0 to life, and in so doing, help IT teams looking to combine ultra-reliability, scalability and efficiency under one data center vision.

Beyond PUE: Three Factors to Consider in Planning Data Center Strategy

If you are a CIO evaluating data center plans, you already know that the rules have changed, and at the forefront of the rule-breaking changes is the cloud. While the driving force for cloud migration is often perceived as capital cost reduction, the cloud is proving to be much more. In a 2012 KPMG survey,¹ 59% of cloud providers responded that the cloud was driving innovation in their customers’ products and services, and 54% felt that cloud supported innovation in processes. Those stats are borne out in survey after survey across the industry.

But, with all the focus on the cloud, traditional data centers in all their emerging physical forms continue to serve as the backbone technology infrastructure in many organizations. Companies like Facebook and Google, with their massive footprints, are pioneering not only new ways to think about physical infrastructure and server architecture, but also new strategies for assessing effective performance under real-world workloads.
Here are three critical, and often overlooked, factors that leading companies consider in evaluating data center plans and performance:

  1. Cost to Compute: Typically, organizations focus on point metrics like power usage effectiveness (PUE) as a measure of operating efficiency. But at the leading edge of data center utilization, companies like eBay are more focused on tracking code, servers and composite infrastructure costs as an aggregate and measuring performance against workload to calculate the true cost to compute. This is utility thinking taken into the data center: how many watts will it take to complete this transaction, and how much do I pay per watt? (A brief worked example follows this list.)
  2. Security Process: Security is a top-of-mind concern for any organization with business-critical networks, sensitive data or publicly accessible user interfaces. Leading-edge thinking in security acknowledges that process is more critical than individual tactics, since breaches are inevitable. You cannot build a big enough moat to keep out intruders forever, so the real question is how quickly you can detect and isolate the inevitable breach. Events like the recent NSA scandal illustrate how attack vectors like insider threats combine with tactics like advanced persistent threats to create complex security risks. Simplifying your infrastructure and driving standardized processes is critical to managing security in this environment. For many companies, reestablishing internal infrastructure as the hub for information flow across a managed set of external or cloud-based computing resources is becoming key to ensuring security in an insecure age.
  3. Orchestration Optimization: No two organizations are alike. Data differs, processes differ, personnel skills differ. Thus, it stands to reason that no two data center strategies will be truly alike. This means that infrastructure that is truly responsive to all elements of infrastructure service, from server and desktop virtualization to mobile device integration, cold storage practices, and authentication and identity management, must come together in a coherent manner. For many organizations, the core data center is the nexus for integrating this cross-functional orchestration process.
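
To make the “watts per transaction” framing in item 1 concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure is a hypothetical input chosen for illustration, not a benchmark or an eBay number.

# Back-of-the-envelope cost-to-compute calculation.
# Every figure below is a hypothetical input, not a benchmark.

transactions_per_hour = 1_200_000      # measured workload
avg_it_load_kw = 250.0                 # servers, storage, network
pue = 1.5                              # facility overhead multiplier
energy_price_per_kwh = 0.11            # utility rate, USD
amortized_cost_per_hour = 180.0        # facility + hardware capital, USD/hour

facility_kw = avg_it_load_kw * pue
energy_cost_per_hour = facility_kw * energy_price_per_kwh
total_cost_per_hour = energy_cost_per_hour + amortized_cost_per_hour

watt_hours_per_txn = facility_kw * 1000.0 / transactions_per_hour
cost_per_thousand_txns = 1000.0 * total_cost_per_hour / transactions_per_hour

print(f"{watt_hours_per_txn:.3f} Wh per transaction")
print(f"${cost_per_thousand_txns:.3f} per 1,000 transactions")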

Being responsive to these types of considerations takes completely new thinking about data center facilities, thinking that goes “beyond the box” and integrates all the elements of infrastructure. Acknowledging scalability, burstable resources, resilience and security as fundamental needs, it is easy to see how new deployment methods like modular/pod facilities have gained acceptance. At the same time, new strategies for resource sharing, like private co-location facilities, are emerging as ways to help organizations with common needs achieve Google-scale economies without excessive capital investment.

The BRUNS-PAK Data Center Methodology

Over the past 44 years, BRUNS-PAK has quietly assembled one of the most diverse, skilled teams of professionals focused on the strategies and implementation tactics this new era requires. From strategic planning to design/build support, construction and commissioning, BRUNS-PAK helps clients craft solutions that balance the myriad decisions underpinning effective data center strategy, including:

  • Renovation vs. expansion options (CAPEX vs. OPEX)
  • Build and own
  • Build and leaseback
  • Migration/relocation options
  • Co-Location
  • Cloud integration / Private cloud buildout
  • Container/Pod deployment
  • Network optimization
  • Business impact analysis
  • Hybrid computing architecture

With over 6,000 customers across industry, government and academic sectors, BRUNS-PAK has a proven process for designing, constructing, commissioning and managing data center facilities, including LEED-certified, high-efficiency facilities in use by some of the world’s leading companies and institutions.

1. KPMG International, “Breaking Through the Cloud Adoption Barriers,” KPMG Cloud Providers Survey, February 2013.

Managing Massive Data Growth

A Combination of Data Efficiency Technologies Provides Ways to Optimize Primary Storage Capacity and Performance

It’s no secret that growth in data is expected to remain rampant for many years to come. According to the InformationWeek “State of Enterprise Storage 2014” survey, IT is dealing with 25% or more yearly growth at nearly one-third of all companies. Furthermore, budgets are strained, with 1 in 4 respondents stating they lack the funds simply to meet demand.

As a result, IT directors around the globe are struggling with decisions concerning the handling of both primary and secondary data storage. Ideally, they need the ability to store and manage data in a way that consumes the least amount of space with little to no impact on performance. With real-world budgets in play, optimizing performance via high-priced flash-based solutions will continue to be a fantasy for many organizations, which makes reducing storage needs an integral part of the equation.

Data reduction technologies like deduplication, compression, and thin provisioning can shrink data sets by 25-90% and are designed to offset growth by storing more data per storage device. Provided that IT administrators consider the data type and the functionality of each technology, these techniques can deliver considerable benefits.

Data deduplication works by replacing duplicate data across many files with references to a single shared copy. The percentage of organizations using deduplication increased from 38% in 2011 to 55% in 2014. On average, more than half of the total volume of a company’s data is in the form of redundant copies. Deduplication technologies can reduce the quantity of data stored at many organizations by more than 25x on some data types. Storing less data requires fewer hardware resources, which in turn consumes less energy.
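
The reference-to-single-copy idea can be illustrated with a short Python sketch, assuming simple fixed-size blocks and SHA-256 digests. Production deduplication engines are considerably more sophisticated (variable-length chunking, collision handling, metadata management), but the core mechanism is the same.

import hashlib

# Minimal sketch of block-level deduplication: fixed-size chunks are hashed,
# duplicate blocks are stored once, and files keep only references (digests).
# A real system would also handle metadata, collisions and garbage collection.

BLOCK_SIZE = 4096

def dedupe(data: bytes):
    store = {}   # digest -> block contents, each unique block stored once
    refs = []    # ordered references that reconstruct the original data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # keep only the first copy seen
        refs.append(digest)
    return store, refs

def rebuild(store: dict, refs: list) -> bytes:
    return b"".join(store[d] for d in refs)

if __name__ == "__main__":
    data = b"A" * BLOCK_SIZE * 50 + b"B" * BLOCK_SIZE * 2   # highly redundant
    store, refs = dedupe(data)
    assert rebuild(store, refs) == data                     # lossless
    print(f"logical blocks: {len(refs)}, unique blocks stored: {len(store)}")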

However, not every data set or environment is suitable for deduplication. When used for data sets with large amounts of static data, it can yield significant storage savings. If used for the wrong type of data, performance issues will arise. It is necessary for IT administrators to understand how specific data sets will respond to data deduplication and use it only where the benefits exceed the costs. Deduplication is particularly effective with unstructured data sets (like home directories and department shares), virtual machines and application services, virtual desktops, or test and development environments.

Data compression is a process in which algorithms are used to encode a block of data to reduce its total physical size, thus providing storage savings. As with deduplication, data compression has been well integrated into backup systems for many years. Now those benefits are available for primary storage systems. In fact, a recent survey revealed that roughly 33% of IT administrators are benefitting from data compression on the primary side. Space savings from primary storage compression have been estimated at 15 to 30%.
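
As a simple demonstration of the principle (not of any particular array’s implementation), the following Python snippet compresses a block of repetitive, database-like records with the standard zlib library. Highly repetitive data like this compresses far better than typical mixed primary data, which is why realized savings vary so widely.

import zlib

# Illustrative use of a standard compression algorithm (zlib/DEFLATE) on a
# block of repetitive, database-like rows. Actual savings depend heavily on
# the data and on the algorithms a given storage platform offers.

record = b"2014-03-01,ORDER,STATUS=SHIPPED,REGION=NORTHEAST;"
block = record * 2000                          # ~100 KB of structured rows

compressed = zlib.compress(block, level=6)
print(f"original:   {len(block):>7} bytes")
print(f"compressed: {len(compressed):>7} bytes")
print(f"ratio:      {len(block) / len(compressed):.1f}x")

assert zlib.decompress(compressed) == block    # compression is lossless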

As with data deduplication, compressing data has potential performance pitfalls, and IT administrators need to understand how best to utilize it for maximum efficiency. Benefits of compression are most often associated with relational databases, including online transaction processing (OLTP), decision support systems (DSS), and data warehouses. Savings diminish with unstructured and encrypted data sets. A key factor for success is the number of compression algorithms provided by the storage platform.

The final strategy, thin provisioning, is not technically a data reduction technology, but it does provide an efficient, on-demand storage consumption model. In the past, servers were allocated storage based on anticipated requirements, and to avoid performance issues if those limits were exceeded, storage was normally over-provisioned. Thin provisioning allocates storage on a just-enough, just-in-time basis by centrally controlling capacity and allocating space only as applications require it. Thus you can allocate space for an application whose data storage needs you expect to grow, but power only the storage that is currently in use. A recent survey revealed that 39% of IT administrators use thin provisioning today, up from 28% in 2012.
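
A sparse file gives a simple, file-level feel for the mechanism, assuming a POSIX system and a filesystem that supports sparse allocation. Enterprise arrays implement thin provisioning at the LUN or volume level, but the accounting idea is the same: the sizes, offsets and temporary path below are purely illustrative.

import os
import tempfile

# File-level analogue of thin provisioning using a sparse file: a large
# logical size is advertised up front, but physical blocks are consumed only
# where data is actually written. Assumes a POSIX system and a filesystem
# with sparse-file support; sizes and offsets are purely illustrative.

LOGICAL_SIZE = 10 * 1024**3                # "promise" 10 GiB to the application

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(LOGICAL_SIZE)               # logical allocation, no data written
    f.seek(512 * 1024**2)                  # the application writes 1 MiB...
    f.write(b"x" * 1024**2)                # ...somewhere inside the 10 GiB
    path = f.name

st = os.stat(path)
print(f"logical size:  {st.st_size / 1024**3:.1f} GiB")
print(f"physical used: {st.st_blocks * 512 / 1024**2:.1f} MiB")  # 512-byte units
os.remove(path)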

In the end, the ultimate goal of data efficiency is to remain transparent to the user while providing tangible benefits like managing growth and reducing overall storage costs. When implemented simultaneously, these three technologies produce peak results. If used appropriately, they will enable organizations to repurpose data center resources and add decades of new life to resource-constrained data centers.