Data Center Trends 2015

Data center technologies are emerging and evolving at an astounding pace. Just consider how a fledgling idea like virtualization became an infrastructure necessity in the span of only a few years, or the expanding role of solid-state drives in high-performance storage cache and virtual SAN deployments.

IT professionals need to pay attention to new developments, and consider the impact that those products or initiatives can have on the data center — and the business. At Gartner’s IT Operations Strategies and Solutions Summit 2015 here this week, analyst David J. Cappuccio outlined 10 IT trends poised to impact data centers over the next year and beyond.

  1. Non-Stop Demand

There are always new workloads, users and data, and the demand for IT resources is constantly increasing. Cappuccio points to an average annual growth rate (AAGR) of 10% in server workloads; 20% in power demands; 35% in network bandwidth; and an astonishing 50% AAGR in storage. IT leaders must follow utilization trends and perform careful capacity planning to ensure adequate resources are available to maintain service performance and user experience levels.
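
As a rough illustration of how those growth rates compound, the sketch below projects demand a few years out. The baseline figures are hypothetical; only the AAGR percentages come from the trend above.

```python
# Rough capacity projection from average annual growth rates (AAGR).
# Baseline values are hypothetical; the growth rates are the ones cited above.
GROWTH = {
    "server workloads": 0.10,
    "power demand": 0.20,
    "network bandwidth": 0.35,
    "storage": 0.50,
}

BASELINE = {
    "server workloads": 1_000,  # workloads
    "power demand": 500,        # kW
    "network bandwidth": 40,    # Gbps
    "storage": 2_000,           # TB
}

def project(baseline: float, aagr: float, years: int) -> float:
    """Compound a baseline demand forward by an annual growth rate."""
    return baseline * (1 + aagr) ** years

for resource, rate in GROWTH.items():
    now = BASELINE[resource]
    later = project(now, rate, years=3)
    print(f"{resource}: {now:,.0f} today -> {later:,.0f} in three years")
```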

  2. Treating business units as technology startups

The simple reality is that business units need a level of agility and responsiveness that is particularly challenging for IT departments struggling just to keep the lights on. Consequently, individual business units are spending from their own budgets to bring in mobile applications and cloud services, and even their own devices.

In years past, this would have been called “shadow IT” and frowned upon. Business units will simply work around IT if that’s what is required to address business problems. But IT still bears the responsibility to ensure technologies are integrated and managed properly. The goal for IT, Cappuccio said, is to get in front of these efforts and collaborate with business units right from the start to achieve a better business outcome.

For some businesses, the collaboration starts with better tools.

“We’re probably going with SharePoint and outsourcing it to a South African company called Openbox,” said Chris DiGiacomo, vice president and director of operations at corporate financing firm W. P. Carey in New York. “The value is having the end users collaborate better not only with IT but within their own departments.”

W. P. Carey also added four new IT positions — business relationship managers — to act as liaisons between IT and individual business units, DiGiacomo said.

“They’ll understand the business as much as the staff, and help incorporate processes and tools that can make these units more productive and work more efficiently,” he said.

  3. Internet of Things

The proliferation of embedded, networked sensors that deliver an astounding volume of data to the business, more commonly known as the Internet of Things (IoT), is on the rise. Gartner predicts the IoT will include over 26 billion connected devices by 2020, so IT faces the daunting challenge of processing, storing, correlating and reporting an ever-growing volume of real-time data from a multitude of sensor sources.

The business, in turn, can use this data to make superior decisions in real time and see more strategic trends and opportunities over the longer term.

  4. Software-defined infrastructure

By now, IT professionals have heard a variety of different “software-defined” terms, including software-defined storage, software-defined networking (SDN), and even software-defined data centers. It’s a new way to automate, orchestrate and operate enterprise IT. Under ideal conditions, it can bring fast and flexible infrastructure reconfiguration from a single location while enhancing workload performance and network traffic behaviors — all running under open standards.

While specific software-defined elements like SDN are within reach now, others, such as software-defined data centers, will be difficult to achieve until tools can fully integrate on- and off-premises computing resources. In addition, the automation of software-defined infrastructures depends on logic and rule sets that must be reviewed and updated periodically. Otherwise, adopters risk improper or ineffective automation as computing needs change over time.

“If you automate, don’t forget about it,” Cappuccio said.
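
The rule sets Cappuccio warns about can be as simple as a handful of thresholds that trigger reconfiguration. The sketch below is a minimal, hypothetical example (the metrics, thresholds and actions are invented for illustration); the point is that these values encode assumptions about current workloads, which is why they need periodic review.

```python
# Minimal, hypothetical rule set for a software-defined stack. The metrics,
# thresholds and actions are illustrative only; they encode assumptions about
# workloads that must be reviewed and updated as needs change.
from dataclasses import dataclass

@dataclass
class Rule:
    metric: str       # telemetry value the orchestrator watches
    threshold: float  # trigger point
    action: str       # reconfiguration the orchestrator should perform

RULES = [
    Rule("cpu_utilization", 0.80, "add a compute node to the pool"),
    Rule("storage_utilization", 0.85, "provision an additional volume"),
    Rule("east_west_latency_ms", 5.0, "reroute traffic to the secondary fabric"),
]

def evaluate(metrics: dict) -> list:
    """Return the actions whose thresholds are exceeded by current telemetry."""
    return [r.action for r in RULES if metrics.get(r.metric, 0.0) > r.threshold]

print(evaluate({"cpu_utilization": 0.92, "storage_utilization": 0.60}))
# ['add a compute node to the pool']
```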

  5. Integrated systems evolution

The data center trend of integrated infrastructures, commonly known as converged infrastructures (CI), is hardly new. CI has recently gained considerable momentum and is expected to gain even more traction in years to come. CI’s appeal comes from its system-level approach, which includes delivering servers, as well as storage and networking components pre-bundled and heavily integrated by the vendor. CI platforms are continually evolving to provide better performance, power efficiency and manageability.

But CI can be tricky for IT. Cappuccio explained that the expense means senior executives will be deeply involved in CI selection — moving the traditional ‘best product for a given job’ emphasis of IT to a vendor relationship focus that resonates with C-level executives. The investment in CI platform evaluation is also difficult to repeat as infrastructure needs grow and change, so organizations may stick with existing vendors and experience vendor lock-in.

  6. Disaggregated systems

Traditional data center hardware exists as complete subsystems. For example, a server contains a power supply, processors, memory and storage within the same box and is interconnected through proprietary, short-distance electrical interfaces. If you need more processor cycles or memory, you probably will buy more boxes — duplicating other components you don’t need.

The idea of disaggregated systems is to modularize computing building blocks, which can be racked as needs dictate, and the modules join together through high-speed shared connections (such as silicon photonics). For example, if you need more computing cycles, you’d plug more processor modules into the rack.

Rack designs are also changing to provide direct current (DC) to computing devices, thereby reducing the number of power supplies (and possible points of failure) while improving energy efficiency with fewer AC-to-DC conversions. Open Compute servers can leverage rack-distributed DC now, and the trend will continue through disaggregation.
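
A back-of-the-envelope comparison shows why fewer conversions matter. The per-stage efficiencies below are assumed, illustrative figures, not measurements; the takeaway is simply that chained conversion stages multiply their losses.

```python
# Back-of-the-envelope comparison of power conversion chains.
# Per-stage efficiencies are assumed, illustrative values only.
def chain_efficiency(stages):
    """Overall efficiency of a series of conversion stages."""
    efficiency = 1.0
    for stage in stages:
        efficiency *= stage
    return efficiency

# Traditional path: UPS double conversion plus a per-server AC-to-DC supply.
traditional = chain_efficiency([0.94, 0.94, 0.90])
# Rack-level DC distribution: one shared rectifier feeding DC to the nodes.
rack_dc = chain_efficiency([0.96, 0.95])

print(f"traditional chain: {traditional:.1%}")  # ~79.5%
print(f"rack-level DC:     {rack_dc:.1%}")      # ~91.2%
```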

  7. Proactive infrastructures

IT and business leaders increasingly rely on analytical tools to better understand the data center and its computing resources — and then make better decisions about data center utilization and growth. This has been an ongoing process with platforms like data center infrastructure management, but lately, it’s been accelerating to move the organization from a reactive state to a proactive state. For example, today’s tools are very good at helping administrators predict what will happen in the future. But eventually, these tools will evolve to proactively prescribe changes necessary to achieve desired outcomes.
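
A trivial version of the “predict what will happen” step is a linear trend fit over recent utilization samples. The sketch below uses invented data and is only a conceptual illustration, not how any particular DCIM product works; the prescriptive step is the final message.

```python
# Naive linear-trend forecast over recent utilization samples. The data points
# are invented; real DCIM tools use far richer models, but the idea is the
# same: extrapolate, then prescribe action before the limit is reached.
def linear_forecast(samples, periods_ahead):
    """Fit a least-squares line to the samples and extrapolate forward."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    variance = sum((x - mean_x) ** 2 for x in range(n))
    slope = covariance / variance
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + periods_ahead)

# Monthly storage utilization (fraction of capacity) for the past six months.
history = [0.58, 0.61, 0.63, 0.67, 0.70, 0.74]
projected = linear_forecast(history, periods_ahead=6)
print(f"Projected utilization in six months: {projected:.0%}")
if projected > 0.85:
    print("Prescription: expand the storage pool before the threshold is hit.")
```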

  8. IT service continuity

Business continuity (BC) and disaster recovery (DR) have typically been approached as two separate and distinct functions — usually related to two different sets of problems. But the two disciplines are now merging into a single integrated function Cappuccio terms “IT service continuity.”

An underlying idea addresses the fundamental goal of both BC and DR: to keep essential services available to users. Service continuity relies on multiple sites and increasing intelligence, which can forecast potential disruptions and outages, and then move workloads dynamically to other sites. It’s a data center strategy embraced by large organizations like trading firms, but it should find broader acceptance over the next year and beyond.
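
Conceptually, the “increasing intelligence” boils down to continuous health and risk scoring per site plus an automated decision about where workloads should run. The control loop below is a deliberately simplified, hypothetical sketch, not any vendor’s implementation.

```python
# Simplified, hypothetical service-continuity decision: score each site, then
# steer workloads away from a site whose outlook degrades. Real systems also
# weigh replication state, data locality and change control.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    healthy: bool           # current probes passing
    disruption_risk: float  # forecast risk, 0.0 (safe) to 1.0 (likely outage)

def choose_active_site(sites, risk_ceiling=0.3):
    """Prefer a healthy site whose forecast risk stays under the ceiling."""
    candidates = [s for s in sites if s.healthy and s.disruption_risk < risk_ceiling]
    if not candidates:
        raise RuntimeError("no site meets the continuity policy -- escalate")
    return min(candidates, key=lambda s: s.disruption_risk)

sites = [
    Site("ny-primary", healthy=True, disruption_risk=0.45),   # storm forecast
    Site("nj-secondary", healthy=True, disruption_risk=0.10),
]
print(f"route workloads to: {choose_active_site(sites).name}")  # nj-secondary
```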

  9. Bimodal IT

IT typically struggles with the dual challenges of keeping the shop open (mode 1) and exploring new technologies to enhance the business (mode 2). The two modes of operation don’t work well together because the traditional process- and procedure-driven efforts of production IT can easily be disrupted by new technologies. These upstarts often carry significant risk for the business, but taking that risk is critical because business units will eventually adopt those new technologies anyway.

These two modes of IT can (and should) exist together, Cappuccio said. It’s alright to preserve the processes, procedures, and compliance for mode 1 operations and still embrace the agility and experimentation of mode 2 activities. The goal is to run both efforts separately, but don’t penalize IT folks for mistakes or failures in mode 2 endeavors.

It’s also perfectly acceptable to change modes over time. For example, a new technology might come into the business through experimentation and evaluation. But as the technology finds acceptance and there is more reliance on it, IT will adopt more process and procedures to manage it. The use of public cloud is one common example of this type of transition within IT.

  10. Scarcity of IT skills

Finally, Cappuccio cites a lack of IT pros with the skills needed to carry IT and the business forward. Factors such as increased IT complexity, greater support demands, shorter development times, shrinking budgets and end-user requirements are putting pressure on IT staff.

IT professionals need to do a better job of thinking outside their own silos of expertise and tying different skills together. Cross-training staff and encouraging continued learning and growth is a good strategy for incentivizing and retaining IT professionals; they will be more engaged and stay in their jobs longer.

Originally posted on TechTarget: http://searchdatacenter.techtarget.com/news/4500248427/Ten-data-center-trends-driving-change-in-2015

Cyber Security Hacks Will Increase

President Barack Obama says Cyber Security hacks targeting the United States are going to increase.

Obama has a new focus on Cyber Security following the massive hack last week of U.S. government employees’ personnel files, described as the most significant cyber attack in U.S. history.

Obama says part of the problem is the U.S. has very old systems for detecting intrusions. He says the U.S. is upgrading old systems agency by agency to ensure that technology is up to date.

But Obama says the problem isn’t going away. He says both governments and individuals are “throwing everything they’ve got” at U.S. systems. Obama says that’s why the U.S. must be much more attentive to Cyber Security.

Obama commented Monday in Germany at the close of a summit of the world’s leading democracies.

Next Generation Firewall vs Web Application Firewall

Next Generation Firewalls enable policy-based visibility and control over applications, users and content using three unique identification technologies: App-ID, User-ID and Content-ID. The knowledge of which application is traversing the network and who is using it is then used to create firewall security policies, including access control, SSL decryption, threat prevention and URL filtering. Every corporation needs a firewall.

In contrast, a Web Application Firewall (WAF) is designed to look at web applications, monitoring them for security issues that may arise from coding errors. The only similarity between the two solutions is that they both use the term firewall in their names. Only those corporations that feel they have coding issues in their web applications need a WAF.

Key attributes of Next Generation Firewall:

  • Designed to be a primary firewall, identifying and controlling applications, users and content traversing the network.
  • App-ID: Identifies and controls more than 900 applications of all types, irrespective of port, protocol, SSL encryption or evasive tactic.
  • User-ID: Leverages user data in Active Directory (as opposed to IP addresses) for policy creation, logging and reporting.
  • Content-ID: Blocks a wide range of malware, controls web activity and detects data patterns (SSN, CC#) traversing the network.
  • Logging and reporting: All application, user and threat traffic is logged for analysis and forensics purposes.
  • Performance: Designed to act as the primary firewall for enterprises of all sizes, which dictates that it deliver high-performance throughput under load. Palo Alto Networks uses a combination of custom hardware, function-specific processing and innovative software design to deliver high-performance, low-latency throughput.
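
To make the attributes above concrete, here is a hypothetical policy expressed as plain data rather than any vendor’s actual syntax. It only illustrates the idea that application identity (App-ID), user identity (User-ID) and a content profile (Content-ID) combine into a single rule.

```python
# Hypothetical next-generation firewall policy expressed as plain data.
# This is not any vendor's syntax; it illustrates how application, user and
# content identification combine into one rule instead of port-based rules.
RULES = [
    {"app": "ms-exchange",  "user_group": "employees",   "action": "allow",
     "content_profile": "block-malware"},
    {"app": "webmail",      "user_group": "contractors", "action": "deny",
     "content_profile": None},
    {"app": "web-browsing", "user_group": "employees",   "action": "allow",
     "content_profile": "url-filtering-strict"},
]

def evaluate(app, user_group):
    """Return the first rule matching the identified application and user."""
    for rule in RULES:
        if rule["app"] == app and rule["user_group"] == user_group:
            return rule
    return {"action": "deny", "content_profile": None}  # implicit deny

print(evaluate("web-browsing", "employees"))
```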

Key attributes of Web Application Firewalls:

  • Designed to compensate for insecure coding practices – only those companies that use web applications and are concerned that their code is insecure need to buy a WAF.
  • Looks specifically for security flaws in the application itself, ignoring the myriad of attacks that may be traversing the network.
  • Highly customized for each environment – looking at how the web application is supposed to act and acting on any odd behavior.
  • Looks only at the specific L7 fields of a web application – a WAF does not look at any of the other layers in the OSI stack.
  • Current WAF offerings are designed to look only at a very small subset of the application traffic and, as such, cannot address the performance requirements of a primary firewall.
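
As a minimal illustration of the “looks only at L7 fields” point, a WAF-style check inspects the HTTP parameters of one specific application and flags input that deviates from the expected shape. The field names and patterns below are simplified examples, not a production rule set.

```python
# Simplified WAF-style check on the layer-7 fields of one web application.
# The expected fields and patterns are illustrative; production WAFs combine
# signatures, learned application models and anomaly scoring.
import re

# Fields this application expects, and the shape each value should take.
EXPECTED_FIELDS = {
    "account_id": re.compile(r"^\d{1,10}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def inspect_request(params):
    """Return a list of violations found in the submitted form parameters."""
    violations = []
    for name, value in params.items():
        pattern = EXPECTED_FIELDS.get(name)
        if pattern is None:
            violations.append(f"unexpected field: {name}")
        elif not pattern.match(value):
            violations.append(f"malformed value for {name!r}")
    return violations

# A request carrying a classic injection attempt in a numeric field.
print(inspect_request({"account_id": "1 OR 1=1", "email": "user@example.com"}))
```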

Learn more about Netfast Next Generation Firewall Solutions

Massive Federal Data Breach – Cyber Security

President Obama announced on Thursday a massive breach of federal employees’ data. The breach involves up to four million current and former employees and is believed to have originated in China.

The Office of Personnel Management, which stored the breached data, handles government security clearances and federal employee records. The breach is believed to have taken place late last year and was discovered this spring.

Included in the records were Social Security numbers, addresses and other personally identifiable information. Authorities were still speculating on whether the attack was for commercial gain or for spying.

It’s unclear at this point whether the attack was state-sponsored, but federal officials see little doubt that it originated from China. This breach is the third major foreign attack on United States federal systems in the past year. Previously, the White House and the State Department were compromised in attacks attributed to Russian-based hackers; that earlier attack may have included President Obama’s unclassified records.

As of Thursday night’s announcement, the F.B.I. was working to investigate the matter. Spokesman Josh Campbell said: “We take all potential threats to public and private sector systems seriously, and will continue to investigate and hold accountable those who pose a threat in cyberspace.”

Personnel affected by the data breach can request up to 18 months of free credit monitoring, and the department has brought on cyber security experts to help assess the breach.

Cyber Security should be a concern for all organizations, big and small. The information security technologies of the past are not sufficient to combat the growing skills of hackers. The best way to prevent these threats is ongoing vulnerability assessment and penetration testing by expert consultants.

30% of Data Center Servers are Comatose

Over $30 billion in data center investment is not being utilized.

According to recent research from Anthesis Group, a global sustainability consultancy, over 30% of data center servers are comatose, or not in use. The research defines comatose servers as those that have not been in service for over six months, and it found that more than 10 million servers worldwide, including virtualized servers, are comatose. This translates into over $30 billion in capital investment sitting unused, which doesn’t even include the ongoing operational costs to manage that infrastructure.

Jon Taylor, a partner at Anthesis Group, said: “Far too many businesses have massive Information Technology (IT) infrastructure inefficiencies of which they are not even aware.”

A primary driver of excess server capacity for many firms is poor asset identification and workload distribution. By implementing proper discovery, asset management and provisioning processes in the enterprise, firms can better position themselves to improve the efficiency of their capital investments.
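
In practice, flagging comatose servers starts with joining the asset inventory to utilization and last-activity data. The sketch below shows the idea with invented hosts, fields and thresholds; only the six-month cutoff follows the study’s definition.

```python
# Flag potentially comatose servers by joining asset records with activity
# data. Hosts, fields and thresholds are invented for illustration; the
# six-month cutoff mirrors the Anthesis Group definition cited above.
from datetime import datetime, timedelta

ASSETS = [
    {"host": "app-web-01",   "last_useful_work": datetime(2015, 5, 20),  "avg_cpu": 0.31},
    {"host": "legacy-db-07", "last_useful_work": datetime(2014, 9, 2),   "avg_cpu": 0.01},
    {"host": "batch-14",     "last_useful_work": datetime(2014, 11, 15), "avg_cpu": 0.02},
]

def comatose(assets, as_of, cutoff_days=183):
    """Servers with no useful work inside the cutoff window and negligible load."""
    limit = as_of - timedelta(days=cutoff_days)
    return [a["host"] for a in assets
            if a["last_useful_work"] < limit and a["avg_cpu"] < 0.05]

print(comatose(ASSETS, as_of=datetime(2015, 6, 15)))
# ['legacy-db-07', 'batch-14']
```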

According to Dr. Koomey, a researcher in environmental technology: “In the twenty-first century, every company is an IT company, yet far too little attention is given to IT inefficiencies, and to the need for widespread changes in how IT resources are built, provisioned, and managed.”

Despite this opportunity, many IT organizations are still handicapped by the day-to-day maintenance of obsolete infrastructure technologies, legacy software and outdated processes, and face a continuous shortage of skilled workers. As such, it is often difficult to achieve competitive advantage when the IT department is fighting fires day in and day out.

So what can be done to help?

  • Migration to the latest platform technologies, including cloud computing, will empower the business to be more agile and take better control of capital expenses.
  • Data center optimization solutions can reduce the wasted capital, power and operational costs associated with unused capacity.
  • A discovery and asset management process should be implemented with a partner that understands multi-vendor environments.

Read more about Netfast Technical Capabilities 

Download Anthesis Group Research