Powering the cognitive business: Part 2

In our previous blog we explored how High Performance Computing (HPC) is helping to power today’s fast-paced business world, where in-depth insight from big data volumes is needed in near real-time.

This second blog in the series takes a more in-depth look at what an HPC architecture actually looks like for the majority of organisations.

What does HPC look like?

1. Fast and powerful

Yes, speed and power are obviously critical components, but focusing solely on microprocessors is insufficient to help organisations overcome performance barriers. Innovation is needed across the entire system stack to deliver the required price/performance: processors, memory, storage, networking, file systems, systems management, application development environments, accelerators and workload optimisation. Key to this is minimising data motion by maximising workload efficiencies and optimising compute resources across the system stack.

2. Flexible and modular

Times and technology have moved on apace since the bespoke, stand-alone supercomputers of the 1990s: clusters of commodity systems connected by high-speed interconnects are now used in 80% of all HPC architectures. This makes sense for most organisations, as it makes the best use of existing IT investments and skillsets. The platform will also need to scale in line with business growth and demand, so as big data and IoT applications continue to proliferate, HPC in the cloud is likely to go mainstream.

3. Cost efficient

When it comes to HPC, it’s all about determining which workloads will justify the costs. Aside from high-end engineering, global financial services and government organisations, it can be difficult to justify the cost of investing in a full-scale HPC environment. However, cloud computing makes HPC affordable for the masses, and server consolidation also provides significant opportunities to reduce IT operating costs. IDC's Business Value research shows that consolidating workloads onto fewer, more powerful hardware systems can reduce OpEx on IT staff by 50%+, power/cooling by up to 20%+, and use of data centre facilities by 30%+ - freeing up spending for more strategic investments, whilst supporting unpredictable and business-critical workloads.
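To make those percentages concrete, here is a minimal back-of-envelope sketch of the consolidation arithmetic. The reduction rates are the IDC figures quoted above; the baseline cost breakdown is a purely hypothetical example, not real benchmark data.

```python
# Back-of-envelope model of server-consolidation savings.
# Reduction rates follow the IDC figures quoted in the text;
# the baseline OpEx numbers are hypothetical.

def consolidation_savings(it_staff, power_cooling, facilities,
                          staff_cut=0.50, power_cut=0.20, facilities_cut=0.30):
    """Return (annual_saving, remaining_opex) after consolidation."""
    saving = (it_staff * staff_cut
              + power_cooling * power_cut
              + facilities * facilities_cut)
    total = it_staff + power_cooling + facilities
    return saving, total - saving

# Hypothetical annual OpEx (in £k) for a small estate
saving, remaining = consolidation_savings(it_staff=500,
                                          power_cooling=200,
                                          facilities=100)
print(f"Saving: £{saving:.0f}k, remaining OpEx: £{remaining:.0f}k")
# → Saving: £320k, remaining OpEx: £480k
```

Even on these modest assumed numbers, IT staff costs dominate the saving, which is why consolidation business cases usually lead with headcount and management overhead rather than power alone.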

4. Open standards

Investing in a platform that’s open and built on industry standards not only protects your current IT investments in people, processes, platforms and applications, but also provides a seamless and cost-effective path to scale throughout the journey to HPC.

A great example of this in action is IBM, which in 2014 opened up the technology surrounding its Power Systems architecture, such as processor specifications, firmware and software. Aimed at driving a collaborative development model with partners, the OpenPOWER Foundation now has over 200 members and enables the server vendor ecosystem to build customised server, networking and storage hardware for HPC, analytics, cloud and next-generation data centres.

This short video shows global executives discussing how open source software can provide the agility and speed that businesses demand in today’s digital world.

Read the other blogs in the 'Powering the Cognitive Business' series

Part 1: How is HPC powering today's fast-paced business world?

Part 3: Have you got the power to deliver on the big data promise?