Challenges in Multi-Core Era – Part 1

A few years ago, in 2005, Herb Sutter published an article in Dr. Dobb’s Journal, “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software”. He argued that developers need to start designing software with concurrency in mind in order to keep exploiting the exponential throughput gains that microprocessors continue to deliver.

Here we are in 2009, more than four years after Sutter’s article was published. What’s going on? How are we doing? How has the industry evolved to tackle the multi-core revolution?

In this three-part series, we’ll answer these questions by exploring the recent multi-core-inspired evolution of components throughout the application stack, including microprocessors, operating systems, and development platforms.

The New Microprocessors

Microprocessor manufacturers keep adding processing cores. Most machines today have at least a dual-core CPU, and quad-core CPUs are already quite popular in servers and advanced workstations. More cores are just around the corner.

There is a new free lunch: if your application is designed to take advantage of multi-core and multiprocessor systems, it will scale as the number of cores increases.

Some people say multi-core wasn’t useful. Take a look at this simple video. It runs four applications (processes) at the same time on a quad-core CPU. Each application runs on a different physical processing core, as shown in the CPU usage history real-time graph (one independent graph per core). Hence, running four applications takes nearly the same time as running just one: one application takes 6 seconds, while four applications take 7 seconds. What you see is what you get; there are no tricks. Multi-core offers more processing power, and it is really easy to verify. However, most software wasn’t developed to take advantage of these parallel architectures within a single application.

There is another simple video showing one application running on a quad-core CPU. The first time, it runs using a classic, old-fashioned serial programming model, so it uses just one of the four available cores, as shown in the CPU usage history real-time graph. Then, the same application runs in a parallelized version, taking less time to do the same job.
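To make the serial-versus-parallel difference concrete, here is a minimal sketch, assuming a C++ compiler with std::thread support, that sums a large array first on a single core and then splits the same work across one worker per core reported by std::thread::hardware_concurrency():

#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 50000000;
    std::vector<double> data(n, 1.0);

    // Serial version: a single core walks the whole array.
    double serial_sum = std::accumulate(data.begin(), data.end(), 0.0);

    // Parallel version: one worker per available core, each summing its own chunk.
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 2;  // the core count may be unknown; assume two
    std::vector<double> partial(workers, 0.0);
    std::vector<std::thread> pool;
    const std::size_t chunk = n / workers;
    for (unsigned i = 0; i < workers; ++i) {
        const std::size_t begin = i * chunk;
        const std::size_t end = (i + 1 == workers) ? n : begin + chunk;
        pool.emplace_back([&, i, begin, end] {
            partial[i] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& worker : pool) worker.join();
    double parallel_sum = std::accumulate(partial.begin(), partial.end(), 0.0);

    std::printf("serial=%.0f parallel=%.0f\n", serial_sum, parallel_sum);
    return 0;
}

On a quad-core machine the parallel version should approach a fourfold speedup, minus the cost of creating and joining the threads.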

In recent years, parallel hardware became the mainstream standard in most developed countries. The great problem is that hardware evolved much faster than software, resulting in a large gap between the two. Microprocessors added new features that software developers didn’t exploit. Why did this happen? Because it was very complex to accomplish. By the way, it’s still a complex task. I’ll get back to this later.

Meanwhile, the most widespread model for multiprocessor support, SMP (Symmetric Multi-Processing), is giving up the pole position to NUMA (Non-Uniform Memory Access). On the one hand, with SMP, every processor has equal access to memory and I/O, so the shared processor bus becomes a limitation to future scalability. On the other hand, with NUMA, each processor accesses the memory close to it faster than the memory that is farther away. NUMA offers better scalability when there are more than four processors.

With NUMA, computers have more than one system bus, and each available system bus serves a particular set of processors. Hence, each set of processors can access its own memory and its own I/O channels. The sets are still capable of accessing the memory owned by the other sets, with appropriate coordination schemes. However, it is obviously more expensive to access the memory owned by other sets of processors (foreign NUMA nodes) than to work with the memory reached through the local system bus (the NUMA node’s own memory).

Therefore, NUMA hardware requires different kinds of optimizations. Applications have to be aware of the NUMA hardware and its configuration, so that concurrent tasks and threads that access nearby memory locations run on the same NUMA node. Applications must avoid expensive remote memory accesses and favor concurrency that takes memory locality into account, as the sketch below illustrates.
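As a minimal sketch of what NUMA awareness can look like in code, the following example uses the Linux libnuma API (an assumption: libnuma is installed and the program is linked with -lnuma) to run the calling thread on node 0 and allocate the buffer it will touch on that same node:

#include <cstddef>
#include <cstdio>
#include <numa.h>  // Linux libnuma; link with -lnuma

int main() {
    if (numa_available() < 0) {
        std::printf("NUMA is not available on this system.\n");
        return 0;
    }
    std::printf("NUMA nodes detected: %d\n", numa_max_node() + 1);

    // Keep this thread on the CPUs that belong to node 0...
    if (numa_run_on_node(0) != 0) {
        std::printf("Could not bind the thread to node 0.\n");
        return 1;
    }

    // ...and place the buffer it works on in node 0's own memory,
    // so accesses stay on the local system bus instead of crossing
    // over to a foreign NUMA node.
    const std::size_t size = 64 * 1024 * 1024;
    void* buffer = numa_alloc_onnode(size, 0);
    if (buffer != nullptr) {
        // ... memory-intensive work on the buffer would go here ...
        numa_free(buffer, size);
    }
    return 0;
}

Repeating the same idea per node, one group of threads and one set of buffers per node, is what keeps expensive remote accesses to a minimum.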

The new free lunch is manycore scalability. Expect more cores in the coming months and years. Learn about the new microprocessors, be aware of NUMA, and optimize your applications for these powerful new architectures.

The New Specialized Hardware

On the one hand, we have a lot of software that is not taking full advantage of the available hardware power. On the other hand, there are many manufacturers developing additional hardware to offload processing from the main CPU. Does this make sense?

It means you are wasting watts all the time because the software is obsolete, and in order to solve this problem you have to add extra, expensive hardware to free CPU cycles, even though you aren’t fully using the cores you already own.

TCP/IP Offload Engine (TOE) uses a more powerful NIC (Network Interface Card) or HBA (Host Bus Adapter) microprocessor to process TCP/IP over Ethernet in dedicated hardware. This technique eliminates the need to process TCP/IP in software running on the operating system and consuming cycles from the main CPU. It sounds really attractive, especially when working with 10 Gigabit Ethernet and iSCSI.

CPUs keep adding cores. So far, modern software is not taking full advantage of these additional cores, yet you still need new specialized hardware to handle the network I/O. Most drivers don’t even take advantage of the old parallel processing capabilities based on SIMD (Single Instruction, Multiple Data) offered since the Pentium MMX arrived. TCP/IP Offload Engine is a great idea. However, if I own a quad-core CPU with outstanding vectorization capabilities, SSE4.2 and its predecessors, I’d love my TCP/IP stack to take advantage of them.
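To illustrate the kind of help the TCP/IP and storage stacks could get from those instruction sets, here is a minimal sketch, assuming an SSE4.2-capable CPU and a compiler switch such as -msse4.2, that computes CRC32C (the checksum polynomial used by iSCSI) with the dedicated SSE4.2 instruction instead of a software lookup table:

#include <cstddef>
#include <cstdio>
#include <cstring>
#include <nmmintrin.h>  // SSE4.2 intrinsics; compile with -msse4.2

// CRC32C computed byte by byte with the hardware crc32 instruction.
unsigned int crc32c(const unsigned char* data, std::size_t length) {
    unsigned int crc = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < length; ++i) {
        crc = _mm_crc32_u8(crc, data[i]);
    }
    return crc ^ 0xFFFFFFFFu;
}

int main() {
    const char* message = "let the CPU's own instruction set do the checksum";
    const unsigned int crc =
        crc32c(reinterpret_cast<const unsigned char*>(message), std::strlen(message));
    std::printf("crc32c=%08x\n", crc);
    return 0;
}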

Vectorization based on SIMD allows a single CPU instruction to process multiple data elements at the same time, which can speed up the execution of complex algorithms many times over. For example, an encryption algorithm requiring thousands of CPU cycles could produce the same results in less than a quarter of those cycles using vectorization instructions.
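As a minimal illustration of the vectorization idea itself, the following sketch (assuming any SSE-capable x86 CPU) adds two arrays of floats four elements at a time with a single packed instruction per step, instead of one element per instruction:

#include <cstdio>
#include <xmmintrin.h>  // SSE intrinsics

int main() {
    alignas(16) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(16) float b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    alignas(16) float c[8];

    // Each _mm_add_ps adds four packed floats at once, so the loop
    // performs a quarter of the iterations a scalar loop would need.
    for (int i = 0; i < 8; i += 4) {
        const __m128 va = _mm_load_ps(&a[i]);
        const __m128 vb = _mm_load_ps(&b[i]);
        _mm_store_ps(&c[i], _mm_add_ps(va, vb));
    }

    for (int i = 0; i < 8; ++i) {
        std::printf("%.0f ", c[i]);
    }
    std::printf("\n");
    return 0;
}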

Something pretty similar happens with games. Games are always asking for new GPUs. However, most games take advantage of neither the multi-core nor the vectorization capabilities offered by modern CPUs. I don’t want to buy new hardware because of software inefficiencies. Do you?

Modern GPUs (Graphics Processing Units) are very powerful and offer outstanding processing power. There are several standards, such as CUDA and OpenCL, that let software developers use the GPU as if it were a CPU, running general-purpose code on it to relieve the main CPU of that load. It sounds really attractive. However, again, most software does not even take full advantage of multi-core, and it seems difficult for commercial, mainstream software to embrace the possibilities offered by these modern and quite expensive GPUs. Most modern notebooks don’t include such GPUs. Therefore, I see many limitations to this technique.
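Given that limited availability, it seems reasonable for an application to probe for a GPU at run time and fall back to the multi-core CPU path when none is found. Here is a minimal host-side sketch, assuming the OpenCL headers and runtime are installed (link with -lOpenCL); it only discovers devices, and the actual kernel offload is left out:

#include <CL/cl.h>  // OpenCL host API; link with -lOpenCL
#include <cstdio>

int main() {
    cl_uint platform_count = 0;
    if (clGetPlatformIDs(0, nullptr, &platform_count) != CL_SUCCESS || platform_count == 0) {
        std::printf("No OpenCL platform found: use the multi-core CPU path.\n");
        return 0;
    }

    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);

    // Ask the first platform how many GPU devices it exposes.
    cl_uint gpu_count = 0;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, nullptr, &gpu_count) != CL_SUCCESS ||
        gpu_count == 0) {
        std::printf("No GPU device found: use the multi-core CPU path.\n");
        return 0;
    }

    std::printf("Found %u GPU device(s) available for offloading work.\n",
                static_cast<unsigned>(gpu_count));
    return 0;
}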

Before considering these great but limited options, it seems logical to exploit the main CPU’s full processing capabilities first. Most modern notebooks already offer dual-core CPUs.

Specialized hardware is very interesting indeed. However, it isn’t available in every modern computer. It seems logical to develop software that takes full advantage of all the power and instruction sets offered by modern multi-core CPUs before adding more specialized and expensive hardware.

In part two of Challenges in Multi-Core Era, I’ll compare the multi-core capabilities of the latest operating systems.

About the author: Gaston Hillar has more than 15 years of experience in IT consulting, IT product development, IT management, embedded systems, and computer electronics. He has been actively researching parallel programming, multiprocessor, and multicore architectures since 1997. He is the author of more than 40 books about computer science and electronics.

Gaston is currently focused on tackling the multicore revolution, researching new technologies, and working as an independent IT consultant and freelance author. He contributes to Dr. Dobb’s Parallel Programming Portal, http://www.go-parallel, and is a guest blogger at Intel Software Network, http://software.intel.com.

Gaston holds a Bachelor’s degree in Computer Science and an MBA. You can find him at http://csharpmulticore.blogspot.com and http://software.intel.com/en-us/profile/417051/
