
Challenges in Multi-Core Era – Part 3

Previously, I compared how today’s popular operating systems handle multi-core processors. In this final part of Challenges in Multi-Core Era, I’ll talk about the multi-core capabilities found in today’s programming languages and development tools.

The Programming Languages

When language architects were designing the foundations of the most popular programming languages, multi-core microprocessors were hidden in laboratories. Only high-performance servers had access to multiprocessing systems, and just a few specialized workstations had more than one CPU installed. Therefore, C# and Java offered concurrency and multi-threading support intended to make applications more responsive. However, language architects didn’t design this support to optimize applications for future multi-core CPUs. Hence, it is now really difficult to optimize existing code to take advantage of multi-core CPUs using frameworks designed around serial code.

In order to take full advantage of multi-core, applications need a task-based design and a task-based programming model. There is no silver bullet. So far, there is no way to optimize an application by simply recompiling it without changes, as some developers expected. There is a great need for new designs, new programming techniques and new tools. The software development industry needs another great paradigm shift.

In addition, developers need a framework capable of handling tasks. New programming languages are emerging, and older concepts, such as functional programming, are returning. Functional programming makes it easier to code task-based designs and to split the work into multiple independent tasks that can run in parallel on multi-core CPUs.

There are many new programming languages with a strong focus on functional programming, prepared to take full advantage of multi-core and to offer great scalability. To mention just a few:

  • Scala
  • Haskell. Yes, the language that is more than twenty years old. Pure functional programming languages are back, and they could be the future of parallel programming.
  • Microsoft Axum (formerly Maestro).
  • Microsoft F#.

However, do developers want to begin learning new programming languages? Most developers want to leverage their existing knowledge to move into multi-core programming.

C++ and Fortran programmers had early access to parallel programming. Nowadays, these are the only programming languages that can exploit the full power offered by modern microprocessors. C++ is closer to the hardware and therefore allows many optimizations that aren’t available in any other programming language, apart from C and assembly language. You can access all the vectorization capabilities from C++ and Fortran.

OpenMP has been offering a high-quality, multi-platform, open shared-memory parallel programming API for C/C++ and Fortran for many years now. In addition, Intel Threading Building Blocks, also known as TBB, allows developers to express parallelism in C++ applications to take advantage of multi-core.
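
To give a flavor of the model, here is a minimal OpenMP sketch, not taken from any particular product documentation; it assumes a compiler with OpenMP enabled (for example, g++ -fopenmp). TBB expresses the same idea through C++ templates such as tbb::parallel_for.

    // Minimal OpenMP sketch: parallelize an independent loop across the cores.
    // Build with OpenMP enabled, e.g.: g++ -fopenmp scale.cpp -o scale
    #include <omp.h>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const int n = 1000000;
        std::vector<double> data(n, 1.0);

        // Each iteration is independent, so OpenMP can split the range
        // among as many threads as there are available cores.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            data[i] *= 2.0;

        std::printf("Ran with up to %d threads\n", omp_get_max_threads());
        return 0;
    }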

Message Passing Interface (MPI) is a language-independent communications protocol used to program parallel computers. You can use MPI to take advantage of multi-core from many programming languages. However, its main focus is helping to develop applications that run on clusters and high-performance computers.
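
As a hedged illustration of the model, here is a minimal MPI sketch, assuming an MPI implementation such as Open MPI or MPICH is installed. Each process learns its rank, which is the usual starting point for partitioning work.

    // Minimal MPI sketch: each process reports its rank and the total
    // number of processes. Build and run with an MPI implementation, e.g.:
    //   mpicxx hello_mpi.cpp -o hello_mpi && mpirun -np 4 ./hello_mpi
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's id
        MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

        // In a real application, the work would be partitioned by rank here.
        std::printf("Process %d of %d ready for work\n", rank, size);

        MPI_Finalize();
        return 0;
    }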

Intel offers compilers and libraries optimized to take advantage of multi-core. However, you still have to code your applications around new parallel designs. The use of vectorization in its math libraries is a great plus. There is an outstanding opportunity for new libraries and components optimized for multi-core and vectorization. Parallelism brings new opportunities to the software development industry.

There are new companies taking advantage of the need for multi-core optimizations, such as Cilk Arts with its Cilk++ compiler. It is based on GCC and includes a modified compiler and debugger to simplify multi-core programming on Linux and Windows platforms.

Mac OS X’s Xcode development environment offers access to Grand Central Dispatch and OpenCL. OpenCL allows C programs to run on the GPU instead of loading the main CPU. Developer interest in Xcode has really grown since the arrival of multi-core and OpenCL.
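
Grand Central Dispatch is exposed as a plain C API, so it can also be called from C++. The following is only a sketch, assuming Snow Leopard’s libdispatch is available: dispatch_apply_f hands the iterations of a loop to a global concurrent queue that GCD schedules across the cores.

    // Minimal Grand Central Dispatch sketch (Mac OS X Snow Leopard or later):
    // dispatch_apply_f submits the loop iterations to a global concurrent
    // queue, which GCD schedules across the available cores.
    #include <dispatch/dispatch.h>
    #include <cstdio>

    static double results[8];

    static void work(void *context, size_t i)
    {
        double *out = static_cast<double *>(context);
        out[i] = i * 0.5;  // placeholder for real per-iteration work
    }

    int main()
    {
        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        // Runs the 8 iterations concurrently and waits for all of them.
        dispatch_apply_f(8, queue, results, work);

        for (size_t i = 0; i < 8; ++i)
            std::printf("results[%zu] = %f\n", i, results[i]);
        return 0;
    }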

C# and Java are evolving in order to offer developers new ways of expressing parallelism in their code. Indeed, they are changing many aspects that were designed for another world: the old single-core machines. Some of these changes include new garbage collectors, new frameworks and features, new functional approaches and task-based programming capabilities.

Java 7 will offer the new fork-join framework, really optimized for multi-core.

C# 4.0 (Visual Studio 2010) will add task-based programming capabilities and parallelized LINQ (PLINQ). It will also allow developers to manage the desired degree of parallelism.

Furthermore, there are new DSLs (Domain Specific Languages) to add parallel programming capabilities to existing high-level languages. For example, GParallelizer adds nice parallel programming capabilities to Groovy.

Most modern programming languages are evolving or adding interoperability with other languages to favor multi-core programming.

However, don’t forget about vectorization. Mono, a free and open source implementation of .NET, offers C# developers access to SSE3 and SSE4.

A few years ago, concurrency was about threads. Now, experts are talking about tasks and fibers. Why? Because threads are too heavyweight for developing an application with a task-based approach. Tasks and fibers are lightweight concurrency elements, much lighter than threads. They allow developers to implement complex task-based designs without the complexities of threads.
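
As a rough sketch of the difference, the following C++ fragment runs two pieces of work as lightweight tasks with tbb::task_group (assuming a TBB version that provides it and a compiler with lambda support). The library multiplexes the tasks onto a small pool of worker threads instead of creating one thread per unit of work.

    // Lightweight tasks instead of raw threads: a sketch using
    // tbb::task_group. The library maps the tasks onto a small pool of
    // worker threads, so spawning many short tasks stays cheap.
    #include <tbb/task_group.h>
    #include <cstdio>

    static long fib(long n)  // deliberately naive, just to create work
    {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    int main()
    {
        long a = 0, b = 0;
        tbb::task_group tasks;

        tasks.run([&] { a = fib(30); });  // a task, not a dedicated thread
        tasks.run([&] { b = fib(32); });
        tasks.wait();                     // wait for both tasks to finish

        std::printf("fib(30)=%ld fib(32)=%ld\n", a, b);
        return 0;
    }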

Ruby 1.9 added fibers, which simplify the creation of pipelines. Pipelines take great advantage of Hyper-Threading combined with multi-core.

If you want to go parallel, follow James Reinders’ eight key rules for multi-core programming. You can apply them to any combination of programming language, framework, compiler and virtual machine.

Java 7 and .NET 4 will not offer framework support for vectorization (SIMD instructions), even though SIMD has been available since the Pentium MMX microprocessor. This decision doesn’t make sense, especially considering that there is a huge market for smart professionals in the new parallelism age.

The New Tools

A well-known proverb says “A good workman is known by his tools”.

A parallelized application requires new debugging and testing techniques. You need to catch potential bugs introduced by concurrency.

Intel has been offering tools for High Performance Computing and parallelism for many years now. A few weeks ago, Intel launched one of the most complete parallel toolkits for C/C++ developers, Intel Parallel Studio. Among many other features, it helps developers compile applications tuned for multi-core CPUs and find concurrency-specific bugs and bottlenecks. You should expect to see more tools like this in the coming quarters.

Visual Studio 2010 will add enhanced multi-monitor support; you’ll need more than one monitor to debug applications built with a task-based approach. It will also add task-based debugging capabilities. However, Visual Studio 2010 has only recently entered Beta 1. Therefore, if you want to develop an application with a task-based approach in C# 3.0 today, you still have to work with threads. Visual Studio 2008 offers nice multithreading debugging capabilities.

Most IDEs are changing to offer new task-based programming, debugging and testing capabilities. You have to test parallelized applications on multi-core CPUs. Many bugs aren’t going to appear when running them on single-core CPUs.

There are many free tools to help you in the multi-core jungle. You can monitor your applications and test their concurrency efficiency using Process Explorer and Intel Concurrency Checker.  If you use these tools to check commercial software, you’ll be able to see the need for new multi-core aware developers. Besides, you’ll see a lot of opportunities in the multi-core age.

By the way, multi-core programming has a high-quality weekly talk show led by Intel experts, Parallel Programming Talk.

Summary

While the old free lunch is over, the industry is reshaping itself to take advantage of the new microprocessor architectures of today and tomorrow.

Hardware will continue to advance and offer more parallel processing capabilities, even though the software industry is moving more slowly than expected.

The bottom line is that multi-core seems to be a truly sustainable competitive advantage, one that requires a great paradigm shift from developers throughout the software lifecycle. There is light at the end of the tunnel. Are you ready to reach it?

About the author: Gaston Hillar has more than 15 years of experience in IT consulting, IT product development, IT management, embedded systems and computer electronics. He has been actively researching parallel programming, multiprocessing and multi-core since 1997. He is the author of more than 40 books about computer science and electronics.

Gaston is currently focused on tackling the multi-core revolution, researching new technologies and working as an independent IT consultant and freelance author. He contributes to Dr. Dobb’s Parallel Programming Portal, http://www.go-parallel, and is a guest blogger at Intel Software Network, http://software.intel.com.

Gaston holds a Bachelor’s degree in Computer Science and an MBA. You can find him at http://csharpmulticore.blogspot.com and http://software.intel.com/en-us/profile/417051/


Challenges in Multi-Core Era – Part 2

Previously I talked about the evolution of microprocessors and specialized hardware since the widespread adoption of multi-core began a few years ago. In this second part of Challenges in Multi-Core Era, I’ll compare the multi-core capabilities across today’s popular operating systems.

The Operating Systems

No matter the version, Mac OS has always had a great advantage over any other desktop operating system: it knows exactly what the underlying hardware is, because it is designed to run only on Apple’s certified hardware (you can run it on other hardware, at your own risk). The same company develops both the computers and the operating system. Leaving its great innovation aside, this is its great secret, and it means the operating system can be tuned to take full advantage of specific hardware. For example, the latest versions of Mac OS X running on Intel microprocessors take advantage of vectorization through SSE (Streaming SIMD Extensions) and SSE2. In fact, Apple has been promoting vectorization and SIMD (Single Instruction Multiple Data) instructions on its Developer Connection website.

Mac OS X Snow Leopard is taking another great step by offering Grand Central Dispatch. Nonetheless, there is a big problem: there are too few developers working with Mac-specific developer tools. The Mac is also moving into the 64-bit arena.

FreeBSD is one of the free and open source operating systems that has always offered great features for multiprocessor systems. FreeBSD 7 works great with multi-core CPUs as well. That is why many high-performance servers around the world trust FreeBSD’s scheduler.

The key is the operating system scheduler. It is responsible for distributing work across the physical and logical processing cores and for assigning processing time to each concurrent thread. It performs a really complex task. For example, an operating system running on an Intel Core i7 CPU sees eight logical processing cores (it can run eight hardware threads concurrently) but only four physical cores. It has to distribute dozens of threads across the time slices available on those eight logical cores. Thus, the scheduler’s efficiency directly impacts application performance: an inefficient scheduler can ruin the performance of a highly parallelized application.
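
For example, an application can ask the operating system how many logical cores the scheduler manages. Here is a minimal sketch using a POSIX call available on Linux and FreeBSD; other platforms expose the same information through their own APIs.

    // Query how many logical processing cores the scheduler can use.
    // _SC_NPROCESSORS_ONLN is supported on Linux and FreeBSD.
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        long logical_cores = sysconf(_SC_NPROCESSORS_ONLN);
        // On an Intel Core i7 this reports 8, even though there are
        // only 4 physical cores, because of Hyper-Threading.
        std::printf("Logical cores available: %ld\n", logical_cores);
        return 0;
    }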

Linux works great with multi-core CPUs; FreeBSD works better, but Linux does a great job. However, many desktop applications running on Linux GUIs aren’t optimized to take full advantage of multi-core. Nevertheless, Linux running as a Web server in a classic LAMP configuration is really fine-tuned for multi-core. Free Linux versions have a great advantage in the multi-core world because they don’t limit the number of cores through expensive licenses. Therefore, you can upgrade your desktop or server without worrying about the number of cores supported by your operating system license.

Both FreeBSD and Linux have a great advantage over Mac and Windows: most new deployments of these operating systems use the 64-bit versions. Applications running in 64-bit mode offer more scalability than their 32-bit counterparts. Parallel algorithms require more memory than serial ones, and most operating systems running in 32-bit mode can only address 4 GiB. There are techniques to work with more memory even in 32-bit mode; however, they reduce memory access performance.

4 GiB might seem like a lot of memory. Nevertheless, because the memory map is a bit complex, some address space is needed for other purposes, such as writing data to video memory and other buffers. Hence, the operating system cannot use the whole 4 GiB as main memory. Furthermore, some operating systems limit the maximum memory addressable by a 32-bit application (2 GiB in standard Windows configurations). Again, there are techniques that allow applications to work with more memory.

When working with large images, videos and databases, these limits can become a serious obstacle to scalability. More cores mean more processes, more threads, more tasks or more fibers. A 2 GiB memory limit for a single process can mean a serious I/O bottleneck.

Hence, moving to 64-bit is a great step forward. You have the same driver problems in both 32-bit and 64-bit Linux, so you can install 64-bit Linux without worrying about additional problems. Working in 64-bit mode, you can scale as the number of cores increases without worrying about address-space limits. Of course, you do have to worry about available physical memory. However, that’s another problem.

FreeBSD and Linux schedulers already offer excellent support for NUMA. However, you may see nice improvements in future kernel versions. The idea is simple: the more cores there are, the more optimization an efficient scheduler requires.

Now, let’s move to the Windows wonderland. Windows has a great disadvantage: each version offers different multi-core capabilities. You should check the maximum number of cores and the maximum memory supported by each Windows version. This doesn’t mean that the newest Windows versions aren’t capable of handling multi-core CPUs; the problem is that they come with different licenses and prices.

If you want scalability, you shouldn’t consider 32-bit versions anymore. Windows Server 2008 R2 and Windows 7 will support up to 256 logical processor cores. Nowadays, 256 logical cores seems like a huge number. However, an Intel Core i7 is a quad-core CPU, yet it offers eight logical cores (2 logical cores per physical core, 2 x 4 = 8). New CPUs with 8 physical cores are around the corner, and they will offer 16 logical cores (2 logical cores per physical core, 2 x 8 = 16). Hence, you will see 16 graphs in your CPU activity monitor. An operating system supporting up to 256 logical cores really makes sense for the forthcoming years.

Despite being criticized everywhere, Windows Vista offered nice scheduler optimizations for multi-core. Windows Server 2008 R2 and Windows 7 will offer NUMA support in their 64-bit versions. However, you must use the new functions to take full advantage of NUMA capabilities.
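
As a hedged sketch of those functions, the Win32 API available since Windows Vista and Windows Server 2008 includes calls such as GetNumaHighestNodeNumber and VirtualAllocExNuma, which let an application ask for memory on a preferred node.

    // Sketch of NUMA-aware allocation on Windows (Vista / Server 2008 and
    // later). GetNumaHighestNodeNumber reports the nodes the scheduler knows
    // about; VirtualAllocExNuma asks for memory preferably located on a
    // given node, so threads running there access it faster.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        ULONG highest_node = 0;
        if (!GetNumaHighestNodeNumber(&highest_node))
            return 1;
        std::printf("Highest NUMA node: %lu\n", highest_node);

        const SIZE_T size = 64 * 1024 * 1024;  // 64 MiB
        void *buffer = VirtualAllocExNuma(GetCurrentProcess(), NULL, size,
                                          MEM_RESERVE | MEM_COMMIT,
                                          PAGE_READWRITE,
                                          0 /* preferred node */);
        if (buffer != NULL)
        {
            // ... run the threads that use this buffer on node 0 ...
            VirtualFree(buffer, 0, MEM_RELEASE);
        }
        return 0;
    }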

No matter the operating system, applications must be prepared to take advantage of multi-core CPUs.
Operating systems are adding multi-core optimizations. However, except on Mac OS X, most applications running on them take advantage of neither multi-core nor vectorization. I really cannot understand this.

I do believe it is time to learn from Mac OS X. For example, Windows 7 should offer a special version, let’s call it 7 Duo. 7 Duo could require at least a dual-core CPU with SSE2, so you wouldn’t be able to run it on older machines. If you have newer hardware, you’d buy and install 7 Duo, and the operating system would load much faster, taking full advantage of your modern hardware. Your favorite Web browser should take advantage of multi-core and vectorization when parsing HTML or XML. Check this white paper, Parallelizing the Web Browser.

The great problem with PCs (the x86 family) is backward compatibility. It has been an incredible advantage over the last few decades. However, it is time to take advantage of modern hardware by establishing baselines.
The same applies to Linux. I’d love to install an Ubuntu Duo on my notebook.

Operating systems are really crucial to tackling the multi-core revolution. Their schedulers are key to transforming multi-core power into application performance. Vectorization and SIMD are also important, yet most applications are not using them. It seems logical to develop new operating system versions designed to take full advantage of truly modern hardware. They would add real value to your notebooks, desktops, workstations and servers.

Click here for part 3 where I compare the new capabilities of programming languages and development tools with respect to multi-core processors.


Challenges in Multi-Core Era – Part 1

A few years ago, in 2005, Herb Sutter published an article in Dr. Dobb’s Journal, “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software”. He talked about the need to start developing software with concurrency in mind in order to keep exploiting the continuing exponential growth in microprocessor throughput.

Here we are in 2009, more than four years after the publication of Sutter’s article. What’s going on? How are we doing? How did the industry evolve to tackle the multi-core revolution?

In this three part series, we’ll answer these questions by exploring the recent multi-core inspired evolution of components throughout the application stack, including microprocessors, operating systems and development platforms.

The New Microprocessors

Microprocessor manufacturers keep adding processing cores. Most machines today have at least a dual-core CPU, and quad-core CPUs are quite popular in servers and advanced workstations. More cores are around the corner.

There is a new free lunch. If you have an application designed to take advantage of multi-core and multiprocessor systems, you will be able to scale as the number of cores increases.

Some people say multi-core isn’t useful. Take a look at this simple video. It runs four applications (processes) at the same time on a quad-core CPU. Each application runs on a different physical processing core, as shown in the CPU usage history real-time graph (it uses one independent graph per processing core). Hence, it takes nearly the same time to run four applications as to run just one: running one application takes 6 seconds; running four applications takes 7 seconds. What you see is what you get; there are no tricks. Multi-core offers more processing power, and it is really easy to test this. However, most software wasn’t developed to take advantage of these parallel architectures within a single application.

There is another simple video showing one application running on a quad-core CPU. The first time, it runs using a classic, old-fashioned serial programming model; hence, it uses just one of the four cores available, as shown in the CPU usage history real-time graph. Then, the same application runs in a parallelized version, taking less time to do the same job.
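
The following is only a rough sketch of the parallelization idea, not the actual code from the video: the same workload is split into four chunks and each chunk is processed by its own thread, so a quad-core CPU can work on all four at once.

    // Sketch of the serial-versus-parallel idea: split one workload into
    // four chunks, each processed by its own POSIX thread (link with
    // -lpthread), so a quad-core CPU can work on all chunks at once.
    #include <pthread.h>
    #include <cstdio>

    const int kChunks = 4;
    const long kPerChunk = 10000000L;

    static double partial[kChunks];

    static void *process_chunk(void *arg)
    {
        long chunk = reinterpret_cast<long>(arg);
        double sum = 0.0;
        for (long i = 0; i < kPerChunk; ++i)   // placeholder workload
            sum += (chunk * kPerChunk + i) * 0.000001;
        partial[chunk] = sum;
        return NULL;
    }

    int main()
    {
        pthread_t threads[kChunks];
        for (long c = 0; c < kChunks; ++c)
            pthread_create(&threads[c], NULL, process_chunk,
                           reinterpret_cast<void *>(c));

        double total = 0.0;
        for (long c = 0; c < kChunks; ++c)
        {
            pthread_join(threads[c], NULL);
            total += partial[c];
        }
        std::printf("Total: %f\n", total);
        return 0;
    }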

In recent years, parallel hardware became the mainstream standard in most developed countries. The great problem is that hardware evolved much faster than software, resulting in a large gap between the two. Microprocessors added new features that software developers didn’t exploit. Why did this happen? Because it was very complex to accomplish. By the way, it’s still a complex task. I’ll get back to this later.

Meanwhile, the most widespread model for multiprocessor support, SMP (Symmetric Multi-Processor), is giving up the pole position to NUMA (Non-Uniform Memory Access). On the one hand, with SMP, the processor bus becomes a limitation to future scalability because each processor has equal access to memory and I/O. On the other hand, with NUMA, each processor accesses the memory close to it faster than the memory that is farther away. NUMA offers better scalability when there are more than four processors.

With NUMA, computers have more than one system bus, and each available system bus serves a certain set of processors. Hence, each set of processors can access its own memory and its own I/O channels. The sets are still able to access the memory owned by the other sets, with appropriate coordination schemes. However, it is obviously more expensive to access the memory owned by other sets of processors (foreign NUMA nodes) than to work with the memory reached through the local system bus (the NUMA node’s own memory).

Therefore, NUMA hardware requires different kinds of optimizations. Applications have to be aware of the NUMA hardware and its configuration so that concurrent tasks and threads that access nearby memory locations run on the same NUMA node. Applications must avoid expensive remote memory accesses and must favor concurrency that takes memory locality into account.
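
On Linux, for instance, this kind of control is available through the libnuma library. The following is only a sketch, assuming libnuma is installed (link with -lnuma), that keeps a thread and its working buffer on the same node.

    // Sketch of NUMA-aware placement on Linux with libnuma: run the thread
    // on a node and allocate its working buffer on that same node, so
    // memory accesses stay local instead of crossing to a foreign node.
    #include <numa.h>
    #include <cstdio>

    int main()
    {
        if (numa_available() < 0)
        {
            std::printf("No NUMA support on this system\n");
            return 0;
        }

        const int node = 0;
        const size_t size = 64 * 1024 * 1024;  // 64 MiB

        numa_run_on_node(node);  // keep this thread on node 0
        double *buffer =
            static_cast<double *>(numa_alloc_onnode(size, node));  // local memory

        if (buffer != NULL)
        {
            // ... process the buffer with tasks scheduled on the same node ...
            numa_free(buffer, size);
        }
        return 0;
    }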

The new free lunch offers many-core scalability. Expect more cores in the coming months and years. Learn about the new microprocessors. Be aware of NUMA and optimize your applications for these new, powerful architectures.

The New Specialized Hardware

On the one hand, we have a lot of software that is not taking full advantage of the available hardware power. On the other hand, there are many manufacturers developing additional hardware to offload processing from the main CPU. Does this make sense?

This means you are wasting watts all the time because you’re using obsolete software. To solve this problem, you have to add additional, expensive hardware to free CPU cycles. Yet you aren’t even using entire cores.

TCP/IP Offload Engine (TOE) uses a more powerful microprocessor on the NIC (Network Interface Card) or HBA (Host Bus Adapter) to process TCP/IP over Ethernet in dedicated hardware. This technique eliminates the need to process TCP/IP in software running on top of the operating system and consuming cycles from the main CPU. It sounds really attractive, especially when working with 10 Gigabit Ethernet and iSCSI.

CPUs keep adding cores, yet so far modern software is not taking full advantage of them. Still, you apparently need new specialized hardware to handle the network I/O. Most drivers don’t even take advantage of the old parallel processing capabilities based on SIMD (Single Instruction Multiple Data) available since the arrival of the Pentium MMX. TCP/IP Offload Engine is a great idea. However, if I own a quad-core CPU with outstanding vectorization capabilities, including SSE4.2 and its predecessors, I’d love my TCP/IP stack to take advantage of them.

Vectorization based on SIMD allows a single CPU instruction to process multiple data elements at the same time, speeding up complex algorithms many times over. For example, an encryption algorithm requiring thousands of CPU cycles could produce the same results in less than a quarter of those cycles using vectorization instructions.
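
Here is a minimal sketch of the idea using SSE intrinsics, assuming an SSE-capable x86 CPU and the xmmintrin.h header: a single _mm_add_ps instruction performs four single-precision additions at once.

    // SIMD sketch with SSE intrinsics: one instruction (_mm_add_ps) adds
    // four single-precision floats at a time, instead of four separate
    // scalar additions.
    #include <xmmintrin.h>
    #include <cstdio>

    int main()
    {
        float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
        float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
        float r[4];

        __m128 va = _mm_loadu_ps(a);     // load four floats
        __m128 vb = _mm_loadu_ps(b);
        __m128 vr = _mm_add_ps(va, vb);  // four additions in one instruction
        _mm_storeu_ps(r, vr);

        std::printf("%f %f %f %f\n", r[0], r[1], r[2], r[3]);
        return 0;
    }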

Something pretty similar happens with games. Games are always asking for new GPUs. However, most games take advantage of neither multi-core nor vectorization capabilities offered by modern CPUs. I don’t want to buy new hardware because of software inefficiencies. Do you?

Modern GPUs (Graphics Processing Units) are really very powerful and offer outstanding processing power. There are standards, such as CUDA and OpenCL, that allow software developers to use the GPU like a CPU, running general-purpose code on it to offload the main CPU. It sounds really attractive. However, again, most software does not take full advantage of multi-core, so it seems rather difficult to expect commercial and mainstream software to embrace the possibilities offered by these modern and quite expensive GPUs. Most modern notebooks don’t include such GPUs. Therefore, I see many limitations to this technique.
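
As a hedged sketch of the approach, the following OpenCL host program (error checking omitted for brevity; it assumes an OpenCL-capable GPU, driver and headers, and links with -lOpenCL) builds a tiny kernel that doubles every element of a buffer and runs it on the GPU.

    // Sketch of running general-purpose code on the GPU with OpenCL.
    // The kernel doubles every element; the host copies data in,
    // launches the kernel and reads the result back.
    #include <CL/cl.h>
    #include <cstdio>

    static const char *kSource =
        "__kernel void scale(__global float *data) {\n"
        "    size_t i = get_global_id(0);\n"
        "    data[i] = data[i] * 2.0f;\n"
        "}\n";

    int main()
    {
        const size_t n = 1024;
        float data[1024];
        for (size_t i = 0; i < n; ++i) data[i] = (float)i;

        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        cl_context context = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue queue = clCreateCommandQueue(context, device, 0, NULL);

        cl_mem buffer = clCreateBuffer(context,
                                       CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                       n * sizeof(float), data, NULL);

        cl_program program =
            clCreateProgramWithSource(context, 1, &kSource, NULL, NULL);
        clBuildProgram(program, 1, &device, NULL, NULL, NULL);
        cl_kernel kernel = clCreateKernel(program, "scale", NULL);
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &buffer);

        // One work-item per element; the GPU runs them in parallel.
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(queue, buffer, CL_TRUE, 0, n * sizeof(float),
                            data, 0, NULL, NULL);

        std::printf("data[10] = %f\n", data[10]);  // expect 20.0

        clReleaseMemObject(buffer);
        clReleaseKernel(kernel);
        clReleaseProgram(program);
        clReleaseCommandQueue(queue);
        clReleaseContext(context);
        return 0;
    }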

Before considering these great but limited capabilities, it seems logical to exploit the main CPU’s full processing capabilities. Most modern notebooks offer dual-core CPUs.

Specialized hardware is very interesting indeed. However, it isn’t available in every modern computer. It seems logical to develop software that takes full advantage of all the power and instruction sets offered by modern multi-core CPUs before adding more specialized and expensive hardware.

In part two of Challenges in Multi-Core Era, I’ll compare the multi-core capabilities of the latest operating systems.
