Friday, April 19, 2013

Multi-Core Computing

Most modern Central Processing Units (CPUs) are multi-core, and are advertised as such. For example, Intel's processors are usually advertised as "Intel i3 Dual-Core Processor" or "Intel i5 Quad-Core Processor," and more recently there are even hexa-core i7 chips. But what does it really mean, say, to run a hexa-core processor? On the right you can see a hand-drawn example of a single core of a CPU. A core can execute a certain number of operations per clock cycle (which used to be one, but has since increased). An operation can take the form of an arithmetic or logical instruction (like ADD, AND, or XOR) on two binary values of a certain size, with the result then reported wherever it needs to go. There are many different kinds of CPU operations, but most actual computing time is spent on examples like these. Processing is just a bunch of math.

Adding a second core to a CPU, as you'd expect, can theoretically double your operations-per-clock-cycle figure. Adding four can quadruple it, adding six can sextuple it, and so on. Within the last year, an exciting engineering startup named Adapteva began work on its Parallella project, built around a 64-core processor whose cores are connected in a square mesh. The future of multi-core computing is definitely an exciting one.

So at this point, it seems like the more cores, the better. Right? Not exactly, as there are many complications when it comes to programming for multi-core architectures. As an example, take a simple Fibonacci sequence calculation. Inside the main loop, the current step, which adds the two previous numbers, requires that those two previous numbers have already been calculated, and so on down the chain. This greatly limits the amount of multi-tasking that is possible. So computer programs need to specify when multi-core, or "threaded," operation is allowed.

Below are two graphs showing per-core CPU usage (as a percentage) on one of ASU's computers. The graph on the top was recorded during testing of C++ programs, and the one on the bottom during testing of Java programs.

CPU usage per core during testing of C++ Applications


CPU usage per core during testing of Java Applications

Clearly, there are stark differences between these two pictures. So what exactly is happening to cause them?

Well, the difference lies in the languages. C++ requires the programmer to explicitly say when operations can be threaded, and when no such code is written, no multi-core execution takes place. Java also lets the programmer define explicitly threaded operations, but it doesn't require them in order to use more than one core: the Java Virtual Machine is itself multi-threaded, running things like just-in-time compilation and garbage collection concurrently with the program, so activity gets spread across cores even while single-threaded code executes. So, in conclusion, while the C++ graphs look clean and tidy and the Java graphs look like a mess, Java is actually taking more advantage of the CPU's architecture.

So now I need to mention the program samples I ran. The CSE students wrote their programs to perform correctly, not efficiently. Some over-achievers might optimize their code, but most students just try to make the program do what it's supposed to, and when it works, they turn it in. If this code were written by an actual software company, like Microsoft or Apple, it would almost certainly be worth their time to optimize with multi-core architecture in mind. Students, on the other hand, would not bother.
This brings to mind some pros and cons. On one hand, professional teams get finer control over threaded optimization using C++. On the other hand, Java's automatic use of multiple cores makes programming much more convenient, and it can significantly cut the runtime of a program written by a lone programmer who otherwise wouldn't have had the time to write lines and lines of threading code.

Of course, this topic will be more deeply analyzed and explained in my upcoming presentation.

Thanks for reading.
- Jeff 
