Friday, April 19, 2013

Multi-Core Computing

Most modern Central Processing Units (CPUs) are multi-core, and are advertised as such. For example, Intel's processors are usually advertised as "Intel i3 Dual-Core Processor" or "Intel i5 Quad-Core Processor," and more recently there is even the "Intel i7 Hexa-Core Processor." But what does it really mean to run, say, a hexa-core processor? On the right you can see a hand-drawn example of a single core of a CPU. A core can execute a certain number of operations per clock cycle (which used to be 1, but has since increased). An operation can be an arithmetic or logical instruction (like ADD, AND, or XOR) on two binary values of a certain size, with the result reported to wherever it needs to be. There are many different forms of CPU operations, but most actual computing time is spent on instructions like these; processing is just a lot of math. Adding a second core to a CPU, as you would expect, can theoretically double your operations-per-clock-cycle value. Four cores can quadruple it, six can sextuple it, and so on and so forth. Within the last year, an exciting engineering startup named Adapteva began work on the Parallella, a board built around a 64-core processor whose cores are connected in a square mesh. The future of multi-core computing is definitely an exciting one.
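To make the idea concrete, here is a minimal Java sketch (just an illustration, not code from my project) of splitting a simple sum across one worker thread per available core:

```java
public class ParallelSum {
    // Sum 1..n by splitting the range across worker threads,
    // ideally one per available core.
    static long parallelSum(long n, int workers) {
        long[] partial = new long[workers];
        Thread[] threads = new Thread[workers];
        long chunk = n / workers;
        for (int i = 0; i < workers; i++) {
            final int id = i;
            final long lo = id * chunk + 1;
            final long hi = (id == workers - 1) ? n : (id + 1) * chunk;
            threads[i] = new Thread(() -> {
                long s = 0;
                for (long v = lo; v <= hi; v++) s += v;
                partial[id] = s; // each worker writes only its own slot
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            try { t.join(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        long total = 0;
        for (long p : partial) total += p;
        return total;
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(parallelSum(1_000_000, cores)); // 500000500000
    }
}
```

The pieces of work here are completely independent, which is exactly why they parallelize so cleanly; in practice the speedup depends on how independent the work actually is.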

So at this point, it seems like the more cores, the better. Right? Not exactly, as there are many complications when it comes to programming for multi-core architectures. As an example, take a simple Fibonacci sequence calculation. Inside the main loop, the current step, which adds the two previous numbers, requires that those two numbers have already been calculated, and so on down the chain. This greatly limits how much of the work can run in parallel. So computer programs need to specify when multi-core, or "threaded," execution is allowed.
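The dependency is easy to see in code. In this sketch, every pass through the loop reads the results of the previous two passes, so the iterations form a chain that cannot be handed out to separate cores:

```java
public class Fib {
    // Iterative Fibonacci: each step needs the two previous results,
    // so the iterations must run one after another.
    static long fib(int n) {
        long prev = 0, curr = 1;
        for (int i = 0; i < n; i++) {
            long next = prev + curr; // depends on both earlier values
            prev = curr;
            curr = next;
        }
        return prev;
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
    }
}
```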

Below are two graphs showing per-core CPU usage, as a percentage, on ASU's computers. The graph on the top was recorded during testing of C++ programs, and the one on the bottom during testing of Java programs.

CPU usage per core during testing of C++ Applications


CPU usage per core during testing of Java Applications
So, clearly, there are stark differences between these two pictures. What exactly is happening to cause them?

Well, the difference lies in the languages. C++ requires the programmer to explicitly say when operations can be threaded; when no such allowances are written, no multi-core optimization takes place. Java, by contrast, allows the programmer to define specific threaded operations, but it does not require them in order to use multiple cores. Java's VM itself runs on multiple cores and distributes work as code is compiled and executed. So, in conclusion, while C++'s graphs look clean and tidy and Java's look like a mess, Java is actually optimizing its code and taking more advantage of the CPU's architecture.
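One easy way to see that the JVM does more than just run your code: even a trivial program shares the VM with background threads. This sketch (a generic illustration, assuming nothing about my test programs) lists every live thread in the VM:

```java
import java.util.ArrayList;
import java.util.List;

public class VmThreads {
    // Walk up to the root thread group and collect the names of all
    // live threads in the VM: a snapshot for illustration only.
    static List<String> liveThreadNames() {
        ThreadGroup root = Thread.currentThread().getThreadGroup();
        while (root.getParent() != null) root = root.getParent();
        Thread[] live = new Thread[root.activeCount() * 2 + 1];
        int n = root.enumerate(live, true);
        List<String> names = new ArrayList<>();
        for (int i = 0; i < n; i++) names.add(live[i].getName());
        return names;
    }

    public static void main(String[] args) {
        // Alongside "main" you will typically see VM housekeeping threads
        // such as "Reference Handler" and "Finalizer".
        liveThreadNames().forEach(System.out::println);
    }
}
```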

Now I should mention the program samples I ran. The CSE students wrote these programs to perform correctly, not efficiently. Some over-achievers might optimize their code, but most students just try to make the program do what it is supposed to do, and when it works, they turn it in. If this code were written by an actual software company, like Microsoft or Apple, they would almost certainly invest time in optimizing it with multi-core architectures in mind. Students, on the other hand, would not.
This brings to mind some pros and cons. On one hand, professional companies have somewhat more control over threaded optimization using C++. On the other hand, Java's automatic optimizations make programming much more convenient, and they can significantly cut the runtime of a program written by a single programmer who otherwise wouldn't have had the time to write line after line of threading code.

Of course, this topic will be more deeply analyzed and explained in my upcoming presentation.

Thanks for reading.
- Jeff 

Friday, April 12, 2013

Debugging: What is it?

Debugging is the process of removing "bugs" from code. There are many different ways to do this, and sometimes it can be a long, drawn-out, and frustrating struggle. The problem is: everybody makes mistakes. No one can sit down and write 1000+ lines of code without something going wrong and misbehaving. What distinguishes good coders from great coders, among other things, is their ability to debug. Over the past few weeks I have done a lot of debugging and worked on my techniques, and I have been getting faster and faster at identifying the root of a problem. Here are a few ways I have become familiar with for watching code as it executes.

For code with lots of conditional branching, when something goes wrong, you need to know exactly where it goes wrong. In this case, a programmer can insert statements that write to the console at specific points in the code in order to follow along. I am quite fond of printing single "*", "+", or "-" characters after condition checks; this lets me tell exactly which branch of code is being executed when the undesired behavior occurs.
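As a sketch of the marker trick (the method and values here are hypothetical, not from my engine):

```java
public class BranchTrace {
    // Classify a value, printing a one-character marker after each
    // condition check so the executed branch shows up in the console.
    static String classify(int x) {
        if (x > 0) {
            System.out.print("+"); // positive branch taken
            return "positive";
        } else if (x < 0) {
            System.out.print("-"); // negative branch taken
            return "negative";
        }
        System.out.print("*"); // fell through both checks
        return "zero";
    }

    public static void main(String[] args) {
        classify(5);
        classify(-3);
        classify(0);
        System.out.println(); // console shows: +-*
    }
}
```

Watching the stream of markers while the program runs tells you which path was taken at the moment the misbehavior appears.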

Another way to do this is to print out all the objects involved in an event and do the condition checks manually. This process can be tedious because the programmer needs to define how each object "prints" as text, but thanks to Eclipse's brilliant Generate toString() feature, it is pretty pain-free.
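Here is roughly what such a dump looks like, using a hypothetical Ball class with made-up fields and a toString() in the style Eclipse generates:

```java
public class Ball {
    // Hypothetical fields for illustration; the real Entity class may differ.
    double x, y, vx, vy;

    Ball(double x, double y, double vx, double vy) {
        this.x = x; this.y = y; this.vx = vx; this.vy = vy;
    }

    // In the style of Eclipse's Generate toString(): one line listing
    // every field, so the object's full state can be dumped at a checkpoint.
    @Override
    public String toString() {
        return "Ball [x=" + x + ", y=" + y + ", vx=" + vx + ", vy=" + vy + "]";
    }

    public static void main(String[] args) {
        System.out.println(new Ball(1.0, 2.0, 0.5, -0.5));
        // prints: Ball [x=1.0, y=2.0, vx=0.5, vy=-0.5]
    }
}
```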

The last process I have been using to debug my code is really only useful for visual programs such as my physics engine. I have defined a few methods in my Entity class that let me draw the entity's previous path, its velocity vector, or both. This lets me visually watch what is happening behind the scenes of my application. In this picture, you can see the paths and velocity vectors of the balls, along with their directions, converted to degrees and printed above their images for clarity. This allows me to slow down their motion and carefully watch their behavior. The technique has come in quite handy, as I've recently discovered and fixed more than a few minor bugs in my program.
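The direction label itself is simple vector math. A sketch of converting a velocity vector to a heading in degrees (the names here are my illustration, not the engine's actual code):

```java
public class Heading {
    // Convert a velocity vector to a heading in degrees in [0, 360),
    // like the numbers printed above each ball.
    static double headingDegrees(double vx, double vy) {
        double deg = Math.toDegrees(Math.atan2(vy, vx));
        return (deg < 0) ? deg + 360.0 : deg;
    }

    public static void main(String[] args) {
        System.out.println(headingDegrees(1, 0));   // 0.0
        System.out.println(headingDegrees(0, 1));   // 90.0
        System.out.println(headingDegrees(-1, -1)); // 225.0
    }
}
```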

Thanks for reading!

-Jeff

Wednesday, April 10, 2013

More Work on Collisions!

Scrawlings of a madman
I have been struggling to successfully implement collisions between objects in my physics engine. It turned out to be a lot harder than I originally thought it would be. After many revisions and hours of frustration, I decided I needed to just sit down and write everything out. On the right is a picture of my desk, taken with my phone. This is how I tested different mathematical algorithms and discerned which ones work all the time, which work some of the time, and which don't work at all. Writing everything down certainly helped my thought process, but I suspect that my peers here at ASU now think I am insane.

The red ball was placed 1 unit to the left of the blue one.
Here's what I've managed so far. I have written a method of the Entity class that takes the angle of incidence of a collision and calculates the entity's new angle of movement. This method took ages to get right, and it now works on all 360 degrees. I have yet to add a damping force, so currently the collisions seem overly explosive, but I'll get to it eventually! I have also written an event handler for the collision event which calculates the angles of incidence of both entities and calls their respective methods. This handler, however, needs a lot of work: after only a little testing I have found some strange issues that most likely stem from there being no delay between bounces. Sometimes this results in the balls getting stuck inside each other and weirdly vibrating off the screen. A rather traumatic bug, if you ask me. Here's a screenshot of a successful collision! Woo!
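One way to write such a calculation, sketched here with a generic angle-reflection formula rather than my engine's exact method: the new movement angle is 2·normal − movement + 180°, which reverses the velocity component along the collision normal while keeping the tangential component.

```java
public class Bounce {
    // Reflect a movement angle (degrees) off a surface whose normal
    // points at the given angle (degrees). A sketch of the idea only.
    static double reflect(double movement, double normal) {
        double out = 2.0 * normal - movement + 180.0;
        return ((out % 360.0) + 360.0) % 360.0; // normalize to [0, 360)
    }

    public static void main(String[] args) {
        // Ball moving straight down (270°) hits the floor (normal 90°):
        // it bounces straight back up.
        System.out.println(reflect(270, 90)); // 90.0
        // Ball moving down-right (315°) off the floor: up-right (45°).
        System.out.println(reflect(315, 90)); // 45.0
    }
}
```

Without damping, the outgoing speed stays equal to the incoming speed, which matches the "overly explosive" behavior described above.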


As always, thanks for reading!

- Jeff

Friday, April 5, 2013

Standardized Engines

Since I will be writing two copies of my physics engine, one in Java and one in C++, my professor and I have found it necessary to keep the programs relatively similar. This will allow the visual segment of my presentation to be as accurate as possible. Dr. Bazzi recently asked me whether I had based my programs on any standard foundations, and I figured the best way to do that would be to keep the programs as true-to-life as possible and avoid any over-optimization. The programs' calculations will be floating-point evaluations of the kinematic equations. These equations keep the behavior of the objects in my engine as close to real life as possible, and the floating-point variables keep the values highly accurate. I will try to keep these physics calculations the bulk of each program's workload. In this way, I can maintain relatively similar programs across two different languages. This is necessary so that the comparison between the two programs is fair, as we will run the same benchmark tests on both.
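As a sketch of what one such floating-point kinematic step might look like (a generic example, not code from either engine):

```java
public class Kinematics {
    // One kinematic step: position from x = x0 + v0*t + (1/2)*a*t^2,
    // velocity from v = v0 + a*t, all in single-precision floats.
    static float[] step(float x0, float v0, float a, float t) {
        float x = x0 + v0 * t + 0.5f * a * t * t;
        float v = v0 + a * t;
        return new float[] { x, v };
    }

    public static void main(String[] args) {
        // A ball dropped from rest under gravity for one second.
        float[] s = step(0f, 0f, -9.8f, 1f);
        System.out.println(s[0] + " " + s[1]); // -4.9 -9.8
    }
}
```

Since both languages use IEEE 754 floats, the same equations evaluated this way should behave near-identically in the Java and C++ versions, which is what makes the benchmark comparison fair.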

I am aware things are getting quite technical as we approach the end of the project and begin work on the presentation. Thanks for reading.

- Jeff