There’s buzz in the air about Google’s new language Go. Naturally, I was excited hearing about it. After all, Google has produced so many interesting tools and frameworks to date there’s almost automatic interest in any new Google software release. But this wasn’t just a product, this was a Google language release. My programmer brain pricked up immediately.
Language releases always catch my attention. Since 1995, I’ve constantly wondered what is going to be the Great Java Killing Language. Java’s release was the Perfect Storm of Language Timing–the rise of the internet, the frustration with C++, the desire for dynamic web content, and a language bundled with a large set of useful libraries (UI, database, remoting, security, threading) the likes of which we’d never seen. Lots of languages have been released since, but none with quite the reception of Java. But with that perfect storm came some serious fallout.
At the same time Java rose to prominence as the de facto web and enterprise language of choice, Moore’s Law was hard at work, and hardware companies weren’t just building faster processors–they were building motherboards that supported multiple processors, and then multiple cores on each of those processors. Concurrency became the new belle of the ball, with every language scrambling to make sure it had support for it. In essence, Java brought attention to the Great Concurrency Problem that has haunted us for almost two decades now.
Before I address the Great Concurrency Problem, we have to agree that most people confuse Concurrency with Parallelism. Let’s start with the definitions from Sun’s Multithreaded Programming Guide:
- Parallelism: A condition that arises when at least two threads are executing simultaneously.
- Concurrency: A condition that exists when at least two threads are making progress. A more generalized form of parallelism that can include time-slicing as a form of virtual parallelism.
Parallelism has only come about with multi-processor/multi-core machines in the last decade or so. Previously, we used Concurrency to simulate Parallelism. We program our applications to run as concurrent threads, and we’ve been doing that for years, even on single-processor machines that time-slice between threads. But the Great Concurrency Problem is really a problem about the difference between Human Thinking and actual Machine Processing. We tend to think about things linearly, going from Breakfast to Lunch to Dinner in a logical fashion. In the background of our minds, we know other things are going on; we may even be semi-aware of some of them. And occasionally, we get those “Aha!” moments from that background processing of previous subjects. We take this mental model and attempt to create a similar configuration in our software. But the shared-memory concurrency model used by Java and other languages creates implicit problems that our brains don’t really have. Shared memory is a tricky beast. You have objects and data inside Java that multiple threads can access in ways that aren’t intuitive or easily understood, especially as the objects you share get more and more complex.
There are really two main models for concurrent programming: shared memory and message-passing communication. Both have their ups and downs.
Shared memory communication is the more common of the two and is present in most mainstream languages we use today. Java, C#, C++ and C all use shared memory communication in their thread programming models. Shared memory communication depends on memory locations that two or more threads can access simultaneously. The main danger of shared memory is that we share complex data–whole objects on the heap, for example. Each thread can operate on that data independently, without regard to how other threads need to access it. Access is controlled through monitors, mutexes and semaphores. Making sure you have the right amount of control is the tough part. Too little and you corrupt your data. Too much and you create deadlocks.
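To make that concrete, here’s a minimal sketch of the shared-memory model in Go (which offers it through the sync package, much as Java does through synchronized blocks). The counter and the loop counts are purely illustrative; the point is that one missing lock is all it takes.

```go
package main

import (
	"fmt"
	"sync"
)

// A minimal sketch of shared-memory communication: many threads
// (goroutines here) all touching one memory location. The mutex is
// the "right amount" of control; remove it and the count silently
// comes up short, because too little control corrupts your data.
func main() {
	var (
		mu    sync.Mutex
		count int
		wg    sync.WaitGroup
	)

	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock() // comment out these two lines and the
				count++   // final count becomes unpredictable
				mu.Unlock()
			}
		}()
	}

	wg.Wait()
	fmt.Println("count:", count) // 100000 with the lock; who knows without it
}
```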
Let me give a concrete example of just how nasty shared memory communication can get. Let’s say you’re handling image processing via threads in a shared-memory model–like Photoshop does for image resizing. And let’s say you’re trying to parallelize this processing such that more than one thread handles a given image. (Yes, I understand we don’t do that today and there’s a good reason for that. This is an analogy, just keep your shirt on a sec.) An image is an incredibly complex object: RGB values, size, scale, alpha, layers if you’re in Photoshop, color tables and/or color spaces depending on the format, compressed data, etc. So what happens when Thread A is analyzing the pixel data for transformation and Thread B is trying to display that information on the screen? If Thread A modifies something that Thread B was expecting to be invariant, interesting things happen*. Thread A may accidentally corrupt the state of the image as Thread B sees it if Thread B doesn’t lock the entire object during read operations. That’s because Threads A and B are sharing the entire object. Oh sure, we can break the image down into smaller, simpler data abstractions, but you’re only doing that because of the shared memory problem. Fundamentally, Java objects can be shared between threads. That’s just a fact.
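If that sounds abstract, here’s a deliberately broken Go sketch of exactly that scenario (the Image type and its fields are hypothetical stand-ins for the real complexity). Thread A resizes, Thread B reads, nobody locks anything, and the invariant that the pixel buffer matches the dimensions quietly stops holding:

```go
package main

import (
	"fmt"
	"time"
)

// Image stands in for the "incredibly complex object" above; the
// fields are made up, just enough to show the race.
type Image struct {
	Width, Height int
	Pixels        []byte // should always hold Width * Height bytes
}

func main() {
	img := &Image{Width: 100, Height: 100, Pixels: make([]byte, 100*100)}

	// Thread A: transforms the image, resizing it as it goes.
	go func() {
		for i := 0; i < 1000000; i++ {
			w, h := 50+i%200, 50+i%200
			img.Width, img.Height = w, h   // two writes...
			img.Pixels = make([]byte, w*h) // ...and a third, never atomic together
		}
	}()

	// Thread B: "displays" the image, assuming the invariant holds.
	go func() {
		for i := 0; i < 1000000; i++ {
			if len(img.Pixels) != img.Width*img.Height {
				fmt.Println("torn read: pixel buffer doesn't match the dimensions")
			}
		}
	}()

	time.Sleep(time.Second)
}
```

Run it with `go run -race` and Go’s race detector will point at every one of those unsynchronized accesses.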
Keep in mind this is just a TWO thread example. When you write concurrent systems, two threads is like a warm up before the Big Game–we’re barely getting started. Real systems use dozens, if not hundreds of threads. So if we’re already having trouble keeping things straight with two threads, what happens when we get to 20? 200? The problem is that modeling any system using concurrent programming tools yields a subtle mess of timing bugs and problems that rarely appear until you have mountains of production data or traffic hammering your system. Precisely when it’s too late to do anything about it.
Even Java’s own documentation from ages ago cautions just how hard this problem really is:
‘‘It is our basic belief that extreme caution is warranted when designing and building multi-threaded applications … use of threads can be very deceptive … in almost all cases they make debugging, testing, and maintenance vastly more difficult and sometimes impossible. Neither the training, experience, or actual practices of most programmers, nor the tools we have to help us, are designed to cope with the non-determinism … this is particularly true in Java … we urge you to think twice about using threads in cases where they are not absolutely necessary …’’
Harsh words, tucked away at the bottom of that documentation, from a language that really opened Pandora’s Box in terms of giving us the tools to make concurrency an everyday part of our applications.
Message-passing communication is perhaps the safer of the two models. Originally derived from Hoare’s Communicating Sequential Processes (CSP), message-passing communication is used in languages like Erlang, Limbo and now, Go. In message-passing communication, threads exchange messages carrying discrete amounts of local data via channels. I like to think of message-passing communication as a kind of algorithmic atomicity: you are performing some action, say, transforming an image, and at a certain step you need the data from the image’s color table. So you wait for a message from another thread telling you that data is available, and then continue processing locally in your own algorithm.
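Here’s roughly what that looks like in Go, whose channels descend directly from CSP. Everything here (the ColorTable type, the loadColorTable function) is a made-up stand-in for the image example above, but the shape is the point: the transforming goroutine simply blocks on the receive until the data it needs arrives as a message.

```go
package main

import "fmt"

// ColorTable is a stand-in for the piece of image data the transform
// step needs; the type and function names are illustrative only.
type ColorTable struct {
	Entries []uint32
}

// loadColorTable pretends to do the slow work of decoding a palette,
// then hands the result off as a message on the channel.
func loadColorTable(out chan<- ColorTable) {
	table := ColorTable{Entries: []uint32{0x000000, 0xFFFFFF, 0xFF0000}}
	out <- table // send: the data crosses to the other goroutine here
}

func main() {
	tables := make(chan ColorTable)
	go loadColorTable(tables)

	// The transform goroutine (main, in this sketch) gets on with its own
	// local work; when it reaches the step that needs the color table,
	// it blocks on the receive until the message shows up.
	table := <-tables
	fmt.Println("transforming with", len(table.Entries), "palette entries")
}
```

Nothing in the transform step ever reaches into another thread’s memory; it only ever sees the value that was handed to it.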
Because threads are restricted in what they can share, the risk of corrupted data and deadlocks drops considerably. But this comes with a higher processing cost than shared memory communication. With shared memory, data never has to be copied before a thread can touch it; with message-passing, every exchange pays that copying cost. Until recently, message-passing communication was considered far too expensive to use for real-time systems. But our multi-core, multi-processor world of the 21st century has finally broken down that barrier.
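A tiny sketch of that cost, again in Go and again with made-up names: a value sent on a channel is copied, so the receiver gets data nobody else can touch, but that copy is exactly the work shared memory never had to do. (Passing a pointer instead buys the speed back, at the price of sharing again.)

```go
package main

import "fmt"

// Palette is illustrative only; the fixed-size array makes the copy visible.
type Palette struct {
	Entries [256]uint32 // about 1 KB, copied in full on every send
}

func main() {
	ch := make(chan Palette, 1)

	original := Palette{}
	original.Entries[0] = 0xCAFE

	ch <- original          // the whole array is copied into the channel
	original.Entries[0] = 0 // mutating the original afterwards...

	received := <-ch
	fmt.Printf("%#x\n", received.Entries[0]) // ...doesn't touch the copy: prints 0xcafe
}
```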
The question is, does Go really solve that problem in a way that overthrows Java as King of the Enterprise? Tune in tomorrow for Part Two, where we look at Go’s features, whether Go really addresses any of these problems, and if Java is doomed.
* “Interesting” is the default programmer adjective we tend to apply when what we really mean is “incredibly BAD”.