Allowing Parallel Solutions

I recently watched an interesting talk by Guy L. Steele Jr. showing how different problem-solving approaches can allow or prevent a parallel implementation.

Vocabulary

The vocabulary you have for discussing solutions and algorithms affects how you approach new problems. Guy Steele showed that everyone in his audience already understood the ideas behind iterative, sequential solutions (loops, lists, accumulators), but only a minority already understood the ideas behind potentially parallelizable solutions (parallel prefix).
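To make "parallel prefix" concrete, here is a minimal sketch in C# of an inclusive prefix sum using the recursive-doubling scheme: each pass combines elements a power-of-two stride apart, so the scan finishes in O(log n) parallel passes instead of one long sequential accumulation. The name PrefixSum and the use of Parallel.For are my choices for illustration, not from the talk.

```csharp
using System;
using System.Threading.Tasks;

static class Scan
{
    // Inclusive prefix sum by recursive doubling: after pass k, each
    // element holds the sum of the 2^k values ending at its position.
    // O(log n) passes, each one fully parallel across the array.
    public static int[] PrefixSum(int[] input)
    {
        var current = (int[])input.Clone();
        for (int stride = 1; stride < current.Length; stride *= 2)
        {
            var next = new int[current.Length];
            Parallel.For(0, current.Length, i =>
                next[i] = i >= stride
                    ? current[i] + current[i - stride]
                    : current[i]);
            current = next;
        }
        return current;
    }
}

// Scan.PrefixSum(new[] { 1, 2, 3, 4 }) -> { 1, 3, 6, 10 }
```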

Like design patterns for object-oriented programming and monads for functional programming, a higher-order vocabulary of parallel-programming idioms would help us both communicate with each other and think differently when considering solutions.

Programming Languages

The programming language you develop in informs the type of solution you think of: the kinds of problems and solutions you have learned and used shape how you approach new ones.

Lisp and its successors use linked lists as the common collection type, making recursive, sequential traversal of lists a natural approach.
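Transliterated into C# (the Cons record below is a hypothetical stand-in for Lisp's cons cells), the shape of that approach is clear: the recursion follows the links one at a time, so the traversal is inherently sequential.

```csharp
// A hypothetical cons-cell list, mirroring Lisp's basic building block.
record Cons(int Head, Cons? Tail);

static class ListSum
{
    // Each call must wait for the sum of the rest of the list:
    // a chain of dependencies, strictly sequential.
    public static int Sum(Cons? list) =>
        list is null ? 0 : list.Head + Sum(list.Tail);
}
```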

Procedural and most object-oriented languages use loops as the common control-flow mechanism, making sequential iteration over data a natural approach.
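The loop-based equivalent is just as sequential: the accumulator threads a data dependency through every iteration. A sketch:

```csharp
static class LoopSum
{
    // The running total makes each iteration depend on the previous
    // one, fixing a single left-to-right evaluation order.
    public static int Sum(int[] values)
    {
        var total = 0;
        foreach (var v in values)
            total += v;
        return total;
    }
}
```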

Both of these defaults make it harder to think of a potentially parallelizable approach: first you think of the natural sequential solution, then you have to discard it to find something more general. You have to unlearn.
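The more general formulation expresses the computation as a reduction with an associative operator, leaving the evaluation order unspecified. A sketch of the same sum in C#: because addition is associative, the two halves could be evaluated one after the other or on separate cores, with the same result.

```csharp
using System;

static class TreeSum
{
    // Divide and conquer: the shape of the computation is a tree,
    // not a chain; either half can be evaluated first, or both at once.
    public static int Sum(ReadOnlySpan<int> values) => values.Length switch
    {
        0 => 0,
        1 => values[0],
        _ => Sum(values[..(values.Length / 2)])
           + Sum(values[(values.Length / 2)..]),
    };
}

// TreeSum.Sum(new[] { 1, 2, 3, 4 }) -> 10, in whatever order
```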

Many OO languages have recently borrowed ideas from functional languages. This has exposed many more programmers to functional-programming ideas (immutability, higher-order functions, map/reduce) and spread the vocabulary so these techniques can be discussed and considered when approaching a new problem. So spreading new ideas is possible.
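C#'s LINQ is a good example of this borrowing: the same pipeline of higher-order functions runs sequentially by default, and PLINQ's AsParallel partitions it across cores. The data and variable names here are illustrative.

```csharp
using System.Linq;

int[] data = { 1, 2, 3, 4, 5 };

// A sequential pipeline of higher-order functions.
var sumOfSquares = data.Select(x => x * x).Sum();

// The same pipeline, partitioned across cores by PLINQ.
var parallelSumOfSquares = data.AsParallel().Select(x => x * x).Sum();
```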

New Directions

If we can write solutions that don't require sequential or parallel execution but allow either, then we can embed these ideas in our programming languages and leave the compiler or runtime to choose the best execution strategy.
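A sketch of what this can look like in C# today: PLINQ's Aggregate overload has the programmer state the algebra of the reduction (a seed, a per-element fold, and a combiner that must be associative) and leaves partitioning and scheduling to the runtime.

```csharp
using System.Linq;

int[] data = { 1, 2, 3, 4, 5, 6, 7, 8 };

// We declare how values fold and how partial results combine;
// the runtime decides whether and how to partition the work.
var total = data.AsParallel().Aggregate(
    0,                    // seed for each partition
    (acc, x) => acc + x,  // fold elements within a partition
    (a, b) => a + b,      // combine partition results (associative)
    acc => acc);          // final projection
```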

There are limited attempts at this already. Parallel.For in C# and parallel_for in C++ allow operations to be spread across processors by partitioning the range of data. But because they are embedded in languages where mutable data is the default, they require extra effort and care to use; they are not the natural approach.
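That extra effort is visible even in a simple sum: with shared mutable state as the default, a naive `total += data[i]` inside Parallel.For would be a data race, so you must reach for the overload with per-thread accumulators and an explicit combining step. A sketch:

```csharp
using System.Threading;
using System.Threading.Tasks;

int[] data = new int[1_000_000];
long total = 0;

// Each thread folds into its own local accumulator, which is
// merged into the shared total exactly once, under Interlocked.
Parallel.For(0, data.Length,
    () => 0L,                                     // per-thread seed
    (i, loopState, local) => local + data[i],     // per-thread fold
    local => Interlocked.Add(ref total, local));  // merge safely
```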

Hopefully, experiments in new languages built for parallel computing will allow these ideas to gain acceptance and then be used more widely.

Guy Steele compared developers' likely reservations about giving compilers control over the parallel behaviour of our software to earlier reservations about structured programming (the restrictive if, while, and for instead of the freedom of goto) and later about garbage-collected memory (higher memory usage and non-deterministic runtimes). He acknowledged it would be hard to create new languages and compilers, and challenging to get them accepted.

But this is just one more step in the general trend in computing of abstracting away concerns: spending more developer time on the problem and letting the compiler deal with more of the details of the solution.
