Many undergraduate programmes contain modules covering the hardware and programming of multi-processor systems. However, how can students gain practical experience of parallel programming when multi-processor systems are expensive and difficult to access? The approach taken in this text is to concentrate on parallel programming based on message passing, making use of multiple networked workstations running suitable software.
The text is in two parts: the first covers parallel programming fundamentals and the second covers algorithms targeted at specific application areas. After an initial chapter introducing parallel computing concepts, message passing is discussed in detail, together with the tools available for workstation cluster parallel programming (see below) and how to evaluate and debug parallel programs. Topics then covered include partitioning, pipelined computations, synchronisation and load balancing. Part one finishes with a chapter on parallel programming using shared memory, with programming examples in Unix and Java. Part two uses the fundamentals covered in part one to implement algorithms for solving application problems, e.g. sorting and searching, image processing (filtering, edge detection, transformations) and numerical algorithms (matrices and linear equations). Each chapter finishes with recommendations for further reading, a bibliography and problems (which could be used for assessment purposes).
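To give a flavour of the message-passing style the book teaches, here is a minimal sketch (not taken from the book) of a two-process exchange using the MPI library mentioned below; it assumes a working MPI installation on the workstation cluster.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);               /* start the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which process is this?    */

        if (rank == 0) {
            value = 42;
            /* process 0 sends one integer to process 1, message tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* process 1 receives the integer from process 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("process 1 received %d from process 0\n", value);
        }

        MPI_Finalize();                       /* shut down MPI */
        return 0;
    }

Launched with, for example, mpirun -np 2 ./a.out, the same send/receive pattern scales up to the partitioned and pipelined computations covered in the later chapters.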
A well-written and well-structured text covering theoretical fundamentals together with practical implementation, which can be carried out on modern low-cost workstations. A knowledge of C programming is assumed, with examples implemented using the PVM message-passing library (see http://www.netlib.org/pvm3/) and the MPI 'standard' for message passing (see http://www.osc.edu/mpi/); appendices contain details of PVM and MPI routines. There is also web-based support (see http://renoir.csc.ncsu.edu/CSC495A/). Recommended as a set book for modules on parallel programming.