Parallelizing Compilers

The task of a parallelizing compiler is this: given a program that takes a long time to run on a serial computer, and given a new computer containing multiple processing units that can operate concurrently, shorten the running time of the program by breaking it into pieces that can be processed in parallel, or in an overlapped fashion, on the multiple processing units. The additional task of the front end is to look for parallelism, and that of the back end is to schedule it so that correct results and improved performance are obtained. The question is what kind of pieces a program should be divided into and how those pieces may be rearranged. This involves

•  granularity, level, and degree of parallelism (see the loop sketch after this list)

•  analysis of the dependences among the candidates for parallel execution.
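
As a rough illustration of granularity and level, the generic Python sketch below (not tied to any particular compiler) contrasts a loop whose iterations are independent, and therefore candidates for concurrent execution, with a loop that carries a dependence from one iteration to the next and must run serially.

```python
# Sketch: loop-level parallelism (illustrative only).

def independent_loop(b, c):
    # Each a[i] depends only on b[i] and c[i]; iterations neither read nor
    # write each other's data, so a parallelizing compiler may distribute
    # them across processing units.
    return [b[i] + c[i] for i in range(len(b))]

def dependent_loop(b):
    a = [0] * len(b)
    for i in range(len(b)):
        # a[i] reads a[i-1]: a loop-carried (flow) dependence, so the
        # iterations must execute in order and cannot run concurrently.
        a[i] = (a[i - 1] if i > 0 else 0) + b[i]
    return a
```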

Since program pieces and multiple processing units come in a range of sizes, a fair number of combinations are possible, each requiring a different compiling approach. The various combinations nevertheless have certain needs in common, which are met by existing compiler optimization techniques for serial computers and by vectorization. The compiler first identifies potential parallel units in the program and then performs dependence analysis on them to find segments that are independent of each other and can be executed concurrently. This approach has been applied most successfully at the instruction level.
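
One common way to state the independence test described above is Bernstein's conditions: two segments may execute in parallel if neither writes data that the other reads or writes. The sketch below assumes each segment's read and write sets are already known; the segment names and variables are hypothetical.

```python
# Minimal sketch of a dependence check using Bernstein's conditions.
# Segments S1 and S2 can run in parallel when:
#   W1 ∩ R2 = ∅ (no flow dependence),
#   R1 ∩ W2 = ∅ (no anti-dependence),
#   W1 ∩ W2 = ∅ (no output dependence).

def can_run_in_parallel(reads1, writes1, reads2, writes2):
    return (not writes1 & reads2 and
            not reads1 & writes2 and
            not writes1 & writes2)

# Hypothetical segments:
#   S1: x = a + b   (reads a, b; writes x)
#   S2: y = c * d   (reads c, d; writes y)
#   S3: z = x + 1   (reads x;    writes z)  -- depends on S1
print(can_run_in_parallel({"a", "b"}, {"x"}, {"c", "d"}, {"y"}))  # True: independent
print(can_run_in_parallel({"a", "b"}, {"x"}, {"x"}, {"z"}))       # False: flow dependence on x
```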
