The Olympic motto is “Faster, Higher, Stronger”. Computing has a similar motto: “Faster, Larger, and Cheaper”. The evolution of computer hardware over the last sixty years has been remarkable: computers have become faster; their capacity to store data has grown tremendously, making it possible to work with much larger data sets; and the price per unit of computing and storage capacity has continued to fall, making all kinds of applications feasible.
Using several processors, or even several computers, to solve a problem in a shorter amount of time has become an attractive option for applications that require large amounts of processing time. A wide variety of computer architectures is in use with the aim of reducing processing times, and many software systems have been devised to take advantage of those architectures, from threads, to compiler directives, to message-passing systems.
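As a small illustration of the idea, the sketch below (our own example, not from a specific application) splits a CPU-bound computation across several worker processes using Python's standard `multiprocessing` module; the hypothetical `count_primes` function simply stands in for any expensive, independent unit of work.

```python
from multiprocessing import Pool

def count_primes(limit):
    """Count primes below `limit` by trial division (deliberately CPU-bound)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [10_000, 20_000, 30_000, 40_000]

    # Sequential version: process one limit at a time.
    sequential = [count_primes(limit) for limit in limits]

    # Parallel version: a pool of worker processes handles the limits concurrently.
    with Pool() as pool:
        parallel = pool.map(count_primes, limits)

    # Both versions must produce identical results.
    assert parallel == sequential
```

Because the four calls are independent, the pool can run them on separate cores; on a multi-core machine the parallel version typically finishes in a fraction of the sequential time.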
Typical efforts to parallelize an application are based on collaboration between the original developer of the sequential version of the code, who is usually a domain expert, and specialists in parallel programming. The code is profiled to identify the portions that consume the bulk of the execution time. Those portions are then examined in detail to choose an appropriate parallelization strategy, depending on the target hardware. The code is parallelized and tested to confirm that the new parallel version produces correct results. In many cases, the execution times of the parallel version are substantially shorter than those of the original sequential version.
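The profiling step described above can be sketched with Python's built-in `cProfile` module; the function names here (`slow_inner`, `fast_setup`) are hypothetical placeholders for real application code.

```python
import cProfile
import io
import pstats

def slow_inner(n):
    # Hot spot: a quadratic-time pairwise sum dominates the runtime.
    return sum(i * j for i in range(n) for j in range(n))

def fast_setup(n):
    # Cheap by comparison: linear-time setup work.
    return list(range(n))

def main():
    fast_setup(1_000)
    slow_inner(300)

# Profile a run of the application and collect the statistics.
profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Rank functions by cumulative time; the hot spot rises to the top.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

A report like this tells the team where parallelization effort will pay off: only the functions that dominate the cumulative time are worth restructuring.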
If you have an application that is taking much longer than you expect and you would like to discuss the feasibility of parallelizing it, do not hesitate to contact us at the GVSU Applied Computing Institute.