Project Description

The growth of algorithms utilizing parallel, concurrent, and distributed computing techniques to solve problems in science and engineering continues unabated. In this project, we will analyze and compare how concurrent algorithms perform on various parallel computing platforms. We will implement and evaluate these algorithms in C/C++, Haskell, or Go, on platforms such as MPI/OpenMP, CUDA, Intel Threading Building Blocks (TBB), and the C++ Standard Thread Library.

Problems will be selected based on the interest of the student and may be drawn from domains such as numerical computation or machine learning. Special emphasis will be given to developing skills in GPU programming. Finally, we will combine the platforms discussed above, for example using CUDA-aware MPI, to re-implement these algorithms and assess whether their performance can be improved further.