NDAK14008U Programming Massively Parallel Hardware (PMPH)
MSc Programme in Bioinformatics
MSc Programme in Computer Science
In simple terms, the aim of the course is to teach students how
to write programs that run fast on highly parallel hardware, such
as general-purpose graphics processing units (GPGPUs), which are
now mainstream. Such architectures are, however, capricious;
unlocking their power requires an understanding of their design
principles as well as specialised knowledge of code transformations,
for example those aimed at optimising locality of reference, the degree
of parallelism, etc. Accordingly, this course is organised
into three tracks: hardware, software, and lab.
The Software Track teaches how to think parallel. We introduce the
map-reduce functional programming model, which builds programs
naturally, like puzzles, from nested compositions of
implicitly parallel array operators rooted in the
mathematical structure of list homomorphisms. We reason about the
asymptotic (work and depth) properties of such programs and discuss
the flattening transformation, which converts
arbitrarily nested parallelism into a more restricted form that can
be mapped directly to the hardware. We then turn our attention
to legacy sequential code written in programming languages such as
C. In this context we study dependence analysis as a tool for
reasoning about loop-based optimisations (e.g., is it safe to
execute a given loop in parallel, or to interchange two loops?). As
time permits, we may cover more advanced topics, for example
dynamic analysis for optimising locality of reference.
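As an informal taster of the kind of reasoning taught here, consider the following minimal C sketch (the arrays and loops are hypothetical illustrations, not course material):

    #include <stddef.h>

    /* Illustrative only: a, b, c are hypothetical arrays of length n. */
    void dependence_examples(size_t n, float *a, const float *b, float *c) {
        /* No cross-iteration dependences: each iteration writes a distinct
           element of c and reads only a[i] and b[i], so dependence analysis
           concludes that the loop is safe to execute in parallel. */
        for (size_t i = 0; i < n; i++)
            c[i] = a[i] + b[i];

        /* A cross-iteration (flow) dependence: iteration i reads the value
           written by iteration i-1, so the iterations cannot safely be
           reordered or run in parallel without changing the result. */
        for (size_t i = 1; i < n; i++)
            a[i] = a[i-1] + b[i];
    }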
The Hardware Track studies the design space of the critical components of parallel hardware: processor, memory hierarchy and interconnect networks. We will find out that modern hardware design is governed by old ideas, which are merely adjusted or combined in different ways.
The Lab Track applies the theory learned in the other tracks. We will review the fundamental ideas that govern GPGPU design and its potential performance bottlenecks. We will quickly learn several parallel-programming models, and we will get our hands dirty by putting into practice the optimisations learned in the software track. We will use the in-house-developed Futhark language to write nested-parallel programs, to demonstrate flattening, and as a baseline. We will use OpenMP and CUDA to write "parallel-assembly" code for multi-core and GPGPU execution, respectively.
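To give a flavour of the lower-level "parallel-assembly" style, here is a minimal OpenMP sketch in C (a hypothetical sum-of-squares example, not taken from the course material):

    #include <omp.h>

    /* Hypothetical example: sums the squares of n floats. The pragma asks
       the OpenMP runtime to split the iterations across threads and to
       combine the per-thread partial sums (a parallel reduction). */
    float sum_of_squares(int n, const float *xs) {
        float acc = 0.0f;
        #pragma omp parallel for reduction(+:acc)
        for (int i = 0; i < n; i++)
            acc += xs[i] * xs[i];
        return acc;
    }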
Knowledge of
- The types and semantics of data-parallel operators.
- Analyses for identifying and optimising parallelism and locality of reference, e.g., flattening, dependence analysis.
- The main hardware-design techniques for supporting parallelism at processor, memory hierarchy and interconnect levels.
Skills in
- Implementing parallel programs in high-level (Futhark) and lower-level programming models (OpenMP, CUDA).
- Applying (by hand) the flattening transformation on specific instances of data-parallel programs.
- Applying (by hand) various "imperative" code transformations (such as loop interchange, loop distribution, block and register tiling) for optimising the degree of parallelism and locality of reference (loop interchange is sketched below).
- Testing, measuring the impact of applied optimisations and characterising the performance of parallel programs.
Competences in
- Reasoning about the work-depth asymptotic behaviour of specific instances of data-parallel programs.
- Reasoning based on dependence analysis about the (in)correctness of specific instances of loop parallelisation and related optimisations.
- Identifying an effective parallelisation solution for a given application.
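As a small illustration of one such transformation, the sketch below shows loop interchange in C (a hypothetical column-scaling example; the interchange is legal because the iterations are independent, and it improves spatial locality under C's row-major layout):

    /* Before: the inner loop strides through rows, so consecutive accesses
       to a are N elements apart (poor spatial locality in row-major C). */
    void scale_columns(int N, float a[N][N], const float *s) {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                a[i][j] *= s[j];
    }

    /* After loop interchange: the same elements are updated exactly once
       each, so the transformation is legal; the inner loop now walks
       contiguous memory. */
    void scale_columns_interchanged(int N, float a[N][N], const float *s) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] *= s[j];
    }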
The topics taught in the hardware track are selected from the book "Parallel Computer Organization and Design" by Michel Dubois, Murali Annavaram and Per Stenstrom, Cambridge University Press (latest edition).
Buying the hardware book is highly recommended.
Lecture notes covering the material of the software track will be provided on Absalon. Various other related material, such as scientific articles and tutorials (e.g., Futhark, CUDA), will be linked from the course pages.
Academic qualifications equivalent to a BSc degree are recommended.
If the time schedule of this course conflicts with your work schedule or with another course, we strongly recommend that you do NOT take this course.
Category (hours)
- Lectures: 28
- Preparation: 15
- Exercises: 67
- Laboratory: 28
- Project work: 67
- Exam: 1
- Total: 206
PhD students can register for this MSc course by following the same procedure as credit students.
- Credit
- 7,5 ECTS
- Type of assessment
- Continuous assessment. Four individual assignments (40%), group project (report) with individual presentation and a short oral examination (60%).
The oral examination is in continuation of the individual presentation and consists of questions related to the report and/or the course material (10 min presentation + up to 20 min oral examination).
No aids are allowed for the oral examination.
- Aid
- All aids allowed
- Marking scale
- 7-point grading scale
- Censorship form
- No external censorship
Several internal examiners
- Re-exam
Resubmission of the assignments (35%) and of the project extended with additional tasks (40%), and a 30-minute oral examination (25%) without preparation.
No aids are allowed for the oral examination.
Assignments/report that have already been passed will be taken into account.
Criteria for exam assessment
See Learning Outcome.
Course information
- Language
- English
- Course code
- NDAK14008U
- Credit
- 7,5 ECTS
- Level
- Full Degree Master
- Duration
- 1 block
- Placement
- Block 1
- Schedule
- A
- Course capacity
- No limit
- Course is also available as continuing and professional education
- Study board
- Study Board of Mathematics and Computer Science
Contracting department
- Department of Computer Science
Contracting faculty
- Faculty of Science
Course Coordinators
- Cosmin Eugen Oancea (cosmin.oancea@di.ku.dk)
Lecturers
- Cosmin E. Oancea
- Possibly also Troels Henriksen