Blumofe, Robert D.; Leiserson, Charles E.
Scheduling multithreaded computations by work stealing. (English)
J. ACM 46, No. 5, 720-748 (1999).
Summary: This paper studies the problem of efficiently scheduling fully strict (i.e., well-structured) multithreaded computations on parallel computers. A popular and practical method of scheduling this kind of dynamic MIMD-style computation is ‘work stealing’, in which processors needing work steal computational threads from other processors. In this paper, we give the first provably good work-stealing scheduler for multithreaded computations with dependencies. Specifically, our analysis shows that the expected time to execute a fully strict computation on $P$ processors using our work-stealing scheduler is $T_1/P+O(T_\infty)$, where $T_1$ is the minimum serial execution time of the multithreaded computation and $T_\infty$ is the minimum execution time with an infinite number of processors. Moreover, the space required by the execution is at most $S_1P$, where $S_1$ is the minimum serial space requirement. We also show that the expected total communication of the algorithm is at most $O(PT_\infty(1+n_d)S_{\text{max}})$, where $S_{\text{max}}$ is the size of the largest activation record of any thread and $n_d$ is the maximum number of times that any thread synchronizes with its parent. This communication bound justifies the folk wisdom that work-stealing schedulers are more communication efficient than their work-sharing counterparts. All three of these bounds are existentially optimal to within a constant factor.
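The core mechanism described above, each processor working off the bottom of its own deque while idle processors steal from the top of a randomly chosen victim's deque, can be illustrated with a minimal sketch. This is not the paper's scheduler (which operates on fully strict computation DAGs with activation frames); it is a simplified thread-based illustration in which each task may spawn child tasks, and all names (`run_work_stealing`, `make_task`) are hypothetical:

```python
import random
import threading
from collections import deque

def run_work_stealing(root_tasks, num_workers=4):
    """Run a tree of tasks using per-worker deques and random stealing.

    Each task is a callable returning (value, child_tasks). An owner pushes
    and pops at the bottom of its own deque; an idle worker steals a task
    from the top of a uniformly random victim's deque. This mirrors the
    randomized work-stealing discipline analyzed in the paper, but only as
    an illustrative sketch, not the analyzed algorithm itself.
    """
    deques = [deque() for _ in range(num_workers)]
    locks = [threading.Lock() for _ in range(num_workers)]
    results, results_lock = [], threading.Lock()
    pending, pending_lock = [len(root_tasks)], threading.Lock()

    for i, task in enumerate(root_tasks):
        deques[i % num_workers].append(task)

    def worker(idx):
        rng = random.Random(idx)
        while True:
            task = None
            with locks[idx]:
                if deques[idx]:
                    task = deques[idx].pop()          # own deque: pop bottom
            if task is None:
                with pending_lock:
                    if pending[0] == 0:
                        return                        # no work left anywhere
                victim = rng.randrange(num_workers)   # pick a random victim
                if victim != idx:
                    with locks[victim]:
                        if deques[victim]:
                            task = deques[victim].popleft()  # steal from top
                if task is None:
                    continue                          # retry: busy-wait steal loop
            value, children = task()
            with results_lock:
                results.append(value)
            with pending_lock:
                pending[0] += len(children) - 1       # this task done, children pending
            with locks[idx]:
                deques[idx].extend(children)          # spawned work stays local

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

def make_task(n):
    """Hypothetical example workload: a Fibonacci-shaped task tree."""
    def task():
        if n <= 1:
            return (n, [])                            # leaf contributes n
        return (0, [make_task(n - 1), make_task(n - 2)])
    return task

# Usage: the leaf values of the fib(10) task tree sum to fib(10) = 55.
total = sum(run_work_stealing([make_task(10)], num_workers=4))
```

Keeping spawned children on the spawning worker's own deque is what makes stealing rare when work is plentiful, which is the intuition behind the paper's $O(PT_\infty(1+n_d)S_{\text{max}})$ communication bound: steals (and hence communication) are charged against the critical path rather than against the total work.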