Task Scheduling for Parallel Systems
Tentative Chinese title: 平行系統的任務排程
Oliver Sinnen
- Publisher: Wiley
- Publication Date: 2007-05-04
- List Price: $2,998
- Sale Price: 5% off, $2,848
- Language: English
- Pages: 296
- Binding: Hardcover
- ISBN: 0471735760
- ISBN-13: 9780471735762
Related Categories:
Operating-system
Ships immediately (stock < 3)
Product Description
Description
A new model for task scheduling that dramatically improves the efficiency of parallel systems
Task scheduling for parallel systems can become a quagmire of heuristics, models, and methods that have been developed over the past decades. The author of this innovative text cuts through the confusion and complexity by presenting a consistent and comprehensive theoretical framework along with realistic parallel system models. These new models, based on an investigation of the concepts and principles underlying task scheduling, take into account heterogeneity, contention for communication resources, and the involvement of the processor in communications.
For readers who may be new to task scheduling, the first chapters are essential. They serve as an excellent introduction to programming parallel systems, and they place task scheduling within the context of the program parallelization process. The author then reviews the basics of graph theory, discussing the major graph models used to represent parallel programs. Next, the author introduces his task scheduling framework. He carefully explains the theoretical background of this framework and provides several examples to enable readers to fully understand how it greatly simplifies and, at the same time, enhances the ability to schedule.
The second half of the text examines both basic and advanced scheduling techniques, offering readers a thorough understanding of the principles underlying scheduling algorithms. The final two chapters address communication contention in scheduling and processor involvement in communications.
Each chapter features exercises that help readers put their new skills into practice. An extensive bibliography leads to additional information for further research. Finally, the use of figures and examples helps readers better visualize and understand complex concepts and processes.
Researchers and students in distributed and parallel computer systems will find that this text dramatically improves their ability to schedule tasks accurately and efficiently.
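To give a concrete flavor of the topic, below is a minimal, hypothetical sketch (not taken from the book) of list scheduling a small task graph (DAG) with communication costs on two identical processors, roughly in the spirit of the techniques the table of contents covers in Chapters 4 and 5. All task names, weights, and helper functions are illustrative assumptions.

```python
# Hypothetical illustration (not from the book): list scheduling a small
# task graph (DAG) with communication costs on two identical processors.
# Node weight = computation cost; edge weight = communication cost, paid
# only when the two endpoint tasks run on different processors.

# Task graph: node -> (compute cost, {successor: communication cost})
dag = {
    "A": (2, {"B": 1, "C": 1}),
    "B": (3, {"D": 1}),
    "C": (3, {"D": 1}),
    "D": (2, {}),
}

def bottom_level(node):
    """Longest path from `node` to a sink; a common static priority."""
    w, succs = dag[node]
    return w + max((c + bottom_level(s) for s, c in succs.items()), default=0)

def list_schedule(num_procs=2):
    # Build predecessor lists once.
    preds = {n: [] for n in dag}
    for n, (_, succs) in dag.items():
        for s, c in succs.items():
            preds[s].append((n, c))

    # Static priority order: descending bottom level (with positive node
    # weights this is also a valid topological order).
    order = sorted(dag, key=bottom_level, reverse=True)

    proc_ready = [0] * num_procs   # time at which each processor becomes free
    finish, place = {}, {}
    for n in order:
        best = None
        for p in range(num_procs):
            # Data-ready time: each predecessor's finish time plus the
            # communication cost if the edge crosses processors.
            drt = max((finish[q] + (0 if place[q] == p else c)
                       for q, c in preds[n]), default=0)
            start = max(drt, proc_ready[p])
            if best is None or start < best[0]:
                best = (start, p)
        start, p = best
        place[n] = p
        finish[n] = start + dag[n][0]
        proc_ready[p] = finish[n]
    return finish, place

if __name__ == "__main__":
    finish, place = list_schedule()
    print("finish times:", finish)          # makespan = max finish time
    print("processor assignment:", place)
```

Greedy start-time minimization with a bottom-level priority is only one of many heuristics; the book's later chapters refine this basic scheme with insertion, node duplication, heterogeneous processors, contention-aware network models, and processor involvement in communication.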
Table of Contents
Preface.
Acknowledgments.
1. Introduction.
1.1 Overview.
1.2 Organization.
2. Parallel Systems and Programming.
2.1 Parallel Architectures.
2.1.1 Flynn’s Taxonomy.
2.1.2 Memory Architectures.
2.1.3 Programming Paradigms and Models.
2.2 Communication Networks.
2.2.1 Static Networks.
2.2.2 Dynamic Networks.
2.3 Parallelization.
2.4 Subtask Decomposition.
2.4.1 Concurrency and Granularity.
2.4.2 Decomposition Techniques.
2.4.3 Computation Type and Program Formulation.
2.4.4 Parallelization Techniques.
2.4.5 Target Parallel System.
2.5 Dependence Analysis.
2.5.1 Data Dependence.
2.5.2 Data Dependence in Loops.
2.5.3 Control Dependence.
2.6 Concluding Remarks.
2.7 Exercises.
3. Graph Representations.
3.1 Basic Graph Concepts.
3.1.1 Computer Representation of Graphs.
3.1.2 Elementary Graph Algorithms.
3.2 Graph as a Program Model.
3.2.1 Computation and Communication Costs.
3.2.2 Comparison Criteria.
3.3 Dependence Graph (DG).
3.3.1 Iteration Dependence Graph.
3.3.2 Summary.
3.4 Flow Graph (FG).
3.4.1 Data-Driven Execution Model.
3.4.2 Summary.
3.5 Task Graph (DAG).
3.5.1 Graph Transformations and Conversions.
3.5.2 Motivations and Limitations.
3.5.3 Summary.
3.6 Concluding Remarks.
3.7 Exercises.
4. Task Scheduling.
4.1 Fundamentals.
4.2 With Communication Costs.
4.2.1 Schedule Example.
4.2.2 Scheduling Complexity.
4.3 Without Communication Costs.
4.3.1 Schedule Example.
4.3.2 Scheduling Complexity.
4.4 Task Graph Properties.
4.4.1 Critical Path.
4.4.2 Node Levels.
4.4.3 Granularity.
4.5 Concluding Remarks.
4.6 Exercises.
5. Fundamental Heuristics.
5.1 List Scheduling.
5.1.1 Start Time Minimization.
5.1.2 With Dynamic Priorities.
5.1.3 Node Priorities.
5.2 Scheduling with Given Processor Allocation.
5.2.1 Phase Two.
5.3 Clustering.
5.3.1 Clustering Algorithms.
5.3.2 Linear Clustering.
5.3.3 Single Edge Clustering.
5.3.4 List Scheduling as Clustering.
5.3.5 Other Algorithms.
5.4 From Clustering to Scheduling.
5.4.1 Assigning Clusters to Processors.
5.4.2 Scheduling on Processors.
5.5 Concluding Remarks.
5.6 Exercises.
6. Advanced Task Scheduling.
6.1 Insertion Technique.
6.1.1 List Scheduling with Node Insertion.
6.2 Node Duplication.
6.2.1 Node Duplication Heuristics.
6.3 Heterogeneous Processors.
6.3.1 Scheduling.
6.4 Complexity Results.
6.4.1 α|β|γ Classification.
6.4.2 Without Communication Costs.
6.4.3 With Communication Costs.
6.4.4 With Node Duplication.
6.4.5 Heterogeneous Processors.
6.5 Genetic Algorithms.
6.5.1 Basics.
6.5.2 Chromosomes.
6.5.3 Reproduction.
6.5.4 Selection, Complexity, and Flexibility.
6.6 Concluding Remarks.
6.7 Exercises.
7. Communication Contention in Scheduling.
7.1 Contention Awareness.
7.1.1 End-Point Contention.
7.1.2 Network Contention.
7.1.3 Integrating End-Point and Network Contention.
7.2 Network Model.
7.2.1 Topology Graph.
7.2.2 Routing.
7.2.3 Scheduling Network Model.
7.3 Edge Scheduling.
7.3.1 Scheduling Edge on Route.
7.3.2 The Edge Scheduling.
7.4 Contention Aware Scheduling.
7.4.1 Basics.
7.4.2 NP-Completeness.
7.5 Heuristics.
7.5.1 List Scheduling.
7.5.2 Priority Schemes—Task Graph Properties.
7.5.3 Clustering.
7.5.4 Experimental Results.
7.6 Concluding Remarks.
7.7 Exercises.
8. Processor Involvement in Communication.
8.1 Processor Involvement—Types and Characteristics.
8.1.1 Involvement Types.
8.1.2 Involvement Characteristics.
8.1.3 Relation to LogP and Its Variants.
8.2 Involvement Scheduling.
8.2.1 Scheduling Edges on the Processors.
8.2.2 Node and Edge Scheduling.
8.2.3 Task Graph.
8.2.4 NP-Completeness.
8.3 Algorithmic Approaches.
8.3.1 Direct Scheduling.
8.3.2 Scheduling with Given Processor Allocation.
8.4 Heuristics.
8.4.1 List Scheduling.
8.4.2 Two-Phase Heuristics.
8.4.3 Experimental Results.
8.5 Concluding Remarks.
8.6 Exercises.
Bibliography.
Author Index.
Subject Index.