Artificial Intelligence Hardware Design: Challenges and Solutions (Hardcover)
Liu, Albert Chun-Chen; Law, Oscar Ming Kin
Product Description
This book covers the design of application-specific circuits and systems for accelerating neural network processing. Chapter 1 introduces neural networks and discusses their development history. Chapter 2 reviews the Convolutional Neural Network (CNN) model and describes each layer's function with examples. Chapter 3 surveys parallel architectures such as the Intel CPU, Nvidia GPU, Google TPU, and Microsoft NPU. Chapter 4 introduces streaming graphs for massively parallel computation through the Blaize GSP and Graphcore IPU. Chapter 5 shows how to optimize convolution with the filter decomposition of UCLA's Deep Convolutional Neural Network (DCNN) accelerator and the Row Stationary dataflow of MIT's Eyeriss accelerator. Chapter 6 illustrates in-memory computation through Georgia Tech's Neurocube and Stanford's Tetris accelerator, both built on the Hybrid Memory Cube (HMC). Chapter 7 highlights near-memory architectures through the embedded eDRAM of the DaDianNao supercomputer from the Institute of Computing Technology (ICT), Chinese Academy of Sciences, among others. Chapter 8 describes how Stanford's Energy-Efficient Inference Engine, the Institute of Computing Technology (ICT), and others exploit network sparsity through network pruning. Chapter 9 introduces a 3D neural processing technique to support multilayer neural networks; it also presents a network bridge to overcome power and thermal challenges as well as the memory bottleneck.
About the Authors
Albert Liu, PhD, is Chief Executive Officer of Kneron. He is an Adjunct Associate Professor at National Tsing Hua University, National Chiao Tung University, and National Cheng Kung University. He has published over 15 IEEE papers and is an IEEE Senior Member.
Oscar Ming Kin Law, PhD, is a Senior Staff Member in Physical Design at Qualcomm Inc. He has over twenty years of experience in the semiconductor industry, working on CPUs, GPUs, FPGAs, and mobile designs.