fix source
SAKURA-CAT committed May 28, 2024
1 parent 1a7e68e commit c087518
Showing 2 changed files with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions source/第二章/2.4 AI硬件加速设备.md
@@ -27,7 +27,7 @@ TPU stands for Tensor Processing Unit. In 2006, Google…

The TPU's design architecture is shown in the figure below:

-<img src="./figures/TPU.jpg" />
+![](./figures/TPU.jpg)

Figure above: In-datacenter performance analysis of a tensor processing unit, figure 1

@@ -53,15 +53,15 @@ The MMU's systolic array contains 256 × 256 = 65,536 ALUs, which means that in each cycle the TPU…
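The systolic dataflow behind that 65,536-ALU array can be sketched with a toy cycle-level model. This is my own simplified illustration of a weight-stationary array (not Google's actual design): each cell latches one weight, activations march one cell to the right per cycle, partial sums march one cell down, and every cell does one multiply-accumulate per cycle.

```python
import numpy as np

def systolic_matvec(W, x):
    """Toy cycle-level model of a weight-stationary systolic array.

    Cell (r, c) holds W[r, c]. Activation x[r] enters row r from the left,
    skewed so that row r starts at cycle r; partial sums flow downward.
    Column c's finished sum exits the bottom row at cycle K - 1 + c,
    so the array computes y = x @ W.
    """
    K, N = W.shape
    a = np.zeros((K, N))  # activation latched in each cell
    p = np.zeros((K, N))  # partial sum latched in each cell
    y = np.zeros(N)
    for t in range(K + N - 1):
        new_a = np.zeros_like(a)
        new_p = np.zeros_like(p)
        for r in range(K):
            for c in range(N):
                # Skewed injection: x[r] enters row r at cycle r.
                if c == 0:
                    a_in = x[r] if t == r else 0.0
                else:
                    a_in = a[r, c - 1]
                p_in = p[r - 1, c] if r > 0 else 0.0
                new_a[r, c] = a_in
                new_p[r, c] = p_in + W[r, c] * a_in  # one MAC per cell per cycle
        # Harvest column sums as they fall out of the bottom row.
        if K - 1 <= t < K - 1 + N:
            y[t - (K - 1)] = new_p[K - 1, t - (K - 1)]
        a, p = new_a, new_p
    return y

W = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([5.0, 6.0])
print(systolic_matvec(W, x))  # [23. 34.] == x @ W
```

The key property this models is why systolic arrays are efficient: each operand is read from memory once and then reused as it flows through the grid, instead of being re-fetched for every multiply.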

The TPU runs at a clock frequency of 700 MHz, which means it can perform 65,536 × 700,000,000 ≈ 46 × 10^12 multiply-and-add operations per second, or 92 trillion (92 × 10^12) operations per second in the matrix unit, counting each multiply and each add separately.
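A quick arithmetic check of the figures above (the variable names are mine):

```python
# Back-of-envelope check of the quoted TPU throughput:
# 65,536 ALUs, each doing one multiply-accumulate per 700 MHz cycle.
alus = 256 * 256                      # 65,536 ALUs in the matrix unit
clock_hz = 700_000_000                # 700 MHz clock
macs_per_second = alus * clock_hz     # multiply-add pairs per second
ops_per_second = 2 * macs_per_second  # multiply and add counted separately

print(f"{macs_per_second:.3e}")  # 4.588e+13, i.e. ~46 x 10^12
print(f"{ops_per_second:.3e}")   # 9.175e+13, i.e. ~92 x 10^12
```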

-<img src="./figures/MMU.jpg" style="zoom:80%;" />
+![](./figures/MMU.jpg)

Figure above: In-datacenter performance analysis of a tensor processing unit, figure 4

#### Deterministic execution and large-scale on-chip memory

The figure shows a simplified floor plan of the TPU: yellow is the MMU compute unit, blue is data units such as the unified buffer and the accumulators, green is I/O, and red is the control logic.

-<img src="./figures/FloorPlan.jpg" style="zoom: 67%;" />
+![](./figures/FloorPlan.jpg)

Figure above: In-datacenter performance analysis of a tensor processing unit, figure 2

@@ -93,7 +93,7 @@ NPU stands for Neural-network Processing Unit; it…

How, then, does DianNao emulate neurons in its operation? We can look at its internal structure diagram:

-![](./figures/accelerator .jpg)
+![](./figures/accelerator.jpg)

Figure above: DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning, figure 11

@@ -119,7 +119,7 @@ NFU-3 is the nonlinear activation stage, which implements nonlinear functions by piecewise linear approximation…
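Piecewise linear approximation of an activation function can be sketched as follows. This is my own toy illustration of the general technique, not the DianNao implementation: per-segment slopes and intercepts are precomputed into a small table (in hardware, a lookup RAM), so evaluating the activation costs only one table lookup, one multiply, and one add. The segment count and input range here are arbitrary choices for illustration.

```python
import math

# 32 linear segments covering [-8, 8]; sigmoid is effectively saturated
# at 0 or 1 outside this range.
BREAKS = [-8.0 + 0.5 * i for i in range(33)]
SEGMENTS = []
for x0, x1 in zip(BREAKS[:-1], BREAKS[1:]):
    y0 = 1.0 / (1.0 + math.exp(-x0))
    y1 = 1.0 / (1.0 + math.exp(-x1))
    a = (y1 - y0) / (x1 - x0)        # slope of this segment
    b = y0 - a * x0                  # intercept of this segment
    SEGMENTS.append((a, b))

def sigmoid_pwl(x):
    """Sigmoid approximated by a table of (slope, intercept) pairs."""
    if x <= BREAKS[0]:
        return 0.0                   # saturated low
    if x >= BREAKS[-1]:
        return 1.0                   # saturated high
    i = int((x - BREAKS[0]) // 0.5)  # which segment x falls in
    a, b = SEGMENTS[i]
    return a * x + b                 # one multiply + one add

print(sigmoid_pwl(0.0))  # 0.5 (a breakpoint, so the value is exact)
```

With 0.5-wide segments the approximation error stays well below 1%, which is typically acceptable for inference while being far cheaper than evaluating `exp` in hardware.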

ShiDianNao is an accelerator dedicated to machine vision and integrates video-processing components. It is the only accelerator in the DianNao family that considers data reuse at the compute-unit level, and the only one that uses a two-dimensional compute array. The structure of its compute array is shown below:

-<img src="./figures/shidiannao.jpg" style="zoom:67%;" />
+![](./figures/shidiannao.jpg)

Figure above: ShiDianNao: Shifting vision processing closer to the sensor, figure 5

File renamed without changes
