In the paper, this is the content of Section 3.2.2.

3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to the specific GPUs that host its target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
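To make the node-limited routing more concrete, here is a minimal CUDA/C++ host-side sketch of choosing a token's experts under a node budget. The struct, the function names, and the greedy selection rule are my own illustration, not the paper's actual gating algorithm; the only point is that capping the number of distinct target nodes caps that token's IB traffic, and since NVLink is roughly 3.2 times faster than IB, each node reached over IB can fan out to about 3.2 experts over NVLink, which is where 4 nodes × 3.2 experts/node ≈ 13 comes from.

```cuda
// Minimal host-side sketch (CUDA/C++) of node-limited routing. This is a
// simplified greedy stand-in, NOT the paper's gating algorithm; it only
// illustrates that limiting the distinct target nodes per token limits the
// IB traffic that token generates.
#include <algorithm>
#include <cstdio>
#include <vector>

struct ExpertScore {
    int expert_id;   // global expert index
    int node_id;     // node hosting this expert
    float score;     // gating affinity for the current token
};

// Pick top_k experts for one token, drawn from at most max_nodes distinct nodes.
std::vector<int> route_token(std::vector<ExpertScore> scores, int top_k, int max_nodes) {
    std::sort(scores.begin(), scores.end(),
              [](const ExpertScore& a, const ExpertScore& b) { return a.score > b.score; });

    std::vector<int> chosen_experts;
    std::vector<int> chosen_nodes;
    for (const auto& s : scores) {
        bool node_seen = std::find(chosen_nodes.begin(), chosen_nodes.end(), s.node_id)
                         != chosen_nodes.end();
        if (!node_seen && (int)chosen_nodes.size() >= max_nodes)
            continue;                                  // would exceed the node budget
        if (!node_seen)
            chosen_nodes.push_back(s.node_id);
        chosen_experts.push_back(s.expert_id);
        if ((int)chosen_experts.size() == top_k)
            break;
    }
    return chosen_experts;
}

int main() {
    // Toy example: 6 candidate experts spread over 3 nodes, top_k = 3, at most 2 nodes.
    std::vector<ExpertScore> scores = {
        {0, 0, 0.9f}, {1, 1, 0.8f}, {2, 2, 0.7f},
        {3, 0, 0.6f}, {4, 1, 0.5f}, {5, 2, 0.4f},
    };
    for (int e : route_token(scores, /*top_k=*/3, /*max_nodes=*/2))
        printf("expert %d\n", e);   // prints 0, 1, 3 -- node 2 is skipped
    return 0;
}
```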
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
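For a flavor of what warp specialization looks like in CUDA, here is a rough sketch of how a dispatch kernel could pin different warps in each channel's thread block to the three roles above. None of this is DeepSeek's actual kernel: the helper functions are empty stand-ins for the real IB/NVLink primitives, and the per-role warp counts are plain arguments rather than being dynamically adjusted to the measured workload.

```cuda
// Rough warp-specialization sketch of a dispatch kernel (illustrative only).
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder roles: a real kernel would issue RDMA writes / NVLink copies here.
__device__ void ib_send_chunk(int channel, int lane)        { /* (1) IB sending */ }
__device__ void ib_to_nvlink_forward(int channel, int lane) { /* (2) IB-to-NVLink forwarding */ }
__device__ void nvlink_receive(int channel, int lane)       { /* (3) NVLink receiving */ }

// One block per communication channel; how 20 SMs map onto 10 channels
// (e.g. 2 SMs per channel) is not specified in the text and is assumed here.
__global__ void dispatch_kernel(int warps_ib_send, int warps_forward) {
    const int warp_id = threadIdx.x / 32;  // warp index within the block
    const int lane_id = threadIdx.x % 32;  // lane index within the warp
    const int channel = blockIdx.x;        // communication channel index

    // Warp specialization: each warp is pinned to exactly one communication
    // role, so the roles run concurrently instead of serializing inside a warp.
    if (warp_id < warps_ib_send) {
        ib_send_chunk(channel, lane_id);
    } else if (warp_id < warps_ib_send + warps_forward) {
        ib_to_nvlink_forward(channel, lane_id);
    } else {
        nvlink_receive(channel, lane_id);
    }
}

int main() {
    // 10 channels, 8 warps (256 threads) each; 3/3/2 split across the three roles.
    dispatch_kernel<<<10, 256>>>(3, 3);
    cudaDeviceSynchronize();
    printf("dispatch sketch finished\n");
    return 0;
}
```

The combining kernel would mirror this structure, with NVLink sending, NVLink-to-IB forwarding plus accumulation, and IB receiving plus accumulation as its three roles.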
Put plainly, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine memory-copy scenario described in 唐家山's log post. In general, GPUs within a node talk over NVLink, while GPUs across nodes rely on the IB network, but NVLink's bandwidth is about 3.2 times that of IB, so some optimization is needed to get a better transfer strategy. This is a complete, end-to-end scheme.

My understanding is that PTX is used here to tailor thread execution more precisely and to reduce the interference between the allocation and transfer of communication chunks and the other work running on the GPU.

The purpose is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.

As an analogy: it is like discovering that the NIC driver, while copying particular memory blocks, ends up serializing with the application's threads and dragging down efficiency, so instead of going through the OS-defined interface to the NIC driver, you optimize by directly using the instruction set the NIC itself supports.
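For a flavor of what "customized PTX instructions" might look like, here is a hedged sketch of an inline-PTX load that carries a cache-control qualifier. The quoted section does not say which instructions are actually used; the L1::no_allocate eviction hint below is just one example of the cache control PTX exposes (it assumes an sm_80+ GPU and a recent PTX ISA), and the function names are mine.

```cuda
// Sketch of an inline-PTX load with a cache-control qualifier (illustrative;
// not the instructions DeepSeek actually uses). Requires sm_80+ (assumption).
#include <cuda_runtime.h>

__device__ __forceinline__ int load_streaming(const int* ptr) {
    int value;
    // Ask the hardware not to allocate this line in L1, so streamed
    // communication data does not evict the working set of compute kernels
    // running on neighboring SMs.
    asm volatile("ld.global.L1::no_allocate.b32 %0, [%1];"
                 : "=r"(value)
                 : "l"(ptr));
    return value;
}

// A normal load of the same data, for comparison; the compiler is free to
// cache it in both L1 and L2.
__device__ __forceinline__ int load_cached(const int* ptr) {
    return *ptr;
}

__global__ void demo(const int* in, int* out) {
    out[threadIdx.x] = load_streaming(in + threadIdx.x);
}

int main() {
    int *in, *out;
    cudaMalloc((void**)&in, 32 * sizeof(int));
    cudaMalloc((void**)&out, 32 * sizeof(int));
    cudaMemset(in, 0, 32 * sizeof(int));
    demo<<<1, 32>>>(in, out);
    cudaDeviceSynchronize();
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```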