In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication

In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
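To make the two-hop routing above concrete, here is a minimal host-side sketch of the dispatch plan (not from the paper; the node size, the expert-to-GPU placement, and the names expert_owner / plan_dispatch are my own illustrative assumptions):

```cpp
// Sketch of the dispatch plan: group a token's routed experts by node,
// send once per node over IB to the GPU with the same in-node index,
// then fan out inside the node over NVLink. Sizes and placement are assumed.
#include <cassert>
#include <cstdio>
#include <map>
#include <vector>

constexpr int kGpusPerNode   = 8;  // assumed node size
constexpr int kExpertsPerGpu = 8;  // assumed placement: contiguous expert blocks
constexpr int kMaxNodes      = 4;  // paper: each token is dispatched to at most 4 nodes

struct GpuId { int node; int local; };     // (node index, in-node index)

GpuId expert_owner(int e) {                // which GPU hosts expert e (assumed layout)
    int g = e / kExpertsPerGpu;
    return { g / kGpusPerNode, g % kGpusPerNode };
}

// Plan the transfers for one token currently held by GPU `src`.
void plan_dispatch(GpuId src, const std::vector<int>& routed_experts) {
    std::map<int, std::vector<int>> by_node;
    for (int e : routed_experts) by_node[expert_owner(e).node].push_back(e);
    assert((int)by_node.size() <= kMaxNodes);   // enforced by the gating algorithm

    for (auto& [node, experts] : by_node) {
        // Hop 1: a single IB transfer per target node, landing on the GPU
        // with the SAME in-node index as the sender.
        if (node != src.node)
            printf("IB    : (n%d,g%d) -> (n%d,g%d)\n", src.node, src.local, node, src.local);
        // Hop 2: NVLink forwarding inside the target node to each expert's GPU.
        for (int e : experts) {
            GpuId dst = expert_owner(e);
            printf("NVLink: (n%d,g%d) -> (n%d,g%d)  expert %d\n",
                   node, src.local, node, dst.local, e);
        }
    }
}

int main() {
    // A token on node 0 / GPU 3, routed to 8 experts that span 4 nodes.
    plan_dispatch({0, 3}, {3, 21, 70, 100, 135, 150, 160, 200});
    return 0;
}
```

In the real system the two hops are RDMA writes over IB and NVLink peer-to-peer copies issued by the GPU kernels themselves; the point of the sketch is only the grouping by node and the "same in-node index" staging that caps IB traffic at 4 sends per token.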
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
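A rough sketch of the warp-specialization split in the dispatching kernel is below. It only shows how warps within one communication channel are assigned roles and how the per-role warp count acts as a tuning knob; the role names, buffer names, the 8-warps-per-channel layout, and the plain memory copies standing in for IB/NVLink transfers are all my own assumptions, and the real kernels chain the three stages through producer/consumer queues rather than draining independent buffers.

```cpp
// Minimal CUDA sketch of warp specialization for the dispatching kernel.
// One thread block = one communication channel; warps_per_role[] is the
// knob that the real kernels adjust dynamically from the observed workload.
// Plain copies stand in for the actual IB / NVLink transfers.
#include <cstdio>
#include <cuda_runtime.h>

enum Role { IB_SEND = 0, IB_TO_NVL_FWD = 1, NVL_RECV = 2 };

// One warp moves one chunk; a stand-in for issuing an RDMA write / NVLink copy.
__device__ void copy_chunk(float* dst, const float* src, int n, int lane) {
    for (int i = lane; i < n; i += 32) dst[i] = src[i];
}

__global__ void dispatch_channel(const int* warps_per_role,
                                 const float* tokens,  float* ib_out,
                                 const float* ib_in,   float* nvl_out,
                                 const float* nvl_in,  float* expert_in,
                                 int chunk_elems, int num_chunks) {
    int warp = threadIdx.x / 32, lane = threadIdx.x % 32;

    // Assign this warp to a role according to the per-role warp budget.
    int s = warps_per_role[IB_SEND], f = warps_per_role[IB_TO_NVL_FWD];
    Role role  = (warp < s) ? IB_SEND : (warp < s + f) ? IB_TO_NVL_FWD : NVL_RECV;
    int first  = (role == IB_SEND) ? 0 : (role == IB_TO_NVL_FWD) ? s : s + f;
    int stride = warps_per_role[role];

    // Each role drains its own pre-staged buffer so the sketch stays simple;
    // the real kernels chain the stages through queues and handshake flags.
    const float* src = (role == IB_SEND) ? tokens : (role == IB_TO_NVL_FWD) ? ib_in : nvl_in;
    float*       dst = (role == IB_SEND) ? ib_out : (role == IB_TO_NVL_FWD) ? nvl_out : expert_in;
    for (int c = blockIdx.x * stride + (warp - first); c < num_chunks; c += stride * gridDim.x)
        copy_chunk(dst + c * chunk_elems, src + c * chunk_elems, chunk_elems, lane);
}

int main() {
    const int chunk = 1024, chunks = 640, n = chunk * chunks;
    float* buf;                                   // one slab sliced into six staging buffers
    cudaMalloc(&buf, 6ull * n * sizeof(float));
    int h_wpr[3] = {2, 3, 3};                     // 8 warps: 2 send, 3 forward, 3 receive
    int* wpr; cudaMalloc(&wpr, sizeof(h_wpr));
    cudaMemcpy(wpr, h_wpr, sizeof(h_wpr), cudaMemcpyHostToDevice);
    // 10 blocks of 8 warps each, loosely echoing the "10 communication channels".
    dispatch_channel<<<10, 8 * 32>>>(wpr, buf, buf + n, buf + 2 * n, buf + 3 * n,
                                     buf + 4 * n, buf + 5 * n, chunk, chunks);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    return 0;
}
```

Adjusting warps_per_role is the analogue of the dynamic warp reallocation the paper describes: if IB sending falls behind, it gets more warps at the expense of the other two stages.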
Put simply, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine copy scenario described in 唐家山's journal post. Generally, GPUs within a single machine talk over NVLink, while GPUs across machines rely on the IB network, and since NVLink is roughly 3.2 times faster than IB, some optimization is needed to get a better transfer strategy. This is a complete, end-to-end scheme.
My understanding is that PTX is used here to customize thread execution more precisely, reducing the interference between the allocation and transfer of communication chunks and the rest of the computation.
The goal is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
As an analogy: it is as if you discovered that the NIC driver, when copying certain memory blocks between machines, ends up serialized with the application's threads and loses efficiency, so you bypass the OS-defined interface to the NIC driver and optimize directly against the instruction set the NIC itself supports.
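On the PTX point: I am not claiming these are the instructions DeepSeek actually uses (the paper does not list them), but as an illustration of the mechanism, PTX exposes cache operators such as .cs ("cache streaming") that CUDA C++ does not surface directly, and routing communication copies through them keeps that traffic from occupying L2 that the overlapped compute kernels need. A minimal sketch, assuming the chunk copy is done by a plain grid-stride kernel:

```cpp
// Illustrative use of PTX cache operators from inline asm: mark communication
// loads/stores as "streaming" (.cs, evict-first) so a chunk copy passing
// through L2 leaves more of the cache to concurrently running compute kernels.
// This is a generic example of the mechanism, not DeepSeek-V3's actual code.
#include <cuda_runtime.h>

__device__ __forceinline__ float ld_streaming(const float* p) {
    float v;
    asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(p));
    return v;
}

__device__ __forceinline__ void st_streaming(float* p, float v) {
    asm volatile("st.global.cs.f32 [%0], %1;" :: "l"(p), "f"(v) : "memory");
}

// Copy one communication chunk with minimal L2 pollution. chunk_elems plays
// the role of the auto-tuned chunk size mentioned in the paper.
__global__ void copy_chunk_streaming(float* __restrict__ dst,
                                     const float* __restrict__ src,
                                     int chunk_elems) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < chunk_elems; i += gridDim.x * blockDim.x)
        st_streaming(dst + i, ld_streaming(src + i));
}
```

This matches the spirit of the analogy above: you stay inside the CUDA toolchain (inline PTX is an officially supported part of it), you just reach below the level the C++ API exposes in order to control cache behavior more precisely.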