In the paper, this is the content of Section 3.2.2:
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
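To make the two-hop routing above concrete, here is a rough sketch of the index arithmetic as I understand it: the IB hop targets the GPU with the sender's in-node index on the expert's node, and the NVLink hop then reaches the GPU that actually hosts the expert. Everything here (GPUS_PER_NODE, Route, plan_dispatch_route, the 8-GPU-per-node assumption) is my own illustrative naming, not code from the paper.

```cuda
// Hypothetical sketch of the two-hop dispatch route for one token (not the paper's code).
// Hop 1 (IB):     source GPU -> GPU with the SAME in-node index on the target node.
// Hop 2 (NVLink): that GPU   -> GPU on the same node that hosts the target expert.
#include <cstdio>

constexpr int GPUS_PER_NODE = 8;   // assumed 8 GPUs per node

struct Route {
    int ib_dst_rank;     // global rank reached over IB (same local index, target node)
    int nvlink_dst_rank; // global rank reached over NVLink (hosts the target expert)
};

Route plan_dispatch_route(int src_rank, int expert_rank) {
    int src_local = src_rank % GPUS_PER_NODE;     // in-node index of the sender
    int dst_node  = expert_rank / GPUS_PER_NODE;  // node hosting the target expert
    Route r;
    r.ib_dst_rank     = dst_node * GPUS_PER_NODE + src_local; // cross-node hop via IB
    r.nvlink_dst_rank = expert_rank;                          // intra-node hop via NVLink
    return r;
}

int main() {
    // Sender is global rank 3 (node 0, local index 3); the target expert sits on
    // global rank 21 (node 2, local index 5).
    Route r = plan_dispatch_route(3, 21);
    printf("IB hop -> rank %d, NVLink hop -> rank %d\n", r.ib_dst_rank, r.nvlink_dst_rank);
    // IB hop -> rank 19, NVLink hop -> rank 21
    return 0;
}
```

With the 4-node cap, hop 1 crosses IB at most four times per token, while the fan-out to up to 3.2 experts per node rides on the faster NVLink, which is why the two links can be kept fully overlapped.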
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
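The warp-specialization part can be pictured roughly as follows: within one communication kernel, different warps of the same block take on different roles. The sketch below uses a static split of warps into the three dispatch roles purely for illustration; the paper's kernels rebalance the warp counts dynamically with the workload, and all type and function names here are hypothetical.

```cuda
// Hypothetical sketch of warp specialization in a dispatch kernel (not the paper's code).
// One thread block = one communication channel; warps are split into three fixed roles
// here, whereas the real kernels adjust the warp count per role with the workload.

struct CommTask {                 // hypothetical per-channel work descriptor
    const void* send_buf;         // tokens to push over IB
    void*       fwd_buf;          // staging buffer for IB -> NVLink forwarding
    void*       recv_buf;         // per-expert receive buffer reached via NVLink
    int         num_chunks;       // number of auto-tuned communication chunks
};

// Role bodies left empty: placeholders for RDMA writes, peer-to-peer copies, etc.
__device__ void ib_send(const CommTask&, int /*lane*/)           {}
__device__ void forward_to_nvlink(const CommTask&, int /*lane*/) {}
__device__ void nvlink_recv(const CommTask&, int /*lane*/)       {}

__global__ void dispatch_channel_kernel(CommTask task) {
    const int warp_id = threadIdx.x / 32;
    const int lane_id = threadIdx.x % 32;

    // Static role split for illustration: warps 0-3 send over IB, warps 4-7 forward
    // IB traffic onto NVLink, warps 8-11 receive from NVLink into expert buffers.
    if (warp_id < 4) {
        ib_send(task, lane_id);
    } else if (warp_id < 8) {
        forward_to_nvlink(task, lane_id);
    } else {
        nvlink_recv(task, lane_id);
    }
}
```

A launch along the lines of dispatch_channel_kernel<<<10, 12 * 32, 0, comm_stream>>>(task) would roughly correspond to the 10 communication channels, each occupying its own SMs so the rest stay free for compute; the combining kernel would mirror this with NVLink-sending, NVLink-to-IB-forwarding-and-accumulation, and IB-receiving-and-accumulation warps.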
Put more plainly, the goal is efficient cross-node all-to-all communication. The problem it solves is essentially the same as the machine-to-machine copy scenario that 唐家山 described in his log. In general, GPUs within a single machine talk over NVLink, while multi-node, multi-GPU traffic relies on the IB network; but NVLink is about 3.2 times faster than IB, so some optimization is needed to get a better transfer strategy. What the paper describes is a complete scheme for doing that.
My understanding is that PTX is used here for finer-grained control over thread execution, to reduce the interference that allocating and transferring communication chunks causes to other work. The goal is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
As an analogy: it is as if you found that the NIC driver, while copying certain memory blocks, ends up serializing with the application's threads and dragging efficiency down, so instead of going through the OS-defined interface to the NIC driver, you optimize by directly using the instruction set the NIC itself supports.
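To make the PTX point concrete: the paper does not say which instructions are customized, but one plausible mechanism (my guess, not a quote from the paper) is to tag the communication kernels' loads and stores with cache-streaming qualifiers so that chunks pass through L2 with an evict-first policy instead of crowding out the compute kernels' working set. ld.global.cs and st.global.cs below are standard PTX cache operators; the wrapper functions and the copy loop are just my illustration.

```cuda
// Hypothetical use of PTX cache-streaming hints in a communication copy loop
// (my illustration; the paper only says "customized PTX instructions").
__device__ __forceinline__ float load_streaming(const float* ptr) {
    float v;
    // ".cs" (cache streaming): data expected to be touched once, fetched with an
    // evict-first policy, so it leaves a smaller footprint in L2.
    asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(ptr));
    return v;
}

__device__ __forceinline__ void store_streaming(float* ptr, float v) {
    asm volatile("st.global.cs.f32 [%0], %1;" :: "l"(ptr), "f"(v) : "memory");
}

// Grid-stride copy of one auto-tuned communication chunk: elements stream through
// registers with minimal L2 pollution, so concurrent compute kernels keep their cache.
__global__ void copy_chunk_streaming(const float* __restrict__ src,
                                     float* __restrict__ dst, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x) {
        store_streaming(dst + i, load_streaming(src + i));
    }
}
```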