In the paper, this is the content of Section 3.2.2.
3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
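
(A quick reading of those numbers, my own arithmetic rather than the paper's: an incoming IB stream delivers a token into a node at no more than 50 GB/s, while NVLink can fan it out inside the node at 160 GB/s, so up to 160 / 50 = 3.2 expert destinations per node can be fed before NVLink becomes the bottleneck; with the 4-node cap, that is roughly 4 × 3.2 = 12.8 ≈ 13 experts at the same IB cost.)
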
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
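
The warp-specialization structure can be pictured with a toy kernel. The sketch below is my own illustration, not DeepSeek's kernel: the three "stages" are just buffer-to-buffer copies standing in for IB sending, IB-to-NVLink forwarding, and NVLink receiving (which in reality are remote operations needing something like NVSHMEM or a host proxy), and the per-stage warp counts are fixed instead of dynamically tuned.

```cuda
// Toy warp-specialization sketch (my illustration, NOT the DeepSeek-V3 kernel).
// One block = one "communication channel". Warps 0, 1 and 2 each own one stage of
// a chunked software pipeline; the local copies stand in for IB send, IB->NVLink
// forward and NVLink receive.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void dispatch_channel(const float* __restrict__ src,
                                 float* __restrict__ ib_stage,   // stand-in for the IB send buffer
                                 float* __restrict__ nvl_stage,  // stand-in for the NVLink buffer
                                 float* __restrict__ dst,
                                 int n_chunks, int chunk_elems) {
    const int warp_id = threadIdx.x / 32;   // selects this warp's pipeline stage
    const int lane    = threadIdx.x % 32;

    // Software pipeline: in iteration t, warp 0 works on chunk t, warp 1 on chunk t-1,
    // warp 2 on chunk t-2. __syncthreads() at the end of each iteration hands a finished
    // chunk to the next stage (a real kernel would use finer-grained per-chunk flags).
    for (int t = 0; t < n_chunks + 2; ++t) {
        const int c = t - warp_id;
        if (c >= 0 && c < n_chunks) {
            for (int i = lane; i < chunk_elems; i += 32) {
                const int idx = c * chunk_elems + i;
                if      (warp_id == 0) ib_stage[idx]  = src[idx];        // "IB sending"
                else if (warp_id == 1) nvl_stage[idx] = ib_stage[idx];   // "IB-to-NVLink forwarding"
                else if (warp_id == 2) dst[idx]       = nvl_stage[idx];  // "NVLink receiving"
            }
        }
        __syncthreads();
    }
}

int main() {
    const int n_chunks = 16, chunk_elems = 256, n = n_chunks * chunk_elems;
    float *src, *ib_stage, *nvl_stage, *dst;
    cudaMallocManaged(&src, n * sizeof(float));
    cudaMallocManaged(&ib_stage, n * sizeof(float));
    cudaMallocManaged(&nvl_stage, n * sizeof(float));
    cudaMallocManaged(&dst, n * sizeof(float));
    for (int i = 0; i < n; ++i) src[i] = float(i);

    dispatch_channel<<<1, 3 * 32>>>(src, ib_stage, nvl_stage, dst, n_chunks, chunk_elems);
    cudaDeviceSynchronize();
    printf("dst[1000] = %.1f (expect 1000.0)\n", dst[1000]);
    return 0;
}
```

In the real kernels the number of warps per stage is adjusted to the observed workload and 10 such channels share 20 SMs; one block with three fixed warps is enough here to show the structure.
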
To put it in plainer terms, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the machine-to-machine copy scenario described in 唐家山's post. Normally, GPUs within a single machine communicate over NVLink, while GPUs across machines rely on the IB network, but NVLink is about 3.2 times faster than IB, so some optimization is needed to arrive at a better transfer strategy. This is a complete, end-to-end scheme.

My understanding is that PTX is used here for more precise control over thread execution, to reduce the interference between the allocation and transfer of communication chunks and the other work running on the SMs.

The purpose is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.

To draw an analogy: it is as if you discovered that, when copying certain memory blocks between machines, the NIC driver ends up serializing with the application's threads and efficiency drops, so instead of going through the OS-defined interface to the NIC driver, you optimize by programming directly against the instruction set the NIC itself supports.
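
As a concrete (hypothetical) illustration of this kind of PTX-level control: the paper does not disclose which instructions DeepSeek actually uses, but PTX exposes cache operators such as .cs ("cache streaming", evict-first) that let a kernel mark its traffic as pass-through data, so the communication kernels' chunks displace less of the L2 working set that the overlapped compute kernels depend on. A minimal sketch with inline PTX, assuming plain global-memory buffers:

```cuda
// Minimal sketch of cache-operator hints via inline PTX (my illustration; the exact
// instructions used by DeepSeek-V3 are not given in the paper). The .cs operator marks
// data as likely-accessed-once, so cache lines are allocated with an evict-first policy
// and the copy pollutes L2 less than default loads/stores would.
#include <cuda_runtime.h>
#include <cstdio>

__device__ __forceinline__ float load_streaming(const float* p) {
    float v;
    asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(p));
    return v;
}

__device__ __forceinline__ void store_streaming(float* p, float v) {
    asm volatile("st.global.cs.f32 [%0], %1;" :: "l"(p), "f"(v) : "memory");
}

// Relay one communication chunk without letting it camp in the L2 cache.
__global__ void relay_chunk(const float* __restrict__ in, float* __restrict__ out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) store_streaming(out + i, load_streaming(in + i));
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    relay_chunk<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("out[123] = %.1f\n", out[123]);
    return 0;
}
```

The chunk-size auto-tuning mentioned in the paper is complementary: chunks small enough to keep the cache footprint bounded, yet large enough to keep the links saturated. Newer PTX ISAs also expose finer-grained L2 eviction-priority controls that serve the same goal.
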