In the paper, this is the content of Section 3.2.2.

3.2.2. Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication. The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster. To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink. NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s). To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic. For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes. Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to the specific GPUs that host its target experts, without being blocked by subsequently arriving tokens. In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink. This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost. Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
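To make the two-hop path concrete, here is a minimal host-side sketch of how a dispatch plan for one token could be derived under this strategy. The names (plan_token_dispatch, GPUS_PER_NODE, MAX_TARGET_NODES) and the host-side framing are my own assumptions for illustration; the actual kernels are device-side and co-designed with the gating algorithm, so this is not the paper's implementation.

```cuda
// Hypothetical sketch of the two-hop dispatch plan: one IB hop per target node
// (to the GPU with the same in-node index), then NVLink hops within that node.
#include <cassert>
#include <map>
#include <vector>

constexpr int GPUS_PER_NODE    = 8;  // assumed node size
constexpr int MAX_TARGET_NODES = 4;  // paper: each token reaches at most 4 nodes

struct Hop {
    enum Kind { IB, NVLINK } kind;
    int src_gpu;   // global GPU index
    int dst_gpu;   // global GPU index
};

// expert_gpus: global index of the GPU hosting each of the token's routed experts.
std::vector<Hop> plan_token_dispatch(int src_gpu, const std::vector<int>& expert_gpus) {
    std::vector<Hop> hops;
    const int src_node = src_gpu / GPUS_PER_NODE;
    const int src_rank = src_gpu % GPUS_PER_NODE;     // in-node index of the sender

    // Group target experts by node so each target node is crossed over IB only once.
    std::map<int, std::vector<int>> by_node;
    for (int g : expert_gpus) by_node[g / GPUS_PER_NODE].push_back(g);
    assert((int)by_node.size() <= MAX_TARGET_NODES);  // gating keeps IB traffic bounded

    for (const auto& [node, gpus] : by_node) {
        // Hop 1 (IB): send once to the GPU with the same in-node index on the target node.
        const int relay_gpu = node * GPUS_PER_NODE + src_rank;
        if (node != src_node)
            hops.push_back({Hop::IB, src_gpu, relay_gpu});
        // Hop 2 (NVLink): the relay forwards the token to every expert-hosting GPU on that node.
        const int local_src = (node == src_node) ? src_gpu : relay_gpu;
        for (int g : gpus)
            if (g != local_src)
                hops.push_back({Hop::NVLINK, local_src, g});
    }
    return hops;
}
```

Because each target node costs exactly one IB transfer no matter how many experts it hosts, routing 8 experts over at most 4 nodes has the same IB cost as the 13-expert (4 × 3.2) ceiling mentioned above.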
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels. During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps. The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs. Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps. In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels. Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
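A minimal sketch of the warp-specialization pattern this paragraph describes, under my own assumptions: the per-channel CommWork state, the roles table, and the empty task bodies are placeholders, and only the structure (one block per channel, one task per warp, warp roles retuned from the measured workload) follows the text above.

```cuda
#include <cuda_runtime.h>

enum Role { IB_SEND = 0, IB_TO_NVL_FORWARD = 1, NVL_RECV = 2 };

// Hypothetical per-channel state (staging buffers, queue heads, progress counters).
struct CommWork { int token_head; int token_tail; };

__device__ void ib_send(CommWork* w, int lane)              { /* post RDMA writes over IB */ }
__device__ void ib_to_nvlink_forward(CommWork* w, int lane) { /* push arrived tokens to peer GPUs via NVLink */ }
__device__ void nvlink_recv(CommWork* w, int lane)          { /* land forwarded tokens in expert buffers */ }

// One thread block per communication channel (the paper partitions 20 SMs into
// 10 channels). roles[] maps each warp of the block to one task and would be
// retuned on the host from the observed workload ("dynamically adjusted" warps).
__global__ void dispatch_kernel(CommWork* work, const int* __restrict__ roles) {
    const int warp_id = threadIdx.x / warpSize;
    const int lane    = threadIdx.x % warpSize;
    CommWork* channel = &work[blockIdx.x];

    switch (roles[warp_id]) {   // each warp runs exactly one specialized task loop
        case IB_SEND:           ib_send(channel, lane);              break;
        case IB_TO_NVL_FORWARD: ib_to_nvlink_forward(channel, lane); break;
        case NVL_RECV:          nvlink_recv(channel, lane);          break;
    }
}
```

The combining direction would mirror this, with NVLink sending, NVLink-to-IB forwarding plus accumulation, and IB receiving plus accumulation as the three roles.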
To put it plainly, the goal is efficient cross-node all-to-all communication. The problem being solved is essentially the same as the two-machine copy scenario described in 唐家山's blog post. In general, the GPUs inside a single machine talk to each other over NVLink, while multi-machine, multi-GPU setups rely on the IB network; since NVLink is about 3.2 times faster than IB, some optimization is needed to get a better transfer strategy out of the two. What the paper describes is a complete scheme for doing exactly that.
My understanding is that PTX is used here to tailor thread execution more precisely and to cut down the interference that allocating and transferring communication chunks would otherwise cause.
The purpose is not to bypass CUDA; on the contrary, it is to make CUDA run more efficiently.
By analogy, it is as if you discovered that the NIC driver, when copying certain memory blocks between machines, ends up serialized with the application's threads and drags efficiency down, so instead of going through the OS-defined interface to the NIC driver, you optimize by directly using the instruction set the NIC itself supports.
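To illustrate what "using PTX to make CUDA more efficient" can look like in practice: PTX exposes per-instruction cache hints that plain CUDA C++ does not, e.g. the streaming (.cs) qualifier that marks data as evict-first so a copy leaves the smallest possible L2 footprint. The paper does not say which instructions it customizes, so the snippet below is only my assumption of the general idea, not DeepSeek's actual code.

```cuda
#include <cuda_runtime.h>

__device__ __forceinline__ float load_streaming(const float* p) {
    float v;
    // ld.global.cs: "cache streaming" load, data expected to be touched only once.
    asm volatile("ld.global.cs.f32 %0, [%1];" : "=f"(v) : "l"(p));
    return v;
}

__device__ __forceinline__ void store_streaming(float* p, float v) {
    // st.global.cs: streaming store with an evict-first policy, minimal L2 footprint.
    asm volatile("st.global.cs.f32 [%0], %1;" :: "l"(p), "f"(v) : "memory");
}

// Copy one communication chunk without crowding out the L2 working set of the
// compute kernels running on the other SMs (the chunk size would be auto-tuned).
__global__ void copy_chunk_streaming(const float* __restrict__ src,
                                     float* __restrict__ dst, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        store_streaming(dst + i, load_streaming(src + i));
}
```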