
What is: Chimera?

Source: Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines
Year: 2021
Data Source: CC BY-SA - https://paperswithcode.com

Chimera is a pipeline model parallelism scheme that combines bidirectional pipelines for efficiently training large-scale models. The key idea is to combine two pipelines running in opposite directions (a down pipeline and an up pipeline).

Denote N as the number of micro-batches executed by each worker within a training iteration, D as the number of pipeline stages (depth), and P as the number of workers.

The figure shows an example with four pipeline stages (i.e., D = 4). Here we assume there are D micro-batches executed by each worker within a training iteration, namely N = D, which is the minimum to keep all the stages active.

In the down pipeline, stage_0 ∼ stage_3 are mapped linearly to workers P_0 ∼ P_3, while in the up pipeline the stages are mapped in the completely opposite order. The N micro-batches (assuming N is even) are partitioned equally between the two pipelines. Each pipeline schedules its N/2 micro-batches using the 1F1B strategy, as shown in the left part of the figure. Merging these two pipelines then yields the pipeline schedule of Chimera. Given an even number of stages D (which is easily satisfied in practice), the merge is guaranteed to be conflict-free: at most one micro-batch occupies any given time slot on each worker.
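The bidirectional mapping and the merge can be sketched in code. This is a simplified model of our own, not the paper's exact timetable: we assume unit-time forward and backward passes and resolve the merged schedule with a greedy list scheduler, whereas Chimera merges two fixed 1F1B schedules (with backward passes roughly twice as long as forward). The sketch shows the stage-to-worker mapping of the two pipelines, the equal partition of micro-batches, and that every worker runs at most one task per time slot.

```python
# Simplified Chimera-style bidirectional pipeline sketch (our assumptions:
# D = N = 4 as in the figure, unit-time forward/backward, greedy scheduling).

D = 4  # pipeline stages (one stage per worker here)
N = 4  # micro-batches per training iteration, N = D (the minimum)

def stage_to_worker(stage, direction):
    """Down pipeline: stage s -> worker s; up pipeline: reversed mapping."""
    return stage if direction == "down" else D - 1 - stage

# The N micro-batches are partitioned equally between the two pipelines.
pipelines = {"down": list(range(N // 2)), "up": list(range(N // 2, N))}

# Task graph: F(m, s) depends on F(m, s-1); B(m, s) depends on B(m, s+1);
# B on the last stage depends on F on the last stage.
tasks = {}  # (kind, micro-batch, stage, direction) -> dependency key or None
for direction, mbs in pipelines.items():
    for m in mbs:
        for s in range(D):
            tasks[("F", m, s, direction)] = (
                ("F", m, s - 1, direction) if s > 0 else None)
            tasks[("B", m, s, direction)] = (
                ("B", m, s + 1, direction) if s < D - 1
                else ("F", m, D - 1, direction))

# Greedy scheduling: each slot, run any ready task whose worker is free.
finish = {}  # task -> slot in which it ran
busy = {}    # (worker, slot) -> task, i.e. one task per worker per slot
slot = 0
while len(finish) < len(tasks):
    for task, dep in tasks.items():
        if task in finish:
            continue
        if dep is not None and (dep not in finish or finish[dep] >= slot):
            continue  # dependency has not completed before this slot
        worker = stage_to_worker(task[2], task[3])
        if (worker, slot) not in busy:
            busy[(worker, slot)] = task
            finish[task] = slot
    slot += 1

# Every task ran, and no worker ever ran two tasks in the same slot.
assert len(finish) == 2 * N * D
assert len(busy) == len(finish)
```

Note how the reversed mapping lets the up pipeline's early forward passes fill workers that the down pipeline leaves idle (and vice versa), which is the source of Chimera's reduced bubble count.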