forked from EleutherAI/DeeperSpeed
universal-ckp: support megatron-deepspeed llama model (microsoft#4666)
Megatron-DeepSpeed's llama implementation of swiglu allocates a single ColumnParallelLinear layer L, but this parameter is effectively a container for two linear layers L1 and L2, used as silu(L1(x)) * L2(x). This requires special handling in ds_to_universal: the tensor-parallel slices of L1 and of L2 are each concatenated first, and L is then created by concatenating L1 and L2.

Signed-off-by: Moshe Island <[email protected]>
Co-authored-by: Moshe Island <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
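The merge described above can be sketched as follows. This is a minimal illustration, not DeepSpeed's actual ds_to_universal code: the function name and NumPy representation are assumptions; it only shows why naive concatenation of the fused swiglu shards would interleave L1 and L2 pieces, and how splitting each shard first avoids that.

```python
import numpy as np

def merge_swiglu_shards(shards):
    """Rebuild the fused swiglu parameter L = [L1; L2] from TP shards.

    Each tensor-parallel rank holds a shard laid out as
    [L1_slice; L2_slice] along the output dimension, so naively
    concatenating the shards would interleave L1 and L2 pieces.
    Instead: split each shard into its L1/L2 halves, concatenate the
    L1 slices across ranks, then the L2 slices, then stack the two.
    """
    halves = [np.split(s, 2, axis=0) for s in shards]
    l1 = np.concatenate([h[0] for h in halves], axis=0)
    l2 = np.concatenate([h[1] for h in halves], axis=0)
    return np.concatenate([l1, l2], axis=0)
```

With two ranks, each shard stacks a quarter of L1 on top of a quarter of L2; the helper recovers the full [L1; L2] ordering that a single-GPU (universal) checkpoint expects.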
1 parent 00e7dc5 · commit ce5e56a
Showing 3 changed files with 32 additions and 5 deletions.