In the complete example you provided, training on the HH dataset ran for 160,000 steps. However, when I trained on the SHP dataset, only 32,500 steps were completed, even though the SHP training set is roughly twice the size of the HH training set. What could be the reason for this difference?
My command: python -u train.py model=pythia28 datasets=[shp] loss=sft exp_name=shp_sft gradient_accumulation_steps=4 batch_size=24 eval_batch_size=24 trainer=FSDPTrainer sample_during_eval=false model.fsdp_policy_mp=bfloat16
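For reference, here is a minimal sketch of the step arithmetic I would expect, assuming "steps" counts optimizer steps over a single epoch and that batch_size is the effective per-step batch (with gradient_accumulation_steps only splitting each step into micro-batches). The dataset sizes and the HH batch size below are placeholders, not values taken from the logs:

```python
# Back-of-the-envelope estimate of expected training steps (a sketch;
# the dataset sizes and the meaning of "step" are assumptions).

def expected_steps(num_examples: int, batch_size: int, n_epochs: int = 1) -> int:
    """Steps per run, assuming batch_size examples are consumed per step.

    Gradient accumulation splits each step into micro-batches but does
    not change the number of examples consumed per step under this model.
    """
    return (num_examples * n_epochs) // batch_size

# Hypothetical sizes; replace with the actual (post-filtering) dataset sizes.
hh_train_examples = 160_000
shp_train_examples = 2 * hh_train_examples  # "twice the size", per the question

print(expected_steps(hh_train_examples, batch_size=64))   # hypothetical HH config
print(expected_steps(shp_train_examples, batch_size=24))  # the SHP run above
```

If the logged SHP step count is far below an estimate like this, one thing worth checking is whether the SHP data loader filters or subsamples preference pairs before training, since the effective training set could then be much smaller than the raw dataset.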