Currently, the library only supports Resnet_Glove, CLIP, and CLIP_Slowfast features for the ActivityNet and Charades datasets. However, several papers use other feature types, such as I3D (for Charades-STA) and C3D (for ActivityNet), as noted in Table 2 of the EaTR paper (link to paper). Expanding support to include these features would align with a broader range of existing research.
Additionally, I noticed that for ActivityNet, many works report results on the val_2 split, while only the val split is provided here, which I believe corresponds to val_1. Could you clarify this? How can I access the val_2 split?
Thank you for your efforts in creating this unified framework—it’s much appreciated!
> Currently, the library only supports Resnet_Glove, CLIP, and CLIP_Slowfast features for the ActivityNet and Charades datasets. However, several papers use other feature types, such as I3D (for Charades-STA) and C3D (for ActivityNet), as noted in Table 2 of the EaTR paper (link to paper). Expanding support to include these features would align with a broader range of existing research.
I see. Thank you for your advice.
I will release the trained weights based on I3D and C3D features. Let me take some time to train the models.
> Additionally, I noticed that for ActivityNet, many works report results on the val_2 split, while only the val split is provided here, which I believe corresponds to val_1. Could you clarify this? How can I access the val_2 split?
Thanks. Currently, val_2 is not included, but it is easy to implement by creating activitynet_val_release.jsonl based on val_2.json. I will release the val_2 results in the future, so please give me some time.
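A conversion along those lines could be sketched as below. This is a minimal sketch, not the library's actual converter: the jsonl field names (`qid`, `query`, `vid`, `duration`, `relevant_windows`) and the val_2.json layout (video id → `duration`/`timestamps`/`sentences`) are assumptions, so check an existing `*_release.jsonl` line in the repo for the exact schema before relying on it.

```python
import json

def convert_val2_to_jsonl(val2_path: str, out_path: str) -> None:
    """Flatten ActivityNet-Captions-style val_2.json into one query per jsonl line.

    Assumed input layout: {video_id: {"duration": float,
                                      "timestamps": [[start, end], ...],
                                      "sentences": [str, ...]}}
    Field names in the output are assumptions modeled on moment-retrieval
    release files; verify against the repo's activitynet_val_release.jsonl.
    """
    with open(val2_path) as f:
        anns = json.load(f)

    qid = 0
    with open(out_path, "w") as out:
        for vid, ann in anns.items():
            # One jsonl entry per (sentence, timestamp) pair for this video.
            for window, sentence in zip(ann["timestamps"], ann["sentences"]):
                entry = {
                    "qid": qid,
                    "query": sentence.strip(),
                    "vid": vid,
                    "duration": ann["duration"],
                    "relevant_windows": [window],
                }
                out.write(json.dumps(entry) + "\n")
                qid += 1
```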
While working with your code, I noticed that contrastive_align_loss is missing in models like MomentDETR, QD-DETR, and CG-DETR (reference to original repo). Could you clarify why this was removed and whether it impacts the models' performance?
@alberto-mate Hi, thanks. contrastive_align_loss is not set to True in their train.sh, so the contrastive loss is never computed. My guess is that the authors introduced it, but the loss did not help. I removed it from our codebase for refactoring purposes. In addition, removing these losses does not impact the models' performance. For details, see our paper.
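The gating the reply describes can be illustrated with a small sketch. This is not the actual moment_detr/QD-DETR code; the function and flag names here are placeholders showing the pattern where a config flag left at its default means an auxiliary loss term is simply never added:

```python
# Illustrative sketch only: how a config flag can gate an auxiliary loss.
# `loss_fns` maps loss names to callables; names are assumptions, not the
# original repos' API.
def compute_losses(outputs, targets, cfg, loss_fns):
    # The span loss is always computed.
    losses = {"span": loss_fns["span"](outputs, targets)}
    # With contrastive_align_loss absent or False (as in the original
    # train.sh), this branch is skipped and the term never contributes.
    if cfg.get("contrastive_align_loss", False):
        losses["contrastive_align"] = loss_fns["contrastive_align"](outputs, targets)
    return losses
```

Since the flag defaults to off in the released training scripts, dropping the branch entirely (as done here during refactoring) leaves the computed objective unchanged.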