Not at the moment. However, it's pretty easy to wrap any model from the transformers library. For example, this should work for ViT:
```python
from transformers import ViTConfig, ViTModel

from swag_transformers.base import SwagConfig, SwagModel

MODEL_TYPE = 'swag_vit'


class SwagViTConfig(SwagConfig):
    """Config for ViT model averaging with SWAG"""

    model_type = MODEL_TYPE
    internal_config_class = ViTConfig


class SwagViTModel(SwagModel):
    """SWAG ViT model"""

    config_class = SwagViTConfig
    base_model_prefix = MODEL_TYPE
    internal_model_class = ViTModel


model = ViTModel.from_pretrained('WinKawaks/vit-tiny-patch16-224')
swag_model = SwagViTModel.from_base(model, no_cov_mat=False)
```
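For background on what the wrapper tracks: SWAG maintains running first and second moments of the weights over training snapshots and uses them as a Gaussian posterior over parameters. A minimal NumPy sketch of that bookkeeping, independent of this library (the function name `swag_moments` is just for illustration):

```python
import numpy as np

def swag_moments(weight_snapshots):
    """Compute the running mean and diagonal variance of weight snapshots,
    the two statistics SWAG collects to form a Gaussian over weights."""
    mean = np.zeros_like(weight_snapshots[0], dtype=float)
    sq_mean = np.zeros_like(weight_snapshots[0], dtype=float)
    for n, w in enumerate(weight_snapshots, start=1):
        # Incremental updates of E[w] and E[w^2] over the snapshots seen so far
        mean += (w - mean) / n
        sq_mean += (w * w - sq_mean) / n
    # Diagonal covariance, clamped at zero for numerical safety
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    return mean, var
```

With `no_cov_mat=False` the library additionally keeps a low-rank covariance term on top of this diagonal estimate.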
New models can also be added to the library itself via pull requests.
Hi, does this repository also contain SWAG with Vision Transformers?