This is the implementation of our paper: https://arxiv.org/pdf/2012.02840.pdf
Operating system: Red Hat Enterprise Linux (RHEL) 7.7
- python==3.6.12
- scikit-learn==0.22.1
- pytorch==1.5.1
- scipy==1.4.1
To train our model, run:
python ./src/baseline.py -m [model name] -ls [loss name] -l [fully connected layers] -b [batch size] -f [filter number] -k [filter size] -ft [feature type]
- -m: specifies the name of the model (can be ConvModel or SpannyConvModel)
- -ls: specifies the name of the loss (can be "HingeLoss1", "HingeLoss2", "HingeLoss3", or "MeanSquare")
- -l: specifies the fully connected layers (e.g., "[64]" for a single FC layer with 64 hidden units, or "[64, 8]" for two FC layers with 64 and 8 hidden units, respectively)
- -b: specifies the batch size used during training
- -f: specifies the number of filters in ConvModel and SpannyConvModel
- -k: specifies the size of the filters
- -ft: specifies the feature type used by the model (can be "Blosum", "Learned", "One-hot", or combinations of these joined with "_", e.g., "Blosum_Learned")
We have not yet organized or cleaned the code. The current version runs successfully; a neat and clean version will be released later.