
Convert to AWNN #27

Open
diazGT94 opened this issue Nov 24, 2021 · 3 comments

Comments


diazGT94 commented Nov 24, 2021

I trained a custom YOLOv2 detector with 2 classes using the tools provided in this repository, but I had to change the input image size to 416x416 to get better results for one of the classes during training. I then used the export.py script to convert my model to ONNX, which the logs reported was built successfully; the same script also converted the ONNX model to NCNN and gave me two files (.bin and .param). However, when I uploaded these files to the AWNN online converter tool, I got the following error. Is the error caused by the input size of the image?

[screenshot: error reported by the AWNN online converter]

Here is what my .param file looks like, along with the structure of the folder I am trying to upload to the converter.

[screenshot: folder structure being uploaded to the converter]

```
28 28
Input            input0                   0 1 input0 0=416 1=416 2=3
Convolution      Conv_0                   1 1 input0 148 0=32 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=864
ReLU             LeakyRelu_1              1 1 148 110 0=1.000000e-01
Convolution      Conv_2                   1 1 110 151 0=32 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=9216
ReLU             LeakyRelu_3              1 1 151 113 0=1.000000e-01
Convolution      Conv_4                   1 1 113 154 0=64 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=18432
ReLU             LeakyRelu_5              1 1 154 116 0=1.000000e-01
Convolution      Conv_6                   1 1 116 157 0=64 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=36864
ReLU             LeakyRelu_7              1 1 157 119 0=1.000000e-01
Convolution      Conv_8                   1 1 119 160 0=128 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=73728
ReLU             LeakyRelu_9              1 1 160 122 0=1.000000e-01
Convolution      Conv_10                  1 1 122 163 0=128 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=147456
ReLU             LeakyRelu_11             1 1 163 125 0=1.000000e-01
Convolution      Conv_12                  1 1 125 166 0=256 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=294912
ReLU             LeakyRelu_13             1 1 166 128 0=1.000000e-01
Convolution      Conv_14                  1 1 128 169 0=256 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=589824
ReLU             LeakyRelu_15             1 1 169 131 0=1.000000e-01
Convolution      Conv_16                  1 1 131 172 0=512 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=1179648
ReLU             LeakyRelu_17             1 1 172 134 0=1.000000e-01
Convolution      Conv_18                  1 1 134 175 0=512 1=3 11=3 2=1 12=1 3=2 13=2 4=1 14=1 15=1 16=1 5=1 6=2359296
ReLU             LeakyRelu_19             1 1 175 137 0=1.000000e-01
Convolution      Conv_20                  1 1 137 178 0=512 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=2359296
ReLU             LeakyRelu_21             1 1 178 140 0=1.000000e-01
Convolution      Conv_22                  1 1 140 181 0=512 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=2359296
ReLU             LeakyRelu_23             1 1 181 143 0=1.000000e-01
Convolution      Conv_24                  1 1 143 184 0=512 1=3 11=3 2=1 12=1 3=1 13=1 4=1 14=1 15=1 16=1 5=1 6=2359296
ReLU             LeakyRelu_25             1 1 184 146 0=1.000000e-01
Convolution      Conv_26                  1 1 146 output0 0=35 1=1 11=1 2=1 12=1 3=1 13=1 4=0 14=0 15=0 16=0 5=1 6=17920
```
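
To rule out the input size on my side, one quick check is to inspect the input shape that export.py actually baked into the ONNX file. Here is a minimal sketch using the onnx Python package; the file name is just a placeholder for my exported model:

```python
# Minimal sketch: print the input tensors of the exported ONNX model
# to confirm which shape was exported (the file name is a placeholder).
import onnx

model = onnx.load("yolov2_custom.onnx")
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # expected something like: input0 [1, 3, 416, 416]
```

For reference, the Input layer in the .param above already reports 0=416 1=416 2=3, which ncnn reads as width/height/channels, i.e. a 416x416x3 input, so the exported shape matches what I trained with.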


Q2Learn commented Dec 27, 2021

Hi,

Could you please provide some more information on how you trained a custom YOLOv2 model for the MAIX-II v831? I checked the repository https://github.com/sipeed/maix_train/tree/master/pytorch and there isn't a lot of information there.

Thanks.

diazGT94 (Author) commented

Hello @Q2Learn, here are the steps you need for training:

  1. Arrange your dataset in a folder with this structure (see the sketch after this list for one way to generate the ImageSets split files):
     • Annotations (all the .xml annotation files)
     • ImageSets > Main (two .txt files with the filenames for training and validation)
     • JPEGImages (all your images)

  2. In the train.py script from the repo, modify these lines with your custom classes. This step is a must:

         classes = ["right", "left", "back", "front", "others"]  # replace with your custom classes
         dataset_name = "lobster_5classes"                       # replace with the name of the folder that contains your dataset

  3. In the same script you can also modify the following lines. This is optional:

         train = Train(classes,
                       "yolov2_slim",                # network variant
                       dataset_name,
                       batch_size=32,
                       anchors=anchors,              # anchors come from earlier in train.py
                       input_shape=(3, 224, 224))    # (channels, height, width)
         train.train(260, eval_every_epoch=5, save_every_epoch=5)

  4. Run the script; this will create an output folder with the saved weights.
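
In case it helps with step 1: the two files under ImageSets > Main are plain-text lists of image base names (one per line), following the usual Pascal VOC layout. Below is a minimal sketch for generating them; the 80/20 split and the train.txt / val.txt file names are my assumptions, so match them to whatever train.py in maix_train actually reads:

```python
# Minimal sketch: create the ImageSets/Main split files for a VOC-style
# dataset folder. Paths, file names, and the 80/20 split are assumptions;
# adjust them to whatever train.py expects.
import os
import random

dataset_dir = "lobster_5classes"          # your dataset folder
images_dir = os.path.join(dataset_dir, "JPEGImages")
split_dir = os.path.join(dataset_dir, "ImageSets", "Main")
os.makedirs(split_dir, exist_ok=True)

# Collect image base names (file names without the extension).
names = [os.path.splitext(f)[0] for f in os.listdir(images_dir)
         if f.lower().endswith((".jpg", ".jpeg", ".png"))]
random.shuffle(names)

split = int(len(names) * 0.8)             # assumed 80/20 train/val split
for filename, subset in (("train.txt", names[:split]), ("val.txt", names[split:])):
    with open(os.path.join(split_dir, filename), "w") as f:
        f.write("\n".join(subset) + "\n")
```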

Let me know if you need more help.


Q2Learn commented Jan 20, 2022

Hey @diazGT94 thanks for your reply! Will let you know how it goes.
