```python
from ultralytics import YOLO
from torchinfo import summary

# Load the yolov10n model.
model = YOLO("yolov10n.pt")

# It is a PyTorch model, so we count the parameters directly.
pytorch_total_params = sum(p.numel() for p in model.parameters())

# Passing `model` itself would trigger model.train, which is not what we want,
# so pass the underlying nn.Module instead.
model_summary = summary(
    model.model,
    # input_data="path/to/bus.jpg",
    input_size=(1, 3, 640, 640),
    col_names=("input_size", "output_size", "num_params"),
)

with open("summary.txt", "w", encoding="utf-8") as the_file:
    the_file.write(
        f"{model_summary}\r\n"
        f"Total Params (torch): {pytorch_total_params}\r\n"
        f"Total Params (info): {model.info()}"
    )
```
My actual ipynb is a lot more sophisticated, including intercepting the input data within the pipeline, but the counts should remain the same. The inaccurate results look like 2.0B vs. 860M (SD1), 2.1B vs. 865M (SD2), and 5.3B vs. 2.6B (SDXL).
Describe the bug
Total params of the model may be overestimated by up to 2.37x for several models, while other models remain accurate. I wonder if there is something common across these models that yields a double count and hence the overestimated result.
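One plausible source of such a double count (an assumption on my part, not something I have confirmed in these models) is weight sharing: a naive per-module parameter walk counts a tied tensor once per owning module, while `Module.parameters()` deduplicates it. A minimal sketch:

```python
import torch.nn as nn

class TiedMLP(nn.Module):
    """Two linear layers sharing one weight tensor (weight tying)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(8, 8, bias=False)   # 64 weights
        self.dec = nn.Linear(8, 8, bias=False)
        self.dec.weight = self.enc.weight        # tie: same Parameter object

m = TiedMLP()

# parameters() deduplicates shared tensors, so the tied weight is counted once.
dedup = sum(p.numel() for p in m.parameters())

# A naive per-module walk counts the shared weight once per owning module.
naive = sum(p.numel()
            for mod in m.modules()
            for p in mod.parameters(recurse=False))

print(dedup, naive)  # 64 128
```

If the affected models tie weights between submodules, a per-module count would inflate in exactly this way.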
To Reproduce
Expected behavior
Viewing the generated summary.txt shows the inconsistent result, which is overestimated by 1.78x. The official figure claims 2.3M, but model.info() uses the same numel() approach and also gives 2775520.
Runtime environment
conda