
Error when auto-compressing the PPOCRv4_rec OCR model #1883

Open
huangguifeng opened this issue Jul 8, 2024 · 2 comments
@huangguifeng

2024-07-08 18:00:06,589-INFO: Selected strategies: ['qat_dis']
Traceback (most recent call last):
  File "run.py", line 171, in <module>
    main()
  File "run.py", line 164, in main
    ac.compress()
  File "/root/miniconda3/lib/python3.8/site-packages/paddleslim/auto_compression/compressor.py", line 586, in compress
    self.single_strategy_compress(strategy, config, strategies_idx,
  File "/root/miniconda3/lib/python3.8/site-packages/paddleslim/auto_compression/compressor.py", line 769, in single_strategy_compress
    train_program_info, test_program_info = self._prepare_program(
  File "/root/miniconda3/lib/python3.8/site-packages/paddleslim/auto_compression/compressor.py", line 506, in _prepare_program
    train_program_info, test_program_info = build_distill_program(
  File "/root/miniconda3/lib/python3.8/site-packages/paddleslim/auto_compression/create_compressed_program.py", line 327, in build_distill_program
    train_program, data_name_map = _load_program_and_merge(
  File "/root/miniconda3/lib/python3.8/site-packages/paddleslim/auto_compression/create_compressed_program.py", line 219, in _load_program_and_merge
    merge(
  File "/root/miniconda3/lib/python3.8/site-packages/paddleslim/dist/single_distiller.py", line 197, in merge
    student_program.global_block().append_op(
  File "/root/miniconda3/lib/python3.8/site-packages/paddle/fluid/framework.py", line 4013, in append_op
    op = Operator(
  File "/root/miniconda3/lib/python3.8/site-packages/paddle/fluid/framework.py", line 2889, in __init__
    raise ValueError(
ValueError: Incorrect setting for output(s) of operator "dropout", should set: [Mask].
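The ValueError at the bottom of the traceback means that when PaddleSlim merged the teacher program into the student program, it appended a dropout op without the Mask output that the op declares. A minimal self-contained sketch of this validation pattern (a mock for illustration only, not Paddle's actual Operator code; REQUIRED_OUTPUTS and append_op are hypothetical names):

```python
# Mock of the output check that Paddle's Operator.__init__ performs:
# every declared output slot of an op must be supplied when the op is
# appended to a block, otherwise a ValueError is raised.

# Assumption for illustration: dropout declares "Out" and "Mask" outputs.
REQUIRED_OUTPUTS = {"dropout": ["Out", "Mask"]}

def append_op(op_type, outputs):
    """Append an op, validating that all declared outputs are provided."""
    missing = [o for o in REQUIRED_OUTPUTS.get(op_type, []) if o not in outputs]
    if missing:
        raise ValueError(
            f'Incorrect setting for output(s) of operator "{op_type}", '
            f"should set: {missing}."
        )
    return {"type": op_type, "outputs": dict(outputs)}

# The merged program supplies only "Out", so validation fails:
try:
    append_op("dropout", {"Out": "dropout_0.tmp_0"})
except ValueError as e:
    print(e)  # mentions the missing Mask output
```

In the real failure the dropout op comes from the exported teacher/student inference program, so the usual fix direction is to export the model without training-only ops such as dropout, or to use a PaddleSlim/Paddle version pair whose merge step handles them.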

@huangguifeng
Author

Also, when auto-compressing the det model, the recall stays at 0:
2024-07-08 18:17:02,228-INFO: metric eval ***************
2024-07-08 18:17:02,228-INFO: precision:0.0
2024-07-08 18:17:02,228-INFO: recall:0.0
2024-07-08 18:17:02,228-INFO: hmean:0
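The hmean in the log is the harmonic mean (F1) of precision and recall, so zero precision and zero recall force it to 0 as well. A minimal sketch of the standard F-measure computation (not PaddleOCR's actual metric code):

```python
def hmean(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1), defined as 0
    when both inputs are 0 to avoid division by zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(hmean(0.0, 0.0))  # → 0.0, matching the log line "hmean:0"
print(hmean(0.9, 0.8))
```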

@huangguifeng
Author

I ran this example as-is, with the model provided in the documentation, but it fails with the error below: https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.8/deploy/slim/auto_compression/run.py

Global:
model_type: det
model_dir: C:\Users\Administrator\Downloads\det_server_qat_v100
model_filename: inference.pdmodel
params_filename: inference.pdiparams
algorithm: DB


C++ Traceback (most recent call last):

0 paddle::framework::StandaloneExecutor::Run(paddle::framework::Scope*, std::vector<std::string, std::allocator<std::string > > const&, std::vector<std::string, std::allocator<std::string > > const&)
1 paddle::framework::InterpreterCore::Run(std::vector<std::string, std::allocator<std::string > > const&, bool)
2 paddle::framework::interpreter::BuildOpFuncList(phi::Place const&, paddle::framework::BlockDesc const&, std::set<std::string, std::less<std::string >, std::allocator<std::string > > const&, std::vector<paddle::framework::OpFuncNode, std::allocator<paddle::framework::OpFuncNode> >*, paddle::framework::VariableScope*, paddle::framework::interpreter::ExecutionConfig const&, bool, bool)
3 paddle::framework::StructKernelImpl<paddle::operators::FakeQuantizeMovingAverageAbsMaxKernel<float, phi::GPUContext>, void>::Compute(phi::KernelContext*)
4 paddle::operators::FakeMovingAverageAbsMaxKernelBase<float, phi::GPUContext>::Compute(paddle::framework::ExecutionContext const&) const
5 phi::DenseTensor::mutable_data(phi::Place const&, phi::DataType, unsigned long)
6 phi::DenseTensor::set_type(phi::DataType)


Error Message Summary:

FatalError: Segmentation fault is detected by the operating system.
[TimeInfo: *** Aborted at 1720491686 (unix time) try "date -d @1720491686" if you are using GNU date ***]
[SignalInfo: *** SIGSEGV (@0x68) received by PID 40490 (TID 0x7f739046a340) from PID 104 ***]

Is there something I need to change here? Why does running it report this error? And if I use a model I trained myself, the recall stays at 0.
