
Suspected memory leak #11

Open

crazybill-first opened this issue Sep 9, 2022 · 10 comments

@crazybill-first

I tested a set of videos with this project, using yolov5s as the detector. The project has a slow memory leak: after running for more than two hours it becomes noticeably slower, and memory usage keeps increasing.

@crazybill-first (Author)

When I don't use tracking, the system runs stably.

@Pe4nutZz

Was this problem ever resolved?

@lp6m commented Nov 14, 2022

Please describe the details.

@Pe4nutZz

It's the memory leak described by the user above; I ran into the same thing when using this project.

@lp6m commented Nov 14, 2022

Please provide a detailed situation about what kind of memory leak occurred and under what circumstances.
We welcome pull requests.

@AliaChen

I also suspect a memory leak. Has anyone solved it?

@gujiacheng

The std::vector member removed_stracks_ of the BYTETracker class is never cleaned up in the update method, so its size keeps growing as update is called repeatedly, consuming more and more memory until the process runs out of memory (OOM). I added a simple check at the end of the method:

if (removed_stracks_.size() > 500) {
    decltype(removed_stracks_)().swap(removed_stracks_);
}

After that, memory usage and frame rate stay stable and the problem goes away. Of course, this is only a temporary workaround; removed_stracks_ should be maintained properly once the actual ByteTracker logic is understood.
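For context on why the workaround above swaps with an empty vector instead of calling clear(): clear() only resets the size and keeps the underlying allocation, while the swap releases the capacity as well. A small standalone demonstration (plain C++, not project code):

#include <cstdio>
#include <vector>

int main()
{
    std::vector<int> v(1000000, 42);

    v.clear();                   // size -> 0, but the allocation is kept
    std::printf("after clear: size=%zu capacity=%zu\n", v.size(), v.capacity());

    std::vector<int>().swap(v);  // swapping with an empty temporary frees the allocation
    std::printf("after swap:  size=%zu capacity=%zu\n", v.size(), v.capacity());
}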

@gujiacheng

> Please provide a detailed situation about what kind of memory leak occurred and under what circumstances. We welcome pull requests.

Note that the size of the removed_stracks_ vector in BYTETracker.h keeps rising as the update function is called repeatedly.

@yuanxmangel

When processing multiple videos in a loop, how exactly can I release all the memory used by the bytetrack algorithm as soon as one video is finished?
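One way to do this (a sketch, not project code) is to give each video its own BYTETracker instance so that all of its internal track containers are destroyed when it goes out of scope. The include path, namespace, and constructor arguments below are assumptions based on common ByteTrack C++ ports and may differ in this repository:

#include <string>
#include <vector>

#include <ByteTrack/BYTETracker.h>  // include path assumed; adjust to this repo

void process_videos(const std::vector<std::string>& video_paths)
{
    for (const auto& path : video_paths)
    {
        // One tracker per video; nothing carries over between videos.
        byte_track::BYTETracker tracker(/*frame_rate=*/30, /*track_buffer=*/30);

        // ... open `path`, run the yolov5s detector on each frame, and call
        // tracker.update(...) once per frame with that frame's detections ...

    }   // `tracker` is destroyed here, freeing its tracked/lost/removed track storage.
        // Note: growth of removed_stracks_ within a single long video still needs
        // the bound discussed above.
}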

@JUZXF commented Apr 23, 2024

Hello, does update in the code need to be called once per box? I currently have multiple targets, but it only tracks one box.
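In ByteTrack-style C++ ports, update() is normally called once per frame with a vector containing every detection of that frame, not once per box, and the returned tracks then carry one ID per target. A minimal sketch, assuming a byte_track::Object / Rect interface like the common ByteTrack C++ ports; the Detection struct, constructor shape, and include path are illustrative assumptions:

#include <vector>

#include <ByteTrack/BYTETracker.h>  // include path assumed; adjust to this repo

// Hypothetical per-frame output of the YOLOv5s detector (illustrative only).
struct Detection
{
    float x, y, w, h, score;
    int class_id;
};

// Call once per frame with ALL detections of that frame.
auto track_frame(byte_track::BYTETracker& tracker, const std::vector<Detection>& dets)
{
    std::vector<byte_track::Object> objects;
    objects.reserve(dets.size());
    for (const auto& d : dets)
    {
        // Object(rect, label, prob) -- constructor shape assumed.
        objects.emplace_back(byte_track::Rect<float>(d.x, d.y, d.w, d.h),
                             d.class_id, d.score);
    }
    // A single update() call handles all boxes; each returned track keeps its own ID.
    return tracker.update(objects);
}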
