Performance Bottleneck in Signal Extraction from Large MDF Files #1108
Comments
Hello Martin,
regarding the processing speed: I would also like to have better performance, but at the moment this is the best I could do :(
Hi Daniel,
Thank you for your prompt response. Is the commit you mentioned in the dev branch? In addition to the package-based recommendation, do you think there is potential to implement multiprocessing or a similar approach in the extraction process? For instance, extracting a single signal is relatively fast, but could we extract multiple signals concurrently to improve performance? Also, I wanted to mention that your package is absolutely amazing :)
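A rough sketch of the batch/concurrent extraction asked about above, assuming a placeholder file path and placeholder channel names: asammdf's `MDF.select()` pulls several channels in one pass, and a process pool can split the channel list across workers, each opening its own `MDF` handle.

```python
# Rough sketch only: batch extraction with MDF.select() plus an optional
# process-pool variant. The file path and channel names are placeholders.
from concurrent.futures import ProcessPoolExecutor

from asammdf import MDF

MDF_PATH = "measurement.mf4"                # placeholder path
CHANNELS = ["EngineSpeed", "VehicleSpeed"]  # placeholder channel names


def extract_batch(path, names):
    """Fetch all requested channels in a single pass with MDF.select()."""
    with MDF(path) as mdf:
        return {sig.name: sig for sig in mdf.select(names)}


def _worker(args):
    # Each worker opens its own MDF handle; handles cannot be shared
    # across processes.
    path, names = args
    if not names:
        return {}
    return extract_batch(path, names)


def extract_parallel(path, names, workers=4):
    """Split the channel list across processes and merge the results."""
    chunks = [names[i::workers] for i in range(workers)]
    merged = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for part in pool.map(_worker, [(path, chunk) for chunk in chunks]):
            merged.update(part)
    return merged


if __name__ == "__main__":  # guard is required for multiprocessing on Windows
    signals = extract_batch(MDF_PATH, CHANNELS)
    print({name: len(sig.samples) for name, sig in signals.items()})
```

`select()` is usually cheaper than calling `get()` once per channel because shared data blocks only have to be read once; whether the process pool adds anything on top depends on whether the time is spent decoding samples or waiting on disk I/O.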
@mbrettsc please try out these wheels and see if you notice any performance improvements.
@mbrettsc please try the new wheels found here: https://github.com/danielhrisca/asammdf/actions/runs/12299840252
On my PC, processing a large file went from ~50 MB/s to ~130 MB/s.
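To put figures like the ~50 MB/s to ~130 MB/s comparison above on one's own files, one rough approach (not from the thread; the path and channel names are placeholders) is to time a fixed extraction workload against the file size:

```python
# Crude throughput check: wall-clock time for a fixed extraction workload
# versus the file size on disk. Path and channel names are placeholders.
import time
from pathlib import Path

from asammdf import MDF

path = Path("measurement.mf4")
channels = ["EngineSpeed", "VehicleSpeed"]

start = time.perf_counter()
with MDF(path) as mdf:
    mdf.select(channels)
elapsed = time.perf_counter() - start

size_mb = path.stat().st_size / 1e6
print(f"{size_mb / elapsed:.1f} MB/s ({size_mb:.0f} MB in {elapsed:.2f} s)")
```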
Python version
python=3.12.7 | packaged by Anaconda, Inc. | (main, Oct 4 2024, 13:17:27) [MSC v.1929 64 bit (AMD64)]
os=Windows-11-10.0.22631-SP0
numpy=1.26.4
asammdf=8.0.1
Code
MDF version
4.10
Code snippet
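A minimal sketch of the per-signal extraction loop described under Description below; the file path and channel names are hypothetical:

```python
# Placeholder sketch of the per-signal extraction pattern described below;
# with many channels on a large file, the repeated get() calls add up.
from asammdf import MDF

path = "measurement.mf4"                             # placeholder path
channels = ["EngineSpeed", "VehicleSpeed", "Brake"]  # placeholder names

with MDF(path) as mdf:
    signals = {name: mdf.get(name) for name in channels}

for name, sig in signals.items():
    print(name, sig.samples.shape, sig.timestamps.shape)
```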
Description
I work with large MDF files and need to extract several signals from them. As the number of signals increases, the execution time grows rapidly. This performance bottleneck becomes problematic with large-scale data processing.
What are the best practices for handling such scenarios efficiently, considering the growing size of the files and the increasing number of signals?
Best regards,
Martin