Process Crash Due to Insufficient Memory #60
Hi @avizack18, Thanks for your question. The recommendation is to have 8 GB of RAM: https://github.com/volatilityfoundation/dwarf2json/blob/master/README.md?plain=1#L27. You do not need to have a large swap file, or to use one at all. Most modern systems have at least 8 GB of RAM. If you are using a VM, make sure to allocate at least 8 GB of RAM to it to avoid the OOM error you are seeing. Out of curiosity, how much RAM does your system have?
While I have 16 GB of RAM installed, a significant portion is being consumed by virtual machines (VMs) running on my system. This leaves less than 4 GB of free memory available, which seems to be what causes the process to crash.
Current limitations:
Alternative solutions:
Looking forward to your thoughts!
Hi, I also encountered very high RAM usage while using dwarf2json. I might be wrong, but it seems like everything is stored in memory, which eventually spills into swap (if configured) and slows down the system. Would a toggleable temporary cache make sense? It could store already-processed information on disk, chunk by chunk, while freeing data that is no longer needed. That could keep memory usage around 2-3 GB of RAM and ensure nothing goes to swap.
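For what it's worth, here is a minimal Go sketch of the kind of toggleable spill-to-disk cache this comment suggests. The `SpillCache` type, its fields, and the JSON-lines temp file format are all hypothetical illustrations, not anything that exists in dwarf2json:

```go
// Hypothetical sketch of a toggleable spill-to-disk cache: processed records
// are appended to a temporary file in chunks so they can be released from
// memory instead of accumulating until the final write.
package main

import (
	"bufio"
	"encoding/json"
	"os"
)

// SpillCache buffers records in memory and flushes them to a temp file
// once the buffer reaches chunkSize entries (when caching is enabled).
type SpillCache struct {
	enabled   bool
	chunkSize int
	buf       []json.RawMessage
	file      *os.File
	w         *bufio.Writer
}

func NewSpillCache(enabled bool, chunkSize int) (*SpillCache, error) {
	c := &SpillCache{enabled: enabled, chunkSize: chunkSize}
	if enabled {
		f, err := os.CreateTemp("", "dwarf2json-cache-*.jsonl")
		if err != nil {
			return nil, err
		}
		c.file = f
		c.w = bufio.NewWriter(f)
	}
	return c, nil
}

// Add stores one already-processed record; when the in-memory buffer is
// full, the whole chunk is written out and its memory released.
func (c *SpillCache) Add(rec json.RawMessage) error {
	c.buf = append(c.buf, rec)
	if c.enabled && len(c.buf) >= c.chunkSize {
		return c.flush()
	}
	return nil
}

func (c *SpillCache) flush() error {
	for _, r := range c.buf {
		if _, err := c.w.Write(r); err != nil {
			return err
		}
		if err := c.w.WriteByte('\n'); err != nil {
			return err
		}
	}
	c.buf = c.buf[:0] // drop references so the GC can reclaim the chunk
	return c.w.Flush()
}
```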
Hi, Yes, dwarf2json currently stores the JSON in memory before writing it out. Given the current constraints, it is not feasible to incrementally write out the JSON for the types and symbols as they are being parsed from the DWARF. dwarf2json follows the volatility3 schema, which can be found here: Feel free to contribute ideas or pull requests with improvements, but make sure the dwarf2json output remains compatible with the vol3 schema.
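For anyone skimming, the overall pattern being described is roughly the following: the entire output document is accumulated in a single in-memory structure and marshaled in one pass at the end. The struct below is only a loose, illustrative approximation of the vol3 ISF layout (the field names are assumptions), not the actual dwarf2json types:

```go
// Illustrative approximation of building an ISF-style document entirely in
// memory and encoding it in one pass at the end.
package main

import (
	"encoding/json"
	"os"
)

// isfDocument loosely mirrors the top-level layout of a volatility3 ISF file;
// the real dwarf2json types differ in detail.
type isfDocument struct {
	Metadata  map[string]interface{} `json:"metadata"`
	BaseTypes map[string]interface{} `json:"base_types"`
	UserTypes map[string]interface{} `json:"user_types"`
	Enums     map[string]interface{} `json:"enums"`
	Symbols   map[string]interface{} `json:"symbols"`
}

func main() {
	doc := isfDocument{
		Metadata:  map[string]interface{}{},
		BaseTypes: map[string]interface{}{},
		UserTypes: map[string]interface{}{},
		Enums:     map[string]interface{}{},
		Symbols:   map[string]interface{}{},
	}
	// ... every parsed type and symbol is added to doc here, so peak memory
	// grows with the DWARF input rather than with the final JSON size ...
	if err := json.NewEncoder(os.Stdout).Encode(doc); err != nil {
		panic(err)
	}
}
```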
The JSON file is generally on the order of ~30 MB, yet RAM usage can sometimes reach 8 GB. Aren't all the DWARF symbols loaded into memory and never cleaned out as they are processed? If so, wouldn't it be possible to store the DWARF symbols in a temporary file (as possible with
Any chance of fixing this issue?
Some form of what you are suggesting could be worth investigating, but it would not be trivial, because dwarf2json does not operate on the DWARF types and symbols directly from memory but on the internal representation generated by the dwarf library. The dwarf library also caches these types (dwarf.Type) internally for performance reasons, which further increases memory usage. There would need to be a way to map the DWARF entries to the dwarf.Type values returned by the dwarf library.

Outputting DWARF to an intermediate representation, such as the one produced by dwarfdump, is an interesting idea. However, that would require enhancing dwarf2json to process that representation in addition to its current functionality, which processes the DWARF IR from the dwarf library. It may also be possible to rewrite dwarf2json in another language that uses a different DWARF library.

There are no immediate plans by the core developers of this project to do either of these. We always welcome pull requests from members of the community.
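As a reference for the access pattern described above, here is a minimal sketch using Go's standard debug/dwarf package: entries are walked with a Reader, and (*dwarf.Data).Type resolves a dwarf.Type from an entry's offset, with resolved types cached internally by the library. dwarf2json's real processing is considerably more involved; the vmlinux input and the struct-only filter here are just for illustration:

```go
// Minimal debug/dwarf walk: iterate DWARF entries and resolve struct types.
// The library keeps resolved dwarf.Type values in an internal cache, which is
// part of why memory grows with large debug info.
package main

import (
	"debug/dwarf"
	"debug/elf"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := elf.Open(os.Args[1]) // e.g. a vmlinux built with debug info
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	data, err := f.DWARF()
	if err != nil {
		log.Fatal(err)
	}

	r := data.Reader()
	for {
		entry, err := r.Next()
		if err != nil {
			log.Fatal(err)
		}
		if entry == nil {
			break // no more entries
		}
		if entry.Tag == dwarf.TagStructType {
			// Type() maps the entry's offset to a dwarf.Type; the resolved
			// types are cached internally by the library for reuse.
			t, err := data.Type(entry.Offset)
			if err != nil {
				continue
			}
			fmt.Printf("%s: size %d\n", t.String(), t.Size())
		}
	}
}
```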
This issue reports that the process crashes when the system does not have enough available RAM (the README recommends 8 GB).
While creating a large swap file (e.g., 8 GB) can mitigate the issue, it is considered poor practice due to the performance drawbacks of swapping.