Process dies on large NTDS.DIT file #20
It looks like the OS is killing the process, though I have encountered this issue when the .dit is 20GB+ as well. I'll look into having the .dit read from disk rather than loaded into memory. In the meantime you can use something like this, which should be quicker than the impacket dumper and won't have the memory limit.
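As a rough illustration of the disk-versus-memory trade-off being described here (a minimal sketch, not the project's actual code; `openInMemory` and `openOnDisk` are hypothetical names): both `*bytes.Reader` and `*os.File` satisfy `io.ReadSeeker`, so a parser written against that interface can swap its backing storage without other changes.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// openInMemory loads the entire .dit into RAM. Seeks are cheap
// afterwards, but a 5-20GB database needs that much RAM up front,
// which is what gets the process OOM-killed.
func openInMemory(path string) (io.ReadSeeker, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	return bytes.NewReader(data), nil
}

// openOnDisk hands back the file itself: *os.File already satisfies
// io.ReadSeeker, so memory use stays constant, at the cost of a real
// disk seek every time the parser hops around the database.
func openOnDisk(path string) (*os.File, error) {
	return os.Open(path)
}

func main() {
	f, err := openOnDisk("ntds.dit") // or openInMemory(...)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	// ... hand the io.ReadSeeker to the ESE parsing code ...
}
```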
Hey. I am learning Go, but not new to programming in general. I've cloned the repo to try to understand this issue and maybe have a look into fixing it. Is this issue still a thing?
hey @lwears - the issue is still a thing. This project is not actively maintained; I come back to it whenever there is something I need in an engagement, but I have spent some time in the past trying to find a decent solution for this problem, and ran into many roadblocks on the way.

The first (easy) issue is that in order to extract the data quickly, we first load the whole .dit into memory and scan from there - hopping around the database is much quicker in memory than doing disk seeks (even on a MacBook with fancy NVMe drives). This can be addressed relatively easily by adjusting the readseeker to work on-disk, and in fact this is how it originally worked.

We still need to build an in-memory version of all of the data in the database (each record links to the next, and it's not always clear which columns a record has), so even if the data is read from disk, some bad assumptions I made early on in the project (like building maps to represent tables 🤦) made it quicker, but they balloon memory very quickly, even if I'm very particular about deleting unused map entries. To avoid soaking up memory so quickly, I suspect a full reimplementation will be required, which is why it's on the backburner.

Since building the tool, the ESE database spec has been released publicly. Most of the code here has more or less been translated from the Python impacket version (which was built by reverse engineering how ESE works), so realistically, the most useful and practical contribution would be to build an ESE library that is performant and sensible with regard to memory.
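On the map point specifically: assuming the parsed tables live in ordinary Go maps, `delete()` removes entries but, as of current Go releases, a map never gives back its bucket storage, so a map that once indexed every record holds onto that memory for its whole lifetime. A toy demonstration (illustrative types, not the project's):

```go
package main

import (
	"fmt"
	"runtime"
)

// 128 bytes is small enough for Go to store the value inline in the
// map's buckets, so the buckets themselves hold the bulk of the data.
type record [128]byte

func heapMB() uint64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc >> 20
}

func main() {
	table := make(map[uint64]record)
	for i := uint64(0); i < 1_000_000; i++ {
		table[i] = record{}
	}
	fmt.Printf("after insert:    ~%d MB\n", heapMB())

	for i := uint64(0); i < 1_000_000; i++ {
		delete(table, i)
	}
	runtime.GC()
	// Every entry is gone, yet the heap stays near its peak: the map
	// keeps its buckets and only marks the slots as empty.
	fmt.Printf("after delete+GC: ~%d MB\n", heapMB())
	runtime.KeepAlive(table) // keep the map live through the final measurement
}
```

The usual escape hatches are copying survivors into a freshly allocated, smaller map, or not materializing whole tables at all and streaming records instead - which is roughly what a reimplementation would buy.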
Commands tried:
./gosecretsdump_linux_v0.3.1 -enabled -ntds ntds.dit -system SYSTEM
Result:
The NTDS.DIT file in this case is 5GB+ in size. I wish I could provide more info but that's all I can share at the moment :/