Tools for SHA256 sums of rawnand/boot01/partitions #101
Comments
What does the big SD matter? There are not many ways for joining to go wrong. Although a sha256 of the whole eMMC would not be bad. |
The best I could do for this was to either halt the SoC or only get the hash of the last part. |
If I may, it might be easier to just start out with making a sha256 of a rawnand.bin dump (and/or any other dump of any other thing). All you would need to do is steal some sha256 generation code and point it at the backup file. |
Another suggestion too: auto-zip rawnand.bin. EDIT: Thumbs up or down according to your opinion plz. |
hekate already has hw sha256. The problem is hashing one buffer and then continuing the hash with the next buffer's contents. The main need here is the eMMC sha256. The backup's sha256 is useless if you don't know the original one. |
How do you verify a backup then? |
https://github.com/CTCaer/hekate/blob/master/bootloader/main.c#L894 These are 4MB chunks. |
Oh |
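For illustration: when done in software, the "continue hashing with the next buffer" requirement above is just ordinary streaming hashing. A minimal Go sketch (a hypothetical standalone program, not hekate's actual C code; the file name is a placeholder) that hashes a file in 4 MiB chunks while keeping a single running digest:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// "rawnand.bin" is a placeholder path for this sketch.
	f, err := os.Open("rawnand.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := sha256.New()          // one running digest across all chunks
	buf := make([]byte, 4<<20) // 4 MiB, mirroring hekate's chunk size
	for {
		n, err := f.Read(buf)
		if n > 0 {
			h.Write(buf[:n]) // feed only the bytes actually read
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
	}
	fmt.Printf("%x  rawnand.bin\n", h.Sum(nil))
}
```

This is exactly the stateful "init, update per chunk, finalize" pattern that the SE hardware discussion below is about.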
I would love to see this feature added as well! I'm not using chunks, but I have a suspicion that my rawnand.bin file got corrupted while I was using RetroArch, and I really wish I had a SHA-256 or MD5 checksum to check it against so I would know whether or not it's safe to restore. |
Ok, since nobody has tried to tackle this issue, and I don't have the knowledge to, can't we do something simpler? We can just make a sha256 of a completed rawnand.bin backup. People who back up in chunks would be out of luck, but this would give a stepping-off point... Just make rawnand.bin.sha! |
Sounds good to me |
The backup is written or read in chunks. It does not fit in our 4 GB RAM ^^ |
I meant when backing up to the SD card. No RAM would be necessary... |
We already did this convo. Can't be done currently, because it's done in chunks. EDIT: Also, everything goes through RAM... |
Can't you just keep track of the 4 MB chunk hashes, and then use those to check against a full, joined image? Similar to how torrents verify their data. |
You can't combine hashes that easily. Yeah, the combined hash that you get can be used for validating, as long as you hash the full file in the same fashion. The only way is software, or a way to pause the SE's sha engine. |
I wasn't saying to combine them. I was saying to verify the 4 MB chunks and keep their hashes and offsets, so you can verify the entire image in 4 MB blocks later on. Just like torrents do. |
I think that the 4 MB chunk hashes would be useful to have in a file, as it's trivial to write a Python script that goes through, calculates these chunk hashes, and matches them (open a file handle, read a chunk into a variable, calculate the sha256, match the sha256, repeat until end of file). This should also accommodate checking partial backups. |
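The commenter sketches this as a Python script; a hedged version of the generation side in Go (the language of the tooling that appears later in this thread) could look like the following. The `.sha256sums` file name, one-hex-digest-per-line format, and 4 MiB chunk size are assumptions carried over from the discussion, not anything hekate actually emits:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"os"
)

// writeChunkSums writes one SHA-256 hex digest per 4 MiB chunk of path
// to path + ".sha256sums" (file name and format are assumptions).
func writeChunkSums(path string) error {
	in, err := os.Open(path)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(path + ".sha256sums")
	if err != nil {
		return err
	}
	defer out.Close()

	buf := make([]byte, 4<<20) // 4 MiB chunks, as discussed above
	for {
		n, err := io.ReadFull(in, buf)
		if n > 0 {
			sum := sha256.Sum256(buf[:n]) // hash only the bytes read
			fmt.Fprintf(out, "%x\n", sum)
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return nil // last (possibly short) chunk handled
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	if err := writeChunkSums(os.Args[1]); err != nil {
		log.Fatal(err)
	}
}
```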
The sha engine of the SE doesn't seem to be able to compute a sha256 hash by being fed multiple chunks separately. Reading the Atmosphere source code, it seems to only be able to compute a hash directly from a single buffer; there's no stateful way to do it in several passes (or it's not implemented in Atmosphere). So obviously this won't work for rawnand, as we'll never have enough RAM. |
I have a first working version, but it's extremely slow. It's taking roughly 90 seconds per 4 MB block, so this would take more than a week for the full rawnand dump. I'll probably need to find a proper asm-optimized sha256 implementation. |
Talk about slow. Hashing at dialup speeds! |
So, I've tried both assembly-optimized versions of sha256 and sha1, and a basic crc32. They're faster, but still way too slow to be bearable. I can't get under 15 seconds per 4 MB block with those pure software implementations, so implementing something in software on the BPMP seems to be a dead end. I guess the only correct way would be to use the Tegra hardware directly. Atmosphere and hekate do use T210-accelerated sha256 hashing, but only for one-shot buffers. I suppose it's also possible to do it incrementally per 4 MB chunk, but I'm missing some documentation on the T210 to check how to do that. For now I'll implement @noirscape's suggestion of separately hashing all the 4 MB chunks using the T210 hardware. |
It is possible for hw sha256, but it needs 600 MB buffers. |
A PC-side application would be a perfectly acceptable option. It should probably use Python, for cross-platform support. |
OK, then a hw sha256 computation for 29.1 GB doesn't seem realistic due to the limitations you've outlined, @CTCaer. |
Here is a golang starting point: https://hastebin.com/wefigowace.go (I used this to generate the sums; it's basically the same code, so it would need some testing, i.e. hekate generating hashes in a test build and making sure they verify: https://hastebin.com/loqiqoziwi.go). It reads the current directory and finds files that match a regex pattern, loads a text file named after the file with a .sha256sums suffix (todo: calculate the chunk sizes from the number of hashes), and compares the hashes in order. It validates each file, keeps track of valid and invalid chunks, and prints a summary for each file. Should work for rawnand.bin or rawnand.bin.XX (split) files. I'll have to make an output file with the invalid chunks so it's more useful. |
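In case the hastebin links rot, here is a hedged re-sketch of the verification loop described above. It is not the actual tool: the command-line layout, the `.sha256sums` format, and the 4 MiB chunk size are all assumptions matching the generation sketch earlier in the thread:

```go
package main

import (
	"bufio"
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Usage: verify <backup> <backup>.sha256sums (paths are illustrative).
	data, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer data.Close()

	sums, err := os.Open(os.Args[2])
	if err != nil {
		log.Fatal(err)
	}
	defer sums.Close()

	scanner := bufio.NewScanner(sums)
	buf := make([]byte, 4<<20) // 4 MiB chunks, matching generation
	valid, invalid := 0, 0

	for chunk := 0; scanner.Scan(); chunk++ {
		n, err := io.ReadFull(data, buf)
		if n == 0 {
			log.Fatal("sums file has more entries than the backup has chunks")
		}
		got := fmt.Sprintf("%x", sha256.Sum256(buf[:n]))
		if got == scanner.Text() {
			valid++
		} else {
			invalid++
			fmt.Printf("chunk %d (offset 0x%x): mismatch\n", chunk, chunk*(4<<20))
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
	}
	fmt.Printf("%d valid, %d invalid chunks\n", valid, invalid)
}
```

Under these assumptions, running it as e.g. `go run verify.go rawnand.bin rawnand.bin.sha256sums` would print a per-file summary like the tool described above.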
I have a working version (see PR #216); I tested it in the generic case and it works. I'll continue testing it to be sure the code behaves correctly in all cases, such as multipart backups, continued backups, etc. |
Adapted my golang example to use sha1sums files. I've not finished a backup using your commit yet (it's running now), so I'm not sure it will work, but I'll check it soon. |
Works as expected. I'm adding some CLI flags to allow users to combine/rehash the larger files instead of just the chunks. I also did the part-file dumps (FAT32), which worked fine. |
@james-d-elliott does your script also work when validating your PRODINFO partition? It fails on mine (but works everywhere else); I suppose this is because PRODINFO is < 4 MiB. |
I've yet to test that. You're just dumping sys? |
Yes. But if it's what I think, you can reproduce the problem by sha256sum-ing any file on your computer that is < 4 MB and pointing your script at it! |
Yeah, it was an easy enough fix; give me a bit to sort out a repo. Fixing it also surfaced a few other issues, which I fixed as well. I'll also expand the usefulness of the tool to the other dumps/backups with a bit of refactoring. The issue was that I was hashing the whole buffer, which included zero padding (which I expected, and I suspect you did as well). I resolved it by using the returned byte count. Temporary fix (should work, though my code has evolved since this): https://hastebin.com/golafucute.go |
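The bug described above is a classic read-loop mistake; here is a hedged Go illustration (not the actual tool's code, and a single Read is shown for brevity): a short read leaves the tail of the reusable buffer as zero padding, so hashing the whole buffer disagrees with sha256sum for any file smaller than the chunk size.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"log"
	"os"
)

func main() {
	f, err := os.Open(os.Args[1]) // e.g. a PRODINFO dump < 4 MiB
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	buf := make([]byte, 4<<20) // 4 MiB reusable buffer
	n, err := f.Read(buf)      // a small file fills only part of it
	if err != nil {
		log.Fatal(err)
	}

	// Buggy: hashes the whole buffer, zero padding included.
	buggy := sha256.Sum256(buf)

	// Fixed: use the returned byte count and hash only what was read.
	fixed := sha256.Sum256(buf[:n])

	fmt.Printf("buggy: %x\nfixed: %x\n", buggy, fixed)
}
```

Running this on any file smaller than 4 MiB reproduces the mismatch: only the "fixed" digest matches sha256sum's output.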
https://github.com/james-d-elliott/go-hekatechkbkp/releases Needs testing, and I need to carefully add some additional features, but it's probably a good starting place. I'll test it against all dumps/backups this weekend with whatever the latest commit is. |
[Tools] implement hash file generation on backup (#101).
@CTCaer any chance you could push a release version including this PR? |
@tantoinet |
I've done some testing and it's working as expected. |
Can you redo the tests with the new version? |
Full: 47m44s |
Just a reminder that @speed47's code only lives in Nyx as of 5.0.1. So using the TUI with the hashes option will just do a full verification. (SE depends on that. The boost when generating sha256 is around 4.5x in Nyx.) |
Keep in mind that this will change in v5.2.0: hekate will be able to produce real SHA256 hashes of the full file. |
How? |
OK, so let's say I have this problem: I need to get my rawnand in parts... but after joining them on my PC, I want to make sure the result is correct, even if it was verified on the Switch. I would have no way of checking it without getting a bigger SD! If a SHA256 were derived and put into, say, a text file on the SD, I could compare SHA256s. Such a feature would be immensely useful.