It should be possible to write an equivalent model, say a machine-word-based model, in which only machine words are used.
The equivalence could be proved by showing functional equivalence of the decode64 functions generated in Coq/Isabelle from the official model (which uses unbounded integers and bitvectors) and from the machine-word-based model.
How hard would it be to do this? Would the machine-word-based model be much more complex?
Doing all this would also yield better C emulators: currently the C translation tries to replace unbounded data with machine words, but does not always succeed.
Proving equivalence might be quite tricky, although I do have some symbolic-execution infrastructure that could perhaps be used for it, given a version with only machine words and one without. I haven't thought about it too much, but I guess the way to go would be an abstract-interpretation-like approach that infers the maximum width of every integer variable, working down from the decode functions. It would probably be quite a lot of work to set that up, and we don't have anyone working on it.
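The abstract-interpretation idea above can be sketched as interval analysis: propagate (lo, hi) bounds through each integer operation, then read off how many bits the result needs. This is a toy illustration, not Sail's actual analysis, and the decode-field example is made up:

```python
# Toy interval domain for inferring maximum integer widths.
# Intervals are (lo, hi) pairs; kept unsigned for simplicity.

def bits_needed(lo, hi):
    # width of an unsigned machine word that can hold [lo, hi]
    assert lo >= 0
    return max(hi.bit_length(), 1)

def add(a, b):
    # interval addition: bounds add pointwise
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    # interval multiplication (valid for non-negative intervals)
    return (a[0] * b[0], a[1] * b[1])

# Hypothetical decode computation: a 5-bit register index scaled by 8,
# plus a 12-bit immediate offset.
idx = (0, 31)        # 5-bit field
off = (0, 4095)      # 12-bit immediate
addr = add(mul(idx, (8, 8)), off)

print(addr, bits_needed(*addr))   # the result fits in a small machine word
```

Working down from the decode functions, every instruction field starts with a known width, so bounds like these can in principle be pushed through the whole decoder.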
The ARM ASL specification is itself written in terms of unbounded-precision integers and variable-width bitvectors, and we've mostly tried to do a fairly one-to-one translation of their specification; that's where all the unbounded-precision parts come from. The current approach is to optimize where possible: we have some peephole-style optimizations for the common cases and primitives, and there's also an additional monomorphization step that turns length-polymorphic functions into multiple specialized functions, which then allows more of those optimizations to take place (though I don't think that's enabled by default, as it's very slow). I also had an option to use clang and gcc's 128-bit integers rather than GMP integers, which was a pretty big performance win, although it might be unsound in general if anything actually requires more bits than that.
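The soundness caveat about 128-bit integers can be made concrete. This is an illustration (not Sail's actual code generation): replacing GMP integers with fixed 128-bit machine integers is only correct while every intermediate value fits in 128 bits, and a chain of wide multiplications can silently exceed that:

```python
# Model what an unsigned __int128 multiply computes: the exact product
# truncated to the low 128 bits.
MASK128 = (1 << 128) - 1

def mul128(a, b):
    return (a * b) & MASK128

a = b = (1 << 64) - 1
exact = a * b                      # (2^64 - 1)^2 < 2^128, so this fits
assert mul128(a, b) == exact       # 128-bit arithmetic is still exact here

c = 1 << 64
# A third 64-bit factor pushes the result past 128 bits, and the
# machine-word result diverges from the unbounded-integer result:
assert mul128(exact, c) != exact * c
```

This is why the 128-bit option is a performance knob rather than a default: without width bounds of the kind discussed above, there is no guarantee the truncation never matters.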
We might be able to get upper bounds on bitvector widths from Z3's optimization facilities, and if so, it would fit much more nicely with the current Sail tooling.