I looked briefly at the code and noticed that the `encrypt_to_bytes` methods do a scalar multiplication and then a point compression for each input item. Although most of this work is the scalar multiplication, it's possible to amortize work across multiple point compressions using the `double_and_compress_batch` method. The only wrinkle is that rather than encoding `Q_i`, this encodes `2 * Q_i`. However, because `Q_i` is computed as `k * P_i`, it's still possible to use this method to get the same result as before: multiply `k` by `(1/2) mod l` (a constant value) to get `k'`, and compute `Q_i' <- k' * P_i`. Since `Q_i = 2 * Q_i'`, batch-double-and-encoding the `Q_i'` gives `enc(Q_1), ..., enc(Q_n)`. This costs only one extra scalar-scalar multiplication per batch, and I believe it will save time whenever the batch contains more than one element.
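For illustration, here's a minimal sketch of what this could look like with curve25519-dalek. The function name `encrypt_batch` and its signature are placeholders I made up, not the crate's or this repo's existing API:

```rust
use curve25519_dalek::ristretto::{CompressedRistretto, RistrettoPoint};
use curve25519_dalek::scalar::Scalar;

// Placeholder batch helper: computes enc(k * P_i) for each input point,
// amortizing compression work via double_and_compress_batch.
fn encrypt_batch(k: &Scalar, points: &[RistrettoPoint]) -> Vec<CompressedRistretto> {
    // k' = k * (1/2 mod l); the inverse of 2 is a constant and could be precomputed.
    let k_prime = k * Scalar::from(2u64).invert();

    // Q_i' = k' * P_i, so that 2 * Q_i' = k * P_i = Q_i.
    let halved: Vec<RistrettoPoint> = points.iter().map(|p| k_prime * p).collect();

    // Batch-double-and-compress the Q_i' to get enc(Q_1), ..., enc(Q_n).
    RistrettoPoint::double_and_compress_batch(&halved)
}
```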
The batching itself is not parallelizable, but it can still be applied in the parallel case by breaking a larger batch into chunks and applying the optimization within each chunk.
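In code, that chunked parallel variant might look something like the following (using rayon for the parallelism is my assumption, and `encrypt_batch` is the placeholder helper sketched above):

```rust
use rayon::prelude::*;

// Placeholder parallel variant: apply the batch trick within each chunk and
// process the chunks in parallel. chunk_size trades parallel granularity
// against how much compression work each batch can amortize.
fn encrypt_batch_parallel(
    k: &Scalar,
    points: &[RistrettoPoint],
    chunk_size: usize,
) -> Vec<CompressedRistretto> {
    points
        .par_chunks(chunk_size)
        .flat_map_iter(|chunk| encrypt_batch(k, chunk))
        .collect()
}
```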
Also, I think the code could be slightly streamlined if the methods took `impl Iterator<Item = ...>` rather than slices. For instance, if `encrypt_to_bytes` took an iterator, then instead of needing a separate `hash_encrypt_to_bytes`, a caller could do
```rust
// plaintexts is an iterator of byte slices
encrypt_to_bytes(plaintexts.map(RistrettoPoint::hash_from_bytes::<Sha512>))
```
and the hash-to-curve calculations would be inlined into the right place.
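To be concrete, here's a rough sketch of what an iterator-taking signature could look like, again building on the `encrypt_batch` placeholder above (none of these names are the existing API):

```rust
use sha2::Sha512;

// Placeholder iterator-based signature: accept any iterator of points.
fn encrypt_to_bytes<I>(k: &Scalar, points: I) -> Vec<CompressedRistretto>
where
    I: IntoIterator<Item = RistrettoPoint>,
{
    let points: Vec<RistrettoPoint> = points.into_iter().collect();
    encrypt_batch(k, &points)
}

// Caller-side: hash plaintext byte slices to points on the fly, so no
// separate hash_encrypt_to_bytes method is needed.
fn hash_and_encrypt(k: &Scalar, plaintexts: &[&[u8]]) -> Vec<CompressedRistretto> {
    encrypt_to_bytes(
        k,
        plaintexts
            .iter()
            .map(|&msg| RistrettoPoint::hash_from_bytes::<Sha512>(msg)),
    )
}
```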