
add ferret with io #137

Conversation

xiangxiecrypto
Collaborator

  • SPCOT with io
  • MPCOT with io
  • Ferret with io

Note that the Ferret implementation needs two random COTs: one for MPCOT (SPCOT), which can be an IKNP-type (KOS) random COT, and another for the setup in Ferret, which will be a Ferret instance with smaller parameters.

All the parameters will be chosen once the LPN estimator is done.

sinui0 and others added 27 commits April 29, 2024 16:33
* feat: mpz-common

* reduce visibility of test module

* pr feedback
…rations#102)

* Adds an ideal ROT functionality to mpz-ot-core

* Update ot/mpz-ot-core/src/ideal/ideal_rot.rs

Derive Debug for IdealROT

Co-authored-by: sinu.eth <[email protected]>

* Add feedback

* Add more feedback

---------

Co-authored-by: sinu.eth <[email protected]>
…ling-explorations#122)

* add transfer id

* update co15 and kos15

* add Output type

* feat(mpz-common): ideal functionality utils

* refactor ideal functionalities and traits

* pr feedback

* impl ideal rot

* Update crates/mpz-ot/src/ideal/rot.rs

Co-authored-by: th4s <[email protected]>

---------

Co-authored-by: th4s <[email protected]>
* delete obsolete module

* export test-utils and add ideal COT tests
* feat(mpz-common): simple counter

* add derives
@sinui0 sinui0 self-requested a review May 23, 2024 16:23

sinui0 commented May 23, 2024

Great, thanks! Will review soon.

@themighty1 themighty1 self-requested a review May 24, 2024 08:12

@sinui0 sinui0 left a comment


Excited to see it all come together! 🔥

I have some requests which should simplify things quite a bit.

  1. For MPCOT, I believe we can eliminate a lot of duplication by encapsulating the different LPN types (regular/uniform). Instead of having two types you can add an argument to the constructor and handle it behind the API. The core types can remain separate but the IO wrapper can be unified.
  2. The check and finalize methods should not be exposed in the public API. When calling extend the user should just provide the number of OTs they want and you handle the entire process for them. Additionally, extend should not return the OTs, it should buffer them internally because it is used during the preprocessing phase. This buffering should be performed in the core crate, the same as how we do it for KOS.

For example, here is what the public API could look like:

impl<RandomCOT> Sender<RandomCOT> {
  pub fn new(lpn_type: LpnType, rcot: RandomCOT) -> Self { .. }

  pub async fn setup<Ctx: Context>(&mut self, ctx: &mut Ctx) 
    -> Result<(), SenderError> { .. }
  
  /// Preprocesses `count` OTs.
  pub async fn extend<Ctx: Context>(&mut self, ctx: &mut Ctx, count: usize)
    -> Result<(), SenderError> { .. }
}

This applies to SPCOT, MPCOT, and Ferret.

where
RandomCOT: RandomCOTSender<Ctx, Block>,
{
println!("here");

left over

Comment on lines +8 to +10
use rayon::iter::{
IndexedParallelIterator, IntoParallelRefIterator, IntoParallelRefMutIterator, ParallelIterator,
};

Suggested change
-use rayon::iter::{
-    IndexedParallelIterator, IntoParallelRefIterator, IntoParallelRefMutIterator, ParallelIterator,
-};
+#[cfg(feature = "rayon")]
+use rayon::iter::{
+    IndexedParallelIterator, IntoParallelRefIterator, IntoParallelRefMutIterator, ParallelIterator,
+};

Comment on lines +9 to +11
use rayon::iter::{
IndexedParallelIterator, IntoParallelRefIterator, IntoParallelRefMutIterator, ParallelIterator,
};

Suggested change
-use rayon::iter::{
-    IndexedParallelIterator, IntoParallelRefIterator, IntoParallelRefMutIterator, ParallelIterator,
-};
+#[cfg(feature = "rayon")]
+use rayon::iter::{
+    IndexedParallelIterator, IntoParallelRefIterator, IntoParallelRefMutIterator, ParallelIterator,
+};

Comment on lines 208 to 215
let iter = alphas
.par_iter()
.zip(ms_s.par_iter())
.zip(sum_s.par_iter())
.zip(hs.par_iter())
.zip(ts_s.par_iter())
.zip(trees.par_iter_mut())
.map(|(((((alpha, ms), sum), h), ts), tree)| (alpha, ms, sum, h, ts, tree));

Suggested change
let iter = alphas
.par_iter()
.zip(ms_s.par_iter())
.zip(sum_s.par_iter())
.zip(hs.par_iter())
.zip(ts_s.par_iter())
.zip(trees.par_iter_mut())
.map(|(((((alpha, ms), sum), h), ts), tree)| (alpha, ms, sum, h, ts, tree));
let iter = alphas
.par_iter()
.zip(ms_s.par_iter())
.zip(sum_s.par_iter())
.zip(hs.par_iter())
.zip(ts_s.par_iter())
.zip(trees.par_iter_mut())
.map(|(((((alpha, ms), sum), h), ts), tree)| (alpha, ms, sum, h, ts, tree));

Comment on lines 217 to 224
let iter = alphas
.iter()
.zip(ms_s.iter())
.zip(sum_s.iter())
.zip(hs.iter())
.zip(ts_s.iter())
.zip(trees.iter_mut())
.map(|(((((alpha, ms), sum), h), ts), tree)| (alpha, ms, sum, h, ts, tree));

Suggested change
let iter = alphas
.iter()
.zip(ms_s.iter())
.zip(sum_s.iter())
.zip(hs.iter())
.zip(ts_s.iter())
.zip(trees.iter_mut())
.map(|(((((alpha, ms), sum), h), ts), tree)| (alpha, ms, sum, h, ts, tree));
let iter = alphas
.iter()
.zip(ms_s.iter())
.zip(sum_s.iter())
.zip(hs.iter())
.zip(ts_s.iter())
.zip(trees.iter_mut())
.map(|(((((alpha, ms), sum), h), ts), tree)| (alpha, ms, sum, h, ts, tree));

pub async fn setup_with_parameters<Ctx: Context>(
&mut self,
ctx: &mut Ctx,
setup_rcot: &mut impl RandomCOTSender<Ctx, Block>,
Collaborator

Why is this passed in here? I would just call the instance we pass into MPCOT

Collaborator Author

Can you please be more specific? I think the MPCOT instance is determined by the LPN parameters chosen here.

Collaborator

I mean that the MPCOT instance already owns an RCOT, so we shouldn't need to pass in another one here

Collaborator Author

That is what I mentioned in the PR comment. We will use a different RCOT here, i.e., Ferret with smaller parameters, to bootstrap.

@xiangxiecrypto
Collaborator Author

Excited to see it all come together! 🔥

I have some requests which should simplify things quite a bit.

  1. For MPCOT, I believe we can eliminate a lot of duplication by encapsulating the different LPN types (regular/uniform). Instead of having two types you can add an argument to the constructor and handle it behind the API. The core types can remain separate but the IO wrapper can be unified.

Yep, we can simplify mpcot to just one type.

  1. The check and finalize methods should not be exposed in the public API. When calling extend the user should just provide the number of OTs they want and you handle the entire process for them. Additionally, extend should not return the OTs, it should buffer them internally because it is used during the preprocessing phase. This buffering should be performed in the core crate, the same as how we do it for KOS.

Maybe we can keep the SPCOT and MPCOT APIs internal, since they are only called by Ferret. In that case we do not need to keep all the output in internal state, which makes it much easier to integrate with Ferret.

As to finalize, I meant it to explicitly close the extension procedure of Ferret. In KOS, the state automatically turns to Complete after the check procedure (if I understand correctly), while Ferret will always allow us to extend unless we manually turn it off with finalize.

For example, here is what the public API could look like:

impl<RandomCOT> Sender<RandomCOT> {
  pub fn new(lpn_type: LpnType, rcot: RandomCOT) -> Self { .. }

  pub async fn setup<Ctx: Context>(&mut self, ctx: &mut Ctx) 
    -> Result<(), SenderError> { .. }
  
  /// Preprocesses `count` OTs.
  pub async fn extend<Ctx: Context>(&mut self, ctx: &mut Ctx, count: usize)
    -> Result<(), SenderError> { .. }
}

This applies to SPCOT, MPCOT, and Ferret.


sinui0 commented Jun 5, 2024

Maybe we can keep spcot and mpcot internal apis, since they are only called by ferret. In this case, we do not need to keep all the output in internal states, it is much easier to be integrated by ferret.

Yes, we can keep SPCOT and MPCOT private for now to simplify things.

As to finalize, I meant to explicitly close the extension procedure of ferret. In KOS, the state will automatically turn to Complete after the check procedure (if I understand correctly). While in ferret, it will always allow us to extend, unless we manually turn it off with finalize.

Ok, I see the intent now. I'm not sure if that is necessary though. For KOS we only do this because the consistency check is leaky and we did not see any proofs that leakage of Delta is bounded when running multiple extensions. For Ferret, my understanding is that extension can be performed indefinitely, is that correct?

@sinui0 sinui0 force-pushed the threading-refactor branch 2 times, most recently from 197d9e8 to e25d123 Compare June 25, 2024 21:20
@sinui0 sinui0 deleted the branch privacy-scaling-explorations:threading-refactor June 26, 2024 12:51
@sinui0 sinui0 closed this Jun 26, 2024
This was referenced Jun 28, 2024