
refactor(examples) Refactor quickstart-sklearn-tabular to use the new Flower #3893

Merged: 32 commits merged into main on Aug 8, 2024

Conversation

@adam-narozniak (Member) commented on Jul 24, 2024:

Issue

The example does not use the new Flower API.

Proposal

Update the example to follow the structure and code style of the new Flower API.

@adam-narozniak marked this pull request as ready for review on July 26, 2024 at 10:38.
Resolved (outdated) review threads on:
- examples/quickstart-sklearn-tabular/sklearnexample/task.py
- examples/quickstart-sklearn-tabular/README.md (two threads)
- examples/quickstart-sklearn-tabular/pyproject.toml
Comment on lines 27 to 28:

```python
def get_parameters(self, config):
    return get_model_parameters(self.model)
```
Contributor commented:

We are trying to update the examples in such a way that this method is no longer needed. For this, we need to initialize the global model when constructing the ServerApp.
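For context, a minimal sketch of what that could look like. The helper names (create_log_reg_and_instantiate_parameters, get_model_parameters) and the server_fn/ServerAppComponents wiring are assumptions based on this thread and recent Flower versions, not necessarily the merged code:

```python
from flwr.common import Context, ndarrays_to_parameters
from flwr.server import ServerApp, ServerAppComponents, ServerConfig
from flwr.server.strategy import FedAvg

# Assumed helpers from the example's task module
from sklearnexample.task import create_log_reg_and_instantiate_parameters, get_model_parameters


def server_fn(context: Context) -> ServerAppComponents:
    # Build the global model once on the server and hand its weights to the
    # strategy, so no client ever needs to implement get_parameters()
    penalty = context.run_config["penalty"]
    model = create_log_reg_and_instantiate_parameters(penalty)
    initial_parameters = ndarrays_to_parameters(get_model_parameters(model))

    strategy = FedAvg(initial_parameters=initial_parameters)
    num_rounds = context.run_config["num-server-rounds"]
    return ServerAppComponents(strategy=strategy, config=ServerConfig(num_rounds=num_rounds))


app = ServerApp(server_fn=server_fn)
```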

@adam-narozniak (Member, Author) replied:

Sorry, I don't quite get it. This function is needed, whether called directly or indirectly (i.e. via self.get_parameters or get_model_parameters), since we need to get the model parameters each round, not only the global model once. It also exists in the mlx example.

Contributor replied:

Yes, we want to retain the functionality to extract the parameters (in this case the get_model_parameters() function), but having the get_parameters() method seems redundant in many cases: the only reason it exists is so that, if the strategy doesn't initialise the global model, one random client can be sampled to call get_parameters(). It seems better to always initialise the global model at the strategy, so get_parameters() is not needed (i.e. fit() can call the get_model_parameters() function directly when returning).
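A rough sketch of the client side this implies, with set_model_params and the local training data as assumed placeholders: fit() returns the updated weights via get_model_parameters() directly, so no get_parameters() method is defined.

```python
from flwr.client import NumPyClient

# Assumed helpers from the example's task module
from sklearnexample.task import get_model_parameters, set_model_params


class FlowerClient(NumPyClient):
    def __init__(self, model, X_train, y_train):
        self.model = model
        self.X_train = X_train
        self.y_train = y_train

    def fit(self, parameters, config):
        # Load the current global weights, train locally, and return the
        # updated weights by calling the helper directly
        set_model_params(self.model, parameters)
        self.model.fit(self.X_train, self.y_train)
        return get_model_parameters(self.model), len(self.X_train), {}
```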

@adam-narozniak (Member, Author) replied:

> But it seems to be better to always initialise the global model

What is the reason for that?
For me, this discussion seems like a potato-potAto kind of thing. What difference does it really make? Either get_model_parameters is implemented as an external function and passed to the method, or directly as a method, right? So it doesn't really matter which function/method is used in fit?

@jafermarq (Contributor) left a review comment:

Some bits I missed earlier.

Resolved (outdated) review thread on examples/quickstart-sklearn-tabular/pyproject.toml.
Diff context under discussion (old vs. new run instructions):

```shell
poetry run python3 client.py --partition-id 0 # partition-id should be any of {0,1,2}
```

```bash
flwr run . --run-config penalty="l1"
```
Contributor commented:

Suggested change:

```diff
- flwr run . --run-config penalty="l1"
+ flwr run . --run-config num-server-rounds=10
```
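For reference, `--run-config` overrides defaults declared in the app's pyproject.toml. A hypothetical excerpt of that config section; the key names and default values here are assumptions based on the commands shown in this thread:

```toml
[tool.flwr.app.config]
num-server-rounds = 3
penalty = "l2"
```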

Contributor commented:

It seems the "l1" penalty is not supported, so I'd suggest demonstrating the override functionality with another setting:

```
ValueError: Solver lbfgs supports only 'l2' or None penalties, got l1 penalty.
```

@adam-narozniak (Member, Author) commented on Aug 8, 2024:

I've changed the solver to "saga" now, which also supports l1.

However, I cannot run

```bash
flwr run . --run-config penalty="l1"
```

I get:

```
CalledProcessError: Command '['flower-simulation', '--app', '.', '--num-supernodes', '3', '--run-config', 'penalty=l1']' returned non-zero exit status 1.
```
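A minimal standalone sketch of the scikit-learn detail discussed above: the default "lbfgs" solver rejects the "l1" penalty, while "saga" accepts it. The exact model configuration in the example may differ.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# LogisticRegression(penalty="l1", solver="lbfgs") raises:
#   ValueError: Solver lbfgs supports only 'l2' or None penalties, got l1 penalty.
# The "saga" solver supports "l1", "l2", and "elasticnet".
model = LogisticRegression(penalty="l1", solver="saga", max_iter=1000)
model.fit(X, y)
```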

@jafermarq added the Examples label (Add or update a Flower example) on Aug 8, 2024.
@jafermarq merged commit e52fe37 into main on Aug 8, 2024 — 33 checks passed.
@jafermarq deleted the update-quickstart-sklearn-tabular-example branch on August 8, 2024 at 14:41.
chongshenng pushed a commit that referenced this pull request Aug 13, 2024
Labels: Examples (Add or update a Flower example)
Projects: None yet
2 participants