refactor(examples) Update quickstart-tensorflow example #3919

Merged · 13 commits · Aug 5, 2024
88 changes: 33 additions & 55 deletions examples/quickstart-tensorflow/README.md
@@ -4,87 +4,65 @@ dataset: [CIFAR-10]
framework: [tensorflow]
---

# Flower Example using TensorFlow/Keras
# Federated Learning with Tensorflow/Keras and Flower (Quickstart Example)

This introductory example to Flower uses Keras but deep knowledge of Keras is not necessarily required to run the example. However, it will help you understand how to adapt Flower to your use case.
This introductory example to Flower uses TensorFlow/Keras, but deep knowledge of these frameworks is not required to run the example. However, it will help you understand how to adapt Flower to your use case.
Running this example in itself is quite easy. This example uses [Flower Datasets](https://flower.ai/docs/datasets/) to download, partition and preprocess the CIFAR-10 dataset.

## Project Setup
## Set up the project

Start by cloning the example project. We prepared a single-line command that you can copy into your shell which will checkout the example for you:
### Clone the project

```shell
git clone --depth=1 https://github.com/adap/flower.git && mv flower/examples/quickstart-tensorflow . && rm -rf flower && cd quickstart-tensorflow
```

This will create a new directory called `quickstart-tensorflow` containing the following files:

```shell
-- pyproject.toml
-- client.py
-- server.py
-- README.md
```

### Installing Dependencies

Project dependencies (such as `tensorflow` and `flwr`) are defined in `pyproject.toml`. You can install the dependencies by invoking `pip`:

```shell
# From a new python environment, run:
pip install .
```

Then, to verify that everything works correctly you can run the following command:
Start by cloning the example project:

```shell
python3 -c "import flwr"
git clone --depth=1 https://github.com/adap/flower.git _tmp \
&& mv _tmp/examples/quickstart-tensorflow . \
&& rm -rf _tmp \
&& cd quickstart-tensorflow
```

If you don't see any errors you're good to go!

## Run Federated Learning with TensorFlow/Keras and Flower

Afterward, you are ready to start the Flower server as well as the clients. You can simply start the server in a terminal as follows:
This will create a new directory called `quickstart-tensorflow` with the following structure:

```shell
python3 server.py
quickstart-tensorflow
├── tfexample
│ ├── __init__.py
│ ├── client_app.py # Defines your ClientApp
│ ├── server_app.py # Defines your ServerApp
│ └── task.py # Defines your model, training and data loading
├── pyproject.toml # Project metadata like dependencies and configs
└── README.md
```

Now you are ready to start the Flower clients which will participate in the learning. To do so simply open two more terminals and run the following command in each:

```shell
python3 client.py --partition-id 0
```
### Install dependencies and project

Start client 2 in the second terminal:
Install the dependencies defined in `pyproject.toml` as well as the `tfexample` package.

```shell
python3 client.py --partition-id 1
```
```bash
pip install -e .
```

You will see that Keras is starting a federated training. Have a look at the [code](https://github.com/adap/flower/tree/main/examples/quickstart-tensorflow) for a detailed explanation. You can add `steps_per_epoch=3` to `model.fit()` if you just want to check that everything works without waiting for the client-side training to finish (this will save you a lot of time during development).
## Run the project

## Run Federated Learning with TensorFlow/Keras and `Flower Next`
You can run your Flower project in both _simulation_ and _deployment_ mode without making changes to the code. If you are new to Flower, we recommend starting with the _simulation_ mode, as it requires fewer components to be launched manually. By default, `flwr run` will make use of the Simulation Engine.

### 1. Start the long-running Flower server (SuperLink)
### Run with the Simulation Engine

```bash
flower-superlink --insecure
flwr run .
```

### 2. Start the long-running Flower clients (SuperNodes)

Start 2 Flower `SuperNode` instances in 2 separate terminal windows, using:
You can also override some of the settings for your `ClientApp` and `ServerApp` defined in `pyproject.toml`. For example:

```bash
flower-client-app client:app --insecure
flwr run . --run-config num-server-rounds=5,learning-rate=0.05
```
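The `--run-config` overrides use a simple comma-separated `key=value` format layered on top of the defaults in `[tool.flwr.app.config]`. As an illustration only (this is not Flower's actual parser), a sketch of how such a string maps onto a config dictionary:

```python
# Illustrative sketch only: parse a "key=value,key=value" override string
# into a dict on top of defaults. NOT Flower's actual implementation.

def parse_run_config(overrides: str, defaults: dict) -> dict:
    config = dict(defaults)
    for pair in overrides.split(","):
        key, raw = pair.split("=", 1)
        try:
            value = int(raw)          # try integer first
        except ValueError:
            try:
                value = float(raw)    # then float
            except ValueError:
                value = raw           # fall back to string
        config[key.strip()] = value
    return config

defaults = {"num-server-rounds": 3, "learning-rate": 0.005}
print(parse_run_config("num-server-rounds=5,learning-rate=0.05", defaults))
```

The override keys must match entries under `[tool.flwr.app.config]` in `pyproject.toml`.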

### 3. Run the Flower App
> [!TIP]
> For a more detailed walk-through check our [quickstart TensorFlow tutorial](https://flower.ai/docs/framework/tutorial-quickstart-tensorflow.html).

With both the long-running server (SuperLink) and two clients (SuperNode) up and running, we can now run the actual Flower App, using:
### Run with the Deployment Engine

```bash
flower-server-app server:app --insecure
```
> [!NOTE]
> An update to this example will show how to run this Flower application with the Deployment Engine and TLS certificates, or with Docker.
72 changes: 0 additions & 72 deletions examples/quickstart-tensorflow/client.py

This file was deleted.

36 changes: 27 additions & 9 deletions examples/quickstart-tensorflow/pyproject.toml
@@ -3,18 +3,36 @@ requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "quickstart-tensorflow"
version = "0.1.0"
description = "Keras Federated Learning Quickstart with Flower"
authors = [
{ name = "The Flower Authors", email = "[email protected]" },
]
name = "tfexample"
version = "1.0.0"
description = "Federated Learning with Tensorflow/Keras and Flower (Quickstart Example)"
license = "Apache-2.0"
dependencies = [
"flwr>=1.8.0,<2.0",
"flwr-datasets[vision]>=0.0.2,<1.0.0",
"flwr[simulation]>=1.10.0",
"flwr-datasets[vision]>=0.3.0",
"tensorflow-cpu>=2.9.1, != 2.11.1 ; platform_machine == \"x86_64\"",
"tensorflow-macos>=2.9.1, != 2.11.1 ; sys_platform == \"darwin\" and platform_machine == \"arm64\""
]

[tool.hatch.build.targets.wheel]
packages = ["."]

[tool.flwr.app]
publisher = "flwrlabs"

[tool.flwr.app.components]
serverapp = "tfexample.server_app:app"
clientapp = "tfexample.client_app:app"

[tool.flwr.app.config]
num-server-rounds = 3
local-epochs = 1
batch-size = 32
learning-rate = 0.005
fraction-fit = 0.5
verbose = false

[tool.flwr.federations]
default = "local-simulation"

[tool.flwr.federations.local-simulation]
options.num-supernodes = 10
41 changes: 0 additions & 41 deletions examples/quickstart-tensorflow/server.py

This file was deleted.

1 change: 1 addition & 0 deletions examples/quickstart-tensorflow/tfexample/__init__.py
@@ -0,0 +1 @@
"""tfexample."""
67 changes: 67 additions & 0 deletions examples/quickstart-tensorflow/tfexample/client_app.py
@@ -0,0 +1,67 @@
"""tfexample: A Flower / TensorFlow app."""

from flwr.client import NumPyClient, ClientApp
from flwr.common import Context

from tfexample.task import load_data, load_model


# Define Flower Client
class FlowerClient(NumPyClient):
def __init__(
self,
learning_rate,
data,
epochs,
batch_size,
verbose,
):
self.model = load_model(learning_rate)
self.x_train, self.y_train, self.x_test, self.y_test = data
self.epochs = epochs
self.batch_size = batch_size
self.verbose = verbose

def get_parameters(self, config):
"""Return the parameters of the model of this client."""
return self.model.get_weights()

def fit(self, parameters, config):
"""Train the model with data of this client."""
self.model.set_weights(parameters)
self.model.fit(
self.x_train,
self.y_train,
epochs=self.epochs,
batch_size=self.batch_size,
verbose=self.verbose,
)
return self.model.get_weights(), len(self.x_train), {}

def evaluate(self, parameters, config):
"""Evaluate the model on the data this client has."""
self.model.set_weights(parameters)
loss, accuracy = self.model.evaluate(self.x_test, self.y_test, verbose=0)
return loss, len(self.x_test), {"accuracy": accuracy}


def client_fn(context: Context):
"""Construct a Client that will be run in a ClientApp."""

# Read the node_config to fetch data partition associated to this node
partition_id = context.node_config["partition-id"]
num_partitions = context.node_config["num-partitions"]
data = load_data(partition_id, num_partitions)

# Read run_config to fetch hyperparameters relevant to this run
epochs = context.run_config["local-epochs"]
batch_size = context.run_config["batch-size"]
verbose = context.run_config.get("verbose")
learning_rate = context.run_config["learning-rate"]

# Return Client instance
return FlowerClient(learning_rate, data, epochs, batch_size, verbose).to_client()


# Flower ClientApp
app = ClientApp(client_fn=client_fn)
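The `fit`/`evaluate` contract that `FlowerClient` implements can be illustrated without TensorFlow. Below is a minimal sketch of one federated round using a hypothetical stand-in "model" (a single float weight) — illustrative only, not part of the example:

```python
# Sketch of the NumPyClient contract with a hypothetical stand-in model:
# fit() receives parameters, trains locally, and returns
# (new_parameters, num_examples, metrics). Illustrative only.

class TinyClient:
    def __init__(self, data):
        self.weight = 0.0  # stand-in for model.get_weights()
        self.data = data

    def fit(self, parameters, config):
        self.weight = parameters[0]
        # "Training": move the weight halfway toward the local data mean
        self.weight += 0.5 * (sum(self.data) / len(self.data) - self.weight)
        return [self.weight], len(self.data), {}

    def evaluate(self, parameters, config):
        self.weight = parameters[0]
        loss = sum((x - self.weight) ** 2 for x in self.data) / len(self.data)
        return loss, len(self.data), {"accuracy": 1.0 / (1.0 + loss)}

# One federated round: broadcast parameters, fit on each client,
# then average the results weighted by example counts (FedAvg-style)
clients = [TinyClient([1.0, 2.0]), TinyClient([3.0, 5.0])]
global_params = [0.0]
results = [c.fit(global_params, {}) for c in clients]
total = sum(n for _, n, _ in results)
global_params = [sum(w[0] * n for w, n, _ in results) / total]
print(global_params)
```

In the real example, the parameters are lists of NumPy arrays from `model.get_weights()` and the averaging is done by the `FedAvg` strategy on the server.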
44 changes: 44 additions & 0 deletions examples/quickstart-tensorflow/tfexample/server_app.py
@@ -0,0 +1,44 @@
"""tfexample: A Flower / TensorFlow app."""

from typing import List, Tuple
from flwr.common import Context, ndarrays_to_parameters, Metrics
from flwr.server import ServerApp, ServerAppComponents, ServerConfig
from flwr.server.strategy import FedAvg

from tfexample.task import load_model


# Define metric aggregation function
def weighted_average(metrics: List[Tuple[int, Metrics]]) -> Metrics:
# Multiply accuracy of each client by number of examples used
accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics]
examples = [num_examples for num_examples, _ in metrics]

# Aggregate and return custom metric (weighted average)
return {"accuracy": sum(accuracies) / sum(examples)}


def server_fn(context: Context):
"""Construct components that set the ServerApp behaviour."""

# Let's define the global model and pass it to the strategy
# Note this is optional.
parameters = ndarrays_to_parameters(load_model().get_weights())

# Define the strategy
strategy = FedAvg(
fraction_fit=context.run_config["fraction-fit"],
fraction_evaluate=1.0,
min_available_clients=2,
initial_parameters=parameters,
evaluate_metrics_aggregation_fn=weighted_average,
)
# Read from config
num_rounds = context.run_config["num-server-rounds"]
config = ServerConfig(num_rounds=num_rounds)

return ServerAppComponents(strategy=strategy, config=config)


# Create ServerApp
app = ServerApp(server_fn=server_fn)
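The `weighted_average` aggregation used by the strategy can be sanity-checked in isolation (the client counts and accuracies below are made up for illustration):

```python
# Sanity check for the weighted-average metric aggregation defined in
# server_app.py. The input values are illustrative only.

def weighted_average(metrics):
    # Multiply accuracy of each client by its number of examples
    accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics]
    examples = [num_examples for num_examples, _ in metrics]
    return {"accuracy": sum(accuracies) / sum(examples)}

# Two clients: 100 examples at 0.9 accuracy, 300 examples at 0.5 accuracy
result = weighted_average([(100, {"accuracy": 0.9}), (300, {"accuracy": 0.5})])
print(result)  # the 300-example client dominates: (90 + 150) / 400 = 0.6
```

A plain (unweighted) mean would give 0.7 here, which is why the strategy weights by example count.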