diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index aa3ab69..f1c0914 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -12,7 +12,7 @@ To set the local development environment:
 - Clone the forked repository locally.
 
 ```.sh
-git clone git@github.com:/networkx.git
+git clone git@github.com:/nx-parallel.git
 ```
 
 - Create a fresh conda/mamba virtualenv ([learn more](https://github.com/networkx/networkx/blob/main/CONTRIBUTING.rst#development-workflow))
@@ -55,7 +55,13 @@ git push origin
 
 ## Testing nx-parallel
 
-The following command runs all the tests in networkx with a `ParallelGraph` object and for algorithms not in nx-parallel, it falls back to networkx's sequential implementations. This is to ensure that the parallel implementation follows the same API as networkx's.
+First, install the dependencies for testing:
+
+```.sh
+pip install -e ".[test]"
+```
+
+Then run the following command, which executes all the tests in networkx's test suite with a `ParallelGraph` object; for algorithms not in nx-parallel, it falls back to networkx's sequential implementations. This ensures that the parallel backend follows the same API as networkx's.
 
 ```.sh
 PYTHONPATH=. \
@@ -64,7 +70,9 @@ NETWORKX_FALLBACK_TO_NX=True \
 pytest --pyargs networkx "$@"
 ```
 
-For running additional tests:
+Refer to the [NetworkX backend testing docs](https://networkx.org/documentation/latest/reference/backends.html#testing-the-custom-backend) to learn more about the testing mechanisms in networkx.
+
+For running additional tests specific to nx-parallel, you can run the following command:
 
 ```.sh
 pytest nx_parallel
@@ -116,7 +124,7 @@ The default chunking in nx-parallel is done by first determining the number of a
 - The algorithm that you are considering to add to nx-parallel should be in the main networkx repository and it should have the `_dispatchable` decorator. If not, you can consider adding a sequential implementation in networkx first.
 - check-list for adding a new function:
   - [ ] Add the parallel implementation(make sure API doesn't break), the file structure should be the same as that in networkx.
-  - [ ] add the function to the `Dispatcher` class in [interface.py](https://github.com/networkx/nx-parallel/blob/main/nx_parallel/interface.py) (take care of the `name` parameter in `_dispatchable` (ref. [docs](https://networkx.org/documentation/latest/reference/generated/networkx.utils.backends._dispatchable.html#dispatchable)))
+  - [ ] add the function to the `Dispatcher` class in [interface.py](https://github.com/networkx/nx-parallel/blob/main/nx_parallel/interface.py) (take care of the `name` parameter in `_dispatchable` (ref. [docs](https://networkx.org/documentation/latest/reference/backends.html)))
   - [ ] update the `__init__.py` files accordingly
   - [ ] docstring following the above format
   - [ ] run the [timing script](https://github.com/networkx/nx-parallel/blob/main/timing/timing_individual_function.py) to get the performance heatmap
diff --git a/LICENSE.md b/LICENSE.md
new file mode 100644
index 0000000..06988d9
--- /dev/null
+++ b/LICENSE.md
@@ -0,0 +1,28 @@
+BSD 3-Clause License
+
+Copyright (c) 2024, [Contributors of nx-parallel](https://github.com/networkx/nx-parallel/graphs/contributors)
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+   this list of conditions and the following disclaimer in the documentation
+   and/or other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its
+   contributors may be used to endorse or promote products derived from
+   this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/README.md b/README.md
index 42fc4c0..c161233 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # nx-parallel
 
-nx-parallel is a NetworkX backend that uses joblib for parallelization. This project aims to provide parallelized implementations of various NetworkX functions to improve performance.
+nx-parallel is a NetworkX backend that uses joblib for parallelization. This project aims to provide parallelized implementations of various NetworkX functions to improve performance. Refer to the [NetworkX backends documentation](https://networkx.org/documentation/latest/reference/backends.html) to learn more about the backend architecture in NetworkX.
 
 ## Algorithms in nx-parallel
 
@@ -36,8 +36,51 @@ for func in d:
+## Installation
+
+It is recommended to first refer to [NetworkX's INSTALL.rst](https://github.com/networkx/networkx/blob/main/INSTALL.rst).
+nx-parallel requires Python >=3.10. Right now, the only dependencies of nx-parallel are networkx and joblib.
+
+### Install the released version
+
+You can install the stable version of nx-parallel using pip:
+
+```sh
+$ pip install nx-parallel
+```
+
+The above command also installs the two main dependencies of nx-parallel, i.e. networkx
+and joblib. To upgrade to a newer release, use the `--upgrade` flag:
+
+```sh
+$ pip install --upgrade nx-parallel
+```
+
+### Install the development version
+
+Before installing the development version, you may need to uninstall the
+standard version of `nx-parallel` and the other two dependencies using `pip`:
+
+```sh
+$ pip uninstall nx-parallel networkx joblib
+```
+
+Then do:
+
+```sh
+$ pip install git+https://github.com/networkx/nx-parallel.git@main
+```
+
 ## Backend usage
 
+You can run your networkx code with the nx-parallel backend by setting the `NETWORKX_AUTOMATIC_BACKENDS` environment variable to `parallel`:
+
+```sh
+$ export NETWORKX_AUTOMATIC_BACKENDS=parallel && python nx_code.py
+```
+
+Note that for all functions inside `nx_code.py` that do not have an nx-parallel implementation, their original networkx implementation will be executed. You can also use the nx-parallel backend for only some specific function calls in your code in the following ways:
+
 ```.py
 import networkx as nx
 import nx_parallel as nxp
@@ -45,7 +88,7 @@ import nx_parallel as nxp
 
 G = nx.path_graph(4)
 H = nxp.ParallelGraph(G)
 
-# method 1 : passing ParallelGraph object in networkx function
+# method 1 : passing ParallelGraph object in networkx function (Type-based dispatching)
 nx.betweenness_centrality(H)
 
 # method 2 : using the 'backend' kwarg
@@ -62,7 +105,7 @@ nxp.betweenness_centrality(H)
 
 ### Notes
 
-1. Some functions in networkx have the same name but different implementations, so to avoid these name conflicts at the time of dispatching networkx differentiates them by specifying the `name` parameter in the [`_dispatchable`](https://networkx.org/documentation/latest/reference/generated/networkx.utils.backends._dispatchable.html#dispatchable) decorator of such algorithms. So, `method 3` and `method 4` are not recommended. But, you can use them if you know the correct `name`. For example:
+1. Some functions in networkx have the same name but different implementations, so to avoid these name conflicts at the time of dispatching, networkx differentiates them by specifying the `name` parameter in the `_dispatchable` decorator of such algorithms. So, `method 3` and `method 4` are not recommended, but you can use them if you know the correct `name`. For example:
 
 ```.py
 # using `name` parameter - nx-parallel as an independent package
@@ -82,4 +125,8 @@ nxp.betweenness_centrality(H)
 
 Feel free to contribute to nx-parallel. You can find the contributing guidelines [here](https://github.com/networkx/nx-parallel/blob/main/CONTRIBUTING.md). If you'd like to implement a feature or fix a bug, we'd be happy to review a pull request. Please make sure to explain the changes you made in the pull request description. And feel free to open issues for any problems you face, or for new features you'd like to see implemented.
 
+This project is managed under the NetworkX organisation, so the [code of conduct of NetworkX](https://github.com/networkx/networkx/blob/main/CODE_OF_CONDUCT.rst) applies here as well.
+
+All code in this repository is available under the Berkeley Software Distribution (BSD) 3-Clause License (see LICENSE).
+
 Thank you :)
diff --git a/pyproject.toml b/pyproject.toml
index e6d0149..f9310d4 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -27,7 +27,11 @@ path = "nx_parallel/__init__.py"
 [project.optional-dependencies]
 developer = [
     'pre-commit',
-    'pytest',
+]
+test = [
+    'pytest>=7.2',
+    'numpy>=1.23',
+    'scipy>=1.9,!=1.11.0,!=1.11.1',
 ]
 
 [project.entry-points."networkx.backends"]
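
The CONTRIBUTING hunk above mentions nx-parallel's default chunking, which divides the nodes (or edges, or any other iterator) across the available CPU cores. As a reviewer's note, the idea can be sketched in plain Python; the `chunks` helper below is illustrative only and is not nx-parallel's actual implementation:

```python
import os
from itertools import islice


def chunks(iterable, n):
    """Yield successive n-sized tuples from an iterable.

    Illustrative sketch of default chunking; nx-parallel's real
    helper may differ in name and behavior.
    """
    it = iter(iterable)
    while chunk := tuple(islice(it, n)):
        yield chunk


# Default-style chunking: split nodes roughly evenly across CPU cores.
nodes = list(range(10))
n_cores = os.cpu_count() or 1
num_in_chunk = max(len(nodes) // n_cores, 1)
node_chunks = list(chunks(nodes, num_in_chunk))
print(node_chunks)
```

Each chunk would then be handed to a joblib worker; the last chunk simply holds whatever remainder is left after the even division.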