Merged
4 changes: 4 additions & 0 deletions .gitignore
Expand Up @@ -127,3 +127,7 @@ dmypy.json

# Pyre type checker
.pyre/

# asv
results/
html/
7 changes: 7 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,7 @@
## Preview benchmarks locally

1. Clone this repo
2. `cd benchmarks`
3. `asv run` will run the benchmarks on the latest commit (or use `asv continuous base_commit_hash test_commit_hash` to compare two commits)
4. `asv publish` will create a `html` folder with the results
5. `asv preview` will host the results locally at http://127.0.0.1:8080/
40 changes: 40 additions & 0 deletions benchmarks/asv.conf.json
@@ -0,0 +1,40 @@
{
"version": 1,

"project": "nx-parallel",

"project_url": "https://github.com/networkx/nx-parallel",

"repo": "..",

"branches": ["main"],

"build_command": [
"python -m pip install build hatchling",
"python -m build .",
"PIP_NO_BUILD_ISOLATION=false python -mpip wheel --no-deps --no-index -w {build_cache_dir} {build_dir}"
],

"install_command": ["in-dir={env_dir} python -mpip install {wheel_file}"],

"dvcs": "git",

"environment_type": "virtualenv",

"show_commit_url": "https://github.com/networkx/nx-parallel/commit/",

"matrix": {
"networkx":[],
"nx-parallel": []
},

"benchmark_dir": "benchmarks",

"env_dir": "env",

"results_dir": "results",

"html_dir": "html",

"build_cache_size": 8
}
Empty file.
22 changes: 22 additions & 0 deletions benchmarks/benchmarks/bench_centrality.py
@@ -0,0 +1,22 @@
import networkx as nx
import nx_parallel as nxp


class BetweennessCentralityBenchmark:
    params = [("parallel", "normal")]
    param_names = ["algo_type"]

    def setup(self, algo_type):
        self.algo_type = algo_type

    def time_betweenness_centrality(self, algo_type):
        num_nodes, edge_prob = 300, 0.5
        G = nx.fast_gnp_random_graph(num_nodes, edge_prob, directed=False)
        if algo_type == "parallel":
Member:
Maybe we should also have a third algo_type, where we run it using a keyword instead of converting the graph to a parallel graph. That is, the call would be something like:

        _ = nx.betweenness_centrality(G, backend="nx-parallel")

I guess we'd better make sure that works first, and gives the same results. But it should work, and it'd be nice to know if they are the same speed. Of course, if they are almost identical all the time, then we probably should remove it again.

Member Author:

@dschult
The output is the same for both approaches (i.e., converting into a parallel graph and using the keyword argument) when I round the betweenness_centrality values to 2 decimal places; otherwise the values do sometimes differ.

And yes, both approaches take almost identical time, but I've still added it in the comments here in case someone wants to try it. There is some difference for 500-node graphs, as seen in the following screenshot:

[Screenshot 2023-11-03 at 5:24:33 PM: timing comparison for 500-node graphs]

But this difference seems negligible when we bring the sequential version into the picture (as seen in the following screenshot):

[Screenshot 2023-11-03 at 5:26:34 PM: timing comparison including the sequential version]

Also, I have added num_nodes and edge_prob and changed the normal algo_type to sequential in this new commit.

Thank you :)

Member Author:

Also, should I create a PR to change the plugin name from parallel to nx-parallel, if that's required?

Member:

Thanks for those time comparisons. I'm glad the time doesn't change much between selecting the backend by object type and selecting it via a keyword argument.

I think the naming convention for these backend libraries is supposed to be, e.g., nx-cugraph for the library when the plugin name is cugraph. I will check with other people though.

The additional graph_types parameter you mention is already obtained from the graph object. So the backend gets that info via, e.g., G.is_directed().

Graph generators will need to be handled either in networkx before calling the function, or in nx-parallel by writing a parallel version of that function. I believe the PR adding @dispatch decorators to the graph generators has been merged. The weight parameter should be passed through to the backend function; if it is None, we use an unweighted approach.

I haven't looked at the new commit yet, but hope to soon. :}

            H = nxp.ParallelGraph(G)
            _ = nx.betweenness_centrality(H)
        elif algo_type == "normal":
            _ = nx.betweenness_centrality(G)
        else:
            raise ValueError("Unknown algo_type")
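The review thread above proposes a third `algo_type` that exercises the `backend=` keyword instead of converting the graph. A minimal sketch of how asv's parametrization would drive such a variant is below; it uses a toy `fake_centrality` stand-in (hypothetical, so the sketch runs without `networkx` or `nx_parallel` installed), and the driver loop at the bottom only mimics how asv invokes `setup` and `time_*` methods, one parameter value at a time:

```python
def fake_centrality(n, parallel=False):
    # Toy stand-in for nx.betweenness_centrality: counts ordered node
    # pairs, O(n^2). `parallel` only labels the variant; no real
    # parallelism happens here.
    return sum(1 for i in range(n) for j in range(n) if i != j)


class FakeCentralityBenchmark:
    # asv takes the cross-product of `params` and passes one value per
    # entry in `param_names` to setup() and each time_* method.
    params = [("parallel", "sequential", "backend_kwarg")]
    param_names = ["algo_type"]

    def setup(self, algo_type):
        self.n = 50

    def time_centrality(self, algo_type):
        if algo_type == "parallel":
            fake_centrality(self.n, parallel=True)
        elif algo_type == "sequential":
            fake_centrality(self.n)
        elif algo_type == "backend_kwarg":
            # In the real benchmark this branch would call
            # nx.betweenness_centrality(G, backend="nx-parallel").
            fake_centrality(self.n, parallel=True)
        else:
            raise ValueError(f"Unknown algo_type: {algo_type}")


# Mimic asv's driver: run each parameter value once.
bench = FakeCentralityBenchmark()
for algo_type in bench.params[0]:
    bench.setup(algo_type)
    bench.time_centrality(algo_type)
```

In a real `bench_centrality.py`, asv discovers the class and times each `time_*` method per parameter value itself; the explicit loop is illustration only.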