
[WIP]: Refactor -- consolidate and simplify #7

Closed
dPys wants to merge 2 commits into networkx:main from dPys:consolidate-and-simplify

Conversation

@dPys
Contributor

@dPys dPys commented Jul 16, 2023

This PR includes the following changes:

  • Switches to absolute imports, following the best practices recommended by PEP 8.
  • Adds, consolidates, and streamlines critical functionality proposed in PR first commit, isolates and betweenness #2 (the remaining tests and algorithms from that PR will still need to be reviewed and merged).
  • Adds a Backends class in a new external module for handling different parallelization backends ("multiprocessing", "dask", "ray", "loky", "threading", and "ipyparallel") through the joblib API.
  • Adds a new optional_package function to the nx_parallel/misc.py file to accommodate the possibility of multiple optional backend dependencies.
  • Adds a partition module comprising general NxMap and NxReduce classes that can be used to chunk, map, and reduce the parallelizable components of most nx algorithms more easily and consistently (as of now, "nodes", "edges", "isolates", and "neighborhoods" are supported).
  • Leverages a structure for the algorithms module similar to the one proposed by @20kavishs, but with the help of the partition classes, which greatly simplifies the implementation of each parallel algorithm variant.
  • Updates the dependencies section of the pyproject.toml file to impose minimal version requirements for NetworkX and joblib.
  • Once there's consensus on the general restructuring, I will gladly add unit tests and address some of the git workflow failures.
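As a rough illustration of the optional_package idea described above, here is a minimal, hypothetical sketch (the real nx_parallel.misc.optional_package may differ): attempt to import an optional backend package and report whether it is available, so the library can degrade gracefully when a backend is not installed.

```python
import importlib


def optional_package(name):
    """Hypothetical sketch: try to import an optional backend dependency.

    Returns (module_or_None, is_available). Illustrative only; not the
    actual nx_parallel API.
    """
    try:
        return importlib.import_module(name), True
    except ImportError:
        return None, False


# "threading" ships with the standard library, so it is always importable;
# a heavyweight backend like "ray" may or may not be installed.
threading_mod, have_threading = optional_package("threading")
ray_mod, have_ray = optional_package("ray")
```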

dPys added 2 commits July 15, 2023 18:37
…th new ParallelGraph logic, add a tutorial
…able, and cleanly annotated base classes to permit easy iteration
@dPys
Contributor Author

dPys commented Jul 16, 2023

Note that this now more closely follows the proposed @nx._dispatch structure:

In [1]: import networkx as nx; import nx_parallel

In [2]: G = nx.barabasi_albert_graph(100, 3)

In [3]: H = nx_parallel.ParallelGraph(G)

In [4]: nx.betweenness_centrality(H)
Out[4]:
{0: 0.25151869905207225,
 1: 0.049610587404427434,
 2: 0.12300459374399848,
 3: 0.08304068354565398,
 4: 0.13909529780368793,
 5: 0.033370757979639815,
 6: 0.025224691529040805,
 7: 0.010337586781618939,
 8: 0.013377277246126217,
 9: 0.05723111951263706,
 10: 0.1619649220082718,
 ...
 91: 0.000662037520417236,
 92: 0.0004985754985754986,
 93: 0.0006179450357121424,
 94: 0.0005769500203637495,
 95: 0.0020165815010068505,
 96: 0.0016015554110792203,
 97: 0.0015408701683211486,
 98: 0.0019846075040880237,
 99: 0.0016253091741785242}

Member

@Schefflera-Arboricola Schefflera-Arboricola left a comment


Thanks @dPys for this PR. It has really helped us shape nx-parallel nicely. I'm closing this because there are a lot of merge conflicts and a lot has changed since you opened this PR. But, please feel free to re-open :) and feel free to provide any kind of feedback on the current state of nx-parallel or any of the open issues.

Thank you very much @dPys :)
Hope to see you contribute here again :)

"ipyparallel",
]

class Backend:


NetworkX implements a config context manager class, which was made compatible with nx-parallel in PR #75. joblib.parallel_config can also be used to configure nx-parallel once PR #75 is merged. Thanks!
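To make the config-context-manager pattern concrete, here is a self-contained, dict-backed sketch: settings are overridden inside a `with` block and restored on exit. This mimics the *pattern* of NetworkX's config object and joblib.parallel_config only; the names and keys here are illustrative, not either library's actual API.

```python
import contextlib

# Illustrative global config; real libraries use a dedicated config object.
_config = {"backend": "loky", "n_jobs": 1}


@contextlib.contextmanager
def parallel_config(**overrides):
    """Temporarily override config entries, restoring them on exit."""
    previous = {key: _config[key] for key in overrides}
    _config.update(overrides)
    try:
        yield _config
    finally:
        _config.update(previous)  # restore prior settings even on error


with parallel_config(backend="threading", n_jobs=4):
    inside = dict(_config)  # overridden values visible here
after = dict(_config)       # original values restored here
```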

Comment on lines +18 to +27
def chunk(l: Union[List, Tuple], n: int) -> Iterable:
    """Divide a list `l` of nodes or edges into chunks of size `n`."""
    l_c = iter(l)
    while True:
        x = tuple(itertools.islice(l_c, n))
        if not x:
            return
        yield x

def create_iterables(self, G: nx.Graph, iterator: str) -> Iterable:
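As a quick sanity check on the generator above: it yields successive tuples of at most `n` items. A standalone run (imports added so the snippet executes on its own):

```python
import itertools
from typing import Iterable, List, Tuple, Union


def chunk(l: Union[List, Tuple], n: int) -> Iterable:
    """Yield successive tuples of at most `n` items from `l`."""
    l_c = iter(l)
    while True:
        x = tuple(itertools.islice(l_c, n))
        if not x:
            return
        yield x


chunks = list(chunk(list(range(7)), 3))  # [(0, 1, 2), (3, 4, 5), (6,)]
```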


These two functions have been included in nx-parallel. Thanks @dPys!

Contributor Author


Awesome

return joblib.Parallel(n_jobs=self.backend.processes)(calls)


class NxReduce:


The NxMap and NxReduce architecture implemented here is being discussed in Issue #30.
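To ground the architecture being discussed, here is a minimal serial sketch of the map/reduce split: partition the input into chunks, map a function over each chunk, then combine the partial results. The names nx_map and nx_reduce are illustrative stand-ins, not the actual NxMap/NxReduce API, and the serial list comprehension is where a backend like joblib.Parallel would dispatch work.

```python
from functools import reduce


def nx_map(func, items, n_chunks):
    """Split `items` into roughly `n_chunks` pieces and map `func` over each."""
    size = max(1, len(items) // n_chunks)
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    # Serial stand-in; a real backend would run these calls in parallel.
    return [func(c) for c in chunks]


def nx_reduce(partials):
    """Combine the per-chunk partial results into a final answer."""
    return reduce(lambda a, b: a + b, partials)


partials = nx_map(sum, list(range(10)), n_chunks=2)  # [10, 35]
total = nx_reduce(partials)                          # 45
```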

Contributor Author


Great, will take a look

@dPys
Contributor Author

dPys commented Aug 26, 2024

I'm so glad to see the state of nx-parallel improve to what it is now. It's really starting to come together, @Schefflera-Arboricola! Even though this PR is being closed due to the merge conflicts, it would still be nice to be listed as a contributor.

By the way, are you all still meeting regularly (Wednesdays, if I recall)? I believe I still have the link. If so, I would love to start sitting in on those discussions. This repo seems ripe for a code sprint, and perhaps even a conference submission.

@Schefflera-Arboricola
Member

Thank you!

And thank you for your contributions. I apologize that we don't have you listed as a contributor on this repository. I should have imported the commits from this PR into mine when I borrowed the create_iterables function. At the time, I didn't know it was possible to add someone else's commits to my PRs; I only learned about it very recently. Please let me know how we can credit you for your contributions.

And yes we have weekly meetings, here’s the calendar link: https://scientific-python.org/calendars/networkx.ics. We’d be delighted to have you join! Also, feel free to lead any sprints on nx-parallel at any upcoming conferences.

Also, if you could give any feedback as a user of nx-parallel then that will be great too :)

Thank you and hope to see you around!

@dPys
Contributor Author

dPys commented Sep 4, 2024

You're very welcome! No worries about the commits; happy to have contributed. I joined last week's meeting and should also be there this afternoon. Thanks for the invite and for the offer to lead sprints on nx-parallel at upcoming conferences. I don't have any conferences planned for the next few months, but I will consider sprinting if and when I do. And, of course, I'd be glad to keep providing feedback on nx-parallel as I continue tinkering.


3 participants