ENH : adding parallel implementations of all_pairs_ algos #33

dschult merged 24 commits into networkx:main

Conversation
resolved conflicts (rebased)
…tivity and added connectivity.connectivity.all_pairs_node_connectivity and all_pairs_all_shortest_paths
…d an if-else statement, added and updated heatmaps(no speedups for all_pairs_shortest_path_length)
That happened to me yesterday too. I don't think it has anything to do with the backend, but I'm not sure.
Where does this PR stand? Is there more to do here?
I think we can merge this. The only algo not showing any speedups is all_pairs_shortest_path_length.
Please do "Squash and merge" when merging this PR.
Added the following:

- all_pairs_dijkstra
- all_pairs_dijkstra_path_length
- all_pairs_dijkstra_path
- all_pairs_bellman_ford_path_length
- all_pairs_shortest_path
- all_pairs_shortest_path_length - no speedups (the standard sequential implementation is already really fast; even when the graph size was increased to 4000 nodes the speedups were negligible, and it only took a minute or two to generate its heatmap)
- all_pairs_node_connectivity [1]
- approximate_all_pairs_node_connectivity [1]
- all_pairs_all_shortest_paths

[1] Here, default chunking means having chunks of at most 10 nodes, because otherwise it was taking too long with bigger node_chunks. I also reduced the number of nodes while timing, but the speedups are pretty consistent (around 3x). This heatmap alone took 2.5 hrs, and ran successfully after several tries. Also, there is no heatmap for approximate_all_pairs_node_connectivity because, for a random graph, it would result in the same heatmap as all_pairs_node_connectivity.
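For readers unfamiliar with the chunking scheme discussed above, the general pattern can be sketched roughly as follows. This is a minimal illustration, not the actual source of this PR: the helper names (`chunks`, `parallel_all_pairs_shortest_path_length`) are hypothetical, and the per-node work is stood in for by `single_source_shortest_path_length`. The real implementations use joblib-style parallelism over node chunks in a similar spirit.

```python
# Hypothetical sketch of chunked parallel all_pairs_* computation.
# Assumptions: joblib and networkx are installed; names are illustrative.
from itertools import islice

import networkx as nx
from joblib import Parallel, delayed


def chunks(iterable, n):
    """Yield successive chunks of at most n items each."""
    it = iter(iterable)
    while chunk := tuple(islice(it, n)):
        yield chunk


def _lengths_for_chunk(G, nodes):
    # Per-chunk work: run the single-source algorithm for each node
    # in this chunk and collect (node, distance_dict) pairs.
    return [(n, nx.single_source_shortest_path_length(G, n)) for n in nodes]


def parallel_all_pairs_shortest_path_length(G, chunk_size=10, n_jobs=2):
    # chunk_size=10 mirrors the "chunks of at most 10 nodes" default
    # mentioned in the PR description; it is not a universal constant.
    results = Parallel(n_jobs=n_jobs)(
        delayed(_lengths_for_chunk)(G, chunk)
        for chunk in chunks(G.nodes, chunk_size)
    )
    # Flatten the per-chunk results into one dict keyed by source node.
    return {n: d for chunk_result in results for n, d in chunk_result}


G = nx.path_graph(5)
lengths = parallel_all_pairs_shortest_path_length(G)
```

The trade-off the footnote hints at: larger chunks mean less scheduling overhead but coarser load balancing, which is presumably why very large node_chunks made the connectivity heatmaps slow.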