
I'm constructing a graph with ~10,000 nodes, where each node has metadata that determines which other nodes it will be connected to by an edge.
Since the number of possible edges (~50M) is far greater than the number of edges that will actually be added (~300k), it is suboptimal to just iterate through node pairs with for loops to check whether an edge should be added between them. Using some logic to filter out many pairs that don't need checking, with the help of NumPy's fast methods, I quickly reduced the possibilities to an array of only ~30M pairs.
However, when iterating through these instead, performance did not improve much; in fact, iterating through the bigger 2D boolean matrix is twice as fast as my method, which first collects the True values from the matrix and then iterates through only those ~30M entries. There must be a way to get the desired performance benefit, but I hit a dead end, and I'm looking to understand why some methods are faster and how to improve my runtime.


Context: In particular, every node is an artist, with metadata such as locations and birth and death years.
I connect two artists based on a method that calculates a measure of roughly how close to each other they lived at some point in time (e.g. two artists living in the same place at the same time, for long enough, would get a high value). This is a typical way to achieve just that (iterating through indices is preferred over names):

import itertools

for i, j in itertools.combinations(range(len(artist_names)), 2): #~50M iterations
    artist1 = artist_names[i]
    artist2 = artist_names[j]
    #...
    artist1_data = artist_data[artist1]
    artist2_data = artist_data[artist2]

    val = process.get_loc_similarity(artist1_data, artist2_data)
    
    if val > 0:
        G.add_edge(artist1, artist2, weight=val)

As the number of node pairs is ~50M, this runs for ~14 minutes. I reduced the number of possibilities by filtering out pairs of artists whose lifetimes did not overlap. With NumPy's methods running C under the hood, this executed in less than 5 seconds and gathered only ~30M pairs that still have to be checked:

birth_condition_matrix = (birth_years < death_years.reshape(-1, 1))
death_condition_matrix = (death_years > birth_years.reshape(-1, 1))
overlap_matrix = birth_condition_matrix & death_condition_matrix

overlapping_pairs_indices = np.argwhere(overlap_matrix) # equivalent to np.array(np.where(overlap_matrix)).T
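Since the overlap condition is symmetric and every artist's lifetime trivially overlaps itself, this array contains the diagonal plus every pair twice, in both (i, j) and (j, i) order. Restricting it to the upper triangle (i < j) should roughly halve the candidates and make the i != j check below unnecessary. A minimal sketch with made-up toy data (the year arrays are hypothetical stand-ins for my real ones):

```python
import numpy as np

# toy data standing in for my real arrays (hypothetical values)
rng = np.random.default_rng(0)
n = 1000
birth_years = rng.integers(1400, 1900, size=n)
death_years = birth_years + rng.integers(20, 80, size=n)

overlap_matrix = (birth_years < death_years.reshape(-1, 1)) \
               & (death_years > birth_years.reshape(-1, 1))

# keep only i < j: drops the diagonal and each duplicate (j, i) entry
upper_pairs = np.argwhere(np.triu(overlap_matrix, k=1))
all_pairs = np.argwhere(overlap_matrix)
```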

We can thus iterate through fewer pairs:

for i, j in overlapping_pairs_indices: #~30M iterations
    if i != j:
        artist1 = artist_names[i]
        artist2 = artist_names[j]
        
        artist1_data = artist_data[artist1]
        artist2_data = artist_data[artist2]
        val = process.get_loc_similarity(artist1_data, artist2_data)
    
        if val > 0:
            G.add_edge(artist1, artist2, weight=val)

It comes as a surprise that this still runs for over ~13 minutes, instead of improving the runtime by ~40% or so.

Surprisingly, iterating over the matrix indices is much faster, even though it still looks at all 50M combinations:

for i in range(len(artist_names)):
    for j in range(i + 1, len(artist_names)): #~50M iterations
        if overlap_matrix[i, j]:
            artist1 = artist_names[i]
            artist2 = artist_names[j]

            artist1_data = artist_data[artist1]
            artist2_data = artist_data[artist2]

            val = process.get_loc_similarity(artist1_data, artist2_data)

            if val > 0:
                G.add_edge(artist1, artist2, weight=val)

This ran for less than 5 minutes despite again iterating 50M times.
That is surprising and promising, and I would like to figure out what makes this faster than the previous attempt, and how to modify it to be even faster.

How could I improve the runtime by using the right methods?
I wonder if there is a possibility of further utilizing NumPy, e.g. avoiding for loops even for the similarity calculation itself, with a method similar to a pandas DataFrame's .apply().
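For example, if the similarity measure itself could be expressed in array operations, the Python loop could disappear entirely. get_loc_similarity is my own function, so the sketch below uses a made-up stand-in (lifetime overlap in years) purely to show the shape of such a fully vectorized pipeline; all names and values are hypothetical:

```python
import numpy as np
import networkx as nx

# toy data standing in for my real inputs (hypothetical values)
rng = np.random.default_rng(1)
artist_names = [f"artist_{k}" for k in range(500)]
n = len(artist_names)
birth_years = rng.integers(1400, 1900, size=n)
death_years = birth_years + rng.integers(20, 80, size=n)

# stand-in for get_loc_similarity: pairwise lifetime overlap in years,
# computed for all n*n pairs at once via broadcasting
overlap_years = (np.minimum(death_years[:, None], death_years[None, :])
                 - np.maximum(birth_years[:, None], birth_years[None, :]))
weights = np.clip(overlap_years, 0, None).astype(float)

# keep only i < j with a positive weight, then add all edges in one call
i_idx, j_idx = np.nonzero(np.triu(weights, k=1))
G = nx.Graph()
G.add_weighted_edges_from(
    (artist_names[i], artist_names[j], weights[i, j])
    for i, j in zip(i_idx.tolist(), j_idx.tolist())
)
```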

(I also noticed that looping through a zip, such as for i, j in zip(overlap_pairs[:, 0], overlap_pairs[:, 1]), did not improve the runtime.)
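One variant I have not benchmarked at scale: converting the index array to a plain list first. Iterating over a 2-D NumPy array yields row views whose elements are boxed NumPy scalar objects, which are costlier than built-in ints; .tolist() converts everything to plain Python objects up front. A small illustration of the difference (toy array, not my real data):

```python
import numpy as np

pairs = np.argwhere(np.ones((100, 100), dtype=bool))  # toy (i, j) index array

i_np, j_np = pairs[0]           # unpacking an ndarray row boxes NumPy scalars
i_py, j_py = pairs.tolist()[0]  # .tolist() yields plain built-in ints

numpy_scalars = isinstance(i_np, np.integer)
python_ints = isinstance(i_py, int) and not isinstance(i_py, np.integer)
```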

Comments:
  • (1) How often does i == j in your code that you need that check? If it happens often, you can remove those lines before iterating through the array. (2) Have you profiled your code? Which part is actually causing the slowdown? Commented Dec 25, 2024 at 20:53
  • Without the missing pieces, we can't test this or alternatives... Commented Dec 25, 2024 at 21:15
  • Wild guess: iterating over overlapping_pairs_indices results in temporary array slices which are then split into two np.int32 objects. The latter are generally more expensive than built-in types (i.e. int). Overall, iterating over a NumPy array is generally pretty expensive. Here you create at least 90M objects... Converting it to a list beforehand might help to speed this up (and check whether this is a problem or not). Here is a post about similar effects (see the part about sum(np.arange(N))) Commented Dec 25, 2024 at 22:01
  • This question is too long. But in general, the fastest way to iterate is DON'T. Or rather use compiled methods to work with the whole array (or some major slice). A few iterations on a complex task are fine. What you want to avoid is many (Python-level) iterations. Commented Dec 25, 2024 at 22:03
  • For more help, please provide a minimal reproducible example Commented Dec 25, 2024 at 22:03
