
I've written a program with the following output:

> mpiexec -n 3 "Poker Parallel Program.exe"
Entered slave. Rank: 1
Entered slave. Rank: 2
The program is about to do some statistical analysis of poker hands

Slave terminated: 1
Slave terminated: 2
Before recv. Proc number: 1
After slave send
After slave send
After recv. Proc number: 1
Before recv. Proc number: 2

The general code path is this:

  1. Recv in master is called
  2. Two slaves send
  3. First recv in master unblocks
  4. Second recv in master blocks

I just want to know whether the recv call needs to be made before the send. Otherwise, I'm not sure why my recv call is blocking.

  • Post code! There is no way to tell where your problem lies, otherwise. Commented Oct 2, 2014 at 19:37
  • I likely should have, but I was only interested in the specific behavior in the question, not how to solve my problem. Commented Oct 3, 2014 at 0:59

2 Answers


It's not required that you post your receive calls before your sends, but it will perform better if you do. It's also less likely to run out of memory.

If your program is hanging, it's probably not because of your ordering; more likely you aren't posting enough sends and receives, or the ones you do post aren't matching up correctly.
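
For concreteness, the master/slave exchange you describe, with the receives posted up front, might look roughly like this (a sketch only, since your actual code wasn't posted; the tag, buffers, and "work" are invented):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int TAG = 0;

        if (rank == 0) {
            /* Master: post a receive for every slave before any send needs
             * to be matched, then wait for all of them. */
            int *results = malloc((size - 1) * sizeof(int));
            MPI_Request *reqs = malloc((size - 1) * sizeof(MPI_Request));

            for (int src = 1; src < size; ++src)
                MPI_Irecv(&results[src - 1], 1, MPI_INT, src, TAG,
                          MPI_COMM_WORLD, &reqs[src - 1]);

            MPI_Waitall(size - 1, reqs, MPI_STATUSES_IGNORE);
            printf("Master received %d results\n", size - 1);

            free(reqs);
            free(results);
        } else {
            /* Slave: do some work, then send one result to the master. */
            int result = rank * 100;    /* stand-in for the real analysis */
            MPI_Send(&result, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD);
            printf("Slave %d sent its result\n", rank);
        }

        MPI_Finalize();
        return 0;
    }

Run it with mpiexec -n 3 as in your output; every send has exactly one matching receive posted in advance, so nothing can hang regardless of arrival order.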


3 Comments

Thanks, that's what I'm thinking.
This was exactly the issue. I had a hanging recv / request which I cancelled in my program using MPI_Cancel at a certain point, and that fixed the problem.
What happens if we call MPI_Recv without MPI_Send?

As gTcV said in a comment, post code!

That said, here's some useful general advice: read about MPI's communication modes. Note that "blocking" here doesn't mean "waits for a matching receive"; it only means that when the call to MPI_Send() returns, the send buffer is safe for the caller to reuse.
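
For example (a sketch only, with an invented helper and arbitrary destination/tag), this is legal as soon as MPI_Send() returns, even if the receiver hasn't received the message yet:

    #include <mpi.h>

    /* Illustration only: once MPI_Send() returns, the send buffer belongs
     * to the caller again, whether or not the message has been received. */
    void send_twice(int dest)
    {
        int data = 42;
        MPI_Send(&data, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        data = 43;   /* safe: the first send has returned, so the buffer is ours */
        MPI_Send(&data, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
    }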

The standard-mode send, MPI_Send(), is allowed (but not required) to buffer the message, often on the receive side, so that the send completes even when no matching MPI_Recv() has been posted yet. This can hide subtle bugs: everything seems to work fine at small scale, but as soon as you scale things up, those buffers fill and deadlocks that were previously hidden start to appear.

To give yourself the highest confidence that your protocol is correct, change every standard-mode MPI_Send() into a synchronous-mode send, MPI_Ssend(), while testing. A synchronous send never completes through buffering: each MPI_Ssend() returns only after a matching receive has been posted. When you're confident everything runs without deadlocks, switch back to MPI_Send() for performance. A #define macro saves you from searching and replacing every instance.
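
That macro trick could look something like this (MY_SEND and the DEBUG_SYNC_SEND flag are made-up names, just to show the idea; it works because MPI_Ssend() takes exactly the same arguments as MPI_Send()):

    #include <mpi.h>

    /* Put this in a shared header. Build with -DDEBUG_SYNC_SEND while
     * testing to force every send to be synchronous; build without it
     * to get the normal standard-mode sends back. */
    #ifdef DEBUG_SYNC_SEND
    #define MY_SEND MPI_Ssend   /* completes only once a matching receive exists */
    #else
    #define MY_SEND MPI_Send    /* may complete through internal buffering */
    #endif

    /* Usage is identical to MPI_Send:
     *   MY_SEND(buf, count, MPI_INT, dest, tag, MPI_COMM_WORLD);
     */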
