
I've implemented a C socket server that is a slight adaptation of the example server here:

http://www.tutorialspoint.com/unix_sockets/socket_server_example.htm

It uses the first example, without forking, because there will only ever be a single connection to it. The changes I made were to move the bzero and read calls into a separate function, and then in main to wrap the call to that function in a do/while loop. I also changed read to recv. So it looks something like this, pseudocode-wise:

int main( int argc, char *argv[] )
{
  ... (code from the sample code I linked to above) ...

  if (newsockfd < 0) {
    perror("ERROR on accept");
    exit(1);
  }

  puts("Connected.");

  do {
    res = doStuff(newsockfd);
  } while (res > 0);

  puts("Disconnected.");

  close(newsockfd); 

  ...
}

int doStuff(int socket) {
  ...
  int n;
  uint8_t *buf = malloc(4);   /* casting malloc's result is unnecessary in C */
  uint8_t *packet;

  bzero(buf, 4);
  n = recv(socket, buf, 4, 0);
  if (n <= 0) { free(buf); return n; }

  (... do stuff, including creating a packet of a certain size ...)

  if (send(socket, packet, packetLength, MSG_DONTWAIT) == -1) {
    puts("Error with send");
    free(buf);
    return -1;
  }
  free(buf);
  return 1;
}

Good news is, this works most of the time. I'm able to connect from my C# client, send numerous request messages one right after the other, and get responses back to the client.

Bad news is, sometimes it doesn't work: the C server sends several packets, and the C# client receives them added together into a single packet.

For example, I sent three messages from the client to the C server in succession, requesting responses of different byte sizes. The C server received the requests and, according to the printout on the screen, sent the responses back in the right order. It said something like: "Received request for 12 bytes. Sent 12 bytes back. Received request for 5 bytes. Sent 5 bytes back. Received request for 18 bytes. Sent 18 bytes back."

So the C# client should have received 3 separate packets: one with 12 bytes, one with 5 bytes, one with 18 bytes. Instead, it gets 1 packet with 35 bytes, which is 12+5+18, then it outputs an error because it was expecting a 12-byte packet next. On the C# side, I'm using Receive().

Can anyone give me a clue on where I can look to debug this? Thank you.

Edited to add: The packets being sent back and forth are of varying sizes, but always an array of raw bytes (like 0x00, 0x42, 0x01, etc.).

  • Closely read the documentation for recv()/send() and learn that those two functions do not necessarily receive/send as many bytes as they were asked to, and may transfer fewer. So looping around such calls, counting until all expected data has been received/sent, is a good idea, not to say an essential necessity. Commented May 9, 2014 at 7:50

3 Answers


You must add framing to your protocol over TCP. Nothing ever guarantees that the size sent matches the size received: 10 bytes sent at once can be received as ten separate 1-byte reads, and 1 byte sent ten times can be received as a 7-byte read followed by a 3-byte read.

Adding framing to a protocol is trivial. You can do it old school, with a length header (an integer before each message announcing the message length), or you can use protobufs, which has many advantages.


TCP is not the right choice for packet-specific processing; it is for stream processing. To do application-level framing (which is weasel words for your concept of "packets") you need to read only the expected size on the client, process it, then read the next expected size, and so on.


The problem that you're seeing is expected behavior from TCP. TCP attempts to be as efficient as possible when transmitting, so it will collect several writes into a single packet for transmission whenever possible.

The problem that you evidently haven't yet seen (but will see at some point) is the fragmentation problem. This will occur when you attempt to send more than the maximum transmission unit (MTU) size in a single write. For example, if you write 2000 bytes, and the MTU size is 1500, then the receive side will get a 1500 byte packet followed by a 500 byte packet.

The moral of the story is that the receiver must be prepared to reconstruct the messages both by reading multiple times, and by extracting multiple messages from a single read. To do this, you need to implement a protocol that allows you to extract individual messages from a stream of bytes.

3 Comments

Thank you for the advice. I will never be sending more than ~600 bytes in a single send() command, over ethernet (which has MTU 1500 bytes). Do I still need to worry about fragmentation, do you think?
In theory, yes. In practice, probably not. However, it's always best to code for theory, especially if you expect your code to be widely deployed, or if you expect that you will have to maintain your code for a long period of time.
TCP will not fragment packets. It uses MTU Discovery to find the maximum packet size and sends packets that will fit. Sometimes a packet will be fragmented anyway but that is not TCP's fault. Since TCP is stream and not packet based and does not need to maintain original packet sizes it cuts its stream into sizes that fit without fragments.
