I have read that working with more than 64 sockets in a thread is dangerous(?). But - at least for me - non-blocking sockets are used to avoid complicated threading. Since there is only one listener socket, how am I supposed to split sockets into threads and use them with select()? Should I create an fd_set for each thread, or what? And how am I supposed to assign a client to a thread, since I can only pass values at the start with CreateThread()?

3 Answers


No no no, you got a few things wrong there.

First, the ideal way to handle many sockets is to have a thread pool which will do the work in front of the sockets (clients).
Another thread or two (typically one per CPU core, as far as I know) handle accepting connections.

Now, when an event occurs, such as a new connection, it is dispatched to the thread pool for processing.

Second, it depends on the actual implementation and environment.
For example, on Windows there's something called IOCP (I/O Completion Ports).

If you ask me - do not bother with the lower-level implementation, but instead use a framework such as Boost.Asio or ACE.

I personally like Asio. The best thing about these frameworks is that they are usually cross-platform (*nix, Windows, etc.).

So, my answer is a bit broad, but I think it's best to take these facts into consideration before diving into code/manuals/implementation.

Good luck!


So you mean that one thread accepts connections, one thread holds all of the sockets and receives & sends packets, and one thread processes incoming data and informs the socket thread back?
You can look at it this way: one thread accepts connections only. The rest of the work is done by other thread(s). All the sockets are held at the application level, not the thread level. The threads only "work" with these sockets.
Is that where I will need critical sections? Since more than one thread will use a random Client class object.

Well, what you have read is wrong. Many powerful single-threaded applications have been written with non-blocking sockets and high-performance I/O demultiplexers like epoll(7) and kqueue(2). Their advantage is that you set up your wait events up front, so the kernel does not have to copy a ton of file descriptors and [re-]set up lots of state on each poll.

Then there are advantages to threading if your primary goal is throughput, and not latency.

Check out this great overview of available techniques: The C10K problem.


The "ideal way to handle many sockets" is not always - as Poni seems to believe - to "have a thread pool."

What does "ideal" pertain to? Is it ease of programming? Best performance?

Since he recommends not bothering "with the lower implementation" and using "a framework such as Boost.Asio or ACE," I guess he means ease of programming.

Had he had a performance angle on Windows, he would have recommended "something called IOCP." IOCPs are I/O Completion Ports, which allow implementing super-fast I/O applications using just a handful of threads (one per available core is recommended). IOCP applications run circles around any thread-pool equivalent, which he would have known had he ever written code using them. IOCPs are not used alongside thread pools but instead of them.

There is no IOCP equivalent in Linux.

Using a framework on Windows may result in a faster "time to market" product but the performance will be far from what it might have been had a pure IOCP implementation been chosen.

The performance difference is such that OS-specific implementations should be considered. If a generic solution is chosen anyway, at least the performance will not have been given away accidentally.
