SDL_net 2.0 multithreading

Is it safe to call SDL_net functions on another thread (other than the main thread)? And are there any rules about it? I could not find any information about it when I searched for it.

Yes, it is safe. In fact, some operations should be done in a separate thread.
I looked into the TCP part of SDL_net. In particular, any call to
SDLNet_ResolveHost, if it has to resolve the host name via a DNS query to a remote server
SDLNet_TCP_Open that connects to a remote host and doesn't just establish a listening socket
SDLNet_TCP_Recv, if and only if there aren't any pending bytes on the TCP stream
SDLNet_TCP_Send
must be done on a separate thread if you want to avoid blocking the render thread, missing frame timings, and leaving the window unresponsive.
However, avoid letting two or more threads meddle with the same socket at the same time. Make sure the threads communicate with each other properly to avoid concurrency bugs; use mutexes, locks, etc. to ensure that.
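For illustration, here is a minimal sketch of pushing those blocking calls onto an SDL worker thread so the main/render thread keeps running. It assumes SDL2 with SDL_net 2.0; the host name, port, and helper names such as connect_worker are made up for the example.

    #include <SDL.h>
    #include <SDL_net.h>

    struct ConnectJob {
        const char* host;
        Uint16 port;
        TCPsocket socket;   // written by the worker, read by the main thread afterwards
        SDL_atomic_t done;  // 0 while the worker is still resolving/connecting
    };

    static int connect_worker(void* data) {
        ConnectJob* job = static_cast<ConnectJob*>(data);
        IPaddress ip;
        // Both calls below can block for seconds on a slow network or DNS server.
        if (SDLNet_ResolveHost(&ip, job->host, job->port) == 0)
            job->socket = SDLNet_TCP_Open(&ip);
        SDL_AtomicSet(&job->done, 1);
        return 0;
    }

    int main(int, char**) {
        SDL_Init(0);
        SDLNet_Init();

        ConnectJob job{"example.com", 1234, nullptr, {0}};
        SDL_Thread* t = SDL_CreateThread(connect_worker, "net-connect", &job);

        // The main/render loop keeps running; poll the flag instead of blocking.
        while (!SDL_AtomicGet(&job.done)) {
            // ... pump events, render, etc. ...
            SDL_Delay(16);
        }
        SDL_WaitThread(t, nullptr);

        if (job.socket)
            SDLNet_TCP_Close(job.socket);
        SDLNet_Quit();
        SDL_Quit();
        return 0;
    }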

Related

asio::async_connect vs asio::connect. Is asio::connect a non-blocking sync?

I'm using the asio 1.18.1 standalone version (no boost) and I wonder about the difference between asio::connect and asio::async_connect.
I can tell myself why I need async for my server, because the point of async is being able to deal with a lot of data on a lot of different connections at the same time.
But when it comes to the client, I really need just one non-blocking thread and isn't async for just one thread useless? Is asio::connect a non-blocking sync, because that's what I really need? If it's a blocking sync, then I would rather choose the asio::async_connect. Same question about asio::async_read and asio::async_write.
I'm using the asio 1.18.1 standalone version (no boost) and I wonder about the difference between asio::connect and asio::async_connect.
asio::connect attempts to connect the socket at the point of call and will block until the connection is established. In other words, if establishing the connection takes, say, 20 seconds, it will block for the entire duration.
asio::async_connect will simply queue up the connection request and will not actually do anything until you call io_context.run() (or related functions such as run_one(), poll(), etc.).
I can tell myself why I need async for my server, because the point of async is being able to deal with a lot of data on a lot of different connections at the same time.
I can neither confirm nor deny that.
But when it comes to the client, I really need just one non-blocking thread and isn't async for just one thread useless?
Not necessarily. Async is still useful if you want to do other things on the same thread, e.g. show connection progress, run a periodic timer, or keep an interactive GUI responsive. If you call asio::connect, your GUI will freeze until the function returns. You could call asio::connect on a separate thread from your GUI, but then you need to worry about thread synchronization, locks, mutexes, etc.
Is asio::connect a non-blocking sync, because that's what I really need?
I don't really understand this question, but asio::connect is blocking.
If it's a blocking sync, then I would rather choose the asio::async_connect. Same question about asio::async_read and asio::async_write.
asio::connect, asio::read and asio::write are all blocking. In other words, they will execute at the point of call and will block until done.
asio::async_connect, asio::async_read and asio::async_write are their async (non-blocking) counterparts. When you call any of them, the operation is queued and will be executed once you call io_context.run(). (I am simplifying a bit, but that's the basic concept.)
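To make the difference concrete, here is a minimal sketch using standalone asio; the host example.com and port 80 are placeholders, error handling is kept to a minimum, and it only illustrates the blocking vs. queued behaviour, not a complete client.

    #include <asio.hpp>
    #include <iostream>
    #include <string>

    int main() {
        asio::io_context io;
        asio::ip::tcp::resolver resolver(io);
        auto endpoints = resolver.resolve("example.com", "80"); // blocking resolve

        // Blocking: returns only once the connection is established (or throws).
        asio::ip::tcp::socket sync_socket(io);
        asio::connect(sync_socket, endpoints);

        // Non-blocking: only queues the request; the handler runs from inside io.run().
        asio::ip::tcp::socket async_socket(io);
        asio::async_connect(async_socket, endpoints,
            [](const asio::error_code& ec, const asio::ip::tcp::endpoint&) {
                std::cout << (ec ? "async connect failed: " + ec.message()
                                 : std::string("async connected")) << "\n";
            });

        io.run(); // nothing asynchronous happens until run() is called
        return 0;
    }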

Is it possible to have two boost acceptors in the same program?

My boost server unexpectedly stopped accepting incoming connections after a colleague on my team created yet another server using a boost acceptor in a different thread (on a different port). Is this normal, and how do I make the two servers work independently without interfering with each other?
SOLVED: the acceptors had nothing to do with it; the guy had started an infinite loop somewhere that blocked the other components. I guess that is what happens when a team works uncoordinated :( Sorry guys, sehe's the best as always
We're using multiple acceptors with a single io_service just fine, as by design.
Also, we're sharing out work across multiple other io_service instances, using the same sockets, just fine, as by design.
What could be happening in your code base is an antipattern: if people call stop() on your io_service instance, then yes, that would wreak havoc on any other async operations queued on the same instance.
So, in general, the idea is to avoid stop() or similar "lifetime" operations on a shared io_service instance. The only appropriate time for such a call is during a forced shutdown sequence; a graceful shutdown should instead let all active connections close and the pending work drain, so that the threads running io_service::run() complete on their own anyway.
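For illustration, a minimal sketch (ports 8080/8081 and the start_accept helper are arbitrary choices) of two independent acceptors driven by one io_service; the accepted sockets are simply dropped here to keep the example short.

    #include <boost/asio.hpp>
    #include <iostream>
    #include <memory>

    using boost::asio::ip::tcp;

    void start_accept(boost::asio::io_service& io, tcp::acceptor& acc) {
        auto sock = std::make_shared<tcp::socket>(io);
        acc.async_accept(*sock, [&io, &acc, sock](const boost::system::error_code& ec) {
            if (!ec)
                std::cout << "accepted on port " << acc.local_endpoint().port() << "\n";
            start_accept(io, acc); // keep accepting on this acceptor
        });
    }

    int main() {
        boost::asio::io_service io;
        tcp::acceptor a(io, tcp::endpoint(tcp::v4(), 8080));
        tcp::acceptor b(io, tcp::endpoint(tcp::v4(), 8081));
        start_accept(io, a);
        start_accept(io, b);
        io.run(); // one event loop services both acceptors without interference
        return 0;
    }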
See also:
Why would we need many acceptors in the Boost.ASIO?
How to design proper release of a boost::asio socket or wrapper thereof

C++ select()/threads or an alternative to handle multiple clients in a server

I am trying to learn C++ (with prior programming knowledge) by creating a server application with multiple clients. The server application will run on a Raspberry Pi/Debian (Raspbian). I thought this would also be a good opportunity to learn about low-level concurrent programming with threads (e.g. POSIX). Then I came across the select() function, which basically allows the use of blocking functions in a single thread to handle multiple clients, which is interesting. Some people here on StackOverflow mentioned that threads cause a lot of overhead, and select() seems to be a nice alternative.
In reality I will have 1-3 clients connected, but I would like to keep my application flexible. As a design, I was thinking about a main thread invoking a data thread (processing stuff non-stop) and a server thread (listening for incoming connections). Since the accept() call is blocking, the latter needs to be a separate thread. If a client connects, I may need a separate thread for each client as well.
In the end, the data thread will write to shared memory and the client threads will read from there and communicate with the clients. Some people were opposed to the use of threads, but in my understanding threads are fine if they are created rarely (and are long-lived) and if there are blocking function calls. For the latter there is the select() function which, used in a loop, allows handling multiple sockets in a single thread.
I think that at least for the data processing and the server's accept() call I will need two separate threads started at the beginning. I may then handle all clients with select() in a single thread, or with separate threads. What would be the correct approach, and are there smarter alternatives?
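For reference, here is a minimal sketch of the select()-in-one-thread approach described above (the port and buffer size are arbitrary and error handling is omitted); one loop watches the listening socket and every connected client.

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>
    #include <algorithm>
    #include <vector>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(5000);
        bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(listener, SOMAXCONN);

        std::vector<int> clients;
        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(listener, &readfds);
            int maxfd = listener;
            for (int fd : clients) { FD_SET(fd, &readfds); maxfd = std::max(maxfd, fd); }

            select(maxfd + 1, &readfds, nullptr, nullptr, nullptr); // blocks until activity

            if (FD_ISSET(listener, &readfds))                 // new connection
                clients.push_back(accept(listener, nullptr, nullptr));

            for (auto it = clients.begin(); it != clients.end(); ) {
                char buf[1024];
                if (FD_ISSET(*it, &readfds)) {
                    ssize_t n = recv(*it, buf, sizeof(buf), 0);
                    if (n <= 0) { close(*it); it = clients.erase(it); continue; }
                    // ... handle n bytes received from this client ...
                }
                ++it;
            }
        }
    }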

Problems implementing a multi-threaded UDP server (threadpool?)

I am writing an audio streamer (client-server) as a project of mine (C/C++), and I decided to make a multi-threaded UDP server for it. The logic behind this is that each client will be handled in its own thread. The problem I'm having is the threads interfering with one another.
The first thing my server does is create a sort of thread pool: it creates 5 threads that are all blocked on a recvfrom() call. However, it seems that most of the time when I connect another device to the server, more than one thread responds, and later on that causes the server to block entirely and stop operating.
It's pretty difficult to debug this as well, so I'm writing here to get some advice on how multi-threaded UDP servers are usually implemented. Should I use a mutex or semaphore in part of the code? If so, where? Any ideas would be extremely helpful.
Take a step back: you say
each client will be handled in his own thread
but UDP isn't connection-oriented. If all clients use the same multicast address, there is no natural way to decide which thread should handle a given packet.
If you're wedded to the idea that each client gets its own thread (which I would generally counsel against, but it may make sense here), you need some way to figure out which client each packet came from.
That means either
using TCP (since you seem to be trying for connection-oriented behaviour anyway)
reading each packet, figuring out which logical client connection it belongs to, and sending it to the right thread. Note that since the routing information is global/shared state, these two are equivalent:
keep a source IP -> thread mapping, protected by a mutex, read & access from all threads
do all the reads in a single thread, use a local source IP -> thread mapping
The first seems to be what you're angling for, but it's poor design. When a packet comes in you'll wake up one thread, then it locks the mutex and does the lookup, and potentially wakes another thread. The thread you want to handle this connection may also be blocked reading, so you need some mechanism to wake it.
The second at least gives a separation of concerns (read/dispatch vs. processing); a sketch of that layout follows the list below.
Sensibly, your design should depend on
number of clients
I/O load
amount of non-I/O processing (or IO:CPU ratio, or ...)
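As an illustration of the second option (a single reader thread plus a local source-address-to-thread mapping), here is a minimal sketch; the names ClientQueue, client_worker and the port are made up for the example, and error handling is omitted.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <condition_variable>
    #include <map>
    #include <memory>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    struct ClientQueue {              // one per logical client, consumed by its own thread
        std::mutex m;
        std::condition_variable cv;
        std::queue<std::vector<char>> packets;
    };

    void client_worker(std::shared_ptr<ClientQueue> q) {
        for (;;) {
            std::unique_lock<std::mutex> lk(q->m);
            q->cv.wait(lk, [&] { return !q->packets.empty(); });
            std::vector<char> pkt = std::move(q->packets.front());
            q->packets.pop();
            lk.unlock();
            // ... process pkt for this client ...
        }
    }

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(7000);
        bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

        // Local source-address -> queue mapping: no mutex around the map itself,
        // because only this (single reader) thread ever touches it.
        std::map<std::string, std::shared_ptr<ClientQueue>> clients;

        char buf[2048];
        for (;;) {
            sockaddr_in from{};
            socklen_t fromlen = sizeof(from);
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                 reinterpret_cast<sockaddr*>(&from), &fromlen);
            if (n <= 0) continue;

            std::string key = std::to_string(from.sin_addr.s_addr) + ":" +
                              std::to_string(from.sin_port);
            auto& q = clients[key];
            if (!q) {                                 // first packet from this client
                q = std::make_shared<ClientQueue>();
                std::thread(client_worker, q).detach();
            }
            {
                std::lock_guard<std::mutex> lk(q->m);
                q->packets.emplace(buf, buf + n);
            }
            q->cv.notify_one();
        }
    }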
The first thing my server does is create a sort of thread pool: it creates 5 threads that are all blocked on a recvfrom() call. However, it seems that most of the time when I connect another device to the server, more than one thread responds, and later on that causes the server to block entirely and stop operating.
Rather than having all your threads sit in recvfrom() on the same socket, you should protect the socket with a semaphore and have your worker threads wait on the semaphore. When a thread acquires the semaphore, it can call recvfrom(), and when that returns with a packet, the thread can release the semaphore (for another thread to acquire) and handle the packet itself. When it's done servicing the packet, it can return to waiting on the semaphore. This way you avoid having to transfer data between threads.
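A minimal sketch of that semaphore-guarded layout, assuming a POSIX semaphore and std::thread (the port and pool size are arbitrary, error handling omitted):

    #include <semaphore.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <thread>

    static int g_sock;        // one UDP socket shared by all worker threads
    static sem_t g_recv_sem;  // grants exclusive access to recvfrom()

    void worker() {
        char buf[2048];
        for (;;) {
            sockaddr_in from{};
            socklen_t fromlen = sizeof(from);
            sem_wait(&g_recv_sem);                    // acquire the right to read
            ssize_t n = recvfrom(g_sock, buf, sizeof(buf), 0,
                                 reinterpret_cast<sockaddr*>(&from), &fromlen);
            sem_post(&g_recv_sem);                    // let another thread read next
            if (n > 0) {
                // ... service this packet entirely on this thread, no hand-off ...
            }
        }
    }

    int main() {
        g_sock = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(7000);
        bind(g_sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

        sem_init(&g_recv_sem, 0, 1);                  // binary semaphore
        std::thread pool[5];
        for (auto& t : pool) t = std::thread(worker);
        for (auto& t : pool) t.join();
    }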
Your recvfrom() should be in the master thread, and when it gets data you should pass the client's IP:port and the data of the UDP packet to the helper threads.
Passing the IP:port and data can be done by spawning a new thread every time the master thread receives a UDP packet, or by handing them to the helper threads through a message queue.
I think your main problem is that UDP is not connection-oriented: it does not keep your connections alive, and each datagram stands on its own. Depending on your application, in the worst case you will have concurrent threads reading whatever data arrives first, i.e. a thread's recvfrom() will unblock even when it is not its turn.
I think the way to go is to use select() in the main thread and, with a concurrent buffer, manage what each thread will do.
In this solution you can have one thread per client, or one thread per file, assuming that you keep the clients' necessary information to make sure you're sending the right part of the file.
TCP is another way to do it, since it keeps a connection alive for every thread you run, but it is not the best transport for applications that can tolerate data loss.

IPC connection with thread on Windows using Mutex

I have a question about Windows IPC. I implemented IPC with a mutex on Windows, but there is a problem when the connection is made from another thread: when that thread terminates, the connection is closed.
The connection thread (A) makes the connection to the server
The main thread (B) uses the connection handle (a global variable) returned by A
A terminates
B cannot refer to the handle any more, because the connection is closed
It is natural that the mutex is released when the process terminates. However, in the case of a thread, I need a way to keep holding the mutex, so that the connection is maintained even though the thread has terminated, as long as the process is alive.
A semaphore could be the alternative on Linux; however, on Windows it cannot be used here because it cannot detect the abnormal disconnection.
Does someone have any idea?
There is no way to prevent the ownership of a mutex from being released when the thread that owns it exits.
There are a number of other ways you might be able to fix the problem, depending on the circumstances.
1) Can you change any of the code on the client? For example, if the client executable is using a DLL that you have provided to establish and maintain the connection, you could change the DLL so that it uses a more appropriate object (such as a named pipe) rather than a mutex, or you could get the DLL to start its own thread to own the mutex.
2) Is there more than one client? Presumably, since you are using a mutex, you are only expecting one client to connect at a time. If you can safely assume that only one client will be connected at a time, then when the server detects that the mutex has been abandoned, it could close its own handle to the mutex. When the client process exits, the mutex will automatically be deleted, so the server could periodically check to see whether it still exists or not. (A sketch of this follows after point 4.)
3) How is the client communicating with the server? The server is presumably doing something useful for the client, so there must be another communications channel as well as the mutex. For example, if the client is opening a named pipe to the server, you could use that connection instead of the mutex to detect when the client process exits. Or, if the communications channel allows you to determine the process ID of the client, you could open a handle to the process and use that to detect when the client process exits.
4) If no other solution will work, and you are forced to rewrite the client as well as the server, consider using a more appropriate form of IPC, such as a named pipe.
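For point 2, a hedged sketch of how the server might react to an abandoned mutex and then poll for the named mutex's existence; the mutex name here is an assumption for illustration.

    #include <windows.h>
    #include <iostream>

    int main() {
        // Assumed name; use whatever name the client actually creates the mutex with.
        const char* kMutexName = "Global\\MyIpcConnectionMutex";

        HANDLE hMutex = OpenMutexA(SYNCHRONIZE, FALSE, kMutexName);
        if (!hMutex) return 1;

        // WAIT_ABANDONED means the owning thread exited without releasing the mutex.
        if (WaitForSingleObject(hMutex, INFINITE) == WAIT_ABANDONED) {
            ReleaseMutex(hMutex);  // we gained ownership on abandonment; give it back
            CloseHandle(hMutex);   // drop our reference so the object can be destroyed

            // The named mutex disappears once the last handle (the client's) closes,
            // i.e. when the client process exits, so poll for its existence.
            for (;;) {
                HANDLE probe = OpenMutexA(SYNCHRONIZE, FALSE, kMutexName);
                if (!probe) { std::cout << "client process has exited\n"; break; }
                CloseHandle(probe);
                Sleep(1000);
            }
        }
        return 0;
    }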
Additional
5) It is common practice to use a process handle to wait for (or test for) process termination. Most often, these handles are the ones generated for the parent when a process is created, but there is no reason not to use a handle generated by OpenProcess. As far as precedent goes, I assure you there is at least as much precedent for using a handle generated by OpenProcess to monitor a client process as there is for using a mutex; it is entirely possible that you are the first person to ever try to use a Windows mutex to detect that a process has exited. :-) (A sketch of this approach appears at the end of this answer.)
6) Presumably the SQLDisconnect() function is calling ReleaseMutex in order to disconnect from the server. Since it is doing so from a thread that doesn't own the mutex, that won't do anything except return an error code, so there's no reasonable way for your server to detect that happening. Does the function also call CloseHandle on the mutex? If so, you could use the approach in (2) to detect when this happens. This would work both for calls to SQLDisconnect() and when the process exits. It shouldn't matter that there are multiple clients, since they are using different mutexes.
6a) I say "no reasonable way" because you could conceivably use hooking to change the behaviour of ReleaseMutex. This would not be a good option.
7) You should examine carefully what the SQLDisconnect() function does apart from calling ReleaseMutex and/or CloseHandle. It is entirely possible that you can detect the disconnection by some means other than the mutex.
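A short sketch of the process-handle approach from point 5; it assumes the client's process ID has already been obtained over the existing communications channel.

    #include <windows.h>

    // clientPid is assumed to have been communicated over the existing channel.
    bool wait_for_client_exit(DWORD clientPid) {
        HANDLE hProcess = OpenProcess(SYNCHRONIZE, FALSE, clientPid);
        if (!hProcess) return false;             // process already gone, or access denied
        WaitForSingleObject(hProcess, INFINITE); // returns once the client process exits
        CloseHandle(hProcess);
        return true;
    }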