So, I've run into this issue where I have many threads calling poll on different sets of file descriptors. When I want to add a new descriptor, I need to interrupt one of those poll calls, add it to that thread's set, and resume. That alone sounds bad, but I also can't even see how to do it.
Some relevant code:
struct pollfd fds[size];
for (int i = 0; i < size; i++) {
    struct pollfd fd;
    fd.fd = body[i];
    fd.events = POLLIN;
    fd.revents = 0;
    fds[i] = fd;
}
if (poll(&fds[0], (nfds_t)size, -1) < 0) return NULL;
(I'm using this through JNI also).
I figure I could set a really low delay on poll, and call it over and over, but I think that would begin to defeat the purpose.
The way you can do it: open a pipe (or a socket pair) and include its read end in every polling set. When there is a new file descriptor to add, another thread writes a byte to the pipe. poll then returns, you check this reserved file descriptor, and if it has data you know there is a new file descriptor to add.
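A minimal sketch of that approach (the classic self-pipe trick); the names wake_pipe, poll_loop, and request_add_fd are mine, pipe(wake_pipe) is assumed to have been called once at startup, and error handling is omitted:

#include <poll.h>
#include <unistd.h>

int wake_pipe[2]; // created once with pipe(wake_pipe); [0] = read end, [1] = write end

void poll_loop(int *body, int size) {
    struct pollfd fds[size + 1];
    fds[0].fd = wake_pipe[0];      // slot 0 is reserved for wake-ups
    fds[0].events = POLLIN;
    for (int i = 0; i < size; i++) {
        fds[i + 1].fd = body[i];
        fds[i + 1].events = POLLIN;
    }
    if (poll(fds, (nfds_t)(size + 1), -1) < 0) return;
    if (fds[0].revents & POLLIN) {
        char c;
        read(wake_pipe[0], &c, 1); // drain the wake-up byte
        // pick up the new descriptor, rebuild fds, and poll again
    }
    // ... handle the real descriptors via fds[1..size].revents ...
}

// Called from another thread to interrupt the poll above:
void request_add_fd(void) {
    char c = 1;
    write(wake_pipe[1], &c, 1);
}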
You can send your process a signal, causing poll() to return -1 with errno set to EINTR. The signal should obviously not terminate the process, so you may need to install a handler with sigaction() or adjust the mask with sigprocmask(). However, any signal received between calling either of those and poll() may get lost, just as with select()/pselect(). For this reason some systems provide additional, non-standard replacements for poll(), like ppoll(), which take a sigset_t so the signal mask is changed atomically.
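If ppoll() is available (it is Linux-specific), the lost-wakeup race can be closed like this; a sketch, assuming SIGUSR1 was chosen as the wake-up signal and is blocked during normal operation:

#define _GNU_SOURCE
#include <poll.h>
#include <signal.h>
#include <string.h>

static void on_wakeup(int sig) { (void)sig; } // no-op: we only want EINTR

int wait_interruptibly(struct pollfd *fds, nfds_t n) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_wakeup;
    sigaction(SIGUSR1, &sa, NULL);

    sigset_t during_poll;
    sigprocmask(SIG_SETMASK, NULL, &during_poll); // start from the current mask
    sigdelset(&during_poll, SIGUSR1);             // unblocked only while inside ppoll

    // Returns -1 with errno == EINTR when SIGUSR1 arrives, with no race window.
    return ppoll(fds, n, NULL, &during_poll);
}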
I have the following set of code specific to Windows:
//1: Declaring HANDLE
HANDLE *m_handle;

//2: Creating HANDLE instance
int m_Count = 4;
m_handle = new HANDLE[m_Count];

//3: Creating Events
for (int i = 0; i < m_Count; i++)
{
    m_handle[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
}

//4: Synchronous API
DWORD dwEvent = WaitForMultipleObjects(m_Count, m_handle, TRUE, 30000);

//5: Closing the HANDLE
for (int i = 0; i < m_Count; i++)
{
    CloseHandle(m_handle[i]);
}
How do I write the equivalent code on Linux?
The replacement for CreateEvent is eventfd; you probably want the EFD_CLOEXEC and EFD_NONBLOCK flags. Don’t use the semaphore flag unless you know what you’re doing.
The replacement for WaitForMultipleObjects is poll; specify the POLLIN flag in the requested events. Just keep in mind that the event is not reset by poll: it will stay signalled. Read 8 bytes from the eventfd handle to reset it. The functionality is equivalent to manual-reset events on Windows.
To signal an event, call write on the eventfd handle, passing the address of a local uint64_t variable with value 1.
To destroy events once you no longer need them, just call close.
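For concreteness, here is a rough sketch of that mapping for the Windows snippet above (Linux, error handling omitted; the 30-second timeout mirrors the original):

#include <sys/eventfd.h>
#include <poll.h>
#include <unistd.h>
#include <cstdint>

int main() {
    const int m_Count = 4;
    int handles[m_Count];

    // CreateEvent equivalent
    for (int i = 0; i < m_Count; i++)
        handles[i] = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);

    // SetEvent equivalent (normally done from another thread/process)
    uint64_t one = 1;
    write(handles[0], &one, sizeof one);

    // WaitForMultipleObjects(bWaitAll=FALSE) equivalent
    pollfd fds[m_Count];
    for (int i = 0; i < m_Count; i++) {
        fds[i].fd = handles[i];
        fds[i].events = POLLIN;
    }
    int ready = poll(fds, m_Count, 30000); // 0 means the 30 s timeout expired

    // Manual reset: read the 8-byte counter from each signalled event
    for (int i = 0; i < m_Count && ready > 0; i++) {
        if (fds[i].revents & POLLIN) {
            uint64_t value;
            read(handles[i], &value, sizeof value);
        }
    }

    // CloseHandle equivalent
    for (int i = 0; i < m_Count; i++)
        close(handles[i]);
}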
Update: I’ve just noticed you’re passing bWaitAll=TRUE to WaitForMultipleObjects.
Unfortunately, Linux poll can’t quite do that. It returns when the timeout expires, or when at least 1 handle becomes signaled, whichever happens first.
Still, the workaround is not too hard: emulate bWaitAll by calling poll in a loop until all of the events have been signaled, as sketched below. There is no need to rebuild the array of handles; set the file descriptor to a negative value for the entries that became signaled after poll returned, and poll will ignore them. Note that several may become signaled at once; poll's return value tells you how many did. Also don't forget to decrease the timeout by the elapsed time on each iteration.
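A sketch of that loop (it assumes pollfd entries prepared as above; subtracting the elapsed time from the timeout is noted but left out for brevity):

// Wait until ALL entries are signalled or timeout_ms expires; returns true
// if every event fired. Fired entries get fd = -1, which poll() ignores.
bool wait_all(pollfd* fds, int count, int timeout_ms) {
    int remaining = count;
    while (remaining > 0) {
        int ready = poll(fds, count, timeout_ms);
        if (ready <= 0)
            return false; // 0 = timeout, -1 = error
        for (int i = 0; i < count; i++) {
            if (fds[i].fd >= 0 && (fds[i].revents & POLLIN)) {
                fds[i].fd = -1; // exclude this entry from further polls
                remaining--;
            }
        }
        // A real version should also shrink timeout_ms by the elapsed time,
        // e.g. measured with clock_gettime(CLOCK_MONOTONIC) around poll().
    }
    return true;
}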
In my server code I want to use pselect to wait for clients to connect, as well as to monitor the standard output of the processes that I create and send it to the client (like a simplified remote shell).
I tried to find examples of how to use pselect but haven't found any. The socket where the client can connect is already set up and works, as I verified with accept(). SIGTERM is blocked.
Here is the code where I try to use pselect:
void waitClient()
{
    fd_set readers;
    fd_set writers;
    fd_set exceptions;
    struct timespec ts;
    int pret;

    // Loop until we get a SIGTERM to shut down
    while (getSigTERM() == false)
    {
        FD_ZERO(&readers);
        FD_ZERO(&writers);
        FD_ZERO(&exceptions);
        FD_SET(fileno(stdin), &readers);
        FD_SET(fileno(stdout), &writers);
        FD_SET(fileno(stderr), &writers);
        FD_SET(getServerSocket()->getSocketId(), &readers);
        //FD_SET(getServerSocket()->getSocketId(), &writers);
        memset(&ts, 0, sizeof(struct timespec));
        pret = pselect(FD_SETSIZE, &readers, &writers, &exceptions, &ts, &mSignalMask);
        // Here pselect always returns with 2. What does this mean?
        cout << "pselect returned..." << pret << endl;
        cout.flush();
    }
}
So what I want to know is how to wait with pselect until an event arrives, because currently pselect always returns immediately with the value 2. I tried setting the timeout to NULL but that doesn't change anything.
Is the return value of pselect (if positive) the file descriptor that caused the event?
I'm using fork() to create new processes (not implemented yet), and I know that I have to wait() on them. Can I wait on them here as well? I suppose I need to catch the signal SIGCHLD, so how would I use that? wait() on the child would also block, or can I just do a peek and then continue with pselect? Otherwise I'd have two concurrent blocking waits.
It returns immediately because the file descriptors in the writers set are ready. The standard output streams will almost always be ready for writing.
And if you check the select manual page, you will see that the return value is -1 on error, 0 on timeout, or a positive number telling you how many file descriptors are ready (not which ones).
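To make your loop block properly, drop the always-ready writers set and pass a NULL timeout. A sketch reusing the names from your code:

fd_set readers;
FD_ZERO(&readers);
FD_SET(fileno(stdin), &readers);
int server_fd = getServerSocket()->getSocketId();
FD_SET(server_fd, &readers);

// NULL timeout: block until a descriptor is ready or a signal arrives.
// server_fd + 1 is sufficient here because stdin is descriptor 0.
int pret = pselect(server_fd + 1, &readers, NULL, NULL, NULL, &mSignalMask);
if (pret > 0 && FD_ISSET(server_fd, &readers))
{
    // accept() the new client here
}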
I want to create a multi-threaded socket server using C++11 and the standard Linux C libraries.
The easiest way of doing this would be to open a new thread for each incoming connection, but there must be another way, because Apache doesn't do this. As far as I know, Apache handles more than one connection per thread. How do you realise such a system?
I thought of having one thread always listening for new clients and assigning each new client to a thread. But if all threads are currently executing a select() with an infinite timeout, and none of the already-assigned clients is doing anything, it could take a while before the new client becomes usable.
So the select() needs a timeout. Setting the timeout to 0.5 ms would be nice, but I guess the workload could rise too much, couldn't it?
Can someone tell me how you would realise such a system, handling more than one client per thread?
PS: I hope my English is good enough for you to understand what I mean ;)
The standard method to multiplex multiple requests onto a single thread is to use the Reactor pattern. A central object (typically called a SelectServer, SocketServer, or IOService), monitors all the sockets from running requests and issues callbacks when the sockets are ready to continue reading or writing.
As others have stated, rolling your own is probably a bad idea. Handling timeouts, errors, and cross-platform compatibility (e.g. epoll for Linux, kqueue for BSD, IOCP for Windows) is tricky. Use boost::asio or libevent for production systems.
Here is a skeleton SelectServer (compiles but not tested) to give you an idea:
#include <sys/select.h>

#include <functional>
#include <map>

class SelectServer {
 public:
  enum ReadyType {
    READABLE = 0,
    WRITABLE = 1
  };

  void CallWhenReady(ReadyType type, int fd, std::function<void()> closure) {
    SocketHolder holder;
    holder.fd = fd;
    holder.type = type;
    holder.closure = closure;
    socket_map_[fd] = holder;
  }

  void Run() {
    fd_set read_fds;
    fd_set write_fds;
    while (1) {
      if (socket_map_.empty()) break;

      int max_fd = -1;
      FD_ZERO(&read_fds);
      FD_ZERO(&write_fds);
      for (const auto& pr : socket_map_) {
        if (pr.second.type == READABLE) {
          FD_SET(pr.second.fd, &read_fds);
        } else {
          FD_SET(pr.second.fd, &write_fds);
        }
        if (pr.second.fd > max_fd) max_fd = pr.second.fd;
      }

      int ret_val = select(max_fd + 1, &read_fds, &write_fds, 0, 0);
      if (ret_val <= 0) {
        // TODO: Handle error.
        break;
      } else {
        for (auto it = socket_map_.begin(); it != socket_map_.end(); ) {
          if (FD_ISSET(it->first, &read_fds) ||
              FD_ISSET(it->first, &write_fds)) {
            it->second.closure();
            socket_map_.erase(it++);
          } else {
            ++it;
          }
        }
      }
    }
  }

 private:
  struct SocketHolder {
    int fd;
    ReadyType type;
    std::function<void()> closure;
  };

  std::map<int, SocketHolder> socket_map_;
};
First off, have a look at using poll() instead of select(): it works better when you have a large number of file descriptors used from different threads.
To get threads currently waiting in I/O out of their wait, I'm aware of two methods:
You can send a suitable signal to the thread using pthread_kill(). The call to poll() fails and errno is set to EINTR.
Some systems allow a file descriptor to be obtained from a thread control device. poll()ing the corresponding file descriptor for input succeeds when the thread control device is signalled. See, e.g., Can we obtain a file descriptor for a semaphore or condition variable?.
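A sketch of the first method; poll_thread, wake_poller, the shared pollfd set, and the no-op SIGUSR1 handler (installed as in an earlier answer) are all assumptions:

#include <errno.h>
#include <poll.h>
#include <pthread.h>
#include <signal.h>

// Polling thread: blocks in poll() until real I/O or a deliberate wake-up.
void *poll_thread(void *arg) {
    struct pollfd *fds = arg; // hypothetical shared descriptor set
    for (;;) {
        int r = poll(fds, 1, -1);
        if (r < 0 && errno == EINTR)
            continue; // woken by pthread_kill: reload the fd set here
        if (r > 0)
            break;    // a real I/O event
    }
    return NULL;
}

// Controlling thread: wake the poller after changing the descriptor set.
void wake_poller(pthread_t polling_thread) {
    pthread_kill(polling_thread, SIGUSR1);
}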
This is not a trivial task.
In order to achieve that, you need to maintain a list of all opened sockets (the server socket and the sockets to current clients). You then use the select() function to which you can give a list of sockets (file descriptors). With correct parameters, select() will wait until any event happen on one of the sockets.
You then must find the socket(s) which caused select() to exit and process the event(s). For the server socket, it can be a new client. For client sockets, it can be requests, termination notification, etc.
Regarding what you say in your question, I think you are not understanding the select() API very well. It is OK to have concurrent select() calls in different threads, as long as they are not waiting on the same sockets. Then if the clients are not doing anything, it doesn't prevent the server select() from working and accepting new clients.
You only need to give select() a timeout if you want to be able to do things even when clients are not doing anything. For example, you may have a timer to send periodic info to the clients. You then give select a timeout corresponding to your first timer to expire, and process the expired timer when select() returns (along with any other concurrent events).
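For example, a sketch of deriving that timeout from the next deadline (next_expiry_ms and now_ms are hypothetical millisecond clock values):

#include <sys/time.h>

// Build a select() timeout that expires when the next timer is due.
// A select() return value of 0 then means "the timer fired".
struct timeval timeout_until(long next_expiry_ms, long now_ms) {
    long wait_ms = next_expiry_ms - now_ms;
    if (wait_ms < 0) wait_ms = 0; // timer already due: just poll once
    struct timeval tv;
    tv.tv_sec  = wait_ms / 1000;
    tv.tv_usec = (wait_ms % 1000) * 1000;
    return tv;
}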
I suggest you have a long read of the select manpage.
I have a program that maintains a list of "streaming" sockets. These sockets are configured to be non-blocking sockets.
Currently, I use a list to store these streaming sockets. I have some data that I need to send to all of them, so I use an iterator to loop through the list and call the send_TCP_NB function below.
The issue is that my program's own buffer, which stores the data before it is passed to send_TCP_NB, is slowly running out of free space, indicating that sending is slower than the rate at which data is put into the buffer. Data arrives at about 1000 items per second, and each item is quite small, about 100 bytes.
Hence, I am not sure whether my send_TCP_NB function is working efficiently or correctly:
int send_TCP_NB(int cs, char data[], int data_length) {
    bool sent = false;
    FD_ZERO(&write_flags);    // initialize the writer socket set
    FD_SET(cs, &write_flags); // set the write notification for the socket based on the current state of the buffer
    int status;
    int err;
    struct timeval waitd;     // set the time limit for waiting
    waitd.tv_sec = 0;
    waitd.tv_usec = 1000;
    err = select(cs + 1, NULL, &write_flags, NULL, &waitd);
    if (err == 0)
    {
        // time limit expired
        printf("Time limit expired!\n");
        return 0; // send failed
    }
    else
    {
        while (!sent)
        {
            if (FD_ISSET(cs, &write_flags))
            {
                FD_CLR(cs, &write_flags);
                status = send(cs, data, data_length, 0);
                sent = true;
            }
        }
        int nError = WSAGetLastError();
        if (nError != WSAEWOULDBLOCK && nError != 0)
        {
            printf("Error sending non blocking data\n");
            return 0;
        }
        else
        {
            if (nError == WSAEWOULDBLOCK)
            {
                printf("%d\n", nError);
            }
            return 1;
        }
    }
}
One thing that would help is if you thought out exactly what this function is supposed to do. What it actually does is probably not what you wanted, and has some bad features.
The major features of what it does that I've noticed are:
1. Modify some global state
2. Wait (up to 1 millisecond) for the write buffer to have some empty space
3. Abort if the buffer is still full
4. Send 1 or more bytes on the socket (ignoring how much was sent)
5. If there was an error (including the case where send decided it would have blocked despite the earlier check), obtain its value; otherwise, obtain a leftover, effectively random error value
6. Possibly print something to the screen, depending on the value obtained
7. Return 0 or 1, depending on the error value
Comments on these points:
1. Why is write_flags global?
2. Did you really intend to block in this function?
3. This is probably fine
4. Surely you care how much of the data was sent?
5. I do not see anything in the documentation suggesting that this will be zero if send succeeds
If you cleared up what the actual intent of this function was, it would probably be much easier to ensure that this function actually fulfills that intent.
That said, regarding "I have some data that I need to send to all these streaming sockets":
What precisely is your need?
If your need is that the data must be sent before proceeding, then using a non-blocking write is inappropriate*, since you're going to have to wait until you can write the data anyways.
If your need is that the data must be sent sometime in the future, then your solution is missing a very critical piece: you need to create a buffer for each socket which holds the data that needs to be sent, and then you periodically need to invoke a function that checks the sockets to try writing whatever it can. If you spawn a new thread for this latter purpose, this is the sort of thing select is very useful for, since you can make that new thread block until it is able to write something. However, if you don't spawn a new thread and just periodically invoke a function from the main thread to check, then you don't need to bother. (just write what you can to everything, even if it's zero bytes)
*: At least, it is a very premature optimization. There are some edge cases where you could get slightly more performance by using the non-blocking writes intelligently, but if you don't understand what those edge cases are and how the non-blocking writes would help, then guessing at it is unlikely to get good results.
EDIT: as another answer implied, this is something the operating system is good at anyways. Rather than try to write your own code to manage this, if you find your socket buffers filling up, then make the system buffers larger. And if they're still filling up, you should really give serious thought to the idea that your program needs to block anyways, so that it stops sending data faster than the other end can handle it. i.e. just use ordinary blocking sends for all of your data.
Some general advice:
Keep in mind you are multiplying data: if you get 1 MB/s in, you send out N MB/s with N clients. Are you sure your network card can take it? It gets worse with smaller packets, since the per-packet overhead grows. You may want to consider broadcasting.
You are using non-blocking sockets, but you busy-wait while they are not ready. If you want to be non-blocking, it would be better to discard the packet immediately when the socket is not ready.
What would be better still is to select() on more than one socket at once: do everything you are doing now, but for all the sockets. Write to each "ready" socket, then repeat while there are sockets that were not ready. This way you proceed with the available sockets first, and with some luck the busy sockets will have become available in the meantime.
The while (!sent) loop is useless and probably buggy: since you are checking only one socket, FD_ISSET will always be true, and it is wrong to test FD_ISSET again after an FD_CLR.
Keep in mind that your OS has internal buffers for the sockets and that there are ways to enlarge them (not easy on Linux, though; to get large values you need to do some configuration as root).
There are some socket libraries that will probably work better than what you can implement in a reasonable time (boost::asio and zmq for the ones I know).
If you need to implement it yourself, (i.e. because for instance zmq has its own packet format), consider using a threadpool library.
EDIT:
Sleeping 1 millisecond is probably a bad idea. Your thread will probably be descheduled, and it may take much longer than that before it gets CPU time again.
This is just a horrible way to do things. The select serves no purpose but to waste time. If the send is non-blocking, it can mangle data on a partial send. If it's blocking, you still waste arbitrarily much time waiting for one receiver.
You need to pick a sensible I/O strategy. Here is one: Set all sockets non-blocking. When you need to send data to a socket, just call write. If all the data writes, lovely. If not, save the portion of data that wasn't sent for later and add the socket to your write set. When you have nothing else to do, call select. If you get a hit on any socket in your write set, write as many bytes as you can from what you saved. If you write all of them, remove that socket from the write set.
(If you need to write to a socket that's already in your write set, just append the data to the saved data to be sent. You may need to close the connection if too much data gets buffered.)
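A sketch of the bookkeeping this strategy needs (POSIX sockets; pending, send_or_queue, and on_writable are names I made up, and std::string serves as a simple byte buffer):

#include <sys/socket.h>
#include <map>
#include <string>

std::map<int, std::string> pending; // per-socket unsent bytes

// Try to send immediately; stash whatever doesn't fit.
void send_or_queue(int fd, const char* data, size_t len) {
    ssize_t n = send(fd, data, len, 0);
    if (n < 0) n = 0; // EWOULDBLOCK and friends: queue everything
    if ((size_t)n < len)
        pending[fd].append(data + n, len - n); // fd now belongs in the write set
}

// Called when select() reports fd writable: flush what we can.
void on_writable(int fd) {
    std::string& buf = pending[fd];
    ssize_t n = send(fd, buf.data(), buf.size(), 0);
    if (n > 0) buf.erase(0, (size_t)n);
    if (buf.empty())
        pending.erase(fd); // also remove fd from the write set
}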
A better idea might be to use a library that already does all these things. Boost::asio is a good one.
You are calling select() before calling send(). Do it the other way around: call select() only if send() reports WSAEWOULDBLOCK, e.g.:
int send_TCP_NB(int cs, char data[], int data_length)
{
    int status;
    int err;
    struct timeval waitd;
    char *data_ptr = data;

    while (data_length > 0)
    {
        status = send(cs, data_ptr, data_length, 0);
        if (status > 0)
        {
            data_ptr += status;
            data_length -= status;
            continue;
        }

        err = WSAGetLastError();
        if (err != WSAEWOULDBLOCK)
        {
            printf("Error sending non blocking data\n");
            return 0; // send failed
        }

        FD_ZERO(&write_flags);
        FD_SET(cs, &write_flags); // wait until the socket can accept more data

        waitd.tv_sec = 0;
        waitd.tv_usec = 1000;

        status = select(cs + 1, NULL, &write_flags, NULL, &waitd);
        if (status > 0)
            continue;

        if (status == 0)
            printf("Time limit expired!\n");
        else
            printf("Error waiting for time limit!\n");
        return 0; // send failed
    }
    return 1;
}
How can I create a global counter value that can be shared between multiple processes in C++? What I need is a way to "invalidate" multiple processes at once, signaling them to perform some operation (like reading from a file). All processes would continuously poll (every 10 ms) for the current counter value and compare it with the internally stored last value. Mismatching values would indicate that some work is needed.
Edit: BTW, my processes run as different .exe files, not created from some parent process. The operating system is Windows.
What about a named semaphore? POSIX supports them; not sure about Windows.
Consider the way you want to distribute the information, and watch out for potential overlaps: if it takes longer for any of the readers to finish reading than it takes for a refresh, you are going to get in trouble with the suggested approach.
The way I read your question, there are multiple readers, and the writer doesn't know (or for the most part care) how many readers there are at any one time, but wants to notify them that something new is available to read.
Without knowing how many potential readers there are, you can't use a simple mutex or semaphore to know when the readers are done, and without knowing when everybody is done you have no good way to decide when to reset an event to signal the next read.
MS Windows specific:
Shared Segments
One option is to place the variables within a shared data segment. That means that the same variables can be read (and written) by every exe that names the same segment, or, if you put them into a DLL, by everyone loading that shared DLL.
See http://www.codeproject.com/KB/DLL/data_seg_share.aspx for more info.
// Note: Be very wary of using anything other than primitive types here!
#pragma data_seg(".mysegmentname")
HWND hWnd = NULL;
LONG nVersion = -1;
#pragma data_seg()
#pragma comment(linker, "/section:.mysegmentname,rws")
IPC - COM
Make your main app a COM service with which the workers can register for events, and push the change out to each event sink.
IPC - dual events
Assuming any one read cycle takes much less time than the interval between write events:
Create 2 manual-reset events; at any time, at most 1 of them will be signaled. Alternate between the events: signaling immediately releases all the readers, and once they are done they wait on the alternate event.
You can do this the easy way or the hard way.
The easy way is to store the shared values in the registry or a file, with all processes agreeing to check it frequently.
The hard way is to use IPC (inter-process communication); the most common method that I use is named pipes. It's not too hard, because you can find plenty of resources about IPC on the net.
If you are on *nix you could make the processes read from a named pipe (or sockets), and then write a specific msg there to tell the other processes that they should shut down.
IPC performance: Named Pipe vs Socket
Windows Named Pipes alternative in Linux
Use a named event object with manual reset. The following solution uses far less CPU than busy waiting:
Sending process:
Set event
Sleep 10 ms
Reset Event
Receiving processes:
All waiting processes are released when the event is set
They read the file
Let them sleep for 20 ms, so they can't see the same event twice
Wait again
Sleep(10) might actually take longer than Sleep(20), but this only results in another cycle (reading the unchanged file again).
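A sketch of that sequence with the Win32 API (the event name Global\MyCounterChanged is made up; error handling omitted):

#include <windows.h>

// Sender: pulse the event long enough for all waiters to pass.
void notify_readers() {
    HANDLE ev = CreateEventA(NULL, TRUE /*manual reset*/, FALSE,
                             "Global\\MyCounterChanged");
    SetEvent(ev);  // releases every process currently waiting
    Sleep(10);     // give the waiters time to wake up
    ResetEvent(ev);
    CloseHandle(ev);
}

// Receiver: wait, do the work, then sleep past the pulse window.
void reader_loop() {
    HANDLE ev = CreateEventA(NULL, TRUE, FALSE, "Global\\MyCounterChanged");
    for (;;) {
        WaitForSingleObject(ev, INFINITE);
        // ... re-read the file here ...
        Sleep(20); // longer than the pulse, so we can't see it twice
    }
}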
As the name of the executable is known, here is another solution, which I implemented (in C#) in a project just a few days ago:
Every reader process creates a named event "Global\someuniquestring_%u", with %u being its process ID. If the event is signaled, it reads the file and does the work.
The sender process keeps a list of event handles and sets them active if the file has changed, thus notifying all reader processes. From time to time, e.g. when the file has changed, it has to update the list of event handles:
Get all processes with name 'reader.exe' (e.g.)
For every process, get its ID
Open a handle for the existing event "Global\someuniquestring_%u" if it's a new process.
Close all handles for no longer running processes.
I found a solution that monitors folder changes (via an "event_trigger" event) and reads additional event information from a file:
HANDLE event_trigger;
__int64 event_last_time;
vector<string> event_info_args;
string event_info_file = "event_info.ini";

// On init
event_trigger = FindFirstChangeNotification(".", false, FILE_NOTIFY_CHANGE_LAST_WRITE);
event_last_time = stat_mtime_force("event_info.ini");

// On tick
if (WaitForSingleObject(event_trigger, 0) == 0)
{
    ResetEventTrigger(event_trigger);
    if (stat_mtime_changed("event_info.ini", event_last_time))
    {
        FILE* file = fopen_force("event_info.ini");
        char buf[4096];
        assert(fgets(buf, sizeof(buf), file));
        split(buf, event_info_args, "\t\r\n"); // split() is a custom tokenizer
        fclose(file);

        // Process event_info_args here...
        HWND wnd = ...;
        InvalidateRect(wnd, 0, false);
    }
}

// On event invocation
FILE* file = fopen("event_info.ini", "wt");
assert(file);
fprintf(file, "%s\t%s\t%d\n",
        "par1", "par2", 1234);
fclose(file);
stat_mtime_changed("event_info.ini", event_last_time);

// Helper functions:
void ResetEventTrigger(HANDLE evt)
{
    do
    {
        FindNextChangeNotification(evt);
    }
    while (WaitForSingleObject(evt, 0) == 0);
}

FILE* fopen_force(const char* file)
{
    FILE* f = fopen(file, "rt");
    while (!f)
    {
        Sleep(10 + (rand() % 100));
        f = fopen(file, "rt");
    }
    assert(f);
    return f;
}

__int64 stat_mtime_force(const char* file)
{
    struct stat stats;
    int res = stat(file, &stats);
    if (res != 0)
    {
        // Create the file if it doesn't exist yet
        FILE* f = fopen(file, "wt");
        fclose(f);
        res = stat(file, &stats);
    }
    assert(res == 0);
    return stats.st_mtime;
}

bool stat_mtime_changed(const char* file, __int64& time)
{
    __int64 newTime = stat_mtime_force(file);
    if (newTime - time > 0)
    {
        time = newTime;
        return true;
    }
    return false;
}