This question already has answers here: How to set a timeout on blocking sockets in boost asio? (10 answers); Cancel async_read due to timeout (1 answer).
I'm building a TCP client based on the Boost.ASIO library. I use read_some() to read the response from the server. I want to implement timeout logic that sends a "ping" command if there has been no communication for 10 seconds. The problem is that
l = _socket->read_some(boost::asio::buffer(this->reply, sizeof(this->reply)), error);
seems to block program execution when there is no data to transfer into the read buffer. Is there any way out of this? I didn't want to use async_read_some(), because I need this thread to sleep when no data has been transferred into the buffer; that was easy with read_some(), since it returns the size of the data transferred.
The main thing is that even during a time-out I don't want to close the connection, but rather check whether the server responds to a ping command; if it doesn't, I would move on to re-connection. So this is more or less checking whether the server is still connected when no data has been transmitted over a period of time.
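One way out, sketched below (my own, not from the linked duplicates; it assumes Boost 1.66+ for io_context::run_for(), and the helper name read_with_timeout is hypothetical), is to issue async_read_some() and pump the io_context with a deadline, cancelling the pending read if nothing arrives:

#include <boost/asio.hpp>
#include <chrono>
#include <cstddef>

std::size_t read_with_timeout(boost::asio::io_context& ctx,
                              boost::asio::ip::tcp::socket& socket,
                              boost::asio::mutable_buffer buffer,
                              std::chrono::seconds timeout,
                              boost::system::error_code& ec)
{
    std::size_t n = 0;
    bool done = false;

    socket.async_read_some(buffer,
        [&](const boost::system::error_code& e, std::size_t bytes) {
            ec = e;
            n = bytes;
            done = true;
        });

    ctx.restart();
    ctx.run_for(timeout);      // returns early if the read completes first

    if (!done) {
        socket.cancel();       // abort the pending read; the socket stays open
        ctx.run();             // let the aborted handler finish
        ec = boost::asio::error::timed_out;
    }
    return n;
}

If the helper returns with ec == boost::asio::error::timed_out, the connection is still open, so the caller can send its "ping" and call the helper again; only if the ping also goes unanswered does it need to reconnect.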
This question already has answers here: Linux C/C++ socket send in multi-thread code (2 answers); Can I call socket send() from different threads? (1 answer).
I have a Linux TCP server using the epoll mechanism. Multiple threads are running which may send data to any given connected socket/client at any given time. My question arises from the scenario of two threads sending messages to the same socket at the same time via ::send().
Would the resultant messages be sent sequentially or mixed/merged?
The documentation for send() in the Linux manual pages only mentions the occurrence of errors if the message is too long to be sent atomically. There is no confirmation that sending is atomic. However, if it were, this scenario would not be an issue, because the messages would be sent sequentially, right?
Example
Two messages could be:
"HelloThere"
"Ciao"
Sequential Expectation
Received: "HelloThereCiao"
Merged Expectation
Received: "HelloCiaoThere" or some other interleaving of the two.
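For reference, a common way to guarantee the sequential outcome regardless of what the kernel promises (a sketch of my own, not from the linked answers) is to serialize complete messages with a per-socket mutex:

#include <cstddef>
#include <mutex>
#include <string>
#include <sys/socket.h>
#include <sys/types.h>

std::mutex send_mutex;  // in practice, one mutex per connected socket

// Send the whole message under the lock; ::send() may write fewer bytes
// than requested, so loop until the message is fully handed to the kernel.
bool send_message(int fd, const std::string& msg)
{
    std::lock_guard<std::mutex> lock(send_mutex);
    std::size_t sent = 0;
    while (sent < msg.size()) {
        ssize_t n = ::send(fd, msg.data() + sent, msg.size() - sent, MSG_NOSIGNAL);
        if (n < 0)
            return false;  // caller inspects errno
        sent += static_cast<std::size_t>(n);
    }
    return true;
}

This sidesteps the atomicity question entirely: even if one send() call writes only part of a message, no other thread can interleave its bytes until the full message has been queued.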
This question already has answers here: Howto detect that a network cable has been unplugged in a TCP connection? (3 answers).
I have a thread that blocks on a select()-based SSL_read(). The main thread writes whenever needed using SSL_write(). During unit testing, I found a problem:
The client TCP socket connect()s to the server over SSL (TLS)
After some time, disable the internet connection
Write some data on the client TCP/SSL socket
Even without internet, SSL_write() returns the correct number of bytes written, instead of 0 or some error. Such internet disconnections are arbitrary in duration, neither very long nor very short.
My expectation is that whenever there is an internet disconnection, the socket should generate some detectable event, upon which I would tear down the socket and the SSL connection.
If I have to build some hand-made client-server protocol on top, that's possible.
What is the best way to achieve such disconnection detection?
I'm expecting a solution with few CPU cycles and little client-server communication. I am sure this is not a very unusual problem, and hence it must have been solved before.
[P.S.: The client sockets are opened on mobile platforms like Android & iOS, but I would like to know a general solution.]
This is a general problem with sockets: you can't get certified delivery from them. If you want to know whether the counterparty is indeed reachable, you have to implement some sort of heartbeat yourself.
Generally, a successful write to the socket is no indication of the availability of the recipient on the other end.
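As one concrete low-overhead option (my addition, not part of the original answer), TCP keepalive can be enabled on the underlying fd on Linux; it costs almost no CPU and no application-level traffic while the link is healthy, and the SSL layer sits on top of the same fd, so the probes work unchanged:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

bool enable_keepalive(int fd)
{
    int on = 1;
    int idle = 10;      // seconds of silence before the first probe
    int interval = 5;   // seconds between probes
    int count = 3;      // unanswered probes before the kernel declares the peer dead

    return setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) == 0
        && setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) == 0
        && setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) == 0
        && setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count)) == 0;
}

Once the probes go unanswered, select() reports the socket readable and the next SSL_read()/recv() fails (typically with ETIMEDOUT), which is exactly the detectable event the question asks for. An application-level heartbeat remains more portable, though, since mobile platforms differ in their keepalive behavior.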
Closed. This question needs to be more focused. It is not currently accepting answers.
All of the networking applications I have developed have used the blocking socket model. I'm about to start a new project, and it requires that the user be able to send requests to a connected server and receive responses on the same socket in parallel, without data races.
And might I add, this is a multithreaded application as well (x clients to 1 server), so I want to be able to send a request to the server without having to wait for the previous recv/send to finish, while at the same time being able to receive a response on the same socket. I hope this makes sense.
The last resort I have is to use the HTTP model of connect > request/serve > close for each request.
PS: I'm not looking for code
The way I do it is to have an I/O thread that is the only thread allowed to read from or write to the socket. That thread keeps a FIFO queue of outgoing request messages; it writes data from the head of that queue to the socket whenever select() reports the socket as ready-for-write, and it reads from the socket whenever select() reports it as ready-for-read.
Other threads can add a message to the tail of the I/O thread's outgoing-requests queue at any time. Note that you'll need to synchronize these additions with the I/O thread via a mutex or something similar, and if the outgoing-requests queue was empty before the new request was added, you'll also need a mechanism to wake up the I/O thread so it can start sending the new request; writing a byte to a self-connected socket pair whose other end the I/O thread select()'s on will work for that purpose.
For the other direction: when the I/O thread has recv()'d a full message from the socket, it needs to deliver the received message to the appropriate worker thread; that too needs to be done via some thread-safe mechanism, but the implementation will depend on how the receiving threads are written.
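A minimal sketch of the wake-up mechanism described above (names and structure are mine; error handling omitted):

#include <deque>
#include <mutex>
#include <string>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int wake_fds[2];                  // [0]: I/O thread end, [1]: worker end
std::mutex queue_mutex;
std::deque<std::string> outgoing; // FIFO of pending request messages

// Called by any worker thread.
void enqueue_request(const std::string& msg)
{
    {
        std::lock_guard<std::mutex> lock(queue_mutex);
        outgoing.push_back(msg);
    }
    char b = 1;                   // wake the I/O thread if it is blocked in select()
    (void)write(wake_fds[1], &b, 1);
}

// The I/O thread's loop.
void io_loop(int sock_fd)
{
    socketpair(AF_UNIX, SOCK_STREAM, 0, wake_fds);
    for (;;) {
        fd_set rd, wr;
        FD_ZERO(&rd); FD_ZERO(&wr);
        FD_SET(sock_fd, &rd);
        FD_SET(wake_fds[0], &rd);
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            if (!outgoing.empty())
                FD_SET(sock_fd, &wr); // only ask for writability when there is data
        }
        int maxfd = sock_fd > wake_fds[0] ? sock_fd : wake_fds[0];
        if (select(maxfd + 1, &rd, &wr, nullptr, nullptr) < 0)
            break;
        if (FD_ISSET(wake_fds[0], &rd)) {
            char buf[64];
            (void)read(wake_fds[0], buf, sizeof(buf)); // drain wake-up bytes
        }
        // ... recv() when FD_ISSET(sock_fd, &rd); send() data popped from
        // `outgoing` (under the mutex) when FD_ISSET(sock_fd, &wr) ...
    }
}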
This question already has answers here: Are parallel calls to send/recv on the same socket valid? (3 answers).
Sockets are generally capable of two-way communication, so the same socket can be used both to send and to recv.
If I wanted to send some data (on another thread) while the socket is being read, what would the kernel do? This applies to both sides.
Consider this example: the server is sending you a file, and say it will take a long time (a slow uplink or a very big file). The user gets bored and decides to SIGINT you. You catch it and tell the server to stop sending the file (with some kind of message).
Will you be able to tell the server to stop sending even though you're reading from it? And of course, the same applies to the server side.
Hopefully I've been clear enough.
If I wanted to send some data (on another thread) while the socket is being read, what would the kernel do?
Nothing special... sockets aren't like garden hoses... there's just some metadata added to each packet sent between the machines, so reading and writing happen independently. (One caveat: if one side calls recv() while the socket still has unsent data in the local buffers due to the Nagle algorithm, which bunches data up into sensibly sized packets, the pending send may time out immediately and transmit whatever it can; but any tuning of that is an implementation latency-tuning detail - see the sketch below - and doesn't change the big picture or the way the client and server call the TCP API.)
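For completeness, a sketch (mine, not from the answer) of the usual knob for that latency-tuning detail, disabling Nagle with TCP_NODELAY:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Make small writes go out immediately instead of being coalesced.
void disable_nagle(int fd)
{
    int on = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}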
Consider this example: the server is sending you a file, and say it will take a long time (a slow uplink or a very big file). The user gets bored and decides to SIGINT you. You catch it and tell the server to stop sending the file (with some kind of message). Will you be able to tell the server to stop sending even though you're reading from it? And of course, the same applies to the server side.
The kernel accepts a limited amount of data for sending and a limited amount of received data, after which it forces the sending side to wait until some has been consumed before it can send more. So if you've sent data to a server and then, after a local SIGINT, send an "oh, cancel that" in the same way, the server must read all the already-sent data before it can see the "oh, cancel that". If instead of sending it "in the same way" you turn on the out-of-band (OOB) flag while sending the cancel message, the server can (if it's written to do so) detect that there's OOB data and read it before it has finished reading/processing the other data. It will still need to read and discard whatever in-band data you've already sent, but the flow control / buffering mentioned above means that should be a manageable amount - far less than your file size might be. Throughout all this, whatever you want to recv, or the server sends, is independent of and unaffected by the large client->server send, any OOB data, etc.
There's a discussion and example code from GNU at http://www.gnu.org/software/libc/manual/html_node/Out_002dof_002dBand-Data.html
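For illustration (my own sketch, not from the GNU page), the OOB cancel could look like this; TCP's urgent pointer reliably carries a single OOB byte:

#include <sys/socket.h>

// Client side: flag the cancel byte as urgent so the server can notice it
// ahead of the in-band stream. 'X' is a made-up protocol-specific marker.
void send_cancel(int fd)
{
    char c = 'X';
    (void)send(fd, &c, 1, MSG_OOB);
}

// Server side: when select()/poll() reports an exceptional condition on fd,
// read the OOB byte, then start discarding the remaining in-band file data.
void on_exception(int fd)
{
    char c;
    if (recv(fd, &c, 1, MSG_OOB) == 1 && c == 'X') {
        // stop sending the file / drain and discard the in-band data
    }
}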
Thread 1 can safely write to the socket (with send) whilst thread 2 reads from the socket (with recv). What you need to be careful of is that the threads are synchronized at the point where you close() the socket; otherwise the file descriptor may be reused, and the other thread (if not synchronized) could end up reading from a descriptor now used for something else. One way to achieve this is for the reading thread to shutdown() the socket, which should cause the other end to drop the connection and thus make an in-progress send fail with an error.
This question already has an answer here: Send buffer empty of Socket in Linux? (1 answer).
I want to create a socket server that sends some data to a connecting client and then disconnects it.
I'm using non-blocking sockets, so I don't know how to figure out whether all the data has actually been sent correctly (in short: that there is no more data in my send buffer).
I don't want to keep the connection established longer than necessary, because I can't ensure that the client disconnects on its own.
Currently I'm just shutting down the client using shutdown() and later close(). But testing showed me the client does not always receive all the data before the connection gets closed.
There must be a way to ensure all data got sent before closing the connection on non-blocking sockets too, isn't there? Hope my question is clear enough and you can help me (:
The only way you can know your data has been delivered prior to ending the connection is for the peer to acknowledge it in the application protocol. You can ensure that both ends reach EOS at the same time by shutting down for output at both ends, then reading to EOS at both ends, then closing the socket at both ends.
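A sketch of that sequence on one end (my own; it assumes the socket is in blocking mode at this point, or that each read is driven by select() for a non-blocking socket):

#include <sys/socket.h>
#include <unistd.h>

void graceful_close(int fd)
{
    shutdown(fd, SHUT_WR);        // send FIN: "no more data from me"
    char buf[4096];
    // Read to EOS; a return of 0 means the peer has also shut down its side,
    // which implies it consumed everything sent before our shutdown.
    while (read(fd, buf, sizeof(buf)) > 0)
        ;                         // discard any remaining inbound data
    close(fd);
}

The key design point is that close() is deferred until the peer's EOS has been observed, so the kernel never discards unsent or unacknowledged data by resetting the connection.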
You can send the file size prior to the file data. When closing the socket, just check the received size against it and take the appropriate action: close, or resend the file...