I have a UDP server using the following code:
void initialize()
{
connect(&_udpSocket, SIGNAL(readyRead()), this, SLOT(onUdpDatagram()));
_udpSocket.bind(QHostAddress::Any, 28283);
}
void onUdpDatagram()
{
qDebug() << "udp packet received!";
_udpSocket.write("Hello");
}
Unfortunately when a UDP packet is received, I have the following error in the log:
QIODevice::write: device not open
How can I make the UDP socket writable? I tried creating another socket for the reply that connects to the sender's address and port, but then the reply is no longer sent from port 28283...
Any idea?
For info: I'm using Qt 5.2.1 on MacOS 10.9
UDP is not a connection-based protocol. You don't get a separate socket for each peer, instead there's one socket for all communication on a single port.
Therefore, there's some extra effort needed to reply to an incoming UDP packet. You need to retrieve the sender address from the datagram you received, and send back to that same address. In the sockets API this is done with the recvfrom and sendto functions instead of recv (or read) and send (or write) -- the latter are designed for connected sockets like the ones you use with TCP.
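A rough sketch of that pattern in plain BSD-sockets terms (fd stands for the bound UDP socket; IPv4 only, error handling omitted):
struct sockaddr_in peer;
socklen_t peer_len = sizeof(peer);
char buf[1500];
// recvfrom tells us who sent the datagram...
ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, (struct sockaddr *)&peer, &peer_len);
if (n >= 0)
    // ...and sendto sends the reply back to exactly that address and port.
    sendto(fd, "Hello", 5, 0, (const struct sockaddr *)&peer, peer_len);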
You didn't show the declaration (really, the type) of your _udpSocket variable, so I'm assuming that you are using a QUdpSocket. In that case, it looks like you will want to use the readDatagram and writeDatagram functions, which, like recvfrom and sendto, take additional parameters for the peer address (actually a pair: one for the IP address and one for the port).
Here's what the Qt documentation says about that:
The most common way to use this class is to bind to an address and port using bind(), then call writeDatagram() and readDatagram() to transfer data. If you want to use the standard QIODevice functions read(), readLine(), write(), etc., you must first connect the socket directly to a peer by calling connectToHost().
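Put together with the bind() from the question, a reply slot along those lines might look roughly like this (a sketch only, not tested against your exact setup):
void onUdpDatagram()
{
    while (_udpSocket.hasPendingDatagrams()) {
        QByteArray datagram;
        datagram.resize(int(_udpSocket.pendingDatagramSize()));
        QHostAddress senderAddress;
        quint16 senderPort = 0;
        _udpSocket.readDatagram(datagram.data(), datagram.size(), &senderAddress, &senderPort);
        qDebug() << "udp packet received from" << senderAddress << senderPort;
        // The reply goes back to whoever sent the datagram, and it is sent
        // from the same socket, i.e. from the bound port 28283.
        _udpSocket.writeDatagram("Hello", senderAddress, senderPort);
    }
}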
Coincidentally, this warning was introduced by me in Qt upstream:
QIODevice::write: device not open
Unlike before this warning existed, the message should now make the problem pretty clear: you have forgotten to connect your UDP socket to a host. You cannot expect it to write and/or read if it is not even open and/or connected. See the documentation for details:
If you want to use the standard QIODevice functions read(), readLine(), write(), etc., you must first connect the socket directly to a peer by calling connectToHost().
You have to do something like this somewhere in your code:
_udpSocket.connectToHost(myHostAddress, 28283, QIODevice::ReadWrite, QAbstractSocket::AnyIPProtocol);
The last two parameters can be skipped as they are the defaults. As you can read in the documentation, this call also opens the socket for you, which is required before the QIODevice read and write operations can succeed.
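That is, the shorter equivalent call is simply (myHostAddress being whatever peer address you intend to talk to):
_udpSocket.connectToHost(myHostAddress, 28283);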
That being said, you really should not neglect error checking the way your code currently seems to; it will be difficult to track down issues otherwise.
Also, as icing on the cake, I would encourage you to start using the "new" signal-slot syntax, which is not actually so new anymore, but is much more modern and handier:
void initialize()
{
    // Hook up readyRead once the socket reports that it is connected.
    connect(&_udpSocket, &QUdpSocket::connected, this, [this]() {
        connect(&_udpSocket, &QUdpSocket::readyRead, this, [this]() {
            qDebug() << "udp packet received!";
            // write("Hello") sends strlen("Hello") == 5 bytes.
            if (_udpSocket.write("Hello") != 5)
                qDebug() << "Failed to write:" << _udpSocket.errorString();
        });
    });
    // error is overloaded, so the signal overload has to be selected explicitly.
    connect(&_udpSocket,
            static_cast<void (QAbstractSocket::*)(QAbstractSocket::SocketError)>(&QAbstractSocket::error),
            this, [this]() {
        qDebug() << "Error occurred:" << _udpSocket.errorString();
    });
    _udpSocket.connectToHost(myHostAddress, 28283, QIODevice::ReadWrite, QAbstractSocket::AnyIPProtocol);
}
Related
I have a Qt based TCP client and server making use of the QTcpServer and QTcpSocket classes for communication. The server is compiled with Qt 5.3.1 and the client with Qt 4.8.1. This is done because the client is part of a framework that uses Qt 4.8.1 running on Ubuntu 12.04.
Since the classes I use are available in both Qt versions, I assumed this wouldn't create a problem.
However, my client has a weird issue: it does not receive data from the server! I checked the server side and the data is sent, and I can also see the data packet on the wire using Wireshark. However, in my client code the data never arrives!
I investigated this a bit and it led me to the strange conclusion that this happens only if I use the read method of QTcpSocket! If I use the native POSIX read system call, I am able to read the data correctly! Please see my code below:
qDebug() << "QTcpSocket::bytesAvailable() gives" << m_pSocket->bytesAvailable();
char nData;
qint32 szReceived;
if(sizeof(char) != (szReceived = m_pSocket->read((char*)&nData,sizeof(char))))
{
qDebug() << "Error reading data from QTcpSocket::read()" << m_pSocket->errorString();
}
else
{
qDebug() << "QTcpSocket::read() returned" << szReceived;
}
int nDesc = m_pSocket->socketDescriptor();
if(sizeof(char) != (szReceived = read(nDesc, &nData,sizeof(char))))
{
perror("Error reading data from POSIX read()");
}
else
{
qDebug() << "POSIX read() returned" << szReceived;
}
This produces the following output:
QTcpSocket::bytesAvailable() gives 0
Error reading data from QTcpSocket::read() "Network operation timed out"
POSIX read() returned 1
How is it that the POSIX system call reads the buffered data as expected while the Qt class cannot? Also, I have not set any socket options, so I don't know why it reports that a network operation timed out!
"read" is a blocking call in POSIX, it waits till the data is arrived. while QTcpSocket is non-blocking operation it immediately returns the buffered data. Call waitForReadyRead before doing a read
socket->waitForReadyRead();
if(sizeof(char) != (szReceived = m_pSocket->read((char*)&nData,sizeof(char))))
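A slightly fuller sketch of that pattern with a timeout check (the 5-second value here is arbitrary; pick whatever suits your protocol):
if (!m_pSocket->waitForReadyRead(5000))   // wait up to 5 s for data to arrive
{
    qDebug() << "No data arrived:" << m_pSocket->errorString();
}
else if (sizeof(char) != (szReceived = m_pSocket->read(&nData, sizeof(char))))
{
    qDebug() << "Error reading data from QTcpSocket::read()" << m_pSocket->errorString();
}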
I think this is a misuse of the QTcpSocket concept. QTcpSocket implements an asynchronous architecture, while the POSIX read/write calls block until the I/O on the socket succeeds or fails. It is much better to process reads in a slot connected to the readyRead signal. Consider this:
class MyClient : public QObject
{
    Q_OBJECT
    ...
private slots:
    void readFromSocket();
};
In your initialization:
QObject::connect(
m_pSocket, SIGNAL(readyRead()),
this, SLOT(readFromSocket()));
And real job done here:
void
MyClient::readFromSocket()
{
QByteArray buffer = m_pSocket->readAll();
// All your data in buffer.
}
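If the protocol prefixes each message with a fixed-size header, the same slot can simply accumulate bytes until a complete message is available. A minimal sketch, assuming a 4-byte big-endian length prefix and a QByteArray m_buffer member (both are illustrative assumptions, not the asker's actual format):
#include <QtEndian>

void MyClient::readFromSocket()
{
    m_buffer.append(m_pSocket->readAll());
    while (m_buffer.size() >= 4) {
        // Assumed framing: 4-byte big-endian payload length, then the payload.
        quint32 payloadSize = qFromBigEndian<quint32>(
            reinterpret_cast<const uchar*>(m_buffer.constData()));
        if (m_buffer.size() < int(4 + payloadSize))
            break;                              // wait for more data
        QByteArray payload = m_buffer.mid(4, int(payloadSize));
        m_buffer.remove(0, int(4 + payloadSize));
        // ... handle the complete payload here ...
    }
}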
I'm aware of the non-blocking nature of QTcpSocket and the blocking nature of the POSIX read call. Unfortunately I cannot just read in a readyRead slot, because my communication architecture expects a header to be sent before each message (the TCP way) describing the payload that is streamed for that particular message. Hence I have to wait until I have received at least the header.
I do believe that this has something to do with the mode (blocking or non-blocking). I did some more tests and none of them were conclusive. In one of my tests, I tried to call waitForReadyRead with a timeout of 1 ms, 2 ms, and 3 ms. This still wasn't sufficient for the read to succeed! I doubt the read needs that much time to copy from the kernel buffers to user space, especially as I can clearly see in Wireshark that the message was received within 400 ms.
When I give -1 as the timeout value of waitForReadyRead, the read succeeds! To put it another way, the read succeeds only when the socket waits indefinitely, as with the POSIX read call.
Another strange thing I observed: this issue originally appeared when I was running a server compiled with Qt 5.3.1 and a client compiled with Qt 4.8.1. When I compile my client with Qt 5.3.1, I do not see this problem! I even tried compiling with Qt 4.7.1 and it worked without any issues!
Are there any known issues with the socket implementation in Qt 4.8.1? Unfortunately, I couldn't find much information about this.
I have just started using the Poco library and I am having issues getting two computers to communicate using Poco's DatagramSocket objects. Specifically, the receiveBytes function never seems to return (despite Wireshark showing that the UDP packets I am sending ARE arriving at the destination machine). I assume I am omitting something simple and this is all due to a dumb mistake on my part. I have compiled Poco 1.4.3p1 on Windows 7 using Visual Studio Express 2010. Below are code snippets showing how I am trying to use Poco. Any advice would be appreciated.
Sending
#include "Poco\Net\DatagramSocket.h"
#include "Serializer.h" //A library used for serializing data
int main()
{
Poco::Net::SocketAddress remoteAddr("192.168.1.140", 5678); //The IP address of the remote (receiving) machine
Poco::Net::DatagramSocket mSock; //We make our socket (it's not connected yet)
mSock.connect(remoteAddr); //Sends/Receives are restricted to the inputted IPAddress and port
unsigned char float_bytes[4];
FloatToBin(1234.5678, float_bytes); //Serializing the float and storing it in float_bytes
mSock.sendBytes((void*)float_bytes, 4); //Bytes AWAY!
return 0;
}
Receiving (where I am having issues)
#include "Poco\Net\DatagramSocket.h"
#include "Poco\Net\SocketAddress.h"
#include "Serializer.h"
#include <iostream>
int main()
{
Poco::Net::SocketAddress remoteAddr("192.168.1.116", 5678); //The IP address of the remote (sending) machine
Poco::Net::DatagramSocket mSock; //We make our socket (it's not connected yet)
mSock.connect(remoteAddr); //Sends/Receives are restricted to the inputted IPAddress and port
//Now lets try to get some datas
std::cout << "Waiting for float" << std::endl;
unsigned char float_bytes[4];
mSock.receiveBytes((void*)float_bytes, 4); //The code is stuck here waiting for a packet. It never returns...
//Finally, lets convert it to a float and print to the screen
float net_float;
BinToFloat(float_bytes, &net_float); //Converting the binary data to a float and storing it in net_float
std::cout << net_float << std::endl;
system("PAUSE");
return 0;
}
Thank you for your time.
The POCO sockets are modeled on the Berkeley sockets. You should read a basic tutorial on the Berkeley socket API, this will make it easier to understand the POCO OOP socket abstractions.
You cannot connect() on both the client and the server. You connect() on the client only. With UDP, connect() is optional and can be skipped (but then you have to use sendTo() instead of sendBytes()).
On the server, you either bind() to the wildcard IP address (meaning you will then receive on all the network interfaces available on the host), or to a specific IP address (meaning you will then receive only on that address).
Looking at your receiver/server code, it seems you want to filter on the address of the remote client. You cannot do that with connect(); you have to read with receiveFrom(buffer, length, address) and then filter on "address" yourself.
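A minimal receiver sketch along those lines, assuming Poco 1.4's DatagramSocket API and the port 5678 from the question (the filtering itself is left as a comment):
#include "Poco/Net/DatagramSocket.h"
#include "Poco/Net/SocketAddress.h"
#include <iostream>

int main()
{
    // Bind to the wildcard address so datagrams sent to port 5678 on any interface arrive here.
    Poco::Net::SocketAddress listenAddr(Poco::Net::IPAddress(), 5678);
    Poco::Net::DatagramSocket mSock(listenAddr);

    unsigned char float_bytes[4];
    Poco::Net::SocketAddress sender;
    int n = mSock.receiveFrom(float_bytes, sizeof(float_bytes), sender);
    std::cout << "Got " << n << " bytes from " << sender.toString() << std::endl;
    // If you only want packets from 192.168.1.116, compare sender.host() here and drop the rest.
    return 0;
}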
Security-wise, be careful with the assumptions you make about the source address of the UDP packets you receive. Spoofing a UDP packet is trivial. Put another way: do not make authentication or authorization decisions based on an IP address (or on anything not secured by proper cryptography).
The POCO presentation http://pocoproject.org/slides/200-Network.pdf explains, with code snippets, how to do network programming with POCO. See slides 15, 16 for DatagramSocket. Note that on slide 15 there is a typo, replace msg.data(), msg.size() with syslogMsg.data(), syslogMsg.size() to compile :-)
Have a look also at the "poco/net/samples" directory for short examples that also show best practices for using POCO.
I am hooking WSASend and WSARecv in C++, using the same method I used to hook the client's WSASend and WSARecv functions. In the client I am able to get the IP, port, and socket from the SOCKET handle passed to WSASend/WSARecv; however, on the server, when I try to use getpeername() or getsockname() they both return error 10057 (socket not connected)...
I'm fairly sure that the hook is correct on the server, since it prints the bytes successfully, and I'm also sure the socket SHOULD be valid, seeing as the client and server establish a successful connection.
Is there any alternative way to resolve this problem? I've been looking around the internet for a solution, but I haven't seen anyone with the same problem.
I've tried this:
sockaddr *address = new sockaddr;
int peer_len = sizeof(sockaddr); // must be initialized to the size of the buffer
getpeername(s, address, &peer_len);
int err = WSAGetLastError();
if(err==0)
{
char *Str = inet_ntoa(((sockaddr_in*)address)->sin_addr);
printf("[%s", Str);
printf(":%d]",ntohs(((sockaddr_in*)address)->sin_port));
}
else
{
printf("Error %i\n",err);
}
(Using both getpeername and getsockname.) Both result in the same "socket not connected" error.
I'm planning on using the packets the C++ DLL gets and forwarding the information to the C# DLL, since it'll be easier to manage there (for me anyway), but I'd need to distinguish each packet by its socket id.
You can only do that on the connected socket, i.e. the one returned from the accept() call, not on the listening "server" socket.
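For reference, a correct call on the accepted (data) socket looks roughly like this; note that the length argument has to be initialized to the size of the buffer before the call (IPv4 sketch only):
// `client` is the socket returned by accept(), not the listening socket.
sockaddr_in peer = {};
int peer_len = sizeof(peer);
if (getpeername(client, reinterpret_cast<sockaddr*>(&peer), &peer_len) == 0)
{
    printf("[%s:%d]\n", inet_ntoa(peer.sin_addr), ntohs(peer.sin_port));
}
else
{
    printf("getpeername failed: %d\n", WSAGetLastError());
}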
I am writing an XMLRPC client in c++ that is intended to talk to a python XMLRPC server.
Unfortunately, at this time, the Python XMLRPC server is only capable of fielding one request per connection before it shuts the connection down; I discovered this thanks to mhawke's response to my previous question on a related subject.
Because of this, I have to create a new socket connection to my Python server every time I want to make an XMLRPC request. This means the creation and deletion of a lot of sockets. Everything works fine until I approach ~4000 requests. At that point I get socket error 10048 (WSAEADDRINUSE, "address already in use").
I've tried sleeping the thread to let winsock fix its file descriptors, a trick that worked when a python client of mine had an identical issue, to no avail.
I've tried the following
int err = setsockopt(s_,SOL_SOCKET,SO_REUSEADDR,(char*)TRUE,sizeof(BOOL));
with no success.
I'm using Winsock 2.0, so WSADATA::iMaxSockets shouldn't come into play, and either way, I checked and it's set to 0 (I assume that means infinity).
4000 requests doesn't seem like an outlandish number of requests to make during the run of an application. Is there some way to use SO_KEEPALIVE on the client side while the server continually closes and reopens?
Am I totally missing something?
The problem is being caused by sockets hanging around in the TIME_WAIT state which is entered once you close the client's socket. By default the socket will remain in this state for 4 minutes before it is available for reuse. Your client (possibly helped by other processes) is consuming them all within a 4 minute period. See this answer for a good explanation and a possible non-code solution.
Windows dynamically allocates port numbers in the range 1024-5000 (3977 ports) when you do not explicitly bind the socket address. This Python code demonstrates the problem:
import socket

sockets = []
try:
    while True:
        s = socket.socket()
        s.connect(('some_host', 80))
        sockets.append(s.getsockname())
        s.close()
except socket.error:
    pass  # dynamic ports exhausted, connect() failed

print len(sockets)
sockets.sort()
print "Lowest port: ", sockets[0][1], " Highest port: ", sockets[-1][1]

# on Windows you should see something like this...
# 3960
# Lowest port:  1025  Highest port:  5000
If you try to run this again immediately, it should fail very quickly since all the dynamic ports are in the TIME_WAIT state.
There are a few ways around this:
1. Manage your own port assignments and use bind() to explicitly bind your client socket to a specific port that you increment each time you create a socket. You'll still have to handle the case where a port is already in use, but you will not be limited to dynamic ports. e.g.

port = 5000
while True:
    s = socket.socket()
    s.bind(('your_host', port))
    s.connect(('some_host', 80))
    s.close()
    port += 1

2. Fiddle with the SO_LINGER socket option. I have found that this sometimes works on Windows (although I'm not exactly sure why):

s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 1)

3. I don't know if this will help in your particular application; however, it is possible to send multiple XMLRPC requests over the same connection using the multicall method. Basically this allows you to accumulate several requests and then send them all at once. You will not get any responses until you actually send the accumulated requests, so you can essentially think of this as batch processing - does this fit in with your application design?
Update:
I tossed this into the code and it seems to be working now.
if (::connect(s_, (sockaddr *)&addr, sizeof(sockaddr)))
{
    int err = WSAGetLastError();
    if (err == 10048) // on the "address in use" error, force-kill and reopen the socket
    {
        closesocket(s_);
        WSACleanup();
        WSADATA info;
        WSAStartup(MAKEWORD(2,0), &info);
        s_ = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(s_, SOL_SOCKET, SO_REUSEADDR, (char*)&x, sizeof(BOOL));
    }
}
Basically, if you encounter the 10048 error (address in use), you can simply close the socket, call WSACleanup(), restart WSA with WSAStartup(), and then recreate the socket and set its sockopt again.
(The last sockopt may not be necessary.)
I must have been missing the WSACleanup/WSAStartup calls before, because closesocket() and socket() were definitely being called.
This error only occurs about once every 4000 calls.
I am curious as to why this may be, even though this seems to fix it.
If anyone has any input on the subject, I would be very curious to hear it.
Do you close the sockets after using them?
I want to verify the connection status before performing read/write operations.
Is there a way to implement an isConnect() method?
I saw this, but it seems "ugly".
I have tested the is_open() function as well, but it doesn't behave as expected.
TCP is meant to be robust in the face of a harsh network; even though TCP provides what looks like a persistent end-to-end connection, it's all just a lie: each packet is really just an independent, unreliable datagram.
The connections are really just virtual conduits created with a little state tracked at each end of the connection (source and destination ports and addresses, and the local socket). The network stack uses this state to know which process to give each incoming packet to and what state to put in the header of each outgoing packet.
Because the underlying network is inherently connectionless and unreliable, the stack will only report a severed connection when the remote end sends a FIN packet to close the connection, or when it doesn't receive an ACK response to a sent packet (after a timeout and a couple of retries).
Because of the asynchronous nature of asio, the easiest way to be notified of a graceful disconnection is to have an outstanding async_read, which will return error::eof immediately when the connection is closed. But this alone still leaves the possibility of other problems, like half-open connections and network failures, going undetected.
The most effective way to work around unexpected connection interruption is to use some sort of keep-alive or ping. This occasional attempt to transfer data over the connection allows expedient detection of an unintentionally severed connection.
The TCP protocol actually has a built-in keep-alive mechanism, which can be configured in asio via the asio::socket_base::keep_alive option. The nice thing about TCP keep-alive is that it's transparent to the user-mode application, and only the peers interested in keep-alive need to configure it. The downside is that you need OS-level access/knowledge to configure the timeout parameters; they're unfortunately not exposed via a simple socket option and usually have default timeout values that are quite large (7200 seconds on Linux).
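Enabling the flag itself is straightforward (a sketch; the probe interval and count still come from the OS settings mentioned above):
#include <boost/asio.hpp>

void enable_tcp_keepalive(boost::asio::ip::tcp::socket& socket)
{
    // Turns on SO_KEEPALIVE for this socket.
    boost::asio::socket_base::keep_alive option(true);
    socket.set_option(option);
}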
Probably the most common method of keep-alive is to implement it at the application layer, where the application has a special noop or ping message and does nothing but respond when tickled. This method gives you the most flexibility in implementing a keep-alive strategy.
TCP promises to watch for dropped packets -- retrying as appropriate -- to give you a reliable connection, for some definition of reliable. Of course TCP can't handle cases where the server crashes, or your Ethernet cable falls out, or something similar occurs. Additionally, knowing that your TCP connection is up doesn't necessarily mean that the protocol running over it is ready (e.g., your HTTP web server or your FTP server may be in some broken state).
If you know the protocol being sent over TCP, then there is probably a way in that protocol to tell you whether things are in good shape (for HTTP it would be a HEAD request).
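For example, with a connected Boost.Asio socket you could fire off a minimal HEAD request and see whether anything comes back (a rough sketch; it assumes an HTTP/1.1 server on the other end and blocking I/O):
boost::system::error_code ec;
std::string req = "HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
boost::asio::write(socket, boost::asio::buffer(req), ec);

char reply[512];
std::size_t n = socket.read_some(boost::asio::buffer(reply), ec);
if (!ec && n > 0) {
    // The peer answered, so both the TCP connection and the HTTP layer are alive.
}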
If you are sure that the remote socket has not sent anything (e.g. because you haven't sent a request to it yet), then you can set your local socket to a non blocking mode and try to read one or more bytes from it.
Given that the server hasn't sent anything, you'll either get an asio::error::would_block or some other error. If the former, your local socket has not yet detected a disconnection; if the latter, your socket has been closed.
Here is some example code:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/steady_timer.hpp>
using namespace std;
using namespace boost;
using tcp = asio::ip::tcp;
template<class Duration>
void async_sleep(asio::io_service& ios, Duration d, asio::yield_context yield)
{
    auto timer = asio::steady_timer(ios);
    timer.expires_from_now(d);
    timer.async_wait(yield);
}

int main()
{
    asio::io_service ios;

    tcp::acceptor acceptor(ios, tcp::endpoint(tcp::v4(), 0));

    boost::asio::spawn(ios, [&](boost::asio::yield_context yield) {
        tcp::socket s(ios);
        acceptor.async_accept(s, yield);
        // Keep the socket from going out of scope for 5 seconds.
        async_sleep(ios, chrono::seconds(5), yield);
    });

    boost::asio::spawn(ios, [&](boost::asio::yield_context yield) {
        tcp::socket s(ios);
        s.async_connect(acceptor.local_endpoint(), yield);

        // This is essential to make the `read_some` function not block.
        s.non_blocking(true);

        while (true) {
            system::error_code ec;
            char c;
            // Unfortunately, this only works when the buffer has non
            // zero size (tested on Ubuntu 16.04).
            s.read_some(asio::mutable_buffer(&c, 1), ec);
            if (ec && ec != asio::error::would_block) break;
            cerr << "Socket is still connected" << endl;
            async_sleep(ios, chrono::seconds(1), yield);
        }

        cerr << "Socket is closed" << endl;
    });

    ios.run();
}
And the output:
Socket is still connected
Socket is still connected
Socket is still connected
Socket is still connected
Socket is still connected
Socket is closed
Tested on:
Ubuntu: 16.04
Kernel: 4.15.0-36-generic
Boost: 1.67
Though, I don't know whether or not this behavior depends on any of those versions.
You can send a dummy byte on the socket and see whether it returns an error.
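A rough sketch of that idea with Boost.Asio (note that this injects a real byte into the stream, so it is only safe if the protocol on top can tolerate it):
char dummy = 0;
boost::system::error_code ec;
boost::asio::write(socket, boost::asio::buffer(&dummy, 1), ec);
if (ec) {
    // e.g. broken_pipe or connection_reset: the connection is gone.
}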