UDP local connection between 2 computers using Qt - C++

I have a problem connecting 2 computers using the UDP protocol. As a reference I've used the broadcast sender and broadcast receiver examples provided by Qt (version 5.9). The problem: when I test the two programs on one computer they work correctly, but when I run them on different machines everything fails and no datagrams are received on the receiver computer. What am I doing wrong?
sender code:
void Sender::broadcastDatagram()
{
    statusLabel->setText(tr("Now broadcasting datagram %1").arg(messageNo));
    //! [1]
    QByteArray datagram = "Broadcast message " + QByteArray::number(messageNo);
    udpSocket->writeDatagram(datagram.data(), datagram.size(),
                             QHostAddress::Broadcast, 45454);
    //! [1]
    ++messageNo;
}
and the receiver:
udpSocket->bind(QHostAddress::Any, 45454, QUdpSocket::ShareAddress);
The sender's IP is 127.0.0.1; the receiver's IP is 10.0.0.10.
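For context, here is a minimal sketch of the rest of a receiving side, modeled on Qt's Broadcast Receiver example; the Receiver class, the slot name, and the statusLabel member are assumptions, not code from the question:

void Receiver::initSocket()
{
    udpSocket = new QUdpSocket(this);
    // QHostAddress::Any binds to all interfaces, not just loopback,
    // so datagrams from other machines are accepted
    udpSocket->bind(QHostAddress::Any, 45454, QUdpSocket::ShareAddress);
    connect(udpSocket, &QUdpSocket::readyRead,
            this, &Receiver::processPendingDatagrams);
}

void Receiver::processPendingDatagrams()
{
    while (udpSocket->hasPendingDatagrams()) {
        QByteArray datagram;
        datagram.resize(int(udpSocket->pendingDatagramSize()));
        udpSocket->readDatagram(datagram.data(), datagram.size());
        statusLabel->setText(tr("Received datagram: \"%1\"")
                                 .arg(datagram.constData()));
    }
}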

Related

How to tell the live555 RTSPClient to send the SETUP command with a specific connection?

I am connecting to the camera using the live555 testRTSPClient application (http://www.live555.com/liveMedia/#testProgs):
./testRTSPClient rtsp://....
but it does not get any video stream. The problem seems to be that the camera reports two connections in its SDP:
...
s=/videoinput_1:0/h264_1/media.stm
c=IN IP4 0.0.0.0
m=video 11800 RTP/AVP 96
c=IN IP4 239.0.3.180/1
...
and testRTSPClient selects the last one, the multicast connection. In the SETUP command, testRTSPClient sends the following:
...
User-Agent: ./testRTSPClient (LIVE555 Streaming Media v2017.06.04)
Transport: RTP/AVP;multicast;port=11800-11801
...
When connecting to another camera, whose SDP contains only one connection (c=IN IP4 0.0.0.0), everything is fine.
1) So the first question is: is it possible to force testRTSPClient to select UDP unicast instead? ffplay streams from the camera nicely, and Wireshark shows that ffplay sets up the transport with UDP unicast, not multicast.
2) Secondly, I am using my own C++ program, similar to testRTSPClient. In the subsession setup I use the RTSPClient::sendSetupCommand function:
rtspClient->sendSetupCommand(*subsession, continueAfterSetup, False, False, False);
but my program still does the SETUP with multicast, just like testRTSPClient. The forceMulticastOnUnspecified parameter does not seem to make any difference here. Currently I see that the only option is to remove the second, multicast connection line from the SDP.
void continueAfterDescribe(RTSPClient* rtspClient, int resultCode, char* resultString)
{
    ...
    char* const sdpDescription = resultString;
    env << "Got a SDP description:\n" << sdpDescription << "\n";
    // Hypothetical new code: userTransport is a flag the user would set
    if (userTransport == UdpUnicast)
        removeSecondConnection(sdpDescription);
    // Create a media session object from this SDP description:
    MediaSession::createNew(env, sdpDescription);
    ...
But this is a hack in my mind. Is there any other option to select unicast using the live555 C++ API? I know that there is a TCP unicast option, but I am not interested in it for now.
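For illustration, a sketch of what the hypothetical removeSecondConnection helper above could look like; the in-place C-string editing is an assumption of this sketch and not part of the live555 API:

#include <cstdio>
#include <cstring>

// Drop every multicast "c=IN IP4 ..." line from the SDP so that only the
// unicast connection (c=IN IP4 0.0.0.0) remains.
void removeSecondConnection(char* sdp)
{
    char* line = sdp;
    while ((line = std::strstr(line, "c=IN IP4 ")) != nullptr) {
        char* end = std::strchr(line, '\n');
        if (end == nullptr) break;
        ++end; // 'end' now points at the start of the next line
        int firstOctet = 0;
        std::sscanf(line, "c=IN IP4 %d.", &firstOctet);
        if (firstOctet >= 224 && firstOctet <= 239) {
            // Multicast address range: remove this line by shifting the
            // rest of the SDP (including the terminator) over it
            std::memmove(line, end, std::strlen(end) + 1);
        } else {
            line = end; // unicast connection line, keep it
        }
    }
}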

Why is the OS changing the assigned outgoing port of my packets?

My C++ software creates SYN packets (using Boost) to my server with specific outgoing ports (chosen according to the IANA port-assignment standards).
I pick the outgoing ports for internal purposes.
I have checked my application on many machines, and on one specific machine I am having the following issue:
the outgoing port actually used isn't the one I assigned; it looks like the OS (Windows 10) is changing it.
What could the issue be?
Below is the relevant code I am using to assign a specific outgoing port:
std::string exceptionFormat = "exception. Error message: ";
error_code socket_set_option_error_code;
socket->set_option(tcp::socket::reuse_address(true), socket_set_option_error_code);
if (socket_set_option_error_code) {
    throw SocketException("Got socket reuse set option " + exceptionFormat + socket_set_option_error_code.message());
}
const auto source_endpoint = tcp::endpoint(tcp::v4(), source_port);
error_code bind_socket_error_code;
socket->bind(source_endpoint, bind_socket_error_code);
if (bind_socket_error_code) {
    throw SocketException("Got socket bind " + exceptionFormat + bind_socket_error_code.message());
}
Apparently, there were two antivirus products installed on the machine, and one of them (Kaspersky) changed the outgoing port.
The packets might also be flowing through a NAT module (NAPT) or a firewall, either of which can rewrite port numbers.
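One way to narrow such a problem down is to print the port the socket is actually bound to, before and after connecting, and compare it with what Wireshark or the server sees; a minimal Boost.Asio sketch, where the addresses and ports are placeholders:

#include <boost/asio.hpp>
#include <iostream>

int main()
{
    using boost::asio::ip::tcp;
    boost::asio::io_service io_service;

    tcp::socket socket(io_service);
    socket.open(tcp::v4());
    socket.set_option(tcp::socket::reuse_address(true));
    socket.bind(tcp::endpoint(tcp::v4(), 50000)); // placeholder source port

    // If this prints 50000 but the server sees a different port, something
    // between this host and the server (AV driver, NAT, firewall) rewrites it.
    std::cout << "local port after bind: "
              << socket.local_endpoint().port() << "\n";

    socket.connect(tcp::endpoint(
        boost::asio::ip::address::from_string("192.0.2.1"), 80)); // placeholder
    std::cout << "local port after connect: "
              << socket.local_endpoint().port() << "\n";
}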

Using boost::asio for simple UDP communication

This is a simple problem, but I can't seem to figure out what I am doing wrong. I am attempting to read data sent to a port on a client using Boost, and I have the following code, which sets up 1) the UDP client, 2) a buffer to read into, and 3) an attempt to read from the socket:
// Set up the socket to read UDP packets on port 10114
boost::asio::io_service io_service;
udp::endpoint endpoint_(udp::v4(), 10114);
udp::socket socket(io_service, endpoint_);
// Data coming across will be 8 bytes per packet
boost::array<char, 8> recv_buf;
// Read data available from port
size_t len = socket.receive_from(
    boost::asio::buffer(recv_buf, 8), endpoint_);
cout.write(recv_buf.data(), len);
The problem is that the receive_from function never returns. The server is running on another computer and generating data continuously, and I can see traffic on this port on the local computer using Wireshark. So, what am I doing wrong here?
So, it turns out that I need to listen on that port for datagrams coming from anywhere. As such, the endpoint needs to be set up as
boost::asio::ip::udp::endpoint endpoint_(boost::asio::ip::address::from_string("0.0.0.0"), 10114);
Using this setup, I get the data back that I expect. And FYI, 0.0.0.0 is the same as INADDR_ANY.
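For completeness, a sketch of the question's snippet with that fix applied; it also passes a separate endpoint to receive_from, which the call fills in with the sender's address, rather than overwriting the listening endpoint:

#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <iostream>

using boost::asio::ip::udp;

int main()
{
    boost::asio::io_service io_service;

    // Listen on port 10114 on all local interfaces (INADDR_ANY)
    udp::endpoint endpoint_(
        boost::asio::ip::address::from_string("0.0.0.0"), 10114);
    udp::socket socket(io_service, endpoint_);

    // Data coming across will be 8 bytes per packet
    boost::array<char, 8> recv_buf;

    // Blocks until a datagram arrives; sender_endpoint receives the
    // address of the peer that sent it
    udp::endpoint sender_endpoint;
    size_t len = socket.receive_from(
        boost::asio::buffer(recv_buf), sender_endpoint);
    std::cout.write(recv_buf.data(), len);
}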

Poor multicast send performance using boost::asio on Windows

I have a very simple wrapper for boost::asio sockets sending multicast messages:
// header
class MulticastSender
{
public:
    /// Constructor
    /// @param ip - The multicast address to broadcast on
    /// @param port - The multicast port to broadcast on
    MulticastSender(const String& ip, const UInt16 port);

    /// Sends a multicast message
    /// @param msg - The message to send
    /// @param size - The size of the message (in bytes)
    /// @return number of bytes sent
    size_t send(const void* msg, const size_t size);

private:
    boost::asio::io_service m_service;
    boost::asio::ip::udp::endpoint m_endpoint;
    boost::asio::ip::udp::socket m_socket;
};
// implementation
inline MulticastSender::MulticastSender(const String& ip, const UInt16 port) :
    m_endpoint(boost::asio::ip::address_v4::from_string(ip), port),
    m_socket(m_service, m_endpoint.protocol())
{
    m_socket.set_option(boost::asio::socket_base::send_buffer_size(8 * 1024 * 1024));
    m_socket.set_option(boost::asio::socket_base::broadcast(true));
    m_socket.set_option(boost::asio::socket_base::reuse_address(true));
}

inline size_t MulticastSender::send(const void* msg, const size_t size)
{
    try
    {
        return m_socket.send_to(boost::asio::buffer(msg, size), m_endpoint);
    }
    catch (const std::exception& e)
    {
        setError(e.what());
    }
    return 0;
}

// read and send a message
MulticastSender sender(ip, port);
while (readFile(&msg)) sender.send(&msg, sizeof(msg));
When compiled on Windows 7 using Visual Studio 2013, I get a throughput of ~11 MB/s; on Ubuntu 14.04, ~100 MB/s. I added timers and was able to confirm that the send(...) method is the culprit.
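For reference, this is roughly how such timers around send(...) could look; the question does not show its measurement code, so this is only a sketch using std::chrono and the MulticastSender defined above:

#include <chrono>
#include <cstddef>

// Accumulate time spent inside send() and bytes sent, so per-call cost
// and overall throughput can be compared between Windows and Linux.
static std::chrono::nanoseconds totalSendTime{0};
static std::size_t totalBytes = 0;

std::size_t timedSend(MulticastSender& sender, const void* msg, std::size_t size)
{
    const auto start = std::chrono::steady_clock::now();
    const std::size_t sent = sender.send(msg, size);
    totalSendTime += std::chrono::duration_cast<std::chrono::nanoseconds>(
        std::chrono::steady_clock::now() - start);
    totalBytes += sent;
    return sent;
}

// After the read/send loop, throughput in MB/s is roughly:
//   (totalBytes / 1e6) / (totalSendTime.count() / 1e9)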
I tried with and without antivirus enabled, and tried disabling a few other services, with no luck. Some I cannot disable due to permissions on the computer, like the firewall.
I assume there is a service running on Windows that is interfering, or my implementation is missing something that affects the application on Windows but not on Linux.
Any ideas on what might be causing this would be appreciated.
Are Windows and Ubuntu running on the same machine?
If not, it seems that your Windows machine is limited by 100 Mbit Ethernet, while the Ubuntu machine appears to be on 1 Gbit Ethernet.
(In case that's not the cause of the problem, I am sorry for posting an answer instead of commenting, but I am not able to comment, and the data rates are too suggestive to ignore [11 MB/s * 8 ≈ 88 Mbit/s ~ 100 Mbit/s, and 100 MB/s ≈ 800 Mbit/s]. I just had to make that hint.)
If your data transfers are huge, say messages of more than 10 MB, I would suggest you use TCP instead of UDP/multicast. TCP is a reliable protocol.
I read of a case where a stream of 300-byte packets was being sent over Ethernet (1500-byte MTU) and TCP was 50% faster than UDP: TCP buffers the data and fills a full network segment, making more efficient use of the available bandwidth, whereas UDP puts each packet on the wire immediately, congesting the network with lots of small packets. On Windows, I suggest you use TCP rather than UDP/multicast.

Socket in use error when reusing sockets

I am writing an XMLRPC client in C++ that is intended to talk to a Python XMLRPC server.
Unfortunately, at this time, the Python XMLRPC server is only capable of fielding one request per connection, after which it shuts down; I discovered this thanks to mhawke's response to my previous query about a related subject.
Because of this, I have to create a new socket connection to my Python server every time I want to make an XMLRPC request, which means creating and deleting a lot of sockets. Everything works fine until I approach ~4000 requests. At this point I get socket error 10048, "socket in use".
I've tried sleeping the thread to let Winsock fix its file descriptors, a trick that worked when a Python client of mine had an identical issue, to no avail.
I've tried the following:
BOOL reuse = TRUE;
int err = setsockopt(s_, SOL_SOCKET, SO_REUSEADDR, (char*)&reuse, sizeof(BOOL));
with no success.
I'm using Winsock 2.0, so WSADATA::iMaxSockets shouldn't come into play; either way, I checked, and it's set to 0 (I assume that means unlimited).
4000 requests doesn't seem like an outlandish number of requests to make during the run of an application. Is there some way to use SO_KEEPALIVE on the client side while the server continually closes and reopens?
Am I totally missing something?
The problem is being caused by sockets hanging around in the TIME_WAIT state, which is entered once you close the client's socket. By default a socket will remain in this state for 4 minutes before it is available for reuse, and your client (possibly helped by other processes) is consuming them all within a 4-minute period. See this answer for a good explanation and a possible non-code solution.
Windows dynamically allocates port numbers in the range 1024-5000 (3977 ports) when you do not explicitly bind the socket address. This Python code demonstrates the problem:
import socket
sockets = []
try:
    while True:
        s = socket.socket()
        s.connect(('some_host', 80))
        sockets.append(s.getsockname())
        s.close()
except socket.error:
    pass  # stops once the dynamic ports are exhausted
print len(sockets)
sockets.sort()
print "Lowest port: ", sockets[0][1], " Highest port: ", sockets[-1][1]
# on Windows you should see something like this...
3960
Lowest port: 1025 Highest port: 5000
If you try to run this again immediately, it should fail very quickly, since all dynamic ports are in the TIME_WAIT state.
There are a few ways around this:
1) Manage your own port assignments and use bind() to explicitly bind your client socket to a specific port that you increment each time you create a socket. You'll still have to handle the case where a port is already in use, but you will not be limited to dynamic ports, e.g.
port = 5000
while True:
    s = socket.socket()
    s.bind(('your_host', port))
    s.connect(('some_host', 80))
    s.close()
    port += 1
2) Fiddle with the SO_LINGER socket option. I have found that this sometimes works in Windows, although I'm not exactly sure why (a C++ equivalent of this tweak is sketched after this list):
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 1)
3) I don't know if this will help in your particular application; however, it is possible to send multiple XMLRPC requests over the same connection using the multicall method. Basically this allows you to accumulate several requests and then send them all at once. You will not get any responses until you actually send the accumulated requests, so you can essentially think of this as batch processing. Does this fit in with your application design?
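Since the client in question is C++, the Winsock equivalent of the SO_LINGER tweak from option 2) would look roughly like this (a sketch; s_ is the question's socket handle, and the assumption is that a zero-timeout linger forces an abortive close so the socket skips TIME_WAIT):

// l_onoff = 1 with l_linger = 0 requests an abortive close: the
// connection is reset on closesocket() instead of entering TIME_WAIT.
linger lin;
lin.l_onoff = 1;
lin.l_linger = 0;
setsockopt(s_, SOL_SOCKET, SO_LINGER,
           reinterpret_cast<const char*>(&lin), sizeof(lin));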
Update:
I tossed this into the code and it seems to be working now.
if (::connect(s_, (sockaddr*)&addr, sizeof(sockaddr)))
{
    int err = WSAGetLastError();
    if (err == 10048) // if socket-in-use error, force kill and reopen socket
    {
        closesocket(s_);
        WSACleanup();
        WSADATA info;
        WSAStartup(MAKEWORD(2, 0), &info);
        s_ = socket(AF_INET, SOCK_STREAM, 0);
        BOOL reuse = TRUE; // option value for SO_REUSEADDR
        setsockopt(s_, SOL_SOCKET, SO_REUSEADDR, (char*)&reuse, sizeof(BOOL));
    }
}
Basically, if you encounter the 10048 error (socket in use), you can simply close the socket, call WSACleanup, restart Winsock with WSAStartup, then recreate the socket and set its socket option again
(the last setsockopt may not be necessary).
I must have been missing the WSACleanup/WSAStartup calls before, because closesocket() and socket() were definitely being called.
This error only occurs about once every 4000 calls.
I am curious as to why this may be, even though this seems to fix it.
If anyone has any input on the subject, I would be very curious to hear it.
Do you close the sockets after using them?