In my multicast receiver application I join a group and receive multicast data successfully.
There is also an API for filling any gaps; it uses UDP. When a gap is detected, a retransmission request is sent, and a dedicated thread receives and processes the UDP datagrams that come back in response to a request.
This all worked fine on a Windows machine with one network interface. Now we need to get it running on a multi-homed machine with two NICs.
To get the multicast to work we had to add routes so that the app would send the joins out the correct NIC. Again, this part works fine.
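(For context only: instead of relying on host routes, Asio can also name the outgoing interface for the join explicitly. Below is a minimal sketch with placeholder addresses; it is not necessarily how the application above performs the join.)

// Sketch: join a multicast group via a specific local interface.
// The group and NIC addresses are placeholders.
void joinGroupOnInterface(boost::asio::ip::udp::socket& socket)
{
    const boost::asio::ip::address_v4 group =
        boost::asio::ip::address_v4::from_string("239.1.2.3");
    const boost::asio::ip::address_v4 nic =
        boost::asio::ip::address_v4::from_string("192.168.10.5");

    // The two-argument join_group option selects the local interface,
    // so the join goes out the intended NIC on a multi-homed host.
    socket.set_option(boost::asio::ip::multicast::join_group(group, nic));
}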
However, the point-to-point UDP receive_from call fails immediately upon entry. My understanding is that the sender_endpoint is filled in by the call itself. Do I need to do something special for this UDP socket so it doesn't return an error on entry? Do I need to bind the socket in some way, or set up special routes at the host level?
Any help would be appreciated.
Here's the error that comes back:
ERR:reRequestMoldUdp64::waitForResponseNT got error:An invalid argument was supplied
boost::asio::ip::address m_targetIP;
short m_port;
udp::endpoint m_receiver_endpoint;

m_receiver_endpoint.address(m_targetIP);
m_receiver_endpoint.port(m_port);

if (!m_socket.is_open()) {
    openUdpSocket();
}

while (1)
{
    m_last_ReceiveLen = 0;
    udp::endpoint sender_endpoint;
    try {
        //m_last_ReceiveLen = m_socket.receive_from((boost::asio::buffer(m_Buffer, max_length)), sender_endpoint);
        m_last_ReceiveLen = m_socket.receive_from((boost::asio::buffer(m_Buffer, max_length)), sender_endpoint, 0, ec);
        if (ec) {
            _snprintf(logBuf, sizeof(logBuf), "%s got error:%s", __FUNCTION__, ec.message().c_str());
            MyLog(LOG_ERROR, logBuf);
            myExit(__FILE__, __LINE__, 1);
        }
    }
    catch (std::exception& e) {
        _snprintf(logBuf, sizeof(logBuf), "INFO:%s sender_endpoint.address[%s] error:[%s]",
                  __FUNCTION__,
                  sender_endpoint.address().to_string().c_str(),
                  e.what());
        MyLogAlways(logBuf);
        if (!m_socket.is_open()) {
            openUdpSocket();
        }
        if (++m_waitForResponseNTRetryCnt > 100) {
            myExit(__FILE__, __LINE__, 1);
        }
        else {
            Sleep(100); // 100 ms
            continue;
        }
    }
    std::string dummyStr;
    const UINT64 dummySeqno(0);
    udpResponseAPI(BuildMap, dummySeqno, dummyStr);
}
}
void reRequestMoldUdp64::openUdpSocket()
{
    char logBuf[512];
    if (!m_socket.is_open()) {
        m_socket.open(udp::v4(), m_Error);
    }
    else {
        return;
    }
    if (!m_socket.is_open()) {
        _snprintf(logBuf, sizeof(logBuf), "%s udp socket didn't reopen error:[%s]", __FUNCTION__, m_Error.message().c_str());
        MyLog(LOG_ERROR, logBuf);
    }
    else {
        _snprintf(logBuf, sizeof(logBuf), "%s udp socket successfully opened", __FUNCTION__);
        MyLogAlways(logBuf);
    }
}
//----------------------------------------------------------------------------------------------------
void reRequestMoldUdp64::sendRequest(std::string request)
{
    MyLog("INFO:Begin sendRequest");
    char logBuf[512];
    m_lastRequestSent = request;
    char myBuff[128];
    int len(request.length());
    memcpy(myBuff, request.c_str(), len);
#if 0
    _snprintf(logBuf, sizeof(logBuf), "INFO:Before sendto() len:%d ip:%s port:%hu",
              len,
              m_receiver_endpoint.address().to_string().c_str(),
              m_receiver_endpoint.port()
             );
    MyLogAlways(logBuf);
#endif
    _snprintf(logBuf, sizeof(logBuf), "%s sending udp rerequest to:%s", __FUNCTION__,
              m_receiver_endpoint.address().to_string().c_str());
    MyLogAlways(logBuf);
    if (m_socket.is_open()) {
        m_socket.send_to(boost::asio::buffer(myBuff, len), m_receiver_endpoint);
    }
    else {
        MyLogAlways("ERR:reRequest socket is not open, reopening");
        m_socket.open(udp::v4(), m_Error);
        m_socket.send_to(boost::asio::buffer(myBuff, len), m_receiver_endpoint);
    }
}
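For what it's worth, on Windows a receive_from() on a UDP socket that has never been bound (explicitly, or implicitly by a prior send) typically fails with WSAEINVAL ("An invalid argument was supplied"), which matches the log line above. A minimal sketch of opening and binding the rerequest socket to the address of the intended NIC follows; the helper name, local address, and port are placeholders, and whether binding alone resolves the problem on this particular host is an assumption.

// Sketch (hypothetical helper): open the rerequest socket and bind it to the
// address of the NIC that should carry the point-to-point UDP traffic.
void openUdpSocketBound(udp::socket& socket,
                        const boost::asio::ip::address& localNicAddr,
                        unsigned short localPort,
                        boost::system::error_code& ec)
{
    socket.open(udp::v4(), ec);
    if (ec) return;

    // Binding gives the socket a definite local endpoint before receive_from()
    // is called; on a multi-homed host it also pins the socket to one interface.
    socket.bind(udp::endpoint(localNicAddr, localPort), ec);
}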
Related
I am using the boost::beast library for both a WebSocket server and a plain TCP server.
Because of a requirement, I have to use the same port for both, so I implemented the server as follows.
void on_run() {
// Set suggested timeout settings for the websocket
m_ws.set_option(...);
m_ws.async_accept(
beast::bind_front_handler(
&WsSessionNoSSL::on_accept,
shared_from_this()));
}
virtual void on_accept(beast::error_code ec) {
if(ec) {
std::string msg = ec.message();
CONSOLE_INFO("err: {}", msg);
if(msg != "bad method") {
return fail(ec, "accept");
} else {
doReadTcp();
return;
}
}
doReadWs();
}
void doReadTcp() {
m_ws.next_layer().async_read_some(boost::asio::buffer(m_recvData, 15),
[this, self = shared_from_this()](const boost::system::error_code &error,
size_t bytes_transferred) {
if(error) {
return fail(error, "tcp read fail");
}
CONSOLE_INFO("recvs: {}", bytes_transferred);
doReadTcp();
});
}
void doReadWs() {
m_ws.async_read(...);
}
After the accept fails, I try to read the raw TCP data, but I cannot get at the data that was already received; I can only learn the failure reason via ec.message(). When the accept fails, is there a way to recover the data that was passed?
If that is impossible, how can I solve this problem?
I found a solution.
m_ws.async_accept(net::buffer(m_untilStr),
beast::bind_front_handler(
&WsSessionNoSSL::on_accept,
shared_from_this()));
websocket::stream supports an accept overload that takes already-buffered handshake data.
So first fill a buffer with the handshake data read from the raw TCP socket, then call async_accept(buffer, handler).
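A minimal sketch of that flow, reusing m_untilStr, m_recvData and m_ws from the snippets above (the handler name and looksLikeHttpUpgrade() are placeholders for the application's own logic):

// Sketch: read the first bytes from the raw socket, then either hand them to
// the WebSocket accept or keep treating the connection as plain TCP.
void WsSessionNoSSL::on_initial_read(std::size_t bytes_read)
{
    m_untilStr.assign(m_recvData, bytes_read);

    if (looksLikeHttpUpgrade(m_untilStr)) {
        // Feed the buffered handshake bytes to Beast; it parses them as the
        // start of the HTTP upgrade request and completes the handshake.
        m_ws.async_accept(net::buffer(m_untilStr),
            beast::bind_front_handler(
                &WsSessionNoSSL::on_accept,
                shared_from_this()));
    } else {
        // Not a WebSocket handshake: continue as a raw TCP session and treat
        // m_untilStr as the first chunk of TCP payload.
        doReadTcp();
    }
}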
I am using boost::asio, and there are 8 threads:
boost::asio::io_service ios;
boost::asio::ip::tcp::acceptor acceptor(ios);
boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), port);
acceptor.open(endpoint.protocol());
acceptor.set_option(boost::asio::ip::tcp::acceptor::reuse_address(true));
acceptor.bind(endpoint);
acceptor.listen();

LocalTcpServer::getInstance()->initialize(ios, acceptor, pool);

boost::thread_group th_group;
for (int i = 0; i < 8; i++)
    th_group.add_thread(new boost::thread(boost::bind(&boost::asio::io_service::run, &ios)));
th_group.join_all();
void session::start()
{
    socket.async_read_some(boost::asio::buffer(buffer),
        m_strand.wrap(boost::bind(&session::handle_read, this,
                                  boost::asio::placeholders::error,
                                  boost::asio::placeholders::bytes_transferred)));
}

void session::handle_read(const boost::system::error_code& e, size_t byteTrans)
{
    if (e || byteTrans == 0)
    {
        socket.shutdown(...);
        // socketRelease() closes the socket and deletes this
        timeInfo->timer->async_wait(boost::bind(socketRelease(), ...));
    }
    else
    {
        // deal with the data using the pool
    }
    socket.async_read_some(.....);
}
void LocalTcpServer::initialize(ios, acceptor, pool)
{
    // init; pool inherits from threadpool and is used in handle_read to process received data
    ...;
    startAccept();
}

void LocalTcpServer::startAccept()
{
    session* pSession = new session(acceptor.get_io_service(), pool);
    acceptor.async_accept(pSession->socket,
        boost::bind(&LocalTcpServer::handle_accept, this, pSession,
                    boost::asio::placeholders::error));
}

void LocalTcpServer::handle_accept(session* newSession, const boost::system::error_code& e)
{
    if (e)
    {
        // after the app has run for a while (several hours or days),
        // e is always error 22 ("invalid argument")
        LOG_ERROR << e.message() << e.value();
        delete newSession;
        startAccept();
    }
    else
    {
        newSession->start();
        startAccept();
    }
}
The app works fine at first, but some time later (several hours, or one or two days) the error appears: handle_accept always gets an error, "invalid argument", so no new connections can be accepted.
There are roughly 10000 connected sockets, and the open-file limit is 65535.
I have used netstat to check that sockets are being closed normally; there are no sockets left unclosed.
I wonder why this error occurs and how I can fix it, or whether my code has some errors.
I hope I have described the question clearly. Thanks.
If the listening socket has failed as well, one of the main suspects is DHCP: the interface's IP address may have changed.
In that case, all open sockets bound to that interface become invalid and must be closed, including the listening socket; listening must then be restarted with a new socket.
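A minimal sketch of restarting the listener, reusing the names from the question (when to trigger it, e.g. after a run of consecutive accept failures, is up to the application):

// Sketch: tear down the acceptor and re-create it so it binds to the
// interface's current address, then resume accepting.
void LocalTcpServer::restartAcceptor()
{
    boost::system::error_code ignored;
    acceptor.close(ignored);

    boost::asio::ip::tcp::endpoint endpoint(boost::asio::ip::tcp::v4(), port);
    acceptor.open(endpoint.protocol());
    acceptor.set_option(boost::asio::ip::tcp::acceptor::reuse_address(true));
    acceptor.bind(endpoint);
    acceptor.listen();

    startAccept();
}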
I am converting an app that had a very simple heartbeat / status-monitoring connection between two services. As it now needs to run on Linux in addition to Windows, I thought I'd use Boost (v1.51, and I cannot upgrade: the Linux compilers are too old and the Windows compiler is Visual Studio 2005) to make the code platform agnostic. I really would prefer not to have two code files, one per OS, or a littering of #defines throughout the code, when Boost offers the possibility of something pleasant to read six months after I've checked in and forgotten this code.
My problem now is that the connection is timing out. Actually, it's not really working at all.
The first time through, the 'status' message is sent and received by the server end, which sends back an appropriate response and then goes back to waiting on the socket for another message. The client end (this code) sends the 'status' message again... but this time the server never receives it, and the read_some() call blocks until the socket times out. I find it really strange, because the server end has not changed. The only thing that has changed is that I converted the client code from basic Winsock2 sockets to this code. Previously, it connected and just looped through send/recv calls until the program was aborted or the 'lockdown' message was received.
Why would subsequent calls to send silently fail to send anything on the socket, and what do I need to adjust in order to restore the simple send/recv flow?
#include <boost/signals2/signal.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/thread.hpp>
using boost::asio::ip::tcp;
using namespace std;
boost::system::error_code ServiceMonitorThread::ConnectToPeer(
tcp::socket &socket,
tcp::resolver::iterator endpoint_iterator)
{
boost::system::error_code error;
int tries = 0;
for (; tries < maxTriesBeforeAbort; tries++)
{
boost::asio::connect(socket, endpoint_iterator, error);
if (!error)
{
break;
}
else if (error != make_error_code(boost::system::errc::success))
{
// Error connecting to service... may not be running?
cerr << error.message() << endl;
boost::this_thread::sleep_for(boost::chrono::milliseconds(200));
}
}
if (tries == maxTriesBeforeAbort)
{
error = make_error_code(boost::system::errc::host_unreachable);
}
return error;
}
// Main thread-loop routine.
void ServiceMonitorThread::run()
{
boost::system::error_code error;
tcp::resolver resolver(io_service);
tcp::resolver::query query(hostnameOrAddress, to_string(port));
tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
tcp::socket socket(io_service);
error = ConnectToPeer(socket, endpoint_iterator);
if (error && error == boost::system::errc::host_unreachable)
{
TerminateProgram();
}
boost::asio::streambuf command;
std::ostream command_stream(&command);
command_stream << "status\n";
boost::array<char, 10> response;
int retry = 0;
while (retry < maxTriesBeforeAbort)
{
// A 1s request interval is more than sufficient for status checking.
boost::this_thread::sleep_for(boost::chrono::seconds(1));
// Send the command to the network monitor server service.
boost::asio::write(socket, command, error);
if (error)
{
// Error sending to socket
cerr << error.message() << endl;
retry++;
continue;
}
// Clear the response buffer, then read the network monitor status.
response.assign(0);
/* size_t bytes_read = */ socket.read_some(boost::asio::buffer(response), error);
if (error)
{
if (error == make_error_code(boost::asio::error::eof))
{
// Connection was dropped, re-connect to the service.
error = ConnectToPeer(socket, endpoint_iterator);
if (error && error == make_error_code(boost::system::errc::host_unreachable))
{
TerminateProgram();
}
continue;
}
else
{
cerr << error.message() << endl;
retry++;
continue;
}
}
// Examine the response message.
if (strncmp(response.data(), "normal", 6) != 0)
{
retry++;
// If we received the lockdown response, then terminate.
if (strncmp(response.data(), "lockdown", 8) == 0)
{
break;
}
// Not an expected response, potential error, retry to see if it was merely an aberration.
continue;
}
// If we arrived here, the exchange was successful; reset the retry count.
if (retry > 0)
{
retry = 0;
}
}
// If retry count was incremented, then we have likely encountered an issue; shut things down.
if (retry != 0)
{
TerminateProgram();
}
}
When a streambuf is provided directly to an I/O operation as the buffer, the operation manages the input sequence appropriately, either committing read data or consuming written data. Hence, in the following code, command is empty after the first iteration:
boost::asio::streambuf command;
std::ostream command_stream(&command);
command_stream << "status\n";
// `command`'s input sequence contains "status\n".
while (retry < maxTriesBeforeAbort)
{
...
// write all of `command`'s input sequence to the socket.
boost::asio::write(socket, command, error);
// `command.size()` is 0, as the write operation will consume the data.
// Subsequent write operations with `command` will be no-ops.
...
}
One solution would be to use std::string as the buffer:
std::string command("status\n");
while (retry < maxTriesBeforeAbort)
{
...
boost::asio::write(socket, boost::asio::buffer(command), error);
...
}
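Alternatively, if keeping the streambuf is preferred, it can be refilled before each write; a short sketch using the same loop structure as above:

boost::asio::streambuf command;
std::ostream command_stream(&command);

while (retry < maxTriesBeforeAbort)
{
    ...
    // Re-insert the message each iteration, because the previous write
    // consumed the streambuf's input sequence.
    command_stream << "status\n";
    boost::asio::write(socket, command, error);
    ...
}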
For more details on streambuf usage, consider reading this answer.
I have to create a basic p2p connection with C++ sockets, which means each user has a server for listening for connections and a client for connecting, right?
For now I'm trying to create a master client, which has a dedicated server and is a client too.
This means creating the server and client in the same program, so I have used fork(), which creates a child process for the server while the parent is the client. fork() works fine, and I'm using select() to check sockets for readable data; I modeled the server on http://beej.us/guide/bgnet/output/html/multipage/advanced.html#select
Now when I run the program, the master client is able to connect to its own dedicated server, but the messages don't always get received by the server. Sometimes it receives them, sometimes it doesn't. Any idea why?
Also, when a second client connects to the master client (it doesn't have its own server for now), the server shows that it gets a new connection, but when I write a message and send it, the server doesn't receive anything from the second client, though it sometimes (not always) receives a message from the master client.
EDIT: Added cout.flush().
EDIT: I think forking the process causes some delay when a client and server run in the same program.
UPDATE: Added the new server code, which causes a one-message delay (in response to the comments).
Here's the code.
SERVER CODE
while (1) {
    unsigned int s;
    readsocks = socks;
    if (select(maxsock + 1, &readsocks, NULL, NULL, NULL) == -1) {
        perror("select");
        return;
    }
    for (s = 0; s <= maxsock; s++) {
        if (FD_ISSET(s, &readsocks)) {
            //printf("socket %d was ready\n", s);
            if (s == sock) {
                /* New connection */
                cout << "\n New Connection";
                cout.flush();
                int newsock;
                struct sockaddr_in their_addr;
                socklen_t size = sizeof(their_addr);
                newsock = accept(sock, (struct sockaddr*)&their_addr, &size);
                if (newsock == -1) {
                    perror("accept");
                }
                else {
                    printf("Got a connection from %s on port %d\n",
                           inet_ntoa(their_addr.sin_addr), htons(their_addr.sin_port));
                    FD_SET(newsock, &socks);
                    if (newsock > maxsock) {
                        maxsock = newsock;
                    }
                }
            }
            else {
                /* Handle read or disconnection */
                handle(s, &socks);
            }
        }
    }
}
void handle(int newsock, fd_set *set)
{
    char buf[256];
    bzero(buf, sizeof(buf));
    /* send(), recv(), close() */
    if (read(newsock, buf, sizeof(buf) - 1) <= 0) {   // leave room for the terminating '\0'
        cout << "\n No data";
        FD_CLR(newsock, set);
        cout.flush();
    }
    else {
        string temp(buf);
        cout << "\n Server: " << temp;
        cout.flush();
    }
    /* Call FD_CLR(newsock, set) on disconnection */
}
I have this piece of code using standard sockets:
void set_fds(int sock1, int sock2, fd_set *fds) {
    FD_ZERO(fds);
    FD_SET(sock1, fds);
    FD_SET(sock2, fds);
}

void do_proxy(int client, int conn, char *buffer) {
    fd_set readfds;
    int result, nfds = max(client, conn) + 1;
    set_fds(client, conn, &readfds);
    while ((result = select(nfds, &readfds, 0, 0, 0)) > 0) {
        if (FD_ISSET(client, &readfds)) {
            int recvd = recv(client, buffer, 256, 0);
            if (recvd <= 0)
                return;
            send_sock(conn, buffer, recvd);
        }
        if (FD_ISSET(conn, &readfds)) {
            int recvd = recv(conn, buffer, 256, 0);
            if (recvd <= 0)
                return;
            send_sock(client, buffer, recvd);
        }
        set_fds(client, conn, &readfds);
    }
}
I have sockets client and conn, and I need to proxy traffic between them (this is part of a SOCKS5 server implementation; see https://github.com/mfontanini/Programs-Scripts/blob/master/socks5/socks5.cpp). How can I achieve this under Asio?
I should mention that up to this point both sockets were operated in blocking mode.
I tried the following without success:
ProxySession::ProxySession(ba::io_service& ioService, socket_ptr socket, socket_ptr clientSock): ioService_(ioService), socket_(socket), clientSock_(clientSock)
{
}
void ProxySession::Start()
{
socket_->async_read_some(boost::asio::buffer(data_, 1),
boost::bind(&ProxySession::HandleProxyRead, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void ProxySession::HandleProxyRead(const boost::system::error_code& error,
size_t bytes_transferred)
{
if (!error)
{
boost::asio::async_write(*clientSock_,
boost::asio::buffer(data_, bytes_transferred),
boost::bind(&ProxySession::HandleProxyWrite, this,
boost::asio::placeholders::error));
}
else
{
delete this;
}
}
void ProxySession::HandleProxyWrite(const boost::system::error_code& error)
{
if (!error)
{
socket_->async_read_some(boost::asio::buffer(data_, max_length),
boost::bind(&ProxySession::HandleProxyRead, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
else
{
delete this;
}
}
The issue is that if I do ba::read(*socket_, ba::buffer(data_, 256)) I can read the data that comes from my browser client through the SOCKS proxy, but in the version above, ProxySession::Start never leads to HandleProxyRead being called.
I don't really need an async way of exchanging data here; it's just the solution I came up with. Also, from where I called ProxySession::Start I needed to introduce a sleep, because otherwise the thread context from which it was executing would be shut down.
Update 2: see one of my updates below; the question block is getting too big.
The problem can be solved by using the asynchronous read/write functions so that you end up with something similar to the code presented: basically async_read_some()/async_write(), or other functions in those categories. Also, for async processing to work you must call boost::asio::io_service::run(), which dispatches the completion handlers of the async operations.
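A minimal sketch of how that could fit the code from the question (it reuses ProxySession from the snippets above; the relay in the opposite direction and error handling are elided, and this is a sketch rather than a drop-in fix):

// Sketch: start the relay and keep the io_service running so the completion
// handlers (HandleProxyRead / HandleProxyWrite) actually get invoked.
void runProxy(ba::io_service& ioService, socket_ptr serverSock, socket_ptr clientSock)
{
    ProxySession* session = new ProxySession(ioService, serverSock, clientSock);
    session->Start();      // queues the first async_read_some()

    // A second ProxySession (clientSock -> serverSock) would relay the
    // opposite direction; omitted here.

    ioService.run();       // blocks, dispatching handlers until no work remains
}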
I have managed to come up with this. It solves the problem of exchanging data between the two sockets (as the SOCKS5 proxy must), but it is very compute intensive. Any ideas?
std::size_t readable = 0;
std::size_t transf = 0;
boost::asio::socket_base::bytes_readable command1(true);
boost::asio::socket_base::bytes_readable command2(true);
try
{
    while (1)
    {
        socket_->io_control(command1);
        clientSock_->io_control(command2);
        if ((readable = command1.get()) > 0)
        {
            transf = ba::read(*socket_, ba::buffer(data_, readable));
            ba::write(*clientSock_, ba::buffer(data_, transf));
            boost::this_thread::sleep(boost::posix_time::milliseconds(500));
        }
        if ((readable = command2.get()) > 0)
        {
            transf = ba::read(*clientSock_, ba::buffer(data_, readable));
            ba::write(*socket_, ba::buffer(data_, transf));
            boost::this_thread::sleep(boost::posix_time::milliseconds(500));
        }
    }
}
catch (std::exception& ex)
{
    std::cerr << "Exception in thread while exchanging: " << ex.what() << "\n";
    return;
}