I'm trying to use boost::asio for my new little hobby project, but I'm having trouble getting the server to read the right data. Sending works fine; I've checked with Wireshark, and the bytes [0 0 0 4] followed by [5 0 0 0] are sent. But on the server side I receive [16 -19 105 0], which leaves me rather confused.
Here's how I send it, working perfectly when viewed in Wireshark:
boost::asio::io_service io;
tcp::resolver resolver(io);
tcp::resolver::query query("localhost", boost::lexical_cast<string>("40001"));
tcp::resolver::iterator endpoints = resolver.resolve(query);
tcp::socket socket(io);
boost::asio::connect(socket, endpoints);
header h(5);
header::storage data = h.store();
boost::asio::write(socket, boost::asio::buffer(&data[0], header::header_size()));
This is a stripped down version of my server class. handle_read_header is called with the correct number of bytes, but headerbuffer contains weird values, [16 -19 105 0].
class tcp_connection : public boost::enable_shared_from_this<tcp_connection>
{
public:
tcp_connection(boost::asio::io_service& io)
: _socket(io)
{
memset(&headerbuffer[0], 0, headerbuffer.size());
}
void start() {
_socket.async_read_some(boost::asio::buffer(&headerbuffer[0], header::header_size()), boost::bind(&tcp_connection::handle_read_header, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void handle_read_header(const boost::system::error_code& error, std::size_t numBytes) {
if(!error) {
BOOST_LOG_SEV(logger, loglvl::debug) <<"handle_read_header, " <<numBytes <<"/" <<header::header_size() <<" bytes";
if (numBytes == header::header_size()) {
std::stringstream ss;
for(u32 a = 0; a < numBytes; ++a) {
ss <<(int)headerbuffer[a] <<" ";
}
BOOST_LOG_SEV(logger, loglvl::debug) <<"header data: " <<ss.str();
mCurrentHeader.load(headerbuffer);
mRemaining = mCurrentHeader.size();
BOOST_LOG_SEV(logger, loglvl::debug) <<"got header with size " <<mCurrentHeader.size();
}
} else {
BOOST_LOG_SEV(logger, loglvl::debug) <<"error " <<error;
}
}
private:
header mCurrentHeader;
std::array<char, 128> headerbuffer;
boost::asio::ip::tcp::socket _socket;
};
(Almost) complete code can be found at http://paste2.org/U97HHaH3
It turned out to be a sneaky, but at the same time obvious, error. tcp_connection::ptr is a shared_ptr. In tcp_server.h I call start() on it, but afterwards nothing refers to it, which makes the shared_ptr, reasonably, assume the object can be deleted. So the handler is still running, but the member variables it reads have already been destroyed.
No idea why it worked some of the time, but I assume that's in the land of undefined behaviour.
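For reference, here is a minimal sketch of the fix for the class above: bind shared_from_this() instead of this, so every outstanding handler holds an extra shared_ptr and keeps the connection alive until the read completes (the rest of the class stays as posted).

void start() {
    // Sketch: shared_from_this() instead of this keeps *this alive while the read is pending.
    _socket.async_read_some(
        boost::asio::buffer(&headerbuffer[0], header::header_size()),
        boost::bind(&tcp_connection::handle_read_header,
                    shared_from_this(),
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

The handler then owns a reference, so even if the server drops its own copy of the pointer right after calling start(), the connection is not destroyed while a read is outstanding.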
I am attempting to use boost::asio to implement a simple device discovery protocol. Basically I want to send a broadcast message (port 9000) with a 2-byte payload, then read the response from the device (assuming for now that it exists). In Wireshark I can see that the broadcast is being sent and that the device is responding. However, in my example code the UDP read returns 0 bytes, not the 30 bytes of data.
No. Time Source Destination Protocol Length
1 0.00000 192.168.0.20 255.255.255.255 UDP 44 52271 -> 9000 Len = 2
2 0.00200 192.168.0.21 192.168.0.20 UDP 72 9000 -> 52271 Len = 30
Should I be reading from a different endpoint than broadcastEndpoint? How do I get that endpoint?
I am new to asio and trying to teach myself, but I cannot work out what I have done wrong.
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <iostream>
class udp_find {
public:
udp_find(boost::asio::io_context& service, unsigned int port)
: broadcastEndpoint_(boost::asio::ip::address_v4::broadcast(), port),
socket_(service)
{
socket_.open(boost::asio::ip::udp::v4());
socket_.set_option(boost::asio::ip::udp::socket::reuse_address(true));
socket_.set_option(boost::asio::socket_base::broadcast(true));
boost::array<unsigned int, 2> data = {255, 255};
socket_.async_send_to(
boost::asio::buffer(data, 2), broadcastEndpoint_,
boost::bind(&udp_find::handle_send, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void handle_receive(const boost::system::error_code& error,
std::size_t bytes_transferred)
{
std::cout << "Received Data" << bytes_transferred << std::endl;
}
void handle_send(const boost::system::error_code& error, std::size_t bytes_transferred)
{
std::cout << "Sent Data" << bytes_transferred << std::endl;
socket_.async_receive_from(
boost::asio::buffer(buffer_), broadcastEndpoint_,
boost::bind(&udp_find::handle_receive, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
private:
boost::asio::ip::udp::socket socket_;
boost::array<char, 128> buffer_;
boost::asio::ip::udp::endpoint broadcastEndpoint_;
};
int main()
{
boost::asio::io_context service;
udp_find(service, 9000);
service.run();
}
Your first problem is Undefined Behaviour.
You start asynchronous operations on a temporary object of type udp_find. The temporary is destroyed immediately after construction, so the object no longer exists even before you start any of the async work (service.run()).
That is easily fixed by making udp_find a local variable instead of a temporary:
udp_find op(service, 9000);
Now sending works for me. You will want to test that receiving works as well. In my netstat output it appears that the UDP socket is bound to an ephemeral port. Sending a datagram to that port makes the test succeed for me.
You might want to actually bind/connect the socket before receiving (the endpoint& parameter to async_receive_from is not for that; it is an output parameter that reports the sender's endpoint).
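As a rough sketch of what I mean, under the assumption that your class otherwise stays the same (localPort and senderEndpoint_ below are illustrative names, not part of your code): bind the socket to a local port before sending, and give async_receive_from a separate endpoint object that it can fill in with the sender's address.

socket_.open(boost::asio::ip::udp::v4());
socket_.set_option(boost::asio::ip::udp::socket::reuse_address(true));
socket_.set_option(boost::asio::socket_base::broadcast(true));
// Illustrative: bind to a known local port so the device's reply has a stable destination.
socket_.bind(boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), localPort));

// Later, when receiving; senderEndpoint_ would be a class member so it outlives the call.
socket_.async_receive_from(
    boost::asio::buffer(buffer_), senderEndpoint_,
    boost::bind(&udp_find::handle_receive, this,
        boost::asio::placeholders::error,
        boost::asio::placeholders::bytes_transferred));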
I'm writing a secure SSL echo server with Boost.Asio and coroutines. I'd like this server to be able to serve multiple concurrent clients; this is my code:
try {
boost::asio::io_service io_service;
boost::asio::spawn(io_service, [&io_service](boost::asio::yield_context yield) {
auto ctx = boost::asio::ssl::context{ boost::asio::ssl::context::sslv23 };
ctx.set_options(
boost::asio::ssl::context::default_workarounds
| boost::asio::ssl::context::no_sslv2
| boost::asio::ssl::context::single_dh_use);
ctx.use_private_key_file(..); // My data setup
ctx.use_certificate_chain_file(...); // My data setup
boost::asio::ip::tcp::acceptor acceptor(io_service,
boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));
for (;;) {
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> sock{ io_service, ctx };
acceptor.async_accept(sock.next_layer(), yield);
sock.async_handshake(boost::asio::ssl::stream_base::server, yield);
auto ec = boost::system::error_code{};
char data_[1024];
auto nread = sock.async_read_some(boost::asio::buffer(data_, 1024), yield[ec]);
if (ec == boost::asio::error::eof)
return; //connection closed cleanly by peer
else if (ec)
throw boost::system::system_error(ec); //some other error, is this desirable?
sock.async_write_some(boost::asio::buffer(data_, nread), yield[ec]);
if (ec == boost::asio::error::eof)
return; //connection closed cleanly by peer
else if (ec)
throw boost::system::system_error(ec); //some other error
// Shutdown gracefully
sock.async_shutdown(yield[ec]);
if (ec && (ec.category() == boost::asio::error::get_ssl_category())
&& (SSL_R_PROTOCOL_IS_SHUTDOWN == ERR_GET_REASON(ec.value())))
{
sock.lowest_layer().close();
}
}
});
io_service.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
Anyway, I'm not sure whether the code above will do that: in theory, calling async_accept returns control to the io_service.
Will another connection be accepted if one has already been accepted, i.e. execution is already past the async_accept line?
It's a bit hard to understand the specifics of your question, since the code is incomplete (e.g., there's a return in your block, but it's unclear what that block is part of).
Notwithstanding, the documentation contains an example of a TCP echo server using coroutines. It seems you basically need to add SSL support to it, to adapt it to your needs.
If you look at main, it has the following chunk:
boost::asio::spawn(io_service,
[&](boost::asio::yield_context yield)
{
tcp::acceptor acceptor(io_service,
tcp::endpoint(tcp::v4(), std::atoi(argv[1])));
for (;;)
{
boost::system::error_code ec;
tcp::socket socket(io_service);
acceptor.async_accept(socket, yield[ec]);
if (!ec) std::make_shared<session>(std::move(socket))->go();
}
});
This loops endlessly, and, following each (successful) call to async_accept, handles accepting the next connection (while this connection and others might still be active).
Again, I'm not sure about your code, but it contains exits from the loop like
return; //connection closed cleanly by peer
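One way to keep accepting while a client is being served, and to keep those early returns from tearing down the whole accept loop, is sketched below. It assumes your ctx and acceptor setup stays as in your code; the per-session lambda and the shared_ptr around the stream are illustrative, not taken from your code or the official example.

for (;;) {
    auto sock = std::make_shared<
        boost::asio::ssl::stream<boost::asio::ip::tcp::socket>>(io_service, ctx);
    acceptor.async_accept(sock->next_layer(), yield);

    // Serve this client in its own coroutine, then immediately loop
    // back to async_accept for the next one.
    boost::asio::spawn(io_service, [sock](boost::asio::yield_context yield) {
        boost::system::error_code ec;
        sock->async_handshake(boost::asio::ssl::stream_base::server, yield[ec]);
        if (ec) return;
        char data[1024];
        for (;;) {
            auto n = sock->async_read_some(boost::asio::buffer(data), yield[ec]);
            if (ec) break; // eof or error ends this session only
            boost::asio::async_write(*sock, boost::asio::buffer(data, n), yield[ec]);
            if (ec) break;
        }
        sock->async_shutdown(yield[ec]);
    });
}

With this shape, a slow client no longer blocks the accept loop, which is the same property the documentation example below gets by handing each accepted socket to a session object and immediately looping.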
To illustrate the point, here are two applications.
The first is a Python multiprocessing echo client, adapted from PMOTW:
import socket
import sys
import multiprocessing
def session(i):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('localhost', 5000)
print 'connecting to %s port %s' % server_address
sock.connect(server_address)
print 'connected'
for _ in range(300):
try:
# Send data
message = 'client ' + str(i) + ' message'
print 'sending "%s"' % message
sock.sendall(message)
# Look for the response
amount_received = 0
amount_expected = len(message)
while amount_received < amount_expected:
data = sock.recv(16)
amount_received += len(data)
print 'received "%s"' % data
except:
print >>sys.stderr, 'closing socket'
sock.close()
if __name__ == '__main__':
pool = multiprocessing.Pool(8)
pool.map(session, range(8))
The details are not that important (although it's Python, and therefore easy to read), but the point is that it opens up 8 processes, and each engages the same asio echo server (below) with 300 messages.
When run, it outputs
...
received "client 1 message"
sending "client 1 message"
received "client 2 message"
sending "client 2 message"
received "client 3 message"
received "client 0 message"
sending "client 3 message"
sending "client 0 message"
...
showing that the echo sessions are indeed interleaved.
Now for the echo server. I've slightly adapted the example from the docs:
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>
using boost::asio::ip::tcp;
class session :
public std::enable_shared_from_this<session> {
public:
session(tcp::socket socket) : socket_(std::move(socket)) {}
void start() { do_read(); }
private:
void do_read() {
auto self(
shared_from_this());
socket_.async_read_some(
boost::asio::buffer(data_, max_length),
[this, self](boost::system::error_code ec, std::size_t length) {
if(!ec)
do_write(length);
});
}
void do_write(std::size_t length) {
auto self(shared_from_this());
socket_.async_write_some(
boost::asio::buffer(data_, length),
[this, self](boost::system::error_code ec, std::size_t /*length*/) {
if (!ec)
do_read();
});
}
private:
tcp::socket socket_;
enum { max_length = 1024 };
char data_[max_length];
};
class server {
public:
server(boost::asio::io_service& io_service, short port) :
acceptor_(io_service, tcp::endpoint(tcp::v4(), port)),
socket_(io_service) {
do_accept();
}
private:
void do_accept() {
acceptor_.async_accept(
socket_,
[this](boost::system::error_code ec) {
if(!ec)
std::make_shared<session>(std::move(socket_))->start();
do_accept();
});
}
tcp::acceptor acceptor_;
tcp::socket socket_;
};
int main(int argc, char* argv[]) {
const int port = 5000;
try {
boost::asio::io_service io_service;
server s{io_service, port};
io_service.run();
}
catch (std::exception& e) {
std::cerr << "Exception: " << e.what() << "\n";
}
}
This shows that this server indeed interleaves.
Note that this is not the coroutine version. While I once played with the coroutine version a bit, I just couldn't get it to build on my current box (also, as sehe notes in the comments below, you might anyway prefer this more mainstream version for now).
However, with respect to your question, this is not a fundamental difference. The non-coroutine version has callbacks explicitly launching new operations and supplying the next callback; the coroutine version uses a more sequential-looking paradigm. In both versions, each call returns to asio's control loop, which keeps track of all the current operations and which of them can proceed.
From the asio coroutine docs:
Coroutines let you create a structure that mirrors the actual program logic. Asynchronous operations don’t split functions, because there are no handlers to define what should happen when an asynchronous operation completes. Instead of having handlers call each other, the program can use a sequential structure.
It's not that the sequential structure makes all operations sequential - that would eradicate the entire need for asio.
I have been assigned to create an HTTPS server using boost::asio, so I spent some time on the internet and found a source that explains how to combine Boost's HTTP and SSL features, which isn't explained on the official Boost website. Everything went fine, and now I am in the execution phase, which is where a maddening problem arose: in my code, after I construct the request stream, I use boost::asio::async_write to deliver it, and at runtime I receive the error below. I am very certain it is caused by boost::asio::async_write, but I am not certain what caused it to do so. Can anyone shed some light for me? I have been wandering in the darkness :( (please see my code below)
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::system::system_error> >'
what(): write: uninitialized
using boost::asio::ip::tcp;
string my_password_callback(size_t, boost::asio::ssl::context_base::password_purpose);
void handle_resolve(const boost::system::error_code& ,
tcp::resolver::iterator);
bool verify_certificate();
void handle_read();
void handle_write();
int i,j,rc;
sqlite3 *db;
string selectsql;
sqlite3_stmt *stmt;
char *zErrMsg = 0;
stringstream ss;
boost::asio::io_service io_service1;
boost::asio::io_service &io_service(io_service1);
boost::asio::ssl::context ctx(boost::asio::ssl::context::sslv23);
boost::asio::ssl::context& context_=ctx;
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket_(io_service,context_);
int main()
{
boost::shared_ptr<boost::asio::ssl::context>(boost::asio::ssl::context::sslv23);
context_.set_options(boost::asio::ssl::context::default_workarounds| boost::asio::ssl::context::no_sslv2
| boost::asio::ssl::context::single_dh_use);
context_.set_password_callback(my_password_callback);
context_.use_certificate_chain_file("SSL\\test.crt");
context_.use_private_key_file("SSL\\test.key", boost::asio::ssl::context::pem);
tcp::resolver resolver_(io_service);
tcp::resolver::query query("172.198.72.135:3000", "http");
resolver_.async_resolve(query,boost::bind(handle_resolve,
boost::asio::placeholders::error,
boost::asio::placeholders::iterator));
boost::asio::streambuf request;
string path="https://172.198.72.135:3000/journals/enc_data?";
while(true)
{
char * EJTEXT;
int ID;
if(sqlite3_open("c:\\MinGW\\test.db", &db))
{
selectsql="select IEJ,EJ from EJ limit 1";
sqlite3_prepare_v2(db, selectsql.c_str(), -1, &stmt, NULL);
if(sqlite3_step(stmt)==SQLITE_ROW){
ID=sqlite3_column_int(stmt,0);
EJTEXT=(char *)sqlite3_column_text(stmt,1);
}
else{
}
sqlite3_finalize(stmt);
sqlite3_close(db);
}
string EJ=EJTEXT;
E.Encrypt(EJ);
string data=E.Url_safe(E.cipher); // my logic
string Iv=E.Url_safe(E.encoded_iv); // my logic
std::ostream request_stream(&request);
request_stream << "POST " <<path+"Data="+data+"&"+"iv="+Iv;
request_stream << "Host: " <<"172.198.72.135"<< "\r\n";
request_stream << "Accept: */*\r\n";
request_stream << "Connection: close\r\n\r\n";
//try{
boost::asio::async_write(socket_, request,
boost::asio::transfer_at_least(1),
boost::bind(handle_write));
temp="";
data="";
Iv="";
boost::asio::streambuf response;
std::istream response_stream(&response);
std::string http_version;
response_stream >> http_version;
unsigned int status_code;
response_stream >> status_code;
std::string status_message;
std::getline(response_stream, status_message);
if (!response_stream || http_version.substr(0, 5) != "HTTP/")
{
l.HTTP_SSLLOG("Invalid response");
}
if (status_code== 200)
{
string deletesql="delete * from EJ where IEJ="+ID;
if(sqlite3_open("c:\\MinGW\\test.db", &db))
{
rc=sqlite3_exec(db, deletesql.c_str(), 0, 0, &zErrMsg);
sqlite3_close(db);
if(rc)
{
ss<<ID;
l.EJ_Log("ERROR DELETING EJ FOR "+ss.str());
}
}
else{
l.DB_Log("ERROR OPENING DB");
}
}
else{
continue;
}
Sleep(6000);
}
return 0;
}
string my_password_callback(size_t t, boost::asio::ssl::context_base::password_purpose p)//std::size_t max_length,ssl::context::password_purpose purpose )
{
std::string password;
return "balaji";
}
void handle_resolve(const boost::system::error_code& err,
tcp::resolver::iterator endpoint_iterator)
{
if (!err)
{
socket_.set_verify_mode(boost::asio::ssl::verify_peer | boost::asio::ssl::verify_fail_if_no_peer_cert);
socket_.set_verify_callback(boost::bind(verify_certificate));
boost::asio::connect(socket_.lowest_layer(), endpoint_iterator);
}
else
{
l.HTTP_SSLLOG("Error resolve: "+err.message());
}
}
bool verify_certificate()
{
bool preverified =true;
context_.set_default_verify_paths();
return preverified;
}
void handle_read()
{
}
void handle_write()
{
boost::asio::async_read_until(socket_, response, "\r\n",
boost::bind(handle_read));
}
The asynchronous operations are designed to not throw exceptions and instead pass errors to the completion handlers as their first parameter (boost::system::error_code). For example, the following program demonstrates async_write() failing with an uninitialized error:
#include <iostream>
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
int main()
{
boost::asio::io_service io_service;
boost::asio::ssl::context ctx(boost::asio::ssl::context::sslv23);
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket(io_service, ctx);
boost::asio::async_write(socket, boost::asio::buffer("demo"),
[](const boost::system::error_code& error, std::size_t bytes_transferred)
{
std::cout << error.message() << std::endl;
});
io_service.run();
}
The above program will output uninitialized. If an exception is being thrown from an asynchronous operation, then it strongly suggests that undefined behavior is being invoked.
Based on the posted code, the async_write() operation may violate the requirement that ownership of the underlying buffer memory is retained by the caller, who must guarantee that it remains valid until the handler is called. In this case, the next iteration of the while loop may invalidate the buffer that had been provided to the prior iteration's async_write() operation.
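One hedged way to satisfy that lifetime requirement is to give each write its own streambuf, owned by a shared_ptr that the completion handler captures; the shape below is illustrative rather than taken from the posted code.

auto request = std::make_shared<boost::asio::streambuf>();
std::ostream request_stream(request.get());
request_stream << "POST " << path << " HTTP/1.1\r\n"
               << "Host: 172.198.72.135\r\n"
               << "Accept: */*\r\n"
               << "Connection: close\r\n\r\n";

boost::asio::async_write(socket_, *request,
    [request](const boost::system::error_code& ec, std::size_t /*bytes_transferred*/)
    {
        // 'request' is captured by value, so the streambuf stays alive
        // until this handler runs, regardless of what the loop does next.
    });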
However, even in the absence of undefined behavior, there will be additional problems, as the program neither attempts to establish the connection nor performs the SSL handshake, both of which must be completed before transmitting or receiving data over an encrypted connection.
When using asynchronous operations, a while-sleep loop that is part of the overall operation flow is often an indication of code smell. Consider removing the sqlite3 and encrypt code, and getting an SSL prototype up and running first. It may also help to compile with the highest warning-level/pedantic flags enabled. The Boost.Asio SSL overview shows a typical synchronous usage pattern:
using boost::asio::ip::tcp;
namespace ssl = boost::asio::ssl;
typedef ssl::stream<tcp::socket> ssl_socket;
// Create a context that uses the default paths for
// finding CA certificates.
ssl::context ctx(ssl::context::sslv23);
ctx.set_default_verify_paths();
// Open a socket and connect it to the remote host.
boost::asio::io_service io_service;
ssl_socket sock(io_service, ctx);
tcp::resolver resolver(io_service);
tcp::resolver::query query("host.name", "https");
boost::asio::connect(sock.lowest_layer(), resolver.resolve(query));
sock.lowest_layer().set_option(tcp::no_delay(true));
// Perform SSL handshake and verify the remote host's
// certificate.
sock.set_verify_mode(ssl::verify_peer);
sock.set_verify_callback(ssl::rfc2818_verification("host.name"));
sock.handshake(ssl_socket::client);
// ... read and write as normal ...
The official SSL example can also serve as a great starting point or reference for using asynchronous operations. Once the SSL prototype is confirmed as working, then add the sqlite3 and encrypt logic back into the program.
Also, in the event multiple threads are being used, be aware that the SSL stream is not thread-safe. All asynchronous operations must be synchronized through an explicit strand. For composed operations, such as async_write(), the initiating function must be invoked within the context of a strand, and the completion handler must be wrapped by the same strand.
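For instance, a minimal sketch of that strand pattern with the older io_service API used above (strand_ and the empty handler body are illustrative):

boost::asio::io_service::strand strand_(io_service);

// Initiate the composed operation from within the strand...
strand_.post([&]
{
    boost::asio::async_write(socket_, request,
        // ...and run the completion handler through the same strand.
        strand_.wrap([](const boost::system::error_code& ec, std::size_t /*bytes*/)
        {
            // All access to the SSL stream stays serialized on strand_.
        }));
});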
I have a problem understanding how Boost.Asio handles this:
For the request/response exchange on the client side I can use the following Boost example Example
But I don't understand what happens if the server also sends some status information to the client every X ms. Do I have to open a separate socket for this, or can my client tell apart which message is the request, the response, and the cycleMessage?
Can it happen that the client sends a request and reads the cycleMessage as if it were the response, because it is also waiting in async_read for that message?
class TcpConnectionServer : public boost::enable_shared_from_this<TcpConnectionServer>
{
public:
typedef boost::shared_ptr<TcpConnectionServer> pointer;
static pointer create(boost::asio::io_service& io_service)
{
return pointer(new TcpConnectionServer(io_service));
}
boost::asio::ip::tcp::socket& socket()
{
return m_socket;
}
void Start()
{
SendCycleMessage();
boost::asio::async_read(
m_socket, boost::asio::buffer(m_data, m_dataSize),
boost::bind(&TcpConnectionServer::handle_read_data, shared_from_this(), boost::asio::placeholders::error));
}
private:
TcpConnectionServer(boost::asio::io_service& io_service)
: m_socket(io_service),m_cycleUpdateRate(io_service,boost::posix_time::seconds(1))
{
}
void handle_read_data(const boost::system::error_code& error_code)
{
if (!error_code)
{
std::string answer=doSomeThingWithData(m_data);
writeImpl(answer);
boost::asio::async_read(
m_socket, boost::asio::buffer(m_data, m_dataSize),
boost::bind(&TcpConnectionServer::handle_read_data, shared_from_this(), boost::asio::placeholders::error));
}
else
{
std::cout << error_code.message() << "ERROR DELETE READ \n";
// delete this;
}
}
void SendCycleMessage()
{
std::string data = "some usefull data";
writeImpl(data);
m_cycleUpdateRate.expires_from_now(boost::posix_time::seconds(1));
m_cycleUpdateRate.async_wait(boost::bind(&TcpConnectionServer::SendTracedParameter,this));
}
void writeImpl(const std::string& message)
{
m_messageOutputQueue.push_back(message);
if (m_messageOutputQueue.size() > 1)
{
// outstanding async_write
return;
}
this->write();
}
void write()
{
m_message = m_messageOutputQueue[0];
boost::asio::async_write(
m_socket,
boost::asio::buffer(m_message),
boost::bind(&TcpConnectionServer::writeHandler, this, boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void writeHandler(const boost::system::error_code& error, const size_t bytesTransferred)
{
m_messageOutputQueue.pop_front();
if (error)
{
std::cerr << "could not write: " << boost::system::system_error(error).what() << std::endl;
return;
}
if (!m_messageOutputQueue.empty())
{
// more messages to send
this->write();
}
}
boost::asio::ip::tcp::socket m_socket;
boost::asio::deadline_timer m_cycleUpdateRate;
std::string m_message;
const size_t m_sizeOfHeader = 5;
boost::array<char, 5> m_headerData;
std::vector<char> m_bodyData;
std::deque<std::string> m_messageOutputQueue;
};
With this implementation I will not need boost::asio::strand, right? Because I do not modify m_messageOutputQueue from another thread.
But when my client side has an m_messageOutputQueue that is accessed from another thread, at that point I will need a strand, because then I need the synchronization? Or did I misunderstand something?
The differentiation of the message is part of your application protocol.
ASIO merely provides transport.
Now, indeed, if you want to have a "keepalive" message you will have to design your protocol in such a way that the client can distinguish the messages.
The trick is to think of it at a higher level. Don't deal with async_read on the client directly. Instead, make async_read put messages on a queue (or several queues; the status messages might not even need a queue, e.g. a new one could simply supersede a previous, not-yet-handled status update).
Then code your client against those queues.
A simple thing that is typically done is to introduce message framing and a message type id:
FRAME offset 0: message length(N)
FRAME offset 4: message data
FRAME offset 4+N: message checksum
FRAME offset 4+N+sizeof checksum: sentinel (e.g. 0x00, or a larger unique signature)
The structure there makes the protocol more extensible. It's easy to add encryption/compression without touching all the other code. There's built-in error detection, etc.
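As an illustration of that framing idea (the exact layout and names below are made up for the example, and the checksum/sentinel fields are omitted for brevity): a length prefix plus a one-byte message type lets the client route normal responses and cycle messages to different queues.

#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

enum class MsgType : std::uint8_t { Response = 1, CycleMessage = 2 };

// Build one frame: [4-byte length][1-byte type][payload].
std::vector<char> makeFrame(MsgType type, const std::string& payload)
{
    std::uint32_t len = static_cast<std::uint32_t>(payload.size());
    std::vector<char> frame(sizeof len + 1 + payload.size());
    std::memcpy(frame.data(), &len, sizeof len);  // in practice, pick a fixed byte order
    frame[sizeof len] = static_cast<char>(type);
    std::memcpy(frame.data() + sizeof len + 1, payload.data(), payload.size());
    return frame;
}

On the client, the read loop first reads the fixed-size header, then reads exactly len payload bytes, and pushes the message onto the queue that matches its type.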
I am a beginner at multicast programming. I am using boost::asio to subscribe to some multicast data.
I wrote a program with the code
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <string>

using namespace boost::asio::ip;

// Globals used by the free-function handlers below.
udp::socket* _receiveSocket;
udp::endpoint _receiveEndPoint;
boost::array<char,1500> _receiveBuf;

void AsyncReadHandler(const boost::system::error_code& error, std::size_t bytes_transferred);
void WaitForNextRead()
{
_receiveSocket->async_receive_from(
boost::asio::buffer(_receiveBuf, 1500),
_receiveEndPoint,
boost::bind(
&AsyncReadHandler,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void AsyncReadHandler(
const boost::system::error_code& error, // Result of operation.
std::size_t bytes_transferred // Number of bytes received.
)
{
std::cout << _receiveEndPoint.address() << ":" << _receiveEndPoint.port() << ":" << std::string(_receiveBuf.c_array(), bytes_transferred) << "\n";
WaitForNextRead();
}
int main()
{
std::string address;
int port;
std::cin >> address;
std::cin >> port;
boost::asio::io_service ioService;
_receiveSocket = new udp::socket( ioService );
_receiveSocket->open( udp::v4() );
_receiveSocket->set_option( udp::socket::reuse_address(true) );
_receiveSocket->bind( udp::endpoint( address::from_string("0.0.0.0"), port ) );
_receiveSocket->set_option( multicast::join_group( address::from_string(address) ) );
_receiveEndPoint.address(address::from_string(address));
_receiveEndPoint.port(port);
WaitForNextRead();
ioService.run();
return 0;
}
My instance A is joining: 239.1.1.1:12345
My instance B is joining: 239.1.127.1:12345
It is very weird that both instances A and B get the messages from both addresses!!
Did I miss out some socket option?
PS:
Here is my routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
224.0.0.0 * 240.0.0.0 U 0 0 0 eth1
I think I found the answer.
Refer to:
http://man7.org/linux/man-pages/man7/ip.7.html
https://bugzilla.redhat.com/show_bug.cgi?id=231899
Linux has a bug (or at least a surprising default): IP_ADD_MEMBERSHIP effectively acts globally, so a socket bound to the wildcard address receives traffic for every multicast group joined on the host, even by another process. We need to set the IP_MULTICAST_ALL option to zero (0) to fix this problem.
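A sketch of how that fix could be applied to the socket above, assuming a Linux target: Boost.Asio has no named option for IP_MULTICAST_ALL, so this drops down to setsockopt() on the native descriptor.

#include <netinet/in.h>   // IPPROTO_IP, IP_MULTICAST_ALL (Linux-specific)

int off = 0;
// native_handle() on recent Boost versions; older releases call it native().
if (setsockopt(_receiveSocket->native_handle(), IPPROTO_IP, IP_MULTICAST_ALL,
               &off, sizeof(off)) != 0)
{
    std::cerr << "setsockopt(IP_MULTICAST_ALL) failed\n";
}
// Alternatively, binding each socket to its own group address instead of
// 0.0.0.0 also keeps the two streams separate.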