I want to get the URL content (http://127.0.0.1:1337/test/test), so in this case "/test/test". How can I do that?
using tcp = boost::asio::ip::tcp;

void ts3plugin_initWebsocket() {
    try
    {
        auto const address = boost::asio::ip::make_address("127.0.0.1");
        auto const port = static_cast<unsigned short>(std::atoi("1337"));

        boost::asio::io_context ioc{ 1 };
        tcp::acceptor acceptor{ ioc, {address, port} };

        while (true) {
            tcp::socket socket{ ioc };
            acceptor.accept(socket);
            ts3Functions.logMessage("Connected", LogLevel_INFO, "Plugin", 1);
        }
    }
    catch (const std::exception& e)
    {
        char msg[512];
        snprintf(msg, sizeof(msg), "Error: %s", e.what());
        ts3Functions.logMessage(msg, LogLevel_INFO, "Plugin", 1);
    }
}
This has little to do with Asio, and everything to do with HTTP. You want to make a GET request; see e.g. https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/GET
Going by the naming, you might also want to upgrade to the WebSocket protocol over HTTP. Instead of figuring out how to do all that, perhaps just go by one of the Beast examples:
https://www.boost.org/doc/libs/1_78_0/libs/beast/doc/html/beast/examples.html
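For illustration, here is a rough sketch of such a GET request against the URL from the question, loosely based on the Beast synchronous client example (untested against your plugin setup; the host, port, and target "/test/test" are taken straight from the question):

// Sketch: synchronous GET of http://127.0.0.1:1337/test/test with Boost.Beast.
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <iostream>

int main() {
    namespace beast = boost::beast;
    namespace http  = beast::http;
    namespace net   = boost::asio;
    using tcp       = net::ip::tcp;

    net::io_context ioc;
    beast::tcp_stream stream(ioc);
    stream.connect(tcp::endpoint{net::ip::make_address("127.0.0.1"), 1337});

    // Send GET /test/test
    http::request<http::empty_body> req{http::verb::get, "/test/test", 11};
    req.set(http::field::host, "127.0.0.1");
    http::write(stream, req);

    // Read and print the response
    beast::flat_buffer buffer;
    http::response<http::string_body> res;
    http::read(stream, buffer, res);
    std::cout << res << "\n";

    beast::error_code ec;
    stream.socket().shutdown(tcp::socket::shutdown_both, ec); // ignore shutdown errors
}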
Using the following boost::asio code I run a loop of 1M sequential HTTP calls to a simple Node.js HTTP service (running in Docker) that generates random numbers, but after a few thousand calls I start getting async_connect errors. The Node.js side is not producing any errors and I believe it works OK.
To avoid resolving the host in every call, and to try to speed things up, I am caching the endpoint, but it makes no difference; I have tested both ways.
Can anyone see what is wrong with my code below?
Are there any best practices for a stress-test tool using asio that I am missing?
//------------------------------------------------------------------------------
// https://www.boost.org/doc/libs/1_70_0/libs/beast/doc/html/beast/using_io/timeouts.html
HttpResponse HttpClientAsyncBase::_http(HttpRequest&& req)
{
    using namespace boost::beast;
    namespace net = boost::asio;
    using tcp = net::ip::tcp;

    HttpResponse res;
    req.prepare_payload();
    boost::beast::error_code ec = {};
    const HOST_INFO host = resolve(req.host(), req.port, req.resolve);

    net::io_context m_io;
    boost::asio::spawn(m_io, [&](boost::asio::yield_context yield)
    {
        size_t retries = 0;
        tcp_stream stream(m_io);
        if (req.timeout_seconds == 0) get_lowest_layer(stream).expires_never();
        else get_lowest_layer(stream).expires_after(std::chrono::seconds(req.timeout_seconds));

        get_lowest_layer(stream).async_connect(host, yield[ec]);
        if (ec) return;

        http::async_write(stream, req, yield[ec]);
        if (ec)
        {
            stream.close();
            return;
        }

        flat_buffer buffer;
        http::async_read(stream, buffer, res, yield[ec]);
        stream.close();
    });
    m_io.run();

    if (ec)
        throw boost::system::system_error(ec);
    return std::move(res);
}
I have tried both sync/async implementations of a boost http client and I get the exact same problem.
The error I get is "You were not connected because a duplicate name exists on the network. If joining a domain, go to System in Control Panel to change the computer name and try again. If joining a workgroup, choose another workgroup name [system:52]"
So, I decided to... just try. I made your code into a self-contained example:
#include <boost/asio/spawn.hpp>
#include <boost/beast.hpp>
#include <fmt/ranges.h>
#include <iostream>
namespace http = boost::beast::http;

//------------------------------------------------------------------------------
// https://www.boost.org/doc/libs/1_70_0/libs/beast/doc/html/beast/using_io/timeouts.html
struct HttpRequest : http::request<http::string_body> { // SEHE: don't do this
    using base_type = http::request<http::string_body>;
    using base_type::base_type;

    std::string host() const { return "127.0.0.1"; }
    uint16_t    port            = 80;
    bool        resolve         = true;
    int         timeout_seconds = 0;
};

using HttpResponse = http::response<http::vector_body<uint8_t>>; // Do this or aggregation instead

struct HttpClientAsyncBase {
    HttpResponse _http(HttpRequest&& req);

    using HOST_INFO = boost::asio::ip::tcp::endpoint;

    static HOST_INFO resolve(std::string const& host, uint16_t port, bool resolve) {
        namespace net = boost::asio;
        using net::ip::tcp;

        net::io_context ioc;
        tcp::resolver   r(ioc);

        using flags = tcp::resolver::query::flags;
        auto f = resolve ? flags::address_configured
                         : static_cast<flags>(flags::numeric_host | flags::numeric_service);

        tcp::resolver::query q(tcp::v4(), host, std::to_string(port), f);
        auto it = r.resolve(q);
        assert(it.size());
        return HOST_INFO{it->endpoint()};
    }
};
HttpResponse HttpClientAsyncBase::_http(HttpRequest&& req) {
    using namespace boost::beast;
    namespace net = boost::asio;
    using net::ip::tcp;

    HttpResponse res;
    req.prepare_payload();
    boost::beast::error_code ec = {};
    const HOST_INFO host = resolve(req.host(), req.port, req.resolve);

    net::io_context m_io;
    spawn(m_io, [&](net::yield_context yield) {
        // size_t retries = 0;
        tcp_stream stream(m_io);
        if (req.timeout_seconds == 0)
            get_lowest_layer(stream).expires_never();
        else
            get_lowest_layer(stream).expires_after(std::chrono::seconds(req.timeout_seconds));

        get_lowest_layer(stream).async_connect(host, yield[ec]);
        if (ec)
            return;

        http::async_write(stream, req, yield[ec]);
        if (ec) {
            stream.close();
            return;
        }

        flat_buffer buffer;
        http::async_read(stream, buffer, res, yield[ec]);
        stream.close();
    });
    m_io.run();

    if (ec)
        throw boost::system::system_error(ec);
    return res;
}

int main() {
    for (int i = 0; i < 100'000; ++i) {
        HttpClientAsyncBase hcab;
        HttpRequest r(http::verb::get, "/bytes/10", 11);
        r.timeout_seconds = 0;
        r.port = 80;
        r.resolve = false;
        auto res = hcab._http(std::move(r));

        std::cout << res.base() << "\n";
        fmt::print("Data: {::02x}\n", res.body());
    }
}
(Side note, this is using docker run -p 80:80 kennethreitz/httpbin to run the server side)
While this is about 10x faster than running curl to do the equivalent requests in a bash loop, none of this is particularly stressing. There's nothing async about it, and resource usage seems mild and stable under memory profiling.
(for completeness I verified identical results with timeout_seconds = 1)
Since what you're doing is literally the opposite of async IO, I'd write it much simpler:
struct HttpClientAsyncBase {
    net::io_context m_io;
    HttpResponse _http(HttpRequest&& req);
    static auto resolve(std::string const& host, uint16_t port, bool resolve);
};

HttpResponse HttpClientAsyncBase::_http(HttpRequest&& req) {
    HttpResponse res;
    req.requestObject.prepare_payload();
    const auto host = resolve(req.host(), req.port, req.resolve);

    beast::tcp_stream stream(m_io);
    if (req.timeout_seconds == 0)
        stream.expires_never();
    else
        stream.expires_after(std::chrono::seconds(req.timeout_seconds));

    stream.connect(host);
    write(stream, req.requestObject);

    beast::flat_buffer buffer;
    read(stream, buffer, res);
    stream.close();
    return res;
}
That's just simpler, runs faster and does the same, down to the exceptions.
But you're probably trying to cause stress; perhaps you instead need to reuse some connections and multi-thread?
You can see a very complete example of just that here:
How do I make this HTTPS connection persistent in Beast?
It includes reconnecting dropped connections, connections to different hosts, varied requests etc.
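As a rough sketch of the reuse idea only (no TLS, no reconnect/retry logic; the 127.0.0.1:80 httpbin endpoint is the one from the example above), a single stream can carry many requests:

// Sketch: connect once, then send many requests over the same stream.
// Assumes the server honors keep-alive; the linked answer is far more complete.
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <cstdint>

int main() {
    namespace beast = boost::beast;
    namespace http  = beast::http;
    namespace net   = boost::asio;
    using tcp       = net::ip::tcp;

    net::io_context ioc;
    auto endpoint = tcp::endpoint{net::ip::make_address("127.0.0.1"), 80};

    beast::tcp_stream stream(ioc);
    stream.connect(endpoint);

    for (int i = 0; i < 1'000; ++i) {
        http::request<http::empty_body> req{http::verb::get, "/bytes/10", 11};
        req.set(http::field::host, "127.0.0.1");
        req.keep_alive(true);                 // ask the server not to close
        http::write(stream, req);

        beast::flat_buffer buffer;
        http::response<http::vector_body<uint8_t>> res;
        http::read(stream, buffer, res);

        if (res.need_eof()) {                 // the server chose to close anyway
            stream.close();
            stream.connect(endpoint);
        }
    }
}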
Alan's comments gave me the right pointers, and using netstat -a I soon found that it was a port-leakage problem, with thousands of ports left in the TIME_WAIT state after running the code for a brief time.
The root cause was on both the client and the server:
In the Node.js server I had to make sure that responses close the connection by adding
response.setHeader("connection", "close");
In the boost::asio C++ code I replaced stream.close() with
stream.socket().shutdown(boost::asio::ip::tcp::socket::shutdown_both, ec);
That seems to make all the difference. Also I made sure to use
req.set(boost::beast::http::field::connection, "close");
in my requests.
I verified with the tool running for over 5 hours with no problems at all, so I guess the problem is solved!
Implementing 'Abortive TCP/IP Close' with boost::asio to treat EADDRNOTAVAIL and TIME_WAIT for HTTP client stress test tool
I am revisiting the issue to offer an alternative that actually worked much better. As a reminder, the objective was to develop a stress-test tool for hitting a server with 1M requests. Even though my previous solution worked on Windows, when I loaded the executable into Docker/Alpine it started crashing with SEGFAULT errors that I was unable to trace. The root cause seems to be related to boost::asio::spawn(m_io, [&](boost::asio::yield_context yield), but time pressure forced me to focus on solving the HTTP problem.
I decided to use synchronous HTTP and treat the EADDRNOTAVAIL and TIME_WAIT errors by following suggestions from Disable TIME_WAIT with boost sockets and TIME_WAIT with boost asio, and template code from https://www.boost.org/doc/libs/1_80_0/libs/beast/example/http/client/sync/http_client_sync.cpp.
For anyone having EADDRNOTAVAIL and TIME_WAIT problems with boost::asio, the solution that worked for me, and which is actually much faster than before on Windows, Linux, and Docker, is the following:
HttpResponse HttpClientSyncBase::_http(HttpRequest&& req)
{
    namespace beast = boost::beast;
    namespace http = beast::http;
    namespace net = boost::asio;
    using tcp = net::ip::tcp;

    HttpResponse res;
    req.prepare_payload();

    const auto host = req.host();
    const auto port = req.port;
    const auto target = req.target();
    const bool abortive_close = boost::iequals(req.header("Connection"), "close");
    const bool download_large_file = boost::iequals(req.header("X-LARGE-FILE-HINT"), "YES");

    beast::error_code ec;
    net::io_context ioc;

    // Resolve host:port for IPv4
    tcp::resolver resolver(ioc);
    const auto endpoints = resolver.resolve(boost::asio::ip::tcp::v4(), host, port);

    // Create stream and set timeouts
    beast::tcp_stream stream(ioc);
    if (req.timeout_seconds == 0) boost::beast::get_lowest_layer(stream).expires_never();
    else boost::beast::get_lowest_layer(stream).expires_after(std::chrono::seconds(req.timeout_seconds));

    // Caution: we can get address_not_available[EADDRNOTAVAIL] due to TIME_WAIT port exhaustion
    stream.connect(endpoints, ec);
    if (ec == boost::system::errc::address_not_available)
        throw beast::system_error{ ec };

    // Write HTTP request
    http::write(stream, req);

    // Read HTTP response (or download large file >8MB)
    beast::flat_buffer buffer;
    if (download_large_file)
    {
        _HttpResponse tmp;
        boost::beast::http::response_parser<boost::beast::http::string_body> parser{ std::move(tmp) };
        parser.body_limit(boost::none);
        boost::beast::http::read(stream, buffer, parser);
        res = HttpResponse(std::move(parser.release()));
    }
    else
    {
        http::read(stream, buffer, res);
    }

    // Try to shut down socket gracefully
    stream.socket().shutdown(tcp::socket::shutdown_both, ec);

    if (abortive_close)
    {
        // Read until no more data are in socket buffers
        // https://stackoverflow.com/questions/58983527/disable-time-wait-with-boost-sockets
        try
        {
            http::response<http::dynamic_body> res;
            beast::flat_buffer buffer;
            http::read(stream, buffer, res);
        }
        catch (...)
        {
            // should get end of stream here, ignore it
        }

        // Perform "Abortive TCP/IP Close" to minimize TIME_WAIT port exhaustion
        // https://stackoverflow.com/questions/35006324/time-wait-with-boost-asio
        try
        {
            // enable linger with timeout 0 to force abortive close
            boost::asio::socket_base::linger option(true, 0);
            stream.socket().set_option(option);
            stream.close();
        }
        catch (...)
        {
        }
    }
    else
    {
        try { stream.close(); } catch (...) {}
    }

    // Ignore not_connected and end_of_stream errors, handle the rest
    if (ec && ec != beast::errc::not_connected && ec != beast::http::error::end_of_stream)
        throw beast::system_error{ ec };

    return std::move(res);
}
In the sample above I should add error handling around the write, but I guess anyone can do that. _HttpResponse is the following and is the base for HttpResponse.
using _HttpRequest = boost::beast::http::message<true, boost::beast::http::string_body, boost::beast::http::fields>;
using _HttpResponse = boost::beast::http::message<false, boost::beast::http::string_body, boost::beast::http::fields>;
using HttpHeaders = boost::beast::http::header<1, boost::beast::http::basic_fields<std::allocator<char>>>;
For what it's worth, when I started, the estimate for the job was 5-7 days. Using connection=close in my previous solution brought it down to 7-8 hours. Using the Abortive TCP/IP Close I got it down to 1.5 hours.
Funny thing is, the server, also boost::asio, could handle the stress while the original stress tool didn't. Finally both the server and its stress test tool work just fine! The code also demonstrates how to download a large file (over 8MB) which was another side-problem, as I needed to download the test results from the server.
I'm trying to implement two-way multicast UDP communication using Boost.Asio.
Actually what I need is client-server architecture.
I used these tutorials and examples and modified them:
https://www.boost.org/doc/libs/1_71_0/doc/html/boost_asio/example/cpp11/multicast/receiver.cpp
https://www.boost.org/doc/libs/1_71_0/doc/html/boost_asio/example/cpp11/multicast/sender.cpp
https://www.boost.org/doc/libs/1_71_0/doc/html/boost_asio/example/cpp11/futures/daytime_client.cpp
https://www.boost.org/doc/libs/1_71_0/doc/html/boost_asio/tutorial/tutdaytime6.html
The futures daytime client and the daytime server work perfectly fine, unless I use a multicast address, which I have to; then they just don't communicate.
I modified the client's daytime function and the server example's constructor to look like this:
Client:
void get_daytime(boost::asio::io_context& io_context,
                 const boost::asio::ip::address& listenAddress,
                 const boost::asio::ip::address& multicastAddress)
{
    try
    {
        udp::socket socket(io_context);

        boost::asio::ip::udp::endpoint listenEndpoint(listenAddress, multicastPort);
        socket.open(listenEndpoint.protocol());
        socket.set_option(boost::asio::ip::udp::socket::reuse_address(true));
        socket.bind(listenEndpoint);
        socket.set_option(boost::asio::ip::multicast::join_group(multicastAddress));

        std::array<char, 1U> send_buf = {{ 0 }};
        std::future<std::size_t> send_length =
            socket.async_send_to(boost::asio::buffer(send_buf),
                                 listenEndpoint,
                                 boost::asio::use_future);
        send_length.get();

        std::array<char, 128U> recv_buf{};
        udp::endpoint sender_endpoint;
        std::future<std::size_t> recv_length =
            socket.async_receive_from(
                boost::asio::buffer(recv_buf),
                sender_endpoint,
                boost::asio::use_future);

        std::cout.write(
            recv_buf.data(),
            recv_length.get()); // Blocks until receive is complete.
    }
    catch (std::system_error& e)
    {
        std::cerr << e.what() << std::endl;
    }
}
Server:
udp_server(boost::asio::io_context& io_context,
           const boost::asio::ip::address& listenAddress,
           const boost::asio::ip::address_v4& multicastAddress)
    : socket_(io_context)
{
    boost::asio::ip::udp::endpoint listenEndpoint(listenAddress, multicastPort);
    socket_.open(listenEndpoint.protocol());
    socket_.set_option(boost::asio::ip::udp::socket::reuse_address(true));
    socket_.bind(listenEndpoint);
    socket_.set_option(boost::asio::ip::multicast::join_group(multicastAddress));

    start_receive();
}
How should I modify the code in order to make it work over multicast? Thanks.
I found a solution to my problem. I didn't have to modify the get_daytime() function, though my udp_server() constructor now looks like this:
udp_server(boost::asio::io_context& io_context)
    : socket_(io_context, udp::endpoint(boost::asio::ip::make_address("0.0.0.0"), 60000))
{
    socket_.set_option(boost::asio::ip::multicast::join_group(boost::asio::ip::make_address("239.192.0.1")));
    start_receive();
}
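For completeness, here is a small sketch of the sending side, in the spirit of the linked sender.cpp example (illustrative only; group 239.192.0.1 and port 60000 as above). Replies sent to the multicast endpoint reach every member of the group:

// Sketch: send a datagram to the multicast group endpoint.
#include <boost/asio.hpp>
#include <iostream>

int main() {
    using boost::asio::ip::udp;
    boost::asio::io_context io_context;

    // The destination is the multicast group itself
    udp::endpoint multicast_endpoint(boost::asio::ip::make_address("239.192.0.1"), 60000);
    udp::socket socket(io_context, multicast_endpoint.protocol());

    std::string message = "daytime reply";
    boost::system::error_code ec;
    socket.send_to(boost::asio::buffer(message), multicast_endpoint, 0, ec);
    if (ec)
        std::cerr << "send_to failed: " << ec.message() << "\n";
}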
I'm writing a secure SSL echo server with Boost.Asio and coroutines. I'd like this server to be able to serve multiple concurrent clients; this is my code:
try {
    boost::asio::io_service io_service;

    boost::asio::spawn(io_service, [&io_service](boost::asio::yield_context yield) {
        auto ctx = boost::asio::ssl::context{ boost::asio::ssl::context::sslv23 };
        ctx.set_options(
            boost::asio::ssl::context::default_workarounds
            | boost::asio::ssl::context::no_sslv2
            | boost::asio::ssl::context::single_dh_use);
        ctx.use_private_key_file(..); // My data setup
        ctx.use_certificate_chain_file(...); // My data setup

        boost::asio::ip::tcp::acceptor acceptor(io_service,
            boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));

        for (;;) {
            boost::asio::ssl::stream<boost::asio::ip::tcp::socket> sock{ io_service, ctx };
            acceptor.async_accept(sock.next_layer(), yield);
            sock.async_handshake(boost::asio::ssl::stream_base::server, yield);

            auto ec = boost::system::error_code{};
            char data_[1024];
            auto nread = sock.async_read_some(boost::asio::buffer(data_, 1024), yield[ec]);
            if (ec == boost::asio::error::eof)
                return; //connection closed cleanly by peer
            else if (ec)
                throw boost::system::system_error(ec); //some other error, is this desirable?

            sock.async_write_some(boost::asio::buffer(data_, nread), yield[ec]);
            if (ec == boost::asio::error::eof)
                return; //connection closed cleanly by peer
            else if (ec)
                throw boost::system::system_error(ec); //some other error

            // Shutdown gracefully
            sock.async_shutdown(yield[ec]);
            if (ec && (ec.category() == boost::asio::error::get_ssl_category())
                && (SSL_R_PROTOCOL_IS_SHUTDOWN == ERR_GET_REASON(ec.value())))
            {
                sock.lowest_layer().close();
            }
        }
    });

    io_service.run();
}
catch (std::exception& e)
{
    std::cerr << "Exception: " << e.what() << "\n";
}
Anyway, I'm not sure the code above will do: in theory, calling async_accept returns control to the io_service manager.
Will another connection be accepted if one has already been accepted, i.e. execution is already past the async_accept line?
It's a bit hard to understand the specifics of your question, since the code is incomplete (e.g., there's a return in your block, but it's unclear what that block is part of).
Notwithstanding, the documentation contains an example of a TCP echo server using coroutines. It seems you basically need to add SSL support to it, to adapt it to your needs.
If you look at main, it has the following chunk:
boost::asio::spawn(io_service,
    [&](boost::asio::yield_context yield)
    {
        tcp::acceptor acceptor(io_service,
            tcp::endpoint(tcp::v4(), std::atoi(argv[1])));

        for (;;)
        {
            boost::system::error_code ec;
            tcp::socket socket(io_service);
            acceptor.async_accept(socket, yield[ec]);
            if (!ec) std::make_shared<session>(std::move(socket))->go();
        }
    });
This loops endlessly, and, following each (successful) call to async_accept, handles accepting the next connection (while this connection and others might still be active).
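Translated to the SSL code in your question, the same idea would look roughly like this (a sketch only, untested; io_service and port as in your snippet, context setup elided): accept in one coroutine and spawn another coroutine per accepted client, so the loop is back at async_accept immediately.

// Sketch: one coroutine accepts; each accepted client gets its own coroutine.
boost::asio::spawn(io_service, [&](boost::asio::yield_context yield) {
    auto ctx = boost::asio::ssl::context{ boost::asio::ssl::context::sslv23 };
    // ... set_options / key / certificate chain exactly as in your code ...

    boost::asio::ip::tcp::acceptor acceptor(io_service,
        boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port));

    for (;;) {
        boost::system::error_code ec;
        auto sock = std::make_shared<
            boost::asio::ssl::stream<boost::asio::ip::tcp::socket>>(io_service, ctx);
        acceptor.async_accept(sock->next_layer(), yield[ec]);
        if (ec) continue;

        // Per-connection coroutine: the acceptor loop is back at async_accept right away.
        boost::asio::spawn(io_service, [sock](boost::asio::yield_context yield) {
            boost::system::error_code ec;
            sock->async_handshake(boost::asio::ssl::stream_base::server, yield[ec]);
            if (ec) return;

            char data[1024];
            auto n = sock->async_read_some(boost::asio::buffer(data), yield[ec]);
            if (ec) return;

            boost::asio::async_write(*sock, boost::asio::buffer(data, n), yield[ec]);
            sock->async_shutdown(yield[ec]); // shutdown errors ignored in this sketch
        });
    }
});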
Again, I'm not sure about your code, but it contains exits from the loop like
return; //connection closed cleanly by peer
To illustrate the point, here are two applications.
The first is a Python multiprocessing echo client, adapted from PMOTW:
import socket
import sys
import multiprocessing

def session(i):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_address = ('localhost', 5000)
    print 'connecting to %s port %s' % server_address
    sock.connect(server_address)
    print 'connected'
    for _ in range(300):
        try:
            # Send data
            message = 'client ' + str(i) + ' message'
            print 'sending "%s"' % message
            sock.sendall(message)
            # Look for the response
            amount_received = 0
            amount_expected = len(message)
            while amount_received < amount_expected:
                data = sock.recv(16)
                amount_received += len(data)
                print 'received "%s"' % data
        except:
            print >>sys.stderr, 'closing socket'
            sock.close()

if __name__ == '__main__':
    pool = multiprocessing.Pool(8)
    pool.map(session, range(8))
The details are not that important (although it's Python, and therefore easy to read), but the point is that it opens up 8 processes, and each engages the same asio echo server (below) with 300 messages.
When run, it outputs
...
received "client 1 message"
sending "client 1 message"
received "client 2 message"
sending "client 2 message"
received "client 3 message"
received "client 0 message"
sending "client 3 message"
sending "client 0 message"
...
showing that the echo sessions are indeed interleaved.
Now for the echo server. I've slightly adapted the example from the docs:
#include <cstdlib>
#include <iostream>
#include <memory>
#include <utility>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

class session :
    public std::enable_shared_from_this<session> {
public:
    session(tcp::socket socket) : socket_(std::move(socket)) {}

    void start() { do_read(); }

private:
    void do_read() {
        auto self(shared_from_this());
        socket_.async_read_some(
            boost::asio::buffer(data_, max_length),
            [this, self](boost::system::error_code ec, std::size_t length) {
                if (!ec)
                    do_write(length);
            });
    }

    void do_write(std::size_t length) {
        auto self(shared_from_this());
        socket_.async_write_some(
            boost::asio::buffer(data_, length),
            [this, self](boost::system::error_code ec, std::size_t /*length*/) {
                if (!ec)
                    do_read();
            });
    }

private:
    tcp::socket socket_;
    enum { max_length = 1024 };
    char data_[max_length];
};

class server {
public:
    server(boost::asio::io_service& io_service, short port) :
        acceptor_(io_service, tcp::endpoint(tcp::v4(), port)),
        socket_(io_service) {
        do_accept();
    }

private:
    void do_accept() {
        acceptor_.async_accept(
            socket_,
            [this](boost::system::error_code ec) {
                if (!ec)
                    std::make_shared<session>(std::move(socket_))->start();
                do_accept();
            });
    }

    tcp::acceptor acceptor_;
    tcp::socket socket_;
};

int main(int argc, char* argv[]) {
    const int port = 5000;
    try {
        boost::asio::io_service io_service;
        server s{io_service, port};
        io_service.run();
    }
    catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << "\n";
    }
}
This shows that this server indeed interleaves.
Note that this is not the coroutine version. While I once played with the coroutine version a bit, I just couldn't get it to build on my current box (also, as sehe notes in the comments below, you might anyway prefer this more mainstream version for now).
However, this is not a fundamental difference w.r.t. your question. The non-coroutine version has callbacks explicitly launching new operations and supplying the next callback; the coroutine version uses a more sequential-looking paradigm. In both versions, each call returns to asio's control loop, which monitors all the current operations that can proceed.
From the asio coroutine docs:
Coroutines let you create a structure that mirrors the actual program logic. Asynchronous operations don’t split functions, because there are no handlers to define what should happen when an asynchronous operation completes. Instead of having handlers call each other, the program can use a sequential structure.
It's not that the sequential structure makes all operations sequential - that would eradicate the entire need for asio.
I'm following the tutorials on the official Boost web site: http://www.boost.org/doc/libs/1_55_0/doc/html/boost_asio/tutorial/tutdaytime1.html.
The program works perfectly if I connect to "localhost" or "127.0.0.1" on the same machine. But if I run the client on another computer on the same network, it fails to connect to the server. Why is this happening, and what would I have to do to get the client to work from another network?
Error: connect: No connection could be made because the target machine actively refused it.
Client:
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

int main()
{
    try
    {
        boost::asio::io_service io_service;
        tcp::resolver resolver(io_service);

        const char* serverName = "localhost";
        tcp::resolver::query query(serverName, "daytime");
        tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);

        tcp::socket socket(io_service);
        while (true)
        {
            boost::asio::connect(socket, endpoint_iterator);
            for (;;)
            {
                boost::array<char, 128> buf;
                boost::system::error_code error;
                size_t len = socket.read_some(boost::asio::buffer(buf), error);

                if (error == boost::asio::error::eof)
                    break; // Connection closed cleanly by peer.
                else if (error)
                    throw boost::system::system_error(error); // Some other error.

                std::cout.write(buf.data(), len);
                std::cout << "\n";
            }
        }
    }
    catch (std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }
    return 0;
}
Server:
#include <iostream>
#include <string>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

int main()
{
    try
    {
        boost::asio::io_service io_service;
        tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 13));

        for (;;)
        {
            tcp::socket socket(io_service);
            acceptor.accept(socket);

            std::string message = "This is the Server!";
            boost::system::error_code ignored_error;
            boost::asio::write(socket, boost::asio::buffer(message), ignored_error);
        }
    }
    catch (std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }
    return 0;
}
I would guess your problem might be that you return on the first error. Resolving gives you an iterator over a number of endpoints. You try the first of those, and if it does not work out you give up instead of letting the iterator go on.
Again, I am by no means an expert in boost::asio, and far less in its TCP world, but resolve may return more than one endpoint (for example IPv4 and IPv6), and possibly the one you try first does not work out here.
For testing you could create the endpoint yourself by constructing an ip::address from the server's address with from_string() (works only on your local network of course) and then using it for your endpoint:

boost::asio::ip::address address =
    boost::asio::ip::address::from_string("the.servers.ip.here");
boost::asio::ip::tcp::endpoint endpoint(address, 13);
socket.connect(endpoint);
And see if that works. If not, it probably is a problem on the server side.
To run the server and client on separate networks, make the client connect to the server's external IP address. This is obvious, but external IP addresses change constantly; to solve that, you can go to www.noip.com and create a name that links to your IP address. That way, in the client all you have to do is specify a name instead of an IP address.
Most likely a firewall issue: if you are using Windows for the server, check Windows Firewall; if you are using Linux, check the iptables.
I made a socket app that uses Boost.Asio; however, it seems to take a lot of CPU when I try to read anything from the socket.
At the moment I use a wrapper class so I don't have to expose the Boost header files in my header file; it looks something like this:
class SocketHandle
{
public:
    SocketHandle()
    {
        m_pSocket = NULL;
        m_pService = NULL;
    }

    ~SocketHandle()
    {
        delete m_pSocket;
        delete m_pService;
    }

    void connect(const char* host, const char* port)
    {
        if (m_pSocket || m_pService)
            return;

        m_pService = new boost::asio::io_service();

        tcp::resolver resolver(*m_pService);
        tcp::resolver::query query(tcp::v4(), host, port);
        tcp::resolver::iterator endpoint_iterator = resolver.resolve(query);
        tcp::resolver::iterator end;

        m_pSocket = new tcp::socket(*m_pService);

        boost::system::error_code error = boost::asio::error::host_not_found;
        while (error && endpoint_iterator != end)
        {
            (*m_pSocket).close();
            (*m_pSocket).connect(*endpoint_iterator++, error);
        }
        if (error)
            throw ...
    }

    tcp::socket* operator->()
    {
        return m_pSocket;
    }

private:
    tcp::socket *m_pSocket;
    boost::asio::io_service *m_pService;
};
and then I'm reading from the socket like so:
size_t recv(char *data, size_t size)
{
    boost::system::error_code error;
    size_t len = (*m_pSocket)->read_some(boost::asio::buffer(data, size), error);
    if (error)
        throw ...
    return len;
}
Am I doing something wrong? Is there a better way to read data from the socket?
Boost 1.39.0
Visual C++
Windows
A change you may wish to consider (and I highly recommend) is to make your socket calls asynchronous. That way you don't have to worry about threads being blocked or any socket calls spinning internally (which I suspect may be what you're seeing). Instead, you simply provide a callback that will receive any errors and the number of bytes received.
There are plenty of examples in the Boost docs that illustrate how to do this, and I've found that it lends itself to a much more efficient use of threads and processor resources.
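To give a flavour of that pattern (a sketch only, written against a recent Boost; the lambda and the free connect() function need something newer than 1.39, and "example.host"/"5000" are placeholders):

// Sketch: async_read_some hands the bytes (or the error) to a callback
// instead of blocking a thread; chaining the callback keeps the read going.
#include <boost/asio.hpp>
#include <iostream>

using boost::asio::ip::tcp;

void start_read(tcp::socket& socket, char* data, std::size_t size) {
    socket.async_read_some(boost::asio::buffer(data, size),
        [&socket, data, size](const boost::system::error_code& error, std::size_t len) {
            if (error) {
                std::cerr << "read error: " << error.message() << "\n";
                return;
            }
            std::cout << "received " << len << " bytes\n";
            start_read(socket, data, size);   // chain the next read
        });
}

int main() {
    boost::asio::io_service io_service;
    tcp::resolver resolver(io_service);
    tcp::resolver::query query(tcp::v4(), "example.host", "5000");

    tcp::socket socket(io_service);
    boost::asio::connect(socket, resolver.resolve(query));

    char data[1024];
    start_read(socket, data, sizeof(data));
    io_service.run();   // callbacks run here; no busy-waiting
}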
You should avoid the tight while loop:
// BAD.
while (error && endpoint_iterator != end)
{
    (*m_pSocket).close();
    (*m_pSocket).connect(*endpoint_iterator++, error);
}
Instead try something like:
try
{
    (*m_pSocket).connect(*endpoint_iterator++, error);
    // ...
}
catch (std::exception& ex)
{
    // Release resources, then try connecting again.
}
Also see the examples in the Asio documentation for the right idioms of using Asio.
Consider using the free function instead,
size_t len = asio::read(*m_pSocket,asio::buffer(data, size), error);
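Note that, unlike read_some, the free function boost::asio::read keeps reading until the whole buffer is full (or an error occurs). If you only want whatever has arrived, a completion condition relaxes that; a sketch against a plain tcp::socket (adapt to the wrapper as needed):

// Sketch: read returns as soon as at least one byte has arrived.
#include <boost/asio.hpp>

size_t recv(boost::asio::ip::tcp::socket& socket, char* data, size_t size)
{
    boost::system::error_code error;
    size_t len = boost::asio::read(socket,
        boost::asio::buffer(data, size),
        boost::asio::transfer_at_least(1),   // hand back data as soon as any arrives
        error);
    if (error)
        throw boost::system::system_error(error);
    return len;
}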