Adding protocol layers over socketpair() to get SSL-WebSocket (boost::beast?) - C++

I am experimenting with boost::beast and WebSockets for a project. I would like to layer WebSockets (and ideally SSL) onto a socketpair (boost::asio::local::stream_protocol::socket). This seems like it should be easy, but I'm lost in g++ template errors.
Is there a way to get WebSockets running over an already-connected generic::stream_protocol::socket?
int
main( int argc, char **argv )
{
    io_context ioctx;

    local::stream_protocol::socket socket1( ioctx );
    local::stream_protocol::socket socket2( ioctx );
    local::connect_pair( socket1, socket2 );

    boost::beast::websocket::??? ws1( socket1 );
    boost::beast::websocket::??? ws2( socket2 );

    cout << "Here we go..." << endl;

    // queue an initial read or write on ws1 or ws2...

    ioctx.run( );

    // all I/O operations completed
}

Related

Connect with an already created named pipe with boost

I'm looking into using third-party libraries for IPC communication using named pipes on Windows. I've been using the Win32 API for this for the last few years and was interested in replacing my implementation with a tried-and-true open-source library.
I noticed that boost::process has an implementation of an async_pipe which would allow me to use it with boost::asio which would be really helpful for my application.
What I'm trying to do is create the named pipe on the server, which is a C# application.
Once the pipe has been created, connect to it with a client using the boost::process::async_pipe.
The problem I'm having is I don't see an API in boost::process that would allow me to connect with an already created named pipe. The constructors for async_pipe create the pipe instead of connecting to an already created pipe.
Below is the code I'm currently using in the client, which is erroneously creating the pipe:
boost::asio::io_context ctx;
std::vector<char> buffer( 8196, 0 );
boost::process::async_pipe pipe{ ctx, R"(\\.\pipe\TestPipe)" };

boost::asio::async_read( pipe, boost::asio::buffer( buffer ),
    [ &buffer ]( const boost::system::error_code& ec, std::size_t size )
    {
        if ( ec )
            std::cout << "Error: " << ec.message( ) << '\n';
        else
        {
            std::string message{ std::begin( buffer ), std::begin( buffer ) + size };
            std::cout << "Received message: " << message << '\n';
        }
    } );

ctx.run( );
I'm unsure if I can use boost::process to achieve what I want. I'm wondering if there is a way I could use CreateFileW to connect with the named pipe and then pass the HANDLE to async_pipe but I haven't found any documentation regarding that.
Question
How can I connect with an already created named pipe using boost?
OK, so I was going about it the wrong way.
After reading this issue on GitHub I realized I needed to use a stream_handle instead. Note: the pipe must be opened in OVERLAPPED mode for this to work.
Creating the stream_handle
static boost::asio::windows::stream_handle OpenPipe( io_context& context )
{
    constexpr const wchar_t* pipeName{ LR"(\\.\pipe\TestPipe)" };

    return { context, CreateFileW( pipeName,
                                   GENERIC_READ | GENERIC_WRITE,
                                   0, nullptr,
                                   OPEN_EXISTING,
                                   FILE_FLAG_OVERLAPPED,
                                   nullptr ) };
}
Once the stream_handle is created you can use the async functions it provides for communication.
Using the stream_handle
std::vector<char> buffer( 8196, 0 );

pipe.async_read_some( boost::asio::buffer( buffer ),
    [ &buffer ]( auto ec, auto size ) { } );

ZeroMQ IPC across several instances of a program

I am having some problems with inter-process communication in ZMQ between several instances of a program.
I am using Linux.
I am using zeromq/cppzmq, the header-only C++ binding for libzmq.
If I run two instances of this application (say in two terminals), providing one with an argument to be a listener and the other with an argument to be a sender, the listener never receives a message. I have tried TCP and IPC to no avail.
#include <zmq.hpp>
#include <string>
#include <iostream>
int ListenMessage();
int SendMessage(std::string str);
zmq::context_t global_zmq_context(1);
int main(int argc, char* argv[]) {
    std::string str = "Hello World";

    if (atoi(argv[1]) == 0)
        ListenMessage();
    else
        SendMessage(str);

    zmq_ctx_destroy(&global_zmq_context);
    return 0;
}

int SendMessage(std::string str) {
    assert(global_zmq_context);
    std::cout << "Sending \n";

    zmq::socket_t publisher(global_zmq_context, ZMQ_PUB);
    assert(publisher);

    int linger = 0;
    int rc = zmq_setsockopt(publisher, ZMQ_LINGER, &linger, sizeof(linger));
    assert(rc == 0);

    rc = zmq_connect(publisher, "tcp://127.0.0.1:4506");
    if (rc == -1) {
        printf("E: connect failed: %s\n", strerror(errno));
        return -1;
    }

    zmq::message_t message(static_cast<const void*>(str.data()), str.size());
    rc = publisher.send(message);
    if (rc == -1) {
        printf("E: send failed: %s\n", strerror(errno));
        return -1;
    }
    return 0;
}

int ListenMessage() {
    assert(global_zmq_context);
    std::cout << "Listening \n";

    zmq::socket_t subscriber(global_zmq_context, ZMQ_SUB);
    assert(subscriber);

    int rc = zmq_setsockopt(subscriber, ZMQ_SUBSCRIBE, "", 0);
    assert(rc == 0);

    int linger = 0;
    rc = zmq_setsockopt(subscriber, ZMQ_LINGER, &linger, sizeof(linger));
    assert(rc == 0);

    rc = zmq_bind(subscriber, "tcp://127.0.0.1:4506");
    if (rc == -1) {
        printf("E: bind failed: %s\n", strerror(errno));
        return -1;
    }

    std::vector<zmq::pollitem_t> p = {{subscriber, 0, ZMQ_POLLIN, 0}};

    while (true) {
        zmq::message_t rx_msg;
        // when timeout (the third argument here) is -1,
        // then block until ready to receive
        std::cout << "Still Listening before poll \n";
        zmq::poll(p.data(), 1, -1);
        std::cout << "Found an item \n"; // not reaching

        if (p[0].revents & ZMQ_POLLIN) {
            // received something on the first (only) socket
            subscriber.recv(&rx_msg);
            std::string rx_str;
            rx_str.assign(static_cast<char*>(rx_msg.data()), rx_msg.size());
            std::cout << "Received: " << rx_str << std::endl;
        }
    }
    return 0;
}
This code will work if I run one instance of the program with two threads:

std::thread t_sub(ListenMessage);
sleep(1); // Slow joiner in ZMQ PUB/SUB pattern
std::thread t_pub(SendMessage, str);

t_pub.join();
t_sub.join();
But I am wondering why when running two instances of the program the code above won't work?
Thanks for your help!
In case one has never worked with ZeroMQ, one may here enjoy to first look at "ZeroMQ Principles in less than Five Seconds" before diving into further details.
Q : wondering why when running two instances of the program the code above won't work?
This code will never fly - and it has nothing to do with thread-based nor process-based [CONCURRENT] processing.
It was caused by a wrong design of the Inter Process Communication.
ZeroMQ can provide for this either one of the supported transport-classes: { ipc:// | tipc:// | tcp:// | norm:// | pgm:// | epgm:// | vmci:// }, plus having an even smarter one for in-process comms, the inproc:// transport-class, ready for inter-thread comms, where a stack-less communication may enjoy the lowest-ever latency, being just a memory-mapped policy.
The selection of L3/L2-based networking stack for an Inter-Process-Communication is possible, yet sort of the most "expensive" option.
The Core Mistake :
Given that choice, any single process ( not speaking about a pair of processes ) will collide on an attempt to .bind() its AccessPoint onto the very same TCP/IP address:port#.
The Other Mistake :
Even for the sake of a solo programme launched, both of the spawned threads attempt to .bind() their private AccessPoints, yet neither makes an attempt to .connect() to a matching "opposite" AccessPoint.
At least one has to successfully .bind(), and
at least one has to successfully .connect(), so as to get a "channel", here of the PUB/SUB Archetype.
ToDo:
decide about a proper, right-enough Transport-Class ( best avoid an overkill to operate the full L3/L2-stack for localhost/in-process IPC )
refactor the Address:port# management ( for 2+ processes not to fail on .bind()-(s) to the same ( hard-wired ) address:port# )
always detect and handle appropriately the returned {PASS|FAIL}-s from API calls
always set LINGER to zero explicitly ( you never know )

Is the subscriber able to receive updates from multiple publishers in ZeroMQ with cpp binding?

In my code I am only able to receive the messages from the first publisher (on port 5556) to which I connect.
So do I need to close the first connection (5556) before connecting to second (5557)?
If so, then in the statement of ZeroMQ guide
"A subscriber can connect to more than one publisher, using one connect call each time. Data will then arrive and be interleaved ("fair-queued") so that no single publisher drowns out the others."
Does the phrase "using one connect call each time" mean we need to close the first connection before connecting to second publisher?
How can I connect to multiple publishers at the same time to receive messages from both?
Code:
#include <zmq.hpp>
#include <iostream>
#include <string>

int main (int argc, char *argv[])
{
    zmq::context_t context (1);
    zmq::socket_t subscriber (context, ZMQ_SUB);

    subscriber.connect("tcp://localhost:5556");
    subscriber.connect("tcp://localhost:5557");
    subscriber.setsockopt(ZMQ_SUBSCRIBE, "", 0); // subscribe to all messages

    // Process 10 updates
    int update_nbr;
    for (update_nbr = 0; update_nbr < 10; update_nbr++) {
        zmq::message_t update;
        subscriber.recv (&update);

        // Prints only the data from publisher bound to port 5556
        std::string updt = std::string(static_cast<char*>(update.data()), update.size());
        std::cout << "Received Update/Messages/TaskList " << update_nbr << " : " << updt << std::endl;
    }
    return 0;
}
Does it mean we need to close the first before second?
No.
One need not .close() so as to launch another call to the .connect(...) method.
How can I connect to multiple PUB-s to receive messages from both?
In cases where both the Fair-Queueing SUB-side Policy and identical Topic-filter Policy logic processing on the { PUB | SUB }-side ( version dependent ... ) remain plausible:
int main (int argc, char *argv[])
{
    zmq::context_t context (1);
    zmq::socket_t subscriber (context, ZMQ_SUB);

    subscriber.connect( "tcp://localhost:5556" ); // ipc://first  will have less
    subscriber.connect( "tcp://localhost:5557" ); // ipc://second protocol overheads
    subscriber.setsockopt( ZMQ_SUBSCRIBE, "", 0 );// subscribe to .recv() any message
    ...
}
In cases where it is not, use multiple SUB-side socket instances, each .connect()-ed to its respective non-balanced PUB, and use a non-blocking .poll() inside an event-loop, tightly engineered so as to ad-hoc monitor and handle all the non-balanced message-arrival event-streams, with a non-coherent Topic-filter Policy processed per each co-existing PUB/SUB ( or XPUB/XSUB ) message "event-stream".

Binding a subscriber socket and connecting a publisher socket in ZeroMQ is giving error when the code is run. Why?

In this code the subscriber socket ( in subscriber.cpp ) binds to port 5556.
It receives updates/messages from the publisher ( in publisher.cpp ), whose socket connects to the subscriber at 5556 and sends updates/messages to it.
I know that the convention is to .bind() a publisher and not to call .connect() on it. But in theory every socket type can .bind() or .connect().
But both programs give a zmq error when run. Why?
This is CPP code.
publisher.cpp
#include <iostream>
#include <zmq.hpp>
#include <zhelpers.hpp>

using namespace std;

int main () {
    zmq::context_t context (1);
    zmq::socket_t publisher(context, ZMQ_PUB);
    publisher.connect("tcp://*:5556");

    while (1) {
        zmq::message_t request (12);
        memcpy (request.data (), "Pub-1 Data", 12);
        sleep(1);
        publisher.send (request);
    }
    return 0;
}
subscriber.cpp
#include <iostream>
#include <zmq.hpp>

int main (int argc, char *argv[])
{
    zmq::context_t context (1);
    zmq::socket_t subscriber (context, ZMQ_SUB);
    subscriber.bind("tcp://localhost:5556");
    subscriber.setsockopt(ZMQ_SUBSCRIBE, "", 0); // subscribe to all messages

    // Process 10 updates
    int update_nbr;
    for (update_nbr = 0; update_nbr < 10; update_nbr++) {
        zmq::message_t update;
        subscriber.recv (&update);

        std::string updt = std::string(static_cast<char*>(update.data()), update.size());
        std::cout << "Received Update/Messages/TaskList " << update_nbr << " : " << updt << std::endl;
    }
    return 0;
}
No, there is no problem in the reversed .bind()/.connect().
This principally works fine.
Yet, the PUB/SUB Formal Archetype is subject to a so-called late-joiner syndrome.
Without thorough debugging details, as were requested above, one may just repeat the general rules of thumb:
In newer API versions one may
add rc = <aSocket>.setsockopt( ZMQ_CONFLATE, 1 ); assert( rc == 0 && "CONFLATE" );
add rc = <aSocket>.setsockopt( ZMQ_IMMEDIATE, 1 ); assert( rc == 0 && "IMMEDIATE" );
and so forth,
all that to better tune the context-instance and socket-instance attributes, so as to minimise the late-joiner syndrome effects.
There is no problem with reversed bind()/connect().
The code works once I changed the line
subscriber.bind("tcp://localhost:5556");
to
subscriber.bind("tcp://*:5556");
and
publisher.connect("tcp://*:5556");
to
publisher.connect("tcp://localhost:5556");

BOOST ASIO - How to write console server

I have to write an asynchronous TCP server.
The TCP server has to be managed from a console
(e.g.: remove a client, show a list of all connected clients, etc.).
The problem is: how can I attach (or write) a console which can call the above functionality?
Does this console have to be a client? Should I run this console client as a separate thread?
I have read a lot of tutorials and I couldn't find a solution to my problem.
ServerTCP code
class ServerTCP
{
public:
    ServerTCP(boost::asio::io_service& A_ioService, unsigned short A_uPortNumber = 13)
        : m_tcpAcceptor(A_ioService, tcp::endpoint(tcp::v4(), A_uPortNumber)), m_ioService (A_ioService)
    {
        start();
    }

private:
    void start()
    {
        ClientSessionPtr spClient(new ClientSession(m_tcpAcceptor.io_service(), m_connectedClients));

        m_tcpAcceptor.async_accept(spClient->getSocket(),
            boost::bind(&ServerTCP::handleAccept, this, spClient,
                boost::asio::placeholders::error));
    }

    void handleAccept(ClientSessionPtr A_spNewClient, const boost::system::error_code& A_nError)
    {
        if ( !A_nError )
        {
            A_spNewClient->start();
            start();
        }
    }

    boost::asio::io_service& m_ioService;
    tcp::acceptor m_tcpAcceptor;
    Clients m_connectedClients;
};
Main function:
try
{
    boost::asio::io_service ioService;
    ServerTCP server(ioService);
    ioService.run();
}
catch (std::exception& e)
{
    std::cerr << "Exception: " << e.what() << "\n";
}
Hello Sam, thanks for the reply. Could you be so kind as to show me some piece of code or some links to examples related to this problem?
Probably I don't understand correctly "... single threaded server ...".
In fact, in the "console" where I want to manage server operations, I need something like below:
main()
{
    cout << "Options: q - close server, s - show clients";

    while(1)
    {
        char key = _getch();
        switch( key )
        {
        case 'q':
            closeServer();
            break;
        case 's':
            showClients();
            break;
        }
    }
}
The problem is: how can I attach (or write) a console which can call the above functionality? Does this console have to be a client? Should I run this console client as a separate thread?
You don't need a separate thread, use a posix::stream_descriptor and assign STDIN_FILENO to it. Use async_read and handle the requests in the read handlers.
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

#include <iostream>

using namespace boost::asio;

class Input : public boost::enable_shared_from_this<Input>
{
public:
    typedef boost::shared_ptr<Input> Ptr;

public:
    static void create(
            io_service& io_service
            )
    {
        Ptr input(
                new Input( io_service )
                );
        input->read();
    }

private:
    explicit Input(
            io_service& io_service
            ) :
        _input( io_service )
    {
        _input.assign( STDIN_FILENO );
    }

    void read()
    {
        async_read(
                _input,
                boost::asio::buffer( &_command, sizeof(_command) ),
                boost::bind(
                    &Input::read_handler,
                    shared_from_this(),
                    placeholders::error,
                    placeholders::bytes_transferred
                    )
                );
    }

    void read_handler(
            const boost::system::error_code& error,
            size_t bytes_transferred
            )
    {
        if ( error ) {
            std::cerr << "read error: " << boost::system::system_error(error).what() << std::endl;
            return;
        }

        if ( _command != '\n' ) {
            std::cout << "command: " << _command << std::endl;
        }

        this->read();
    }

private:
    posix::stream_descriptor _input;
    char _command;
};

int
main()
{
    io_service io_service;
    Input::create( io_service );
    io_service.run();
}
If I understand the OP correctly, he/she wants to run an async TCP server that is controlled via a console, i.e. the console is used as the user interface.
In that case you don't need a separate client application to query the server for connected clients, etc.:
You need to spawn a thread that somehow calls the io_service::run method. Currently you are calling this from main. Since your server will probably be scoped in main, you need to do something like pass a reference to the server to the new thread. The io_service could e.g. be a member of the server class (unless your application has other requirements, in which case pass both the server and the io_service to the new thread).
Add the corresponding methods such as showClients, closeServer, etc. to your server class.
Make sure that these calls, which are triggered via the console, are thread-safe.
In your closeServer method you could for instance call io_service::stop, which would result in the server ending.