I'm trying to embed a telnet server in a data-capture program I've written. I've got both the data capture and the telnet server working in their own classes, but now I want to transfer data from one to the other, and I'm not sure where to start.
In the example below, I want to be able to send a command to the telnet server to request a data packet from the data capture thread.
So, in code (C++) this is what I want to do:
#include <thread>

void StartTelnetServer()
{
    MyTelnetClass tnet;
    tnet.Start(); // In here, server starts listening for connections.
}

void StartDataCapture()
{
    MyDataCapture dCap;
    dCap.Start(); // In here, data capture begins
}

int main()
{
    std::thread tnetThread(StartTelnetServer);
    std::thread dCapThread(StartDataCapture);
    // This will run until killed
    tnetThread.join();
    dCapThread.join();
}
I then want to telnet into it with a string command such as "SIZE" and have the telnet class query the latest dCap.GetSize(). There are a dozen or so bits of data that I'll want to access in this way. Do I need to declare a static structure of some sort that both classes access? Am I way off base?!
This needs to run on Linux, if that matters to anything.
If the telnet handler should be able to access the data-capture object, but not the other way around, you can create both objects in the main function and pass the data-capture object by reference to the telnet handler's constructor. Then start the threads using the Start member functions instead.
Something like
...
class MyDataCapture;
class MyTelnetClass
{
public:
    MyTelnetClass(MyDataCapture& dc)
        : dCap(dc)
    {}
    ...
private:
    MyDataCapture& dCap;
    ...
};
...
int main()
{
    MyDataCapture dCap;
    MyTelnetClass tnet{dCap};
    // Pass pointers so the threads run Start() on these objects,
    // not on copies of them.
    std::thread dCapThread(&MyDataCapture::Start, &dCap);
    std::thread tnetThread(&MyTelnetClass::Start, &tnet);
    ...
}
This way the telnet handler can just call functions in the data-capture object when needed. Be careful though so you don't get data races: protect shared data with mutexes and locks.
If you want the data-capture object to call functions in the telnet handler object as well, you can't use references in both directions; one of the objects will have to hold a pointer that is set after both have been constructed.
I am currently implementing a program in Qt that handles TCP communication between a PC program and external devices. The problem I have is more general, but I will use this one as an example.
My class hierarchy looks like this:
Program
/ \
Server_A <--> Server_B <--- External system
|
Sockets[]
/ \
Commands Confirmations
/ | \
Interf1 Interf2 Interf3
I can get a command from a device (Socket); the command goes to the Confirmations class, which does whatever Interface work is needed and returns a confirmation to the Socket, which sends it back to the device.
The problem occurs when a command is sent from the external system, which I also have to confirm:
I get a message on Server_B and pass it to Server_A with information about which socket to send the command to and which command to execute.
I pass the command to the particular socket.
The Socket passes the command to Commands, since that is where the logic for External System commands lives.
Commands prepares a message, runs its logic, and sends the message (through the Socket) to the device.
The Socket waits for a response.
The Socket gets the response, recognizes that it was a response to an external-system command, and passes it back to Commands.
Commands runs its logic.
Here it would all be fine, but the next step is:
Commands needs to confirm the success (or failure) to the external system.
So basically, what I have to do is pass a message from Commands to Server_B this way:
Commands -> Socket -> Server_A -> Server_B. In all of these classes I would have to create an otherwise unnecessary method just to pass this one piece of information along. Is there a way to solve this? In my programming it often happens that I have to pass something to a higher layer of my class structure, and it seems redundant to do it through additional methods that only pass the information further up.
I have provided a sample pseudocode for this problem:
class Program
{
    ServerB serverB;
    ServerA serverA;
};

class ServerB
{
    void send(QString msg);
};

class ServerA
{
    QVector<MySocket*> sockets;
};

class MySocket
{
    Commands commands;
    Confirmations confirmations;
};

class Commands
{
    void doLogic();
    void sendToExternalSystem(QString message); // How to realize it?
};
My program is much bigger, but I hope this gives you a clue about what I am trying to achieve. The simplest solution would be to add a method void sendToExternalSystem(QString message) to Sockets, Server_A and Server_B, as well as giving each class a pointer to its parent during construction (Commands would have access to the Socket, Sockets to Server_A, and Server_A to Server_B).
Finally, I came up with a solution. It was necessary to implement an ExternalCommand class, whose instances are created in Server_B.
In the minimal solution, it has:
1. a field QString message,
2. a method QString getMessage(),
3. a method void finish(QString),
4. a signal void sendToExternal(QString).
When I read the message sent from the external system in Server_B, I create an instance of this class and connect its sendToExternal signal to code that writes the reply back to the socket. In my code, it looks like this:
ExternalCommand::ExternalCommand(QString message, QObject* parent) : QObject(parent)
{
    this->message = message;
}

QString ExternalCommand::getMessage()
{
    return this->message;
}

void ExternalCommand::finish(QString outputMessage)
{
    emit sendToExternal(outputMessage);
}

void Server_B::onReadyRead()
{
    QTcpSocket* socket = dynamic_cast<QTcpSocket*>(sender());
    QString message = socket->readAll();
    ExternalCommand* cmd = new ExternalCommand(message);
    connect(cmd, &ExternalCommand::sendToExternal, socket,
            [socket](QString message) { socket->write(message.toUtf8()); });
}
It was also necessary to implement some type of object destruction in ExternalCommand once the command is sent, but it isn't the point of this question.
So once this is implemented, instead of passing the message as a QString, the message is passed to the lower levels as an ExternalCommand*, and once an answer is received, it can be sent back to the external system by calling ExternalCommand::finish(QString outputMessage). Of course, this is just a minimal solution to the problem.
Thanks to @MatG for pointing me to the Promise/Future pattern, which was helpful in finding this solution.
I'm trying to create a WebSocket Server.
I can establish a connection and everything works fine so far.
In this GitHub example the data is sent within the handleRequest() method that is called when a client connects.
But can I send data to the client from another class using the established WebSocket connection?
How can I achieve this? Is this even possible?
Thank you.
It is, of course, possible. In the example you referred to, you should have a member pointer to the WebSocket in the RequestHandlerFactory, e.g.:
class RequestHandlerFactory: public HTTPRequestHandlerFactory
{
    //...
private:
    shared_ptr<WebSocket> _pwebSocket;
};
pass it by reference to the WebSocketRequestHandler constructor:
return new WebSocketRequestHandler(_pwebSocket);
and WebSocketRequestHandler should look like this:
class WebSocketRequestHandler: public HTTPRequestHandler
{
public:
    // Hold a reference to the factory's shared_ptr so the factory sees the
    // WebSocket created in handleRequest().
    WebSocketRequestHandler(shared_ptr<WebSocket>& pWebSocket) : _pWebSocket(pWebSocket)
    {}

    void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response)
    {
        // ...
        _pWebSocket = make_shared<WebSocket>(request, response);
        // ...
    }

private:
    shared_ptr<WebSocket>& _pWebSocket;
};
Now, after the request handler creates it, you will have a pointer to the WebSocket in the factory (which is long-lived, unlike the RequestHandler, which comes and goes with every request). Keep in mind that the handler executes in its own thread, so you should have some kind of locking or notification mechanism to signal when the WebSocket has actually been created by the handler (the bool cast of _pwebSocket will be true after the WebSocket has been successfully created).
The above example only illustrates the case with a single WebSocket - if you want to have multiple ones, you should have an array or vector of pointers and add/remove them as needed. In any case, the WebSocket pointer(s) need not necessarily reside in the factory - you can either (a) put them elsewhere in your application and propagate them to the factory/handler or (b) have a global facility (with proper multi-thread-access mechanism) holding the WebSocket(s).
Referring to HTTP Server- Single threaded Implementation
I am trying to explicitly control the lifetime of a server instance.
My requirements are:
1) I should be able to explicitly destroy the server.
2) I need to keep multiple server instances alive, each listening on a different port.
3) A Manager class maintains a list of all active server instances and should be able to create and destroy them via create and drop methods.
I am trying to implement requirement 1 and have come up with this code:
void server::stop()
{
    DEBUG_MSG("Stopped");
    io_service_.post(boost::bind(&server::handle_stop, this));
}
where handle_stop() is
void server::handle_stop()
{
    // The server is stopped by cancelling all outstanding asynchronous
    // operations. Once all operations have finished the io_service::run() call
    // will exit.
    acceptor_.close();
    connection_manager_.stop_all();
}
I try to call it from main() as:
try
{
    http::server::server s("127.0.0.1", "8973");
    // Run the server until stopped.
    s.run();
    boost::this_thread::sleep_for(boost::chrono::seconds(3));
    s.stop();
}
catch (std::exception& e)
{
    std::cerr << "exception: " << e.what() << "\n";
}
Question 1)
I am not able to call server::handle_stop().
I suppose io_service_.run() is blocking my s.stop() call.
void server::run()
{
    // The io_service::run() call will block until all asynchronous operations
    // have finished. While the server is running, there is always at least one
    // asynchronous operation outstanding: the asynchronous accept call waiting
    // for new incoming connections.
    io_service_.run();
}
How do I proceed?
Question 2:
For requirement 2), where I need to have multiple server instances, I think I will need to create an io_service instance in main and pass the same instance to all server instances. Am I right?
Is it mandatory to have only one io_service instance per process or can I have more than one ?
EDIT
My aim is to implement a class which can control multiple server instances.
Something along the lines of the sketch below (incorrect code, just to give a view of what I am trying to implement) is what I want to achieve.
How do I design this?
I am confused about the io_service and about how to cleanly call mng.create() and mng.drop().
class Manager {
public:
    void createServer(const std::string& ip, const std::string& port)
    {
        list_.insert(make_shared<Server>(ip, port));
    }
    void drop(const std::string& ip, const std::string& port)
    {
        list_.erase(/* the matching server */);
    }
private:
    io_service io_;
    set<ServerPtr> list_;
};
int main()
{
    io_service io;
    Manager mng(io);
    mng.createServer(ip1, port1);
    mng.createServer(ip2, port2);
    io.run();
    mng.drop(ip1, port1);
}
I am not able to call server::handle_stop().
As you say, run() won't return until the service is stopped or runs out of work. There's no point calling stop() after that.
In a single-threaded program, you can call stop() from an I/O handler - for your example, you could use a deadline_timer to call it after three seconds. Or you could do something complicated with poll() rather than run(), but I wouldn't recommend that.
In a multi-threaded program, you could call it from another thread than the one calling run(), as long as you make sure it's thread-safe.
For [multiple servers] I think I will need to create an io_service instance in main
Yes, that's probably the best thing to do.
Is it mandatory to have only one io_service instance per process or can I have more than one?
You can have as many as you like. But I think you can only run one at a time on a single thread, so it would be tricky to have more than one in a single-threaded program. I'd have a single instance that all the servers can use.
You are right, it's not working because you call stop() after the blocking run(), and run() blocks as long as there are pending handlers. There are multiple ways to solve this, and it depends on which part of the program stop() will be called from:
If you can call it from another thread, then run each server instance in a separate thread.
If you need to stop the server after some I/O operation, for example, you can do what you have tried (io_service_.post(boost::bind(&server::handle_stop, this));), but the post must be made from another thread or from another callback running in the current thread.
You can use io_service::poll(). It is a non-blocking version of run(), so you create a loop in which you call poll() until you need to stop the server (see the sketch below).
You can do it either way. Even within the link you provided, you can take a look at:
HTTP Server 3 - An HTTP server using a single io_service and a thread pool
and HTTP Server 2 - An HTTP server using an io_service-per-CPU design
I have an application built using MFC that I need to add Bonjour/Zeroconf service discovery to. I've had a bit of trouble figuring out how best to do it, but I've settled on using the DLL stub provided in the mDNSresponder source code and linking my application to the static lib generated by that (which in turn uses the system dnssd.dll).
However, I'm still having problems, as the callbacks don't always seem to be called, so my device discovery stalls. What confuses me is that it all works absolutely fine under OSX, using the OSX dns-sd terminal service, and under Windows using the dns-sd command line service. On that basis, I'm ruling out the client service as being the problem and trying to figure out what's wrong with my Windows code.
I'm basically calling DNSServiceBrowse(), then in that callback calling DNSServiceResolve(), then finally calling DNSServiceGetAddrInfo() to get the IP address of the device so I can connect to it.
All of these calls are based on using WSAAsyncSelect, like this:
DNSServiceErrorType err = DNSServiceResolve(&client,
                                            kDNSServiceFlagsWakeOnResolve,
                                            interfaceIndex,
                                            serviceName,
                                            regtype,
                                            replyDomain,
                                            ResolveInstance,
                                            context);
if (err == 0)
{
    err = WSAAsyncSelect((SOCKET)DNSServiceRefSockFD(client), p->m_hWnd,
                         MESSAGE_HANDLE_MDNS_EVENT, FD_READ | FD_CLOSE);
}
But sometimes the callback just never gets called even though the service is there and using the command line will confirm that.
I'm totally stumped as to why this isn't 100% reliable, but it is if I use the same DLL from the command line. My only possible explanation is that the DNSServiceResolve function tries to call the callback function before the WSAAsyncSelect has registered the handling message for the socket, but I can't see any way around this.
I've spent ages on this and am now completely out of ideas. Any suggestions would be welcome, even if they're "that's a really dumb way to do it, why aren't you doing X, Y, Z".
I call DNSServiceBrowse, with a "shared connection" (see dns_sd.h for documentation) as in:
DNSServiceCreateConnection(&ServiceRef);
// Need to copy the main ref to another variable.
DNSServiceRef BrowseServiceRef = ServiceRef;
DNSServiceBrowse(&BrowseServiceRef, // Receives reference to Bonjour browser object.
kDNSServiceFlagsShareConnection, // Indicate it's a shared connection.
kDNSServiceInterfaceIndexAny, // Browse on all network interfaces.
"_servicename._tcp", // Browse for service types.
NULL, // Browse on the default domain (e.g. local.).
BrowserCallBack, // Callback function when Bonjour events occur.
this); // Callback context.
This is inside a main run method of a thread class called ServiceDiscovery. ServiceRef is a member of ServiceDiscovery.
Then immediately following the above code, I have a main event loop like the following:
while (true)
{
    err = DNSServiceProcessResult(ServiceRef);
    if (err != kDNSServiceErr_NoError)
    {
        DNSServiceRefDeallocate(BrowseServiceRef);
        DNSServiceRefDeallocate(ServiceRef);
        ServiceRef = nullptr;
        break; // stop processing once the connection has been torn down
    }
}
Then, in BrowserCallback you have to set up the resolve request:
void DNSSD_API ServiceDiscovery::BrowserCallBack(DNSServiceRef inServiceRef,
DNSServiceFlags inFlags,
uint32_t inIFI,
DNSServiceErrorType inError,
const char* inName,
const char* inType,
const char* inDomain,
void* inContext)
{
    (void)inServiceRef; // Unused
    ServiceDiscovery* sd = (ServiceDiscovery*)inContext;
    ...
    // Pass a copy of the main DNSServiceRef (just a pointer). We don't
    // hang on to the local copy since it's passed to the resolve callback,
    // where we deallocate it.
    DNSServiceRef resolveServiceRef = sd->ServiceRef;
    DNSServiceErrorType err =
        DNSServiceResolve(&resolveServiceRef,
                          kDNSServiceFlagsShareConnection, // Indicate it's a shared connection.
                          inIFI,
                          inName,
                          inType,
                          inDomain,
                          ResolveCallBack,
                          sd);
}
Then in ResolveCallback you should have everything you need.
// Callback for Bonjour resolve events.
void DNSSD_API ServiceDiscovery::ResolveCallBack(DNSServiceRef inServiceRef,
DNSServiceFlags inFlags,
uint32_t inIFI,
DNSServiceErrorType inError,
const char* fullname,
const char* hosttarget,
uint16_t port, /* In network byte order */
uint16_t txtLen,
const unsigned char* txtRecord,
void* inContext)
{
    ServiceDiscovery* sd = (ServiceDiscovery*)inContext;
    assert(sd);

    // Save off the connection info, get TXT records, etc.
    ...

    // Deallocate the DNSServiceRef.
    DNSServiceRefDeallocate(inServiceRef);
}
hosttarget and port contain your connection info, and any text records can be obtained using the DNS-SD API (e.g. TXTRecordGetCount and TXTRecordGetItemAtIndex).
With the shared connection references, you have to deallocate each one based on (or copied from) the parent reference when you are done with them. I think the DNS-SD API does some reference counting (and parent/child relationship) when you pass copies of a shared reference to one of their functions. Again, see the documentation for details.
I tried not using shared connections at first, and I was just passing down ServiceRef, causing it to be overwritten in the callbacks and my main loop to get confused. I imagine if you don't use shared connections, you need to maintain a list of references that need further processing (and process each one), then destroy them when you're done. The shared connection approach seemed much easier.
I am building an HTTP client based on the HTTP server example given on the Boost website. Now, the difference between that code and mine is that the example uses the server constructor to start the asynchronous operations. This makes sense, since a server is supposed to listen all the time. In my client, on the other hand, I want to first construct the object and then have a send() function that starts off by connecting to the endpoint, later sends a request, and finally listens for the reply. This makes sense too, doesn't it?
When I create my object (client) I do it in the same manner as in the server example (winmain.cpp). It looks like this:
client c("www.boost.org);
c.start(); // starts the io_service in a thread
c.send(msg_);
The relevant parts of the code are these:
void enabler::send(common::geomessage& msg_)
{
    new_connection_.reset(new connection(io_service_,
                                         connection_manager_,
                                         message_manager_, msg_));
    boost::asio::ip::tcp::resolver resolver(io_service_);
    boost::asio::ip::tcp::resolver::query query(host_address, "http");
    resolver.async_resolve(query, boost::bind(
        &enabler::handle_resolve,
        boost::ref(*this),
        boost::asio::placeholders::error,
        boost::asio::placeholders::iterator));
}

void enabler::run()
{
    io_service_.run();
}
The problem with this is that the program never gets past this point. The last thing that prints is "Resolving host"; after that, the program ends. I don't know why, because the io_service should block until all async operations have returned to their callbacks. If, however, I change the order in which I call the functions, it works: if I call run() just after the call to async_resolve() and also omit calling start() in my main program, it works!
In this scenario, io_service blocks as it should and I can see that I get a response from the server.
It has something to do with the fact that I call run() from inside the same class where I call async_resolve(). Could this be true? Then I suppose I need to pass a reference from the main program when I call run(), is that right?
I have struggled with getting io_service::work to work, but the program just gets stuck and similar problems to the ones above occur, so it does not really help.
So, what can I do to get this right? As I said earlier, what I want is to be able to create the client object and have the io_service running all the time in a separate thread inside the client class. Secondly to have a function, send(), that sends requests to the server.
You need to start at least some work before calling run(), as it returns when there is no more work to do.
If you call it before you start the async resolve, it won't have any work so it returns.
If you don't expect to have some work at all times, to keep the io_service busy, you should construct an io_service::work object in some scope which can be exited without io_service::run() having to return first. If you're running the io_service in a separate thread, I would imagine you wouldn't have a problem with that.
It's sort of hard to know what you're trying to do with those snippets of code. I imagine that you'd want to do something along these lines:
struct client
{
    io_service io_service_;
    io_service::work* w_;
    pthread_t main_thread_;

    client() : w_(new io_service::work(io_service_)) { ... }

    void start() { pthread_create(&main_thread_, 0, main_thread, this); }

    static void* main_thread(void* arg) { ((client*)arg)->io_service_.run(); return 0; }

    // release the io_service and allow run() to return
    void stop() { delete w_; w_ = 0; pthread_join(main_thread_, 0); }
};