Qpid Proton C++ - proton::make_work

I'm trying to add a proton::work function (opening a new sender) to the work queue of the proton::connection object. I have a pointer to the work queue, but my problem is how to bind the open_sender function correctly.
I'm aware of the real problem here: the parameter of the function:
sender open_sender(const std::string& addr);
As the string is passed by reference, I have to deal with that reference when binding. I'm OK with that, but how do I do it with the Proton tools?
Here is my line of code:
proton::work w = proton::make_work( &proton::connection::open_sender, &m_connection, p_url);
Note:
Of course I'm not using C++11 in my project, that would make this too simple to ask ;)
And of course I cannot switch to C++11.
If you have a better idea of how to create a new sender in a multi-threaded program, let me know.

Usually you will use the proton::open_sender API from within the handler for connection open or container start so you will not have to use proton::make_work in most cases. If you look at the Proton C++ examples, a good place to start is simple_send.cpp.
Abbreviated code might look like this:
class simple_send : public proton::messaging_handler {
  private:
    proton::sender sender;
    const std::string url;
    const std::string addr;
    ...
  public:
    simple_send(...) :
        url(...),
        addr(...)
    {}
    ...
    // This handler is called when the container starts
    void on_container_start(proton::container& c) {
        c.connect(url);
    }
    // This handler is called when the connection is open
    void on_connection_open(proton::connection& c) {
        sender = c.open_sender(addr);
    }
    ...
};

int main() {
    ...
    simple_send send(...);
    proton::container(send).run();
    ...
}
There are other examples that come with Proton C++ that should help you figure out other ways to use it. See https://github.com/apache/qpid-proton/tree/master/examples/cpp.
There is also API documentation you can find at http://qpid.apache.org/releases/qpid-proton-0.20.0/proton/cpp/api/index.html (for the current release as of February 2018).
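If you really do need to open the sender from another thread via the connection's work queue, the usual trick for the by-reference parameter (without C++11 lambdas) is to bind a small wrapper function that takes the address by value. A minimal sketch, assuming your Proton version's C++03 make_work overloads accept a free function with two by-value arguments and that p_url is convertible to std::string; open_sender_by_value is an illustrative name:

// Sketch only: verify the make_work overload set against your Proton version.
void open_sender_by_value(proton::connection* conn, std::string addr) {
    conn->open_sender(addr);  // the const std::string& parameter binds to our copy;
                              // the returned sender is discarded here, capture it
                              // from a handler if you need it
}

// later, from any thread (m_work_queue obtained via connection.work_queue()):
proton::work w = proton::make_work(&open_sender_by_value, &m_connection, std::string(p_url));
m_work_queue->add(w);

Because the wrapper's string parameter is by value, make_work copies the address instead of storing a reference that could dangle.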

How to add a variable to spdlog's flag

I want to use spdlog for my code's logging. In my code there is an important variable for the simulation step, and I want it to always be displayed in my logs.
Here is the format I want:
[log_level][the_special_variable][logger_name] messages
So how could I format the logger that way? Or is there no way to do that?
Edited:
Sorry, I am not good at asking a question in English.
I've read the README.md in spdlog's GitHub repository, and I saw this:
// Log patterns can contain custom flags.
// The following example will add a new flag '%*' - which will be bound to a <my_formatter_flag> instance.
#include "spdlog/pattern_formatter.h"

class my_formatter_flag : public spdlog::custom_flag_formatter
{
public:
    void format(const spdlog::details::log_msg &, const std::tm &, spdlog::memory_buf_t &dest) override
    {
        std::string some_txt = "custom-flag";
        dest.append(some_txt.data(), some_txt.data() + some_txt.size());
    }

    std::unique_ptr<custom_flag_formatter> clone() const override
    {
        return spdlog::details::make_unique<my_formatter_flag>();
    }
};

void custom_flags_example()
{
    auto formatter = std::make_unique<spdlog::pattern_formatter>();
    formatter->add_flag<my_formatter_flag>('*').set_pattern("[%n] [%*] [%^%l%$] %v");
    spdlog::set_formatter(std::move(formatter));
}
but I can't understand its usage. It seems like it can only add a fixed string for the custom flag. I would like to know if it is possible to display an int variable.
Yes, it is okay to add an int to the log message; you just have to stringify it. For example, in the format method:
auto str = std::to_string(my_special_int_variable);
dest.append(str.data(), str.data() + str.size());
The only question is how you make your int variable available in the formatter. The example above assumes it's a global variable.
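Putting it together, a minimal sketch assuming the step lives in a global int updated by the simulation loop (g_sim_step and step_flag are illustrative names). Reusing the README pattern from above, the pattern string "[%^%l%$][%*][%n] %v" yields the requested [log_level][the_special_variable][logger_name] messages layout:

#include <string>
#include "spdlog/spdlog.h"
#include "spdlog/pattern_formatter.h"

static int g_sim_step = 0; // illustrative: updated by the simulation loop

class step_flag : public spdlog::custom_flag_formatter
{
public:
    void format(const spdlog::details::log_msg &, const std::tm &,
                spdlog::memory_buf_t &dest) override
    {
        // stringify the int and append it to the output buffer
        auto str = std::to_string(g_sim_step);
        dest.append(str.data(), str.data() + str.size());
    }

    std::unique_ptr<custom_flag_formatter> clone() const override
    {
        return spdlog::details::make_unique<step_flag>();
    }
};

void setup_logging()
{
    auto formatter = std::make_unique<spdlog::pattern_formatter>();
    formatter->add_flag<step_flag>('*').set_pattern("[%^%l%$][%*][%n] %v");
    spdlog::set_formatter(std::move(formatter));
}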

Dynamic Function Args for Callback / RPC in C++

I need to register functions like the following in a list of functions with arguments.
void func1( int a , char* b ) {}
void func2( vec3f a , std::vector<float> b , double c) {}
...
And call them back when I receive data over the network, with the proper arguments. I imagined va_list would solve it, but it doesn't work:
void func1(int a, char* b)
{
    printf("%d %s", a, b);
}

void prepare(...)
{
    va_list argList;
    int args = 2;
    va_start(argList, args);
    ((void (*)(va_list))func1)(argList);
    va_end(argList);
}

int main(int argc, char **argv)
{
    prepare(1, "huhu");
    return 0;
}
What is the most elegant way to solve this?
I know std::bind / std::function have similar abilities, but I assume the internal data is hidden deep in the std implementation. I just need a few basic data types; it doesn't have to work for arbitrary types. If preprocessor tricks with __VA_ARGS__ or templates would solve it, I am also OK with that. The priority is that it is as simple as possible to use.
Edit1: I found that assembly can solve it (How do I pass arguments to C++ functions when I call them from inline assembly) - but I would prefer a more platform-independent solution.
If your goal is to create your own small, ad-hoc "rpc" solution, one of the major drivers for your decisions should possibly be: 1. Minimal amount of code. 2. As easy as possible.
Keeping that in mind, it pays off to ponder what the difference is between the following 2 scenarios:
"Real" RPC: the handlers are, as you wrote, functions with rpc-method-specific signatures.
"Message passing": the handlers receive messages of either an "endpoint-determined type" or simply of one unified message type.
Now, what has to be done to get a solution of type 1?
Incoming byte streams/network packets need to be parsed into some sort of message according to some chosen protocol. Then, using some meta-info (the contract), a specific set of data items keyed by { serviceContract, serviceMethod } needs to be confirmed in the packet, and if present, the respective registered handler function needs to be called. Somewhere within that infrastructure you typically have a (likely code-generated) function which does something like this:
void CallHandlerForRpcXYCallFoo(const RpcMessage* message)
{
    uint32_t arg0 = message->getAsUint32(0);
    // ...
    float argN = message->getAsFloat(N);
    Foo(arg0, arg1, ..., argN);
}
All that can, of course, also be packed into classes and virtual methods, with the classes being generated from the service-contract metadata. Maybe there is also a way, by means of some excessive template voodoo, to avoid generating code and have a more generic meta-implementation. But all that is work, real work. Way too much work to do just for fun. Instead of doing that, it would be easier to use one of the dozens of technologies which do that already.
Worth noting so far: somewhere within that piece of art, there is likely a (code-generated) function which looks like the one given above.
Now, what has to be done to get a solution of type 2?
Less than for case 1. Why? Because you simply stop your implementation at calling those handler methods, which all take the RpcMessage as their single argument. As such, you can get away without generating the "make-it-look-like-a-function-call" layer above those methods.
Not only is it less work, it is also more robust in the presence of certain changes to the contract. If one more data item is added to the "rpc solution", the signature of the "rpc function" MUST change. Code re-generated, application code adapted. And that, whether or not the application needs that new data item. In approach 2, on the other hand, there are no breaking changes in the code. Of course, depending on your choices and the kind of changes in the contract, it could still break.
So, the most elegant solution is: don't do RPC, do message passing. Preferably in a REST-ful way.
Also, if you prefer a "unified" rpc message over a number of rpc-contract-specific message types, you remove another source of code bloat.
Just in case all this seems a bit too abstract, here is some mock-up dummy code sketching solution 2:
#include <cstdio>
#include <cstdint>
#include <map>
#include <vector>
#include <deque>
#include <functional>

// "rpc" infrastructure (could be an API for a dll or a lib or so).
// Just one way to do it. Somehow, your various data types need
// to be handled/represented.
class RpcVariant
{
public:
    enum class VariantType
    {
        RVT_EMPTY,
        RVT_UINT,
        RVT_SINT,
        RVT_FLOAT32,
        RVT_BYTES
    };

private:
    VariantType m_type;
    uint64_t m_uintValue;
    int64_t m_intValue;
    float m_floatValue;
    std::vector<uint8_t> m_bytesValue;

    explicit RpcVariant(VariantType type)
        : m_type(type)
    {
    }

public:
    static RpcVariant MakeEmpty()
    {
        RpcVariant result(VariantType::RVT_EMPTY);
        return result;
    }
    static RpcVariant MakeUint(uint64_t value)
    {
        RpcVariant result(VariantType::RVT_UINT);
        result.m_uintValue = value;
        return result;
    }
    // ... More make-functions
    uint64_t AsUint() const
    {
        // TODO: check if correct type...
        return m_uintValue;
    }
    // ... More AsXXX() functions
    // ... Some ToWire()/FromWire() functions...
};

typedef std::map<uint32_t, RpcVariant> RpcMessage_t;
typedef std::function<void(const RpcMessage_t *)> RpcHandler_t;

void RpcInit();
void RpcUninit();

// The application writes handlers and registers them with the infrastructure.
// rpc_context_id can be anything opportune - uint32_t was chosen here.
// It could as well be a string or a pair of values (service, method) or whatever.
void RpcRegisterHandler(uint32_t rpc_context_id, RpcHandler_t handler);

// Then, according to taste/style preferences, some receive function which uses
// the registered information and dispatches to the handlers...
void RpcReceive();
void RpcBeginReceive();
void RpcEndReceive();

// maybe some sending, too...
void RpcSend(uint32_t rpc_context_id, const RpcMessage_t * message);

int main(int argc, const char * argv[])
{
    RpcInit();
    RpcRegisterHandler(42, [](const RpcMessage_t *message) { puts("message type 42 received."); });
    RpcRegisterHandler(43, [](const RpcMessage_t *message) { puts("message type 43 received."); });
    while (true)
    {
        RpcReceive();
    }
    RpcUninit();
    return 0;
}
And if the RpcMessage is then traded wrapped in a std::shared_ptr, you can even have multiple handlers, or forward the same message instance to other threads. This is one particularly annoying thing which requires yet another round of serializing in the rpc approach; here, you simply forward the message.
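A tiny sketch of that forwarding idea, reusing the types from the mock-up above (only the handler signature changes, so that ownership of the decoded message is shared):

#include <memory>

typedef std::shared_ptr<const RpcMessage_t> RpcMessagePtr_t;
typedef std::function<void(RpcMessagePtr_t)> SharedRpcHandler_t;

// Every handler (possibly running on another thread) shares ownership of the
// same decoded message instance; nothing is copied or re-serialized.
void Dispatch(RpcMessagePtr_t message, const std::vector<SharedRpcHandler_t>& handlers)
{
    for (const auto& handler : handlers)
        handler(message);
}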

How to get client IP Address with C++ in thrift

I am implementing a Thrift-based (0.4.0) service in C++ at the moment and have encountered a question:
Is there a way to get the client's IP address from inside a service method implementation? I am using a TNonblockingServer.
Thanks in advance!
In TNonblockingServer, when TProcessor::process() is called, the TProtocol's transport is a TMemoryBuffer, so acquiring the client IP address that way is impossible.
But we can extend the class TServerEventHandler; its method TServerEventHandler::processContext() is called when a client is about to call the processor.
static boost::thread_specific_ptr<std::string> thrift_client_ip; // thread-specific storage

class MyServerEventHandler : public TServerEventHandler
{
    virtual void processContext(void* serverContext, boost::shared_ptr<TTransport> transport)
    {
        TSocket *sock = static_cast<TSocket *>(transport.get());
        if (sock)
        {
            // 0.9.2: getPeerAddress() on a reused TNonblockingServer::TConnection
            // returns a stale address, see https://issues.apache.org/jira/browse/THRIFT-3270
            //thrift_client_ip.reset(new string(sock->getPeerAddress()));
            socklen_t addrLen;
            const struct sockaddr* addr = sock->getCachedAddress(&addrLen); // use this API instead
            // resolve and store for the handler thread (sketch; see the fuller example below)
            char host[NI_MAXHOST];
            if (addr && getnameinfo(addr, addrLen, host, sizeof(host), NULL, 0, NI_NUMERICHOST) == 0)
                thrift_client_ip.reset(new std::string(host));
        }
    }
};

// create nonblocking server
TNonblockingServer server(processor, protocolFactory, port, threadManager);
boost::shared_ptr<MyServerEventHandler> eventHandler(new MyServerEventHandler());
server.setServerEventHandler(eventHandler);
Ticket THRIFT-1053 describes a similar request for Java. The solution is basically to allow access to the inner (endpoint) transport and retrieve the data from it. Without having really tested it, building a similar solution for C++ should be easy. Since you are operating on Thrift 0.4.0, I'd strongly recommend looking at the current trunk (0.9.3) first. TBufferedTransport, TFramedTransport and TShortReadTransport already implement
boost::shared_ptr<TTransport> getUnderlyingTransport();
so the patch mentioned above may not be necessary at all.
Your TProcessor-derived class gets hold of both transports when process() gets called. If you override that method you should be able to manage access to the data you are interested in:
/**
* A processor is a generic object that acts upon two streams of data, one
* an input and the other an output. The definition of this object is loose,
* though the typical case is for some sort of server that either generates
* responses to an input stream or forwards data from one pipe onto another.
*
*/
class TProcessor {
public:
    // more code

    virtual bool process(boost::shared_ptr<protocol::TProtocol> in,
                         boost::shared_ptr<protocol::TProtocol> out,
                         void* connectionContext) = 0;

    // more code
};
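Concretely, inside your process() override you could dig down to the socket with something like this hedged sketch. Note that it only works when the protocol's transport chain ends in a TSocket (e.g. buffered/framed servers), not with the TMemoryBuffer used by TNonblockingServer as discussed above; the dynamic_casts are illustrative:

boost::shared_ptr<TTransport> transport = in->getTransport();
TFramedTransport* framed = dynamic_cast<TFramedTransport*>(transport.get());
if (framed) {
    // unwrap the framing layer to reach the endpoint transport
    transport = framed->getUnderlyingTransport();
}
TSocket* sock = dynamic_cast<TSocket*>(transport.get());
if (sock) {
    std::string peerAddress = sock->getPeerAddress();
    // ... make peerAddress available to the service method ...
}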
A complete event-handler example along these lines, resolving the client address into a per-connection context:

#ifndef NONBLOCK_SERVER_EVENT_HANDLER_H
#define NONBLOCK_SERVER_EVENT_HANDLER_H

#include <thrift/transport/TSocket.h>
#include <thrift/server/TServer.h>

namespace apache {
namespace thrift {
namespace server {

class ServerEventHandler : public TServerEventHandler {
    void* createContext(boost::shared_ptr<TProtocol> input, boost::shared_ptr<TProtocol> output) {
        (void)input;
        (void)output;
        return (void*)(new char[32]); //TODO
    }

    virtual void deleteContext(void* serverContext,
                               boost::shared_ptr<TProtocol> input,
                               boost::shared_ptr<TProtocol> output) {
        delete [](char*)serverContext;
    }

    virtual void processContext(void* serverContext, boost::shared_ptr<TTransport> transport) {
        TSocket *tsocket = static_cast<TSocket*>(transport.get());
        if (tsocket) {
            socklen_t addrLen;
            const struct sockaddr* addrPtr = tsocket->getCachedAddress(&addrLen);
            if (addrPtr) {
                getnameinfo(addrPtr, addrLen, (char*)serverContext, 32, NULL, 0, 0);
            }
        }
    }
};

}
}
}
#endif
boost::shared_ptr<ServerEventHandler> serverEventHandler(new ServerEventHandler());
server.setServerEventHandler(serverEventHandler);
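Putting the two together, here is a hedged sketch of a TProcessor wrapper that reads the host string which processContext() above wrote into the 32-byte connection context (LoggingProcessor and inner_ are illustrative names):

class LoggingProcessor : public TProcessor {
public:
    explicit LoggingProcessor(boost::shared_ptr<TProcessor> inner) : inner_(inner) {}

    virtual bool process(boost::shared_ptr<protocol::TProtocol> in,
                         boost::shared_ptr<protocol::TProtocol> out,
                         void* connectionContext) {
        // createContext() allocated 32 chars; processContext() filled in the host.
        const char* clientHost = static_cast<const char*>(connectionContext);
        printf("request from %s\n", clientHost ? clientHost : "unknown");
        return inner_->process(in, out, connectionContext); // delegate to the generated processor
    }

private:
    boost::shared_ptr<TProcessor> inner_;
};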

Allow managed code in hosted environment to call back unmanaged code

I have C++ code that hosts a CLR in order to make use of Managed.dll, written in C#.
This .NET assembly has a method like the following that allows code to register for notification of events:
public void Register(IMyListener listener);
The interface looks something like this:
public interface IMyListener
{
    void Notify(string details);
}
I'd like to do stuff in the C++ part of the program, triggered by events in the .NET world. I would not even mind creating another managed DLL for the sole purpose of making Managed.dll more C++-friendly, if that is necessary.
What are my options here? The only one I am sure I could implement is this:
Write another managed DLL that listens for those events, queues them, and lets the C++ code access the queue via polling.
This would of course change from an "interrupt" style to a "polling" style, with all its advantages and disadvantages, and the need to provide for queuing. Can we do without polling? Could I somehow call managed code and provide it a function pointer into the C++ world as the argument?
Update
Thanks to stijn's answer and comments I hope I have moved a bit in the right direction, but I guess the main open problem is still how to pass a function pointer from unmanaged land into the CLR-hosted environment.
Say I have an "int fn(int)" type of function pointer that I want to pass to the managed world; here are the relevant parts:
Managed code (C++/CLI)
typedef int (__stdcall *native_fun)( int );

String^ MyListener::Register(native_fun & callback)
{
    return "MyListener::Register(native_fun callback) called callback(9): " + callback(9);
}
Unmanaged code
typedef int (__stdcall *native_fun)( int );

extern "C" static int __stdcall NativeFun(int i)
{
    wprintf(L"Callback arrived in native fun land: %d\n", i);
    return i * 3;
}

void callCLR()
{
    // Set up the CLR hosting environment
    ...

    // prepare call into CLR
    variant_t vtEmpty;
    variant_t vtRetValue;
    variant_t vtFnPtrArg((native_fun) &NativeFun);

    SAFEARRAY *psaMethodArgs = SafeArrayCreateVector(VT_VARIANT, 0, 1);
    LONG index = 0;
    SafeArrayPutElement(psaMethodArgs, &index, &vtFnPtrArg);
    ...

    hr = spType->InvokeMember_3(bstrMethodName, static_cast<BindingFlags>(
             BindingFlags_InvokeMethod | BindingFlags_Static | BindingFlags_Public),
             NULL, vtEmpty, psaMethodArgs, &vtRetValue);
    if (FAILED(hr))
        wprintf(L"Failed to invoke function: 0x%08lx\n", hr);
}
The spType->InvokeMember_3 call fails with result 0x80131512.
Something seems to be wrong with the way I pass the pointer to NativeFun over to the managed world, or with how my functions are defined. When using a String^ parameter instead of the function pointer, I can call the CLR function successfully.
You can write a separate DLL in C++/CLI, implement the interface there, and forward the logic to C++. From my experience with mixing managed/unmanaged code I can say that using an intermediate C++/CLI step is the way to go. No fiddling with DllImport and free functions only, but a solid bridge between both worlds. It just takes some getting used to the syntax and marshalling, but once you have that it's practically effortless. If you need to hold C++ objects in the managed class, the best way is to use something like clr_scoped_ptr.
Code would look like this:
//header
#using <Managed.dll>

//forward declare some native class
class NativeCppClass;

public ref class MyListener : public IMyListener
{
public:
    MyListener();
    //note: cli classes automatically implement IDisposable,
    //which will call this destructor when disposed,
    //so use it as a normal C++ destructor and do cleanup here
    ~MyListener();
    virtual void Notify( String^ details );
private:
    clr_scoped_ptr< NativeCppClass > impl;
};

//source
#include "Header.h"
#include <NativeCppClass.h>

//here's how I marshal strings both ways
namespace
{
    inline String^ marshal( const std::string& i )
    {
        return gcnew String( i.data() );
    }

    inline std::string marshal( String^ i )
    {
        if( i == nullptr )
            return std::string();
        char* str2 = (char*) (void*) Marshal::StringToHGlobalAnsi( i );
        std::string sRet( str2 );
        Marshal::FreeHGlobal( IntPtr( str2 ) );
        return sRet;
    }
}

MyListener::MyListener() :
    impl( new NativeCppClass() )
{
}

MyListener::~MyListener()
{
}

void MyListener::Notify( String^ details )
{
    //handle the event here
    impl->SomeCppFunctionTakingStdString( marshal( details ) );
}
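Hooking the bridge up to Managed.dll could then be as simple as this hedged snippet; the exact type exposing Register() depends on Managed.dll's API, so SomeService is an illustrative name:

// somewhere in the C++/CLI bridge initialization:
ManagedStuff::SomeService^ service = gcnew ManagedStuff::SomeService();
service->Register( gcnew MyListener() ); // MyListener implements IMyListener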
Update
Here's a simple solution for calling C++ callbacks from the managed world:
public ref class CallbackWrapper
{
public:
    typedef int (*native_fun)( int );

    CallbackWrapper( native_fun fun ) : fun( fun ) {}

    int Call( int arg ) { return fun( arg ); }

    static CallbackWrapper^ Create( ... ) { return gcnew CallbackWrapper( ... ); }

private:
    native_fun fun;
};
You can also wrap this in an Action if you want.
Another way is using GetDelegateForFunctionPointer, for example as in this SO question
If someone still needs a better way to do this: you can simply pass the C++ function pointer to the CLR as an intptr_t inside the variant, declare the parameter as a long (Int64) on the managed side, then use Marshal and a delegate to invoke your native function. Super easy, and it works like a charm.
If you need a code snippet, let me know.
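For illustration, a hedged sketch of that route; the delegate type, class and method names are assumptions, and the native side would pass the pointer as a 64-bit integer, e.g. variant_t vtFnPtrArg((long long)(intptr_t)&NativeFun):

// C++/CLI side (illustrative names):
using namespace System;
using namespace System::Runtime::InteropServices;

public delegate int NativeCallback(int arg);

public ref class Bridge
{
public:
    static String^ Register(Int64 fnPtr)
    {
        // Reinterpret the raw native function pointer as an invokable delegate.
        NativeCallback^ cb = (NativeCallback^)Marshal::GetDelegateForFunctionPointer(
            IntPtr(fnPtr), NativeCallback::typeid);
        return "callback(9) returned: " + cb(9);
    }
};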

How can i write my own RPC Implementation for Protocol Buffers utilizing ZeroMQ

According to the Google Protocol Buffers documentation, under 'Defining Services' they say:
it's also possible to use protocol buffers with your own RPC implementation.
To my understanding, Protocol Buffers does not implement RPC natively. Instead, it provides a series of abstract interfaces that must be implemented by the user (that's me!). So I want to implement these abstract interfaces utilizing ZeroMQ for network communication.
I'm trying to create an RPC implementation using ZeroMQ because the project I'm working on already uses ZeroMQ for basic messaging (hence why I'm not using gRPC, as the documentation recommends).
After reading through the proto documentation thoroughly, I found that I have to implement the abstract interfaces RpcChannel and RpcController for my own implementation.
I've constructed a minimal example of where I'm currently at with my RPC implementation.
.proto file (SearchRequest and SearchResponse schemas omitted for brevity):
service SearchService {
    rpc Search (SearchRequest) returns (SearchResponse);
}
SearchServiceImpl.h:
class SearchServiceImpl : public SearchService {
public:
    void Search(google::protobuf::RpcController *controller,
                const SearchRequest *request,
                SearchResponse *response,
                google::protobuf::Closure *done) override {
        // Static function that processes the request and fills the result
        *response = GetSearchResult(request);

        // Call the completion callback
        if (done != NULL) {
            done->Run();
        }
    }
};
MyRPCController.h:
class MyRPCController : public google::protobuf::RpcController {
public:
    MyRPCController();

    void Reset() override;
    bool Failed() const override;
    std::string ErrorText() const override;
    void StartCancel() override;
    void SetFailed(const std::string &reason) override;
    bool IsCanceled() const override;
    void NotifyOnCancel(google::protobuf::Closure *callback) override;

private:
    bool failed_;
    std::string message_;
};
MyRPCController.cpp - Based off of this
MyRPCController::MyRPCController() : RpcController() { Reset(); }

void MyRPCController::Reset() { failed_ = false; }

bool MyRPCController::Failed() const { return failed_; }

std::string MyRPCController::ErrorText() const { return message_; }

void MyRPCController::StartCancel() { }

void MyRPCController::SetFailed(const std::string &reason) {
    failed_ = true;
    message_ = reason;
}

bool MyRPCController::IsCanceled() const { return false; }

void MyRPCController::NotifyOnCancel(google::protobuf::Closure *callback) { }
MyRpcChannel.h:
class MyRPCChannel : public google::protobuf::RpcChannel {
public:
    void CallMethod(const google::protobuf::MethodDescriptor *method,
                    google::protobuf::RpcController *controller,
                    const google::protobuf::Message *request,
                    google::protobuf::Message *response,
                    google::protobuf::Closure *done) override;
};
Questions I have with my example thus far:
Where do I fit ZeroMQ into this?
It seems like it should go into RPCChannel, because in the examples I've seen (see the 3rd code block here), they pass a string that has the ports to bind to (e.g. MyRpcChannel channel("rpc:hostname:1234/myservice");).
I'm concerned about my RPCController implementation; it seems too simple. Should more be going on here?
How do I implement RPCChannel? It seems very similar to SearchServiceImpl: the one virtual function in these classes has a very similar method signature, except it's generic.
Here are some other Stack Overflow questions I came across that had helpful information on the topic:
Protobuf-Net: implementing server, rpc controller and rpc channel - this is where I found the example for the RPCController implementation.
Using Protocol Buffers for implementing RPC in ZeroMQ - this answer is interesting because the top answer seems to recommend against using Protobuf's built-in RPC definitions in the .proto file.
I also noticed this same notion in this file, in a repository called libpbrpc, which seemed like a good source of example code.
Can I/should I use an existing implementation such as RPCZ?
Thank you for your help. I hope I gave enough information and was clear about what I'm looking for. Please let me know if something is unclear or lacking; I'd be happy to edit the question accordingly.
ZeroMQ provides a low-level API for network communication based on messages that can contain any data.
Protocol Buffers is a library that encodes structured data as compact binary data and decodes it again.
gRPC is an RPC framework that generates code for network-communication-based RPC services, whose functions exchange data as Protocol Buffers data.
Both ZeroMQ and gRPC provide support for network communication, but in different ways. You have to choose either ZeroMQ or gRPC for the network communication.
If you choose ZeroMQ, messages can be encoded using Protocol Buffers, exchanging binary structured data.
The main point is that the Protocol Buffers library can encode and decode variant records (similar to C/C++ unions, expressed as oneof fields), which can fully emulate the functionality of RPC services whose functions exchange Protocol Buffers messages.
So the options are:
Use ZeroMQ with send and receive primitives and Protocol Buffers-encoded variant messages that can contain various sub-messages; in .proto syntax, a oneof field plays the role of the union:

message Request
{
    // the oneof case replaces an explicit msgType discriminator
    oneof payload
    {
        MessageType1 msg1 = 1;
        MessageType2 msg2 = 2;
        MessageType3 msg3 = 3;
    }
}

message Response
{
    oneof payload
    {
        MessageType3 msg1 = 1;
        MessageType4 msg2 = 2;
        MessageType5 msg3 = 3;
    }
}

// transport-level pseudo-code:
send(Request request);
receive(Response response);
Use gRPC, generating a service with functions, like

service MyService
{
    rpc function1(MessageType1) returns (Response);
    rpc function2(MessageType2) returns (Response);
    rpc function3(MessageType3) returns (Response);
    rpc functionN(MessageType3) returns (MessageType5);
}

(here it's possible to use many, many combinations)
Use just a single-function gRPC service, like

service MyService
{
    rpc function(Request) returns (Response);
}
The choice could depend on:
the preferred target for the client: a ZeroMQ or a gRPC based client
performance reasons, comparing a ZeroMQ vs a gRPC based service
specific features, like how subscription is used/handled in a ZeroMQ vs a gRPC based service and client (see How to design publish-subscribe pattern properly in grpc?)
For the 1st option, you have to do a lot more stuff compared to the 2nd option: you have to match the type of message sent with the types of expected messages to be received.
The 2nd option would allow an easier/faster understanding of the functionality of the service if somebody else develops the client.
For developing an RPC service on top of ZeroMQ, I would define a .proto file specifying the functions, the parameters (all possible input and output parameters) and the errors, like this:

enum Function
{
    F1 = 0;
    F2 = 1;
    F3 = 2;
}

enum Error
{
    E1 = 0;
    E2 = 1;
    E3 = 2;
}

message Request
{
    required Function function = 1;
    repeated Input data = 2;
}

message Response
{
    required Function function = 1;
    required Error error = 2;
    repeated Output data = 3;
}

message Input
{
    optional Input1 data1 = 1;
    optional Input2 data2 = 2;
    ...
    optional InputN dataN = n;
}

message Output
{
    optional Output1 data1 = 1;
    optional Output2 data2 = 2;
    ...
    optional OutputN dataN = n;
}

message Message
{
    repeated Request requests = 1;
    repeated Response responses = 2;
}
Depending on the function id, the number and the types of the parameters then have to be checked at run time.
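To make that run-time dispatch concrete, here is a hedged sketch of a REP-socket server loop using cppzmq and the classes generated from the Request/Response definition above. The handler bodies and the choice of E1 as a "no error" value are illustrative assumptions:

#include <string>
#include <zmq.hpp>
#include "rpc.pb.h"  // generated from the .proto sketch above

int main()
{
    zmq::context_t ctx;
    zmq::socket_t sock(ctx, zmq::socket_type::rep);
    sock.bind("tcp://*:5555");

    while (true) {
        zmq::message_t msg;
        (void)sock.recv(msg, zmq::recv_flags::none);

        Request request;
        request.ParseFromArray(msg.data(), static_cast<int>(msg.size()));

        Response response;
        response.set_function(request.function());
        response.set_error(E1);  // illustrative: treat the first enum value as "ok"

        // Run-time dispatch on the function id; the per-function code must
        // check the number and types of the Input entries in request.data().
        switch (request.function()) {
        case F1: /* ... unpack inputs, fill response via add_data() ... */ break;
        case F2: /* ... */ break;
        case F3: /* ... */ break;
        }

        std::string out;
        response.SerializeToString(&out);
        (void)sock.send(zmq::buffer(out), zmq::send_flags::none);
    }
}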