I have an emulator program written in C++ running on Ubuntu 12.04. There are some settings and options needed for running the program, which are passed as main's arguments. I need to query these options via HTTPS from a remote machine/mobile device (so basically, imagine I want to return main's arguments). I was wondering if someone could help me with that.
There are probably libraries that make this easier, for example POCO. I'm not sure how suitable it is for my case, but here is an example of connection setup in POCO. Using a library is not a must though; I just want the most efficient/simplest way.
Mongoose (or its non-GPL fork Civetweb) is an embedded web server. It is very easy to set up and add controllers to (typically half a dozen lines of code).
Just add the source file (one C file) to your project and build, add a line to start the server listening with whatever options you like, and add a callback function to handle requests. It does SSL out of the box (though IIRC you'll need OpenSSL installed too).
There was another SO answer with some comparisons. I used Civetweb at work and was impressed by how easy it all was. There's not too much documentation though.
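For reference, a Civetweb setup is roughly this small (a sketch from memory of the Civetweb embedding API, not tied to a specific version; the port, certificate path and /args URI are made up for illustration):

#include <cstdio>
#include <cstring>
#include <string>
#include "civetweb.h"

// Handler registered for one URI; writes a plain-text response.
static int args_handler(struct mg_connection* conn, void* cbdata)
{
    const std::string body = "emulator options go here";   // e.g. built from argv
    mg_printf(conn,
              "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n"
              "Content-Length: %u\r\n\r\n%s",
              (unsigned)body.size(), body.c_str());
    return 200;   // tell Civetweb the request was handled
}

int main()
{
    // "8443s" means listen on 8443 with SSL; needs a combined cert/key PEM file.
    const char* options[] = {
        "listening_ports", "8443s",
        "ssl_certificate", "server.pem",
        nullptr
    };

    mg_callbacks callbacks;
    std::memset(&callbacks, 0, sizeof(callbacks));

    mg_context* ctx = mg_start(&callbacks, nullptr, options);
    mg_set_request_handler(ctx, "/args", args_handler, nullptr);

    getchar();      // keep serving until a key is pressed
    mg_stop(ctx);
    return 0;
}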
Here's a stripped-down POCO version; for the full code see the HTTPSTimeServer example.
struct MyRequestHandler: public HTTPRequestHandler
{
    void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response)
    {
        response.setContentType("text/html");
        // ... do your work here
        std::ostream& ostr = response.send();
        ostr << "<html><head><title>HTTPServer example</title></head>"
             << "<body>Success!</body></html>";
    }
};

struct MyRequestHandlerFactory: public HTTPRequestHandlerFactory
{
    HTTPRequestHandler* createRequestHandler(const HTTPServerRequest& request)
    {
        return new MyRequestHandler;
    }
};

// ...
// set up a server socket
SecureServerSocket svs(port);
// set up an HTTPServer instance (you may want to new the factory and params
// prior to constructing the object, to prevent the possibility of a leak in
// case of an exception)
HTTPServer srv(new MyRequestHandlerFactory, svs, new HTTPServerParams);
// start the HTTPServer
srv.start();
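Since the question is about returning main's arguments, one way to wire them in is to copy argv into a global before starting the server and print it from a handler. A rough sketch building on the snippet above (g_args, ArgsRequestHandler and ArgsRequestHandlerFactory are made-up names, and the NetSSL initialization needed for SecureServerSocket is omitted - see the HTTPSTimeServer example for that part):

std::vector<std::string> g_args;   // filled from argv in main

struct ArgsRequestHandler: public HTTPRequestHandler
{
    void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response)
    {
        response.setContentType("text/plain");
        std::ostream& ostr = response.send();
        for (std::size_t i = 0; i < g_args.size(); ++i)
            ostr << g_args[i] << "\n";   // one option per line
    }
};

struct ArgsRequestHandlerFactory: public HTTPRequestHandlerFactory
{
    HTTPRequestHandler* createRequestHandler(const HTTPServerRequest& request)
    {
        return new ArgsRequestHandler;
    }
};

int main(int argc, char** argv)
{
    g_args.assign(argv + 1, argv + argc);   // keep the emulator's options for the handler

    SecureServerSocket svs(9443);
    HTTPServer srv(new ArgsRequestHandlerFactory, svs, new HTTPServerParams);
    srv.start();
    // ... run the emulator as usual, then call srv.stop() on shutdown ...
    return 0;
}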
I am working with gRPC and Protobuf, using a C++ server and a C++ client, as well as a grpc-js client. Is there a way to read all of the HTTP request/response headers from the transport layer in gRPC? I am looking for the typical client/server HTTP headers - in particular, I would like to see which version of the protocol is being used (HTTP/1.1 or HTTP/2). I know that gRPC is supposed to be using HTTP/2, but I am trying to confirm it at a low level.
In a typical gRPC client implementation you have something like this:
class PingPongClient {
 public:
  PingPongClient(std::shared_ptr<Channel> channel)
      : stub_(PingPong::NewStub(channel)) {}

  // Assembles the client's payload, sends it and presents the response back
  // from the server.
  PingPongReply PingPong(PingPongRequest request) {
    // Container for the data we expect from the server.
    PingPongReply reply;

    // Context for the client. It could be used to convey extra information to
    // the server and/or tweak certain RPC behaviors.
    ClientContext context;

    // The actual RPC.
    Status status = stub_->Ping(&context, request, &reply);

    // Act upon its status.
    if (status.ok()) {
      return reply;
    } else {
      // error_code() is an enum, so convert it explicitly instead of
      // accidentally doing pointer arithmetic on the ": " literal.
      auto errorMsg =
          std::to_string(status.error_code()) + ": " + status.error_message();
      std::cout << errorMsg << std::endl;
      throw std::runtime_error(errorMsg);
    }
  }

 private:
  std::unique_ptr<PingPong::Stub> stub_;
};
and on the server side, something like:
class PingPongServiceImpl final : public PingPong::Service {
  Status Ping(
      ServerContext* context,
      const PingPongRequest* request,
      PingPongReply* reply
  ) override {
    std::cout << "PingPong" << std::endl;
    printContextClientMetadata(context->client_metadata());
    if (request->input_msg() == "hello") {
      reply->set_output_msg("world");
    } else {
      reply->set_output_msg("I can't pong unless you ping me 'hello'!");
    }
    std::cout << "Replying with " << reply->output_msg() << std::endl;
    return Status::OK;
  }
};
I would think that either ServerContext or the request object might have access to this information, but context seems to only provide an interface into metadata, which is custom.
None of the gRPC C++ examples give any indication that there is such an API, nor do any of the associated source/header files in the gRPC source code. I have exhausted my options here in terms of tutorials, blog posts, videos, and documentation - I asked a similar question on the grpc-io forum, but have gotten no takers. Hoping the SO crew has some insights here!
I should also note that I experimented with passing a variety of environment variables as flags to the running processes to see if I could get details about the HTTP headers, but even with these flags enabled (the HTTP-related ones), I do not see basic HTTP headers.
First, the gRPC libraries absolutely do use HTTP/2. The protocol is explicitly defined in terms of HTTP/2.
The gRPC libraries do not directly expose the raw HTTP headers to the application. However, they do have trace logging options that can log a variety of information for debugging purposes, including headers. The tracers can be enabled by setting the environment variable GRPC_TRACE. The environment variable GRPC_VERBOSITY=DEBUG should also be set to make sure that all of the logs are output. More information can be found in this document describing how the library uses environment variables.
In the C++ library, the http tracer should log the raw headers. The grpc-js library has different internals and different tracer definitions, so you should use the call_stream tracer for that one. Those will also log other request information, but it should be pretty easy to pick out the headers.
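For example, you can export GRPC_VERBOSITY=DEBUG and GRPC_TRACE=http in the shell before launching the binaries, or set them from inside the process. A sketch of the latter (my own wrapper code, not a gRPC API; the variables have to be in place before the first channel or server is created, because the library reads them when it initializes):

#include <cstdlib>

int main(int argc, char** argv) {
    // Must run before gRPC is initialized, i.e. before any channel or server exists.
    setenv("GRPC_VERBOSITY", "DEBUG", 1);   // emit all log levels
    setenv("GRPC_TRACE", "http", 1);        // C++ core: logs raw HTTP/2 traffic, headers included

    // ... build the channel / server and run the PingPong RPC as shown above ...
    return 0;
}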
I hope I haven't misunderstood the Thrift concept, but from what I see in (example) questions like this one, the framework is composed of different modular layers that can be enabled or disabled.
I'm mostly interested in the "IDL part" of Thrift, so that I can create a common interface between my C++ code and an external JavaScript application. I would like to call C++ functions from JS, with binary data transmission, and I've already used the compiler for this.
But my C++ (server) and JS (client) applications already exchange data through a C++ webserver with WebSocket support, which is not provided by Thrift.
So I was thinking of setting up the following items:
In JS (already done):
TWebSocketTransport to send data to my "Websocket server" (with host ws://xxx.xxx.xxx.xxx)
TBinaryProtocol to encapsulate the data (using this JS implementation)
The compiled Thrift JS library with the correspondent C++ functions to call (done with the JS compiler)
In C++ (partial):
TBinaryProtocol to encode/decode the data
A TProcessor with handler to get the data from the client and process it
For now, the client is already able to send requests to my websocket server; I see them arriving in binary form, and I just need Thrift to:
Decode the input
Call the appropriate C++ function
Encode the output
My webserver will send the response to the client, so no "Thrift server" is needed here. I see there is the TProcessor->process() function; I'm trying to use it when I receive the binary data, but it needs an in/out TProtocol. No problem here... but in order to create the TBinaryProtocol I also need a TTransport! If no Thrift server is expected... what Transport should I use?
I tried passing NULL for the TTransport in the TBinaryProtocol constructor, but as soon as I use it I get a nullptr exception.
Code is something like:
Init:
boost::shared_ptr<MySDKServiceHandler> handler(new MySDKServiceHandler());
thriftCommandProcessor = boost::shared_ptr<TProcessor>(new MySDKServiceProcessor(handler));
thriftInputProtocol = boost::shared_ptr<TBinaryProtocol>(new TBinaryProtocol(TTransport???));
thriftOutputProtocol = boost::shared_ptr<TBinaryProtocol>(new TBinaryProtocol(TTransport???));
When data arrives:
this->thriftInputProtocol->writeBinary(input); // exception here
this->thriftCommandProcessor->process(this->thriftInputProtocol, this->thriftOutputProtocol, NULL);
this->thriftOutputProtocol->readBinary(output);
I've managed to do it using the following components:
// create the Processor using my compiled Thrift class (from IDL)
boost::shared_ptr<MySDKServiceHandler> handler(new MySDKServiceHandler());
thriftCommandProcessor = boost::shared_ptr<TProcessor>(new ThriftSDKServiceProcessor(handler));

// Transport is needed, I use the TMemoryBuffer so everything is kept in local memory
boost::shared_ptr<TTransport> transport(new apache::thrift::transport::TMemoryBuffer());

// my client/server data is based on binary protocol. I pass the transport to it
thriftProtocol = boost::shared_ptr<TProtocol>(new TBinaryProtocol(transport, 0, 0, false, false));

/* .... when the message arrives through my webserver */
void parseMessage(const byte* input, const int input_size, byte*& output, int& output_size)
{
    // get the transports to write and read Thrift data
    boost::shared_ptr<TTransport> iTr = this->thriftProtocol->getInputTransport();
    boost::shared_ptr<TTransport> oTr = this->thriftProtocol->getOutputTransport();

    // "transmit" my data to Thrift
    iTr->write(input, input_size);
    iTr->flush();

    // make the Thrift work using the Processor
    this->thriftCommandProcessor->process(this->thriftProtocol, NULL);

    // the output transport (oTr) contains the called procedure result
    output = new byte[MAX_SDK_WS_REPLYSIZE];
    output_size = oTr->read(output, MAX_SDK_WS_REPLYSIZE);
}
Regarding "If no Thrift server is expected... what Transport should I use?":
The usual pattern is to store the bits somewhere and use that buffer or data stream as the input, same for the output. For certain languages there is a TStreamTransport available, for C++ the TBufferBase class looks promising to me.
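For C++ specifically, TMemoryBuffer (a TBufferBase subclass) fits this pattern: feed the received bytes into one buffer, run the processor, and read the encoded reply out of another. A rough sketch, assuming an older boost-based Thrift and a generated processor; handleThriftMessage is a made-up wrapper, not a Thrift API:

#include <cstdint>
#include <string>
#include <boost/shared_ptr.hpp>
#include <thrift/TProcessor.h>
#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/transport/TBufferTransports.h>

using apache::thrift::TProcessor;
using apache::thrift::protocol::TBinaryProtocol;
using apache::thrift::transport::TMemoryBuffer;

// Takes one encoded request received over the websocket, returns the encoded reply.
std::string handleThriftMessage(TProcessor& processor, const uint8_t* data, uint32_t size)
{
    // COPY makes the input buffer own its own copy of the incoming bytes.
    boost::shared_ptr<TMemoryBuffer> inBuf(
        new TMemoryBuffer(const_cast<uint8_t*>(data), size, TMemoryBuffer::COPY));
    boost::shared_ptr<TMemoryBuffer> outBuf(new TMemoryBuffer());

    boost::shared_ptr<TBinaryProtocol> inProt(new TBinaryProtocol(inBuf));
    boost::shared_ptr<TBinaryProtocol> outProt(new TBinaryProtocol(outBuf));

    processor.process(inProt, outProt, NULL);   // dispatches to the service handler

    return outBuf->getBufferAsString();         // hand this back to the websocket layer
}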
I'm starting to use Boost, so maybe I'm messing something up.
I'm trying to set up an HTTP server with Boost (Asio). I've taken the code from the docs: http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/examples/cpp03_examples.html (HTTP Server, the first one)
The only difference from the example is that I'm running the server from my own "run" method and starting io_service in a background thread, as in the docs: http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/reference/io_service.html
boost::asio::io_service::work work(io_service_);
(I'm also stopping io_service from my run method.)
When I start this modified server everything seems to be OK and the run method works fine. But when I try to get a document from the server, the request hangs and control flow never reaches the "handle_request" method.
Am I missing something?
UPD: Here is the code of my run method:
void NetstreamServer::run()
{
    LOG4CPLUS_DEBUG(logger, "NetstreamServer is running");

    boost::asio::io_service::work work(io_service_);

    try
    {
        while (true)
        {
            if (condition)
            {
                io_service_.stop();
                break;
            }
        }
    }
    catch (std::exception const& e)
    {
        LOG4CPLUS_ERROR(logger, "NetstreamServer" << " caught exception: " << e.what());
    }
}
You should call io_service_.run() - otherwise nothing will dispatch the completion handlers of the Asio objects serviced by io_service_.
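A minimal sketch of what the background thread needs to do (assuming Boost.Thread; run_io_service and start_and_stop_example are made-up names, and the work object is what keeps run() from returning while the server is idle):

#include <boost/asio.hpp>
#include <boost/thread.hpp>

boost::asio::io_service io_service_;

void run_io_service()
{
    io_service_.run();   // dispatches completion handlers until the service is stopped
}

void start_and_stop_example()
{
    // Keeps run() busy even when no handlers are queued yet.
    boost::asio::io_service::work work(io_service_);

    boost::thread io_thread(&run_io_service);   // dispatch handlers in the background

    // ... start the acceptor, serve requests ...

    io_service_.stop();   // makes run() return
    io_thread.join();
}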
Without seeing the code you changed, everyone here can only guess. Unfortunately you also do not include the compiler and the OS you are using. Even though Boost claims to be platform independent, you should always include this information, because in reality platforms do differ, even with Boost.
Let me guess: you use Microsoft Windows? How do you prevent the "main" function from exiting? You moved the blocking "run" function out of it into another thread, so the main function has no wait point anymore. Let me guess again: you used something like "getchar", so that you can exit your server just by hitting the return key. If so, the problem is the getchar, which unfortunately blocks all I/O of the asio socket implementation, but only on Windows-based systems.
I would not need to guess if you had included the information mentioned above - in particular, all(!) the changes you made to the code sample.
Is it possible to do the following safely?
I have a C++ library which connects to a SQL DB at various points. I would like to have a global connection available at all of these points. Can this be done? Is there a standard pattern for this? I was thinking of storing the connection in a singleton.
Edit:
Suppose I have the following interface for the connection.
class Connection {
public:
    Connection();
    ~Connection();

    bool isOpen();
    void open();
};
I would like to implement the following interface:
class GlobalConnection {
public:
    static Connection & getConnection() {
        static Connection conn_;
        if (!conn_.isOpen())
            conn_.open();
        return conn_;
    }

private:
    GlobalConnection() {};
    Connection conn_;
};
I have two concerns with the above. One is that getConnection is not thread safe, and the other is that I'm not sure about the destruction of the static resource. In other words, am I guaranteed that the connection will close (i.e. that its destructor will be called)?
For the record, the connection class itself is provided by the SQLAPI++ library (though that's not very relevant).
EDIT 2: After doing some research, it seems that while SQLAPI doesn't directly support pooling, it can enable connection pooling through the ODBC facilities via the call
setOption("SQL_ATTR_CONNECTION_POOLING") = SQL_CP_ONE_PER_DRIVER
The documentation says that this call must be made before the first connection is established. What is the best way to ensure this in code with multiple potential call sites for opening a connection? And what if it doesn't happen - will an error be thrown, or will pooling just not be enabled?
Also what tools are available for monitoring how many open connections there are to the DB?
A Singleton can solve this in any OO language. In C/C++, you can also use a static variable (in case you don't use a pure-OO coding style).
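To address both concerns from the question: a Meyers singleton gives you thread-safe initialization (guaranteed since C++11), and the function-local static's destructor runs at normal program exit, so the connection does get closed. A sketch using the hypothetical Connection class from the question (the lazy re-open still needs its own lock if several threads can race on it):

#include <mutex>

class GlobalConnection {
public:
    static Connection& getConnection() {
        static Connection conn;            // constructed once; initialization is thread-safe in C++11
        static std::mutex mtx;
        std::lock_guard<std::mutex> lock(mtx);
        if (!conn.isOpen())
            conn.open();                   // re-open lazily, serialized by the mutex
        return conn;
    }

private:
    GlobalConnection();                    // never instantiated; acts only as a holder
};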
Most client libraries support connection pooling, so opening a new connection will just pick an existing connection from the pool.
I need to write a C++ interface that can read our data structures and provide the output for a query over HTTP.
Server Need
It should be able to serve 100 clients at the same time.
Why C++
All the code is already written in C++, so we just need to write an HTTP layer in C++. That's why I am choosing C++ instead of a more conventional web-programming language.
I am thinking of using nginx to serve static files and using its proxy_pass to communicate with the C++ process.
There are two approaches I have found:
Write a FastCGI C++ module (see the sketch after this question).
Write a Node.js C++ module.
Please suggest any other approach if you have one.
Can you please list the pros and cons for each method based on prior experience?
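For context, the FastCGI approach from the list above usually boils down to an accept loop like this (a sketch assuming libfcgi's fcgi_stdio.h; nginx would then point a fastcgi_pass location at the socket the process is spawned on, e.g. via spawn-fcgi):

#include "fcgi_stdio.h"   // from libfcgi; link with -lfcgi

int main()
{
    // Each iteration handles one request forwarded by nginx (fastcgi_pass).
    while (FCGI_Accept() >= 0) {
        printf("Content-Type: text/plain\r\n\r\n");
        printf("result computed from the in-process data structures\n");
    }
    return 0;
}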
No one here seems to have addressed the actual question, though some nice workarounds have been offered. I've been able to build C++ modules for nginx with a couple of minor changes.
Change the module source file name to end with .cpp so gcc realizes it is dealing with C++.
Make sure all your Nginx includes (e.g. ngx_config.h, ngx_core.h, etc.) are wrapped in an extern "C" { } block. Similarly, make sure any functions called through Nginx function pointers are declared with such a wrapper.
Add --with-ld-opt="-lstdc++" to your "configure" invocation when setting up Nginx.
With those three steps your module should compile, build, link, and actually work.
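Step 2 in practice looks like this (a sketch assuming the standard nginx development headers; the handler name is made up):

// Wrap the C headers so the C++ compiler does not mangle their symbols.
extern "C" {
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
}

// Functions that nginx calls through function pointers also need C linkage.
extern "C" ngx_int_t my_module_handler(ngx_http_request_t *r);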
I think I will go forward with Nginx module development: http://www.evanmiller.org/nginx-modules-guide.html
Why?
It doesn't require any other library dependency, such as FastCGI.
I can use all the features of nginx inside my module.
What you are asking is basically how to turn the C++ process that holds your data structures into a webserver. That might not be the best way to go about it. (Then again, maybe it is in your situation; it depends on the complexity of the C++ process's interfaces you are trying to expose, I guess.)
Anyway, I would put a small HTTP frontend between the C++ process and the clients that does the HTTP work and communicates with the C++ backend process using some simple messaging protocol like ZeroMQ (zmq).
zmq in C/C++ is fairly straightforward, and it's very efficient and very fast. Using zmq you could very quickly set up a simple webserver frontend in Python, or whatever language you prefer that has zmq bindings, and have that frontend communicate asynchronously or synchronously with the backend C++ process over zmq.
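The backend side of that split can be as small as a REP socket loop. A sketch assuming libzmq's C API (the port and reply format are arbitrary):

#include <zmq.h>
#include <string>

int main()
{
    void* ctx = zmq_ctx_new();
    void* rep = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(rep, "tcp://*:5555");            // the HTTP frontend connects here

    while (true) {
        char buf[256];
        int n = zmq_recv(rep, buf, sizeof(buf) - 1, 0);
        if (n < 0) break;                                                // interrupted or context terminated
        if (n > static_cast<int>(sizeof(buf)) - 1) n = sizeof(buf) - 1;  // oversized message was truncated
        std::string query(buf, n);

        std::string reply = "result for: " + query;   // query your data structures here
        zmq_send(rep, reply.data(), reply.size(), 0);
    }

    zmq_close(rep);
    zmq_ctx_destroy(ctx);
    return 0;
}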
The c++ examples and the guide are nice starting points if you are looking into using zmq.
For Node.js there are also a few examples.
Try G-WAN; it allows you to use your C++ application directly.
You may try nginx-c-function.
It is simple to use and has built-in nginx cache memory at the application layer; see the nginx-c-function wiki and the example project with C++.
Sample code:
#include <stdio.h>
#include <ngx_http_c_func_module.h>
/*** build the program as .so library and copy to the preferred place for nginx to link this library ***/
/*** gcc -shared -o libcfuntest.so -fPIC cfuntest.c ***/
/*** cp libcfuntest.so /etc/nginx/ ***/
int is_service_on = 0;
void ngx_http_c_func_init(ngx_http_c_func_ctx_t* ctx) {
    ngx_http_c_func_log(info, ctx, "%s", "Starting The Application");
    is_service_on = 1;
}

void my_app_simple_get_greeting(ngx_http_c_func_ctx_t *ctx) {
    ngx_http_c_func_log_info(ctx, "Calling back and log from my_app_simple_get");

    ngx_http_c_func_write_resp(
        ctx,
        200,
        "200 OK",
        "text/plain",
        "greeting from ngx_http_c_func testing"
    );
}

void my_app_simple_get_args(ngx_http_c_func_ctx_t *ctx) {
    ngx_http_c_func_log_info(ctx, "Calling back and log from my_app_simple_get_args");

    ngx_http_c_func_write_resp(
        ctx,
        200,
        "200 OK",
        "text/plain",
        ctx->req_args
    );
}
void my_app_simple_get_token_args(ngx_http_c_func_ctx_t *ctx) {
    ngx_http_c_func_log_info(ctx, "Calling back and log from my_app_simple_get_token_args");

    char *tokenArgs = ngx_http_c_func_get_query_param(ctx, "token");
    if (!tokenArgs) {
        ngx_http_c_func_write_resp(
            ctx,
            401,
            "401 unauthorized",
            "text/plain",
            "Token Not Found"
        );
    } else {
        /* token present: echo it back with a success status */
        ngx_http_c_func_write_resp(
            ctx,
            200,
            "200 OK",
            "text/plain",
            tokenArgs
        );
    }
}
void my_app_simple_post(ngx_http_c_func_ctx_t *ctx) {
    ngx_http_c_func_log_info(ctx, "Calling back and log from my_app_simple_post");

    ngx_http_c_func_write_resp(
        ctx,
        202,
        "202 Accepted and Processing",
        "text/plain",
        ctx->req_body
    );
}

void my_app_simple_get_no_resp(ngx_http_c_func_ctx_t *ctx) {
    ngx_http_c_func_log_info(ctx, "Calling back and log from my_app_simple_get_no_resp");
}

void ngx_http_c_func_exit(ngx_http_c_func_ctx_t* ctx) {
    ngx_http_c_func_log(info, ctx, "%s\n", "Shutting down The Application");
    is_service_on = 0;
}
Just add an HTTP frontend to your C++ code, possibly using a library such as Beast, and then proxy_pass from nginx to your C++ server. You may or may not need nginx at all, depending on your use.
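A rough sketch of such a frontend with Boost.Beast (synchronous and single-threaded for brevity; a real server would use async accept/read or a thread pool, and the port is arbitrary):

#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <string>

namespace beast = boost::beast;
namespace http  = beast::http;
using tcp = boost::asio::ip::tcp;

int main()
{
    boost::asio::io_context ioc;
    tcp::acceptor acceptor(ioc, {tcp::v4(), 8080});   // nginx proxy_pass points here

    for (;;) {
        tcp::socket socket(ioc);
        acceptor.accept(socket);

        beast::flat_buffer buffer;
        http::request<http::string_body> req;
        http::read(socket, buffer, req);

        http::response<http::string_body> res{http::status::ok, req.version()};
        res.set(http::field::content_type, "text/plain");
        res.body() = "answer from the C++ data structures";   // query them here
        res.prepare_payload();
        http::write(socket, res);

        socket.shutdown(tcp::socket::shutdown_send);
    }
}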