How to add proxy support to boost::asio?

In my desktop application I added access to various internet resources using boost::asio. All I do is send HTTP requests (e.g. to map tile servers) and read the results.
My code is based on the asio sync_client sample.
Now I get reports from customers who cannot use these functions because their company network requires a proxy. In a web browser they can enter the address of the proxy and everything works fine, but our application is unable to download data.
How can I add such proxy support to my application?

I found the answer myself. It's quite simple:
http://www.jmarshall.com/easy/http/#proxies
gives a brief and clear description of how HTTP proxies work.
All I had to do was add the following code to the asio sync_client sample:
std::string myProxyServer = ...;
int myProxyPort = ...;

void doDownLoad(const std::string &in_server, const std::string &in_path, std::ostream &outstream)
{
    std::string server = in_server;
    std::string path = in_path;
    std::string service_port = "http";

    if (!myProxyServer.empty())
    {
        // When going through a proxy, request the absolute URI and
        // connect to the proxy host/port instead of the target server.
        path = "http://" + in_server + in_path;
        server = myProxyServer;
        if (myProxyPort != 0)
            service_port = std::to_string(myProxyPort);
    }

    tcp::resolver resolver(io_service);
    tcp::resolver::query query(server, service_port);
    ...
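The only other change compared with the direct-connection case is the request itself: when talking to a proxy, the absolute URI goes into the request line, while the Host header still names the origin server. A minimal sketch in the style of the sync_client sample (request_stream wraps the sample's boost::asio::streambuf named request):

// With a proxy, "path" already holds the absolute URI built above;
// the Host header still refers to the origin server.
std::ostream request_stream(&request);
request_stream << "GET " << path << " HTTP/1.0\r\n";
request_stream << "Host: " << in_server << "\r\n";
request_stream << "Accept: */*\r\n";
request_stream << "Connection: close\r\n\r\n";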

It seems that sample is merely a showcase of what Boost.Asio can be used for and is likely not intended to be used as-is. You should probably use a complete library that handles not only HTTP proxies but also HTTP redirects, compression, and so on.
HTTP is a complex thing: if you don't, chances are high that you will soon hear from another customer with another problem.
I found cpp-netlib, which looks promising and is based on Boost.Asio; I'm not sure it handles proxies, though.
There is also libcurl, but I don't know if it can easily be integrated with Boost.Asio.
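For what it's worth, on the libcurl side proxy handling is built in: it honours the usual http_proxy/https_proxy environment variables and also accepts an explicit proxy setting. A rough standalone sketch (the URL and proxy address are placeholders, not taken from the question):

#include <curl/curl.h>
#include <string>

// Append each chunk libcurl receives to a std::string.
static size_t writeToString(char *ptr, size_t size, size_t nmemb, void *userdata)
{
    static_cast<std::string *>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL, "http://tile.example.com/1/2/3.png");   // placeholder URL
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://myproxy.example.com:8080");   // or rely on http_proxy
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);                         // handle redirects too
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeToString);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}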

Related

How do you read/log gRPC HTTP headers (not custom metadata)?

I am working with gRPC and Protobuf, using a C++ server and a C++ client, as well as a grpc-js client. Is there a way to get a read on all of the HTTP request/response headers from the transport layer in gRPC? I am looking for the sort of typical client/server HTTP headers - particularly, I would like to see which version of the protocol is being used (whether it is HTTP/1.1 or HTTP/2). I know that gRPC is supposed to be using HTTP/2, but I am trying to confirm it at a low level.
In a typical gRPC client implementation you have something like this:
class PingPongClient {
 public:
  PingPongClient(std::shared_ptr<Channel> channel)
      : stub_(PingPong::NewStub(channel)) {}

  // Assembles the client's payload, sends it and presents the response back
  // from the server.
  PingPongReply PingPong(PingPongRequest request) {
    // Container for the data we expect from the server.
    PingPongReply reply;
    // Context for the client. It could be used to convey extra information to
    // the server and/or tweak certain RPC behaviors.
    ClientContext context;
    // The actual RPC.
    Status status = stub_->Ping(&context, request, &reply);
    // Act upon its status.
    if (status.ok()) {
      return reply;
    } else {
      // error_code() is an enum, so convert it to text explicitly before
      // concatenating (adding it to a string literal is pointer arithmetic).
      auto errorMsg = std::to_string(status.error_code()) + ": " + status.error_message();
      std::cout << errorMsg << std::endl;
      throw std::runtime_error(errorMsg);
    }
  }

 private:
  std::unique_ptr<PingPong::Stub> stub_;
};
and on the server side, something like:
class PingPongServiceImpl final : public PingPong::Service {
  Status Ping(
      ServerContext* context,
      const PingPongRequest* request,
      PingPongReply* reply
  ) override {
    std::cout << "PingPong" << std::endl;
    printContextClientMetadata(context->client_metadata());
    if (request->input_msg() == "hello") {
      reply->set_output_msg("world");
    } else {
      reply->set_output_msg("I can't pong unless you ping me 'hello'!");
    }
    std::cout << "Replying with " << reply->output_msg() << std::endl;
    return Status::OK;
  }
};
I would think that either ServerContext or the request object might have access to this information, but context seems to only provide an interface into metadata, which is custom.
None of the gRPC C++ examples give any indication that there is such an API, nor do any of the associated source/header files in the gRPC source code. I have exhausted my options here in terms of tutorials, blog posts, videos, and documentation - I asked a similar question on the grpc-io forum, but have gotten no takers. Hoping the SO crew has some insights here!
I should also note that I experimented with passing a variety of environment variables as flags to the running processes to see if I can get details about HTTP headers, but even with these flags enabled (the HTTP-related ones), I do not see basic HTTP headers.
First, the gRPC libraries absolutely do use HTTP/2. The protocol is explicitly defined in terms of HTTP/2.
The gRPC libraries do not directly expose the raw HTTP headers to the application. However, they do have trace logging options that can log a variety of information for debugging purposes, including headers. The tracers can be enabled by setting the environment variable GRPC_TRACE. The environment variable GRPC_VERBOSITY=DEBUG should also be set to make sure that all of the logs are output. More information can be found in this document describing how the library uses environment variables.
In the C++ library, the http tracer should log the raw headers. The grpc-js library has different internals and different tracer definitions, so you should use the call_stream tracer for that one. Those will also log other request information, but it should be pretty easy to pick out the headers.
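To illustrate, the variables can be exported in the shell before launching the binary, or set from inside the process before gRPC initializes; a minimal sketch using POSIX setenv (the tracer names are the ones mentioned above, everything else is placeholder boilerplate):

#include <cstdlib>

int main(int argc, char** argv) {
    // Must happen before the gRPC runtime is initialized.
    setenv("GRPC_VERBOSITY", "DEBUG", /*overwrite=*/1);  // emit debug-level logs
    setenv("GRPC_TRACE", "http", /*overwrite=*/1);       // C++ core: log raw HTTP/2 headers
    // ... create the channel or server and run the PingPong RPC as usual ...
    return 0;
}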

Apache Thrift for just processing, not server

I hope I haven't misunderstood the Thrift concept, but from what I see in (example) questions like this one, the framework is composed of different modular layers that can be enabled or disabled.
I'm mostly interested in the "IDL part" of Thrift, so that I can create a common interface between my C++ code and an external JavaScript application. I would like to call C++ functions from JS, with binary data transmission, and I've already used the compiler for this.
But my C++ (server) and JS (client) applications already exchange data through a C++ webserver with WebSocket support; it is not provided by Thrift.
So I was thinking to setup the following items:
In JS (already done):
TWebSocketTransport to send data to my "Websocket server" (with host ws://xxx.xxx.xxx.xxx)
TBinaryProtocol to encapsulate the data (using this JS implementation)
The compiled Thrift JS library with the correspondent C++ functions to call (done with the JS compiler)
In C++ (partial):
TBinaryProtocol to encode/decode the data
A TProcessor with handler to get the data from the client and process it
For now, the client is already able to send requests to my websocket server; I can see them arriving in binary form, and I just need Thrift to:
Decode the input
Call the appropriate C++ function
Encode the output
My webserver will send the response to the client, so no "Thrift server" is needed here. I see there is a TProcessor->process() function; I'm trying to use it when I receive the binary data, but it needs an in/out TProtocol. No problem here... but in order to create the TBinaryProtocol I also need a TTransport! If no Thrift server is expected... what transport should I use?
I tried passing NULL for the TTransport in the TBinaryProtocol constructor, but as soon as I use it I get a null-pointer exception.
Code is something like:
Init:
boost::shared_ptr<MySDKServiceHandler> handler(new MySDKServiceHandler());
thriftCommandProcessor = boost::shared_ptr<TProcessor>(new MySDKServiceProcessor(handler));
thriftInputProtocol = boost::shared_ptr<TBinaryProtocol>(new TBinaryProtocol(TTransport???));
thriftOutputProtocol = boost::shared_ptr<TBinaryProtocol>(new TBinaryProtocol(TTransport???));
When data arrives:
this->thriftInputProtocol->writeBinary(input); // exception here
this->thriftCommandProcessor->process(this->thriftInputProtocol, this->thriftOutputProtocol, NULL);
this->thriftOutputProtocol->readBinary(output);
I've managed to do it using the following components:
// create the Processor using my compiled Thrift class (from the IDL)
boost::shared_ptr<MySDKServiceHandler> handler(new MySDKServiceHandler());
thriftCommandProcessor = boost::shared_ptr<TProcessor>(new ThriftSDKServiceProcessor(handler));

// a Transport is still needed; I use TMemoryBuffer so everything is kept in local memory
boost::shared_ptr<TTransport> transport(new apache::thrift::transport::TMemoryBuffer());

// my client/server data is based on the binary protocol; I pass the transport to it
thriftProtocol = boost::shared_ptr<TProtocol>(new TBinaryProtocol(transport, 0, 0, false, false));

/* .... when the message arrives through my webserver */
void parseMessage(const byte* input, const int input_size, byte*& output, int& output_size)
{
    // get the transports used to write and read Thrift data
    boost::shared_ptr<TTransport> iTr = this->thriftProtocol->getInputTransport();
    boost::shared_ptr<TTransport> oTr = this->thriftProtocol->getOutputTransport();

    // "transmit" my data to Thrift
    iTr->write(input, input_size);
    iTr->flush();

    // make Thrift do the work using the Processor
    this->thriftCommandProcessor->process(this->thriftProtocol, NULL);

    // the output transport (oTr) now contains the called procedure's result
    output = new byte[MAX_SDK_WS_REPLYSIZE];
    output_size = oTr->read(output, MAX_SDK_WS_REPLYSIZE);
}
My webserver will send the response to the client, so no "Thrift server" is needed here. I see there is a TProcessor->process() function; I'm trying to use it when I receive the binary data, but it needs an in/out TProtocol. No problem here... but in order to create the TBinaryProtocol I also need a TTransport! If no Thrift server is expected... what transport should I use?
The usual pattern is to store the bits somewhere and use that buffer or data stream as the input, and the same for the output. For certain languages there is a TStreamTransport available; for C++ the TBufferBase family (e.g. TMemoryBuffer) looks promising to me.
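A minimal sketch of that buffer-based approach with TMemoryBuffer, using separate input and output protocols. The processor type and the request bytes are placeholders following the question's naming, and depending on your Thrift version the smart pointer is boost::shared_ptr or std::shared_ptr:

#include <cstdint>
#include <string>
#include <boost/shared_ptr.hpp>
#include <thrift/TProcessor.h>
#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/transport/TBufferTransports.h>

using apache::thrift::protocol::TBinaryProtocol;
using apache::thrift::transport::TMemoryBuffer;

// Hypothetical helper: "processor" is the generated service processor from the
// question, requestBytes/requestSize are the raw bytes from the websocket.
std::string handleThriftMessage(boost::shared_ptr<apache::thrift::TProcessor> processor,
                                const uint8_t* requestBytes, uint32_t requestSize)
{
    // One buffer receives the raw request, the other collects the encoded reply.
    boost::shared_ptr<TMemoryBuffer> inBuffer(new TMemoryBuffer());
    boost::shared_ptr<TMemoryBuffer> outBuffer(new TMemoryBuffer());
    boost::shared_ptr<TBinaryProtocol> inProto(new TBinaryProtocol(inBuffer));
    boost::shared_ptr<TBinaryProtocol> outProto(new TBinaryProtocol(outBuffer));

    // Feed the websocket payload into the input transport ...
    inBuffer->write(requestBytes, requestSize);
    // ... let the processor decode it, call the handler and encode the result ...
    processor->process(inProto, outProto, NULL);
    // ... and hand the reply bytes back to the webserver.
    return outBuffer->getBufferAsString();
}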

C++ library to send back simple string for HTTPS queries

I have an emulator program written in C++ running on Ubuntu 12.04. There are some settings and options needed for running the program, which are given by main's arguments. I need to query these options via HTTPS from a remote machine/mobile device (so basically, imagine I want to return main's arguments). I was wondering if someone could help me with that.
There are probably some libraries that make this easier, for example POCO. I'm not sure how suitable it is for my case, but here is an example of connection setup in POCO. Using a library is not a must, though; I just want the most efficient/simplest way.
Mongoose (or its non-GPL fork Civetweb) is an embedded web server. It is very easy to set up and add controllers to (typically half a dozen lines of code).
Just add the source file (one C file) to your project and build, add a line to start the server listening with whatever options you like, and add a callback function to handle requests. It does SSL out of the box (though IIRC you'll need to have OpenSSL installed too).
There was another SO answer with some comparisons. I used Civetweb at work and was impressed by how easy it all was. There's not much documentation, though.
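To give an idea of the scale involved, here is a rough sketch of embedding Civetweb from C++ to answer an HTTPS request with a plain string; the option strings, handler return convention, and certificate path should be checked against the Civetweb documentation for your version, and the paths and ports are placeholders:

#include <cstdio>
#include <cstring>
#include <string>
#include "civetweb.h"

// Callback invoked for every request under the registered URI.
static int optionsHandler(struct mg_connection *conn, void *cbdata)
{
    const std::string *options = static_cast<const std::string *>(cbdata);
    mg_printf(conn,
              "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n"
              "Content-Length: %zu\r\nConnection: close\r\n\r\n%s",
              options->size(), options->c_str());
    return 200; // tell Civetweb the request was handled
}

int main(int argc, char *argv[])
{
    // Expose main's arguments as a single string.
    std::string options;
    for (int i = 1; i < argc; ++i) options += std::string(argv[i]) + "\n";

    // "8443s" = HTTPS on port 8443; the .pem path is a placeholder.
    const char *cwOptions[] = {
        "listening_ports", "8443s",
        "ssl_certificate", "/path/to/server.pem",
        NULL
    };

    struct mg_callbacks callbacks;
    memset(&callbacks, 0, sizeof(callbacks));
    struct mg_context *ctx = mg_start(&callbacks, NULL, cwOptions);
    mg_set_request_handler(ctx, "/options", optionsHandler, &options);

    // ... run the emulator; stop the server on shutdown ...
    getchar();
    mg_stop(ctx);
    return 0;
}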
Here's a stripped-down POCO version; for the full code see the HTTPSTimeServer example.
struct MyRequestHandler: public HTTPRequestHandler
{
    void handleRequest(HTTPServerRequest& request, HTTPServerResponse& response)
    {
        response.setContentType("text/html");
        // ... do your work here
        std::ostream& ostr = response.send();
        ostr << "<html><head><title>HTTPServer example</title></head>"
             << "<body>Success!</body></html>";
    }
};

struct MyRequestHandlerFactory: public HTTPRequestHandlerFactory
{
    HTTPRequestHandler* createRequestHandler(const HTTPServerRequest& request)
    {
        return new MyRequestHandler;
    }
};

// ...
// set up a server socket
SecureServerSocket svs(port);
// set up an HTTPServer instance (you may want to new the factory and params
// prior to constructing the object to prevent a possible leak in case of
// an exception)
HTTPServer srv(new MyRequestHandlerFactory, svs, new HTTPServerParams);
// start the HTTPServer
srv.start();

Apply HTTP basic authentication to JAX-WS (HttpSpiContextHandler) in embedded Jetty

There are some similar questions for earlier versions of Jetty (pre-9), but none that address this specific problem:
Server server = new Server();
System.setProperty("com.sun.net.httpserver.HttpServerProvider",
        JettyHttpServerProvider.class.getName());
JettyHttpServer jettyServer = new JettyHttpServer(server, true);

Endpoint endpoint = Endpoint.create(new SOAPService()); // this class handles all ws requests
endpoint.publish(jettyServer.createContext("/service")); // access by path

server.start();
The simplified code example above shows the only way I have found to bridge between Jetty and incoming SOAP requests to my JAX-WS service. All settings are in code with no web.xml; this is part of a larger solution that has multiple contexts and connections for different purposes (servlets etc.).
I have tried adding a handler class via jettyServer.createContext("/service", new handler()) to see if I can extract headers and simulate basic auth, but it never gets executed.
My problem is that I cannot find a way to tell the Jetty server, in code, to use basic authentication. Using the setSecurityHandler method of a ServletContextHandler is easy and works great for other contexts; I just can't figure out how to apply this concept to the JAX-WS service.
Any help would be much appreciated.
P.S. SSL is already implemented; I just need to add HTTP basic auth.
For anyone else who may have come across the same problem, here is the answer that I eventually stumbled on.
final HttpContext httpContext = jettyServer.createContext("/service");

com.sun.net.httpserver.BasicAuthenticator a = new com.sun.net.httpserver.BasicAuthenticator("") {
    public boolean checkCredentials(String username, String pw) {
        return username.equals("username") && pw.equals("password");
    }
};

httpContext.setAuthenticator(a);
endpoint.publish(httpContext); // access by path
You can of course expand checkCredentials into something a bit more sophisticated, but this shows the basic working method.

How do I duplicate certificate authentication (Mumble (c/c++)) in Python?

Alright, before I really get into this post, I should warn you that this might not be an easy fix. Whoever is able to reply will need to know a fair amount of C/C++ and at least some Python to answer the question above.
Basically, I have a connection method from Mumble (a VoIP client) that connects to a server and sends it an SSL certificate for authentication purposes. I also have a Python script that connects to the same Mumble VoIP server, but I don't send a certificate.
I need to modify my existing code to send a certificate, as the current Mumble client does.
--
Here is the C++ code that seems to send a certificate:
ServerHandler::ServerHandler() {
    MumbleSSL::addSystemCA();
    {
        QList<QSslCipher> pref;
        foreach(QSslCipher c, QSslSocket::defaultCiphers()) {
            if (c.usedBits() < 128)
                continue;
            pref << c;
        }
        if (pref.isEmpty())
            qFatal("No ciphers of at least 128 bit found");
        QSslSocket::setDefaultCiphers(pref);
    }
    // ... (rest of the constructor stripped)
}

void ServerHandler::run() {
    qbaDigest = QByteArray();
    QSslSocket *qtsSock = new QSslSocket(this);
    qtsSock->setPrivateKey(g.s.kpCertificate.second);
    qtsSock->setLocalCertificate(g.s.kpCertificate.first.at(0));
    QList<QSslCertificate> certs = qtsSock->caCertificates();
    certs << g.s.kpCertificate.first;
    qtsSock->setCaCertificates(certs);
    cConnection = ConnectionPtr(new Connection(this, qtsSock));
    qtsSock->setProtocol(QSsl::TlsV1);
    qtsSock->connectToHostEncrypted(qsHostName, usPort);
    // ... (rest of run() stripped)
}

void ServerHandler::serverConnectionConnected() {
    tConnectionTimeoutTimer->stop();
    qscCert = cConnection->peerCertificateChain();
    qscCipher = cConnection->sessionCipher();
    if (! qscCert.isEmpty()) {
        const QSslCertificate &qsc = qscCert.last();
        qbaDigest = sha1(qsc.publicKey().toDer());
        bUdp = Database::getUdp(qbaDigest);
    } else {
        bUdp = true;
    }
    QStringList tokens = Database::getTokens(qbaDigest);
    foreach(const QString &qs, tokens)
        mpa.add_tokens(u8(qs));
    QMap<int, CELTCodec *>::const_iterator i;
    for (i = g.qmCodecs.constBegin(); i != g.qmCodecs.constEnd(); ++i)
        mpa.add_celt_versions(i.key());
    sendMessage(mpa);
    // ... (rest of the function stripped)
}
--
And alas, this is what I do to connect to it right now (in python):
try:
    self.socket.connect(self.host)
except:
    print self.threadName, "Couldn't connect to server"
    return

self.socket.setblocking(0)
print self.threadName, "connected to server"
--
So... what more do I need to do to my Python source to connect to servers that require a certificate? My source currently connects just fine to any Mumble server with requirecert set to false. I need it to work on all servers, as this will be used on my own server (which, ironically enough, has requirecert turned on).
I can pre-generate the certificate as a .p12 or whatever type of file, so I don't need the program to generate the cert. I just need it to send the cert the way the server wants it (as is done in the C++ I posted).
Please help me really soon! If you need more info, message me again.
Stripped out all irrelevant code; now it's just the code that deals with SSL.
From the C++ code it looks like you simply need SSL support: negotiate with the correct certificate file and encrypt the traffic with the correct private key. Those certificates and private keys are most likely stored somewhere by your original program. If there are non-standard authorities that the C++ code might be loading, you'll need to find out where to put those root authorities in your Python installation, or make sure Python simply ignores those issues, which is less secure.
In Python you can create a connection, like above, except with urllib. This library has support for HTTPS and for providing the certificate and private key; see URLopener.
Example Usage:
opener = urllib.URLopener(key_file = 'mykey.key', cert_file = 'mycert.cer')
self.socket = opener.open(url)
You'll probably need to make it more robust with the appropriate error checking and such, but hopefully this info will help you out.