C++ OpenSSL: libssl fails to verify certificates on Windows - c++

I've done a lot of looking around, but I can't seem to find a decent solution to this problem. Many of the Stack Overflow posts on this topic concern Ruby, but I'm using OpenSSL more or less directly (via the https://gitlab.com/eidheim/Simple-Web-Server library) for a C++ application/set of libraries, and I need to fix this completely transparently for users (they should not need to hook up a custom certificate verification file in order to use the application).
On Windows, when I attempt to use the SimpleWeb HTTPS client, connections fail if I have certificate verification switched on, because the certificate for the connection fails to validate. This is not the case on Linux, where verification works fine.
I was advised to follow this solution to import the Windows root certificates into OpenSSL so that they could be used by the verification routines. However, it doesn't seem to make any difference as far as I can tell. I have dug into the guts of the libssl verification functions to try to understand exactly what's going on. Although the above answer recommends adding the Windows root certificates to a new X509_STORE, it appears that the SSL connection context has its own store, which is set up when the connection is initialised. This makes me think that simply creating a new X509_STORE and adding certificates to it doesn't help, because the connection never actually uses that store.
It may well be that I've spent so much time debugging the minutiae of libssl that I'm missing what the actual approach to solving this problem should be. Does OpenSSL provide a canonical way of looking up system certificates that I'm not using? Alternatively, could the issue be the way that the SimpleWeb library/ASIO initialises OpenSSL? I know the library allows you to provide a path to a "verify file" for certificates, but this doesn't feel like an appropriate solution: as a developer I should be using the certificates found on the end user's system, rather than hard-coding my own.
EDIT: For context, this is the code I'm using in a tiny example application:
#define MY_ENCODING_TYPE (PKCS_7_ASN_ENCODING | X509_ASN_ENCODING)

static void LoadSystemCertificates()
{
    HCERTSTORE hStore;
    PCCERT_CONTEXT pContext = nullptr;
    X509* x509 = nullptr;
    X509_STORE* store = X509_STORE_new();

    hStore = CertOpenSystemStore(NULL, "ROOT");

    if ( !hStore )
    {
        return;
    }

    while ( (pContext = CertEnumCertificatesInStore(hStore, pContext)) != nullptr )
    {
        const unsigned char* encodedCert = reinterpret_cast<const unsigned char*>(pContext->pbCertEncoded);

        x509 = d2i_X509(nullptr, &encodedCert, pContext->cbCertEncoded);

        if ( x509 )
        {
            X509_STORE_add_cert(store, x509);
            X509_free(x509);
        }
    }

    CertCloseStore(hStore, 0);
}
static void MakeRequest(const std::string& address)
{
    using Client = SimpleWeb::Client<SimpleWeb::HTTPS>;

    Client httpsClient(address);
    httpsClient.io_service = std::make_shared<asio::io_service>();

    std::cout << "Making request to: " << address << std::endl;

    bool hasResponse = false;

    httpsClient.request("GET", [address, &hasResponse](std::shared_ptr<Client::Response> response, const SimpleWeb::error_code& error)
    {
        hasResponse = true;

        if ( error )
        {
            std::cerr << "Got error from " << address << ": " << error.message() << std::endl;
        }
        else
        {
            std::cout << "Got response from " << address << ":\n" << response->content.string() << std::endl;
        }
    });

    while ( !hasResponse )
    {
        httpsClient.io_service->poll();
        httpsClient.io_service->reset();
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
    }
}

int main(int, char**)
{
    LoadSystemCertificates();
    MakeRequest("google.co.uk");
    return 0;
}
int main(int, char**)
{
LoadSystemCertificates();
MakeRequest("google.co.uk");
return 0;
}
The call returns me: Got error from google.co.uk: certificate verify failed

OK, for anyone this might help in future, this is how I solved the issue. This answer to a related question helped.
It turns out that the issue was indeed that the SSL context was not making use of the certificate store that I'd set up. Everything else was OK, but the missing piece of the puzzle was a call to SSL_CTX_set_cert_store(), which takes the certificate store and hands it to the SSL context.
In the context of the SimpleWeb library, the easiest way to do this appeared to be to subclass the SimpleWeb::Client<SimpleWeb::HTTPS> class and add the following to the constructor:
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <wincrypt.h>

class MyClient : public SimpleWeb::Client<SimpleWeb::HTTPS>
{
public:
    MyClient( /* ... */ ) :
        SimpleWeb::Client<SimpleWeb::HTTPS>( /* ... */ )
    {
        AddWindowsRootCertificates();
    }

private:
    using OpenSSLContext = asio::ssl::context::native_handle_type;

    void AddWindowsRootCertificates()
    {
        // Get the SSL context from the SimpleWeb class.
        OpenSSLContext sslContext = context.native_handle();

        // Get a certificate store populated with the Windows root certificates.
        // If this fails for some reason, the function returns null.
        X509_STORE* certStore = GetWindowsCertificateStore();

        if ( sslContext && certStore )
        {
            // Set this store to be used for the SSL context.
            SSL_CTX_set_cert_store(sslContext, certStore);
        }
    }

    static X509_STORE* GetWindowsCertificateStore()
    {
        // To avoid populating the store every time, we keep a static
        // pointer to the store and just initialise it the first time
        // this function is called.
        static X509_STORE* certificateStore = nullptr;

        if ( !certificateStore )
        {
            // Not initialised yet, so do so now.
            // Try to open the root certificate store.
            HCERTSTORE rootStore = CertOpenSystemStore(0, "ROOT");

            if ( rootStore )
            {
                // The new store is reference counted, so we can create it
                // and keep the pointer around for later use.
                certificateStore = X509_STORE_new();

                PCCERT_CONTEXT pContext = nullptr;

                while ( (pContext = CertEnumCertificatesInStore(rootStore, pContext)) != nullptr )
                {
                    // d2i_X509() may modify the pointer, so make a local copy.
                    const unsigned char* content = pContext->pbCertEncoded;

                    // Convert the certificate to X509 format.
                    X509* x509 = d2i_X509(NULL, &content, pContext->cbCertEncoded);

                    if ( x509 )
                    {
                        // Successful conversion, so add to the store.
                        X509_STORE_add_cert(certificateStore, x509);

                        // Release our reference.
                        X509_free(x509);
                    }
                }

                // Make sure to close the store.
                CertCloseStore(rootStore, 0);
            }
        }

        return certificateStore;
    }
};
Obviously GetWindowsCertificateStore() would need to be abstracted out to somewhere platform-specific if your class needs to compile on multiple platforms.

Related

BLE BlueZ return fixed pin for connection

I have BlueZ 5.48 running on an embedded device and I am able to connect Apple and Windows devices over Bluetooth. I have also been able to get Bluetooth pairing working using a DisplayOnly custom agent, which generates a random PIN/passkey for pairing.
The embedded device has no input/output peripherals, so I need to return a fixed PIN for all connections, but for some reason I can't find the right way to do it. So far I have created a custom agent and registered it on D-Bus; it receives the RequestPinCode and DisplayPasskey calls (but they are set up to return auto-generated PINs).
Here is a code snippet from my setup:
static void bluez_agent_method_call(GDBusConnection *con,
                                    const gchar *sender,
                                    const gchar *path,
                                    const gchar *interface,
                                    const gchar *method,
                                    GVariant *params,
                                    GDBusMethodInvocation *invocation,
                                    void *userdata)
{
    int pass;
    int entered;
    char *opath;
    GVariant *p = g_dbus_method_invocation_get_parameters(invocation);

    g_print("Agent method call: %s.%s()\n", interface, method);

    if(!strcmp(method, "RequestPinCode")) {
        ;
    }
    else if(!strcmp(method, "DisplayPinCode")) {
        ;
    }
    else if(!strcmp(method, "RequestPasskey")) {
        g_print("Getting the Pin from user: ");
        fscanf(stdin, "%d", &pass);
        g_print("\n");
        g_dbus_method_invocation_return_value(invocation, g_variant_new("(u)", pass));
    }
    else if(!strcmp(method, "DisplayPasskey")) {
        g_variant_get(params, "(ouq)", &opath, &pass, &entered);
        cout << "== pass = " << pass << endl;
        pass = 1234; // Changing the value here does not change the actual Pin for some reason.
        cout << "== pass = " << pass << " opath = " << opath << endl;
        g_dbus_method_invocation_return_value(invocation, NULL);
    }
    else if(!strcmp(method, "RequestConfirmation")) {
        g_variant_get(params, "(ou)", &opath, &pass);
        g_dbus_method_invocation_return_value(invocation, NULL);
    }
    else if(!strcmp(method, "RequestAuthorization")) {
        ;
    }
    else if(!strcmp(method, "AuthorizeService")) {
        ;
    }
    else if(!strcmp(method, "Cancel")) {
        ;
    }
    else
        g_print("We should not come here, unknown method\n");
}
I tried changing the pass variable in the DisplayPasskey handler to set a new PIN, but Bluetooth still connects with the auto-generated PIN.
I found this Stack Overflow question, which is exactly what I need: How to setup Bluez 5 to ask pin code during pairing. From the comments, there seems to be a way to return fixed PINs.
It would be great if somebody could provide me with some examples of returning a fixed PIN from the DisplayPasskey and RequestPinCode functions.
The Bluetooth standard does not contain a fixed-key association model. The standard does not use a PAKE (https://en.m.wikipedia.org/wiki/Password-authenticated_key_agreement) but a custom, ad-hoc, weaker protocol. The custom protocol used during passkey pairing is only secure for one-time passkeys: a passive eavesdropper learns the passkey after a successful pairing attempt, and the passkey can also be brute-forced in at most 20 pairing attempts.
BlueZ follows the Bluetooth standard, which says the passkey should be randomly generated, so you cannot set your own fixed passkey. If you don't have the required I/O capabilities, you should use the "Just Works" association model instead (which unfortunately does not give you MITM protection). If you want higher security, with a fixed passkey for MITM protection, you must implement your own security layer on top of the (insecure) application layer. This is, for example, what Apple's HomeKit does.
Please also see my post at https://stackoverflow.com/a/59282315.
This article is also worth reading that explains why a static passkey is insecure: https://insinuator.net/2021/10/change-your-ble-passkey-like-you-change-your-underwear/.

pjsip application fails to register the account with Invalid/unsupported digest algorithm error

I tried to run this sample C++ application using pjsip library, But when I ran the application, I faced with this error:
06:56:50.480 sip_auth_client.c ...Unsupported digest algorithm "SHA-256"
06:56:50.480 pjsua_acc.c ....SIP registration error: Invalid/unsupported digest algorithm (PJSIP_EINVALIDALGORITHM) [status=171102]
But when I examined the SIP response from the server, I noticed it contained two WWW-Authenticate headers:
WWW-Authenticate: Digest realm="sip.linphone.org", nonce="ImEX4gAAAAC73QlWAAC9corBNkwAAAAA", opaque="+GNywA==", algorithm=SHA-256, qop="auth"
WWW-Authenticate: Digest realm="sip.linphone.org", nonce="ImEX4gAAAAC73QlWAAC9corBNkwAAAAA", opaque="+GNywA==", algorithm=MD5, qop="auth"
So the question is: if pjsip doesn't support the SHA-256 algorithm, why doesn't it use the MD5 algorithm mentioned in the second header?
The sample code is:
#include <pjsua2.hpp>
#include <iostream>

using namespace pj;

// Subclass to extend the Account and get notifications etc.
class MyAccount : public Account {
public:
    virtual void onRegState(OnRegStateParam &prm) {
        AccountInfo ai = getInfo();
        std::cout << (ai.regIsActive ? "*** Register:" : "*** Unregister:")
                  << " code=" << prm.code << std::endl;
    }
};

int main()
{
    Endpoint ep;
    ep.libCreate();

    // Initialize endpoint
    EpConfig ep_cfg;
    ep.libInit( ep_cfg );

    // Create SIP transport. Error handling sample is shown
    TransportConfig tcfg;
    tcfg.port = 5060;
    try {
        ep.transportCreate(PJSIP_TRANSPORT_UDP, tcfg);
    } catch (Error &err) {
        std::cout << err.info() << std::endl;
        return 1;
    }

    // Start the library (worker threads etc)
    ep.libStart();
    std::cout << "*** PJSUA2 STARTED ***" << std::endl;

    // Configure an AccountConfig
    AccountConfig acfg;
    acfg.idUri = "sip:test@pjsip.org";
    acfg.regConfig.registrarUri = "sip:pjsip.org";
    AuthCredInfo cred("digest", "*", "test", 0, "secret");
    acfg.sipConfig.authCreds.push_back( cred );

    // Create the account
    MyAccount *acc = new MyAccount;
    acc->create(acfg);

    // Here we don't have anything else to do..
    pj_thread_sleep(10000);

    // Delete the account. This will unregister from the server
    delete acc;

    // This will implicitly shut down the library
    return 0;
}
I've struggled with this myself. Apparently, PJSIP only evaluates the first WWW-Authenticate header, even if the server supplies more than one.
To overcome this problem I changed the following in the source code:
In the file /pjsip/src/pjsip/sip_auth_client.c find the block of code that generates a response for the WWW-Authenticate header. It should be around line 1220.
    /* Create authorization header for this challenge, and update
     * authorization session.
     */
    status = process_auth(tdata->pool, hchal, tdata->msg->line.req.uri,
                          tdata, sess, cached_auth, &hauth);
    if (status != PJ_SUCCESS)
        return status;

    if (pj_pool_get_used_size(cached_auth->pool) >
        PJSIP_AUTH_CACHED_POOL_MAX_SIZE)
    {
        recreate_cached_auth_pool(sess->endpt, cached_auth);
    }

    /* Add to the message. */
    pjsip_msg_add_hdr(tdata->msg, (pjsip_hdr*)hauth);

    /* Process next header. */
    hdr = hdr->next;
and replace it with
    /* Create authorization header for this challenge, and update
     * authorization session.
     */
    status = process_auth(tdata->pool, hchal, tdata->msg->line.req.uri,
                          tdata, sess, cached_auth, &hauth);
    if (status != PJ_SUCCESS) {
        // Previously, pjsip analysed one WWW-Authenticate header and, if it
        // failed (due to an unsupported SHA-256 digest, for example), it
        // returned without considering the next WWW-Authenticate header.
        PJ_LOG(4,(THIS_FILE, "Invalid response, moving to next"));
        //return status;
        hdr = hdr->next;
    } else {
        if (pj_pool_get_used_size(cached_auth->pool) >
            PJSIP_AUTH_CACHED_POOL_MAX_SIZE)
        {
            recreate_cached_auth_pool(sess->endpt, cached_auth);
        }

        /* Add to the message. */
        pjsip_msg_add_hdr(tdata->msg, (pjsip_hdr*)hauth);

        /* Process next header. */
        hdr = hdr->next;
    }
Then recompile the source (as per these instructions How to install pjsua2 packages for python?)
Please note that although this solves the problem for Linphone, I did not test it in other situations (for example, when the server sends multiple valid algorithms, or none at all).

Paho MQTT (C++) client fails to connect to Mosquitto

I've got C++ code using the Paho MQTTPacket Embedded C++ library to connect to an MQTT broker. When that broker is io.adafruit.com, it works perfectly fine. But when it's my own Mosquitto instance running on my Raspberry Pi, the connection fails. I've narrowed it down to this line in MQTTClient.h, in the MQTT::Client::connect method:
// this will be a blocking call, wait for the connack
if (waitfor(CONNACK, connect_timer) == CONNACK)
The app hangs here for about 30 seconds, and then gets a result other than CONNACK (specifically 0 rather than 2).
I have tried both protocol version 3 (i.e. 3.1) and 4 (i.e. 3.1.1); same result.
My Mosquitto instance has no authentication or passwords set up. I've tried turning on debug messages in the Mosquitto log, but they're not showing anything useful. I'm at a loss. Why might I be unable to connect to Mosquitto from my C++ Paho code?
EDIT: Here's the client code... again, this works fine with Adafruit, but when I point it to my Mosquitto at localhost, it hangs as described. (I've elided the username and password -- I am sending them, but I really don't think they are the issue: with mosquitto_pub or mosquitto_sub on the command line, I can connect regardless, because Mosquitto is configured to allow anonymous connections.)
const char* host = "127.0.0.1";
int port = 1883;
const char* clientId = "ZoomBridge";
const char* username = "...";
const char* password = "...";
MQTT::QoS subsqos = MQTT::QOS2;

ipstack = new IPStack();
client = new MQTT::Client<IPStack, Countdown, 30000>(*ipstack);

MQTTPacket_connectData data = MQTTPacket_connectData_initializer;
data.willFlag = 1;
data.MQTTVersion = 3;
data.clientID.cstring = (char*)clientId;
data.username.cstring = (char*)username;
data.password.cstring = (char*)password;
data.keepAliveInterval = 20;
data.cleansession = 1;

int rc = ipstack->connect(host, port);
if (rc != MQTT::SUCCESS) {
    cout << "Failed [1] (result " << rc << ")" << endl;
    return rc;
}

rc = client->connect(data);
if (rc != MQTT::SUCCESS) {
    cout << "Failed [2] (result " << rc << ")" << endl;
    ipstack->disconnect();
    return rc;
}
As hashed out in the comments:
It looks like you are setting the flag to indicate you want a Last Will and Testament for the client (data.willFlag = 1;), but then not passing any topic or payload for the LWT.
If you don't need the LWT, set the flag to 0 (or remove the line setting the flag), as it should default to disabled.
Also worth pointing out for clarity: this is all with the Paho Embedded MQTTPacket client, not the full-blown Paho C++ client.

Memory leak with multi threaded c++ application on ssl certificate validation

We are using the OpenSSL library in our multi-threaded C++ application. The application eats up all the memory within 3 days (on a 7 GB instance) due to a memory leak, but only if SSL certificate validation is enabled.
Please find my application flow here:
On app launch, we create 150 threads for syncing 30k users' data, reserving one SSL_CTX object per thread. The same object is reused until the process is killed: the SSL_CTX is created only once on thread initialisation and reused for all subsequent SSL connections.
We do the following when processing a user's data:
A thread creates a new socket connection to the third-party server via SSL and, once the required data has been fetched from the third-party server, the SSL connection is terminated.
All threads similarly pick up one user at a time from the queue and fetch that user's data from the server.
We have to do the above connect, fetch and disconnect for all 30k users.
Please find pseudocode example of our application:
ThreadTask()
{
    ssldata* ssl_data;
    1. Create SSL connection: user_ssl_new_connect(ssl_data)
    2. Fetch user's data
    3. Terminate SSL connection: ssl_abort()
}

char* user_ssl_new_connect(ssldata* ssl_data) {
    SSL_CTX *ssl_context = InitsslonePerThread();

    if (!ssl_context) {
        if (!(ssl_context = SSL_CTX_new(SSLv23_client_method()))) {
            return NULL;
        }
    }

    SSL_CTX_set_options(ssl_context, SSL_OP_NO_COMPRESSION|SSL_MODE_RELEASE_BUFFERS);
    SSL_CTX_set_verify(ssl_context, SSL_VERIFY_PEER, ssl_open_verify);
    SSL_CTX_set_default_verify_paths(ssl_context);

    char *s = "sslpath";
    SSL_CTX_load_verify_locations(ssl_context, s, NIL);
    SetsslconnectionPerThread(ssl_context);

    if (!(ssl_data->sslconnection = (SSL *) SSL_new(ssl_context)))
        return NULL;

    bio = BIO_new_socket(ssl_data->sockettcpsi, BIO_NOCLOSE);
    SSL_set_bio(ssl_data->sslconnection, bio, bio);
    SSL_set_connect_state(ssl_data->sslconnection);
    if (SSL_in_init(ssl_data->sslconnection)) SSL_total_renegotiations(ssl_data->sslconnection);

    /* now negotiate SSL */
    if ((retval = SSL_write(ssl_data->sslconnection, "", 0)) < 0) {
        return NULL;
    }

    /* validate host names */
    if ((err = ssl_validate_cert(cert = SSL_get_peer_certificate(ssl_data->sslconnection), host))) {
        return NULL;
    }
}

/* one ssl_context per thread, kept in a global variable */
ssl_context* InitsslonePerThread() {
    unsigned long threadid = (unsigned long) pthread_self();
    if ssl_context is not created for this threadid
        return a new ssl_context.
    else
        return the previous ssl_context.
}

void SetsslconnectionPerThread(ssl_context*) {
    unsigned long threadid = (unsigned long) pthread_self();
    /* set ssl_context in a global variable */
}

long ssl_abort(ssldata* ssl_data)
{
    if (ssl_data->sslconnection) { /* close SSL connection */
        SSL_shutdown(ssl_data->sslconnection);
        SSL_free(ssl_data->sslconnection);
    }
    return NIL;
}
static char *ssl_validate_cert (X509 *cert, char *host)
{
    int i, n;
    char *s, *t, *ret;
    void *ext;
    GENERAL_NAME *name;
    char tmp[MAILTMPLEN];

    /* make sure we have a certificate */
    if (!cert) ret = "No certificate from server";
    /* and that it has a name */
    else if (!cert->name) ret = "No name in certificate";
    /* locate CN */
    else if (s = strstr (cert->name, "/CN=")) {
        if (t = strchr (s += 4, '/')) *t = '\0';
        /* host name matches pattern? */
        ret = ssl_compare_hostnames (host, s) ? NIL :
            "Server name does not match certificate";
        if (t) *t = '/';  /* restore smashed delimiter */
        /* if mismatch, see if in extensions */
        if (ret && (ext = X509_get_ext_d2i (cert, NID_subject_alt_name, NIL, NIL)) &&
            (n = sk_GENERAL_NAME_num (ext)))
            /* older versions of OpenSSL use "ia5" instead of dNSName */
            for (i = 0; ret && (i < n); i++)
                if ((name = sk_GENERAL_NAME_value (ext, i)) &&
                    (name->type == GEN_DNS) && (s = name->d.ia5->data) &&
                    ssl_compare_hostnames (host, s)) ret = NIL;
    } else ret = "Unable to locate common name in certificate";

    return ret;
}
if ((err = ssl_validate_cert (cert = SSL_get_peer_certificate (sslconnection),host))) {
return NULL;
}
You must free the X509* returned from SSL_get_peer_certificate using X509_free. There may be more leaks, but that one is consistent with the description of your issue.
That memory leak is also why OpenSSL's TLS Client example frees the X509* immediately. It's reference counted, so it's safe to decrement the count and keep using the X509* until the session (the SSL*) is destroyed.
X509* cert = SSL_get_peer_certificate(ssl);
if(cert) { X509_free(cert); } /* Free immediately */
if(NULL == cert) handleFailure();
...
You may also be leaking some of the names returned from the name walk. I don't use IA5 strings, so I'm not certain. I use UTF8 strings, and they must be freed.
Here are some unrelated comments... You should probably include SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3:
SSL_CTX_set_options (ssl_context,SSL_OP_NO_COMPRESSION|SSL_MODE_RELEASE_BUFFERS);
You should also probably specify SSL_set_tlsext_host_name somewhere to use SNI for hosted environments, where the default site certificate may not be the target site's certificate. SNI is a TLS extension, so it speaks to the need for SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3.
You should also test the application well when using SSL_MODE_RELEASE_BUFFERS. I seem to recall it caused memory errors. Also see Issue 2167: OpenSSL fails if used from multiple threads and with SSL_MODE_RELEASE_BUFFERS, CVE-2010-5298, and Adam Langley's Overclocking SSL.
The sample program provided on the wiki, TLS Client, may also help you with your name matching. As best I can tell, the code is vulnerable to Marlinspike's embedded-NULL tricks. See his Black Hat talk, More Tricks For Defeating SSL In Practice, for more details.
This is just an observation: since you are using C++, why not use smart pointers to manage your resources? Something like the following works very well in practice, and it would have fixed the leak, because X509_ptr has its destructor function specified:
X509_ptr cert(SSL_get_peer_certificate(ssl));
// Use cert, its free'd automatically
char* name = ssl_validate_cert(cert.get(), "www.example.com");
And:
using EC_KEY_ptr = std::unique_ptr<EC_KEY, decltype(&::EC_KEY_free)>;
using EC_GROUP_ptr = std::unique_ptr<EC_GROUP, decltype(&::EC_GROUP_free)>;
using EC_POINT_ptr = std::unique_ptr<EC_POINT, decltype(&::EC_POINT_free)>;
using DH_ptr = std::unique_ptr<DH, decltype(&::DH_free)>;
using RSA_ptr = std::unique_ptr<RSA, decltype(&::RSA_free)>;
using DSA_ptr = std::unique_ptr<DSA, decltype(&::DSA_free)>;
using EVP_PKEY_ptr = std::unique_ptr<EVP_PKEY, decltype(&::EVP_PKEY_free)>;
using BN_ptr = std::unique_ptr<BIGNUM, decltype(&::BN_free)>;
using FILE_ptr = std::unique_ptr<FILE, decltype(&::fclose)>;
using BIO_MEM_ptr = std::unique_ptr<BIO, decltype(&::BIO_free)>;
using BIO_FILE_ptr = std::unique_ptr<BIO, decltype(&::BIO_free)>;
using EVP_MD_CTX_ptr = std::unique_ptr<EVP_MD_CTX, decltype(&::EVP_MD_CTX_destroy)>;
using X509_ptr = std::unique_ptr<X509, decltype(&::X509_free)>;
using ASN1_INTEGER_ptr = std::unique_ptr<ASN1_INTEGER, decltype(&::ASN1_INTEGER_free)>;
using ASN1_TIME_ptr = std::unique_ptr<ASN1_TIME, decltype(&::ASN1_TIME_free)>;
using X509_EXTENSION_ptr = std::unique_ptr<X509_EXTENSION, decltype(&::X509_EXTENSION_free)>;
using X509_NAME_ptr = std::unique_ptr<X509_NAME, decltype(&::X509_NAME_free)>;
using X509_NAME_ENTRY_ptr = std::unique_ptr<X509_NAME_ENTRY, decltype(&::X509_NAME_ENTRY_free)>;
using X509_STORE_ptr = std::unique_ptr<X509_STORE, decltype(&::X509_STORE_free)>;
using X509_LOOKUP_ptr = std::unique_ptr<X509_LOOKUP, decltype(&::X509_LOOKUP_free)>;
using X509_STORE_CTX_ptr = std::unique_ptr<X509_STORE_CTX, decltype(&::X509_STORE_CTX_free)>;

How do I make an in-place modification on an array using grpc and google protocol buffers?

I'm having a problem with a const request when using Google protocol buffers with gRPC. Here is my problem:
I would like to make an in-place modification of an array's values. To that end, I wrote this simple example where I try to pass an array and sum all of its contents. Here's my code:
adder.proto:
syntax = "proto3";

option java_package = "io.grpc.examples";

package adder;

// The adder service definition.
service Adder {
    // Sums the values in the request.
    rpc Add (AdderRequest) returns (AdderReply) {}
}

// The request message containing the values to add.
message AdderRequest {
    repeated int32 values = 1;
}

// The response message containing the sum.
message AdderReply {
    int32 sum = 1;
}
server.cc:
//
// Created by Eric Reis on 7/6/16.
//
#include <iostream>
#include <grpc++/grpc++.h>

#include "adder.grpc.pb.h"

class AdderImpl final : public adder::Adder::Service
{
public:
    grpc::Status Add(grpc::ServerContext* context, const adder::AdderRequest* request,
                     adder::AdderReply* reply) override
    {
        int sum = 0;

        for(int i = 0, sz = request->values_size(); i < sz; i++)
        {
            request->set_values(i, 10); // -> this gives an error caused by the const declaration
                                        //    of the request variable:
                                        //    "Non-const function 'set_values' is called on the const object"
            sum += request->values(i);  // -> this works fine
        }

        reply->set_sum(sum);
        return grpc::Status::OK;
    }
};

void RunServer()
{
    std::string server_address("0.0.0.0:50051");
    AdderImpl service;

    grpc::ServerBuilder builder;

    // Listen on the given address without any authentication mechanism.
    builder.AddListeningPort(server_address, grpc::InsecureServerCredentials());

    // Register "service" as the instance through which we'll communicate with
    // clients. In this case it corresponds to a *synchronous* service.
    builder.RegisterService(&service);

    // Finally assemble the server.
    std::unique_ptr<grpc::Server> server(builder.BuildAndStart());
    std::cout << "Server listening on " << server_address << std::endl;

    // Wait for the server to shut down. Note that some other thread must be
    // responsible for shutting down the server for this call to ever return.
    server->Wait();
}

int main(int argc, char** argv)
{
    RunServer();
    return 0;
}
client.cc:
//
// Created by Eric Reis on 7/6/16.
//
#include <iostream>
#include <grpc++/grpc++.h>

#include "adder.grpc.pb.h"

class AdderClient
{
public:
    AdderClient(std::shared_ptr<grpc::Channel> channel) : stub_(adder::Adder::NewStub(channel)) {}

    int Add(int* values, int sz) {
        // Data we are sending to the server.
        adder::AdderRequest request;
        for (int i = 0; i < sz; i++)
        {
            request.add_values(values[i]);
        }

        // Container for the data we expect from the server.
        adder::AdderReply reply;

        // Context for the client. It could be used to convey extra information to
        // the server and/or tweak certain RPC behaviors.
        grpc::ClientContext context;

        // The actual RPC.
        grpc::Status status = stub_->Add(&context, request, &reply);

        // Act upon its status.
        if (status.ok())
        {
            return reply.sum();
        }
        else {
            std::cout << "RPC failed" << std::endl;
            return -1;
        }
    }

private:
    std::unique_ptr<adder::Adder::Stub> stub_;
};

int main(int argc, char** argv) {
    // Instantiate the client. It requires a channel, out of which the actual RPCs
    // are created. This channel models a connection to an endpoint (in this case,
    // localhost at port 50051). We indicate that the channel isn't authenticated
    // (use of InsecureChannelCredentials()).
    AdderClient adder(grpc::CreateChannel("localhost:50051",
                                          grpc::InsecureChannelCredentials()));
    int values[] = {1, 2};
    int sum = adder.Add(values, 2);

    std::cout << "Adder received: " << sum << std::endl;
    return 0;
}
The error happens when I try to call set_values() on the request object, which is declared const. I understand why the error occurs, but I can't figure out a way to overcome it without making a copy of the array.
I tried removing the const qualifier, but the RPC call fails when I do that.
Since I'm new to this RPC world, and even more so to gRPC and Google protocol buffers, I'd like to ask for your help. What is the best way to solve this problem?
Please see my answer here. The server receives a copy of the AdderRequest sent by the client. If you were to modify it, the client's original AdderRequest would not be modified. If by "in place" you mean the server modifies the client's original memory, no RPC technology can truly accomplish that, because the client and server run in separate address spaces (processes), even on different machines.
If you truly need the server to modify the client's memory:
Ensure the server and client run on the same machine.
Use OS-specific shared-memory APIs such as shm_open() and mmap() to map the same chunk of physical memory into the address spaces of both the client and the server.
Use RPC to transmit the identifier (name) of the shared memory (not the actual data in the memory) and to invoke the server's processing.
When both client and server have opened and mapped the memory, they both have pointers (likely with different values in the different address spaces) to the same physical memory, so the server will be able to read what the client writes there (with no copying or transmitting) and vice versa.