I am creating a static library out of my gRPC C++ client, and I can successfully call the API in the gRPC static library from a test application.
But when I integrate the static library with a different service and call the API in the gRPC static lib from that service, it fails with the errors below:
Handshaker factory creation failed with TSI_INVALID_ARGUMENT.
Failed to create secure subchannel for secure name 'xx.xx.xx.xx:xx'
Failed to create channel args during subchannel creation.
On the same VM where I see the above error, if I copy and run the test application that calls the gRPC client, it works fine.
Here is the client code, based on https://www.programmersought.com/article/7290364277/:
#include <grpcpp/grpcpp.h>

int main(int argc, char** argv) {
    grpc::SslCredentialsOptions ssl_options;
    ssl_options.pem_root_certs = SERVER_CRT;
    // Create an SSL ChannelCredentials object from the pinned root cert.
    auto channel_creds = grpc::SslCredentials(ssl_options);
    grpc::ChannelArguments cargs;
    // Override the expected server name; not needed if the target is
    // reachable by the DNS name on the certificate.
    cargs.SetSslTargetNameOverride("xxx.xxx.com");
    // Create a channel using the credentials created in the previous step.
    auto channel = grpc::CreateCustomChannel("1.2.3.4:8000", channel_creds, cargs);
    // Instantiate the client.
    MailClient tester(channel);
    return 0;
}
where SERVER_CRT holds the contents of server.crt:
const char SERVER_CRT[] = R"(
-----BEGIN CERTIFICATE-----
TjERMA8GA1UECAwIU2hhbmdoYWkxEjAQBgNVBAcMCVNvbmdqaWFuZzEPMA0GA1UE
...
E6v50RCQgtWGmna+oy1I2UTVABdjBFnyKPEuz106mBfOhT6cg80hBHVgrV7sLHq8
76QolJm8yzZPL1qpiO4dKHHsCP6R
-----END CERTIFICATE-----
)";
Is it perhaps some issue with the way I have provided the cert?
Why does the RPC call in the gRPC client work from the test application but not from a different service on the same VM?
Any suggestions appreciated.
The application that I was trying to integrate with was using libssl 1.0.2, which doesn't support TLS 1.3, but gRPC 1.35 by default uses TLS 1.3 and OpenSSL 1.1.1. So I built gRPC with gRPC_SSL_PROVIDER=package so that it picked up libssl 1.0.2, and that fixed the issue. Hope this helps anyone.
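For reference, a hedged sketch of that kind of build invocation (the OpenSSL path is a placeholder; the key switch is gRPC_SSL_PROVIDER=package, which tells gRPC's CMake build to link against the system OpenSSL instead of the bundled one):

cmake .. -DgRPC_SSL_PROVIDER=package -DOPENSSL_ROOT_DIR=/path/to/openssl-1.0.2
make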
Related
1. I am trying to achieve similar functionality in the gRPC C++ client as in the Java client: trusting its own keystore (P12 file).
2. I didn't find any good documentation related to it. I am new to the use of SSLContext. Any pointers will be really helpful.
3. It can be easily achieved in Java using the trust manager as in the code below:
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.TrustManagerFactory;
import io.grpc.netty.GrpcSslContexts;
import io.netty.handler.ssl.SslContextBuilder;

KeyStore keyStore = KeyStore.getInstance("JKS");
try (InputStream keyStream = new FileInputStream(keystorePATH)) {
    keyStore.load(keyStream, password.toCharArray());
}
KeyManagerFactory kmf =
    KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
kmf.init(keyStore, password.toCharArray());
TrustManagerFactory tmf = TrustManagerFactory.getInstance("X509");
tmf.init(keyStore);
SslContextBuilder ctxBuilder =
    SslContextBuilder.forClient().keyManager(kmf).trustManager(tmf);
return GrpcSslContexts.configure(ctxBuilder).build();
Here we are using the trust manager with our own JKS keystore instance. That way we are not sending explicitly the certificate which needs to be trusted. Can something similar be achieved for the gRPC C++ client programmatically?
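For what it's worth, a minimal sketch of the closest C++ equivalent, assuming the trusted certificates have been exported from the keystore to a PEM file (the roots.pem path and the server address are placeholder assumptions):

#include <fstream>
#include <iterator>
#include <string>
#include <grpcpp/grpcpp.h>

int main() {
    // gRPC C++ takes trusted roots as PEM text rather than a JKS/P12
    // keystore, so the trusted certs must first be exported to PEM.
    std::ifstream in("roots.pem");
    std::string roots((std::istreambuf_iterator<char>(in)),
                      std::istreambuf_iterator<char>());
    grpc::SslCredentialsOptions ssl_options;
    ssl_options.pem_root_certs = roots;
    auto creds = grpc::SslCredentials(ssl_options);
    auto channel = grpc::CreateChannel("myserver.example.com:443", creds);
    return 0;
}

Unlike the Java TrustManager approach, the certificates are still passed explicitly here; gRPC C++ has no built-in keystore abstraction, so reading the exported PEM at runtime is the usual workaround.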
I am trying to create a Lambda S3 listener, leveraging Lambda as a native image. The point is to get the S3 event and then do some work by pulling the file, etc. To get the file I am using the AWS 2.x S3 client as below:
S3Client.builder().build();
This code results in:
2020-03-12 19:45:06,205 ERROR [io.qua.ama.lam.run.AmazonLambdaRecorder] (Lambda Thread) Failed to run lambda: software.amazon.awssdk.core.exception.SdkClientException: Unable to load an HTTP implementation from any provider in the chain. You must declare a dependency on an appropriate HTTP implementation or pass in an SdkHttpClient explicitly to the client builder.
To resolve this I added the AWS Apache client dependency and updated the code to the following:
SdkHttpClient httpClient = ApacheHttpClient.builder()
        .maxConnections(50)
        .build();
S3Client.builder().httpClient(httpClient).build();
I also had to add the following dynamic proxy configuration for the native image:
[
  ["org.apache.http.conn.HttpClientConnectionManager",
   "org.apache.http.pool.ConnPoolControl",
   "software.amazon.awssdk.http.apache.internal.conn.Wrapped"]
]
After this I am now getting the following stack trace:
Caused by: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
at java.security.cert.PKIXParameters.setTrustAnchors(PKIXParameters.java:200)
at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:120)
at java.security.cert.PKIXBuilderParameters.<init>(PKIXBuilderParameters.java:104)
at sun.security.validator.PKIXValidator.<init>(PKIXValidator.java:86)
... 76 more
I am running version 1.2.0 of Quarkus on GraalVM 19.3.1. I am building this via Maven and the Docker container provided for Quarkus. I thought the trust store was added by default (in the build command it looks to be present), but am I missing something? Is there another way to get this to run without setting the HTTP client on the S3Client?
There is a PR, under review at the moment, that introduces an AWS S3 extension, both JVM & native. The AWS clients are fully Quarkified, meaning they are configured via application.properties and enabled for dependency injection. So stay tuned, as it will most probably be available in Quarkus 1.5.0.
I want to set up my local server to communicate with my client. They establish a TLS connection using OpenSSL. I am trying to implement two-way (mutual) authentication: the server verifies the client, and the client also verifies the server.
When I use certificates generated by myself, everything works fine. The code below is the C++ code on the client side. I set up the client cert, private key and intermediate cert. On the server side I saved a CA cert.
The relationship is: the CA signs the intermediate cert, and the intermediate cert signs the client cert.
As we know, the reason we need to provide the client private key is that the client signs a "challenge" and sends it to the server. The server gets the client public key from the certificate chain and uses it to verify the signed challenge. You can see this link for the detailed process:
https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake
However, in my scenario I have no permission to access the private key. I only have an API to call, which takes the digest (or whatever we want signed) as input and returns a string signed with the client private key.
Therefore I'm not able to pass any "ClientPrivateKeyFileTest" to TLS.
I searched the OpenSSL source code, but all handshake steps are done inside SSL_do_handshake(), and I'm not allowed to modify this function.
// load client-side cert and key
SSL_CTX_use_certificate_file(m_ctx, ClientCertificateFileTest, SSL_FILETYPE_PEM);
SSL_CTX_use_PrivateKey_file(m_ctx, ClientPrivateKeyFileTest, SSL_FILETYPE_PEM);
// load intermediate cert
X509* chaincert = X509_new();
BIO* bio_cert = BIO_new_file(SignerCertificateFileTest, "rb");
PEM_read_bio_X509(bio_cert, &chaincert, NULL, NULL);
SSL_CTX_add1_chain_cert(m_ctx, chaincert);
m_ssl = SSL_new(m_ctx);
// get_socket() is my own API
m_sock = get_socket();
SSL_set_fd(m_ssl, m_sock);
// do the handshake and build the connection
auto r = SSL_connect(m_ssl);
I think all the handshake processing is done once I call SSL_connect(). So I wonder, is there another way to complete the client authentication?
For example, could I skip the step of adding the private key, and instead set up a callback function somewhere that handles every case where SSL needs the private key to compute something?
PS: The API is a black box on the client machine.
One more thing: these days I found that an OpenSSL engine may help with this problem. But does anybody know what kind of engine is useful here? The EC sign, verification, or others?
Final update: I implemented an OpenSSL engine to override EC_KEY_METHOD so that I can use my own sign function.
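For anyone hitting the same problem, here is a rough sketch of that approach under OpenSSL 1.1.x. call_external_signer() is a stand-in for the black-box signing API, and the engine id/name are made up; treat this as an outline rather than a drop-in implementation:

#include <openssl/ec.h>
#include <openssl/engine.h>

// Stand-in for the black-box API that signs a digest with the client key.
extern int call_external_signer(const unsigned char* dgst, int dlen,
                                unsigned char* sig, unsigned int* siglen);

// Replacement sign function: forward the handshake digest to the external
// signer instead of using a locally loaded private key.
static int my_ec_sign(int type, const unsigned char* dgst, int dlen,
                      unsigned char* sig, unsigned int* siglen,
                      const BIGNUM* kinv, const BIGNUM* r, EC_KEY* eckey) {
    return call_external_signer(dgst, dlen, sig, siglen);
}

ENGINE* create_external_sign_engine() {
    // Start from the default EC_KEY_METHOD so keygen/verify keep working.
    EC_KEY_METHOD* meth = EC_KEY_METHOD_new(EC_KEY_get_default_method());
    int (*sign_setup)(EC_KEY*, BN_CTX*, BIGNUM**, BIGNUM**) = NULL;
    ECDSA_SIG* (*sign_sig)(const unsigned char*, int, const BIGNUM*,
                           const BIGNUM*, EC_KEY*) = NULL;
    EC_KEY_METHOD_get_sign(meth, NULL, &sign_setup, &sign_sig);
    EC_KEY_METHOD_set_sign(meth, my_ec_sign, sign_setup, sign_sig);

    ENGINE* e = ENGINE_new();
    ENGINE_set_id(e, "extsign");
    ENGINE_set_name(e, "external EC signing engine");
    ENGINE_set_EC(e, meth);
    ENGINE_set_default_EC(e);  // make it the default for EC operations
    return e;
}

The client still needs its certificate and an EC key object carrying the public key so that OpenSSL knows the key type and size; only the private-key operation is diverted to the external API.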
Thanks a lot!
I've been using the latest code in 4.1.0-BUILD-SNAPSHOT as I need some of the new bug fixes in the 4.1 branch and just noticed that "neo4jServer()" is no longer a method exposed by Neo4jConfiguration. What is the new way to initialize a server connection and an in-memory version for unit tests? Before I was using "RemoteServer" and "InProcessServer", respectively.
Please note, the official documentation will be updated shortly.
In the meantime:
What's changed
SDN 4.1 uses the new Neo4j OGM 2.0 libraries. OGM 2.0 introduces API changes, largely due to the addition of support for Embedded as well as Remote Neo4j. Consequently, connection to a production database is now accomplished using an appropriate Driver, rather than using the RemoteServer or the InProcessServer which are deprecated.
For testing, we recommend using the EmbeddedDriver. It is still possible to create an in-memory test server, but that is not covered in this answer.
Available Drivers
The following Driver implementations are currently provided:
http : org.neo4j.ogm.drivers.http.driver.HttpDriver
embedded : org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
A driver implementation for the Bolt protocol (Neo4j 3.0) will be available soon.
Configuring a driver
There are two ways to configure a driver - using a properties file or via Java configuration. Variations on these themes exist (particularly for passing credentials), but for now the following should get you going:
Configuring the Http Driver
The Http Driver connects to and communicates with a Neo4j server over Http. An Http Driver must be used if your application is running in client-server mode. Please note the Http Driver will attempt to connect to a server running in a separate process. It can't be used for spinning up an in-process server.
Properties file configuration:
The advantage of using a properties file is that it requires no changes to your Spring configuration.
Create a file called ogm.properties somewhere on your classpath. It should contain the following entries:
driver=org.neo4j.ogm.drivers.http.driver.HttpDriver
URI=http://user:password@localhost:7474
Java configuration:
The simplest way to configure the Driver is to create a Configuration bean and pass it as the first argument to the SessionFactory constructor in your Spring configuration:
import org.neo4j.ogm.config.Configuration;
...
@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.http.driver.HttpDriver")
        .setURI("http://user:password@localhost:7474");
    return config;
}
@Bean
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), <packages> );
}
Configuring the Embedded Driver
The Embedded Driver connects directly to the Neo4j database engine. There is no server involved, therefore no network overhead between your application code and the database. You should use the Embedded driver if you don't want to use a client-server model, or if your application is running as a Neo4j Unmanaged Extension.
You can specify a permanent data store location to provide durability of your data after your application shuts down, or you can use an impermanent data store, which will only exist while your application is running (ideal for testing).
Properties file configuration (permanent data store)
Create a file called ogm.properties somewhere on your classpath. It should contain the following entries:
driver=org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
URI=file:///var/tmp/graph.db
Properties file configuration (impermanent data store)
driver=org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
To use an impermanent data store, simply omit the URI property.
Java Configuration
The same technique is used for configuring the Embedded driver as for the Http Driver. Set up a Configuration bean and pass it as the first argument to the SessionFactory constructor:
import org.neo4j.ogm.config.Configuration;
...
@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver")
        .setURI("file:///var/tmp/graph.db");
    return config;
}
@Bean
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), <packages> );
}
If you want to use an impermanent data store (e.g. for testing) do not set the URI attribute on the Configuration:
@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver");
    return config;
}
Are there any examples of a gRPC server using TLS in C++?
I am trying to build a gRPC application. The server should provide TLS support if a client wants to connect over TLS instead of plain TCP.
This is my server:
void RunServer() {
  std::string server_address("0.0.0.0:50051");
  GreeterServiceImpl service;
  ServerBuilder builder;
  std::shared_ptr<ServerCredentials> creds;
  if (enable_ssl) {
    // "a" and "b" are placeholders for the PEM private key and cert chain.
    grpc::SslServerCredentialsOptions::PemKeyCertPair pkcp = {"a", "b"};
    grpc::SslServerCredentialsOptions ssl_opts;
    ssl_opts.pem_root_certs = "";
    ssl_opts.pem_key_cert_pairs.push_back(pkcp);
    creds = grpc::SslServerCredentials(ssl_opts);
  } else {
    creds = grpc::InsecureServerCredentials();
  }
  // Listen on the given address with the selected credentials.
  builder.AddListeningPort(server_address, creds);
  // Register "service" as the instance through which we'll communicate with
  // clients. In this case it corresponds to a *synchronous* service.
  builder.RegisterService(&service);
  // Finally assemble the server.
  std::unique_ptr<Server> server(builder.BuildAndStart());
  server->Wait();
}
Error:
undefined reference to grpc::SslServerCredentials(grpc::SslServerCredentialsOptions const&)
I have included all the necessary files.
Your code looks right. If you are adapting from examples/cpp/helloworld, you need to change -lgrpc++_unsecure to -lgrpc++ in the Makefile.
For the benefit of others, an example of using the TLS/SSL code can be found at https://github.com/grpc/grpc/blob/master/test/cpp/interop/server_helper.cc#L50
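As a concrete illustration, here is a hedged sketch of filling in the PemKeyCertPair from PEM files on disk instead of the "a"/"b" placeholders in the question (the server.key/server.crt file names and the read_file helper are assumptions, not taken from the linked example):

#include <fstream>
#include <sstream>
#include <string>
#include <grpcpp/grpcpp.h>

// Hypothetical helper: slurp a whole PEM file into a string.
static std::string read_file(const std::string& path) {
    std::ifstream in(path);
    std::stringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

std::shared_ptr<grpc::ServerCredentials> MakeTlsServerCredentials() {
    grpc::SslServerCredentialsOptions::PemKeyCertPair pkcp = {
        read_file("server.key"),  // private key, PEM text
        read_file("server.crt")   // certificate chain, PEM text
    };
    grpc::SslServerCredentialsOptions ssl_opts;
    ssl_opts.pem_root_certs = "";  // set only if you verify client certs
    ssl_opts.pem_key_cert_pairs.push_back(pkcp);
    return grpc::SslServerCredentials(ssl_opts);
}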