I'm using the Pion network library (a wrapper around boost::asio) to write an HTTP(S) server. The server needs to support both HTTP and HTTPS. The HTTP part works:
#include "pion/http/server.hpp"
#include "pion/http/response_writer.hpp"
using namespace pion;
using namespace pion::http;
struct fake_server {
void start() {
m_server = pion::http::server_ptr(new pion::http::server(80));
m_server->add_resource("/", boost::bind(&fake_server::handle_request, this, _1, _2));
m_server->start();
}
void handle_request(http::request_ptr& _httpRequest, tcp::connection_ptr& _tcpConn) {
http::response_writer_ptr writer(
http::response_writer::create(
_tcpConn,
*_httpRequest,
boost::bind(&tcp::connection::finish, _tcpConn)));
http::response& r = writer->get_response();
writer->write("hello world");
writer->send();
}
pion::http::server_ptr m_server;
};
int main() {
fake_server svr;
svr.start();
while(1) {
Sleep(0);
}
}
But I don't know how to handle HTTPS. I tried setting the port to 443 and enabling the SSL flag:
void start() {
    m_server = pion::http::server_ptr(new pion::http::server(443)); // HTTPS port
    m_server->set_ssl_flag(true); // ssl flag
    m_server->add_resource("/", boost::bind(&fake_server::handle_request, this, _1, _2));
    m_server->start();
}
It doesn't work: I get a "no shared cipher" error. I googled for this error and found solutions that use openssl to generate a certificate/key pair and then load it on both the server and the client, but my client is a web browser, and the browser won't use these generated certificates.
Any idea?
Thanks.
You need to provide an SSL certificate and key the server will use to negotiate the secure connection. This would be done with:
m_server->set_ssl_key_file(pem_filename);
where pem_filename is the name of a PEM formatted file containing both an SSL certificate and key. The key must not be encrypted. There are numerous internet tutorials that tell you how to create a self-signed certificate if you don't already have one from a trusted certificate authority. If you have a key and certificate in separate files then simply concatenate them into a single file.
No prior certificate/key configuration is necessary on the client side (in this case), but note that using a self-signed certificate (or any certificate not signed by a trusted certificate authority) will generate a security warning on most web browsers.
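Applied to the start() method from the question, a minimal sketch would look like this (the file name "server.pem" is a placeholder for your own concatenated certificate/key file):

void start() {
    m_server = pion::http::server_ptr(new pion::http::server(443));
    m_server->set_ssl_flag(true);              // serve HTTPS on this port
    m_server->set_ssl_key_file("server.pem");  // PEM file containing the certificate and the unencrypted key
    m_server->add_resource("/", boost::bind(&fake_server::handle_request, this, _1, _2));
    m_server->start();
}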
Related
I want to set up my local server to communicate with my client. They establish a TLS connection using OpenSSL, and I'm trying to implement mutual authentication: the server verifies the client and the client also verifies the server.
When I use certificates I generated myself, everything works fine. The code below is the C++ client code; I set up the client cert, private key, and intermediate cert. On the server side I saved a CA cert.
The relationship is: CA signs intermediate cert, intermediate cert signs client cert.
As we know, the reason we need to provide the client private key is that the client signs a "challenge" and sends it to the server. The server gets the client public key from the certificate chain and uses it to verify the signed "challenge" and check that it matches. See this link for the detailed process:
https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake
However, in my scenario I have no permission to access the private key. I only have an API to call, which takes the digest (or anything we want to sign) as input and returns a string signed with the client private key.
Therefore I'm not able to pass any "ClientPrivateKeyFileTest" to TLS.
I searched the OpenSSL source code, but the whole handshake is done inside SSL_do_handshake(), and I'm not allowed to modify that function.
// load client-side cert and key
SSL_CTX_use_certificate_file(m_ctx, ClientCertificateFileTest, SSL_FILETYPE_PEM);
SSL_CTX_use_PrivateKey_file(m_ctx, ClientPrivateKeyFileTest, SSL_FILETYPE_PEM);

// load intermediate cert
X509* chaincert = X509_new();
BIO* bio_cert = BIO_new_file(SignerCertificateFileTest, "rb");
PEM_read_bio_X509(bio_cert, &chaincert, NULL, NULL);
SSL_CTX_add1_chain_cert(m_ctx, chaincert);

m_ssl = SSL_new(m_ctx);

// get_socket is my own API
m_sock = get_socket();
SSL_set_fd(m_ssl, m_sock);

// do the handshake and build the connection
auto r = SSL_connect(m_ssl);
I think the whole handshake is done once I call SSL_connect(). So I wonder: is there another way to complete the client authentication?
For example, could I skip the step that adds the private key and instead set up a callback somewhere that handles every case where SSL needs the private key to compute something?
PS: The API is a black box in the client machine.
One more thing: I have since found that an OpenSSL engine may help with this problem. But does anybody know what kind of engine is useful here? EC sign, verification, or something else?
Final update: I implemented an OpenSSL engine that overrides EC_KEY_METHOD so that I'm able to use my own sign function.
Thanks a lot!
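For reference, here is a rough outline of the EC_KEY_METHOD override described in the final update. This is only a sketch against the OpenSSL 1.1.0+ API: external_sign stands in for the black-box signing API on the client machine (it is not a real OpenSSL function), and it is assumed to return a DER-encoded ECDSA signature.

#include <openssl/ec.h>
#include <openssl/ecdsa.h>
#include <openssl/bn.h>

// Hypothetical black-box API: signs a digest with the client private key and
// writes a DER-encoded ECDSA signature into sig / *siglen.
extern int external_sign(const unsigned char* dgst, int dlen,
                         unsigned char* sig, unsigned int* siglen);

// Custom sign callback: forward the digest to the external signer instead of
// using a locally stored private key.
static int external_ec_sign(int /*type*/, const unsigned char* dgst, int dlen,
                            unsigned char* sig, unsigned int* siglen,
                            const BIGNUM* /*kinv*/, const BIGNUM* /*r*/,
                            EC_KEY* /*eckey*/)
{
    return external_sign(dgst, dlen, sig, siglen);
}

// Build an EC_KEY_METHOD that is the default method with only the top-level
// sign callback replaced.
EC_KEY_METHOD* make_external_sign_method()
{
    EC_KEY_METHOD* meth = EC_KEY_METHOD_new(EC_KEY_get_default_method());

    // Fetch the default callbacks so sign_setup/sign_sig stay intact.
    int (*sign)(int, const unsigned char*, int, unsigned char*, unsigned int*,
                const BIGNUM*, const BIGNUM*, EC_KEY*) = NULL;
    int (*sign_setup)(EC_KEY*, BN_CTX*, BIGNUM**, BIGNUM**) = NULL;
    ECDSA_SIG* (*sign_sig)(const unsigned char*, int, const BIGNUM*,
                           const BIGNUM*, EC_KEY*) = NULL;
    EC_KEY_METHOD_get_sign(meth, &sign, &sign_setup, &sign_sig);
    (void)sign;  // the default sign() is what gets replaced below

    EC_KEY_METHOD_set_sign(meth, external_ec_sign, sign_setup, sign_sig);
    return meth;
}

The method can then be attached to the EC_KEY behind the client certificate with EC_KEY_set_method(), or registered through an engine with ENGINE_set_EC(), which appears to be what the final update did.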
Any examples of a gRPC server using TLS in C++?
I am trying to build a gRPC application. The server should provide TLS support if the client wants to connect over TLS instead of plain TCP.
This is my server:
void RunServer() {
    std::string server_address("0.0.0.0:50051");
    GreeterServiceImpl service;
    ServerBuilder builder;
    std::shared_ptr<ServerCredentials> creds;

    if (enable_ssl) {
        grpc::SslServerCredentialsOptions::PemKeyCertPair pkcp = {"a", "b"};
        grpc::SslServerCredentialsOptions ssl_opts;
        ssl_opts.pem_root_certs = "";
        ssl_opts.pem_key_cert_pairs.push_back(pkcp);
        creds = grpc::SslServerCredentials(ssl_opts);
    } else {
        creds = grpc::InsecureServerCredentials();
    }

    // Listen on the given address with the selected credentials.
    builder.AddListeningPort(server_address, creds);
    // Register "service" as the instance through which we'll communicate with
    // clients. In this case it corresponds to a *synchronous* service.
    builder.RegisterService(&service);
    // Finally assemble the server.
    std::unique_ptr<Server> server(builder.BuildAndStart());
Error:
undefined reference to grpc::SslServerCredetials(grpc::ssl_opts)
I have included all the necessary files.
Your code looks right. If you are adapting from examples/cpp/helloworld, you need to change -lgrpc++_unsecure to -lgrpc++ in the Makefile.
For the benefit of others, an example of using the tls/ssl code can be found at https://github.com/grpc/grpc/blob/master/test/cpp/interop/server_helper.cc#L50
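If it helps, here is a minimal sketch of filling the PemKeyCertPair from PEM files on disk instead of the "a"/"b" placeholders. The file names are assumptions, not something from the question, and the include path is <grpc++/grpc++.h> in older gRPC releases.

#include <fstream>
#include <memory>
#include <sstream>
#include <string>

#include <grpcpp/grpcpp.h>

// Read a whole PEM file into a string.
static std::string ReadFile(const std::string& path) {
    std::ifstream in(path);
    std::stringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

std::shared_ptr<grpc::ServerCredentials> MakeSslCredentials() {
    grpc::SslServerCredentialsOptions ssl_opts;
    grpc::SslServerCredentialsOptions::PemKeyCertPair pkcp;
    pkcp.private_key = ReadFile("server.key");  // server private key (PEM)
    pkcp.cert_chain  = ReadFile("server.crt");  // server certificate chain (PEM)
    ssl_opts.pem_root_certs = "";               // only needed if the server verifies client certificates
    ssl_opts.pem_key_cert_pairs.push_back(pkcp);
    return grpc::SslServerCredentials(ssl_opts);
}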
I'm using QNAM and QNetworkRequest to make a POST request to our server. On most machines it works fine, but on some it fails. All machines are running Windows and connecting to the same Ubuntu server. Between two Windows 7 machines, one works and one fails. Both machines should have the same ssleay32.dll and libeay32.dll (I include them in my installation package). After installing Chrome on the "broken" machine, it can now properly perform the SSL handshake. If I remove all certificates (intermediate and trusted) relating to our CA (Thawte), the SSL handshake fails again.
manager = new QNetworkAccessManager( this );
connect( manager, SIGNAL( finished( QNetworkReply* ) ),
this, SLOT( licenseServerReply( QNetworkReply* ) ) );
connect( manager, SIGNAL(sslErrors(QNetworkReply*,QList<QSslError>)),
this, SLOT( sslErrorOccured(QNetworkReply*,QList<QSslError>))
);
request.setUrl( QUrl( "https://www.myURL.com/postFromMachine/" ) );
postData.append( "computer-name=" );
postData.append( hostInfo.hostName() );
postData.append( "&" );
manager->post( request, postData );
I've connected a slot to the sslErrors() signal of the QNetworkAccessManager (as shown above) and I get the following when the SSL handshake fails:
Debug: "The issuer certificate of a locally looked up certificate could not be found"
Debug: "The root CA certificate is not trusted for this purpose"
In an attempt to fix the missing cert, I added all certificates (intermediate and trusted) from the working machine concerning our root authority "Thawte" and related certificates ("Thawte Consulting", etc.) to a QSslSocket object and passed them to the QNAM via a QSslConfiguration. There were 9 of them in total and it didn't seem to fix the issue. I added the following code before manager->post():
QSslSocket *socket = new QSslSocket( this );
socket->addCaCertificates( ":/Certs/thawte1.cer" );
socket->addCaCertificates( ":/Certs/thawte2.cer" );
// Several more certs ...
socket->addCaCertificates( ":/Certs/thawte9.cer" );
QSslConfiguration conf;
conf.setCaCertificates( socket->caCertificates() );
request.setSslConfiguration( conf );
After discussion on an IRC chat I was pointed to the following command on the server to ensure it has the proper certificate chain:
openssl s_client -verify 5 -CApath /etc/ssl/certs -connect www.myurl.com:443 -showcerts
It replied with [ok] and no errors. The issuer is always the subject of the next cert in the chain, and the chain goes all the way to the root CA. Unless there is more to check with that command, it seems good to me.
I also checked the chain of certificates on the following site, and it seems like everything was good (except possibly a weak certificate for path #2):
https://www.ssllabs.com/ssltest/analyze.html
I am at a loss for what to do. How can I prove that the server has the right certificate chain? And if it does, why don't all machines perform the handshake properly (i.e., what certificates or settings do I need to add/change in my application to get them to work)? I don't want to simply ignore the SSL errors, as I want to be 100% sure that I have an encrypted connection to the proper host.
Thanks in advance for all the help!
I changed socket->addCaCertificates to the static QSslSocket::addDefaultCaCertificates and switched the files from .cer to .pem (I got the .pem files directly from our server). Now everything works great. Note: don't copy your private key from your server; make sure these are the CA public certificates.
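A sketch of what that change looks like; the resource paths mirror the ones from the question, and the .pem files are assumed to hold the CA chain (public certificates only):

#include <QSslSocket>

// Append the CA chain (PEM format) to the application-wide default CA list,
// which QNetworkAccessManager uses unless a request overrides it.
void installCaCertificates()
{
    QSslSocket::addDefaultCaCertificates( ":/Certs/thawte1.pem" );
    QSslSocket::addDefaultCaCertificates( ":/Certs/thawte2.pem" );
    // ... remaining intermediate/root certificates ...
    QSslSocket::addDefaultCaCertificates( ":/Certs/thawte9.pem" );
}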
I am struggling with a client certificate problem and hope somebody here can help me. I'm developing a client/server pair using boost::asio, but I'll try to keep this unspecific. I'm on Windows and using OpenSSL 1.0.1e.
Basically, I want to have client authentication using client certificates. The server shall only accept clients that have a certificate signed by my own CA, so I have set up a self-signed CA. It has issued two more certificates: one for the client and one for the server, both signed by the CA.
I have done that quite a few times now and I am confident that I got it.
My server side also works fine. It requests client certificates, and if I use s_client and supply those certs, everything works. It also works with a browser, once my root CA is installed as trusted and the client certs are imported.
The only thing that I can't get to work is the libssl client. It always fails during the handshake and, as far as I can see, it never sends the client certificate:
$ openssl.exe s_server -servername localhost -bugs -CAfile myca.crt -cert server.crt
-cert2 server.crt -key private/server.key -key2 private/server.key -accept 8887 -www
-state -Verify 5
verify depth is 5, must return a certificate
Setting secondary ctx parameters
Using default temp DH parameters
Using default temp ECDH parameters
ACCEPT
SSL_accept:before/accept initialization
SSL_accept:SSLv3 read client hello A
SSL_accept:SSLv3 write server hello A
SSL_accept:SSLv3 write certificate A
SSL_accept:SSLv3 write key exchange A
SSL_accept:SSLv3 write certificate request A
SSL_accept:SSLv3 flush data
SSL3 alert read:warning:no certificate
SSL3 alert write:fatal:handshake failure
SSL_accept:error in SSLv3 read client certificate B
SSL_accept:error in SSLv3 read client certificate B
2675716:error:140890C7:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:peer did not return a
certificate:s3_srvr.c:3193:
ACCEPT
I'm using this s_server as a debugging tool, but the same thing happens against my real server.
s_client works fine with the same certificates. Also, if I disable "-Verify" on the server, the connection works. So it really seems to be the client refusing to send its certificate. What can be the reason for that?
Since I'm using boost asio as an SSL wrapper the code looks like this:
m_ssl_context.set_verify_mode( asio::ssl::context::verify_peer );
m_ssl_context.load_verify_file( "myca.crt" );
m_ssl_context.use_certificate_file( "testclient.crt", asio::ssl::context::pem );
m_ssl_context.use_private_key_file( "testclient.key", asio::ssl::context::pem );
I have also tried to bypass asio and access the SSL context directly by saying:
SSL_CTX *ctx = m_ssl_context.impl();
SSL *ssl = m_ssl_socket.impl()->ssl;
int res = 0;
res = SSL_CTX_use_certificate_chain_file(ctx, "myca.crt");
if (res <= 0) {
// handle error
}
res = SSL_CTX_use_certificate_file(ctx, "testclient.crt", SSL_FILETYPE_PEM);
if (res <= 0) {
// handle error
}
res = SSL_CTX_use_PrivateKey_file(ctx, "testclient.key", SSL_FILETYPE_PEM);
if (res <= 0) {
// handle error
}
I can't see any difference in behavior. It should be mentioned that I am using a very old boost 1.43 asio which I cannot update but I suppose all relevant calls go more or less directly to OpenSSL anyway and the server works fine with that version so I think I can rule that out.
If I start forcing client and server to specific versions, the error messages change but it never works and still always works with the s_client test. Currently it is set to TLSv1
If I switch it to TLSv1 for example there is more chatter between client and server and eventually I get the error:
...
SSL_accept:SSLv3 read client key exchange A
<<< TLS 1.0 ChangeCipherSpec [length 0001]
01
<<< TLS 1.0 Handshake [length 0010], Finished
14 00 00 0c f4 71 28 4d ab e3 dd f2 46 e8 8b ed
>>> TLS 1.0 Alert [length 0002], fatal unexpected_message
02 0a
SSL3 alert write:fatal:unexpected_message
SSL_accept:failed in SSLv3 read certificate verify B
2675716:error:140880AE:SSL routines:SSL3_GET_CERT_VERIFY:missing verify
message:s3_srvr.c:2951:
2675716:error:140940E5:SSL routines:SSL3_READ_BYTES:ssl handshake failure:s3_pkt.c:989:
ACCEPT
I found an older bug entry posted on the OpenSSL mailing list that referred to this: apparently a wrong CRLF in the handshake, which was fixed two years ago. Or was it?
I have been debugging this for almost a week now and I'm really stuck. Does anyone have a suggestion on what to try? I'm out of ideas...
Cheers,
Stephan
PS: Here is what the above s_server debug output looks like with s_client and the same certificate:
$ openssl s_client -CAfile ca.crt -cert testclient.crt -key private/testclient.key -verify 2 -connect myhost:8887
ACCEPT
SSL_accept:before/accept initialization
SSL_accept:SSLv3 read client hello A
SSL_accept:SSLv3 write server hello A
SSL_accept:SSLv3 write certificate A
SSL_accept:SSLv3 write key exchange A
SSL_accept:SSLv3 write certificate request A
SSL_accept:SSLv3 flush data
depth=1 C = DE, // further info
verify return:1
depth=0 C = DE, // further info
verify return:1
SSL_accept:SSLv3 read client certificate A
SSL_accept:SSLv3 read client key exchange A
SSL_accept:SSLv3 read certificate verify A
SSL_accept:SSLv3 read finished A
SSL_accept:SSLv3 write session ticket A
SSL_accept:SSLv3 write change cipher spec A
SSL_accept:SSLv3 write finished A
SSL_accept:SSLv3 flush data
ACCEPT
... handshake completes and data is transferred.
All right, after much suffering, the answer has been found by Dave Thompson of OpenSSL.
The reason was that my SSL code called all those functions on the OpenSSL context after the socket object (SSL*) had already been created from it, which means all those functions did practically nothing, or the wrong thing.
All I had to do was either:
1. Call SSL_use_certificate_file
res = SSL_use_certificate_file(ssl, "testclient.crt", SSL_FILETYPE_PEM);
if (res <= 0) {
// handle error
}
res = SSL_use_PrivateKey_file(ssl, "testclient.key", SSL_FILETYPE_PEM);
if (res <= 0) {
// handle error
}
(notice the missing CTX)
2. Call the CTX functions
Call the CTX functions on the context before the socket is created. Since asio seemingly encourages creating the context and the socket right after one another (as I did, in the initializer list), the calls were effectively useless.
The SSL context (in plain OpenSSL and in asio alike) encapsulates the SSL settings, and each socket created from it shares its properties.
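In asio terms, the fix is simply to finish configuring the context before constructing the stream. A rough sketch follows; constructor signatures and the names of the verify constants vary between Boost versions (e.g. the Boost 1.43 interface takes an io_service in the context constructor and spells the mode asio::ssl::context::verify_peer), so treat this as an outline rather than a drop-in.

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

namespace asio = boost::asio;

int main()
{
    asio::io_service io;  // io_context in newer Boost releases

    // 1. Configure the context completely first...
    asio::ssl::context ctx(asio::ssl::context::sslv23);
    ctx.set_verify_mode(asio::ssl::verify_peer);
    ctx.load_verify_file("myca.crt");
    ctx.use_certificate_file("testclient.crt", asio::ssl::context::pem);
    ctx.use_private_key_file("testclient.key", asio::ssl::context::pem);

    // 2. ...and only then create the stream, so the SSL* it wraps
    //    inherits the certificate and key that were just loaded.
    asio::ssl::stream<asio::ip::tcp::socket> sock(io, ctx);

    // connect the underlying socket, then
    // sock.handshake(asio::ssl::stream_base::client);
}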
Thank you guys for your suggestions.
You should not use both SSL_CTX_use_certificate_chain_file() and SSL_CTX_use_certificate_file(), as SSL_CTX_use_certificate_chain_file() tries to load a chain including the client certificate, not just the CA chain. From SSL_CTX_use_certificate(3):
SSL_CTX_use_certificate_chain_file() loads a certificate chain from file into ctx. The certificates must be in PEM format and must be sorted starting with the subject's certificate (actual client or server certificate), followed by intermediate CA certificates if applicable, and ending at the highest level (root) CA.
I think you should be fine using only SSL_CTX_use_certificate_file() and SSL_CTX_use_PrivateKey_file(), as the client does not care much for the CA chain anyway.
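A minimal sketch of that suggestion, using the file names from the question; SSL_CTX_check_private_key() is an extra sanity check (my addition) that the loaded key matches the certificate. As the self-answer above explains, this must run before any SSL* is created from the context.

#include <openssl/ssl.h>

// Load only the client certificate and its key on the context.
bool configure_client_identity(SSL_CTX* ctx)
{
    if (SSL_CTX_use_certificate_file(ctx, "testclient.crt", SSL_FILETYPE_PEM) <= 0 ||
        SSL_CTX_use_PrivateKey_file(ctx, "testclient.key", SSL_FILETYPE_PEM) <= 0 ||
        SSL_CTX_check_private_key(ctx) != 1) {
        return false;  // handle/log the error, e.g. via ERR_print_errors_fp(stderr)
    }
    return true;
}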
I think you need to call SSL_CTX_set_client_CA_list on the server side. This sets a list of certificate authorities to be sent together with the client certificate request.
The client will not send its certificate, even if one was requested, if the certificate does not match that CA list sent by the server.
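On the server side that would look roughly like this (a sketch; "myca.crt" is the CA file already used elsewhere in the question):

#include <openssl/ssl.h>

// Advertise the acceptable client-CA names in the CertificateRequest so that
// clients holding a certificate issued under this CA will actually offer it.
void advertise_client_cas(SSL_CTX* ctx)
{
    STACK_OF(X509_NAME)* ca_names = SSL_load_client_CA_file("myca.crt");
    SSL_CTX_set_client_CA_list(ctx, ca_names);  // the context takes ownership of the list
}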
I am behind an HTTP/HTTPS proxy, so to download a file using QNetworkAccessManager I set the proxy as follows:
if(no_proxy)
{
QNetworkProxyFactory::setUseSystemConfiguration (false);
QNetworkProxy::setApplicationProxy(QNetworkProxy::NoProxy);
}
else if(system_proxy)
{
QNetworkProxyQuery pQuery(QUrl(QLatin1String("http://www.google.com")));
QList<QNetworkProxy>listOfProxies =QNetworkProxyFactory::systemProxyForQuery(pQuery);
QNetworkProxy::setApplicationProxy(listOfProxies.first());
}
else if(manual_proxy)
{
proxy.setHostName(address);
proxy.setPort(port);
if(http_proxy)
proxy.setType(QNetworkProxy::HttpProxy);
else if(socks_proxy)
proxy.setType(QNetworkProxy::Socks5Proxy);
else if(ftp_proxy)
proxy.setType(QNetworkProxy::FtpCachingProxy);
QNetworkProxy::setApplicationProxy(proxy);
}
Now, behind the HTTP Squid proxy server, this code works fine for HTTP URLs. But if I try to download a file from an FTP URL, the download fails with the error
no suitable proxy found
It does not seem to use the HTTP proxy for FTP URLs. But Firefox, for example, has the option:
use this proxy server for all protocols
How can I do a similar thing in Qt?
Update:
void DownloadThread::startDownload()
{
    QString args = downUrl, tempFN;
    QUrl url = QUrl::fromEncoded(args.toLocal8Bit());
    request.setUrl(url);
    request.setRawHeader("User-Agent", userAgent);
    request.setAttribute(QNetworkRequest::HttpPipeliningAllowedAttribute, true);

    manager.setCookieJar(cookieJar);  // the cookie jar belongs to the manager, not the reply
    reply = manager.get(request);

    connect(reply, SIGNAL(readyRead()), this,
            SLOT(saveToDisk()));
    ...
}
Have you tried explicitly setting the QFtp proxy?
int setProxy ( const QString & host, quint16 port )
That might get you more joy. Yes, you normally have to set the proxies up for each connection, but there is always the possibility that the proxy you are trying to use doesn't support FTP. If you pass me some more details about the proxy and where your problems lie (request/response codes, for example), I can take a closer look.
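A quick sketch of that suggestion (QFtp is the Qt 4 class; the host names and port below are placeholders, not values from the question):

#include <QFtp>

void startFtpThroughProxy()
{
    QFtp* ftp = new QFtp;
    ftp->setProxy("proxy.example.com", 3128);  // route the FTP session through the proxy
    ftp->connectToHost("ftp.example.com");     // then contact the real FTP server
    ftp->login();
    // connect to QFtp's commandFinished()/done() signals to track progress
}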
Also, in squid.conf you may want to change/add the following in case they are not present:
acl SSL_ports port 443 21
acl FTP proto FTP
always_direct allow FTP
http_access allow ftp
Also, it's worth checking that the firewall allows ports 20, 21 and 443 (I know it's a simple check, but I often find it's things like these that are a real pain to track down as a root cause).
Do you have a copy of the log file that is generated? It would be interesting/helpful to see what error code is being returned. Also, have you tried manually stepping through the program to see what the variables contain at run time? That would give you a better picture of what is happening; it may be that everything is fine and that there is a simple way forward which the contents of the variables will point you to in short order (it might not be the case, but it is usually worth a try).