error:0906D06C:PEM routines:PEM_read_bio:no start - c++

I'm getting this very annoying error: error:0906D06C:PEM routines:PEM_read_bio:no start
Code:
RSA* publickey = cWrapper.getPublicKey("C:/rsa-stuff/public.pem");
QByteArray plain = "The man in the black fled into the desert and the gunslinger followed...";
QByteArray encrypted = cWrapper.encryptRSA(publickey, plain);
In encryptRSA():
const char* publicKeyStr = data.constData();
qDebug() << publicKeyStr;
BIO* bio = BIO_new_mem_buf((void*)publicKeyStr, -1);
BIO_set_flags(bio, BIO_FLAGS_BASE64_NO_NL);
RSA* rsaPubKey = PEM_read_bio_RSAPublicKey(bio, NULL, NULL, NULL);
if(!rsaPubKey) {
qDebug() << "Could not load public key, " << ERR_error_string(ERR_get_error(), NULL); // error is here
}
BIO_free(bio);
This is how I read the file:
QByteArray data;
QFile file(filename);
if(!file.open(QFile::ReadOnly))
{
    printf("Error reading file: %s\n", qPrintable(file.errorString()));
    return data;
}
data = file.readAll();
file.close();
return data;
When I print out publicKeyStr, it looks fine. This is the Notepad++ view with all characters enabled:
Anyone know what I am doing wrong? Super annoying issue :(
First of all, it's not this problem because I don't get the trusted part. Anyhow, I did try all the "solutions" and none of them worked, same error.

Your RSA public key is in SubjectPublicKeyInfo PEM format, but you are trying to read it using PEM_read_bio_RSAPublicKey which tries to read a PEM RSA key in PKCS#1 format. Try using PEM_read_bio_RSA_PUBKEY instead.
https://www.openssl.org/docs/man1.1.0/crypto/PEM_read_bio_RSAPublicKey.html
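A minimal sketch of that change, reusing the Qt types and error logging from the question (the helper name loadPublicKey is made up here):
#include <openssl/pem.h>
#include <openssl/err.h>
#include <QByteArray>
#include <QDebug>
// Sketch: load a "-----BEGIN PUBLIC KEY-----" (SubjectPublicKeyInfo) PEM.
RSA* loadPublicKey(const QByteArray& data)
{
    BIO* bio = BIO_new_mem_buf((void*)data.constData(), data.size());
    if(!bio)
        return NULL;
    // PEM_read_bio_RSA_PUBKEY parses SubjectPublicKeyInfo PEM;
    // PEM_read_bio_RSAPublicKey would expect PKCS#1 ("BEGIN RSA PUBLIC KEY").
    RSA* rsaPubKey = PEM_read_bio_RSA_PUBKEY(bio, NULL, NULL, NULL);
    if(!rsaPubKey)
        qDebug() << "Could not load public key, " << ERR_error_string(ERR_get_error(), NULL);
    BIO_free(bio);
    return rsaPubKey;
}
Note that the BIO_set_flags(bio, BIO_FLAGS_BASE64_NO_NL) call from the question is meant for a base64 filter BIO and should not be needed on a plain memory BIO here.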

I got that same error on an OpenSSL 1.1.0f build I ported. The error showed up in my logger when reading out the root certificate from an MQTT client connection, until I figured out that I had forwarded ERR_put_error() directly to my logger, whereas in OpenSSL the "real" error handling is kept in an ERR_STATE error buffer, and so sometimes (like in this case) errors are "expected" and the ERR_STATE error buffer is cleared (before anyone should check it).
in crypto/pem/pem_info.c, line 65:
i = PEM_read_bio(bp, &name, &header, &data, &len);
if (i == 0) {
    error = ERR_GET_REASON(ERR_peek_last_error());
    if (error == PEM_R_NO_START_LINE) {
        ERR_clear_error();
        break;
    }
    goto err;
meaning it runs through the BIO_gets inside PEM_read_bio until it returns zero, and if you get this PEM_R_NO_START_LINE, that's just a way of saying it's done.
By that time, though, the error had already landed in my logger. So for anyone confused by errors they are forwarding directly from ERR_put_error: use ERR_print_errors_fp(stderr); in your error-handling routine instead. In my case, as I don't have a stderr, I made a patched version of it, like:
void errorhandling()
{
    unsigned long l;
    char buf[256];
    const char *file, *data;
    int line, flags;
    /* Drain the ERR_STATE queue the same way ERR_print_errors_fp would. */
    while ((l = ERR_get_error_line_data(&file, &line, &data, &flags)) != 0)
    {
        ERR_error_string_n(l, buf, sizeof buf);
        printf("%s:%s:%d:%s\n", buf, file, line, (flags & ERR_TXT_STRING) ? data : "");
    }
}

Related

C++ and OpenSSL Library: How can I set subjectAltName (SAN) from code?

I'm trying to create a self-signed request with a subjectAltName from C++ code (trying to implement dynamic self-signed certificates like this for the current version of OpenResty, but there is no solution there for subjectAltName).
Please provide some examples of setting SANs from C++/OpenSSL code. I tried something like this:
X509_EXTENSION *ext;
STACK_OF (X509_EXTENSION) * extlist;
char *ext_name = "subjectAltName";
char *ext_value = "DNS:lohsport.com";
extlist = sk_X509_EXTENSION_new_null ();
ext = X509V3_EXT_conf (NULL, NULL, ext_name, ext_value);
if(ext == NULL)
{
    *err = "Error creating subjectAltName extension";
    goto failed;
}
sk_X509_EXTENSION_push (extlist, ext);
if (!X509_REQ_add_extensions (x509_req, extlist)){
    *err = "Error adding subjectAltName to the request";
    goto failed;
}
sk_X509_EXTENSION_pop_free (extlist, X509_EXTENSION_free);
It compiles successfully but doesn't work.
I would be grateful for any help.
UPDATE
Now I'm trying to work as in the selfsign.c demo of the OpenSSL library:
1) I defined a function for adding extensions to the CSR:
int add_ext(STACK_OF(X509_EXTENSION) *sk, int nid, char *value)
{
    X509_EXTENSION *ex;
    ex = X509V3_EXT_conf_nid(NULL, NULL, nid, value);
    if (!ex)
        return 0;
    sk_X509_EXTENSION_push(sk, ex);
    return 1;
}
2) I added this block to my function which generates the CSR:
char Buffer[512];
// Format the value
sprintf (Buffer, "DNS:%s", info->common_name);
exts = sk_X509_EXTENSION_new_null();
add_ext(exts, NID_subject_alt_name, Buffer);
if(X509_REQ_add_extensions(x509_req, exts) != 1) {
    *err = "X509_REQ_add_extensions() failed";
    goto failed;
}
sk_X509_EXTENSION_pop_free(exts, X509_EXTENSION_free);
The code again compiles correctly, certificates are generated on the fly, but alternative names still don't work, and I get an error in the browser:
NET :: ERR_CERT_COMMON_NAME_INVALID
and I don’t see the alternative name information in the certificate details.
What other solutions can there be for the SAN problem? I can provide all the code, for example on GitHub, if it helps.
Hello, I have done something like that (but I am working with an X509 object, not an X509_REQ object like you):
static int cs_cert_set_subject_alt_name(X509 *x509_cert)
{
    char *subject_alt_name = "IP: 192.168.1.1";
    X509_EXTENSION *extension_san = NULL;
    ASN1_OCTET_STRING *subject_alt_name_ASN1 = NULL;
    int ret = -1;
    subject_alt_name_ASN1 = ASN1_OCTET_STRING_new();
    if (!subject_alt_name_ASN1) {
        goto err;
    }
    ASN1_OCTET_STRING_set(subject_alt_name_ASN1, (unsigned char*) subject_alt_name, strlen(subject_alt_name));
    if (!X509_EXTENSION_create_by_NID(&extension_san, NID_subject_alt_name, 0, subject_alt_name_ASN1)) {
        goto err;
    }
    ASN1_OCTET_STRING_free(subject_alt_name_ASN1);
    subject_alt_name_ASN1 = NULL; /* avoid a double free in the error path */
    ret = X509_add_ext(x509_cert, extension_san, -1);
    if (!ret) {
        goto err;
    }
    X509_EXTENSION_free(extension_san);
    return 1; /* success */
err:
    if (subject_alt_name_ASN1) ASN1_OCTET_STRING_free(subject_alt_name_ASN1);
    if (extension_san) X509_EXTENSION_free(extension_san);
    return -1;
}
It has worked for me so far; I still have some trouble when I want to update an already existing certificate with a new subject alt name (because of a new IP address).
To see the result and check whether the subject alt name was set:
$ openssl x509 -text -in cert.pem
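(The question generates a CSR rather than a certificate; assuming the request is saved as req.pem, the analogous check would be:)
$ openssl req -text -noout -in req.pem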
Instead of the X509V3_EXT_conf function, try the X509V3_EXT_conf_nid function, in which you pass the NID instead of the name.
ext = X509V3_EXT_conf (NULL, NULL, ext_name, ext_value);
can be
ext = X509V3_EXT_conf_nid(NULL, NULL, NID_subject_alt_name, ext_value);
Your code might not be working because the extension name might not exactly match the one OpenSSL expects.
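A quick, hedged way to check that mapping (the string literal is just the short name from the question; OBJ_txt2nid also accepts long names and numeric OIDs):
#include <openssl/objects.h>
/* Sketch: see how a textual extension name maps to a NID. */
int nid = OBJ_txt2nid("subjectAltName"); /* NID_subject_alt_name on a match, NID_undef (0) otherwise */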

How to avoid SIGABRT when generating RSA Signature at EVP_SignFinal

I'm trying to generate an RSA signature with libopenssl for C++.
But when I run my code, I get a SIGABRT. I did some deep debugging into libopenssl internals to see where the abort comes from; I'll come to this later on.
First I want to make clear that the RSA private key was successfully loaded from a .pem file, so I'm pretty sure that's not the problem's origin.
So my question is: how do I avoid the SIGABRT, and what is the cause of it?
I'm doing this for my B.Sc. thesis, so I really appreciate your help :)
Signature Generation Function:
DocumentSignature* RSASignatureGenerator::generateSignature(ContentHash* ch, CryptographicKey* pK) throw(PDVSException) {
OpenSSL_add_all_algorithms();
OpenSSL_add_all_ciphers();
OpenSSL_add_all_digests();
if(pK == nullptr)
throw MissingPrivateKeyException();
if(pK->getKeyType() != CryptographicKey::KeyType::RSA_PRIVATE || !dynamic_cast<RSAPrivateKey*>(pK))
throw KeyTypeMissmatchException(pK->getPem()->getPath().string(), "Generate RSA Signature");
//get msg to encrypt
const char* msg = ch->getStringHash().c_str();
//get openssl rsa key
RSA* rsaPK = dynamic_cast<RSAPrivateKey*>(pK)->createOpenSSLRSAKeyObject();
//create openssl signing context
EVP_MD_CTX* rsaSignCtx = EVP_MD_CTX_create();
EVP_PKEY* priKey = EVP_PKEY_new();
EVP_PKEY_assign_RSA(priKey, rsaPK);
//init ctxt
if (EVP_SignInit(rsaSignCtx, EVP_sha256()) <=0)
throw RSASignatureGenerationException();
//add data to sign
if (EVP_SignUpdate(rsaSignCtx, msg, std::strlen(msg)) <= 0) {
throw RSASignatureGenerationException();
}
//create result byte signature struct
DocumentSignature::ByteSignature* byteSig = new DocumentSignature::ByteSignature();
//set size to max possible
byteSig->size = EVP_MAX_MD_SIZE;
//alloc buffer memory
byteSig->data = (unsigned char*)malloc(byteSig->size);
//do signing
if (EVP_SignFinal(rsaSignCtx, byteSig->data, (unsigned int*) &byteSig->size, priKey) <= 0)
throw RSASignatureGenerationException();
DocumentSignature* res = new DocumentSignature(ch);
res->setByteSignature(byteSig);
EVP_MD_CTX_destroy(rsaSignCtx);
//TODO open SSL Memory leaks -> where to free open ssl stuff?!
return res;
}
RSA* rsaPK = dynamic_cast<RSAPrivateKey*>(pK)->createOpenSSLRSAKeyObject();
virtual RSA* createOpenSSLRSAKeyObject() throw (PDVSException) override {
    RSA* rsa = NULL;
    const char* c_string = _pem->getContent().c_str();
    BIO * keybio = BIO_new_mem_buf((void*)c_string, -1);
    if (keybio==NULL)
        throw OpenSSLRSAPrivateKeyObjectCreationException(_pem->getPath());
    rsa = PEM_read_bio_RSAPrivateKey(keybio, &rsa, NULL, NULL);
    if(rsa == nullptr)
        throw OpenSSLRSAPrivateKeyObjectCreationException(_pem->getPath());
    //BIO_free(keybio);
    return rsa;
}
SIGABRT origin, in file openssl/crypto/mem.c:
void CRYPTO_free(void *str, const char *file, int line)
{
    if (free_impl != NULL && free_impl != &CRYPTO_free) {
        free_impl(str, file, line);
        return;
    }
#ifndef OPENSSL_NO_CRYPTO_MDEBUG
    if (call_malloc_debug) {
        CRYPTO_mem_debug_free(str, 0, file, line);
        free(str);
        CRYPTO_mem_debug_free(str, 1, file, line);
    } else {
        free(str);
    }
#else
    free(str); // <<<<<<< HERE
#endif
}
The stacktrace: [stacktrace screenshot from the debugger (CLion, gdb-based)]
I just found the bug (and I'm really not sure whether this is a libopenssl bug..):
//set size to max possible
byteSig->size = EVP_MAX_MD_SIZE;
//alloc buffer memory
byteSig->data = (unsigned char*)malloc(byteSig->size);
The problem was that I set the buffer size to EVP_MAX_MD_SIZE!
The (in my opinion) very strange thing is that you have to keep the size uninitialized! (Not even set to 0, just "size_t size;".)
The strange thing here is that you then also HAVE TO allocate memory just like I did. I don't understand this, because then an undefined amount of memory gets allocated.
What is really weird is that libopenssl internally sets the size back to 0 and allocates the memory itself (I noticed this by browsing the libopenssl source code).
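For reference, and as a hedged sketch rather than a confirmed diagnosis: per the OpenSSL documentation, EVP_SignFinal may write up to EVP_PKEY_size(pkey) bytes into the output buffer, and for an RSA key that is the modulus size (e.g. 256 bytes for a 2048-bit key), which is larger than EVP_MAX_MD_SIZE (64). A buffer sized with EVP_MAX_MD_SIZE can therefore be overrun, corrupting the heap and triggering the abort inside free(). Sizing the buffer from the key would look roughly like this (byteSig, rsaSignCtx and priKey as in the question):
// Sketch: size the signature buffer from the key, not from EVP_MAX_MD_SIZE.
unsigned int sigLen = 0;
byteSig->size = EVP_PKEY_size(priKey);                  // max signature length for this key
byteSig->data = (unsigned char*)malloc(byteSig->size);
if (EVP_SignFinal(rsaSignCtx, byteSig->data, &sigLen, priKey) <= 0)
    throw RSASignatureGenerationException();
byteSig->size = sigLen;                                 // actual number of bytes written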

SSL signature verification cross-language issue

I have the following code in a C websocket server application of mine. The code performs SSL signature verification on a message with a given public key. It works fine in the C application, but recently I started rewriting it in C++. The issue I encountered is that the same code below is in both applications, unchanged, and both receive the same input data, yet the one compiled with C++ yields the SSL error "bad signature".
Here is the code:
int verifyMessageSignature(const char* decoded_message, int pos,
unsigned char* signature, char* publicKey)
{
SSL_library_init();
SSL_load_error_strings();
ERR_load_BIO_strings();
// OpenSSL_add_all_algorithms()
if (!publicKey)
{
printf("publicKey is null\n");
}
BIO* keyBio = BIO_new_mem_buf(publicKey, -1);
if(!keyBio)
{
printf("failed to created BIO\n");
printError(ERR_get_error());
}
BIO_set_mem_eof_return(keyBio, 0);
RSA* rsa = PEM_read_bio_RSA_PUBKEY(keyBio, NULL, NULL, NULL);
if (!rsa)
{
printf("Error in PEM_read_bio_RSA_PUBKEY\n");
printError(ERR_get_error());
}
EVP_MD_CTX *mdctx = NULL;
if (!(mdctx = EVP_MD_CTX_create()))
{
printf("Error in ctx\n");
printError(ERR_get_error());
}
EVP_PKEY* pk = EVP_PKEY_new();
if (EVP_PKEY_set1_RSA(pk, rsa) != 1)
{
printf("err in EVP_PKEY_set1_RSA\n");
printError(ERR_get_error());
}
if (EVP_DigestVerifyInit(mdctx, NULL, EVP_sha1(), NULL, pk) != 1)
{
printf("error in EVP_DigestVerifyInit\n");
printError(ERR_get_error());
}
if (EVP_DigestVerifyUpdate(mdctx, decoded_message, pos) != 1)
{
printf("error in EVP_DigestVerifyUpdate\n");
printError(ERR_get_error());
}
if (EVP_DigestVerifyFinal(mdctx, signature, 512) == 1)
{
/* Success */
printf("Successful verification!\n");
}
else
{
/* Failure */
printf("Unsuccessful verification!\n");
printError(ERR_get_error());
BIO_free_all(keyBio);
RSA_free(rsa);
EVP_PKEY_free(pk);
EVP_MD_CTX_destroy(mdctx);
ERR_free_strings();
return 1;
}
BIO_free_all(keyBio);
RSA_free(rsa);
EVP_PKEY_free(pk);
EVP_MD_CTX_destroy(mdctx);
ERR_free_strings();
return 0;
}
This code works fine in C: it successfully verifies the signature in my tests, whilst the same code, with the same input data (keys, messages, etc.), yields "bad signature" in C++.
I am compiling under Ubuntu, using gcc and g++ (latest).
What could be causing this issue?
Solved. The issue was that I was using std::string to pass the decoded message around, which was somehow corrupting it. I switched the C++ code to use plain char* strings for this section and it is fine now.
Thanks for the replies; I hope someone else finds this useful.
Edit:
PS: I am using another function for generating the decoded message itself. Usage of std::string there was causing this issue.
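A guess at the underlying cause, not confirmed by the poster: if the decoded message is binary, constructing a std::string from a raw char* without an explicit length stops at the first NUL byte, so the C++ path ends up verifying different bytes than the C path. A minimal sketch of the pitfall:
#include <string>
// Hypothetical decoded payload containing a NUL byte.
const char decoded[] = {'a', 'b', '\0', 'c', 'd'};
std::string truncated(decoded);                 // stops at the NUL: size() == 2
std::string intact(decoded, sizeof decoded);    // keeps all 5 bytes
// Feeding truncated data into EVP_DigestVerifyUpdate would hash only 2 bytes,
// while the C version using the raw pointer and a length would hash all 5.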

OpenSSL RSA_private_decrypt() fails with "oaep decoding error"

I'm trying to implement RSA encryption/decryption using OpenSSL. Unfortunately my code fails during decryption.
I'm using Qt. So here is my code:
QByteArray CryptRSA::rsaEncrypt(QByteArray input)
{
QByteArray result(RSA_size(rsaKey), '\0');
int encryptedBytes = RSA_public_encrypt(RSA_size(rsaKey) - 42, (unsigned char *)input.data(), (unsigned char *) result.data(), rsaKey, RSA_PKCS1_OAEP_PADDING);
if (encryptedBytes== -1)
{
qDebug() << "Error encrypting RSA Key:";
handleErrors();
return QByteArray();
}
else
{
return result;
}
}
QByteArray CryptRSA::rsaDecrypt(QByteArray input)
{
QByteArray result(RSA_size(rsaKey), '\0');
int decryptedBytes = RSA_private_decrypt(RSA_size(rsaKey) - 42, (unsigned char *)input.data(), (unsigned char *)result.data(), rsaKey, RSA_PKCS1_OAEP_PADDING);
if (decryptedBytes == -1)
{
qDebug() << "Error decrypting RSA Key.";
handleErrors();
return QByteArray();
}
else
{
result.resize(decryptedBytes);
return result;
}
}
Here is the error:
error:0407A079:rsa routines:RSA_padding_check_PKCS1_OAEP:oaep decoding error
error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed
It fails in:
RSA_private_decrypt(RSA_size(rsaKey) - 42, (unsigned char *)input.data(),
(unsigned char *)result.data(), rsaKey, RSA_PKCS1_OAEP_PADDING);
I have tried several things, but I can't find my mistake.
error:0407A079:rsa routines:RSA_padding_check_PKCS1_OAEP:oaep decoding error
error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed
If RSA_public_encrypt succeeds, then set the size of the result array to encryptedBytes. Do the same for RSA_private_decrypt.
Also, it's not clear what you are trying to do with RSA_size(rsaKey) - 42. That looks very odd to me. I would expect it to be input.size(). But I'm guessing you know what you are doing with your array.
There could be other problems (like the public and private keys not matching), but we'd need to see more code and parameters to tell.
Also, you should use the EVP_* interfaces. See EVP Asymmetric Encryption and Decryption of an Envelope on the OpenSSL wiki.
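A minimal sketch along those lines, assuming rsaKey and the Qt types from the question (this is not the poster's confirmed fix): encrypt input.size() bytes, which with the default SHA-1 OAEP must be at most RSA_size(rsaKey) - 42, and hand the full RSA_size(rsaKey)-byte ciphertext to RSA_private_decrypt:
// Sketch: OAEP-padded encrypt/decrypt with explicit sizes.
QByteArray rsaEncrypt(RSA* rsaKey, const QByteArray& input)
{
    QByteArray result(RSA_size(rsaKey), '\0');
    // For RSA_PKCS1_OAEP_PADDING the plaintext must be at most RSA_size(rsaKey) - 42 bytes.
    int n = RSA_public_encrypt(input.size(), (const unsigned char*)input.constData(),
                               (unsigned char*)result.data(), rsaKey, RSA_PKCS1_OAEP_PADDING);
    if (n == -1)
        return QByteArray();
    return result;                       // the ciphertext is always RSA_size(rsaKey) bytes
}
QByteArray rsaDecrypt(RSA* rsaKey, const QByteArray& input)
{
    QByteArray result(RSA_size(rsaKey), '\0');
    // Decrypt the full ciphertext block, not RSA_size(rsaKey) - 42 bytes of it.
    int n = RSA_private_decrypt(RSA_size(rsaKey), (const unsigned char*)input.constData(),
                                (unsigned char*)result.data(), rsaKey, RSA_PKCS1_OAEP_PADDING);
    if (n == -1)
        return QByteArray();
    result.resize(n);                    // n is the recovered plaintext length
    return result;
}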

How to read into an asio buffer, say, `1` byte from one socket and then `read_some` more from another?

So I am trying to implement a timed HTTP connection Keep-Alive, and I need to be able to kill it on some timeout. So currently I have (or at least would like to have):
void http_request::timed_receive_base(boost::asio::ip::tcp::socket& socket, int buffer_size, int seconds_to_wait, int seconds_to_parse)
{
this->clear();
http_request_parser_state parser_state = METHOD;
char* buffer = new char[buffer_size];
std::string key = "";
std::string value = "";
boost::asio::ip::tcp::iostream stream;
stream.rdbuf()->assign( boost::asio::ip::tcp::v4(), socket.native() );
try
{
do
{
stream.expires_from_now(boost::posix_time::seconds(seconds_to_wait));
int bytes_read = stream.read_some(boost::asio::buffer(buffer, buffer_size));
stream.expires_from_now(boost::posix_time::seconds(seconds_to_parse));
if (stream) // false if read timed out or other error
{
parse_buffer(buffer, parser_state, key, value, bytes_read);
}
else
{
throw std::runtime_error("Waiting for 2 long...");
}
} while (parser_state != OK);
}
catch (...)
{
delete[] buffer;
throw;
}
delete[] buffer;
}
But there is no read_some in tcp::iostream, so compiler gives me an error:
Error 1 error C2039: 'read_some' : is not a member of 'boost::asio::basic_socket_iostream<Protocol>'
That is why I wonder: how can I read 1 byte via stream.read (like stream.read(buffer, 1);) and then read_some into that very buffer via the socket API (it would look like int bytes_read = socket.read_some(boost::asio::buffer(buffer, buffer_size)); and then call my parse_buffer function with the real bytes_read value)?
BTW, it seems like there will be a really sad problem with the one last byte.. :(
Sorry to be a bit rough, but did you read the documentation? The socket iostream is supposed to work like the normal iostream, like cin and cout. Just do stream >> var. Maybe you want basic_stream_socket::read_some instead?
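A hedged sketch of one way to approximate read_some while staying on the tcp::iostream (stream, buffer, buffer_size and parse_buffer as in the question): block for a single byte with std::istream::read, then drain whatever the stream buffer already holds with std::istream::readsome, which does not block but may return 0 if nothing is buffered yet:
stream.expires_from_now(boost::posix_time::seconds(seconds_to_wait));
stream.read(buffer, 1);                                   // blocks for 1 byte or until the timeout
if (!stream)
    throw std::runtime_error("Waiting for 2 long...");
// readsome only returns data already sitting in the streambuf's get area.
std::streamsize bytes_read = 1 + stream.readsome(buffer + 1, buffer_size - 1);
parse_buffer(buffer, parser_state, key, value, static_cast<int>(bytes_read));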