I am using OpenSSL and trying to decrypt data that was encrypted with RSA_SSLV23_PADDING. The code is as follows:
BIO *pBPK=NULL;
RSA *pPrivKey;
pBPK = BIO_new_mem_buf ( ( void* ) strKey, -1 );
pPrivKey = PEM_read_bio_RSAPrivateKey ( pBPK, NULL, NULL, NULL );
int flen = RSA_size ( pPrivKey );
unsigned char* from = (unsigned char*)strData;
int maxSize = RSA_size ( pPrivKey );
unsigned char* to = new unsigned char[maxSize];
int res = RSA_private_decrypt ( flen, from, to, pPrivKey, RSA_SSLV23_PADDING );
But I am always getting res as -1. When I use RSA_PKCS1_PADDING or RSA_PKCS1_OAEP_PADDING then it works fine.
Decryption with RSA_SSLV23_PADDING not working
That's a padding scheme used for rollback-attack detection in SSLv3 and above. It's meant to be used in the context of SSL/TLS.
When I use RSA_PKCS1_PADDING or RSA_PKCS1_OAEP_PADDING then it works fine.
Right, these are modern RSA padding schemes.
int res = RSA_private_decrypt ( flen, from, to, pPrivKey, RSA_SSLV23_PADDING );
To use RSA_SSLV23_PADDING, I believe you call EVP_PKEY_CTX_set_rsa_padding on the EVP_PKEY_CTX*. See EVP_PKEY_CTX_ctrl man pages for details.
You should probably avoid RSA_PKCS1_PADDING, and use RSA_PKCS1_OAEP_PADDING. For reading, see A bad couple of years for the cryptographic token industry.
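For illustration, here is a minimal sketch of decryption through the EVP_PKEY interface, where the padding mode is selected explicitly with EVP_PKEY_CTX_set_rsa_padding (shown with RSA_PKCS1_OAEP_PADDING as recommended above; a different padding constant would be passed the same way). This is a sketch, not a drop-in replacement for your code:

#include <openssl/evp.h>
#include <openssl/rsa.h>

// Sketch: decrypt 'in' with the private key 'pkey' using an explicit padding mode.
// On entry *outlen must hold the size of the 'out' buffer; on success it holds the plaintext length.
bool evp_rsa_decrypt( EVP_PKEY* pkey, const unsigned char* in, size_t inlen,
                      unsigned char* out, size_t* outlen )
{
    EVP_PKEY_CTX* ctx = EVP_PKEY_CTX_new( pkey, nullptr );
    if( !ctx ) return false;

    bool ok = EVP_PKEY_decrypt_init( ctx ) > 0
           && EVP_PKEY_CTX_set_rsa_padding( ctx, RSA_PKCS1_OAEP_PADDING ) > 0
           && EVP_PKEY_decrypt( ctx, out, outlen, in, inlen ) > 0;

    EVP_PKEY_CTX_free( ctx );
    return ok;
}

An RSA* loaded via PEM_read_bio_RSAPrivateKey can be wrapped into an EVP_PKEY with EVP_PKEY_assign_RSA or EVP_PKEY_set1_RSA before being passed to a helper like this.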
I've been working on a project recently that connects to a server using a particular protocol. So far so good, but when I came to decrypt the packets, I quickly noticed that something is not working properly.
The first 16 bytes of every packet are decrypted incorrectly. I have tried different libraries, but that does not work either. I work in C++ and have so far used Crypto++ and OpenSSL for the decryption, without success.
The protocol is described at this link, the encryption scheme at this link, and here is my corresponding code:
OpenSSL:
void init() {
    unsigned char* sharedSecret = new unsigned char[AES_BLOCK_SIZE];
    std::generate(sharedSecret,
                  sharedSecret + AES_BLOCK_SIZE,
                  std::bind(&RandomGenerator::GetInt, &m_RNG, 0, 255));
    for (int i = 0; i < 16; i++) {
        sharedSecretKey += sharedSecret[i];
    }

    // Initialize AES encryption and decryption
    if (!(m_EncryptCTX = EVP_CIPHER_CTX_new()))
        std::cout << "123" << std::endl;
    if (!(EVP_EncryptInit_ex(m_EncryptCTX, EVP_aes_128_cfb8(), nullptr, (unsigned char*)sharedSecretKey.c_str(), (unsigned char*)sharedSecretKey.c_str())))
        std::cout << "123" << std::endl;

    if (!(m_DecryptCTX = EVP_CIPHER_CTX_new()))
        std::cout << "123" << std::endl;
    if (!(EVP_DecryptInit_ex(m_DecryptCTX, EVP_aes_128_cfb8(), nullptr, (unsigned char*)sharedSecretKey.c_str(), (unsigned char*)sharedSecretKey.c_str())))
        std::cout << "123" << std::endl;

    m_BlockSize = EVP_CIPHER_block_size(EVP_aes_128_cfb8());
}
std::string result;
int size = 0;
result.resize(1000);
EVP_DecryptUpdate(m_DecryptCTX, &((unsigned char*)result.c_str())[0], &size, &sendString[0], data.size());
Crypto++:
CryptoPP::CFB_Mode<CryptoPP::AES>::Decryption AESDecryptor((byte*)sharedSecret.c_str(), (unsigned int)16, sharedSecret.c_str(), 1);
std::string sTarget("");
CryptoPP::StringSource ss(data, true, new CryptoPP::StreamTransformationFilter(AESDecryptor, new CryptoPP::StringSink(sTarget)));
I think it is important to mention that I use one and the same shared secret for the key and the IV (initialization vector). In other posts this was often labeled as a problem, but I do not know how to avoid it here because the protocol requires it.
I'm looking forward to constructive feedback.
EVP_EncryptInit_ex(m_EncryptCTX, EVP_aes_128_cfb8(), nullptr,
(unsigned char*)sharedSecretKey.c_str(), (unsigned char*)sharedSecretKey.c_str()))
And:
CFB_Mode<AES>::Decryption AESDecryptor((byte*)sharedSecret.c_str(),
(unsigned int)16, sharedSecret.c_str(), 1);
std::string sTarget("");
StringSource ss(data, true, new StreamTransformationFilter(AESDecryptor, new StringSink(sTarget)));
It is not readily apparent, but you need to set the feedback size for the mode of operation of the block cipher in Crypto++. The EVP_aes_128_cfb8() you use on the OpenSSL side is CFB with 8-bit feedback, while Crypto++'s CFB mode defaults to a feedback size equal to the full 128-bit block.
The code to set the feedback size of CFB mode can be found at CFB Mode on the Crypto++ wiki. You want the 3rd or 4th example down the page.
AlgorithmParameters params =
    MakeParameters(Name::FeedbackSize(), 1 /*8-bits*/)
                  (Name::IV(), ConstByteArrayParameter(iv));
That is kind of an awkward way to pass parameters. It is documented in the source files and on the wiki at NameValuePairs. It allows you to pass arbitrary parameters through consistent interfaces, and it is powerful once you acquire a taste for it.
And then use params to key the encryptor and decryptor:
CFB_Mode< AES >::Encryption enc;
enc.SetKey( key, key.size(), params );

// CFB mode must not use padding. Specifying
// a scheme will result in an exception
StringSource ss1( plain, true,
    new StreamTransformationFilter( enc,
        new StringSink( cipher )
    ) // StreamTransformationFilter
); // StringSource
I believe your calls would look something like this (if I am parsing the OpenSSL correctly):
const byte* ptr = reinterpret_cast<const byte*>(sharedSecret.c_str());
AlgorithmParameters params =
    MakeParameters(Name::FeedbackSize(), 1 /*8-bits*/)
                  (Name::IV(), ConstByteArrayParameter(ptr, 16));

CFB_Mode< AES >::Encryption enc;
enc.SetKey( ptr, 16, params );
In your production code you should use a unique key and IV, so do something like this using HKDF:
std::string seed(AES_BLOCK_SIZE, '0');
std::generate(seed.begin(), seed.end(),
              std::bind(&RandomGenerator::GetInt, &m_RNG, 0, 255));

SecByteBlock sharedSecret(32);
const byte usage[] = "Key and IV v1";

HKDF<SHA256> hkdf;
hkdf.DeriveKey(sharedSecret, 32, (const byte*)&seed[0], 16, usage, COUNTOF(usage), nullptr, 0);

AlgorithmParameters params =
    MakeParameters(Name::FeedbackSize(), 1 /*8-bits*/)
                  (Name::IV(), ConstByteArrayParameter(sharedSecret+16, 16));

CFB_Mode< AES >::Encryption enc;
enc.SetKey(sharedSecret+0, 16, params);
In the code above, sharedSecret is twice the size of a single key because it holds both the key and the IV, which are derived from the seed using HKDF. sharedSecret+0 is the 16-byte key, and sharedSecret+16 is the 16-byte IV.
I'm trying to write a DNS resolver with user-supplied resolvers (just a text file with several IP addresses that can be used for querying) using the standalone ASIO/C++ library, and I have failed on every attempt to make the receiver work. None of the resolvers seem to respond (udp::receive_from) to the query I'm sending. However, when I use the same resolver file with an external library like dnslib, it works like a charm, so the problem lies in my code. Here's the code I'm using to send data to the DNS servers.
struct DNSPktHeader
{
    uint16_t id{};
    uint16_t bitfields{};
    uint16_t qdcount{};
    uint16_t ancount{};
    uint16_t nscount{};
    uint16_t arcount{};
};
// dnsname, for example is -> google.com
// dns_resolvers is a list of udp::endpoints of IPv4 addresses on port 53.
// ip is the final result
// returns 0 on success and negative value on failure
int get_host_by_name( char const *dnsname, std::vector<udp::endpoint> const & dns_resolvers, OUT uint16_t* ip )
{
    uint8_t netbuf[128]{};
    char const *funcname = "get_host_by_name";
    uint16_t const dns_id = rand() % 2345; // warning!!! Simply for testing purposes
    DNSPktHeader dns_qry{};
    dns_qry.id = dns_id;
    dns_qry.qdcount = 1;
    dns_qry.bitfields = 0x8; // set the RD field of the header to 1

    // custom_htons sets the buffer pointed to by the second argument netbuf
    // to the htons of the first argument
    custom_htons( dns_qry.id, netbuf + 0 );
    custom_htons( dns_qry.bitfields, netbuf + 2 );
    custom_htons( dns_qry.qdcount, netbuf + 4 );
    custom_htons( dns_qry.ancount, netbuf + 6 );
    custom_htons( dns_qry.nscount, netbuf + 8 );
    custom_htons( dns_qry.arcount, netbuf + 10 );

    unsigned char* question_start = netbuf + sizeof( DNSPktHeader ) + 1;
    // creates the DNS question segment into netbuf's specified starting index
    int len = create_question_section( dnsname, (char**) &question_start, thisdns::dns_record_type::DNS_REC_A,
                                       thisdns::dns_class::DNS_CLS_IN );
    if( len < 0 ){
        fmt::print( stderr, "{}: {} ({})\n", funcname, dnslib_errno_strings[DNSLIB_ERRNO_BADNAME - 1], dnsname );
        return -EFAULT;
    }
    len += sizeof( DNSPktHeader );
    fmt::print( stdout, "{}: Submitting DNS A-record query for domain name ({})\n", funcname, dnsname );

    asio::error_code resolver_ec{};
    udp::socket udp_socket{ DNSResolver::GetIOService() };
    udp_socket.open( udp::v4() );

    // set 5 seconds timeout on receive and reuse the address
    udp_socket.set_option( asio::ip::udp::socket::reuse_address( true ) );
    udp_socket.set_option( asio::detail::socket_option::integer<SOL_SOCKET, SO_RCVTIMEO>{ 5'000 } );
    udp_socket.bind( udp::endpoint{ asio::ip::make_address( "127.0.0.1" ), 53 } );

    std::size_t bytes_read = 0, retries = 1;
    int const max_retries = 10;
    asio::error_code receiver_err{};
    uint8_t receive_buf[0x200]{};
    udp::endpoint default_receiver{};

    do{
        udp::endpoint const & resolver_endpoint{ dns_resolvers[retries] };
        int bytes_sent = udp_socket.send_to( asio::buffer( netbuf, len ), resolver_endpoint, 0, resolver_ec );
        if( bytes_sent < len || resolver_ec ){
            fmt::print( stderr, "{}: (found {}, expected {})\n", funcname, bytes_sent, len );
            return -EFAULT;
        }
        // ======== the problem ==============
        bytes_read = udp_socket.receive_from( asio::buffer( receive_buf, sizeof( receive_buf ) ), default_receiver, 0,
                                              receiver_err );
        // bytes_read always returns 0
        if( receiver_err ){
            fmt::print( stderr, "{}\n\n", receiver_err.message() );
        }
    } while( bytes_read == 0 && retries++ < max_retries );
    //...
}
I have tried my best, but it clearly isn't enough. Could you please take a look at this and help figure out where the problem lies? It's my very first time using ASIO on any real-life project.
I don't know if it's relevant, but here's create_question_section:
int create_question_section( const char *dnsname, char** buf, thisdns::dns_record_type type, thisdns::dns_class class_ )
{
    char const *funcname = "create_question_section";
    if( dnsname[0] == '\0' ){ // Blank DNS name?
        fmt::print( stderr, "{}: Blank DNS name?\n", funcname );
        return -EBADF;
    }

    uint8_t len{};
    int index{};
    int j{};
    bool found = false;

    do{
        if( dnsname[index] != '.' ){
            j = 1;
            found = false;
            do{
                if( dnsname[index + j] == '.' || dnsname[index + j] == '\0' ){
                    len = j;
                    strncpy( *buf, (char*) &len, 1 );
                    ++( *buf );
                    strncpy( *buf, (char*) dnsname + index, j );
                    ( *buf ) += j;
                    found = true;
                    if( dnsname[index + j] != '\0' )
                        index += j + 1;
                    else
                        index += j;
                } else{
                    j++;
                }
            } while( !found && j < 64 );
        } else{
            fmt::print( stderr, "{}: DNS addresses can't start with a dot!\n", funcname );
            return -EBADF; // DNS addresses can't start with a dot!
        }
    } while( dnsname[index] );

    uint8_t metadata_buf[5]{};
    custom_htons( (uint16_t)type, metadata_buf + 1 );
    custom_htons( (uint16_t)class_, metadata_buf + 3 );
    strncpy( *buf, (char*) metadata_buf, sizeof(metadata_buf) );

    return sizeof( metadata_buf ) + index + 1;
}
There are at least two reasons why it's not working for you. They both boil down to the fact that the DNS packet you send out is malformed.
This line
unsigned char* question_start = netbuf + sizeof( DNSPktHeader ) + 1;
sets the pointer into the buffer one position farther than you want. Instead of starting the encoded FQDN at position 12 (indexed from 0), it starts at position 13. That means the DNS server sees a zero-length domain name followed by some garbage record type and class, ignores the rest, and decides not to respond to your query at all. Just get rid of the +1.
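That is, the corrected line would be:

unsigned char* question_start = netbuf + sizeof( DNSPktHeader ); // question begins at offset 12, right after the header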
Another possible issue could be in encoding all the records with custom_htons(). I have no clue how it's implemented and so cannot tell you whether it works correctly.
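For reference, since the implementation isn't shown in the question, a helper with the behaviour described in your comment (write a 16-bit value into the buffer in network byte order) would look roughly like this hypothetical sketch:

// Hypothetical sketch of what custom_htons is assumed to do:
// serialize a 16-bit value in network (big-endian) byte order.
void custom_htons( uint16_t value, uint8_t* out )
{
    out[0] = static_cast<uint8_t>( value >> 8 );   // most significant byte first
    out[1] = static_cast<uint8_t>( value & 0xFF ); // least significant byte second
}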
Furthermore, although not directly responsible for your observed behaviour, the following call to bind() will fail unless you run the binary as root (or with the appropriate capabilities on Linux), because you are trying to bind to a privileged port:
udp_socket.bind( udp::endpoint{ asio::ip::make_address( "127.0.0.1" ), 53 } );
Also, dns_qry.bitfields = 0x8; doesn't do what you want. The RD flag is bit 8 of the 16-bit flags field, so it should be dns_qry.bitfields = 0x0100;.
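For reference, these are the relevant flag masks for the 16-bit flags field as laid out in RFC 1035 §4.1.1 (values in host order, serialized to the wire by custom_htons):

constexpr uint16_t DNS_FLAG_QR = 0x8000; // query (0) / response (1)
constexpr uint16_t DNS_FLAG_RD = 0x0100; // recursion desired
constexpr uint16_t DNS_FLAG_RA = 0x0080; // recursion available (set by the server)

dns_qry.bitfields = DNS_FLAG_RD;         // ask the resolver to recurse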
Check the DNS RFCs (e.g. RFC 1035) for reference on how to form a valid DNS request.
Important note: I would strongly recommend not mixing C++ with C. Pick one; since you tagged C++ and use ASIO and {fmt}, stick with C++. Replace all your C-style casts with the appropriate C++ versions (static_cast, reinterpret_cast, etc.), use std::array instead of C-style arrays, don't use strncpy, and so on; see the small sketch below.
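As an illustration of the last point, the strncpy calls in create_question_section copy a known number of bytes, which std::copy_n from <algorithm> expresses without the C string machinery:

#include <algorithm>

// C style, as used above:
strncpy( *buf, (char*) dnsname + index, j );

// C++ style equivalent for this usage (copies exactly j characters):
std::copy_n( dnsname + index, j, *buf );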
Background
I am trying to verify the signature of a given binary file using OpenSSL. The actual signing of the binary's hash is done by a 3rd party. Both the 3rd party and I have the exact same certificate; they sent it to me.
I have verified the health of my certificate by running openssl x509 -noout -text -inform DER -in CERT_PATH. This displays the contents of the cert correctly.
Following is my code so far; I based it on the OpenSSL wiki example here:
static std::vector<char> ReadAllBytes(char const* filename){
    std::ifstream ifs(filename, std::ios::binary|std::ios::ate);
    std::ifstream::pos_type pos = ifs.tellg();

    std::vector<char> result(pos);
    ifs.seekg(0, std::ios::beg);
    ifs.read(result.data(), pos);

    return result;
}
int main(int ac, const char * av[]) {
    OpenSSL_add_all_algorithms();
    ERR_load_crypto_strings();

    // most of error check omitted for brevity
    auto foundBinBytes = ReadAllBytes("BINARY_PATH");
    auto foundSgnBytes = ReadAllBytes("SIGNATURE_PATH");
    auto foundCertBytes = ReadAllBytes("CERT_PATH");

    ERR_clear_error();
    BIO *b = NULL;
    X509 *c;
    b = BIO_new_mem_buf(reinterpret_cast<const unsigned char *>(foundCertBytes.data()), foundCertBytes.size());
    c = d2i_X509_bio(b, NULL);

    EVP_MD_CTX* ctx = NULL;
    ctx = EVP_MD_CTX_create();
    const EVP_MD* md = EVP_get_digestbyname("SHA256");
    int rc = EVP_DigestInit_ex(ctx, md, NULL);

    EVP_PKEY *k = NULL;
    k = X509_get_pubkey(c);

    rc = EVP_DigestVerifyInit(ctx, NULL, md, NULL, k);
    rc = EVP_DigestVerifyUpdate(ctx, reinterpret_cast<const unsigned char *>(foundBinBytes.data()), foundBinBytes.size());
    ERR_clear_error();
    rc = EVP_DigestVerifyFinal(ctx, reinterpret_cast<const unsigned char *>(foundSgnBytes.data()), foundSgnBytes.size());
    ERR_print_errors_fp( stdout );

    // openssl free functions omitted
    if(ctx) {
        EVP_MD_CTX_destroy(ctx);
        ctx = NULL;
    }
    return 0;
}
Issue
Running this code produces following errors:
4511950444:error:0D07207B:asn1 encoding routines:ASN1_get_object:header too long:/.../crypto/asn1/asn1_lib.c:152:
4511950444:error:0D068066:asn1 encoding routines:ASN1_CHECK_TLEN:bad object header:/.../crypto/asn1/tasn_dec.c:1152:
4511950444:error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:/.../crypto/asn1/tasn_dec.c:314:Type=X509_SIG
Question
What is wrong with my setup/code? Did I miss something along the way?
You never check the errors when reading the files. You might have errors there (does the file "CERT_PATH" exist? Do you have read permissions? ...).
If "CERT_PATH" cannot be read, then foundCertBytes.data() is an empty byte array, and this explains the subsequent errors.
If you get these errors in a secured Mosquitto server's log, check the config file.
My mosquitto.conf contained:
require_certificate true
My Meross device doesn't send a client certificate, so setting this to false and restarting solved my problem.
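That is, in mosquitto.conf:

# allow TLS clients that do not present a client certificate
require_certificate false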
I am writing some code to RSA-sign data. I have gotten this code to work successfully on Ubuntu (OpenSSL version 1.0.1f-1ubuntu2.7): the code takes the message and produces the signed result with the private key, which I can verify is correct using a separate program and the matching public key.
Under Windows, the exact same code produces a different result: the signed data is different (and does not verify). I can't figure out why. I am using the OpenSSL distribution 'Win64OpenSSL-1_0_1j' from http://slproweb.com/products/Win32OpenSSL.html .
Here is the code:
const char * rsa_pri_key = "-----BEGIN RSA PRIVATE KEY-----\n"
"MII...L1t\n"
"yC0...+zk\n"
"c0...5Q\n"
"-----END RSA PRIVATE KEY-----\n";
BIO* bio = BIO_new_mem_buf( (void*)rsa_pri_key, -1 );
BIO_set_flags( bio, BIO_FLAGS_BASE64_NO_NL );
RSA* private_key = PEM_read_bio_RSAPrivateKey( bio, NULL, NULL, NULL );
if( !private_key )
{
    std::cout << "Key load failed" << std::endl;
}
BIO_free( bio );

// Sign the data
signature = (unsigned char*) malloc( RSA_size( private_key ) );
RSA_sign( NID_md5, (unsigned char*) message, strlen( message ), signature, &slen, private_key );
I think the issue might be related to the char type on Linux and Windows differing in whether it is signed by default. You're casting to (unsigned char*), which might be where the issue arises.
I'm trying to use statvfs to find the free space on a file system.
Here's the code:
const char* Connection::getDiskInfo()
{
    struct statvfs vfs;
    int nRet = statvfs( "/u0", &vfs );
    if( nRet ) return NULL;

    char* pOut = (char*)malloc( 256 );
    memset( pOut, 0, 256 );
    sprintf( pOut, "<disk letter='%s' total='%lu' free='%lu' totalfree='%lu'/>",
             "/", ( vfs.f_bsize * vfs.f_blocks ) / ( 1024 * 1024 ),
             ( vfs.f_bsize * vfs.f_bavail ) / ( 1024 * 1024 ),
             ( vfs.f_bsize * vfs.f_bfree ) / ( 1024 * 1024 ));
    return pOut;
}
In the debugger (NetBeans 6.9) I see the appropriate values for the statvfs struct:
f_bavail = 105811542
f_bfree = 111586082
f_blocks = 111873644
f_bsize = 4096
This should give me total=437006, but my output insists that total=2830. Clearly I'm doing something ignorant in my formatting or math.
If I add the line:
unsigned long x = ( vfs.f_bsize * vfs.f_blocks );
x evaluates to 2967912448 while the debugger shows me the appropriate values (see above).
system: Linux version 2.6.18-194.17.1.el5PAE
i386
I've read the other entries here referring to this function and they make it seem pretty straightforward. So where did I go astray?
What is the size of fsblkcnt_t? If it's 32 bits, then it's an overflow problem, and you simply need to use a 64-bit type temporarily during the calculation.
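For example, assuming f_bsize and f_blocks are 32-bit on this i386 system, widening one operand before the multiplication keeps the product from wrapping:

// Widen to 64 bits before multiplying so the product cannot overflow a 32-bit type.
unsigned long long total_mb =
    ( (unsigned long long)vfs.f_bsize * vfs.f_blocks ) / ( 1024 * 1024 );
unsigned long long free_mb =
    ( (unsigned long long)vfs.f_bsize * vfs.f_bavail ) / ( 1024 * 1024 );

sprintf( pOut, "<disk letter='%s' total='%llu' free='%llu'/>", "/", total_mb, free_mb );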