Cryptography - Bizarre behaviour in Release mode - C++

I've got a project using Crypto++.
Crypto++ is its own project which builds into a static lib.
Aside from that, I have another large project using some of the Crypto++ classes and processing various algorithms, which also builds into a static lib.
Two of the functions are these:
long long MyClass::EncryptMemory(std::vector<byte> &inOut, char *cPadding, int rounds)
{
    typedef std::numeric_limits<char> CharNumLimit;
    char sPadding = 0;
    // Calculates padding and returns value as the provided type
    sPadding = CalcPad<decltype(sPadding)>(reinterpret_cast<MyClassBase*>(m_P)->BLOCKSIZE, static_cast<int>(inOut.size()));
    // Push random chars as padding; we never care about the padding's content, so it doesn't matter what the padding is
    for (auto i = 0; i < sPadding; ++i)
        inOut.push_back(sRandom(CharNumLimit::min(), CharNumLimit::max()));
    std::size_t nSize = inOut.size();
    EncryptAdvanced(inOut.data(), nSize, rounds);
    if (cPadding)
        *cPadding = sPadding;
    return nSize;
}
// Removing the padding is the responsibility of the caller.
// Nevertheless the string is encrypted with padding
// and should here be the right string with a little padding.
long long MyClass::DecryptMemory(std::vector<byte> &inOut, int rounds)
{
    DecryptAdvanced(inOut.data(), inOut.size(), rounds);
    return inOut.size();
}
Where EncryptAdvanced and DecryptAdvanced pass the arguments to the Crypto++ object.
//...
AdvancedProcessBlocks(bytePtr, nullptr, bytePtr, length, 0);
//...
These functions have worked flawlessly so far; no modifications have been applied to them for months.
The logic around them has evolved, though the calls and the data passed to them did not change.
The data being encrypted / decrypted is rather small but has a dynamic size, which is padded if (datasize % BLOCKSIZE) has a remainder.
Example: AES Blocksize is 16. Data is 31. Padding is 1. Data is now 32.
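CalcPad itself is not shown here; a minimal sketch of what the described rule implies (an assumed implementation, not the actual one) would be:

    // Hypothetical CalcPad: pad only when (dataSize % blockSize) leaves a remainder
    template <typename T>
    T CalcPad(int blockSize, int dataSize)
    {
        int rem = dataSize % blockSize;
        return static_cast<T>(rem == 0 ? 0 : blockSize - rem); // e.g. 31 % 16 = 15 -> pad 1
    }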
After encrypting and before decrypting, the string is the same - as in the picture.
Running all this in debug mode apparently works as intended. Even when running the program on another computer (with VS installed for the DLLs) it shows no difference: the data is correctly encrypted and decrypted.
Running the same code in release mode results in a totally different encrypted string, and it does not decrypt correctly - "trash data" comes out. The wrongly encrypted or decrypted data is consistent - it is always the same trash. The key/password and the rounds/iterations are the same the whole time.
Additional info: the data is saved to a file (ios_base::binary) and is correctly processed in debug mode by two different programs in the same solution using the same static librar(y/ies).
What could be the cause of this Debug/Release problem?
I re-checked the git history a couple of times and debugged through the code for days, yet I cannot find any possible cause for this problem. If any information - aside from a (here rather impossible) MCVE - is needed, please leave a comment.

Apparently this is a bug in Crypto++. The minimum key length of Rijndael/AES is set to 8 instead of 16. Using an invalid key length of 8 bytes causes an out-of-bounds access to the in-place array of Rcon values. This key length of 8 bytes is currently reported as valid and has to be fixed in Crypto++.
See this issue on GitHub for more information. (Ongoing conversation)
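Until the fix lands, a defensive check on the caller's side can catch the problem early; a minimal sketch (the helper name is mine):

    #include <cstddef>
    #include <stdexcept>

    // Hypothetical guard: reject key lengths AES should never accept,
    // since the library currently reports 8 bytes as valid
    inline void CheckAesKeyLength(std::size_t keyLen)
    {
        if (keyLen != 16 && keyLen != 24 && keyLen != 32)
            throw std::invalid_argument("AES key must be 16, 24, or 32 bytes");
    }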

Related

Explanation needed with AES_cfb128_encrypt() function

Hey, I'm looking for some explanation regarding
void AES_cfb128_encrypt(const unsigned char *in, unsigned char *out, size_t length,
const AES_KEY *key, unsigned char *ivec, int *num, int enc)
I understand the straightforward parameters:
in is our input buffer,
out is our output buffer,
length is the size of our input, i.e. the number of bytes in the input buffer,
key is a pointer to our key, which we set with AES_set_encrypt_key,
enc is the mode we want (encryption or decryption).
However, I don't understand what ivec and num do.
All I know (I might be wrong) is that ivec stands for initialisation vector; also, in some code I found online there was a comment:
/* set where on the 128 bit encrypted block to begin encryption */. At first I thought this left gaps of unencrypted data in the midst of the encrypted data, but that wasn't the case when I looked at the byte values while debugging and printing them.
ivec is an initialisation vector, commonly written as IV. The key concept is probabilistic encryption:
if the encryption is not probabilistic, then when you re-encrypt a message under the same key you will get the same result, and an eavesdropper will notice. This is a simple attack; in some modes it is more catastrophic, as in CTR mode, where encryption of two messages under the same IV can cause loss of confidentiality.
In CFB mode, if you encrypt two different messages under the same IV, the first blocks are under attack: the attacker gets the x-or of the messages.
One should generate a random IV per message, since the IV must be unpredictable for CFB mode (as in CBC mode).
Note that you need to send the IV to the receiver so that decryption works. The IV need not be secret; one can prepend it to the ciphertext.
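To make ivec and num concrete, here is a minimal sketch using the low-level OpenSSL API from the question (the helper name is mine; error handling omitted):

    #include <openssl/aes.h>
    #include <openssl/rand.h>
    #include <cstring>
    #include <vector>

    std::vector<unsigned char> cfb_encrypt(const unsigned char *key128,
                                           const unsigned char *msg, size_t len)
    {
        AES_KEY key;
        AES_set_encrypt_key(key128, 128, &key); // CFB uses the encrypt schedule for both directions

        unsigned char iv[AES_BLOCK_SIZE];
        RAND_bytes(iv, sizeof iv);              // fresh, unpredictable IV per message

        std::vector<unsigned char> out(AES_BLOCK_SIZE + len);
        std::memcpy(out.data(), iv, AES_BLOCK_SIZE); // prepend the IV; it need not be secret

        int num = 0; // offset into the current keystream block; 0 = start of a fresh block
        // ivec is updated in place as encryption proceeds, so pass a writable copy
        AES_cfb128_encrypt(msg, out.data() + AES_BLOCK_SIZE, len, &key, iv, &num, AES_ENCRYPT);
        return out;
    }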
Don't forget that these are archaic modes of operation; in modern cryptography we prefer authenticated encryption modes like AES-GCM, AES-CCM, and ChaCha20-Poly1305.
And the key is still secure even if you don't generate a random IV per message!

ZLib GZIP Returning Z_BUF_ERROR(-5)

I am using the zlib library (compiled from src) to deflate/inflate gzip/zlib/raw bytes. I have created wrapper classes for compressing and decompressing (Compressor/Decompresser). I have also created several test cases (GZip, ZLib, Raw, Auto-Detect). The tests pass for ZLib/Raw/Auto-Detect (ZLib), but not for GZip (window bits of 15u | 16u).
Here is my compress function.
std::vector<char> out(zlib->avail_in + 8);
deflateInit2(zlib.get(), Z_DEFAULT_COMPRESSION, Z_DEFLATED, static_cast<int32_t>(mode), 8, Z_DEFAULT_STRATEGY);
zlib->avail_out = out.size();
zlib->next_out = reinterpret_cast<Bytef*>(out.data());
deflate(zlib.get(), Z_FINISH);
out.resize(zlib->total_out + 3);
deflateEnd(zlib.get());
return std::move(out);
And here is decompress
uIntf multiplier = 2;
uIntf currentSize = zlib->avail_in * (multiplier++) * 1000 /* Just to make sure enough output space(will implement loop) */;
std::vector<char> out(currentSize);
inflateInit2(zlib.get(), static_cast<int>(mode));
zlib->avail_out = out.size();
zlib->next_out = reinterpret_cast<Bytef*>(out.data());
inflate(zlib.get(), Z_FINISH);
out.resize(zlib->total_out);
inflateEnd(zlib.get());
return std::move(out);
Input is set in a different function (which is called first), which looks like this. (The char* is not deleted while compress/decompress runs.)
zlib->next_in = reinterpret_cast<Bytef*>(bytes);
zlib->avail_in = static_cast<uIntf>(length);
I also have a mode enum
enum class Mode : int32_t {
    AUTO = 15u | 32u, // Never used on compress
    GZIP = 15u | 16u,
    ZLIB = 15,
    RAW = -15
};
Note: the test cases with mode AUTO (paired with zlib), ZLib, and Raw work. GZip fails its test case. (The test case is just a simple alphanumeric character array.)
Also, I debugged the output of the gzip decompress (after it failed), and the output is missing the last 3 characters (y, z, and the terminating character).
Another note: the constructors of the wrapper classes look like this:
zlib->zalloc = Z_NULL;
zlib->zfree = Z_NULL;
zlib->opaque = Z_NULL;
zlib->avail_in = 0;
zlib->next_in = Z_NULL;
First off, a bunch of scattered code fragments with no context makes it impossible to see what's happening. See How to create a Minimal, Reproducible Example for how to provide a decent example.
Second, you are not saying what is returning Z_BUF_ERROR. There aren't even any places in your code fragments where you retain the return values of deflate() or inflate(), so it's not even possible for you to see a Z_BUF_ERROR! You need to at least do something like int ret = deflate(zlib.get(), Z_FINISH); and then check the value of ret.
Third, I cannot tell in your code fragments where or even if you set the input pointer and length. Is the length set to zero before the inits? Or is it set to the data? Or is the data pointer and length set after the inits? See the MRE link above.
Fourth, we don't have the example data that you're using. So we cannot reproduce the error. Again, see the MRE link.
Ok, so making a stab in the dark here, I will guess that deflate() is returning the error. Then the problem is likely that you have not provided enough output space, and you have asked for Z_FINISH, which is telling deflate() you have provided enough output space. In that case, deflate() returning Z_BUF_ERROR means that you didn't. Compression can expand the data if it is not compressible, and gzip adds more header and trailer information than zlib. Your + 8 is inadequate to account for those two things. A zlib header and trailer is six bytes, whereas a gzip header and trailer is at least 18 bytes. The expansion is a multiplier on the input, adding some part of a percent, where you have no multiplier on the length at all.
zlib provides a function for just this purpose, deflateBound(). You would call that after deflateInit() with the size of your input, and it will return the maximum size of the compressed output.
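For instance, a sketch along these lines (reusing the question's zlib and mode names, and assuming next_in/avail_in are already set) sizes the buffer with deflateBound() and, importantly, keeps the return value:

    deflateInit2(zlib.get(), Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                 static_cast<int>(mode), 8, Z_DEFAULT_STRATEGY);
    // worst-case output size for this stream, including the zlib or gzip wrapper
    uLong bound = deflateBound(zlib.get(), zlib->avail_in);
    std::vector<char> out(bound);
    zlib->avail_out = static_cast<uInt>(out.size());
    zlib->next_out = reinterpret_cast<Bytef*>(out.data());
    int ret = deflate(zlib.get(), Z_FINISH); // keep the return value!
    // ret should be Z_STREAM_END; Z_BUF_ERROR here would mean out was still too small
    out.resize(zlib->total_out);
    deflateEnd(zlib.get());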
However, it is better to call deflate() multiple times in a loop, and for most practical applications it is necessary to call inflate() multiple times in a loop. This is seen in your comment, as well as in your (also inadequate) attempt to account for the possible size of the inflated data by multiplying by a thousand.
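A sketch of such an inflate() loop, again reusing the question's zlib member (next_in/avail_in assumed set), grows the output instead of guessing a multiplier:

    std::vector<char> out;
    char chunk[16384];
    int ret;
    do {
        zlib->avail_out = sizeof chunk;
        zlib->next_out = reinterpret_cast<Bytef*>(chunk);
        ret = inflate(zlib.get(), Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END)
            break; // inspect ret (Z_DATA_ERROR, Z_BUF_ERROR, ...) instead of ignoring it
        out.insert(out.end(), chunk, chunk + (sizeof chunk - zlib->avail_out));
    } while (ret != Z_STREAM_END);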
You can find a heavily commented example of how to use the zlib functions properly, with loops, at zlib Usage Example.

Exporting Plaintext AES 128 Key to buffer/file Windows Crypto API c++

I'm having a lot of difficulty understanding and implementing the Windows Crypto API to import and export keys in C++.
Despite reading through the MSDN documentation many, many times, I can't seem to get it to work the way I want.
Below is a snippet of code from what I'm working on.
if (CryptAcquireContext(&CryptoHandle, NULL, provPointer, PROV_RSA_AES, 0xF0000000))
{
    HCRYPTKEY aesKey;
    // We now have a context on Enhanced AES
    if (CryptGenKey(CryptoHandle, CALG_AES_128, CRYPT_EXPORTABLE, &aesKey))
    {
        DWORD dwBlobLen;
        BYTE* pbKeyBlob;
        // First call: query the required blob size
        CryptExportKey(aesKey, 0, PLAINTEXTKEYBLOB, 0, NULL, &dwBlobLen);
        if (pbKeyBlob = new BYTE[dwBlobLen])
        {
            // Second call: export the key into the buffer
            if (CryptExportKey(aesKey, NULL, PLAINTEXTKEYBLOB, 0, pbKeyBlob, &dwBlobLen))
            {
                // Blah blah
            }
        }
    }
}
(Where provPointer is a pointer to the Enhanced Crypto API provider string.)
As you might be able to tell from the snippet, I'm trying to export an AES-128 key as plaintext.
In the debugger it all executes fine (no visible errors), but I don't understand the outcome at all.
The first call to CryptExportKey fills dwBlobLen with 28. (What does this mean? Why?)
After the second CryptExportKey I've tried writing pbKeyBlob (which I assume points to the key) to a file, but I just end up with a constant set of bytes (the same on every try) followed by a set of bytes that is different every time (I assume this is some of the key), adding up to 28 bytes total.
I'd really appreciate it if someone could identify where I've gone wrong. I'm pretty clueless about the whole crypto lingo (sessions, machine keys, blobs, etc.).
In the future I'd like to be able to generate an AES key, use it, and export it to a file in a form I can import again later.
Thanks in advance.
I'm not an expert on the Windows Cryptography API (or on cryptography in general) but I believe I can shed some light on what's going on here.
The first call to CryptExportKey puts 28 in dwBlobLen because that is the size of the blob that will be created when the key is exported. This is in the MSDN docs: http://msdn.microsoft.com/en-us/library/windows/desktop/aa379931%28v=vs.85%29.aspx
As far as what you're doing wrong: you aren't doing anything wrong. You are asking CryptExportKey to export a plaintext blob, which has the following layout:

    typedef struct _PLAINTEXTKEYBLOB {
        BLOBHEADER hdr;
        DWORD dwKeySize;
        BYTE rgbKeyData[];
    } PLAINTEXTKEYBLOB, *PPLAINTEXTKEYBLOB;
As you can see, the blob starts with a header and a key size (together these are the constant set of bytes you reported, and should be 12 bytes long), followed by the key data (which is the part that changes every time, and should be 16 bytes long). Remember, you are generating a 128-bit key, which is 16 bytes.
The BLOBHEADER has the following layout:

    typedef struct _BLOBHEADER {
        BYTE bType;
        BYTE bVersion;
        WORD Reserved;
        ALG_ID aiKeyAlg;
    } BLOBHEADER;
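Putting the two layouts together, a small sketch (the function name is mine) that locates the raw key bytes inside the exported 28-byte blob:

    // 28 bytes total = 8-byte BLOBHEADER + 4-byte DWORD key size + 16-byte key
    const BYTE* FindKeyInBlob(const BYTE* pbKeyBlob, DWORD dwBlobLen, DWORD& outKeySize)
    {
        if (dwBlobLen < sizeof(BLOBHEADER) + sizeof(DWORD))
            return NULL;
        const BLOBHEADER* hdr = reinterpret_cast<const BLOBHEADER*>(pbKeyBlob);
        if (hdr->bType != PLAINTEXTKEYBLOB || hdr->aiKeyAlg != CALG_AES_128)
            return NULL;
        outKeySize = *reinterpret_cast<const DWORD*>(pbKeyBlob + sizeof(BLOBHEADER)); // 16 for AES-128
        return pbKeyBlob + sizeof(BLOBHEADER) + sizeof(DWORD); // start of rgbKeyData
    }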
By the way, from the doc on the CryptImportKey function, you can't import the PLAINTEXTKEYBLOB directly, because the BYTE array that you pass to CryptImportKey does not include the key size. You need to pass a buffer with the BLOBHEADER followed by the key data.

How to perform unpadding after decryption of stream using CryptoPP

I've got a stream to decrypt. I divide it into blocks and pass each block to the method below. The data I need to decrypt is encrypted in 16-byte blocks, and if the last block is shorter than 16 bytes, the remaining bytes are filled with padding. At decryption time, my last block's result then includes these additional padding bytes. How can I determine the length of the original data and return only that, or detect the padding bytes and remove them, considering that different paddings could be used?
void SymmetricAlgorithm::Decrypt(byte* buffer, size_t dataBytesSize) {
    MeterFilter meter(new ArraySink(buffer, dataBytesSize));
    CBC_Mode<CryptoPP::Rijndael>::Decryption dec(&Key.front(), Key.size(), &IV.front());
    StreamTransformationFilter* filter = new StreamTransformationFilter(dec, new Redirector(meter), PKCS_PADDING);
    ArraySource(buffer, dataBytesSize, true, filter);
    dec.Resynchronize(&IV.front());
}
Right now I'm trying it with PKCS_PADDING and Rijndael, but in general I might need to work with any algorithm and any padding.
I divide it into blocks and pass each block to the method below
In this case, you might consider calling ProcessBlock directly:

    CBC_Mode<Rijndael>::Decryption dec(...);
    // Assume 'b' is a 16-byte block
    dec.ProcessBlock(b);
The block is processed in place, so it's destructive. You will also be responsible for processing the last block, including the removal of padding.
By blocking and removing padding yourself, you are doing the work of the StreamTransformationFilter (and friends).
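If you do go that route, removing PKCS #7 padding by hand looks roughly like this (a hedged sketch; the function name is mine):

    #include <stdexcept>

    // PKCS #7: the last byte is the pad count, and every pad byte equals that count
    size_t Pkcs7UnpaddedLength(const byte* data, size_t totalLen, size_t blockSize)
    {
        if (totalLen == 0 || totalLen % blockSize != 0)
            throw std::invalid_argument("bad ciphertext length");
        byte pad = data[totalLen - 1];
        if (pad == 0 || pad > blockSize)
            throw std::invalid_argument("bad padding");
        for (size_t i = totalLen - pad; i < totalLen - 1; ++i)
            if (data[i] != pad)
                throw std::invalid_argument("bad padding");
        return totalLen - pad;
    }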
As it happens, I found what I needed by chance in the example from this question.
I appreciate your help, Gabriel L., but I didn't want to make my method avoid padding altogether. Sorry for the unclear explanation; I wanted to extract the plain data from the decrypted data, which includes padding bytes. The marked line in this code shows how to find the plain-data byte count.
void SymmetricAlgorithm::Decrypt(byte* buffer, size_t dataBytesSize) {
    MeterFilter meter(new ArraySink(buffer, dataBytesSize));
    CBC_Mode<CryptoPP::Rijndael>::Decryption dec(&Key.front(), Key.size(), &IV.front());
    StreamTransformationFilter* filter = new StreamTransformationFilter(dec, new Redirector(meter), PKCS_PADDING);
    ArraySource(buffer, dataBytesSize, true, filter);
    int t = meter.GetTotalBytes(); // plain-data byte count (the marked line)
    dec.Resynchronize(&IV.front());
}

How can GetDiskFreeSpaceEx return the (seemingly) wrong amount of disk space?

So I work on a device that outputs large images (anywhere from 30MB to 2GB+). Before we begin creating one of these images we check to see if there is sufficient disk space via GetDiskFreeSpaceEx. Typically (and in this case) we are writing to a shared folder on the same network. There are no user quotas on disk space at play.
Last night, in preparation for a demo, we kicked off a test run. During the run we experienced a failure: we needed 327391776 bytes and were told that we only had 186580992 available. The numbers from GetDiskFreeSpaceEx were:
User free space available: 186580992
Total free space available: 186580992
Those correspond to the QuadPart members of the two output arguments, lpFreeBytesAvailable and lpTotalNumberOfFreeBytes, of GetDiskFreeSpaceEx.
This code has been in use for years now and I have never seen a false negative. Here is the complete function:
long IsDiskSpaceAvailable( const char* inDirectory,
                           const _int64& inRequestedSize,
                           _int64& outUserFree,
                           _int64& outTotalFree,
                           _int64& outCalcRequest )
{
    ULARGE_INTEGER fba;
    ULARGE_INTEGER tnb;
    ULARGE_INTEGER tnfba;
    ULARGE_INTEGER reqsize;
    string dir;
    size_t len;

    dir = inDirectory;
    len = strlen( inDirectory );
    outUserFree = 0;
    outTotalFree = 0;
    outCalcRequest = 0;
    if( inDirectory[len-1] != '\\' )
        dir += "\\";
    // this is the value of inRequestedSize that was passed in
    // inRequestedSize = 3273917760;
    if( GetDiskFreeSpaceEx( dir.c_str(), &fba, &tnb, &tnfba ) )
    {
        outUserFree = fba.QuadPart;
        outTotalFree = tnfba.QuadPart;
        // this is computed dynamically given a specific compression
        // type, but for simplicity I have hard-coded the value that was used
        float compressionRatio = 10.0;
        reqsize.QuadPart = (ULONGLONG) (inRequestedSize / compressionRatio);
        outCalcRequest = reqsize.QuadPart;
        // this is what was triggered to cause the failure,
        // i.e., user free space was < the request size
        if( fba.QuadPart < reqsize.QuadPart )
            return( RetCode_OutOfSpace );
    }
    else
    {
        return( RetCode_Failure );
    }
    return( RetCode_OK );
}
So, a value of 3273917760 was passed to the function, which is the total amount of disk space needed before compression. The function divides this by the compression ratio of 10 to get the actual size needed.
When I checked the disk the share resides on, it had ~177GB free, far more than what was reported. After starting the test again without changing anything, it worked.
So my question is: what could cause something like this? As far as I can tell it is not a programming error, and, as I mentioned earlier, this code has been in use for a very long time with no problems.
I checked the event log of the remote machine and found nothing of interest around the time of the failure. I'm hoping someone out there has seen something similar before. Thanks in advance.
It might not be of any use, but it's "strange" that 177GB ~= 186580992 * 1000.
This could be explained by stack corruption (since you don't initialize your local variables) happening elsewhere in the code.
The expression "inRequestedSize / compressionRatio" doesn't have to use float for the division, and since you've silenced the "conversion loses precision" warning with the cast, you might actually hit an error too (though the number given in the example should work). You could simply do "inRequestedSize / 10".
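Those two points combined would look something like this (a sketch only, keeping the question's RetCode_* values):

    ULARGE_INTEGER fba = {}, tnb = {}, tnfba = {}; // zero-initialized, no stale stack data
    if( !GetDiskFreeSpaceEx( dir.c_str(), &fba, &tnb, &tnfba ) )
        return( RetCode_Failure );
    // integer division: no float round-trip, no silenced precision warning
    ULONGLONG reqsize = static_cast<ULONGLONG>( inRequestedSize ) / 10;
    if( fba.QuadPart < reqsize )
        return( RetCode_OutOfSpace );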
Last but not least, you don't say where the code is running. On Windows Mobile, the documentation of GetDiskFreeSpaceEx states:
When Mobile Encryption is enabled, the reporting behavior of this function changes. Each encrypted file has at least one 4-KB page of overhead associated. This function takes this overhead into account when it reports the amount of space available. That is, if a 128-KB disk contains a single 60-KB file, this function reports that 64 KB is available, subtracting the space occupied by both the file and its associated overhead.
Although this function reports the total available space, keep the space requirement for encrypted files in mind when estimating whether multiple new files will fit into the remaining space. Include the amount of space required for overhead when Mobile Encryption is enabled. Each file requires at least an additional 4 KB. For example, a single 60-KB file requires 64 KB, but two 30-KB files actually require 68 KB.