I've got the stream to decrypt. I divide it into blocks and pass each block to the method below. The data is encrypted in 16-byte blocks, and if the last block is shorter than 16 bytes, the remaining bytes are filled with padding. At decryption time, my last block therefore comes back including those extra padding bytes. How can I determine the length of the original data and return only that, or detect the padding bytes and remove them, considering that different padding schemes could be used?
void SymmetricAlgorithm::Decrypt(byte* buffer, size_t dataBytesSize) {
    MeterFilter meter(new ArraySink(buffer, dataBytesSize));
    CBC_Mode<CryptoPP::Rijndael>::Decryption dec(&Key.front(), Key.size(), &IV.front());
    StreamTransformationFilter* filter = new StreamTransformationFilter(dec, new Redirector(meter), PKCS_PADDING);
    ArraySource(buffer, dataBytesSize, true, filter);
    dec.Resynchronize(&IV.front());
}
Right now I'm trying PKCS_PADDING and Rijndael, but in general I might need to work with any algorithm and any padding.
I divide it into blocks and pass each block to the method below
In this case, you might consider driving the CBC decryptor directly, one block at a time:
CBC_Mode<Rijndael>::Decryption dec(...);
// Assume 'b' is a 16-byte block
dec.ProcessString(b, 16); // decrypt one block in place
The block is processed in place, so it's destructive. You will also be responsible for processing the last block, including the removal of padding.
By blocking and removing padding yourself, you are doing the work of the StreamTransformationFilter (and friends).
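If you do take the manual route, PKCS#7-style padding is easy to strip: the last byte of the final decrypted block is the number of padding bytes. A minimal sketch, assuming the whole decrypted buffer is available and that byte is Crypto++'s byte typedef:
// Strip PKCS#7 padding from a decrypted, block-aligned buffer.
// Returns the plaintext length, or 0 if the padding looks invalid.
size_t StripPkcs7Padding(const byte* data, size_t length, size_t blockSize) {
    if (length == 0 || length % blockSize != 0) return 0;
    byte pad = data[length - 1];
    if (pad == 0 || pad > blockSize) return 0;
    for (size_t i = length - pad; i < length; ++i)
        if (data[i] != pad) return 0; // every padding byte must equal the pad count
    return length - pad;
}
Other schemes (zero padding, one-and-zeroes padding) need different checks, which is one reason letting StreamTransformationFilter handle the padding, as in the approach further down, is usually simpler.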
As it happens, I found what I needed by chance in the example from this question.
I appreciate your help, Gabriel L., but I didn't want to make my method avoid padding altogether. Sorry for the unclear explanation: I wanted to extract the plain data from the decrypted data, which includes the padding bytes. The commented line in the code below shows how to find out the plain-data byte count.
void SymmetricAlgorithm::Decrypt(byte* buffer, size_t dataBytesSize) {
    MeterFilter meter(new ArraySink(buffer, dataBytesSize));
    CBC_Mode<CryptoPP::Rijndael>::Decryption dec(&Key.front(), Key.size(), &IV.front());
    StreamTransformationFilter* filter = new StreamTransformationFilter(dec, new Redirector(meter), PKCS_PADDING);
    ArraySource(buffer, dataBytesSize, true, filter);
    int t = meter.GetTotalBytes(); // plain data byte count
    dec.Resynchronize(&IV.front());
}
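Note that GetTotalBytes() returns Crypto++'s lword type, so the int cast can truncate for very large buffers; presumably you would also return t (or shrink the caller's buffer to t) so the trailing padding bytes never leave this method.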
I've got a project using Crypto++.
Crypto++ is its own project which builds into a static lib.
Aside from that, I have another large project using some of the Crypto++ classes and processing various algorithms, which also builds into a static lib.
Two of the functions are these:
long long MyClass::EncryptMemory(std::vector<byte> &inOut, char *cPadding, int rounds)
{
    typedef std::numeric_limits<char> CharNumLimit;
    char sPadding = 0;
    // Calculates padding and returns the value as the provided type
    sPadding = CalcPad<decltype(sPadding)>(reinterpret_cast<MyClassBase*>(m_P)->BLOCKSIZE, static_cast<int>(inOut.size()));
    // Push random chars as padding; we never care about the padding's content, so it doesn't matter what it is
    for (auto i = 0; i < sPadding; ++i)
        inOut.push_back(sRandom(CharNumLimit::min(), CharNumLimit::max()));
    std::size_t nSize = inOut.size();
    EncryptAdvanced(inOut.data(), nSize, rounds);
    if (cPadding)
        *cPadding = sPadding;
    return nSize;
}
// Removing the padding is the responsibility of the caller.
// Nevertheless the string was encrypted with padding,
// so the result here should be the right string plus a little padding.
long long MyClass::DecryptMemory(std::vector<byte> &inOut, int rounds)
{
    DecryptAdvanced(inOut.data(), inOut.size(), rounds);
    return inOut.size();
}
Where EncryptAdvanced and DecryptAdvanced pass the arguments to the Crypto++ object.
//...
AdvancedProcessBlocks(bytePtr, nullptr, bytePtr, length, 0);
//...
These functions have worked flawlessly so far; no modifications have been applied to them for months.
The logic around them has evolved, but the calls and the data passed to them did not change.
The data being encrypted / decrypted is rather small but has a dynamic size, which is being padded if (datasize % BLOCKSIZE) has a remainder.
Example: AES Blocksize is 16. Data is 31. Padding is 1. Data is now 32.
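For illustration only, the padding count in that example could come from a helper along these lines (a hypothetical CalcPad, not necessarily the poster's actual implementation):
// Hypothetical padding calculation: bytes needed to reach the next multiple of blockSize.
// For blockSize 16 and dataSize 31 this yields 1; for dataSize 32 it yields 0.
template <typename T>
T CalcPad(int blockSize, int dataSize)
{
    return static_cast<T>((blockSize - (dataSize % blockSize)) % blockSize);
}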
After encrypting and before decrypting, the encrypted string is byte-for-byte the same.
Running all this in debug mode apparently works as intended. Even when running this program on another computer (with VS installed for DLLs) it shows no difference. The data is correctly encrypted and decrypted.
Trying to run the same code in release mode results in a totally different encrypted string, and it does not decrypt correctly: "trash data" comes out. The wrongly encrypted or decrypted data is consistent; it is always the same trash. The key/password and the rounds/iterations are the same the whole time.
Additional info: The data is saved in a file (ios_base::binary) and correctly processed in debug mode, from two different programs in the same solution using the same static librar(y/ies).
What could be the cause of this Debug/Release problem?
I re-checked the git history a couple of times and debugged through the code for days, yet I cannot find any possible cause for this problem. If any information, aside from a (here rather impossible) MCVE, is needed, please leave a comment.
Apparently this is a bug in Crypto++. The minimum key length of Rijndael/AES is set to 8 instead of 16. Using an invalid key length of 8 bytes causes an out-of-bounds access to the array of Rcon values. This 8-byte key length is currently reported as valid and has to be fixed in Crypto++.
See this issue on GitHub for more information (ongoing conversation).
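Until that fix lands, a cheap defensive measure is to reject key sizes AES does not actually support before handing the key to the library. A minimal sketch (the function name and the use of std::vector are assumptions, not the poster's code):
#include <cstddef>
#include <stdexcept>
#include <vector>

// AES only accepts 16-, 24- or 32-byte keys; refuse anything else even if
// the library's own key-length check wrongly reports it as valid.
void CheckAesKeySize(const std::vector<unsigned char>& key)
{
    const std::size_t n = key.size();
    if (n != 16 && n != 24 && n != 32)
        throw std::invalid_argument("AES key must be 16, 24 or 32 bytes");
}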
I'm having a lot of difficulty understanding and implementing the Windows Crypto API to import and export keys in C++.
Despite reading through the MSDN documentation many, many times, I can't seem to get it to work the way I want.
Below is a snippet of code from what I'm working on.
if (CryptAcquireContext(&CryptoHandle, NULL, provPointer, PROV_RSA_AES, 0xF0000000))
{
    HCRYPTKEY aesKey;
    // We now have a context on Enhanced AES
    if (CryptGenKey(CryptoHandle, CALG_AES_128, CRYPT_EXPORTABLE, &aesKey))
    {
        DWORD dwBlobLen;
        BYTE* pbKeyBlob;
        CryptExportKey(aesKey, 0, PLAINTEXTKEYBLOB, 0, NULL, &dwBlobLen);
        if (pbKeyBlob = new BYTE[dwBlobLen])
        {
            if (CryptExportKey(aesKey, NULL, PLAINTEXTKEYBLOB, 0, pbKeyBlob, &dwBlobLen))
            {
                // Blah blah
            }
        }
    }
}
(Where provPointer is a pointer to the Enhanced Crypto API provider string.)
As you might be able to tell from the snippet, I'm trying to export an AES-128 key as plaintext.
In the debugger it all executes fine (no visible errors), but I don't understand the outcome at all.
The first call to CryptExportKey fills dwBlobLen with 28 (what does this mean? why?).
After the second CryptExportKey call I've tried writing pbKeyBlob (which I assume points to the key) to a file, but I just end up with a constant set of bytes (the same on every try) followed by a set of bytes that is different every time (I assume this is part of the key), adding up to 28 bytes total.
I'd really appreciate it if someone could identify where I've gone wrong. I'm pretty clueless with the whole crypto lingo (sessions, machine keys, blobs, etc.).
In the future I'd like to be able to generate an AES key, use it and export it into a file in a form where I can import it again later.
Thanks in advance.
I'm not an expert on the Windows Cryptography API (or on cryptography in general) but I believe I can shed some light on what's going on here.
The first call to CryptExportKey puts 28 in dwBlobLen because that is the size of the blob that will be created when the key is exported. This is in the MSDN docs: http://msdn.microsoft.com/en-us/library/windows/desktop/aa379931%28v=vs.85%29.aspx
As for what you're doing wrong: you aren't doing anything wrong. You are asking CryptExportKey to export a plaintext blob, which has the following layout:
typedef struct _PLAINTEXTKEYBLOB {
    BLOBHEADER hdr;
    DWORD      dwKeySize;
    BYTE       rgbKeyData[];
} PLAINTEXTKEYBLOB, *PPLAINTEXTKEYBLOB;
As you can see, the blob starts with a header and a key size (which is the constant set of bytes which you have reported, and should be 12 bytes long), followed by the key data (which is the data that changes every time, and should be 16 bytes long). Remember you are generating a 128 bit key (which is 16 bytes).
The BLOBHEADER has the following layout:
typedef struct _BLOBHEADER {
    BYTE   bType;
    BYTE   bVersion;
    WORD   Reserved;
    ALG_ID aiKeyAlg;
} BLOBHEADER;
By the way, from the doc on the CryptImportKey function, you can't import the PLAINTEXTBLOB directly, because the BYTE array that you pass to CryptImportKey does not include the keysize. You need to pass a buffer with the BLOBHEADER followed by the key data.
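For the stated goal of saving the blob to a file and loading it again later, a rough sketch of the re-import side (assuming pbKeyBlob/dwBlobLen hold the bytes read back from the file and CryptoHandle is the same kind of provider handle acquired above):
HCRYPTKEY hImportedKey = 0;
// Import the previously exported PLAINTEXTKEYBLOB back into the provider.
if (CryptImportKey(CryptoHandle, pbKeyBlob, dwBlobLen, 0, 0, &hImportedKey))
{
    // hImportedKey can now be used with CryptEncrypt / CryptDecrypt.
    CryptDestroyKey(hImportedKey);
}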
I need to encrypt a single block with AES. I can't use any modes like CBC and others. Every example I have seen uses streaming modes.
EDIT:
OK, I did it in the following manner, but I really dislike this approach.
void dec(const byte *key, const byte *xblock, const byte *cipher, byte *plain) {
    AESDecryption d;
    try {
        const NameValuePairs &nvp = MakeParameters("", 0);
        d.UncheckedSetKey(key, 16, nvp);
        d.ProcessAndXorBlock(cipher, xblock, plain);
    }
    catch (...) {}
}
AES in ECB mode is identical to single block encryption, except that you can feed it multiple blocks.
If you've got only CBC mode encryption available you can use the first block of a CBC encrypt using a (block sized) IV containing bytes all valued zero. The same goes for counter (CTR) mode encryption and a nonce containing bytes all valued zero (the counter only increases after the first block encrypt).
Crypto++ seems to be a higher-level crypto API, so it is better not to call the AES implementation directly.
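Following that advice, a single full block can be pushed through Crypto++'s ECB mode directly; a minimal sketch (the header path and the 16-byte key size are assumptions):
#include <cryptopp/aes.h>
#include <cryptopp/modes.h>
using namespace CryptoPP;

// Encrypt exactly one 16-byte block using ECB mode; a full block needs no padding.
void EncryptOneBlock(const byte* key, const byte* plain, byte* cipher)
{
    ECB_Mode<AES>::Encryption enc(key, 16);
    enc.ProcessData(cipher, plain, 16); // one block in, one block out
}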
I've recently been working with ID3v2.4.0.
Reading the 2.4.0 document, I found a particular part that I can't understand: the sync-safe integer.
Why does ID3v2 use this method?
Of course, I know why ID3v2 uses the unsynchronization scheme, which is used to keep an MPEG decoder from treating the ID3 tag as MPEG sync data.
But what I couldn't understand is why a sync-safe integer is used instead of the unsynchronization scheme (= inserting $00).
Is there any reason why they adopted the sync-safe integer for expressing the tag size instead of inserting $00?
These two methods result in exactly the same effect.
The ID3v2 document says that the size of unsynchronized data is not known in advance.
But that statement does not make sense.
If the tag data is stored in a buffer, one can know the size of the unsynchronized data simply by replacing the problematic $FF bytes with $FF 00.
Is there anyone who can help me?
I would presume it is for simplicity, and because the unsynch/synch scheme only makes sense when used on an MPEG file.
It is trivial to read in the four bytes and convert them to a regular integer:
// pseudo code
uint32_t size;
file.read( &size, sizeof(uint32_t) );
size = (size & 0x0000007F) |
( (size & 0x00007F00) >> 1 ) |
( (size & 0x007F0000) >> 2 ) |
( (size & 0x7F000000) >> 3 );
If they used the same unsynch scheme as frame data you would need to read each byte separately, look for the FF00 pattern, and reconstruct the integer byte by byte. Also, if the ‘size’ field in the header could be a variable number of bytes, due to unsynch bytes being inserted, the entire header would be a variable number of bytes. Simpler for them to say 'the header is always 10 bytes in size and it looks like this...'.
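For completeness, a byte-by-byte version of the sync-safe decode above, which also sidesteps the endianness of the raw uint32_t read (a sketch, not part of the original answer; seven significant bits per byte, most significant byte first):
#include <stdint.h>

// Decode a 4-byte ID3v2 sync-safe integer into a regular integer.
uint32_t DecodeSyncSafe(const unsigned char b[4])
{
    return (uint32_t(b[0] & 0x7F) << 21) |
           (uint32_t(b[1] & 0x7F) << 14) |
           (uint32_t(b[2] & 0x7F) << 7)  |
            uint32_t(b[3] & 0x7F);
}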
ID3v2 document says that the size of unsynchronized data is not known in advance. But that statement does not make sense. If tag data is stored in buffer, one can know the size of unsynchronized data after simply replacing the problematic character with $FF 00.
You are correct, it doesn't make sense. The size written in the id3v2 header and frame headers is the size after unsynchronisation, if any, was applied. However, it is permissible to write frame data without unsynching as id3v2 may be used for tagging files other than mp3, where the concept of unsynch/synch makes no sense. I think what section 6.2 was trying to say is 'regardless of whether this is an mp3 file, or a frame is written unsynched/synched, the frame size is always written in a mpeg synch-safe manner'.
ID3v2.4 frames can have the ‘Data Length Indicator’ flag set in the frame header, in which case you can find out how big a buffer is after synchronisation. Refer to section 4.1.2 of the spec.
Is there anyone who can help me?
Some helpful advice from someone who has written a conforming id3v2 tag reader: don't try to make sense of the spec. It surely was written by madmen and sadists. Just looking at it again is giving me nightmares.
I am developing a program in C++ and using the string container (std::string) to store network data from the socket (this part is peachy). I receive the data in frames of at most 1452 bytes at a time. The protocol uses a header that contains information about the length of the data portion of the packet, and the header has a fixed length of 20 bytes. My problem is that the string is giving me an unknown debug assertion; it asserts, but I get NO message about the string. Since I can receive more than a single packet in a frame at any time, I place all received data into the string, reinterpret_cast it to my data struct, calculate the total length of the packet, then copy the data portion of the packet into a string for regex processing. At this point I do a string erase, as in mybuff.Erase(totalPackLen); <~ THIS is what's calling the assert, but totalPackLen is less than the string's size.
Is there some convention I am missing here? Or is std::string really an inappropriate choice here? Ty.
Fixed it on my own. Rolled my own VERY simple buffer with a few C calls :)
int ret = recv(socket, m_buff, sizeof(m_buff), 0); // length argument assumed; the original snippet omitted it
if (ret > 0)
{
    BigBuff.append(m_buff, ret);
    while (BigBuff.size() > 16) {
        Header *hdr = reinterpret_cast<Header*>(&BigBuff[0]);
        if (ntohs(hdr->PackLen) <= BigBuff.size() - 20) {
            hdr->PackLen = ntohs(hdr->PackLen);
            string lData;
            lData.append(BigBuff.begin() + 20, BigBuff.begin() + 20 + hdr->PackLen);
            Parse(lData); // regex parsing helper function
            BigBuff.erase(hdr->PackLen + 20); // assert here when PackLen is 235 and the string length is 1458
        }
    }
}
From the code snippet you provided it appears that your packet comprises a fixed-length binary header followed by a variable length ASCII string as a payload. Your first mistake is here:
BigBuff.append(m_buff,ret);
There are at least two problems here:
1. Why the append? You presumably have dispatched with any previous messages. You should be starting with a clean slate.
2. Mixing binary and string data can work, but more often than not it doesn't. It is usually better to keep the binary and ASCII data separate. Don't use std::string for non-string data.
Append adds data to the end of the string. The very next statement after the append is a test for a length of 16, which says to me that you should have started fresh. In the same vein you do that reinterpret cast from BigBuff[0]:
Header *hdr = reinterpret_cast<Header*>(&BigBuff[0]);
Because of your use of append, you are perpetually dealing with the header from the first packet received rather than the current packet. Finally, there's that erase:
BigBuff.erase(hdr->PackLen + 20);
Many problems here:
- If the packet length and the return value from recv are consistent the very first call will do nothing (the erase is at but not past the end of the string).
- There is something very wrong if the packet length and the return value from recv are not consistent. It might mean, for example, that multiple physical frames are needed to form a single logical frame, and that in turn means you need to go back to square one.
- Suppose the physical and logical frames are one and the same, you're still going about this all wrong. As noted, the first time around you are erasing exactly nothing. That append at the start of the loop is exactly what you don't want to do.
Serialization oftentimes is a low-level concept and is best treated as such.
Your comment doesn't make sense:
BigBuff.erase(hdr->PackLen + 20); // assert here when PackLen is 235 and the string length is 1458
BigBuff.erase(hdr->PackLen + 20) will erase from hdr->PackLen + 20 onwards until the end of the string. From the description of the code, it seems to me that you're erasing beyond the end of the content data. Here's the reference for std::string::erase() for you.
Needless to say, std::string is entirely inappropriate here; it should be std::vector.
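For what it's worth, the erase the poster most likely wanted removes the just-consumed packet from the front of the buffer rather than truncating the tail; a sketch using the names from the question:
// Drop the 20-byte header plus the payload that was just parsed,
// keeping any bytes that belong to the next packet in the buffer.
BigBuff.erase(0, hdr->PackLen + 20);
With a std::vector buffer, the equivalent would be BigBuff.erase(BigBuff.begin(), BigBuff.begin() + hdr->PackLen + 20).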