C++ Detour on winsock recv hooking - custom packet

I'm trying to add an additional packet in my MyRecv function, but I don't know why it isn't working. I tried parsing incoming packets and the function works fine.
So my way of sending the custom packet to the application is probably what's wrong.
In short, I just want to send a prepared packet to the application.
I captured this packet with WPE PRO.
Code with the MyRecv function:
INT WINAPI MyRecv(SOCKET sock, CHAR* buf, INT len, INT flags) {
    CHAR buffer[256];
    char msg2[] = { 0x1B, 0, 0x04, 0x06, 0, 0x5A, 0x65, 0x6E, 0x74, 0x61,
                    0x78, 0x06, 0, 0x5A, 0x65, 0x6E, 0x74, 0x61, 0x78, 0x05, 0x07, 0,
                    0x66, 0x61, 0x6A, 0x6E, 0x69, 0x65, 0x65 };
    int ret = precv(sock, buf, len, flags);
    if (ret <= 0) {
        return ret;
    }
    if (fake_recv) {
        char tmp[256];
        fake_recv = false;
        printf("Fake1-> Lenght:%d Size:%d", len, strlen(buf));
        strcat(buf, msg2);
        printf("Fake2-> Lenght:%d Size:%d", len, strlen(buf));
        return ret;
    }
    return ret;
}

msg2 isn't a null-terminated string. In fact it has an interior null. So using strlen() and strcat() with it is never going to work.
Similarly, you neither know nor care what's already in buf, so calling strcat() and strlen() on it is both pointless and dangerous: if it contains no null byte at all you will overrun it; at best you over-report the length, and at worst you crash.
And you're not adjusting ret for the extra data added into the buffer.
And no useful purpose is accomplished by declaring the unused tmp[] variable.
Try this:
if (fake_recv) {
    fake_recv = false;
    printf("Fake1-> Length:%d Received:%d", len, ret);
    int len2 = min(len - ret, (int)sizeof msg2);  // never write past the caller's buffer
    memcpy(&buf[ret], msg2, len2);
    ret += len2;  // report the combined length to the application
    printf("Fake2-> Length:%d Received:%d", len, ret);
    return ret;
}
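For completeness, here is a minimal sketch of the whole hook with that fix folded in. The precv trampoline and the fake_recv flag are assumed to be defined elsewhere by your detour setup, as in the question:

// Assumed to be set up elsewhere by the detour/hooking code.
typedef INT (WINAPI *RECV_FN)(SOCKET, CHAR*, INT, INT);
extern RECV_FN precv;   // trampoline to the original recv
extern bool fake_recv;  // set to true when a fake packet should be injected

INT WINAPI MyRecv(SOCKET sock, CHAR* buf, INT len, INT flags) {
    static const char msg2[] = { 0x1B, 0, 0x04, 0x06, 0, 0x5A, 0x65, 0x6E, 0x74, 0x61,
                                 0x78, 0x06, 0, 0x5A, 0x65, 0x6E, 0x74, 0x61, 0x78, 0x05, 0x07, 0,
                                 0x66, 0x61, 0x6A, 0x6E, 0x69, 0x65, 0x65 };
    int ret = precv(sock, buf, len, flags);
    if (ret <= 0) {
        return ret;  // error or connection closed: pass straight through
    }
    if (fake_recv) {
        fake_recv = false;
        // Append the raw bytes after the real data; never use str* functions
        // on binary packet data.
        int len2 = min(len - ret, (int)sizeof msg2);
        memcpy(&buf[ret], msg2, len2);
        ret += len2;
    }
    return ret;
}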

Related

OpenSSL C++ AES decryption adding unexpected symbols

I'm trying to decrypt in C++ a file that has been encrypted with the Linux command: openssl enc -aes-256-cbc -nosalt -K "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855" -iv "5DF6E0E2761359D30A8275058E299FCC" -p -in file.json -out file.enc
The decryption works well, but unexpected symbols appear at the end of the file when I print it in the terminal.
Here is my C++ code, can someone help me?
int main(int argc, char **argv)
{
    EVP_CIPHER_CTX *en, *de;
    en = EVP_CIPHER_CTX_new();
    de = EVP_CIPHER_CTX_new();
    unsigned char key_data[] = {0xE3, 0xB0, 0xC4, 0x42, 0x98, 0xFC, 0x1C, 0x14,
                                0x9A, 0xFB, 0xF4, 0xC8, 0x99, 0x6F, 0xB9, 0x24,
                                0x27, 0xAE, 0x41, 0xE4, 0x64, 0x9B, 0x93, 0x4C,
                                0xA4, 0x95, 0x99, 0x1B, 0x78, 0x52, 0xB8, 0x55};
    int key_data_len = 32;
    std::ifstream in("file.enc");
    std::string contents((std::istreambuf_iterator<char>(in)),
                         std::istreambuf_iterator<char>());
    unsigned char iv[] = {0x5D, 0xF6, 0xE0, 0xE2, 0x76, 0x13, 0x59, 0xD3,
                          0x0A, 0x82, 0x75, 0x05, 0x8E, 0x29, 0x9F, 0xCC};
    EVP_CIPHER_CTX_init(en);
    EVP_EncryptInit_ex(en, EVP_aes_256_cbc(), NULL, key_data, iv);
    EVP_CIPHER_CTX_init(de);
    EVP_DecryptInit_ex(de, EVP_aes_256_cbc(), NULL, key_data, iv);
    char *plaintext;
    int len = strlen(contents.c_str())+1;
    plaintext = (char *)aes_decrypt(de, (unsigned char *)contents.c_str(), &len);
    printf("%s", plaintext);
    free(plaintext);
    EVP_CIPHER_CTX_cleanup(en);
    EVP_CIPHER_CTX_cleanup(de);
}
And the function to decrypt:
unsigned char *aes_decrypt(EVP_CIPHER_CTX *e, unsigned char *ciphertext, int *len)
{
    int p_len = *len, f_len = 0;
    unsigned char *plaintext = (unsigned char *)calloc(sizeof(char*), p_len);
    EVP_DecryptInit_ex(e, NULL, NULL, NULL, NULL);
    EVP_DecryptUpdate(e, plaintext, &p_len, ciphertext, *len);
    EVP_DecryptFinal_ex(e, plaintext+p_len, &f_len);
    *len = p_len + f_len;
    return plaintext;
}
SOLVED: I had to remove the PKCS#7 padding added by the encoder.
The input length is incorrect - strlen(contents.c_str()) + 1 will return either the length including an additional null byte, which should not be included as it is not part of the encrypted data, or the length up to the first null byte in the encrypted data if there is one. You should use contents.size() instead.
This causes EVP_DecryptUpdate to decrypt all the data, including the last block with padding, and expect more data, and EVP_DecryptFinal_ex to fail after that. With the correct length the padding from the last block is stripped as expected.
There's also an issue with the output buffer length - it's sizeof(char*) * p_len instead of sizeof(char) * (p_len + EVP_CIPHER_block_size(EVP_aes_256_cbc())). EVP_DecryptUpdate requires the output buffer to be large enough to hold the input length plus the cipher block size.
You should also make sure to use only len bytes of the plaintext buffer - printf will output data up to a null byte, which might not be included in the decrypted data.
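Putting those fixes together, a corrected decrypt path might look like the sketch below (same key, IV, and file as above; the unused encryption context is dropped, and the file is opened in binary mode so text-mode translation can't mangle the bytes):

unsigned char *aes_decrypt(EVP_CIPHER_CTX *e, unsigned char *ciphertext, int *len)
{
    int p_len = 0, f_len = 0;
    // EVP_DecryptUpdate needs room for the input plus one extra cipher block.
    unsigned char *plaintext =
        (unsigned char *)malloc(*len + EVP_CIPHER_block_size(EVP_aes_256_cbc()));
    EVP_DecryptInit_ex(e, NULL, NULL, NULL, NULL);
    EVP_DecryptUpdate(e, plaintext, &p_len, ciphertext, *len);
    EVP_DecryptFinal_ex(e, plaintext + p_len, &f_len);  // strips the PKCS#7 padding
    *len = p_len + f_len;
    return plaintext;
}

// In main():
std::ifstream in("file.enc", std::ios::binary);
std::string contents((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());
int len = (int)contents.size();  // not strlen(): ciphertext may contain null bytes
unsigned char *plaintext = aes_decrypt(de, (unsigned char *)contents.c_str(), &len);
fwrite(plaintext, 1, len, stdout);  // write exactly len bytes, no "%s"
free(plaintext);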

Missing something when decompressing HTTP gzipped response

I have recently been setting up various testing environments, and in this case I need to read and decode a gzip response from an HTTP server. I know what I have so far works, as I have tested it with Wireshark and hardcoded data as outlined below; my question is what is wrong with how I am handling the gzipped data from an HTTP server?
Here is what I'm using:
From this thread http://www.qtcentre.org/threads/30031-qUncompress-data-from-gzip I am using the gzipDecompress function with the data provided, and I can see that it works.
QByteArray gzipDecompress( QByteArray compressData )
{
    //Hardcode sample data
    const char dat[40] = {
        0x1F, 0x8B, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xAA, 0x2E, 0x2E, 0x49, 0x2C, 0x29,
        0x2D, 0xB6, 0x4A, 0x4B, 0xCC, 0x29, 0x4E, 0xAD, 0x05, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0x03, 0x00,
        0x2A, 0x63, 0x18, 0xC5, 0x0E, 0x00, 0x00, 0x00};
    compressData = QByteArray::fromRawData( dat, 40);

    //decompress GZIP data
    //strip header and trailer
    compressData.remove(0, 10);
    compressData.chop(12);

    const int buffersize = 16384;
    quint8 buffer[buffersize];

    z_stream cmpr_stream;
    cmpr_stream.next_in = (unsigned char *)compressData.data();
    cmpr_stream.avail_in = compressData.size();
    cmpr_stream.total_in = 0;
    cmpr_stream.next_out = buffer;
    cmpr_stream.avail_out = buffersize;
    cmpr_stream.total_out = 0;
    cmpr_stream.zalloc = Z_NULL;
    cmpr_stream.zalloc = Z_NULL;

    if( inflateInit2(&cmpr_stream, -8 ) != Z_OK) {
        qDebug() << "cmpr_stream error!";
    }

    QByteArray uncompressed;
    do {
        int status = inflate( &cmpr_stream, Z_SYNC_FLUSH );
        if(status == Z_OK || status == Z_STREAM_END) {
            uncompressed.append(QByteArray::fromRawData((char *)buffer, buffersize - cmpr_stream.avail_out));
            cmpr_stream.next_out = buffer;
            cmpr_stream.avail_out = buffersize;
        } else {
            inflateEnd(&cmpr_stream);
        }
        if(status == Z_STREAM_END) {
            inflateEnd(&cmpr_stream);
            break;
        }
    } while(cmpr_stream.avail_out == 0);
    return uncompressed;
}
When the data is hardcoded as in that example, the string is decompressed. However, when I read the response from an HTTP server and store it in a QByteArray, it cannot be decompressed. I am reading the response as follows, and I can see it works when comparing the results in Wireshark:
//Read that length of encoded data
char EncodedData[ LengthToRead ];
memset( EncodedData, 0, LengthToRead );
recv( socketDesc, EncodedData, LengthToRead, 0 );
EndOfData = true;
//EncodedDataBytes = QByteArray((char*)EncodedData);
EncodedDataBytes = QByteArray::fromRawData(EncodedData, LengthToRead );
I assume I am missing some header or byte order when reading the response, but at the moment I have no idea what. Any help very welcome!
EDIT: So I have been looking at this a little more over the weekend, and at the moment I'm trying to test the encode and decode of the given hex string, which is "{status:false}" in plain text. I have tried online gzip encoders such as http://www.txtwizard.net/compression but they return ASCII text that does not match the hex string in the above code. When I use PHP's gzcompress("{status:false}", 1) function it gives me non-ASCII values that I cannot copy/paste to test. So I am wondering if there is any standard reference for gzip encode/decode? It is definitely not in some special encoding, since both Firefox and Wireshark can decode the packets, but my software cannot.
So the issue was with my gzip function; the correct function I found at this link: uncompress error when using zlib
As mentioned above by Cornstalks, the inflateInit2 function needs to take MAX_WBITS+16 as its window-bits argument; I think that was the issue. If anybody knows any libraries or plugins that handle this, please post them here! I am surprised that this had to be coded manually when it is so commonly used by HTTP clients/servers.
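For anyone landing here, a minimal sketch of the fix: with MAX_WBITS + 16, zlib parses the gzip header and trailer itself, so there is no need to strip them by hand (this assumes the same Qt/zlib setup as the question):

// Sketch: let zlib handle the gzip wrapper itself via MAX_WBITS + 16,
// instead of stripping the 10-byte header and the trailer manually.
QByteArray gzipDecompress(const QByteArray &compressData)
{
    z_stream strm;
    strm.zalloc = Z_NULL;
    strm.zfree = Z_NULL;
    strm.opaque = Z_NULL;
    strm.next_in = (Bytef *)compressData.data();
    strm.avail_in = compressData.size();

    // MAX_WBITS + 16 tells inflate to expect a gzip header/trailer.
    if (inflateInit2(&strm, MAX_WBITS + 16) != Z_OK)
        return QByteArray();

    QByteArray uncompressed;
    char buffer[16384];
    int status = Z_OK;
    do {
        strm.next_out = (Bytef *)buffer;
        strm.avail_out = sizeof buffer;
        status = inflate(&strm, Z_NO_FLUSH);
        if (status != Z_OK && status != Z_STREAM_END)
            break;  // Z_DATA_ERROR etc.: stop and return what we have
        uncompressed.append(buffer, sizeof buffer - strm.avail_out);
    } while (status != Z_STREAM_END);

    inflateEnd(&strm);
    return uncompressed;
}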

Function returning wrong value on LPC824

I am working on porting an application running on an Arduino Mega to the LPC824. The following piece of code behaves differently on the two platforms.
/**
 * Calculation of CMAC
 */
void cmac(const uint8_t* data, uint8_t dataLength) {
    uint8_t trailer[1] = {0x80};
    uint8_t bytes[_lenRnd];
    uint8_t temp[_lenRnd];
    memcpy(temp, data, dataLength);
    concatArray(temp, dataLength, trailer, 1);
    dataLength++;
    addPadding(temp, dataLength);
    memcpy(bytes, _sk2, _lenRnd);
    xorBytes(bytes, temp, _lenRnd);
    aes128_ctx_t ctx;
    aes128_init(_sessionkey, &ctx);
    uint8_t* chain = aes128_enc_sendMode(bytes, _lenRnd, &ctx, _ivect);
    Board_UARTPutSTR("chain\n\r");
    printBytes(chain, 16, true);
    memcpy(_ivect, chain, _lenRnd);
    //memcpy(_ivect, aes128_enc_sendMode(bytes,_lenRnd,&ctx,_ivect), _lenRnd);
    memcpy(_cmac, _ivect, _lenRnd);
    Board_UARTPutSTR("Initialization vector\n\r");
    printBytes(_ivect, 16, true);
}
I am expecting a value like {0x5d, 0xa8, 0x0f, 0x1f, 0x1c, 0x03, 0x7f, 0x16, 0x7e, 0xe5, 0xfd, 0xf3, 0x45, 0xb7, 0x73, 0xa2} for the chain variable, but the following function behaves differently. The print inside the function shows the correct value, the one I want ({0x5d, 0xa8, 0x0f, 0x1f, 0x1c, 0x03, 0x7f, 0x16, 0x7e, 0xe5, 0xfd, 0xf3, 0x45, 0xb7, 0x73, 0xa2}).
But when the function returns, chain has a different value from what I am expecting; I get the following value for chain: {0x00, 0x20, 0x00, 0x10, 0x03, 0x01, 0x00, 0x00, 0xd5, 0x00, 0x00, 0x00, 0xd7, 0x00, 0x00, 0x00}
Inside the function the result is correct, but it returns a wrong value to the calling function. Why is this happening?
uint8_t* aes128_enc_sendMode(unsigned char* data, unsigned short len, aes128_ctx_t* key,
                             const unsigned char* iv) {
    unsigned char tmp[16];
    uint8_t chain[16];
    unsigned char c;
    unsigned char i;
    memcpy(chain, iv, 16);
    while (len >= 16) {
        memcpy(tmp, data, 16);
        //xorBytes(tmp,chain,16);
        for (i = 0; i < 16; i++) {
            tmp[i] = tmp[i] ^ chain[i];
        }
        aes128_enc(tmp, key);
        for (i = 0; i < 16; i++) {
            //c = data[i];
            data[i] = tmp[i];
            chain[i] = tmp[i];
        }
        len -= 16;
        data += 16;
    }
    Board_UARTPutSTR("Chain!!!:");
    printBytes(chain, 16, true);
    return chain;
}
A good start with an issue like this is to delete as much as you can while still reproducing the error; with a minimal code example the answer is typically clear. I have done that for you here.
uint8_t* aes128_enc_sendMode(void) {
    uint8_t chain[16];
    return chain;
}
The chain variable is local to the function; it ceases to be defined once the function exits. Accessing a pointer to that variable causes undefined behaviour, so don't do it.
In practice the pointer to the array still exists and points to an arbitrary block of memory. This block of memory is no longer reserved and can be overwritten at any time.
I suspect it works on the AVR because it is a simple 8-bit chip and that piece of memory was sitting unmolested by the time you used it. The ARM applies more aggressive optimisations, possibly keeping the whole array in registers, so the data doesn't survive the transition.
tl;dr: you need to malloc() any arrays that you want to live past the function's exit. Be careful: malloc and embedded systems go together like diesel and styrofoam, it gets messy real quick.
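On a small embedded target, a caller-supplied output buffer is usually the safer alternative to malloc(). A sketch of what that could look like for this function, keeping the question's logic otherwise:

// Sketch: the caller owns the storage, so nothing dangles and no heap is needed.
void aes128_enc_sendMode(unsigned char* data, unsigned short len, aes128_ctx_t* key,
                         const unsigned char* iv, uint8_t chain_out[16]) {
    memcpy(chain_out, iv, 16);
    while (len >= 16) {
        unsigned char tmp[16];
        memcpy(tmp, data, 16);
        for (unsigned char i = 0; i < 16; i++) {
            tmp[i] ^= chain_out[i];    // CBC: XOR with the previous block
        }
        aes128_enc(tmp, key);
        for (unsigned char i = 0; i < 16; i++) {
            data[i] = tmp[i];
            chain_out[i] = tmp[i];     // the last ciphertext block is the new chain
        }
        len -= 16;
        data += 16;
    }
}

// Caller:
//   uint8_t chain[16];
//   aes128_enc_sendMode(bytes, _lenRnd, &ctx, _ivect, chain);
//   memcpy(_ivect, chain, _lenRnd);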

CM_Get_Device_Interface_List returns CR_INVALID_POINTER

So, I have a little problem with the CM_Get_Device_Interface_List function. It returns error code 3, which is CR_INVALID_POINTER, but when I call the CM_Get_Device_Interface_List_Size function, it returns success.
ULONG lenght = 0;
PWSTR DevicePath = NULL;
CONFIGRET cr = CR_SUCCESS;
cr = CM_Get_Device_Interface_List_Size(&lenght, (LPGUID)&HWN_DEVINTERFACE_NLED, NULL, CM_GET_DEVICE_INTERFACE_LIST_PRESENT); // success
if (cr != CR_SUCCESS)
{
    // error handling
}
cr = CM_Get_Device_Interface_List((LPGUID)&HWN_DEVINTERFACE_NLED, NULL, DevicePath, lenght, CM_GET_DEVICE_INTERFACE_LIST_PRESENT); // error
if (cr != CR_SUCCESS)
{
    // error handling
}
DEFINE_GUID(HWN_DEVINTERFACE_NLED,
    0x6b2a25e2, 0xaaf5, 0x482c, 0x99, 0xa5, 0x62, 0x05, 0xcd, 0xcc, 0x17, 0x6a); // GUID declaration
So, why is the pointer invalid?
A bit late, and you've probably figured this out, but in case someone else comes across this: I believe the reason you are getting the invalid pointer is that you are passing a null pointer as the buffer (DevicePath). It must be allocated with the size returned by your first call.
Example (cleaned up a bit):
ULONG bufferSize = 0;
if (CM_Get_Device_Interface_List_Size(&bufferSize, (LPGUID)&HWN_DEVINTERFACE_NLED, NULL, CM_GET_DEVICE_INTERFACE_LIST_PRESENT) == CR_SUCCESS)
{
    // The size is returned in characters, so scale by sizeof(WCHAR) for malloc.
    PWSTR buffer = (PWSTR)malloc(bufferSize * sizeof(WCHAR));
    if (CM_Get_Device_Interface_List((LPGUID)&HWN_DEVINTERFACE_NLED, NULL, buffer, bufferSize, CM_GET_DEVICE_INTERFACE_LIST_PRESENT) == CR_SUCCESS)
    {
        // buffer should now contain a list of NULL-terminated unicode strings
    }
    if (buffer)
    {
        free(buffer);
    }
}
DEFINE_GUID(HWN_DEVINTERFACE_NLED,
    0x6b2a25e2, 0xaaf5, 0x482c, 0x99, 0xa5, 0x62, 0x05, 0xcd, 0xcc, 0x17, 0x6a); // GUID declaration
Alternate example (no malloc):
#define BUFFER_SIZE 4096 // 4k buffer should be plenty
WCHAR buffer[BUFFER_SIZE];
if (CM_Get_Device_Interface_List((LPGUID)&HWN_DEVINTERFACE_NLED, NULL, buffer, BUFFER_SIZE, CM_GET_DEVICE_INTERFACE_LIST_PRESENT) == CR_SUCCESS)
{
    // buffer should now contain a list of NULL-terminated unicode strings
}
DEFINE_GUID(HWN_DEVINTERFACE_NLED,
    0x6b2a25e2, 0xaaf5, 0x482c, 0x99, 0xa5, 0x62, 0x05, 0xcd, 0xcc, 0x17, 0x6a); // GUID declaration
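In either case, the returned buffer is a "multi-sz" list: each device interface path is NUL-terminated and the whole list ends with an extra NUL. A small sketch of walking it, assuming the buffer variable from either example above:

// Walk the double-NUL-terminated list of interface paths.
for (PWSTR current = buffer; *current != L'\0'; current += wcslen(current) + 1)
{
    wprintf(L"Interface: %ls\n", current);
}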

SizeOfImage member causing program crash

I'm trying to look for BYTE patterns in programs, but for some reason when I assign the value from MINFO.SizeOfImage to ModuleSize it causes the program I injected the DLL into to crash.
DWORD FindPattern(const BYTE* Pattern, SIZE_T PatternSize)
{
    DWORD ModuleBase = (DWORD)GetModuleHandle(NULL);
    DWORD ModuleSize = 0;
    MODULEINFO MINFO;
    HANDLE hProcess = OpenProcess(PROCESS_ALL_ACCESS, 0, GetCurrentProcessId());
    if(hProcess)
    {
        GetModuleInformation(hProcess, GetModuleHandle(NULL), &MINFO, sizeof(MODULEINFO));
        CloseHandle(hProcess);
        ModuleSize = MINFO.SizeOfImage;
    }
    else
        return 0;
    for(int i = 0; i < ModuleSize; i++)
    {
        if(memcmp((void*)(ModuleBase + i), Pattern, PatternSize) == 0)
            return ModuleBase + i;
    }
    return 0;
}
Your code worked just fine when I compiled it and injected it. I even tested it against the current FindPattern I am using and I didn't get any errors. Here's my code & yours:
bool Compare(const BYTE* pData, const BYTE* bMask, const char* szMask)
{
    for(; *szMask; ++szMask, ++pData, ++bMask)
        if(*szMask == 'x' && *pData != *bMask) return 0;
    return (*szMask) == NULL;
}
DWORD FindPattern(DWORD dwAddress, DWORD dwLen, BYTE *bMask, char *szMask)
{
    for(DWORD i = 0; i < dwLen; i++)
        if (Compare((BYTE*)(dwAddress + i), bMask, szMask)) return (DWORD)(dwAddress + i);
    return 0;
}
And then when I run this through it:
uint8 DecryptNeedle[] = {0x56, 0x8B, 0x74, 0x24, 0x08, 0x89, 0x71, 0x10,
0x0F, 0xB6, 0x16, 0x0F, 0xB6, 0x46, 0x01, 0x03,
0xC2, 0x8B, 0x51, 0x28, 0x25, 0xFF, 0x00, 0x00,
0x00, 0x89, 0x41, 0x04, 0x0F, 0xB6, 0x04, 0x10};
char DecryptMask[] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
DWORD addrDecrypt = FindPattern(dwModuleStartAddr, 0xA000, DecryptNeedle, DecryptMask);
DWORD decrypt2 = YourFindPattern(DecryptNeedle, 32);
The output is identical in both.
I would double-check your injection code, and check what else could be causing the error. Also, do a quick error check:
if(hProcess)
{
    if(!GetModuleInformation(hProcess, GetModuleHandle(NULL), &MINFO, sizeof(MODULEINFO)))
    {
        // error
    }
    CloseHandle(hProcess);
    ModuleSize = MINFO.SizeOfImage;
}