PE File Format Section Add On - c++

I'm confused about why they use - 1 here. Can someone explain what this line is doing in very, very low-level detail, please? Not just "oh, it's subtracting 1 structure" - I need to know more about the low level. Thanks.
// IMAGE_FIRST_SECTION(nt_headers) yields a pointer to section header 0. Adding an integer k to a
// PIMAGE_SECTION_HEADER advances the address by k * sizeof(IMAGE_SECTION_HEADER) bytes, so adding
// NumberOfSections - 1 lands on the final header, whose zero-based index is NumberOfSections - 1.
PIMAGE_SECTION_HEADER last_section = IMAGE_FIRST_SECTION(nt_headers) + (nt_headers->FileHeader.NumberOfSections - 1);
The code above is in the below function:
//Reference: http://www.codeproject.com/KB/system/inject2exe.aspx
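/**
 * Appends one section header to the section table of a PE32 image held in memory.
 *
 * Only header fields are edited: the new IMAGE_SECTION_HEADER is filled in, and
 * SizeOfImage and NumberOfSections are updated. The buffer is not grown and no
 * section contents are written, so the caller owns both of those steps.
 *
 * The function has no way to know how large the buffer behind image_addr is, so
 * e_lfanew and the section table are trusted to lie inside it. Callers parsing
 * untrusted files must bounds-check those offsets before calling.
 *
 * @param section_name  NUL-terminated name; at most 8 bytes are stored.
 * @param section_size  Virtual size of the new section in bytes.
 * @param image_addr    Start of the image (the "MZ" header).
 * @return Pointer to the new header inside the image, or NULL when the image is
 *         not a PE32 file, has no sections, or has no room for another header.
 */
PIMAGE_SECTION_HEADER add_section(const char *section_name, unsigned int section_size, void *image_addr) {
    if (section_name == NULL || image_addr == NULL) {
        return NULL;
    }

    PIMAGE_DOS_HEADER dos_header = (PIMAGE_DOS_HEADER)image_addr;
    if (dos_header->e_magic != IMAGE_DOS_SIGNATURE || dos_header->e_lfanew <= 0) {
        wprintf(L"Could not retrieve DOS header from %p", image_addr);
        return NULL;
    }

    PIMAGE_NT_HEADERS nt_headers = (PIMAGE_NT_HEADERS)((DWORD_PTR)dos_header + dos_header->e_lfanew);
    // Check the "PE\0\0" signature as well as the optional-header magic (0x010B = PE32).
    if (nt_headers->Signature != IMAGE_NT_SIGNATURE || nt_headers->OptionalHeader.Magic != 0x010B) {
        wprintf(L"Could not retrieve NT header from %p", dos_header);
        return NULL;
    }

    const WORD section_count = nt_headers->FileHeader.NumberOfSections;
    if (section_count == 0) {
        // With no sections, "last_section" would point in front of the table.
        return NULL;
    }

    // Pointer arithmetic is in units of IMAGE_SECTION_HEADER: index count-1 is the
    // last existing header, index count is the first free slot after it.
    PIMAGE_SECTION_HEADER first_section = IMAGE_FIRST_SECTION(nt_headers);
    PIMAGE_SECTION_HEADER last_section = first_section + (section_count - 1);
    PIMAGE_SECTION_HEADER new_section = first_section + section_count;

    // The new header must still fit inside the header region; otherwise the memset
    // below would overwrite whatever follows the headers.
    const DWORD_PTR new_header_end = (DWORD_PTR)(new_section + 1) - (DWORD_PTR)image_addr;
    if (new_header_end > nt_headers->OptionalHeader.SizeOfHeaders) {
        wprintf(L"No room for another section header in %p", image_addr);
        return NULL;
    }

    const size_t name_max_length = 8;
    memset(new_section, 0, sizeof(IMAGE_SECTION_HEADER));
    new_section->Characteristics = IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_EXECUTE | IMAGE_SCN_CNT_CODE;

    // Copy only the bytes that belong to the name. A fixed 8-byte memcpy reads past
    // the end of shorter strings; the memset above already supplies the zero padding.
    size_t name_length = 0;
    while (name_length < name_max_length && section_name[name_length] != '\0') {
        ++name_length;
    }
    memcpy(new_section->Name, section_name, name_length);

    new_section->Misc.VirtualSize = section_size;
    // File-side fields are rounded to FileAlignment, memory-side fields to SectionAlignment.
    new_section->PointerToRawData = align_to_boundary(last_section->PointerToRawData + last_section->SizeOfRawData,
                                                      nt_headers->OptionalHeader.FileAlignment);
    new_section->SizeOfRawData = align_to_boundary(section_size, nt_headers->OptionalHeader.FileAlignment);
    new_section->VirtualAddress = align_to_boundary(last_section->VirtualAddress + last_section->Misc.VirtualSize,
                                                    nt_headers->OptionalHeader.SectionAlignment);
    // SizeOfImage has to be a multiple of SectionAlignment.
    nt_headers->OptionalHeader.SizeOfImage = align_to_boundary(new_section->VirtualAddress + new_section->Misc.VirtualSize,
                                                               nt_headers->OptionalHeader.SectionAlignment);
    nt_headers->FileHeader.NumberOfSections++;
    return new_section;
}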

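In C and C++, array elements are indexed from 0 to n-1 (in FORTRAN from 1 to n). So, if you have a pointer p0 to the first element but want a pointer to the last element, you have to add n-1:
plast=p0+n-1. The addition counts elements, not bytes: the compiler multiplies n-1 by the size of one element (here sizeof(IMAGE_SECTION_HEADER)) before adding it to the address. That is all there is to it.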

Related

Pass Byte Array as std::vector<char> from Node.js to C++ Addon

I have some constraints where the addon is built with nan.h and v8 (not the new node-addon-api).
The end function is a part of a library. It accepts std::vector<char> that represents the bytes of an image.
I tried creating an image buffer from Node.js:
// readFileSync without an encoding argument already returns a Buffer holding the file's bytes.
const img = fs.readFileSync('./myImage.png');
// Buffer.from(buffer) copies those bytes into a second Buffer before the addon receives them.
myAddonFunction(Buffer.from(img));
I am not really sure how to continue from here. I tried creating a new vector with a buffer, like so:
std::vector<char> buffer(data);
But it seems like I need to give it a size, which I am unsure how to get. Regardless, even when I use the initial buffer size (from Node.js), the image fails to go through.
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
[1] 16021 abort (core dumped)
However, when I read the image directly from C++, it all works fine:
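// Load the whole file into a byte vector. Opening with std::ios::ate puts the read
// position at the end, so tellg() reports the file size.
std::ifstream ifs ("./myImage.png", std::ios::binary|std::ios::ate);
std::ifstream::pos_type pos = ifs.tellg();
// tellg() yields -1 when the stream is in a failed state (for example, the file could
// not be opened). Feeding that straight into the vector constructor requests an
// enormous allocation, so treat any failure as "zero bytes".
const std::streamoff file_size = ifs ? static_cast<std::streamoff>(pos) : -1;
std::vector<char> buffer(file_size > 0 ? static_cast<std::size_t>(file_size) : 0);
if (!buffer.empty()) {
    ifs.seekg(0, std::ios::beg);
    ifs.read(buffer.data(), static_cast<std::streamsize>(buffer.size()));
    // Keep only what was really read in case the read stopped early.
    buffer.resize(static_cast<std::size_t>(ifs.gcount()));
}
// further below, I pass "buffer" to the function and it works just fine.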
But of course, I need the image to come from Node.js. Maybe Buffer is not what I am looking for?
Here is an example based on N-API; I would also encourage you to take a look at a similar implementation based on node-addon-api (an easy-to-use C++ wrapper on top of N-API)
https://github.com/nodejs/node-addon-examples/tree/master/array_buffer_to_native/node-addon-api
#include <assert.h>
#include "addon_api.h"
#include "stdio.h"
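/**
 * N-API callback: sums the 32-bit integers stored in the ArrayBuffer passed as
 * the first JavaScript argument and returns the sum as a JS number.
 *
 * Every failure throws a JS error and returns immediately. napi_throw_error only
 * schedules the exception; without the return, native code would keep running on
 * an argument that was just rejected. Checks use explicit tests rather than
 * assert(), which is compiled out when NDEBUG is defined.
 *
 * Bytes left over when the buffer length is not a multiple of 4 are ignored.
 */
napi_value CArrayBuffSum(napi_env env, napi_callback_info info)
{
    const size_t MaxArgExpected = 1;
    napi_value args[MaxArgExpected];
    size_t argc = sizeof(args) / sizeof(napi_value);

    napi_status status = napi_get_cb_info(env, info, &argc, args, nullptr, nullptr);
    if (status != napi_ok)
    {
        napi_throw_error(env, "EINVAL", "Could not read the call arguments");
        return nullptr;
    }
    if (argc < 1)
    {
        napi_throw_error(env, "EINVAL", "Too few arguments");
        return nullptr;
    }

    // Reject anything that is not an ArrayBuffer, including primitives, before
    // asking for its backing store.
    napi_value buff = args[0];
    bool isArrayBuff = false;
    status = napi_is_arraybuffer(env, buff, &isArrayBuff);
    if (status != napi_ok || !isArrayBuff)
    {
        napi_throw_error(env, "EINVAL", "Expected an ArrayBuffer");
        return nullptr;
    }

    void *raw_data = NULL;
    size_t byte_length = 0;
    status = napi_get_arraybuffer_info(env, buff, &raw_data, &byte_length);
    if (status != napi_ok)
    {
        napi_throw_error(env, "EINVAL", "Could not access the ArrayBuffer contents");
        return nullptr;
    }

    // The element count is derived from the length the runtime reported, never
    // from a value supplied by the script, so the loop cannot leave the buffer.
    const int32_t *buff_data = static_cast<const int32_t *>(raw_data);
    const size_t element_count = byte_length / sizeof(int32_t);
    printf("\nC: Int32Array size = %zu, (ie: bytes=%zu)", element_count, byte_length);

    // Accumulate unsigned so that a large input wraps instead of triggering
    // signed-overflow undefined behaviour.
    uint32_t sum = 0;
    for (size_t i = 0; i < element_count; ++i)
    {
        sum += static_cast<uint32_t>(buff_data[i]);
        printf("\nC: Int32ArrayBuff[%zu] = %d", i, buff_data[i]);
    }

    napi_value rcValue;
    status = napi_create_int32(env, static_cast<int32_t>(sum), &rcValue);
    if (status != napi_ok)
    {
        napi_throw_error(env, "EINVAL", "Could not create the result value");
        return nullptr;
    }
    return rcValue;
}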
The JavaScript code to call the addon
'use strict';
const myaddon = require('bindings')('mync1');

// Builds a 10-element Int32Array holding 0, 5, ..., 45, passes its backing
// ArrayBuffer to the native addon and prints the sum the addon returns.
function test1() {
  const values = Int32Array.from({ length: 10 }, (_unused, idx) => idx * 5);
  const total = myaddon.ArrayBuffSum(values.buffer);
  console.log();
  console.log(`js: Sum of the array = ${total}`);
}

test1();
The Output of the code execution:
C: Int32Array size = 10, (ie: bytes=40)
C: Int32ArrayBuff[0] = 0
C: Int32ArrayBuff[1] = 5
C: Int32ArrayBuff[2] = 10
C: Int32ArrayBuff[3] = 15
C: Int32ArrayBuff[4] = 20
C: Int32ArrayBuff[5] = 25
C: Int32ArrayBuff[6] = 30
C: Int32ArrayBuff[7] = 35
C: Int32ArrayBuff[8] = 40
C: Int32ArrayBuff[9] = 45
js: Sum of the array = 225

ECIES 1363a-2004 implementation in openssl

I'm using openssl library to implement an ETSI standard and, more specifically, to realize communications with a PKI.
To do that I must use the ECIES encryption scheme but it isn't implemented in openssl.
I have found this piece of code here in the crypto++ google group:
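// ECIES decryption fragment. Expected input layout at pCiphertext:
//   [ sender's ephemeral public key | XOR-encrypted payload | HMAC-SHA1 tag ]
// Steps: ECDH with the local private key -> counter-mode SHA-1 key derivation ->
// verify the tag -> XOR-decrypt.
//
// Fails closed: Ret stays 0, uPlaintextSize stays 0 and pPlaintext stays NULL unless
// every step succeeds AND the tag matches. Plaintext is never produced from a message
// that failed authentication.
//
// NOTE(review): assumes uCiphertextSize / uPlaintextSize are unsigned byte counts and
// pCiphertext is a byte pointer - confirm against the enclosing function.
// NOTE(review): HMAC_CTX_init / HMAC_CTX_cleanup exist only before OpenSSL 1.1.0; newer
// releases make HMAC_CTX opaque and use HMAC_CTX_new / HMAC_CTX_free instead.
Ret = 0;
uPlaintextSize = 0;
ASSERT(pPlaintext == NULL);

const EC_GROUP* pGroup = EC_KEY_get0_group((EC_KEY*)m_pPrivKey);
EC_KEY* temp_key = EC_KEY_new_by_curve_name(EC_GROUP_get_curve_name(pGroup));
// Encoded length of a public key on this curve; the sender's key occupies that many
// bytes at the front of the message.
size_t uPubLen = i2o_ECPublicKey((EC_KEY*)m_pPrivKey, NULL);

CHashFunction GenFx(CHashFunction::eSHA1); // <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
const size_t tag_len = GenFx.GetSize();
int mac_key_len = 16;

// The message must hold at least the public key and the tag. Without this check the
// size subtraction below wraps around and the parser reads past the input.
if (temp_key != NULL && uPubLen > 0 && uCiphertextSize >= uPubLen + tag_len
    && o2i_ECPublicKey(&temp_key, (const byte**)&pCiphertext, uPubLen) != NULL) // warning: this moves the pCiphertext pointer
{
    uCiphertextSize -= uPubLen;

    size_t SecLen = (EC_GROUP_get_degree(pGroup) + 7) / 8;
    byte* pSec = new byte[SecLen];
    int ret = ECDH_compute_key(pSec, SecLen, EC_KEY_get0_public_key(temp_key), (EC_KEY*)m_pPrivKey, NULL);

    // A real check instead of ASSERT, which disappears from release builds.
    if (ret > 0 && (size_t)ret == SecLen)
    {
        uPlaintextSize = uCiphertextSize - tag_len;
        size_t GenLen = uPlaintextSize + mac_key_len;

        // Key derivation: Hash(secret || counter) for counter = 1, 2, ... until
        // enough bytes exist for the XOR keystream plus the MAC key.
        uint32 counter = 1;
        CBuffer GenHash;
        while(GenHash.GetSize() < GenLen)
        {
            GenFx.Add(pSec, SecLen);
            CBuffer Buff;
            Buff.WriteValue<uint32>(counter++, true);
            GenFx.Add(&Buff);
            GenFx.Finish();
            GenHash.AppendData(GenFx.GetKey(), GenFx.GetSize());
            GenFx.Reset();
        }
        GenHash.SetSize(GenLen); // truncate

        byte* key = GenHash.GetBuffer();
        byte* macKey = key + uPlaintextSize;

        // Fixed-size output buffer: HMAC_Final always writes a full digest, so a buffer
        // sized from the attacker-controlled message length could be overrun.
        unsigned char mac_result[EVP_MAX_MD_SIZE];
        unsigned int mac_len = 0; // HMAC_Final takes unsigned int*, not size_t*
        HMAC_CTX ctx;
        HMAC_CTX_init(&ctx);
        HMAC_Init_ex(&ctx, macKey, mac_key_len, EVP_sha1(), NULL);
        HMAC_Update(&ctx, pCiphertext, uPlaintextSize);
        HMAC_Final(&ctx, mac_result, &mac_len);
        HMAC_CTX_cleanup(&ctx);

        // Constant-time tag comparison: memcmp stops at the first differing byte, and
        // that timing difference tells a sender how much of a forged tag was right.
        unsigned char diff = (mac_len == tag_len) ? 0 : 1;
        for (size_t i = 0; i < tag_len && i < mac_len; i++)
            diff |= (unsigned char)(pCiphertext[uPlaintextSize + i] ^ mac_result[i]);
        Ret = (diff == 0) ? 1 : 0;

        if (Ret == 1)
        {
            pPlaintext = new byte[uPlaintextSize];
            for (size_t i = 0; i < uPlaintextSize; i++)
                pPlaintext[i] = pCiphertext[i] ^ key[i];
        }
        else
        {
            uPlaintextSize = 0; // nothing is handed back for an unauthenticated message
        }
        // NOTE(review): GenHash still holds the derived keys when it goes out of scope;
        // wipe it if CBuffer offers a way to do so.
    }

    // Wipe the shared secret before releasing it; array form matches new byte[].
    OPENSSL_cleanse(pSec, SecLen);
    delete[] pSec;
}
EC_KEY_free(temp_key);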
But I am not sure that it works correctly, and I don't know how to use this piece of code.
Has anyone already implemented this scheme?

C++ WINAPI Create Multiple Partitions on Mounted VHD

I managed to create a VHD and attach it. Afterwards, I created a disk (IOCTL CREATE_DISK) and set its layout using IOCTL_DISK_SET_DRIVE_LAYOUT_EX. Now, when I examine the disk through Disk Management, I see a 14 MB disk with a 7 MB partition, as expected.
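// Initialise the attached disk as MBR and write a layout with a single FAT32 partition.
// Every structure is zero-filled before use so that fields the code does not set are
// not sent to the driver as leftover stack contents.
int sign = 80001;
CREATE_DISK disk;
memset(&disk, 0, sizeof(disk));
disk.PartitionStyle = PARTITION_STYLE_MBR;
disk.Mbr.Signature = sign;

// DeviceIoControl needs a valid lpBytesReturned pointer whenever lpOverlapped is NULL.
DWORD bytes_returned = 0;
auto res = DeviceIoControl(device_handle, IOCTL_DISK_CREATE_DISK, &disk, sizeof(disk), NULL, 0, &bytes_returned, NULL);
if (res) {
    res = DeviceIoControl(device_handle, IOCTL_DISK_UPDATE_PROPERTIES, 0, 0, NULL, 0, &bytes_returned, NULL);
}

LARGE_INTEGER partition_size;
partition_size.QuadPart = 0xF00;

DWORD driver_layout_ex_len = sizeof(DRIVE_LAYOUT_INFORMATION_EX);
DRIVE_LAYOUT_INFORMATION_EX driver_layout_info;
memset(&driver_layout_info, 0, sizeof(DRIVE_LAYOUT_INFORMATION_EX));
driver_layout_info.Mbr.Signature = sign;
driver_layout_info.PartitionCount = 1;
driver_layout_info.PartitionStyle = PARTITION_STYLE_MBR;

PARTITION_INFORMATION_MBR mbr_info;
memset(&mbr_info, 0, sizeof(mbr_info));
mbr_info.BootIndicator = TRUE;
mbr_info.HiddenSectors = 32256 / 512;
mbr_info.PartitionType = PARTITION_FAT32;
mbr_info.RecognizedPartition = 1;

PARTITION_INFORMATION_EX part_info;
memset(&part_info, 0, sizeof(part_info));
part_info.PartitionStyle = PARTITION_STYLE_MBR;
part_info.PartitionNumber = 1;
part_info.RewritePartition = TRUE;
part_info.StartingOffset.QuadPart = 32256;
part_info.PartitionLength.QuadPart = partition_size.QuadPart/2 * 4096;
part_info.Mbr = mbr_info;

// The stack structure has room for exactly one PartitionEntry; writing entry [1]
// here would run past the end of driver_layout_info.
driver_layout_info.PartitionEntry[0] = part_info;

// Only write the layout if the disk was initialised; on failure call GetLastError().
BOOL res_layout = FALSE;
if (res) {
    res_layout = DeviceIoControl(device_handle, IOCTL_DISK_SET_DRIVE_LAYOUT_EX, &driver_layout_info, driver_layout_ex_len, NULL, 0, &bytes_returned, NULL);
}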
Now, how do I partitionize this disk into two partitions? I want to create another partition out of the unpartitioned part of the disk(the other half basically). It says in the documentation is that PartitionEntry is an array of variable size(No, it is not it is an array of size 1.) Do I call set layout IOCTL for every partition I want to create? If so, how do you go about that? Is multi-partitioning possible through WINAPI interface?
P.S: I am aware that people usually invoke diskpart for this line of work.
Edit:
Adding a second partition to the layout was corrupting my stack, so I took another route (the heap).
// Heap-allocated layout with room for two partition entries: the structure already
// contains one PARTITION_INFORMATION_EX, so one more is added to the size.
DWORD driver_layout_ex_len = sizeof(DRIVE_LAYOUT_INFORMATION_EX) + sizeof(PARTITION_INFORMATION_EX); // one layout+partition + partition
// calloc zero-fills the block.
// NOTE(review): the result is dereferenced below without a NULL check, and this snippet never frees it.
PDRIVE_LAYOUT_INFORMATION_EX driver_layout_info = (PDRIVE_LAYOUT_INFORMATION_EX) std::calloc(1, driver_layout_ex_len);
driver_layout_info->Mbr.Signature = sign;
driver_layout_info->PartitionCount = 2;
driver_layout_info->PartitionStyle = PARTITION_STYLE_MBR;
// omitted here..
// NOTE(review): part_info2 is declared without an initialiser; any field not assigned below
// holds an indeterminate value when the structure is copied into the layout.
PARTITION_INFORMATION_EX part_info2;
// The second partition starts where the first one ends.
part_info2.StartingOffset.QuadPart = 32256 + part_info.PartitionLength.QuadPart;
part_info2.RewritePartition = TRUE;
part_info2.PartitionLength.QuadPart = partition_size.QuadPart / 2 * 4096;
part_info2.PartitionNumber = 2;
part_info2.PartitionStyle = PARTITION_STYLE_MBR;
part_info2.Mbr = mbr_info;
driver_layout_info->PartitionEntry[0] = part_info;
// Index 1 is inside the allocation only because of the extra sizeof(PARTITION_INFORMATION_EX) above.
driver_layout_info->PartitionEntry[1] = part_info2;
// driver_layout_info is already a pointer here, so it is passed as-is (no address-of).
auto res_layout = DeviceIoControl(device_handle, IOCTL_DISK_SET_DRIVE_LAYOUT_EX, driver_layout_info, driver_layout_ex_len, NULL, 0, NULL, NULL);
// The error code is only meaningful when res_layout reports failure.
auto res_err = GetLastError();
Since the extra entry was overwriting my device_handle on the stack, I could not issue the IOCTL at all. This change eliminated that. Do not forget to pass driver_layout_info instead of &driver_layout_info after this change.
It says in the documentation is that PartitionEntry is an array of
variable size(No, it is not it is an array of size 1.)
"Some Windows structures are variable-sized,
beginning with a fixed header, followed by
a variable-sized array. When these structures
are declared,
they often declare an array of size 1 where the
variable-sized array should be." Refer to Raymond's blog.
Here DRIVE_LAYOUT_INFORMATION_EX structure is an example:
// Fixed header followed by a partition array that is declared with one element.
typedef struct _DRIVE_LAYOUT_INFORMATION_EX {
DWORD PartitionStyle;
// Number of PartitionEntry elements that follow the header.
DWORD PartitionCount;
union {
DRIVE_LAYOUT_INFORMATION_MBR Mbr;
DRIVE_LAYOUT_INFORMATION_GPT Gpt;
} DUMMYUNIONNAME;
// Declared as [1]; entries beyond index 0 are only valid when the caller allocated
// extra space after the structure for them.
PARTITION_INFORMATION_EX PartitionEntry[1];
} DRIVE_LAYOUT_INFORMATION_EX, *PDRIVE_LAYOUT_INFORMATION_EX;
With this declaration, you would allocate memory for one such
variable-sized DRIVE_LAYOUT_INFORMATION_EX structure like this:
PDRIVE_LAYOUT_INFORMATION_EX driver_layout_info = (PDRIVE_LAYOUT_INFORMATION_EX)malloc(FIELD_OFFSET(DRIVE_LAYOUT_INFORMATION_EX, PartitionEntry[NumberOfPartitions]));
You would initialize the structure like this (Use 2 partitions as example):
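// Build a variable-sized DRIVE_LAYOUT_INFORMATION_EX describing NumberOfPartitions
// equally sized MBR partitions laid out back to back.
DWORD NumberOfPartitions = 2;
LARGE_INTEGER partition_size;
partition_size.QuadPart = 0xF00;
const LONGLONG first_partition_offset = 32256;
const LONGLONG partition_length = partition_size.QuadPart / 2 * 4096;

PARTITION_INFORMATION_MBR mbr_info;
memset(&mbr_info, 0, sizeof(mbr_info)); // no stale stack bytes in unset fields
mbr_info.BootIndicator = TRUE;
mbr_info.HiddenSectors = 32256 / 512;
mbr_info.PartitionType = PARTITION_FAT32;
mbr_info.RecognizedPartition = TRUE;
// NOTE(review): every entry receives the same mbr_info, so both are flagged bootable and
// share one HiddenSectors value - confirm that this is what the second partition needs.

// One size for both the allocation and the IOCTL call. Computing the length a second
// way (sizeof + sizeof) can disagree with the allocation and tell the driver to read
// more bytes than were allocated.
DWORD driver_layout_ex_len = FIELD_OFFSET(DRIVE_LAYOUT_INFORMATION_EX, PartitionEntry[NumberOfPartitions]);
PDRIVE_LAYOUT_INFORMATION_EX driver_layout_info = (PDRIVE_LAYOUT_INFORMATION_EX)malloc(driver_layout_ex_len);
if (driver_layout_info != NULL) {
    // malloc does not clear memory; zero it so unset fields are well defined.
    memset(driver_layout_info, 0, driver_layout_ex_len);

    driver_layout_info->PartitionStyle = PARTITION_STYLE_MBR;
    // Must match the number of entries filled in below.
    driver_layout_info->PartitionCount = NumberOfPartitions;
    driver_layout_info->Mbr.Signature = sign;

    for (DWORD Index = 0; Index < NumberOfPartitions; Index++) {
        PARTITION_INFORMATION_EX *entry = &driver_layout_info->PartitionEntry[Index];
        entry->PartitionStyle = PARTITION_STYLE_MBR;
        entry->PartitionNumber = Index + 1;
        entry->RewritePartition = TRUE;
        // Each partition begins where the previous one ends, so no two overlap.
        entry->StartingOffset.QuadPart = first_partition_offset + (LONGLONG)Index * partition_length;
        entry->PartitionLength.QuadPart = partition_length;
        entry->Mbr = mbr_info;
    }
}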
Call free(driver_layout_info); once you are completely done with it.

C++ - EncryptMessage not encrypting the correct data

I've been following this tutorial for SSL/TLS online (well, it's more reading the author's source code and following along), but I've hit a bump with the EncryptMessage part, because it pushes the data out of the way and encrypts the wrong info.
The pbloBuffer that I send it is:
GET / HTTP/1.1\r\n
HOST: www.google.com\r\n\r\n
But when I do pbMessage = pbloBuffer + Sizes.cbHeader; I end up with (even the microsoft websites says to do this)
1\r\n
HOST: www.google.com\r\n\r\n
Now pbMessage is the text above, and that is what gets placed under SECBUFFER_DATA, so it is not even getting the full data. From what I understand, SECBUFFER_DATA is the "user" data that the web server will decode and process.
Can you find out how to fix this and properly send the encrypted data?
Full source: (This code is experimental, as I am trying to understand it before I make changes)
// Encrypts the caller's buffer in place with the established security context and sends
// the result on hSocket. Always returns 0.
// NOTE(review): the Size parameter is never read; the message length comes from strlen below.
int Adaptify::EncryptSend(PBYTE pbloBuffer, int Size) {
SECURITY_STATUS scRet{ 0 };
SecBufferDesc Message{ 0 };
SecBuffer Buffers[4]{ 0 };
DWORD cbMessage = 0, cbData = 0;
PBYTE pbMessage = nullptr;
SecPkgContext_StreamSizes Sizes = { 0 };
// NOTE(review): the return status is discarded; on failure Sizes keeps its zero values.
QueryContextAttributesW(&hContext, SECPKG_ATTR_STREAM_SIZES, &Sizes);
// Treats the first cbHeader bytes of the caller's buffer as header space, so the payload
// is taken to start cbHeader bytes in. The caller's text starts at offset 0, which is
// why its first cbHeader bytes are missing from the data buffer.
pbMessage = pbloBuffer + Sizes.cbHeader;
// Length is found by scanning for a NUL byte, so the payload must be NUL-terminated text.
cbMessage = (DWORD)strlen((const char*)pbMessage);
Buffers[0].BufferType = SECBUFFER_STREAM_HEADER;
Buffers[0].cbBuffer = Sizes.cbHeader;
Buffers[0].pvBuffer = pbloBuffer;
Buffers[1].BufferType = SECBUFFER_DATA;
Buffers[1].pvBuffer = pbMessage;
Buffers[1].cbBuffer = cbMessage;
// The trailer is placed directly after the payload, inside the caller's buffer.
// NOTE(review): nothing here shows that pbloBuffer has cbTrailer spare bytes after the
// text; if it does not, EncryptMessage writes past the end of the caller's buffer.
Buffers[2].BufferType = SECBUFFER_STREAM_TRAILER;
Buffers[2].cbBuffer = Sizes.cbTrailer;
Buffers[2].pvBuffer = pbMessage + cbMessage;
Buffers[3].BufferType = SECBUFFER_EMPTY;
Buffers[3].cbBuffer = SECBUFFER_EMPTY;
Buffers[3].pvBuffer = SECBUFFER_EMPTY;
Message.cBuffers = 4;
Message.pBuffers = Buffers;
Message.ulVersion = SECBUFFER_VERSION;
// NOTE(review): scRet is stored but never tested, so the send below runs even when
// encryption failed and the buffer may still hold plaintext.
scRet = EncryptMessage(&hContext, 0, &Message, 0);
if (send(hSocket, (const char*)pbloBuffer, Buffers[0].cbBuffer + Buffers[1].cbBuffer + Buffers[2].cbBuffer, 0) < 0) {
MessageBox(0, L"Send error", 0, 0);
}
return 0;
}
First, you need to call QueryContextAttributesW only once, after InitializeSecurityContextW returns SEC_E_OK; there is no sense in calling it every time you need to send data. Save the result instead: say, inherit your class from SecPkgContext_StreamSizes (class Adaptify : SecPkgContext_StreamSizes) and, once the handshake has ended, call QueryContextAttributesW(&hContext, SECPKG_ATTR_STREAM_SIZES, this);
As for sending the data: in your case the SECBUFFER_DATA buffer must of course hold your actual data from pbloBuffer, not pbloBuffer + Sizes.cbHeader. The code can look like this:
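/**
 * Encrypts Size bytes from pbloBuffer as one TLS record and sends it on hSocket.
 *
 * The record is built in a private buffer laid out as [header | data | trailer], so
 * the caller's buffer is neither modified nor required to have spare room.
 * cbHeader and cbTrailer are the stream sizes queried once after the handshake
 * (this class is assumed to inherit SecPkgContext_StreamSizes, as described above).
 * Size must not exceed the context's maximum message size; EncryptMessage rejects
 * larger inputs.
 *
 * @return SEC_E_OK on success, otherwise a SECURITY_STATUS or Winsock error code.
 */
int Adaptify::EncryptSend(PBYTE pbloBuffer, int Size) {
    // A negative Size would turn into an enormous length in memcpy.
    if (pbloBuffer == nullptr || Size < 0) {
        return SEC_E_INVALID_PARAMETER;
    }

    SECURITY_STATUS ss = SEC_E_INSUFFICIENT_MEMORY;
    const size_t cbData = (size_t)Size;
    const size_t cbTotal = (size_t)cbHeader + cbData + (size_t)cbTrailer;

    // NOTE(review): plain new[] throws std::bad_alloc instead of returning null, so the
    // null test only matters in builds where exceptions are disabled.
    if (PBYTE Buffer = new BYTE[cbTotal]) {
        memcpy(Buffer + cbHeader, pbloBuffer, cbData);

        SecBuffer sb[4] = {
            { cbHeader, SECBUFFER_STREAM_HEADER, Buffer },
            { (unsigned long)cbData, SECBUFFER_DATA, Buffer + cbHeader },
            { cbTrailer, SECBUFFER_STREAM_TRAILER, Buffer + cbHeader + cbData },
        }; // sb[3] is zero-initialised, i.e. SECBUFFER_EMPTY
        SecBufferDesc sbd = {
            SECBUFFER_VERSION, 4, sb
        };

        // Pass the security context handle, and send only after encryption succeeded
        // so that plaintext can never reach the socket.
        if (SEC_E_OK == (ss = ::EncryptMessage(&hContext, 0, &sbd, 0))) {
            // EncryptMessage may shrink the trailer, so use the sizes it reports.
            int remaining = (int)(sb[0].cbBuffer + sb[1].cbBuffer + sb[2].cbBuffer);
            const char* cursor = (const char*)Buffer;
            // send() may accept fewer bytes than requested; keep going until the whole
            // record is on the wire, otherwise the peer sees a truncated record.
            while (remaining > 0) {
                const int sent = send(hSocket, cursor, remaining, 0);
                if (sent == SOCKET_ERROR) {
                    ss = WSAGetLastError();
                    break;
                }
                if (sent == 0) {
                    break;
                }
                cursor += sent;
                remaining -= sent;
            }
        }
        delete [] Buffer;
    }
    return ss;
}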
So you need to allocate a new buffer of cbHeader + Size + cbTrailer bytes (where Size is your actual message size) and copy your message to Buffer + cbHeader.

Get file offset from a loaded DLL's function

I'd like to ask, how could I locate a specific (exported) function inside a DLL. For example I'd like to locate ReadProcessMemory inside Kernel32. I wouldn't like to rely on Import table, I'd like to locate different APIs based on their addresses what I get with a custom function.
I tried to make a small research on VA, RVA & File offsets, but I didn't succeed. Here's an example which I tried, but it isn't working (returns 0 in all cases):
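/**
 * Translates a relative virtual address (RVA) into an offset within the PE file.
 *
 * @param dwRva          RVA to translate.
 * @param uiBaseAddress  Address of the image's DOS header.
 * @return The file offset; dwRva itself when it lies in the header region in front of
 *         the first section's raw data; 0 when the image is not a valid PE or no
 *         section's raw data covers the RVA.
 *
 * NOTE(review): 0 is also what a successful translation of RVA 0 returns, so callers
 * cannot tell "offset 0" from "not found" through this interface.
 * The size of the mapped image is unknown here, so e_lfanew and the section table are
 * trusted to lie inside it; bounds-check them first when the file is untrusted.
 */
DWORD Rva2Offset(DWORD dwRva, UINT_PTR uiBaseAddress)
{
    if (uiBaseAddress == 0)
        return 0;

    // Validate both signatures before trusting any field that follows them.
    PIMAGE_DOS_HEADER pDosHeader = (PIMAGE_DOS_HEADER) uiBaseAddress;
    if (pDosHeader->e_magic != IMAGE_DOS_SIGNATURE || pDosHeader->e_lfanew <= 0)
        return 0;

    PIMAGE_NT_HEADERS pNtHeaders = (PIMAGE_NT_HEADERS) (uiBaseAddress + pDosHeader->e_lfanew);
    if (pNtHeaders->Signature != IMAGE_NT_SIGNATURE)
        return 0;

    const WORD wSectionCount = pNtHeaders->FileHeader.NumberOfSections;
    if (wSectionCount == 0)
        return 0;

    // The section table starts right after the optional header. Using the recorded
    // SizeOfOptionalHeader keeps this correct for both PE32 and PE32+ images.
    PIMAGE_SECTION_HEADER pSectionHeader = (PIMAGE_SECTION_HEADER) ((UINT_PTR) (&pNtHeaders->OptionalHeader) + pNtHeaders->FileHeader.SizeOfOptionalHeader);

    // Headers sit at the same position in the file and in memory.
    if (dwRva < pSectionHeader[0].PointerToRawData)
        return dwRva;

    for (WORD wIndex = 0; wIndex < wSectionCount; wIndex++)
    {
        const DWORD dwSectionStart = pSectionHeader[wIndex].VirtualAddress;
        if (dwRva < dwSectionStart)
            continue;

        // Compare the distance into the section rather than start + size, which can
        // wrap around for hostile header values.
        const DWORD dwDelta = dwRva - dwSectionStart;
        if (dwDelta < pSectionHeader[wIndex].SizeOfRawData)
            return pSectionHeader[wIndex].PointerToRawData + dwDelta;
    }
    return 0;
}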
Could you help me how could I accomplish this simple task?
Thank you.
P.S.: I'm not tied to the function above; either pointing out what the problem is or suggesting a better source would be awesome.
This gives you the relative virtual address
// GetModuleHandle returns the handle of a module already loaded in the calling process;
// the handle value is used here as the module's base address.
// NOTE(review): the call returns NULL when the module is not loaded, and the result is not checked.
uintptr_t baseAddr = (uintptr_t)GetModuleHandle("nameOfExe.exe");
// RVA = absolute address minus module base. functionAddress must lie inside this module
// for the difference to be meaningful.
uintptr_t relativeAddr = functionAddress - baseAddr;
This converts relative virtual address to File offset:
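/**
 * Converts a relative virtual address into a file offset using the section table.
 *
 * @param pNtHdr  NT headers of a 32-bit (PE32) image.
 * @param dwRVA   Relative virtual address to translate.
 * @return The file offset, or (DWORD)-1 (0xFFFFFFFF) when pNtHdr is NULL, no section
 *         contains the RVA, or the RVA falls in a part of a section that has no bytes
 *         in the file.
 */
DWORD RVAToFileOffset(IMAGE_NT_HEADERS32* pNtHdr, DWORD dwRVA)
{
    const DWORD kNotFound = static_cast<DWORD>(-1);
    if (pNtHdr == NULL)
        return kNotFound;

    PIMAGE_SECTION_HEADER pSectionHdr = IMAGE_FIRST_SECTION(pNtHdr);
    const WORD wSections = pNtHdr->FileHeader.NumberOfSections;

    for (WORD i = 0; i < wSections; ++i, ++pSectionHdr)
    {
        if (dwRVA < pSectionHdr->VirtualAddress)
            continue;

        // Work with the distance into the section; VirtualAddress + VirtualSize can
        // wrap around when the header values are hostile.
        const DWORD dwDelta = dwRVA - pSectionHdr->VirtualAddress;
        if (dwDelta >= pSectionHdr->Misc.VirtualSize)
            continue;

        // Memory beyond SizeOfRawData is zero-fill only. Returning an offset for it
        // would point into whatever follows this section in the file.
        if (dwDelta >= pSectionHdr->SizeOfRawData)
            return kNotFound;

        return pSectionHdr->PointerToRawData + dwDelta;
    }
    return kNotFound;
}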