Portable executable DOS header length - C++

I've been studying this image for building a portable executable: https://i.imgur.com/LIImg.jpg
The image/walkthrough says the PE header starts at 0x40 (64 in decimal). However, the hexadecimal dump seems to say the DOS header is 32 bytes long. Is each field being packed to 4 bytes?
Looking at the IMAGE_DOS_HEADER in WinNT.h, it doesn't seem to fit either. It has 16 individual 2-byte fields, one 4-element array of 2-byte values, one 10-element array of 2-byte values, and the 4-byte pointer to the PE location. Any way I look at it, that doesn't seem to add up to 64...

However, the hexadecimal dump says the DOS header is 32 bytes long.
Offset 0x30:
00 00 00 00-00 00 00 00-00 00 00 00-40 00 00 00
0x30 + 16 = 0x40 (64).
typedef struct _IMAGE_DOS_HEADER
{
    // Cumulative size:
    WORD e_magic;       //  2
    WORD e_cblp;        //  4
    WORD e_cp;          //  6
    WORD e_crlc;        //  8
    WORD e_cparhdr;     // 10
    WORD e_minalloc;    // 12
    WORD e_maxalloc;    // 14
    WORD e_ss;          // 16
    WORD e_sp;          // 18
    WORD e_csum;        // 20
    WORD e_ip;          // 22
    WORD e_cs;          // 24
    WORD e_lfarlc;      // 26
    WORD e_ovno;        // 28
    WORD e_res[4];      // 36
    WORD e_oemid;       // 38
    WORD e_oeminfo;     // 40
    WORD e_res2[10];    // 60
    LONG e_lfanew;      // 64
} IMAGE_DOS_HEADER, *PIMAGE_DOS_HEADER;
It has 16 individual 2-byte fields, one 4-element 2-byte array, one 10-element 2-byte array, and the 4-byte offset of the PE header. Adding those up:
(16 * 2) = 32
( 4 * 2) =  8
(10 * 2) = 20
         +  4
------------------
           64
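The 40 00 00 00 at offset 0x3C in the dump is the e_lfanew field stored little-endian, i.e. the value 0x00000040: the DOS header is 64 bytes and the PE signature sits at file offset 0x40. As a minimal sketch (not taken from the walkthrough), assuming the start of the file is already in a byte buffer, you could follow it like this:

#include <cstdint>
#include <cstring>
#include <vector>

// Return the offset of the "PE\0\0" signature by reading e_lfanew at
// offset 0x3C of the 64-byte DOS header.
std::uint32_t peHeaderOffset(const std::vector<unsigned char>& file) {
    if (file.size() < 0x40 || file[0] != 'M' || file[1] != 'Z') {
        return 0;  // not an MZ/PE image (e_magic should be "MZ")
    }
    std::uint32_t e_lfanew = 0;
    std::memcpy(&e_lfanew, &file[0x3C], sizeof(e_lfanew));  // raw little-endian read; fine on x86
    return e_lfanew;  // 0x40 for the file in the walkthrough image
}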


How do I use the C Preprocessor to generate a variable-sized (operating system-dependent) hexadecimal literal?

Edited: I'm aware that I can't use sizeof() in the preprocessor. I'm looking into modifying my code.
I have C code that uses one of the following hexadecimal values (stored inside a variable of type unsigned int):
0x0000'0000'0000'0001 (64-bit/8-byte system, 15 0's)
0x0000'0001 (32-bit/4-byte system, 7 0's)
0x0001 (16-bit/2-byte system, 3 0's)
0x01 (8-bit/1-byte system, 1 0)
depending on how the operating system decides to represent an unsigned int (I'm aware that most OSes don't represent them as 64-bit values, but bear with me). I would like to generate an object-like macro that expands to one of the four hexadecimal values above, depending on how the OS decides to implement an unsigned int.
A simple (brute-force) way to do this is to use a series of #if/#elif directives:
#define SYS_UINT_TYPE sizeof(unsigned int)
#if SYS_UINT_TYPE == 8
#define DEFAULT_MASK 0x0000000000000001
#elif SYS_UINT_TYPE == 4
#define DEFAULT_MASK 0x00000001
...
#endif
But I was wondering if there is a way to use ## (the C token-pasting operator) to concatenate the correct number of 0's and so generate the correct macro-expanded value of DEFAULT_MASK. I know that for a B-byte system the number of 0's that needs to be concatenated with the literal 0x is 2B - 1, but there is no such thing as a for-loop preprocessor directive.
In effect, this would reduce the size of the #if ... #elif ... #endif chain I'd have to type out.
Um, those are all 0x01, right? They are literals, so they will be int-sized if they fit in an int...
When a mask is applied, its LSB will be lined up with the LSB of the other operand...
Convince yourself of this with:
printf("%lld",(long long) 0x01 | (long) 0x01 | (int) 0x01 | (short) 0x01 | (char) 0x01);
If these lined up with the most significant bit instead, the result would end up being something like:
0x 01 00 00 00 00 00 00 00   (long long)
0x             01 00 00 00   (long)
0x             01 00 00 00   (int)
0x                   01 00   (short)
0x                      01   (char)
--------------------------   all of this OR'd together
0x 01 00 00 00 01 00 01 01
but instead the result is just one (0x01).
All of this happens through the magic of integer promotion, where the smaller types are widened so that they hold the same value in a larger storage size...
Things get messier with signed values and negatives, partly because the integer promotion rules are more complicated.
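Building on that: since a plain literal already has the value 1 whatever width unsigned int happens to be, a single definition covers all four cases from the question. A minimal sketch (DEFAULT_MASK reused from the question; the printout is only illustrative):

#include <climits>
#include <cstdio>

// The literal 1u already has type unsigned int, whatever width the
// platform gives unsigned int, so no width-specific variants are needed.
#define DEFAULT_MASK 1u

int main() {
    std::printf("unsigned int is %zu bits wide\n", sizeof(unsigned int) * CHAR_BIT);
    std::printf("DEFAULT_MASK     = %#x\n", DEFAULT_MASK);  // prints 0x1 everywhere
    std::printf("mask for top bit = %#x\n",
                DEFAULT_MASK << (sizeof(unsigned int) * CHAR_BIT - 1));
}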

How can I convert a 64-bit integer to a big-endian byte array in C++

(This is a follow-up question to Padding the message in SHA256.)
(I am trying to implement the SHA-256 hash function in C++. I am doing this purely for fun and learning purposes.)
I have my string message with length message_length. I have appended the string with the bit 1 followed by 0s so that the length of the string is now congruent to 448 bits modulo 512 bits.
I now need to append the string with the message_length as 64-bit big-endian integer to the string, but I can't quite figure out how to do this in C++.
For the sake of argument, let's say message_length is 3 bytes = 24 bits. 24 in hex is 18, so I would need to append 00 00 00 00 00 00 00 18 to the string.
So what I would like is a function that converts the integer 3 into the string 00 00 00 00 00 00 00 18 so that I can append this.
My question boils down to: how can I convert a 64-bit integer to a big-endian byte array in C++?
Edit: I just reread your question (intending to edit it). I think you misunderstood the format SHA-256 expects you to use. Instead of appending the string 00 00 00 00 00 00 00 18 (each byte hex-encoded and separated by spaces) for a message length of 24, you need to append the raw bytes 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 and 0x18. They do not represent any printable characters. See below for a way to get these bytes. My example uses an integer whose byte representation happens to contain only printable characters (in ASCII).
Btw., the reason you pad to 448 bits is that this leaves exactly 64 bits free in the final 512-bit block, which is where the big-endian encoding of the message length goes.
You can extract a single byte of num using num>>(8*i) & 0xff, where i=0 gives you the least significant byte and, for a 64-bit unsigned long, i=7 gives you the most significant byte. Iterating through all positions you can get each byte:
unsigned long num = 0x626967656e646961L;

// Store big-endian representation in a vector:
std::vector<unsigned char> bigEndian;
for (int i = 7; i >= 0; i--) {
    bigEndian.push_back((num >> (8 * i)) & 0xff);
}
If you need it as a string you can convert it using the iterator constructor of a string:
std::string bigEndianString(bigEndian.begin(),bigEndian.end());
Complete code with test output:
#include <iostream>
#include <string>
#include <vector>

int main() {
    unsigned long num = 0x626967656e646961L;

    // Store big-endian representation in a vector:
    std::vector<unsigned char> bigEndian;
    for (int i = 7; i >= 0; i--) {
        bigEndian.push_back((num >> (8 * i)) & 0xff);
    }

    // Convert vector to string:
    std::string bigEndianString(bigEndian.begin(), bigEndian.end());

    // Test correctness:
    for (std::vector<unsigned char>::const_iterator it = bigEndian.begin(); it != bigEndian.end(); ++it) {
        std::cout << std::hex << (int)*it << ' ';
    }
    std::cout << std::endl << bigEndianString << std::endl;
}
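If you would rather not rely on unsigned long being 64 bits wide, here is a minimal sketch of the same loop using a <cstdint> fixed-width type, written as a helper (the function name is just for illustration) that appends the length directly to the padded message:

#include <cstdint>
#include <string>

// Append the 64-bit message length (in bits), most significant byte first,
// as required at the end of the final SHA-256 block.
void appendLengthBigEndian(std::string& paddedMessage, std::uint64_t lengthInBits) {
    for (int i = 7; i >= 0; --i) {
        paddedMessage.push_back(static_cast<char>((lengthInBits >> (8 * i)) & 0xff));
    }
}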

CRC32 calculation for a PNG chunk doesn't match the real one

I'm attempting to mimic the function used for creating CRCs in PNG files. I'm using the AUTODIN II polynomial and the source code from:
http://www.opensource.apple.com/source/xnu/xnu-1456.1.26/bsd/libkern/crc32.c
My tests have all been for the IHDR chunk, so my parameters have been:
crc - 0xffffffff and 0 (both have been suggested)
buff - the address of the IHDR chunk's type field
length - the IHDR chunk's length + 4 (the length of the chunk's data plus the length of the type)
I printed the calculated CRC in binary and compared it to the chunk's actual CRC. I can see no similarities (little vs. big endian, reversed bits, XOR'd, etc.).
This is the data for the IHDR chunk (hexadecimal format):
length(big endian): d0 00 00 00 (13)
type: 49 48 44 52
data: 00 00 01 77 00 00 01 68 08 06 00 00 00
existing CRC: b0 bb 40 ac
If anyone can tell me why my calculations are off, or give me a CRC32 function that will work I would greatly appreciate it.
Thank-you!
The CRC-32 algorithm used in PNG images is described here: http://www.w3.org/TR/PNG-Structure.html#CRC-algorithm (there's also a link to C code for doing test calculations).
But as #Jigsore pointed out, you won't get sensible results from the data you posted here. You've given us a 4-byte type identifier and what looks like 7.5 bytes of data to follow it. There should be a total of 13 bytes according to the length header.
EDIT:
This works using the function from w3.org:
#include <stdio.h>

/* crc() is the sample CRC-32 implementation from the PNG specification
   linked above. */
int main() {
    unsigned char input[] = { 0x49, 0x48, 0x44, 0x52, 0x00, 0x00, 0x01, 0x77, 0x00,
                              0x00, 0x01, 0x68, 0x08, 0x06, 0x00, 0x00, 0x00 };
    printf("%08lx\n", crc(input, 17));
    return 0;
}
Output:
ac40bbb0
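For reference, the CRC-32 used by PNG is the same one zlib computes, so if zlib is available you can check your implementation against its crc32(). A minimal sketch over the same type+data bytes (it should print the same ac40bbb0):

#include <cstdio>
#include <zlib.h>

int main() {
    const unsigned char chunk[] = {
        0x49, 0x48, 0x44, 0x52,                          // "IHDR" chunk type
        0x00, 0x00, 0x01, 0x77, 0x00, 0x00, 0x01, 0x68,  // width, height
        0x08, 0x06, 0x00, 0x00, 0x00                     // bit depth, colour type, ...
    };
    uLong c = crc32(0L, Z_NULL, 0);        // zlib's initial CRC value
    c = crc32(c, chunk, sizeof(chunk));    // CRC over chunk type + data
    std::printf("%08lx\n", c);             // expected: ac40bbb0
    return 0;
}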

How can I read a REG_BINARY value's data from the registry?

In the registry there is one (or more) key, depending on how many monitors you have: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\DISPLAY\DEL404C\{Some Unique ID}\Device Parameters\EDID, which is a REG_BINARY value. In my case its data is:
00 FF FF FF FF FF FF 00 10 AC 4C 40 53 43 34 42 34 14 01 03 0A 2F 1E 78 EE EE 95 A3 54
4C 99 26 0F 50 54 A5 4B 00 71 4F 81 80 B3 00 01 01 01 01 01 01 01 01 01 01 21 39 90 30
62 1A 27 40 68 B0 36 00 DA 28 11 00 00 1C 00 00 00 FF 00 34 57 31 4D 44 30 43 53 42 34
43 53 0A 00 00 00 FC 00 44 45 4C 4C 20 50 32 32 31 30 0A 20 20 00 00 00 FD 00 38 4B 1E
53 10 00 0A 20 20 20 20 20 20 00 FA
This REG_BINARY value contains information (such as the serial number and type) about the connected monitor. I only need these two values. My question is: how can I read them using C or C++?
I have a VB script which can do this:
'you can tell If the location contains a serial number If it starts with &H00 00 00 ff
strSerFind=Chr(&H00) & Chr(&H00) & Chr(&H00) & Chr(&HfF)
'or a model description If it starts with &H00 00 00 fc
strMdlFind=Chr(&H00) & Chr(&H00) & Chr(&H00) & Chr(&Hfc)
This link also contains information about EDID: http://en.wikipedia.org/wiki/Extended_display_identification_data
Could someone help me do this in C? I can only find VB script examples, and unfortunately I don't understand them; this is very important for me.
You mention wanting the "serial number" and "type". There is no "type" but there is a manufacturer ID and a product ID. For the most part these aren't stored as meaningful strings in the information you get back...they are just numeric values. And they're all in the first 16 bytes.
I'll decode the beginning according to the spec you cite.
Bytes 0,1,2,3,4,5,6,7 - Header information
This should be the literal byte sequence 00h FFh FFh FFh FFh FFh FFh 00h, which serves as a sanity check that we're looking at a valid EDID block. Your data starts off with exactly what we expect:
00 FF FF FF FF FF FF 00
Bytes 8 and 9 - Manufacturer ID.
These IDs are assigned by Microsoft, and are three-letter codes. Oh sure, they could have "wasted" three whole bytes in ASCII for this. But that would have been too sensible. So they frittered away eight bytes on an extremely "non-magic" number for the header, and invented an "ingenious" way to encode those three letters into the sixteen bits held by two bytes. How'd they pull it off?
        +----------+----------+
        |  Byte 8  |  Byte 9  |
        +----------+----------+
Bit #   | 76543210 | 76543210 |
        +----------+----------+
Meaning | 0αααααββ | βββγγγγγ |
        +----------+----------+
So the highest-order bit of Byte 8 is always zero, and the remaining 15 bits are divided into three groups of 5 bits (which I've called α, β, and γ). Each is interpreted as a letter, where "00001=A"; "00010=B"; ... "11010=Z".
You've got:
10 AC
And hexadecimal 10AC expressed as 16 binary bits is 0001000010101100. So let's bring that table back again:
        +----------+----------+
        |  Byte 8  |  Byte 9  |
        +----------+----------+
Bit #   | 76543210 | 76543210 |
        +----------+----------+
Meaning | 0αααααββ | βββγγγγγ |
        +----------+----------+
Yours   | 00010000 | 10101100 |
        +----------+----------+
So α = 00100 (decimal 4), β = 00101 (decimal 5), γ = 01100 (decimal 12). Using those decimal numbers as indexes into the English alphabet we get D-E-L. By this arcane sorcery we have determined that your monitor is most likely made by Dell. :)
Bytes 10 and 11 - Product ID Code
This is a two-byte number, assigned by the manufacturer, stored as "LSB first". This is to say that the first byte is the least significant place value. You have:
4C 40
Which we need to interpret as the hexadecimal number 404C.
Bytes 12,13,14,15 - Serial Number.
This is a 32-bit value assigned by the manufacturer which has no requirement for the format. It is "usually stored as LSB first", but doesn't have to be.
53 43 34 42
You can interpret that as 0x53433442, or 0x42344353, or whatever...so long as you're consistent in comparing one value against another.
So now you see it's just three letters and some numbers. Once you get the bytes into a buffer there are a lot of ways to extract the information. #freerider provided some information on that, I'll just throw in a bit more.
The EDID standard says that what you get back as a description is 128 bytes. That is the case with the registry key here, and you can probably assume that if there are not exactly 128 bytes it is corrupt. So using the code provided by #freerider, there'd be no need to pass in anything larger than that...you could technically go down to just 16 if that's the only part of the EDID you're interested in:
#define EDID_BUFFER_SIZE 128
// in idiomatic C++ it's better to say:
// const size_t edidBufferSize = 128;

BYTE edidBuffer[EDID_BUFFER_SIZE];
DWORD nLength = GetLocalMachineProfileBuffer( edidBuffer, EDID_BUFFER_SIZE );
if (nLength != EDID_BUFFER_SIZE) {
    // handle error case, not a valid EDID block
} else {
    // valid EDID block, do extraction:
    // * manufacturer ID
    // * product ID
    // * serial number
}
(Note: I prefer to avoid using sizeof on arrays the way #freerider's sizeof( Buffer ) does above. While it will technically work in this case, it doesn't return the number of elements in the array, but rather the number of bytes the array occupies in memory. Here the elements happen to be bytes, so it works, but you quickly run into problems, such as when you pass an array to another function by pointer and sizeof suddenly reports the size of a pointer...)
Beyond that, your question of how to extract structural data out of a buffer of bytes is a very general one, and is so foundational to C-style programming that if you don't know where to start on it then you should probably work through simpler programs. Getting the three five bit segments out of the manufacturer name involves things like bit shifting, bit masking, or bit fields. Going through the array deals with addresses and how to index arrays and things like that.
The closest parallel question I could find offhand right now is this:
extract IP from a buffer of bytes
Lots of ways to do it, but an interesting one is that you can define the layout of a structure in memory and then tell the program "hey, this block of memory I found is laid out just like the structure I defined. So let me extract information from it as simply as if I'd defined the object in my program"...
But then you have to be sensitive to issues like data structure alignment. That's because the way your compiler will naturally put objects into memory doesn't necessarily match what you think it would do:
http://en.wikipedia.org/wiki/Data_structure_alignment
With the information above you should at least be able to make a shot at reading some tutorials and seeing what works. If you can't figure out one part of the problem then break that little part out as its own question, and show what you tried and why it didn't work...
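To make the extraction described above concrete, here is a minimal sketch (the function name is just for illustration) that pulls the manufacturer ID, product ID and serial number out of the first 16 bytes, using the byte values from your registry dump:

#include <cstdint>
#include <cstdio>

// Decode manufacturer ID, product ID and serial number from the first
// 16 bytes of an EDID block, following the field layout described above.
void decodeEdidHeader(const unsigned char* edid) {
    // Bytes 8-9: three 5-bit letters packed big-endian, 1 = 'A' ... 26 = 'Z'.
    std::uint16_t mfg = (edid[8] << 8) | edid[9];
    char manufacturer[4] = {
        static_cast<char>('A' - 1 + ((mfg >> 10) & 0x1f)),
        static_cast<char>('A' - 1 + ((mfg >> 5)  & 0x1f)),
        static_cast<char>('A' - 1 + ( mfg        & 0x1f)),
        '\0'
    };

    // Bytes 10-11: product ID, least significant byte first.
    std::uint16_t product = edid[10] | (edid[11] << 8);

    // Bytes 12-15: serial number, read here as least significant byte first.
    std::uint32_t serial = edid[12]
                         | (edid[13] << 8)
                         | (edid[14] << 16)
                         | (static_cast<std::uint32_t>(edid[15]) << 24);

    std::printf("manufacturer: %s, product: 0x%04x, serial: 0x%08x\n",
                manufacturer, static_cast<unsigned>(product), static_cast<unsigned>(serial));
}

int main() {
    const unsigned char edid[16] = {
        0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00,  // header
        0x10, 0xAC,                                      // manufacturer "DEL"
        0x4C, 0x40,                                      // product 0x404C
        0x53, 0x43, 0x34, 0x42                           // serial 0x42344353
    };
    decodeEdidHeader(edid);
}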
This previous question explains how to get the EDID with C/C++/C#. It's not through the registry, but as long as it works...
Win32 code to get EDID in Windows XP/7
If you still want to read the registry, use RegQueryValueEx and friends:
DWORD GetLocalMachineProfileBuffer( BYTE* pBuffer, DWORD nMaxLength )
{
    // The EDID data is the "EDID" value under the monitor's "Device Parameters"
    // key; the value name is not part of the key path.
    CString szSubKey = "SYSTEM\\CurrentControlSet\\Enum\\DISPLAY\\DEL404C\\{Some Unique ID}\\Device Parameters";
    const char* szValueName = "EDID";
    DWORD rc;
    DWORD dwType;
    HKEY hOpenedKey;
    if( ERROR_SUCCESS == RegOpenKeyEx(
            HKEY_LOCAL_MACHINE, // handle of open key
            szSubKey,           // address of name of subkey to open
            0,                  // reserved
            KEY_READ,           // security access mask
            &hOpenedKey         // address of handle of open key
            ) )
    {
        rc = RegQueryValueEx(
            hOpenedKey,
            szValueName,
            0,
            &dwType,
            (LPBYTE)pBuffer,
            &nMaxLength );      // in: buffer size, out: number of bytes read
        RegCloseKey( hOpenedKey );
        if( rc != ERROR_SUCCESS )
        {
            return (DWORD)-1;
        }
        ASSERT( dwType == REG_BINARY );
        return nMaxLength;
    }
    else
    {
        return (DWORD)-1;
    }
}
call it like this:
BYTE Buffer[20000];
DWORD nLength = GetLocalMachineProfileBuffer( Buffer, sizeof( Buffer ) );

8-bit char to Hex representation

I'm trying to convert 8-bit chars into a hex view which looks like this:
00 03 80 45 E5 93 00 18 02 72 3B 90 88 64 11 00
45 FF 00 36 00 FF 45 00 00 34 7B FE 40 00 40 02
But some characters hold negative values, which (through sign extension) produces a hex value of more than 2 digits. How would I get each one represented as above?
I don't know what you are using for formatting, but make sure that you make your byte-holding variable an unsigned char (assuming that char is 8 bits on your platform, which it is on all sane platforms) before formatting. If your platform has a sane BYTE typedef, use that. You can also use the boost::uint8_t type to store the byte and avoid these sorts of issues. For example:
char c=-25; // Oh no, this is one of those pesky "negative" characters
unsigned char byteVal=static_cast<unsigned char>(c); // FTFY
// Do the formatting with byteVal
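To get output like the dump in the question, here is a minimal sketch of the formatting step itself (the function name is just for illustration), assuming the data is in a std::string:

#include <cstdio>
#include <string>

// Print each byte as exactly two uppercase hex digits, 16 bytes per line.
void hexDump(const std::string& data) {
    for (std::size_t i = 0; i < data.size(); ++i) {
        // The cast prevents sign extension of "negative" chars.
        unsigned char byteVal = static_cast<unsigned char>(data[i]);
        std::printf("%02X%c", byteVal, (i % 16 == 15) ? '\n' : ' ');
    }
    std::printf("\n");
}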
"negative byte values" is an oxymoron, a byte is a number of bits without any sign typically an unsigned char which, when being 8 bits. can contain values 0-255 or in hex 00 to FF.