I am having a problem with a school assignment. The assignment is to write a metamorphic Hello World program. This program will produce 10 .com files that print "Hello World!" when executed. Each of the 10 .com files must be different from the others. I understand the concept of metamorphic vs oligomorphic vs polymorphic. My program currently creates 10 .com files and then writes the machine code to the files. I began by simply writing only the machine code to print hello world and tested it. It worked just fine. I then tried to add a decryption routine to the beginning of the machine code. Here is my current byte array:
typedef unsigned char BYTE; // must precede the BYTE declarations below
#define ARRAY_SIZE(array) (sizeof((array))/sizeof((array[0])))
BYTE pushCS = 0x0E;
BYTE popDS = 0x1F;
BYTE movDX = 0xBA;
BYTE helloAddr1 = 0x1A;
BYTE helloAddr2 = 0x01;
BYTE movAH = 0xB4;
BYTE nine = 0x09;
BYTE Int = 0xCD;
BYTE tOne = 0x21;
BYTE movAX = 0xB8;
BYTE ret1 = 0x01;
BYTE ret2 = 0x4C;
BYTE movBL = 0xB3;
BYTE keyVal = 0x03; // Encrypt/Decrypt key
BYTE data[] = { 0x8D, 0x0E, 0x01, 0xB7, 0x1D, 0xB3, keyVal, 0x30, 0x1C, 0x46, 0xFE, 0xCF, 0x75, 0xF9,
movDX, helloAddr1, helloAddr2, movAH, nine, Int, tOne, movAX, ret1, ret2, Int, tOne,
0x48, 0x65, 0x6C, 0x6C, 0x6F, 0x20, 0x57, 0x6F, 0x72, 0x6C, 0x64, 0x21, 0x0D, 0x0D, 0x0A, 0x24 };
The decryption portion of the machine code is the first 14 bytes of "data". This decryption routine would take the obfuscated machine code bytes and decrypt them by xor-ing the bytes with the same key that was used to encrypt them. I am encrypting the bytes in my C++ code with this:
for (int i = 15; i < ARRAY_SIZE(data); i++)
{
data[i] ^= keyVal;
}
I have verified over and over again that my addressing is correct, given that the code begins at offset 0x100. What I have noticed is that when keyVal is 0x00, my code runs fine and I get 10 .com files that print Hello World!. However, this does me no good, as 0x00 leaves everything unchanged. When I provide an actual key like 0x02, my program no longer works. It simply hangs until I close DOSBox. Any hints as to the cause of this would be a great help. I have some interesting plans for junk insertion (the actual metamorphic part), but I don't want to move on to that until I figure out this encrypt/decrypt issue.
The decryption portion of the machine code is the first 14 bytes of "data".
and
for (int i = 15; i < ARRAY_SIZE(data); i++)
do not match, since in C++ array indexes start at 0.
In your array, data[15] == helloAddr1, which means you are not encrypting the data[14] == movDX element. Double-check which elements should be encrypted, and start at i = 14 if required.
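A minimal sketch of the corrected loop (assuming, as above, that the 14-byte decryptor occupies data[0] through data[13] and everything from data[14] on should be encrypted):
for (int i = 14; i < ARRAY_SIZE(data); i++)
{
    data[i] ^= keyVal; // XOR is its own inverse, so the stub undoes this with the same key
}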
My array looks something like this:
unsigned char send_bytes[] = { 0x0B, 0x11, 0xA6, 0x05, 0x00, 0x00, 0x70 };
One of the values is a variable that can change all the time, so I tried something like this:
const char* input = "0x05";
unsigned char send_bytes[] = { 0x0B, 0x11, 0xA6, input, 0x00, 0x00, 0x70 };
When I compile I get a warning:
warning: initialization makes integer from pointer without a cast
I am a little confused about the conversion I need to do, since the array has hex strings in it and the input string is a char.
In the first line you are declaring a pointer to const char and initializing it to point at the beginning of the string "0x05". That's fine, but it is not the thing you are trying to do.
In the second line, you try to initialize the fourth array element (an unsigned char) with the value of the pointer you assigned to input in the first line. The compiler is telling you that you are trying to embed a pointer value (the address of the "0x05" string) into a char variable; that's why it complained. And it is also not what you intend.
Also, take into account that if you are working with binary data (which your hex-number initializers suggest), you are better off using unsigned char: signed char only covers the values -128 to +127, and otherwise you can run into unpredictable behaviour. A declaration like typedef unsigned char byte; can make things easier:
typedef unsigned char byte;
byte send_bytes[] = { 0x0b, 0x11, 0xa6, 0x00, 0x00, 0x00, 0x70 };
byte &input = send_bytes[3]; /* input is an alias of send_bytes[3] */
BR,
Luis
Maybe explaining exactly what const char* input = "0x05"; does will clear things up for you.
First the compiler computes the string data and creates it as a static object:
const char data[5] = { 0x30, 0x78, 0x30, 0x35, 0x0 };
Then your variable is initialized:
const char *input = &data[0];
Note that input is a pointer with a value that depends entirely upon the location the compiler chooses to store the string data at, and has nothing to do with the contents of the string. So if you say char c = input; then c basically gets assigned a random number.
So you should be asking yourself "Where is the value 0x05 that I want to store in the send_bytes array?" In your code it's encoded as text, rather than as a number that your program can use directly. You need to figure out how to convert from a string of symbols following the hexadecimal scheme of representing numbers into C++'s native representation of numbers.
Here are a couple hints. Part of the operation involves associating values with each digit symbol. The symbol '0' is associated with the value zero, '1' with the value one, and so on, according to the usual hexadecimal system. Second, once you can get the associated value of a symbol, then you can use those values in some basic arithmetic operations to figure out the value of the number represented by the whole string of symbols.
For example, if you have the symbols '1' '2' and 'a', in that order from left to right then the arithmetic to compute what number is represented is 1 * 16 * 16 + 2 * 16 + 10.
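To make that arithmetic concrete, here is an illustrative sketch only; the helper names are invented for the example, not taken from your code:
#include <cctype>

// Hypothetical helper: map one hex digit symbol to its value.
int hexDigitValue(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    return std::tolower(c) - 'a' + 10; // 'a'..'f' (or 'A'..'F') map to 10..15
}

// Hypothetical parser: "12a" yields 1 * 16 * 16 + 2 * 16 + 10, i.e. 298.
unsigned parseHex(const char *s)
{
    if (s[0] == '0' && (s[1] == 'x' || s[1] == 'X'))
        s += 2; // skip an optional "0x" prefix, as in the question's string
    unsigned value = 0;
    for (; *s != '\0'; ++s)
        value = value * 16 + hexDigitValue(*s);
    return value;
}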
The error string is pretty much telling you exactly what's wrong.
input is of type const char* (a pointer to a const char), whereas your array send_bytes is of type unsigned char[] (an array of unsigned chars).
First, signed and unsigned values are still different types, though your error message isn't referring to that specifically.
In reality, your input value isn't a string (as there is no true string type in C++), but a pointer to a character. This means that input doesn't point at the byte 0x05, but rather at the bytes {0x30, 0x78, 0x30, 0x35, 0x00}.
The compiler is saying: Hey, I've no idea what you're trying to do, so I'm just converting the address of that string I don't understand (input) to an unsigned char and putting it in the array.
That means if the string "0x05" starts at location 0xAB, your array will ultimately contain { 0x0B, 0x11, 0xA6, 0xAB, 0x00, 0x00, 0x70 }.
You're going to either have to convert from a string to an integer using a radix of 16, or just not use a string at all.
I'd also recommend reading up on pointers.
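As a sketch of the first option: the standard strtol does the radix-16 conversion, and with base 16 it even accepts the leading "0x". Inside whatever function builds the packet:
#include <cstdlib>

const char *input = "0x05";
unsigned char send_bytes[] = { 0x0B, 0x11, 0xA6, 0x00, 0x00, 0x00, 0x70 };
send_bytes[3] = (unsigned char) std::strtol(input, NULL, 16); // send_bytes[3] is now 0x05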
The array doesn't have "hex strings" in it - if they were, they would be enclosed in quotation marks, like all strings.
The literals are integers written in hexadecimal notation, and equivalent to
unsigned char send_bytes[] = { 11, 17, 166, input, 0, 0, 112 };
Since it's an array of unsigned char you should put an unsigned char there:
unsigned char input = 0x05;
unsigned char send_bytes[] = { 0x0B, 0x11, 0xA6, input, 0x00, 0x00, 0x70 };
It would be better to put this in your code:
unsigned char send_bytes[] = { 0x0b, 0x11, 0xa6, 0x00, 0x00, 0x00, 0x70 };
unsigned char &input = send_bytes[3]; /* input is an alias of send_bytes[3] */
this way you can do things like:
input = 0x26;
send_packet(send_bytes);
I am writing a little program that talks to the serial port. I got the program working fine with one of these lines:
unsigned char send_bytes[] = { 0x0B, 0x11, 0x00, 0x02, 0x00, 0x69, 0x85, 0xA6, 0x0e, 0x01, 0x02, 0x3, 0xf };
However, the string to send is variable, so I want to make something like this:
char *blahstring;
blahstring = "0x0B, 0x11, 0x00, 0x02, 0x00, 0x69, 0x85, 0xA6, 0x0e, 0x01, 0x02, 0x3, 0xf";
unsigned char send_bytes[] = { blahstring };
It doesn't give me an error, but it also doesn't work. Any ideas?
A byte string is something like this:
char *blahString = "\x0B\x11\x00\x02\x00\x69\x85\xA6\x0E\x01\x02\x03\x0f";
Also, remember that this is not a regular string, so it is wise to declare it explicitly as an array of characters with a specific size, like so:
unsigned char blahString[14] = "\x0B\x11\x00\x02\x00\x69\x85\xA6\x0E\x01\x02\x03\x0f"; /* 13 data bytes plus the literal's trailing '\0' */
unsigned char sendBytes[13];
memcpy(sendBytes, blahString, 13); // and you've successfully copied 13 bytes from blahString to sendBytes
not the way you've defined it.
EDIT:
To answer why your first send_bytes works and the second doesn't:
The first one creates an array of individual bytes, whereas the second one creates a string of ASCII characters. So the length of the first send_bytes is 13 bytes, whereas the length of the second would be much higher, since its bytes are the ASCII equivalents of the individual characters in blahstring.
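A quick way to see the size difference (a sketch; the two literals are copied from the question):
#include <cstdio>
#include <cstring>

int main()
{
    const char as_bytes[] = "\x0B\x11\x00\x02\x00\x69\x85\xA6\x0E\x01\x02\x03\x0f";
    const char *as_text = "0x0B, 0x11, 0x00, 0x02, 0x00, 0x69, 0x85, 0xA6, 0x0e, 0x01, 0x02, 0x3, 0xf";
    std::printf("byte form: %zu bytes (13 data bytes + trailing NUL)\n", sizeof as_bytes); // 14
    std::printf("text form: %zu bytes\n", std::strlen(as_text) + 1); // far larger
    return 0;
}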
blahstring is a string of characters.
Its 1st character is '0', its 2nd character is 'x', its 3rd character is '0', its 4th character is 'B', and so on. So the line
unsigned char send_bytes[] = { blahstring };
creates an array (assuming that you perform a cast!) with just one item.
But in the example that works, the array's 1st element has the value 0x0B, its 2nd the value 0x11, and so on.
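To see that concretely (a sketch; a cast is added so the one-element version compiles at all):
#include <cstdint>
#include <cstdio>

int main()
{
    const char *blahstring = "0x0B, 0x11, 0x00";
    // One element, holding the low byte of a pointer value, not your data.
    unsigned char send_bytes[] = { (unsigned char)(std::uintptr_t) blahstring };
    std::printf("%zu element(s)\n", sizeof send_bytes); // prints: 1 element(s)
    return 0;
}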
I need to create a program that, when run, extracts an image file. To do this I used a char array to store the data, e.g.:
char data[]="ÿØÿà......";
I opened the image with a hex editor, copied the data, and pasted it in as above, but it gives many errors. (That may be because the image data contains some bytes with no printable ASCII character, e.g. NUL.)
Can someone give me some advice on how to do this, i.e. how to create a byte array?
Thanks for any advice.
You should use a numeric initializer instead of a string literal... for example
const unsigned char data[] = { 0x01, 0x02, 0x03, 0x04,
0x05, 0x06, 0x07, 0x08 };
A simple way is to write a small script that generates the source code by reading the file... in Python (3) it would be something like:
data = open("datafile", "rb").read()
# emit the bytes as C array initializer rows, 8 per line
for i in range(0, len(data), 8):
    chunk = data[i:i+8]
    print("".join("0x%02x, " % b for b in chunk))
Read the data from the file at runtime using fopen or fstream. If you want to embed the file in the exe, use a resource compiler.
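A minimal fstream sketch of the read-at-runtime approach (the file name is just a placeholder):
#include <fstream>
#include <iterator>
#include <vector>

int main()
{
    // Open in binary mode so newline translation doesn't mangle the image bytes.
    std::ifstream file("image.jpg", std::ios::binary);
    std::vector<unsigned char> data((std::istreambuf_iterator<char>(file)),
                                    std::istreambuf_iterator<char>());
    return data.empty() ? 1 : 0; // data now holds the raw bytes of the file
}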
I have defined the following struct to represent an IPv4 header (up until the options field):
struct IPv4Header
{
// First row in diagram
u_int32 Version:4;
u_int32 InternetHeaderLength:4; // Header length is expressed in units of 32 bits.
u_int32 TypeOfService:8;
u_int32 TotalLength:16;
// Second row in diagram
u_int32 Identification:16;
u_int32 Flags:3;
u_int32 FragmentOffset:13;
// Third row in diagram
u_int32 TTL:8;
u_int32 Protocol:8;
u_int32 HeaderChecksum:16;
// Fourth row in diagram
u_int32 SourceAddress:32;
// Fifth row in diagram
u_int32 DestinationAddress:32;
};
I now also captured an IP frame with Wireshark. As an array literal it looks like this:
// Captured with Wireshark
const u_int8 cIPHeaderSample[] = {
0x45, 0x00, 0x05, 0x17,
0xA7, 0xE0, 0x40, 0x00,
0x2E, 0x06, 0x1B, 0xEA,
0x51, 0x58, 0x25, 0x02,
0x0A, 0x04, 0x03, 0xB9
};
My question is: How can I create an IPv4Header object using the array data?
This doesn't work because of incompatible endianness:
IPv4Header header = *((IPv4Header*)cIPHeaderSample);
I'm aware of functions like ntohs and ntohl, but I can't figure out how to use them correctly:
u_int8 version = ntohs(cIPHeaderSample[0]);
printf("version: %x \n", version);
// Output is:
// version: 0
Can anyone help?
The most portable way to do it is one field at a time, using memcpy() for types longer than a byte. You don't need to worry about endianness for byte-length fields:
uint16_t temp_u16;
uint32_t temp_u32;
struct IPv4Header header;
header.Version = cIPHeaderSample[0] >> 4;
header.InternetHeaderLength = cIPHeaderSample[0] & 0x0f;
header.TypeOfService = cIPHeaderSample[1];
memcpy(&temp_u16, &cIPHeaderSample[2], 2);
header.TotalLength = ntohs(temp_u16);
memcpy(&temp_u16, &cIPHeaderSample[4], 2);
header.Identification = ntohs(temp_u16);
header.Flags = cIPHeaderSample[6] >> 5;
memcpy(&temp_u16, &cIPHeaderSample[6], 2);
header.FragmentOffset = ntohs(temp_u16) & 0x1fff;
header.TTL = cIPHeaderSample[8];
header.Protocol = cIPHeaderSample[9];
memcpy(&temp_u16, &cIPHeaderSample[10], 2);
header.HeaderChecksum = ntohs(temp_u16);
memcpy(&temp_u32, &cIPHeaderSample[12], 4);
header.SourceAddress = ntohl(temp_u32);
memcpy(&temp_u32, &cIPHeaderSample[16], 4);
header.DestinationAddress = ntohl(temp_u32);
ntohl and ntohs don't operate on 1-byte fields; they are for 32-bit and 16-bit fields, respectively. You probably want to start with a cast or memcpy, then byte-swap the 16- and 32-bit fields as needed. If you find that Version isn't coming through with that approach without any byte swapping, then you have bit-field troubles.
Bit fields are a big mess in C. Most people (including me) will advise you to avoid them.
You want to take a look at the source for ip.h; that one is from FreeBSD. There should be a predefined iphdr struct on your system; use that. Don't reinvent the wheel if you don't have to.
The easiest way to make this work is to take a pointer to the byte array from Wireshark and cast it to a pointer to an iphdr. That'll let you use the correct header struct.
const struct iphdr* hdr = (const struct iphdr*) cIPHeaderSample; /* the array is const, so the pointer must be too */
unsigned int version = hdr->version;
Also, htons takes a 16-bit value and changes the byte order; calling it on a 32-bit variable is just going to make a mess of things. You want htonl for 32-bit variables. Also note that a single byte has no endianness; it takes multiple bytes for byte order to matter.
Updated:
I suggest you use memcpy to avoid the issues of bitfields and struct alignment, as this can get messy. The solution below works on a simple example, and can be easily extended:
#include <cstdint>
#include <cstring>     // memcpy
#include <iostream>
#include <arpa/inet.h> // ntohl
using namespace std;

struct IPv4Header
{
    uint32_t Source;
};

int main(int argc, char **argv)
{
    const uint8_t cIPHeaderSample[] = {
        0x45, 0x00, 0x05, 0x17
    };
    IPv4Header header;
    memcpy(&header.Source, cIPHeaderSample, sizeof(uint8_t) * 4);
    header.Source = ntohl(header.Source); // convert from network to host byte order
    cout << hex << header.Source << endl;
}
Output:
45000517