Calling putchar address from C++ - c++

In a previous question, I asked if it was possible to write and execute assembly commands in memory. I got some nice responses, and after a bit more research, I figured out how to do it. Now that I can do it, I am having trouble figuring out what to write to memory (and how to do it correctly). I know some assembly and how the mnemonics translate to opcodes, but I can't figure out how to use the opcodes correctly.
Here's an example I'm trying to get working:
void(*test)() = NULL; //create function pointer, initialize to NULL
void* hold_address = VirtualAlloc(NULL, 5*1024, MEM_COMMIT, PAGE_EXECUTE_READWRITE); //allocate memory, make writable/ readable/ executable
unsigned char asm_commands[] = {0x55, 0x89, 0xE5, 0x83, 0xEC, 0x18, 0xC7, 0x04, 0x24, 0x41, 0xE8, 0x1E, 0xB3, 0x01, 0x00, 0xC9, 0xC3}; //create array of assembly commands, hex values
memcpy(hold_address, asm_commands, sizeof(asm_commands)[0]*10); //copy the array into the reserved memory
test = (void(*)())hold_address; //set the function pointer to start of the allocated memory
test(); //call the function
Just placing 0xC3 into the asm_commands array works (and the function just returns), but that's boring. The series of opcodes (and addresses) I have in there right now is supposed to print out the character "A" (capital A). I got the opcodes and addresses from debugging a simple program that calls printf("A") and finding the call in memory. Right now, the program fails with error 0xC0000096 ("privileged instruction"). I think the error stems from trying to call the system putchar address directly, which the system doesn't like. I also think I can bypass that by giving my program Ring 0 access, but I hardly know what that entails other than a lot of potential problems.
So, is there any way to call the printf() function (in assembly opcodes) without needing higher privileges?
I'm using Windows 7, 64-bit, Code::Blocks 10.05 (GNU GCC Compiler).
Here's a screenshot of the debugged printf() call (in OllyDbg):

unsigned char asm_commands[] = {0x55, 0x89E5…
Whoa, hang on, stop right there. 0x89E5 isn't a valid value for an unsigned char, and your compiler should probably be complaining about this. (If not, check your settings; you've probably disabled some very important warnings.)
You'll need to split your code in this initializer up into individual bytes, e.g.
{0x55, 0x89, 0xE5, …

Nothing gets printed out because you forgot the zero bytes of the dword 0x00000041 and mistakenly wrote 0x1A instead of 0x1E.
unsigned char asm_commands[] = {0x55, 0x89, 0xE5, 0x83, 0xEC, 0x18, 0xC7, 0x04, 0x24, 0x41, 0x00, 0x00, 0x00, 0xE8, 0x1E, 0xB3, 0x01, 0x00, 0xC9, 0xC3}; //create array of assembly commands, hex values

In addition to what #duskwuff and #user3144770 wrote: did you change the following line to include every byte?
memcpy(hold_address, asm_commands, sizeof(asm_commands)[0]*10);
I have counted 20 bytes of assembly code!
memcpy(hold_address, asm_commands, sizeof(asm_commands)[0]*20);
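For reference, here is a minimal, self-contained sketch of the allocate/copy/call pattern with the length bug removed. It only executes the single 0xC3 (RET) byte that the question confirmed already works; substitute your own opcode sequence once the call target is sorted out.
#include <windows.h>
#include <cstring>

int main()
{
    // Machine code to execute: a single RET (0xC3). Replace with your own bytes.
    unsigned char asm_commands[] = { 0xC3 };

    // Allocate readable/writable/executable memory.
    void* hold_address = VirtualAlloc(NULL, 4096, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
    if (hold_address == NULL)
        return 1;

    // Copy the whole array; sizeof(asm_commands) avoids hard-coding the length.
    memcpy(hold_address, asm_commands, sizeof(asm_commands));

    // Call the copied code through a function pointer.
    void (*test)() = (void(*)())hold_address;
    test();

    VirtualFree(hold_address, 0, MEM_RELEASE);
    return 0;
}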

Related

Casting to double pointer on Arm Cortex-M3

I am using an Arm Cortex-M3 processor. I receive binary data in an unsigned char array, which must be cast into a suitable variable to be used for further computation:
unsigned char gps[24] = { 0xFA, 0x05, 0x08, 0x00, 0x10, 0x00,0xA4, 0x15, 0x02, 0x42, 0x4D, 0xDF, 0xEB, 0x3F, 0xF6, 0x1A, 0x36, 0xBE, 0xBF, 0x2D, 0xA4, 0x41,
0xAF, 0x1A };
int i = 6;
float f = (float) *(double*)&gps[i];
This code works on a computer to get the correct value of "f", but it fails on the Cortex-M3. I understand that the processor does not have an arithmetic (floating-point) unit and hence doesn't support 64-bit operations, but is there a work-around for the cast shown above?
Note that the code below works on the processor; only the casting shown above fails:
double d = 9.7;
Also note that 32 bit casts work, as shown below; only double or uint64_t fail.
uint16_t k = *(uint16_t*)&gps[i];
Is there an alternative solution?
Casting the address of an unsigned char to a pointer to a double – and then using it – violates strict aliasing rules; more importantly (in your case, as discussed below), it also breaks the required alignment rules for accessing multi-byte (i.e. double) data units.
Many compilers will warn about this; clang-cl gives the following for the (double*)&gps[i] expression:
warning : cast from 'unsigned char *' to 'double *' increases required
alignment from 1 to 8 [-Wcast-align]
Now, some architectures aren't too fussy about alignment of data types, and the code may (seem to) work on many of those. However, the Cortex-M3 is very fussy about the alignment requirements for multi-byte data types (such as double).
To remove undefined behaviour, you should use the memcpy function to transfer the component bytes of your gps array into a real double variable, then cast that to a float:
unsigned char gps[24] = {0xFA, 0x05, 0x08, 0x00, 0x10, 0x00,0xA4, 0x15, 0x02, 0x42, 0x4D,
0xDF, 0xEB, 0x3F, 0xF6, 0x1A, 0x36, 0xBE, 0xBF, 0x2D, 0xA4, 0x41, 0xAF, 0x1A };
int i = 6;
double d; // This will be correctly aligned by the compiler
memcpy(&d, &gps[i], sizeof(double)); // This doesn't need source alignment
float f = (float)d; // ... so now, this is a 'safe' cast down to single precision
The memcpy call will use (or generate) code that can access unaligned data safely – even if that means a significant slow-down of the access.
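Wrapped into a tiny helper, the same approach might look like the sketch below (read_unaligned_double is a hypothetical name, not part of any API):
#include <cstdio>
#include <cstring>

// Hypothetical helper: read a double from a possibly unaligned buffer
// via a byte-wise copy, which has no alignment requirement.
static double read_unaligned_double(const unsigned char* src)
{
    double d;
    memcpy(&d, src, sizeof(double));
    return d;
}

int main()
{
    unsigned char gps[24] = { 0xFA, 0x05, 0x08, 0x00, 0x10, 0x00, 0xA4, 0x15,
                              0x02, 0x42, 0x4D, 0xDF, 0xEB, 0x3F, 0xF6, 0x1A,
                              0x36, 0xBE, 0xBF, 0x2D, 0xA4, 0x41, 0xAF, 0x1A };
    float f = (float)read_unaligned_double(&gps[6]);
    printf("%f\n", f);
    return 0;
}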

Passing the Number of Elements in an Array to Function?

I am writing a DLL that passes a char array to a function. I define that char array with 22 elements here:
unsigned char data[22] = { 0x00, 0x0A, 0x00, 0x09, 0x70, 0x00, 0x72, 0x00,
0x6F, 0x00, 0x74, 0x00, 0x68, 0x00, 0x65, 0x00, 0x67, 0x00, 0x75, 0x00,
0x79, 0x00 };
Now, I try to pass this array to my function declared as:
bool sendData(unsigned char* sData, unsigned long sSize);
With these arguments:
sendData(data, 22);
This code compiles, but crashes the program when this function is called. Taking a closer look while debugging, I can see that there's an access violation in my function sendData. Looking even further, I see the values of data and sData at run-time:
data points to the 22 byte char array with correct values (obviously)
sData points to a char array that is null-terminated by the first byte, only containing one value (0)
It is clear to me that the compiler does not know to allocate 22 bytes for sData, simply because I do not specify any length for it. So my question is:
How do I specify the length of the sData so that the argument
passed won't terminate early?
If I'm wrong about the issue, please correct me and explain it further. Thanks for any help in advance!
EDIT:
I understand that \0 (the first byte and many more in data) is a null-terminator and will prematurely end the array. What I am asking is how to avoid this. My understanding is that sData is never given a specific length and therefore stops on \0, but I may be wrong.
I was asked to supply my sendData function:
bool sendData(unsigned char* sData, unsigned long sSize)
{
    try
    {
        Send(sData, sSize);
        return true;
    }
    catch (...)
    {
        return false;
    }
}
Send calls a function from another module, but it isn't relevant to the issue, as the error occurs beforehand, when the sData argument is passed to sendData.
No allocation for sData is going to happen; it just points to your array. It displays as empty in the debugger because the debugger shows a char* as a string, and a string ends at the first '\0', which is your very first byte. This does not mean sData does not have the correct data. Watch sData[0], sData[1], etc. in your debugger to see the correct values.
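To illustrate the point, here is a minimal sketch; the Send below is just a stub standing in for the real function from the other module. The embedded 0x00 bytes do not truncate anything, because the length travels separately in sSize:
#include <cstdio>

// Stub standing in for the real Send() from the other module;
// it simply prints every byte it receives.
static void Send(unsigned char* sData, unsigned long sSize)
{
    for (unsigned long i = 0; i < sSize; ++i)
        printf("%02X ", sData[i]);
    printf("\n");
}

bool sendData(unsigned char* sData, unsigned long sSize)
{
    Send(sData, sSize); // all 22 bytes are reachable via sData + sSize
    return true;
}

int main()
{
    unsigned char data[22] = { 0x00, 0x0A, 0x00, 0x09, 0x70, 0x00, 0x72, 0x00,
                               0x6F, 0x00, 0x74, 0x00, 0x68, 0x00, 0x65, 0x00,
                               0x67, 0x00, 0x75, 0x00, 0x79, 0x00 };
    sendData(data, 22); // prints all 22 bytes even though the first one is 0x00
    return 0;
}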

Encrypt in C++ / Decrypt in x86

I am having a problem with a school assignment. The assignment is to write a metamorphic Hello World program. This program will produce 10 .com files that print "Hello World!" when executed. Each of the 10 .com files must be different from the others. I understand the concept of metamorphic vs oligomorphic vs polymorphic. My program currently creates 10 .com files and then writes the machine code to the files. I began by simply writing only the machine code to print hello world and tested it. It worked just fine. I then tried to add a decryption routine to the beginning of the machine code. Here is my current byte array:
typedef unsigned char BYTE;
#define ARRAY_SIZE(array) (sizeof((array))/sizeof((array[0])))
BYTE pushCS = 0x0E;
BYTE popDS = 0x1F;
BYTE movDX = 0xBA;
BYTE helloAddr1 = 0x1A;
BYTE helloAddr2 = 0x01;
BYTE movAH = 0xB4;
BYTE nine = 0x09;
BYTE Int = 0xCD;
BYTE tOne = 0x21;
BYTE movAX = 0xB8;
BYTE ret1 = 0x01;
BYTE ret2 = 0x4C;
BYTE movBL = 0xB3;
BYTE keyVal = 0x03; // Encrypt/Decrypt key
BYTE data[] = { 0x8D, 0x0E, 0x01, 0xB7, 0x1D, 0xB3, keyVal, 0x30, 0x1C, 0x46, 0xFE, 0xCF, 0x75, 0xF9,
movDX, helloAddr1, helloAddr2, movAH, nine, Int, tOne, movAX, ret1, ret2, Int, tOne,
0x48, 0x65, 0x6C, 0x6C, 0x6F, 0x20, 0x57, 0x6F, 0x72, 0x6C, 0x64, 0x21, 0x0D, 0x0D, 0x0A, 0x24 };
The decryption portion of the machine code is the first 14 bytes of "data". This decryption routine would take the obfuscated machine code bytes and decrypt them by xor-ing the bytes with the same key that was used to encrypt them. I am encrypting the bytes in my C++ code with this:
for (int i = 15; i < ARRAY_SIZE(data); i++)
{
    data[i] ^= keyVal;
}
I have verified over and over again that my addressing is correct considering that the code begins at offset 100. What I have noticed is that when keyVal is 0x00, my code runs fine and I get 10 .com files that print Hello World!. However, this does me no good as 0x00 leaves everything unchanged. When I provide an actual key like 0x02, my program no longer works. It simply hangs until I close out DosBox. Any hints as to the cause of this would be a great help. I have some interesting plans for junk insertion (The actual metamorphic part) but I don't want to move on to that until I figure out this encrypt/decrypt issue.
The decryption portion of the machine code is the first 14 bytes of "data".
and
for (int i = 15; i < ARRAY_SIZE(data); i++)
do not match since in C++ array indexes start at 0.
In your array, data[15] == helloAddr1, which means you are not encrypting the data[14] == movDX element. Double-check which elements should be encrypted and start at i = 14 if required.
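A minimal sketch of the corrected loop, assuming (as the layout above suggests) that the 14-byte decryption stub occupies data[0]..data[13] and everything from data[14] onward is the payload to be XOR-encrypted:
// Encrypt everything after the 14-byte decryption stub (data[0]..data[13]).
for (int i = 14; i < ARRAY_SIZE(data); i++)
{
    data[i] ^= keyVal;
}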

How can I convert 0x70, 0x61, 0x73 ... etc to Pas ... etc in C++?

I am using MSVC++ 2010 Express, and I would love to know how to convert
BYTE Key[] = {0x50,0x61,0x73,0x73,0x77,0x6F,0x72,0x64};
to "Password". I am having a lot of trouble doing this. :( I will use this knowledge to take things such as...
BYTE Key[] { 0xC2, 0xB3, 0x72, 0x3C, 0xC6, 0xAE, 0xD9, 0xB5, 0x34, 0x3C, 0x53, 0xEE, 0x2F, 0x43, 0x67, 0xCE };
And other various variables and convert them accordingly.
I'd like to end up with "Password" stored in a char array.
Key is an array of bytes. If you want to store it in a string, for example, you should construct the string using its range constructor, that is:
string key_string(Key, Key + sizeof(Key)/sizeof(Key[0]));
Or if you can compile using C++11:
string key_string(begin(Key), end(Key));
To get a char* I'd go the C way and use strndup:
char* key_string = strndup(reinterpret_cast<const char*>(Key), sizeof(Key)/sizeof(Key[0]));
However, if you're using C++ I strongly suggest you use string instead of char* and only convert to char const* when absolutely necessary (e.g. when calling a C API). See here for good reasons to prefer std::string.
All you are lacking is a null terminator, so after doing this:
char Key_str[(sizeof Key)+1];
memcpy(Key_str, Key, sizeof Key);
Key_str[sizeof Key] = '\0';
Key_str will be usable as a regular char * style string.
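Putting both answers together, a self-contained sketch:
#include <cstring>
#include <iostream>
#include <string>

typedef unsigned char BYTE;

int main()
{
    BYTE Key[] = { 0x50, 0x61, 0x73, 0x73, 0x77, 0x6F, 0x72, 0x64 };

    // Option 1: build a std::string from the byte range.
    std::string key_string(Key, Key + sizeof(Key) / sizeof(Key[0]));
    std::cout << key_string << '\n';   // prints "Password"

    // Option 2: plain char buffer with an explicit null terminator.
    char Key_str[sizeof Key + 1];
    memcpy(Key_str, Key, sizeof Key);
    Key_str[sizeof Key] = '\0';
    std::cout << Key_str << '\n';      // prints "Password"
    return 0;
}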

How to check whether a bit is enabled or not in an array of hexadecimal digits

#include<iostream>
#define check_bit(var,pos) {return (var & (1 << pos))!=0;}
using namespace std;
int main()
{
uint8_t temp[150]={0x00, 0x02, 0x17, 0xe2, 0x1c, 0xa8, 0x00, 0x30, 0x96, 0xe1, 0x8c, 0x38,
0x88, 0x47, 0x00, 0x01,
0x30, 0xfe, 0x00, 0x01, 0x31, 0xfe, 0x45, 0x00, 0x00, 0x64, 0x3b, 0x89, 0x00, 0x00, 0xfe, 0x01,
0x33, 0x5a, 0xc0, 0xa8, 0x79, 0x02, 0x0a, 0x0a, 0x0a, 0x01, 0x08, 0x00, 0xe3, 0x86, 0x00, 0xea,
0x01, 0xd2, 0x00, 0x00, 0x00, 0x05, 0x02, 0x6a, 0x95, 0x98, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd,
0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd,
0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd,
0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd,
0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd, 0xab, 0xcd
};
uint16_t *ptr1=(uint16_t*)&temp[0];
while(!(*(ptr1+0)==0x88 && *(ptr1+1)==0x47))
{
ptr1++;
}
cout<<"MPLS packet";
uint32_t *ptr2=(uint32_t*)&temp[0];
cout<<"4 bytes accessed at a time";
ptr2++;
while(check_bit(*(ptr+3),7)!=1)
{
cout<<"bottom of the stack:label 0";
ptr2++;
}
cout<<"mpls label:1";
return 0;
}
The program is intended to identify whether a packet is MPLS or not by accessing two bytes at a time and checking for the presence of the 0x88 0x47 bytes. If it is an MPLS packet, it should then access four bytes at a time and check whether the relevant bit of the 3rd byte (0x30 in this case) is enabled or not; if it is not enabled, it should access the next four bytes and check that byte again. I have written the program, but it is not working. I am also not able to access individual elements of the array: if I write cout<<temp[0] it prints a garbage value. Please help.
First thing I noticed is that your code looks for consecutive 16-bit values of 0x88 and 0x47, but in the packet itself these values seem to be 8-bit (1 byte each). If ptr1 is changed to be uint8_t*, it will be able to find the values. I don't know what the correct behavior for the rest of the code is so I can't check it.
In general, directly reading values that are bigger than 8 bits (e.g. uint16_t or uint32_t) from memory here may not be a good idea because your program will behave differently on little-endian and big-endian processors. And as ydroneaud mentions in a comment, some processors won't be able to read these values because you read them from unaligned addresses.
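If you do need to read a multi-byte value from the packet, one simple way to sidestep both the alignment and the endianness problems is to assemble it from individual bytes; a minimal sketch (read_be16 is a hypothetical helper, not from the original answer):
#include <cstdint>

// Assemble a 16-bit big-endian (network byte order) value from two bytes,
// with no unaligned or endianness-dependent memory access.
static uint16_t read_be16(const uint8_t* p)
{
    return static_cast<uint16_t>((p[0] << 8) | p[1]);
}
With such a helper, the label check could be written as read_be16(ptr) == 0x8847.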
I think I can fix your program, but you better listen to the other folks who know networking stuff better than myself.
uint8_t *ptr = temp;
while (ptr[0] != 0x88 || ptr[1] != 0x47)
{
    ptr++;
}
cout << "MPLS packet";
ptr += 2;
cout << "4 bytes accessed at a time";
while (!check_bit(ptr[2], 7))
{
    cout << "bottom of the stack:label 0";
    ptr += 4;
}
cout << "mpls label:1";
return 0;
Edit: to print individual bytes from the array, you need to cast them to some integer type first. This is because uint8_t is most likely a typedef for unsigned char, which cout interprets as a character code. Then you need to set cout to hexadecimal mode:
cout << hex << (int)ptr[2] << endl;
Edit 2: there is an error in your check_bit() macro. A macro is not a function, but a piece of text that is copied as is (replacing the arguments) in place where its name is mentioned. It must be
#define check_bit(var,pos) (((var)&(1<<(pos)))!=0)
or define a function instead:
bool check_bit(int var, int pos) {return (var & (1 << pos))!=0;}
A slightly more worked-out version of my comment: you should actually decode the network stack to be sure MPLS is present; the value 0x8847 can also occur by coincidence somewhere in the payload, addressing fields, and so on.
To actually get to this you should decode the network stack. Let's assume you begin with an Ethernet frame. Note first that most applications will give you data from the destination MAC address onwards; preambles and such are discarded. So the 13th and 14th bytes are the type field. This tells you what is encapsulated in the Ethernet frame; usually it is 0x0800, meaning IP. 0x8847 means a unicast MPLS label. Other options are possible, for example IPv6 or VLAN tags (described below). Note that this way you can determine with certainty which offsets to use: you know what is encapsulated in the MAC frame and where this encapsulated data starts (the 15th octet). Of course there are optional Q-tags to account for; I explain these below.
Now, as you are looking for 0x8847, I guess you have MPLS directly over Ethernet, in which case you don't need to go any further; but if your stack is more complex you'll also have to decode the next encapsulated data (e.g. IP) and take its sizes into account up to the point where you can find your MPLS header.
For Ethernet there are two somewhat common options, dot1q and qinq tagging (VLAN tags). dot1q adds 4 bytes to the Ethernet header; you can recognise it because the type field will be 0x8100, in which case the real type field (of what is encapsulated) will be 4 bytes further on (the 17th byte) and the encapsulated data will start at the 19th byte. With qinq the type will be 0x9100 and the real type will be 8 bytes further on, at the 21st byte; the encapsulated data can then be found from the 23rd byte onwards.
Of course, decoding the whole network stack would be overkill. To start with, you can ignore addressing, QoS, and so on. You mostly need to find the type of the next header and where it starts (which can be shifted by optional fields like dot1q). Usually you know beforehand which kind of stack you have on your system, so it comes down to studying these headers and finding the fixed offset where your MPLS header sits, which makes the work quite simple.
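A minimal sketch of that offset logic, assuming the buffer starts at the destination MAC address as described above (find_mpls_offset is a hypothetical helper name):
#include <cstdint>

// Return the offset where the MPLS data starts, or -1 if the EtherType does
// not indicate unicast MPLS (0x8847). Handles an optional dot1q (0x8100) or
// qinq (0x9100) tag as described above.
static int find_mpls_offset(const uint8_t* frame)
{
    int off = 12;                                     // type field follows the two MAC addresses
    uint16_t type = (frame[off] << 8) | frame[off + 1];

    if (type == 0x8100) {                             // dot1q: real type 4 bytes further on
        off += 4;
        type = (frame[off] << 8) | frame[off + 1];
    } else if (type == 0x9100) {                      // qinq: real type 8 bytes further on
        off += 8;
        type = (frame[off] << 8) | frame[off + 1];
    }

    if (type == 0x8847)                               // unicast MPLS over Ethernet
        return off + 2;                               // encapsulated data starts right after the type
    return -1;
}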