I am trying to add a number to a pointer value with the following expression:
&AddressHelper::getInstance().GetBaseAddress() + 0x39EA0;
The value of &AddressHelper::getInstance().GetBaseAddress() is always 0x00007ff851cd3c68 {140700810412032}.
Should I not get 0x00007ff851cd3c68 + 0x39EA0 = 0x00007ff851d0db08 as a result?
Instead I am getting 0x00007ff851ea3168, or sometimes 0x00007ff852933168, or some other numbers.
Did I take the pointer value incorrectly?
With pointer arithmetic, the pointed-to type is taken into account,
so with:
int buffer[42];
char* start_c = reinterpret_cast<char*>(buffer);
int *start_i = buffer;
we have
start_i + 1 == &buffer[1]
reinterpret_cast<char*>(start_i + 1) == start_c + sizeof(int).
and (when sizeof(int) != 1) reinterpret_cast<char*>(start_i + 1) != start_c + 1
In your case:
(0x00007ff851ea3168 - 0x00007ff851cd3c68) / 0x39EA0 = 0x08
so the offset was scaled by 8, i.e. the pointed-to type has a size of 8 bytes (an 8-byte type such as a pointer or a 64-bit integer; note that a DWORD itself is only 4 bytes).
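If the goal is to add exactly 0x39EA0 bytes, do the arithmetic on a char* or on an integer type rather than on a typed pointer. A minimal sketch, where the 8-byte object is only an assumption standing in for whatever GetBaseAddress() refers to (the question doesn't show its type):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t object = 0;       // hypothetical 8-byte object; its address stands in
                                    // for &AddressHelper::getInstance().GetBaseAddress()
    std::uint64_t* base = &object;

    std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(base);

    // Element arithmetic (what the original expression does): the offset is scaled by
    // sizeof(*base), i.e. 0x39EA0 * 8 = 0x1CF500 bytes, matching the difference observed above.
    std::uintptr_t by_elements = addr + 0x39EA0 * sizeof(*base);

    // Byte arithmetic (what the question expected): exactly 0x39EA0 bytes.
    std::uintptr_t by_bytes = addr + 0x39EA0;

    std::cout << std::hex << addr << '\n' << by_elements << '\n' << by_bytes << '\n';
}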
I have a homework assignment where I have to make a function that reverses the effects of a Caesar shift on a string. For example, if the string after the shift is "fghij" and the shift value is 5, the function should yield "abcde". This works when I try it in Visual Studio Code, but when I submit the assignment, it seems as if my function did nothing. The function is as follows:
string decryptCaesar(string ciphertext, int rshift)
{
    string plaintext;
    int cipher_length = ciphertext.length();
    for (int a = 0; a < cipher_length; a++)
    {
        char individual = char(ciphertext[a]);
        if (isalpha(individual) == true)
        {
            if (65 <= int(individual) && int(individual) <= 90)
            {
                individual = char(((int(individual) - 65 + 26 - rshift) % 26) + 65);
            }
            else
            {
                individual = char(((int(individual) - 97 + 26 - rshift) % 26) + 97);
            }
        }
        plaintext += individual;
    }
    return plaintext;
}
For a start, the values returned from the classification macros (like isalpha) are nonzero if the character falls into the tested class, and zero if not. See this answer for some more detail.
However, the true constant has only one value, so you should not compare the two (true converts to 1, but isalpha() may return, say, 42).
Instead, simply rely on the fact that non-zero integers become "true" when interpreted in a boolean context:
if (isalpha(individual)) {
    blahBlahBlah();
}
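Applied to the function above, a corrected sketch might look like this (only the isalpha test is changed, plus character literals in place of 65/90/97; the shift logic is kept exactly as in the question):

#include <cctype>
#include <string>
using std::string;

string decryptCaesar(string ciphertext, int rshift)
{
    string plaintext;
    int cipher_length = ciphertext.length();
    for (int a = 0; a < cipher_length; a++)
    {
        char individual = ciphertext[a];
        // Non-zero means "alphabetic"; never compare the result against true.
        if (std::isalpha(static_cast<unsigned char>(individual)))
        {
            if ('A' <= individual && individual <= 'Z')
            {
                individual = char(((individual - 'A' + 26 - rshift) % 26) + 'A');
            }
            else
            {
                individual = char(((individual - 'a' + 26 - rshift) % 26) + 'a');
            }
        }
        plaintext += individual;
    }
    return plaintext;
}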
int Fun(int m, int n)
{
    if (n == 0)
    {
        return n + 2;
    }
    return Fun(n - 1, m - 1) + Fun(m - 1, n - 1) + 1;
}
I'm completely lost as to what the first case would visually look like for this function. I don't understand why the function has two parameters and why the base case returns a value based on only one of them. What would be the process to work this out? Any input you want to use to explain is fine; I was trying (3,3). Now that I'm thinking about it, how would this function behave if one of the inputs were smaller than the other, like (3,2) or (2,3)?
Note that return n + 2; simplifies to return 2;.
The function takes two arguments (parameters) and returns a single value. That's like the operation of adding two numbers that you were taught in your first year at school.
Whether or not Fun(n - 1, m - 1) is called before Fun(m - 1, n - 1) is not specified by the C++ standard. So I can't tell you what the first recursive call will look like. This gives compilers more freedom in making optimisation choices. Of course the order in which the functions are called has no effect on the eventual result.
The best way of analysing what happens in your particular case is to step through it line by line in a debugger.
There is nothing special about recursive functions - they work exactly like non-recursive functions.
Fun(3,3) does this:
if (3 == 0)
{
    return 3 + 2;
}
return Fun(2, 2) + Fun(2, 2) + 1;
It needs the value of Fun(2,2), which does this:
if (2 == 0)
{
    return 2 + 2;
}
return Fun(1, 1) + Fun(1, 1) + 1;
And that needs Fun(1,1), which does
if (1 == 0)
{
    return 1 + 2;
}
return Fun(0, 0) + Fun(0, 0) + 1;
and Fun(0,0) does
if (0 == 0)
{
    return 0 + 2;
}
return Fun(-1, -1) + Fun(-1, -1) + 1;
which returns 2 since the condition is true.
So, Fun(1, 1) will do
return 2 + 2 + 1;
which is 5, and Fun(2,2) will do
return 5 + 5 + 1;
which is 11, and Fun(3,3) will do
return 11 + 11 + 1;
which is 23.
I'm sure you can work through other examples on your own.
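If you want to check the trace, a tiny driver (the Fun definition is copied from the question) prints the result. Be careful with unequal arguments such as (3,2) or (2,3): as far as I can trace, one branch eventually reaches (0,-1), and once n is negative it only keeps decreasing, so the recursion never hits the n == 0 base case.

#include <iostream>

int Fun(int m, int n)
{
    if (n == 0)
    {
        return n + 2;   // effectively: return 2;
    }
    return Fun(n - 1, m - 1) + Fun(m - 1, n - 1) + 1;
}

int main()
{
    std::cout << Fun(3, 3) << '\n';   // prints 23, matching the trace above
    // Fun(3, 2) and Fun(2, 3) recurse without terminating (stack overflow):
    // they reach (0, -1), and negative n never gets back to 0.
}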
#include <stdio.h>
#include <stdlib.h>

void process_keys34(int * key3, int * key4) {
    *(((int *)&key3) + *key3) += *key4;
}

int main(int argc, char *argv[])
{
    int key3, key4;
    if (key3 != 0 && key4 != 0) {
        process_keys34(&key3, &key4); //first time
    }
    if (true) {
        process_keys34(&key3, &key4);
        msg2 = extract_message2(start, stride); //jump to here
        printf("%s\n", msg2);
    }
}
I tested this code on macOS 10.12 with Xcode 7.
The question can be described simply as:
why &key3 == (int*)&key3,
but &key3 + 2 != (int*)&key3 + 2,
and &key3 + 2 == (int*)&key3 + 4,
when using Xcode 7?
I set key3 and key4 from argv. I want to jump from process_keys34(&key3, &key4); //first time to
msg2 = extract_message2(start, stride); //jump to here.
So I must change the return address of the function process_keys34.
Because there is a variable key4 between &key3 and the return address, I thought I should add 2 to &key3, which means key3 should be 2. But in fact key3 must be 4 for the result to be right.
Then I did some tests in lldb.
I found that
(lldb) p *(&key3 + 2)
(int *) $6 = 0x0000000100000ec3 //that's right.
but
(lldb) p *((int*)&key3+2)
(int) $8 = 1606416192 // I don't know what this means.
Then I tested:
(lldb) p &key3
(int **) $5 = 0x00007fff5fbff6d8
(lldb) p (int*)&key3
(int *) $7 = 0x00007fff5fbff6d8
I found these two are the same,
but &key3 + 2 and (int*)&key3 + 2 are different from each other:
(lldb) p &key3 + 2
(int **) $9 = 0x00007fff5fbff6e8
(lldb) p (int*)&key3 + 2
(int *) $10 = 0x00007fff5fbff6e0
and &key3 + 2 and (int*)&key3 + 4 are the same:
(lldb) p &key3 + 2
(int **) $9 = 0x00007fff5fbff6e8
(lldb) p (int*)&key3 + 4
(int *) $14 = 0x00007fff5fbff6e8
I found that &key3 is an int**, and (int*)&key3 is an int*, which is the only difference between these two commands. But I still can't understand why this happens.
According to C99, the right operand of the + will be changed to the same type as the left, which means the integer 2 will be changed to an int* or an int**. But I think these should make no difference, because sizeof(int*) == sizeof(int**).
I don't know why this happens. Can anyone give me some help?
The results you're seeing aren't meaningful (unless we look at the generated assembly, but that is outside the scope of the question).
The code has two cases of undefined behavior.
The variables key3 and key4 are used uninitialized:
int key3, key4;
if (key3 != 0 && key4 != 0) {
The following line reinterprets the stored value of the object key3, which has type int*, through the incompatible type int:
*(((int *)&key3) + *key3) += *key4;
In other words, an int is written into key3, which has the type int*.
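Since the question says the values come from argv, one way to remove the first problem is to initialize the variables before testing them. A minimal sketch (argument checking kept deliberately simple):

#include <cstdio>
#include <cstdlib>

int main(int argc, char* argv[])
{
    // Initialize from the command line instead of reading indeterminate values.
    int key3 = (argc > 1) ? std::atoi(argv[1]) : 0;
    int key4 = (argc > 2) ? std::atoi(argv[2]) : 0;

    std::printf("key3 = %d, key4 = %d\n", key3, key4);
}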
Because &key3 and (int *)&key3 (note that inside process_keys34, key3 is an int *) do not have the same type.
This is pointer arithmetic.
On some systems sizeof(int) != sizeof(int *), so adding 2 to an int * does not have the same effect as adding 2 to an int **:
(int *)p + 2 advances the address by 2 * sizeof(int) bytes
(int **)p + 2 advances the address by 2 * sizeof(int *) bytes
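A small demonstration of that scaling (a sketch; the array and names are just stand-ins for the &key3 situation in the question):

#include <iostream>

int main()
{
    // An array of pointers, so the arithmetic below stays within one object.
    int* keys[8] = {};
    int** p = &keys[0];   // plays the role of &key3 (an int**) inside process_keys34

    std::cout << static_cast<void*>(p)           << '\n';  // base address
    std::cout << static_cast<void*>(p + 2)       << '\n';  // base + 2 * sizeof(int*)  -> +16 on a typical 64-bit system
    std::cout << static_cast<void*>((int*)p + 2) << '\n';  // base + 2 * sizeof(int)   -> +8
    std::cout << static_cast<void*>((int*)p + 4) << '\n';  // base + 4 * sizeof(int)   -> +16, same as p + 2
}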
I am writing methods for a binary search tree and am having trouble understanding the basics of recursion. I found a method that computes the size of the binary search tree, and I see how it goes through each element of the tree, but I don't understand how exactly it counts the size. Can someone please explain this to me?
Here is the method:
unsigned long BST::sizeHelper(BSTNode* r) {
    if (r == NULL) {
        return 0;
    } else {
        return sizeHelper(r->left) + sizeHelper(r->right) + 1; // +1 for the root
    }
}
I see the return statement, but I don't see any indication of how it is counting the elements as it goes through them.
Each call on a non-null node adds one to the total size (plus whatever its two subtrees return); a NULL child contributes zero.
For example, consider the following tree: A is the root with children B and C; B has a single left child D; and C has children E and F.
Steps are as follow:
Start from A, return size(B) + size(C) + 1.
For B, return size(D) + 0 + 1. (0 because B has no right child, i.e. NULL)
For D, return 0 + 0 + 1. size(D) = 1.
Now going back, size(B) = 1 + 1 = 2.
For C, return size(E) + size(F) + 1.
Similar to D, size(E) = size(F) = 1.
Going back again, size(C) = 1 + 1 + 1 = 3.
Finally, size(A) = 2 + 3 + 1 = 6.
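If it helps to see it run, here is a minimal sketch that builds that same six-node tree and calls the size function on it (written as a free function, with an assumed two-pointer BSTNode layout matching the code above):

#include <iostream>

struct BSTNode {
    BSTNode* left;
    BSTNode* right;
};

unsigned long sizeHelper(BSTNode* r) {
    if (r == nullptr) {
        return 0;
    }
    return sizeHelper(r->left) + sizeHelper(r->right) + 1; // +1 for this node
}

int main() {
    // A has children B and C, B has a single left child D, C has children E and F.
    BSTNode D{nullptr, nullptr}, E{nullptr, nullptr}, F{nullptr, nullptr};
    BSTNode B{&D, nullptr};
    BSTNode C{&E, &F};
    BSTNode A{&B, &C};

    std::cout << sizeHelper(&A) << '\n';   // prints 6
}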
I have a key of the form AcccAA, where A is a capital letter [A...Z] and c is a digit [1..9]. I have 1500 segments.
Here is my temporary hash function:
int HashFunc(string key) {
    int Adress = ((key[0] + key[1] + key[2] + key[3] + key[4] + key[5]) - 339) * 14;
    return Adress;
}
and Excel shows a lot of collisions in the middle of the range (from 400 to 900).
Please tell me how to make the hash function distribute keys more evenly.
A common way to build a hash function in this case is to evaluate some polynomial with prime coefficients, like this one:
int address = key[0] +
31 * key[1] +
137 * key[2] +
1571 * key[3] +
11047 * key[4] +
77813 * key[5];
return address % kNumBuckets;
This gives a much larger dispersion over the key space. Right now you get a lot of collisions because keys that are anagrams of one another (say, A123BC and C123BA) sum to the same value and therefore collide, but with the above hash function the result is much more sensitive to small changes in the input.
For a more complex but (probably) way better hash function, consider using a string hash function like the shift-add-XOR hash, which also gets good dispersion but is less intuitive.
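For reference, one common formulation of the shift-add-XOR hash is only a few lines; a sketch (the 1500-bucket table size is taken from the question):

#include <string>

// Shift-add-XOR string hash: mixes each byte into the running hash value.
unsigned int SaxHash(const std::string& key, unsigned int numBuckets /* e.g. 1500 */)
{
    unsigned int h = 0;
    for (unsigned char c : key)
    {
        h ^= (h << 5) + (h >> 2) + c;
    }
    return h % numBuckets;
}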
Hope this helps!
One way is to construct a guaranteed collision-free number (which will not make your hash table collision free of course), as long as the possible keys fit in an integral type (e.g. int):
int number = (key[0] - 'A') + 26 * (
(key[1] - '0') + 10 * (
(key[2] - '0') + 10 * (
(key[3] - '0') + 10 * (
(key[4] - 'A') + 26 * (
(key[5] - 'A')
)))));
This works since 26 * 10 * 10 * 10 * 26 * 26 = 17576000 which fits into an int fine.
Finally simply hash this integer.
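Putting it together, a sketch of a complete hash function for the AcccAA key shape (the final modulo is just one simple way to "hash this integer"; 1500 is the segment count from the question):

#include <string>

int HashFunc(const std::string& key)
{
    const int kNumBuckets = 1500;

    // Mixed-radix, collision-free encoding of an AcccAA key (fits in an int).
    int number = (key[0] - 'A') + 26 * (
                 (key[1] - '0') + 10 * (
                 (key[2] - '0') + 10 * (
                 (key[3] - '0') + 10 * (
                 (key[4] - 'A') + 26 * (
                 (key[5] - 'A'))))));

    // Reduce the unique number to a bucket index.
    return number % kNumBuckets;
}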