Modifying array values using location in C++

Let's suppose I have an unsigned int* val and unsigned char mat[24][8]. Now val stores the location of the variable mat. Is it possible to modify the bits in mat using the location stored in val?
For example:
val = 0x00000001 and the location of val in memory is 0x20004000;
the first element of mat is located at 0x00000001.
Now I want to modify the value of mat at, say, (10, 4). Is it possible to do this in C++?

Yes, it is possible, unless either of the array members or the pointer target is const.
For example:
int array[3][2] = { { 0, 1 }, { 2, 3 }, { 4, 5 } };
int *p = &array[1][1];
*p = 42;
// array is now: { { 0, 1 }, { 2, 42 }, { 4, 5 } };

Yes, you can change the values in your matrix using the address (what you called the location), but you have to calculate the right offset from the start. The offset calculation should be something like this:
(matrix_x_len * Y + X) * sizeof(element), added to the address of the beginning of the matrix
Then, when you have the offset, you can change mat like this: *(ptr + offset) = new_value, where ptr is a byte pointer (unsigned char *) so the arithmetic is done in bytes (here the element type is unsigned char, so sizeof(element) is 1).
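For instance, a minimal sketch of that calculation for the mat from the question (the value 0x5A is just a placeholder):
unsigned char mat[24][8];
unsigned char *base = &mat[0][0];                        // byte pointer to the start of the matrix
size_t offset = (10 * 8 + 4) * sizeof(unsigned char);    // element (10, 4); sizeof is 1 here
*(base + offset) = 0x5A;                                 // same effect as mat[10][4] = 0x5A;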

You can do it, but make val an unsigned char * instead:
val = (unsigned char *)&mat;
That will make it easy to modify individual bytes (and bits) of mat.

You can of course modify the bits, since unsigned char mat[24][8] gives you a memory chunk of 24*8*sizeof(unsigned char) bytes.
(I assume from here on that unsigned char is 1 byte (= 8 bits) and unsigned int is 4 bytes (= 32 bits), but this may depend on your system.)
But accessing memory elements of 1-byte width through a pointer to elements of 4-byte width is tricky and can easily produce errors.
If you set element 0 of the int array to 1 for example
#define ROWS 24
#define COLS 8
unsigned char mat[ROWS][COLS];
unsigned int * val = (unsigned int*)&mat;
val[0] = 1;
You will then see that one of the four bytes mat[0][0]..mat[0][3] is 1 and the other three are 0; which one depends on your machine's byte order (mat[0][0] on a little-endian system, mat[0][3] on a big-endian one).
Please note that you cannot edit the elements of mat directly using their byte offset in memory via such a "mis-typed" pointer.
Accessing val[10*8+4], for example, will access byte 336 from the beginning of your memory chunk, which has only 192 bytes.
You will have to calculate your index correctly:
size_t byte_index = (10*COLS+4)*sizeof(unsigned char); // will be 84
size_t int_index = byte_index / sizeof(unsigned int); // will be 21
size_t sub_byte = byte_index%sizeof(unsigned int); // will be 0
Therefore you can access val[int_index] or val[21] to get at the 4 bytes that contain the data of element mat[10][4], which is byte number sub_byte of the referred-to unsigned int value.
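In the spirit of the snippet above, a sketch of modifying mat[10][4] through val (endianness-dependent; this assumes a little-endian machine, where sub_byte 0 is the lowest-order byte, and new_byte is just a placeholder):
unsigned char new_byte = 0x5A;                      // placeholder value
unsigned int word = val[int_index];                 // the 4 bytes containing mat[10][4]
unsigned int shift = sub_byte * 8;                  // bit position of that byte on little-endian
word &= ~(0xFFu << shift);                          // clear the old byte
word |= (unsigned int)new_byte << shift;            // insert the new byte
val[int_index] = word;                              // write the 4 bytes back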
If you have the same types there is no problem except that you need to calculate the correct offset.
#define ROWS 24
#define COLS 8
unsigned char mat[ROWS][COLS];
unsigned char * val = &mat[0][0];
val[10*8+4] = 12; // set mat[10][4] to 12
*(val+10*8+5) = 13; // set mat[10][5] to 13
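And since the original question was about bits: continuing the snippet above, individual bits of one element can be changed like this (bit 3 is an arbitrary example):
size_t idx = 10 * COLS + 4;      // element mat[10][4]
val[idx] |=  (1u << 3);          // set bit 3
val[idx] &= ~(1u << 3);          // clear bit 3
val[idx] ^=  (1u << 3);          // toggle bit 3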

Related

Why can't I access int with different pointer in C++?

I want to be able to access the value through the pointer p. But when I use p, the variable b always comes out as zero. Please refer to the code snippet below.
void *basepointer = malloc(512);
*((int*)basepointer+32) = 455; // putting int value in memory
void *p = basepointer + 32; // creating other pointer
int a,b;
a = *((int*)basepointer+32); // 455, retrieving value using basepointer
b = *((int*)p); // 0, retrieving value using p
Why does this happen? How can I access the value through my pointer p?
I can't find a good duplicate answer, so here's what's going on:
Pointer arithmetic always happens in units of the pointer's base type. That is, when you have T *ptr (a pointer to some type T), then ptr + 1 is not the next byte in memory, but the next T.
In other words, you can imagine a pointer like a combination of an array and an index:
T *ptr;
T array[/*some size*/];
ptr = &array[n];
If ptr is a pointer to array[n] (the nth element), then ptr + i is a pointer to array[n + i] (the (n+i)th element).
Let's take a look at your code:
*((int*)basepointer+32) = 455;
Here you're casting basepointer to (int*), then adding 32 to it. This gives you the address of the 32nd int after basepointer. If your platform uses 4 byte ints, then the actual offset is 32 * 4 = 128 bytes. This is where we store 455.
Then you do
void *p = basepointer + 32;
This is technically invalid code because basepointer is a void *, and you can't do arithmetic in terms of void because void has no size. As an extension, gcc supports this and pretends void has size 1. (But you really shouldn't rely on this: cast to unsigned char * if you want bytewise addressing.)
Now p is at offset 32 after basepointer.
a = *((int*)basepointer+32);
This repeats the pointer arithmetic from above and retrieves the value from int offset 32 (i.e. byte offset 128), which is still 455.
b = *((int*)p);
This retrieves the int value stored at byte offset 32 (which would correspond to int offset 8 in this example). We never stored anything here, so b is essentially garbage (it happens to be 0 on your platform).
The smallest change to make this code work as expected is probably
void *p = (int *)basepointer + 32; // use int-wise arithmetic to compute p
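Putting it together, a minimal self-contained sketch of the two equivalent ways to get at the stored value (assuming 4-byte int, and using an explicit byte pointer instead of void* arithmetic):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *basepointer = malloc(512);
    *((int *)basepointer + 32) = 455;       // stored at int offset 32 = byte offset 32 * sizeof(int)

    void *p1 = (int *)basepointer + 32;                           // int-wise arithmetic
    void *p2 = (unsigned char *)basepointer + 32 * sizeof(int);   // byte-wise arithmetic, same address

    printf("%d %d\n", *(int *)p1, *(int *)p2);   // both print 455
    free(basepointer);
    return 0;
}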
void pointer arithmetic is illegal in C and C++; GCC allows it as an extension.
you need to modify this line:
void *p = basepointer + 32;
to
void *p = basepointer + 32 * sizeof(int);
because pointer arithmetic is done in units of the pointed-to type's size.
For example:
if sizeof(int) = 4
and sizeof(short) = 2
and int *p holds address 3000
and short *sp holds address 4000
then p + 3 = p + 3 * sizeof(int) = 3012
and sp + 3 = sp + 3 * sizeof(short) = 4006
For void the unit is 1, if the compiler allows it at all.
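If you want to see the step sizes for yourself, a small sketch (the printed byte counts assume 4-byte int and 2-byte short):
#include <stdio.h>

int main(void)
{
    int   ints[4]   = {0};
    short shorts[4] = {0};
    int   *p  = ints;
    short *sp = shorts;

    // p + 3 is 3 * sizeof(int) bytes past p; sp + 3 is 3 * sizeof(short) bytes past sp
    printf("int step:   %d bytes\n", (int)((char *)(p + 3)  - (char *)p));   // typically 12
    printf("short step: %d bytes\n", (int)((char *)(sp + 3) - (char *)sp));  // typically 6
    return 0;
}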

Addition of long values shows different o/p

I am facing a problem when adding long values. Example:
typedef unsigned short UINT16;
UINT16* flash_dest_ptr; // holds the address 0xFF910000
UINT16 data_length;     // hex 0x000002AA, dec 682
// now when I add
UINT16 *memory_loc_ver = flash_dest_ptr + data_length;
dbug_printf( DBUG_ERROR | DBUG_NAVD, " ADD hex =0x%08X\n\r", memory_loc_ver );
Actual output = 0xFF910554
// shouldn't the output be 0xFF9102AA?
It's pointer arithmetic, so
UINT16 *memory_loc_ver = flash_dest_ptr + data_length ;
advances flash_dest_ptr by data_length * sizeof (UINT16) bytes.
Typically, sizeof (UINT16) would be 2, and
2 * 0x2AA = 0x554
When you add an integer to a pointer, you move the pointer by that many elements (here UINT16s), i.e. by data_length * sizeof(UINT16) bytes, not by data_length bytes.
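A small sketch that reproduces the effect (the buffer here is just a stand-in for the flash region; UINT16 is assumed to be 2 bytes):
#include <stdio.h>

typedef unsigned short UINT16;

int main(void)
{
    UINT16 buffer[0x300];                     // stand-in for the flash region
    UINT16 *flash_dest_ptr = buffer;
    UINT16 data_length = 0x02AA;

    UINT16 *memory_loc_ver = flash_dest_ptr + data_length;   // advances data_length elements

    unsigned long byte_offset =
        (unsigned long)((unsigned char *)memory_loc_ver - (unsigned char *)flash_dest_ptr);
    printf("byte offset = 0x%lX\n", byte_offset);   // prints 0x554, i.e. 0x2AA * sizeof(UINT16)
    return 0;
}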

Cast int* to const short int*

I am using functions from a library where a most-important function takes arguments of type const short int*. What I have instead is int * and was wondering if there was a way of casting an int * into a const short int*. The following code snippet highlights the problem I am facing:
/* simple program to convert int* to const short* */
#include <stdio.h>
#include <iostream>
#include <stdlib.h>
void disp(const short* arr, int len) {
    int i = 0;
    for (i = 0; i < len; i++) {
        printf("ith index = %hd\n", arr[i]);
    }
}

int main(int argc, char* argv[]) {
    int len = 10, i = 0;
    int *arr = (int *) malloc(sizeof(int) * len);
    for (i = 0; i < len; i++) {
        arr[i] = i;
    }
    disp(arr, len);
    return 0;
}
The above code snippet compiles.
This is what I have tried so far:
1. Tried the C-style cast. The function call looked something like:
disp((const short*) arr, len). The resulting output was quite weird:
ith index = 0
ith index = 0
ith index = 1
ith index = 0
ith index = 2
ith index = 0
ith index = 3
ith index = 0
ith index = 4
ith index = 0
2. Tried the const-ness cast. The function call looked like:
disp(const_cast<const short*> arr, len);
This resulted in an error while compiling.
My question(s):
1. Why is the output in approach 1 so weird? What is going on over there?
2. I saw some examples that remove const-ness using the const cast in approach 2. I do not know how to add the same.
3. Is there a way to cast an int* into a const short int*?
P.S: If there are questions of this sort that have been asked before, please do let me know. I googled it and did not find anything specific.
In general, casting from int * to short * will not give useful behaviour (in fact, it will probably lead to undefined behaviour if you try to dereference the resulting pointer). They are pointers to fundamentally different types.
If your function expects a pointer to a bunch of shorts, then that's what you'll need to give it. You'll need to create an array of short, and populate it from your original array.
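A minimal sketch of that approach (assuming the values actually fit in a short; disp is the function from the question):
#include <vector>

void disp(const short* arr, int len);   // declared in the question

void call_disp(const int *src, int len)
{
    std::vector<short> tmp(len);
    for (int i = 0; i < len; ++i)
        tmp[i] = static_cast<short>(src[i]);   // element-by-element narrowing copy
    disp(tmp.data(), len);                     // pass a real short array, as the function expects
}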
Casts are almost always a sign of a design problem. If you have a function that takes a short* (const or otherwise), you need to call it with a short*. So instead of allocating an array of int, allocate an array of short.
When you cast an int* to a short* you're telling the compiler to pretend that the int* is really a pointer to short. It will do that, but having lied to the compiler, you're responsible for the consequences.
Right, so the SIZE of a short is different from that of an int in many environments (it doesn't have to be; there are certainly compilers that, for example, have 16-bit int and 16-bit short).
So what you get out of casting a pointer in this way is, a) undefined behaviour, and b) unless you know exactly what you are doing, probably not what you wanted anyway.
The output from your code looks perfectly fine from what I'd expect, so clearly you are not telling us something about what you are trying to do!
Edit: Note that pointer arithmetic depends on the pointed-to type's size. If you cast a pointer to one type into a pointer to a type of a different size, the stride through the data will be wrong, which is why every other value is zero in your output: an int is [typically] 4 bytes and a short 2 bytes, so the short pointer "steps" 2 bytes per index (arr[i] is the original arr + i * 2 bytes), whereas the int pointer steps 4 bytes per index (arr + i * 4 bytes). Depending on your integer values you will get some "half an integer" values out of this; since your numbers are small, those halves are zero.
So, whilst the compiler is doing exactly what you ask it to do: make a pointer to short point to a lump of memory containing int, it won't do what you EXPECT it to do, namely translate each int into a short. To do that, you will either have to do:
short **sarr = (short **)malloc(sizeof(short *) * 10);
for(i = 0; i < 10; i++)
{
sarr[i] = (short *)(&arr[i]); // This may well not work, since you get the "wrong half" of the int.
}
If this gives the wrong half, you can do this:
sarr[i] = (short *)(&arr[i])+1; // Get second half of int.
But it depends on the "byte-order" (big endian or little endian) so it's not portable. And it's very bad coding style to do this in general code.
Or:
short *sarr = (short *)malloc(sizeof(short) * 10);
for(i = 0; i < 10; i++)
{
sarr[i] = (short)(arr[i]);
}
The second method works in the sense that it makes a copy of your integer array into a short array. This also doesn't depend on the order the content is stored, or anything like that. Of course, if the value in arr[i] is greater than what can fit in a short, you don't get the same value as arr[i] in your short!
And in both cases don't forget to free your array when it is no longer needed.
Answer to q.1
The output is not weird at all. Apparently, your compiler assigns each int 4 bytes and each short 2 bytes. Therefore, your arr is 10 x 4 = 40 bytes in size. When you assign the numbers 0..9 to it you have in memory (one number per byte, grouped by int, shown here with the high-order byte first):
0 0 0 0
0 0 0 1
0 0 0 2
0 0 0 3
0 0 0 4
...
When you cast arr to short, the memory is now "grouped" into 2-byte units:
0 0
0 0
0 0
0 1
0 0
0 2
0 0
...
I hope you can see the effect, and why you suddenly have 0s in between the numbers you assigned. (The exact byte order within each int depends on endianness; on your little-endian machine the low-order byte of each int comes first, which is why the value shows up before the zero in each pair of shorts.)
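If you want to see this layout on your own machine, a small sketch that prints the raw bytes of an int array (the exact order within each int depends on endianness):
#include <stdio.h>

int main(void)
{
    int arr[3] = {1, 2, 3};
    const unsigned char *bytes = (const unsigned char *)arr;
    for (unsigned i = 0; i < sizeof arr; ++i)
        printf("%u ", (unsigned)bytes[i]);   // e.g. "1 0 0 0 2 0 0 0 3 0 0 0" on a little-endian machine
    printf("\n");
    return 0;
}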
Answer to q.2
The const in the declaration of disp only means that the contents of arr will not be changed during the execution of disp. You do not need an explicit cast just to add const when invoking disp; the cast is only needed because of the int*/short* type mismatch.
Answer to q.3
Please see the answers by @Pete and @Mats.

How does Murmurhash3_x86_128 work for data larger than 15 bytes?

I want to use MurmurHash3 in a deduplication system with no adversary. So Murmurhash3 will hash files, for instance.
However I am having problems using it, meaning I am doing something wrong.
The MurmurHash3_x86_128() function (source code) receives four parameters. This is my understanding of what they are:
key - input data to hash
len - data length
seed - seed
out - computed hash value
When I run it, it fails with a segmentation fault because of this part of the code:
void MurmurHash3_x86_128 ( const void * key, const uint32_t len,
                           uint32_t seed, void * out )
{
  const uint8_t * data = (const uint8_t*)key;
  const uint32_t nblocks = len / 16;
  ...
  const uint32_t * blocks = (const uint32_t *)(data + nblocks*16);
  for(i = -nblocks; i; i++)
  {
    uint32_t k1 = blocks[i*4];
    ...
  }
  ...
}
So if my data is longer than 15 bytes (which is the case), this for loop is executed. However, blocks points to the end of my data array, and the loop then accesses memory positions beyond it. That explains the segmentation faults. So key can't just be my data array.
My question is: What should I put in key parameter?
Problem solved
After Mats Petersson's answer I realized my code had a bug: i must be a signed int, and I had declared it unsigned. That is the reason why it was adding to the blocks index instead of subtracting.
blocks points at the last even multiple of 16 bytes in the block being calculated.
i starts at -nblocks, and is always less than zero (loop ends at zero).
So, say you have 64 bytes of data, then the pointer blocks will point at data + 64 bytes, and nblocks will be 4.
When we get to k1 = blocks[i*4]; the first time, i = -4, so we get index -16, which is multiplied by sizeof(*blocks), that is 4 (int is 4 bytes on most architectures), so we end up 64 bytes before blocks, i.e. at the start address of data.
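So, to answer the original question: key is just a pointer to your data buffer, len is its size in bytes, and out must point to at least 16 bytes (128 bits) of writable storage. A hedged usage sketch (the seed and buffer contents are placeholders; the parameter types follow the prototype shown above and may differ slightly between MurmurHash3 versions):
#include <stdint.h>
#include <stdio.h>
#include <string.h>

void MurmurHash3_x86_128(const void *key, const uint32_t len, uint32_t seed, void *out);

int main(void)
{
    const char data[] = "some file contents, longer than 15 bytes";
    uint32_t hash[4];                         // 128-bit result, four 32-bit words

    MurmurHash3_x86_128(data, (uint32_t)strlen(data), 42u, hash);

    printf("%08x%08x%08x%08x\n",
           (unsigned)hash[0], (unsigned)hash[1], (unsigned)hash[2], (unsigned)hash[3]);
    return 0;
}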

Assigning multiple integers to a character array in binary

I have three integers (4 bytes of memory for each integer) and I want to assign each of their binary values to a character array with 12 elements. So, if each integer had a value of let's say 2, then I want the character array to have these values:
2 0 0 0 2 0 0 0 2 0 0 0
I have tried:
memcpy(cTemp, &integer1 + &integer2 + &integer3, 12);
but I receive an "invalid operands" compiler error.
I have also found the function strcat referenced here: http://www.cplusplus.com/reference/clibrary/cstring/
However it is documented as: "The terminating null character in destination is overwritten by the first character of source", which I obviously don't want, since most of the time the integers will have a null byte at the end unless the value is really large. Does anybody know of a better working method? Any help is appreciated.
It is probably simpler (if you are on an x86 at least :P) to just cast the pointer and assign directly, i.e.:
int* p = (int*) cTemp;   // treat the char buffer as an array of int; a, b, c are your three integers
p[0] = a;
p[1] = b;
p[2] = c;
You can also do a union hack:
union translate {
    char c[sizeof(int) * 3];
    int i[3];
};

translate t;
t.i[0] = 2;
t.i[1] = 2;
t.i[2] = 2;
// access t.c[x] to get the chars
... and read the chars...
If you want to see how a variable is represented as a sequence of bytes, you can do the following.
int i[3] = {2, 2, 2};
char cTemp[sizeof i];
memcpy(cTemp, &i, sizeof i);
Note however that the representation will be different on different platforms. What are you trying to solve?
Edit:
I'm just writing a program to edit [a file], and the file happens to store integers in binary.
Why didn't you say so in the first place? If you know the program will only run on platforms where int has the correct memory-layout, you can simply store the integer.
fout.write((char const *)&i, sizeof i);
However, if you want to be portable, you need to properly serialize it.
#include <cassert>
#include <fstream>

void store_uint32_le(char * dest, unsigned long value)
{
    for (int i = 0; i < 4; ++i)
    {
        *dest++ = value & 0xff;
        value >>= 8;
    }
    assert(value == 0);   // the value must fit in 32 bits
}

int main()
{
    char serialized[12];
    store_uint32_le(serialized, 2);
    store_uint32_le(serialized + 4, 2);
    store_uint32_le(serialized + 8, 2);
    std::ofstream fout("myfile.bin", std::ios::binary);
    fout.write(serialized, sizeof serialized);
}
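A possible matching reader for that format (not part of the original answer), assuming the same 4-byte little-endian layout:
unsigned long load_uint32_le(const char * src)
{
    unsigned long value = 0;
    for (int i = 3; i >= 0; --i)
    {
        value <<= 8;
        value |= (unsigned char)src[i];   // the least significant byte was stored first
    }
    return value;
}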
I think this should work:
int i,j,k;
char a[12];
*((int*)a) = i;
*(((int*)a)+1) = j;
*(((int*)a)+2) = k;