How do I assign a value to uint32_t key[4] - C++

How do I assign a value to uint32_t key[4]?
I initially have this: uint32_t iv[2] = {0xFFFFFFDD};
Then on my second run I need to assign a new value. Let's assume the new value is
10273348653513887325 (decimal), but it is recorded as a string for now:
string value = "10273348653513887325";
I want to change the value of iv from 0xFFFFFFDD (hexadecimal) to 10273348653513887325 (decimal).
How do I do it?

You don't. 10273348653513887325 will not fit into a uint32_t. You can pretend that iv is a 64-bit value, thus:
uint64_t* piv = (uint64_t*)&iv[0];
*piv = decValue; // where decValue is the numeric conversion of the string
but after you've done that, what you'll really have are two 32-bit values that each hold a portion of the value you're trying to store. As if that weren't awkward enough, the endianness of the architecture you're using comes into play, and the cast itself violates the strict aliasing rule. So don't do that.
Another alternative is to use an anonymous union of two 32-bit values and one 64-bit value.
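A minimal sketch of that union approach, assuming the string is parsed with std::stoull (the union is named here for clarity; the name is illustrative):

#include <cstdint>
#include <string>

// One 64-bit value overlaid on two 32-bit halves.
// Caution: reading the union member you didn't last write is well-defined in C,
// but technically undefined behavior in C++ (most compilers tolerate it).
union iv_union {
    uint64_t whole;
    uint32_t halves[2];
};

int main() {
    std::string value = "10273348653513887325";
    iv_union iv;
    iv.whole = std::stoull(value); // parse the decimal string into 64 bits
    // iv.halves[0] and iv.halves[1] now each hold half of the value;
    // which half is which depends on the machine's endianness.
    return 0;
}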
It might merit pointing out that the assignment
uint32_t iv[2] = {0xFFFFFFDD};
results in the first element in iv having that value, and the second containing 0. Is that what you meant?
As pointed out by @hvd, there's some lack of clarity about what you're trying to achieve.

Related

Casting int16_t to uint8_t*

I have a library-driven result stored as an int16_t value (it is negative, and I can use its absolute value), and another library function that requires this value a few steps later in the form of a uint8_t*. How can this be done without using String?
The following code works, but uses the dreaded String class. Is there a way to do this without invoking String or std::string?
void setup() {
    Serial.begin(9600);
}

void loop() {
    delay(5000);
    String initialVal = String(-fetchInt());
    Serial.print("Initial value is: "); Serial.println(initialVal); // prints "76"
    uint8_t medianVal[sizeof(initialVal)]; // note: sizeof(initialVal) is the size of the String object, not the text length
    initialVal.getBytes(medianVal, sizeof(initialVal));
    Serial.print("median value is: "); Serial.println((char*)medianVal); // prints "76"
    uint8_t* finalVal = medianVal;
    Serial.print("final value is: "); Serial.println((char*)finalVal); // prints "76"
}

int16_t fetchInt() {
    return -76;
}
So, how can I turn an int16_t into a uint8_t*?
It has been pointed out in the comments that
Serial.println(static_cast<unsigned>(*finalVal)); works, but this solution converts the uint8_t to an unsigned int, and the method requires a uint8_t*.
I come from Java and the like, and it seems crazy that it is so hard to convert an integer to a string.
A pointer of type uint8_t* cannot point to an object of type int16_t; you need to copy the value of firstVal, and for that you'll need a separate object to hold it.
uint8_t firstValAbs = firstVal >= 0 ? firstVal : -firstVal;
uint8_t* secondVal = &firstValAbs;
Note: uint8_t x = -34 will not give you the absolute value of -34, i.e. it will not result in 34. You'll instead get the two's-complement wrap-around of -34, i.e. 255 - 34 + 1 == 222.
int16_t stores a signed numeric value using 16 bits (-32,768 to 32,767).
uint8_t stores an unsigned numeric value using 8 bits (0 to 255).
If you are sure your int16_t value fits into a uint8_t after changing the sign, you can just assign it:
int16_t firstVal = -76;
uint8_t secondVal = -firstVal;
Now, if you need a pointer to the second value, you can just create it. You cannot point directly to firstVal, because you need to store the changed value somewhere.
uint8_t* secondValPointer = &secondVal;
This uint8_t* may be interpreted as a pointer to a character by your library. Normally you would use char for this purpose (also 8 bits, though whether it is signed or unsigned is implementation-defined). You can cast this pointer to char*, but you need to tell the compiler that you want to cast between pointer types by using reinterpret_cast:
char *secondValAsChar = reinterpret_cast<char*>(secondValPointer);
Now you can treat this pointer as a pointer to a character. For example, the following code will print 'L', because the ASCII code for 'L' is 76:
std::cout << *secondValAsChar << std::endl;
However, you must be very careful with this pointer: secondValAsChar is not a null-terminated string, so you may not use it with the usual C string functions like strcat.
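Putting the pieces of this answer together, a minimal self-contained sketch (with fetchInt standing in for the library call from the question) might look like this:

#include <cstdint>
#include <iostream>

int16_t fetchInt() { return -76; } // stand-in for the library function

int main() {
    int16_t firstVal = fetchInt();                              // -76
    uint8_t firstValAbs = firstVal >= 0 ? firstVal : -firstVal; // 76, fits in a uint8_t
    uint8_t* secondVal = &firstValAbs;                          // the uint8_t* the other library wants
    char* secondValAsChar = reinterpret_cast<char*>(secondVal);
    std::cout << *secondValAsChar << std::endl;                 // prints 'L' (ASCII 76)
    return 0;
}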

Lower 25 bits of uint64_t

I am trying to extract the lower 25 bits of a uint64_t into a uint32_t. This solution shows how to extract the lower 16 bits from a uint32_t, but I am not able to figure out how to do it for a uint64_t. Any help would be appreciated.
See How do you set, clear, and toggle a single bit? for bit operations.
To answer your question:
uint64_t lower25Bits = inputValue & (uint64_t)0x1FFFFFF;
Just mask with a mask that keeps only the bits you care about.
uint32_t out = input & ((1UL<<25)-1);
The idea here is: 1UL<<25 provides an (unsigned long, which is guaranteed to be at least 32 bits wide) integer with just bit 25 set, i.e.
00000010000000000000000000000000
the -1 turns it into a value with all the bits below that set, i.e.:
00000001111111111111111111111111
and the AND "lets through" only the bits that correspond to one in the mask.
Another way is to throw away those bits with a double shift:
uint32_t out = (((uint32_t)input)<<7)>>7;
The cast to uint32_t makes sure we are dealing with a 32-bit wide unsigned integer; the unsigned part is important to get well-defined results with shifts (and bitwise operations in general), the 32 bit-wide part because we need a type with known size for this trick to work.
Let's say that (uint32_t)input is
11111111111111111111111111111111
we left shift it by 32-25=7; this throws away the top 7 bits
11111111111111111111111110000000
and we right-shift it back in place:
00000001111111111111111111111111
and there we go, we got just the bottom 25 bits.
Notice that the first uint32_t cast wouldn't be strictly necessary, because you already have a known-size unsigned value; you could just do (input<<39)>>39. But (1) I prefer to be sure (what if tomorrow input becomes a type with another size or signedness?), and (2) current CPUs are generally more efficient when working with 32-bit integers than with 64-bit integers.
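If you need this in several places, the mask idea generalizes nicely; here is an illustrative helper (not from the answers above, just a sketch):

#include <cstdint>

// Keep the lowest N bits of a 64-bit value.
template <unsigned N>
constexpr uint64_t lowBits(uint64_t v) {
    static_assert(N > 0 && N < 64, "N must be in [1, 63]");
    return v & ((UINT64_C(1) << N) - 1);
}

// Usage:
// uint32_t out = static_cast<uint32_t>(lowBits<25>(input));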

pointer conversions char* to int* without copying

I have a file laid out as such:
Offset  Type            Value   Meaning
0000    32-bit integer  60000   count of data
0004    32-bit integer  32      width
0008    32-bit integer  32      height
0012    byte            1       data; etc.
I read in the data by quickly dumping the file contents into a string.
Due to the size of the file and the nature of the data and how it's used, I'd like to avoid copying it all over the place.
So I wish to use pointers into the data to save time, but I can't get them to work quite right.
I'd like to do something like this:
std::string mydata;
dumpdata("myfile.bin", mydata); // dumps the file contents into the string passed by reference
uint32_t* count = (uint32_t*)&mydata[4]; // 32-bit integer
I know I'm forgetting a cast or something, but I can't figure out what it is.
What I tried didn't work.
To make this clear: I do not want to copy. I just want count to point to that area and treat it like a uint32_t even though it's an array of bytes.
This is probably a super easy question, but it's one of those "Google thinks I'm trying to convert ASCII" things, and I'm not.
I expect that if the data is 00002710H I will get the uint 10000. Instead I get 270,991,360.
I just want count to point to that area and treat it like an uint32_t even tho its an array of bytes.
This is not possible in Standard C++ (nor in C, for that matter): accessing the string's bytes through a uint32_t* violates the strict aliasing rule, and the bytes may not even be suitably aligned for a uint32_t.
If your intent is to read the data, one solution would be to write:
#include <cstring> // for std::memcpy

uint32_t get_count(std::string const& mydata)
{
    uint32_t x;
    std::memcpy(&x, &mydata[4], 4); // copy the four bytes into a real uint32_t
    return x;
}

Make your own data range in c++

I want to have a data variable which will be an integer whose range is
0 to 1,000,000.
For example, normal int variables can store numbers from -2,147,483,648 to 2,147,483,647.
I want the new data type to have a smaller range so it can have LESS SIZE.
If there is a way to do that, please let me know.
There isn't; you can't specify arbitrary ranges for variables like this in C++.
You need 20 bits to store 1,000,000 different values, so using a 32-bit integer is the best you can do without creating a custom data type (even then you'd only be saving 1 byte at 24 bits, since you can't allocate less than 8 bits).
As for enforcing the range of values, you could do that with a custom class, but I assume your goal isn't the validation but the size reduction.
So, there's no true good answer to this problem. Here are a few thoughts though:
If you're talking about an array of these 20 bit values, then perhaps the answers at this question will be helpful: Bit packing of array of integers
On the other hand, perhaps we are talking about an object, that has 3 int20_ts in it, and you'd like it to take up less space than it would normally. In that case, we could use a bitfield.
struct object {
    int a : 20;
    int b : 20;
    int c : 20;
} __attribute__((__packed__)); // GCC/Clang extension

printf("sizeof object: %zu\n", sizeof(struct object));
This code will probably print 8, signifying that it is using 8 bytes of space, not the 12 that you would normally expect.
You can only have data types whose size is a multiple of 8 bits, because otherwise the type wouldn't be addressable. Imagine a pointer to 5 bits of data: it can't exist.
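For completeness, here is a minimal sketch of the validating-class idea mentioned above (the class name is illustrative). Note that it still occupies 4 bytes, so it buys range checking, not size reduction:

#include <cstdint>
#include <stdexcept>

class RangedInt {
    uint32_t value_;
public:
    explicit RangedInt(uint32_t v) : value_(v) {
        if (v > 1000000) // enforce the 0..1,000,000 range
            throw std::out_of_range("value exceeds 1,000,000");
    }
    uint32_t get() const { return value_; }
};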

Confused by the output of the following program

float b = 1.0f;
int i = b;
int& j = (int&)i;
cout << j << endl;

Output: 1

But for the following scenario:

float b = 1.0f;
int i = b;
int& j = (int&)b;
cout << j << endl;

Output: 1065353216

Since both have the same value, it should show the same result. Can anyone please explain what is really happening when I change line 3?
In the first one, you are doing everything fine: the compiler is able to convert float b to int i, losing precision, but that is a proper conversion. In the second example, if you watch the variables in a debugger, you can see that the float is now simply interpreted as an int: its bits are read as the integer's bits, which leads to the result you are getting. So basically, you take the float's binary representation (sign bit, exponent and mantissa) and try to interpret it as an int.
In the first case you're initializing j correctly and the cast is superfluous. In the second case you're doing it wrong (i.e. to an object of a different type) but the cast shuts the compiler up.
In this second case, what you get is probably the internal representation of 1.0f interpreted as an integer.
Integer 1 and floating-point 1.0f may be mathematically the same value, but in C++ they have different types, with different representations.
Casting an lvalue to a reference is equivalent to reinterpret_cast; it says "look at whatever is in this memory location, and interpret those bytes as an int".
In the first case, the memory contains an int, so interpreting those bytes as an int gives expected value.
In the second case, the memory contains a float, so you see the bytes (or perhaps just some of them, or perhaps some extra ones too, if sizeof(int) != sizeof(float)) that represent the floating-point number, reinterpreted as an integer.
Your computer probably uses 32-bit int and 32-bit IEEE float representations. The float value 1.0f has a sign bit of zero, an exponent of zero (represented by the 8-bit value 127, or 01111111 in binary), and a mantissa of 1 (represented by the 23-bit value zero), so the 32-bit pattern would look like:
00111111 10000000 00000000 00000000
When reinterpreted as an integer, this gives the hex value 0x3f800000, which is 1065353216 in decimal.
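To check this for yourself without invoking undefined behavior, you can copy the float's bytes into an integer and print them; a minimal sketch:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float b = 1.0f;
    uint32_t bits;
    std::memcpy(&bits, &b, sizeof bits);      // well-defined, unlike (int&)b
    std::printf("0x%08x = %u\n", bits, bits); // prints: 0x3f800000 = 1065353216
    return 0;
}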
A reference doesn't allocate any memory; it just adds an entry to the table of local names and their addresses. In the first case the name j refers to the memory previously allocated for an int (the variable i), while in the second case j refers to memory allocated for a float (the variable b). When you use j, the compiler interprets the data at that address as if it were an int, but in fact a float is stored there; that's why you get "strange" numbers instead of 1.
The first one converts b to an int before assigning it to i. This is the "proper" way: the compiler genuinely converts the value.
The second one does no conversion and reinterprets b's bits as an integer. If you read up on the floating-point format, you can see exactly why you get the value you're getting.
Under the covers, all your variables are just collections of bits. How you interpret those bits changes the perceived value they represent. In the first case, the bit pattern is rearranged to preserve the perceived value (of 1). In the second case, the bit pattern is not rearranged, so the perceived value is not properly converted.