How can I insert an integer into a char array in C++ using shifting?

I am really new to c++ so sorry if this question is bad or not understandable.
I have an integer, for example int a; its value can be between 1 and 3500 (I read this value from a file). I also have an array of chars, unsigned char packet[MAX_PACKET_SIZE];. My goal is to place this integer value into the array between indexes packet[10] and packet[17], so it occupies 8 bytes in the end.
If the value for a is 1 i would like my packet array to be:
packet[10] = 30, packet[11] = 30, packet[12] = 30, packet[13] = 30, packet[14] = 30, packet[15] = 30, packet[16] = 30, packet[17] = 31

You have to look into binary representation and binary math to really understand what bit-wise operations do to values.
Note that 3500 easily fits into a 16-bit value, since 3500 is less than 2^16.
If you want to use types with guaranteed sizes, you have to use uint8_t, uint16_t and similar.
Such operations require a careful approach in C++ if you want portable code. An int may have a different size and even a different byte order (endianness), but the bit-wise shift operators >> and << are agnostic to endianness: << always shifts toward more significant digits, >> always shifts toward less significant ones.
Note that in C++ the operands of bit-wise operations are promoted to unsigned int or a larger type if required. Shift operations on negative signed values are undefined (for <<) or implementation-defined (for >>).
A naive but safe variant of the required algorithm takes the following steps. (First, decide in which order to write the bytes into the buffer; let's assume from least significant to most significant.)
1. Determine the first byte to write to, pointed to by p.
2. Determine the size of the written value in bytes; a pend pointer marks the byte just past the written value.
3. "Cut" the lowest byte out of the value with an AND operation (&) and a mask of all 1's, and assign it to the location pointed to by p.
4. Remove the written byte from the value by shifting right (>>).
5. Increment p.
6. If p != pend, go to step 3.
7. (Optional) Save p or pend for further purposes, e.g. for sequenced writes.
In a C-styled (but still valid C++) variant, this would look like:
unsigned char * pack_int(unsigned char *p, unsigned value)
{
    unsigned char *pend = p + sizeof(value);
    while (p != pend)
    {
        // ~ is a bit-wise not, ~0 produces an int with all bits set
        *p = value & ((unsigned char)~0);
        value >>= CHAR_BIT;
        p++;
    }
    return p;
}
Using ((unsigned char)~0) instead of the literal 0xFF is simply protection against bytes that aren't 8 bits wide; the compiler converts it into the correct literal value.
C++ allows making this implementation type-agnostic, e.g. a version that still requires sequenced iterators to address the output location:
template <class InIt, class T>
InIt pack(InIt p, T value)
{
    using target_t = std::make_unsigned_t<std::remove_reference_t<decltype(*p)>>;
    using src_t = std::make_unsigned_t<T>;
    InIt pend = p + sizeof(T);
    src_t val = static_cast<src_t>(value); // if T is signed, it would fit anyway.
    while (p != pend)
    {
        *p = (val & (target_t)~0);
        val >>= CHAR_BIT;
        p++;
    }
    return pend;
}
For iterators that don't support pointer arithmetic with +, one can use std::advance instead:
InIt pend = p;
std::advance(pend, sizeof(T));
A better implementation in C++ would generate conversion sequence statically, during compilation, instead of using a loop, by application of recursive templates.
This is a fully functional program that uses both:
#include <iostream>
#include <array>
#include <climits>
#include <type_traits>
// C-styled function
// packs value into buffer, returns pointer to the byte after its end.
unsigned char * pack_int(unsigned char *p, unsigned value)
{
    unsigned char *pend = p + sizeof(value);
    while (p != pend)
    {
        // ~ is a bit-wise not, ~0 produces an int with all bits set
        *p = value & ((unsigned char)~0);
        value >>= CHAR_BIT;
        p++;
    }
    return p;
}
// a type-agnostic template
template <class InIt, class T>
InIt pack(InIt p, T value)
{
    using target_t = std::make_unsigned_t<std::remove_reference_t<decltype(*p)>>;
    using src_t = std::make_unsigned_t<T>;
    InIt pend = p + sizeof(T);
    src_t val = static_cast<src_t>(value); // if T is signed, it would fit anyway.
    while (p != pend)
    {
        *p = (val & (target_t)~0);
        val >>= CHAR_BIT;
        p++;
    }
    return pend;
}
int main(int argc, char** argv)
{
    std::array<unsigned char, 16> buffer = {};
    auto ptr = pack_int(&(buffer[0]), 0xA4B3C2D1);
    ptr = pack(ptr, (long long)0xA4B3C2D1);
    pack(ptr, 0xA4B3C2D1);
    std::cout << std::hex;
    for (auto c : buffer)
        std::cout << +c << ", ";
    std::cout << "{end}\n";
}
Output of this would be
d1, c2, b3, a4, d1, c2, b3, a4, 0, 0, 0, 0, d1, c2, b3, a4, {end}
The sequence d1, c2, b3, a4, repeated twice, is obviously a reversed representation for hex value 0xA4B3C2D1. On little-endian system that matches representation of unsigned int in memory. For 3500 (hex 0xDAC) it would be ac, d, 0, 0.
In communications a "network order" is accepted as a standard, also known as "big-endian", where the most significant byte comes first, which would require a slight alteration to the algorithm above.

Related

Getting shorts from an integer

I'm supposed to pack some shorts into a 32 bit integer. It's a homework assignment that will lead into a larger idea of compression/decompression.
I don't have any problems understanding how to pack the shorts into an integer, but I am struggling to understand how to get each short value stored within the integer.
So, for example, I store the values 2, 4, 6, 8 into the integer. That means I want to print them in the same order I input them.
How do you go about getting these values out from the integer?
EDIT: Shorts in this context refers to an unsigned two-byte integer.
As Craig corrected me, a short is a 16-bit variable, therefore only 2 shorts can fit in one int. So here's my answer for retrieving shorts:
2|4
0000000000000010|0000000000000100
00000000000000100000000000000100
131076
Denoting first as the left-most variable and last as the right-most variable, getting the short variables back would look like this:
int x = 131076; //00000000000000100000000000000100 in binary
short last = x & 65535; // 65535= 1111111111111111
short first= (x >> 16) & 65535;
And here's my previous answer, fixed for packing chars (8-bit variables):
Let's assume the first char starts at the MSB and the last one ends at the LSB:
2|4|6|8
00000010|00000100|00000110|00001000
00000010000001000000011000001000
33818120
So, in this example the first char is 2 (00000010), followed by 4 (00000100), 6 (00000110) and last: 8 (00001000).
So to get the compressed numbers back, one could use this code:
int x = 33818120; //00000010000001000000011000001000 in binary
char last = x & 255; // 255 = 11111111
char third = (x >> 8) & 255;
char second = (x >> 16) & 255;
char first = (x >> 24) & 255;
This would be more interesting with char [as you'd get 4]. But you can only pack two shorts into a single int, so I'm a bit mystified as to what the instructor was thinking.
Consider a union:
union combo {
    int u_int;
    short u_short[2];
    char u_char[4];
};

int
getint1(short s1, short s2)
{
    union combo combo;
    combo.u_short[0] = s1;
    combo.u_short[1] = s2;
    return combo.u_int;
}

short
getshort1(int val, int which)
{
    union combo combo;
    combo.u_int = val;
    return combo.u_short[which];
}
Now consider encoding/decoding with shifts:
unsigned int
getint2(unsigned short s1, unsigned short s2)
{
    unsigned int val;
    val = s1;
    val <<= 16;
    val |= s2;
    return val;
}

unsigned short
getshort2(unsigned int val, int which)
{
    val >>= (which * 16);
    return val & 0xFFFF;
}
The unsigned code above will probably do what you want.
The next version uses signed values and probably won't work as well, because mixed signs between s1/s2 and the encoding/decoding may present problems:
int
getint3(short s1, short s2)
{
    int val;
    val = s1;
    val <<= 16;
    val |= s2;
    return val;
}

short
getshort3(int val, int which)
{
    val >>= (which * 16);
    return val & 0xFFFF;
}

How to determine if two pointers point to the same block of memory

I am trying to solve the following problem:
/*
* Return 1 if ptr1 and ptr2 are within the *same* 64-byte aligned
* block (or word) of memory. Return zero otherwise.
*
* Operators / and % and loops are NOT allowed.
*/
I have the following code:
int withinSameBlock(int * ptr1, int * ptr2) {
    // TODO
    int temp = (1 << 31) >> 25;
    int a = ptr1;
    int b = ptr2;
    return (a & temp) == (b & temp);
}
I have been told that this correctly solves the problem, but I am unsure how it works. Specifically, how does the line int temp = (1 << 31) >> 25; help to solve the problem?
The line:
int temp = (1 << 31) >> 25;
is either incorrect or triggers undefined behavior (depending on the word size). It just so happens that the undefined behavior on your machine with your compiler does the right thing and gives the correct answer. To avoid undefined behavior and make the code clearer, you should use:
int withinSameBlock(int * ptr1, int * ptr2) {
    uintptr_t temp = ~(uintptr_t)63;
    uintptr_t a = (uintptr_t)ptr1;
    uintptr_t b = (uintptr_t)ptr2;
    return (a & temp) == (b & temp);
}
I'm not sure where you got that code (homework?), but it is terrible.
1. Casting a pointer to int and doing arithmetic on it is generally very bad practice. The sizes of those primitive types are not guaranteed; for instance, it breaks on every architecture where a pointer or an int is not 32-bit.
You should use uintptr_t, which is guaranteed to be able to hold a pointer value.
For example:
#include <stdint.h>
#include <stdio.h>
int withinSameBlock(int * ptr1, int * ptr2) {
    uintptr_t p1 = reinterpret_cast<uintptr_t>(ptr1);
    uintptr_t p2 = reinterpret_cast<uintptr_t>(ptr2);
    uintptr_t mask = ~(uintptr_t)0x3F;
    return (p1 & mask) == (p2 & mask);
}

int main() {
    int* a = (int*) 0xdeadbeef;
    int* b = (int*) 0xdeadbeee;
    int* c = (int*) 0xdeadc0de;
    printf("%p, %p: %d\n", (void*)a, (void*)b, withinSameBlock(a, b));
    printf("%p, %p: %d\n", (void*)a, (void*)c, withinSameBlock(a, c));
    return 0;
}
First, we need to be clear that the code will only work on systems where a pointer is 32 bits, and int is also 32 bits. On a 64-bit system, the code will fail miserably.
The left shift (1 << 31) sets the most significant bit of the int. In other words, the line
int temp = (1 << 31);
is the same as
int temp = 0x80000000;
Since an int is a signed number, the most significant bit is the sign bit. Shifting a signed number to the right copies the sign bit into the lower-order bits, so shifting to the right 25 times results in a value that has a 1 in each of the upper 26 bits. In other words, the line
int temp = (1 << 31) >> 25;
is the same as (and would be much clearer if it was written as)
int temp = 0xffffffc0;
The line
return (a & temp) == (b & temp);
compares the upper 26 bits of a and b, ignoring the lower 6 bits. If the upper bits match, then a and b point to the same block of memory.
Assuming 32 bit pointers, if the two pointers are in the same 64-byte block of memory, then their addresses will vary only in the 6 least significant bits.
(1 << 31) >> 25 will give you a bitmask that looks like this:
11111111111111111111111111000000
a=ptr1 and b=ptr2 will set a and b equal to the value of the pointers, which are memory addresses. The bitwise AND of temp with each of these (i.e., a&temp and b&temp) will mask off the last 6 bits of the addresses held by a and b. If the remaining 26 bits are the same, then the original addresses must have been within 64 bytes of each other.
Demo code:
#include <stdio.h>
#include <stdint.h>

int main()
{
    int temp = (1 << 31) >> 25;   // relies on implementation-defined behavior
    printf("temp=%x\n", temp);
    int p = 5, q = 6;
    int *ptr1 = &p, *ptr2 = &q;
    printf("ptr1=%p, ptr2=%p\n", (void*)ptr1, (void*)ptr2);
    int a = (int)(intptr_t)ptr1;
    int b = (int)(intptr_t)ptr2;
    printf("a=%x, b=%x\n", a, b);
    if ((a & temp) == (b & temp)) printf("true\n");
    else printf("false\n");
    return 0;
}

Integer into char array

I need to convert integer value into char array on bit layer. Let's say int has 4 bytes and I need to split it into 4 chunks of length 1 byte as char array.
Example:
int a = 22445;
// this is in binary 00000000 00000000 01010111 10101101
...
//and the result I expect
char b[4];
b[0] = 0; //first chunk
b[1] = 0; //second chunk
b[2] = 87; //third chunk - in binary 01010111
b[3] = 173; //fourth chunk - 10101101
I need this conversion to be really fast, if possible without any loops (perhaps some tricks with bit operations). The goal is thousands of such conversions per second.
I'm not sure if I recommend this, but you can #include <arpa/inet.h> (which declares htonl on POSIX systems) and <stdint.h> and write:
*(uint32_t *)b = htonl((uint32_t)a);
(The htonl is to ensure that the integer is in big-endian order before you store it.)
int a = 22445;
char *b = (char *)&a;
char b2 = *(b+2); // = 87 (on a big-endian machine)
char b3 = *(b+3); // = 173 (on a big-endian machine)
Depending on how you want negative numbers represented, you can simply convert to unsigned and then use masks and shifts:
unsigned char b[4];
unsigned ua = a;
b[0] = (ua >> 24) & 0xff;
b[1] = (ua >> 16) & 0xff;
b[2] = (ua >> 8) & 0xff;
b[3] = ua & 0xff;
(Due to the C rules for converting negative numbers to unsigned, this will produce the two's complement representation for negative numbers, which is almost certainly what you want.)
To access the binary representation of any type, you can cast a pointer to a char-pointer:
T x; // anything at all!
// In C++
unsigned char const * const p = reinterpret_cast<unsigned char const *>(&x);
/* In C */
unsigned char const * const p = (unsigned char const *)(&x);
// Example usage:
for (std::size_t i = 0; i != sizeof(T); ++i)
    std::printf("Byte %zu is 0x%02X.\n", i, p[i]);
That is, you can treat p as the pointer to the first element of an array unsigned char[sizeof(T)]. (In your case, T = int.)
I used unsigned char here so that you don't get any sign extension problems when printing the binary value (e.g. through printf in my example). If you want to write the data to a file, you'd use char instead.
You have already accepted an answer, but I will still give mine, which might suit you better (or the same...). This is what I tested with:
int a[3] = {22445, 13, 1208132};
for (int i = 0; i < 3; i++)
{
    unsigned char * c = (unsigned char *)&a[i];
    cout << (unsigned int)c[0] << endl;
    cout << (unsigned int)c[1] << endl;
    cout << (unsigned int)c[2] << endl;
    cout << (unsigned int)c[3] << endl;
    cout << "---" << endl;
}
...and it works for me. Now I know you requested a char array, but this is equivalent. You also requested that c[0] == 0, c[1] == 0, c[2] == 87, c[3] == 173 for the first case; here the order is reversed (on a little-endian machine).
Basically, you use the SAME value, you only access it differently.
Why haven't I used htonl(), you might ask?
Well, since performance is an issue, I think you're better off not using it: it seems like a waste of (precious?) cycles to call a function that ensures the bytes are in some particular order, when they may already be in that order on some systems, and when you could modify your code to use a different order where they are not.
So instead, you could have checked the order before, and then used different loops (more code, but improved performance) based on what the result of the test was.
Also, if you don't know if your system uses a 2 or 4 byte int, you could check that before, and again use different loops based on the result.
Point is: you will have more code, but you will not waste cycles in a critical area, which is inside the loop.
If you still have performance issues, you could unroll the loop (duplicate code inside the loop, and reduce loop counts) as this will also save you a couple of cycles.
Note that using c[0], c[1] etc.. is equivalent to *(c), *(c+1) as far as C++ is concerned.
typedef union {
    unsigned char intAsBytes[4]; // "byte" isn't a built-in type; unsigned char is
    int int32;
} U_INTtoBYTE;

How do I use bitwise operators on a "double" on C++?

I was asked to get the internal binary representation of different types in C. My program currently works fine with 'int' but I would like to use it with "double" and "float". My code looks like this:
template <typename T>
string findBin(T x) {
    string binary;
    for (int i = 4096; i >= 1; i /= 2) {
        if ((x & i) != 0) binary += "1";
        else binary += "0";
    }
    return binary;
}
The program fails when I try to instantiate the template using a "double" or a "float".
Succinctly, you don't.
The bitwise operators do not make sense when applied to double or float, and the standard says that the bitwise operators (~, &, |, ^, >>, <<, and the assignment variants) do not accept double or float operands.
Both double and float have 3 sections - a sign bit, an exponent, and the mantissa. Suppose for a moment that you could shift a double right. The exponent, in particular, means that there is no simple translation to shifting a bit pattern right - the sign bit would move into the exponent, and the least significant bit of the exponent would shift into the mantissa, with completely non-obvious sets of meanings. In IEEE 754, there's an implied 1 bit in front of the actual mantissa bits, which also complicates the interpretation.
Similar comments apply to any of the other bit operators.
So, because there is no sane or useful interpretation of the bit operators to double values, they are not allowed by the standard.
From the comments:
I'm only interested in the binary representation. I just want to print it, not do anything useful with it.
This code was written several years ago for SPARC (big-endian) architecture.
#include <stdio.h>
union u_double
{
    double dbl;
    char data[sizeof(double)];
};

union u_float
{
    float flt;
    char data[sizeof(float)];
};
static void dump_float(union u_float f)
{
    int exp;
    long mant;
    printf("32-bit float: sign: %d, ", (f.data[0] & 0x80) >> 7);
    exp = ((f.data[0] & 0x7F) << 1) | ((f.data[1] & 0x80) >> 7);
    printf("expt: %4d (unbiassed %5d), ", exp, exp - 127);
    mant = ((((f.data[1] & 0x7F) << 8) | (f.data[2] & 0xFF)) << 8) | (f.data[3] & 0xFF);
    printf("mant: %16ld (0x%06lX)\n", mant, mant);
}
static void dump_double(union u_double d)
{
    int exp;
    long long mant;
    printf("64-bit float: sign: %d, ", (d.data[0] & 0x80) >> 7);
    exp = ((d.data[0] & 0x7F) << 4) | ((d.data[1] & 0xF0) >> 4);
    printf("expt: %4d (unbiassed %5d), ", exp, exp - 1023);
    mant = ((((d.data[1] & 0x0F) << 8) | (d.data[2] & 0xFF)) << 8) | (d.data[3] & 0xFF);
    mant = (mant << 32) | ((((((d.data[4] & 0xFF) << 8) | (d.data[5] & 0xFF)) << 8) | (d.data[6] & 0xFF)) << 8) | (d.data[7] & 0xFF);
    printf("mant: %16lld (0x%013llX)\n", mant, mant);
}
static void print_value(double v)
{
    union u_double d;
    union u_float f;
    f.flt = v;
    d.dbl = v;
    printf("SPARC: float/double of %g\n", v);
    // image_print(stdout, 0, f.data, sizeof(f.data));
    // image_print(stdout, 0, d.data, sizeof(d.data));
    dump_float(f);
    dump_double(d);
}
int main(void)
{
    print_value(+1.0);
    print_value(+2.0);
    print_value(+3.0);
    print_value( 0.0);
    print_value(-3.0);
    print_value(+3.1415926535897932);
    print_value(+1e126);
    return 0;
}
The commented-out `image_print()` function prints an arbitrary set of bytes in hex, with various minor tweaks. Contact me if you want the code (see my profile).
If you're using Intel (little-endian), you'll probably need to tweak the code to deal with the reverse bit order. But it shows how you can do it - using a union.
You cannot directly apply bitwise operators to float or double, but you can still access the bits indirectly by putting the variable in a union with a character array of the appropriate size, then reading the bits from those characters. For example:
string BitsFromDouble(double value) {
    union {
        double doubleValue;
        char asChars[sizeof(double)];
    };
    doubleValue = value; // Write to the union
    /* Extract the bits. */
    string result;
    for (size_t i = 0; i < sizeof(double); ++i)
        result += CharToBits(asChars[i]);
    return result;
}
You may need to adjust your routine to work on chars, which usually don't range up to 4096, and there may also be some weirdness with endianness here, but the basic idea should work. It won't be cross-platform compatible, since machines use different endianness and representations of doubles, so be careful how you use this.
Bitwise operators don't generally work with "binary representation" (also called object representation) of any type. Bitwise operators work with value representation of the type, which is generally different from object representation. That applies to int as well as to double.
If you really want to get to the internal binary representation of an object of any type, as you stated in your question, you need to reinterpret the object of that type as an array of unsigned char objects and then use the bitwise operators on these unsigned chars
For example
double d = 12.34;
const unsigned char *c = reinterpret_cast<unsigned char *>(&d);
Now by accessing elements c[0] through c[sizeof(double) - 1] you will see the internal representation of type double. You can use bitwise operations on these unsigned char values, if you want to.
Note, again, that in general case in order to access internal representation of type int you have to do the same thing. It generally applies to any type other than char types.
Do a bit-wise cast of a pointer to the double to long long * and dereference.
Example:
inline double bit_and_d(double* d, long long mask) {
    long long t = (*(long long*)d) & mask;
    return *(double*)&t;
}
Edit: This is almost certainly going to run afoul of gcc's enforcement of strict aliasing. Use one of the various workarounds for that. (memcpy, unions, __attribute__((__may_alias__)), etc)
Another solution is to get a pointer to the floating point variable, cast it to a pointer to an integer type of the same size, and then read the integer that pointer points to. Now you have an integer variable with the same binary representation as the floating point one, and you can use your bitwise operators:
string findBin(float f) {
    string binary;
    long x = * ( long * ) &f;   // reinterpret the float's bits as an integer
    for (long i = 4096; i >= 1; i /= 2) {
        if ((x & i) != 0) binary += "1";
        else binary += "0";
    }
    return binary;
}
But remember: you have to cast to a type with same size. Otherwise unpredictable things may happen (like buffer overflow, access violation etc.).
As others have said, you can use a bitwise operator on a double by casting double* to long long* (or sometimes just long*).
#include <stdio.h>
#include <stdlib.h>

int main() {
    double * x = (double*)malloc(sizeof(double));
    *x = -5.12345;
    printf("%f\n", *x);
    *((long*)x) &= 0x7FFFFFFFFFFFFFFF; // clear the sign bit (assumes 64-bit long)
    printf("%f\n", *x);
    return 0;
}
On my computer, this code prints:
-5.123450
5.123450

Store an int in a char array?

I want to store a 4-byte int in a char array... such that the first 4 locations of the char array are the 4 bytes of the int.
Then, I want to pull the int back out of the array...
Also, bonus points if someone can give me code for doing this in a loop... IE writing like 8 ints into a 32 byte array.
int har = 0x01010101;
char a[4];
int har2;
// write har into char such that:
// a[0] == 0x01, a[1] == 0x01, a[2] == 0x01, a[3] == 0x01 etc.....
// then, pull the bytes out of the array such that:
// har2 == har
Thanks guys!
EDIT: Assume int are 4 bytes...
EDIT2: Please don't care about endianness... I will be worrying about endianness. I just want different ways to achieve the above in C/C++. Thanks
EDIT3: If you can't tell, I'm trying to write a serialization class on the low level... so I'm looking for different strategies to serialize some common data types.
Unless you care about byte order and such, memcpy will do the trick:
memcpy(a, &har, sizeof(har));
...
memcpy(&har2, a, sizeof(har2));
Of course, there's no guarantee that sizeof(int)==4 on any particular implementation (and there are real-world implementations for which this is in fact false).
Writing a loop should be trivial from here.
Not the most optimal way, but it is endian-safe.
int har = 0x01010101;
char a[4];
a[0] = har & 0xff;
a[1] = (har>>8) & 0xff;
a[2] = (har>>16) & 0xff;
a[3] = (har>>24) & 0xff;
#include <stdio.h>

int main(void) {
    char a[sizeof(int)];
    *((int *) a) = 0x01010101;
    printf("%d\n", *((int *) a));
    return 0;
}
Keep in mind:
A pointer to an object or incomplete type may be converted to a pointer to a different
object or incomplete type. If the resulting pointer is not correctly aligned for the
pointed-to type, the behavior is undefined.
Note: Accessing a union through an element that wasn't the last one assigned to is undefined behavior.
(assuming a platform where characters are 8bits and ints are 4 bytes)
A bit mask of 0xFF will mask off one byte, so
char arr[4];
int a = 5;
arr[3] = a & 0xff;
arr[2] = (a & 0xff00) >>8;
arr[1] = (a & 0xff0000) >>16;
arr[0] = (a & 0xff000000)>>24;
would make arr[0] hold the most significant byte and arr[3] hold the least.
Edit: Just so you understand the trick: & is bitwise 'and', whereas && is logical 'and'.
Thanks to the comments about the forgotten shift.
#include <stdio.h>

int main() {
    typedef union foo {
        int x;
        char a[4];
    } foo;

    foo p;
    p.x = 0x01010101;
    printf("%x ", p.a[0]);
    printf("%x ", p.a[1]);
    printf("%x ", p.a[2]);
    printf("%x ", p.a[3]);
    return 0;
}
Bear in mind that the a[0] holds the LSB and a[3] holds the MSB, on a little endian machine.
Don't use unions, Pavel clarifies:
It's U.B., because C++ prohibits
accessing any union member other than
the last one that was written to. In
particular, the compiler is free to
optimize away the assignment to int
member out completely with the code
above, since its value is not
subsequently used (it only sees the
subsequent read for the char[4]
member, and has no obligation to
provide any meaningful value there).
In practice, g++ in particular is
known for pulling such tricks, so this
isn't just theory. On the other hand,
using static_cast<void*> followed by
static_cast<char*> is guaranteed to
work.
– Pavel Minaev
You can also use placement new for this:
void foo (int i) {
    char * c = new (&i) char[sizeof(i)];
}
#include <stdint.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    /* 8 ints in a loop */
    int i;
    int* intPtr;
    int intArr[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    char* charArr = malloc(32);

    for (i = 0; i < 8; i++) {
        /* point intPtr at the location in the char array, cast as int* */
        intPtr = (int*) &(charArr[i * 4]);
        *intPtr = intArr[i]; /* write int at location pointed to */
    }

    /* Read ints out */
    for (i = 0; i < 8; i++) {
        intPtr = (int*) &(charArr[i * 4]);
        intArr[i] = *intPtr;
    }

    char* myArr = malloc(13);
    int myInt;
    uint8_t* p8;   /* unsigned 8-bit integer */
    uint16_t* p16; /* unsigned 16-bit integer */
    uint32_t* p32; /* unsigned 32-bit integer */

    /* Using sizes other than 4-byte ints, */
    /* set all bits in myArr to 1 */
    p8 = (uint8_t*) &(myArr[0]);
    p16 = (uint16_t*) &(myArr[1]);
    p32 = (uint32_t*) &(myArr[5]);
    *p8 = 255;
    *p16 = 65535;
    *p32 = 4294967295;

    /* Get the values back out */
    p16 = (uint16_t*) &(myArr[1]);
    uint16_t my16 = *p16;

    /* Put the 16 bit int into a regular int */
    myInt = (int) my16;

    return 0;
}
char a[10];
int i = 9;
a[0] = boost::lexical_cast<char>(i);
I found this to be the best way to convert between char and int and vice-versa.
An alternative to boost::lexical_cast is sprintf:
char temp[6];
temp[0] = 'h';
temp[1] = 'e';
temp[2] = 'l';
temp[3] = 'l';
sprintf(temp + 4, "%d", 9); // writes '9' and the terminating '\0'
cout << temp;
The output would be: hell9
union value {
    int i;
    char bytes[sizeof(int)];
};

value v;
v.i = 2;
char* bytes = v.bytes;