I want to store integers in a char array, e.g. 0 up to 1449. I checked other posts and I tried memset, sprintf, etc., but either I get gibberish characters or unreadable symbols when I print the contents of the char array. Can anyone help, please?
I checked the duplicate link; however, I am not trying to print an int as a char, I want to store ints in a char array. I tried buf[i] = static_cast<char>(i); inside a for loop, but it didn't work. Casting didn't work.
The last one I tried is like this:
char buf[1450];
for (int i = 0; i < 1449; i++)
{
memset(buf, ' '+ i, 1450);
cout << buf[i];
}
The output is: (a screenshot of unreadable symbols)
I'm not sure what you are trying to do! You should state your objective.
A char (usually 8 bits) in C++ doesn't hold an int (usually 32 bits). If you want to store ints you should use an int array:
int buf[1500];
The memset(buf, ' ' + i, 1450); will actually write the sum of the ASCII value of ' ' plus i into every byte of the buffer on each iteration (the buffer address is never incremented).
Something like this may be what you want:
int buf[1500] = {0};
for (int i = 0; i < 1449; i++)
{
buf[i] = i;
cout << buf[i] << ' ';
}
Consider using C++11 containers like std::vector to hold the ints or chars; they would be much safer to use.
You are going to have to explain better what it is you want, because "store integers in char array" is exactly what this code does:
char buf[1450];
for (int i = 0; i < 1450; i++)
{
buf[i] = static_cast<char>(i);
std::cout << buf[i];
}
Yes, the output is similar to what your picture shows, but that is also the correct output.
When you use a debugger to look at buf after the loop, then it does contain: 0, 1, 2, 3, ..., 126, 127, -128, -127, ..., 0, 1, 2, 3, ... and so on, which is the expected contents given that we are trying to put the numbers 0-1449 into an integer type that (in this case*) can contain the range [-128;127].
If this is not the behavior you are looking for (it sounds like it isn't), then you need to describe your requirements in more detail or we won't be able to help you.
(*) A char must be able to hold a character representation. On many/most systems it is 8 bits, but the size is system dependent and it may also be larger.
New answer.
Thank you for the clarification, I believe that what you need is something like this:
int32_t before = 1093821061; // Int you want to transmit
uint8_t buf[4];
buf[0] = static_cast<uint8_t>((before >> 0) & 0xff);
buf[1] = static_cast<uint8_t>((before >> 8) & 0xff);
buf[2] = static_cast<uint8_t>((before >> 16) & 0xff);
buf[3] = static_cast<uint8_t>((before >> 24) & 0xff);
// Add buf to your UDP packet and send it
// Stuff...
// After receiving the packet on the other end:
int32_t after = 0;
after += buf[0] << 0;
after += buf[1] << 8;
after += buf[2] << 16;
after += buf[3] << 24;
std::cout << before << ", " << after << std::endl;
Your problem (as I see it) is that you want to store 32-bit numbers in the 8-bit buffers that you need for UDP packets. The way to do this is to pick the larger number apart, convert it into individual bytes, transmit those bytes, and then assemble the big number again from the bytes.
The above code should allow you to do this. Note that I have changed types to int32_t and uint8_t to ensure that I know the exact size of my types - depending on the library you use, you may have to use plain int and char types, just be aware that then the exact sizes of your types are not guaranteed (most likely it will still be 32 and 8 bits, but they can change in size if you change compiler or compile for a different target system). If you want you can add some sizeof checks to ensure that your types conform to what you expect.
So I am currently working on file compression; to compress the file I am converting my string of ints using Huffman encoding,
example:
1000011011010100011010100010001101111001001111010011000011011100001010010110011011
1001110011111000010110110101111111
to a bitset like this:
int i = 0;
while (i < str.length())
{
bitset<8>set(str.substr(i, i + 7));
outputFile << char(set.to_ulong());
i = i + 8;
}
When I am retrieving the contents of that file, I can't figure out how to convert it back to a string of ints. Once I get the string of ints back I can retrieve the original contents I encoded; it's just getting the string back that's the problem.
If I understand your question correctly, here is what you can do to perform the reverse operation:
constexpr int bits_num = sizeof(unsigned char) * CHAR_BIT; //somewhere upper in the file
//...
std::string outputStr;
char c; //istream::get(char&) takes a plain char, not unsigned char
while (inputFile.get(c)) //the simplest function to read "character by character"
{
std::bitset<bits_num> bset(static_cast<unsigned char>(c)); //initialize bitset with the byte value
outputStr += bset.to_string(); //append to output string
}
Of course I assume that you will declare and set up the inputFile stream yourself.
sizeof(unsigned char) * CHAR_BIT is an elegant way of expressing the number of bits in a variable's type (here unsigned char). To use the constant CHAR_BIT (which is 8 on virtually all modern architectures), add the #include <climits> header.
You also have an error in your code:
bitset<8>set(str.substr(i, i + 7));
You're growing the cut-out substring on each iteration: the second argument of substr is a length, not an end position, so i + 7 is wrong. Change it simply to 8 (or better, and less error-prone, to the suggested constant):
constexpr int bits_num = sizeof(unsigned char) * CHAR_BIT;
//...
std::bitset<bits_num> set( str.substr(i, bits_num) );
outputFile << static_cast<unsigned char>( set.to_ulong() );
i += bits_num;
How can I convert a uint64_t to a uint8_t[8] without losing information in C++?
I tried the following:
uint64_t number = 23425432542254234532;
uint8_t result[8];
for(int i = 0; i < 8; i++) {
std::memcpy(result[i], number, 1);
}
You are almost there. Firstly, the literal 23425432542254234532 is too big to fit in uint64_t.
Secondly, as you can see from the documentation, std::memcpy has the following declaration:
void * memcpy ( void * destination, const void * source, size_t num );
As you can see, it takes pointers (addresses) as arguments. Not uint64_t, nor uint8_t. You can easily get the address of the integer using the address-of operator.
Thirdly, you are only copying the first byte of the integer into each array element. You would need to increment the input pointer in every iteration. But the loop is unnecessary. You can copy all bytes in one go like this:
std::memcpy(result, &number, sizeof number);
Do realize that the order of the bytes depends on the endianness of the CPU.
First, do you want the conversion to be big-endian, or little-endian? Most of the previous answers are going to start giving you the bytes in the opposite order and break your program as soon as you switch architectures.
If you need consistent results, you will want to convert your 64-bit input into big-endian (network) byte order, or perhaps into little-endian. For example, with GNU GLib the macro is GUINT64_TO_BE(), but there is an equivalent built-in for most compilers.
Having done that, there are several alternatives:
Copy with memcpy() or memmove()
This is the method that the language standard guarantees will work, although here I use one function from a third-party library (to convert the argument to big-endian byte order on all platforms). For example:
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <glib.h>
union eight_bytes {
uint64_t u64;
uint8_t b8[sizeof(uint64_t)];
};
eight_bytes u64_to_eight_bytes( const uint64_t input )
{
eight_bytes result;
const uint64_t big_endian = (uint64_t)GUINT64_TO_BE((guint64)input);
memcpy( &result.b8, &big_endian, sizeof(big_endian) );
return result;
}
On Linux x86_64 with clang++ -std=c++17 -O, this compiles to essentially the instructions:
bswapq %rdi
movq %rdi, %rax
retq
If you wanted the results in little-endian order on all platforms, you could replace GUINT64_TO_BE() with GUINT64_TO_LE() and remove the first instruction, then declare the function inline to remove the third instruction. (Or, if you’re certain that cross-platform compatibility does not matter, you might risk just omitting the normalization.)
So, on a modern, 64-bit compiler, this code is just as efficient as anything else. On another target, it might not be.
Type-Punning
The common way to write this in C would be to declare the union as before, set its uint64_t member, and then read its uint8_t[8] member. This is legal in C.
I personally like it because it allows me to express the entire operation as static single assignments.
However, in C++, it is formally undefined behavior. In practice, all C++ compilers I’m aware of support it for Plain Old Data (the formal term in the language standard), of the same size, with no padding bits, but not for more complicated classes that have virtual function tables and the like. It seems more likely to me that a future version of the Standard will officially support type-punning on POD than that any important compiler will ever break it silently.
The C++ Guidelines Way
Bjarne Stroustrup recommended that, if you are going to type-pun instead of copying, you use reinterpret_cast, such as
uint8_t (&array_of_bytes)[sizeof(uint64_t)] =
*reinterpret_cast<uint8_t(*)[sizeof(uint64_t)]>(
&proper_endian_uint64);
His reasoning was that both an explicit cast and type-punning through a union are undefined behavior, but the cast makes it blatant and unmistakable that you are shooting yourself in the foot on purpose, whereas reading a different union member than the active one can be a very subtle bug.
If I understand correctly, you can do it this way, for instance:
uint64_t number = 2342543254225423453ULL; // literal trimmed to fit uint64_t
uint8_t *p = (uint8_t *)&number;
//if you need a copy
uint8_t result[8];
for(int i = 0; i < 8; i++) {
result[i] = p[i];
}
When copying memory around between incompatible types, the first thing to be aware of is strict aliasing - you don't want to alias pointers incorrectly. Alignment is also to be considered.
You were almost there; the for loop is not needed.
uint64_t number = 0x2342543254225423; // trimmed to fit
uint8_t result[sizeof(number)];
std::memcpy(result, &number, sizeof(number));
Note: be aware of the endianness of the platform as well.
Either use a union, or do it with bitwise operations; memcpy is for blocks of memory and might not be the best option here.
uint64_t number = 2342543254225423453ULL; // trimmed to fit
uint8_t result[8];
for(int i = 0; i < 8; i++) {
result[i] = uint8_t((number >> 8*(7 - i)) & 0xFF);
}
Or, although I'm told this breaks the rules, it works on my compiler:
union
{
uint64_t a;
uint8_t b[8];
};
a = 2342543254225423453ULL; // trimmed to fit
//Can now read off the value of b
uint8_t copy[8];
for(int i = 0; i < 8; i++)
{
copy[i]= b[i];
}
The packing and unpacking can be done with masks and shifts. One more thing to worry about is byte order: packing and unpacking must use the same byte order. Beware: this is not a super-efficient implementation and comes with no guarantees on small CPUs that are not natively 64-bit.
void unpack_uint64(uint64_t number, uint8_t *result) {
result[0] = number & 0x00000000000000FF ; number = number >> 8 ;
result[1] = number & 0x00000000000000FF ; number = number >> 8 ;
result[2] = number & 0x00000000000000FF ; number = number >> 8 ;
result[3] = number & 0x00000000000000FF ; number = number >> 8 ;
result[4] = number & 0x00000000000000FF ; number = number >> 8 ;
result[5] = number & 0x00000000000000FF ; number = number >> 8 ;
result[6] = number & 0x00000000000000FF ; number = number >> 8 ;
result[7] = number & 0x00000000000000FF ;
}
uint64_t pack_uint64(uint8_t *buffer) {
uint64_t value ;
value = buffer[7] ;
value = (value << 8 ) + buffer[6] ;
value = (value << 8 ) + buffer[5] ;
value = (value << 8 ) + buffer[4] ;
value = (value << 8 ) + buffer[3] ;
value = (value << 8 ) + buffer[2] ;
value = (value << 8 ) + buffer[1] ;
value = (value << 8 ) + buffer[0] ;
return value ;
}
#include <cstdint>
#include <cstdlib>
#include <iostream>
struct ByteArray
{
uint8_t b[8] = { 0,0,0,0,0,0,0,0 };
};
ByteArray split(uint64_t x)
{
ByteArray pack;
const uint8_t MASK = 0xFF;
for (auto i = 0; i < 8; ++i)
{
pack.b[i] = x & MASK;
x = x >> 8;
}
return pack;
}
int main()
{
uint64_t val_64 = UINT64_MAX;
auto pack = split(val_64);
for (auto i = 0; i < 8; ++i)
{
std::cout << (uint32_t)pack.b[i] << std::endl;
}
system("Pause");
return 0;
}
The union approach addressed by Straw1239 is better and cleaner, though. Please do take care about compiler/platform compatibility with endianness.
I'm coding a program that writes a large bool area (usually around 512*512 bool variables) down to a file. It would benefit greatly from saving it in a smart way; I'm thinking about saving them 8 by 8, coding each group of 8 booleans into one byte of the form:
byte = bit0 * boolean0 | ... | bit7 * boolean7
But I'm not sure how to handle this conversion, though I do know how to write and read a file byte by byte.
I'm using C++. I have no background in CS, but this seems close to the topic of serialization, though everything I found on the subject is really unclear to me. Has it already been implemented, or is there a really simple way to implement it? (I mean saving as much CPU time as possible; my program will write and open millions of these files per instance.)
Cheers.
Edit:
With the help of Sean (thank you btw!) I managed to get a bit further, but it is still not working; a test of the data after saving and reading tells me that it gets corrupted (as in not reconstructed correctly, and so not identical to the initial data) somewhere in the writing, the reading, or both...
My code will probably help.
Here are the writing lines:
typedef char byte;
ofstream ofile("OUTPUT_FILE");
for(int i=0;i<N/8;i++){
byte encoded = 0;
for(int j=0; j<8; j++){
byte bit = (byte)(tab[(i*8+j)/h][(i*8+j)%h]==1);
encoded = (encoded << 1) | bit;
}
ofile << encoded;
}
and the reading lines:
for(int i=0;i<N/8;i++){ //N is the total number of entries I have in my final array
temp = ifile.get(); //reading the byte in a char
for(int j=0; j<8; j++){ // trying to read each bits in there
if(((temp >> j) & 1) ? '1' : '0' ){
tab[(i*8+j)/h][(i*8+j)%h]=1;
}
else{
tab[(i*8+j)/h][(i*8+j)%h]=-1; //my programs manipulates +1 (TRUE) and -1 (FALSE) making most of the operations easier
}
}
}
ifile.close();
Edit2:
I finally managed to do it using bitset<8> objects, much clearer to me than manipulating bits inside a char. I'll probably update my post later with my working code. I'm still concerned with efficiency: do you think working with raw chars is much quicker than using bitset?
If you don't need to determine the size of your bit array at runtime, you can use std::bitset:
http://en.cppreference.com/w/cpp/utility/bitset
You can encode the bools into a byte in a loop like this:
bool flags[8];
// populate flags from somewhere
byte encoded=0;
for(int i=0; i<8; i++)
{
byte bit = (byte)flags[i];
encoded = (encoded << 1) | bit;
}
The code uses the fact that casting a bool to a number yields 1 for true and 0 for false.
Alternatively you can unroll it:
byte encoded = 0;
encoded |= ((byte)flags[0]) << 7;
encoded |= ((byte)flags[1]) << 6;
encoded |= ((byte)flags[2]) << 5;
encoded |= ((byte)flags[3]) << 4;
encoded |= ((byte)flags[4]) << 3;
encoded |= ((byte)flags[5]) << 2;
encoded |= ((byte)flags[6]) << 1;
encoded |= ((byte)flags[7]);
To convert the byte back to an array of flags you can do something like this:
bool flags[8];
byte encoded = /* some value */;
for(int i=0; i<8; i++)
{
bool flag=(bool)(encoded & 1);
flags[7-i]=flag;
encoded>>=1;
}
I want to convert an integer (whose maximum value can reach 99999999) into BCD and store it into an array of 4 characters.
Like for example:
Input is: 12345 (integer)
Output should be: "00012345" in BCD, stored in an array of 4 characters.
Here 0x00 0x01 0x23 0x45 is stored in BCD format.
I tried it in the manner below but it didn't work:
int decNum = 12345;
long aux;
aux = (long)decNum;
cout<<" aux = "<<aux<<endl;
char* str = (char*)& aux;
char output[4];
int len = 0;
int i = 3;
while (len < 8)
{
cout <<"str: " << len << " " << (int)str[len] << endl;
unsigned char temp = str[len]%10;
len++;
cout <<"str: " << len << " " << (int)str[len] << endl;
output[i] = ((str[len]) << 4) | temp;
i--;
len++;
}
Any help will be appreciated
str actually points to a long (probably 4 bytes), but the iteration accesses 8 bytes.
The operation str[len] % 10 looks as if you are expecting decimal digits, but there is only binary data there. In addition, I suspect that i goes negative.
First, don't use C-style casts (like (long)a or (char*)). They are a bad smell. Instead, learn and use C++-style casts (like static_cast<long>(a)), because they point out where you are doing things that are dangerous, instead of just silently working and causing undefined behavior.
char* str = (char*)& aux; gives you a pointer to the bytes of aux -- it is actually char* str = reinterpret_cast<char*>(&aux);. It does not give you a traditional string with digits in it. sizeof(char) is 1, sizeof(long) is almost certainly 4, so there are only 4 valid bytes in your aux variable. You proceed to try to read 8 of them.
I doubt this is doing what you want it to do. If you want to print out a number into a string, you will have to run actual code, not just reinterpret bits in memory.
std::string s; std::stringstream ss; ss << aux; ss >> s; will create a std::string with the base-10 digits of aux in it.
Then you can look at the characters in s to build your BCD.
This is far from the fastest method, but it at least is close to your original approach.
First of all, sorry about the C code; I was deceived since this started as a C question. Porting to C++ should not really be such a big deal.
If you really want it in a char array, I'd do something like the following code. I find it useful to still leave the result in little-endian format so I can just cast it to an int for printing; however, that is not strictly necessary:
#include <stdio.h>
typedef struct
{
char value[4];
} BCD_Number;
BCD_Number bin2bcd(int bin_number);
int main(int args, char **argv)
{
BCD_Number bcd_result;
bcd_result = bin2bcd(12345678);
/* Assuming an int is 4 bytes */
printf("result=0x%08x\n", *((int *)bcd_result.value));
}
BCD_Number bin2bcd(int bin_number)
{
BCD_Number bcd_number;
for(int i = 0; i < sizeof(bcd_number.value); i++)
{
bcd_number.value[i] = bin_number % 10;
bin_number /= 10;
bcd_number.value[i] |= bin_number % 10 << 4;
bin_number /= 10;
}
return bcd_number;
}
I need to convert an integer value into a char array at the byte level. Let's say an int has 4 bytes; I need to split it into 4 chunks of 1 byte each, as a char array.
Example:
int a = 22445;
// this is in binary 00000000 00000000 01010111 10101101
...
//and the result I expect
char b[4];
b[0] = 0; //first chunk
b[1] = 0; //second chunk
b[2] = 87; //third chunk - in binary 01010111
b[3] = 173; //fourth chunk - 10101101
I need this conversion to be really fast, if possible without any loops (perhaps some tricks with bit operations). The goal is thousands of such conversions per second.
I'm not sure if I recommend this, but you can #include <stdint.h> and <arpa/inet.h> and write:
*(uint32_t *)b = htonl((uint32_t)a);
(The htonl is to ensure that the integer is in big-endian order before you store it.)
int a = 22445;
unsigned char *b = (unsigned char *)&a;
// NOTE: which byte is which depends on endianness; on a little-endian
// CPU the low byte comes first:
unsigned char b0 = *b;       // = 173
unsigned char b1 = *(b + 1); // = 87
Depending on how you want negative numbers represented, you can simply convert to unsigned and then use masks and shifts:
unsigned char b[4];
unsigned ua = a;
b[0] = (ua >> 24) & 0xff;
b[1] = (ua >> 16) & 0xff;
b[2] = (ua >> 8) & 0xff;
b[3] = ua & 0xff;
(Due to the C rules for converting negative numbers to unsigned, this will produce the two's complement representation for negative numbers, which is almost certainly what you want.)
To access the binary representation of any type, you can cast a pointer to a char-pointer:
T x; // anything at all!
// In C++
unsigned char const * const p = reinterpret_cast<unsigned char const *>(&x);
/* In C */
unsigned char const * const p = (unsigned char const *)(&x);
// Example usage:
for (std::size_t i = 0; i != sizeof(T); ++i)
std::printf("Byte %u is 0x%02X.\n", p[i]);
That is, you can treat p as the pointer to the first element of an array unsigned char[sizeof(T)]. (In your case, T = int.)
I used unsigned char here so that you don't get any sign extension problems when printing the binary value (e.g. through printf in my example). If you want to write the data to a file, you'd use char instead.
You have already accepted an answer, but I will still give mine, which might suit you better (or just as well...). This is what I tested with:
int a[3] = {22445, 13, 1208132};
for (int i = 0; i < 3; i++)
{
unsigned char * c = (unsigned char *)&a[i];
cout << (unsigned int)c[0] << endl;
cout << (unsigned int)c[1] << endl;
cout << (unsigned int)c[2] << endl;
cout << (unsigned int)c[3] << endl;
cout << "---" << endl;
}
...and it works for me. Now I know you requested a char array, but this is equivalent. You also requested that c[0] == 0, c[1] == 0, c[2] == 87, c[3] == 173 for the first case, here the order is reversed.
Basically, you use the SAME value, you only access it differently.
Why haven't I used htonl(), you might ask?
Well, since performance is an issue, I think you're better off not using it: it seems like a waste of (precious?) cycles to call a function that ensures the bytes will be in some order, when they may already have been in that order on some systems, and when you could have modified your code to use a different order if that was not the case.
So instead, you could have checked the order before, and then used different loops (more code, but improved performance) based on what the result of the test was.
Also, if you don't know if your system uses a 2 or 4 byte int, you could check that before, and again use different loops based on the result.
Point is: you will have more code, but you will not waste cycles in a critical area, which is inside the loop.
If you still have performance issues, you could unroll the loop (duplicate code inside the loop, and reduce loop counts) as this will also save you a couple of cycles.
Note that using c[0], c[1] etc.. is equivalent to *(c), *(c+1) as far as C++ is concerned.
typedef union {
    uint8_t intAsBytes[4];
    int32_t int32;
} U_INTtoBYTE;