python struct.pack equivalent in C++

I want a fixed-length string from a number, just like struct.pack in Python, but in C++. I thought of itoa(i, buffer, 2), but the problem is that the result's length will depend on the platform. Is there any way to make it platform-independent?

If you're looking for a complete solution similar to Python's struct module, you might check out Google's Protocol Buffers library. Using it takes care of a lot of issues (e.g. endianness, language portability, cross-version compatibility) for you.

Here's a start:
#include <cstddef>
#include <cstdint>
#include <vector>

typedef std::vector<uint8_t> byte_buffer;

template <std::size_t N>
void append_fixed_width(byte_buffer& buf, uintmax_t val) {
    int shift = (N - 1) * 8;
    while (shift >= 0) {
        uintmax_t mask = uintmax_t(0xff) << shift; // widen before shifting: 0xff is an int, so shifts >= 32 would overflow
        buf.push_back(uint8_t((val & mask) >> shift));
        shift -= 8;
    }
}

template <typename IntType>
void append_bytes(byte_buffer& buf, IntType val) {
    append_fixed_width<sizeof(IntType)>(buf, uintmax_t(val));
}

int main() { // usage example
    byte_buffer bytes;
    append_bytes(bytes, 1);         // appends sizeof(int) bytes
    append_bytes(bytes, 1ul);       // appends sizeof(unsigned long) bytes
    append_bytes(bytes, 'a');       // appends 1 byte ('a' is a char in C++, not an int)
    append_bytes(bytes, char('a')); // appends 1 byte
    return 0;
}
append_bytes will append any integer type to a byte buffer represented as a std::vector<uint8_t>. Values are packed in big-endian byte order. If you need to change this, tweak append_fixed_width to traverse the value in a different order.
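For example, a little-endian variant could walk the value from the low byte up; a minimal sketch (the name append_fixed_width_le is mine, not part of the snippet above):

// Sketch: little-endian counterpart of append_fixed_width.
template <std::size_t N>
void append_fixed_width_le(byte_buffer& buf, uintmax_t val) {
    for (std::size_t i = 0; i < N; ++i) {
        buf.push_back(uint8_t(val & 0xff)); // low byte first
        val >>= 8;
    }
}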
These functions build a raw byte buffer, so whoever decodes it is responsible for knowing what is in there. IIRC, this is what struct.pack does as well; in other words, the caller of struct.unpack needs to provide the same format string. You can write a variant of append_fixed_width to pack a TLV (tag-length-value) instead:
template <typename TagType, typename ValueType>
void append_tlv(byte_buffer& buf, TagType t, ValueType val) {
    append_fixed_width<sizeof(TagType)>(buf, uintmax_t(t));
    append_fixed_width<sizeof(std::size_t)>(buf, uintmax_t(sizeof(ValueType)));
    append_fixed_width<sizeof(ValueType)>(buf, uintmax_t(val));
}
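A matching decoder for the fixed-width values is symmetric; a minimal sketch (read_fixed_width is a name I'm introducing here, and it assumes buf holds at least N bytes past pos):

// Sketch: read N big-endian bytes back into an unsigned value.
template <std::size_t N>
uintmax_t read_fixed_width(const byte_buffer& buf, std::size_t& pos) {
    uintmax_t val = 0;
    for (std::size_t i = 0; i < N; ++i)
        val = (val << 8) | buf[pos++];
    return val;
}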
I would take a serious look at Jeremy's suggestion though. I wish that it had existed when I wrote all of the binary packing code that I have now.

You need to define an exact-width integer type through a typedef; you do that in a platform-specific manner. If you use C99, int16_t is predefined in <stdint.h>. You can then cast to that type and take the memory representation of the variable:
int16_t val = (int16_t) orig_val;
void *buf = &val;
Notice that you still need to deal with endianness.
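For 16- and 32-bit values, the classic way to deal with that is htons/htonl (declared in <arpa/inet.h> on POSIX systems; Windows has them in <winsock2.h>), which convert host order to network (big-endian) order. A minimal sketch:

#include <arpa/inet.h> // htons (POSIX)
#include <cstdint>
#include <cstring>

// Sketch: emit a 16-bit value in network (big-endian) byte order.
void put16(unsigned char out[2], int16_t val)
{
    uint16_t wire = htons((uint16_t)val);
    std::memcpy(out, &wire, sizeof wire);
}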
If you don't have C99, you can use either compile-time or run-time size tests. For compile-time tests, consider using autoconf, which already computes the sizes of the various primitive types, so that you can select a good type at compile time. At run time, just use a series of sizeof tests. Notice that run-time testing is somewhat wasteful here, as the test will always come out the same on a given platform. As an alternative to autoconf, you can also use compiler/system identification macros for a compile-time test.
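For illustration, such a compile-time selection can also be done in plain C++ templates; a minimal sketch (the fallback to int is purely illustrative — on most platforms short is already 16 bits):

// Sketch: pick a 16-bit integer type at compile time without <stdint.h>.
template <bool ShortIs16Bit> struct pick16        { typedef short type; };
template <>                  struct pick16<false> { typedef int type; };
typedef pick16<sizeof(short) == 2>::type my_int16;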

The C++ way would be to use stringstream:
std::stringstream ss;
int number = /* your number here */;
ss << number;
and to get the buffer you'd use ss.str().c_str() (careful: store the std::string returned by str() in a local variable first, because calling c_str() on the temporary leaves a dangling pointer once it is destroyed).
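Note that this produces a variable-length decimal string; if a fixed length is required, as in the question, the stream can be padded with std::setw and std::setfill. A small sketch:

#include <iomanip>
#include <sstream>
#include <string>

// Sketch: fixed-width, zero-padded decimal rendering of a number.
std::string fixed_width(int number, int width)
{
    std::ostringstream ss;
    ss << std::setw(width) << std::setfill('0') << number;
    return ss.str();
}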

I made this implementation in C/C++ to compare the execution time of the pack function across Python, PHP, Dart, and C++:
https://github.com/dart-lang/sdk/issues/50708
#include <vector>
#include <cstdio>   // BUFSIZ
#include <cstdint>
#include <ctime>    // time()
#include <iomanip>
#include <iostream>
#define STRUCT_ENDIAN_NOT_SET 0
#define STRUCT_ENDIAN_BIG 1
#define STRUCT_ENDIAN_LITTLE 2
static int myendian = STRUCT_ENDIAN_NOT_SET;
void debug_print2(const char *str, const std::vector<unsigned char>& vec)
{
    std::cout << str;
    for (auto i : vec)
        std::cout << i;
    std::cout << "\r\n";
}
int struct_get_endian(void)
{
    int i = 0x00000001;
    if (((char *)&i)[0])
    {
        return STRUCT_ENDIAN_LITTLE;
    }
    else
    {
        return STRUCT_ENDIAN_BIG;
    }
}

static void struct_init(void)
{
    myendian = struct_get_endian();
}
static void pack_int16_t(unsigned char **bp, uint16_t val, int endian)
{
    if (endian == myendian)
    {
        *((*bp)++) = val;
        *((*bp)++) = val >> 8;
    }
    else
    {
        *((*bp)++) = val >> 8;
        *((*bp)++) = val;
    }
}

static void pack_int32_t(unsigned char **bp, uint32_t val, int endian)
{
    if (endian == myendian)
    {
        *((*bp)++) = val;
        *((*bp)++) = val >> 8;
        *((*bp)++) = val >> 16;
        *((*bp)++) = val >> 24;
    }
    else
    {
        *((*bp)++) = val >> 24;
        *((*bp)++) = val >> 16;
        *((*bp)++) = val >> 8;
        *((*bp)++) = val;
    }
}

static void pack_int64_t(unsigned char **bp, uint64_t val, int endian)
{
    if (endian == myendian)
    {
        *((*bp)++) = val;
        *((*bp)++) = val >> 8;
        *((*bp)++) = val >> 16;
        *((*bp)++) = val >> 24;
        *((*bp)++) = val >> 32;
        *((*bp)++) = val >> 40;
        *((*bp)++) = val >> 48;
        *((*bp)++) = val >> 56;
    }
    else
    {
        *((*bp)++) = val >> 56;
        *((*bp)++) = val >> 48;
        *((*bp)++) = val >> 40;
        *((*bp)++) = val >> 32;
        *((*bp)++) = val >> 24;
        *((*bp)++) = val >> 16;
        *((*bp)++) = val >> 8;
        *((*bp)++) = val;
    }
}
static int pack(void *b, const char *fmt, long long *values, int offset = 0)
{
    unsigned char *buf = (unsigned char *)b;
    int idx = 0;
    const char *p;
    unsigned char *bp = buf + offset;
    if (STRUCT_ENDIAN_NOT_SET == myendian)
    {
        struct_init();
    }
    int ep = myendian; // resolve native order only after struct_init has run
    for (p = fmt; *p != '\0'; p++)
    {
        switch (*p)
        {
        case '=': // native
            ep = myendian;
            break;
        case '<': // little-endian
            ep = STRUCT_ENDIAN_LITTLE;
            break;
        case '>': // big-endian
        case '!': // network (= big-endian)
            ep = STRUCT_ENDIAN_BIG;
            break;
        case 'b':
        case 'c':
            // consume a value only for value codes, not for endianness markers
            *bp++ = (unsigned char)values[idx++];
            break;
        case 'i':
        {
            long long value = values[idx++];
            if (ep == STRUCT_ENDIAN_LITTLE)
            {
                *bp++ = value;
                *bp++ = value >> 8;
                *bp++ = value >> 16;
                *bp++ = value >> 24;
            }
            else
            {
                *bp++ = value >> 24;
                *bp++ = value >> 16;
                *bp++ = value >> 8;
                *bp++ = value;
            }
            break;
        }
        case 'h':
        {
            long long value = values[idx++];
            if (ep == STRUCT_ENDIAN_LITTLE)
            {
                *bp++ = value;
                *bp++ = value >> 8;
            }
            else
            {
                *bp++ = value >> 8;
                *bp++ = value;
            }
            break;
        }
        case 'q':
        {
            long long value = values[idx++];
            if (ep == STRUCT_ENDIAN_LITTLE)
            {
                *bp++ = value;
                *bp++ = value >> 8;
                *bp++ = value >> 16;
                *bp++ = value >> 24;
                *bp++ = value >> 32;
                *bp++ = value >> 40;
                *bp++ = value >> 48;
                *bp++ = value >> 56;
            }
            else
            {
                *bp++ = value >> 56;
                *bp++ = value >> 48;
                *bp++ = value >> 40;
                *bp++ = value >> 32;
                *bp++ = value >> 24;
                *bp++ = value >> 16;
                *bp++ = value >> 8;
                *bp++ = value;
            }
            break;
        }
        }
    }
    return (int)(bp - buf);
}
int main()
{
    time_t start, end;
    time(&start);
    // std::ios_base::sync_with_stdio(false);
    std::vector<unsigned char> myVector{};
    myVector.reserve(100000000 * 16);
    for (int i = 0; i < 100000000; i++)
    {
        char bytes[BUFSIZ] = {'\0'};
        long long values[4] = {64, 65, 66, 67};
        pack(bytes, "iiii", values);
        for (int j = 0; j < 16; j++)
        {
            myVector.push_back(bytes[j]);
        }
    }
    time(&end);
    auto v2 = std::vector<unsigned char>(myVector.begin(), myVector.begin() + 16);
    debug_print2("result: ", v2);
    double time_taken = double(end - start);
    std::cout << "pack time: " << std::fixed << std::setprecision(5)
              << time_taken << " sec" << std::endl;
    return 0;
}
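As an aside, since C++20 the runtime probe in struct_get_endian can be replaced by a compile-time check using std::endian from <bit>; a minimal sketch, reusing the macros defined above:

#include <bit> // std::endian (C++20)

// Sketch: compile-time equivalent of struct_get_endian().
constexpr int struct_get_endian_cxx20()
{
    return std::endian::native == std::endian::little
               ? STRUCT_ENDIAN_LITTLE
               : STRUCT_ENDIAN_BIG;
}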

Related

Deserialization of uint8 array to int64 fails but should work

I'm going to send an int64 over TCP and need to serialize and deserialize it.
First I cast it to a uint64.
I byte-shift it into a uint8 array.
Then I byte-shift the array into a uint64.
And finally I cast it back to an int64.
But it returns a different value than I put in...
I have checked the hex values, and they look correct to me...
Code:
#include <array>   // std::array
#include <cstdint> // int64_t, uint8_t, uint64_t
#include <cstring> // memcpy
#include <iostream>
#include <iomanip>
//SER & D-SER int64
std::array<uint8_t, 8> int64ToBytes(int64_t val)
{
    uint64_t v = (uint64_t)val;
    std::array<uint8_t, 8> bytes;
    bytes[0] = (v & 0xFF00000000000000) >> 56;
    bytes[1] = (v & 0x00FF000000000000) >> 48;
    bytes[2] = (v & 0x0000FF0000000000) >> 40;
    bytes[3] = (v & 0x000000FF00000000) >> 32;
    bytes[4] = (v & 0x00000000FF000000) >> 24;
    bytes[5] = (v & 0x0000000000FF0000) >> 16;
    bytes[6] = (v & 0x000000000000FF00) >> 8;
    bytes[7] = (v & 0x00000000000000FF);
    return bytes;
}

int64_t bytesToInt64(uint8_t bytes[8])
{
    uint64_t v = 0;
    v |= bytes[0]; v <<= 8;
    v |= bytes[1]; v <<= 8;
    v |= bytes[3]; v <<= 8;
    v |= bytes[4]; v <<= 8;
    v |= bytes[5]; v <<= 8;
    v |= bytes[6]; v <<= 8;
    v |= bytes[7]; v <<= 8;
    v |= bytes[8];
    return (int64_t)v;
}
int main() {
    uint8_t bytes[8] = {0};
    int64_t val = 1234567890;
    // Print value to be received on the other side
    std::cout << std::dec << "INPUT: " << val << std::endl;
    // Serialize
    memcpy(&bytes, int64ToBytes(val).data(), 8);
    // Deserialize
    int64_t val2 = bytesToInt64(bytes);
    // Print deserialized int64
    std::cout << std::dec << "RESULT: " << val2 << std::endl;
}
Output:
INPUT: 1234567890
RESULT: 316049379840
I've been trying to solve this for a day now and can't find the problem.
Thanks.
Try using the uint64_t htobe64(uint64_t host_64bits) and uint64_t be64toh(uint64_t big_endian_64bits) functions to convert from host to big endian (network order) and from network order to host order respectively.
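A minimal sketch of that approach (htobe64/be64toh live in <endian.h> on Linux; the names differ on BSD/macOS):

#include <endian.h> // htobe64, be64toh (Linux)
#include <cstdint>
#include <cstring>

// Sketch: round-trip an int64 through a big-endian byte array.
void serialize64(uint8_t out[8], int64_t val)
{
    uint64_t be = htobe64((uint64_t)val);
    std::memcpy(out, &be, sizeof be);
}

int64_t deserialize64(const uint8_t in[8])
{
    uint64_t be;
    std::memcpy(&be, in, sizeof be);
    return (int64_t)be64toh(be);
}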
You are shifting the entire value. Try something like:
((uint64_t)bytes[0] << 56) |
((uint64_t)bytes[1] << 48) |
... | (uint64_t)bytes[7]
(The casts matter: a uint8_t is promoted to int before the shift, and shifting an int by 56 is undefined behavior.)
There is no 9th byte (i.e. bytes[8]).
The byte indexing in your bytesToInt64 function is off: bytes[2] is skipped, bytes[8] is out of bounds, and there is a stray shift after the last byte.
Below you find the corrected bytesToInt64 function:
int64_t bytesToInt64(uint8_t bytes[8])
{
    uint64_t v = 0;
    v |= bytes[0]; v <<= 8;
    v |= bytes[1]; v <<= 8;
    v |= bytes[2]; v <<= 8;
    v |= bytes[3]; v <<= 8;
    v |= bytes[4]; v <<= 8;
    v |= bytes[5]; v <<= 8;
    v |= bytes[6]; v <<= 8;
    v |= bytes[7];
    return (int64_t)v;
}
If you're transferring data between machines with the same endianness, you don't need to serialize the data byte by byte; you can send it as it is represented in memory. In that case you don't need anything like the above, just your memcpy call:
// Serialize
memcpy(&bytes, &val, sizeof(val));
// Deserialize
int64_t val2;
memcpy(&val2, &bytes, sizeof(val));
If you're sending data between hosts with different endianness, you should send it as shown in the answer from Roger; basically, you have to make sure the data is represented the same way on both ends.
Here's a variant which not only serializes but will work with any integer type and across platforms:
#include <iostream>
#include <type_traits>
using namespace std;

template <typename T> enable_if_t<is_integral_v<T>> serialize(T t, char *buf)
{
    for (auto i = 0U; i < sizeof(t); ++i) {
        buf[i] = t & 0xff;
        t >>= 8;
    }
}

template <typename T> enable_if_t<is_integral_v<T>> deserialize(T &t, char const *buf)
{
    t = 0;
    for (auto i = 0U; i < sizeof(t); ++i) {
        t <<= 8;
        t |= static_cast<unsigned char>(buf[sizeof(t) - 1 - i]); // cast avoids sign extension from a signed char
    }
}

int main() {
    int64_t t1 = 0x12345678;
    int64_t t2{0};
    char buffer[sizeof(t1)];
    serialize(t1, buffer);
    deserialize(t2, buffer);
    cout << "I got " << hex << t2 << endl;
}
You should probably use containers (or pass explicit sizes) to serialize/deserialize the data, to make sure you don't overflow your buffer (considering you are transferring more than one int at a time).
This should work. You may also want to check that the input array is the right size in your bytesToInt64 function.
std::array<uint8_t, 8> int64ToBytes(int64_t val)
{
    uint64_t v = (uint64_t)val;
    std::array<uint8_t, 8> bytes;
    for (size_t i = 0; i < 8; i++)
    {
        bytes[i] = (v >> (8 * (7 - i))) & 0xFF;
    }
    return bytes;
}

int64_t bytesToInt64(uint8_t bytes[8])
{
    uint64_t v = 0;
    for (size_t i = 0; i < 8; i++)
    {
        v |= (uint64_t)bytes[i] << (8 * (7 - i)); // cast before shifting: an int shifted by 56 overflows
    }
    return (int64_t)v;
}
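A quick round-trip check of these two helpers (usage sketch; reuses the includes from the question):

int main()
{
    int64_t val = 1234567890;
    uint8_t bytes[8] = {0};
    std::array<uint8_t, 8> tmp = int64ToBytes(val);
    std::memcpy(bytes, tmp.data(), sizeof bytes);
    std::cout << bytesToInt64(bytes) << std::endl; // prints 1234567890
}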

H264 to PES packetization

I have a TI DaVinci H264 encoder and I want to pack its output frames into PES. The stream is in Annex B format.
I took ffmpeg's PES header writer and made something like this:
void MediaPacket::writePesHeader(std::vector<uint8_t>& buffer)
{
    int header_len, flags, len, val;
    uint8_t *q = buffer.data();
    *q++ = 0x00;
    *q++ = 0x00;
    *q++ = 0x01;
    *q++ = 0xe0;
    header_len = 0;
    flags = 0;
    if (pts != UNKNOWN) {
        header_len += 5;
        flags |= 0x80;
    }
    if (dts != UNKNOWN && pts != UNKNOWN && dts != pts) {
        header_len += 5;
        flags |= 0x40;
    }
    len = 0;
    *q++ = len >> 8;
    *q++ = len;
    val = 0x80;
    *q++ = val;
    *q++ = flags;
    *q++ = header_len;
    if (pts != UNKNOWN) {
        write_pts(q, flags >> 6, pts);
        q += 5;
    }
    if (dts != UNKNOWN && pts != UNKNOWN && dts != pts) {
        write_pts(q, 1, dts);
        q += 5;
    }
    buffer.resize(q - buffer.data());
}

static void write_pts(uint8_t *q, int fourbits, int64_t pts)
{
    int val;
    val = fourbits << 4 | (((pts >> 30) & 0x07) << 1) | 1;
    *q++ = val;
    val = (((pts >> 15) & 0x7fff) << 1) | 1;
    *q++ = val >> 8;
    *q++ = val;
    val = (((pts) & 0x7fff) << 1) | 1;
    *q++ = val >> 8;
    *q++ = val;
}
The encoder's output without headers plays fine in Totem and avplay, but with the headers present there is a "could not find codec parameters" error.
What am I doing wrong?

C++ - serialize double to binary file in little endian

I'm trying to implement a function that writes a double to a binary file in little-endian byte order.
So far I have this BinaryWriter class implementation:
void BinaryWriter::open_file_stream( const String& path )
{
    // open output stream
    m_fstream.open( path.c_str(), std::ios_base::out | std::ios_base::binary);
    m_fstream.imbue(std::locale::classic());
}

void BinaryWriter::write( int v )
{
    char data[4];
    data[0] = static_cast<char>(v & 0xFF);
    data[1] = static_cast<char>((v >> 8) & 0xFF);
    data[2] = static_cast<char>((v >> 16) & 0xFF);
    data[3] = static_cast<char>((v >> 24) & 0xFF);
    m_fstream.write(data, 4);
}

void BinaryWriter::write( double v )
{
    // TBD
}
void BinaryWriter::write( int v ) was implemented using Sven's answer to the "What is the correct way to output hex data to a file?" post.
I'm not sure how to implement void BinaryWriter::write( double v ).
I tried to naively follow the write( int v ) implementation, but it didn't work. I guess I don't fully understand it.
Thank you guys.
You didn't say, but I'm assuming the machine you're running on is big-endian; otherwise writing a double is the same as writing an int, only it's 8 bytes.
const int __one__ = 1;
const bool isCpuLittleEndian = 1 == *(char*)(&__one__); // CPU endianness
const bool isFileLittleEndian = false; // output endianness - you choose :)

void BinaryWriter::write( double v )
{
    if (isCpuLittleEndian ^ isFileLittleEndian) {
        char data[8], *pDouble = (char*)(double*)(&v);
        for (int i = 0; i < 8; ++i) {
            data[i] = pDouble[7 - i];
        }
        m_fstream.write(data, 8);
    }
    else
        m_fstream.write((char*)(&v), 8);
}
But don't forget that generally an int is 4 octets and a double is 8 octets.
Another problem is truncation with static_cast. See this example:
double d = 6.1;
char c = static_cast<char>(d); // c == 6
The solution is to reinterpret the value through a pointer:
double d = 6.1;
char* c = reinterpret_cast<char*>(&d);
After that, you can use write( int_64 *v ), which is an extension of write( int_t v ).
You can use this method with:
double d = 45612.9874;
binary_writer.write64(reinterpret_cast<int_64*>(&d));
Don't forget that sizeof(double) depends on the system.
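The same idea can be expressed without reinterpret_cast; a minimal sketch using std::memcpy (write64 is borrowed from the answer above and assumed here to write a uint64_t little-endian, like the int overload):

#include <cstdint>
#include <cstring>

// Sketch: copy the double's object representation into a uint64_t,
// then reuse a little-endian 64-bit integer writer.
void BinaryWriter::write( double v )
{
    uint64_t bits;
    std::memcpy(&bits, &v, sizeof bits); // grab the IEEE-754 bytes without aliasing UB
    write64(bits);                       // assumed little-endian u64 writer
}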
A little program converting doubles to an IEEE little-endian representation.
Apart from the test in to_little_endian, it should work on any machine.
#include <cmath>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <limits>
#include <sstream>
#include <random>
bool to_little_endian(double value) {
    enum { zero_exponent = 0x3ff };
    uint8_t sgn = 0;        // 1 bit
    uint16_t exponent = 0;  // 11 bits
    uint64_t fraction = 0;  // 52 bits

    double d = value;
    if (std::signbit(d)) {
        sgn = 1;
        d = -d;
    }
    if (std::isinf(d)) {
        exponent = 0x7ff;
    }
    else if (std::isnan(d)) {
        exponent = 0x7ff;
        fraction = 0x8000000000000;
    }
    else if (d) {
        int e;
        double f = frexp(d, &e);
        // A leading one is implicit.
        // Hence one has a zero fraction and the zero_exponent:
        exponent = uint16_t(e + zero_exponent - 1);
        unsigned bits = 0;
        while (f) {
            f *= 2;
            fraction <<= 1;
            if (1 <= f) {
                fraction |= 1;
                f -= 1;
            }
            ++bits;
        }
        fraction = (fraction << (53 - bits)) & ((uint64_t(1) << 52) - 1);
    }
    // Little-endian representation.
    uint8_t data[sizeof(double)];
    for (unsigned i = 0; i < 6; ++i) {
        data[i] = fraction & 0xFF;
        fraction >>= 8;
    }
    data[6] = (exponent << 4) | fraction;
    data[7] = (sgn << 7) | (exponent >> 4);
    // This test works on a little-endian machine, only.
    double result = *(double*) &data;
    if (result == value || (std::isnan(result) && std::isnan(value))) return true;
    else {
        struct DoubleLittleEndian {
            uint64_t fraction : 52;
            uint64_t exp : 11;
            uint64_t sgn : 1;
        };
        DoubleLittleEndian little_endian;
        std::memcpy(&little_endian, &data, sizeof(double));
        std::cout << std::hex
                  << "  Result: " << result << '\n'
                  << "Fraction: " << little_endian.fraction << '\n'
                  << "     Exp: " << little_endian.exp << '\n'
                  << "     Sgn: " << little_endian.sgn << '\n'
                  << std::endl;
        std::memcpy(&little_endian, &value, sizeof(value));
        std::cout << std::hex
                  << "   Value: " << value << '\n'
                  << "Fraction: " << little_endian.fraction << '\n'
                  << "     Exp: " << little_endian.exp << '\n'
                  << "     Sgn: " << little_endian.sgn
                  << std::endl;
        return false;
    }
}

int main()
{
    to_little_endian(+1.0);
    to_little_endian(+0.0);
    to_little_endian(-0.0);
    to_little_endian(+std::numeric_limits<double>::infinity());
    to_little_endian(-std::numeric_limits<double>::infinity());
    to_little_endian(std::numeric_limits<double>::quiet_NaN());
    std::uniform_real_distribution<double> distribute(-100, +100);
    std::default_random_engine random;
    for (unsigned loop = 0; loop < 10000; ++loop) {
        double value = distribute(random);
        to_little_endian(value);
    }
    return 0;
}

Substitute an instruction depending on a condition

I have two for loops that I want to combine into a single function. The problem is that they differ in only one instruction:
for (int i = 1; i <= fin_cabecera - 1; i++) {
    buffer[i] &= 0xfe;
    if (bitsLetraRestantes < 0) {
        bitsLetraRestantes = 7;
        mask = 0x80;
        letra = sms[++indiceLetra]; // *differs here*
    }
    char c = (letra & mask) >> bitsLetraRestantes--;
    mask >>= 1;
    buffer[i] ^= c;
}
And the other:
for (int i = datos_fichero; i <= tamanio_en_bits + datos_fichero; i++) {
    buffer[i] &= 0xfe;
    if (bitsLetraRestantes < 0) {
        bitsLetraRestantes = 7;
        mask = 0x80;
        f.read(&letra, 1); // *differs here*
    }
    char c = (letra & mask) >> bitsLetraRestantes--;
    mask >>= 1;
    buffer[i] ^= c;
}
I thought of something like this:
void write_bit_by_bit(unsigned char buffer[], int from, int to, bool type) {
    for (int i = from; i <= to; i++) {
        buffer[i] &= 0xfe;
        if (bitsLetraRestantes < 0) {
            bitsLetraRestantes = 7;
            mask = 0x80;
            type ? (letra = sms[++indiceLetra]) : f.read(&letra, 1);
        }
        char c = (letra & mask) >> bitsLetraRestantes--;
        mask >>= 1;
        buffer[i] ^= c;
    }
}
But I think there has to be a better method.
Context:
I will give more context (I will try to explain it as well as I can within my language limitations). I have to read one byte at a time because the buffer variable represents an image pixel. sms is a message that has to be hidden within the image, and letra is a single char of that message. In order not to alter the appearance of the image, each bit of each character has to be written into the last bit of a pixel. Let me give you an example.
letra = 'H' // 01001000 in binary
buffer[0] = 255 // white pixel 11111111
In order to hide the 'H' char, I will need 8 pixels.
The result will be like:
buffer[0] // 11111110
buffer[1] // 11111111
buffer[2] // 11111110
buffer[3] // 11111110
buffer[4] // 11111111
buffer[5] // 11111110
buffer[6] // 11111110
buffer[7] // 11111110
The H is hidden in the last bit of each pixel. I hope I explained it well.
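For reference, a compact sketch of that LSB-embedding step for a single character (the names here are illustrative, not from the actual code):

// Sketch: hide one byte in the low bits of eight pixels, MSB first.
void embed_byte(unsigned char* pixels, unsigned char ch)
{
    for (int bit = 7; bit >= 0; --bit) {
        *pixels &= 0xfe;            // clear the pixel's least significant bit
        *pixels |= (ch >> bit) & 1; // store one message bit
        ++pixels;
    }
}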
[Solution]
Thanks to @anatolyg I've rewritten the code and now it works just as I wanted. Here is how it looks:
void write_bit_by_bit(unsigned char buffer[], ifstream& f, int from, int to, char sms[], bool type){
    unsigned short int indiceLetra = 0;
    short int bitsLetraRestantes = 7;
    unsigned char mask = 0x80; // start with the most significant bit (10000000)
    char* file_buffer = nullptr;
    if (type) { // write file data
        int number_of_bytes_to_read = get_file_size(f);
        file_buffer = new char[number_of_bytes_to_read];
        f.read(file_buffer, number_of_bytes_to_read);
    }
    const char* place_to_get_stuff_from = type ? file_buffer : sms;
    char letra = place_to_get_stuff_from[0];
    for (int i = from; i <= to; i++) {
        buffer[i] &= 0xfe; // zero the last bit with mask 11111110
        // TODO: do this with two loops
        if (bitsLetraRestantes < 0) {
            bitsLetraRestantes = 7;
            mask = 0x80;
            letra = place_to_get_stuff_from[++indiceLetra]; // letra = sms[++indiceLetra];
        }
        char c = (letra & mask) >> bitsLetraRestantes--;
        mask >>= 1;
        buffer[i] ^= c; // store the character's bit in the pixel's last bit
    }
}
int ocultar(unsigned char buffer[], int tamImage, char sms[], int tamSms){
    ifstream f(sms);
    if (f) {
        strcpy(sms, basename(sms));
        buffer[0] = 0xff;
        int fin_cabecera = strlen(sms) * 8 + 1;
        buffer[fin_cabecera] = 0xff;
        write_bit_by_bit(buffer, f, 1, fin_cabecera - 1, sms, WRITE_FILE_NAME);
        int tamanio_en_bits = get_file_size(f) * 8;
        int datos_fichero = fin_cabecera + 1;
        write_bit_by_bit(buffer, f, datos_fichero, tamanio_en_bits + datos_fichero, sms, WRITE_FILE_DATA);
        unsigned char fin_contenido = 0xff;
        short int bitsLetraRestantes = 7;
        unsigned char mask = 0x80;
        for (int i = tamanio_en_bits + datos_fichero + 1;
             i < tamanio_en_bits + datos_fichero + 1 + 8; i++) {
            buffer[i] &= 0xfe;
            char c = (fin_contenido & mask) >> bitsLetraRestantes--;
            mask >>= 1;
            buffer[i] ^= c;
        }
    }
    return 0;
}
Since you are talking about optimization here, consider performing the read outside the loop. This will be a major optimization (reading 10 bytes at once is quicker than reading 1 byte 10 times). It will require an additional buffer for the file f.
char f_buffer[ENOUGH_SPACE]; // declared outside the if so it outlives the read
if (!type)
{
    number = calc_number_of_bytes_to_read();
    f.read(f_buffer, number);
}
for (...) {
    // your code
}
After you have done this, your original question is easy to answer:
const char* place_to_get_stuff_from = type ? sms : f_buffer;
for (...) {
    ...
    letra = place_to_get_stuff_from[++indiceLetra];
    ...
}

how to convert digits from an integer into a byte array in C++

I tried to convert the digits of a number like 9140 into a char array of bytes. I finally did it, but for some reason one of the digits is converted wrong.
The idea is to separate each digit and convert it into a byte[4], saving it in a global array of bytes; that means the array holds one digit every 4 positions. I insert each digit at the end of the array, and finally I insert the number of digits at the end of the array.
The problem happens randomly with some values: for example, for the value 25 it works, but for 9140 it returns 9040. What could be the problem? This is the code:
void convertCantToByteArray4Digits(unsigned char *bufferDigits, int cant){
    //char bufferDigits[32];
    int bufferPos = 20;
    double cantAux = cant;
    int digit = 0, cantDigits = 0;
    double subdigit = 0;
    while (cantAux > 0) {
        cout << "VUELTA" << endl;
        cantAux /= 10;
        cout << "cantAux/=10:" << cantAux << endl;
        cout << "floor" << floor(cantAux) << endl;
        subdigit = cantAux - floor(cantAux);
        cout << "subdigit" << subdigit << endl;
        digit = static_cast<int>(subdigit * 10);
        cout << "digit:" << subdigit * 10 << endl;
        cantAux = cantAux - subdigit;
        cout << "cantAux=cantAux-subdigit:" << cantAux << endl;
        bufferDigits[bufferPos-4] = (digit >> 24) & 0xFF;
        std::cout << static_cast<int>(bufferDigits[bufferPos-4]) << std::endl;
        bufferDigits[bufferPos-3] = (digit >> 16) & 0xFF;
        std::cout << static_cast<int>(bufferDigits[bufferPos-3]) << std::endl;
        bufferDigits[bufferPos-2] = (digit >> 8) & 0xFF;
        std::cout << static_cast<int>(bufferDigits[bufferPos-2]) << std::endl;
        bufferDigits[bufferPos-1] = (digit) & 0xFF;
        std::cout << static_cast<int>(bufferDigits[bufferPos-1]) << std::endl;
        /*bufferDigits[0] = digit >> 24;
        std::cout << bufferDigits[0] << std::endl;
        bufferDigits[1] = digit >> 16;
        bufferDigits[2] = digit >> 8;
        bufferDigits[3] = digit;*/
        bufferPos -= 4;
        cantDigits++;
    }
    cout << "sizeof" << sizeof(bufferDigits) << endl;
    cout << "cantDigits" << cantDigits << endl;
    bufferPos = 24;
    bufferDigits[bufferPos-4] = (cantDigits) >> 24;
    //std::cout << bufferDigits[bufferPos-4] << std::endl;
    bufferDigits[bufferPos-3] = (cantDigits) >> 16;
    bufferDigits[bufferPos-2] = (cantDigits) >> 8;
    bufferDigits[bufferPos-1] = (cantDigits);
}
bufferDigits has a size of 24 bytes, and the cant parameter is the number to convert. I welcome any questions about my code.
I feel this is the most C++ way to do it, if I understood your question correctly:
#include <string>
#include <iterator>
#include <iostream>
#include <algorithm>

template <typename It>
It tochars(unsigned int i, It out)
{
    It save = out;
    do *out++ = '0' + i % 10;
    while (i /= 10);
    std::reverse(save, out);
    return out;
}

int main()
{
    char buf[10];
    char* end = tochars(9140, buf);
    *end = 0; // null terminate
    std::cout << buf << std::endl;
}
Instead of using a double and the floor function, just use an int and the modulus operator.
void convertCantToByteArray4Digits(unsigned char *bufferDigits, int cant)
{
    int bufferPos = 20;
    int cantAux = cant;
    int digit = 0, cantDigits = 0;
    while (cantAux > 0)
    {
        cout << "VUELTA" << endl;
        digit = cantAux % 10;
        cout << "digit:" << digit << endl;
        cantAux /= 10;
        cout << "cantAux/=10:" << cantAux << endl;
        bufferDigits[bufferPos-4] = (digit >> 24) & 0xFF;
        std::cout << static_cast<int>(bufferDigits[bufferPos-4]) << std::endl;
        bufferDigits[bufferPos-3] = (digit >> 16) & 0xFF;
        std::cout << static_cast<int>(bufferDigits[bufferPos-3]) << std::endl;
        bufferDigits[bufferPos-2] = (digit >> 8) & 0xFF;
        std::cout << static_cast<int>(bufferDigits[bufferPos-2]) << std::endl;
        bufferDigits[bufferPos-1] = (digit) & 0xFF;
        std::cout << static_cast<int>(bufferDigits[bufferPos-1]) << std::endl;
        bufferPos -= 4;
        cantDigits++;
    }
    // (then write cantDigits into the last four bytes, as in the original code)
}
Why not use a union?
union {
    int i;
    char c[4];
};
i = 2530;
// now c is set appropriately
Or memcpy?
memcpy(bufferDigits, &cant, sizeof(int));
Why so complicated? Just divide and take remainders. Here's a reentrant example to which you provide a buffer, and you get back a pointer to the beginning of the converted string:
char * to_string(unsigned int n, char * buf, unsigned int len)
{
    if (len < 1) return buf;
    buf[--len] = 0;
    if (n == 0 && len > 0) { buf[--len] = '0'; }
    while (n != 0 && len > 0) { buf[--len] = '0' + (n % 10); n /= 10; }
    return &buf[len];
}
Usage: char buf[100]; char * s = to_string(4160, buf, 100);