The code below works as expected with numbers <= 2^31-1 but produces wrong (way too large) results on numbers > 2^31-1. This is unexpected, since I never used uint32_t.
void encodeVarint(uint64_t value, uint8_t* output, uint8_t &outputSize)
{
    outputSize = 0;
    while (value > 127)
    {
        output[outputSize] = ((uint8_t)(value & 127)) | 128;
        value >>= 7;
        outputSize++;
    }
    output[outputSize++] = ((uint8_t)value) & 127;
}
uint64_t decodeVarint(uint8_t* input, uint8_t &inputSize)
{
    uint64_t ret = 0;
    inputSize = 0;
    for (uint8_t i = 0; ; i++)
    {
        ret |= (input[i] & 127) << (7 * i);
        inputSize++;
        if(!(input[i] & 128))
            break;
    }
    return ret;
}
Can someone please explain what I am doing wrong?
(input[i] & 127) has type int, which may be only 32 bits wide, so the shift overflows for i >= 5.
Try
ret |= (uint64_t)(input[i] & 127) << (7 * i);
instead of
ret |= (input[i] & 127) << (7 * i);
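For reference, here is the decoder with that cast applied; the encoder needs no change, since value is already a uint64_t there:

uint64_t decodeVarint(uint8_t* input, uint8_t &inputSize)
{
    uint64_t ret = 0;
    inputSize = 0;
    for (uint8_t i = 0; ; i++)
    {
        // widen to uint64_t before shifting, so shifts of 32 or more are safe
        ret |= (uint64_t)(input[i] & 127) << (7 * i);
        inputSize++;
        if(!(input[i] & 128))
            break;
    }
    return ret;
}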
I am trying to copy n bits from any position of an array of uint8_ts into a single 64-bit integer. Here is a working solution that can copy an arbitrary number of bits into a 64-bit integer starting at the beginning of the array, but I want to be able to start at any position in the array.
For example I might want to copy bits 2 through 11 of the array:
{7, 128, 7}
In binary that would be:
00000111 10000000 00000111
And I want an integer with value:
0001111000
std::uint64_t key_reg(std::uint8_t* bytes, std::size_t n)
{
    std::uint64_t reg = 0;
    // The amount of bits that fit into an entire element of an array
    // ex, if I'm copying 17 bits, even_bytes == 2
    std::size_t even_bytes = (n - (n % 8)) / 8;
    // what's left over after the even bytes
    // in this case, remainder == 1
    std::size_t remainder = n - even_bytes * 8;
    // copy each byte into the integer
    for(std::size_t i = 0; i < even_bytes; ++i)
        if(remainder)
            reg |= (std::uint64_t)bytes[i] << (8 * (even_bytes - i));
        else
            reg |= (std::uint64_t)bytes[i] << (8 * (even_bytes - i - 1));
    // if there is an uneven number of bits, copy them in
    if(remainder)
        reg |= (std::uint64_t)bytes[even_bytes];
    return reg;
}
Do you have any idea how to implement
std::uint64_t key_reg(std::uint8_t* bytes, std::size_t pos, std::size_t n);
I didn't think anyone would answer so fast, so here is a solution I came up with in the same style. I found this bitfieldmask function on Stack Overflow, but I'm unable to find the question to credit the author.
template<typename R>
static constexpr R bitfieldmask(unsigned int const a, unsigned int const b)
{
    return ((static_cast<R>(-1) >> (((sizeof(R) * CHAR_BIT) - 1) - (b)))
            & ~((1 << (a)) - 1));
}
std::uint64_t key_reg(std::uint8_t* bytes, std::size_t pos, std::size_t n)
{
    std::uint64_t reg = 0;
    std::size_t starting_byte = (pos < 8) ? 0 : ((pos - (pos % 8)) / 8);
    std::size_t even_bytes = (n - (n % 8)) / 8;
    std::size_t remainder = n - even_bytes * 8;
    for(std::size_t i = 0; i < even_bytes; ++i)
        if(remainder)
            reg |= (std::uint64_t)bytes[starting_byte + i] << (8 * (even_bytes - i));
        else
            reg |= (std::uint64_t)bytes[starting_byte + i] << (8 * (even_bytes - i - 1));
    if(remainder)
        reg |= (std::uint64_t)bytes[even_bytes];
    // mask out anything before the first bit
    if(pos % 8 != 0) {
        std::size_t a = n - pos;
        std::size_t b = n;
        auto mask = bitfieldmask<std::uint64_t>(a, b);
        reg = (reg & ~mask);
    }
    return reg;
}
I think it is just simpler to copy all the necessary bytes and then mask off the extra bits:
std::uint64_t key_reg(std::uint8_t* bytes, std::size_t n)
{
    std::uint64_t reg = 0;
    std::reverse_copy( bytes, bytes + n / 8 + ( n % 8 != 0 ),
                       reinterpret_cast<char *>( &reg ) );
    reg >>= (8 - n % 8) % 8;
    reg &= ~( -1ULL << n );
    return reg;
}
Using pos would be a little more complex:
std::uint64_t key_reg(std::uint8_t* bytes, std::size_t pos, std::size_t n)
{
    std::uint64_t reg = 0;
    auto endpos = pos + n;
    auto start = bytes + pos / 8;
    auto end = bytes + endpos / 8 + ( endpos % 8 != 0 );
    std::reverse_copy( start, end, reinterpret_cast<char *>( &reg ) );
    reg >>= (8 - endpos % 8) % 8;
    reg &= ~( -1ULL << n );
    return reg;
}
live example
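A quick check against the example from the question (a sketch; note that this approach assumes a little-endian host, because reverse_copy writes raw bytes into reg, and it requires n < 64 and the selected bits to span at most 8 bytes):

#include <algorithm>
#include <cstdint>
#include <cstdio>

int main()
{
    std::uint8_t bytes[] = {7, 128, 7};
    // bits 2 through 11 of 00000111 10000000 00000111 -> 0001111000 = 120
    std::printf("%llu\n", (unsigned long long)key_reg(bytes, 2, 10));
}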
Your basic approach looks sound. To handle bit offsets that aren't multiples of 8, you just need to first read in a single partial byte and then proceed with the rest:
uint64_t key_reg(const uint8_t* bytes, size_t pos, size_t n) {
    const uint8_t* ptr = bytes + pos / 8;
    uint64_t result = 0;
    if (pos % 8 > 0) {
        /* read the first partial byte, masking off unwanted bits */
        result = *(ptr++) & (0xFF >> (pos % 8));
        if (n <= 8 - pos % 8) {
            /* we need no more bits; shift off any excess and return early */
            return result >> (8 - pos % 8 - n);
        } else {
            /* reduce the requested bit count by the number we got from this byte */
            n -= 8 - pos % 8;
        }
    }
    /* read and shift in as many whole bytes as we need */
    while (n >= 8) {
        result = (result << 8) + *(ptr++);
        n -= 8;
    }
    /* finally read and shift in the last partial byte */
    if (n > 0) {
        result = (result << n) + (*ptr >> (8 - n));
    }
    return result;
}
Here's an online demo with a simple test harness, demonstrating that this code indeed works correctly in all the edge cases I could find, such as reading a full 64 bits starting from the middle of a byte or reading only part of a single byte (which is actually a non-trivial special case, handled in a separate branch with its own return statement in the code above).
(Note that I wrote the code above in plain C since, like your original code, it doesn't really make use of any C++ specific features. Feel free to "C++ify" it by adding std:: where appropriate.)
One feature that the test harness doesn't check, but which I believe this code should possess, is that it never reads more bytes from the input array than necessary. In particular, the bytes array is not accessed at all if n == 0 (although a pointer to pos / 8 bytes after the start of the array is still calculated).
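For example, the single-byte early-return branch can be exercised like this (a sketch with made-up values):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t bytes[] = {0x36}; /* 00110110 */
    /* bits 2 through 5 are 1101 -> 13; n fits entirely in the first partial byte */
    printf("%llu\n", (unsigned long long)key_reg(bytes, 2, 4));
    return 0;
}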
I have the following:
struct MyType
{
    std::array<uint8_t, 892> m_rguID;
    uint16_t m_bitLength;
    void GetBits(uint16_t startBit, uint16_t nBits, uint64_t & bits) const;
};
void MyType::GetBits(uint16_t startBit, uint16_t nBits, uint64_t & bits) const
{
    if(startBit + nBits > m_bitLength)
        throw std::runtime_error("Index is out of range");
    uint32_t num1 = startBit % 8U;
    uint32_t num2 = 8U - num1;
    uint32_t num3 = nBits >= num2 ? num2 : nBits;
    uint32_t num4 = startBit >> 3;
    bits = (uint64_t)(((int64_t)((uint64_t)m_rguID[num4] >> (8 - num3 - num1)) & (int64_t)((1 << num3) - 1)) << (nBits - num3));
    uint32_t num5 = num4 + 1U;
    int num6 = nBits - num3;
    if(num6 <= 0)
        return;
    int num7 = num6 - 8;
    int num8 = 8 - num6;
    do
    {
        if(num6 >= 8)
        {
            bits |= (uint64_t)m_rguID[num5] << num7;
            ++num5;
        }
        else
        {
            bits |= (uint64_t)m_rguID[num5] >> num8;
            ++num5;
        }
        num6 += -8;
        num7 += -8;
        num8 += 8;
    } while(num6 > 0);
}
I have a function that interleaves the bits of 32 bit words and returns a 64 bit result. For this simple test case, the bottom 3 bytes are correct, and the contents of the top 5 bytes are incorrect. intToBin_32 and intToBin_64 are convenience functions to see the binary representation of the arguments and return val. I've placed casts from the 32 bit type to the 64 bit type everywhere I think they are needed, but I'm still seeing this unexpected (to me, at least) behavior. Is there an implicit conversion going on here, or is there some other reason this doesn't work correctly?
#include <stdint.h>
#include <stdio.h>

struct intString_32 { char bstr [32 + 1 + 8]; };
struct intString_64 { char bstr [64 + 1 + 8]; };

intString_32 intToBin_32(int a)
{
    intString_32 b;
    for (int i = 0; i < 8; i++)
    {
        for (int j = 0; j < 5; j++)
        {
            if (j != 4)
            {
                b.bstr[5*i + j] = * ((a & (1 << (31 - (4*i + j)))) ? "1" : "0");
            }
            else
            {
                b.bstr[5*i + j] = 0x20;
            }
        }
    }
    b.bstr[40] = * ( "\0" );
    return b;
}

intString_64 intToBin_64(long long a)
{
    intString_64 b;
    for (int i = 0; i < 8; i++)
    {
        for (int j = 0; j < 9; j++)
        {
            if (j != 8)
            {
                b.bstr[9*i + j] = * ((a & (1 << (63 - (8*i + j)))) ? "1" : "0");
            }
            else
            {
                b.bstr[9*i + j] = 0x20;
            }
        }
    }
    b.bstr[72] = * ( "\0" );
    return b;
}

uint64_t interleaveBits(unsigned int a, unsigned int b)
{
    uint64_t retVal = 0;
    for (unsigned int i = 0; i < 32; i++)
    {
        retVal |= (uint64_t)((uint64_t)((a >> i) & 0x1)) << (2*i);
        retVal |= (uint64_t)((uint64_t)((b >> i) & 0x1)) << (2*i + 1);
    }
    return retVal;
}

int main(int argc, char* argv[])
{
    unsigned int foo = 0x0004EDC7;
    unsigned int bar = 0x5A5A00FF;
    uint64_t bat = interleaveBits(foo, bar);
    printf("foo: %s \n", intToBin_32(foo).bstr);
    printf("bar: %s \n", intToBin_32(bar).bstr);
    printf("bat: %s \n\n", intToBin_64(bat).bstr);
}
Through debugging I noticed that it's your intToBin_64 which is wrong; to be specific, this line:
b.bstr[9*i + j] = * ((a & (1 << (63 - (8*i + j)))) ? "1" : "0");
Take a closer look at the shift:
(1 << (63 - (8*i + j)))
The literal 1 is an int, and shifting an int by more than 31 bits is undefined behavior. Shift a long long instead:
b.bstr[9*i + j] = * ((a & (1ll << (63 - (8*i + j)))) ? "1" : "0");
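As an aside, if the interleave itself ever becomes a hotspot, the loop can be replaced by the classic mask-spreading ("Morton number") bit-twiddling technique; a sketch that should be equivalent to the loop version above:

uint64_t spreadBits(uint64_t x)
{
    // spread the 32 low bits of x so that bit i lands at position 2*i
    x = (x | (x << 16)) & 0x0000FFFF0000FFFFULL;
    x = (x | (x << 8))  & 0x00FF00FF00FF00FFULL;
    x = (x | (x << 4))  & 0x0F0F0F0F0F0F0F0FULL;
    x = (x | (x << 2))  & 0x3333333333333333ULL;
    x = (x | (x << 1))  & 0x5555555555555555ULL;
    return x;
}

uint64_t interleaveBitsFast(unsigned int a, unsigned int b)
{
    return spreadBits(a) | (spreadBits(b) << 1);
}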
I need to convert double-byte characters, in my special case Shift-JIS, into something easier to handle, preferably with standard C++.
The following question ended up without a workaround:
Doublebyte encodings on MSVC (std::codecvt): Lead bytes not recognized
So, is there anyone with a suggestion or a reference on how to handle this conversion with standard C++?
Normally I would recommend using the ICU library, but for this alone, using it is way too much overhead.
First, a conversion function that takes a std::string with Shift-JIS data and returns a std::string with UTF-8 (note 2019: no idea anymore if it works :))
It uses a uint8_t array of 25088 elements (25088 bytes), which is used as convTable in the code. The function does not fill this variable; you have to load it from e.g. a file first. The second code part below is a program that can generate the file.
The conversion function doesn't check if the input is valid ShiftJIS data.
std::string sj2utf8(const std::string &input)
{
    std::string output(3 * input.length(), ' '); //ShiftJis won't give 4-byte UTF8, so max. 3 bytes per input char are needed
    size_t indexInput = 0, indexOutput = 0;
    while(indexInput < input.length())
    {
        char arraySection = ((uint8_t)input[indexInput]) >> 4;
        size_t arrayOffset;
        if(arraySection == 0x8) arrayOffset = 0x100; //these are two-byte shiftjis
        else if(arraySection == 0x9) arrayOffset = 0x1100;
        else if(arraySection == 0xE) arrayOffset = 0x2100;
        else arrayOffset = 0; //this is one byte shiftjis
        //determining real array offset
        if(arrayOffset)
        {
            arrayOffset += (((uint8_t)input[indexInput]) & 0xf) << 8;
            indexInput++;
            if(indexInput >= input.length()) break;
        }
        arrayOffset += (uint8_t)input[indexInput++];
        arrayOffset <<= 1;
        //unicode number is...
        uint16_t unicodeValue = (convTable[arrayOffset] << 8) | convTable[arrayOffset + 1];
        //converting to UTF8
        if(unicodeValue < 0x80)
        {
            output[indexOutput++] = unicodeValue;
        }
        else if(unicodeValue < 0x800)
        {
            output[indexOutput++] = 0xC0 | (unicodeValue >> 6);
            output[indexOutput++] = 0x80 | (unicodeValue & 0x3f);
        }
        else
        {
            output[indexOutput++] = 0xE0 | (unicodeValue >> 12);
            output[indexOutput++] = 0x80 | ((unicodeValue & 0xfff) >> 6);
            output[indexOutput++] = 0x80 | (unicodeValue & 0x3f);
        }
    }
    output.resize(indexOutput); //remove the unnecessary bytes
    return output;
}
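A minimal sketch of wiring the table up (assuming convTable is a global 25088-byte uint8_t array; loadConvTable is a hypothetical helper, not part of the original answer, reading the file produced by the generator program below):

#include <fstream>

uint8_t convTable[25088];

bool loadConvTable(const char* path)
{
    std::ifstream f(path, std::ios::binary);
    return static_cast<bool>(f.read(reinterpret_cast<char*>(convTable), sizeof convTable));
}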
About the helper file: I used to have a download here, but nowadays I only know unreliable file hosters. So... either http://s000.tinyupload.com/index.php?file_id=95737652978017682303 works for you, or:
First download the "original" data from ftp://ftp.unicode.org/Public/MAPPINGS/OBSOLETE/EASTASIA/JIS/SHIFTJIS.TXT . I can't paste this here because of the length, so we have to hope at least unicode.org stays online.
Then use this program while piping/redirecting above text file in, and redirecting the binary output to a new file. (Needs a binary-safe shell, no idea if it works on Windows).
#include <iostream>
#include <string>
#include <cstdio>
#include <cstdint>
using namespace std;

// pipe SHIFTJIS.txt in and pipe to (binary) file out
int main()
{
    string s;
    uint8_t *mapping; //same bigendian array as in converting function
    mapping = new uint8_t[2*(256 + 3*256*16)];
    //initializing with space for invalid value, and then ASCII control chars
    for(size_t i = 32; i < 256 + 3*256*16; i++)
    {
        mapping[2 * i] = 0;
        mapping[2 * i + 1] = 0x20;
    }
    for(size_t i = 0; i < 32; i++)
    {
        mapping[2 * i] = 0;
        mapping[2 * i + 1] = i;
    }
    while(getline(cin, s)) //pipe the file SHIFTJIS to stdin
    {
        if(s.substr(0, 2) != "0x") continue; //comment lines
        uint16_t shiftJisValue, unicodeValue;
        if(2 != sscanf(s.c_str(), "%hx %hx", &shiftJisValue, &unicodeValue)) //getting hex values
        {
            puts("Error hex reading");
            continue;
        }
        size_t offset; //array offset
        if((shiftJisValue >> 8) == 0) offset = 0;
        else if((shiftJisValue >> 12) == 0x8) offset = 256;
        else if((shiftJisValue >> 12) == 0x9) offset = 256 + 16*256;
        else if((shiftJisValue >> 12) == 0xE) offset = 256 + 2*16*256;
        else
        {
            puts("Error input values");
            continue;
        }
        offset = 2 * (offset + (shiftJisValue & 0xfff));
        if(mapping[offset] != 0 || mapping[offset + 1] != 0x20)
        {
            puts("Error mapping not 1:1");
            continue;
        }
        mapping[offset] = unicodeValue >> 8;
        mapping[offset + 1] = unicodeValue & 0xff;
    }
    fwrite(mapping, 1, 2*(256 + 3*256*16), stdout);
    delete[] mapping;
    return 0;
}
Notes:
Two-byte big-endian raw Unicode values (more than two bytes are not necessary here)
First 256 chars (512 bytes) for the single-byte ShiftJIS chars, value 0x20 for invalid ones.
Then 3 * 256*16 chars for the groups 0x8???, 0x9??? and 0xE???
= 25088 bytes
For those looking for the Shift-JIS conversion table data, you can get the uint8_t array here:
https://github.com/bucanero/apollo-ps3/blob/master/include/shiftjis.h
Also, here's a very simple function to convert basic Shift-JIS chars to ASCII:
const char SJIS_REPLACEMENT_TABLE[] =
" ,.,..:;?!\"*'`*^"
"-_????????*---/\\"
"~||--''\"\"()()[]{"
"}<><>[][][]+-+X?"
"-==<><>????*'\"CY"
"$c&%#&*#S*******"
"*******T><^_'='";
//Convert Shift-JIS characters to ASCII equivalent
void sjis2ascii(char* bData)
{
    uint16_t ch;
    int i, j = 0;
    int len = strlen(bData);
    for (i = 0; i < len; i += 2)
    {
        // cast through uint8_t to avoid sign extension where char is signed
        ch = ((uint8_t)bData[i] << 8) | (uint8_t)bData[i+1];
        // 'A' .. 'Z'
        // '0' .. '9'
        if ((ch >= 0x8260 && ch <= 0x8279) || (ch >= 0x824F && ch <= 0x8258))
        {
            bData[j++] = (ch & 0xFF) - 0x1F;
            continue;
        }
        // 'a' .. 'z'
        if (ch >= 0x8281 && ch <= 0x829A)
        {
            bData[j++] = (ch & 0xFF) - 0x20;
            continue;
        }
        if (ch >= 0x8140 && ch <= 0x81AC)
        {
            bData[j++] = SJIS_REPLACEMENT_TABLE[(ch & 0xFF) - 0x40];
            continue;
        }
        if (ch == 0x0000)
        {
            //End of the string
            bData[j] = 0;
            return;
        }
        // Character not found
        bData[j++] = bData[i];
        bData[j++] = bData[i+1];
    }
    bData[j] = 0;
    return;
}
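For example (a sketch; 0x8260, 0x8261, 0x8262 are the Shift-JIS codes for the full-width letters A, B, C):

char text[] = "\x82\x60\x82\x61\x82\x62";
sjis2ascii(text); // text now reads "ABC"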
I want to be able to take a user-given double and write it out in the DEC64 double-precision floating-point format (http://www.wsmr.army.mil/RCCsite/Documents/106%20Previous%20Versions/106-07/appendixO.pdf). I'm having trouble getting this to line up correctly; does anyone have experience with, or has anyone written, conversion functions for DEC types?
This seems pretty straightforward; let me take a shot at it. Note that I don't have any way of testing this for correctness.
std::vector<unsigned char> ToDEC64Float(double d)
{
    uint64_t dec_bits = 0ULL;
    if (d != 0.0)
    {
        assert(sizeof(double) == sizeof(uint64_t));
        uint64_t bits = *reinterpret_cast<uint64_t*>(&d);
        uint64_t fraction = bits & 0x000fffffffffffffULL;
        int exp = (int)((bits >> 52) & 0x7ff) - 1023;
        bool sign = (bool)(bits & 0x8000000000000000ULL);
        // convert the individual values for the new format
        fraction <<= 3;
        exp += 1 + 128;
        if (exp > 255)
            throw std::overflow_error("overflow");
        if (exp < 0 || (exp == 0 && fraction != 0))
            throw std::underflow_error("underflow");
        dec_bits = (uint64_t)sign << 63 | (uint64_t)exp << 55 | fraction;
    }
    std::vector<unsigned char> result;
    for (int i = 0; i < 64; i += 8)
        result.push_back((unsigned char)((dec_bits >> i) & 0xff));
    return result;
}
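A quick sanity check one might run (a sketch, assuming IEEE-754 doubles): 1.0 is 0.5 * 2^1, so the DEC exponent field should be 128 + 1 = 129 and the fraction field zero:

std::vector<unsigned char> v = ToDEC64Float(1.0);
// expected logical bit pattern: sign 0, exponent 10000001, fraction all zero,
// i.e. v (least significant byte first) == {0, 0, 0, 0, 0, 0, 0x80, 0x40}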
double static const DECBytesToDouble(uint64_t value)
{
    //DEC Byte Conversion Constants
    static const float MANTISSA_CONSTANT = 0.5;
    static const int32_t EXPONENT_BIAS = 128;
    uint8_t * byte_array = (uint8_t*)&value;
    uint8_t first = byte_array[0];
    uint8_t second = byte_array[1];
    uint8_t third = byte_array[2];
    uint8_t fourth = byte_array[3];
    uint8_t fifth = byte_array[4];
    uint8_t sixth = byte_array[5];
    uint8_t seventh = byte_array[6];
    uint8_t eighth = byte_array[7];
    // |second |first|fourth|third|sixth|fifth|eighth|seventh|
    // |s|exponent|mantissa                                  |
    bool sign = second & 0x80;
    std::cout<<"(DECBytesToDouble) Sign: "<<sign<<std::endl;
    int32_t exponent = ((second & 0x7F) << 1) + ((first >> 7) & 0x1);
    std::cout<<"(DECBytesToDouble) Exponent: "<<exponent<<std::endl;
    int64_t mantissa = ((int64_t)(first & 0x7F) << 48) + ((int64_t)fourth << 40)
        + ((int64_t)third << 32) + ((int64_t)sixth << 24) + ((int64_t)fifth << 16)
        + ((int64_t)eighth << 8) + (int64_t)seventh;
    std::cout<<"(DECBytesToDouble) Fraction: "<<mantissa<<std::endl;
    double fraction = MANTISSA_CONSTANT;
    for (int32_t i = 0; i < 55; i++)
    {
        fraction += ((mantissa >> i) & 0x1) * pow(2, i - 56);
    }//for
    return pow(-1, sign) * fraction * pow(2, exponent - EXPONENT_BIAS);
}//DECBytesToDouble

uint64_t static const DoubleToDECBytes(double value)
{
    static const int32_t EXPONENT_BIAS = 128;
    uint64_t dec_bits = 0ULL;
    if (value != 0.0)
    {
        uint64_t bits = *reinterpret_cast<uint64_t*>(&value);
        uint64_t fraction = bits & 0x000fffffffffffffULL;
        int exp = (int)((bits >> 52) & 0x7ff) - 1023;
        bool sign = false;
        if(value < 0)
        {
            sign = true;
        }//if
        std::cout<<"(DoubleToDECBytes) Sign: "<<sign<<std::endl;
        // convert the individual values for the new format
        fraction <<= 3;
        exp += EXPONENT_BIAS + 1;
        std::cout<<"(DoubleToDECBytes) Exponent: "<<exp<<std::endl;
        std::cout<<"(DoubleToDECBytes) Fraction: "<<fraction<<std::endl;
        if (exp > 255)
            throw std::overflow_error("overflow");
        if (exp < 0 || (exp == 0 && fraction != 0))
            throw std::underflow_error("underflow");
        dec_bits = (uint64_t)(sign << 63) | (uint64_t)(exp << 55) | fraction;
        //|second |first|fourth|third|sixth|fifth|eighth|seventh|
        uint8_t * byte_array = (uint8_t*)&dec_bits;
        uint8_t first = byte_array[0];
        uint8_t second = byte_array[1];
        uint8_t third = byte_array[2];
        uint8_t fourth = byte_array[3];
        uint8_t fifth = byte_array[4];
        uint8_t sixth = byte_array[5];
        uint8_t seventh = byte_array[6];
        uint8_t eighth = byte_array[7];
        byte_array[7] = second;
        byte_array[6] = first;
        byte_array[5] = fourth;
        byte_array[4] = third;
        byte_array[3] = sixth;
        byte_array[2] = fifth;
        byte_array[1] = eighth;
        byte_array[0] = seventh;
        std::cout<<"(DoubleToDECBytes) Guess ="<<dec_bits<<std::endl;
    }//if
    /*std::vector<unsigned char> result;
    for (int i = 0; i < 64; i += 8)
    {
        result.push_back((unsigned char)((dec_bits >> i) & 0xff));
    }//for
    uint64_t final_result = 0;
    memcpy(&final_result, &result[0], sizeof(uint64_t));
    std::cout<<"Final result: "<<final_result<<std::endl;*/
    return dec_bits;
}//DoubleToDECBytes
Output:
input uint64_t value: 9707381994276473045
(DECBytesToDouble) Sign: 0
(DECBytesToDouble) Exponent: 145
(DECBytesToDouble) Fraction: 24184718387676855
output double value: 109527.7465
(DoubleToDECBytes) Sign: 0
(DoubleToDECBytes) Exponent: 145
(DoubleToDECBytes) Fraction: 24184718387676848
(DoubleToDECBytes) Guess =9705411669439479893
Converted double, uint64_t: 9705411669439479893
uint64_t difference: 1970324836993152
(DECBytesToDouble) Sign: 0
(DECBytesToDouble) Exponent: 0
(DECBytesToDouble) Fraction: 24184718387676848
output double value: 0.0000
I came to find that integrating the libvaxdata C library into my C++ solution was the best way to go. In my use case all that was required was some byte flipping; the routines work flawlessly.
I recommend the libvaxdata library when dealing with conversion to/from IEEE/DEC types.
http://pubs.usgs.gov/of/2005/1424/
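For reference, the byte flipping was presumably along the lines of the reordering shown in the comments above (|second|first|fourth|third|sixth|fifth|eighth|seventh|), i.e. swapping the two bytes inside each 16-bit word; a minimal sketch of that swap:

uint64_t swapBytesInWords(uint64_t v)
{
    // swap the two bytes of each 16-bit word
    return ((v & 0x00FF00FF00FF00FFULL) << 8) |
           ((v & 0xFF00FF00FF00FF00ULL) >> 8);
}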
I have two for loops that I want to combine into a single function. The problem is that they differ in only one instruction:
for (int i = 1; i <= fin_cabecera - 1; i++){
    buffer[i] &= 0xfe;
    if (bitsLetraRestantes < 0) {
        bitsLetraRestantes = 7;
        mask = 0x80;
        letra = sms[++indiceLetra]; //*differs here*
    }
    char c = (letra & mask) >> bitsLetraRestantes--;
    mask >>= 1;
    buffer[i] ^= c;
}
And the other:
for (int i = datos_fichero; i <= tamanio_en_bits + datos_fichero; i++){
    buffer[i] &= 0xfe;
    if (bitsLetraRestantes < 0) {
        bitsLetraRestantes = 7;
        mask = 0x80;
        f.read(&letra, 1); //*differs here*
    }
    char c = (letra & mask) >> bitsLetraRestantes--;
    mask >>= 1;
    buffer[i] ^= c;
}
I thought of something like this:
void write_bit_by_bit(unsigned char buffer[], int from, int to, bool type) {
    for (int i = to; i <= from; i++) {
        buffer[i] &= 0xfe;
        if (bitsLetraRestantes < 0) {
            bitsLetraRestantes = 7;
            mask = 0x80;
            type ? (letra = sms[++indiceLetra]) : f.read(&letra, 1);
        }
        char c = (letra & mask) >> bitsLetraRestantes--;
        mask >>= 1;
        buffer[i] ^= c;
    }
}
But I think there has to be a better method.
Context:
I will give more context (I will try to explain it as well as I can within my language limitations). I have to read one byte at a time because the buffer variable represents an image pixel. sms is a message that has to be hidden within the image, and letra is a single char of that message. In order not to alter the appearance of the image, each bit of each character has to be written into the last bit of a pixel. Let me give you an example.
letra = 'H' // 01001000 in binary
buffer[0] = 255 // white pixel, 11111111
In order to hide the 'H' char, I will need 8 pixels.
The result will be:
buffer[0] // 11111110
buffer[1] // 11111111
buffer[2] // 11111110
buffer[3] // 11111110
buffer[4] // 11111111
buffer[5] // 11111110
buffer[6] // 11111110
buffer[7] // 11111110
The 'H' is hidden in the last bits of the image. I hope I explained it well.
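A minimal sketch of that idea in isolation (independent of the mask/counter bookkeeping in the loops above):

unsigned char pixels[8] = {255, 255, 255, 255, 255, 255, 255, 255};
char letra = 'H'; // 01001000
for (int bit = 0; bit < 8; ++bit) {
    unsigned char b = (letra >> (7 - bit)) & 1;  // take the bits MSB-first
    pixels[bit] = (pixels[bit] & 0xFE) | b;      // write into the pixel's last bit
}
// pixels is now 11111110, 11111111, 11111110, 11111110, ... as above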
[Solution]
Thanks to @anatolyg I've rewritten the code and it now works just as I wanted. Here is how it looks:
void write_bit_by_bit(unsigned char buffer[], ifstream& f, int from, int to, char sms[], bool type){
    unsigned short int indiceLetra = 0;
    short int bitsLetraRestantes = 7;
    unsigned char mask = 0x80; // start with the most significant bit (10000000)
    char* file_buffer = nullptr;
    if(type){ // write file data
        int number_of_bytes_to_read = get_file_size(f);
        file_buffer = new char[number_of_bytes_to_read];
        f.read(file_buffer, number_of_bytes_to_read);
    }
    const char* place_to_get_stuff_from = type ? file_buffer : sms;
    char letra = place_to_get_stuff_from[0];
    for (int i = from; i <= to; i++) {
        buffer[i] &= 0xfe; // clear the last bit with the mask 11111110
        // TODO: do this with two for loops
        if (bitsLetraRestantes < 0) {
            bitsLetraRestantes = 7;
            mask = 0x80;
            letra = place_to_get_stuff_from[++indiceLetra]; // letra = sms[++indiceLetra];
        }
        char c = (letra & mask) >> bitsLetraRestantes--;
        mask >>= 1;
        buffer[i] ^= c; // store the character's bit in the last bit of the pixel
    }
}
int ocultar(unsigned char buffer[], int tamImage, char sms[], int tamSms){
    ifstream f(sms);
    if (f) {
        strcpy(sms, basename(sms));
        buffer[0] = 0xff;
        int fin_cabecera = strlen(sms)*8 + 1;
        buffer[fin_cabecera] = 0xff;
        write_bit_by_bit(buffer, f, 1, fin_cabecera - 1, sms, WRITE_FILE_NAME);
        int tamanio_en_bits = get_file_size(f) * 8;
        int datos_fichero = fin_cabecera + 1;
        write_bit_by_bit(buffer, f, datos_fichero, tamanio_en_bits + datos_fichero, sms, WRITE_FILE_DATA);
        unsigned char fin_contenido = 0xff;
        short int bitsLetraRestantes = 7;
        unsigned char mask = 0x80;
        for (int i = tamanio_en_bits + datos_fichero + 1;
             i < tamanio_en_bits + datos_fichero + 1 + 8; i++) {
            buffer[i] &= 0xfe;
            char c = (fin_contenido & mask) >> bitsLetraRestantes--;
            mask >>= 1;
            buffer[i] ^= c;
        }
    }
    return 0;
}
Since you are talking about optimization here, consider performing the read outside the loop. This will be a major optimization (reading 10 bytes at once must be quicker than reading 1 byte 10 times). This will require an additional buffer for (the file?) f.
char f_buffer[ENOUGH_SPACE]; // declared before the if, so it stays in scope below
if (!type)
{
    number = calc_number_of_bytes_to_read();
    f.read(f_buffer, number);
}
for (...) {
    // your code
}
After you have done this, your original question is easy to answer:
const char* place_to_get_stuff_from = type ? sms : f_buffer;
for (...) {
    ...
    letra = place_to_get_stuff_from[++indiceLetra];
    ...
}