Single-Byte Error Correction

A 200-byte message has one random byte corrupted.
What's the most efficient way to fix the corrupted byte?
A Hamming(255,247) code has 8 bytes of overhead, but is simple to implement.
Reed-Solomon error correction has 2 bytes of overhead, but is complex to implement.
Is there a simpler method that I'm overlooking?

I found a paper describing a method that's perfect for this case: two bytes of overhead, simple to implement. Here's the code:
// Single-byte error correction for messages <255 bytes long
// using two check bytes. Based on "CA-based byte error-correcting code"
// by Chowdhury et al.
//
// rmmh 2013

// One step of a Galois LFSR; the 0x8E feedback mask gives it full period,
// cycling through all 255 nonzero states, so f^255 is the identity.
uint8_t lfsr(uint8_t x) {
    return (x >> 1) ^ (-(x & 1) & 0x8E);
}

void eccComputeChecks(uint8_t *data, int data_len, uint8_t *out_c0, uint8_t *out_c1) {
    uint8_t c0 = 0; // parity: m_0 ^ m_1 ^ ... ^ m_(n-1)
    uint8_t c1 = 0; // lfsr: f^n(m_0) ^ f^(n-1)(m_1) ^ ... ^ f^1(m_(n-1))
    for (int i = 0; i < data_len; ++i) {
        c0 ^= data[i];
        c1 = lfsr(c1 ^ data[i]);
    }
    *out_c0 = c0;
    *out_c1 = c1;
}

void eccEncode(uint8_t *data, int data_len, uint8_t check[2]) {
    eccComputeChecks(data, data_len, &check[0], &check[1]);
}

bool eccDecode(uint8_t *data, int data_len, uint8_t check[2]) {
    uint8_t s0, s1;
    eccComputeChecks(data, data_len, &s0, &s1);
    s0 ^= check[0];
    s1 ^= check[1];
    if (s0 && s1) {
        int error_index = data_len - 255;
        while (s1 != s0) { // find i s.t. s1 = lfsr^i(s0)
            s1 = lfsr(s1);
            error_index++;
        }
        if (error_index < 0 || error_index >= data_len) {
            // multi-byte error?
            return false;
        }
        data[error_index] ^= s0;
    } else if (s0 || s1) {
        // only one check byte disagrees: the error is in the check
        // bytes themselves, so the data is intact
    }
    return true;
}
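For illustration, here is a minimal round-trip test of the functions above (my own sketch; the payload bytes and the corrupted index are arbitrary):

#include <cstdint>
#include <cstdio>

// (lfsr, eccComputeChecks, eccEncode, eccDecode as above)

int main() {
    uint8_t msg[200];
    for (int i = 0; i < 200; ++i) msg[i] = (uint8_t)(i * 7 + 1); // arbitrary payload
    uint8_t check[2];
    eccEncode(msg, 200, check);
    msg[123] ^= 0x5A;                     // corrupt a single byte
    bool ok = eccDecode(msg, 200, check); // locates and repairs it
    printf("decode ok: %d, byte restored: %d\n", ok, msg[123] == (uint8_t)(123 * 7 + 1));
}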

Using Reed-Solomon to correct a single byte error would not be that complicated. Use a generator polynomial of the form (using ⊕ to mean xor)
g(x) = (x ⊕ 1)(x ⊕ 2) = x^2 + 3x + 2.
Encode the message as usual.
For decode, generate the two syndromes S(0) and S(1) in the normal way.
if (S(0) != 0) {
    error value    = S(0)
    error location = log2(S(1) / S(0))   // discrete log, base 2, in GF(2^8)
}
The error location is counted from right to left (0 == the rightmost byte). With a shortened code, if the location is out of range, an uncorrectable error has been detected.
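To make that concrete, here is a minimal sketch of the whole scheme (my own code, not from the answer), assuming the common Reed-Solomon field GF(2^8) with reduction polynomial 0x11D and generator element 2; the helper names are illustrative:

#include <cstdint>
#include <cstdio>

// GF(2^8) log/antilog tables, reduction polynomial 0x11D, generator 2
// (a field commonly used for Reed-Solomon; names here are illustrative).
static uint8_t gf_exp[512];
static uint8_t gf_log[256];

static void gf_init() {
    int x = 1;
    for (int i = 0; i < 255; ++i) {
        gf_exp[i] = gf_exp[i + 255] = (uint8_t)x;
        gf_log[x] = (uint8_t)i;
        x <<= 1;
        if (x & 0x100) x ^= 0x11D;
    }
}

static uint8_t gf_mul(uint8_t a, uint8_t b) {
    if (!a || !b) return 0;
    return gf_exp[gf_log[a] + gf_log[b]];
}

// Systematic encode: check = remainder of msg(x) * x^2 mod g(x),
// with g(x) = (x ⊕ 1)(x ⊕ 2) = x^2 + 3x + 2 as in the answer.
static void rs_encode(const uint8_t *msg, int len, uint8_t check[2]) {
    uint8_t r0 = 0, r1 = 0; // remainder registers
    for (int i = 0; i < len; ++i) {
        uint8_t f = msg[i] ^ r0;
        r0 = r1 ^ gf_mul(f, 3);
        r1 = gf_mul(f, 2);
    }
    check[0] = r0;
    check[1] = r1;
}

// cw = message bytes followed by the two check bytes, n bytes total.
// Returns the corrected position from the right, -1 if clean, -2 if uncorrectable.
static int rs_correct(uint8_t *cw, int n) {
    uint8_t s0 = 0, s1 = 0;
    for (int i = 0; i < n; ++i) {
        s0 ^= cw[i];                // S(0) = codeword evaluated at 1
        s1 = gf_mul(s1, 2) ^ cw[i]; // S(1) = codeword evaluated at 2 (Horner)
    }
    if (!s0 && !s1) return -1;      // no error
    if (!s0 || !s1) return -2;      // inconsistent syndromes
    int loc = (gf_log[s1] - gf_log[s0] + 255) % 255; // log2(S(1)/S(0))
    if (loc >= n) return -2;        // shortened code: location out of range
    cw[n - 1 - loc] ^= s0;          // error value is S(0)
    return loc;
}

int main() {
    gf_init();
    uint8_t cw[202]; // 200 data bytes + 2 check bytes
    for (int i = 0; i < 200; ++i) cw[i] = (uint8_t)i;
    rs_encode(cw, 200, &cw[200]);
    cw[57] ^= 0xA5; // inject a single-byte error
    int loc = rs_correct(cw, 202);
    printf("corrected at position %d from the right; byte ok: %d\n", loc, cw[57] == 57);
}

For the 200-byte message in the question this is 2 bytes of overhead, and the decoder is one pass over the codeword plus a table lookup.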

Related

Base64 image file encoding with C++

I am writing some simple code to encode files to base64. I have a short C++ program that reads a file into a vector and converts it to unsigned char*. I do this so I can properly use the encoding function I got.
The problem: it works with text files (of different sizes), but it won't work with image files, and I can't figure out why.
For a simple text.txt containing the text abcd, the output of both my code and a bash $( base64 text.txt ) is the same.
On the other hand, when I input an image, the output is something like iVBORwOKGgoAAAAAAA......AAA== (the first few bytes are correct), or sometimes it ends with a corrupted size vs prev_size Aborted (core dumped).
The code:
static std::vector<char> readBytes(char const* filename)
{
    std::ifstream ifs(filename, std::ios::binary|std::ios::ate);
    std::ifstream::pos_type pos = ifs.tellg();
    std::vector<char> result(pos);
    ifs.seekg(0, std::ios::beg);
    ifs.read(&result[0], pos);
    return result;
}

static char Base64Digits[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

int ToBase64Simple( const BYTE* pSrc, int nLenSrc, char* pDst, int nLenDst )
{
    int nLenOut= 0;
    while ( nLenSrc > 0 ) {
        if (nLenOut+4 > nLenDst) {
            cout << "error\n";
            return(0); // error
        }
        // read three source bytes (24 bits)
        BYTE s1= pSrc[0];  // (but avoid reading past the end)
        BYTE s2= 0; if (nLenSrc>1) s2=pSrc[1]; //------ corrected, thanks to jprichey
        BYTE s3= 0; if (nLenSrc>2) s3=pSrc[2];
        DWORD n;
        n = s1;    // xxx1
        n <<= 8;   // xx1x
        n |= s2;   // xx12
        n <<= 8;   // x12x
        n |= s3;   // x123
        //-------------- get four 6-bit values for lookups
        BYTE m4= n & 0x3f;  n >>= 6;
        BYTE m3= n & 0x3f;  n >>= 6;
        BYTE m2= n & 0x3f;  n >>= 6;
        BYTE m1= n & 0x3f;
        //------------------ lookup the right digits for output
        BYTE b1 = Base64Digits[m1];
        BYTE b2 = Base64Digits[m2];
        BYTE b3 = Base64Digits[m3];
        BYTE b4 = Base64Digits[m4];
        //--------- end of input handling
        *pDst++ = b1;
        *pDst++ = b2;
        if ( nLenSrc >= 3 ) {  // 24 src bits left to encode, output xxxx
            *pDst++ = b3;
            *pDst++ = b4;
        }
        if ( nLenSrc == 2 ) {  // 16 src bits left to encode, output xxx=
            *pDst++ = b3;
            *pDst++ = '=';
        }
        if ( nLenSrc == 1 ) {  // 8 src bits left to encode, output xx==
            *pDst++ = '=';
            *pDst++ = '=';
        }
        pSrc += 3;
        nLenSrc -= 3;
        nLenOut += 4;
    }
    // Could optionally append a NULL byte like so:
    *pDst++= 0; nLenOut++;
    return( nLenOut );
}

int main(int argc, char* argv[])
{
    std::vector<char> mymsg;
    mymsg = readBytes(argv[1]);
    char* arr = &mymsg[0];
    int len = mymsg.size();
    int lendst = ((len+2)/3)*4;
    unsigned char* uarr = (unsigned char *) malloc(len*sizeof(unsigned char));
    char* dst = (char *) malloc(lendst*sizeof(char));
    mymsg.clear(); //free()
    // convert to unsigned char
    strncpy((char*)uarr, arr, len);
    int lenOut = ToBase64Simple(uarr, len, dst, lendst);
    free(uarr);
    int cont = 0;
    while (cont < lenOut) //(dst[cont] != 0)
        cout << dst[cont++];
    cout << "\n";
}
Any insight is welcomed.
I see two problems.
First, you are clearing your mymsg vector before you're done using it. This leaves the arr pointer dangling (pointing at memory that is no longer allocated). When you access arr to get the data out, you end up with Undefined Behavior.
Then you use strncpy to copy (potentially) binary data. This copy will stop when it reaches the first nul (0) byte within the file, so not all of your data will be copied. You should use memcpy instead.
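A minimal corrected main() along those lines (a sketch; the only changes are keeping the vector alive, using memcpy, and one extra output byte because ToBase64Simple appends a NUL):

int main(int argc, char* argv[])
{
    std::vector<char> mymsg = readBytes(argv[1]);
    int len = mymsg.size();
    int lendst = ((len+2)/3)*4;
    unsigned char* uarr = (unsigned char *) malloc(len);
    char* dst = (char *) malloc(lendst + 1); // +1 for the NUL the encoder appends
    memcpy(uarr, &mymsg[0], len); // memcpy copies binary data; strncpy stops at the first 0 byte
    // mymsg is NOT cleared until we are done with its contents
    int lenOut = ToBase64Simple(uarr, len, dst, lendst);
    free(uarr);
    if (lenOut > 0)
        cout << dst << "\n"; // dst is NUL-terminated by the encoder
    free(dst);
}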

C++ Base64 Unicode - null bytes

I am trying to base64 encode a unicode string. I am running into problems: after the encoding, the output is my string base64'd, but there are null bytes at random places throughout it, and I don't know why or how to get them out.
Here is my Base64Encode function:
static char Base64Digits[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

int Base64Encode(const BYTE* pSrc, int nLenSrc, wchar_t* pDst, int nLenDst)
{
    int nLenOut= 0;
    while ( nLenSrc > 0 ) {
        if (nLenOut+4 > nLenDst) return(0); // error
        // read three source bytes (24 bits)
        BYTE s1= pSrc[0];  // (but avoid reading past the end)
        BYTE s2= 0; if (nLenSrc>1) s2=pSrc[1]; //------ corrected, thanks to jprichey
        BYTE s3= 0; if (nLenSrc>2) s3=pSrc[2];
        DWORD n;
        n = s1;    // xxx1
        n <<= 8;   // xx1x
        n |= s2;   // xx12
        n <<= 8;   // x12x
        n |= s3;   // x123
        //-------------- get four 6-bit values for lookups
        BYTE m4= n & 0x3f;  n >>= 6;
        BYTE m3= n & 0x3f;  n >>= 6;
        BYTE m2= n & 0x3f;  n >>= 6;
        BYTE m1= n & 0x3f;
        //------------------ lookup the right digits for output
        BYTE b1 = Base64Digits[m1];
        BYTE b2 = Base64Digits[m2];
        BYTE b3 = Base64Digits[m3];
        BYTE b4 = Base64Digits[m4];
        //--------- end of input handling
        *pDst++ = b1;
        *pDst++ = b2;
        if ( nLenSrc >= 3 ) {  // 24 src bits left to encode, output xxxx
            *pDst++ = b3;
            *pDst++ = b4;
        }
        if ( nLenSrc == 2 ) {  // 16 src bits left to encode, output xxx=
            *pDst++ = b3;
            *pDst++ = '=';
        }
        if ( nLenSrc == 1 ) {  // 8 src bits left to encode, output xx==
            *pDst++ = '=';
            *pDst++ = '=';
        }
        pSrc += 3;
        nLenSrc -= 3;
        nLenOut += 4;
    }
    // Could optionally append a NULL byte like so:
    // *pDst++= 0; nLenOut++;
    return( nLenOut );
}
Not to fool anyone, but I copied the function from here
Here is how I call the function:
wchar_t base64[256];
Base64Encode((const unsigned char *)UserLoginHash, lstrlenW(UserLoginHash) * 2, base64, 256);
So why are there random null bytes or "whitespaces" in the generated hash? What should be changed so that I can get rid of them?
Try something more like this. Portions copied from my own base64 encoder:
static const wchar_t *Base64Digits = L"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

int Base64Encode(const BYTE* pSrc, int nLenSrc, wchar_t* pDst, int nLenDst)
{
    int nLenOut = 0;
    while (nLenSrc > 0) {
        if (nLenDst < 4) return(0); // error
        // read up to three source bytes (24 bits)
        int len = 0;
        BYTE s1 = pSrc[len++];
        BYTE s2 = (nLenSrc > 1) ? pSrc[len++] : 0;
        BYTE s3 = (nLenSrc > 2) ? pSrc[len++] : 0;
        pSrc += len;
        nLenSrc -= len;
        //------------------ lookup the right digits for output
        pDst[0] = Base64Digits[(s1 >> 2) & 0x3F];
        pDst[1] = Base64Digits[(((s1 & 0x3) << 4) | ((s2 >> 4) & 0xF)) & 0x3F];
        pDst[2] = Base64Digits[(((s2 & 0xF) << 2) | ((s3 >> 6) & 0x3)) & 0x3F];
        pDst[3] = Base64Digits[s3 & 0x3F];
        //--------- end of input handling
        if (len < 3) { // less than 24 src bits encoded, pad with '='
            pDst[3] = L'=';
            if (len == 1)
                pDst[2] = L'=';
        }
        nLenOut += 4;
        pDst += 4;
        nLenDst -= 4;
    }
    if (nLenDst > 0) *pDst = 0;
    return (nLenOut);
}
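For completeness, a sketch of calling it (UserLoginHash is the question's variable; the buffer size is arbitrary). Note that if the hash can contain embedded zero bytes, its byte length must be tracked separately, since lstrlenW stops at the first one:

wchar_t base64[256];
// source length is in bytes: character count * sizeof(wchar_t)
int n = Base64Encode((const BYTE *)UserLoginHash,
                     lstrlenW(UserLoginHash) * sizeof(wchar_t),
                     base64, 256);
if (n > 0)
    wprintf(L"%s\n", base64); // NUL-terminated by the encoder when room remains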
The problem, from what I can see, is that as the encoder works, occasionally it adds a value to a certain character value, for example U+0070 + U+0066 (this is just an example). At some point these values equal the null terminator (\0), or something equivalent to it, so the program stops reading at that point when outputting the string, making it appear shorter than it should be.
I've encountered this problem with my own encoding algorithm before, and the best solution appears to be to add more variability to your algorithm: instead of only adding characters to the string, subtract some, or multiply or XOR some at some point in the algorithm. This should remove (or at least reduce the chance of) null terminators appearing where you don't want them. It may, however, take some trial and error on your part to see what works and what doesn't.

Remove nth bit from buffer, and shift the rest

Given a uint8_t buffer of length x, I am trying to come up with a function or a macro that can remove the nth bit (or bits n to n+i), then left-shift the remaining bits.
example #1:
for input 0b76543210 0b76543210 ... then output should be 0b76543217 0b654321 ...
example #2: if the input is:
uint8_t input[8] = {
0b00110011,
0b00110011,
...
};
the output, without the first bit of each byte, should be
uint8_t output[8] = {
0b00110010,
0b01100100,
...
};
I have tried the following to remove the first bit, but it did not work for the second group of bits.
/* A macro to extract the (a-b) range of bits without shifting */
#define BIT_RANGE(N,x,y) ((N) & ((0xff >> (7 - (y) + (x))) << ((x))))

void removeBit0(uint8_t *n) {
    for (int i = 0; i < 7; i++) {
        n[i] = (BIT_RANGE(n[i], i + 1, 7)) << (i + 1) |
               (BIT_RANGE(n[i + 1], 1, i + 1)) << (7 - i); /* This does not extract the next element's bits */
    }
    n[7] = 0;
}
Update #1
In my case, the input will be a uint64_t number, and then I will use memmove to shift it one place to the left.
Update #2
The solution can be in C/C++, assembly (x86-64), or inline assembly.
This is really 2 subproblems: remove bits from each byte and pack the results. This is the flow of the code below. I wouldn't use a macro for this. Too much going on. Just inline the function if you're worried about performance at that level.
#include <stdio.h>
#include <stdint.h>

// Remove bits n to n+k-1 from x.
unsigned scrunch_1(unsigned x, int n, int k) {
    unsigned hi_bits = ~0u << n;
    return (x & ~hi_bits) | ((x >> k) & hi_bits);
}

// Remove bits n to n+k-1 from each byte in the buffer,
// then pack left. Return number of packed bytes.
size_t scrunch(uint8_t *buf, size_t size, int n, int k) {
    size_t i_src = 0, i_dst = 0;
    unsigned src_bits = 0;  // Scrunched source bit buffer.
    int n_src_bits = 0;     // Initially it's empty.
    for (;;) {
        // Get scrunched bits until the buffer has at least 8.
        while (n_src_bits < 8) {
            if (i_src >= size) {  // Done when source bytes exhausted.
                // If there are left-over bits, add one more byte to output.
                if (n_src_bits > 0) buf[i_dst++] = src_bits << (8 - n_src_bits);
                return i_dst;
            }
            // Pack 'em in.
            src_bits = (src_bits << (8 - k)) | scrunch_1(buf[i_src++], n, k);
            n_src_bits += 8 - k;
        }
        // Write the highest 8 bits of the buffer to the destination byte.
        n_src_bits -= 8;
        buf[i_dst++] = src_bits >> n_src_bits;
    }
}

int main(void) {
    uint8_t x[] = { 0xaa, 0xaa, 0xaa, 0xaa };
    size_t n = scrunch(x, 4, 2, 3);
    for (size_t i = 0; i < n; i++) {
        printf("%x ", x[i]);
    }
    printf("\n");
    return 0;
}
This writes b5 ad 60, which by my reckoning is correct. A few other test cases work as well.
Oops, I coded it the first time shifting the wrong way; I include that version here in case it's useful to someone.
#include <stdio.h>
#include <stdint.h>

// Remove bits n to n+k-1 from x.
unsigned scrunch_1(unsigned x, int n, int k) {
    unsigned hi_bits = 0xffu << n;
    return (x & ~hi_bits) | ((x >> k) & hi_bits);
}

// Remove bits n to n+k-1 from each byte in the buffer,
// then pack right. Return number of packed bytes.
size_t scrunch(uint8_t *buf, size_t size, int n, int k) {
    size_t i_src = 0, i_dst = 0;
    unsigned src_bits = 0;  // Scrunched source bit buffer.
    int n_src_bits = 0;     // Initially it's empty.
    for (;;) {
        // Get scrunched bits until the buffer has at least 8.
        while (n_src_bits < 8) {
            if (i_src >= size) {  // Done when source bytes exhausted.
                // If there are left-over bits, add one more byte to output.
                if (n_src_bits > 0) buf[i_dst++] = src_bits;
                return i_dst;
            }
            // Pack 'em in.
            src_bits |= scrunch_1(buf[i_src++], n, k) << n_src_bits;
            n_src_bits += 8 - k;
        }
        // Write the lower 8 bits of the buffer to the destination byte.
        buf[i_dst++] = src_bits;
        src_bits >>= 8;
        n_src_bits -= 8;
    }
}

int main(void) {
    uint8_t x[] = { 0xaa, 0xaa, 0xaa, 0xaa };
    size_t n = scrunch(x, 4, 2, 3);
    for (size_t i = 0; i < n; i++) {
        printf("%x ", x[i]);
    }
    printf("\n");
    return 0;
}
This writes d6 5a b. A few other test cases work as well.
Something similar to this should work:
template<typename S> void removeBit(S* buffer, size_t length, size_t index)
{
    const size_t BITS_PER_UNIT = sizeof(S)*8;
    // first we find which data unit contains the desired bit
    const size_t unit = index / BITS_PER_UNIT;
    // and which index the bit has inside the specified unit, counting from the most significant bit
    const size_t relativeIndex = (BITS_PER_UNIT - 1) - index % BITS_PER_UNIT;
    // then we unset that bit (the cast keeps the shift in the width of S, not int,
    // so this also works for uint64_t units)
    buffer[unit] &= ~((S)1 << relativeIndex);
    // now we have to shift what's on the right by 1 position
    // we create a mask such that if 0b00100000 is the bit removed we use 0b00011111 as mask to shift the rest
    const S partialShiftMask = ((S)1 << relativeIndex) - 1;
    // now we keep all bits left of the removed one and shift all the others left
    buffer[unit] = (buffer[unit] & ~partialShiftMask) | ((buffer[unit] & partialShiftMask) << 1);
    for (size_t i = unit+1; i < length; ++i)
    {
        // we set the rightmost bit of the previous unit according to the first bit of the current unit
        buffer[i-1] |= buffer[i] >> (BITS_PER_UNIT-1);
        // then we shift the current unit by one
        buffer[i] <<= 1;
    }
}
I just tested it on some basic cases, so maybe something is not exactly correct, but this should move you onto the right track.
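A quick way to try it (my own test, not part of the answer): removing the least significant bit of the first byte from the question's example input, which is index 7 here because this function counts bit 0 from the MSB:

#include <cstdint>
#include <cstdio>

// (removeBit as above)

int main()
{
    uint8_t buffer[2] = { 0b00110011, 0b00110011 };
    removeBit(buffer, 2, 7); // remove bit 7 (the LSB of buffer[0])
    printf("%02x %02x\n", buffer[0], buffer[1]); // prints 32 66, i.e. 0b00110010 0b01100110
}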

Base 64 Encoding Losing data

This is my fourth attempt at base64 encoding. My first tries worked, but they weren't standard. They were also extremely slow! I used vectors with push_back and erase a lot.
So I decided to rewrite it, and this version is much, much faster! Except that it loses data. -__-
I need as much speed as I can possibly get, because I'm compressing a pixel buffer and base64 encoding the compressed string. I'm using ZLib. The images are 1366 x 768, so yeah.
I do not want to copy any code I find online because... well, I like to write things myself, and I don't like worrying about copyright or having to put a ton of credits from different sources all over my code.
Anyway, my code is as follows below. It's very short and simple.
const static std::string Base64Chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

inline bool IsBase64(std::uint8_t C)
{
    return (isalnum(C) || (C == '+') || (C == '/'));
}

std::string Copy(std::string Str, int FirstChar, int Count)
{
    if (FirstChar <= 0)
        FirstChar = 0;
    else
        FirstChar -= 1;
    return Str.substr(FirstChar, Count);
}

std::string DecToBinStr(int Num, int Padding)
{
    int Bin = 0, Pos = 1;
    std::stringstream SS;
    while (Num > 0)
    {
        Bin += (Num % 2) * Pos;
        Num /= 2;
        Pos *= 10;
    }
    SS.fill('0');
    SS.width(Padding);
    SS << Bin;
    return SS.str();
}

int DecToBinStr(std::string DecNumber)
{
    int Bin = 0, Pos = 1;
    int Dec = strtol(DecNumber.c_str(), NULL, 10);
    while (Dec > 0)
    {
        Bin += (Dec % 2) * Pos;
        Dec /= 2;
        Pos *= 10;
    }
    return Bin;
}

int BinToDecStr(std::string BinNumber)
{
    int Dec = 0;
    int Bin = strtol(BinNumber.c_str(), NULL, 10);
    for (int I = 0; Bin > 0; ++I)
    {
        if (Bin % 10 == 1)
        {
            Dec += (1 << I);
        }
        Bin /= 10;
    }
    return Dec;
}

std::string EncodeBase64(std::string Data)
{
    std::string Binary = std::string();
    std::string Result = std::string();
    for (std::size_t I = 0; I < Data.size(); ++I)
    {
        Binary += DecToBinStr(Data[I], 8);
    }
    for (std::size_t I = 0; I < Binary.size(); I += 6)
    {
        Result += Base64Chars[BinToDecStr(Copy(Binary, I, 6))];
        if (I == 0) ++I;
    }
    int PaddingAmount = ((-Result.size() * 3) & 3);
    for (int I = 0; I < PaddingAmount; ++I)
        Result += '=';
    return Result;
}

std::string DecodeBase64(std::string Data)
{
    std::string Binary = std::string();
    std::string Result = std::string();
    for (std::size_t I = Data.size(); I > 0; --I)
    {
        if (Data[I - 1] != '=')
        {
            std::string Characters = Copy(Data, 0, I);
            for (std::size_t J = 0; J < Characters.size(); ++J)
                Binary += DecToBinStr(Base64Chars.find(Characters[J]), 6);
            break;
        }
    }
    for (std::size_t I = 0; I < Binary.size(); I += 8)
    {
        Result += (char)BinToDecStr(Copy(Binary, I, 8));
        if (I == 0) ++I;
    }
    return Result;
}
I've been using the above like this:
int main()
{
    std::string Data = EncodeBase64("IMG." + ::ToString(677) + "*" + ::ToString(604)); //IMG.677*604
    std::cout << DecodeBase64(Data); //Prints IMG.677*601
}
As you can see in the above, it prints the wrong string. It's fairly close but for some reason, the 4 is turned into a 1!
Now if I do:
int main()
{
    std::string Data = EncodeBase64("IMG." + ::ToString(1366) + "*" + ::ToString(768)); //IMG.1366*768
    std::cout << DecodeBase64(Data); //Prints IMG.1366*768
}
It prints correctly. I'm not sure what is going on at all or where to begin looking.
Just in case anyone is curious and wants to see my other attempts (the slow ones): http://pastebin.com/Xcv03KwE
I'm really hoping someone can shed some light on speeding things up, or at least figure out what's wrong with my code :l
The main encoding issue is that you are not accounting for data that is not a multiple of 6 bits. In this case, the final 4 you have is being converted into 0100 instead of 010000 because there are no more bits to read. You are supposed to pad with 0s.
After changing your Copy like this, the final encoded character is Q, instead of the original E.
std::string data = Str.substr(FirstChar, Count);
while(data.size() < Count) data += '0';
return data;
Also, it appears that your logic for adding padding = is off because it is adding one too many = in this case.
As far as comments on speed, I'd focus primarily on trying to reduce your usage of std::string. The way you are currently converting the data into a string with 0 and 1 is pretty inefficent considering that the source could be read directly with bitwise operators.
I'm not sure whether I could easily come up with a slower method of doing Base-64 conversions.
The code requires 4 headers (on Mac OS X 10.7.5 with G++ 4.7.1) and the compiler option -std=c++11 to make the #include <cstdint> acceptable:
#include <string>
#include <iostream>
#include <sstream>
#include <cstdint>
It also requires a function ToString() that was not defined; I created:
std::string ToString(int value)
{
    std::stringstream ss;
    ss << value;
    return ss.str();
}
The code in your main() — which is what uses the ToString() function — is a little odd: why do you need to build a string from pieces instead of simply using "IMG.677*604"?
Also, it is worth printing out the intermediate result:
int main()
{
    std::string Data = EncodeBase64("IMG." + ::ToString(677) + "*" + ::ToString(604));
    std::cout << Data << std::endl;
    std::cout << DecodeBase64(Data) << std::endl; //Prints IMG.677*601
}
This yields:
SU1HLjY3Nyo2MDE===
IMG.677*601
The output string (SU1HLjY3Nyo2MDE===) is 18 bytes long; that has to be wrong as a valid Base-64 encoded string has to be a multiple of 4 bytes long (as three 8-bit bytes are encoded into four bytes each containing 6 bits of the original data). This immediately tells us there are problems. You should only get zero, one or two pad (=) characters; never three. This also confirms that there are problems.
Removing two of the pad characters leaves a valid Base-64 string. When I use my own home-brew Base-64 encoding and decoding functions to decode your (truncated) output, it gives me:
Base64:
0x0000: SU1HLjY3Nyo2MDE=
Binary:
0x0000: 49 4D 47 2E 36 37 37 2A 36 30 31 00 IMG.677*601.
Thus it appears you have encoded the null that terminates the string. When I encode IMG.677*604, the output I get is:
Binary:
0x0000: 49 4D 47 2E 36 37 37 2A 36 30 34 IMG.677*604
Base64: SU1HLjY3Nyo2MDQ=
You say you want to speed up your code. Quite apart from fixing it so that it encodes correctly (I've not really studied the decoding), you will want to avoid all the string manipulation you do. It should be a bit manipulation exercise, not a string manipulation exercise.
I have 3 small encoding routines in my code, to encode triplets, doublets and singlets:
/* Encode 3 bytes of data into 4 */
static void encode_triplet(const char *triplet, char *quad)
{
    quad[0] = base_64_map[(triplet[0] >> 2) & 0x3F];
    quad[1] = base_64_map[((triplet[0] & 0x03) << 4) | ((triplet[1] >> 4) & 0x0F)];
    quad[2] = base_64_map[((triplet[1] & 0x0F) << 2) | ((triplet[2] >> 6) & 0x03)];
    quad[3] = base_64_map[triplet[2] & 0x3F];
}

/* Encode 2 bytes of data into 4 */
static void encode_doublet(const char *doublet, char *quad, char pad)
{
    quad[0] = base_64_map[(doublet[0] >> 2) & 0x3F];
    quad[1] = base_64_map[((doublet[0] & 0x03) << 4) | ((doublet[1] >> 4) & 0x0F)];
    quad[2] = base_64_map[((doublet[1] & 0x0F) << 2)];
    quad[3] = pad;
}

/* Encode 1 byte of data into 4 */
static void encode_singlet(const char *singlet, char *quad, char pad)
{
    quad[0] = base_64_map[(singlet[0] >> 2) & 0x3F];
    quad[1] = base_64_map[((singlet[0] & 0x03) << 4)];
    quad[2] = pad;
    quad[3] = pad;
}
This is written as C code rather than using native C++ idioms, but the code shown should compile with C++ (unlike the C99 initializers elsewhere in the source). The base_64_map[] array corresponds to your Base64Chars string. The pad character passed in is normally '=', but can be '\0' since the system I work with has eccentric ideas about not needing padding (pre-dating my involvement in the code, and it uses a non-standard alphabet to boot) and the code handles both the non-standard and the RFC 3548 standard.
The driving code is:
/* Encode input data as Base-64 string. Output length returned, or negative error */
static int base64_encode_internal(const char *data, size_t datalen, char *buffer, size_t buflen, char pad)
{
    size_t outlen = BASE64_ENCLENGTH(datalen);
    const char *bin_data = (const void *)data;
    char *b64_data = (void *)buffer;
    if (outlen > buflen)
        return(B64_ERR_OUTPUT_BUFFER_TOO_SMALL);
    while (datalen >= 3)
    {
        encode_triplet(bin_data, b64_data);
        bin_data += 3;
        b64_data += 4;
        datalen -= 3;
    }
    b64_data[0] = '\0';
    if (datalen == 2)
        encode_doublet(bin_data, b64_data, pad);
    else if (datalen == 1)
        encode_singlet(bin_data, b64_data, pad);
    b64_data[4] = '\0';
    return((b64_data - buffer) + strlen(b64_data));
}

/* Encode input data as Base-64 string. Output length returned, or negative error */
int base64_encode(const char *data, size_t datalen, char *buffer, size_t buflen)
{
    return(base64_encode_internal(data, datalen, buffer, buflen, base64_pad));
}
The base64_pad constant is the '='; there's also a base64_encode_nopad() function that supplies '\0' instead. The errors are somewhat arbitrary but relevant to the code.
The main point to take away from this is that you should be doing bit manipulation and building up a string that is an exact multiple of 4 bytes for a given input.
std::string EncodeBase64(std::string Data)
{
    std::string Binary = std::string();
    std::string Result = std::string();
    for (std::size_t I = 0; I < Data.size(); ++I)
    {
        Binary += DecToBinStr(Data[I], 8);
    }
    if (Binary.size() % 6)
    {
        Binary.resize(Binary.size() + 6 - Binary.size() % 6, '0');
    }
    for (std::size_t I = 0; I < Binary.size(); I += 6)
    {
        Result += Base64Chars[BinToDecStr(Copy(Binary, I, 6))];
        if (I == 0) ++I;
    }
    if (Result.size() % 4)
    {
        Result.resize(Result.size() + 4 - Result.size() % 4, '=');
    }
    return Result;
}
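With that fix, the earlier test produces a valid 16-character encoding matching the reference output shown above; a quick check (a sketch using the same ToString helper):

int main()
{
    std::string Data = EncodeBase64("IMG." + ::ToString(677) + "*" + ::ToString(604));
    std::cout << Data << std::endl;               // SU1HLjY3Nyo2MDQ= -- a multiple of 4 chars, one pad
    std::cout << DecodeBase64(Data) << std::endl; // IMG.677*604 (the decoder may still append an
                                                  // invisible NUL decoded from the pad bits)
}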

Does anyone have an easy solution to parsing Exp-Golomb codes using C++?

I'm trying to decode the SDP sprop-parameter-sets values for an H.264 video stream, and I've found that accessing some of the values will involve parsing Exp-Golomb encoded data. My method has the base64-decoded sprop-parameter-sets data in a byte array, which I'm now bit-walking, but I've come up to the first piece of Exp-Golomb encoded data and am looking for a suitable code extract to parse these values.
Exp-Golomb codes of what order?
If you need to parse an H.264 bit stream (I mean the transport layer), you can write simple functions to access specified bits in the endless bit stream. Bits are indexed from left to right.
inline u_dword get_bit(const u_byte * const base, u_dword offset)
{
    return ((*(base + (offset >> 0x3))) >> (0x7 - (offset & 0x7))) & 0x1;
}
This function implements decoding of exp-Golomb codes of order zero (the kind used in H.264).
u_dword DecodeUGolomb(const u_byte * const base, u_dword * const offset)
{
    u_dword zeros = 0;
    // count the leading zero bits. Will be optimized.
    while (0 == get_bit(base, (*offset)++)) zeros++;
    // insert the first 1 bit
    u_dword info = 1 << zeros;
    for (s_dword i = zeros - 1; i >= 0; i--)
    {
        info |= get_bit(base, (*offset)++) << i;
    }
    return (info - 1);
}
u_dword means an unsigned 4-byte integer, s_dword its signed counterpart, and u_byte an unsigned 1-byte integer.
Note that the first byte of each NAL unit is a specified structure with a forbidden bit, NAL reference, and NAL type.
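For example, a sketch of pulling that header apart with the get_bit helper above (field widths per the H.264 spec: 1 + 2 + 5 bits; my own code, using the same typedefs):

// Sketch: parse the one-byte NAL unit header; 'base' is assumed to point
// at the first byte of the NAL unit.
void ParseNalHeader(const u_byte * const base, u_dword *nal_ref_idc, u_dword *nal_unit_type)
{
    u_dword offset = 0;
    u_dword forbidden_zero_bit = get_bit(base, offset++); // must be 0 in a valid stream
    (void)forbidden_zero_bit;
    *nal_ref_idc = (get_bit(base, offset) << 1) | get_bit(base, offset + 1); // 2 bits
    offset += 2;
    *nal_unit_type = 0;
    for (u_dword i = 0; i < 5; i++)                                          // 5 bits
        *nal_unit_type = (*nal_unit_type << 1) | get_bit(base, offset++);
    // offset is now 8; Exp-Golomb parsing of the payload begins here (after
    // removing emulation-prevention bytes, which this sketch ignores).
}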
The accepted answer is not a correct implementation; it gives wrong output. Here is a correct implementation, following the pseudocode in "Sec 9.1 Parsing process for Exp-Golomb codes" of spec T-REC-H.264-201304:
int32_t getBitByPos(unsigned char *buffer, int32_t pos) {
    // note: bit positions are 1-based here (pos == 0 is rejected below)
    return (buffer[pos/8] >> (8 - pos%8) & 0x01);
}

uint32_t decodeGolomb(unsigned char *byteStream, uint32_t *index) {
    uint32_t leadingZeroBits = -1; // -1 so the loop below ends with the zero count
    uint32_t codeNum = 0;
    uint32_t pos = *index;
    if (byteStream == NULL || pos == 0) {
        printf("Invalid input\n");
        return 0;
    }
    for (int32_t b = 0; !b; leadingZeroBits++)
        b = getBitByPos(byteStream, pos++);
    for (int32_t b = leadingZeroBits; b > 0; b--)
        codeNum = codeNum | (getBitByPos(byteStream, pos++) << (b - 1));
    *index = pos;
    return ((1 << leadingZeroBits) - 1 + codeNum);
}
I wrote a C++ JPEG-LS compression library that uses Golomb codes; I don't know if Exp-Golomb codes are exactly the same. The library is open source and can be found at http://charls.codeplex.com. I use a lookup table to decode Golomb codes <= 8 bits in length. Let me know if you have problems finding your way around.
Revised with a function to get N bits from the stream; this works for parsing H.264 NALs.
inline uint32_t get_bit(const uint8_t * const base, uint32_t offset)
{
    return ((*(base + (offset >> 0x3))) >> (0x7 - (offset & 0x7))) & 0x1;
}

inline uint32_t get_bits(const uint8_t * const base, uint32_t * const offset, uint8_t bits)
{
    uint32_t value = 0;
    for (int i = 0; i < bits; i++)
    {
        value = (value << 1) | (get_bit(base, (*offset)++) ? 1 : 0);
    }
    return value;
}

// This function implements decoding of exp-Golomb codes of order zero (used in H.264).
uint32_t DecodeUGolomb(const uint8_t * const base, uint32_t * const offset)
{
    uint32_t zeros = 0;
    // count the leading zero bits. Will be optimized.
    while (0 == get_bit(base, (*offset)++)) zeros++;
    // insert the first 1 bit
    uint32_t info = 1 << zeros;
    for (int32_t i = zeros - 1; i >= 0; i--)
    {
        info |= get_bit(base, (*offset)++) << i;
    }
    return (info - 1);
}
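A small test of DecodeUGolomb (my own example bytes, not from the answer): the codes for 0, 1, 2, 3 are 1, 010, 011, 00100; packed MSB-first they fill two bytes:

#include <cstdint>
#include <cstdio>

// (get_bit, get_bits, DecodeUGolomb as above)

int main()
{
    const uint8_t stream[2] = { 0xA6, 0x40 }; // 1|010|011|00100 padded with zeros
    uint32_t offset = 0;
    for (int i = 0; i < 4; i++)
        printf("%u ", DecodeUGolomb(stream, &offset)); // prints 0 1 2 3
    printf("\n");
}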