I have the function below, which converts an LPCTSTR to BYTEs, but the input str only supports digits at the moment.
void StrToByte2(LPCTSTR str, BYTE *dest)
{
    UINT count = _ttoi(str);
    BYTE buf[4] = { 0 };
    char string[10] = { 0 };
    sprintf_s(string, 10, "%04d", count);
    for (int i = 0; i < 4; ++i)
    {
        if ((string[i] >= '0') && (string[i] <= '9'))
            buf[i] = string[i] - '0';
    }
    dest[0] = (BYTE)(buf[0] << 4) | buf[1];
    dest[1] = (BYTE)(buf[2] << 4) | buf[3];
}
If I call this function on "1234" (all digits), dest outputs something like 12814,
struct st
{
    byte btID[2];
    int nID;
};
PTR ptr(new st);
StrToByte2(strCode, ptr->btID);
but when I call this function on a hexadecimal string, e.g. "A123", it always outputs 0000.
The function below is used to convert the dest code back to a string:
CString Byte2ToStr(const byte* pbuf)
{
    CString str;
    str.Format(_T("%02X%02X"), pbuf[0], pbuf[1]);
    return str;
}
How can I get "A123" converted to bytes and then back to a string so it displays A123?
Please help!
PTR ptr(new st);
This is a memory leak in C++: new st allocates memory, and nothing ever releases it.
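If PTR is a plain raw-pointer typedef (an assumption; the question doesn't show its definition), the leak can be avoided with a smart pointer. A minimal sketch:

#include <memory>

std::unique_ptr<st> ptr(new st());
StrToByte2(strCode, ptr->btID); // strCode as in the question
// No delete needed: the allocation is released when ptr goes out of scope.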
UINT count = _ttoi(str);
...
sprintf_s(string, 10, "%04d", count);
This converts the string to an integer, then converts the integer back to a string. It doesn't seem to serve a real purpose.
For example, "1234" is converted to 1234 and back to "1234". But "A123" is not a valid decimal number, so it is converted to 0, then formatted as "0000". That's why this method fails. You can just work with the original string.
It seems this function tries to pack two decimal digits into one byte. That works as long as each value is less than 16 (0xF); I don't know what purpose this might have. It can be fixed as follows:
void StrToByte2(const wchar_t* str, BYTE *dest)
{
    int len = wcslen(str);
    if (len != 4)
        return; // handle error
    char buf[4] = { 0 };
    for (int i = 0; i < 4; ++i)
        if (str[i] >= L'0' && str[i] <= L'9')
            buf[i] = (BYTE)(str[i] - L'0');
    dest[0] = (buf[0] << 4) + buf[1];
    dest[1] = (buf[2] << 4) + buf[3];
}
CStringW Byte2_To_Str(BYTE *dest)
{
    CStringW str;
    str.AppendFormat(L"%X", 0xF & (dest[0] >> 4));
    str.AppendFormat(L"%X", 0xF & (dest[0]));
    str.AppendFormat(L"%X", 0xF & (dest[1] >> 4));
    str.AppendFormat(L"%X", 0xF & (dest[1]));
    return str;
}
int main()
{
    BYTE dest[2] = { 0 };
    StrToByte2(L"1234", dest);
    OutputDebugStringW(Byte2_To_Str(dest));
    OutputDebugStringW(L"\n");
    return 0;
}
If the string is hexadecimal, you can use sscanf to convert each pair of characters to a byte.
Basically, "1234" changes to 12 34
"A123" changes to A1 23
bool hexstring_to_bytes(const wchar_t* str, BYTE *dest, int dest_size = 2)
{
    int len = wcslen(str);
    if ((len / 2) > dest_size)
    {
        // error
        return false;
    }
    for (int i = 0; i < len / 2; i++)
    {
        int v;
        if (swscanf_s(str + i * 2, L"%2x", &v) != 1)
            break;
        dest[i] = (unsigned char)v;
    }
    return true;
}
CStringW bytes_to_hexstring(const BYTE* bytes, int byte_size = 2)
{
    CStringW str; // CStringW, not CString, to match the wide format string
    for (int i = 0; i < byte_size; i++)
        str.AppendFormat(L"%02X ", bytes[i] & 0xFF);
    return str;
}
int main()
{
    CStringW str;
    CStringW new_string;
    BYTE dest[2] = { 0 };

    str = L"1234";
    hexstring_to_bytes(str, dest);
    new_string = bytes_to_hexstring(dest);
    OutputDebugString(new_string);
    OutputDebugString(L"\n");

    str = L"A123";
    hexstring_to_bytes(str, dest);
    new_string = bytes_to_hexstring(dest);
    OutputDebugStringW(new_string);
    OutputDebugStringW(L"\n");
    return 0;
}
I have the following code written to have IPP resize my matrix:
#include "ipp_mx.h"
#include "ipp.h"
#include "stdafx.h"
#define IPPCALL(name) name
int main()
{
IppiSize srcSize = { 3,3 };
float srcImage[9] =
{ 20, 40, 30,
35, 55, 70,
100, 30, 20 };
float* src = new float[srcSize.width*srcSize.height];
for (int i = 0; i < srcSize.width*srcSize.height; i++) {
src[i] = srcImage[i];
}
double xFactor = 10; double yFactor = 10;
int numChannels = 1;
int bytesPerPixel = 4;
int srcStep = srcSize.width*bytesPerPixel*numChannels;
IppiRect srcRoi = { 0, 0, srcSize.width, srcSize.width };
float* dest = new float[srcSize.width*srcSize.height*xFactor*yFactor];
IppiSize destSize = { srcSize.width*xFactor, srcSize.height*yFactor };
int destStep = destSize.width*bytesPerPixel*numChannels;
IppiRect destRoi = { 0, 0, destSize.width, destSize.width };
double xShift = 0; double yShift = 0;
int interpolation = 1; //nearest neighbour
int bufSize;
IPPCALL(ippiResizeGetBufSize)(srcRoi, destRoi, 1, interpolation, &bufSize);
unsigned char* buffer = new unsigned char[bufSize];
IPPCALL(ippiResizeSqrPixel_32f_C1R)(src, srcSize, srcStep, srcRoi, dest, destStep, destRoi, xFactor, yFactor, xShift, yShift, interpolation, buffer);
return 0;
}
Is there an IPP function I can use that now converts this float matrix dest to an RGB24 format, given a colour map?
I know I can do it by hand in a for loop, but the raw matrices I want to work with are much larger and for loops may not cut it.
The technique I found to work consists of four steps:
Convert/truncate the float values to unsigned char - in my case the input values are within the 8-bit range and I don't care about the fractional part.
Convert the unsigned char values to 3-channel RGB gray, which simply assigns the same input value to all 3 channels.
Construct a palette mapping 3-channel values to other 3-channel values.
Pass the palette and the input values to a lookup-table function.
This is demonstrated in the code below. Note that my palette was set up to assign green to values under 30 and blue to values greater than or equal to 30.
unsigned char** GeneratePalette()
{
    // The arrays must be static: we return pointers to them, and pointers
    // to automatic locals would dangle once the function returns.
    static unsigned char red[256];
    static unsigned char green[256];
    static unsigned char blue[256];
    static unsigned char* table[3] = { red, green, blue };
    for (int value = 0; value < 256; value++)
    {
        if (value < 30)
        {
            red[value] = 0;
            green[value] = 255;
            blue[value] = 0;
        }
        else
        {
            red[value] = 0;
            green[value] = 0;
            blue[value] = 255;
        }
    }
    return table;
}
void Test()
{
    unsigned char** palette = GeneratePalette();
    IppiSize srcSize = { 2, 1 };
    float src[2] = { 54, 19 };

    unsigned char truncated[2];
    IPPCALL(ippiConvert_32f8u_C1R)(src, srcSize.width * sizeof(float), truncated, srcSize.width * sizeof(unsigned char), srcSize, ippRndZero);

    unsigned char copied[6] = { 0 };
    IPPCALL(ippiGrayToRGB_8u_C1C3R)(truncated, srcSize.width * sizeof(unsigned char), copied, srcSize.width * sizeof(unsigned char) * 3, srcSize);

    unsigned char dest[6];
    IPPCALL(ippiLUTPalette_8u_C3R)(copied, 6, dest, 6, srcSize, palette, 8);
}

int main()
{
    Test();
    return 0;
}
In the end, this was not very efficient, and a single hand-written for loop turned out to be faster.
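For reference, a minimal sketch of the plain-loop version (a hypothetical helper; it hard-codes the same under-30 green / otherwise blue rule as the palette above):

// Maps each float sample to an RGB24 triple, truncating the float the same
// way ippiConvert_32f8u_C1R with ippRndZero does.
void FloatToRgb24(const float* src, unsigned char* dest, int pixelCount)
{
    for (int i = 0; i < pixelCount; i++)
    {
        unsigned char v = (unsigned char)src[i];
        dest[3 * i + 0] = 0;                  // R
        dest[3 * i + 1] = (v < 30) ? 255 : 0; // G
        dest[3 * i + 2] = (v < 30) ? 0 : 255; // B
    }
}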
I am trying to develop an Ascii85 decoder in C++, in order to parse a bitmap file from an Adobe Illustrator (*.ai) file.
I found an algorithm in Java here and tried to rewrite it in C++.
The problem is that in some cases my encoded text is not decoded correctly. For example, if the character "a" is the 7th and final character of my string (before encoding), it is decoded as "`", which is the previous character in the ASCII table. It is strange, because I worked through the algorithm's calculations by hand and also got "`" as the result. I am wondering whether there is a bug in the algorithm, or whether it is not the correct algorithm for Adobe's Ascii85 decoding.
Here is my code:
#include <QCoreApplication>
#include <stdio.h>
#include <string.h>
#include <QDebug>

// returns 1 when there are no more bytes to decode
// 0 otherwise
int decodeBlock(char *input, unsigned char *output, unsigned int inputSize) {
    qDebug() << input << output << inputSize;
    if (inputSize > 0) {
        unsigned int bytesToDecode = (inputSize < 5) ? inputSize : 5;
        unsigned int x[5] = { 0 };
        unsigned int i;
        for (i = 0; i < bytesToDecode; i++) {
            x[i] = input[i] - 33;
            qDebug() << x[i] << ", i: " << i;
        }
        if (i > 0)
            i--;
        unsigned int value =
            x[0] * 85 * 85 * 85 * 85 +
            x[1] * 85 * 85 * 85 +
            x[2] * 85 * 85 +
            x[3] * 85 +
            x[4];
        for (unsigned int j = 0; j < i; j++) {
            int shift = 8 * (3 - j); // 8 * 3, 8 * 2, 8 * 1, 8 * 0
            unsigned char byte = (unsigned char)((value >> shift) & 0xff);
            printf("byte: %c, %d\n", byte, byte);
            *output = byte;
            output++;
        }
    }
    return inputSize <= 5;
}
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    char x__input[] = "<~FE1f+#;K?~>";
    unsigned char x__output[128] = { 0 };
    char *input = x__input + 2;
    unsigned int inputSize = (unsigned int)strlen(input);
    inputSize -= 2;
    unsigned char *output = x__output;
    printf("decoding %s\n", input);
    for (unsigned int i = 0; i < inputSize; i += 5, input += 5, output += 4)
        if (decodeBlock(input, output, inputSize - i))
            break;
    printf("Output is: %s\n", x__output);
    return a.exec();
}
What happens when inputSize is not a multiple of 5?
unsigned int bytesToDecode = (inputSize < 5)? inputSize : 5;
You cap bytesToDecode at 5, but when fewer than 5 characters remain, the missing entries of x are left at 0 instead of receiving proper pad values, so the decoded group comes out too small.
So when your character is the 7th and final one, exactly this case occurs.
If the input length is not a multiple of 5, it MUST be padded with the character "u" before decoding (and one output byte dropped per pad character).
For more details about the encoding / decoding process, please check the Wikipedia page, where it is pretty well explained:
http://en.wikipedia.org/wiki/Ascii85#Example_for_Ascii85
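A sketch of decodeBlock with that padding applied (keeping the structure of the posted code; only the handling of short groups changes):

int decodeBlock(char *input, unsigned char *output, unsigned int inputSize) {
    if (inputSize > 0) {
        unsigned int bytesToDecode = (inputSize < 5) ? inputSize : 5;
        unsigned int x[5];
        for (unsigned int i = 0; i < 5; i++)
            x[i] = (i < bytesToDecode) ? (unsigned int)(input[i] - 33)
                                       : 84; // pad with 'u' (117 - 33 = 84)
        unsigned int value =
            x[0] * 85 * 85 * 85 * 85 +
            x[1] * 85 * 85 * 85 +
            x[2] * 85 * 85 +
            x[3] * 85 +
            x[4];
        // A partial group of n input characters yields n - 1 output bytes.
        unsigned int outBytes = bytesToDecode - 1;
        for (unsigned int j = 0; j < outBytes; j++)
            *output++ = (unsigned char)((value >> (8 * (3 - j))) & 0xff);
    }
    return inputSize <= 5;
}

Note that the posted code also doesn't handle the 'z' shortcut for an all-zero group, which Adobe's variant of Ascii85 can emit.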
On Linux Mint, I use operator new to allocate memory:
int maxNumber = 1000000;
int* arr = new int[maxNumber];
When I run my code, I get:
flybird@flybird ~/cplusplus_study $ ./a.out
-412179
Segmentation fault
When I change maxNumber to 100, the code runs successfully.
The result of the free -m command:
flybird@flybird ~/cplusplus_study $ free -m
total used free shared buffers cached
Mem: 2016 800 1216 0 158 359
-/+ buffers/cache: 283 1733
Swap: 2045 0 2045
This is the actual code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <iostream>
#include <fstream>
#include <ctime>
#include <memory>
#include <iterator>
#include <cstdlib>

using namespace std;

class GenRandomNumber;

class BitMap
{
public:
    BitMap(int n) : maxNumer(n)
    {
        int length = 1 + n / BITSPERWORD;
        pData = new int[length];
        memset(pData, 0, length);
    }
    void set(int i)
    {
        pData[i>>SHIFT] |= 1<<(i & MASK); // i & MASK is equivalent to i % 32
    }
    void clear(int i)
    {
        pData[i>>SHIFT] &= ~(1<<(i & MASK)); // i >> SHIFT is equivalent to i / 32
    }
    bool test(int i)
    {
        return pData[i>>SHIFT] & (1<<(i & MASK));
    }
    void sort(string inputFile, string outputFile)
    {
        ifstream read(inputFile.c_str());
        ofstream write(outputFile.c_str());
        int temp = 0;
        while (read>>temp)
            set(temp);
        for (int i = 0; i < maxNumer; ++i)
        {
            if (test(i))
                write<<i<<endl;
        }
        read.close();
        write.close();
    }
    ~BitMap()
    {
        delete []pData;
        pData = NULL;
    }
private:
    int* pData;
    int maxNumer;
    enum { SHIFT = 5, MASK = 0x1F, BITSPERWORD = 32 };
};

class GenRandomNumber
{
public:
    static GenRandomNumber* genInstance()
    {
        if (!mInstance.get())
            mInstance.reset(new GenRandomNumber());
        return mInstance.get();
    }
    void generate1(string fileName, int m, int maxNumber)
    {
        ofstream outFile(fileName.c_str());
        int* arr = new int[maxNumber];
        for (int i = 0; i < maxNumber; i++)
            arr[i] = i;
        int temp = 0;
        for (int j = 0; j < m; j++)
        {
            temp = randomRange(j, maxNumber - 1);
            cout<<temp<<endl;
            swap(arr[j], arr[temp]);
        }
        copy(arr, arr + m, ostream_iterator<int>(outFile, "\n"));
        delete []arr;
        outFile.close();
    }
    void generate2(string fileName, int m, int maxNumber)
    {
        BitMap bitmap(maxNumber);
        ofstream outFile(fileName.c_str());
        int count = 0;
        int temp;
        while (count < m)
        {
            srand(time(NULL));
            temp = randomRange(0, maxNumber);
            cout<<temp<<endl;
            if (!bitmap.test(temp))
            {
                bitmap.set(temp);
                outFile<<temp<<endl;
                count++;
            }
        }
        outFile.close();
    }
private:
    GenRandomNumber() {}
    GenRandomNumber(const GenRandomNumber&);
    GenRandomNumber& operator=(const GenRandomNumber&);
    int randomRange(int low, int high)
    {
        srand(clock()); // better than srand(time(NULL))
        return low + (RAND_MAX * rand() + rand()) % (high + 1 - low);
    }
    static auto_ptr<GenRandomNumber> mInstance;
};

auto_ptr<GenRandomNumber> GenRandomNumber::mInstance;

int main()
{
    const int MAX_NUMBER = 1000000;
    GenRandomNumber *pGen = GenRandomNumber::genInstance();
    pGen->generate1("test.txt", MAX_NUMBER, MAX_NUMBER);
    BitMap bitmap(MAX_NUMBER);
    bitmap.sort("test.txt", "sort.txt");
    return 0;
}
gdb already gave you a hint where the error is coming from. The only place where you use swap is in this function:
void generate1(string fileName, int m, int maxNumber)
{
    ofstream outFile(fileName);
    int* arr = new int[maxNumber];
    for (int i = 0; i < maxNumber; i++)
        arr[i] = i;
    int temp = 0;
    for (int j = 0; j < m; j++)
    {
        temp = randomRange(j, maxNumber - 1);
        cout<<temp<<endl;
        swap(arr[j], arr[temp]); // <----
    }
    copy(arr, arr + m, ostream_iterator<int>(outFile, "\n"));
    delete []arr;
    outFile.close();
}
Swapping two ints isn't likely to be the culprit, unless you give it invalid input to begin with. arr[j] is pretty straightforward and should be fine, but what about arr[temp]? temp is calculated here:
temp = randomRange(j, maxNumber - 1);
and the randomRange function looks like this:
int randomRange(int low, int high)
{
    srand(clock()); // better than srand(time(NULL))
    return low + (RAND_MAX * rand() + rand()) % (high + 1 - low);
}
I'd say this is your problem. RAND_MAX * rand() overflows a signed int (undefined behaviour) and in practice produces large negative numbers, so temp can be negative and arr[temp] goes out of bounds. Hopefully it's obvious why that's not good.
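An overflow-free rewrite under the same pre-C++11 constraints might look like this (note that on glibc RAND_MAX is 2^31 - 1, so rand() alone covers the range needed here; the modulo still introduces a slight bias):

int randomRange(int low, int high)
{
    // Seed once at program start with srand(), not on every call.
    return low + rand() % (high + 1 - low);
}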
1,000,000 probably should not fail on a modern desktop, so I expect you are blowing up elsewhere.
To see what/where the problem is:
$ gdb
gdb> file ./a.out
gdb> run
<wait for crash>
gdb> bt full
If the allocation failed, you should see an uncaught bad_alloc exception.
Otherwise, please post the source code and results of the backtrace.
The problem is in your randomRange function.
return low + (RAND_MAX * rand() + rand()) % (high + 1 - low);
I don't know why you multiply rand() (which returns a value between 0 and RAND_MAX) by (RAND_MAX + 1), but it overflows and the result may be negative.
If C++11 is an option for you, I suggest using uniform_int_distribution. It returns a number between the min and max values passed to it.
#include <random>
#include <iostream>

int main()
{
    std::random_device rd;                      // non-deterministic seed source
    std::mt19937 gen(rd());                     // Mersenne Twister engine
    std::uniform_int_distribution<> dis(1, 6);  // inclusive range [1, 6]
    for (int n = 0; n < 10; ++n)
        std::cout << dis(gen) << ' ';
    std::cout << '\n';
}
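Applied to the code in question, randomRange could be rewritten along these lines (a sketch; the static engine is seeded once, unlike the repeated srand calls in the original):

#include <random>

int randomRange(int low, int high)
{
    static std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> dis(low, high); // inclusive range
    return dis(gen);
}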
Here's one of the problems: m is too large and exceeds maxNumber. arr[j] runs past the array's bounds once j reaches maxNumber, because arr is only maxNumber elements in size. See the info in stack frame #1: m=1606416912, maxNumber=999999.
By the way, a well-placed assert would have alerted you to this problem (I'm a big fan of self-debugging code - I hate spending time under the debugger):
void generate1(string fileName, int m, int maxNumber)
{
    assert(!fileName.empty());
    assert(m > 0 && maxNumber > 0);
    assert(m <= maxNumber);
    ...
}
And the back trace:
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x000000010007eef8
std::swap<int> (__a=#0x100200000, __b=#0x10007eef8) at stl_algobase.h:99
99 __a = __b;
(gdb) bt full
#0 std::swap<int> (__a=#0x100200000, __b=#0x10007eef8) at stl_algobase.h:99
__tmp = 0
#1 0x0000000100000ff1 in GenRandomNumber::generate1 (this=0x7fff5fbffa10, fileName=#0x100200000, m=1606416912, maxNumber=999999) at t.cpp:91
outFile = {
<std::basic_ostream<char,std::char_traits<char> >> = {
<std::basic_ios<char,std::char_traits<char> >> = {
<std::ios_base> = {
_vptr$ios_base = 0x7fff745bc350,
_M_precision = 6,
_M_width = 0,
_M_flags = 4098,
_M_exception = std::_S_goodbit,
_M_streambuf_state = std::_S_goodbit,
_M_callbacks = 0x0,
_M_word_zero = {
_M_pword = 0x0,
_M_iword = 0
},
_M_local_word = {{
_M_pword = 0x0,
_M_iword = 0
}, {
_M_pword = 0x0,
_M_iword = 0
}, {
_M_pword = 0x0,
_M_iword = 0
}, {
_M_pword = 0x0,
_M_iword = 0
}, {
_M_pword = 0x0,
_M_iword = 0
}, {
_M_pword = 0x0,
_M_iword = 0
}, {
_M_pword = 0x0,
_M_iword = 0
}, {
_M_pword = 0x0,
_M_iword = 0
}},
_M_word_size = 8,
_M_word = 0x7fff5fbff910,
_M_ios_locale = {
_M_impl = 0x7fff745c1880
}
},
members of std::basic_ios<char,std::char_traits<char> >:
_M_tie = 0x0,
_M_fill = 0 '\0',
_M_fill_init = false,
_M_streambuf = 0x7fff5fbff660,
_M_ctype = 0x7fff745c1ab0,
_M_num_put = 0x7fff745c1dd0,
_M_num_get = 0x7fff745c1dc0
},
members of std::basic_ostream<char,std::char_traits<char> >:
_vptr$basic_ostream = 0x7fff745bc328
},
members of std::basic_ofstream<char,std::char_traits<char> >:
_M_filebuf = {
<std::basic_streambuf<char,std::char_traits<char> >> = {
_vptr$basic_streambuf = 0x7fff745bc230,
_M_in_beg = 0x100803200 "",
_M_in_cur = 0x100803200 "",
_M_in_end = 0x100803200 "",
_M_out_beg = 0x0,
_M_out_cur = 0x0,
_M_out_end = 0x0,
_M_buf_locale = {
_M_impl = 0x7fff745c1880
}
},
members of std::basic_filebuf<char,std::char_traits<char> >:
_M_lock = {
__sig = 0,
__opaque = '\0' <repeats 55 times>
},
_M_file = {
_M_cfile = 0x7fff756bf0a0,
_M_cfile_created = true
},
_M_mode = 48,
_M_state_beg = {
__mbstate8 = '\0' <repeats 127 times>,
_mbstateL = 0
},
_M_state_cur = {
__mbstate8 = '\0' <repeats 127 times>,
_mbstateL = 0
},
_M_state_last = {
__mbstate8 = '\0' <repeats 127 times>,
_mbstateL = 0
},
_M_buf = 0x100803200 "",
_M_buf_size = 1024,
_M_buf_allocated = true,
_M_reading = false,
_M_writing = false,
_M_pback = 0 '\0',
_M_pback_cur_save = 0x0,
_M_pback_end_save = 0x0,
_M_pback_init = false,
_M_codecvt = 0x7fff745c1cf0,
_M_ext_buf = 0x0,
_M_ext_buf_size = 0,
_M_ext_next = 0x0,
_M_ext_end = 0x0
}
}
#2 0x0000000100000a18 in main () at t.cpp:140
bitmap = {
pData = 0x7fff5fc005a8,
maxNumer = 17
}
pGen = (GenRandomNumber *) 0x1001000e0
There is one problem in the code that may not directly explain the segmentation fault, but should also draw your attention. Note that in the class BitMap, the constructor is:
BitMap(int n) : maxNumer(n)
{
    int length = 1 + n / BITSPERWORD;
    pData = new int[length];
    memset(pData, 0, length);
}
The third parameter of memset is the size of the region in bytes, not the number of elements, so it should be:
BitMap(int n) : maxNumer(n)
{
    int length = 1 + n / BITSPERWORD;
    pData = new int[length];
    memset(pData, 0, length * sizeof(int));
}
The original code can cause problems because only part of the allocated array is initialized to zero by memset. The rest of the program may then be logically wrong, because BitMap's member functions (set, clear, test) perform bitwise operations that presume every element of the array pData points to has been zeroed.
I'm working on a program in C++ to compute MD5 checksums. I'm doing this mainly because I think I'll learn a lot about C++, checksums, OOP, and whatever else I run into.
I'm having trouble with the checksums, and I think the problem is in the function padbuff, which does the message padding.
#include "HashMD5.h"
int leftrotate(int x, int y);
void padbuff(uchar * buffer);
//HashMD5 constructor
HashMD5::HashMD5()
{
Type = "md5";
Hash = "";
}
HashMD5::HashMD5(const char * hashfile)
{
Type = "md5";
std::ifstream filestr;
filestr.open(hashfile, std::fstream::in | std::fstream::binary);
if(filestr.fail())
{
std::cerr << "File " << hashfile << " was not opened.\n";
std::cerr << "Open failed with error ";
}
}
std::string HashMD5::GetType()
{
return this->Type;
}
std::string HashMD5::GetHash()
{
return this->Hash;
}
bool HashMD5::is_open()
{
return !((this->filestr).fail());
}
void HashMD5::CalcHash(unsigned int * hash)
{
unsigned int *r, *k;
int r2[4] = {0, 4, 9, 15};
int r3[4] = {0, 7, 12, 19};
int r4[4] = {0, 4, 9, 15};
uchar * buffer;
int bufLength = (2<<20)*8;
int f,g,a,b,c,d, temp;
int *head;
uint32_t maxint = 1<<31;
//Initialized states
unsigned int h[4]{ 0x67452301, 0xefcdab89, 0x98badcfe, 0x10325476};
r = new unsigned int[64];
k = new unsigned int[64];
buffer = new uchar[bufLength];
if(r==NULL || k==NULL || buffer==NULL)
{
std::cerr << "One of the dyn alloc failed\n";
}
// r specifies the per-round shift amounts
for(int i = 0; i<16; i++)
r[i] = 7 + (5 * ((i)%4) );
for(int i = 16; i < 32; i++)
r[i] = 5 + r2[i%4];
for(int i = 32; i< 48; i++)
r[i] = 4 + r3[i%4];
for(int i = 48; i < 63; i++)
r[i] = 6 + r4[i%4];
for(int i = 0; i < 63; i++)
{
k[i] = floor( fabs( sin(i + 1)) * maxint);
}
while(!(this->filestr).eof())
{
//Read in 512 bits
(this->filestr).read((char *)buffer, bufLength-512);
padbuff(buffer);
//The 512 bits are now 16 32-bit ints
head = (int *)buffer;
for(int i = 0; i < 64; i++)
{
if(i >=0 && i <=15)
{
f = (b & c) | (~b & d);
g = i;
}
else if(i >= 16 && i <=31)
{
f = (d & b) | (~d & b);
g = (5*i +1) % 16;
}
else if(i >=32 && i<=47)
{
f = b ^ c ^ d;
g = (3*i + 5 ) % 16;
}
else
{
f = c ^ (b | ~d);
g = (7*i) % 16;
}
temp = d;
d = c;
c = b;
b = b + leftrotate((a + f + k[i] + head[g]), r[i]);
a = temp;
}
h[0] +=a;
h[1] +=b;
h[2] +=c;
h[3] +=d;
}
delete[] r;
delete[] k;
hash = h;
}
int leftrotate(int x, int y)
{
return(x<<y) | (x >> (32 -y));
}
void padbuff(uchar* buffer)
{
int lack;
int length = strlen((char *)buffer);
uint64_t mes_size = length % UINT64_MAX;
if((lack = (112 - (length % 128) ))>0)
{
*(buffer + length) = ('\0'+1 ) << 3;
memset((buffer + length + 1),0x0,lack);
memcpy((void*)(buffer+112),(void *)&mes_size, 64);
}
}
In my test program I run this on an empty message, so length in padbuff is 0. Then when I do *(buffer + length) = ('\0'+1) << 3;, I'm trying to pad the message with a 1. In the NetBeans debugger I cast buffer as a uint64_t and it says buffer=8. I was trying to put a 1 bit in the most significant spot of buffer, so my cast should have shown UINT64_MAX. It doesn't, so I'm confused about what my padding code actually does. Can someone tell me what I'm doing wrong and what I'm supposed to do in padbuff? Thanks, and I apologize for the long question.
Just to be clear about what the padding is supposed to be doing, here is the padding excerpt from Wikipedia:
The message is padded so that its length is divisible by 512. The padding works as follows: first a single bit, 1, is appended to the end of the message. This is followed by as many zeros as are required to bring the length of the message up to 64 bits fewer than a multiple of 512. The remaining bits are filled up with 64 bits representing the length of the original message, modulo 2^64.
I'm mainly looking for help with padbuff, but since I'm trying to learn, all comments are appreciated.
The first question is what you did:
length % UINT64_MAX doesn't make sense at all, because length is in bytes and UINT64_MAX is the largest value a uint64_t can store.
You thought that setting the most significant bit would give the maximum value. In fact, you need to set all the bits to get it.
You shift 1 left by only 3 bits, half the width of a byte, so you get 0x08 rather than 0x80.
The byte pointed to by buffer is the least significant one in little endian. (I assume your machine is little-endian, since the debugger showed 8; see the snippet below.)
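A tiny standalone snippet (hypothetical, not part of the original code) illustrating points 3 and 4:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    unsigned char buffer[8] = { 0 };
    buffer[0] = ('\0' + 1) << 3; // 0x08, exactly as in padbuff

    uint64_t view;
    memcpy(&view, buffer, sizeof view); // reinterpret the 8 bytes as one integer
    // On a little-endian machine buffer[0] is the LOW byte of the integer,
    // so this prints 8 - the very value the debugger showed.
    printf("%llu\n", (unsigned long long)view);
    return 0;
}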
The second question is how it should work.
I don't know what exactly padbuff should do, but if you want to pad and get UINT64_MAX, you need something like this:
int length = strlen((char *)buffer);
int len_of_padding = sizeof(uint64_t) - length % sizeof(uint64_t);
if (len_of_padding > 0)
{
    memset((void*)(buffer + length), 0xFF, len_of_padding);
}
You worked with the length of two uint64 values. Maybe you wanted to zero the next one:
uint64_t *after = (uint64_t*)(buffer + length + len_of_padding);
*after = 0;
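Since the real goal is MD5 padding, here is a sketch of what padbuff could look like, following the Wikipedia description quoted in the question (the in-place, whole-message treatment and the returned length are my assumptions; the original code's buffer layout may differ):

#include <cstdint>
#include <cstring>

// Pads the message in place per the MD5 rules and returns the padded length
// in bytes (a multiple of 64). The buffer must have room for up to 72 extra
// bytes beyond 'length'.
size_t padbuff(unsigned char* buffer, size_t length)
{
    uint64_t bitLength = (uint64_t)length * 8; // original length in bits

    size_t i = length;
    buffer[i++] = 0x80; // the single appended 1 bit, followed by seven 0 bits

    // Zero-fill until the length is 56 mod 64 (448 mod 512 bits).
    while (i % 64 != 56)
        buffer[i++] = 0x00;

    // Append the original bit length as a 64-bit little-endian integer
    // (MD5 is a little-endian algorithm).
    for (int b = 0; b < 8; b++)
        buffer[i++] = (unsigned char)(bitLength >> (8 * b));

    return i;
}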