A QString containing user input holds a MAC address, for instance "68F542F9AB22". I need to convert the QString to an unsigned char array[6] of numeric values, not the ASCII representation. So for the QString "68F542F9AB22", the first element of the unsigned char array should be 104 (0x68).
You can iterate over your QString, cut it into pieces of two characters, and then use QString::toUShort(&ok, 16), which converts each hex piece to a ushort. Something like:
for(int i=0;i<6;++i)
{
QString hexString = yourstring.mid(i*2,2);
bool ok = false;
yourBuf[i] = (unsigned char) hexString.toUShort(&ok,16);
//if not ok, handle error
}
You should also check that the input string has the correct length and handle conversion errors.
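For example, a minimal sketch with those checks folded in (the function name and the assumption of a separator-free 12-character input are mine):
bool parseMac(const QString &input, unsigned char out[6])
{
    if (input.size() != 12)              // 6 bytes, two hex digits each, no ':' separators
        return false;
    for (int i = 0; i < 6; ++i) {
        bool ok = false;
        out[i] = (unsigned char) input.mid(i * 2, 2).toUShort(&ok, 16);
        if (!ok)                         // non-hex character
            return false;
    }
    return true;
}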
Hope this might help you.
I would do it in the following way:
QString s("68F542F9AB22");
assert(s.size() % 2 == 0);
std::vector<unsigned char> array;
for (int i = 0; i < s.size(); i += 2)
{
QString num = s.mid(i, 2);
bool ok = false;
array.push_back(num.toUInt(&ok, 16));
assert(ok);
}
Hex to long long is a single conversion and handles up to 64 bits. That's sufficient for a 48-bit MAC. So:
bool ok = false;
auto result = input.toULongLong(&ok,16);
for(int i=5;i>=0;--i)
{
buf[i] = result & 0xFF;
result >>= 8;
}
You can try:
QString s = "68F542F9AB22";
unsigned char array[6];
if (s.length() % 2 == 0) { // test that the string length is even
    for (int i = 0; i < s.length(); i += 2) {
        QString chunk = s.mid(i, 2);
        bool ok;
        array[i / 2] = static_cast<unsigned char>(chunk.toInt(&ok, 16));
    }
}
Note that the above code only works for strings of even length (which is fine for a MAC address).
And to test the result you may use:
for (unsigned long i = 0 ; i < 6 ; i ++) {
std::cout<<"Array ["<<i<<"] = "<<static_cast<unsigned long>(array[i])<<std::endl;
}
I have an issue with my code below, which encodes a vector of long into a string by storing the differences between consecutive values.
The encode/decode works fine as long as the values are at or below 2^30.
For any value above that, the logic fails. Note that sizeof(long) is 8 bytes.
static std::string encode(const std::vector<long>& path) {
long lastValue = 0L;
std::stringstream result;
for (long value : path) {
long delta = value - lastValue;
lastValue = value;
long var = 0;
// Shift the delta value left by 1 bit and encode each 5-bit chunk into a character
for (var = delta < 0 ? ~(delta << 1) : delta << 1; var >= 32L; var >>= 5) {
result << (char)((32L | var & 31L) + 63L); //char is getting written to result stringstream
}
// Encode the last 5-bit chunk into a character
result << (char)(var + 63L); // char is getting written to result stringstream
}
std::cout << std::endl;
return result.str();
}
static std::unique_ptr<std::vector<long>> decode(const std::string& encoded) {
auto decoded = std::make_unique<std::vector<long>>();
long last_val = 0;
int index = 0;
while (index < encoded.length()) {
int shift = 0;
long current = 1;
int c;
do {
c = encoded[index++] - 63 - 1;
current += c << shift;
shift += 5;
} while (c >= 31);
long v = ( (current & 1) == 0 ? current >> 1 : ~(current >> 1) );
last_val += v;
decoded->push_back(last_val);
}
return std::move(decoded);
}
Can someone please provide insight into what might be going wrong?
Inside the decode function, c needs to be declared as long, not int; with int, the shift in c << shift is done in 32-bit arithmetic and overflows once the encoded delta needs more than about 30 bits.
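For reference, the corrected inner loop of decode (only the type of c changes, the rest stays as posted):
int shift = 0;
long current = 1;
long c;                       // was int; with int, c << shift overflows for large deltas
do {
    c = encoded[index++] - 63 - 1;
    current += c << shift;    // the shift now happens in 64-bit long
    shift += 5;
} while (c >= 31);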
I have written a program that sets up a client/server TCP socket over which the user sends an integer value to the server through the use of a terminal interface. On the server side I am executing byte commands for which I need hex values stored in my array.
sprintf(mychararray, "%X", myintvalue);
This code takes my integer and prints it as a hex value into a char array. The only problem is that when I use that array to set my commands, the values are treated as ASCII characters. So, for example, if I send an integer equal to 3000, it is converted to 0x0BB8 and then stored as 'B' 'B' '8', which corresponds to 42 42 38 in hex. I have looked all over the place for a solution and have not been able to come up with one.
I finally came up with a solution to my problem. First I created an array and stored all byte values from 0x00 to 0xFF in it.
char m_list[256]; //array defined in class
m_list[0] = 0x00; //set first array index to zero
int count = 1; //count variable to step through the array and set members
while (count < 256)
{
m_list[count] = m_list[count -1] + 0x01; //populate array with hex from 0x00 - 0xFF
count++;
}
Next I created a function that lets me group my hex characters into individual bytes and store them in the array that will be used for my command.
void parse_input(char hex_array[], int i, char ans_array[])
{
    int n = 0;
    int j = 0;
    int idx = 0;
    string hex_values;
    while (n < i - 1)
    {
        if (hex_array[n] == '\0')
        {
            hex_values = '0';
        }
        else
        {
            hex_values = hex_array[n];
        }
        if (hex_array[n + 1] == '\0')
        {
            hex_values += '0';
        }
        else
        {
            hex_values += hex_array[n + 1];
        }
        cout << "This is the string being used in stoi: " << hex_values; // statement for testing
        idx = stoul(hex_values, nullptr, 16);
        ans_array[j] = m_list[idx];
        n = n + 2;
        j++;
    }
}
This function will be called right after my previous code.
sprintf(mychararray, "%X", myintvalue);
parse_input(arrayA, sizeof(arrayA), arrayB);
Example: arrayA is an 8-byte char array and arrayB is a 4-byte char array. arrayA should be double the size of arrayB, since you are taking two ASCII characters and making one byte out of the pair, e.g. 'A' 'B' = 0xAB.
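As a side note, since m_list[k] ends up equal to k, the table lookup can be replaced by a plain cast, and the same std::stoul call can do the pairing directly. A minimal sketch (the helper name and the assumption of an even number of hex digits are mine):
#include <string>

// Turn a string of hex digits ("0BB8") into raw bytes (0x0B, 0xB8) without a lookup table.
void parse_hex_pairs(const std::string &hex, char out[], int out_len)
{
    for (int j = 0; j < out_len && 2 * j + 1 < static_cast<int>(hex.size()); ++j)
        out[j] = static_cast<char>(std::stoul(hex.substr(2 * j, 2), nullptr, 16));
}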
While I was trying to understand your question, I realized that what you needed was more than a single variable. You needed a class, because you want both a string that represents the hex code to be printed out and the number itself in the form of an unsigned 16-bit integer, which I deduced would be something like unsigned short int. So I created a class named hexset that does all of this for you (I got the idea from bitset):
#include <iostream>
#include <string>
class hexset {
public:
hexset(int num) {
this->hexnum = (unsigned short int) num;
this->hexstring = hexset::to_string(num);
}
unsigned short int get_hexnum() {return this->hexnum;}
std::string get_hexstring() {return this->hexstring;}
private:
static std::string to_string(int decimal) {
int length = int_length(decimal);
std::string ret = "";
for (int i = (length > 1 ? int_length(decimal) - 1 : length); i >= 0; i--) {
ret = hex_arr[decimal%16]+ret;
decimal /= 16;
}
if (ret[0] == '0') {
ret = ret.substr(1,ret.length()-1);
}
return "0x"+ret;
}
static int int_length(int num) {
int ret = 1;
while (num > 10) {
num/=10;
++ret;
}
return ret;
}
static constexpr char hex_arr[16] = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
unsigned short int hexnum;
std::string hexstring;
};
constexpr char hexset::hex_arr[16];
int main() {
int number_from_file = 3000; // This number is in all forms technically, hex is just another way to represent this number.
hexset hex(number_from_file);
std::cout << hex.get_hexstring() << ' ' << hex.get_hexnum() << std::endl;
return 0;
}
I assume you'll probably want to do some operator overloading so you can add and subtract from this number, assign new numbers, or do other mathematical or bit-shift operations.
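One way to start on that, as a rough sketch building on the class above (the non-member operator is my addition, not part of the original):
// Addition reuses the constructor so the cached string stays in sync with the value.
hexset operator+(hexset lhs, int rhs)
{
    return hexset(lhs.get_hexnum() + rhs);
}

// Usage: hexset h = hexset(3000) + 8;  // prints as 0xBC0, i.e. 3008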
This is my fourth attempt at doing Base64 encoding. My first tries worked, but the output wasn't standard. They were also extremely slow!!! I used vectors with a lot of push_back and erase.
So I decided to rewrite it, and this version is much, much faster! Except that it loses data. -__-
I need as much speed as I can possibly get because I'm compressing a pixel buffer with ZLib and Base64 encoding the compressed string. The images are 1366 x 768, so the buffers are fairly large.
I do not want to copy any code I find online because... well, I like to write things myself, and I don't like worrying about copyright or having to put a ton of credits from different sources all over my code.
Anyway, my code is below. It's very short and simple.
const static std::string Base64Chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
inline bool IsBase64(std::uint8_t C)
{
return (isalnum(C) || (C == '+') || (C == '/'));
}
std::string Copy(std::string Str, int FirstChar, int Count)
{
if (FirstChar <= 0)
FirstChar = 0;
else
FirstChar -= 1;
return Str.substr(FirstChar, Count);
}
std::string DecToBinStr(int Num, int Padding)
{
int Bin = 0, Pos = 1;
std::stringstream SS;
while (Num > 0)
{
Bin += (Num % 2) * Pos;
Num /= 2;
Pos *= 10;
}
SS.fill('0');
SS.width(Padding);
SS << Bin;
return SS.str();
}
int DecToBinStr(std::string DecNumber)
{
int Bin = 0, Pos = 1;
int Dec = strtol(DecNumber.c_str(), NULL, 10);
while (Dec > 0)
{
Bin += (Dec % 2) * Pos;
Dec /= 2;
Pos *= 10;
}
return Bin;
}
int BinToDecStr(std::string BinNumber)
{
int Dec = 0;
int Bin = strtol(BinNumber.c_str(), NULL, 10);
for (int I = 0; Bin > 0; ++I)
{
if(Bin % 10 == 1)
{
Dec += (1 << I);
}
Bin /= 10;
}
return Dec;
}
std::string EncodeBase64(std::string Data)
{
std::string Binary = std::string();
std::string Result = std::string();
for (std::size_t I = 0; I < Data.size(); ++I)
{
Binary += DecToBinStr(Data[I], 8);
}
for (std::size_t I = 0; I < Binary.size(); I += 6)
{
Result += Base64Chars[BinToDecStr(Copy(Binary, I, 6))];
if (I == 0) ++I;
}
int PaddingAmount = ((-Result.size() * 3) & 3);
for (int I = 0; I < PaddingAmount; ++I)
Result += '=';
return Result;
}
std::string DecodeBase64(std::string Data)
{
std::string Binary = std::string();
std::string Result = std::string();
for (std::size_t I = Data.size(); I > 0; --I)
{
if (Data[I - 1] != '=')
{
std::string Characters = Copy(Data, 0, I);
for (std::size_t J = 0; J < Characters.size(); ++J)
Binary += DecToBinStr(Base64Chars.find(Characters[J]), 6);
break;
}
}
for (std::size_t I = 0; I < Binary.size(); I += 8)
{
Result += (char)BinToDecStr(Copy(Binary, I, 8));
if (I == 0) ++I;
}
return Result;
}
I've been using the above like this:
int main()
{
std::string Data = EncodeBase64("IMG." + ::ToString(677) + "*" + ::ToString(604)); //IMG.677*604
std::cout<<DecodeBase64(Data); //Prints IMG.677*601
}
As you can see above, it prints the wrong string. It's fairly close, but for some reason the 4 is turned into a 1!
Now if I do:
int main()
{
std::string Data = EncodeBase64("IMG." + ::ToString(1366) + "*" + ::ToString(768)); //IMG.1366*768
std::cout<<DecodeBase64(Data); //Prints IMG.1366*768
}
It prints correctly. I'm not sure what is going on at all or where to begin looking.
Just in case anyone is curious and wants to see my other attempts (the slow ones): http://pastebin.com/Xcv03KwE
I'm really hoping someone can shed some light on speeding things up, or at least on figuring out what's wrong with my code :l
The main encoding issue is that you are not accounting for input whose bit length is not a multiple of 6. In this case, the final 4 ends up converted into 0100 instead of 010000 because there are no more bits to read. You are supposed to pad with 0s.
After changing your Copy function like this, the final encoded character (before the padding) is Q instead of the original E:
std::string data = Str.substr(FirstChar, Count);
while (data.size() < static_cast<std::size_t>(Count)) data += '0';
return data;
Also, it appears that your logic for adding the padding = characters is off, because it adds too many = in this case.
As far as comments on speed, I'd focus primarily on trying to reduce your usage of std::string. The way you are currently converting the data into a string of 0s and 1s is pretty inefficient, considering that the source could be read directly with bitwise operators.
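For instance, a full 3-byte group can be turned into four output characters with a handful of shifts and masks, with no intermediate '0'/'1' string at all. A sketch of mine, reusing the Base64Chars table from the question:
// Encode one 3-byte group into 4 Base64 characters using only bit operations.
void EncodeTriple(const unsigned char *in, std::string &out)
{
    unsigned int n = (in[0] << 16) | (in[1] << 8) | in[2];
    out += Base64Chars[(n >> 18) & 0x3F];
    out += Base64Chars[(n >> 12) & 0x3F];
    out += Base64Chars[(n >> 6) & 0x3F];
    out += Base64Chars[n & 0x3F];
}
The trailing one- or two-byte group is handled the same way, with zero bits filled in and '=' appended, as described above.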
I'm not sure whether I could easily come up with a slower method of doing Base-64 conversions.
The code requires 4 headers (on Mac OS X 10.7.5 with G++ 4.7.1) and the compiler option -std=c++11 to make the #include <cstdint> acceptable:
#include <string>
#include <iostream>
#include <sstream>
#include <cstdint>
It also requires a function ToString() that was not defined; I created:
std::string ToString(int value)
{
std::stringstream ss;
ss << value;
return ss.str();
}
The code in your main() — which is what uses the ToString() function — is a little odd: why do you need to build a string from pieces instead of simply using "IMG.677*604"?
Also, it is worth printing out the intermediate result:
int main()
{
std::string Data = EncodeBase64("IMG." + ::ToString(677) + "*" + ::ToString(604));
std::cout << Data << std::endl;
std::cout << DecodeBase64(Data) << std::endl; //Prints IMG.677*601
}
This yields:
SU1HLjY3Nyo2MDE===
IMG.677*601
The output string (SU1HLjY3Nyo2MDE===) is 18 bytes long; that has to be wrong, because a valid Base-64 encoded string has to be a multiple of 4 bytes long (three 8-bit bytes are encoded into four characters, each carrying 6 bits of the original data). This immediately tells us there are problems. You should also only ever get zero, one or two pad (=) characters, never three; this too confirms that there are problems.
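Those two observations translate directly into a quick sanity check you can run on any encoded string. A sketch of mine, reusing the IsBase64 helper from the question:
// Plausibility check: length must be a multiple of 4, with at most two
// trailing '=' pad characters and only alphabet characters before them.
bool LooksLikeBase64(const std::string &s)
{
    if (s.size() % 4 != 0)
        return false;
    std::size_t pad = 0;
    while (pad < s.size() && s[s.size() - 1 - pad] == '=')
        ++pad;
    if (pad > 2)
        return false;
    for (std::size_t i = 0; i < s.size() - pad; ++i)
        if (!IsBase64(static_cast<std::uint8_t>(s[i])))
            return false;
    return true;
}
Run on SU1HLjY3Nyo2MDE===, it fails immediately on the length test.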
Removing two of the pad characters leaves a valid Base-64 string. When I use my own home-brew Base-64 encoding and decoding functions to decode your (truncated) output, it gives me:
Base64:
0x0000: SU1HLjY3Nyo2MDE=
Binary:
0x0000: 49 4D 47 2E 36 37 37 2A 36 30 31 00 IMG.677*601.
Thus it appears you have encoded the null that terminates the string. When I encode IMG.677*604, the output I get is:
Binary:
0x0000: 49 4D 47 2E 36 37 37 2A 36 30 34 IMG.677*604
Base64: SU1HLjY3Nyo2MDQ=
You say you want to speed up your code. Quite apart from fixing it so that it encodes correctly (I've not really studied the decoding), you will want to avoid all the string manipulation you do. It should be a bit manipulation exercise, not a string manipulation exercise.
I have 3 small encoding routines in my code, to encode triplets, doublets and singlets:
/* Encode 3 bytes of data into 4 */
static void encode_triplet(const char *triplet, char *quad)
{
quad[0] = base_64_map[(triplet[0] >> 2) & 0x3F];
quad[1] = base_64_map[((triplet[0] & 0x03) << 4) | ((triplet[1] >> 4) & 0x0F)];
quad[2] = base_64_map[((triplet[1] & 0x0F) << 2) | ((triplet[2] >> 6) & 0x03)];
quad[3] = base_64_map[triplet[2] & 0x3F];
}
/* Encode 2 bytes of data into 4 */
static void encode_doublet(const char *doublet, char *quad, char pad)
{
quad[0] = base_64_map[(doublet[0] >> 2) & 0x3F];
quad[1] = base_64_map[((doublet[0] & 0x03) << 4) | ((doublet[1] >> 4) & 0x0F)];
quad[2] = base_64_map[((doublet[1] & 0x0F) << 2)];
quad[3] = pad;
}
/* Encode 1 byte of data into 4 */
static void encode_singlet(const char *singlet, char *quad, char pad)
{
quad[0] = base_64_map[(singlet[0] >> 2) & 0x3F];
quad[1] = base_64_map[((singlet[0] & 0x03) << 4)];
quad[2] = pad;
quad[3] = pad;
}
This is written as C code rather than using native C++ idioms, but the code shown should compile with C++ (unlike the C99 initializers elsewhere in the source). The base_64_map[] array corresponds to your Base64Chars string. The pad character passed in is normally '=', but can be '\0' since the system I work with has eccentric ideas about not needing padding (pre-dating my involvement in the code, and it uses a non-standard alphabet to boot) and the code handles both the non-standard and the RFC 3548 standard.
The driving code is:
/* Encode input data as Base-64 string. Output length returned, or negative error */
static int base64_encode_internal(const char *data, size_t datalen, char *buffer, size_t buflen, char pad)
{
size_t outlen = BASE64_ENCLENGTH(datalen);
const char *bin_data = (const void *)data;
char *b64_data = (void *)buffer;
if (outlen > buflen)
return(B64_ERR_OUTPUT_BUFFER_TOO_SMALL);
while (datalen >= 3)
{
encode_triplet(bin_data, b64_data);
bin_data += 3;
b64_data += 4;
datalen -= 3;
}
b64_data[0] = '\0';
if (datalen == 2)
encode_doublet(bin_data, b64_data, pad);
else if (datalen == 1)
encode_singlet(bin_data, b64_data, pad);
b64_data[4] = '\0';
return((b64_data - buffer) + strlen(b64_data));
}
/* Encode input data as Base-64 string. Output length returned, or negative error */
int base64_encode(const char *data, size_t datalen, char *buffer, size_t buflen)
{
return(base64_encode_internal(data, datalen, buffer, buflen, base64_pad));
}
The base64_pad constant is the '='; there's also a base64_encode_nopad() function that supplies '\0' instead. The errors are somewhat arbitrary but relevant to the code.
The main point to take away from this is that you should be doing bit manipulation and building up a string that is an exact multiple of 4 bytes for a given input.
std::string EncodeBase64(std::string Data)
{
std::string Binary = std::string();
std::string Result = std::string();
for (std::size_t I = 0; I < Data.size(); ++I)
{
Binary += DecToBinStr(Data[I], 8);
}
if (Binary.size() % 6)
{
Binary.resize(Binary.size() + 6 - Binary.size() % 6, '0');
}
for (std::size_t I = 0; I < Binary.size(); I += 6)
{
Result += Base64Chars[BinToDecStr(Copy(Binary, I, 6))];
if (I == 0) ++I;
}
if (Result.size() % 4)
{
Result.resize(Result.size() + 4 - Result.size() % 4, '=');
}
return Result;
}
std::wstring hashStr(L"4727b105cf792b2d8ad20424ed83658c");
//....
byte digest[16];
How can I get my MD5 hash into digest?
My answer is:
wchar_t * EndPtr;
for (int i = 0; i < 16; i++) {
std::wstring bt = hashStr.substr(i*2, 2);
digest[i] = static_cast<BYTE>(wcstoul(bt.c_str(), &EndPtr, 16));
}
You need to read two characters from hashStr, convert them from hex to a binary value, and put that value into the next spot in digest -- something on this order:
for (int i=0; i<16; i++) {
std::wstring byte = hashStr.substr(i*2, 2);
digest[i] = hextobin(byte);
}
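hextobin is not spelled out above; one possible implementation (my assumption, essentially the same wcstoul call as in the first answer):
// Convert a two-character hex std::wstring such as L"47" into a single byte.
BYTE hextobin(const std::wstring &hexPair)
{
    return static_cast<BYTE>(std::wcstoul(hexPair.c_str(), nullptr, 16));
}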
The C way (I didn't test it, but it should work, though I could've screwed up somewhere, and you will get the general method anyway):
memset(digest, 0, sizeof(digest));
for (int i = 0; i < 32; i++)
{
    wchar_t numwc = hashStr[i];
    BYTE numbt;
    if (numwc >= L'0' && numwc <= L'9') // assumes the string contains only valid hex digits
    {
        numbt = (BYTE)(numwc - L'0');
    }
    else if (numwc >= L'a' && numwc <= L'f') // lowercase, as in the example hash
    {
        numbt = 0xA + (BYTE)(numwc - L'a');
    }
    else // uppercase 'A'-'F'
    {
        numbt = 0xA + (BYTE)(numwc - L'A');
    }
    digest[i/2] += numbt << (4 * ((i + 1) % 2)); // high nibble for even i, low nibble for odd i
}
I am trying to implement a CRC check algorithm, which basically creates a check value based on an input message.
So, suppose I have the hex message 3F214365876616AB15387D5D59, and I want to obtain the CRC24Q value of the message.
The algorithm that I found to do this is the following:
typedef unsigned long crc24;
crc24 crc_check(unsigned char *input) {
unsigned char *octets;
crc24 crc = 0xb704ce; // CRC24_INIT;
int i;
int len = strlen(input);
octets = input;
while (len--) {
crc ^= ((*octets++) << 16);
for (i = 0; i < 8; i++) {
crc <<= 1;
if (crc & 0x1000000)
crc ^= CRC24_POLY;
}
}
return crc & 0xFFFFFF;
}
where input = "3F214365876616AB15387D5D59".
The problem is that ((*octets++) << 16) shifts the ASCII value of each hex character left by 16 bits, not the byte value that the pair of hex digits represents.
So, I made a function to convert the hex digit pairs into characters (bytes).
I know the implementation looks weird, and I wouldn't be surprised if it were wrong.
This is the convert function:
char* convert(unsigned char* message) {
unsigned char* input;
input = message;
int p;
char *xxxx[20];
xxxx[0]="";
for (p = 0; p < length(message) - 1; p = p + 2) {
char* pp[20];
pp[0] = input[0];
char *c[20];
*input++;
c[0]= input[0];
*input++;
strcat(pp,c);
char cc;
char tt[2];
cc = (char ) strtol(pp, &pp, 16);
tt[0]=cc;
strcat(xxxx,tt);
}
return xxxx;
}
So:
unsigned char *msg_hex="3F214365876616AB15387D5D59";
crc_sum = crc_check(convert((msg_hex)));
printf("CRC-sum: %x\n", crc_sum);
Thank you very much for any suggestions.
Shouldn't the if (crc & 0x8000000) be if (crc & 0x1000000)? Otherwise you're testing the 28th bit, not the 25th, for 24-bit overflow.
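For completeness, the hex-string-to-bytes step that convert() is trying to do can be written much like the MAC-address conversions at the top of this page. A sketch of mine (assuming the input contains only hex digits and has even length):
#include <stdlib.h>   /* strtoul */
#include <string.h>   /* strlen */

/* Turn "3F2143..." into raw bytes {0x3F, 0x21, 0x43, ...}; returns the byte count. */
int hex_to_bytes(const char *hex, unsigned char *out)
{
    int n = (int)strlen(hex) / 2;
    for (int i = 0; i < n; i++) {
        char pair[3] = { hex[2 * i], hex[2 * i + 1], '\0' };
        out[i] = (unsigned char)strtoul(pair, NULL, 16);
    }
    return n;
}
Note that crc_check() as posted determines the length with strlen(), so it would stop early if any converted byte happened to be 0x00; the example message contains no zero byte, but passing the byte count returned by hex_to_bytes() explicitly would be more robust.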