I am writing a small WebSocket server application that should support both draft 17 and older variations such as draft 00. I didn't have any problems with the newest draft, but I cannot make the draft 00 client happy.
For testing purposes I used the example provided in the official (old) draft 00 document, page 7:
Sec-WebSocket-Key1: 18x 6]8vM;54 *(5: { U1]8 z [ 8
Sec-WebSocket-Key2: 1_ tx7X d < nw 334J702) 7]o}` 0
Tm[K T2u
When calculating the keys by concatenating the digits and dividing by the number of spaces, I get the following two integers: 155712099 and 173347027 (the document lists these two numbers as well).
Next, it says to:
Convert them individually to big-endian.
Concatenate the results into a string and append the last eight bytes (Tm[K T2u).
Create a 128-bit MD5 sum from the string produced in steps 1 and 2.
Armed with this knowledge I've produced the following code:
#define BYTE 8
#define WORD 16
// Little Endian to Big Endian short
#define LE_TO_BE_SHORT(SHORT)\
(((SHORT >> BYTE) & 0x00FF) | ((SHORT << BYTE) & 0xFF00))
// Little Endian to Big Endian long
#define LE_TO_BE_LONG(LONG)\
(((LE_TO_BE_SHORT(LONG >> WORD)) | \
((LE_TO_BE_SHORT((LONG & 0xFFFF)) << WORD))))
uint num1 = LE_TO_BE_LONG(155712099);
uint num2 = LE_TO_BE_LONG(173347027);
QString cookie = QString::fromUtf8("Tm[K T2u");
QString c = QString::number(num1) + QString::number(num2) + cookie;
QByteArray data = c.toUtf8();
qDebug() << QCryptographicHash::hash(data, QCryptographicHash::Md5);
Here's what I get:
←→»α√r¼??┐☺║Pa♠µ
And here's what's expected (again, based on the draft example)
fQJ,fN/4F4!~K~MH
On the other hand, I've noticed that the Wikipedia article does not mention anything about endian conversion. I tried the above code without the conversion (with both the Wikipedia example and the example from the draft) and still cannot reproduce the expected result.
Can anyone point out what the problem is here?
EDIT:
I found that this document has a better explanation of the protocol. It is a different draft (76), but it is similar to 00 in terms of the handshake.
Here is the calculation in the C implementation of websockify. I know that works, so you might be able to use it as a reference.
Finally, with the help of fresh eyes from my colleagues, I figured out what I was doing wrong: I was literally concatenating the decimal text of the two integers into a string. Instead I needed to concatenate their bytes:
uint num1 = LE_TO_BE_LONG(155712099); // macros definition can
uint num2 = LE_TO_BE_LONG(173347027); // be found in the question
QString cookie = QString::fromUtf8("Tm[K T2u");
QByteArray array;
array = QByteArray((const char*)&num1, sizeof(int));
array += QByteArray((const char*)&num2, sizeof(int));
array += QByteArray(cookie.toStdString().data(), cookie.length());
qDebug() << QCryptographicHash::hash(array, QCryptographicHash::Md5);
Make sure that you don't use the overloaded constructor that does not take the size, because Qt will create a slightly larger array padded with garbage. At least that was the case for me.
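For completeness, here is how the two key numbers themselves can be extracted from the header values, as described at the top of the question. This is a minimal sketch: parseKey is a made-up name, and it assumes the header value contains at least one space, which the draft guarantees.

#include <cstdint>
#include <string>

// Concatenate the digits of a draft-00 Sec-WebSocket-Key value and
// divide by the number of spaces, as described in the question.
uint32_t parseKey(const std::string &value)
{
    uint64_t digits = 0;
    uint32_t spaces = 0;
    for (char c : value) {
        if (c >= '0' && c <= '9')
            digits = digits * 10 + (c - '0');
        else if (c == ' ')
            ++spaces;
    }
    return static_cast<uint32_t>(digits / spaces);
}

// For the draft's example keys this yields 155712099 and 173347027.
// Note that the keys contain runs of multiple consecutive spaces that
// may not have survived the formatting above.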
Related
I have to use an encryption algorithm which takes an unsigned int as input. For this I want to convert my password, which is an alphanumeric 8-character string, to an int.
I am using the code below and am not sure if it works correctly. I want to convert my characters, say "test", to an unsigned integer.
I do get an output value. But I'm not sure if this is the right way of doing this and if there can be any side effects.
Can you please explain what actually is happening here?
unsigned int ConvertStringToUInt(CString Input)
{
unsigned int output;
output = ((unsigned int)Input[3] << 24);
output += ((unsigned int)Input[2] << 16);
output += ((unsigned int)Input[1] << 8);
output += ((unsigned int)Input[0]);
return output;
}
For an input of "ABCD" the output of ConvertStringToUInt will be 0x44434241 because:
0x41 is the ASCII code of 'A'
0x42 is the ASCII code of 'B'
0x43 is the ASCII code of 'C'
0x44 is the ASCII code of 'D'
<< being the shift left operator.
So we have:
0x44 << 24 = 0x44000000
0x43 << 16 = 0x00430000
0x42 << 8 = 0x00004200
output =
0x44000000
+ 0x00430000
+ 0x00004200
+ 0x00000041
============
0x44434241
Be aware that your ConvertStringToUInt function only works if the length of the provided string is exactly 4, so this function is useless for your case because the length of your password is 8.
You can't do a unique mapping of an 8-character alphanumeric string to a 32-bit integer.
(10 + 26 + 26) ^ 8 is 218,340,105,584,896 (digits + upper-case and lower-case letters)
(10 + 26) ^ 8 is 2,821,109,907,456 (digits + case-insensitive letters)
2 ^ 32 is 4,294,967,296 (a 32 bit unsigned int)
So if you need to convert your 8 characters into a 32-bit number, you will need to use hashing, and that means that multiple passwords will map to the same key.
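As an illustration of what such a hash could look like (not a recommendation for password handling, see the caveats below), here is a minimal sketch using the well-known 32-bit FNV-1a function:

#include <cstdint>
#include <string>

// FNV-1a: a simple, widely documented 32-bit hash. Distinct inputs
// can collide, and the mapping is not reversible.
uint32_t fnv1a32(const std::string &s)
{
    uint32_t h = 2166136261u;   // FNV offset basis
    for (unsigned char c : s) {
        h ^= c;                 // mix in one byte
        h *= 16777619u;         // FNV prime
    }
    return h;
}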
Note that this is NOT encryption, because the mapping is not reversible. It cannot be reversible; this can be proven mathematically.
The Wikipedia page on hash functions is a good place to start learning about this. Also the page on the pigeonhole principle.
However, it should also be noted that 8 character passwords are too small to be secure. And if you are hashing to a 32 bit code, brute-force attacks will be easy.
What you are trying to do is to reinvent a hashing algorithm, very poorly. I strongly recommend using SHA-256 or an equivalent hashing algorithm available in your system's libraries, which is best practice and usually sufficient for transmitting and comparing passwords.
You should read up on the basics of this topic before writing any more code; otherwise the security level of your application won't be much better than no hashing/encryption at all, but with the false sense of being on the safe side. Start here, for instance.
I recently needed to convert the MNIST data set to images and labels. It is binary, and its structure is described in the previous link, so I did a little research, and as I'm a fan of C++, I read up on binary I/O in C++ and then found this link on Stack Overflow. That link works well, but there is no code commenting and no explanation of the algorithm, so I got confused, and that raised some questions which I need a professional C++ programmer to answer.
1- What is the algorithm to convert the data set in C++ with the help of ifstream?
I understand that you read a file as binary with file.read and move to the next record, but in C we would define a struct and map it onto the file contents; I can't see any struct in the C++ program, for example to read this:
[offset]  [type]           [value]            [description]
0000      32 bit integer   0x00000803 (2051)  magic number
0004      32 bit integer   60000              number of images
0008      32 bit integer   28                 number of rows
0012      32 bit integer   28                 number of columns
0016      unsigned byte    ??                 pixel
How can we go to a specific offset, for example 0004, read a 32-bit integer there, and put it into an integer variable?
2- What is the ReverseInt function doing? (It is obviously not simply reversing the digits of an integer.)
int ReverseInt(int i)
{
    unsigned char ch1, ch2, ch3, ch4;
    ch1 = i & 255;          // lowest byte
    ch2 = (i >> 8) & 255;
    ch3 = (i >> 16) & 255;
    ch4 = (i >> 24) & 255;  // highest byte
    // reassemble the bytes in the opposite order
    return ((int)ch1 << 24) + ((int)ch2 << 16) + ((int)ch3 << 8) + ch4;
}
I did a little debugging with cout, and when it received for example 270991360 it returned 10000, and I cannot find any relation between the two numbers. I understand that it ANDs the shifted number with 255, but why?
PS:
1- I already have the MNIST data converted to images, but I want to understand the algorithm.
2- I've already unzipped the .gz files, so the file is pure binary.
1- What is the algorithm to convert the data set in C++ with the help of ifstream?
This function reads a file (t10k-images-idx3-ubyte.gz) as follows:
Read the magic number and adjust endianness
Read the number of images and adjust endianness
Read the number of rows and adjust endianness
Read the number of columns and adjust endianness
Read all images x rows x columns pixel bytes (but discard them)
The function uses a plain int and always switches endianness; that means it targets one very specific architecture and is not portable.
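As an illustration, a portable version of the header read could look like the sketch below, assuming the layout in the table above (readBigEndian32 and readHeader are made-up names):

#include <cstdint>
#include <fstream>

// IDX files store their integers big-endian, so assemble the value
// from individual bytes instead of relying on the host's int layout.
uint32_t readBigEndian32(std::ifstream &file)
{
    unsigned char b[4];
    file.read(reinterpret_cast<char*>(b), 4);
    return (uint32_t(b[0]) << 24) | (uint32_t(b[1]) << 16)
         | (uint32_t(b[2]) << 8)  |  uint32_t(b[3]);
}

void readHeader(std::ifstream &file)
{
    uint32_t magic   = readBigEndian32(file); // offset 0000, 0x00000803
    uint32_t nImages = readBigEndian32(file); // offset 0004
    uint32_t nRows   = readBigEndian32(file); // offset 0008
    uint32_t nCols   = readBigEndian32(file); // offset 0012
    // offset 0016 onward: nImages * nRows * nCols pixel bytes
}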
How can we go to a specific offset, for example 0004, read a 32-bit integer there, and put it into an integer variable?
ifstream provides a function to seek to a given position:
file.seekg( posInBytes, std::ios_base::beg);
At the given position, you could read the 32-bit integer:
int32_t val;
file.read ((char*)&val,sizeof(int32_t));
2- What is the ReverseInt function doing?
This function reverses the order of the bytes of an int value:
Considering a 32-bit integer like aaaaaaaabbbbbbbbccccccccdddddddd, it returns the integer ddddddddccccccccbbbbbbbbaaaaaaaa. That is also the relation you were looking for in your debugging: 270991360 is 0x10270000, and reversing its four bytes gives 0x00002710, which is 10000.
This is useful for normalizing endianness; however, it is probably not very portable, as int might not be 32-bit (it could be e.g. 16-bit or 64-bit).
I am new here, and would like to ask this question.
I am working with a binary file in which each byte, group of bytes, or even part of a byte has a different meaning.
What I have been trying so far is to read a number of bytes (4 in my example) as one block.
I have them in hexadecimal representation, like: 00 1D FB C8.
Using the following code, I read them separately:
for (int j = 36; j < 40; j++)
{
cout << dec << (bitset<8>(fileBuf[j])).to_ulong();
}
where j is the position of the byte in the file. The previous code gives me 029251200, which is wrong. What I want is to read the 4 bytes at once and get the answer 1965000.
I appreciate any help.
Thank you.
DWORD final = (fileBuf[j] << 24) + (fileBuf[j+1] << 16) + (fileBuf[j+2] << 8) + (fileBuf[j+3]);
Also, it depends on which byte order (endianness) you want (ABCD / DCBA / CDAB).
EDIT (can't reply due to low rep, just joined today)
I tried to extend the bitset, however it gave the value of the first byte only
It will not work, because fileBuf is most likely a byte array; extending the bitset from 8 bits to 32 bits (int) won't make any difference, because each element is still only 8 bits. You have to mathematically combine the values from the 4 array elements back into the original integer representation; see the code above this edit.
The answer isn't "wrong"; this is a logic error. You're not storing the values and combining them in one computation.
C8 is 200 in decimal form, so you're printing each byte's value on its own rather than appending it to the original subset.
The answer it spat out was in fact what you programmed it to do.
You need to either extend the bitset to a larger width so the other bytes can be accumulated into it, or provide some other means of combining them before output.
Keeping the format of the function from the question, you could do:
// little-endian (mask with 0xFF in case fileBuf's element type is a signed char)
{
    int i = (fileBuf[j] & 0xFF) | ((fileBuf[j+1] & 0xFF) << 8) | ((fileBuf[j+2] & 0xFF) << 16) | ((fileBuf[j+3] & 0xFF) << 24);
    cout << dec << i;
}
// big-endian
{
    int i = (fileBuf[j+3] & 0xFF) | ((fileBuf[j+2] & 0xFF) << 8) | ((fileBuf[j+1] & 0xFF) << 16) | ((fileBuf[j] & 0xFF) << 24);
    cout << dec << i;
}
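For example, with the four bytes from the question, the big-endian variant produces the expected value. This is a self-contained test; fileBuf is declared as unsigned char here, so the masking above would be redundant:

#include <iostream>
using namespace std;

int main()
{
    unsigned char fileBuf[] = { 0x00, 0x1D, 0xFB, 0xC8 };
    int j = 0;
    int i = (fileBuf[j+3]<<0) | (fileBuf[j+2]<<8) | (fileBuf[j+1]<<16) | (fileBuf[j]<<24);
    cout << dec << i << endl; // prints 1965000
}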
I'm programming in C++ and I have to store big numbers in one of my exercises.
The biggest number I have to store is 9 780 321 563 842.
Each time I try to print the number (contained in a variable), it gives me a wrong result (not that number).
A 32-bit type isn't enough, since 2^32 is a 10-digit number and I have to store a 13-digit number. But with 64 bits you can represent a number that has 20 digits, so I tried the type "uint64_t", but that didn't work for me and I really don't understand why.
So I searched on the internet to find which type would be sufficient for my variable to fit in. I saw people on this forum with the same problem, but they solved it using long long int or long double as the type. None of those worked for me (neither did long float).
I really don't know which other type could store that number; I tried a lot of types but nothing worked for me.
Thanks for your help! :)
--
EDIT: The code is a bit long and complex and does not matter for the question, so here is what I actually do with the variable containing that number:
string barcode_s = "9780321563842";
uint64_t barcode = atoi(barcode_s.c_str());
cout << "Barcode is : " << barcode << endl;
Of course I don't put that number in a (string) variable "barcode_s" just to convert it directly back to a number; that is simply what happens in my program. I read text from an input file and put it in "barcode_s" (the text I read into that variable is always a number), and then I convert that string to a number (using atoi).
So I presume the problem comes from the atoi function?
Thanks for your help!
The problem is indeed atoi: it returns an int, which on most platforms is a 32-bit integer. Converting to uint64_t from int will not magically restore the information that has been lost.
There are several solutions, though. In C++03, you could use a stringstream to handle the conversion (note the unsigned long long: a plain unsigned long is still only 32 bits on some platforms):
std::istringstream stream(barcode_s);
unsigned long long barcode = 0;
if (not (stream >> barcode)) { std::abort(); }
In C++11, you can simply use stoul or stoull:
unsigned long long const barcode = std::stoull(barcode_s);
Your number 9 780 321 563 842 is hex 8E52897B4C2, which fits into 44 bits (4 bits per hex digit), so any 64-bit integer, no matter if signed or unsigned, will have space to spare. uint64_t will work, and the number will even fit into a double with no loss of precision.
It follows that the remaining issue is a mistake in your code; usually that is either an accidental conversion of the 64-bit number to another type somewhere, or calling the wrong function to print a 64-bit integer.
Edit: just saw your code. atoi returns int, as in int32_t. Converting that to uint64_t will not reconstruct the 64-bit number. Have a look at this: http://msdn.microsoft.com/en-us/library/czcad93k.aspx
The atoll() function converts a char* to a long long.
If you don't have that function available, write your own in the meantime:
uint64_t result = 0;
for (unsigned int ii = 0; str.c_str()[ii] != 0; ++ii)
{
    result *= 10;
    result += str.c_str()[ii] - '0';
}
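Wrapped in a function, the same loop might be used like this (a minimal sketch; toUint64 is a made-up name, it assumes str contains only decimal digits, and it does no error checking):

#include <cstdint>
#include <iostream>
#include <string>

uint64_t toUint64(const std::string &str)
{
    uint64_t result = 0;
    for (char c : str)
        result = result * 10 + (c - '0');  // shift in one decimal digit
    return result;
}

int main()
{
    std::cout << toUint64("9780321563842") << std::endl; // prints 9780321563842
}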
I'm writing a program in C++ to listen to a stream of TCP messages from another program that provides tracking data from a webcam. I have the socket connected and I'm getting all the information in, but I'm having difficulty splitting it up into the data I want.
Here's the format of the data coming in:
8 byte header:
4 character string,
integer
32 byte message:
integer,
float,
float,
float,
float,
float
This is all being read into a char array called buffer. I need to be able to parse the bytes out into the primitives I need. I have tried making smaller sub-arrays, such as headerString, filled by looping through and copying the first 4 elements of the buffer array, and I do get the correct header ('CCV ') printed out. But when I try the same thing with the next four elements (to get the integer) and print it out, I get weird ASCII characters. I've tried converting the headerInt array to an integer with the atoi method from stdlib.h, but it always prints out zero.
I've already done this in Python using the excellent unpack method; is there any alternative in C++?
Any help greatly appreciated,
Jordan
Links
CCV packet structure
Python unpack method
The buffer only contains the raw image of what you read over the network. You'll have to convert the bytes in the buffer to whatever format you want. The string is easy:
std::string s(buffer + sOffset, 4);
(Assuming, of course, that the internal character encoding is the same as in the file, probably an extension of ASCII.)
The others are more complicated, and depend on the format of the external data. From the description of the header, I gather that the integers are four bytes, but that still doesn't tell me anything about their representation. Depending on the case, either:
int getInt(unsigned char* buffer, int offset)
{
    // big-endian input: the first byte is the most significant
    return (buffer[offset    ] << 24)
         | (buffer[offset + 1] << 16)
         | (buffer[offset + 2] <<  8)
         | (buffer[offset + 3]      );
}
or
int getInt(unsigned char* buffer, int offset)
{
    // little-endian input: the first byte is the least significant
    return (buffer[offset + 3] << 24)
         | (buffer[offset + 2] << 16)
         | (buffer[offset + 1] <<  8)
         | (buffer[offset    ]      );
}
will probably do the trick. (Other four-byte representations of integers are possible, but they are exceedingly rare. Similarly, the conversion of the unsigned results of the shifts and ors into an int is implementation defined, but in practice, the above will work almost everywhere.)
The only hint you give concerning the representation of the floats is in the message format: 32 bytes, minus a 4-byte integer, leaves 28 bytes for 5 floats; but 28 isn't divisible by five, so I cannot even guess the length of the floats (except that there must be some padding in there somewhere). Converting floating point can be more or less complicated if the external format isn't exactly like the internal format.
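If it turns out that the sender uses 4-byte IEEE 754 floats in the same byte order as its integers (an assumption, not something the question confirms), a plausible sketch reuses getInt from above and copies the bit pattern into a float:

#include <cstring>

float getFloat(unsigned char* buffer, int offset)
{
    int bits = getInt(buffer, offset);            // reassemble the four bytes
    float result;
    std::memcpy(&result, &bits, sizeof(result));  // reinterpret the bits as a float
    return result;
}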
Something like this may work:
struct Header {
    char string[4];
    int integers[2];
    float floats[5];
};

Header* header = (Header*)buffer;
You should check that sizeof(Header) == 32.
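If you go that route, a somewhat safer variation is to memcpy the buffer into the struct instead of casting the pointer, which avoids alignment problems (still assuming, as above, that the compiler inserts no padding):

#include <cstring>

Header header;  // Header as defined above
static_assert(sizeof(Header) == 32, "unexpected padding");
std::memcpy(&header, buffer, sizeof(header));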