I'm trying to implement the FMS attack on WEP. I understand that the attack takes advantage of the probability that parts of the RC4 sbox stay unchanged, using those "known" sbox states to reverse engineer the key. With many samples, the correct key octet should appear more often than the noise.
The value that should be added to the frequency count is:

Key[B] = (S^-1[P.out] - j - S[B + 3]) mod 256

where (I think; the notation is not properly defined):
B starts at 0
P.out is the outputted keystream byte
S is the Sbox
j is the "pointer" used in the RC4 key scheduling algorithm
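In code form, with uint8_t arithmetic supplying the mod-256 wraparound, a single vote works out to roughly the sketch below (variable names are mine; sbox and j are the KSA state after B + 3 steps, and out is the observed keystream byte P.out):

// invert the S-box: find sinv such that sbox[sinv] == out
uint8_t sinv = 0;
while (sbox[sinv] != out){
    sinv++;
}
uint8_t vote = sinv - j - sbox[B + 3]; // candidate for root key octet B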
In my code, I am generating 6 million data packets: a constant root key and a constant plaintext to simulate the constant header, encrypted with RC4(IV + root_key).encrypt(plaintext) (without discarding the first 256 octets). The (IV, encrypted_data) pairs are run through the get_key function:
uint8_t RC4_ksa(const std::string & k, std::array <uint8_t, 256> & s, const uint16_t octets = 256){
    for(uint16_t i = 0; i < 256; i++){
        s[i] = i;
    }

    uint8_t j = 0;
    for(uint16_t i = 0; i < octets; i++){
        j = (j + s[i] + k[i % k.size()]);
        std::swap(s[i], s[j]);
    }

    return j;
}
std::string get_key(const uint8_t keylen, const std::vector <std::pair <std::string, std::string> > & captured){
    std::string rkey = "";              // root key to build
    const std::string & pt = header;    // "plaintext" with constant header

    // recreate root key one octet at a time
    for(uint8_t i = 3; i < keylen; i++){
        // vote counter for current octet
        std::array <unsigned int, 256> votes;
        votes.fill(0);

        uint8_t most = 0;               // most probable index/octet value

        // get vote from each "captured" ciphertext
        for(std::pair <std::string, std::string> const & c : captured){
            const std::string & IV = c.first;

            // IV should be of form (i = root key index + 3, 255, some value)
            if ((static_cast<uint8_t> (IV[0]) != i) ||
                (static_cast<uint8_t> (IV[1]) != 0xff)){
                continue;               // skip this data
            }

            const std::string & ct = c.second;
            const std::string key = IV + rkey;

            // find current packet's vote
            std::array <uint8_t, 256> sbox;     // SBox after simulating; fill with RC4_ksa
            uint8_t j = RC4_ksa(key, sbox, i);  // simulate using key in KSA, up to known octets only

            uint8_t keybytestream = pt[i - 3] ^ ct[i - 3];

            // S^-1[keybytestream]
            uint16_t sinv;
            for(sinv = 0; sinv < 256; sinv++){
                if (sbox[sinv] == keybytestream){
                    break;
                }
            }

            // get mapping
            uint8_t ki = sinv - j - sbox[i];

            // add to tally and keep track of which tally is highest
            votes[ki]++;
            if (votes[ki] > votes[most]){
                most = ki;
            }
        }

        // select highest voted value as next key octet
        rkey += std::string(1, most);
    }

    return rkey;
}
I am getting keys that are completely incorrect. I feel the error is probably an off-by-one error or something similarly silly, but I have asked two people to look at this, and neither managed to figure out what is wrong.
Is there something that is blatantly wrong? If not, what is not-so-obviously wrong?
Related
I have a string like "hello_1_world", and in each iteration I want to increment the "1" part: "hello_2_world", "hello_3_world", etc. So I cannot use a const char* or a string_view; I have to allocate new memory in every iteration with std::string. But I don't want to, for performance reasons. Could you suggest a solution?
So my code is like below (assume index is incremented each time):
std::string value{"hello_" + std::to_string(index) + "_world"};
I tried many approaches. One of them is below:
string_view result(value, 39);
and then concatenate something onto it. But again, I can't modify a string_view.
Do you really need a std::string, or will a simple char[] suffice? If so, then try something like this:
// a 32-bit positive int takes up 10 digits max...
const int MAX_DIGITS = 10;
char value[6 + MAX_DIGITS + 6 + 1];

for(int index = 0; index < ...; ++index) {
    std::snprintf(value, std::size(value), "hello_%d_world", index);
    // use value as needed...
}
Alternatively, if you don't mind having leading zeros in the number, then you can update just that portion of the buffer on each iteration:
const int MAX_DIGITS = ...; // whatever you need, up to 10 max
char value[6 + MAX_DIGITS + 6 + 1];
std::strcpy(value, "hello_");
std::strcpy(&value[6 + MAX_DIGITS], "_world");

for(int index = 0; index < ...; ++index) {
    // use size MAX_DIGITS+1 so all MAX_DIGITS digits fit, then restore the
    // '_' that snprintf's terminating null overwrites
    std::snprintf(&value[6], MAX_DIGITS + 1, "%0.*d", MAX_DIGITS, index);
    value[6 + MAX_DIGITS] = '_';
    // use value as needed...
}
If you really need a std::string, then simply pre-allocate it before the iteration, and then fill in its existing memory during the iteration, similar to a char[]:
const int MAX_DIGITS = 10;
std::string value;
value.reserve(6 + MAX_DIGITS + 6); // allocate capacity

for(int index = 0; index < ...; ++index) {
    value.resize(value.capacity()); // preset the size; no allocation when new size <= capacity
    std::copy_n("hello_", 6, value.begin());
    auto ptr = std::to_chars(&value[6], &value[6 + MAX_DIGITS], index).ptr; // needs <charconv>, C++17
    /* or:
    auto numWritten = std::snprintf(&value[6], MAX_DIGITS + 1, "%d", index);
    auto ptr = &value[6 + numWritten];
    */
    auto newEnd = std::copy_n("_world", 6, ptr);
    value.resize(newEnd - value.data()); // no allocation when shrinking the size
    // use value as needed...
}
Alternatively, with leading zeros:
const int MAX_DIGITS = ...; // up to 10 max
std::string value(6 + MAX_DIGITS + 6, '\0');
std::copy_n("hello_", 6, value.begin());
std::copy_n("_world", 6, &value[6 + MAX_DIGITS]);

for(int index = 0; index < ...; ++index) {
    // again, size MAX_DIGITS+1 so all the digits fit; then restore the '_'
    // that snprintf's terminating null overwrites
    std::snprintf(&value[6], MAX_DIGITS + 1, "%0.*d", MAX_DIGITS, index);
    value[6 + MAX_DIGITS] = '_';
    // use value as needed...
}
You could use a std::stringstream to construct the string incrementally:
std::stringstream ss;
ss << "hello_";
ss << index;
ss << "_world";
std::string value = ss.str();
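If the stream outlives the loop, it can also be reset and reused so a new stringstream isn't constructed every iteration (a sketch; note that ss.str() still returns a fresh string each call):

std::stringstream ss;
for(int index = 0; index < ...; ++index) {
    ss.str("");  // clear the contents, keep the stream object
    ss.clear();  // reset any error flags
    ss << "hello_" << index << "_world";
    std::string value = ss.str();
    // use value as needed...
}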
I've written the code below to convert and store the data from a string (array of chars) called str into an array of 16-bit integers called arr16bit.
The code works. However, I'd say there's probably a better or cleaner way to implement this logic, using fewer variables etc.

I don't want to use the index i to get the modulus % 2, because with little endian I use the same algorithm, except i starts at the last index of the string and counts down instead of up. Any recommendations are appreciated.
// assuming str has already been initialised before this...
int strLength = CalculateStringLength(str); // function implementation not shown
uint16_t* arr16bit = new uint16_t[(strLength / 2) + 1]; // the only C++ feature used here, so I didn't want to tag it
int indexWrite = 0;
int counter = 0;

for(int i = 0; i < strLength; ++i)
{
    arr16bit[indexWrite] <<= 16;
    arr16bit[indexWrite] |= str[i];

    if ( (counter % 2) != 0)
    {
        indexWrite++;
    }
    counter++;
}
Yes, there are some redundant variables here.
You have both counter and i which do exactly the same thing and always hold the same value. And you have indexWrite which is always exactly half (per integer division) of both of them.
You're also shifting too far (16 bits rather than 8).
const std::size_t strLength = CalculateStringLength(str);
std::vector<uint16_t> arr16bit((strLength/2) + 1);

for (std::size_t i = 0; i < strLength; ++i)
{
    arr16bit[i/2] <<= 8;
    arr16bit[i/2] |= str[i];
}
Though I'd probably do it more like this to avoid N redundant |= operations:
const std::size_t strLength = CalculateStringLength(str);
std::vector<uint16_t> arr16bit((strLength/2) + 1);

for (std::size_t i = 0; i < strLength; i += 2)
{
    // with an odd length, str[i+1] reads the NUL terminator (zero)
    arr16bit[i/2] = (str[i] << 8) | str[i+1];
}
You may also wish to consider a simple std::copy over the whole dang buffer, if your endianness is right for it.
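For reference, the whole-buffer copy could look like the sketch below (using std::memcpy for the raw byte copy, which needs <cstring>); it matches the loops above only on a big-endian target, since the raw byte order is preserved:

const std::size_t strLength = CalculateStringLength(str);
std::vector<uint16_t> arr16bit((strLength/2) + 1);
std::memcpy(arr16bit.data(), str, strLength);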
I wrote the following function to generate HMAC-SHA1, referring to https://www.rfc-editor.org/rfc/rfc2104; however, the values I generate differ from the test values given in https://www.rfc-editor.org/rfc/rfc2202 and from what I've tested on https://www.freeformatter.com/hmac-generator.html.
For example, the function should generate de7c9b85b8b78aa6bc8a7a36f70a90701c9db4d9 for the text "The quick brown fox jumps over the lazy dog" with key "key", but it generates d3c446dbd70f5db3693f63f96a5931d49eaa5bab instead.
Could anyone point out my mistakes?
The function:
const int block_size = 64;
const int hash_output_size = 20;
const int ipadVal = 0x36;
const int opadVal = 0x5C;
std::string HMAC::getHMAC(const std::string &text)
{
    // check if key length is block_size
    // else, append 0x00 till the length of new key is block_size
    int key_length = key.length();
    std::string newkey = key;

    if (key_length < block_size)
    {
        int appended_zeros = block_size - key_length;
        // create new string with appended_zeros number of zeros
        std::string zeros = std::string(appended_zeros, '0');
        newkey = key + zeros;
    }

    if (key_length > block_size)
    {
        SHA1 sha1;
        newkey = sha1(key);
    }

    // calculate hash of newkey XOR ipad and newkey XOR opad
    std::string keyXipad = newkey;
    std::string keyXopad = newkey;

    for (int i = 0; i < 64; i++)
    {
        keyXipad[i] ^= ipadVal;
        keyXopad[i] ^= opadVal;
    }

    // get first hash, hash of keyXipad+text
    std::string inner_hash = getSHA1(keyXipad + text);

    // get outer hash, hash of keyXopad+inner_hash
    std::string outer_hash = getSHA1(keyXopad + inner_hash);

    // return outer_hash
    return outer_hash;
}
edit: In the line

std::string zeros = std::string(appended_zeros, '0');

'0' should be 0 instead (the int 0, not the character '0'). Thanks to @Igor Tandetnik for that.
OK, so a little looking around led me to HMAC produces wrong results. It turns out I was making the same mistake of treating the hex digest as ASCII.

I used a function to convert the inner_hash from hex to the raw bytes it represents, and then everything turned out perfect.
The final version of the function:
std::string HMAC::getHMAC(const std::string &text)
{
    // check if key length is block_size
    // else, append 0x00 till the length of new key is block_size
    int key_length = key.length();
    std::string newkey = key;

    if (key_length < block_size)
    {
        int appended_zeros = block_size - key_length;
        // create new string with appended_zeros number of zeros
        std::cout << "\nAppending " << appended_zeros << " 0s to key";
        std::string zeros = std::string(appended_zeros, 0);
        newkey = key + zeros;
    }

    if (key_length > block_size)
    {
        SHA1 sha1;
        newkey = sha1(key);
    }

    // calculate hash of newkey XOR ipad and newkey XOR opad
    std::string keyXipad = newkey;
    std::string keyXopad = newkey;

    for (int i = 0; i < 64; i++)
    {
        keyXipad[i] ^= ipadVal;
        keyXopad[i] ^= opadVal;
    }

    // get first hash, hash of keyXipad+text
    std::string toInnerHash = keyXipad + text;
    std::string inner_hash = getHash(toInnerHash);

    // get outer hash, hash of keyXopad + raw bytes of inner_hash
    std::string toOuterHash = keyXopad + hex_to_string(inner_hash);
    std::string outer_hash = getHash(toOuterHash);

    return outer_hash;
}
hex_to_string function taken from https://stackoverflow.com/a/16125797/3818617
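In case that link goes stale, a minimal sketch of such a conversion (my own paraphrase, not the linked code): every two hex digits become one raw byte.

std::string hex_to_string(const std::string &hex)
{
    std::string bytes;
    bytes.reserve(hex.size() / 2);
    for (std::size_t i = 0; i + 1 < hex.size(); i += 2)
    {
        // parse each two-digit hex pair into one byte
        bytes += static_cast<char>(std::stoi(hex.substr(i, 2), nullptr, 16));
    }
    return bytes;
}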
I have a base64 string containing bits. I have already decoded it with the code linked here, but I'm unable to transform the resulting string into bits I can work with. Is there a way to convert the bytes contained in the decoded string into a vector of bools holding the bits of the string?
I have tried converting the chars with this code, but it failed to convert them properly:
void DecodedStringToBit(std::string const& decodedString, std::vector<bool> &bits) {
    int it = 0;
    for (int i = 0; i < decodedString.size(); ++i) {
        unsigned char c = decodedString[i];
        for (unsigned char j = 128; j > 0; j <<= 1) {
            if (c&j) bits[++it] = true;
            else bits[++it] = false;
        }
    }
}
Your inner for loop is botched: it's shifting j the wrong way. And honestly, if you want to work with 8-bit values, you should use the proper <stdint.h> types instead of unsigned char:
for (uint8_t j = 128; j; j >>= 1)
    bits.push_back(c & j);
Also, remember to call bits.reserve(decodedString.size() * 8); so your program doesn't waste a bunch of time on resizing.
I'm assuming the bit order is MSB first. If you want LSB first, the loop becomes:
for (uint8_t j = 1; j; j <<= 1)
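Putting those pieces together, a minimal corrected version of the whole function (MSB first) might look like:

void DecodedStringToBit(std::string const& decodedString, std::vector<bool>& bits) {
    bits.reserve(bits.size() + decodedString.size() * 8); // one bit per mask position
    for (unsigned char c : decodedString) {
        for (uint8_t j = 128; j; j >>= 1) {
            bits.push_back((c & j) != 0); // MSB first
        }
    }
}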
In OP's code, it is not clear whether the vector bits has sufficient size, for example whether it is resized by the caller (it should not be!). If not, the vector has no space allocated, and bits[++it] may not work; the appropriate thing is to push_back. (Moreover, the code would need a post-increment, i.e. bits[it++], to start from bits[0].)

Furthermore, in OP's code the purpose of unsigned char j = 128 with j <<= 1 is unclear: j becomes zero after the first shift, so the inner loop always runs for only one iteration.
I would try something like this (not compiled):
void DecodedStringToBit(std::string const& decodedString,
                        std::vector<bool>& bits) {
    for (std::size_t charIndex = 0; charIndex != decodedString.size(); ++charIndex) {
        const unsigned char c = decodedString[charIndex];
        for (int bitIndex = 0; bitIndex != CHAR_BIT; ++bitIndex) {
            // CHAR_BIT = bits in a char = 8 (from <climits>)
            const bool bit = c & (1 << bitIndex); // bitwise-AND with mask
            bits.push_back(bit);
        }
    }
}
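For example, assuming decoded holds the base64-decoded bytes:

std::vector<bool> bits;
bits.reserve(decoded.size() * CHAR_BIT); // avoid repeated reallocation
DecodedStringToBit(decoded, bits);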
I have a constructor that creates a BitArray object, which asks the user how many 'bits' they would like to use. It then uses unsigned chars to store the bytes needed to hold that many bits. I then wish to create methods that allow a user to Set a certain bit, and to display the full set of bytes at the end. However, my Set method does not seem to be changing the bit, or else my print function (the operator overload) does not actually print the bits. Can somebody point out the problem, please?
Constructor
BitArray::BitArray(unsigned int n)
{
    //Now let's find the minimum 'bits' needed
    n++;

    //If it does not "perfectly" fit
    //------------------------------------ehhhh
    if( (n % BYTE) != 0)
        arraySize = (n / BYTE);
    else
        arraySize = (n / BYTE) + 1;

    //Now dynamically create the array with full byte size
    barray = new unsigned char[arraySize];

    //Now initialize bytes to 0
    for(int i = 0; i < arraySize; i++)
    {
        barray[i] = (int) 0;
    }
}
Set Method:
void BitArray::Set(unsigned int index)
{
    //Set the Indexed Bit to ON
    barray[index/BYTE] |= 0x01 << (index%BYTE);
}
Print Overload:
ostream &operator<<(ostream& os, const BitArray& a)
{
    for(int i = 0; i < (a.Length()*BYTE+1); i++)
    {
        int curNum = i/BYTE;
        char charToPrint = a.barray[curNum];
        os << (charToPrint & 0X01);
        charToPrint >>= 1;
    }
    return os;
}
for(int i = 0; i < (a.Length()*BYTE+1); i++)
{
    int curNum = i/BYTE;
    char charToPrint = a.barray[curNum];
    os << (charToPrint & 0X01);
    charToPrint >>= 1;
}
Each time your loop runs, you fetch a fresh value for charToPrint. That means the operation charToPrint >>= 1; is useless, since the modification is not carried over to the next iteration. As a result, you always print only the first bit of each char in your array.
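A sketch of one possible fix, assuming Length() returns the number of bytes in the array: derive the bit position from i instead of mutating a fresh copy, and drop the stray +1 from the loop bound so the last iteration doesn't read past the array:

for(int i = 0; i < a.Length()*BYTE; i++)
{
    // select byte i/BYTE, then extract bit i%BYTE (LSB first, matching Set)
    os << ((a.barray[i/BYTE] >> (i % BYTE)) & 0x01);
}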