Change bit of hex number with leading zeros in C++/C

I have this number in a hex string:
002A05
I need to set the 7th bit of this number to 1, so after the conversion I will get
022A05
But it has to work with every 6-character hex number.
I tried converting the hex string to an integer via strtol, but that function strips the leading zeros.
Please help me understand how I can solve this.

int hex=0x002A05;
int mask = 0x020000;
printf ("%06X",hex | mask);
hope this helps

In a 24-bit number bit #7 (counting from the left, as you did in your example, not from the right, as is done conventionally) is always going to be in the second byte from the left. You can solve your problem without converting the entire number to integer by taking that second hex digit, converting it to a number 0..15, setting its bit #3 (again counting from the left), and converting the result back to a hex digit.
int fromHex(char c) {
    c = toupper(c);
    if (c >= '0' && c <= '9') {
        return c - '0';
    } else {
        return c - 'A' + 10;
    }
}
char toHexDigit(int n) {
    return n < 10 ? '0' + n : 'A' + n - 10;
}
char myNum[] = "002A05";
myNum[1] = toHexDigit(fromHex(myNum[1]) | 2);
printf("%s\n", myNum);
This prints 022A05.

It sounds to me like you have a string, not a hex constant, that you want to manipulate. You can do it pretty easily by bit-twiddling the ASCII value of the hex character. If you have a char representing a hex digit, like char h = '6'; or char h = 'C';, you can set the 3rd-from-the-left (2nd-from-the-right) bit of the number that the character represents using:
h = h > '7' ? h <= '9' ? h + 9 : ((h + 1) | 2) - 1 : h | 2;
So you can do this to the second character (4 + 3 bits) in your string. This works for any hex string with 2 or more characters. Here is your example:
char hex_string[] = "002A05";
// Get the second character from the string
char h = hex_string[1];
// Calculate the new character
h = h > '7' ? h <= '9' ? h + 9 : ((h + 1) | 2) - 1 : h | 2;
// Set the second character in the string to the result
hex_string[1] = h;
printf("%s", hex_string); // 022A05

You asked about strtol specifically, so to answer your question, just add padding after you convert the number with strtol:
const char *s = "002A05";
int x = strtol(s, NULL, 16);
x |= (1<<17);
printf("%.6X\n",x);


Trouble understanding Caesar decryption steps

The following code will decrypt a Caesar-encrypted string given the ciphertext and the key:
#include <iostream>
#include <string>
std::string decrypt(std::string cipher, int key) {
    std::string d = "";
    for (int i = 0; i < cipher.length(); i++) {
        d += ((cipher[i] - 65 - key + 26) % 26) + 65;
    }
    return d;
}
int main()
{
    std::cout << decrypt("WKLVLVJRRG", 3) << std::endl;  // THISISGOOD
    std::cout << decrypt("NBCMCMAIIX", 20) << std::endl; // THISISGOOD
}
I'm having trouble understanding the operations performed to compute the new character's ASCII code at this line:
d += ((cipher[i]-65-key+26) %26)+65;
The first subtraction should shift the number range
Then we subtract the key, as per the definition of Caesar decryption
We add 26 to deal with negative numbers (?)
The modulo limits the output, as the alphabet range is 26 letters long
We come back to the old range by adding 65 at the end
What am I missing?
If we reorder the expression slightly, like this:
d += (((cipher[i] - 65) + (26 - key)) % 26) + 65;
We get a formula for rotating cipher[i] left by key:
cipher[i] - 65 brings the ASCII range A..Z into an integer range 0..25
(cipher[i] - 65 + 26 - key) % 26 rotates that value left by key (subtracts key modulo 26)
+ 65 to shift the range 0..25 back into ASCII range A..Z.
e.g. given a key of 2, A becomes Y, B becomes Z, C becomes A, etc.
Let me give you a detailed explanation of the Caesar cipher, so that you can understand that formula. I will show ultra-simple code examples, but also more advanced one-liners.
The biggest problems are potential overflows. So, we need to deal with that.
Then we need to understand what encryption and decryption mean here. If encryption shifts everything one to the right, decryption will shift it back to the left again.
So, with "def" and key=1, the encrypted string will be "efg".
And decryption with key=1 will shift it to the left again. Result: "def"
We can observe that we simply need to shift by -1, so by the negative of the key.
So, basically, encryption and decryption can be done with the same routine. We just need to invert the keys.
Let us look now at the overflow problem. For the moment we will start with uppercase characters only. Characters have an associated code. For example, in ASCII, the letter 'A' is encoded as 65, 'B' as 66, and so on. Because we do not want to calculate with such numbers, we normalize them: we simply subtract 'A' from each character. Then
'A' - 'A' = 0
'B' - 'A' = 1
'C' - 'A' = 2
'D' - 'A' = 3
You see the pattern. If we now want to encrypt the letter 'C' with key 3, we can do the following:
'C' - 'A' + 3 = 5. Then we add 'A' again to get back a letter, and we will get 5 + 'A' = 'F'
That is the whole magic.
But what do we do with an overflow beyond 'Z'? This can be handled by a simple modulo operation.
Let us look at 'Z' + 1. We do 'Z' - 'A' = 25, then +1 = 26, and now modulo 26 = 0; then plus 'A' will be 'A'
And so on and so on. The resulting Formula is: (c-'A'+key)%26+'A'
Next, what about negative keys? This is also simple. Assume an 'A' and key=-1.
The result will be a 'Z'. But this is the same as shifting 25 to the right. So, we can simply convert a negative key to a positive shift. The simple statement will be:
if (key < 0) key = (26 + (key % 26)) % 26;
And then we can call our transformation function with a simple lambda. One function for encryption and decryption. Just with an inverted key.
And with the above formula, there is even no need to check for negative values. It will work for positive and negative values alike.
So, key = (26 + (key % 26)) % 26; will always work.
Some extended information, if you work with the ASCII character representation. Please have a look at any ASCII table. You will see that corresponding uppercase and lowercase characters differ by 32. Or, if you look at the binary:
char dec bin       char dec bin
'A'  65  0100 0001 'a'  97  0110 0001
'B'  66  0100 0010 'b'  98  0110 0010
'C'  67  0100 0011 'c'  99  0110 0011
. . .
So, if you already know that a character is alphabetic, then the only difference between upper- and lowercase is bit number 5. If we want to know whether a char is lowercase, we can get this by masking that bit: c & 0b00100000, which is equal to c & 32 or c & 0x20.
If we want to operate on either uppercase or lowercase characters, then we can mask the "case" away. With c & 0b00011111, or c & 31, or c & 0x1F, we will always get the uppercase equivalents, already normalized to start at one.
char dec bin       Masking    char dec bin       Masking
'A'  65  0100 0001 & 0x1F = 1 'a'  97  0110 0001 & 0x1F = 1
'B'  66  0100 0010 & 0x1F = 2 'b'  98  0110 0010 & 0x1F = 2
'C'  67  0100 0011 & 0x1F = 3 'c'  99  0110 0011 & 0x1F = 3
. . .
So, if we use an alpha character, mask it, and subtract 1, then we get as a result 0..25 for any upper- or lowercase character.
Additionally, I would like to repeat the key handling. Positive keys will encrypt a string, negative keys will decrypt a string. But, as said above, negative keys can be transformed into positive ones. Example:
Shifting by -1 is the same as shifting by +25
Shifting by -2 is the same as shifting by +24
Shifting by -3 is the same as shifting by +23
Shifting by -4 is the same as shifting by +22
So, it is very obvious that we can calculate an always-positive key with: 26 + key. For negative keys, this will give us the above offsets.
And for positive keys, we would have an overflow over 26, which we can eliminate with a modulo-26 division:
'A'--> 0 + 26 = 26 26 % 26 = 0
'B'--> 1 + 26 = 27 27 % 26 = 1
'C'--> 2 + 26 = 28 28 % 26 = 2
'D'--> 3 + 26 = 29 29 % 26 = 3
--> (c + key) % 26 will eliminate overflows and result in the correct new en-/decrypted character.
And if we combine this with the above wisdom for negative keys, we can write: ((26+(key%26))%26), which will work for all positive and negative keys.
Combining that with the masking gives us the following fragment:
const char potentialLowerCaseIndicator = c & 0x20;
const char upperOrLower = c & 0x1F;
const char normalized = upperOrLower - 1;
const int withOffset = normalized + ((26+(key%26))%26);
const int withOverflowCompensation = withOffset % 26;
const char newUpperCaseCharacter = (char)withOverflowCompensation + 'A';
const char result = newUpperCaseCharacter | (potentialLowerCaseIndicator );
Of course, all of the above statements can be combined into one lambda:
#include <string>
#include <algorithm>
#include <cctype>
#include <iostream>
// Simple function for Caesar encryption/decryption
std::string caesar(const std::string& in, int key) {
    std::string res(in.size(), ' ');
    std::transform(in.begin(), in.end(), res.begin(), [&](char c) {
        return std::isalpha(c) ? (char)((((c & 31) - 1 + ((26 + (key % 26)) % 26)) % 26 + 65) | (c & 32)) : c;
    });
    return res;
}
int main() {
    std::string test{ "aBcDeF xYzZ" };
    std::cout << caesar(test, 5);
}
The last function can also be made more verbose for easier understanding:
std::string caesar1(const std::string& in, int key) {
    std::string res(in.size(), ' ');
    auto convert = [&](const char c) -> char {
        char result = c;
        if (std::isalpha(c)) {
            // Handling of a negative key (shift to the left). The key will be converted to a positive value
            if (key < 0) {
                // limit the key to 0,-1,...,-25
                key = key % 26;
                // Key was negative: now we have something between 0 and 26
                key = 26 + key;
            }
            // Check and remember whether the original character was lowercase
            const bool originalIsLower = std::islower(c);
            // We want to work with uppercase only
            const char upperCaseChar = (char)std::toupper(c);
            // But we want to start with 0 and not with 'A' (65)
            const int normalized = upperCaseChar - 'A';
            // Now add the key
            const int shifted = normalized + key;
            // The addition result may be bigger than 25, an overflow. Cap it
            const int capped = shifted % 26;
            // Get back a character
            const char convertedUppercase = (char)capped + 'A';
            // And restore the original case
            result = originalIsLower ? (char)std::tolower(convertedUppercase) : convertedUppercase;
        }
        return result;
    };
    std::transform(in.begin(), in.end(), res.begin(), convert);
    return res;
}
And if you want to see a solution with only the simplest statements, then see below.
#include <iostream>
#include <string>
using namespace std;
string caesar(string in, int key) {
    // Here we will store the resulting encrypted/decrypted string
    string result{};
    // Handling of a negative key (shift to the left). The key will be converted to a positive value
    if (key < 0) {
        // limit the key to 0,-1,...,-25
        key = key % 26;
        // Key was negative: now we have something between 0 and 26
        key = 26 + key;
    }
    // Read character by character from the string
    for (unsigned int i = 0; i < in.length(); ++i) {
        char c = in[i];
        // Check for an alpha character
        if ((c >= 'A' and c <= 'Z') or (c >= 'a' and c <= 'z')) {
            // Check and remember whether the original character was lowercase
            bool originalIsLower = (c >= 'a' and c <= 'z');
            // We want to work with uppercase only
            char upperCaseChar = originalIsLower ? c - ('a' - 'A') : c;
            // But we want to start with 0 and not with 'A' (65)
            int normalized = upperCaseChar - 'A';
            // Now add the key
            int shifted = normalized + key;
            // The addition result may be bigger than 25, an overflow. Cap it
            int capped = shifted % 26;
            // Get back a character
            char convertedUppercase = (char)capped + 'A';
            // And restore the original case
            result += originalIsLower ? convertedUppercase + ('a' - 'A') : convertedUppercase;
        }
        else
            result += c;
    }
    return result;
}
int main() {
    string test{ "aBcDeF xYzZ" };
    string encrypted = caesar(test, 5);
    string decrypted = caesar(encrypted, -5);
    cout << "Original:  " << test << '\n';
    cout << "Encrypted: " << encrypted << '\n';
    cout << "Decrypted: " << decrypted << '\n';
}

Caesar cipher which moves the text about X positions [closed]

I'm looking for a simple code (only c++ strings and loops allowed) which reads a one-line string text as a string and then outputs this string "encrypted by shifting".
It should look like this:
Please enter the text to be encrypted: abc def xyz!ABC XYZ?
Please enter the number of shift positions (as a positive integer):
3
def ghi abc!DEF ABC?
The 3 on its own line is the user's input.
And this is how far I got:
#include <iostream>
#include <string>
#include <cstdlib>
using namespace std;
int main()
{
    string decrypted_Text;
    int number_of_movements;
    cout << "Please enter the text to be encrypted: ";
    cin >> decrypted_Text;
    cout << "Please enter the number of shift positions (as a positive integer): ";
    cin >> number_of_movements;
    system("PAUSE");
    return 0;
}
I know that text.size() will be needed here, or at least I can imagine that, but I just don't know how to implement it.

working of a faster I/O method

I was studying faster I/O methods for programming problems,
and found this method of using getchar_unlocked() (risky, but still).
I looked around quite a bit but couldn't figure out how it scans an integer value,
or in other words, what the 4 lines in the scanint() function
defined below mean and how they work.
#include <iostream>
#include <cstdio>
#define gc getchar_unlocked
void scanint(int &x)
{
    int c = gc();
    x = 0;
    for (; (c < 48 || c > 57); c = gc());
    for (; c > 47 && c < 58; c = gc())
    { x = (x << 1) + (x << 3) + c - 48; }
}
int main()
{
    int n, k;
    scanint(n);
    scanint(k);
    int cnt = 0;
    while (n--)
    {
        int num;
        scanint(num);
        if (num % k == 0) cnt++;
    }
    printf("%d", cnt);
    return 0;
}
The code has hard-coded ASCII values for the chars '0' and '9', so it skips over anything which does not lie in that range (the first for loop in the four lines of interest to you).
Then while it sees characters in the '0' - '9' range it multiplies its running total by 10 (by shifting it left once to double it, then adding that to itself shifted left three times, which is like multiplying by 8) and adds the current char - the code for '0'.
ASCII value of '0' is 48, and each subsequent digit goes one more. i.e. '1'->49, '2'->50... and so on.
A side effect of this is that if you take a character digit, meaning something between '0' and '9', and subtract the ASCII value of '0' from it, then you get the integer value of that digit.
So in the line x = (x<<1) + (x<<3) + c - 48;, the c-48 part is converting the digit character (an ASCII coded character) into a number between 0-9 that refers to that character.
(x<<1)+(x<<3) is the same as x * 10 (for more information, check out http://en.wikipedia.org/wiki/Multiplication_algorithm#Shift_and_add and How can I multiply and divide using only bit shifting and adding?). In fact, this part of the code is unnecessarily obfuscated. The compiler can optimize multiplication in many different ways to make it as fast as possible, so we don't need to implement the bit shifting manually. It makes for an interesting college-level puzzle, though.
for(;(c<48 || c>57);c = gc()); This loop will ignore all the characters until it receives one that falls within the '0' to '9' range. So if the user started by typing a space or any other character, it'll simply be ignored.
By the time the code hits for(;c>47 && c<58;c = gc()) {x = (x<<1) + (x<<3) + c - 48;} line, the variable c is already initialized to the first digit that the user has typed. This loop leaves the initialization empty so the control flow will dive right into the loop and start computing the number as each character is typed.
The loop will go on as long as the user keeps typing digits, as soon as the user types something other than a digit, the loop will terminate, finalizing the number.
As long as the user keeps typing digits, the line x = (x<<1) + (x<<3) + c - 48; will keep getting executed over and over again, each time with c being the character just typed. And x will multiply itself by 10 and add the new digit in.
Let's say the user types 2014. Here's how the values in c and x will progress.
c = '2' #ASCII value 50
x = 2
c = '0' #ASCII value 48
x = 20
c = '1' #ASCII value 49
x = 201
c = '4' #ASCII value 52
x = 2014
HTH.
The line
for(;(c<48 || c>57);c = gc());
reads the characters which are not in between '0' to '9'.
The line
for(;c>47 && c<58;c = gc())
reads the characters between '0' to '9' one by one.
The line
{x = (x<<1) + (x<<3) + c - 48;}
is simply equivalent to
x = 10 * x + c - 48;
A simplified version of this function can be rewritten as:
#define gc getchar_unlocked
int read_int()
{
    int c = gc();
    // skip anything that is not a digit
    while (c < '0' || c > '9')
        c = gc();
    int ret = 0;
    while (c >= '0' && c <= '9')
    {
        ret = 10 * ret + c - 48;
        c = gc();
    }
    return ret;
}

How to random flip binary bit of char in C/C++

If I have a char array A that I use to store hex:
A = "0A F5 6D 02" (size = 11)
The binary representation of this char array is:
00001010 11110101 01101101 00000010
I want to ask: is there any function that can randomly flip bits?
That is:
if the parameter is 5
00001010 11110101 01101101 00000010
-->
10001110 11110001 01101001 00100010
it will randomly choose 5 bits to flip.
I am trying to convert this hex data to binary data, use a bitmask method to achieve my requirement, and then turn it back to hex. I am curious whether there is any method to do this job more quickly.
Sorry, my question description was not clear enough. Simply put, I have some hex data, and I want to simulate bit errors in this data. For example, if I have 5 bytes of hex data:
"FF00FF00FF"
binary representation is
"1111111100000000111111110000000011111111"
If the bit error rate is 10%, then I want 4 of these 40 bits to be in error. One extreme random result: errors happen in the first 4 bits:
"0000111100000000111111110000000011111111"
First of all, find out which char the bit falls in:
param is your bit to flip...
char *byteToWrite = &A[sizeof(A) - 2 - (param / 8)];
That will give you a pointer to the char at that array offset (-2 to step over the terminating null and convert the size to an index)
Then use modulus (or more bit shifting, if you're feeling adventurous) to find out which bit in there to flip:
*byteToWrite ^= (1u << param % 8);
So for a param of 5, the byte at A[10] will have its 5th bit toggled.
store the values of 2^n in an array
seed the random number generator
loop x times (in this case 5) and do data ^= stored_values[random_num]
As an alternative to storing the 2^n values in an array, you could bit-shift to a random power of 2, like:
data ^= (1 << random % 8)
You could even just write out that line 5 times in your function and avoid the overhead of a for loop entirely.
You have a 32-bit number. You can treat the bits as part of the number and just XOR this number with some random number that has exactly 5 bits set.
#include <cstdio>
#include <cstdlib>
int count_1s(unsigned int foo)
{
    unsigned int m = 0x55555555;
    unsigned int r = (foo & m) + ((foo >> 1) & m);
    m = 0x33333333;
    r = (r & m) + ((r >> 2) & m);
    m = 0x0F0F0F0F;
    r = (r & m) + ((r >> 4) & m);
    m = 0x00FF00FF;
    r = (r & m) + ((r >> 8) & m);
    m = 0x0000FFFF;
    return (r & m) + ((r >> 16) & m);
}
int main()
{
    char input[] = "0A F5 6D 02";
    unsigned char data[4] = {};
    unsigned int tmp[4] = {};
    sscanf(input, "%2x %2x %2x %2x", &tmp[0], &tmp[1], &tmp[2], &tmp[3]);
    for (int i = 0; i < 4; ++i)
        data[i] = (unsigned char)tmp[i];
    int *x = reinterpret_cast<int*>(data);
    int y = rand();
    while (count_1s(y) != 5)
    {
        y = rand(); // roll again until exactly 5 bits are set
    }
    *x ^= y;
    printf("%02X %02X %02X %02X\n", data[0], data[1], data[2], data[3]);
    return 0;
}
I see no reason to convert the entire string back and forth from and to hex notation. Just pick a random character out of the hex string, convert it to a digit, flip one of its bits, and convert back to a hex character.
In plain C:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
int main (void)
{
    const char *hexToDec_lookup = "0123456789ABCDEF";
    char hexstr[] = "0A F5 6D 02";
    /* 0. make sure we're fairly random */
    srand(time(0));
    /* 1. loop 5 times .. */
    int i;
    for (i = 0; i < 5; i++)
    {
        /* 2. pick a random hex digit;
           we know it's one out of 8, grouped per 2 */
        int hexdigit = rand() & 7;
        hexdigit += (hexdigit >> 1);
        /* 3. convert the digit to binary */
        int hexvalue = hexstr[hexdigit] > '9' ? hexstr[hexdigit] - 'A' + 10 : hexstr[hexdigit] - '0';
        /* 4. flip a random bit */
        hexvalue ^= 1 << (rand() & 3);
        /* 5. write it back into position */
        hexstr[hexdigit] = hexToDec_lookup[hexvalue];
        printf ("[%s]\n", hexstr);
    }
    return 0;
}
It might even be possible to omit the convert-to-and-from-ASCII steps -- flip a bit in the character string, check if it's still a valid hex digit and if necessary, adjust.
First randomly choose x positions (each position consists of an array index and a bit position).
Now say you want to flip the ith bit from the right (counting from 0) of a number n. Split n at that bit with a division by 2^i; the bit to flip is then the lowest bit of the quotient:
code:
int divisor = 1 << i;        // 2^i
int remainder = n % divisor; // the i bits below the bit to flip
int quotient = n / divisor;  // the bit to flip is the quotient's lowest bit
quotient = quotient ^ 1;     // flip it
n = divisor * quotient + remainder;
Take the input mod 8 (5 % 8)
Shift 0x80 to the right by that value (e.g. 5)
XOR this value with the (input/8)th element of your character array.
code:
void flip_bit(int bit)
{
    Array[bit / 8] ^= (0x80 >> (bit % 8));
}

Create a file that uses 4-bit encoding to represent integers 0-9

How can I create a file that uses 4-bit encoding to represent the integers 0-9, separated by a comma ('1111')? For example:
2,34,99 = 0010 1111 0011 0100 1111 1001 1001 => actually becomes, without spaces,
0010111100110100111110011001 = binary.txt
Therefore 0010111100110100111110011001 is what I see when I view the file ('binary.txt') in WinHex in binary view, but I would see 2,34,99 when viewing the file (binary.txt) in Notepad.
If not Notepad, is there another decoder that will do '4-bit encoding', or do I have to write a 'decoder program' to view the integers?
How can I do this in C++?
The basic idea of your format (4 bits per decimal digit) is well known and called BCD (Binary-Coded Decimal). But I doubt the use of 0xF as an encoding for a comma is established anywhere, let alone supported by Notepad.
Writing a program in C++ to do the encoding and decoding would be quite easy. The only difficulty is that the standard IO uses the byte as its basic unit, not the bit, so you'd have to group the bits into bytes yourself.
You can decode the files using od -tx1 if you have that (digits will show up as digits, commas will show up as f). You can also use xxd to go both directions; it comes with Vim. Use xxd -r -p to copy hex characters from stdin to a binary file on stdout, and xxd -p to go the other way. You can use sed or tr to change f back and forth to ,.
This is the simplest C++ 4-bit (BCD) encoding algorithm I could come up with - wouldn't call it exactly easy, but no rocket science either. Extracts one digit at a time by dividing and then adds them to the string:
#include <iostream>
int main() {
    const unsigned int ints = 3;
    unsigned int a[ints] = {2, 34, 99}; // these are the original ints
    unsigned int bytes_per_int = 6;
    char *result = new char[bytes_per_int * ints + 1];
    // enough space for 11 digits per int plus comma, 8-bit chars
    for (unsigned int j = 0; j < bytes_per_int * ints; ++j)
    {
        result[j] = 0xFF; // fill with FF
    }
    result[bytes_per_int * ints] = 0; // null terminated string
    unsigned int rpos = bytes_per_int * ints * 2; // result position, start from the end of result
    unsigned int i = ints; // start from the end of the array too.
    while (i != 0) {
        --i;
        unsigned int b = a[i];
        do { // do-while, so that the value 0 still produces one digit
            --rpos;
            unsigned int digit = b % 10; // take the lowest decimal digit of b
            if (rpos & 1) {
                // odd rpos means we set the lowest bits of a char
                result[(rpos >> 1)] = digit;
            }
            else {
                // even rpos means we set the highest bits of a char
                result[(rpos >> 1)] |= (digit << 4);
            }
            b /= 10; // make the next digit the new lowest digit
        } while (b != 0);
        if (i != 0 || (rpos & 1))
        {
            // add the comma
            --rpos;
            if (rpos & 1) {
                result[(rpos >> 1)] = 0x0F;
            }
            else {
                result[(rpos >> 1)] |= 0xF0;
            }
        }
    }
    std::cout << result;
    delete[] result;
}
Trimming the bogus data left at the start portion of the result according to rpos will be left as an exercise for the reader.
The subproblem of BCD conversion has also been discussed before: Unsigned Integer to BCD conversion?
If you want a more efficient algorithm, here's a bunch of lecture slides with conversion from 8-bit ints to BCD: http://edda.csie.dyu.edu.tw/course/fpga/Binary2BCD.pdf