Store reverse in an integer c++

I have used reverse code in one of my programs. I do not want to actually output the reverse in my program; I want to store the reversed integer so that I can use it somewhere else too.
This is my code for reversing an integer. Please tell me how to store this reverse in a separate integer, without using a character array.
This is my piece of code:
int integer;
int rev;
do {
    rev = integer % 10;
    integer = integer / 10;
    cout << rev;
} while (integer != 0);

Here's a code snippet:
int integer = 123456789;
int rev = 0;
while (integer != 0) {
    rev = (10 * rev)       // move all digits one to the left: 98 --> 980
        + (integer % 10);  // add rightmost digit from input: 980 --> 987
    integer /= 10;         // delete rightmost digit: 1234567 --> 123456
}
printf("%d", rev);

Related

detecting 32 bit integer overflow

I have a simple method that basically reverses a signed integer. The function works as long as the reversed value fits in a 32-bit signed integer.
For example:
input = 321
output = 123
input = -321
output = -123
input = 1534236469
output = 9646324351 //this value is wrong.
expected output = 0
I want to detect the integer overflow and return 0 in that case.
Below is the code for the function
int reverse(int x) {
    int number = x;
    bool negative = false;
    if (number < 0) {
        negative = true;
        number *= -1;
    }
    int reversed = 0;
    while (number != 0) {
        int remainder = number % 10;
        reversed = (reversed * 10) + remainder;
        number /= 10;
    }
    if (negative) {
        reversed *= -1;
    }
    return reversed;
}
Furthermore, if I change the input and output into signed long I get the required output but I want to detect the integer overflow and return 0.
Before you multiply reversed by 10, just check to make sure it's small enough to multiply by 10.
Similarly, before you add remainder, check to make sure it's small enough to add remainder.
There's a clever trick you can use for the addition, but at your level you probably shouldn't:
if ((reversed += remainder) < remainder) {
    // overflow
}
Note that the trick only works if both reversed and remainder are unsigned.
This hint might help you complete your assignment:
You are only going to get integer overflow if your final number is 10 digits long and the first digit ends up being above or equal to 2.
Which means that you can only get integer overflow if your original number is also 10 digits long and its last digit is 2 or above.
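Putting the first suggestion into code, here is a sketch of the function with the pre-multiplication check added; it assumes a 32-bit int and the constants from <climits>:
#include <climits>

int reverse(int x) {
    if (x == INT_MIN)
        return 0;                      // -INT_MIN does not fit in an int
    int number = x;
    bool negative = false;
    if (number < 0) {
        negative = true;
        number *= -1;
    }
    int reversed = 0;
    while (number != 0) {
        int remainder = number % 10;
        if (reversed > (INT_MAX - remainder) / 10)
            return 0;                  // reversed * 10 + remainder would overflow
        reversed = reversed * 10 + remainder;
        number /= 10;
    }
    return negative ? -reversed : reversed;
}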

VBA convert Excel Style Column Name (with 52 charset) to original number

I have a C++ program that takes an integer and converts it to lower- and uppercase letters, similar to what Excel does to convert a column index to a column name, but also including lowercase letters.
#include <string>
#include <iostream>
#include <climits>
using namespace std;
string ConvertNum(unsigned long v)
{
    char const digits[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    size_t const base = sizeof(digits) - 1;
    char result[sizeof(unsigned long)*CHAR_BIT + 1];
    char* current = result + sizeof(result);
    *--current = '\0';
    while (v != 0) {
        v--;
        *--current = digits[v % base];
        v /= base;
    }
    return current;
}

// for testing
int main()
{
    cout << ConvertNum(705);
    return 0;
}
I need a VBA function to reverse this back to the original number. I do not have a lot of experience with C++, so I cannot figure out the logic to reverse this in VBA. Can anyone please help?
Update 1: I don't need already-written code, just some help with the logic to reverse it. I'll try to convert the logic into code myself.
Update 2: Based on the wonderful explanation and help provided in the answer, it's clear that the code is not converting the number to a usual base 52, which is misleading. So I have changed the function name to eliminate the confusion for future readers.
EDIT: The character string format being translated to decimal by the code described below is NOT a standard base-52 schema. The schema does not include 0 or any other digits. Therefore this code should not be used, as is, to translate a standard base-52 value to decimal.
O.K., this is based on converting a single character according to its position in a long string. The string is:
chSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
The InStr() function tells us that A is in position 1, Z is in position 26, and a is in position 27. All characters get converted the same way.
I use this rather than Asc() because Asc() has a gap between the upper and lower case letters.
The least significant character's value gets multiplied by 52^0, the next character's value by 52^1, the third character's value by 52^2, etc. The code:
Public Function deccimal(s As String) As Long
    Dim chSET As String, arr(1 To 52) As String
    Dim L As Long, i As Long, K As Long, CH As String
    chSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
    deccimal = 0
    L = Len(s)
    K = 0
    For i = L To 1 Step -1
        CH = Mid(s, i, 1)
        deccimal = deccimal + InStr(1, chSET, CH) * (52 ^ K)
        K = K + 1
    Next i
End Function
Some examples: deccimal("A") = 1, deccimal("Z") = 26, deccimal("a") = 27, deccimal("z") = 52, and deccimal("Mc") = 705.
NOTE:
This is NOT the way bases are usually encoded. Usually bases start with a 0 and allow 0 in any of the encoded value's positions. In all my previous UDF()'s similar to this one, the first character in chSET is a 0 and I have to use (InStr(1, chSET, CH) - 1) * (52 ^ K)
Gary's Student provided a good and easy to understand way to get the number from what I call "Excel style base 52" and this is what you wanted.
However, this is a little different from the usual base 52. I'll try to explain the difference from regular base 52 and its conversion. There might be an easier way, but this is the best I could come up with that also explains the code you provided.
As an example: the number zz..zz means 51*(1 + 52 + 52^2 + ... + 52^(n-1)) in regular base 52 and 52*(1 + 52 + 52^2 + ... + 52^(n-1)) in Excel style base 52. So Excel style reaches higher numbers with the same number of digits, and the gap grows with the number of digits. How is this possible? It effectively uses leading zeros, so 1, 01, 001 etc. are all different numbers. Why don't we do this normally? It would mess up the easy arithmetic of the usual system.
We can't just shift all the digits by one after the base change, and we can't just subtract 1 before the base change to counter the fact that we start at 1 instead of 0. I'll outline the problem with base 10. If we used Excel style base 10 to number the columns, we would have to count like "0, 1, 2, ..., 9, 00, 01, 02, ...". At first glance it looks like we just have to shift the digits so we start counting at 1, but this only works up to the 10th number.
1 2 .. 10 11 .. 20 21 .. 99 100 .. 110 111 //normal counting
0 1 .. 9 00 .. 09 10 .. 88 89 .. 99 000 //excel style counting
You notice that whenever we add a new digit we shift again. To counter that, we have to do the shift by 1 before calculating each digit, not shift the digit after calculating it. (This only makes a difference when we're exactly at a power of the base.) Note that we still assign A to 0, B to 1, etc.
Normally what you would do to change bases is looping with something like
nextDigit = x mod base //determining the last digit
x = x/base //removing the last digit
//terminate if x = 0
However now it is
x = x - 1
nextDigit = x mod base
x = x/base
//terminate if x = 0
So x is decremented by 1 first! Let's do a quick check for x=52:
Regular base 52:
nextDigit = x mod 52 //52 mod 52 = 0 so the next digit is A
x = x/52 //x is now 1
//next iteration
nextDigit = x mod 52 //1 mod 52 = 1 so the next digit is B
x = x/52 //1/52 = 0 in integer arithmetic
//terminate because x = 0
//result is BA
Excel style:
x = x-1 //x is now 51
nextDigit = x mod 52 //51 mod 52 = 51 so the next digit is z
x = x/52 //51/52 = 0 in integer arithmetic
//terminate because x=0
//result is z
It works!
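For reference, here is the same decoding step as a C++ sketch mirroring the VBA function above; the function name is just for illustration, and a find over the digit string plays the role of InStr(), with the + 1 reproducing its 1-based positions:
#include <string>

unsigned long decodeExcel52(const std::string& s)
{
    const std::string digits = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    unsigned long n = 0;
    for (char ch : s) {
        n = n * 52 + (digits.find(ch) + 1); // position is 1-based, exactly like InStr()
    }
    return n;
}
// decodeExcel52("Mc") == 705, the inverse of the encoder discussed below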
Part 2: Your C++ code
Now let's read your code:
x % y means x mod y
When you do calculations with integers, the result will be an integer which is achieved by rounding down. So 39/10 will produce 3 etc.
x++ and ++x both increment x by 1.
You can use this in other statements to save a line of code. x++ means x is incremented after the statement is evaluated and ++x means it is incremented before the statement is evaluated
y=f(x++);
is the same as
y = f(x);
x = x + 1;
while
y=f(++x);
is the same as
x = x + 1;
y = f(x);
This goes the same way for --
char* p creates a pointer to a char.
A pointer points to a certain location in memory. If you change the pointer, it points to a different location. E.g. doing p-- moves the pointer one to the left. To read or write the value that is saved at that location, use *p. E.g. *p='a'; writes 'a' to the memory location that p points at. *p--='a'; also writes 'a' to memory, but the pointer is moved to the left afterwards, so *p is now whatever is in memory to the left of the 'a'.
Strings in this code are just arrays of type char.
The end of a string is always '\0'; when the computer reads a string, it continues until it finds '\0'.
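A tiny standalone sketch of the same write-backwards-through-a-buffer idiom, using base 10 for simplicity:
#include <iostream>

int main() {
    char buf[16];
    char* current = buf + sizeof(buf); // one past the end of the buffer
    *--current = '\0';                 // step left, write the terminator
    unsigned int v = 705;
    while (v != 0) {
        *--current = '0' + (v % 10);   // step left, write the lowest digit
        v /= 10;
    }
    std::cout << current << '\n';      // prints 705
    return 0;
}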
This is hopefully enough to understand the code. Here it is
#include <string>
#include <iostream>
#include <climits>
using namespace std;
string base52(unsigned long v)
{
    char const digits[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; // the digits (arrays start at 0)
    size_t const base = sizeof(digits) - 1; // the base, derived from the digits that were given
    char result[sizeof(unsigned long)*CHAR_BIT + 1]; // the array that holds the answer
    // sizeof(unsigned long)*CHAR_BIT is the number of bits of an unsigned long,
    // which is the absolute longest that v can be in any base.
    // The +1 is to hold the terminating character '\0'
    char* current = result + sizeof(result); // a pointer to where the next digit goes; it starts at the first byte after the result array (start + length),
                                             // i.e. it will walk through the memory from high to low
    *--current = '\0'; // the pointer is moved one to the left (to the last char of result) and the terminating char is added;
                       // it has to be moved left first because it was pointing to the first byte after result
    while (v != 0) { // loop until v is zero (until there are no more digits left)
        v--; // v = v - 1. This is the important part that does the 1 -> A mapping
        *--current = digits[v % base]; // the pointer is moved one to the left and the corresponding digit is saved
        v /= base; // the last digit is dropped
    }
    return current; // current is returned, pointing at the last saved digit; the rest of the result array (before current) is not used
}

// for testing
int main()
{
    cout << base52(705);
    return 0;
}

How to random flip binary bit of char in C/C++

If I have a char array A, I use it to store hex
A = "0A F5 6D 02" size=11
The binary representation of this char array is:
00001010 11110101 01101101 00000010
I want to ask: is there any function that can randomly flip bits?
That is:
if the parameter is 5
00001010 11110101 01101101 00000010
-->
10001110 11110001 01101001 00100010
it will random choose 5 bit to flip.
I am trying to convert this hex data to binary data, use a bitmask method to achieve my requirement, and then turn it back to hex. I am curious whether there is any method to do this job more quickly?
Sorry, my question description was not clear enough. Simply put, I have some hex data, and I want to simulate bit errors in this data. For example, if I have 5 bytes of hex data:
"FF00FF00FF"
binary representation is
"1111111100000000111111110000000011111111"
If the bit error rate is 10%, then I want 4 of these 40 bits to be flipped. One extreme random result: the errors happen in the first 4 bits:
"0000111100000000111111110000000011111111"
First of all, find out which char of the array holds the bit:
param is your bit to flip...
char *byteToWrite = &A[sizeof(A) - (param / 8) - 1];
So that will give you a pointer to the char at that array offset (-1 for 0 array offset vs size)
Then get modulus (or more bit shifting if you're feeling adventurous) to find out which bit in here to flip:
*byteToWrite ^= (1u << param % 8);
So for a param of 5, the byte at A[10] should have its 5th bit toggled.
store the values of 2^n in an array
generate a random number seed
loop through x times (in this case 5) and go data ^= stored_values[random_num]
As an alternative to storing the 2^n values in an array, you could do some bit shifting to a random power of 2, like:
data ^= (1 << (random % 8))
Reflecting the first comment, you really could just write out that line 5 times in your function and avoid the overhead of a for loop entirely.
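Here is a sketch of that looping approach for a single byte; the names are illustrative, and note that the same bit position can be drawn twice, in which case it simply toggles back:
#include <cstdlib>
#include <ctime>

unsigned char flip_random_bits(unsigned char data, int count)
{
    for (int i = 0; i < count; ++i)
        data ^= (unsigned char)(1u << (rand() % 8)); // toggle one of the 8 bit positions
    return data;
}

int main()
{
    srand((unsigned)time(0));
    unsigned char b = 0x0A;
    b = flip_random_bits(b, 5); // flip 5 randomly chosen bits of b
    return 0;
}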
You have a 32-bit number. You can treat the bits as parts of the number and just XOR this number with some random number that has exactly 5 bits set.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <ctime>

int count_1s(unsigned int foo)
{
    unsigned int m = 0x55555555;
    unsigned int r = (foo & m) + ((foo >> 1) & m);
    m = 0x33333333;
    r = (r & m) + ((r >> 2) & m);
    m = 0x0F0F0F0F;
    r = (r & m) + ((r >> 4) & m);
    m = 0x00FF00FF;
    r = (r & m) + ((r >> 8) & m);
    m = 0x0000FFFF;
    return (r & m) + ((r >> 16) & m);
}

int main()
{
    char input[] = "0A F5 6D 02";
    unsigned int bytes[4] = {};
    sscanf(input, "%2x %2x %2x %2x", &bytes[0], &bytes[1], &bytes[2], &bytes[3]);
    unsigned char data[4];
    for (int i = 0; i < 4; ++i)
        data[i] = (unsigned char)bytes[i];
    unsigned int x;
    memcpy(&x, data, sizeof x);   // treat the four bytes as one 32-bit number
    srand((unsigned)time(0));
    unsigned int y = rand();      // note: rand() may not cover all 32 bit positions (RAND_MAX is implementation-defined)
    while (count_1s(y) != 5)
    {
        y = rand();               // keep drawing until exactly 5 bits are set
    }
    x ^= y;                       // flip those 5 bits
    memcpy(data, &x, sizeof x);
    printf("%02x %02x %02x %02x\n", data[0], data[1], data[2], data[3]);
    return 0;
}
I see no reason to convert the entire string back and forth between hex notation and binary. Just pick a random character out of the hex string, convert it to a digit, flip a bit in it, and convert it back to a hex character.
In plain C:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main (void)
{
    const char *hexToDec_lookup = "0123456789ABCDEF";
    char hexstr[] = "0A F5 6D 02";
    /* 0. make sure we're fairly random */
    srand((unsigned)time(NULL));
    /* 1. loop 5 times .. */
    int i;
    for (i = 0; i < 5; i++)
    {
        /* 2. pick a random hex digit;
              we know it's one out of 8, grouped per 2 */
        int hexdigit = rand() & 7;
        hexdigit += (hexdigit >> 1);
        /* 3. convert the digit to binary */
        int hexvalue = hexstr[hexdigit] > '9' ? hexstr[hexdigit] - 'A' + 10 : hexstr[hexdigit] - '0';
        /* 4. flip a random bit */
        hexvalue ^= 1 << (rand() & 3);
        /* 5. write it back into position */
        hexstr[hexdigit] = hexToDec_lookup[hexvalue];
        printf ("[%s]\n", hexstr);
    }
    return 0;
}
It might even be possible to omit the convert-to-and-from-ASCII steps -- flip a bit in the character string, check if it's still a valid hex digit and if necessary, adjust.
First randomly choose x positions (each position consists of an array index and a bit position within that byte).
Now, if you want to flip the i-th bit from the right (counting from 0) of a number n, split n around that bit using division and remainder:
code:
int divisor   = 1 << i;        // 2^i, the weight of the i-th bit
int remainder = n % divisor;   // the bits below position i
int quotient  = n / divisor;   // the i-th bit and everything above it
quotient = (quotient % 2 == 0) ? quotient + 1 : quotient - 1; // flip the i-th bit, now the lowest bit of quotient
n = quotient * divisor + remainder; // put the pieces back together
Take the input mod 8 (e.g. 5 % 8).
Shift 0x80 to the right by that remainder.
XOR this value with the (input/8)th element of your character array.
code:
void flip_bit(int bit)
{
    Array[bit / 8] ^= (0x80 >> (bit % 8));
}

What is the logic behind this program?

The operator int() function converts the string to an int:
#include <iostream>
#include <cstring>
using namespace std;

class mystring
{
private:
    char str[20];
public:
    mystring(const char* s) { strcpy(str, s); } // a constructor like this is assumed; the original post omits it
    operator int() // i'm assuming this converts a string to an int
    {
        int i = 0, l, ss = 0, k = 1;
        l = strlen(str) - 1;
        while (l >= 0)
        {
            ss = ss + (str[l] - 48) * k;
            l--;
            k *= 10;
        }
        return (ss);
    }
};

int main()
{
    mystring s2("123");
    int i = int(s2);
    cout << endl << "i= " << i;
}
So what's the logic behind operator int() ? What's the 48 in there? Can someone explain to me the algorithm behind the conversion from string to int.
Yes, this converts a string to an integer. 48 is the ASCII value for '0'. If you subtract 48 from an ASCII digit you get the value of the digit (e.g. '0' - 48 = 0, '1' - 48 = 1, ...). For each digit, the code picks the correct power of 10 by using k, which takes the values 1, 10, 100, ... up to the place value of the leading digit.
It does indeed convert a string to an integer. The routine assumes that all characters are decimal digits (things like minus sign, space, or comma will mess it up).
It starts with the ones place and moves through the string. For each digit, it subtracts off the ASCII value of '0', and multiplies by the current place value.
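The same idea as a small standalone function, as a sketch that assumes the string contains only decimal digits (the function name is illustrative):
#include <cstring>

int to_int(const char* str)
{
    int ss = 0, k = 1;
    for (int l = (int)std::strlen(str) - 1; l >= 0; --l) {
        ss += (str[l] - '0') * k; // '0' has ASCII value 48
        k *= 10;
    }
    return ss;
}
// to_int("123") == 123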
This does indeed convert the string to an integer. If you look at an ASCII table, the digit characters start at the value 48. Using this logic (and let's say the string is "123"), the while loop will do:
l=2
ss=0+(51-48)*1
so in this case ss = 3
next loop we get
l=1
ss=3+(50-48)*10
so ss = 23
next loop
l=0
ss=23+(49-48)*100
so ss= 123
The loop breaks and we return an integer of value 123.
Hope this helps!

Create a file that uses 4-bit encoding to represent integers 0-9

How can I create a file that uses 4-bit encoding to represent the integers 0-9, separated by a comma (encoded as '1111')? For example:
2,34,99 = 0010 1111 0011 0100 1111 1001 1001 => actually becomes without spaces
0010111100110100111110011001 = binary.txt
Therefore 0010111100110100111110011001 is what I see when I view the file ('binary.txt') in WinHex in binary view, but I would see 2,34,99 when I view the file in Notepad.
If not Notepad, is there another decoder that will do '4-bit encoding', or do I have to write a 'decoder program' to view the integers?
How can I do this in C++?
The basic idea of your format (4 bits per decimal digit) is well known and called BCD (Binary Coded Decimal). But I doubt the use of 0xF as an encoding for a comma is something well established, let alone something Notepad supports.
Writing a program in C++ to do the encoding and decoding would be quite easy. The only difficulty is that standard I/O uses the byte as its basic unit, not the bit, so you'd have to group the bits into bytes yourself (as sketched below).
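A sketch of that packing step, assuming the mapping from the question (digits keep their value, a comma becomes 0xF) and padding an odd final half byte with an extra 0xF; the file name binary.txt is simply taken from the question:
#include <cstdio>
#include <string>
#include <vector>

int main()
{
    std::string text = "2,34,99";
    std::vector<unsigned char> nibbles;
    for (char c : text)
        nibbles.push_back(c == ',' ? 0xF : (unsigned char)(c - '0'));
    if (nibbles.size() % 2 != 0)
        nibbles.push_back(0xF);                            // pad so the last byte is complete
    std::FILE* f = std::fopen("binary.txt", "wb");
    if (!f)
        return 1;
    for (std::size_t i = 0; i + 1 < nibbles.size(); i += 2)
        std::fputc((nibbles[i] << 4) | nibbles[i + 1], f); // two 4-bit codes per byte
    std::fclose(f);
    return 0;
}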
You can decode the files using od -tx1 if you have that (digits will show up as digits, commas will show up as f). You can also use xxd to go both directions; it comes with Vim. Use xxd -r -p to copy hex characters from stdin to a binary file on stdout, and xxd -p to go the other way. You can use sed or tr to change f back and forth to ,.
This is the simplest C++ 4-bit (BCD) encoding algorithm I could come up with - wouldn't call it exactly easy, but no rocket science either. Extracts one digit at a time by dividing and then adds them to the string:
#include <iostream>
int main() {
    const unsigned int ints = 3;
    unsigned int a[ints] = {2,34,99}; // these are the original ints
    unsigned int bytes_per_int = 6;
    char * result = new char[bytes_per_int * ints + 1];
    // enough space for 11 digits per int plus comma, 8-bit chars
    for (unsigned int j = 0; j < bytes_per_int * ints; ++j)
    {
        result[j] = 0xFF; // fill with FF
    }
    result[bytes_per_int * ints] = 0; // null terminated string
    unsigned int rpos = bytes_per_int * ints * 2; // result position, start from the end of result
    int i = ints; // start from the end of the array too.
    while (i != 0) {
        --i;
        unsigned int b = a[i];
        while (b != 0) {
            --rpos;
            unsigned int digit = b % 10; // take the lowest decimal digit of b
            if (rpos & 1) {
                // odd rpos means we set the lowest bits of a char
                result[(rpos >> 1)] = digit;
            }
            else {
                // even rpos means we set the highest bits of a char
                result[(rpos >> 1)] |= (digit << 4);
            }
            b /= 10; // make the next digit the new lowest digit
        }
        if (i != 0 || (rpos & 1))
        {
            // add the comma
            --rpos;
            if (rpos & 1) {
                result[(rpos >> 1)] = 0x0F;
            }
            else {
                result[(rpos >> 1)] |= 0xF0;
            }
        }
    }
    std::cout << result;
}
Trimming the bogus data left at the start portion of the result according to rpos will be left as an exercise for the reader.
The subproblem of BCD conversion has also been discussed before: Unsigned Integer to BCD conversion?
If you want a more efficient algorithm, here's a bunch of lecture slides with conversion from 8-bit ints to BCD: http://edda.csie.dyu.edu.tw/course/fpga/Binary2BCD.pdf