Method that converts decimal to binary:
string toBinary(unsigned int n) {
    char binary[33] = {0}; // char array with 0 in every slot
    int ix = 32;
    do {
        binary[--ix] = '0' + n % 2; // write the remainder at a specific index
        n /= 2; // divide n by 2
    } while (n); // loop while n is not equal to 0
    // return a string
    return (binary + ix); // but unable to understand this line
}
Can anyone please explain what's happening right here: return (binary + ix);?
ix is an index into the char array. The function creates the binary string starting with its rightmost bit, near the end of the array, and working its way towards the beginning of the array, creating each bit one at a time.
Therefore, when the last bit is created, ix points to the index of the first, most-significant bit. It's not always going to be at the beginning of the array: specifically it won't be if there were fewer than 32 bits.
"binary + ix" adds the index to the first bit to the start of the char buffer, calculating a pointer to the first bit. Since the function returns a std::string, this is fed to std::string's constructor, which takes the pointer to a literal string and constructs a full std::string object from it, implicitly.
Since ix gets decremented only as long as the loop runs (which changes depending on the magnitude of n), this will truncate the string to not include all of the leading zeros that would be there otherwise.
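To make the pointer arithmetic concrete, here is a minimal, self-contained check (just the question's function plus a test main; the expected output is noted in the comments):

#include <iostream>
#include <string>
using namespace std;

string toBinary(unsigned int n) {
    char binary[33] = {0};
    int ix = 32;
    do {
        binary[--ix] = '0' + n % 2;
        n /= 2;
    } while (n);
    return (binary + ix); // pointer to the first significant digit
}

int main() {
    cout << toBinary(5) << '\n'; // prints "101": ix stopped at 29, three chars from the end
    cout << toBinary(0) << '\n'; // prints "0": the do-while always writes at least one digit
    return 0;
}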
Also note that you can do this: How to print (using cout) the way a number is stored in memory?
I am currently working on an arbitrary size integer library for learning purposes.
Each number is represented as uint32_t *number_segments.
I have functional arithmetic operations, and the ability to print the raw bits of my number.
However, I have struggled to find any information on how to convert my arbitrarily long array of uint32_t values into the correct, and also arbitrarily long, base-10 representation as a string.
Essentially I need a function along the lines of:
std::string uint32_array_to_string(uint32_t *n, size_t n_length);
Any pointers in the right direction would be greatly appreciated, thank you.
You do it the same way as you do with a single uint64_t, except on a larger scale (bringing this into modern C++ is left for the reader):
#include <cstdint>

char * to_str(uint64_t x) {
    static char buf[23] = {0}; // 20 digits max for a uint64_t, the terminator,
                               // and space for a minus sign added by the caller
    char *p = &buf[22];        // buf[22] is never overwritten, so it stays '\0'
    do {
        *--p = '0' + (x % 10); // write the lowest digit
        x /= 10;               // and drop it
    } while (x > 0);
    return p;                  // points at the first (most significant) digit
}
The function fills a buffer from the end with the lowest digits, dividing the number by 10 in each step, and then returns a pointer to the first digit.
Now with bignums you can't use a static buffer; you have to size the buffer to fit your number. You probably want to return a std::string, and creating the digits in reverse and then copying them into a result string is the way to go. You also have to deal with negative numbers.
Since a long division of a big number is expensive, you probably don't want to divide by 10 in the loop. Rather, divide by 1'000'000'000 and convert the remainder into 9 digits. That is the largest power of 10 for which the long division is still bignum / single integer rather than bignum / bignum. You might only be able to use 10'000 if you can't use uint64_t in the division.
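Here is a minimal sketch of that chunked approach, assuming the limbs in n are stored least significant first (flip the inner loop if your layout differs). Each pass performs one long division of the whole number by 10^9 and emits 9 decimal digits:

#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

std::string uint32_array_to_string(uint32_t *n, size_t n_length) {
    std::vector<uint32_t> limbs(n, n + n_length); // working copy, consumed by the division
    const uint64_t chunk = 1000000000ull;         // 10^9: 9 decimal digits per division
    std::string result;                           // digits accumulate least significant first
    bool zero = false;
    while (!zero) {
        // long division of the whole number by 10^9, most significant limb first
        uint64_t rem = 0;
        zero = true;
        for (size_t i = limbs.size(); i-- > 0; ) {
            uint64_t cur = (rem << 32) | limbs[i];
            limbs[i] = static_cast<uint32_t>(cur / chunk);
            rem = cur % chunk;
            if (limbs[i] != 0) zero = false;
        }
        for (int d = 0; d < 9; ++d) {             // emit the remainder as 9 digits
            result.push_back(static_cast<char>('0' + rem % 10));
            rem /= 10;
        }
    }
    while (result.size() > 1 && result.back() == '0')
        result.pop_back();                        // strip leading zeros (stored at the back)
    std::reverse(result.begin(), result.end());
    return result;
}

Each division pass is one cheap single-word division per limb and yields 9 output digits, which is exactly why dividing by 10^9 beats dividing by 10.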
I have a C++ program that takes an integer and converts it to lower- and uppercase alphabetic characters, similar to what Excel does to convert a column index into column letters, but also including lowercase letters.
#include <string>
#include <iostream>
#include <climits>
using namespace std;

string ConvertNum(unsigned long v)
{
    char const digits[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    size_t const base = sizeof(digits) - 1;
    char result[sizeof(unsigned long)*CHAR_BIT + 1];
    char* current = result + sizeof(result);
    *--current = '\0';
    while (v != 0) {
        v--;
        *--current = digits[v % base];
        v /= base;
    }
    return current;
}

// for testing
int main()
{
    cout << ConvertNum(705);
    return 0;
}
I need a VBA function to reverse this back to the original number. I do not have a lot of experience with C++, so I cannot figure out the logic to reverse this in VBA. Can anyone please help?
Update 1: I don't need already-written code, just some help with the logic to reverse it. I'll try to convert the logic into code myself.
Update 2: Based on the wonderful explanation and help provided in the answers, it's clear that the code is not converting the number to a usual base 52, which was misleading. So I have changed the function name to eliminate the confusion for future readers.
EDIT: The character string format being translated to decimal by the code described below is NOT a standard base-52 scheme. The scheme does not include 0 or any other numeric digits. Therefore this code should not be used, as is, to translate a standard base-52 value to decimal.
O.K., this is based on converting a single character according to its position in a long string. The string is:
chSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
The InStr() function tells us that A is in position 1, that Z is in position 26, and that a is in position 27. All characters get converted the same way.
I use this rather than Asc() because Asc() has a gap between the upper and lower case letters.
The least significant character's value gets multiplied by 52^0, the next character's value by 52^1, the third character's value by 52^2, etc. The code:
Public Function deccimal(s As String) As Long
    Dim chSET As String
    Dim L As Long, i As Long, K As Long, CH As String

    chSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
    deccimal = 0
    L = Len(s)
    K = 0
    For i = L To 1 Step -1
        CH = Mid(s, i, 1)
        deccimal = deccimal + InStr(1, chSET, CH) * (52 ^ K)
        K = K + 1
    Next i
End Function
Some examples:
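    deccimal("A") = 1
    deccimal("Z") = 26
    deccimal("a") = 27
    deccimal("z") = 52
    deccimal("AA") = 53
    deccimal("Mc") = 705
The last one round-trips the ConvertNum(705) test call from the question.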
NOTE:
This is NOT the way bases are usually encoded. Usually bases start with a 0 and allow 0 in any of the encoded value's positions. In all my previous UDF()s similar to this one, the first character in chSET is a 0, and I have to use (InStr(1, chSET, CH) - 1) * (52 ^ K)
Gary's Student provided a good, easy-to-understand way to get the number from what I call "Excel-style base 52", and this is what you wanted.
However, this is a little different from the usual base 52. I'll try to explain the difference from regular base 52 and its conversion. There might be an easier way, but this is the best I could come up with that also explains the code you provided.
As an example: the number zz..zz with n digits means 51*(1 + 52 + 52^2 + ... + 52^(n-1)) in regular base 52, but 52*(1 + 52 + 52^2 + ... + 52^(n-1)) in Excel-style base 52. So Excel style reaches higher numbers with fewer digits, and the gap grows with the number of digits. How is this possible? It uses leading zeros, so 1, 01, 001 etc. are all different numbers. Why don't we do this normally? It would mess up the easy arithmetic of the usual system.
We can't just shift all the digits by one after the base change, and we can't just subtract 1 before the base change to counter the fact that we start at 1 instead of 0. I'll outline the problem with base 10. If we used Excel-style base 10 to number the columns, we would have to count "0, 1, 2, ..., 9, 00, 01, 02, ...". At first glance it looks like we just have to shift the digits so we start counting at 1, but this only works up to the 10th number.
1 2 .. 10 11 .. 20 21 .. 99 100 .. 110 111 //normal counting
0 1 .. 9 00 .. 09 10 .. 88 89 .. 99 000 //excel style counting
You notice that whenever we add a new digit, we shift again. To counter that, we have to do a shift by 1 before calculating each digit, not shift the number once after calculating it. (This only makes a difference when the value sits exactly at a power of 52.) Note that we still assign A to 0, B to 1, etc.
Normally what you would do to change bases is looping with something like
nextDigit = x mod base //determining the last digit
x = x/base //removing the last digit
//terminate if x = 0
However now it is
x = x - 1
nextDigit = x mod base
x = x/base
//terminate if x = 0
So x is decremented by 1 first! Let's do a quick check for x=52:
Regular base 52:
nextDigit = x mod 52 //52 mod 52 = 0 so the next digit is A
x = x/52 //x is now 1
//next iteration
nextDigit = x mod 52 //1 mod 52 = 1 so the next digit is B
x = x/52 //1/52 = 0 in integer arithmetic
//terminate because x = 0
//result is BA
Excel style:
x = x-1 //x is now 51
nextDigit = x mod 52 //51 mod 52 = 51 so the next digit is z
x = x/52 //51/52 = 0 in integer arithmetic
//terminate because x=0
//result is z
It works!
Part 2: Your C++ code
Now let's read your code:
x % y means x mod y
When you do calculations with integers, the result will be an integer, achieved by discarding the fractional part. So 39/10 produces 3, etc.
x++ and ++x both increment x by 1.
You can use this inside other statements to save a line of code: x++ means x is incremented after the statement is evaluated, and ++x means it is incremented before the statement is evaluated.
y=f(x++);
is the same as
y = f(x);
x = x + 1;
while
y=f(++x);
is the same as
x = x + 1;
y = f(x);
This goes the same way for --
char* p creates a pointer to a char.
A pointer points to a certain location in memory. If you change the pointer, it points to a different location: e.g. doing p-- moves the pointer one to the left. To read or write the value saved at that location, use *p. E.g. *p = 'a'; writes 'a' to the memory location that p points at. *p-- = 'a'; also writes 'a' to memory, but the pointer is moved to the left afterwards, so *p is now whatever is in the memory to the left of that 'a'.
C-style strings are just arrays of type char.
The end of such a string is always '\0'; when the computer reads a string, it continues until it finds '\0'.
This is hopefully enough to understand the code. Here it is:
#include <string>
#include <iostream>
#include <climits>
using namespace std;

string base52(unsigned long v)
{
    // The digits. (Arrays start at 0)
    char const digits[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    // The base, derived from the digits that were given
    size_t const base = sizeof(digits) - 1;
    // The array that holds the answer.
    // sizeof(unsigned long)*CHAR_BIT is the number of bits of an unsigned long,
    // which is the absolute longest that v can be in any base (namely base 2).
    // The +1 is to hold the terminating character '\0'.
    char result[sizeof(unsigned long)*CHAR_BIT + 1];
    // This pointer will point to the next digit. It starts at the first byte
    // after the result array (because that is start + length), i.e. it will
    // move through the memory from high to low.
    char* current = result + sizeof(result);
    // The pointer gets moved one to the left (to the last char of result) and
    // the terminating char is added. The pointer has to be moved to the left
    // first because it was pointing to the first byte after result.
    *--current = '\0';
    while (v != 0) {                   // loop until v is zero (no digits left)
        v--;                           // the important part that does the 1 -> A shift
        *--current = digits[v % base]; // move one to the left, save the digit
        v /= base;                     // drop the last digit
    }
    // current points at the last saved (most significant) digit; the rest of
    // the result array before current is not used.
    return current;
}

// for testing
int main()
{
    cout << base52(705);
    return 0;
}
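For completeness, the inverse mapping is the same loop run backwards: each digit contributes its value plus one. A minimal C++ sketch (fromExcelBase52 is a name I made up for illustration):

#include <iostream>
#include <string>

unsigned long fromExcelBase52(const std::string& s) {
    unsigned long v = 0;
    for (char c : s) {
        // A..Z -> 0..25, a..z -> 26..51, mirroring the digits[] table above
        unsigned long d = (c >= 'a') ? (c - 'a' + 26) : (c - 'A');
        v = v * 52 + d + 1; // +1 because this scheme starts counting at 1, not 0
    }
    return v;
}

int main() {
    std::cout << fromExcelBase52("Mc") << '\n'; // prints 705, round-tripping base52(705)
    return 0;
}

This is the same arithmetic the VBA deccimal performs, since InStr() returns the 1-based position, i.e. the digit's value plus one.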
On my Arduino, the following code produces output I don't understand:
void setup() {
    Serial.begin(9600);
    int a = 250;
    Serial.println(a, BIN);
    a = a << 8;
    Serial.println(a, BIN);
    a = a >> 8;
    Serial.println(a, BIN);
}

void loop() {}
The output is:
11111010
11111111111111111111101000000000
11111111111111111111111111111010
I do understand the first line: leading zeros are not printed to the serial terminal. However, after shifting the bits, the data type of a seems to have changed from int to long (32 bits are printed). The expected behaviour is that the bits are shifted to the left, and that bits shifted "out" of the 16 bits an int has are simply dropped. Shifting the bits back does not turn the "32-bit" variable into a "16-bit" one again.
Shifting by 7 or less positions does not show this effect.
I probably should say that I am not using the Arduino IDE, but the Makefile from https://github.com/sudar/Arduino-Makefile.
What is going on? I almost expect this to be "normal", but I don't get it. Or is it something in the printing routine that simply adds 16 '1's to the output?
Enno
In addition to the other answers: integers might be stored in 16 bits or 32 bits, depending on which Arduino you have.
The function printing numbers in Arduino is defined in /arduino-1.0.5/hardware/arduino/cores/arduino/Print.cpp
size_t Print::printNumber(unsigned long n, uint8_t base) {
    char buf[8 * sizeof(long) + 1]; // Assumes 8-bit chars plus zero byte.
    char *str = &buf[sizeof(buf) - 1];

    *str = '\0';

    // prevent crash if called with base == 1
    if (base < 2) base = 10;

    do {
        unsigned long m = n;
        n /= base;
        char c = m - base * n;
        *--str = c < 10 ? c + '0' : c + 'A' - 10;
    } while (n);

    return write(str);
}
All the other print functions rely on this one, so yes, your int gets promoted to an unsigned long when you print it, not when you shift it.
However, the library is correct. By shifting left 8 positions, the sign bit of the int becomes '1', so when the value is promoted to a wider type, the runtime correctly sign-extends it, padding with 16 extra '1's instead of '0's.
If you are using such a value not as a number but to contain some flags, use unsigned int instead of int.
ETA: for completeness, I'll add further explanation for the second shifting operation.
Once you touch the sign bit inside the int, shifting towards the right makes the runtime pad the number with '1's in order to preserve its negative value: shifting to the right by k positions corresponds to dividing the number by 2^k, and since the number is negative to start with, the result must remain negative.
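The same effect can be reproduced off the Arduino. A small sketch, assuming int16_t stands in for the Arduino's 16-bit int (hex output instead of BIN, for brevity):

#include <cstdint>
#include <cstdio>

int main() {
    int16_t a = 250;            // 0000 0000 1111 1010
    a = (int16_t)(a << 8);      // 1111 1010 0000 0000: the sign bit is now set
                                // (the shift happens at int width; the cast narrows
                                // back to 16 bits, as the assignment does on Arduino)
    int32_t widened = a;        // the promotion sign-extends: upper 16 bits become 1s
    printf("%08X\n", (uint32_t)widened);    // FFFFFA00
    a = (int16_t)(a >> 8);      // arithmetic right shift preserves the sign
    printf("%08X\n", (uint32_t)(int32_t)a); // FFFFFFFA
    return 0;
}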
I need help with adding the 16-bit values that are concatenated in 'bits'. Every time a set of 16 bits is concatenated, I want it added (binary addition) to a running sum, until all sets of 16 in my string have been processed. If there is an overflow (length of the final sum > 16), then add that extra bit back to the final sum as 0000000000000001 (the 1 being in the 16th, least significant position).
For a string entered: "hello"
std::vector<std::string> bitvec;
std::string bits;
for (int i = 0; i < s.size(); i += 2) {
    bits = std::bitset<8>(s[i]).to_string() + std::bitset<8>(s[i + 1]).to_string();
    bitvec.push_back(bits);
}
Possible problems:
If s holds "hello", then std::bitset<8>(s[i]) is constructed from the char's numeric value through bitset's unsigned long long constructor, so std::bitset<8>('h') holds 01101000 (104); the constructor that needs a string of '1's and '0's only participates when you actually pass a string, so that part is fine.
However, you can't add the bitsets together by using the to_string() function; that will just concatenate the representations: "1011" + "1100" will become "10111100".
Oh, wait, maybe that's what you do want.
It sort of sounds like you are inventing a complicated way to sum the pairs of ASCII values interpreted as 16-bit numbers, but it's not clear. Your code is roughly equivalent to something like:
std::vector<uint16_t> bitvec;
const unsigned char* cp = reinterpret_cast<const unsigned char*>(s.c_str());
while (cp[0] && cp[1]) {
    uint16_t bits = (uint16_t)((cp[0] << 8) | cp[1]); // first char is the high byte,
                                                      // as in your bitset version
    bitvec.push_back(bits);
    cp += 2; // note: an odd trailing char is dropped here, whereas your version
             // pairs it with the string's terminating '\0'
}
// sum over the numbers contained in bitvec here?
uint32_t sum = 0;
for (std::vector<uint16_t>::iterator j = bitvec.begin(); j != bitvec.end(); ++j) {
    sum += *j;
    uint16_t overflow = sum >> 16; // capture the overflow bit
    sum &= (1 << 16) - 1;          // clear the overflow
    sum += overflow;               // add it back as the lsb (end-around carry)
}
I know that to pad an integer with zeroes I can do the following,
NSString *version = [NSString stringWithFormat:@"%03d", appVersion.intValue];
This will add up to 3 zeroes to the left of the integer if needed. However I would like to do the same but instead of padding to the left, I need to pad to the right.
How can I pad an integer with zeroes but to the right of the integer? (Ideally with similar simplicity as the above method)
Here's an example of what I need,
If the integer turns out to be 38, I need the version string to come out as 380. (Only one zero was added to the end of the integer because I want a maximum of three characters; if the integer is less than three characters long, zeroes are added to make it three.)
The method I am currently using will give me 038. I need 380.
Or if the integer is 381, then the output should be 381, because I only want a maximum of three characters.
Can you multiply the value by 1000, and then print that? If you need a lot of zeroes, you might generally want to promote the int to a long or long long before multiplying, to reduce the likelihood of overflow.
NSString *version = [NSString stringWithFormat:@"%ld", ((long)appVersion.intValue)*1000];
If you need the length of the int in digits before deciding which power of ten to multiply by, take the floor of the decimal log of the value and add one. For example:
int foo = 123;
int digitsInFoo = floor(log10(foo)) + 1; /* 3 */
Then use some rule to decide how many zero digits to add.
int totalDesiredLength = 5;
int numberOfZeroesNeeded = totalDesiredLength - digitsInFoo; /* this assumes totalDesiredLength >= digitsInFoo */
int factor = (int) pow(10, numberOfZeroesNeeded); /* 100 */
Then the five-digit result will be foo * factor or 12300. You can adjust this for your situation as needed. Test for bounds so that you don't end up with weird results.
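Pulling those steps together, a small sketch (padRight is a hypothetical name; this is plain C/C++, but the arithmetic matches the Objective-C fragments above, including the bounds guard):

#include <cmath>
#include <cstdio>

int padRight(int value, int totalDesiredLength) {
    int digits = (value > 0) ? (int)floor(log10(value)) + 1 : 1;
    if (digits >= totalDesiredLength)
        return value;                 // already long enough; nothing to pad
    return value * (int)pow(10, totalDesiredLength - digits);
}

int main() {
    printf("%d\n", padRight(38, 3));  // 380
    printf("%d\n", padRight(381, 3)); // 381
    printf("%d\n", padRight(12, 5));  // 12000
    return 0;
}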
char buf[4]; // up to 3 digits plus the terminator
int len = sprintf(buf, "%u", 38);
while (len < 3)
    buf[len++] = '0';
buf[len] = '\0';
You might prefer sprintf_s over sprintf. And if you're really paranoid, check that the value returned from sprintf isn't negative.