I use this simple code to convert a decimal number into binary:
#include <iostream>
#include <windows.h>
#include <bitset>
using namespace std;

int main(int argc, char const *argv[]) {
    unsigned int n;
    cout << "# Decimal: "; cin >> n; cout << endl;
    bitset<16> binary(n);
    cout << endl << "# Binary: " << binary << endl;
    system("Pause");
    return 0;
}
How do I convert "binary" back into decimal and assign the value to another variable?
n is not "a decimal". I think you have a misconception of what numbers are, based on the default output representation used by IOStreams. They are numbers. Not decimal strings, binary strings, hexadecimal strings, octal strings, base-64 strings, or any kind of strings. But numbers.
The way you choose to represent them on output is entirely orthogonal to the way they are stored internally (which is, actually, base-2 not decimal), so it is highly likely that these "conversions" you're trying to do are inappropriate.
However, if you wish to extract an integer from a std::bitset instance, you may do so using the to_ulong() member function.
Get into the habit of using the documentation.
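For example, a minimal sketch of the full round trip (values chosen arbitrarily for illustration):

#include <bitset>
#include <iostream>

int main() {
    unsigned int n = 42;
    std::bitset<16> binary(n);                   // number -> bit pattern
    unsigned long m = binary.to_ulong();         // bit pattern -> number
    std::cout << binary << " == " << m << '\n';  // 0000000000101010 == 42
}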
In C++, I don't understand this behavior, and I need your help.
In this topic, the answers say to use to_string, but they say to_string converts a bitset to a string, and cppreference says so too.
So I wonder how to convert some data (a char or a string; maybe ASCII, and can Unicode be converted too?) into bits, meaning the data can be split into bits and each bit can be processed.
The question was "How to convert char to bits?", and the answers said "use to_string in bitset", but I want to get each individual bit of my input.
Can I split the bits of many types apart, analyze them, and process them? If I can, how?
#include <iostream>
#include <bitset>
#include <string>
using namespace std;

int main() {
    char letter;
    cout << "letter: " << endl;
    cin >> letter;
    cout << bitset<8>(letter).to_string() << endl;
    bitset<8> letterbit(letter);
    int lettertest[8];
    for (int i = 0; i < 8; ++i) {
        lettertest[i] = letterbit.test(i);   // test(i) returns bit i (LSB first)
    }
    cout << "letter bit: ";
    for (int i = 0; i < 8; ++i) {
        cout << lettertest[i];
    }
    cout << endl;
    int test = letterbit.test(0);
}
When executing this code, I get the result I want, but I don't understand to_string.
The important point is the use of to_string: it is a function that converts a bitset to a string (it's in the name), so is there a function that converts a string to a bitset?
Actually, in my code I use the function with a letter, and it seems to convert the letter to bits (which is, at first, the result I want).
Help me understand this behavior.
Q: What is a bitset?
https://www.cplusplus.com/reference/bitset/bitset/
A bitset stores bits (elements with only two possible values: 0 or 1,
true or false, ...).
The class emulates an array of bool elements, but optimized for space
allocation: generally, each element occupies only one bit (which, on
most systems, is eight times less than the smallest elemental type:
char).
In other words, a "bitset" is a binary object (like an "int", a "char", a "double", etc.).
Q: What is bitset<>.to_string()?
Bitsets have the feature of being able to be constructed from and
converted to both integer values and binary strings (see its
constructor and members to_ulong and to_string). They can also be
directly inserted and extracted from streams in binary format (see
applicable operators).
In other words, to_string() allows you to convert the binary bitset to text.
Q: How do I convert a char (or string, or other type) -> bits?
A: Per the above, simply construct a bitset<> from the value (the constructor accepts integer values, and a char converts implicitly); use to_string() to see the bits as text, or to_ulong() to go back to an integer.
Here is an example:
https://en.cppreference.com/w/cpp/utility/bitset/to_string
Code:
#include <iostream>
#include <bitset>
int main()
{
std::bitset<8> b(42);
std::cout << b.to_string() << '\n'
<< b.to_string('*') << '\n'
<< b.to_string('O', 'X') << '\n';
}
Output:
00101010
**1*1*1*
OOXOXOXO
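For the reverse direction asked about here (string -> bitset), no separate function is needed: the bitset constructor itself accepts a string of '0'/'1' characters. A minimal sketch:

#include <bitset>
#include <iostream>
#include <string>

int main() {
    std::string s = "00101010";
    std::bitset<8> b(s);                // string -> bitset via the constructor
    std::cout << b.to_ulong() << '\n';  // prints 42
}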
I hope this is not a naive question: is type conversion performed implicitly in C++? I asked the user to input a number in hexadecimal format, and then when I output that number to the screen without mentioning its format, it is displayed in decimal format. Am I missing something here?
#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    int number = 0;
    cout << "\nEnter a hexadecimal number: " << endl;
    cin >> hex >> number;
    cout << "Your decimal input: " << number << endl;
}
There's no type conversion between hexadecimal and decimal here. Internally your number will be stored in two's complement (a binary representation) no matter whether it was read in as a hex or a decimal number. Converting from a decimal/hex string to an integer, and the other way around, happens only when the number is input or output.
With std::hex you tell the stream to change its default numeric base for integer I/O. Without it, the default is decimal. So if you only apply it to std::cin, then std::cin reads numbers as hex, but std::cout still outputs decimal numbers. If you want std::cout to also change its base to hexadecimal, you have to do the same with it:
std::cout << std::hex << "Your hexadecimal input: " << number << std::endl;
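Put together, a minimal sketch (the sample input "ff" in the comments is assumed):

#include <iostream>

int main() {
    int number = 0;
    std::cin >> std::hex >> number;           // input "ff" is read as 255
    std::cout << std::hex << number << '\n';  // prints ff
    std::cout << std::dec << number << '\n';  // prints 255
}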
I've been working on an assignment where I have to use bitwise operators (OR, AND, or NOT).
The program has a fixed 4x4 matrix, and the user is supposed to enter a query to the program, ANDing two BINARY numbers, ORing them, etc.
The problem is the leading-zero binary numbers: for example, 0111 is shown with the value 73.
Even when I manage to cout it with setfill() and setw(),
I can't perform the bitwise operation on the actual binary value!
N.B.: I've tried strings instead of ints, but the bitwise operations still don't apply.
For example:
If I want to AND two binary values, say
int x=1100 and int y=0100, into another int z:
z=x&y;
the result is supposed to be 0100,
but the result that appears is 64,
which is also the result that appears if I try to print y to the screen.
#include <iostream>
#include <string>
#include <iomanip>
using namespace std;

int main()
{
    int Matrix[4][4] = {{1,1,0,0},{1,1,0,1},{1,1,0,1},{0,1,0,0}};
    string Doc[4] = {"Doc1","Doc2","Doc3","Doc4"};
    string Term[4] = {"T1","T2","T3","T4"};
    cout << "THE MATRIX IS:" << endl;
    for (int i = 0; i < 4; i++)
    {
        cout << "\t" << Doc[i];
    }
    cout << "\n";
    for (int row = 0; row < 4; row++)
    {
        cout << Term[row] << "\t";
        for (int col = 0; col < 4; col++)
        {
            cout << Matrix[row][col] << "\t";
        }
        cout << endl;
    }
    int term1 = 1100;
    cout << "\nTerm1= " << term1;
    int term2 = 1101;
    cout << "\nTerm2= " << term2;
    int term3 = 1101;
    cout << "\nTerm3= " << term3;
    int term4 = 0100;   // leading zero: this is an octal literal (value 64)
    cout << "\nTerm4= " << setfill('0') << setw(4) << term4;
    int Q = term1 & term4;
    cout << "\n Term1 and Term4 =" << Q;
    system("pause");
    return 0;
}
When you write 0111 in your code, the compiler will assume it's octal, since octal literals start with a zero. If you wrote 111 it would be decimal.
C++14 added a binary literal prefix, so you can write 0b111 to get what you want.
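A minimal sketch of the AND from the question using binary literals (printed through a bitset so the leading zero shows):

#include <bitset>
#include <iostream>

int main() {
    int x = 0b1100;  // binary literals require C++14
    int y = 0b0100;
    int z = x & y;   // bitwise AND on the actual values
    std::cout << std::bitset<4>(z) << '\n';  // prints 0100
}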
Your question is still not clear. You have said you have a 4x4 matrix; what type of matrix or 2D array is it? Maybe you can elaborate more.
Regarding dealing with binary, what students usually get confused about is this: if you are using integer variables, you can apply bitwise manipulation to those variables and the result will still read as an ordinary integer. If you want to see what is happening during the bitwise manipulation and visualize the process, you can always use a bitset object, as follows.
#include <iostream>
#include <bitset>

int main() {
    int a = 7, b = a >> 3, c = a << 2;   // shift right by 3, left by 2
    std::cout << "a = " << std::bitset<8>(a) << std::endl;
    std::cout << "b = " << std::bitset<8>(b) << std::endl;
    std::cout << "c = " << std::bitset<8>(c) << std::endl;
}
Which should print:
a = 00000111
b = 00000000
c = 00011100
So playing around with your variables and then visualizing them as binary using bitset is the best way to learn how the HEX, OCT, DEC, and BIN representations work.
And by the way, if you read 73 as an integer, then that memory location stores 0100 1001 in binary (if it's unsigned), which is 111 in octal (the base-8 number representation). See http://coderstoolbox.net/number/
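A quick sketch showing that same value in each base:

#include <bitset>
#include <iostream>

int main() {
    int n = 73;
    std::cout << std::bitset<8>(n) << '\n';  // 01001001
    std::cout << std::oct << n << '\n';      // 111
    std::cout << std::hex << n << '\n';      // 49
    std::cout << std::dec << n << '\n';      // 73
}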
Best of luck
I am using C++ and I would like to format doubles in the following obvious way. I have tried playing with 'fixed' and 'scientific' using stringstream, but I am unable to achieve this desired output.
double d = -5; // print "-5"
double d = 1000000000; // print "1000000000"
double d = 3.14; // print "3.14"
double d = 0.00000000001; // print "0.00000000001"
// Floating point error is acceptable:
double d = 10000000000000001; // print "10000000000000000"
As requested, here are the things I've tried:
#include <iostream>
#include <string>
#include <sstream>
#include <iomanip>
using namespace std;

string obvious_format_attempt1( double d )
{
    stringstream ss;
    ss.precision(15);
    ss << d;
    return ss.str();
}

string obvious_format_attempt2( double d )
{
    stringstream ss;
    ss.precision(15);
    ss << fixed;
    ss << d;
    return ss.str();
}

int main(int argc, char *argv[])
{
    cout << "Attempt #1" << endl;
    cout << obvious_format_attempt1(-5) << endl;
    cout << obvious_format_attempt1(1000000000) << endl;
    cout << obvious_format_attempt1(3.14) << endl;
    cout << obvious_format_attempt1(0.00000000001) << endl;
    cout << obvious_format_attempt1(10000000000000001) << endl;

    cout << endl << "Attempt #2" << endl;
    cout << obvious_format_attempt2(-5) << endl;
    cout << obvious_format_attempt2(1000000000) << endl;
    cout << obvious_format_attempt2(3.14) << endl;
    cout << obvious_format_attempt2(0.00000000001) << endl;
    cout << obvious_format_attempt2(10000000000000001) << endl;
    return 0;
}
That prints the following:
Attempt #1
-5
1000000000
3.14
1e-11
1e+16
Attempt #2
-5.000000000000000
1000000000.000000000000000
3.140000000000000
0.000000000010000
10000000000000000.000000000000000
There is no way for a program to KNOW how to format the numbers in the way that you are describing, unless you write some code to analyze the numbers in some way, and even that can be quite hard.
What is required here is knowledge of the input format in your source code, and that is lost as soon as the compiler converts the decimal source text into the binary form stored in the executable file.
One alternative that may work is to output to a stringstream, and then modify the resulting string to strip the trailing zeros. Something like this:
string obvious_format_attempt2( double d )
{
    stringstream ss;
    ss.precision(15);
    ss << fixed;
    ss << d;
    string res = ss.str();
    // Do we have a dot?
    string::size_type pos = res.find('.');
    if (pos != string::npos)
    {
        // Strip trailing zeros, and the dot too if nothing follows it
        string::size_type last = res.find_last_not_of('0');
        if (last == pos)
            --last;
        res.erase(last + 1);
    }
    return res;
}
I haven't actually tried it, but as a rough sketch, it should work. A caveat is that if you have something like 0.1, it may well print as 0.09999999999999285 or some such, because 0.1 cannot be represented in exact form as a binary fraction.
Formatting binary floating-point numbers accurately is quite tricky and was traditionally done wrong. A pair of papers published in 1990 in the same journal settled that decimal values converted to binary floating-point numbers and back can have their values restored, assuming they don't use more decimal digits than a specific constraint (represented in C++ by std::numeric_limits<T>::digits10 for the appropriate type T):
Clinger's "How to read floating-point numbers accurately" describes an algorithm to convert from a decimal representation to a binary floating-point.
Steele/White's "How to print floating-point numbers accurately" describes how to convert from a binary floating-point to a decimal value. Interestingly, the algorithm even converts to the shortest such decimal value.
At the time these papers were published, the C formatting directives for binary floating-point ("%f", "%e", and "%g") were already well established, and they were not changed to take the new results into account. The problem with the specification of these formatting directives is that "%f" counts the digits after the decimal point, and there is no format specifier asking for a certain number of significant digits that doesn't necessarily start counting at the decimal point (e.g., to format with a decimal point but potentially having many leading zeros).
The format specifiers still have not been improved, e.g., to include another one for non-scientific notation possibly involving many zeros. Effectively, the power of the Steele/White algorithm isn't fully exposed. The C++ formatting, sadly, didn't improve on the situation and just delegates the semantics to the C formatting directives.
The approach of not setting std::ios_base::fixed and using a precision of std::numeric_limits<double>::digits10 is the closest approximation of floating-point formatting the C and C++ standard libraries offer. The exact format requested could be obtained by getting the digits using formatting with std::ios_base::scientific, parsing the result, and rewriting the digits afterwards. To give this process a nice stream-like interface, it could be encapsulated in a std::num_put<char> facet.
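A minimal sketch of that closest approximation (the function name is mine, for illustration only):

#include <limits>
#include <sstream>
#include <string>

std::string format_closest(double d) {
    std::ostringstream ss;
    ss.precision(std::numeric_limits<double>::digits10);  // 15 for IEEE double
    ss << d;  // default notation: neither std::fixed nor std::scientific set
    return ss.str();
}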
An alternative could be the use of Double-Conversion. This implementation uses an improved (faster) algorithm for the conversion. It also exposes interfaces to get the digits in some form although not directly as a character sequence if I recall correctly.
You can't do what you want to do, because decimal numbers are not representable exactly in floating-point format. In other words, a double can't hold 3.14 precisely; it stores everything as fractions of powers of 2, so it stores it as something like 3 + 9175/65536 or thereabouts (do it on your calculator and you'll get 3.1399993896484375). (I realize that 65536 is not the right denominator for an IEEE double, but the gist of it is correct.)
This is known as the round trip problem. You can't reliably do
double x = 3.14;
cout << magic << x;
and get "3.14"
If you must solve the round-trip problem, then don't use floating point. Use a custom "decimal" class, or use a string to hold the value.
Here's a decimal class you could use:
https://stackoverflow.com/a/15320495/364818
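A short sketch that makes the problem visible (the exact digits assume IEEE doubles and may vary by platform):

#include <iomanip>
#include <iostream>

int main() {
    double x = 3.14;
    // Asking for more digits than a double faithfully carries
    // exposes the nearest representable value:
    std::cout << std::setprecision(17) << x << '\n';  // e.g. 3.1400000000000001
}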
I am using C++ and I would like to format doubles in the following obvious way.
Based on your samples, I assume you want
Fixed rather than scientific notation,
A reasonable (but not excessive) amount of precision (this is for user display, so a small bit of rounding is okay),
Trailing zeros truncated, and
Decimal point truncated as well if the number looks like an integer.
The following function does just that:
#include <cmath>
#include <iomanip>
#include <sstream>
#include <string>
std::string fixed_precision_string (double num) {
    // Magic numbers
    static const int prec_limit = 14;        // Change to 15 if you wish
    static const double log10_fuzz = 1e-15;  // In case log10 is slightly off
    static const char decimal_pt = '.';      // Better: use std::locale

    if (num == 0.0) {
        return "0";
    }

    std::string result;
    if (num < 0.0) {
        result = '-';
        num = -num;
    }

    int ndigs = int(std::log10(num) + log10_fuzz);
    std::stringstream ss;
    if (ndigs >= prec_limit) {
        ss << std::fixed
           << std::setprecision(0)
           << num;
        result += ss.str();
    }
    else {
        ss << std::fixed
           << std::setprecision(prec_limit-ndigs)
           << num;
        result += ss.str();
        auto last_non_zero = result.find_last_not_of('0');
        if (result[last_non_zero] == decimal_pt) {
            result.erase(last_non_zero);
        }
        else if (last_non_zero+1 < result.length()) {
            result.erase(last_non_zero+1);
        }
    }
    return result;
}
If you are using a computer that uses IEEE floating point, changing prec_limit to 16 is inadvisable. While this will let you properly print 0.9999999999999999 as such, it also prints 5.1 as 5.0999999999999996 and 9.99999998 as 9.9999999800000001. These results are from my computer; yours may vary with a different library.
Changing prec_limit to 15 is okay, but it still leads to numbers that don't print "correctly". The value specified (14) works nicely so long as you aren't trying to print 1.0-1e-15.
You could do even better, but that might require discarding the standard library (see Dietmar Kühl's answer).
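For instance, a quick check against the sample values from the question (assuming fixed_precision_string from above is in scope, and IEEE doubles):

#include <iostream>

int main() {
    std::cout << fixed_precision_string(-5) << '\n';             // -5
    std::cout << fixed_precision_string(1000000000) << '\n';     // 1000000000
    std::cout << fixed_precision_string(3.14) << '\n';           // 3.14
    std::cout << fixed_precision_string(0.00000000001) << '\n';  // 0.00000000001
}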
Is there a simple way to convert a binary bitset to hexadecimal? The function will be used in a CRC class and will only be used for standard output.
I've thought about using to_ulong() to convert the bitset to an integer, then converting the integers 10 through 15 to A through F with a switch case. However, I'm looking for something a little simpler.
I found this code on the internet:
#include <iostream>
#include <string>
#include <bitset>
using namespace std;
int main() {
    string binary_str("11001111");
    bitset<8> set(binary_str);
    cout << hex << set.to_ulong() << endl;
}
It works great, but I need to store the output in a variable and return it to the caller rather than send it directly to standard output.
I've tried to alter the code but keep running into errors. Is there a way to change the code to store the hex value in a variable? Or, if there's a better way to do this please let me know.
Thank you.
You can send the output to a std::stringstream, and then return the resultant string to the caller:
stringstream res;
res << hex << uppercase << set.to_ulong();
return res.str();
This would produce a result of type std::string.
Here is an alternative for C:
unsigned int bintohex(char *digits) {
    unsigned int res = 0;
    while (*digits)
        res = (res << 1) | (*digits++ - '0');  // shift in each '0'/'1' digit
    return res;
}
//...
unsigned int myint = bintohex("11001111");  // store the value as an int
printf("%X\n", bintohex("11001111"));       // prints hex formatted output to stdout
// just use sprintf or snprintf similarly to store the hex string
Here is the easy alternative for C++:
bitset<32> data;
/* Perform operations on data */
cout << "data = " << hex << data.to_ulong() << endl;