I have a long long holding ASCII hex values and want to convert it to a string. I have this code:
char myBuffer[8];
long long myLongLong = 0x7177657274797569;
sprintf(myBuffer,"%c%c%c%c%c%c%c%c",myLongLong);
int x;
cout << myBuffer;
cin >> x;
return 0;
The hex value should decode to "qwertyui", but it always prints something else.
I tried %c, %s, and %X, but none of them gives the output I need; the closest was %c, but it prints only one char.
That code is wrong in so many ways I don't know where to start...
myBuffer is too small to hold the 8 chars plus the NUL terminator, i.e. it should be myBuffer[9].
sprintf is expecting 8 arguments after the format string, but you're only passing 1; that's undefined behaviour, and the missing arguments will be whatever happens to be on the stack.
myLongLong is not a char, so a single %c doesn't match the argument you're passing.
You don't take into account endianness.
You're using C functions and doing things in a C way in C++. Why not use std::string instead of C-style strings, and stringstreams as an alternative to sprintf?
The closest working example of what you want, keeping it as similar to your code as possible, is something like:
#include <cstdio>
#include <iostream>
using namespace std;

int main(void)
{
    char myBuffer[9];
    long long myLongLong = 0x7177657274797569;
    char *c_ptr = (char*)&myLongLong;
    sprintf(myBuffer, "%c%c%c%c%c%c%c%c",
            c_ptr[0], c_ptr[1], c_ptr[2], c_ptr[3],
            c_ptr[4], c_ptr[5], c_ptr[6], c_ptr[7]);
    int x;
    cout << myBuffer;
    cin >> x;
    return 0;
}
Which will output "iuytrewq" on my little-endian machine. As I mentioned, that doesn't take into account the endianness. If the machine is little-endian then you could read/print the bytes in reverse.
I really don't understand why you're trying to do this though...
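As an aside, a more C++-flavoured sketch of the same idea (my own variation, not the original poster's code), walking the bytes from the most significant end so the result doesn't depend on endianness:

#include <iostream>
#include <string>

int main()
{
    long long myLongLong = 0x7177657274797569;
    std::string result;
    // take bytes from the most significant end downwards,
    // so we get "qwertyui" regardless of machine endianness
    for (int i = 7; i >= 0; --i)
        result += static_cast<char>((myLongLong >> (i * 8)) & 0xFF);
    std::cout << result << '\n';
}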
You could try
union { char buf[9]; long long num; } u = {};   // zero-init so buf ends with a NUL
u.num = 0x7177657274797569LL;
cout << u.buf << endl;   // type punning through a union; prints "iuytrewq" on a little-endian machine
But I don't really understand what you want to do. What about endianness?
Use a string stream
long long myLongLong = 0x7177657274797569;
std::stringstream ss;
ss << std::hex << myLongLong;
std::cout << ss.str() << std::endl;   // prints the hex digits "7177657274797569", not the decoded characters
You want to print each byte of the long long as an ASCII char?
Then you need to loop over the long long, extracting one byte at a time; look at bit shifts and masking.
Hint: it's generally easier (if you know the length) to work from the last byte and shift right.
Or you could just memcpy the long long into the char array, except for any byte-ordering issues.
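For what it's worth, a minimal sketch of that memcpy route (my own illustration; on a little-endian machine the bytes come out reversed, as noted above):

#include <cstring>
#include <iostream>

int main()
{
    long long myLongLong = 0x7177657274797569;
    char myBuffer[9] = {};                          // 8 bytes + NUL terminator
    std::memcpy(myBuffer, &myLongLong, sizeof myLongLong);
    std::cout << myBuffer << '\n';                  // "iuytrewq" on little-endian
}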
Try the following code.
#include <iostream>
using namespace std;

int main(void)
{
    char myBuffer[9];                     // 8 chars + NUL terminator
    long long myLongLong = 0x7177657274797569;
    for(int i = 0; i < 8; i++)
    {
        // take bytes from the most significant end, so the output
        // is "qwertyui" regardless of endianness
        myBuffer[i] = myLongLong >> (64 - (i+1)*8);
    }
    myBuffer[8] = '\0';
    cout << myBuffer << endl;
    return 0;
}
In C++, I don't understand this behaviour and need your help.
In this topic, the answers say to use to_string, but they (and cppreference) describe 'to_string' as converting a bitset to a string.
So I'm wondering how to take some data (a char or a string; ASCII, and maybe Unicode too?) and split it into bits so that the individual bits can be processed.
The question was "How to convert a char to bits?", and the answers say "use to_string on a bitset", but what I want is to get each bit of my input.
Can I split the bits of values of many types, analyze them, and process them? If so, how?
#include <iostream>
#include <bitset>
#include <string>
using namespace std;

int main() {
    char letter;
    cout << "letter: " << endl;
    cin >> letter;
    cout << bitset<8>(letter).to_string() << endl;

    bitset<8> letterbit(letter);
    int lettertest[8];
    for (int i = 0; i < 8; ++i) {
        lettertest[i] = letterbit.test(i);
    }

    cout << "letter bit: ";
    for (int i = 0; i < 8; ++i) {
        cout << lettertest[i];
    }
    cout << endl;

    int test = letterbit.test(0);
}
When I execute this code, I get the result I want, but I don't understand 'to_string'.
The important point is the use of "to_string": to_string is a function that converts a bitset to a string (it's right there in the name), so is there a function that converts a string to a bitset?
In my code I use it starting from a letter, which effectively goes char -> bitset (which is the result I wanted in the first place).
Help me understand what is happening here.
Q: What is a bitset?
https://www.cplusplus.com/reference/bitset/bitset/
A bitset stores bits (elements with only two possible values: 0 or 1,
true or false, ...).
The class emulates an array of bool elements, but optimized for space
allocation: generally, each element occupies only one bit (which, on
most systems, is eight times less than the smallest elemental type:
char).
In other words, a "bitset" is a binary object (like an "int", a "char", a "double", etc.).
Q: What is bitset<>.to_string()?
Bitsets have the feature of being able to be constructed from and
converted to both integer values and binary strings (see its
constructor and members to_ulong and to_string). They can also be
directly inserted and extracted from streams in binary format (see
applicable operators).
In other words, to_string() allows you to convert the binary bitset to text.
Q: How do I convert a char (or string, or other type) -> bits?
A: Per the above, simply construct a bitset<> from the value (and use to_string() or to_ulong() to get it back out).
Here is an example:
https://en.cppreference.com/w/cpp/utility/bitset/to_string
Code:
#include <iostream>
#include <bitset>

int main()
{
    std::bitset<8> b(42);
    std::cout << b.to_string() << '\n'
              << b.to_string('*') << '\n'
              << b.to_string('O', 'X') << '\n';
}
Output:
00101010
**1*1*1*
OOXOXOXO
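To go the other way (a string of '0'/'1' characters back to a bitset), the std::bitset constructor that takes a string does the job; a minimal sketch:

#include <bitset>
#include <iostream>
#include <string>

int main()
{
    std::string bits = "00101010";
    std::bitset<8> b(bits);               // string -> bitset
    std::cout << b.to_ulong() << '\n';    // 42
    std::cout << b.to_string() << '\n';   // back to "00101010"
}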
I have a simple program converting dynamic char array to hex string representation.
#include <iostream>
#include <sstream>
#include <iomanip>
#include <string>
using namespace std;

int main(int argc, char const* argv[]) {
    int length = 2;
    char *buf = new char[length];
    buf[0] = 0xFC;
    buf[1] = 0x01;

    stringstream ss;
    ss << hex << setfill('0');
    for(int i = 0; i < length; ++i) {
        ss << std::hex << std::setfill('0') << std::setw(2) << (int) buf[i] << " ";
    }

    string mystr = ss.str();
    cout << mystr << endl;
}
Output:
fffffffc 01
Expected output:
fc 01
Why is this happening? What are those ffffff before fc? This happens only on certain bytes, as you can see the 0x01 is formatted correctly.
Three things you need to know to understand what's happening:
The first thing is that char can be either signed or unsigned; which one is implementation (compiler) specific.
When converting a small signed type to a larger signed type (e.g. a signed char to an int), the value is sign-extended.
Negative values are stored using the (most common) two's complement system, where the highest bit of a value indicates whether it is negative (bit set) or not (bit clear).
What happens here is that char seems to be signed on your platform, so 0xfc is treated as a negative value, and when you convert it to an int it is sign-extended to 0xfffffffc.
To solve it, cast explicitly through unsigned char and convert to unsigned int.
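A minimal sketch of that fix, assuming the rest of the program stays as in the question (widening through unsigned char before the int conversion):

#include <iostream>
#include <sstream>
#include <iomanip>

int main() {
    char buf[2] = { static_cast<char>(0xFC), 0x01 };
    std::stringstream ss;
    ss << std::hex << std::setfill('0');
    for (char c : buf) {
        // widen via unsigned char so 0xFC prints as "fc" rather than "fffffffc"
        ss << std::setw(2) << static_cast<unsigned int>(static_cast<unsigned char>(c)) << ' ';
    }
    std::cout << ss.str() << '\n';   // prints "fc 01 "
}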
This is called "sign extension".
char is a signed type on your platform, so 0xfc becomes a negative value when you force it into a char.
Its decimal value is -4.
When you cast it to int, the sign bit is extended so you get the same value.
(That happens here: (int) buf[i].)
On your system, int is 4 bytes, so you get the extra bytes filled with ff.
I've written a small program to convert chars to an int. The program reads in digit characters such as '1234' and then outputs the int 1234:
#include <iostream>
using namespace std;

int main(){
    cout << "Enter a number with as many digits as you like: ";
    char digit_char = cin.get();     // Read in the first digit
    int number = digit_char - '0';
    digit_char = cin.get();          // Read in the next digit
    while(digit_char != ' '){        // While there is another digit
        // Shift the number one decimal place to the left, add the new digit
        number = number * 10 + (digit_char - '0');
        digit_char = cin.get();      // Read the next digit
    }
    cout << "Number entered: " << number << endl;
    return 0;
}
This works fine with short inputs, but if I enter a long one (11 digits or more) like 12345678901, the program returns the wrong result, -539222987.
What's going on?
12345678901 in binary is 34 bits. As a result, you overflowed the integer value and set the sign bit.
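A quick illustration of that claim (assuming a typical 32-bit int):

#include <iostream>
#include <limits>

int main()
{
    // 12345678901 needs 34 bits; a 32-bit signed int tops out at 2147483647
    std::cout << std::numeric_limits<int>::max() << '\n';   // 2147483647
    std::cout << 12345678901LL << '\n';                     // too big for int
}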
Type int is not wide enough to store such big numbers. Try to use unsigned long long int instead of the type int.
You can check what maximum number can be represented in the given integer type. For example
#include <iostream>
#include <limits>

int main()
{
    std::cout << std::numeric_limits<unsigned long long int>::max() << std::endl;
}
In C you can use constant ULLONG_MAX defined in header <limits.h>
Instead of using int, try to use unsigned long long int for your variable number.
That should solve your problem.
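For illustration, a sketch of that change applied to the question's loop (stopping at a newline as well as a space is my own addition, so pressing Enter ends the input):

#include <iostream>
using namespace std;

int main(){
    cout << "Enter a number with as many digits as you like: ";
    char digit_char = cin.get();
    unsigned long long number = digit_char - '0';   // wide enough for 12345678901
    digit_char = cin.get();
    while(digit_char != ' ' && digit_char != '\n'){
        number = number * 10 + (digit_char - '0');
        digit_char = cin.get();
    }
    cout << "Number entered: " << number << endl;
    return 0;
}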
Overflowed integer. Use unsigned long long int.
Is there a simple way to convert a binary bitset to hexadecimal? The function will be used in a CRC class and will only be used for standard output.
I've thought about using to_ulong() to convert the bitset to an integer, then converting the integers 10-15 to A-F with a switch statement. However, I'm looking for something a little simpler.
I found this code on the internet:
#include <iostream>
#include <string>
#include <bitset>
using namespace std;

int main(){
    string binary_str("11001111");
    bitset<8> set(binary_str);
    cout << hex << set.to_ulong() << endl;
}
It works great, but I need to store the output in a variable then return it to the function call rather than send it directly to standard out.
I've tried to alter the code but keep running into errors. Is there a way to change the code to store the hex value in a variable? Or, if there's a better way to do this please let me know.
Thank you.
You can send the output to a std::stringstream, and then return the resultant string to the caller:
stringstream res;
res << hex << uppercase << set.to_ulong();
return res.str();
This would produce a result of type std::string.
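Wrapped up as a complete sketch (the helper name to_hex_string is my own, not from the question):

#include <bitset>
#include <iostream>
#include <sstream>
#include <string>

// hypothetical helper: convert a bitset to an uppercase hex string
std::string to_hex_string(const std::bitset<8>& set) {
    std::stringstream res;
    res << std::hex << std::uppercase << set.to_ulong();
    return res.str();
}

int main() {
    std::bitset<8> set(std::string("11001111"));
    std::string hex_value = to_hex_string(set);   // store it in a variable
    std::cout << hex_value << std::endl;          // prints "CF"
}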
Here is an alternative for C:
unsigned int bintohex(char *digits){
    unsigned int res = 0;
    while(*digits)
        res = (res<<1) | (*digits++ - '0');
    return res;
}
//...
unsigned int myint=bintohex("11001111");
//store value as an int
printf("%X\n",bintohex("11001111"));
//prints hex formatted output to stdout
//just use sprintf or snprintf similarly to store the hex string
Here is the easy alternative for C++:
bitset <32> data;
/*Perform operation on data*/
cout << "data = " << hex << data.to_ulong() << endl;
I have a string that looks like this:
"0.4794255386042030002732879352156"
which is approximately sin(0.5). I would like to format the string to look much nicer:
"4.794255386042e-1"
How can I achieve this? Remember I am dealing with strings, not numbers (float, double). Also, I need to round to keep the number as accurate as possible; I can't just truncate. If I need to convert to a different data type, I would prefer a long double, because a regular double doesn't have enough precision. I'd like at least 12 decimal digits before rounding. Perhaps there is a simple sprintf() conversion I could do.
Something like this:
#include <cstdio>
#include <cstdlib>
#include <iostream>
using namespace std;

int main()
{
    const char *s = "0.4794255386042030002732879352156";
    double d;
    sscanf(s, "%lf", &d);
    printf("%.12e\n", d);
    return EXIT_SUCCESS;
}
Output:
# g++ a.cpp && ./a.out
4.794255386042e-01
Are you looking for something like this?
Here is a sample:
// modify basefield
#include <iostream>
#include <sstream>
#include <string>
using namespace std;

int main () {
    std::string numbers("0.4794255386042030002732879352156");
    std::stringstream stream;
    stream << numbers;
    double number_fmt;
    stream >> number_fmt;

    cout.precision(30);
    cout << number_fmt << endl;

    cout.precision(5);
    cout << scientific << number_fmt << endl;
    return 0;
}
Output:
0.479425538604203005377257795772
4.79426e-01
In highly portable C the working example below outputs:
result is 4.794255386042E-01
#include <stdio.h>

int main()
{
    char *str = "0.4794255386042030002732879352156";
    double f;
    char newstr [50];

    // convert the string in `str` to a double
    sscanf (str, "%le", &f);
    sprintf (newstr, "%.12E", f);

    printf ("result is %s\n", newstr);
    return 0;
}
Looking at the strings in your question, it would seem you are working with base-10 numbers. In that case, wouldn't it be relatively easy to scan the string directly, count the leading or trailing zeros, and put that count in the exponent?
Maybe I'm missing something...
An IEEE 754 64 bit float (i.e. double precision), is good for 15 decimal significant figures, so you should have no problem converting to double. You are more likely to run into the problem of getting the format specifier to display all available digits. Although from the examples posted, this seems not to be the case.
Convert to long double using sscanf(), then use sprintf() to print it back out as a string, using the scientific formatter:
long double x;
char num[64];
if(sscanf(string, "%Lf", &x) == 1)
    sprintf(num, "%.12Le", x);
Not sure if even long double actually gives you 12 significant digits, though. On my system, this generates "4.79425538604e-01".
There is no standard function in either C or C++ to do this. The normal approach is either to convert to a number (and then use standard functions to format the number) or write your own custom output function.
#include <cstdlib>
#include <iostream>
using namespace std;
...
double dd = strtod( str, nullptr );   // strtod takes a second argument (the end pointer)
cout << scientific << dd << endl;
Depending on how many decimal places you want (12 in this case) you could do something like this:
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <string>
using namespace std;

int main() {
    char buff[500];
    string x = "0.4794255386042030002732879352156";
    double f = atof(x.c_str());
    sprintf(buff, "%.12fe-1", f*10);   // print 12 decimals of f*10 and append a literal "e-1"
    cout << buff << endl;
    return 0;
}
Result:
4.794255386042e-1