I have a bitset<8> v8 whose value is something like "11001101". How can I convert it to a char? I need a single letter, like the letter "f" = 01100110.
P.S. Thanks for the help. I needed this to illustrate random bit errors. For example, without an error you get f, and with an error something like ♥, and so on for all the text in a file. In text you can see such errors clearly.
unsigned long i = mybits.to_ulong();
unsigned char c = static_cast<unsigned char>( i ); // simplest -- no checks for 8 bit bitsets
Something along the lines of the above should work. Note that the bitset may contain a value that cannot be represented using a plain char (it is implementation defined whether char is signed or not) -- so you should always check before casting.
char c;
if (i <= CHAR_MAX)
    c = static_cast<char>( i );
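For example, here is a minimal self-contained sketch putting the two snippets above together to simulate a single-bit error (the character and the flipped bit position are arbitrary choices):
#include <bitset>
#include <climits>
#include <iostream>

int main()
{
    std::bitset<8> mybits('f');          // 'f' == 0x66 == 01100110
    mybits.flip(4);                      // simulate a bit error (the position is arbitrary)

    unsigned long i = mybits.to_ulong();
    if (i <= CHAR_MAX)                   // guard against values a plain char may not hold
    {
        char c = static_cast<char>(i);
        std::cout << c << '\n';          // prints the corrupted character ('v' here)
    }
    return 0;
}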
The provided solution did not work for me. I was using C++14 with g++ 9. However, I was able to get it working as follows:
char lower = 'a';
bitset<8> upper(lower);
upper.reset(5);
cout << (char)upper.to_ulong() << endl;
This may not be the best way to do it, I am sure, but it worked for me!
In C++ Primer 5th Edition I saw this
when I tried to use it---
At this time it didn't work: the program's output gave a weird symbol for the unsigned case, but the signed one was totally blank, and the compiler also gave some warnings. But C++ Primer and so many websites said it should work... So I don't think they give wrong information. Did I do something wrong?
I am a newbie, btw :)
But C++ primer ... said it should work
No it doesn't. The quote from C++ Primer doesn't use std::cout at all. The output that you see doesn't contradict what the book says.
So I don't think they give the wrong information
No¹.
did I do something wrong?
It seems that you've possibly misunderstood what the value of a character means, or possibly misunderstood how character streams work.
Character types are integer types (but not all integer types are character types). The values of unsigned char are 0..255 (on systems where a byte is 8 bits). Each² of those values represents some textual symbol. The mapping from a set of values to a set of symbols is called a "character set" or "character encoding".
std::cout is a character stream. << is the stream insertion operator. When you insert a character into a stream, the behaviour is not to show the numerical value. Instead, the behaviour is to show the symbol that the value is mapped to³ in the character set that your system uses. In this case, it appears that the value 255 is mapped to whatever strange symbol you saw on the screen.
If you wish to print the numerical value of a character, what you can do is convert to a non-character integer type and insert that to the character stream:
int i = c;
std::cout << i;
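For example, a minimal sketch contrasting the two behaviours (the value 255 is chosen to match the case above):
#include <iostream>

int main()
{
    unsigned char c = 255;
    std::cout << c << '\n';                    // prints whatever symbol 255 maps to
    std::cout << static_cast<int>(c) << '\n';  // prints 255
}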
¹ At least, there's no wrong information regarding your confusion. The quote is a bit inaccurate and outdated in the case of c2. Before C++20, the value was "implementation defined" rather than "undefined". Since C++20, the value is actually defined, and it is 0, which is the null terminator character that signifies the end of a string. If you try to print this character, you'll see no output.
² This was a bit of a lie for simplicity's sake. Some characters are not visible symbols. For example, there is the null terminator character as well as other control characters. The situation becomes even more complex in the case of variable-width encodings such as the ubiquitous Unicode, where a symbol may consist of a sequence of several chars. In such an encoding, an individual char cannot necessarily be interpreted correctly without the other chars that are part of the sequence.
³ And this behaviour should feel natural once you grok the purpose of character types. Consider the following program:
unsigned char c = 'a';
std::cout << c;
It would be highly confusing if the output were a number that is the value of the character (such as 97, which may be the value of the symbol 'a' on the system) rather than the symbol 'a'.
For extra meditation, think about what this program might print (and feel free to try it out):
char c = 57;
std::cout << c << '\n';
int i = c;
std::cout << i << '\n';
c = '9';
std::cout << c << '\n';
i = c;
std::cout << i << '\n';
This is due to the behavior of the << operator on the char type and the character stream cout. Note that << performs formatted output, meaning it does some implicit formatting.
We can say that the value of a variable is not the same as its representation in certain contexts. For example:
int main() {
    bool t = true;
    std::cout << t << std::endl; // Prints 1, not "true"
}
Think of it this way: why would we need char if it still behaved like a number when printed? Why not just use int or unsigned? In essence, we have different types in order to have different behaviors, which can be deduced from those types.
So the underlying numeric value of a char is probably not what we are looking for when we print one.
Check this for example:
int main() {
    unsigned char c = -1;
    int i = c;
    std::cout << i << std::endl; // Prints 255
}
If I recall correctly, you're somewhat close in the Primer to the topic of built-in type conversions; things will become clearer once you get to know those rules better. Anyway, I'm sure you will benefit greatly from looking into this article, especially the "Printing chars as integers via type casting" part.
I have been working on a project where I need to convert an int8_t variable to a string. I did a bit of reading on this and came to the conclusion that using String(int8_t) is an option.
Here is my code:
int8_t matt = 0;
matt += 1;
char string[3]=String(matt);
tft.textWrite(string);// this is used to display text on an lcd display (arduino)
For some reason the method I used did not work, so I researched a bit more and found out that String(matt) is actually a String object. I don't really know if this is true for a fact. For tft.textWrite() to work, it should look something like this.
Here is the code:
char string[15]= "Hello, World! ";
tft.textWrite(string);
I tried using this also:
char string[3];
sprintf(string, "%ld", matt);
However this did not work.
Can anybody help?
Thanks
It should work fine.
sprintf(string, "%ld", matt);
%ld is used for long int. You are using int8_t, which has the range -128 to +127; it is better to use %d.
This code is nearly right
char string[3];
sprintf(string, "%ld", matt);
first
"%ld"
expects a long int, so you need a conversion specifier that can print an 8-bit int: that is "%hhd" (or the PRId8 macro from <inttypes.h>).
second your
char string[3];
is too short! If matt is 100 you already use 3 chars, and you still need room for the terminating zero (and a possible sign). So it should be at least
char string[5];
third the code
sprintf(string, "%ld", matt);
is unsafe, as it can write beyond the end of string (as it would in your original code); use
snprintf(string, sizeof(string), "%hhd", matt);
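Putting the three points together, here is a minimal sketch in standard C++ (the Arduino-specific tft call is left out; the "%hhd" specifier and the 5-byte buffer follow the reasoning above):
#include <cstdint>
#include <cstdio>

int main()
{
    int8_t matt = 0;
    matt += 1;

    char string[5];                                        // "-128" plus the terminating '\0'
    std::snprintf(string, sizeof(string), "%hhd", matt);
    std::printf("%s\n", string);                           // prints "1"
    return 0;
}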
First you have to keep in mind that there is no String type in C, only fixed-size char arrays (type: char* or char[]). We still call them strings, but keep that in mind.
Your attempt with sprintf looked good to me. Are you sure your environment includes the full C standard library? If not, you will have to code your own format function. If yes, have you included stdio.h (where sprintf is declared)?
Anyway, try to use snprintf() to specify the size of the target; it is safer.
EDIT: This post applies to the good ol' C style, which I did not see was off topic
the following code should work:
// Example program
#include <iostream>
#include <string>
#include <cstdint>
#include <cstdio>
int main()
{
    int8_t matt = 0;
    matt += 1;
    char string[5];                              // room for "-128" plus the terminating '\0'
    std::snprintf(string, sizeof(string), "%hhd", matt);
    std::cout << string << std::endl;
    return 0;
}
it prints "1".
This code seems a bit ridiculous, but it's the only way I found to deal with my problem...
char word[10];
cout << std::hex << static_cast<int>(static_cast<unsigned char>(word[i]));
This is my way of cout-ing a char as a hex value (including signed chars). It seems to work great (to my knowledge), but I feel it's a very stupid way to do it.
I should add, I'm reading a file, that's why my data type is char initially.
You are already doing it the right way, although using int would work as well as unsigned int. You could make a function or a functor if you'll be doing this in several places, e.g.:
int char_to_int(char ch)
{
    return static_cast<unsigned char>(ch);
}
// ...
cout << hex << char_to_int(word[i]);
As noted in comments, another option is word[i] & 0xFF with no casting. This is actually implementation-defined but most likely will give the intended result. But again, if you will be doing this in several places I would suggest wrapping it up in a function so that it is more obvious what is going on.
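For example, a minimal self-contained sketch using such a helper on a few arbitrary sample bytes:
#include <iomanip>
#include <iostream>

int char_to_int(char ch)
{
    return static_cast<unsigned char>(ch);
}

int main()
{
    // arbitrary sample bytes, including ones that are negative when char is signed
    char word[] = { 'A', static_cast<char>(0x80), static_cast<char>(0xFF) };
    for (char ch : word)
    {
        std::cout << std::hex << std::setw(2) << std::setfill('0')
                  << char_to_int(ch) << ' ';
    }
    std::cout << '\n';                   // prints "41 80 ff"
}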
I have generated a long sequence of bytes which looks as follows:
0x401DA1815EB560399FE365DA23AAC0757F1D61EC10839D9B5521F.....
Now, I would like to assign it to a static unsigned char x[].
Obviously, I get the warning that hex escape sequence out of range when I do this here
static unsigned char x[] = "\x401DA1815EB56039.....";
The format it needs is
static unsigned char x[] = "\x40\x1D\xA1\x81\x5E\xB5\x60\x39.....";
So I am wondering if in C there is a way to do this assignment without adding the hex escape before each byte (which could take quite a while).
I don't think there's a way to make a literal out of it.
You can parse the string at runtime and store it in another array.
You can use sed or something to rewrite the sequence:
echo 401DA1815EB560399FE365DA23AAC0757F1D61EC10839D9B5521F | sed -e 's/../\\x&/g'
\x40\x1D\xA1\x81\x5E\xB5\x60\x39\x9F\xE3\x65\xDA\x23\xAA\xC0\x75\x7F\x1D\x61\xEC\x10\x83\x9D\x9B\x55\x21F
AFAIK, No.
But you can use the regex s/(..)/\\x$1/g to convert your sequence to the last format.
No there is no way to do that in C or C++. The obvious solution is to write a program to insert the '\x' sequences at the correct point in the string. This would be a suitable task for a scripting language like perl, but you can also easily do it in C or C++.
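For example, a minimal sketch of such a converter: it reads the bare hex digits from standard input and writes them back with \x inserted before every pair, ready to paste into a string literal (it assumes the input does not start with a 0x prefix):
#include <cctype>
#include <iostream>

int main()
{
    char c;
    int count = 0;
    while (std::cin.get(c))
    {
        // assumes the input has no leading "0x" prefix
        if (!std::isxdigit(static_cast<unsigned char>(c)))
            continue;                    // skip newlines and other non-hex characters
        if (count % 2 == 0)
            std::cout << "\\x";
        std::cout << c;
        ++count;
    }
    std::cout << '\n';
    return 0;
}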
If the sequence is fixed, I suggest following the regexp-in-editor suggestion.
If the sequence changes dynamically, you can relatively easily convert it on runtime.
char in[]="0x401DA1815EB560399FE365DA23AAC0757F1D61EC10839D9B5521F..."; // or whatever, loaded from a file or such.
char out[MAX_LEN]; // or malloc() with l/2 bytes or whatever...
int l = strlen(in);
int i;
for(i = 2; i + 1 < l; i += 2)
{
    out[i/2 - 1] = 16*AsciiAsHex(in[i]) + AsciiAsHex(in[i+1]);   // note in[i+1], not in[i]+1
}
out[i/2 - 1] = '\0';
...
int AsciiAsHex(char in)
{
    if(in >= '0' && in <= '9') return in - '0';
    if(in >= 'A' && in <= 'F') return in + 10 - 'A';
    if(in >= 'a' && in <= 'f') return in + 10 - 'a';
    return 0;
}
What's the safest and best way to retrieve an unsigned long from a string in C++?
I know of a number of possible methods.
First, converting a signed long taken from atol.
char *myStr; // Initalized to some value somehow.
unsigned long n = ((unsigned)atol(myStr));
The obvious problem with this is: what happens when the value stored in myStr is larger than a signed long can contain? What does atol return?
The next possibility is to use strtoul.
char *myStr; // Initalized to some value somehow.
unsigned long n = strtoul(myStr, 0, 10);
However, this is a little overcomplicated for my needs. I'd like a simple function: string in, unsigned long (base 10) out. Also, the error handling leaves much to be desired.
The final possibility I have found is to use sscanf.
char *myStr; // Initalized to some value somehow.
unsigned long n = 0;
if(sscanf(myStr, "%lu", &n) != 1) {
//do some error handling
}
Again, error handling leaves much to be desired, and a little more complicated than I'd like.
The remaining obvious option is to write my own: either a wrapper around one of the previous possibilities, or something which cycles through the string and manually converts each digit until it reaches ULONG_MAX.
My question is: what are the other options that my google-fu has failed to find? Is there anything in the C++ standard library that will cleanly convert a string to an unsigned long and throw exceptions on failure?
My apologies if this is a dupe, but I couldn't find any questions that exactly matched mine.
You can use strtoul with no problem. The function returns an unsigned long. If no conversion can be performed, the function returns 0. If the correct value is out of range, the function returns ULONG_MAX and the errno global variable is set to ERANGE.
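A minimal sketch of that error handling (note that errno has to be cleared before the call; the input string here is arbitrary):
#include <cerrno>
#include <climits>
#include <cstdlib>
#include <iostream>

int main()
{
    const char* myStr = "4294967295";    // arbitrary example input
    char* end = nullptr;

    errno = 0;
    unsigned long n = std::strtoul(myStr, &end, 10);

    if (end == myStr)
        std::cout << "no digits were found\n";
    else if (n == ULONG_MAX && errno == ERANGE)
        std::cout << "value out of range\n";
    else
        std::cout << "parsed: " << n << '\n';
}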
One way to do it:
stringstream(str) >> ulongVariable;
#include <sstream>
#include <stdexcept>
#include <string>

template <class T>
T strToNum(const std::string &inputString,
           std::ios_base &(*f)(std::ios_base&) = std::dec)
{
    T t;
    std::istringstream stringStream(inputString);
    if ((stringStream >> f >> t).fail())
    {
        throw std::runtime_error("Invalid conversion");
    }
    return t;
}

// Example usage
unsigned long ulongValue = strToNum<unsigned long>(strValue);
int intValue = strToNum<int>(strValue);
int intValueFromHex = strToNum<int>(strHexValue, std::hex);
unsigned long ulOctValue = strToNum<unsigned long>(strOctVal, std::oct);
If you can use the Boost libraries (www.boost.org), look at the conversion library - it's a header-only include:
#include "boost/lexical_cast.hpp"
then all you need to do is
unsigned long ul = boost::lexical_cast<unsigned long>(str);
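lexical_cast signals failure by throwing boost::bad_lexical_cast, so the exception-based error handling asked for looks roughly like this (a minimal sketch):
#include <iostream>
#include <string>
#include "boost/lexical_cast.hpp"

int main()
{
    std::string str = "12345";           // arbitrary example input
    try
    {
        unsigned long ul = boost::lexical_cast<unsigned long>(str);
        std::cout << ul << '\n';
    }
    catch (const boost::bad_lexical_cast& e)
    {
        std::cout << "conversion failed: " << e.what() << '\n';
    }
}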
Jeffrey Stedfast has a beautiful post about writing int parser routines for Mono (in C).
It generates code that uses native types (you need 32 bit to parse 32 bit) and error codes for overflow.
Use "atol" in-built std function
For example std::string input = "1024";
std::atol(input.c_str());
Atol expect parameter to be of type c string, so c_str() does it that for you.
A more robust way would be to write a static function and use it:
bool str2Ulong(const string& str, unsigned long& arValue)
{
    char* tempptr = NULL;
    arValue = strtoul(str.c_str(), &tempptr, 10);
    return !(arValue == 0 && tempptr == str.c_str());
}