Is there a way to convert a numeric string to a char containing that value? For example, the string "128" should convert to a char holding the value 128.
Yes... atoi from C.
char mychar = (char)atoi("128");
A more C++-oriented approach would be:
#include <sstream>
#include <string>

template<class T>
T fromString(const std::string& s)
{
    std::istringstream stream(s);
    T t;
    stream >> t; // note: extraction failure is not checked here
    return t;
}
char mychar = (char)fromString<int>(mycppstring);
There's the C-style atoi, but it converts to an int. You'll have to cast to char yourself.
For a C++-style solution (which is also safer) you can do
string input("128");
stringstream ss(input);
int num;
if ((ss >> num).fail()) {
    // invalid format or other error
}
char result = (char)num;
It depends. If char is signed and 8 bits, you cannot convert "128" to a char, because the maximum positive value of a signed 8-bit integer is 127.
This is a really pedantic answer, but you should probably know this at some point.
You can use atoi. That will get you the integer 128. You can just cast that to a char and you're done.
char c = (char) atoi("128");
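If you want to guard against that, a minimal sketch (my own addition, not from the answer above) is to parse to int first and range-check against char's limits before casting:
#include <cstdlib>
#include <iostream>
#include <limits>

int main() {
    int value = std::atoi("128");
    if (value < std::numeric_limits<char>::min() ||
        value > std::numeric_limits<char>::max()) {
        std::cerr << "value does not fit in a char on this platform\n";
    } else {
        char c = static_cast<char>(value);
        std::cout << static_cast<int>(c) << '\n';
    }
}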
I'm trying to write a function which takes one int parameter and returns the sum of its digits. For example, digital_root(123) will return 1+2+3, which is 6. Inside the for loop I can't convert the individual characters to integers.
I should mention that I have tried both the atoi() and stoi() functions. What is wrong with the code?
int digital_root(int x)
{
    int t = 0;
    string str = to_string(x);
    for (char& c : str) {
        t += atoi(c);
    }
    return t;
}
I expect the characters to convert to integers successfully. How can I do that?
Have a look at std::atoi: its argument is of type const char*, but you are passing a single char. There is no conversion from char to const char*, which is what the compiler complains about.
What you want instead is to convert the char to an int by doing some ASCII math:
t += static_cast<int>(c) - '0';
But note that while this works, there is a better solution for this task. It doesn't require the conversion to a string at all; instead it relies solely on integer arithmetic, repeatedly using % 10 and / 10, as sketched below.
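A sketch of that arithmetic-only version (treating negative input by its absolute value is my assumption, not something the answer specifies):
int digital_root(int x)
{
    if (x < 0) x = -x; // assumption: sum the digits of the absolute value
    int t = 0;
    while (x > 0) {
        t += x % 10; // take the last digit
        x /= 10;     // and drop it
    }
    return t;
}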
Is there a way to convert std::string to size_t?
The problem is that size_t is a platform-dependent type (it is the type of the result of sizeof), so I cannot guarantee that converting the string to unsigned long or unsigned int will do it correctly.
EDIT:
A simple case is:
std::cout<< "Enter the index:";
std::string input;
std::cin >> input;
size_t index=string_to_size_t(input);
//Work with index to do something
You can use std::stringstream:
#include <iostream>
#include <sstream>
#include <string>
std::string str = "12345";
std::stringstream sstream(str);
size_t result;
sstream >> result;
std::cout << result << std::endl;
You may want to use sscanf with the %zu specifier, which is for std::size_t.
sscanf(input.c_str(), "%zu", &index);
I doubt that there is a dedicated operator>> overload of std::basic_istringstream for std::size_t itself.
Let us assume for a minute that size_t is a typedef to an existing integer, i.e. the same width as either unsigned int, unsigned long, or unsigned long long.
AFAIR it could be a separate (larger still) type as far as the standard wording is concerned, but I consider that to be highly unlikely.
Working from the assumption that size_t is not larger than unsigned long long, either stoull or strtoull with a subsequent cast to size_t should work.
From the same assumption (size_t defined in terms of either unsigned long or unsigned long long), there would be an operator>> overload for that type.
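Under those assumptions, a minimal sketch of the string_to_size_t function from the question might look like this (std::stoull throws std::invalid_argument or std::out_of_range on bad input):
#include <cstddef>
#include <string>

std::size_t string_to_size_t(const std::string& s)
{
    // the cast is a no-op where size_t and unsigned long long have the same
    // width; on a platform with a narrower size_t the value would be truncated
    return static_cast<std::size_t>(std::stoull(s));
}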
You can use %zu as the format specifier in a scanf-style approach.
Or use a std::stringstream which will have an overloaded >> to size_t.
#include <sstream>
std::istringstream iss("a");
size_t size;
iss >> size;
You can check for failure with iss.fail().
Instead of "a", use the value you want to convert.
#include <cstddef>
#include <limits>
#include <sstream>

/**
 * @brief Convert const char* to size_t
 * @note  On error it returns the maximum value of size_t
 * @param number: const char*
 * @retval size_t
 */
size_t to_size_t(const char *number) {
    size_t sizeT;
    std::istringstream iss(number);
    iss >> sizeT;
    if (iss.fail()) {
        return std::numeric_limits<size_t>::max();
    } else {
        return sizeT;
    }
}
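Usage of the function above might look like this (note that the max() sentinel cannot be distinguished from an input that genuinely is the maximum value):
size_t index = to_size_t("12345");
if (index == std::numeric_limits<size_t>::max()) {
    // conversion failed (or the input really was the maximum)
}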
Isn't it generally a bad idea to convert from a larger integral type to a smaller signed type if there is any possibility that overflow could occur? I was surprised by this code in C++ Primer (17.5.2) demonstrating low-level IO operations:
int ch;
while((ch = cin.get()) != EOF)
cout.put(ch); //overflow could occur here
Given that
cin.get() converts the character it obtains to unsigned char, and then to int, so ch will be in the range 0-255 (excluding EOF). All is good.
But then in the put(ch) expression, ch gets converted back to char. If char is signed, then any value of ch from 128-255 is surely going to cause an overflow?
Would such code be generally bad practice if I'm expecting input outside of the ordinary 0-127 range, given that there are no guarantees about how overflow is treated?
There are rules for integer demotion.
When a long integer is cast to a short, or a short is cast to a char,
the least-significant bytes are retained.
As seen in: https://msdn.microsoft.com/en-us/library/0eex498h.aspx
So the least significant byte of ch will be retained. All good.
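A small demonstration of that round trip (a sketch; 200 stands in for any byte value above 127):
#include <iostream>

int main() {
    int ch = 200;                   // as if returned by cin.get()
    char c = static_cast<char>(ch); // implementation-defined result if char is signed (before C++20)
    unsigned char u = static_cast<unsigned char>(c);
    std::cout << static_cast<int>(u) << '\n'; // prints 200 on common platforms: the byte survived
}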
Use itoa if you want to convert the integer into a null-terminated string representing it (note that itoa is non-standard):
char * itoa ( int value, char * str, int base );
Or you can convert it to a string, then to char:
#include <sstream>
#include <string>
std::string tostr(int x)
{
    std::stringstream str;
    str << x;
    return str.str();
}
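Since C++11 there is also std::to_string, which does the same without a stream:
#include <string>

std::string s = std::to_string(42); // "42"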
To convert a string to char*:
string fname;
char *lname;
lname = new char[fname.length() + 1];
strcpy(lname, fname.c_str());
If MSVC reports a security warning for strcpy (C4996), you can suppress it with a #pragma warning directive or use strcpy_s instead.
I need to convert an integer int parameter to a hexadecimal unsigned char buffer[n].
If the integer is, for example, 10, then the hexadecimal unsigned char array should be 0x0A.
To do so I have the following code:
int parameter;
std::stringstream ss;
unsigned char buffer[];
ss << std::hex << std::showbase << parameter;
typedef unsigned char byte_t;
byte_t b = static_cast<byte_t>(ss); //ERROR: invalid static_cast from type ‘std::stringstream {aka std::basic_stringstream<char>}’ to type ‘byte_t {aka unsigned char}’
buffer[0]=b;
Does anyone know how to avoid this error?
If there is a way of converting the integer parameter into a hexadecimal unsigned char without first doing ss << std::hex << std::showbase << parameter;, that would be even better.
Consulting my psychic powers, it reads that you actually want to have an int value seen in its representation as bytes (byte_t). Well, given your comment
I want the same number represented in hexadecimal and assigned to a unsigned char buffer[n].
not so much psychic power was needed after all; but you should note that hexadecimal representation is a matter of formatting, not of the internal integer representation.
The easiest way is to use a union like
union Int32Bytes {
int ival;
byte_t bytes[sizeof(int)];
};
and use it like
Int32Bytes x;
x.ival = parameter;
for(size_t i = 0; i < sizeof(int); ++i) {
std::cout << std::hex << std::showbase << (int)x.bytes[i] << ' ';
}
Be aware that you may see unexpected results due to the endianness of your CPU architecture.
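As an alternative sketch that sidesteps the union (reading a union member other than the one last written is technically undefined behaviour in C++), you could copy the bytes with std::memcpy; the same endianness caveat applies:
#include <cstring>
#include <iostream>

int main() {
    int parameter = 10;
    unsigned char buffer[sizeof(int)];
    std::memcpy(buffer, &parameter, sizeof(int)); // copy the raw bytes of the int
    for (std::size_t i = 0; i < sizeof(int); ++i) {
        std::cout << std::hex << std::showbase << static_cast<int>(buffer[i]) << ' ';
    }
    std::cout << '\n';
}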
Problem 1: buffer is of undetermined size in your snippet. I'll suppose that you have declared it with a sufficient size.
Problem 2: the result of your conversion will be several chars (at least 3, due to the 0x prefix). So you need to copy all of them; a plain = won't work here, as it would with strings.
Problem 3: your intermediate cast won't succeed: you can't convert a complex stringstream object to a single unsigned char. Fortunately, you don't need to.
Here is a possible solution using std::copy(), adding a null terminator to the buffer:
const std::string& s = ss.str();
*std::copy(s.cbegin(), s.cend(), buffer) = '\0';
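Putting it together, a complete sketch (the buffer size of 16 is an assumption, comfortably large for an int's hex representation plus prefix and terminator):
#include <algorithm>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    int parameter = 10;
    std::stringstream ss;
    ss << std::hex << std::showbase << parameter;

    unsigned char buffer[16]; // assumption: room for "0x", the digits, and '\0'
    const std::string& s = ss.str();
    *std::copy(s.cbegin(), s.cend(), buffer) = '\0';

    std::cout << buffer << '\n'; // prints 0xa
}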
So, string comes with the value type char. I want a string with value type unsigned char. The reason I want such a thing is that I am currently writing a program which converts large hexadecimal input to decimal, and I am using strings to calculate the result. But the range of char, -128 to 127, is too small; unsigned char, with range 0 to 255, would work perfectly instead. Consider this code:
#include<iostream>
using namespace std;
int main()
{
typedef basic_string<unsigned char> u_string;
u_string x= "Hello!";
return 0;
}
But when I try to compile, it shows two errors: one is _invalid conversion from const char* to unsigned const char*_ and the other is _initializing argument 1 of std::basic_string<_CharT, _Traits, _Alloc>::basic_string..._ (it goes on).
EDIT:
"Why does the problem "converts large input of hexadecimal to decimal" require initializing a u_string with a string literal?"
While calculating, each time I shift to the left of the hexadecimal number, I multiply by 16. At most the result is going to be 16x9 = 144, which surpasses the limit of 127 and makes the value negative.
Also, I have to initialize it like this:
x = "0"; x[0] -= '0';
because I want its value to be 0. If the variable is null I can't perform operations on it; if it is 0, I can.
So, what should i do?
String literals are arrays of const char, and you are assigning one to a string of unsigned char.
You have two solutions:
First, copy from a standard string into yours element by element.
Second, write your own user-defined literal for your string class:
inline const unsigned char * operator"" _us(const char *s, std::size_t)
{
    return reinterpret_cast<const unsigned char *>(s);
}
// OR
u_string operator"" _us(const char *s, std::size_t len)
{
    return u_string(s, s + len);
}
u_string x = "Hello!"_us;
An alternative solution would be to make your compiler treat char as unsigned. There are compiler flags for this:
MSVC: /J
GCC, Clang, ICC: -funsigned-char