I have a function that takes an unsigned long long parameter given in hex. I am trying to convert this parameter to a string to check whether it is made of 16 hex digits, but I am having problems with the leading zeros.
template <typename T>
static string to_string(const T& value) {
    stringstream oss;
    oss << hex << setprecision(16) << value;
    return oss.str();
}

int main(int argc, char** argv) {
    unsigned long long pattern1 = 0x0000001000000002;
    unsigned long long pattern2 = 0x0FFFFFFFFFFFFFFFF;
    cout << "Pattern 1 = " << to_string(pattern1) << endl;
    cout << "Pattern 2 = " << to_string(pattern2) << endl;
    return 0;
}
What I want is for pattern 1 to be converted with its leading zeros so that I can check its length, but this is the output. I tried using setprecision, but it didn't seem to help:
Pattern 1 = 1000000002
Pattern 2 = ffffffffffffffff
Do you know how many characters the number should have? On its own, the count of leading zeros could be anything, so you have to specify the width explicitly. I think this helps:
int character_count = 16;  // 16 hex digits for a 64-bit unsigned long long
oss << hex << setfill('0') << setw(character_count) << value;
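For the question's case that means a width of 16 (assuming a 64-bit unsigned long long). A minimal sketch with the fix applied; the name to_string_hex is only illustrative, and setprecision is dropped because it does not affect integer output:
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>
using namespace std;

// Zero-pad to 16 hex digits so the leading zeros survive the conversion.
static string to_string_hex(unsigned long long value) {
    stringstream oss;
    oss << hex << setfill('0') << setw(16) << value;
    return oss.str();
}

int main() {
    unsigned long long pattern1 = 0x0000001000000002;
    cout << "Pattern 1 = " << to_string_hex(pattern1) << endl; // 0000001000000002
    return 0;
}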
You cannot know the number of digits that was "entered by the user" unless you read the input through an "array" type (char*, std::string, ...).
If you have an unsigned long long, the variable will be 8 bytes (64 bits) regardless of how the literal was written:
unsigned long long a = 0x1; // This will internally be: 0x0000000000000001
Now, if you want the leading zeros of the actual type back in the std::string, I think the following should do the trick (as already mentioned by other answers):
#include <iomanip>
#include <sstream>
#include <string>

template <typename T>
std::string to_string(const T & val)
{
    std::stringstream oss;
    oss << std::hex << std::setfill('0') << std::setw(sizeof(T)*2) << val;
    return oss.str();
}
But perhaps it would be better to overload the function for the integral types you actually want to support, instead of using an unconstrained template: if someone calls the function with a type that is not a short, int, long or long long (or their unsigned counterparts), it will silently do the wrong thing. One way to guard against that is sketched below.
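Instead of writing separate overloads, a similar guarantee can be had by constraining the template with a static_assert (C++11). A minimal sketch; to_hex_string is an illustrative name, not from the original answer:
#include <iomanip>
#include <sstream>
#include <string>
#include <type_traits>

// Reject anything that is not a built-in unsigned integral type at compile time.
template <typename T>
std::string to_hex_string(const T& val)
{
    static_assert(std::is_integral<T>::value && std::is_unsigned<T>::value,
                  "to_hex_string requires an unsigned integral type");
    std::ostringstream oss;
    // Unary + promotes char-sized types so they are printed as numbers, not characters.
    oss << std::hex << std::setfill('0') << std::setw(sizeof(T) * 2) << +val;
    return oss.str();
}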
The length of an unsigned long long represented as a hexadecimal number, including leading zeros, is:
sizeof(unsigned long long)*2
Since we are including leading zeros, the value of the unsigned long long is not relevant.
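For the question's 16-digit check, that constant can be written once. A small sketch, with kHexDigits as an illustrative name, assuming the usual 8-byte unsigned long long:
#include <cstddef>
#include <string>

// Two hex digits per byte: 16 on platforms where unsigned long long is 8 bytes.
constexpr std::size_t kHexDigits = sizeof(unsigned long long) * 2;

bool has_full_width(const std::string& hex) {
    return hex.size() == kHexDigits; // true for the zero-padded strings above
}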
A char stores a numeric value from 0 to 255 (for unsigned char; a plain char may be signed). But there also seems to be an implication that this type should be printed as a character rather than a number by default.
This code produces 34 (0x22):
int Bits = 0xE250;
signed int Test = ((Bits & 0x3F00) >> 8);
std::cout << "Test: " << Test <<std::endl; // 22
But I don't need Test to be 4 bytes long. One byte is enough. But if I do this:
int Bits = 0xE250;
signed char Test = ((Bits & 0x3F00) >> 8);
std::cout << "Test: " << Test <<std::endl; // "
I get " (a double quote symbol). Because char doesn't just make it an 8 bit variable, it also says, "this number represents a character".
Is there some way to specify a variable that is 8 bits long, like char, but also says, "this is meant as a number"?
I know I can cast or convert char, but I'd like to just use a number type to begin with. Is there a better choice? Is it better to use short int even though it's twice the size needed?
Cast your character variable to int before printing:
signed char Test = ((Bits & 0x3F00) >> 8);
std::cout << "Test: " <<(int) Test <<std::endl;
Currently I'm using while loops:
std::string to_octal(unsigned int num)
{
    int place = 1, remainder, octal = 0;
    while (num != 0)
    {
        remainder = num % 8;
        num /= 8;
        octal += remainder * place;
        place *= 10;
    }
    return std::to_string(octal);
}
unsigned int to_num(std::string octal)
{
    unsigned int octal_n = std::stoi(octal);
    int place = 1, remainder, num = 0;
    while (octal_n != 0)
    {
        remainder = octal_n % 10;
        octal_n /= 10;
        num += remainder * place;
        place *= 8;
    }
    return num;
}
Which seems inefficient. Is there a better way to do this?
There is no such thing as decimal unsigned int, hexadecimal unsigned int or octal unsigned int. There is only one unsigned int. There is a difference only when you want to print an object of that type to the terminal or a file. From that point of view, the function
unsigned int decimal_to_octal(unsigned int decimal);
does not make sense at all. It makes sense to use:
struct decimal_tag {};
struct hexadecimal_tag {};
struct octal_tag {};
// Return a string that represents the number in decimal form
std::string to_string(unsigned int number, decimal_tag);
// Return a string that represents the number in hexadecimal form
std::string to_string(unsigned int number, hexadecimal_tag);
// Return a string that represents the number in octal form
std::string to_string(unsigned int number, octal_tag);
and their counterparts.
// Extract an unsigned number from the string that has decimal representation
unsigned int to_number(std::string const& s, decimal_tag);
// Extract an unsigned number from the string that has hexadecimal representation
unsigned int to_number(std::string const& s, hexadecimal_tag);
// Extract an unsigned number from the string that has octal representation
unsigned int to_number(std::string const& s, octal_tag);
Here's a demonstrative program:
#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>

struct decimal_tag {};
struct hexadecimal_tag {};
struct octal_tag {};

// Return a string that represents the number in decimal form
std::string to_string(unsigned int number, decimal_tag)
{
    std::ostringstream str;
    str << std::dec << number;
    return str.str();
}

// Return a string that represents the number in hexadecimal form
std::string to_string(unsigned int number, hexadecimal_tag)
{
    std::ostringstream str;
    str << std::hex << number;
    return str.str();
}

// Return a string that represents the number in octal form
std::string to_string(unsigned int number, octal_tag)
{
    std::ostringstream str;
    str << std::oct << number;
    return str.str();
}

// Extract an unsigned number from the string that has decimal representation
unsigned int to_number(std::string const& s, decimal_tag)
{
    std::istringstream str(s);
    unsigned int number;
    str >> std::dec >> number;
    return number;
}

// Extract an unsigned number from the string that has hexadecimal representation
unsigned int to_number(std::string const& s, hexadecimal_tag)
{
    std::istringstream str(s);
    unsigned int number;
    str >> std::hex >> number;
    return number;
}

// Extract an unsigned number from the string that has octal representation
unsigned int to_number(std::string const& s, octal_tag)
{
    std::istringstream str(s);
    unsigned int number;
    str >> std::oct >> number;
    return number;
}

int main()
{
    unsigned int n = 200;

    std::cout << "200 in decimal: " << to_string(n, decimal_tag()) << std::endl;
    std::cout << "200 in hexadecimal: " << to_string(n, hexadecimal_tag()) << std::endl;
    std::cout << "200 in octal: " << to_string(n, octal_tag()) << std::endl;

    std::cout << "Number from decimal form (200): " << to_number("200", decimal_tag()) << std::endl;
    std::cout << "Number from hexadecimal form (c8): " << to_number("c8", hexadecimal_tag()) << std::endl;
    std::cout << "Number from octal form (310): " << to_number("310", octal_tag()) << std::endl;
}
and its output:
200 in decimal: 200
200 in hexadecimal: c8
200 in octal: 310
Number from decimal form (200): 200
Number from hexadecimal form (c8): 200
Number from octal form (310): 200
Printing numbers in different bases:
#include <iostream>

int main () {
    int n = 123;
    std::cout << std::dec << n << '\n';
    std::cout << std::hex << n << '\n';
    std::cout << std::oct << n << '\n';
    return 0;
}
This function is about 5x faster than R Sahu's solution in a debug build, and 11x faster in an -O2 optimized build. It also works for unsigned integers of any size.
Permission granted for all manner of use.
#include <limits>
// Efficient conversion from any unsigned int type to an octal C
// string. Argument must be unsigned, but may be ANY unsigned from
// char to long long. Return value points to a NUL-terminated string
// in the buffer. Single-threaded applications are supported with an
// internal buffer and can ignore this detail. Multi-threaded
// programs must pass in their own buffer of at least the given size,
// which will be big enough to handle any octal.
template<typename T> const char* to_oct( T t, char* pcBuf = nullptr ) {
    static_assert( ! std::numeric_limits<T>::is_signed,
                   "only works for unsigned types" );

    // Each byte can add no more than 3 digits to the string. +1 for NUL.
    static char cBuf[ sizeof( T ) * 3 + 1 ];

    // Allow single-threaded callers to skip buffer argument.
    if ( ! pcBuf )
        pcBuf = cBuf;

    // Move to the end of the buffer and write a terminator.
    pcBuf += sizeof( T ) / sizeof( char ) * 3;
    *pcBuf = '\0';

    if ( t )
        // Move right to left, outputting LSD, until we have no more
        // to output. The size of our argument dictates the maximum
        // number of loops, and our buffer is big enough to hold even
        // the maximum result.
        while ( t ) {
            *(--pcBuf) = '0' + t % 8;
            t >>= 3;
        }
    else
        // Above loop would produce no output for 0, so handle as special case.
        *(--pcBuf) = '0';

    return pcBuf;
}
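A quick usage sketch (my addition; the caller-supplied buffer follows the sizing rule described in the comment above):
#include <iostream>

int main() {
    unsigned int n = 123;
    std::cout << to_oct( n ) << '\n';            // single-threaded use, internal buffer: prints 173

    char buf[ sizeof( unsigned long long ) * 3 + 1 ];
    std::cout << to_oct( 123ULL, buf ) << '\n';  // caller-supplied buffer: also prints 173
    return 0;
}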
And of course the old C method still works in C++: use the %o format specifier.
printf("%o", n);
Use sprintf if you want it in a string (OK, that means you have to take care of the memory that stores the result, which is a drawback compared to std::oct).
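For instance, a short sketch using snprintf so the buffer size is at least checked (my variation; the answer itself only mentions sprintf):
#include <cstdio>
#include <string>

int main() {
    unsigned int n = 123;
    std::printf("%o\n", n);                  // prints 173

    char buf[32];
    std::snprintf(buf, sizeof buf, "%o", n); // bounded variant of sprintf
    std::string octal(buf);                  // "173"
    std::printf("%s\n", octal.c_str());
    return 0;
}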
I have two strings to add. The strings are hex values. I convert the strings to unsigned long long, add them, and then convert the result back to a string. But this operation is not working correctly.
Code:
unsigned long long FirstNum = std::strtoull(FirstString.c_str(), NULL, 16);
unsigned long long SecondNum = std::strtoull(SecondString.c_str(), NULL, 16);
unsigned long long Num = FirstNum + SecondNum;
std::cout << " " << FirstNum << "\n+ " << SecondNum << "\n= " << Num << "\n\n";
I received
13285923899203179534
+ 8063907133566997305
= 2903086959060625223
Can anyone explain this magic to me? How can I fix it?
I convert back to a hex value with:
std::stringstream Stream;
Stream << std::hex << Num;
return Stream.str();
All unsigned arithmetic in C (and C++) occurs modulo 2^k for some k. In your case, you are getting the result modulo 2^64, implying that unsigned long long is 64 bits on your platform.
If you want to do arithmetic with integers larger than the largest supported type on your platform, you'll need to use a multiprecision library such as GMP.
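To make the wraparound concrete, a small sketch (my addition): with 64-bit unsigned arithmetic, the lost carry can be detected by checking whether the sum came out smaller than one of the operands.
#include <iostream>

int main() {
    unsigned long long a = 13285923899203179534ULL;
    unsigned long long b = 8063907133566997305ULL;
    unsigned long long sum = a + b;     // computed modulo 2^64

    if (sum < a) {                      // true here: the addition wrapped around
        std::cout << "overflowed 64 bits; the carry was lost\n";
    }
    std::cout << sum << '\n';           // 2903086959060625223
    return 0;
}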
I got confused by the OpenCV documentation mentioned here.
As per the documentation, if I create an image with "uchar", the pixels of that image can store unsigned integer values. But if I create an image using the following code:
Mat image;
image = imread("someImage.jpg" , 0); // Read an image in "UCHAR" form
or by doing
image.create(10, 10, CV_8UC1);
for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        image.at<uchar>(i, j) = (uchar)255;
    }
}
and then if I try to print the values using
cout<<" "<<image.at<uchar>(i,j);
then I get some weird results at the terminal, but if I use the following statement then I get values between 0 and 255.
cout<<" "<<(int)image.at<uchar>(i,j); // with TYPECAST
Question: Why do I need a typecast to print the values in the range 0-255 if the image itself can store "unsigned integer" values?
If you try to find the definition of uchar (by pressing F12 if you are using Visual Studio), you'll end up in OpenCV's core/types_c.h:
#ifndef HAVE_IPL
typedef unsigned char uchar;
typedef unsigned short ushort;
#endif
which is a standard and reasonable way of defining an unsigned integral 8-bit type (i.e. an "8-bit unsigned integer"), since the standard ensures that char always occupies exactly 1 byte of memory. This means that:
cout << " " << image.at<uchar>(i,j);
uses the overload of operator<< that takes unsigned char, which prints the passed value as a character, not as a number.
An explicit cast, however, causes another overload of << to be used:
cout << " " << (int) image.at<uchar>(i,j);
and therefore it prints numbers. This issue is not related to the fact that you are using OpenCV at all.
Simple example:
char c = 56; // equivalent to c = '8'
unsigned char uc = 56;
int i = 56;
std::cout << c << " " << uc << " " << i;
outputs: 8 8 56
And if the fact that it is a template confuses you, then this behavior is also equivalent to:
template<class T>
T getValueAs(int i) { return static_cast<T>(i); }

typedef unsigned char uchar;

int main() {
    int i = 56;
    std::cout << getValueAs<uchar>(i) << " " << (int)getValueAs<uchar>(i);
}
Simply, because although uchar is an integer type, the stream operator << prints the character it represents, not a sequence of digits. Passing the type int, you get a different overload of that same stream operator, which does print a sequence of digits.
I have a problem which I do not understand. I add characters to a standard string. When I take them out, the value printed is not what I expected.
#include <iostream>
#include <string>
using namespace std;

int main (int argc, char *argv[])
{
    string x;
    unsigned char y = 0x89, z = 0x76;

    x += y;
    x += z;

    cout << hex << (int) x[0] << " " << (int) x[1] << endl;
}
The output:
ffffff89 76
What I expected:
89 76
Any ideas as what is happening here?
And how do I fix it?
The string operator [] yields a char, which on your platform is evidently a signed type. When you cast this to an int for output, the value is sign-extended, so it stays signed.
The input value 0x89 does not fit in a signed char, so it becomes negative, and therefore the int will be negative too. Thus you see the output you described.
Most likely char is signed on your platform, therefore 0x89 (137) becomes negative when it is represented as a char; 0x76 (118) still fits.
You have to make sure that the string has unsigned char as its value_type, so this should work:
typedef basic_string<unsigned char> ustring; //string of unsigned char!
ustring ux;
ux += y;
ux += z;
cout << hex << (int) ux[0] << " " <<(int) ux[1]<< endl;
It prints what you think should print:
89 76
Online demo : http://www.ideone.com/HLvcv
You have to account for the fact that char may be signed. If you promote it to int directly, the signed value will be preserved. Rather, you first have to convert it to the unsigned type of the same width (i.e. unsigned char) to get the desired value, and then promote that value to an integer type to get the correct formatted printing.
Putting it all together, you want something like this:
std::cout << (int)(unsigned char)(x[0]);
Or, using the C++-style cast:
std::cout << static_cast<int>(static_cast<unsigned char>(x[0]));
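Applied to the cout line in the original program, the same fix looks like this (a sketch):
cout << hex
     << static_cast<int>(static_cast<unsigned char>(x[0])) << " "
     << static_cast<int>(static_cast<unsigned char>(x[1])) << endl; // prints: 89 76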
The number 0x89 is 137 in the decimal system. That exceeds the cap of 127 for a signed char, so it becomes a negative number, and therefore you see those ffffff there. You could simply insert (unsigned char) after the (int) cast. You would get the required result.