Specifying unsigned char storage for integer numerals - c++

In terms of modern C++, is it possible to specify the storage of an integer numeral as a char?
I am looking to write auto var = 255; to initialize var as an unsigned char type.
It is possible to specify other types of storage:
auto var_ulong = 255ul;
auto var_float = 255.f;
However, not short or char? Surely I am missing something...

There is no standard integer literal to do what you want, but you can add your own user-defined literal if you plan to do this a lot:
namespace foo {
// constexpr pre C++20
// |
// V
consteval unsigned char operator"" _uchar(unsigned long long x) {
return static_cast<unsigned char>(x);
}
}
using namespace foo;
auto var_uchar = 255_uchar;
Just be aware that normal integer promotion will still be in effect, so below, you'll get an int:
auto var_int = 55_uchar + 10_uchar;
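To make the promotion visible, here is a small self-contained sketch (it repeats the _uchar operator from above; assumes C++20 for consteval and C++17 for std::is_same_v):
#include <type_traits>
namespace foo {
    consteval unsigned char operator"" _uchar(unsigned long long x) {
        return static_cast<unsigned char>(x);
    }
}
using namespace foo;
int main() {
    auto a = 255_uchar;                    // unsigned char
    auto b = 55_uchar + 10_uchar;          // int, after integral promotion
    static_assert(std::is_same_v<decltype(a), unsigned char>);
    static_assert(std::is_same_v<decltype(b), int>);
    // Re-narrow explicitly if the sum must stay an unsigned char:
    auto c = static_cast<unsigned char>(55_uchar + 10_uchar);
    static_assert(std::is_same_v<decltype(c), unsigned char>);
}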

Related

how is the keyword "using" used in the following case in c++?

Case 1:
using unumber = unsigned long;
unumber uvalue {};
Case 2:
using udigit_t = unsigned char;
using ubigvalue_t = vector<udigit_t>;
ubigvalue_t ubig_value;
Does using work in this case like an inferred type?
Basically, using is another keyword for typedef (among other things) since C++11. Both of these are used to create type aliases. Therefore,
using udigit_t = unsigned char;
and
typedef unsigned char udigit_t;
are equivalent declarations, which allow you to use udigit_t in the same sense as if you had written unsigned char instead (with some pitfalls). For more information, take a look here:
Type alias, alias template.
using can be used as an alternative to typedef for defining type aliases since C++11 (not in C++03/98). In some cases it makes your code easier to read. Compare these examples.
typedef unsigned char byte, *pbyte, &rbyte;
using byte = unsigned char;
using pbyte = unsigned char*;
using rbyte = unsigned char &;
typedef int (*fun)(int, int);
using fun = int (*)(int, int);
// And when used later...
fun f = [](int a, int b){return a + b;}; // a capture-less lambda converts to a function pointer
In this case, using does not declare an inferred type, but a clearly defined type. To let the compiler infer and decide the type of a variable, use auto (available since C++11). So you can write
auto f = [](){return 1;};
// instead of spelling out the function pointer type:
int (*f)(void) = [](){return 1;};
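One thing using can do that typedef cannot, and what the "alias template" part of the linked page refers to, is declare an alias template. A minimal sketch with illustrative names:
#include <string>
#include <vector>
// An alias template: a whole family of aliases parameterized on a type.
template <class T>
using Vec = std::vector<T>;
Vec<int> numbers;            // std::vector<int>
Vec<std::string> names;      // std::vector<std::string>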

What is the type of string.size() in C++?

I have the following code:
#include<iostream>
#include<string>
int main()
{
std::string s = "458";
std::cout <<s.size()-4;
}
When I run this I get 42944935 or something like this. But when I run it with the following modification:
int main()
{
std::string s = "458";
int i = s.size();
std::cout << i -4;
}
I get -1, which I would have expected from the first code. Can someone explain what is happening here?
For historical reasons the return type of std::string::size() is std::string::size_type, which in practice is size_t: an unsigned type sufficient for the largest possible array size.
You can work around that by defining a number of general size/length-functions, like
#include <cstddef>   // for std::ptrdiff_t and std::size_t
using Size = std::ptrdiff_t;   // signed type
template< class Collection >
constexpr auto n_items( Collection const& c )
-> Size
{ return c.size(); }
// Raw array. Using std::size_t template param for g++ compatibility.
template< class Item, std::size_t n >
constexpr auto n_items( Item (&)[n] )
-> Size
{ return n; }
Here I used the name n_items because C++17 will define a general size function that, unfortunately, returns size_t (and conflates a number of notions of size, also unfortunate). One doesn't want a name conflict there.
Where you don't have such functions available an alternative is to express a size as the difference of std::end and std::begin, e.g. end(s) - begin(s). The difference type for raw pointers is ptrdiff_t (which is signed), and the default difference type for iterators like you get from std::string::begin(), is also ptrdiff_t, from std::iterator_traits.
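With those functions in scope, a hypothetical usage sketch shows the original example behaving as expected:
#include <iostream>
#include <string>
int main()
{
    std::string s = "458";
    std::cout << n_items( s ) - 4 << "\n";   // -1, because Size is signed
    int raw[] = {1, 2, 3};
    std::cout << n_items( raw ) << "\n";     // 3
}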
That happens because the return value of the size() function has an unsigned type.
When you subtract 4 from it, the result wraps around to a very large unsigned value rather than becoming -1. You need to convert the value to a signed type before printing it (for example, cast it as std::cout << static_cast<int>(s.size()) - 4;, or assign it first as you did with int i = s.size();); then you'll get what you expect. The reason you see that big integer is that the wrapped result has the same bit pattern as the two's complement representation of -1, and interpreting that pattern as an unsigned value gives a very large number.
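To see both cases side by side, a minimal sketch:
#include <iostream>
#include <string>
int main()
{
    std::string s = "458";
    std::cout << s.size() - 4 << "\n";                    // huge value: unsigned wraparound
    std::cout << static_cast<int>(s.size()) - 4 << "\n";  // -1: convert to signed first
}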
To learn more, look up two's complement binary arithmetic.

Finding endian-ness programmatically at compile-time using C++11

I have referred many questions in SO on this topic, but couldn't find any solution so far. One natural solution was mentioned here: Determining endianness at compile time.
However, there are problems with that approach, as mentioned in the comments and in the answer itself.
With some modifications, I am able to compile a similar solution with g++ & clang++ (-std=c++11) without any warning.
static_assert(sizeof(char) == 1, "sizeof(char) != 1");
union U1
{
int i;
char c[sizeof(int)];
};
union U2
{
char c[sizeof(int)];
int i;
};
constexpr U1 u1 = {1};
constexpr U2 u2 = {{1}};
constexpr bool IsLittleEndian ()
{
return u1.i == u2.c[0]; // ignore different type comparison
}
static_assert(IsLittleEndian(), "The machine is BIG endian");
Can this be considered a deterministic method to determine the endianness, or does it run into type-punning issues or something else?
Since C++20 you can use std::endian from the <bit> header:
#include <bit>
int main()
{
static_assert(std::endian::native==std::endian::big,
"Not a big endian platform!");
}
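If you prefer to branch on the endianness rather than fail the build, a minimal sketch (still assuming C++20 and <bit>):
#include <bit>
#include <iostream>
int main()
{
    if constexpr (std::endian::native == std::endian::little)
        std::cout << "little endian\n";
    else if constexpr (std::endian::native == std::endian::big)
        std::cout << "big endian\n";
    else
        std::cout << "mixed endian\n";
}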
Your attempt is no different from this obviously non-working one (where IsLittleEndian() always yields true):
constexpr char c[sizeof(int)] = {1};
constexpr int i = {1};
constexpr bool IsLittleEndian ()
{
return i == c[0]; // ignore different type comparison
}
static_assert(IsLittleEndian(), "The machine is BIG endian");
I believe that C++11 doesn't provide a means to programmatically determine the endianness of the target platform at compile time. My argument is that the only valid way to perform that check at runtime is to examine an int variable through an unsigned char pointer (since other ways of type punning inevitably contain undefined behavior):
const uint32_t i = 0xffff0000;
bool isLittleEndian() {
return 0 == *reinterpret_cast<const unsigned char*>(&i);
}
C++11 doesn't allow making this function constexpr (reinterpret_cast is not permitted in constant expressions), therefore this check cannot be performed at compile time.

Unsigned Char Template Value Overflow

I'm using the following template to encode unsigned char values:
template <unsigned char val>
struct Cell {
enum { value = val };
using add = Cell<val + 1>;
using sub = Cell<val - 1>;
};
I expected sub to behave like a standard unsigned char variable regarding overflow:
unsigned char x = 0;
x - 1; // 255
But instead I get a compiler error in Clang:
using cell = Cell<0>;
cell::sub::value; // Error here.
Non-type template argument evaluates to -1, which cannot be narrowed to type 'unsigned char'
Is overflow handled differently in template contexts?
val - 1 is an int on your platform, on account of the usual arithmetic conversions. There is no sensible meaning for a template parameter of type unsigned char to be given an int argument.
Simply make sure that your template argument has the desired type:
using sub = Cell<static_cast<unsigned char>(val - 1U)>;
// ^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^
(Using an unsigned int literal means the subtraction is performed in unsigned int, where wraparound is well defined; the explicit cast then converts the result to unsigned char, so no narrowing occurs in the template argument.)
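Putting it together, a hedged sketch of the fixed template (applying the same cast to add as well, since Cell<255>::add would otherwise hit the same narrowing error), with the wraparound checked at compile time:
template <unsigned char val>
struct Cell {
    enum { value = val };
    using add = Cell<static_cast<unsigned char>(val + 1U)>;
    using sub = Cell<static_cast<unsigned char>(val - 1U)>;
};
static_assert(Cell<0>::sub::value == 255, "0 - 1 wraps to 255");
static_assert(Cell<255>::add::value == 0, "255 + 1 wraps to 0");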

Any way to make parameterized user defined literals?

A little while ago I had an idea about "parameterized" user-defined literals and was wondering if there is any way to do this in the current C++ standard.
Basically, the idea is to have a user-defined literal whose behaviour can be tweaked according to some parameters. As a simple example, I chose a "fixed-point" literal which turns a floating-point number into an integer; the parameter is the precision in terms of the number of decimal places.
This is just an exercise for now, since I'm not sure how or if this would be useful in a real application.
My first idea went something like this:
namespace fp_impl {
constexpr int floor(long double n) {
return n;
}
constexpr int pow10(int exp) {
return exp == 0 ? 1 : 10 * pow10(exp - 1);
}
template<int i>
constexpr int fixed_point(long double n) {
return floor(n * pow10(i));
}
namespace fp2 {
constexpr int operator"" _fp (long double n) {
return fixed_point<2>(n);
}
}
namespace fp4 {
constexpr int operator"" _fp (long double n) {
return fixed_point<4>(n);
}
}
}
template<int prec> struct fp;
template<> struct fp<2> {
namespace lit = fp2;
};
template<> struct fp<4> {
namespace lit = fp4;
};
int main() {
{
using namespace fp<2>::lit;
std::cout << 5.421_fp << std::endl; // should output 542
}
{
using namespace fp<4>::lit;
std::cout << 5.421_fp << std::endl; // should output 54210
}
}
However, it doesn't compile because namespace aliases are not permitted at class scope. (It also has the problem of requiring you to manually define every version of operator"" _fp.) So I decided to try something with macros:
namespace fp {
namespace detail {
constexpr int floor(long double n) {
return n;
}
constexpr int pow10(int exp) {
return exp == 0 ? 1 : 10 * pow10(exp - 1);
}
template<int i>
constexpr int fixed_point(long double n) {
return floor(n * pow10(i));
}
}
}
#define SPEC(i) \
namespace fp { \
namespace precision##i { \
constexpr int operator"" _fp(long double n) { \
return fp::detail::fixed_point<i>(n); \
} \
} \
}
SPEC(2); SPEC(4);
#undef SPEC
#define fp_precision(i) namespace fp::precision##i
int main() {
{
using fp_precision(2);
std::cout << 5.421_fp << std::endl;
}
{
using fp_precision(4);
std::cout << 5.421_fp << std::endl;
}
}
This works, though it still has the requirement of using the SPEC() macro for every precision you ever want to use. Of course, some preprocessor trickery could be used to do this for every value from, say, 0 to 100, but I'm wondering if there could be anything more like a template solution, where each one is instantiated as it is needed. I had a vague idea of using an operator"" declared as a friend function in a template class, though I suspect that won't work either.
As a note, I did try template<int i> constexpr int operator"" _fp(long double n), but it seems this is not an allowed declaration of a literal operator.
You can have your literal operator return a class type with an overloaded operator()(int). Then you could write
5.421_fp(2);
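A hedged sketch of that idea, with illustrative names that are not from the answer:
#include <iostream>
// Wrapper returned by the literal operator; the call operator applies the precision.
struct FixedPointValue {
    long double raw;
    long long operator()(int prec) const {
        long long scale = 1;
        for (int k = 0; k < prec; ++k) scale *= 10;
        return static_cast<long long>(raw * scale);   // truncates toward zero
    }
};
constexpr FixedPointValue operator"" _fp(long double n) {
    return FixedPointValue{n};
}
int main() {
    std::cout << 5.421_fp(2) << "\n";   // 542
    std::cout << 5.25_fp(4) << "\n";    // 52500 (5.25 is exactly representable)
}
The trade-off is that the precision is now an ordinary run-time argument, so nothing forces it to be a compile-time constant.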
A user-defined literal function takes as its sole argument the literal itself. You can use state outside the function, for example with a global or thread-local variable, but that's not very clean.
If the argument will always be compile-time constant, and it's part of the number, pass it through the literal itself. That requires writing a raw operator "" _ ( char const * ) overload or a template< char ... > operator "" _ () template and parsing the number completely by yourself.
You will have to work such a parameter into the existing floating-point grammar, though. Although C++ defines a very open-ended preprocessing-number construct, a user-defined literal must be formed from a valid token with a ud-suffix identifier appended.
You might consider using strings instead of numbers, but then the template option goes away.
One does not need macros to solve the problem. Since the problem concerns processing literal numbers (e.g., integers or floating-point formatted numbers), one can use the template form of the literal operator and template metaprogramming to do the job completely at compile time.
To do your fixed-point literal conversions, you could use the integer literal operator with unsigned long long, e.g.,
some_type operator "" _fp(unsigned long long num)
{
// code
}
(or with long double with possible loss of precision) but this causes everything to happen at run-time.
C++11, in section 2.14.8 (User-defined Literals [lex.ext]), paragraphs 3 and 4, defines literal operator variations including a template version for integer and floating-point literals! Unfortunately, paragraphs 5 and 6 do not define a template version for string and character literals. This means this technique will only work with integer and floating-point literals.
From C++11 section 2.14.8 the above _fp literal operator can therefore be written instead as:
template <char... Digits>
constexpr some_type operator "" _fp()
{
return process_fp<2, Digits...>::to_some_type();
}
e.g., where the 2 is a value from the int i template parameter from the OP and some_type is whatever the return type needs to be. Notice that the template parameter is a char, not an int or some other number. Also notice that the literal operator has no arguments. Thus code like Digit - '0' is needed to obtain the numeric value of each digit character. Moreover, Digits... will be processed in left-to-right order.
Now one can use template metaprogramming with process_fp whose forward declaration would look like:
template <int i, char... Digits>
struct process_fp;
and would have a static constexpr method called to_some_type() to compute and return the desired, compile-time result.
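To make that concrete, here is one hedged sketch of what such a process_fp might look like (long long stands in for some_type, the precision is fixed to 2 in the operator as in the answer's example, and it assumes C++14 relaxed constexpr plus plain decimal literals without exponents or digit separators):
template <int Prec, char... Digits>
struct process_fp
{
    static constexpr long long to_some_type()
    {
        const char digits[] = { Digits... };
        long long result = 0;
        int frac_digits = -1;                   // -1 until the '.' has been seen
        for (char c : digits) {
            if (c == '.') { frac_digits = 0; continue; }
            if (frac_digits >= Prec) break;     // ignore fractional digits beyond Prec
            result = result * 10 + (c - '0');
            if (frac_digits >= 0) ++frac_digits;
        }
        // Pad with zeros if the literal had fewer than Prec fractional digits.
        for (int k = (frac_digits < 0 ? 0 : frac_digits); k < Prec; ++k)
            result *= 10;
        return result;
    }
};
template <char... Digits>
constexpr long long operator"" _fp()
{
    return process_fp<2, Digits...>::to_some_type();
}
static_assert(5.421_fp == 542, "truncated to 2 decimal places");
static_assert(5.4_fp == 540, "padded to 2 decimal places");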
One might also want a meaningful, simple example of how this is done. Last year I wrote code (link below) that, when used like this:
int main()
{
using namespace std;
const unsigned long long bits =
11011110101011011011111011101111_binary;
cout << "The number is: " << hex << bits << endl;
}
would convert the binary number 11011110101011011011111011101111 into an unsigned long long at compile time and store it into bits. Full code and an explanation, using the template metaprogramming technique referred to above, are provided in my blog entry titled Using The C++ Literal Operator.
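For reference, a hedged sketch in the same spirit as that blog code (not the blog's exact implementation; it assumes C++14 relaxed constexpr):
template <char... Bits>
constexpr unsigned long long operator"" _binary()
{
    const char bits[] = { Bits... };
    unsigned long long value = 0;
    for (char b : bits)                 // most significant bit first
        value = (value << 1) | static_cast<unsigned long long>(b - '0');
    return value;
}
static_assert(1011_binary == 11, "binary 1011 is decimal 11");
Because only the literal operator template is declared, the long 32-digit literal in main() above never has to fit into any built-in integer type as a decimal number; its characters are handed to the template directly and reinterpreted as binary digits.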