Suppose that I need a function template that does different amounts of bit shifts on different integer types. For example, if the input value n is of type char, the function does some calculations on n>>2 and n<<3. If it is short int, then the function uses n>>1 and n<<8. For int type, there would be n>>11 and n<<9, and so on. Of course, the values mentioned are just examples and there is no correlation between them and the size of Num_Type.
My suggestion for this problem was something like this:
template <typename Num_Type = char, int s1 = 2, int s2 = 3> void test1(Num_Type &n)
{
// this is just an example code:
int a = n >> s1,
b = n << s2;
// do some calculations on a and b
}
So I can use this template for different types of input value. But when I compile it, it gives me several warnings about undefined behavior in shift operations, because the value may be too big.
So the question can be asked in two forms:
How can I customize bit-shift operations without getting "undefined behavior" warnings? or
Can I have a limited data type, say intlength which can only have a specified range of numbers, say 0, 1, 2, ... , 31? I know it may sound stupid, but in this case, I can define the template as
template <typename Num_Type = char, intlength s1 = 2, intlength s2 = 3> void test1(Num_Type &n)
and so the compiler should not complain about shift values.
You can disable the warning for this function with a #pragma directive, which depends on the compiler you are using. You will have to google that yourself.
Edit: since you mention you are using Visual Studio, here is a link to disable warnings. :)
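For MSVC, the relevant warning here is likely C4293 ("shift count negative or too big"); a minimal sketch of suppressing it around just this function (check the exact warning number your own build reports):
#pragma warning(push)
#pragma warning(disable : 4293) // shift count negative or too big
template <typename Num_Type = char, int s1 = 2, int s2 = 3>
void test1(Num_Type &n)
{
    int a = n >> s1,
        b = n << s2;
    // do some calculations on a and b
}
#pragma warning(pop)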
Once you disable the warning you can add a static_assert to check if s1 and s2 are in range of the Num_Type. Although I find it strange that your compiler does not do that itself since everything is known at compile time.
template <typename Num_Type, int s1 = 2, int s2 = 3> void test1(Num_Type &n)
{
constexpr auto max = sizeof(Num_Type) * 8;
static_assert(s1 < max && s2 < max, "integer overflow");
// this is just an example code:
int a = n >> s1,
b = n << s2;
}
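A hypothetical usage sketch (the second call is commented out because it would fail the static_assert for an 8-bit char):
int main()
{
    char c = 'A';
    test1<char, 2, 3>(c);     // fine: both shift counts fit in 8 bits
    // test1<char, 2, 9>(c);  // would trigger the static_assert: 9 >= 8
}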
The clean way to change a function template's behavior depending on the input type is to use template specialization. For example:
template <typename num_t> void test(num_t n);
template <> void test<int>(int n){
int shift_amount = 4;
// do shifts
}
template <> void test<char>(char n){
int shift_amount = 1;
// do shifts
}
Also, make sure to surround your shifts with if checks to make sure there is no overflow.
I see no need for templates. All you need is simple overloading:
auto test(char n) { ... }
auto test(uint16_t n) { ... }
auto test(uint32_t n) { ... }
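A sketch of what those overloads might contain, using the example shift amounts from the question (the bodies are placeholders):
void test(char n)  { int a = n >> 2,  b = n << 3;  /* calculations on a and b */ }
void test(short n) { int a = n >> 1,  b = n << 8;  /* calculations on a and b */ }
void test(int n)   { int a = n >> 11, b = n << 9;  /* calculations on a and b */ }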
Related
I wondered if I could auto deduce the size of an array, which is passed as a template parameter, without (explicitly) passing its size.
The following code compiles without warnings on both g++ 4.8 and clang++ 3.3 (using -std=c++11 -Wall).
#include <iostream>
template<const int* arr>
struct array_container
{
static constexpr int val = arr[1];
array_container() {
std::cout << val << std::endl;
}
// static constexpr int arr_size = ??;
};
constexpr int one[] = { 1 };
constexpr int two[] = { 1, 2 };
int main()
{
// array_container<one> array_one;
array_container<two> array_two;
// (void) array_one;
(void) array_two;
return 0;
}
However, if I remove the two comment signs in main(), I get an out of bound error with both compilers.
Now, this is cool. Somehow the compiler knows the size of the array, though the type of const int* arr is a pointer. Is there any way to get the size of arr, e.g. to complete my comment in array_container?
Of course, you are not allowed to
Use any macros
Store the size in arr (e.g. passing an std::array as template parameter: constexpr std::array<int, 1> one = { 1 }, or using an end marker like '\0' in strings)
Use an additional template parameter for the size that can not be auto deduced (array_container<1, one> array_one).
Maybe the std::extent template from the <type_traits> header of the C++11 standard library is what you want:
#include <iostream>
#include <type_traits>
constexpr int one[] = { 1 };
constexpr int two[] = { 1, 2 };
int main()
{
std::cout << std::extent<decltype(one)>::value << std::endl;
std::cout << std::extent<decltype(two)>::value << std::endl;
return 0;
}
Output:
1
2
#include <cstddef>
#include <iostream>
using std::cout;

template<std::size_t size>
constexpr std::size_t arraySize ( const int ( &arrayRef ) [size] ) {
return size;
}
int main(){
int A[1];
int B[2];
cout << arraySize(A) << arraySize(B);
return 0;
}
I believe something like this is what you're looking for, using array references. The syntax for declaring an array reference looks kind of like the syntax for a function pointer. This function template accepts an array reference named arrayRef, which prevents array-to-pointer decay so that compile-time info about array size is preserved. As you can see, the template argument is implicit to the compiler. Note that this can only work when the size can be deduced at compile time. Interestingly, this should still work without naming arrayRef at all. To make the above template more useful, you can add a template parameter to deduce the type of the array as well. I left it out for clarity.
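A sketch of that generalized version, with the element type deduced as well:
#include <cstddef>

template <typename T, std::size_t N>
constexpr std::size_t arraySize(const T (&)[N]) {
    return N;
}

// arraySize(A) == 1, arraySize(B) == 2, arraySize("hello") == 6, and so on.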
Probably not, as SFINAE only happens in the immediate context, while that error comes from the requirement that UB in constexpr lead to a compile time error, which I think is not immediate. You could try a recursive SFINAE that stops on the UB, but even if it worked you would have to both check the standard and hope it does not change (as it is rather obscure and new).
The easy way is to use a function to deduce the array size, then pass it explicitly to the type and store it in an auto. Probably not what you want.
There are proposals to allow type parameters to be deduced from value parameters, so you could wait for those instead.
Not a solid answer, more of an extended comment, so marked community wiki.
It is indeed possible. I found a solution using SFINAE. What it basically does is produce a substitution error if the index is out of bounds (line 3 in this example):
template<class C>
static yes& sfinae(typename val_to_type<
decltype(*C::cont::data), *(C::cont::data + C::pos)>::type );
template<class C>
static no& sfinae(C );
The full source code is on github.
There are only two disadvantages:
You have to specify the type of the array (this can not be avoided)
It only works with g++ 4.8.1 and clang 3.3. g++ fails for empty strings (with a compiler bug). If someone can test for other compilers, that would be appreciated.
I wrote a function:
template<int N> void tryHarder() {
for(int i = 0; i < N; i++) {
tryOnce();
}
}
but I only want it to compile if N is in between 0 and 10. Can I do it? How?
You can do it with a static_assert declaration:
template<int N> void tryHarder() {
static_assert(N >= 0 && N <= 10, "N out of bounds!");
for(int i = 0; i < N; i++) {
tryOnce();
}
}
This feature is only available since C++11. If you're stuck with C++03, take a look at Boost's static assert macro.
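A sketch of the C++03 variant, assuming Boost is available:
#include <boost/static_assert.hpp>

template<int N> void tryHarder() {
    BOOST_STATIC_ASSERT(N >= 0 && N <= 10);
    for(int i = 0; i < N; i++) {
        tryOnce();
    }
}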
The whole point of static_assert is nice error messages. If you don't care about those, or can't even afford Boost, you could do something like the following:
template<bool B>
struct assert_impl {
static const int value = 1;
};
template<>
struct assert_impl<false> {
static const int value = -1;
};
template<bool B>
struct assert {
// this will attempt to declare an array of negative
// size if template parameter evaluates to false
static char arr[assert_impl<B>::value];
};
template<int N>
void tryHarder()
{
assert< N <= 10 >();
}
int main()
{
tryHarder<5>(); // fine
tryHarder<15>(); // error, size of array is negative
}
For pre C++11 compilers, you could implement a template parameter constraint on the non-type parameter N.
For a description of how to do this, please see http://stroustrup.com/bs_faq2.html#constraints
Combined with the answers already given, the lower bound can be covered too by using unsigned int as the type of the template parameter. Negative arguments, if supplied, will be converted to unsigned values large enough that they are still caught by the static_assert (or by the pre-C++11 solution).
unsigned int also gives a semantic hint that negative values are not meant to be passed to this template, which is why it should (arguably) be preferred in this specific case.
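Putting both ideas together, a sketch:
template<unsigned int N> void tryHarder() {
    // A negative argument would wrap around to a huge unsigned value
    // and still be rejected by the upper-bound check.
    static_assert(N <= 10, "N out of bounds!");
    for(unsigned int i = 0; i < N; i++) {
        tryOnce();
    }
}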
#if !defined(__cplusplus)
#error C++ compiler required.
#endif
This is just an example.
Here is the source link: http://msdn.microsoft.com/en-us/library/c8tk0xsk(v=vs.71).aspx
All I am saying is that you can also use #error. It is a directive.
Edit #Pratik Chowdhruy: I agree with Paul R. This does not answer the question directly. Sorry to the community
I'm trying to create a compile-time bit mask using metaprograming techniques, my idea is to create something like this:
unsigned int Mask2 = Mask<2>(); // value = 0x03 = b00000000000000000000000000000011
unsigned int Mask3 = Mask<3>(); // value = 0x07 = b00000000000000000000000000000111
unsigned int Mask7 = Mask<7>(); // value = 0x7F = b00000000000000000000000001111111
The code that I'm trying is this:
template <const unsigned int N> const unsigned int Mask()
{
if (N <= 1)
{
return 1;
}
else
{
return ((1 << N) | Mask<N - 1>());
}
}
But it results in tons of warnings, in pairs:
warning C4554: '<<' : check operator precedence for possible error
warning C4293: '<<' : shift count negative or too big
And in the end, the compile error:
error C1202: recursive type or function dependency context too complex.
So I deduce that the recursion never ends and the compiler falls into an infinite loop, but I don't understand WHY.
As has already been pointed out, you're depending on a runtime check to stop a compile-time recursion, which can't work. More importantly, perhaps, for what you want to do, is that you're defining a function, which has no value until you call it. So even after you stop the recursion with a specialization, you still have a nested sequence of functions, which will be called at runtime.
If you want full compile-time evaluation, you must define a static data member of a class template, since that's the only way a compile-time constant can appear in a template. Something like:
template <unsigned int N>
struct Mask
{
static unsigned int const value = (1 << (N - 1)) | Mask<N - 1>::value;
};
template <>
struct Mask<0>
{
static unsigned int const value = 0;
};
(I've also corrected the numerical values you got wrong.)
Of course, you don't need anything this complicated. The following should do the trick:
template <unsigned int N>
struct Mask
{
static unsigned int const value = (1 << N) - 1;
};
template <>
struct Mask<0>
{
static unsigned int const value = 0;
};
(With this formula the specialization for 0 isn't strictly necessary, since (1 << 0) - 1 is already 0, but keeping it makes the intent explicit and mirrors the recursive version.)
Finally, of course: to access the value, you need to write something like Mask<3>::value. (You might want to wrap this in a macro.)
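For example (using C++11 static_assert only to show that the value is a compile-time constant):
static_assert(Mask<3>::value == 0x07, "three low bits set");
unsigned int m = Mask<7>::value;   // 0x7F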
It doesn't need to be recursive. This should work just fine :
template <const unsigned int N> const unsigned int Mask()
{
return ((1 << N) - 1);
}
It doesn't even need to be a template really. An (inlined) function is ok.
Note that if you want to support any value of N, specifically N >= sizeof(unsigned int) * CHAR_BIT, you probably want to treat those as a special case.
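A sketch of what that special case might look like (assuming an all-ones mask is the desired result for out-of-range N, and using constexpr just for brevity):
#include <climits>

template <unsigned int N>
constexpr unsigned int Mask()
{
    // When N covers every bit, skip the shift entirely; note that a
    // pedantic compiler may still warn about the unselected branch.
    return N >= sizeof(unsigned int) * CHAR_BIT
        ? ~0u
        : (1u << N) - 1u;
}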
A template is created at compile time, but you are relying on run time behavior to stop the recursion.
For example, if you instantiate Mask<2>, it is going to use Mask<1>, which is going to use Mask<0>, which is going to use Mask<-1>, etc.
You have a runtime check for N being <= 1, but this doesn't help when it's compiling. It still creates an infinite sequence of functions.
To stop the template instantiation recursion you need to introduce an explicit specialization:
template <> const unsigned int Mask<0>()
{
return 1;
}
Your recursion never ends because the compiler tries to generate the template implementation for both if-branches. So when it generates Mask<0> it also generates Mask<0xffffffff>, and so on.
C++11 -- no recursion or templates:
constexpr unsigned mask(unsigned N) { return ~(~0u << N); }
So far the answers only addressed the second error (C1202), but you asked more than that.
Warning C4554 is caused by a Microsoft compiler bug involving template parameters and the << operator. So, (1 << N) generates a warning. If N were an ordinary parameter, there would be no warning of course.
The very simple workaround is to use (1 << (N)) instead of (1 << N), and C4554 goes away!
A little while ago I had an idea about "parameterized" user-defined literals and was wondering if there is any way to do this in the current C++ standard.
Basically, the idea is to have a user-defined literal whose behaviour can be tweaked according to some parameters. As a simple example, I chose a "fixed-point" literal which turns a floating-point number into an integer; the parameter is the precision in terms of the number of decimal places.
This is just an exercise for now, since I'm not sure how or if this would be useful in a real application.
My first idea went something like this:
namespace fp_impl {
constexpr int floor(long double n) {
return n;
}
constexpr int pow10(int exp) {
return exp == 0 ? 1 : 10 * pow10(exp - 1);
}
template<int i>
constexpr int fixed_point(long double n) {
return floor(n * pow10(i));
}
namespace fp2 {
constexpr int operator"" _fp (long double n) {
return fixed_point<2>(n);
}
}
namespace fp4 {
constexpr int operator"" _fp (long double n) {
return fixed_point<4>(n);
}
}
}
template<int prec> struct fp;
template<> struct fp<2> {
namespace lit = fp2;
};
template<> struct fp<4> {
namespace lit = fp4;
};
int main() {
{
using namespace fp<2>::lit;
std::cout << 5.421_fp << std::endl; // should output 542
}
{
using namespace fp<4>::lit;
std::cout << 5.421_fp << std::endl; // should output 54210
}
}
However, it doesn't compile because namespace aliases are not permitted at class scope. (It also has the problem of requiring you to manually define every version of operator"" _fp.) So I decided to try something with macros:
namespace fp {
namespace detail {
constexpr int floor(long double n) {
return n;
}
constexpr int pow10(int exp) {
return exp == 0 ? 1 : 10 * pow10(exp - 1);
}
template<int i>
constexpr int fixed_point(long double n) {
return floor(n * pow10(i));
}
}
}
#define SPEC(i) \
namespace fp { \
namespace precision##i { \
constexpr int operator"" _fp(long double n) { \
return fp::detail::fixed_point<i>(n); \
} \
} \
}
SPEC(2); SPEC(4);
#undef SPEC
#define fp_precision(i) namespace fp::precision##i
int main() {
{
using fp_precision(2);
std::cout << 5.421_fp << std::endl;
}
{
using fp_precision(4);
std::cout << 5.421_fp << std::endl;
}
}
This works, though it still has the requirement of using the SPEC() macro for every precision you ever want to use. Of course, some preprocessor trickery could be used to do this for every value from, say, 0 to 100, but I'm wondering if there could be anything more like a template solution, where each one is instantiated as it is needed. I had a vague idea of using an operator"" declared as a friend function in a template class, though I suspect that won't work either.
As a note, I did try template<int i> constexpr int operator"" _fp(long double n), but it seems this is not an allowed declaration of a literal operator.
You can have your literal operator return a class type that overloads operator()(int). Then you could write
5.421_fp(2);
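A minimal sketch of that idea (all names here are made up, and it has the same floating-point truncation caveats as the code in the question):
namespace fp_impl {
    constexpr long double pow10(int exp) {
        return exp == 0 ? 1.0L : 10.0L * pow10(exp - 1);
    }
    struct fp_value {
        long double raw;
        constexpr int operator()(int precision) const {
            return static_cast<int>(raw * pow10(precision));
        }
    };
}

constexpr fp_impl::fp_value operator"" _fp(long double n) {
    return fp_impl::fp_value{n};
}

// usage: std::cout << 5.421_fp(2);  // 542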
A user-defined literal function takes as its sole argument the literal itself. You can use state outside the function, for example with a global or thread-local variable, but that's not very clean.
If the argument will always be compile-time constant, and it's part of the number, pass it through the literal itself. That requires writing an operator "" _ ( char const *, std::size_t ) overload or template< char ... > operator "" _ () template and parsing the number completely by yourself.
You will have to work such a parameter into the existing floating-point grammar, though. Although C++ defines a very open-ended preprocessing-number construct, a user-defined literal must be formed from a valid token with a ud-suffix identifier appended.
You might consider using strings instead of numbers, but then the template option goes away.
One does not need macros to solve the problem. Since the problem concerns processing literal numbers (e.g., integers or floating-point formatted numbers), one can use the template form of the literal operator and template metaprogramming to do the job completely at compile time.
To do your fixed-point literal conversions, you could use the integer literal operator with unsigned long long, e.g.,
some_type operator "" _fp(unsigned long long num)
{
// code
}
(or with long double with possible loss of precision) but this causes everything to happen at run-time.
C++11, in section 2.14.8 (User-defined Literals [lex.ext]) paragraphs 3 and 4, defines literal operator variations including a template version for integer and floating-point literals! Unfortunately, paragraphs 5 and 6 do not define a template version for string and character literals. This means this technique will only work with integer and floating-point literals.
From C++11 section 2.14.8 the above _fp literal operator can therefore be written instead as:
template <char... Digits>
constexpr some_type operator "" _fp()
{
return process_fp<2, Digits...>::to_some_type();
}
e.g., where the 2 is the value of the int i template parameter from the OP and some_type is whatever the return type needs to be. Notice that the template parameter is a char, not an int or some other numeric type. Also notice that the literal operator has no arguments. Thus code like Digit - '0' is needed to convert a digit character to its integer value. Moreover, Digits... will be processed in left-to-right order.
Now one can use template metaprogramming with process_fp whose forward declaration would look like:
template <int i, char... Digits>
struct process_fp;
and would have a static constexpr method called to_some_type() to compute and return the desired, compile-time result.
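As a rough sketch of the kind of machinery process_fp needs (hypothetical names, and ignoring the '.' character that a floating-point literal also delivers), the digit characters can be folded into a number recursively:
template <unsigned long long Acc, char... Digits>
struct parse_digits {
    static constexpr unsigned long long value = Acc;
};

template <unsigned long long Acc, char D, char... Rest>
struct parse_digits<Acc, D, Rest...> {
    static constexpr unsigned long long value =
        parse_digits<Acc * 10 + (D - '0'), Rest...>::value;
};

// parse_digits<0, '5', '4', '2'>::value == 542
process_fp would build on something of this shape, splitting the parameter pack at the '.' and then scaling by the i parameter.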
One might also want a meaningful, simple example of how this is done. Last year I wrote code (link below) that, when used like this:
int main()
{
using namespace std;
const unsigned long long bits =
11011110101011011011111011101111_binary;
cout << "The number is: " << hex << bits << endl;
}
would convert the binary number 11011110101011011011111011101111 into an unsigned long long at compile-time and store it into bits. Full code and explanation of such using the template metaprogramming technique referred to above is provided in my blog entry titled, Using The C++ Literal Operator.
I have
int list[] = {1, 2, 3};
How do I get the size of list?
I know that for a char array, we can use strlen(array) to find the size, or check with '\0' at the end of the array.
I tried sizeof(array) / sizeof(array[0]) as some answers said, but it only works in main? For example:
int size(int arr1[]){
return sizeof(arr1) / sizeof(arr1[0]);
}
int main() {
int list[] = {1, 2, 3};
int size1 = sizeof(list) / sizeof(list[0]); // ok
int size2 = size(list); // no
// size1 and size2 are not the same
}
Why?
Try this:
sizeof(list) / sizeof(list[0]);
Because this question is tagged C++, it is always recommended to use std::vector in C++ rather than using conventional C-style arrays.
An array-type is implicitly converted into a pointer-type when you pass it to a function.
Have a look at this.
In order to correctly print the sizeof an array inside any function, pass the array by reference to that function (but you need to know the size of that array in advance).
You would do it like so for the general case:
template<typename T, int N> // template argument deduction
int size(T (&arr1)[N]) // passing the array by reference
{
return sizeof(arr1) / sizeof(arr1[0]); // correctly returns the size of 'list'
// or simply: return N; // correctly returns the size too [cool trick ;-)]
}
The "standard" C way to do this is
sizeof(list) / sizeof(list[0])
You could use boost::size, which is basically defined this way:
template <typename T, std::size_t N>
std::size_t size(T const (&)[N])
{
return N;
}
Note that if you want to use the size as a constant expression, you'll either have to use the sizeof a / sizeof a[0] idiom or wait for the next version of the C++ standard.
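Once constexpr is available (the "next version" mentioned above), the same function can be marked constexpr and used in constant expressions; a sketch:
#include <cstddef>

template <typename T, std::size_t N>
constexpr std::size_t size(T const (&)[N])
{
    return N;
}

// now valid at compile time:
// int a[20];
// char x[size(a)];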
You can't do that for a dynamically allocated array (or a pointer). For static arrays, you can use sizeof(array) to get the whole array size in bytes and divide it by the size of each element:
#define COUNTOF(x) (sizeof(x)/sizeof(*x))
To get the size of a dynamic array, you have to keep track of it manually and pass it around with it, or terminate it with a sentinel value (like '\0' in null terminated strings).
Update: I realized that your question is tagged C++ and not C. You should definitely consider using std::vector instead of arrays in C++ if you want to pass things around:
std::vector<int> v;
v.push_back(1);
v.push_back(2);
std::cout << v.size() << std::endl; // prints 2
Since you've marked this as C++, it's worth mentioning that there is a somewhat better way than the C-style macro:
template <class T, size_t N>
size_t countof(const T (&array)[N]) { return N; }
This has the advantage that if you accidentally try to pass something other than an array to it, the code simply won't compile (whereas passing a pointer to the C macro will compile but produce a bad result). The disadvantage is that this doesn't give you a compile-time constant, so you can't do something like this:
int a[20];
char x[countof(a)];
In C++11 or newer, you can add constexpr to get a compile-time constant:
template <class T, size_t N>
constexpr size_t countof(const T (&array)[N]) { return N; }
If you really want to support the same on older compilers, there is a way, originally invented by Ivan Johnson, AFAIK:
#define COUNTOF(x) ( \
0 * sizeof( reinterpret_cast<const ::Bad_arg_to_COUNTOF*>(x) ) + \
0 * sizeof( ::Bad_arg_to_COUNTOF::check_type((x), &(x)) ) + \
sizeof(x) / sizeof((x)[0]) )
class Bad_arg_to_COUNTOF
{
public:
class Is_pointer;
class Is_array {};
template<typename T>
static Is_pointer check_type(const T*, const T* const*);
static Is_array check_type(const void*, const void*);
};
This uses sizeof(x)/sizeof(x[0]) to compute the size, just like the C macro does, so it gives a compile-time constant. The difference is that it first uses some template magic to cause a compile error if what you've passed isn't the name of an array. It does that by overloading check_type to return an incomplete type for a pointer, but a complete type for an array. Then (the really tricky part) it doesn't actually call that function at all -- it just takes the size of the type the function would return, which is zero for the overload that returns the complete type, but not allowed (forcing a compile error) for the incomplete type.
IMO, that's a pretty cool example of template meta programming -- though in all honesty, the result is kind of pointless. You really only need that size as a compile time constant if you're using arrays, which you should normally avoid in any case. Using std::vector, it's fine to supply the size at run-time (and resize the vector when/if needed).
Besides Carl's answer, the "standard" C++ way is not to use a C int array, but rather something like a C++ STL std::vector<int> list which you can query for list.size().
When you pass an array to a function, you are just passing its starting address, so you also have to pass its size for the function to work properly. It's the same reason we pass argc along with argv[] for command-line arguments.
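In other words, something along these lines (printAll and its parameters are just illustrative names):
#include <cstddef>
#include <iostream>

void printAll(const int *arr, std::size_t n)   // size travels with the pointer
{
    for (std::size_t i = 0; i < n; i++)
        std::cout << arr[i] << ' ';
}

// call site, where the real size is still known:
// printAll(list, sizeof(list) / sizeof(list[0]));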
You can make a template function, and pass the array by reference to achieve this.
Here is my code snippet:
#include <iostream>
using namespace std;

template <typename TypeOfData>
void PrintArray(TypeOfData &arrayOfType);
int main()
{
char charArray[] = "my name is";
int intArray[] = { 1,2,3,4,5,6 };
double doubleArray[] = { 1.1,2.2,3.3 };
PrintArray(charArray);
PrintArray(intArray);
PrintArray(doubleArray);
}
template <typename TypeOfData>
void PrintArray(TypeOfData &arrayOfType)
{
int elementsCount = sizeof(arrayOfType) / sizeof(arrayOfType[0]);
for (int i = 0; i < elementsCount; i++)
{
cout << "Value in elements at position " << i + 1 << " is " << arrayOfType[i] << endl;
}
}
You have to use the sizeof operator.
Code Snippet:
#include<bits/stdc++.h>
using namespace std;
int main()
{
ios::sync_with_stdio(false);
int arr[] ={5, 3, 6, 7};
int size = sizeof(arr) / sizeof(arr[0]);
cout<<size<<endl;
return 0;
}
int arr1[] = {8, 15, 3, 7};
int n = sizeof(arr1)/sizeof(arr1[0]);
So basically sizeof(arr1) gives the size of the whole array object in bytes; each element may occupy multiple bytes, so dividing by the size of one element (sizeof(arr1[0])) gives you the actual number of elements you're looking for, i.e. 4 in my example.
This method works when you are using a class. In this example the operator receives a two-dimensional array, so the only method that worked for me was this one:
template <typename T, size_t n, size_t m>
Matrix& operator= (T (&a)[n][m])
{
    int arows = n;
    int acols = m;
    p = new double*[arows];
    for (int r = 0; r < arows; r++)
    {
        p[r] = new double[acols];
        for (int c = 0; c < acols; c++)
        {
            p[r][c] = a[r][c]; // A[rows][columns]
        }
    }
    return *this;
}
https://www.geeksforgeeks.org/how-to-print-size-of-an-array-in-a-function-in-c/
Assuming you merely want to know the size of an array whose element type you know (int) but whose length, obviously, you don't know, it is advisable to verify whether the array is empty; otherwise you will end up with a division by zero (causing a floating-point exception).
int array_size(int array[]) {
if(sizeof(array) == 0) {
return 0;
}
return sizeof(array)/sizeof(array[0]);
}
If you want to know how many numbers the array holds, you want to know the array length. The sizeof(var) operator in C gives you the size in bytes in memory. So if you know how much memory an int occupies, you can do it like this:
int arraylength(int array[]) {
return sizeof(array) / sizeof(int); // Size of the Array divided by the int size
}