Sometimes it is necessary to compare a string's length with a constant.
For example:
if ( line.length() > 2 )
{
// Do something...
}
But I am trying to avoid using "magic" constants in code.
Usually I use such code:
if ( line.length() > strlen("[]") )
{
// Do something...
}
It is more readable, but not efficient because of the function call.
I wrote template functions as follows:
template<size_t N>
size_t _lenof(const char (&)[N])
{
return N - 1;
}
template<size_t N>
size_t _lenof(const wchar_t (&)[N])
{
return N - 1;
}
// Using:
if ( line.length() > _lenof("[]") )
{
// Do something...
}
In a release build (Visual Studio 2008) it produces pretty good code:
cmp dword ptr [esp+27Ch],2
jbe 011D7FA5
And the good thing is that the compiler doesn't include the "[]" string in the binary output.
Is this a compiler-specific optimisation, or is it common behavior?
Why not
sizeof "[]" - 1;
(Minus one for the trailing null. You could write sizeof "[]" - sizeof '\0', but sizeof '\0' is often sizeof(int) in C, and "- 1" is perfectly readable.)
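For instance, a minimal illustration of the suggestion (reusing line from the question):
if ( line.length() > sizeof "[]" - 1 )   // sizeof counts the trailing '\0'
{
    // Do something...
}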
The capability to inline a function call is both a compiler-specific optimization and a common behavior. That is, many compilers can do it, but they aren't required to.
I think most compilers will optimize it away when optimizations are enabled. If they're disabled, it might slow your program down much more than necessary.
I would prefer your template functions, as they're guaranteed not to call strlen at runtime.
Of course, rather than writing separate functions for char and wchar_t, you could add another template argument, and get a function which works for any type:
template <typename Char_t, int len>
int static_strlen(const Char_t (&)[len]) {
    return len - 1;
}
(As already mentioned in comments, this will give funny results if passed an array of ints, but are you likely to do that? It's meant for strings, after all)
A final note: the name _lenof is bad. All names at namespace scope beginning with an underscore are reserved for the implementation. You risk some nasty naming conflicts.
By the way, why is "[]" less of a magic constant than 2 is?
In both cases, it is a literal that has to be changed if the format of the string it is compared to changes.
#define TWO 2
#define STRING_LENGTH 2
/* ... etc ... */
Seriously, why go through all this hassle just to avoid typing a 2? I honestly think you're making your code less readable, and other programmers are going to stare at you like you're snorting the used coffee from the filter.
Related
I have such expressions in my code:
QByteArray idx0 = ...
unsigned short ushortIdx0;
if ( idx0.size() >= sizeof(ushortIdx0) ) {
// Do something
}
But I'm getting the warning:
warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if ( idx0.size() >= sizeof(ushortIdx0) ) {
~~~~~~~~~~~~^~~~~~~~~~
Why is the size() of QByteArray returned as an int rather than an unsigned int? How can I get rid of this warning safely?
Some folk feel that the introduction of unsigned types into C all those years ago was a bad idea. Such types found themselves introduced into C++, where they are deeply embedded in the C++ Standard Library and operator return types.
Yes, sizeof must, by the standard, return an unsigned type.
The Qt developers adopt the modern thinking that the unsigned types were a bad idea, and favour instead making the return type of size a signed type. Personally I find it idiosyncratic.
To solve it, you could (i) live with the warning, (ii) switch it off for the duration of the function, or (iii) write something like
(std::size_t)idx0.size() >= sizeof(ushortIdx0)
at the expense of clarity.
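For option (ii), a sketch (not from the original answer) of switching the warning off locally with GCC or Clang; MSVC has an analogous #pragma warning:
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wsign-compare"
    if ( idx0.size() >= sizeof(ushortIdx0) ) {
        // Do something
    }
#pragma GCC diagnostic pop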
Why is the size() of QByteArray returned as an int rather than an unsigned int?
I literally have no idea why Qt chose a signed return for size(). However, there are good reasons to use a signed type instead of an unsigned one.
One infamous example where an unsigned size() fails miserably is this quite innocent-looking loop:
for (int i = 0; i < some_container.size() - 1; ++i) {
do_something(some_container[i], some_container[i+1]);
}
It's not too uncommon for the loop body to operate on two adjacent elements, and in that case it seems a valid choice to iterate only up to some_container.size() - 1.
However, if the container is empty some_container.size() - 1 will silently (unsigned overflow is well defined) turn into the biggest value for the unsigned type. Hence, instead of avoiding the out-of-bounds access it leads to the maximum out of bounds you can get.
Note that there are easy fixes for this problem, but if size() does return a signed value, then there is no issue that needs to be fixed in the first place.
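For example, one such easy fix (a sketch; it avoids the subtraction entirely, so nothing can wrap around):
for (std::size_t i = 0; i + 1 < some_container.size(); ++i) {
    do_something(some_container[i], some_container[i + 1]);  // safe even when the container is empty
}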
Because in Qt containers (QByteArray, QVector, ...) there are functions which can return a negative number, like indexOf and lastIndexOf, and functions which accept negative numbers, like mid. So, to be consistent within the class and even across the framework, the developers use a signed type (int).
You can use standard c++ casting:
if ( static_cast<size_t>(idx0.size()) >= sizeof(ushortIdx0) )
The "why" part of the question is a duplicate, but the type mismatch is a valid problem to solve. For comparisons of the kind you're doing, it'd probably be useful to factor them out, as they have a certain reusable meaning:
template <typename T> bool fitsIn(const QByteArray &a) {
return static_cast<int>(sizeof(T)) <= a.size();
}
template <typename T> bool fitsIn(T, const QByteArray &a) {
return fitsIn<T>(a);
}
if (fitsIn(ushortIdx0, idx0)) ...
Hopefully you'll have just a few kinds of such comparisons, and it makes most sense to DRY (don't repeat yourself): instead of a copy-paste of casts, use functions dedicated to the task - functions that also express the intent of the original comparison. It then becomes easy to centralize handling of any corner cases you might wish to handle, e.g. when sizeof(T) > INT_MAX.
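A hedged sketch of what centralizing that corner case might look like (the INT_MAX guard is illustrative, not from the original answer):
#include <limits>

template <typename T> bool fitsIn(const QByteArray &a) {
    // Types bigger than INT_MAX can never fit, and casting their size to int would overflow.
    if (sizeof(T) > static_cast<size_t>(std::numeric_limits<int>::max()))
        return false;
    return static_cast<int>(sizeof(T)) <= a.size();
}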
Another approach would be to define a new type to wrap size_t and adapt it to the types you need to use it with:
class size_of {
    size_t val;
    template <typename T> static typename std::enable_if<std::is_signed<T>::value, size_t>::type fromSigned(T sVal) {
        return (sVal > 0) ? static_cast<size_t>(sVal) : 0;
    }
public:
    template <typename T, typename U = typename std::enable_if<std::is_scalar<T>::value>::type>
    size_of(const T&) : val(sizeof(T)) {}
    size_of(const QByteArray &a) : val(fromSigned(a.size())) {}
    ...
    bool operator>=(size_of o) const { return val >= o.val; }
};
if (size_of(idx0) >= size_of(ushortIdx0)) ...
This would conceptually extend sizeof and specialize it for comparison(s) and nothing else.
constexpr can be awesome and useful for compile-time optimisation. For example,
strlen(char*)
can be precomputed using:
constexpr inline size_t strlen_constexpr(const char* baseChar) {
return (
( baseChar[0] == 0 )
?(// if {
0
)// }
:(// else {
strlen_constexpr( baseChar+1 ) + 1
)// }
);
}
This gives it a runtime cost of "0" when the call is optimised away... but it is more than 10x slower when it actually runs:
// Test results ran on a 2010 macbook air
--------- strlen ---------
Time took for 100,000 runs:1054us.
Avg Time took for 1 run: 0.01054us.
--------- strlen_constexpr ---------
Time took for 100,000 runs:19098us.
Avg Time took for 1 run: 0.19098us.
Is there any existing macro / template hack where a single unified function can be used instead? i.e.
constexpr size_t strlen_smart(char* baseChar) {
#if constexpr
... constexpr function
#else its runtime
... runtime function
}
Or some overloading hack that would allow the following
constexpr size_t strlen_smart(char* baseChar) {
... constexpr function
}
inline size_t strlen_smart(char* baseChar) {
... runtime function
}
Note: This question applies to the concept in general. Of having 2 separate functions for runtime and constexpr instead of the example functions given.
Disclaimer: Setting the compiler to -O3 (optimization level) is more than enough to fix 99.9% of static char optimizations making all the examples above "pointless". But that's beside the point of this question, as it applies to other "examples", and not just strlen.
I don't know any generic way, but I know two specific cases where it is possible.
Specific case of some compilers
gcc, and clang (which copies most gcc features), have a built-in function __builtin_constant_p. I am not sure whether gcc will correctly see the argument of an inline function as a constant, so I fear you'd have to use it from a macro:
#define strlen_smart(s) \
(__builtin_constant_p(s) && __builtin_constant_p(*s) ? \
strlen_constexpr(s) : \
strlen(s))
This might be of use. Note that I am testing both s and *s for being constant, because a pointer to a static buffer is a compile-time constant while its length is not.
Bonus: Specific case of literals (not an actual answer)
For the specific case of strlen you can use the fact that string literals are not of type const char * but of type const char[N], which implicitly converts to const char *. But it also binds to const char (&)[N], while a const char * does not.
So you can define:
template <size_t N>
constexpr size_t strlen_smart(const char (&array)[N]) { return N - 1; }
(plus obviously strlen_smart on const char * forwards to strlen)
I've sometimes used functions with this type of argument even in C++98, with a definition corresponding to the following (I didn't try to overload strlen itself, but the overloads were there so I could avoid calling it):
template <size_t N>
size_t strlen_smart(const char (&)[N]) { return N - 1; }
This has the problem that for
char buffer[10] = { 0 };
strlen_smart(buffer);
should say 0, but this optimized variant just says 9. The functions don't make sense to call on buffers like that, so I didn't care.
Let's say that I need to create a LUT containing precomputed bit count values (count of 1 bits in a number) for 0...255 values:
int CB_LUT[256] = {0, 1, 1, 2, ... 7, 8};
If I don't want to use hard-coded values, I can use the nice template solution from How to count the number of set bits in a 32-bit integer?
template <int BITS>
int CountBits(int val)
{
return (val & 0x1) + CountBits<BITS-1>(val >> 1);
}
template<>
int CountBits<1>(int val)
{
return val & 0x1;
}
int CB_LUT[256] = {CountBits<8>(0), CountBits<8>(1) ... CountBits<8>(255)};
This array is computed completely at compile time. Is there any way to avoid a long list and generate such an array using some kind of templates or even macros (sorry!), something like this:
Generate(CB_LUT, 0, 255); // array declaration
...
cout << CB_LUT[255]; // should print 8
Notes. This question is not about counting 1 bits in a number; that is used just as an example. I want to generate such an array completely in the code, without using external code generators. The array must be generated at compile time.
Edit.
To overcome compiler limits, I found the following solution, based on
Bartek Banachewicz's code:
#define MACRO(z,n,text) CountBits<8>(n)
int CB_LUT[] = {
BOOST_PP_ENUM(128, MACRO, _)
};
#undef MACRO
#define MACRO(z,n,text) CountBits<8>(n+128)
int CB_LUT2[] = {
BOOST_PP_ENUM(128, MACRO, _)
};
#undef MACRO
for(int i = 0; i < 256; ++i) // use only CB_LUT
{
cout << CB_LUT[i] << endl;
}
I know that this is possibly UB...
It would be fairly easy with macros using Boost.Preprocessor (recently re-discovered by me for my own code) - I am not sure whether it falls under "without using external code generators".
PP_ENUM version
Thanks to @TemplateRex for BOOST_PP_ENUM; as I said, I am not very experienced with PP yet :)
#include <boost/preprocessor/repetition/enum.hpp>
// with ENUM we don't need a comma at the end
#define MACRO(z,n,text) CountBits<8>(n)
int CB_LUT[256] = {
BOOST_PP_ENUM(256, MACRO, _)
};
#undef MACRO
The main difference with PP_ENUM is that it automatically adds the comma after each element and strips the last one.
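For reference, the BOOST_PP_ENUM line above expands, abbreviated, to something like:
int CB_LUT[256] = {
    CountBits<8>(0), CountBits<8>(1), CountBits<8>(2), /* ... */ CountBits<8>(255)
};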
PP_REPEAT version
#include <boost/preprocessor/repetition/repeat.hpp>
#define MACRO(z,n,data) CountBits<8>(n),
int CB_LUT[256] = {
BOOST_PP_REPEAT(256, MACRO, _)
};
#undef MACRO
Remarks
It's actually very straightforward and easy to use, though it's up to you to decide whether you will accept macros. I've personally struggled a lot with Boost.MPL and template techniques, only to find the PP solutions easy to read, short and powerful, especially for enumerations like this. An additional important advantage of PP over TMP is compilation time.
As for the trailing comma, all reasonable compilers should support it, but in case yours doesn't, simply change the number of repetitions to 255 and add the last case by hand.
You might also want to rename MACRO to something meaningful to avoid possible redefinitions.
I like to do it like this:
#define MYLIB_PP_COUNT_BITS(z, i, data) \
CountBits< 8 >(i)
int CB_LUT[] = {
BOOST_PP_ENUM(256, MYLIB_PP_COUNT_BITS, ~)
};
#undef MYLIB_PP_COUNT_BITS
The difference with BOOST_PP_REPEAT is that BOOST_PP_ENUM generates a comma-separated sequence of values, so there is no need to worry about commas and last-case behavior.
Furthermore, it is recommended to make your macros really loud and obnoxious by using a NAMESPACE_PP_FUNCTION naming scheme.
A small configuration nicety is to use [] instead of [256] for the array size, so that you can more easily modify the count later.
Finally, I would recommend making your CountBits function template constexpr, so that you can also initialize const arrays.
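A minimal sketch of that constexpr variant (C++11, assuming the same BOOST_PP_ENUM approach as above):
template <int BITS>
constexpr int CountBits(int val)
{
    return (val & 0x1) + CountBits<BITS - 1>(val >> 1);
}

template <>
constexpr int CountBits<1>(int val)
{
    return val & 0x1;
}

// The table can now be const and is still filled with compile-time constants:
#define MYLIB_PP_COUNT_BITS(z, i, data) CountBits< 8 >(i)
const int CB_LUT[] = {
    BOOST_PP_ENUM(256, MYLIB_PP_COUNT_BITS, ~)
};
#undef MYLIB_PP_COUNT_BITS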
According to Effective C++, "casting object addresses to char* pointers and then using pointer arithmetic on them almost always yields undefined behavior."
Is this true for plain-old-data? for example in this template function I wrote long ago to print the bits of an object. It works splendidly on x86, but... is it portable?
#include <iostream>
template< typename TYPE >
void PrintBits( TYPE data ) {
unsigned char *c = reinterpret_cast<unsigned char *>(&data);
std::size_t i = sizeof(data);
std::size_t b;
while ( i>0 ) {
i--;
b=8;
while ( b > 0 ) {
b--;
std::cout << ( ( c[i] & (1<<b) ) ? '1' : '0' );
}
}
std::cout << "\n";
}
int main ( void ) {
unsigned int f = 0xf0f0f0f0;
PrintBits<unsigned int>( f );
return 0;
}
It certainly is not portable. Even if you stick to fundamental types, there is endianness and there is sizeof, so your function will print different results on big-endian machines, or on machines where an int is 16 or 64 bits wide. Another issue is that not all PODs are fundamental types; structs may be POD, too.
POD struct members may have internal padding according to the implementation-defined alignment rules. So if you pass this POD struct:
struct PaddedPOD
{
char c;
int i;
};
your code would print the contents of padding, too. And that padding will be different even on the same compiler with different pragmas and options.
On the other side, maybe it's just what you wanted.
So, it's not portable, but it's not UB. There are some standard guarantees: you can copy PODs to and from an array of char or unsigned char, and the result of copying back via the char buffer will hold the original value. That implies that you can safely traverse that array, so your function is safe. But nobody guarantees that the object representation of objects with the same type and value will be the same on different computers.
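For illustration, a small sketch of that guarantee (the struct and names are illustrative, not from the original answer):
#include <cstring>

struct Pod { char c; int i; };       // a POD struct, possibly with padding

int main() {
    Pod a = { 'x', 42 };

    unsigned char buf[sizeof(Pod)];
    std::memcpy(buf, &a, sizeof a);  // copy the object representation out...

    Pod b;
    std::memcpy(&b, buf, sizeof b);  // ...and back: b now holds the same value as a
    return 0;
}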
BTW, I couldn't find that passage in Effective C++. Would you quote it, pls? I could say, if a part of your code already contains lots of #ifdef thiscompilerversion, sometimes it makes sense to go all-nonstandard and use some hacks that lead to undefined behavior, but work as intended on this compiler version with this pragmas and options. In that sense, yes, casting to char * often leads to UB.
Yes, POD types can always be treated as an array of chars, of size sizeof(TYPE). POD types are just like the corresponding C types (that's what makes them "plain old"). Since C doesn't have function overloading, writing "generic" functions to do things like write them to files or network streams depends on the ability to access them as char arrays.
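A hedged sketch of such a "generic" raw-write helper (the name WriteRaw is illustrative, not an established API):
#include <ostream>

// Writes the raw bytes of any POD object to a stream by viewing it as a char array.
template <typename POD>
void WriteRaw(std::ostream &out, const POD &value) {
    out.write(reinterpret_cast<const char *>(&value), sizeof value);
}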
Let's say I have a macro called LengthOf(array):
sizeof array / sizeof array[0]
When I make a new array of size 23, shouldn't I get 23 back for LengthOf?
WCHAR* str = new WCHAR[23];
str[22] = '\0';
size_t len = LengthOf(str); // len == 4
Why does len == 4?
UPDATE: I made a typo, it's a WCHAR*, not a WCHAR**.
Because str here is a pointer, not an array.
This is one of the fine differences between pointers and arrays: in this case, your pointer is on the stack, pointing to the array of 23 characters that has been allocated elsewhere (presumably the heap).
WCHAR** str = new WCHAR[23];
First of all, this shouldn't even compile -- it tries to assign a pointer to WCHAR to a pointer to pointer to WCHAR. The compiler should reject the code based on this mismatch.
Second, one of the known shortcomings of the sizeof(array)/sizeof(array[0]) macro is that it can and will fail completely when applied to a pointer instead of a real array. In C++, you can use a template to get code like this rejected:
#include <iostream>
template <class T, size_t N>
size_t size(T (&x)[N]) {
return N;
}
int main() {
int a[4];
int *b;
b = ::new int[20];
std::cout << size(a); // compiles and prints '4'
// std::cout << size(b); // uncomment this, and the code won't compile.
return 0;
}
As others have pointed out, the macro fails to work properly if a pointer is passed to it instead of an actual array. Unfortunately, because pointers and arrays evaluate similarly in most expressions, the compiler isn't able to let you know there's a problem unless you make your macro somewhat more complex.
For a C++ version of the macro that's typesafe (will generate an error if you pass a pointer rather than an array type), see:
Compile time sizeof_array without using a macro
It wouldn't exactly 'fix' your problem, but it would let you know that you're doing something wrong.
For a macro that works in C and is somewhat safer (many pointers will diagnose as an error, but some will pass through without error - including yours, unfortunately):
Is there a standard function in C that would return the length of an array?
Of course, using the power of #ifdef __cplusplus you can have both in a general purpose header and have the compiler select the safer one for C++ builds and the C-compatible one when C++ isn't in effect.
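A sketch of what such a general purpose header might look like (the helper name is illustrative):
#ifdef __cplusplus
#include <cstddef>
/* Type-safe C++ version: passing a pointer fails to compile. */
template <typename T, std::size_t N>
char (&LengthOfHelper(T (&array)[N]))[N];
#define LengthOf(array) (sizeof(LengthOfHelper(array)))
#else
/* Plain C fallback: works for real arrays, silently wrong for pointers. */
#define LengthOf(array) (sizeof(array) / sizeof((array)[0]))
#endif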
The problem is that the sizeof operator checks the size of its argument. The argument passed in your sample code is a WCHAR*, so sizeof(WCHAR*) is 4. If you had an array, such as WCHAR foo[23], and took sizeof(foo), the type passed would essentially be WCHAR[23], yielding sizeof(WCHAR) * 23. Effectively, at compile time WCHAR* and WCHAR[23] are different types, and while you and I can see that the result of new WCHAR[23] is functionally equivalent to WCHAR[23], the actual return type is WCHAR*, with absolutely no size information.
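To see the distinction in code (a small sketch):
WCHAR  foo[23];
WCHAR *str = new WCHAR[23];

size_t arraySize   = sizeof(foo);   // sizeof(WCHAR) * 23: the array type carries its extent
size_t pointerSize = sizeof(str);   // sizeof(WCHAR*): just the pointer size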
As a corollary, since sizeof(new WCHAR[23]) equals 4 on your platform, you're obviously dealing with an architecture where a pointer is 4 bytes. If you built this on an x64 platform, you'd find that sizeof(new WCHAR[23]) returns 8.
You wrote:
WCHAR* str = new WCHAR[23];
If 23 is meant to be a fixed value (not varying over the entire life of your program), it's better to use a #define or a const than to hardcode 23.
#define STR_LENGTH 23
WCHAR* str = new WCHAR[STR_LENGTH];
size_t len = (size_t) STR_LENGTH;
or C++ version
const int STR_LENGTH = 23;
WCHAR* str = new WCHAR[STR_LENGTH];
size_t len = static_cast<size_t>(STR_LENGTH);