Is there a shorthand byte notation in C/C++? - c++

It’s been a while since I programmed in C/C++. For the life of me, I cannot remember (or find on Google) how to make the following work. I thought there was a shorthand way of writing a repeating string of bytes, like these:
0x00 => 0x00000000
0xFF => 0xFFFFFFFF
0xCD => 0xCDCDCDCD
For example, if I were to declare
unsigned int x = 0xCD;
printf("%u", x);
it would print 3452816845, not 205.
Am I going crazy?
Is it possible without doing runtime bit shifts (e.g., by making the preprocessor handle it)?

The simplest way is:
0x1010101u * x
I can't think of any syntax that could possibly be simpler or more self-explanatory...
Edit: I see you want it to work for arbitrary types. Since it only makes sense for unsigned types, I'm going to assume you're using an unsigned type. Then try
#define REPB(t, x) ((t)-1/255 * (x))
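To see why this works: for a 32-bit unsigned int, (t)-1 is 0xFFFFFFFF, dividing by 255 gives the repeating pattern 0x01010101, and multiplying by the byte yields four copies of it. A quick check using the macro above (my own example, not from the answer):
#include <stdio.h>
#define REPB(t, x) ((t)-1/255 * (x))
int main(void)
{
    printf("0x%x\n", REPB(unsigned int, 0xCD)); /* prints 0xcdcdcdcd */
    return 0;
}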

There's nothing like that by default in C. There's something similar in CSS (the color #123 is expanded to #112233), but that's completely different. :)
You could write a macro to do it for you, though, like:
#define REPEAT_BYTE(x) (((unsigned)(x)) | ((unsigned)(x) << 8) | ((unsigned)(x) << 16) | ((unsigned)(x) << 24))
...
int x = REPEAT_BYTE(0xcd);

Unless you write your own macro, this is impossible. How would it know how long to repeat? Under the proposed notation, 0xAB could just as well mean 0xABABABABABABABABABABAB.

There is no such shorthand. 0x00 is the same as 0. 0xFF is the same as 0x000000FF.

You could use some template trickery:
#include <iostream>
#include <climits>
using namespace std;
template<typename T, unsigned char Pattern, unsigned int N = sizeof(T)>
struct FillInt
{
    static const T Value = ((T)Pattern) << ((N - 1) * CHAR_BIT) | FillInt<T, Pattern, N - 1>::Value;
};
template<typename T, unsigned char Pattern>
struct FillInt<T, Pattern, 0>
{
    static const T Value = 0;
};
int main()
{
    cout << hex << FillInt<unsigned int, 0xdc>::Value << endl; // outputs dcdcdcdc on 32-bit machines
}
This adapts automatically to the integral type passed as the first argument and is resolved completely at compile time. It's just for fun, though; I don't think I'd use such a thing in real code.
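If C++11 is available, a constexpr function gives the same compile-time result with less machinery. A sketch of mine (fill_bytes is my own name, not from the answer):
#include <climits>
#include <cstdint>
template<typename T>
constexpr T fill_bytes(unsigned char pattern, unsigned n = sizeof(T))
{
    // place the pattern byte at position n-1 and recurse downward
    return n == 0 ? T(0)
                  : T((T(pattern) << ((n - 1) * CHAR_BIT)) | fill_bytes<T>(pattern, n - 1));
}
static_assert(fill_bytes<std::uint32_t>(0xDC) == 0xDCDCDCDC, "fills every byte");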

Nope. But you can use memset (declared in <string.h>):
int x;
memset(&x, 0xCD, sizeof(x));
And you could make a macro of that:
#define INITVAR(var, value) memset(&(var), (int)(value), sizeof(var))
int x;
INITVAR(x, 0xCD);

You can use the preprocessor token concatenation:
#include <stdio.h>
#define multi4(a) (0x##a##a##a##a)
int main()
{
    int a = multi4(cd);
    printf("0x%x\n", a);
    return 0;
}
Result:
0xcdcdcdcd
Of course, you have to create a new macro each time you want to create a "generator" with a different number of repetitions.
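For instance, a 64-bit variant (my own extension of the same trick) just pastes eight copies and a suffix:
#define multi8(a) (0x##a##a##a##a##a##a##a##a##ULL)
unsigned long long b = multi8(cd); /* 0xcdcdcdcdcdcdcdcd */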

Related

`#define` a very large number in c++ source code

Well, the question is not as silly as it sounds.
I am using C++11 <array> and want to declare an array like this:
array<int, MAX_ARR_SIZE> myArr;
The MAX_ARR_SIZE is to be defined in a header file and could be very large, i.e. 10^15. Currently I am typing it like a pre-school kid
#define MAX_ARR_SIZE 1000000000000000
I can live with it if there is no alternative. I can't use pow(10, 15) here since it cannot be evaluated at compile time, so the array declaration would fail. I am not aware of any shorthand to type this.
Using #define for constants is more a way of C than C++.
You can define your constant in this way:
const size_t MAX_ARR_SIZE(1e15);
In this case, using a const size_t instead of #define is preferred.
I'd like to add that, since C++14, integer literals may contain single quotes as digit separators:
1'000'000'000'000'000
This reads much more clearly.
You can define a constexpr function:
constexpr size_t MAX_ARR_SIZE()
{
    return pow(10, 15);
}
That way you can do even more complex calculations at compile time.
Then use it as array<int, MAX_ARR_SIZE()> myArr; it will be evaluated at compile time.
Also, as was already mentioned, you almost certainly won't be able to allocate that on the stack: 10^15 ints is about 4 petabytes.
EDIT:
I made a mistake here: since pow itself is not constexpr, you can't use it. But that is solvable; for example, use ipow as discussed here: c++11 fast constexpr integer powers
Here is the quoted function:
constexpr int64_t ipow(int64_t base, int exp, int64_t result = 1) {
    return exp < 1 ? result : ipow(base * base, exp / 2, (exp % 2) ? result * base : result);
}
simply change MAX_ARR_SIZE() to:
constexpr size_t MAX_ARR_SIZE()
{
    return ipow(10, 15);
}
#define MAX_ARRAY_SIZE (1000ull * 1000 * 1000 * 1000 * 1000)
You actually can get pow(10, 15) and similar expressions folded at compile time in C++11 if you use a const instead of #define; just make sure you pick a large enough primitive. Note, though, that the result is not formally a constant expression, so it can't be used as an array bound.
You can use:
#define MAX_ARR_SIZE ((size_t)1e15)
Note that 1e15 by itself is a floating-point literal, hence the cast. In any case, 10^15 elements is so huge it probably could not be allocated.

2 bits size variable

I need to define a struct which has data members of size 2 bits and 6 bits.
Should I use a char type for each member? Or, to avoid wasting memory, can I use something like the :2 / :6 notation?
How can I do that?
Can I define a typedef for a 2-bit or 6-bit type?
You can use something like:
typedef struct {
    unsigned char SixBits : 6;
    unsigned char TwoBits : 2;
} tEightBits;
and then use:
tEightBits eight;
eight.SixBits = 31;
eight.TwoBits = 3;
But, to be honest, unless you're having to comply with packed data external to your application, or you're in a very memory constrained situation, this sort of memory saving is not usually worth it. You'll find your code is a lot faster if it's not having to pack and unpack data all the time with bitwise and bitshift operations.
Also keep in mind that bit-fields of any type other than _Bool, signed int, or unsigned int are implementation-defined. Specifically, unsigned char may not work everywhere.
It's probably best to use uint8_t for something like this. And yes, use bit fields:
struct tiny_fields
{
    uint8_t twobits : 2;
    uint8_t sixbits : 6;
};
I don't think you can be sure that the compiler will pack this into a single byte, though. Also, you can't know how the bits are ordered within the byte(s) that values of the struct type occupy. It's often better to use explicit masks if you want more control.
Personally I prefer shift operators and some macros over bit fields, so there's no "magic" left to the compiler. It is common practice in the embedded world.
#define SET_VAL2BIT(_var, _val) ((_var) |= ((_val) & 3))
#define SET_VAL6BIT(_var, _val) ((_var) |= (((_val) & 63) << 2))
#define GET_VAL2BIT(_var) ((_var) & 3)
#define GET_VAL6BIT(_var) (((_var) >> 2) & 63)
static uint8_t my_var;
<...>
SET_VAL2BIT(my_var, 1);
SET_VAL6BIT(my_var, 5);
int a = GET_VAL2BIT(my_var); /* a == 1 */
int b = GET_VAL6BIT(my_var); /* b == 5 */

Defining Bit-Flags Using #define in C++

I'm learning about bit-flags. I already know how they work and how they are defined in a struct. However, I'm unsure if they can be defined in a #define preprocessor directive like this:
#define FLAG_FAILED:1
Is this preprocessor directive the same as a struct bit-flag definition?
PS: I've already come across this related question but it didn't answer my question: #defined bitflags and enums - peaceful coexistence in "c". Also, if you can point me towards some information regarding preprocessor directives, I would appreciate that.
Any #define that you want to use to inject bitflags into a struct must take the form:
#define IDENTIFIER SUBSTITUTED_CODE
In your postulated use...
#define FLAG_FAILED:1
The identifier contains the colon, which makes it invalid.
You could do something like this:
#define FLAG_FAILED int flag_failed :1
struct X
{
    char a;
    FLAG_FAILED;
    int b;
    ...
};
It's not clear why you're considering using a define for the bit field anyway. If you just want to be able to vary the field length, then:
#define FLAG_FAILED_BITS 1
struct X
{
    unsigned flag_failed : FLAG_FAILED_BITS;
};
...or...
#define FLAG_FAILED_BITS :1
struct X
{
    unsigned flag_failed FLAG_FAILED_BITS;
};
#define FLAG_FAILED:1 is not really a bit flag in the sense of what most people know as a "bit flag". It's also bad syntax.
Bit flags typically are defined so that you have a type and you turn bits "on" by "setting" them and "off" by "clearing" them. To test whether a flag is on, you use the bitwise AND operator (&).
So your BIT0 (i.e. 2^0) would be defined as BIT0 = 0x00000001, and BIT1 (i.e. 2^1) as BIT1 = 0x00000002. If you wanted to stick with #define, you could handle setting and clearing this way:
#ifndef setBit
#define setBit(word, mask) ((word) |= (mask))
#endif
#ifndef clrBit
#define clrBit(word, mask) ((word) &= ~(mask))
#endif
or as a template
template<typename T>
inline T& setBit(T& word, T mask) { return word |= mask; }
template<typename T>
inline T& clrBit(T& word, T mask) { return word &= ~mask; }
If you want to set the bit, so to speak, you could have a state set as follows:
setBit(SystemState, SYSTEM_ONLINE);
or
setBit(SystemState, <insert type here>SYSTEM_ONLINE);
Clearing works the same way; just replace setBit with clrBit.
To compare, just do this:
if (SystemState & SYSTEM_ONLINE) { ... // do some processing
}
If the state lives in a struct, then reference it through the struct.
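Putting it together, a minimal sketch (SYSTEM_ARMED and the flag values are my own illustration, not from the question):
#define SYSTEM_ONLINE 0x01
#define SYSTEM_ARMED  0x02
unsigned SystemState = 0;
setBit(SystemState, SYSTEM_ONLINE);   /* turn the flag on  */
clrBit(SystemState, SYSTEM_ONLINE);   /* turn it off again */
if (SystemState & SYSTEM_ARMED) { /* ... do some processing */ }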
One way to define bitwise values with #define macros is:
#define BIT_ONE static_cast<int>( 1 << 0 )
#define BIT_TWO static_cast<int>( 1 << 1 )
#define BIT_THREE static_cast<int>( 1 << 2 )

C++ throwing compilation error on sizeof() comparison in preprocessor #if

I have this, which does not compile with the error "fatal error C1017: invalid integer constant expression" from Visual Studio. How would I do this?
template <class B>
A *Create()
{
#if sizeof(B) > sizeof(A)
#error sizeof(B) > sizeof(A)!
#endif
...
}
The preprocessor does not understand sizeof() (or data types, or identifiers, or templates, or class definitions, and it would need to understand all of those things to implement sizeof).
What you're looking for is a static assertion (enforced by the compiler, which does understand all of these things). I use Boost.StaticAssert for this:
template <class B>
A *Create()
{
    BOOST_STATIC_ASSERT(sizeof(B) <= sizeof(A));
    ...
}
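Note that since C++11 this is built into the language as static_assert, so no library is needed (the message string is my own wording):
template <class B>
A *Create()
{
    static_assert(sizeof(B) <= sizeof(A), "B must not be larger than A");
    ...
}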
Preprocessor expressions are evaluated before the compiler starts compilation. sizeof() is only evaluated by the compiler.
You can't do this with preprocessor. Preprocessor directives cannot operate with such language-level elements as sizeof. Moreover, even if they could, it still wouldn't work, since preprocessor directives are eliminated from the code very early, they can't be expected to work as part of template code instantiated later (which is what you seem to be trying to achieve).
The proper way to go about it is to use some form of static assertion
template <class B>
A *Create()
{
    STATIC_ASSERT(sizeof(B) <= sizeof(A));
    ...
}
There are quite a few implementations of static assertions out there. Do a search and choose one that looks best to you.
sizeof() cannot be used in a preprocessor directive.
The preprocessor runs before the compiler (at least logically it does) and has no knowledge of user-defined types (and not necessarily much knowledge about intrinsic types - the preprocessor's int size could be different from what the compiler targets).
Anyway, to do what you want, you should use a STATIC_ASSERT(). See the following answer:
Ways to ASSERT expressions at build time in C
With a STATIC_ASSERT() you'll be able to do this:
template <class B>
A *Create()
{
    STATIC_ASSERT(sizeof(A) >= sizeof(B));
    return 0;
}
This cannot be accomplished with the preprocessor. The preprocessor executes in a pass prior to the compiler -- therefore the sizes of NodeB and Node have not yet been computed at the time #if is evaluated.
You could accomplish something similar using template-programming techniques. An excellent book on the subject is Modern C++ Design: Generic Programming and Design Patterns Applied, by Andrei Alexandrescu.
Here is an example from a web page which creates a template IF statement.
From that example, you could use:
IF< (sizeof(NodeB) < sizeof(Node)), non_existing_type, int >::RET i;
which either declares a variable of type int or of type non_existing_type. Assuming the non-existing type lives up to its name, a compiler error will result whenever the template IF condition evaluates as true. You can rename i to something descriptive.
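For reference, a minimal IF template along those lines might look like this (a sketch from memory, not the exact code from the linked page):
template <bool Cond, typename Then, typename Else>
struct IF { typedef Then RET; };           // primary: condition is true
template <typename Then, typename Else>
struct IF<false, Then, Else> { typedef Else RET; };  // specialization: false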
Using this would be "rolling your own" static assert, of which many are already available. I suggest you use one of those after playing around with building one yourself.
If you are interested in a compile-time assert that will work for both C and C++, here is one I developed:
#define CONCAT2(x, y) x ## y
#define CONCAT(x, y) CONCAT2(x, y)
#define COMPILE_ASSERT(expr, name) \
struct CONCAT(name, __LINE__) { char CONCAT(name, __LINE__) [ (expr) ? 1 : -1 ]; }
#define CT_ASSERT(expr) COMPILE_ASSERT(expr, ct_assert_)
The key to how this works is that the size of the array is negative (which is illegal) when the expression is false. By further wrapping that in a structure definition, this does not create anything at runtime.
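For example (my own usage illustration):
CT_ASSERT(sizeof(int) >= 2);              /* compiles: array size is 1 */
/* CT_ASSERT(sizeof(int) == 1); -- would fail: negative array size */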
This has already been explained, but allow me to elaborate on why the preprocessor cannot compute the size of a structure. Aside from the fact that this is too much to ask of a simple preprocessor, there are also compiler flags that affect the way a structure is laid out.
struct X {
    short a;
    long b;
};
This structure might be 6 bytes or 8 bytes long, depending on whether the compiler was told to 32-bit align the "b" field for performance reasons. There's no way the preprocessor could have that information.
Using MSVC, this code compiles for me:
const int cPointerSize = sizeof(void*);
const int cFourBytes = 4;
#if (cPointerSize == cFourBytes)
...
however this (which should work identically) does not:
#if ( sizeof(void*) == 4 )
...
(In fact, the first version only appears to work: the preprocessor replaces identifiers it doesn't recognize, such as cPointerSize and cFourBytes, with 0, so the condition is really 0 == 0, which is always true.)
I see many people say that sizeof cannot be used in a pre-processor directive;
however, that can't be the whole story, because I regularly use the following macro:
#define STATICARRAYSIZE(a) (sizeof(a)/sizeof(*a))
for example:
#include <stdio.h>
#define STATICARRAYSIZE(a) (sizeof(a)/sizeof(*a))
int main(int argc, char *argv[])
{
    unsigned char chars[] = "hello world!";
    double dubls[] = {1, 2, 3, 4, 5};
    printf("chars num bytes: %zu, num elements: %zu.\n", sizeof(chars), STATICARRAYSIZE(chars));
    printf("dubls num bytes: %zu, num elements: %zu.\n", sizeof(dubls), STATICARRAYSIZE(dubls));
}
yields:
orion$ ./a.out
chars num bytes: 13, num elements: 13.
dubls num bytes: 40, num elements: 5.
However,
I, too, cannot get sizeof() to compile in a #if statement under gcc 4.2.1.
E.g., this doesn't compile:
#if (sizeof(int) == 2)
#error uh oh
#endif
Any insight would be appreciated.

C++ binary constant/literal

I'm using a well-known template to allow binary constants
template< unsigned long long N >
struct binary
{
    enum { value = (N % 10) + 2 * binary< N / 10 >::value };
};
template<>
struct binary< 0 >
{
    enum { value = 0 };
};
So you can do something like binary<101011011>::value. Unfortunately this has a limit of 20 digits for an unsigned long long.
Does anyone have a better solution?
Does this work if you have a leading zero on your binary value? A leading zero makes the constant octal rather than decimal.
Which leads to a way to squeeze a couple more digits out of this solution - always start your binary constant with a zero! Then replace the 10's in your template with 8's.
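Applied to the template above, that variant would look like this (my adaptation of the answer's idea; note the 8s in place of the 10s, and remember that every constant must now start with a leading 0 so it is parsed as octal):
template< unsigned long long N >
struct binary
{
    enum { value = (N % 8) + 2 * binary< N / 8 >::value };
};
template<>
struct binary< 0 >
{
    enum { value = 0 };
};
// binary<0101011011>::value == 347, same as binary<101011011>::value before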
The approaches I've always used, though not as elegant as yours:
1/ Just use hex. After a while, you just get to know which hex digits represent which bit patterns.
2/ Use constants and OR or ADD them. For example (may need qualifiers on the bit patterns to make them unsigned or long):
#define b0 0x00000001
#define b1 0x00000002
: : :
#define b31 0x80000000
unsigned long x = b2 | b7;
3/ If performance isn't critical and readability is important, you can just do it at runtime with a function such as "x = fromBin("101011011");" (a sketch of such a function follows after this list).
4/ As a sneaky solution, you could write a pre-pre-processor that goes through your *.cppme files and creates the *.cpp ones by replacing all "0b101011011"-type strings with their equivalent "0x15b" strings. I wouldn't do this lightly since there's all sorts of tricky combinations of syntax you may have to worry about. But it would allow you to write your string as you want to without having to worry about the vagaries of the compiler, and you could limit the syntax trickiness by careful coding.
Of course, the next step after that would be patching GCC to recognize "0b" constants, but that may be overkill :-)
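A possible fromBin for option 3 (a sketch, assuming the string contains only '0' and '1' characters):
unsigned long fromBin(const char *s)
{
    unsigned long v = 0;
    while (*s)
        v = (v << 1) | (unsigned long)(*s++ - '0');   /* shift in one bit per digit */
    return v;
}
/* usage: unsigned long x = fromBin("101011011");  -- x == 347 */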
C++0x has user-defined literals, which could be used to implement what you're talking about.
Otherwise, I don't know how to improve this template.
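For what it's worth, here is what such a user-defined literal could look like (a C++14 sketch of mine; the loop would need to be recursion in C++11, and the suffix name _b is my own choice):
constexpr unsigned long long operator"" _b(const char *s)
{
    unsigned long long v = 0;
    for (; *s; ++s)               // assumes the literal contains only 0s and 1s
        v = v * 2 + (unsigned long long)(*s - '0');
    return v;
}
static_assert(101011011_b == 347, "binary literal via UDL");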
template<unsigned int p, unsigned int i> struct BinaryDigit
{
    enum { value = p * 2 + i };
    typedef BinaryDigit<value, 0> O;
    typedef BinaryDigit<value, 1> I;
};
struct Bin
{
    typedef BinaryDigit<0, 0> O;
    typedef BinaryDigit<0, 1> I;
};
Allowing:
Bin::O::I::I::O::O::value
Much more verbose, but there are no limits (until you hit the size of an unsigned int, of course).
You can add more non-type template parameters to "simulate" additional bits:
// Utility metafunction used by top_bit<N>.
template <unsigned long long N1, unsigned long long N2>
struct compare {
    enum { value = N1 > N2 ? N1 >> 1 : compare<N1 << 1, N2>::value };
};
// This is hit when N1 grows beyond the size representable
// in an unsigned long long. Its value is never actually used.
template <unsigned long long N2>
struct compare<0, N2> {
    enum { value = 42 };
};
// Determine the highest 1-bit in an integer. Returns 0 for N == 0.
template <unsigned long long N>
struct top_bit {
    enum { value = compare<1, N>::value };
};
template <unsigned long long N1, unsigned long long N2 = 0>
struct binary {
    enum {
        value = (top_bit<binary<N2>::value>::value << 1) * binary<N1>::value
                + binary<N2>::value
    };
};
template <unsigned long long N1>
struct binary<N1, 0> {
    enum { value = (N1 % 10) + 2 * binary<N1 / 10>::value };
};
template <>
struct binary<0> {
    enum { value = 0 };
};
You can use this as before, e.g.:
binary<1001101>::value
But you can also use the following equivalent forms:
binary<100,1101>::value
binary<1001,101>::value
binary<100110,1>::value
Basically, the extra parameter gives you another 20 bits to play with. You could add even more parameters if necessary.
Because the place value of the second number is used to figure out how far to the left the first number needs to be shifted, the second number must begin with a 1. (This is required anyway, since starting it with a 0 would cause the number to be interpreted as an octal number.)
Technically it is neither C nor C++; it is a GCC-specific extension. But GCC allows binary constants, as seen here:
The following statements are identical:
i = 42;
i = 0x2a;
i = 052;
i = 0b101010;
Hope that helps. Some Intel compilers, and I am sure others, implement some of the GNU extensions; maybe you are lucky. (Binary literals of this form were later standardized in C++14.)
A simple #define works very well:
#define HEX__(n) 0x##n##LU
#define B8__(x) ((x&0x0000000FLU)?1:0)\
+((x&0x000000F0LU)?2:0)\
+((x&0x00000F00LU)?4:0)\
+((x&0x0000F000LU)?8:0)\
+((x&0x000F0000LU)?16:0)\
+((x&0x00F00000LU)?32:0)\
+((x&0x0F000000LU)?64:0)\
+((x&0xF0000000LU)?128:0)
#define B8(d) ((unsigned char)B8__(HEX__(d)))
#define B16(dmsb,dlsb) (((unsigned short)B8(dmsb)<<8) + B8(dlsb))
#define B32(dmsb,db2,db3,dlsb) (((unsigned long)B8(dmsb)<<24) + ((unsigned long)B8(db2)<<16) + ((unsigned long)B8(db3)<<8) + B8(dlsb))
B8(01110011)
B16(10011011,10011011)
B32(10011011,10011011,10011011,10011011)
Not my invention, I saw it on a forum a long time ago.