Is it possible to have clang-format align variable assignments in columns? For example:
int someInteger             = 42;
std::string someString      = "string";
const unsigned someUnsigned = 42;

#define SOME_INTEGER        42
#define SOME_STRING_LITERAL "string"
#define SOME_CONSTANT       42

enum Enum {
    ONE   = 1,
    TWO   = 2,
    THREE = 3,
    FOUR  = 4,
    FIVE  = 5,
    SIX   = 6,
    SEVEN = 7
};
is more readable than:
int someInteger = 42;
const unsigned someUnsigned = 42;
std::string someString = "string";
#define SOME_INTEGER 42
#define SOME_STRING_LITERAL "string"
#define SOME_CONSTANT 42
enum Enum {
    ONE = 1,
    TWO = 2,
    THREE = 3,
    FOUR = 4,
    FIVE = 5,
    SIX = 6,
    SEVEN = 7
};
I realize that it may not be practical for clang-format to always do this, but when code has already been manually formatted like the code above, it would be nice for clang-format to leave the formatting in place.
It looks like 3.7 supports something like this (I haven't tested it yet).
From the docs:
AlignConsecutiveAssignments (bool)
If true, aligns consecutive assignments.
This will align the assignment operators of consecutive lines. This will result in formattings like:

int aaaa = 12;
int b    = 23;
int ccc  = 23;
Clang-format does not have any option to do this.
If you want to tell clang-format to leave certain lines alone, you can make it do so with // clang-format off and // clang-format on comments.
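For example, a hand-aligned block wrapped in those comments survives a clang-format run untouched (a minimal sketch reusing the declarations from the question):

```cpp
#include <string>

// clang-format off
int            someInteger  = 42;
std::string    someString   = "string";
const unsigned someUnsigned = 42;
// clang-format on

int unaffected = 1;  // normal formatting resumes from here on
```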
I tested it using https://github.com/mattga/ClangFormat-Xcode/tree/clang_3.7, which is a branch of ClangFormat-Xcode supporting 3.7.
I could format an a = 9999; style list as I wanted with the option AlignConsecutiveAssignments: true, but #defines were not aligned. Is there any option to align them as well?
For macros: it looks like you will be able to accomplish this once clang 10 is released, by adding AlignConsecutiveMacros: true to your .clang-format:
https://reviews.llvm.org/D28462
You can use the option AlignConsecutiveMacros: true.
Reference: https://clang.llvm.org/docs/ClangFormatStyleOptions.html
Supported LLVM version: >= 10.0
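A minimal .clang-format sketch combining both alignment options discussed above (option names per the linked style-options page; AlignConsecutiveMacros requires clang-format 10 or later):

```yaml
# .clang-format
BasedOnStyle: LLVM
AlignConsecutiveAssignments: true
AlignConsecutiveMacros: true
```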
Trying to convert some C++ code into C, I'm working with binary data and need to use a C equivalent of this:
enum GssipFlags : uint16_t
{
    SPARE0 = 1,
    SPARE1 = 2 * SPARE0,
    SPARE2 = 2 * SPARE1,
    SPARE3 = 2 * SPARE2,
    REQ_MSG = 2 * SPARE3,
    DISCONNECT = 2 * REQ_MSG,
    CONNECT = 2 * DISCONNECT,
    INVALID_DATA = 2 * CONNECT,
    CMD_REJECT = 2 * INVALID_DATA,
    HANDSHAKE = 2 * CMD_REJECT,
    NAK_MSG = 2 * HANDSHAKE,
    ACK_MSG = 2 * NAK_MSG,
    ACK_REQ = 2 * ACK_MSG,
    RESYNC = 2 * ACK_REQ,
    MODE = 2 * RESYNC,
    READY = 2 * MODE
};

enum GssipMessageIDs : uint16_t
{
    CCCCCCCC = 1,
    RECEIVER_ID_MSG = 2,
    BUFFER_BOX_STATUS_REQUEST_MSG = 3,
    SETUP_DATA_5031 = 4,
    WARNING_MSG = 5,
    TIME_TRANSFER = 6
};

enum GssipWarningMsgIDs : uint16_t
{
    EXTERNAL_POWER_DISCONNECT = 17,
    SELF_TEST_OK = 8,
    AAAAA = 9,
    BBBBB = 10
};
Everything I've tried hasn't worked. The main aspect of this I need is for everything to be uint16_t.
You have one standard option and two potential options depending on your compiler and on what "I'm working with binary data and need to use a C equivalent" means (memory usage? speed? etc.):
This has already been commented: use structs with the type you are looking for:

typedef struct {
    uint16_t SPARE0;
    ...
} GssipFlags_t;

GssipFlags_t a = {
    .SPARE0 = 1
    ...
};
If you are trying to reduce the size of enums, take advantage of the compiler (if available) and use -fshort-enums.
Allocate to an enum type only as many bytes as it needs for the declared range of possible values. Specifically, the enum type is equivalent to the smallest integer type that has enough room.
__attribute__((packed)), in order to remove the padding added between members (which may make things slower due to the cost of accessing unaligned data).
If you don't mind about the size and your only concern is to compile C++11 code with a C compiler (which might produce the same output as adding -fshort-enums), just do:

enum GssipFlags {
    SPARE0 = 1
    ...
};
Options 2, 3 and 4 don't explicitly create members with uint16_t type, but if this is an XY problem, they provide different solutions depending on your real issue.
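If the real requirement is only that flag variables be exactly 16 bits wide, one more C-compatible sketch (the typedef name GssipFlags_t is borrowed from option 1 above; only the first flags are shown) keeps the names in a plain enum and stores them in a uint16_t:

```c
#include <stdint.h>

/* The enum provides the names; the typedef provides 16-bit storage.
   The remaining flags follow the same pattern. */
enum {
    SPARE0 = 1,
    SPARE1 = 2 * SPARE0,
    SPARE2 = 2 * SPARE1
};

typedef uint16_t GssipFlags_t; /* always exactly 16 bits wide */

GssipFlags_t flags = SPARE0 | SPARE2;
```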
I often use enums for bit flags, like the following:
enum EventType {
    NODE_ADDED = 1 << 0,
    NODE_DELETED = 1 << 1,
    LINK_ADDED = 1 << 2,
    LINK_DELETED = 1 << 3,
    IN_PIN_ADDED = 1 << 4,
    IN_PIN_DELETED = 1 << 5,
    IN_PIN_CHANGE = 1 << 6,
    OUT_PIN_ADDED = 1 << 7,
    OUT_PIN_DELETED = 1 << 8,
    OUT_PIN_CHANGE = 1 << 9,
    ALL = NODE_ADDED | NODE_DELETED | ...,
};
Is there a clean, less repetitive way to define an ALL flag that combines all other flags in an enum? The above works well for small enums, but let's say there are 30 flags in an enum; it gets tedious to do it this way. Does something like

ALL = -1

work (in general)?
Use something that'll always cover every other option, like:
ALL = 0xFFFFFFFF
Or as Swordfish commented, you can flip the bits of an unsigned integer literal:
ALL = ~0u
To answer your comment, you can explicitly tell the compiler what type you want your enum to have:
enum EventType : unsigned int
The root problem here is how many one-bits you need. That depends on the number of enumerators defined before it, so trying to define ALL inside the enum makes it a case of circular logic.
Instead, you have to define it outside the enum:
const auto ALL = (EventType) ~EventType{};
EventType{} has sufficient zeroes, and ~ turns it into an integral type with enough ones, so you need another cast to get back to EventType.
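A compilable sketch of that approach, using a shortened version of the question's enum (only three flags shown):

```cpp
enum EventType : unsigned {
    NODE_ADDED   = 1u << 0,
    NODE_DELETED = 1u << 1,
    LINK_ADDED   = 1u << 2
};

// ~EventType{} promotes the zero value to unsigned and flips every bit,
// so ALL has a 1 in every position, covering all current and future flags.
const auto ALL = static_cast<EventType>(~EventType{});
```

Note that ALL defined this way also sets bits no flag uses; that is harmless for masking, but ALL will not compare equal to the OR of just the defined flags.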
What is the preferred and best way in C++ to do this: split the letters of the alphabet into 7 groups, so I can later ask whether a char is in group 1, 3 or 4, etc.? I can of course think of several ways of doing this myself, but I want to know the standard approach and stick with it when doing this kind of stuff.
Group 0: AEIOUHWY
Group 1: BFPV
Group 2: CGJKQSXZ
Group 3: DT
Group 4: MN
Group 5: L
Group 6: R
best way in C++ to do this: Split the letters of the alphabet into 7 groups so I can later ask if a char is in group 1, 3 or 4 etc...?
The most efficient way to do the "split" itself is to have an array mapping letter/char to group number:

// Groups for A..Z, from the table in the question:
// A=0 B=1 C=2 D=3 E=0 F=1 G=2 H=0 I=0 J=2 K=2 L=5 M=4
// N=4 O=0 P=1 Q=2 R=6 S=2 T=3 U=0 V=1 W=0 X=2 Y=0 Z=2
const char lookup[] = { 0, 1, 2, 3, 0, 1, 2, 0, 0, 2, 2, 5, 4,
                        4, 0, 1, 2, 6, 2, 3, 0, 1, 0, 2, 0, 2 };
A switch/case statement is another reasonable choice - the compiler can decide itself whether to create an array implementation or some other approach.
It's unclear what use you plan to make of those 0-6 values, but an enum appears a reasonable encoding choice. That has the advantage of still supporting any use you might have for the specific numeric values (e.g. in < comparisons, streaming...) while being more human-readable and compiler-checked than "magic" numeric constants scattered throughout the code. Constant ints of any width are also likely to work fine, but won't have a unifying type.
Create a lookup table.
int lookup[26] = { 0, 1, 2, 3, 0, 1, 2, 0 /* .... whatever */ };

inline int getgroup(char c)
{
    return lookup[tolower(c) - 'a'];
}
call it this way:

char myc = 'M';
int grp = getgroup(myc);
Error checks omitted for brevity.
Of course, depending on what the 7 groups represent, you can make enums instead of using 0, 1, 2, etc.
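Putting the pieces together, a complete, compilable sketch of the lookup-table approach (the group numbering follows the question's table; the array name group_lookup is arbitrary):

```cpp
#include <cctype>

// Groups for A..Z, from the table in the question:
// A=0 B=1 C=2 D=3 E=0 F=1 G=2 H=0 I=0 J=2 K=2 L=5 M=4
// N=4 O=0 P=1 Q=2 R=6 S=2 T=3 U=0 V=1 W=0 X=2 Y=0 Z=2
const int group_lookup[26] = { 0, 1, 2, 3, 0, 1, 2, 0, 0, 2, 2, 5, 4,
                               4, 0, 1, 2, 6, 2, 3, 0, 1, 0, 2, 0, 2 };

inline int getgroup(char c)
{
    // std::tolower expects an unsigned char value; the cast avoids
    // undefined behavior for negative char values
    return group_lookup[std::tolower(static_cast<unsigned char>(c)) - 'a'];
}
```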
Given the small amount of data involved, I'd probably do it as a bit-wise lookup -- i.e., set up values:
cat1 = 1;
cat2 = 2;
cat3 = 4;
cat4 = 8;
cat5 = 16;
cat6 = 32;
cat7 = 64;
Then just create an array of 26 values, one for each letter in the alphabet, with each element containing the category value for that letter. When you want to classify a letter, you just look up categories[ch - 'A'].
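A hypothetical sketch of this bit-wise variant: each category is a power of two, so a membership test is a single AND, and one letter's entry could even combine several categories. Only the first few letters are filled in here; a real table would have all 26 entries.

```cpp
enum Category : unsigned {
    cat1 = 1,  cat2 = 2,  cat3 = 4,  cat4 = 8,
    cat5 = 16, cat6 = 32, cat7 = 64
};

const unsigned categories[26] = {
    cat1, // A
    cat2, // B
    cat3, // C
    cat4  // D (remaining entries are zero in this sketch)
};

bool inCategory(char ch, unsigned cat)
{
    return (categories[ch - 'A'] & cat) != 0;
}
```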
I am interested in parsing C header files (only structures and variable declarations) using Python in a recursive manner.
Here is an example of what I am looking for. Suppose the following:
typedef struct
{
    double value[3];
} vector3;

typedef struct
{
    unsigned int variable_a[4][2];
    vector3 variable_b[5];
} my_example;
Also, suppose there is a file that contains initialization values such as:
ANCHOR_STRUCT(my_example) =
{
    // variable_a
    { {1, 2}, {3, 4}, {5, 6}, {7, 8} },
    // variable_b
    { {1.0, 2.0, 3.0}, {4.0, 5.0, 6.0}, {7.0, 8.0, 9.0}, {10.0, 11.0, 12.0}, {13.0, 14.0, 15.0} }
};
I would like to be able to parse both of these files and be able to generate a report such as:
OUTPUT:
my_example.variable_a[0][0] = 1
my_example.variable_a[0][1] = 2
my_example.variable_a[1][0] = 3
my_example.variable_a[1][1] = 4
my_example.variable_a[2][0] = 5
my_example.variable_a[2][1] = 6
my_example.variable_a[3][0] = 7
my_example.variable_a[3][1] = 8
my_example.variable_b[0].value[0] = 1
my_example.variable_b[0].value[1] = 2
my_example.variable_b[0].value[2] = 3
my_example.variable_b[1].value[0] = 4
my_example.variable_b[1].value[1] = 5
my_example.variable_b[1].value[2] = 6
my_example.variable_b[2].value[0] = 7
my_example.variable_b[2].value[1] = 8
my_example.variable_b[2].value[2] = 9
my_example.variable_b[3].value[0] = 10
my_example.variable_b[3].value[1] = 11
my_example.variable_b[3].value[2] = 12
my_example.variable_b[4].value[0] = 13
my_example.variable_b[4].value[1] = 14
my_example.variable_b[4].value[2] = 15
I would like to be able to report this without running the code (only through parsing). Is there an existing Python tool that parses and prints this information? I'd also like to print out the data type.
It seems a bit complicated to parse the "{", ",", and "}" in the initialization file and match them with the structure's variables and children. Matching the values with the correct names seems difficult because the order is very important. I also assume recursion is needed for parent/child/grandchild variables.
Unless you restrict yourself to simple data types, this is going to get very complicated. For example, do you want to handle arbitrary data types such as nested classes?
You say you don't want to run the C sources, but what you are trying to do here is build your own C interpreter! Are you sure you want to reinvent the wheel? If yes...
The first thing you need to be able to do is parse the file. You can use a parser and lexical analyzer such as PLY. Once you have the parse tree, you can analyze what your variables are and what their intended values are.
For example I have:
int boo[8];
boo[1] = boo[3] = boo[7] = 4;
boo[0] = boo[2] = 7;
boo[4] = boo[5] = boo[6] = 15;
How should I write these as constant values? I saw a similar question but it didn't help me.
EDIT:
One more question: what if boo at indexes 0, 1, 3, 4, 5, 6, and 7 is constant but boo[2] is not? Is it possible to do that?
Is this what you are looking for?
const int boo[] = { 7, 4, 7, 4, 15, 15, 15, 4 };
Get a non-const pointer to one entry in the array like this:
int * foo = (int*)&boo[2];
Be aware, though, that writing through foo is undefined behavior, because the array itself was defined const.
One not so elegant solution may be:
const int boo[8] = {7,4,7,4,15,15,15,4};
Another solution may be:
int boo_[8];
boo_[1] = boo_[3] = boo_[7] = 4;
boo_[0] = boo_[2] = 7;
boo_[4] = boo_[5] = boo_[6] = 15;
const int * boo = boo_;
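Building on that second approach, a sketch of the follow-up requirement (const everywhere except index 2; the helper function update_slot2 is hypothetical): keep the array itself writable, hand the rest of the code only the const view boo, and let the owning code modify boo_[2] directly.

```cpp
// The mutable array stays writable for the owning code; other code only
// sees it through the const pointer `boo`.
int boo_[8] = { 7, 4, 7, 4, 15, 15, 15, 4 };
const int *boo = boo_;

void update_slot2(int v)
{
    boo_[2] = v;   // fine: boo_ itself is not const
    // boo[2] = v; // would not compile: boo points to const int
}
```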