How to make C++ linking take less memory

I am working on a university research project in C++ with a lot of templates which have further nested templates and so on. The project is about efficient index data structures for a specific area of research. You can imagine: An index structure has a lot of parameters to adjust, so we use the template parameters excessively. Of course, we want to test our indexes with different sets of parameters, so there are quite a lot of template instantiations.
The project is not that huge. Maybe 50k LOC. But still, linking takes 50 seconds and consumes over 7 GB of memory (!!!). I am on a 32 GB workstation, so everything is fine for me, but I often have bachelor's and master's students working on this project, and they usually work on laptops with 4 or 8 GB of RAM. These students have great trouble compiling the project. The resulting test binary (i.e. a binary that simply contains unit tests for the index structures) is 700 megabytes.
Most of that is symbols, because the nested templates produce huge names. If I run strip on the binary, it drops down to 8 megabytes.
So is there a way to reduce the RAM usage during linkage? And is there a way to have smaller symbols even with nested templates?
We compile using g++ 4.9 with -std=c++11 under Ubuntu 14.10.
Edit:
It really seems to be the nested templates. We have two test cases with really deeply nested templates. The two .o files for these tests make up almost 90% of the memory of the final binary. They result in method names that are over 3000 characters long. There is no way to avoid the nested templates here, as they form a "processing tree" of an example query. Is there any way to keep names short when using deeply nested templates?

GCC has a garbage collection scheme for the RAM it uses.
The parameters ggc-min-expand and ggc-min-heapsize are used to determine when GCC should clean up and dealloc its unused memory (their defaults are percentages of the total system memory).
You could try something like:
g++ --param ggc-min-expand=0 --param ggc-min-heapsize=8192
From the GCC manual:
ggc-min-expand
GCC uses a garbage collector to manage its own memory allocation. This parameter specifies the minimum percentage by which the garbage collector's heap should be allowed to expand between collections. Tuning this may improve compilation speed; it has no effect on code generation.
The default is 30% + 70% * (RAM/1GB) with an upper bound of 100% when RAM >= 1GB. If getrlimit is available, the notion of "RAM" is the smallest of actual RAM, RLIMIT_RSS, RLIMIT_DATA and RLIMIT_AS. If GCC is not able to calculate RAM on a particular platform, the lower bound of 30% is used. Setting this parameter and ggc-min-heapsize to zero causes a full collection to occur at every opportunity. This is extremely slow, but can be useful for debugging.
ggc-min-heapsize
Minimum size of the garbage collector's heap before it begins bothering to collect garbage. The first collection occurs after the heap expands by ggc-min-expand% beyond ggc-min-heapsize. Again, tuning this may improve compilation speed, and has no effect on code generation.
The default is RAM/8, with a lower bound of 4096 (four megabytes) and an upper bound of 131072 (128 megabytes). If getrlimit is available, the notion of "RAM" is the smallest of actual RAM, RLIMIT_RSS, RLIMIT_DATA and RLIMIT_AS. If GCC is not able to calculate RAM on a particular platform, the lower bound is used. Setting this parameter very large effectively disables garbage collection. Setting this parameter and ggc-min-expand to zero causes a full collection to occur at every opportunity.
Further details:
Compiling with GCC on low memory VPS

So is there a way to reduce the RAM usage during linkage? And is there a way to have smaller symbols even with nested templates?
Have you considered using the pimpl idiom in client code?
Consider a situation where you have this include chain:
A.h -> B.h -> C.h -> D.h (C includes D, B includes C, etc)
Suppose that A.h defines class AA, B.h defines class BB, and so on (with AA implemented in terms of BB, BB implemented in terms of CC, and so on).
If DD is a large template and used in the implementation of CC, the templated code will be compiled three times, for the compilation units A, B and C.
Now, consider what happens if instead of C.h including D.h, you have the following situation:
C.h forward-declares CCImpl, holds a CCImpl *pImpl member, and forwards all of its methods to pImpl (and does not include D.h).
C.cpp includes C.h and D.h, and implements CCImpl and CC.
Now, D.h will be included once (and compiled once, for C.cpp). A and B will only include C.h, with a CCImpl forward declaration. A.h, B.h and C.h no longer know that a template exists.
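A minimal sketch of the idea, shown in a single snippet for brevity (the class and file names mirror the hypothetical A/B/C/D chain above and are purely illustrative):
// ---- C.h : only a forward declaration, no heavy template headers ----
class CCImpl;                       // defined in C.cpp

class CC
{
public:
    CC();
    ~CC();
    void doWork();                  // forwards to pImpl->doWork()
private:
    CCImpl* pImpl;
};

// ---- C.cpp : the only translation unit that sees the expensive templates ----
#include <vector>                   // stand-in for the heavy "D.h" template header

template <typename T, int N>
struct DD                           // stand-in for the large template DD
{
    std::vector<T> data = std::vector<T>(N);
    void compute() {}
};

class CCImpl
{
public:
    void doWork() { d.compute(); }
private:
    DD<double, 64> d;               // instantiated exactly once, in this .cpp
};

CC::CC() : pImpl(new CCImpl) {}
CC::~CC() { delete pImpl; }
void CC::doWork() { pImpl->doWork(); }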

Use inheritance judiciously. You may have a class Foo<1,4,8,1,9,int, std::string> as a base class for class Bar, and then the object file will mention just Bar.
Note that typedef does not introduce names for linking purposes.
[edit]
And to address a performance concern from another comment, an empty derived class adds no overhead on common compilers at normal optimization levels (and often there's no overhead even in debug builds)
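An illustrative sketch of the technique (all names invented for the example): the long template spelling appears only where the derived class is defined, and symbols that refer to the type - functions taking a Bar, Bar's own members, Bar's debug info - use the short name. Note that member functions inherited from the template base keep their long Foo<...>:: mangled names.
#include <string>

template <int A, int B, int C, int D, int E, typename T, typename U>
struct Foo
{
    T t{};
    U u{};
};

// typedef Foo<1,4,8,1,9,int,std::string> Bar;    // a typedef does NOT shorten mangled names
struct Bar : Foo<1, 4, 8, 1, 9, int, std::string> // an empty derived class does
{
};

void process(const Bar&) {}   // mangled in terms of Bar, not the full Foo<...> spelling

int main()
{
    Bar b;
    process(b);
    return 0;
}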

Related

How to ensure certain struct layout across compilations?

The C++ standard says nothing about packing and padding of structs, because it is implementation defined.
If it is implementation defined, then, for example, why is it safe to pass a struct to a DLL, if that DLL could have been compiled with a different compiler, which could use a different method of struct padding?
Is the struct padding method enforced by the OS's ABI (for example, will the padding be the same on all Windows platforms)?
Or, is there a standard method of padding when compiling for a PC (x64 or x86_64 systems) that is used in every modern compiler?
If there is nothing that can guarantee the layout of variables, is it safe to assume that each basic type in C++ (char, all numeric types and pointers) must be aligned to an address that is a multiple of its size, and that padding inside a struct can therefore be done by hand without performance problems or UB?
From what I have checked, g++ lays out structs by inserting the minimum amount of padding needed to align the next member.
For example:
struct foo
{
    char a;
    // char _padding1[3]; <- inserted by compiler
    uint32_t b;
};
There are 3 bytes of padding after a because that is the minimum amount that will give us a suitably aligned address for b.
Can we take for granted that compilers will do it that way? Or can we force this kind of padding by hand without UB or performance issues?
By hand, I mean:
#pragma pack(1)
struct foo
{
    char a;
    char _padding1[3]; // <- manually adding padding bytes
    uint32_t b;
};
#pragma pack()
Just to be clear: I am asking about the behavior of compilers only on PC platforms: Windows, Linux distros, and maybe macOS.
Sorry if my question falls into the category of "you dig into this too much". I just couldn't find a satisfying answer on the Internet. Some people say that it is not guaranteed. Others say that compiling with different compilers on systems that use the same ABI guarantees that the same struct will have the same layout. Others show how to reduce struct padding, assuming that compilers pack structs the way I described above (with the minimum padding required to align members).
If it is implementation defined, then for example, why it is safe to pass struct to dll
Because the dll and the caller follow the same Application binary interface (ABI) that defines the layout.
By the way, DLLs are a language extension and not part of standard C++.
if this dll could have been compiled with different compiler, which could have different method for struct padding?
If the library and the dependent don't follow an intercompatible ABI, then they cannot work together.
Is struct padding method enforced by the OS's ABI
Yes, class layout (structs are classes) is defined by the ABI.
For example padding will be the same on all Windows platforms
Not quite, since Windows on ARM has a different ABI for example. But within the same CPU architecture, the layout would be the same in Windows.
Or is there standard method for padding when compiling for PC (x64 or x86_64 systems) that is used in every modern compiler?
No, there is no universal class layout followed by every OS, even within the x86_64 architecture.
From what I checked, g++ compiles structs in such way, that it inserts minimum amount of padding, just to ensure alignment of next variable.
All objects in C++ must be aligned as per the alignment requirement of the type of the object. This guarantee isn't compiler specific. However, alignment requirements of types - and even the sizes of types - vary across different ABIs.
Bonus info: compilers have language extensions that remove this guarantee.
There are 3 bytes of padding after a because it is minimum amount that will give us suitably aligned address for b. Can we take for granted that compilers will do this that way?
In general, no. On some systems alignof(std::uint32_t) == 1, in which case there wouldn't be any need for padding.
Within a single ABI, you can take for granted that the layout is the same, but across multiple systems - which might not follow the same ABI - you cannot take it for granted.
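If you do rely on a particular layout (such as the foo struct from the question), a compile-time check along these lines - my own sketch, not part of the original answer - at least turns a mismatched ABI into a build error instead of silent corruption:
#include <cstddef>   // offsetof
#include <cstdint>

struct foo
{
    char a;
    // 3 bytes of padding expected here on common x86_64 ABIs
    std::uint32_t b;
};

static_assert(offsetof(foo, b) == 4, "unexpected padding before b");
static_assert(sizeof(foo) == 8, "unexpected total size of foo");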
When dealing with binary layout across systems (for example, when reading from a file or network), the standard compliant way is to treat the data as an array of bytes [1], and to copy each sequence of bytes [2] from pre-determined offsets onto fixed-width [3] fundamental objects (not classes whose layout may differ); a short sketch follows the footnotes below. In practice, you don't need to care about sign representation, although that used to be a problem historically.
If the optimiser does its job, there ideally shouldn't be any performance penalty if the layout of input data matches the native layout. In case it doesn't match, then there may be a cost (compared to a matching layout) that cannot be optimised away.
[1] This isn't sufficient when the byte size differs across systems, but you don't need to worry about that since you care about x86_64 only.
[2] In order to support systems with varying byte endianness, you must interpret the bytes in order of their significance rather than memory order, but you don't need to worry about that since you care about x86_64 only.
[3] I.e. not int, short, long etc., but rather std::int32_t etc.
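A minimal sketch of that byte-oriented approach (field names and offsets are hypothetical, and the code assumes a little-endian source, which is fine if you only care about x86_64):
#include <cstdint>
#include <cstring>

struct Parsed
{
    std::uint32_t id;
    std::uint16_t flags;
};

// buf points to at least 6 bytes read from a file or socket:
// 4 bytes of id followed by 2 bytes of flags, at fixed offsets.
Parsed parse(const unsigned char* buf)
{
    Parsed p;
    std::memcpy(&p.id, buf + 0, sizeof p.id);       // copy bytes into fixed-width objects
    std::memcpy(&p.flags, buf + 4, sizeof p.flags); // instead of casting buf to a struct pointer
    return p;
}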
The C and C++ standards were written to describe existing languages. In situations where 99+% of implementations would do things a certain way, and it was obvious that implementations should do things that way absent a compelling reason for doing otherwise, the standards would generally leave open the possibility of implementations doing something unusual.
Consider, for example, given something like:
struct foo {int i; char a,b[4],c,d,e;}; // Assume sizeof (int) is 4
struct foo myFoo;
On most platforms, making foo a three-word type which contains all of the individual bytes packed together may be more efficient than doing anything else. On the other hand, on a platform that uses word-addressed storage, but includes instructions to load or store bytes at a specified byte offset from a specified word address, word-aligning the start of b may allow a construct like myFoo.b[i] to be processed by directly using the value of i as an offset onto the word-aligned address of myFoo.b.
The standards were designed to let the people writing compilers for such platforms weigh the pros and cons of following normal practice versus deviating from it to better fit the target architecture.
Machines that use word addresses but allow byte-based loads and stores are of course exceptionally rare, and very little code that isn't deliberately written for such machines would gain any value from being compatible with them.
The committees weren't willing to say that such machines should be viewed as archaic and not worth supporting, but that doesn't mean they didn't expect and intend that programs written for commonplace implementations could exploit aspects of behavior that were shared by all commonplace implementations, even if not by some obscure ones.

When should I use CUDA's built-in warpSize, as opposed to my own proper constant?

nvcc device code has access to a built-in value, warpSize, which is set to the warp size of the device executing the kernel (i.e. 32 for the foreseeable future). Usually you can't tell it apart from a constant - but if you try to declare an array of length warpSize you get a complaint about it being non-const... (with CUDA 7.5)
So, at least for that purpose, you are motivated to have something like:
enum : unsigned int { warp_size = 32 };
somewhere in your headers. But now, which should I prefer, and when: warpSize or warp_size?
Edit: warpSize is apparently a compile-time constant in PTX. Still, the question stands.
Let's get a couple of points straight. The warp size isn't a compile-time constant and shouldn't be treated as one. It is an architecture-specific runtime immediate constant (and its value just happens to be 32 for all architectures to date). Once upon a time, the old Open64 compiler did emit a constant into PTX; however, that changed at least 6 years ago, if my memory doesn't fail me.
The value is available:
In CUDA C via warpSize, where it is not a compile-time constant (the PTX WARP_SZ variable is emitted by the compiler in such cases).
In PTX assembler via WARP_SZ, where it is a runtime immediate constant.
From the runtime API as a device property.
Don't declare your own constant for the warp size; that is just asking for trouble. The normal use case for an in-kernel array dimensioned to some multiple of the warp size is dynamically allocated shared memory, and you can read the warp size from the host API at runtime to size it. If you have a statically declared in-kernel array that you need to dimension from the warp size, use templates and select the correct instance at runtime. The latter might seem like unnecessary theatre, but it is the right thing to do for a use case that almost never arises in practice. The choice is yours.
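A rough sketch of that template approach (the kernel, its body and the launch configuration are invented for illustration; only the dispatch pattern matters):
#include <cuda_runtime.h>

template <int WarpSize>
__global__ void scale_kernel(float* data, int n)
{
    __shared__ float buf[WarpSize];        // statically dimensioned from the template parameter
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (threadIdx.x < WarpSize)
        buf[threadIdx.x] = 2.0f;
    __syncthreads();
    if (tid < n)
        data[tid] *= buf[threadIdx.x % WarpSize];
}

void launch_scale(float* data, int n)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);     // read the warp size from the host API
    dim3 block(256), grid((n + 255) / 256);
    switch (prop.warpSize)
    {
        case 32: scale_kernel<32><<<grid, block>>>(data, n); break;
        default: /* add cases here if a future architecture changes the warp size */ break;
    }
}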
Contrary to talonmies's answer, I find a warp_size constant perfectly acceptable. The only reason to use warpSize is to make the code forward-compatible with possible future hardware that may have warps of a different size. However, when such hardware arrives, the kernel code will most likely require other alterations as well in order to remain efficient. CUDA is not a hardware-agnostic language - on the contrary, it is still quite a low-level programming language. Production code uses various intrinsic functions that come and go over time (e.g. __umul24).
The day we get a different warp size (e.g. 64) many things will change:
warpSize will obviously have to be adjusted.
Many warp-level intrinsics will need their signatures adjusted, or new versions produced, e.g. int __ballot - and while int does not need to be 32-bit, it most commonly is!
Iterative operations, such as warp-level reductions, will need their number of iterations adjusted. I have never seen anyone writing:
for (int i = 0; i < log2(warpSize); ++i) ...
as that would be overly complex in what is usually a time-critical piece of code.
Computing warpIdx and laneIdx from threadIdx would need to be adjusted. Currently, the most typical code I see for it is:
warpIdx = threadIdx.x/32;
laneIdx = threadIdx.x%32;
which reduce to simple right-shift and mask operations. However, if you replace 32 with warpSize, this suddenly becomes quite an expensive operation!
At the same time, using warpSize in the code prevents optimization, since formally it is not a compile-time known constant.
Also, if the amount of shared memory depends on the warpSize, this forces you to use dynamically allocated shared memory (as per talonmies's answer). However, the syntax for that is inconvenient, especially when you have several arrays - it forces you to do pointer arithmetic yourself and manually compute the sum of all memory usage.
Using templates for that warp_size is a partial solution, but adds a layer of syntactic complexity needed at every function call:
deviceFunction<warp_size>(params)
This obfuscates the code. The more boilerplate, the harder the code is to read and maintain.
My suggestion would be to have a single header that controls all the model-specific constants, e.g.
#if __CUDA_ARCH__ <= 600
//all devices of compute capability <= 6.0
static const int warp_size = 32;
#endif
Now the rest of your CUDA code can use it without any syntactic overhead. The day you decide to add support for a newer architecture, you just need to alter this one piece of code.

g++ compiler flag to minimize binary size

I have an Arduino Uno R3. I'm making logical objects for each of my sensors using C++. The Arduino has very limited on-board memory (32 KB*), and, on average, my compiled objects are coming out at around 6 KB* each.
I am already using the smallest possible data types required, in an attempt to minimize my memory footprint. Is there a compiler flag to minimize the size of the binary, or do I need to use shorter variable and function names, less functions, etc. to minimize my code base?
Also, any other tips or words of advice for minimizing binary size would be appreciated.
*It may not be measured in KB (as I don't have it sitting in front of me), but 1 object is approximately 1/5 of my total memory size, which is prompting my concern.
There are lots of techniques to reduce binary size in addition to what us2012 and others mentioned in the comments, summing them up with some points of my own:
Use -Os to make gcc/g++ optimize for size.
Use -ffunction-sections -fdata-sections to separate each function or data into distinct sections within the translation unit. Combine it with the linker option -Wl,--gc-sections to get rid of any unreferenced sections.
Run strip with at least the following options: -s -R .comment -R .gnu.version. It can be combined with --strip-unneeded to remove all symbols that are not necessary for relocation processing.
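Putting the first three points together, an illustrative build might look like this (file names and paths are mine, not from the question; adjust the toolchain prefix for your target):
g++ -Os -ffunction-sections -fdata-sections main.cpp -o app -Wl,--gc-sections
strip -s -R .comment -R .gnu.version --strip-unneeded app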
If your code does not use C++ exception handling, you can save a lot of space (up to 30 KB after all the optimization steps mentioned by Tuxdude).
To do that, provide the following flag:
-fno-exceptions
But even if you don't use exceptions, the exception-handling machinery can still be pulled in!
Check the following steps:
Don't use new or delete. If you really need them, replace them with malloc/free wrappers. For an example, search for "tinynew.cpp".
Provide a function for pure virtual calls, e.g. extern "C" void __cxa_pure_virtual() { while(1); }
Override __gnu_cxx::__verbose_terminate_handler(). It handles unhandled exceptions and does name demangling, which is quite large! (e.g. d_print_comp.part.10 at 9.5 KB or d_type at 1.8 KB). A combined sketch of these stubs follows below.
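A hedged sketch of what those stubs might look like, assembled from the steps above (exact needs depend on your toolchain, so treat this as a starting point rather than a drop-in):
#include <stdlib.h>   // malloc, free, size_t

// Route new/delete through malloc/free so the default throwing operator new
// (and the exception machinery behind it) is never referenced.
void* operator new(size_t size) { return malloc(size); }
void  operator delete(void* p) throw() { free(p); }

// Called on a pure virtual call; the default implementation pulls in more code.
extern "C" void __cxa_pure_virtual() { while (1) {} }

// Replace the default verbose terminate handler, which drags in the demangler.
namespace __gnu_cxx
{
    void __verbose_terminate_handler() { while (1) {} }
}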

How much is 32 kB of compiled code

I am planning to use an Arduino programmable board. Those have quite limited flash memories ranging between 16 and 128 kB to store compiled C or C++ code.
Are there ways to estimate how much (standard) code it will represent?
I suppose this is very vague, but I'm only looking for an order of magnitude.
The output of the size command is a good starting place, but does not give you all of the information you need.
$ avr-size program.elf
text data bss dec hex filename
The size of your image is usually a little bit more than the sum of the text and the data sections. The bss section is essentially compressed because it is all 0s. There may be other sections which are relevant which aren't listed by size.
If your build system is set up like ones that I've used before for AVR microcontrollers, then you will end up with an *.elf file as well as a *.bin file, and possibly a *.hex file. The *.bin file is the actual image that would be stored in the program flash of the processor, so you can examine its size to determine how your program is growing as you make edits to it. The *.bin file is extracted from the *.elf file with the objcopy command and some flags which I can't remember right now.
If you want to guess-timate how much compiled code your C or C++ source will produce, that is a lot more difficult. I have observed a 10x blowup in a function when I tried to use a uint64_t rather than a uint32_t when all I was doing was incrementing it (this was about 5 times more code than I thought it would be). This was mostly down to gcc's AVR optimizations not being the best, but smaller changes in code size can creep in from seemingly innocent code.
This will likely be amplified with the use of C++, which tends to hide more things that turn into code than C does. Chief among the things C++ hides are destructor calls and lots of pointer dereferencing which has to do with the this pointer in objects as well as a secret pointer many objects have to their virtual function table and class static variables.
On AVR all of this pointer stuff is likely to really add up because pointers are twice as big as registers and take multiple instructions to load. Also AVR has only a few register pairs that can be used as pointers, which results in lots of moving things into and out of those registers.
Some tips for small programs on AVR:
Use uint8_t and int8_t instead of int whenever you can. You could also use uint_fast8_t and int_fast8_t if you want your code to be portable. This can lead to many operations taking up only half as much code, because int is two bytes.
Be very aware of things like string and struct constants and literals and how/where they are stored.
If you're not scared of it, read the AVR assembly manual. You can get an idea of the types of instructions, and from that the type of C code that easily maps to those instructions. Use that kind of C code.
You can't really tell from that. The length of the uncompiled code has little to do with the length of the compiled code. For example:
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
#include <iterator>   // for std::ostream_iterator

int main()
{
    std::vector<std::string> strings;
    strings.push_back("Hello");
    strings.push_back("World");
    std::sort(strings.begin(), strings.end());
    std::copy(strings.begin(), strings.end(), std::ostream_iterator<std::string>(std::cout, ""));
}
vs
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>

int main()
{
    std::vector<std::string> strings;
    strings.push_back("Hello");
    strings.push_back("World");
    for ( int idx = 0; idx < strings.size(); idx++ )
        std::cout << strings[idx];
}
Both are essentially the same number of lines and produce the same output, but the first example involves an instantiation of std::sort, which is probably an order of magnitude more code than the rest of the code here.
If you absolutely need to count the number of bytes used in the program, use assembler.
Download the Arduino IDE and 'verify' some of your existing code, or look at the sample sketches. It will tell you how many bytes that code is, which will give you an idea of how much more you can fit into a given device. Picking a couple of the examples at random, the web server example is 5816 bytes and the LCD hello world is 2616. Both use external libraries.
Try creating a simplified version of your app, focusing on the most valuable feature first, then start adding the 'nice (and cool) to have' stuff. Keep an eye on the byte usage shown in the Arduino IDE when you verify your code.
As a rough indication, my first app (an LED flasher controlled by a push button) requires 1092 bytes. That's roughly 1 KB out of 32 KB. Pretty small footprint for C++ code!
What worries me most is the limited amount of RAM (1 KB). If the CPU stack takes some of it, then there isn't much left for creating any data structures.
I've only had my Arduino for 48 hrs, so there is still a lot to learn to use it effectively ;-) But it's a lot of fun to use :).
It's quite a bit for a reasonably complex piece of software, but you will start bumping into the limit if you want it to have a lot of different functionality. Also, if you want to store quite a lot of static strings and data, it can eat into that quite quickly. But 32 KB is a decent amount for embedded applications. It tends to be RAM that you have problems with first!
Also, quite often the C++ compilers for embedded systems are a lot worse than the C compilers. That is, they are nowhere near as good as C++ compilers for the common desktop OSes (in terms of producing efficient machine code for the target platform).
On a Linux system you can do some experiments with statically compiled example programs, e.g.:
$ size `which busybox`
text data bss dec hex filename
1830468 4448 25650 1860566 1c63d6 /bin/busybox
The sizes are given in bytes. This output is independent of the executable file format, since it reports the sizes of the different sections inside the file. The text section contains the machine code and const data. The data section contains the data for static initialization of variables. The bss size is the size of uninitialized data - of course, uninitialized data does not need to be stored in the executable file.
Well, busybox contains a lot of functionality (like all common shell commands, a shell etc.).
If you link your own examples with gcc -static, keep in mind that the libc you use may dramatically increase the program size, and that using an embedded libc may be much more space efficient.
To test that, you can check out diet libc or uClibc and link against it. Actually, busybox is usually linked against uClibc.
Note that the sizes you get this way give you only an order of magnitude. For example, your workstation probably uses a different CPU architecture than the Arduino board, and the machine code of different architectures may differ, more or less, in size (because of operand sizes, available instructions, opcode encoding and so on).
To go on with rough order-of-magnitude reasoning: busybox contains roughly 309 tools (including an FTP daemon and such), i.e. the average code size of a busybox tool is roughly 5 KB.

What makes EXE's grow in size?

My executable was 364KB in size. It did not use a Vector2D class so I implemented one with overloaded operators.
I changed most of my code from
point.x = point2.x;
point.y = point2.y;
to
point = point2;
This resulted in removing nearly 1/3 of my lines of code and yet my exe is still 364KB. What exactly causes it to grow in size?
The compiler probably optimised your operator overload by inlining it, so it effectively compiles to the same code as your original example. You may have cut down a lot of lines of source code by overloading the assignment operator, but when the compiler inlines, it takes the contents of your assignment operator and sticks it inline at the calling point.
Inlining is one of the ways an executable can grow in size. It's not the only way, as you can see in other answers.
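To make the inlining point concrete, here is a hypothetical Vector2D along the lines the question describes; with optimization enabled, the overloaded assignment below typically compiles to exactly the same two member copies as the original hand-written code:
struct Vector2D
{
    float x, y;

    Vector2D& operator=(const Vector2D& other)
    {
        x = other.x;   // after inlining, these two assignments are all that remains
        y = other.y;   // at each call site - same machine code, fewer source lines
        return *this;
    }
};

int main()
{
    Vector2D point = {0.0f, 0.0f};
    Vector2D point2 = {1.0f, 2.0f};
    point = point2;    // one line of source, same generated code as the member-wise copy
    return 0;
}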
What makes EXE’s grow in size?
External libraries, especially static libraries and debugging information, total size of your code, runtime library. More code, more libraries == larger exe.
To reduce the size of the exe, you need to process it with the GNU strip utility, get rid of all static libraries, get rid of the C/C++ runtime libraries, disable all runtime checks and turn on compiler size optimizations. Working without the CRT is a pain, but it is possible. There is also wcrt (an alternative C runtime) library created for making small applications (by the way, it hasn't been updated/maintained during the last 5 years).
The smallest exe that I was able to create with the MSVC compiler is somewhere around 16 kilobytes. This was a Windows application that displayed a single window and required msvcrt.dll to run. I've modified it a bit and turned it into a practical joke that wipes out the picture on the monitor.
For impressive exe size reduction techniques, you may want to look at .kkrieger. It is a 3D first person shooter, 96 kilobytes total. The game has a large and detailed level, supports shaders, real-time shadows, etc., i.e. comparable with Sauerbraten (see screenshots). The smallest working Windows application (a 3D demo with music) I ever encountered was 4 kilobytes big, and used compression techniques and (probably) undocumented features (i.e. the fact that a *.com executable could unpack and launch a win32 exe on Windows XP).
In most cases, the size of the *.exe shouldn't really bother you (I haven't seen a diskette for a few years), as long as it is reasonable (below 100 megabytes). For an example of an "unreasonable" file size, see a debug build of Qt 4 for MinGW.
This resulted in removing nearly 1/3 of my lines of code and yet my exe is still 364KB.
Most likely it is caused by the external libraries used by the compiler, runtime checks, etc.
Also, this is an assignment operation. If you aren't using custom types for x (with a copy constructor), the "copy" operation is very likely to compile to a small number of instructions - i.e. removing 1/3 of your lines doesn't guarantee that your compiled code will be 1/3 smaller.
If you want to see how much impact your modification made, you could ask the compiler to produce an asm listing for both versions of the program, then compare the results (manually or with diff). Or you could disassemble and compare both versions of the executable. But I'm certain that using GNU strip or removing extra libraries will have more effect than removing assignment operators.
What type is point? If it's two floats, then the compiler will implicitly do a member-by-member copy, which is the same thing you did before.
EDIT: Apparently some people in today's crowd didn't understand this answer and compensated by downvoting. So let me elaborate:
Lines of code have NO relation to the executable size. The source code tells the compiler what assembly code to create. One line of code can cause hundreds if not thousands of assembly instructions. This is particularly true in C++, where one line can cause implicit object construction, destruction, copying, etc.
In this particular case, I suppose that "point" is a class with two floats, so using the assignment operator will perform a member-by-member copy, i.e. it takes every member individually and copies it. Which is exactly the same thing he did before, except that now it's done implicitly. The resulting assembly (and thus executable size) is the same.
Executables are most often sized in 'pages' rather than discrete bytes.
I think this is a good example of why one shouldn't worry too much about code being verbose if you have a good optimizing compiler. Instead, always code clearly so that fellow programmers can read your code, and leave the optimization to the compiler.
Some links to look into
http://www2.research.att.com/~bs/bs_faq.html#Hello-world
GCC C++ "Hello World" program -> .exe is 500kb big when compiled on Windows. How can I reduce its size?
http://www.catch22.net/tuts/minexe
As for Windows, lots of compiler options in VC++ may be activated, like RTTI, exception handling, buffer checking, etc., that add more behind the scenes to the overall size.
When you compile a C or C++ program into an executable, the compiler translates your code into machine code, applying optimizations as it sees fit.
But simply: more code = more machine code to generate = more size to the executable.
Also, check whether you have a lot of static/global objects. They substantially increase your exe size if they are not zero-initialized.
For example:
int temp[100] = {0};
int main()
{
}
The size of the above program is 9140 bytes on my Linux machine.
If I initialize the first element of temp to 5 (so the array is no longer all-zero), the size shoots up by around 400 bytes. The size of the program below is 9588 bytes on my Linux machine.
int temp[100] = {5};
int main()
{
}
This is because zero-initialized global objects go into the .bss segment, which is zero-filled at once during program startup, whereas the contents of non-zero-initialized objects must be embedded in the exe itself.