Unexplained C++ default int values - c++

I've been refactoring some code and I noticed some wonky behavior involving an uninitialized int array:
int arr[ARRAY_SIZE];
I set a breakpoint and it seems all values default to -858993460. Is there something special about this value? Any ideas why they don't default to 0?

-858993460 is 0xCCCCCCCC in hex, which Visual Studio fills uninitialized stack memory with in debug builds. This is to make it easier for you to notice that you forgot to initialize the variable. In release mode it could be anything.
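As a quick sanity check, here is a minimal sketch (assuming a 32-bit two's-complement int, which is what Visual Studio uses) showing that the debug fill pattern and the number in the debugger are the same bits:
#include <cstdint>
#include <iostream>
int main() {
    std::uint32_t pattern = 0xCCCCCCCCu;                        // MSVC's debug fill byte, repeated 4 times
    std::int32_t  asInt   = static_cast<std::int32_t>(pattern); // the same bits viewed as a signed int
    std::cout << asInt << "\n";                                  // prints -858993460
}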
I'm actually unsure why mouseBufferX isn't an array of 10 elements (assuming this compiles and it really isn't 10 elements). But I am pretty sure that the standard says statics are initialized before non-statics. Anyway, I personally dislike using defines and consts to declare integer constants; use an enum instead.
C++ doesn't default anything to 0, so you MUST assign a value before using a variable. Variables with static storage duration are the exception to this rule: they are zero-initialized by default. But I'll note that the use of static variables is discouraged, and some languages (such as C#) do not allow you to declare static variables in a function.
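A minimal sketch of that difference (the names are illustrative; the uninitialized local is only there to make the point and must never be read):
#include <iostream>
int g_counter;            // static storage duration: zero-initialized before main runs
int main() {
    static int s_counter; // also static storage duration: zero-initialized
    int local;            // automatic storage: indeterminate value
    std::cout << g_counter << " " << s_counter << "\n"; // prints "0 0"
    // std::cout << local; // undefined behaviour: never read an uninitialized local
}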

There is no "default" in C++ -- variables and array elements, until initialized by your code, will contain whatever was in memory last.
In other words, when these variables are declared, a space in memory is reserved for their use. The bits in memory left over from the last time that memory was used are still there, causing your variables to initially appear as if they're filled with "garbage". The reason that memory isn't always zeroed out right away is speed -- it takes time to zero out memory.
You can initialise your array using a loop, or use this trick (at the risk of being much less readable):
int mouseBufferX[mouseBufferSize] = { 0 };
This works because when you use a list of values to initialize the array and there are fewer values in the list than elements in the array, the remaining elements are always initialized to 0.
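A small sketch demonstrating that rule (the buffer name and size are just placeholders):
#include <iostream>
int main() {
    const int mouseBufferSize = 10;
    int mouseBufferX[mouseBufferSize] = { 0 }; // first element from the list, the rest are zeroed
    for (int i = 0; i < mouseBufferSize; ++i)
        std::cout << mouseBufferX[i] << " ";   // prints ten zeros
    std::cout << "\n";
}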

You need to explicitly set the values for the array - they do not "default" to anything.
Try:
memset(mouseBufferX,0,sizeof(mouseBufferX));
//or
int mouseBufferX[mouseBufferSize] = {0};
//and, in C++, an empty initializer list also zero-initializes every element:
int mouseBufferX[mouseBufferSize] = {};

C++ doesn't initialize variables. When a chunk of memory is allocated for the variable, that chunk of memory is left as-is and contains the value it did when it was allocated.
However, some compilers (like g++, I believe) will automatically initialize things to 0 - but don't depend on this behaviour as it will make your code less portable.
To get an array to have all the values in it initialized to a value, you can do this:
int mouseBufferX[mouseBufferSize] = {0};
int mouseBufferY[mouseBufferSize] = {0};
You can provide up to mouseBufferSize values in the initialization list; the listed elements get those values, and any remaining elements are zero-initialized.

Sometimes in a debug build, the compiler will initialize memory to a known value to make it easier to detect bugs. If your compiler does this, it should be documented somewhere.
This is entirely at the discretion of the compiler, because the standard doesn't guarantee any initialization whatsoever in this case.

That's a really dangerous assumption you're making.
On some compilers you might get a constant fill value in debug builds (edit: such as 0xCCCCCCCC, thx acidezombie24); otherwise you'll just get random numbers.
ALWAYS initialize your variables
something like
static const int mouseBufferSize = 10;
int mouseBufferX[mouseBufferSize];
memset(mouseBufferX,0,sizeof(mouseBufferX));
int mouseBufferY[mouseBufferSize];
memset(mouseBufferY,0,sizeof(mouseBufferY));

Related

Can my program use unallocated memory on the free store without my knowledge?

When defining a variable without initialization, on either the stack or the free store, it usually has a garbage value, since assigning it some default value (e.g. 0) would just be a waste of time.
Examples:
int foo;//uninitialized foo may contain any value
int* fooptr=new int;//uninitialized *fooptr may contain any value
This however doesn't answer the question of where the garbage values come from.
The usual explanation is that new or malloc (or whatever you use to get dynamically allocated memory) don't initialize the memory to some value, as I stated above, and the garbage values are just leftovers from whatever program used the same memory prior.
So I put this explanation to the test:
#include <iostream>
int main()
{
    int* ptr = new int[10]{0}; // allocate memory and initialize everything to 0
    for (int i = 0; i < 10; ++i)
    {
        std::cout << *(ptr + i) << " " << ptr + i << std::endl;
    }
    delete[] ptr;
    ptr = new int[10]; // allocate memory without initialization
    for (int i = 0; i < 10; ++i)
    {
        std::cout << *(ptr + i) << " " << ptr + i << std::endl;
    }
    delete[] ptr;
}
Output:
0 0x1291a60
0 0x1291a64
0 0x1291a68
0 0x1291a6c
0 0x1291a70
0 0x1291a74
0 0x1291a78
0 0x1291a7c
0 0x1291a80
0 0x1291a84
19471096 0x1291a60
19464384 0x1291a64
0 0x1291a68
0 0x1291a6c
0 0x1291a70
0 0x1291a74
0 0x1291a78
0 0x1291a7c
0 0x1291a80
0 0x1291a84
In this code sample I allocated memory for 10 ints twice. The first time I do so I initialize every value to 0. I use delete[] on the pointer and proceed to immediately allocate the memory for 10 ints again but this time without initialization.
Yes, I know that the results of using an uninitialized variable are undefined, but I want to focus on the garbage values for now.
The output shows that the first two ints now contain garbage values in the same memory location.
If we take the explanation for garbage values into consideration this leaves me only one conclusion: Between deleting the pointer and allocating the memory again something must have tampered with the values in those memory locations.
But isn't the free store reserved for new and delete?
What could have tampered those values?
Edit:
I removed the std::cout as a comment pointed it out.
I use the compiler Eclipse 2022-06 comes with (MinGW GCC) using default flags on Windows 10.
One of the things you need to understand about heap allocations is that there is always a small control block allocated alongside the buffer when you do a new. The values in the control block tell the allocator, among other things, how much space is being freed when delete is called.
When a block is deleted, the first part of the buffer is often overwritten by a control block. If you look at the two values you see from your program as hex values, you will note they appear to be addresses in the same general memory space. The first looks to be a pointer to the next allocated location, while the second appears to be a pointer to the start of the heap block.
Edit: One of the main reasons to add this kind of control block to a recently deallocated buffer is that it supports memory coalescence. That two-int signature effectively records how much memory can be reclaimed if that space is reused, and it signals that the block is free by pointing to the start of the frame.
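If you want to see for yourself that those two leftover ints look like addresses, you can print them in hex. A minimal sketch, with the caveat that it reads indeterminate values and is therefore only a debugging experiment, not well-defined C++:
#include <iostream>
int main() {
    int* ptr = new int[10]{0};
    delete[] ptr;
    ptr = new int[10];          // the same block is typically handed back
    // Reading these is undefined behaviour; we only do it to inspect the leftovers.
    std::cout << std::hex << ptr[0] << " " << ptr[1] << std::endl;
    delete[] ptr;
}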
When defining a variable without initialization on either the stack or the free store it usually has a garbage value, as assigning it to some default value e.g. 0 would just be a waste of time.
No. The initial value of a variable that is not initialized is always garbage. All garbage. This is inherent in "not initialized". The language semantics do not specify what the value of the variable is, and reading that value produces undefined behavior. If you do read it and it seems to make sense to you -- it is all zeroes, for example, or it looks like the value that some now-dead variable might have held -- that is meaningless.
This however doens't answer the question of where the garbage values come from.
At the level of the language semantics, that question is non-sensical. "Garbage values" aren't a thing of themselves. The term is descriptive of values on which you cannot safely rely, precisely because the language does not describe where they come from or how they are determined.
The usual explanation to that is that new or malloc or whatever you use to get dynamically allocated memory don't initialize the memory [so the] values are just leftover from whatever program used the same memory prior.
That's an explanation derived from typical C and C++ implementation details. Read again: implementation details. These are what you are asking about, and unless your objective is to learn about writing C and / or C++ compilers or implementations of their standard libraries, it is not a particularly useful area to probe. The specifics vary across implementations and sometimes between versions of the same implementation, and if your programs do anything that exposes them to these details then those programs are wrong.
I know that the results of using an uninitialized variable are undefined, but I want to focus on the garbage values for now.
No, apparently you do not know that the results of using the value of an uninitialized variable are undefined. If you did, you would not present the results of your program as if they were somehow meaningful.
You also seem not to understand the term "garbage value", for in addition to thinking that the results of your program are meaningful, you appear to think that some of the values it outputs are not garbage.

What happens if I write less than 12 bytes to a 12 byte buffer?

Understandably, going over a buffer errors out (or creates an overflow), but what happens if fewer than 12 bytes are used in a 12-byte buffer? Is it possible, or does the empty trailing part always get filled with 0s? Orthogonal question that may help: what is contained in a buffer when it is instantiated but not yet used by the application?
I have looked at a few pet programs in Visual Studio and it seems that they are padded with 0s (or null characters), but I am not sure if this is an MS implementation detail that may vary across languages/compilers.
Take the following example (within a block of code, not global):
char data[12];
memcpy(data, "Selbie", 6);
Or even this example:
char* data = new char[12];
memcpy(data, "Selbie", 6);
In both of the above cases, the first 6 bytes of data are S,e,l,b,i, and e. The remaining 6 bytes of data are considered "unspecified" (could be anything).
Is it possible or does the empty trailing always fill with 0s?
Not guaranteed at all. The only allocator that I know of that guarantees zero byte fill is calloc. Example:
char* data = (char*)calloc(12, 1); // will allocate an array of 12 bytes and zero-init each byte
memcpy(data, "Selbie", 6);
what is contained in a buffer when it is instantiated but not used by the application yet?
Technically, as per the most recent C++ standards, the bytes delivered by the allocator are considered "unspecified". You should assume they are garbage data (anything). Make no assumptions about the content.
Debug builds with Visual Studio will often initialize buffers with 0xcc or 0xcd values, but that is not the case in release builds. There are, however, compiler flags and memory allocation techniques for Windows and Visual Studio that guarantee zero-initialized memory allocations, but they are not portable.
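If you need the trailing bytes to be zero regardless of build mode, a portable option (a sketch, not tied to any particular compiler) is to value-initialize the buffer yourself:
#include <cstring>
int main() {
    char stackBuf[12] = {};             // empty braces: all 12 bytes zero-initialized
    char* heapBuf = new char[12]();     // trailing (): value-initialization, all bytes zero
    std::memcpy(stackBuf, "Selbie", 6); // bytes 6..11 of stackBuf remain 0
    std::memcpy(heapBuf, "Selbie", 6);  // bytes 6..11 of heapBuf remain 0
    delete[] heapBuf;
}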
Consider your buffer, filled with zeroes:
[00][00][00][00][00][00][00][00][00][00][00][00]
Now, let's write 10 bytes to it. Values incrementing from 1:
[01][02][03][04][05][06][07][08][09][10][00][00]
And now again, this time, 4 times 0xFF:
[FF][FF][FF][FF][05][06][07][08][09][10][00][00]
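A short sketch that reproduces those three snapshots (the zero-fill at the start is explicit, since a plain local buffer would not start out zeroed):
#include <cstdio>
#include <cstring>
int main() {
    unsigned char buf[12] = {};                   // start from the zero-filled picture above
    for (int i = 0; i < 10; ++i) buf[i] = i + 1;  // write 10 incrementing bytes
    std::memset(buf, 0xFF, 4);                    // then overwrite the first 4 with 0xFF
    for (unsigned char b : buf) std::printf("[%02X]", b);
    std::printf("\n");                            // the last two bytes are still 00
}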
what happens if there are less than 12 bytes used in a 12 byte buffer? Is it possible or does the empty trailing always fill with 0s?
You write as much as you want, the remaining bytes are left unchanged.
Orthogonal question that may help: what is contained in a buffer when
it is instantiated but not used by the application yet?
Unspecified. Expect junk left by programs (or other parts of your program) that used this memory before.
I have looked at a few pet programs in Visual Studio and it seems that they are appended with 0s (or null characters) but I am not sure if this is a MS implementation that may vary across language/ compiler.
It is exactly what you think it is. Somebody has done that for you this time, but there are no guarantees it will happen again. It could be a compiler flag that attaches cleaning code. Some versions of MSVC used to fill fresh memory with 0xCD when run in debug but not in release. It can also be a system security feature that wipes memory before giving it to your process (so you can't spy on other apps). Always remember to use memset to initialize your buffer where it matters. If you depend on a fresh buffer containing a certain value, mandate the required compiler flag in your readme.
But cleaning is not really necessary. You take a 12 byte-long buffer. You fill it with 7 bytes. You then pass it somewhere - and you say "here is 7 bytes for you". The size of the buffer is not relevant when reading from it. You expect other functions to read as much as you've written, not as much as possible. In fact, in C it is usually not possible to tell how long the buffer is.
And a side note:
Understandably, going over a buffer errors out (or creates an overflow)
It doesn't, and that's the problem. That's why it's a huge security issue: there is no error and the program tries to continue, so it sometimes executes malicious content it was never meant to. So we had to add a bunch of mechanisms to the OS, like ASLR, which increase the probability of crashing the program and decrease the probability of it continuing with corrupted memory. So never depend on those afterthought guards; watch your buffer boundaries yourself.
C++ variables have different storage durations, including static (which covers globals and static locals) and automatic. Whether a variable is initialized depends on how it is declared.
char global[12];          // all 0
static char s_global[12]; // all 0

void foo()
{
    static char s_local[12]; // all 0
    char local[12]; // automatic storage variables are uninitialized; accessing them before initialization is undefined behavior
}
Some interesting details here.
The program knows the length of a string because it ends it with a null-terminator, a character of value zero.
This is why in order to fit a string in a buffer, the buffer has to be at least 1 character longer than the number of characters in the string, so that it can fit the string plus the null-terminator too.
Any space after that in the buffer is left untouched. If there was data there previously, it is still there. This is what we call garbage.
It is wrong to assume this space is zero-filled just because you haven't used it yet, you don't know what that particular memory space was used for before your program got to that point. Uninitialized memory should be handled as if what is in it is random and unreliable.
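A tiny sketch of that sizing rule (buffer names are illustrative):
#include <cstring>
#include <iostream>
int main() {
    char greeting[6] = "Hello";    // 5 characters plus the '\0' terminator exactly fill the buffer
    char buffer[12];               // contents are indeterminate at this point
    std::strcpy(buffer, greeting); // writes bytes 0..5; bytes 6..11 are left untouched
    std::cout << buffer << "\n";   // prints "Hello" -- the garbage tail is never read
}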
All of the previous answers are very good and very detailed, but the OP appears to be new to C programming. So, I thought a Real World example might be helpful.
Imagine you have a cardboard beverage holder that can hold six bottles. It's been sitting around in your garage so instead of six bottles, it contains various unsavory things that accumulate in the corners of garages: spiders, mouse houses, et al.
A computer buffer is a bit like this just after you allocate it. You can't really be sure what's in it, you just know how big it is.
Now, let's say you put four bottles in your holder. Your holder hasn't changed size, but you now know what's in four of the spaces. The other two spaces, complete with their questionable contents, are still there.
Computer buffers are the same way. That's why you frequently see a bufferSize variable to track how much of the buffer is in use. A better name might be numberOfBytesUsedInMyBuffer but programmers tend to be maddeningly terse.
Writing part of a buffer will not affect the unwritten part of the buffer; it will contain whatever was there beforehand (which naturally depends entirely on how you got the buffer in the first place).
As the other answer notes, static and global variables will be initialized to 0, but local variables will not be initialized (and instead contain whatever was on the stack beforehand). This is in keeping with the zero-overhead principle: initializing local variables would, in some cases, be an unnecessary and unwanted run-time cost, while static and global variables are allocated at load-time as part of a data segment.
Initialization of heap storage is at the option of the memory manager, but in general it will not be initialized, either.
In general, it's not at all unusual for buffers to be underfull. It's often good practice to allocate buffers bigger than they need to be. (Trying to always compute an exact buffer size is a frequent source of error, and often a waste of time.)
When a buffer is bigger than it needs to be, when the buffer contains less data than its allocated size, it's obviously important to keep track of how much data is there. In general there are two ways of doing this: (1) with an explicit count, kept in a separate variable, or (2) with a "sentinel" value, such as the \0 character which marks the end of a string in C.
But then there's the question, if not all of a buffer is in use, what do the unused entries contain?
One answer is, of course, that it doesn't matter. That's what "unused" means. You care about the values of the entries that are used, that are accounted for by your count or your sentinel value. You don't care about the unused values.
There are basically four situations in which you can predict the initial values of the unused entries in a buffer:
When you allocate an array (including a character array) with static duration, all unused entries are initialized to 0.
When you allocate an array and give it an explicit initializer, all unused entries are initialized to 0.
When you call calloc, the allocated memory is initialized to all-bits-0.
When you call strncpy, the destination string is padded out to size n with \0 characters.
In all other cases, the unused parts of a buffer are unpredictable, and generally contain whatever they did last time (whatever that means). In particular, you cannot predict the contents of an uninitialized array with automatic duration (that is, one that's local to a function and isn't declared with static), and you cannot predict the contents of memory obtained with malloc. (Some of the time, in those two cases the memory tends to start out as all-bits-zero the first time, but you definitely don't want to ever depend on this.)
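A compact sketch of those four predictable cases (all of these guarantees come from the language or the C library, not from luck):
#include <cstdlib>
#include <cstring>
char staticBuf[12];                                // case 1: static duration, zero-initialized
int main() {
    char initBuf[12] = { 'h', 'i' };               // case 2: explicit initializer, remaining bytes zeroed
    char* heapBuf = (char*)std::calloc(12, 1);     // case 3: calloc zero-fills the allocation
    char padded[12];
    std::strncpy(padded, "hi", sizeof padded);     // case 4: strncpy pads with '\0' out to n
    std::free(heapBuf);
    (void)staticBuf; (void)initBuf;                // silence unused-variable warnings
}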
It depends on the storage class specifier, your implementation, and its settings.
Some interesting examples:
- Uninitialized stack variables may be set to 0xCCCCCCCC
- Uninitialized heap variables may be set to 0xCDCDCDCD
- Uninitialized static or global variables may be set to 0x00000000
- or it could be garbage.
It's risky to make any assumptions about any of this.
I think the correct answer is that you should always keep track of how many chars have been written.
Low-level functions like read and write take or return the number of characters read or written. In the same way, std::string keeps track of the number of characters in its implementation.
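A minimal sketch of that pattern, pairing a buffer with an explicit count (the names are illustrative):
#include <cstddef>
#include <iostream>
struct ByteBuffer {
    char data[12];
    std::size_t used = 0;     // how many bytes of data are valid
};
int main() {
    ByteBuffer buf;
    const char msg[] = { 'S', 'e', 'l', 'b', 'i', 'e' };
    for (char c : msg) buf.data[buf.used++] = c;   // write 6 bytes and track the count
    std::cout.write(buf.data, buf.used) << "\n";   // read back only the bytes we wrote
}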
Declared objects of static duration (those declared outside a function, or with a static qualifier) which have no specified initializer are initialized to whatever value would be represented by a literal zero [i.e. an integer zero, floating-point zero, or null pointer, as appropriate, or a structure or union containing such values]. If the declaration of any object (including those of automatic duration) includes an initializer, portions whose values are specified by that initializer will be set as specified, and the remainder will be zeroed as with static objects.
For automatic objects without initializers, the situation is somewhat more ambiguous. Given something like:
#include <string.h>
unsigned char static1[5], static2[5];
void test(void)
{
    unsigned char temp[5];
    strcpy(temp, "Hey");
    memcpy(static1, temp, 5);
    memcpy(static2, temp, 5);
}
the Standard is clear that test would not invoke Undefined Behavior, even though it copies portions of temp that were not initialized. The text of the Standard, at least as of C11, is unclear as to whether anything is guaranteed about the values of static1[4] and static2[4], most notably whether they might be left holding different values. A defect report states that the Standard was not intended to forbid a compiler from behaving as though the code had been:
unsigned char static1[5]={1,1,1,1,1}, static2[5]={2,2,2,2,2};
void test(void)
{
    unsigned char temp[4];
    strcpy(temp, "Hey");
    memcpy(static1, temp, 4);
    memcpy(static2, temp, 4);
}
which could leave static1[4] and static2[4] holding different values. The Standard is silent on whether quality compilers intended for various purposes should behave that way in this function. The Standard also offers no guidance as to how the function should be written if the programmer's intention requires that static1[4] and static2[4] hold the same value, but doesn't care what that value is.

Why are memory locations assigned garbage values?

I always wondered why there are garbage values stored in a memory space. Why can't the memory be filled with zeros? Is there a particular reason?
For example:
int a;
cout << a; // garbage value displayed
Assigning zeros takes time and is not always what the programmer wants to do. Consider this:
int a;
std::cin >> a;
Why waste time loading a zero into the memory when the first thing you are going to do is store a different value there?
Modern OSs do initialise memory to 0 before your process first gets access to it. But once it's been used once there's generally no point zeroing it out again unless there's a specific need. The "garbage values" are just whatever was last written to that memory.
Because it's expensive to clear memory (or certainly was), and in the vast number of common cases it wasn't needed.
In the example you show it's stack memory. It would be prohibitively expensive to zero this out each time (basically every function call would have to clear a lump of memory).
For (mostly historical) performance reasons. Zeroing out memory locations that get assigned a proper value later is unnecessary work, and one of C/C++'s slogans is "you don't pay for what you don't use".
Usually you should properly initialize a variable right when it is declared anyway, but especially in C you sometimes just don't know yet what the initial value of a variable should be.
EDIT: If your question is about where that garbage data comes from: it is just the data that was previously stored at the same physical address. Let's say you are calling the following two functions directly after one another:
#include <iostream>

void foo1() {
    int a = 5;
}
void foo2() {
    int b;
    std::cout << b << std::endl;
}
int main() {
    foo1();
    foo2();
}
It is quite likely that (in a debug build) the output of your program will be 5. (I believe this is actually UB, so, taking compiler optimizations into account, anything can happen, of course.)
The garbage values you are getting are the values that were previously stored at that address. But in C++ (and many other languages) initializing them all to zero is quite an expensive task, which the compiler does not do, as it would waste time that could be spent on something else. So new variables are not assigned values by the compiler.
There are other languages whose compilers do initialize variables to 0, but C++ is not one of them.
Normally, the compiler will expect you to give the new variables a new value. Like
int a = 0; // this is not so hard to do
or
int a;
std::cin >> a ;
So assigning a value ourselves is much more efficient than having the compiler initialize it and then overwrite it.
If you don't assign variables values before accessing them, the compiler will give you a warning about an uninitialized variable (if you have compiler warnings turned on).
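For example, with GCC or Clang you can ask for this diagnostic explicitly (a sketch; the exact wording of the warning varies by compiler and version):
// compile with: g++ -Wall -Wuninitialized example.cpp
int main() {
    int a;      // declared but never assigned
    return a;   // typical compilers warn here that 'a' is used uninitialized
}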
The garbage values come from what is present in the memory space. In your case you have only declared the variable and not initialised it. When a variable is declared, and not initialised, memory is allocated for that variable but not cleared, mostly for performance reasons. Therefore, it may contain an initial value that you do not expect it to contain, which can happen for several reasons. According to Code Complete Chapter 10, a few reasons include:
The variable has never been assigned a value. Its value is whatever bits happened to be in its area of memory when the program started.
The value in the variable is outdated. The variable was assigned a value at some point, but the value is no longer valid.
Part of the variable has been assigned a value and part has not (specifically relates to an object that may have several data members).
A good practice is to declare and initialise a variable as close as possible to where they're first used i.e. follow the Principle of Proximity by keeping related actions together.

What do C++ arrays init to?

So I can fix this manually so it isn't an urgent question but I thought it was really strange:
Here is the entirety of my code before the weird thing that happens:
int main(int argc, char** arg) {
    int memory[100];
    int loadCounter = 0;
    bool getInput = true;

    print_memory(memory);
and then some other unrelated stuff.
The print_memory call just prints the array, which should have been initialized to all zeros, but instead the first few numbers are:
+1606636544 +32767 +1606418432 +32767 +1856227894 +1212071026 +1790564758 +813168429 +0000 +0000
(the plus and the filler zeros are just for formatting since all the numbers are supposed to be from 0-1000 once the array is filled. The rest of the list is zeros)
It also isn't a memory leak, because I tried initializing a different array variable and on the first run it also gave me a ton of weird numbers. Why is this happening?
Since you asked "What do C++ arrays init to?", the answer is they init to whatever happens to be in the memory they have been allocated at the time they come into scope.
I.e. they are not initialized.
Do note that some compilers will initialize stack variables to zero in debug builds; this can lead to nasty, randomly occurring issues once you start doing release builds.
The array you are using is stack allocated:
int memory[100];
When the particular function scope exits (In this case main) or returns, the memory will be reclaimed and it will not leak. This is how stack allocated memory works. In this case you allocated 100 integers (32 bits each on my compiler) on the stack as opposed to on the heap. A heap allocation is just somewhere else in memory hopefully far far away from the stack. Anyways, heap allocated memory has a chance for leaking. Low level Plain Old Data allocated on the stack (like you wrote in your code) won't leak.
The reason you got random values in your function was probably because you didn't initialize the data in the 'memory' array of integers. In release mode, the application or the C runtime (on Windows, at least) will not take care of initializing that memory to a known base value. So the memory in the array is whatever was left over from the last time the stack used that memory. It could be a few milliseconds old (most likely), a few seconds old (less likely), or a few minutes old (much less likely). Anyway, it's considered garbage memory and it's to be avoided at all costs.
The problem is we don't know what is in your function called print_memory. But if that function doesn't alter the memory in any way, then that would explain why you are getting seemingly random values. You need to initialize those values to something before using them. I like to declare my stack-based buffers like this:
int memory[100] = {0};
That's a shortcut for telling the compiler to fill the entire array with zeros.
It works for strings and any other basic data type too:
char MyName[100] = {0};
float NoMoney[100] = {0};
Not sure what compiler you are using, but if you are using a Microsoft compiler with Visual Studio you should be just fine.
In addition to other answers, consider this: What is an array?
In managed languages, such as Java or C#, you work with high-level abstractions. C and C++ don't provide such abstractions (I mean hardware abstractions, not language abstractions like OO features). They are designed to work close to the metal; that is, the language uses the hardware (memory, in this case) directly, without abstractions.
That means when you declare a local variable, int a for example, what the compiler does is say "OK, I'm going to interpret the chunk of memory [A, A + sizeof(int)] as an integer, which I call 'a'" (where A is the offset between the beginning of that chunk and the start address of the function's stack frame).
As you can see, the compiler only "assigns" memory segments to variables. It does not do any "magic", like "creating" variables. You have to understand that your code is executed on a machine, and the machine has only memory and a CPU. There is no magic.
So what is the value of a variable when the function's execution starts? The value represented by whatever data the variable's chunk of memory happens to contain. Commonly, that data has no meaning from our current point of view (it could be part of the data previously used by a string, for example), so when you access that variable you get strange values. That's what we call "garbage": data previously written that has no meaning in our context.
The same applies to an array: An array is only a bigger chunk of memory, with enough space to fit all the values of the array: [A,A + (length of the array)*sizeof(type of array elements)]. So as in the variable case, the memory contains garbage.
Commonly you want to initialize an array with a set of values during its declaration. You could achieve that using an initialiser list:
int array[] = {1,2,3,4};
In that case, the compiler adds code to the function to initialize the array's memory chunk with those values.
Sidenote: Non-POD types and static storage
The things explained above only apply to POD types such as basic types and arrays of basic types. With non-POD types like classes, the compiler adds calls to the constructors of the variables, which are designed to initialise the values (attributes) of a class instance.
In addition, even if you use POD types, if the variables have static storage the compiler initializes their memory with a default value, because static variables are allocated at program start.
Local variables on the stack are not initialized in C/C++. C/C++ is designed to be fast, so it doesn't zero the stack on function calls.
Before main() runs, the language runtime sets up the environment. Exactly what it's doing you'd have to discover by breaking at the load module's entry point and watching the stack pointer, but at any rate your stack space on entering main is not guaranteed clean.
Anything that needs clean stack, malloc, or new space gets to clean it itself. Plenty of things don't. C[++] isn't in the business of doing unnecessary things. In C++ a class object can have non-trivial constructors that run implicitly; those guarantee the object is set up for use. But arrays and plain scalars don't have constructors, so if you want an initial value you have to declare an initializer.
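A small sketch of that contrast (the class name is just illustrative):
#include <iostream>
struct Counter {
    int value;
    Counter() : value(0) {}   // non-trivial constructor: runs implicitly, guarantees a known state
};
int main() {
    Counter c;                // c.value is 0, set by the constructor
    int raw[4];               // plain array: no constructor, contents are indeterminate
    int zeroed[4] = {};       // explicit initializer: all four elements are 0
    (void)raw;                // never read raw -- doing so would be undefined behaviour
    std::cout << c.value << " " << zeroed[0] << "\n";
}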

unassigned value in the int[ ]

I would like to know what the value of an unassigned integer in an int[] usually is in C++.
Example
int arr[5];
arr[1] = 2;
arr[3] = 4;
for (int i = 0; i < 5; i++)
{
    cout << arr[i] << endl;
}
It prints:
-858993460
2
-858993460
4
-858993460
We know that the array will be {?, 2, ?, 4, ?}, where ? is unknown.
What will the "?" usually be?
When I tested, I always got negative values.
Can I assume that in C++ an unassigned element in an integer array is always less than or equal to zero?
Correct me if I'm wrong; when I studied Java, an unassigned element in an array would produce null.
Formally, in most cases the very attempt to read an uninitialized value results in undefined behavior. So, formally the question about the actual value is rather moot: you are not allowed to even look at that value directly.
Practically, uninitialized values in C and C++ are unpredictable. On top of that they are not supposed to be stable, meaning that reading the same uninitialized value several times is not guaranteed to read the same value.
If you need a pre-initialized local array, declare it with an explicit initializer
int arr[5] = {};
The above is guaranteed to fill the array with integer zeros.
When I tested, I always got negative values.
The (previously) unused memory space seemed to be filled with the hex byte 0xCC. However, as mentioned above -- several times -- you cannot rely on this.
In one of your comments you clarify your task:
I'm trying to create an int array, let's say of size 100, and randomly insert positive integers into any position in the array. If the array is not full, how could I determine whether a position in the array has never been assigned?
Fill the array with zeros (manually, or per AndrewT's answer). Since you are inserting positive integers only, all you have to test for is !0.
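A minimal sketch of that scheme (size and values are just the ones from the comment; 0 acts as the "never assigned" marker because only positive values are ever inserted):
#include <cstdlib>
#include <iostream>
int main() {
    int slots[100] = {};               // every slot starts at 0, meaning "never assigned"
    slots[std::rand() % 100] = 42;     // insert positive values at random positions
    slots[std::rand() % 100] = 7;
    for (int i = 0; i < 100; ++i)
        if (slots[i] != 0)             // non-zero means this slot has been assigned
            std::cout << "slot " << i << " holds " << slots[i] << "\n";
}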
You can't know what this will produce, since the variable takes as its value whatever bits happen to be in memory at that moment. So you can get ANY value, not only negative values.
The values contained in an uninitialized area of memory can be anything; it is implementation-dependent. The most efficient implementation is to leave the memory as it was, so you will find in your array whatever was contained there before. An important note: it is not something you can use as a random value. Some implementations (I have seen this, especially in the past, when compiling and running in debug mode) might put zeros in your memory, but it is uncommon. You simply should not rely on the content of an uninitialized area of memory.
To understand if something has not been touched in your array, you can initialize it to some value like DEADBEEF:
http://en.wikipedia.org/wiki/Hexspeak
(Unless you are so unlucky that one of the values you have to insert corresponds exactly to DEADBEEF... :) )
These are garbage values; you cannot expect to work with these variables properly, and you cannot predict what using them will result in. Whenever a variable is allocated, some portion of memory gets allocated for that variable, and that portion may have been used previously for some other calculation you cannot know about, so you have to initialize those variables with some values to avoid using garbage values.
You are not assigning any values at these locations, so they will return garbage values from memory. You must put some values at those locations first. Uninitialized locations will return unexpected/unpredictable values.