Pointer: Is there a chance that (ptr + 1) is already occupied? - C++

char data;
char *ptr = &data;
*ptr = 3;
*(ptr+1) = 5;
So I am studying pointers, and there is one thing I don't quite understand.
When you use a pointer as an array, how do you know that the address ptr+1 or ptr+2 and so on is not occupied by some other variable?
Say, for example, the address ptr + 1 is already being used; if I try to put 5 into it, is there a chance the program just crashes?
Or, as a more extreme example, something like ptr + 1000?
Or does the compiler make sure that it never happens?

The definition char data; reserves space for one char.
char *ptr = &data; sets ptr to point to that char.
The space at ptr+1 is not reserved for your use, and neither the C nor the C++ standards define what happens if you try to use it.
The definition char data[3]; would reserve space for three char. Then char *ptr = data; would set ptr to point to the first of these char. That is, ptr would have the address &data[0], and *ptr would be data[0].
Then ptr+1 would point to the next char; it would have the address &data[1], and *(ptr+1) would be data[1].
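As an illustration, here is a minimal sketch (not from the original question) of the array version, where ptr+1 and ptr+2 are guaranteed to be reserved for you; the concrete values are arbitrary:
#include <iostream>

int main() {
    char data[3];          // reserves space for three char: data[0], data[1], data[2]
    char *ptr = data;      // ptr holds &data[0]

    *ptr = 3;              // writes data[0]
    *(ptr + 1) = 5;        // writes data[1] -- within the reserved space
    *(ptr + 2) = 7;        // writes data[2] -- still within the reserved space

    // *(ptr + 3) = 9;     // out of bounds: the standard does not define what happens

    std::cout << static_cast<int>(*(ptr + 1)) << '\n';  // prints 5
}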
Generally, you should use pointer arithmetic only to access space that you know is reserved for your use. (There may be exceptions or clarifications to that in special-purpose code, such as code in the operating system kernel to deal with memory mapping or code in special-purpose hardware. You do not need to consider such possibilities in normal user programs.)
The compiler generally does not prevent you from accessing invalid addresses. It might in some circumstances, when it detects an out-of-bounds reference. Generally, this happens in current compilers only with simple expressions where the full definitions are visible to the compiler and the references essentially use constant indices.
The operating system may prevent you from accessing some invalid addresses. However, it will only prevent you from accessing invalid addresses either because they are not mapped to your program at all or they are mapped to be read-only but you tried writing to them (or certain other combinations, such as attempting to read execute-only memory). The operating system will not prevent you from accessing addresses improperly that are mapped and accessible to your program. For example, calculating an improper pointer value and using it in an assignment to change memory can result in changing data your program needs for other functions.

You don't know if they are occupied by other variables or not.
This is undefined behavior, which means that the C standard does not specify what your program does. It could appear to "work", it could crash, it could give wrong results, or it could do anything else.
So don't do this.

Creating an array using new without declaring size [duplicate]

This has been bugging me for quite some time. I have a pointer. I declare an array of type int.
int* data;
data = new int[5];
I believe this creates an array of int with size 5. So I'll be able to store values from data[0] to data[4].
Now I create an array the same way, but without size.
int* data;
data = new int;
I am still able to store values in data[2] or data[3]. But I created an array of size 1. How is this possible?
I understand that data is a pointer pointing to the first element of the array. Though I haven't allocated memory for the next elements, I am still able to access them. How?
Thanks.
Normally, there is no need to allocate an array "manually" with new. It is much more convenient, and also much safer, to use std::vector<int> instead and leave the correct implementation of dynamic memory management to the authors of the standard library.
std::vector<int> optionally provides element access with bounds checking, via the at() method.
Example:
#include <vector>

int main() {
    // create a resizable array of integers and resize as desired
    std::vector<int> data;
    data.resize(5);

    // element access without bounds checking
    data[3] = 10;

    // optionally: element access with bounds checking;
    // attempts to access out-of-range elements trigger a runtime exception
    data.at(10) = 0;
}
The default mode in C++ is usually to let you shoot yourself in the foot with undefined behavior, as you have seen in your case.
For reference:
https://en.cppreference.com/w/cpp/container/vector
https://en.cppreference.com/w/cpp/container/vector/at
https://en.cppreference.com/w/cpp/language/ub
Undefined, unspecified and implementation-defined behavior
What are all the common undefined behaviours that a C++ programmer should know about?
Also, in the second case you don't allocate an array at all, but a single object. Note that you must use the matching delete operator too.
int main() {
    // allocate and deallocate an array
    int *arr = new int[5];
    delete[] arr;

    // allocate and deallocate a single object
    int *p = new int;
    delete p;
}
For reference:
https://en.cppreference.com/w/cpp/language/new
https://en.cppreference.com/w/cpp/language/delete
How does delete[] know it's an array?
When you use new int, accessing data[i] where i != 0 has undefined behaviour.
But that doesn't mean the operation will fail immediately (or every time or even ever).
On most architectures it's very likely that the memory addresses just beyond the end of the block you asked for are mapped to your process, so you can access them.
If you're not writing to them it's no surprise you can access them (though you shouldn't).
Even if you write to them most memory allocators have a minimum allocation and behind the scenes you may well have been allocated space for more (4 is realistic) integers even though the code only requests 1.
You may also be overwriting some other area of memory and never get tripped up. A common consequence of writing beyond the end of an array is to corrupt the free-memory store itself. The consequence may be catastrophic, but it may only show up in a later allocation, possibly of a similarly sized object.
It's a dreadful idea to rely on such behaviour but it's not very surprising that it appears to work.
C++ doesn't (typically, or by default) perform strict range checking, and accessing invalid array elements may work, or at least appear to work, initially.
This is why C and C++ can be plagued with bizarre and intermittent errors. Not all code that provokes undefined behaviour fails catastrophically in every execution.
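As an aside, sanitizers can turn many of these silent out-of-bounds accesses into immediate, diagnosable failures. A hedged sketch (the compiler flag shown is the GCC/Clang one; the file name is made up):
// Build with AddressSanitizer to catch the overflow at runtime, e.g.:
//   g++ -g -fsanitize=address oob.cpp && ./a.out
int main() {
    int *data = new int;   // allocates a single int
    data[3] = 42;          // out of bounds: ASan reports a heap-buffer-overflow here
    delete data;
}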
Going outside the bounds of an array in C++ is undefined behavior, so anything can happen, including things that appear to work "correctly".
In practical implementation terms on common systems, you can think of "virtual" memory as a large "flat" space running from 0 up to the largest value a pointer can hold, and pointers are indices into this space.
The "virtual" memory for a process is mapped to physical memory, page file, etc. Now, if you access an address that is not mapped, or try to write a read-only part, you will get an error, such as an access violation or segfault.
But this mapping is done in fairly large chunks for efficiency, such as 4KiB "pages". The allocators in a process, such as new and delete (or the stack), will further split up these pages as required. So accessing other parts of a valid page is unlikely to raise an error.
This has the unfortunate result that it can be hard to detect such out of bounds access, use after free, etc. In many cases writes will succeed, only to corrupt some other seemingly unrelated object, which may cause a crash later, or incorrect program output, so best to be very careful about C and C++ memory management.
int *data = new int;       // will be some virtual address
data[1000] = 5;            // data may sit near the start of a 4K page, leaving a great deal of accessible space beyond it

int *other_int = new int[5];
other_int[10] = 10;
data[10000] = 42;          // with further pages beyond, you can really make a mess of your program's memory
other_int[10] == 42;       // perfectly possible to overwrite other things in unexpected ways
C++ provides many tools to help, such as std::string, std::vector and std::unique_ptr, and it is generally best to try and avoid manual new and delete entirely.
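For illustration, a brief sketch of what those safer alternatives look like in place of a raw new int[5] (names and sizes are arbitrary):
#include <memory>
#include <vector>

int main() {
    // resizable array; bounds-checked access available via at()
    std::vector<int> v(5);
    v[2] = 7;

    // fixed-size heap array with automatic cleanup, no manual delete[]
    // (std::make_unique requires C++14)
    auto a = std::make_unique<int[]>(5);
    a[2] = 7;
}   // both release their memory here automatically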
new int allocates 1 integer only. If you access offsets larger than 0, e.g. data[1], you overwrite memory that was never allocated to you.
int * is a pointer to something that's probably an int. When you allocate using new int, you're allocating one int and storing its address in the pointer. In reality, int * is just a pointer to some memory.
We can treat an int * as a pointer to a scalar element (i.e. new int) or an array of elements -- the language has no way of telling you what your pointer is really pointing to; a very good argument to stop using pointers and only using scalar values and std::vector.
When you say a[2], you will access the memory sizeof(int) bytes after the value pointed to by a. If a is pointing to a scalar value, anything could be after a, and reading it causes undefined behaviour (your program might actually crash -- this is a real risk). Writing to that address will most likely cause problems; it is not merely a risk, but something you should actively guard against -- i.e. use std::vector if you need an array and int or int& if you don't.
The expression a[b], where one of the operands is a pointer, is another way to write *(a+b). Let's for the sake of sanity assume that a is the pointer here (but since addition is commutative it can be the other way around! try it!); then the address in a is incremented by b times sizeof(*a), resulting in the address of the bth object after *a.
The resulting pointer is dereferenced, resulting in a "name" for the object whose address is a+b.
Note that a does not have to be an array; if it is one, it "decays" to a pointer before the operator [] is applied. The operation is taking place on a typed pointer. If that pointer is invalid, or if the memory at a+b does not in fact hold an object of the type of *a, or even if that object is unrelated to *a (e.g., because it is not in the same array or structure), the behavior is undefined.
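To make the equivalence concrete, a small illustrative sketch (the 2[a] spelling is legal precisely because of that commutativity, though nobody should write it in real code):
#include <cassert>

int main() {
    int arr[4] = {10, 11, 12, 13};
    int *a = arr;              // the array decays to a pointer to its first element

    assert(a[2] == *(a + 2));  // a[2] is by definition *(a + 2)
    assert(a[2] == 2[a]);      // addition is commutative, so 2[a] works too
    assert(&a[2] == a + 2);    // taking the address undoes the dereference
}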
In the real world, "normal" programs do not do any bounds checking but simply add the offset to the pointer and access that memory location. (Accessing out-of-bounds memory is, of course, one of the more common bugs in C and C++, and one of the reasons these languages are not recommended, without restrictions, for high-security applications.)
If the index b is small, the memory is probably accessible by your program. For plain old data like int the most likely result is then that you simply read or write the memory in that location. This is what happened to you.
Since you overwrite unrelated data (which may in fact be used by other variables in your program) the results are often surprising in more complex programs. Such errors can be hard to find, and there are tools out there to detect such out-of-bounds access.
For larger indices you'll at some point end up in memory which is not assigned to your program, leading to an immediate crash on modern systems like Windows NT and up, and unpredictable results on architectures without memory management.
I am still able to store values in data[2] or data[3]. But I created an array of size 1. How is this possible?
The behaviour of the program is undefined.
Also, you didn't create an array of size 1, but a single non-array object instead. The difference is subtle.

C++ - operator -= on a pointer [duplicate]

Say I have a pointer to a dynamically allocated array:
char* buf = new char[256];
what will happen to the pointer's value (and the array's size) if I do
buf -= 100;
+, +=, - and -= move a pointer around.
The pointer char* ptr = new char[256] points at the start of a block of 256 bytes.
So ptr+10 is now a pointer pointing 10 bytes in, and ptr += 10 is the same as ptr = ptr + 10.
This is called pointer arithmetic.
In C++, pointer arithmetic is no longer valid if the result takes you outside the object the pointer points into (or its one-past-the-end position). So ptr+0 to ptr+256 are the only valid pointer values you are allowed to generate from ptr.
ptr-=100 is undefined behaviour.
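A hedged sketch of which results are still defined, assuming the 256-byte allocation from the question:
int main() {
    char *buf = new char[256];

    char *first = buf;          // fine: points at buf[0]
    char *mid   = buf + 100;    // fine: inside the allocation
    char *end   = buf + 256;    // fine: one past the end (may be compared, not dereferenced)

    // char *before = buf - 100;   // undefined behaviour: before the allocation
    // char *beyond = buf + 257;   // undefined behaviour: more than one past the end

    (void)first; (void)mid; (void)end;
    delete[] buf;
}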
In C++, most currently active implementations implement pointers as unsigned integer indexes into a flat address space at runtime. This still doesn't mean you can rely on that fact while doing pointer arithmetic. Each pointer has a range of validity, and once you go outside of it the C++ standard no longer defines what anything in your program does (UB).
Undefined Behaviour doesn't just mean "could segfault"; the compiler is free to do anything, and there are instances of compilers optimizing entire branches of code away because the only way to reach them required UB, or because the compiler proved that reaching them would cause UB. UB makes the correctness of your program basically impossible to reason about.
That being said, 99/100+ times what will happen is that ptr-=100 now points to a different part of the heap than it did when initialized, and reading/writing to what it points at will result in getting junk, corrupting memory, and/or segfaulting. And doing a +=100 will bring ptr back to the valid range.
The block of memory won't be affected by moving ptr; it's just that ptr will no longer point within it.
The standard says that even just trying to calculate a pointer that goes outside the actual boundaries of an array (and the "one past last" value, which is explicitly allowed, although not for dereferencing) is undefined behavior, i.e. anything can happen.
On some bizarre machines, even just this calculation may make the program crash (they had registers specifically for pointers, that trapped in case they pointed to non mapped pages/invalid addresses).
In practice, on non-pathological machines, calculating that pointer won't do much - you'll probably just obtain a pointer to an invalid memory location (which may crash your program when you try to dereference it) or, worse, to memory you don't own (so you may overwrite unrelated data).
The only case where that code may be justified is if you have "insider knowledge" about the memory allocator (or you have actually replaced it with your own, e.g. by providing your own override of the global operator new), and you know that the actual allocation starts before the returned address - possibly storing extra data there.

What happens when data is written into a memory location which is pointed by a dangling pointer of a different data type

If the pointer's data type is the same as the newly entered data, I guess it wouldn't give an error, but if the pointer has a different data type, we'll have a type mismatch. I was wondering whether the compiler would do something about it (say, delete the dangling pointer first), or simply give an error.
@YuHao is absolutely right.
If you delete first, you might get a segmentation fault, if that unmaps a page that previously existed in your process' address space.
In any other case, you just write data somewhere; there might be useful stuff there by the time you do that, there might not. At any rate, you must avoid this.
I was wondering whether the compiler would do something about it(say delete the dangling pointer first)
Could do (won't)
or simply give an error.
Could do (won't)
It's a dangling pointer. There's no protection against that.
It's undefined behaviour. This is fundamentally unanswerable.
Perhaps a quick overview so we are on the same page will help.
Let's assume a modern PC with a multi-tasking OS like Linux.
When a C++ program is run, a process is created that has a private memory space. This is a linear mapping of addresses that get translated by the CPU and OS to real addresses in RAM.
C++ is a strongly typed, low-level language with manual memory management. Strongly typed means the compiler does some basic checks to make sure logical statements in your program make sense. Pointers are just another type.
For instance:
float f = 10.0f; // this is ok, 10.0f is a float literal
float* pF = &f; // types match, & operator returns type float*
int i = f; // types do not match. Compiler error or warning.
int* pI = pF; // types do not match, int* is not float*
float f2 = pF; // types do not match, float is not float*
and so on.
That's compile time. That's really it. Once the program is running, the C++ runtime is pretty dumb. It doesn't do many checks on memory operations, since those can slow a program down, and C++'s philosophy is "if you didn't ask for it, you don't pay for it".
At its most fundamental, our memory is just a sequence of bytes. Data types like float and int are multi-byte data types (4 bytes on typical 32-bit platforms). That means a float in memory is stored in 4 adjacent byte-sized slots.
Finally, we are ready to answer your question. If you allocate memory at runtime through something like new you are handed back a pointer to memory you can use. Say we do this for a single float. new knows how to mark that memory as "in use". new won't give a pointer to those 4 bytes to anyone else, so you are safe. When you invoke delete that gives the memory back to the heap - some other part of your program is free to allocate it later. But the pointer you have is unmodified. We can still use the pointer to write to memory, only now we are headed for trouble.
Example:
float *pF = new float; // allocate 4-bytes on a 32-bit system
*pF = 10.0f; // fine
delete pF; // free the memory
*pF = 20.0f; // ?????
That last instruction says "write 20.0f to the memory pointed to by pF". We don't "own" that memory any longer. We say this pointer is dangling as it doesn't point to valid memory we can write to safely. But it does point to writable memory. You are correct that this is a source of bugs.
There are C++ memory allocators that will write special values into memory to indicate whether it is uninitialized or previously deleted. This will depend on your OS and toolset.
Another option for finding bugs like this is to use the awesome tool Valgrind, which will simulate your program's memory and flag these kinds of bugs.

Why is out-of-bounds pointer arithmetic undefined behaviour?

The following example is from Wikipedia.
int arr[4] = {0, 1, 2, 3};
int* p = arr + 5; // undefined behavior
If I never dereference p, then why is arr + 5 alone undefined behaviour? I expect pointers to behave as integers - with the exception that when dereferenced the value of a pointer is considered as a memory address.
That's because pointers don't behave like integers. It's undefined behavior because the standard says so.
On most platforms however (if not all), you won't get a crash or run into dubious behavior if you don't dereference the array. But then, if you don't dereference it, what's the point of doing the addition?
That said, note that an expression going one over the end of an array is technically 100% "correct" and guaranteed not to crash per §5.7 ¶5 of the C++11 spec. However, the result of that expression is unspecified (just guaranteed not to be an overflow); while any other expression going more than one past the array bounds is explicitly undefined behavior.
Note: That does not mean it is safe to read and write from an over-by-one offset. You likely will be editing data that does not belong to that array, and will cause state/memory corruption. You just won't cause an overflow exception.
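The classic legitimate use of that one-past-the-end value is as an end marker for iteration; a small sketch:
#include <iostream>

int main() {
    int arr[4] = {0, 1, 2, 3};
    int *end = arr + 4;                // one past the last element: valid to compute and compare

    for (int *p = arr; p != end; ++p)  // end itself is never dereferenced
        std::cout << *p << ' ';
    std::cout << '\n';
}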
My guess is that it's like that because it's not only dereferencing that's wrong. Pointer arithmetic, comparing pointers, etc. also go wrong, so it's just easier to say "don't do this" instead of enumerating the situations where it can be dangerous.
The original x86 can have issues with such statements. In 16-bit code, pointers are 16+16 bits. If you add an offset to the lower 16 bits, you might need to deal with overflow and change the upper 16 bits. That was a slow operation and best avoided.
On those systems, array_base+offset was guaranteed not to overflow, if offset was in range (<=array size). But array+5 would overflow if array contained only 3 elements.
The consequence of that overflow is that you got a pointer which doesn't point behind the array, but before. And that might not even be RAM, but memory-mapped hardware. The C++ standard doesn't try to limit what happens if you construct pointers to random hardware components, i.e. it's Undefined Behavior on real systems.
If arr happens to be right at the end of the machine's memory space then arr+5 might be outside that memory space, so the pointer type might not be able to represent the value i.e. it might overflow, and overflow is undefined.
"Undefined behavior" doesn't mean it has to crash on that line of code, but it does mean that you can't make any guaranteed about the result. For example:
int arr[4] = {0, 1, 2, 3};
int* p = arr + 5; // I guess this is allowed to crash, but that would be a rather
// unusual implementation choice on most machines.
*p; //may cause a crash, or it may read data out of some other data structure
assert(arr < p); // this statement may not be true
// (arr may be so close to the end of the address space that
// adding 5 overflowed the address space and wrapped around)
assert(p - arr == 5); //this statement may not be true
//the compiler may have assigned p some other value
I'm sure there are many other examples you can throw in here.
Some systems (very rare systems, and I can't name one) will trap when you increment past boundaries like that. Further, leaving it undefined allows an implementation that provides boundary protection to exist... again, though, I can't think of one.
Essentially, you shouldn't be doing it, and therefore there's no reason to specify what happens when you do. Specifying what happens would put an unwarranted burden on the implementation provider.
The result you are seeing is because of the x86's segment-based memory protection. I find this protection to be justified: when you are incrementing the pointer address and storing it, at some future point in your code you will be dereferencing the pointer and using the value. So the compiler wants to avoid situations where you end up changing some other memory location or freeing memory that is owned by some other part of your code. To avoid such scenarios, the restriction is in place.
In addition to hardware issues, another factor was the emergence of implementations which attempted to trap on various kinds of programming errors. Although many such implementations could be most useful if configured to trap on constructs which a program is known not to use, even though they are defined by the C Standard, the authors of the Standard did not want to define the behavior of constructs which would--in many programming fields--be symptomatic of errors.
In many cases, it will be much easier to trap on actions which use pointer arithmetic to compute address of unintended objects than to somehow record the fact that the pointers cannot be used to access the storage they identify, but could be modified so that they could access other storage. Except in the case of arrays within larger (two-dimensional) arrays, an implementation would be allowed to reserve space that's "just past" the end of every object. Given something like doSomethingWithItem(someArray+i);, an implementation could trap any attempt to pass any address which doesn't point to either an element of the array or the space just past the last element. If the allocation of someArray reserved space for an extra unused element, and doSomethingWithItem() only accesses the item to which it receives a pointer, the implementation could relatively inexpensively ensure that any non-trapped execution of the above code could--at worst--access otherwise-unused storage.
The ability to compute "just-past" addresses makes bounds checking more difficult than it otherwise would be (the most common erroneous situation would be passing doSomethingWithItem() a pointer just past the end of the array, but the behavior would be defined unless doSomethingWithItem tried to dereference that pointer--something the caller may be unable to prove). Because the Standard would allow compilers to reserve space just past the array in most cases, however, such an allowance would let implementations limit the damage caused by untrapped errors--something that would likely not be practical if more generalized pointer arithmetic were allowed.

What is a NULL value

I am wondering what exactly is stored in memory when we set a particular pointer variable to NULL. Suppose I have a structure, say
typedef struct MEM_LIST MEM_INSTANCE;
struct MEM_LIST
{
    char *start_addr;
    int size;
    MEM_INSTANCE *next;
};

MEM_INSTANCE *front;
front = (MEM_INSTANCE*)malloc(sizeof(MEM_INSTANCE));
-1) If I make front = NULL, what will be the value which actually gets stored in the different fields of front, say front->size or front->start_addr? Is it 0 or something else? I have limited knowledge of this NULL thing.
-2) If I do free(front), it frees the memory which is pointed to by front. So what exactly does free mean here: does it make it NULL or make it all 0?
-3) What can be a good strategy to deal with initialization of pointers and freeing them?
Thanks in advance
There are many good contributed answers which adequately address the questions. However, the coverage of NULL is light.
In a modern virtual memory architecture, NULL points to memory for which any reference (that is, an attempt to read from or write to memory at that address) causes a segfault exception—also called an access violation or memory fault. This is an intentional protective mechanism to detect and deal appropriately with invalid memory accesses:
char *p = 0;
for (int j = 0; j < 50000000; ++j)
*(p += 1000000) = 10;
This code writes a ten at every millionth memory byte. It won't run for many loops—probably not even once. Either it will attempt to access an unmapped address, or it will attempt to modify read-only memory—where constant data or program code reside. The CPU will interrupt the instruction midway and report the exception to the operating system. Since there's no exception handling specified, the default o/s handling is to terminate the program. Linux displays Segmentation fault (for historical reasons). MS Windows is inconsistent, but tends to say access violation. The same should happen with any program in protected virtual memory doing this:
char *p = NULL;
if (p [34] == 'Y')
printf ("A miracle has occurred!\n");
This segfaults. A memory location near the NULL address is being dereferenced.
At the risk of confusion, it is possible that a large offset from zero will be valid memory. Thirty-four certainly won't be okay, but 34,000 might be. Different operating systems and different program processing tools (linkers) reserve a fixed amount of the zero end of memory and arrange for it to be unmapped. It could be as little as 1K, though 8K was a popular choice in the 1990s. Systems with ample virtual address space (not memory, but potential memory) might leave an 8M or 16M memory hole. Some modern operating systems randomize the amount of space reserved, as well as randomly varying the locations for the code and data sections each time a program starts.
The other extreme is non-virtual memory architectures. Typically, these provide valid addresses beginning at address zero up to the limit of installed memory. Such is common in embedded processors, many DSPs, and pre-protected mode CPUs, 8 and 16-bit processors like the 8086, 68000, etc. Reading a NULL address does not cause any special CPU reaction—it simply reads whatever is there, which is usually interrupt vectors. Writes to low memory usually result in hard-to-diagnose dire consequences as interrupt vectors are used asynchronously and infrequently.
Even stranger is the segment model of the oddly named "real-mode" x86 using small or medium memory model conventions. Such data addresses are 16 bits using the DS register which is set to the program's initialized data area. Dereferencing NULL accesses the first bytes of this space, but MSDOS programs contain ancient runtime structures for compatibility with CP/M, an o/s Fred Flintstone used. No exceptions, and maybe no consequences for modifying the memory near NULL in this environment. These were challenging bugs to find without program source code.
Virtual memory protection was a huge leap forward in creating stable systems and protecting programmers from themselves. Properly used NULLs provide significant safety and rapid debugging of programming flaws.
Wow, a lot of answers effectively claim that assigning NULL to a pointer sets it to point to the address 0, confusing value and representation. It does not. Setting a pointer to the value NULL or 0 is an abstract conception that sets the pointer to an invalid value not pointing to any valid object. The binary representation actually stored in memory does not need to be all bits 0. This is usually not an architecture thing, it is up to the compiler. In fact I had an old DOS compiler (on x86) that used all bits 1 for a NULL pointer.
Additionally, any pointer type is allowed to have its own binary representation for NULL, as long as all these pointers compare as equal when compared.
Granted, most of the times all bits are 0 for a NULL pointer for practical reasons, but it is not required. This means that using calloc() or memset(0) is not a portable initialization of pointers.
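A short sketch of the distinction, with strictly portable initialization next to the all-bits-zero approach (the struct and field names are made up):
#include <cstring>

struct Node {
    Node *next;
    int   value;
};

int main() {
    // Portable: the compiler stores whatever bit pattern represents a null pointer.
    Node a = { nullptr, 0 };     // in C: Node a = { NULL, 0 }; or a.next = NULL;

    // Not strictly portable for the pointer member: this writes all bits zero,
    // which the standard does not guarantee to be the null pointer representation
    // (it is on mainstream platforms, which is why it usually "works").
    Node b;
    std::memset(&b, 0, sizeof b);

    (void)a; (void)b;
}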
NULL assigned to a pointer does not change the fields pointed to by it.
In your case if you make front = NULL, front will no longer point to the structure allocated by your malloc, but will contain zero (NULL is 0 according to the C standard). Nothing will point to your allocated struct - it's a memory leak.
Note the critical distinction here between the pointer (front) and what it points to (the structure) - it's a big difference.
To answer your specific questions:
If you run front=NULL, front will no longer point to a MEM_INSTANCE structure, and hence front->size will have no meaning (it will probably crash the program)
If you do free(front) the OS will free the memory allocated to you for the MEM_INSTANCE structure. front will now point to memory that's no longer yours - and you can't access it
It's a broad question - please ask a more specific one.
Assignment to a pointer is not the same as assignment to the elements to which the pointer points. Assigning NULL to front will make it so that front points to nothing, but your allocated memory will be unaffected; it will not write any data into the fields formerly pointed to by front. Moreover, this is a memory leak.
Invoking free(front) will deallocate the block of memory but will not affect the value of front; in other words, front will point to a memory region that you no longer own and which is no longer valid for you to access. This is also known as a "dangling pointer", and it is generally a good idea to follow free(front) immediately with front=NULL so that you know that front is no longer valid.
A good strategy for dealing with pointers is, at least in C++, to use smart pointer classes and to perform allocation only in constructors and to perform deallocation only in destructors. Another good strategy is to ensure that you always assign NULL to any pointer that you have just freed. In C, you really just have to make sure that your allocations are matched properly with deallocations. It can also help to use "name_of_object_create" and "name_of_object_destroy" functions that parallel C++ constructors/destructors; however, there is no way in C to ensure automatic destruction.
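In C++, a hedged sketch of that strategy using a standard smart pointer (illustrative only; the struct is made up):
#include <memory>

struct MemBlock {
    int size = 0;
};

int main() {
    // Ownership is tied to the smart pointer: no manual delete, no dangling pointer.
    std::unique_ptr<MemBlock> block = std::make_unique<MemBlock>();
    block->size = 42;

    block.reset();         // frees the object and sets the pointer to null in one step
    // block->size = 1;    // would now be a null dereference, which is far easier to catch
}                          // if reset() were omitted, the destructor would free it here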
NULL is a sentinel value that denotes that a pointer does not point to a meaningful location. I.e. "Do not attempt to dereference this".
So what exactly free means here, does it make it NULL or make it all 0.
Neither. It merely frees the memory block that front points to. The value in front remains as it was.
In C/C++ NULL == 0.
int* a = NULL;
int* b = 0;
The values stored in the variables a and b will both be 0. There is no special magic to "NULL". The day this was explained to me, pointers suddenly made sense.
The definition of NULL is usually
#define NULL 0
or
#define NULL (void*)0
Assigning it to a pointer merely makes that pointer stop pointing at whatever memory address it was pointing at and point to memory address 0 instead. This is usually done at initialization or after a pointer's memory has been freed, though it isn't necessary. Setting a pointer equal to NULL does not deallocate memory or change any values of whatever it used to point to.
Calling free() (in C) or delete (in C++) will deallocate the memory the pointer pointed to, but it will not set the pointer to NULL. Dereferencing the pointer after it has been freed is undefined behavior (i.e. it typically crashes). Therefore a common idiom is to set a pointer to NULL after it has been deallocated, to more easily catch erroneous dereferences later on.
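That idiom in a short sketch (using malloc/free to match the question; the values are arbitrary):
#include <cstdlib>

int main() {
    int *p = static_cast<int*>(std::malloc(sizeof *p));
    if (p == NULL)
        return 1;          // allocation failed

    *p = 42;

    std::free(p);          // the memory is returned; p still holds the old, now-dangling address
    p = NULL;              // a later accidental use is now a null dereference, not silent corruption

    if (p != NULL)         // guards like this become reliable
        *p = 7;
}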
It depends on what you mean. Null traditionally means "no value".
In C, null generally means 0, so a null pointer points to address 0. However, if you actually have a piece of memory, it could contain anything left over from whatever used it last. If you clear the memory to (say) 0s and then treat that memory as containing pointers, those pointers will be null.
You have to think about what a pointer is: it's simply a value that holds the address of some memory. So if the address is 0x1, then the pointer is pointing at the second byte in memory (remember that addressing traditionally starts at 0 for the first item, 1 for the second, and so on). So char *p = (char *)0x1; says that p points to the memory at address 1. Since I have declared it as char *, I'm saying that I'm interested in a char-sized value in the memory pointed at by p. So *p is the value in the second byte of memory.
for example:
take the following
struct somestruct { char p; };
// this means that I've got a pointer to somestruct aimed at location NULL (0x00000)
somestruct* ptrToSomeStruct = NULL;
so ptrToSomeStruct->p says: take the contents of where ptrToSomeStruct points (0x00000), and whatever is there, take that to be the value of a char; so you are reading the first byte in memory.
now if I declare it like so:
// this means that I've got a somestruct on the stack, and therefore it's got some memory behind it
somestruct someStructOnTheStack;
so someStructOnTheStack.p says: take the contents of where someStructOnTheStack lives (somewhere on the stack), and whatever is there, take that to be the value of a char; so you are reading some byte from the stack.
Reflecting the comments below:
One of the key problems faced by programmers in C (and similar languages) is that sometimes a pointer will be pointing to the wrong part of memory, so when you read the value you go to the wrong part of memory to start with; hence what you find there is wrong anyway. In lots of cases the wrong address is actually 0. This, like my examples, means: go to the start of memory and read whatever is there. To help with programming errors where you actually have 0 in a pointer, many operating systems/architectures prevent you from reading or writing that memory, and when you do, your program gets an address exception/fault.
Typically, in classic pointers, a null pointer points to address 0x0. It depends on architecture and specific language, but if it is a primitive type then the value 0 would be considered NULL.
In Intel architectures, the beginning of memory (address 0) contains reserved space, which cannot be allocated. It is also outside the boundary of any running application. So a pointer pointing there would quite safely mean NULL as well.
Speaking in C:
The preprocessor macro NULL is #defined (by stdio.h or stddef.h) with the value 0 (or (void *)0).
-1) If I make front=NULL. What will be the value which actually gets stored in the different fields of the front, say front->size ,front->start_addr. Is it 0 or something else. I have limited knowledge in this NULL thing.
You will have front == 0x0. Doing front->size will raise a SIGSEGV.
-2) If I do a free(front); It frees the memory which is pointed out by front. So what exactly free means here, does it make it NULL or make it all 0.
free() will mark the memory once held by front as free, so another malloc/realloc call may reuse it. It does not set your pointer to NULL (free only receives a copy of the pointer's value), and it certainly won't set the whole struct to 0.
-3) What can be a good strategy to deal with initialization of pointers and freeing them .
I like to initialize my pointers to NULL and set them to NULL after being deallocated.
Your question suggests that you don't understand pointers at all.
If you assign front = NULL, the compiler will set front to 0, and since front contained the address of the actual structure, you will lose the ability to free it.
Read "Kernighan & Ritchie" once again.