Why does a wild pointer hold a zero address rather than a garbage address? - c++

I have been trying to find the size of a particular datatype like "int" without using sizeof() and found this:
#include <stdio.h>

int main() {
    int *ptr;                            /* Declare a pointer */
    printf("Size of ptr = %d\n", ptr);
    ptr++;
    printf("Size of ptr = %d\n", ptr);
    return 0;
}
This prints the correct size for int. How?
Isn't a wild pointer supposed to contain a garbage address rather than zero? And if it contains zero, how is it different from a NULL pointer, given that NULL is (void*)0?

Since ptr is uninitialised, its value is indeterminate and accessing its value gives undefined behaviour. The meaning of "undefined", somewhat ironically, is defined by the C and C++ standards to mean something like "this standard doesn't constrain what happens".
Beginners often incorrectly assume this means it must contain a "garbage value" or be a "wild pointer" or "add some colourful description here" but that is simply not the case.
The meaning of "value is indeterminate" or "the behaviour on accessing the value is undefined" is that any behaviour is permitted from code that accesses the value.
Accessing the value is necessary to print it, increment it, or (in case of a pointer) dereference it (access contents of the address identified by the pointer's value).
The behaviour of code that accesses the value is undefined. Giving a printed value of zero, 42, or a "garbage value" are all correct outcomes. Equally, however, the result could mean no output, or undesirable actions, such as reformatting a hard drive. The behaviour may even change over time if the code is executed repeatedly. Or it may be 100% repeatable (for a specific compiler, specific operating system, specific hardware, etc).
Practically, it is quite common for code with undefined behaviour to give no sign of malfunction during program testing, but to later cause some nasty and visible but unintended effect when the program is installed and executed on a customer's computer. That tends to result in grumpy customers, bug reports that the developers may be unable to replicate, and stress for developers in trying to fix the flaw.
Trying to explain why undefined behaviour results in some particular outcome (such as printing a value of zero) is therefore pointless.

The first print will show garbage or zero, depending on your compiler and on whatever value was previously in that memory location.
If it was zero, then the second print will show the size of int, because incrementing a pointer advances it by the size of the pointee.
For instance:
char *x = 0;
x++; //x=1
int *y = 0;
y++; //y=4
In your case, if you got a 0 on the first print, it was the same as if you had initialized it to NULL, but you can't count on it always being zero.
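If the goal is just to measure the size of int without sizeof, there is a well-defined way to do it that avoids the uninitialised pointer entirely. A minimal sketch, using a pointer to a real object and the one-past-the-end rule:

#include <iostream>

int main() {
    int x = 0;          // a real object, so both pointers below are valid
    int *p = &x;        // points at x
    int *q = p + 1;     // one past x: a valid pointer value (must not be dereferenced)

    // The byte distance between q and p is the size of int on this platform.
    std::cout << "Size of int = "
              << reinterpret_cast<char *>(q) - reinterpret_cast<char *>(p) << '\n';
    return 0;
}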

Related

Is there a difference in compiling/executing code on different Operating Systems?

I have just found a problem and I have no idea what it could be. I started learning programming a few weeks ago and I am learning about pointers.
I compiled exactly the same code on 2 different PCs. On the first, the program runs perfectly. On the second, it stops working when it reaches a certain line.
I use 2 PCs.
The one at my workplace runs Windows XP SP3. On this one, the program worked fine.
The one at my home runs Windows 7 SP1. It compiled the code, but the program did not work.
I am writing and compiling using DEV C++ and TDM GCC 5.1.0 in both systems.
#include <iostream>
using namespace std;

int main(void) {
    int* pointer;
    cout << "pointer == " << pointer << "\n";
    cout << "*pointer == " << *pointer << "\n"; // this is the line where the program stops.
    cout << "&pointer == " << &pointer << "\n";
    return 0;
}
The output in the first computer was something like:
pointer == 0x000001234
*pointer == some garbage value
&pointer == 0x000007865
In the second computer, it stops at second line.
pointer == 0x1
I do understand that the pointer has not been assigned to a variable. Therefore, it does not store any correct address. Even so, it should at least show the garbage value inside it, or a "0" to indicate it does not yet have an address to point to. I know the code is right because it worked fine on the first PC. But I do not understand why it failed on the other computer.
I know the code is right because it worked fine on the first PC
You know no such thing.
You have undefined behaviour, and one entirely valid consequence is a program that always works. Or always works except on Saturdays, or always works until after you finished testing and shipped it to a paying customer, or always works on one machine and always fails on another.
The behaviour is undefined, not "defined to some specific consistent observable mode of failure".
Specifically, the real risk of undefined behaviour isn't simply that the result of some operation has an unspecified value, but that it may have undefined and unpredictable side-effects - on apparently-unrelated areas of your program, or on the system as a whole.
Even so, it should at least show the garbage value inside it
It did. But then you asked it to dereference that garbage value.
Reading any variable with an unspecified value is itself Undefined Behaviour, so the first piece of UB is reading the value of the pointer.
Following (dereferencing) a pointer which doesn't point to a valid object is also undefined behaviour, because you don't know whether the unspecified value you illegally interpreted as an address is correctly aligned for the type, or is mapped in your process' address space.
If you successfully load some integer from that address, that is a third piece of undefined behaviour, because again its value is unspecified.
So, the worst-case immediate pitfalls (with hardware trap values and restrictive alignment) are:
read the unspecified pointer value, get a trap representation, die with a hardware trap
OR read the unspecified pointer value, interpret it as an address which is misaligned, die with a bus error
OR follow the unspecified pointer to an unmapped address, die with a segment violation
OR survive all the previous steps - by pure chance - load some random value from some location in memory. Then die because that value is a trap representation.
But if your process just dies, reproducibly, you can easily debug and fix it with no ill effects. In that sense, crashing at the point of invoking UB is actually the best possible outcome. The alternatives are worse, less predictable, and harder to debug.
I do understand that the pointer has not been assigned to a variable. Therefore, it does not store any correct address. Even so, it should at least show the garbage value inside it, or a "0" to indicate it does not yet have an address to point to.
It did! That was the 0x000001234.
Unfortunately you then tried to dereference this invalid pointer, and print the value of an int that does not exist. You cannot do that.
If you hadn't done that, we'd have made it to the third line, where the 0x000007865 would correctly represent the address of the pointer, which is an object with name pointer and type int* that does indeed exist.
I know the code is right because it worked fine on the first PC.
One of the things you'll have to get used to with C++ is that "it appears to work on one computer" is very far from proof that the code is correct. Read about undefined behaviour and weep slow tears.
But I do not understand why it failed on the other computer.
Because the code isn't right, and you didn't get "lucky" this time.
We could analyse a few reasons why it appeared to work on one system and not the other, and there are reasons for that. But it's late, and you're just starting out, and since this is undefined behaviour it doesn't matter. :)
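For contrast, here is a sketch of the same program with the pointer initialised to a real object, so every access is well defined and both machines behave the same (the value 42 is arbitrary):

#include <iostream>

int main() {
    int value = 42;          // a real int for the pointer to refer to
    int* pointer = &value;   // initialised, so every access below is well defined

    std::cout << "pointer == " << pointer << "\n";    // the address of value
    std::cout << "*pointer == " << *pointer << "\n";  // 42
    std::cout << "&pointer == " << &pointer << "\n";  // the address of pointer itself
    return 0;
}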

Is a null value equal to zero (0)? [duplicate]

This question already has answers here:
C++: Uninitialized variables garbage
(5 answers)
Closed 4 years ago.
A programmer told me that if we don't assign a value to a variable, e.g. max, then it is considered a null value.
#include <iostream>
using namespace std;

int main()
{
    int max = 0, x[5];
    for (int a = 0; a < 5; a++)
    {
        cout << "Enter no " << a + 1 << " : ";
        cin >> x[a];
        if (max < x[a])
        {
            max = x[a];
        }
    }
    cout << endl << max;
}
output:
Enter no 1 : 1
Enter no 2 : 5
Enter no 3 : 8
Enter no 4 : 7
Enter no 5 : 5
8
And when I type max=0, meaning I assign 0 to max at the declaration, it gives me the same result. Does that mean a null value is equal to 0?
If yes, then what is the difference between a null value and 0?
It actually depends on what programming language you are writing your code in.
For instance, if you read an uninitialized variable in C/C++, you cannot know which value it will give you.
Memory is not zeroed when the variable is allocated, so whatever bits were there will be read as a value of that variable's type.
Game Maker's own language, GML, allows you to decide at compile time whether uninitialized variables should be set to 0 by default or not; but any decent programmer would turn that option off and not rely on it. I'm not aware of other languages that automatically initialize variable values.
I'd say relying on such a feature is bad practice.
About the difference between null and 0, it actually depends on the base type of your variable.
An integer can only store integers, so null will still be readable as an integer.
If you default initialize a fundamental-type object (such as int) in automatic storage, the value will be indeterminate. The behaviour of reading an indeterminate value is undefined.
Undefined behaviour means that the standard doesn't constrain the behaviour of the program in any way. As far as the C++ standard is concerned, possible behaviours include the following, none of which are guaranteed:
Output that you expect.
Output that you don't expect.
Same output as some program which doesn't have UB.
Different output than some program which doesn't have UB.
Any output.
No output whatsoever.
Side-effects that you expect.
Side-effects that you don't expect.
Any Side-effects.
No Side-effects whatsoever.
Possible side-effects include:
Corruption of data.
Security vulnerabilities.
Anything within the capability of the process, hopefully limited by the OS.
Inconsistent behaviour on other systems.
Inconsistent behaviour even on same system if you re-compile using another compiler.
Inconsistent behaviour even if you re-compile using same compiler.
Inconsistent behaviour even without recompilation during another execution:
Possibly only when you're on vacation.
Possibly only when you're demonstrating your program to your employer or important client.
Consistent behaviour in all of the above cases.
A programmer told me that if we don't assign a value to a variable, e.g. max, then it is considered a null value.
There is no such thing as a null-value integer in C++. There is such a thing as a null pointer, as well as a null character, but these are not directly related to each other, and neither has anything to do with an uninitialized variable.
Your programmer is using confusing terminology, but if by null value they meant indeterminate value, then they are correct. In that case your question becomes: what is the difference between an indeterminate value and 0? The answer is that reading an indeterminate value is undefined behaviour (see above).
when I type max=0, meaning I assign 0 to max at the declaration
To be pedantic, you have int max=0 which is not an assignment, but initialization.
There's no 'null' value in C++. Local primitives are uninitialized (you'll get whatever garbage happens to be in that memory at the time), while static and global variables are zero-initialized. So with int max; you'll either get random garbage or zero, depending on where you declared it.
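A small sketch of that storage-duration distinction (the variable names are only illustrative):

#include <iostream>

int global_max;              // static storage duration: zero-initialised, guaranteed to be 0

int main() {
    static int static_max;   // also static storage: guaranteed to be 0
    int local_max;           // automatic storage: indeterminate value
    int local_zero = 0;      // explicitly initialised: guaranteed to be 0

    std::cout << global_max << ' ' << static_max << ' ' << local_zero << '\n';  // 0 0 0
    // std::cout << local_max;   // reading an indeterminate value: undefined behaviour
    return 0;
}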

C++ array accessing

let's say I have:
int test[10];
on a 32-bit machine. What if I do:
int b = test[-1];
Obviously that's a big no-no when it comes to accessing an array (out of bounds), but what actually happens? Just curious.
Am I accessing the 32-bit word "before" my array?
int b = *(test - 1);
or just addressing a very far away word (starting at "test" memory location)?
int b = *(test + 0xFFFFFFFF);
0xFFFFFFFF is the two's complement representation of decimal -1
The behaviour of your program is undefined as you are attempting to access an element outside the bounds of the array.
What might be happening is this: Assuming you have a 32 bit int type, you're accessing the 32 bits of memory on the stack (if any) before test[0] and are casting this to an int. Your process may not even own this memory. Not good.
Whatever happens, you get undefined behaviour since pointer arithmetic is only defined within an array (including the one-past-the-end position).
A better question might be:
int test[10];
int * t1 = test+1;
int b = t1[-1]; // Is this defined behaviour?
The answer to this is yes. The definition of subscripting (C++11 5.2.1) is:
The expression E1[E2] is identical (by definition) to *((E1)+(E2))
so this is equivalent to *((t1)+(-1)). The definition of pointer addition (C++11 5.7/5) is for all integer types, signed or unsigned, so nothing will cause -1 to be converted into an unsigned type; so the expression is equivalent to *(t1-1), which is well-defined since t1-1 is within the array bounds.
The C++ standard says that it's undefined behavior and illegal. What this means in practice is that anything could happen, and the anything can vary by hardware, compiler, options, and anything else you can think of. Since anything could happen there isn't a lot of point in speculating about what might happen with a particular hardware/compiler combination.
The official answer is that the behavior is undefined. Unofficially, you are trying to access the integer before the start of the array. This means that you instruct the computer to calculate the address that precedes the start of the array by 4 bytes (in your case). Whether this operation succeeds or not depends on multiple factors, among them whether the array is allocated on the stack segment or in the static data segment, and where exactly that address ends up being. On a general-purpose machine (Windows/Linux) you are likely to get a garbage value as a result, but it may also result in a memory violation error if the address happens to be somewhere the process is not authorized to access. What may happen on specialized hardware is anybody's guess.

Behavior of uninitialized local char?

If you have, let's say, a local int that is uninitialized, then it gets an undefined value; but if you have a local char variable, shouldn't that have an undefined value as well? Of course 0 could be that undefined value, but I was wondering if char is any different, since all related info I find is about int, and the program below just outputs 0 when the char variable is cast to an int. I'm using GCC 4.7 with no flags.
#include <iostream>

int main()
{
    char test1;
    int test2;
    std::cout << test2;        // garbage
    std::cout << std::endl;
    std::cout << (int)test1;   // 0
    return 0;
}
Uninitialised means really uninitialised. Just because you consistently get a particular value on your machine at a particular time, doesn't mean that will always be the case all the time on all machines.
You can verify that nothing is initialising your variable by dumping the assembly code for your function and inspecting it.
If you have, let's say, a local int that is uninitialized, then it gets an undefined value
No, that isn't the right way to think about it. Your local variable doesn't get an undefined value, it gets no value whatsoever. Subsequently querying such an uninitialized variable invokes undefined behavior.
Your program won't necessarily print "0". It won't necessarily print any number, or even anything at all. Granted, on typical computers, using typical compilers, your program will print some number, but within the scope of the C++ language, we can't make any prediction about what your program will do or not do.
Local variables get their initial values from whatever random data is in the stack space they occupy at that moment. There is no guarantee that space contains zeros.
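For contrast, a sketch of the same program with both locals initialised, so every read is well defined:

#include <iostream>

int main()
{
    char test1 = 'A';   // both variables now have values before being read
    int test2 = 7;

    std::cout << test2;                     // 7
    std::cout << std::endl;
    std::cout << static_cast<int>(test1);   // 65 on ASCII/UTF-8 platforms
    return 0;
}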

Why is address zero used for the null pointer?

In C (or C++ for that matter), pointers are special if they have the value zero: I am advised to set pointers to zero after freeing their memory, because it means freeing the pointer again isn't dangerous; when I call malloc it returns a pointer with the value zero if it can't get me memory; I use if (p != 0) all the time to make sure passed pointers are valid, etc.
But since memory addressing starts at 0, isn't 0 just as valid an address as any other? How can 0 be used for handling null pointers if that is the case? Why isn't a negative number null instead?
Edit:
A bunch of good answers. I'll summarize what has been said in the answers expressed as my own mind interprets it and hope that the community will correct me if I misunderstand.
Like everything else in programming it's an abstraction. Just a constant, not really related to the address 0. C++0x emphasizes this by adding the keyword nullptr.
It's not even an address abstraction, it's the constant specified by the C standard and the compiler can translate it to some other number as long as it makes sure it never equals a "real" address, and equals other null pointers if 0 is not the best value to use for the platform.
In case it's not an abstraction, which was the case in the early days, the address 0 is used by the system and off limits to the programmer.
My negative number suggestion was a little wild brainstorming, I admit. Using a signed integer for addresses is a little wasteful if it means that apart from the null pointer (-1 or whatever) the value space is split evenly between positive integers that make valid addresses and negative numbers that are just wasted.
If any number is always representable by a datatype, it's 0. (Probably 1 is too. I think of the one-bit integer, which would be 0 or 1 if unsigned, or just the sign bit if signed, or the two-bit integer, which would be [-2, 1]. But then you could just go for 0 being null and 1 being the only accessible byte in memory.)
Still there is something that is unresolved in my mind. The Stack Overflow question Pointer to a specific fixed address tells me that even if 0 for null pointer is an abstraction, other pointer values aren't necessarily. This leads me to post another Stack Overflow question, Could I ever want to access the address zero?.
2 points:
only the constant value 0 in the source code is the null pointer - the compiler implementation can use whatever value it wants or needs in the running code. Some platforms have a special pointer value that's 'invalid' that the implementation might use as the null pointer. The C FAQ has a question, "Seriously, have any actual machines really used nonzero null pointers, or different representations for pointers to different types?", that points out several platforms that used this property of 0 being the null pointer in C source while represented differently at runtime. The C++ standard has a note that makes clear that converting "an integral constant expression with value zero always yields a null pointer, but converting other expressions that happen to have value zero need not yield a null pointer".
a negative value might be just as usable by the platform as an address - the C standard simply had to choose something to use to indicate a null pointer, and zero was chosen. I'm honestly not sure if other sentinel values were considered.
The only requirements for a null pointer are:
it's guaranteed to compare unequal to a pointer to an actual object
any two null pointers will compare equal (C++ refines this such that this only needs to hold for pointers to the same type)
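Both requirements can be checked directly in a short sketch:

#include <iostream>

int main() {
    int object = 1;
    int *p = &object;    // a pointer to an actual object
    int *a = nullptr;    // a null pointer...
    int *b = 0;          // ...and another one, written with the literal 0

    std::cout << std::boolalpha
              << (a == b) << '\n'    // true: any two null pointers compare equal
              << (a == p) << '\n';   // false: a null pointer never equals &object
    return 0;
}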
Historically, the address space starting at 0 was always ROM, used for some operating system or low-level interrupt handling routines. Nowadays, since everything is virtual (including the address space), the operating system can map any allocation to any address, so it can specifically NOT allocate anything at address 0.
IIRC, the "null pointer" value isn't guaranteed to be zero. The compiler translates 0 into whatever "null" value is appropriate for the system (which in practice is probably always zero, but not necessarily). The same translation is applied whenever you compare a pointer against zero. Because you can only compare pointers against each other and against this special-value-0, it insulates the programmer from knowing anything about the memory representation of the system. As for why they chose 0 instead of 42 or somesuch, I'm going to guess it's because most programmers start counting at 0 :) (Also, on most systems 0 is the first memory address and they wanted it to be convenient, since in practice translations like I'm describing rarely actually take place; the language just allows for them).
You must be misunderstanding the meaning of constant zero in pointer context.
Neither in C nor in C++ can pointers "have value zero". Pointers are not arithmetic objects. They cannot have numerical values like "zero" or "negative" or anything of that nature. So your statement about "pointers ... have the value zero" simply makes no sense.
In C and C++ pointers can have the reserved null-pointer value. The actual representation of the null-pointer value has nothing to do with any "zeros". It can be absolutely anything appropriate for a given platform. It is true that on most platforms the null-pointer value is represented physically by an actual zero address value. However, if on some platform address 0 is actually used for some purpose (i.e. you might need to create objects at address 0), the null-pointer value on such a platform will most likely be different. It could be physically represented as the 0xFFFFFFFF address value or as the 0xBAADBAAD address value, for example.
Nevertheless, regardless of how the null-pointer value is respresented on a given platform, in your code you will still continue to designate null-pointers by constant 0. In order to assign a null-pointer value to a given pointer, you will continue to use expressions like p = 0. It is the compiler's responsibility to realize what you want and translate it into the proper null-pointer value representation, i.e. to translate it into the code that will put the address value of 0xFFFFFFFF into the pointer p, for example.
In short, the fact that you use 0 in your source code to generate null-pointer values does not mean that the null-pointer value is somehow tied to address 0. The 0 that you use in your source code is just "syntactic sugar" that has absolutely no relation to the actual physical address the null-pointer value is "pointing" to.
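A sketch of the difference between the compile-time constant 0 and a run-time zero (the variable names are illustrative):

#include <cstdint>

int main() {
    int *p = 0;          // the literal 0 here is a null pointer constant
    int *q = nullptr;    // the C++11 spelling of the same idea

    bool both_null = (p == 0) && (q == nullptr);   // true, whatever the bit pattern of null is

    // A run-time zero is different: converting it to a pointer is
    // implementation-defined and need not yield a null pointer on an exotic platform.
    std::intptr_t zero = 0;
    int *r = reinterpret_cast<int *>(zero);

    (void)both_null;
    (void)r;
    return 0;
}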
But since memory addressing starts at 0, isn't 0 just as a valid address as any other?
On some/many/all operating systems, memory address 0 is special in some way. For example, it's often mapped to invalid/non-existent memory, which causes an exception if you try to access it.
Why isn't a negative number null instead?
I think that pointer values are typically treated as unsigned numbers: otherwise for example a 32-bit pointer would only be able to address 2 GB of memory, instead of 4 GB.
My guess would be that the magic value 0 was picked to define an invalid pointer since it could be tested for with less instructions. Some machine languages automatically set the zero and sign flags according to the data when loading registers so you could test for a null pointer with a simple load then and branch instructions without doing a separate compare instruction.
(Most ISAs only set flags on ALU instructions, not loads, though. And usually you aren't producing pointers via computation, except in the compiler when parsing C source. But at least you don't need an arbitrary pointer-width constant to compare against.)
On the Commodore Pet, Vic20, and C64 which were the first machines I worked on, RAM started at location 0 so it was totally valid to read and write using a null pointer if you really wanted to.
I think it's just a convention. There must be some value to mark an invalid pointer.
You just lose one byte of address space, that should rarely be a problem.
There are no negative pointers. Pointers are always unsigned. Also if they could be negative your convention would mean that you lose half the address space.
Although C uses 0 to represent the null pointer, do keep in mind that the value of the pointer itself may not be a zero. However, most programmers will only ever use systems where the null pointer is, in fact, 0.
But why zero? Well, it's one address that every system shares. And oftentimes the low addresses are reserved for operating system purposes thus the value works well as being off-limits to application programs. Accidental assignment of an integer value to a pointer is as likely to end up zero as anything else.
Historically the low memory of an application was occupied by system resources. It was in those days that zero became the default null value.
While this is not necessarily true for modern systems, it is still a bad idea to set pointer values to anything but what memory allocation has handed you.
Regarding the argument about not setting a pointer to null after deleting it so that future deletes "expose errors"...
If you're really, really worried about this then a better approach, one that is guaranteed to work, is to leverage assert():
...
assert(ptr && "You're deleting this pointer twice, look for a bug?");
delete ptr;
ptr = 0;
...
This requires some extra typing, and one extra check during debug builds, but it is certain to give you what you want: notice when ptr is deleted 'twice'. The alternative given in the comment discussion, not setting the pointer to null so you'll get a crash, is simply not guaranteed to be successful. Worse, unlike the above, it can cause a crash (or much worse!) on a user if one of these "bugs" gets through to the shelf. Finally, this version lets you continue to run the program to see what actually happens.
I realize this does not answer the question asked, but I was worried that someone reading the comments might come to the conclusion that it is considered 'good practice' to NOT set pointers to 0 if it is possible they get sent to free() or delete twice. In those few cases when it is possible, it is NEVER good practice to use Undefined Behavior as a debugging tool. Nobody who has ever had to hunt down a bug that was ultimately caused by deleting an invalid pointer would propose this. These kinds of errors take hours to hunt down and nearly always affect the program in a totally unexpected way that is hard or impossible to trace back to the original problem.
An important reason why many operating systems use all-bits-zero for the null pointer representation, is that this means memset(struct_with_pointers, 0, sizeof struct_with_pointers) and similar will set all of the pointers inside struct_with_pointers to null pointers. This is not guaranteed by the C standard, but many, many programs assume it.
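A sketch of the kind of code that makes that assumption (the Node type is illustrative):

#include <cstring>

struct Node {
    int value;
    Node *next;
    Node *prev;
};

int main() {
    Node n;
    // Sets every byte of n to zero. On platforms where the null pointer is
    // all-bits-zero (the common case), n.next and n.prev become null pointers.
    // The standard does not guarantee this; plenty of real code assumes it anyway.
    std::memset(&n, 0, sizeof n);
    return 0;
}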
In one of the old DEC machines (PDP-8, I think), the C runtime would memory protect the first page of memory so that any attempt to access memory in that block would cause an exception to be raised.
The choice of sentinel value is arbitrary, and this is in fact being addressed by the next version of C++ (informally known as "C++0x", most likely to be known in the future as ISO C++ 2011) with the introduction of the keyword nullptr to represent a null valued pointer. In C++, a value of 0 may be used as an initializing expression for any POD and for any object with a default constructor, and it has the special meaning of assigning the sentinel value in the case of a pointer initialization. As for why a negative value was not chosen, addresses usually range from 0 to 2^N - 1 for some value N. In other words, addresses are usually treated as unsigned values. If the maximum value were used as the sentinel value, then it would have to vary from system to system depending on the size of memory, whereas 0 is always a representable address. It is also used for historical reasons, as memory address 0 was typically unusable in programs, and nowadays most OSs have parts of the kernel loaded into the lower page(s) of memory, and such pages are typically protected so that touching (dereferencing) them from a program (other than the kernel) will cause a fault.
It has to have some value. Obviously you don't want to step on values the user might legitimately want to use. I would speculate that since the C runtime provides the BSS segment for zero-initialized data, it makes a certain degree of sense to interpret zero as an un-initialized pointer value.
Rarely does an OS allow you to write to address 0. It's common to stick OS-specific stuff down in low memory; namely, IDTs, page tables, etc. (The tables have to be in RAM, and it's easier to stick them at the bottom than to try and determine where the top of RAM is.) And no OS in its right mind will let you edit system tables willy-nilly.
This may not have been on K&R's minds when they made C, but it (along with the fact that 0==null is pretty easy to remember) makes 0 a popular choice.
The value 0 is a special value that takes on various meanings in specific expressions. In the case of pointers, as has been pointed out many, many times, it is used probably because at the time it was the most convenient way of saying "insert the default sentinel value here." As a constant expression, it does not have the same meaning as bitwise zero (i.e., all bits set to zero) in the context of a pointer expression. In C++, there are several types for which the null value does not have a bitwise-zero representation, such as pointer to data member and pointer to member function.
Thankfully, C++0x has a new keyword for "expression that means a known invalid pointer that does not also map to bitwise zero for integral expressions": nullptr. Although there are a few systems that you can target with C++ that allow dereferencing of address 0 without barfing, so programmer beware.
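A hedged sketch of one such type: on common ABIs (for example the Itanium C++ ABI used by GCC and Clang) a null pointer-to-data-member is all-bits-one rather than all-bits-zero. The struct S is only illustrative:

#include <cstring>
#include <iostream>

struct S { int x; };

int main() {
    int S::*pm = nullptr;               // a null pointer to data member
    unsigned char bytes[sizeof pm];
    std::memcpy(bytes, &pm, sizeof pm);

    // On the ABIs mentioned above this prints all 255s: null is stored as -1,
    // because offset 0 is a valid offset for a real member.
    for (unsigned char b : bytes)
        std::cout << static_cast<int>(b) << ' ';
    std::cout << '\n';
    return 0;
}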
There are already a lot of good answers in this thread; there are probably many different reasons for preferring the value 0 for null pointers, but I'm going to add two more:
In C++, zero-initializing a pointer will set it to null.
On many processors it is more efficient to set a value to 0 or to test for it equal/not equal to 0 than for any other constant.
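A sketch of the first point; the Widget type is illustrative:

#include <iostream>

struct Widget {
    int *data;
};

int main() {
    int *p{};         // value-initialisation: p becomes a null pointer
    Widget w{};       // zero-initialises the members: w.data is a null pointer
    static int *s;    // static storage: zero-initialised, also a null pointer

    std::cout << std::boolalpha
              << (p == nullptr) << ' '
              << (w.data == nullptr) << ' '
              << (s == nullptr) << '\n';   // true true true
    return 0;
}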
This is dependent on the implementation of pointers in C/C++. There is no specific reason why NULL has to be equivalent to 0 in assignments to a pointer.
A null pointer is not the same thing as a null value. For example, the same strchr function of C will return a null pointer (which prints as nothing on the console), while returning the value instead would print (null) on the console.
True function:
char *ft_strchr(const char *s, int c)
{
    int i;

    if (!s)
        return (NULL);
    i = 0;
    while (s[i])
    {
        if (s[i] == (char)c)
            return ((char *)(s + i));
        i++;
    }
    if (s[i] == (char)c)
        return ((char *)(s + i));
    return (NULL);
}
This will produce an empty output: nothing is printed for the returned null pointer.
Whereas returning the value s[i] instead, as below, prints (null) on the console:
char *ft_strchr(const char *s, int c)
{
    int i;

    if (!s)
        return (NULL);
    i = 0;
    while (s[i])
    {
        if (s[i] == (char)c)
            return ((char *)(s + i));
        i++;
    }
    if (s[i] == (char)c)
        return (s[i]);
    return (NULL);
}
There are historic reasons for this, but there are also optimization reasons for it.
It is common for the OS to provide a process with memory pages initialized to 0. If a program wants to interpret part of that memory page as a pointer then it is 0, so it is easy enough for the program to determine that that pointer is not initialized. (this doesn't work so well when applied to uninitialized flash pages)
Another reason is that on many many processors it is very very easy to test a value's equivalence to 0. It is sometimes a free comparison done without any extra instructions needed, and usually can be done without needing to provide a zero value in another register or as a literal in the instruction stream to compare to.
The cheap comparisons for most processors are the signed less than 0, and equal to 0. (signed greater than 0 and not equal to 0 are implied by both of these)
Since 1 value out of all possible values needs to be reserved as bad or uninitialized, you might as well make it the one that has the cheapest test for equivalence to the bad value. This is also true for '\0' terminated character strings, as sketched below.
If you were to try to use greater or less than 0 for this purpose then you would end up chopping your range of addresses in half.
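The '\0'-terminated-string point can be seen in a sketch; my_strlen is just an illustrative name:

#include <cstddef>
#include <iostream>

// The loop condition "while (*s)" is literally "while not zero", which is the
// cheap comparison described above; that is why the string terminator is the value 0.
std::size_t my_strlen(const char *s) {
    std::size_t n = 0;
    while (*s++)
        ++n;
    return n;
}

int main() {
    std::cout << my_strlen("null") << '\n';   // prints 4
    return 0;
}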
The constant 0 is used instead of NULL because C was made by some cavemen trillions of years ago; NULL, NIL, ZIP, or NADDA would all have made much more sense than 0.
But since memory addressing starts at 0, isn't 0 just as valid an address as any other?
Indeed, although a lot of operating systems disallow you from mapping anything at address zero, even in a virtual address space. (People realized C is an insecure language and, reflecting the fact that null pointer dereference bugs are very common, decided to "fix" them by disallowing userspace code from mapping page 0; thus, if you call a callback but the callback pointer is NULL, you won't end up executing some arbitrary code.)
How can 0 be used for handling null pointers if that is the case?
Because a 0 used in comparison with a pointer will be replaced with some implementation-specific value, which is also the return value of malloc on a malloc failure.
Why isn't a negative number null instead?
This would be even more confusing.
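As a closing sketch, the usual malloc-failure check that the comparison-against-0 convention enables:

#include <cstdio>
#include <cstdlib>

int main() {
    // malloc reports failure by returning a null pointer; comparing the result
    // against 0 (or nullptr) detects that failure, regardless of how the null
    // pointer is actually represented on the platform.
    void *block = std::malloc(1024);
    if (block == 0) {
        std::puts("allocation failed");
        return 1;
    }
    std::free(block);
    return 0;
}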