non-NULL reserved pointer value - c++

How can I create a reserved pointer value?
The context is this: I have been thinking of how to implement a data structure for a dynamic scripting language (I am not planning on implementing this - just wondering how it would be done).
Strings may contain arbitrary bytes, including NUL. Thus, it is necessary to store the length separately. This requires a pointer (to point to the array) and a number. The first trick is that if the pointer is NULL, it cannot possibly be a valid string, so the number can be used for an actual integer.
If a second reserved pointer value could be created, this could be used to imply that the other field is now being used as a floating-point value. Can this be done?
One thought is to mmap() an address with no permissions, which could also be done to replace the usage of the NULL pointer.
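For illustration, the mmap() idea might look roughly like this (a sketch assuming POSIX mmap plus the widely available MAP_ANONYMOUS extension; reserve_sentinel_page is a made-up name and error handling is minimal):
#include <sys/mman.h>

/* Reserve one page with no permissions; its address can then serve as a
   sentinel that can never collide with a real, dereferenceable object. */
void *reserve_sentinel_page(void) {
    void *p = mmap(NULL, 4096, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return (p == MAP_FAILED) ? NULL : p;
}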

On any modern system, you can just use the pointer values 1, 2, ... 4095 for such purposes. Another frequent choice is (uintptr_t)-1, which is technically inferior, but used more frequently than 1 nevertheless.
Why are these values "safe"?
Modern systems safeguard against NULL pointer accesses by making it impossible to map anything at virtual address zero. Almost any dereference of a NULL pointer will hit this nonexistent region, the hardware will tell the OS that something bad happened, and the OS will segfault the process.
Since virtual memory mappings are page aligned (pages are at least 4k on current hardware), and nothing is mapped at address zero, nothing can be mapped in the entire range 0, ..., 4095. All these addresses are protected in the same way, and you can use them as special purpose values.
How much virtual memory space is reserved for this purpose is a system parameter; on Linux it is controlled by /proc/sys/vm/mmap_min_addr, and the root user can change it to zero, which would disable this protection (not a very smart idea). The default on Ubuntu is 64k (i.e., 16 pages).
This is also the reason why (uintptr_t)-1 is less safe than 1; even though any load of more than one byte will hit the zero page, the address (uintptr_t)-1 itself is not necessarily protected in this way. Consequently, doing string operations on (char*)-1 does not necessarily segfault.
Edit:
My original explanation with the special mapping seems to have been a bit stale; probably this was the way things were handled on the old Mac/PPC platform. Even though the effect is pretty much the same, I changed the details of the answer to reflect modern Linux. Anyway, the important point is not how the null page protection is achieved; the important point is that any sane, modern system will have some null page protection that encompasses at least the mentioned address range. Some more details can be found in this SO answer: https://stackoverflow.com/a/12645890/2445184
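To connect this back to the question, here is a minimal sketch of the value layout, assuming (as argued above) that addresses below 4096 are never mapped; all names are made up for illustration:
#include <cstddef>
#include <cstdint>

static char *const TAG_DOUBLE = reinterpret_cast<char *>(1);  // reserved, never a real object

struct ScriptValue {
    char *ptr;               // nullptr, TAG_DOUBLE, or the string's bytes
    union {
        std::int64_t i;      // valid when ptr == nullptr
        double       d;      // valid when ptr == TAG_DOUBLE
        std::size_t  len;    // valid otherwise: length of the string at ptr
    };
};

inline bool is_int(const ScriptValue &v)    { return v.ptr == nullptr; }
inline bool is_double(const ScriptValue &v) { return v.ptr == TAG_DOUBLE; }
inline bool is_string(const ScriptValue &v) { return !is_int(v) && !is_double(v); }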

In standard C (and standard C++), the approach that's 100% valid and works is simple: declare a variable, use its address as a magic value.
char magic;               /* the address of this object is the reserved value */
char *ptr = &magic;       /* mark ptr with the magic value */
if (ptr == &magic) { /* ... ptr carries the special meaning ... */ }
This guarantees that magic will never have any overlap with another object.
Magic pointer values such as (char *) 1 have their advantages too, but they are easy to get wrong. Even if you disregard the theoretical implementations where (char *) 1 may be a valid object, consider using (int *) 1 as a magic pointer value: if the optimiser assumes int * values are suitably aligned, it may remove checks that are no-ops only in 100% valid code, not in your code. For that reason I'd recommend the standard approach, and optionally switch temporarily to magic pointer values only if you find they help you debug.
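For example (a sketch with made-up names), a hash table often needs a "deleted slot" marker distinct from NULL, and the address of a file-local object serves nicely:
static char deleted_marker;                    /* its address is the sentinel */
static char *const DELETED = &deleted_marker;

/* slot == NULL     -> empty
   slot == DELETED  -> previously occupied, now removed
   otherwise        -> points at a live entry */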

mmap()ing an address can fail if the address is already assigned. It would probably be better to use the address of some static variable or function, or to obtain a unique address via malloc(1).

Related

range of values a c pointer can take?

In "Computer System: A Programmer's Perspective", section 2.1 (page 31), it says:
The value of a pointer in C is the virtual address of the first byte of some block of storage.
To me it sounds like a C pointer's value can take values from 0 to [size of virtual memory - 1]. Is that the case? If yes, I wonder if there is any mechanism that checks whether all pointers in a program are assigned legal values -- at least 0 and at most [size of virtual memory - 1] -- and where such a mechanism is built in -- in the compiler? the OS? or somewhere else?
There is no process that checks pointers for validity as use of invalid pointers has undefined effects anyway.
Usually it will be impossible for a pointer to hold a value outside of the addressable range as the two will have the same available range — e.g. both will be 32 bit. However some CPUs have rules about pointer alignment that may render some addresses invalid for some types of data. Some runtimes, such as 64-bit Objective-C, which is a strict superset of C, use incorrectly aligned pointers to disguise literal objects as objects on the heap.
There are also some cases where the complete address space is defined by the instruction set to be one thing but is implemented by that specific hardware to be another. An example from history is the original 68000 which defined a 32-bit space but had only 24 address lines. Very early versions of Mac OS used the spare 8 bits for flags describing the block of data, relying on the hardware to ignore them.
So:
there's no runtime checking of validity;
even if there were, the meaning of validity is often dependent on the specific model of CPU (not just the family) or specific version of the OS (ditto) so as to make checking a less trivial task than you might guess.
In practice, what will normally happen if your address is illegal per that hardware but is accessed as though legal is a processor exception.
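As a sketch of the pointer-tagging trick alluded to above (borrowing the low bit of an aligned pointer, loosely in the spirit of the Objective-C tagged pointers mentioned earlier; this assumes a 64-bit platform where valid int* values are at least 2-byte aligned, and all names are made up):
#include <cassert>
#include <cstdint>

// A tagged word is either a real int* (low bit 0) or a 32-bit payload
// shifted left by one (low bit 1).
using Tagged = std::uintptr_t;

inline Tagged from_ptr(int *p) {
    assert((reinterpret_cast<std::uintptr_t>(p) & 1u) == 0);  // alignment assumption
    return reinterpret_cast<std::uintptr_t>(p);
}
inline Tagged from_int(std::uint32_t n) { return (std::uintptr_t(n) << 1) | 1u; }
inline bool holds_int(Tagged t)         { return (t & 1u) != 0; }
inline std::uint32_t to_int(Tagged t)   { return static_cast<std::uint32_t>(t >> 1); }
inline int *to_ptr(Tagged t)            { return reinterpret_cast<int *>(t); }  // only when !holds_int(t)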
A pointer in C is an abstract object. The only guarantee provided by the C standard is that pointers can point to all the things they need to within C: functions, objects, one past the end of an object, and NULL.
In typical C implementations, pointers can point to any address in virtual memory, and some C implementations deliberately support this in large part. However, there are complications. For example, the value used for NULL may be difficult to use as an address, and converting pointers created for one type to another type may fail (due to alignment problems). Additionally, there are legal non-typical C implementations where pointers do not directly correlate to memory addresses in a normal way.
You should not expect to use pointers to access memory arbitrarily without understanding the rules of the C standard and of the C implementations you use.
There is no mechanism in C which will check if pointers in a program are valid. The programmer is responsible for using them correctly.
For practical purposes a C pointer is either NULL or a memory address to something else. I've never heard of NULL being anything but zero in real life. If it's a memory address you're not supposed to "care" what the actual number is; just pass it around, dereference it etc.

C++ - safe pointers range?

I know NULL (0x00000000) is a pointer to nothing because the OS doesn't allow the process to allocate any memory at this location. But if I use 0x00000001 (magic number or code-pointer), is it safe to assume as well that the OS won't allow memory to be allocated here?
If so then until where is it safe to assume that?
Standard (first)
The Standard only guarantees that 0 is a sentinel value as far as pointers go. The underlying memory representation is in no way guaranteed; it's implementation defined.
Using a pointer set to that sentinel value for anything else than reading the pointer state or writing a new state (which includes dereferencing or pointer arithmetic) is undefined behavior.
Virtual Memory
In these days of virtual memory (i.e., each process gets its own memory space, independent of the others), a null pointer is indeed most often represented as 0 in the process memory space. I don't know of any other architectures actually, though I imagine that on mainframes it may not be so.
Unix
In the Unix world, it is typical to reserve all the address space below 0x8000 for null values. The memory is not really allocated, it is just protected (i.e., placed in a special mode), so that the OS will trigger a segmentation fault should you ever try to read it or write to it.
The idea of using such a range is that a null pointer is not necessarily used as is. For example, if you have a std::pair<int, int>* p = 0; which is null, and access p->second, then the compiler will perform the arithmetic necessary to point to second (i.e., +4 generally) and attempt to access the memory at 0x4 directly. The problem is obviously compounded by arrays.
In practice, this 0x8000 limit should be practical enough to detect most issues (and avoid memory corruption or others). In this case, this means that you avoid the undefined behavior and get a "proper" crash. However, should you be using a large array you could overshoot it, so it's not a silver bullet.
The particular limit of your implementation or compiler/runtime stack can be determined either through documentation or by successive trials. There might even be a way to tweak it.
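As a small illustration of that arithmetic (a sketch using a plain struct as a stand-in for std::pair, since offsetof is only guaranteed for standard-layout types):
#include <cstddef>

struct PairOfInts { int first; int second; };  // stand-in for std::pair<int, int>

// Dereferencing a null PairOfInts* is undefined behaviour; the point is only
// that the address the hardware would see is the member's offset (typically
// 4), which still falls inside the guarded sub-0x8000 range.
static_assert(offsetof(PairOfInts, second) < 0x8000,
              "member offset lands in the protected null range");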
You should not assume anything about the actual values of pointers. Especially, the null pointer is not required to be represented by a zero address, even though the literal 0 does look like a zero.
The only valid range is supposed to be the range allocated to you by the OS. Anything else should be denied by the OS.
An exception to that rule is the shared memory.
The C++ standard doesn't "reserve" any pointer addresses other than zero (null). So it is not safe to use 1 or any other value as a "magic" pointer value. Of course, in practice, some implementations of C++ probably never use certain values. But you don't get any guarantees from the language definition.
I will try to give a broad view about this:
You will probably never access real memory addresses, because of the multiple sandboxing mechanisms that every modern OS puts in place.
What is a NULL pointer from the software viewpoint? A NULL pointer is a pointer variable that stores a value the programmer picks as a meaningful label, with the meaning "this pointer goes nowhere". A NULL pointer does not point to 0x00000000 by definition; the definition of a NULL pointer is not about where that pointer points, but about the value of the macro called NULL, and that value is what a NULL pointer holds.
In C you can usually assume that NULL == 0: NULL is a macro that expands to a null pointer constant, typically 0 or (void *)0. In C++ you do not have this liberty.
There are types, labels and values (in better terms, representations of values, not real values) for every variable, at least for primitives, and the same goes for pointers. If you are speaking about void pointers, you are speaking about pointers that contain a memory address (just like any pointer); the only special thing about them is that they need a cast in C++ to be used safely and effectively. It is a big mistake to think of void* as a pointer that points to nowhere, or to 0, or to NULL, or to 0x00000000.
By the way, I still don't get your problem...
A modern OS is likely to reserve at least one page for NULL pointer. So 0x1 (or 0x4 if you want 32-bit alignment) is likely to work.
But remember this is not guaranteed by C/C++ language. You would have to rely on your OS and compiler for such behavior.
Furthermore, there's no guarantee about the actual value of the NULL pointer. It may or may not be all zeros. If it's not, your trick won't work at all.

0xDEADBEEF vs. NULL

Throughout various code, I have seen memory allocation in debug builds with NULL...
memset(ptr,NULL,size);
Or with 0xDEADBEEF...
memset(ptr,0xDEADBEEF,size);
What are the advantages to using each one, and what is the generally preferred way to achieve this in C/C++?
If a pointer was assigned a value of 0xDEADBEEF, couldn't it still dereference to valid data?
Using either memset(ptr, NULL, size) or memset(ptr, 0xDEADBEEF, size) is a clear indication of the fact that the author did not understand what they were doing.
Firstly, memset(ptr, NULL, size) will indeed zero-out a memory block in C and C++ if NULL is defined as an integral zero.
However, using NULL to represent the zero value in this context is not an acceptable practice. NULL is a macro introduced specifically for pointer contexts. The second parameter of memset is an integer, not a pointer. The proper way to zero-out a memory block would be memset(ptr, 0, size). Note: 0 not NULL. I'd say that even memset(ptr, '\0', size) looks better than memset(ptr, NULL, size).
Moreover, the most recent (at the moment) C++ standard - C++11 - allows defining NULL as nullptr. The nullptr value is not implicitly convertible to type int, which means that the above code is not guaranteed to compile in C++11 and later.
In C language (and your question is tagged C as well) macro NULL can expand to (void *) 0. Even in C (void *) 0 is not implicitly convertible to type int, which means that in general case memset(ptr, NULL, size) is simply invalid code in C.
Secondly, even though the second parameter of memset has type int, the function interprets it as an unsigned char value. It means that only the lowest byte of the value is used to fill the destination memory block. For this reason memset(ptr, 0xDEADBEEF, size) will compile, but will not fill the target memory region with 0xDEADBEEF values, as the author of the code probably naively hoped. memset(ptr, 0xDEADBEEF, size) is equivalent to memset(ptr, 0xEF, size) (assuming 8-bit chars). While this is probably good enough to fill some memory region with intentional "garbage", things like memset(ptr, NULL, size) or memset(ptr, 0xDEADBEEF, size) still betray a major lack of professionalism on the author's part.
Again, as other answers have already noted, the idea here is to fill the unused memory with a "garbage" value. Zero is certainly not a good idea in this case, since it is not "garbagy" enough. When using memset you are limited to one-byte values, like 0xAB or 0xEF. If this is good enough for your purposes, use memset. If you want a more expressive and unique garbage value, like 0xDEADBEEF or 0xBAADF00D, you won't be able to use memset with it. You'll have to write a dedicated function that can fill a memory region with a 4-byte pattern.
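A sketch of such a dedicated helper might look like this (fill_pattern is a made-up name; it repeats the 4-byte value most significant byte first, so a memory dump reads DE AD BE EF ...):
#include <cstddef>
#include <cstdint>

// Fill a block with a repeating 4-byte pattern, byte by byte.
void fill_pattern(void *dst, std::uint32_t pattern, std::size_t size) {
    unsigned char *p = static_cast<unsigned char *>(dst);
    for (std::size_t i = 0; i < size; ++i)
        p[i] = static_cast<unsigned char>(pattern >> (8 * (3 - i % 4)));
}

// Example: fill_pattern(buffer, 0xDEADBEEF, buffer_size);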
A pointer in C and C++ cannot be assigned an arbitrary integer value (other than a Null Pointer Constant, i.e. zero). Such assignment can only be achieved by forcing the integral value into the pointer with an explicit cast. Formally speaking, the result of such a cast is implementation defined. The resultant value can certainly point to valid data.
Writing 0xDEADBEEF or another non-zero bit pattern is a good idea to be able to catch both write-after-delete and read-after-delete uses.
1) Write after delete
By writing a specific pattern you can check whether a block that has already been deallocated was written over later by buggy code; in our debug memory manager we use a free list of blocks, and before recycling a memory block we check that our custom pattern is still written all over the block. Of course it's sort of "late" when we discover the problem, but still much earlier than when it would be discovered without doing the check.
Also we have a special function that is called periodically and that can also be called on demand that just goes through the list of all freed memory blocks and check their consistency and so we can call this function often when chasing a bug. Using 0x00000000 as value wouldn't be as effective because zero may possibly be exactly the value that buggy code wants to write in the already deallocated block e.g. zeroing a field or setting a pointer to NULL (it's instead more unlikely that the buggy code wants to write 0xDEADBEEF).
2) Read after delete
Leaving the content of a deallocated block untouched or even writing just zeros will increase the possibility that someone reading the content of a dead memory block will still find the values reasonable and compatible with invariants (e.g. a NULL pointer as on many architectures NULL is just binary zeroes, or the integer 0, the ASCII NUL char or a double value 0.0).
By writing "strange" patterns like 0xDEADBEEF instead, most code that reads those bytes will probably find strange, unreasonable values (e.g. the integer -559038737 or a double with value -1.1885959257070704e+148), hopefully triggering some other self-consistency check or assertion.
Of course nothing is really specific to the bit pattern 0xDEADBEEF; actually we use different patterns for freed blocks, the before-block area and the after-block area, and our memory manager also writes another (address-dependent) specific bit pattern to the content part of any memory block before giving it to the application (this is to help find uses of uninitialized memory).
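For illustration, the pre-recycle check described above might look roughly like this (a sketch with made-up names; a real debug allocator would also check the guard areas before and after the block):
#include <cstddef>

// Return true if a freed block still contains the poison byte everywhere,
// i.e. nothing wrote into it after it was deallocated.
bool block_still_poisoned(const unsigned char *block, std::size_t size,
                          unsigned char poison) {
    for (std::size_t i = 0; i < size; ++i)
        if (block[i] != poison)
            return false;  // write-after-free detected
    return true;
}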
I would definitely recommend 0xDEADBEEF. It clearly identifies uninitialized variables, and accesses to uninitialized pointers.
Being odd, dereferencing a 0xdeadbeef pointer will definitely crash on the PowerPC architecture when loading a word, and very likely crash on other architectures since the memory is likely to be outside the process' address space.
Zeroing out memory is a convenience since many structures/classes have member variables that use 0 as their initial value, but I would very much recommend initializing each member in the constructor rather than using the default memory fill. You will really want to be on top of whether or not you properly initialized your variables.
http://en.wikipedia.org/wiki/Hexspeak
These "magic" numbers are are a debugging aid to identify bad pointers, uninitialized memory etc. You want a value that is unlikely to occur during normal execution and something that is visible when doing memory dumps or inspecting variables. Initializing to zero is less useful in this regard. I would guess that when you see people initialize to zero it is because they need to have that value at zero. A pointer with a value of 0xDEADBEEF could point to a valid memory location so it's a bad idea to use that as an alternative to NULL.
One reason that you null the buffer or set it to a special value is that you can easily tell whether the buffer contents is valid or not in the debugger.
Dereferencing a pointer of value 0xDEADBEEF is almost always dangerous (it will probably crash your program/system) because in most cases you have no idea what is stored there.
DEADBEEF is an example of hexspeak. With it, as a programmer, you intentionally convey an error condition.
I would personally recommend using NULL (or 0x0), as it represents NULL as expected and comes in handy in comparisons. Imagine you are using a char * and it somehow ends up as DEADBEEF (I don't know why it would); with NULL, at least your debugger will very readily tell you that it is 0x0.
I would go for NULL because it's much easier to mass zero out memory than to go through later and set all the pointers to 0xDEADBEEF. In addition, there's nothing at all stopping 0xDEADBEEF from being a valid memory address on x86 - admittedly, it would be unusual, but far from impossible. NULL is more reliable.
Ultimately, look - NULL is the language convention. 0xDEADBEEF just looks pretty and that's it. You gain nothing from it. Libraries will check for NULL pointers; they don't check for 0xDEADBEEF pointers. In C++ the idea of the null pointer isn't even tied to a zero value, it is just indicated with the literal zero, and in C++0x there are nullptr and nullptr_t.
Vote me down if this is too opinion-y for StackOverflow but I think this whole discussion is a symptom of a glaring hole in the toolchain we use to make software.
Detecting uninitialized variables by initializing memory with "garbage-y" values detects only some kinds of errors in some kinds of data.
And detecting uninitialized variables in debug builds but not in release builds is like following safety procedures only when testing an aircraft and telling the flying public to be satisfied with "well, it tested OK".
WE NEED HARDWARE SUPPORT for detecting uninitialized variables. As in something like an "invalid" bit that accompanies every addressable unit of memory (a byte on most of our machines), which is set by the OS in every byte VirtualAlloc() (et al., or equivalents on other OSes) hands over to applications, which is automatically cleared when the byte is written to, but which causes an exception if read first.
Memory is cheap enough for this and processors are fast enough for this. This would end our reliance on "funny" patterns and keep us all honest to boot.
Note that the second argument to memset is supposed to be a byte; that is, it is implicitly converted to a char or similar. 0xDEADBEEF would on most platforms convert to 0xEF (and to something else on some odd platform).
Also note that the second argument is formally supposed to be an int, which NULL isn't.
Now for the advantage of doing this kind of initialization. First, of course, the behavior becomes more deterministic (even if this leads us into undefined behavior, the behavior would in practice be consistent).
Having deterministic behavior means that debugging becomes easier; when you have found a bug you "only" have to provide the same input and the fault will manifest itself.
Now, when you select which value to use, you should select a value that is most likely to result in bad behavior - meaning that use of uninitialized data is more likely to result in an observable fault. This means you have to use some knowledge of the platform in question (although many platforms behave quite similarly).
If the memory is used to hold pointers, then having cleared the memory means you get a NULL pointer, and normally dereferencing that results in a segmentation fault (which will be observed as a fault). However, if you use it in another way, for example as an arithmetic type, then you get 0, and for many applications that is not that odd a number.
If you instead use 0xDEADBEEF you get a quite large integer; also, when interpreting the data as floating point it will be a quite large number (IIRC). If interpreted as text it will be very long and contain non-ASCII characters, and in UTF-8 encoding it will likely be invalid. Now, if used as a pointer on some platforms it would fail alignment requirements for some types - and on some platforms that region of memory might be unmapped anyway (note that on x86_64 the value of the pointer would be 0xDEADBEEFDEADBEEF, which is out of range for an address).
Note that while filling with 0xEF will have pretty much similar properties, if you want to fill the memory with 0xDEADBEEF you would need to use a custom function since memset doesn't do the trick.

Is 0x000001, 0x000002, etc. ever a valid memory address in application level programming?

Or are those things reserved for the operating system and things like that?
Thanks.
While it's unlikely that 0x00000001, etc. will be valid pointers (especially if you use odd numbers on many processors) using a pointer to store an integer value will be highly system dependent.
Are you really that strapped for space?
Edit:
You could make it portable like this:
char *base = malloc(NUM_MAGIC_VALUES);
#define MAGIC_VALUE_1 (base + 0)
#define MAGIC_VALUE_2 (base + 1)
...
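Usage is then just ordinary pointer comparison (continuing the sketch above; p is whatever pointer-sized field doubles as the tagged value):
if (p == MAGIC_VALUE_1) {
    /* p carries the first special meaning rather than pointing at real data */
}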
Well, the OS is going to give each program its own virtual memory space, so when the application references memory addresses 0x0000001 or 0x0000002, it's actually referencing some other physical memory address. I would take a look at paging and virtual memory. So a program will never have access to memory the operating system is using. However, I would stay away from manually assigning a memory address to a pointer rather than using malloc(), because those memory addresses might be text or reserved space.
This depends on the operating system layout. For user-space applications running on general-purpose operating systems, these are inaccessible addresses.
This problem is related to an architecture's virtual address space. Have a look at this: http://web.cs.wpi.edu/~cs3013/c07/lectures/Section09.1-Intel.pdf
Of course, you can do this:
int* myPointer1 = (int*)0x000001;
int* myPointer2 = (int*)0x000032;
But do not try to dereference these addresses, because it will end in an access violation.
The OS gives you the memory; these addresses are just virtual. The OS hides the details and presents memory as one big, continuous stripe. Maybe the 0x000000-0x211501 part is on a web server and you read/write it over the network, while the rest is on your hard disk. Physical memory is just an illusion from your current viewpoint.
You tagged your question C++. I believe that in C++ the address at 0 is reserved and is normally referred to as NULL. Other than that you cannot assume anything. If you want to ask about a particular implementation on a particular OS then that would be a different question.
It depends on the compiler/platform, but many older compilers actually have something like the string "(null)" at address 0x00000000. This is a debug feature because that string will show up if a NULL pointer is ever used by accident. On newer systems like Windows, a pointer to this area will most likely cause a processor exception.
I can pretty much guarantee that address 1 and 2 will either be in use or will raise a processor exception if they're ever used. You can store any value you like in a pointer. But if you try and dereference a pointer with a random value, you're definitely asking for problems.
How about a nice integer instead?
Although the standard requires that NULL is 0, a pointer that is NULL does not have to consist of all zero bits, although it will do in many implementations. That is also something you have to beware of if you memset a POD struct that contains some pointers, and then rely on the pointers holding "NULL" as their value.
If you want to use the same space as a pointer you could use a union, but I guess what you really want is something that doubles up as a pointer and something else, and you know it is not a pointer to a real address if it contains low-numbered values. (With a union you still need to know which type you have).
I'd be interested to know what the magic other value is really being used for. Is this some lazy-evaluation issue where the pointer gives an indication of how to load the data when it is not yet loaded and a genuine pointer when it is?
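A minimal sketch of that union idea (hypothetical names; a separate tag records which member is currently valid):
#include <cstdint>

struct Value {
    enum class Kind { Pointer, Integer } kind;  // which union member is active
    union {
        char        *ptr;   // valid when kind == Kind::Pointer
        std::int64_t num;   // valid when kind == Kind::Integer
    };
};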
Yes, on some platforms address 0x00000001 and 0x00000002 are valid addresses. On other platforms they are not.
In the embedded systems world, the validity depends on what resides at those locations. Some platforms may put interrupt or reset vectors at those addresses. Other embedded platforms may place Position Independent executable code there.
There is no standard specification for the layout of addresses. One cannot assume anything. If you want your code to be portable then forget about accessing specific addresses and leave that to the OS.
Also, the structure of a pointer is platform dependent. So is the conversion of the value in a pointer to a physical address. Some systems may only decode a portion of the pointer, others use the entire pointer value. Some may use indirection (a.k.a. virtual addressing) to access real objects. Still no standardization here either.

Why is NULL/0 an illegal memory location for an object?

I understand the purpose of the NULL constant in C/C++, and I understand that it needs to be represented some way internally.
My question is: Is there some fundamental reason why the 0-address would be an invalid memory-location for an object in C/C++? Or are we in theory "wasting" one byte of memory due to this reservation?
The null pointer does not actually have to be 0. It's guaranteed by the C spec that when a constant 0 value is given in a pointer context it is treated as null by the compiler. However, if you do
char *foo = (char *)1;
--foo;
// do something with foo
You will access the 0-address, not necessarily the null pointer. In most cases this happens to actually be the case, but it's not necessary, so we don't really have to waste that byte. Although, in the larger picture, if it isn't 0, it has to be something, so a byte is being wasted somewhere
Edit: Edited out the use of NULL due to the confusion in the comments. Also, the main message here is "null pointer != 0, and here's some C/pseudo code that shows the point I'm trying to make." Please don't actually try to compile this or worry about whether the types are proper; the meaning is clear.
This has nothing to do with wasting memory and more with memory organization.
When you work with the memory space, you have to assume that anything not directly "Belonging to you" is shared by the entire system or illegal for you to access. An address "belongs to you" if you have taken the address of something on the stack that is still on the stack, or if you have received it from a dynamic memory allocator and have not yet recycled it. Some OS calls will also provide you with legal areas.
In the good old days of real mode (e.g., DOS), the beginning of the machine's address space was not meant to be written by user programs at all. Some of it even mapped to things like I/O.
For instance, writing to the address space at segment 0xB800 (fairly low) would actually write straight to the screen! Nothing was ever placed at address 0, and many memory controllers would not let you access it, so it was a great choice for NULL. In fact, the memory controller on some PCs would have gone bonkers if you tried writing there.
Today the operating system protects you with a virtual address space. Nevertheless, no process is allowed to access addresses not allocated to it. Most of the addresses are not even mapped to an actual memory page, so accessing them will trigger a general protection fault or the equivalent in your operating system. This is why 0 is not wasted - even though all the processes on your machine "have an address 0", if they try to access it, it is not mapped anywhere.
There is no requirement that a null pointer be equal to the 0-address, it's just that most compilers implement it this way. It is perfectly possible to implement a null pointer by storing some other value and in fact some systems do this. The C99 specification §6.3.2.3 (Pointers) specifies only that an integer constant expression with the value 0 is a null pointer constant, but it does not say that a null pointer when converted to an integer has value 0.
An integer constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant.
Any pointer type may be converted to an integer type. Except as previously specified, the result is implementation-defined. If the result cannot be represented in the integer type, the behavior is undefined. The result need not be in the range of values of any integer type.
On some embedded systems the zero memory address is used for something addressable.
The zero address and the NULL pointer are not (necessarily) the same thing. Only a literal zero is a null pointer. In other words:
char* p = 0; // p is a null pointer
char* q = (char*)1;
q--; // q is NOT necessarily a null pointer
Systems are free to represent the null pointer internally in any way they choose, and this representation may or may not "waste" a byte of memory by making the actual 0 address illegal. However, a compiler is required to convert a literal zero pointer into whatever the system's internal representation of NULL is. A pointer that comes to point to the zero address by some way other than being assigned a literal zero is not necessarily null.
Now, most systems do use 0 for NULL, but they don't have to.
It is not necessarily an illegal memory location. I have stored data by dereferencing a pointer to zero... as it happens, the datum was an interrupt vector, stored in the vector located at address zero.
By convention it is not normally used by application code since historically many systems had important system information starting at zero. It could be the boot rom or a vector table or even unused address space.
On many processors address zero is the reset vector, wherein lies the boot ROM (the BIOS on a PC), so you are unlikely to be storing anything at that physical address. On a processor with an MMU and a supporting OS, the physical and logical addresses need not be the same, and address zero may not be a valid logical address in the executing process context.
NULL is typically the zero address, but it is the zero address in your applications virtual address space. The virtual addresses that you use in most modern operating systems have exactly nothing to do with actual physical addresses, the OS maps from the virtual address space to the physical addresses for you. So, no, having the virtual address 0 representing NULL does not waste any memory.
Read up on virtual memory for a more involved discussion if you're curious.
I don't see the answers directly addressing what I think you were asking, so here goes:
Yes, at least one address value is "wasted" (made unavailable for use) because of the constant used for null. Whether it maps to 0 in the linear map of process memory is not relevant.
And the reason that address won't be used for data storage is that you need the special status of the null pointer, to be able to distinguish it from any other real pointer. Just like in the case of ASCIIZ strings (C strings, NUL-terminated), where the NUL character is designated as the end of the character string and cannot be used inside strings. Can you still use it inside? Yes, but that will mislead library functions as to where the string ends.
I can think of at least one implementation of Lisp I was learning, in which NIL (Lisp's null) was not 0, nor was it an invalid address, but a real object. The reason was very clever - the standard required that CAR(NIL) = NIL and CDR(NIL) = NIL. (Note: CAR(l) returns a pointer to the head/first element of a list, while CDR(l) returns a pointer to the tail/rest of the list.) So instead of adding if-checks in CAR and CDR for whether the pointer is NIL - which would slow every call - they just allocated a CONS (think list cell) and assigned its head and tail to point to itself. There! This way CAR and CDR work, and that address in memory won't be reused (because it is taken by the object devised as NIL).
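A rough C++ rendering of that trick, just to show the self-referential cell (hypothetical names):
struct Cons {
    Cons *car;  // head of the list
    Cons *cdr;  // rest of the list
};

// NIL is a real cell whose head and tail point back at itself, so
// car(NIL) == NIL and cdr(NIL) == NIL without any special-case branch.
static Cons nil_cell = { &nil_cell, &nil_cell };
static Cons *const NIL = &nil_cell;

inline Cons *car(Cons *c) { return c->car; }
inline Cons *cdr(Cons *c) { return c->cdr; }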
P.S. I just remembered that many, many years ago I read about a bug in Lattice C that was related to NULL - it must have been in the dark MS-DOS segmentation times, when you worked with separate code and data segments - the issue was that it was possible for the first function in a linked library to end up at address 0, so a pointer to it would be considered invalid since it == NULL.
But since modern operating systems can map the physical memory to logical memory addresses (or better: modern CPUs starting with the 386), not even a single byte is wasted.
As people have already pointed out, the bit representation of the NULL pointer does not have to be the same as the bit representation of a 0 value. It is, though, in nearly all cases (the old dinosaur computers that had special addresses can be neglected), because a NULL pointer can also be used as a boolean, and by using an integer (of sufficient size) to hold the pointer value it is easier to represent in the common ISAs of modern CPUs. The code to handle it is then much more straightforward, and thus less error prone.
You are correct in noting that the address space at 0 is not usable storage for your program. For a number of reasons, a variety of systems do not consider this a valid address space for your program anyway.
Allowing any valid address to be used would require a null value flag for all pointers. This would exceed the overhead of the lost memory at address 0. It would also require additional code to check and see if the address were null or not, wasting memory and processor cycles.
Ideally, the address that NULL pointer is using (usually 0) should return an error on access. VAX/VMS never mapped a page to address 0 so following the NULL pointer would result in a failure.
The memory at that address is reserved for use by the operating system. 0 - 64k is reserved. 0 is used as a special value to indicate to developers "not a valid address".