Special Meaning for Pointer Value 0x7c7c7c7c - c++

While debugging a Linux app, I found a pointer with the suspicious value 0x7c7c7c7c. Does that particular value indicate anything?
(I ask because I know from my MSVC days that in a debug build, values like 0xcdcdcdcd or 0xdddddddd would be stored into heap blocks that were uninitialized, freed, or otherwise invalid. Some people use magic values like 0xdeadbeef or 0xcafebabe in uninitialized memory. I'm guessing something in libc or elsewhere uses 0x7c7c7c7c as a magic value, but I can't find it documented.)

I don't recognize that magic number, and neither does Wikipedia. I would guess that some code in your program (or in a library you're using) is using memset() and hitting your pointer. Have you grepped your code base case-insensitively for the string "0x7c"?

0x7C is the ASCII pipe character "|". You could search for writes of that character as well, both as 124 and as 0x7C, in addition to grepping as Adam suggested.

0x7c7c = 01111100 01111100 in binary. That could be one of those "most difficult to read" bit patterns that format utilities fill unused space on hard drives with.

Maybe the MALLOC_PERTURB_ environment variable is set? If it is set to a nonzero value, glibc's malloc() fills freed memory with that byte value and newly allocated memory with its bitwise complement. A value of 124 (0x7c) would leave exactly this pattern in freed blocks.
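A minimal sketch to check this (the offset and program shape are mine; reading freed memory is undefined behavior and is done here purely as a debugging experiment):

#include <cstdio>
#include <cstdlib>

int main() {
    // Run as: MALLOC_PERTURB_=124 ./a.out   (124 == 0x7c)
    unsigned char *p = static_cast<unsigned char *>(std::malloc(64));
    std::free(p);
    // Undefined behavior on purpose: peeking at freed memory to observe
    // the perturb byte. An offset past the allocator's freelist metadata
    // is used so the byte is less likely to have been overwritten.
    std::printf("byte 32 after free: 0x%02x\n", p[32]);
    return 0;
}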

Related

What are common values for uninitialized memory for debugging?

A long time ago I learned about filling unused / uninitialized memory with 0xDEADBEEF so that in a debugger or a crash report if I ever see that value I know I'm looking at uninitialized memory. I saw from a crash report iOS uses 0xBBADBEEF.
What other creative values have people used? Do any particular values have any kind of specific benefit?
The most obvious benefit of values that turn into words is that, at least for most people, words in their own language stick out easily, whereas a strictly numeric value is less likely to stick out.
But maybe there are other reasons to pick certain numbers? For example, an odd number might crash a processor (the 68000, for example) on certain memory accesses, so it's probably better to pick 0x0BADBEEF over 0xBADBEEF0. Are there any other values (maybe processor specific) that have a concrete benefit when used for uninitialized memory?
Generally speaking, you want a value which is unlikely to happen to "work" when interpreted as either an integer, a pointer, or a string. So, here are a few constraints:
Don't use a value that's a multiple of the smallest "usual" alignment on your target architecture. For x86, that's 4 (bytes), so no values that are divisible by 4. This ensures that if the value is interpreted as a pointer, it'll be obviously-incorrect. If you're on a non-x86 architecture, you might even be able to use a value that will cause an alignment trap if used as a pointer.
Don't use a value which could reasonably be a small (positive or negative) integer. Your typical "int" variable in a C program never gets larger than 1,000 or so, so don't use small numbers as your empty data fill.
Don't use a value which is composed entirely of valid ASCII characters. Make sure there's at least one byte in there with the high bit set. These days, you'd want to make sure they weren't valid UTF-8 or possibly UTF-16 values, either.
Don't have any zero bytes in the value. There are too many cases where this would work out to be "helpful" to keeping the program from crashing - terminating a string, giving a non-int field a reasonable-looking value, etc.
Don't use single-byte (or two-byte) values repeated over and over. Having a full-word-length pattern can make it easier to determine how your wild pointer ended up pointing where it is, at least narrowing down which operations offset it from the start of the pattern.
Don't use a value that maps to a valid address for a "typical" process. If the highest bits are set, it'll typically take a whole lot of malloc() before your process grows large enough to make that a valid address.
Perhaps unsurprisingly, patterns like 0xDEADBEEF meet basically all of these requirements.
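To make the constraints concrete, here is a sketch of a debug-fill helper (the names are mine, not from any particular library):

#include <cstddef>
#include <cstdint>

// 0xDEADBEEF meets the rules above: it's odd (misaligned as a pointer),
// far too large for a plausible int, every byte has its high bit set
// (so none are valid ASCII), there are no zero bytes, and it is a
// full-word pattern rather than a repeated single byte.
constexpr std::uint32_t kPoison = 0xDEADBEEF;

void poison_fill(void *buf, std::size_t len) {
    auto *p = static_cast<unsigned char *>(buf);
    for (std::size_t i = 0; i < len; ++i)
        // Lay the bytes out so a hex dump reads DE AD BE EF
        // regardless of host endianness.
        p[i] = static_cast<unsigned char>(kPoison >> (8 * (3 - i % 4)));
}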
One technical term for values like this is "poison value".
Hex numbers that form English words are called Hexspeak. Wikipedia's Hexspeak article pretty much answers this question, cataloguing many known constants in use for various things, including several that are used as poison values / canaries / sanity checks, as well as other uses like error codes or IPv6 addresses.
I seem to recall some variation of 0xBADF00D. (maybe with a repeated letter like your 2nd example).
There's also 0xDEADC0DE. (Googling for where I've seen this used found the wikipedia article linked above).
Other English words in hex I've seen: Java .class files use 0xCAFEBABE as the magic number (first 4 bytes of the file). As a play on this, I guess, the Jikes JVM uses 0xDEADBABE as a sanity check constant.
Apparently Java wasn't the first user of 0xCAFEBABE. Wikipedia says "It was originally created by NeXTSTEP developers as a reference to the baristas at Peet's Coffee & Tea", and was used by the people developing Java before they thought of the name "Java". So it didn't come out of Java -> coffee (if anything the other way around), it's just plain old non-feminist tech culture. :(
re: update: Choosing a good value. For a poison value (not an error code), you want all the bytes to be different and not 0x00 or 0xFF, since those are probably the most likely values for an errant single-byte store. This applies especially for things like stack canaries (to detect buffer overruns), or other cases where detecting that it didn't get overwritten is important.
Your speculation about picking an odd value makes a lot of sense. Not being a valid memory address in the virtual memory layout of typical processes is a big advantage. Failing noisily as early as possible is optimal for debugging. Anyway, this probably means that having the high bit set is a good idea, so 0x0... is probably not a good idea.
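As an illustration of the stack-canary point, a minimal sketch (the struct and names are hypothetical, not from a real hardening library):

#include <cstdint>
#include <cstdlib>

constexpr std::uint32_t kCanary = 0xDEADBEEF; // all four bytes differ, none 0x00 or 0xFF

struct GuardedBuffer {
    char data[64];
    std::uint32_t canary = kCanary; // sits just past the buffer
};

void check(const GuardedBuffer &b) {
    // An overrun of data[] that clobbers even a single canary byte
    // fails this full-word comparison.
    if (b.canary != kCanary)
        std::abort();
}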

On Linux, in C/C++, will a pointer ever have the MSB set?

I want to use a long integer that will be interpreted as a number when the MSB is set; otherwise it will be interpreted as a pointer. So would this work, or would I run into problems in either C or C++?
This is on a 64-bit system.
Edited for clarity and a better description.
On x86-64, any pointer above the 47-bit address range WILL have bit 63 set: all the bits above the number of virtual-address bits the implementation supports (currently 48) must have the same value as the most significant implemented bit, bit 47. (That is, any canonical address above 0000 7FFF FFFF FFFF must be at or above FFFF 8000 0000 0000; everything in between is "invalid" as a pointer.)
That may well be addresses ONLY used by the kernel, but I'm not sure it's guaranteed to be.
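A quick sketch of the canonical-address rule (assuming the common 48-bit case; the helper name is mine):

#include <cstdint>

// True when bits 63:48 are all copies of bit 47, i.e. the address is
// canonical under 48-bit virtual addressing.
bool is_canonical48(std::uint64_t addr) {
    std::uint64_t top17 = addr >> 47;      // bit 47 plus the 16 bits above it
    return top17 == 0 || top17 == 0x1FFFF; // all zeros or all ones
}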
However, I would try to avoid relying on tricks like tagging the MSB - it's likely to come back and haunt you at some point.
People have tried tricks like this before.
It never works out well in the long run.
Simply don't do it.
Edit: better link - see the reference to 'bit31', which was previously never returned as set. Once it could be set (over 2 gigs of RAM, gasp!) it would break naughty programs, so programs had to opt into this option once that much memory became the norm, because people had used trickery like this (amongst other things). And now my lovely, short and to the point answer has become too long :-)
So would this work or would I run into problems in either C or C++?
Do you have 64 bits? Do you want your code to be portable to 32 bit systems? long does not necessarily have 64 bits. Big-endian v. little-endian? (Do you know which your system is?)
Plus, hopeless confusion. Please just use an extra variable to store this information or you will have many many bugs surrounding this.
It depends on the architecture. The x86_64 architecture, for example, is currently using 48-bit addressing. It means that you could use 16 bits for your own needs (a trick that is sometimes referred to as "pointer packing"). However, even the x86_64 architecture definition allows this limit to be raised in future implementations to the full 64 bits. If that happens, you may run into a situation where a lot of your code might need to be changed. So if you really must go that way, make sure your pointer packing is kept in one place that is easy to change in the future. For other architectures you have to check for yourself.
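Here is what such pointer packing might look like in practice (a sketch under the 48-bit assumption; kept behind one small interface, as the answer suggests, and the function names are mine):

#include <cassert>
#include <cstdint>

// Stash a 16-bit tag in the currently-unused top bits of a user-space
// x86_64 pointer. Breaks if addresses ever exceed 48 bits.
inline std::uint64_t pack(void *p, std::uint16_t tag) {
    auto bits = reinterpret_cast<std::uint64_t>(p);
    assert((bits >> 48) == 0 && "pointer already uses the top 16 bits");
    return bits | (static_cast<std::uint64_t>(tag) << 48);
}

inline void *unpack_ptr(std::uint64_t packed) {
    // Mask off the tag; user-space pointers have zero upper bits here.
    return reinterpret_cast<void *>(packed & ((std::uint64_t{1} << 48) - 1));
}

inline std::uint16_t unpack_tag(std::uint64_t packed) {
    return static_cast<std::uint16_t>(packed >> 48);
}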
Unless you really need the space, or you're keeping a lot of these things around, I would just use a plain union and add a tag field. If you're going to go down that route, make sure that your memory is aligned to fit your needs.
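For example, a plain tagged union might look like this (a sketch; field names are mine):

#include <cstdint>

struct IntOrPtr {
    enum class Kind : std::uint8_t { Integer, Pointer } kind;
    union {
        long integer;
        void *pointer;
    };
};
// Typically 16 bytes on x86_64 after padding: the cost of the tag is
// one byte plus alignment, in exchange for no bit tricks at all.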
Take a look at boost::lockfree::detail::tagged_ptr from Boost.Lockfree.
This is a class that was introduced in the recent Boost 1.53 release. It stores a pointer plus an additional 16 bits in a single 64-bit variable.
Don't do such tricks. If you need to distinguish integers from pointers inside some container, consider using a separate bit set to indicate the flag. In C++, std::bitset could be good enough (see the sketch after the list of reasons below).
Reasons:
Nobody actually guarantees that pointers are unsigned long or unsigned long long. If you need to store them, always use sizeof() and the void * type (if you need to discard the information about the pointed-to object).
Even on a single system, addresses are highly dependent on the architecture.
Kernel modules can seriously change the mapping logic for a process, so you never know what addresses you will get.
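A sketch of the bit-set suggestion mentioned above (sizes and names are mine):

#include <bitset>
#include <cstdint>
#include <vector>

// Store values untouched; record "is slot i a pointer?" separately.
constexpr std::size_t kSlots = 1024;
std::vector<std::uint64_t> slots(kSlots);
std::bitset<kSlots> is_pointer;

void store_pointer(std::size_t i, void *p) {
    slots[i] = reinterpret_cast<std::uint64_t>(p);
    is_pointer.set(i);
}

void store_integer(std::size_t i, std::uint64_t v) {
    slots[i] = v;
    is_pointer.reset(i);
}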
Remember that the virtual address returned to your program does not necessarily line up with the actual physical address in memory. In fact, unless you are directly manipulating pretty special memory [e.g. some forms of graphics memory], this is absolutely the case.
In this case, it's the maximum value of the MMU which defines the values of the pointers your program sees. For x64 I'm pretty sure it's (currently) 48 bits, but as Mats says above, once you've got the top bit of those 48 set, you get the 63rd bit set as well.
So taking his answer and mine together - it's entirely possible to get a pointer with bit 47 set even with a small amount of RAM, and once you do, you get the 63rd bit set too.
If the "64-bit system" in question is x86_64, then yes, it will work.

sizeof("...") in visual c++ makes me unhappy

I was writing a piece of code where I use sizeof("somestring") as a parameter of a function, then I noticed the function was not returning the expected value, so I went to see the corresponding asm code and I found an unpleasant surprise. Does anyone have an explanation for this (see the picture)?
I know there are 1000+ different ways of doing this, I already implemented another one of them, but I do want to know the reason behind this behaviour.
For the curious, this is Visual Studio 2008 SP1.
The value 5 is correct. The constant includes the zero terminator byte. The display of 4 in the watch window is the one that does not appear to be correct.
String literals are of type "array of n const char" ([lex.string], ¶8), where n is the number of chars of which the string is composed. Since the string is null-terminated, sizeof will return the number of "normal" characters plus 1; the watch window is wrong, it's probably a bug (as @Gene Bushuyev said, it's probably interpreting it as a pointer instead of as an array).
The fact that the value 5 is embedded into the code is normal, being sizeof a compile-time operator.
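A self-contained check (note the array vs. pointer difference, which is exactly what seems to confuse the watch window):

#include <cstdio>

int main() {
    std::printf("%zu\n", sizeof("PDFA"));  // 5: four chars + '\0'
    const char *p = "PDFA";
    std::printf("%zu\n", sizeof p);        // size of a pointer: 4 or 8
    return 0;
}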
There is a C-string terminator '\0' at the end of every C string, so "pdfa" is actually the char array {'p', 'd', 'f', 'a', '\0'}, but the \0 will not be printed. Use strlen("pdfa") if you want the count without the terminator.
Remember that C strings contain an ending zero \0. Five is the correct value.
Well, 5 is the correct value of sizeof("PDFA"). 4 characters + trailing zero.
Also, keep in mind, that "The result does not necessarily correspond to the size calculated by adding the storage requirements of the individual members. The /Zp compiler option and the pack pragma affect alignment boundaries for members."
Speaking of the Watch window, I think it simply shows you the size of the pointer (const char*) itself. Try recompiling the program in 64-bit mode and check what the Watch window shows then. If I am right, you will see 8.
The reason that Things Go Wrong™ here is that you have chosen too low a level of abstraction: memcmp.
One level up you have strcmp and wcscmp.
And one level up from that you have std::string and std::wstring.
The "speed" (hah!) of your chosen lowest level possible abstraction is offset by
Incorrect result.
Inefficiency due to lack of type knowledge (wide or narrow string, your code doesn't know).
Inefficiency due to lack of data knowledge (uppercase or lowercase).
Instead of wasting time on fixing the problems of the inefficient lowest level code, and wasting time on figuring out baffling details of low level tools, use a higher and safer level of abstraction.
Just for the record, sizeof( "abcd" ) is 5. The watch window is probably, as Hans Passant remarked, displaying the size of a pointer. However, I disagree with Hans that the debugger generally has no way to know the size of an array: for a debug build it can know anything and everything about the original source, including the verbatim original source if needed (and it is displaying that verbatim original source, in context). So, that 4 is IMHO a bug one way or the other. Either a bug in the debugger code, or a bug in its design.
sizeof is an operator that evaluates to a size_t, usually an unsigned int on 32-bit platforms. That is why you see it as 4 in the debugger. The result of sizeof is also an rvalue, so you cannot set a watchpoint on its memory. If you could, the location would contain 5: the size of your string plus terminator.

Could I ever want to access the address zero?

The constant 0 is used as the null pointer in C and C++. But as in the question "Pointer to a specific fixed address" there seems to be some possible use of assigning fixed addresses. Is there ever any conceivable need, in any system, for whatever low level task, for accessing the address 0?
If there is, how is that solved with 0 being the null pointer and all?
If not, what makes it certain that there is not such a need?
Neither in C nor in C++ is the null-pointer value in any way tied to physical address 0. The fact that you use the constant 0 in the source code to set a pointer to the null-pointer value is nothing more than a piece of syntactic sugar. The compiler is required to translate it into the actual physical address used as the null-pointer value on the specific platform.
In other words, 0 in the source code has no physical importance whatsoever. It could have been 42 or 13, for example. I.e. the language authors, if they so pleased, could have made it so that you'd have to do p = 42 in order to set the pointer p to null-pointer value. Again, this does not mean that the physical address 42 would have to be reserved for null pointers. The compiler would be required to translate source code p = 42 into machine code that would stuff the actual physical null-pointer value (0x0000 or 0xBAAD) into the pointer p. That's exactly how it is now with constant 0.
Also note, that neither C nor C++ provides a strictly defined feature that would allow you to assign a specific physical address to a pointer. So your question about "how one would assign 0 address to a pointer" formally has no answer. You simply can't assign a specific address to a pointer in C/C++. However, in the realm of implementation-defined features, the explicit integer-to-pointer conversion is intended to have that effect. So, you'd do it as follows
uintptr_t address = 0;
void *p = (void *) address;
Note, that this is not the same as doing
void *p = 0;
The latter always produces the null-pointer value, while the former in general case does not. The former will normally produce a pointer to physical address 0, which might or might not be the null-pointer value on the given platform.
On a tangential note: you might be interested to know that with Microsoft's C++ compiler, a NULL pointer to member will be represented as the bit pattern 0xFFFFFFFF on a 32-bit machine. That is:
struct foo
{
int field;
};
int foo::*pmember = 0; // 'null' member pointer
pmember will have the bit pattern 'all ones'. This is because you need this value to distinguish it from
int foo::*pmember = &foo::field;
where the bit pattern will indeed be 'all zeroes' -- since we want offset 0 into the structure foo.
Other C++ compilers may choose a different bit pattern for a null pointer to member, but the key observation is that it won't be the all-zeroes bit pattern you might have been expecting.
You're starting from a mistaken premise. When you assign an integer constant with the value 0 to a pointer, that becomes a null pointer constant. This does not, however, mean that a null pointer necessarily refers to address 0. Quite the contrary, the C and C++ standards are both very clear that a null pointer may refer to some address other than zero.
What it comes down to is this: you do have to set aside an address that a null pointer would refer to -- but it can be essentially any address you choose. When you convert zero to a pointer, it has to refer to that chosen address -- but that's all that's really required. Just for example, if you decided that converting an integer to a pointer would mean adding 0x8000 to the integer, then the null pointer would actually refer to address 0x8000 instead of address 0.
It's also worth noting that dereferencing a null pointer results in undefined behavior. That means you can't do it in portable code, but it does not mean you can't do it at all. When you're writing code for small microcontrollers and such, it's fairly common to include some bits and pieces of code that aren't portable at all. Reading from one address may give you the value from some sensor, while writing to the same address could activate a stepper motor (just for example). The next device (even using exactly the same processor) might be connected up so both of those addresses referred to normal RAM instead.
Even if a null pointer does refer to address 0, that doesn't prevent you from using it to read and/or write whatever happens to be at that address -- it just prevents you from doing so portably -- but that doesn't really matter a whole lot. The only reason address zero would normally be important would be if it was decoded to connect to something other than normal storage, so you probably can't use it entirely portably anyway.
The compiler takes care of this for you (comp.lang.c FAQ):
If a machine uses a nonzero bit pattern for null pointers, it is the compiler's responsibility to generate it when the programmer requests, by writing "0" or "NULL," a null pointer. Therefore, #defining NULL as 0 on a machine for which internal null pointers are nonzero is as valid as on any other, because the compiler must (and can) still generate the machine's correct null pointers in response to unadorned 0's seen in pointer contexts.
You can get to address zero by referencing zero from a non-pointer context.
In practice, C compilers will happily let your program attempt to write to address 0. Checking every pointer operation at run time for a NULL pointer would be a tad expensive. On computers, the program will crash because the operating system forbids it. On embedded systems without memory protection, the program will indeed write to address 0 which will often crash the whole system.
The address 0 might be useful on an embedded systems (a general term for a CPU that's not in a computer; they run everything from your stereo to your digital camera). Usually, the systems are designed so that you wouldn't need to write to address 0. In every case I know of, it's some kind of special address. Even if the programmer needs to write to it (e.g., to set up an interrupt table), they would only need to write to it during the initial boot sequence (usually a short bit of assembly language to set up the environment for C).
Memory address 0 is also called the Zero Page. This is populated by the BIOS, and contains information about the hardware running on your system. All modern kernels protect this region of memory. You should never need to access this memory, but if you want to you need to do it from within kernel land, a kernel module will do the trick.
On the x86, address 0 (or rather, 0000:0000) and its vicinity in real mode is the location of the interrupt vector table. In the bad old days, you would typically write values to the interrupt vector table to install interrupt handlers (or if you were more disciplined, use the MS-DOS service 0x25). C compilers for MS-DOS defined a far pointer type which, when assigned NULL or 0, would receive the bit pattern 0000 in its segment part and 0000 in its offset part.
Of course, a misbehaving program that accidentally wrote to a far pointer whose value was 0000:0000 would cause very bad things to happen on the machine, typically locking it up and forcing a reboot.
In the question from the link, people are discussing setting pointers to fixed addresses in a microcontroller. When you program a microcontroller, everything happens at a much lower level.
You don't even have an OS in the desktop/server PC sense, and you don't have virtual memory and all that. So there it is OK, and even necessary, to access memory at a specific address. On a modern desktop/server PC it is useless and even dangerous.
I compiled some code using gcc for the Motorola HC11, which has no MMU and 0 is a perfectly good address, and was disappointed to find out that to write to address 0, you just write to it. There's no difference between NULL and address 0.
And I can see why. I mean, it's not really possible to define a unique NULL on an architecture where every memory location is potentially valid, so I guess the gcc authors just said 0 was good enough for NULL whether it's a valid address or not.
char *null = 0;
; Clears 8-bit AR and BR and stores it as a 16-bit pointer on the stack.
; The stack pointer, ironically, is stored at address 0.
1b: 4f clra
1c: 5f clrb
1d: de 00 ldx *0 <main>
1f: ed 05 std 5,x
When I compare it with another pointer, the compiler generates a regular comparison. Meaning that it in no way considers char *null = 0 to be a special NULL pointer, and in fact a pointer to address 0 and a "NULL" pointer will be equal.
; addr is a pointer stored at 7,x (offset of 7 from the address in XR) and
; the "NULL" pointer is at 5,y (offset of 5 from the address in YR). It doesn't
; treat the so-called NULL pointer as a special pointer, which is not standards
; compliant as far as I know.
37: de 00 ldx *0 <main>
39: ec 07 ldd 7,x
3b: 18 de 00 ldy *0 <main>
3e: cd a3 05 cpd 5,y
41: 26 10 bne 53 <.LM7>
So to address the original question, I guess my answer is to check your compiler implementation and find out whether they even bothered to implement a unique-value NULL. If not, you don't have to worry about it. ;)
(Of course this answer is not standard compliant.)
It all depends on whether the machine has virtual memory. Systems with it will typically put an unwritable page there, which is probably the behaviour that you are used to. However in systems without it (typically microcontrollers these days, but they used to be far more common) then there's often very interesting things in that area such as an interrupt table. I remember hacking around with those things back in the days of 8-bit systems; fun, and not too big a pain when you had to hard-reset the system and start over. :-)
Yes, you might want to access memory address 0x0h. Why you would want to do this is platform-dependent. A processor might use this for a reset vector, such that writing to it causes the CPU to reset. It could also be used for an interrupt vector, as a memory-mapped interface to some hardware resource (program counter, system clock, etc), or it could even be valid as a plain old memory address. There is nothing necessarily magical about memory address zero, it is just one that was historically used for special purposes (reset vectors and the like). C-like languages follow this tradition by using zero as the address for a NULL pointer, but in reality the underlying hardware may or may not see address zero as special.
The need to access address zero usually arises only in low-level details like bootloaders or drivers. In these cases, the compiler can provide options/pragmas to compile a section of code without optimizations (to prevent the zero pointer from being extracted away as a NULL pointer) or inline assembly can be used to access the true address zero.
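A sketch of what that low-level access might look like (implementation-defined at best; the volatile qualifier and the integer round trip are there to discourage the compiler from treating this as a null-pointer dereference and optimizing it away -- verify against your own toolchain):

#include <cstdint>

std::uint32_t read_vector_at_zero() {
    std::uintptr_t zero = 0; // non-constant zero: not a null pointer constant
    auto *vec = reinterpret_cast<volatile std::uint32_t *>(zero);
    return *vec; // only meaningful where address 0 is real, mapped memory
}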
C and C++ themselves don't prevent you from writing to any address; it is the OS that can raise a signal when a user program accesses some forbidden address. C and C++ do ensure that any memory obtained from the heap will have an address different from 0.
I have at times used loads from address zero (on a known platform where that would be guaranteed to segfault) to deliberately crash at an informatively named symbol in library code if the user violates some necessary condition and there isn't any good way to throw an exception available to me. "Segfault at someFunction$xWasnt16ByteAligned" is a pretty effective error message to alert someone to what they did wrong and how to fix it. That said, I wouldn't recommend making a habit of that sort of thing.
Writing to address zero can be done, but it depends upon several factors such as your OS, target architecture and MMU configuration. In fact, it can be a useful debugging tool (but not always).
For example, a few years ago while working on an embedded system (with few debugging tools available), we had a problem which was resulting in a warm reboot. To help locate the problem, we were debugging using sprintf(NULL, ...); and a 9600 baud serial cable. As I said--few debugging tools available. With our setup, we knew that a warm reboot would not corrupt the first 256 bytes of memory. Thus after the warm reboot we could pause the loader and dump the memory contents to find out what happened prior to reboot.
Remember that in all normal cases, you don't actually see specific addresses.
When you allocate memory, the OS supplies you with the address of that chunk of memory.
When you take the address of a variable, the variable has already been allocated at an address determined by the system.
So accessing address zero is not really a problem, because when you follow a pointer, you don't care what address it points to, only that it is valid:
int* i = new int(); // suppose this returns a pointer to address zero
*i = 42; // now we're accessing address zero, writing the value 42 to it
So if you need to access address zero, it'll generally work just fine.
The 0 == null thing only really becomes an issue if for some reason you're accessing physical memory directly. Perhaps you're writing an OS kernel or something like that yourself. In that case, you're going to be writing to specific memory addresses (especially those mapped to hardware registers), and so you might conceivably need to write to address zero. But then you're really bypassing C++ and relying on the specifics of your compiler and hardware platform.
Of course, if you need to write to address zero, that is possible. Only the constant 0 represents a null pointer. The non-constant integer value zero will not, if assigned to a pointer, yield a null pointer.
So you could simply do something like this:
int i = 0;
int* zeroaddr = (int*)i;
now zeroaddr will point to address zero(*), but it will not, strictly speaking, be a null pointer, because the zero value was not constant.
(*): that's not entirely true. The C++ standard only guarantees an "implementation-defined mapping" between integers and addresses. It could convert the 0 to address 0x1633de20 or any other address it likes. But the mapping is usually the intuitive and obvious one, where the integer 0 is mapped to the address zero.
It may surprise many people, but in the core C language there is no such thing as a special null pointer. You are totally free to read and write to address 0 if it's physically possible.
The code below does not even compile, as NULL is not defined:
int main(int argc, char *argv[])
{
void *p = NULL;
return 0;
}
OTOH, the code below compiles, and you can read and write address 0, if the hardware/OS allows:
int main(int argc, char *argv[])
{
int *p = 0;
*p = 42;
int x = *p; /* let's assume C99 */
}
Please note, I did not include anything in the above examples.
If we start including stuff from the standard C library, NULL becomes magically defined. As far as I remember it comes from string.h.
NULL is still not a core C feature, it's a CONVENTION of many C library functions to indicate the invalidity of pointers. The C library on the given platform will define NULL to a memory location which is not accessible anyway. Let's try it on a Linux PC:
#include <stdio.h>
int main(int argc, char *argv[])
{
int *p = NULL;
printf("NULL is address %p\n", p);
printf("Contents of address NULL is %d\n", *p);
return 0;
}
The result is:
NULL is address 0x0
Segmentation fault (core dumped)
So our C library defines NULL to address zero, which it turns out is inaccessible.
But it was not the C compiler, nor even the C-library function printf(), that handled the zero address specially. They all happily tried to work with it normally. It was the OS that detected a segmentation fault when printf tried to read from address zero.
If I remember correctly, on an AVR microcontroller the register file is mapped into the RAM address space, and register R0 is at address 0x00. This was clearly done on purpose, and apparently Atmel thinks there are situations when it's convenient to access address 0x00 instead of writing R0 explicitly.
In the program memory, at the address 0x0000 there is a reset interrupt vector and again this address is clearly intended to be accessed when programming the chip.

Allocate room for null terminating character when copying strings in C?

const char* src = "hello";
Calling strlen(src); returns size 5...
Now say I do this:
char* dest = new char[strlen(src)];
strcpy(dest, src);
That doesn't seem like it should work, but when I output everything it looks right. It seems like I'm not allocating space for the null terminator on the end... is this right? Thanks
You are correct that you are not allocating space for the terminator, however the failure to do this will not necessarily cause your program to fail. You may be overwriting following information on the heap, or your heap manager will be rounding up allocation size to a multiple of 16 bytes or something, so you won't necessarily see any visible effect of this bug.
If you run your program under Valgrind or other heap debugger, you may be able to detect this problem sooner.
Yes, you should allocate at least strlen(src)+1 characters.
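The fixed version of the snippet from the question:

#include <cstring>

int main() {
    const char *src = "hello";
    char *dest = new char[std::strlen(src) + 1]; // +1 for the '\0' strcpy copies
    std::strcpy(dest, src);
    // ... use dest ...
    delete[] dest;
    return 0;
}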
That doesn't seem like it should work, but when I output everything it looks right.
Welcome to the world of Undefined Behavior. When you do this, anything can happen. Your program can crash, your computer can crash, your computer can explode, demons can fly out of your nose.
And worst of all, your program could run just fine, inconspicuously looking like it's working correctly until one day it starts spitting out garbage because it's overwriting sensitive data somewhere due to the fact that somewhere, someone allocated one too few characters for their arrays, and now you've corrupted the heap and you get a segfault at some point a million miles away, or even worse your program happily chugs along with a corrupted heap and your functions are operating on corrupted credit card numbers and you get in huge trouble.
Even if it looks like it works, it doesn't. That's Undefined Behavior. Avoid it, because you can never be sure what it will do, and even when what it does when you try it is okay, it may not be okay on another platform.
The best description I have read (it was on Stack Overflow) went like this:
If the speed limit is 50 and you drive at 60. You may get lucky and not get a ticket but one day maybe not today maybe not tomorrow but one day that cop will be waiting for you. On that day you will pay and you will pay dearly.
If somebody can find the original, I would much rather point at that; they were much more eloquent than my explanation.
strcpy will copy the null terminator char as well as all of the other chars.
So you are copying the length of "hello" + 1, which is 6, into a buffer of size 5.
That is a buffer overflow, and overwriting memory that is not your own will have undefined results.
Alternatively, you could also use dest = strdup(src) which will allocate enough memory for the string + 1 for the null terminator (+1 for Juliano's answer).
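For instance (note that strdup is POSIX rather than standard C89/C99, so it may need a feature-test macro such as _POSIX_C_SOURCE on some systems):

#include <stdlib.h>
#include <string.h>

int main() {
    const char *src = "hello";
    char *dest = strdup(src); // allocates strlen(src) + 1 bytes and copies,
                              // terminator included
    if (dest != nullptr) {
        // ... use dest ...
        free(dest); // strdup'd memory comes from malloc: free(), not delete[]
    }
    return 0;
}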
This is why you should always, always, always run valgrind on any C program that appears to work.
Yeah, everyone has covered the major point; you are not guaranteed to fail. The fact is that the null terminator is usually 0 and 0 is a pretty common value to be sitting in any particular memory address. So it just happens to work. You could test this by taking a set of memory, writing a bunch of garbage to it and then writing that string there and trying to work with it.
Anyway, the major issue I see here is that you are talking about C but you have this line of code:
char* dest = new char[strlen(src)];
This won't compile in any standard C compiler. There's no new keyword in C; that's C++. In C, you would use one of the memory allocation functions, usually malloc, as sketched below. I know it seems nitpicky, but really, it's not.
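A minimal sketch of the same fix using malloc (written so it compiles as either C or C++; plain C would not require the cast):

#include <stdlib.h>
#include <string.h>

int main(void) {
    const char *src = "hello";
    char *dest = (char *)malloc(strlen(src) + 1); /* +1 for the terminator */
    if (dest) {
        strcpy(dest, src);
        /* ... use dest ... */
        free(dest);
    }
    return 0;
}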