Does MSVC guarantee a null character in an uninitialized local character array? - c++

I am working in Visual Studio 2010 and I received a project where the original programmer did this about half the time inside functions and methods:
TCHAR some_local_variable[size] = {0};
and the other half of the time, he did not initialize the array:
TCHAR some_local_variable[size];
I am well aware of what the first assignment does, but I was wondering if the MSVC compiler guarantees a null character at index 0 in the second case.
I did test this out and the first character was set to 0 followed by garbage as expected, but I am not sure whether this is guaranteed.
Secondly, even if a null character is guaranteed (or if it isn't, simply setting the first character to 0), is there any good reason to initialize the entire array with 0s, especially when I always do the following after any string manipulation:
some_local_variable[size-1] = 0;
My only thought is there could be a problem if the function manipulating the string did not null terminate AND the number of characters copied were less than size-1. For example, if it copied size-5 characters and did not null terminate, my termination at size-1 would potentially expose garbage from size-4 to size-2. However, I don't think this is a problem with the majority of standard library or Win32 functions.

No, it doesn't. Uninitialized memory locations have random contents. (Debug builds may do some initialization to help detect memory corruption but that doesn't happen in release builds.)
You should not rely on any memory contents unless you explicitly initialized that memory region to some known value.
This question has details about the memory initialization for debug builds. (I'm not sure if the listings are complete.)
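For reference, here is a minimal sketch (using plain char, though the same rules apply to TCHAR) contrasting the two declarations from the question with the bounded-copy-plus-explicit-terminator pattern the questioner already uses:
#include <cstdio>
#include <cstring>

int main()
{
    // Aggregate initialization: the 0 initializes the first element and the
    // compiler zero-fills the rest, so the whole array is zeroed.
    char initialized[16] = {0};

    // No initializer: the contents are indeterminate in a release build, so
    // reading any element before writing it is undefined behavior.
    char uninitialized[16];

    // The safe pattern regardless of how the array was declared: copy with a
    // bounded function, then terminate explicitly.
    std::strncpy(uninitialized, "hello", sizeof uninitialized - 1);
    uninitialized[sizeof uninitialized - 1] = '\0';

    std::printf("[%s] [%s]\n", initialized, uninitialized);  // prints "[] [hello]"
    return 0;
}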

I don't understand memory allocation and strcpy [duplicate]

Here's a sample of my code:
char chipid[13];
void initChipID() {
    write_to_logs("Called initChipID");
    strcpy(chipid, string2char(twelve_char_string));
    write_to_logs("Chip ID: " + String(chipid));
}
Here's what I don't understand: even if I define chipid as char[2], I still get the expected result printed to the logs.
Why is that? Shouldn't the allocated memory space for chipid be overflown by the strcpy, and only the first 2 char of the string be printed?
Here's what I don't understand: even if I define chipid as char[2], I still get the expected result printed to the logs.
Then you are (un)lucky. You are especially lucky if the undefined behavior produced by the overflow does not manifest as corruption of other data, yet also does not crash the program. The behavior is undefined, so you should not interpret whatever manifestation it takes as something you should rely upon, or that is specified by the language.
Why is that?
The language does not specify that it will happen, and it certainly doesn't specify why it does happen in your case.
In practice, the manifestation you observe is as if strcpy writes the full data into memory starting at the beginning of your array and extending past its end, overwriting anything else your program may have stored in that space, and the program subsequently reads it back via a corresponding overflowing read.
Shouldn't the allocated memory space for chipid be overflown by the strcpy,
Yes.
and only the first 2 char of the string be printed?
No, the language does not specify what happens once the program exercises UB by performing a buffer overflow (or by other means). But also no, C arrays are represented in memory simply as a flat sequence of contiguous elements, with no explicit boundary. This is why C strings need to be terminated. String functions do not see the declared size of the array containing a string's elements, they see only the element sequence.
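A small sketch of that last point, showing that the declared array size and the string the array happens to hold are two independent things as far as the string functions are concerned:
#include <cstring>
#include <iostream>

int main()
{
    // Room for 8 elements, but the string stored in it is 3 characters long;
    // strlen stops at the '\0', it never sees the array boundary.
    char buf[8] = "abc";

    std::cout << sizeof buf << ' ' << std::strlen(buf) << '\n';  // prints "8 3"
    return 0;
}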
You have it part correct: "the allocated memory space for chipid be overflowed by the strcpy" -- this is true. And this is why you get the full result (well, the result of an overflow is undefined, and could be a crash or other result).
C/C++ gives you a lot of power when it comes to memory, and with great power comes great responsibility. What you are doing gives undefined behaviour, meaning it may appear to work, but it will cause problems sooner or later because strcpy is writing to memory it is not supposed to write to.
You will find that you can get away with A LOT of things in C/C++. However, things like this will give you headaches later on when your program unexpectedly crashes, and the crash could be in an entirely different part of your program, which makes it difficult to debug.
Anyway, if you are using C++, you should use std::string, which makes things like this a lot easier.
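As a rough, self-contained sketch of that suggestion (the questioner's environment appears to use an Arduino-style String class and logging helpers, so this only illustrates the general idea):
#include <iostream>
#include <string>

int main()
{
    std::string twelve_char_string = "0123456789AB";  // hypothetical 12-character ID

    // std::string owns and sizes its own buffer, so there is no fixed-size
    // array to overflow and no call to strcpy.
    std::string chipid = twelve_char_string;

    std::cout << "Chip ID: " << chipid << '\n';
    return 0;
}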

Does buffer overflow happen in C++ strings?

This is concerning strings in C++. I have not touched C/C++ for a very long time; in fact I did programming in those languages only for the first year in my college, about 7 years ago.
In C, to hold strings I had to create character arrays (whether static or dynamic is not of concern). That would mean I needed to guess well in advance the size of the string an array would contain. Well, I applied the same approach in C++. I was aware that there was a std::string class but I never got around to using it.
My question is that since we never declare the size of an array/string in the std::string class, does a buffer overflow occur when writing to it? I mean, in C, if the array's size was 10 and I typed more than 10 characters on the console, then that extra data would be written into some other object's memory place, which is adjacent to the array. Can a similar thing happen in std::string when using the cin object?
Do I have to guess the size of the string before hand in C++ when using std::string?
Well! Thanks to you all. There is no one right answer on this page (a lot of different explanations provided), so I am not selecting any single one as such. I am satisfied with the first 5. Take Care!
Depending on the member(s) you are using to access the string object, yes. So, for example, if you use reference operator[](size_type pos) where pos > size(), yes, you would.
Assuming no bugs in the standard library implementation, no. A std::string always manages its own memory.
Unless, of course, you subvert the accessor methods that std::string provides, and do something like:
std::string str = "foo";
char *p = (char *)str.c_str();
strcpy(p, "blah");
You have no protection here, and are invoking undefined behaviour.
The std::string generally protects against buffer overflow, but there are still situations in which programming errors can lead to buffer overflows. While the checked accessor at() throws an out_of_range exception when an operation references memory outside the bounds of the string, the subscript operator [] performs no bounds checking at all.
Another problem occurs when converting std::string objects to C-style strings. If you use string::c_str() to do the conversion, you get a properly null-terminated C-style string. However, if you use string::data(), which returns a pointer to the string's internal array, the buffer was not guaranteed to be null terminated prior to C++11. In older standards the only difference between c_str() and data() was that c_str() adds a trailing null byte; since C++11 both are null terminated and equivalent.
Finally, many existing C++ programs and libraries have their own string classes. To use these libraries, you may have to use these string types or constantly convert back and forth. Such libraries are of varying quality when it comes to security. It is generally best to use the standard library (when possible) or to understand the semantics of the selected library. Generally speaking, libraries should be evaluated based on how easy or complex they are to use, the type of errors that can be made, how easy these errors are to make, and what the potential consequences may be.
Refer to https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/coding/295-BSI.html
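To make the difference concrete, a minimal sketch of the checked accessor versus the unchecked subscript operator:
#include <iostream>
#include <stdexcept>
#include <string>

int main()
{
    std::string s = "abc";

    try {
        std::cout << s.at(10) << '\n';   // checked access: throws std::out_of_range
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }

    // std::cout << s[10];               // unchecked access: undefined behavior, no exception
    return 0;
}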
In C the cause is explained as follows:
#include <string.h>

void function (char *str) {
    char buffer[16];
    strcpy (buffer, str);   /* no bounds checking: overflows buffer */
}

int main () {
    char *str = "I am greater than 16 bytes"; /* length of str = 27 bytes including '\0' */
    function (str);
    return 0;
}
This program exhibits undefined behavior, because a string (str) of 27 bytes has been copied to a location (buffer) that has been allocated for only 16 bytes. The extra bytes run past the buffer and overwrite the space allocated for the frame pointer, return address and so on, which in turn corrupts the process stack. The function used to copy the string is strcpy, which performs no bounds checking. Using strncpy (together with explicit null termination) would have prevented this corruption of the stack. This classic example shows that a buffer overflow can overwrite a function's return address, which in turn can alter the program's execution path. Recall that a function's return address is the address of the next instruction in memory, which is executed immediately after the function returns.
Here is a good tutorial that can answer this satisfactorily.
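As a rough sketch of the strncpy fix mentioned above (strncpy adds no terminator when it truncates, so the last element is set explicitly):
#include <stdio.h>
#include <string.h>

void function(const char *str)
{
    char buffer[16];

    /* Bounded copy: at most sizeof buffer - 1 characters are written. */
    strncpy(buffer, str, sizeof buffer - 1);
    buffer[sizeof buffer - 1] = '\0';

    printf("%s\n", buffer);   /* prints "I am greater th" (truncated, but no overflow) */
}

int main(void)
{
    const char *str = "I am greater than 16 bytes";
    function(str);
    return 0;
}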
In C++, the std::string class starts out with a minimum size (or you can specify a starting size). If that size is exceeded, std::string allocates more dynamic memory.
Assuming the library providing std::string is correctly written, you cannot cause a buffer overflow by adding characters to a std::string object.
Of course, bugs in the library are not impossible.
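A small sketch of that behavior (the exact capacity values printed are implementation-defined):
#include <iostream>
#include <string>

int main()
{
    std::string s;
    std::cout << "initial capacity: " << s.capacity() << '\n';

    // Appending past the current capacity makes std::string reallocate its
    // own storage; characters are never written past the buffer it owns.
    for (int i = 0; i < 1000; ++i)
        s += 'x';

    std::cout << "size: " << s.size() << ", capacity: " << s.capacity() << '\n';
    return 0;
}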
"Do buffer overflows occur in C++ code?"
To the extent that C programs are legal C++ code (they almost all are), and C programs have buffer overflows, C++ programs can have buffer overflows.
Being richer than C, I'm sure C++ can have buffer overflows in ways that C cannot :-}

sizeof("...") in visual c++ makes me unhappy

I was writing a piece of code where I use sizeof("somestring") as a parameter of a function, then I noticed the function was not returning the expected value, so I went to see the corresponding asm code and I found an unpleasant surprise. Does anyone have an explanation for this (see the picture)?
I know there are 1000+ different ways of doing this, I already implemented another one of them, but I do want to know the reason behind this behaviour.
For the curious, this is Visual Studio 2008 SP1.
The value 5 is correct. The constant includes the zero terminator byte. The display of 4 in the watch window is the one that does not appear to be correct.
String literals are of type "array of n const char" ([lex.string], ¶8), where n is the number of chars of which the string is composed. Since the string is null-terminated, sizeof will return the number of "normal" characters plus 1; the watch window is wrong, it's probably a bug (as #Gene Bushuyev said, it's probably interpreting it as a pointer instead of as a literal=array).
The fact that the value 5 is embedded into the code is normal, being sizeof a compile-time operator.
There is a C-string terminator '\0' at the end of every C-string, so "pdfa" is actually the char array {'p', 'd', 'f', 'a', '\0'}, but the '\0' will not be printed. Use strlen("pdfa") instead.
Remember that C strings contain an ending zero \0. Five is the correct value.
Well, 5 is the correct value of sizeof("PDFA"). 4 characters + trailing zero.
Also, keep in mind that "The result does not necessarily correspond to the size calculated by adding the storage requirements of the individual members. The /Zp compiler option and the pack pragma affect alignment boundaries for members."
Speaking of the Watch window, I think it simply shows you the size of the pointer (const char*) itself. Try recompiling the program in 64-bit mode and check what the Watch window shows then. If I am right, then you will see 8.
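A quick sketch separating the three quantities being mixed up here (the size of the literal, its length, and the size of a pointer to it):
#include <cstring>
#include <iostream>

int main()
{
    const char *p = "PDFA";

    std::cout << sizeof("PDFA")      << '\n';  // 5: four characters plus the trailing '\0'
    std::cout << std::strlen("PDFA") << '\n';  // 4: the terminator is not counted
    std::cout << sizeof(p)           << '\n';  // size of a pointer: typically 4 on 32-bit, 8 on 64-bit builds
    return 0;
}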
The reason that Things Go Wrong™ here is that you have chosen a too low level of abstraction, the memcmp.
One level up you have strcmp and wcscmp.
And one level up from that you have std::string and std::wstring.
The "speed" (hah!) of your chosen lowest level possible abstraction is offset by
Incorrect result.
Inefficiency due to lack of type knowledge (wide or narrow string, your code doesn't know).
Inefficiency due to lack of data knowledge (uppercase or lowercase).
Instead of wasting time on fixing the problems of the inefficient lowest level code, and wasting time on figuring out baffling details of low level tools, use a higher and safer level of abstraction.
Just for the record, sizeof( "abcd" ) is 5. The watch window is probably, as Hans Passant remarked, displaying the size of a pointer. However, I disagree with Hans that the debugger generally has no way to know the size of an array: for a debug build it can know anything and everything about the original source, including the verbatim original source if needed (and it is displaying that verbatim original source, in context). So, that 4 is IMHO a bug one way or the other. Either a bug in the debugger code, or a bug in its design.
Sizeof is an operator that evaluates to a size_t, usually an unsigned int on 32-bit platforms. That is why you see it as 4 in the debugger. The result of sizeof is an rvalue, so you cannot set a watchpoint on its memory. If you could, the location would contain 5: the size of your string plus the terminator.

Why is address zero used for the null pointer?

In C (or C++ for that matter), pointers are special if they have the value zero: I am advised to set pointers to zero after freeing their memory, because it means freeing the pointer again isn't dangerous; when I call malloc it returns a pointer with the value zero if it can't get me memory; I use if (p != 0) all the time to make sure passed pointers are valid, etc.
But since memory addressing starts at 0, isn't 0 just as a valid address as any other? How can 0 be used for handling null pointers if that is the case? Why isn't a negative number null instead?
Edit:
A bunch of good answers. I'll summarize what has been said in the answers expressed as my own mind interprets it and hope that the community will correct me if I misunderstand.
Like everything else in programming it's an abstraction. Just a constant, not really related to the address 0. C++0x emphasizes this by adding the keyword nullptr.
It's not even an address abstraction, it's the constant specified by the C standard and the compiler can translate it to some other number as long as it makes sure it never equals a "real" address, and equals other null pointers if 0 is not the best value to use for the platform.
In case it's not an abstraction, which was the case in the early days, the address 0 is used by the system and off limits to the programmer.
My negative number suggestion was a little wild brainstorming, I admit. Using a signed integer for addresses is a little wasteful if it means that apart from the null pointer (-1 or whatever) the value space is split evenly between positive integers that make valid addresses and negative numbers that are just wasted.
If any number is always representable by a datatype, it's 0. (Probably 1 is too. I think of the one-bit integer which would be 0 or 1 if unsigned, or just the signed bit if signed, or the two bit integer which would be [-2, 1]. But then you could just go for 0 being null and 1 being the only accessible byte in memory.)
Still there is something that is unresolved in my mind. The Stack Overflow question Pointer to a specific fixed address tells me that even if 0 for null pointer is an abstraction, other pointer values aren't necessarily. This leads me to post another Stack Overflow question, Could I ever want to access the address zero?.
2 points:
only the constant value 0 in the source code is the null pointer - the compiler implementation can use whatever value it wants or needs in the running code. Some platforms have a special pointer value that's 'invalid' that the implementation might use as the null pointer. The C FAQ has a question, "Seriously, have any actual machines really used nonzero null pointers, or different representations for pointers to different types?", that points out several platforms that used this property of 0 being the null pointer in C source while represented differently at runtime. The C++ standard has a note that makes clear that converting "an integral constant expression with value zero always yields a null pointer, but converting other expressions that happen to have value zero need not yield a null pointer".
a negative value might be just as usable by the platform as an address - the C standard simply had to choose something to use to indicate a null pointer, and zero was chosen. I'm honestly not sure if other sentinel values were considered.
The only requirements for a null pointer are:
it's guaranteed to compare unequal to a pointer to an actual object
any two null pointers will compare equal (C++ refines this such that this only needs to hold for pointers to the same type)
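Both guarantees are easy to demonstrate; a minimal sketch using the C++11 nullptr spelling alongside the classic constant 0:
#include <iostream>

int main()
{
    int object = 42;
    int *valid  = &object;
    int *null_a = nullptr;   // C++11 spelling
    int *null_b = 0;         // the constant-zero spelling

    std::cout << std::boolalpha;
    std::cout << (null_a == valid)  << '\n';  // false: unequal to any pointer to an actual object
    std::cout << (null_a == null_b) << '\n';  // true: any two null pointers compare equal
    return 0;
}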
Historically, the low end of the address space was often occupied by ROM, the operating system, or low-level interrupt handling routines. Nowadays, since everything is virtual (including the address space), the operating system can map any allocation to any address, so it can specifically NOT allocate anything at address 0.
IIRC, the "null pointer" value isn't guaranteed to be zero. The compiler translates 0 into whatever "null" value is appropriate for the system (which in practice is probably always zero, but not necessarily). The same translation is applied whenever you compare a pointer against zero. Because you can only compare pointers against each other and against this special-value-0, it insulates the programmer from knowing anything about the memory representation of the system. As for why they chose 0 instead of 42 or somesuch, I'm going to guess it's because most programmers start counting at 0 :) (Also, on most systems 0 is the first memory address and they wanted it to be convenient, since in practice translations like I'm describing rarely actually take place; the language just allows for them).
You must be misunderstanding the meaning of constant zero in pointer context.
Neither in C nor in C++ can pointers "have the value zero". Pointers are not arithmetic objects. They cannot have numerical values like "zero" or "negative" or anything of that nature. So your statement about "pointers ... have the value zero" simply makes no sense.
In C & C++ pointers can have the reserved null-pointer value. The actual representation of the null-pointer value has nothing to do with any "zeros". It can be absolutely anything appropriate for a given platform. It is true that on most platforms the null-pointer value is represented physically by an actual zero address value. However, if on some platform address 0 is actually used for some purpose (i.e. you might need to create objects at address 0), the null-pointer value on such a platform will most likely be different. It could be physically represented as the 0xFFFFFFFF address value or as the 0xBAADBAAD address value, for example.
Nevertheless, regardless of how the null-pointer value is respresented on a given platform, in your code you will still continue to designate null-pointers by constant 0. In order to assign a null-pointer value to a given pointer, you will continue to use expressions like p = 0. It is the compiler's responsibility to realize what you want and translate it into the proper null-pointer value representation, i.e. to translate it into the code that will put the address value of 0xFFFFFFFF into the pointer p, for example.
In short, the fact that you use 0 in your source code to generate null-pointer values does not mean that the null-pointer value is somehow tied to address 0. The 0 that you use in your source code is just "syntactic sugar" that has absolutely no relation to the actual physical address the null-pointer value is "pointing" to.
But since memory addressing starts at 0, isn't 0 just as a valid address as any other?
On some/many/all operating systems, memory address 0 is special in some way. For example, it's often mapped to invalid/non-existent memory, which causes an exception if you try to access it.
Why isn't a negative number null instead?
I think that pointer values are typically treated as unsigned numbers: otherwise for example a 32-bit pointer would only be able to address 2 GB of memory, instead of 4 GB.
My guess would be that the magic value 0 was picked to define an invalid pointer since it could be tested for with less instructions. Some machine languages automatically set the zero and sign flags according to the data when loading registers so you could test for a null pointer with a simple load then and branch instructions without doing a separate compare instruction.
(Most ISAs only set flags on ALU instructions, not loads, though. And usually you aren't producing pointers via computation, except in the compiler when parsing C source. But at least you don't need an arbitrary pointer-width constant to compare against.)
On the Commodore Pet, Vic20, and C64 which were the first machines I worked on, RAM started at location 0 so it was totally valid to read and write using a null pointer if you really wanted to.
I think it's just a convention. There must be some value to mark an invalid pointer.
You just lose one byte of address space, which should rarely be a problem.
There are no negative pointers. Pointers are always unsigned. Also if they could be negative your convention would mean that you lose half the address space.
Although C uses 0 to represent the null pointer, do keep in mind that the value of the pointer itself may not be a zero. However, most programmers will only ever use systems where the null pointer is, in fact, 0.
But why zero? Well, it's one address that every system shares. And oftentimes the low addresses are reserved for operating system purposes thus the value works well as being off-limits to application programs. Accidental assignment of an integer value to a pointer is as likely to end up zero as anything else.
Historically the low memory of an application was occupied by system resources. It was in those days that zero became the default null value.
While this is not necessarily true for modern systems, it is still a bad idea to set pointer values to anything but what memory allocation has handed you.
Regarding the argument about not setting a pointer to null after deleting it so that future deletes "expose errors"...
If you're really, really worried about this then a better approach, one that is guaranteed to work, is to leverage assert():
...
assert(ptr && "You're deleting this pointer twice, look for a bug?");
delete ptr;
ptr = 0;
...
This requires some extra typing, and one extra check during debug builds, but it is certain to give you what you want: notice when ptr is deleted 'twice'. The alternative given in the comment discussion, not setting the pointer to null so you'll get a crash, is simply not guaranteed to be successful. Worse, unlike the above, it can cause a crash (or much worse!) on a user if one of these "bugs" gets through to the shelf. Finally, this version lets you continue to run the program to see what actually happens.
I realize this does not answer the question asked, but I was worried that someone reading the comments might come to the conclusion that it is considered 'good practice' to NOT set pointers to 0 if it is possible they get sent to free() or delete twice. In those few cases when it is possible, it is NEVER a good practice to use undefined behavior as a debugging tool. Nobody who has ever had to hunt down a bug that was ultimately caused by deleting an invalid pointer would propose this. These kinds of errors take hours to hunt down and nearly always affect the program in a totally unexpected way that is hard or impossible to track back to the original problem.
An important reason why many operating systems use all-bits-zero for the null pointer representation, is that this means memset(struct_with_pointers, 0, sizeof struct_with_pointers) and similar will set all of the pointers inside struct_with_pointers to null pointers. This is not guaranteed by the C standard, but many, many programs assume it.
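A short sketch of the distinction that answer is making (memset relies on the all-bits-zero representation, which is common but not guaranteed; value-initialization is what the standard guarantees):
#include <cstring>
#include <iostream>

struct Node {
    Node *next;
    char *name;
};

int main()
{
    Node n;
    std::memset(&n, 0, sizeof n);   // all-bits-zero: null pointers on mainstream platforms, not guaranteed by the standard

    Node m{};                       // value-initialization: guaranteed null pointers

    std::cout << std::boolalpha
              << (n.next == nullptr) << ' ' << (m.next == nullptr) << '\n';
    return 0;
}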
In one of the old DEC machines (PDP-8, I think), the C runtime would memory protect the first page of memory so that any attempt to access memory in that block would cause an exception to be raised.
The choice of sentinel value is arbitrary, and this is in fact being addressed by the next version of C++ (informally known as "C++0x", most likely to be known in the future as ISO C++ 2011) with the introduction of the keyword nullptr to represent a null valued pointer. In C++, a value of 0 may be used as an initializing expression for any POD and for any object with a default constructor, and it has the special meaning of assigning the sentinel value in the case of a pointer initialization. As for why a negative value was not chosen, addresses usually range from 0 to 2^N - 1 for some value N. In other words, addresses are usually treated as unsigned values. If the maximum value were used as the sentinel value, then it would have to vary from system to system depending on the size of memory, whereas 0 is always a representable address. It is also used for historical reasons, as memory address 0 was typically unusable in programs, and nowadays most OSs have parts of the kernel loaded into the lower page(s) of memory, and such pages are typically protected in such a way that, if touched (dereferenced) by a program (save the kernel), they will cause a fault.
It has to have some value. Obviously you don't want to step on values the user might legitimately want to use. I would speculate that since the C runtime provides the BSS segment for zero-initialized data, it makes a certain degree of sense to interpret zero as an un-initialized pointer value.
Rarely does an OS allow you to write to address 0. It's common to stick OS-specific stuff down in low memory; namely, IDTs, page tables, etc. (The tables have to be in RAM, and it's easier to stick them at the bottom than to try and determine where the top of RAM is.) And no OS in its right mind will let you edit system tables willy-nilly.
This may not have been on K&R's minds when they made C, but it (along with the fact that 0==null is pretty easy to remember) makes 0 a popular choice.
The value 0 is a special value that takes on various meanings in specific expressions. In the case of pointers, as has been pointed out many, many times, it is used probably because at the time it was the most convenient way of saying "insert the default sentinel value here." As a constant expression, it does not have the same meaning as bitwise zero (i.e., all bits set to zero) in the context of a pointer expression. In C++, there are several types for which a null value does not have an all-bits-zero representation, such as pointers to data members and pointers to member functions.
Thankfully, C++0x has a new keyword for "expression that means a known invalid pointer that does not also map to bitwise zero for integral expressions": nullptr. Although there are a few systems that you can target with C++ that allow dereferencing of address 0 without barfing, so programmer beware.
There are already a lot of good answers in this thread; there are probably many different reasons for preferring the value 0 for null pointers, but I'm going to add two more:
In C++, zero-initializing a pointer will set it to null.
On many processors it is more efficient to set a value to 0 or to test for it equal/not equal to 0 than for any other constant.
This depends on how pointers are implemented in C/C++. There is no specific reason why NULL has to be equivalent to 0 in assignments to a pointer.
A null pointer is not the same thing as a null value. For example, the same strchr function of C will return a null pointer (shown as empty on the console), while returning the value instead would print (null) on the console.
True function:
char *ft_strchr(const char *s, int c)
{
    int i;

    if (!s)
        return (NULL);
    i = 0;
    while (s[i])
    {
        if (s[i] == (char)c)
            return ((char*)(s + i));
        i++;
    }
    if (s[i] == (char)c)            /* also match the terminating '\0' itself */
        return ((char*)(s + i));
    return (NULL);
}
This will produce an empty string as the output (the returned pointer points at the terminating '\0').
While returning the value s[i] instead gives us (null) on the console:
char *ft_strchr(const char *s, int c)
{
    int i;

    if (!s)
        return (NULL);
    i = 0;
    while (s[i])
    {
        if (s[i] == (char)c)
            return ((char*)(s + i));
        i++;
    }
    if (s[i] == (char)c)
        return (s[i]);              /* returns the character value where a char* is expected */
    return (NULL);
}
There are historic reasons for this, but there are also optimization reasons for it.
It is common for the OS to provide a process with memory pages initialized to 0. If a program wants to interpret part of that memory page as a pointer then it is 0, so it is easy enough for the program to determine that that pointer is not initialized. (this doesn't work so well when applied to uninitialized flash pages)
Another reason is that on many many processors it is very very easy to test a value's equivalence to 0. It is sometimes a free comparison done without any extra instructions needed, and usually can be done without needing to provide a zero value in another register or as a literal in the instruction stream to compare to.
The cheap comparisons for most processors are the signed less than 0, and equal to 0. (signed greater than 0 and not equal to 0 are implied by both of these)
Since one value out of all possible values needs to be reserved as bad or uninitialized, you might as well make it the one that has the cheapest test for equivalence to the bad value. This is also true for '\0'-terminated character strings.
If you were to try to use greater or less than 0 for this purpose then you would end up chopping your range of addresses in half.
The constant 0 is used instead of NULL because C was made by some cavemen trillions of years ago, NULL, NIL, ZIP, or NADDA would have all made much more sense than 0.
But since memory addressing starts at 0, isn't 0 just as a valid address as any other?
Indeed. Although a lot of operating systems disallow you from mapping anything at address zero, even in a virtual address space (people realized C is an insecure language and, reflecting the fact that null pointer dereference bugs are very common, decided to "fix" them by disallowing the userspace code to map page 0; thus, if you call a callback but the callback pointer is NULL, you won't end up executing some arbitrary code).
How can 0 be used for handling null pointers if that is the case?
Because 0 used in comparison to a pointer will be replaced with some implementation specific value, which is the return value of malloc on a malloc failure.
Why isn't a negative number null instead?
This would be even more confusing.

c++ what happens if you print more characters with sprintf, than the char pointer has allocated?

I assume this is a common way to use sprintf:
char pText[x];
sprintf(pText, "helloworld %d", Count );
but what exactly happens if the char array has less memory allocated than sprintf will print to it?
i.e. what if x is smaller than the length of the formatted output?
I am asking since I get some strange behaviour in the code that follows the sprintf statement.
It's not possible to answer in general "exactly" what will happen. Doing this invokes what is called Undefined behavior, which basically means that anything might happen.
It's a good idea to simply avoid such cases, and use safe functions where available:
char pText[12];
snprintf(pText, sizeof pText, "helloworld %d", count);
Note how snprintf() takes an additional argument that is the buffer size, and won't write more than there is room for.
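A follow-up sketch: assuming a C99/C++11-conforming snprintf (which returns the length the full output would have had), the return value can also be used to detect that truncation occurred:
#include <cstdio>

int main()
{
    char pText[12];
    int count = 42;

    int needed = std::snprintf(pText, sizeof pText, "helloworld %d", count);
    if (needed < 0 || needed >= static_cast<int>(sizeof pText))
        std::fputs("output truncated\n", stderr);   // "helloworld 42" needs 14 bytes, the buffer has 12

    std::puts(pText);   // prints the truncated "helloworld "
    return 0;
}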
This is a common error and leads to memory after the char array being overwritten. So, for example, there could be some ints or another array in the memory after the char array and those would get overwritten with the text.
See a nice detailed description about the whole problem (buffer overflows) here. There's also a comment that some architectures provide a snprintf routine that has a fourth parameter that defines the maximum length (in your case x). If your compiler doesn't know it, you can also write it yourself to make sure you can't get such errors (or just check that you always have enough space allocated).
Note that the behaviour after such an error is undefined and can lead to very strange errors. Variables are usually aligned at memory locations divisible by 4, so you often won't notice the error when you have written only one or two bytes too much (e.g. forgetting to leave room for the NUL terminator), but you will get strange errors in other cases. These errors are hard to debug because other variables get changed and the failures will often occur in a completely different part of the code.
This is called a buffer overrun.
sprintf will overwrite the memory that happens to follow pText address-wise. Since pText is on the stack, sprintf can overwrite local variables, function arguments and the return address, leading to all sorts of bugs. Many security vulnerabilities result from this kind of code — e.g. an attacker uses the buffer overrun to write a new return address pointing to his own code.
The behaviour in this situation is undefined. Normally, you will crash, but you might also see no ill effects, strange values appearing in unrelated variables and that kind of thing. Your code might also call into the wrong functions, format your hard-drive and kill other running programs. It is best to resolve this by allocating more memory for your buffer.
I have done this many times and you will receive a memory corruption error. As far as I remember, I did something like this:
std::vector<char> vecMyObj(10);
vecMyObj.resize(10);
sprintf(&vecMyObj[0], "helloworld %d", count);
But when the destructor of the vector was called, my program received a memory corruption error; if the formatted string (including its terminator) fits within 10 characters, it works successfully.
Can you spell Buffer Overflow? One possible result is stack corruption, making your app vulnerable to stack-based exploitation.