Consider this program:
#include <iostream>
#include <vector>
using namespace std;
int main()
{
    vector<double> a;
    while (a.size() <= 10)
        a.push_back(2);
    cout << a[15] << endl;
}
I would expect it to crash when asked to access the 15th element of a, but it does not (it outputs 0 on my system). What gives? Is this some new feature in gcc or something?
You are accessing an invalid memory location, which leads to undefined behavior. So anything might happen.
This reference says that it
Returns a reference to the element at position n in the vector
container.
A similar member function, vector::at, has the same behavior as this
operator function, except that vector::at signals if the requested
position is out of range by throwing an exception.
So, this might or might not crash.
operator[] does no bounds checking, and in this case you were *un*lucky: the address at that location could be read without causing a run-time error.
Could it be because the vector's capacity has grown past 15 by the time you access it?
Check capacity().
It is not strange: a vector grows its storage when an insertion would exceed the current capacity, and the growth is usually geometric, so by now it probably has room for more than 11 elements (maybe 16, depending on the growth factor).
If the memory you are reading is inside your program's address space, it won't crash; C++ does no bounds checking here.
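To see how much spare storage the vector actually grabbed, here is a minimal sketch (assuming nothing beyond the standard headers) that prints size() against capacity() after the same loop:

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<double> a;
    while (a.size() <= 10)
        a.push_back(2);
    // capacity() is implementation-defined; with the typical doubling
    // growth strategy it will be 16 here, so index 15 lands inside the
    // allocation even though it is past size().
    cout << "size: " << a.size() << ", capacity: " << a.capacity() << endl;
}

Even when capacity() reports 16, reading a[15] is still undefined behavior; it merely tends not to crash.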
I ran the following code on http://cpp.sh/.
The output shows that name1.size() is 2, yet the assignment name1[100] = 10 appears to work and the value reads back as 10:
// Example program
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    vector<vector<int>> name;
    vector<int> name1;
    name1.push_back(1);
    name1.push_back(3);

    vector<int> name2;
    name2.push_back(4);

    name.push_back(name1);
    name.push_back(name2);

    name1[100] = 10;
    cout << name1.size() << endl;
    cout << name1[100];
}
The short answer is that you are invoking undefined behavior, so you won't find any further specification of what must happen in the standard.
The long answer is that C++ is an unsafe language and doesn't guarantee any type of deterministic error for various invalid operations, including accessing a vector out of bounds using operator[]. Instead, it uses the broad brush of undefined behavior which basically allows anything at all to happen: the idea being that by allowing the user the flexibility to avoid checks which they know are redundant, good performance can be achieved for well-behaved code.
If you want the vector to check that you aren't accessing an out-of-bounds index, no problem: just use vector::at(), which does exactly that and throws std::out_of_range for invalid accesses.
As for why your particular code is (apparently) returning a value, note that a typical¹ implementation of operator[] will just access the underlying storage directly, meaning that at the assembly level you will access whatever lies 100 * sizeof(int) bytes from the start of the storage underlying the vector. That's usually going to be something random on the heap (since the storage is usually allocated on the heap), but it may also be an inaccessible address, resulting in an access violation.
¹ Some compilers such as MSVC will provide more error checking in "debug" modes, which might cause operator[] to behave like vector::at(), which does do range checking and has defined behavior, and at least a few other compilers/standard libraries seem to be jumping on board with that idea.
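As a minimal sketch of the checked alternative mentioned above (assuming only the standard library), at() turns the same out-of-bounds index into a catchable exception instead of undefined behavior:

#include <iostream>
#include <stdexcept>
#include <vector>
using namespace std;

int main()
{
    vector<int> name1;
    name1.push_back(1);
    name1.push_back(3);
    try {
        name1.at(100) = 10; // index 100 >= size() of 2, so at() throws
    } catch (const out_of_range& e) {
        cout << "out_of_range: " << e.what() << endl;
    }
}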
You declared:
vector<int> name1; // An empty vector. name1.size() == 0 here.
Then you push_back two values:
name1.push_back(1);
name1.push_back(3); // name1.size() == 2. Indexing starts at 0, so
                    // name1[0] == 1 and name1[1] == 3
So the value of:
name1[100] = undefined (garbage). // You never initialised this element, so accessing it yields undefined behaviour.
An alternative for this is to use a default initialiser for an int vector and declare it like this:
vector<int> name1(101);
Here:
name1[100] = 0 // The constructor value-initialised all 101 elements (indices 0 to 100).
name1[0] = 0 // This is the first value of the now-initialised vector.
name1[69] = 0 // This is the 70th value of the initialised vector.
name1[101] = undefined // Again, undefined (garbage). You defined a vector of size n = 101, so the valid index range is [0, 100]: indexing is zero-based and the last element is n - 1 (101 - 1 = 100). Index 101 is out of bounds.
I was trying to use vectors in C++. I am trying to insert one element at a specified position using an iterator, and then remove some elements, again using an iterator. But I get an error when using the same iterator for both operations. Here is my code:
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> A(10);
    for (int i = 0; i < A.size(); i++)
    {
        A[i] = i + 1;
    }
    vector<int>::iterator p = A.begin();
    p += 2;
    A.insert(p, 1, 55);
    A.erase(p, p + 2);
    for (int i = 0; i < A.size(); i++)
    {
        cout << A[i] << "\n";
    }
    return 0;
}
This gives me the following output:
*** Error in `./temp': double free or corruption (out): 0x00000000017d4040 ***
55
3
4
5
6
7
8
9
10
Aborted (core dumped)
If I add the following two lines before A.erase, I get the correct answer:
p=A.begin();
p+=2;
A.erase(p,p+2);
So, if p still points to the same element (its value has not changed), why do I need to set the value of p again?
After inserting into or erasing from a std::vector, existing iterators may be invalidated and should not be used (using them leads to undefined behavior).
Remember that modifying a vector's contents can trigger a reallocation, so old iterators can end up pointing to deallocated memory (just like raw pointers).
So when you add the lines you mention and reinitialize the iterator, everything is OK. But after the insert, the existing p is no longer valid.
Check the paragraphs about 'iterator invalidation' in http://en.cppreference.com/w/cpp/container/vector/erase and http://en.cppreference.com/w/cpp/container/vector/insert.
You might consider adding a call to reserve to ensure that no reallocation happens on insert, but in my opinion such code would still be error-prone and harder to maintain.
According to the standard (N4296, C++14) [23.3.6.5/1], the insert operation for vector invalidates iterators, but not always:
Remarks: Causes reallocation if the new size is greater than the old capacity. If no reallocation happens,
all the iterators and references before the insertion point remain valid.
and for erase [23.3.6.5/3]:
Effects: Invalidates iterators and references at or after the point of the erase.
These are the rules; the behaviour you are seeing as correct is actually UB (undefined behaviour), which means it might look like it works, even 99% of the time. It also depends on your compiler's implementation.
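A minimal sketch of a safer arrangement (my own illustration, not from the quoted answers): vector::insert returns a fresh, valid iterator to the inserted element, so you can reuse that instead of the stale p:

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> A(10);
    for (size_t i = 0; i < A.size(); i++)
        A[i] = i + 1;

    vector<int>::iterator p = A.begin() + 2;
    p = A.insert(p, 55); // p now re-points at the inserted 55 and is valid again
    A.erase(p, p + 2);   // erase the 55 and the element after it

    for (size_t i = 0; i < A.size(); i++)
        cout << A[i] << "\n";
    return 0;
}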
When I use clear() on a std::vector, it is supposed to destroy all the elements in the vector, but it doesn't seem to.
Sample code:
vector<double> temp1(4);
cout << temp1.size() << endl;

temp1.clear();
cout << temp1.size() << endl;

temp1[2] = 343.5; // I should get a segmentation fault here ....
cout << "Printing..... " << temp1[2] << endl;
cout << temp1.size() << endl;
Now, I should have gotten a segmentation fault while trying to access the cleared vector, but instead it fills in the value (which, to me, looks very buggy).
Result looks as follows:
4
0
Printing..... 343.5
0
Is this normal? This is a very hard bug to spot, which basically killed my code for months.
You have no right to get a segmentation fault. For that matter, a segmentation fault isn't even part of C++. Your program is removing all elements from the vector, and you're illegally accessing the container out of bounds. This is undefined behaviour, which means anything can happen. And indeed, something happened.
When you access outside of the bounds of a vector, you get Undefined Behavior. That means anything can happen. Anything.
So you could get the old value, garbage, or a seg-fault. You can't depend on anything.
If you want bounds checking, use the at() member function instead of operator []. It will throw an exception instead of invoking Undefined Behavior.
From cppreference:
void clear();
Removes all elements from the container. Invalidates any references, pointers, or iterators referring to contained elements. May invalidate any past-the-end iterators. Many implementations will not release allocated memory after a call to clear(), effectively leaving the capacity of the vector unchanged.
So the reason there is no apparent problem is that the vector still has the memory available in store. Of course, this is merely implementation-specific behavior, not a bug. Besides, as the other answers point out, your program has undefined behavior for accessing the cleared contents in the first place, so technically anything can happen.
Let's imagine you're rich (perhaps you are, perhaps you aren't ... whatever)!
Since you're rich you buy a piece of land on Moorea (Windward Islands, French Polynesia).
You're very certain it is a nice property so you build a villa on that island and you live there.
Your villa has a pool, a tennis court, a big garage and even more nice stuff.
After some time you leave Moorea since you think it's getting really boring. A lot of sports but few people.
You sell your land and villa and decide to move somewhere else.
If you come back some time later you may encounter a lot of different things but you cannot be certain about even one of them.
Your villa may be gone, replaced by a club hotel.
Your villa may be still there.
The island may have sunk.
...
Who knows?
Even though the villa may no longer belong to you, you might even be able to jump in the pool or play tennis again.
There may also be another villa next to it where you can swim in an even bigger pool with nobody distracting you.
You have no guarantee of what you're going to discover if you come back again, and it is the same with your vector, which contains three pointers in the implementations I've looked at:
(The names may be different but the function is mostly the same.)
begin points to the start of the allocated memory location (i.e. X)
end which points to the end of the allocated memory +1 (i.e. begin+4)
last which points to the last element in the container +1 (i.e. begin+4)
By calling clear, the container may well destroy all elements and reset last = begin. The function size() will most likely return last - begin, and so you'll observe a container size of 0.
Nevertheless, begin may still be valid and there may still be memory allocated (end may still be begin+4). You can even still observe values you set before clear().
std::vector<int> a(4);
a[2] = 12;
cout << "a cap " << a.capacity() << ", ptr is " << a.data() << ", val 2 is " << a[2] << endl;
a.clear();
cout << "a cap " << a.capacity() << ", ptr is " << a.data() << ", val 2 is " << a[2] << endl;
Prints:
a cap 4, ptr is 00746570, val 2 is 12
a cap 4, ptr is 00746570, val 2 is 12
Why don't you observe any errors? It is because std::vector<T>::operator[] does not perform any out-of-bounds checks (in contrast to std::vector<T>::at(), which does).
Since C++ doesn't specify "segfaults", your program seems to operate properly.
Note: On MSVC 2012 operator[] performs boundary checks if compiled in the debug mode.
Welcome to the land of undefined behaviour! Things may or may not happen. You probably can't even be certain about a single circumstance.
You can take a risk and be bold enough to take a look into it but that is probably not the way to produce reliable code.
operator[] is efficient, but that comes at a price: it does not perform bounds checking.
There are safer yet still efficient ways to access a vector, such as iterators and so on.
If you need a vector for random access (i.e. not always sequential), either be very careful how you write your program, or use the less efficient at(), which in the same situation would have thrown an exception.
You can get a segfault, but it's not guaranteed, since accessing out-of-range elements of a vector with operator[] after a previous clear() is simply undefined behavior. From your post it looks like you want to test whether the elements are destroyed; you can use the public at() member function for this purpose:
The function automatically checks whether n is within the bounds of
valid elements in the vector, throwing an out_of_range exception if it
is not (i.e., if n is greater than or equal to its size). This is in
contrast with member operator[], that does not check against bounds.
In addition, after clear():
All iterators, pointers and references related to this container are
invalidated.
http://www.cplusplus.com/reference/vector/vector/at/
Try accessing an element well beyond the 4 you passed to the constructor; then you may get your segmentation fault.
Another idea, from cplusplus.com:
Clear content
Removes all elements from the vector (which are destroyed), leaving the container with a size of 0.
A reallocation is not guaranteed to happen, and the vector capacity is not guaranteed to change due to calling this function. A typical alternative that forces a reallocation is to use swap:
vector<T>().swap(x); // clear x reallocating
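Applied to the question's temp1, a minimal sketch (assuming you actually want the storage released, not just the size zeroed):

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<double> temp1(4);
    cout << temp1.capacity() << endl; // 4
    vector<double>().swap(temp1);     // swap contents with an empty temporary
    cout << temp1.capacity() << endl; // 0 on typical implementations
}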
If you use
temp1.at(2) = 343.5;
instead of
temp1[2] = 343.5;
you would find the problem. It is recommended to use at(), since operator[] doesn't check the bounds; that way you can avoid the bug without knowing anything about the STL vector's implementation.
By the way, I ran your code on my Ubuntu (12.04) box and it turned out as you say. However, on Win7 it reports "Assertion Failed".
Well, that reminds me of stringstream. If you define
stringstream str;
str << "3456";
then to REUSE str, I was told to do this:
str.str("");
str.clear();
instead of just
str.clear();
I also tried resize(0) on Ubuntu; it turned out to be useless here.
Yes, this is normal. clear() doesn't guarantee a reallocation. Try using resize() after clear().
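A minimal sketch of that suggestion (my illustration): resize() after clear() gives the vector real, value-initialized elements again, so indexing becomes legal:

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<double> temp1(4);
    temp1.clear();    // size 0: indexing any element is now undefined
    temp1.resize(4);  // size 4 again, elements value-initialized to 0.0
    temp1[2] = 343.5; // legal: index 2 < size()
    cout << temp1[2] << endl;
}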
One important addition to the answers so far: if the element type the vector is instantiated with provides a destructor, it will be called on clearing (and on resize(0), too).
Try this:
#include <cstdio>   // puts
#include <cstdlib>  // free
#include <cstring>  // strdup
#include <vector>

struct C
{
    char* data;
    C() { data = strdup("hello"); }
    C(C const& c) { data = strdup(c.data); }
    ~C() { free(data); data = 0; } // strdup() allocates with malloc, so free(), not delete
};

int main()
{
    std::vector<C> v;
    v.push_back(C());
    puts(v[0].data);
    v.clear();
    char* data = v[0].data; // likely to survive
    puts(data);             // likely to crash
    return 0;
}
This program will most likely crash with a segmentation fault, but (very likely) not at char* data = v[0].data; rather, it will crash at the line puts(data); (use a debugger to see).
Typical vector implementations leave the allocated memory intact just after calling the destructors (no guarantee, though: remember, it is undefined behaviour!). The last thing the destructor did was set data of the C instance to null, and although accessing v[0] is not valid in the sense of C++/vector, the memory is still there, so you can access it (illegally) without a segmentation fault. The crash then occurs when puts dereferences the char* data pointer, which is null...
I understand why this causes a segfault:
#include <algorithm>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    int iArr[5] = {1, 2, 3, 4, 5};
    int* p = iArr;

    copy(p, p + 5, v.begin());
    return 0;
}
But why does this not cause a segfault?
#include <algorithm>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    int iArr[5] = {1, 2, 3, 4, 5};
    int* p = iArr;

    v.reserve(1);
    copy(p, p + 5, v.begin());
    return 0;
}
Both are wrong, as you are copying into an empty vector, and copy requires that you have space for the insertion; it does not resize the container by itself. What you probably need here is back_insert_iterator and back_inserter:
copy(p, p+5, back_inserter(v));
This is undefined behavior: reserve() allocates a buffer for at least one element, and that storage is left uninitialized.
So either the buffer happens to be big enough that you can technically access elements beyond the first one, or it is not big enough and you just happen not to observe any problems.
The bottom line is - don't do it. Only access elements that are legally stored in the vector instance.
But why does this not cause a segfault?
Because the stars aligned. Or you were running in debug mode and the compiler did something to "help" you. Bottom line: you're doing the wrong thing and have crossed over into the dark and nondeterministic world of Undefined Behavior. You reserve one spot in the vector and then try to cram 5 elements into the reserve-ed space. Bad.
You have 3 options. In my personal order of preference:
1) Use a back_insert_iterator, which is designed for just this purpose. It is provided by #include <iterator>. The syntax is a bit funky, but fortunately a nice sugar-coated shortcut, back_inserter, is also provided:
#include <iterator>
// ...
copy( p, p+5, back_inserter(v) );
2) assign the elements to the vector. I prefer this method slightly less, simply because assign is a member of vector, and that strikes me as slightly less generic than using something from <algorithm>.
v.assign(p, p+5);
3) resize the vector to the right number of elements, then copy over them. I consider this to be a last-ditch effort in case everything else fails for whatever reason. It's not generic, and it just feels like a back-door method of getting the data into the vector. Note that a bare reserve is not enough here: copying into capacity beyond size() is still undefined.
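A minimal sketch of that third option (my wording; resize is used so the destination elements exist before copy writes to them):

#include <algorithm>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    int iArr[5] = {1, 2, 3, 4, 5};

    v.resize(5);                     // v now holds five value-initialized ints
    copy(iArr, iArr + 5, v.begin()); // writing into existing elements is fine
    return 0;
}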
This is wrong! It is undefined behavior to access memory you don't own, even if it works in an example. The reason it appears to work, I think, is that std::vector reserved more than one element.
Because you were unlucky. Accessing memory not allocated is UB.
Most likely because an empty vector doesn't have any memory allocated at all, so you are trying to write through a null pointer, which normally leads to an instant crash. In the second case the vector has at least some memory allocated, and you are most likely writing past the end of that array, which may or may not lead to a crash in C++.
Both are wrong.
It would be wrong, by the way, even to copy 1 element into the vector that way (or to reserve 5 and then copy that way).
The reason it most likely does not segfault is that the implementor felt it would be inefficient to allocate the memory for just 1 element just in case you wanted to grow it later, so maybe they allocated enough for 16 or 32 elements.
Doing reserve(5) first and then writing into 5 elements directly would probably not be Undefined Behaviour, but it would be incorrect, because the vector would not have a logical size of 5 yet and the copy would almost be "wasted": the vector would claim to still have a size of 0.
What would be valid behaviour is to reserve(5), insert an element, store its iterator somewhere, insert 4 more elements, and then look at the contents through the first iterator. reserve() guarantees that iterators do not become invalidated until the vector grows beyond that size, or a call such as erase(), clear(), resize() or another reserve() is made.
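A minimal sketch of that valid pattern (my own illustration of the description above):

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;
    v.reserve(5); // capacity >= 5, size still 0

    v.push_back(1);
    vector<int>::iterator first = v.begin(); // valid while no reallocation occurs

    for (int i = 2; i <= 5; ++i)
        v.push_back(i); // within reserved capacity, so 'first' stays valid

    cout << *first << endl; // prints 1
    return 0;
}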
Could somebody be kind enough to explain why in the world this gives me a segmentation fault error?
#include <vector>
#include <iostream>
using namespace std;

vector<double> freqnote;

int main()
{
    freqnote[0] = 16.35;
    cout << freqnote[0];
    return 0;
}
I had other vectors in the code and this is the only vector that seems to be giving me trouble.
I changed it to vector<int> freqnote; and changed the value to 16, and I STILL get the segmentation fault. What is going on?
I have other vector<int>s and they give me correct results.
Replace
freqnote[0] = 16.35;
with
freqnote.push_back(16.35);
and you'll be fine.
The error is due to that index being out of range. At the time you access the first element via [0], the vector has a size of 0 (and likely no allocated storage at all). push_back(), on the other hand, will expand the vector as necessary.
You can't initialise an element in a vector like that.
You have to go:
freqnote.push_back(16.35),
then access it as you would an array
You're accessing the vector out of bounds. First you need to initialize the vector, specifying its size:
int main()
{
    vector<int> v(10);
    v[0] = 10;
}
As has been said, it's an issue of accessing an out-of-range index in the vector.
A vector is a dynamically sized array: it begins with a size of 0, and you can then extend/shrink it to your heart's content.
There are 2 ways of accessing a vector element by index:
vector::operator[](size_t) (Experts only)
vector::at(size_t)
(I dispensed with the const overloads)
Both have the same semantics; however, the second is "secured" in the sense that it will perform bounds checking and throw a std::out_of_range exception in case you're out of bounds.
I would warmly recommend performing ALL accesses using at.
The performance penalty can be shrugged off for most use cases. operator[] should only be used by experts, after they have profiled the code and this spot has proved to be a bottleneck.
Now, for inserting new elements in the vector you have several alternatives:
push_back will append an element
insert will insert the element in front of the element pointed to by the iterator
Depending on the semantics you wish for, both are to be considered. And of course, both will make the vector grow appropriately.
Finally, you can also define the size explicitly:
vector(size_t n, T const& t = T()) is an overload of the constructor which lets you specify the size
resize(size_t n, T const& t = T()) allows you to resize the vector, appending new elements if it gets bigger than it was
Both methods allow you to supply an element to be copied (an exemplar) and default to copying a default-constructed object (0 if T is an int) if you don't supply the exemplar explicitly.
Besides using push_back() to store new elements, you can also call resize() once before you start using the vector to specify the number of elements it contains. This is very similar to allocating an array.
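Tying the alternatives together, a minimal sketch (my illustration, not from the answers) of three valid ways to make freqnote[0] a legal access:

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    // 1) Grow on demand.
    vector<double> a;
    a.push_back(16.35); // size becomes 1, so a[0] is now valid

    // 2) Size it in the constructor.
    vector<double> b(12); // 12 value-initialized doubles
    b[0] = 16.35;

    // 3) Size it later with resize().
    vector<double> c;
    c.resize(12); // same effect as the sizing constructor
    c[0] = 16.35;

    cout << a[0] << " " << b[0] << " " << c[0] << endl;
    return 0;
}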