This question already has answers here:
Double free or corruption after queue::push
(6 answers)
What is The Rule of Three?
(8 answers)
Closed 3 years ago.
I am working on a bitset implementation. The bitset uses an array of unsigned long long to store the bits.
class bitset{
    typedef unsigned long long uint64;
    uint64* bits;
    int size;   // number of uint64 words
    ...
};
Since I need this bitset to store a large amount of data, I am finding that it works best when I initialize the array of uint64 using the new keyword to build it on the heap.
bitset::bitset(int n_bits){
    if (n_bits % 64 != 0) size = (n_bits / 64) + 1;
    else size = n_bits / 64;
    this->bits = new uint64[size];
}
Doing so allows my whole program to consistently access the array of bits.
The one issue I'm running into is that my destructor doesn't seem to be able to delete the data:
bitset::~bitset(){
    delete[] this->bits;
}
Without the destructor I get a memory leak (as expected); with the destructor I get a runtime error: Error in `./a.out': double free or corruption (out).
I have tried googling this to no avail. I am fairly new to C++, so any insight on stack/heap behavior within classes would be appreciated.
You can use the vector container:
class bitset{
...
std::vector<uint64> bits;
...
Vector takes care of memory allocation so that you don't get problems with accidentally deleting memory more than once or accidentally leaking the memory.
P.S. unsigned long long is not guaranteed to be exactly 64 bits; it is allowed to be bigger than that. If that is a crucial detail for your program, you should use std::uint64_t from the standard library instead. This mostly matters for future portability.
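For illustration, here is a minimal sketch of what the class from the question could look like on top of std::vector; the set/test helpers are hypothetical additions, not part of the original code:

#include <cstddef>
#include <cstdint>
#include <vector>

class bitset {
    std::vector<std::uint64_t> bits;
public:
    // (n_bits + 63) / 64 rounds up to whole 64-bit words; the vector
    // value-initializes its elements, so every bit starts out cleared.
    explicit bitset(std::size_t n_bits) : bits((n_bits + 63) / 64) {}

    void set(std::size_t i)        { bits[i / 64] |= std::uint64_t{1} << (i % 64); }
    bool test(std::size_t i) const { return (bits[i / 64] >> (i % 64)) & 1u; }

    // No user-defined destructor, copy constructor, or copy assignment is
    // needed: the vector owns its memory, so copies are deep and nothing
    // gets freed twice when two bitset objects go out of scope.
};

That last point is the Rule of Three issue the duplicate link refers to: with a raw owning pointer plus only a destructor, any copy of the bitset leads to two objects deleting the same array.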
Related
I was practicing an array manipulation question. While solving it I declared an array (array A in the code).
For some test cases I got a segmentation fault. I replaced the array with a vector and got AC. I don't know the reason for this; please explain.
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n, m, a, b, k;
    cin >> n >> m;
    vector<long int> A(n + 2);
    //long int A[n+2]={0};
    for (int i = 0; i < m; i++)
    {
        cin >> a >> b >> k;
        A[a] += k;
        A[b + 1] -= k;
    }
    long res = 0;
    for (int i = 1; i < n + 2; i++)
    {
        A[i] += A[i - 1];
        if (res < A[i])
            res = A[i];
    }
    cout << res;
    return 0;
}
Since it looks like you haven't been programming in C++ for very long, I will try to break it down to make it simpler to understand:
First of all, C++ does not initialize values for you; this is not Java. So please do not do:
int n,m,a,b,k;
And then use:
A[a]+=k;
A[b+1]-=k;
At this point we have no idea what a and b are; they might be -300 for all we know, since you never initialized them. Hence, occasionally you get lucky and the garbage value does not cause a segmentation fault, and other times you are not so lucky and it does.
long int A[n+2]={0}; is not legal in Standard C++. There are a bunch of reasons for this and I think you stumbled over one of them.
Compilers that allow Variable Length Arrays follow the example of C99, and the array is allocated on the stack. The stack is a limited resource, usually between 1 and 10 MB on a desktop computer. If the user inputs an n of sufficient size, the array will take up too much of the stack or breach its bounds, resulting in Undefined Behaviour. Often this behaviour manifests as a segmentation fault from accessing memory so far off the end of the stack that it is not controlled by the program. There are typically no warnings when you overflow the stack; often a program crash or corrupted data is how you find out, and by then it is too late to salvage the program.
On the other hand, a vector allocates its internal buffer from the free store, and on a modern PC with virtual memory and 64-bit addressing the free store is enormous; the vector throws an exception if you attempt to exceed what it can allocate.
Another important difference is
long int A[n+2]={0};
likely did not zero-initialize the array. This is the case with g++: the first byte will be set to zero and the remainder are left uninitialized. Such is the curse of using non-standard extensions; you cannot count on the behaviour guaranteed by the Standard.
std::vector will zero-initialize the whole array, or set every element to whatever value you tell it to use.
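As a rough sketch of the difference (n read at run time, mirroring the question):

#include <iostream>
#include <vector>

int main() {
    int n;
    std::cin >> n;

    // Heap-backed and value-initialized: every element starts at 0, and an
    // absurdly large n fails with std::bad_alloc instead of silently
    // overrunning the stack.
    std::vector<long> A(n + 2);

    // By contrast, "long A[n + 2] = {0};" is a non-standard VLA placed on
    // the stack: a large n can overflow the stack, and the initializer may
    // be rejected or only partially honoured depending on the compiler.

    std::cout << A[0] << '\n';   // prints 0
}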
This question already has answers here:
How to solve the 32-byte-alignment issue for AVX load/store operations?
(3 answers)
Closed 4 years ago.
I do some operations on array using SIMD, so I need to have them aligned in memory. When I place arrays on the stack, I simply do this and it works:
#define BUFFER_SIZE 10000
alignas(16) float approxFreqMuls_Float[BUFFER_SIZE];
alignas(16) double approxFreqMuls_Double[BUFFER_SIZE];
But now I need to allocate more memory (such as 96k doubles, or more): so I think the heap is the way; but when I do this:
int numSteps = 96000;
alignas(16) float *approxFreqMuls_Float = new float[numSteps];
alignas(16) double *approxFreqMuls_Double = new double[numSteps];
It throws an error on ostream. I'm not really sure about the message (I'm on MSVC; nothing appears).
How would you allocate aligned arrays on heap?
Heap allocations are aligned to the maximum native alignment by default, so as long as you don't need to over-align, you don't need to do anything in particular.
If you do need over-alignment, for some reason, you can use the aligned new syntax new (std::align_val_t(16)) float[numSteps]; (or std::aligned_alloc which is in the malloc family of functions and the memory must therefore be freed rather than deleted).
If you don't have C++17, then you need to allocate size + align - 1 bytes instead of size and std::align the pointer - or use a non-standard aligned allocation function provided on your target platform.
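Since the std::align_val_t form is already shown above, here is a rough sketch of the other two options, using the 16-byte alignment and a 96000-element buffer from the question (note that MSVC's standard library does not provide std::aligned_alloc; _aligned_malloc/_aligned_free are the usual substitutes there):

#include <cstdlib>   // std::aligned_alloc (C++17), std::free
#include <memory>    // std::align
#include <vector>

int main() {
    const std::size_t numSteps = 96000;

    // C++17: std::aligned_alloc. The size must be a multiple of the
    // alignment, and the memory is released with free(), not delete.
    double* d = static_cast<double*>(
        std::aligned_alloc(16, numSteps * sizeof(double)));
    // ... use d ...
    std::free(d);

    // Pre-C++17: over-allocate by align - 1 bytes and fix the pointer up
    // with std::align.
    std::vector<unsigned char> raw(numSteps * sizeof(double) + 15);
    void* p = raw.data();
    std::size_t space = raw.size();
    double* aligned = static_cast<double*>(
        std::align(16, numSteps * sizeof(double), p, space));
    // aligned is nullptr if the buffer could not satisfy the request.
    (void)aligned;
    return 0;
}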
Has anyone encountered a maximum size for QList?
I have a QList of pointers to my objects and have found that it silently throws an error when it reaches the 268,435,455th item, which is exactly 28 bits. I would have expected it to have at least a 31bit maximum size (minus one bit because size() returns a signed integer), or a 63bit maximum size on my 64bit computer, but this doesn't appear to be the case. I have confirmed this in a minimal example by executing QList<void*> mylist; mylist.append(0); in a counting loop.
To restate the question, what is the actual maximum size of QList? If it's not actually 2^32-1 then why? Is there a workaround?
I'm running a Windows 64bit build of Qt 4.8.5 for MSVC2010.
While the other answers make a useful attempt at explaining the problem, none of them actually answers the question; they missed the point. Thanks to everyone for helping me track down the issue.
As Ali Mofrad mentioned, the error thrown is a std::bad_alloc error when the QList fails to allocate additional space in my QList::append(MyObject*) call. Here's where that happens in the Qt source code:
qlist.cpp: line 62:
static int grow(int size) //size = 268435456
{
//this is the problem line
volatile int x = qAllocMore(size * sizeof(void *), QListData::DataHeaderSize) / sizeof(void *);
return x; //x = -2147483648
}
qlist.cpp: line 231:
void **QListData::append(int n) //n = 1
{
Q_ASSERT(d->ref == 1);
int e = d->end;
if (e + n > d->alloc) {
int b = d->begin;
if (b - n >= 2 * d->alloc / 3) {
//...
} else {
realloc(grow(d->alloc + n)); //<-- grow() is called here
}
}
d->end = e + n;
return d->array + e;
}
In grow(), the new size requested (268,435,456) is multiplied by sizeof(void*) (8) to compute the size of the new block of memory to accommodate the growing QList. The problem is, 268435456*8 equals +2,147,483,648 if it's an unsigned int32, or -2,147,483,648 for a signed int32, which is what's getting returned from grow() on my OS. Therefore, when std::realloc() is called in QListData::realloc(int), we're trying to grow to a negative size.
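To see the overflow in isolation, here is a rough sketch (the arithmetic is widened to 64 bits so it can be printed without relying on the wraparound itself):

#include <climits>
#include <cstdio>

int main() {
    // What grow() is asked for: 268,435,456 pointers of 8 bytes each.
    const long long requested = 268435456LL * static_cast<long long>(sizeof(void*));
    std::printf("requested = %lld, INT_MAX = %d\n", requested, INT_MAX);
    // requested is one past INT_MAX, so squeezing it into a 32-bit int wraps
    // to -2,147,483,648 -- the negative size that ends up in realloc() above.
    return 0;
}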
The workaround here, as ddriver suggested, is to use QList::reserve() to pre-allocate the space, preventing my QList from ever having to grow.
In short, the maximum size for QList is 2^28-1 items unless you pre-allocate, in which case the maximum size truly is 2^31-1 as expected.
Update (Jan 2020): This appears to have changed in Qt 5.5, such that 2^28-1 is now the maximum size allowed for QList and QVector, regardless of whether or not you reserve in advance. A shame.
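A rough sketch of the reserve() workaround on the Qt 4.8 build from the question, assuming a 64-bit machine with enough RAM for a couple of gigabytes of pointers:

#include <QList>

int main() {
    const int count = 270000000;   // just past the 2^28 limit that plain append() hits
    QList<void*> list;
    list.reserve(count);           // one up-front allocation, so grow() is never called
    for (int i = 0; i < count; ++i)
        list.append(0);
    return list.size() == count ? 0 : 1;
}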
Has anyone encountered a maximum size for QList? I have a QList of pointers to my objects and have found that it silently throws an error when it reaches the 268,435,455th item, which is exactly 28 bits. I would have expected it to have at least a 31bit maximum size (minus one bit because size() returns a signed integer), or a 63bit maximum size on my 64bit computer, but this doesn't appear to be the case.
The theoretical maximum positive number stored in an int is 2^31 - 1. The size of a pointer is 4 bytes (on a 32-bit machine), so the maximum possible number of them is 2^29 - 1. Appending data to the container increases fragmentation of the heap, so it is possible that you can allocate only half of that. Try using reserve() or resize() instead.
Moreover, Win32 places limits on memory allocation, so an application compiled without special options cannot allocate more than that limit (1 GB or 2 GB).
Are you sure you need such huge containers? Would it be better to optimize the application?
QList stores its elements in a void * array.
Hence, a list with 2^28 items, each of which is a void *, will be 2^30 bytes long on a 32-bit machine, and 2^31 bytes on a 64-bit machine. I doubt you can request such a big chunk of contiguous memory.
And why allocating such a huge list anyhow? Are you sure you really need it?
The idea of being backed by an array of void * elements is that several operations on the list can be moved into non-templated code, reducing the amount of generated code.
QList stores items straight in the void * array if the type is small enough (i.e. sizeof(T) <= sizeof(void*)), and if the type can be moved in memory via memmove. Otherwise, each item will be allocated on the heap via new, and the array will store the pointers to those items. A set of type traits is used to figure out how to handle each type, see Q_DECLARE_TYPEINFO.
While in theory this approach may sound attractive, in practice:
For all primitive types smaller than void * (char; int and float on 64 bit; etc.) you waste from 50 to 75% of the allocated space in the array
For all movable types bigger than void * (double on 32bit, QVariant, ...), you pay a heap allocation per each item in the list (plus the array itself)
QList code is generally less optimized than the equivalent QVector code
Compilers these days do a pretty good job at merging template instantiations, so the original reason for this design has largely disappeared.
Today it's a much better idea to stick with QVector. Unfortunately the Qt APIs expose QList everywhere and they can't be changed (and we would need C++11 to define QList as a template alias for QVector...)
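A rough sketch of that suggestion, storing the values directly in a QVector instead of a QList:

#include <QVector>

int main() {
    // QVector keeps the doubles themselves in one contiguous buffer: no
    // per-item heap allocation and no void* indirection, unlike QList for
    // types wider than a pointer.
    QVector<double> values;
    values.reserve(1000000);
    for (int i = 0; i < 1000000; ++i)
        values.append(i * 0.5);
    return values.size() == 1000000 ? 0 : 1;
}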
I tested this on Ubuntu 32-bit with 4 GB RAM using Qt 4.8.6. The maximum size for me was 268,435,450.
I tested this on Windows 7 32-bit with 4 GB RAM using Qt 4.8.4. The maximum size for me was 134,217,722.
This error happened: 'std::bad_alloc'
#include <QCoreApplication>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    QList<bool> li;
    for (int i = 0; ; i++)
    {
        li.append(true);
        if (i > 268435449)
            qDebug() << i;
    }
    return a.exec();
}
Output is :
268435450
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
I am receiving the error "User store segfault # 0x000000007feff598" for a large convolution operation.
I have defined the resultant array as
int t3_isize = 0;
int t3_irowcount = 0;
t3_irowcount=atoi(argv[2]);
t3_isize = atoi(argv[3]);
int iarray_size = t3_isize*t3_irowcount;
uint64_t t_result[iarray_size];
I noticed that if the array size is less than 2^16 - 1, the operation doesn't fail, but for the array size 2^16 or higher, I get the segfault error.
Any idea why this is happening? And how can I rectify this?
“I noticed that if the array size is greater than 2^16 - 1, the operation doesn't fail, but for the array size 2^16 or higher, I get the segfault error”
↑ Seems a bit self-contradictory.
But probably you're just allocating a too large array on the stack. Using dynamic memory allocation (e.g., just switch to using std::vector) you avoid that problem. For example:
std::vector<uint64_t> t_result(iarray_size);
In passing, I would ditch the Hungarian-notation-like prefixes. For example, t_ reads as if it denotes a type. The time for Hungarian notation was the late 1980s, and its purpose was to support Microsoft's Programmer's Workbench, a product that has long since been discontinued.
You're probably declaring too large an array for the stack. 2^16 elements of 8 bytes each is quite a lot (512 KB).
If you just need static allocation, move the array to file scope.
Otherwise, consider using std::vector, which will allocate storage from the heap and manage it for you.
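For example, a rough sketch of the std::vector route, reusing the names from the question (the argument checking is an addition for completeness):

#include <cstdint>
#include <cstdlib>
#include <vector>

int main(int argc, char* argv[]) {
    if (argc < 4) return 1;

    const int t3_irowcount = std::atoi(argv[2]);
    const int t3_isize     = std::atoi(argv[3]);
    const std::size_t iarray_size =
        static_cast<std::size_t>(t3_irowcount) * t3_isize;

    // Heap-backed, value-initialized to zero, and freed automatically; a
    // request that is too large throws std::bad_alloc instead of silently
    // overflowing the stack.
    std::vector<std::uint64_t> t_result(iarray_size);

    // ... the convolution writes into t_result ...
    return 0;
}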
Using malloc() solved the issue.
uint64_t* t_result = (uint64_t*) malloc(sizeof(uint64_t) * iarray_size);  // remember to free(t_result) when finished
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Why isn't sizeof for a struct equal to the sum of sizeof of each member?
Why is the output 8 and not sizeof(int) + sizeof(char) = 5?
#include <iostream>
#include <cstdio>
using namespace std;

class CBase
{
    int a;
    char p;
};

int main() {
    cout << "sizeof(CBase)=" << sizeof(CBase) << endl;
    getchar();
} ///:~
Memory is usually aligned by the compiler for better performance. So a class or structure can take more space in memory than the sum of its parts.
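A rough sketch that makes the padding visible (the members mirror the question; the exact numbers depend on the ABI, but 8 and 4 are typical):

#include <iostream>

class CBase {
    int  a;   // 4 bytes, requires 4-byte alignment
    char p;   // 1 byte
    // 3 bytes of padding so sizeof(CBase) is a multiple of alignof(CBase)
};

int main() {
    std::cout << "sizeof(CBase)=" << sizeof(CBase)
              << " alignof(CBase)=" << alignof(CBase) << std::endl;
    // Typically prints: sizeof(CBase)=8 alignof(CBase)=4
}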
Looks as if the compiler has padded the class out to 8 bytes: the int member requires 4-byte alignment, so the 4 + 1 bytes of members are rounded up to the next multiple of 4. You might find you can change this with compiler switches. For example, on AIX, C++ memory allocations are aligned to 16 bytes, which can cause them to use more memory.
To avoid that alignment at runtime (with the drawback that the application can't use VMX), just set this environment variable for the application before running it:
export LIBCPP_NOVMX=1