SIGABRT when returning from a function? - c++

I'm new to C++, coming from a Python/Kotlin background, and so am having some trouble understanding what's going on behind the scenes here...
The Issue
I call the calculateWeights (public) method with its required parameters. It then calls a series of methods, including conjugateGradientMethod (private), and should return a vector of doubles. conjugateGradientMethod returns the vector of doubles to calculateWeights just fine, but calculateWeights doesn't return it to its caller:
The code
Callsite of calculateWeights:
Matrix cov = estimator.estimateCovariances(&firstWindow, &meanReturns);
cout << "before" << endl; // this prints
vector<double> portfolioWeights = optimiser.calculateWeights(&cov, &meanReturns);
cout << "after" << endl; // this does not print
Here's calculateWeights:
vector<double> PortfolioOptimiser::calculateWeights
    (Matrix *covariances, vector<double> *meanReturns) {
    vector<double> X0 = this->calculateX0();
    Matrix Q = this->generateQ(covariances, meanReturns);
    vector<double> B = this->generateB0();
    vector<double> weights = this->conjugateGradientMethod(&Q, &X0, &B);
    cout << "inside calculateWeights" << endl;
    print(&weights); // this prints just fine
    cout << "returning from calculateWeights..." << endl; // also prints
    return weights; // this is where the SIGABRT shows up
}
The output
The output looks like this (I've checked and the weights it outputs are indeed numerically correct):
before
inside calculateWeights
1.78998
0.429836
-0.62228
-0.597534
-0.0365409
0.000401613
returning from calculateWeights...
And then nothing.
I appreciate this is printf debugging, which isn't ideal, and so I used CLion's debugger to find the following:
When I used CLion's debugger
I put a breakpoint on the return statements of the conjugateGradientMethod and calculateWeights methods. The debugger stepped through the first one just fine. After I stepped over the return from calculateWeights, it showed me a SIGABRT with the following error:
Thread 1 "markowitzportfoliooptimiser" received signal SIGABRT, Aborted.
__gnu_cxx::new_allocator<std::vector<double, std::allocator<double> > >::deallocate (this=0x6, __p=0x303e900000762) at /usr/lib/gcc/x86_64-pc-cygwin/9.3.0/include/c++/ext/new_allocator.h:129
This is probably wrong, but my first stab at understanding it is that I've written past the end of vector<double> weights? It's only 6 doubles long and I never append anything to it after the loop below. This is how it's created inside conjugateGradientMethod:
How weights is created inside conjugateGradientMethod
vector<double> weights = vector<double>(aSize);
for (int i = 0; i < aSize; i++) {
    weights[i] = aCoeff * a->at(i) + bCoeff * b->at(i);
}
Things I've tried
Initialising a vector of doubles for the weights in calculateWeights and passing a pointer to it to conjugateGradientMethod. Same result.
Adding a public attribute to the class that calculateWeights and conjugateGradientMethod both live on, and having both methods assign the weights to it (so both functions return void). Same result.
More generally, I've had this kind of issue before when passing a return value up from two functions deep (if that makes sense?), i.e. from a private method up to a public method and then up to the public method's call site.
I'd be grateful for any advice on SIGABRTs in this context. I've read that it occurs when abort() sends the calling process the SIGABRT signal, but I'm unsure how to make use of that in this example.
Also, I'm all ears for any other style/best-practice tips that would help me avoid this in future.
Edit: Solution found
After much work, I installed Ubuntu 20.04 LTS and got it up and running, since I couldn't get AddressSanitizer or Valgrind to work via WSL on Windows 10 (first time on Linux - I kinda like it).
With AddressSanitizer now working, I was able to see that I was writing too many elements into a vector of doubles in two separate places, nothing to do with my weights vector, as @Lukas Matena rightly spotted. Confusingly, this happened long before execution ever reached the snippets above.
If anyone is finding this in the future, these helped me massively:
Heap Buffer Overflow
Heap vs Stack 1
Heap vs Stack 2

The error message says that it failed to deallocate an std::vector<double> when calculateWeights was about to return. That likely means that at least one of the local variables (which are being destroyed at that point) in the function is corrupted.
You seem to be focusing on weights, but since the attempts that you mention have failed, I would rather suspect X0 or B (weights is maybe not even deallocated at that point due to return value optimization).
Things you can try:
start using an address sanitizer like others have suggested
comment out parts of the code to see if it leads you closer (in other words, make a minimal example)
make one of the vectors a member variable so it is not destroyed at that point (not a fix, but it might give a clue about who is the offender)
You're likely doing something bad to the respective vector somewhere, possibly in calculateX0 or generateB0 (which you haven't shared). It may be delete-ing part of the vector, returning a reference to a temporary instead of a copy, etc. The SIGABRT at that return is where you were caught, but memory corruption issues often surface later than they're caused.
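For illustration only, here's a contrived sketch (not your code) of how an out-of-bounds write can pass unnoticed and only abort later, when a destructor frees the corrupted block:

#include <vector>

// Contrived sketch: the bad write happens inside the function, but the
// abort typically fires later, when the corrupted block is deallocated.
void corrupt() {
    std::vector<double> v(6);
    // Undefined behavior: operator[] does no bounds checking, so this
    // writes past the buffer and can trash the heap allocator's metadata.
    v[10] = 1.0;
}   // v is destroyed here; the allocator may detect the damage and raise SIGABRT

int main() {
    corrupt();
    return 0;
}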
(I would have made this shorter and posted it as a comment, but I cannot as a newbie. Hopefully it still counts as "advice on SIGABRTs in this context", which is what was in fact asked for.)

Related

Getting segmentation fault (core dumped)

Everything seems to run okay up until the return part of shuffle_array(), but I'm not sure why.
int * shuffle_array(int initialArray[], int userSize)
{
    // Variables
    int shuffledArray[userSize]; // Create new array for shuffled
    srand(time(0));
    for (int i = 0; i < userSize; i++) // Copy initial array into new array
    {
        shuffledArray[i] = initialArray[i];
    }
    for(int i = userSize - 1; i > 0; i--)
    {
        int randomPosition = (rand() % userSize);
        int temp = shuffledArray[i];
        shuffledArray[i] = shuffledArray[randomPosition];
        shuffledArray[randomPosition] = temp;
    }
    cout << "The numbers in the initial array are: ";
    for (int i = 0; i < userSize; i++)
    {
        cout << initialArray[i] << " ";
    }
    cout << endl;
    cout << "The numbers in the shuffled array are: ";
    for (int i = 0; i < userSize; i++)
    {
        cout << shuffledArray[i] << " ";
    }
    cout << endl;
    return shuffledArray;
}
Sorry if the spacing is off here; I wasn't sure how to copy and paste code into here, so I had to do it by hand.
EDIT: I should also mention that this is just a fraction of the code, not the whole project I'm working on.
There are several issues of varying severity, and here's my best attempt at flagging them:
int shuffledArray[userSize];
This array has a variable length. I don't think that it's as bad as other users point out, but you should know that this isn't allowed by the C++ standard, so you can't expect it to work on every compiler that you try (GCC and Clang will let you do it, but MSVC won't, for instance).
srand(time(0));
This is most likely outside the scope of your assignment (you've probably been told "use rand/srand" as a simplification), but rand is actually a terrible random number generator compared to what else the C++ language offers. It is rather slow, it repeats quickly (calling rand() in sequence will eventually start returning the same sequence that it did before), it is easy to predict based on just a few samples, and it is not uniform (some values have a much higher probability of being returned than others). If you pursue C++, you should look into the <random> header (and, realistically, how to use it, because it's unfortunately not a shining example of simplicity).
Additionally, seeding with time(0) will give you sequences that change only once per second. This means that if you call shuffle_array twice quickly in succession, you're likely to get the same "random" order. (This is one reason that often people will call srand once, in main, instead.)
for(int i = userSize - 1; i > 0; i--)
By iterating to i > 0, you will never enter the loop with i == 0. This means that there's a chance that you'll never swap the zeroth element. (It could still be swapped by another iteration, depending on your luck, but this is clearly a bug.)
int randomPosition = (rand() % userSize);
You should know that this is biased: because the maximum value of rand() is likely not divisible by userSize, you are marginally more likely to get small values than large values. You can probably just read up on the explanation and move on for the purposes of your assignment.
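If you do look into <random>, a minimal sketch that fixes both the time(0) seeding and the modulo bias might look like this (the names here are illustrative, not from your assignment):

#include <iostream>
#include <random>

int main() {
    int userSize = 10; // illustrative size
    // Seed once from a nondeterministic source instead of time(0).
    std::mt19937 gen{std::random_device{}()};
    // uniform_int_distribution produces each index with equal probability,
    // unlike rand() % userSize.
    std::uniform_int_distribution<int> dist(0, userSize - 1);
    for (int i = 0; i < 5; i++) {
        std::cout << dist(gen) << " "; // a random position in [0, userSize)
    }
    std::cout << std::endl;
    return 0;
}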
return shuffledArray;
This is a hard error: it is never legal to return a function's automatic storage. In this case, the memory for shuffledArray is allocated automatically at the beginning of the function and, importantly, it is deallocated automatically at the end: this means that your program will reuse it for other purposes. Reading from it is likely to return values that have been overwritten by other code, and writing to it is likely to overwrite memory that is currently used by other code, which can have catastrophic consequences.
Of course, I'm writing all of this assuming that you use the result of shuffle_array. If you don't use it, you should just not return it (although in this case, it's unlikely to be the reason that your program crashes).
Inside a function, it's fine to pass a pointer to automatic storage to another function, but it's never okay to return that. If you can't use std::vector (which is the best option here, IMO), you have three other options:
have shuffle_array accept a shuffledArray[] that is the same size as initialArray already, and return nothing;
have shuffle_array modify initialArray instead (the shuffling algorithm that you are using is in-place, meaning that you'll get correct results even if you don't copy the original input);
dynamically allocate the memory for shuffledArray using new, which will prevent it from being automatically reclaimed at the end of the function.
Option 3 requires you to use manual memory management, which is generally frowned upon these days. I think that option 1 or 2 is best. Option 1 would look like this:
void shuffle_array(int initialArray[], int shuffledArray[], int userSize) { ... }
where userSize is the size of both initialArray and shuffledArray. In this scenario, the caller needs to own the storage for shuffledArray.
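And since std::vector was mentioned as the best option, here's a minimal sketch of that variant (my illustration, not part of the assignment): the vector owns its memory on the heap, so returning it by value is safe.

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Sketch: the returned vector owns heap storage, so no dangling pointer.
std::vector<int> shuffle_array(const std::vector<int>& initial) {
    std::vector<int> shuffled = initial;      // copy the input
    std::mt19937 gen{std::random_device{}()}; // seeded PRNG
    std::shuffle(shuffled.begin(), shuffled.end(), gen);
    return shuffled;                          // safe: moved out, not dangling
}

int main() {
    std::vector<int> v = {1, 2, 3, 4, 5};
    for (int x : shuffle_array(v)) std::cout << x << " ";
    std::cout << std::endl;
    return 0;
}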
You should NOT return a pointer to a local variable. After the function returns, shuffledArray gets deallocated and you're left with a dangling pointer.
You cannot return a local array. The local array's memory is released when you return (did the compiler warn you about that?). If you do not want to use std::vector, then create your result array using new:
int *shuffledArray = new int[userSize];
Your caller will have to delete[] it (not true with std::vector).
When you define non-static variables inside a function, those variables reside on the function's stack. Once you return from the function, the function's stack is gone. In your program, you are trying to return a local array which will be gone once control is outside of shuffle_array().
To solve this, either you need to define the array globally (which I wouldn't prefer, because global variables are dangerous) or use dynamic memory allocation for the array, which will create space for it on the heap rather than on the function's stack. You can also use std::vector, if you are familiar with vectors.
To allocate memory dynamically, you have to use new as shown below.
int *shuffledArray = new int[userSize];
and once you have finished using shuffledArray, you need to free the memory as below:
delete [] shuffledArray;
otherwise your program will leak memory.

Is allocating specific memory for a void pointer undefined behaviour?

I've run into a situation that I think is undefined behavior: there is a structure that has some members, one of which is a void pointer (it is not my code and it is not public; I suppose the void pointer is there to make it more generic). At some point, some char memory is allocated to this pointer:
void fooTest(ThatStructure * someStrPtr) {
    try {
        someStrPtr->voidPointer = new char[someStrPtr->someVal + someStrPtr->someOtherVal];
    } catch (std::bad_alloc& ba) {
        std::cerr << ba.what() << std::endl;
    }
    // ...
and at some point it crashes during the allocation (in operator new) with a segmentation fault (a few times it works; the function is called multiple times, in different cases). I've seen this while debugging.
I also know that on Windows (my machine runs Linux) there is also a segmentation fault at the beginning (I suppose in the first call of the function that allocates the memory).
Moreover, if I add a print of the values:
std::cout << someStrPtr->someVal << " " << someStrPtr->someOtherVal << std::endl;
before the try block, it runs through to the end. I added this print to see whether there was some other problem with the structure pointer, but the values are printed and are not 0 or negative.
I've seen these topics: topic1, topic2, topic3, and I am thinking that there is some UB linked to the void pointer. Can anyone help me pin down the issue here so I can solve it? Thanks.
No, that in itself is not undefined behavior. In general, when code "crashes at the allocation part", it's because something earlier messed up the heap, typically by writing past one end of an allocated block or releasing the same block more than once. In short: the bug isn't in this code.
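As a contrived example of the "released more than once" case, the following aborts under most allocators even though the allocation line itself is perfectly fine:

int main() {
    int* p = new int[4];
    delete[] p;
    delete[] p; // undefined behavior: a double free; many allocators
                // detect this and abort the process
    return 0;
}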
A void pointer is a perfectly fine thing to use in C/C++, and you can usually cast from/to other types.
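For example, a pattern like this is well-defined (a minimal sketch with illustrative names, not the code from the question):

#include <cstddef>
#include <cstring>

struct Generic {
    void* data;       // generic payload, as in the question
    std::size_t size;
};

int main() {
    Generic g;
    g.size = 16;
    g.data = new char[g.size];                // fine: char* converts to void* implicitly
    std::memset(g.data, 0, g.size);           // fine: memset takes a void*
    char* bytes = static_cast<char*>(g.data); // cast back to the real type to use it
    bytes[0] = 'x';
    delete[] static_cast<char*>(g.data);      // must cast back before delete[]
    return 0;
}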
When you get a seg-fault during initialization, it means some of the parameters used are themselves invalid:
Is someStrPtr valid?
Are someStrPtr->someVal and someStrPtr->someOtherVal valid?
Are the printed values what you were expecting?
Also, if this is a multithreaded application, make sure that no other thread is accessing those variables (especially between your print and the initialization statement). Such races are really difficult to catch.

How can I pass a C++ array of structs to a CUDA device?

I've spent 2 days trying to figure this out and getting nowhere. Say I had a struct that looks like this:
struct Thing {
    bool is_solid;
    double matrix[9];
};
I want to create an array of that struct called things and then process that array on the GPU. Something like:
Thing *things;
int num_of_things = 100;
cudaMallocManaged((void **)&things, num_of_things * sizeof(Thing));
// Something missing here? Malloc individual structs? Everything I try doesn't work.
things[10].is_solid = true; // Segfaults
Is it even best practice to do it this way rather than pass a single struct with arrays that are num_of_things large? It seems to me that that can get pretty nasty, especially when you have arrays already (like matrix, which would need to be 9 * num_of_things).
Any info would be much appreciated!
After some dialog in the comments, it seems that OP's posted code has no issues. I was able to successfully compile and run this test case built around that code, and so was OP:
$ cat t1005.cu
#include <iostream>
struct Thing {
    bool is_solid;
    double matrix[9];
};
int main(){
    Thing *things;
    int num_of_things = 100;
    cudaError_t ret = cudaMallocManaged((void **)&things, num_of_things * sizeof(Thing));
    if (ret != cudaSuccess) {
        std::cout << cudaGetErrorString(ret) << std::endl;
        return 1;}
    else {
        things[10].is_solid = true;
        std::cout << "Success!" << std::endl;
        return 0;}
}
$ nvcc -arch=sm_30 -o t1005 t1005.cu
$ ./t1005
Success!
$
Regarding this question:
Is it even best practice to do it this way rather than pass a single struct with arrays that are num_of_things large?
Yes, this is a sensible practice and is usable whether managed memory is being used or not. An array of more or less any structure that does not contain embedded pointers to dynamically allocated data elsewhere can be transferred to the GPU in a simple fashion using a single cudaMemcpy call (for example, if managed memory were not being used.)
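For instance, without managed memory, a sketch of that single-copy transfer might look like this (error checking omitted for brevity; this is my illustration, not code from the question):

// Assumes the same Thing struct as above.
Thing *h_things = new Thing[num_of_things];  // host-side array
Thing *d_things = nullptr;                   // device-side array
cudaMalloc((void **)&d_things, num_of_things * sizeof(Thing));
// One cudaMemcpy moves the whole array, since Thing has no embedded pointers:
cudaMemcpy(d_things, h_things, num_of_things * sizeof(Thing), cudaMemcpyHostToDevice);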
To address the question about the 3rd (flags) parameter to cudaMallocManaged:
If it is specified, it is not correct to pass zero (although OP's posted code gives no evidence of that). You should use one of the documented choices.
If it is not specified, this is still valid, and a default argument of cudaMemAttachGlobal is provided. This can be confirmed by reviewing the cuda_runtime.h file or else simply compiling/running the test code above. This particular point appears to be an oversight in the documentation, and I've filed an internal issue at NVIDIA to take a look at that. So it's possible the documentation may change in the future with respect to this.
Finally, proper CUDA error checking is always in order any time you are having trouble with CUDA code, and using it may shed some light on any errors that are made. The seg fault that the OP reported in code comments was almost certainly due to the cudaMallocManaged call failing (perhaps because a zero flags parameter was supplied incorrectly), and as a result the pointer in question (things) had no actual allocation. Subsequent usage of that pointer would lead to a seg fault. My test code demonstrates how to avoid that seg fault, even if the cudaMallocManaged call fails for some reason, and the key is proper error checking.
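One common way to make that error checking systematic is a small wrapper macro; this is a generic sketch, not code from the answer above:

#include <cstdio>
#include <cstdlib>

// Wraps any CUDA runtime call and aborts with a readable message on failure.
#define CUDA_CHECK(call)                                              \
    do {                                                              \
        cudaError_t err_ = (call);                                    \
        if (err_ != cudaSuccess) {                                    \
            fprintf(stderr, "CUDA error: %s at %s:%d\n",              \
                    cudaGetErrorString(err_), __FILE__, __LINE__);    \
            exit(1);                                                  \
        }                                                             \
    } while (0)

// Usage: CUDA_CHECK(cudaMallocManaged((void **)&things, num_of_things * sizeof(Thing)));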

Memory usage and overwrites in c++

I am working on a large piece of code. As part of my main class constructor I declare a large number of vectors, which at one point or another get filled (all with doubles). Up until a while ago the code ran fine, but after I added one further vector of doubles, a completely unrelated variable (one which decides whether a particular 'run' has been successful or not) started being changed for some reason.
I have not added any lines which change this success variable, and when I print out its value (a successful run leaves the variable at zero) it changes to a massive integer every time, but each run gives a different value.
I have a feeling I am doing something wrong with memory allocation but I don't know what exactly!
Any advice welcomed,
Cheers
Jack
UPDATE
class MyClass {
    std::vector <std::vector<HLV> > qChains;
    std::vector <std::vector<HLV> > VertexChains;
    std::vector <std::vector<double> > Virtuals;
    std::vector <double> VProducts;
    std::vector <double> QProducts;
    std::vector <double> StrongCouplings;
    int EventStatus;
};
and then in another method of MyClass I have a quick if statement checking that the event is going OK:
if (GetEventStatus() != 0) cout << "ERROR!! " << GetEventStatus() << endl;
and ever since I added the line about StrongCouplings the status has been returning random huge integers.
I have, however, noticed that if I place a series of print statements throughout, checking the value of EventStatus at various places, the problem goes away!
Try adding char buf[128]; before the variable that changes its value. If that helps, it means some previously declared variable is overwriting your variable. This can be caused by an ODR violation or by incorrect usage of a C array (writing past the end of the array).
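To illustrate the kind of overrun being described, here's a contrived sketch (not the OP's code) where writing past a fixed-size array lands on an unrelated neighbouring member:

#include <iostream>

struct Example {
    double values[6]; // fixed-size C array...
    int status;       // ...followed by an unrelated member
};

int main() {
    Example e;
    e.status = 0;
    for (int i = 0; i <= 6; i++) { // off-by-one: i == 6 is out of bounds
        e.values[i] = 1e9;         // the last write is undefined behavior and,
    }                              // with a typical layout, may clobber 'status'
    std::cout << e.status << std::endl; // often prints a huge garbage value
    return 0;
}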

Can the cause of SIGSEGV be the low ram of the system?

My system's RAM is small: 1.5GB. I have a C++ program that calls a specific method about 300 times. This method uses 2 maps (they are cleared every time), and I would like to know if it is possible that in some of the calls of this method the stack overflows and the program fails. If I use small data (so the method is called 30 times) the program runs fine. But now it raises a SIGSEGV error. I have been trying to fix this for about 3 days with no luck; every solution I tried failed.
I found some causes of SIGSEGV below, but nothing helped:
What is SIGSEGV run time error in C++?
Ok, here is the code.
I have 2 instances, which contain some keyword features and their scores.
I want to get their Euclidean distance, which means I have to save all the keywords for each of the instances, then find the diffs for the keywords of the first one against those of the second, and then the diffs for the remaining keywords of the second instance. What I want is, while iterating the first map, to be able to delete elements from the second. The following method is called multiple times, as we have two message collections and every message from the first one is compared with every message from the second.
I have this code, but it suddenly stops, although I checked (with multiple couts I put in some places) that it works for some seconds.
Note that this is for a university task, so I cannot use boost and all those tricks. But I would like to know how to get around the problem I am stuck on.
float KNNClassifier::distance(const Instance& inst1, const Instance& inst2) {
    map<string,unsigned> feat1;
    map<string,unsigned> feat2;
    for (unsigned i=0; i<inst1.getNumberOfFeatures(); i++) {
        feat1[inst1.getFeature(i)]=i;
    }
    for (unsigned i=0; i<inst2.getNumberOfFeatures(); i++) {
        feat2[inst2.getFeature(i)]=i;
    }
    float dist=0;
    map<string,unsigned>::iterator it;
    for (it=feat1.begin(); it!=feat1.end(); it++) {
        if (feat2.find(it->first)!=feat2.end()) { //if and only if it exists in inst2
            dist+=pow( (double) inst1.getScore(it->second) - inst2.getScore(feat2[it->first]) , 2.0);
            feat2.erase(it->first);
        }
        else {
            dist+=pow( (double) inst1.getScore(it->second) , 2.0);
        }
    }
    for (it=feat2.begin(); it!=feat2.end(); it++) { //for the remaining words
        dist+=pow( (double) inst2.getScore(it->second) , 2.0);
    }
    feat1.clear(); feat2.clear(); //clear the maps for the next use
    return sqrt(dist);
}
and I also tried this idea, in order not to have to delete anything, but it suddenly stops too:
float KNNClassifier::distance(const Instance& inst1, const Instance& inst2) {
    map<string,unsigned> feat1;
    map<string,unsigned> feat2;
    map<string,bool> exists;
    for (unsigned i=0; i<inst1.getNumberOfFeatures(); i++) {
        feat1[inst1.getFeature(i)]=i;
    }
    for (unsigned i=0; i<inst2.getNumberOfFeatures(); i++) {
        feat2[inst2.getFeature(i)]=i;
        exists[inst2.getFeature(i)]=false;
        if (feat1.find(inst2.getFeature(i))!=feat1.end()) {
            exists[inst2.getFeature(i)]=true;
        }
    }
    float dist=0;
    map<string,unsigned>::iterator it;
    for (it=feat1.begin(); it!=feat1.end(); it++) {
        if (feat2.find(it->first)!=feat2.end()) {
            dist+=pow( (double) inst1.getScore(it->second) - inst2.getScore(feat2[it->first]) , 2.0);
        }
        else {
            dist+=pow( (double) inst1.getScore(it->second) , 2.0);
        }
    }
    for (it=feat2.begin(); it!=feat2.end(); it++) {
        if (exists[it->first]==false) { //if it is true, the diff was already done in the previous loop
            dist+=pow( (double) inst2.getScore(it->second) , 2.0);
        }
    }
    feat1.clear(); feat2.clear(); exists.clear();
    return sqrt(dist);
}
If malloc fails and thus returns NULL it can indeed lead to a SIGSEGV assuming the program does not properly handle that failure. However, if memory was that low your system would more likely start killing processes using lots of memory (the actual logic is more complicated, google for "oom killer" if you are interested).
Chances are good that there's simply a bug in your program. A good way to figure this out is using a memory debugger such as valgrind to see if you access invalid memory locations.
One possible explanation is that your program accesses a dynamically-allocated object after freeing it. If the object is small enough, the memory allocator keeps the memory around for the next allocation, and the access after free is harmless. If the object is large, the memory allocator unmaps the pages used to hold the object, and the access after free causes a SIGSEGV.
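A contrived sketch of that large-object case (illustrative only; the exact sizes and behaviour depend on the allocator):

int main() {
    // Large allocations are typically served by mmap on Linux...
    char* big = new char[64 * 1024 * 1024];
    delete[] big; // ...and the pages are unmapped again on deallocation
    big[0] = 'x'; // use after free: the pages are gone, so this likely raises SIGSEGV
    return 0;
}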
It is virtually certain that regardless of the underlying mechanism by which the SIGSEGV occurs, there is a bug in the code somewhere that is a key part of the causal chain.
As mentioned above, the most probable cause is bad memory handling or a memory leak. Check for buffer overflows, or whether you access a resource after you free it.
1.5GB isn't that small. You can do a lot in 1.5GB in general. For 300 iterations to use up 1.5GB (let's say 0.5GB is used by the OS kernel, etc.), you need to use roughly 3.3MB per iteration. That is quite a lot of memory, so my guess is that either your code is actually using A LOT of memory, or your code contains a leak of some sort. More likely the latter. I have worked on machines with less than 64KB, and my first PC had 8MB of RAM, and that was considered A LOT at the time.
No, this code is unable to cause a segfault if the system runs out of memory. map allocation uses the new operator, which does not use the stack for allocation. It uses the heap, and will throw a bad_alloc exception if the memory is exhausted, aborting before an invalid memory access can happen:
$ cat crazyalloc.cc
int main(void)
{
    while(1) {
        new int[100000000];
    }
    return 0;
}
$ ./crazyalloc
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
The fact that an alternative implementation also crashes is a hint that the problem is not in this code.
The problem is in the Instance class instead. It's probably not lack of memory; it's more likely a buffer overflow, which can be confirmed with a debugger.