I was 100% positive that I had covered all the ground in terms of deleting memory from the heap before it was lost, but valgrind seems to disagree. Any help with finding the leak in the following code would be greatly appreciated! I can't seem to figure out what's causing it.
Card * S = new Card[subsetSize];
Card * M = nullptr;
int subsetPrice = 0, subsetProfit = 0;
for(int i = 0; i < subsetSize; i++){
    S[i] = problemCards[cardIndexesToAdd[i]];
    subsetPrice += S[i].getPrice();
    subsetProfit += S[i].getProfit();
}
// Evaluate the subset's cost and profit
if(subsetPrice <= maxToSpend){
    if(subsetProfit > maxProfit){
        maxProfit = subsetProfit;
        if(M != nullptr)
            delete[] M;
        M = S;
        S = nullptr;
        mSize = subsetSize;
    }
}
else{
    if(S != nullptr){
        delete[] S;
        S = nullptr;
    }
}
// output code for M
if(M != nullptr)
    delete[] M;
Let's look at what you are doing step by step:
Allocate memory for S. Set M to point to null.
Card * S = new Card[subsetSize];
Card * M = nullptr;
If condition A (subsetPrice <= maxToSpend) is met, and condition B (subsetProfit > maxProfit) is met, set M to point to the memory allocated for S, and set S to point to null.
if (subsetPrice <= maxToSpend){
    if (subsetProfit > maxProfit){
        maxProfit = subsetProfit;
        if (M != nullptr)
            delete[] M;
        M = S;
        S = nullptr;
        mSize = subsetSize;
    }
}
If condition A was not met, deallocate the memory S is pointing to.
else{
    if(S != nullptr){
        delete[] S;
        S = nullptr;
    }
}
Deallocate the memory M is pointing to.
if(M != nullptr)
    delete[] M;
So if condition A is met, but condition B is not, then S is neither deallocated nor transferred over to M! Memory leak.
You leak S when (subsetPrice <= maxToSpend) == true but (subsetProfit > maxProfit) == false.
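A minimal way to plug that hole, keeping everything else exactly as you have it, is to give the inner if its own else that releases S (just a sketch; switching to std::vector, as suggested below, makes the whole issue go away):
if(subsetPrice <= maxToSpend){
    if(subsetProfit > maxProfit){
        maxProfit = subsetProfit;
        delete[] M;          // delete[] on a null pointer is a no-op, so no check is needed
        M = S;
        S = nullptr;
        mSize = subsetSize;
    }
    else{
        delete[] S;          // condition A held but B didn't: this subset is not kept, so free it
        S = nullptr;
    }
}
else{
    delete[] S;
    S = nullptr;
}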
Rather than playing detective and tracking down this particular memory leak, I'd advise learning to write your code so you don't cause such problems to start with. The most obvious first point would be to use std::vector instead of trying to handle all the memory management on your own. There is probably no other single step that can eliminate as many problems as quickly as getting used to doing this.
When you use it, almost all problems of this entire class simply cease to exist, because you have an object that owns the memory, and when that object goes out of scope, it releases the memory it owns--entirely automatically. It also works even when/if exceptions are thrown, which your code doesn't even attempt to deal with.
std::vector<Card> subset;
subset.reserve(subsetSize);   // reserve space; don't default-construct subsetSize elements
int subsetPrice = 0, subsetProfit = 0;
for (int i = 0; i < subsetSize; i++) {
    subset.push_back(problemCards[cardIndexesToAdd[i]]);
    subsetPrice += subset.back().getPrice();
    subsetProfit += subset.back().getProfit();
}
if (subsetProfit > maxProfit && subsetPrice <= maxToSpend) {
    maxSubset = std::move(subset);
    maxProfit = subsetProfit;
}
// code to print out maxSubset goes here
If you wanted to go even further, you could use (for example) a Boost indirect_iterator in place of your cardIndexesToAdd. This would let you apply standard algorithms directly to the subset you care about. With this, you could pretty easily avoid making a copy of the current subset at all--you'd just use the indirect_iterator to iterate over the original collection in-place instead.
You could also define an operator+ for Card that would sum the Price and Profit fields:
Card operator+(Card const &left, Card const &right) {
    return Card(left.price+right.price, left.profit+right.profit);
}
With this, and the aforementioned indirect_iterator, adding up the profit for a subset could be something like:
Card subset_stats = std::accumulate(subset.begin(), subset.end(), Card());
Likewise, we could define a comparison operator for Card that produces results based on profit and/or cost:
// Assuming we care primarily about maximizing profit, secondarily about
// price, so if one subset produces more profit, it's better. If they produce
// the same profit, the lower cost wins.
bool operator<(Card const &a, Card const &b) {
    if (a.profit == b.profit)
        return a.price < b.price;
    return b.profit < a.profit;
}
With that, we can compare Cards directly, like: if (a < b) .. and get meaningful results.
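For example, a sketch of how that ordering could be used to pick the best of several per-subset summary Cards (pickBest and candidates are illustrative names, not part of the original code; it assumes <algorithm> and <vector> are included and the collection is non-empty):
Card pickBest(std::vector<Card> const &candidates) {
    // operator< above ranks "better" cards as smaller, so min_element finds the
    // most profitable (and, on equal profit, cheapest) summary.
    return *std::min_element(candidates.begin(), candidates.end());
}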
Sorry, this would be a comment but I'm new here and can't do that yet.
No check for nullptr is needed after new for out-of-memory; plain new throws std::bad_alloc instead of returning null. Thanks @jerry-coffin.
All of your delete[] calls are inside if() or nested if() statements. If this leaks, it's because one of those branches is missing an else with a delete[].
This appears to be a snippet, but as it stands I see no reason for M or for assigning S to it. You should probably consider having a single delete[] at the end.
I need to implement the function resize() :
void IntSet::resize(int new_capacity)
{
    if (new_capacity < used)
        new_capacity = used;
    if (used == 0)
        new_capacity = 1;
    capacity = new_capacity;
    int * newData = new int[capacity];
    for (int i = 0; i < used; ++i)
        newData[i] = data[i];
    delete [] data;
    data = newData;
}
and use it inside these functions:
IntSet IntSet::unionWith(const IntSet& otherIntSet) const
{
    IntSet unionSet = otherIntSet;
    for (int i = 0; i < used; i++)
    {
        if (unionSet.contains(data[i]))
            unionSet.add(data[i]);
    }
    return unionSet;
}
and this one (NOTE: I already have it inside the add() function, but I think it is incorrect):
bool IntSet::add(int anInt)
{
    if (contains(anInt) == false)
    {
        if (used >= capacity)
            resize(used++);
        data[used++] = anInt;
        return true;
    }
    return false;
}
The program compiles without errors, but it gives me a segmentation fault at runtime.
NOTE: The main thing is that I need help learning how to use the resize function to resize the capacity of the dynamic member data. Also, I know vectors would help in this case, but we are not allowed to use vectors yet.
Here is the Special Requirement from my professor:
> Special Requirement (You will lose points if you don't observe this.)
> When calling resize *(while implementing some of the member functions)* to
> increase the capacity of the dynamic arrays, use the following resizing
> rule (unless the new capacity has to be something else higher as dictated
> by other overriding factors):
>
> *"new capacity" is "roughly 1.5 x old capacity" and at least "old capacity + 1".*
>
> The latter *(at least "old capacity + 1")* is a simple way to take care
> of the subtle case where "1.5 x old capacity" evaluates (with truncation)
> to the same as "old capacity".
When you resize because of add, you increment used twice.
bool IntSet::add(int anInt)
{
    if (contains(anInt) == false)
    {
        if (used >= capacity)
            resize(used++); // Here. And this is a post-increment:
                            // resize will be called with used before the increment,
                            // so you will wind up asking for a buffer of the same size.
        data[used++] = anInt; // and here.
        return true;
    }
    return false;
}
So nothing goes into the slot you meant to fill: you skip a space and write into the space after it. Plus resize(used++) didn't actually ask for more space, so you wind up writing two spots past the end of the allocated storage, and that is probably what triggers the segfault.
Solution
You don't want to increment anything at resize(used++);. You want to ask for one more slot than the current capacity, without touching used, so
resize(capacity + 1);
looks about right. However, what the instructions asked for is something more like:
int newcap = capacity * 1.5;
if (newcap == capacity) // newcap didn't change. eg: 1*1.5 = 1
{
    newcap++;
}
resize(newcap);
This is brute force though. There are smarter ways to do this.
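For example, folding that rule and the corrected increments back into add could look something like this (a sketch only; it reuses the used, capacity, data, contains, and resize members from your class as posted):
bool IntSet::add(int anInt)
{
    if (contains(anInt))
        return false;
    if (used >= capacity)
    {
        // "roughly 1.5 x old capacity" and at least "old capacity + 1"
        int newcap = static_cast<int>(capacity * 1.5);
        if (newcap < capacity + 1)
            newcap = capacity + 1;
        resize(newcap);       // note: no increment of used here
    }
    data[used] = anInt;       // store first...
    ++used;                   // ...then bump the count exactly once
    return true;
}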
Test case:
8,7,5,2,3,6,9 (NOT a min heap; this is the Element* A array passed to buildHeap)
2,3,5,7,8,6,9 (min heap after calling buildHeap)
3,5,6,7,8,9 (after calling deleteMin) THIS IS INCORRECT
it should be 3,7,5,9,8,6
I can't seem to find the problem with deleteMin. I know my heapify is working, but maybe I'm not seeing something.
Element Heap::deleteMin(Heap& heap){
    Element deleted = heap.H[0];
    heap.H[0] = heap.H[heap.size-1];
    heap.size--;
    cout<<deleted.getKey()<<" has been deleted from heap"<<endl;
    for(int i=heap.capacity/2-1;i>=0;--i)
        heapify(heap,i);
    return deleted;
}
void Heap::heapify(Heap& heap,int index){
    int smallest = 0;
    int left = 2*index;
    int right = 2*index+1;
    if(left < heap.size && heap.H[left].getKey() < heap.H[index].getKey())
        smallest=left;
    else
        smallest=index;
    if(right < heap.size && heap.H[right].getKey() < heap.H[smallest].getKey())
        smallest=right;
    if(smallest != index){
        int swapKey = heap.H[index].getKey();
        heap.H[index].setKey(heap.H[smallest].getKey());
        heap.H[smallest].setKey(swapKey);
        heapify(heap,smallest);
    }
}
void Heap::buildHeap(Heap& heap, Element* A){
    for(int j=0;j<heap.capacity;++j){
        heap.insert(heap,A[j]);
        for(int i=heap.capacity/2-1;i>=0;--i)
            heapify(heap,i);
    }
}
The first problem is that your calculations for child indexes are wrong. If you're using H[0] as the root of the heap, then
left = (2*index)+1
right = (2*index)+2
The calculations you have assume that the root is at H[1].
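Here is a sketch of heapify with only the child-index calculation corrected; everything else is left as you wrote it:
void Heap::heapify(Heap& heap, int index){
    int smallest = index;
    int left = 2*index + 1;      // children of H[index] when the root is H[0]
    int right = 2*index + 2;
    if(left < heap.size && heap.H[left].getKey() < heap.H[index].getKey())
        smallest = left;
    if(right < heap.size && heap.H[right].getKey() < heap.H[smallest].getKey())
        smallest = right;
    if(smallest != index){
        int swapKey = heap.H[index].getKey();
        heap.H[index].setKey(heap.H[smallest].getKey());
        heap.H[smallest].setKey(swapKey);
        heapify(heap, smallest);
    }
}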
The other problem is that you're doing too much work in your deleteMin function:
Element Heap::deleteMin(Heap& heap){
    Element deleted = heap.H[0];
    heap.H[0] = heap.H[heap.size-1];
    heap.size--;
    cout<<deleted.getKey()<<" has been deleted from heap"<<endl;
    for(int i=heap.capacity/2-1;i>=0;--i)
        heapify(heap,i);
    return deleted;
}
After you delete the minimum item and put the last item in the heap at the root, you just need to call heapify(heap, 0); There's no reason to re-build the entire heap in that loop.
So your function becomes:
Element Heap::deleteMin(Heap& heap){
    Element deleted = heap.H[0];
    heap.H[0] = heap.H[heap.size-1];
    heap.size--;
    cout<<deleted.getKey()<<" has been deleted from heap"<<endl;
    heapify(heap, 0);
    return deleted;
}
Your buildHeap method is similarly doing too much work.
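One possible simplification, as a sketch: copy the elements in first and then run a single bottom-up pass with the index-corrected heapify. This assumes H has room for capacity elements and that size should end up equal to capacity; if your insert already sifts the new element up, you could instead keep the insert loop and drop the inner heapify loop entirely.
void Heap::buildHeap(Heap& heap, Element* A){
    for(int j = 0; j < heap.capacity; ++j)
        heap.H[j] = A[j];             // just copy the raw elements in
    heap.size = heap.capacity;
    for(int i = heap.size/2 - 1; i >= 0; --i)
        heapify(heap, i);             // one bottom-up pass fixes the whole heap
}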
You might be interested in a refresher on heaps. My blog article at http://blog.mischel.com/2013/09/29/a-better-way-to-do-it-the-heap/ explains their operation in very simple terms, and my simple heap of integers shows a bare-bones implementation. It's in C# rather than C++, but the code is very similar. You should be able to understand it without trouble.
I am having two main issues implementing the algorithm described in this article in C++: properly terminating the algorithm and freeing up dynamically allocated memory without running into a seg fault.
Here is the pseudocode provided in the article:
RBFS (node: N, value: V, bound: B)
    IF f(N)>B, return f(N)
    IF N is a goal, EXIT algorithm
    IF N has no children, RETURN infinity
    FOR each child Ni of N,
        IF f(N) < V AND f(Ni) < V THEN F[i] := V
        ELSE F[i] := f(Ni)
    sort Ni and F[i] in increasing order of F[i]
    IF only one child, F[2] := infinity
    WHILE (F[1] <= B)
        F[1] := RBFS(N1, F[1], MIN(B, F[2]))
        insert N1 and F[1] in sorted order
    return F[1]
Here, f(Ni) refers to the "computed" function value, whereas F[i] refers to the currently stored value of f(Ni).
Here is my C++ implementation, in which I had to use a global variable to keep track of whether the goal had been reached or not (note, I am trying to maximize my f(n) value as opposed to minimizing, so I reversed inequalities, orders, min/max values, etc.):
bool goal_found = false;

bool state_cmp(FlowState *lhs, FlowState *rhs)
{
    return (lhs->value > rhs->value);
}

int _rbfs(FlowState *state, int value, int bound)
{
    if (state->value < bound) // Returning if the state value is less than bound
    {
        int value = state->value;
        delete state;
        return value;
    }
    if (state->is_goal()) // Check if the goal has been reached
    {
        cout << "Solved the puzzle!" << endl;
        goal_found = true; // Modify the global variable to exit the recursion
        return state->value;
    }
    vector<FlowState*> children = state->children();
    if (children.empty())
    {
        //delete state; // Deleting this state seems to result in a corrupted state elsewhere
        return INT_MIN;
    }
    int n = 0; // Count the number of children
    for (const auto& child: children)
    {
        if (state->value < value && child->value < value)
            child->value = value;
        else
            child->update_value(); // Equivalent of setting stored value to static value (F[i] := f(Ni))
        ++n;
    }
    sort(children.begin(), children.end(), state_cmp);
    while (children.front()->value >= bound && !goal_found)
    { // Loop depends on the global goal_found variable since this is where the recursive calls happen
        if (children.size() < 2)
            children.front()->set_value(_rbfs(children.front(), children.front()->value, bound));
        else
            children.front()->set_value(_rbfs(children.front(), children.front()->value, max(children[1]->value, bound)));
    }
    // Free children except the front
    int i;
    for (i = 1; i < n; ++i)
        delete children[i];
    state->child = children.front(); // Records the path
    return state->child->value;
}

void rbfs(FlowState* initial_state)
{
    // This is the actual function I invoke to call the algorithm
    _rbfs(initial_state, initial_state->get_value(), INT_MIN);
    print_path(initial_state);
}
My main questions are:
Is there a way to terminate this function other than having to use a global variable (bool goal_found), without a complete re-implementation? Recursive algorithms usually have some kind of base case to terminate the function, but I am not seeing an obvious way of doing that.
I can't seem to delete the dead-end state (when the state has no children) without running into a segmentation fault, but not deleting it results in unfreed memory (each state object was dynamically allocated). How can I modify this code to ensure that I've freed all of the states that pass through it?
I ran the program with gdb to see what was going on, and it appears that after deleting the dead-end state, the next state that is recursively called is not actually NULL, but appears to be corrupted. It has an address, but the data it contains is all junk. Not deleting that node lets the program terminate just fine, but then many states aren't getting freed. In addition, I had originally used the classical, iterative best-first search (but it takes up far too much memory for my case, and is much slower), and in that case, all dynamically allocated states were properly freed so the issue is in this code somewhere (and yes, I am freeing each of the states on the path in main() after calling rbfs).
In your code, you have
children.front()->set_value(_rbfs(children.front(), ...
where state inside of _rbfs is thus children.front().
And in _rbfs, you sometimes delete state. So children.front() can be deleted and then called with ->set_value. There's your problem.
Is there any reason why you're calling delete at all?
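One way out, as a rough sketch rather than a drop-in fix: keep ownership of every node with the code that created it, so _rbfs never deletes the pointer it was handed. If, say, children() were changed to return owning pointers (this changed signature, and the release step below, are assumptions about your FlowState class, not something your code already has), the vector would clean up whatever you don't keep; you'd need <memory> for std::unique_ptr:
// Hypothetical: children() now hands back owning pointers.
std::vector<std::unique_ptr<FlowState>> children = state->children();

// The recursive call only borrows the raw pointer; nothing below deletes it.
children.front()->set_value(
    _rbfs(children.front().get(), children.front()->value, bound));

// Keep the one child that records the path; the rest are freed automatically
// when the vector goes out of scope.
state->child = children.front().release();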
I'm new to C++, so there's a lot I don't really understand. I'm trying to narrow down how I'm getting EXC_BAD_ACCESS, but my attempts to print out values seem to be aggravating (or causing) the problem!
#include <iostream>
#include "SI_Term.h"
#include "LoadPrefabs.h"

int main() {
    SI_Term * velocity = new SI_Term(1, "m/s");
    std::cout<<"MAIN: FIRST UNITS "<<std::endl;
    velocity->unitSet()->displayUnits();
    return 0;
}
The above code produces an error (EXC_BAD_ACCESS) before the std::cout<< line even occurs. I traced it with xcode and it fails within the function call to new SI_Term(1, "m/s").
Re-running with the cout line commented out, it runs and finishes. I would attach more code, but I have a lot and I don't know what is relevant to this line seeming to sneak backwards and overwrite a pointer. Can anyone help me with where to look or how to debug this?
NEW INFO:
I narrowed it down to this block. I should explain at this point, this block is attempting to decompose a set of physical units written in the format kg*m/s^2 and break it down into kg, m, divide by s * s. Once something is broken down it uses LoadUnits(const char*) to read from a file. I am assuming (correctly at this point) that no string of units will contain anywhere near my limit of 40 characters.
UnitSet * decomposeUnits(const char* setOfUnits){
    std::cout<<"Decomposing Units";
    int i = 0;
    bool divide = false;
    UnitSet * nextUnit = 0;
    UnitSet * temp = 0;
    UnitSet * resultingUnit = new UnitSet(0, 0, 0, 1);
    while (setOfUnits[i] != '\0') {
        int j = 0;
        char decomposedUnit[40];
        std::cout<<"Wiped unit."<<std::endl;
        while ((setOfUnits[i] != '\0') && (setOfUnits[i] != '*') && (setOfUnits[i] != '/') && (setOfUnits[i] != '^')) {
            std::cout<<"Adding: " << decomposedUnit[i]<<std::endl;
            decomposedUnit[j] = setOfUnits[i];
            ++i;
            ++j;
        }
        decomposedUnit[j] = '\0';
        nextUnit = LoadUnits(decomposedUnit);
        //The new unit has been loaded. now check for powers, if there is one read it, and apply it to the new unit.
        //if there is a power, read the power, read the sign of the power and flip divide = !divide
        if (setOfUnits[i] == '^') {
            //there is a power. Analize.
            ++i;++j;
            double power = atof(&setOfUnits[i]);
            temp = *nextUnit^power;
            delete nextUnit;
            nextUnit = temp;
            temp = 0;
        }
        //skip i and j till the next / or * symbol.
        while (setOfUnits[i] != '\0' && setOfUnits[i] != '*' && setOfUnits[i] != '/') {
            ++i; ++j;
        }
        temp = resultingUnit;
        if (divide) {
            resultingUnit = *temp / *nextUnit;
        } else {
            resultingUnit = *temp * *nextUnit;
        }
        delete temp;
        delete nextUnit;
        temp = 0;
        nextUnit = 0;
        // we just copied a word and setOfUnits[i] is the multiply or divide or power character for the next set.
        if (setOfUnits[i] == '/') {
            divide = true;
        }
        ++i;
    }
    return resultingUnit;
}
I'm tempted to say that SI_Term is messing with the stack (or maybe trashing the heap). Here's a great way to do that:
char buffer[16];
strcpy(buffer, "I'm writing too much into a buffer");
Your function will probably finish, but then wreak havoc. Check all arrays you have on the stack and make sure you don't write out of bounds.
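In your decomposeUnits, for instance, the copy into decomposedUnit[40] has no bound on j. A guard along these lines (a sketch, leaving the rest of the loop as you have it) keeps the writes inside the buffer even if an input turns out longer than expected:
while (j < 39 /* leave room for the '\0' */ &&
       setOfUnits[i] != '\0' && setOfUnits[i] != '*' &&
       setOfUnits[i] != '/' && setOfUnits[i] != '^') {
    decomposedUnit[j] = setOfUnits[i];
    ++i;
    ++j;
}
decomposedUnit[j] = '\0';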
Then apply standard debugging practices: remove code piece by piece until it doesn't crash anymore, then start reinstating it to find your culprit.
You mention Xcode, so I assume you're on a Mac. I'd then suggest looking at the valgrind tool from http://valgrind.org/. That's a memory checker that gives you information when you're doing something wrong with memory. If your program was built with debugging symbols, it should give you a stack trace helping you to find the error.
Here, I removed the unimportant stuff:
while (setOfUnits[i] != '\0') {
    while ((setOfUnits[i] != '\0') && (setOfUnits[i] != '*') && (setOfUnits[i] != '/') && (setOfUnits[i] != '^')) {
        ...
        ++i;
    }
    ...
    nextUnit = LoadUnits(decomposedUnit);
    ...
    if (...) {
        double power = ...;
        temp = *nextUnit^power;
        delete nextUnit;
    }
    ....
    temp = resultingUnit;
    delete temp;
    delete nextUnit;
    ...
    ++i;
}
There are a number of problems with this:
In the inner loop, you increment i until setOfUnits[i] == '\0', the end of the string. Then, at the bottom of the outer loop, you increment i again, past the end of the string.
nextUnit is of type UnitSet, which presumably overloads ^. Though it's possible that it overloads it to mean "exponentiation", it probably doesn't (and if it does, it shouldn't): in C-based languages, including C++, ^ means XOR, not exponentiation.
You are deleting pointers returned from other functions - that is, you have functions that return dynamically-allocated memory, and expect the caller to delete that memory. While not incorrect, and in fact common practice in C, it is considered bad practice in C++. Just have LoadUnits() return a UnitSet (rather than a UnitSet*), and make sure to overload the copy constructor and operator= in the UnitSet class. If performance then becomes a concern, you could return a const UnitSet& instead, or use smart pointers.
In a similar vein, you are allocating and deleting inside the same function. There is no need for this: just make resultingUnit stack-allocated:
UnitSet resultingUnit(0, 0, 0, 1);
I know that last bullet-point sounds very confusing, but once you finally come to understand it, you'll likely know more about C++ than 90% of coders who claim to "know" C++. This site and this book are good places to start learning.
Good luck!
I thought I'd post a little of my homework assignment. I'm so lost in it. I just have to be really efficient, without using any STL, Boost, and the like. By this post, I was hoping that someone could help me figure it out.
bool stack::pushFront(const int nPushFront)
{
    if ( count == maxSize ) // indicates a full array
    {
        return false;
    }
    else if ( count <= 0 )
    {
        count++;
        items[top+1].n = nPushFront;
        return true;
    }
    ++count;
    for ( int i = 0; i < count - 1; i++ )
    {
        intBackPtr = intFrontPtr;
        intBackPtr++;
        *intBackPtr = *intFrontPtr;
    }
    items[top+1].n = nPushFront;
    return true;
}
I just cannot figure out for the life of me how to do this correctly! I hope I'm doing this right, what with the pointers and all:
int *intFrontPtr = &items[0].n;
int *intBackPtr = &items[capacity-1].n;
I'm trying to think of this pushFront method like shifting an array to the right by 'n' units... I can only seem to do that in an array that is full. Can someone out there please help me?
Firstly, I'm not sure why you have the line else if ( count <= 0 ) - the count of items in your stack should never be below 0.
Usually, you would implement a stack not by pushing to the front, but pushing and popping from the back. So rather than moving everything along, as it looks like you're doing, just store a pointer to where the last element is, and insert just after that, and pop from there. When you push, just increment that pointer, and when you pop, decrement it (you don't even have to delete it). If that pointer is at the end of your array, you're full (so you don't even have to store a count value). And if it's at the start, then it's empty.
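As a rough sketch of that idea, reusing the items, count, and maxSize members from your snippet (pushBack, popBack, and nPopped are illustrative names, and these functions would of course need to be declared in your class):
bool stack::pushBack(const int nPushBack)   // push onto the end instead of the front
{
    if (count == maxSize)        // full
        return false;
    items[count].n = nPushBack;  // count doubles as the index of the next free slot
    ++count;
    return true;
}

bool stack::popBack(int& nPopped)
{
    if (count == 0)              // empty
        return false;
    --count;
    nPopped = items[count].n;    // no shifting and nothing to delete
    return true;
}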
Edit
If you're after a queue, look into Circular Queues. That's typically how you'd implement one in an array. Alternatively, rather than using an array, try a Linked List - that lets it be arbitrarily big (the only limit is your computer's memory).
You don't need any pointers to shift an array. Just use a simple for statement:
int *a;      // Your array
int count;   // Elements count in array
int length;  // Length of array (maxSize)

bool pushFront(const int nPushFront)
{
    if (count == length) return false;
    for (int i = count - 1; i >= 0; --i)
        std::swap(a[i], a[i + 1]);   // shift everything one slot to the right
    a[0] = nPushFront; ++count;
    return true;
}
Without doing your homework for you, let me see if I can give you some hints. Implementing a deque (double-ended queue) is really quite easy if you can get your head around a few concepts.
Firstly, it is key to note that since we will be popping off the front and/or back, to code this efficiently over contiguous storage we need to be able to pop from either end without shifting the entire array (which is what you currently do). A much better, and in my mind simpler, way is to track the front AND the back of the relevant data within your deque.
As a simple example of the above concept consider a static (cannot grow) deque of size 10:
class Deque
{
public:
    Deque()
        : front(0)
        , count(0) {}
private:
    size_t front;
    size_t count;
    enum {
        MAXSIZE = 10
    };
    int data[MAXSIZE];
};
You can of course implement this and allow it to grow in size etc. But for simplicity I'm leaving all that out. Now to allow a user to add to the deque:
void Deque::push_back(int value)
{
    if(count>=MAXSIZE)
        throw std::runtime_error("Deque full!");
    data[(front+count)%MAXSIZE] = value;
    count++;
}
And to pop off the back:
int Deque::pop_back()
{
    if(count==0)
        throw std::runtime_error("Deque empty! Cannot pop!");
    int value = data[(front+(--count))%MAXSIZE];
    return value;
}
Now the key thing to observe in the above functions is how we are accessing the data within the array. By modding with MAXSIZE we ensure that we are not accessing out of bounds, and that we are hitting the right value. Also as the value of front changes (due to push_front, pop_front) the modulus operator ensures that wrap around is dealt with appropriately. I'll show you how to do push_front, you can figure out pop_front for yourself:
void Deque::push_front(int value)
{
    if(count>=MAXSIZE)
        throw std::runtime_error("Deque full!");
    // Determine where front should now be.
    if (front==0)
        front = MAXSIZE-1;
    else
        --front;
    data[front] = value;
    ++count;
}
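To see the wrap-around in action, here is a quick usage sketch (it assumes the member functions above are declared in the class definition; the behaviour traced in the comments follows directly from the code shown):
#include <iostream>

int main() {
    Deque d;
    d.push_back(1);    // front = 0: data[0] = 1
    d.push_back(2);    // data[1] = 2
    d.push_front(0);   // front wraps to MAXSIZE-1, so 0 is stored at data[9]
    std::cout << d.pop_back() << '\n';   // prints 2
    std::cout << d.pop_back() << '\n';   // prints 1
    std::cout << d.pop_back() << '\n';   // prints 0 (read from data[9])
    return 0;
}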