I'm learning C++ and I ran into a bit of code, and I'm confused about what exactly it does. I'm learning dynamic memory, and the place I'm learning it from mentioned that this was good practice.
double * pvalue = NULL;
if(!(pvalue = new double)){
cout<<"Error: out of memory." << endl;
exit(1);
}
I understand that you are creating a pointer called pvalue, but I don't understand the if statement. If someone could explain it to me, it would be greatly appreciated.
I'm confused about what exactly it does.
That code reflects a very outdated notion and a limited understanding of C++.
First of all, C++ does not report a failure to allocate memory by returning NULL. It throws an exception, of type std::bad_alloc (which is derived from std::exception).
Second, with the advent of C++11, using "naked" pointers like that is frowned upon, because they all-too-easily result in resource leaks when you forget to delete them, e.g. when an unexpected exception makes your pointer go out of scope.
So you really should use either std::unique_ptr<> or std::shared_ptr<>.
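A minimal sketch of what that looks like with std::make_unique (C++14); note there is no naked new and no null check, and a failed allocation throws std::bad_alloc instead of returning NULL:
#include <memory>

int main() {
    // The double lives on the heap, but the unique_ptr frees it automatically
    // when it goes out of scope, even if an exception is thrown later.
    auto pvalue = std::make_unique<double>(3.14);
    *pvalue += 1.0;
}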
pvalue = new double // This allocates a dynamic double as usual
!(pvalue = new double) // This takes the value of the resulting pointer
// and negates it, checking whether it's null
if(!(pvalue = new double)) // It all becomes the condition for the if.
It's worth noting that:
Raw owning pointers must not be used in C++, use smart pointers instead;
using namespace std; (which I'm certain this sample has) must not be used;
Dynamically allocating a single double is weird;
Plain new will never return a null pointer, so the check is meaningless;
std::exit will nuke your program mid-flight, skipping the destructors of all objects with automatic lifetime; return from main instead.
[...] the place I'm learning it from mentioned that this was good practice.
It's time to put on some heavy metal and sunglasses, set fire to "the place" and go find some better learning material.
This:
if(!(pvalue = new double)){
... is just a (perhaps-too) clever shorthand way of writing this:
pvalue = new double;
if (pvalue == NULL) {...}
new never returns nullptr unless you provide std::nothrow. Rather, if new fails it will throw std::bad_alloc.
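For completeness, a minimal sketch of the nothrow form, the only case in which a null check after new actually means something:
#include <iostream>
#include <new>      // std::nothrow

int main() {
    double* pvalue = new (std::nothrow) double;  // returns nullptr on failure
    if (!pvalue) {
        std::cout << "Error: out of memory." << std::endl;
        return 1;
    }
    delete pvalue;
}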
The more appropriate solution would be something like:
#include <cstdlib>
#include <iostream>
#include <new>

double * foo()
{
    double * pvalue = nullptr;
    try {
        pvalue = new double;
    }
    catch (const std::bad_alloc &) {
        std::cout << "Error: out of memory." << std::endl;
        std::exit(1);
    }
    return pvalue;
}
However, using raw owning pointers is discouraged in modern code. new is generally avoided in favor of std::make_unique or std::make_shared. For example, a better solution would be something like this:
#include <cstdlib>
#include <iostream>
#include <memory>
#include <new>

std::unique_ptr<double> foo()
{
    try {
        return std::make_unique<double>();
    }
    catch (const std::bad_alloc &) {
        std::cout << "Error: out of memory." << std::endl;
        std::exit(1);
    }
}
Following is my code snippet. I am trying to write a generic function that checks whether a pointer is valid and deletes it.
#include <windows.h>
#include <tchar.h>
#include <vector>
#include <map>
#include <string>
using namespace std;
struct testStruct
{
int nVal;
_TCHAR tcVal[256];
testStruct()
{
wmemset(tcVal, 0, _countof(tcVal));
}
};
void deletePointer(void *obj)
{
if (obj)
{
delete obj;
obj = NULL;
}
}
int _tmain(int argc, _TCHAR* argv[])
{
testStruct *obj = new testStruct;
wstring *strVal = new wstring();
vector<wstring> *vecVal = new vector<wstring>;
map<wstring,wstring> *mapVal = new map<wstring, wstring>;
//My business logic goes here.
//Finally after all business logic, clearing allocated memory.
deletePointer(obj);
deletePointer(strVal);
deletePointer(vecVal);
deletePointer(mapVal);
return 0;
}
While I am not facing any compilation or runtime errors, I just wanted to confirm whether this is the right way to check and delete multiple pointers. I don't want to check each individual pointer for validity before deleting it, so I am calling a generic function.
Thanks for your suggestions in advance.
Compilation and runtime errors are not present. Just need confirmation whether this is the right way, or whether there is a better way to do this.
No, it's both incorrect and unnecessary
If your compiler doesn't report an error on this code, crank up the warnings: https://godbolt.org/z/7ranoEnMa. Deleting a void* is Undefined Behaviour; you cannot predict what the result will be. If it's currently not crashing, it will likely crash at some other random use when you least expect it.
It's unnecessary, because it's perfectly fine to delete a null pointer, and that is all your function checks for. If you wanted to check whether the pointer is actually valid, as Nathan Pierson suggests in a comment (and you don't assign nullptr to them consistently), that's not possible. You are responsible for your memory management; no if can help if you don't do that correctly throughout the program.
And it's also not necessary, because memory management is already done for you. Containers shouldn't ever be allocated on the heap. Simply do
wstring strVal;
vector<wstring> vecVal;
map<wstring,wstring> mapVal;
And drop the pointers. C++ containers do all the magic by themselves, and the container objects themselves are small (sizeof(std::vector) is usually 3 * sizeof(void*)).
Assuming you really need testStruct on the heap rather than in automatic storage, you should use a smart pointer:
std::unique_ptr<testStruct> obj = std::make_unique<testStruct>();
There, it's created, allocated on the heap, and will be automatically deleted when obj goes out of scope. You don't have to worry about deleting anything anymore.
If you really want to have a function that deletes objects manually, it should look like this:
template <typename T>
void deletePointer(T*& obj)
{
delete obj;
obj = nullptr;
}
It keeps the type of the pointer to be deleted and sets the passed pointer to nullptr, so it won't dangle later on.
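For illustration, a small self-contained sketch of how that template behaves (using int instead of the question's testStruct):
#include <iostream>

template <typename T>
void deletePointer(T*& obj)   // same template as above
{
    delete obj;
    obj = nullptr;
}

int main()
{
    int* p = new int(42);
    deletePointer(p);                      // deletes through the correct type
    std::cout << (p == nullptr) << '\n';   // prints 1: the pointer was reset
    deletePointer(p);                      // safe: deleting a null pointer is a no-op
}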
I'm using a library that, in order to construct some object that I use, expects a raw pointer to an object. I'm not sure what it will do with the pointer; to make my code as safe as possible, what should I pass to this function?
1. Use a unique pointer - if they decide to delete the pointer, I will end up with a double-delete
2. Keep track of a raw pointer - bad because I have to remember to write delete, and it could still be a double-delete
3. Use automatic duration and give them a pointer, or give them a reference - their code will error if they call delete on a reference
4. Use a shared pointer - same double-delete problem as the unique pointer, but now my scope won't hurt their pointer
Based on my reading, option 3 seems like what I should do - they shouldn't be calling delete on the pointer, and this format enforces that. But what if I don't know whether they now or in the future will call delete on the reference I gave them? Use a shared pointer and say "not my fault about the double delete"?
#include <memory>
#include <iostream>
class ComplexObj {
public:
ComplexObj() : m_field(0) {}
ComplexObj(int data) : m_field(data) {}
void print() { std::cout << m_field << std::endl; }
private:
int m_field;
};
class BlackBox {
public:
BlackBox(ComplexObj* data) {
m_field = *data;
// Do other things I guess...
delete data;
std::cout << "Construction complete" << std::endl;
}
void print_data() { m_field.print(); }
private:
ComplexObj m_field;
};
int main(int argc, char* argv[]) {
// Use a smart pointer
std::unique_ptr<ComplexObj> my_ptr(new ComplexObj(1));
BlackBox obj1 = BlackBox(my_ptr.get());
obj1.print_data();
my_ptr->print(); // Bad data, since BlackBox free'd
// double delete when my_ptr goes out of scope
// Manually manage the memory
ComplexObj* manual = new ComplexObj(2);
BlackBox obj2 = BlackBox(manual);
obj2.print_data();
manual->print(); // Bad data, since BlackBox free'd
delete manual; // Pair new and delete, but this is a double delete
// Edit: use auto-duration and give them a pointer
ComplexObj by_ref(3);
BlackBox obj3 = BlackBox(&by_ref); // they can't call delete on the pointer they have
obj3.print_data();
by_ref.print();
// Use a shared pointer
std::shared_ptr<ComplexObj> our_ptr(new ComplexObj(4));
BlackBox obj4 = BlackBox(our_ptr.get());
obj4.print_data();
our_ptr->print(); // Bad data, they have free'd
// double delete when our_ptr goes out of scope
return 0;
}
Other questions I read related to this topic...
unique_ptr.get() is legit at times
I should pass by reference
I think I am case 2 and should pass by reference
You cannot solve this problem with the information you have. All choices produce garbage.
You have to read the documentation of the API you are using.
Doing any of your 4 answers without knowing if they take ownership of the pointer will result in problems.
Life sometimes sucks.
If you have a corrupt or hostile API, the only halfway safe thing to do is to interact with it in a separate process, carefully flush all communication, and shut down the process.
If the API isn't corrupt or hostile, you should be able to know if it is taking ownership of the pointed to object. Calling an API without knowing this is a common mistake in novice C++ programmers. Don't do it. Yes, this sucks.
If this API is at all internal and you have any control, seek to make all "owning pointer" arguments be std::unique_ptr<>s. That makes it clear in the API that you intend to own the object and delete it later.
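For instance, a hedged sketch of how the two contracts can be spelled out in the signatures themselves (Widget, consume and inspect are illustrative names, not from the question):
#include <memory>

struct Widget { int value = 0; };

// Takes ownership: the callee frees the object (here, when the unique_ptr dies).
void consume(std::unique_ptr<Widget> w) { /* use *w; it is deleted on return */ }

// Borrows only: the callee must not delete or keep the address past the call.
void inspect(const Widget& w) { (void)w.value; }

int main() {
    auto w = std::make_unique<Widget>();
    inspect(*w);            // still ours afterwards
    consume(std::move(w));  // ownership handed over; w is now empty
}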
I am trying to cudaMalloc a bunch of device pointers and gracefully exit if any of the mallocs didn't work. I have functioning code, but it's bloated because I have to cudaFree everything I'd previously malloc'd if one fails. So now I am wondering if there is a more succinct method of accomplishing this. Obviously I can't free something that hasn't been malloc'd - that will definitely cause problems.
Below is the snippet of code I am trying to make more elegant.
//define device pointers
float *d_norm, *d_dut, *d_stdt, *d_gamma, *d_zeta;
//allocate space on the device for the vectors and answer
if (cudaMalloc(&d_norm, sizeof(float)*vSize) != cudaSuccess) {
std::cout << "failed malloc";
return;
};
if (cudaMalloc(&d_dut, sizeof(float)*vSize) != cudaSuccess) {
std::cout << "failed malloc";
cudaFree(d_norm);
return;
};
if (cudaMalloc(&d_stdt, sizeof(float)*wSize) != cudaSuccess) {
std::cout << "failed malloc";
cudaFree(d_norm);
cudaFree(d_dut);
return;
};
if (cudaMalloc(&d_gamma, sizeof(float)*vSize) != cudaSuccess) {
std::cout << "failed malloc";
cudaFree(d_norm);
cudaFree(d_dut);
cudaFree(d_stdt);
return;
};
if (cudaMalloc(&d_zeta, sizeof(float)*wSize) != cudaSuccess) {
std::cout << "failed malloc";
cudaFree(d_norm);
cudaFree(d_dut);
cudaFree(d_stdt);
cudaFree(d_gamma);
return;
};
This is a shortened version, but you can see how it just keeps building. In reality I am trying to malloc about 15 arrays. It starts getting ugly - but it works correctly.
Thoughts?
Some possibilities:
cudaDeviceReset() will free all device allocations, without you having to run through a list of pointers (a minimal sketch follows after this list).
if you intend to exit (the application), all device allocations are freed automatically upon application termination anyway. The CUDA runtime detects the termination of the process associated with an application's device context, and wipes that context at that point. So if you're just going to exit, it should be safe not to perform any cudaFree() operations.
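For instance, applied to the question's own snippet (reusing its variable names), the first possibility boils down to something like:
if (cudaMalloc(&d_gamma, sizeof(float) * vSize) != cudaSuccess) {
    std::cout << "failed malloc";
    cudaDeviceReset();   // releases d_norm, d_dut, d_stdt, ... in one call
    return;
}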
You can wrap them in a unique_ptr with a custom deleter (C++11).
Or just add each pointer to a vector when its allocation succeeds, and free all the pointers in the vector at the end.
Example using unique_ptr:
#include <iostream>
#include <memory>
using namespace std;
void nativeFree(float* p);
float* nativeAlloc(float value);
class NativePointerDeleter{
public:
void operator()(float* p)const{nativeFree(p);}
};
int main(){
using pointer_type = unique_ptr<float,decltype(&nativeFree)>;
using pointer_type_2 = unique_ptr<float,NativePointerDeleter>;
pointer_type ptr(nativeAlloc(1),nativeFree);
if(!ptr)return 0;
pointer_type_2 ptr2(nativeAlloc(2));//no need to provide deleter
if(!ptr2)return 0;
pointer_type ptr3(nullptr,nativeFree);//simulate a fail alloc
if(!ptr3)return 0;
/*Do Some Work*/
//now one can return without care about all the pointers
return 0;
}
void nativeFree(float* p){
cout << "release " << *p << '\n';
delete p;
}
float* nativeAlloc(float value){
return new float(value);
}
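The same idea adapted to the question's cudaMalloc/cudaFree would look roughly like this (CudaFreeDeleter, device_buffer and deviceAlloc are made-up names, not part of the CUDA API):
#include <cuda_runtime.h>
#include <cstddef>
#include <iostream>
#include <memory>

struct CudaFreeDeleter {
    void operator()(float* p) const { cudaFree(p); }
};
using device_buffer = std::unique_ptr<float[], CudaFreeDeleter>;

device_buffer deviceAlloc(std::size_t n) {
    float* p = nullptr;
    if (cudaMalloc(reinterpret_cast<void**>(&p), n * sizeof(float)) != cudaSuccess)
        return nullptr;               // a failed allocation leaves the buffer empty
    return device_buffer(p);
}

void compute(std::size_t vSize, std::size_t wSize) {
    auto d_norm = deviceAlloc(vSize);
    auto d_dut  = deviceAlloc(vSize);
    auto d_stdt = deviceAlloc(wSize);
    if (!d_norm || !d_dut || !d_stdt) {
        std::cout << "failed malloc";
        return;                       // whatever was allocated is freed automatically
    }
    // launch kernels with d_norm.get(), d_dut.get(), d_stdt.get(), ...
}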
Initially store nullptr in all pointers; free has no effect on a null pointer.
int* p1 = nullptr;
int* p2 = nullptr;
int* p3 = nullptr;
if (!(p1 = allocate()))
goto EXIT_BLOCK;
if (!(p2 = allocate()))
goto EXIT_BLOCK;
if (!(p3 = allocate()))
goto EXIT_BLOCK;
EXIT_BLOCK:
free(p3); free(p2); free(p1);
The question is tagged C++, so here is a C++ solution.
The general practice is to acquire resources in the constructor and to release them in the destructor. The idea is that under any circumstances the resource is guaranteed to be released by a call to the destructor. A neat side effect is that the destructor is called automatically at the end of the scope, so you don't need to do anything at all for the resource to be released when it's no longer used. See RAII.
The resource in question can be various kinds of memory, file handles, sockets, etc. CUDA device memory is no exception to this general rule.
I would also discourage you from writing your own resource-owning classes and would advise using a library instead. thrust::device_vector is probably the most widely used device memory container. The Thrust library is part of the CUDA toolkit.
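For instance, a rough sketch using the sizes from the question (the try/catch only mirrors the original error message; the vectors clean up after themselves either way):
#include <thrust/device_vector.h>
#include <cstddef>
#include <iostream>
#include <new>

void compute(std::size_t vSize, std::size_t wSize) {
    try {
        thrust::device_vector<float> d_norm(vSize);
        thrust::device_vector<float> d_dut(vSize);
        thrust::device_vector<float> d_stdt(wSize);
        // pass thrust::raw_pointer_cast(d_norm.data()) etc. to kernels
    } catch (const std::bad_alloc&) {
        std::cout << "failed malloc";
        return;   // everything constructed so far is freed automatically
    }
}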
Yes. If you use (my) CUDA Modern-C++ API wrapper library, you could just use unique pointers, which will release when their lifetime ends. Your code will become merely the following:
auto current_device = cuda::device::current::get();
auto d_dut = cuda::memory::device::make_unique<float[]>(current_device, vSize);
auto d_stdt = cuda::memory::device::make_unique<float[]>(current_device, vSize);
auto d_gamma = cuda::memory::device::make_unique<float[]>(current_device, vSize);
auto d_zeta = cuda::memory::device::make_unique<float[]>(current_device, vSize);
Note, though, that you could also just allocate once and place the other pointers at appropriate offsets, as sketched below.
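A rough sketch of that single-allocation idea, using the sizes from the original snippet (plain CUDA, no wrapper library):
#include <cuda_runtime.h>
#include <cstddef>
#include <iostream>

void compute(std::size_t vSize, std::size_t wSize) {
    float* block = nullptr;
    std::size_t total = 3 * vSize + 2 * wSize;   // d_dut, d_gamma, d_norm + d_stdt, d_zeta
    if (cudaMalloc(&block, total * sizeof(float)) != cudaSuccess) {
        std::cout << "failed malloc";
        return;
    }
    float* d_dut   = block;
    float* d_gamma = block + vSize;
    float* d_norm  = block + 2 * vSize;
    float* d_stdt  = block + 3 * vSize;
    float* d_zeta  = block + 3 * vSize + wSize;
    // ... use the five pointers ...
    cudaFree(block);   // one free releases all of them
}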
Deleting the same pointer twice can have harmful effects, such as crashing the program, and the programmer should try to avoid it since it's not allowed.
But if somebody does it anyway, how do we take care of it?
delete in C++ is a noexcept operator, so it will not throw any exceptions, and its return type is void. So how do we catch this kind of error?
Below is the code snippet
#include <iostream>
#include <stdexcept>
#include <string>
using namespace std;

class myException: public std::runtime_error
{
public:
    myException(std::string const& msg):
        std::runtime_error(msg)
    {
        cout << "inside class \n";
    }
};

int main()
{
    int* set = new int[100];
    cout << "memory allocated \n";
    //use set[]
    delete [] set;
    cout << "After delete first \n";
    try {
        delete [] set;   // undefined behaviour: set has already been deleted
        throw myException("Error while deleting data \n");
    }
    catch(std::exception &e)
    {
        cout << "exception \n";
    }
    catch(...)
    {
        cout << "generic catch \n";
    }
    cout << "After delete second \n";
}
In this case I tried to catch the exception, but with no success.
Please provide your input on how to take care of this type of scenario.
Thanks in advance!
Given that the behaviour on a subsequent delete[] is undefined, there's nothing you can do, aside from writing
set = nullptr;
immediately after the first delete[]. This exploits the fact that a deletion of a nullptr is a no-op.
But really, that just encourages programmers to be sloppy.
Segmentation faults, bad memory accesses, and bus errors cannot be caught as exceptions. Programmers need to manage their own memory correctly, since you do not have garbage collection in C/C++.
But you are using C++, no? Why not make use of RAII?
Here is what you should strive to do:
Memory ownership - Explicitly via making use of std::unique_ptr or std::shared_ptr and family.
No explicit raw calls to new or delete. Make use of make_unique or make_shared or allocate_shared.
Make use of containers like std::vector or std::array instead of creating dynamic arrays or allocating arrays on the stack, respectively.
Run your code via valgrind (Memcheck) to make sure there are no memory related issues in your code.
If you are using a shared_ptr, you can use a weak_ptr to get access to the underlying object without incrementing the reference count. In that case, if the underlying object has already been deleted, constructing a shared_ptr from the weak_ptr throws a bad_weak_ptr exception. This is the only scenario I know of where an exception is thrown for you to catch when accessing a deleted object.
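A small self-contained sketch of that behaviour:
#include <iostream>
#include <memory>

int main() {
    std::weak_ptr<int> observer;
    {
        auto owner = std::make_shared<int>(42);
        observer = owner;                 // watches the object, no ref-count bump
    }                                     // owner destroyed here, object released
    try {
        std::shared_ptr<int> revived(observer);   // expired -> throws
    } catch (const std::bad_weak_ptr& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}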
Code should undergo multiple levels of testing iterations, maybe with different sets of tools, before being committed.
There is a very important concept in C++ called RAII (Resource Acquisition Is Initialisation).
This concept encapsulates the idea that no object may exist unless it is fully serviceable and internally consistent, and that deleting the object will release any resources it was holding.
For this reason, when allocating memory we use smart pointers:
#include <memory>
#include <iostream>
#include <algorithm>
#include <iterator>
int main()
{
using namespace std;
// allocate an array into a smart pointer
auto set = std::make_unique<int[]>(100);
cout <<"memory allocated \n";
//use set[]
for (int i = 0 ; i < 100 ; ++i) {
set[i] = i * 2;
}
std::copy(set.get(), set.get() + 100, std::ostream_iterator<int>(cout, ", "));
cout << std::endl;
// delete the set
set.reset();
cout <<"After delete first \n";
// delete the set again
set.reset();
cout <<"After delete second \n";
// set also deleted here through RAII
}
I'm adding another answer here because previous answers focus very strongly on manually managing that memory, while the correct answer is to avoid having to deal with that in the first place.
#include <iostream>
#include <vector>

int main() {
    std::vector<int> set (100);
    std::cout << "memory allocated\n";
    //use set
}
This is it. This is enough. This gives you 100 integers to use as you like. They will be freed automatically when control flow leaves the function, whether through an exception, or a return, or by falling off the end of the function. There is no double delete; there isn't even a single delete, which is as it should be.
Also, I'm horrified to see suggestions in other answers for using signals to hide the effects of what is a broken program. If someone is enough of a beginner to not understand this rather basic stuff, PLEASE don't send them down that path.
1) Do we need pointer validation for the following code in C++/CLI? Is it good to have?
NameClass^ NameString = gcnew NameClass();
if (NameString)
{
    // some process
}
2) If we allocate memory in one function and pass it as a pointer to another, do we need validation?
void foo_2(NameClass *pNameString);

void foo()
{
    try {
        NameClass *pNameString = new NameClass();
        foo_2(pNameString);
    } catch(std::bad_alloc &error)
    {
        std::cout << error.what() << std::endl;
    }
}

void foo_2(NameClass *pNameString)
{
    if (pNameString) // do we need to validate here ?
    {
        // some stuff
    }
}
3) Do we need to validate a locally created object that is passed by address?
NameClass objNameClass;
foo(&objNameClass);

void foo(NameClass *objNameClass)
{
    if (objNameClass) // do we need to validate here ?
    {
        // some stuff
    }
}
It's just as unnecessary after a gcnew as it is after a new. It's only necessary if you use C allocators like malloc for some reason. The C++ and C++/CLI constructs use exceptions for unsuccessful object creations, unlike the C functions, which return a null pointer.
In plain C++, new will throw std::bad_alloc if memory cannot be allocated. In C++/CLI, gcnew will throw System::OutOfMemoryException in that case.
In most cases, you probably should let the exception propagate and kill your program, because it's probably doomed anyway.
In your second example, you may want to validate the pointer in foo_2 only if you expect someone to call foo_2 with a null pointer, and when that is a valid usage. If it's invalid usage to pass it a null pointer as an argument, then you have a bug and should probably let your application crash (instead of letting it corrupt your data for instance). If foo_2 is only visible to the code which calls it immediately after allocation, it's unnecessary as it won't ever be null.
Same for the third example. The contract/documentation of your function should specify whether it is valid to call it with a null pointer, and how it behaves.
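A small hedged sketch of what such a contract can look like in code (NameClass here is just a stand-in for the questioner's type):
#include <iostream>

class NameClass { };

// Never "null" by construction, so no check is needed inside.
void process(NameClass& obj) { (void)obj; /* some stuff */ }

// Documented contract: a null pointer means "nothing to process".
void maybeProcess(NameClass* obj) {
    if (!obj) return;
    /* some stuff */
}

int main() {
    NameClass objNameClass;
    process(objNameClass);
    maybeProcess(&objNameClass);
    maybeProcess(nullptr);   // explicitly allowed by the contract
}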
And please, don't ever write that:
catch(std::bad_alloc &error)
{
std::cout << error.what() << std::endl;
}
If you have a low memory condition on a regular object allocation, just let your program die. Trying to cure it that way won't get you very far.
The only place when such code would be acceptable IMHO is when you're trying to allocate a potentially big array dynamically, and you know you can just cancel that operation if you're unable to do so. Don't attempt to catch allocation errors for every allocation, that will bloat your code and lead you nowhere.
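That one acceptable case might look roughly like this (tryMakeHugeBuffer is an illustrative name; the caller is expected to cope with an empty result):
#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

// Returns an empty vector if the (possibly huge) allocation fails,
// so the caller can degrade gracefully instead of aborting.
std::vector<double> tryMakeHugeBuffer(std::size_t n)
{
    try {
        return std::vector<double>(n);
    } catch (const std::bad_alloc&) {
        std::cout << "could not allocate " << n << " doubles, continuing without the cache\n";
        return {};
    }
}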