Another std::bad_alloc at memory location - c++

I searched for this kind of error and found a lot of threads, but unfortunately none of them really helped me.
I have image files which I save in an array, a vector, or whatever.
After about 1.8 GB (~1439 images) the error std::bad_alloc at memory location occurs. So I tried to declare the array in different ways, but the same error occurs every time.
Image* img;
Image img[180000];
Image* img = new Image[180000];
vector<Image> img;
(180k frames would be 1 minute of recording.) It's not really important to record a full minute, but it would be nice to save more than ~1439 frames, or at least to understand why this error occurs, or rather why it occurs at 1.8 GB.
Maybe someone could help or explain that to me?
PS: I use a 32-bit system.
The problem is that saving the images to a folder or similar takes too long. Maybe I have to find a compression that lets me store just the necessary information of each image in the array, and then restore the frames when I am done.
I heard that you can reduce an image to an x and a y "line" which hold all this information, but how that works is another issue.
The answers from Mark Ingram were exactly what I needed to understand the problem. Thanks for that.
edit: Oh, I see I didn't explain my problem well enough. I don't have the images on disk and load them into my program; I have a camera which records frames at a frequency of 50 Hz, so while recording I have no time to save the frames.

You've run out of memory. On a 32-bit system (on Windows at least) a process can only allocate up to a maximum of ~2 GB of memory. You need to dynamically load your data only when it is needed, and when you no longer need the image data, throw it away again.
In reality the limit will be lower than 2 GB, because memory is allocated in blocks (i.e. it isn't allocated contiguously). This means you will experience heap fragmentation if you mix small and large object allocations, and that will drastically reduce the amount of memory you can actually allocate.
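As a rough illustration of that advice, here is a minimal sketch, assuming the frames can be treated as raw byte blocks: stream each frame into one binary file while recording, then seek back and load a single frame at a time when it is actually needed. FRAME_BYTES and the function names are assumptions made for the sketch, not anything from the question.

#include <fstream>
#include <vector>

const std::size_t FRAME_BYTES = 640 * 480 * 3;   // assumed frame size

// While recording: append each frame's raw bytes to one big file.
void recordFrame(std::ofstream& out, const unsigned char* frameData)
{
    out.write(reinterpret_cast<const char*>(frameData), FRAME_BYTES);
}

// Later: load exactly one frame back into RAM, process it, throw it away.
void loadFrame(std::ifstream& in, int index, std::vector<unsigned char>& frame)
{
    frame.resize(FRAME_BYTES);
    in.seekg(static_cast<std::streamoff>(index) * FRAME_BYTES);
    in.read(reinterpret_cast<char*>(frame.data()), FRAME_BYTES);
}

Whether this keeps up at 50 Hz depends on the frame size and the disk, but sequential raw writes are usually the cheapest option to try first.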

Store the images in a folder and load one at a time.
Dynamic memory allocation is your friend.
There is nothing I can think of that you would accomplish by loading 180,000 images together. You are never going to process them all at once, even on a supercomputer.

Related

Is there any way I can use an array to store all video frames (no matter how long the video is) in OpenCV C++?

My computer completely crashes if I try to store all video frames in a vector. I know it is a memory problem, according to some other posts. Is there any other way I can store all video frames in just one container? I am trying this:
storage = cvCreateMemStorage(0);
CvSeq* seq = cvCreateSeq(0, sizeof(CvSeq), sizeof(Mat), storage);
but I could not get the elements back out of seq. Has anyone tried this before?
Update:
The crash is due to memory being full. It is important to allocate an appropriately sized buffer before storing video frames into the array, and to replace the oldest matrix with the new one when the buffer is full.
To store all of these frames you need enough memory available in RAM, which is not possible here. You will need to think of something else; what you're asking for isn't possible.
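As a rough sketch of the "replace the oldest matrix when the buffer is full" idea from the update, here is a fixed-size ring buffer of cv::Mat; the class name and the capacity handling are assumptions for illustration, not anything from the question.

#include <opencv2/opencv.hpp>
#include <vector>

// Keeps only the most recent `capacity` frames; the oldest slot is overwritten.
class FrameRing
{
public:
    explicit FrameRing(std::size_t capacity) : frames_(capacity), next_(0), count_(0) {}

    void push(const cv::Mat& frame)
    {
        frame.copyTo(frames_[next_]);           // deep copy into the reused slot
        next_ = (next_ + 1) % frames_.size();
        if (count_ < frames_.size()) ++count_;
    }

    std::size_t size() const { return count_; }

private:
    std::vector<cv::Mat> frames_;
    std::size_t next_, count_;
};

This bounds memory use to `capacity` frames, which is the only way to keep "all recent frames" in RAM when the full video does not fit.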

How to create vector of matrices to store large number of images?

I want to create a vector of matrices to store as many images as possible.
I know that it is possible as written below:
vector<Mat> images1;
and during image acquisition from the camera I would save the images at 100 fps with a resolution of 1600*800 as below:
images1.push_back(InputImage.clone());
Where InputImage is the Mat given by the camera. Creating the video during the acquisition process either leads to missing frames in the video or to a reduction in acquisition speed.
Later, after stopping the image acquisition and before stopping the program, I would write the images into a video as written below:
VideoWriter writer;
writer = VideoWriter("video.avi", -1, 100, Size(1600, 800), false);
for (vector<Mat>::iterator iter = images1.begin(); iter != images1.end(); iter++)
    writer.write(*iter);
Is this correct? I am not sure that images1 can store around 1500 images without overflowing.
You don't really have to worry about "overflow", whatever that means in your context.
The bigger problem is memory. A single frame takes (at 8 bits per color, with 3 colors) 3 * 1600 * 800 == 3.84 MB. At 100 fps, one second of footage requires 0.384 GB of memory. 8 GB of memory can only hold about 20 seconds of footage. You'll need almost 24 GB of memory before you can hold a whole minute. There's a reason that the vast, vast majority of video encoding software only keeps a few frames of video data in memory at any given time, and dumps the rest to the hard drive (or discards it, depending on what purpose the software is serving).
What you should probably be doing (which is what programs like FRAPS do) is dumping frames to the hard drive as soon as you receive them. Then, when recording finishes, you can either call it a day (if raw video footage is what you need) or begin reading the file back and encoding it into a more compressed format.
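A minimal sketch of that "dump first, encode later" flow, assuming 8-bit grayscale 1600x800 frames (matching the false isColor flag in the question's code); the file names and frame count handling are assumptions for illustration.

#include <opencv2/opencv.hpp>
#include <fstream>

const int W = 1600, H = 800;
const std::size_t FRAME_BYTES = static_cast<std::size_t>(W) * H;   // 8-bit grayscale

// During acquisition: append the raw pixels of each frame to one file.
void dumpFrame(std::ofstream& raw, const cv::Mat& frame)
{
    // assumes frame is a continuous CV_8UC1 Mat of size 1600x800
    raw.write(reinterpret_cast<const char*>(frame.data), FRAME_BYTES);
}

// After acquisition: read the raw file back frame by frame and encode it.
void encodeRawFile(const char* rawPath, int frameCount)
{
    std::ifstream raw(rawPath, std::ios::binary);
    cv::VideoWriter writer("video.avi", -1, 100, cv::Size(W, H), false);
    cv::Mat frame(H, W, CV_8UC1);
    for (int i = 0; i < frameCount; ++i)
    {
        raw.read(reinterpret_cast<char*>(frame.data), FRAME_BYTES);
        writer.write(frame);
    }
}

At 100 fps this writes roughly 128 MB/s of raw data, so whether it keeps up depends on the disk.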
Pre-allocate your image vector in memory so that you just need to copy the frames without real-time allocation.
If you have memory problems, try dumping the frames to a file; the OS will hopefully be able to handle the I/O. If not, try memory mapped files.
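To make the pre-allocation suggestion concrete, here is a rough sketch that creates all the Mats up front so the acquisition loop only copies pixels; maxFrames and the frame geometry are assumptions for the sketch.

#include <opencv2/opencv.hpp>
#include <vector>

// Allocate every frame buffer before acquisition starts.
void preallocate(std::vector<cv::Mat>& images1, std::size_t maxFrames)
{
    images1.resize(maxFrames);
    for (std::size_t i = 0; i < maxFrames; ++i)
        images1[i] = cv::Mat(800, 1600, CV_8UC1);   // allocated once, up front
}

// Inside the acquisition loop: no allocation, just a copy into slot i.
void store(std::vector<cv::Mat>& images1, std::size_t i, const cv::Mat& InputImage)
{
    InputImage.copyTo(images1[i]);   // reuses the buffer created in preallocate()
}

copyTo() only reallocates if the destination size or type differs, so with matching frames the loop does no heap allocation at all.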

Is it less computer intensive to load and unload an image every time a new image is needed or to have all of the images loaded at once? (SDL)

I've created a small animation for a game that uses a set of images as frames, and the current image to render changes over a certain amount of time to create the animation illusion. I've done this in two different ways, and I'm wondering which one is more efficient to use.
Method 1:
A single image is loaded and rendered. When a different image needs to be rendered, a function is called that unloads the current image, and loads and renders the new one.
Method 2:
All of the images needed for the animation are loaded once, and then rendered as needed.
In simpler terms: Method 1 unloads the current image and loads the new one every time a different image is needed, and Method 2 keeps all the images needed loaded at once.
So basically, the question is: is it better to constantly load and unload images to keep as little loaded as possible, or to have many images loaded at all times and not load/unload anything while the program runs? Does the computer have a harder time loading and unloading images, or keeping many images loaded at once?
I looked at the task manager while running each method. CPU usage of Method 1 (loading and unloading) fluctuated between 29% and 30%, while that of Method 2 fluctuated between 28% and 29%.
It appears that keeping all the images loaded is better according to these statistics, but the reason I don't really trust them is that the program only loads seven images.
As the game gets bigger, there could be hundreds of images loaded at once (Method 2) or an image loaded and unloaded nearly every frame (Method 1). Which method is less intensive? Thank you for your time.
Firstly, you don't define 'intensive'. Having a gaming machine running at 100% is good, or at least not necessarily bad. It depends on what else is going on on your system.
Second, performance analysis is done by measuring, not by thinking. You have measured and found an answer. Believe the answer.
In general it is faster to keep things in memory than to load them off disk over and over again (as your measurement shows). However, you will end up using a lot of memory (no free lunches here). You don't say how much memory you have or how big the images are. Assuming each image is 10 MB, then 100 of them take 1 GB. On a modern desktop that's not a lot; on an embedded system running on an Arduino it's a disaster.
Why not try it with 100 images and see what happens?
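For what it's worth, here is a minimal sketch of Method 2 in SDL2: load every frame once up front, then render by index with no per-frame disk I/O. The file-name pattern and frame count are assumptions for the sketch.

#include <SDL.h>
#include <string>
#include <vector>

// Load all animation frames once, at startup.
std::vector<SDL_Texture*> loadFrames(SDL_Renderer* renderer, int count)
{
    std::vector<SDL_Texture*> frames;
    for (int i = 0; i < count; ++i)
    {
        std::string path = "frame" + std::to_string(i) + ".bmp";   // assumed naming
        SDL_Surface* surface = SDL_LoadBMP(path.c_str());
        frames.push_back(SDL_CreateTextureFromSurface(renderer, surface));
        SDL_FreeSurface(surface);   // the texture keeps its own copy of the pixels
    }
    return frames;
}

// Each frame of the game loop: just draw the current texture.
void renderFrame(SDL_Renderer* renderer, const std::vector<SDL_Texture*>& frames, int current)
{
    SDL_RenderCopy(renderer, frames[current], NULL, NULL);
}

Error checks (null surfaces/textures) and the matching SDL_DestroyTexture calls at shutdown are omitted for brevity.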
Measurement only confirms that doing less work results in faster execution, and that doing something once is faster (in the long term) than repeating the same thing multiple times (if you restart your animation from the first frame, you'll need to reload the same images again). The bigger problem here is latency: what if your image is considerably big and takes noticeable time to load, will you just stop rendering until it loads? What if the hard disk is under heavy load (especially an HDD; SSDs are not as vulnerable to random access) and suddenly the image takes 100 times longer to load? (Yes, this can still happen with the CPU, but it is not as likely or as extreme.)
If you can predict which part of the data will be needed soon, then you only have to load that part before a given deadline. This is generally referred to as 'streaming' and is especially helpful for very big but predictable data, like audio or video files (since you always know the playback speed and direction). With animations it may be more difficult: sometimes you know which animation will be played (e.g. a cutscene or a trigger), sometimes they are too general and any of them could be played. In the latter case you'll probably need to keep the animation data ready to prevent unwanted pauses.
If the data is too big you may try some compression (e.g. keeping the data in memory in compressed form with a fast compression algorithm and decompressing it when needed, which is much faster than reading from a slow disk; for special cases like images there may be even better solutions). A sketch of that idea follows below.
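As a rough sketch of keeping frames compressed in memory, here is what that could look like with zlib as one example of a fast general-purpose compressor; none of this comes from the question, it is only an illustration.

#include <zlib.h>
#include <vector>

// Compress a raw frame once, right after loading it.
std::vector<unsigned char> compressFrame(const std::vector<unsigned char>& raw)
{
    uLongf packedSize = compressBound(raw.size());
    std::vector<unsigned char> packed(packedSize);
    compress2(packed.data(), &packedSize, raw.data(), raw.size(), Z_BEST_SPEED);
    packed.resize(packedSize);   // shrink to the actual compressed size
    return packed;
}

// Decompress on demand, just before the frame is needed.
std::vector<unsigned char> decompressFrame(const std::vector<unsigned char>& packed, std::size_t rawSize)
{
    std::vector<unsigned char> raw(rawSize);
    uLongf outSize = rawSize;
    uncompress(raw.data(), &outSize, packed.data(), packed.size());
    return raw;
}

For images specifically, a format-aware codec (PNG, or a GPU texture compression format) will usually beat a generic byte compressor.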

OpenCV cv::Mat causing potential memory leak with std::vector

As it stands right now I'm trying to save an entire list of images, in the form of cv::Mats, inside a vector for later processing. Right now I have something that looks like this:
do
{
    image = readimage();
    cv::Mat mat = cv::Mat(length, width, CV_8UC4, image);   // wraps the camera buffer
    cv::Mat temp = mat.clone();                             // deep copy of the pixels
    saved_images.push_back(temp);
    mat.release();
    temp.release();
    freeimagememory(image);
}
while (hasimage);
This actually works. For exceptionally small lists of images it will store them just fine. However, as I get to large numbers of images the program consistently crashes saying Abort() was called, and upon inspection it is throwing a cv::Exception.
Does anyone know why this is? I've thought about changing the vector to a vector of pointers to cv::Mat in order to save space (cloning seems expensive), but I'm not sure how well that will work.
Can anyone help?
EDIT1: The exact error being thrown is failed to allocate X bytes. I assume this is because it's eating up all of the available memory somehow (even though I have 8 GB of memory and definitely have memory free).
EDIT2:
The code below also seems to work.
std::vector<cv::Mat*> ptrvec;
do
{
    image = readimage();
    // note: this constructor only wraps the existing buffer, it does not copy the pixels
    ptrvec.push_back(new cv::Mat(length, width, CV_8UC4, image));
    freeimagememory(image);
}
while (hasimage);
This one doesn't have a problem with memory (I can push as many images as I want into it), but I get an access violation when I try to do
cv::imshow("Test Window", *ptrvec[0]);
EDIT3:
Is there a chance I'm hitting the upper limit of 32-bit? I'm more than capable of recompiling this as a 64-bit project.
You may be running out of memory when you store 3000 color images of 800 x 600 in a vector. Storing Mat pointers in the vector will not solve your problem, since the pixel data is still allocated in RAM.
Check whether there is enough memory in your system to store all the images. If not, you can process the images in batches: for example, process the first 500 images, then the next 500 images, and so on, as sketched below.
In your program you allocate the vector on the stack. Allocating on the heap is recommended when you need a large block of memory (your case), so you can try to allocate the vector on the heap instead (provided that you have enough memory to store it). See stack vs heap, or this C++ tutorial, for more information.
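Here is a rough sketch of that batching idea; getFrame and processBatch are hypothetical stand-ins for the question's camera read and whatever processing happens later, not real APIs.

#include <opencv2/opencv.hpp>
#include <functional>
#include <vector>

// Keep at most batchSize cloned Mats alive at any one time.
void processInBatches(std::function<bool(cv::Mat&)> getFrame,
                      std::function<void(const std::vector<cv::Mat>&)> processBatch,
                      std::size_t batchSize)
{
    std::vector<cv::Mat> batch;
    batch.reserve(batchSize);
    cv::Mat frame;
    while (getFrame(frame))              // false once the camera has no more images
    {
        batch.push_back(frame.clone());  // deep copy, as in the question's loop
        if (batch.size() == batchSize)
        {
            processBatch(batch);
            batch.clear();               // releases this batch's pixel buffers
        }
    }
    if (!batch.empty())
        processBatch(batch);             // leftover frames
}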

What is the fastest algorithm or method for displaying line images from a line scan camera?

We have a line scan camera which produces 300 line images per second. We want to display the lines in an image view in a FIFO fashion, so that the last line of the view shows the most recent line image while the previous lines are shifted up on each update.
If I could access video memory in C like in the old days, I would just do
// shift everything up by one line (note the overlapping source and destination)
memcpy(videoMem, videoMem + lineWidth * pixelSize, pixelSize * lineWidth * (nLines - 1));
// copy the new line into the last row
memcpy(videoMem + pixelSize * lineWidth * (nLines - 1), newLine, lineWidth * pixelSize);
But I don't know if this is the best I can do, even with direct video memory access.
Now I understand it's neither possible nor desirable to access video memory directly. In that case, what is the best method? Any opinion from an expert would be appreciated.
It is a desktop PC application on Windows 7.
Update
As I expected, it seems that I have to deal with some kind of circular buffer. The tricky part in my case is that writing into the buffer is line-by-line while reading is screen-by-screen, so when the write pointer reaches the physical end of the buffer, an additional memory copy is needed to hand a contiguous screen to the video. I guess a bip buffer would be a solution for this. Any other ideas?
You cannot memcpy memory regions that overlap; that is the purpose of memmove. Nevertheless, you can get away with memcpy as long as the copy happens in the right order, so try it on your platform and see if it works.
The main implementation issue is whether having two separate writes causes flicker. If it does, you have to write the new image to a buffer first and then write the entire buffer to video memory all at once.
Generally speaking you don't read video memory. The data to be displayed should be in its own region of memory. Summing up, you have 3 areas of memory:
data to be displayed
display buffer
video memory (or its equivalent)
The standard process is to write 1->2, then 2->3 in one step. If you have no flicker, however, you can go directly 1->3 with no intermediate buffer. Other than this, there is no magic algorithm beyond what you have already written; a sketch of the buffered version follows.
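As a rough sketch of that 1->2->3 scheme, here is the scrolling done in an off-screen line buffer (area 2) using memmove, which is safe for the overlapping copy; presentBuffer() is a hypothetical stand-in for whatever blits the buffer to the screen (area 3).

#include <cstring>
#include <vector>

struct LineView
{
    std::size_t lineBytes;                 // lineWidth * pixelSize
    std::size_t nLines;
    std::vector<unsigned char> buffer;     // area 2: the display buffer

    LineView(std::size_t lineBytes, std::size_t nLines)
        : lineBytes(lineBytes), nLines(nLines), buffer(lineBytes * nLines) {}

    void pushLine(const unsigned char* newLine)
    {
        // shift lines 1..n-1 up by one line; memmove handles the overlap safely
        std::memmove(buffer.data(), buffer.data() + lineBytes, lineBytes * (nLines - 1));
        // the newest line goes at the bottom
        std::memcpy(buffer.data() + lineBytes * (nLines - 1), newLine, lineBytes);
        // presentBuffer(buffer.data());   // area 2 -> area 3, in one step
    }
};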