loadRawData memory issue in OGRE while loading OpenCV frames - C++

I am capturing images in real time using OpenCV, and I want to show these images in the OGRE window as a background. So, for each frame the background will change.
I am trying to use MemoryDataStream along with loadRawData to load the images into an OGRE window, but I am getting the following error:
OGRE EXCEPTION(2:InvalidParametersException): Stream size does not
match calculated image size in Image::loadRawData at
../../../../../OgreMain/src/OgreImage.cpp (line 283)
The image comes from OpenCV with a size of 640x480, and frame->buffer is of type Mat (OpenCV 2.3). The pixel format I use in OpenCV is CV_8UC3, i.e. each channel is 8 bits and each pixel has 3 channels (B8G8R8).
Ogre::MemoryDataStream* videoStream = new Ogre::MemoryDataStream((void*)frame->buffer.data, 640*480*3, true);
Ogre::DataStreamPtr ptr(videoStream,Ogre::SPFM_DELETE);
ptr->seek(0);
Ogre::Image* image = new Ogre::Image();
image->loadRawData(ptr,640, 480,Ogre::PF_B8G8R8 );
texture->unload();
texture->loadImage(*image);
Why am I always getting this memory error?

Quick idea: maybe a 4-byte memory alignment issue?
See Link 1 and
Link 2
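Not Ogre-specific, but one way to rule that idea in or out on the OpenCV side: loadRawData expects exactly width*height*3 bytes, so row padding or a non-continuous ROI in the Mat would make the stream size disagree with the calculated size. A minimal check, assuming frame->buffer is a CV_8UC3 cv::Mat:
#include <opencv2/core/core.hpp>

cv::Mat packedCopy(const cv::Mat& src)
{
    CV_Assert(src.type() == CV_8UC3);

    if (src.isContinuous())
        return src;          // rows are tightly packed: 640*480*3 bytes, as Ogre expects

    return src.clone();      // clone() re-allocates a continuous, padding-free buffer
}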

I'm not an Ogre expert, but does it work if you use loadDynamicImage instead?
EDIT: Just for grins, try using the Mat fields to set up the buffer:
Ogre::Image* image = new Ogre::Image();
image->loadDynamicImage((uchar*)frame->buffer.data, frame->buffer.cols, frame->buffer.rows, frame->buffer.channels(), Ogre::PF_B8G8R8);
This will avoid copying the image data, and should let the Mat delete its contents later.
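For the per-frame case in the question, a minimal sketch of how this could be wired up. Assumptions: Ogre 1.x, where the fourth loadDynamicImage argument is the image depth (1 for a 2D frame) rather than the channel count, and a texture that was already created at the frame's size with a matching format; this is a sketch, not tested against the asker's setup:
#include <OgreImage.h>
#include <OgreTexture.h>
#include <opencv2/core/core.hpp>

void updateBackground(cv::Mat& frame, Ogre::TexturePtr& texture)   // frame: CV_8UC3, BGR, 640x480
{
    Ogre::Image img;
    img.loadDynamicImage(frame.data, frame.cols, frame.rows,
                         1,                     // depth of a 2D image, not the channel count
                         Ogre::PF_B8G8R8);      // no copy, no autoDelete: the Mat keeps ownership

    texture->unload();
    texture->loadImage(img);                    // re-uploads this frame's pixels
}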

I had similar problems getting image data into OGRE; in my case the data came from ROS (see ros.org). The thing is that your data in frame->buffer is not raw, but has a file header, etc.
I think my solution was to search the data stream for the beginning of the image (by finding the appropriate indicator in the data block, e.g. 0x4D 0x00) and to insert the data from that point on.
You would have to find out where in your buffer the header ends and where your data begins.
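If that turns out to be the case, a hedged sketch of skipping such a header before wrapping the pixels for Ogre; headerBytes is a placeholder you would have to determine for your own data format (e.g. by locating the indicator bytes mentioned above):
#include <OgreDataStream.h>

Ogre::DataStreamPtr wrapPixels(unsigned char* buffer, size_t totalBytes, size_t headerBytes)
{
    unsigned char* pixels = buffer + headerBytes;               // first byte of actual image data
    size_t pixelBytes = totalBytes - headerBytes;               // should equal 640*480*3 here

    // false: the stream must not free memory it does not own
    return Ogre::DataStreamPtr(new Ogre::MemoryDataStream(pixels, pixelBytes, false));
}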

Related

Is there a way to access image buffer through OpenCV VideoCapture?

For certain reasons I need to access the image buffer of a camera directly through OpenCV's VideoCapture, but I cannot find a way. To make it clearer, I want to access the data from cv::VideoCapture::grab() before it is retrieved into a cv::Mat.
I checked the OpenCV source code here,
and it seems OpenCV decodes it automatically before outputting the frame. Intuitively I am thinking about "encoding" the frame to obtain the original data; however, cv::imencode requires a specific file extension.
Is there a way to access the camera buffer data without tweaking the source code?
Best,
Eric
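To illustrate the "encoding" idea above: cv::imencode only takes the extension string as a format selector and writes into a memory buffer, so no file is involved. Note that this re-encodes the already-decoded frame; it does not recover the camera's original bytes. A minimal sketch:
#include <opencv2/highgui/highgui.hpp>
#include <vector>

std::vector<uchar> encodeFrame(const cv::Mat& frame)
{
    std::vector<uchar> buf;
    cv::imencode(".jpg", frame, buf);   // ".jpg" selects the codec; the result stays in memory
    return buf;
}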

Capture image frames from Kinect and save to Hard drive

My aim is to capture all the frames (RGB) from Kinect at 30 fps and save them to my hard drive. For doing this I took the following approach.
Get the frames from the Kinect and store them in an array buffer. Since writing to disk (using imwrite()) takes a bit of time and I may miss some frames while doing so, instead of directly saving them to the disk I store them in an array. Another parallel thread then accesses this array and writes the individual frames to the disk as images.
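A hedged sketch of just the writer side, i.e. the loop that parallel thread could run, draining the shared array behind the capture index; thread creation and synchronisation are left out, and the names rgb, rgb_read and NUM_FRAMES follow the question's own code:
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

extern cv::Mat rgb[];    // shared frame buffer, filled by the capture thread
extern int rgb_read;     // index of the next slot the capture thread will write

void writeFramesToDisk()
{
    char name[64];
    for (int written = 0; written < 3000 /* NUM_FRAMES */; )
    {
        if (written < rgb_read && !rgb[written].empty())
        {
            std::sprintf(name, "frame_%04d.png", written);
            cv::imwrite(name, rgb[written]);   // the slow disk write happens off the capture path
            ++written;
        }
    }
}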
Now I have used a static array of size 3000 and type Mat. This will suffice, since I need to store frames for 1.5-minute videos (1.5 minutes at 30 fps = 2700 frames). I have declared the array as follows:
#define NUM_FRAMES 3000
Mat rgb[NUM_FRAMES];
I have already tested this limit by reading images and saving them to the array using the following code:
for(int i=0; i<NUM_FRAMES; i++)
{
Mat img = imread("image.jpg", CV_LOAD_IMAGE_COLOR);
rgb[i] = img;
imshow("Image", img);
cvWaitKey(10);
}
The above code executed flawlessly.
But one problem is that the code I am using for capturing images from the Kinect captures the image as an IplImage. Thus I need to convert the image to cv::Mat format before using it. I convert it using the following command:
IplImage* color = cvCreateImageHeader(cvSize(COLOR_WIDTH, COLOR_HEIGHT), IPL_DEPTH_8U, 4);
cvSetData(color, colorBuffer, colorLockedRect.Pitch); // colorBuffer and colorLockedRect.Pitch is something that Kinect uses. Not related to OpenCv
rgb[rgb_read++] = Mat(color, FLAG);
Now here lies my problem. Whenever I set #define FLAG true, it causes memory leaks and gives me an OpenCV Error: Insufficient memory (failed to allocate 1228804 bytes).
If I use #define FLAG false it works correctly, but the frames I get are erroneous, as shown below. They are three consecutive frames.
I was moving my arm around, and the image got cut in between, as can be seen above.
Can someone please point out the reason for this weird behavior, or suggest any other way of obtaining the desired result? I have been struggling with this for a few days now. Please ask if any further clarification is required.
I am using OpenCV 2.4.8, the Kinect SDK for Windows version 1.8.0, and Microsoft Visual Studio 2010.
Also, can someone please explain to me the role of the copyData parameter in Mat::Mat? I have already gone through this link, but it's still not completely clear. Maybe that's why I could not solve the above error in the first place, since how that parameter works is not very clear to me.
Thanks in advance.
First, do not use IplImages; stick with cv::Mat, please.
The equivalent code for that would be:
Mat img_borrowed = Mat( height, width, CV_8UC4, colorBuffer, colorLockedRect.Pitch );
Note that this does not do any allocation on its own; it's still the Kinect's pixels, so you will have to clone() it:
rgb[rgb_read++] = img_borrowed.clone();
This is the same as setting the flag in your code above to 'true' (deep-copying the data).
[edit] Maybe it's a good idea to skip the useless 4th channel (it also needs less memory), so instead of the above you could do:
cvtColor( img_borrowed, rgb[rgb_read++], CV_BGRA2BGR); // will make a 'deep copy', too.
So, here's the bummer: if you don't save a deep copy in your array, you'll end up with garbled (and all the same!) images, probably even with undefined behaviour due to the locking/unlocking of the Kinect buffer. If you do copy it (and you must), you will need a lot of memory.
It's unlikely that you can keep 3000 * 1024 * 786 * 4 = 9658368000 bytes in memory, so you'll have to cut it down one way or another.
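To make the copyData question above concrete, a tiny standalone sketch (plain OpenCV, no Kinect) of the difference between a borrowed header and a deep copy:
#include <opencv2/core/core.hpp>
#include <cstdio>

int main()
{
    uchar buffer[2 * 2] = { 10, 20, 30, 40 };       // pretend this is the Kinect's locked buffer

    cv::Mat borrowed(2, 2, CV_8UC1, buffer);        // no copy: header only, pixels still owned by 'buffer'
    cv::Mat owned = borrowed.clone();               // deep copy, same effect as copyData == true

    buffer[0] = 99;                                 // the "driver" overwrites its buffer with the next frame

    std::printf("borrowed: %d, owned: %d\n",
                borrowed.at<uchar>(0, 0),           // 99 -> follows the external buffer
                owned.at<uchar>(0, 0));             // 10 -> unaffected, it has its own storage
    return 0;
}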

Getting all available frame size from capture device with OpenCV

I'm using OpenCV 2.4.6 with C++ (sometimes with Python too, but that is irrelevant). I would like to know if there is a simple way to get all the available frame sizes from a capture device.
For example, my webcam can provide 640x480, 320x240 and 160x120. Suppose that I don't know about these frame sizes a priori... Is it possible to get a vector or an iterator, or something like this that could give me these values?
In other words, I don't want to get the current frame size (which is easy to obtain) but the sizes I could set the device to.
Thanks!
When you retrieve a frame from a camera, it is the maximum size that the camera can give. If you want a smaller image, you have to specify it when you get the image, and OpenCV will resize it for you.
A normal camera has one sensor of one size, and it sends one kind of image to the computer. What OpenCV does with it thereafter is up to you to specify.
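If you want to check which sizes the driver itself will actually accept, rather than letting OpenCV scale the frames, one hedged workaround is to probe a list of candidate sizes and keep the ones the backend confirms; whether unsupported sizes are rejected, rounded to the nearest mode, or emulated depends on the capture backend/driver:
#include <opencv2/highgui/highgui.hpp>
#include <vector>
#include <utility>

std::vector<std::pair<int, int> > probeFrameSizes(cv::VideoCapture& cap)
{
    const int candidates[][2] = { {160, 120}, {320, 240}, {640, 480}, {1280, 720}, {1920, 1080} };
    std::vector<std::pair<int, int> > supported;

    for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); ++i)
    {
        cap.set(CV_CAP_PROP_FRAME_WIDTH,  candidates[i][0]);
        cap.set(CV_CAP_PROP_FRAME_HEIGHT, candidates[i][1]);

        int w = static_cast<int>(cap.get(CV_CAP_PROP_FRAME_WIDTH));
        int h = static_cast<int>(cap.get(CV_CAP_PROP_FRAME_HEIGHT));

        if (w == candidates[i][0] && h == candidates[i][1])
            supported.push_back(std::make_pair(w, h));   // the driver accepted this mode
    }
    return supported;
}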

cv::Mat detect PixelFormat

I'm trying to use pictureBox->Image (Windows Forms) to display a cv::Mat image (OpenCV). I want to do that without saving the image as a file (because I want to reset the image every 100 ms).
I just found that topic here: How to display a cv::Mat in a Windows Form application?
When I use this solution, the image appears to be white only. I guess I took the wrong PixelFormat.
So how do I figure out the PixelFormat I need? I haven't seen any method in cv::Mat to get info about that. Or does this depend on the image source I use to create the cv::Mat?
Thanks so far :)
Here I took a screenshot. It's not completely white, so I guess there is some color info. But maybe I'm wrong.
Screen
cv::Mat::depth() returns the pixel type, e.g. CV_8U for 8-bit unsigned, etc.,
and cv::Mat::channels() returns the number of channels, typically 3 for a colour image.
For 'regular' images OpenCV generally uses 8-bit BGR colour order, which should map directly onto a Windows bitmap or most Windows libraries.
So you probably want System.Drawing.Imaging.PixelFormat.Format24bppRgb. I don't know what order (RGB or BGR) it uses, but you can use OpenCV's cv::cvtColor() with CV_BGR2RGB to convert to RGB.
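A small sketch of those two checks plus the channel-order swap; whether the swap is actually needed for Format24bppRgb is left to testing, as noted above:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cstdio>

cv::Mat prepareForDisplay(const cv::Mat& img)
{
    std::printf("depth=%d channels=%d\n", img.depth(), img.channels());   // expect CV_8U (0) and 3

    cv::Mat out;
    if (img.depth() == CV_8U && img.channels() == 3)
        cv::cvtColor(img, out, CV_BGR2RGB);   // swap B and R in case the display expects RGB order
    else
        out = img.clone();                    // other depths/channel counts need their own handling
    return out;
}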
Thanks for the help! The problem was something else. I just did not allocate memory for the image object, so there was nothing to display.
Using cv::Mat img; in the header file and img = new cv::Mat(120,160,0); in the constructor (ocvcamimg class) got it to work. (ocvcamimg->captureCamMatImg() returns img.)

OpenCV - VideoWriter produces a video with a "repeated" image

I'm trying to process each frame in a pair of video files in OpenCV and then write the resulting frames to an output AVI file. Everything works, except that the output video file looks strange: instead of one solid image, the image is repeated three times and horizontally compressed so all three copies fit into the window. I suspect something is going wrong with the number of channels the writer is expecting, but I'm giving it 8-bit single-channel images to write. Below are the settings with which I'm initializing my VideoWriter:
//Initialize the video writer
CvVideoWriter *writer = cvCreateVideoWriter("out.avi",CV_FOURCC('D','I','V','X'), 30, frame_sizeL, 0);
Has anyone encountered this strange output from the OpenCV VideoWriter before? I've been checking the resulting frames with cvSaveImage just to see if somehow my processing step is creating the "tripled" image, but it's not. It's only when I write to the output AVI with cvWriteFrame that the image gets "tripled" and compressed.
Edit: I've discovered that this only happens when I attempt to write single-channel images using cvWriteFrame. If I write 3-channel 8-bit RGB images, the output video turns out fine. Why is it doing this? I am correctly passing 0 for the color argument when initializing the CvVideoWriter, so it should be expecting single-channel images.
In the C++ version you have to tell cv::VideoWriter that you are sending a single-channel image by setting the last parameter to false. Are you sure you are doing this?
Edit: Alternatively, you can convert a greyscale image to colour using cvtColor() with CV_GRAY2RGB.
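A sketch of that second suggestion (assuming the C++ API): expand the single-channel frame to three channels before writing, so the writer gets what it expects. CV_GRAY2BGR is equivalent to the CV_GRAY2RGB mentioned above for a grey input, since all three channels receive the same value:
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

void writeGrayFrame(cv::VideoWriter& writer, const cv::Mat& gray)   // gray: CV_8UC1
{
    cv::Mat bgr;
    cv::cvtColor(gray, bgr, CV_GRAY2BGR);   // replicate the single channel into B, G and R
    writer << bgr;                          // writer opened with isColor = true (the default)
}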