I have a fancy global-shutter camera that comes with its own set of APIs. What I have been trying to do is initialize the camera and trigger the shutter in a precisely timed loop. The camera API then returns a pointer to the first pixel of the image (let's call it image.bp).
My code looks something like this:
XI_IMG image; // image type specified by the manufacturer; image.bp points to the first pixel
// ... camera initialization ...
for (;;) {
    xiGetImage(xiH, timeOut, &image); // API given by the camera manufacturer
    Mat frame(image.height, image.width, CV_8U, image.bp);
    imshow(window_name, frame);
    waitKey(1); // give HighGUI a chance to actually draw the window
}
Now this code WORKS. But what I have been trying to do is define "frame" as a Mat before the for loop so I don't end up redefining it every cycle (correct me if I'm wrong, but redefining it on every iteration seems very inefficient). Everything that I have tried fails.
Any help would be much appreciated!
There is frame.data (the Mat::data member) that you could reassign every time you acquire a new frame with your xiGetImage code. When you do so, make sure you initialise your frame object with a valid image pointer rather than omitting the last parameter; that way, your Mat object will not allocate its own memory (and eventually leak it).
XI_IMG image; // image type specified by the manufacturer; image.bp points to the first pixel
// ... camera initialization ...
xiGetImage(xiH, timeOut, &image); // API given by the camera manufacturer
Mat frame(image.height, image.width, CV_8U, image.bp); // wraps the buffer, no allocation
for (;;) {
    xiGetImage(xiH, timeOut, &image);
    frame.data = static_cast<uchar*>(image.bp); // re-point the header at the current buffer
    imshow(window_name, frame);
    waitKey(1);
}
Further,
I guess the comments regarding compiler optimisers are correct to a certain extent too. Your Mat constructor will only initialise the stack-allocated header and will not do any heap allocation (as you specified your own data pointer). As the stack-allocated header is exactly the same size for every frame, I would assume the compiler will at least reuse the same address. There will probably be some overhead in setting the width and height for every frame, but that is really minimal.
I strongly suspect that the pointer in image.bp is stable and does not change from frame to frame; drivers prefer to reuse pre-allocated memory rather than fragment the heap. If that is the case, you could even omit the frame.data = image.bp; line above.
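If you want to verify that assumption before dropping the reassignment, here is a minimal sketch (reusing the xiH, timeOut and image variables from the code above):

// Sketch: check whether the driver really reuses one buffer.
// If this assertion ever fires, keep the frame.data reassignment.
void* firstBp = nullptr;
for (int i = 0; i < 100; ++i) {
    xiGetImage(xiH, timeOut, &image);
    if (firstBp == nullptr)
        firstBp = image.bp;         // remember the first buffer address
    CV_Assert(image.bp == firstBp); // stable across frames?
}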
I have two processes that want to share cv::Mat image data, and I want to use boost's managed_shared_memory to do it. Since copying an image is really time consuming, I am trying to find a way to write the image directly into shared memory the moment it is produced.
However, since a cv::Mat is only a header holding a pointer to the pixel data, and the data lives somewhere else, I have not managed to realize this idea. I have some test code, but it is chaotic and does not work, so I think I am going in completely the wrong direction. Does anyone have experience with this? Thank you!
The cv::Mat::ptr() function gives you a pointer to the first pixel of an OpenCV image.
The size of the data buffer is channels * height * width * element size (equivalently, image.total() * image.elemSize()), so if the elements are 1 byte each (based on the CvType) you can just use
memcpy(dest, image.ptr(), Channels * Height * Width).
Caveats:
- The image must be continuous. Use isContinuous() to check. If it fails, clone() the image to get a continuous copy.
- To retrieve the image from shared memory, you will have to construct a new cv::Mat with the same height, width, channels, CvType and step, then memcpy the buffer into it.
See Shared Memory Example for a minimal working example.
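In the same spirit, a minimal sketch using boost::interprocess; the segment and object names ("ImageSegment", "ImageData") are placeholders, and the reader is assumed to learn rows, cols and type some other way (e.g. a small header struct stored in the same segment):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <opencv2/opencv.hpp>
#include <cstring>

using namespace boost::interprocess;

// Writer: copy a continuous cv::Mat into shared memory.
void writeImage(const cv::Mat& image)
{
    cv::Mat src = image.isContinuous() ? image : image.clone(); // one contiguous block
    const size_t bytes = src.total() * src.elemSize();          // channels * h * w * elemsize

    managed_shared_memory segment(open_or_create, "ImageSegment", bytes + 1024);
    uchar* buffer = segment.find_or_construct<uchar>("ImageData")[bytes](0);
    std::memcpy(buffer, src.ptr(), bytes);
}

// Reader: reconstruct a Mat of the known geometry from the shared buffer.
cv::Mat readImage(int rows, int cols, int type)
{
    managed_shared_memory segment(open_only, "ImageSegment");
    uchar* buffer = segment.find<uchar>("ImageData").first;
    // the mapping dies with 'segment', so deep-copy before returning
    return cv::Mat(rows, cols, type, buffer).clone();
}

For the zero-copy version of the original idea, allocate the shared buffer first, wrap it in a cv::Mat using the pointer-taking constructor, and let the producer write frames straight into that Mat.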
How is the array of pixels that is passed to the update method of the Texture class (SFML) managed, memory-wise? These are some of my guesses:
- A weak pointer is stored inside the texture instance, which means you must keep your own pointer to the pixel array and manage it yourself.
- The array is copied and managed by the texture (which also means that every time the update method is called again, the previous copy is deallocated).
The second guess would justify this for updating a texture multiple times:
auto newPixels = new sf::Uint8[WIDTH * HEIGHT * 4];
... //do stuff to pixels
texture.update(newPixels);
Here the pixels are reallocated every time the texture is updated. Otherwise (if the pixels are only stored as a weak pointer and not allocated/deallocated by the texture), a different approach would be necessary, with the pixels managed by the user...
Thanks in advance for any answers :)
SFML is open source. You don't need to take guesses or ask here. You can just read it for yourself:
https://github.com/SFML/SFML/blob/master/src/SFML/Graphics/Texture.cpp#L390
Specifically, the pointer is passed straight to the OpenGL function glTexSubImage2D, which copies the pixels into the texture's own storage. The texture keeps no reference to your array: you retain ownership and can reuse or free it as soon as update() returns.
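A small sketch of what that implies in practice, assuming SFML 2.x: one caller-owned buffer can be reused for every update, with no reallocation:

#include <SFML/Graphics.hpp>
#include <vector>

int main()
{
    const unsigned WIDTH = 256, HEIGHT = 256;
    sf::Texture texture;
    texture.create(WIDTH, HEIGHT);

    // One caller-owned buffer, reused every frame.
    std::vector<sf::Uint8> pixels(WIDTH * HEIGHT * 4, 255); // RGBA
    for (std::size_t frame = 0; frame < 60; ++frame)
    {
        pixels[(frame * 4) % pixels.size()] = 0; // mutate in place
        texture.update(pixels.data());           // pixels are copied by OpenGL
    }
}   // pixels freed here; the texture is unaffected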
I am using the CL Eye Multicam C++ API to obtain frames from a PSEye camera, and I ran into some behaviour I hope someone can explain to me.
Following this example, if I use the regular code (around line 108):
while (_running)
{
    cvGetImageRawData(pCapImage, &pCapBuffer);
    CLEyeCameraGetFrame(_cam, pCapBuffer);
    cvShowImage(_windowName, pCapImage);
}
The pCapBuffer is updated, BUT if I just do:
while (_running)
{
    CLEyeCameraGetFrame(_cam, pCapBuffer);
}
pCapBuffer remains NULL! So from what I see, CLEyeCameraGetFrame() only updates pCapBuffer when someone "consumes" it... What I don't get is how CLEyeCameraGetFrame() knows that the buffer was read. I was expecting pCapBuffer to be updated every time I called CLEyeCameraGetFrame()... Is this the regular behaviour in camera frame reads?
Also, if someone could point out how to make a QImage out of this pCapBuffer, it would be very helpful!
I finally understood what is going on... cvGetImageRawData() does not copy any pixels; it writes the address of pCapImage's internal data into pCapBuffer, making it point at the image's internal representation. So every time CLEyeCameraGetFrame() is called, it fills the memory behind pCapBuffer, which is the same memory backing pCapImage. The designer of this code simply used the OpenCV functions to allocate a buffer of the right size and then acquired the frames straight into it.
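As for the QImage part of the question, a hedged sketch, assuming the camera was opened in the example's 4-byte colour mode and that width and height hold the configured frame size:

#include <QImage>

// Wrap the existing capture buffer in a QImage without copying.
// Format_RGB32 matches a 4-byte BGRA layout on little-endian machines.
QImage view(pCapBuffer, width, height, QImage::Format_RGB32);

// QImage does not take ownership of pCapBuffer, so deep-copy it if the
// buffer will be overwritten by the next CLEyeCameraGetFrame() call:
QImage frameCopy = view.copy();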
There is an assert in the implementation of cropping a matrix that prevents the cropRect from extending past the edges of the source image.
// Asserts that cropRect fits inside the image's bounds.
cv::Mat croppedImage = image(cropRect);
I want to lift this restriction and have the regions that fall outside the image come back as black pixels. Is this possible?
The answer is: technically it is possible, but you really, really don't want to do it. There are no "black pixels" lying around your image. Your image allocated just enough memory for itself, and that's it. So if you try to access pixels outside the allocated memory, you will get a runtime error. If you want black pixels, you will have to add them yourself in the way @ffriend described. image(cropRect) does not allocate anything; it just creates a new header pointing at memory that already exists.
In case you are still curious about how this crop is done, OpenCV does the following:

// check that cropRect is inside the image
if ((cropRect & Rect(0, 0, image.cols, image.rows)) != cropRect)
    return -1; // some kind of error notification

// use one of the Mat constructors (x, y, width and height come from cropRect);
// the x offset must be scaled by the element size for multi-channel types
Mat croppedImage(Size(width, height), image.type(),
                 image.ptr(y) + x * image.elemSize(), image.step);
You can skip the test and go to initialization, but as I said this is a good recipe for disaster.
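If you do want the padded behaviour, here is a sketch of the approach @ffriend described (hypothetical helper; intersect the rectangle with the image and copy the overlap onto a black canvas):

#include <opencv2/opencv.hpp>

cv::Mat cropWithBlackBorder(const cv::Mat& image, const cv::Rect& cropRect)
{
    cv::Mat result = cv::Mat::zeros(cropRect.size(), image.type()); // black canvas
    cv::Rect valid = cropRect & cv::Rect(0, 0, image.cols, image.rows);
    if (valid.area() > 0)
    {
        // destination ROI: the valid region shifted by the overhang
        cv::Rect dst(valid.x - cropRect.x, valid.y - cropRect.y,
                     valid.width, valid.height);
        image(valid).copyTo(result(dst));
    }
    return result;
}

cv::copyMakeBorder with BORDER_CONSTANT followed by a plain in-bounds crop achieves the same result, at the cost of a padded copy of the whole image.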
Looking through the Operations on Arrays section of the OpenCV documentation, I could not find a way to take two different Mats and display them both in a single window.
PS: It's not about merging images into a single one.
Any ideas?
Use Qt and this function:
how to convert an opencv cv::Mat to qimage
Highgui does not support several matrices per window yet.
I don't think it's possible in pure OpenCV, as OpenCV is an image-processing library and supports only a bare minimum of user-interface functionality.
You can create a bigger Mat that contains both of your original images. In order to be able to distinguish the images from each other, you can add a black boundary line, e.g.:
// the 20 is an example width for the border between the images
// (assumes both images have the same type)
Mat display = Mat::zeros(MAX(image1.rows, image2.rows),
                         image1.cols + 20 + image2.cols, image1.type());
image1.copyTo(display(Rect(0, 0, image1.cols, image1.rows)));
image2.copyTo(display(Rect(image1.cols + 20, 0, image2.cols, image2.rows)));
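A single imshow call then shows both:

imshow("two images", display);
waitKey(0);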
I was thinking about this exact same thing and came across your question. Given the way Mat works, it is really just a smart pointer: a header that describes how the data it points to is organised.
This means you cannot put two independent Mat containers into one window; everything display-related currently uses one Mat object at a time, so there is no nice, easy, supported way to do it.
Warning: you are now responsible for handling the memory yourself.
If you are extremely determined to do it (I know that feeling, and I did it this way since I couldn't find a better option), you could allocate a contiguous block of memory for your data buffer.
Now you can partition that memory as you see fit for your multiple images, i.e. using pointer arithmetic. You need to make sure the images you want to display sit next to each other in that buffer.
At this point, you can create a Mat object that covers the whole memory region of your two images using Mat(cv::Size size, int type, void *buffer).
Of course, using this method, they are permanently stuck together in a certain formation.
Obviously, this can also be done the other way around: create the Mat object and let it allocate the space, then take the data pointer from the Mat via its uchar* data member (remember to cast it to your preferred type for the pointer arithmetic), and create two new Mat headers over that buffer.
Sadly, this method doesn't allow horizontal concatenation, because Mat rows are stored contiguously in memory, so only vertically stacked images line up. A sketch of that vertical-stacking variant follows.
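A minimal sketch of the shared-buffer idea, assuming both images have the same size and type (the sizes and names are illustrative):

#include <opencv2/opencv.hpp>

int main()
{
    const int rows = 240, cols = 320, type = CV_8UC3;

    // One Mat owns a buffer big enough for two stacked images...
    cv::Mat stacked(rows * 2, cols, type);

    // ...and two header-only Mats view its top and bottom halves.
    cv::Mat top(rows, cols, type, stacked.data);
    cv::Mat bottom(rows, cols, type, stacked.ptr(rows));

    top.setTo(cv::Scalar(255, 0, 0));    // writes through to 'stacked'
    bottom.setTo(cv::Scalar(0, 0, 255));

    cv::imshow("both images", stacked);  // one window shows both
    cv::waitKey(0);
    return 0;
}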