I have an image of 6750x6450 px and I am trying to display it with the imshow() function from OpenCV.
When I display one image it is shown badly (some weird output), and when I try to display two images I get a segfault. Saving those images to the hard disk gives good results: the files are correct, and when I resize both images they are also shown correctly. Does imshow() have some size restrictions?
code:
Mat bigImage1 = imread(...);
Mat bigImage2 = imread(...);
namedWindow("first", CV_WINDOW_FULLSCREEN);
namedWindow("second", CV_WINDOW_FULLSCREEN);
imshow("first", bigImage1);
imshow("second", bigImage2);
I'm working on a desktop computer running Windows 7 64-bit.
The images are probably larger than your current screen resolution. The problem seems to be that they are simply too big for OpenCV to handle in a window.
To be certain, I would try your code on Mac or Linux, since OpenCV is cross-platform and there is a separate window-management implementation for each OS.
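If you only need to look at them, a workaround sketch (file names here are placeholders) is to create a resizable window instead of a fullscreen one, or to downscale the pixel data yourself before calling imshow:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat bigImage1 = cv::imread("big1.png");   // placeholder path

    // Option 1: a resizable window; HighGUI scales the image to the window size.
    cv::namedWindow("first", CV_WINDOW_NORMAL);
    cv::resizeWindow("first", 1280, 720);
    cv::imshow("first", bigImage1);

    // Option 2: shrink the data itself so imshow never has to draw 6750x6450 px.
    cv::Mat scaled;
    cv::resize(bigImage1, scaled, cv::Size(), 0.25, 0.25, cv::INTER_AREA);
    cv::imshow("first scaled", scaled);

    cv::waitKey(0);
    return 0;
}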
I am using OpenCV to process video frames taken by a video camera and I am showing the processed frames in a simple GUI implemented in Qt5. In the GUI, the images are shown using a QPixmap in a label. The OpenCV algorithms should be right, since the outputs are correct if I write them to disk, and they are basically examples provided by OpenCV.
I have implemented different processing steps: for the conversion from color to grayscale and for a binary threshold (see image 1) the results are fine (this "view" of the camera is correct). Nevertheless, when trying to display ("in real time") keypoint detections (using SURF, see image 2) and contour detections (using Canny, see image 3), the displayed images are strange.
The main problem is that they seem to be both "much closer" (see 2) and doubled (see 3) at the same time.
In the Qt code I am using:
ui->labelView->setScaledContents(true);
I do the conversion from the processed OpenCV frame to QImage using:
QImage output((const unsigned char*) _frameProcessed.data, _frameProcessed.cols, _frameProcessed.rows, QImage::Format_Indexed8);
And I display the image using:
ui->labelView->setPixmap(QPixmap::fromImage(frame));
The GUI and the OpenCV processing are running in different threads: I move the image processing to a thread in an initial setup.
If you need further information, please just let me know.
Thank you very much in advance!
Best regards,
As @Micka pointed out, it had to do with the formats of the images. I found this handy code, which provides functions for an automatic conversion between OpenCV and Qt formats.
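For reference, a simplified sketch of that kind of conversion (not the exact code linked above, and the function name is just illustrative): it handles 3-channel BGR output as well as single-channel grayscale output, and it passes the row stride, since padded cv::Mat rows are another common cause of skewed or doubled-looking images.
#include <opencv2/core/core.hpp>
#include <QImage>
#include <QVector>

QImage cvMatToQImage(const cv::Mat &mat)
{
    switch (mat.type())
    {
    case CV_8UC3:   // 3-channel BGR, e.g. drawKeypoints output
    {
        QImage img(mat.data, mat.cols, mat.rows,
                   static_cast<int>(mat.step), QImage::Format_RGB888);
        return img.rgbSwapped();    // deep copy with B and R swapped
    }
    case CV_8UC1:   // single channel, e.g. grayscale or threshold output
    {
        static QVector<QRgb> grayTable;
        if (grayTable.isEmpty())
            for (int i = 0; i < 256; ++i)
                grayTable.push_back(qRgb(i, i, i));
        QImage img(mat.data, mat.cols, mat.rows,
                   static_cast<int>(mat.step), QImage::Format_Indexed8);
        img.setColorTable(grayTable);
        return img.copy();          // detach from the cv::Mat buffer
    }
    default:
        return QImage();            // unhandled format
    }
}
The label would then be updated with something like ui->labelView->setPixmap(QPixmap::fromImage(cvMatToQImage(_frameProcessed)));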
I recently started using OpenCV for a project involving reading videos. I followed online tutorials for video reading, and the video seems to be read with no problems. However, when I display any frame from the video, the far right column appears to be corrupted. Here is the code I used for reading and displaying the first frame.
VideoCapture cap("6.avi");
Mat frame;
cap >> frame;
imshow("test", frame);
waitKey(0);
This resulted in a frame that looks good for the most part, except for the far right column. See here.
I am making no modifications to the video or the frames before displaying them. Can anyone help me figure out why this is happening?
Note: I'm running Ubuntu 14.04, OpenCV version 2.4.8
Full video can be found here.
Your code looks fine to me. Are you certain the frame is corrupted? Resize, maximize, and minimize the "test" GUI window to see if the right edge is still corrupted. Sometimes, while displaying really small images, I've seen the right edge of the GUI window display incorrectly even though the frame itself is correct. You could also try imwrite("test.png", frame) to see if the saved image is still corrupted.
If that doesn't help, it sounds like a codec problem. Make sure you have the latest versions of OpenCV and FFmpeg.
If this still doesn't help, the video itself may be corrupted. You could try converting it into another format using FFmpeg.
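If it helps narrow things down, a small diagnostic along these lines (a sketch, reusing the "6.avi" name from your snippet) prints what the container reports and saves the first frame to disk so it can be inspected outside the GUI:
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap("6.avi");
    if (!cap.isOpened()) { std::printf("could not open video\n"); return 1; }

    // Decode the FOURCC code and report the frame size the container claims.
    int fourcc = static_cast<int>(cap.get(CV_CAP_PROP_FOURCC));
    std::printf("codec: %c%c%c%c, size: %dx%d\n",
                fourcc & 0xFF, (fourcc >> 8) & 0xFF,
                (fourcc >> 16) & 0xFF, (fourcc >> 24) & 0xFF,
                static_cast<int>(cap.get(CV_CAP_PROP_FRAME_WIDTH)),
                static_cast<int>(cap.get(CV_CAP_PROP_FRAME_HEIGHT)));

    cv::Mat frame;
    cap >> frame;
    if (!frame.empty())
        cv::imwrite("test.png", frame);   // inspect this file directly
    return 0;
}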
I have C++ code running on the Parrot AR.Drone version 2.0 that detects objects and then saves images of the detected objects to the controller (a computer). As you all may know, the AR.Drone has a 720p high-definition camera. However, the saved images are very blurry. I cannot seem to find any OpenCV function that increases the resolution of the saved images; I believe the resolution is set to 95/100 by default in OpenCV. Does anyone know of a solution to this problem?
Any input or comment would be helpful.
I think you mean 95/100 JPEG quality. You can change the third parameter of cv::imwrite, as described in the OpenCV documentation:
cv::imwrite("name.jpg", image, {CV_IMWRITE_JPEG_QUALITY, 100}); // 100 instead of the default 95
But this method only increases the quality, not the resolution... and there shouldn't be much difference between 95 and 100% anyway.
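If you are not compiling with C++11, the same call can be written with an explicit parameter vector, which is how the documentation shows it:
// Equivalent form without brace-initialization: build the parameter vector explicitly.
std::vector<int> params;
params.push_back(CV_IMWRITE_JPEG_QUALITY);
params.push_back(100);                     // default is 95
cv::imwrite("name.jpg", image, params);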
I am currently trying to display a stream from a uEye camera using OpenCV. For this purpose I have Visual Studio 2013 and OpenCV 2.4.9 (64-bit) at my disposal. Since things are not yet close to a release, I'm using the debug libraries that ship with OpenCV (compiled with Visual Studio 2012).
I was trying to memcpy the image data returned by the camera into a cv::Mat object. After getting a weird error about a NULL pointer (the string name passed to cvNamedWindow), I decided to check whether I can actually run a very basic piece of code: read a PNG image and show it in a window. Well, it's not working... My mistake is probably still in the memcpy that I use, but as you can read below, I have also tested a case where no camera is involved.
No matter whether I give the absolute path of my image or simply point at the file next to the EXE, I get an assertion failure from cv::imshow saying that the height and/or width are not > 0. One other thing struck me here: the window name was all messed up - weird symbols, blank spaces, etc. - nothing to do with the name I had assigned: "camOutput".
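For reference, the basic test is essentially the standard imread/imshow example, roughly like this (the path is a placeholder):
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("testImage.png");   // placeholder path; both absolute and relative were tried
    // img.empty() turns out to be true here, which is what makes imshow assert
    cv::namedWindow("camOutput", CV_WINDOW_AUTOSIZE);
    cv::imshow("camOutput", img);
    cv::waitKey(0);
    return 0;
}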
Further, I decided to test things by manually creating a matrix of type CV_8UC3 and filling it with black pixels. OpenCV showed the image, yet the name of the window was again messed up. This time I was able to read the following, which seems to be some part of a command:
n in DOS mode.$
O_o I have never seen such weird behaviour, especially when it comes to imshow, imread, or namedWindow. Further, I also cannot explain why imread returns an empty matrix no matter what I feed it. I tried PNG, JPEG, and BMP - always the same crash.
EDIT: I have created an empty C++ project and transferred all my settings from the previous one. Now it's working. Even the memcpy for my uEye camera is fine, and I can display the output in an OpenCV window. I have no idea what the problem was with my previous project. I will have to analyze it further, since the issue might reoccur, which is why I am leaving this question open.
I'm using OpenCV 2.4.6 with C++ (and sometimes Python too, but that is irrelevant here). I would like to know if there is a simple way to get all the available frame sizes from a capture device.
For example, my webcam can provide 640x480, 320x240 and 160x120. Suppose that I don't know about these frame sizes a priori... Is it possible to get a vector, an iterator, or something like that which could give me these values?
In other words, I don't want to get the current frame size (which is easy to obtain) but the sizes the device could be set to.
Thanks!
When you retrieve a frame from a camera, it comes at the maximum size that camera can give. If you want a smaller image, you have to specify it when you get the image, and OpenCV will resize it for you.
A normal camera has one sensor of one size, and it sends one kind of image to the computer. What OpenCV does with it afterwards is up to you to specify.
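If it helps, a rough sketch of that idea (there is no enumeration API in OpenCV itself, so the candidate list below is just an assumption): request each size and read back what the driver actually accepted.
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(0);                      // device index is a placeholder
    if (!cap.isOpened()) return 1;

    const int candidates[][2] = { {160, 120}, {320, 240}, {640, 480},
                                  {1280, 720}, {1920, 1080} };
    for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); ++i)
    {
        // Request the size, then query what the capture device reports back.
        cap.set(CV_CAP_PROP_FRAME_WIDTH,  candidates[i][0]);
        cap.set(CV_CAP_PROP_FRAME_HEIGHT, candidates[i][1]);
        std::printf("requested %dx%d -> got %dx%d\n",
                    candidates[i][0], candidates[i][1],
                    static_cast<int>(cap.get(CV_CAP_PROP_FRAME_WIDTH)),
                    static_cast<int>(cap.get(CV_CAP_PROP_FRAME_HEIGHT)));
    }
    return 0;
}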