OpenCV - How to enable scrolling in windows with images? - c++

So currently I open images created with OpenCV with something like
cvNamedWindow( "Original Image", CV_WINDOW_AUTOSIZE );
cvShowImage( "Original Image", original );
but my images are quite large and go off the screen, as shown here.
I want the windows to be resizable, or at least to be the size of the user's screen, but with scrolling.
How can I do such a thing?

A simple way to scroll a large image is to use trackbars and a Rect to snip out the part that is shown.
...
namedWindow("winImage", WINDOW_AUTOSIZE);
namedWindow("controlWin", WINDOW_AUTOSIZE);
int winH = 300;
int winW = 600;
if (winH >= largeImage.rows) winH = largeImage.rows - 1;
if (winW >= largeImage.cols) winW = largeImage.cols - 1;
int scrollHeight = 0;
int scrollWidth = 0;
// use the C++ createTrackbar to match the C++ Mat code
createTrackbar("Hscroll", "controlWin", &scrollHeight, largeImage.rows - winH);
createTrackbar("Wscroll", "controlWin", &scrollWidth, largeImage.cols - winW);
// poll with a short delay so the view refreshes while the sliders move
while (waitKey(30) != 'q') {
    Mat winImage = largeImage(Rect(scrollWidth, scrollHeight, winW, winH));
    imshow("winImage", winImage);
}
...

EDIT
Short answer: you can't "enable" it, you have to implement it.
OpenCV does have trackbars -- have a look at the documentation, in particular the cvCreateTrackbar function. However, even if you use them, you still have to write the code behind them (determining the new ROI and what to actually show), as in the sketch below.
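For illustration, here is a minimal, self-contained sketch of that idea using a trackbar callback; the file name, window name, and viewport size are made up for the example, and the image is assumed to be larger than the viewport:
#include <opencv2/opencv.hpp>
using namespace cv;

static Mat bigImage;                       // the full-size image
static const int viewW = 640, viewH = 480; // assumed viewport size
static int yPos = 0;                       // current vertical scroll offset

static void onScroll(int, void*)
{
    // show only the part of the image that the slider points at
    imshow("view", bigImage(Rect(0, yPos, viewW, viewH)));
}

int main()
{
    bigImage = imread("large.jpg");        // hypothetical file name
    if (bigImage.empty() || bigImage.cols < viewW || bigImage.rows <= viewH)
        return -1;                         // sketch assumes the image is larger than the viewport
    namedWindow("view", WINDOW_AUTOSIZE);
    createTrackbar("y", "view", &yPos, bigImage.rows - viewH, onScroll);
    onScroll(0, nullptr);                  // draw the initial view
    waitKey(0);
    return 0;
}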
If this sounds a bit too daunting, you can wrap the displayed image in some GUI framework. Here is an example that uses OpenCV with wxWidgets. Of course, you can use any other GUI framework (for example, Qt).

This might help as a first step: just use CV_WINDOW_NORMAL instead of CV_WINDOW_AUTOSIZE.
cvNamedWindow("Original Image", CV_WINDOW_NORMAL);
cvShowImage("Original Image", original);
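In the C++ API the equivalent would be something like this (the 800x600 initial size is just an example):
cv::namedWindow("Original Image", cv::WINDOW_NORMAL); // user-resizable window
cv::resizeWindow("Original Image", 800, 600);         // optional explicit initial size
cv::imshow("Original Image", original);
cv::waitKey(0);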

As far as I know (but I've only recently started looking at OpenCV), you need to build the OpenCV library with the Qt GUI library as its GUI backend.
Then you get all the cute functions.
Well, OK, there's not very much, but the little that is there is documented as Qt-only.
EDIT: PS, since other answers might possibly sow confusion: I'm not talking about using Qt to implement such functionality yourself. I'm talking about the functionality available in OpenCV's HighGUI module.
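For example, with a Qt-enabled build, HighGUI gains calls such as displayOverlay and displayStatusBar; a small sketch, assuming OpenCV was compiled with WITH_QT:
cv::namedWindow("img", cv::WINDOW_NORMAL | cv::WINDOW_GUI_EXPANDED);
cv::imshow("img", image);
cv::displayOverlay("img", "Qt-only overlay text", 2000);     // shown for 2 seconds
cv::displayStatusBar("img", "Qt-only status bar text", 2000);
cv::waitKey(0);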
Cheers & hth.,

The best I can do with pure OpenCV is to adapt the OpenCV trackbar method: I use an ROI to update the displayed image according to the slider value. The weakness of this method is that the OpenCV trackbar is displayed horizontally, not vertically like a normal scrollbar, so in this case it is up to you whether you want to rotate your image or not.
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int slider_max, slider, displayHeight;
int displayWidth = 1900;
Mat src1; // original big image
Mat dst;
cv::Rect roi;

static void on_trackbar(int, void*)
{
    roi = cv::Rect(slider, 0, displayWidth, displayHeight); // update the ROI for display
    dst = src1(roi);
    imshow("Result", dst);
}

int main(void)
{
    src1 = imread("BigImg.jpg"); // your big image
    if (src1.empty()) { cout << "Error loading src1 \n"; return -1; } // check before using the image
    cv::rotate(src1, src1, cv::ROTATE_90_CLOCKWISE); // rotate because the OpenCV trackbar is horizontal
    cv::resize(src1, src1, cv::Size(src1.cols / 2, src1.rows / 2)); // downscale if it's too big for one window
    slider_max = src1.cols - displayWidth;
    slider = 0;
    displayHeight = src1.rows;
    namedWindow("Result", WINDOW_AUTOSIZE); // create the window
    createTrackbar("Pixel Pos", "Result", &slider, slider_max, on_trackbar);
    on_trackbar(slider, 0);
    waitKey(0);
    return 0;
}
For changing the trackbar's orientation you will need Qt or another GUI; see: How to change the position of the trackbar in OpenCV applications?
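If native scrollbars are acceptable, another route (outside pure OpenCV, so take this as a sketch under the assumption that Qt is available) is to put the image into a Qt QScrollArea:
#include <opencv2/opencv.hpp>
#include <QApplication>
#include <QLabel>
#include <QScrollArea>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);
    cv::Mat img = cv::imread("BigImg.jpg");    // hypothetical file name
    if (img.empty()) return -1;
    cv::cvtColor(img, img, cv::COLOR_BGR2RGB); // QImage expects RGB order
    QImage qimg(img.data, img.cols, img.rows,
                static_cast<int>(img.step), QImage::Format_RGB888);
    QLabel* label = new QLabel;                 // the scroll area takes ownership
    label->setPixmap(QPixmap::fromImage(qimg)); // fromImage() makes a deep copy
    label->adjustSize();
    QScrollArea area;                           // provides native scrollbars
    area.setWidget(label);
    area.resize(800, 600);                      // assumed on-screen window size
    area.show();
    return app.exec();
}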

Related

How to resize the image shown through imshow (opencv, imshow)?

I am trying to show an image, but imshow displays it at such a large scale that I can see the image's individual pixels (as shown in the image below).
As you see, the pixels are too big, and it is not nice. I expected something like this:
Is there any way to resize the image's window?
I am using Linux with VS Code.
Here is my code:
int main()
{
    Mat O_image = imread("lena.jpg");
    namedWindow("hamid", CV_WINDOW_KEEPRATIO);
    imshow("hamid", O_image);
    waitKey(0);
    return 0;
}
The pointer changes when I move the mouse to the window's edges, but I can't resize it.
You can use cv::namedWindow with WindowFlags to control the size of the output window.
cv::Mat img;
img = cv::imread("Lena.jpg", CV_LOAD_IMAGE_GRAYSCALE);
cv::namedWindow("An image of Lena", CV_WINDOW_NORMAL); // CV_WINDOW_NORMAL enables the user to resize the window
cv::imshow("An image of Lena", img);
cv::waitKey(0); // keep the window open and process GUI events

Bad quality when rendering images from camera in Qt4

My code:
camera = new RaspiCam_Cv(); // Raspberry Pi camera library
camera->set(CV_CAP_PROP_FORMAT, CV_8UC1); // this is the monochrome 8-bit format
camera->set(CV_CAP_PROP_FRAME_WIDTH, 960);
camera->set(CV_CAP_PROP_FRAME_HEIGHT, 720);
while (1) {
    camera->grab(); // for Linux
    unsigned char* buff = camera->getImageBufferData();
    QPixmap pic = QPixmap::fromImage(QImage(buff, camWidth_, camHeight_, camWidth_ * 1, QImage::Format_Indexed8));
    label->setPixmap(pic);
}
The problem is bad quality! I found out that the problem happens when using QImage; when using an OpenCV Mat, everything is good!
The same thing happens in other Qt-based programs, like this one (same bad quality): https://code.google.com/p/qt-opencv-multithreaded/
Here is a picture where the problem is shown. There is a white page in front of the camera, so if everything went as it should, you would see a clean gray image.
You are resizing the image using the pixmap and label transformations, which are worse than those of QImage. This is because QPixmap is optimized for display and not for anything else. The pixmap size should be the same as the label's to avoid any further resizing.
QImage img = QImage(
    buff,
    camWidth_,
    camHeight_,
    camWidth_ * 1,
    QImage::Format_Indexed8).scaled(label->size());
label->setPixmap(QPixmap::fromImage(img));
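Note that QImage::scaled uses Qt::FastTransformation (nearest-neighbor) by default; if the scaling itself is what degrades the picture, the smooth variant may be worth a try (a small variation on the answer's code, not from the original):
QImage img = QImage(buff, camWidth_, camHeight_, camWidth_, QImage::Format_Indexed8)
                 .scaled(label->size(), Qt::KeepAspectRatio, Qt::SmoothTransformation);
label->setPixmap(QPixmap::fromImage(img));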
This is not an answer, but it's too hard to share code in the comments.
Can you please test this code and tell me whether the result is good or bad?
int main(int argc, char** argv)
{
    RaspiCam_Cv* camera = new RaspiCam_Cv();
    camera->set(CV_CAP_PROP_FORMAT, CV_8UC1);
    camera->set(CV_CAP_PROP_FRAME_WIDTH, 960);
    camera->set(CV_CAP_PROP_FRAME_HEIGHT, 720);
    if (!camera->open()) { cout << "Error opening the camera" << endl; return -1; } // open before grabbing
    namedWindow("Output", CV_WINDOW_AUTOSIZE);
    while (1)
    {
        Mat frame;
        camera->grab();
        //camera->retrieve(frame);
        unsigned char* buff = camera->getImageBufferData();
        frame = cv::Mat(720, 960, CV_8UC1, buff);
        imshow("Output", frame);
        if (waitKey(30) == 27)
        { cout << "Exit" << endl; break; }
    }
    camera->release();
    delete camera; // release via delete instead of calling the destructor explicitly
    return 0;
}
Your provided images look like the color depth is only 16 bits.
For comparison, here's the provided captured image:
and here's the same image transformed to 16-bit color space in IrfanView (without Floyd-Steinberg dithering).
In the comments we found out that the Raspberry Pi output buffer was set to 16 bits, and setting it to 24 bits helped.
But I can't explain why rendering the image on the Pi with OpenCV's cv::imshow produced good-looking images on the monitor/TV...
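For reference, the framebuffer depth on a Raspberry Pi of that era was typically raised from 16 to 24 bits in /boot/config.txt; the exact line below is an assumption about the poster's setup, not something stated in the thread:
# /boot/config.txt - use a 24-bit framebuffer instead of the 16-bit default
framebuffer_depth=24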

display multiple OpenCV imshow() windows separately

I have a VS console application built using the OpenCV library. I am displaying images using the OpenCV imshow function. The thing is that all the imshow windows overlap each other, and it is difficult to toggle between them. How can I prevent the overlap, display them separately, and toggle between them?
The way to go about this programmatically is to call resizeWindow() to define each window's size and moveWindow() to place the windows at specific locations on your screen.
void cv::resizeWindow(const string& winname, int width, int height)
void cv::moveWindow(const string& winname, int x, int y)
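A minimal sketch of that approach with two windows (the names, sizes, and offsets below are just example values):
cv::namedWindow("left", cv::WINDOW_NORMAL);
cv::namedWindow("right", cv::WINDOW_NORMAL);
cv::resizeWindow("left", 640, 480);  // assumed window sizes
cv::resizeWindow("right", 640, 480);
cv::moveWindow("left", 0, 0);        // top-left corner of the screen
cv::moveWindow("right", 680, 0);     // next to the first window, with a 40 px gap
cv::imshow("left", img1);
cv::imshow("right", img2);
cv::waitKey(0);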
Although this is a late reply, you may find it useful to call moveWindow() after each imshow() call. A language-independent solution is given here.
Example steps:
call imshow("first image", img1)
call moveWindow("first image", 0, 0) // the default position of a window is at col,row == 0,0, so this line is optional
call imshow("second image", img2)
set firstImageWidth = width of img1
set mySpacing = 40 // vary this to increase/decrease the gap between the image windows
call moveWindow("second image", firstImageWidth + mySpacing, 0)
Then, add these lines to prevent the output windows from staying active forever:
set myTime = 7000 // in milliseconds; here, 7000 ms == 7 s to show our image windows
call waitKey(myTime)
call waitKey(1) // this is a trick; otherwise, the windows stay open indefinitely
At the moment, I am using Java SE 8 with OpenCV 4.2. The above method works for me.
[Screenshot of the above example in action.][1]
[1]: https://i.stack.imgur.com/JaTI0.png
Here is a Java+OpenCV code snippet for the display part:
...
// display images using OpenCV's HighGui class (its methods are static)
String inputWindowName = "This window shows input image";
String outputWindowName = "This window shows output image";
HighGui.imshow(inputWindowName, img1);
HighGui.imshow(outputWindowName, img2);
HighGui.moveWindow(outputWindowName, img1.cols() + 40, 0);
HighGui.waitKey(7000);
HighGui.waitKey(1);

Holding an image for runtime updating in OpenCV

I have an image I'd like to display using imshow() and update at runtime: say I'd like to run a corner detection algorithm and then display the corners on this same image, like MATLAB's figure, plot(), hold on, plot() sequence, where the hold keyword keeps the previous image/graph and enables a new plot on the same figure.
Is this possible to do with OpenCV? If yes, how can I do it?
Thanks
In C++, you don't need to hold anything in order to update the drawing. You just draw what you want onto the image and then imshow the image. There you go.
Look at the following example from here:
// window_width, window_height and DELAY are constants defined elsewhere in the original sample
int Displaying_Big_End(Mat image, char* window_name, RNG rng)
{
    Size textsize = getTextSize("OpenCV forever!", CV_FONT_HERSHEY_COMPLEX, 3, 5, 0);
    Point org((window_width - textsize.width) / 2, (window_height - textsize.height) / 2);
    int lineType = 8;
    Mat image2;
    for (int i = 0; i < 255; i += 2)
    {
        // darken the source a little more on each iteration, then draw the text
        image2 = image - Scalar::all(i);
        putText(image2, "OpenCV forever!", org, CV_FONT_HERSHEY_COMPLEX, 3,
                Scalar(i, i, 255), 5, lineType);
        imshow(window_name, image2);
        if (waitKey(DELAY) >= 0)
        { return -1; }
    }
    return 0;
}
Pay attention to the imshow(window_name, image2) call: we don't do anything to hold the image; we just use the loop to draw incrementally (with putText()) on the image, and the displayed image updates accordingly.
There is no concept of hold in OpenCV.
Basically, cv::imshow() will just update the window with whatever image it gets.
To "overlay" you actually need to create a new image (or reuse an existing one), draw on this new image and/or update it, e.g. with your detected corners, and call imshow() again with this updated image.

OpenCV+cvBlobsLib: blobs come out "stretched" on the x-axis

Making the usual blob tracker with OpenCV and cvBlobsLib, I've come across this problem, and it seems no one else has had it, which makes me sad. I get the RGB/BGR frame, choose the color to isolate, threshold it into b/w, find the blobs, and add the bounding rectangle on each blob. But when I display the final image, the box is stretched on the x-axis: when the object is on the left, the box is close to it (although around 2.5 times larger), and as it moves to the right the box moves faster (= farther and farther from the object) until it reaches the right end of the window when the object isn't even halfway there. This doesn't happen on the y-axis, where everything is fine.

It's not a problem with rectangles; it happens when I use fillBlob as well: the blob shape comes out stretched and misaligned. Also, it's not a problem related to image capturing, since I've tried with a Kinect (OpenNI), a webcam, and even a single image (imread()), and I verified that every ImageGenerator, Mat, and IplImage used was 640x480, 8-bit depth, for which I used AUTOSIZE for the namedWindow (enlarging to a fullscreen window doesn't help either). Showing the BGR frame and the thresholded image gives no problems, they both fit into the window, but the detected blobs seem to belong to a different resolution space when I merge them with the original image.

Here's the code; not much has changed from the usual examples found online everywhere:
//[...]
namedWindow("Color Image", CV_WINDOW_AUTOSIZE);
namedWindow("Color Tracking", CV_WINDOW_AUTOSIZE);
//[...] I already got the two cv::Mat I need, imgBGR and imgTresh
CBlobResult blobs;
CBlob* currentBlob;
Point pt1, pt2;
Rect rect;
// had to do a Mat to IplImage conversion, since cvBlobsLib doesn't like Mats
IplImage iplTresh = imgTresh;
IplImage iplBGR = imgBGR;
blobs = CBlobResult(&iplTresh, NULL, 0);
blobs.Filter(blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 100);
int nBlobs = blobs.GetNumBlobs();
for (int i = 0; i < nBlobs; i++)
{
    currentBlob = blobs.GetBlob(i);
    rect = currentBlob->GetBoundingBox();
    pt1.x = rect.x;
    pt1.y = rect.y;
    pt2.x = rect.x + rect.width;
    pt2.y = rect.y + rect.height;
    cvRectangle(&iplBGR, pt1, pt2, cvScalar(255, 255, 255, 0), 3, 8, 0);
}
//[...]
imshow("Color Image", imgBGR);
imshow("Color Tracking", imgTresh);
The "[...]" is code that shouldn't have nothing to do with this issue, but if you need further info on how I handled the images, let me know and I'll post it.
Based on the fact that the way I capture the image doesn't change anything, that BGR frame and B/W image are well shown, and that after getting blobs any way of displaying them gives the same (wrong) result, the problem must be something between CBlobResult() and matrix2ipl conversion, but I don't really know how to find it out.
Oh god, I spent ages writing up the whole problem, and the next day I found the answer almost by accident. When I created the B/W matrix for thresholding, I didn't make it single-channel; I copied the BGR matrix type, thus getting a threshold image with 3 channels, which resulted in a widthStep 3 times the frame width. Resolved by creating cv::Mat imgTresh with CV_8UC1 as its type.
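In other words, the fix boils down to something like the following sketch (variable names follow the question; the grayscale threshold is only an illustration, since the actual code isolates a color):
// wrong: copying the BGR matrix type gives a 3-channel threshold image,
// whose widthStep is 3 times the frame width
// cv::Mat imgTresh(imgBGR.size(), imgBGR.type());

// right: an explicitly single-channel 8-bit matrix for the threshold result
cv::Mat imgTresh(imgBGR.size(), CV_8UC1);
cv::Mat gray;
cv::cvtColor(imgBGR, gray, CV_BGR2GRAY);
cv::threshold(gray, imgTresh, 128, 255, CV_THRESH_BINARY); // 128 is an example threshold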