Currently, I use the class below, promoted to a QGraphicsView, to zoom an image in and out. But how do I stop zooming out once the image is back at its original resolution, and how do I pan the picture by pressing and holding the middle (scroll) mouse button, instead of the left button, when the image is larger than its original resolution?
This is my code:
#include "custom_view.h"

Custom_View::Custom_View(QWidget *parent) : QGraphicsView(parent)
{
}

void Custom_View::wheelEvent(QWheelEvent *_wheelEvent)
{
    setTransformationAnchor(AnchorUnderMouse); // zoom toward the cursor
    setDragMode(ScrollHandDrag);
    const double scaleFactor = 1.2;
    if (_wheelEvent->delta() > 0) {
        scale(scaleFactor, scaleFactor);              // zoom in
    } else {
        scale(1.0 / scaleFactor, 1.0 / scaleFactor);  // zoom out
    }
}
My Question:
How do I stop zooming out once the image reaches its original resolution? And how do I pan with the middle (scroll) mouse button, instead of the left button, when the image is zoomed in past its original size?
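One possible approach, sketched below: clamp the zoom-out step by reading the view's current scale from `transform().m11()` (this assumes the view starts at a 1:1 transform, so 1.0 means original resolution), and remap the middle mouse button to a synthesized left-button event so `ScrollHandDrag` pans with it. This is a sketch against the same `Custom_View` class as above, not tested in the asker's project:

```cpp
#include <QGraphicsView>
#include <QMouseEvent>
#include <QWheelEvent>

class Custom_View : public QGraphicsView
{
public:
    explicit Custom_View(QWidget *parent = nullptr) : QGraphicsView(parent)
    {
        setTransformationAnchor(AnchorUnderMouse);
        setDragMode(ScrollHandDrag);
    }

protected:
    void wheelEvent(QWheelEvent *event) override
    {
        const double scaleFactor = 1.2;
        // m11() is the horizontal scale of the view transform;
        // 1.0 corresponds to the image's original resolution.
        const double currentScale = transform().m11();
        if (event->angleDelta().y() > 0) {
            scale(scaleFactor, scaleFactor);
        } else if (currentScale / scaleFactor >= 1.0) {
            // Only zoom out while we stay at or above the original size.
            scale(1.0 / scaleFactor, 1.0 / scaleFactor);
        } else if (currentScale > 1.0) {
            // Snap the final step back to exactly 1:1.
            scale(1.0 / currentScale, 1.0 / currentScale);
        }
    }

    // Forward the middle button as a left button press so the
    // ScrollHandDrag mode pans with it while zoomed in.
    void mousePressEvent(QMouseEvent *event) override
    {
        if (event->button() == Qt::MiddleButton && transform().m11() > 1.0) {
            QMouseEvent fake(event->type(), event->pos(), Qt::LeftButton,
                             Qt::LeftButton, event->modifiers());
            QGraphicsView::mousePressEvent(&fake);
        } else {
            QGraphicsView::mousePressEvent(event);
        }
    }

    void mouseReleaseEvent(QMouseEvent *event) override
    {
        if (event->button() == Qt::MiddleButton) {
            QMouseEvent fake(event->type(), event->pos(), Qt::LeftButton,
                             Qt::LeftButton, event->modifiers());
            QGraphicsView::mouseReleaseEvent(&fake);
        } else {
            QGraphicsView::mouseReleaseEvent(event);
        }
    }
};
```

Note that `angleDelta()` replaces the older `delta()` in recent Qt versions; on Qt 4 / early Qt 5 the original `delta()` call works the same way here.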
Related
void Editor::DisplayImage(QString pathOfImage)
{
    // QPixmap pix(pathOfImage);
    // ui->lbl_DisplayImage->setPixmap(pix);
    // ui->lbl_DisplayImage->lower();
    // annotationPixmap = QPixmap(pix.size());
    // annotationPixmap.fill(Qt::transparent);
    ui->lbl_DisplayImage->setPixmap(annotationPixmap);
    annotationPixmap = QPixmap(pathOfImage);
    clickMouse = false;
}
I am trying to move this pixmap and change its size. I have tried using labels, but they prevent me from drawing over the images: the drawings always end up behind the image.
Is there a way to manipulate the pixmap directly?
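One way to get movable images with annotations that always stay on top is a QGraphicsScene with separate items for the image and the drawing layer. The sketch below is a hypothetical, self-contained example (it does not use the asker's `Editor` class, and the gray stand-in pixmap replaces `QPixmap(pathOfImage)`):

```cpp
#include <QApplication>
#include <QGraphicsPixmapItem>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QPainter>
#include <QPen>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QGraphicsScene scene;

    // The image lives in its own item; it can be moved and scaled freely.
    QPixmap image(400, 300);                        // stand-in for QPixmap(pathOfImage)
    image.fill(Qt::darkGray);
    QGraphicsPixmapItem *imageItem = scene.addPixmap(image);
    imageItem->setFlag(QGraphicsItem::ItemIsMovable); // drag with the mouse

    // Annotations go on a transparent pixmap in a second item, above the image.
    QPixmap annotations(image.size());
    annotations.fill(Qt::transparent);
    QPainter painter(&annotations);
    painter.setPen(QPen(Qt::red, 3));
    painter.drawEllipse(100, 80, 120, 90);
    painter.end();
    QGraphicsPixmapItem *annotationItem = scene.addPixmap(annotations);
    annotationItem->setZValue(1);             // always drawn over the image
    annotationItem->setParentItem(imageItem); // follows the image when moved

    QGraphicsView view(&scene);
    view.show();
    return app.exec();
}
```

Because the annotation item is a child of the image item with a higher z-value, dragging the image carries the drawing with it, and the drawing can never end up behind the image.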
I've been trying to simultaneously grab frames from two different cameras, but no matter how many times I call VideoCapture::grab(), there seems to be no effect: the frame returned by VideoCapture::retrieve() is always the first frame captured after the last VideoCapture::retrieve().
I've tested it on both OpenCV 2.4 and 3.1, with a Logitech C920 camera on Windows.
Example:
VideoCapture vCapture;
Mat imgResult;
vCapture.open(0); //at this point, there is a green sheet in front of the camera
Sleep(100000); //change green sheet with red sheet
vCapture.grab(); //returns true
vCapture.retrieve(imgResult); //image with green sheet is retrieved
Sleep(100000); //change red sheet with blue sheet
vCapture.retrieve(imgResult); //red sheet is retrieved
I've also tried:
for (int i = 0; i < 1000; i++) {
    vCapture.grab();
} // takes almost no processing time, as if the loop were empty
vCapture.retrieve(imgResult); // same as before
Retrieve always returns true and retrieves a frame, even if no grab was called since opening vCapture.
My current solution has been retrieving the frame twice (multi-threaded) to ensure the latest frame, but it isn't reliable enough to sync both cameras. Can anyone shed some light on how to force the camera to grab the current frame?
Thanks!
Edit:
A variation of the first example, for clarity:
VideoCapture vCapture;
Mat imgResult;
vCapture.open(0); //at this point, there is a green sheet in front of the camera
vCapture.retrieve(imgResult); //image with green sheet is retrieved
Sleep(100000); //change green sheet with red sheet
vCapture.grab(); //returns true
vCapture.retrieve(imgResult); //green sheet is retrieved
vCapture.retrieve(imgResult); //red sheet is retrieved
Sleep(100000); //change red sheet with blue sheet
vCapture.retrieve(imgResult); //red sheet is retrieved
vCapture.retrieve(imgResult); //blue sheet is retrieved
Expected behavior:
VideoCapture vCapture;
Mat imgResult;
vCapture.open(0); //at this point, there is a green sheet in front of the camera
vCapture.retrieve(imgResult); //error or image with green sheet is retrieved
Sleep(100000); //change green sheet with red sheet
vCapture.grab(); //returns true
vCapture.retrieve(imgResult); //red sheet is retrieved
Per OpenCV documentation:
VideoCapture::grab: The methods/functions grab the next frame from video file or camera and return true (non-zero) in the case of success.
VideoCapture::retrieve: The methods/functions decode and return the just grabbed frame. If no frames has been grabbed (camera has been disconnected, or there are no more frames in video file), the methods return false and the functions return NULL pointer.
Please try this code with the following instructions:
Before and while you start the program, hold a red sheet in front of the camera; that is the moment the first .grab will be called.
Once you see the black window pop up, remove the red sheet and hold a blue sheet or something else (except the red or the green sheet) in front of the camera. Then press the 'q' key.
Now you have 5 seconds to change the scene again. Hold the green sheet in front of the camera and wait. The black window will be replaced by one of your camera images.
int main(int argc, char* argv[])
{
    cv::Mat input = cv::Mat(512, 512, CV_8UC1, cv::Scalar(0));
    cv::VideoCapture cap(0);
    while (cv::waitKey(10) != 'q')
    {
        cap.grab();
        cv::imshow("input", input);
    }
    cv::waitKey(5000);
    cap.retrieve(input);
    cv::imshow("input", input);
    cv::waitKey(0);
    return 0;
}
3 possible results:
you see the red sheet: this means the first grab was called and fixed the image until a retrieve was called.
you see the blue sheet: this means every .grab call "removes" one image, and the camera captures a new image on the next call of .grab.
you see the green sheet: this means your .retrieve doesn't need a .grab at all and just grabs images automatically.
For me, result 1 occurs, so you can't just call grab repeatedly and then .retrieve the latest image.
Test 2: control over everything.
It looks like you are right: on my machine, no matter when or how often I call .grab, the next image retrieved is the one captured at the time of the previous .retrieve, and the .grab calls don't seem to influence the capture time at all.
It would be very interesting to know whether the same behaviour occurs for other (all) kinds of cameras and operating systems.
I've tested on the internal camera of a T450s under Windows 7.
int main(int argc, char* argv[])
{
    cv::Mat input = cv::Mat(512, 512, CV_8UC1, cv::Scalar(0));
    cv::VideoCapture cap(0);
    bool grabbed = false;
    bool retrieved = false;
    while (true)
    {
        char w = cv::waitKey(0);
        switch (w)
        {
        case 'q': return 0;
        case 27:  return 0;
        case ' ': retrieved = cap.retrieve(input); break;
        case 'p': grabbed = cap.grab(); break;
        }
        cv::imshow("input", input);
    }
    return 0;
}
In addition, this simple code seems to be off by one frame for my camera (which therefore probably has a buffer size of 2?):
while (true)
{
    cap >> input;
    cv::imshow("input", input);
    cv::waitKey(0);
}
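Given the observed one-frame lag (and that a second .retrieve returned the newer frame in the tests above), the double-retrieve workaround the asker mentions can be combined with a buffer-size hint. This is a sketch, not a guaranteed fix: `CAP_PROP_BUFFERSIZE` is only honoured by some backends and only exists under this name since OpenCV 3, and the assumption that the lag is exactly one frame is driver-specific:

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return 1;

    // Ask the backend for a 1-frame buffer; set() returns false
    // when the property is unsupported by the backend in use.
    cap.set(cv::CAP_PROP_BUFFERSIZE, 1);

    cv::Mat latest;
    cap.grab();
    cap.retrieve(latest); // may still be the stale, buffered frame
    cap.retrieve(latest); // in the tests above, the second retrieve was current
    return 0;
}
```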
I ran my test and noticed very strange behavior of the .grab and .retrieve functions.
This is example:
cv::Mat input = cv::Mat(512, 512, CV_8UC1, cv::Scalar(0));
cv::VideoCapture cap(0);
while (true)
{
    cap.grab();
    cap.retrieve(input, 5);
    cv::imshow("input", input);
    cv::waitKey(0);
}
If you press a key slowly, about every 5 seconds, and change something in front of the camera between presses, the position of the object in the image changes only on every second showing, that is, on every second call of the .grab and .retrieve functions.
If you press a key quickly, about every second, and again change something in front of the camera between presses, the position of the object changes on every showing.
This suggests these functions can be used to sync cameras.
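If .grab and .retrieve do advance frames when called at the camera's rate, a two-camera sync loop would follow the pattern OpenCV's documentation describes for multi-camera setups: grab both cameras back to back first, then do the comparatively slow decode afterwards. A minimal sketch (the device indices 0 and 1 are assumptions):

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cam0(0), cam1(1); // device indices are assumptions
    if (!cam0.isOpened() || !cam1.isOpened())
        return 1;

    cv::Mat frame0, frame1;
    while (true)
    {
        // Grab both frames back to back so the capture instants are as
        // close together as the drivers allow...
        cam0.grab();
        cam1.grab();
        // ...then do the slower decode afterwards.
        cam0.retrieve(frame0);
        cam1.retrieve(frame1);

        cv::imshow("cam0", frame0);
        cv::imshow("cam1", frame1);
        if (cv::waitKey(1) == 'q')
            break;
    }
    return 0;
}
```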
I am writing a program to display two cameras next to each other. In Qt this is pretty simple with QCamera, but my cameras are rotated by 90°, so I have to rotate the picture in the program too.
QCamera has no way to rotate its output, so I want to display it in a label rather than a viewfinder: I take an image, rotate it, and display it in a label.
QImage img;
QPixmap img_;
img = ui->viewfinder->grab().toImage();
img_ = QPixmap::fromImage(img);
img_ = img_.transformed(QTransform().rotate(90));

QImage img2;
QPixmap img2_;
img2 = ui->viewfinder->grab().toImage();
img2_ = QPixmap::fromImage(img2);
img2_ = img2_.transformed(QTransform().rotate(90));

ui->label->setPixmap(img_);
ui->label_2->setPixmap(img2_);
When I start the program there are just two black boxes next to each other.
(The code where I declare everything is missing here, but the camera works fine in the viewfinder, so I think there is no problem there.)
Try this:
img_ = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1); // takes a snapshot as a QPixmap
or
img = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1).toImage(); // takes a snapshot as a QImage
(Note that QPixmap::grabWindow is deprecated in Qt 5 in favour of QScreen::grabWindow.)
You can use the orientation of the camera to correct the image orientation in view finder as described in Qt documentation. Here is the link:
http://doc.qt.io/qt-5/cameraoverview.html
and here is the code found in the documentation:
// Assuming a QImage has been created from the QVideoFrame that needs to be presented
QImage videoFrame;
QCameraInfo cameraInfo(camera); // needed to get the camera sensor position and orientation

// Get the current display orientation
const QScreen *screen = QGuiApplication::primaryScreen();
const int screenAngle = screen->angleBetween(screen->nativeOrientation(), screen->orientation());

int rotation;
if (cameraInfo.position() == QCamera::BackFace) {
    rotation = (cameraInfo.orientation() - screenAngle) % 360;
} else {
    // Front position, compensate the mirror
    rotation = (360 - cameraInfo.orientation() + screenAngle) % 360;
}

// Rotate the frame so it always shows in the correct orientation
videoFrame = videoFrame.transformed(QTransform().rotate(rotation));
It looks like you don't even understand what you are looking at...
What is the purpose of pasting code like that to a forum? Did you read the full description of it? It is only part of the code; I can see you don't understand it, but you are trying to be clever. :)
I have an image onto which I draw a rectangle, as shown below. After that I attempt to copy the contents of the rectangle onto another QLabel. This works, except that I cannot get the copied image to start from the top-left corner of the label. Here is what I am doing:
QPixmap original_image;
original_image.load("c:\\Images\\myimg.jpg");
original_image = original_image.scaled(ui.label->size().width(), ui.label->size().height());

//-----------------------------------------------------------------------
// Draw rectangle on this
QPixmap target_two(ui.label->size().width(), ui.label->size().height());
target_two.fill(Qt::transparent);
QPixmap target(ui.label->size().width(), ui.label->size().height());
target.fill(Qt::transparent);

QPainter painter(&target);
QPainter painter_two(&target_two);

// Region to start copying
QRegion r(QRect(0, 0, ui.label->size().width(), ui.label->size().height()), QRegion::Rectangle);
painter.setClipRegion(r);
painter.drawPixmap(0, 0, original_image); // Draw the original image in the clipped region

// clip_width/clip_height assumed; QRectF needs x, y, width, height
QRectF rectangle(x_start, y_start, clip_width, clip_height);
painter.drawRoundedRect(rectangle, 0, 0); // last two parameters are the corner radii; higher radius means more rounding

QRegion r_two(rectangle.toRect(), QRegion::Rectangle);
painter_two.setClipRegion(r_two);
painter_two.drawPixmap(0, 0, target);

ui.label->setPixmap(target);
ui.label_2->setPixmap(target_two);
The bottom picture is the image with the red rectangle in it, and that is fine.
The top picture is a copy of the contents of the rectangle. The only problem is that it does not start from the top-left corner.
Any suggestion on why I am not getting the copied content in the top-left corner?
The problem in your logic is that both target and target_two have the same size (the size of the label), and you draw the copied image at the same position it had in the initial label. I would solve this with the following code:
[..]
// Both of these lines can be removed.
// QRegion r_two(rectangle.toRect(), QRegion::RegionType::Rectangle);
// painter_two.setClipRegion(r_two);
// Target rect. in the left top corner.
QRectF targetRect(0, 0, rectangle.width(), rectangle.height());
QRectF sourceRect(rectangle);
// Draw only rectangular area of the source image into the new position.
painter_two.drawPixmap(targetRect, target, sourceRect);
[..]
If I draw some circles "in a sequence of frames" in OpenCV by using this function:
void circle(Mat& img, Point center, int radius, const Scalar& color,
int thickness = 1, int lineType = 8, int shift = 0);
...is there any way to show these circle points in a separate window?
You can create as many windows as you like using the imshow function. The first argument of imshow is the window name; the second is the image to display in it.
So, a simple way to show these circles in a separate window is to create a brand-new Mat (with the desired dimensions), draw the points on it, and display it using imshow. To draw the points you may use the circle function with a small radius, e.g. radius = 1.
More info: docs.opencv.org
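A minimal sketch of that suggestion (the window names, image sizes, and point coordinates are all made up for illustration):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Main image with the full circles.
    cv::Mat frame(480, 640, CV_8UC3, cv::Scalar(0, 0, 0));
    // Separate canvas of the same size, holding only the circle centers.
    cv::Mat points(frame.size(), frame.type(), cv::Scalar(0, 0, 0));

    std::vector<cv::Point> centers = { {100, 100}, {320, 240}, {500, 380} };
    for (const cv::Point &c : centers)
    {
        cv::circle(frame, c, 40, cv::Scalar(0, 255, 0), 2);   // full circle
        cv::circle(points, c, 1, cv::Scalar(0, 0, 255), -1);  // center as a filled dot
    }

    // Each imshow call with a distinct window name gets its own window.
    cv::imshow("frame", frame);
    cv::imshow("circle centers", points);
    cv::waitKey(0);
    return 0;
}
```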