I am writing a program to display two cameras next to each other. In Qt this is pretty simple with QCamera, but my cameras are physically rotated by 90°, so I have to rotate the picture in the program too.
QCamera itself has no way to rotate its output, so instead of showing it in a viewfinder I want to grab an image, rotate it and display it in a label.
QImage img;
QPixmap img_;
img = ui->viewfinder->grab().toImage();
img_ = QPixmap::fromImage(img);
img_ = img_.transformed(QTransform().rotate((90)%360));
QImage img2;
QPixmap img2_;
img2 = ui->viewfinder->grab().toImage();
img2_ = QPixmap::fromImage(img2);
img2_ = img2_.transformed(QTransform().rotate((90)%360));
ui->label->setPixmap(img_);
ui->label_2->setPixmap(img2_);
When I start the program there are just two black boxes next to each other.
(The code where I declare everything is missing here, but the camera works fine in the viewfinder, so I don't think the problem is there.)
Try this:
img_ = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1); (to take a snapshot as a QPixmap)
or
img = QPixmap::grabWindow(ui->viewfinder->winId(), 0, 0, -1, -1).toImage(); (to take a snapshot as a QImage)
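Putting the pieces together, here is a minimal, untested sketch of how the snapshot, rotation and label update could be driven by a QTimer. It assumes Qt 5, where QScreen::grabWindow() is the non-deprecated way to take the snapshot, and it reuses the widget names from the question; the interval is an arbitrary choice.

// Hedged sketch: periodically grab the viewfinder, rotate it by 90° and show it in the label.
QTimer *timer = new QTimer(this);
connect(timer, &QTimer::timeout, this, [this]() {
    QScreen *screen = QGuiApplication::primaryScreen();
    QPixmap snap = screen->grabWindow(ui->viewfinder->winId());
    QPixmap rotated = snap.transformed(QTransform().rotate(90));
    ui->label->setPixmap(rotated.scaled(ui->label->size(), Qt::KeepAspectRatio));
});
timer->start(33); // roughly 30 fps; adjust as needed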
You can use the orientation of the camera to correct the image orientation in the viewfinder, as described in the Qt documentation. Here is the link:
http://doc.qt.io/qt-5/cameraoverview.html
and here is the code found in the documentation:
// Assuming a QImage has been created from the QVideoFrame that needs to be presented
QImage videoFrame;
QCameraInfo cameraInfo(camera); // needed to get the camera sensor position and orientation
// Get the current display orientation
const QScreen *screen = QGuiApplication::primaryScreen();
const int screenAngle = screen->angleBetween(screen->nativeOrientation(), screen->orientation());
int rotation;
if (cameraInfo.position() == QCamera::BackFace) {
    rotation = (cameraInfo.orientation() - screenAngle) % 360;
} else {
    // Front position, compensate the mirror
    rotation = (360 - cameraInfo.orientation() + screenAngle) % 360;
}
// Rotate the frame so it always shows in the correct orientation
videoFrame = videoFrame.transformed(QTransform().rotate(rotation));
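As a small follow-up (my own sketch, not from the documentation), the same computed rotation value can be applied to the pixmap grabbed from the viewfinder instead of a hard-coded 90°:

// Sketch: reuse the 'rotation' computed above when rotating the grabbed frame.
QPixmap snap = ui->viewfinder->grab();
ui->label->setPixmap(snap.transformed(QTransform().rotate(rotation)));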
It looks like you don't even understand what you are looking at...
What's the purpose of pasting stuff like that to the forum? Did you read the whole description? It's only part of the code, which, as I can see, you don't understand, but you try to be smart :)
I have been trying to use absdiff to find the motion in an image, but unfortunately it fails; I am new to OpenCV. The code is supposed to use absdiff to determine whether any motion is happening around or not, but the output is pitch black for diff1, diff2 and motion, while next_mframe, current_mframe and prev_mframe show grayscale images and result shows a clear, normal image. I used this as my reference: http://manmade2.com/simple-home-surveillance-with-opencv-c-and-raspberry-pi/. I think all the image buffers are loaded with the same frame and then compared, which would explain why the result is pitch black. Is there another method I am missing? I am using RTSP to pass the camera's raw image to ROS.
void imageCallback(const sensor_msgs::ImageConstPtr& msg_ptr){
    CvPoint center;
    int radius, posX, posY;
    cv_bridge::CvImagePtr cv_image; // to parse image_raw from RTSP
    try
    {
        cv_image = cv_bridge::toCvCopy(msg_ptr, enc::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    frame = new IplImage(cv_image->image);  // frame now holds the raw image
    frame1 = new IplImage(cv_image->image);
    frame2 = new IplImage(cv_image->image);
    frame3 = new IplImage(cv_image->image);
    matriximage = cvarrToMat(frame);
    cvtColor(matriximage, matriximage, CV_RGB2GRAY);        // grayscale
    prev_mframe = cvarrToMat(frame1);
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY);        // grayscale
    current_mframe = cvarrToMat(frame2);
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY);  // grayscale
    next_mframe = cvarrToMat(frame3);
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY);        // grayscale
    // Maximum deviation of the image; the higher the value, the more motion is allowed
    int max_deviation = 20;
    result = matriximage;
    // relocate the images in the right order
    prev_mframe = current_mframe;
    current_mframe = next_mframe;
    next_mframe = matriximage;
    //motion = diffImg(prev_mframe, current_mframe, next_mframe);
    absdiff(prev_mframe, next_mframe, diff1); // this should show a black-and-white image
    absdiff(next_mframe, current_mframe, diff2);
    bitwise_and(diff1, diff2, motion);
    threshold(motion, motion, 35, 255, CV_THRESH_BINARY);
    erode(motion, motion, kernel_ero);
    imshow("Motion Detection", result);
    imshow("diff1", diff1);   // I tried to output the image but it's all black
    imshow("diff2", diff2);   // same here, I tried to output the image but it's all black
    imshow("diff1", motion);
    imshow("nextframe", next_mframe);
    imshow("motion", motion);
    char c = cvWaitKey(3);
}
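For what it's worth, the symptom described above (diff1, diff2 and motion all black) is exactly what you would see if every buffer is refilled from the same incoming frame on each callback, so absdiff always compares identical images. Here is a hedged sketch of one way to keep real history between callbacks, using member cv::Mat objects and clone(); the names and the callback signature are illustrative, not taken from the original code.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Illustrative sketch: keep previous frames around so absdiff compares different frames.
cv::Mat prevGray, currGray, nextGray;   // assumed to be class members that persist between callbacks

void onNewFrame(const cv::Mat& bgr)
{
    cv::Mat gray;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);

    // shift the history: the oldest frame drops off, the newest comes in as a deep copy
    prevGray = currGray;
    currGray = nextGray;
    nextGray = gray.clone();

    if (prevGray.empty() || currGray.empty())
        return; // need three frames before differencing makes sense

    cv::Mat diff1, diff2, motion;
    cv::absdiff(prevGray, nextGray, diff1);
    cv::absdiff(nextGray, currGray, diff2);
    cv::bitwise_and(diff1, diff2, motion);
    cv::threshold(motion, motion, 35, 255, cv::THRESH_BINARY);
}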
I changed the cv_bridge method to VideoCapture and it seems to work well; cv_bridge just cannot save the image, even though I changed the IplImage to the Mat format. Maybe there are other ways, but for now I will go with this method first.
VideoCapture cap(0);

Tracker(void)
{
    // check if the camera worked
    if(!cap.isOpened())
    {
        cout << "cannot open the Video cam" << endl;
    }
    cout << "camera is opening" << endl;

    // capture 3 frames and convert them to grayscale
    cap >> prev_mframe;
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY);
    cap >> current_mframe;
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY);
    cap >> next_mframe;
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY);

    // relocate the images in the right order
    current_mframe.copyTo(prev_mframe);
    next_mframe.copyTo(current_mframe);
    matriximage.copyTo(next_mframe);

    motion = diffImg(prev_mframe, current_mframe, next_mframe);
}
I want to plot circles on an image, where each previous circle is erased from the image before the next circle is drawn.
I have the following configuration:
I have several pictures (let's say 10).
For each picture I test several pixels for some condition (let's say 50 pixels).
For each pixel I'm testing (or working on) I want to draw a circle at that pixel, purely so I can visualize it.
To summarize, I have two for loops: one looping over the 10 images and the other looping over the 50 pixels.
I did the following (see the code below). The circles are drawn correctly, but when the next circle is drawn the previous circle is still visible (in the end all circles end up drawn on the same image). What I want is, after a circle has been drawn, to somehow close the picture (or window), reopen a new one and plot the next circle on it, and so on.
for(int imgID = 0; imgID < numbImgs; imgID++)
{
    cv::Mat colorImg = imgVector[imgID];

    for(int pixelID = 0; pixelID < numPixelsToBeTested; pixelID++)
    {
        some_pixel = ... //some pixel
        x = some_pixel(0); y = some_pixel(1);

        cv::Mat colorImg2 = colorImg; //redefine the image for each pixel
        cv::circle(colorImg2, cv::Point(x,y), 5, cv::Scalar(0,0,255), 1, cv::LINE_8, 0);

        // creating a new window each time
        cv::namedWindow("Display", CV_WINDOW_AUTOSIZE);
        cv::imshow("Display", colorImg2);
        cv::waitKey(0);
        cv::destroyWindow("Display");
    }
}
What is wrong in my code?
Thanks guys
cv::circle() modifies the image you pass to it in place, so what you need to do is create a clone of the original image, draw the circle on the cloned image, and re-clone from the original at each iteration.
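The detail that trips people up, and the likely bug in the question's inner loop, is that cv::Mat colorImg2 = colorImg; copies only the Mat header, not the pixels, so both names refer to the same buffer. A tiny standalone illustration (the image path is hypothetical):

cv::Mat original = cv::imread("some_image.png");  // hypothetical input
cv::Mat shallow  = original;                      // header copy: shares pixel data with 'original'
cv::Mat deep     = original.clone();              // deep copy: owns its own pixel buffer

cv::circle(shallow, cv::Point(10, 10), 5, cv::Scalar(0, 0, 255)); // also draws on 'original'
cv::circle(deep,    cv::Point(10, 10), 5, cv::Scalar(0, 0, 255)); // leaves 'original' untouched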
It is also a good idea to break your program into smaller methods, making the code more readable and easier to understand. The following code may give you a starting point.
void visualizePoints(cv::Mat mat) {
    cv::Mat debugMat = mat.clone();
    // Dummy set of points, to be replaced with the 50 points you are using.
    std::vector<cv::Point> points = {cv::Point(30, 30), cv::Point(30, 100), cv::Point(100, 30), cv::Point(100, 100)};
    for (cv::Point p : points) {
        cv::circle(debugMat, p, 5, cv::Scalar(0, 0, 255), 1, cv::LINE_8, 0);
        cv::imshow("Display", debugMat);
        cv::waitKey(800);
        debugMat = mat.clone();
    }
}

int main() {
    std::vector<std::string> imagePaths = {"path/img1.png", "path/img2.png", "path/img3.png"};
    cv::namedWindow("Display", CV_WINDOW_AUTOSIZE);
    for (std::string path : imagePaths) {
        cv::Mat img = cv::imread(path);
        visualizePoints(img);
    }
}
I am working on a project with openFrameworks using ofxCv, ofxOpencv and ofxColorQuantizer. The project analyzes live video captured via webcam in real time to find and output the most prominent color in the current frame. To generate the most prominent color I use the pixel difference between the current frame and the previous frame to work out which areas have changed, and I use those updated or moving areas of the video frame to figure out the most prominent colors.
The reason for using the pixel differences to generate the color palette is that I want to handle the case where a user walks into the video frame: I want to gather the color palette of the person, for instance what they are wearing. For example, a red shirt and blue pants would be in the palette and the white background would be excluded.
I have a strong background in JavaScript and canvas but am fairly new to openFrameworks and C++, which is why I think I am running into a roadblock with the problem described above.
Along with openFrameworks I am using ofxCv, ofxOpencv and ofxColorQuantizer as tools for this installation. I take a webcam image, make it a cv::Mat, run pyrDown on it twice, follow that with an absdiff of the Mat, and then try to pass the Mat into ofxColorQuantizer. This is where I think I am running into problems: I don't think ofxColorQuantizer likes the Mat format of the image I am trying to use. I have looked for a different image format to convert the image to in order to solve this issue, but I haven't been able to come to a solution.
For efficiency I am hoping to do the color difference and color prominence calculations on the smaller image (after I pyrDown the image), display the full image on the screen, and display the generated color palette at the bottom left like in the ofxColorQuantizer example.
I think there may be other ways to speed up the code, but at the moment I am trying to get this portion of the app working first.
I have my main.cpp set up as follows:
#include "ofMain.h"
#include "ofApp.h"
#include "ofAppGlutWindow.h"
//========================================================================
int main( ){
    ofAppGlutWindow window;
    ofSetupOpenGL(&window, 1024, 768, OF_WINDOW);  // <-------- setup the GL context
    // ofSetupOpenGL(1024,768,OF_WINDOW);          // <-------- setup the GL context

    // this kicks off the running of my app
    // can be OF_WINDOW or OF_FULLSCREEN
    // pass in width and height too:
    ofRunApp(new ofApp());
}
My ofApp.h file is as follows:
#pragma once

#include "ofMain.h"
#include "ofxOpenCv.h"
#include "ofxCv.h"
#include "ofxColorQuantizer.h"

class ofApp : public ofBaseApp{
public:
    void setup();
    void update();
    void draw();

    ofVideoGrabber cam;
    ofPixels previous;
    ofImage diff;

    void kMeansTest();

    ofImage image;
    ofImage img;
    cv::Mat matA, matB;
    ofImage diffCopy;
    ofImage outputImage;

    ofxCv::RunningBackground background;
    ofxColorQuantizer colorQuantizer;

    // a scalar is like an ofVec4f but normally used for storing color information
    cv::Scalar diffMean;
};
And finally my ofApp.cpp is below:
#include "ofApp.h"
using namespace ofxCv;
using namespace cv;
//--------------------------------------------------------------
void ofApp::setup(){
    ofSetVerticalSync(true);
    cam.initGrabber(320, 240);

    // get our colors
    colorQuantizer.setNumColors(3);

    // resize the window to match the image
    // ofSetWindowShape(image.getWidth(), image.getHeight());
    ofSetWindowShape(800, 600);

    // imitate() will set up previous and diff
    // so they have the same size and type as cam
    imitate(previous, cam);
    imitate(diff, cam);
    imitate(previous, outputImage);
    imitate(diff, outputImage);
}
//--------------------------------------------------------------
void ofApp::update(){
    cam.update();
    if(cam.isFrameNew()) {
        matA = ofxCv::toCv(cam.getPixelsRef());
        ofxCv::pyrDown(matA, matB);
        ofxCv::pyrDown(matB, matA);
        ofxCv::medianBlur(matA, 3);
        ofxCv::toOf(matA, outputImage);

        // take the absolute difference of prev and cam and save it inside diff
        absdiff(previous, outputImage, diff);
    }
}
//--------------------------------------------------------------
void ofApp::draw(){
    // If the image is ready to draw, then draw it
    if(outputImage.isAllocated()) {
        outputImage.update();
        outputImage.draw(0, 0, ofGetWidth(), ofGetHeight());
    }

    ofBackground(100,100,100);
    ofSetColor(255);

    ofImage diffCopy;
    diffCopy = diff;
    diffCopy.resize(diffCopy.getWidth()/2, diffCopy.getHeight()/2);

    // there is some sort of bug / issue going on here...
    // it prevents the app from compiling
    // comment it out to run and see a blank page
    colorQuantizer.quantize(diffCopy.getPixelsRef());

    ofLog() << "the number is " << outputImage.getHeight();
    ofLog() << "the number is " << diffCopy.getHeight();

    ofSetColor(255);
    img.update();
    // cam.draw(0, 0, 800, 600);
    outputImage.draw(0, 0, 800, 600);
    // colorQuantizer.draw(ofPoint(0, cam.getHeight()-20));
    colorQuantizer.draw(ofPoint(0, 600-20));

    // use the [] operator to get elements from a Scalar
    float diffRed = diffMean[0];
    float diffGreen = diffMean[1];
    float diffBlue = diffMean[2];

    ofSetColor(255, 0, 0);
    ofRect(0, 0, diffRed, 10);
    ofSetColor(0, 255, 0);
    ofRect(0, 15, diffGreen, 10);
    ofSetColor(0, 0, 255);
    ofRect(0, 30, diffBlue, 10);
}
//--------------------------------------------------------------
void ofApp::kMeansTest(){
    cv::Mat samples = (cv::Mat_<float>(8, 1) << 31, 2, 10, 11, 25, 27, 2, 1);
    cv::Mat labels;

    // double kmeans(const Mat& samples, int clusterCount, Mat& labels,
    cv::TermCriteria termcrit;
    int attempts, flags;
    cv::Mat centers;
    double compactness = cv::kmeans(samples, 3, labels, cv::TermCriteria(), 2, cv::KMEANS_PP_CENTERS, centers);

    cout << "labels:" << endl;
    for(int i = 0; i < labels.rows; ++i)
    {
        cout << labels.at<int>(0, i) << endl;
    }
    cout << "\ncenters:" << endl;
    for(int i = 0; i < centers.rows; ++i)
    {
        cout << centers.at<float>(0, i) << endl;
    }
    cout << "\ncompactness: " << compactness << endl;
}
Apologies in advance for the state of my code — it's getting late and I'm trying to get this done.
My questions are: what image format does openFrameworks use when grabbing the webcam image, what image format does OpenCV expect, what should I use to convert from a Mat back to an ofImage, and is there a way to getPixelsRef from a Mat?
The area of code where I think something is wrong is the following logic.
I have this line of code, which gets the video frame from the webcam: matA = ofxCv::toCv(cam.getPixelsRef());
Then I run a couple of ofxCv procedures on the frame, such as ofxCv::pyrDown(matA, matB);, which I think changes the image format or pixel format of the frame.
Then I convert the frame back to OF with ofxCv::toOf(matA, outputImage);.
Next I take the difference in pixels between the current frame and the last frame and create a copy of that difference. The issue potentially lies here, with the format of the diff output image.
Then I pass the diff copy to colorQuantizer.quantize(diffCopy.getPixelsRef()); to try to generate the color palette for the changed pixels.
It is the colorQuantizer class and function call that gives me an error, which reads: [ error ] ofTexture: allocate(): ofTextureData has 0 width and/or height: 0x0
together with an EXC_BAD_ACCESS.
And lastly, could there be an alternative cause for the EXC_BAD_ACCESS error other than image formatting? Being new to C++ I'm just guessing and going off instinct about what I think the root cause of my problem is.
Many thanks.
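Since part of the question is about moving between ofImage/ofPixels and cv::Mat, here is a hedged sketch of the round trip with ofxCv, based on my reading of the ofxCv helpers rather than on this project's code: toCv() wraps the existing ofPixels memory in a cv::Mat header without copying, while toOf() copies a Mat back into an ofImage and allocates it, after which update() refreshes the texture before drawing.

// Illustrative only: ofPixels <-> cv::Mat round trip with ofxCv.
cv::Mat camMat = ofxCv::toCv(cam.getPixelsRef()); // 8-bit, 3-channel view onto the grabber's pixels
cv::Mat diffMat;
cv::absdiff(camMat, previousMat, diffMat);        // 'previousMat' assumed buffered from the last frame

ofImage diffImage;
ofxCv::toOf(diffMat, diffImage);                  // allocates diffImage and copies the Mat into it
diffImage.update();                               // refresh the texture before drawing

colorQuantizer.quantize(diffImage.getPixelsRef()); // the quantizer works on ofPixels, not on cv::Mat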
I have spent a lot of time trying to find a solution but cannot, so I hope you can help me. The code is a bit long, so I give here just the part where I have the problem. My code captures a bitmap from a window and saves it in HBitmap. I need to rotate the bitmap, so I start GDI+ and create a bitmap pBitmap from HBitmap:
// INIT GDI
ULONG_PTR gdiplusToken;
GdiplusStartupInput gdiplusStartupInput;
GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
if (!gdiplusToken) return 3;
// Gdip_GetRotatedDimensions:
GpBitmap* pBitmap;
int result = Gdiplus::DllExports::GdipCreateBitmapFromHBITMAP(HBitmap, 0, &pBitmap);
Then I calculate the variables needed for the rotation, create a graphics object and try to rotate the image:
GpGraphics * pG;
result = Gdiplus::DllExports::GdipGetImageGraphicsContext(pBitmap, &pG);
Gdiplus::SmoothingMode smooth = SmoothingModeHighQuality;
result = Gdiplus::DllExports::GdipSetSmoothingMode(pG, smooth);
Gdiplus::InterpolationMode interpolation = InterpolationModeNearestNeighbor;
result = Gdiplus::DllExports::GdipSetInterpolationMode(pG, interpolation);
MatrixOrder MatrixOrder_ = MatrixOrderPrepend;
result = Gdiplus::DllExports::GdipTranslateWorldTransform(pG, xTranslation, yTranslation, MatrixOrder_);
MatrixOrder_ = MatrixOrderPrepend;
result = Gdiplus::DllExports::GdipRotateWorldTransform(pG, ROTATION_ANGLE, MatrixOrder_);
GpImageAttributes * ImgAttributes;
result = Gdiplus::DllExports::GdipCreateImageAttributes(&ImgAttributes); // create an ImageAttribute object
result = Gdiplus::DllExports::GdipDrawImageRectRect(pG,pBitmap,0,0,w,h,0,0,w,h,UnitPixel,ImgAttributes,0,0); // Draw the original image onto the new bitmap
result = Gdiplus::DllExports::GdipDisposeImageAttributes(ImgAttributes);
Finally I wanted to check the image so I added:
CLSID pngClsid;
GetEncoderClsid(L"image/png", &pngClsid);
result = Gdiplus::DllExports::GdipCreateBitmapFromGraphics(w, h, pG, &pBitmap);
result = Gdiplus::DllExports::GdipSaveImageToFile(pBitmap, L"justest.png", &pngClsid, NULL); // last voluntary? GDIPCONST EncoderParameters* encoderParams
But my image is blank. I found out that GdipCreateBitmapFromGraphics creates a blank image, but how should I finish the process so I can check the drawing I have done? Are these steps correct (not just here but also above, near GdipCreateBitmapFromHBITMAP() and GdipGetImageGraphicsContext()), or do I need to add something? How do I get it working?
PS: I am sure that HBitmap contains the picture of the window; I have already checked that.
To my eyes, you have some things backwards in your approach.
What you need to do is the following:
Read in your Image (src)
Find the minimum bounding rectangle that will contain the rotated image (i.e., rotate the corners; the distances between the min and max x and y are the dimensions).
Create a new Image object with these dimensions, the pixel format you want (likely the same as src, but maybe you want an alpha channel) and the background color you want (dst)
Create a graphics based on dst (new Graphics(dst))
Set the appropriate transform on the graphics
Draw src onto dst
Export dst
The good news is that to make sure you're doing things right, you can isolate steps out.
For example, you can just make an image and a graphics and draw a line on it with no transform (or better, a box with an X) and save that. If you get what you expect, then you're on the right path. Next, add a transform to the box; in your case you'll need both a rotation and a translation. Next, get the dimensions of the destination image right for that rotation (pro tip: don't use a square to test). Finally, do it with your actual image.
This will get you step-by-step to the correct output instead of trying to get the whole thing in one shot.
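To make the list above concrete, here is a minimal sketch of steps 3 to 6 using the GDI+ C++ wrapper classes rather than the flat DllExports API. It assumes a plain 90° rotation, which keeps the bounding-box math trivial, and it reuses HBitmap and the GetEncoderClsid() helper from the question; treat it as an outline, not tested code.

Gdiplus::Bitmap src(HBitmap, NULL);                  // wrap the captured HBITMAP (src)
UINT w = src.GetWidth(), h = src.GetHeight();

Gdiplus::Bitmap dst(h, w, PixelFormat32bppARGB);     // step 3: a 90° rotation swaps width and height
Gdiplus::Graphics g(&dst);                           // step 4: graphics based on dst, not on src

g.TranslateTransform((Gdiplus::REAL)h, 0.0f);        // step 5: shift so the rotated image lands inside dst
g.RotateTransform(90.0f);
g.DrawImage(&src, 0, 0, (INT)w, (INT)h);             // step 6: draw src through the transform

CLSID pngClsid;
GetEncoderClsid(L"image/png", &pngClsid);            // helper from the question
dst.Save(L"justest.png", &pngClsid, NULL);           // step 7: export dst itself; no extra bitmap needed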
Making the usual blob tracker with OpenCV and cvBlobsLib, I've come across this problem and it seems no one else has had it, which makes me sad. I get the RGB/BGR frame, choose the color to isolate, threshold it into b/w, find the blobs and add a bounding rectangle to each blob, but when I display the final image the box is stretched on the x-axis: when the object is on the left the box is close to it (although around 2.5 times larger), and as it moves to the right the box moves faster (further and further from the object) until it reaches the right end of the window when the object isn't even halfway across. This doesn't happen on the y-axis, where everything is fine.

It's not a problem with rectangles; it happens with fillBlob as well, where the blob shape comes out stretched and misaligned. Also, it's not a problem related to image capture, since I've tried with a Kinect (OpenNI), a webcam and even a single image (imread()), and I verified that every ImageGenerator, Mat and IplImage used was 640x480, 8-bit depth, for which I used AUTOSIZE for the namedWindow (enlarging to a fullscreen window doesn't help either). Showing the BGR frame and the thresholded image gives no problems, they both fit into the window, but the detected blobs seem to belong to a different resolution space when I merge them with the original image. Here's the code; not much has changed from the usual examples found online everywhere:
//[...]
namedWindow("Color Image", CV_WINDOW_AUTOSIZE);
namedWindow("Color Tracking", CV_WINDOW_AUTOSIZE);
//[...] I already got the two cv::Mat I need, imgBGR and imgTresh
CBlobResult blobs;
CBlob *currentBlob;
Point pt1, pt2;
Rect rect;
//had to do Mat to IplImage conversion, since cvBlobsLib doesn't like mats
IplImage iplTresh = imgTresh;
IplImage iplBGR = imgBGR;
blobs = CBlobResult(&iplTresh, NULL, 0);
blobs.Filter(blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 100);
int nBlobs = blobs.GetNumBlobs();
for (int i = 0; i < nBlobs; i++)
{
    currentBlob = blobs.GetBlob(i);
    rect = currentBlob->GetBoundingBox();
    pt1.x = rect.x;
    pt1.y = rect.y;
    pt2.x = rect.x + rect.width;
    pt2.y = rect.y + rect.height;
    cvRectangle(&iplBGR, pt1, pt2, cvScalar(255, 255, 255, 0), 3, 8, 0);
}
//[...]
imshow("Color Image", imgBGR);
imshow("Color Tracking", imgTresh);
The "[...]" is code that shouldn't have nothing to do with this issue, but if you need further info on how I handled the images, let me know and I'll post it.
Based on the fact that the way I capture the image doesn't change anything, that BGR frame and B/W image are well shown, and that after getting blobs any way of displaying them gives the same (wrong) result, the problem must be something between CBlobResult() and matrix2ipl conversion, but I don't really know how to find it out.
Oh god, I spent ages writing up the whole problem and the next day I found the answer almost by accident. When I created the B/W matrix for thresholding, I didn't make it single-channel; I copied the BGR matrix type, so I ended up with a threshold image with 3 channels, which resulted in a widthStep 3 times the frame width. Resolved by creating cv::Mat imgTresh with CV_8UC1 as the type.
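For anyone hitting the same thing, a hedged one-line version of that fix (the frame sizes come from the setup described above, the variable names from the question):

// Allocate the threshold image as single-channel 8-bit, matching the frame size:
cv::Mat imgTresh(imgBGR.rows, imgBGR.cols, CV_8UC1, cv::Scalar(0));
// or, when converting from the BGR frame, let cvtColor allocate it with the right type:
cv::cvtColor(imgBGR, imgTresh, CV_BGR2GRAY);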