In my code, I want to find the dimensions of an image in inches. Via OpenCV, I can get the height and width of the image's pixel array using the following code:
#include "stdafx.h"
#include <cv.h>
#include <cxcore.h>
#include <highgui.h>
#include <iostream>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
IplImage *img = cvLoadImage("photo.jpg");
if (!img) {
printf("Error: Couldn't open the image file.\n");
return 1;
}
cout<<"Number of pixels in width = "<<img->width<<endl<<"Number of pixels in height = "<<img->height;
return(0);
}
Please help me find the size of the image in inches.
Thanks in advance...
You need to know the DPI of your display. For that, you'll need to look into your platform's SDK (Windows/Linux/Mac) to learn how to retrieve this info since OpenCV doesn't provide a feature for this.
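For example, on Windows you can query the screen DPI through GDI. A minimal sketch (Windows-only, and assuming img is the IplImage loaded in the question):

#include <windows.h>

// Query the screen DPI via Win32 GDI, then convert pixels to inches.
HDC screen = GetDC(NULL);
int dpiX = GetDeviceCaps(screen, LOGPIXELSX); // horizontal pixels per logical inch
int dpiY = GetDeviceCaps(screen, LOGPIXELSY); // vertical pixels per logical inch
ReleaseDC(NULL, screen);

double widthInches  = img->width  / (double)dpiX;
double heightInches = img->height / (double)dpiY;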
Image Size Calculator is a JavaScript calculator that performs this calculation. Check the source code of the page for the code.
You must define a pixels-per-inch ratio; once you have it, the conversion is a simple division.
If you want the size of the image in inches as it appears on your monitor, take the monitor's resolution and its physical size, and that gives you the ratio.
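A small worked example, with hypothetical monitor numbers:

// Hypothetical numbers: a 1920x1080 monitor whose panel is 20.9 inches wide.
double ppi = 1920.0 / 20.9;               // ~91.9 pixels per inch
double imgWidthInches = img->width / ppi; // on-screen width of the image from the question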
You can't. If I take a picture of the moon, the moon's diameter may well be 127 pixels. How many inches should that be? The moon is shining through a tree in that picture, and the tree is 341 pixels wide. How many inches is the tree? Really??
Related
I am trying to create a program that will open my camera, take a frame, and locate a laser point. I want to analyse each frame to find a red laser dot. Ideally, I would insert lines at the x and y coordinates of the laser dot's centroid, updated with each new frame. The camera I am using to photograph the laser is fairly high quality and takes 2048x2048 images, so going through each pixel takes far too long, and I need this software to run fairly quickly. I already have the code to open the camera and acquire frames.
[laser image]
What is the best way to write the code to locate the central point of the laser point within each frame?
Thanks,
#include <stdio.h>
#include <iostream>
#include "xiApiPlusOcv.hpp"
#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

#define EXPECTED_IMAGES 200

int main(int argc, char* argv[])
{
    try
    {
        xiAPIplusCameraOcv cam;
        // Retrieve a handle to the camera device
        printf("Opening first camera...\n");
        cam.OpenFirst();
        cam.SetExposureTime(10000); // 10000 us = 10 ms
        // Note: the default parameters of each camera might differ between API versions
        cam.SetImageDataFormat(XI_RGB24);
        printf("Starting acquisition...\n");
        cam.StartAcquisition();
        for (int images = 0; images < EXPECTED_IMAGES; images++)
        {
            Mat cv_mat_image = cam.GetNextImageOcvMat();
            Mat outImg;
            resize(cv_mat_image, outImg, Size(), 0.35, 0.35);
            imshow("image", outImg);
            waitKey(20); // give HighGUI time to draw the frame
        }
        cam.StopAcquisition();
        cam.Close();
        printf("Done\n");
        waitKey(500);
    }
    catch (xiAPIplus_Exception& exp)
    {
        printf("Error:\n");
        exp.PrintError();
        waitKey(2000);
    }
    return 0;
}
I'm a newbie with OpenCV. I just managed to install it and set it up in Visual Studio 2013. I tested it with a sample live stream from my laptop's camera and it works. Now I want to calculate the distance from the webcam to a red laser spot that will be in the middle of the screen (live stream). Where should I start? I know that I must find the red pixel in the middle of the screen, but I don't know how to do that or which functions to use. Some help, please?
The working webcam live-stream code is shown below:
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <stdio.h>
int main()
{
//Data Structure to store cam.
CvCapture* cap=cvCreateCameraCapture(0);
//Image variable to store frame
IplImage* frame;
//Window to show livefeed
cvNamedWindow("Imagine Live",CV_WINDOW_AUTOSIZE);
while(1)
{
//Load the next frame
frame=cvQueryFrame(cap);
//If frame is not loaded break from the loop
if(!frame)
printf("\nno");;
//Show the present frame
cvShowImage("Imagine Live",frame);
//Escape Sequence
char c=cvWaitKey(33);
//If the key pressed by user is Esc(ASCII is 27) then break out of the loop
if(c==27)
break;
}
//CleanUp
cvReleaseCapture(&cap);
cvDestroyAllWindows();
}
Your red dot is most likely going to show up as total white in the camera stream, so I would suggest the following (sketched in code below):
1. Convert to grayscale using cvtColor().
2. Threshold using threshold(); for parameters, use something like thresh=253, maxval=255 and mode THRESH_BINARY. That should give you an image that is all black with a small white dot where your laser is.
3. Use findContours() to locate your dot in the image. Get the boundingRect() of a contour, and then you can calculate its center to get the precise coordinates of your dot.
Also, as has already been mentioned, do not use the deprecated C API; use the C++ API instead.
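A minimal sketch of those steps, assuming a single bright dot per frame (the threshold of 253 is a starting point to tune):

#include <opencv2/opencv.hpp>
using namespace cv;

// Find the center of a single bright laser dot in a BGR frame.
Point2f findLaserDot(const Mat& frame)
{
    Mat gray, mask;
    cvtColor(frame, gray, CV_BGR2GRAY);             // 1. grayscale
    threshold(gray, mask, 253, 255, THRESH_BINARY); // 2. keep only near-saturated pixels
    std::vector<std::vector<Point> > contours;
    findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE); // 3. locate blobs
    if (contours.empty())
        return Point2f(-1.0f, -1.0f); // no dot visible in this frame
    Rect box = boundingRect(contours[0]);           // 4. bounding box of the dot
    return Point2f(box.x + box.width / 2.0f, box.y + box.height / 2.0f);
}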
I'm new to OpenCV and am working on a video analysis project. Basically, I want to split my webcam feed into two sides (left and right), and I have already figured out how to do this. However, I also want to analyze each side for red and green colors and print out the number of pixels that are red/green. I must have gone through every possible blog to figure this out, but alas, it still doesn't work. The following code runs; however, instead of detecting red as the code might suggest, it seems to pick up white (all light sources and white walls). I have spent hours combing through the code but still cannot find the solution. Please help! Also note that this is being run on OS X 10.8, via Xcode. Thanks!
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    VideoCapture cap(0); // capture the video from the webcam
    if ( !cap.isOpened() ) // if not successful, exit the program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }
    namedWindow("HSVLeftRed", CV_WINDOW_AUTOSIZE);
    namedWindow("HSVLeftGreen", CV_WINDOW_AUTOSIZE);
    while (true)
    {
        Mat image;
        cap.read(image);
        Mat HSV;
        Mat threshold;
        // Left cropping
        Mat leftimg = image(Rect(0, 0, 640, 720));
        // Left red detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(0, 0, 150), Scalar(0, 0, 255), threshold);
        imshow("HSVLeftRed", threshold);
        // Left green detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        // inRange(HSV, Scalar(/* green min */), Scalar(/* green max */), threshold); // still need to find proper values
        imshow("HSVLeftGreen", threshold);
        if (waitKey(30) == 27) // give HighGUI time to draw; Esc exits
            break;
    }
    return 0;
}
You're cropping a 640x720 area, which might not exactly fit your content. Tip: check your actual capture resolution with capture.get(CAP_PROP_FRAME_WIDTH) and capture.get(CAP_PROP_FRAME_HEIGHT). You might also want to rename Mat threshold to Mat thresholded, since threshold is already the name of an OpenCV function. This is just some ranting :)
What I suspect is the actual issue is the threshold you use for HSV. According to the cvtColor documentation, section on RGB to HSV conversion,
On output 0 <= V <= 1,
so you should use a float for your V threshold, i.e. 150 -> 150/255 ≈ 0.58, etc.
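Separately, note that picking out red in 8-bit HSV usually means thresholding on hue, not only on value. A hedged sketch (the range numbers are starting guesses to tune, using the HSV Mat from the question):

// Red hue wraps around 0/180 in 8-bit HSV, so combine two hue ranges;
// the S and V minimums reject white and gray pixels.
Mat maskLow, maskHigh, redMask;
inRange(HSV, Scalar(0, 100, 100), Scalar(10, 255, 255), maskLow);
inRange(HSV, Scalar(170, 100, 100), Scalar(180, 255, 255), maskHigh);
redMask = maskLow | maskHigh;
int redPixels = countNonZero(redMask); // the per-side pixel count you want to print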
I am using OpenCV 2.4.4 on a CentOS machine. My code currently loads an image with the warning: component data type mismatch.
Here is the code:
#include <cv.h>
#include <highgui.h>
#include "imglib.h"

using namespace cv; // needed for Mat, imread and imwrite

int main( int argc, char** argv )
{
    Mat image = imread( argv[1], CV_LOAD_IMAGE_ANYDEPTH );
    imwrite( "debugwriteout.jp2", image );
    return 0;
}
I pass the name of a .jp2 greyscale file in the args. The image has a 14-bit pixel depth, but when I print out the pixel values I get values over 20000, and my image is now a completely black square. Any advice would be appreciated.
Additional information:
When I change the imread flag to CV_LOAD_IMAGE_GRAYSCALE, it successfully converts the image to an 8-bit pixel depth and prints useful output, so I can tell that the Jasper module is working at least somewhat correctly.
Thanks
SZman, I solved my problem.
The issue is the position of the high bit: on 16 bits, a 14-bit depth gives you xxxxxxxxxxxxxx00 instead of 00xxxxxxxxxxxxxx.
To get the correct values, you must shift the data 2 bits to the right.
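A minimal sketch of that fix, assuming the file loads as a 16-bit (CV_16U) Mat:

// The 14-bit samples sit in the top bits of each 16-bit value, so a
// division by 4 (a 2-bit right shift) recovers the real pixel values.
Mat image = imread(argv[1], CV_LOAD_IMAGE_ANYDEPTH);
image /= 4;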
Please read the image using these flags:
Mat image = imread( argv[1], CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);
I cannot find anything that I can translate into my own understanding of how glBitmap() is used. My aim is to render letters and text to the SDL screen using OpenGL.
My current error-filled code is:
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>
#include "functionfile.h"

int main(int argc, char **argv)
{
    glClear(GL_COLOR_BUFFER_BIT);

    GLubyte A[14] = {
        0x00,0x00,
        0x60,0xc0,
        0x3f,0x80,
        0x00,0x00,
        0x0a,0x00,
        0x0a,0x00,
        0x04,0x00,
    };

    init_ortho(640,480);
    glBitmap(100,100,0,0,50,50,A);
    glLoadIdentity();

    SDL_GL_SwapBuffers();
    SDL_Delay(5000);
    SDL_Quit();
    return 0;
}
which results in a white 100x100-pixel patch of unrecognizable fuzz in the window.
Please read the documentation of glBitmap and try to understand it; you have some serious misconceptions.
The first two parameters of glBitmap tell it how large the image you feed it is. They are not the destination size. The other parameters influence how the raster position is adjusted. glBitmap does not scale the contents that go to the screen: if your bitmap is 8x8 pixels, it will come out as 8x8 pixels.
The Red Book has a rather nice section about glBitmap: http://fly.cc.fer.hr/~unreal/theredbook/chapter08.html
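For reference, a corrected call for the array above might look like this (a hedged sketch: the 14 bytes are 2 bytes per row over 7 rows, i.e. a 16x7 bitmap, and the raster position is assumed to lie inside your ortho projection):

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed, not 4-byte aligned
glRasterPos2i(50, 50);                 // where the bitmap will be drawn
glBitmap(16, 7,      // the bitmap's actual size in pixels
         0.0f, 0.0f, // origin within the bitmap
         0.0f, 0.0f, // raster-position advance after drawing
         A);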