I cannot find anything that I can translate into my own understanding of how glBitmap() is used. My aim is to render letters and text to the SDL screen using OpenGL.
My current error-filled code is:
#include <SDL/SDL.h>
#include <SDL/SDL_opengl.h>
#include "functionfile.h"
int main(int argc, char **argv)
{
    glClear(GL_COLOR_BUFFER_BIT);

    GLubyte A[14] = {
        0x00,0x00,
        0x60,0xc0,
        0x3f,0x80,
        0x00,0x00,
        0x0a,0x00,
        0x0a,0x00,
        0x04,0x00,
    };

    init_ortho(640,480);
    glBitmap(100,100,0,0,50,50,A);
    glLoadIdentity();
    SDL_GL_SwapBuffers();
    SDL_Delay(5000);
    SDL_Quit();
    return 0;
}
which results in a white 100x100-pixel patch of unrecognizable fuzz in the window.
Please read the documentation of glBitmap and try to understand it; you have some serious misconceptions.
The first two parameters of glBitmap tell it how large the image you feed it is; they are not the destination size. The other parameters influence how the raster position is adjusted. glBitmap does not scale the contents that go to the screen: if your bitmap is 8x8 pixels, it will come out as 8x8 pixels.
The Red Book has a rather nice section about glBitmap: http://fly.cc.fer.hr/~unreal/theredbook/chapter08.html
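To make that concrete, a corrected call for the 16x7 bitmap above might look roughly like this (a sketch, not tested against your setup; it assumes an orthographic projection is already in place):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed, 2 bytes each
glRasterPos2i(50, 50);                 // bitmaps are drawn at the current raster position
glBitmap(16, 7,      // width/height of the data itself, NOT an on-screen size
         0.0f, 0.0f, // origin within the bitmap
         0.0f, 0.0f, // how far to move the raster position afterwards
         A);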
I'm a newbie with OpenCV. I just managed to install it and set it up with Visual Studio 2013. I tested it with a sample live stream from my laptop's camera and it works. Now I want to use the webcam to calculate the distance to a red laser spot that will be in the middle of the screen (live stream). Where can I start? I know that I must find the red pixel in the middle of the screen, but I don't know how to do that or which functions I can use. Some help, please?
The live stream from the webcam that works is shown below:
#include "opencv2/highgui/highgui.hpp"
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <vector>
#include <stdio.h>
int main()
{
    // Data structure to store the camera capture
    CvCapture* cap = cvCreateCameraCapture(0);
    // Image variable to store a frame
    IplImage* frame;
    // Window to show the live feed
    cvNamedWindow("Imagine Live", CV_WINDOW_AUTOSIZE);
    while(1)
    {
        // Load the next frame
        frame = cvQueryFrame(cap);
        // If the frame could not be loaded, break out of the loop
        if(!frame)
            break;
        // Show the current frame
        cvShowImage("Imagine Live", frame);
        // If the key pressed by the user is Esc (ASCII 27), break out of the loop
        char c = cvWaitKey(33);
        if(c == 27)
            break;
    }
    // Clean up
    cvReleaseCapture(&cap);
    cvDestroyAllWindows();
}
Your red dot is most likely going to show up as total white in the camera stream, so I would suggest:
1. Convert to grayscale using cvtColor().
2. Threshold using threshold(); for parameters use something like thresh=253, maxval=255 and mode THRESH_BINARY. That should give you an image that is all black with a small white dot where your laser is.
3. Use findContours() to locate the dot in the image. Get the boundingRect() of a contour, and then you can calculate its center to get the precise coordinates of your dot.
Also, as has already been mentioned, do not use the deprecated C API; use the C++ API instead.
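For example, a rough sketch of that pipeline using the C++ API (untested; the threshold of 253 and the window name are taken from the suggestions above, everything else is just scaffolding to adapt):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened())
        return 1;
    cv::Mat frame, gray, mask;
    while (cap.read(frame))
    {
        // The laser dot saturates the sensor, so keep only near-white pixels
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::threshold(gray, mask, 253, 255, cv::THRESH_BINARY);
        // Locate the white blob(s) and compute their centers
        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        for (size_t i = 0; i < contours.size(); i++)
        {
            cv::Rect box = cv::boundingRect(contours[i]);
            std::cout << "dot near (" << box.x + box.width / 2
                      << ", " << box.y + box.height / 2 << ")" << std::endl;
        }
        cv::imshow("Imagine Live", frame);
        if (cv::waitKey(33) == 27) // Esc quits
            break;
    }
    return 0;
}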
I am doing a project on face detection from surveillance cameras. Now I am at the stage of face detection, and I can detect faces in each frame. After detecting a face I need to store it in a local folder, and I can now save each face in the specified folder.
Problem: it is currently saving every face, but I need to save identical faces only once. That means once I have saved a face as a JPEG image, and the same face comes up again as detection progresses, I don't want to save that particular face again.
This is my code:
#include <cv.h>
#include <highgui.h>
#include <time.h>
#include <stdio.h>
using namespace std;
int ct=1;
int ct1=0;
IplImage *frame;
int frames;
void facedetect(IplImage* image);
void saveImage(IplImage *img,char *ex);
IplImage* resizeImage(const IplImage *origImg, int newWidth,int newHeight, bool keepAspectRatio);
const char* cascade_name="haarcascade_frontalface_default.xml";
int k=1;
int main(int argc, char** argv)
{
    CvCapture *capture = cvCaptureFromFile("Arnab Goswami on Pepper spary rajagopal Complete NewsHour Debate (Mobile).3gp");
    int count = 1;
    while(1)
    {
        frame = cvQueryFrame(capture);
        // Stop when the video runs out of frames
        if(!frame)
            break;
        if(count % 30 == 0)
        {
            facedetect(frame);
        }
        count++;
    }
    cvReleaseCapture(&capture);
    return 0;
}
void facedetect(IplImage* image)
{
    ct1++;
    cvNamedWindow("output");
    char numstr[50];
    CvPoint ul, lr, w, h;
    CvRect *r;
    IplImage* image1;
    IplImage* reimg;
    CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*) cvLoad(cascade_name);
    CvMemStorage* storage = cvCreateMemStorage(0);
    if(!cascade)
    {
        cout<<"Could not load classifier cascade"<<endl;
        return;
    }
    // Detect faces; the output is a sequence of detected face rectangles.
    CvSeq* faces = cvHaarDetectObjects(image, cascade, storage, 1.1, 1,
                                       CV_HAAR_DO_CANNY_PRUNING, cvSize(10,10));
    for(int i = 0; i < (faces ? faces->total : 0); i++)
    {
        // Build the file name "im<k>.jpeg" for this face.
        string s1 = "im", ex = ".jpeg";
        sprintf(numstr, "%d", k);
        string rename = s1 + numstr + ex;
        char *extract1 = new char[rename.size()+1];
        extract1[rename.size()] = 0;
        // Copy rename.size() bytes from rename.c_str() into extract1.
        memcpy(extract1, rename.c_str(), rename.size());
        r = (CvRect*) cvGetSeqElem(faces, i);
        ul.x = r->x;
        ul.y = r->y;
        w.x = r->width;
        h.y = r->height;
        lr.x = r->x + r->width;
        lr.y = r->y + r->height;
        // Copy the face region out of the frame, resize it and save it.
        cvSetImageROI(image, cvRect(ul.x, ul.y, w.x, h.y));
        image1 = cvCreateImage(cvGetSize(image), image->depth, image->nChannels);
        cvCopy(image, image1, NULL);
        reimg = resizeImage(image1, 40, 40, true);
        saveImage(reimg, extract1);
        cvResetImageROI(image);
        // Draw a rectangle outline around the detected face.
        cvRectangle(image, ul, lr, CV_RGB(1,255,0), 3, 8, 0);
        k++;
        cout<<"frame"<<ct1<<" "<<"face"<<ct<<":"<<"x: "<<ul.x<<endl;
        cout<<"frame"<<ct1<<" "<<"face"<<ct<<":"<<"y: "<<ul.y<<endl;
        cout<<endl;
        ct++;
        cvReleaseImage(&image1);
        cvReleaseImage(&reimg);
        delete [] extract1;
    }
    cvShowImage("output", image); // show the frame with the detections drawn
    cvWaitKey(0);
    cvReleaseMemStorage(&storage);
    cvReleaseHaarClassifierCascade(&cascade);
}
void saveImage(IplImage *img, char *ex)
{
    char path[255] = "/home/athira/Image/OutputImage";
    char buff[1000];
    // Build the full file name: <path>/<ex>
    sprintf(buff, "%s/%s", path, ex);
    cvSaveImage(buff, img);
}
You are using the Haar feature-based cascade classifier for object detection. As far as I know, these XML files are only trained to detect specific objects based on hundreds of evaluated pictures (see cascade classifier training).
So to compare saved images you will need another "detection" mode, because you have to decide whether two faces are identical with respect to the viewing angle and so on (keyword: biometric data).
The keyword you're looking for is "face recognition", I think. Just build up a database based on your detected faces and use it for face recognition after that.
Edit:
Another possibly helpful link: www.shervinemami.info/faceRecognition.html
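If you go down that road, OpenCV 2.4.x ships a FaceRecognizer interface in its contrib module. A minimal sketch (the image and label containers here are hypothetical placeholders, not a drop-in solution for your code):
#include <opencv2/contrib/contrib.hpp> // cv::FaceRecognizer (OpenCV 2.4.x)
#include <vector>

// faces:  previously saved face thumbnails, grayscale, all the same size
// labels: one integer identity per thumbnail
void recognizeSketch(const std::vector<cv::Mat> &faces,
                     const std::vector<int> &labels,
                     const cv::Mat &newFace)
{
    cv::Ptr<cv::FaceRecognizer> model = cv::createLBPHFaceRecognizer();
    model->train(faces, labels);
    int predicted = model->predict(newFace); // label of the closest match
    // If 'predicted' matches an existing label with good confidence,
    // the face was already saved and can be skipped.
}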
If I understood correctly, what you want is to detect faces in one frame and save a thumbnail of each face. Then, in the following frames, you want to detect faces again but only save the thumbnails of those that were not present earlier.
This problem is hard, because the faces captured in a video always change from one frame to the next. This is due to noise in the images, to the fact that the people may be moving, etc. As a consequence, no two faces are ever identical in a surveillance video.
Hence, in order to achieve what you asked, you need to determine whether the face you are considering has already been observed in previous frames. In its general form, this problem is not an obvious one, and it is still the topic of a lot of research related to biometrics, pedestrian tracking and re-identification, etc. Therefore, you will have a hard time achieving 100% effectiveness in detecting that a given face has already been observed.
However, if you can accept a method that is not 100% effective, you could try the following approach:
1. Detect faces F_i(0) in frame 0, with associated image positions (x_i(0), y_i(0)), and store the thumbnails.
2. Compute sparse optical flow (e.g. using KLT, see this link and the sketch below) at the positions (x_i(n-1), y_i(n-1)) of the faces in the previous frame n-1, in order to estimate their positions (xx_i(n), yy_i(n)) in the current frame n.
3. Detect faces F_i(n) in the current frame n, with associated image positions (x_i(n), y_i(n)), and save only the thumbnails of those which are not close to one of the predicted positions (xx_i(n), yy_i(n)).
4. Increment n and repeat steps 2-3 using the next frame.
This is a simple algorithm that uses tracking to determine whether a given face has already been observed. It should be easier to implement than biometrics-based approaches, and it is probably also more appropriate in the context of video surveillance. However, it is not 100% accurate, due to the limited effectiveness of the optical-flow estimation and of the face detector.
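As a rough sketch of step 2, the sparse KLT step can be done with OpenCV's calcOpticalFlowPyrLK(); the helper functions below are only illustrative:
#include <opencv2/opencv.hpp>
#include <vector>

// Estimate, in the current frame, the positions of face centers that were
// detected in the previous frame, using sparse KLT optical flow.
std::vector<cv::Point2f> predictPositions(const cv::Mat &prevGray,
                                          const cv::Mat &currGray,
                                          const std::vector<cv::Point2f> &prevCenters)
{
    std::vector<cv::Point2f> predicted;
    std::vector<uchar> status;
    std::vector<float> err;
    if (!prevCenters.empty())
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevCenters,
                                 predicted, status, err);
    return predicted;
}

// A detection counts as "new" only if it is far from every predicted position.
bool isNewFace(const cv::Point2f &det,
               const std::vector<cv::Point2f> &predicted,
               float maxDist)
{
    for (size_t i = 0; i < predicted.size(); i++)
    {
        float dx = det.x - predicted[i].x, dy = det.y - predicted[i].y;
        if (dx * dx + dy * dy < maxDist * maxDist)
            return false;
    }
    return true;
}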
I have *.png files and I want to take different 8x8 px parts from these textures and place them on a bitmap (an SDL_Surface, I guess, but maybe not), something like this:
Right now I'm rendering without such a bitmap, i.e. I take each texture and draw the relevant part directly on the screen each frame, and it's too slow. I guess I need to load each *.png into a separate bitmap held in video memory and then draw just one big composed bitmap, but maybe I'm wrong. I need the fastest way of doing this, and I need code for it (SDL 2, not SDL 1.3).
Also, maybe I need to use plain OpenGL here?
Update:
Or maybe I need to load the *.png files into int arrays somehow, treat them just like ordinary numbers, place them into one big int array, and then convert that to an SDL_Surface/SDL_Texture? This seems like the best way, but how would I write it?
Update 2:
The colors of the pixels in each block are not all the same as presented in the picture, and they can also be transparent. The picture is just an example.
Assuming you already have your bitmaps loaded up as SDL_Texture(s), composing them into a different texture is done via SDL_SetRenderTarget:
SDL_SetRenderTarget(renderer, target_texture);
SDL_RenderCopy(renderer, texture1, ...);
SDL_RenderCopy(renderer, texture2, ...);
...
SDL_SetRenderTarget(renderer, NULL);
Every render operation you perform between setting your render target and resetting it (by calling SDL_SetRenderTarget with a NULL texture parameter) will be rendered to the designated texture. You can then use this texture as you would use any other.
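One caveat: the destination texture must have been created with the SDL_TEXTUREACCESS_TARGET access flag, roughly like this (the pixel format and size are placeholders):
// A texture only works as a render target if created with SDL_TEXTUREACCESS_TARGET.
SDL_Texture *target_texture = SDL_CreateTexture(renderer,
                                                SDL_PIXELFORMAT_RGBA8888,
                                                SDL_TEXTUREACCESS_TARGET,
                                                width, height);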
OK, so when I asked about "solid colour", I meant: in the 8x8 pixel area of the .png that you are copying from, do all 64 pixels have the same identical RGB value? It looks that way in your diagram, so how about this:
Create an SDL_Surface, and directly paint 8x8 pixel areas of the memory pointed to by the pixels member of that SDL_Surface with the values read from the original .png.
And then when you're done, convert that surface to an SDL_Texture and render that?
You would avoid all the SDL_UpdateTexture() calls.
Anyway, here is some example code. Let's say that you create a class called EightByEight.
class EightByEight
{
public:
    EightByEight( SDL_Surface * pDest, Uint8 r, Uint8 g, Uint8 b ):
        m_pSurface(pDest),
        m_red(r),
        m_green(g),
        m_blue(b) {}

    void BlitToSurface( int column, int row );

private:
    SDL_Surface * m_pSurface;
    Uint8 m_red;
    Uint8 m_green;
    Uint8 m_blue;
};
You construct an object of type EightByEight by passing it a pointer to an SDL_Surface and also some values for red, green and blue. This corresponds to the RGB value taken from the particular 8x8 pixel area of the .png you are currently reading from; you will paint a particular 8x8 pixel area of the SDL_Surface's pixels with this RGB value.
So now when you want to paint an area of the SDL_Surface, you use the function BlitToSurface() and pass in a column and row value. For example, if you divided the SDL_Surface into 8x8 pixel squares, BlitToSurface(3,5) means: paint the square at the 4th column and the 6th row (column and row are zero-based) with the RGB value set at construction.
The BlitToSurface() looks like this:
void EightByEight::BlitToSurface(int column, int row)
{
    // pitch is in bytes; convert it to a stride in whole Uint32 pixels
    const int stride = m_pSurface->pitch / 4;
    // point at the first pixel of the requested 8x8 pixel square
    Uint32 * pixel = (Uint32*)m_pSurface->pixels + (row * 8 * stride) + (column * 8);
    const Uint32 colour = SDL_MapRGB(m_pSurface->format, m_red, m_green, m_blue);
    // paint 8 rows of 8 pixels; after each row, step the pointer down
    // to the start of the square's next row
    for(int y = 0; y < 8; y++)
    {
        // paint one row
        for(int i = 0; i < 8; i++)
        {
            *pixel++ = colour;
        }
        // advance the pixel pointer by stride-8 to reach the next row
        pixel += stride - 8;
    }
}
I'm sure you could speed things up further by pre-calculating the RGB value at construction time. Or, if you're reading a pixel straight from the texture, you could probably dispense with the SDL_MapRGB() call (it's just there in case the Surface has a different pixel format to the .png).
A memcpy is probably faster than 8 individual assignments of the RGB value, but I just wanted to demonstrate the technique. You could experiment.
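For instance, a memcpy-based variant might look like this (untested; rowStart stands for the first pixel of the block, with stride and colour as computed in the function above):
// rowStart: first pixel of the 8x8 block; stride: surface pitch in Uint32 units
Uint32 rowBuf[8];
for (int i = 0; i < 8; i++)
    rowBuf[i] = colour;                              // pre-build one row
for (int y = 0; y < 8; y++)
    memcpy(rowStart + y * stride, rowBuf, sizeof(rowBuf)); // copy it 8 times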
So, all the EightByEight objects you create, all point to the same SDL_Surface.
And then, when you're done, you just convert that SDL_Surface to an SDL_Texture and blit that.
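That last step is just a couple of calls (a sketch; the renderer and surface variables are assumed):
SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, surface);
SDL_FreeSurface(surface);                  // the surface is no longer needed
SDL_RenderCopy(renderer, tex, NULL, NULL); // blit the composed texture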
Thanks to everyone who took part, but my friends and I solved it ourselves. Here is an example (the full source code is too big and unnecessary here; I'll just describe the main idea):
int pitch, *pixels;
SDL_Texture *texture;
...
// the texture must have been created with SDL_TEXTUREACCESS_STREAMING
if (!SDL_LockTexture(texture, NULL, (void **)&pixels, &pitch))
{
    for (/*Conditions*/)
        memcpy(/*Params*/);
    SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, NULL, NULL);
In my application, I paint a street map using QPainter on a widget. The map is made of QPainterPaths that contain precalculated paths to be drawn. The widget is currently a QWidget, not a QGLWidget, but this might change.
I'm trying to move the painting off-screen and split it into chunked jobs. I want to paint each chunk onto a QImage and finally draw all the images onto the widget. The QPainterPaths are already chunked, so this is not the problem.
The problem is that drawing on QImages is about 5 times slower than drawing on a QWidget.
Some benchmark testing I've done: the time values are rounded averages over multiple runs, and the test chunk contains 100 QPainterPaths that have about 150 linear line segments each. The roughly 15k line segments are drawn with the QPainter::Antialiasing render hint, and the QPen uses round cap and round join.
Remember that my sources are QPainterPaths (plus line width and color; some drawn, some filled). I don't need all the other types of drawing QPainter supports.
If QPainterPaths can be converted to something else which can be drawn into an OpenGL buffer, that would be a good solution. I'm not familiar with OpenGL off-screen rendering, and I know that there are different types of OpenGL buffers, most of which aren't for 2D image rendering but for vertex data.
Paint Device for chunk | Rendering the chunk itself | Painting chunk on QWidget
-----------------------+----------------------------+--------------------------
QImage | 2000 ms | < 10 ms
QPixmap (*) | 250 ms | < 10 ms
QGLFramebufferObj. (*) | 50 ms | < 10 ms
QPicture | 50 ms | 400 ms
-----------------------+----------------------------+--------------------------
none (directly on a QWidget in paintEvent) | 400 ms
----------------------------------------------------+--------------------------
(*) These 2 lines have been added afterwards and are solutions to the problem!
It would be nice if you could tell me about a non-OpenGL-based solution, too, as I want to compile my application in two versions: an OpenGL and a non-OpenGL version.
Also, I want the solution to be able to render in a non-GUI thread.
Is there a good way to efficiently draw the chunks off-screen?
Is there an off-screen counterpart of QGLWidget (an OpenGL off-screen buffer) which can be used as a paint device for QPainter?
A post from the Qt-interest mailing list archive (August 2008) about QGLContext::create() says:
A QGLContext can only be created with a valid GL paint device, which
means it needs to be bound to either a QGLWidget, QGLPixelBuffer or
QPixmap when you create it. If you use a QPixmap it will give you
software-only rendering, and you don't want that. A QGLFramebufferObject
is not in itself a valid GL paint device, it can only be created within
the context of a QGLWidget or a QGLPixelBuffer. What this means is that
you need a QGLWidget or QGLPixelBuffer as the base for your
QGLFramebufferObject.
As the document indicates, if you want to render into an off-screen buffer using OpenGL, you need a QGLPixelBuffer. The code below is a very simple example which demonstrates how to use QGLPixelBuffer with OpenGL:
#include <QtGui/QApplication>
#include <Windows.h>
#include <gl/GL.h>
#include <gl/GLU.h>
#include <QtOpenGL/QGLFormat>
#include <QtOpenGL/QGLPixelBuffer>
int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    // Construct an OpenGL pixel buffer.
    QGLPixelBuffer glPixBuf(100, 100);
    // Make the QGLContext object bound to the pixel buffer the current context.
    glPixBuf.makeCurrent();
    // The OpenGL commands.
    glClearColor(1.0, 1.0, 1.0, 0.0);
    glViewport(0, 0, 100, 100);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, 100, 0, 100);
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 0.0, 0.0);
    glPointSize(4.0);
    glBegin(GL_TRIANGLES);
    glVertex2i(10, 10);
    glVertex2i(50, 50);
    glVertex2i(25, 75);
    glEnd();
    // At last, the pixel buffer is saved as an image.
    QImage pImage = glPixBuf.toImage(); // toImage() returns by value
    pImage.save(QString::fromLocal8Bit("gl.png"));
    return a.exec();
}
The result of the program is a PNG image file like this:
For a non-OpenGL version using QPixmap, the code may look like the following:
int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    QPixmap pixmap(100, 100);
    QPainter painter;
    painter.begin(&pixmap);
    painter.drawText(10, 45, QString::fromLocal8Bit("I love American."));
    painter.end();
    pixmap.save(QString::fromLocal8Bit("pixmap.png"));
    return a.exec();
}
The result of the program above is a PNG file that looks like this:
Though the code is simple, it works; maybe you can make some changes to adapt it to your needs.
Through my code, I want to know the dimensions of an image in inches. Via OpenCV, I can find the height and width of the image's pixel array using the following code:
#include "stdafx.h"
#include <cv.h>
#include <cxcore.h>
#include <highgui.h>
#include <iostream>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
    IplImage *img = cvLoadImage("photo.jpg");
    if (!img) {
        printf("Error: Couldn't open the image file.\n");
        return 1;
    }
    cout<<"Number of pixels in width = "<<img->width<<endl
        <<"Number of pixels in height = "<<img->height<<endl;
    return 0;
}
Please help me find the size of the image in inches.
Thanks in advance...
You need to know the DPI of your display. For that, you'll need to look into your platform's SDK (Windows/Linux/Mac) to learn how to retrieve this info, since OpenCV doesn't provide a feature for it.
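Once you have the DPI, the conversion itself is trivial. A sketch (the 96 here is only a placeholder; query the real value from your platform):
double dpi = 96.0; // placeholder: retrieve the real DPI from your platform's SDK
double widthInches  = img->width  / dpi;
double heightInches = img->height / dpi;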
Image Size Calculator is a JavaScript calculator that performs this calculation; check the page's source code for the implementation.
You must define a pixels-per-inch ratio, and then you can compute the value. If you want the size of the image in inches on your monitor, take the monitor's resolution and physical size to derive that ratio.
You can't. If I take a picture of the moon, the moon's diameter may well be 127 pixels. How many inches should that be? The moon is shining through a tree in that picture, and the tree is 341 pixels wide. How many inches is the tree? Really?