I want to achieve the following result in OpenGL (and C++): I have a plot that I want to use as the background of an animation, and I want it to stay fixed while some points move over its surface.
For example (see the image): I want to compute the level plot of a function of three variables (black and white in the picture), set it as the background, and show an animation of points moving on the surface (red points in the picture).
What is the best way to achieve this with good performance?
One possibility would be to draw your background image as a texture on a quad behind the other points (e.g., points at Z=0.1, quad at Z=0.2). Since you (presumably) don't want the quad scaled compared to the points, you probably want to use an orthographic projection, not perspective.
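For illustration, here is a rough fixed-function sketch of that setup; background_tex, points, and point_count are placeholder names for data you would create yourself, and since the background quad is simply drawn before the points with depth testing disabled, the exact z values are not critical:

// Hypothetical sketch: orthographic view over [0,1]x[0,1], background quad
// drawn first (at Z=0.2), red points drawn on top of it (at Z=0.1).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glDisable(GL_DEPTH_TEST);          // painter's order handles the layering here

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, background_tex);
glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(0.0f, 0.0f, 0.2f);
glTexCoord2f(1.0f, 0.0f); glVertex3f(1.0f, 0.0f, 0.2f);
glTexCoord2f(1.0f, 1.0f); glVertex3f(1.0f, 1.0f, 0.2f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(0.0f, 1.0f, 0.2f);
glEnd();
glDisable(GL_TEXTURE_2D);

glPointSize(6.0f);
glColor3f(1.0f, 0.0f, 0.0f);       // red points, as in the picture
glBegin(GL_POINTS);
for (int i = 0; i < point_count; i++)
    glVertex3f(points[i].x, points[i].y, 0.1f);
glEnd();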
It really depends on which version of OpenGL you are using.
With SDL and OpenGL 2.x it is very easy; with something like GLFW and OpenGL 3.x it takes noticeably more setup (shaders, buffers, and so on).
Here is how you would do it with SDL and OpenGL 2.1 (not all code is included; you will have to write some of it yourself):
void plot(SDL_Surface *surf, int x, int y, int r, int g, int b)
{
    Uint32 color = SDL_MapRGB(surf->format, r, g, b); // pack the colour for this surface's pixel format
    Uint32 *framebuffer = (Uint32*) surf->pixels;
    framebuffer[y * surf->w + x] = color;
}

int main()
{
    /* set up SDL and OpenGL */
    SDL_Surface *img = SDL_LoadBMP("base_background.bmp");

    /* plot functions like this (x and f(x) are whatever you are plotting): */
    plot(img, x, f(x), 255, 0, 0); // example colours

    /* create an OpenGL texture from the surface (code can be found on Google easily) */
    GLuint tex = create_GL_texture(img);

    /* draw it like you would normally */
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    ...
    glEnd();

    return 0;
}
For OpenGL 3.x it is a bit more involved; tutorials on modern texture rendering are widely available online.
Related
I'm trying to make a library that lets me draw an overlay on top of the contents of a game window that uses OpenGL, by intercepting the call to the SwapBuffers function. For the interception I use Microsoft Detours.
BOOL WINAPI __SwapBuffers(HDC hDC)
{
    HGLRC oldContext = wglGetCurrentContext();
    if (!context) // global variable holding my own GL context
    {
        context = wglCreateContext(hDC);
    }
    wglMakeCurrent(hDC, context);

    // Drawing
    glRectf(0.1F, 0.5F, 0.2F, 0.6F);

    wglMakeCurrent(hDC, oldContext);
    return _SwapBuffers(hDC); // call the original SwapBuffers
}
This code works, but occasionally, when I move my mouse, my overlay blinks. Why? Some forums say that such an implementation can significantly reduce FPS; is there a better alternative? Also, how do I correctly translate a normal screen position to an OpenGL position? For example, with width = 1366, a screen x of 1366 maps to 1 and a screen x of 0 maps to -1. How do I get the value for, say, 738? And what about the height?
To translate a screen coordinate to a normalized coordinate you need the screen width and height; it is a linear mapping from [0, screenwidth] to [-1, 1] and from [0, screenheight] to [-1, 1]. It is as simple as this:
int screenwidth, screenheight;
//...
screenwidth = 1366;
screenheight = 738;
//...
float screenx, screeny;
float x = (screenx/(float)screenwidth)*2-1;
float y = (screeny/(float)screenheight)*2-1;
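For the concrete numbers in the question: a screen x of 738 on a 1366-wide screen maps to (738 / 1366) * 2 - 1 ≈ 0.080. The height works the same way with screenheight in place of screenwidth; just bear in mind that OpenGL's y axis points up, so if your screen coordinates have y pointing down you also need to negate the result for y.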
The problem with z = 0:
glRect renders at z = 0, which is a problem because that plane is infinitely near: OpenGL still treats the call as rendering in world space, and the screen plane lies at (x, y, 1) in untransformed world space. OpenGL almost always works with 3D coordinates.
There are two ways to tackle this problem:
Prefer using functions that take a z component, because OpenGL does not render correctly at z = 0; z = 1 corresponds to the normalized screen space,
or add a glTranslatef(0, 0, 1); to move into the normalized screen space.
Remember to disable depth testing and reset the modelview matrix when rendering 2D in screen space.
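A rough sketch of how those remarks could fit into the hook above (same example rectangle; saving and restoring the matrices and state is just a precaution, since the overlay already uses its own context):

// Hypothetical sketch: draw the overlay with an explicit z and with depth
// testing disabled, restoring the previous matrices and state afterwards.
glPushAttrib(GL_ENABLE_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

glBegin(GL_QUADS);              // glRectf has no z, so use explicit vertices
glVertex3f(0.1f, 0.5f, 1.0f);   // z = 1, as suggested above
glVertex3f(0.2f, 0.5f, 1.0f);
glVertex3f(0.2f, 0.6f, 1.0f);
glVertex3f(0.1f, 0.6f, 1.0f);
glEnd();

glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();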
I need to use OpenGL for a very specific purpose. I've got a 1D array of floats of size SIZE * SIZE (it's always square) that represents a 2D image. Drawing is just an extra here, since until now I've been doing it with third-party programs by outputting the array to a text file, but I would like to offer the option of doing it in the program itself.
This array is constantly updated in a loop, as it represents the values of a simulated field (the details of which are irrelevant); the important point is that each value is a float between -1 and 1. I would like to draw this array as a 2D image, in real time, every N steps of the main loop. I tried the pixel drawing tools of X11 (I'm on Linux), looping over the array and drawing it pixel by pixel in a SIZE x SIZE window, but this was very slow and took much longer than the simulation itself. I've been looking into OpenGL, and from what I've read the ideal solution would be to reinterpret my array as a 2D texture and then draw it on a quad. Apparently with bare OpenGL I would have to adapt my code to the library's own drawing loop, which is impractical, so if the same can be done with GLFW I'm happy with that.
The image to draw is always square, and its orientation is completely irrelevant: it doesn't matter if it's drawn mirrored, upside down, transposed, etc., as it's supposed to be completely isotropic.
The backbone of the program follows this scheme:
#include <iostream>
#include <GLFW/glfw3.h>
using namespace std;

int main(int argc, char** argv)
{
    if (GFX) // GFX is a bool; only draw stuff if it's 1 (its value doesn't change)
    {
        // Initialize GLFW
    }
    float field[2*SIZE][SIZE] = {0}; // this is the array to draw (only the first SIZE*SIZE components)
    for (int i = 0; i < totalTime; i++)
    {
        for (int x = 0; x < SIZE; x++)
        {
            for (int y = 0; y < SIZE; y++)
            {
                // each position of the array is updated here
            }
        }
        if (GFX)
        {
            // the drawing should be done here
        }
    }
    return 0;
}
I've tried some code snippets and modified other samples I've found around, but I haven't been able to make them work: either they require a GL main loop that breaks my own simulation loop, or they just print a single pixel in the centre.
So my main question is how to make a texture out of the first SIZE x SIZE components of field and then draw it on a quad.
Thanks!
The simplest approach for a beginner is to use the old fixed-function API without shaders. To make that work, you simply encode your data into a 1D linear array of floats in the range <0.0, 1.0>, which can be done from <-1, +1> pretty quickly on the CPU side with a single for loop like this:
for (i=0;i<size*size;i++) data[i]=0.5*(data[i]+1.0);
I do not use GLUT or code for your platform, so I will stick to just the rendering:
//---------------------------------------------------------------------------
const int size=512;        // data resolution
const int size2=size*size;
float data[size2];         // your float size*size data
GLuint txrid=0;            // GL texture ID
//---------------------------------------------------------------------------
void init() // this must be called once (after GL is initialized)
{
    int i;
    // generate float data (replace with your own field values)
    Randomize();
    for (i=0;i<size2;i++) data[i]=Random();
    // create texture
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1,&txrid);                 // allocate the texture name first
    glBindTexture(GL_TEXTURE_2D,txrid);
    glPixelStorei(GL_UNPACK_ALIGNMENT,4);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV,GL_TEXTURE_ENV_MODE,GL_MODULATE);
    glDisable(GL_TEXTURE_2D);
}
//---------------------------------------------------------------------------
void exit() // this must be called once (before GL is uninitialized)
{
    // release texture
    glDeleteTextures(1,&txrid);
}
//---------------------------------------------------------------------------
void gl_draw()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // bind texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);
    // copy your actual data into it
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, size, size, 0, GL_LUMINANCE, GL_FLOAT, data);
    // render single textured QUAD
    glColor3f(1.0,1.0,1.0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
    glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
    glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
    glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
    glEnd();
    // unbind texture (so it does not mess with other rendering)
    glBindTexture(GL_TEXTURE_2D,0);
    glDisable(GL_TEXTURE_2D);
    glFlush();
    SwapBuffers(hdc); // ignore this, GLUT/GLFW will swap buffers on its own
}
//---------------------------------------------------------------------------
Here is a preview of the result (the random grayscale data rendered on the quad):
To make this work you need to call init() at the start of your app, after GLUT creates the GL context, and exit() at the app's end, before GLUT closes the GL context. gl_draw() renders your data, so it must be called from GLUT's drawing event.
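Since your skeleton actually uses GLFW rather than GLUT, here is a rough, untested sketch of how init()/gl_draw()/exit() could be driven from a GLFW 3 loop instead (note that the exit() above shadows the standard library's exit(), so you may want to rename it):

// Hypothetical sketch: GLFW 3 main loop driving the three functions above.
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit()) return -1;
    GLFWwindow *win = glfwCreateWindow(512, 512, "field", NULL, NULL);
    if (!win) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(win);          // GL context exists from here on

    init();                               // create the texture once
    while (!glfwWindowShouldClose(win))
    {
        // ... update the simulation and copy it into data[] here ...
        gl_draw();                        // upload data[] and draw the quad
        glfwSwapBuffers(win);             // replaces the SwapBuffers(hdc) call above
        glfwPollEvents();
    }
    exit();                               // release the texture
    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}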
In case you do not want to do the range conversion to <0, 1> on the CPU side, you can move it to shaders (a very simple vertex and fragment shader), but I get the feeling you're a beginner and shaders would be too much to start with. If you really want to go that way, see:
complete GL+GLSL+VAO/VBO C++ example
It also covers GL initialization without GLUT, but on Windows...
Now some notes on the program above:
I used the GL_LUMINANCE32F_ARB texture format extension
It is a 32-bit floating-point texture format that is not clamped, so your data stays as is. It should be present on all current graphics hardware. I chose it to ease the transition to shaders later on, where you can operate on your raw data directly...
size
In the original GL specification the texture size should be a power of 2, i.e. 16, 32, 64, 128, 256, 512, ... If it is not, you need the rectangle texture extension, but that has been native in graphics hardware for years now, so usually nothing needs to change. On Linux and Mac there can still be problems with the GL implementation, so if something does not work, try a power-of-2 size (just in case)...
Also do not go too crazy with the size, as graphics cards have limits; 2048 is usually a safe limit for low-end hardware. If you need more, use a mosaic of several QUADs/textures.
GL_CLAMP_TO_EDGE
This is also an extension (now native to hardware); with it, your texture coordinates go from 0 to 1 instead of from 0+pixel/2 to 1-pixel/2...
However, none of these are GL 1.0 features, so you need to pull in the extension headers in your app (if GLUT or whatever you use does not already). All of them are just tokens/constants, not function calls, so if the compiler complains it should be enough to:
#include <gl\glext.h>
after gl.h is included, or add the defines directly instead:
#define GL_CLAMP_TO_EDGE 0x812F
#define GL_LUMINANCE32F_ARB 0x8818
By the way, your code does not look like a GLUT app (but I might be wrong, as I do not use it); see this for example:
simple GLUT app example
Your header suggests GLFW3, which is something entirely different from GLUT (unless it is derived from it), so maybe you should edit the tags and the question to match what you really use.
Now the shaders:
if you generate your data in <-1,+1> range:
for (i=0;i<size2;i++) data[i]=(2.0*Random())-1.0;
And use these shaders:
Vertex:
// Vertex
#version 400 core
layout(location = 0) in vec2 pos; // position
layout(location = 8) in vec2 tex; // texture
out vec2 vpos;
out vec2 vtex;
void main()
{
vpos=pos;
vtex=tex;
gl_Position=vec4(pos,0.0,1.0);
}
Fragment:
// Fragment
#version 400 core
uniform sampler2D txr;
in vec2 vpos; // position
in vec2 vtex; // texture
out vec4 col;
void main()
{
vec4 c;
c=texture(txr,vtex);
c=(c+1.0)*0.5;
col=c;
}
Then the result is the same (apart from the faster conversion on the GPU side). However, you need to convert the GL_QUADS into a VAO/VBO (unless an nVidia card is used, but even then you definitely should use a VBO/VAO).
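A rough sketch of what that conversion might look like, assuming an extension loader (e.g. GLEW) and a compiled shader program using attribute locations 0 (pos) and 8 (tex) as in the vertex shader above:

// Hypothetical sketch: the fullscreen textured quad as two triangles in a VAO/VBO.
GLfloat quad[] = {
//    x      y     u     v
    -1.0f, -1.0f, 0.0f, 0.0f,
    +1.0f, -1.0f, 1.0f, 0.0f,
    +1.0f, +1.0f, 1.0f, 1.0f,
    -1.0f, -1.0f, 0.0f, 0.0f,
    +1.0f, +1.0f, 1.0f, 1.0f,
    -1.0f, +1.0f, 0.0f, 1.0f,
};
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), (void*)0);                   // pos
glEnableVertexAttribArray(0);
glVertexAttribPointer(8, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), (void*)(2*sizeof(GLfloat))); // tex
glEnableVertexAttribArray(8);
glBindVertexArray(0);

// per frame, instead of the glBegin/glEnd block:
// glUseProgram(prog); glBindVertexArray(vao); glDrawArrays(GL_TRIANGLES, 0, 6);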
I want to draw 3D text in an OpenGL viewport.
I have applied the following method, but it still shows the text at 2D positions.
void renderBitmapString(float x, float y, float z, void *font, const char *string)
{
    const char *c;
    //glRasterPos2f(x, y);
    //glutBitmapCharacter(font, string);
    glRasterPos3f(x, y, z);
    //glRasterPos3i(x, y, z);
    for (c = string; *c != '\0'; c++) {
        glutBitmapCharacter(font, *c);
    }
}
OpenGL does not render text; it is not part of the standard. What it does render is textures and bitmap images. So the way to render text is to use some sort of 2D rendering library, like Cairo, to create a bitmap with the text in it. Once you have the bitmap, you can render it as a texture. Just be careful: Cairo uses BGRA format for its bitmaps, so you might need to swizzle the red and blue components to get things working.
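A rough sketch of that pipeline, assuming Cairo is available and a GL context already exists (the font, size, and position are arbitrary placeholders):

#include <cairo/cairo.h>
#include <GL/gl.h>

// Hypothetical sketch: render a string into a Cairo ARGB32 surface and upload
// it as a GL texture. ARGB32 is BGRA byte order on little-endian machines,
// hence GL_BGRA below (GL 1.2+ or EXT_bgra); the image will also come out
// vertically flipped in GL's bottom-up convention, so flip the texture
// coordinates on the quad if that matters.
GLuint makeTextTexture(const char *text, int w, int h)
{
    cairo_surface_t *surf = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, w, h);
    cairo_t *cr = cairo_create(surf);

    cairo_set_source_rgba(cr, 1.0, 1.0, 1.0, 1.0);   // white text on a transparent background
    cairo_select_font_face(cr, "Sans", CAIRO_FONT_SLANT_NORMAL, CAIRO_FONT_WEIGHT_BOLD);
    cairo_set_font_size(cr, 32.0);
    cairo_move_to(cr, 10.0, 40.0);                   // baseline position inside the bitmap
    cairo_show_text(cr, text);
    cairo_surface_flush(surf);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, cairo_image_surface_get_data(surf));

    cairo_destroy(cr);
    cairo_surface_destroy(surf);
    return tex;   // draw this texture on a quad at the 3D position you want
}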
Switch to glutStrokeCharacter().
Or render your glutBitmapCharacter()s to a texture via FBO.
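For the first option, here is a minimal stroke-text variant of the function in the question; stroke glyphs are real geometry roughly 100 units tall, so the scale factor below is just an assumed starting point to tune for your scene:

// Stroke fonts go through the modelview matrix, so they can be translated,
// rotated, and scaled in 3D like any other geometry.
void renderStrokeString(float x, float y, float z, void *font, const char *string)
{
    glPushMatrix();
    glTranslatef(x, y, z);
    glScalef(0.01f, 0.01f, 0.01f);       // assumed scale; GLUT stroke glyphs are ~100 units tall
    for (const char *c = string; *c != '\0'; c++)
        glutStrokeCharacter(font, *c);   // e.g. font = GLUT_STROKE_ROMAN
    glPopMatrix();
}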
I am drawing 2D sprites with legacy OpenGL (2.0 or less) commands and I want to be able to change the rendering behaviour without using fragment shaders. In particular, I want to be able to change the hue of sprites to arbitrary colors while respecting the alpha values of the sprite, so that only the visible parts are colored differently. Is there an easy way to do that?
EDIT: To give an example: in the RPG Maker series you can tint any entity on the map scenes or battler sprites in the battle scenes. For instance, when something gets struck, the sprites of the attack animation are drawn while, at the same time, the sprite of the hit target flashes red; the duration, color, and intensity of everything can be adjusted. Right now I am just looking for the basic structure of how to change the hue of any sprite; the rest is only modeling and building on top of that.
Code: This is what I do to draw a sprite with transparency; it seems to work just as advertised.
[...] init() {
    glEnable(GL2.GL_TEXTURE_2D);
    glEnable(GL2.GL_BLEND);
    glBlendFunc(GL2.GL_SRC_ALPHA, GL2.GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL2.GL_COLOR_MATERIAL);
} [...]

drawSpriteTransparent(Sprite sprite, int x, int y, float transparency) {
    Texture t = sprite.getTexture();
    double tx = sprite.getTexutreX();
    double ty = sprite.getTextureY();
    double tw = sprite.getWidthInTexture();
    double th = sprite.getHeightInTexture();
    glBindTexture(GL2.GL_TEXTURE_2D, t.getTextureID());
    glColor4f(1, 1, 1, 1f - transparency);
    glBegin(GL2.GL_QUADS);
    {
        glTexCoord2d(tx, ty);
        glVertex3d(x, y, 0);
        glTexCoord2d(tx, ty + th);
        glVertex3d(x, y + sprite.getHeight(), 0);
        glTexCoord2d(tx + tw, ty + th);
        glVertex3d(x + sprite.getWidth(), y + sprite.getHeight(), 0);
        glTexCoord2d(tx + tw, ty);
        glVertex3d(x + sprite.getWidth(), y, 0);
    }
    glEnd();
    glBindTexture(GL2.GL_TEXTURE_2D, 0);
}
EDIT: Using glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE) does not yield the desired results. Using glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD) at least lets me increase the intensity of a desired color, but it does not allow me to push a value to the point where every visible pixel is completely of the desired color. What I would like is something like: (1,0,0,0.1) -> a bit more red; (1,0,0,0.5) -> a lot more red; (1,0,0,1) -> every rendered pixel is 100% red, while pixels with an original alpha value of 0 are still not rendered.
I imagine you could do something like this using fixed-pipeline multitexturing (loading your tint as a separate texture), as it has a dizzying array of options for how the two textures are combined.
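As a concrete (untested) sketch of that texture-environment machinery, here is a single-texture variant that uses GL_COMBINE with a constant environment color in place of a second texture; GL_INTERPOLATE mixes the sprite color and the tint by the constant's alpha, so an alpha of 1 gives fully tinted pixels while the sprite's own alpha still controls visibility (the calls are written in C-style GL, the tokens are the same through JOGL's GL2):

// Hypothetical sketch: tint RGB = desired hue, tint[3] = tint strength.
GLfloat tint[4] = { 1.0f, 0.0f, 0.0f, 0.5f };   // "a lot more red"

glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, tint);

// RGB = mix(texture RGB, constant RGB, constant alpha)
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_INTERPOLATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_CONSTANT);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB, GL_CONSTANT);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB, GL_SRC_ALPHA);

// Alpha = texture alpha, so fully transparent pixels stay invisible
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);

// draw the sprite quad as before, then restore the default mode:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);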
So I have been trying to understand the concept of 3D picking, but as I can't find any video guides or any concrete guides that actually speak English, it is proving very difficult. If anyone is experienced with 3D picking in LWJGL, could you give me an example with a line-by-line explanation of what everything means? I should mention that all I am trying to do is shoot a ray out of the center of the screen (not where the mouse is) and have it detect just a normal cube (rendered as 6 QUADs).
Though I am not an expert with 3D picking, I have done it before, so I will try to explain.
You mentioned that you want to shoot a ray rather than go by mouse position; as long as the ray goes straight into the screen (i.e., along the view direction), this method will still work, just as it would for any screen coordinate. If not, and you actually want to shoot a ray angled off in some other direction, things get a little more complicated, but I will not go into that (yet).
Now how about some code?
Object* picking3D(int screenX, int screenY){
    //Disable anything that could alter the flat index colours
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);

    //Render the scene with each object in its own index colour
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    orientateCamera();

    for(int i = 0; i < objectListSize; i++){
        GLubyte blue  = i % 256;
        GLubyte green = (i / 256) % 256;
        GLubyte red   = (i / 256 / 256) % 256;
        glColor3ub(red, green, blue);
        orientateObject(i);
        renderObject(i);
    }

    //Get the pixel (note: glReadPixels measures y from the bottom of the window)
    GLubyte pixelColors[3];
    glReadPixels(screenX, screenY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixelColors);

    //Calculate the index
    int index = pixelColors[0]*256*256 + pixelColors[1]*256 + pixelColors[2];

    //Return the object
    return getObject(index);
}
Code Notes:
screenX is the x location of the pixel, and screenY is the y location of the pixel (in screen coordinates)
orientateCamera() simply calls any glTranslate, glRotate, glMultMatrix, etc. needed to position (and rotate) the camera in your scene
orientateObject(i) does the same as orientateCamera, except for object 'i' in your scene
when I 'calculate the index', I am really just undoing the math I performed during the rendering to get the index back
The idea behind this method is that each object is rendered exactly as the user sees it, except that each model is drawn in a single solid colour. Then you check the colour of the pixel at the requested screen coordinate, and whichever model that colour is indexed to is your object!
I do recommend, however, adding a check for the background color (or your glClearColor), just in case you don't actually hit any objects.
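For example, assuming a white clear colour (so it cannot collide with the low object indices), the check right after glReadPixels could be as simple as:

// If the read-back pixel matches the (assumed white) clear colour, the ray
// hit empty background rather than an object.
if (pixelColors[0] == 255 && pixelColors[1] == 255 && pixelColors[2] == 255)
    return NULL;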
Please ask for further explanation if necessary.