OpenGL for matrix stack - C++

I have a Win32 application in which I want to use OpenGL just for its matrix stack, not for any rendering. That is, I want to use OpenGL to specify the camera, viewport, etc. so that I don't have to do the maths again. While creating the scene, I just want to project the points using gluProject and use the results. The projected points are passed to another library which creates the scene for me; all the window handles are created by the library itself and I don't have access to them.
The problem is that Windows needs a device context for initialization. But since I am not using OpenGL for any rendering, is there a way to use OpenGL without any window handle at all?
Without any explicit initialization, when I read back the matrices using glGet, I get garbage. Any thoughts on how to fix this?

I want to use openGL just for its matrix stack not for any rendering.
That's not what OpenGL is meant for. OpenGL is a drawing/rendering API, not a math library. In fact, the whole matrix math machinery has been stripped out of recent OpenGL versions (OpenGL 3 core and later) for that very reason.
Also, this matrix math is simple enough that you can write it yourself in less than 1k lines of C code. There's absolutely no benefit in abusing OpenGL for this.
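As a rough illustration (my own untested sketch, not part of the original answer), this is essentially all that gluProject does, assuming column-major matrices as OpenGL stores them:

struct Vec4 { double x, y, z, w; };
struct Mat4 { double m[16]; };   // column-major, same layout as glGetDoublev returns

Vec4 transform(const Mat4 &M, const Vec4 &v) {
    // standard 4x4 matrix * column vector
    Vec4 r;
    r.x = M.m[0]*v.x + M.m[4]*v.y + M.m[8]*v.z  + M.m[12]*v.w;
    r.y = M.m[1]*v.x + M.m[5]*v.y + M.m[9]*v.z  + M.m[13]*v.w;
    r.z = M.m[2]*v.x + M.m[6]*v.y + M.m[10]*v.z + M.m[14]*v.w;
    r.w = M.m[3]*v.x + M.m[7]*v.y + M.m[11]*v.z + M.m[15]*v.w;
    return r;
}

// Equivalent of gluProject: object coordinates -> window coordinates.
bool project(double ox, double oy, double oz,
             const Mat4 &modelview, const Mat4 &projection, const int viewport[4],
             double &winX, double &winY, double &winZ) {
    Vec4 p = transform(projection, transform(modelview, Vec4{ox, oy, oz, 1.0}));
    if (p.w == 0.0) return false;
    p.x /= p.w; p.y /= p.w; p.z /= p.w;            // perspective divide -> NDC
    winX = viewport[0] + (p.x * 0.5 + 0.5) * viewport[2];
    winY = viewport[1] + (p.y * 0.5 + 0.5) * viewport[3];
    winZ = p.z * 0.5 + 0.5;                        // depth in [0, 1]
    return true;
}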

The matrix stack could potentially live on graphics hardware in your implementation, so OpenGL is quite reasonable in insisting that you have an OpenGL context in order to use such functions. This is because the act of creating a context probably includes setting up the implementation mechanics required to store the matrix stack.
Even in a purely software-based OpenGL implementation, one would still expect context creation to call some equivalent of malloc to secure storage for the stack. If you happened to find an OpenGL implementation where creating a context wasn't necessary, I'd still steer clear of relying on that behaviour, since it's most likely undefined and could be broken in the next release of that implementation.
If it's C++ I'd just use std::stack with the Matrix class from your favorite linear algebra package if you're not using OpenGL for anything other than that.
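For example, a rough sketch (my own names; Mat4 stands in for whatever matrix type your linear algebra library provides) of a glPushMatrix/glPopMatrix replacement:

#include <stack>

std::stack<Mat4> matrixStack;   // seed it with one identity matrix before use

void pushMatrix()              { matrixStack.push(matrixStack.top()); }         // like glPushMatrix
void popMatrix()               { matrixStack.pop(); }                           // like glPopMatrix
void multMatrix(const Mat4 &m) { matrixStack.top() = matrixStack.top() * m; }   // like glMultMatrix
const Mat4 &currentMatrix()    { return matrixStack.top(); }                    // what glGet would read back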

I present to you my complete (open source) matrix class. Enjoy.
https://github.com/TheBuzzSaw/paroxysm/blob/master/newsource/CGE/Matrix4x4.h

I can recommend trying to implement those calls yourself. I did that once for a Palm app I wrote, tinyGL. What I learnt was that the documentation basically tells you in plain text what each call does.
For example, the verbatim code for tglFrustum and tglOrtho is below (note that I was using fixed-point maths to get some performance):
void tglFrustum(fix_t w, fix_t h, fix_t n, fix_t f) {
    matrix_t fm, m;
    fix_t f_sub_n;

    f_sub_n = sub_fix_t(f,n);

    fm[0][0] = mult_fix_t(_two_, div_fix_t(n,w));
    fm[0][1] = 0;
    fm[0][2] = 0;
    fm[0][3] = 0;
    fm[1][0] = 0;
    fm[1][1] = mult_fix_t(_two_, div_fix_t(n,h));
    fm[1][2] = 0;
    fm[1][3] = 0;
    fm[2][0] = 0;
    fm[2][1] = 0;
    fm[2][2] = inv_fix_t(div_fix_t(add_fix_t(f,n), f_sub_n));
    f = mult_fix_t(_two_, f);
    fm[2][3] = inv_fix_t(div_fix_t(mult_fix_t(f,n), f_sub_n));
    fm[3][0] = 0;
    fm[3][1] = 0;
    fm[3][2] = _minus_one_;
    fm[3][3] = 0;

    set_matrix_t(m, _matrix_stack[_toms]);
    mult_matrix_t(_matrix_stack[_toms], m, fm);
}
void tglOrtho(fix_t w, fix_t h, fix_t n, fix_t f) {
    matrix_t om, m;
    fix_t f_sub_n;

    f_sub_n = sub_fix_t(f,n);

    MemSet(om, sizeof(matrix_t), 0);
    om[0][0] = div_fix_t(_two_, w);
    om[1][1] = div_fix_t(_two_, h);
    om[2][2] = div_fix_t(inv_fix_t(_two_), f_sub_n);
    om[2][3] = inv_fix_t(div_fix_t(add_fix_t(f,n), f_sub_n));
    om[3][3] = _one_;

    set_matrix_t(m, _matrix_stack[_toms]);
    mult_matrix_t(_matrix_stack[_toms], m, om);
}
Compare those with the man pages for glFrustum and glOrtho

Related

Creating a Grayscale image in Visual C++ from a float array

I have an array of grayscale pixel values (floats as a fraction of 1) that I need to display, and then possibly save. The values just came from computations, so I have no libraries currently installed or anything. I've been trying to figure out the CImage libraries, but can't make much sense of what I need to do to visualize this data. Any help would be appreciated!
Thank you.
One possible approach which I've used with some success is to use D3DX's texture functions to create a Direct3D texture and fill it. There is some overhead in starting up D3D, but it provides you with multi-threadable texture creation and built-in-ish viewing, as well as saving to files without much more fuss.
If you're not interested in using D3D(X), some of the specifics here won't be useful, but the generator function should help you figure out how to output the data for any other library.
For example, assuming an existing D3D9 device pDevice and a noise generator (or other texture data source) pGen:
IDirect3DTexture9 * pTexture = nullptr;
D3DXCreateTexture(pDevice, 255, 255, 0, 0, D3DFMT_R8G8B8, D3DPOOL_DEFAULT, &pTexture);
D3DXFillTexture(pTexture, &texFill, pGen);
D3DXSaveTextureToFile("texture.png", D3DXIFF_PNG, pTexture, NULL);
The generator function:
VOID WINAPI texFill(
    D3DXVECTOR4* pOut,
    CONST D3DXVECTOR2* pTexCoord,
    CONST D3DXVECTOR2* pTexelSize,
    LPVOID pData)
{
    // For a prefilled array (pData is the last argument passed to D3DXFillTexture):
    float * pArray = (float *)pData;
    float initial = pArray[(int)(pTexCoord->y * 255) * 255 + (int)(pTexCoord->x * 255)];

    // Or, for a generator object instead of an array:
    // Generator * pGen = (Generator*)pData; // passed in as the third param to fill
    // float initial = pGen->GetPixel(pTexCoord->x, pTexCoord->y);

    // D3DXFillTexture expects normalised components in [0, 1]
    pOut->x = pOut->y = pOut->z = initial;
    pOut->w = 1.0f; // set alpha to opaque
}
D3DXCreateTexture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172800%28v=vs.85%29.aspx
D3DXFillTexture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172833(v=vs.85).aspx
D3DXSaveTextureToFile: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205433(v=vs.85).aspx
Corresponding functions are available for volume/3D textures. As they are already set up for D3D, you can simply render the texture to a flat quad to view, or use as a source in whatever graphical application you may want.
So long as your generator is thread-safe, you can run the create/fill/save in one thread per texture, and generate multiple slices or frames simultaneously.
I found that the best solution for this problem was to use the SFML library (www.sfml-dev.org). Very simple to use, but must be compiled from source if you want to use it with VS2010.
You can use the PNM image format without any libraries whatsoever (the format itself is trivial). However, it's pretty archaic and you'll need an image viewer that supports it. IrfanView, for example, supports it on Windows.
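As a minimal sketch (my own untested code; width, height and pixels stand for the data described in the question, with values in [0, 1]), writing a binary PGM, one of the PNM variants, looks like this:

#include <cstdio>

void writePGM(const char *path, const float *pixels, int width, int height) {
    FILE *f = fopen(path, "wb");
    if (!f) return;
    fprintf(f, "P5\n%d %d\n255\n", width, height);   // binary grayscale header
    for (int i = 0; i < width * height; ++i) {
        unsigned char v = (unsigned char)(pixels[i] * 255.0f + 0.5f);   // scale [0, 1] to [0, 255]
        fwrite(&v, 1, 1, f);
    }
    fclose(f);
}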

Qwt and QwtSeriesData

I originally used the C programming language, but now I need to do Qt programming (by the way, Qt is like a dream). I am going into more depth step by step, but my C++ object-oriented knowledge is weak; I hope it will get stronger. Nowadays I have to use Qwt, and I am stuck on the QwtSeriesData object. I need to know how I can set a series of data on this object in order to draw a curve using QwtPlot.
For example, my data looks like the following; how can I put it into QwtSeriesData?
float x[300];
float y[300];
Thanks.
My answer is for the latest Qwt version, 6.x (the latest at the moment).
Note: Qwt internally uses double for data representation, not float. So you should either use double, or implement your own QwtSeriesData subclass which holds float in memory but returns double when external components request data (that's a really bad way of doing things).
You can use one of the QwtSeriesData subclasses provided by Qwt:
QwtCPointerData or QwtPointArrayData.
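For instance, a rough sketch using QwtCPointerData, assuming your data is already in double arrays (QwtCPointerData does not copy the arrays, so they must outlive the curve; header names may vary slightly between Qwt 6 releases):

#include <qwt_plot_curve.h>
#include <qwt_point_data.h>

double x[300];
double y[300];
// ... fill x and y ...

QwtPlotCurve *curve = new QwtPlotCurve;
curve->setData(new QwtCPointerData(x, y, 300));   // the curve takes ownership of the series data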
This is how I do it:
QwtPlotCurve* curve = new QwtPlotCurve;
QPolygonF points;
for (unsigned int i = 0; i < 300; i++)
{
    points << QPointF(x[i], y[i]);
}
curve->setSamples(points);
You then need to attach the curve to the plot, as shown below.
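Assuming your QwtPlot instance is called plot (a name I've made up here), that's just:

curve->attach(plot);
plot->replot();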

What is the best way to detect mouse-location/clicks on object in OpenGL?

I am creating a simple 2D OpenGL game, and I need to know when the player clicks on or mouses over an OpenGL primitive (for example, one of the GL_QUADS that serves as a tile...). There doesn't seem to be a simple way to do this beyond brute force or opengl.org's suggestion of using a unique color for every one of my primitives, which seems a little hacky. Am I missing something? Thanks...
My advice: don't use OpenGL's selection mode or OpenGL rendering (the brute-force method you are talking about); use a CPU-based ray picking algorithm if you're in 3D. For 2D, as in your case, it's straightforward: it's just a test of whether a 2D point is inside a 2D rectangle, as sketched below.
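A minimal version of that test (my own names, assuming axis-aligned tiles described by their lower-left corner and size, in the same coordinate space as the mouse):

bool pointInRect(float px, float py, float x, float y, float w, float h) {
    // true if the point (px, py) lies inside the rectangle at (x, y) with size (w, h)
    return px >= x && px <= x + w && py >= y && py <= y + h;
}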
I would suggest using the hacky method if you want a quick implementation (quick in coding time, I mean), especially if you don't want to implement a quadtree with moving objects. If you are using OpenGL immediate mode, it's straightforward:
// Rendering part
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);

for(unsigned i = 0; i < tileCount; ++i){
    unsigned tileId = i + 1; // we increment the tile ID so it can't collide with the black background
    glColor3ub(tileId & 0xFF, (tileId >> 8) & 0xFF, (tileId >> 16) & 0xFF);
    renderTileWithoutColorNorTextures(i);
}

// Let's retrieve the tile ID
unsigned tileId = 0;
glReadPixels(mouseX, mouseY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE,
             (unsigned char *)&tileId);
if(tileId != 0){ // if we didn't pick the black background
    tileId--;
    // we picked tile number tileId
}

// We don't want to show this to the user, so we clean the screen
glClearColor(...); // the color you want
glClear(GL_COLOR_BUFFER_BIT);

// Now, render your real scene
// ...

// And we swap
whateverSwapBuffers(); // might be glutSwapBuffers, glx, ...
You can use OpenGL's glRenderMode(GL_SELECT) mode. Here is some code that uses it, and it should be easy to follow (look for the _pick method)
(and here's the same code using GL_SELECT in C)
(There have been cases - in the past - of GL_SELECT being deliberately slowed down on 'non-workstation' cards in order to discourage CAD and modeling users from buying consumer 3D cards; that ought to be a bad habit of the past that ATI and NVidia have grown out of ;) )

Rewriting a simple Pygame 2D drawing function in C++

I have a 2D list of vectors (say 20x20, i.e. 400 points) and I am drawing these points on a screen like so:

for row in grid:
    for point in row:
        pygame.draw.circle(window, white, (point.x, point.y), 2, 0)
pygame.display.flip()  # redraw the screen

This works perfectly, however it's much slower than I expected.
I want to rewrite this in C++ and hopefully learn some stuff on the way (I am doing a unit on C++ at the moment, so it'll help). What's the easiest way to approach this? I have looked at DirectX, and have so far followed a bunch of tutorials and drawn some rudimentary triangles. However, I can't find a simple 'draw point' function.
DirectX doesn't have functions for drawing just one point; it operates on vertex and index buffers only. If you want a simpler way to put up just one point, you'll need to write a wrapper.
For drawing lists of points you'll need to use DrawPrimitive(D3DPT_POINTLIST, ...). However, there is no easy way to just plot a point: you'll have to prepare a buffer, lock it, fill it with data, and then draw the buffer (see the sketch below). Or you could use dynamic vertex buffers to optimise performance. There is a DrawPrimitiveUP call that is supposed to render primitives stored in system memory (instead of using buffers), but as far as I know it doesn't work (it may silently discard primitives) with pure devices, so you'd have to use software vertex processing.
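A rough, untested sketch of that prepare/lock/fill/draw sequence with D3D9 pre-transformed vertices (device, points and the vertex layout here are my own assumptions, not part of the original answer):

struct PointVertex { float x, y, z, rhw; DWORD color; };   // pre-transformed screen-space vertex
const DWORD kPointFVF = D3DFVF_XYZRHW | D3DFVF_DIFFUSE;

// Assume "points" is a std::vector<PointVertex> with z = 0, rhw = 1 and a colour per point.
IDirect3DVertexBuffer9 *vb = nullptr;
device->CreateVertexBuffer(points.size() * sizeof(PointVertex), D3DUSAGE_WRITEONLY,
                           kPointFVF, D3DPOOL_DEFAULT, &vb, NULL);

void *data = nullptr;
vb->Lock(0, 0, &data, 0);                                   // lock the whole buffer
memcpy(data, points.data(), points.size() * sizeof(PointVertex));
vb->Unlock();

device->SetFVF(kPointFVF);
device->SetStreamSource(0, vb, 0, sizeof(PointVertex));
device->DrawPrimitive(D3DPT_POINTLIST, 0, points.size());  // one primitive per point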
In OpenGL you have glVertex2f and glVertex3f. Your call would look like this (there might be a typo or syntax error - I didn't compile/run it):
glBegin(GL_POINTS);
glColor3f(1.0, 1.0, 1.0); // white
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        glVertex2f(points[y][x].x, points[y][x].y); // plot point
glEnd();
OpenGL is MUCH easier to play around and experiment with than DirectX. I'd recommend taking a look at SDL and using it in conjunction with OpenGL. Or you could use GLUT instead of SDL.
Or you could try Qt 4. It has very good 2D rendering routines.
When I first dabbled with game/graphics programming I became fond of Allegro. It's got a huge range of features and a pretty easy learning curve.

Slow C++ DirectX 2D Game

I'm new to C++ and DirectX; I come from XNA.
I have developed a game like Fly The Copter.
What I've done is create a class named Wall.
While the game is running I draw all the walls.
In XNA I stored the walls in an ArrayList, and in C++ I've used a vector.
In XNA the game runs fast, but in C++ it's really slow.
Here's the C++ code:
void GameScreen::Update()
{
    //Update Walls
    int len = walls.size();
    for(int i = wallsPassed; i < len; i++)
    {
        walls.at(i).Update();
        if (walls.at(i).pos.x <= -40)
            wallsPassed += 2;
    }
}

void GameScreen::Draw()
{
    //Draw Walls
    int len = walls.size();
    for(int i = wallsPassed; i < len; i++)
    {
        if (walls.at(i).pos.x < 1280)
            walls.at(i).Draw();
        else
            break;
    }
}
In the Update method I decrease the X value by 4.
In the Draw method I call sprite->Draw (ID3DXSprite).
That's the only code that runs in the game loop.
I know this is bad code; if you have an idea to improve it, please help.
Thanks, and sorry about my English.
Try replacing all occurrences of at() with the [] operator. For example:
walls[i].Draw();
and then turn on all optimisations. Both [] and at() are function calls - to get the maximum performance you need to make sure that they are inlined, which is what upping the optimisation level will do.
You can also do some minimal caching of a wall object - for example:
for(int i = wallsPassed; i < len; i++)
{
    Wall & w = walls[i];
    w.Update();
    if (w.pos.x <= -40)
        wallsPassed += 2;
}
Try to narrow down the cause of the performance problem (also termed profiling). I would try drawing only one object while continuing to update all the objects. If it's suddenly faster, then it's a DirectX drawing problem.
Otherwise, try drawing all the objects but updating only one wall. If it's faster then, your update() function may be too expensive.
How fast is 'fast'?
How slow is 'really slow'?
How many sprites are you drawing?
How big is each one as an image file, and in pixels drawn on-screen?
How does performance scale (in XNA/C++) as you change the number of sprites drawn?
What difference do you get if you draw without updating, or vice versa?
Maybe you have just forgotten to turn on release mode :) I had some problems with that in the past; I thought my code was very slow when it was just debug mode. If that's not it, you may have a problem with the rendering part, or with a huge number of objects. The code you provided looks good...
Have you tried multiple buffers (a.k.a. Double Buffering) for the bitmaps?
The typical scenario is to draw in one buffer, then while the first buffer is copied to the screen, draw in a second buffer.
Another technique is to have a huge "logical" screen in memory. The portion drawn on the physical display is a viewport, or view, into a small area of the logical screen. Moving the background (or screen) then just requires a copy on the part of the graphics processor.
You can aid batching of sprite draw calls. Presumably your Draw call calls your single instance of ID3DXSprite::Draw with the relevant parameters.
You can get much improved performance by calling ID3DXSprite::Begin (with the D3DXSPRITE_SORT_TEXTURE flag set) before your rendering and ID3DXSprite::End when you've done all of it, as sketched below. ID3DXSprite will then sort all your sprite calls by texture to decrease the number of texture switches and batch the relevant calls together. This will improve performance massively.
It's difficult to say more, however, without seeing the internals of your Update and Draw calls. The above is only a guess...
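A rough sketch of what that looks like, assuming a single shared sprite object and that each Wall::Draw ends up calling ID3DXSprite::Draw on it:

sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE);
for (int i = wallsPassed; i < (int)walls.size(); i++)
    walls[i].Draw();   // each call queues an ID3DXSprite::Draw on the shared sprite
sprite->End();         // the queued sprites are sorted by texture and flushed here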
Drawing every single wall with a separate draw call is a bad idea. Try to batch the data into a single vertex buffer/index buffer and send it in a single draw call. That's a saner approach.
Anyway, to find out WHY it is slow, try some CPU and GPU profilers (PerfHUD, Intel GPA, etc.) to establish first of all WHAT the bottleneck is (the CPU or the GPU). Then you can work on alleviating the problem.
The lookups into your list of walls are unlikely to be the source of your slowdown. The cost of drawing objects in 3D will typically be the limiting factor.
The important parts are your draw code, the flags you used to create the DirectX device, and the flags you use to create your textures. My stab in the dark: check that you initialize the device as HAL (hardware 3D) rather than REF (software 3D), as in the sketch below.
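For reference, a sketch of requesting the HAL device (untested; d3d, hWnd and presentParams are assumed to be set up elsewhere):

IDirect3DDevice9 *device = nullptr;
d3d->CreateDevice(D3DADAPTER_DEFAULT,
                  D3DDEVTYPE_HAL,                        // hardware 3D, not D3DDEVTYPE_REF
                  hWnd,
                  D3DCREATE_HARDWARE_VERTEXPROCESSING,
                  &presentParams,
                  &device);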
Also, how many sprites are you drawing? Each draw call has a fair amount of overhead. If you make more than a couple of hundred per frame, that will be your limiting factor.