How to draw a 2D image using OpenGL - C++

I started learning OpenGL a few days ago, and I'm mainly focusing on using OpenGL for 2D game development...
I've learned the basics of OpenGL - Creating a window, handling keyboard input.
What I did not find in tutorials (and could not find a clear solution for on the net) is how to draw a 2D image (such as a "player" image in a casual 2D game) using OpenGL.
I've learned XNA before, and remember there was a class called Texture2D, though I did not find anything like it in OpenGL...
I'm not looking toward 3D currently in OpenGL...
Edit:
If it makes it any easier, I can have the image's RGB data in an array (sized [WIDTH][HEIGHT], where every cell contains the pixel's R, G, B values).

First of all, OpenGL is a much lower-level API than XNA, and it is not primarily focused on 2D rendering.
In older versions of OpenGL, the function glDrawPixels() could serve the purpose of old-school bit blitting to the screen, but it was ridiculously inefficient.
The efficient way of rendering 2D images, like sprites for a 2D game, in modern OpenGL is by means of drawing a flat quadrilateral with an orthographic projection camera and applying a texture on top of it.
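Here is a minimal sketch of that approach, using the legacy fixed-function pipeline for brevity (the shader-based version is the same idea with more setup). It assumes a GL context already exists; pixels, windowWidth, windowHeight, x and y are placeholder names for your own image data and sprite position:

    // Once, at load time: upload the [WIDTH][HEIGHT] RGB array as a texture.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed RGB bytes
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, WIDTH, HEIGHT, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);

    // Every frame: a pixel-aligned orthographic camera, then a textured quad.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, windowWidth, windowHeight, 0, -1, 1); // y points down, like most 2D APIs
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(x,         y);
    glTexCoord2f(1, 0); glVertex2f(x + WIDTH, y);
    glTexCoord2f(1, 1); glVertex2f(x + WIDTH, y + HEIGHT);
    glTexCoord2f(0, 1); glVertex2f(x,         y + HEIGHT);
    glEnd();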
I can't remember any tutorial right now that covers 2D rendering specifically, but I think it would be best for you to first get a good grasp on texture mapping and the basics of polygon rendering. For this, ogldev is a good starting point; opengl-tutorial.org is also a great source.

Related

Drawing a simple anti-aliased stickman in OpenGL

I would like to draw a simple 2D stickman on the screen. I also want it to be anti-aliased.
The problem is that I want to use a bones system, which I will write once I know how to draw the stickman itself based on the joint positions. This means I can't use sprites - I want my stickman to be fully controllable in code.
It would be great if it were possible to draw curves too.
Drawing a 3D stickman using a model would also be great, if not better. The camera will be positioned as if it's 2D, but I would still have depth. The problem is that I only have experience in Maya, and exporting and vertex-weighting the model in OpenGL seems like a mess...
I tried to find libraries for 2D anti-aliased drawing, or to enable multisampling and draw normally, but I had no luck. I also tried to use OpenGL's native anti-aliasing, but it seems deprecated and the line joins are bad...
I don't want it to be too complicated because, well, it shouldn't be - it's just the first part of my program, and it's drawing a stickman...
I hope you guys can help me, I'm sure you know better than me :)
You could enable GL_LINE_SMOOTH. To check whether your device supports the line width you need for smooth lines, you can query glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, range);
If you want your code to be generic, you can also use antialiased textures.
Take a look at this link: http://www.opengl.org/resources/code/samples/advanced/advanced97/notes/node62.html
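A minimal sketch of that legacy line-smoothing setup; note that GL_LINE_SMOOTH has no visible effect unless blending is enabled:

    glEnable(GL_LINE_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_BLEND);                               // smoothing needs blending
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    GLfloat range[2];
    glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, range);   // [min, max] supported widths
    glLineWidth(3.0f);                                // keep within range[0]..range[1]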
The most reliable way to get antialiasing is to use a GL library that knows how to request an antialiased (multisampled) GL context, for example SDL. As for the stickman, you can draw him with colored polygons.
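For example, with SDL you would request a multisampled context before creating the window; a short sketch, assuming 4x MSAA:

    // Request a 4x multisampled framebuffer before window/context creation.
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLEBUFFERS, 1);
    SDL_GL_SetAttribute(SDL_GL_MULTISAMPLESAMPLES, 4);
    // ...create the window and GL context as usual, then:
    glEnable(GL_MULTISAMPLE);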

Procedural Planets, Heightmaps and textures

I am currently working on an OpenGL procedural planet generator. I hope to use it for a space RPG that will not allow players to go down to the surface of a planet, so I have ignored anything ROAM-related. At the moment I am drawing a cube with VBOs and mapping it onto a sphere, as shown here.
I am familiar with most fractal heightmap-generating techniques and have already implemented my own version of midpoint displacement (not that useful in this case, I know).
My question is: what is the best way to procedurally generate the heightmap? I have looked at libnoise, which allows me to make tileable heightmaps/textures, but as far as I can see I would need to generate a net like:
Leaving the tiling obvious.
Could anyone advise me on the best route to take?
Any input would be much appreciated.
Thanks,
Henry.
It looks like you understand the problem with generating a flat, seamless surface and then trying to map it onto a sphere.
How about using a 3D noise function instead? A 3D noise function takes 3 coordinates instead of 2 as its input, so imagine a 3D array full of generated numbers (instead of a 2D array). Thus, once you have a 3D noise function, you can generate a 2D texture, but instead of using 2D coordinates for each pixel, use the 3D coordinates of where that pixel would be on the sphere. (I hope that convoluted sentence made sense!)
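A sketch of that idea, shown with a simple latitude/longitude mapping (with your cube-to-sphere mapping you would instead use the cube vertices normalized onto the sphere). noise3 here stands in for whatever 3D noise function you use (e.g. Perlin or simplex); it is not a real library call:

    #include <cmath>

    float noise3(float x, float y, float z); // your 3D noise implementation

    // Fill a w*h heightmap by sampling 3D noise at the point on the unit
    // sphere each texel maps to; the seams then match up automatically.
    void buildSphereHeightmap(float* height, int w, int h) {
        const float PI = 3.14159265f;
        for (int j = 0; j < h; ++j) {
            for (int i = 0; i < w; ++i) {
                float lon = 2.0f * PI * i / w;        // longitude, 0..2*pi
                float lat = PI * j / h - PI / 2.0f;   // latitude, -pi/2..pi/2
                float x = std::cos(lat) * std::cos(lon);
                float y = std::sin(lat);
                float z = std::cos(lat) * std::sin(lon);
                height[j * w + i] = noise3(x, y, z);
            }
        }
    }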
Take a look at halfway-down this page about Perlin noise: https://web.archive.org/web/20120829114554/http://local.wasp.uwa.edu.au/~pbourke/texture_colour/perlin/
I think it describes exactly what you want with regards to spheres.
You may also want to check out this article from 2004 on how to 'split' up a sphere into manageable parts.
http://www.gamedev.net/reference/articles/article2074.asp

What is the most efficient way to draw voxels (cubes) in OpenGL?

I would like to draw voxels using OpenGL, but it doesn't seem to be supported directly. I made a cube-drawing function that uses 24 vertices (4 vertices per face), but the frame rate drops when you draw 2500 cubes. I was hoping there was a better way. Ideally I would just like to send a position, edge size, and color to the graphics card. I'm not sure if I can do this by using GLSL to compile instructions as part of the fragment shader or vertex shader.
I searched Google and found out about point sprites and billboard sprites (same thing?). Could those be used as an alternative to drawing a cube more quickly? If I use 6, one for each face, it seems like that would send much less information to the graphics card and hopefully gain me a better frame rate.
Another thought: maybe I can draw multiple cubes using one glDrawElements call?
Maybe there is a better method altogether that I don't know about? Any help is appreciated.
Drawing voxels with cubes is almost always the wrong way to go (the exceptional case is ray-tracing). What you usually want to do is put the data into a 3D texture and render slices depending on camera position. See this page: https://developer.nvidia.com/gpugems/GPUGems/gpugems_ch39.html and you can find other techniques by searching for "volume rendering gpu".
EDIT: When writing the above answer I didn't realize that the OP was, most likely, interested in how Minecraft does it. For techniques to speed up Minecraft-style rasterization, check out Culling techniques for rendering lots of cubes. Though with recent advances in graphics hardware, rendering Minecraft through raytracing may become a reality.
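As a sketch of the 3D-texture route suggested above (GL 3+ internal formats assumed; voxelData is a hypothetical W*H*D array of density bytes):

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, W, H, D, 0,
                 GL_RED, GL_UNSIGNED_BYTE, voxelData);
    // Rendering then draws view-aligned slices through the volume with
    // blending enabled, sampling this texture in the fragment shader.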
What you're looking for is called instancing. You could take a look at glDrawElementsInstanced and glDrawArraysInstanced for a couple of possibilities. Note that these were only added as core operations relatively recently (OGL 3.1), but have been available as extensions quite a while longer.
nVidia's OpenGL SDK has an example of instanced drawing in OpenGL.
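A sketch of the instanced path; cubeVao, offsetVbo, and cubeCount are placeholders for your own objects:

    // One cube mesh, drawn cubeCount times in a single call.
    glBindVertexArray(cubeVao);               // cube vertices + 36 indices bound here

    glBindBuffer(GL_ARRAY_BUFFER, offsetVbo); // one vec3 position per cube
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glVertexAttribDivisor(1, 1);              // advance attribute 1 once per instance
                                              // (core in GL 3.3, ARB_instanced_arrays before)

    glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, nullptr, cubeCount);

    // In the vertex shader, combine the per-vertex position with the
    // per-instance offset (plus a uniform edge size/color if desired).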
First you really should be looking at OpenGL 3+ using GLSL. This has been the standard for quite some time. Second, most Minecraft-esque implementations use mesh creation on the CPU side. This technique involves looking at all of the block positions and creating a vertex buffer object that renders the triangles of all of the exposed faces. The VBO is only generated when the voxels change and is persisted between frames. An ideal implementation would combine coplanar faces of the same texture into larger faces.
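A sketch of the exposed-face test that drives such a mesh build (solidAt and emitFace are hypothetical helpers: one queries the voxel grid, the other appends 4 vertices / 6 indices for one cube face to the CPU-side buffers that later fill the VBO):

    // Rebuild only when voxels change; keep the resulting VBO across frames.
    for (int z = 0; z < SIZE; ++z)
      for (int y = 0; y < SIZE; ++y)
        for (int x = 0; x < SIZE; ++x) {
          if (!solidAt(x, y, z)) continue;      // skip empty cells
          // Emit a face only when the neighbouring cell is empty.
          if (!solidAt(x + 1, y, z)) emitFace(POS_X, x, y, z);
          if (!solidAt(x - 1, y, z)) emitFace(NEG_X, x, y, z);
          if (!solidAt(x, y + 1, z)) emitFace(POS_Y, x, y, z);
          if (!solidAt(x, y - 1, z)) emitFace(NEG_Y, x, y, z);
          if (!solidAt(x, y, z + 1)) emitFace(POS_Z, x, y, z);
          if (!solidAt(x, y, z - 1)) emitFace(NEG_Z, x, y, z);
        }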

3D rendering of a surface from a depthmap

Using stereovision, I am producing depthmaps representing the 3D environment as viewed from a camera. There is one depthmap per "keyframe" associated with a camera position. The goal is to translate those 2D depthmaps into 3D space (and later merge them to reconstruct the whole environment).
What would be the best (most efficient) way to translate those depthmaps into 3D? Each depthmap is 752x480 pixels, so the number of triangles can grow quite fast. I would like an automatic system to manage the level of detail of the objects.
My team uses Ogre3D, so it would be great to find a solution with it. What I am looking for is very similar to what the Terrain component does, except that I want to be able to put the resulting objects wherever I want (translation, rotation), and I think Terrain can't do that.
I am quite new to Ogre3d so please forgive me if there is a straightforward solution I should know. If another tool than Ogre3d is more appropriate to my problem, I'd be happy to learn about it!
It's not clear what you mean by "merge the depthmaps with the environment".
Anyway, in your case, you seem stuck with making them 3D using terrain heightmap techniques.
Since your depthmap is screen-aligned, you can use a simple screen-space raycasting technique: write a compositor in Ogre3D that takes the depthmap and transforms it into the pixels you want.
Translation and rotation from a depthmap may be limited to x/y on screen; as with terrain heightmaps (you cannot have caves using heightmaps), you are missing a dimension.
Not directly related, but it might help: in pure screen space there is a technique called "position reconstruction" that helps recover world-space positions of objects, but only if you have a lot of information about the camera used to generate the depthmap, for example: http://www.gamerendering.com/2009/12/07/position-reconstruction/
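For reference, the core of reconstructing positions from a depthmap is a pinhole back-projection like this sketch (fx, fy, cx, cy are the intrinsics of the camera that produced the depthmap, which you should have per keyframe from your stereo setup):

    struct Vec3 { float x, y, z; };

    // Back-project depthmap pixel (u, v) with depth z (in camera units) to a
    // camera-space point; transform by the keyframe pose to get world space.
    Vec3 backProject(int u, int v, float z,
                     float fx, float fy, float cx, float cy) {
        Vec3 p;
        p.x = (u - cx) * z / fx;
        p.y = (v - cy) * z / fy;
        p.z = z;
        return p;
    }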

OpenGL: Textured Primitives + High Framerate

Short version: What's the best practice going forward for efficiently rendering large numbers of independent texture-mapped, lighted 2D/3D primitives (circles, rects, etc.) in OpenGL?
For example: a typical particle system using billboarded quads/triangles, point sprites, or whatever other technique, with blending.
(Image: http://www.codingthewheel.com/image.axd?picture=lucent1.jpg)
Because after reading this thread on the messiness of OpenGL versioning/deprecation I'm starting to have my doubts.
My specific question is not the ABCs of displaying primitives in OpenGL, but rather how to do so efficiently in post-deprecation (or pre-deprecation) OpenGL, in a way that's going to be compatible with a wide range of commodity hardware and in a way that's not going to break or itself get deprecated, five years down the line.
Thanks!
I'm still trying to get a handle on the post-deprecation OpenGL world myself.
From what I understand though, the recommended methods for specifying geometry are Vertex Buffer Objects (VBOs) or vertex arrays. VBOs are the first preference, because the vertex data lives in the GPU's memory.
Also, you have to use shaders, because all the fixed-function pipeline functionality is deprecated.
This stuff all works in OpenGL 2.1 and above (and OpenGL ES 2.0, it seems).
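A sketch of that VBO path (vertices and vertexCount are placeholders; a shader program is assumed bound, with position at attribute 0 and texcoord at attribute 1):

    struct Vertex { float x, y, z, u, v; };

    // Upload the batch once; it then lives in GPU memory.
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex),
                 vertices, GL_DYNAMIC_DRAW); // GL_STATIC_DRAW if geometry never changes

    // Point the shader's attributes at the interleaved buffer.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);

    glDrawArrays(GL_TRIANGLES, 0, vertexCount);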