I'm very new to OpenGL. I want to learn it from the Blue Book (OpenGL superbible 6th edition).
When I compile the first program with Visual Studio 2013, everything goes well, except that a white window appears and then the program quits with code 0.
The program is:
// Include the "sb6.h" header file
#include "sb6.h"

// Derive my_application from sb6::application
class my_application : public sb6::application
{
public:
    // Our rendering function
    void render(double currentTime)
    {
        // Simply clear the window with red
        static const GLfloat red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
        glClearBufferfv(GL_COLOR, 0, red);
    }
};

// Our one and only instance of DECLARE_MAIN
DECLARE_MAIN(my_application);
I think both the compiling and linking steps are working fine, because the exit code is 0. But I cannot figure out why I don't see a red output window. Please help me.
You are basically dealing with a real-time application, which means each frame takes very little time to render. Rendering a frame means OpenGL takes all the commands you used to define the scene (the geometry, the buffer settings, the shaders, etc.) and renders one frame as quickly as it can. Generally, if your application needs to be real-time, it will have to render this scene at more than 30 frames per second.
So what your program basically does is render this one frame and then quit. Normally (you don't say in your post which framework you use to create your application, GLUT? GLFW? You say you use the code from the blue book, but it's not clear which windowing library it wraps; it's obviously a wrapper around something else), a typical OpenGL app does this (in pseudo C/C++ code, assuming some arbitrary framework to deal with keyboard/mouse events, etc.):
bool run = true;

main() {
    ...
    while (run) {
        event e = get_event(); // this is a key or mouse event
        process_event(e);
        render_frame();
    }
    ...
}

void process_event(event e)
{
    if (e.type == KEYBOARD_EVENT && e.value == ESC) { run = false; return; }
    ...
}
The idea is that you run the render function within an infinite loop. Each time the program iterates over the loop, it renders the content of your OpenGL scene to the screen. And since it's an infinite loop, the window stays on the screen until you decide to kill the program (or implement some mechanism to escape the loop when specific keys are pressed, typically the escape key).
The most basic way of getting this to work is to use an infinite loop:
while (1) {
    render_frame();
}
and do a Ctrl+C or interrupt/kill your program. That way you don't have to deal with keys, etc., and you can at least see what your program does before moving on to learning how to handle keys.
Also, I am not sure your code will draw anything. First, if you use a double buffer (generally the default these days), you will need to swap buffers to see even the clear call take effect. Second, you will need to add some geometry to your scene. Note, however, that if you use OpenGL 4, for example, you will need to declare and use shaders to see anything, and that is not easy to get working the first time.
Note that the infinite loop may well be embedded within the macro DECLARE_MAIN. That is the problem with using a framework like the one in your example: you don't know what's happening elsewhere in the code or how things are implemented. For example, the buffer swap may also be happening inside DECLARE_MAIN. I understand why they use macros like that for teaching (it hides all the complexity of getting an OpenGL app running), but the downside is that it stops you from truly understanding how things work. I personally think this is not the best way to teach graphics, especially OpenGL.
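For what it's worth, a macro like DECLARE_MAIN plausibly expands to something along these lines (a guess for illustration only; I haven't checked the actual sb6 sources, and the real implementation may differ):

// Hypothetical sketch of what a DECLARE_MAIN-style macro might expand to.
#define DECLARE_MAIN(AppClass)            \
    int main(int argc, const char **argv) \
    {                                     \
        AppClass app;                     \
        app.run(); /* presumably contains the render loop and the buffer swap */ \
        return 0;                         \
    }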
The blue book is great, but I would also recommend that you find some examples on the web showing how to render, for example, a simple triangle in GL:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/
There are quite a few on the web which are simple, well explained, etc.
You will need to choose your framework first though. I recommend GLFW if you can.
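To make this concrete, here is a minimal sketch of a GLFW program that clears the window to red inside exactly this kind of loop (this is plain GLFW, not the book's sb6 framework; the window size and title are arbitrary):

#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    GLFWwindow *window = glfwCreateWindow(800, 600, "red", NULL, NULL);
    glfwMakeContextCurrent(window);
    while (!glfwWindowShouldClose(window)) // runs until the user closes the window
    {
        glClearColor(1.0f, 0.0f, 0.0f, 1.0f); // red
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window); // show the back buffer we just cleared
        glfwPollEvents();        // process keyboard/mouse events
    }
    glfwTerminate();
    return 0;
}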
Also, while its lessons on OpenGL haven't been written yet, I would recommend you check www.scratchapixel.com in the future. It will explain how OpenGL works and guide you step by step through getting a simple app running.
If you have more questions please add them in your comments.
Related
I'm trying to make an SDL2 adapter for a graphics library. I believe this library assumes that everything it draws to the screen stays on the screen as long as nothing is drawn on top of it. (See the end of the question for details about this.)
What would be the best way to do it?
I've come across multiple solutions, including:
Hardware rendering without calling SDL_RenderClear. The problem with this is that SDL uses a double buffer, so the contents can end up in either of them, and I end up seeing only a subset of the render at a time.
Software rendering, in which, if I'm understanding SDL correctly, I would have a surface mapped to a texture, so I could render the texture and edit the surface's pixels field in main memory. This would be very slow; also, since the library expects everything to be rendered instantly and has no notion of frames, it would mean sending the data to the GPU on every update (even for single pixels).
I'm probably missing something about SDL2, and definitely about the library (Adafruit-GFX-Library). What does "transaction API" mean in this context? I've not been able to find any information about it, and I feel like it could be something important.
In my understanding of SDL2 and rendering application programming in general, SDL2 is designed so that you draw a complete frame every time, meaning you would clear your "current window" either by OpenGL API calls
glClearColor(0, 1.0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
which clear the current OpenGL context's frame buffer (of which you would have two), or by using the SDL2 renderer, which I am not familiar with.
Then you would swap the buffers, and repeat. (Which fits perfectly with this architecture proposal I am relying on)
So either you would have to somehow replay the draw commands from your library for the second frame, or you could disable double buffering, at least for the OpenGL backend, with
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0);
(For the rest of the OpenGL SDL2 setup code, see this GitHub repository with a general helper class of mine.)
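For context, here is a minimal sketch of where that attribute call belongs; it must be set before the window is created (the window title and size here are arbitrary):

// Sketch: SDL_GL_SetAttribute must be called before SDL_CreateWindow.
SDL_Init(SDL_INIT_VIDEO);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0); // request a single-buffered context
SDL_Window *win = SDL_CreateWindow("adapter", SDL_WINDOWPOS_CENTERED,
                                   SDL_WINDOWPOS_CENTERED, 320, 240,
                                   SDL_WINDOW_OPENGL);
SDL_GLContext ctx = SDL_GL_CreateContext(win);
// With single buffering, a glFlush() after drawing makes the results visible
// without a buffer swap.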
I have written a pathtracer in C++, and now I want to display the rendering process in real time, so I'm wondering what the simplest/best way to do this is.
Basically, the rendered image updates after every iteration, so I just need to retrieve it, display it in a separate window, and update it.
I was thinking about using DirectX, and it seems that I could probably also do it with OpenCV, but I'm just looking for a simple way which doesn't require adding a lot of new code.
I am using C++ on Windows.
If I understand correctly, your path tracer probably outputs a color per emitted ray? If that is the case, and you are thinking about displaying the rendered image in a separate window, I'd suggest using SDL2. There's a great set of tutorials concerning real-time graphics using C++ and SDL by Lazy Foo' Productions.
Excerpt taken from official SDL documentation (without cruft needed to initialize windows) regarding SDL_Surface, which you will probably be using:
/* This is meant to show how to edit a surface's pixels on the CPU, but
   normally you should use SDL_FillRect() to wipe a surface's contents. */
void WipeSurface(SDL_Surface *surface)
{
    /* This is fast for surfaces that don't require locking. */
    /* Once locked, surface->pixels is safe to access. */
    SDL_LockSurface(surface);

    /* This assumes that color value zero is black. Use
       SDL_MapRGBA() for more robust surface color mapping! */
    /* height times pitch is the size of the surface's whole buffer. */
    SDL_memset(surface->pixels, 0, surface->h * surface->pitch);

    SDL_UnlockSurface(surface);
}
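To display your path tracer's progress, a common pattern is to upload the CPU-side image to a streaming texture once per iteration and present it. Here is a sketch assuming a 32-bit RGBA buffer; 'pixels', 'width', 'height', 'renderer', 'rendering' and 'trace_one_iteration' are placeholders for your own code:

// Sketch: upload the CPU-side image each iteration and present it.
SDL_Texture *tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                     SDL_TEXTUREACCESS_STREAMING,
                                     width, height);
while (rendering) {
    trace_one_iteration(pixels);                      // your path tracer's update
    SDL_UpdateTexture(tex, NULL, pixels, width * 4);  // pitch = bytes per row
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, tex, NULL, NULL);        // stretch to the window
    SDL_RenderPresent(renderer);
}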
I want to create a layer between any OpenGL-based application and the original OpenGL library. It should seamlessly intercept OpenGL calls made by the application, then either render and send images to the display, or send the OpenGL stream to a rendering cluster.
I have completed my opengl32.dll to replace the original library, but I don't know what to do next.
How do I convert OpenGL calls to images, and what is an OpenGL stream?
For an accurate description, visit the OpenGL Wrapper.
First and foremost, OpenGL is not a library; it's an API. The opengl32.dll you have on your system is a library that provides the API and acts as an anchoring point for the actual graphics driver to attach to programs.
Next, it's a terrible idea to intercept OpenGL calls and turn them into something different, like multiple viewports. It may work for the fixed-function pipeline, but as soon as shaders get involved it will break the program you hooked into. OpenGL is designed as an API to draw things to the screen; it's not a scene graph. Programs expect that when they make OpenGL calls, they will produce an image in a pixel buffer according to their drawing commands. If you hook into that process and wildly alter the outcome, any graphics algorithm that relies on the visual outcome of the previous rendering for the following steps will break. For example, any form of shadow mapping will be broken by what you do.
Also, things like multiple-viewport hacks will likely not work if the program does frustum culling internally before making the actual OpenGL calls. Again, this is because OpenGL is a drawing API, not a scene graph.
In the end, yes, you can hook into OpenGL, but whatever you do, you must make sure that the OpenGL calls made by the application get executed according to the specification. There is an authoritative OpenGL specification for a reason, namely that programs rely on it to produce predictable results.
OpenGL almost certainly allows you to do the things you want without crazy modifications to it. Multiple viewpoints can be done in your render function as follows:
glViewport(/* view 1 window coords */ 0, 0, window_width, window_height / 2);
// Do all of your rendering for the first camera.
glViewport(/* view 2 window coords */ 0, window_height / 2, window_width, window_height / 2);
glMatrixMode(GL_MODELVIEW);
// Redo your modelview matrix for a different viewpoint here, then re-render it all.
It's as simple as rendering twice into two areas which you specify with glViewport. If you Google around, you can find a more detailed tutorial. I highly recommend not messing with OpenGL internals, as a good deal of it is implemented by the graphics card, and you should really just use what you're given. Chances are that if you're modifying it, you're doing it wrong; there is probably a FAR better way to do it.
Good luck!
I'm currently writing an interactive simulator which displays the evolution of a system of particles. I'm developing on Windows 7 32-bit, using Visual Studio.
Currently, I have a function to draw all the particles on screen that looks something like this:
void Simulator::draw()
{
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);  // color components are clamped to [0, 1]
    glBegin(GL_POINTS);
    for (size_t i = 0; i < numParticles; i++)
        glVertex3dv(p[i].pos);
    glEnd();
}
This works great and all for testing, but it's absurdly slow. If I have 200 particles on screen, without doing any other computations (just repeatedly calling draw()), I get about 60 fps. But if I use 1000 particles, this runs at only about 15 - 20 fps.
My question is: How can I draw the particles more quickly? My simulation runs at a fairly decent speed, and at a certain point it's actually being held back (!) by the drawing.
The very first optimization you should make is to drop glBegin/glEnd (immediate mode) and move to vertex arrays (VAs) and vertex buffer objects (VBOs).
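As a sketch of what that migration could look like for the draw() function above (assuming the positions are packed into a flat array of doubles; 'positions' is a placeholder name):

// Setup (once, or re-uploaded per frame for moving particles):
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, numParticles * 3 * sizeof(GLdouble),
             positions, GL_STREAM_DRAW); // GL_STREAM_DRAW: data changes often

// Draw: one call replaces the whole glBegin/glVertex/glEnd loop.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_DOUBLE, 0, 0); // read vertices from the bound VBO
glDrawArrays(GL_POINTS, 0, numParticles);
glDisableClientState(GL_VERTEX_ARRAY);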
You might also want to look into Point Sprites to draw the particles.
This will be unpopular; I know, I hate it too:
use DirectX instead.
Microsoft has been trying to kill off OpenGL since Vista,
so chances are they are going to make life difficult for you.
So if ALL ELSE fails, you could try using DirectX.
But yeah, what others have said:
- use DrawArrays / Elements
- use vertex buffers
- use point sprites
- you could write a vertex shader so that you don't have to update the whole vertex buffer every frame (you just store initial settings like initial position, initial velocity and initial time, plus a constant gravity, and the shader can calculate the current position each frame from a single updated global constant, 'current time'); see the sketch after this list
- if it is a setting in OpenGL (like it is in DirectX), make sure you are using hardware vertex processing
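A minimal sketch of what such a vertex shader could look like (GLSL embedded as a C++ string; the attribute and uniform names here are made up for illustration):

// Hypothetical GLSL vertex shader: the position is derived analytically each
// frame, so the vertex buffer never needs re-uploading.
const char *particleVS = R"(
    #version 120
    attribute vec3 initPos;      // initial position
    attribute vec3 initVel;      // initial velocity
    attribute float birthTime;   // time the particle was emitted
    uniform float currentTime;   // the only value updated per frame
    uniform vec3 gravity;        // constant acceleration
    void main()
    {
        float t = currentTime - birthTime;
        vec3 pos = initPos + initVel * t + 0.5 * gravity * t * t;
        gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 1.0);
    }
)";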
First, if you haven't done so, disable the desktop window manager; it can make a huge difference in speed (with it turned on, you're basically getting largely software rendering).
Second, though I doubt you'll need to, you could switch to using glDrawArrays or glDrawElements (to name just a couple among many possibilities).
I'm having a rough time trying to set up this behavior in my program.
Basically, I want it so that when the user presses the "a" key, a new sphere is displayed on the screen.
How can you do that?
I would probably do it by simply having some kind of data structure (array, linked list, whatever) holding the current "scene". Initially this is empty. Then, when the event occurs, you create some kind of representation of the new desired geometry and add that to the list.
On each frame, you clear the screen and go through the data structure, mapping each representation into a suitable set of OpenGL commands. This is really standard.
The data structure is often referred to as a scene graph; it typically takes the form of a tree or graph, where geometry can have child geometries and so on.
If you're using the GLUT library (which is pretty standard), you can take advantage of its automatic primitive-generation functions, like glutSolidSphere. You can find the API docs here; take a look at section 11, 'Geometric Object Rendering'.
As unwind suggested, your program could keep some sort of list, but of the parameters for each primitive rather than the actual geometry. In the case of the sphere, this would be position/radius/slices. You can then use the GLUT functions to easily draw the objects. Obviously this limits you to what GLUT can draw, but that's usually fine for simple cases.
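A minimal sketch of that idea, assuming GLUT and fixed-function OpenGL (the struct and callback names are made up for illustration):

// Hypothetical sketch: press 'a' to add a sphere; redraw the whole scene each frame.
#include <GL/glut.h>
#include <vector>

struct Sphere { double x, y, z, radius; };
std::vector<Sphere> scene;  // the "scene" list, initially empty

void keyboard(unsigned char key, int mx, int my)
{
    if (key == 'a') {
        scene.push_back({0.0, 0.0, -5.0, 1.0});  // position/radius of the new sphere
        glutPostRedisplay();                     // ask GLUT to redraw
    }
}

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (const Sphere &s : scene) {
        glPushMatrix();
        glTranslated(s.x, s.y, s.z);
        glutSolidSphere(s.radius, 16, 16);  // radius, slices, stacks
        glPopMatrix();
    }
    glutSwapBuffers();
}

// In main(): glutKeyboardFunc(keyboard); glutDisplayFunc(display);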
Without some more details of what environment you are using, it's difficult to be specific, but here are a few pointers to things that can easily go wrong when setting up OpenGL:
Make sure you have the camera set up to look at the point where you are drawing the sphere. This can be surprisingly hard, and the simplest approach is to use gluLookAt from the OpenGL Utility Library (GLU). Make sure your near and far planes are set to sensible values.
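For example, a typical fixed-function camera setup (a sketch; the eye position, field of view and plane distances are arbitrary, and 'width'/'height' stand for your window size):

// Sketch: a perspective projection with sensible near/far planes,
// looking from (0, 0, 5) at the origin.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (double)width / height, 0.1, 100.0);  // fovy, aspect, near, far
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,   // eye position
          0.0, 0.0, 0.0,   // point to look at
          0.0, 1.0, 0.0);  // up vector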
Turn off backface culling, at least to start with. Sure, with production code backface culling gives you a quick performance gain, but it's remarkably easy to get an object's winding order (or normals) wrong and not see it because you're looking at the invisible face.
Remember to call glFlush to make sure that all commands are executed. Drawing to the back buffer and then failing to swap the buffers (e.g. with glutSwapBuffers) is also a common mistake.
Occasionally you can run into issues with buffer formats - although if you copy from sample code that works on your system this is less likely to be a problem.
Graphics coding tends to be quite straightforward to debug once you have the basic environment correct, because the output is visual; but setting up the rendering environment on a new system can always be a bit tricky until you have that first cube or sphere rendered. I would recommend obtaining a sample or template and modifying that rather than trying to set up the rendering window from scratch. Using GLUT to check out first drafts of OpenGL calls is a good technique too.