C++ - Display continuously updating image

I have written a pathtracer in C++, and now I want to display the rendering process in real time, so I'm wondering what the simplest/best way to do this is.
Basically, the rendered image updates after every iteration, so I just need to retrieve it, display it in a separate window, and update it.
I was thinking about using DirectX, and it seems that I could probably also do it with OpenCV, but I'm just looking for a simple way which doesn't require adding a lot of new code.
I am using C++ on Windows.

If I understand correctly, your path tracer probably outputs a color per emitted ray? If so, and you are thinking about displaying the rendered image in a separate window, I'd suggest using SDL2. There's a great set of tutorials on real-time graphics with C++ and SDL by Lazy Foo' Productions.
Here is an excerpt from the official SDL documentation (without the window-initialization cruft) regarding SDL_Surface, which you will probably be using:
/* This is meant to show how to edit a surface's pixels on the CPU, but
   normally you should use SDL_FillRect() to wipe a surface's contents. */
void WipeSurface(SDL_Surface *surface)
{
    /* This is fast for surfaces that don't require locking. */
    /* Once locked, surface->pixels is safe to access. */
    SDL_LockSurface(surface);

    /* This assumes that color value zero is black. Use
       SDL_MapRGBA() for more robust surface color mapping! */
    /* height times pitch is the size of the surface's whole buffer. */
    SDL_memset(surface->pixels, 0, surface->h * surface->pitch);

    SDL_UnlockSurface(surface);
}
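If you'd rather let the GPU do the blit, a streaming SDL_Texture is another simple option: upload the accumulated image once per iteration and present it. This is only a rough sketch, not a drop-in solution; it assumes the tracer produces a tightly packed RGB24 buffer (the frame vector below is a stand-in for your accumulation buffer):

#include <SDL.h>
#include <vector>
#include <cstdint>

int main(int, char**)
{
    const int width = 800, height = 600;

    // Stand-in for the path tracer's accumulation buffer (RGB, 3 bytes/pixel).
    // In the real program this would be the image updated after each iteration.
    std::vector<uint8_t> frame(width * height * 3, 0);

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window   *window   = SDL_CreateWindow("Path tracer",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, 0);
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, 0);
    SDL_Texture  *texture  = SDL_CreateTexture(renderer,
        SDL_PIXELFORMAT_RGB24, SDL_TEXTUREACCESS_STREAMING, width, height);

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        // ... run one path-tracing iteration that updates `frame` here ...

        // Upload the latest image and present it.
        SDL_UpdateTexture(texture, nullptr, frame.data(), width * 3);
        SDL_RenderClear(renderer);
        SDL_RenderCopy(renderer, texture, nullptr, nullptr);
        SDL_RenderPresent(renderer);
    }

    SDL_DestroyTexture(texture);
    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}

SDL_UpdateTexture copies the data, so if the tracer runs on its own thread you only need to guard the buffer while it is being read.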

Related

Freetype correct size rendering

I'm currently writing an OpenGL application that renders a lot of text. Text rendering was previously done with Qt, but for performance reasons we switched everything to OpenGL rendering.
I implemented the whole shader / rendering pipeline, and it works well.
But after reading the FreeType documentation again, I still don't understand how to render at the correct size.
For now I use the function
FT_Set_Pixel_Sizes(face, 0, mFontSize);
to set the size of my font, but I know that it's incorrect, because Qt was rendering in points (I guess..), so all the text is now smaller.
I read about using the function
FT_Set_Char_Size(
    face,    /* handle to face object           */
    0,       /* char_width in 1/64th of points  */
    16*64,   /* char_height in 1/64th of points */
    300,     /* horizontal device resolution    */
    300);    /* vertical device resolution      */
And here comes my first question: what should I put for the resolution? I can't know the DPI of the screen... What standard should I use?
Also, I need the text to stay at a fixed size on screen, whatever the zoom is. For now, I pre-compute my glyph vertices on the CPU side as in this tutorial: https://learnopengl.com/In-Practice/Text-Rendering
But for the "scale" parameter, I use 1.f / font_size.
And then, in the shader I do
(camera * vec3(char_position.xy, 1)).xy + vertex.xy / viewport * font_size.
With this I get a fixed size on screen, with a maximum of 45 pixels, as I asked FreeType for. But it doesn't match what Qt was rendering.
And I don't see how to achieve this with the DPI 'solution'.
If you can't calculate an accurate DPI for your system (a claim which I'm very skeptical of; check your documentation) then you're going to need to do what most other solutions do: guess.
Windows, for example, assumes a DPI of 96 is "normal" for most systems, and if you're unable to provide an accurate number, it's probably a good number to use for your project.
So if you absolutely cannot get an accurate DPI number, just punch in 96 and then scale your font sizes from there.
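As a concrete illustration of that suggestion, here is a small hedged sketch that feeds FT_Set_Char_Size the desired point size together with an assumed 96 DPI (SetFontSize and pointSize are illustrative names, not part of the question's code):

#include <ft2build.h>
#include FT_FREETYPE_H

/* Assumed fallback when the real screen DPI can't be queried. */
static const FT_UInt kAssumedDpi = 96;

/* pointSize is in typographic points (1/72 inch), e.g. what Qt used. */
void SetFontSize(FT_Face face, double pointSize)
{
    /* FT_Set_Char_Size takes the size in 1/64ths of a point plus the DPI;
       passing 0 for the width makes it equal to the height. */
    FT_Set_Char_Size(face, 0, (FT_F26Dot6)(pointSize * 64.0),
                     kAssumedDpi, kAssumedDpi);

    /* Equivalent pixel size, if you prefer FT_Set_Pixel_Sizes:
       pixels = points * dpi / 72. */
    /* FT_Set_Pixel_Sizes(face, 0, (FT_UInt)(pointSize * kAssumedDpi / 72.0 + 0.5)); */
}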
I made a project in Qt/OpenGL for Unicode font rendering using the FreeType library. Hope this helps: https://github.com/arunrvk/Qt-OpenGL-Fonts-with-Freetype-Library

SDL2: How to render without clearing the screen

I'm trying to make an SDL2 adapter for a graphics library. I believe this library assumes that everything it draws on the screen stays on the screen as long as nothing is drawn on top of it. (See the end of the question for details about this.)
What would be the best way to do it?
I've come across multiple solutions, including:
Hardware rendering without calling SDL_RenderClear. The problem with this is that SDL uses a double buffer, so the contents can end up in either of them, and I end up seeing only a subset of the rendering at a time.
Software rendering, in which, if I'm understanding SDL correctly, I would keep a surface mapped to a texture so I can render the texture, and edit the surface's pixels field in main memory. This would be very slow, and since the library expects everything to be rendered instantly and has no notion of frames, it would mean sending the data to the GPU on every update (even for single pixels).
I'm probably missing something about SDL2, and definitely about the library (Adafruit-GFX-Library). What does 'transaction API' mean in this context? I've not been able to find any information about it, and I feel like it could be something important.
In my understanding of SDL2 and rendering application programming in general, SDL2 is designed around drawing a complete frame every time, meaning you would clear your "current window" either with the OpenGL API calls
glClearColor(0, 1.0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
which clear the current OpenGL context's framebuffer (of which you would have two), or by using the SDL2 renderer, which I am not familiar with.
Then you would swap the buffers and repeat. (Which fits perfectly with the architecture proposal I am relying on.)
So either you would have to somehow replay the draw commands from your library for the second frame, or you could disable double buffering, at least for the OpenGL backend, with
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0);
(For the rest of the OpenGL SDL2 setup code, see this GitHub repository with a general helper class of mine.)
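For reference, a minimal sketch of the clear-draw-swap loop described above, using stock SDL2/OpenGL calls; draw-command replay and error handling are omitted, and the commented attribute line shows where the single-buffer variant would go (it must come before SDL_CreateWindow):

#include <SDL.h>
#include <SDL_opengl.h>

int main(int, char**)
{
    SDL_Init(SDL_INIT_VIDEO);

    // Must be set before the window/context is created:
    // SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0);  // single-buffered variant

    SDL_Window *window = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
        SDL_WINDOWPOS_CENTERED, 640, 480, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(window);

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        glClearColor(0.f, 1.f, 0.f, 0.f);
        glClear(GL_COLOR_BUFFER_BIT);

        // ... replay / issue all draw commands for this frame here ...

        SDL_GL_SwapWindow(window);
    }

    SDL_GL_DeleteContext(ctx);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}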

Frame grabbing with Matrox

I'm trying to run the Matrox Imaging Library's example code for frame grabbing. When the application runs, all I get is a black screen for the display image.
I know my configuration is correct, since I'm able to grab an image with the Matrox Intellicam software, which makes it even stranger. It must be something in the software that I need to change but am not aware of.
I've found this, but it wasn't really helpful: Frame Capture using Matrox Commands
This is the code I have:
/* Allocate 2 display buffers and clear them. */
MbufAlloc2d(MilSystem,
            (MIL_INT)(MdigInquire(MilDigitizer[0], M_SIZE_X, M_NULL)*GRAB_SCALE),
            (MIL_INT)(MdigInquire(MilDigitizer[0], M_SIZE_Y, M_NULL)*GRAB_SCALE),
            8L+M_UNSIGNED,
            M_IMAGE+M_GRAB+M_PROC+M_DISP, &MilImageDisp[0]);
MbufClear(MilImageDisp[0], 0x0);

MbufAlloc2d(MilSystem,
            (MIL_INT)(MdigInquire(MilDigitizer[1], M_SIZE_X, M_NULL)*GRAB_SCALE),
            (MIL_INT)(MdigInquire(MilDigitizer[1], M_SIZE_Y, M_NULL)*GRAB_SCALE),
            8L+M_UNSIGNED,
            M_IMAGE+M_GRAB+M_PROC+M_DISP, &MilImageDisp[1]);
MbufClear(MilImageDisp[1], 0x80);

/* Display the buffers. */
MdispSelect(MilDisplay[0], MilImageDisp[0]);
MdispSelect(MilDisplay[1], MilImageDisp[1]);

/* Grab continuously on both displays at the specified scale. */
MdigControl(MilDigitizer[0], M_GRAB_SCALE, GRAB_SCALE);
MdigGrabContinuous(MilDigitizer[0], MilImageDisp[0]);
MdigControl(MilDigitizer[1], M_GRAB_SCALE, GRAB_SCALE);
MdigGrabContinuous(MilDigitizer[1], MilImageDisp[1]);
I'm quite stuck and I would appreciate any idea that suggests what might be wrong.
The frame grabber loses synchronization because either no default digitizer configuration format (DCF) is set, or the one that is set is not the correct format for the camera.
To solve this, either set the DCF in code or set it manually in the configuration file.
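For illustration only, allocating the digitizer with an explicit DCF in MIL code might look like the following; "MyCamera.dcf" is a placeholder for the DCF you export from Intellicam for your camera:

/* Allocate the digitizer with an explicit DCF instead of the default format.
   "MyCamera.dcf" is a placeholder: export the correct DCF for your camera
   from Intellicam and point to it here. */
MIL_ID MilDigitizer0;
MdigAlloc(MilSystem, M_DEV0, MIL_TEXT("MyCamera.dcf"), M_DEFAULT, &MilDigitizer0);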

Why is glReadPixels so slow and are there any alternative?

I need to take screenshots every frame and I need very high performance (I'm using freeGLUT). What I figured out is that it can be done like this inside glutIdleFunc(thisCallbackFunction):
GLubyte *data = (GLubyte *)malloc(3 * m_screenWidth * m_screenHeight);
glReadPixels(0, 0, m_screenWidth, m_screenHeight, GL_RGB, GL_UNSIGNED_BYTE, data);
// and I can access pixel values like this: data[3*(x*512 + y) + color] or whatever
It does work, but I have a huge issue with it: it's really slow. When my window is 512x512 it runs no faster than 90 frames per second with only a cube being rendered; without these two lines it runs at 6500 FPS! If I compare it to the Irrlicht graphics engine, there I can do this:
// irrlicht code
video::IImage *screenShot = driver->createScreenShot();
const uint8_t *data = (uint8_t*)screenShot->lock();
// I can access pixel values from data in a similar manner here
and a 512x512 window runs at 400 FPS even with a huge mesh (a Quake 3 map) loaded! Take into account that I'm using OpenGL as the driver inside Irrlicht. To my inexperienced eye it seems like glReadPixels copies every pixel from one place to another, while (uint8_t*)screenShot->lock() just returns a pointer to an already existing array. Can I do something similar to the latter using freeGLUT? I expect it to be faster than Irrlicht.
Note that Irrlicht uses OpenGL too (it offers DirectX and other options as well, but in the example above I used OpenGL, which was, by the way, the fastest of the options).
OpenGL calls drive the rendering pipeline, which is inherently asynchronous: while the graphics card is presenting one image to the viewer, it is already computing the next frame. When you call glReadPixels, the graphics card has to wait for the current frame to finish, read the pixels back, and only then start on the next frame; the pipeline stalls and everything becomes sequential.
If you hold two buffers and tell the graphics card to read data into them alternately, one per frame, you can read a frame back one frame late, but without stalling the pipeline. This is the same idea as double buffering. You can also use three buffers with a two-frame-late read-back, and so on.
There is a relatively old web page describing the phenomenon and implementation here: http://www.songho.ca/opengl/gl_pbo.html
Also there are a lot of tutorials about framebuffers and rendering into a texture on the web. One of them is here: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
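The songho page above covers the details; as a compact, hedged sketch (assuming GLEW and an OpenGL 2.1+ context, with the window size as a placeholder), a double-buffered PBO read-back could look like this:

#include <GL/glew.h>

static GLuint pbo[2];
static int writeIdx = 0;                 // PBO receiving this frame's pixels
static const int W = 512, H = 512;

void initPBOs()
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; ++i) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, 3 * W * H, nullptr, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);  // tightly packed RGB rows
}

// Call once per frame after rendering; the mapped pixels are one frame old.
void grabFrame()
{
    int readIdx = 1 - writeIdx;

    // Start an asynchronous transfer into the "write" PBO (offset 0).
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[writeIdx]);
    glReadPixels(0, 0, W, H, GL_RGB, GL_UNSIGNED_BYTE, nullptr);

    // Map the PBO filled during the previous frame; this should not stall.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[readIdx]);
    void *src = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (src) {
        // ... copy/encode the previous frame's pixels from src here ...
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    writeIdx = readIdx;                   // swap roles for the next frame
}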

Print an OpenGL Texture to File without Display?

I'm trying to use OpenGL to help with processing Kinect depth map input into an image. At the moment we're using the Kinect as a basic motion sensor, and the program counts how many people walk by and takes a screen shot each time it detects someone new.
The problem is that I need to get this program to run without access to a display. We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea. Attaching a display to the machine running the program is a possibility, but we're wanting to avoid doing that for energy consumption reasons.
The program does generate a 2D texture object for OpenGL, and usually just uses GLUT to render it before reading the pixels back and outputting them to a .PNG file using FreeImage. The issue I'm running into is that once the GLUT function calls are removed, all that gets written to the .PNG files are black boxes.
I'm using the OpenNI and NITE drivers for the Kinect. The programming language is C++, and I need to use Ubuntu 10.04 due to the hardware limitations of the target device.
I've tried using OSMesa and FrameBuffer objects, but I am a complete OpenGL newbie, so I haven't gotten OSMesa to render properly in place of the GLUT functions, and my compiler can't find any of the OpenGL FrameBuffer functions in GL/glext.h or GL/gl.h.
I know that textures can be read into a program from image files, and all I want to output is a single 2-D texture. Is there a way to skip the headache of off-screen rendering in this case and print a texture directly to an image file without needing OpenGL to render it first?
The OSMesa library is not a drop-in replacement for GLUT, nor can the two work together. If you only need the offscreen rendering part, without interaction, you have to implement a simple event loop yourself.
For example:
/* init OSMesa */
OSMesaContext mContext;
void  *mBuffer;
size_t mWidth  = 640;   /* pick the offscreen render size you need */
size_t mHeight = 480;

/* Create an RGBA context and specify Z, stencil, accum sizes */
mContext = OSMesaCreateContextExt( OSMESA_RGBA, 16, 0, 0, NULL );

/* Allocate the buffer OSMesa renders into: 4 bytes per RGBA pixel */
mBuffer = malloc(mWidth * mHeight * 4);

/* Bind the buffer to the context and make it current */
OSMesaMakeCurrent(mContext, mBuffer, GL_UNSIGNED_BYTE, mWidth, mHeight);
After this snippet you can use normal OpenGL calls to render, and after a glFinish() the results can be accessed through the mBuffer pointer.
In your event loop you can call your usual onDisplay, onIdle, etc. callbacks.
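A hand-rolled loop of that kind might look roughly like this, where onDisplay()/onIdle() are your existing callbacks and WritePNG() stands in for the FreeImage export you already have:

/* Hand-rolled "event loop" for offscreen OSMesa rendering.
   onDisplay()/onIdle() are the app's existing callbacks; WritePNG() is a
   placeholder for the FreeImage-based export the question already has. */
int frames = 100;                    /* or loop until the sensor logic says stop */
while (frames--) {
    onIdle();                        /* update Kinect/motion state */
    onDisplay();                     /* issue the usual OpenGL draw calls */
    glFinish();                      /* make sure rendering into mBuffer is done */
    WritePNG("frame.png", mBuffer, mWidth, mHeight);   /* RGBA, bottom-up */
}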
We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea.
If you forward X11 and create the OpenGL context on that display, OpenGL traffic will go over the network whether or not a window is visible. So what you actually need to do (if you want to make use of GPU-accelerated OpenGL) is start an X server on the remote machine and keep it on the active VT (i.e. the X server must be the program that "owns" the display). Then your program makes a connection to this X server only. But this requires using Xlib. Some time ago fungus wrote a minimalistic Xlib example; I extended it a bit so that it makes use of FBConfigs. You can find it here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/x11argb_opengl/x11argb_opengl.c
In your case you should render to an FBO or a PBuffer. Never use a visible window's framebuffer to render stuff that is to be stored away! If you create an OpenGL window, like with the code I linked, use an FBO. Creating a GLX PBuffer is not unlike creating a GLX window, only that it will be off-screen.
The trick is not to use the default X display (of your SSH forward) but a separate connection to the local X server. The key is the line
Xdisplay = XOpenDisplay(NULL);
Instead of NULL, you'd pass the display name of the local server there (e.g. ":0"). To make this work you'll also need to (manually) add an xauth entry to, or disable xauth on, the OpenGL rendering server.
You can use glGetTexImage to read a texture back from OpenGL.
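For example (a hedged sketch, assuming an OpenGL context is current, e.g. the OSMesa one above, and that texId, width and height are already known), combined with the FreeImage export the question already uses:

#include <GL/gl.h>
#include <FreeImage.h>
#include <vector>

// Read back an existing 2D texture and save it as a PNG without rendering it.
bool SaveTexturePNG(GLuint texId, int width, int height, const char *path)
{
    std::vector<unsigned char> pixels(width * height * 3);

    glBindTexture(GL_TEXTURE_2D, texId);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);            // tightly packed rows
    glGetTexImage(GL_TEXTURE_2D, 0, GL_BGR, GL_UNSIGNED_BYTE, pixels.data());

    // Wrap the raw BGR bytes in a FreeImage bitmap and save it.
    FIBITMAP *dib = FreeImage_ConvertFromRawBits(pixels.data(), width, height,
                                                 width * 3, 24,
                                                 FI_RGBA_RED_MASK,
                                                 FI_RGBA_GREEN_MASK,
                                                 FI_RGBA_BLUE_MASK, FALSE);
    bool ok = dib && FreeImage_Save(FIF_PNG, dib, path, 0);
    if (dib) FreeImage_Unload(dib);
    return ok;
}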