DirectX — is there an analogue of DirectDraw surface Flip()? - c++

I'm building an application that draws an anaglyph (a stereo image) on a 200 Hz screen from two provided pictures (NOT a 3D model), so the speed and consistency of redrawing are very important. I've achieved the best results with DirectDraw surfaces and their Flip() (swapping the primary surface with the secondary one):
(void) lpddsPrimary->Flip(nullptr, DDFLIP_WAIT);
But DirectDraw is very outdated, and I'm looking for a way to reimplement this functionality on top of the modern DirectX libraries. However, I really don't want to create a quad, draw the picture as its texture, and calculate 3D projection matrices just to output 2D images.
I would be really grateful for any snippet showing how this can be done with DirectX. Thanks in advance.

For your purposes you can use DXGI and avoid D3D completely. You don't say how you get the data into the back buffer, but DXGI lets you create a swap chain, flip it (Present), and access the surfaces (e.g. lock them; it's called Map now). For 3D (stereo swap chains) you need the "1" versions, e.g. IDXGISwapChain1. See http://msdn.microsoft.com/en-us/library/windows/desktop/bb205075(v=vs.85).aspx.
Note that IDXGISwapChain1 derives from IDXGISwapChain, and some vital methods such as GetBuffer are on the base interface.
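For illustration, here is a rough, untested sketch of what that can look like with D3D11 + DXGI 1.2: a flip-model swap chain on a window, the CPU image copied into the back buffer, and Present() playing the role of Flip(). The hwnd/pixels parameters and the single-function structure are my own assumptions; in a real program you would create the device and swap chain once and only do the copy and Present per frame.
#include <d3d11_1.h>
#include <dxgi1_2.h>

// Assumed inputs: an existing window `hwnd` and a 32-bit BGRA image `pixels`
// of width*height pixels. Error handling and COM Release() calls are omitted.
void InitAndPresentOneFrame(HWND hwnd, const void* pixels, UINT width, UINT height)
{
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

    // Walk from the device to the DXGI factory that owns its adapter.
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);
    IDXGIFactory2* factory = nullptr;
    adapter->GetParent(__uuidof(IDXGIFactory2), (void**)&factory);

    // Double-buffered flip-model swap chain (the modern analogue of DirectDraw flipping).
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 2;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;

    IDXGISwapChain1* swapChain = nullptr;
    factory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &swapChain);

    // Copy the CPU image into the current back buffer, then flip.
    ID3D11Texture2D* backBuffer = nullptr;
    swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
    context->UpdateSubresource(backBuffer, 0, nullptr, pixels, width * 4, 0);
    swapChain->Present(1, 0);   // rough equivalent of lpddsPrimary->Flip(nullptr, DDFLIP_WAIT)
}
If writing into the back buffer with UpdateSubresource turns out to be slow, the usual alternative is a D3D11_USAGE_DYNAMIC texture that you Map, fill, and CopyResource into the back buffer each frame.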

Related

How to best render to a window, when pixels could be written directly to display buffer?

I have used OpenGL pretty much exclusively for all my rendering, to the point that I'm unfortunately unaware of any other way to write pixels to a window.
This is a problem because my current project is a work tool that emulates an LCD display (pixel-perfect, 2D, very few pixels are touched each frame, and all 'drawing' can be done with memcpy() to a pixel buffer), and I feel that OpenGL might be too heavy for this, though I could absolutely be wrong in that assumption.
My goal is to borrow as little CPU time as possible. What's the best way to draw pixels to a window, in this limited way, on a typical modern machine running Windows 10 circa 2019? Is OpenGL suited to this type of rendering, or should I adopt another rendering method? If so, what would that method be?
edit:
I should also mention that OpenGL is already available to me. If rendering textured triangles with an optimal setup is the fastest method, then I can already do that. Anything that just acts as an API over OpenGL or DirectX will likely be worse in my case.
edit2:
After some research, and thanks to the comments, I think I may just use OpenGL with Pixel Buffer Objects to optimize pixel uploads and keep rendering inexpensive.
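For reference, the PBO streaming pattern usually looks roughly like the sketch below (untested; the GLEW header, the RGBA format, and the buffer-orphaning strategy are assumptions rather than requirements):
#include <GL/glew.h>
#include <cstring>

// `pbo` and `tex` are an existing pixel buffer object and GL_TEXTURE_2D of
// size width x height (GL_RGBA8); `pixels` is the CPU-side frame to upload.
void UploadViaPBO(GLuint pbo, GLuint tex, const void* pixels, int width, int height)
{
    const size_t size = (size_t)width * height * 4;

    // Orphan the old storage so the driver doesn't have to stall, then map and copy.
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, nullptr, GL_STREAM_DRAW);
    if (void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY))
    {
        std::memcpy(dst, pixels, size);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }

    // With a PBO bound, the last argument is an offset into the PBO, so the
    // texture update happens driver-side without a second CPU copy.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
Drawing the texture afterwards is still a textured quad (or fullscreen triangle) per frame, but the upload itself no longer blocks the CPU on the driver's copy.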

Draw Direct To Screen With CUDA/OPENCL

Is it possible yet to draw CUDA/OpenCL results directly to the screen with any existing API (OpenGL, DirectX, something else), skipping the typical textured-quad method?
Even with registered resources and the modern CUDA interop methods, we still have to march through the entire rendering pipeline just to render an array of colors. For applications like mine, where every ms counts, this is a problem.
There's no way to draw directly on the screen with OpenCL or CUDA.
It is a solvable problem, but as far as I know, NVIDIA has not provided the needed APIs because they would be very complicated both to implement and to use, and the performance benefits would be limited at best.
The two main issues are:
1) the differing layouts of the buffers used for rendering (i.e. you'd have to use surface load/store functionality - a mapping into CUDA's address space is not suitable for graphics because the pitch-linear layout has poor performance in that context) and
2) the platform-specific details of incorporating your CUDA/OpenCL output into the presentation model (be it the desktop or a page-flipped full-screen experience, like a Direct3D game, or incorporating your app's output into the desktop). Bear in mind that most desktops these days are themselves page-flipped, so scribbling on the front buffer is frowned upon in any case.
I very much doubt that any performance is lost by drawing pixels using a textured quad, but you can draw pixels directly to the framebuffer with glDrawPixels.
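For completeness, the glDrawPixels path is only a couple of lines. This sketch assumes a current legacy/compatibility OpenGL context and an RGBA pixel array in CPU memory; note that glDrawPixels and glWindowPos2i were removed from core profiles:
#include <GL/gl.h>

// `pixels` is a width x height RGBA image in CPU memory, bottom row first.
void BlitPixels(const void* pixels, int width, int height)
{
    glWindowPos2i(0, 0);   // raster origin at the window's lower-left corner (GL 1.4+)
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // With a double-buffered context you still call SwapBuffers() afterwards.
}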

Image slide using OpenGL

Let's say I have a stack of bitmap textures
const char *Texture[] =
{
    "texture1.bmp",
    "texture2.bmp",
    "texture3.bmp",
    "texture4.bmp",
};
and I want to use all of the pictures in some function where they will be processed and displayed by some trigger, one at a time. Do you know if there is any OpenGL function for implementing this scenario?
OpenGL is not a scene graph. It's just a fancy triangle rasterization system. See here.
Image loading, animation, and triggers are all (much) higher-level pieces of functionality not provided by OpenGL.
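The scenario therefore has to be implemented at the application level. A minimal sketch of the usual pattern is below; LoadBitmapToTexture() is a hypothetical stand-in for whatever image loader you use (stb_image, SDL_image, your own BMP reader), and the trigger is assumed to come from a key/mouse/timer handler:
#include <GL/gl.h>

GLuint LoadBitmapToTexture(const char* path);   // hypothetical helper: load a BMP into a GL texture

const char* Texture[] = { "texture1.bmp", "texture2.bmp",
                          "texture3.bmp", "texture4.bmp" };
GLuint tex[4];
int current = 0;

void Init()                 // load every image into a texture once, up front
{
    for (int i = 0; i < 4; ++i)
        tex[i] = LoadBitmapToTexture(Texture[i]);
}

void OnTrigger()            // called from your key/mouse/timer handler
{
    current = (current + 1) % 4;
}

void Draw()                 // each frame: bind the current texture and draw with it
{
    glBindTexture(GL_TEXTURE_2D, tex[current]);
    // ... draw a textured quad covering the desired area ...
}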

OpenGL height-map painting using CUDA VBO

I've asked several questions regarding VBOs here before, and from the comments I received I decided that a new approach must be taken.
To put it simply: I'm trying to draw the Mandelbrot set, which is defined on a large float array of around 512x512 points. The purpose of my program is to let the user control the zoom and the world's orientation (it's a 3D model).
So far I've painted the entire thing using GL_TRIANGLE_STRIP, which turned out to be a bad choice because of its slow painting process, and also because my painting style (the order in which glVertex is called) proved impossible to code with VBOs.
So I've got several questions.
Even after this description I'm not sure whether a VBO is the best choice, because the calculations are driven by the user: for every calculation he triggers, I have to recompute the Mandelbrot set (~60 ms) and recopy the points to the buffer, a process which takes some additional time (?ms).
The program also lets the user move around the world, in which case no calculations are done, so there a VBO is an excellent choice.
1. What's the best way to paint a height map (when each cell in the array holds only the height)?
2. How can I apply it to a VBO and transfer it to CUDA (cudaRegisterBuffer or something like that)?
3. Is there a way to distinguish between the modes and decide when VBOs are needed (the no-calculations mode) and when they aren't (the calculations mode)?
You don't need to copy the CUDA data each frame if you bind the CUDA array/VBO to the DirectX/OpenGL vertex buffer (refer to the CUDA Programming Guide for details).
One way to render the data as a height-field is to use the geometry shader to emit the triangles based on the height-field. Another way is to use the height-field as a parallax map (see the DirectX SDK). My personal fave would be to make your height-field an array of positions (X/Y/Z), use CUDA to modify only the Y values, and then use an index buffer to define the polygons that compose the surface. Note that you'll also need to update the vertex normals, and you may want XYZ/UV vertices if you intend to texture the surface.
If 512x512 is too big, use raster ops (texture sampling) to populate a lower-resolution height-field of the region of interest. You can do this stage in CUDA or OpenGL/DirectX (I'd recommend doing it in CUDA, where you can easily write your own sampling kernel to look up pixels when down-sampling).
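A rough sketch of that register-once / map-per-recompute pattern with the CUDA runtime's OpenGL interop API follows. The interop calls are real CUDA functions, but launch_update_heights is a hypothetical wrapper around your own kernel (compiled with nvcc), and the float3 layout is just one possible vertex format:
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

void launch_update_heights(float3* d_verts, int count);   // hypothetical wrapper around your CUDA kernel

cudaGraphicsResource* g_vboResource = nullptr;

void RegisterOnce(GLuint vbo)
{
    // Register the GL vertex buffer with CUDA a single time, not every frame.
    cudaGraphicsGLRegisterBuffer(&g_vboResource, vbo, cudaGraphicsRegisterFlagsNone);
}

void RecomputeHeights(int count)
{
    // Map the buffer into CUDA's address space; while mapped, OpenGL must not touch it.
    cudaGraphicsMapResources(1, &g_vboResource, 0);

    float3* d_verts = nullptr;
    size_t bytes = 0;
    cudaGraphicsResourceGetMappedPointer((void**)&d_verts, &bytes, g_vboResource);

    launch_update_heights(d_verts, count);      // rewrite only the Y components on the GPU

    cudaGraphicsUnmapResources(1, &g_vboResource, 0);
    // The VBO now holds the updated heights; draw it with your index buffer as usual.
}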

Making a 3d text editor in c++

Currently I am looking to write a text editor for Linux systems that does some particular text/font highlighting, which involves OpenGL rendering. Does anyone have suggestions for a C++ graphics rendering library that works well on Linux (Ubuntu in particular, for now)?
Any advice on where to start with rendering 3D text is greatly appreciated!
EDIT: Just to clarify, rendering 3D text is a strict requirement of the project.
There are basically only three ways to do this at the OpenGL level:
Raster Fonts.
Use glBitmap or glDrawPixels to draw a rectangular bunch of pixels onto the screen. The disadvantages of doing this are many:
The data describing each character is sent from your CPU to the graphics card every frame - and for every character in the frame. This can amount to significant bandwidth.
The underlying OpenGL implementation will almost certainly have to 'swizzle' the image data in some manner on its way between CPU and frame-buffer.
Many 3D graphics chips are not designed to draw bitmaps at all. In this case, the OpenGL software driver must wait until the 3D hardware has completely finished drawing before it can get in to splat the pixels directly into the frame buffer. Until the software has finished doing that, the hardware is sitting idle.
Bitmaps and Drawpixels have to be aligned parallel to the edges of the screen, so rotated text is not possible.
Scaling of Bitmaps and Drawpixels is not possible.
There is one significant advantage to Raster fonts - and that is that on Software-only OpenGL implementations, they are likely to be FASTER than the other approaches...the reverse of the situation on 3D hardware.
Geometric Fonts.
Draw the characters of the font using geometric primitives - lines, triangles, whatever. The disadvantages of this are:
The number of triangles it takes to draw some characters can be very large - especially if you want them to look good. This can be bad for performance.
Designing fonts is both difficult and costly.
Doing fonts with coloured borders, drop-shadows, etc exacerbates the other two problems significantly.
The advantages are:
Geometric fonts can be scaled, rotated, twisted, morphed, extruded.
You can use fancy lighting models, environment mapping, texturing, etc.
If used in a 3D world, you can perform collision detection with them.
Geometric fonts scale nicely. They don't exhibit bad aliasing artifacts and they don't get 'fuzzy' as they are enlarged.
Texture-Mapped Fonts.
Typically, the entire font is stored in one or two large texture maps and each letter is drawn as a single quadrilateral (a minimal sketch of this appears after the list below). The disadvantages are:
The size of the texture map you need may have to be quite large - especially if you need both upper and lower case - and/or if you want to make the font look nice at large point sizes. This is especially a problem on hardware that only supports limited texture map sizes (e.g. 3Dfx Voodoos can only render maps up to 256x256).
If you use MIPmapping, then scaling the font makes it look a little fuzzy. If you don't use MIPmapping, it'll look horribly aliased.
The advantages are:
Generality - you can use an arbitrary full-colour image for each letter of the font.
Texture fonts can be rotated and scaled - although they always look 'flat'.
It's easy to convert other kinds of fonts into texture maps.
You can draw them in the 3D scene and they will be illuminated correctly.
SPEED! Textured fonts require just one quadrilateral to be sent to the hardware for each letter. That's probably an order of magnitude faster than either Raster or Geometric fonts. Since low-end 3D hardware is highly optimised for drawing simple textured polygons, speed is also enhanced because you are 'on the fast path' through the renderer. (CAVEAT: on software-only OpenGL implementations, textured fonts will be S-L-O-W.)
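As a concrete illustration of the quad-per-glyph idea mentioned above, here is a minimal sketch assuming a 16x16-cell ASCII font atlas already uploaded as a texture (with glyph rows counted from the bottom of the image, following GL texture conventions) and using immediate-mode GL purely for brevity:
#include <GL/gl.h>

void DrawGlyph(GLuint fontTex, char c, float x, float y, float size)
{
    const float cell = 1.0f / 16.0f;          // atlas is 16 columns x 16 rows of glyph cells
    const float u = (c % 16) * cell;          // column of this character's cell
    const float v = (c / 16) * cell;          // row of this character's cell

    glBindTexture(GL_TEXTURE_2D, fontTex);
    glBegin(GL_QUADS);
        glTexCoord2f(u,        v);        glVertex2f(x,        y);
        glTexCoord2f(u + cell, v);        glVertex2f(x + size, y);
        glTexCoord2f(u + cell, v + cell); glVertex2f(x + size, y + size);
        glTexCoord2f(u,        v + cell); glVertex2f(x,        y + size);
    glEnd();
}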
Links to some Free Font Libraries:
glut
glTexFont
fnt
GLTT
freetype
Freetype: http://freetype.sourceforge.net/index2.html
And: http://oglft.sourceforge.net/
I use FTGL, which builds on top of freetype. To create 3D, extruded text, I make these calls:
#include <GL/gl.h>
#include <FTGL/ftgl.h>
#include <FTGL/FTFont.h>
...
FTFont* font = new FTExtrudeFont("path_to_Fonts/COOPBL.ttf");
font->Depth(.5); // Text is half as 'deep' as it is tall
font->FaceSize(1); // GL unit sized text
...
FTBBox bounds = font->BBox("Text");
glEnable(GL_NORMALIZE); // Because we're scaling
glPushMatrix();
glScaled(.02,.02,.02);
glTranslated(-(bounds.Upper().X() - bounds.Lower().X())/2.0,yy,zz); // Center the text horizontally; yy and zz are offsets chosen by the caller
font->Render("Text");
glPopMatrix();
glDisable(GL_NORMALIZE);
I recommend Qt, which is the foundation of KDE, or GTK+ for GNOME. Both of them have support for OpenGL and text. With Qt you can do advanced graphics (QGraphicsView), including animation... Take a look at the Qt Demo Application.
A good start would be NeHe's OpenGL Lesson 14.
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=14