Let's say I have a stack of bitmap textures:
const char *Texture[] =
{
    "texture1.bmp",
    "texture2.bmp",
    "texture3.bmp",
    "texture4.bmp",
};
and I want to load all of these images and then process and display them, one at a time, in response to some trigger. Is there an OpenGL function for implementing this scenario?
OpenGL is not a scene graph. It's just a fancy triangle rasterization system.
Image loading, animation, and triggers are all (much) higher-level pieces of functionality not provided by OpenGL.
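That said, the usual recipe is simple: decode the bitmaps yourself (OpenGL cannot read files), upload each one into a texture object once at startup, and bind a different texture per trigger. A minimal sketch, where loadBitmapRGB() is a hypothetical placeholder for whatever image loader you use (stb_image, SDL_image, your own BMP parser):

GLuint texIDs[4];

void loadTextures()
{
    glGenTextures(4, texIDs);
    for (int i = 0; i < 4; ++i)
    {
        int w, h;
        // loadBitmapRGB() is a hypothetical stand-in for your image loader;
        // it is assumed to return a malloc'd RGB pixel buffer.
        unsigned char *pixels = loadBitmapRGB(Texture[i], &w, &h);
        glBindTexture(GL_TEXTURE_2D, texIDs[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
        free(pixels);
    }
}

int current = 0;

void onTrigger()                  // call this from your own event handling
{
    current = (current + 1) % 4;  // advance to the next picture
}

void display()
{
    glBindTexture(GL_TEXTURE_2D, texIDs[current]);
    // ... draw a textured quad covering the desired area, then swap buffers
}

The trigger itself (a keypress, a timer, whatever) is entirely your application's business; OpenGL only sees the resulting bind-and-draw calls.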
My experience with D3D11on12 and Direct2D hasn't been that good. Infrequently I get
D3D12 ERROR: ID3D12Device::RemoveDevice: Device removal has been triggered for the following reason (DXGI_ERROR_ACCESS_DENIED: The application attempted to use a resource it does not have access to. This could be, for example, rendering to a texture while only having read access.). [ EXECUTION ERROR #232: DEVICE_REMOVAL_PROCESS_AT_FAULT]
when I render to the swap-chain backbuffer. There are lag spikes as well. And on top of all this, I think I will need to amortize the UI rendering when I try to push the frame rate.
Synchronization between the UI and the actual scene doesn't really matter, so I can happily just use whatever UI Direct2D has most recently finished rendering.
So I would like to use Direct2D to render the UI to a transparent D3D11on12 bitmap (i.e. one created by using CreateBitmapFromDxgiSurface with the ID3D11Resource from ID3D11On12Device::CreateWrappedResource), and then render this overlay onto the swap-chain backbuffer.
The problem is I don't really know anything about the 3D pipeline, as I do everything with compute shaders/DirectML + CopyTextureRegion or Direct2D. I suppose this is a pretty simple question about how to do alpha blending.
I suppose that to do alpha blending you have to use the 3D pipeline. Luckily, DirectXTK12 has a reasonably trivial tutorial on exactly this topic: https://github.com/Microsoft/DirectXTK12/wiki/Sprites-and-textures
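Following that tutorial, the gist looks roughly like the sketch below. This is an assumption-laden outline, not the tutorial verbatim: uiSrvHandle is assumed to be a GPU descriptor handle for an SRV of the Direct2D overlay texture, the SRV's descriptor heap is assumed to already be set on the command list, and all device/queue/fence plumbing is omitted.

#include <d3d12.h>
#include <memory>
#include "SpriteBatch.h"
#include "CommonStates.h"
#include "ResourceUploadBatch.h"
#include "RenderTargetState.h"

std::unique_ptr<DirectX::SpriteBatch> spriteBatch;

void CreateSpriteBatch(ID3D12Device* device, ID3D12CommandQueue* queue,
                       DXGI_FORMAT backBufferFormat)
{
    DirectX::ResourceUploadBatch upload(device);
    upload.Begin();

    // CommonStates::AlphaBlend is classic source-over blending,
    // which is what a UI overlay wants.
    DirectX::RenderTargetState rtState(backBufferFormat, DXGI_FORMAT_UNKNOWN);
    DirectX::SpriteBatchPipelineStateDescription pd(
        rtState, &DirectX::CommonStates::AlphaBlend);
    spriteBatch = std::make_unique<DirectX::SpriteBatch>(device, upload, pd);

    upload.End(queue).wait();  // finish any uploads the batch queued
}

void DrawOverlay(ID3D12GraphicsCommandList* cmdList,
                 D3D12_GPU_DESCRIPTOR_HANDLE uiSrvHandle,
                 UINT width, UINT height, const D3D12_VIEWPORT& viewport)
{
    // Assumes the heap containing uiSrvHandle was already bound via
    // SetDescriptorHeaps, and the backbuffer is the current render target.
    spriteBatch->SetViewport(viewport);
    spriteBatch->Begin(cmdList);
    spriteBatch->Draw(uiSrvHandle, DirectX::XMUINT2(width, height),
                      DirectX::XMFLOAT2(0.f, 0.f));
    spriteBatch->End();
}

The nice part is that SpriteBatch owns the root signature and PSO internally, so no hand-written shaders or blend-state boilerplate are needed for a simple full-screen composite.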
I want to create a layer between any OpenGL-based application and the original OpenGL library. It should seamlessly intercept the OpenGL calls made by the application and either render and send images to the display, or send the OpenGL stream to a rendering cluster.
I have completed my opengl32.dll to replace the original library, but I don't know what to do next. How do I convert OpenGL calls to images, and what is an OpenGL stream?
For an accurate description, see the OpenGL Wrapper.
First and foremost, OpenGL is not a library. It's an API. The opengl32.dll you have on your system is a library that provides the API and acts as an anchoring point for the actual graphics driver to attach to programs.
Next it's a terrible idea to intercept OpenGL calls and turn them into something different, like multiple viewports. It may work for the fixed function pipeline, but as soon as shaders get involved it will break the program you hooked into. OpenGL is designed as an API to draw things to the screen, it's not a scene graph. Programs expect that when they make OpenGL calls they will produce an image in a pixel buffer according to their drawing commands. Now if you hook into that process and wildly alter the outcome, any graphics algorithm that relies on the visual outcome of the previous rendering for the following steps will break. For example any form of shadow mapping will be broken by what you do.
Also things like multiple viewport hacks will likely not work if the program does things like frustum culling internally, before making the actual OpenGL calls. Again this is because OpenGL is a drawing API, not a scene graph.
In the end, yes, you can hook into OpenGL, but whatever you do, you must make sure that the OpenGL calls made by the application get executed according to the specification. There is an authoritative OpenGL specification for a reason, namely that programs rely on it to get predictable results.
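For illustration, here is a minimal, hypothetical sketch of what "pass the call through unchanged" looks like for a single entry point in a replacement opengl32.dll. A real wrapper generates a stub like this for every exported function and resolves extension functions through wglGetProcAddress as well:

#include <windows.h>
#include <string.h>

// Redeclared from GL/gl.h, since including that header would conflict
// with our own dllexport definition of glClear.
typedef unsigned int GLbitfield;

static HMODULE realGL;                       // the real system opengl32.dll
typedef void (WINAPI *PFNGLCLEAR)(GLbitfield);
static PFNGLCLEAR realGlClear;

extern "C" __declspec(dllexport) void WINAPI glClear(GLbitfield mask)
{
    if (!realGL)                             // lazily load the real library once
    {
        char path[MAX_PATH];
        GetSystemDirectoryA(path, MAX_PATH);
        strcat_s(path, MAX_PATH, "\\opengl32.dll");
        realGL = LoadLibraryA(path);
        realGlClear = (PFNGLCLEAR)GetProcAddress(realGL, "glClear");
    }

    // ... inspect, record, or stream the call here ...

    realGlClear(mask);                       // then execute it exactly as specified
}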
OpenGL almost certainly allows you to do the things you want without crazy modifications to it. Multiple viewpoints, for example, can be rendered in your render function as follows:
glViewport(/*View 1 window coords*/ 0, 0, window_width, window_height / 2);
// Do all of your rendering for the first camera.
glViewport(/*View 2 window coords*/ 0, window_height / 2, window_width, window_height / 2);
glMatrixMode(GL_MODELVIEW);
// Rebuild your modelview matrix for the second viewpoint here, then re-render everything.
It's as simple as rendering twice into the two areas you specify with glViewport (note that glViewport takes a width and height, not a second corner). If you Google around you can find more detailed tutorials. I highly recommend not messing with OpenGL's internals: a good deal of it is implemented by the graphics card, and you should really just use what you're given. Chances are that if you're modifying it, you're doing it wrong; OpenGL probably already offers a far better way to do it.
Good luck!
I am extending an existing OpenGL project with new functionality.
I can play a video stream using OpenGL with FFMPEG.
Some objects are moving in the video stream, and the coordinates of those objects are known to me.
I need to show tracking of motion for that object, like continuously drawing a point or rectangle around the object as it moves on the screen.
Any idea how to get started with this?
Are you sure you want to use OpenGL for this task? For computer vision algorithms like motion tracking, one usually uses OpenCV. In that case you could simply use OpenCV's drawing functions (cv::rectangle, cv::circle, and friends).
If you stay with OpenGL, I guess you are drawing the decoded frames as textures, so you can simply draw your markers on top of the textured quad.
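For the OpenGL route, a minimal fixed-function sketch: after drawing the current video frame as a textured quad, draw an untextured rectangle at the object's known position. Here (x, y, w, h) are assumed to be in the same coordinate space as your frame quad:

void drawTrackingBox(float x, float y, float w, float h)
{
    glDisable(GL_TEXTURE_2D);      // don't sample the video texture
    glColor3f(1.0f, 0.0f, 0.0f);   // red outline
    glLineWidth(2.0f);
    glBegin(GL_LINE_LOOP);
        glVertex2f(x,     y);
        glVertex2f(x + w, y);
        glVertex2f(x + w, y + h);
        glVertex2f(x,     y + h);
    glEnd();
    glEnable(GL_TEXTURE_2D);       // restore state for the next frame
}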
I'm building an application that draws an anaglyph (stereo image) on a 200 Hz screen based on two provided pictures (NOT a 3D model), so fast, consistent redrawing is very important. I've achieved the best results with DirectDraw surfaces and their Flip() (switching the current surface's image to the secondary one):
(void) lpddsPrimary->Flip(nullptr, DDFLIP_WAIT);
But DirectDraw is very outdated, and I am looking for a way to reimplement this functionality with modern DirectX libraries. I really don't want to create a quad, draw a picture as its texture, and calculate 3D projection matrices just to output 2D images.
I would be really grateful for any snippet of how this can be done with DirectX. Thanks in advance.
For your purposes you can use DXGI and mostly avoid D3D. You don't say how you get your image data into the backbuffer, but DXGI allows you to create a swap chain, flip it (Present), and access its surfaces (e.g. lock them; it's called Map now). For this you need the "1" versions, e.g. IDXGISwapChain1. See http://msdn.microsoft.com/en-us/library/windows/desktop/bb205075(v=vs.85).aspx.
Note that IDXGISwapChain1 is a subclass of IDXGISwapChain, and some vital methods such as GetBuffer are on the base interface.
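A rough sketch of the setup (formats and buffer counts are just assumptions; error handling omitted). Note that you still need a D3D11 device as the swap chain's allocator even if you never issue a draw call with it; you can fill the backbuffer with a plain CopyResource or UpdateSubresource instead of rendering:

#include <d3d11.h>
#include <dxgi1_2.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<IDXGISwapChain1> CreateFlipSwapChain(HWND hwnd, ComPtr<ID3D11Device>& device)
{
    ComPtr<ID3D11DeviceContext> context;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

    // Walk device -> adapter -> factory to create the swap chain.
    ComPtr<IDXGIDevice> dxgiDevice;
    device.As(&dxgiDevice);
    ComPtr<IDXGIAdapter> adapter;
    dxgiDevice->GetAdapter(&adapter);
    ComPtr<IDXGIFactory2> factory;
    adapter->GetParent(IID_PPV_ARGS(&factory));

    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount      = 2;                                // double buffered
    desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL; // the modern "flip"

    ComPtr<IDXGISwapChain1> swapChain;
    factory->CreateSwapChainForHwnd(device.Get(), hwnd, &desc,
                                    nullptr, nullptr, &swapChain);
    return swapChain;
}

// Per frame: copy the next image into the backbuffer, then
//   swapChain->Present(1, 0);   // vsync'd flip, like DirectDraw's Flip()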
I plan on making a game (in SDL) where, if one character moves, the part of the image it was on turns alpha, thus allowing me to place a scrolling image underneath the original scene.
1) Is this possible?
2) If yes to #1, how can I go about implementing this (not to give me code, but to guide me in the right direction).
It sounds like you want to learn about image compositing.
A typical game these days will have a redraw function somewhere to redraw the entire screen. The entire scene is always redrawn each frame.
void redraw()
{
    drawBackground();   // farthest layer first
    drawCharacters();   // blended on top of the background
    drawHUD();          // frontmost layer last
    swapBuffers();      // display the finished frame
}
This is as simple as it gets: by using the right blending modes, each time you draw something it appears on top of what was drawn before. Older games are much more complicated because they don't redraw the entire screen at a time (or don't use a framebuffer), and newer games are much more complicated because they draw the world front-to-back and back-to-front in multiple passes for different types of objects.
SDL has software image compositing functions which you can use, or you can use OpenGL (which may use a combination of software and hardware). I personally use OpenGL because it is more powerful (lets you draw more complicated scenes), but the SDL compositing functions are easier to use. There are many excellent tutorials and many more mediocre or terrible tutorials online.
I'm not sure what you mean when you say "the part of the image it was on turns alpha". The alpha channel does not appear on screen; you cannot see it, it just affects how two images are composited.
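To make that concrete, here is a minimal SDL 2 sketch of the redraw() idea above. The texture names, the 640x480 size, and the scrolling logic are placeholder assumptions:

#include <SDL.h>

void redraw(SDL_Renderer *ren, SDL_Texture *background, SDL_Texture *scene,
            int scrollX)
{
    SDL_RenderClear(ren);

    // Scrolling layer underneath: shift its destination rect each frame.
    SDL_Rect dst = { -scrollX, 0, 640, 480 };
    SDL_RenderCopy(ren, background, nullptr, &dst);

    // Foreground scene composited on top; pixels you made transparent
    // (alpha = 0) let the background show through.
    SDL_SetTextureBlendMode(scene, SDL_BLENDMODE_BLEND);
    SDL_RenderCopy(ren, scene, nullptr, nullptr);

    SDL_RenderPresent(ren);  // the swapBuffers() step
}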