How to display smooth video in FireMonkey (FMX, FM3)? - c++

Has anyone figured out how to display smooth video (i.e. a series of bitmaps) in a FireMonkey application, HD or 3D? In VCL you could write to a canvas from a thread and this worked perfectly, but it does not work in FMX. To make things worse, the only apparently reliable way is to use TImage, and that seems to be updated from the main thread (open a menu and the video freezes temporarily). All EMB examples I could find either write to TImage from the main thread or use Synchronize(). These limitations make FMX unusable for decent video display, so I am looking for a hack or possibly a bypass of FMX. I use XE5/C++ but welcome any suggestions. Target OS is both Windows 7+ and OS X. Thanks!

How about putting a TPaintBox on your form to hold the video? In its OnPaint handler you simply draw the next frame to the paint box canvas. Then put a TTimer on the form and set its Interval to match the required frame rate. In the timer's OnTimer event, just call PaintBox1->Repaint().
This should give you regular frames no matter what else the program is doing.
For extra safety, you could increment a frame number in the OnTimer event. Then, in the paint box's OnPaint handler, you know which frame to paint. This means you won't jump frames if something else calls the paint method as well as the timer; you will just end up repainting the same frame for the extra call to OnPaint.
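Here is a minimal sketch of that idea in C++Builder/FMX. It assumes the frames have already been decoded into a member container; FFrames and FFrameIndex are hypothetical names, not part of any Delphi/FMX API, and the OnPaint signature shown is what the XE-era FMX IDE generates for a TPaintBox, so check it against your own generated stub.

    // Sketch only: FFrames (a std::vector<TBitmap*>) and FFrameIndex are
    // assumed members of the form, filled elsewhere with decoded frames.
    void __fastcall TForm1::Timer1Timer(TObject *Sender)
    {
        FFrameIndex++;            // the timer decides which frame is current
        PaintBox1->Repaint();     // schedule a repaint of the paint box only
    }

    void __fastcall TForm1::PaintBox1Paint(TObject *Sender, TCanvas *Canvas,
                                           const TRectF &ARect)
    {
        if (FFrames.empty()) return;
        TBitmap *frame = FFrames[FFrameIndex % FFrames.size()];
        // Paint whatever frame the timer selected; an extra OnPaint from the
        // framework just redraws the same frame instead of skipping ahead.
        Canvas->DrawBitmap(frame,
                           TRectF(0, 0, frame->Width, frame->Height),          // source
                           TRectF(0, 0, PaintBox1->Width, PaintBox1->Height),  // destination
                           1.0f /* opacity */, true /* high speed */);
    }

Set Timer1->Interval to roughly 1000 / fps (for example 33 for about 30 frames per second). Note that the timer still fires on the main thread, so this only regularizes frame pacing; it does not remove the main-thread limitation discussed further down.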
I use this for marching-ants selections, although I go one step further and use an overlaid canvas so I can draw the selection independently of the underlying paint box canvas, which removes the need to repaint the main canvas when the selection changes. That requires API calls, but I guess you won't need it unless you are doing video with a transparent colour.

Further research, including some talks with the Itinerant developer, has unfortunately made it clear that, due to concurrency restrictions, FM has been designed so that all GPU access goes through the main thread and therefore painting will always be limited. As a result I have decided FM is not suitable for my needs and I am re-evaluating my options.

Related

OpenGL: How do I control the rendering to make it idle?

Say my application has a 3D window rendering a TIN model of millions of triangles using OpenGL.
Goal: For some user operations there is no need to update the 3D window. The 3D view can just stay idle with the previously rendered content, without repeatedly recalculating the rotation/translation/scaling/texturing. I assume this will save a lot of CPU and GPU time.
Current design: I have a rendering while loop running all the time. If I stop the while loop, the content is rendered once and then disappears.
Question: Is there a way to achieve this goal? Can anyone point me in the right direction?
Instead of having a continuous rendering loop, you can use OpenGL to render your window only when the system sends you an event to repaint it. Additionally, you invalidate your own window when you know its contents have changed (e.g. in reaction to a mouse click). In fact, this is the proper way to draw your window for anything other than latency-sensitive applications.
The exact specifics highly depend on the API you use to create your window. For example,
With WinAPI you render on WM_PAINT and invalidate with InvalidateRect.
With Xlib you render on Expose and invalidate by generating an Expose event yourself (e.g. XClearArea with exposures set to True).
With Qt you render in QOpenGLWindow::paintGL and invalidate with update().
With GLUT you render in glutDisplayFunc and invalidate with glutPostRedisplay.
... and so on. This is not OpenGL specific in any way.
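To make this concrete, here is a minimal sketch of the GLUT variant; the scene and the g_angle state are purely illustrative. There is no idle function and no timer, so the triangle is redrawn only on repaint events or after a click explicitly invalidates the window.

    #include <GL/glut.h>

    static float g_angle = 0.0f;   // example state; the name is illustrative only

    static void display()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glLoadIdentity();
        glRotatef(g_angle, 0.0f, 0.0f, 1.0f);
        glBegin(GL_TRIANGLES);
            glVertex2f(-0.5f, -0.5f);
            glVertex2f( 0.5f, -0.5f);
            glVertex2f( 0.0f,  0.5f);
        glEnd();
        glutSwapBuffers();
    }

    static void mouse(int button, int state, int x, int y)
    {
        if (state == GLUT_DOWN) {
            g_angle += 15.0f;          // the model changed...
            glutPostRedisplay();       // ...so request exactly one repaint
        }
    }

    int main(int argc, char** argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutCreateWindow("on-demand rendering");
        glutDisplayFunc(display);
        glutMouseFunc(mouse);
        glutMainLoop();                // blocks; draws only in response to events
        return 0;
    }

The same structure carries over to the other APIs listed above: draw in the repaint callback, and invalidate only when state actually changes.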

OpenGL: How to minimize drawing?

My OpenGL screen consists of 2 triangles and 1 texture, nothing else. I'd like to update the screen as little as possible, to save power and limit CPU/GPU usage. Unfortunately, when my draw_scene routine returns early without drawing anything, the OpenGL screen goes black-- even if I never call glutSwapBuffers. My background color is not black by the way. It seems that if I do not draw, the OpenGL window loses its contents. How can I minimize the amount of drawing that is done?
Modern graphics systems assume that when a redraw is initiated, the whole contents are redrawn. Furthermore, if you get a redraw event from the graphics system, that is usually because the contents of the window have become undefined and need to be recreated, so you must redraw in that situation.
To save power you have to disable the idle loop (or have it do nothing and immediately yield back to the OS scheduler) and avoid timers that generate events.
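As a sketch of what "no idle loop" looks like in a plain Win32 program (a fragment, not a complete application; drawScene() and the GL-enabled device context are assumptions): the message loop blocks in GetMessage, and drawing happens only in WM_PAINT.

    #include <windows.h>

    int runMessageLoop()
    {
        MSG msg;
        // GetMessage blocks until an event arrives, so the process sleeps while
        // idle instead of re-rendering the same frame over and over.
        while (GetMessage(&msg, NULL, 0, 0) > 0)
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return (int)msg.wParam;
    }

    // In the window procedure:
    //   case WM_PAINT: {
    //       PAINTSTRUCT ps;
    //       HDC hdc = BeginPaint(hwnd, &ps);
    //       drawScene();          // the full redraw of the scene (hypothetical helper)
    //       SwapBuffers(hdc);     // hdc is assumed to be the GL-enabled window DC
    //       EndPaint(hwnd, &ps);
    //       return 0;
    //   }
    // Elsewhere, call InvalidateRect(hwnd, NULL, FALSE) only when state changes.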

OpenGL window draws fine, but all the windows on top of my OpenGL window go black

I have an app that mixes OpenGL with Motif. The big main window that has OpenGL in it redraws fine, but the subwindows sitting on top of it all go black; specifically, just the parts of those subwindows that are right on top of the main window. Those subwindows contain just Motif code (except for one).
The app doesn't freeze up or dump core. Data is still flowing, and as text fields, etcetera of various subwindows get updated, those parts redraw. Dragging windows across each other or minimizing/unminimizing also trigger redraws. The timing of the "blackout" is random. I run the same 1-hour dataset every time and sometimes the blackout happens 5 minutes into the run and sometimes 30 minutes in, etc.
I went through the process of turning off sections of code until the problem stopped. I narrowed it down more and more and found it had to do with the use of the depth buffer. In other words, when I comment out the glEnable(GL_DEPTH_TEST) call, the problem goes away. So the problem seems to have something to do with the use of the depth buffer.
As far as I can tell, the depth buffer is being cleared before redrawing is done, as it should be. There are if-statements wrapped around the glClear calls, so I put messages in there and confirmed that the glClear of the depth buffer is indeed happening even when the blackout occurs. Also, glGetError didn't return anything.
UPDATE 6/30/2014
Looks like there's still at least one person looking at this (thanks, UltraJoe). If I remember correctly, it turned out that it was sometimes swapping buffers without first defining the back buffer contents, i.e. without drawing anything to it. It wasn't obvious to me before because it's such a long routine. There were some other minor things I had to clean up, but I think that was the main cause.
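In other words, a minimal sketch of the drawing order the update describes (drawScene(), dpy and glxWindow are placeholders): make sure the back buffer is selected and fully drawn before every swap, so the swap never presents undefined contents.

    glDrawBuffer(GL_BACK);                                 // render into the back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    // start each frame from a defined state
    drawScene();                                           // hypothetical scene drawing
    glXSwapBuffers(dpy, glxWindow);                        // only swap a finished frame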
How did you create the OpenGL window/context? Did you just take the X11 Window handle of your Motif main window and create the OpenGL context on that one? Or did you create your own subwindow within that Motif window for OpenGL?
You should not use any window managed by a toolkit directly, unless it is a widget meant for exclusive OpenGL use. The reason is that most toolkits don't create their own sub-window for each and every element, and they also reuse parts of their graphics resources.
Thus you should create your own sub-window for OpenGL, and maybe a further sub-window using glXCreateWindow as well, as sketched below.
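Here is a hedged sketch of that setup using GLX 1.3 calls. dpy and parent are assumed to come from the toolkit, the attribute list is only an example, and real code should check every return value.

    #include <X11/Xlib.h>
    #include <GL/glx.h>

    GLXWindow createGLSubwindow(Display* dpy, Window parent, int w, int h,
                                GLXContext* outCtx)
    {
        static const int attribs[] = {
            GLX_RENDER_TYPE,   GLX_RGBA_BIT,
            GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
            GLX_DOUBLEBUFFER,  True,
            GLX_DEPTH_SIZE,    24,
            None
        };
        int count = 0;
        GLXFBConfig* configs = glXChooseFBConfig(dpy, DefaultScreen(dpy),
                                                 attribs, &count);
        GLXFBConfig cfg = configs[0];            // real code: check count > 0
        XVisualInfo* vi = glXGetVisualFromFBConfig(dpy, cfg);

        // A dedicated sub-window with its own visual and colormap, so the
        // toolkit's widgets never share drawables or color resources with GL.
        XSetWindowAttributes swa;
        swa.colormap     = XCreateColormap(dpy, parent, vi->visual, AllocNone);
        swa.border_pixel = 0;                    // required when the visual differs
        swa.event_mask   = ExposureMask | StructureNotifyMask;
        Window sub = XCreateWindow(dpy, parent, 0, 0, w, h, 0, vi->depth,
                                   InputOutput, vi->visual,
                                   CWColormap | CWBorderPixel | CWEventMask, &swa);
        XMapWindow(dpy, sub);

        GLXWindow glxWin = glXCreateWindow(dpy, cfg, sub, NULL);
        *outCtx = glXCreateNewContext(dpy, cfg, GLX_RGBA_TYPE, NULL, True);
        glXMakeContextCurrent(dpy, glxWin, glxWin, *outCtx);

        XFree(vi);
        XFree(configs);
        return glxWin;
    }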
This is an old question, I know, but the answer may help someone else.
This sounds like you're selecting a bad visual for your OpenGL window, or you're creating a new colormap that's overriding the default. If at all possible, choose a 24-plane DirectColor visual for everything in your application; with 24 planes, every supported color is available to every window without having to overwrite shared color cells.
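As a small illustration (a fragment; dpy is assumed to be an open Display*), you can ask X explicitly for such a visual, falling back to TrueColor if no DirectColor visual is available:

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    XVisualInfo vinfo;
    // Prefer a 24-plane DirectColor visual, fall back to TrueColor; both give
    // the full color range to every window without touching shared color cells.
    if (!XMatchVisualInfo(dpy, DefaultScreen(dpy), 24, DirectColor, &vinfo))
        XMatchVisualInfo(dpy, DefaultScreen(dpy), 24, TrueColor, &vinfo);
    // Pass vinfo.visual (plus a colormap created from it) to XCreateWindow/Motif.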

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system, and for that I am using GDI+. I cannot post code here because it got huge (C++), but below are the main steps I am following...
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call (see the sketch after this list).
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one at the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is painted only when it needs repainting, in other words only when its state has changed.
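For reference, a minimal sketch of the WM_PAINT step described above. GDI+ is assumed to be initialized with GdiplusStartup elsewhere, and g_backBuffer is a hypothetical off-screen bitmap recreated on WM_SIZE.

    #include <windows.h>
    #include <gdiplus.h>
    using namespace Gdiplus;

    Bitmap* g_backBuffer = nullptr;   // recreated on WM_SIZE at the client size

    void OnPaint(HWND hwnd)
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);

        // 1) Render everything into the off-screen bitmap.
        Graphics offscreen(g_backBuffer);
        offscreen.Clear(Color(255, 240, 240, 240));   // background
        // ... paint only the controls whose state changed ...

        // 2) Copy the finished frame to the window in a single call.
        Graphics screen(hdc);
        screen.DrawImage(g_backBuffer, 0, 0);

        EndPaint(hwnd, &ps);
    }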
The system is working fine with only one exception: when the window is being resized, a sort of tearing effect appears. Let me try to explain what I mean by tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done another image drawing starts up.
Now you may think this is a common artifact seen in many other applications, because resizing the back buffer is not always as fast as resizing the window, but I noticed that in other applications, although there is a lag between the window size and the client-area contents as the window grows, nothing flickers near the edge (usually it's just a white background that shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant full-screen-sized offscreen surface could minimize the artifact, but when I tried it the results were not satisfactory. I also tried calling Sleep() during sizing so that one flip completes before another starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
Also, I wonder how frameworks such as Qt render windowless GUIs so smoothly; even if you resize a complex Qt GUI window very fast, barely any artifacts appear. As far as I know Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems to handle resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers that I should consider for custom user interface design, please provide the links.
Thanks!
I always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework etc.)? Subclassing, owner drawing, custom drawing: nothing seems to give that level of control, and I don't know of a way to draw a semi-transparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
see this question: GDI+ double buffering in C++
Also, if you need fast response from your GUI I would advise against GDI+.
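If the background erase does turn out to be the culprit, the usual fix (a sketch; the rest of the window procedure is assumed to be unchanged) is to claim the erase yourself so Windows never clears the client area to the class brush between erase and paint:

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg)
        {
        case WM_ERASEBKGND:
            return 1;   // "already erased": the off-screen bitmap covers everything
        // ... WM_PAINT, WM_SIZE, etc. as before ...
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    // Alternatively (or additionally), register the window class with
    // wc.hbrBackground = NULL so there is no class brush to erase with.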

Best Drawing approach

I have developed an application in wxWidgets in which I am using a bitmap for drawing. When my application launches, it reads coordinates from a file and draws lines accordingly. The application also receives UDP packets from the network; these packets contain x/y coordinate information which also has to be drawn on the screen, so when a packet is received I redraw the bitmap image and display it on screen. I also need to refresh the bitmap on mouse-move events, because on mouse move there is some new drawing I have to put on screen.
All this increases the operational cost and slows down my GUI, so kindly suggest an alternative drawing approach which you think might be more efficient in this situation.
I have searched on Google and OpenGL came up as an option, but due to time constraints I don't want to use OpenGL, because I don't have any experience with it.
It sounds as if your problem is that your GUI is unresponsive to user input because the application is busy redrawing the display. There are a couple of general solutions to this kind of problem.
Draw the bitmap in memory using a worker thread. While this is going on, the main thread can continue to interact with the user. Once the bitmap has been redrawn, the worker thread signals the main thread, and the main thread then copies the completed bitmap to the screen, which is extremely fast.
Use the main thread to draw the bitmap directly to the screen, but sprinkle the drawing code with calls to wxApp::Yield(). This will allow the GUI to remain responsive to the user during a lengthy drawing process.
Option 1 is the 'best', especially when running on multicore machines, but it is a challenge to keep the two threads synchronized and to prevent contention between them unless you have significant experience with multithreaded design. Option 2 is much simpler, though you still have to be careful that user interaction doesn't start another drawing pass before the first has finished.
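A hedged sketch of option 1 in wxWidgets follows. The event name, class names and drawing step are all illustrative; the key points are that the worker builds a wxImage (plain pixel data, which is safe to touch off the GUI thread, unlike wxBitmap/wxDC on some platforms) and hands it to the main thread with wxQueueEvent, which is thread-safe.

    #include <wx/wx.h>

    wxDEFINE_EVENT(EVT_FRAME_READY, wxThreadEvent);   // custom event type (illustrative)

    class RenderThread : public wxThread
    {
    public:
        RenderThread(wxEvtHandler* sink, int w, int h)
            : wxThread(wxTHREAD_DETACHED), m_sink(sink), m_w(w), m_h(h) {}

    protected:
        ExitCode Entry() override
        {
            wxImage img(m_w, m_h);
            // ... draw the lines/points into img (e.g. via img.SetRGB) ...

            wxThreadEvent* evt = new wxThreadEvent(EVT_FRAME_READY);
            evt->SetPayload(img.Copy());      // deep copy: nothing shared across threads
            wxQueueEvent(m_sink, evt);        // thread-safe hand-off to the main thread
            return (ExitCode)0;
        }

    private:
        wxEvtHandler* m_sink;
        int m_w, m_h;
    };

    // In the canvas/frame (main thread):
    //   Bind(EVT_FRAME_READY, &MyCanvas::OnFrameReady, this);
    //
    // void MyCanvas::OnFrameReady(wxThreadEvent& e)
    // {
    //     m_bitmap = wxBitmap(e.GetPayload<wxImage>());  // convert on the GUI thread
    //     Refresh(false);                                // OnPaint just blits m_bitmap
    // }
    //
    // To start a redraw:
    //   RenderThread* t = new RenderThread(this,
    //       GetClientSize().GetWidth(), GetClientSize().GetHeight());
    //   if (t->Create() == wxTHREAD_NO_ERROR) t->Run(); // detached: deletes itself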
Save off the data to draw instead of always refreshing the bitmap, and have the main loop refresh the bitmap from time to time.
This way the program never bogs down. The downside is of course that reactivity will be lower (i.e. when data comes in, it may not be seen on screen for another 20 milliseconds or so instead of right away).
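A minimal sketch of that approach (all names are illustrative; m_timer is a wxTimer member, m_pending collects incoming points, and RebuildBitmap is a hypothetical helper): the UDP handler only stores data, and a timer rebuilds the bitmap at a bounded rate.

    // In the canvas/frame constructor:
    //   m_timer.SetOwner(this);
    //   Bind(wxEVT_TIMER, &MyCanvas::OnRefreshTimer, this);
    //   m_timer.Start(50);                  // at most ~20 bitmap rebuilds per second

    void MyCanvas::OnUdpData(const Point& p)
    {
        m_pending.push_back(p);              // cheap: just remember what arrived
    }

    void MyCanvas::OnRefreshTimer(wxTimerEvent&)
    {
        if (m_pending.empty())
            return;                          // nothing new: skip the redraw entirely
        RebuildBitmap(m_pending);            // redraw the off-screen bitmap once
        m_pending.clear();
        Refresh(false);                      // OnPaint just blits the cached bitmap
    }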