Portable YUV Drawing Context - opengl

I have a stream of YUV data (from a video file) that I want to stream to a screen in real time. (Basically, I want to write a program that plays the video in real time.)
As such, I am looking for a portable way to send YUV data to the screen. I would ideally like to use something portable so I don't have to reimplement it for every major platform.
I have found a few options, but all of them seem to have significant issues. They are:
Use OpenGL directly, converting the YUV data to RGB on the CPU (and using the single full-screen quad trick).
This obviously won't work, because converting YUV to RGB on the CPU is going to be too slow for displaying frames in real time.
Use OpenGL, but use a shader to convert the YUV stream to RGB.
This option is a bit better, although the problem here is that (afaict) it involves splitting the data into two streams and splicing them back together. It might work, but may have issues at larger resolutions.
Instead use SDL, which has the option of creating a YUV context directly.
The problem with this is that I'm already using a cross-platform widget library for other aspects of my program (such as playback controls). As far as I can tell, SDL only opens its own (possibly borderless) window. I would ideally like my controls and drawing context to be in the same window, which I can do with OpenGL but not with SDL.
Use SDL, and also use something like Qt for the on-screen widgets, with a message-passing protocol to communicate between the two libraries. Have the (borderless) SDL window constantly move itself so it stays on top of the Qt window.
While this approach is clever, it seems like the two windows could easily get out of sync, making the user experience sub-optimal.
Forget a cross-platform library, do things OS-specific, making use of hardware acceleration if present.
This is a fine solution, although it's not cross-platform.
As such, is there any good way to draw YUV data to a screen that ideally is:
Portable (at least to the major platforms).
Fast enough to be real time.
Allows other widgets in the same window.

Use option number 2. There's no problem doing the YUV-to-RGB conversion in a shader, and there's no other comparably portable way to do it.
Think of it like this: no matter how big or small your video is, the fragment shader (where the conversion is done) runs once per displayed pixel. Whether you show a small video full screen or a large one, the shader cost is the same, because the same number of pixels is being displayed.
Any video card in normal conditions will be able to run this kind of shader without any problem.
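For illustration, a fragment shader for planar YUV 4:2:0 might look something like the sketch below. The uniform names and the BT.601 coefficients are my assumptions, not something specified in the question; you would upload the Y, U, and V planes as three single-channel textures and draw one textured quad per frame.

// Hypothetical GLSL fragment shader source (GLSL 1.20 style); texY/texU/texV
// are assumed uniform names for the three planes of a YUV 4:2:0 frame.
static const char *kYuvToRgbFragSrc = R"GLSL(
    #version 120
    uniform sampler2D texY;   // luma plane, full resolution
    uniform sampler2D texU;   // chroma planes, half resolution each way
    uniform sampler2D texV;
    varying vec2 texCoord;    // passed through from the vertex shader

    void main() {
        float y = texture2D(texY, texCoord).r;
        float u = texture2D(texU, texCoord).r - 0.5;
        float v = texture2D(texV, texCoord).r - 0.5;
        // BT.601 full-range conversion; use BT.709 coefficients for HD content.
        gl_FragColor = vec4(y + 1.402 * v,
                            y - 0.344 * u - 0.714 * v,
                            y + 1.772 * u,
                            1.0);
    }
)GLSL";

Each decoded frame then costs three texture uploads (e.g. glTexSubImage2D) plus one textured quad, which is comfortably real-time on any reasonably modern GPU.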

Related

Needed: Cross platform C++ 2D graphics library for fast audio waveform presentation

I am trying to build a multimedia editor. It includes audio and relatively simple 2D graphics. I am using C++. I would like as much of it to be cross platform as possible.
I wrote audio interface classes for Android and Windows using a common API, so I have that under control for now, but I need a 2D graphics package and possibly a cross-platform GUI as well.
The big challenge is in trying to render the timeline. It needs to generate many rows of waveforms and intersperse them with characters and other shapes, some of which may include blends and transparencies. Or rather, the big challenge has been animating the timeline, as I often need to update it in real time. I have this working nicely using a lot of caching and shifting around of the pixels of off-screen bitmaps. If I have 50 lines on the screen and the screen is 1000 pixels wide, that translates to over 50,000 line-draw operations per frame; actually I use multi-segment lines that end up drawing three times as many segments. To generate each line of the audio waveform it needs to look at a few hundred samples of audio and compute the max and min values, or maybe do an FFT to create a line of differently colored pixels if I want to offer that to the user some day. Various forms of caching let me do this with reasonable latency.
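For concreteness, the per-column min/max reduction I'm describing is roughly the following (names and types are just illustrative):

// Sketch of the per-pixel-column min/max reduction described above.
// 'samples' is one channel of audio; 'samplesPerPixel' depends on the zoom level.
#include <algorithm>
#include <cstddef>
#include <vector>

struct ColumnExtent { float minV, maxV; };

std::vector<ColumnExtent> reduceWaveform(const std::vector<float>& samples,
                                         std::size_t samplesPerPixel)
{
    std::vector<ColumnExtent> columns;
    for (std::size_t i = 0; i < samples.size(); i += samplesPerPixel) {
        std::size_t end = std::min(samples.size(), i + samplesPerPixel);
        ColumnExtent c{samples[i], samples[i]};
        for (std::size_t j = i + 1; j < end; ++j) {
            c.minV = std::min(c.minV, samples[j]);
            c.maxV = std::max(c.maxV, samples[j]);
        }
        columns.push_back(c);  // one vertical line (min..max) per pixel column
    }
    return columns;
}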
The animation side of things will include everything from moving polylines and polygons around in 2D to importing images and playing back multiple moving images (video) at different arbitrary frame rates. I don't think 3D is very useful for now anyway.
At the moment I am using a crazy mix of GDI and GDI+ on Windows, running it all in a Win32 "thing". This is not great, as I cannot invert regions in off-screen bitmaps and I cannot draw individual pixels quickly enough to, for instance, show a spectrogram in real time. I think these APIs were written in the 90s, so there must be something newer I can use to get better performance and cross-platform capabilities. I have been pulling out my remaining hair trying to figure out what to use.
I found another library on Android that lets me set pixels, and the performance actually seems a lot better, but it does not support drawing text, so I am hoping there is something else I could use for that. On Android the plan is to generate the bitmap and then blit it into an interface built with the native Android GUI. These solutions do not seem great, though the vast majority of the rest of the code can be ported without issue, being standard C++, with these horrors cleanly wrapped.
I have seen a few potential candidates: OpenGL and Vulkan seem to do 2D graphics as well as 3D, but perhaps they are much more complex than what I need.
For the GUI I looked at Qt but gave up on it (it seems to need half of my hard drive and has an incomprehensible licensing model). I recently started looking at IMGUI. They say it redraws on every frame; I don't know how that will play with my existing rendering system or whether it would drain a phone's battery. A while ago I was able to get Visual Studio to create a cross-platform app that would run on Android, but for some reason I ditched that; perhaps I should revisit it.
For the timeline I need to draw a waveform. This could be done by drawing a lot of lines (50-150k per frame); they can be just vertical ones for the most part, they do not need fractional pixel widths, they need not be anti-aliased, and their endpoints can be specified with just integers. I also need to add some other lines, polygons, and text that does need to be anti-aliased. I may need to set a lot of pixels directly. Blends and transparencies would be nice but not essential. I also need to copy square chunks of bitmap around, and I need sprites for things like the cursor; I am currently doing this by copying fragments of the bitmap on and off screen. It would also be very nice to be able to select square regions of off-screen bitmaps to invert for doing selections. And I need to assemble all this off screen in a 2- or 3-buffer configuration, so I can reuse chunks of one bitmap to make the next one and present it to the user as a real-time animation. (All of this works with my GDI/GDI+ wrapper, though I have to work around the inversion problem.)
For the animation part I need to draw similar graphics primitives, though it would also be nice to draw characters at arbitrary angles and scales. As for video, if I can extract the images I guess I could blit them to the screen as needed; maybe I would need yet another package to composite them into the other parts of the frame. Further, it would be nice to be able to write the animation out in a higher-quality format in non-real time to make a video file of some kind. It would be nice if I did not have to wrap yet another framework to make this happen, though I can deal with it if I need to.
For the GUI it does not have to be all that fancy. Ideally I would like to have 2 or 3 floating, dockable windows on the PC and a few screens on a phone. I will have to make slightly different UIs for both, but the timeline bitmap and the media-window bitmaps should be reusable for the most part. I mostly just need standard widgets.
My needs are somewhere in-between that of a game and that of a regular boring old forms app except for the need to animate the waveform.
Does anyone have any suggestions and perhaps know these systems well enough to know if they have a good chance to do what I need?
I fear I would have to spend weeks learning each one just to see if they give me the capabilities I need.
Is IMGUI likely to eat the phone's battery just to make the cursor blink?
Any tips would be most welcome.

Change display mode under Linux

I'm trying to grasp programming graphics with Xlib and OpenGL. I can create windows etc., but I'm stuck at changing display modes.
I can list available video modes with Xrandr functions (XRRSizes, XRRRates, XRRGetScreenInfo, XRRConfigSizes), check which one is currently set (XRRConfigCurrentConfiguration), and change the resolution (XRRSetScreenConfig).
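For reference, the resolution-changing part looks roughly like this (error handling omitted; picking which size to switch to is up to the caller):

/* Rough sketch: switch the screen to a listed WxH size with XRandR,
 * keeping the current rotation. Error handling omitted. */
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

static int set_mode(Display *dpy, int width, int height)
{
    Window root = RootWindow(dpy, DefaultScreen(dpy));
    XRRScreenConfiguration *conf = XRRGetScreenInfo(dpy, root);

    int nsizes = 0;
    XRRScreenSize *sizes = XRRConfigSizes(conf, &nsizes);   /* available modes */

    Rotation rotation;
    XRRConfigCurrentConfiguration(conf, &rotation);          /* keep current rotation */

    int found = -1;
    for (int i = 0; i < nsizes; ++i)
        if (sizes[i].width == width && sizes[i].height == height)
            found = i;

    if (found >= 0)
        XRRSetScreenConfig(dpy, conf, root, found, rotation, CurrentTime);

    XRRFreeScreenConfigInfo(conf);
    return found >= 0;
}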
I can list available bit depths (a.k.a. color depths, that is, bits per pixel) with XListDepths.
What I don't know is how to change the bit depth for a given screen.
I couldn't find any suitable function for setting bit depths along with screen sizes in Xrandr. It seems to be totally ignorant about bit depths, which is really weird. I couldn't find any suitable function in the Xlib documentation either.
So my question is:
How do I change the resolution and bit depth programmatically under Linux?
Are there any functions in the Xlib library or somewhere else?
I know that there are full-blown libraries for graphics, such as SDL, but I don't want to use them as a dependency just for changing display modes, since I'm attempting to write a minimal graphics library myself, for learning purposes.
Edit:
What I want to achieve doesn't have to be done with Xlib or X specifically, but it has to cooperate with X gracefully. For example, I don't want to get rid of X altogether; it is still useful for displaying graphics in windowed mode. But I also need some way of switching to a fullscreen mode where I have full control over the video mode: resolution, color depth, refresh rate, and direct access to the actual pixels in the framebuffer, not some "emulation". I assume that there is some way to do it, since there are video games that can do it on Linux.

AVFoundation on OSX: OpenGL texture from video WITHOUT needing access to pixel data

I've read a lot of posts describing how people use AVAssetReader or AVPlayerItemVideoOutput to get video frames as raw pixel data from a video file, which they then use to upload to an OpenGL texture. However, this seems to create the needless step of decoding the video frames with the CPU (as opposed to the graphics card), as well as creating unnecessary copies of the pixel data.
Is there a way to let AVFoundation own all aspects of the video playback process, but somehow also provide access to an OpenGL texture ID it created, which can just be drawn into an OpenGL context as necessary? Has anyone come across anything like this?
In other words, something like this pseudocode:
initialization:
    open movie file, providing an OpenGL context
    get an OpenGL texture id
every OpenGL loop:
    draw the texture id
If you use the Video Decode Acceleration framework on OS X, it will give you a CVImageBufferRef when you "display" decoded frames; you can call CVOpenGLTextureGetName (...) on it to get a native texture handle to use in OpenGL.
This of course is lower level than your question, but it is definitely possible for certain video formats. This is the only technique that I have personal experience with. However, I believe QTMovie also has similar functionality at a much higher level, and would likely provide the full range of features you are looking for.
I wish I could comment on AVFoundation, but I have not done any development work on OS X since 10.6. I imagine the process ought to be similar, though, since it should be layered on top of Core Video.
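For what it's worth, once you have a CVOpenGLTextureRef in hand (however you end up producing it), binding it in your draw loop looks roughly like this; the surrounding glue is only a sketch:

/* Minimal sketch, assuming the frame is (or has been turned into) a
 * CVOpenGLTextureRef; CVOpenGLTextureGetTarget/GetName are real Core Video
 * calls, the rest is illustrative only. */
#include <OpenGL/gl.h>
#include <CoreVideo/CVOpenGLTexture.h>

static void draw_frame(CVOpenGLTextureRef frame)
{
    GLenum target = CVOpenGLTextureGetTarget(frame);  /* often GL_TEXTURE_RECTANGLE_ARB */
    GLuint name   = CVOpenGLTextureGetName(frame);    /* native GL texture handle */

    glEnable(target);
    glBindTexture(target, name);
    /* ... draw a textured quad here ... */
    glBindTexture(target, 0);
    glDisable(target);
}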

Controlling the individual pixels of a projector

I need to control the individual pixels of a projector (an Infocus IN3104) whose native resolution is 1024x768. I would like to know which subset of functions in C, or which API, I can use to do this, either through:
Functions that control the individual pixels of the adapter (not the pixels of a window).
A pixel-perfect, 1:1 map from an image file (1024x768) to the adapter set at the native resolution of the projector.
In a related question (How can I edit individual pixels in a window?), the answerer Caladain states, "Things have come a bit from the old days of direct memory manipulation." I feel I need to go back to that to achieve my goal.
I don't know enough about the graphics pipeline to know which API or software tool to use, and I'm overwhelmed by the number of technologies that come up when I search this topic. I program in R, which easily interfaces with C, but I would welcome suggestions of subsets of functions in OpenGL or C++ or... any other technology.
Or even a full-blown application (a renderer) that will map the image without applying any transformation.
For example, even MS Paint has View > Bitmap, but some transformation gets applied and I don't get pixel-perfect rendering. This projector has a DisplayLink digital input; I've also tried to tweak the timing parameters when using the VESA inputs, and I don't think the transformation happens in the projector. In any case, using MS Paint would not be flexible enough for me.
Platform: Linux or Windows.
I don't see a reason why a full-screen window, e.g. using SDL, wouldn't work. Normal bitmapped graphics is always 1:1; there shouldn't be any weird scaling going on behind your back for a full-screen window.
Since SDL is portable, you should be able to run the same code in Windows or Linux (or any other supported platform).
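A minimal sketch of that idea, using SDL2 here (SDL 1.2 is analogous; "frame.bmp" is just a placeholder for your 1024x768 image, and error handling is omitted):

/* Sketch: fullscreen window at the projector's native mode, 1:1 blit of a bitmap. */
#include <SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("projector",
                                       SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
                                       1024, 768, SDL_WINDOW_FULLSCREEN);
    SDL_Surface *screen = SDL_GetWindowSurface(win);
    SDL_Surface *image  = SDL_LoadBMP("frame.bmp");   /* placeholder 1024x768 image */

    SDL_BlitSurface(image, NULL, screen, NULL);       /* no scaling: pixel for pixel */
    SDL_UpdateWindowSurface(win);
    SDL_Delay(5000);                                  /* keep it on screen briefly */

    SDL_FreeSurface(image);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}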
The usual approach to this problem on current systems is:
Set graphics card to desired resolution
Create borderless full screen window
Draw whatever you want
There's really not much to gain from "low-level access", although it would certainly be possible.

Bitmap conversion using GPU

I don't know whether this is the right forum; anyway, here is the question. In one of our applications we display medical images and, on top of them, an algorithm-generated bitmap. The real bitmap is a 16-bit grayscale bitmap. From this we generate a color bitmap based on a lookup table, e.g.:
(0-100) -> green
(100-200) -> blue
(200 and above) -> red
The display works well with small images (256x256), but when the display area becomes big, say 1024x1024, the grayscale-to-color bitmap conversion takes a while and the interaction is no longer smooth. Recently I have heard a lot about general-purpose GPU programming. In our deployment we have a high-end graphics card (Nvidia QuadroFX).
Our application is built using .NET/C#; if required, I can add a little bit of C++/CLI too.
Now my question is: can this bitmap conversion be offloaded to the graphics processor? Where should I look for further reading?
Yes -- and since you're (apparently) displaying the bitmap, you don't need to go the GPGPU route (e.g., OpenCL or CUDA). You can use a programmable shader for it -- and if I understand what you're saying, it'll be a pretty straightforward one at that.
As far as how to write the shader, it will depend (mostly) on how you're doing the rest of your drawing. Just for an obvious example, if you're already using WPF for your drawing, you'll probably want to use an HLSL shader (WPF supports pixel shaders fairly directly).
It's probably also worth noting that if you had to support older hardware, a table lookup like this is something you could actually manage pretty easily on the GPU, even without programmable shaders. As long as you only need to support recent hardware, a shader will probably be simpler though.
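Just to make the idea concrete, the per-pixel work is nothing more than a threshold lookup. Here it is sketched as a GLSL fragment shader source string (an HLSL pixel shader for WPF would be structured the same way; the uniform and texture names are placeholders, not from the question):

// Illustrative GLSL fragment shader: map a 16-bit grayscale value to a color
// using the thresholds from the question. Uniform/texture names are placeholders.
static const char *kLutFragSrc = R"GLSL(
    #version 120
    uniform sampler2D grayTex;   // 16-bit grayscale image uploaded as a luminance texture
    varying vec2 texCoord;

    void main() {
        // Recover the raw 0..65535 value from the normalized sample.
        float value = texture2D(grayTex, texCoord).r * 65535.0;
        vec3 color = vec3(1.0, 0.0, 0.0);                      // 200 and above -> red
        if (value < 100.0)      color = vec3(0.0, 1.0, 0.0);   // 0-100   -> green
        else if (value < 200.0) color = vec3(0.0, 0.0, 1.0);   // 100-200 -> blue
        gl_FragColor = vec4(color, 1.0);
    }
)GLSL";

In practice you would probably sample a small 1D lookup texture instead of hard-coding the thresholds, which also lets the user edit the color table without recompiling the shader.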