I'm continuing to try to develop an OpenGL path for my software. I'm using abstract classes with concrete implementations for both APIs, but obviously I need a common pixel format enumerator so that I can describe texture, backbuffer/frontbuffer and render target formats between the two. I provide a function in each concrete implementation that accepts my abstract identifier for, say, R8G8B8A8, and returns an enum suitable for either D3D or OpenGL.
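For illustration, here's a minimal sketch of the kind of mapping functions I mean (the enum and function names are hypothetical, and I'm assuming a D3D9-style backend; note that D3D format names read "backwards" relative to byte order):

    #include <d3d9.h>
    #include <GL/gl.h>  // GL_BGRA is GL 1.2+, so it may need your loader's header

    // Illustrative names only. Byte-order R8G8B8A8 is D3DFMT_A8B8G8R8 in D3D9.
    enum class PixelFormat { R8G8B8A8, B8G8R8A8 };

    D3DFORMAT ToD3DFormat(PixelFormat fmt)
    {
        switch (fmt) {
        case PixelFormat::R8G8B8A8: return D3DFMT_A8B8G8R8;
        case PixelFormat::B8G8R8A8: return D3DFMT_A8R8G8B8;
        }
        return D3DFMT_UNKNOWN;
    }

    // OpenGL needs a triple: internal format plus upload format/type.
    struct GLFormat { GLenum internalFormat, format, type; };

    GLFormat ToGLFormat(PixelFormat fmt)
    {
        switch (fmt) {
        case PixelFormat::R8G8B8A8: return { GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE };
        case PixelFormat::B8G8R8A8: return { GL_RGBA8, GL_BGRA, GL_UNSIGNED_BYTE };
        }
        return { 0, 0, 0 };
    }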
I can easily enumerate all D3D pixel formats using CheckDeviceFormat. For OpenGL, I first iterate through the accelerated formats available via Win32 (using DescribePixelFormat) and then look at the PIXELFORMATDESCRIPTOR to see how each one is made up, so I can assign it one of my enums. This is where my problems start:
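The enumeration loop itself looks roughly like this (a sketch; the acceleration test uses the PFD_GENERIC_FORMAT / PFD_GENERIC_ACCELERATED flag pair):

    #include <windows.h>

    void EnumeratePixelFormats(HDC hdc)
    {
        PIXELFORMATDESCRIPTOR pfd = {};
        // Queried with index 1, the return value is the total format count.
        int count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);

        for (int i = 1; i <= count; ++i) {
            DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

            // PFD_GENERIC_FORMAT alone means the software GDI implementation;
            // both flags together mean an MCD driver; neither flag means a
            // fully accelerated ICD format.
            bool software = (pfd.dwFlags & PFD_GENERIC_FORMAT) &&
                            !(pfd.dwFlags & PFD_GENERIC_ACCELERATED);
            if (software)
                continue;

            // pfd.cColorBits, pfd.cRedBits/cRedShift, etc. describe the
            // layout, which is what maps onto the abstract enum here.
        }
    }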
I want to be able to discover all accelerated formats that OpenGL supports on any given system, in a way comparable to the D3D formats. But according to the format descriptors, there aren't any RGB formats (they're all BGR). Further, formats like DXT1-5, which are enumerable in D3D, aren't enumerable using the above method. For the latter, I suppose I can just assume that if the extension is available, it's a hardware accelerated format.
For the former (how to interpret the format descriptor in terms of RGB/BGR ordering, etc.), I'm not too sure how it works.
Anyone know about this stuff?
Responses appreciated.
Ok, I think I found what I was looking for:
OpenGL image formats
Some image formats are defined by the spec (for backbuffer/depth-stencil, textures, render-targets, etc.), so there is a guarantee, to an extent, that these will be available (they don't need enumerating). The pixel format descriptor can still be used to work out available front buffer formats for the given window.
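So, for instance, a required sized format such as GL_RGBA8 can simply be requested directly, with no enumeration step (a sketch; width and height assumed defined elsewhere):

    // GL_RGBA8 is one of the sized internal formats the GL 3.x+ spec requires
    // support for, so there is nothing to enumerate: just ask for it.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);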
Related
I have written a C/C++ implementation of what I term a "compositor" (I come from a video background) to composite/overlay video/graphics on top of a video source. My current compositor implementation is rather naive and there is room for CPU optimization improvements (e.g. SIMD, threading, etc.).
I've created a high-level diagram of what I am currently doing:
The diagram is self-explanatory. Nonetheless, I'll elaborate on some of the constraints:
The main video is always served in an 8-bit YUV 4:2:2 packed format.
The secondary video (optional) will be served in either an 8-bit YUV 4:2:2 or YUVA 4:2:2:4 packed format.
The output from the overlay must come out in an 8-bit YUV 4:2:2 packed format.
Some other bits of information:
The number of graphics inputs will vary; it may (or may not) be a constant value.
The colour format of the graphics can be pinned to either ARGB or YUVA (i.e. I can provide it in whichever format fits best). At the moment, I pin it to YUVA to keep a consistent colour format.
The potential of using OpenGL and accompanying shaders is rather appealing:
No need to reinvent the wheel (in terms of actually performing the composition)
The possibility of using GPU where available.
My concern with using OpenGL is performance. Looking around on the web, my understanding is that a YUV surface would be converted to RGB internally; I would like to minimize the number of colour format conversions and ensure optimal performance. Without prior OpenGL experience, I hope someone can shed some light and suggest whether I'm about to venture down the wrong path.
Perhaps my concern relating to performance is less of an issue when using a dedicated GPU? Do I need to consider separate code paths:
Hardware with GPU(s)
Hardware with only CPU(s)?
Additionally, am I going to struggle when I need to process 10-bit YUV?
You should be able to treat YUV as independent channels throughout. OpenGL shaders will be calling them r, g, and b, but it's just data that can be treated as whatever you want.
Most GPUs will support 10 bits per channel (+ 2 alpha bits). Many will support 16 bits per channel for all 4 channels, but I'm a little rusty here, so I'm not sure how common that support is. Not sure about the 4:2:2 data, but you can always treat it as 3 separate surfaces.
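To make that concrete, here's a minimal sketch under those assumptions: a 10-bit-per-channel texture allocation, and a fragment shader (embedded as a C string) that blends the channels as Y/U/V even though GLSL calls them r/g/b. All names here are illustrative:

    // 10 bits per channel + 2 alpha bits; GL_RGBA16 gives 16 bits for all
    // four channels where supported.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, nullptr);

    // The sampler returns .rgb, but nothing stops you treating the values
    // as Y, U and V and blending in "YUV space" (YUV<->RGB is affine, so
    // linear blending is valid on either side).
    const char* compositeFrag = R"(
        #version 130
        uniform sampler2D mainVideo;   // Y in .r, U in .g, V in .b
        uniform sampler2D overlay;     // YUVA: alpha in .a
        in  vec2 uv;
        out vec4 outColor;
        void main() {
            vec4 src = texture(overlay,   uv);
            vec3 dst = texture(mainVideo, uv).rgb;
            // Plain source-over blend; the channels happen to be YUV.
            outColor = vec4(mix(dst, src.rgb, src.a), 1.0);
        }
    )";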
The number of graphics inputs will vary; it may (or may not) be a constant value.
This is something I'm a little less sure about. Shaders like their inputs to be predictable. If your implementation allows you to add each input iteratively, then you should be fine.
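A sketch of what I mean by iterative (the FBO handle, Layer type and DrawFullscreenQuad helper are hypothetical stand-ins for your own plumbing):

    #include <vector>

    struct Layer { GLuint texture; };              // hypothetical
    extern GLuint compositeFbo, mainVideoTexture;  // hypothetical handles
    extern std::vector<Layer> overlayLayers;
    void DrawFullscreenQuad(GLuint texture);       // hypothetical helper

    // One pass per input keeps the shader fixed and simple, no matter how
    // many inputs show up at runtime.
    void Composite()
    {
        glBindFramebuffer(GL_FRAMEBUFFER, compositeFbo);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        DrawFullscreenQuad(mainVideoTexture);      // opaque base layer first
        for (const Layer& layer : overlayLayers)   // however many there are
            DrawFullscreenQuad(layer.texture);     // blended on top, in order
    }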
As an alternative suggestion, have you looked into OpenCL?
I have a stream of YUV data (from a video file) that I want to stream to a screen in real time. (Basically, I want to write a program that plays the video in real time.)
As such, I am looking for a portable way to send YUV data to the screen, so I don't have to reimplement it for every major platform.
I have found a few options, but all of them seem to have significant issues. They are:
Use OpenGL directly, converting the YUV data to RGB. (And using the single quad for the whole screen trick.)
This obviously won't work because converting YUV to RGB on the CPU is going to be too slow for displaying images in real time.
Use OpenGL, but use a shader to convert the YUV stream to RGB.
This option is a bit better. The problem here is that (as far as I can tell) this will involve making two streams and splicing them together. It might work, but may have issues with larger resolutions.
Instead use SDL, which has the option of creating a YUV context directly.
The problem with this is that I'm already using a cross-platform widget library for other aspects of my program (such as playback controls). As far as I can tell, SDL only opens up in its own (possibly borderless) window. I would ideally like my controls and drawing context to be in the same window, which I can do with OpenGL but not with SDL.
Use SDL, and also use something like Qt for the on-screen widgets, with a message passing protocol to communicate between the two libraries. Have the (borderless) SDL window constantly move itself on top of the Qt window.
While this approach is clever, it seems like the two windows could easily get out of sync, making the user experience sub-optimal.
Forget a cross-platform library; do things OS-specific, making use of hardware acceleration if present.
This is a fine solution, although it's not cross-platform.
As such, is there any good way to draw YUV data to a screen that ideally is:
Portable (at least to the major platforms).
Fast enough to be real time.
Allows other widgets in the same window.
Use option number 2. There's no problem doing the YUV to RGB conversion in the shader, and there's no other way to do it that is as portable.
Think of it like this: no matter how big or small your video is, the fragment shader (where the conversion is done) executes once per pixel at display time. So whether you show a small video in full screen or a big one, the computation for the shaders is the same, because they are filling the same number of pixels.
Any video card in normal conditions will be able to run this kind of shader without any problem.
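For reference, a minimal sketch of such a shader, assuming the Y, U and V planes are uploaded as three single-channel textures and using the common BT.601-style constants:

    // YUV -> RGB in the fragment shader (option 2), embedded as a C string.
    const char* yuvToRgbFrag = R"(
        #version 130
        uniform sampler2D texY, texU, texV;  // one plane per texture
        in  vec2 uv;
        out vec4 outColor;
        void main() {
            float y = texture(texY, uv).r;
            float u = texture(texU, uv).r - 0.5;  // chroma is centered at 0.5
            float v = texture(texV, uv).r - 0.5;
            outColor = vec4(y + 1.402 * v,
                            y - 0.344 * u - 0.714 * v,
                            y + 1.772 * u,
                            1.0);
        }
    )";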
I'm just curious if there is a way to use OpenGL to write pixel data to an external JPEG/PNG/some other image file type (and also create an image to write the data to if one does not already exist). I couldn't really find anything on the subject. My program doesn't really make use of OpenGL at all otherwise; I just need something that can write out images.
Every image "put into" or "taken from" OpenGL is in a rather raw pixel format. OpenGL does neither have functionality for file I/O nor for handling of sophisticated image formats like e.g. BMP, JPEG or PNG, as that is completely out of its scope. So you will have to look for a different library to manage that and if this was the only reason you considered OpenGL, then you don't need it at all.
A very simple and easy-to-use one (with an interface similar to OpenGL) would be DevIL. Many larger frameworks for more complex tasks, like Qt (GUI and OS abstraction) or OpenCV (image processing), also have functionality for image loading and saving. And last but not least, many of the individual formats, like JPEG or PNG, usually have small official open-source libraries for handling their respective files.
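As a sketch of the DevIL route (assuming you already have RGBA8 pixel data in memory; the file type is inferred from the extension):

    #include <IL/il.h>

    // Minimal sketch: write an RGBA8 pixel buffer out as a PNG with DevIL.
    void SavePixels(const unsigned char* pixels, int width, int height)
    {
        ilInit();

        ILuint image;
        ilGenImages(1, &image);
        ilBindImage(image);

        // 4 channels, 8 bits each; DevIL copies the data into its own storage.
        ilTexImage(width, height, 1, 4, IL_RGBA, IL_UNSIGNED_BYTE,
                   const_cast<unsigned char*>(pixels));

        ilEnable(IL_FILE_OVERWRITE);
        ilSaveImage("output.png");   // format chosen from the extension

        ilDeleteImages(1, &image);
    }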
Suppose that I have a 3d model with animation in, say, Blender. I need to export this model to some file and then use it in OpenGL app, i.e. without hardcoding animations, but reading them from file. What format is the best solution?
OpenGL doesn't support any format directly, but the classic OBJ file format lines up very well with drawing with vertex arrays. Basically, OBJ lists all vertices independently of the geometry. This way, several objects can share the same points. All kinds of groupings are also possible.
Also, it is one of the earliest formats to support a wide range of spline curves & surfaces, including Bezier, B-Splines & NURBS.
A basic description can be found here:
http://en.wikipedia.org/wiki/Wavefront_.obj_file
The complete OBJ spec can be found here:
http://www.martinreddy.net/gfx/3d/OBJ.spec
It's not a modern format, but it's simple, human readable and widely supported.
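As a sketch of how directly OBJ maps onto vertex arrays, a minimal loader for "v" and plain triangle "f" records might look like this (real files also carry normals, texture coordinates, v/vt/vn index triples and more):

    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Reads positions and triangle indices into arrays ready for
    // glVertexPointer/glDrawElements-style rendering.
    bool LoadObj(const char* path,
                 std::vector<Vec3>& vertices,
                 std::vector<unsigned>& indices)
    {
        FILE* f = std::fopen(path, "r");
        if (!f) return false;

        char line[256];
        while (std::fgets(line, sizeof(line), f)) {
            Vec3 v;
            unsigned a, b, c;
            if (std::sscanf(line, "v %f %f %f", &v.x, &v.y, &v.z) == 3) {
                vertices.push_back(v);
            } else if (std::sscanf(line, "f %u %u %u", &a, &b, &c) == 3) {
                // OBJ indices are 1-based.
                indices.push_back(a - 1);
                indices.push_back(b - 1);
                indices.push_back(c - 1);
            }
        }
        std::fclose(f);
        return true;
    }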
What format is the best solution?
OpenGL doesn't care about file formats, so feel free to choose whatever suits your needs best. Due to the rise of WebGL, I started dumping whole Blender scenes into collections of JSON-formatted files.
I don't know whether this is the right forum; anyway, here is the question. In one of our applications we display medical images, and on top of them an algorithm-generated bitmap. The real bitmap is a 16-bit grayscale bitmap. From this we generate a color bitmap based on a look-up table, e.g.:
(0-100) -> green
(100-200) -> blue
(200 and above) -> red
The display works well with small images (256x256). But when the display area becomes big, say 1024x1024, the grayscale-to-color bitmap conversion takes a while and the interactions are no longer smooth. Recently I have heard a lot about general-purpose GPU programming. In our deployment we have a high-end graphics card (NVIDIA Quadro FX).
Our application is built using .Net/C#; if required I can add a little bit of C++/CLI too.
Now my question is: can this bitmap conversion be offloaded to the graphics processor? Where should I look for further reading?
Yes -- and since you're (apparently) displaying the bitmap, you don't need to go the GPGPU route (e.g., OpenCL or CUDA). You can use a programmable shader for it -- and if I understand what you're saying, it'll be a pretty straightforward one at that.
As far as how to write the shader, it will depend (mostly) on how you're doing the rest of your drawing. Just for an obvious example, if you're already using WPF for your drawing, you'll probably want to use an HLSL shader (WPF supports pixel shaders fairly directly).
It's probably also worth noting that if you had to support older hardware, a table lookup like this is something you could actually manage pretty easily on the GPU, even without programmable shaders. As long as you only need to support recent hardware, a shader will probably be simpler though.
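As a sketch of how simple the shader can be, here is the lookup expressed in GLSL (the HLSL version for WPF is structurally the same; texture and uniform names are illustrative):

    // The grayscale image is uploaded as a 16-bit single-channel texture;
    // "lut" is a small 1D texture holding the green/blue/red colour ramp
    // from the table above.
    const char* lutFrag = R"(
        #version 130
        uniform sampler2D grayImage;  // 16-bit grayscale, normalized to [0,1]
        uniform sampler1D lut;        // colour ramp indexed by intensity
        in  vec2 uv;
        out vec4 outColor;
        void main() {
            float gray = texture(grayImage, uv).r;  // one fetch per pixel
            outColor   = texture(lut, gray);        // colour from the LUT
        }
    )";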