Converting an OpenGL screen to bytes and back to a Texture in Unity - opengl

I have an OpenGL program which generates a window and then sends its bytes through a socket to Unity.
The OpenGL program retrieves the data with:
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
where 'buffer' is a byte[] which is then sent to the Unity app. There I try to convert it back to an image by doing:
Texture2D t = new Texture2D(width, height, TextureFormat.ETC2_RGB, false);
t.LoadRawTextureData(buffer);
t.Apply();
However, the texture appears as a black screen with colorful stains that move and disappear cyclically.
I chose the 'ETC2_RGB' format because it looked relevant, but I also tried some more formats from TextureFormat, and all of them looked white/black with colorful dots, pink, etc. What is the right conversion that I should choose?

You're using the wrong texture format. ETC2_RGB is a compressed format with 4 bits per pixel. What you get from glReadPixels is an uncompressed format with 8 bits per channel. The Unity format for that is TextureFormat.RGB24.
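On the sending side it is also worth making sure the rows are tightly packed: with GL_RGB and an arbitrary window width, the default GL_PACK_ALIGNMENT of 4 pads each row, which would likewise scramble a texture loaded with LoadRawTextureData. A minimal sketch of the capture side (plain C++; the helper name and sizes are assumptions for this sketch, and how the buffer gets onto the socket is up to you):

    #include <GL/gl.h>
    #include <vector>

    // Grab the current framebuffer as tightly packed 8-bit RGB, ready to be
    // loaded on the Unity side into a TextureFormat.RGB24 texture.
    std::vector<unsigned char> CaptureFrame(int width, int height)
    {
        std::vector<unsigned char> buffer(width * height * 3);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);   // disable per-row padding
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer.data());
        // Note: glReadPixels returns rows bottom-up, so the Unity texture may
        // appear vertically flipped unless one side flips the rows.
        return buffer;
    }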

Related

How to determine top-down/bottom-up from WIC decoder?

I'm using WIC (Windows Imaging Component) to decode image files and get access to the pixel data. I'm trying to figure out the pixel order (i.e., bottom-up or top-down).
I use IWICImagingFactory::CreateDecoderFromFileName to create the decoder from which I grab the (first) frame (IWICBitmapFrameDecode). With the frame, I use GetPixelFormat and GetSize to compute a buffer size, and finally I use CopyPixels to get the decoded pixel data into my buffer.
This works fine with a variety of JPEG files, giving me pixel rows in top-down sequence, and the pixels are in BGRX order (GUID_WICPixelFormat32bppBGR).
When I try with GIF files, however, the pixel rows come in bottom-up sequence. The reported pixel format is RGBA (GUID_WICPixelFormat32bppRGBA), but the ground truth shows the channel order is BGRA (with the blue in the low byte of each 32-bit pixel, just like JPEG).
My primary question: Is there a way for me to query the top-down/bottom-up orientation of the pixel data?
I found a similar question that asked about rotation when using JPEG sources, and the answer was to query the EXIF data to know whether the image was rotated. But EXIF isn't used with GIF. So I'm wondering whether I'm supposed to assume that pixels are always bottom-up, except for ones that do have an EXIF orientation that says otherwise.
Update 6/25/2020: Nope, the JPEG orientation is neutral and the GIF has no orientation information, yet MS Paint and other programs can open the files in the correct orientation.
My secondary question: What's up with the incorrect channel order (RGB/BGR) from the GIF decoder?
Not only that, the WIC documentation says that the GIF decoder should return indexes into a color table (GUID_WICPixelFormat8bppIndexed) rather than actual pixel values. Is it possible some software on my machine installed its own buggy GIF decoder that supersedes the one that comes with Windows 10?
To query photo orientation for formats that support it, you should use the System.Photo.Orientation photo metadata policy (or one of the file-format-specific metadata query paths) via the IWICMetadataQueryReader interface.
As for GetPixelFormat() reporting an "incorrect" pixel format, the explanation is right there in the Remarks section:
The pixel format returned by this method is not necessarily the pixel format the image is stored as. The codec may perform a format conversion from the storage pixel format to an output pixel format.
The native byte order of image bitmaps under Windows is BGRA, so that is what you are getting from the decoder. If you want the image in a different format, you need to use IWICImagingFactory::CreateFormatConverter() to create a format converter and convert the image data before copying.
Finally, GIF doesn't have orientation metadata because it is always encoded from top to bottom. The most likely reason you are getting a vertically inverted image is that you are reading it directly from the decoder -- try calling CopyPixels() on the converter instead.
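A minimal sketch of that conversion step (C++, error handling omitted; the factory, the decoded frame, the destination buffer and the helper name are all assumptions of this sketch):

    #include <wincodec.h>

    // Convert a decoded frame to 32bpp BGRA and copy the pixels from the
    // converter rather than from the raw frame.
    void CopyFrameAsBGRA(IWICImagingFactory* factory,
                         IWICBitmapFrameDecode* frame,
                         BYTE* buffer, UINT bufferSize)
    {
        IWICFormatConverter* converter = nullptr;
        factory->CreateFormatConverter(&converter);

        // Ask WIC for 32bpp BGRA regardless of the storage format
        // (indexed GIF, YCbCr JPEG, ...).
        converter->Initialize(frame, GUID_WICPixelFormat32bppBGRA,
                              WICBitmapDitherTypeNone, nullptr, 0.0,
                              WICBitmapPaletteTypeCustom);

        UINT width = 0, height = 0;
        frame->GetSize(&width, &height);

        // Copy from the converter, not the frame.
        converter->CopyPixels(nullptr, width * 4, bufferSize, buffer);
        converter->Release();
    }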

OpenGL Reading Pixels from Texture?

I need a way to get the pixels of an already existing texture, similarly to how D3DTexture's LockRect works with ReadOnly and NoSysLock. Some of my textures are also stored in compressed DXT1/3/5 formats, and I'm not entirely sure whether that would affect anything, i.e. whether those formats are simply decoded by OpenGL and stored as raw pixels instead of staying compressed. So would retrieving the pixels guarantee the same format that was used to set the texture with?
Generally you will want to use a PBO (Pixel Buffer Object) for reading pixels; the linked page has all the information you need on PBOs.
So would retrieving the pixels guarantee the same format that was used to set the texture with?
It is possible to convert the format and retrieve the pixels at the same time. Look at the format conversion section on the page I linked.
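For completeness, here is what a plain (non-PBO) read-back with on-the-fly conversion might look like; this is only a sketch, assuming a GL_TEXTURE_2D whose size you already know and a helper name made up for illustration:

    #include <GL/gl.h>
    #include <vector>

    // Read an existing texture back as tightly packed RGBA bytes. The driver
    // decompresses DXT1/3/5 and converts to the requested format for you.
    std::vector<unsigned char> ReadTexturePixels(GLuint texture, int width, int height)
    {
        std::vector<unsigned char> pixels(width * height * 4);
        glBindTexture(GL_TEXTURE_2D, texture);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);        // no row padding
        glGetTexImage(GL_TEXTURE_2D, 0,             // mip level 0
                      GL_RGBA, GL_UNSIGNED_BYTE,    // requested format, not storage format
                      pixels.data());
        return pixels;
    }

The PBO variant is the same call with a buffer bound to GL_PIXEL_PACK_BUFFER, which lets the transfer run asynchronously.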

Weird VGL Notice - [VGL] NOTICE: Pixel format of 2D X server does not match pixel format of Pbuffer. Disabling PBO readback

I'm porting a game that I wrote from Windows to Linux. It uses GLFW and OpenGL. When I run it using optirun, to take advantage of my nVidia Optimus setup, it spits this out to the console:
[VGL] NOTICE: Pixel format of 2D X server does not match pixel format of
[VGL] Pbuffer. Disabling PBO readback.
I've never seen this before, but my impression is that I'm loading my textures in GL_RGBA format, when they need to be in GL_BGRA or something like that. However, I'm using DevIL's ilutGLLoadImage function to obtain an OpenGL texture handle, so I never specify a format.
Has anyone seen this before?

Keep alpha-transparency of a video through HDMI

The scenario I'm dealing with is as follows: I need to take the screen generated by OpenGL and send it through HDMI to an FPGA component while keeping the alpha channel. But right now the data that is being sent through HDMI is only RGB (24 bits, without an alpha channel), so I need a way to force sending the alpha bits through this port somehow.
See image: http://i.imgur.com/hhlcbb9.jpg
One solution I could think of is to convert the screen buffer from RGBA to RGB while packing the alpha channels into the RGB buffer.
For example:
The original buffer: [R G B A][R G B A][R G B A]
The output i want: [R G B][A R G][B A R][G B A]
The point is to avoid having to go through every single pixel.
But I'm not sure if it's possible at all using OpenGL or any other technology (VideoCore kernel?).
opengl frame buffer
Do you actually mean a framebuffer, or some kind of texture? Because framebuffers cannot be resized, and this repacked image would have a third more pixels than the original. You can't actually do that with a framebuffer.
You could do it with a texture, but only by resizing it. You would have to get the texel data with glGetTexImage into some buffer, then upload the texel data to another texture with glTexImage2D. You would simply change the pixel transfer format and texture width appropriately. The read would use GL_RGBA, and the write would use GL_RGB, with an internal format of GL_RGB8.
The performance of this will almost certainly not be very good. But it should work.
But no, there is no simple call you can make to cause this to happen.
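A minimal sketch of that reinterpretation (C++; it assumes width * 4 is divisible by 3, row padding disabled, and a helper name made up for illustration):

    #include <GL/gl.h>
    #include <vector>

    // Read the RGBA texels into a byte buffer, then re-upload the same bytes
    // as an RGB8 texture that is 4/3 as wide.
    GLuint RepackRGBAAsRGB(GLuint srcTexture, int width, int height)
    {
        std::vector<unsigned char> bytes(width * height * 4);

        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        glBindTexture(GL_TEXTURE_2D, srcTexture);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, bytes.data());

        GLuint dstTexture = 0;
        glGenTextures(1, &dstTexture);
        glBindTexture(GL_TEXTURE_2D, dstTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width * 4 / 3, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, bytes.data());
        return dstTexture;
    }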
You may be able to send the alpha channel separately in OpenGL via a different RGB or HDMI video output on your video card.
So your PC then outputs RGB down one cable and the alpha down the other. Your alpha probably needs to be converted to greyscale so that it is still 24 bits.
You then select which signal is the key in your FPGA.
I'm presuming your FPGA supports chroma keying.
I have done something similar before using a program called Brainstorm, which uses a specific video card that supports SDI out and splits the RGB and the alpha into separate video channels (cables); the vision mixer then does the keying.
Today, however, I have created a new solution which mixes the multiple video channels first on the PC and then outputs the final mixed video directly to either an RTMP streaming server or to a DVI to SDI scan converter.

Qt and OpenGL how to render a PVR

I'm having a few issues regarding how to render a PVR.
I'm confused about how to get the data from the PVR onto the screen. I have a window that is ready to draw a picture, and I'm a bit stuck. What do I need to get from the PVR as parameters to then be able to draw a texture? With JPEGs and PNGs you can just load the image from a directory, but how does the same work for a PVR?
It depends on what format the data inside the PVR is in. If it's a supported standard then just copy it to a texture with glTexSubImage2D(); otherwise you will need to decompress it into something OpenGL understands, like RGB or RGBA.
Edit: OpenGL is a display library (well, much more than that); it doesn't read images, decode movies or do sound.
TGA files are generally very simple uncompressed RGB or RGBA image data, so it should be trivial to decode the file, extract the image data and copy it directly to an OpenGL texture.
Since you tagged the question Qt, you can use QImage to load the TGA and then use the QImage with OpenGL, as sketched below.
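A minimal sketch of that route (it assumes a current OpenGL context and that QImage can actually decode the file -- PNG and JPEG work out of the box, while TGA and PVR need an image-format plugin or your own decoder; the helper name is made up for illustration):

    #include <QImage>
    #include <GL/gl.h>

    // Load an image with QImage and upload it as an OpenGL texture.
    GLuint LoadTextureWithQImage(const char* path)
    {
        // Convert to a byte layout OpenGL understands; add .mirrored() as well
        // if you need to flip for GL's bottom-up row order.
        QImage img = QImage(path).convertToFormat(QImage::Format_RGBA8888);

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, img.width(), img.height(), 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, img.constBits());
        // No mipmaps are generated, so pick a non-mipmap filter.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        return tex;
    }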