Is there any equivalent for gluScaleImage function? - opengl

I am trying to load a texture with non-power-of-two (NPOT) dimensions in my application, which uses the OGLPlus library. I use images::Image to load an image as a texture. When I call the Context::Bound function to set the texture, it throws an exception. When the size of the input image is POT, it works fine.
I checked the source code of OGLPlus and it seems to use the glTexImage2D function. I know that I could use gluScaleImage to scale my input image, but it is dated and I want to avoid it. Are there any functions in newer libraries like GLEW or OGLPlus with the same functionality?

It has been 13 years (OpenGL 2.0) since the restriction of power-of-two on texture sizes was lifted. Just load the texture with glTexImage and, if needed, generate the mipmaps with glGenerateMipmap.
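For example, here is a minimal sketch of uploading an NPOT image directly; it assumes pixels points to width x height tightly packed RGBA bytes that you have already decoded from the file (for instance with stb_image), so those three names are placeholders rather than anything from the question:
// Upload an NPOT image as-is; no scaling step is needed on OpenGL 2.0+ hardware.
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            // safe for arbitrary row widths
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
             width, height, 0,                    // NPOT width/height are fine
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);                  // optional, only if you need mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);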
EDIT: If you truly want to scale the image prior to uploading to an OpenGL texture, I can recommend stb_image_resize.h — a one-file public domain library that does that for you.

Related

How do we display pixel data calculated in an OpenCL kernel to the screen using OpenGL?

I am interested in writing a real-time ray tracing application in C++, and I heard that using OpenCL-OpenGL interoperability is a good way to do this (to make good use of the GPU), so I have started writing a C++ project using this interoperability and GLFW for window management. I should mention that although I have some coding experience, I do not have much in C++ and have not worked with OpenCL or OpenGL before attempting this project, so I would appreciate answers given with this in mind (that is, beginner-friendly terminology is preferred).
So far I have been able to get OpenCL-OpenGL interoperability working with an example using a vertex buffer object. I have also demonstrated that I can create image data with an RGBA array (at least on the CPU), send this to an OpenGL texture with glTexImage2D() and display it using glBlitFramebuffer().
My problem is that I don't know how to create an OpenCL kernel that can calculate pixel data in a form that can be handed to glTexImage2D(). I understand that to use the interoperability we must first create OpenGL objects and then create OpenCL objects from them, since these objects share memory, so I am assuming I must first create an empty OpenGL array object, then create an OpenCL array object from it, run an appropriate kernel that writes the pixel data into it, and finally use the OpenGL array object as the data parameter in glTexImage2D(), but I am not sure what kind of object to use and have not seen any examples demonstrating this. A simple example showing how OpenCL can create pixel data for an OpenGL texture image (assuming a valid OpenCL-OpenGL context) would be much appreciated. Please do not leave any line out, as I might not be able to fill in the blanks!
It is also very possible that the method I described above for implementing a ray tracer is not possible, or at least not recommended, so if this is the case please outline an advised alternative method for sending OpenCL-calculated pixel data to OpenGL and then drawing it to the screen. The answer to this similar question does not go into enough detail for me, and the CL/GL interop link it points to is not working. That answer mentions that this can be achieved using a renderbuffer rather than a texture, but the Khronos OpenGL wiki for Renderbuffer Objects says at the bottom that the only way to send pixel data to them is via pixel transfer operations, and I cannot find any straightforward explanation of how to initialize data that way.
Note that I am using OpenCL C (no C++ bindings).
From your second paragraph: you are creating an OpenCL context with a platform-specific combination of GLX_DISPLAY / WGL_HDC and GL_CONTEXT properties to interoperate with OpenGL, and you can create a vertex buffer object that can be read and written as necessary by both OpenGL and OpenCL.
That's most of the work. In OpenGL you can copy any VBO into a texture with
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, myVBO);
glTexSubImage2D(GL_TEXTURE_2D, level, x, y, width, height, format, type, NULL);
with the NULL at the end meaning the pixel data is read from the buffer bound to GL_PIXEL_UNPACK_BUFFER (GPU memory, starting at offset 0) rather than from CPU memory.
As with copying from regular CPU memory, you might also need to change the pixel alignment if it isn't 32 bit.
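Putting the pieces together, here is a minimal sketch of that path. It assumes a shared CL/GL context and command queue (ctx, queue) already exist, the texture tex has already been allocated with glTexImage2D(..., NULL), width and height are ints, and fill_pixels is a hypothetical kernel; building the kernel with clCreateProgramWithSource / clBuildProgram / clCreateKernel and all error checking are omitted:
// OpenCL C kernel source: one RGBA pixel per work-item.
const char *src =
"__kernel void fill_pixels(__global uchar4 *pixels, int width) {\n"
"    int x = get_global_id(0);\n"
"    int y = get_global_id(1);\n"
"    pixels[y * width + x] = (uchar4)(x & 255, y & 255, 128, 255);\n"
"}\n";

// 1. Create a GL pixel buffer big enough for the image and share it with CL.
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
cl_mem clPixels = clCreateFromGLBuffer(ctx, CL_MEM_WRITE_ONLY, pbo, NULL);

// 2. Let the kernel fill the buffer (GL must be idle while CL owns it).
glFinish();
clEnqueueAcquireGLObjects(queue, 1, &clPixels, 0, NULL, NULL);
clSetKernelArg(kernel, 0, sizeof(cl_mem), &clPixels);
clSetKernelArg(kernel, 1, sizeof(int), &width);
size_t global[2] = { (size_t)width, (size_t)height };
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL, 0, NULL, NULL);
clEnqueueReleaseGLObjects(queue, 1, &clPixels, 0, NULL, NULL);
clFinish(queue);

// 3. Copy the buffer contents into the texture (a GPU-to-GPU copy).
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
From there you can draw the texture on a quad or blit it with glBlitFramebuffer, as you are already doing.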

VTK OpenGL objects (3D texture) access from CUDA

Is there any proper way to access the low level OpenGL objects of VTK in order to modify them from a CUDA/OpenCL kernel using the openGL-CUDA/OpenCL interoperability feature?
Specifically, I would want to get the GLuint (or unsigned int) member from vtkOpenGLGPUVolumeRayCastMapper that points to the Opengl 3D Texture object where the dataset is stored, in order to bind it to a CUDA Surface to be able to access and modify its values from a CUDA kernel implemented by me.
For further information, the process that I need to follow is explained here:
http://rauwendaal.net/2011/12/02/writing-to-3d-opengl-textures-in-cuda-4-1-with-3d-surface-writes/
where the texID object used there (in Steps 1 and 2) is the equivalent to what I want to retrieve from VTK.
At a first look at the vtkOpenGLGPUVolumeRayCastMapper functions, I don't see an easy way to do this, other than perhaps creating a vtkGPUVolumeRayCastMapper subclass, but even in that case I am not sure what exactly I should modify, since I guess some other members depend on the 3D texture values and would also need to be updated after modifying it.
So, do you know some way to do this?
Lots of thanks.
Subclassing might work, but you could probably avoid it if you wanted. The important thing is that you get the GL/CUDA API calls in the right order.
First, you have to register the texture with CUDA. This is done using:
cudaGraphicsGLRegisterImage(&cuda_graphics_resource, texture_handle,
GL_TEXTURE_3D, cudaGraphicsRegisterFlagsSurfaceLoadStore);
with the stipulation that texture_handle is a GLuint written to by a call to glGenTextures(...)
Once you have registered the texture with CUDA, you can create the surface which can be read or written to in your kernel.
The only thing you have to worry about from here is that VTK does not use the texture between the calls to cudaGraphicsMapResources(...) and cudaGraphicsUnmapResources(...). Everything else should just be standard CUDA.
Also once you map the texture to CUDA and write to it within a kernel, there is no additional work besides unmapping the texture. GL will get the modified texture the next time it is used.
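As a rough illustration of that sequence (a sketch only: it assumes the 3D texture uses a surface-writable format such as GL_R32F, w/h/d are its dimensions, and fill_volume is a hypothetical kernel; error checking is omitted):
__global__ void fill_volume(cudaSurfaceObject_t surf, int w, int h, int d) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int z = blockIdx.z * blockDim.z + threadIdx.z;
    if (x < w && y < h && z < d) {
        float value = 1.0f;                                  // whatever you compute
        surf3Dwrite(value, surf, x * sizeof(float), y, z);   // x offset is in bytes
    }
}

// Host side, after cudaGraphicsGLRegisterImage(...) as above:
cudaGraphicsMapResources(1, &cuda_graphics_resource, 0);     // VTK/GL must not touch it now

cudaArray_t level0;
cudaGraphicsSubResourceGetMappedArray(&level0, cuda_graphics_resource, 0, 0);

cudaResourceDesc desc = {};
desc.resType = cudaResourceTypeArray;
desc.res.array.array = level0;
cudaSurfaceObject_t surf;
cudaCreateSurfaceObject(&surf, &desc);

dim3 block(8, 8, 8);
dim3 grid((w + 7) / 8, (h + 7) / 8, (d + 7) / 8);
fill_volume<<<grid, block>>>(surf, w, h, d);

cudaDestroySurfaceObject(surf);
cudaGraphicsUnmapResources(1, &cuda_graphics_resource, 0);   // GL sees the new contents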

FreeType Text Rendering under DirectX 11

At the moment I just want to render a single char with FreeType and DirectX 11.
But how can I create an "ID3D11ShaderResourceView" from an FT_Bitmap?
I already tried "D3DX11CreateShaderResourceViewFromFile" and "D3DX11CreateTextureFromFile", but neither works.
Does anyone have a good tutorial or a code snippet for me?
Since FT_Bitmap is a bitmap already loaded into memory, you can't use the functions that create a texture from a file. Instead, you'll want to use a function to create a texture from memory.
You could use D3DX11CreateShaderResourceViewFromMemory, but the D3DX library is deprecated. As mentioned on that page, you might want to instead use the DirectXTK or DirectXTex library to create a texture from memory.
Alternatively, you can create a D3D texture and shader resource view manually using ID3D11Device::CreateTexture2D and ID3D11Device::CreateShaderResourceView.
The only tricky part will be determining which DXGI_FORMAT to use. You will need to match it to the FT_Pixel_Mode of the bitmap.
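For example, a minimal sketch of the manual route, assuming the glyph was rendered in the default mode so the FT_Bitmap is 8-bit grayscale (FT_PIXEL_MODE_GRAY), which maps to DXGI_FORMAT_R8_UNORM; face is your FT_Face after FT_Load_Char(face, ch, FT_LOAD_RENDER), device is your ID3D11Device, and error checking is omitted:
// The FT_Bitmap of the rendered glyph: one byte of coverage per pixel.
const FT_Bitmap& bmp = face->glyph->bitmap;

D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = bmp.width;
desc.Height           = bmp.rows;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8_UNORM;    // matches FT_PIXEL_MODE_GRAY
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_IMMUTABLE;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem     = bmp.buffer;
init.SysMemPitch = (UINT)bmp.pitch;              // bytes per row in the FT_Bitmap

ID3D11Texture2D* glyphTex = nullptr;
device->CreateTexture2D(&desc, &init, &glyphTex);

ID3D11ShaderResourceView* glyphSRV = nullptr;
device->CreateShaderResourceView(glyphTex, nullptr, &glyphSRV);
In the pixel shader you would then sample the single red channel and use it as the glyph's alpha/coverage when blending.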

What is the use case of cudaGraphicsRegisterFlagsWriteDiscard in cudaGraphicsGLRegisterImage?

I'm fairly new to CUDA, but I've managed to display something generated by a kernel on the screen using OpenGL. I've tried several approaches:
Using a PBO and an OpenGL texture (old style);
Using a OpenGL texture as a CUDA surface and rendering on a quad (new style);
Using a renderbuffer as a CUDA surface and rendering using glBlitFramebuffer.
All of them worked, but while implementing #2 I erroneously set the hint to cudaGraphicsRegisterFlagsWriteDiscard. Since all of the data would be generated by CUDA, I thought this was the correct option. However, I later realized that I needed a CUDA surface to write to an OpenGL texture, and when you use a surface you are required to register with the SurfaceLoadStore flag.
So basically my question is this : Since I absolutely need a CUDA surface to write to an OpenGL texture in CUDA, what is the use case of cudaGraphicsRegisterFlagsWriteDiscard in cudaGraphicsGLRegisterImage?
The documentation description seems pretty straightforward. It is for one-way delivery of data from CUDA to OpenGL.
This online book excerpt provides a similar explanation:
Applications where CUDA is the producer and OpenGL is the consumer should register the objects with a write-discard flag...
If you want to see an example, take a look at the postProcessGL CUDA sample. In that case, OpenGL renders an image, and it is post-processed (a blur is added) by CUDA before display. There are two separate pathways for the data flow. In the OpenGL->CUDA case, the source texture is set up in the createTextureSrc function, and the flag specified is read-only. For the CUDA->OpenGL case (delivery of the post-processed frame), the destination texture is set up in createTextureDst, where a call is made to cudaGraphicsGLRegisterImage with the cudaGraphicsMapFlagsWriteDiscard flag specified, since on this path CUDA is producing and OpenGL is consuming.
To understand how the textures are handled (populated with data from the CUDA operations via a cudaArray), you probably want to study the sequence of operations in processImage().
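In outline, that CUDA-produces / OpenGL-consumes path looks roughly like this (a sketch with hypothetical names: dstTexture is the GL texture being displayed and d_resultBuffer is a linear RGBA8 device buffer the kernel has already written; error checking omitted):
// Register the destination texture: CUDA only ever overwrites it, so its
// previous contents never need to be preserved across a map.
cudaGraphicsResource *dstResource = nullptr;
cudaGraphicsGLRegisterImage(&dstResource, dstTexture, GL_TEXTURE_2D,
                            cudaGraphicsRegisterFlagsWriteDiscard);

// Each frame: map the resource and copy the kernel's output into the
// cudaArray that backs the GL texture.
cudaGraphicsMapResources(1, &dstResource, 0);
cudaArray_t dstArray;
cudaGraphicsSubResourceGetMappedArray(&dstArray, dstResource, 0, 0);
cudaMemcpy2DToArray(dstArray, 0, 0,
                    d_resultBuffer, width * 4,   // source pitch in bytes (RGBA8)
                    width * 4, height,
                    cudaMemcpyDeviceToDevice);
cudaGraphicsUnmapResources(1, &dstResource, 0);

// The read-only source texture for the OpenGL->CUDA direction would instead
// be registered with cudaGraphicsRegisterFlagsReadOnly.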

Texture Mapping C++ OpenGL

I have read around on this, including NeHe and here, looking for solutions, but I can't find a specific answer.
I am trying to load a photo, called stars.jpg. I want to make this the background of the scene by mapping it with UV coordinates, doing it by
glBegin(GL_QUADS);
glTexCoord2f(0,0);
glVertex2f(0,0);
However I am just very confused about how to actually load the textures in; all the calls like
glActiveTexture();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID);
just confuse me. What do these mean/do, and in what order am I supposed to put them to get stars.jpg to be my background?
Your number-one tool for loading textures in OpenGL is the Simple OpenGL Image Loader (SOIL) library. You just need to pass the filename and some flags and you'll get your texture ID.
Also, you're learning a very old and outdated version of OpenGL; you might want to search for newer tutorials or browse the specs whenever you feel ready.
Here's a step by step tutorial on loading textures
http://www.nullterminator.net/gltexture.html
It's important to remember that OpenGL is a state machine, so you have to tell it "I'm going to talk about textures now"; that's where calls like glActiveTexture() come in.
Also keep in mind that a .jpg is compressed, so you will have to decode it into raw pixel values before uploading it as a texture: either find a library that gives you the bitmap values of your .jpg file, or pre-convert it into a .ppm or .bmp, which makes reading in the values easier.
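For example, a minimal fixed-function sketch using SOIL, assuming stars.jpg sits next to the executable and a GL context already exists (error checking omitted):
#include "SOIL.h"

// Load stars.jpg, decode it, upload it, and get back a texture ID.
GLuint starsTex = SOIL_load_OGL_texture("stars.jpg",
                                        SOIL_LOAD_AUTO,
                                        SOIL_CREATE_NEW_ID,
                                        SOIL_FLAG_INVERT_Y);

// Each frame: enable texturing, bind the texture, draw a textured quad
// covering the whole screen (clip space with the default identity matrices).
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, starsTex);

glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glActiveTexture() only matters once you use more than one texture unit at a time, so you can leave it out for a single background texture.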