With my current project I've begun porting all rendering from SDL to OpenGL. This means that I have to convert an SDL_Surface (a loaded image) to an OpenGL texture.
When I do this, I understand that it's important for the dimensions to be a power of two. But when I render text with a font, I can't always get power-of-two dimensions. A tutorial describing how to use SDL_ttf with OpenGL made sure to transform the image to the right dimensions when it wasn't, but this only distorts my image.
If I don't mess with the dimensions, everything works fine. Why do I need power-of-two dimensions? And if I really need them, how do I apply them without distorting my image?
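The closest I've come up with is padding the surface into a larger power-of-two surface and then adjusting the texture coordinates, roughly like the sketch below (assuming SDL 1.2 and a 32-bit RGBA surface; next_pow2 and pad_to_pow2 are just names I made up), but I'm not sure this is the right approach:

    #include <SDL.h>

    /* Round up to the next power of two. */
    static int next_pow2(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    /* Copy src into the top-left corner of a power-of-two RGBA surface
       instead of stretching it, so nothing gets distorted. u_max/v_max
       are the texture coordinates of the surface's right/bottom edge,
       to be used in place of 1.0 when drawing the quad. */
    SDL_Surface* pad_to_pow2(SDL_Surface* src, float* u_max, float* v_max) {
        int w = next_pow2(src->w);
        int h = next_pow2(src->h);
        SDL_Surface* dst = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 32,
            0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000);
        SDL_SetAlpha(src, 0, 255);          /* blit the alpha channel as-is */
        SDL_BlitSurface(src, NULL, dst, NULL);
        *u_max = (float)src->w / w;
        *v_max = (float)src->h / h;
        return dst;
    }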
I am trying to create a card game. I want to have a deck of cards where the back of each card is a fixed texture but the front is dynamic, i.e. it has some text fields on it as well as a picture. I have created a box sized 3x2x0.16 to represent my card. I can get the fixed texture to load, but I cannot find any code examples on the web that show me how to load a fixed texture on one side of the box and a dynamic one on the other. Can anyone point me to some examples, please? I'm using DirectXTK mainly, but can probably fathom it out from any DirectX code too.
DirectX 11 is the version of DirectX I am using.
Any recommendations on how to do this would also be welcome.
Thanks
The easiest method for generating your cards, depending on how many there are and how large you want them, is to generate the faces at startup using render-to-texture. Effectively, draw your dynamic card faces exactly like you would draw them in the world, but use an orthographic projection matrix and a blank 2D texture object as the render target. Once you have that, cache these "dynamic" textures in an std::map and bind them when drawing a specific card.
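A rough sketch of the render-target setup in plain D3D11 (device, context, and the face dimensions are assumed to exist; DirectXTK's SpriteBatch/SpriteFont work fine for the actual drawing):

    #include <d3d11.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    /* A blank 2D texture bound both as a render target (to draw the card
       face into at startup) and as a shader resource (to sample from when
       drawing the card in the world). */
    ComPtr<ID3D11Texture2D>          faceTex;
    ComPtr<ID3D11RenderTargetView>   faceRTV;
    ComPtr<ID3D11ShaderResourceView> faceSRV;

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = faceWidth;    /* placeholder dimensions */
    desc.Height           = faceHeight;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    device->CreateTexture2D(&desc, nullptr, &faceTex);
    device->CreateRenderTargetView(faceTex.Get(), nullptr, &faceRTV);
    device->CreateShaderResourceView(faceTex.Get(), nullptr, &faceSRV);

    /* At startup: redirect rendering into the texture and draw the face. */
    context->OMSetRenderTargets(1, faceRTV.GetAddressOf(), nullptr);
    const float white[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
    context->ClearRenderTargetView(faceRTV.Get(), white);
    /* ... draw the picture and text fields here (SpriteBatch/SpriteFont) ... */
    /* Afterwards, restore the back buffer as the render target and use
       faceSRV as the material for the card's front side. */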
If your faces are relatively small, or you want to save on texture memory, you can stitch multiple card faces together into a large sheet of textures, then use some shader scaling logic to reference a subsection of the sheet for rendering a specific texture. With this, you can assemble "decks" of cards that only contain the faces in use in that particular game, allowing you to evict the others from GPU RAM.
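The "shader scaling logic" is just a remapping of the quad's local UVs into the cell of the sheet that holds the face you want, along these lines (a simple grid layout is assumed):

    /* Sub-rectangle of face number 'index' in an atlas of cols x rows cells. */
    struct UvRect { float u0, v0, u1, v1; };

    UvRect faceRect(int index, int cols, int rows) {
        const float cw = 1.0f / cols, ch = 1.0f / rows;
        const int   cx = index % cols, cy = index / cols;
        return { cx * cw, cy * ch, (cx + 1) * cw, (cy + 1) * ch };
    }

    /* In the vertex data (or a shader constant):
       uv = float2(r.u0, r.v0) + uvLocal * float2(r.u1 - r.u0, r.v1 - r.v0); */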
OK, so I've used a chessboard image to calibrate my camera. This calibrates both the intrinsic and extrinsic parameters of the scene. Now I want to draw an object so it sits on the table on which I had placed the chessboard. My question is: do I need the rotation+translation vectors of the camera to do this? And if so, how can I get them after doing the calibration? Or are these vectors taken into account in the calibrateCamera function?
Basically, after I calibrate the camera, how can I draw into the scene on top of a surface?
Thanks!
The calibrateCamera function provides you with both the extrinsic parameters (rotation+translation) and the intrinsic parameters (camera matrix K and distortion coefficients).
This allows you to project 3D points, expressed in the coordinate system of your chessboard, into an image acquired by the camera, using the projectPoints function. With this, you can draw wireframe objects directly into the image by projecting their 3D edges.
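For example, here is a rough sketch that draws a wireframe box standing on the board (K/dist and the rvec/tvec of the view come from calibrateCamera or solvePnP; the negative z assumes the usual convention where the board's z axis points into the table):

    #include <opencv2/opencv.hpp>

    /* Draw a w x h x d box (in chessboard units) standing on the board
       plane z=0, projected with the calibration results for this view. */
    void drawBox(cv::Mat& img, const cv::Mat& K, const cv::Mat& dist,
                 const cv::Mat& rvec, const cv::Mat& tvec,
                 float w, float h, float d) {
        std::vector<cv::Point3f> obj = {
            {0,0,0}, {w,0,0}, {w,h,0}, {0,h,0},       /* base on the board */
            {0,0,-d}, {w,0,-d}, {w,h,-d}, {0,h,-d}    /* top, toward camera */
        };
        std::vector<cv::Point2f> pts;
        cv::projectPoints(obj, rvec, tvec, K, dist, pts);
        const int e[12][2] = {{0,1},{1,2},{2,3},{3,0},{4,5},{5,6},{6,7},{7,4},
                              {0,4},{1,5},{2,6},{3,7}};
        for (const auto& edge : e)
            cv::line(img, pts[edge[0]], pts[edge[1]], cv::Scalar(0,255,0), 2);
    }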
If you would like to render more complex objects (e.g. including textures), you need an auxiliary rendering engine, since OpenCV does not provide such functionality. You would use a rendering engine, say OpenGL, provide the projection details to it, retrieve the object rendered from the camera viewpoint in a buffer, and overlay this buffer on top of your initial image.
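The key step is feeding the calibrated intrinsics to the renderer. Below is a sketch of one common mapping from K to an OpenGL-style projection matrix; sign conventions differ between OpenCV (y down, z forward) and OpenGL (y up, z backward), so treat this as a starting point to verify against your own setup:

    /* Build a column-major OpenGL projection matrix from the intrinsics
       fx, fy, cx, cy for an image of size w x h, with clip planes zn/zf. */
    void intrinsicsToGlProjection(double fx, double fy, double cx, double cy,
                                  int w, int h, double zn, double zf,
                                  float m[16]) {
        for (int i = 0; i < 16; ++i) m[i] = 0.0f;
        m[0]  = (float)(2.0 * fx / w);
        m[5]  = (float)(2.0 * fy / h);
        m[8]  = (float)(1.0 - 2.0 * cx / w);   /* principal point offset */
        m[9]  = (float)(2.0 * cy / h - 1.0);
        m[10] = (float)(-(zf + zn) / (zf - zn));
        m[11] = -1.0f;
        m[14] = (float)(-2.0 * zf * zn / (zf - zn));
    }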
However, notice that the result of this, which is called augmented reality, may sometimes look odd, because it does not take into account occlusions between your rendered object and the scene observed in the image. Handling occlusions appropriately would require a 3D model of the scene.
AldourDisciple's post is a great answer.
If I can add my two cents, this is a problem I have worked on, and it is the basic problem of augmented reality and, more generally, of how to integrate OpenCV and OpenGL (you're going to use OpenGL to draw the 3D objects and an OpenGL texture to show the underlying OpenCV images from the video). You can find some (IMHO) good links, documentation, tutorials, and references in my previous answer on that.
I'm trying to create a display with a complex OpenGL image and some spinboxes on top of the image. Using http://doc.qt.digia.com/qq/qq26-openglcanvas.html I'm able to have a two-layer object (inheriting from QGraphicsScene) with a simple OpenGL image as the background and the controls in the foreground.
So now I'm trying to display my real OpenGL image as the background. This image is made up of:
A quad mapped on a structure,
Some small 2D objects represented by 2D textures with an alpha channel and specific shaders, drawn on top of the quad (higher z value),
Some polylines.
With this image I get some strange behavior: the 2D textured objects are drawn with a white background. Some experiments seem to indicate that the alpha channel is disabled while this complex OpenGL image is drawn.
I tried different configurations for the QGLWidget used as the viewport of the QGraphicsView, but without success.
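For reference, this is roughly the kind of setup I have been trying (simplified; view is my QGraphicsView):

    /* QGLWidget used as the viewport, following the article linked above.
       Requesting an alpha channel is one of the configurations I tried. */
    QGLFormat fmt(QGL::AlphaChannel | QGL::SampleBuffers | QGL::DoubleBuffer);
    view->setViewport(new QGLWidget(fmt));
    view->setViewportUpdateMode(QGraphicsView::FullViewportUpdate);

    /* And in QGraphicsScene::drawBackground(), before drawing the
       textured 2D objects: */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);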
So I need help creating this OpenGL image with the correct transparency effects.
I'm new to DirectX, and I am trying to render a texture with dimensions of 300 x 570 pixels.
What is the correct way to accomplish this?
I am using Windows 8 with feature level 11_0, and I have access to a function that loads textures from *.DDS files.
However, it is my understanding that DDS textures must have power-of-two dimensions, and my background texture does not, so I cannot convert my background image (currently a .jpg) to the .dds format.
How might someone render this background texture efficiently?
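For example, would something along these lines be reasonable? (A sketch based on my reading of DirectXTK's WICTextureLoader and SpriteBatch; device and context are my existing D3D11 device and immediate context.)

    #include <memory>
    #include <d3d11.h>
    #include <DirectXMath.h>
    #include <wrl/client.h>
    #include "WICTextureLoader.h"   /* DirectXTK */
    #include "SpriteBatch.h"

    using Microsoft::WRL::ComPtr;

    /* Load the 300 x 570 .jpg directly; WIC handles .jpg/.png/.bmp, and
       feature level 11_0 hardware has no power-of-two size restriction. */
    ComPtr<ID3D11ShaderResourceView> backgroundSRV;
    HRESULT hr = DirectX::CreateWICTextureFromFile(
        device, L"background.jpg", nullptr, backgroundSRV.GetAddressOf());

    /* Draw it as a 2D sprite each frame. */
    auto sprites = std::make_unique<DirectX::SpriteBatch>(context);
    sprites->Begin();
    sprites->Draw(backgroundSRV.Get(), DirectX::XMFLOAT2(0.0f, 0.0f));
    sprites->End();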
Is it possible to load an image file into OpenGL? We are developing a realistic scene of a robot in both Linux and VC++.
What libraries and methods are available for inserting an image? Please also link to good examples and references.
The general technique is to bind your image to a texture and apply it to a quad rendered in your scene. You can use any image library to load the image (DevIL is pretty good), and you'll probably need to rescale or pad it to power-of-two dimensions.
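A minimal sketch with DevIL (it assumes ilInit() has been called once at startup and a GL context exists; error checking omitted):

    #include <GL/gl.h>
    #include <IL/il.h>   /* DevIL */

    /* Load an image file with DevIL and upload it as an OpenGL texture. */
    GLuint loadTexture(const char* path) {
        ILuint img;
        ilGenImages(1, &img);
        ilBindImage(img);
        ilLoadImage(path);
        ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                     ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
                     0, GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());

        ilDeleteImages(1, &img);
        return tex;   /* bind this and draw a textured quad */
    }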