I have a 3D model that stores the coordinates, normals, colors, etc. of each vertex. I want to render a normal image that corresponds exactly to the rendered RGB image, i.e. an image in which each pixel holds the normal of the surface visible at that pixel. What should I do? Can you provide reference code? I am a novice and don't know much about how to use OpenGL. Thank you very much.
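One common way to do this is to render the mesh a second time with a shader that writes the remapped normal instead of the lit color; because the geometry and camera are identical, the result lines up pixel for pixel with the RGB render. Below is a minimal sketch of such a shader pair (GLSL 3.30 kept as C++ string literals); the attribute names aPos/aNormal and the model/view/projection uniforms are assumptions about your vertex layout, not something your model file dictates.

// Minimal sketch: a shader pair that outputs the per-pixel surface normal
// as a color. Attribute and uniform names are placeholders - adapt them to
// the way your vertex data and matrices are actually set up.
const char* normalVS = R"(
#version 330 core
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aNormal;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
out vec3 vNormal;
void main() {
    // bring the normal into world space with the normal matrix
    vNormal = mat3(transpose(inverse(model))) * aNormal;
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
)";

const char* normalFS = R"(
#version 330 core
in vec3 vNormal;
out vec4 FragColor;
void main() {
    // remap the unit normal from [-1, 1] to [0, 1] so it can be stored as RGB
    FragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}
)";

Draw the model once with your usual color shader and once with this pair (for example into an off-screen framebuffer that you read back with glReadPixels), and the two images correspond pixel for pixel.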
I need help with rendering a .vox model in OpenGL.
The .VOX file format is described here.
Here is an example VOX file reader.
And here is where I come across the problem: how would I go about rendering a .vox model in OpenGL? I know how to render standard .obj models with textures using the Phong reflection model, but how do I handle voxel data? What kind of data should I pass to the shaders? Should I parse the data somehow to get the index of each individual voxel to render? How should I create vertices based on the voxel data (should I even do that)? Should I pass all the chunks, or is there a simple way to filter out those that won't be visible?
I tried searching for information on this topic, but came up empty. What I am trying to accomplish is something like MagicaVoxel Viewer, but much simpler, without all those customizable options and with only a single light source.
I'm not trying to look for a ready solution, but if anyone could even point me in the right direction, I would be very grateful.
After some more searching I decided to render the cubes in two ways:
1) Based on the voxel data, I will generate vertices and feed them to the pipeline (a rough sketch of this is below).
2) Using a geometry shader, I'll emit vertices based on the indices of the voxels to render, which I feed to the pipeline. I'll store the entire model as a 3D texture.
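For option 1, a rough CPU-side sketch is shown here, assuming the .vox chunks have been unpacked into a dense grid stored in a flat std::vector (the Voxel and Face structs and the index layout are illustrative, not part of the format). A face is only emitted when its neighbouring voxel is empty, so interior faces never reach the GPU.

// Rough sketch of option 1: walk the dense grid and record one Face per
// exposed cube side. Each Face is later expanded into the four corner
// vertices (two triangles) of that side before filling the VBO.
#include <cstdint>
#include <vector>

struct Voxel { bool filled; uint8_t colorIndex; };
struct Face  { int x, y, z, dir; uint8_t colorIndex; };  // dir = 0..5, one cube side

static bool isFilled(const std::vector<Voxel>& grid,
                     int x, int y, int z, int sx, int sy, int sz) {
    if (x < 0 || y < 0 || z < 0 || x >= sx || y >= sy || z >= sz) return false;
    return grid[x + sx * (y + sy * z)].filled;
}

std::vector<Face> buildFaces(const std::vector<Voxel>& grid,
                             int sx, int sy, int sz) {
    static const int dirs[6][3] = {{ 1,0,0},{-1,0,0},{0, 1,0},
                                   {0,-1,0},{0,0, 1},{0,0,-1}};
    std::vector<Face> faces;
    for (int z = 0; z < sz; ++z)
        for (int y = 0; y < sy; ++y)
            for (int x = 0; x < sx; ++x) {
                const Voxel& v = grid[x + sx * (y + sy * z)];
                if (!v.filled) continue;
                for (int d = 0; d < 6; ++d)
                    if (!isFilled(grid, x + dirs[d][0], y + dirs[d][1],
                                  z + dirs[d][2], sx, sy, sz))
                        faces.push_back({x, y, z, d, v.colorIndex});
            }
    return faces;
}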
I'm trying to render a texture with OpenGL and GLSL. The texture is supposed to be rendered on a floating cube.
texture: http://imgur.com/Actqtx1
result: http://imgur.com/MXIOEvS
The cube is a strange mix of blue and black. Even when I try other textures, the result is the same. In the screenshot above, I have rendered a plane using "fract(worldspace)" to ensure that the shaders are working.
It is apparent that "color = texture(myTextureSampler, UV).rgb;" is producing the wrong color, but I do not know why. The texture coordinates and texture data appear to be read and buffered correctly.
Has anyone seen this effect before? Does anyone know where my problem may lie? I can provide code snippets upon request.
You're looking in the wrong direction. It's not your shader that's wrong (well maybe it is). Your problems start much earlier in the texture loading process.
You see how your texture seems to be strangely skewed? That usually happens if the alignment and pixel row strides have not been set properly before calling glTexImage; see the glPixelStorei(GL_UNPACK_…) parameters.
The other problem I see is that whatever you loaded into OpenGL bears no resemblance to your original picture whatsoever. It looks like bit noise, which tells me that you're probably feeding OpenGL compressed data, or maybe even an image file as-is.
OpenGL does not know how to deal with image file formats. There are a few special compression formats it knows, but these are compression formats as used by GPUs to reduce memory bandwidth requirements, not something like PNG or JPEG.
If you don't have data in one of those special texture compression formats at hand, OpenGL expects a raw pixel array.
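In practice that means decoding the image file to raw pixels first and only then uploading them. A sketch, using stb_image purely as an example decoder (any library that gives you a tightly packed RGB array will do; include your GL loader of choice before this):

// Sketch: decode the file to a raw pixel array, relax the unpack alignment,
// then hand the raw pixels to glTexImage2D.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

GLuint loadTexture(const char* path) {
    int w, h, channels;
    // force 3 channels (RGB); stb_image returns tightly packed rows
    unsigned char* pixels = stbi_load(path, &w, &h, &channels, 3);
    if (!pixels) return 0;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    // rows of a 3-channel image are generally not 4-byte aligned,
    // so relax the default unpack alignment before the upload
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    stbi_image_free(pixels);
    return tex;
}

Setting GL_TEXTURE_MIN_FILTER matters here as well: the default filter expects mipmaps, and a texture without them is incomplete and samples as black.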
OK, so I've used a chessboard image to calibrate my camera. This calibrates both the intrinsic and extrinsic parameters of the scene. Now I want to draw an object so it sits on the table on which I had placed the chessboard. My question is: do I need the rotation + translation vectors of the camera to do this? And if so, how can I get these after doing the calibration? Or are these vectors already taken into account in the calibrateCamera function?
Basically after I calibrate the camera how can I now draw into the scene on top of a surface?
Thanks!
The calibrateCamera function provides you with both the extrinsic parameters (rotation + translation) and the intrinsic parameters (camera matrix K and distortion coefficients).
This allows you to project 3D points, expressed in the coordinate system of your chessboard, into an image acquired by the camera, for example using the projectPoints function (link). With this approach you can draw wireframe objects directly into the image by projecting their 3D edges.
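As a minimal sketch of that wireframe idea in C++ (the box size and the variable names are placeholders; rvec and tvec are the pose of the chessboard for this particular view, taken from calibrateCamera for one of the calibration images or from solvePnP for a new one):

// Sketch: project the 3D corners of a box resting on the chessboard plane
// into the image and draw its edges.
#include <opencv2/opencv.hpp>
#include <vector>

void drawBox(cv::Mat& image, const cv::Mat& rvec, const cv::Mat& tvec,
             const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs) {
    // 3D points expressed in the chessboard coordinate system (units = squares);
    // negative Z is "up" with the usual chessboard pose convention
    std::vector<cv::Point3f> objectPoints = {
        {0, 0, 0}, {2, 0, 0}, {2, 2, 0}, {0, 2, 0},      // base on the board
        {0, 0, -2}, {2, 0, -2}, {2, 2, -2}, {0, 2, -2}   // top of the box
    };

    std::vector<cv::Point2f> imagePoints;
    cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePoints);

    // draw the 12 edges of the box
    const int edges[12][2] = {{0,1},{1,2},{2,3},{3,0},
                              {4,5},{5,6},{6,7},{7,4},
                              {0,4},{1,5},{2,6},{3,7}};
    for (const auto& e : edges)
        cv::line(image, cv::Point(imagePoints[e[0]]), cv::Point(imagePoints[e[1]]),
                 cv::Scalar(0, 255, 0), 2);
}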
If you would like to render more complex objects (e.g. including textures), you will need an auxiliary rendering engine, since OpenCV does not provide such functionality: use a rendering engine such as OpenGL, provide the projection details to it, retrieve the object rendered from the camera viewpoint into a buffer, and overlay this buffer on top of your initial image.
However, notice that the result of this, which is called augmented-reality, may look weird sometimes, because it does not take into account the occlusions between your rendered object and the scene observed in the image... Handling occlusions appropriately would require a 3D model of the scene.
AldourDisciple's post is a great answer.
If I can add my two cents: this is a problem I have worked on, and it is the basic problem of augmented reality, and more generally of how to integrate OpenCV and OpenGL (because you are going to use OpenGL to draw the 3D content and an OpenGL texture to show the underlying OpenCV images from the video). You can find some (imho) good links, documentation, tutorials and references in my previous answer on that.
I need some help with selecting a surface area on a 3D model rendered in OpenGL by picking points with the mouse. I know how to get a point in world coordinates, but I can't find a way to select an area. Later I need to remesh that selected area and map an image over it, which I already know how to do.
Well, OpenGL by itself can't help you there. OpenGL is a drawing API: you draw things, but once the drawing commands have been executed, all that's left are pixels in a framebuffer, and OpenGL retains no recollection of the geometry whatsoever.
You can use OpenGL to implement image-based area selection algorithms, for example by drawing each face with a unique index color into an off-screen framebuffer. Then, by looking at which values appear in the selected region, you know which faces are present in that area.
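A sketch of that index-color pass (the encoding and helper names are mine; creating the off-screen framebuffer and drawing each face with its index color are assumed to happen elsewhere). Reserve index 0 for the cleared background so an empty pixel is not mistaken for a face:

// Sketch: pack a face index into an RGB color, render every face with its
// own color into an off-screen framebuffer, then read back the selected
// rectangle and decode which faces appear in it.
// Include your GL loader (GLEW/GLAD) before this.
#include <cstdint>
#include <set>
#include <vector>

// pack a face index into 24 bits of color (the fragment shader just outputs it)
void indexToColor(uint32_t index, float rgb[3]) {
    rgb[0] = ((index      ) & 0xFF) / 255.0f;
    rgb[1] = ((index >>  8) & 0xFF) / 255.0f;
    rgb[2] = ((index >> 16) & 0xFF) / 255.0f;
}

// with the picking framebuffer bound, read the user-selected rectangle back
// and collect the indices of all faces visible inside it
std::set<uint32_t> facesInRect(int x, int y, int w, int h) {
    std::vector<unsigned char> pixels(size_t(w) * h * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // rows are not 4-byte aligned
    glReadPixels(x, y, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    std::set<uint32_t> hit;
    for (size_t i = 0; i < pixels.size(); i += 3)
        hit.insert(uint32_t(pixels[i]) |
                   (uint32_t(pixels[i + 1]) << 8) |
                   (uint32_t(pixels[i + 2]) << 16));
    hit.erase(0);                          // 0 is the cleared background, not a face
    return hit;
}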
"Later I need to remesh"
This is called topology modification and is completely outside the scope of OpenGL.
"that selected area and map an image over it which I know"
You can use an image-based approach for this as well, but first you must decide how you want to map images to faces. If you want to unwrap the mesh, OpenGL is of no help. However, if you want the user to be able to "directly draw" onto the mesh, this can be done by drawing texture coordinates into another off-screen framebuffer and thereby reverse-mapping screen coordinates to texture coordinates.
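That second off-screen pass can be as small as a fragment shader that writes the interpolated texture coordinate instead of a color; a sketch (GLSL kept as a C++ string, assuming a floating-point render target such as GL_RG32F so the coordinates are not quantized to 8 bits):

// Sketch: fragment shader for the off-screen pass that stores the mesh's
// texture coordinate at every covered pixel. A single glReadPixels at the
// mouse position then tells you which texel is under the cursor.
const char* uvPassFS = R"(
#version 330 core
in vec2 vUV;            // interpolated texture coordinate from the vertex shader
out vec4 FragColor;
void main() {
    // store UV directly; alpha = 1 marks "mesh present" vs. cleared background
    FragColor = vec4(vUV, 0.0, 1.0);
}
)";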
I have been brought in on a project where I need to render a 3D volume from a series of images of the volume. The images have been created by a couple of techniques such that they are vertical slices of the object in question.
The data set is similar to this question, but the asker is looking for a Matlab solution.
The goal is to have this drawing happen in something near real time (>1 Hz update rate), and from my research OpenGL seems to be the fastest option for drawing. Is there a built-in function in OpenGL to render the volume, other than the following pseudocode algorithm?
foreach (Image in Folder)
    foreach (Pixel in Image)
        pointColour(pixelColour)
        pointLocation(Pixel.X, Pixel.Y, Image.Z)
        drawPoint()
I am not concerned about interpolating between images; the current spacing is small enough that there is no need for it.
I'm afraid that if you're thinking about volume rendering, you will first need to understand the volume rendering integral, because the resulting color of a pixel on the screen is a function of all the voxels that line up with it for the current viewing angle.
There are two methods to render a volume in real-time using conventional graphics hardware.
1) Render the volume as a set of 2D view-aligned slices that intersect the 3D texture (proxy geometry). Explanation here.
2) Use a raycaster that runs on programmable graphics hardware; tutorial here (a sketch of the volume texture upload and a minimal raycasting fragment shader follows below).
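To give an idea of what this involves, here is a sketch that packs the image stack into a single 3D texture plus a minimal single-pass raycasting fragment shader. The 8-bit single-channel slice format, the fixed step count and the vEntry/vExit inputs (which your own cube-drawing pass must supply) are all assumptions; a real renderer would add a transfer function and proper ray setup.

// Sketch: upload the slice stack as one 3D texture. sliceData[z] is a
// hypothetical pointer to the decoded pixels of slice z; all slices are
// assumed to share the same width and height.
GLuint uploadVolume(const std::vector<unsigned char*>& sliceData,
                    int width, int height) {
    int depth = int(sliceData.size());
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    // allocate the full volume, then copy one slice per Z layer
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, width, height, depth, 0,
                 GL_RED, GL_UNSIGNED_BYTE, nullptr);
    for (int z = 0; z < depth; ++z)
        glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, z, width, height, 1,
                        GL_RED, GL_UNSIGNED_BYTE, sliceData[z]);

    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    return tex;
}

// Minimal raycasting fragment shader: march from the ray's entry point to its
// exit point in [0,1]^3 texture space and composite front to back.
const char* raycastFS = R"(
#version 330 core
in vec3 vEntry;
in vec3 vExit;
uniform sampler3D volume;
out vec4 FragColor;
void main() {
    vec3 dir   = vExit - vEntry;
    int  steps = int(length(dir) * 256.0) + 1;
    vec3 step  = dir / float(steps);
    vec3 pos   = vEntry;
    vec4 accum = vec4(0.0);
    for (int i = 0; i < steps && accum.a < 0.98; ++i) {
        float density = texture(volume, pos).r;
        // trivial grayscale transfer function; replace with your own
        vec4 sampleColor = vec4(vec3(density), density * 0.05);
        accum.rgb += (1.0 - accum.a) * sampleColor.a * sampleColor.rgb;
        accum.a   += (1.0 - accum.a) * sampleColor.a;
        pos += step;
    }
    FragColor = accum;
}
)";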
This is not an easy problem to solve, but depending on what you need to do, things might be a little simpler. For example: do you care about having an interactive transfer function? Do you want perspective views, or will an orthographic projection suffice? Are you rendering iso-surfaces? Are you using this only for MPR-type views?