I have an issue with applying noise over the surface of a non-trivial mesh (well, any mesh) in OpenGL without texture coordinates. I basically want a noise texture applied over the surface, but since I don't have texture coordinates I can't just apply a noise texture. Generating texture coordinates in the vertex shader works to an extent; however, whether I use cube, sphere, or object-planar coordinates, there is always some texture smearing.
Smearing with cube map coordinates across surface changes: http://img811.imageshack.us/img811/3923/0ouu.png
Smearing with object planar (xy) coordinates along the z plane: http://img195.imageshack.us/img195/987/c3cz.png
I've done random noise generation in the fragment shader; however, as this changes every frame, it is not what I need (and it is not computationally cheap either).
I just need a static uniform distribution of noise across the mesh surface.
Anybody got any ideas on how this could be done?
You could acquire the 3D model-space coordinates of each pixel in the fragment shader and use some 3D noise based on those values.
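For example, a hash-based 3D value noise driven by the interpolated model-space position gives a static pattern that never changes between frames. A minimal fragment-shader sketch (assuming a varying named modelPos passed through from the vertex shader; the hash constants are the usual arbitrary ones):

varying vec3 modelPos;              // untransformed vertex position, interpolated

// Cheap pseudo-random value in [0,1) from a 3D point
float hash(vec3 p)
{
    return fract(sin(dot(p, vec3(12.9898, 78.233, 37.719))) * 43758.5453);
}

// Trilinearly interpolated hash -> smooth, static 3D value noise
float valueNoise(vec3 p)
{
    vec3 i = floor(p);
    vec3 f = fract(p);
    f = f * f * (3.0 - 2.0 * f);    // smoothstep fade
    float n000 = hash(i + vec3(0.0, 0.0, 0.0));
    float n100 = hash(i + vec3(1.0, 0.0, 0.0));
    float n010 = hash(i + vec3(0.0, 1.0, 0.0));
    float n110 = hash(i + vec3(1.0, 1.0, 0.0));
    float n001 = hash(i + vec3(0.0, 0.0, 1.0));
    float n101 = hash(i + vec3(1.0, 0.0, 1.0));
    float n011 = hash(i + vec3(0.0, 1.0, 1.0));
    float n111 = hash(i + vec3(1.0, 1.0, 1.0));
    return mix(mix(mix(n000, n100, f.x), mix(n010, n110, f.x), f.y),
               mix(mix(n001, n101, f.x), mix(n011, n111, f.x), f.y), f.z);
}

void main()
{
    float n = valueNoise(modelPos * 8.0);   // the multiplier controls the noise frequency
    gl_FragColor = vec4(vec3(n), 1.0);
}

Because the lookup is based on the model-space position rather than generated texture coordinates, there is nothing to smear across surface changes.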
Related
I have a texture of the earth which I want to map onto a sphere.
As it is a unit sphere and the model itself has no texture coordinates, the easiest thing I could think of is to just calculate spherical coordinates for each vertex and use them as texture coordinates.
textureCoordinatesVarying = vec2(atan(modelPositionVarying.y, modelPositionVarying.x)/(2*M_PI)+.5, acos(modelPositionVarying.z/sqrt(length(modelPositionVarying.xyz)))/M_PI);
When doing this in the fragment shader, this works fine, as I calculate the texture coordinates from the (interpolated) vertex positions.
But when I do this in the vertex shader, which is also what I would do if the model itself had texture coordinates, I get the result shown in the image below. The vertices are shown as points, and a texture coordinate (u) lower than 0.5 is red while all others are blue.
So it looks like the texture coordinate (u) of two adjacent red/blue vertices has values of (almost) 1.0 and 0.0. The varying is then smoothly interpolated and therefore yields values somewhere between 0.0 and 1.0. This of course is wrong, because the value should be either 1.0 or 0.0, but nothing in between.
Is there a way to work with spherical coordinates as texture coordinates without getting those effects shown above? (if possible, without changing the model)
This is a common problem. A seam between two texture-coordinate topologies, where you want the texture coordinate to wrap seamlessly from 1.0 to 0.0, requires the mesh to handle it properly. To do this, the mesh must duplicate every vertex along the seam. One of the duplicates will have a 0.0 texture coordinate and will be connected to the vertices coming from the right (in your example). The other will have a 1.0 texture coordinate and will be connected to the vertices coming from the left (in your example).
This is a mesh problem, and it is best to solve it in the mesh itself. The same position needs two different texture coordinates, so you must duplicate the position in question.
Alternatively, you could have the fragment shader generate the texture coordinate from an interpolated vertex normal. Of course, this is more computationally expensive, as it requires doing a conversion from a direction to a pair of angles (and then to the [0, 1] texture coordinate range).
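A rough sketch of that per-fragment variant, assuming a varying named modelNormal (for a unit sphere the model-space position works just as well) and a sampler named earthTexture:

varying vec3 modelNormal;
uniform sampler2D earthTexture;

const float PI = 3.14159265358979;

void main()
{
    vec3 n = normalize(modelNormal);
    vec2 uv = vec2(atan(n.y, n.x) / (2.0 * PI) + 0.5,   // longitude -> u
                   acos(n.z) / PI);                      // latitude  -> v
    gl_FragColor = texture2D(earthTexture, uv);
}

Note that with mipmapping the screen-space derivative of u still jumps along the seam, so a thin line of the wrong mip level can remain there.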
I've got a 2D texture on a 3D sphere and I want to know how to transfer a 2D coordinate on the texture into a 3D coordinate. I know it has to do with the clipping of the texture: I'm using the automatic clipping function of OpenGL to put the texture on the sphere.
Edit:
To clarify the problem:
I have a 2D plane, which is an image containing borders drawn in red. Now I put objects on this plane that have a collision radius and move around wildly. Whenever the objects collide with the red border they bounce back.
Now I take this 2D plane and wrap it around a 3D sphere. At the positions of the circles I want to put 3D models that move on the sphere. The problem now is getting from the "simple" 2D coordinates on the plane to the more complicated 3D coordinates on the sphere, so that the 3D models are positioned correctly.
My first approach would be to map 2D coordinates to spherical coordinates which can easily be transferred into 3D coordinates but how would I do this?
You don't "convert" the 2D coordinate to a 3D coordinate. The 2D coordinates you have are UV coordinates (from 0 to 1) and they represent a position in the texture space. What you do is to map these UV coordinates to the vertices.
You can read more about UV mapping here.
In OpenGL, it depends on which version you are using: either you make glTexCoord calls before the glVertex calls (for old versions of OpenGL), or you put the coordinates in a VBO to be processed at the fragment shader in newer versions of OpenGL.
If you are planning to use the gluSphere() function, you don't need to worry about calculating UV texture coordinates, since OpenGL does that for you with the right functions.
Here you can check the gluSphere() documentation
Here is some example code
If you are planning to render your own sphere, check this question
So I'm supposed to texture map a specific model I've loaded into a scene (with a framebuffer and a planar pinhole camera); however, I'm not allowed to use OpenGL and I have no idea how to do it otherwise (we do use glDrawPixels for other functionality, but that's the only function we can use).
Is anyone here able enough to give me a run-through on how to texture map without OpenGL functionality?
I'm supposed to use these slides: https://www.cs.purdue.edu/cgvlab/courses/334/Fall_2014/Lectures/TMapping.pdf
But they make very little sense to me.
What I've gathered so far is the following:
You iterate over a model and assign each triangle "texture coordinates" (though I'm not sure what those are), and then use "model space interpolation" (again, I don't understand what that is) to apply the texture with the right perspective.
I currently have my program doing the following:
TL;DR:
1. What is model space interpolation/how do I do it?
2. What explicitly are texture coordinates?
3. How, on a high level (in layman's terms), do I texture map a model without using OpenGL?
OK, let's start by making sure we're both on the same page about how the color interpolation works. Lines 125 through 143 set up three vectors redABC, greenABC and blueABC that are used to interpolate the colors across the triangle. They work one color component at a time, and each of the three vectors helps interpolate one color component.
By convention, s,t coordinates are in source texture space. As provided in the mesh data, they specify the position within the texture of that particular vertex of the triangle. The crucial thing to understand is that s,t coordinates need to be interpolated across the triangle just like colors.
So, what you want to do is set up two more ABC vectors, sABC and tABC, exactly duplicating the logic used to set up redABC, but instead of using the color components of each vertex, you use the s,t coordinates of each vertex. Then for each pixel, instead of computing ssiRed etc. as unsigned int values, you compute ssis and ssit as floats; they should be in the range 0.0f through 1.0f, assuming your source s,t values are well behaved.
Now that you have an interpolated s,t coordinate, multiply ssis by the texel width of the texture, and ssit by the texel height, and use those coordinates to fetch the texel. Then just put that on the screen.
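In other words, it is the same screen-space plane-equation solve the color code already does, done twice more. A sketch in GLSL-style notation (your renderer is not OpenGL, so treat this as pseudocode; p0/p1/p2, s0..t2 and texW/texH are assumed names for the projected vertex positions, per-vertex texture coordinates and texture size):

// Solve  A*pi.x + B*pi.y + C = si  (i = 0, 1, 2)  for (A, B, C), just as for redABC
mat3 M = mat3(p0.x, p1.x, p2.x,     // column 0: the three x values
              p0.y, p1.y, p2.y,     // column 1: the three y values
              1.0,  1.0,  1.0);     // column 2: ones
vec3 sABC = inverse(M) * vec3(s0, s1, s2);
vec3 tABC = inverse(M) * vec3(t0, t1, t2);

// Per pixel (x, y) inside the triangle:
float s = dot(sABC, vec3(float(x), float(y), 1.0));
float t = dot(tABC, vec3(float(x), float(y), 1.0));
int u = int(s * float(texW - 1));   // texel column
int v = int(t * float(texH - 1));   // texel row
// color = texture[v * texW + u];

This is plain screen-space interpolation, the same (not perspective-correct) scheme the colors already use.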
Since you are not using OpenGL I assume you wrote your own software renderer to render that teapot?
A texture is simply an image. A texture coordinate is a 2D position in the texture. So (0,0) is bottom-left and (1,1) is top-right. For every vertex of your 3D model you should store a 2D position (u,v) in the texture. That means that at that vertex, you should use the colour the texture has at that point.
To know the UV texture coordinate of a pixel in between vertices you need to interpolate the texture coordinates of the vertices around it. Then you can use that UV to look up the colour in the texture.
I am working on voxelisation using the rendering pipeline and now I successfully voxelise the scene using vertex+geometry+fragment shaders. Now my voxels are stored in a 3D texture which has size, for example, 128x128x128.
My original model of the scene is centered at (0,0,0) and extends along both the positive and negative axes. The texture, however, is centered at (63,63,63) in texture coordinates.
I implemented a simple ray marching for visualisation, but it doesn't take camera movement into account (I can only render from very fixed positions, because my rays have to be generated taking the different coordinates of the 3D texture into account).
My question is: how can I map my rays so that they are generated at point Po with direction D in the coordinates of my 3D model, but intersect the voxels at the corresponding positions in texture coordinates, so that every movement of the camera in the 3D world is remapped into the voxel coordinates?
Right now I generate the rays in this way:
create a quad in front of the camera at position (63,63,-20)
cast rays in direction towards (63,63,3)
I think you should store your entire view transform matrix in your shader uniform params. Then for each shader execution you can use its screen coords and view transform to compute the view ray direction for your particular pixel.
Having the ray direction and the camera position, you just use them the same way as you do currently.
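A rough sketch of that, with the model-to-texture remapping folded in (all names here are assumed: invViewProj is the inverse of projection * view, cameraPos is the camera position in model/world space, and volumeMin/volumeMax are the bounds of the voxelised region):

uniform mat4 invViewProj;
uniform vec3 cameraPos;
uniform vec3 volumeMin;        // world-space corner mapped to texture coordinate (0,0,0)
uniform vec3 volumeMax;        // world-space corner mapped to texture coordinate (1,1,1)
varying vec2 screenUV;         // full-screen quad coordinate in [0,1]

vec3 worldToTexture(vec3 p)
{
    return (p - volumeMin) / (volumeMax - volumeMin);   // [0,1]^3; multiply by 128 for texel indices
}

void main()
{
    // Unproject the pixel onto the far plane to get a point on its view ray
    vec4 farPoint = invViewProj * vec4(screenUV * 2.0 - 1.0, 1.0, 1.0);
    farPoint.xyz /= farPoint.w;

    vec3 rayOriginTex = worldToTexture(cameraPos);
    vec3 rayDirTex    = normalize(worldToTexture(farPoint.xyz) - rayOriginTex);

    // ... march through the 3D texture from rayOriginTex along rayDirTex ...
    gl_FragColor = vec4(rayDirTex * 0.5 + 0.5, 1.0);    // placeholder output
}

That way the quad and the rays no longer depend on hard-coded positions like (63,63,-20); the camera can move freely and everything is remapped into voxel space.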
There's also another way to do this that you can try:
Let's say you have a cube (0,0,0)->(1,1,1) and for each corner you assign a color based on its position, like (1,0,0) is red, etc.
Now for every frame you draw your cube's front faces into one texture, and its back faces into a second texture.
In the final rendering you can use both textures to get enter and exit 3D vectors, already in the texture space, which makes your final shader much simpler.
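A minimal sketch of that final pass (all names assumed; frontFaces and backFaces hold the position-as-color cube renders, and the compositing is a basic front-to-back blend):

uniform sampler2D frontFaces;
uniform sampler2D backFaces;
uniform sampler3D voxels;
varying vec2 screenUV;

void main()
{
    vec3 enter = texture2D(frontFaces, screenUV).xyz;   // ray entry point, already in [0,1]^3
    vec3 exit  = texture2D(backFaces,  screenUV).xyz;   // ray exit point, already in [0,1]^3
    vec3 dir   = exit - enter;

    const int STEPS = 128;
    vec4 accum = vec4(0.0);
    for (int i = 0; i < STEPS; ++i)
    {
        vec3 p   = enter + dir * (float(i) / float(STEPS));
        vec4 src = texture3D(voxels, p);
        accum.rgb += (1.0 - accum.a) * src.a * src.rgb;  // front-to-back compositing
        accum.a   += (1.0 - accum.a) * src.a;
    }
    gl_FragColor = accum;
}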
You can read better descriptions here:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
http://web.cse.ohio-state.edu/~tong/vr/
I have a 3D terrain (a voxel mesh, my "arbitrary mesh"). I know how to "splat" the texture down from above the mesh, but on vertical or steep slopes it smears.
I have access to the normals and positions of each vertex. How would I generate UVs (without using a shader, so no true tri-planar colour blending) so that the texture is not smeared on steep slopes and meets up nicely with itself (no sharp seams)?
Without a shader, you are a bit stuck. Tri-planar mapping works by using three planar projections for the UVs (one for each world plane: XY, YZ, and XZ) and then blending the three layers, using the normal's components raised to some power as the blend coefficients.
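Even though you asked for a shader-free solution, for reference this is roughly what that blend looks like in a fragment shader (all names are made up; the exponent is just a tuning value for how sharp the transition is):

uniform sampler2D terrainTex;
varying vec3 worldPos;
varying vec3 worldNormal;

void main()
{
    vec3 n = normalize(worldNormal);
    vec3 w = pow(abs(n), vec3(4.0));    // weight each projection by the matching normal component
    w /= (w.x + w.y + w.z);             // normalise so the weights sum to 1

    vec4 cx = texture2D(terrainTex, worldPos.yz);   // projection along X
    vec4 cy = texture2D(terrainTex, worldPos.xz);   // projection along Y
    vec4 cz = texture2D(terrainTex, worldPos.xy);   // projection along Z

    gl_FragColor = cx * w.x + cy * w.y + cz * w.z;
}

Without shaders, the closest you can get is to pick one of the three projections per triangle based on its dominant normal axis when generating the UVs, but then you get hard seams instead of a blend.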
What options do you have for rendering your terrain? Are you allowed to edit the geometry? Can you do multi-pass rendering with alpha blending?
Everything is a shader these days; why are they inaccessible?