I have a texture problem with the cubemap I'm rendering and can't seem to figure it out. I've generated a cube map with DirectX's texture tools and then loaded it using
D3DX11CreateShaderResourceViewFromFile(device, L"cubemap.dds", 0, 0, &fullcubemap, 0);
The cubemap texture is low quality and looks really stretched and distorted. I can definitely tell that the images used for the cubemap line up correctly, but the result doesn't look good at all at the moment.
I'm not sure why this is happening. Is it because my textures are too large/small, or is it something else? If it's due to the size of the textures, what is a recommended texture size? I am using a sphere for the cubemap, not a cube.
Edit:
Shader:
cbuffer SkyboxConstantBuffer {
float4x4 world;
float4x4 view;
float4x4 projection;
};
TextureCube gCubeMap;
SamplerState samTriLinearSam {
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
struct VertexIn {
float4 position : POSITION;
};
struct VertexOut {
float4 position : SV_POSITION;
float4 spherePosition : POSITION;
};
VertexOut VS(VertexIn vin) {
VertexOut vout = (VertexOut)0;
vin.position.w = 1.0f;
vout.position = mul(vin.position, world);
vout.position = mul(vout.position, view);
vout.position = mul(vout.position, projection);
vout.spherePosition = vin.position;
return vout;
}
float4 PS(VertexOut pin) : SV_Target {
return gCubeMap.Sample(samTriLinearSam, pin.spherePosition);//float4(1.0, 0.5, 0.5, 1.0);
}
RasterizerState NoCull {
CullMode = None;
};
DepthStencilState LessEqualDSS {
DepthFunc = LESS_EQUAL;
};
technique11 SkyTech {
pass p0 {
SetVertexShader(CompileShader(vs_4_0, VS()));
SetGeometryShader(NULL);
SetPixelShader(CompileShader(ps_4_0, PS()));
SetRasterizerState(NoCull);
SetDepthStencilState(LessEqualDSS, 0);
}
}
Draw:
immediateContext->OMSetRenderTargets(1, &renderTarget, nullptr);
XMMATRIX sworld, sview, sprojection;
SkyboxConstantBuffer scb;
sview = XMLoadFloat4x4(&_view);
sprojection = XMLoadFloat4x4(&_projection);
sworld = XMLoadFloat4x4(&_world);
scb.world = sworld;
scb.view = sview;
scb.projection = sprojection;
immediateContext->IASetIndexBuffer(cubeMapSphere->getIndexBuffer(), DXGI_FORMAT_R32_UINT, 0);
ID3D11Buffer* vertexBuffer = cubeMapSphere->getVertexBuffer();
//ID3DX11EffectShaderResourceVariable * cMap;
////cMap = skyboxShader->GetVariableByName("gCubeMap")->AsShaderResource();
immediateContext->PSSetShaderResources(0, 1, &fullcubemap);//textures
//cMap->SetResource(fullcubemap);
immediateContext->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
immediateContext->VSSetShader(skyboxVertexShader, nullptr, 0);
immediateContext->VSSetConstantBuffers(0, 1, &skyboxConstantBuffer);
immediateContext->PSSetConstantBuffers(0, 1, &skyboxConstantBuffer);
immediateContext->PSSetShader(skyboxPixelShader, nullptr, 0);
immediateContext->UpdateSubresource(skyboxConstantBuffer, 0, nullptr, &scb, 0, 0);
immediateContext->DrawIndexed(cubeMapSphere->getIndexBufferSize(), 0, 0);
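(A side note, not from the original post: state blocks such as the SamplerState declared in the .fx file above are only applied through the Effects framework; with the raw VSSetShader/PSSetShader path used in this draw code, an equivalent sampler has to be created and bound from C++. A rough sketch, where skySampler is a hypothetical name:)
D3D11_SAMPLER_DESC sampDesc = {};
sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR; // trilinear, matching the .fx sampler
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MaxLOD = D3D11_FLOAT32_MAX;
ID3D11SamplerState* skySampler = nullptr; // hypothetical variable
device->CreateSamplerState(&sampDesc, &skySampler);
immediateContext->PSSetSamplers(0, 1, &skySampler); // slot s0, where samTriLinearSam ends up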
Initially I was planning to use this snippet to update the TextureCube variable in the shader
ID3DX11EffectShaderResourceVariable * cMap;
cMap = skyboxShader->GetVariableByName("gCubeMap")->AsShaderResource();
cMap->SetResource(fullcubemap);
But that seems to have no effect, and in fact, without the following line, the sphere I'm using for the cubemap gets textured with a texture used by another object in the scene, so perhaps there's something going on here? I'm not sure what, though.
immediateContext->PSSetShaderResources(0, 1, &fullcubemap);//textures
Edit: Probably not the above; I realised that if this wasn't updated, the old texture would still be applied, as it's never cleared after each draw.
Edit: Tried the cubemap with both a sphere and a cube, still the same texture issue.
Edit: Tried loading the shader resource view differently
D3DX11_IMAGE_LOAD_INFO loadSMInfo;
loadSMInfo.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
ID3D11Texture2D* SMTexture = 0;
hr = D3DX11CreateTextureFromFile(device, L"cubemap.dds",
&loadSMInfo, 0, (ID3D11Resource**)&SMTexture, 0);
D3D11_TEXTURE2D_DESC SMTextureDesc;
SMTexture->GetDesc(&SMTextureDesc);
D3D11_SHADER_RESOURCE_VIEW_DESC SMViewDesc;
SMViewDesc.Format = SMTextureDesc.Format;
SMViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE;
SMViewDesc.TextureCube.MipLevels = SMTextureDesc.MipLevels;
SMViewDesc.TextureCube.MostDetailedMip = 0;
hr = device->CreateShaderResourceView(SMTexture, &SMViewDesc, &fullcubemap);
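(As an aside, not from the original post: since SMTextureDesc was already filled in by GetDesc above, it can also be used to double-check that the DDS really came in as a six-face cube map and to see the per-face resolution. A rough sketch, purely as a debugging aid:)
// Assumes <cassert> is included; SMTextureDesc.Width / Height give the per-face resolution.
bool isCubeMap = (SMTextureDesc.MiscFlags & D3D11_RESOURCE_MISC_TEXTURECUBE) != 0 && SMTextureDesc.ArraySize == 6;
assert(isCubeMap);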
Still produces the same output, any ideas?
Edit: Tried increasing the zfar distance; the texture remains exactly the same no matter what value I use.
Example with second texture with increased view distance.
This texture is used on another object in my scene and comes out fine.
Edit: I have been trying to mess with the scaling of the texture/object
To achieve this I used
vin.position = vin.position * 50.0f;
This is beginning to look sort of like how it should. However, when I turn my camera, the image disappears, so I obviously know this isn't correct; but if I could just scale the image per pixel or per vertex properly, I'm sure I could get the end result.
Edit:
I can confirm the cubemap is rendering correctly. I was ignoring the view/projection matrices and just using world, and managed to get this, which is the high-quality image I'm after, just not in the correct space. Yes, the faces are in the wrong order, but I'm not fussed about that now; it's easy enough to swap them around. I just need to get it rendering with this quality, in the correct space.
When in camera space, does it take into account whether the view is from the outside or the inside of the sphere? If my textures are mapped over the outside of the sphere and I'm viewing it from the inside, it's not going to look the same, is it?
The issue is with your texture size: it's small, and you're applying it to a larger surface. Make larger textures with more pixels.
It's confirmed that zfar and scaling have nothing to do with it.
Finally found the issue, silly mistake.
scb.world = XMMatrixTranspose(sworld);
scb.view = XMMatrixTranspose(sview);
scb.projection = XMMatrixTranspose(sprojection);
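(For anyone wondering why the transpose matters, a note of my own: HLSL constant buffers default to column-major matrix packing, while the XMMATRIX data on the CPU side is row-major, so the matrices must be transposed before being copied into the buffer. A minimal sketch of the corrected update, using the names from the draw code above:)
SkyboxConstantBuffer scb;
scb.world = XMMatrixTranspose(sworld); // transpose so the column-major HLSL cbuffer
scb.view = XMMatrixTranspose(sview); // reads the row-major XMMATRIX data correctly
scb.projection = XMMatrixTranspose(sprojection);
immediateContext->UpdateSubresource(skyboxConstantBuffer, 0, nullptr, &scb, 0, 0);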
I'm attempting to port a pathtracer to GLSL, and to do this I need to modify a shader sample program to use a texture as the framebuffer instead of the backbuffer.
This is the vertex shader:
#version 130
out vec2 texCoord;
// https://rauwendaal.net/2014/06/14/rendering-a-screen-covering-triangle-in-opengl/
void main()
{
float x = -1.0 + float((gl_VertexID & 1) << 2);
float y = -1.0 + float((gl_VertexID & 2) << 1);
texCoord.x = x;
texCoord.y = y;
gl_Position = vec4(x, y, 0, 1);
}
This is the setup code
gl.GenFramebuffersEXT(2, _FrameBuffer);
gl.BindFramebufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, _FrameBuffer[0]);
gl.GenRenderbuffersEXT(2, _RaytracerBuffer);
gl.BindRenderbufferEXT(OpenGL.GL_RENDERBUFFER_EXT, _RaytracerBuffer[0]);
gl.RenderbufferStorageEXT(OpenGL.GL_RENDERBUFFER_EXT, OpenGL.GL_RGBA32F, (int)viewport[2], (int)viewport[3]);
And this is the runtime code
// Get a reference to the raytracer shader.
var shader = shaderRayMarch;
// setup first framebuffer (RGB32F)
gl.BindFramebufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, _FrameBuffer[0]);
gl.Viewport((int)viewport[0], (int)viewport[1], (int)viewport[2], (int)viewport[3]); //0,0,width,height)
gl.FramebufferRenderbufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, OpenGL.GL_COLOR_ATTACHMENT0_EXT, OpenGL.GL_RENDERBUFFER_EXT, _RaytracerBuffer[0]);
gl.FramebufferRenderbufferEXT(OpenGL.GL_FRAMEBUFFER_EXT, OpenGL.GL_DEPTH_ATTACHMENT_EXT, OpenGL.GL_RENDERBUFFER_EXT, 0);
uint [] DrawBuffers = new uint[1];
DrawBuffers[0] = OpenGL.GL_COLOR_ATTACHMENT0_EXT;
gl.DrawBuffers(1, DrawBuffers);
shader.Bind(gl);
shader.SetUniform1(gl, "screenWidth", viewport[2]);
shader.SetUniform1(gl, "screenHeight", viewport[3]);
shader.SetUniform1(gl, "fov", 40.0f);
gl.DrawArrays(OpenGL.GL_TRIANGLES, 0, 3);
shader.Unbind(gl);
int[] pixels = new int[(int)viewport[2]*(int)viewport[3]*4];
gl.GetTexImage(_RaytracerBuffer[0], 0, OpenGL.GL_RGBA32F, OpenGL.GL_INT, pixels);
But when I inspect the pixels coming back from GetTexImage they're black. When I bind this texture in a further transfer shader they remain black. I suspect I'm missing something in the setup code for the renderbuffer and would appreciate any suggestions you have!
Renderbuffers are not textures. So when you do glGetTexImage on your renderbuffer, you probably got an OpenGL error. When you tried to bind it as a texture with glBindTexture, you probably got an OpenGL error.
If you want to render to a texture, you should render to a texture. As in glGenTextures/glTexImage2D/glFramebufferTexture2D.
Also, please stop using EXT_framebuffer_object. You should be using the core FBO feature, which requires no "EXT" suffixes. Not unless you're using a really ancient OpenGL version.
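To illustrate the suggestion, here is a rough sketch in plain C-style GL (rather than the C# wrapper used in the question); width, height and pixels stand in for the viewport size and a destination buffer:
// Create a texture to use as the color attachment instead of a renderbuffer.
GLuint fbo = 0, colorTex = 0;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Core (non-EXT) framebuffer object with the texture attached as color output.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
// ... draw the fullscreen triangle here ...
// Reading back works now, because the attachment really is a texture.
glBindTexture(GL_TEXTURE_2D, colorTex);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, pixels);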
I'm getting everything ready in a little DirectX 11.0 project of mine for a deferred rendering pipeline. However, I've been having quite a lot of trouble with sampling the depth buffer from within a pixel shader.
First I define the depth texture and its shader resource view:
D3D11_TEXTURE2D_DESC depthTexDesc;
ZeroMemory(&depthTexDesc, sizeof(depthTexDesc));
depthTexDesc.Width = nWidth;
depthTexDesc.Height = nHeight;
depthTexDesc.Format = DXGI_FORMAT_R32_TYPELESS;
depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
depthTexDesc.MipLevels = 1;
depthTexDesc.ArraySize = 1;
depthTexDesc.SampleDesc.Count = 1;
depthTexDesc.SampleDesc.Quality = 0;
depthTexDesc.CPUAccessFlags = 0;
depthTexDesc.MiscFlags = 0;
hresult = d3dDevice_->CreateTexture2D(&depthTexDesc, nullptr, &depthTexture_);
D3D11_DEPTH_STENCIL_VIEW_DESC DSVDesc;
ZeroMemory(&DSVDesc, sizeof(DSVDesc));
DSVDesc.Format = DXGI_FORMAT_D32_FLOAT;
DSVDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
DSVDesc.Texture2D.MipSlice = 0;
hresult = d3dDevice_->CreateDepthStencilView(depthTexture_, &DSVDesc, &depthView_);
D3D11_SHADER_RESOURCE_VIEW_DESC gbDepthTexDesc;
ZeroMemory(&gbDepthTexDesc, sizeof(gbDepthTexDesc));
gbDepthTexDesc.Format = DXGI_FORMAT_R32_FLOAT;
gbDepthTexDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
gbDepthTexDesc.Texture2D.MipLevels = 1;
gbDepthTexDesc.Texture2D.MostDetailedMip = -1;
d3dDevice_->CreateShaderResourceView(depthTexture_, &gbDepthTexDesc, &gbDepthView_);
Here's the relevant part of my rendering function:
float clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
d3dContext_->ClearRenderTargetView(backBufferTarget_, clearColor);
d3dContext_->ClearDepthStencilView(depthView_, D3D11_CLEAR_DEPTH, 1.0f, 0);
// GBuffer packing pass (in the future): /////////////////////////////////////////
d3dContext_->OMSetRenderTargets(1, &backBufferTarget_, depthView_);
unsigned int nStride = sizeof(Vertex);
unsigned int nOffset = 0;
d3dContext_->IASetInputLayout(inputLayout_);
d3dContext_->IASetVertexBuffers(0, 1, &vertexBuffer_, &nStride, &nOffset);
d3dContext_->IASetIndexBuffer(indexBuffer_, DXGI_FORMAT_R32_UINT, 0);
d3dContext_->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
d3dContext_->VSSetShader(firstVS_, 0, 0);
d3dContext_->PSSetShader(firstPS_, 0, 0);
d3dContext_->DrawIndexed(nIndexCount_, 0, 0);
d3dContext_->OMSetRenderTargets(1, &backBufferTarget_, nullptr);
d3dContext_->VSSetShader(secondVS_, 0, 0);
d3dContext_->PSSetShader(secondPS_, 0, 0);
d3dContext_->PSGetShaderResources(0, 1, &gbDepthView_);
d3dContext_->PSSetSamplers(0, 1, &colorMapSampler_);
d3dContext_->DrawIndexed(nIndexCount_, 0, 0);
swapChain_->Present(0, 0);
In this temporary implementation, firstVS_ and secondVS_ are identical, and their only function is to do all the transforms and pass on the data to the PSs.
And finally, here are firstPS_ and secondPS_:
// firstPS_
float4 main(PS_Input frag) : SV_TARGET
{
return float4(1.0f, 1.0f, 1.0f, 1.0f);
}
// secondPS_
Texture2D<float> depthMap_ : register(t0);
SamplerState colorSampler_ : register(s0);
float4 main(PS_Input frag) : SV_TARGET
{
float4 psOut;
psOut.xyz = depthMap_.Sample(colorSampler_, frag.tex0).xxx;
psOut.w = 1.0f;
return psOut;
}
So, my actual questions:
1) All this code compiles without any issues, but when I sample the depth buffer, it just turns out black. I read this could be caused by having your depth & stencil view bound by D3D11DeviceContext::OMSetRenderTargets() at the time you want to sample the depth buffer. I fixed that, but the buffer is still black. I checked the graphics debugger, with no success. So, is my depth buffer not getting written correctly, or am I sampling the wrong way? firstPS_ works fine.
2) Speaking of sampling, the book I'm using just says "we'll be using a point sampler," but I have no idea what exactly is meant. Right now I'm just using a standard texture map sampler, but is there something else I should sample with?
3) Also, the book uses the SamplerState.Gather() function in secondPS_, but when I tried that it complained that "the expression could not be mapped to pixel shader instruction set." Is Gather() an error in the book, or is it my GPU (D3D feature level 11.0) who doesn't understand what that is? Is Sample() good enough for what I want to do? The original use of Gather() was in the context of creating a silhouette around objects in the depth buffer.
4) I tried to get secondVS_ to draw nothing but a full-screen quad, but FXC complained about my use of SV_VertexID as "invalid," saying that my type should be integral, even though it already was. I read somewhere that SV_VertexID can only be used by the first VS in a pipeline. Is that the problem here? How do I solve this in this particular case? In my current inefficient solution, is the problem being caused by the UVs?
1) You've called PSGetShaderResources instead of PSSetShaderResources. Also, MostDetailedMip should be 0 not -1.
2) A "point sampler" is just a texture sampler with the FILTER field set to something like D3D11_FILTER_MIN_MAG_MIP_POINT.
3) Gather is a function on the Texture2D not the SamplerState as you stated.
4) You get this error if you compile using vs_4_0, try vs_5_0.
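To expand on point 2 with a rough sketch of my own (not from the original answer), a point sampler created on the C++ side would look something like this, reusing the d3dDevice_/d3dContext_ names from the question:
D3D11_SAMPLER_DESC pointDesc = {};
pointDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT; // nearest texel, no filtering
pointDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
pointDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
pointDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
pointDesc.MaxLOD = D3D11_FLOAT32_MAX;
ID3D11SamplerState* pointSampler = nullptr; // hypothetical variable
d3dDevice_->CreateSamplerState(&pointDesc, &pointSampler);
d3dContext_->PSSetSamplers(0, 1, &pointSampler); // bound to s0, where colorSampler_ currently sits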
I'm running into a problem and I don't know what the best practice for it is. I have a background that moves upward, which is in fact made of "slices" that move together, as if the screen were split into 4-5 parts horizontally. I need to be able to draw a hole (circle) in the background (see-through), at a specified position which will change dynamically each frame or so.
Here is how I generate a zone, I don't think there's much of a problem there:
// A 'zone' is simply the 'slice' of ground that moves upward. There's about 4 of
// them visible on screen at the same time, and they are automatically generated by
// a method irrelevant to the situation. Zones are Sprites.
// ---------
void LevelLayer::Zone::generate(LevelLayer *sender) {
// [...]
// Make a background for the zone
Sprite *background = this->generateBackgroundSprite();
background->setPosition(_contentSize.width / 2, _contentSize.height / 2);
this->addChild(background, 0);
}
This is the Zone::generateBackgroundSprite() method:
// generates dynamically a new background texture
Sprite *LevelLayer::Zone::generateBackgroundSprite() {
RenderTexture *rt = RenderTexture::create(_contentSize.width, _contentSize.height);
rt->retain();
Color4B dirtColorByte = Color4B(/*initialize the color with bytes*/);
Color4F dirtColor(dirtColorByte);
rt->beginWithClear(dirtColor.r, dirtColor.g, dirtColor.b, dirtColor.a);
// [Nothing here yet, gotta learn OpenGL m8]
rt->end();
// ++++++++++++++++++++
// I'm just testing clipping node, it works but the FPS get significantly lower.
// If I lock them to 60, they get down to 30, and if I lock them there they get
// to 20 :(
// Also for the test I'm drawing a square since ClippingNode doesn't seem to
// like circles...
DrawNode *square = DrawNode::create();
Point squarePoints[4] = { Point(-20, -20), Point(20, -20), Point(20, 20), Point(-20, 20) };
square->drawPolygon(squarePoints, 4, Color4F::BLACK, 0.0f, Color4F(0, 0, 0, 0));
square->setPosition(0, 0);
// Make a stencil
Node *stencil = Node::create();
stencil->addChild(square);
// Create a clipping node with the prepared stencil
ClippingNode *clippingNode = ClippingNode::create(stencil);
clippingNode->setInverted(true);
clippingNode->addChild(rt);
Sprite *ret = Sprite::create();
ret->addChild(clippingNode);
rt->release();
return ret;
}
So I'm asking you guys, what would you do in such a situation? Is what I am doing a good idea? Would you do it in another more imaginative way?
PS This is a rewrite of a little app I made for iOS (I want to port it to Android), and I was using MutableTextures in the Objective-C version (it was working). I'm just trying to see if there's a better way using RenderTexture, so I can dynamically create background images using OpenGL calls.
EDIT (SOLUTION)
I wrote my own simple fragment shader that "masks" the visible parts of a texture (the background) based on the visible parts of another texture (the mask). I have an array of points that determine where my circles are on the screen, and in the update method I draw them to a RenderTexture. I then take the generated texture and use it as the mask I pass to the shader.
This is my shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform sampler2D u_alphaMaskTexture;
void main() {
float maskAlpha = texture2D(u_alphaMaskTexture, v_texCoord).a;
float texAlpha = texture2D(u_texture, v_texCoord).a;
float blendAlpha = (1.0 - maskAlpha) * texAlpha; // Show only where mask is not visible
vec3 texColor = texture2D(u_texture, v_texCoord).rgb;
gl_FragColor = vec4(texColor, blendAlpha);
return;
}
init method:
bool HelloWorld::init() {
// [...]
Size visibleSize = Director::getInstance()->getVisibleSize();
// Load and cache the custom shader
this->loadCustomShader();
// 'generateBackgroundSlice()' creates a new RenderTexture and fills it with a
// color, nothing too complicated here so I won't copy-paste it in my edit
m_background = Sprite::createWithTexture(this->generateBackgroundSprite()->getSprite()->getTexture());
m_background->setPosition(visibleSize.width / 2, visibleSize.height / 2);
this->addChild(m_background);
m_background->setShaderProgram(ShaderCache::getInstance()->getProgram(Shader_AlphaMask_frag_key));
GLProgram *shader = m_background->getShaderProgram();
m_alphaMaskTextureUniformLocation = glGetUniformLocation(shader->getProgram(), "u_alphaMaskTexture");
glUniform1i(m_alphaMaskTextureUniformLocation, 1);
m_alphaMaskRender = RenderTexture::create(m_background->getContentSize().width,
m_background->getContentSize().height);
m_alphaMaskRender->retain();
// [...]
}
loadCustomShader method:
void HelloWorld::loadCustomShader() {
// Load the content of the vertex and fragment shader
FileUtils *fileUtils = FileUtils::getInstance();
string vertexSource = ccPositionTextureA8Color_vert;
string fragmentSource = fileUtils->getStringFromFile(
fileUtils->fullPathForFilename("Shader_AlphaMask_frag.fsh"));
// Init a shader and add its attributes
GLProgram *shader = new GLProgram;
shader->initWithByteArrays(vertexSource.c_str(), fragmentSource.c_str());
shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION);
shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORDS);
shader->link();
shader->updateUniforms();
ShaderCache::getInstance()->addProgram(shader, Shader_AlphaMask_frag_key);
// Trace OpenGL errors if any
CHECK_GL_ERROR_DEBUG();
}
update method:
void HelloWorld::update(float dt) {
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// Create the mask texture from the points in the m_circlePos array
GLProgram *shader = m_background->getShaderProgram();
m_alphaMaskRender->beginWithClear(0, 0, 0, 0); // Begin with transparent mask
for (vector<Point>::iterator it = m_circlePos.begin(); it != m_circlePos.end(); it++) {
// draw a circle on the mask
const float radius = 40;
const int resolution = 20;
Point circlePoints[resolution];
Point center = *it;
center = Director::getInstance()->convertToUI(center); // OpenGL has a weird coordinates system
float angle = 0;
for (int i = 0; i < resolution; i++) {
float x = (radius * cosf(angle)) + center.x;
float y = (radius * sinf(angle)) + center.y;
angle += (2 * M_PI) / resolution;
circlePoints[i] = Point(x, y);
}
DrawNode *circle = DrawNode::create();
circle->retain();
circle->drawPolygon(circlePoints, resolution, Color4F::BLACK, 0.0f, Color4F(0, 0, 0, 0));
circle->setPosition(Point::ZERO);
circle->visit();
circle->release();
}
m_alphaMaskRender->end();
Texture2D *alphaMaskTexture = m_alphaMaskRender->getSprite()->getTexture();
alphaMaskTexture->setAliasTexParameters(); // Disable linear interpolation
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
shader->use();
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, alphaMaskTexture->getName());
glActiveTexture(GL_TEXTURE0);
}
What you might want to look at is framebuffers. I'm not too familiar with the mobile API for OpenGL, but I'm sure you should have access to framebuffers.
One idea you might want to try is a first pass where you render the circles that you want cut out of your background into a new framebuffer texture, and then use this texture as an alpha map in the pass that renders your background. So basically, when you render a circle you set the texture's alpha channel to 0.0 (and 1.0 everywhere else); when rendering the background you can then set each fragment's alpha channel to the value sampled from the first pass's texture.
You can think of it as the same idea as a mask, just using another texture.
Hope this helps :)
I suspect I'm not correctly rendering particle positions to my FBO, or correctly sampling those positions when rendering, though that may not be the actual problem with my code, admittedly.
I have a complete jsfiddle here: http://jsfiddle.net/p5mdv/53/
A brief overview of the code:
Initialization:
Create an array of random particle positions in x,y,z
Create an array of texture sampling locations (e.g. for 2 particles, first particle at 0,0, next at 0.5,0)
Create a Frame Buffer Object and two particle position textures (one for input, one for output)
Create a full-screen quad (-1,-1 to 1,1)
Particle simulation:
Render a full-screen quad using the particle program (bind frame buffer, set viewport to the dimensions of my particle positions texture, bind input texture, and draw a quad from -1,-1 to 1,1). Input and output textures are swapped each frame.
Particle fragment shader samples the particle texture at the current fragment position (gl_FragCoord.xy), makes some modifications, and writes out the modified position
Particle rendering:
Draw using the vertex buffer of texture sampling locations
Vertex shader uses the sampling location to sample the particle position texture, then transforms them using view projection matrix
Draw the particle using a sprite texture (gl.POINTS)
Questions:
Am I correctly setting the viewport for the FBO in the particle simulation step? I.e. am I correctly rendering a full-screen quad?
// 6 2D corners = 12 floats
var vertexBuffer = new Float32Array(12);
// -1,-1 to 1,1 screen quad
vertexBuffer[0] = -1;
vertexBuffer[1] = -1;
vertexBuffer[2] = -1;
vertexBuffer[3] = 1;
vertexBuffer[4] = 1;
vertexBuffer[5] = 1;
vertexBuffer[6] = -1;
vertexBuffer[7] = -1;
vertexBuffer[8] = 1;
vertexBuffer[9] = 1;
vertexBuffer[10] = 1;
vertexBuffer[11] = -1;
// Create GL buffers with this data
g.particleSystem.vertexObject = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, g.particleSystem.vertexObject);
gl.bufferData(gl.ARRAY_BUFFER, vertexBuffer, gl.STATIC_DRAW);
...
gl.viewport(0, 0,
g.particleSystem.particleFBO.width,
g.particleSystem.particleFBO.height);
...
// Set the quad as vertex buffer
gl.bindBuffer(gl.ARRAY_BUFFER, g.screenQuad.vertexObject);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
// Draw!
gl.drawArrays(gl.TRIANGLES, 0, 6);
Am I correctly setting the texture coordinates to sample the particle positions?
for(var i=0; i<numParticles; i++)
{
// Coordinates of particle within texture (normalized)
var texCoordX = Math.floor(i % texSize.width) / texSize.width;
var texCoordY = Math.floor(i / texSize.width) / texSize.height;
particleIndices[ pclIdx ] = texCoordX;
particleIndices[ pclIdx + 1 ] = texCoordY;
particleIndices[ pclIdx + 2 ] = 1; // not used in shader
}
The relevant shaders:
Particle simulation fragment shader:
precision mediump float;
uniform sampler2D mParticleTex;
void main()
{
// Current pixel is the particle's position on the texture
vec2 particleSampleCoords = gl_FragCoord.xy;
vec4 particlePos = texture2D(mParticleTex, particleSampleCoords);
// Move the particle up
particlePos.y += 0.1;
if(particlePos.y > 2.0)
{
// Reset
particlePos.y = -2.0;
}
// Write particle out to texture
gl_FragColor = particlePos;
}
Particle rendering vertex shader:
attribute vec4 vPosition;
uniform mat4 u_modelViewProjMatrix;
uniform sampler2D mParticleTex;
void main()
{
vec2 particleSampleCoords = vPosition.xy;
vec4 particlePos = texture2D(mParticleTex, particleSampleCoords);
gl_Position = u_modelViewProjMatrix * particlePos;
gl_PointSize = 10.0;
}
Let me know if there's a better way to go about debugging this, if nothing else. I'm using webgl-debug to find gl errors and logging what I can to the console.
Your quad is facing away from the view, so I tried adding gl.disable(gl.CULL_FACE), but still no result.
Then I noticed that while resizing the window panel with the canvas, it actually shows one black, square-shaped particle. So it seems that the rendering loop is not right.
If you look at the console log, it fails to load the particle image, and it also says that the FBO size is 512x1, which is not good.
Some function declarations do not exist, such as getTexSize. (?!)
The code needs tidying and grouping, and always check the console if you're already using it.
Hope this helps a bit.
Found the problem.
gl_FragCoord goes from [0,0] to [screenwidth, screenheight]; I was wrongly thinking it went from [0,0] to [1,1].
I had to pass shader variables for the width and height, then divide gl_FragCoord.xy by them to normalize the sample coordinates before sampling from the texture.
Okay first up I am using:
DirectX 10
C++
Okay, this is a bit of a bizarre one to me. I wouldn't usually ask the question, but I've been forced by circumstance. I have two full-screen triangles (not a quad, for reasons I won't go into!), aligned to the screen by the fact that they are not transformed.
In the DirectX vertex declaration I am passing a 3-component float (Pos x,y,z) and a 2-component float (Texcoord x,y). Texcoord z is reserved for texture2d arrays, which I'm currently defaulting to 0 in the pixel shader.
I wrote this to achieve the simple task:
float fStartX = -1.0f;
float fEndX = 1.0f;
float fStartY = 1.0f;
float fEndY = -1.0f;
float fStartU = 0.0f;
float fEndU = 1.0f;
float fStartV = 0.0f;
float fEndV = 1.0f;
vmvUIVerts.push_back(CreateVertex(fStartX, fStartY, 0, fStartU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fStartY, 0, fEndU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fEndY, 0, fEndU, fEndV));
vmvUIVerts.push_back(CreateVertex(fStartX, fStartY, 0, fStartU, fStartV));
vmvUIVerts.push_back(CreateVertex(fEndX, fEndY, 0, fEndU, fEndV));
vmvUIVerts.push_back(CreateVertex(fStartX, fEndY, 0, fStartU, fEndV));
IA Layout: (Update)
D3D10_INPUT_ELEMENT_DESC ieDesc[2] = {
{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,12, D3D10_INPUT_PER_VERTEX_DATA, 0 }
};
Data reaches the vertex shader in the following format: (Update)
struct VS_INPUT
{
float3 fPos :POSITION;
float3 fTexcoord :TEXCOORD0;
};
Within my vertex and pixel shaders not a lot is happening for this particular draw call; the pixel shader does most of the work, sampling from a texture using the specified UV coordinates. However, this isn't working quite as expected; it appears that I am getting only 1 pixel of the sampled texture.
The workaround was in the pixel shader to do the following: (Update)
sampler s0 : register(s0);
Texture2DArray<float4> meshTex : register(t0);
float4 psMain(in VS_OUTPUT vOut) : SV_TARGET
{
float4 Color;
vOut.fTexcoord.z = 0;
vOut.fTexcoord.x = vOut.fPosObj.x * 0.5f;
vOut.fTexcoord.y = vOut.fPosObj.y * 0.5f;
vOut.fTexcoord.x += 0.5f;
vOut.fTexcoord.y += 0.5f;
Color = meshTex.Sample(s0, vOut.fTexcoord);
Color.a = 1.0f;
return Color;
}
It was also worth noting that this worked with the following VS out struct defined in the shaders:
struct VS_OUTPUT
{
float4 fPos :POSITION0; // SV_POSITION won't work in this case
float3 fTexcoord :TEXCOORD0;
};
Now I have a texture that's stretched to fit the entire screen (both triangles already cover it), but why did the texture UVs not get used as expected?
To clarify I am using a point sampler and have tried both clamp and wrapping UV.
I was a bit curious and found the solution/workaround mentioned above; however, I'd prefer not to have to use it, so does anyone know why this is happening?
What semantics are you specifying for your vertex-type? Are they properly aligned with your vertices and also your shader? If you are using a D3DXVECTOR4, D3DXVECTOR3 setup (as shown in your VS code) this could be a problem if your CreateVertex() returns a D3DXVECTOR3, D3DXVECTOR2 struct.
It would be reassuring to see your pixel-shader code too.
Okay, well, for one, texture coordinates outside of the 0..1 range get clamped. I made the mistake of assuming that because clip space goes from -1 to +1, the texture coordinates would too. This is not the case; they still go from 0.0 to 1.0.
The reason the code in the pixel shader worked is that it uses the clip-space x,y,z coordinates to overwrite those texture coordinates automatically; an oversight on my part. However, the pixel shader code results in the texture being stretched over a full-screen 'quad', so it might be useful to someone somewhere ;)