I have a piece of code in my iOS project that swaps the texture of a CCSprite via setTexture, like so:
[sprite setTexture:[[CCTextureCache sharedTextureCache] addImage:@"Circle.png"]];
However, the dimensions of the CCSprite's texture before and after the swap are different, and as a result the Circle.png texture is getting cropped: the sprite stays at the size of the original texture, while the circle is larger.
How can I adjust the size after swapping the texture?
Try this:
CCTexture2D* texture = [[CCTextureCache sharedTextureCache] addImage:@"Circle.png"];
if (texture)
{
    // Get the size of the new texture:
    CGSize size = [texture contentSize];
    [sprite setTexture:texture];
    // Use the size of the new texture:
    [sprite setTextureRect:CGRectMake(0.0f, 0.0f, size.width, size.height)];
}
You can also just recreate the sprite with the spriteWithFile: constructor; the dimensions will be set automatically.
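A minimal sketch of that approach (assuming the old sprite's position and z-order should carry over):

CCSprite *newSprite = [CCSprite spriteWithFile:@"Circle.png"];
newSprite.position = sprite.position;
[sprite.parent addChild:newSprite z:sprite.zOrder];
[sprite removeFromParentAndCleanup:YES];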
Okay, first of all, I'm really new to DirectX 11 and this is actually my first project using it. I'm also relatively new to computer graphics in general, so I might have some concepts wrong, although for this particular case I don't think so. My code is based on the RasterTek tutorials.
In trying to implement a shader, I need to render the scene to a 2D texture and then perform a Gaussian blur on the resulting image.
That part seems to be working fine: when I inspect it with the Visual Studio graphics debugger, the output is what I expect.
However, after having done all the post-processing, I render a quad to the backbuffer using a simple shader that uses the final output of the blur as a resource. This always gives me a black screen. When I debug the pixel shader with the VS graphics debugger, the Sample(texture, uv) method always returns (0,0,0,1) when sampling that texture.
The pixel shader works fine if I use a different texture, like some normal map or whatever, as a resource, just not when using any of the rendertargets from the previous passes.
The behaviour is particularly weird because the actual blur shader works fine when using any of the rendertargets as a resource.
I know I cannot use a render target as both input and output, but I think I have that covered since I call OMSetRenderTargets so I can render to the backbuffer.
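For reference, a minimal sketch of what I mean by explicitly clearing the pixel-shader SRV slots before rebinding (the slot count and the variable names here are assumptions, not my actual code):

// Unbind any SRVs that may still reference the render targets, so those
// resources are free to be bound as render targets again.
ID3D11ShaderResourceView* nullSRVs[2] = { nullptr, nullptr };
deviceContext->PSSetShaderResources(0, 2, nullSRVs);
deviceContext->OMSetRenderTargets(1, &backBufferRTV, depthStencilView);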
Here's the step-by-step of my implementation:
Set Render Targets
Clear them
Clear Depth buffer
Render scene to texture
Turn off Z buffer
Render to quad
Perform horizontal blur
Perform vertical blur
Set back buffer as render target
Clear back buffer
Render final output to quad
Turn z buffer on
Present back buffer
Here is the shader for the quad:
Texture2D shaderTexture : register(t0);
SamplerState SampleType : register(s0);

struct PixelInputType
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
};

float4 main(PixelInputType input) : SV_TARGET
{
    return shaderTexture.Sample(SampleType, input.tex);
}
Here's the relevant C++ code.
This is how I set the render targets:
void DeferredBuffers::SetRenderTargets(ID3D11DeviceContext* deviceContext, bool activeRTs[BUFFER_COUNT]){
    vector<ID3D11RenderTargetView*> rts;
    for (int i = 0; i < BUFFER_COUNT; ++i){
        if (activeRTs[i]){
            rts.push_back(m_renderTargetViewArray[i]);
        }
    }
    // Guard against an empty vector: dereferencing rts[0] is undefined if no target is active.
    deviceContext->OMSetRenderTargets(static_cast<UINT>(rts.size()), rts.empty() ? nullptr : rts.data(), m_depthStencilView);
    // Set the viewport.
    deviceContext->RSSetViewports(1, &m_viewport);
}
I use a ping-pong approach with the render targets for the blur.
I render the scene to a MainTarget and the depth information to the depthMap. The first pass performs a horizontal blur onto a third target (horizontalBlurred), and then I use that one as input for the vertical blur, which renders back to the mainTarget and to the finalTarget. It's a loop because on the vertical pass I'm supposed to blend the PS output with what's on the finalTarget. I left that code (and some other stuff) out as it's not relevant.
The m_FullScreenWindow is the quad.
bool activeRenderTargets[4] = { true, true, false, false };
// Set the render buffers to be the render target.
m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
// Clear the render buffers.
m_ShaderManager->getDeferredBuffers()->ClearRenderTargets(m_D3D->GetDeviceContext(), 0.25f, 0.0f, 0.0f, 1.0f);
m_ShaderManager->getDeferredBuffers()->ClearDepthStencil(m_D3D->GetDeviceContext());
// Render the scene to the render buffers.
RenderSceneToTexture();
// Get the matrices.
m_D3D->GetWorldMatrix(worldMatrix);
m_Camera->GetBaseViewMatrix(baseViewMatrix);
m_D3D->GetOrthoMatrix(projectionMatrix);
// Turn off the Z buffer to begin all 2D rendering.
m_D3D->TurnZBufferOff();
// Put the full screen ortho window vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_FullScreenWindow->Render(m_D3D->GetDeviceContext());
ID3D11ShaderResourceView* mainTarget = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(0);
ID3D11ShaderResourceView* horizontalBlurred = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(2);
ID3D11ShaderResourceView* depthMap = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(1);
ID3D11ShaderResourceView* finalTarget = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(3);
activeRenderTargets[1] = false; //depth map is never a render target again
for (int i = 0; i < numBlurs; ++i){
    activeRenderTargets[0] = false; //main target is resource in this pass
    activeRenderTargets[2] = true;  //horizontal blurred target
    activeRenderTargets[3] = false; //unbind final target
    m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
    m_ShaderManager->RenderScreenSpaceSSS_HorizontalBlur(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, mainTarget, depthMap);
    activeRenderTargets[0] = true;  //rendering to main target
    activeRenderTargets[2] = false; //horizontal blurred is resource
    activeRenderTargets[3] = true;  //rendering to final target
    m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
    m_ShaderManager->RenderScreenSpaceSSS_VerticalBlur(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, horizontalBlurred, depthMap);
}
m_D3D->SetBackBufferRenderTarget();
m_D3D->BeginScene(0.0f, 0.0f, 0.5f, 1.0f);
// Reset the viewport back to the original.
m_D3D->ResetViewport();
m_ShaderManager->RenderTextureShader(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, depthMap);
m_D3D->TurnZBufferOn();
m_D3D->EndScene();
And, finally, here are 3 screenshots from my graphics log.
They show rendering the scene onto the mainTarget, a vertical pass which takes the horizontalBlurred resource as input, and finally rendering onto the backBuffer, which is what's failing. You can see the resource bound to the shader and how the output is just a black screen. I purposely set the background to red to find out if it was sampling with wrong coordinates, but nope.
So, has anyone ever experienced something like this? What could be the cause of this bug?
Thanks in advance for any help!
EDIT: The Render_SOMETHING_SOMETHING_shader methods handle binding all the resources, setting the shaders, issuing the draw calls, etc. If necessary I can post them here, but I don't think they're that relevant.
I have a DirectX 11 C++ application that displays two rectangles with textures and some text.
Both textures are taken from TGA resources (with alpha channel added).
When I run the program, I get the result:
What's wrong? Take a closer look:
The corners of the rectangles are transparent (and they should be). The rest of textures are set to be 30% opacity (and it works well too).
But, when one texture (let's call it texture1) is over another (texture2):
The corners of texture1 are transparent.
But behind them I see the background of window, instead of texture2.
In other words, transparency of texture interacts with the background of window, not with the textures behind it.
What have I done wrong? What part of my program can be responsible for it? Blending options, render states, shader code...?
In my shader, I set:
technique10 RENDER
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetPixelShader(CompileShader(ps_4_0, PS()));
        SetBlendState(SrcAlphaBlendingAdd, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
    }
}
P.S.
Of course, when I change the background of the window from blue to another colour, the elements still have their transparency (the corners aren't blue).
Edit:
Following @ComicSansMS's suggestion (+1 for the nick, anyway ;p), I've tried to change the order in which the elements are rendered (I've also moved the smaller texture a bit, to check if the error remains):
The smaller texture is now behind the bigger one, but the problem with the corners remains (now it appears on the second texture). I am almost sure that I draw the rectangle behind BEFORE the rectangle in front (I can see the order of the calls in the code).
My depth stencil:
//initialize the description of the stencil state
ZeroMemory(depthStencilsDescs, sizeof(*depthStencilsDescs));
//set up the description of the stencil state
depthStencilsDescs->DepthEnable = true;
depthStencilsDescs->DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthStencilsDescs->DepthFunc = D3D11_COMPARISON_LESS;
depthStencilsDescs->StencilEnable = true;
depthStencilsDescs->StencilReadMask = 0xFF;
depthStencilsDescs->StencilWriteMask = 0xFF;
//stencil operations if pixel is front-facing
depthStencilsDescs->FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthStencilsDescs->FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
depthStencilsDescs->FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthStencilsDescs->FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
//stencil operations if pixel is back-facing
depthStencilsDescs->BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthStencilsDescs->BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR;
depthStencilsDescs->BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthStencilsDescs->BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
//create the depth stencil state
result = device->CreateDepthStencilState(depthStencilsDescs, depthStencilState2D);
The render function:
...
//clear the back buffer
context->ClearRenderTargetView(myRenderTargetView, backgroundColor); //backgroundColor
//clear the depth buffer to 1.0 (max depth)
context->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
context->OMSetDepthStencilState(depthStencilState2D, 1);
context->VSSetShader(getVertexShader(), NULL, 0);
context->PSSetShader(getPixelShader(), NULL, 0);
for(...){
    rectangles[i]->render();
}
The blend state:
D3D11_BLEND_DESC blendDesc;
ZeroMemory(&blendDesc, sizeof(D3D11_BLEND_DESC) );
blendDesc.AlphaToCoverageEnable = false;
blendDesc.IndependentBlendEnable = false;
blendDesc.RenderTarget[0].BlendEnable = true;
blendDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
blendDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
blendDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE;
blendDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
blendDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL ;
ID3D11BlendState* blendState;
if (FAILED(device->CreateBlendState(&blendDesc, &blendState))){
    // handle the creation error here
}
context->OMSetBlendState(blendState, NULL, 0xffffffff);
Your draw order is probably wrong.
Blending does not interact with the pixels of the object behind it, it interacts with the pixels that are currently in the frame buffer.
So if you draw the rectangle in front before the rectangle in the back, its blending operation will interact with what is in the frame buffer at that point (that is, the background).
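Concretely, with the blend state from your code (SRC_ALPHA / INV_SRC_ALPHA / OP_ADD), each blended pixel is computed as

result.rgb = src.rgb * src.a + dest.rgb * (1 - src.a)

where dest is whatever the frame buffer contains at that moment, not the object that is logically behind.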
The solution is obviously to sort your objects by their depth in view space and draw from back-to-front, although that is sometimes easier said than done (especially when allowing arbitrary overlaps).
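A minimal sketch of such a sort, with hypothetical names and assuming each object exposes a precomputed view-space depth:

#include <algorithm>
#include <vector>

struct Renderable {
    float viewSpaceZ;            // distance from the camera along the view axis (assumed precomputed)
    void render() const { /* issue the draw call */ }
};

void renderBackToFront(std::vector<Renderable*>& objects)
{
    // Sort so that the farthest object is drawn first.
    std::sort(objects.begin(), objects.end(),
              [](const Renderable* a, const Renderable* b){ return a->viewSpaceZ > b->viewSpaceZ; });
    for (const Renderable* obj : objects)
        obj->render();
}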
The other problem seems to be that you draw both rectangles at the same depth value. Your depth test is set to D3D11_COMPARISON_LESS, so as soon as a triangle is drawn on a pixel, the other triangle will fail the depth test for that pixel and won't get drawn at all. This explains the results you get when swapping the drawing order.
Note that if you draw objects back-to-front there is no need to perform a depth test at all, so you might just want to switch it off in this case. Alternatively, you can try to arrange the objects along the depth axis by giving them different Z values, or just switch to D3D11_COMPARISON_LESS_EQUAL for the depth test.
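For the last option, only the comparison function in the depth-stencil description needs to change; a sketch based on the snippet from your question:

// Let fragments that tie with the stored depth pass, so two rectangles
// drawn at the same Z value both get rendered.
depthStencilsDescs->DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
result = device->CreateDepthStencilState(depthStencilsDescs, depthStencilState2D);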
I have made a spritesheet in Zwoptex. I know TexturePacker is better than this, but I have just started with cocos2d-iphone, so I haven't purchased it yet.
I have made a CCTexture2D using the following code:
texture = [[CCTextureCache sharedTextureCache] addImage:@"1.png"];
self.shaderProgram = [[CCShaderCache sharedShaderCache] programForKey:kCCShader_PositionTexture];
CC_NODE_DRAW_SETUP();
And I use this CCTexture2D object to draw a texture around a soft body, using the following code:
ccGLEnableVertexAttribs(kCCVertexAttribFlag_Position | kCCVertexAttribFlag_TexCoords);
ccGLBindTexture2D([texture name]);
glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, textCoords);
glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_TRUE, 0, triangleFanPos);
glDrawArrays(GL_TRIANGLE_FAN, 0, NUM_SEGMENT+2);
ccGLEnableVertexAttribs( kCCVertexAttribFlag_Color);
Now I want to animate the texture of the soft body. I know how to animate a sprite using a spritesheet, but now I am confused about how to make a CCTexture2D from a spritesheet and how to animate the texture using different images, like we do in sprite animation.
Can anyone give me any direction in solving this issue?
First of all I want to say sorry for this question. I want to share my code here. I want to animate the texture which I am giving to the soft body.
I was confused about how to give a new image to the CCTexture2D object and rotate it to the previous texture's angle, but I came to know that cocos2d-iphone handles that automatically.
I animate my texture2d object by just giving it the image sequence with a delay. The code for that is shown below.
Called function:
-(void)changeTexture1:(NSNumber*)i{
    texture = [[CCTextureCache sharedTextureCache] addImage:[NSString stringWithFormat:@"blink%@.png", i]];
}
Calling function:
-(void)makeAnimation{
    for (int i = 2; i < 7; i++) {
        [self performSelector:@selector(changeTexture1:) withObject:[NSNumber numberWithInt:i] afterDelay:0.1*i];
    }
    int j = 1;
    for (int i = 6; i > 0; i--) {
        [self performSelector:@selector(changeTexture1:) withObject:[NSNumber numberWithInt:i] afterDelay:0.1*6 + 0.1*j++];
    }
}
I have a CCSprite subclass. In the draw method, I am drawing some cocos2d primitives like lines and such. How can I create a CCTexture2D of the sprite? I can't use sprite.texture because that doesn't include the primitives I am drawing.
You can add the sprite to a CCRenderTexture object and then draw the sprite into the texture.
Look at the example:
CCSprite *spr = nil; // your sprite
CCRenderTexture* renderTexture = [CCRenderTexture renderTextureWithWidth:spr.contentSize.width height:spr.contentSize.height];
spr.anchorPoint = ccp(0, 0);
spr.position = ccp(0, 0);
[renderTexture addChild:spr];
[renderTexture begin];
[spr draw]; // or [spr visit];
[renderTexture end];
CCTexture2D *result = renderTexture.sprite.texture;
Now you will have a texture that contains the sprite and the primitives it draws in its draw method.
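If you then need a new sprite from that texture, something like this should work (a sketch; note that render-texture contents come out flipped vertically in GL coordinates, so the new sprite usually needs flipY set):

CCSprite *newSprite = [CCSprite spriteWithTexture:result];
newSprite.flipY = YES; // compensate for the flipped render-texture contents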
Hope it will help you :)
Making a 2D OpenGL game. When rendering a frame I need to first draw some computed quad geometry and then draw some textured sprites. When the body of my render method only draws the sprites, everything works fine. However, when I try to draw my geometric quads prior to the sprites, the sprite texture gets tinted with the color of the last GL.Color3 call. How do I tell OpenGL (well, OpenTK) "Ok, we are done drawing geometry, and it's time to move on to sprites?"
Here is what the render code looks like:
// Let's do some geometry
GL.Begin(BeginMode.Quads);
GL.Color3(_dashboardBGColor); // commenting this out makes my sprites look right
int shakeBuffer = 100;
GL.Vertex2(0 - shakeBuffer, _view.DashboardHeightPixels);
GL.Vertex2(_view.WidthPixelsCount + shakeBuffer, _view.DashboardHeightPixels);
GL.Vertex2(_view.WidthPixelsCount + shakeBuffer, 0 - shakeBuffer);
GL.Vertex2(0 - shakeBuffer, 0 - shakeBuffer);
GL.End();
// lets do some sprites
GL.Begin(BeginMode.Quads);
GL.BindTexture(TextureTarget.Texture2D, _rockTextureId);
float baseX = 200;
float baseY = 200;
GL.TexCoord2(0, 0); GL.Vertex2(baseX, baseY);
GL.TexCoord2(1, 0); GL.Vertex2(baseX + _rockTextureWidth, baseY);
GL.TexCoord2(1, 1); GL.Vertex2(baseX + _rockTextureWidth, baseY - _rockTextureHeight);
GL.TexCoord2(0, 1); GL.Vertex2(baseX, baseY - _rockTextureHeight);
GL.End();
GL.Flush();
SwapBuffers();
The default texture environment mode is GL_MODULATE, which multiplies the texture color with the current vertex color.
An easy solution is to set the vertex color to (1, 1, 1, 1) before drawing a textured primitive:
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
Another solution is to change the texture environment mode to GL_REPLACE, which makes the texture color replace the vertex color and avoids the issue:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
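In OpenTK syntax, the two fixes above would look roughly like this (a sketch, assuming the OpenTK.Graphics.OpenGL immediate-mode bindings used in the question):

// Option 1: reset the current color before the sprite pass, so GL_MODULATE
// multiplies the texture by white and leaves it unchanged.
GL.Color4(1.0f, 1.0f, 1.0f, 1.0f);

// Option 2: make the texture replace the vertex color entirely.
GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (int)TextureEnvMode.Replace);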