I'm trying to achieve a simple effect: drawing an image on the screen with a transparent background. I'm using SpriteBatches to do this.
Here is my code for creating the blend state:
D3D11_BLEND_DESC descBlend;
ZeroMemory(&descBlend, sizeof(descBlend));
descBlend.RenderTarget[0].BlendEnable = true;
descBlend.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
descBlend.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
descBlend.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
descBlend.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
descBlend.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
descBlend.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
m_d3dDevice->CreateBlendState(&descBlend, &m_Blend);
I begin drawing my sprites with:
m_SpriteBatch->Begin(SpriteSortMode_BackToFront, m_Blend);
Nothing shows up on the screen! Am I missing something?
I'm sure my image is correct, because when I draw with no blending enabled everything shows up, except the transparent parts become pure white.
Any help would be appreciated.
I was once trying to achieve something similar and wrote down what I did over at gamedev.
You have to identify which components of each pixel of a render target are writable during blending.
descBlend.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
With ZeroMemory(&descBlend, sizeof(descBlend)); you clear all fields of the D3D11_BLEND_DESC, including the RenderTargetWriteMask. This mask determines which channels are written; if it is zero, nothing is written at all. Set it to D3D11_COLOR_WRITE_ENABLE_ALL.
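Putting it together, the creation code from the question needs just that one extra line:
D3D11_BLEND_DESC descBlend;
ZeroMemory(&descBlend, sizeof(descBlend));
descBlend.RenderTarget[0].BlendEnable = true;
descBlend.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
descBlend.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
descBlend.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
descBlend.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
descBlend.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
descBlend.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
// The zeroed write mask was blocking all channels; enable them:
descBlend.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
m_d3dDevice->CreateBlendState(&descBlend, &m_Blend);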
I am using Django and easy-thumbnails 2.3. My intention is to take an image, scale it down so that it fits a square, and fill the empty area with white in case the original is not square. In case of transparent images, the transparency shall also be changed to white.
My settings.py contains the following lines:
THUMBNAIL_PROCESSORS = (
    'easy_thumbnails.processors.colorspace',
    'easy_thumbnails.processors.autocrop',
    'easy_thumbnails.processors.scale_and_crop',
    'easy_thumbnails.processors.filters',
    'easy_thumbnails.processors.background',
)
THUMBNAIL_ALIASES = {
    '': {
        'square_image': {'background': '#fff', 'replace_alpha': '#fff', 'size': (200, 200)},
    },
}
THUMBNAIL_TRANSPARENCY_EXTENSION = 'jpg'
I've tried some debugging, and everything seems to work quite well and make sense until the code reaches line 318 in the background processor function of easy-thumbnails' processors.py:
im = colorspace(im, replace_alpha=background, **kwargs)
At this point the debugger drops straight back to the method that called background(im, size, background=None, **kwargs).
Is there anything wrong with my configuration of square_image in THUMBNAIL_ALIASES? Could it be anything else?
It turns out that you can't use 'background':'#fff' (from the background processor) and 'replace_alpha':'#fff' (from the colorspace processor) at the same time, because the background key is turned into replace_alpha in
im = colorspace(im, replace_alpha=background, **kwargs)
so replace_alpha ends up being passed twice, once explicitly and once via **kwargs, which raises a TypeError. But it also turns out that in
THUMBNAIL_ALIASES = {
    '': {
        'square_image': {'background': '#fff', 'replace_alpha': '#fff', 'size': (200, 200)},  # wrong
    },
}
you don't even need replace_alpha. The background processor does not add bars at the sides of a non-fitting image; instead, the image is drawn onto a background (white, in my case). The colorspace conversion does not seem to happen before that. So the proper definition is
THUMBNAIL_ALIASES = {
    '': {
        'square_image': {'background': '#fff', 'size': (200, 200)},
    },
}
In Cocos2d-x, the CCNode class provides "skewX" and "skewY" to let me distort a sprite, but I can't find a similar mapping in SpriteKit's SKNode.
My game ports skeleton animations from Flash, where the positioning, scaling, rotation, and shearing of each sprite are decomposed into values the game engine can digest. Except for shearing, all of these have solutions in SpriteKit.
You can achieve a skew effect using a CGAffineTransform, a Core Image filter, and an SKEffectNode:
let transform = CGAffineTransform(a: 1, b: 0.1, c: -0.3, d: 1, tx: 0, ty: 0)
let transformValue = NSValue(cgAffineTransform: transform)
let transformFilter = CIFilter(name: "CIAffineTransform")!
transformFilter.setValue(transformValue, forKey: "inputTransform")
let effectNode = SKEffectNode()
effectNode.addChild(sprite) // <-- Add sprite to effect node
effectNode.filter = transformFilter
effectNode.shouldRasterize = true
effectNode.shouldEnableEffects = true
No, you can't.
SKNode doesn't have skew* properties, and SKShader only lets you supply a fragment shader. That means you can change the color of each pixel however you want, but you can't change the shape of a sprite.
So I recommend using Cocos2d-Swift, Cocos2d-x, or a similar engine instead of SpriteKit.
Another option is UIView: you can apply the matrix from Adobe Flash to a UIView via its CALayer transform.
I have some rendering code written in OpenGL. I use the stencil buffer to implement clipping:
// Let's assume this is done in the render loop.
if(!already_created())
{
    create_stencil_attachment_and_bind_to_FB_as_depth_stencil_attachment();
}
glEnable(GL_STENCIL_TEST);
glColorMask(0,0,0,0); // disable color writes: this pass only fills the stencil
glDepthMask(0); // disable depth writes as well
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS,1,1); // always pass, reference value 1
glStencilOp(GL_REPLACE,GL_REPLACE,GL_REPLACE); // write 1 wherever the clip shape renders
render_to_stencil();
glColorMask(1,1,1,1); // re-enable color and depth writes
glDepthMask(1);
glStencilFunc(GL_EQUAL,1,1); // pass only where stencil == 1
glStencilOp(GL_KEEP,GL_KEEP,GL_KEEP); // leave the stencil untouched
render_with_clipping();
glDisable(GL_STENCIL_TEST);
Now the problem is that I need to port this code to DX11. I've seen examples on MSDN and some nice tutorials, and I end up with this logic (a code sketch of the state creation follows the list):
1. Create ID3D11Texture2D with format = DXGI_FORMAT_D32_FLOAT_S8X24_UINT.
2. Create ID3D11DepthStencilState for rendering to stencil: //Let's call it DS_RENDER
- For both front and back faces:
- op = D3D11_STENCIL_OP_REPLACE for all 3 cases
- func = D3D11_COMPARISON_ALWAYS
- DepthEnable = FALSE
- StencilEnable = TRUE
- StencilReadMask = 0xFF
- StencilWriteMask = 0xFF
3. Create an ID3D11DepthStencilView for the texture created in step 1. //Let's call it DSV
4. Create ID3D11DepthStencilState for using stencil as 'input': //Let's call it DS_CLIP
- For both front and back faces:
- op = D3D11_STENCIL_OP_KEEP for all 3 cases
- func = D3D11_COMPARISON_EQUAL
- DepthEnable = FALSE
- StencilEnable = TRUE
- StencilReadMask = 0xFF
- StencilWriteMask = 0xFF
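In code, I create these states roughly like this (a sketch; dsDesc is my own name, and DS_CLIP is the same with REPLACE/ALWAYS swapped for KEEP/EQUAL):
D3D11_DEPTH_STENCIL_DESC dsDesc;
ZeroMemory(&dsDesc, sizeof(dsDesc));
dsDesc.DepthEnable = FALSE;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
dsDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
dsDesc.StencilEnable = TRUE;
dsDesc.StencilReadMask = 0xFF;
dsDesc.StencilWriteMask = 0xFF;
// Always pass the stencil test and replace the stored value with the reference value.
dsDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_REPLACE;
dsDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_REPLACE;
dsDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_REPLACE;
dsDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
dsDesc.BackFace = dsDesc.FrontFace;
pDevice->CreateDepthStencilState(&dsDesc, &DS_RENDER);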
Now, I'm not sure how I set the stencil as a target or as an input.
MSDN says:
pd3dDeviceContext->OMSetDepthStencilState(pDSState, 1);
and
pd3dDeviceContext->OMSetRenderTargets(1, &pRTV, pDSV);
If I understand these calls correctly, the first one sets the depth-stencil state (the 1 being the stencil reference value), while the second one binds pDSV as an additional 'attachment' to the render target. Is that correct?
If so, will this work as I expect?
pd3dDeviceContext->OMSetDepthStencilState(DS_RENDER, 1);
pd3dDeviceContext->OMSetRenderTargets(1, &pRTV, DSV);
render_geometry_to_stencil_buffer();
pd3dDeviceContext->OMSetDepthStencilState(DS_CLIP, 1);
render_geometry_with_clipping();
pd3dDeviceContext->OMSetRenderTargets(1, &pRTV, NULL); // Does this disable stencil testing?
Thanks in advance for any help or useful hints.
If you want to render only to the stencil buffer, use (with your write state bound):
pd3dDeviceContext->OMSetRenderTargets(0, NULL, DSV);
You don't need to render to color buffer, so no need to bind it.
Then to render to your target and enabling stencil test, use:
pd3dDeviceContext->OMSetRenderTargets(1, &pRTV, DSV);
When you use the stencil as input, a very simple approach is to also set StencilWriteMask = 0, so the pass will never write to the stencil buffer (which is what you want when rendering clipped geometry).
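A minimal sketch of that read-only clipping state, reusing the DS_CLIP setup from the question (clipDesc is just a local name):
D3D11_DEPTH_STENCIL_DESC clipDesc;
ZeroMemory(&clipDesc, sizeof(clipDesc));
clipDesc.DepthEnable = FALSE;
clipDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
clipDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
clipDesc.StencilEnable = TRUE;
clipDesc.StencilReadMask = 0xFF;
clipDesc.StencilWriteMask = 0; // read-only: this pass can never modify the stencil
clipDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
clipDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
clipDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
clipDesc.FrontFace.StencilFunc = D3D11_COMPARISON_EQUAL;
clipDesc.BackFace = clipDesc.FrontFace;
pDevice->CreateDepthStencilState(&clipDesc, &DS_CLIP);
pd3dDeviceContext->OMSetDepthStencilState(DS_CLIP, 1); // 1 is the reference value compared with EQUAL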
If you use:
pd3dDeviceContext->OMSetRenderTargets(1, &pRTV, NULL);
You will indeed disable any form of depth/stencil testing (no depth-stencil buffer is bound anymore, so the DepthStencilState will have no effect at all).
Also, I would use DXGI_FORMAT_D24_UNORM_S8_UINT for your depth format (personal preference, though); it perfectly fits your use case and consumes less memory.
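For example, the texture and view could be created like this (a sketch; width, height, and depthStencilTex are placeholder names):
D3D11_TEXTURE2D_DESC texDesc;
ZeroMemory(&texDesc, sizeof(texDesc));
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; // 32 bits per texel instead of 64 for D32_FLOAT_S8X24_UINT
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
ID3D11Texture2D *depthStencilTex = NULL;
pDevice->CreateTexture2D(&texDesc, NULL, &depthStencilTex);
pDevice->CreateDepthStencilView(depthStencilTex, NULL, &DSV);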
Hope that helps.
I'm having trouble understanding why sprites with shadows (a layer at partial opacity) look different in Photoshop and on screen. Here is the comparison:
This is simply because of the image format you set. I guess you set RGBA4444 in code or while exporting the sprite sheet. Also uncheck "Premultiply alpha" in TexturePacker.
Also check in your AppDelegate class:
CCGLView *glView = [CCGLView viewWithFrame:[window_ bounds]
                               pixelFormat:kEAGLColorFormatRGBA8 // Guru - replaced kEAGLColorFormatRGB565 with kEAGLColorFormatRGBA8
                               depthFormat:0 // GL_DEPTH_COMPONENT24_OES
                        preserveBackbuffer:NO
                                sharegroup:nil
                             multiSampling:NO
                           numberOfSamples:0];
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];
I am currently changing textures with cocos2d using this:
CCTexture2D *tex = [[CCTextureCache sharedTextureCache] addImage:@"someImage.png"];
[someSprite setTexture:tex];
But the issue is that the call I pass it to,
[someSprite setTexture:tex withRect:someRect];
needs a rect. How do I get the size of the image, i.e. the rect to set on the CCSprite along with the texture?
Thanks
Try like this:
urSprite.contentSize
As said in http://www.cocos2d-iphone.org/api-ref/0.99.0/interface_c_c_texture2_d.html, use tex.contentSize as a CGSize, or make a CGRect with CGRectMake(0, 0, tex.contentSize.width, tex.contentSize.height);
I think you were looking for tex.pixelsWide and tex.pixelsHigh. These are the dimensions of your texture in pixels.
Example:
[CCSprite spriteWithTexture:tex rect:CGRectMake(0, 0, tex.pixelsWide, tex.pixelsHigh)];