Add embedded texture to Assimp material - C++

I want to add a texture to my material and export it.
I added the texture to scene->mTextures; my question is how can I add a reference to this texture to my material?

You need to add the texture with its type to the material definition. For instance, if you want to add a diffuse texture, you need to add the texture name, the UV source channel it maps to, and the requested clamp mode:
aiMaterial *mat = new aiMaterial;
// Path (or embedded-texture reference) of the diffuse texture
aiString texPath( diffuseTexture );
mat->AddProperty( &texPath, AI_MATKEY_TEXTURE_DIFFUSE(0) );
// UV channel the diffuse texture reads from
int uvwIndex = 0;
mat->AddProperty<int>( &uvwIndex, 1, AI_MATKEY_UVWSRC_DIFFUSE(0) );
// Clamp mode in the U direction
int clampMode = aiTextureMapMode_Clamp;
mat->AddProperty<int>( &clampMode, 1, AI_MATKEY_MAPPINGMODE_U( aiTextureType_DIFFUSE, 0 ) );
The UV mapping and clamp mode have default values, so specifying them is only needed for special configurations. See the ObjImporter for a reference.
For exporting, just look at the examples or the documentation.
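Since the texture in the question is stored in scene->mTextures, the material has to reference it using Assimp's embedded-texture convention: the texture "path" is an asterisk followed by the index into mTextures. A minimal sketch, assuming the embedded texture sits at index 0 and that the glTF 2.0 binary exporter is used (the format id and output path are just examples):
// Reference the embedded texture scene->mTextures[0] from the material
aiString embeddedRef( "*0" );
mat->AddProperty( &embeddedRef, AI_MATKEY_TEXTURE_DIFFUSE(0) );

// Export the scene together with the embedded texture
Assimp::Exporter exporter;
exporter.Export( scene, "glb2", "model.glb" );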

Related

Unit test on fixture in SVG generated with Batik

I'm trying to unit test SVG generated with Batik.
My development machine is Windows and my CI environment is Linux.
I use a fixture to compare the SVG result with my input, i.e. the generated SVG is compared with a file in my test/resources folder.
I want my environments to be identical, so what I see in my development environment is what I get.
Initially I used the standard monospaced font.
To avoid the problem I tried the following: the Batik documentation seems to suggest it's possible to include the font in the generated SVG to make it "self contained". I found out there are two options:
include the font definition
render the text itself as strokes
That does not work either. The font definition is rendered before it is included in the SVG, and there is a difference: Java uses the system (Windows/Linux) font rendering to determine the font definition, and the two platforms work differently. There are some articles on that topic.
But why render at all if you use monospaced fonts? The result should be the same, right? In the case of SVG, I really would like to delegate rendering to the platform (browser or other tool) interpreting the SVG.
My code:
DOMImplementation domImpl = GenericDOMImplementation.getDOMImplementation();
// Create an instance of org.w3c.dom.Document.
Document document = domImpl.createDocument( "http://www.w3.org/2000/svg", "svg", null );
// embed fonts in svg document
SVGGeneratorContext ctx = SVGGeneratorContext.createDefault( document );
ctx.setEmbeddedFontsOn( true );
GraphicsEnvironment ge;
try {
    ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
    ge.registerFont( Font.createFont( Font.TRUETYPE_FONT, this.getClass().getResourceAsStream( "/font/VeraMono.ttf" ) ) );
}
catch ( FontFormatException | IOException e ) {
    LOG.error( "Cannot load font!", e );
}
// Create an instance of the SVG Generator: true converts text to paths.
SVGGraphics2D g2 = new SVGGraphics2D( ctx, true );
g2.setFont( new Font( "VeraMono", Font.PLAIN, 20 ) );
g2.drawString( "example text", 5, 30 );
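To actually compare against the fixture, the generated document still needs to be written out. A minimal sketch continuing from the g2 above (the output path is just an example):
// Serialize the SVG built by the generator; useCSS = true emits styling as CSS properties
try ( Writer out = new OutputStreamWriter( new FileOutputStream( "target/example.svg" ), StandardCharsets.UTF_8 ) ) {
    g2.stream( out, true );
}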

How could I skew/shear a sprite via SpriteKit like Cocos2D?

In Cocos2d-x, the CCNode class provides skewX and skewY to let me distort a sprite; however, I cannot find a similar property on SKNode in SpriteKit.
My game ports skeleton animations from Flash, in which the positioning, scaling, rotation and shearing of sprites are decomposed into values the game engine can consume. Except for shearing, all of these have solutions in SpriteKit.
You can achieve a skew effect using a CGAffineTransform, a Core Image filter and an SKEffectNode:
let transform = CGAffineTransform(a: 1, b: 0.1, c: -0.3, d: 1, tx: 0, ty: 0)
let transformValue = NSValue(cgAffineTransform: transform)
let transformFilter = CIFilter(name: "CIAffineTransform")!
transformFilter.setValue(transformValue, forKey: "inputTransform")
let effectNode = SKEffectNode()
effectNode.addChild(sprite) // <-- Add sprite to effect node
effectNode.filter = transformFilter
effectNode.shouldRasterize = true
effectNode.shouldEnableEffects = true
No, you can't.
SKNode doesn't have skew* properties, and SKShader only allows a fragment shader. That means you can change the color of each pixel however you want, but you can't change the shape of a sprite.
So I recommend using Cocos2d-Swift, Cocos2d-x, or similar instead of SpriteKit.
Another option is UIView. You can apply the matrix from Adobe Flash to a UIView via its CALayer transform.
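A rough sketch of that approach, assuming the sprite is shown in a UIImageView and using placeholder shear values in place of the matrix exported from Flash:
import UIKit

// Hypothetical example: apply a horizontal shear to the view's backing layer
let spriteView = UIImageView(image: UIImage(named: "sprite"))
let shear = CGAffineTransform(a: 1, b: 0, c: 0.3, d: 1, tx: 0, ty: 0) // c ~ tan(skewX)
spriteView.layer.setAffineTransform(shear)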

Porting OpenGL stencil functionality to DirectX 11

I have some rendering code written in OpenGL. I use the stencil buffer to implement clipping:
//Let's assume this is done in the render loop.
if(!already_created())
{
    create_stencil_attachment_and_bind_to_FB_as_depth_stencil_attachment();
}

glEnable(GL_STENCIL_TEST);

// Pass 1: write the clip shape into the stencil buffer (no color/depth writes)
glColorMask(0,0,0,0);
glDepthMask(0);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glStencilFunc(GL_ALWAYS,1,1);
glStencilOp(GL_REPLACE,GL_REPLACE,GL_REPLACE);
render_to_stencil();

// Pass 2: render only where the stencil value equals 1
glColorMask(1,1,1,1);
glDepthMask(1);
glStencilFunc(GL_EQUAL,1,1);
glStencilOp(GL_KEEP,GL_KEEP,GL_KEEP);
render_with_clipping();

glDisable(GL_STENCIL_TEST);
Now, the problem is, I need to port this code to DX11. I saw examples on MSDN and some nice tutorials. I ended up with this logic:
1. Create ID3D11Texture2D with format = DXGI_FORMAT_D32_FLOAT_S8X24_UINT.
2. Create ID3D11DepthStencilState for rendering to stencil: //Let's call it DS_RENDER
- For both front and back faces:
- op = D3D11_STENCIL_OP_REPLACE for all 3 cases
- func = D3D11_COMPARISON_ALWAYS
- DepthEnable = FALSE
- StencilEnable = TRUE
- StencilReadMask = 0xFF
- StencilWriteMask = 0xFF
3. Create ID3D11DepthStencilView for the texture created before. //Let's call it DSV
4. Create ID3D11DepthStencilState for using stencil as 'input': //Let's call it DS_CLIP
- For both front and back faces:
- op = D3D11_STENCIL_OP_KEEP for all 3 cases
- func = D3D11_COMPARISON_EQUAL
- DepthEnable = FALSE
- StencilEnable = TRUE
- StencilReadMask = 0xFF
- StencilWriteMask = 0xFF
Now, I'm not sure how to set the stencil as a target or as an input.
MSDN says:
`pDevice->OMSetDepthStencilState(pDSState, 1);`
and
`pd3dDeviceContext->OMSetRenderTargets(1, &pRTV, pDSV);`
If I understand these calls correctly, the first one sets the stencil state, while the second one binds pDSV as an additional 'attachment' to the render target. Is that correct?
If so, will this work as I expect?
pDevice->OMSetDepthStencilState(DS_RENDER, 1);
pd3dDeviceContext->OMSetRenderTargets(1, &pRTV, DSV);
render_geometry_to_stencil_buffer();
pDevice->OMSetDepthStencilState(DS_CLIP, 1);
render_geometry_with_clipping();
pd3dDeviceContext->OMSetRenderTargets(1, &pRTV, NULL); //Does this disable stencil testing?
Thanks in advance for every help or useful hint.
If you want to render only to stencil, use (setting your state for writing):
pd3dDeviceContext->OMSetRenderTargets(0, NULL, DSV);
You don't need to render to the color buffer, so there is no need to bind it.
Then to render to your target and enabling stencil test, use:
pd3dDeviceContext->OMSetRenderTargets(1, &pRTV, DSV);
When you use the stencil as input, a very simple approach is also to set StencilWriteMask = 0, so the stencil buffer is never written to (which is what you want when rendering the clipped geometry).
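As a sketch of the two states described above (assuming device is a valid ID3D11Device*; HRESULT checks omitted):
// DS_RENDER: always pass, replace the stencil with the reference value
D3D11_DEPTH_STENCIL_DESC desc = {};
desc.DepthEnable      = FALSE;
desc.DepthWriteMask   = D3D11_DEPTH_WRITE_MASK_ZERO;
desc.DepthFunc        = D3D11_COMPARISON_ALWAYS;
desc.StencilEnable    = TRUE;
desc.StencilReadMask  = 0xFF;
desc.StencilWriteMask = 0xFF;
desc.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_REPLACE;
desc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_REPLACE;
desc.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_REPLACE;
desc.FrontFace.StencilFunc        = D3D11_COMPARISON_ALWAYS;
desc.BackFace = desc.FrontFace;
ID3D11DepthStencilState *DS_RENDER = nullptr;
device->CreateDepthStencilState(&desc, &DS_RENDER);

// DS_CLIP: keep the stencil, pass only where it equals the reference, never write
desc.StencilWriteMask = 0;
desc.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilFunc        = D3D11_COMPARISON_EQUAL;
desc.BackFace = desc.FrontFace;
ID3D11DepthStencilState *DS_CLIP = nullptr;
device->CreateDepthStencilState(&desc, &DS_CLIP);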
If you use:
pd3dDeviceContext->OMSetRenderTargets(1, &pRTV, NULL);
You will indeed disable any form of depth/stencil test (no depth/stencil buffer is bound anymore, so the DepthStencilState will have no effect at all).
Also, I would use DXGI_FORMAT_D24_UNORM_S8_UINT for your depth format (personal preference, though); it perfectly fits your use case and consumes less memory.
Hope that helps.

Issue with shadows/Opacity on sprites in cocos2d (.png format)

I'm having trouble understanding why sprites with shadows (% opacity layer) look different in Photoshop and on screen. Here is the comparison:
This is simply because of the image format you set. I guess you set RGBA4444 in code or while exporting the sprite sheet. Also, remove the 'Premultiply alpha' checkmark in TexturePacker.
Also check in your AppDelegate class:
CCGLView *glView = [CCGLView viewWithFrame:[window_ bounds]
                               pixelFormat:kEAGLColorFormatRGBA8 //Guru - replaced kEAGLColorFormatRGB565 with kEAGLColorFormatRGBA8
                               depthFormat:0 //GL_DEPTH_COMPONENT24_OES
                        preserveBackbuffer:NO
                                sharegroup:nil
                             multiSampling:NO
                           numberOfSamples:0];

[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];

Flex Mobile List: How to get rid of grey overlay on renderer tap?

I tried setting all possible styles to something other than grey, just to try to get rid of the grey overlay shown on "Hello item 1" in the attached image of a list. Nothing worked. I examined the ListSkin class too and didn't find anything that would draw these. How do I get rid of these overlays?
<s:List id="list" width="100%" height="100%"
dataProvider="{dp}"
focusAlpha="0"
contentBackgroundAlpha="0"
contentBackgroundColor="0xFFFFFF"
selectionColor="0xFFFFFF"
downColor="0xFFFFFF"
borderVisible="false"
>
</s:List>
I just helped a client with this same thing. You basically have to extend the LabelItemRenderer class so that it doesn't draw the rectangle. It is not exposed via styles or colors for you to change.
Look at this code (starting at line 853 in LabelItemRenderer):
// Selected and down states have a gradient overlay as well
// as different separators colors/alphas
if (selected || down)
{
    var colors:Array = [0x000000, 0x000000];
    var alphas:Array = [.2, .1];
    var ratios:Array = [0, 255];
    var matrix:Matrix = new Matrix();

    // gradient overlay
    matrix.createGradientBox(unscaledWidth, unscaledHeight, Math.PI / 2, 0, 0);
    graphics.beginGradientFill(GradientType.LINEAR, colors, alphas, ratios, matrix);
    graphics.drawRect(0, 0, unscaledWidth, unscaledHeight);
    graphics.endFill();
}
You basically need some way to force this code not to run. You can do this by creating your own itemRenderer from scratch, or you can extend LabelItemRenderer, override the drawBackground() method, and copy all of the parent's drawBackground() code into your extended child, minus the block above.
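A rough sketch of that approach, assuming a minimal renderer that only draws a flat background (the class name and background color are placeholders, and the rest of the parent's drawBackground() logic is omitted for brevity):
package renderers
{
    import spark.components.LabelItemRenderer;

    public class NoOverlayItemRenderer extends LabelItemRenderer
    {
        // Draw a plain background and skip the selected/down gradient overlay
        override protected function drawBackground(unscaledWidth:Number, unscaledHeight:Number):void
        {
            graphics.beginFill(0xFFFFFF); // placeholder background color
            graphics.drawRect(0, 0, unscaledWidth, unscaledHeight);
            graphics.endFill();
        }
    }
}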
I'd love to see the color exposed as a style, or a magic property we could use to make the overlay vanish altogether. Feel free to log this as a bug in the Apache Flex JIRA.