Cocos2d-x 3.x rendering with and without spritesheet - opengl

I am rendering a flower on screen from textures in a spritesheet, and everything looks fine. When I use the same textures, but not from a spritesheet, the flower is rendered differently.
The first image (below) shows the flower rendered from a spritesheet; it looks correct. It is composed of two texture layers: petals and center.
The second image (below) shows the flower composed from two sprites, one holding the petals texture and the other holding the center texture, not from a spritesheet. As you can see, there is transparency around the center, which is caused (I presume) by the blending of the center texture onto the petals texture.
The petals and center textures were composed from the original images read from files using CCRenderTexture. The original images are PMA (premultiplied alpha), while the resulting texture from CCRenderTexture is NPMA (non-premultiplied alpha).
Changing blending modes to PMA or NPMA does not help.
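For reference, a minimal sketch of the kind of blend-mode change meant above, in cocos2d-x 3.x (petalsSprite and centerSprite are placeholder names for the two sprites):
// Premultiplied-alpha blending for both layers:
cocos2d::BlendFunc pma = { GL_ONE, GL_ONE_MINUS_SRC_ALPHA };
petalsSprite->setBlendFunc(pma);
centerSprite->setBlendFunc(pma);
// Non-premultiplied-alpha blending:
cocos2d::BlendFunc npma = { GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA };
petalsSprite->setBlendFunc(npma);
centerSprite->setBlendFunc(npma);
Neither variant changes the result.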
The sprite node hierarchy is simple:
ROOT-SPRITE (empty image, i.e. a 1x1 px, fully transparent image)
PETALS-SPRITE (petals texture), z=1
CENTER-SPRITE (center texture), z=2
I have the following questions:
What am I doing wrong?
How can I resolve this?
Using Cocos2d-x v3.4 in the iOS simulator (a device gives the same results).
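For context, the CCRenderTexture composition step mentioned above looks roughly like this (a sketch using the v3.x class name RenderTexture; petalsOriginal stands for a sprite created from the original file):
auto rt = cocos2d::RenderTexture::create(width, height, cocos2d::Texture2D::PixelFormat::RGBA8888);
rt->beginWithClear(0, 0, 0, 0);
petalsOriginal->visit();   // draw the original (PMA) image into the render texture
rt->end();
// rt->getSprite()->getTexture() is the NPMA texture later used for PETALS-SPRITE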

Related

C++ OpenGL: Draw polygon + image processing interior pixels

I am using OpenGL and C++ to do image processing. The idea is simple: I load an image, draw a polygon by clicking, and then apply an effect (desaturation, for instance) only to the pixels in the interior of the polygon shape just created.
Can anyone give me some direction on how to limit the effect to the pixels within the interior of the polygon? Loading the image and drawing the polygon is not a problem.
Suppose the following situation:
The picture to which you want to apply the effect fills the whole screen.
The picture is rendered using OpenGL, probably through a simple shader, with the picture passed as a texture.
You can take the following approach:
Consider the screen as one big texture.
Draw a polygon, which will be rendered on top of the rendered texture.
For the polygon's vertices, use UVs corresponding to their 2D screen coordinates (i.e. mapped from screen space to UV space, (0, 1)).
Draw the picture normally.
On top of the picture, draw your polygon using the same picture as its texture, but with a different shader.
So instead of trying to desaturate a specific region of your picture, create a polygon on top of that region with the same picture, and desaturate that new polygon.
This lets you avoid the stencil buffer.
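For illustration, a minimal sketch of the fragment shader that overlay polygon could use, assuming the picture is available through a sampler called uPicture and the UVs described above arrive in a varying called vUv (both names are made up):
// GLSL fragment shader source kept in a C++ string.
const char* kDesaturateFrag = R"(
    uniform sampler2D uPicture;   // the same picture used for the normal pass
    varying vec2 vUv;             // screen-space UVs assigned to the polygon vertices
    void main()
    {
        vec4 color = texture2D(uPicture, vUv);
        // Rec. 601 luma weights; any grey conversion works here.
        float grey = dot(color.rgb, vec3(0.299, 0.587, 0.114));
        gl_FragColor = vec4(vec3(grey), color.a);
    }
)";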
Another approach would be to create the polygon but draw it only into the stencil buffer before the picture is drawn, then use the stencil test to restrict the desaturation pass to that region.
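A rough sketch of that stencil variant with classic OpenGL calls (drawPolygon, drawPicture and drawPictureDesaturated are placeholders for your own drawing code):
// 1. Mark the polygon's interior pixels in the stencil buffer only.
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // no colour writes yet
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawPolygon();
// 2. Draw the picture normally where the stencil is still 0 ...
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawPicture();
// 3. ... and with the desaturation shader where the stencil is 1.
glStencilFunc(GL_EQUAL, 1, 0xFF);
drawPictureDesaturated();
glDisable(GL_STENCIL_TEST);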

Select source rectangle from video texturing

I am doing video texturing onto a rectangle surface I created. I need to create two more rectangles of, say, different sizes, then copy a part of the video texture playing on the first surface (for example, the middle part of the video) and play it on the newly created surfaces. Is this possible using OpenGL ES? Through my native video surface renderer I can do this and map it into the OpenGL ES application. I was just wondering whether it is possible to do it directly from the OpenGL app itself, by copying a selected rectangle from one of the video-textured surfaces.
If your texture is full-motion video, you should not copy the texture data, because that will be too slow to keep up with video frame rates. You should avoid using glTexImage2D() and instead use the EGL Image Extensions as detailed in my third article here:
http://montgomery1.com/opengl/
But either way, once you have the image in a texture and the texture is bound with glBindTexture(), then any number of rectangles you draw will be textured with that same currently-bound texture, without more copying. These rectangles are actually geometry constructed of triangles and not "surfaces". The framebuffer is the surface. The texture coordinates can be different for each rectangle, which allows you to crop and/or scale the texture mapping uniquely for each.
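As an illustration, a minimal GLES-style sketch of two rectangles sharing one bound video texture, where the second one shows only the middle of the frame purely through its texture coordinates (videoTexture, uvAttrib and the position setup are assumed to exist elsewhere):
// Quad 1: full frame (UVs cover 0..1).
GLfloat fullUVs[] = { 0.0f, 0.0f,  1.0f, 0.0f,  0.0f, 1.0f,  1.0f, 1.0f };
// Quad 2: only the middle part of the frame (UVs cover 0.25..0.75).
GLfloat cropUVs[] = { 0.25f, 0.25f,  0.75f, 0.25f,  0.25f, 0.75f,  0.75f, 0.75f };

glBindTexture(GL_TEXTURE_2D, videoTexture);            // one texture, bound once

glVertexAttribPointer(uvAttrib, 2, GL_FLOAT, GL_FALSE, 0, fullUVs);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                 // first rectangle: whole video

// Point the position attribute at the second rectangle's vertices here, then
// draw again with the cropped UVs; no texture data is copied for either draw.
glVertexAttribPointer(uvAttrib, 2, GL_FLOAT, GL_FALSE, 0, cropUVs);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);                 // second rectangle: middle of the video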

How to use a complex OpenGL image as background in QGraphicsScene?

I'm trying to create a display with a complex OpenGL image and some spinboxes on top of the image. Using http://doc.qt.digia.com/qq/qq26-openglcanvas.html I'm able to build a two-layer object (inheriting from QGraphicsScene) with a simple OpenGL image as the background and the controls in the foreground.
So, now I'm trying to display my real OpenGL image as the background. This image is created from:
A quad mapped on a structure,
Some small 2D objects represented by 2D textures with an alpha channel and specific shaders, drawn on top of the quad (higher z value),
Some polylines.
With this image I get some strange behavior: the 2D textured objects are drawn with a white background. Some experiments seem to indicate that, when this complex OpenGL image is drawn, the alpha channel is disabled.
I tried different configurations for the QGLWidget used as the viewport of the QGraphicsView, but without success.
So I need help to be able to create this OpenGL image with the right transparency effects.
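As an illustration of the kind of configuration meant above, a minimal Qt 4-style sketch (view is the QGraphicsView; the blending calls would go into the scene's drawBackground() before the textured 2D objects are drawn):
// Request an alpha channel for the GL viewport.
QGLFormat fmt(QGL::SampleBuffers | QGL::AlphaChannel);
view->setViewport(new QGLWidget(fmt));
view->setViewportUpdateMode(QGraphicsView::FullViewportUpdate);

// In drawBackground(): enable blending so the textures' alpha channel is honoured.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Whether this is enough depends on how the textured objects and shaders set up their own GL state.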

Blend a clipped image together with a background into destination in one step

Please have a look at this image.
I'd like to show a clipped detail of the texture; the clipping rect can be animated, so I cannot crop the image up front. The position of the image is animated too.
I'd like to show it in front of a background. The background is a color or a texture itself.
I'd like to blend both the image and the background combined with opacity < 1.0 to the destination.
The real requirement here is to render it in one step, avoiding a temporary buffer. Obviously a (simple) shader is needed for that.
What I already tried to achieve this:
Rendering the background first and then the image, each with opacity < 1. The problem here: it lets the background shine through the image, and the background is not allowed to be visible where the image itself is opaque.
It works when rendering both into a temporary buffer with opacity = 1 and then rendering this buffer to the destination with opacity < 1, but this needs too many resources.
I can combine the two textures (background, image) in a shader and transform the texture coordinates, each with a different transformation matrix. The problem here is that I'm not able to clip the image. The rendered geometry is a simple rectangle consisting of two triangles.
Can anybody hint me in the right direction?
You're basically trying to render this:
(Image blended with background) blended with destination
The part in parentheses you can do with a shader; the blending with the destination you have to do with glBlendFunc, since the destination isn't available in the pixel shader.
It sounds like you know how to clip the image in the shader and rotate it by animating texture coordinates.
Let's call your image with the children on it ImageA, and the grey square ImageB.
You want your shader to produce this at each pixel:
outputColor.rgb = ImageA.rgb * ImageA.a + ImageB.rgb * (1.0 - ImageA.a);
This blends your two images exactly as you want. Now set the alpha output from your pixel shader to your desired alpha (< 1.0):
outputColor.a = <some alpha value>
Then, when you render your quad with your shader, set the blend function as follows.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
<draw quad>
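Putting the pieces together, a sketch of what such a fragment shader could look like (the sampler, varying and uniform names are made up, and the clip test stands in for whatever clipping you already do with the animated rect):
// GLSL fragment shader source kept in a C++ string. uImage is ImageA (the clipped,
// animated picture), uBackground is ImageB, uOpacity is the overall opacity < 1.
const char* kBlendFrag = R"(
    uniform sampler2D uImage;
    uniform sampler2D uBackground;
    uniform float     uOpacity;
    varying vec2 vUvImage;        // animated / transformed coordinates for the image
    varying vec2 vUvBackground;   // coordinates for the background
    void main()
    {
        vec4 a = texture2D(uImage, vUvImage);
        // Clip: outside the animated rect the image contributes nothing.
        if (vUvImage.x < 0.0 || vUvImage.x > 1.0 ||
            vUvImage.y < 0.0 || vUvImage.y > 1.0)
            a = vec4(0.0);
        vec4 b = texture2D(uBackground, vUvBackground);
        gl_FragColor.rgb = a.rgb * a.a + b.rgb * (1.0 - a.a);
        gl_FragColor.a   = uOpacity;   // blended with the destination via glBlendFunc
    }
)";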

Video as voxels in OpenGL

Any good references on displaying a sequence of images from a video as voxel data in OpenGL? I want to display all these images at once as a cuboid with 50% alpha and navigate using the keyboard or mouse.
Check out this tutorial on setting up a 3D texture.
If you then render slices through the 3D texture with the appropriate UVW coordinates, you will get what you are after.
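A minimal sketch of that idea in desktop OpenGL (frames is assumed to be the video frames packed as one contiguous RGBA volume of width x height x frameCount voxels):
// Upload all video frames as a single 3D texture.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, width, height, frameCount,
             0, GL_RGBA, GL_UNSIGNED_BYTE, frames);

// Draw slices through the cuboid, back to front, each at 50% alpha;
// the W texture coordinate selects the position along the time axis.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_3D);
glColor4f(1.0f, 1.0f, 1.0f, 0.5f);
for (int i = 0; i < frameCount; ++i) {
    float w = (i + 0.5f) / frameCount;   // depth into the volume
    float z = -1.0f + 2.0f * w;          // slice position inside the cuboid
    glBegin(GL_QUADS);
    glTexCoord3f(0.0f, 0.0f, w); glVertex3f(-1.0f, -1.0f, z);
    glTexCoord3f(1.0f, 0.0f, w); glVertex3f( 1.0f, -1.0f, z);
    glTexCoord3f(1.0f, 1.0f, w); glVertex3f( 1.0f,  1.0f, z);
    glTexCoord3f(0.0f, 1.0f, w); glVertex3f(-1.0f,  1.0f, z);
    glEnd();
}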