I've created two cubes as a single object in Blender, applied a different image texture to each cube, and exported the result as an .OBJ file. I converted the .OBJ to a .USDZ file using Xcode and supplied the .png image texture as the material for my object via the color_map flag. The material is applied to both cubes; however, one cube renders as opaque and the other as transparent.
I'm not sure why I am having this issue. Can anyone help me out?
OBJ File = PBR_Cube.obj
Image Texture file = Combined.png
Code:
xcrun usdz_converter PBR_Cube.obj PBR_Cube.usdz -v -a -l \
-color_map Combined.png
I expect both cubes to be opaque.
I know this is an old post, but I have the solution to this issue and have not found the answer anywhere online.

The USDZ format (at least on mobile) interprets the base color RGBA map as premultiplied alpha. If the model does not use any transparency, it is safe and easy to simply use a 3-channel RGB output (a JPEG works as a simple solution, or a PNG with a premultiplied alpha of 1).

The USDZ documentation recommends using separate material sets for transparent areas, but it is possible to use an opacity mask; this should be formatted as a PNG-24 with premultiplied alpha containing the opacity mask. In the premultiplied format, the color channels are stored already scaled by the alpha value, so the raw hex values differ from a straight-alpha RGBA image even though the colors appear the same when displayed. This explains why your colors are correct but the transparency is wrong.

I used Substance Designer to combine my base color and opacity mask into a premultiplied PNG-24, and this worked perfectly. The process seems finicky in Photoshop and other image-manipulation software, but it is straightforward in Substance (I can give more details if anyone is interested). It might be possible to use an RGB JPEG color map for the base color and a single-channel greyscale mask for opacity, but I'm not entirely sure the USDZ converter would combine these properly, so it is safer to build your own 4-channel PNG-24.
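If it helps, the premultiplication itself is trivial to do on a raw pixel buffer. Here is a minimal C sketch (assuming interleaved 8-bit straight-alpha RGBA data; the function name is mine):

#include <stdint.h>
#include <stddef.h>

/* Convert straight (unassociated) alpha to premultiplied alpha in place.
   pixels: interleaved RGBA8 data; count: number of pixels. */
void premultiply_rgba8(uint8_t *pixels, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        uint8_t *p = pixels + i * 4;
        unsigned a = p[3];
        /* Scale each color channel by alpha/255; +127 rounds to nearest. */
        p[0] = (uint8_t)((p[0] * a + 127) / 255);
        p[1] = (uint8_t)((p[1] * a + 127) / 255);
        p[2] = (uint8_t)((p[2] * a + 127) / 255);
    }
}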
I am trying to capture a PNG image with a transparent background. I have set GL_RGBA as the format in glReadPixels, but the output PNG looks a little blackish or over-saturated. If the background is not transparent, that is, if I use the GL_RGB format in glReadPixels, the expected image is captured.
Note: in both cases I am capturing a translucent (partially transparent) shaded cube. If the cube is completely opaque, the RGBA format works fine.
Any ideas as to why this is happening for transparent background?
[Image: blackish capture with the RGBA format]
[Image: expected capture with the RGB format]
The cube looks darker because it's semitransparent, and whatever you use to view the image blends the semitransparent parts of it with a black background.
You might argue that the cube in the picture shouldn't be semitransparent since it's drawn on top of a completely opaque background, but the problem is that the widely-used
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
blending mode (which you seem to use as well) is known to produce incorrect alpha values when used to draw semitransparent objects.
Usually you can't see it because alpha is discarded when drawing to the default framebuffer, but it becomes prominent when inspecting outputs of glReadPixels.
As you noticed, to solve it you can simply discard the alpha component.
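For an RGBA8 readback, discarding alpha can be as simple as overwriting the alpha bytes after the call (a small C sketch; buffer layout as returned by glReadPixels with GL_RGBA/GL_UNSIGNED_BYTE, and the function name is mine):

#include <stdint.h>

/* Force every pixel fully opaque, e.g. right after
   glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf). */
static void discard_alpha_rgba8(uint8_t *buf, int w, int h)
{
    for (int i = 0; i < w * h; ++i)
        buf[i * 4 + 3] = 255;
}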
But if for some reason you need proper blending without those problems, you need
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Note that for this function to work, both the source and destination images have to be in premultiplied-alpha format. That is, you need to do color.rgb *= color.a as the last step in your shaders.
The inverse operation (color.rgb /= color.a) can be used to convert an image back to the regular format, but if your images are completely opaque, this step is unnecessary.
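Putting the pieces together on the GL side, the setup might look like this (a sketch; the function name is mine, and the fragment shader is assumed to output premultiplied color as described above):

#include <GL/gl.h>

/* Blend state for premultiplied-alpha rendering. The fragment shader
   must output color already multiplied by alpha (color.rgb *= color.a). */
void setup_premultiplied_blending(void)
{
    glEnable(GL_BLEND);
    /* Source color is already scaled by alpha, so the source factor is 1. */
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    /* The clear color must be premultiplied too; for transparent black
       that is simply (0, 0, 0, 0). */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
}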
I'm porting some OpenGL code from a technical paper to use with Metal. In it, they use a render target with only one channel - a 16-bit float buffer. But then they set blending operations on it like this:
glBlendFunci(1, GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
With only one channel, does that mean that with OpenGL, the target defaults to being an alpha channel?
Does anyone know if it is the same with Metal? I am not seeing the results I expect and I am wondering if Metal differs, or if there is a setting that controls how single-channel targets are treated with regards to blending.
In OpenGL, image formats are labeled explicitly with their channels. There is only one one-channel color format: GL_R* (with various bit depths and other variations), that is, red-only. And while texture swizzling can make the red channel appear in other components of a texture fetch, that doesn't work for framebuffer writes.
Furthermore, that blend function doesn't actually use the destination alpha. It only uses the source alpha, which has the value the FS gave it. So the fact that the framebuffer doesn't store an alpha is essentially irrelevant.
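For reference, a sketch of the GL state implied by that call (the attachment index and GL_R16F come from the question; the function name is mine, and a GL 4.x context with a loader such as GLEW or glad is assumed). In Metal, the analogous setup would be a red-only attachment format such as MTLPixelFormatR16Float with oneMinusSourceAlpha as the destination blend factor on that attachment's descriptor:

#include <GL/glew.h>  /* or any other GL 4.x function loader */

/* Single-channel (red-only) float attachment at index 1 with
   dst = dst * (1 - src_alpha) blending, matching the question. */
void setup_single_channel_target(GLuint tex, GLuint fbo, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_R16F, w, h);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, tex, 0);
    glEnablei(GL_BLEND, 1);
    /* The blend factor reads the alpha that the fragment shader outputs
       for this attachment, not anything stored in the framebuffer. */
    glBlendFunci(1, GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
}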
I'm working on a subpixel rasterizer. The output is to be rendered on an opaque bitmap. I've come so far as to correctly render text white-on-black (because I can basically disregard the contents of the bitmap).
The problem is the blending. Each rendered pixel affects its neighbours' intensity levels as well, because of the low-pass filtering technique (I'm using the 5-tap FIR filter - 1/9, 2/9, 3/9, etc.) and, additionally, the alpha level of the pixel being rendered. This result then has to be alpha-blended onto the destination image, which is where the problem occurs...
The results of the pixels' interactions have to be added together to achieve the correct luminance - and then alpha-blended to the destination - but if I rasterize one pixel at a time, I 'lose' the information from the previous pixels, so further additions may overflow.
How is this supposed to be done? The only solution I can imagine working is to render to a separate image with an alpha channel for each colour, apply some complex blending algorithm, and lastly alpha-blend it onto the destination... somehow.
However, I couldn't find any resources on how to actually do it, besides the basic concepts of LCD subpixel rendering and nice close-up images of monitor pixels. If anyone can help me along the way, I would be very grateful.
Tonight I awoke and could not fall asleep again.
I could not let all that brain energy go to waste, and I stumbled over exactly the same problem.
I came up with two different solutions, both unvalidated (a sketch of the first follows below):
1. Use a 3-channel alpha mask, one channel per subpixel, and blend each color channel with its own alpha.
2. If you only render grey/BW text, you can use the color channels themselves as alpha masks (1 - color_value if you draw dark text on a light background color), again applying each channel individually. The color value itself should be considered 1 in this case.
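A sketch of the first approach for a single output pixel (plain C, 0-255 channels; the names are illustrative):

#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB;

/* Blend the text color onto a destination pixel using a separate
   coverage (alpha) value per subpixel, one per color channel. */
static RGB blend_subpixel(RGB dst, RGB text, RGB coverage)
{
    RGB out;
    out.r = (uint8_t)((text.r * coverage.r + dst.r * (255 - coverage.r)) / 255);
    out.g = (uint8_t)((text.g * coverage.g + dst.g * (255 - coverage.g)) / 255);
    out.b = (uint8_t)((text.b * coverage.b + dst.b * (255 - coverage.b)) / 255);
    return out;
}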
Hope this helps a little, I filled ~2h of insomnia with it.
~ Jan
I am trying to display a bitmap using OpenGL, but I don't want the black portion of the image to be displayed. I can do this in DirectX but not in OpenGL. In other words, I have images of plants with a black background, and I want the plants drawn so they look realistic (without a black border).
You can do this using alpha-testing:
Add an alpha channel to your image before uploading it to a texture, 0.0 on black pixels and 1.0 everywhere else.
Enable alpha-testing with glEnable( GL_ALPHA_TEST )
Set the comparison with glAlphaFunc( GL_GREATER, 0.1f )
Render textured quad as usual. OpenGL will skip the zero-alpha texels thanks to the alpha-test.
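A minimal sketch of those steps in legacy (fixed-function) OpenGL; the function name is mine:

#include <GL/gl.h>

/* Skip texels whose alpha is at or below the threshold. Assumes the
   texture was uploaded as GL_RGBA with alpha = 0 on black pixels. */
void enable_black_cutout(void)
{
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.1f);  /* keep texels with alpha > 0.1 */
}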
There are a couple of ways you can do this.
One is to use an image editing program like Photoshop or GIMP to add an alpha channel to your image and then set the black portions of the image to zero alpha (fully transparent). The upside to this is that it lets you decide exactly which portions of the image should be transparent, since a fully programmatic approach can sometimes hide things you want to be seen.
Another method is to loop through every pixel in your bitmap and set the alpha based on some defined threshold (i.e., if you only want to key out true black, check whether each color channel is 0). The downside to this is that it will occasionally cause some of your dark lines to disappear.
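That loop might look like the following C sketch over an 8-bit RGBA image (the threshold parameter is my addition; pass 0 to key out true black only):

#include <stdint.h>
#include <stddef.h>

/* Make near-black pixels fully transparent and everything else opaque. */
static void key_out_black(uint8_t *rgba, size_t pixel_count, uint8_t threshold)
{
    for (size_t i = 0; i < pixel_count; ++i) {
        uint8_t *p = rgba + i * 4;
        if (p[0] <= threshold && p[1] <= threshold && p[2] <= threshold)
            p[3] = 0;    /* transparent */
        else
            p[3] = 255;  /* opaque */
    }
}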
Also, you will need to make sure that you have actually enabled the alpha channel and test, as stated in the answer above. Make sure to double check the order of your calls as well, as this can cause a lot of issues when you're trying to use transparency.
That's about as much as I can suggest since you haven't posted the code itself, but hopefully it should be enough to at least get you on the way to a solution.
I have some GDI code that's drawing semi-transparent triangles using System.Drawing.SolidBrush. I'm trying to reimplement the code using a proper 3D rendering API (OpenGL/Direct3D11), but I'm not sure what blend equation to use for these triangles to get the same output as the original GDI code.
I assume it's something relatively simple like additive blending (equation GL_FUNC_ADD, factors GL_ONE, GL_ONE) or alpha interpolation (equation GL_FUNC_ADD, factors GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), but neither looks quite right. That is almost certainly due to a bug in my new code, but I want to make sure I'm working towards the correct target before I continue. Does anybody know the appropriate blend equation?
EDIT: Here's the relevant C# code, stripped of context:
using System.Drawing;
SolidBrush b = new SolidBrush(Color.FromArgb(alpha,red,green,blue));
Point[] points = ...;
Graphics g = ...;
g.FillPolygon(b,points);
My question is, what color will actually be written, in terms of the brush's RGBA and the destination pixel's RGBA? All the docs I can find just say "use alpha to control the Brush's transparency" or something equally vague; I'm curious what the actual blend equation is.
According to MSDN, System.Drawing.Graphics has a property named CompositingMode which can be either SourceOver or SourceCopy.
The default one is SourceOver, which
Specifies that when a color is rendered, it is blended with the background color. The blend is determined by the alpha component of the color being rendered.
As I have tested, the blend strategy used here is standard alpha compositing (the Porter-Duff "over" operator), which determines the pixel value where image A is composited over image B by the following equations:
αo = αa + αb(1 − αa)
Co = (Ca·αa + Cb·αb·(1 − αa)) / αo
Here αa and αb are the normalized alpha values of image A and image B; Ca and Cb are the RGB values of image A and image B; Co and αo are the RGB and alpha values of the output image.
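In code, the same "A over B" rule looks like the following C sketch (normalized floats with straight, non-premultiplied colors; this mirrors the equations above, not GDI+'s actual internals):

typedef struct { float r, g, b, a; } RGBA;

/* Porter-Duff "A over B" with straight (non-premultiplied) colors.
   All channels and alphas are in [0, 1]. */
static RGBA over(RGBA a, RGBA b)
{
    RGBA o;
    o.a = a.a + b.a * (1.0f - a.a);
    if (o.a == 0.0f) {            /* fully transparent result */
        o.r = o.g = o.b = 0.0f;
        return o;
    }
    o.r = (a.r * a.a + b.r * b.a * (1.0f - a.a)) / o.a;
    o.g = (a.g * a.a + b.g * b.a * (1.0f - a.a)) / o.a;
    o.b = (a.b * a.a + b.b * b.a * (1.0f - a.a)) / o.a;
    return o;
}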