We are currently switching to WebP for textures in a video game. We've come across a problem where the areas in an image that have the alpha channel set to zero end up losing all the detail. You can see this effect in the following example:
Original Image (left is color channels, right is alpha channel)
After saving as WebP
As you can see, the zero-alpha areas have lost their detail.
This optimization makes sense when the alpha channel is being used as transparency. However, in our game we are using the alpha for something else and need to maintain the color channel integrity independently from the alpha channel. How do I disable this effect in the encoder so the color channel encodes normally?
I should mention I'm using libwebp in C++, calling the function WebPEncodeRGBA.
Thanks!
https://developers.google.com/speed/webp/docs/cwebp
The -exact parameter is documented there:
-exact
Preserve RGB values in transparent area. The default is off, to help compressibility.
Found the solution. After tracing through the libwebp code I discovered an option in WebPConfig called "exact" (it corresponds to cwebp's -exact flag). Setting it to 1 prevents the library from discarding the RGB values of zero-alpha areas when encoding.
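For anyone who needs it, here's roughly how that looks in code. WebPEncodeRGBA doesn't expose WebPConfig, so you have to go through the advanced WebPPicture/WebPEncode API instead. This is a minimal sketch with most error handling trimmed, so treat it as an outline rather than production code:

```cpp
#include <webp/encode.h>
#include <cstdint>
#include <cstddef>

// Encodes an RGBA buffer while preserving RGB values in zero-alpha areas.
// Returns the compressed size, or 0 on failure; *out must be freed with WebPFree().
size_t EncodeRgbaExact(const uint8_t* rgba, int width, int height,
                       float quality, uint8_t** out) {
    WebPConfig config;
    if (!WebPConfigInit(&config)) return 0;  // version check + defaults
    config.quality = quality;
    config.exact = 1;  // keep RGB data in fully transparent pixels

    WebPPicture pic;
    if (!WebPPictureInit(&pic)) return 0;
    pic.width = width;
    pic.height = height;
    pic.use_argb = 1;  // work in ARGB so the RGB data survives untouched
    if (!WebPPictureImportRGBA(&pic, rgba, width * 4)) return 0;

    WebPMemoryWriter writer;
    WebPMemoryWriterInit(&writer);
    pic.writer = WebPMemoryWrite;
    pic.custom_ptr = &writer;

    const int ok = WebPEncode(&config, &pic);
    WebPPictureFree(&pic);
    if (!ok) { WebPMemoryWriterClear(&writer); return 0; }
    *out = writer.mem;
    return writer.size;
}
```

Note that `config.exact` requires a reasonably recent libwebp (the flag was added in 0.5.0).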
I'm trying to load and view a video with an alpha channel in Qt. The video was encoded using Quicktime Animation set to RGB + Alpha and Millions of Colors+. I'm sure the video has transparency working as I loaded it into After Effects and checked.
I tried using the Phonon module with no success. The video loads alright but without the alpha channel, it just shows a black background. I tried setting WA_TranslucentBackground attribute but that didn't work either. GIF is not an option since the graphics are quite complex.
Is there any way to do this?
I'm not sure if it's possible (I don't know the export options for After Effects), but did you try converting the movie to the MNG format? You could then load it with QMovie, which supports the alpha channel (though the file may be quite heavy in size).
Maybe this link will help: http://www.libpng.org/pub/mng/mngapcv.html
I am using cocos2d for a game that uses sprite sheets for my character animations, created with TexturePacker. Now I want to use the PVRTC 4 format to reduce memory consumption. But as the PVRTC Texture Compression Usage Guide suggests, I need to add an extra 4-pixel border around each character to get proper results. Even if I add the border, I will have to mask the image with an alpha image at run time to remove it. I used TexturePacker to create a sprite sheet in PVRTC4 format and created a matching alpha masking image. I now have these two images in hand, both of the same width and height.
Now my question is, how can I mask my PVRTC texture with alpha image in Cocos2D?
It will be more helpful if the solution provided works with Batch Nodes!
Thanks in advance for any solutions!
Why don't you just make the border/padding area completely transparent?
I was having the same problem, and after reading Ray Wenderlich's page about masking, I made a little CCSprite subclass which allows you to mask with two images.
CCMaskedSprite
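Conceptually, whatever the subclass or shader does on the GPU boils down to multiplying each texel's alpha by the mask's alpha. A minimal CPU-side sketch of that idea (assuming a plain RGBA8 buffer and an 8-bit mask of the same dimensions; the function name is mine, not part of cocos2d):

```cpp
#include <cstdint>
#include <cstddef>

// Applies an 8-bit alpha mask to an RGBA8 image in place:
// each pixel's alpha becomes texture_alpha * mask_alpha / 255.
// Color channels are left untouched.
void ApplyAlphaMask(uint8_t* rgba, const uint8_t* mask,
                    std::size_t pixelCount) {
    for (std::size_t i = 0; i < pixelCount; ++i) {
        uint8_t& a = rgba[i * 4 + 3];
        a = static_cast<uint8_t>(a * mask[i] / 255);
    }
}
```

In practice you'd do this in a fragment shader (sample both textures, multiply the alphas) so it also works under batch nodes, but the arithmetic is the same.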
I need to convert 24bppRGB to 16bppRGB, 8bppRGB, 4bppRGB, 8bpp grayscale and 4bpp grayscale. Any good links or other suggestions?
preferably using Windows/GDI+
[EDIT] speed is more critical than quality. source images are screenshots
[EDIT1] color conversion is required to minimize space
You're better off getting yourself a library, as others have suggested. Aside from ImageMagick, there are others, such as OpenCV. The benefits of leaving this to a library are:
Save yourself some time -- by cutting out dev and testing time for the algorithm
Speed. Most libraries out there are optimized to a level far greater than a standard developer (such as ourselves) could achieve
Standards compliance. There are many image formats out there, and using a library cuts the problem of standards compliance out of the equation.
If you're doing this yourself, then your problem can be divided into the following sub-problems:
Simple color quantization. As Alf P. Steinbach pointed out, this is just "downscaling" the number of colors. RGB24 has 8 bits each for the R, G, and B channels. For RGB16 you can do a number of conversions:
Equal number of bits for each of R, G, B. This typically means 4 or 5 bits each.
Favor the green channel (human eyes are more sensitive to green) and give it 6 bits. R and B get 5 bits.
You can even do the same thing for RGB24 to RGB8, but the results won't be as pretty as a palettized image:
4 bits green, 2 red, 2 blue.
3 bits green, with the remaining 5 bits split between red and blue.
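The 5-6-5 split above is the common RGB16 layout; as a sketch, the quantization is just truncating the low-order bits of each channel:

```cpp
#include <cstdint>

// Packs 8-bit R, G, B into RGB565: 5 bits red, 6 bits green, 5 bits blue.
// Quantization is simple truncation of the low-order bits.
uint16_t Rgb888ToRgb565(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint16_t>(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

// Expands back for display. The low bits are replicated from the high
// bits so that full intensity (255) still maps back to exactly 255.
void Rgb565ToRgb888(uint16_t c, uint8_t& r, uint8_t& g, uint8_t& b) {
    r = static_cast<uint8_t>((((c >> 11) & 0x1F) << 3) | ((c >> 13) & 0x07));
    g = static_cast<uint8_t>((((c >> 5) & 0x3F) << 2) | ((c >> 9) & 0x03));
    b = static_cast<uint8_t>(((c & 0x1F) << 3) | ((c >> 2) & 0x07));
}
```

The other bit splits (4-4-4, 5-6-5 with green favored, 2-4-2, etc.) are the same idea with different shift amounts.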
Palettization (indexed color). This is for going from RGB24 to RGB8 and RGB4. It is a hard problem to solve well by yourself.
Color-to-grayscale conversion. Very easy. Convert your RGB24 to the Y'UV color space and keep the Y' (luma) channel. That will give you 8bpp grayscale. If you want 4bpp grayscale, then you either quantize or palettize.
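A sketch of that conversion, using the standard Rec. 601 luma weights in fixed point so there's no floating-point work per pixel (speed was your stated priority):

```cpp
#include <cstdint>

// Rec. 601 luma: Y' = 0.299 R + 0.587 G + 0.114 B,
// computed in integer arithmetic (weights scaled by 1000).
uint8_t RgbToGray(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>((299 * r + 587 * g + 114 * b) / 1000);
}

// 4bpp grayscale by simple quantization: keep the top 4 bits of luma.
uint8_t GrayTo4bpp(uint8_t y) { return y >> 4; }
```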
Also be sure to check out chroma subsampling. Often, you can decrease the bitrate by a third without visible losses to image quality.
With that breakdown, you can divide and conquer. Problems 1 and 3 you can solve pretty quickly. That will allow you to see the quality you can get simply from coarser color quantization.
Whether or not you want to solve Problem 2 will depend on the results from above. You said that speed is more important, so if the quality from color quantization alone is good enough, don't bother with palettization.
Finally, you never mentioned WHY you are doing this. If this is for reducing storage space, then you should be looking at image compression. Even lossless compression will give you better results than reducing the color depth alone.
EDIT
If you're set on using PNG as the final format, then your options are quite limited, because neither RGB16 nor RGB8 is a valid combination in the PNG header.
So what this means is: regardless of bit depth, you will have to switch to indexed color if you want RGB color images below 24bpp (8 bits per channel). This means you will NOT be able to take advantage of the color quantization and chroma decimation that I mentioned above -- it's not supported in PNG. So this means you will have to solve Problem 2 -- palettization.
But before you think about that, some more questions:
What are the dimensions of your images?
What sort of ideal file-size are you after?
How close to that ideal file-size do you get with straight RGB24 + PNG compression?
What is the source of your images? You've mentioned screenshots, but since you're so concerned about disk space, I'm beginning to suspect that you might be dealing with image sequences (video). If this is so, then you could do better than PNG compression.
Oh, and if you're serious about doing things with PNG, then definitely have a look at this library.
Find yourself a copy of the ImageMagick library. It's very configurable, so you can teach it about the details of any binary format you need to process...
See: ImageMagick, which has a very practical license.
I received acceptable (preliminary) results with GDI+ v1.1, which ships with Vista and Win7. It allows conversion to 16bpp (I used PixelFormat16bppRGB565) and to 8bpp and 4bpp using standard palettes. Better quality can be achieved with an "optimal palette" -- GDI+ will calculate an optimal palette for each screenshot -- but the conversion is twice as slow. Grayscale was obtained by specifying a simple custom palette, e.g. as demonstrated here, except that I didn't need to modify pixels manually; Bitmap::ConvertFormat() did it for me.
[EDIT] the results were really acceptable until I decided to check the solution on WinXP. Surprisingly, Microsoft decided not to ship GDI+ v1.1 (required for Bitmap::ConvertFormat) with WinXP. Nice move! So I continue researching...
[EDIT] had to reimplement this on plain GDI, hardcoding the palettes taken from GDI+
How do you convert modern-day photos to the look and feel of old Polaroid photos? References and/or sample code are welcome. Thanks!
Convert the images to HSV (cv::cvtColor), then look at adjusting the hue/saturation values.
see http://en.wikipedia.org/wiki/HSL_and_HSV for a rather too technical article
Here is a video showing how to do it in GIMP: http://www.youtube.com/watch?v=1LAUm-SrWJA and here is the tutorial: http://howto.nicubunu.ro/gimp_polaroid_photo/
You can look at the various steps (each of them is a basic image processing operation) and glue them together to make your own code. I think each GIMP operation is in turn available as a Script-Fu script.
I would suggest using blend modes along with the HSV conversion.
This website below has been of tremendous help to me while processing images to give them an 'old' look.
http://www.simplefilter.de/en/basics/mixmods.html
Do note that you need to mix and match different blend modes with color tints and blur algorithms to achieve the various Polaroid effects.
A good starting point would be to look at ImageMagick. It already has command-line options to change the hue and saturation of a photo. Find a parameter set that gives you the result you want, and look at the source code to see what it does behind the scenes.
Programmatically, you'd want to use an image processing library such as OpenCV.
A large part of the effect (besides adding the white frame) is a change in the image color balance and histogram. This is due to the degradation of the chemical elements in the Polaroid film.
The types of operations you would need to apply to the image:
Changing color spaces such as HSV;
Desaturation;
Blending with color filters (this is the suggested way here);
Changing the brightness and contrast of the image channels for the chosen color space.
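As an illustration of combining a couple of those operations, here is a per-pixel sketch that desaturates toward luma and then blends in a warm tint. The weights and tint color are arbitrary assumptions for demonstration, not a canonical Polaroid profile:

```cpp
#include <cstdint>
#include <algorithm>

// Washes a pixel out toward gray, then multiplies in a warm tint,
// roughly mimicking faded Polaroid chemistry. Amounts are in [0, 1].
void PolaroidPixel(uint8_t& r, uint8_t& g, uint8_t& b,
                   float desaturate, float tintAmount) {
    const float y = 0.299f * r + 0.587f * g + 0.114f * b;     // luma
    const float tintR = 255.0f, tintG = 240.0f, tintB = 192.0f; // warm cream
    auto mix = [](float a, float c, float t) { return a + (c - a) * t; };
    float fr = mix(r, y, desaturate);
    float fg = mix(g, y, desaturate);
    float fb = mix(b, y, desaturate);
    fr = mix(fr, tintR * fr / 255.0f, tintAmount);  // multiply blend mode
    fg = mix(fg, tintG * fg / 255.0f, tintAmount);
    fb = mix(fb, tintB * fb / 255.0f, tintAmount);
    r = static_cast<uint8_t>(std::min(255.0f, fr));
    g = static_cast<uint8_t>(std::min(255.0f, fg));
    b = static_cast<uint8_t>(std::min(255.0f, fb));
}
```

Swapping the multiply step for other blend modes (screen, overlay, etc.) and layering a vignette or blur gives you the different Polaroid variants.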
Obviously, most tutorials about how to do this in Photoshop (or other photo editing apps), can be converted into programs using OpenCV.
I'm trying to alpha blend sprites and backgrounds with devkitPro (including libnds, libarm, etc).
Does anyone know how to do this?
As a generic reference, I once wrote a small blog entry about that issue. Basically, you first have to define which layer is alpha-blended against which other layer(s). As far as I know:
the source layer(s) must be over the destination layer(s) for blending to be displayed; that means the priority of source layers should be numerically lower than the priority of destination layers.
the source layer is what is going to be translucent; the destination(s) is what is going to be seen through it (and yes, I find this rather confusing).
For the sprites, specifically, you then have 3 ways to achieve alpha-blending depending on what you need and what you're "ready to pay" for it:
You can make all the sprites have some alpha-blending by turning on BLEND_SRC_SPRITE in REG_BLDCNT[_SUB] ... not that useful.
You can selectively turn on blending of some sprites by using ATTR0_TYPE_BLENDED. The blending level will be the same for all sprites (and layers)
You can use bitmap-type sprites, which use direct colors (bypassing the palettes); the ATTR2_PALETTE() field of GBA sprites is useless for them and has been recycled into ATTR2_ALPHA, giving per-sprite alpha.
Sprites on the DS can be alpha blended using the blend control registers. TONC gives the necessary information for getting blending working on the main screen because the register locations are the same. Alpha blending on the subscreen uses the same process with different registers at a 1000h offset.
The registers you'll be looking at are REG_BLDMOD, REG_COLV, and REG_COLY for the main screen and REG_BLDMOD_SUB, REG_COLV_SUB, and REG_COLY_SUB for the sub screen.
Also remember that you'll have to change the sprite's graphic mode to enable blending per sprite.
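For intuition, once those registers are set, the blend the hardware applies to each 5-bit color channel is a weighted sum of the source and destination pixels, using the two coefficients (commonly called EVA and EVB) written to the blend-alpha register. A plain C++ sketch of that equation (the function is mine, for illustration only):

```cpp
#include <cstdint>
#include <algorithm>

// GBA/DS hardware alpha blend, per 5-bit color channel:
// out = min(31, src * EVA / 16 + dst * EVB / 16), with EVA/EVB in [0, 16].
uint8_t BlendChannel5(uint8_t src, uint8_t dst, unsigned eva, unsigned evb) {
    const unsigned v = (src * eva) / 16u + (dst * evb) / 16u;
    return static_cast<uint8_t>(std::min(v, 31u));
}
```

Note the result is clamped at 31, so coefficient pairs summing to more than 16 brighten overlapping areas.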
It's been a long time since I've done any GBA programming, but as I recall, the DS supports most (if not all) of the stuff that the GBA supports. This link has a section on how to do alpha blending on the GBA (section 13.2). I don't know if there's a DS-specific way of doing it, but this should work for you.