I'm currently using libjpeg to read and save JPEG images.
So first, what are the possible values of the pixel size (info.output_components)?
What are the possible values of the color space (info.out_color_space)?
And can a JPEG image have an alpha channel?
I'm also using libpng.
So, what are the possible bit depths (png_get_bit_depth(png, info))?
And what are the possible color types (png_get_color_type(png, info))?
Thanks!
So first, what are the possible values of the pixel size (info.output_components)?
From the documentation:
output_components is 1 (a colormap index) when quantizing colors; otherwise it
equals out_color_components. It is the number of JSAMPLE values that will be
emitted per pixel in the output arrays.
int out_color_components;   /* # of color components in out_color_space */
int output_components;      /* # of color components returned */
/* output_components is 1 (a colormap index) when quantizing colors;
 * otherwise it equals out_color_components. */
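As a concrete illustration, here is a minimal sketch (in C, with no error handling) that opens a JPEG and prints what the decompressor will hand back per pixel; the file name parameter is hypothetical:

#include <stdio.h>
#include <jpeglib.h>

void print_jpeg_info(const char *path)
{
    FILE *fp = fopen(path, "rb");
    struct jpeg_decompress_struct info;
    struct jpeg_error_mgr err;

    info.err = jpeg_std_error(&err);
    jpeg_create_decompress(&info);
    jpeg_stdio_src(&info, fp);
    jpeg_read_header(&info, TRUE);
    jpeg_start_decompress(&info);

    /* Typically 3 for JCS_RGB, 1 for JCS_GRAYSCALE, 4 for JCS_CMYK,
     * or 1 when quantize_colors is enabled (a colormap index). */
    printf("out_color_space:   %d\n", info.out_color_space);
    printf("output_components: %d\n", info.output_components);

    jpeg_destroy_decompress(&info);
    fclose(fp);
}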
What are the possible values of the color space (info.out_color_space)?
From the source:
JCS_UNKNOWN, /* error/unspecified */
JCS_GRAYSCALE, /* monochrome */
JCS_RGB, /* red/green/blue as specified by the RGB_RED,
RGB_GREEN, RGB_BLUE, and RGB_PIXELSIZE macros */
JCS_YCbCr, /* Y/Cb/Cr (also known as YUV) */
JCS_CMYK, /* C/M/Y/K */
JCS_YCCK, /* Y/Cb/Cr/K */
JCS_EXT_RGB, /* red/green/blue */
JCS_EXT_RGBX, /* red/green/blue/x */
JCS_EXT_BGR, /* blue/green/red */
JCS_EXT_BGRX, /* blue/green/red/x */
JCS_EXT_XBGR, /* x/blue/green/red */
JCS_EXT_XRGB, /* x/red/green/blue */
/* When out_color_space is set to JCS_EXT_RGBX, JCS_EXT_BGRX, JCS_EXT_XBGR,
or JCS_EXT_XRGB during decompression, the X byte is undefined, and in
order to ensure the best performance, libjpeg-turbo can set that byte to
whatever value it wishes. Use the following colorspace constants to
ensure that the X byte is set to 0xFF, so that it can be interpreted as an
opaque alpha channel. */
JCS_EXT_RGBA, /* red/green/blue/alpha */
JCS_EXT_BGRA, /* blue/green/red/alpha */
JCS_EXT_ABGR, /* alpha/blue/green/red */
JCS_EXT_ARGB, /* alpha/red/green/blue */
JCS_RGB565 /* 5-bit red/6-bit green/5-bit blue */
And can a JPEG image have an alpha channel?
The JPEG file format itself does not store an alpha channel. However, as the source comments above show, libjpeg-turbo can decompress into the JCS_EXT_RGBA, JCS_EXT_BGRA, JCS_EXT_ABGR, and JCS_EXT_ARGB layouts, in which the extra byte is forced to 0xFF so it can be treated as an opaque alpha channel.
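For example, here is a sketch (in C, assuming libjpeg-turbo and omitting the usual setjmp error handling) of decoding a whole JPEG into a 4-byte-per-pixel RGBA buffer, with the alpha byte forced to 0xFF:

#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

unsigned char *decode_rgba(FILE *fp, unsigned *width, unsigned *height)
{
    struct jpeg_decompress_struct info;
    struct jpeg_error_mgr err;

    info.err = jpeg_std_error(&err);
    jpeg_create_decompress(&info);
    jpeg_stdio_src(&info, fp);
    jpeg_read_header(&info, TRUE);

    info.out_color_space = JCS_EXT_RGBA;   /* must be set before jpeg_start_decompress() */
    jpeg_start_decompress(&info);          /* output_components is now 4 */

    *width  = info.output_width;
    *height = info.output_height;

    size_t stride = (size_t)info.output_width * info.output_components;
    unsigned char *pixels = malloc(stride * info.output_height);

    while (info.output_scanline < info.output_height) {
        JSAMPROW row = pixels + (size_t)info.output_scanline * stride;
        jpeg_read_scanlines(&info, &row, 1);
    }

    jpeg_finish_decompress(&info);
    jpeg_destroy_decompress(&info);
    return pixels;
}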
So, what are the possible bit depths (png_get_bit_depth(png, info))?
Simply put, the bit depth is the number of bits used to represent each sample (each color channel, or each palette index), not each whole pixel. The higher the bit depth, the more distinct values each channel can hold. A short sketch after the table below shows how to query both the bit depth and the color type.
From the PNG Spec:
Color type is a single-byte integer that describes the interpretation of the image data. Color type codes represent sums of the following values: 1 (palette used), 2 (color used), and 4 (alpha channel used). Valid values are 0, 2, 3, 4, and 6.
Color type   Allowed bit depths   Interpretation
0            1, 2, 4, 8, 16       Each pixel is a grayscale sample.
2            8, 16                Each pixel is an R,G,B triple.
3            1, 2, 4, 8           Each pixel is a palette index; a PLTE chunk must appear.
4            8, 16                Each pixel is a grayscale sample, followed by an alpha sample.
6            8, 16                Each pixel is an R,G,B triple, followed by an alpha sample.
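Here is a minimal sketch (in C) showing where both values come from; real code would also install the usual setjmp-based libpng error handler, which is omitted here:

#include <stdio.h>
#include <png.h>

void print_png_info(const char *path)
{
    FILE *fp = fopen(path, "rb");
    png_structp png  = png_create_read_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    png_infop   info = png_create_info_struct(png);

    png_init_io(png, fp);
    png_read_info(png, info);

    /* Bits per sample (1, 2, 4, 8 or 16) and color type (0, 2, 3, 4 or 6). */
    printf("bit depth:  %d\n", png_get_bit_depth(png, info));
    printf("color type: %d\n", png_get_color_type(png, info));

    if (png_get_color_type(png, info) & PNG_COLOR_MASK_ALPHA)
        printf("image has an alpha channel\n");

    png_destroy_read_struct(&png, &info, NULL);
    fclose(fp);
}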
Running on OS X, I've loaded a texture in OpenGL using the SDL_image library (IMG_Load(), which returns an SDL_Surface*). It turned out that the color channels were swapped, i.e. I had to pass GL_BGRA as the pixel format parameter to glTexImage2D().
Is there a way to determine the correct data format (BGRA, RGBA, etc.) without simply compiling and visually checking the texture? And why does SDL swap these color channels?
Yes. The following link has code examples showing how to determine the mask and shift for each component: http://wiki.libsdl.org/SDL_PixelFormat#Code_Examples
From the site:
SDL_PixelFormat *fmt;
SDL_Surface *surface;
Uint32 temp, pixel;
Uint8 red, green, blue, alpha;
.
.
.
fmt = surface->format;
SDL_LockSurface(surface);
pixel = *((Uint32*)surface->pixels);
SDL_UnlockSurface(surface);
/* Get Red component */
temp = pixel & fmt->Rmask; /* Isolate red component */
temp = temp >> fmt->Rshift; /* Shift it down to 8-bit */
temp = temp << fmt->Rloss; /* Expand to a full 8-bit number */
red = (Uint8)temp;
You should be able to sort the masks by value; then you can determine whether the layout is RGBA or BGRA. If a channel's mask is 0, that color channel does not exist.
I have no idea why the swaps occur.
Edit: Changed from Xshift to Xmask, since the mask can be used to determine both the location and the existence of a color channel.
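Following that idea, here is a minimal sketch (in C, assuming a 32-bit surface on a little-endian machine; GL_RGBA and GL_BGRA come from the OpenGL headers) of picking the upload format from the masks:

#include <SDL.h>
#include <SDL_opengl.h>

GLenum guess_gl_format(const SDL_Surface *surface)
{
    const SDL_PixelFormat *f = surface->format;

    if (f->Rmask == 0x000000FF && f->Bmask == 0x00FF0000)
        return GL_RGBA;   /* red occupies the lowest byte in memory */
    if (f->Bmask == 0x000000FF && f->Rmask == 0x00FF0000)
        return GL_BGRA;   /* blue occupies the lowest byte in memory */

    return GL_RGBA;       /* other layouts would need their own handling */
}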
I've got a series of PNG files that I know are in a 16-bit greyscale format. I haven't had a problem loading them with Magick++ and getting access to the data in 8-bit format (this was done, primarily, because all the code was there and no change was necessary).
magick_image->write(0, 0, grey->GetWidth(), grey->GetHeight(), "R", Magick::CharPixel, grey->GetBeginData());
Please note that grey is in our own container for images, but the memory layout is just one block of pre-allocated memory.
I've now been told that we need access to the full 16-bit range, and I'm not too sure how to do this. I presume I wouldn't use Magick::CharPixel, but the other pixel types described in the documentation don't specify what bit size they actually are.
So, I need to be able to do the following things:
Determine whether it is a 16-bit image in the first place
Read the data out of the Magick::Image class into a block of memory that maps directly onto u_int16_t values
Can anyone help with this?
There are a few ways to do this, but without seeing the definitions of magick_image and grey, this answer is based on a few assumptions.
To work with a bare-bones grey container, I'll assume it could be defined as...
struct MaybeGray {
    std::vector<u_int16_t> blob;
    size_t width;
    size_t height;
    MaybeGray(size_t w, size_t h) : blob(w * h) { width = w; height = h; }
    size_t getWidth() { return width; }
    size_t getHeight() { return height; }
    void * getBeginData() { return blob.data(); } // std::vector::data() requires C++11
};
Next, I'll create a 2x2 canvas image to satisfy magick_image.
Magick::Image magick_image;
Magick::Geometry size(2, 2);
magick_image.size(size);
magick_image.read("XC:blue");
// Apply gray scale (Magick::Image.quantize may be better.)
magick_image.type(Magick::GrayscaleType);
Determine whether it is a 16-bit image in the first place
Magick::Image.depth can be used to identify and to set the depth value.
const size_t DEPTH = 16;
if ( magick_image.depth() != DEPTH ) {
magick_image.depth(DEPTH);
}
Read the data out of the Magick::Image class into a block of memory that maps directly onto u_int16_t values
What you're doing with Magick::Image.write is correct; however, Magick::CharPixel is for 8-bit (256 values per channel) depth. For 16-bit, use Magick::ShortPixel.
struct MaybeGray gray(2, 2);
magick_image.write(0, 0,
gray.getWidth(),
gray.getHeight(),
"R",
Magick::ShortPixel,
gray.getBeginData());
How to test
An image canvas of XC:white should fill the blob with 0xFFFF, and XC:black with 0x0000. Use ImageMagick's convert and identify utilities to create the expected results.
# Create identical canvas
convert -size 2x2 xc:blue -colorspace gray -depth 16 blue.tiff
# Dump info
identify -verbose blue.tiff
Image: blue.tiff
Format: TIFF (Tagged Image File Format)
Class: DirectClass
Geometry: 2x2+0+0
Units: PixelsPerInch
Type: Grayscale
Base type: Grayscale
Endianess: LSB
Colorspace: Gray
Depth: 16/14-bit
Channel depth:
gray: 14-bit
Channel statistics:
Gray:
min: 4732 (0.0722057)
max: 4732 (0.0722057)
mean: 4732 (0.0722057)
standard deviation: 0 (0)
kurtosis: 0
skewness: 0
Colors: 1
Histogram:
4: ( 4732, 4732, 4732) #127C127C127C gray(7.22057%)
# ... rest omitted
And the verbose info confirms that I have a 16-bit gray image, and the histogram tells me that MaybeGray.blob should be filled with four 0x127C values.
Which it is.
I would like to access the RGB values of individual pixels. I know I can get an unsigned char array by calling
unsigned char* pixels = SOIL_load_image(picturename.c_str(), &_w, &_h, 0, SOIL_LOAD_RGB);
However, I don't understand what these chars mean. The documentation says:
// The return value from an image loader is an 'unsigned char *' which points
// to the pixel data. The pixel data consists of *y scanlines of *x pixels,
// with each pixel consisting of N interleaved 8-bit components; the first
// pixel pointed to is top-left-most in the image. There is no padding between
// image scanlines or between pixels, regardless of format. The number of
// components N is 'req_comp' if req_comp is non-zero, or *comp otherwise.
// If req_comp is non-zero, *comp has the number of components that _would_
// have been output otherwise. E.g. if you set req_comp to 4, you will always
// get RGBA output, but you can check *comp to easily see if it's opaque.
However, when I load a 10 by 10 pixel image to test it, I get a huge amount of chars in the array (almost 54,000), which seems way too much. How do I get an individual pixel's colour so that I can do something like this:
int colourvalue = pixel[y*width+x];
I can't seem to find this.
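For reference, the layout described in the documentation quoted above can be indexed as in the following sketch in C; the file name and the (x, y) coordinates are hypothetical, the SOIL header path may differ depending on how the library is installed, and SOIL_LOAD_RGB guarantees 3 interleaved bytes per pixel:

#include <stdio.h>
#include <SOIL/SOIL.h>

int main(void)
{
    int w, h;
    unsigned char *pixels = SOIL_load_image("picture.png", &w, &h, 0, SOIL_LOAD_RGB);

    int x = 4, y = 7;
    size_t i = ((size_t)y * w + x) * 3;   /* 3 channels, no padding between rows */
    printf("r=%d g=%d b=%d\n", pixels[i], pixels[i + 1], pixels[i + 2]);

    SOIL_free_image_data(pixels);
    return 0;
}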
I'm working on a NES emulator right now and I'm having trouble figuring out how to render the pixels. I am using a 3 dimensional array to hold the RGB value of each pixel. The array definition looks like this for the 256 x 224 screen size:
byte screenData[224][256][3];
For example, [0][0][0] holds the blue value, [0][0][1] holds the green value and [0][0][2] holds the red value of the pixel at screen position [0][0].
When the vblank flag goes high, I need to render the screen. When SDL goes to render the screen, the screenData array will be full of the RGB values for each pixel. I was able to find a function named SDL_CreateRGBSurfaceFrom that looked like it may work for what I want to do. However, all of the examples I have seen use 1 dimensional arrays for the RGB values and not a 3 dimensional array.
What would be the best way for me to render my pixels? It would also be nice if the function allowed me to resize the surface somehow so I didn't have to use a 256 x 224 window size.
You need to store the data as a one-dimensional char array:
int channels = 3; // for a RGB image
char* pixels = new char[img_width * img_height * channels];
// populate pixels with real data ...
SDL_Surface *surface = SDL_CreateRGBSurfaceFrom((void*)pixels,
img_width,
img_height,
channels * 8, // bits per pixel = 24
img_width * channels, // pitch
0x0000FF, // red mask
0x00FF00, // green mask
0xFF0000, // blue mask
0); // alpha mask (none)
In SDL 2.0, use an SDL_Texture with SDL_TEXTUREACCESS_STREAMING and SDL_RenderCopy; it's faster than SDL_RenderDrawPoint (a short sketch follows below).
See:
official example: http://hg.libsdl.org/SDL/file/e12c38730512/test/teststreaming.c
my derived example which does not require blob data and compares both methods: https://github.com/cirosantilli/cpp-cheat/blob/0607da1236030d2e1ec56256a0d12cadb6924a41/sdl/plot2d.c
Related: Why do I get bad performance with SDL2 and SDL_RenderCopy inside a double for loop over all pixels?
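As a rough sketch of that streaming-texture approach (SDL 2.0 assumed; screenData is the contiguous 224x256x3 array from the question, so it can be handed to SDL directly with a pitch of 256 * 3 bytes):

#include <SDL.h>

typedef unsigned char byte;
extern byte screenData[224][256][3];   /* bytes in B, G, R order per pixel */

void render_frame(SDL_Renderer *renderer, SDL_Texture *texture)
{
    SDL_UpdateTexture(texture, NULL, screenData, 256 * 3);
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, texture, NULL, NULL);   /* stretches to the window size */
    SDL_RenderPresent(renderer);
}

/* The texture would be created once, e.g.:
 *   SDL_CreateTexture(renderer, SDL_PIXELFORMAT_BGR24,
 *                     SDL_TEXTUREACCESS_STREAMING, 256, 224);
 */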
How can I use the RGBA5555 pixel format in cocos2d?
I define my pixel format like this:
[CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA4444];
and I've found these:
// Available textures
// kCCTexture2DPixelFormat_RGBA8888 - 32-bit texture with Alpha channel
// kCCTexture2DPixelFormat_RGB565 - 16-bit texture without Alpha channel
// kCCTexture2DPixelFormat_A8 - 8-bit textures used as masks
// kCCTexture2DPixelFormat_I8 - 8-bit intensity texture
// kCCTexture2DPixelFormat_AI88 - 16-bit textures used as masks
// kCCTexture2DPixelFormat_RGBA4444 - 16-bit textures: RGBA4444
// kCCTexture2DPixelFormat_RGB5A1 - 16-bit textures: RGB5A1
// kCCTexture2DPixelFormat_PVRTC4 - 4-bit PVRTC-compressed texture: PVRTC4
// kCCTexture2DPixelFormat_PVRTC2 - 2-bit PVRTC-compressed texture: PVRTC2
but I can't seem to find RGBA5555. Any thoughts on that?
There is no RGBA5555 format. That would amount to 4 × 5 bits = 20 bits, and no such texture format exists anywhere.
If you're looking for RGBA5551, meaning 5 bits for each of the R, G and B channels and 1 bit for alpha, then you want the RGB5A1 format.