Using RGBA5555 on cocos2d? - cocos2d-iphone

How can I use the RGBA5555 pixel format in cocos2d?
I define my pixel formats like this:
[CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA4444];
and I've found these:
// Available textures
// kCCTexture2DPixelFormat_RGBA8888 - 32-bit texture with Alpha channel
// kCCTexture2DPixelFormat_RGB565 - 16-bit texture without Alpha channel
// kCCTexture2DPixelFormat_A8 - 8-bit textures used as masks
// kCCTexture2DPixelFormat_I8 - 8-bit intensity texture
// kCCTexture2DPixelFormat_AI88 - 16-bit textures used as masks
// kCCTexture2DPixelFormat_RGBA4444 - 16-bit textures: RGBA4444
// kCCTexture2DPixelFormat_RGB5A1 - 16-bit textures: RGB5A1
// kCCTexture2DPixelFormat_PVRTC4 - 4-bit PVRTC-compressed texture: PVRTC4
// kCCTexture2DPixelFormat_PVRTC2 - 2-bit PVRTC-compressed texture: PVRTC2
but I can't seem to find RGBA5555. Any thoughts on that?

There is no RGBA5555 format. That would amount to 4 × 5 bits = 20 bits, and no such texture format exists anywhere.
If you mean RGBA5551, i.e. 5 bits for each of the RGB channels and 1 bit for alpha, then you're looking for the RGB5A1 format.
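For example, you can set it as the default the same way as in the question (the exact enum prefix depends on your cocos2d version):
// 16-bit texture: 5 bits per RGB channel, 1-bit alpha
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGB5A1];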

Related

TGA color mapped color convert to RGB or RGBA

I'm writing a TGA parser in C++ for fun, and it can now read files of image types 2, 3, and 10, but I'm stuck on type 1, which has a color map. I don't know how to convert color-mapped colors to RGB or RGBA. It seems that for a type 1 (uncompressed) image, if I have a char* color_map, I should treat it as a uint8_t*, and if color_map_entry_depth is 24, pixel_depth is 8, and I have a uint8_t pixel_data[3] taken from the file buffer, the first pixel color would be
RGB(color_map[pixel_data[2]], color_map[pixel_data[1]], color_map[pixel_data[0]])
But that gives me the wrong colors. Could anybody help?
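For what it's worth, the usual lookup treats each pixel value as an index into the color map and multiplies it by the size of one map entry in bytes before reading the BGR triplet. A minimal sketch, assuming 24-bit BGR map entries and 8-bit indices (names are illustrative, not from the question's parser):
#include <cstdint>

struct RGBColor { uint8_t r, g, b; };

// Look up one color-mapped pixel: `index` selects an entry, each entry is 3 bytes (BGR).
RGBColor lookup(const uint8_t* color_map, uint8_t index)
{
    const uint8_t* entry = color_map + index * 3; // 3 = color_map_entry_depth / 8
    return RGBColor{ entry[2], entry[1], entry[0] }; // entries are stored B, G, R
}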

How to fill non RGB OpenGL texture in glium?

I use OpenGL shaders to do color conversion from YUV to RGB. For example, for YUV420P I create 3 textures (one for Y, one for U, one for V), sample each one with the texture() call in GLSL, and then use a matrix multiplication to get the RGB value. Each of these textures has the format GL_RED, because each one stores only one component.
This all works in C++. Now I'm using the safe OpenGL Rust library glium. I'm creating the textures like this:
let mipmap = glium::texture::MipmapsOption::NoMipmap;
let format = glium::texture::UncompressedFloatFormat::U8;
let y_texture = glium::texture::texture2d::Texture2d::empty_with_format(&display, format, mipmap, width as u32, height as u32).unwrap();
let u_texture = glium::texture::texture2d::Texture2d::empty_with_format(&display, format, mipmap, (width as u32)/2, (height as u32)/2).unwrap();
let v_texture = glium::texture::texture2d::Texture2d::empty_with_format(&display, format, mipmap, (width as u32)/2, (height as u32)/2).unwrap();
Note that the U and V textures are half the width and height of the Y texture (a quarter of the area), as expected for YUV420P.
As you can see, for YUV420P I've chosen glium::texture::UncompressedFloatFormat::U8, which I think is the equivalent of GL_RED.
The problem is that I don't know how to fill these textures with data. The write method expects something that can be converted into a RawImage2d, but all of the helper constructors for RawImage2d expect an RGB image.
I need a way to fill only Y into the first texture, then only U into the second, and only V into the third.
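One possibility (a sketch only; I haven't verified it against the current glium API): RawImage2d is a plain struct, so a single-channel image can be built by filling in its fields with ClientFormat::U8 and passing it to Texture2d::write:
use std::borrow::Cow;

// Sketch: upload one plane (e.g. the Y plane) into one of the single-channel textures.
// `plane` is assumed to hold exactly width * height bytes.
fn upload_plane(texture: &glium::texture::Texture2d, plane: &[u8], width: u32, height: u32) {
    let raw = glium::texture::RawImage2d {
        data: Cow::Borrowed(plane),
        width,
        height,
        format: glium::texture::ClientFormat::U8, // one byte per texel, matching GL_RED
    };
    let rect = glium::Rect { left: 0, bottom: 0, width, height };
    texture.write(rect, raw);
}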

Converting float to unsigned char causes wrong values

I've created a function that creates a BMP image using RGB values.
The RGB values are stored as floats that range from 0.0 to 1.0.
When writing the values to the BMP file, they need to range from 0 to 255, so I multiply the floats by 255.0.
They also need to be unsigned chars.
EDIT: Unless one of you can think of a better type.
So basically what I do is this:
unsigned char pixel[3];
//BMP Expects BGR
pixel[0] = image.b*255.0;
pixel[1] = image.g*255.0;
pixel[2] = image.r*255.0;
fwrite(&pixel, 1, 3, file);
Where image.r is a float.
There seems to be some kind of loss of data in the conversion because some parts of the image are black when they shouldn't be.
The BMP image is set to 24 bits per pixel.
I was going to post images but I don't have enough reputation.
edit:
BMP image
http://tinypic.com/r/2qw3cdv/8
Printscreen
http://tinypic.com/r/2q3rm07/8
Basically light blue parts become black.
If I multiply by 128 instead, the image is darker but otherwise accurate. It starts getting weird around 180 or so.
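If some of the float components exceed 1.0 (which the "128 works, around 180 it gets weird" observation suggests), the conversion to unsigned char overflows; clamping before the cast is the usual guard. A minimal sketch of the idea, not necessarily the asker's final fix:
/* Sketch: scale and clamp one component to the 0-255 range before truncating. */
static unsigned char to_byte(float c)
{
    float scaled = c * 255.0f;
    if (scaled < 0.0f)   scaled = 0.0f;
    if (scaled > 255.0f) scaled = 255.0f;
    return (unsigned char)scaled;
}

/* Used in place of the raw multiplications, keeping the BGR order: */
pixel[0] = to_byte(image.b);
pixel[1] = to_byte(image.g);
pixel[2] = to_byte(image.r);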

Rendering pixels from array of RGB values in SDL 1.2?

I'm working on an NES emulator right now and I'm having trouble figuring out how to render the pixels. I am using a three-dimensional array to hold the RGB value of each pixel. The array definition looks like this for the 256 x 224 screen size:
byte screenData[224][256][3];
For example, [0][0][0] holds the blue value, [0][0][1] holds the green value, and [0][0][2] holds the red value of the pixel at screen position [0][0].
When the vblank flag goes high, I need to render the screen. When SDL goes to render the screen, the screenData array will be full of the RGB values for each pixel. I was able to find a function named SDL_CreateRGBSurfaceFrom that looks like it might work for what I want to do. However, all of the examples I have seen use one-dimensional arrays for the RGB values, not a three-dimensional array.
What would be the best way for me to render my pixels? It would also be nice if the function allowed me to resize the surface somehow so I didn't have to use a 256 x 224 window size.
You need to store the data as a one-dimensional char array:
int channels = 3; // for an RGB image
char* pixels = new char[img_width * img_height * channels];
// populate pixels with real data ...
SDL_Surface *surface = SDL_CreateRGBSurfaceFrom((void*)pixels,
img_width,
img_height,
channels * 8, // bits per pixel = 24
img_width * channels, // pitch
0x0000FF, // red mask
0x00FF00, // green mask
0xFF0000, // blue mask
0); // alpha mask (none)
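Note that the question's byte screenData[224][256][3] is already one contiguous block of memory, so it can be passed straight to SDL_CreateRGBSurfaceFrom; the masks just have to match the blue-green-red byte order described in the question. A sketch, assuming a little-endian machine:
SDL_Surface *screen = SDL_CreateRGBSurfaceFrom((void*)screenData,
                                               256,       // width
                                               224,       // height
                                               24,        // bits per pixel
                                               256 * 3,   // pitch
                                               0xFF0000,  // red mask (third byte)
                                               0x00FF00,  // green mask (second byte)
                                               0x0000FF,  // blue mask (first byte)
                                               0);        // no alpha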
In SDL 2.0, use SDL_Texture + SDL_TEXTUREACCESS_STREAMING + SDL_RenderCopy; it's faster than calling SDL_RenderDrawPoint per pixel. A sketch of this approach follows the links below.
See:
official example: http://hg.libsdl.org/SDL/file/e12c38730512/test/teststreaming.c
my derived example which does not require blob data and compares both methods: https://github.com/cirosantilli/cpp-cheat/blob/0607da1236030d2e1ec56256a0d12cadb6924a41/sdl/plot2d.c
Related: Why do I get bad performance with SDL2 and SDL_RenderCopy inside a double for loop over all pixels?
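A minimal sketch of that SDL 2.0 approach, assuming SDL_Init(SDL_INIT_VIDEO) has already been called and using SDL_PIXELFORMAT_BGR24 to match the blue-green-red byte order of screenData; SDL_RenderCopy scales the 256 x 224 texture to whatever window size you pick:
SDL_Window   *window   = SDL_CreateWindow("NES", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                          512, 448, 0); // window can be any size
SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, 0);
SDL_Texture  *texture  = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_BGR24,
                                           SDL_TEXTUREACCESS_STREAMING, 256, 224);

// Once per vblank: upload the emulator's buffer and present it, scaled to the window.
SDL_UpdateTexture(texture, NULL, screenData, 256 * 3); // pitch = width * channels
SDL_RenderCopy(renderer, texture, NULL, NULL);
SDL_RenderPresent(renderer);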

Getting the Y value [YCbCr] of one pixel in OpenCV

I'm trying to get the Y value of a pixel from a frame that's in YCbCr color mode.
Here is what I wrote:
cv::Mat frame, yCbCrFrame, helpFrame;
........
cvtColor(frame, yCbCrFrame, CV_RGB2YCrCb); // converting to YCrCb
Vec3b intensity = yCbCrFrame.at<uchar>(YPoint);
uchar yv = intensity.val[0]; // I thought this was my Y value, but it's not; I think it gives me the blue channel of the RGB color space
Any idea what the correct way to do this is?
What about the following code?
Vec3f Y_pix = yCbCrFrame.at<Vec3f>(rows, cols);
int pixelval = Y_pix[0];
(P.S. I haven't tried it yet.)
You need to know both the depth (the numerical format and precision of the channel samples) and the channel count (typically 3, but it can also be 1 for monochrome or 4 for alpha-containing images) ahead of time.
For a 3-channel, 8-bit unsigned integer (a.k.a. byte or uchar) pixel format, each pixel can be accessed with:
mat8UC3.at<cv::Vec3b>(pt);
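Putting that together for this question, a sketch (it assumes the frame is the usual 8-bit, 3-channel BGR Mat, so COLOR_BGR2YCrCb is the right conversion and the luma ends up in channel 0 of the result):
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Sketch: read the Y (luma) value at one point of an 8-bit, 3-channel BGR frame.
// OpenCV's conversion produces channels in Y, Cr, Cb order, so luma is channel 0.
uchar lumaAt(const cv::Mat& frame, cv::Point pt)
{
    cv::Mat yCrCb;
    cv::cvtColor(frame, yCrCb, cv::COLOR_BGR2YCrCb);
    return yCrCb.at<cv::Vec3b>(pt)[0];
}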