I would like to access the RGB values from individual pixels. I know I can get an unsigned char array by calling
unsigned char* pixels = SOIL_load_image(picturename.c_str(), &_w, &_h, 0, SOIL_LOAD_RGB);
However, I don't understand what these chars mean. The documentation says:
// The return value from an image loader is an 'unsigned char *' which points
// to the pixel data. The pixel data consists of *y scanlines of *x pixels,
// with each pixel consisting of N interleaved 8-bit components; the first
// pixel pointed to is top-left-most in the image. There is no padding between
// image scanlines or between pixels, regardless of format. The number of
// components N is 'req_comp' if req_comp is non-zero, or *comp otherwise.
// If req_comp is non-zero, *comp has the number of components that _would_
// have been output otherwise. E.g. if you set req_comp to 4, you will always
// get RGBA output, but you can check *comp to easily see if it's opaque.
However, when I load a 10 by 10 pixel image to test it, I get a huge number of chars in the array (almost 54000), which seems way too much. How do I get the individual pixel colour so that I can do something like this:
int colourvalue = pixel[y*width+x];
I can't seem to find this.
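(Based on the layout the quoted documentation describes, here is a minimal sketch of per-pixel access, assuming SOIL_LOAD_RGB yielded 3 interleaved components: pixel (x, y) starts at byte (y * width + x) * 3.)
// Interleaved RGB: each pixel occupies 3 consecutive bytes.
int idx = (y * _w + x) * 3;          // byte offset of pixel (x, y)
unsigned char r = pixels[idx];       // red component
unsigned char g = pixels[idx + 1];   // green component
unsigned char b = pixels[idx + 2];   // blue component
(If the array holds far more than width * height * 3 bytes, check the values SOIL actually wrote into _w and _h; the image may be larger than expected.)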
I have a Mipi camera that captures frames and stores them into the struct buffer that you can see below. Once the frame is stored, I want to convert it into a cv::Mat; the problem is that the Mat ends up looking like the first pic.
The variable buf.index is just part of the V4L2 API; it identifies which buffer I'm using.
// The structure where the data is stored
struct buffer {
    void *start;
    size_t length;
};
struct buffer *buffers;
//buffer->mat
cv::Mat im = cv::Mat(cv::Size(width, height), CV_8UC3, ((uint8_t*)buffers[buf.index].start));
At first I thought that the data might be corrupted, but storing the image with lodepng results in a nice image without any distortion.
unsigned char* out_buf = (unsigned char*)malloc(width * height * 3);
for (int pix = 0; pix < width*height; ++pix) {
    memcpy(out_buf + pix*3, ((uint8_t*)buffers[buf.index].start) + 4*pix + 1, 3);
}
lodepng_encode24_file(filename, out_buf, width, height);
I bet it's something really silly.
The picture you posted has oddly colored pixels, and the patterns look like there's more information than simply 24 bits per pixel.
After inspecting the data, it appears that V4L gives you four bytes per pixel, and the first byte is always 0xFF (let's call that X). Further, the channel order seems to be XRGB.
Create a cv::Mat using CV_8UC4 to contain the data.
To use the picture in OpenCV, you need BGR order. cv::split the received data into its four color planes, which are X, R, G, B. Use cv::merge to reassemble the B, G, R planes into a picture that OpenCV can handle, or reassemble R, G, B to create a Mat for other purposes (the other library you seem to use).
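A sketch of the split/merge step just described (width, height, buf, and buffers are the names from the question; the XRGB byte order is the assumption made above):
#include <opencv2/opencv.hpp>
#include <vector>

// Wrap the raw buffer as 4-channel data: X, R, G, B per pixel.
cv::Mat im4(cv::Size(width, height), CV_8UC4,
            (uint8_t*)buffers[buf.index].start);

std::vector<cv::Mat> planes;
cv::split(im4, planes);                          // planes = {X, R, G, B}

std::vector<cv::Mat> bgr = {planes[3], planes[2], planes[1]};
cv::Mat im;
cv::merge(bgr, im);                              // BGR Mat that OpenCV can handle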
I'm using a library called Awesomium and it has the following function:
void Awesomium::BitmapSurface::CopyTo(
    unsigned char * dest_buffer,  // output
    int dest_row_span,            // input that I can select
    int dest_depth,               // input that I can select
    bool convert_to_rgba,         // input that I can select
    bool flip_y                   // input that I can select
) const
Copy this bitmap to a certain destination. Will also set the dirty bit to False.
Parameters
dest_buffer A pointer to the destination pixel buffer.
dest_row_span The number of bytes per-row of the destination.
dest_depth The depth (number of bytes per pixel, is usually 4 for BGRA surfaces and 3 for BGR surfaces).
convert_to_rgba Whether or not we should convert BGRA to RGBA.
flip_y Whether or not we should invert the bitmap vertically.
This is great because it gives me an unsigned char * dest_buffer which contains raw bitmap data. I've been trying for several hours to convert this raw bitmap data into some sort of usable format that I can use in SDL but I'm having trouble. =[ Is there any way I can load it into a SDL texture or surface? It would be ideal to have examples for both but if I only get one example (either texture or surface), that is sufficient and I will be very grateful. :) I tried to use SDL_LoadBMP_RW but that crashed. I'm not even sure if I should be using that method.
SDL_LoadBMP_RW is for loading an image in the BMP file format. And it expects an SDL_RWops*, which is a file stream, not a pixel buffer. The function you want is SDL_CreateRGBSurfaceFrom. I believe this call should work for your purposes:
SDL_Surface* surface =
SDL_CreateRGBSurfaceFrom(
pixels, // dest_buffer from CopyTo
width, // in pixels
height, // in pixels
depth, // in bits, so should be dest_depth * 8
pitch, // dest_row_span from CopyTo
Rmask, // RGBA masks, see docs
Gmask,
Bmask,
Amask
);
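As a hedged usage example: with convert_to_rgba set to true and dest_depth = 4, the buffer holds R,G,B,A bytes in memory, so on a little-endian machine the masks would be as follows (bmp_width, bmp_height, and bitmap are placeholder names for illustration, not Awesomium API):
int depth = 4;
int pitch = bmp_width * depth;                        // dest_row_span
unsigned char* pixels = new unsigned char[pitch * bmp_height];
bitmap->CopyTo(pixels, pitch, depth, true, false);    // RGBA, no vertical flip

SDL_Surface* surface = SDL_CreateRGBSurfaceFrom(
    pixels, bmp_width, bmp_height,
    depth * 8,      // 32 bits per pixel
    pitch,
    0x000000FF,     // Rmask: first byte in memory (little-endian)
    0x0000FF00,     // Gmask
    0x00FF0000,     // Bmask
    0xFF000000);    // Amask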
I have an array of uint8_t which represents a greyscale picture, where each pixel is one uint8_t. I would like to display this in a window using the SDL2 library.
I have tried to create an SDL_Surface from the array by doing
mSurface = SDL_CreateRGBSurfaceFrom(mData, mWidth, mHeight, 8, mWidth, 0xFF0000, 0xFF0000, 0xFF0000, 0xFF0000);
However, the problem is that when a depth of 8 bits is passed to SDL_CreateRGBSurfaceFrom (as I have done here), according to the SDL2 wiki, "If depth is 4 or 8 bits, an empty palette is allocated for the surface". If it weren't for that, then I would be able to tell SDL that each pixel is one byte, and to use that byte for the R, G, and B values.
I want a depth of 8 bits per pixel because that's how my data is stored, but I don't want to use a palette.
Is there any way to make SDL not assume I want a palette, and just display the image with the R, G, and B masks all set to that byte?
I understand that an alternative solution would be to convert my greyscale image into RGB by copying each byte three times, and then to display it. However, I would like to avoid doing that if possible because all that copying would be slow.
SDL_CreateRGBSurfaceFrom() does not handle 8-bit true color formats. As you noted, it creates a blank palette for 8-bit depths. The most obvious thing to do is to fill in the palette and just let it do its thing.
Here's some code for a grayscale palette:
SDL_Color colors[256];
int i;
for (i = 0; i < 256; i++)
{
    colors[i].r = colors[i].g = colors[i].b = i;
    colors[i].a = 255;  // keep alpha initialized (fully opaque)
}
SDL_SetPaletteColors(mSurface->format->palette, colors, 0, 256);
Also, a rule of thumb: Never avoid something that works just for being "slow". Do avoid things that are "too slow". You might only know when something is "too slow" by trying it out.
In this case, you might only be loading this image once, after which the performance effect is negligible.
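For completeness, a small sketch (assuming an SDL_Renderer named renderer already exists) showing the paletted surface being displayed; SDL converts the format when creating the texture:
SDL_Texture* tex = SDL_CreateTextureFromSurface(renderer, mSurface);
SDL_RenderCopy(renderer, tex, NULL, NULL);   // stretches to the window
SDL_RenderPresent(renderer);
SDL_DestroyTexture(tex);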
I'm working on a NES emulator right now and I'm having trouble figuring out how to render the pixels. I am using a 3 dimensional array to hold the RGB value of each pixel. The array definition looks like this for the 256 x 224 screen size:
byte screenData[224][256][3];
For example, [0][0][0] holds the blue value, [0][0][1] holds the green value and [0][0][2] holds the red value of the pixel at screen position [0][0].
When the vblank flag goes high, I need to render the screen. When SDL goes to render the screen, the screenData array will be full of the RGB values for each pixel. I was able to find a function named SDL_CreateRGBSurfaceFrom that looked like it may work for what I want to do. However, all of the examples I have seen use 1 dimensional arrays for the RGB values and not a 3 dimensional array.
What would be the best way for me to render my pixels? It would also be nice if the function allowed me to resize the surface somehow so I didn't have to use a 256 x 224 window size.
You need to store the data as a one-dimensional char array:
int channels = 3; // for a RGB image
char* pixels = new char[img_width * img_height * channels];
// populate pixels with real data ...
SDL_Surface *surface = SDL_CreateRGBSurfaceFrom((void*)pixels,
img_width,
img_height,
channels * 8, // bits per pixel = 24
img_width * channels, // pitch
0x0000FF, // red mask
0x00FF00, // green mask
0xFF0000, // blue mask
0); // alpha mask (none)
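As a side note (an assumption beyond the answer above, but grounded in the question): a byte screenData[224][256][3] array is already contiguous in memory, so it can be passed directly; since the question stores bytes in B,G,R order, the red and blue masks swap (little-endian assumed):
SDL_Surface *surface = SDL_CreateRGBSurfaceFrom(
    (void*)&screenData[0][0][0],
    256,          // width in pixels
    224,          // height in pixels
    24,           // bits per pixel
    256 * 3,      // pitch: bytes per row
    0xFF0000,     // red mask  (third byte of each pixel)
    0x00FF00,     // green mask
    0x0000FF,     // blue mask (first byte of each pixel)
    0);           // no alpha mask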
In SDL 2.0, use an SDL_Texture with SDL_TEXTUREACCESS_STREAMING plus SDL_RenderCopy; it's faster than drawing individual points with SDL_RenderDrawPoint.
See:
official example: http://hg.libsdl.org/SDL/file/e12c38730512/test/teststreaming.c
my derived example which does not require blob data and compares both methods: https://github.com/cirosantilli/cpp-cheat/blob/0607da1236030d2e1ec56256a0d12cadb6924a41/sdl/plot2d.c
Related: Why do I get bad performance with SDL2 and SDL_RenderCopy inside a double for loop over all pixels?
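A minimal hedged sketch of that streaming approach for the 256 x 224 case above (renderer is assumed to exist; SDL_PIXELFORMAT_BGR24 matches the question's B,G,R byte order):
SDL_Texture* tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_BGR24,
                                     SDL_TEXTUREACCESS_STREAMING, 256, 224);
void* px;
int pitch;
SDL_LockTexture(tex, NULL, &px, &pitch);
for (int y = 0; y < 224; y++)                  // copy row by row; the texture
    memcpy((uint8_t*)px + y * pitch,           // pitch may differ from 256*3
           screenData[y], 256 * 3);
SDL_UnlockTexture(tex);
SDL_RenderCopy(renderer, tex, NULL, NULL);     // scales to the window size
SDL_RenderPresent(renderer);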
I'm going through NeHe's tutorials and I'm running into a problem when it comes to bump mapping. Up until now I've been using the SOIL library to load image files into OpenGL, which works great. But the bump mapping tutorial uses a pointer to the image data to modify the colors of the image pixel by pixel. To my knowledge I can't do this with the SOIL library. Is there a good way to get this effect now that glaux is deprecated? Apparently we're trying to set the alpha channel to be the value of the red component of the pixel color.
On another note, are we loading these into a char array because C++ doesn't care about the difference between bytes and chars (they're the same size, right?), or is there some other thing I'm missing in all this?
// Load The Logo-Bitmaps
if (Image=auxDIBImageLoad("Data/OpenGL_ALPHA.bmp")) {
    // Create Memory For RGBA8-Texture
    alpha=new char[4*Image->sizeX*Image->sizeY];
    for (int a=0; a<Image->sizeX*Image->sizeY; a++)
        alpha[4*a+3]=Image->data[a*3];      // Pick Only Red Value As Alpha!
    if (!(Image=auxDIBImageLoad("Data/OpenGL.bmp"))) status=false;
    for (a=0; a<Image->sizeX*Image->sizeY; a++) {
        alpha[4*a]=Image->data[a*3];        // R
        alpha[4*a+1]=Image->data[a*3+1];    // G
        alpha[4*a+2]=Image->data[a*3+2];    // B
    }
SOIL_load_image() should give you the raw image bits:
/**
Loads an image from disk into an array of unsigned chars.
Note that *channels return the original channel count of the
image. If force_channels was other than SOIL_LOAD_AUTO,
the resulting image has force_channels, but *channels may be
different (if the original image had a different channel
count).
\return 0 if failed, otherwise returns 1
**/
unsigned char*
SOIL_load_image
(
const char *filename,
int *width, int *height, int *channels,
int force_channels
);
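Since SOIL hands back the same kind of raw buffer, the glaux-based NeHe code above can be reworked on top of it. A hedged sketch (file name taken from the tutorial; the alpha channel is filled from the red component, as in the original):
int width, height, channels;
unsigned char *img = SOIL_load_image("Data/OpenGL_ALPHA.bmp",
                                     &width, &height, &channels,
                                     SOIL_LOAD_RGB);
if (img) {
    unsigned char *rgba = new unsigned char[4 * width * height];
    for (int a = 0; a < width * height; a++) {
        rgba[4*a]   = img[3*a];     // R
        rgba[4*a+1] = img[3*a+1];   // G
        rgba[4*a+2] = img[3*a+2];   // B
        rgba[4*a+3] = img[3*a];     // pick only red value as alpha
    }
    SOIL_free_image_data(img);
    // ... upload rgba as a GL_RGBA texture, then delete[] rgba
}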