I am trying to encode an MP4 video from raw YUV frame data, but I am not sure how I can fill the plane data (preferably without using other libraries like ffmpeg).
The frame data is already in I420 format and does not need conversion.
Here is what I am trying to do:
const char *frameData = /* Raw frame data */;
x264_t *encoder = x264_encoder_open(&param);
x264_picture_t imgInput, imgOutput;
x264_picture_alloc(&imgInput, X264_CSP_I420, width, height);
// how can I fill the struct data of imgInput
x264_nal_t *nals;
int i_nals;
int frameSize = x264_encoder_encode(encoder, &nals, &i_nals, &imgInput, &imgOutput);
The equivalent command line that I have found is:
x264 --output video.mp4 --fps 15 --input-res 1280x800 imgdata_01.raw
But I could not figure out how the app does it.
Thanks.
Look at the libx264 API usage example. This example uses fread() to fill a frame allocated by x264_picture_alloc() with actual I420 data from stdin. If you already have I420 data in memory and want to skip the memcpy step, then instead you can:
Use x264_picture_init() instead of x264_picture_alloc() and x264_picture_clean(), because you don't need to allocate memory on the heap for the frame data.
Fill x264_picture_t.img struct fields:
i_csp = X264_CSP_I420;
i_plane = 3;
plane[0] = pointer to Y-plane;
i_stride[0] = stride in bytes for Y-plane;
plane[1] = pointer to U-plane;
i_stride[1] = stride in bytes for U-plane;
plane[2] = pointer to V-plane;
i_stride[2] = stride in bytes for V-plane;
To complete the answer above, here is an example of how to fill an x264_picture_t image, following the x264_picture_init() route described above.
int fillImage(uint8_t *buffer, int width, int height, x264_picture_t *pic) {
    // The planes point straight into the caller's buffer, so nothing is
    // heap-allocated (x264_picture_alloc() would allocate planes we would
    // only overwrite and leak).
    x264_picture_init(pic);
    pic->img.i_csp = X264_CSP_I420;
    pic->img.i_plane = 3; // Y, U and V
    pic->img.i_stride[0] = width;
    // U and V planes are half the width of the Y plane
    pic->img.i_stride[1] = width / 2;
    pic->img.i_stride[2] = width / 2;
    int uvsize = ((width + 1) >> 1) * ((height + 1) >> 1);
    pic->img.plane[0] = buffer;                     // Y plane pointer
    pic->img.plane[1] = buffer + (width * height);  // U plane pointer
    pic->img.plane[2] = pic->img.plane[1] + uvsize; // V plane pointer
    return 0;
}
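For completeness, a minimal sketch of the encode loop that consumes such a picture (error handling omitted; frameData, width and height are the question's variables, everything else is an assumption):

x264_param_t param;
x264_param_default_preset(&param, "medium", NULL);
param.i_width  = width;
param.i_height = height;
param.i_csp    = X264_CSP_I420;
x264_t *encoder = x264_encoder_open(&param);

x264_picture_t picIn, picOut;
fillImage((uint8_t *)frameData, width, height, &picIn);
picIn.i_pts = 0; // increment this once per encoded frame

x264_nal_t *nals;
int i_nals;
int frameSize = x264_encoder_encode(encoder, &nals, &i_nals, &picIn, &picOut);
if (frameSize > 0) {
    // nals[0].p_payload holds frameSize bytes of encoded data
    // (x264 lays the NAL payloads out contiguously)
}

Note that libx264 itself only emits raw H.264 NAL units; the MP4 muxing done by the x264 command-line tool comes from its built-in muxer, so to produce an actual .mp4 file you still need a muxer on top of this.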
I'm trying to get started with the VP8 library. I'm not building it the standard way they tell you to; I just loaded all of the main files and the "encoder" folder into a new Visual Studio C++ DLL project and included the C files in an extern "C" DLL export function, which so far builds fine. I just have no idea where to start with the API to encode, say, 3 frames of ARGB data into a very basic video.
The only example I could find is simple_encoder.c in the examples folder, but its premise is that it loads another file, parses its frames and converts them, so it seems a bit complicated. I just want to be able to pass in a byte array of a few ARGB frames and have it output a very simple VP8 video.
I've seen How to encode series of images into VP8 using WebM VP8 Encoder API? (C/C++), but the accepted answer just links to the build instructions and references the general specification of the VP8 format; the closest thing there is the example encoding parameters. I just want to do everything from C++, and I can't seem to find any other examples besides the default simple_encoder.c.
Just to cite some of the relevant parts I think I understand, but still need more help on:
//in int main...
...
vpx_image_t raw;
if (!vpx_img_alloc(&raw, VPX_IMG_FMT_I420, info.frame_width,
info.frame_height, 1)) {
//"Failed to allocate image." error
}
So I think I understand that part for the most part. VPX_IMG_FMT_I420 is the only piece that isn't defined in this file itself; it's in vpx_image.h, first as
#define VPX_IMG_FMT_PLANAR
//then after...
typedef enum vpx_img_fmt {
VPX_IMG_FMT_NONE,
VPX_IMG_FMT_RGB24, /**< 24 bit per pixel packed RGB */
///some other formats....
VPX_IMG_FMT_ARGB, /**< 32 bit packed ARGB, alpha=255 */
VPX_IMG_FMT_YV12 = VPX_IMG_FMT_PLANAR | VPX_IMG_FMT_UV_FLIP | 1, /**< planar YVU */
VPX_IMG_FMT_I420 = VPX_IMG_FMT_PLANAR | 2,
} vpx_img_fmt_t; /**< alias for enum vpx_img_fmt */
So I guess part of my question is answered already just from writing this: one of the formats is VPX_IMG_FMT_ARGB, although I don't know where it's defined. I'm guessing in the above code I would replace VPX_IMG_FMT_I420 with it:
const VpxInterface *encoder = get_vpx_encoder_by_name("vp8");
vpx_image_t raw;
VpxVideoInfo info = { 0, 0, 0, { 0, 0 } };
info.frame_width = 1920;
info.frame_height = 1080;
info.codec_fourcc = encoder->fourcc;
info.time_base.numerator = 1;
info.time_base.denominator = 24;
vpx_image_t *didIt = vpx_img_alloc(&raw, VPX_IMG_FMT_ARGB,
    info.frame_width, info.frame_height/*example width and height*/, 1);
//check didIt (vpx_img_alloc returns NULL on failure)..
vpx_codec_enc_cfg_t cfg;
vpx_codec_ctx_t codec;
vpx_codec_err_t res;
res = vpx_codec_enc_config_default(encoder->codec_interface(), &cfg, 0);
//check res != VPX_CODEC_OK for error
cfg.g_w = info.frame_width;
cfg.g_h = info.frame_height;
cfg.g_timebase.num = info.time_base.numerator;
cfg.g_timebase.den = info.time_base.denominator;
cfg.rc_target_bitrate = 200;
VpxVideoWriter *writer = NULL;
writer = vpx_video_writer_open(outfile_arg, kContainerIVF, &info);
//check if !writer for error
vpx_codec_err_t startIt = vpx_codec_enc_init(&codec, encoder->codec_interface(), &cfg, 0);
//not even sure where codec was set actually..
//check startIt != VPX_CODEC_OK for an error starting
//now the next part in the original is where it reads from the input file, but instead
//I need to pass in an array of some ARGB byte arrays..
//thing is, in the next step they use a while loop for
//vpx_img_read(&raw, fopen("path/to/YV12formatVideo", "rb"))
//to set the contents of the raw vpx image allocated earlier, then
//they call another program that writes it to the writer object,
//but I don't know how to read the actual ARGB data directly into the raw image
//without using fopen, so that's one question (review at end)
//so I'll just put a placeholder here for the **question**
//assuming I have an array of byte arrays stored individually
//for simplicity sake
const int size = 1920 * 1080 * 4;
uint8_t imgOne[size] = {/*some big byte array*/};
uint8_t imgTwo[size] = {/*some big byte array*/};
uint8_t imgThree[size] = {/*some big byte array*/};
uint8_t *images[] = {imgOne, imgTwo, imgThree};
int framesDone = 0;
int maxFrames = 3;
//so now I can replace the while loop with a filler function
//until I find out how to set the raw image with ARGB data
while(framesDone < maxFrames) {
magicalFunctionToSetARGBOfRawImage(&raw, images[framesDone]);
encode_frame(&codec, &raw, framesDone, 0, writer);
framesDone++;
}
//now apparently it needs to be flushed after
while(encode_frame(&codec, 0, -1, 0, writer)){}
vpx_img_free(&raw);
vpx_codec_err_t isDestroyed = vpx_codec_destroy(&codec);
//check isDestroyed != VPX_CODEC_OK for error
//now we gotta define the encode_Frames function, but simpler
//(and make it above other function for reference purposes
//or in header
static int encode_frame(
    vpx_codec_ctx_t *coydek,
    vpx_image_t *pic,
    int currentFrame,
    int flags,
    VpxVideoWriter *koysayv/*writer*/
) {
    //now to substitute their encode_frame function for
    //the actual raw calls to simplify things
    const vpx_codec_err_t res = vpx_codec_encode(
        coydek,
        pic,
        currentFrame,   //presentation timestamp
        1,              //duration of this frame in timebase units
        flags,          //per-frame encoding flags, 0 is fine
        VPX_DL_REALTIME //deadline, different than simple_encoder
    );
    if (res != VPX_CODEC_OK) return 0; //error here (VPX_CODEC_OK == 0 means success)
    vpx_codec_iter_t iter = 0;
    const vpx_codec_cx_pkt_t *pkt = 0;
    int gotThings = 0;
    while ((pkt = vpx_codec_get_cx_data(coydek, &iter)) != 0) {
        gotThings = 1;
        if (pkt->kind == VPX_CODEC_CX_FRAME_PKT) {
            //only frame packets carry compressed data; the encoder can
            //also emit other packet kinds (e.g. statistics), hence the check
            const int keyframe = (pkt->data.frame.flags & VPX_FRAME_IS_KEY) != 0;
            //the & masks out the VPX_FRAME_IS_KEY bit of the frame's
            //flag word; non-zero means this frame is a keyframe
            (void)keyframe; //not used further here, kept from the example
            int wroteFrame = vpx_video_writer_write_frame(
                koysayv,
                pkt->data.frame.buf, //the encoded frame data
                pkt->data.frame.sz,
                pkt->data.frame.pts);
            if (!wroteFrame) return 0; //error
        }
    }
    return gotThings;
}
Thing is though, I don't know how to actually read the ARGB data into the raw image buffer itself. As mentioned above, in the original example they use
vpx_img_read(&raw, fopen("path/to/file", "rb"))
but if I'm starting off with the byte arrays themselves, what function do I use instead of reading from a file?
I have a feeling it can be solved using the source code for vpx_img_read, found in tools_common.c:
int vpx_img_read(vpx_image_t *img, FILE *file) {
int plane;
for (plane = 0; plane < 3; ++plane) {
unsigned char *buf = img->planes[plane];
const int stride = img->stride[plane];
const int w = vpx_img_plane_width(img, plane) *
((img->fmt & VPX_IMG_FMT_HIGHBITDEPTH) ? 2 : 1);
const int h = vpx_img_plane_height(img, plane);
int y;
for (y = 0; y < h; ++y) {
if (fread(buf, 1, w, file) != (size_t)w) return 0;
buf += stride;
}
}
return 1;
}
although I personally am not experienced enough to know how to get a single frame's ARGB data in. I think the key part is fread(buf, 1, w, file), which reads part of the file into buf; since buf points at img->planes[plane], reading into buf writes directly into that plane. But I'm not sure that is the case, and I'm also not sure how to replace the fread so it reads from a byte array that is already loaded into memory instead of a file.
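For what it's worth, here is a minimal sketch of that replacement, assuming the source bytes are already laid out as packed I420 in memory (vpx_img_plane_width and vpx_img_plane_height are the helpers tools_common.c itself uses; they are static there, so you may need to copy them out):

size_t vpx_img_read_from_memory(vpx_image_t *img, const unsigned char *src) {
  // Same walk as vpx_img_read(), but memcpy from memory instead of fread:
  // copy each plane row by row, honoring the image's stride.
  const unsigned char *p = src;
  int plane;
  for (plane = 0; plane < 3; ++plane) {
    unsigned char *buf = img->planes[plane];
    const int stride = img->stride[plane];
    const int w = vpx_img_plane_width(img, plane);
    const int h = vpx_img_plane_height(img, plane);
    int y;
    for (y = 0; y < h; ++y) {
      memcpy(buf, p, w);
      p += w;
      buf += stride;
    }
  }
  return (size_t)(p - src); // bytes consumed from src
}

Note this still assumes I420 input; as the answer below explains, ARGB has to be converted first because libvpx does not accept it directly.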
VPX_IMG_FMT_ARGB is not defined because it is not supported by libvpx (as far as I have seen). To compress an image using this library, you must first convert it to one of the supported formats, like I420 (VPX_IMG_FMT_I420). The code here (not mine) does it well for the RGB format: https://gist.github.com/racerxdl/8164330. If you don't want to use libswscale to do the conversion from RGB to I420, you can do something like this (this code converts an RGBA array of bytes to an I420 vpx_image that can be used by libvpx):
unsigned int tx = <width of your image>
unsigned int ty = <height of your image>
unsigned char *image = <array of bytes : RGBARGBA... of size ty*tx*4>
vpx_image_t *imageVpx = <result that must have been properly initialized by libvpx>
imageVpx->stride[VPX_PLANE_U ] = tx/2;
imageVpx->stride[VPX_PLANE_V ] = tx/2;
imageVpx->stride[VPX_PLANE_Y ] = tx;
imageVpx->stride[VPX_PLANE_ALPHA] = tx;
imageVpx->planes[VPX_PLANE_U ] = new unsigned char[ty*tx/4];
imageVpx->planes[VPX_PLANE_V ] = new unsigned char[ty*tx/4];
imageVpx->planes[VPX_PLANE_Y ] = new unsigned char[ty*tx ];
imageVpx->planes[VPX_PLANE_ALPHA] = new unsigned char[ty*tx ];
unsigned char *planeY = imageVpx->planes[VPX_PLANE_Y ];
unsigned char *planeU = imageVpx->planes[VPX_PLANE_U ];
unsigned char *planeV = imageVpx->planes[VPX_PLANE_V ];
unsigned char *planeA = imageVpx->planes[VPX_PLANE_ALPHA];
for (unsigned int y = 0; y < ty; y++)
{
    if (!(y % 2))
    {
        // Even rows: emit Y and A for both pixels of each 2x2 block,
        // and U and V once per block (4:2:0 subsampling).
        // max/min are std::max/std::min (or your own clamps).
        for (unsigned int x = 0; x < tx; x += 2)
        {
            int r = *image++;
            int g = *image++;
            int b = *image++;
            int a = *image++;
            *planeY++ = max(0, min(255, (( 66*r + 129*g +  25*b) >> 8) + 16));
            *planeU++ = max(0, min(255, ((-38*r + -74*g + 112*b) >> 8) + 128));
            *planeV++ = max(0, min(255, ((112*r + -94*g + -18*b) >> 8) + 128));
            *planeA++ = a;
            r = *image++;
            g = *image++;
            b = *image++;
            a = *image++;
            *planeA++ = a;
            *planeY++ = max(0, min(255, ((66*r + 129*g + 25*b) >> 8) + 16));
        }
    }
    else
    {
        // Odd rows: only Y and A; the chroma for these pixels was
        // already taken from the row above.
        for (unsigned int x = 0; x < tx; x++)
        {
            int const r = *image++;
            int const g = *image++;
            int const b = *image++;
            int const a = *image++;
            *planeA++ = a;
            *planeY++ = max(0, min(255, ((66*r + 129*g + 25*b) >> 8) + 16));
        }
    }
}
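Once filled this way, the vpx_image_t can be handed to vpx_codec_encode() exactly as in the loop above. The integer coefficients are the usual BT.601 integer approximation, and VP8 itself only encodes the Y, U and V planes, so the alpha plane is carried along here but ignored by the encoder.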
I have input from a captured camera frame as a CMSampleBufferRef and I need to get the raw pixels, preferably in a C type like uint8_t[].
I also need to find the color scheme of the input image.
I know how to convert a CMSampleBufferRef to a UIImage and then to NSData in PNG format, but I don't know how to get the raw pixels from there. Perhaps I could get them directly from the CMSampleBufferRef/CIImage?
This code shows the need and the missing bits.
Any thoughts where to start?
int convertCMSampleBufferToPixelArray (CMSampleBufferRef sampleBuffer)
{
// inputs
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
CIContext *imgContext = [CIContext new];
CGImageRef cgImage = [imgContext createCGImage:ciImage fromRect:ciImage.extent];
UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
NSData *nsData = UIImagePNGRepresentation(uiImage);
// Need to fill this gap
uint8_t* data = XXXXXXXXXXXXXXXX;
ImageFormat format = XXXXXXXXXXXXXXXX; // one of: GRAY8, RGB_888, YV12, BGRA_8888, ARGB_8888
// sample showing expected data values
// this routine converts the image data to gray
//
int width = uiImage.size.width;
int height = uiImage.size.height;
const int size = width * height;
std::unique_ptr<uint8_t[]> new_data(new uint8_t[size]);
for (int i = 0; i < size; ++i) {
new_data[i] = uint8_t(data[i * 3] * 0.299f + data[i * 3 + 1] * 0.587f +
data[i * 3 + 2] * 0.114f + 0.5f);
}
return 1;
}
Here are some pointers you can use to search for more info. It's all nicely documented and you shouldn't have an issue.
int convertCMSampleBufferToPixelArray (CMSampleBufferRef sampleBuffer) {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (imageBuffer == NULL) {
return -1;
}
// Get address of the image buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t* data = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
// Get size
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Get bytes per row
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// At `data` you have a bytesPerRow * height bytes of the image data
// To get pixel info you can call CVPixelBufferGetPixelFormatType, ...
// you can call CVImageBufferGetColorSpace and inspect it, ...
// When you're done, unlock the base address
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return 0;
}
There are a couple of things you should be aware of.
The first one is that the buffer can be planar. Check CVPixelBufferIsPlanar, CVPixelBufferGetPlaneCount, CVPixelBufferGetBytesPerRowOfPlane, etc.; there is a sketch of the planar walk after the next snippet.
The second one is that you have to calculate the pixel size based on CVPixelBufferGetPixelFormatType. Something like:
OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);
size_t pixelSize;
switch (pixelFormat) {
case kCVPixelFormatType_32BGRA:
case kCVPixelFormatType_32ARGB:
case kCVPixelFormatType_32ABGR:
case kCVPixelFormatType_32RGBA:
pixelSize = 4;
break;
// + other cases
}
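For the planar case mentioned above, a minimal sketch of walking the planes (the per-plane CVPixelBuffer calls live in the same CoreVideo documentation):

if (CVPixelBufferIsPlanar(imageBuffer)) {
    size_t planeCount = CVPixelBufferGetPlaneCount(imageBuffer);
    for (size_t p = 0; p < planeCount; p++) {
        uint8_t *planeData = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, p);
        size_t planeHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, p);
        size_t planeBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, p);
        // each plane spans planeBytesPerRow * planeHeight bytes at planeData
    }
}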
Let's say that the buffer is not planar and:
CVPixelBufferGetWidth returns 200 (pixels)
Your pixelSize is 4 (calculated bytes per row is 200 * 4 = 800)
CVPixelBufferGetBytesPerRow can return anything >= 800
In other words, the pointer you have is not a pointer to a contiguous buffer. If you need row data you have to do something like this:
uint8_t* data = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
// Get size
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t pixelSize = 4; // Let's pretend it's calculated pixel size
size_t realRowSize = width * pixelSize;
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
for (int row = 0 ; row < height ; row++) {
// bytesPerRow acts like an offset where the next row starts
// bytesPerRow can be >= realRowSize
uint8_t *rowData = data + row * bytesPerRow;
// realRowSize = how many bytes are available for this row
// copy them somewhere
}
You have to allocate a buffer and copy the row data there if you'd like to have a contiguous buffer. How many bytes to allocate? CVPixelBufferGetDataSize.
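To make that concrete, a minimal sketch of the contiguous copy, assuming pixelSize was derived from the pixel format as above (the per-row padding is simply dropped):

size_t realRowSize = width * pixelSize;
uint8_t *contiguous = (uint8_t *)malloc(realRowSize * height);
for (size_t row = 0; row < height; row++) {
    // advance bytesPerRow in the (possibly padded) source,
    // but only realRowSize in the packed destination
    memcpy(contiguous + row * realRowSize, data + row * bytesPerRow, realRowSize);
}
// ... use contiguous, then free(contiguous)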
Here is some code that decodes WebM frames and puts them in a buffer:
image->planes[p] = pointer to the top left pixel
image->linesize[p] = stride between rows
framesArray = vector of unsigned char*
while ( videoDec->getImage(*image) == VPXDecoder::NO_ERROR)
{
const int w = image->getWidth(p);
const int h = image->getHeight(p);
int offset = 0;
for (int y = 0; y < h; y++)
{
// fwrite(image->planes[p] + offset, 1, w, pFile);
for(int i=0;i<w;i++){
framesArray.at(count)[i+(w*y)] = *(image->planes[p]+offset+ i) ;
}
offset += image->linesize[p];
}
}
How can I write into the buffer line by line rather than pixel by pixel, or otherwise optimize writing the frame into the buffer?
If the source image and destination buffer share the same width, height and bits per pixel, you can use std::copy to copy the whole image into it:
std::copy(image->planes[p], image->planes[p] + image->getHeight(p) * image->linesize[p], framesArray.at(count));
If it is the same bits per pixel but a different width and height, you can use std::copy line by line, as in the sketch below.
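A minimal sketch of the per-line variant, where dstWidth and dstHeight are hypothetical names for the packed destination buffer's geometry (dstWidth <= image->linesize[p]):

unsigned char *dst = framesArray.at(count);
for (int y = 0; y < dstHeight; y++) {
    // one row at a time: source rows are linesize[p] bytes apart,
    // destination rows are packed back to back
    std::copy(image->planes[p] + y * image->linesize[p],
              image->planes[p] + y * image->linesize[p] + dstWidth,
              dst + y * dstWidth);
}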
I am trying to extract frames from a stream, which I create with GStreamer, and to save them with FreeImage or QImage (the latter is just for testing).
GstMapInfo bufferInfo;
GstBuffer *sampleBuffer;
GstStructure *capsStruct;
GstSample *sample;
GstCaps *caps;
int width, height;
const int BitsPP = 32;
/* Retrieve the buffer */
g_signal_emit_by_name (sink, "pull-sample", &sample);
if (sample) {
sampleBuffer = gst_sample_get_buffer(sample);
gst_buffer_map(sampleBuffer,&bufferInfo,GST_MAP_READ);
if (!bufferInfo.data) {
g_printerr("Warning: could not map GStreamer buffer!\n");
throw;
}
caps = gst_sample_get_caps(sample);
capsStruct= gst_caps_get_structure(caps,0);
gst_structure_get_int(capsStruct,"width",&width);
gst_structure_get_int(capsStruct,"height",&height);
auto bitmap = FreeImage_Allocate(width, height, BitsPP,0,0,0);
memcpy( FreeImage_GetBits( bitmap ), bufferInfo.data, width * height * (BitsPP/8));
// int pitch = ((((BitsPP * width) + 31) / 32) * 4);
// auto bitmap = FreeImage_ConvertFromRawBits(bufferInfo.data,width,height,pitch,BitsPP,0, 0, 0);
FreeImage_FlipHorizontal(bitmap);
bitmap = FreeImage_RotateClassic(bitmap,180);
static int id = 0;
std::string name = "/home/stadmin/pic/sample" + std::to_string(id++) + ".png";
#ifdef FREE_SAVE
FreeImage_Save(FIF_PNG,bitmap,name.c_str());
#endif
#ifdef QT_SAVE
//Format_ARGB32
QImage image(bufferInfo.data,width,height,QImage::Format_ARGB32);
image.save(QString::fromStdString(name));
#endif
fibPipeline.push(bitmap);
gst_sample_unref(sample);
gst_buffer_unmap(sampleBuffer, &bufferInfo);
return GST_FLOW_OK;
The color output with FreeImage is totally wrong, just like with Qt's Format_ARGB32 (greens look blue, blues look orange, etc.), but when I test with Qt's Format_RGBA8888 I get correct output. I need to use FreeImage, and I wish to learn how to correct this.
Since you say Qt succeeds using Format_RGBA8888, I can only guess: the GStreamer frame has its bytes in RGBA order, while FreeImage expects BGRA (on little-endian machines).
Quick fix:
//have a buffer the same length of the incoming bytes
size_t length = width * height * (BitsPP/8);
BYTE * bytes = (BYTE *) malloc(length);
//copy the incoming bytes to it, in the right order:
size_t index = 0;
while(index < length)
{
bytes[index] = bufferInfo.data[index + 2]; //B
bytes[index + 1] = bufferInfo.data[index + 1]; //G
bytes[index + 2] = bufferInfo.data[index]; //R
bytes[index + 3] = bufferInfo.data[index + 3]; //A
index += 4;
}
//fill the bitmap using the buffer
auto bitmap = FreeImage_Allocate(width, height, BitsPP,0,0,0);
memcpy( FreeImage_GetBits( bitmap ), bytes, length);
//don't forget to
free(bytes);
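Alternatively, you can avoid the per-frame swizzle entirely by asking GStreamer for BGRA frames in the first place. A sketch, assuming sink is your appsink element:

// Request BGRA from the appsink so FreeImage can use the bytes as-is
GstCaps *appCaps = gst_caps_from_string("video/x-raw,format=BGRA");
g_object_set(sink, "caps", appCaps, NULL);
gst_caps_unref(appCaps);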
I need to create a CImage from a byte array (actually, it's an array of unsigned char, but I can cast to whatever form is necessary). The byte array is in the form "RGBRGBRGB...". The new image needs to contain a copy of the image bytes, rather than using the memory of the byte array itself.
I have tried many different ways of achieving this -- including going through various HBITMAP creation functions, trying to use BitBlt -- and nothing so far has worked.
To test whether the function works, it should pass this test:
BYTE* imgBits;
int width;
int height;
int Bpp; // BYTES per pixel (e.g. 3)
getImage(&imgBits, &width, &height, &Bpp); // get the image bits
// This is the magic function I need!!!
CImage img = createCImage(imgBits, width, height, Bpp);
// Test the image
BYTE* data = img.GetBits(); // data should now have the same data as imgBits
All implementations of createCImage() so far have ended up with data pointing to an empty (zero filled) array.
CImage supports DIBs quite neatly and has a SetPixel() method so you could presumably do something like this (uncompiled, untested code ahead!):
CImage img;
img.Create(width, height, 24 /* bpp */, 0 /* No alpha channel */);
int nPixel = 0;
for(int row = 0; row < height; row++)
{
for(int col = 0; col < width; col++)
{
BYTE r = imgBits[nPixel++];
BYTE g = imgBits[nPixel++];
BYTE b = imgBits[nPixel++];
img.SetPixel(col, row, RGB(r, g, b)); // note: CImage::SetPixel takes (x, y)
}
}
Maybe not the most efficient method but I should think it is the simplest approach.
Use memcpy to copy the data, then SetDIBits or SetDIBitsToDevice depending on what you need to do. Take care though, the scanlines of the raw image data are aligned on 4-byte boundaries (IIRC, it's been a few years since I did this) so the data you get back from GetDIBits will never be exactly the same as the original data (well it might, depending on the image size).
So most likely you will need to memcpy scanline by scanline.
Thanks everyone, I managed to solve it in the end with your help. It mainly involved @tinman and @Roel's suggestion to use SetDIBitsToDevice(), but it involved a bit of extra bit-twiddling and memory management, so I thought I'd share my end point here.
In the code below, I assume that width, height and Bpp (Bytes per pixel) are set, and that data is a pointer to the array of RGB pixel values.
// Create the header info
BITMAPINFOHEADER bmInfohdr;
bmInfohdr.biSize = sizeof(BITMAPINFOHEADER);
bmInfohdr.biWidth = width;
bmInfohdr.biHeight = -height;
bmInfohdr.biPlanes = 1;
bmInfohdr.biBitCount = Bpp*8;
bmInfohdr.biCompression = BI_RGB;
bmInfohdr.biSizeImage = width*height*Bpp;
bmInfohdr.biXPelsPerMeter = 0;
bmInfohdr.biYPelsPerMeter = 0;
bmInfohdr.biClrUsed = 0;
bmInfohdr.biClrImportant = 0;
BITMAPINFO bmInfo;
bmInfo.bmiHeader = bmInfohdr;
bmInfo.bmiColors[0].rgbBlue=255;
// Allocate some memory and some pointers
unsigned char * p24Img = new unsigned char[width*height*3];
BYTE *pTemp,*ptr;
pTemp=(BYTE*)data;
ptr=p24Img;
// Convert image from RGB to BGR
for (DWORD index = 0; index < width*height ; index++)
{
unsigned char r = *(pTemp++);
unsigned char g = *(pTemp++);
unsigned char b = *(pTemp++);
*(ptr++) = b;
*(ptr++) = g;
*(ptr++) = r;
}
// Create the CImage
CImage im;
im.Create(width, height, 24, NULL);
HDC dc = im.GetDC();
SetDIBitsToDevice(dc, 0,0,width,height,0,0, 0, height, p24Img, &bmInfo, DIB_RGB_COLORS);
im.ReleaseDC();
delete[] p24Img;
Here is a simpler solution. You can use GetPixelAddress(...) instead of all this BITMAPINFOHEADER and SetDIBitsToDevice work. Another problem I solved was with 8-bit images, which need to have a color table defined.
CImage outImage;
outImage.Create(width, height, channelCount * 8);
int lineSize = width * channelCount;
if (channelCount == 1)
{
// Define the color table
RGBQUAD* tab = new RGBQUAD[256];
for (int i = 0; i < 256; ++i)
{
tab[i].rgbRed = i;
tab[i].rgbGreen = i;
tab[i].rgbBlue = i;
tab[i].rgbReserved = 0;
}
outImage.SetColorTable(0, 256, tab);
delete[] tab;
}
// Copy pixel values
// Warning: does not convert from RGB to BGR
for ( int i = 0; i < height; i++ )
{
void* dst = outImage.GetPixelAddress(0, i);
const void* src = /* put the pointer to the i'th source row here */;
memcpy(dst, src, lineSize);
}