How to read an image in C++ as a 2D array? I need to create a C/C++ program that reads an image (in any format) as a 2D array so I can inspect the pixel values (0-255), divide the image into blocks, apply different block-based compression methods (BTC, AMBTC, MMBTC) to the pixel blocks, and save the new image by hand, without using existing libraries (Magick++ must not be used).
thanks in advance
Here's some 'outline' code using MFC's CImage class that may help you. I've shown how to use the basic Load and Save operations, and how to get a 'raw' array of pixel data (note: it's best to convert to 32-bit format so you can be sure the DWORD pointer you get really points to a width x height array; other BPP formats can give strange results):
First, load from file (CImage will know or guess the format from the file extension):
CImage original;
original.Load("Yourfile.jpg"); // Use actual file path, obviously
int pw = original.GetWidth(), ph = original.GetHeight(); // Dimensions
CImage working; // Use this to hold our 32-bit image
working.Create(pw, ph, 32);
// Next, copy image from original to working...
HDC hDC = working.GetDC();
original.BitBlt(hDC, 0, 0, SRCCOPY);
working.ReleaseDC();
// Get a DWORD pointer to the pixel data...
BITMAP bmp;
GetObject(working.operator HBITMAP(), sizeof(BITMAP), &bmp);
DWORD* pixbuf = static_cast<DWORD*>(bmp.bmBits);
// We can now access any pixel(x,y) data using: pixbuf[x + y * pw]
You now do all sorts of work on your image buffer, using the pixbuf array, as stated in the comment. For clarity: each DWORD (32-bit unsigned) in the buffer holds one pixel's RGBA data (where A is the 'alpha' channel, set to zero here), but the bytes are stored in reverse order; in memory each pixel is laid out as B, G, R, A, so read as a DWORD the value is 0x00RRGGBB (blue in the lowest byte, alpha in the highest).
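For instance, here's a minimal sketch of splitting one pixel into its 0-255 channel values, assuming the 32-bpp layout just described (x and y are whatever coordinates you're inspecting):
// Read pixel (x, y) and split it into its individual 0-255 channel values
DWORD pix   = pixbuf[x + y * pw];
BYTE  blue  = (BYTE)( pix        & 0xFF); // lowest byte is blue
BYTE  green = (BYTE)((pix >>  8) & 0xFF);
BYTE  red   = (BYTE)((pix >> 16) & 0xFF); // the top byte is the (zero) alpha channel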
When you're done, you can save your modified image as follows:
CImage savepic;
savepic.Create(pw, ph, 24); // Change 24 to whatever BPP you require
hDC = savepic.GetDC();
working.BitBlt(hDC, 0, 0, SRCCOPY); // Copies modified image to output
savepic.ReleaseDC();
savepic.Save("NewFile.jpg"); // CImage understands what format to use based on the extension
Of course, in a real-world program there are error checks you will need to make (most CImage methods return a status indicator, and GetLastError() can be used), and you would probably be safer copying the pixbuf data to a separate memory buffer - but hopefully this brief outline will help you get started.
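For instance, a minimal sketch of checking the Load call above might look like this (assuming the enclosing function returns an HRESULT):
// Check the HRESULT returned by CImage::Load before touching the image.
HRESULT hr = original.Load("Yourfile.jpg");
if (FAILED(hr))
{
// Report hr (or ::GetLastError()) and bail out.
return hr;
}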
Feel free to ask for further clarification and/or explanation.
I have a bitmap as raw RGBA values in the following code, from a library I found on the net called "svgren":
auto img = svgren::render(*dom, width, height); //uses 96 dpi by default
//At this point the 'width' and 'height' variables were filled with
//the actual width and height of the rendered image.
//Returned 'img' is a std::vector<std::uint32_t> holding array of RGBA values.
I need to know how to get this picture into a CBitmap so I can display it in an MFC Picture control. I can presize it and I know how to display a bitmap in the control. What I can't do is load the RGBA values into the bitmap. Any ideas please?
The CBitmap::CreateBitmap member function can construct a bitmap from a block of memory. The lpBits argument expects a pointer to byte values. Passing a pointer to an array of uint32_t values is technically undefined behavior (although it will work on all little-endian implementations of Windows).
Special care must be taken for the memory layout. This is only documented for the Windows API call CreateBitmap and not at all present in the MFC documentation:
Each scan line in the rectangle must be word[1] aligned (scan lines that are not word aligned must be padded with zeros).
Assuming that the memory is properly aligned and that reinterpreting the buffer as a pointer to bytes is well defined, here's an implementation with proper resource handling:
CBitmap Chb;
Chb.CreateBitmap(width, height, 1, 32, img.data());
mProjectorWindow.m_picControl.ModifyStyle(0xF, SS_BITMAP, SWP_NOSIZE);
Chb.Attach(mProjectorWindow.m_picControl.SetBitmap(Chb.Detach()));
The last line of code exchanges ownership of the GDI resource between the m_picControl and Chb. This ensures proper cleanup of the GDI resource previously owned by the m_picControl, and makes the m_picControl the only owner of the newly created bitmap.
[1] I believe this should read dword aligned.
CBitmap Chb;
HBITMAP bmp = CreateBitmap(width, height, 1, 32, &*img.begin());
ASSERT_ALWAYS(bmp != NULL)
Chb.Attach(bmp);
//PicControl.ModifyStyle(0xF, SS_BITMAP, SWP_NOSIZE);
//PicControl.SetBitmap(Chb);
mProjectorWindow.m_picControl.ModifyStyle(0xF, SS_BITMAP, SWP_NOSIZE);
mProjectorWindow.m_picControl.SetBitmap(Chb);
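As an aside on the alignment note above: at 32 bpp each scan line is simply width * 4 bytes, which is already DWORD-aligned, so no padding is needed here. For other bit depths the padded row span is usually computed like this (a hypothetical helper, not part of the code above):
// Bytes per scan line, rounded up to a DWORD (4-byte) boundary.
inline size_t AlignedRowSpan(int width, int bitsPerPixel)
{
return ((size_t(width) * bitsPerPixel + 31) / 32) * 4;
}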
I've been working on a webcam video recorder, and I got interested in trying everything related to this topic, but there's a problem that I can't solve.
Everything that you might wonder about can be found here
https://msdn.microsoft.com/en-us/library/windows/desktop/dd757677%28v=vs.85%29.aspx and here
https://msdn.microsoft.com/en-us/library/windows/desktop/dd757694%28v=vs.85%29.aspx
Now, in this code
if (capSetCallbackOnVideoStream(hCapWnd, capVideoStreamCallback))
{
capCaptureSequenceNoFile(hCapWnd); //Capture
}
I make sure that every frame that gets captured is sent to capVideoStreamCallback.
Now what I'm trying to do is transform a frame into an image and save it somewhere; this might be useless, but it's interesting, and it is surely possible.
Here is my capVideoStreamCallback function (it's commented):
LRESULT CALLBACK capVideoStreamCallback(HWND hWnd, LPVIDEOHDR lpVHdr)
{
BYTE *Image;
BITMAPINFO * TempBitmapInfo = new BITMAPINFO;
ULONG Size;
// First we need to get the full size of the image
Size = capGetVideoFormat(hWnd, TempBitmapInfo, sizeof(BITMAPINFO)); //header size
Size += lpVHdr->dwBytesUsed; //bytes used
Image = new BYTE[Size];
memcpy(Image, TempBitmapInfo, sizeof(BITMAPINFO)); //copy the header to Image
// lpVHdr is the LPVIDEOHDR passed into the callback function.
memcpy(Image + sizeof(BITMAPINFO), lpVHdr->lpData, lpVHdr->dwBytesUsed); //copy the data to Image
//write the image
ofstream output("image.dib", ios::binary);
for (int i = 0; i < Size; i++)
{
output << (BYTE)Image[i];
}
output.close();
return (LRESULT)TRUE;
}
So, the information about every frame that gets sent to capVideoStreamCallback can be found in lpVHdr which is a structure (https://msdn.microsoft.com/en-us/library/windows/desktop/dd757688%28v=vs.85%29.aspx) and what I'm trying to do here is to take that information and transform it to an image.
I first get the full size of the image by adding the size of the header and the size of the data, then I dynamically allocate a BYTE array called Image and copy the header and the data into it using memcpy. Finally I use ofstream to write the bytes to a file, and that's pretty much it.
The problem is that everything runs just fine, but the resulting image is somehow corrupted: it cannot be opened.
What is wrong with what I'm doing? It seems so logical, but it's not working.
Please share your ideas and thanks for reading.
Here's the answer, thanks to Frankie-C from http://codeproject.com, who reminded me that I needed a BITMAPFILEHEADER structure at the top of the bitmap file.
There are also a few extra steps needed to get the image to show up the way it should, such as swapping the bytes to get BGR instead of RGB, etc.; here's a nice tutorial explaining that: http://tipsandtricks.runicsoft.com/Cpp/BitmapTutorial.htm
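For reference, here is a rough sketch of what the fixed write looks like, assuming the capture format is an uncompressed RGB DIB with no colour table (it reuses TempBitmapInfo and lpVHdr from the callback above, and is not the exact code I ended up with):
// Prepend a BITMAPFILEHEADER so the file is a valid .bmp/.dib.
BITMAPFILEHEADER bfh = {};
bfh.bfType    = 0x4D42;                                              // 'BM'
bfh.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER); // no palette assumed
bfh.bfSize    = bfh.bfOffBits + lpVHdr->dwBytesUsed;                 // headers + pixel data
ofstream output("image.bmp", ios::binary);
output.write(reinterpret_cast<const char*>(&bfh), sizeof(bfh));
output.write(reinterpret_cast<const char*>(&TempBitmapInfo->bmiHeader), sizeof(BITMAPINFOHEADER));
output.write(reinterpret_cast<const char*>(lpVHdr->lpData), lpVHdr->dwBytesUsed);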
I'm using a library called Awesomium and it has the following function:
void Awesomium::BitmapSurface::CopyTo ( unsigned char * dest_buffer, // output
int dest_row_span, // input that I can select
int dest_depth, // input that I can select
bool convert_to_rgba, // input that I can select
bool flip_y // input that I can select
) const
Copy this bitmap to a certain destination. Will also set the dirty bit to False.
Parameters
dest_buffer A pointer to the destination pixel buffer.
dest_row_span The number of bytes per-row of the destination.
dest_depth The depth (number of bytes per pixel, is usually 4 for BGRA surfaces and 3 for BGR surfaces).
convert_to_rgba Whether or not we should convert BGRA to RGBA.
flip_y Whether or not we should invert the bitmap vertically.
This is great because it gives me an unsigned char * dest_buffer which contains the raw bitmap data. I've been trying for several hours to convert this raw bitmap data into some usable format for SDL, but I'm having trouble. =[ Is there any way I can load it into an SDL texture or surface? It would be ideal to have examples for both, but if I only get one example (either texture or surface), that is sufficient and I will be very grateful. :) I tried SDL_LoadBMP_RW, but that crashed. I'm not even sure whether I should be using that method.
SDL_LoadBMP_RW is for loading an image in the BMP file format. And it expects an SDL_RWops*, which is a file stream, not a pixel buffer. The function you want is SDL_CreateRGBSurfaceFrom. I believe this call should work for your purposes:
SDL_Surface* surface =
SDL_CreateRGBSurfaceFrom(
pixels, // dest_buffer from CopyTo
width, // in pixels
height, // in pixels
depth, // in bits, so should be dest_depth * 8
pitch, // dest_row_span from CopyTo
Rmask, // RGBA masks, see docs
Gmask,
Bmask,
Amask
);
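For example, a concrete call on a little-endian machine, assuming CopyTo was used with dest_depth = 4 and convert_to_rgba = true so the buffer holds RGBA byte order:
int depth = 4 * 8;        // 32 bits per pixel
int pitch = 4 * width;    // same dest_row_span passed to CopyTo
SDL_Surface* surface =
    SDL_CreateRGBSurfaceFrom(
        dest_buffer, width, height, depth, pitch,
        0x000000FF,   // Rmask - R is the lowest byte of an RGBA byte stream
        0x0000FF00,   // Gmask
        0x00FF0000,   // Bmask
        0xFF000000    // Amask
    );
If you are on SDL2 and want a texture, SDL_CreateTextureFromSurface(renderer, surface) will convert the surface. Note that SDL_CreateRGBSurfaceFrom does not copy the pixel buffer, so keep dest_buffer alive as long as the surface, and free the surface with SDL_FreeSurface when you are done.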
I have spent a lot of time trying to find a solution but cannot. I hope you can help me. The code is a bit long, so I'm only giving the part where I have the problem. My code captures a bitmap from a window, which is saved in HBitmap. I need to rotate the bitmap, so I start GDI+ and create the bitmap pBitmap from HBitmap:
// INIT GDI
ULONG_PTR gdiplusToken;
GdiplusStartupInput gdiplusStartupInput;
GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
if (!gdiplusToken) return 3;
// Gdip_GetRotatedDimensions:
GpBitmap* pBitmap;
int result = Gdiplus::DllExports::GdipCreateBitmapFromHBITMAP(HBitmap, 0, &pBitmap);
Then I calculate the variables needed for the rotation. Next, I create a graphics object and try to rotate the image:
GpGraphics * pG;
result = Gdiplus::DllExports::GdipGetImageGraphicsContext(pBitmap, &pG);
Gdiplus::SmoothingMode smooth = SmoothingModeHighQuality;
result = Gdiplus::DllExports::GdipSetSmoothingMode(pG, smooth);
Gdiplus::InterpolationMode interpolation = InterpolationModeNearestNeighbor;
result = Gdiplus::DllExports::GdipSetInterpolationMode(pG, interpolation);
MatrixOrder MatrixOrder_ = MatrixOrderPrepend;
result = Gdiplus::DllExports::GdipTranslateWorldTransform(pG, xTranslation, yTranslation, MatrixOrder_);
MatrixOrder_ = MatrixOrderPrepend;
result = Gdiplus::DllExports::GdipRotateWorldTransform(pG, ROTATION_ANGLE, MatrixOrder_);
GpImageAttributes * ImgAttributes;
result = Gdiplus::DllExports::GdipCreateImageAttributes(&ImgAttributes); // create an ImageAttribute object
result = Gdiplus::DllExports::GdipDrawImageRectRect(pG,pBitmap,0,0,w,h,0,0,w,h,UnitPixel,ImgAttributes,0,0); // Draw the original image onto the new bitmap
result = Gdiplus::DllExports::GdipDisposeImageAttributes(ImgAttributes);
Finally I wanted to check the image so I added:
CLSID pngClsid;
GetEncoderClsid(L"image/png", &pngClsid);
result = Gdiplus::DllExports::GdipCreateBitmapFromGraphics(w, h, pG, &pBitmap);
result = Gdiplus::DllExports::GdipSaveImageToFile(pBitmap, L"justest.png", &pngClsid, NULL); // last parameter (GDIPCONST EncoderParameters*) is optional
But my image is blank. I found out that GdipCreateBitmapFromGraphics creates a blank image, but how should I finish it so I can check what I have drawn? Are these steps correct (not just here but also above, near GdipCreateBitmapFromHBITMAP() and GdipGetImageGraphicsContext()), or do I need to add something? How can I get it working?
PS: I am sure that HBitmap contains the picture of the window; I already checked it.
To my eyes, you have some things backwards in your approach.
What you need to do is the following:
Read in your Image (src)
Find the minimum bounding rectangle that will contain the rotated image (i.e., rotate the corners; the distances between the min and max x and y are the new dimensions).
Create a new Image object with these dimensions and the pixel format you want (likely the same as src, but maybe you want an alpha channel) and background color you want (dst)
Create a graphics based on dst (new Graphics(dst))
Set the appropriate transform on the graphics
Draw src onto dst
Export dst
The good news is that to make sure you're doing things right, you can isolate steps out.
For example, you can just make an image and a graphics, draw a line on it with no transform (or better, a box with an X), and save that. If you get what you expect, then you're on the right path. Next, add a transform to the box. In your case you'll need both a rotation and a translation. Next, get the dimensions of the destination image right for that rotation (pro tip: don't use a square to test). Finally, do it with your actual image.
This will get you step-by-step to the correct output instead of trying to get the whole thing in one shot.
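Here is a minimal sketch of those steps using the GDI+ C++ wrapper classes rather than the flat DllExports API (HBitmap is the HBITMAP from the question; the angle and the white background are placeholders):
// Requires <gdiplus.h> and <cmath>; GDI+ must already be started as in the question.
using namespace Gdiplus;
Bitmap src(HBitmap, NULL);                            // 1. wrap the captured HBITMAP
REAL angle = 30.0f;                                   // rotation in degrees (placeholder)
// 2. bounding box of the rotated image
REAL rad  = angle * 3.14159265f / 180.0f;
REAL w    = (REAL)src.GetWidth(), h = (REAL)src.GetHeight();
REAL newW = fabsf(w * cosf(rad)) + fabsf(h * sinf(rad));
REAL newH = fabsf(w * sinf(rad)) + fabsf(h * cosf(rad));
// 3-4. destination bitmap and a Graphics that draws onto it
Bitmap dst((INT)ceilf(newW), (INT)ceilf(newH), PixelFormat32bppARGB);
Graphics g(&dst);
g.Clear(Color(255, 255, 255));                        // background colour
// 5. rotate about the centre of the destination
g.TranslateTransform(newW / 2.0f, newH / 2.0f);
g.RotateTransform(angle);
g.TranslateTransform(-w / 2.0f, -h / 2.0f);
// 6. draw the source; 7. save dst afterwards (e.g. dst.Save(L"justest.png", &pngClsid))
g.DrawImage(&src, 0.0f, 0.0f, w, h);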
I'm using DirectShow to access a video stream, and then using the SampleGrabber filter and interface to get samples from each frame for further image processing. I'm using a callback, so it gets called after each new frame. I've basically just worked from the PlayCap sample application and added a sample filter to the graph.
The problem I'm having is that I'm trying to display the grabbed samples on a different OpenCV window. However, when I try to cast the information in the buffer to an IplImage, I get a garbled mess of pixels. The code for the BufferCB call is below, sans any proper error handling:
STDMETHODIMP BufferCB(double Time, BYTE *pBuffer, long BufferLen)
{
AM_MEDIA_TYPE type;
g_pGrabber->GetConnectedMediaType(&type);
VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER *)type.pbFormat;
BITMAPINFO* bmi = (BITMAPINFO *)&pVih->bmiHeader;
BITMAPINFOHEADER* bmih = &(bmi->bmiHeader);
int channels = bmih->biBitCount / 8;
bmih->biPlanes = 1;
bmih->biBitCount = 24;
bmih->biCompression = BI_RGB;
IplImage *Image = cvCreateImage(cvSize(bmih->biWidth, bmih->biHeight), IPL_DEPTH_8U, channels);
Image->imageSize = BufferLen;
CopyMemory(Image->imageData, pBuffer, BufferLen);
cvFlip(Image);
//openCV Mat creation
Mat cvMat = Mat(Image, true);
imshow("Display window", cvMat); // Show our image inside it.
waitKey(2);
return S_OK;
}
My question is: am I doing something wrong here that makes the displayed image come out as a garbled mess like that? Am I missing header information or something?
The quoted code is only part of the solution. Here you create an image object of a certain width/height with 8-bit pixel data and an unknown channel/component count, and then copy data from another buffer of unknown format.
The only chance for this to work well is if all the unknowns amazingly match without any effort on your part. So you basically need to start by checking exactly what media type is on the Sample Grabber's input pin. Then, if it is not what you wanted, you have to update your code accordingly. It might also matter what the downstream connection of the Sample Grabber is, and in particular whether it is connected to a video renderer.
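As a rough sketch of that first check, assuming the same g_pGrabber pointer used in the question, something along these lines can be done once the graph is connected:
// Verify the actual format on the Sample Grabber's input pin before
// interpreting the buffer passed to BufferCB.
AM_MEDIA_TYPE mt = {};
if (SUCCEEDED(g_pGrabber->GetConnectedMediaType(&mt)) &&
    mt.formattype == FORMAT_VideoInfo &&
    mt.subtype == MEDIASUBTYPE_RGB24)           // 24-bit RGB, normally a bottom-up DIB
{
    VIDEOINFOHEADER* pVih = reinterpret_cast<VIDEOINFOHEADER*>(mt.pbFormat);
    int width  = pVih->bmiHeader.biWidth;
    int height = abs(pVih->bmiHeader.biHeight); // biHeight < 0 would mean top-down
    // Only now is it safe to wrap pBuffer in a 3-channel, 8-bit image of
    // width x height (and flip it vertically when biHeight is positive).
}
CoTaskMemFree(mt.pbFormat);                     // free the format block when done
If the subtype turns out to be something else (YUY2, MJPG, ...), either insert a colour-space converter in front of the Sample Grabber or call ISampleGrabber::SetMediaType with an RGB24 type before connecting, so the callback receives data in the layout the OpenCV image expects.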