In the following code extract I am loading a 300 DPI 8-bit JPEG and then trying to write it out again, also as a JPEG, from a fresh instance of CImage.
But I end up with a black image with the correct dimensions.
Can someone explain why that is?
Ignore the commented-out brush lines; I'll get over that mental hurdle later.
If I hard-code bppGRAPHIC to 24 it does copy the picture (at a DPI of 96), resulting in a smaller file size. I can live with this; I guess I am just curious.
Update 07-Nov-2018
So I added the indented 'if' statement and it still came out black. colorCountIMAGE comes out at 20. (The IsIndexed lines were to help me with an ASSERT issue I found in SetColorTable, but it went away.)
I think I may just force in all 24 bit.
Thanks
4GLGuy
PS This is all being done in VS2017.
char filePath[256] = "C:\\temp\\b64-one.jpg";
CImage imageGRAPHIC, imageJPG;
HRESULT retval;
bool result;
retval = imageGRAPHIC.Load(filePath);
if (retval != S_OK) {
throw FALSE;
}
int xGRAPHIC, yGRAPHIC, bppGRAPHIC = 0;
xGRAPHIC = imageGRAPHIC.GetWidth();
yGRAPHIC = imageGRAPHIC.GetHeight();
bppGRAPHIC = imageGRAPHIC.GetBPP();
//Create my target JPG same size and bit depth specifying
//that there is no alpha channel (dwflag last param)
result = imageJPG.Create(xGRAPHIC, yGRAPHIC, bppGRAPHIC, 0);
auto dcJPEG = imageJPG.GetDC();
if (bppGRAPHIC <= 8)
{
result = imageJPG.IsIndexed();
result = imageGRAPHIC.IsIndexed();
auto dcIMAGE = imageGRAPHIC.GetDC();
int colorCountIMAGE = GetDeviceCaps(dcIMAGE, NUMCOLORS);
RGBQUAD* coltblIMAGE = new RGBQUAD[colorCountIMAGE];
imageGRAPHIC.GetColorTable(0, colorCountIMAGE, &coltblIMAGE[0]);
imageJPG.SetColorTable(0, colorCountIMAGE, &coltblIMAGE[0]);
}
//Let there be white - 8 bit depth with 24 bit brush - no worky
//CRect rect{ 0, 0, xGRAPHIC, yGRAPHIC };
//HBRUSH white = CreateSolidBrush(RGB(255, 255, 255));
//FillRect(dcJPEG, &rect, white);
result = imageGRAPHIC.Draw(dcJPEG, 0, 0);
retval = imageJPG.Save(filePath, Gdiplus::ImageFormatJPEG);
if (retval != S_OK) {
throw FALSE;
}
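For what it's worth, the 24-bit workaround mentioned in the update would look roughly like the sketch below. It reuses the variables from the snippet above (imageGRAPHIC, xGRAPHIC, yGRAPHIC, filePath) and sidesteps the palette handling entirely by letting GDI convert the 8-bit source while drawing onto a 24-bit target, which may be why hard-coding 24 worked.
// Sketch of the 24 bpp workaround: create the target at 24 bpp regardless of
// the source depth, so no color table has to be copied across.
CImage imageJPG24;
if (imageJPG24.Create(xGRAPHIC, yGRAPHIC, 24, 0))
{
    HDC dc24 = imageJPG24.GetDC();
    imageGRAPHIC.Draw(dc24, 0, 0);   // GDI converts 8 bpp -> 24 bpp here
    imageJPG24.ReleaseDC();          // release the DC before saving
    imageJPG24.Save(filePath, Gdiplus::ImageFormatJPEG);
}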
Related
I'm trying to get the rect of the differing area between two Bitmap* objects. When I pass two Bitmap* pointers it can run LockBits for the first frame, but it cannot do it for the second bitmap.
Rect GetRecDifference(Bitmap* currentFrame, Bitmap* previousFrame) {
if (previousFrame == NULL) {
previousFrame = currentFrame;
}
else {
return Rect(0, 0, 0, 0);
}
BitmapData* bd1 = new BitmapData;
Rect rect1(0, 0, currentFrame->GetWidth(), currentFrame->GetHeight());
currentFrame->LockBits(&rect1, ImageLockModeRead, PixelFormat32bppARGB, bd1);
BitmapData* bd2 = new BitmapData;
Rect rect2(0, 0, previousFrame->GetWidth(), previousFrame->GetHeight());
previousFrame->LockBits(&rect2, ImageLockModeRead, PixelFormat32bppARGB, bd2);
It runs for the first one (bd1): the lock status is Ok and the last result shows Ok, but when it comes to bd2 the status is WrongState (8).
Is this because I copy the current pointer to the previous one? What can be the reason for the WrongState error? Do I need to clear some parts of memory?
The problem is that you are trying to lock the same image twice. This
previousFrame = currentFrame;
means that both your pointers are pointing to the same image.
Instead you need a scheme that keeps two images in memory at once. Something like the following:
Bitmap* current = NULL;
Bitmap* previous = NULL;
while (something)
{
current = getNextImage(); // get the new image
if (current && previous)
{
// process current and previous images
...
}
delete previous; // delete the previous image, not needed anymore
previous = current; // save the current image as the previous
}
delete previous; // one image left over, delete it as well
Not the only way to do it, but hopefully you get the idea.
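Applied to the original function, that scheme might look roughly like the following. This is only a sketch: it assumes the caller now passes two distinct, valid bitmaps, it keeps the BitmapData structures on the stack, and the actual difference computation is left as a comment.
Rect GetRecDifference(Bitmap* currentFrame, Bitmap* previousFrame)
{
    // Bail out unless we really have two different images to compare
    if (!currentFrame || !previousFrame || currentFrame == previousFrame)
        return Rect(0, 0, 0, 0);

    BitmapData bd1, bd2;
    Rect rect1(0, 0, currentFrame->GetWidth(), currentFrame->GetHeight());
    Rect rect2(0, 0, previousFrame->GetWidth(), previousFrame->GetHeight());

    currentFrame->LockBits(&rect1, ImageLockModeRead, PixelFormat32bppARGB, &bd1);
    previousFrame->LockBits(&rect2, ImageLockModeRead, PixelFormat32bppARGB, &bd2);

    // ... walk bd1.Scan0 and bd2.Scan0 here to find the changed rectangle ...

    // Always unlock, otherwise the next LockBits call fails with WrongState
    previousFrame->UnlockBits(&bd2);
    currentFrame->UnlockBits(&bd1);

    return Rect(0, 0, 0, 0); // placeholder result
}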
I am using libpng and libjpeg to read and write images. The code I use was taken almost straight from the examples provided with the two libraries' documentation, and image loading works correctly with both libraries. However, when I go to save an image, something goes wrong, and it seems to write corrupted data somehow. The confusing part is that it writes it in exactly the same incorrect way, using both libraries. Here's an example:
Original:
Blurred picture (as it looks in the program, before saving):
How it saves (png):
The jpeg version saves with identical discoloration, just more compressed.
Here's the png saving code:
void PNGHandler::save(PixelBuffer* buffer, std::string fileName)
{
FILE* filePointer = fopen(fileName.c_str(), "wb");
int width = buffer->getWidth();
int height = buffer->getHeight();
png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
png_infop info = png_create_info_struct(png);
png_init_io(png, filePointer);
// 8-bit depth, RGBA
png_set_IHDR(png, info, width, height, 8,
PNG_COLOR_TYPE_RGBA, PNG_INTERLACE_NONE,
PNG_COMPRESSION_TYPE_DEFAULT,
PNG_FILTER_TYPE_DEFAULT);
png_write_info(png, info);
// Set up rows for writing from
png_bytep *rowPointers = new png_bytep[height];
for(int y = 0; y < height; y++) {
rowPointers[y] = new png_byte[png_get_rowbytes(png,info)];
}
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
ColorData cd = buffer->getPixel(x, height - y - 1);
rowPointers[y][x*4] = (int)(cd.getRed() * 255);
rowPointers[y][x*4+1] = (int)(cd.getGreen() * 255);
rowPointers[y][x*4+2] = (int)(cd.getBlue() * 255);
rowPointers[y][x*4+3] = (int)(cd.getAlpha() * 255);
}
}
png_write_image(png, rowPointers);
png_write_end(png, info);
// Free each row buffer as well as the array of row pointers
for(int y = 0; y < height; y++) {
    delete [] rowPointers[y];
}
delete [] rowPointers;
png_destroy_write_struct(&png, &info);
fclose(filePointer);
}
(I know the error handling isn't great right now, but I'll fix that later)
Additionally, the file always saves this way. That is, I can apply the blur and save, then reload the original and do it again, and performing a diff on the two files reveals they're identical. The PixelBuffer pointer that's passed in is a pointer to the buffer that is being displayed on the screen, so all of the color data should be exactly as it appears.
I know this isn't much information to go on, but if someone can guide me toward what I should look for, I can bring more to the table (it's a large project, so I can't post all the code).
Edit: It's also worth noting that the image looks correct after saving, but once the saved image is loaded in, it displays the discoloration. To me this points toward a problem in the saving methods.
Your filter/blur probably overflows/underflows the color values. You should make sure the values are saturated between 0 and 255: if a value goes below 0, set it to 0, and if it goes above 255, set it to 255.
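For illustration, saturating in the save loop might look like the sketch below. The clamp01 helper is hypothetical, and it assumes the color channels are floats in [0, 1], as the existing * 255 conversion suggests; if your filter works on 0-255 integers instead, clamp to that range before the cast.
// Hypothetical helper: clamp a float channel into [0, 1] before scaling
static inline float clamp01(float v)
{
    if (v < 0.0f) return 0.0f;
    if (v > 1.0f) return 1.0f;
    return v;
}

// Inside the pixel loop, saturate each channel before converting to a byte
rowPointers[y][x*4]   = (png_byte)(clamp01(cd.getRed())   * 255.0f);
rowPointers[y][x*4+1] = (png_byte)(clamp01(cd.getGreen()) * 255.0f);
rowPointers[y][x*4+2] = (png_byte)(clamp01(cd.getBlue())  * 255.0f);
rowPointers[y][x*4+3] = (png_byte)(clamp01(cd.getAlpha()) * 255.0f);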
We are working with the Kinect to track faces for a school project. We have set up Visual Studio 2012, and all the test programs are working correctly. However, when we try to run this code it gives us an error. After many attempts to fix the code, it still shows the following error:
"The application was unable to start correctly (0xc000007b). Click OK to close the application."
The good thing is that it finally builds; the bad thing is that we don't get any error other than this vague one.
We are completely lost and we hope that someone can help us or point us into the right direction. Thanks in advance for helping us.
The code:
#include "stdafx.h"
#include <iostream>
#include <Windows.h>
#include <NuiApi.h>
#include <FaceTrackLib.h>
#include <NuiSensor.h>
using namespace std;
HANDLE rgbStream;
HANDLE depthStream;
INuiSensor* sensor;
#define width 640
#define height 480
bool initKinect() {
// Get a working kinect sensor
int numSensors;
if (NuiGetSensorCount(&numSensors) < 0 || numSensors < 1) return false;
if (NuiCreateSensorByIndex(0, &sensor) < 0) return false;
// Initialize sensor
sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | NUI_INITIALIZE_FLAG_USES_COLOR);
sensor->NuiImageStreamOpen(
NUI_IMAGE_TYPE_COLOR, // Depth camera or rgb camera?
NUI_IMAGE_RESOLUTION_640x480, // Image resolution
0, // Image stream flags, e.g. near mode
2, // Number of frames to buffer
NULL, // Event handle
&rgbStream);
// --------------- END CHANGED CODE -----------------
return true;
}
BYTE* dataEnd;
USHORT* dataEndD;
void getKinectDataD(){
NUI_IMAGE_FRAME imageFrame;
NUI_LOCKED_RECT LockedRect;
if (sensor->NuiImageStreamGetNextFrame(rgbStream, 0, &imageFrame) < 0) return;
INuiFrameTexture* texture = imageFrame.pFrameTexture;
texture->LockRect(0, &LockedRect, NULL, 0);
const USHORT* curr = (const USHORT*)LockedRect.pBits;
const USHORT* dataEnding = curr + (width*height);
if (LockedRect.Pitch != 0)
{
const BYTE* curr = (const BYTE*)LockedRect.pBits;
dataEnd = (BYTE*)(curr + (width*height) * 4);
}
while (curr < dataEnding) {
// Get depth in millimeters
USHORT depth = NuiDepthPixelToDepth(*curr++);
dataEndD = (USHORT*)depth;
// Draw a grayscale image of the depth:
// B,G,R are all set to depth%256, alpha set to 1.
}
texture->UnlockRect(0);
sensor->NuiImageStreamReleaseFrame(rgbStream, &imageFrame);
}
// This example assumes that the application provides
// void* cameraFrameBuffer, a buffer for an image, and that there is a method
// to fill the buffer with data from a camera, for example
// cameraObj.ProcessIO(cameraFrameBuffer)
int main(){
initKinect();
// Create an instance of a face tracker
IFTFaceTracker* pFT = FTCreateFaceTracker();
if (!pFT)
{
// Handle errors
}
// Initialize cameras configuration structures.
// IMPORTANT NOTE: resolutions and focal lengths must be accurate, since it affects tracking precision!
// It is better to use enums defined in NuiAPI.h
// Video camera config with width, height, focal length in pixels
// NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS focal length is computed for 640x480 resolution
// If you use different resolutions, multiply this focal length by the scaling factor
FT_CAMERA_CONFIG videoCameraConfig = { 640, 480, NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS };
// Depth camera config with width, height, focal length in pixels
// NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS focal length is computed for 320x240 resolution
// If you use different resolutions, multiply this focal length by the scaling factor
FT_CAMERA_CONFIG depthCameraConfig = { 320, 240, NUI_CAMERA_DEPTH_NOMINAL_FOCAL_LENGTH_IN_PIXELS };
// Initialize the face tracker
HRESULT hr = pFT->Initialize(&videoCameraConfig, &depthCameraConfig, NULL, NULL);
if (FAILED(hr))
{
// Handle errors
}
// Create a face tracking result interface
IFTResult* pFTResult = NULL;
hr = pFT->CreateFTResult(&pFTResult);
if (FAILED(hr))
{
// Handle errors
}
// Prepare image interfaces that hold RGB and depth data
IFTImage* pColorFrame = FTCreateImage();
IFTImage* pDepthFrame = FTCreateImage();
if (!pColorFrame || !pDepthFrame)
{
// Handle errors
}
// Attach created interfaces to the RGB and depth buffers that are filled with
// corresponding RGB and depth frame data from Kinect cameras
pColorFrame->Attach(640, 480, dataEnd, FTIMAGEFORMAT_UINT8_R8G8B8, 640 * 3);
pDepthFrame->Attach(320, 240, dataEndD, FTIMAGEFORMAT_UINT16_D13P3, 320 * 2);
// You can also use Allocate() method in which case IFTImage interfaces own their memory.
// In this case use CopyTo() method to copy buffers
FT_SENSOR_DATA sensorData;
sensorData.ZoomFactor = 1.0f; // Not used must be 1.0
bool isFaceTracked = false;
// Track a face
while (true)
{
// Call Kinect API to fill videoCameraFrameBuffer and depthFrameBuffer with RGB and depth data
getKinectDataD();
// Check if we are already tracking a face
if (!isFaceTracked)
{
// Initiate face tracking.
// This call is more expensive and searches the input frame for a face.
hr = pFT->StartTracking(&sensorData, NULL, NULL, pFTResult);
if (SUCCEEDED(hr))
{
isFaceTracked = true;
}
else
{
// No faces found
isFaceTracked = false;
}
}
else
{
// Continue tracking. It uses a previously known face position.
// This call is less expensive than StartTracking()
hr = pFT->ContinueTracking(&sensorData, NULL, pFTResult);
if (FAILED(hr))
{
// Lost the face
isFaceTracked = false;
}
}
// Do something with pFTResult like visualize the mask, drive your 3D avatar,
// recognize facial expressions
}
// Clean up
pFTResult->Release();
pColorFrame->Release();
pDepthFrame->Release();
pFT->Release();
return 0;
}
We figured it out: we had indeed used the wrong DLL, and it runs without errors now. But we ran into another problem: we have no clue how to use pFTResult and retrieve the face angles using "getFaceRect". Does somebody know how?
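Regarding the follow-up: a minimal sketch of reading the result could look like the code below. It assumes the IFTResult methods GetStatus, GetFaceRect and Get3DPose as declared in FaceTrackLib.h (check the exact signatures in your SDK version). Note that GetFaceRect only returns the bounding rectangle; the head rotation angles come from Get3DPose.
// Sketch: after a successful StartTracking/ContinueTracking call
if (SUCCEEDED(hr) && SUCCEEDED(pFTResult->GetStatus()))
{
    RECT faceRect;
    if (SUCCEEDED(pFTResult->GetFaceRect(&faceRect)))
    {
        // faceRect bounds the tracked face in the color image
    }

    FLOAT scale;
    FLOAT rotationXYZ[3];    // pitch, yaw, roll of the head
    FLOAT translationXYZ[3]; // head position in camera space
    if (SUCCEEDED(pFTResult->Get3DPose(&scale, rotationXYZ, translationXYZ)))
    {
        // rotationXYZ holds the face angles asked about in the follow-up
    }
}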
I am adding images to a CImageList using the following code:
CImageList ImageList;
ImageList.Create(64, 64, ILC_COLOR32, 0, 1);
// Find image path and load image . . .
HBITMAP hBitmap = LoadBitmapFromFile(finder.GetFilePath(), NULL, 64, 64);
int nImage = ImageList.Add(CBitmap::FromHandle(hBitmap), RGB(0,0,0));
HBITMAP LoadBitmapFromFile(const char* pszFilePath, HDC hDC, long lMaxHeight, long lMaxWidth)
{
// Macro that MFC needs to manage its state within a DLL
AFX_MANAGE_STATE(AfxGetStaticModuleState())
// The item object which will be used to read in the file
CxImage image;
// Load the image from the file
if(image.Load(pszFilePath))
{
ResizeImage(image, lMaxHeight, lMaxWidth);
// Get a handle to a bitmap version of the image which the given DC can use
return image.MakeBitmap(hDC);
}
return NULL;
}
void ResizeImage(CxImage& image, long lMaxHeight, long lMaxWidth)
{
const DWORD ulImgHeight = image.GetHeight();
const DWORD ulImgWidth = image.GetWidth();
long nHeightDif = ulImgHeight - lMaxHeight;
long nWidthDif = ulImgWidth - lMaxWidth;
if(nHeightDif > 0 || nWidthDif > 0)
{
double rScaleFactor = 1.0;
if(nHeightDif > nWidthDif)
rScaleFactor = (double)lMaxHeight / (double)ulImgHeight;
else
rScaleFactor = (double)lMaxWidth / (double)ulImgWidth;
long nNewHeight = (long)(ulImgHeight * rScaleFactor);
long nNewWidth = (long)(ulImgWidth * rScaleFactor);
image.Resample2(nNewWidth, nNewHeight);
}
}
The Add method appears to add the image but will repeatedly return the same index value for the new image. When the image list is displayed in a list box, the same image appears for all images that were added and returned the same index, but the filename appearing below the image on the list is correct.
I have tried several things to determine what is happening, all to no avail. The problem does not appear to be tied to the image size or shape (height vs. width). I have verified that the correct image is attached to the HBITMAP by displaying it in a picture control, but the image in the list control is incorrect.
If I add the images using:
int nImage = ImageList.Add(CBitmap::FromHandle(hBitmap), (CBitmap *)NULL);
none of the duplicated images are loaded; instead of the repeating index, Add returns -1. I can find no documentation for CImageList errors. Does anyone know what might be happening?
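For reference, a quick way to double-check whether the list is actually growing after each call (a sketch using the same ImageList as above):
// Sketch: log what Add returned and how many images the list now holds
int nIndex = ImageList.Add(CBitmap::FromHandle(hBitmap), RGB(0, 0, 0));
TRACE(_T("Add returned %d, list now holds %d images\n"),
      nIndex, ImageList.GetImageCount());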
Okay, I am having some problems changing bitmaps when a certain parameter is greater than another. I am a massive newbie to this and my coding isn't great (at all). I have code that reads the limits (parameters) and displays them as text, which is this:
CFont* def_font = argDC->SelectObject(&m_Font);
CString csText;
int StartPos = WindowRect.Width()/5;
CRect TextRect(StartPos, WindowRect.top + 5, StartPos + 100, WindowRect.top + 35);
csText.Format(_T("%.2ft"), argSystemDataPtr->GetMaxSWL());
int32_t iSWLDigits = csText.GetLength();
if (iSWLDigits < m_SWLDigitsNum)
{
m_RedPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_RED);
//argDC->FillSolidRect(TextRect, RGB(255, 255, 255));
}
m_SWLDigitsNum = iSWLDigits;
argDC->DrawText(csText, TextRect, DT_LEFT);
The bitmaps that are usually displayed are green, but if a limit is breached, like the one above, then I want the bitmap to change to a red one. Here is what I've got for the green ones.
CRect PanelRect1, PanelRect2;
CRect PanelsRect(WindowRect);
const int BarHeight = 30;
PanelsRect.OffsetRect(0,m_bShowTitleBar?BarHeight:-BarHeight);
PanelsRect.DeflateRect(0,m_bShowTitleBar?BarHeight*-1:BarHeight);
m_GreenPanelBitmap.Detach();
m_GreenPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_GREEN);
CBitmap* pOld = memDC.SelectObject(&m_GreenPanelBitmap);
BITMAP bits;
m_GreenPanelBitmap.GetObject(sizeof(BITMAP),&bits);
PanelRect1.SetRect(0,PanelsRect.top, PanelsRect.right /2 , PanelsRect.Height()/3);
PanelRect2.SetRect(0,PanelsRect.top+PanelRect1.Height(), PanelsRect.right /2 ,(PanelsRect.Height()/3) + PanelRect1.Height());
//Now draw the Panels
if (pOld != NULL)
{
argDC->StretchBlt(PanelRect1.left ,PanelRect1.top,PanelRect1.Width(),PanelRect1.Height(),
&memDC,0,0,bits.bmWidth-1, bits.bmHeight-1, SRCCOPY);
argDC->StretchBlt(PanelRect2.left,PanelRect2.top,PanelRect2.Width(),PanelRect2.Height(),
&memDC,0,0,bits.bmWidth-1, bits.bmHeight-1, SRCCOPY);
memDC.SelectObject(pOld);
I would be extremely grateful for any help. I understand there is probably a simple answer, but I've been scratching my head over it and can't find an answer anywhere else on how to change m_GreenPanelBitmap to m_RedPanelBitmap when this statement is true:
`if (iSWLDigits < m_SWLDigitsNum)`
Well, I do think your question is a bit messy but...
In the second code snippet you posted (I suppose from an OnPaint method in a dialog) you are displaying the green bitmap using StretchBlt.
If your problem is that you need to display one bitmap or another depending on a condition, you should load both images (maybe you can do that elsewhere, to avoid loading them every time the dialog is painted) and then display the one you really need based on the condition. Something like this:
bool bCondition = /*whatever*/;
m_GreenPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_GREEN);
m_RedPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_RED);
CBitmap* pBitmapToDisplay = bCondition ? &m_GreenPanelBitmap : &m_RedPanelBitmap;
CBitmap* pOld = memDC.SelectObject(pBitmapToDisplay);
BITMAP bits;
pBitmapToDisplay->GetObject(sizeof(BITMAP),&bits);
PanelRect1.SetRect(0,PanelsRect.top, PanelsRect.right /2 , PanelsRect.Height()/3);
PanelRect2.SetRect(0,PanelsRect.top+PanelRect1.Height(), PanelsRect.right /2, (PanelsRect.Height()/3) + PanelRect1.Height());
argDC->StretchBlt(PanelRect1.left ,PanelRect1.top,PanelRect1.Width(),PanelRect1.Height(),
&memDC,0,0,bits.bmWidth-1, bits.bmHeight-1, SRCCOPY);
memDC.SelectObject(pOld);
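As a side note on loading the images elsewhere: a minimal sketch would be to load both bitmaps once, for example in OnInitDialog, guarded so an already-attached CBitmap is not loaded again (which can assert in debug builds):
// Load each panel bitmap only once; GetSafeHandle() is NULL until attached
if (!m_GreenPanelBitmap.GetSafeHandle())
    m_GreenPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_GREEN);
if (!m_RedPanelBitmap.GetSafeHandle())
    m_RedPanelBitmap.LoadBitmapW(IDB_BITMAP_PANEL_RED);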
Maybe with a more detailed question we would be able to help you more.