I'm pulling data from a uEye industrial camera, and am retrieving images through the camera's API.
My code looks something like this:
bool get_image(char*& img)
{
    void *pMemVoid; // pointer to where the image is stored

    // Takes an image from the camera. If successful, returns true, otherwise
    // returns false
    if (is_GetImageMem(hCam, &pMemVoid) == IS_SUCCESS) {
        img = (char*) pMemVoid;
        pMemVoid = NULL;
        return true;
    }
    else
        return false;
}
The function retrieves image data; if it succeeds, it returns true, otherwise it returns false.
The problem is that I believe I'm leaking memory with img = (char*) pMemVoid, because I'm calling this function repeatedly and never releasing the data. How do I release the memory that is assigned to img?
EDIT:
I'm initializing the camera in a function that uses is_AllocImageMem:
// Global variables for camera functions
HIDS hCam = 0;
char* ppcImgMem;
int pid;

/* Initializes the uEye camera. If camera initialization is successful, it
 * returns true, otherwise returns false */
bool init_camera()
{
    int nRet = is_InitCamera (&hCam, NULL);

    is_AllocImageMem(hCam, 752, 480, 1, &ppcImgMem, &pid);
    is_SetImageMem(hCam, ppcImgMem, pid);
    is_SetDisplayMode (hCam, IS_SET_DM_DIB);
    is_SetColorMode (hCam, IS_CM_MONO8);

    int pnCol, pnColMode;
    is_GetColorDepth(hCam, &pnCol, &pnColMode);

    is_CaptureVideo(hCam, IS_WAIT);

    if (nRet != IS_SUCCESS)
    {
        if (nRet == IS_STARTER_FW_UPLOAD_NEEDED)
        {
            hCam = hCam | IS_ALLOW_STARTER_FW_UPLOAD;
            nRet = is_InitCamera (&hCam, NULL);
        }
        cout << "camera failed to initialize " << endl;
        return false;
    }
    else
        return true;
}
The API Documentation seems to suggest that there's a corresponding is_FreeImageMem function. Have you tried that?
Edit: It looks like is_GetImageMem may not allocate memory. From its description:
is_GetImageMem() returns the starting address of the image memory last used for image capturing.
Are you calling is_AllocImageMem anywhere?
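If so, a minimal teardown sketch using the globals from init_camera might look like the following. This is only a sketch: the exact call order depends on how capture was started, and it assumes hCam, ppcImgMem and pid still hold the values filled in by is_InitCamera / is_AllocImageMem.

// Rough cleanup sketch for the allocation shown in init_camera()
is_StopLiveVideo(hCam, IS_WAIT);        // stop the capture started with is_CaptureVideo
is_FreeImageMem(hCam, ppcImgMem, pid);  // free the buffer from is_AllocImageMem
is_ExitCamera(hCam);                    // release the camera handle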
I ran valgrind on the code; the output reported roughly 16,000,000 bytes possibly lost and roughly 20,000,000 bytes indirectly lost.
When the image was retrieved from get_image and assigned to img, img was assigned as data to an OpenCV IplImage like this:
IplImage* src = cvCreateImage(cvSize(752,480), IPL_DEPTH_8U, 1);
src -> imageData = img;
The IplImage was being retained, so I had to call cvReleaseImage on the IplImage stored in memory. Now valgrind reports 0 bytes indirectly lost and about 1,600,000 bytes possibly lost. I still haven't accounted for that 1.6 million, but I think the IplImage contributed significantly to the memory leak.
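A minimal sketch of that per-frame pattern, assuming each captured frame is wrapped and released the same way (the real code retains the IplImage elsewhere, so the exact release point differs):

char* img = NULL;
if (get_image(img))
{
    IplImage* src = cvCreateImage(cvSize(752, 480), IPL_DEPTH_8U, 1);
    src->imageData = img;     // wrap the camera buffer

    // ... use src ...

    cvReleaseImage(&src);     // release the IplImage once it is no longer needed
}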
I'm trying to get the rect of the area that differs between two Bitmap* objects. When I pass two Bitmap* pointers, LockBits works for the first frame but fails for the second bitmap.
Rect GetRecDifference(Bitmap* currentFrame, Bitmap* previousFrame) {
    if (previousFrame == NULL) {
        previousFrame = currentFrame;
    }
    else {
        return Rect(0, 0, 0, 0);
    }

    BitmapData* bd1 = new BitmapData;
    Rect rect1(0, 0, currentFrame->GetWidth(), currentFrame->GetHeight());
    currentFrame->LockBits(&rect1, ImageLockModeRead, PixelFormat32bppARGB, bd1);

    BitmapData* bd2 = new BitmapData;
    Rect rect2(0, 0, previousFrame->GetWidth(), previousFrame->GetHeight());
    previousFrame->LockBits(&rect2, ImageLockModeRead, PixelFormat32bppARGB, bd2);
It works for the first one (bd1): the lock status is Ok and the result looks fine, but when it comes to bd2 the status is WrongState (8).
Is this because I copy the current pointer to the previous one? What could cause the WrongState error? Do I need to clear some parts of memory?
The problem is that you are trying to lock the same image twice. This line:
previousFrame = currentFrame;
means that both of your pointers point to the same image.
Instead you need a scheme that keeps two images in memory at once. Something like the following
Bitmap* current = NULL;
Bitmap* previous = NULL;
while (something)
{
    current = getNextImage(); // get the new image
    if (current && previous)
    {
        // process current and previous images
        ...
    }
    delete previous;    // delete the previous image, not needed anymore
    previous = current; // save the current image as the previous
}
delete previous; // one image left over, delete it as well
Not the only way to do it, but hopefully you get the idea.
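One more thing worth noting, since the excerpt in the question never unlocks: each LockBits should be paired with an UnlockBits, and the BitmapData objects allocated with new should be deleted (or simply declared on the stack). Roughly, and assuming the rest of the function doesn't already do this:

currentFrame->UnlockBits(bd1);
previousFrame->UnlockBits(bd2);
delete bd1;
delete bd2;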
In the following code extract I am loading a 300 DPI, 8-bit JPEG and then trying to write it out again, in a fresh instance of a CImage, also as a JPEG.
But I end up with a black image with the correct dimensions.
Can someone explain why that is?
Ignore the commented-out brush lines; I'll get over that mental hurdle later.
If I hard-code bppGRAPHIC to 24, it does copy the picture (at a DPI of 96), resulting in a smaller file size. I can live with this; I guess I am just curious.
Update 07-Nov-2018
So I added the indented 'if' statement and it still came out black. The colorCountIMAGE comes out at 20. (The IsIndexed lines were there to help me with an ASSERT issue I found in SetColorTable, but it went away.)
I think I may just force in all 24 bit.
Thanks
4GLGuy
PS This is all being done in VS2017.
char filePath[256] = "C:\\temp\\b64-one.jpg";

CImage imageGRAPHIC, imageJPG;
HRESULT retval;
bool result;

retval = imageGRAPHIC.Load(filePath);
if (retval != S_OK) {
    throw FALSE;
}

int xGRAPHIC, yGRAPHIC, bppGRAPHIC = 0;
xGRAPHIC = imageGRAPHIC.GetWidth();
yGRAPHIC = imageGRAPHIC.GetHeight();
bppGRAPHIC = imageGRAPHIC.GetBPP();

//Create my target JPG same size and bit depth specifying
//that there is no alpha channel (dwflag last param)
result = imageJPG.Create(xGRAPHIC, yGRAPHIC, bppGRAPHIC, 0);
auto dcJPEG = imageJPG.GetDC();

if (bppGRAPHIC <= 8)
{
    result = imageJPG.IsIndexed();
    result = imageGRAPHIC.IsIndexed();
    auto dcIMAGE = imageGRAPHIC.GetDC();
    int colorCountIMAGE = GetDeviceCaps(dcIMAGE, NUMCOLORS);
    RGBQUAD* coltblIMAGE = new RGBQUAD[colorCountIMAGE];
    imageGRAPHIC.GetColorTable(0, colorCountIMAGE, &coltblIMAGE[0]);
    imageJPG.SetColorTable(0, colorCountIMAGE, &coltblIMAGE[0]);
}

//Let there be white - 8 bit depth with 24 bit brush - no worky
//CRect rect{ 0, 0, xGRAPHIC, yGRAPHIC };
//HBRUSH white = CreateSolidBrush(RGB(255, 255, 255));
//FillRect(dcJPEG, &rect, white);

result = imageGRAPHIC.Draw(dcJPEG, 0, 0);

retval = imageJPG.Save(filePath, Gdiplus::ImageFormatJPEG);
if (retval != S_OK) {
    throw FALSE;
}
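For reference, the 24-bit workaround mentioned above amounts to something like this (a sketch only; it sidesteps the palette copy by forcing a 24-bit destination):

// Force a 24-bit destination instead of copying the 8-bit colour table
result = imageJPG.Create(xGRAPHIC, yGRAPHIC, 24, 0);
auto dcJPEG = imageJPG.GetDC();
result = imageGRAPHIC.Draw(dcJPEG, 0, 0);
imageJPG.ReleaseDC();
retval = imageJPG.Save(filePath, Gdiplus::ImageFormatJPEG);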
I am working with COSMCtrl in order to display maps in the viewing window.
In COSMCtrl, a file name is passed to the CD2DBitmap constructor along with a CRenderTarget object. But my application doesn't have an image file; it receives image data (in the form of a byte array) from a database.
Could anyone please help me find a solution?
The sample code is below:
BOOL COSMCtrl::DrawTile(CRenderTarget* pRenderTarget, const CD2DRectF& rTile, int nTileX, int nTileY)
{
    //What will be the return value from this function (assume the worst)
    BOOL bSuccess = FALSE;

    //Form the path to the cache file which we want to draw
    int nZoom = static_cast<int>(m_fZoom);
    CString sFile(GetTileCachePath(m_sCacheDirectory, nZoom, nTileX, nTileY, FALSE));

    //Get the fractional value of the zoom
    double fInt = 0;
    double fFractionalZoom = modf(m_fZoom, &fInt);

    //Try to obtain the standard tile
    CD2DBitmap bitmap(pRenderTarget, sFile);
    // I have a byte array. I should pass the byte array instead of the file

    //Determine how the tile should be drawn
    BOOL bStandardTile = FALSE;
    if (fFractionalZoom == 0 && SUCCEEDED(bitmap.Create(pRenderTarget)))
        bStandardTile = TRUE;

    //Load up the image from disk and display it if we can
    if (bStandardTile)
    {
        //Draw the image to the screen at the specified position
        pRenderTarget->DrawBitmap(&bitmap, rTile, 1.0);
        bSuccess = TRUE;
    }

    return bSuccess;
}
I am not allowed to save the byte array to disk (as an image).
I have tried using the other constructor of CD2DBitmap, which accepts a CRenderTarget and an HBITMAP, but to no avail.
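For what it's worth, a minimal sketch of that HBITMAP route is below. Assumptions: the byte array holds raw 32-bpp BGRA pixels, pixelData/width/height are placeholder names for the data coming from the database, and the two-argument form of the CD2DBitmap constructor (defaulted size) is available in your MFC version.

// Sketch only, not COSMCtrl code: wrap raw 32-bpp pixels in an HBITMAP,
// then hand it to the HBITMAP-based CD2DBitmap constructor.
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;          // negative height = top-down DIB
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void* pBits = NULL;
HBITMAP hBitmap = CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, &pBits, NULL, 0);
if (hBitmap != NULL && pBits != NULL)
{
    memcpy(pBits, pixelData, static_cast<size_t>(width) * height * 4);
    CD2DBitmap bitmap(pRenderTarget, hBitmap);
    if (SUCCEEDED(bitmap.Create(pRenderTarget)))
        pRenderTarget->DrawBitmap(&bitmap, rTile, 1.0);
    // HBITMAP cleanup (DeleteObject) omitted for brevity
}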
We are working with the Kinect to track faces for a school project. We have set up Visual Studio 2012, and all the test programs are working correctly. However, when we try to run this code it gives us an error. After many attempts to fix it, we still get the following error:
"The application was unable to start correctly (0xc000007b). Click OK to close the application."
The good thing is that it finally builds; the bad thing is that the compiler doesn't report any errors, only this vague message at runtime.
We are completely lost and we hope that someone can help us or point us into the right direction. Thanks in advance for helping us.
The code:
#include "stdafx.h"
#include <iostream>
#include <Windows.h>
#include <NuiApi.h>
#include <FaceTrackLib.h>
#include <NuiSensor.h>
using namespace std;
HANDLE rgbStream;
HANDLE depthStream;
INuiSensor* sensor;
#define width 640
#define height 480
bool initKinect() {
    // Get a working kinect sensor
    int numSensors;
    if (NuiGetSensorCount(&numSensors) < 0 || numSensors < 1) return false;
    if (NuiCreateSensorByIndex(0, &sensor) < 0) return false;

    // Initialize sensor
    sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | NUI_INITIALIZE_FLAG_USES_COLOR);

    sensor->NuiImageStreamOpen(
        NUI_IMAGE_TYPE_COLOR,         // Depth camera or rgb camera?
        NUI_IMAGE_RESOLUTION_640x480, // Image resolution
        0,                            // Image stream flags, e.g. near mode
        2,                            // Number of frames to buffer
        NULL,                         // Event handle
        &rgbStream);
    // --------------- END CHANGED CODE -----------------

    return true;
}
BYTE* dataEnd;
USHORT* dataEndD;
void getKinectDataD(){
    NUI_IMAGE_FRAME imageFrame;
    NUI_LOCKED_RECT LockedRect;

    if (sensor->NuiImageStreamGetNextFrame(rgbStream, 0, &imageFrame) < 0) return;

    INuiFrameTexture* texture = imageFrame.pFrameTexture;
    texture->LockRect(0, &LockedRect, NULL, 0);

    const USHORT* curr = (const USHORT*)LockedRect.pBits;
    const USHORT* dataEnding = curr + (width*height);

    if (LockedRect.Pitch != 0)
    {
        const BYTE* curr = (const BYTE*)LockedRect.pBits;
        dataEnd = (BYTE*)(curr + (width*height) * 4);
    }

    while (curr < dataEnding) {
        // Get depth in millimeters
        USHORT depth = NuiDepthPixelToDepth(*curr++);
        dataEndD = (USHORT*)depth;

        // Draw a grayscale image of the depth:
        // B,G,R are all set to depth%256, alpha set to 1.
    }

    texture->UnlockRect(0);
    sensor->NuiImageStreamReleaseFrame(rgbStream, &imageFrame);
}
// This example assumes that the application provides
// void* cameraFrameBuffer, a buffer for an image, and that there is a method
// to fill the buffer with data from a camera, for example
// cameraObj.ProcessIO(cameraFrameBuffer)
int main(){
    initKinect();

    // Create an instance of a face tracker
    IFTFaceTracker* pFT = FTCreateFaceTracker();
    if (!pFT)
    {
        // Handle errors
    }

    // Initialize cameras configuration structures.
    // IMPORTANT NOTE: resolutions and focal lengths must be accurate, since it affects tracking precision!
    // It is better to use enums defined in NuiAPI.h

    // Video camera config with width, height, focal length in pixels
    // NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS focal length is computed for 640x480 resolution
    // If you use different resolutions, multiply this focal length by the scaling factor
    FT_CAMERA_CONFIG videoCameraConfig = { 640, 480, NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS };

    // Depth camera config with width, height, focal length in pixels
    // NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS focal length is computed for 320x240 resolution
    // If you use different resolutions, multiply this focal length by the scaling factor
    FT_CAMERA_CONFIG depthCameraConfig = { 320, 240, NUI_CAMERA_DEPTH_NOMINAL_FOCAL_LENGTH_IN_PIXELS };

    // Initialize the face tracker
    HRESULT hr = pFT->Initialize(&videoCameraConfig, &depthCameraConfig, NULL, NULL);
    if (FAILED(hr))
    {
        // Handle errors
    }

    // Create a face tracking result interface
    IFTResult* pFTResult = NULL;
    hr = pFT->CreateFTResult(&pFTResult);
    if (FAILED(hr))
    {
        // Handle errors
    }

    // Prepare image interfaces that hold RGB and depth data
    IFTImage* pColorFrame = FTCreateImage();
    IFTImage* pDepthFrame = FTCreateImage();
    if (!pColorFrame || !pDepthFrame)
    {
        // Handle errors
    }

    // Attach created interfaces to the RGB and depth buffers that are filled with
    // corresponding RGB and depth frame data from Kinect cameras
    pColorFrame->Attach(640, 480, dataEnd, FTIMAGEFORMAT_UINT8_R8G8B8, 640 * 3);
    pDepthFrame->Attach(320, 240, dataEndD, FTIMAGEFORMAT_UINT16_D13P3, 320 * 2);
    // You can also use Allocate() method in which case IFTImage interfaces own their memory.
    // In this case use CopyTo() method to copy buffers

    FT_SENSOR_DATA sensorData;
    sensorData.ZoomFactor = 1.0f; // Not used must be 1.0

    bool isFaceTracked = false;

    // Track a face
    while (true)
    {
        // Call Kinect API to fill videoCameraFrameBuffer and depthFrameBuffer with RGB and depth data
        getKinectDataD();

        // Check if we are already tracking a face
        if (!isFaceTracked)
        {
            // Initiate face tracking.
            // This call is more expensive and searches the input frame for a face.
            hr = pFT->StartTracking(&sensorData, NULL, NULL, pFTResult);
            if (SUCCEEDED(hr))
            {
                isFaceTracked = true;
            }
            else
            {
                // No faces found
                isFaceTracked = false;
            }
        }
        else
        {
            // Continue tracking. It uses a previously known face position.
            // This call is less expensive than StartTracking()
            hr = pFT->ContinueTracking(&sensorData, NULL, pFTResult);
            if (FAILED(hr))
            {
                // Lost the face
                isFaceTracked = false;
            }
        }

        // Do something with pFTResult like visualize the mask, drive your 3D avatar,
        // recognize facial expressions
    }

    // Clean up
    pFTResult->Release();
    pColorFrame->Release();
    pDepthFrame->Release();
    pFT->Release();
    return 0;
}
We figured it out: we were indeed using the wrong DLL, and it runs without errors now. But we ran into another problem: we have no clue how to use pFTResult and retrieve the face angles using "getFaceRect". Does somebody know how?
Using DirectX 9, I am trying to create and then fill in an LPDIRECT3DTEXTURE9 texture in the following way.
First, I create the texture with IDirect3DDevice9::CreateTexture:
LPDIRECT3DTEXTURE9 pTexture;
if ( FAILED( pd3dDevice->CreateTexture( MAX_IMAGE_WIDTH,
                                        MAX_IMAGE_HEIGHT,
                                        1,
                                        0,               // D3DUSAGE_DYNAMIC,
                                        D3DFMT_A8R8G8B8,
                                        D3DPOOL_MANAGED, // D3DPOOL_DEFAULT,
                                        &pTexture,
                                        NULL ) ) )
{
    // Handle error case
}
Then, I try and lock a rectangle on the texture as follows:
unsigned int uiSize = GetTextureSize();
D3DLOCKED_RECT rect;
ARGB BlackColor = { (char)0xFF, (char)0xFF, (char)0xFF, (char)0x00 };

::ZeroMemory( &rect, sizeof( D3DLOCKED_RECT ) );

// Lock outline texture to rect, and then cast rect to bits and use bits as outlineTexture access point
if ( pTexture == NULL )
{
    return ERROR_NOT_INITIALIZED;
}

pTexture->LockRect( 0, &rect, NULL, D3DLOCK_NOSYSLOCK ); // Consider ?

ARGB* bits = (ARGB*)rect.pBits;
for ( unsigned int uiPixel = 0; uiPixel < uiSize; ++uiPixel )
{
    // Copy all black pixels only
    if ( compositeMask[uiPixel] == BlackColor )
    {
        bits[uiPixel] = compositeMask[uiPixel];
    }
}
pTexture->UnlockRect( 0 );

return ERROR_SUCCESS;
ARGB is just a struct defined as follows:
struct ARGB
{
    char b;
    char g;
    char r;
    char a;

    bool operator==( ARGB& comp )
    {
        if ( a == comp.a &&
             r == comp.r &&
             g == comp.g &&
             b == comp.b )
            return TRUE;
        else
            return FALSE;
    }

    bool operator!=( ARGB& comp )
    {
        return !( this->operator==( comp ) );
    }
};
What I want to do is pre-calculate an array of pixel data (a black outline) depending on an in-application algorithm, and then only write the pure black pixels from that set of pixel data onto my LPDIRECT3DTEXTURE9 to be rendered later.
The application currently throws an ACCESS_VIOLATION exception (0xC0000005) at the LockRect call. Can anyone explain why?
Here's the exact exception detail:
Unhandled exception at 0x0132F261 in TestApp.exe: 0xC0000005: Access violation reading location 0x00000001.
The location varied between 0x00000000 and 0x00000001... Does that hint at anything?
Also, if there's a better way to do what I am trying to do, then I'd be all ears :)
Like the other commentators on your question, I can't see anything wrong in principle with the way that you create and lock the texture. I have done the same myself - creating a texture in D3DPOOL_MANAGED and using LockRect to update the content.
However, there are three areas that concern me. I'm posting as an answer because there's far too much for a comment, so please bear with me...
Using the D3DLOCK_NOSYSLOCK flag when locking. I have found that this can cause conflicts when the D3D device has been created for multithreaded operation.
The way you access the locked bits takes no account of the stride. I appreciate that the error apparently occurs before this code, but it's worth mentioning anyway.
You are casting to your own struct for pixel access and it's unclear what the actual size of the struct may be because I can't see your packing options for the project.
So, I suggest three things that you can do to identify if any of the above are causing a problem:
First, just use the default zero flag for the locking call
pTexture->LockRect( 0, &rect, NULL, 0 );
Second, verify that your ARGB structure really is 4 bytes
ASSERT(sizeof(ARGB) == 4);
Finally, do nothing except lock and unlock the texture and see if you still get a runtime error, but also check the return code
HRESULT hr = pTexture->LockRect( 0, &rect, NULL, 0 );
ASSERT(SUCCEEDED(hr));
hr = pTexture->UnlockRect( 0 );
ASSERT(SUCCEEDED(hr));
In any case, when updating the texture bits, you must do it on a row-by-row basis, taking account of the stride reported back from the LockRect call in D3DLOCKED_RECT.Pitch.
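For example, something along these lines (a sketch only, reusing the names from your question and assuming compositeMask is laid out as MAX_IMAGE_WIDTH x MAX_IMAGE_HEIGHT pixels):

BYTE* rowStart = (BYTE*)rect.pBits;
for ( unsigned int y = 0; y < MAX_IMAGE_HEIGHT; ++y )
{
    ARGB* row = (ARGB*)rowStart;
    for ( unsigned int x = 0; x < MAX_IMAGE_WIDTH; ++x )
    {
        unsigned int uiPixel = y * MAX_IMAGE_WIDTH + x;

        // Copy all black pixels only
        if ( compositeMask[uiPixel] == BlackColor )
        {
            row[x] = compositeMask[uiPixel];
        }
    }
    rowStart += rect.Pitch; // advance by the pitch, not by width * sizeof(ARGB)
}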
Perhaps you could update your question with the results of the above and I can amend this answer as necessary.
This was mind-numbingly stupid. Sorry everyone.
I followed the texture pointer all the way through the code. The LPDIRECT3DTEXTURE9 pointers are actually stored inside another custom Texture class with extra contextual data attached to it, and those wrapper objects were members of another class that was being copied and used all over the place, yet no assignment operator or copy constructor had been written for it. At some point, out of the huge list of textures being processed, one of the textures sent from the container class was found to be invalid because it actually was: it was supposed to contain a copy of another texture, but held only an invalid pointer.
Sorry for the unfortunate amateur error, everyone, but thank you all for the great pointers and assurance.