CImage implicitly converting grayscale PNG images to 32bpp ARGB - mfc

I am using the ATL::CImage class to decode PNG images, but the images get converted to RGBA (4 bytes per pixel) when loaded.
ATL::CImage img;
img.Load(_T("test.png")); // 8-bit grayscale
After a successful load, the m_nBPP member is 32 (i.e. 4 bytes) and m_bHasAlphaChannel is true, but they should be 8 and false.
Due to the implicit conversion we need to convert the RGBA data back to 8bpp manually. I am processing more than 400 images, so this is a major slowdown for the application.
On the Visual Studio forum I read that this is an issue in GDI+, as it implicitly converts all grayscale PNG images to 32bpp ARGB.
Is there a workaround for this?

If you want control, use the Windows Imaging Component (WIC). You'll need to create a decoder and retrieve the image frame(s) you're interested in. PNG support is outlined in the PNG Format Overview.
The following sample code opens a grayscale PNG image, and displays information about the media:
#include <windows.h>
#include <wincodec.h>
#pragma comment(lib, "Windowscodecs.lib")
#include <iostream>
int main() {
::CoInitialize( NULL );
Create the factory COM server:
IWICImagingFactory* pFactory = NULL;
HRESULT hr = ::CoCreateInstance( CLSID_WICImagingFactory,
NULL,
CLSCTX_INPROC_SERVER,
IID_IWICImagingFactory,
(void**)&pFactory );
Create a decoder based on the image source:
IWICBitmapDecoder* pDecoder = NULL;
if ( SUCCEEDED( hr ) ) {
hr = pFactory->CreateDecoderFromFilename( L"test.png",
NULL,
GENERIC_READ,
WICDecodeMetadataCacheOnDemand,
&pDecoder );
}
Although PNG files always contain a single image, there are image formats that allow storing several images in a single file. In general you will have to query the frame count and iterate over all frames, decoding them one after another:
UINT frameCount = 0;
if ( SUCCEEDED( hr ) ) {
hr = pDecoder->GetFrameCount( &frameCount );
}
if ( SUCCEEDED( hr ) ) {
std::wcout << L"Framecount: " << frameCount << std::endl;
for ( UINT frameIndex = 0; frameIndex < frameCount; ++frameIndex ) {
std::wcout << std::endl << L"Frame " << frameIndex << L":" << std::endl;
IWICBitmapFrameDecode* pFrame = NULL;
hr = pDecoder->GetFrame( frameIndex, &pFrame );
Dump image dimensions for illustration purposes:
if ( SUCCEEDED( hr ) ) {
UINT width = 0, height = 0;
hr = pFrame->GetSize( &width, &height );
if ( SUCCEEDED( hr ) ) {
std::wcout << L" width: " << width <<
L", height: " << height << std::endl;
}
}
To verify that the image data has not been altered, dump both bpp and channel count information:
if ( SUCCEEDED( hr ) ) {
WICPixelFormatGUID pixelFormat = { 0 };
hr = pFrame->GetPixelFormat( &pixelFormat );
if ( SUCCEEDED( hr ) ) {
// Translate pixelformat to bpp
IWICComponentInfo* pComponentInfo = NULL;
hr = pFactory->CreateComponentInfo( pixelFormat, &pComponentInfo );
IWICPixelFormatInfo* pPixelFormatInfo = NULL;
if ( SUCCEEDED( hr ) ) {
hr = pComponentInfo->QueryInterface( &pPixelFormatInfo );
}
UINT bpp = 0;
if ( SUCCEEDED( hr ) ) {
hr = pPixelFormatInfo->GetBitsPerPixel( &bpp );
}
if ( SUCCEEDED( hr ) ) {
std::wcout << L" bpp: " << bpp << std::endl;
}
UINT channelCount = 0;
if ( SUCCEEDED( hr ) ) {
hr = pPixelFormatInfo->GetChannelCount( &channelCount );
}
if ( SUCCEEDED( hr ) ) {
std::wcout << L" Channel Count: " << channelCount << std::endl;
}
// Cleanup
if ( pPixelFormatInfo != NULL ) { pPixelFormatInfo->Release(); }
if ( pComponentInfo != NULL ) { pComponentInfo->Release(); }
}
}
The remainder is resource cleanup:
// Cleanup
if ( pFrame != NULL ) { pFrame->Release(); }
}
}
// Cleanup
if ( pDecoder != NULL ) { pDecoder->Release(); }
if ( pFactory != NULL ) { pFactory->Release(); }
::CoUninitialize();
return 0;
}
Running this code against a 50×50 8-bit grayscale PNG image produces the following output:
Framecount: 1
Frame 0:
width: 50, height: 50
bpp: 8
Channel Count: 1


Screenshot fullscreen game using DirectX 10/11

I searched all day for answers on this topic, and this is what I found:
Plain WinAPI screenshots: no, they don't capture fullscreen DirectX content
You can take screenshots using the outdated DirectX 9 API
You can take screenshots of newer DirectX versions using a hook
What I already tried:
#include <cstdio>
#include "screenshoter.h"
#include <d3d9.h>
#include <wincodec.h>
#include <Windows.h>
#include <iostream>
#include <string>
#include <Psapi.h>
#include <algorithm>
#include <vector>
HWND targetHWND = nullptr;
BOOL CALLBACK enumWindowsProc(__in HWND hWnd, __in LPARAM lParam) {
int length = GetWindowTextLength(hWnd);
if (length == 0)
return true;
auto buffer = new TCHAR[512];
memset( buffer, 0, ( 512 ) * sizeof( TCHAR ) );
GetWindowText( hWnd, buffer, 512 );
auto windowTitle = std::string(buffer);
DWORD proc = NULL;
GetWindowThreadProcessId(hWnd, &proc);
GetModuleFileNameEx(OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, false, proc), nullptr, buffer, 512);
auto filePath = std::string(buffer);
auto pos = filePath.find_last_of('\\');
auto fileName = filePath.substr(pos + 1);
delete[] buffer;
if (windowTitle == "Overwatch" && fileName == "Overwatch.exe")
targetHWND = hWnd;
//std::cout << "Title: " << windowTitle << " | Filename: " << fileName << std::endl;
return true;
}
int screenshoter::take() {
std::cout << EnumWindows(enumWindowsProc, NULL) << std::endl;
IDirect3DSurface9* pRenderTarget=NULL;
IDirect3DSurface9* pDestTarget=NULL;
IDirect3D9 *pD3D = Direct3DCreate9(D3D_SDK_VERSION);
IDirect3DDevice9* Device=NULL;
D3DPRESENT_PARAMETERS d3dpp = { 0 };
D3DDISPLAYMODE DisplayMode;
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
d3dpp.hDeviceWindow = targetHWND;
d3dpp.Windowed = ((GetWindowLong(targetHWND, GWL_STYLE) & WS_POPUP) != 0) ? FALSE : TRUE;
if (FAILED(pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, d3dpp.hDeviceWindow, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &d3dpp, &Device)))
{
pD3D->Release();
return false;
}
const char file[] = "Pickture.bmp";
// sanity checks.
if (Device == NULL)
return 0;
// get the render target surface.
HRESULT hr = Device->GetRenderTarget(0, &pRenderTarget);
// get the current adapter display mode.
hr = pD3D->GetAdapterDisplayMode(D3DADAPTER_DEFAULT,&DisplayMode);
// create a destination surface.
hr = Device->CreateOffscreenPlainSurface(DisplayMode.Width,
DisplayMode.Height,
DisplayMode.Format,
D3DPOOL_SYSTEMMEM,
&pDestTarget,
NULL);
//copy the render target to the destination surface.
hr = Device->GetRenderTargetData(pRenderTarget, pDestTarget);
//save its contents to a bitmap file.
hr = D3DXSaveSurfaceToFile(file,
D3DXIFF_JPG,
pDestTarget,
NULL,
NULL);
// clean up.
pRenderTarget->Release();
pDestTarget->Release();
return 0;
}
My program searches for the game process/window. But the problem is that there is no d3dx9.h on my system, so I can't use D3DXSaveSurfaceToFile with D3DXIFF_JPG.
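D3DX isn't strictly required for the saving step: IDirect3DSurface9::LockRect exposes the raw pixels, and a BMP header is only 54 bytes you can write yourself. A sketch of the file-writing half (WriteBmp32 is a name I made up; it assumes a 32-bit format such as D3DFMT_X8R8G8B8):

```cpp
#include <cstdint>
#include <cstring>
#include <fstream>

// Write a 32bpp BMP. `pitch` is the source row size in bytes (may exceed
// width * 4, e.g. D3DLOCKED_RECT::Pitch), so rows are copied one at a time.
bool WriteBmp32(const char* path, const uint8_t* pixels,
                int width, int height, int pitch)
{
    uint8_t header[54] = { 0 };
    const uint32_t imageSize  = uint32_t(width) * uint32_t(height) * 4;
    const uint32_t fileSize   = 54 + imageSize;
    const uint32_t dataOffset = 54;
    const uint32_t infoSize   = 40;
    const int32_t  negHeight  = -height;     // negative height = top-down rows
    const uint16_t planes = 1, bitCount = 32;
    header[0] = 'B'; header[1] = 'M';
    std::memcpy(header + 2,  &fileSize,   4);
    std::memcpy(header + 10, &dataOffset, 4);
    std::memcpy(header + 14, &infoSize,   4);
    std::memcpy(header + 18, &width,      4);
    std::memcpy(header + 22, &negHeight,  4);
    std::memcpy(header + 26, &planes,     2);
    std::memcpy(header + 28, &bitCount,   2); // compression stays 0 (BI_RGB)
    std::ofstream f(path, std::ios::binary);
    if (!f) return false;
    f.write(reinterpret_cast<const char*>(header), sizeof(header));
    for (int y = 0; y < height; ++y)
        f.write(reinterpret_cast<const char*>(pixels) + size_t(y) * pitch,
                size_t(width) * 4);
    return f.good();
}
```

On the D3D side you would call pDestTarget->LockRect(&locked, NULL, D3DLOCK_READONLY), pass locked.pBits and locked.Pitch to WriteBmp32, then call pDestTarget->UnlockRect().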

Setting default format on capture device via WinAPI

In an application I've been using the piece of code below to automatically set the 'Default Format' of the microphone to '2 channel, 16 bit, 48000 Hz'.
This code works in Windows 7 and 8, and until recently also in Windows 10. Since some Windows 10 update this year, the code no longer works as expected. When I manually set the format to another value, like 44.1 kHz, in Sound - Mic - Advanced and then run the code, the format is changed to '2 channel, 16 bit, 48000 Hz', but I get no sound from the microphone. When I set the format manually to the correct value, there are no problems.
Here is the piece of code:
IMMDevice* pEndpointRead = NULL;
IMMDevice* pEndpointWrite = NULL;
IMMDeviceEnumerator* pEnumerator = NULL;
IPropertyStore* propertyStoreRead = NULL;
IPropertyStore* propertyStoreWrite = NULL;
IAudioClient *audioClient = NULL;
PROPVARIANT propRead;
HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL, __uuidof(IMMDeviceEnumerator), (LPVOID *)&pEnumerator);
hr = pEnumerator->GetDefaultAudioEndpoint(eCapture, eMultimedia, &pEndpointRead);
hr = pEndpointRead->OpenPropertyStore(STGM_READ, &propertyStoreRead);
if (FAILED(hr))
{
std::cout << "OpenPropertyStore failed!" << std::endl;
}
hr = propertyStoreRead->GetValue(PKEY_AudioEngine_DeviceFormat, &propRead);
if (FAILED(hr))
{
std::cout << "GetValue failed!" << std::endl;
}
WAVEFORMATEXTENSIBLE* deviceFormatProperties = (WAVEFORMATEXTENSIBLE *)propRead.blob.pBlobData;
deviceFormatProperties->Format.nChannels = 2;
deviceFormatProperties->Format.nSamplesPerSec = 48000;
deviceFormatProperties->Format.wBitsPerSample = 16;
deviceFormatProperties->Samples.wValidBitsPerSample = deviceFormatProperties->Format.wBitsPerSample;
deviceFormatProperties->Format.nBlockAlign = (deviceFormatProperties->Format.nChannels*deviceFormatProperties->Format.wBitsPerSample) / 8;
deviceFormatProperties->Format.nAvgBytesPerSec = deviceFormatProperties->Format.nSamplesPerSec*deviceFormatProperties->Format.nBlockAlign;
deviceFormatProperties->dwChannelMask = KSAUDIO_SPEAKER_STEREO;
deviceFormatProperties->Format.cbSize = 22;
deviceFormatProperties->SubFormat = KSDATAFORMAT_SUBTYPE_PCM;
hr = pEndpointRead->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&audioClient);
if (FAILED(hr))
{
std::cout << "pDevice->Activate failed!" << std::endl;
}
hr = audioClient->IsFormatSupported(/*AUDCLNT_SHAREMODE_SHARED*/AUDCLNT_SHAREMODE_EXCLUSIVE, (PWAVEFORMATEX)&deviceFormatProperties->Format, NULL);
if (FAILED(hr))
{
std::cout << "IsFormatSupported failed" << std::endl;
}
hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL, __uuidof(IMMDeviceEnumerator), (LPVOID *)&pEnumerator);
hr = pEnumerator->GetDefaultAudioEndpoint(eCapture, eMultimedia, &pEndpointWrite);
hr= pEndpointWrite->OpenPropertyStore(STGM_WRITE, &propertyStoreWrite);
if (FAILED(hr))
{
std::cout << "OpenPropertyStore failed!" << std::endl;
}
hr = propertyStoreWrite->SetValue(PKEY_AudioEngine_DeviceFormat, propRead);
if (FAILED(hr))
{
std::cout << "SetValue failed!" << std::endl;
}
hr = propertyStoreWrite->Commit();
pEndpointWrite->Release();
pEnumerator->Release();
propertyStoreWrite->Release();
pEndpointRead->Release();
propertyStoreRead->Release();
pEndpointRead = NULL;
PropVariantClear(&propRead);
Any idea what could be the problem?
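One thing worth double-checking with code like the above is that the derived format fields stay mutually consistent: for the '2 channel, 16 bit, 48000 Hz' target, nBlockAlign = 2 * 16 / 8 = 4 bytes and nAvgBytesPerSec = 48000 * 4 = 192000. As a standalone sketch (helper names are mine, not part of any API):

```cpp
// Derived WAVEFORMATEX fields for interleaved PCM.
unsigned BlockAlign(unsigned channels, unsigned bitsPerSample) {
    return channels * bitsPerSample / 8;   // bytes per sample frame
}
unsigned AvgBytesPerSec(unsigned samplesPerSec, unsigned blockAlign) {
    return samplesPerSec * blockAlign;     // bytes per second
}
// 2 channels, 16 bits  -> BlockAlign == 4
// 48000 Hz, 4-byte frames -> AvgBytesPerSec == 192000
```

If these fields disagree with nChannels/wBitsPerSample/nSamplesPerSec, the audio engine may accept the format but produce silence, which matches the symptom described.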

using the directshow to control the camera and using open cv to capture images

I am using DirectShow to control the camera settings and OpenCV to capture the images. My problem is that after 2 or 3 captures, the image settings I apply change back to default values. I need this for my college project, and I always want images taken with the same camera settings. Any solution would be very helpful, because I am completely new to this. My code is given below:
#include <stdio.h>
#include <atlstr.h>
#include <dshow.h>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>
#include <opencv2\core\core.hpp>
#include <opencv\cv.h>
#include <iostream>
#include <streams.h>
CFactoryTemplate g_Templates[1];
int g_cTemplates;
void setCameraMode(ICaptureGraphBuilder2 *pCaptureGraphBuilder2, IAMStreamConfig *pConfig, IBaseFilter *pDeviceFilter, HRESULT hr)
{
// Set res, frame rate, and color mode
hr = CoInitialize(0);
hr = pCaptureGraphBuilder2->FindInterface(&PIN_CATEGORY_CAPTURE, 0, pDeviceFilter, IID_IAMStreamConfig, (void**)&pConfig);
int iCount = 0, iSize = 0;
hr = pConfig->GetNumberOfCapabilities(&iCount, &iSize);
// Check the size to make sure we pass in the correct structure.
if (iSize == sizeof(VIDEO_STREAM_CONFIG_CAPS))
{
// Use the video capabilities structure.
for (int iFormat = 0; iFormat < iCount; iFormat++)
{
VIDEO_STREAM_CONFIG_CAPS scc;
AM_MEDIA_TYPE *pmtConfig;
hr = pConfig->GetStreamCaps(iFormat, &pmtConfig, (BYTE*)&scc);
if (SUCCEEDED(hr))
{
if ((pmtConfig->majortype == MEDIATYPE_Video)) //&&
//(pmtConfig->subtype == MEDIASUBTYPE_RGB24))
{
VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER*)pmtConfig->pbFormat;
// pVih contains the detailed format information.
LONG lWidth = pVih->bmiHeader.biWidth;
LONG lHeight = pVih->bmiHeader.biHeight;
pVih->bmiHeader.biWidth = 160;
pVih->bmiHeader.biHeight = 120;
pVih->bmiHeader.biSizeImage = DIBSIZE(pVih->bmiHeader);
// pVih->AvgTimePerFrame = 10000000;
}
}
hr = pConfig->SetFormat(pmtConfig);
hr = pConfig->GetStreamCaps(iFormat, &pmtConfig, (BYTE*)&scc);
//DeleteMediaType(pmtConfig);
}
}
}
void setCameraControl(IBaseFilter *pDeviceFilter, HRESULT hr, int exposure, int focus)
{
// Query the capture filter for the IAMCameraControl interface.
IAMCameraControl *pCameraControl = 0;
hr = pDeviceFilter->QueryInterface(IID_IAMCameraControl, (void**)&pCameraControl);
if (FAILED(hr))
{
// The device does not support IAMCameraControl
}
else
{
long Min, Max, Step, Default, Flags, Val;
// Get the range and default values
hr = pCameraControl->GetRange(CameraControl_Exposure, &Min, &Max, &Step, &Default, &Flags);
hr = pCameraControl->GetRange(CameraControl_Focus, &Min, &Max, &Step, &Default, &Flags);
if (SUCCEEDED(hr))
{
hr = pCameraControl->Set(CameraControl_Exposure, -10, CameraControl_Flags_Manual );
// Min = -11, Max = 1, Step = 1
hr = pCameraControl->Set(CameraControl_Focus, focus, CameraControl_Flags_Manual );
}
}
}
void setCameraProperties(IBaseFilter *pDeviceFilter, HRESULT hr, int brightness, int backLightCompensation, int contrast, int saturation, int sharpness, int whiteBalance)
{
// Query the capture filter for the IAMVideoProcAmp interface.
IAMVideoProcAmp *pProcAmp = 0;
hr = pDeviceFilter->QueryInterface(IID_IAMVideoProcAmp, (void**)&pProcAmp);
if (FAILED(hr))
{
// The device does not support IAMVideoProcAmp
}
else
{
long Min, Max, Step, Default, Flags, Val;
// Get the range and default values
hr = pProcAmp->GetRange(VideoProcAmp_Brightness, &Min, &Max, &Step, &Default, &Flags);
hr = pProcAmp->GetRange(VideoProcAmp_BacklightCompensation, &Min, &Max, &Step, &Default, &Flags);
hr = pProcAmp->GetRange(VideoProcAmp_Contrast, &Min, &Max, &Step, &Default, &Flags);
hr = pProcAmp->GetRange(VideoProcAmp_Saturation, &Min, &Max, &Step, &Default, &Flags);
hr = pProcAmp->GetRange(VideoProcAmp_Sharpness, &Min, &Max, &Step, &Default, &Flags);
hr = pProcAmp->GetRange(VideoProcAmp_WhiteBalance, &Min, &Max, &Step, &Default, &Flags);
if (SUCCEEDED(hr))
{
hr = pProcAmp->Set(VideoProcAmp_Brightness,100, VideoProcAmp_Flags_Manual);
hr = pProcAmp->Set(VideoProcAmp_BacklightCompensation, 0, VideoProcAmp_Flags_Manual);
hr = pProcAmp->Set(VideoProcAmp_Contrast, 20 , VideoProcAmp_Flags_Manual);
hr = pProcAmp->Set(VideoProcAmp_Saturation,50, VideoProcAmp_Flags_Manual);
hr = pProcAmp->Set(VideoProcAmp_Sharpness, 0, VideoProcAmp_Flags_Manual);
hr = pProcAmp->Set(VideoProcAmp_WhiteBalance, 0, VideoProcAmp_Flags_Manual);
}
}
}
//given in the example program
IPin *GetPin(IBaseFilter *pFilter, PIN_DIRECTION PinDir)
{
BOOL bFound = FALSE;
IEnumPins *pEnum;
IPin *pPin;
pFilter->EnumPins(&pEnum);
while(pEnum->Next(1, &pPin, 0) == S_OK)
{
PIN_DIRECTION PinDirThis;
pPin->QueryDirection(&PinDirThis);
if (bFound = (PinDir == PinDirThis))
break;
pPin->Release();
}
pEnum->Release();
return (bFound ? pPin : 0);
}
int main()
{
// for playing
IGraphBuilder *pGraphBuilder;
ICaptureGraphBuilder2 *pCaptureGraphBuilder2;
IMediaControl *pMediaControl = NULL;
IMediaEventEx *pEvent = NULL;
// multiple cameras
IBaseFilter *pDeviceFilter_0 = NULL;
IBaseFilter *m_pGrabber_0 = NULL;
ISampleGrabber *m_pGrabberSettings_0 = NULL;
// select camera
ICreateDevEnum *pCreateDevEnum = NULL;
IEnumMoniker *pEnumMoniker = NULL;
IMoniker *pMoniker = NULL;
ULONG nFetched = 0;
// initialize COM
CoInitialize(NULL);
// selecting a device
// Create CreateDevEnum to list device
std::string USB1 = "\\\\?\\usb#vid_045e&pid_076d&mi_00#7&1ba27d43&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\\global";
CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER, IID_ICreateDevEnum, (PVOID *)&pCreateDevEnum);
// Create EnumMoniker to list VideoInputDevice
pCreateDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &pEnumMoniker, 0);
if (pEnumMoniker == NULL) {
// this will be shown if there is no capture device
printf("no device\n");
return 0;
}
// reset EnumMoniker
pEnumMoniker->Reset();
// get each Moniker
while (pEnumMoniker->Next(1, &pMoniker, &nFetched) == S_OK)
{
IPropertyBag *pPropertyBag;
TCHAR devname[256];
TCHAR devpath[256];
// bind to IPropertyBag
pMoniker->BindToStorage(0, 0, IID_IPropertyBag, (void **)&pPropertyBag);
VARIANT var;
// get FriendlyName
var.vt = VT_BSTR;
pPropertyBag->Read(L"FriendlyName", &var, 0);
WideCharToMultiByte(CP_ACP, 0, var.bstrVal, -1, devname, sizeof(devname), 0, 0);
VariantClear(&var);
// get DevicePath
// DevicePath : A unique string
var.vt = VT_BSTR;
pPropertyBag->Read(L"DevicePath", &var, 0);
WideCharToMultiByte(CP_ACP, 0, var.bstrVal, -1, devpath, sizeof(devpath), 0, 0);
std::string devpathString = devpath;
pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void**)&pDeviceFilter_0 );
pMoniker->Release();
pPropertyBag->Release();
if (pDeviceFilter_0 == NULL)
{
MessageBox(NULL, "No MS HD-5000 cameras found", "No cameras", MB_OK);
return 0;
}
}
// create FilterGraph and CaptureGraphBuilder2
CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC, IID_IGraphBuilder, (LPVOID *)&pGraphBuilder);
CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC, IID_ICaptureGraphBuilder2, (LPVOID *)&pCaptureGraphBuilder2);
HRESULT hr = CoInitialize(0);
IAMStreamConfig *pConfig = NULL;
setCameraMode(pCaptureGraphBuilder2, pConfig, pDeviceFilter_0, hr); // FPS, Res, color mode
setCameraControl(pDeviceFilter_0, hr, 10 , 12); // Focus, exposure
setCameraProperties(pDeviceFilter_0, hr, 180, 0, 4, 100, 0, 2800); // Brightness, saturation, etc
// set grabber properties
AM_MEDIA_TYPE mt;
hr = CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void**)&m_pGrabber_0); // create ISampleGrabber
pCaptureGraphBuilder2->SetFiltergraph(pGraphBuilder); // set FilterGraph
pGraphBuilder->QueryInterface(IID_IMediaControl, (LPVOID *)&pMediaControl); // get MediaControl interface
m_pGrabber_0->QueryInterface(IID_ISampleGrabber, (void**)&m_pGrabberSettings_0);
ZeroMemory(&mt, sizeof(AM_MEDIA_TYPE));
mt.majortype = MEDIATYPE_Video;
mt.subtype = MEDIASUBTYPE_RGB24;
hr = m_pGrabberSettings_0->SetMediaType(&mt);
if (FAILED(hr))
{
return hr;
}
hr = m_pGrabberSettings_0->SetOneShot(FALSE);
hr = m_pGrabberSettings_0->SetBufferSamples(TRUE);
// build filter graph
pGraphBuilder->AddFilter(pDeviceFilter_0, L"Device Filter");
pGraphBuilder->AddFilter(m_pGrabber_0, L"Sample Grabber");
IPin* pSourceOut_0 = GetPin(pDeviceFilter_0, PINDIR_OUTPUT);
IPin* pGrabberIn_0 = GetPin(m_pGrabber_0, PINDIR_INPUT);
pGraphBuilder->Connect(pSourceOut_0, pGrabberIn_0);
/*
pMediaControl->Run();
long pBufferSize;
unsigned char* pBuffer_0 = 0;
hr = m_pGrabberSettings_0->GetCurrentBuffer(&pBufferSize, NULL);
if (FAILED(hr))
{
return 0;
}
pBuffer_0 = (BYTE*)CoTaskMemAlloc(pBufferSize);
if (!pBuffer_0)
{
hr = E_OUTOFMEMORY;
return 0;
}
long pBufferSize = 0;
unsigned char* pBuffer_0 = 0;
long Size=0;
hr = m_pGrabberSettings_0->GetCurrentBuffer(&Size, NULL);
if (Size != pBufferSize)
{
pBufferSize = Size;
if (pBuffer_0 != 0)
{
delete[] pBuffer_0;
}
pBuffer_0= new unsigned char[pBufferSize];
}
long pBufferSize = 425;
unsigned char* pBuffer_0 = 0;
pBuffer_0 = new unsigned char[pBufferSize];
// start playing
pMediaControl->Run();
while (1) {
if (MessageBox(NULL, "Grab frame?", "Grab?", MB_OKCANCEL) == 2)
{
break;
}
hr = m_pGrabberSettings_0->GetCurrentBuffer(&pBufferSize,(long*)pBuffer_0);
Cleanup:
// convert to OpenCV format
IplImage* img_0 = cvCreateImage(cvSize(160,120),IPL_DEPTH_8U,3);
for (int i = 0; i < pBufferSize ; i++)
{
img_0->imageData[i] = pBuffer_0[i];
}
cvFlip(img_0, NULL, 0);
// show
// cvNamedWindow("mainWin_0", CV_WINDOW_AUTOSIZE);
// cvMoveWindow("mainWin_0", 100, 100);
cvShowImage("mainWin_0", img_0 );
cvSaveImage("c:\\users\\senthil\\desktop\\img.png",img_0 );
//cvWaitKey(0);
cvReleaseImage(&img_0 );
}
*/
pMediaControl->Run();
cvNamedWindow("Camera_Output", 1); //Create window
CvCapture* capture = cvCaptureFromCAM(0); //Capture using any camera connected to your system
while(1)
{
//Create infinte loop for live streaming
if (MessageBox(NULL, "Grab frame?", "Grab?", MB_OKCANCEL) == 2)
{
break;
}
IplImage* frame = cvQueryFrame(capture); //Create image frames from capture
cvShowImage("Camera_Output", frame); //Show image frames on created window
cvSaveImage("c:\\users\\senthil\\desktop\\img1.png",frame);
// cv::Mat img(frame);
// cv::imwrite("c:\\users\\selvaraj\\desktop\\img.png",img);
}
//std::cout << "FPS: " << fps << std::endl;
//std::cout << "PROP_BRIGHTNESS: " << PROP_BRIGHTNESS << std::endl;
//WriteComPort("COM3","A");
cvReleaseCapture(&capture); //Release capture.
cvDestroyWindow("Camera_Output"); //Destroy Window */
// release
pMediaControl->Release();
pCaptureGraphBuilder2->Release();
pGraphBuilder->Release();
pEnumMoniker->Release();
pCreateDevEnum->Release();
// finalize COM
CoUninitialize();
return 0;
}
I tried using the Sample Grabber as well, but it did not help either. Please help me fix this code.
Are you talking about these settings:
setCameraMode(pCaptureGraphBuilder2, pConfig, pDeviceFilter_0, hr); // FPS, Res, color mode
setCameraControl(pDeviceFilter_0, hr, 10 , 12); // Focus, exposure
setCameraProperties(pDeviceFilter_0, hr, 180, 0, 4, 100, 0, 2800); // Brightness, saturation, etc

CryptDecrypt returns random characters at the end of the decrypted string

I am trying to make a simple application which encrypts a string and then decrypts it.
So far my code:
int main( int argc, char* argv[] )
{
char test[ 32 ] = { 0 };
strcpy( test, "This is a sample string." );
BYTE buf = NULL;
DWORD len = strlen( test );
EncryptData( lpszPassword, test, &len );
return 0;
}
void EncryptData( TCHAR *lpszPassword, char *pbBuffer, DWORD *dwCount )
{
HCRYPTPROV hProv = 0;
HCRYPTKEY hKey = 0;
HCRYPTHASH hHash = 0;
LPWSTR wszPassword = lpszPassword;
DWORD cbPassword = ( wcslen( wszPassword ) + 1 )*sizeof( WCHAR );
if ( !CryptAcquireContext( &hProv, NULL, MS_ENH_RSA_AES_PROV, PROV_RSA_AES, CRYPT_VERIFYCONTEXT ) )
{
printf( "Error %x during CryptAcquireContext!\n", GetLastError() );
goto Cleanup;
}
if ( !CryptCreateHash( hProv, CALG_SHA_256, 0, 0, &hHash ) )
{
printf( "Error %x during CryptCreateHash!\n", GetLastError() );
goto Cleanup;
}
if ( !CryptHashData( hHash, ( PBYTE )wszPassword, cbPassword, 0 ) )
{
printf( "Error %x during CryptHashData!\n", GetLastError() );
goto Cleanup;
}
if ( !CryptDeriveKey( hProv, CALG_AES_256, hHash, CRYPT_EXPORTABLE, &hKey ) )//hKey
{
printf( "Error %x during CryptDeriveKey!\n", GetLastError() );
goto Cleanup;
}
DWORD size = ( DWORD )strlen( pbBuffer ) / sizeof( char );
printf( "\nLength of string = %d", size );
if ( !CryptEncrypt( hKey, 0, TRUE, 0, ( LPBYTE )pbBuffer, &size, BLOCK_SIZE ) )
{
printf( "Error %x during CryptEncrypt!\n", GetLastError() );
goto Cleanup;
}
printf( "\nEncrypted bytes = %d", size );
printf( "\nEncrypted text = %s", pbBuffer );
if ( !CryptDecrypt( hKey, 0, TRUE, 0, ( LPBYTE )pbBuffer, &size ) )
{
printf( "Error %x during CryptDecrypt!\n", GetLastError() );
goto Cleanup;
}
printf( "\nDecrypted bytes = %d", size );
printf( "\nDecrypted text = %s", pbBuffer );
Cleanup:
if ( hKey )
{
CryptDestroyKey( hKey );
}
if ( hHash )
{
CryptDestroyHash( hHash );
}
if ( hProv )
{
CryptReleaseContext( hProv, 0 );
}
}
This produces the output:
Length of string = 24
Encrypted bytes = 32
Encrypted text = ╨é╖·ç┤╠├ó br.≡·►;╜K/┤E(↓)╫%┤Cà¡╩╠╠╠╠╘)Ñ°♀·L
Decrypted bytes = 24
Decrypted text = This is a sample string.)╫%┤Cà¡╩╠╠╠╠╘)Ñ°♀·L
So basically it is almost working, but at the end of the decrypted string there are characters left over from the encrypted string.
So my question is: am I doing something wrong, or am I just missing something?
Thanks in advance!
The printf function when given "%s" requires a NULL terminated string. Obviously the string is not NULL terminated (actually, the NULL is located who-knows-where, but printf() found it long after the valid portion of the data is printed).
Use the size value you retrieved for the decrypted text. That is the real number of bytes that are valid.
Here is a solution that not only corrects the size and decrypted data issue, but also the issue with usage of goto.
#include <windows.h>
#include <wincrypt.h>
#include <string>
#include <iostream>
// BLOCK_SIZE (the total size of pbBuffer; the ciphertext needs room for
// padding) was not defined in the original snippets -- 32 matches the
// caller's buffer.
const DWORD BLOCK_SIZE = 32;
using namespace std;
struct CryptStuff
{
HCRYPTPROV* hProv;
HCRYPTKEY* hKey;
HCRYPTHASH* hHash;
CryptStuff(HCRYPTPROV* hprov, HCRYPTKEY* hkey, HCRYPTHASH* hash) :
hProv(hprov), hKey(hkey), hHash(hash) {}
~CryptStuff()
{
if ( *hKey ) CryptDestroyKey( *hKey );
if ( *hHash ) CryptDestroyHash( *hHash );
if ( *hProv ) CryptReleaseContext( *hProv, 0 );
}
};
void EncryptData( TCHAR *lpszPassword, char *pbBuffer, DWORD *dwCount )
{
HCRYPTPROV hProv = 0;
HCRYPTKEY hKey = 0;
HCRYPTHASH hHash = 0;
// create an instance of CryptStuff. This will cleanup the data on return
CryptStuff cs(&hProv, &hKey, &hHash);
LPWSTR wszPassword = lpszPassword;
DWORD cbPassword = ( wcslen( wszPassword ) + 1 )*sizeof( WCHAR );
if ( !CryptAcquireContext( &hProv, NULL, MS_ENH_RSA_AES_PROV, PROV_RSA_AES,
CRYPT_VERIFYCONTEXT ) )
{
return;
}
if ( !CryptCreateHash( hProv, CALG_SHA_256, 0, 0, &hHash ) )
{
return;
}
if ( !CryptHashData( hHash, ( PBYTE )wszPassword, cbPassword, 0 ) )
{
return;
}
if ( !CryptDeriveKey( hProv, CALG_AES_256, hHash, CRYPT_EXPORTABLE, &hKey ) )
{
return;
}
DWORD size = ( DWORD )strlen( pbBuffer ) / sizeof( char );
cout << "\nLength of string = " << size;
if ( !CryptEncrypt( hKey, 0, TRUE, 0, ( LPBYTE )pbBuffer, &size, BLOCK_SIZE ) )
{
return;
}
cout << "\nEncrypted bytes = " << size;
cout << "\nEncrypted text = ";
cout.write(pbBuffer, size);
if ( !CryptDecrypt( hKey, 0, TRUE, 0, ( LPBYTE )pbBuffer, &size ) )
{
return;
}
cout << "\nDecrypted bytes = " << size;
cout << "\nDecrypted text = ";
cout.write(pbBuffer, size);
}
I wrote this without a compiler handy, so forgive any typos. I also removed the error output for brevity.
The code above first corrects the issue of the decrypted data by using cout.write to output the proper number of characters (denoted by the size value). This guarantees we get the characters outputted that we want. I used cout.write, since it is perfectly acceptable for unencrypted data to contain embedded NULL's, and we don't want to stop on the first NULL that shows up in the string. We want to stop once we hit size number of characters that are outputted.
The next thing that was done was to use a technique called RAII (Resource Acquisition Is Initialization) to remove the goto. Note how this was done:
We first created a struct called CryptStuff that contains pointers to the 3 items we want to clean up. In this struct, we have a destructor that cleans up these items. To utilize this struct, we create an instance of it called cs inside of EncryptData, and give the instance on construction the address of the 3 items.
So basically, when EncryptData returns, the cs instance has its destructor called automatically, which means our handles get cleaned up. This is much more advantageous than using goto (practically anything is better than goto) or tricky, redundant cleanup code. The reason is that the cleanup is automatic: regardless of how EncryptData returns, whether through a normal return statement or an exception thrown by some function, the handles are cleaned up.
Also, if at a later time, the code gets more complex, there is no need to remember to "add a goto" or "write that clean up code" over and over again for each new return scenario. Note that the error conditions do a simple return without need for goto.
RAII info can be found here:
What is meant by Resource Acquisition is Initialization (RAII)?
It is an important part in writing C++ code that has to manage resources that are created and must be destroyed consistently.
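The pattern is easy to see in isolation. The sketch below (illustrative names only, with a counter standing in for the real Crypt* handles) shows that the guard's destructor runs on every return path:

```cpp
static int g_openHandles = 0;   // stands in for outstanding HCRYPT* handles

// RAII guard: "acquires" in the constructor, releases in the destructor.
struct HandleGuard {
    HandleGuard()  { ++g_openHandles; }
    ~HandleGuard() { --g_openHandles; }
    HandleGuard(const HandleGuard&) = delete;            // no accidental copies
    HandleGuard& operator=(const HandleGuard&) = delete;
};

// Mirrors EncryptData's shape: several exit points, one cleanup path.
bool DoWork(bool failEarly) {
    HandleGuard guard;          // like the CryptStuff instance above
    if (failEarly)
        return false;           // early return: destructor still releases
    return true;                // normal return: destructor still releases
}
```

After any call to DoWork, g_openHandles is back to 0, which is exactly the guarantee the CryptStuff struct provides for the three crypto handles.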

Windows: how to get a camera's supported resolutions?

To get the camera list and let the user select one (C++, Boost, DirectShow, Windows), I use the following code:
#include "StdAfx.h"
#include "list.h"
#include <windows.h>
#include <dshow.h>
#include <boost/lexical_cast.hpp>
HRESULT CamerasList::EnumerateDevices( REFGUID category, IEnumMoniker **ppEnum )
{
// Create the System Device Enumerator.
ICreateDevEnum *pDevEnum;
HRESULT hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL,
CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pDevEnum));
if (SUCCEEDED(hr))
{
// Create an enumerator for the category.
hr = pDevEnum->CreateClassEnumerator(category, ppEnum, 0);
if (hr == S_FALSE)
{
hr = VFW_E_NOT_FOUND; // The category is empty. Treat as an error.
}
pDevEnum->Release();
}
return hr;
}
int CamerasList::SelectFromList()
{ int i = 0;
int SelectedIndex;
IEnumMoniker *pEnum;
printf("\nLet us select video device\n");
printf("Available Capture Devices are:\n");
HRESULT hr;
hr = EnumerateDevices(CLSID_VideoInputDeviceCategory, &pEnum);
if (SUCCEEDED(hr))
{
IMoniker *pMoniker = NULL;
while (pEnum->Next(1, &pMoniker, NULL) == S_OK)
{
IPropertyBag *pPropBag;
HRESULT hr = pMoniker->BindToStorage(0, 0, IID_PPV_ARGS(&pPropBag));
if (FAILED(hr))
{
pMoniker->Release();
continue;
}
VARIANT var;
VariantInit(&var);
// Get description or friendly name.
hr = pPropBag->Read(L"Description", &var, 0);
if (FAILED(hr))
{
hr = pPropBag->Read(L"FriendlyName", &var, 0);
}
if (SUCCEEDED(hr))
{
std::cout << i;
printf(") %S\n", var.bstrVal);
i++;
VariantClear(&var);
}
hr = pPropBag->Write(L"FriendlyName", &var);
pPropBag->Release();
pMoniker->Release();
}
SelectedIndex = 999;
if (i <= 0)
{
cout <<"No devices found. \n " << endl;
//cout <<"Please restart application." << endl;
//cin.get();
//Sleep(999999);
return 999;
}else if(i == 1){
cout <<"Default device will be used\n" << std::endl;
SelectedIndex = 0;
}else{
while(SelectedIndex > i-1 || SelectedIndex < 0)
{
try{
std::string s;
std::getline( cin, s, '\n' );
SelectedIndex = boost::lexical_cast<int>(s);
}
catch(std::exception& e){
std::cout <<"please input index from 0 to " << i-1 << std::endl;
SelectedIndex = 999;
}
}}
pEnum->Release();
}else
{
printf("no Video Devices found. \n") ;
//cout <<"Please restart application." << endl;
//cin.get();
//Sleep(999999);
return 999;
}
return SelectedIndex;
}
I need to somehow get list of camera supported resolutions for selected camera. How to do such thing?
Assuming that you've added the capture source filter to the graph:
One method is to get the IAMStreamConfig interface of the capture filter's output pin and then call the IAMStreamConfig::GetNumberOfCapabilities to get the number of format capabilities supported by the device. You can iterate over all formats by calling the IAMStreamConfig::GetStreamCaps with the appropriate indices.
You can get supported resolutions without adding the capture source to a filter graph. You need to:
bind the device moniker to a base filter
get an output pin from that filter
enumerate over media types of that output pin
Here is how to enumerate the media types given a media type enumerator:
AM_MEDIA_TYPE* mediaType = NULL;
VIDEOINFOHEADER* videoInfoHeader = NULL;
while (S_OK == mediaTypesEnumerator->Next(1, &mediaType, NULL))
{
if ((mediaType->formattype == FORMAT_VideoInfo) &&
(mediaType->cbFormat >= sizeof(VIDEOINFOHEADER)) &&
(mediaType->pbFormat != NULL))
{
videoInfoHeader = (VIDEOINFOHEADER*)mediaType->pbFormat;
videoInfoHeader->bmiHeader.biWidth; // Supported width
videoInfoHeader->bmiHeader.biHeight; // Supported height
}
DeleteMediaType(mediaType); // from the DirectShow base classes; frees pbFormat and the structure itself
}